Search results for: multimodal fusion classifier

682 Emptiness Downlink and Uplink Proposal Using Space-Time Equation Interpretation

Authors: Preecha Yupapin and Somnath

Abstract:

From the emptiness, vibration induces the fractal, and the strings are formed, from which the first elementary particle groups, known as quarks, are established. The neutrino and electron are created by them. More elementary particles and life are formed by organic and inorganic substances. The universe is constructed, from which the multi-universe has formed in the same way. The model assumes that the intense energy has escaped from the singularity cone of the multi-universes. Initially, the single mass energy is confined, after which it is disturbed by space-time distortion. It splits into an entangled pair, where circular motion is established. One side of the entangled pair is considered, where the fusion energy of the strong coupling force has formed. The growth of the fusion energy exhibits quantum physics phenomena, in which the particle moves along the circumference with a speed faster than light. This introduces the wave-particle duality aspect, which saturates at the stopping point. It re-runs again and again without limitation, from which it can be said that the universe has been created and expanded. The Bose-Einstein condensate (BEC) is released through the singularity by the wormhole and condenses to become a mass comparable to the Sun's size, which circulates (orbits) around the Sun. The uncertainty principle is applied, in which breath control follows the uncertainty condition ∆p∆x = ∆E∆t ~ ℏ. The flow of air in and out of the body via the nose applies momentum and energy control with respect to movement and time, the target being that the distortion of space-time vanishes. Finally, the body is cleansed and can proceed to the next procedure, where the mind can escape from the body at the speed of light. However, the borderline between contemplation and being an Arahant is a vacuum, which will be explained.

Keywords: space-time, relativity, enlightenment, emptiness

Procedia PDF Downloads 68
681 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement

Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes

Abstract:

Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative cyclist safety system based on radar technology designed to offer real-time collision risk warnings to cyclists. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on TI's AWR 1843 BOOST radar, utilizing a coarse classification approach distinguishing between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of the clustering stage, we propose a 2-level clustering approach that builds on the state-of-the-art Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. The objective is to first cluster objects based on their velocity, then refine the analysis by clustering based on position. The initial level identifies groups of objects with similar velocities and movement patterns. The subsequent level refines the analysis by considering the spatial distribution of these objects; the clusters obtained from the first level serve as input to the second level of clustering. Our proposed technique surpasses the classical DBSCAN algorithm in terms of clustering quality metrics, including homogeneity, completeness, and V-score. Relevant cluster features are extracted and used to classify objects with an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our collected dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board. The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving an impressive 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.
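
A minimal sketch of the 2-level clustering idea in Python with scikit-learn's DBSCAN (the eps thresholds, min_samples, and the [x, y, radial velocity] point layout are illustrative assumptions, not the authors' actual settings):

    import numpy as np
    from sklearn.cluster import DBSCAN

    def two_level_clustering(points, eps_v=0.5, eps_xy=1.0, min_samples=3):
        """points: (N, 3) array of [x, y, radial_velocity] radar detections."""
        labels = -np.ones(len(points), dtype=int)
        # Level 1: group detections with similar velocities.
        level1 = DBSCAN(eps=eps_v, min_samples=min_samples).fit_predict(points[:, 2:3])
        next_id = 0
        for v_id in set(level1) - {-1}:
            idx = np.where(level1 == v_id)[0]
            # Level 2: refine each velocity cluster by spatial position.
            level2 = DBSCAN(eps=eps_xy, min_samples=min_samples).fit_predict(points[idx, :2])
            for s_id in set(level2) - {-1}:
                labels[idx[level2 == s_id]] = next_id
                next_id += 1
        return labels  # -1 marks noise, as in plain DBSCAN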

Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology

Procedia PDF Downloads 83
680 Improving Biodegradation Behavior of Fabricated WE43 Magnesium Alloy by High-Temperature Oxidation

Authors: Jinge Liu, Shuyuan Min, Bingchuan Liu, Bangzhao Yin, Bo Peng, Peng Wen, Yun Tian

Abstract:

WE43 magnesium alloy can be additively manufactured via laser powder bed fusion (LPBF) for biodegradable applications, but the as-built WE43 exhibits an excessively rapid corrosion rate. High-temperature oxidation (HTO) was performed on the as-built WE43 to improve its biodegradation behavior. A sandwich structure, consisting of an oxide layer at the surface, a transition layer in the middle, and the matrix, was generated under the influence of the oxidation reaction and the diffusion of RE atoms when heated at 525 °C for 8 hours. The oxide layer consisted of Y₂O₃ and Nd₂O₃ oxides with a thickness of 2-3 μm. The transition layer was composed of α-Mg and Y₂O₃ with a thickness of 60-70 μm, in which Mg₂₄RE₅ could also be observed in addition to α-Mg and Y₂O₃. The oxide layer and transition layer appeared to have an effective passivation effect. The as-built WE43 lost 40% of its weight after a three-day in vitro immersion test and finally broke into debris after seven days of immersion. The high-temperature oxidation samples kept their structural integrity and lost only 6.88% of their weight after 28 days of immersion. The corrosion rate of the HTO samples was significantly controlled, which at the same time improved the biocompatibility of the as-built WE43. The samples after HTO had better osteogenic capability according to ALP activity. Moreover, the as-built WE43 performed poorly in cell adhesion and hemolysis tests due to its excessively rapid corrosion rate, whereas for the HTO samples, cells adhered well and the hemolysis ratio was only 1.59%.

Keywords: laser powder bed fusion, biodegradable metal, high temperature oxidation, biodegradation behavior, WE43

Procedia PDF Downloads 105
679 Ectopic Mediastinal Parathyroid Adenoma: A Case Report with Diagnostic and Management Challenges

Authors: Augustina Konadu Larbi-Ampofo, Ekemini Umoinwek

Abstract:

Background: Hypercalcaemia is a common electrolyte imbalance that increases mortality if poorly controlled. Primary hyperparathyroidism, with a prevalence of 0.1-0.3%, often presents in this way. Management of hypercalcaemia due to an ectopic parathyroid adenoma in the mediastinum is challenging, especially in a patient with a pacemaker. Case Presentation: A 79-year-old woman with a history of a previous cardiac arrest, permanent pacemaker, ischaemic heart disease, bilateral renal calculi, rectal polyps, liver cirrhosis, and a family history of hyperthyroidism presented to the emergency department with acute back pain. Management and Outcome: The patient was diagnosed with primary hyperparathyroidism due to her elevated corrected calcium and parathyroid hormone levels. Parathyroid investigations consisting of an NM MIBI scan, SPECT-CT, 4D parathyroid scan, and an ultrasound scan of the neck and thorax confirmed an ectopic parathyroid adenoma in the mediastinum at the level of the aortic arch, along with benign thyroid nodules. The location of the adenoma warranted a thoracoscopic surgical approach; however, the presence of her pacemaker and other cardiovascular conditions predisposed her to a potentially poorer post-operative outcome. Discussion: Mediastinal ectopic parathyroid adenomas are rare and difficult to diagnose and treat, often needing a multimodal imaging approach for accurate localisation. Surgery is a definitive treatment; however, in this patient, long-term medical treatment with cinacalcet was the only suitable next treatment option. The difficulty with this is that cinacalcet tackles the biochemical markers of the disease entity and not the disease itself, leaving open the question of what happens next if there is refractory or uncontrolled hypercalcaemia in this patient with a pacemaker. Moreover, the coexistence of her multiple conditions raises the suspicion of an underlying multisystemic or multiple endocrine disorder, with multiple endocrine neoplasia coming to mind, necessitating further genetic or autoimmune investigations. Conclusion: Mediastinal ectopic parathyroid adenomas are rare, with diagnostic and management challenges.

Keywords: mediastinal ectopic parathyroid adenoma, hyperparathyroidism, SPECT/CT, nuclear medicine, multimodal imaging

Procedia PDF Downloads 23
678 Creation of a Realistic Railway Simulator Developed on a 3D Graphic Game Engine Using a Numerical Computing Programming Environment

Authors: Kshitij Ansingkar, Yohei Hoshino, Liangliang Yang

Abstract:

Advances in algorithms related to autonomous systems have made it possible to research improvements in the accuracy of a train's location. This has the capability of increasing the throughput of a railway network without the need for additional infrastructure. To develop such a system, the railway industry requires data to test sensor fusion theories or implement simultaneous localization and mapping (SLAM) algorithms. Though such simulation data and ground-truth datasets are available for testing the automation algorithms of vehicles, due to regulations and economic considerations, there is a dearth of such datasets in the railway industry. Thus, there is a need for a simulation environment that can generate realistic synthetic datasets. This paper proposes (1) to leverage the capabilities of open-source 3D graphic rendering software to create a visualization of the environment, (2) to utilize open-source 3D geospatial data for accurate visualization, and (3) to integrate the graphic rendering software with a programming language and numerical computing platform. To develop such an integrated platform, this paper utilizes the computing platform's advanced sensor models, such as LIDAR, camera, IMU, or GPS, and merges them with the 3D rendering of the game engine to generate high-quality synthetic data. Further, these datasets can be used to train railway models and improve the accuracy of a train's location.

Keywords: 3D game engine, 3D geospatial data, dataset generation, railway simulator, sensor fusion, SLAM

Procedia PDF Downloads 12
677 An Ensemble System of Classifiers for Computer-Aided Volcano Monitoring

Authors: Flavio Cannavo

Abstract:

Continuous evaluation of the status of potentially hazardous volcanoes plays a key role for civil protection purposes. The importance of monitoring volcanic activity, especially for energetic paroxysms that usually come with tephra emissions, is crucial not only for the exposure of the local population but also for airline traffic. Presently, real-time surveillance of most volcanoes worldwide is essentially delegated to one or more human experts in volcanology, who interpret data coming from different kinds of monitoring networks. Unfortunately, the high nonlinearity of the complex and coupled volcanic dynamics leads to a large variety of different volcanic behaviors. Moreover, continuously measured parameters (e.g., seismic, deformation, infrasonic, and geochemical signals) are often not able to fully explain the ongoing phenomenon, thus making fast volcano state assessment a very puzzling task for the personnel on duty in the control rooms. With the aim of aiding the personnel on duty in volcano surveillance, here we introduce a system based on an ensemble of data-driven classifiers to automatically infer the ongoing volcano status from all the available kinds of measurements. The system consists of a heterogeneous set of independent classifiers, each one built with its own data and algorithm. Each classifier gives an output about the volcanic status. The ensemble technique allows weighting the single classifier outputs to combine all the classifications into a single status that maximizes the performance. We tested the model on the Mt. Etna (Italy) case study by considering a long record of multivariate data from 2011 to 2015 and cross-validated it. Results indicate that the proposed model is effective and of great value for decision-making purposes.
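
A minimal sketch of the ensemble-weighting step in Python (the member classifiers, class labels, and weights are illustrative placeholders; the paper's actual members and weighting scheme are not reproduced here):

    import numpy as np

    def weighted_vote(proba_list, weights, classes):
        """Combine per-classifier class-probability vectors into one status."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()  # normalize the classifier weights
        combined = sum(wi * p for wi, p in zip(w, proba_list))
        return classes[int(np.argmax(combined))]

    # Example: three independent classifiers voting on the volcano status.
    classes = np.array(["quiet", "unrest", "paroxysm"])
    p_seismic = np.array([0.2, 0.5, 0.3])
    p_deform = np.array([0.1, 0.6, 0.3])
    p_geochem = np.array([0.3, 0.4, 0.3])
    print(weighted_vote([p_seismic, p_deform, p_geochem], [0.5, 0.3, 0.2], classes))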

Keywords: Bayesian networks, expert system, Mount Etna, volcano monitoring

Procedia PDF Downloads 247
676 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture

Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko

Abstract:

Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracic-lumbar spine from 15 healthy patients and 15 patients with confirmed osteoporosis were used for the analysis. As a result, 120 samples with dimensions of 50x50 pixels were obtained. The set of features was obtained based on the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet. As a result of the image analysis, 290 textural feature descriptors were obtained. The dimension of the feature space was reduced by the use of three selection methods: Fisher coefficient (FC), mutual information (MI), and minimization of classification error probability combined with average correlation coefficients between the chosen features (POE + ACC). Each of them returned the ten features occupying the initial places in a ranking devised according to its own coefficient. The Fisher coefficient and mutual information selections returned the same features, arranged in a different order. In both rankings, the 50th percentile (Perc.50%) was found in first place. The next selected features come from the co-occurrence matrix. The sets of features selected in the selection process were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT), and reduced error pruning tree (REPT). In order to assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV), and negative predictive value (NPV). Taking the classification results into account, the best results were obtained for the Hoeffding tree and logistic model trees classifiers, using the set of features selected by the POE + ACC method. In the case of the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3%, and PPV = 93.3%. Additionally, the values of the other two parameters, TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. In the case of the logistic model trees classifier, the same ACC value was obtained (ACC = 90%), together with the highest values for TNR = 88.3% and NPV = 88.3%. The values of the other two parameters remained close to the highest: TPR = 91.7% and PPV = 91.6%. The results obtained in the experiment show that the use of classification trees is an effective method for the classification of texture features, allowing the condition of the spongy tissue to be identified for healthy cases and those with osteoporosis.
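
The five evaluation parameters reduce to simple ratios of the binary confusion-matrix counts; a small Python helper (the example counts are illustrative, not the study's data):

    def diagnostic_metrics(tp, tn, fp, fn):
        return {
            "ACC": (tp + tn) / (tp + tn + fp + fn),  # overall accuracy
            "TPR": tp / (tp + fn),                   # sensitivity
            "TNR": tn / (tn + fp),                   # specificity
            "PPV": tp / (tp + fp),                   # positive predictive value
            "NPV": tn / (tn + fn),                   # negative predictive value
        }

    print(diagnostic_metrics(tp=56, tn=52, fp=8, fn=4))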

Keywords: classification, feature selection, texture analysis, tree algorithms

Procedia PDF Downloads 180
675 Beam Spatio-Temporal Multiplexing Approach for Improving Control Accuracy of High Contrast Pulse

Authors: Ping Li, Bing Feng, Junpu Zhao, Xudong Xie, Dangpeng Xu, Kuixing Zheng, Qihua Zhu, Xiaofeng Wei

Abstract:

In laser-driven inertial confinement fusion (ICF), control of the temporal shape of the laser pulse is a key point in ensuring an optimal laser-target interaction. One of the main difficulties in controlling the temporal shape is the control accuracy of the foot part of a high contrast pulse. Based on the analysis of pulse perturbation during amplification and frequency conversion in high power lasers, an approach of beam spatio-temporal multiplexing is proposed to improve the control precision of high contrast pulses. In this approach, the foot and peak parts of the high contrast pulse are controlled independently; they propagate separately in the near field and combine in the far field to form the required pulse shape. For a high contrast pulse, the beam area ratio of the two parts is optimized, and the beam fluence and intensity of the foot part are thereby increased, which makes the pulse much easier to control. Meanwhile, the near-field distribution of the two parts is also carefully designed to make sure their F-numbers are the same, which is another important parameter for laser-target interaction. The integrated calculation results show that for a pulse with a contrast of up to 500, the deviation of the foot part can be improved from 20% to 5% by using the beam spatio-temporal multiplexing approach with a beam area ratio of 1/20, which is almost the same as that of the peak part. The research results are expected to bring a breakthrough in the power balance of high power laser facilities.

Keywords: inertial confinement fusion, laser pulse control, beam spatio-temporal multiplexing, power balance

Procedia PDF Downloads 148
674 The Integration of Digital Humanities into the Sociology of Knowledge Approach to Discourse Analysis

Authors: Gertraud Koch, Teresa Stumpf, Alejandra Tijerina García

Abstract:

Discourse analysis research approaches belong to the central research strategies applied throughout the humanities; they focus on the countless forms and ways digital texts and images shape present-day notions of the world. Despite the constantly growing number of relevant digital, multimodal discourse resources, digital humanities (DH) methods are thus far not systematically developed and accessible for discourse analysis approaches. Specifically, the significance of multimodality and meaning plurality modelling are yet to be sufficiently addressed. In order to address this research gap, the D-WISE project aims to develop a prototypical working environment as digital support for the sociology of knowledge approach to discourse analysis and new IT-analysis approaches for the use of context-oriented embedding representations. Playing an essential role throughout our research endeavor is the constant optimization of hermeneutical methodology in the use of (semi)automated processes and their corresponding epistemological reflection. Among the discourse analyses, the sociology of knowledge approach to discourse analysis is characterised by the reconstructive and accompanying research into the formation of knowledge systems in social negotiation processes. The approach analyses how dominant understandings of a phenomenon develop, i.e., the way they are expressed and consolidated by various actors in specific arenas of discourse until a specific understanding of the phenomenon and its socially accepted structure are established. This article presents insights and initial findings from D-WISE, a joint research project running since 2021 between the Institute of Anthropological Studies in Culture and History and the Language Technology Group of the Department of Informatics at the University of Hamburg. As an interdisciplinary team, we develop central innovations with regard to the availability of relevant DH applications by building up a uniform working environment, which supports the procedure of the sociology of knowledge approach to discourse analysis within open corpora and heterogeneous, multimodal data sources for researchers in the humanities. We are hereby expanding the existing range of DH methods by developing contextualized embeddings for improved modelling of the plurality of meaning and the integrated processing of multimodal data. The alignment of this methodological and technical innovation is based on the epistemological working methods according to grounded theory as a hermeneutic methodology. In order to systematically relate, compare, and reflect the approaches of structural-IT and hermeneutic-interpretative analysis, the discourse analysis is carried out both manually and digitally. Using the example of current discourses on digitization in the healthcare sector and the associated issues regarding data protection, we have manually built an initial data corpus of which the relevant actors and discourse positions are analysed in conventional qualitative discourse analysis. At the same time, we are building an extensive digital corpus on the same topic based on the use and further development of entity-centered research tools such as topic crawlers and automated newsreaders. In addition to the text material, this consists of multimodal sources such as images, video sequences, and apps. 
In a blended reading process, the data material is filtered, annotated, and finally coded with the help of NLP tools such as dependency parsing, named entity recognition, co-reference resolution, entity linking, sentiment analysis, and other project-specific tools that are being adapted and developed. The coding process is carried out (semi-)automatically by programs that propose coding paradigms based on the calculated entities and their relationships. Simultaneously, these can be specifically trained by manual coding in a closed reading process and specified according to the content issues. Overall, this approach enables purely qualitative, fully automated, and semi-automated analyses to be compared and reflected upon.
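
A minimal sketch of the kind of NLP annotation used in the blended reading step, in Python with spaCy (the small English pipeline stands in for the project's actual, partly German-language toolchain and custom components):

    import spacy

    nlp = spacy.load("en_core_web_sm")  # assumes the model has been downloaded
    doc = nlp("The health ministry proposed new rules for patient data protection.")

    for ent in doc.ents:  # named entity recognition
        print(ent.text, ent.label_)
    for token in doc:  # dependency parsing
        print(token.text, token.dep_, token.head.text)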

Keywords: entanglement of structural IT and hermeneutic-interpretative analysis, multimodality, plurality of meaning, sociology of knowledge approach to discourse analysis

Procedia PDF Downloads 228
673 Assessing the Utility of Unmanned Aerial Vehicle-Borne Hyperspectral Image and Photogrammetry Derived 3D Data for Wetland Species Distribution Quick Mapping

Authors: Qiaosi Li, Frankie Kwan Kit Wong, Tung Fung

Abstract:

Lightweight unmanned aerial vehicles (UAVs) carrying novel sensors offer a low-cost approach for data acquisition in complex environments. This study established a framework for applying a UAV system to quick mapping in complex environments and assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classifying 13 species in the wetland area of the Mai Po Inner Deep Bay Ramsar Site, Hong Kong. The study area was part of a shallow bay with flat terrain, and the major species included reedbed and four mangroves: Kandelia obovata, Aegiceras corniculatum, Acrostichum auerum, and Acanthus ilicifolius. Other species included various graminaceous plants, arbor, shrub, and the invasive species Mikania micrantha. In particular, the invasive species climbed up to the mangrove canopy, causing damage and morphological change, which might increase the difficulty of distinguishing species. Hyperspectral images were acquired by a Headwall Nano sensor with a spectral range from 400 nm to 1000 nm and 0.06 m spatial resolution. A sequence of multi-view RGB images was captured with 0.02 m spatial resolution and 75% overlap. The hyperspectral image was corrected for radiometric and geometric distortion, while the high-resolution RGB images were matched to generate maximally dense point clouds. Further, a 5 cm grid digital surface model (DSM) was derived from the dense point clouds. Multiple feature reduction methods were compared to identify the most efficient method and to explore the significant spectral bands for distinguishing different species. The examined methods included stepwise discriminant analysis (DA), support vector machine (SVM), and minimum noise fraction (MNF) transformation. Subsequently, spectral subsets composed of the 20 most important bands extracted by SVM, DA, and MNF, and multi-source subsets adding the DSM to the 20 spectral bands, served as input to a maximum likelihood classifier (MLC) and an SVM classifier to compare the classification results. Classification results showed that the feature reduction methods, from best to worst, were MNF transformation, DA, and SVM. MNF transformation accuracy was even higher than the all-bands result. Selected bands frequently lay along the green peak, red edge, and near infrared. Additionally, DA found that the chlorophyll-absorption red band and yellow band were also important for species classification. In terms of 3D data, the DSM enhanced the discriminant capacity among low plants, arbor, and mangrove. Meanwhile, the DSM largely reduced misclassification due to the shadow effect and inter-species morphological variation. With respect to the classifier, the nonparametric SVM outperformed MLC for the high-dimensional and multi-source data in this study. The SVM classifier tended to produce higher overall accuracy and fewer scattered patches, although it took more time than MLC. The best result was obtained by combining MNF components and the DSM in the SVM classifier. This study offers a precise species-distribution survey solution for inaccessible wetland areas at a low cost in time and labour. In addition, the findings on the positive effect of the DSM, as well as the spectral feature identification, indicate that the utility of UAV-borne hyperspectral and photogrammetry-derived 3D data is promising for further research on wetland species, such as bio-parameter modelling and biological invasion monitoring.
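
A minimal sketch of the multi-source classification step in Python with scikit-learn: stack a subset of spectral bands with the DSM and train an SVM (the band indices and SVM parameters are illustrative; the study's MNF transform is not reproduced here):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def classify_pixels(hsi, dsm, band_idx, labels, train_mask):
        """hsi: (H, W, B) cube; dsm: (H, W) heights; labels: (H, W) class map."""
        feats = np.dstack([hsi[:, :, band_idx], dsm[:, :, None]])  # bands + DSM
        X = feats.reshape(-1, feats.shape[-1])
        y = labels.reshape(-1)
        m = train_mask.reshape(-1)  # True where ground-truth pixels are used
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        clf.fit(X[m], y[m])
        return clf.predict(X).reshape(labels.shape)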

Keywords: digital surface model (DSM), feature reduction, hyperspectral, photogrammetric point cloud, species mapping, unmanned aerial vehicle (UAV)

Procedia PDF Downloads 257
672 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, of which 2,600 are classified as non-defaulters, 1,551 as defaulters, and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR), and Support Vector Machines (SVM). For each method, different parameter settings were analyzed in order to obtain a range of results from which the best of each technique could be compared. Initially, the data were coded with thermometer coding for numerical attributes and dummy coding for nominal attributes. The methods were then evaluated for each parameter setting, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives, and true negatives. This comparison showed that the best method in terms of accuracy was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters, and 75.37% for temporarily defaulter classification). However, the best accuracy does not always indicate the best technique. For instance, in the classification of temporarily defaulters, this technique was surpassed in terms of false positives by SVM, which had the lowest rate (0.07%) of false positive classifications. These details are discussed in light of the results found, and an overview of what was presented is given in the conclusion of this study.
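
A minimal sketch of the one-against-all setup in Python with scikit-learn (the hyperparameters and the toy data are illustrative; ANN-RBF has no direct scikit-learn equivalent and is omitted):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    models = {
        "ANN-MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000),
        "LR": LogisticRegression(max_iter=1000),
        "SVM": SVC(kernel="rbf"),
    }
    # Wrap each model so every class (non-defaulter, defaulter, temporarily
    # defaulter) is scored against the rest.
    X, y = make_classification(n_samples=300, n_features=15, n_informative=8,
                               n_classes=3, random_state=0)
    for name, model in models.items():
        clf = OneVsRestClassifier(model).fit(X, y)
        print(name, clf.score(X, y))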

Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines

Procedia PDF Downloads 104
671 Bilingualism: A Case Study of Assamese and Bodo Classifiers

Authors: Samhita Bharadwaj

Abstract:

This is an empirical study of classifiers in Assamese and Bodo, two genetically unrelated languages of India. The objective of the paper is to address the language contact between Assamese and Bodo as reflected in classifiers. The data have been collected through fieldwork in Bodo, recording narratives and folk tales and eliciting specific data from the speakers. The data for Assamese are self-produced, the author being a native speaker of the language. Assamese is the easternmost New Indo-Aryan (henceforth NIA) language, mainly spoken in the Brahmaputra valley of Assam and some other north-eastern states of India. It is the lingua franca of Assam and is creolised in the neighbouring state of Nagaland. Bodo, on the other hand, is a Tibeto-Burman (henceforth TB) language of the Bodo-Garo group. It has the highest number of speakers among the TB languages of Assam. However, compared to Assamese, it is still a lesser-documented language, and due to the prestige of Assamese, all Bodo speakers are fluently bilingual in Assamese, though the opposite is not the case. In this context, classifiers, a characteristic phenomenon of TB languages but not so much of NIA languages, present an interesting case study of language contact caused by bilingualism. Assamese, as a result of its language contact with the classifier-rich TB languages, has developed the richest classifier system among the IA languages of India. Yet, as part of the rampant borrowing of Assamese words and patterns into Bodo, Bodo is seen to borrow even Assamese classifiers into its system. This paper analyses the borrowed classifiers of Bodo and traces the route of this borrowing phenomenon in the number systems of the languages. As Bodo speakers start replacing the numbers from five upwards with Assamese ones, they also choose the Assamese classifiers to attach to these numbers. Thus, the partial loss of numbers in Bodo as a result of language contact and bilingualism in Assamese is found to be the reason behind the borrowing of classifiers into Bodo. The significance of the study lies in exploring an interesting aspect of language contact in Assam. It is hoped that this will attract further research on bilingualism and classifiers in Assam.

Keywords: Assamese, bilingual, Bodo, borrowing, classifier, language contact

Procedia PDF Downloads 224
670 Microstructure, Mechanical, Electrical and Thermal Properties of the Al-Si-Ni Ternary Alloy

Authors: Aynur Aker, Hasan Kaya

Abstract:

In recent years, the use of aluminum-based alloys in industry and technology has been increasing. Alloying elements further improve the strength and stiffness of aluminum, providing properties superior to those of other metals. In this study, the physical properties (microstructure, microhardness, tensile strength, electrical conductivity, and thermal properties) of the Al-12.6 wt.% Si-2 wt.% Ni ternary alloy were investigated. The Al-Si-Ni alloy was prepared in a graphite crucible under a vacuum atmosphere. The samples were directionally solidified upwards with different growth rates (V) at a constant temperature gradient G (7.73 K/mm). The microstructures (flake spacings, λ), microhardness (HV), ultimate tensile strength, electrical resistivity, and thermal properties (enthalpy of fusion, specific heat, and melting temperature) of the samples were measured. The influence of the growth rate and flake spacings on microhardness, ultimate tensile strength, and electrical resistivity was investigated, and relationships between them were obtained experimentally by using regression analysis. According to the results, λ values decrease with increasing V, while microhardness, ultimate tensile strength, and electrical resistivity values increase with increasing V. Variations of electrical resistivity for cast samples with temperature in the range of 300-1200 K were also measured by using a standard dc four-point probe technique. The enthalpy of fusion and specific heat for the same alloy were also determined by means of differential scanning calorimetry (DSC) from the heating trace during the transformation between liquid and solid. The results obtained in this work were compared with previous similar experimental results obtained for binary and ternary alloys.

Keywords: electrical resistivity, enthalpy, microhardness, solidification, tensile stress

Procedia PDF Downloads 377
669 Pneumoperitoneum Creation Assisted with Optical Coherence Tomography and Automatic Identification

Authors: Eric Yi-Hsiu Huang, Meng-Chun Kao, Wen-Chuan Kuo

Abstract:

For every laparoscopic surgery, safe pneumoperitoneum creation (gaining access to the peritoneal cavity) is the first and essential step. However, closed pneumoperitoneum is usually obtained by blind insertion of a Veress needle into the peritoneal cavity, which may carry potential risks such as bowel and vascular injury. Until now, there has been no definite measure to visually confirm the position of the needle tip inside the peritoneal cavity. Therefore, this study established an image-guided Veress needle method by combining a fiber probe with optical coherence tomography (OCT). An algorithm was also proposed for determining the exact location of the needle tip through the acquisition of OCT images. Our method not only generates a series of “live” two-dimensional (2D) images during the needle puncture toward the peritoneal cavity but also eliminates operator variation in image judgment, thus improving peritoneal access safety. This study was approved by the Ethics Committee of Taipei Veterans General Hospital (Taipei VGH IACUC 2020-144). A total of 2400 in vivo OCT images, independent of each other, were acquired from experiments of forty peritoneal punctures on two piglets. Characteristic OCT image patterns could be observed during the puncturing process. The ROC curve demonstrates the discrimination capability of the classifier based on these quantitative image features, showing that its accuracy in determining whether the needle tip was inside or outside the peritoneal cavity was 98% (AUC = 0.98). In summary, the present study demonstrates the ability of the combination of our proposed automatic identification method and OCT imaging to automatically and objectively identify the location of the needle tip. OCT imaging translates the blind closed technique of peritoneal access into a visualized procedure, thus improving peritoneal access safety.
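
A minimal sketch of the evaluation step in Python with scikit-learn: score the image-feature classifier outputs against inside/outside labels (the scores below are illustrative placeholders, not the study's data):

    from sklearn.metrics import accuracy_score, roc_auc_score

    y_true = [0, 0, 1, 1, 1, 0, 1, 0]  # 1 = needle tip inside the cavity
    scores = [0.1, 0.3, 0.8, 0.9, 0.7, 0.2, 0.95, 0.4]
    print("AUC:", roc_auc_score(y_true, scores))
    print("ACC:", accuracy_score(y_true, [s > 0.5 for s in scores]))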

Keywords: pneumoperitoneum, optical coherence tomography, automatic identification, veress needle

Procedia PDF Downloads 134
668 Detecting Indigenous Languages: A System for Maya Text Profiling and Machine Learning Classification Techniques

Authors: Alejandro Molina-Villegas, Silvia Fernández-Sabido, Eduardo Mendoza-Vargas, Fátima Miranda-Pestaña

Abstract:

The automatic detection of indigenous languages in digital texts is essential to promote their inclusion in digital media. Underrepresented languages, such as Maya, are often excluded from language detection tools like Google's language-detection library, LANGDETECT. This study addresses these limitations by developing a hybrid language detection solution that accurately distinguishes Maya (YUA) from Spanish (ES). Two strategies are employed: the first focuses on creating a profile for the Maya language within the LANGDETECT library, while the second involves training a Naive Bayes classification model with two categories, YUA and ES. The process includes comprehensive data preprocessing steps, such as cleaning, normalization, tokenization, and n-gram counting, applied to text samples collected from various sources, including articles from La Jornada Maya, a major newspaper in Mexico and the only media outlet that includes a Maya section. After the training phase, a portion of the data is used to create the YUA profile within LANGDETECT, which achieves an accuracy rate above 95% in identifying the Maya language during testing. Additionally, the Naive Bayes classifier, trained and tested on the same database, achieves an accuracy close to 98% in distinguishing between Maya and Spanish, with further validation through the F1 score, recall, and logarithmic scoring, without signs of overfitting. This strategy, which combines the LANGDETECT profile with a Naive Bayes model, offers an adaptable framework that can be extended to other underrepresented languages in future research. This fills a gap in natural language processing and supports the preservation and revitalization of these languages.
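
A minimal sketch of the Naive Bayes strategy in Python with scikit-learn, using character n-gram counts (the four training snippets are placeholders, not the La Jornada Maya corpus):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["Bix a beel", "Ma'alob, kux tech", "Hola, buenos dias", "Como estas"]
    labels = ["YUA", "YUA", "ES", "ES"]

    clf = make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(1, 3)),  # n-gram counting
        MultinomialNB(),
    )
    clf.fit(texts, labels)
    print(clf.predict(["ma'alob k'iin"]))  # YUA is likely, given shared n-grams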

Keywords: indigenous languages, language detection, Maya language, Naive Bayes classifier, natural language processing, low-resource languages

Procedia PDF Downloads 18
667 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for collecting useful information from a variety of databases; they provide supervised learning in the form of classification to design models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly complicates its quality analysis and leaves quite few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Feature selection is a key machine learning pre-processing step that selects a small subset from the full set of features, reducing the space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features that may result in good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is used, with an external classifier for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets have been used to demonstrate the proposed idea effectively. The proposed method improves the average accuracy on different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
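
A minimal sketch of the wrapper idea in Python: a genetic algorithm whose fitness is the cross-validated accuracy of a KNN classifier, with features ranked by how often they occur in the final chromosomes (population size, rates, and the demo dataset are illustrative simplifications of the paper's heuristic):

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)

    def fitness(mask, X, y):
        if not mask.any():
            return 0.0
        clf = KNeighborsClassifier(n_neighbors=5)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    def ga_select(X, y, pop=20, gens=15, p_mut=0.05):
        n = X.shape[1]
        popu = rng.random((pop, n)) < 0.5  # random boolean chromosomes
        for _ in range(gens):
            scores = np.array([fitness(ind, X, y) for ind in popu])
            best = popu[np.argsort(scores)[-pop // 2:]]  # keep the fittest half
            children = best.copy()
            cuts = rng.integers(1, n, size=len(children))  # one-point crossover
            for child, cut, mate in zip(children, cuts, best[::-1]):
                child[cut:] = mate[cut:]
            children ^= rng.random(children.shape) < p_mut  # bit-flip mutation
            popu = np.vstack([best, children])
        # Rank features by their number of occurrences in the final population.
        return np.argsort(popu.sum(axis=0))[::-1]

    X, y = load_breast_cancer(return_X_y=True)
    print(ga_select(X, y)[:10])  # indices of the ten most-selected features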

Keywords: data mining, genetic algorithm, KNN algorithms, wrapper-based feature selection

Procedia PDF Downloads 318
666 Self-Supervised Learning for Hate-Speech Identification

Authors: Shrabani Ghosh

Abstract:

Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious work, for which automatic methods based on machine learning are the only alternatives. Previous works have performed sentiment analysis over social media in different ways, such as in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised way has also been explored in NLP, where the source domain and the target domain are different. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers like BERT and RoBERTa are further pre-trained on masked language modeling (MLM) tasks in an unsupervised manner and then fine-tuned to perform text classification. In previous work, hate speech detection has been explored on Gab.ai, a free-speech platform described as hosting extremism in varying degrees in online social media. In the domain adaptation process, Twitter data is used as the source domain, and Gab data is used as the target domain. The performance of domain adaptation also depends on the cross-domain similarity. Different distance measures, such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL, have been used to estimate domain similarity. Naturally, in-domain distances are small, while between-domain distances are expected to be large. Previous findings show that a pretrained masked language model fine-tuned with a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, while the out-of-domain accuracy of the hate classifier on Gab data goes down to 56.53%. Recently, self-supervised learning has received a lot of attention, as it is more applicable when labeled data are scarce. A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance, as it exploits extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. This framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier and also with optimized outcomes obtained from different optimization techniques.
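
A minimal sketch of one of the domain-similarity measures listed above, Maximum Mean Discrepancy with a linear kernel, in Python (the random arrays stand in for real Twitter/Gab sentence embeddings):

    import numpy as np

    def mmd_linear(X_src, X_tgt):
        """Linear-kernel MMD: squared distance between the two domain means."""
        delta = X_src.mean(axis=0) - X_tgt.mean(axis=0)
        return float(delta @ delta)

    twitter_emb = np.random.randn(100, 768)    # placeholder source embeddings
    gab_emb = np.random.randn(100, 768) + 0.1  # placeholder target embeddings
    print(mmd_linear(twitter_emb, gab_emb))    # larger = more dissimilar domains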

Keywords: attention learning, language model, offensive language detection, self-supervised learning

Procedia PDF Downloads 107
665 Multi-Universe Existence Based on Quantum Relativity Using DJV Circuit Experiment Interpretation

Authors: Muhammad Arif Jalil, Somchat Sonasang, Preecha Yupapin

Abstract:

This study hypothesizes that our universe is at the center among the white and black holes, which are entangled pairs. The coupling between them, in terms of spacetime, forms the universe and things. The birth of things is based on the exchange of energy between the white and black sides. The transition from the white side to the black side is called wave-matter, which has a speed faster than light with positive gravity. The transition from the black to the white side has a speed faster than light with negative gravity and is called a wave-particle. Where the speed is equal to that of light, the particle rest mass is formed, and things can appear to take shape there; the gravity is zero because it is the center. The gravitational force belongs to the Earth itself because it is in a position twisted towards the white hole; therefore, it is negative. The coupling of black-white holes occurs directly on both sides. Mass is formed at saturation and will create universes and other things. Therefore, there can be hundreds of thousands of universes on both sides of the black and white holes before reaching the saturation point of multi-universes. This work uses the DJV circuit, which the research team made as an entangled, two-level system circuit and has demonstrated experimentally; this principle therefore offers a possibility for interpretation. This work explains the emergence of multiple universes and can be applied as a practical guideline for searching for universes in the future. Moreover, the results indicate that the DJV circuit can create the elementary particles according to Feynman diagrams under rest-mass conditions, which will be discussed for fission and fusion applications.

Keywords: multi-universes, Feynman diagram, fission, fusion

Procedia PDF Downloads 63
664 YOLO-IR: Infrared Small Object Detection in High Noise Images

Authors: Yufeng Li, Yinan Ma, Jing Wu, Chengnian Long

Abstract:

Infrared object detection aims at separating small and dim targets from cluttered backgrounds, and its capabilities extend beyond the limits of visible light, making it invaluable in a wide range of applications for improving safety, security, efficiency, and functionality. However, existing methods are usually sensitive to noise in the input infrared image, leading to a decrease in target detection accuracy and an increase in the false alarm rate in high-noise environments. To address this issue, an infrared small target detection algorithm called YOLO-IR is proposed in this paper to improve robustness to high infrared noise. To address the problem that high noise significantly reduces the clarity and reliability of target features in infrared images, we design a soft-threshold coordinate attention mechanism to improve the model's ability to extract target features and its robustness to noise. Since the noise may overwhelm the local details of the target, resulting in the loss of small target features during depth down-sampling, we propose a deep and shallow feature fusion neck to improve detection accuracy. In addition, because generalized Intersection over Union (IoU)-based loss functions may be sensitive to noise and lead to unstable training in high-noise environments, we introduce a Wasserstein-distance-based loss function to improve the training of the model. The experimental results show that YOLO-IR achieves a 5.0% improvement in recall and a 6.6% improvement in F1-score over existing state-of-the-art models.
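
A minimal sketch of the soft-threshold operation underlying such an attention mechanism, in Python with PyTorch (the per-channel thresholds here are fixed constants; in the paper's module they would be produced by the learned attention branch):

    import torch

    def soft_threshold(x: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
        """Shrink features toward zero: sign(x) * max(|x| - tau, 0)."""
        return torch.sign(x) * torch.clamp(torch.abs(x) - tau, min=0.0)

    feat = torch.randn(1, 8, 4, 4)        # a noisy feature map (N, C, H, W)
    tau = torch.full((1, 8, 1, 1), 0.5)   # per-channel thresholds
    denoised = soft_threshold(feat, tau)  # small noisy responses are zeroed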

Keywords: infrared small target detection, high noise, robustness, soft-threshold coordinate attention, feature fusion

Procedia PDF Downloads 79
663 Impact of Totiviridae L-A dsRNA Virus on Saccharomyces Cerevisiae Host: Transcriptomic and Proteomic Approach

Authors: Juliana Lukša, Bazilė Ravoitytė, Elena Servienė, Saulius Serva

Abstract:

The Totiviridae L-A virus is a persistent Saccharomyces cerevisiae dsRNA virus. It encodes the major structural capsid protein Gag and the Gag-Pol fusion protein, responsible for virus replication and encapsulation. These features also enable the copying of satellite dsRNAs (called M dsRNAs) encoding a secreted toxin and immunity to it (known as the killer toxin). The viral capsid pore presumably functions in nucleotide uptake and viral mRNA release. During cell division, sporogenesis, and cell fusion, the virions remain intracellular and are transferred to daughter cells. By employing high-throughput RNA sequencing data analysis, we describe the influence of the L-A virus alone on the expression of genes in three different S. cerevisiae hosts. We provide a new perception of Totiviridae L-A virus-related transcriptional regulation, encompassing multiple bioinformatics analyses. Transcriptional responses to L-A infection were similar to those induced upon stress or nutrient availability. The work also delves into the connection between cell metabolism and the demands the L-A virus places on the host transcriptome by uncovering host proteins that may be associated with intact virions. To better understand the virus-host interaction, we applied differential proteomic analysis to virus particle-enriched fractions of yeast strains that harbor either the complete killer system (L-A-lus and M-2 virus) or an M-2-depleted system, or are virus-free. Our analysis resulted in the identification of host proteins associated with the structural proteins of the virus (Gag and Gag-Pol). This research was funded by the European Social Fund under the No. 09.3.3-LMT-K-712-19-0157 “Development of Competences of Scientists, other Researchers, and Students through Practical Research Activities” measure.

Keywords: totiviridae, killer virus, proteomics, transcriptomics

Procedia PDF Downloads 147
662 Research on Intercity Travel Mode Choice Behavior Considering Traveler’s Heterogeneity and Psychological Latent Variables

Authors: Yue Huang, Hongcheng Gan

Abstract:

The new urbanization pattern has led to rapid growth in demand for short-distance intercity travel, and the emergence of new travel modes has also increased the variety of intercity travel options. In previous studies on intercity travel mode choice behavior, the impact of the functional amenities of a travel mode and travelers' long-term personality characteristics has rarely been considered, and empirical results have typically been calibrated using revealed preference (RP) or stated preference (SP) data. This study designed a questionnaire that combines an RP and an SP experiment from the perspective of a trip chain combining inner-city and intercity mobility, with consideration for the actual conditions of the Huainan-Hefei traffic corridor. On the basis of RP/SP fusion data, a hybrid choice model considering both random taste heterogeneity and psychological characteristics was established to investigate travelers' mode choice behavior for traditional train, high-speed rail, intercity bus, private car, and intercity online car-hailing. The findings show that intercity time and cost exert the greatest influence on mode choice, with significant heterogeneity across the population. Although inner-city cost does not demonstrate a significant influence, inner-city time plays an important role. Service attributes of a travel mode, such as catering and hygiene services, as well as free wireless network supply, play only a minor role in mode selection. Finally, our study demonstrates that safety-seeking tendency, hedonism, and introversion all have differential and significant effects on intercity travel mode choice.

Keywords: intercity travel mode choice, stated preference survey, hybrid choice model, RP/SP fusion data, psychological latent variable, heterogeneity

Procedia PDF Downloads 111
661 Characterization of Hyaluronic Acid-Based Injections Used on Rejuvenation Skin Treatments

Authors: Lucas Kurth de Azambuja, Loise Silveira da Silva, Gean Vitor Salmoria, Darlan Dallacosta, Carlos Rodrigo de Mello Roesler

Abstract:

This work provides a physicochemical and thermal characterization of three different hyaluronic acid (HA)-based injections used for skin rejuvenation treatments. The three products analyzed are made by the same manufacturer and commercialized for application at different skin levels. According to the manufacturer, all three HA-based injections are crosslinked and have a concentration of 23 mg/mL of HA and 0.3% lidocaine. Samples were characterized by Fourier-transform infrared spectroscopy (FTIR), differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), and scanning electron microscopy (SEM). FTIR analysis resulted in similar spectra for the different products. DSC analysis demonstrated that the fusion points differ between products, with a higher fusion temperature observed in Specimen A, which is used for subcutaneous applications, compared with B and C, which are used for the middle dermis and deep dermis, respectively. TGA data demonstrated a considerable mass loss at 100 °C, which means that the product has more than 50% water in its composition. TGA analysis also showed that Specimen A had a lower mass loss at 100 °C than Specimen C. A mass loss at around 220 °C was observed in all samples, characterizing the presence of hyaluronic acid. SEM images displayed a similar structure in all samples analyzed, with a thicker layer for Specimen A compared with B and C. This series of analyses demonstrated that, as expected, the physicochemical and thermal properties of the products differ according to their application. Furthermore, to better characterize the crosslinking degree of each product and its mechanical properties, a set of different techniques should be applied in parallel to correlate the results and thereby relate injection application to material properties.

Keywords: hyaluronic acid, characterization, soft-tissue fillers, injectable gels

Procedia PDF Downloads 89
660 Lateral Retroperitoneal Transpsoas Approach: A Practical Minimal Invasive Surgery Option for Treating Pyogenic Spondylitis of the Lumbar Vertebra

Authors: Sundaresan Soundararajan, Chor Ngee Tan

Abstract:

Introduction: Pyogenic spondylitis, otherwise treated conservatively with long-term antibiotics, requires surgical debridement and reconstruction in about 10% to 20% of cases. The classical approach adopted by many surgeons has been the anterior approach, ensuring thorough and complete debridement. This, however, comes with high rates of morbidity due to the nature of its access. The direct lateral retroperitoneal approach, which has been growing in use for degenerative lumbar diseases, has potential for treating pyogenic spondylitis, given its ease of access and relatively low risk of complications. Aims/Objectives: The objective of this study was to evaluate the effectiveness and clinical outcomes of lateral approach surgery in the surgical management of pyogenic spondylitis of the lumbar spine. Methods: Retrospective chart analysis was performed on all patients who presented with pyogenic spondylitis (lumbar discitis/vertebral osteomyelitis) and had undergone direct lateral retroperitoneal lumbar vertebral debridement and posterior instrumentation between 2014 and 2016. Data on blood loss, operating time, surgical complications, clinical outcomes, and fusion rates were recorded. Results: A total of 6 patients (3 male and 3 female) underwent this procedure at a single institution by a single surgeon during the defined period. One patient presented with an infected implant (PLIF) and vertebral osteomyelitis, while the other five presented with single-level spondylodiscitis. All patients underwent lumbar debridement, iliac strut grafting, and posterior instrumentation (revision of screws for the infected PLIF case). The mean operating time was 308.3 minutes across the 6 cases. Mean blood loss was 341 cc (range 200 cc to 600 cc). The presenting symptom of back pain resolved in all 6 cases, while the 2 cases that presented with lower limb weakness had improvement of their neurological deficits. In one patient the strut graft dislodged during posterior instrumentation and needed intraoperative revision. Infective markers subsequently normalized in all patients. All subjects also showed radiological evidence of fusion at 6-month follow-up. Conclusions: The lateral approach to treating pyogenic spondylitis is a viable option, as it allows debridement and reconstruction without the risks that come with anterior approaches. It allows efficient debridement, short surgical time, moderate blood loss, and a low risk of vascular injury. Clinical outcomes and fusion rates with this approach also support its use as a practical MIS option for such infection cases.

Keywords: lateral approach, minimally invasive, pyogenic spondylitis, XLIF

Procedia PDF Downloads 177
659 Experimental Study of Hydrogen and Water Vapor Extraction from Helium with Zeolite Membranes for Tritium Processes

Authors: Rodrigo Antunes, Olga Borisevich, David Demange

Abstract:

The Tritium Laboratory Karlsruhe (TLK) has identified zeolite membranes as most promising for tritium processes in future fusion reactors. Tritium diluted in purge gases or gaseous effluents, and present in both molecular and oxidized forms, can be pre-concentrated by a stage of zeolite membranes followed by a main downstream recovery stage (e.g., a catalytic membrane reactor). Since 2011, several zeolite membrane samples have been tested to measure the membranes' performance in the separation of hydrogen and water vapor from helium streams. These experiments were carried out in the ZIMT (Zeolite Inorganic Membranes for Tritium) facility, where mass spectrometry and cold traps were used to measure the membranes' performance. The membranes were tested at temperatures ranging from 25 °C up to 130 °C, at feed pressures between 1 and 3 bar, and at typical feed flows of 2 l/min. During this experimental campaign, several zeolite-type membranes were studied: a hollow-fiber MFI nanocomposite membrane purchased from IRCELYON (France), and tubular MFI-ZSM5, NaA, and H-SOD membranes purchased from the Institute for Ceramic Technologies and Systems (IKTS, Germany). Among these membranes, only the MFI-based ones showed relevant performance for H2/He separation, with rather high permeances (~0.5-0.7 μmol/(s·m²·Pa) for H2 at 25 °C for MFI-ZSM5), but with a limited ideal selectivity of around 2 for H2/He regardless of the feed concentration. Both MFI and NaA showed higher separation performance when water vapor was used instead; for example, at 30 °C, the separation factor for MFI-ZSM5 is approximately 10 and 38 for 0.2% and 10% H2O/He, respectively. The H-SOD membrane proved to be considerably defective and was therefore not considered for further experiments. In this contribution, a comprehensive analysis is given of the experimental methods and results obtained over the past four years for the separation performance of different zeolite membranes in an inactive environment. These results are encouraging for the experimental campaign with molecular and oxidized tritium that will follow in 2017.

Keywords: gas separation, nuclear fusion, tritium processes, zeolite membranes

Procedia PDF Downloads 250
658 The Stem Cell Transcription Co-Factor ZNF521 Sustains the MLL-AF9 Fusion Protein in Acute Myeloid Leukemias by Altering the Gene Expression Landscape

Authors: Emanuela Chiarella, Annamaria Aloisio, Nisticò Clelia, Maria Mesuraca

Abstract:

ZNF521 is a stem cell-associated transcription co-factor that plays a crucial role in the homeostatic regulation of the stem cell compartment in the hematopoietic, osteo-adipogenic, and neural systems. In normal hematopoiesis, primary human CD34+ hematopoietic stem cells typically display high expression of ZNF521, while its mRNA levels rapidly decrease as these progenitors progress towards erythroid, granulocytic, or B-lymphoid differentiation. However, most acute myeloid leukemias (AMLs) and leukemia-initiating cells retain high ZNF521 expression. In particular, AMLs are often characterized by chromosomal translocations involving the Mixed Lineage Leukemia (MLL) gene; these rearrangements generate a variety of fusion oncogenes whose partners are genes normally required during hematopoietic development, and once fused, they promote epigenetic and transcription-factor dysregulation. The chromosomal translocation t(9;11)(p21-22;q23), fusing the MLL gene with the AF9 gene, results in a monocytic immunophenotype with an aggressive course, frequent relapses, and a short survival time. To better understand the dysfunctional transcriptional networks related to genetic aberrations, AML gene expression profile datasets were queried for ZNF521 expression and its correlations with specific gene rearrangements and mutations. The results showed that ZNF521 mRNA levels are associated with specific genetic aberrations: the highest expression levels were observed in AMLs involving t(11q23) MLL rearrangements in two distinct datasets (MILE and den Boer); elevated ZNF521 mRNA expression levels were also revealed in AMLs with t(7;12) or with internal rearrangements of chromosome 16. On the contrary, relatively low ZNF521 expression levels were associated with the t(8;21) translocation, which in turn correlates with the AML1-ETO fusion gene, with the t(15;17) translocation, and with AMLs carrying FLT3-ITD, NPM1, or CEBPα double mutations. In vitro, we found that the enforced co-expression of ZNF521 in cord blood-derived CD34+ cells induced a significant proliferative advantage, enhancing the effects of MLL-AF9 on the induction of proliferation and the expansion of leukemic progenitor cells. Transcriptome profiling of CD34+ cells transduced with either MLL-AF9, ZNF521, or a combination of the two transgenes highlighted specific sets of up- or down-regulated genes involved in the leukemic phenotype, including those encoding transcription factors, epigenetic modulators, and cell cycle regulators, as well as those engaged in the transport or uptake of nutrients. These data underscore the functional cooperation between ZNF521 and MLL-AF9, resulting in the development, maintenance, and clonal expansion of leukemic cells. Finally, silencing of ZNF521 in MLL-AF9-transformed primary CD34+ cells inhibited their proliferation and led to their extinction, while ZNF521 silencing in the MLL-AF9+ THP-1 cell line impaired its growth and clonogenicity. Taken together, our data highlight the role of ZNF521 in the control of self-renewal and of the immature compartment of malignant hematopoiesis, in which, by altering the gene expression landscape, it contributes to the development and/or maintenance of AML in concert with the MLL-AF9 fusion oncogene.

Keywords: AML, human zinc finger protein 521 (hZNF521), mixed lineage leukemia gene (MLL) AF9 (MLLT3 or LTG9), cord blood-derived hematopoietic stem cells (CB-CD34+)

Procedia PDF Downloads 112
657 In-Situ Formation of Particle Reinforced Aluminium Matrix Composites by Laser Powder Bed Fusion of Fe₂O₃/AlSi12 Powder Mixture Using Consecutive Laser Melting+Remelting Strategy

Authors: Qimin Shi, Yi Sun, Constantinus Politis, Shoufeng Yang

Abstract:

In-situ preparation of particle-reinforced aluminium matrix composites (PRAMCs) by laser powder bed fusion (LPBF) additive manufacturing is a promising strategy to strengthen traditional Al-based alloys. The laser-driven thermite reaction can be a practical mechanism to synthesize PRAMCs in situ. However, introducing oxygen through the addition of Fe₂O₃ makes the powder mixture highly prone to forming porosity and Al₂O₃ films during LPBF, bringing challenges to producing dense Al-based materials. Therefore, this work develops a processing strategy that combines consecutive high-energy laser melting scanning with low-energy laser remelting scanning to prepare PRAMCs from a Fe₂O₃/AlSi12 powder mixture. The powder mixture consists of 5 wt% Fe₂O₃ and the remainder AlSi12 powder; the addition of 5 wt% Fe₂O₃ aims to achieve balanced strength and ductility. A high relative density (98.2 ± 0.55%) was obtained by optimizing the laser melting surface energy density (Emelting) and the laser remelting surface energy density (Eremelting) to Emelting = 35 J/mm² and Eremelting = 5 J/mm². Results further reveal the necessity of increasing Emelting to improve the spreading/wetting of the liquid metal by breaking up the Al₂O₃ films surrounding the molten pools; however, the high-energy laser melting produced much porosity, including H₂-, O₂-, and keyhole-induced pores. The subsequent low-energy laser remelting could close the resulting internal pores, backfill open gaps, and smoothen solidified surfaces. As a result, the material was densified by repeating laser melting and laser remelting layer by layer. Even with two laser scans per layer, the microstructure still shows fine cellular Si networks with Al grains inside (grain size of about 370 nm) and in-situ nano-precipitates (Al₂O₃, Si, and Al-Fe(-Si) intermetallics). Finally, the fine microstructure, nano-structured dispersion strengthening, and high level of densification strengthened the in-situ PRAMCs, which reached a yield strength of 426 ± 4 MPa and a tensile strength of 473 ± 6 MPa. The results can be expected to provide valuable information for processing other powder mixtures with severe porosity/oxide-film formation potential, given the demonstrated contribution of the laser melting/remelting strategy to densifying the material and obtaining good mechanical properties during LPBF.
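
The abstract reports only the resulting surface energy densities, not the underlying laser parameters. As a hedged sketch, the conventional LPBF definition E = P / (v · h) (laser power over scan speed times hatch spacing) reproduces the quoted values under one assumed parameter set; the power, speed, and hatch values below are illustrative assumptions, not figures from the study:

    # Surface energy density, E = P / (v * h), as conventionally defined for LPBF.
    # The power/speed/hatch values are assumptions chosen only to reproduce the
    # reported densities; the paper does not state the actual laser parameters.
    def surface_energy_density(power_w, speed_mm_s, hatch_mm):
        return power_w / (speed_mm_s * hatch_mm)  # J/mm^2

    e_melting = surface_energy_density(350.0, 100.0, 0.10)    # -> 35 J/mm^2
    e_remelting = surface_energy_density(100.0, 200.0, 0.10)  # -> 5 J/mm^2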

Keywords: densification, laser powder bed fusion, metal matrix composites, microstructures, mechanical properties

Procedia PDF Downloads 156
656 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review

Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha

Abstract:

Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision-making is now commonplace; properly classifying this textual information in a given context, however, remains difficult. We therefore conducted a systematic review of previous literature on sentiment classification and the AI-based techniques used for it, in order to better understand how to design and develop a robust and more accurate sentiment classifier, one that can correctly distinguish social media text of a given context between hate speech and inverted compliments with a high level of accuracy. We evaluated over 250 articles from digital sources such as ScienceDirect, ACM, Google Scholar, and IEEE Xplore, and narrowed these down to 31 relevant studies. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources such as Twitter, movie review corpora, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques like CNN+LSTM, CNN+GRU, and CNN+BERT outperformed both single deep learning techniques and machine learning techniques. Python outperformed Java for sentiment analyzer development owing to its simplicity and AI library support. Based on the most important findings from this study, we make recommendations for future research.
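
As one concrete illustration of the hybrid architectures the review found most accurate, a minimal CNN+LSTM binary sentiment classifier is sketched below in Keras (Python). The vocabulary size, sequence length, and layer widths are arbitrary placeholders, not settings taken from any surveyed study:

    # Minimal CNN+LSTM hybrid for binary sentiment classification.
    # All hyperparameters are illustrative placeholders.
    from tensorflow.keras import layers, models

    VOCAB_SIZE, MAX_LEN = 20000, 100  # assumed tokenizer settings

    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB_SIZE, 128),        # token ids -> dense vectors
        layers.Conv1D(64, 5, activation="relu"),  # CNN captures local n-gram features
        layers.MaxPooling1D(2),
        layers.LSTM(64),                          # LSTM models longer-range word order
        layers.Dense(1, activation="sigmoid"),    # probability of the positive class
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(...) would follow, given a tokenized corpus such as Twitter or SST data.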

Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text

Procedia PDF Downloads 116
655 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection

Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa

Abstract:

Light detection and ranging (LiDAR) is an active remote sensing technology used in several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. Classification of airborne laser scanning (ALS) point clouds is an important task that remains a real challenge for many scientists. The support vector machine (SVM) is one of the most widely used kernel-based statistical learning algorithms. The SVM is a non-parametric method, recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function; using a kernel, it performs a robust non-linear classification of samples. In practice, the data are rarely linearly separable. Via the kernel trick, SVMs implicitly map the data into a higher-dimensional space in which they become linearly separable, while all computations are still performed in the original space. This is one of the main reasons that SVMs are well suited to high-dimensional classification problems. Only a few training samples, called support vectors, are required. The SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain its recent adoption. In this poster, SVM classification of ALS LiDAR data is proposed. First, connected component analysis is applied to cluster the point cloud. Second, the resulting clusters are fed into the SVM classifier. The radial basis function (RBF) kernel is used because only two parameters (C and γ) need to be chosen, which decreases computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation, as sketched below. The LiDAR point cloud used is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The results demonstrated that parameter selection can narrow the search to a restricted interval of (C, γ) worth exploring further, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters was compared with the classifiers most used in the LiDAR literature: random forest, AdaBoost, and decision trees. The comparison showed the superiority of the SVM classifier with parameter selection for LiDAR data over the other classifiers.
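
The parameter-selection step described above amounts to a grid search over (C, γ) with 5-fold cross-validation. A minimal scikit-learn sketch follows; the random feature matrix X is a placeholder standing in for per-cluster features extracted from the connected components, and the grid values are assumptions, not the paper's search range:

    # Grid search over (C, gamma) for an RBF-kernel SVM with 5-fold cross-validation.
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))     # placeholder per-cluster features
    y = rng.integers(0, 4, size=200)  # placeholder labels: ground + 3 roof classes

    param_grid = {
        "svc__C": [0.1, 1, 10, 100],
        "svc__gamma": [1e-3, 1e-2, 1e-1, 1],
    }
    search = GridSearchCV(
        make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        param_grid, cv=5, scoring="accuracy",
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)  # (C, gamma) with best overall accuracy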

Keywords: classification, airborne LiDAR, parameters selection, support vector machine

Procedia PDF Downloads 148
654 Performance Evaluation of GPS/INS Main Integration Approach

Authors: Othman Maklouf, Ahmed Adwaib

Abstract:

This paper presents a comparative study of the main GPS/INS coupling schemes, covering the loosely coupled and tightly coupled configurations under several types of situations and operational conditions, with the data fusion performed by Kalman filtering. The importance of sensor calibration, as well as the alignment of the strapdown inertial navigation system, is also addressed, and the limitations of inertial navigation systems are investigated.
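
At the core of both coupling schemes is the Kalman filter's predict/update cycle. The sketch below shows that cycle for a toy one-dimensional position/velocity state; all matrices and noise levels are illustrative assumptions, not values from the study:

    # Minimal linear Kalman filter predict/update cycle, the core of GPS/INS fusion.
    # A 1-D position/velocity model stands in for the full navigation state.
    import numpy as np

    dt = 0.1
    F = np.array([[1, dt], [0, 1]])  # state transition (constant-velocity model)
    H = np.array([[1, 0]])           # GPS observes position only
    Q = 1e-3 * np.eye(2)             # process noise (e.g., INS drift), assumed
    R = np.array([[4.0]])            # GPS measurement noise variance, assumed

    x = np.zeros((2, 1))             # state: [position, velocity]
    P = np.eye(2)                    # state covariance

    def kf_step(x, P, z):
        # Predict: propagate state and covariance with the motion model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: correct the prediction with the GPS measurement z.
        innovation = z - H @ x
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ innovation
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = kf_step(x, P, z=np.array([[1.0]]))  # one fused GPS/INS step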

Keywords: GPS, INS, Kalman filter, sensor calibration, navigation system

Procedia PDF Downloads 591
653 Comics as an Intermediary for Media Literacy Education

Authors: Ryan C. Zlomek

Abstract:

The value of using comics in the literacy classroom has been explored since the 1930s. By that point, researchers had begun to implement comics into daily lesson plans and, in some instances, had started developing comics-supported curricula. In the mid-1950s, this line of research was cut short by the work of psychiatrist Frederic Wertham, whose research seemingly established a correlation between comic readership and juvenile delinquency. Since Wertham's allegations, the comics medium has had a hard time finding its way back into education. Now, over fifty years later, the definition of literacy is in mid-transition as the world has become more visually oriented and students need to interpret images as often as words. Through this transition, comics have found a place in literacy education research as the focus shifts from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource for bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to establish the exact medium being examined, and the conventions the medium utilizes are discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and the various comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process, as readers draw on real-world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored in parallel with the core principles of media literacy education: each principle is explained, and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines the use of comics in his computer science and technology classroom, laying out the theories he draws from Scott McCloud's text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics have positively impacted classrooms around the United States. Integrating comics into the classroom will not solve all issues related to literacy education; rather, comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.

Keywords: comics, graphic novels, mass communication, media literacy, metacognition

Procedia PDF Downloads 300