Search results for: slice thickness accuracy
1158 Limits of the Dot Counting Test: A Culturally Responsive Approach to Neuropsychological Evaluations and Treatment
Authors: Erin Curtis, Avraham Schwiger
Abstract:
Neuropsychological testing and evaluation is a crucial step in providing patients with effective diagnoses and treatment while in clinical care. The variety of batteries used in these evaluations can help clinicians better understand nuanced deficits in a patient's cognitive, behavioral, or emotional functioning, equipping them to make intentional choices about the patient's care. Despite the knowledge these batteries can yield, some aspects of neuropsychological testing remain largely inaccessible to certain patient groups as a result of fundamental cultural, educational, or social differences. One such battery is the Dot Counting Test (DCT), during which patients are required to count a series of dots on a page as rapidly and accurately as possible. As the battery progresses, the dots appear in clusters that are designed to be easily multiplied. This task evaluates a patient's cognitive functioning, attention, and the level of effort exerted on the evaluation as a whole. However, there is evidence to suggest that certain social groups, particularly Latinx groups, may perform worse on this task as a result of cultural or educational differences rather than reduced cognitive functioning or effort. As such, this battery fails to account for baseline differences among patient groups, raising questions about the accuracy, generalizability, and value of its results. Accessibility and cultural sensitivity are critical considerations in the testing and treatment of marginalized groups, yet they have been largely ignored in the literature and in clinical settings to date. Implications and improvements to applications are discussed.
Keywords: culture, Latino, neuropsychological assessment, neuropsychology, accessibility
Procedia PDF Downloads 115
1157 An Adaptive Back-Propagation Network and Kalman Filter Based Multi-Sensor Fusion Method for Train Location System
Authors: Yu-ding Du, Qi-lian Bao, Nassim Bessaad, Lin Liu
Abstract:
The Global Navigation Satellite System (GNSS) is regarded as an effective means of replacing the large number of track-side balises used in modern train localization systems. This paper describes a method, based on the fusion of data from a GNSS receiver and an odometer, that can significantly improve positioning accuracy. Because the train's position is constrained to the track, a digital track map is needed as an additional sensor to project the two-dimensional GNSS position onto a one-dimensional along-track distance. A model trained with a back-propagation (BP) neural network is used to estimate the trend of the positioning error, which is related to the specific location and to the approximations made in processing the digital track map. Since satellite signal failures can increase the GNSS positioning error, a GNSS signal detection step is applied. An adaptive weighted fusion algorithm is presented to reduce the standard deviation of the train speed measurement. Finally, an Extended Kalman Filter (EKF) fuses the projected 1-D GNSS position data and the 1-D train speed data to estimate the position. Experimental results suggest that the proposed method performs well and reduces the positioning error notably.
Keywords: multi-sensor data fusion, train positioning, GNSS, odometer, digital track map, map matching, BP neural network, adaptive weighted fusion, Kalman filter
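The final fusion stage described above lends itself to a small sketch. The following is a minimal linear Kalman filter, a simplified stand-in for the paper's EKF, fusing a noisy along-track position with an odometer speed under a constant-speed motion model; the noise levels and the test trajectory are illustrative assumptions, not values from the paper.

```python
# Minimal 1-D Kalman filter fusing along-track GNSS position and odometer
# speed. This is a hedged sketch, not the paper's EKF: a linear
# constant-speed model with hand-coded 2x2 covariance algebra.

def kf_fuse(gnss_pos, odo_speed, dt=1.0, q=0.01, r_pos=25.0, r_spd=0.04):
    """Filtered along-track positions for synchronized GNSS/odometer readings."""
    x = [gnss_pos[0], odo_speed[0]]              # state: [position, speed]
    P = [[r_pos, 0.0], [0.0, r_spd]]             # state covariance
    out = []
    for i, (z_pos, z_spd) in enumerate(zip(gnss_pos, odo_speed)):
        if i:  # predict with a constant-speed motion model
            x = [x[0] + dt * x[1], x[1]]
            P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
                  P[0][1] + dt * P[1][1]],
                 [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # update with the GNSS position measurement (H = [1, 0])
        s = P[0][0] + r_pos
        k0, k1 = P[0][0] / s, P[1][0] / s
        y = z_pos - x[0]
        x = [x[0] + k0 * y, x[1] + k1 * y]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        # update with the odometer speed measurement (H = [0, 1])
        s = P[1][1] + r_spd
        k0, k1 = P[0][1] / s, P[1][1] / s
        y = z_spd - x[1]
        x = [x[0] + k0 * y, x[1] + k1 * y]
        P = [[P[0][0] - k0 * P[1][0], P[0][1] - k0 * P[1][1]],
             [(1 - k1) * P[1][0], (1 - k1) * P[1][1]]]
        out.append(x[0])
    return out
```

Because the odometer speed is accurate, the filter trusts it to smooth out the much noisier position readings between GNSS fixes.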
Procedia PDF Downloads 256
1156 Artificial Neural Network Approach for Modeling Very Short-Term Wind Speed Prediction
Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Juan C. Seck-Tuoh-Mora, Norberto Hernandez-Romero, Irving Barragán-Vite
Abstract:
Wind speed forecasting is an important issue in planning wind power generation facilities. Accurate wind speed prediction allows good performance of wind turbines for electricity generation. A model based on artificial neural networks (ANNs) is presented in this work. A dataset with atmospheric information about air temperature, atmospheric pressure, wind direction, and wind speed in Pachuca, Hidalgo, México, was used to train the network. The data were downloaded from the web page of the National Meteorological Service of the Mexican government. The records were gathered over three months at ten-minute intervals. This dataset was used in an iterative algorithm that created 1,110 ANNs with different configurations, ranging from one to three hidden layers with 1 to 10 neurons per hidden layer. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which learns the relationship between input and output values. The model with the best performance contains three hidden layers with 9, 6, and 5 neurons, respectively; the coefficient of determination obtained was r² = 0.9414, and the root mean squared error was 1.0559. In summary, the ANN approach is suitable for predicting the wind speed in Pachuca because the r² value denotes a good fit to the gathered records, and the obtained ANN model can be used in the planning of wind power generation grids.
Keywords: wind power generation, artificial neural networks, wind speed, coefficient of determination
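As a side check on the sweep described above, one to three hidden layers with 1 to 10 neurons each gives 10 + 10² + 10³ = 1,110 configurations, matching the reported count. A sketch of that enumeration plus the two reported metrics follows; the training step itself (Levenberg-Marquardt) is omitted, and the metric helpers work on any observed/predicted series.

```python
# Enumerate every hidden-layer layout with 1-3 layers and 1-10 neurons per
# layer, and define the two metrics reported in the abstract (r² and RMSE).
from itertools import product

def ann_configurations(max_layers=3, max_neurons=10):
    configs = []
    for n_layers in range(1, max_layers + 1):
        configs.extend(product(range(1, max_neurons + 1), repeat=n_layers))
    return configs

def rmse(obs, pred):
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def r_squared(obs, pred):
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

The best reported layout, three hidden layers of 9, 6, and 5 neurons, appears in this enumeration as the tuple (9, 6, 5).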
Procedia PDF Downloads 129
1155 Multiscale Entropy Analysis of Electroencephalogram (EEG) of Alcoholic and Control Subjects
Authors: Lal Hussain, Wajid Aziz, Imtiaz Ahmed Awan, Sharjeel Saeed
Abstract:
Multiscale entropy (MSE) analysis is a useful technique, recently developed to quantify the dynamics of physiological signals at different time scales. This study investigates electroencephalogram (EEG) signals to analyze the background activity of alcoholic and control subjects by inspecting the coarse-grained sequences formed at different time scales. EEG recordings of alcoholic and control subjects, acquired using 64 electrodes, were taken from the publicly available machine learning repository of the University of California (UCI). The MSE analysis was performed on the EEG data from all electrodes of both groups. The Mann-Whitney rank test was used to find significant differences between the groups, and results were considered statistically significant for p-values < 0.05. The area under the receiver operating characteristic (ROC) curve was computed to quantify the degree of separation between the groups. The mean ranks of the MSE values at all time scales and for all electrodes were higher for control subjects than for alcoholic subjects; higher mean ranks represent higher complexity and vice versa. The findings indicated that EEG signals acquired through electrodes C3, C4, F3, F7, F8, O1, O2, P3, and T7 showed significant differences between alcoholic and control subjects at time scales 1 to 5. Moreover, all electrodes exhibited significance at some time scales. Likewise, the highest accuracy and separation were obtained at the central region (C3 and C4) and at other regions (P3, O1, F3, F7, F8, and T8), while electrodes such as Fp1, Fp2, P4, and F4 showed no significant results.
Keywords: electroencephalogram (EEG), multiscale sample entropy (MSE), Mann-Whitney test, receiver operating characteristic (ROC) curve, complexity analysis
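The coarse-graining step at the heart of MSE is simple to state in code. Below is a compact variant of multiscale sample entropy (non-overlapping window averages per scale, then the sample entropy of each coarse-grained series); the parameter choices m = 2 and r = 0.15 × standard deviation are common defaults, not necessarily those of the study.

```python
# Compact variant of multiscale sample entropy: coarse-grain the series at
# each scale, then compute sample entropy of the coarse-grained series.
import math

def coarse_grain(x, scale):
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=None):
    if r is None:
        mean = sum(x) / len(x)
        r = 0.15 * (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5
    def count_matches(length):
        # pairs of templates whose maximum pointwise distance is within r
        templates = [x[i:i + length] for i in range(len(x) - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b, a = count_matches(m), count_matches(m + 1)
    return math.log(b / a) if a and b else float("inf")

def multiscale_entropy(x, max_scale=5):
    return [sample_entropy(coarse_grain(x, s)) for s in range(1, max_scale + 1)]
```

A regular signal yields more template matches that persist at length m + 1, hence lower entropy; an irregular signal yields higher entropy, which is the sense in which higher MSE means higher complexity.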
Procedia PDF Downloads 378
1154 Investigate the Effect and the Main Influencing Factors of the Accelerated Reader Programme on Chinese Primary School Students’ Reading Achievement
Authors: Fujia Yang
Abstract:
Alongside technological innovation, the current “double reduction” policy and the English Curriculum Standards for Compulsory Education in China both emphasise and encourage appropriately integrating educational technologies into the classroom. Schools are therefore increasingly using digital means to engage students in English reading, but the impact of such technologies on Chinese pupils’ reading achievement remains unclear. To serve as a reference for reforming English reading education in primary schools under the double reduction policy, this study investigates the effects and the main influencing factors of a specific reading programme, Accelerated Reader (AR), on Chinese primary school students’ reading achievement. A quantitative online survey was used to collect 37 valid questionnaires from teachers. The results demonstrate that, from teachers’ perspectives, the AR programme seemed to positively affect students’ reading achievement by recommending material at the appropriate reading levels and developing students’ reading habits. Although the reading enjoyment derived from the AR programme does not directly influence students’ reading achievement, the two factors are strongly correlated. This can be explained by the self-paced, independent learning format of AR, its high accuracy in predicting reading levels, its quiz format and external motivation, and the importance of examinations and resource limitations in China. The results of this study may support the reform of English reading education in Chinese primary schools.
Keywords: educational technology, reading programme, primary students, accelerated reader, reading effects
Procedia PDF Downloads 88
1153 Artificial Intelligence Based Abnormality Detection System and Real Valuᵀᴹ Product Design
Authors: Junbeom Lee, Jaehyuck Cho, Wookyeong Jeong, Jonghan Won, Jungmin Hwang, Youngseok Song, Taikyeong Jeong
Abstract:
This paper investigates and analyzes meta-learning technologies that use multiple cameras to monitor and check for abnormal behavior in people in real time in the healthcare field. Advances in artificial intelligence and computer vision have confirmed that cameras can be useful for individual health monitoring and abnormal behavior detection. This makes it possible to establish a system that can respond early by automatically detecting abnormal behavior in vulnerable individuals such as patients and the elderly. In this paper, we use meta-learning to analyze image data collected from cameras and develop a commercial product that determines abnormal behavior. Meta-learning applies machine learning algorithms to help systems learn and adapt quickly to new real data, improving the accuracy and reliability of the abnormal behavior discrimination system. In addition, this study proposes a meta-learning-based abnormal behavior detection system that includes steps such as data collection and preprocessing, feature extraction and selection, and classification model development. The performance of the proposed system is analyzed in various healthcare scenarios and experiments, demonstrating its superiority over existing methods. Through this study, we show that camera-based meta-learning technology can be useful for monitoring and testing for abnormal behavior in the healthcare area.
Keywords: artificial intelligence, abnormal behavior, early detection, health monitoring
Procedia PDF Downloads 92
1152 A Neurofeedback Learning Model Using Time-Frequency Analysis for Volleyball Performance Enhancement
Authors: Hamed Yousefi, Farnaz Mohammadi, Niloufar Mirian, Navid Amini
Abstract:
Investigating the capacities of visual functions whose adapted mechanisms can enhance the capability of sports trainees is a promising area of research, not only from the cognitive viewpoint but also in terms of its many applications in sports training. In this pilot study, the visual evoked potential (VEP) and event-related potential (ERP) signals of amateur and trained volleyball players were processed. The two groups of subjects were asked to imagine themselves in the state of receiving a ball while being shown a simulated volleyball field. The proposed method is based on a set of time-frequency features, extracted from the VEP signals using algorithms such as the Gabor filter, the continuous wavelet transform, and multi-stage wavelet decomposition, that can be indicative of whether a player is amateur or trained. A linear discriminant classifier achieves an accuracy, sensitivity, and specificity of 100% when the average over repetitions of the task-related signal is used. The main purpose of this study is to investigate the feasibility of a fast, robust, and reliable feature/model determination to be utilized as a neurofeedback parameter for improving volleyball players’ performance. The proposed measure has potential applications in brain-computer interface technology, where a real-time biomarker is needed.
Keywords: visual evoked potential, time-frequency feature extraction, short-time Fourier transform, event-related spectrum potential classification, linear discriminant analysis
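Of the time-frequency tools listed in the keywords, the short-time Fourier transform is the easiest to sketch. The following computes per-window power in a chosen frequency band from a single-channel signal; the window length, hop, and band edges are illustrative choices, not the study's settings.

```python
# Minimal short-time Fourier band-power extractor: slide a window over the
# signal, take a plain DFT of each frame, and average the spectral power
# inside a chosen band. Pure Python, so suitable only for short signals.
import math

def dft_power(frame):
    n = len(frame)
    powers = []
    for k in range(n // 2 + 1):
        re = sum(frame[t] * math.cos(-2 * math.pi * k * t / n) for t in range(n))
        im = sum(frame[t] * math.sin(-2 * math.pi * k * t / n) for t in range(n))
        powers.append((re * re + im * im) / n)
    return powers

def stft_band_power(x, fs, win=64, hop=32, band=(8.0, 13.0)):
    """Mean spectral power inside `band` (Hz) for each window of `x`."""
    feats = []
    df = fs / win                                # frequency resolution
    for start in range(0, len(x) - win + 1, hop):
        powers = dft_power(x[start:start + win])
        idx = [k for k in range(len(powers)) if band[0] <= k * df <= band[1]]
        feats.append(sum(powers[k] for k in idx) / max(len(idx), 1))
    return feats
```

Feature vectors built this way (one band power per window and band) could then be fed to a linear discriminant classifier of the kind the abstract reports.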
Procedia PDF Downloads 143
1151 Consumption and Diffusion Based Model of Tissue Organoid Development
Authors: Elena Petersen, Inna Kornienko, Svetlana Guryeva, Sergey Simakov
Abstract:
In vitro organoid cultivation requires the simultaneous provision of vascularization and nutrient perfusion of cells during organoid development. However, many aspects of this problem remain unsolved. The functionality of vascular network ingrowth is limited during the early stages of organoid development, since the vascular network only becomes functional in the final stages of in vitro cultivation. Therefore, a microchannel network should be created in the hydrogel matrix at the early stages of organoid cultivation, aimed at conducting and maintaining the minimally required level of nutrient perfusion for all cells in the expanding organoid. The network configuration should be designed so as to exclude hypoxic and necrotic zones in the expanding organoid at all stages of its cultivation. In vitro vascularization is currently a central issue in the field of tissue engineering. As perfusion and oxygen transport have direct effects on cell viability and differentiation, researchers are currently limited to tissues a few millimeters in thickness. These limitations are imposed by mass transfer and are defined by the balance between the metabolic demand of the cellular components in the system and the size of the scaffold. Current approaches include growth factor delivery, channeled scaffolds, perfusion bioreactors, microfluidics, cell co-cultures, cell functionalization, modular assembly, and in vivo systems. These approaches may improve cell viability or generate capillary-like structures within a tissue construct. Thus, there is a fundamental disconnect between defining the metabolic needs of tissue through quantitative measurements of oxygen and nutrient diffusion and the potential ease of integration into host vasculature for future in vivo implantation.
A model is proposed for prognosis of organoid growth and perfusion based on joint simulations of general nutrient diffusion, nutrient diffusion into the hydrogel matrix through the contact surfaces and microchannel walls, and nutrient consumption by the cells of the expanding organoid, including biomatrix contraction during tissue development, which is associated with a changing consumption rate of the growing organoid's cells. The model allows computing an effective microchannel network design that provides the minimally required level of nutrient concentration in all parts of the growing organoid. It can be used for preliminary planning of microchannel network designs and for simulating the nutrient supply rate depending on the stage of organoid development.
Keywords: 3D model, consumption model, diffusion, spheroid, tissue organoid
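The diffusion-consumption balance described above can be illustrated in one dimension. The sketch below marches an explicit finite-difference scheme to a quasi-steady nutrient profile between two perfused boundaries (standing in for microchannel walls) with a constant consumption term; all coefficients are illustrative, not fitted values.

```python
# 1-D diffusion-consumption sketch: nutrient diffuses in from both
# boundaries held at concentration c0 while cells consume it at rate k.
# Explicit scheme; d*dt/dx**2 = 0.4 <= 0.5 keeps it stable.

def nutrient_profile(n=41, c0=1.0, d=1.0, k=0.5, dx=0.05, dt=0.001, steps=5000):
    """March an explicit finite-difference scheme to a quasi-steady profile."""
    c = [c0] * n
    for _ in range(steps):
        new = c[:]
        for i in range(1, n - 1):
            diff = d * (c[i - 1] - 2 * c[i] + c[i + 1]) / dx ** 2
            new[i] = max(c[i] + dt * (diff - k), 0.0)  # consumption, clipped at 0
        new[0] = new[-1] = c0                          # perfused boundaries
        c = new
    return c
```

At steady state the profile approaches c(x) = c0 - (k/2d)·x·(L - x), so the center here settles near 0.75·c0; a dip approaching zero would signal the hypoxic or necrotic zones the channel layout must avoid.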
Procedia PDF Downloads 310
1150 Iris Cancer Detection System Using Image Processing and Neural Classifier
Authors: Abdulkader Helwan
Abstract:
Iris cancer, also called intraocular melanoma, is a cancer that starts in the iris, the colored part of the eye that surrounds the pupil. There is a need for an accurate and cost-effective iris cancer detection system, since the techniques currently available are still not efficient. The combination of image processing and artificial neural networks is highly effective for the diagnosis and detection of iris cancer. Image processing techniques improve diagnosis by enhancing the quality of the images, so that physicians can diagnose properly, while neural networks can help in deciding whether the eye is cancerous or not. This paper aims to develop an intelligent system that simulates human visual detection of intraocular melanoma. The suggested system combines both image processing techniques and neural networks. The images are first converted to grayscale, filtered, and then segmented using the Prewitt edge detection algorithm to detect the iris, the sclera circles, and the cancer. Principal component analysis is used to reduce the image size and to extract features. These features are then used as inputs to a neural network capable of deciding whether the eye is cancerous, based on experience acquired over many training iterations with different normal and abnormal eye images during the training phase. Normal images were obtained from a public database available on the internet, “Mile Research”, while the abnormal ones were obtained from another database, “eyecancer”. The experimental results for the proposed system show a high accuracy of 100% for detecting cancer and making the right decision.
Keywords: iris cancer, intraocular melanoma, cancerous, Prewitt edge detection algorithm, sclera
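The Prewitt step in the pipeline above is easy to make concrete. A plain-Python sketch of the two 3×3 Prewitt kernels and the resulting gradient magnitude for the interior pixels of a grayscale image:

```python
# Prewitt edge detection: convolve the two 3x3 Prewitt kernels with a
# grayscale image (list of pixel rows) and return the gradient magnitude
# for interior pixels. Border pixels are left at zero for simplicity.

PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PREWITT_Y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def prewitt_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(PREWITT_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(PREWITT_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

High magnitudes trace intensity boundaries, which is how the pipeline isolates the iris and sclera circles before the PCA and neural-network stages.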
Procedia PDF Downloads 507
1149 Influence of La0.1Sr0.9Co1-xFexO3-δ Catalysts on Oxygen Permeation Using Mixed Conductor
Authors: Y. Muto, S. Araki, H. Yamamoto
Abstract:
The separation of oxygen is a key technology for improving the efficiency and reducing the cost of processes such as the partial oxidation of methane and the condensation of carbon dioxide. In particular, carbon dioxide at high concentration can be obtained by combustion using pure oxygen separated from air. However, the oxygen separation process accounts for a large part of the energy consumption, and it is considered that membrane technologies enable separation at lower cost and lower energy consumption than conventional methods. In this study, the separation of oxygen using mixed-conductor membranes is examined. Oxygen permeation through such a membrane occurs in three steps: first, oxygen molecules dissociate into oxygen ions on the feed side of the membrane; subsequently, the oxygen ions diffuse through the membrane; finally, the oxygen ions recombine to form oxygen molecules. It is therefore expected that the membrane thickness and material, as well as the catalysts for dissociation and recombination, affect the membrane performance. However, there are few reports on catalysts for dissociation and recombination. We confirmed the performance of a La0.6Sr0.4Co1.0O3-δ (LSC) based catalyst, which is commonly used for dissociation and recombination. It is known that the adsorbed amount of oxygen increases with the Fe content doped on the B site of LSC. We prepared the catalysts La0.1Sr0.9Co0.9Fe0.1O3-δ (C9F1), La0.1Sr0.9Co0.5Fe0.5O3-δ (C5F5), and La0.1Sr0.9Co0.3Fe0.7O3-δ (C7F3). As the membrane material, we used a Pr2NiO4-type mixed conductor, (Pr0.9La0.1)2(Ni0.74Cu0.21Ga0.05)O4+δ (PLNCG), which shows high oxygen permeability and stability against carbon dioxide. Oxygen permeation experiments were carried out using a homemade apparatus at 850-975 °C. The membrane was sealed with Pyrex glass at both ends of the outer dense alumina tubes.
To measure the oxygen permeation rate, air was fed to the feed side at 50 ml min⁻¹, and helium, serving as the sweep and reference gas, was fed at 20 ml min⁻¹. The flow rates of the sweep gas and of the gas permeated through the membrane were measured using a flow meter, and the gas concentrations were determined using a gas chromatograph. The oxygen permeance was then determined from the flow rate and the oxygen concentration on the permeate side of the membrane. The oxygen permeation rate increased with temperature, which is attributed both to the increase of the catalytic activities and to the increase of oxygen diffusivity in the bulk of the membrane. The oxygen permeation rate is improved by using an LSC or LSCF catalyst, and the membrane with the LSCF catalyst showed a higher permeation rate than that with LSC. Furthermore, among the LSCF catalysts, the oxygen permeation rate increased with the doped amount of Fe, which is considered to be caused by the increased adsorbed amount of oxygen.
Keywords: membrane separation, oxygen permeation, K2NiF4-type structure, mixed conductor
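The permeance determination described in the last lines can be sketched directly: the oxygen permeation rate follows from the measured permeate-side flow rate and O₂ fraction, and dividing by membrane area and the oxygen partial-pressure difference gives a permeance. The function and every number in the example are hypothetical illustrations, not data from the study.

```python
# Hedged sketch of the permeance arithmetic: rate from outlet flow times
# measured O2 fraction; permeance normalizes by area and driving force.
# Units here (ml/min, cm^2, atm) are assumptions for illustration only.

def oxygen_permeance(outlet_flow_ml_min, o2_fraction, area_cm2, dp_atm):
    """Return (O2 permeation rate, permeance) for one measurement point."""
    rate = outlet_flow_ml_min * o2_fraction      # ml(O2) per minute
    permeance = rate / (area_cm2 * dp_atm)
    return rate, permeance
```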
Procedia PDF Downloads 522
1148 Dow Polyols near Infrared Chemometric Model Reduction Based on Clustering: Reducing Thirty Global Hydroxyl Number (OH) Models to Less Than Five
Authors: Wendy Flory, Kazi Czarnecki, Matthijs Mercy, Mark Joswiak, Mary Beth Seasholtz
Abstract:
Polyurethane materials are present in a wide range of industrial segments, such as furniture, building and construction, composites, automotive, electronics, and more. Dow is one of the leaders in the manufacture of the two main raw materials used to produce polyurethane products, isocyanates and polyols, and is also a key player in the manufacture of polyurethane systems/formulations designed for targeted applications. In 1990, the first analytical chemometric models were developed and deployed for use in the Dow QC labs of the polyols business for the quantification of OH, water, cloud point, and viscosity. Over the years, many models have been added; there are now over 140 models for quantification and hundreds for product identification, too many to support reasonably. There are 29 global models for the quantification of OH alone, covering more than 70 products at many sites. An attempt was made to consolidate these into a single model. While the consolidated model showed good statistics across the entire range of OH, several products exhibited a bias by ASTM E1655 in individual product validation. This project summary presents the strategy for global model updates for OH, reducing the number of models for quantification from over 140 to 5 or fewer using chemometric methods. To understand the best product groupings, clusters were identified by reducing the spectra to a few dimensions via Principal Component Analysis (PCA) and Uniform Manifold Approximation and Projection (UMAP). Results from these cluster analyses, together with a separate validation set, allowed Dow to reduce the number of models for predicting OH from 29 to 3 without loss of accuracy.
Keywords: hydroxyl, global model, model maintenance, near infrared, polyol
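The clustering strategy described above can be sketched with PCA for the dimensionality-reduction step (UMAP requires a third-party library, so it is omitted here) followed by a small k-means pass on the scores; the synthetic "spectra" below are illustrative stand-ins for near-infrared scans, not Dow data.

```python
# Reduce spectra to a few principal-component scores via SVD, then group
# them with a minimal k-means loop. A sketch of the clustering step only,
# under the stated assumption of PCA in place of UMAP.
import numpy as np

def pca_scores(spectra, n_components=2):
    centered = spectra - spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

def kmeans(scores, k=2, iters=20):
    # deterministic init: k points spread across the (ordered) sample list
    centers = scores[np.linspace(0, len(scores) - 1, k).astype(int)].copy()
    labels = np.zeros(len(scores), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((scores[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = scores[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels
```

Each resulting cluster would then receive one consolidated OH model, validated separately per the ASTM E1655 bias check mentioned above.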
Procedia PDF Downloads 139
1147 IoT-Based Early Identification of Guava (Psidium guajava) Leaves and Fruits Diseases
Authors: Daudi S. Simbeye, Mbazingwa E. Mkiramweni
Abstract:
Plant diseases can drastically diminish the quantity and quality of agricultural products. Guava (Psidium guajava), sometimes known as the apple of the tropics, is one of the most widely cultivated fruits in tropical regions. Monitoring plant health and diagnosing illness is essential for sustainable agriculture and requires inspecting the visually evident patterns on plant leaves and fruits. Because of the minor variations in the symptoms of the various guava illnesses, a professional opinion is normally required for disease diagnosis, and erroneous diagnoses can lead to improper pesticide application by farmers and to economic losses. This study proposes a method that uses artificial intelligence (AI) to detect and classify the most widespread guava plant diseases by comparing images of leaves and fruits against labeled datasets. An ESP32-CAM is responsible for data collection, capturing images of guava leaves and fruits. These images serve as datasets that assist in diagnosing plant diseases through the leaves and fruits, which is vital for the development of an effective automated agricultural system. The system test yielded highly accurate identification results: 99 percent accuracy in differentiating four guava fruit diseases (Canker, Mummification, Dot, and Rust) from healthy fruit. The proposed model has been interfaced with a mobile application so that a smartphone can deliver a quick and reliable judgment, helping farmers instantly detect disease and prevent future production losses by taking precautions beforehand.
Keywords: early identification, guava plants, fruit diseases, deep learning
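The fielded system uses a deep network; as a deliberately simplified stand-in for the "compare an image against labeled datasets" idea, the sketch below classifies a flattened image by its nearest labeled template under mean squared difference. The class labels follow the diseases listed above, but the templates and pixel values are invented for illustration and this is not the paper's model.

```python
# Nearest-template classification: a hedged, much-simplified stand-in for
# the deep-learning comparison step. Each class is represented by one
# flattened reference image; the query goes to the closest one.

def mean_sq_diff(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def classify(image, templates):
    """templates: dict mapping a disease label to a flattened reference image."""
    return min(templates, key=lambda label: mean_sq_diff(image, templates[label]))
```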
Procedia PDF Downloads 80
1146 AI-Powered Models for Real-Time Fraud Detection in Financial Transactions to Improve Financial Security
Authors: Shanshan Zhu, Mohammad Nasim
Abstract:
Financial fraud continues to be a major threat to financial institutions across the world, causing colossal monetary losses and undermining public trust. Fraud prevention techniques based on hard rules have become ineffective against the evolving fraud patterns of recent times. Against this background, the present study probes distinct methodologies that exploit emergent AI-driven techniques to further strengthen fraud detection. We compare the performance of generative adversarial networks and graph neural networks with other popular techniques, such as gradient boosting, random forests, and neural networks, and recommend integrating these state-of-the-art models into one robust, flexible, and smart system for real-time anomaly and fraud detection. We designed synthetic data and then conducted pattern recognition as well as unsupervised and supervised learning analyses on the transaction data to identify suspicious activities. Using actual financial statistics, we compare the accuracy, speed, and adaptability of our model against conventional models. The results of this study illustrate a strong need to integrate state-of-the-art, AI-driven fraud detection solutions into frameworks that are highly relevant to the financial domain, and they underline the urgency for banks and related financial institutions to implement these advanced technologies rapidly in order to maintain a high level of security.
Keywords: AI-driven fraud detection, financial security, machine learning, anomaly detection, real-time fraud detection
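The paper's models (GANs, GNNs, gradient boosting) need substantial infrastructure, but the unsupervised-analysis step on synthetic transaction data can be illustrated with a minimal baseline: flagging amounts whose robust z-score (based on the median and the median absolute deviation) exceeds a threshold. The threshold and the data are illustrative assumptions, not the study's.

```python
# Robust z-score anomaly flagging: a minimal unsupervised baseline for
# transaction amounts. Median/MAD resist the very outliers being hunted.
import statistics

def flag_anomalies(amounts, threshold=3.5):
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts) or 1e-9
    # 0.6745 rescales the MAD so the score is comparable to a standard z-score
    return [i for i, a in enumerate(amounts)
            if abs(0.6745 * (a - med) / mad) > threshold]
```

In a full system such a score would be one weak signal among many, feeding the supervised models rather than replacing them.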
Procedia PDF Downloads 51
1145 System Identification of Timber Masonry Walls Using Shaking Table Test
Authors: Timir Baran Roy, Luis Guerreiro, Ashutosh Bagchi
Abstract:
Dynamic studies are important for the design, repair, and rehabilitation of structures, and they have played an important role in characterizing the behavior of structures such as bridges, dams, and high-rise buildings. There has been substantial development in this area over the last few decades, especially in the field of dynamic identification techniques for structural systems. Frequency Domain Decomposition (FDD) and Time Domain Decomposition are the most commonly used methods to identify modal parameters such as natural frequency, modal damping, and mode shape. The focus of the present research is the dynamic characteristics of the typical timber masonry walls commonly used in Portugal. For that purpose, multi-storey structural prototypes of such walls have been tested on a seismic shaking table at the National Laboratory for Civil Engineering, Portugal (LNEC). Signal processing has been performed on the output response, collected from the shaking table experiments on the prototypes using accelerometers. In the present work, the output response is processed, with reference to the input, in two ways: FDD and Stochastic Subspace Identification (SSI). To estimate the modal parameters, algorithms for FDD are formulated and parametric functions for SSI are computed. Finally, the estimates from both methods are compared to assess the accuracy of the two techniques.
Keywords: frequency domain decomposition (FDD), modal parameters, signal processing, stochastic subspace identification (SSI), time domain decomposition
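The FDD step described above can be sketched compactly: form the cross-power spectral matrix of the multi-channel output accelerations at each frequency line, decompose it by singular value decomposition, and read natural frequencies off the peaks of the first singular value. The single-mode test signal and sampling rate below are illustrative, and the single-block spectral estimate is a simplification of the averaged (Welch-type) estimate normally used.

```python
# Compact Frequency Domain Decomposition sketch: per-frequency CPSD matrix
# from the channel spectra, SVD at each line, peaks of the first singular
# value indicate natural frequencies (its singular vector, the mode shape).
import numpy as np

def fdd_first_singular_values(channels, fs):
    """channels: (n_channels, n_samples) array. Returns (freqs, s1)."""
    spectra = np.fft.rfft(channels, axis=1)
    freqs = np.fft.rfftfreq(channels.shape[1], d=1.0 / fs)
    s1 = np.empty(len(freqs))
    for k in range(len(freqs)):
        g = np.outer(spectra[:, k], np.conj(spectra[:, k]))  # CPSD estimate
        s1[k] = np.linalg.svd(g, compute_uv=False)[0]
    return freqs, s1
```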
Procedia PDF Downloads 269
1144 Finite Element Analysis of Cold Formed Steel Screwed Connections
Authors: Jikhil Joseph, S. R. Satish Kumar
Abstract:
Steel structures are commonly used for rapid erection and multistory construction due to their inherent advantages. However, the high accuracy required in detailing and the heavier sections involved make them difficult to erect in place and to transport. Cold-formed steel, specially made by reducing carbon and other alloys, is nowadays used to make thin-walled structures. Various types of connections have been reported and practiced for thin-walled members, such as bolting, riveting, welding, and other mechanical connections; self-drilling screw connections are commonly used for cold-formed purlin-sheeting connections. In this paper, an attempt is made to develop a moment-resisting frame that can be rapidly and remotely constructed with thin-walled sections and self-drilling screws. Semi-rigid moment connections are developed with rectangular thin-walled tubes and screws. The finite element analysis program ABAQUS is used for modelling the screwed connections. The various modelling procedures for simulating the connection behavior, such as the tie-constraint model, the oriented spring model, and solid-interaction modelling, are compared and critically reviewed. From the experimental validations, solid-interaction modelling was identified as the most accurate and is used for predicting the connection behavior. From the finite element analysis, hysteresis curves and the modes of failure were identified. Parametric studies were performed on the connection model to optimize the connection configuration and obtain the desired connection characteristics.
Keywords: buckling, cold formed steel, finite element analysis, screwed connections
Procedia PDF Downloads 191
1143 Assessing Performance of Data Augmentation Techniques for a Convolutional Network Trained for Recognizing Humans in Drone Images
Authors: Masood Varshosaz, Kamyar Hasanpour
Abstract:
In recent years, there has been growing interest in recognizing humans in drone images for post-disaster search and rescue operations. Deep learning algorithms have shown great promise in this area, but they often require large amounts of labeled data to train the models. To keep the data acquisition cost low, augmentation techniques can be used to create additional data from existing images. Many such techniques can generate variations of an original image and thereby improve the performance of deep learning algorithms. While data augmentation is generally assumed to improve the accuracy and robustness of the models, it is important to ensure that the performance gains are not outweighed by the additional computational cost or the complexity of implementing the techniques. To this end, the impact of data augmentation on the performance of the deep learning models must be evaluated. In this paper, we evaluated the most common currently available 2D data augmentation techniques on a standard convolutional network trained for recognizing humans in drone images. The techniques include rotation, scaling, random cropping, flipping, shifting, and their combinations. The results showed that the augmented models perform 1-3% better than the base network. However, as the augmented images only contain the human parts already visible in the original images, a new data augmentation approach is needed to include the invisible parts of the human body. We therefore suggest a new method that employs simulated 3D human models to generate new data for training the network.
Keywords: human recognition, deep learning, drones, disaster mitigation
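The basic 2D augmentations evaluated above can be sketched for an image stored as a list of pixel rows; rotation is limited to 90-degree steps here to keep the sketch free of interpolation details, and scaling is omitted for the same reason.

```python
# Plain-Python 2D augmentations on a list-of-rows image: horizontal and
# vertical flips, 90-degree clockwise rotation, cropping, and shifting.

def hflip(img):
    return [row[::-1] for row in img]

def vflip(img):
    return img[::-1]

def rotate90(img):
    # clockwise: reverse the rows, then transpose
    return [list(row) for row in zip(*img[::-1])]

def crop(img, top, left, height, width):
    return [row[left:left + width] for row in img[top:top + height]]

def shift_right(img, k, fill=0):
    return [[fill] * k + row[:-k] if k else list(row) for row in img]
```

Chaining several of these per training image, with randomized parameters, is how a combined augmentation policy of the kind compared above would be built.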
Procedia PDF Downloads 101
1142 Interpretation and Prediction of Geotechnical Soil Parameters Using Ensemble Machine Learning
Authors: Goudjil kamel, Boukhatem Ghania, Jlailia Djihene
Abstract:
This paper delves into the development of a sophisticated desktop application designed to calculate soil bearing capacity and predict limit pressure. Drawing from an extensive review of existing methodologies, the study meticulously examines various approaches employed in soil bearing capacity calculations, elucidating their theoretical foundations and practical applications. Furthermore, the study explores the burgeoning intersection of artificial intelligence (AI) and geotechnical engineering, underscoring the transformative potential of AI-driven solutions in enhancing predictive accuracy and efficiency. Central to the research is the utilization of cutting-edge machine learning techniques, including Artificial Neural Networks (ANN), XGBoost, and Random Forest, for predictive modeling. Through comprehensive experimentation and rigorous analysis, the efficacy and performance of each method are evaluated, with XGBoost emerging as the preeminent algorithm, showcasing superior predictive capabilities compared to its counterparts. The study culminates in a nuanced understanding of the intricate dynamics at play in geotechnical analysis, offering valuable insights into optimizing soil bearing capacity calculations and limit pressure predictions. By harnessing the power of advanced computational techniques and AI-driven algorithms, the paper presents a paradigm shift in the realm of geotechnical engineering, promising enhanced precision and reliability in civil engineering projects.
Keywords: limit pressure of soil, xgboost, random forest, bearing capacity
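The gradient-boosting idea behind XGBoost can be illustrated in miniature: fit a sequence of weak learners (here, single-threshold regression stumps on one feature) to the residuals of the running prediction. This NumPy sketch is a didactic stand-in for the principle only, not the authors' model, the XGBoost library, or a random forest; all parameter values are illustrative.

```python
import numpy as np

def fit_stump(x, residual):
    """Find the single-threshold split of a 1-D feature that best fits residual."""
    best = None
    for t in np.unique(x)[:-1]:          # skip max value: right side would be empty
        left, right = residual[x <= t], residual[x > t]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]                       # threshold, left value, right value

def gradient_boost(x, y, n_rounds=100, lr=0.3):
    """Boost regression stumps against the current residuals (squared loss)."""
    pred = np.full(len(y), y.mean())      # start from the mean prediction
    stumps = []
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, lv, rv)   # shrunken update
        stumps.append((t, lv, rv))
    return stumps, pred
```

Production gradient boosting adds regularization, multi-feature trees, and second-order loss information, which is where XGBoost derives its advantage.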
Procedia PDF Downloads 30
1141 Seawater Intrusion in the Coastal Aquifer of Wadi Nador (Algeria)
Authors: Abdelkader Hachemi, Boualem Remini
Abstract:
Seawater intrusion is a significant challenge faced by coastal aquifers in the Mediterranean basin. This study aims to determine the position of the sharp interface between seawater and freshwater in the aquifer of Wadi Nador, located in the Wilaya of Tipaza, Algeria. A numerical areal sharp interface model using the finite element method is developed to investigate the spatial and temporal behavior of seawater intrusion. The aquifer is assumed to be homogeneous and isotropic. The simulation results are compared with geophysical prospection data obtained through electrical methods in 2011 to validate the model. The simulation results demonstrate a good agreement with the geophysical prospection data, confirming the accuracy of the sharp interface model. The position of the sharp interface in the aquifer is found to be approximately 1617 meters from the sea. Two scenarios are proposed to predict the interface position for the year 2024: one without pumping and the other with pumping. The results indicate a noticeable retreat of the sharp interface position in the first scenario, while a slight decline is observed in the second scenario. The findings of this study provide valuable insights into the dynamics of seawater intrusion in the Wadi Nador aquifer. The predicted changes in the sharp interface position highlight the potential impact of pumping activities on the aquifer's vulnerability to seawater intrusion. This study emphasizes the importance of implementing measures to manage and mitigate seawater intrusion in coastal aquifers. The sharp interface model developed in this research can serve as a valuable tool for assessing and monitoring the vulnerability of aquifers to seawater intrusion.
Keywords: seawater, intrusion, sharp interface, Algeria
Procedia PDF Downloads 80
1140 Algebraic Coupled Level Set-Volume of Fluid Method with Capillary Pressure Treatment for Surface Tension Dominant Two-Phase Flows
Authors: Majid Haghshenas, James Wilson, Ranganathan Kumar
Abstract:
In this study, an Algebraic Coupled Level Set-Volume of Fluid (A-CLSVOF) method with capillary pressure treatment is proposed for the modeling of two-phase capillary flows. The Volume of Fluid (VOF) method is utilized to incorporate one-way coupling with the Level Set (LS) function in order to further improve the accuracy of the interface curvature calculation and the resulting surface tension force. The capillary pressure is determined and treated independently of the hydrodynamic pressure in the momentum balance in order to maintain consistency between cell-centered and interpolated values, resulting in a reduction in parasitic currents. In this method, both VOF and LS functions are transported, where the new volume fraction determines the interface seed position used to reinitialize the LS field. The Hamilton-Godunov function is used with a second-order (in space and time) discretization scheme to produce a signed distance function. The performance of the current methodology has been tested against some common test cases in order to assess the reduction in non-physical velocities and improvements in the interfacial pressure jump. The cases of a static drop, non-linear Rayleigh-Taylor instability and finally a droplet's impact on a liquid pool were simulated to compare the performance of the present method to other well-known methods in the areas of parasitic current reduction, interface location evolution and overall agreement with experimental results.
Keywords: two-phase flow, capillary flow, surface tension force, coupled LS with VOF
Procedia PDF Downloads 358
1139 Vibration Analysis and Optimization Design of Ultrasonic Horn
Authors: Kuen Ming Shu, Ren Kai Ho
Abstract:
The ultrasonic horn has the functions of amplifying amplitude and reducing resonant impedance in an ultrasonic system. Its primary function is to amplify deformation or velocity during vibration and focus ultrasonic energy on a small area. It is a crucial component in the design of an ultrasonic vibration system. There are five common design methods for ultrasonic horns: the analytical method, the equivalent circuit method, equal mechanical impedance, the transfer matrix method, and the finite element method. In addition, the general optimization design process changes geometric parameters to improve a single performance measure, so the relationship between parameters and objectives cannot be established. However, a good optimization design must be able to establish the relationship between input parameters and output parameters so that the designer can choose among parameters according to different performance objectives and obtain the results of the optimization design. In this study, an ultrasonic horn provided by Maxwide Ultrasonic Co., Ltd. was used as the baseline for the optimized ultrasonic horn. The ANSYS finite element analysis (FEA) software was used to simulate the distribution of the horn amplitudes and the natural frequency value. The results showed that the simulated and actually measured frequency values were similar, verifying the accuracy of the simulation. ANSYS DesignXplorer was then used to perform response surface optimization, which shows the relationship between parameters and objectives. Therefore, this method can replace the traditional experience-based or trial-and-error design methods, reducing material costs and design cycles.
Keywords: horn, natural frequency, response surface optimization, ultrasonic vibration
Procedia PDF Downloads 119
1138 Preparation and Modeling Carbon Nanofibers as an Adsorbent to Protect the Environment
Authors: Maryam Ziaei, Saeedeh Rafiei, Leila Mivehi, Akbar Khodaparast Haghi
Abstract:
Carbon nanofibers possess properties that are rarely present in other types of carbon adsorbents, including a small cross-sectional area combined with a multitude of slit-shaped nanopores that are suitable for the adsorption of certain types of molecules. Because of these unique properties, the materials can be used for the selective adsorption of organic molecules. On the other hand, activated carbon fiber (ACF) has been widely applied as an effective adsorbent for micro-pollutants in recent years; ACF effectively adsorbs and removes a full spectrum of harmful substances. Although there are various methods of fabricating carbon nanofibers, electrospinning is perhaps the most versatile procedure. The technique has received great attention in recent decades because it is simple, convenient, and low in cost. Controlling the spinning process and achieving optimal conditions is important because the process affects the fibers' physical properties, absorbency, and versatility for different industrial purposes. Modeling and simulation are suitable methods for achieving this. In this paper, activated carbon nanofibers were produced by electrospinning of a polyacrylonitrile solution. Stabilization, carbonization, and activation of the electrospun nanofibers were achieved under optimized conditions, and mathematical modelling of the electrospinning process was carried out by focusing on the governing equations of electrified fluid jet motion (using the FEniCS software). Experimental and theoretical results will be compared with each other in order to estimate the accuracy of the model. The simulation can provide the possibility of predicting essential parameters that affect the electrospinning process.
Keywords: carbon nanofibers, electrospinning, electrospinning modeling, simulation
Procedia PDF Downloads 291
1137 Modelling Phase Transformations in Zircaloy-4 Fuel Cladding under Transient Heating Rates
Authors: Jefri Draup, Antoine Ambard, Chi-Toan Nguyen
Abstract:
Zirconium alloys exhibit solid-state phase transformations under thermal loading. These can lead to a significant evolution of the microstructure and associated mechanical properties of materials used in nuclear fuel cladding structures. Therefore, the ability to capture the effects of phase transformation on the material constitutive behavior is of interest during conditions of severe transient thermal loading. Whilst typical Avrami-type, or Johnson-Mehl-Avrami-Kolmogorov (JMAK), models for phase transformations have been shown to correlate well with the behavior of Zircaloy-4 under constant heating rates, the effects of variable and fast heating rates are not fully explored. The present study utilises the results of in-situ high-energy synchrotron X-ray diffraction (SXRD) measurements in order to validate the phase transformation models for Zircaloy-4 under fast variable heating rates. These models are used to assess the performance of fuel cladding structures under loss-of-coolant accident (LOCA) scenarios. The results indicate that simple Avrami-type models can provide a reasonable indication of the phase distribution in experimental test specimens under variable fast thermal loading. However, the accuracy of these models deteriorates under the faster heating regimes, i.e., 100 °C s⁻¹. The studies highlight areas for improvement of simple Avrami-type models, such as the inclusion of temperature-rate dependence of the JMAK n-exponent.
Keywords: accident, fuel, modelling, zirconium
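For reference, the isothermal JMAK (Avrami) transformed fraction has a simple closed form. The parameterisation varies between authors (whether the rate constant sits inside the exponent), so the sketch below uses one common convention, X(t) = 1 − exp(−(kt)ⁿ), rather than the paper's exact model.

```python
import math

def jmak_fraction(t, k, n):
    """Isothermal JMAK transformed phase fraction: X(t) = 1 - exp(-(k*t)**n).

    k is a temperature-dependent rate constant and n the Avrami exponent.
    Under fast variable heating, as studied in the paper, k (and possibly n)
    becomes time-dependent and the expression must be integrated incrementally.
    """
    return 1.0 - math.exp(-((k * t) ** n))
```

The paper's suggestion of a temperature-rate-dependent n-exponent would replace the constant `n` here with a function of the instantaneous heating rate.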
Procedia PDF Downloads 144
1136 A Power Management System for Indoor Micro-Drones in GPS-Denied Environments
Authors: Yendo Hu, Xu-Yu Wu, Dylan Oh
Abstract:
GPS-denied drones open the possibility of indoor applications, including dynamic aerial surveillance, inspection, safety enforcement, and discovery. Indoor swarming further enhances these applications in accuracy, robustness, operational time, and coverage. For micro-drones, power management becomes a critical issue, given the battery payload restriction. This paper proposes an application-enabling battery-replacement solution that extends the micro-drone active phase without human intervention. First, a framework to quantify the effectiveness of a power management solution for a drone fleet is proposed. The operation-to-non-operation ratio, ONR, gives a quantitative benchmark to measure the effectiveness of a power management solution. Second, a survey was carried out to evaluate the ONR performance of the various solutions. Third, through analysis, this paper proposes a solution tailored to the indoor micro-drone, suitable for swarming applications. The proposed automated battery-replacement solution, along with a modified micro-drone architecture, was implemented along with the associated micro-drone. Fourth, the system was tested and compared with the various solutions within the industry. Results show that the proposed solution achieves an ONR value of 31, which is a 1-fold improvement over the best alternative option. The cost analysis shows a manufacturing cost of $25, which makes this approach viable for cost-sensitive markets (e.g., consumer). Further challenges remain in the areas of drone design for automated battery replacement, landing pad/drone production, high-precision landing control, and ONR improvements.
Keywords: micro-drone, battery swap, battery replacement, battery recharge, landing pad, power management
Procedia PDF Downloads 128
1135 Focus-Latent Dirichlet Allocation for Aspect-Level Opinion Mining
Authors: Mohsen Farhadloo, Majid Farhadloo
Abstract:
Aspect-level opinion mining, which aims at discovering aspects (aspect identification) and their corresponding ratings (sentiment identification) from customer reviews, has increasingly attracted the attention of researchers and practitioners, as it provides valuable insights about products/services from the customers' points of view. Instead of addressing aspect identification and sentiment identification in two separate steps, it is possible to identify both aspects and sentiments simultaneously. In recent years many graphical models based on Latent Dirichlet Allocation (LDA) have been proposed to solve both aspect and sentiment identification in a single step. Although LDA models have been effective tools for the statistical analysis of document collections, they also have shortcomings in addressing some unique characteristics of opinion mining. Our goal in this paper is to address one of the limitations of topic models to date: they fail to directly model the associations among topics. Indeed, in many text corpora, it is natural to expect that subsets of the latent topics have higher probabilities. We propose a probabilistic graphical model, called focus-LDA, to better capture the associations among topics when applied to aspect-level opinion mining. Our experiments on real-life data sets demonstrate the improved effectiveness of the focus-LDA model in terms of the accuracy of the predictive distributions over held-out documents. Furthermore, we demonstrate qualitatively that the focus-LDA topic model provides a natural way of visualizing and exploring unstructured collections of textual data.
Keywords: aspect-level opinion mining, document modeling, Latent Dirichlet Allocation, LDA, sentiment analysis
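The baseline that focus-LDA extends, plain LDA, can be fitted with a minimal collapsed Gibbs sampler in a few dozen lines of NumPy. This sketch illustrates standard LDA only, not the proposed focus-LDA (which additionally models topic associations); the hyperparameters alpha and beta and the iteration count are illustrative.

```python
import numpy as np

def lda_gibbs(docs, n_topics, n_vocab, n_iter=100, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for plain LDA.

    docs is a list of token-id lists; returns the document-topic and
    topic-word count matrices from the final sampling state.
    """
    rng = np.random.default_rng(seed)
    z = [rng.integers(n_topics, size=len(d)) for d in docs]   # topic of each token
    ndk = np.zeros((len(docs), n_topics))                     # doc-topic counts
    nkw = np.zeros((n_topics, n_vocab))                       # topic-word counts
    nk = np.zeros(n_topics)                                   # tokens per topic
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                                   # remove current token
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional distribution over topics for this token
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                k = rng.choice(n_topics, p=p / p.sum())       # resample topic
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw
```

In the aspect-level setting described above, the learned topics play the role of aspects, and a focus-LDA-style model would bias `p` toward associated topic subsets rather than treating topics independently.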
Procedia PDF Downloads 97
1134 Modeling of Large Elasto-Plastic Deformations by the Coupled FE-EFGM
Authors: Azher Jameel, Ghulam Ashraf Harmain
Abstract:
In recent years, enriched techniques like the extended finite element method, the element-free Galerkin method, and the coupled finite element-element-free Galerkin method have found wide application in modeling different types of discontinuities produced by cracks, contact surfaces, and bi-material interfaces. The extended finite element method faces severe mesh distortion issues while modeling large deformation problems. The element-free Galerkin method does not have mesh distortion issues, but it is computationally more demanding than the finite element method. The coupled FE-EFGM proves to be an efficient numerical tool for modeling large deformation problems, as it exploits the advantages of both FEM and EFGM. The present paper employs the coupled FE-EFGM to model large elastoplastic deformations in bi-material engineering components. The large deformation occurring in the domain has been modeled using the total Lagrangian approach. The non-linear elastoplastic behavior of the material has been represented by the Ramberg-Osgood model. Elastic predictor-plastic corrector algorithms are used for the evaluation of stresses during large deformation. Finally, several numerical problems are solved by the coupled FE-EFGM to illustrate its applicability, efficiency and accuracy in modeling large elastoplastic deformations in bi-material samples. The results obtained by the proposed technique are compared with the results obtained by XFEM and EFGM. A remarkable agreement was observed among the results obtained by the three techniques.
Keywords: XFEM, EFGM, coupled FE-EFGM, level sets, large deformation
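The Ramberg-Osgood model mentioned above expresses total strain as the sum of a linear elastic part and a power-law plastic part. The sketch below uses one common form of the relation; the constant names E, K, and n follow convention, and the test values are illustrative rather than material data from the paper.

```python
def ramberg_osgood_strain(sigma, E, K, n):
    """Total strain under the Ramberg-Osgood relation.

    eps = sigma/E + K * (sigma/E)**n, i.e. linear elasticity plus a
    power-law plastic correction that dominates at high stress. E is
    Young's modulus, K and n are material hardening constants.
    """
    return sigma / E + K * (sigma / E) ** n
```

In an elastic predictor-plastic corrector scheme, the predictor assumes the purely elastic term sigma/E, and the corrector returns the stress to the curve defined by this relation.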
Procedia PDF Downloads 453
1133 The Impact of Intercultural Communicative Competence on the Academic Achievement of English Language Learners: Students Working in the Sector of Tourism in Jordan (Petra and Jerash) as a Case Study
Authors: Haneen Alrawashdeh, Naciye Kunt
Abstract:
Intercultural communicative competence, or ICC, is an extension of communicative competence that takes into account the intercultural aspect of learning a foreign language. Accordingly, this study aimed at investigating the impact of intercultural interaction on English as a foreign language (EFL) learners' academic achievement in English as a scholastic subject and their motivation towards learning it. To achieve the aim of the study, a qualitative research approach was implemented by means of semi-structured interviews. Interview sessions were conducted with eight teachers of English as well as ten English language learners who work in the tourism industry in a variety of career paths, such as selling antiques and traditional costumes. An analysis of learners' grades in English subjects from the 2014 to 2019 academic years was performed using the Open Education Management Information System Database in Jordan to support the findings of the study. The results illustrated that, because they work in the tourism sector, students gain skills and knowledge that assist them in better academic achievement in English by practicing intercultural communication with different nationalities on a daily basis; intercultural communication enhances students' speaking skills, lexicon, and fluency. However, although their grades increased, from the teachers' perspectives, intercultural communicative competence reduces the students' linguistic accuracy and ability to perform English academic writing in academic contexts such as exams.
Keywords: intercultural communicative competence, Jordan, language learning motivation, language academic achievement
Procedia PDF Downloads 214
1132 Formulation and Characterization of Active Edible Films from Cassava Starch for Snacks and Savories
Authors: P. Raajeswari, S. M. Devatha, S. Yuvajanani, U. Rashika
Abstract:
Edible food packaging is the need of the hour to save life on land and under water by eliminating the waste cycle and replacing single-use plastics at the grass-roots level, as it can be eaten or composted as such. Cassava (Manihot esculenta), selected for making the edible films, is a rich source of starch and exhibits good sheeting properties due to its high amylose:amylopectin content. Cassava starch was extracted by a manual method at laboratory scale and yielded 65 per cent. Edible films were developed by adding food-grade plasticizers and water. Glycerol showed good plasticizing properties compared to sorbitol and polylactic acid in both manual (petri dish) and machine (film-making machine) production. The thickness of the film is 0.25 ± 0.03 mm. Essential oils and components from peels such as pomegranate, orange, pumpkin, onion, and banana bract, and herbs such as tulsi and country borage, were extracted through standardized aqueous and alkaline methods. In the standardized film, the essential oils and components from the selected peels and herbs were added separately to the casting solution, and the film was cast. They were added to improve the antioxidant, antimicrobial, and optical properties. Inclusion of the extracts reduced bubble formation while casting. FTIR, water vapor and oxygen transmission rate (WVTR and OTR), tensile strength, microbial load, shelf life, and degradability tests were performed to analyse the mechanical properties of the standardized films. FTIR confirmed the presence of the essential oils. The WVTR and OTR of the film improved after inclusion of the essential oils and extracts, from 1.312 to 0.811 cm³/m² and from 15.12 to 17.81 g/m²·d, respectively. Inclusion of essential oils from herbs gave better WVTR and OTR than inclusion of the peel extracts or the standard. Tensile strength and elongation at break were not changed by the essential oils and extracts, at 0.86 ± 0.12 MPa and 14 ± 2 at 85 N force.
Inclusion of the extracts enhanced the optical properties of the film and improved the appearance of the packaging material. The films were completely degraded by the 84th day and were partially soluble in water. Inclusion of essential oils had no impact on degradability or solubility. The microbial load of the active films decreased from 15 cfu/g to 7 cfu/g. The films can be stored for 24 days in the frozen state and for 48 days at atmospheric temperature when packed with South Indian snacks and savories.
Keywords: active films, cassava starch, plasticizer, characterization
Procedia PDF Downloads 85
1131 Studying the Temperature Field of Hypersonic Vehicle Structure with Aero-Thermo-Elasticity Deformation
Authors: Geng Xiangren, Liu Lei, Gui Ye-Wei, Tang Wei, Wang An-ling
Abstract:
The malfunction of the thermal protection system (TPS) caused by aerodynamic heating is a latent threat to aircraft structural safety. Accurately predicting the structural temperature field is quite important for the TPS design of a hypersonic vehicle. Since Thornton's work in 1988, the coupled method of aerodynamic heating and heat transfer has developed rapidly. However, little attention has been paid to the influence of structural deformation on aerodynamic heating and the structural temperature field. In flight, especially long-endurance flight, the structural deformation caused by aerodynamic heating and temperature rise has a direct impact on the aerodynamic heating and the structural temperature field. Thus, the coupled interaction cannot be neglected. In this paper, based on the method of static aero-thermo-elasticity and considering the influence of aero-thermo-elastic deformation, the coupled aerodynamic heating and heat transfer results of a hypersonic vehicle wing model were calculated. The results show that, for low-curvature regions, such as the fuselage or the center-section of the wing, structural deformation has little effect on the temperature field. However, for the stagnation region with high curvature, the coupled effect is not negligible. Thus, it is quite important for structural temperature prediction to take into account the effect of elastic deformation. This work has laid a solid foundation for improving the prediction accuracy of the temperature distribution of aircraft structures and the evaluation capacity of structural performance.
Keywords: aerothermoelasticity, elastic deformation, structural temperature, multi-field coupling
Procedia PDF Downloads 342
1130 Genetic Dissection of QTLs in Intraspecific Hybrids Derived from Muskmelon (Cucumis Melo L.) and Mangalore Melon (Cucumis Melo Var Acidulus) for Shelflife and Fruit Quality Traits
Authors: Virupakshi Hiremata, Ratnakar M. Shet, Raghavendra Gunnaiah, Prashantha A.
Abstract:
Muskmelon is a health-beneficial and refreshing dessert vegetable with a low shelf life. Mangalore melon, a genetic homeologue of muskmelon, has a shelf life of more than six months and is mostly used for culinary purposes. Understanding the genetics of shelf life, yield and yield-related traits, and identifying markers linked to such traits, is helpful in transferring the extended shelf life from Mangalore melon to muskmelon through intraspecific hybridization. For QTL mapping, a 276-individual F2 mapping population derived from the cross Arka Siri × SS-17 was genotyped with 40 polymorphic markers distributed across 12 chromosomes. The same population was also phenotyped for yield, shelf life and fruit quality traits. One major QTL (R² > 10) and fourteen minor QTLs (R² < 10), localized on four linkage groups and governing different traits, were mapped in the F2 population with a LOD > 5.5. The phenotypic variance explained by each locus varied from 3.63 to 10.97%. One QTL was linked to shelf life (qSHL-3-1), five QTLs to TSS (qTSS-1-1, qTSS-3-3, qTSS-3-1, qTSS-3-2 and qTSS-1-2), two QTLs to flesh thickness (qFT-3-1 and qFT-3-2), and seven QTLs to fruit yield per vine (qFYV-3-1, qFYV-1-1, qFYV-3-1, qFYV1-1, qFYV-1-3, qFYV2-1 and qFYV6-1). The QTL-flanking markers may be used for marker-assisted introgression of shelf life into muskmelon. Important QTLs will be further fine-mapped to identify candidate genes by QTL-seq and RNA-seq analysis. Fine-mapping of important QTLs holds immense promise in elucidating the genetic basis of complex traits, and leveraging advanced techniques like QTL-seq and RNA sequencing (RNA-seq) is crucial for this endeavor. QTL-seq combines next-generation sequencing with traditional QTL mapping, enabling precise identification of genomic regions associated with traits of interest.
Through high-throughput sequencing, QTL-seq provides a detailed map of genetic variations linked to phenotypic variations, facilitating targeted investigations. Moreover, RNA-seq analysis offers a comprehensive view of gene expression patterns in response to specific traits or conditions. By comparing transcriptomes between contrasting phenotypes, RNA-seq aids in pinpointing candidate genes underlying QTL regions. Integrating QTL-seq with RNA-seq allows for a multi-dimensional approach, coupling genetic variation with gene expression dynamics.
Keywords: QTL, shelf life, TSS, muskmelon and Mangalore melon
Procedia PDF Downloads 58
1129 Applying the Regression Technique for Prediction of the Acute Heart Attack
Authors: Paria Soleimani, Arezoo Neshati
Abstract:
Myocardial infarction is one of the leading causes of death in the world. Some of these deaths occur even before the patient reaches the hospital. Myocardial infarction occurs as a result of impaired blood supply. Because most of these deaths are due to coronary artery disease, awareness of the warning signs of a heart attack is essential. Some heart attacks are sudden and intense, but most of them start slowly, with mild pain or discomfort, so early detection and successful treatment of these symptoms is vital to saving patients. Therefore, the importance and usefulness of a system designed to assist physicians in the early diagnosis of acute heart attacks is obvious. The purpose of this study is to determine how well a predictive model would perform based only on patient-reportable clinical history factors, without using diagnostic tests or physical exams. This type of prediction model might have applications outside of the hospital setting to give accurate advice to patients and influence them to seek care in appropriate situations. For this purpose, data were collected on 711 heart patients in Iranian hospitals. Twenty-eight clinical factors that can be reported by patients were studied. Three logistic regression models were built on the basis of the 28 features to predict the risk of heart attacks. The best logistic regression model in terms of performance had a C-index of 0.955 and an accuracy of 94.9%. The variables severe chest pain, back pain, cold sweats, shortness of breath, nausea, and vomiting were selected as the main features.
Keywords: Coronary heart disease, Acute heart attacks, Prediction, Logistic regression
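A logistic regression model of the kind described can be sketched from scratch with NumPy. The toy data, learning rate, and iteration count below are illustrative assumptions; the paper's 28 clinical features, its 711-patient data set, and its reported C-index of 0.955 are not reproduced here.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Fit logistic regression by batch gradient descent.

    X is an (n, d) feature matrix and y a 0/1 label vector; an
    intercept column is added internally.
    """
    X1 = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))       # predicted probabilities
        w -= lr * X1.T @ (p - y) / len(y)       # gradient of the log-loss
    return w

def predict(X, w):
    """Classify with the fitted weights at a 0.5 probability threshold."""
    X1 = np.hstack([np.ones((len(X), 1)), X])
    return (1.0 / (1.0 + np.exp(-X1 @ w)) >= 0.5).astype(int)
```

In practice the C-index (area under the ROC curve) reported in the paper would be computed from the predicted probabilities rather than the thresholded labels.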
Procedia PDF Downloads 453