Search results for: heatmap visualization techniques
5806 Task Validity in Neuroimaging Studies: Perspectives from Applied Linguistics
Authors: L. Freeborn
Abstract:
Recent years have seen an increasing number of neuroimaging studies related to language learning as imaging techniques such as fMRI and EEG have become more widely accessible to researchers. By using a variety of structural and functional neuroimaging techniques, these studies have already made considerable progress in terms of our understanding of neural networks and processing related to first and second language acquisition. However, the methodological designs employed in neuroimaging studies to test language learning have been questioned by applied linguists working within the field of second language acquisition (SLA). One of the major criticisms is that tasks designed to measure language learning gains rarely have a communicative function, and seldom assess learners’ ability to use the language in authentic situations. This brings the validity of many neuroimaging tasks into question. The fundamental reason why people learn a language is to communicate, and it is well-known that both first and second language proficiency are developed through meaningful social interaction. With this in mind, the SLA field is in agreement that second language acquisition and proficiency should be measured through learners’ ability to communicate in authentic real-life situations. Whilst authenticity is not always possible to achieve in a classroom environment, the importance of task authenticity should be reflected in the design of language assessments, teaching materials, and curricula. Tasks that bear little relation to how language is used in real-life situations can be considered to lack construct validity. This paper first describes the typical tasks used in neuroimaging studies to measure language gains and proficiency, then analyses to what extent these tasks can validly assess these constructs.
Keywords: neuroimaging studies, research design, second language acquisition, task validity
Procedia PDF Downloads 141
5805 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite enabled prediction of specific vulnerabilities such as OS-Command Injection, Cryptographic, and Cross-Site Scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS-Command Injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. In contrast, state-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
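The path-context representation the study builds on can be illustrated in a few lines. The sketch below is a toy version, assuming a Python AST rather than the Java/C++ ASTs used in the study, and collapsing the full up-down node path that Code2Vec records into just the root node type; it only shows the shape of the (terminal, path, terminal) triples, not the learned embeddings.

```python
import ast

def leaf_terminals(tree):
    """Collect terminal tokens (names and constants) from a Python AST."""
    leaves = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            leaves.append(node.id)
        elif isinstance(node, ast.Constant):
            leaves.append(repr(node.value))
    return leaves

def path_contexts(source):
    """Toy path-context extraction: pair every two terminals, with the
    root statement type standing in for the connecting AST path."""
    tree = ast.parse(source)
    terms = leaf_terminals(tree)
    root = type(tree.body[0]).__name__
    return [(a, root, b) for i, a in enumerate(terms) for b in terms[i + 1:]]

contexts = path_contexts("x = y + 1")
```

A real pipeline would hash or embed each triple and feed bags of them to the classifier.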
Procedia PDF Downloads 108
5804 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases
Authors: Mohammad A. Bani-Khaled
Abstract:
In this work we use the Discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes of the analysis will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to numerically simulate the dynamics. In using numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space. These numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time-domain Proper Orthogonal Decomposition transform is a powerful tool for processing databases for the dynamics. It will be used to study the coupled dynamics of thin-walled basic structures. These structures are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry.
Keywords: coupled dynamics, geometric complexity, proper orthogonal decomposition (POD), thin-walled beams
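The core of the POD transform described here is a singular value decomposition of a snapshot database: singular values rank the energy carried by each mode, revealing how many modes dominate the coupled response. A minimal sketch, assuming a synthetic two-mode field in place of real finite element output:

```python
import numpy as np

# Toy snapshot database: columns are "time snapshots" of a 1-D field
# built from two coupled spatial modes (illustrative, not FE data).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
modes_true = np.vstack([np.sin(np.pi * x), np.sin(2 * np.pi * x)])
amps = rng.normal(size=(2, 200))          # time coefficients of each mode
snapshots = modes_true.T @ amps           # 50 spatial points x 200 snapshots

# POD = SVD of the mean-removed snapshot matrix; squared singular
# values give the energy fraction captured by each POD mode.
snapshots -= snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
```

Because the synthetic field is rank two, the first two POD modes capture essentially all the energy; on real FE databases the energy decay rate indicates the strength of modal coupling.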
Procedia PDF Downloads 420
5803 Effect of Post Circuit Resistance Exercise Glucose Feeding on Energy and Hormonal Indexes in Plasma and Lymphocyte in Free-Style Wrestlers
Authors: Miesam Golzadeh Gangraj, Younes Parvasi, Mohammad Ghasemi, Ahmad Abdi, Saeid Fazelifar
Abstract:
The purpose of the study was to determine the effect of glucose feeding on energy and hormonal indexes in plasma and lymphocytes immediately after wrestling-based techniques circuit exercise (WBTCE) in young male freestyle wrestlers. Sixteen wrestlers (weight = 75.45 ± 12.92 kg, age = 22.29 ± 0.90 years, BMI = 26.23 ± 2.64 kg/m²) were randomly divided into two groups: control (water) and glucose (2 g per kg body weight). Blood samples were obtained before exercise, immediately after exercise, and at 90 minutes of the post-exercise recovery period. Glucose (2 g/kg of body weight, 1W/5V) and water (equal volumes) solutions were given immediately after the second blood sampling. Data were analyzed using a repeated-measures ANOVA and a suitable post hoc test (LSD). A significant decrease was observed in lymphocyte glycogen immediately after exercise (P < 0.001). In the glucose group, lymphocyte glycogen concentration increased compared with the control group at 90 min post-exercise (P < 0.028). Plasma glucose concentrations increased in all groups immediately after exercise (P < 0.05). Plasma insulin concentrations in both groups decreased immediately after exercise, but at 90 min after exercise, insulin was significantly increased only in the glucose group (P < 0.001). Our results suggested that the WBTCE protocol affected cellular energy sources and the hormonal response. Furthermore, glucose consumption can increase lymphocyte glycogen and improve energy availability within the cell.
Keywords: glucose feeding, lymphocyte, wrestling-based techniques circuit exercise
Procedia PDF Downloads 272
5802 Modeling and Simulation of Ship Structures Using Finite Element Method
Authors: Javid Iqbal, Zhu Shifan
Abstract:
The development in the construction of unconventional ships and the implementation of lightweight materials have given a large impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents the modeling and analysis techniques of ship structures using the FE method for complex boundary conditions which are difficult to analyze by existing Ship Classification Societies' rules. During operation, all ships experience complex loading conditions. These loads fall into the general categories of thermal loads, linear static loads, dynamic loads, and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in obtaining the dynamic stability of the ship. The FE method has developed better techniques for calculating the natural frequencies and different mode shapes of a ship structure to avoid resonance both globally and locally. There has been much development towards the ideal design in the ship industry over the past few years, solving complex engineering problems by employing the data stored in the FE model. This paper provides an overview of ship modeling methodology for FE analysis and its general application. Historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
Keywords: dynamic analysis, finite element methods, ship structure, vibration analysis
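The FE vibration workflow mentioned above, assemble stiffness and mass matrices, then solve a generalized eigenproblem for natural frequencies, can be sketched on the smallest possible case. The example below is a fixed-free steel bar in axial vibration (illustrative material and geometry values, not a ship model); the first FE frequency is checked against the closed-form bar solution.

```python
import numpy as np

# Minimal FE modal analysis: axial vibration of a fixed-free steel bar,
# assembled from 2-node bar elements.
E, rho, A, L, n = 210e9, 7850.0, 1e-4, 1.0, 20   # steel, 20 elements
le = L / n
k_e = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])        # element stiffness
m_e = (rho * A * le / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])  # consistent mass

K = np.zeros((n + 1, n + 1))
M = np.zeros((n + 1, n + 1))
for e in range(n):                        # assemble global matrices
    K[e:e + 2, e:e + 2] += k_e
    M[e:e + 2, e:e + 2] += m_e

# Fix the first node, then solve the generalized eigenproblem K v = w^2 M v.
Kr, Mr = K[1:, 1:], M[1:, 1:]
w2 = np.sort(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real)
f1 = float(np.sqrt(w2[0]) / (2.0 * np.pi))        # first natural frequency, Hz

# Analytical fixed-free bar: f1 = c / (4 L), wave speed c = sqrt(E / rho).
f1_exact = float(np.sqrt(E / rho) / (4.0 * L))
```

The same assembly-then-eigenproblem pattern scales to full ship models, where avoiding resonance means keeping these frequencies away from excitation frequencies.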
Procedia PDF Downloads 137
5801 Modelling Conceptual Quantities Using Support Vector Machines
Authors: Ka C. Lam, Oluwafunmibi S. Idowu
Abstract:
Uncertainty in cost is a major factor affecting performance of construction projects. To our knowledge, several conceptual cost models have been developed with varying degrees of accuracy. Incorporating conceptual quantities into conceptual cost models could improve the accuracy of early predesign cost estimates. Hence, the development of quantity models for estimating conceptual quantities of framed reinforced concrete structures using supervised machine learning is the aim of the current research. Using measured quantities of structural elements and design variables such as live loads and soil bearing pressures, response and predictor variables were defined and used for constructing conceptual quantities models. Twenty-four models were developed for comparison using a combination of non-parametric support vector regression, linear regression, and bootstrap resampling techniques. R programming language was used for data analysis and model implementation. Gross soil bearing pressure and gross floor loading were discovered to have a major influence on the quantities of concrete and reinforcement used for foundations. Building footprint and gross floor loading had a similar influence on beams and slabs. Future research could explore the modelling of other conceptual quantities for walls, finishes, and services using machine learning techniques. Estimation of conceptual quantities would assist construction planners in early resource planning and enable detailed performance evaluation of early cost predictions.
Keywords: bootstrapping, conceptual quantities, modelling, reinforced concrete, support vector regression
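The study combines regression models with bootstrap resampling (in R); the sketch below illustrates only the linear-regression-plus-bootstrap arm of that comparison, in Python on synthetic data. The predictors, coefficients, and noise level are made up for illustration and are not the paper's data.

```python
import numpy as np

# Synthetic stand-in: predict a concrete quantity from two design
# variables (say, gross floor loading and building footprint).
rng = np.random.default_rng(1)
X = rng.uniform(1.0, 10.0, size=(120, 2))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0.0, 0.2, size=120)

def fit_ols(X, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

coef = fit_ols(X, y)

# Bootstrap resampling: refit on resampled rows to gauge the spread of
# the coefficients, the resampling idea the study pairs with regression.
boots = []
for _ in range(200):
    idx = rng.integers(0, len(y), size=len(y))
    boots.append(fit_ols(X[idx], y[idx]))
boots = np.asarray(boots)
lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)
```

Swapping `fit_ols` for a support vector regressor gives the non-parametric arm of the comparison without changing the resampling loop.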
Procedia PDF Downloads 206
5800 A Machine Learning-Based Model to Screen Antituberculosis Compound Targeted against LprG Lipoprotein of Mycobacterium tuberculosis
Authors: Syed Asif Hassan, Syed Atif Hassan
Abstract:
Multidrug-resistant tuberculosis (MDR-TB) is an infection caused by resistant strains of Mycobacterium tuberculosis that do not respond to either isoniazid or rifampicin, the most important anti-TB drugs. The increase in the occurrence of drug-resistant strains of MTB calls for an intensive search for novel target-based therapeutics. In this context, LprG (Rv1411c), a lipoprotein from MTB, plays a pivotal role in the immune evasion of MTB, leading to survival and propagation of the bacterium within the host cell. Therefore, a machine learning method will be developed for generating a computational model that could predict the potential anti-LprG activity of novel antituberculosis compounds. The present study will utilize a dataset from the PubChem database maintained by the National Center for Biotechnology Information (NCBI). The dataset comprises compounds screened against MTB, categorized as active or inactive based upon the PubChem activity score. PowerMV, a molecular descriptor generation and visualization tool, will be used to generate the 2D molecular descriptors for the active and inactive compounds in the dataset. The 2D molecular descriptors generated from PowerMV will be used as features. We feed these features into three different classifiers, namely a random forest, a deep neural network, and a recurrent neural network, to build separate predictive models, choosing the best-performing model based on its accuracy in predicting novel antituberculosis compounds with anti-LprG activity. Additionally, predicted active compounds will be screened using a SMARTS filter to choose molecules with drug-like features.
Keywords: antituberculosis drug, classifier, machine learning, molecular descriptors, prediction
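The pipeline above, descriptors in, active/inactive label out, can be illustrated independently of the specific classifier. The study trains random forests and neural networks; the sketch below substitutes a much simpler nearest-centroid classifier on made-up 2-D "descriptors" purely to show the fit/predict/accuracy flow, and is not the paper's model.

```python
import numpy as np

# Hypothetical 2-D descriptor vectors for "active" (1) and "inactive" (0)
# compounds, drawn from two well-separated clusters.
rng = np.random.default_rng(2)
active = rng.normal([2.0, 2.0], 0.5, size=(50, 2))
inactive = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
X = np.vstack([active, inactive])
y = np.array([1] * 50 + [0] * 50)

def fit_centroids(X, y):
    """Mean descriptor vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each compound to the class with the nearest centroid."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

centroids = fit_centroids(X, y)
acc = (predict(centroids, X) == y).mean()
```

Replacing the centroid model with a random forest changes only the fit and predict calls; the descriptor featurization and accuracy-based model selection stay the same.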
Procedia PDF Downloads 392
5799 Sustainable Production of Tin Oxide Nanoparticles: Exploring Synthesis Techniques, Formation Mechanisms, and Versatile Applications
Authors: Yemane Tadesse Gebreslassie, Henok Gidey Gebretnsae
Abstract:
Nanotechnology has emerged as a highly promising field of research with wide-ranging applications across various scientific disciplines. In recent years, tin oxide has garnered significant attention due to its intriguing properties, particularly when synthesized in the nanoscale range. While numerous physical and chemical methods exist for producing tin oxide nanoparticles, these approaches tend to be costly, energy-intensive, and involve the use of toxic chemicals. Given the growing concerns regarding human health and environmental impact, there has been a shift towards developing cost-effective and environmentally friendly processes for tin oxide nanoparticle synthesis. Green synthesis methods utilizing biological entities such as plant extracts, bacteria, and natural biomolecules have shown promise in successfully producing tin oxide nanoparticles. However, scaling up the production to an industrial level using green synthesis approaches remains challenging due to the complexity of biological substrates, which hinders the elucidation of reaction mechanisms and formation processes. Thus, this review aims to provide an overview of the various sources of biological entities and methodologies employed in the green synthesis of tin oxide nanoparticles, as well as their impact on nanoparticle properties. Furthermore, this research delves into the strides made in comprehending the mechanisms behind the formation of nanoparticles as documented in existing literature. It also sheds light on the array of analytical techniques employed to investigate and elucidate the characteristics of these minuscule particles.
Keywords: nanotechnology, tin oxide, green synthesis, formation mechanisms
Procedia PDF Downloads 55
5798 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation
Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar
Abstract:
The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has given better, more detailed results as compared to experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving that type of problem. Erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multi-phase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and discrete phase model, and evaluates the erosion wear rate with velocity varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the dominant parameter as compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and gravitational effect on the solid particulates, which leads to high erosion at the bottom side of the pipeline.
Keywords: computational fluid dynamics (CFD), erosion, slurry transportation, k-ε model
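The strong velocity dominance reported above is consistent with the power-law scaling commonly assumed in slurry-erosion correlations. The sketch below is that simplification only, not the study's Euler-Lagrange model; the constant `k` and exponent `n` are illustrative values (ductile steels are often quoted with n around 2 to 3), not fitted to the paper's results.

```python
# Common simplification in slurry-erosion studies: erosion rate scales
# as a power law of impact velocity, E = k * V^n.
def erosion_rate(velocity, k=1.0e-9, n=2.6):
    """Erosion rate per unit impacting solids, power-law model (toy values)."""
    return k * velocity**n

# Doubling mixture velocity from 2 to 4 m/s multiplies wear by 2^n,
# about 6x here, mirroring why velocity dominates concentration or size.
ratio = erosion_rate(4.0) / erosion_rate(2.0)
```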
Procedia PDF Downloads 409
5797 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique
Authors: Satyasen Panda, Urmila Bhanja
Abstract:
In this paper, we have presented and analyzed three-dimensional (3-D) matrices of wavelength/time/space codes for optical code division multiple access (OCDMA) networks with a NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurement of the signal to noise ratio (SNR), BER, and eye diagram for different numbers of simultaneous users. In the analysis, various types of noise and multiple access interference (MAI) effects were also considered. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison showed that the NAND detection technique with the 3-D MQC/MP code can accommodate more simultaneous users over longer fiber distances with minimum BER as compared to the OR and AND subtraction techniques. The received optical power is also measured at various levels of BER to analyze the effect of attenuation.
Keywords: cross correlation (CC), three-dimensional optical code division multiple access (3-D OCDMA), spectral amplitude coding optical code division multiple access (SAC-OCDMA), multiple access interference (MAI), phase induced intensity noise (PIIN), three-dimensional modified quadratic congruence/modified prime (3-D MQC/MP) code
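The SNR-to-BER step of such an analysis is typically the Gaussian approximation used in SAC-OCDMA studies, BER = (1/2) erfc(sqrt(SNR/8)). The sketch below shows only that mapping; the SNR values are illustrative placeholders for the MAI- and PIIN-degraded SNRs a full analysis would compute per number of users.

```python
import math

def ber_from_snr(snr):
    """Gaussian-approximation BER commonly used in SAC-OCDMA analyses:
    BER = 0.5 * erfc(sqrt(SNR / 8))."""
    return 0.5 * math.erfc(math.sqrt(snr / 8.0))

# More simultaneous users -> more MAI/PIIN -> lower SNR -> higher BER.
snrs = [100.0, 50.0, 25.0]            # illustrative SNRs as users increase
bers = [ber_from_snr(s) for s in snrs]
```

Comparing detection techniques then amounts to comparing the SNR each one achieves at a given user count before applying this mapping.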
Procedia PDF Downloads 413
5796 Improvement of Sleep Quality Through Manual and Non-Pharmacological Treatment
Authors: Andreas Aceranti, Sergio Romanò, Simonetta Vernocchi, Silvia Arnaboldi, Emilio Mazza
Abstract:
As a result of the SARS-CoV-2 pandemic, the incidence of mood (thymism) disorders has significantly increased and, often, patients are reluctant to take drugs aimed at stabilizing mood. In order to provide an alternative approach to drug therapies, we designed a study to evaluate the possibility of improving the quality of life of these subjects through osteopathic treatment. Patients were divided into visceral and fascial manual treatment groups with the aim of increasing serotonin levels and stimulating the vagus nerve through validated techniques. The results were evaluated through the administration of targeted questionnaires assessing quality of life, mood, sleep, and intestinal functioning. At a first endpoint we found, in patients undergoing fascial treatment, an increase in quality of life and sleep: they reported a decrease in the number of nocturnal awakenings, a reduction in the time taken to fall asleep, and greater rest upon waking. In contrast, patients undergoing visceral treatment, as well as those in the control group, did not show significant improvements. Patients in the fascial group reported an improvement in mood and subjective quality of life with a generalized improvement in function. Although the study is still ongoing, based on the results of the first endpoint we can hypothesize that fascial stimulation of the vagus nerve with manual and osteopathic techniques may be a valid alternative to pharmacological treatments for mood and sleep disorders.
Keywords: osteopathy, insomnia, nocturnal awakening, thymism
Procedia PDF Downloads 90
5795 Considerations upon Structural Health Monitoring of Small to Medium Wind Turbines
Authors: Nicolae Constantin, Ştefan Sorohan
Abstract:
Small and medium wind turbines run in quite different conditions compared to big ones. Consequently, they also need a different approach to structural health monitoring (SHM) issues. There are four main differences between the above-mentioned categories: (i) significantly smaller dimensions, (ii) considerably higher rotation speed, (iii) generally small distance between the turbine and the energy consumer, and (iv) monitoring assumed in many situations by the owner. In such conditions, non-destructive inspections (NDI) have to be made as much as possible with affordable, yet effective techniques, requiring portable and accessible equipment. Additionally, the turbines and accessories should be easy to mount, dismount, and repair. As the materials used for such units can be metals, composites, or a combination, the technologies should be adapted accordingly. An example in which the two materials co-exist is the situation in which the damaged metallic skin of a blade is repaired with a composite patch. The paper presents the inspection of the bonding state of the patch, using portable ultrasonic equipment able to implement the Lamb wave method, which proves efficient in both global and local inspections. The equipment is relatively easy to handle and can be borrowed from specialized laboratories or shared by a community of small wind turbine users, as the case may be. This evaluation is the first in a series aimed at evaluating the efficiency of NDI performed with accessible, less sophisticated equipment and related inspection techniques having field inspection capabilities. The main goal is to extend such inspection procedures to other components of the wind power unit, such as the support tower, water storage tanks, etc.
Keywords: structural health monitoring, small wind turbines, non-destructive inspection, field inspection capabilities
Procedia PDF Downloads 340
5794 Present-Day Transformations and Trends in Rooftop Agriculture and Food Security
Authors: Kiara Lawrence, Nadine Ponnusamy, Clive Greenstone
Abstract:
One of the major challenges facing society today is food security. The risks to food security have increased significantly due to the evolving urban landscape, globalization, and a rising population. The cultivation of food is essential, particularly during times of crisis such as a recession, and has long been a necessity for urban populations. In contemporary society, many urban residents are confronted with new challenges, including high levels of unemployment, which compel individuals to adopt alternative survival strategies, such as growing their own food. Recently, rooftop agriculture has made significant contributions to urban and national food security and has been utilized as a tool to mitigate the frequent and damaging disasters that many cities encounter. Rooftop farms have the potential to transform unused spaces into green, productive vegetable plots, while also providing urban residents with the opportunity to enjoy the benefits of gardening. This study therefore investigates the evolving themes around rooftop agriculture and food security globally. A bibliometric review analysis was carried out on Scopus and Web of Science using the keywords “rooftop agriculture” OR “rooftop farming” OR “rooftop garden” AND “food security” between 2004 and 2024 to ensure a broader scope was covered around the chosen study. VOSviewer software was then utilized to analyze the extracted data and create network visualization maps based on keyword occurrence, co-authorship, and country analyses. There were only 37 relevant documents within the study parameters. Preliminary results indicate that much research focused on urban agriculture, food supply, green roofs, sustainability, and climate change. Analysing these aspects of rooftop agriculture and food security can identify gaps in the literature and guide future applications to assist in food security.
Keywords: food security, rooftop agriculture, rooftop farming, rooftop garden
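The keyword-occurrence and co-occurrence counting underlying VOSviewer's maps can be sketched directly. The records below are made-up author-keyword lists standing in for the 37 retrieved documents; the counting logic is the part being illustrated.

```python
from collections import Counter
from itertools import combinations

# Toy author-keyword lists standing in for retrieved documents.
records = [
    ["rooftop agriculture", "food security", "green roof"],
    ["urban agriculture", "food security", "sustainability"],
    ["rooftop farming", "green roof", "climate change"],
    ["food security", "urban agriculture", "green roof"],
]

# Occurrence count per keyword (node size in a VOSviewer-style map).
occurrences = Counter(kw for rec in records for kw in rec)

# Co-occurrence count per keyword pair (edge weight in the map).
co_occurrences = Counter(
    pair for rec in records for pair in combinations(sorted(set(rec)), 2)
)
```

Clustering these co-occurrence weights is what surfaces theme groups such as "urban agriculture / food supply / green roof" in the actual analysis.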
Procedia PDF Downloads 19
5793 A Literature Review on Emotion Recognition Using Wireless Body Area Network
Authors: Christodoulou Christos, Politis Anastasios
Abstract:
The utilization of Wireless Body Area Network (WBAN) is experiencing a notable surge in popularity as a result of its widespread implementation in the field of smart health. WBANs utilize small sensors implanted within the human body to monitor and record physiological indicators. These sensors transmit the collected data to hospitals and healthcare facilities through designated access points. Bio-sensors exhibit a diverse array of shapes and sizes, and their deployment can be tailored to the condition of the individual. Multiple sensors may be strategically placed within, on, or around the human body to effectively observe, record, and transmit essential physiological indicators. These measurements serve as a basis for subsequent analysis, evaluation, and therapeutic interventions. In conjunction with physical health concerns, numerous smartwatches are engineered to employ artificial intelligence techniques for the purpose of detecting mental health conditions such as depression and anxiety. The utilization of smartwatches serves as a secure and cost-effective solution for monitoring mental health. Physiological signals are widely regarded as a highly dependable method for the recognition of emotions due to the inherent inability of individuals to deliberately influence them over extended periods of time. The techniques that WBANs employ to recognize emotions are thoroughly examined in this article.
Keywords: emotion recognition, wireless body area network, WBAN, ERC, wearable devices, psychological signals, emotion, smart-watch, prediction
Procedia PDF Downloads 52
5792 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique
Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki
Abstract:
Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From the data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which time the liver accumulation dominates (the 0.5-2.5 minute SPECT image minus the 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (the 5-10 minute SPECT image minus the liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, overlap from high liver accumulation made the inferior myocardium un-diagnosable. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector
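The arithmetic of the time-subtraction step can be shown on a toy example. The sketch below is a 1-D synthetic "reconstruction" with made-up positions and intensities, not SPECT data, and adds one assumption to make the subtraction clean: the early liver-dominated frame is rescaled to the late frame's liver level before subtracting.

```python
import numpy as np

# Toy 1-D profiles: myocardium at pixels 10-19, liver at pixels 30-44.
myo = np.zeros(64); myo[10:20] = 5.0
liver = np.zeros(64); liver[30:45] = 20.0

early = liver * 1.0              # early frame: liver dominates
late = myo + liver * 1.05        # late frame: both, liver slightly grown

# Scale the early (liver-only) frame to the late liver level, subtract,
# and clip negatives: a minimal version of the time-subtraction step.
scale = late[30:45].mean() / early[30:45].mean()
liver_only = early * scale
corrected = np.clip(late - liver_only, 0.0, None)
```

In the corrected profile the liver region is suppressed while the "myocardial" signal is preserved, which is the effect the study reports for the inferior wall.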
Procedia PDF Downloads 335
5791 Study of the Hydrochemical Composition of Canal, Collector-Drainage and Ground Waters of Kura-Araz Plain and Modeling by GIS Method
Authors: Gurbanova Lamiya
Abstract:
The Republic of Azerbaijan is considered a region with limited water resources, as up to 70% of its surface water is formed outside the country's borders, and most of its territory lies in an arid (dry) climate zone. Located at the lower reaches of transboundary flows, it has the weakest natural water resources in the South Caucasus. It is essential to correctly assess the quality of natural, collector-drainage, and ground waters of the area and their suitability for irrigation in order to properly carry out land reclamation measures, maintain the normal water-salt regime, and prevent repeated salinization. Through the 141-km-long main Mil-Mugan collector, groundwater, household waste, and floodwaters generated during floods and landslides are discharged into the Caspian Sea. The hydrochemical composition of samples taken from the Sabir irrigation canal passing through the center of the Kura-Araz plain, the main Mil-Mugan collector, and the groundwater of the region, which we chose as our research object, was studied, and the results obtained were compared by period. A model is proposed that allows for a complete visualization of the primary materials collected for the study area, and the established digital model provides full scope for practical use. An extensive database was created with the ArcGIS 10.8 package, using publicly available Landsat satellite images as primary data in addition to ground surveys to build the model. The principles of constructing the geographic information system with modern GIS technology were developed, the boundary and initial conditions of the research area were evaluated, and forecasts and recommendations were given.
Keywords: irrigation channel, groundwater, collector, meliorative measures
Procedia PDF Downloads 73
5790 Testing of Protective Coatings on Automotive Steel, a Correlation Between Salt Spray, Electrochemical Impedance Spectroscopy, and Linear Polarization Resistance Test
Authors: Dhanashree Aole, V. Hariharan, Swati Surushe
Abstract:
Corrosion can cause serious and expensive damage to automobile components. Various proven techniques for controlling and preventing corrosion depend on the specific material to be protected. Electrochemical Impedance Spectroscopy (EIS) and salt spray tests are commonly used to assess the corrosion degradation mechanism of coatings on metallic surfaces, while the only test that monitors the corrosion rate in real time is Linear Polarization Resistance (LPR). In this study, electrochemical tests (EIS and LPR) and the salt spray test are reviewed to assess the corrosion resistance and durability of different coatings. The main objective of this study is to correlate the test results obtained using LPR and EIS with the results obtained using the standard salt spray test. Another objective of this work is to evaluate the performance of various coating systems (CED, epoxy, powder coating, autophoretic, and Zn-trivalent coating) for vehicle underbody application. The corrosion resistance of each coating is assessed. From this study, a promising correlation between the different corrosion testing techniques is noted. The most profound observation is that the electrochemical tests give a quick estimation of corrosion resistance and can detect the degradation of coatings well before visible signs of damage appear. Furthermore, the corrosion resistance and salt spray life of the coatings investigated were found to follow the order: CED > powder coating > autophoretic > epoxy coating > Zn-trivalent plating.
Keywords: linear polarization resistance (LPR), electrochemical impedance spectroscopy (EIS), salt spray test, sacrificial and barrier coatings
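The step that turns an LPR measurement into a corrosion rate is the Stern-Geary relation, i_corr = B / Rp with B = ba*bc / (2.303*(ba + bc)). The sketch below uses typical textbook Tafel slopes and round polarization-resistance values, not measurements from this study, to show why a higher-Rp coating implies a proportionally lower corrosion current.

```python
# Stern-Geary relation used to interpret LPR measurements.
def icorr_from_rp(rp_ohm_cm2, ba=0.12, bc=0.12):
    """Corrosion current density (A/cm^2) from polarization resistance.
    ba, bc: anodic/cathodic Tafel slopes in V/decade (typical values)."""
    b = (ba * bc) / (2.303 * (ba + bc))   # Stern-Geary constant, V
    return b / rp_ohm_cm2

# A coating that raises Rp by 1000x cuts the corrosion rate by 1000x.
i_bare = icorr_from_rp(1.0e3)     # poorly protected surface
i_coated = icorr_from_rp(1.0e6)   # well-performing barrier coating
```

This inverse proportionality is what makes LPR suitable for ranking coatings in real time, complementing the slower salt spray exposure.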
Procedia PDF Downloads 527
5789 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry
Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich
Abstract:
The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. To assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, relying only on the interaction of light with the oil. Through this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. These interferograms were obtained by means of a Mach-Zehnder interferometer using a 10 mW HeNe laser beam at 632.8 nm. Each interferogram was captured and analyzed, and its full width at half maximum (FWHM) was measured using the Amcap and ImageJ software. The FWHM values were organized into three groups. The average of the FWHMs of group A shows behavior that is almost linear; therefore, it is probable that the exposure time is not relevant when the oil is kept at constant temperature. Group B exhibits a slightly exponential trend when the temperature rises between 373 K and 393 K. A Student's t-test indicates, at the 95% confidence level (0.05), the existence of variation in the molecular composition of both samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
Keywords: food industry, interferometric, oils, quality control
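The FWHM measurement at the heart of the analysis above is a simple operation on an intensity profile. A minimal sketch on a synthetic Gaussian fringe profile (the profile shape and grid are assumptions, standing in for an intensity cut through an interferogram):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile:
    distance between the first and last samples at or above half the peak."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    return x[above[-1]] - x[above[0]]

# Synthetic fringe intensity profile (assumed Gaussian, sigma = 1.0).
x = np.linspace(-5, 5, 1001)
sigma = 1.0
y = np.exp(-x**2 / (2 * sigma**2))
width = fwhm(x, y)
# For a Gaussian the analytic FWHM is 2*sqrt(2*ln 2)*sigma, about 2.355*sigma.
```

Tools like ImageJ apply the same idea to a pixel-row intensity profile extracted from the captured interferogram.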
Procedia PDF Downloads 373
5788 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The task for the first dataset is protein regression; for the second, it is variety classification. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and these results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested, including centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
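Of the chemometric preprocessing steps listed above, standard normal variate (SNV) is the most self-contained: each spectrum is centered and scaled by its own mean and standard deviation, suppressing multiplicative scatter effects. A minimal sketch on synthetic spectra (the random data is a stand-in for real NIR measurements):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row)
    individually, so every spectrum ends up with mean 0 and std 1."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Synthetic stand-in for NIR spectra: 4 samples x 200 wavelength channels.
rng = np.random.default_rng(0)
raw = rng.normal(loc=5.0, scale=2.0, size=(4, 200))
corrected = snv(raw)
```

Centering and scaling across samples (rather than within each spectrum) and SG filtering follow the same array-oriented pattern.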
Procedia PDF Downloads 100
5787 Integrated Free Space Optical Communication and Optical Sensor Network System with Artificial Intelligence Techniques
Authors: Yibeltal Chanie Manie, Zebider Asire Munyelet
Abstract:
5G and 6G technologies offer enhanced quality of service with high data transmission rates, which necessitates the implementation of the Internet of Things (IoT) in the 5G/6G architecture. In this paper, we propose the integration of free space optical communication (FSO) with fiber sensor networks for IoT applications. FSO is gaining popularity as an effective alternative to the limited availability of radio frequency (RF) spectrum, owing to its flexibility, high achievable optical bandwidth, and low power consumption in several communication applications, such as disaster recovery, last-mile connectivity, drones, surveillance, backhaul, and satellite communications. Hence, high-speed FSO, offering 100 Gbit/s or more in IoT applications, is an optimal choice for wireless networks to realize the full potential of 5G/6G technology. Moreover, machine learning must be integrated into the design, planning, and optimization of future optical wireless communication networks in order to actualize this vision of intelligent processing and operation. In addition, fiber sensors are important for achieving real-time, accurate, and smart monitoring in IoT applications. We therefore propose deep learning techniques to estimate the strain changes and peak wavelengths of multiple fiber Bragg grating (FBG) sensors using only the FBG spectra obtained from a real experiment.
Keywords: optical sensor, artificial intelligence, Internet of Things, free-space optics
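For context on the FBG peak-wavelength task mentioned above: the classical, non-learned baseline estimates a grating's Bragg wavelength directly from its reflection spectrum, for example by parabolic interpolation around the strongest sample. The sketch below uses a synthetic single-peak spectrum with an assumed center wavelength; the paper's deep learning approach is aimed at the harder case of multiple, possibly overlapping FBG spectra, which this baseline does not handle.

```python
import numpy as np

def fbg_peak_wavelength(wl, refl):
    """Refine the peak location with a 3-point parabolic fit around the
    strongest sample; returns the sub-sample-accurate wavelength."""
    i = int(np.argmax(refl))
    y0, y1, y2 = refl[i - 1], refl[i], refl[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # sub-sample offset
    return wl[i] + delta * (wl[1] - wl[0])

# Synthetic Gaussian-shaped FBG reflection peak centered at 1550.12 nm
# (assumed value), sampled on a 5 pm wavelength grid.
wl = np.linspace(1549.0, 1551.0, 401)
refl = np.exp(-((wl - 1550.12) / 0.1) ** 2)
peak = fbg_peak_wavelength(wl, refl)
```

A strain change then follows from the shift of this peak relative to the unstrained Bragg wavelength.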
Procedia PDF Downloads 64
5786 Identification of High-Rise Buildings Using Object Based Classification and Shadow Extraction Techniques
Authors: Subham Kharel, Sudha Ravindranath, A. Vidya, B. Chandrasekaran, K. Ganesha Raj, T. Shesadri
Abstract:
Digitization of urban features is a tedious and time-consuming process when done manually. In addition, Indian cities have complex habitat patterns and convoluted clustering patterns, which make it even more difficult to map features. This paper attempts to classify urban objects in satellite images using object-oriented classification techniques, in which various classes such as vegetation, water bodies, buildings, and shadows adjacent to the buildings were mapped semi-automatically. The building layer obtained from object-oriented classification was used along with already available building layers. The main focus, however, lay in the extraction of high-rise buildings using spatial technology, digital image processing, and modeling, which would otherwise be a very difficult task to carry out manually. Results indicated a considerable rise in the total number of buildings in the city. High-rise buildings were successfully mapped using satellite imagery and spatial technology, along with logical reasoning and mathematical considerations. The results clearly depict the ability of remote sensing and GIS to solve complex problems in urban scenarios, such as studying urban sprawl and identifying more complex features in an urban area, like high-rise buildings and multi-dwelling units. The object-oriented technique proved effective, yielding an overall accuracy of 80 percent in the classification of high-rise buildings.
Keywords: object oriented classification, shadow extraction, high-rise buildings, satellite imagery, spatial technology
Procedia PDF Downloads 156
5785 Creating Database and Building 3D Geological Models: A Case Study on Bac Ai Pumped Storage Hydropower Project
Authors: Nguyen Chi Quang, Nguyen Duong Tri Nguyen
Abstract:
This article is a first step toward researching and outlining the structure of the geotechnical database in the geological survey of a power project; in this report, the database has been created for the Bac Ai pumped storage hydropower project. To provide a method of organizing and storing geological and topographic survey data and experimental results in a spatial database, the RockWorks software is used, bringing optimal efficiency to the process of exploiting, using, and analyzing data in service of design work in power engineering consulting. Three-dimensional (3D) geotechnical models are created from the survey data, covering stratigraphy, lithology, porosity, etc. For the Bac Ai pumped storage hydropower project, the resulting 3D geotechnical model comprises six closely stacked stratigraphic formations built by the Horizons method, whereas the engineering geological parameters are modeled by geostatistical methods. Accuracy and reliability are assessed through error statistics, empirical evaluation, and expert methods. Analysis of the three-dimensional model allows better visualization of volumetric calculations, excavation and backfilling of the lake area, tunneling of power pipelines, and calculation of on-site construction material reserves. In general, the application of engineering geological modeling makes the design work more intuitive and comprehensive, helping construction designers better identify and offer the most optimal design solutions for the project. The database ensures continuous updating and synchronization, and enables 3D modeling of geological and topographic data to be integrated with the design data according to building information modeling. This is also the base platform for BIM & GIS integration.
Keywords: database, engineering geology, 3D model, RockWorks, Bac Ai pumped storage hydropower project
Procedia PDF Downloads 169
5784 System Identification of Timber Masonry Walls Using Shaking Table Test
Authors: Timir Baran Roy, Luis Guerreiro, Ashutosh Bagchi
Abstract:
Dynamic study is important for the design, repair, and rehabilitation of structures. It has played an important role in characterizing the behavior of structures such as bridges, dams, and high-rise buildings. There has been substantial development in this area over the last few decades, especially in the field of dynamic identification techniques for structural systems. Frequency Domain Decomposition (FDD) and time domain decomposition are the most commonly used methods to identify modal parameters such as natural frequency, modal damping, and mode shape. The focus of the present research is to study the dynamic characteristics of typical timber masonry walls commonly used in Portugal. For that purpose, multi-storey structural prototypes of such walls have been tested on a seismic shake table at the National Laboratory for Civil Engineering, Portugal (LNEC). Signal processing has been performed on the output response of the prototype, collected with accelerometers during the shaking table experiment. In the present work, signal processing of the output response, based on the input response, has been done in two ways: FDD and Stochastic Subspace Identification (SSI). To estimate the values of the modal parameters, algorithms for FDD are formulated, and parametric functions for SSI are computed. Finally, the estimated values from both methods are compared to measure the accuracy of the two techniques.
Keywords: frequency domain decomposition (FDD), modal parameters, signal processing, stochastic subspace identification (SSI), time domain decomposition
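The core of FDD is to form the cross-spectral density matrix of the measured responses and take its singular value decomposition at each frequency line; peaks of the first singular value locate the natural frequencies. A minimal two-channel sketch on synthetic accelerometer-like signals (the 5 Hz mode and noise levels are assumed, standing in for shake-table records):

```python
import numpy as np
from scipy import signal

# Synthetic two-channel response dominated by a 5 Hz mode (assumed data).
fs = 200.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
mode = np.sin(2 * np.pi * 5.0 * t)
y = np.vstack([mode + 0.3 * rng.standard_normal(t.size),
               0.8 * mode + 0.3 * rng.standard_normal(t.size)])

# Cross-spectral density matrix G(f) via Welch's method, one entry per
# channel pair, then an SVD at every frequency line.
n = y.shape[0]
freqs, _ = signal.csd(y[0], y[0], fs=fs, nperseg=1024)
G = np.empty((freqs.size, n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        _, G[:, i, j] = signal.csd(y[i], y[j], fs=fs, nperseg=1024)
s1 = np.linalg.svd(G, compute_uv=False)[:, 0]  # first singular value per line
f_peak = freqs[np.argmax(s1)]                  # peak-picked natural frequency
```

In the full method, the singular vector at the peak gives the corresponding mode shape; SSI instead fits a state-space model to the time-domain data.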
Procedia PDF Downloads 266
5783 Developing Oral Communication Competence in a Second Language: The Communicative Approach
Authors: Ikechi Gilbert
Abstract:
Oral communication is the transmission of ideas or messages through the speech process. Acquiring competence in this area, which by its volatile nature is prone to errors and inaccuracies, requires the adoption of a well-suited teaching methodology. Efficient oral communication facilitates the exchange of ideas and the easy accomplishment of day-to-day tasks by means of a demonstrated mastery of oral expression and the making of fine presentations to audiences or individuals, while recognizing the verbal signals and body language of others and interpreting them correctly. In Anglophone states such as Nigeria and Ghana, the French language, for instance, is studied as a foreign language, taught mainly to learners whose mother tongue differs from French. The same applies to Francophone states, where English is studied as a foreign language by people whose official language or mother tongue differs from English. The ideal approach would be to teach these languages in these environments through a pedagogical approach that properly takes care of the oral perspective, for effective understanding and application by the learners. In this article, we examine the communicative approach as a methodology for teaching oral communication in a foreign language. This method is a direct response to the communicative needs of the learner, involving the use of appropriate materials and teaching techniques that meet those needs. It is also a vivid improvement on the traditional grammatical and audio-visual adaptations. Our contribution focuses on the pedagogical component of oral communication improvement, highlighting its merits and proposing diverse techniques, including aspects of information and communication technology, that would assist the second-language learner to communicate better orally.
Keywords: communication, competence, methodology, pedagogical component
Procedia PDF Downloads 266
5782 Parameter Identification Analysis in the Design of Rock Fill Dams
Authors: G. Shahzadi, A. Soulaimani
Abstract:
This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis has been utilized for numerical simulation. Polynomial and neural network-based response surfaces have been generated to analyze the relationship between soil parameters and displacements. The performance of the surrogate models has been analyzed and compared by evaluating the root mean square error. A comparative study has been done based on objective functions and optimization techniques. The objective functions are categorized by considering measured data with and without instrument uncertainty, and are defined by the least squares method, which estimates the norm between the predicted displacements and the measured values. Hydro-Quebec provided data sets of measured values for the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, is used to obtain an optimum value. Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE) are compared on the minimization problem; although all these techniques take time to converge to an optimum value, PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis can be effectively used for rockfill dam applications and has the potential to become a valuable tool for geotechnical engineers in assessing dam performance and dam safety.
Keywords: rockfill dam, parameter identification, stochastic analysis, regression, Plaxis
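The inverse-analysis loop described above, minimizing a least-squares misfit between measured and predicted displacements with PSO, can be sketched in a few lines. The toy forward model, parameter values, and swarm constants below are assumptions standing in for the Plaxis simulation and the study's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model standing in for the numerical simulation:
# displacement as a function of two "soil parameters" (assumed form).
def model(p, x):
    return p[0] * x + p[1] * x**2

x_obs = np.linspace(0, 1, 20)
true_p = np.array([1.5, -0.4])                       # assumed true parameters
d_meas = model(true_p, x_obs) + 0.01 * rng.standard_normal(x_obs.size)

def objective(p):
    """Least-squares misfit between measured and predicted displacements."""
    return np.sum((model(p, x_obs) - d_meas) ** 2)

# Minimal particle swarm: velocities pulled toward each particle's best
# position (pbest) and the swarm-wide best (gbest).
n_particles, n_iter = 30, 200
pos = rng.uniform(-3, 3, (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([objective(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([objective(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
```

Because each objective evaluation is a full forward simulation in practice, this loop is exactly where the response-surface surrogates pay off.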
Procedia PDF Downloads 146
5781 Interpretation and Prediction of Geotechnical Soil Parameters Using Ensemble Machine Learning
Authors: Goudjil kamel, Boukhatem Ghania, Jlailia Djihene
Abstract:
This paper delves into the development of a sophisticated desktop application designed to calculate soil bearing capacity and predict limit pressure. Drawing from an extensive review of existing methodologies, the study meticulously examines the various approaches employed in soil bearing capacity calculations, elucidating their theoretical foundations and practical applications. Furthermore, the study explores the burgeoning intersection of artificial intelligence (AI) and geotechnical engineering, underscoring the transformative potential of AI-driven solutions in enhancing predictive accuracy and efficiency. Central to the research is the utilization of cutting-edge machine learning techniques, including Artificial Neural Networks (ANN), XGBoost, and Random Forest, for predictive modeling. Through comprehensive experimentation and rigorous analysis, the efficacy and performance of each method are evaluated, with XGBoost emerging as the preeminent algorithm, showcasing superior predictive capabilities compared to its counterparts. The study culminates in a nuanced understanding of the intricate dynamics at play in geotechnical analysis, offering valuable insights into optimizing soil bearing capacity calculations and limit pressure predictions. By harnessing advanced computational techniques and AI-driven algorithms, the paper presents a paradigm shift in the realm of geotechnical engineering, promising enhanced precision and reliability in civil engineering projects.
Keywords: limit pressure of soil, XGBoost, random forest, bearing capacity
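The model comparison described above follows a standard pattern: train each ensemble regressor on the same split and compare a held-out error metric. A minimal sketch on synthetic data is below; the features, target function, and use of scikit-learn's GradientBoostingRegressor as a stand-in for XGBoost (which may not be installed) are all assumptions, not the study's dataset or exact models:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Synthetic stand-in for a geotechnical table: 5 features -> limit pressure.
X = rng.uniform(0, 1, (500, 5))
y = (2 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.5 * X[:, 2] * X[:, 3]
     + 0.05 * rng.standard_normal(500))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scores = {}
for name, mdl in [("gbm", GradientBoostingRegressor(random_state=0)),
                  ("rf", RandomForestRegressor(random_state=0))]:
    mdl.fit(X_tr, y_tr)                                   # train on same split
    scores[name] = mean_squared_error(y_te, mdl.predict(X_te))
```

With a real XGBoost install, `xgboost.XGBRegressor` would slot into the same loop unchanged, which is what makes this comparison pattern convenient.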
Procedia PDF Downloads 23
5780 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations
Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra
Abstract:
The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include maximum voluntary isometric contraction (MVIC), dynamic EMG peak (EMGPeak), or dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they can misrepresent the absolute magnitude of force generated by the muscle and thus affect the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS); the bioelectric methods used were normalization to the mean and the peak of the signal during the walking task (EMGMean and EMGPeak). The effect of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability was compared between disparate cohorts of OLD (76.6 yrs, N=11) and YOUNG (26.6 yrs, N=11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb.
EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson's correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r=0.86 to r=1.00 with p<0.05, indicating that each method can characterize the muscle activation pattern during walking. Repeated-measures ANOVA showed a main effect for age in MG for EMGPeak, but no other main effects were observed. Age*phase interactions in EMG amplitude between YOUNG and OLD led to different statistical interpretations depending on the method: EMGTS normalization characterized the fewest differences (four phases across all five muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, the representation of inter-individual variability, was greatest for EMGTS and lowest for EMGMean, while EMGPeak was slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization retains inter-individual variability, which may be desirable; however, it also suggests that even when large differences are expected, a larger sample size may be required to observe the differences. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions.
Keywords: electromyography, EMG normalization, functional EMG, older adults
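The two bioelectric normalizations compared above (EMGPeak and EMGMean) amount to dividing a phase-averaged envelope by its own peak or mean. A minimal sketch with an assumed 16-bin gait envelope (illustrative values, not the study's data):

```python
import numpy as np

def normalize_emg(envelope, method="peak"):
    """Bioelectric EMG normalization: scale a rectified/smoothed envelope
    by its own dynamic peak (EMGPeak) or mean (EMGMean) over the task."""
    if method == "peak":
        return envelope / envelope.max()
    if method == "mean":
        return envelope / envelope.mean()
    raise ValueError(f"unknown method: {method}")

# Synthetic phase-averaged gait envelope, 16 bins (assumed values).
env = np.array([0.2, 0.5, 0.9, 1.4, 1.1, 0.8, 0.6, 0.4,
                0.3, 0.2, 0.1, 0.1, 0.2, 0.3, 0.4, 0.3])
by_peak = normalize_emg(env, "peak")   # peak of curve becomes 1.0
by_mean = normalize_emg(env, "mean")   # mean of curve becomes 1.0
```

The biomechanical EMGTS method differs in that the divisor comes from a separate torque-matched reference contraction, so between-subject amplitude differences are not divided away.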
Procedia PDF Downloads 93
5779 Critical Approach to Define the Architectural Structure of a Health Prototype in a Rural Area of Brazil
Authors: Domenico Chizzoniti, Monica Moscatelli, Letizia Cattani, Luca Preis
Abstract:
A primary healthcare facility in developing countries should be a multifunctional space able to respond to different requirements: flexibility, modularity, aggregation, and reversibility. These basic features can be better satisfied if applied to an architectural artifact that complies with the typological, figurative, and constructive aspects of the context in which it is located. Therefore, the purpose of this paper is to identify a procedure that can define, through a critical approach, the figurative aspects of the architectural structure of a health prototype for the marginal areas of developing countries. The application context is the rural areas of the northeast of Bahia, Brazil. The prototype should be located in the rural district of Quingoma, in the municipality of Lauro de Freitas, a particular place where there is still a cultural fusion of black and indigenous populations. Based on the historical analysis of settlement strategies and architectural structures in spaces of public interest or collective use, this paper aims to provide a procedure able to identify the categories and rules underlying typological and figurative aspects, in order to detect significant and generalizable elements, as well as materials and constructive techniques typically adopted in the rural areas of Brazil. The object of this work is therefore not only the recovery of certain constructive approaches but also the development of a procedure that integrates the requirements of the primary healthcare prototype with the surrounding economic, social, cultural, settlement, and figurative conditions.
Keywords: architectural typology, developing countries, local construction techniques, primary health care
Procedia PDF Downloads 325
5778 Modeling of Large Elasto-Plastic Deformations by the Coupled FE-EFGM
Authors: Azher Jameel, Ghulam Ashraf Harmain
Abstract:
In recent years, enriched techniques such as the extended finite element method (XFEM), the element-free Galerkin method (EFGM), and the coupled finite element and element-free Galerkin method have found wide application in modeling different types of discontinuities produced by cracks, contact surfaces, and bi-material interfaces. The extended finite element method faces severe mesh distortion issues when modeling large deformation problems. The element-free Galerkin method does not have mesh distortion issues, but it is computationally more demanding than the finite element method. The coupled FE-EFGM proves to be an efficient numerical tool for modeling large deformation problems, as it exploits the advantages of both FEM and EFGM. The present paper employs the coupled FE-EFGM to model large elastoplastic deformations in bi-material engineering components. The large deformation occurring in the domain has been modeled using the total Lagrangian approach. The non-linear elastoplastic behavior of the material is represented by the Ramberg-Osgood model. Elastic predictor-plastic corrector algorithms are used for the evaluation of stresses during large deformation. Finally, several numerical problems are solved by the coupled FE-EFGM to illustrate its applicability, efficiency, and accuracy in modeling large elastoplastic deformations in bi-material samples. The results obtained by the proposed technique are compared with those obtained by XFEM and EFGM, and a remarkable agreement was observed between the three techniques.
Keywords: XFEM, EFGM, coupled FE-EFGM, level sets, large deformation
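The Ramberg-Osgood model mentioned above expresses total strain as an elastic term plus a power-law plastic term. A minimal sketch of one common form of the relation follows; the material constants (E, reference stress, alpha, hardening exponent n) are assumed illustrative values, not those used in the paper:

```python
def ramberg_osgood_strain(sigma, E=200e9, sigma_0=250e6, alpha=0.01, n=5):
    """Total strain under one common Ramberg-Osgood form:
        eps = sigma/E + alpha * (sigma/E) * (sigma/sigma_0)**(n-1)
    The first term is linear elastic; the second is the power-law
    plastic contribution that dominates beyond the reference stress."""
    elastic = sigma / E
    plastic = alpha * (sigma / E) * (sigma / sigma_0) ** (n - 1)
    return elastic + plastic

# Well below sigma_0 the response is essentially elastic; at sigma_0 the
# plastic term contributes the fraction alpha of the elastic strain.
eps_low = ramberg_osgood_strain(1e6)
eps_ref = ramberg_osgood_strain(250e6)
```

In the elastic predictor-plastic corrector scheme, a trial elastic stress is first computed and then returned to the hardening curve implied by this relation.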
Procedia PDF Downloads 448
5777 Biomass Energy: "The Boon for the World"
Authors: Shubham Giri Goswami, Yogesh Tiwari
Abstract:
In today’s developing world, India and other countries are creating different instruments and accessories for a better, happier, and more prosperous standard of life. Human beings have come to rely on a range of energy sources, and scientists and researchers have developed both renewable and non-renewable ones: fossil fuels such as coal, gas, and petroleum products on the non-renewable side, and solar and wind energy on the renewable side. The non-renewable sources create pollution in the air, water, and elsewhere, and their unrestrained use by humans makes the future uncertain. To minimize these environmental effects and preserve a healthy environment, renewable energy sources offer a solution, in the form of biomass, solar, wind, and other energies. Among these, biomass energy offers techniques that make it a good energy source for people. Domestic waste, including the dung extracted daily from cows and many other domestic products, can be used naturally as an eco-friendly fertilizer. A cow can yield roughly 8-12 kg of dung per day, which can be used to make vermicompost fertilizer. Furthermore, cow urine can be used as an insecticide; the use of such compounds destroys insect pests and thus helps to reduce communicable diseases. Biomass energy can therefore be used by every person, and it can reach rural areas where non-renewable energy sources cannot easily reach. Biomass can be used to produce fertilizers, cow-dung (biogas) plants, and other power generation techniques; this energy is clean, pollution-free, and available everywhere, thus helping to save our beautiful blue, life-giving planet, Earth.
We believe that biomass energy may be a boon for the world in the future.
Keywords: biomass, energy, environment, human, pollution, renewable, solar energy, sources, wind
Procedia PDF Downloads 526