Search results for: computational accuracy
1650 Analysis of One-Way and Two-Way FSI Approaches to Characterise the Flow Regime and the Mechanical Behaviour during Closing Manoeuvring Operation of a Butterfly Valve
Authors: M. Ezkurra, J. A. Esnaola, M. Martinez-Agirre, U. Etxeberria, U. Lertxundi, L. Colomo, M. Begiristain, I. Zurutuza
Abstract:
Butterfly valves are widely used industrial piping components that serve as on-off and flow-controlling devices. The main challenge in the design process of this type of valve is correct dimensioning to ensure proper mechanical performance as well as to minimise flow losses that affect the efficiency of the system. Butterfly valves are typically dimensioned in the closed position based on mechanical approaches considering uniform hydrostatic pressure, whereas the flow losses are analysed by means of CFD simulations. The main limitation of these approaches is that they consider neither the influence of the dynamics of the manoeuvring stage nor coupled phenomena. Recent works have included the influence of the flow on the mechanical behaviour for different opening angles by means of a one-way FSI approach. However, these works assume steady-state flow at the selected angles and therefore do not capture the effect of the transient flow evolution during the manoeuvring stage. A two-way FSI modelling approach could overcome such limitations and provide more accurate results. Nevertheless, the use of this technique is limited by its increased computational cost. In the present work, the applicability of one-way and two-way FSI approaches is evaluated for the analysis of butterfly valves, showing that neglecting fluid-structure coupling means failing to capture the most critical situation for the valve disc.
Keywords: butterfly valves, fluid-structure interaction, one-way approach, two-way approach
Procedia PDF Downloads 162
1649 Implicit and Explicit Mechanisms of Emotional Contagion
Authors: Andres Pinilla Palacios, Ricardo Tamayo
Abstract:
Emotional contagion is characterized as an automatic tendency to synchronize behaviors that facilitate emotional convergence among humans. It might thus play a pivotal role in understanding the dynamics of key social interactions. However, little research has investigated its potential mechanisms. We suggest two complementary but independent processes that may underlie emotional contagion. The first is the efficient contagion hypothesis, based on fast and implicit bottom-up processes modulated by familiarity and the spread of activation in the emotional associative networks of memory. The second is the emotional contrast hypothesis, based on slow and explicit top-down processes guided by deliberate appraisal and hypothesis-testing. In order to assess these two hypotheses, an experiment with 39 participants was conducted. In the first phase, participants were induced (between groups) into an emotional state (positive, neutral, or negative) using a standardized video taken from the FilmStim database. In the second phase, participants classified and rated (within subject) the emotional state of 15 faces (5 for each emotional state) taken from the POFA database. In the third phase, all participants were returned to a baseline emotional state using the same neutral video used in the first phase. In a fourth phase, participants classified and rated a new set of 15 faces. The accuracy of the identification and rating of emotions was partially explained by the efficient contagion hypothesis, but the speed with which these judgments were made was partially explained by the emotional contrast hypothesis. However, the results are ambiguous, so a follow-up experiment is proposed in which emotional expressions and activation of the sympathetic system will be measured using EMG and EDA, respectively.
Keywords: electromyography, emotional contagion, emotional valence, identification of emotions, imitation
Procedia PDF Downloads 316
1648 Development of an Intelligent Decision Support System for Smart Viticulture
Authors: C. M. Balaceanu, G. Suciu, C. S. Bosoc, O. Orza, C. Fernandez, Z. Viniczay
Abstract:
The Internet of Things (IoT) represents the best option for smart vineyard applications, even if it is necessary to integrate the technologies required for the development. This article is based on the research and the results obtained in the DISAVIT project. For smart agriculture, the project aims to provide a trustworthy, intelligent, integrated vineyard management solution based on the IoT. To achieve interoperability through the use of multiprotocol technology (the future of connected wireless IoT), it is necessary to adopt an agnostic approach, providing a reliable environment that addresses cyber security and IoT-based threats, ensures traceability through blockchain-based design, and also supports long-term implementations (modular, scalable). These represent the main innovative technical aspects of this project. The DISAVIT project studies and promotes the incorporation of better management tools based on objective, data-driven decisions, which are necessary for an agriculture adapted to and more resistant to climate change. It also exploits the opportunities generated by the digital services market for smart agriculture management stakeholders. The project's final result aims to improve decision-making, performance, and viticultural infrastructure and to increase real-time data accuracy and interoperability. Innovative aspects such as end-to-end solutions, adaptability, scalability, security, and traceability place our product in a favorable position over competitors. No single solution on the market meets all of these requirements, which makes our product innovative.
Keywords: blockchain, IoT, smart agriculture, vineyard
Procedia PDF Downloads 200
1647 Recommendation Systems for Cereal Cultivation Using Advanced Causal Inference Modeling
Authors: Md Yeasin, Ranjit Kumar Paul
Abstract:
In recent years, recommendation systems have become indispensable tools for agricultural systems. Accurate and timely recommendations can significantly impact crop yield and overall productivity. Causal inference modeling aims to establish cause-and-effect relationships by identifying the impact of variables or factors on outcomes, enabling more accurate and reliable recommendations. New advancements in causal inference models have been reported in the literature. With the advent of the modern era, deep learning and machine learning models have emerged as efficient tools for modeling. This study proposes an innovative approach to enhance recommendation systems with a machine learning-based causal inference model. By considering the causal effect and opportunity cost of covariates, the proposed system can provide more reliable and actionable recommendations for cereal farmers. To validate the effectiveness of the proposed approach, experiments were conducted using cereal cultivation data from eastern India. Comparative evaluations were performed against existing correlation-based recommendation systems, demonstrating the superiority of the advanced causal inference modeling approach in terms of recommendation accuracy and impact on crop yield. Overall, it empowers farmers with personalized recommendations tailored to their specific circumstances, leading to optimized decision-making and increased crop productivity.
Keywords: agriculture, causal inference, machine learning, recommendation system
Procedia PDF Downloads 79
1646 Determining Optimal Number of Trees in Random Forests
Authors: Songul Cinaroglu
Abstract:
Background: Random Forest is an efficient, multi-class machine learning method used for classification, regression, and other tasks. The method operates by constructing each tree from a different bootstrap sample of the data. Determining the number of trees in a random forest is an open question in the literature on improving the classification performance of random forests. Aim: The aim of this study is to analyze whether there is an optimal number of trees in a Random Forest and how the performance of Random Forests differs as the number of trees increases, using sample health data sets in R. Method: In this study, we analyzed the performance of Random Forests as the number of trees grows, doubling the number of trees at every iteration using the "randomForest" package in R. To determine the minimum and optimal number of trees, we performed McNemar's test and computed the Area Under the ROC Curve, respectively. Results: The analysis found that as the number of trees grows, the forest does not always perform better than forests with fewer trees. In other words, a larger number of trees only increases computational cost without improving performance. Conclusion: Although the general practice in using random forests is to generate a large number of trees to obtain high performance, this study shows that increasing the number of trees does not always improve performance. Future studies can compare different kinds of data sets and different performance measures to test whether Random Forest performance changes as the number of trees increases.
Keywords: classification methods, decision trees, number of trees, random forest
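The diminishing-returns behaviour reported in this abstract can be reproduced with a toy simulation: a majority vote over independent base learners, each correct with some fixed probability. This is a hedged Python sketch, not the authors' R workflow; the per-tree accuracy of 0.65 and the use of odd ensemble sizes (to avoid voting ties) are assumptions made for illustration.

```python
import random

random.seed(42)

def ensemble_accuracy(n_trees, p_correct=0.65, n_cases=2000):
    """Estimate accuracy of a majority vote over n_trees independent
    base learners, each correct with probability p_correct."""
    hits = 0
    for _ in range(n_cases):
        votes = sum(random.random() < p_correct for _ in range(n_trees))
        if votes > n_trees / 2:
            hits += 1
    return hits / n_cases

# Roughly doubling the ensemble size at each step, as in the study.
for n in [1, 3, 7, 15, 31, 63, 127, 255]:
    print(n, round(ensemble_accuracy(n), 3))
```

Accuracy climbs quickly for small ensembles and then plateaus, so further doubling mostly buys computation, which mirrors the study's conclusion.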
Procedia PDF Downloads 395
1645 Comparative Evaluation of Pharmacologically Guided Approaches (PGA) to Determine Maximum Recommended Starting Dose (MRSD) of Monoclonal Antibodies for First Clinical Trial
Authors: Ibraheem Husain, Abul Kalam Najmi, Karishma Chester
Abstract:
First-in-human (FIH) studies are a critical step in the clinical development of any molecule that has shown therapeutic promise in preclinical evaluations, since the translation of preclinical research and safety studies into clinical development is crucial to the successful development of monoclonal antibodies for the treatment of human diseases and provides guidance to the pharmaceutical industry. Therefore, the USFDA approach was compared with nine pharmacologically guided approaches (PGA) (simple allometry, maximum life span potential, brain weight, rule of exponent (ROE), two-species methods, and one-species methods) to determine the maximum recommended starting dose (MRSD) for first-in-human clinical trials using four drugs, namely Denosumab, Bevacizumab, Anakinra, and Omalizumab. In our study, the predicted pharmacokinetic (PK) parameters and the estimated first-in-human doses of the antibodies were compared with the observed human values. The study indicated that the clearance and volume of distribution of antibodies can be predicted with reasonable accuracy in humans, and a good estimate of the first human dose can be obtained from the predicted human clearance and volume of distribution. A pictorial method evaluation chart was also developed, based on fold errors, for the simultaneous evaluation of the various methods.
Keywords: clinical pharmacology (CPH), clinical research (CRE), clinical trials (CTR), maximum recommended starting dose (MRSD), clearance and volume of distribution
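Simple allometry, the first of the PGA methods listed above, fits clearance as CL = a·W^b across preclinical species on a log-log scale and extrapolates to human body weight. The sketch below illustrates the mechanics only; the species weights and clearances are invented illustrative numbers, not data from this study.

```python
import math

# Illustrative (mouse, rat, monkey) body weights (kg) and clearances (mL/h).
weights = [0.02, 0.25, 3.0]
clearances = [0.9, 6.5, 45.0]

# Fit log(CL) = log(a) + b*log(W) by ordinary least squares.
xs = [math.log(w) for w in weights]
ys = [math.log(c) for c in clearances]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
log_a = ybar - b * xbar

# Extrapolate the fitted power law to a 70 kg human.
cl_human = math.exp(log_a) * 70.0 ** b
print(f"allometric exponent b = {b:.2f}")
print(f"predicted human clearance = {cl_human:.0f} mL/h")
```

A predicted human clearance (together with a predicted volume of distribution) is then what a PGA converts into an MRSD estimate.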
Procedia PDF Downloads 374
1644 Airflow Characteristics and Thermal Comfort of Air Diffusers: A Case Study
Authors: Tolga Arda Eraslan
Abstract:
The quality of the indoor environment is significant to occupants' health, comfort, and productivity, and as Covid-19 spread throughout the world, people started spending most of their time indoors. Since buildings are getting bigger, mechanical ventilation systems are widely used where natural ventilation is insufficient. Four primary tasks of a ventilation system have been identified: indoor air quality, comfort, contamination control, and energy performance. To fulfill such requirements, air diffusers, which are part of the ventilation system, have entered our lives in different airflow distribution systems. Detailed observations are needed to ensure that such devices provide high levels of comfort, effectiveness, and energy efficiency. This study addresses these needs. The objective of this article is to observe the airflow characteristics of different air diffusers at different angles and their effect on people via a thermal comfort model in CFD simulation, and to validate the outputs against measured data from a simulated office room. The office room created for validation was equipped with many thermal sensors, including at head height, tabletop, and foot level. In addition, CFD simulations were carried out using the measured temperature and velocity of the air leaving the supply diffuser. The results, considering the flow interaction between diffusers and surroundings, provided a good visual illustration.
Keywords: computational fluid dynamics, Fanger's model, predicted mean vote, thermal comfort
Procedia PDF Downloads 118
1643 Voltage Stability Margin-Based Approach for Placement of Distributed Generators in Power Systems
Authors: Oludamilare Bode Adewuyi, Yanxia Sun, Isaiah Gbadegesin Adebayo
Abstract:
Voltage stability analysis is crucial to the reliable and economic operation of power systems. The power systems of developing nations are more susceptible to failures due to continuously increasing load demand, which is not matched by generation increases or efficient transmission infrastructure. Thus, most power systems are heavily stressed, and the planning of extra generation from distributed generation sources needs to be done efficiently so as to ensure the security of the power system. Some voltage stability index-based approaches for DG siting have been reported in the literature. However, most of the existing voltage stability indices, though sufficient, are found to be inaccurate, especially for overloaded power systems. In this paper, the performance of a relatively different approach using a line voltage stability margin indicator, which has proven to have better accuracy, is presented and compared with a conventional line voltage stability index for DG siting using the Nigerian 28-bus system. The critical boundary index (CBI) for voltage stability margin estimation was deployed to identify suitable locations for DG placement, and its performance was compared with DG placement using the Novel Line Stability Index (NLSI) approach. From the simulation results, CBI and NLSI agreed closely on suitable locations for DG on the test system; while CBI identified bus 18 as the most suitable at system overload, NLSI identified bus 8 as the most suitable. Considering the effect of DG placement at the selected buses on the voltage magnitude profile, the results show that the DG placed on bus 18, identified by CBI, improved the performance of the power system more.
Keywords: voltage stability analysis, voltage collapse, voltage stability index, distributed generation
Procedia PDF Downloads 93
1642 Evaluation of Vehicle Classification Categories: Florida Case Study
Authors: Ren Moses, Jaqueline Masaki
Abstract:
This paper addresses the need for an accurate and updated vehicle classification system through a thorough evaluation of vehicle class categories to identify errors arising from the existing system and to propose modifications. Data collected from two permanent traffic monitoring sites in Florida were used to evaluate the performance of the existing vehicle classification table. The vehicle data were collected and classified by an automatic vehicle classifier (AVC), and a video camera was used to obtain ground truth data. The Federal Highway Administration (FHWA) vehicle classification definitions were used to define vehicle classes from the video and compare them to the data generated by the AVC in order to identify the sources of misclassification. Six types of errors were identified. Modifications were made to the classification table to improve classification accuracy. The results of this study include the development of an updated vehicle classification table with a reduction in total error of 5.1%, a step-by-step procedure to use for the evaluation of vehicle classification studies, and recommendations to improve the FHWA 13-category rule set. The recommendations for the FHWA 13-category rule set indicate the need for the vehicle classification definitions in this scheme to be updated to reflect the distribution of current traffic. The presented results will be of interest to state transportation departments, consultants, researchers, engineers, designers, and planners who require accurate vehicle classification information for the planning, design, and maintenance of transportation infrastructure.
Keywords: vehicle classification, traffic monitoring, pavement design, highway traffic
Procedia PDF Downloads 180
1641 Preprocessing and Fusion of Multiple Representations of Finger Vein Patterns Using Conventional and Machine Learning Techniques
Authors: Tomas Trainys, Algimantas Venckauskas
Abstract:
The application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition: they acquire biometric data from an individual, extract feature sets, compare the feature set against the set stored in the vault, and give the result of the comparison. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and helps prevent possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, using the Support Vector Machine (SVM) method for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. The extracted feature sets were fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors.
Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method
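Feature-level fusion, as described in this abstract, can be reduced to normalizing each extracted feature set and concatenating them into one vector. This minimal Python sketch shows the idea only and is not the authors' implementation; the feature values are invented.

```python
def min_max_normalize(features):
    """Scale a feature vector to [0, 1] so that no instance dominates."""
    lo, hi = min(features), max(features)
    if hi == lo:
        return [0.0 for _ in features]
    return [(f - lo) / (hi - lo) for f in features]

def fuse_feature_level(*feature_sets):
    """Feature-level fusion: normalize each set, then concatenate."""
    fused = []
    for fs in feature_sets:
        fused.extend(min_max_normalize(fs))
    return fused

# Two illustrative feature vectors from two captures of the same finger.
capture_a = [12.0, 30.5, 7.1, 19.8]
capture_b = [0.11, 0.48, 0.25, 0.36]
print(fuse_feature_level(capture_a, capture_b))
```

The fused vector would then feed key generation or matching; normalization before concatenation keeps instances with larger raw ranges from dominating the comparison.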
Procedia PDF Downloads 150
1640 Investigating the Role of Dystrophin in Neuronal Homeostasis
Authors: Samantha Shallop, Hakinya Karra, Tytus Bernas, Gladys Shaw, Gretchen Neigh, Jeffrey Dupree, Mathula Thangarajh
Abstract:
Abnormal neuronal homeostasis is considered a structural correlate of the cognitive deficits in Duchenne Muscular Dystrophy (DMD). Neurons are highly polarized cells with multiple dendrites but a single axon. Trafficking of cellular organelles is highly regulated, with cargo in the somatodendritic region of the neuron not permitted to enter the axonal compartment. We investigated the molecular mechanisms that regulate organelle trafficking in neurons using a multimodal approach, including high-resolution structured illumination, proteomics, immunohistochemistry, and computational modeling. We investigated the expression of ankyrin-G, the master regulator controlling neuronal polarity. The expression of ankyrin-G and the morphology of the axon initial segment were profoundly abnormal in the CA1 hippocampal neurons of the mdx52 animal model of DMD. Ankyrin-G colocalized with kinesin KIF5A, the anterograde protein transporter, at higher levels in older mdx52 mice than in younger mdx52 mice. These results suggest that functional trafficking from the somatodendritic compartment is abnormal. Our data suggest that dystrophin deficiency compromises neuronal homeostasis via ankyrin-G-based mechanisms.
Keywords: neurons, axonal transport, duchenne muscular dystrophy, organelle transport
Procedia PDF Downloads 95
1639 Enhancing Aerodynamic Performance of Savonius Vertical Axis Turbine Used with Triboelectric Generator
Authors: Bhavesh Dadhich, Fenil Bamnoliya, Akshita Swaminathan
Abstract:
This project aims to design a system that generates energy from the wind flow induced by vehicles moving on the road, or from wind flow in compact areas, turning wasted energy into useful energy. This is envisaged through the design and aerodynamic performance improvement of a Savonius vertical axis wind turbine rotor, used in an integrated system with a triboelectric nanogenerator (TENG) that can generate a good amount of electrical energy. Aerodynamic calculations are performed numerically using computational fluid dynamics software, and the TENG's performance is evaluated analytically. The turbine's coefficient of power is validated against published results for an inlet velocity of 7 m/s at a tip speed ratio of 0.75 and is found to agree reasonably with experimental results. The baseline design is modified with a new blade arc angle and rotor position angle based on the parameter ranges recommended by previous researchers. Simulations have been performed for T.S.R. values ranging from 0.25 to 1.5 at intervals of 0.25, with two applicable free-stream velocities of 5 m/s and 7 m/s. Finally, the CFD performance results of the newly designed VAWT are used as input for the analytical performance prediction of the triboelectric nanogenerator. The results show that this approach could be feasible and useful for small power source applications.
Keywords: savonius turbine, power, overlap ratio, tip speed ratio, TENG
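The two dimensionless quantities the study reports, tip speed ratio and coefficient of power, follow directly from their standard definitions: TSR = ωR/V and Cp = P / (0.5·ρ·A·V³), with swept area A = 2RH for a Savonius rotor and mechanical power P = T·ω. In the sketch below, the rotor dimensions and torque are assumed illustrative values, not the study's geometry.

```python
RHO_AIR = 1.225  # kg/m^3, sea-level air density

def tip_speed_ratio(omega, radius, wind_speed):
    """TSR = omega * R / V (blade tip speed over free-stream speed)."""
    return omega * radius / wind_speed

def power_coefficient(torque, omega, radius, height, wind_speed):
    """Cp = P / (0.5 * rho * A * V^3), with swept area A = 2*R*H
    for a Savonius rotor and mechanical power P = T * omega."""
    power = torque * omega
    area = 2.0 * radius * height
    return power / (0.5 * RHO_AIR * area * wind_speed ** 3)

# Hypothetical rotor evaluated at the study's validation point:
# V = 7 m/s, TSR = 0.75.
radius, height, v = 0.25, 0.5, 7.0
omega = 0.75 * v / radius          # angular speed (rad/s) giving TSR = 0.75
print(f"omega = {omega:.1f} rad/s")
print(f"Cp    = {power_coefficient(0.4, omega, radius, height, v):.3f}")
```

Sweeping TSR from 0.25 to 1.5 as in the study amounts to evaluating Cp at each corresponding omega and locating the peak of the Cp-TSR curve.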
Procedia PDF Downloads 122
1638 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis
Authors: C. B. Le, V. N. Pham
Abstract:
In modern data analysis, multi-source data appears more and more often in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning communities. Different data sources provide information about different aspects of the data; therefore, linking multi-source data is essential to improve clustering performance. However, in practice, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge of multi-source data. Ensembles are a versatile machine learning model in which learning techniques can work in parallel on big data. Clustering ensembles have been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis. The fuzzy optimized multi-objective clustering ensemble method is called FOMOCE. First, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering
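A generic clustering ensemble (not the FOMOCE method itself) can be illustrated with the classic co-association approach: count how often each pair of points shares a cluster across the base clusterings, then merge pairs whose agreement exceeds a threshold. A minimal Python sketch, with invented base clusterings standing in for per-source results:

```python
from itertools import combinations

def co_association(labelings):
    """Fraction of base clusterings that place each pair of points
    in the same cluster (the 'evidence accumulation' matrix)."""
    n = len(labelings[0])
    co = {}
    for i, j in combinations(range(n), 2):
        votes = sum(lab[i] == lab[j] for lab in labelings)
        co[(i, j)] = votes / len(labelings)
    return co

def consensus_clusters(labelings, threshold=0.5):
    """Merge points whose co-association exceeds the threshold
    (connected components of the thresholded similarity graph)."""
    n = len(labelings[0])
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (i, j), w in co_association(labelings).items():
        if w > threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Three base clusterings (e.g., one per data source) of six points.
base = [
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
]
print(consensus_clusters(base))  # → [[0, 1, 2], [3, 4, 5]]
```

The consensus recovers a stable partition even though one base clustering misassigns point 2, which is the robustness property the abstract attributes to clustering ensembles.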
Procedia PDF Downloads 189
1637 Human Gesture Recognition for Real-Time Control of Humanoid Robot
Authors: S. Aswath, Chinmaya Krishna Tilak, Amal Suresh, Ganesh Udupa
Abstract:
There are many technologies for controlling a humanoid robot, but the use of electromyogram (EMG) electrodes has its own importance in setting up the control system. An EMG-based control system helps to control robotic devices with more fidelity and precision. In this paper, the development of an electromyogram-based interface for human gesture recognition for the control of a humanoid robot is presented. To recognize control signs in the gestures, a single-channel EMG sensor is positioned on the muscles of the human body. Instead of using a remote control unit, the humanoid robot is controlled by various gestures performed by the human. The EMG electrodes attached to the muscles generate an analog signal due to the nerve impulses produced in the moving muscles. The analog signals taken from the muscles are supplied to a differential muscle sensor that processes the signal to generate a signal suitable for the microcontroller to gain control over the humanoid robot. The signal from the differential muscle sensor is converted to digital form using the ADC of the microcontroller, which sends its decision to the CM-530 humanoid robot controller through a Zigbee wireless interface. The output decision of the CM-530 processor is sent to a motor driver in order to control the servo motors in the required direction for human-like actions. This method of gaining control of a humanoid robot could be used for performing actions with more accuracy and ease. In addition, a study has been conducted to investigate the controllability and ease of use of the interface and the employed gestures.
Keywords: electromyogram, gesture, muscle sensor, humanoid robot, microcontroller, Zigbee
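The signal chain described above (analog EMG envelope, ADC conversion, threshold-based decision) can be sketched in a few lines. This is a Python stand-in for the microcontroller firmware; the 10-bit ADC range, the 5 V reference, and the gesture thresholds are assumptions for illustration, not the authors' calibrated values.

```python
def adc_read(voltage, v_ref=5.0, bits=10):
    """Quantize an analog sensor voltage as a 10-bit ADC would."""
    code = int(voltage / v_ref * (2 ** bits - 1))
    return max(0, min(2 ** bits - 1, code))

def classify_gesture(adc_value):
    """Map muscle-activity levels to robot commands (thresholds assumed)."""
    if adc_value < 200:
        return "idle"
    if adc_value < 600:
        return "walk"
    return "stop"

# Simulated envelope voltages from the differential muscle sensor.
for v in (0.3, 1.8, 4.2):
    print(v, "->", classify_gesture(adc_read(v)))
```

In the actual system the classified command would be transmitted over Zigbee to the CM-530 controller rather than printed.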
Procedia PDF Downloads 407
1636 Study and Conservation of Cultural and Natural Heritages with the Use of Laser Scanner and Processing System for 3D Modeling Spatial Data
Authors: Julia Desiree Velastegui Caceres, Luis Alejandro Velastegui Caceres, Oswaldo Padilla, Eduardo Kirby, Francisco Guerrero, Theofilos Toulkeridis
Abstract:
It is fundamental to conserve sites of natural and cultural heritage with any available technique or existing methodology of preservation in order to sustain them for the following generations. We propose a further means of preserving the current state of such sites, in which, with high-technology instrumentation, we are able to digitally preserve natural and cultural heritage, applied in Ecuador. In this project, the use of laser technology is presented for three-dimensional models with high accuracy in a relatively short period of time. In Ecuador so far, there are no records on the use and processing of data obtained by this new technological trend. The importance of the project lies in the description of the methodology of the laser scanner system using the Faro Laser Scanner Focus 3D 120, the method for 3D modeling of geospatial data, and the development of virtual environments in the areas of cultural and natural heritage. In order to inform users of this technological trend, in which three-dimensional models are generated, such tools have been developed so that the models can be displayed in all kinds of digital formats. The obtained 3D models demonstrate that this technology is extremely useful in these areas, while also indicating that each data campaign needs a slightly different individual procedure, from data capture and processing through to the final chosen virtual environments.
Keywords: laser scanner system, 3D model, cultural heritage, natural heritage
Procedia PDF Downloads 306
1635 Chemometric Determination of the Geographical Origin of Milk Samples in Malaysia
Authors: Shima Behkami, Nor Shahirul Umirah Idris, Sharifuddin Md. Zain, Kah Hin Low, Mehrdad Gholami, Nima A. Behkami, Ahmad Firdaus Kamaruddin
Abstract:
In this work, inductively coupled plasma mass spectrometry (ICP-MS), isotope ratio mass spectrometry (IRMS), and an ultrasound milko tester were used to study milk samples obtained from various geographical locations in Malaysia. ICP-MS was used to determine the concentration of trace elements in milk, water, and soil samples obtained from seven dairy farms at different geographical locations in peninsular Malaysia. IRMS was used to analyze the milk samples for the isotopic ratios δ¹³C, δ¹⁵N, and δ¹⁸O. Nutritional parameters in the milk samples were determined using the ultrasound milko tester. Data obtained from these measurements were evaluated by principal component analysis (PCA) and hierarchical analysis (HA) as a preliminary step in determining the geographical origin of these milk samples. It was observed that the isotopic ratios and a number of the nutritional parameters are responsible for the discrimination of the samples, and that it is possible to determine the geographical origin of these milk samples solely from the isotopic ratios δ¹³C, δ¹⁵N, and δ¹⁸O. The accuracy of the geographical discrimination is demonstrated by the fact that several milk samples from a milk factory, taken from one of the regions under study, were appropriately assigned to the correct PCA cluster.
Keywords: inductively coupled plasma mass spectrometry ICP-MS, isotope ratio mass spectrometry IRMS, ultrasound, principal component analysis, hierarchical analysis, geographical origin, milk
Procedia PDF Downloads 370
1634 Numerical Investigation of Al₂O₃ Nanoparticle Effect on a Boiling Forced Swirl Flow Field
Authors: Ataollah Rabiee, Amir Hossein Kamalinia, Alireza Atf
Abstract:
One of the most important issues in the design of nuclear fusion power plants is heat removal from the hottest region at the divertor. Various methods can be employed to improve heat transfer efficiency, such as generating turbulent flow and injecting nanoparticles into the host fluid. In the current study, boiling of a water/Al₂O₃ nanofluid in forced swirl flow was investigated using a homogeneous thermophysical model within the Eulerian-Eulerian framework through a twisted-tape tube, and the boiling phenomenon was modeled using the Rensselaer Polytechnic Institute (RPI) approach. In addition to comparing the results with experimental data, with which they show reasonable agreement, it was evidenced that greater flow mixing results in a more uniform bulk temperature and a lower wall temperature along the twisted-tape tube. The presence of Al₂O₃ nanoparticles in the boiling flow field showed that increasing the nanoparticle concentration leads to a reduced vapor volume fraction and wall temperature. The computational fluid dynamics (CFD) results show that the average heat transfer coefficient in the tube increases both with increasing nanoparticle concentration and with the insertion of the twisted tape, which significantly affects the thermal field of the boiling flow.
Keywords: nanoparticle, boiling, CFD, two phase flow, alumina, ITER
Procedia PDF Downloads 125
1633 A Comprehensive Evaluation of Supervised Machine Learning for the Phase Identification Problem
Authors: Brandon Foggo, Nanpeng Yu
Abstract:
Power distribution circuits undergo frequent network topology changes that are often left undocumented. As a result, the documentation of a circuit's connectivity becomes inaccurate with time. The lack of reliable circuit connectivity information is one of the biggest obstacles to modeling, monitoring, and controlling modern distribution systems. To enhance the reliability and efficiency of electric power distribution systems, the circuit's connectivity information must be updated periodically. This paper focuses on one critical component of a distribution circuit's topology - the secondary transformer to phase association. This topology component describes the set of phase lines that feed power to a given secondary transformer (and therefore a given group of power consumers). Documenting this component is called Phase Identification, and it is typically performed with physical measurements. These measurements can take on the order of several months, but with supervised learning, that time can be reduced significantly. This paper compares several such methods applied to Phase Identification on a large range of real distribution circuits, describes a method of training data selection, describes preprocessing steps unique to the Phase Identification problem, and ultimately describes a method that obtains high accuracy (> 96% in most cases, > 92% in the worst case) using only 5% of the measurements typically used for Phase Identification.
Keywords: distribution network, machine learning, network topology, phase identification, smart grid
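A common baseline for Phase Identification is correlation-based: assign each transformer to the phase whose feeder-head voltage time series its own voltage series tracks most closely. The sketch below shows that generic baseline only, not the paper's supervised models, and the voltage series are synthetic.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def identify_phase(meter_series, phase_series):
    """Correlation baseline: pick the phase whose feeder-head voltage
    series best matches the transformer's voltage series."""
    return max(phase_series,
               key=lambda ph: pearson(meter_series, phase_series[ph]))

# Synthetic feeder-head voltage profiles for phases A, B, C.
random.seed(1)
phases = {ph: [240 + random.gauss(0, 2) for _ in range(50)] for ph in "ABC"}
# A transformer on phase B: its voltage follows phase B, offset plus noise.
meter = [v - 3 + random.gauss(0, 0.3) for v in phases["B"]]
print(identify_phase(meter, phases))  # prints: B
```

Supervised learners, as evaluated in the paper, replace the single correlation score with features learned from labeled measurements, which is how they reach high accuracy from far fewer samples.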
Procedia PDF Downloads 299
1632 Verification of Sr-90 Determination in Water and Spruce Needles Samples Using IAEA-TEL-2016-04 ALMERA Proficiency Test Samples
Authors: S. Visetpotjanakit, N. Nakkaew
Abstract:
Determination of 90Sr in environmental samples has been widely developed with several radioanalytical methods and radiation measurement techniques, since 90Sr is one of the most hazardous radionuclides produced in nuclear reactors. A liquid extraction technique using di-(2-ethylhexyl) phosphoric acid (HDEHP) to separate and purify 90Y, combined with Cherenkov counting on a liquid scintillation counter to determine 90Y in secular equilibrium with 90Sr, was developed and performed at our institute, the Office of Atoms for Peace. The approach is inexpensive, non-laborious, and fast for analysing 90Sr in environmental samples. To validate our analytical performance against accuracy and precision criteria, determination of 90Sr using the IAEA-TEL-2016-04 ALMERA proficiency test samples was performed for statistical evaluation. The experiment used two spiked tap water samples and one naturally contaminated spruce needles sample from Austria, collected shortly after the Chernobyl accident. All three analyses passed both the accuracy and precision criteria, obtaining “Accepted” statuses. The two water samples gave measured results of 15.54 Bq/kg and 19.76 Bq/kg, with relative biases of 5.68% and -3.63% against Maximum Acceptable Relative Bias (MARB) values of 15% and 20%, respectively. The spruce needles sample gave a measured result of 21.04 Bq/kg, with a relative bias of 23.78% against a MARB of 30%. These results confirm our analytical performance for 90Sr determination in water and spruce needles samples using the developed method. Keywords: ALMERA proficiency test, Cerenkov counting, determination of 90Sr, environmental samples
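The acceptance criterion above rests on comparing the relative bias to the MARB. A minimal sketch of that check, using the measured values reported in the abstract; the target values are back-calculated from the stated biases, so treat them as illustrative rather than official IAEA reference values:

```python
def relative_bias(measured, target):
    """Relative bias (%) of a measured activity against the target value."""
    return (measured - target) / target * 100.0

# (name, measured Bq/kg, stated bias %, MARB %) -- targets back-calculated below
samples = [
    ("water 1", 15.54, 5.68, 15.0),
    ("water 2", 19.76, -3.63, 20.0),
    ("spruce needles", 21.04, 23.78, 30.0),
]
results = []
for name, measured, stated_bias, marb in samples:
    target = measured / (1.0 + stated_bias / 100.0)  # illustrative target value
    bias = relative_bias(measured, target)
    accepted = abs(bias) <= marb
    results.append((name, bias, accepted))
    print(f"{name}: bias {bias:+.2f}% (MARB {marb:.0f}%) -> "
          f"{'Accepted' if accepted else 'Not accepted'}")
```

All three samples fall within their MARB, matching the "Accepted" statuses reported above.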
Procedia PDF Downloads 232
1631 Physicochemical Characterization of Coastal Aerosols over the Mediterranean Comparison with Weather Research and Forecasting-Chem Simulations
Authors: Stephane Laussac, Jacques Piazzola, Gilles Tedeschi
Abstract:
Estimation of the impact of atmospheric aerosols on climate evolution is an important scientific challenge. One major source of particles is the oceans, through the generation of sea-spray aerosols. In coastal areas, marine aerosols can affect air quality through their ability to interact chemically and physically with other aerosol species and gases. The integration of accurate sea-spray emission terms in modeling studies is therefore required. However, sea-spray concentrations are not represented with the necessary accuracy in some situations, particularly at short fetch. In this study, the WRF-Chem model was implemented over a north-western Mediterranean coastal region. WRF-Chem is the Weather Research and Forecasting (WRF) model online-coupled with chemistry for investigation of regional-scale air quality; it simulates the emission, transport, mixing, and chemical transformation of trace gases and aerosols simultaneously with the meteorology. One objective was to test the ability of the WRF-Chem model to represent the fine details of the coastal geography and provide accurate predictions of sea-spray evolution for different fetches, as well as of the anthropogenic aerosols. To assess the performance of the model, a comparison is proposed between the model predictions, using a local emission inventory, and the physicochemical analysis of aerosol concentrations measured for different wind directions on the island of Porquerolles, located 10 km south of the French Riviera. Keywords: sea-spray aerosols, coastal areas, sea-spray concentrations, short fetch, WRF-Chem model
Procedia PDF Downloads 195
1630 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles
Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi
Abstract:
Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses information on vehicle specifications and operational environmental conditions, such as road slopes and driver profiles, for around 40,000 vehicles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data is used to investigate the accuracy of the machine learning algorithms linear regression (LR), k-nearest neighbor (KNN) and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements. The performance of the algorithms is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data, and 4.2% on operational data. Keywords: artificial neural networks, fuel consumption, Friedman test, machine learning, statistical hypothesis testing
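A minimal sketch of the cross-validated model comparison described above: a simple linear regressor against a constant-mean baseline under k-fold cross-validation. The paper itself uses nested cross-validation over LR, KNN and ANN on real vehicle data; the data-generating model and all values here are invented:

```python
import random

random.seed(1)
# synthetic (vehicle weight [t], fuel consumption [L/100 km]) pairs -- invented
weights = [random.uniform(10, 40) for _ in range(60)]
data = [(w, 25.0 + 0.4 * w + random.gauss(0, 0.5)) for w in weights]

def fit_linear(train):
    # ordinary least squares for y = a + b*x
    n = len(train)
    mx = sum(x for x, _ in train) / n
    my = sum(y for _, y in train) / n
    b = (sum((x - mx) * (y - my) for x, y in train)
         / sum((x - mx) ** 2 for x, _ in train))
    a = my - b * mx
    return lambda x: a + b * x

def fit_mean(train):
    # constant baseline: always predict the training mean
    my = sum(y for _, y in train) / len(train)
    return lambda x: my

def cv_mae(fit, data, k=5):
    # k-fold cross-validated mean absolute error
    folds = [data[i::k] for i in range(k)]
    errors = []
    for i in range(k):
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        model = fit(train)
        errors += [abs(model(x) - y) for x, y in folds[i]]
    return sum(errors) / len(errors)

print(f"linear model MAE : {cv_mae(fit_linear, data):.3f}")
print(f"mean baseline MAE: {cv_mae(fit_mean, data):.3f}")
```

The same harness extends to any candidate model (KNN, ANN); the comparison logic - same folds, same error metric - is what makes the ranking fair.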
Procedia PDF Downloads 178
1629 De Novo Design of a Minimal Catalytic Di-Nickel Peptide Capable of Sustained Hydrogen Evolution
Authors: Saroj Poudel, Joshua Mancini, Douglas Pike, Jennifer Timm, Alexei Tyryshkin, Vikas Nanda, Paul Falkowski
Abstract:
On the early Earth, protein-metal complexes likely harvested energy from a reduced environment. These complexes would have been precursors to the metabolic enzymes of ancient organisms. Hydrogenase is an essential enzyme in most anaerobic organisms for the reduction and oxidation of hydrogen in the environment and is likely one of the earliest evolved enzymes. To reinvent a precursor to modern hydrogenase, we computationally designed a short thirteen-amino-acid peptide that binds nickel, the catalytic transition metal often required by hydrogenase. This simple complex can achieve hundreds of hydrogen evolution cycles using light energy over a broad range of temperatures and pH values. Biophysical and structural investigations strongly indicate the peptide forms a di-nickel active site analogous to that of acetyl-CoA synthase, an ancient protein central to carbon reduction in the Wood-Ljungdahl pathway and capable of hydrogen evolution. This work demonstrates that prior to the complex evolution of multidomain enzymes, early peptide-metal complexes could have catalyzed energy transfer from the environment on the early Earth and enabled the evolution of modern metabolism. Keywords: hydrogenase, prebiotic enzyme, metalloenzyme, computational design
Procedia PDF Downloads 216
1628 Continuous Plug Flow and Discrete Particle Phase Coupling Using Triangular Parcels
Authors: Anders Schou Simonsen, Thomas Condra, Kim Sørensen
Abstract:
Various processes are modelled using a discrete phase, where particles are seeded from a source. Such particles can represent liquid water droplets, which affect the continuous phase by exchanging thermal energy, momentum, species etc. Discrete phases are typically modelled using parcels, each of which represents a collection of particles sharing properties such as temperature, velocity etc. When coupling the phases, the exchange rates are integrated over the cell in which the parcel is located. This can cause spikes and fluctuating exchange rates. This paper presents an alternative method of coupling a discrete and a continuous plug flow phase. This is done using triangular parcels, which span between nodes following the dynamics of single droplets. Thus, the triangular parcels are propagated using the corner nodes. At each time step, the exchange rates are spatially integrated over the surface of the triangular parcels, which yields a smooth continuous exchange rate to the continuous phase. The results show that the method is more stable, converges slightly faster, and yields smooth exchange rates compared with the steam tube approach. However, the computational requirements are about five times greater, so the applicability of the alternative method should be limited to processes where the exchange rates are important. The overall balances of the exchanged properties did not change significantly using the new approach. Keywords: CFD, coupling, discrete phase, parcel
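The key step above - spatially integrating an exchange rate over a triangular parcel - can be illustrated with the vertex-average rule, which is exact for fields that vary linearly over the triangle. The geometry and the field below are invented for illustration, not taken from the paper:

```python
def tri_area(p1, p2, p3):
    # area of a triangle from its corner nodes via the cross-product formula
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

def integrate_linear(field, p1, p2, p3):
    # exact for fields linear in x and y: area times the average of vertex values
    avg = (field(*p1) + field(*p2) + field(*p3)) / 3.0
    return tri_area(p1, p2, p3) * avg

# invented linear exchange-rate field over the parcel surface
rate = lambda x, y: 2.0 + 0.5 * x - 0.25 * y
total = integrate_linear(rate, (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(f"integrated exchange rate over parcel: {total:.4f}")
```

Because the rate is evaluated at the corner nodes that are already being propagated, the integral varies smoothly as the parcel deforms, which is the mechanism behind the smooth coupling described above.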
Procedia PDF Downloads 267
1627 Detection of Important Biological Elements in Drug-Drug Interaction Occurrence
Authors: Reza Ferdousi, Reza Safdari, Yadollah Omidi
Abstract:
Drug-drug interactions (DDIs) are a main cause of adverse drug reactions, and the functional and molecular complexity of drug behavior in the human body makes them hard to prevent and treat. With the aid of new technologies derived from mathematical and computational science, the DDI problem can be addressed with minimal cost and effort. Market basket analysis is known as a powerful method for identifying co-occurrences of items, discovering patterns and the frequency of elements. In this research, we used market basket analysis to identify important bio-elements in DDI occurrence. For this, we collected all known DDIs from DrugBank. The obtained data were analyzed by the market basket analysis method. We investigated all drug-enzyme, drug-carrier, drug-transporter and drug-target associations. To determine the importance of the extracted bio-elements, the extracted rules were evaluated in terms of confidence and support. Market basket analysis of the over 45,000 known DDIs reveals more than 300 important rules that can be used to identify DDIs; the CYP450 family was the most frequently shared bio-element. We applied the extracted rules to over 2,000,000 unknown drug pairs, leading to the discovery of more than 200,000 potential DDIs. Analysis of the underlying reason behind the DDI phenomenon can help to predict and prevent DDI occurrence. Ranking the extracted rules by their strength can be a supportive tool to predict the outcome of an unknown DDI. Keywords: drug-drug interaction, market basket analysis, rule discovery, important bio-elements
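Support and confidence, the two rule-evaluation measures named above, can be sketched on a toy basket set. The bio-element names and baskets below are invented; the real analysis runs over DrugBank associations:

```python
# each "basket" is the set of bio-elements shared by one known DDI drug pair
baskets = [
    {"CYP3A4", "P-gp"}, {"CYP3A4", "CYP2D6"}, {"CYP3A4", "P-gp"},
    {"CYP2C9", "OATP1B1"}, {"CYP3A4", "P-gp", "CYP2D6"}, {"CYP2C9"},
]
n = len(baskets)

def support(itemset):
    # fraction of baskets containing every item in the itemset
    return sum(itemset <= basket for basket in baskets) / n

def confidence(antecedent, consequent):
    # conditional frequency of the consequent given the antecedent
    return support(antecedent | consequent) / support(antecedent)

sup = support({"CYP3A4", "P-gp"})
conf = confidence({"CYP3A4"}, {"P-gp"})
print(f"support({{CYP3A4, P-gp}}) = {sup:.2f}")
print(f"confidence(CYP3A4 -> P-gp) = {conf:.2f}")
```

A rule such as "CYP3A4 -> P-gp" would then be kept or discarded by thresholding exactly these two quantities, as described in the abstract.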
Procedia PDF Downloads 309
1626 Analysis and Experimental Research on the Influence of Lubricating Oil on the Transmission Efficiency of New Energy Vehicle Gearbox
Authors: Chen Yong, Bi Wangyang, Zang Libin, Li Jinkai, Cheng Xiaowei, Liu Jinmin, Yu Miao
Abstract:
New energy vehicle power transmission systems continue to develop in the direction of high torque, high speed, and high efficiency. The cooling and lubrication of the motor and the transmission system are integrated, which places new requirements on the lubricants for the transmission system. The effects of traditional lubricants and special lubricants for new energy vehicles on transmission efficiency were studied through experiments and simulation. A mathematical model of the transmission efficiency of the lubricated gearbox was established. The power loss of each part was analyzed according to the working conditions. The relationship between speed and the characteristics of different lubricating oil products with respect to oil churning power loss was discussed, and the minimum oil film thickness required for gearbox life was considered. The accuracy of the calculation results was verified by the transmission efficiency test conducted on a two-motor integrated test bench. The results show that efficiency first increases and then decreases with increasing speed, and decreases with increasing kinematic viscosity of the lubricant. Higher kinematic viscosity amplifies the transmission power loss caused by high speed. Special lubricants for new energy vehicles show less attenuation of transmission efficiency in the range above mid-speed. The research results provide a theoretical basis and guidance for the evaluation and selection of gearbox lubricants for new energy vehicles in terms of transmission efficiency. Keywords: new energy vehicles, lubricants, transmission efficiency, kinematic viscosity, test and simulation
Procedia PDF Downloads 131
1625 Linear Prediction System in Measuring Glucose Level in Blood
Authors: Intan Maisarah Abd Rahim, Herlina Abdul Rahim, Rashidah Ghazali
Abstract:
Diabetes is a medical condition that can lead to various diseases such as stroke, heart disease, blindness and obesity. In clinical practice, diabetic patients' apprehension towards blood glucose examination is rather alarming, as some individuals describe the pinprick and pinch as painful. For patients with high glucose levels, pricking the fingers multiple times a day with a conventional glucose meter for close monitoring can be tiresome, time-consuming and painful. With these concerns, several non-invasive techniques have been used by researchers to measure the glucose level in blood, including ultrasonic sensor implementation, multisensory systems, absorbance or transmittance, bio-impedance, voltage intensity, and thermography. This paper discusses the application of near-infrared (NIR) spectroscopy as a non-invasive method for measuring the glucose level, and the implementation of a linear system identification model for predicting the output data of the NIR measurement. In this study, the wavelengths considered are 1450 nm and 1950 nm, as both showed the most reliable information on the presence of glucose in blood. The linear Autoregressive Moving Average with Exogenous input (ARMAX) model, with both un-regularized and regularized methods, was then implemented to predict the output of the NIR measurement in order to investigate the practicality of a linear system for this task. However, the system achieved only 50.11% accuracy, which is far from satisfactory. Keywords: diabetes, glucose level, linear, near-infrared, non-invasive, prediction system
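A hedged sketch of the linear-prediction idea: fitting a second-order autoregressive predictor by least squares to a synthetic series. The study fits an ARMAX model with the NIR absorbance as the exogenous input; the series, model order and coefficients here are invented for illustration:

```python
import math
import random

random.seed(2)
# synthetic second-order autoregressive series (NOT real glucose data)
y = [100.0, 101.0]
for _ in range(200):
    y.append(1.5 * y[-1] - 0.55 * y[-2] + 5.0 + random.gauss(0, 0.3))

# fit y[t] = a*y[t-1] + b*y[t-2] + c by least squares (normal equations)
rows = [(y[t - 1], y[t - 2], 1.0, y[t]) for t in range(2, len(y))]
A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
v = [sum(r[i] * r[3] for r in rows) for i in range(3)]

def solve3(A, v):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

a, b, c = solve3(A, v)
preds = [a * y[t - 1] + b * y[t - 2] + c for t in range(2, len(y))]
rmse = math.sqrt(sum((p - y[t]) ** 2
                     for p, t in zip(preds, range(2, len(y)))) / len(preds))
print(f"fitted a={a:.3f}, b={b:.3f}, one-step RMSE={rmse:.3f}")
```

Adding a regressor column for the exogenous NIR signal to `rows` turns this AR fit into the ARX/ARMAX form the study evaluates.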
Procedia PDF Downloads 159
1624 The Inattentional Blindness Paradigm: A Breaking Wave for Attentional Biases in Test Anxiety
Authors: Kritika Kulhari, Aparna Sahu
Abstract:
Test anxiety results from concerns about failure in examinations or evaluative situations. Attentional biases are known to pronounce the symptomatic expression of test anxiety. In recent times, the inattentional blindness (IB) paradigm has shown promise as an attention bias modification treatment (ABMT) for anxiety by overcoming the practice and expectancy effects which preexisting paradigms fail to counter. The IB paradigm assesses the inability of an individual to attend to a stimulus that appears suddenly while the individual is engaged in a perceptual discrimination task. The present study incorporated an IB task with three critical items (book, face, and triangle) appearing randomly in the perceptual discrimination task. Attentional biases were assessed as detection and identification of the critical item. The sample (N = 50) consisted of low test anxiety (LTA) and high test anxiety (HTA) groups based on Reactions to Tests scale scores. Test threat manipulation was done with pre- and post-test assessment of test anxiety using the State Test Anxiety Inventory. A mixed factorial design with gender, test anxiety, presence or absence of test threat, and critical items was used to assess their effects on attentional biases. Results showed only a significant main effect of test anxiety on detection, with higher detection accuracy of the critical item in the LTA group. The study presents promising results in the realm of ABMT for test anxiety. Keywords: attentional bias, attentional bias modification treatment, inattentional blindness, test anxiety
Procedia PDF Downloads 225
1623 Computational Agent-Based Approach for Addressing the Consequences of Releasing Gene Drive Mosquito to Control Malaria
Authors: Imran Hashmi, Sipkaduwa Arachchige Sashika Sureni Wickramasooriya
Abstract:
Gene-drive technology has emerged as a promising tool for disease control by influencing the population dynamics of disease-carrying organisms. Various gene drive mechanisms, derived from laboratory experiments worldwide, aim to strategically manage and prevent the spread of targeted diseases. One prominent strategy involves population replacement, wherein genetically modified mosquitoes are introduced to replace the existing local wild population. To enhance our understanding and aid in the design of effective release strategies, we employ a comprehensive mathematical model. The approach uses agent-based modeling, enabling the consideration of individual mosquito attributes and flexibility in parameter manipulation. Through the integration of an agent-based model and a meta-population spatial approach, the dynamics of gene-drive mosquito spread in a release site are simulated. The model's outcomes offer valuable insights into future population dynamics, providing guidance for the development of informed release strategies. This research contributes to the ongoing discourse on the responsible and effective implementation of gene drive technology for disease vector control. Keywords: gene drive, agent-based modeling, disease-carrying organisms, malaria
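A minimal agent-based sketch of the population-replacement idea: heterozygous carriers transmit the drive allele with super-Mendelian probability, so the allele spreads after a small release. All parameters below (drive efficiency, release fraction, population size) are invented for illustration and are not from the study:

```python
import random

random.seed(3)
P_DRIVE = 0.9   # biased transmission probability of the drive allele (invented)

def gamete(genotype):
    # heterozygotes pass on the drive allele with probability P_DRIVE,
    # instead of the Mendelian 0.5; homozygotes transmit their only allele
    if genotype == "Dw":
        return "D" if random.random() < P_DRIVE else "w"
    return genotype[0]          # "DD" -> "D", "ww" -> "w"

pop = ["ww"] * 180 + ["Dw"] * 20    # release 10% heterozygous carriers

for _ in range(15):                 # non-overlapping generations, random mating
    pop = ["".join(sorted((gamete(random.choice(pop)),
                           gamete(random.choice(pop)))))
           for _ in range(len(pop))]

freq_D = sum(g.count("D") for g in pop) / (2 * len(pop))
print(f"drive-allele frequency after 15 generations: {freq_D:.2f}")
```

Because each mosquito is an explicit agent, per-individual attributes (fitness costs, location in a meta-population, resistance alleles) can be attached to the genotype string, which is the flexibility the abstract refers to.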
Procedia PDF Downloads 65
1622 Determination of Direct Solar Radiation Using Atmospheric Physics Models
Authors: Pattra Pukdeekiat, Siriluk Ruangrungrote
Abstract:
This work set out to determine direct solar radiation precisely using atmospheric physics models, since accurate prediction of solar radiation is necessary and useful for solar energy applications, including atmospheric research. Models and techniques for calculating regional direct solar radiation are essential where instrumental measurements are unavailable. The investigation was mathematically governed by six astronomical parameters, i.e. declination (δ), hour angle (ω), solar time, solar zenith angle (θz), extraterrestrial radiation (Iso) and eccentricity (E0), along with two atmospheric parameters, i.e. air mass (mr) and dew point temperature, at Bangna meteorological station (13.67° N, 100.61° E) in Bangkok, Thailand. Five models of solar radiation determination under the assumption of clear sky were analysed, accompanied by three statistical tests: Mean Bias Difference (MBD), Root Mean Square Difference (RMSD) and coefficient of determination (R²), in order to validate the accuracy of the results. The calculated direct solar radiation was in the range of 491-505 W/m² with a relative percentage error of 8.41% for winter, and 532-540 W/m² with a relative percentage error of 4.89% for summer 2014. Additionally, a dataset of seven continuous days representing both seasons was considered, with MBD, RMSD and R² of -0.08, 0.25, 0.86 and -0.14, 0.35, 3.29, respectively, corresponding to the Kumar model for winter and the CSR model for summer. In summary, the determination of direct solar radiation based on atmospheric models and empirical equations can advantageously provide immediate and reliable values of the solar components for any site in the region without the constraint of actual measurement. Keywords: atmospheric physics models, astronomical parameters, atmospheric parameters, clear sky condition
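Two of the astronomical parameters listed above can be sketched directly: Cooper's approximation for declination and the standard zenith-angle relation. The day number and hour angle below are example values; the latitude is that of the Bangna station:

```python
import math

def declination_deg(day_of_year):
    # Cooper's approximation for solar declination (degrees)
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def zenith_angle_deg(lat_deg, decl_deg, hour_angle_deg):
    # cos(theta_z) = sin(phi)sin(delta) + cos(phi)cos(delta)cos(omega)
    phi, d, w = (math.radians(x) for x in (lat_deg, decl_deg, hour_angle_deg))
    cos_tz = math.sin(phi) * math.sin(d) + math.cos(phi) * math.cos(d) * math.cos(w)
    return math.degrees(math.acos(cos_tz))

delta = declination_deg(172)                    # near the June solstice
theta_z = zenith_angle_deg(13.67, delta, 0.0)   # solar noon at Bangna latitude
print(f"declination ~ {delta:.2f} deg, noon zenith angle ~ {theta_z:.2f} deg")
```

With θz in hand, the clear-sky models compared in the study attenuate the extraterrestrial radiation through the air mass to obtain the direct component at the surface.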
Procedia PDF Downloads 409
1621 Machine Learning Algorithms for Rocket Propulsion
Authors: Rômulo Eustáquio Martins de Souza, Paulo Alexandre Rodrigues de Vasconcelos Figueiredo
Abstract:
In recent years, there has been a surge of interest in applying artificial intelligence techniques, particularly machine learning algorithms. Machine learning is a data-analysis technique that automates the creation of analytical models, making it especially useful for modeling complex situations. As a result, this technology aids in reducing human intervention while producing accurate results. This methodology is also extensively used in aerospace engineering, a field that encompasses several high-complexity operations, such as rocket propulsion. Rocket propulsion is a high-risk operation in which engine failure could result in the loss of life. It is therefore critical to use computational methods capable of precisely representing the spacecraft's analytical model to guarantee its security and operation. Thus, this paper describes the use of machine learning algorithms for rocket propulsion, showing that the technique is an efficient way to deal with challenging and restrictive aerospace engineering activities. The paper focuses on three machine-learning-aided rocket propulsion applications: set-point control of an expander-bleed rocket engine, supersonic retro-propulsion of a small-scale rocket, and leak detection and isolation on rocket engine data. The paper describes the data-driven methods used for each implementation in depth and presents the obtained results. Keywords: data analysis, modeling, machine learning, aerospace, rocket propulsion
Procedia PDF Downloads 115