Search results for: computational accuracy
2101 A Study of the Trade-off Energy Consumption-Performance-Schedulability for DVFS Multicore Systems
Authors: Jalil Boudjadar
Abstract:
Dynamic Voltage and Frequency Scaling (DVFS) multicore platforms are promising execution platforms that enable high computational performance, lower energy consumption and flexibility in scheduling the system processes. However, the resulting interleaving and memory interference, together with per-core frequency tuning, make real-time guarantees hard to deliver. Besides, energy consumption represents a strong constraint for the deployment of such systems in energy-limited settings. Identifying the system configurations that achieve high performance and consume less energy while guaranteeing system schedulability is a complex task in the design of modern embedded systems. This work studies the trade-off between energy consumption, core utilization and memory bottleneck, and their impact on the schedulability of DVFS multicore time-critical systems with a hierarchy of shared memories. We build a model-based framework using the Parametrized Timed Automata of UPPAAL to analyze the mutual impact of performance, energy consumption and schedulability of DVFS multicore systems, and demonstrate the trade-off on an actual case study.
Keywords: time-critical systems, multicore systems, schedulability analysis, energy consumption, performance analysis
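The tension the abstract describes can be illustrated with a deliberately simplified toy model (our own sketch, not the paper's UPPAAL framework): a single-core EDF utilization test in which worst-case execution times stretch as frequency drops, and dynamic power grows roughly with the cube of frequency. All task numbers and the power model are hypothetical.

```python
# Toy illustration (not the paper's UPPAAL model) of the energy/schedulability
# trade-off: lowering frequency cuts energy (P ~ f^3) but inflates utilization.

def utilization(tasks, freq):
    # tasks: list of (wcet_at_fmax, period); WCET scales as 1/freq (fmax = 1.0)
    return sum(wcet / freq / period for wcet, period in tasks)

def schedulable_edf(tasks, freq):
    # Classic single-core EDF bound: total utilization must not exceed 1.
    return utilization(tasks, freq) <= 1.0

def relative_energy(tasks, freq, horizon=100.0):
    # Energy ~ power * busy time; busy time grows as frequency drops.
    busy = utilization(tasks, freq) * horizon
    return (freq ** 3) * busy

tasks = [(1.0, 5.0), (2.0, 10.0)]   # hypothetical (WCET at f=1.0, period) pairs
for f in (1.0, 0.6, 0.4):
    print(f, schedulable_edf(tasks, f), round(relative_energy(tasks, f), 3))
```

Running the loop shows the trade-off directly: every frequency down to 0.4 keeps the task set schedulable, while the energy proxy falls sharply.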
Procedia PDF Downloads 107
2100 Hydrogen Storage Optimisation: Development of Advanced Tools for Improved Permeability Modelling in Materials
Authors: Sirine Sayed, Mahrez Ait Mohammed, Mourad Nachtane, Abdelwahed Barkaoui, Khalid Bouziane, Mostapha Tarfaoui
Abstract:
This study addresses a critical challenge in transitioning to a hydrogen-based economy by introducing and validating a one-dimensional (1D) tool for modelling hydrogen permeability through hybrid materials, focusing on tank applications. The model developed integrates rigorous experimental validation, published data, and advanced computational modelling using the PanDiffusion framework, significantly enhancing its validity and applicability. By elucidating complex interactions between material properties, storage system configurations, and operational parameters, the tool demonstrates its capability to optimize design and operational parameters in real-world scenarios, as illustrated through a case study of hydrogen leakage. This comprehensive approach to assessing hydrogen permeability contributes significantly to overcoming key barriers in hydrogen infrastructure development, potentially accelerating the widespread adoption of hydrogen technology across various industrial sectors and marking a crucial step towards a more sustainable energy future.
Keywords: hydrogen storage, composite tank, permeability modelling, PanDiffusion, energy carrier, transportation technology
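A minimal version of 1D permeation modelling, of the kind such a tool builds on, is Fickian diffusion through a wall solved with explicit finite differences. The sketch below is illustrative only (it is not the PanDiffusion framework), and the diffusivity, wall thickness, and boundary concentrations are assumed values.

```python
# Minimal 1D Fickian permeation sketch (illustrative, not PanDiffusion):
# explicit finite differences for dc/dt = D * d2c/dx2 across a tank wall,
# with fixed concentrations on the inner and outer surfaces.

def permeation_profile(D, L, c_in, c_out, n=50, steps=8000):
    dx = L / (n - 1)
    dt = 0.4 * dx * dx / D             # stable for the explicit scheme
    c = [c_out] * n
    c[0] = c_in                        # inner surface held at feed concentration
    for _ in range(steps):
        new = c[:]
        for i in range(1, n - 1):
            new[i] = c[i] + D * dt * (c[i-1] - 2*c[i] + c[i+1]) / (dx * dx)
        new[-1] = c_out                # outer surface held at ambient
        c = new
    return c

def steady_flux(c, D, L):
    # Fick's first law at the inner wall; at steady state J ~ D*(c_in - c_out)/L
    dx = L / (len(c) - 1)
    return -D * (c[1] - c[0]) / dx

profile = permeation_profile(D=1e-10, L=1e-3, c_in=1.0, c_out=0.0)
```

At steady state the profile becomes linear and the leakage flux approaches `D*(c_in - c_out)/L`, which is the sanity check any 1D permeability tool should reproduce.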
Procedia PDF Downloads 15
2099 Analytical Model to Predict the Shear Capacity of Reinforced Concrete Beams Externally Strengthened with CFRP Composites
Authors: Rajai Al-Rousan
Abstract:
This paper presents a proposed analytical model for predicting the shear strength of reinforced concrete beams strengthened with CFRP composites as external reinforcement. The proposed analytical model can predict the shear contribution of the CFRP composites of RC beams with an acceptable coefficient of correlation with the tested results. In a comparison of the proposed model with well-known published models (the ACI model, Triantafillou model, and Colotti model), the ACI model had a wide range of 0.16 to 10.08 for the ratio between tested and predicted ultimate shears at failure, while the Triantafillou model gave an acceptable range of 0.27 to 2.78 for the same ratio. The best prediction (the ratio between tested and predicted ultimate shear capacity) is observed with the Colotti model, with a range of 0.20 to 1.78. Thus, the contribution of the CFRP composites as external reinforcement can be predicted with high accuracy by using the proposed analytical model.
Keywords: predicting, shear capacity, reinforced concrete, beams, strengthened, externally, CFRP composites
Procedia PDF Downloads 229
2098 Video Text Information Detection and Localization in Lecture Videos Using Moments
Authors: Belkacem Soundes, Guezouli Larbi
Abstract:
This paper presents a robust and accurate method for text detection and localization in lecture videos. Frame regions are classified into text or background based on visual feature analysis. However, lecture videos show significant degradation, mainly related to acquisition conditions, camera motion and environmental changes, resulting in low-quality videos and affecting the efficiency of feature extraction and description. Moreover, traditional text detection methods cannot be directly applied to lecture videos. Therefore, robust feature extraction methods dedicated to this specific video genre are required for robust and accurate text detection and extraction. The method consists of a three-step process: slide region detection and segmentation; feature extraction; and non-text filtering. For robust and effective feature extraction, moment functions are used. Two distinct types of moments are used: orthogonal and non-orthogonal. For the orthogonal type, both Zernike and pseudo-Zernike moments are used, whereas Hu moments are used for the non-orthogonal type. Expressivity and description efficiency are given and discussed. The proposed approach shows that, in general, orthogonal moments achieve higher accuracy than non-orthogonal ones, and pseudo-Zernike moments are more effective than Zernike moments, with better computation time.
Keywords: text detection, text localization, lecture videos, pseudo-Zernike moments
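The non-orthogonal (Hu) moment features mentioned above are straightforward to compute from raw image moments. The pure-Python sketch below is illustrative, taking a small binary image as a list of rows and returning the first two Hu invariants; a real pipeline would use an image library on full video frames.

```python
# Sketch of Hu moment features: raw moments -> central moments (translation
# invariant) -> normalized moments (scale invariant) -> Hu invariants.

def raw_moment(img, p, q):
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def hu_moments(img):
    m00 = raw_moment(img, 0, 0)
    cx, cy = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

    def mu(p, q):   # central moments: invariant to translation
        return sum(((x - cx) ** p) * ((y - cy) ** q) * v
                   for y, row in enumerate(img) for x, v in enumerate(row))

    def eta(p, q):  # normalized central moments: also invariant to scale
        return mu(p, q) / (m00 ** (1 + (p + q) / 2))

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

Translation invariance is what makes such moments useful as region descriptors: a shape and its shifted copy yield the same feature vector.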
Procedia PDF Downloads 152
2097 Cardiokey: A Binary and Multi-Class Machine Learning Approach to Identify Individuals Using Electrocardiographic Signals on Wearable Devices
Authors: S. Chami, J. Chauvin, T. Demarest, Stan Ng, M. Straus, W. Jahner
Abstract:
Biometric tools such as fingerprint and iris are widely used in industry to protect critical assets. However, their vulnerability and lack of robustness raise several concerns about the protection of highly critical assets. Biometrics based on electrocardiographic (ECG) signals is a robust identification tool. However, most state-of-the-art techniques have worked on clinical signals, which are of high quality and less noisy than signals extracted from wearable devices such as a smartwatch. In this paper, we present a complete machine learning pipeline that identifies people using ECG extracted from an off-person device, i.e., a wearable device that is not used in a medical context, such as a smartwatch. In addition, one of the main challenges of ECG biometrics is the variability of the ECG across different persons and different situations. To address this issue, we propose two different approaches: a per-person classifier and a one-for-all classifier. The first approach builds a binary classifier to distinguish one person from all others. The second approach builds a multi-class classifier that distinguishes a selected set of individuals from non-selected individuals (others). In preliminary results, the binary classifier obtained 90% accuracy on balanced data, and the multi-class approach reported a log loss of 0.05.
Keywords: biometrics, electrocardiographic, machine learning, signal processing
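The two formulations can be sketched side by side. The toy code below uses a nearest-centroid rule as a stand-in for the actual ML models in the paper, and all feature vectors and the rejection radius are invented for illustration.

```python
# Illustrative contrast (not the paper's pipeline): a per-person binary
# classifier ("is this person X?") versus one multi-class classifier over an
# enrolled set with an "other" rejection class.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def binary_is_person(sample, person_vecs, others_vecs):
    # Per-person binary classifier: person X vs. everyone else.
    return dist(sample, centroid(person_vecs)) < dist(sample, centroid(others_vecs))

def one_for_all(sample, enrolled, reject_radius):
    # Multi-class: nearest enrolled identity, or "other" if too far from all.
    best = min(enrolled, key=lambda name: dist(sample, enrolled[name]))
    return best if dist(sample, enrolled[best]) <= reject_radius else "other"
```

The design difference matters operationally: the binary form needs one model per enrolled person, while the one-for-all form is a single model but must explicitly handle unseen individuals via the rejection class.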
Procedia PDF Downloads 142
2096 Human Digital Twin for Personal Conversation Automation Using Supervised Machine Learning Approaches
Authors: Aya Salama
Abstract:
Digital Twin is an emerging research topic that has attracted researchers in the last decade. It is used in many fields, such as smart manufacturing and smart healthcare, because it saves time and money. It is usually related to other technologies such as Data Mining, Artificial Intelligence, and Machine Learning. However, the Human Digital Twin (HDT), specifically, is still a novel idea whose feasibility remains to be proven. HDT expands the idea of the Digital Twin to human beings, who are living beings, different from inanimate physical entities. The goal of this research was to create a Human Digital Twin responsible for automating real-time human replies by simulating human behavior. To this end, clustering, supervised classification, topic extraction, and sentiment analysis were studied in this paper. The feasibility of the HDT for generating personal replies on social messaging applications was demonstrated in this work. The overall accuracy of the proposed approach was 63%, a very promising result that can open the way for researchers to expand the idea of HDT. This was achieved by using Random Forest for clustering the question database and matching new questions, while k-nearest neighbor was applied for sentiment analysis.
Keywords: human digital twin, sentiment analysis, topic extraction, supervised machine learning, unsupervised machine learning, classification, clustering
Procedia PDF Downloads 87
2095 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization
Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu
Abstract:
This paper investigates a method of indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching techniques or other geometric algorithms for sensor pose estimation inevitably results in cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize a feature-matching-based camera pose algorithm, which improves the accuracy of camera pose estimation. A special processing step is applied to image data frames that conform to the Manhattan world assumption. When similar data frames appear subsequently, they can be used to eliminate drift in sensor pose estimation, thereby reducing cumulative estimation errors and improving mapping and positioning. Experimental verification shows that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of those environments. This provides effective technical support for applications such as indoor navigation and robot control.
Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loop closure detection
Procedia PDF Downloads 61
2094 A UHPLC (Ultra High Performance Liquid Chromatography) Method for the Simultaneous Determination of Norfloxacin, Metronidazole, and Tinidazole Using a Monolithic Column: Stability-Indicating Application
Authors: Asmaa Mandour, Ramzia El-Bagary, Asmaa El-Zaher, Ehab Elkady
Abstract:
Background: A UHPLC (ultra-high-performance liquid chromatography) method for the simultaneous determination of norfloxacin (NOR), metronidazole (MET) and tinidazole (TNZ) using a monolithic column is presented. Purpose: The method is considered environmentally friendly, with a relatively low organic composition of the mobile phase. Methods: The chromatographic separation was performed using a Phenomenex® Onyx Monolithic C18 (50 mm × 2.0 mm) column with an isocratic mobile phase of 0.5% aqueous phosphoric acid : methanol (85:15, v/v). Elution of all drugs was completed within 3.5 min with a 1 µL injection volume. The UHPLC method was applied for the stability indication of NOR in the presence of its acid degradation product ND. Results: Retention times were 0.69, 1.19 and 3.23 min for MET, TNZ and NOR, respectively, while the ND retention time was 1.06 min. Linearity, accuracy, and precision were acceptable over the concentration range of 5–50 µg mL⁻¹ for all drugs. Conclusions: The method is simple, sensitive and suitable for routine quality control and dosage form assay of the three drugs, and can also be used for the stability indication of NOR in the presence of its acid degradation product.
Keywords: antibacterial, monolithic column, simultaneous determination, UHPLC
Procedia PDF Downloads 253
2093 A Hybrid Expert System for Generating Stock Trading Signals
Authors: Hosein Hamisheh Bahar, Mohammad Hossein Fazel Zarandi, Akbar Esfahanipour
Abstract:
In this paper, a hybrid expert system is developed using fuzzy genetic network programming with reinforcement learning (GNP-RL). In this system, the frame-based structure uses the trading rules extracted by GNP. These rules are extracted using technical indices of the stock prices in the training period. In developing this system, we applied fuzzy node transition and decision making in both the processing and judgment nodes of GNP-RL. Consequently, using these methods not only increased the accuracy of node transition and decision making in GNP's nodes, but also extended GNP's binary signals to ternary trading signals. In other words, in our proposed fuzzy GNP-RL model, a No Trade signal is added to the conventional Buy and Sell signals. Finally, the obtained rules are used in a frame-based system implemented in the Kappa-PC software. The developed trading system has been used to generate trading signals for ten companies listed on the Tehran Stock Exchange (TSE). The simulation results in the testing period show that the developed system performs more favorably than the buy-and-hold strategy.
Keywords: fuzzy genetic network programming, hybrid expert system, technical trading signal, Tehran Stock Exchange
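The extension from binary to ternary signals can be sketched with a toy rule: a no-trade band around a technical index, plus a piecewise-linear fuzzy membership for how strongly the index supports buying. The index and all thresholds below are hypothetical stand-ins, not the GNP-extracted rules of the paper.

```python
# Toy ternary signal (illustrative): a "No Trade" zone sits between the Sell
# and Buy thresholds of some technical index, instead of forcing Buy/Sell.

def ternary_signal(index_value, buy_above=60.0, sell_below=40.0):
    if index_value >= buy_above:
        return "Buy"
    if index_value <= sell_below:
        return "Sell"
    return "No Trade"          # ambiguous zone: stay out of the market

def buy_degree(index_value, sell_below=40.0, buy_above=60.0):
    # Piecewise-linear fuzzy membership for "buy": 0 at/below the sell
    # threshold, 1 at/above the buy threshold, linear in between.
    if index_value <= sell_below:
        return 0.0
    if index_value >= buy_above:
        return 1.0
    return (index_value - sell_below) / (buy_above - sell_below)
```

The fuzzy degree is what a GNP-RL-style system would propagate through its nodes instead of a hard yes/no at each threshold.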
Procedia PDF Downloads 332
2092 Resource Creation Using Natural Language Processing Techniques for Malay Translated Qur'an
Authors: Nor Diana Ahmad, Eric Atwell, Brandon Bennett
Abstract:
Text processing techniques for English have been developed over several decades, but for the Malay language, text processing methods are still far behind. Moreover, limited resources and tools for computational linguistic analysis are available for the Malay language. Therefore, this research presents the use of natural language processing (NLP) in processing the Malay translated Qur’an text. As a result, a new language resource for the Malay translated Qur’an was created. This resource will help other researchers build the necessary processing tools for the Malay language. This research also develops a simple question-answer prototype, written in Python, to demonstrate the use of the Malay Qur’an resource for text processing. The prototype pre-processes the Malay Qur’an and an input query using a stemming algorithm and then searches for occurrences of the query word stems. The results show improved matching likelihood between a user query and its answer. A POS-tagging algorithm has also been produced. The stemming and tagging algorithms can be used as tools for research on other Malay texts and can support applications such as information retrieval, question answering systems, ontology-based search and other text analysis tasks.
Keywords: language resource, Malay translated Qur'an, natural language processing (NLP), text processing
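The prototype's flow of "stem, then match stems" can be sketched as below. The suffix list here is a deliberately tiny, hypothetical stand-in and not a real Malay stemming algorithm; it only shows how stemming improves query-to-verse matching.

```python
# Illustrative stem-then-match sketch (toy suffix list, NOT a real Malay
# stemmer): strip a known suffix, then rank verses by query-stem overlap.

SUFFIXES = ("kan", "an", "i", "nya")   # hypothetical, highly simplified

def stem(word):
    word = word.lower()
    for suf in SUFFIXES:
        # Strip the suffix only if a plausible root (>= 3 chars) remains.
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

def search(query, verses):
    # Score each verse by how many of its word stems appear in the query stems.
    q_stems = {stem(w) for w in query.split()}
    scored = [(sum(1 for w in v.split() if stem(w) in q_stems), v)
              for v in verses]
    return [v for score, v in sorted(scored, reverse=True) if score > 0]
```

With this scheme an inflected query word can still hit a verse containing only the root form, which is the "improved matching likelihood" the abstract reports.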
Procedia PDF Downloads 318
2091 A Novel Way to Create Qudit Quantum Error Correction Codes
Authors: Arun Moorthy
Abstract:
Quantum computing promises algorithmic speedups for a number of tasks; however, as in classical computing, effective error-correcting codes are needed. Current quantum computers require costly equipment to control each particle, so having fewer particles to control is ideal. Although traditional quantum computers are built using qubits (2-level systems), qudits (systems with more than 2 levels) are appealing since they can provide an equivalent computational space using fewer particles, meaning fewer particles need to be controlled. Currently, qudit quantum error-correction codes are available for systems of different levels; however, these codes sometimes have overly specific constraints. When building a qudit system, it is important for researchers to have access to many codes to satisfy their requirements. This project addresses two methods to increase the number of quantum error-correcting codes available to researchers. The first method is generating new codes for a given set of parameters. The second method is generating new error-correction codes by using existing codes as a starting point to generate codes for another level (e.g., a 5-level system code from a 2-level system code). To this end, the project builds a website that researchers can use to generate new error-correction codes or codes based on existing ones.
Keywords: qudit, error correction, quantum, qubit
Procedia PDF Downloads 160
2090 The Artificial Intelligence (AI) Impact on Project Management: A Destructive or Transformative Agent
Authors: Kwame Amoah
Abstract:
Artificial intelligence (AI) has the prospect of transforming project management, significantly improving efficiency and accuracy. By automating specific tasks with defined guidelines, AI can assist project managers in making better decisions and allocating resources efficiently, with possible risk mitigation. This study explores how AI is already impacting project management and AI's likely future impact on the field. Reaction to AI has been divided: some picture it as a destroyer of jobs, while others welcome it as an advocate of innovation. Both sides agree that AI will be disruptive and will revolutionize PM functions. If current research is to go by, AI in some form will replace one-third of all graduate PM jobs by as early as 2030, and a recent survey indicates AI spending will reach $97.9 billion by the end of 2023. Considering such a profound impact, the project management profession will also see a paradigm shift driven by AI. The study examines what the project management profession will look like in the next 5-10 years after this technological disruption. The research methods incorporate existing literature, trend analysis, and structured interviews with project management stakeholders from North America. PM professionals can harness the power of AI, ensuring a smooth transition and positive outcomes. AI adoption should maximize benefits, minimize adverse consequences, and uphold ethical standards, leading to improved project performance.
Keywords: project management, disruptive technologies, project management functions, AI applications, artificial intelligence
Procedia PDF Downloads 83
2089 Effect of Testing Device Calibration on Liquid Limit Assessment
Authors: M. O. Bayram, H. B. Gencdal, N. O. Fercan, B. Basbug
Abstract:
The liquid limit, which is used as a measure of soil strength, can be determined by the Casagrande and fall-cone testing methods. The two methods diverge from each other mainly in terms of operator dependency. The Casagrande method, applied according to the ASTM D4318-17 standard, may give misleading results, especially if the calibration process is not performed well. To reveal the effect of calibration of the drop height and of the amount of soil paste placed in the Casagrande cup, a series of tests was carried out by the multipoint method as specified in the ASTM standard. The tests include combinations of 6 mm, 8 mm, 10 mm, and 12 mm drop heights with under-filled, half-filled, and full-filled Casagrande cups, using kaolinite samples. It was observed that during successive tests the drop height of the cup deteriorated; hence the device was recalibrated before and after each test to ensure the accuracy of the results. Besides, tests with under-filled and full-filled cups at higher drop heights yielded lower liquid limit values than those at lower drop heights. For the half-filled samples, the liquid limit values did not change at all as the drop height increased, which confirms the function of the standard specifications.
Keywords: calibration, Casagrande cup method, drop height, kaolinite, liquid limit, placing form
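In the multipoint Casagrande method, the liquid limit is read off a flow curve: water content plotted against the logarithm of the blow count, evaluated at 25 blows. A minimal least-squares version of that calculation is sketched below; the data points in the test are invented for illustration, not the paper's measurements.

```python
# Multipoint flow-curve sketch: fit water content w = a + b*log10(N) to the
# (blow count, water content) pairs and report the value at N = 25 blows.
import math

def liquid_limit(blows, water_contents):
    xs = [math.log10(n) for n in blows]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(water_contents) / n
    # Ordinary least-squares slope and intercept.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, water_contents)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a + b * math.log10(25)
```

Because each calibration scenario in the study produces its own set of (blow count, water content) points, this is exactly the computation through which a mis-set drop height propagates into a shifted liquid limit.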
Procedia PDF Downloads 160
2088 Comparison of Irradiance Decomposition and Energy Production Methods in a Solar Photovoltaic System
Authors: Tisciane Perpetuo e Oliveira, Dante Inga Narvaez, Marcelo Gradella Villalva
Abstract:
Installations of solar photovoltaic systems have increased considerably in the last decade. It has therefore been recognized that monitoring of meteorological data (solar irradiance, air temperature, wind velocity, etc.) is important to predict the solar energy production potential of a given geographical area. In this sense, the present work compares two computational tools that are capable of estimating the energy generation of a photovoltaic system through correlation analyses of solar radiation data: the PVsyst software and an algorithm based on the PVlib package implemented in MATLAB. To achieve this objective, it was necessary to obtain solar radiation data (measured and from a solarimetric database), analyze the decomposition of global solar irradiance into direct normal and horizontal diffuse components, and model the devices of a photovoltaic system (solar modules and inverters) for energy production calculations. Simulated results were compared with experimental data in order to evaluate the performance of the studied methods. Errors in the estimation of energy production were less than 30% for the MATLAB algorithm and less than 20% for the PVsyst software.
Keywords: energy production, meteorological data, irradiance decomposition, solar photovoltaic system
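A common way to carry out the decomposition step described above is a clearness-index correlation. The sketch below uses the Erbs diffuse-fraction correlation (one of several such models offered by PVlib-style toolkits); whether it matches the specific correlation used in this study is an assumption, so treat the snippet as a generic illustration.

```python
# Decompose global horizontal irradiance (GHI) into diffuse horizontal (DHI)
# and direct normal (DNI) components via the Erbs diffuse-fraction correlation.
import math

def erbs_decompose(ghi, zenith_deg, extraterrestrial=1367.0):
    cz = math.cos(math.radians(zenith_deg))
    kt = ghi / (extraterrestrial * cz)          # clearness index
    if kt <= 0.22:
        fd = 1.0 - 0.09 * kt
    elif kt <= 0.80:
        fd = (0.9511 - 0.1604 * kt + 4.388 * kt**2
              - 16.638 * kt**3 + 12.336 * kt**4)
    else:
        fd = 0.165
    dhi = fd * ghi                              # diffuse horizontal irradiance
    dni = (ghi - dhi) / cz                      # direct normal irradiance
    return dni, dhi
```

Low clearness (overcast sky) yields a diffuse fraction near 1, while very clear sky caps the diffuse fraction at 0.165, with the direct beam carrying the rest.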
Procedia PDF Downloads 142
2087 Developing Artificial Neural Networks (ANN) for Falls Detection
Authors: Nantakrit Yodpijit, Teppakorn Sittiwanchai
Abstract:
The number of older adults is rising rapidly and the world's population is aging. Falls are one of the common and major health problems among the elderly, and may lead to acute and chronic injuries and deaths. Fall-prone individuals are at greater risk of decreased quality of life, lowered productivity and poverty, social problems, and additional health problems. A number of studies on falls prevention using fall detection systems have been conducted. Many available technologies for fall detection systems are laboratory-based and can incur substantial costs for falls prevention; the utilization of alternative technologies can potentially reduce costs. This paper presents the design and development of a new wearable fall detection system using an accelerometer and a gyroscope as motion sensors for the detection of body orientation and movement. Algorithms are developed to differentiate between Activities of Daily Living (ADL) and falls by comparing threshold-based values with Artificial Neural Networks (ANN). Results indicate the possibility of using the new threshold-based method with a neural network algorithm to reduce the number of false positives (false alarms) and improve the accuracy of the fall detection system.
Keywords: aging, algorithm, artificial neural networks (ANN), fall detection system, motion sensors, threshold
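The threshold stage of such a system can be sketched very compactly: flag a candidate fall when the total acceleration magnitude dips toward free fall and then spikes above an impact level shortly afterwards. The thresholds and window length below are illustrative assumptions (the paper refines such candidates with an ANN, which is not shown here).

```python
# Minimal threshold-based fall candidate detector (illustrative values, in g):
# a free-fall dip followed closely by an impact spike suggests a fall, while
# ordinary ADL keeps the magnitude near 1 g.

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5, window=10):
    # samples: list of (ax, ay, az) accelerometer readings in units of g
    mags = [(ax*ax + ay*ay + az*az) ** 0.5 for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g:                            # free-fall phase
            if any(v > impact_g for v in mags[i:i + window]):
                return True                            # impact follows the dip
    return False
```

This is precisely the stage that produces false alarms (e.g., sitting down hard), which is why the study feeds such candidates into a neural network for final classification.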
Procedia PDF Downloads 496
2086 Energy Detection Based Sensing and Primary User Traffic Classification for Cognitive Radio
Authors: Urvee B. Trivedi, U. D. Dalal
Abstract:
As wireless communication services grow quickly, the seriousness of spectrum utilization has been on the rise gradually. An emerging technology, cognitive radio, has come out to solve today's spectrum scarcity problem. To support the spectrum reuse functionality, secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance. Once sensing is done, different prediction rules apply to classify the traffic pattern of the primary user. Primary users follow two types of traffic patterns: periodic and stochastic ON-OFF patterns. A cognitive radio can learn the patterns in different channels over time. Two types of classification methods are discussed in this paper: classification by edge detection and classification using the autocorrelation function. The edge detection method has high accuracy but cannot tolerate sensing errors. Autocorrelation-based classification is applicable in a real environment, as it can tolerate some amount of sensing error.
Keywords: cognitive radio (CR), probability of detection (PD), probability of false alarm (PF), primary user (PU), secondary user (SU), fast Fourier transform (FFT), signal to noise ratio (SNR)
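The energy detection step itself is simple to sketch: average the squared received samples and compare against a threshold derived from the assumed noise power. The margin, noise power, and synthetic signal below are illustrative assumptions, not values from the paper.

```python
# Bare-bones energy detector: declare the primary user (PU) present when the
# average received energy exceeds a margin over the assumed noise power.
import random

def energy_statistic(samples):
    return sum(s * s for s in samples) / len(samples)

def sense(samples, noise_power, margin=2.0):
    # Threshold = margin * noise power (margin trades PD against PF).
    return energy_statistic(samples) > margin * noise_power

random.seed(1)
noise_power = 1.0
noise_only = [random.gauss(0.0, 1.0) for _ in range(4000)]
with_pu = [n + 2.0 for n in noise_only]   # crude stand-in for a PU signal
```

Raising the margin lowers the probability of false alarm (PF) but also the probability of detection (PD); that trade-off is exactly what PD/PF curves for energy detectors quantify.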
Procedia PDF Downloads 345
2085 Parasitological Study and Its Role in Fisheries Management and Stock Assessment of Boops boops (Lineauses, 1758) along the Tunisian Coast
Authors: I. Chebbi, L. Boudaya, L. Neifar
Abstract:
The bogue, Boops boops, is an economically important fishery resource commonly captured in the Mediterranean, and its parasite diversity has been used as a tool to differentiate between stocks along the Tunisian coast, an approach widely accepted in fisheries management. In this study, a total of 90 fish were investigated from three localities off Tunisia: Kelibia, Mahdia, and Zarzis. Fifteen species of parasites totaling 1270 individuals were harvested from B. boops, of which ten parasite species were used as biological tags. Based on the Mahalanobis distance, each parasite species showed great importance in the discrimination between groups. Tetraphyllidea larvae were the most influential parasites in determining the position of samples belonging to Kelibia, while monogenean species and Hysterothylacium sp. were the most important species for determining the position of samples from Mahdia. Specimens from Zarzis were characterized by the absence of the four monogenean species and the Tetraphyllidea larvae. Parasites allocated B. boops individuals correctly to their origin communities with an accuracy of 83.3%. These results were corroborated by the discriminant analyses, which highlighted the presence of three stocks and confirmed that the parasitological method can be considered a reliable key for providing information to discriminate among B. boops stocks in Tunisian waters.
Keywords: biological marker, Boops boops, parasite, population structure
Procedia PDF Downloads 134
2084 A Validated UPLC-MS/MS Assay Using Negative Ionization Mode for High-Throughput Determination of Pomalidomide in Rat Plasma
Authors: Muzaffar Iqbal, Essam Ezzeldin, Khalid A. Al-Rashood
Abstract:
Pomalidomide is a second-generation oral immunomodulatory agent used for the treatment of multiple myeloma in patients with disease refractory to lenalidomide and bortezomib. In this study, a sensitive UPLC-MS/MS assay was developed and validated for high-throughput determination of pomalidomide in rat plasma using celecoxib as an internal standard (IS). Liquid-liquid extraction with dichloromethane as the extracting agent was employed to extract pomalidomide and the IS from 200 µL of plasma. Chromatographic separation was carried out on an Acquity BEH™ C18 column (50 × 2.1 mm, 1.7 µm) using an isocratic mobile phase of acetonitrile:10 mM ammonium acetate (80:20, v/v) at a flow rate of 0.250 mL/min. Pomalidomide and the IS were eluted at 0.66 ± 0.03 and 0.80 ± 0.03 min, respectively, with a total run time of only 1.5 min. Detection was performed on a triple quadrupole tandem mass spectrometer using electrospray ionization in negative mode. The precursor-to-product ion transitions of m/z 272.01 → 160.89 for pomalidomide and m/z 380.08 → 316.01 for the IS were used to quantify them, using multiple reaction monitoring mode. The developed method was validated according to regulatory guidelines for bioanalytical method validation. Linearity in plasma samples was achieved over the concentration range of 0.47–400 ng/mL (r² ≥ 0.997). The intra- and inter-day precision values were ≤ 11.1% (RSD, %), whereas accuracy values ranged from −6.8 to 8.5% (RE, %). In addition, the other validation results were within the acceptance criteria, and the method was successfully applied in a pharmacokinetic study of pomalidomide in rats.
Keywords: pomalidomide, pharmacokinetics, LC-MS/MS, celecoxib
Procedia PDF Downloads 391
2083 A Study on Reliability of Gender and Stature Determination by Odontometric and Craniofacial Anthropometric Parameters
Authors: Churamani Pokhrel, C. B. Jha, S. R. Niraula, P. R. Pokharel
Abstract:
Human identification is one of the most challenging subjects that man has confronted. The determination of adult sex and stature are two of the four key factors (sex, stature, age, and race) in the identification of an individual. Craniofacial and odontometric parameters are important tools for forensic anthropologists when it is not possible to apply advanced techniques for identification purposes. The present study provides anthropometric correlation of these parameters with stature and gender and also devises regression formulae for the reconstruction of stature. A total of 312 Nepalese students with an equal distribution of the sexes, i.e., 156 male and 156 female students aged 18-35 years, were taken for the study. A total of 10 parameters were measured (age, sex, stature, head circumference, head length, head breadth, facial height, bi-zygomatic width, mesio-distal canine width and inter-canine distance of both maxilla and mandible). Correlation and regression analysis was done to find the associations between the parameters. All parameters were found to be greater in males than in females, and each difference was statistically significant. Out of the total of 312 samples, the best regressors for the determination of stature were head circumference and mandibular inter-canine width, and those for gender were head circumference and the right mandibular teeth. The accuracy of prediction was 83%. The regression equations and analysis generated from craniofacial and odontometric parameters can be a supplementary approach for the estimation of stature and gender when extremities are not available.
Keywords: craniofacial, gender, odontometric, stature
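The gender-prediction side of such a study reduces to a discriminant rule on a measured parameter; a deliberately simple midpoint-threshold version is sketched below, together with the accuracy computation behind a figure like "83%". The measurement values are invented for illustration, and a midpoint threshold is a toy stand-in for a full discriminant analysis.

```python
# Toy sex classifier from one craniofacial measurement: threshold at the
# midpoint between the male and female group means, then score accuracy.
# Data and the midpoint rule are illustrative, not the study's method.

def threshold_classifier(male_vals, female_vals):
    cut = (sum(male_vals) / len(male_vals)
           + sum(female_vals) / len(female_vals)) / 2
    return lambda v: "male" if v > cut else "female"

def accuracy(clf, labeled):
    # labeled: list of (measurement, true_label) pairs
    hits = sum(1 for v, label in labeled if clf(v) == label)
    return hits / len(labeled)
```

On well-separated groups this trivial rule is perfect; on overlapping real measurements its accuracy drops, which is why combining several parameters (as the study does) improves prediction.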
Procedia PDF Downloads 191
2082 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors
Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov
Abstract:
Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the use of metal salts or polymers and polyelectrolyte addition for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with free water surface) and closed (pressurized). Independently of the type of rector, hydraulic head loss is an important factor for its design. The present work focuses on the study of the total hydraulic head loss and flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and accessories (minor head losses) is presented, and compared to the head loss measured on a semi-pilot scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow. 
The observed head loss was also compared to the head loss predicted by several known theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross section configurations and one multiple concentric annular cross section reactor configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the assumed value of the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity and that the flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of these assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are actually not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were obtained. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants.
Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model
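To illustrate the friction head-loss calculation discussed in the abstract, the following is a minimal sketch of a Darcy-Weisbach estimate for a concentric annulus using the hydraulic diameter, with the Swamee-Jain explicit friction-factor formula for turbulent flow. All reactor dimensions, the flow rate, and the roughness value are hypothetical, not taken from the study.

```python
import math

def annulus_head_loss(Q, D_outer, D_inner, L, roughness=1e-5,
                      nu=1.0e-6, g=9.81):
    """Darcy-Weisbach head loss for flow in a concentric annulus,
    using the hydraulic diameter D_h = D_outer - D_inner.
    Q in m^3/s, lengths in m, nu = kinematic viscosity in m^2/s."""
    area = math.pi / 4.0 * (D_outer**2 - D_inner**2)
    v = Q / area                      # mean axial velocity
    D_h = D_outer - D_inner           # hydraulic diameter of the annulus
    Re = v * D_h / nu                 # Reynolds number
    if Re < 2300:                     # laminar: circular-pipe form f = 64/Re
        f = 64.0 / Re
    else:                            # turbulent: Swamee-Jain explicit formula
        f = 0.25 / math.log10(roughness / (3.7 * D_h)
                              + 5.74 / Re**0.9) ** 2
    h_f = f * (L / D_h) * v**2 / (2.0 * g)
    return Re, h_f

# hypothetical reactor: 50 mm outer, 30 mm inner pipe, 1 m long, 0.5 L/s
Re, h_f = annulus_head_loss(Q=5e-4, D_outer=0.05, D_inner=0.03, L=1.0)
```

Note that, as the abstract points out, such one-dimensional formulas assume a uniform velocity profile across the annulus, which the CFD results show to be only an approximation.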
Procedia PDF Downloads 218
2081 Modeling of Tool Flank Wear in Finish Hard Turning of AISI D2 Using Genetic Programming
Authors: V. Pourmostaghimi, M. Zadshakoyan
Abstract:
The efficiency and productivity of finish hard turning can be enhanced impressively by utilizing accurate predictive models for cutting tool wear. The ability of genetic programming to yield an explicit analytical model is a notable characteristic that makes it more applicable than other predictive modeling methods. In this paper, a genetic equation for modeling tool flank wear is developed using genetic programming and experimentally measured flank wear values obtained during finish turning of hardened AISI D2. A series of tests was conducted over a range of cutting parameters and the values of tool flank wear were measured. On the basis of the obtained results, a genetic model relating the cutting parameters to the tool flank wear was extracted. The accuracy of the genetically obtained model was assessed using two statistical measures, the root mean square error (RMSE) and the coefficient of determination (R²). Evaluation results revealed that the genetic model predicted flank wear over the study area accurately (R² = 0.9902 and RMSE = 0.0102). These results allow concluding that the proposed genetic equation corresponds well with the experimental data and can be implemented in real industrial applications.
Keywords: cutting parameters, flank wear, genetic programming, hard turning
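The two statistical measures used to assess the genetic model can be computed as in the following minimal sketch; the wear values and predictions below are hypothetical, not the paper's data, and the sketch does not reproduce the paper's genetic equation.

```python
def rmse_r2(measured, predicted):
    """Root mean square error and coefficient of determination R^2
    between measured values and model predictions."""
    n = len(measured)
    mean_m = sum(measured) / n
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    rmse = (ss_res / n) ** 0.5
    r2 = 1.0 - ss_res / ss_tot
    return rmse, r2

# hypothetical flank wear values (mm) vs. model predictions
measured  = [0.05, 0.09, 0.14, 0.18, 0.25]
predicted = [0.06, 0.08, 0.15, 0.17, 0.26]
rmse, r2 = rmse_r2(measured, predicted)
```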
Procedia PDF Downloads 179
2080 Study of Launch Recovery Control Dynamics of Retro Propulsive Reusable Rockets
Authors: Pratyush Agnihotri
Abstract:
Space missions are very costly because transportation to space is highly expensive; there is therefore a need to achieve complete reusability of launch vehicles, making missions far more economical by recovering and reusing hardware. Launcher reusability is the most effective way to lower the cost of access to space, yet it remains a formidable technical hurdle for the aerospace industry. Much of the difficulty lies in the guidance and control procedures and algorithms, specifically those of the controlled landing phase, which must enable a precise landing with low fuel margins. State-of-the-art approaches to navigation and control exist, such as hybrid navigation and robust control, but the powered descent and landing of a launch vehicle's first stage requires guidance and control capable of on-board optimization. First, a CAD model of the launch vehicle, the SpaceX Falcon 9 rocket, is presented for a better understanding of the architecture for which the guidance and control solution for launcher recovery must be designed. The focus is on the landing-phase guidance scheme for the recovery and reuse of the first stage using retro propulsion. After reviewing various GNC solutions, online convex and successive optimization are explored as guidance schemes to achieve the required landing accuracy.
Keywords: guidance, navigation, control, retro propulsion, reusable rockets
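The tight fuel margins of the landing phase can be illustrated with the simplest possible 1D estimate: the altitude at which a constant-thrust braking burn must begin so that vertical velocity reaches zero at touchdown. This is only back-of-the-envelope kinematics, neglecting drag and propellant mass change, and all vehicle numbers are hypothetical; the actual guidance schemes discussed in the paper solve a far richer on-board optimization problem.

```python
def burn_start_altitude(v_descent, thrust, mass, g=9.81):
    """Ignition altitude for a constant-thrust braking burn that brings
    vertical velocity to zero exactly at touchdown: h = v^2 / (2 a_net).
    Neglects drag and propellant mass change (1D kinematics only)."""
    a_net = thrust / mass - g          # net upward deceleration
    if a_net <= 0:
        raise ValueError("thrust cannot even hover this mass")
    return v_descent ** 2 / (2.0 * a_net)

# hypothetical first stage: 25 t descending at 250 m/s, 845 kN of thrust
h_ignite = burn_start_altitude(v_descent=250.0, thrust=845e3, mass=25e3)
```

Starting the burn even slightly late leaves a residual velocity at the ground, which is why the landing guidance must be both accurate and robust.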
Procedia PDF Downloads 91
2079 Oxytocin and Sensorimotor Synchronization in Pairs of Strangers
Authors: Yana Gorina, Olga Lopatina, Elina Tsigeman, Larisa Mararitsa
Abstract:
The ability to act in concert with others, so-called sensorimotor synchronisation, is a fundamental human ability that underlies successful interpersonal coordination. Accuracy and plasticity in synchronisation are adaptive aspects of interaction with the environment, as is the ability to predict the upcoming actions and behaviour of others. The ability to temporally coordinate one's actions with a predictable external event is manifested in such types of social behaviour as a group dance synchronised to music played live by an orchestra, group sports (rowing, swimming, etc.), the synchronised actions of surgeons during an operation, applause from an admiring audience, walking rhythms, etc. Both our body and mind are involved in achieving synchronisation during social interactions. However, it has not yet been well described how the brain determines the external rhythm and which neuropeptides coordinate and synchronise actions. Over the past few decades, there has been increasing interest among neuroscientists and neurophysiologists in the neuropeptide oxytocin, in the context of its complex, diverse and sometimes polar effects on the emotional and social aspects of behaviour (attachment, trust, empathy, emotion recognition, stress response, anxiety and depression, etc.). Presumably, oxytocin might also be involved in social synchronisation processes. The aim of our study is to test the hypothesis that oxytocin is linked to interpersonal synchronisation in pairs of strangers.
Keywords: behavior, movement, oxytocin, synchronization
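Sensorimotor synchronisation is commonly quantified by tap-to-pacing-signal asynchronies, i.e. the signed difference between each action onset and the corresponding external event. The sketch below computes the mean and variability of such asynchronies; the onset times are invented for illustration and the abstract does not specify the paper's measurement protocol.

```python
def asynchrony_stats(tap_times, pacing_times):
    """Mean and SD (ms) of tap-onset minus pacing-onset asynchronies.
    A negative mean indicates that taps anticipate the external event."""
    diffs = [t - p for t, p in zip(tap_times, pacing_times)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = (sum((d - mean) ** 2 for d in diffs) / n) ** 0.5
    return mean, sd

# hypothetical onsets (ms): pacing event every 500 ms, taps slightly early
pacing = [500, 1000, 1500, 2000, 2500]
taps   = [480,  985, 1470, 1990, 2465]
mean_async, sd_async = asynchrony_stats(taps, pacing)
```

A smaller SD corresponds to more accurate synchronisation; changes in the mean reflect anticipation or lag relative to the partner or pacing signal.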
Procedia PDF Downloads 62
2078 Acceleration of Lagrangian and Eulerian Flow Solvers via Graphics Processing Units
Authors: Pooya Niksiar, Ali Ashrafizadeh, Mehrzad Shams, Amir Hossein Madani
Abstract:
There are many computationally demanding applications in science and engineering that need efficient algorithms implemented on high performance computers. Recently, Graphics Processing Units (GPUs) have drawn much attention compared to traditional CPU-based hardware and have opened up new improvement venues in scientific computing. One particular application area is Computational Fluid Dynamics (CFD), in which mature CPU-based codes need to be converted to GPU-based algorithms to take advantage of this new technology. In this paper, numerical solutions of two classes of discrete fluid flow models via both CPU and GPU are discussed and compared. Test problems include an Eulerian model of a two-dimensional incompressible laminar flow case and a Lagrangian model of a two-phase flow field. The CUDA programming standard is used to employ an NVIDIA GPU with 480 cores, and a serial C++ code is run on a single core of an Intel quad-core CPU. Up to two orders of magnitude of speed-up is observed on the GPU for a certain range of grid resolutions or particle numbers. As expected, the Lagrangian formulation is better suited for parallel computation on the GPU, although the Eulerian formulation shows significant speed-up too.
Keywords: CFD, Eulerian formulation, graphics processing units, Lagrangian formulation
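The reason Eulerian solvers map well onto GPUs is that each grid cell's update depends only on the old values of its neighbours, so every cell can be updated by an independent thread. A minimal Python sketch of such a stencil update (one Jacobi sweep for the 2D Laplace equation) is shown below; the actual code in the paper is CUDA C++, and this toy problem is only an illustration of the per-cell independence, not the paper's flow model.

```python
def jacobi_step(u):
    """One Jacobi sweep for the 2D Laplace equation on a square grid.
    Every interior cell is updated from the *old* field only, which is
    what makes the update embarrassingly parallel on a GPU
    (conceptually, one CUDA thread per cell)."""
    n = len(u)
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                                + u[i][j-1] + u[i][j+1])
    return new

# toy 5x5 grid: hot top boundary (Dirichlet value 1), cold elsewhere
n = 5
u = [[0.0] * n for _ in range(n)]
u[0] = [1.0] * n
for _ in range(200):          # iterate towards the steady state
    u = jacobi_step(u)
center = u[2][2]              # converges to 0.25 by symmetry
```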
Procedia PDF Downloads 416
2077 Total-Reflection X-Ray Spectroscopy as a Tool for Element Screening in Food Samples
Authors: Hagen Stosnach
Abstract:
The analytical demands on modern instruments for element analysis in food samples include the analysis of major, trace and ultra-trace essential elements as well as potentially toxic trace elements. In this study, total-reflection X-ray fluorescence analysis (TXRF) is presented as an analytical technique that meets the requirements defined by the Association of Official Agricultural Chemists (AOAC) regarding the limit of quantification, repeatability, reproducibility and recovery for most of the target elements. The advantages of TXRF are the small sample mass required, the broad linear range from µg/kg up to wt.-% values, no consumption of gases or cooling water, and flexible and easy sample preparation. Liquid samples like alcoholic or non-alcoholic beverages can be analyzed without any preparation. For solid food samples, the most common sample pre-treatment methods are mineralization and direct deposition of the sample onto the reflector without or with minimal treatment, mainly as solid suspensions or after extraction. The main disadvantages are possible peak overlaps, which may lower the accuracy of quantitative analysis and limit element identification. The technique is presented through several application examples covering a broad range of liquid and solid food types.
Keywords: essential elements, toxic metals, XRF, spectroscopy
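TXRF quantification is commonly carried out with the internal-standard method: a known concentration of a reference element is spiked into the sample, and each analyte concentration follows from the ratio of sensitivity-corrected fluorescence counts. The sketch below assumes hypothetical count numbers and relative sensitivities; it is not drawn from the study's data.

```python
def txrf_concentration(net_counts, sensitivity, counts_is, sens_is, conc_is):
    """Analyte concentration via the internal-standard method:
        C_x = C_IS * (N_x / S_x) / (N_IS / S_IS)
    where N are net peak counts and S are instrument-specific
    relative sensitivity factors."""
    return conc_is * (net_counts / sensitivity) / (counts_is / sens_is)

# hypothetical run: 1 mg/L Ga internal standard, Fe line in a digested sample
S = {"Fe": 1.8, "Ga": 1.0}            # assumed relative sensitivities
c_fe = txrf_concentration(net_counts=5400.0, sensitivity=S["Fe"],
                          counts_is=1200.0, sens_is=S["Ga"], conc_is=1.0)
```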
Procedia PDF Downloads 133
2076 B Spline Finite Element Method for Drifted Space Fractional Tempered Diffusion Equation
Authors: Ayan Chakraborty, BV. Rathish Kumar
Abstract:
Of late, many models in viscoelasticity, signal processing and anomalous diffusion are formulated in terms of fractional calculus. Tempered fractional calculus is a generalization of fractional calculus, and in the last few years several important partial differential equations occurring in different fields of science, such as diffusion-wave equations and the Schrödinger equation, have been reconsidered in these terms. In the present paper, a time-dependent tempered fractional diffusion equation of order $\gamma \in (0,1)$ with a forcing function is considered. Existence, uniqueness, stability, and regularity of the solution have been proved. Crank-Nicolson discretization is used in the time direction, and a B-spline finite element approximation is implemented. B-spline bases are generally useful for representing the geometry of a finite element model and for interfacing with a finite element analysis program. By utilizing this technique, an a priori space-time error estimate has been derived, and we prove that the order of convergence is $\mathcal{O}(h^2+T^2)$, where $h$ is the space step size and $T$ is the time step size. A couple of numerical examples have been presented to confirm the accuracy of the theoretical results. Finally, we conclude that the studied method is useful for solving tempered fractional diffusion equations.
Keywords: B-spline finite element, error estimates, Gronwall's lemma, stability, tempered fractional
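Numerical confirmation of a second-order estimate like $\mathcal{O}(h^2+T^2)$ is typically done by computing errors on successively refined meshes and extracting the observed rate. A minimal sketch, with hypothetical error values chosen only to illustrate the calculation:

```python
import math

def observed_order(h1, e1, h2, e2):
    """Observed convergence rate from errors on two mesh sizes:
    if e ~ C * h^p, then p = log(e1/e2) / log(h1/h2)."""
    return math.log(e1 / e2) / math.log(h1 / h2)

# hypothetical L2 errors from halving the spatial step (with the time
# step refined proportionally, so the temporal error shrinks in step)
p = observed_order(h1=0.1, e1=4.1e-3, h2=0.05, e2=1.05e-3)
```

An observed rate close to 2 is consistent with the theoretical second-order estimate.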
Procedia PDF Downloads 192
2075 Numerical Design and Characterization of MOVPE Grown Nitride Based Semiconductors
Authors: J. Skibinski, P. Caban, T. Wejrzanowski, K. J. Kurzydlowski
Abstract:
In the present study, numerical simulations of the epitaxial growth of gallium nitride in the Metal Organic Vapor Phase Epitaxy reactor AIX-200/4RF-S are addressed. The aim of this study was to design optimal fluid flow and thermal conditions for obtaining the most homogeneous product. Since there are many agents influencing the reactions in the crystal growth area, such as temperature, pressure, gas flow and reactor geometry, it is difficult to design an optimal process. Variations of process pressure and hydrogen mass flow rates have been considered. Because it is impossible to determine experimentally the exact distribution of heat and mass transfer inside the reactor during crystal growth, detailed 3D modeling has been used to gain insight into the process conditions. Numerical simulations make it possible to understand the epitaxial process through calculation of the heat and mass transfer distribution during the growth of gallium nitride. Including chemical reactions in the numerical model makes it possible to calculate the growth rate on the substrate. The present approach has been applied to enhance the performance of the AIX-200/4RF-S reactor.
Keywords: computational fluid dynamics, finite volume method, epitaxial growth, gallium nitride
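Homogeneity of the deposited layer, the design target named in the abstract, is often summarized by the relative non-uniformity of the growth rate (sample standard deviation over mean) across the substrate. The growth-rate values below are hypothetical, purely to show the metric; the abstract does not report numerical rates.

```python
def non_uniformity(rates):
    """Relative non-uniformity of growth rate: sample std / mean.
    A smaller value means a more homogeneous epitaxial layer."""
    n = len(rates)
    mean = sum(rates) / n
    var = sum((r - mean) ** 2 for r in rates) / (n - 1)
    return (var ** 0.5) / mean

# hypothetical GaN growth rates (um/h) sampled across the substrate
rates = [2.01, 2.05, 1.98, 2.03, 1.96]
nu_rel = non_uniformity(rates)
```

Comparing this metric across simulated pressure and hydrogen flow settings is one way to rank candidate process conditions.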
Procedia PDF Downloads 454
2074 Predictive Analytics of Student Performance Determinants
Authors: Mahtab Davari, Charles Edward Okon, Somayeh Aghanavesi
Abstract:
Every institution of learning is interested in the performance of its enrolled students, and the level of that performance determines the approach the institution may adopt in rendering academic services. The focus of this paper is to evaluate students' academic performance in given courses of study using machine learning methods. This study evaluated various supervised machine learning classification algorithms, namely Logistic Regression (LR), Support Vector Machine (SVM), Random Forest, Decision Tree, K-Nearest Neighbors, Linear Discriminant Analysis (LDA), and Quadratic Discriminant Analysis, using selected features to predict student performance. The accuracy, precision, recall, and F1 score obtained from a 5-fold cross-validation were used to determine the best classification algorithm. SVM (using a linear kernel), LDA, and LR were identified as the best-performing machine learning methods. Using the LR model, this study also identified students' educational habits, such as reading and paying attention in class, as strong determinants of above-average performance. Other important features include the student's academic history and work. Demographic factors such as age, gender, high school graduation, etc., had no significant effect on a student's performance.
Keywords: student performance, supervised machine learning, classification, cross-validation, prediction
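The 5-fold cross-validation protocol used in the study can be sketched in plain Python. To keep the sketch self-contained it uses a simple nearest-centroid classifier rather than the paper's algorithms, and the tiny dataset (study hours and attendance predicting pass/fail) is invented for illustration.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and split them into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def nearest_centroid_predict(train_x, train_y, x):
    """Predict the class whose feature centroid is closest to x."""
    centroids = {}
    for label in set(train_y):
        pts = [train_x[i] for i in range(len(train_y)) if train_y[i] == label]
        centroids[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return min(centroids, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(x, centroids[c])))

def cross_val_accuracy(X, y, k=5):
    """Mean held-out accuracy over k folds (5-fold CV as in the paper)."""
    folds, scores = k_fold_indices(len(X), k), []
    for fold in folds:
        train = [i for i in range(len(X)) if i not in fold]
        tx, ty = [X[i] for i in train], [y[i] for i in train]
        hits = sum(nearest_centroid_predict(tx, ty, X[i]) == y[i]
                   for i in fold)
        scores.append(hits / len(fold))
    return sum(scores) / k

# toy, clearly separable data: [study_hours, attendance] -> pass(1)/fail(0)
X = [[8, .9], [7, .8], [9, .95], [6, .85], [8, .9],
     [2, .3], [1, .2], [3, .4], [2, .25], [1, .35]]
y = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
acc = cross_val_accuracy(X, y, k=5)
```

With real, noisy data the mean accuracy (plus precision, recall, and F1 on the held-out folds) is what discriminates between candidate classifiers.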
Procedia PDF Downloads 126
2073 About Multi-Resolution Techniques for Large Eddy Simulation of Reactive Multi-Phase Flows
Authors: Giacomo Rossi, Bernardo Favini, Eugenio Giacomazzi, Franca Rita Picchia, Nunzio Maria Salvatore Arcidiacono
Abstract:
A numerical technique for mesh refinement in the HeaRT (Heat Release and Transfer) numerical code is presented. In the CFD framework, the Large Eddy Simulation (LES) approach is gaining importance as a tool for simulating turbulent combustion processes, although it has a high computational cost due to the complexity of the turbulence modeling and the large number of grid points necessary to obtain a good numerical solution. In particular, when a numerical simulation of a large domain is performed with a structured grid, the number of grid points can increase so much that the simulation becomes infeasible; this problem can be overcome with a mesh refinement technique. The mesh refinement technique developed for the HeaRT numerical code (a staggered finite-difference code) is based on a high-order reconstruction of the variables at the grid interfaces by means of a least-squares quasi-ENO interpolation. The numerical code is written in modern Fortran (the 2003 standard or newer) and is parallelized using domain decomposition and the Message Passing Interface (MPI) standard.
Keywords: LES, multi-resolution, ENO, fortran
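The key ingredient above is high-order reconstruction of cell-centred variables at grid interfaces. As a simplified stand-in for the least-squares quasi-ENO interpolation (which additionally biases the stencil away from discontinuities), the following sketch shows a plain quadratic reconstruction at the half-point of a uniform stencil:

```python
def interface_value(f_m1, f_0, f_p1):
    """Value at the interface x = +1/2 (uniform spacing) from the quadratic
    through the three cell-centred samples at x = -1, 0, +1:
        f(1/2) = (-f_{-1} + 6 f_0 + 3 f_{+1}) / 8
    Exact for any polynomial up to degree 2."""
    return (-f_m1 + 6.0 * f_0 + 3.0 * f_p1) / 8.0

# exact for a parabola: f(x) = x^2 sampled at -1, 0, 1 gives f(1/2) = 0.25
v = interface_value(1.0, 0.0, 1.0)
```

On a staggered grid, reconstructions of this kind supply the interface values needed to couple coarse and refined mesh blocks.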
Procedia PDF Downloads 366
2072 Performance Evaluation of 3D Printed ZrO₂ Ceramic Components by Nanoparticle Jetting™
Authors: Shengping Zhong, Qimin Shi, Yaling Deng, Shoufeng Yang
Abstract:
Additive manufacturing has exerted a tremendous influence on the development of the manufacturing and materials industries over the past three decades. Zirconia-based advanced ceramics have attracted substantial attention as structural and functional ceramics. As a novel material jetting process for selectively depositing nanoparticles, NanoParticle Jetting™ (NPJ™) is capable of fabricating dense zirconia components with a high-detail surface, precisely controllable shrinkage, and remarkable mechanical properties, raising the attainable level of both printing process control and printing accuracy. Emphasis is placed on the performance evaluation of NPJ™-printed ceramic components, for which the physical, chemical, and mechanical properties are evaluated. The experimental results show that the Y₂O₃-stabilized ZrO₂ boxes exhibit a high relative density of 99.5%, a glossy surface with a roughness as low as 0.33 µm, a general linear shrinkage factor of 17.47%, outstanding hardness and fracture toughness of 12.43±0.09 GPa and 7.52±0.34 MPa·m¹/², a comparable flexural strength of 699±104 MPa, and a dense and homogeneous grain distribution in the microstructure. This innovative NanoParticle Jetting system shows great potential in dental, medical, and electronic applications.
Keywords: nanoparticle jetting, ZrO₂ ceramic, materials jetting, performance evaluation
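A precisely controllable shrinkage factor is useful because it lets the green (as-printed) part be pre-scaled so that sintering shrinks it onto the target dimension. A minimal sketch using the 17.47% general linear shrinkage reported in the abstract (the 20 mm target feature is a hypothetical example):

```python
def green_dimension(target_mm, linear_shrinkage=0.1747):
    """Pre-sintering ('green') dimension such that the part shrinks onto
    the target size: d_green * (1 - s) = d_target.
    0.1747 is the general linear shrinkage factor from the abstract."""
    return target_mm / (1.0 - linear_shrinkage)

# a feature that must measure 20.00 mm after sintering is printed oversized
d_green = green_dimension(20.0)
```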
Procedia PDF Downloads 177