Search results for: axial flux induction machine
3701 Microscopic Analysis of Bulk, High-Tc Superconductors by Transmission Kikuchi Diffraction
Authors: Anjela Koblischka-Veneva, Michael R. Koblischka
Abstract:
In this contribution, the transmission Kikuchi diffraction (TKD, sometimes called t-EBSD) technique is applied to bulk, melt-grown YBa₂Cu₃O₇ (YBCO) superconductors prepared by the MTMG (melt-textured melt-grown) technique and the infiltration growth (IG) technique. TEM slices required for the analysis were prepared by means of focused ion-beam (FIB) milling using mechanically polished sample surfaces, which enable a proper selection of the regions of interest for the investigations. The required electron transparency was reached by an additional polishing step of the resulting surfaces using FIB Ga-ion and Ar-ion milling. The improved spatial resolution of TKD enabled the investigation of the tiny Y₂BaCuO₅ (Y-211) particles, with diameters of about 50-100 nm, embedded within the YBCO matrix, and of other added secondary-phase particles. With the TKD technique, the microstructural properties of the YBCO matrix are studied in detail. It is observed that the matrix shows the effects of stress/strain, depending on the size and distribution of the embedded particles, which are important for providing additional flux pinning centers in such superconducting bulk samples. Using kernel average misorientation (KAM) maps, the strain induced in the superconducting matrix around the particles, which increases the flux pinning effectiveness, can be clearly revealed. This type of analysis of EBSD/TKD data is, therefore, also important for other material systems in which nanoparticles are embedded in a matrix.
Keywords: transmission Kikuchi diffraction, EBSD, TKD, embedded particles, superconductors, YBa₂Cu₃O₇
Procedia PDF Downloads 135
3700 Ruta graveolens Fingerprints Obtained with Reversed-Phase Gradient Thin-Layer Chromatography with Controlled Solvent Velocity
Authors: Adrian Szczyrba, Aneta Halka-Grysinska, Tomasz Baj, Tadeusz H. Dzido
Abstract:
Since prehistory, plants have constituted an essential source of biologically active substances in folk medicine. One example of a medicinal plant is Ruta graveolens L. The Ruta graveolens herb has long been famous for its spasmolytic, diuretic, and anti-inflammatory therapeutic effects. The wide spectrum of secondary metabolites produced by Ruta graveolens includes flavonoids (e.g., rutin, quercetin), coumarins (e.g., bergapten, umbelliferone), phenolic acids (e.g., rosmarinic acid, chlorogenic acid), and limonoids. Unfortunately, the content of the produced substances is highly dependent on environmental factors such as temperature, humidity, and soil acidity; therefore, standardization is necessary. There have been many attempts to characterize various phytochemical groups (e.g., coumarins) of Ruta graveolens using normal-phase thin-layer chromatography (TLC). However, due to the so-called general elution problem, some components usually remained unseparated near the start line or the solvent front. Ruta graveolens is therefore a very good model plant. Methanol and petroleum ether extracts from its aerial parts were used to demonstrate the capabilities of the new device for gradient thin-layer chromatogram development. The development of gradient thin-layer chromatograms in a reversed-phase system in conventional horizontal chambers can be disrupted by problems associated with an excessive flux of the mobile phase onto the surface of the adsorbent layer. This phenomenon is most likely caused by significant differences between the surface tensions of the subsequent fractions of the mobile phase. An excessive flux of the mobile phase onto the surface of the adsorbent layer distorts its flow, producing unreliable and unrepeatable results and causing blurring and deformation of the substance zones. In the prototype device, the mobile phase solution is delivered onto the surface of the adsorbent layer with controlled velocity (by a moving pipette driven by a 3D machine). The solvent delivery rate to the adsorbent layer is equal to or lower than that of conventional development; therefore, chromatograms can be developed at an optimal linear mobile phase velocity. Furthermore, under such conditions there is no excess of eluent solution on the surface of the adsorbent layer, so a higher performance of the chromatographic system can be obtained. Feeding the adsorbent layer directly with eluent also makes it possible to perform convenient continuous gradient elution with practically no so-called gradient delay. In this study, unique fingerprints of methanol and petroleum ether extracts of Ruta graveolens aerial parts were obtained with stepwise-gradient reversed-phase thin-layer chromatography. The fingerprints obtained under different chromatographic conditions will be compared, and the advantages and disadvantages of the proposed approach to chromatogram development with controlled solvent velocity will be discussed.
Keywords: fingerprints, gradient thin-layer chromatography, reversed-phase TLC, Ruta graveolens
Procedia PDF Downloads 289
3699 Fraud Detection in Credit Cards with Machine Learning
Authors: Anjali Chouksey, Riya Nimje, Jahanvi Saraf
Abstract:
Online transactions have increased dramatically in this new 'social-distancing' era. With online transactions, fraud in online payments has also increased significantly. Fraud is a significant problem in various industries such as insurance and banking. These frauds include leaking sensitive information related to the credit card, which can easily be misused. With governments also pushing online transactions, e-commerce is booming. But due to increasing fraud in online payments, e-commerce industries are suffering a great loss of trust from their customers, and these companies are finding credit card fraud to be a big problem. People have started using online payment options and thus are becoming easy targets of credit card fraud. In this research paper, we discuss machine learning algorithms. We have used a decision tree, XGBoost, k-nearest neighbours, logistic regression, random forest, and SVM on a dataset of online credit card transactions. We test all these algorithms for detecting fraud cases using the confusion matrix and F1 score, and calculate the accuracy score for each model, to identify which algorithm is best suited to detecting fraud.
Keywords: machine learning, fraud detection, artificial intelligence, decision tree, k-nearest neighbour, random forest, XGBoost, logistic regression, support vector machine
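A minimal sketch of the evaluation loop described above, assuming scikit-learn and a preprocessed feature matrix X with binary fraud labels y (the data here is a synthetic stand-in; XGBoost, via xgboost.XGBClassifier, would slot into the same loop):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, f1_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                           # stand-in transaction features
y = (X[:, 0] + rng.normal(size=1000) > 1.5).astype(int)   # stand-in fraud labels (minority class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbours": KNeighborsClassifier(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
    print(f"{name}: TN={tn} FP={fp} FN={fn} TP={tp} "
          f"F1={f1_score(y_te, y_pred):.3f} acc={accuracy_score(y_te, y_pred):.3f}")
```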
Procedia PDF Downloads 149
3698 Comparison of Monte Carlo Simulations and Experimental Results for the Measurement of Complex DNA Damage Induced by Ionizing Radiations of Different Quality
Authors: Ifigeneia V. Mavragani, Zacharenia Nikitaki, George Kalantzis, George Iliakis, Alexandros G. Georgakilas
Abstract:
Complex DNA damage consisting of a combination of DNA lesions, such as double strand breaks (DSBs) and non-DSB base lesions, occurring in a small volume is considered one of the most important biological endpoints regarding ionizing radiation (IR) exposure. Strong theoretical (Monte Carlo simulations) and experimental evidence suggests an increase in the complexity of DNA damage, and therefore in repair resistance, with increasing linear energy transfer (LET). Experimental detection of complex (clustered) DNA damage is often associated with technical deficiencies limiting its measurement, especially in cellular or tissue systems. Our groups have recently made significant improvements towards the identification of key parameters relating to the efficient detection of complex DSBs and non-DSBs in human cellular systems exposed to IR of varying quality (γ- and X-rays 0.3-1 keV/μm, α-particles 116 keV/μm, and ³⁶Ar ions 270 keV/μm). The induction and processing of DSB and non-DSB oxidative clusters were measured using adaptations of immunofluorescence (γH2AX or 53BP1 foci staining as DSB probes, and the human repair enzymes OGG1 or APE1 as probes for oxidized purines and abasic sites, respectively). In the current study, relative biological effectiveness (RBE) values for DSB and non-DSB induction have been measured in different human normal (FEP18-11-T1) and cancerous cell lines (MCF7, HepG2, A549, MO59K/J). The experimental results are compared to simulation data obtained using a validated microdosimetric fast Monte Carlo DNA Damage Simulation code (MCDS). Moreover, this simulation approach is implemented in two realistic clinical cases, i.e., prostate cancer treatment using X-rays generated by a linear accelerator and a pediatric osteosarcoma case using a 200.6 MeV proton pencil beam. RBE values for complex DNA damage induction are calculated for the tumor areas. These results reveal a disparity between theory and experiment and underline the necessity of implementing highly precise and more efficient experimental and simulation approaches.
Keywords: complex DNA damage, DNA damage simulation, protons, radiotherapy
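For yield-type endpoints such as DSB induction, RBE is commonly estimated as the ratio of the damage yield per unit dose of the test radiation to that of the reference radiation. A tiny worked example with made-up yields (not the paper's measured values):

```python
# Illustrative RBE calculation from damage yields; the numbers are invented.
yield_gamma = 35.0   # e.g., DSB clusters per cell per Gy for reference gamma rays
yield_alpha = 95.0   # same endpoint for alpha particles (116 keV/um)

rbe_alpha = yield_alpha / yield_gamma   # yield-ratio estimate of RBE
print(f"RBE(alpha) ~ {rbe_alpha:.2f}")  # ~2.71 with these illustrative numbers
```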
Procedia PDF Downloads 327
3697 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process, and it gets instantiated and deployed in one or more machines (we assume that different microservices are deployed into different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which prevent them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations by simply deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
Keywords: aggregation, deployment, embedding, resource allocation
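A hypothetical sketch of the embedding example above, co-locating the i-th instances of two communicating microservices so that each pair talks over localhost with no load balancer between them (the function and data structures are illustrative, not the paper's formal method):

```python
from itertools import zip_longest

def embed(instances_a, instances_b):
    """Assign instance pairs (a_i, b_i) of two communicating services
    to the same machine, so each pair uses the localhost interface."""
    placement = {}
    for i, (a, b) in enumerate(zip_longest(instances_a, instances_b)):
        machine = f"m{i + 1}"
        placement[machine] = [x for x in (a, b) if x is not None]
    return placement

print(embed(["a1", "a2"], ["b1", "b2"]))
# {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']}  -> a1-b1 and a2-b2 talk over localhost
```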
Procedia PDF Downloads 204
3696 Axial, Bending Interaction Diagrams of Reinforced Concrete Columns Exposed to Chloride Attack
Authors: Rita Greco, Giuseppe Carlo Marano
Abstract:
Chloride-induced reinforcement corrosion is widely accepted to be the most frequent mechanism causing premature degradation of reinforced concrete members, whose economic and social consequences are continuously growing. Prevention of these phenomena has great importance in structural design, and modern codes and standards impose prescriptions concerning design details and concrete mix proportions for structures exposed to different external aggressive conditions, grouped into environmental classes. This paper focuses on the degradation over time of the load-carrying capacity of reinforced concrete columns due to chloride-induced steel pitting corrosion. The structural element is considered to be exposed to a marine environment, and the effects of corrosion are described by the time degradation of the axial-bending interaction diagram. Because chloride ingress and the consequent pitting corrosion propagation are both time-dependent mechanisms, the study adopts a time-variant predictive approach to evaluate the residual strength of corroded reinforced concrete columns at different lifetimes. The corrosion initiation and propagation process is modelled by taking into account all the parameters, such as external environmental conditions, concrete mix proportion, and concrete cover, which influence the time evolution of the corrosion phenomenon and its effects on the residual strength of RC columns.
Keywords: pitting corrosion, strength deterioration, diffusion coefficient, surface chloride concentration, concrete structures, marine environment
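A common way to model the initiation stage is the error-function solution of Fick's second law, with corrosion starting once the chloride content at the rebar depth reaches a critical threshold. A minimal sketch under assumed parameter values (the paper's exact model and coefficients may differ):

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

Cs = 0.60      # surface chloride concentration, % by weight of concrete (assumed)
D = 1.0e-12    # apparent diffusion coefficient, m^2/s (assumed)
cover = 0.05   # concrete cover, m
C_crit = 0.05  # critical chloride content at the rebar, same units as Cs

def chloride(x, t):
    """Fick's second law with constant surface concentration:
    C(x, t) = Cs * (1 - erf(x / (2*sqrt(D*t))))."""
    return Cs * (1.0 - erf(x / (2.0 * np.sqrt(D * t))))

# Corrosion initiation time: when C(cover, t) first reaches C_crit.
year = 365.25 * 24 * 3600.0
t_init = brentq(lambda t: chloride(cover, t) - C_crit, 1e6, 1e10)
print(f"initiation time ~ {t_init / year:.1f} years")   # ~13 years with these values
```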
Procedia PDF Downloads 323
3695 Grating Scale Thermal Expansion Error Compensation for Large Machine Tools Based on Multiple Temperature Detection
Authors: Wenlong Feng, Zhenchun Du, Jianguo Yang
Abstract:
To decrease the grating scale thermal expansion error, a novel method based on multiple temperature detections is proposed. Several temperature sensors are installed on the grating scale, and the temperatures of these sensors are recorded. The temperatures at every point on the grating scale are calculated by interpolating between adjacent sensors. According to the thermal expansion principle, the grating scale thermal expansion error model can be established by integrating over the variations of position and temperature. A novel compensation method is proposed in this paper. By applying the established error model, the grating scale thermal expansion error is decreased by 90% compared with no compensation. The residual positioning error of the grating scale is less than 15 μm/10 m, and the accuracy of the machine tool is significantly improved.
Keywords: thermal expansion error of grating scale, error compensation, machine tools, integral method
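A minimal sketch of that error model (sensor positions, readings, and the expansion coefficient below are assumed for illustration): interpolate the sensor temperatures to every position x along the scale, then integrate α·(T(x) − T_ref) dx up to the reading position:

```python
import numpy as np

alpha = 11.5e-6                 # thermal expansion coefficient of steel, 1/K (assumed)
T_ref = 20.0                    # reference temperature, degrees C
sensor_pos = np.array([0.0, 2.5, 5.0, 7.5, 10.0])     # sensor positions along scale, m
sensor_T = np.array([20.4, 20.9, 21.3, 21.0, 20.6])   # measured temperatures, degrees C

def expansion_error(X, n=2000):
    """Thermal expansion error at reading position X:
    integral of alpha * (T(x) - T_ref) dx from 0 to X."""
    x = np.linspace(0.0, X, n)
    T = np.interp(x, sensor_pos, sensor_T)   # interpolate between adjacent sensors
    return alpha * np.trapz(T - T_ref, x)    # numerical integral over position

print(f"error at 10 m ~ {expansion_error(10.0) * 1e6:.1f} um")
```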
Procedia PDF Downloads 368
3694 Numerical Investigation of Tsunami Flow Characteristics and Energy Reduction through Flexible Vegetation
Authors: Abhishek Mukherjee, Juan C. Cajas, Jenny Suckale, Guillaume Houzeaux, Oriol Lehmkuhl, Simone Marras
Abstract:
The investigation of tsunami flow characteristics and the quantification of tsunami energy reduction through coastal vegetation are important for understanding the protective benefits of nature-based mitigation parks. In the present study, a three-dimensional non-hydrostatic incompressible computational fluid dynamics model with a two-way-coupled fluid-structure interaction (FSI) approach is used. After validating the numerical model against experimental data, tsunami flow characteristics have been investigated by varying vegetation density, modulus of elasticity, the gap between stems, and the arrangement or distribution of vegetation patches. Streamwise depth-averaged velocity profiles, turbulent kinetic energy, energy flux reflection, and dissipation extracted from the numerical results will be presented in this study. These diagnostics are essential to assess the importance of different parameters in designing proper coastal defense systems. When a tsunami wave reaches the shore, it transforms into undular bores, which induce scour around offshore structures and sediment transport. The bed shear stress, instantaneous turbulent kinetic energy, and near-bed vorticity will be presented to estimate the importance of vegetation in preventing tsunami-induced scour and sediment transport.
Keywords: coastal defense, energy flux, fluid-structure interaction, natural hazards, sediment transport, tsunami mitigation
Procedia PDF Downloads 150
3693 Regression Model Evaluation on Depth Camera Data for Gaze Estimation
Authors: James Purnama, Riri Fitri Sari
Abstract:
We investigate the machine learning algorithm selection problem in the context of depth-image-based eye gaze estimation, with respect to the essential difficulty of reducing the number of required training samples and the training time. Statistics-based prediction accuracy is increasingly used to assess and evaluate prediction or estimation in gaze estimation. This article evaluates root mean squared error (RMSE) and R-squared statistical analysis to assess machine learning methods on depth camera data for gaze estimation. Four machine learning methods have been evaluated: random forest regression, regression tree, support vector machine (SVM), and linear regression. The experiment results show that random forest regression has the lowest RMSE and the highest R-squared, which means that it is the best among the other methods.
Keywords: gaze estimation, gaze tracking, eye tracking, Kinect, regression model, Orange Python
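A minimal sketch of that comparison using scikit-learn, with synthetic stand-ins for the depth-image features and gaze targets:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))                             # stand-in depth-image features
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=500)   # stand-in gaze angle target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
models = {
    "random forest regression": RandomForestRegressor(random_state=1),
    "regression tree": DecisionTreeRegressor(random_state=1),
    "SVM (SVR)": SVR(),
    "linear regression": LinearRegression(),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name}: RMSE={rmse:.3f}, R^2={r2_score(y_te, pred):.3f}")
```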
Procedia PDF Downloads 539
3692 Use of Misoprostol in Pregnancy Termination in the Third Trimester: Oral versus Vaginal Route
Authors: Saimir Cenameri, Arjana Tereziu, Kastriot Dallaku
Abstract:
Introduction: Intrauterine death is a common problem in obstetric practice and can lead to complications if left to resolve spontaneously. The cervix is unprepared, making induction of labor difficult. Misoprostol, an inexpensive synthetic prostaglandin E1 analogue, is considered suitable thanks to its ability to bring about changes in the cervix that lead to the induction of uterine contractions. Misoprostol is quickly absorbed when taken orally, resulting in high initial peak serum concentrations compared with the vaginal route. The vaginal misoprostol peak serum concentration is not as high and demonstrates a more gradual serum concentration decline. This is associated with many benefits for the patient: faster induction of labor, smaller doses, and fewer (dose-dependent) side effects. The most commonly used regimen has been 50 μg every 4 hours, with a high percentage of success and limited side effects. Objective: To evaluate the efficiency of oral and vaginal misoprostol in inducing labor, and to compare it with use not following a previously defined protocol. Methods: Participants in this study included patients at U.H.O.G. 'Koco Gliozheni', Tirana, from April 2004 to July 2006, presenting with an indication for induction of labor in the third trimester for pregnancy termination. A total of 37 patients were randomly admitted for labor induction: 26 according to the protocol (10 oral vs. 16 vaginal), and a control group of 11 patients, not subject to the protocol, was created. Oral or vaginal misoprostol was administered at a dose of 50 μg/4 h, while participants in the control group were treated individually by the members of the medical staff. The main outcome of interest was the time from induction of labor to birth. The Kruskal-Wallis test was used to compare the average age, parity, maternal weight, gestational age, Bishop score, size of the uterus, and weight of the fetus between the four groups in the study. The Fisher exact test was used to compare day-stay and causes in the four groups. The Mann-Whitney test was used to compare the time to expulsion and the number of doses between the oral and vaginal groups. For all statistical tests used, a value of P ≤ 0.05 was considered statistically significant. Results: The four groups were comparable with regard to maternal age and weight, parity, abortion indication, Bishop score, fetal weight, and gestational age. There was a significant difference in the percentage of deliveries within 24 hours. The average time from induction to birth per route (vaginal, oral, according to protocol, and not according to the protocol) was 10.43 h, 21.10 h, 15.77 h, and 21.57 h, respectively. There was no difference in maternal complications between the groups. Conclusions: Use of vaginal misoprostol for inducing labor in the third trimester for termination of pregnancy appears to be more effective than the oral route, and even more so than use not following previously approved protocols, where complications are greater and unjustified.
Keywords: inducing labor, misoprostol, pregnancy termination, third trimester
Procedia PDF Downloads 185
3691 On the Influence of Thermal Radiation Upon Heat Transfer Characteristics of a Porous Media Under Local Thermal Non-Equilibrium Condition
Authors: Yasser Mahmoudi, Nader Karimi
Abstract:
The present work numerically investigates the effect of thermal radiation from the solid phase on the rate of heat transfer inside a porous medium. A forced convection heat transfer process within a pipe filled with a porous medium is considered. The Darcy-Brinkman-Forchheimer model is utilized to represent the fluid transport within the porous medium. A local thermal non-equilibrium (LTNE), two-equation model is used to represent the energy transport for the solid and fluid phases. The radiative heat transfer equation is solved by the discrete ordinates method (DOM) to compute the radiative heat flux in the porous medium. Two primary approaches (models A and B) are used to represent the boundary conditions for constant wall heat flux. The effects of radiative heat transfer on the Nusselt numbers of the two phases are examined by comparing the results obtained by the application of models A and B. The fluid Nusselt numbers calculated by the application of models A and B show that the Nusselt number obtained by model A for the radiative case is higher than that predicted for the non-radiative case. However, for model B the fluid Nusselt numbers obtained for the radiative and non-radiative cases are similar.
Keywords: porous media, local thermal non-equilibrium, forced convection heat transfer, thermal radiation, discrete ordinates method (DOM)
Procedia PDF Downloads 325
3690 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms
Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager
Abstract:
This study aims to construct a predictive model capable of foreseeing the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of inclusions (circular, elliptical, square, triangular), their spatial coordinates within the matrix, orientation, volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN), to facilitate this predictive model. Moreover, this research goes beyond the predictive aspect by delving into an inverse analysis using genetic algorithms. The intent is to unveil the intrinsic characteristics of composite materials by evaluating their thermomechanical responses. The foundation of this research lies in the establishment of a comprehensive database that accounts for the array of input parameters mentioned earlier. This database, enriched with this diversity of input variables, serves as a bedrock for the creation of machine learning and genetic algorithm-based models. These models are meticulously trained not only to predict but also to elucidate the mechanical and thermal conduct of composite materials. Remarkably, the coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, boasting scores ranging between 0.97 and 0.99. This achievement marks a significant breakthrough, demonstrating the potential of this innovative approach in the field of materials engineering.
Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties
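A sketch of the inverse-analysis idea under stated assumptions: a trained surrogate (here a made-up closed-form stand-in for the ML model) maps microstructure parameters to an effective property, and a simple genetic algorithm searches for the parameters whose predicted response matches a measured target:

```python
import numpy as np

rng = np.random.default_rng(2)

def surrogate(p):
    """Stand-in for a trained ML model: p = [volume_fraction, contrast]."""
    vf, contrast = p
    return 1.0 + vf * (contrast - 1.0) / (1.0 + 0.5 * vf * (contrast - 1.0))

target = 2.5                       # "measured" effective property (illustrative)
bounds = np.array([[0.05, 0.4],    # volume fraction, as in the paper's range
                   [10.0, 200.0]]) # contrast, as in the paper's range

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 2))
for gen in range(200):
    fitness = np.abs([surrogate(p) - target for p in pop])
    parents = pop[np.argsort(fitness)[:20]]                   # selection: keep the best
    children = parents[rng.integers(0, 20, (40, 2)), [0, 1]]  # uniform crossover per gene
    children += rng.normal(0, 0.01, children.shape) * (bounds[:, 1] - bounds[:, 0])  # mutation
    pop = np.clip(np.vstack([parents, children]), bounds[:, 0], bounds[:, 1])

best = pop[np.argmin(np.abs([surrogate(p) - target for p in pop]))]
print("identified parameters:", best, "-> predicted property:", surrogate(best))
```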
Procedia PDF Downloads 54
3689 Measuring Greenhouse Gas Exchange from Paddy Field Using Eddy Covariance Method in Mekong Delta, Vietnam
Authors: Vu H. N. Khue, Marian Pavelka, Georg Jocher, Jiří Dušek, Le T. Son, Bui T. An, Ho Q. Bang, Pham Q. Huong
Abstract:
Agriculture is an important economic sector of Vietnam, the most popular branch of which is wet rice cultivation. These activities are also known to be the main contributor to national greenhouse gas emissions. In order to understand more about greenhouse gas exchange in these activities, and to investigate the factors influencing carbon cycling and sequestration in these types of ecosystems, the first eddy covariance station was installed in a paddy field in Long An province, Mekong Delta, in 2019. The station was equipped with state-of-the-art equipment for CO₂ and CH₄ gas exchange and micrometeorology measurements. In this study, data from the station were processed following the ICOS (Integrated Carbon Observation System) recommendations for CO₂, while CH₄ was manually processed and gap-filled using a random forest model from methane-gapfill-ml, a machine learning package, as there is no standard method for CH₄ flux gap-filling yet. Finally, the carbon equivalent (Cₑ) balance based on CO₂ and CH₄ fluxes was estimated. The results show that in 2020, even though a new water management practice, alternate wetting and drying, was applied to reduce methane emissions, the paddy field released 928 g Cₑ m⁻² yr⁻¹; in 2021, this was reduced to 707 g Cₑ m⁻² yr⁻¹. At the provincial level, rice cultivation activities in Long An, with a total area of 498,293 ha, released 4.6 million tons of Cₑ in 2020 and 3.5 million tons of Cₑ in 2021.
Keywords: eddy covariance, greenhouse gas, methane, rice cultivation, Mekong Delta
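A sketch of random-forest gap-filling in the spirit of the methane-gapfill-ml approach (the driver names and the synthetic half-hourly data below are illustrative): train on periods where the flux was observed, then predict the gaps from the meteorological drivers.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 1000  # half-hourly records
df = pd.DataFrame({
    "air_temp": rng.normal(28, 3, n),
    "soil_temp": rng.normal(27, 2, n),
    "water_level": rng.normal(5, 2, n),
    "par": rng.uniform(0, 1500, n),
})
df["ch4_flux"] = 0.05 * df["soil_temp"] + 0.02 * df["water_level"] + rng.normal(0, 0.1, n)
df.loc[rng.choice(n, 300, replace=False), "ch4_flux"] = np.nan   # artificial gaps

drivers = ["air_temp", "soil_temp", "water_level", "par"]
observed = df["ch4_flux"].notna()

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(df.loc[observed, drivers], df.loc[observed, "ch4_flux"])

df["ch4_filled"] = df["ch4_flux"]
df.loc[~observed, "ch4_filled"] = rf.predict(df.loc[~observed, drivers])
print(df["ch4_filled"].isna().sum(), "gaps remaining")
```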
Procedia PDF Downloads 142
3688 Improvement of Microstructure, Wear and Mechanical Properties of Modified G38NiCrMo8-4-4 Steel Used in Mining Industry
Authors: Mustafa Col, Funda Gul Koc, Merve Yangaz, Eylem Subasi, Can Akbasoglu
Abstract:
G38NiCrMo8-4-4 steel is widely used in the mining industry, machine parts, and gears due to its high strength and toughness. In this study, the microstructure, wear, and mechanical properties of G38NiCrMo8-4-4 steel modified with boron, as used in the mining industry, were investigated. For this purpose, cast materials were alloyed by melting in an induction furnace to include boron at contents of 0 ppm, 15 ppm, and 50 ppm (wt.) and were formed to dimensions of 150 x 200 x 150 mm by casting into a sand mould. A homogenization heat treatment was applied to the specimens at 1150 °C for 7 hours. Then all specimens were austenitized at 930 °C for 1 hour, quenched in a polymer solution, and tempered at 650 °C for 1 hour. The microstructures of the specimens were investigated using light microscopy and SEM to determine the effect of boron and heat treatment conditions. Microstructure investigations and hardness tests revealed changes in microstructural properties and material hardness with increasing boron content and different heat treatment conditions. Wear tests were carried out using a pin-on-disc tribometer under dry sliding conditions. The Charpy V-notch impact test was performed to determine the toughness properties of the specimens. Fracture and worn surfaces were investigated with a scanning electron microscope (SEM). The results show that boron has a positive effect on the hardness and wear properties of G38NiCrMo8-4-4 steel.
Keywords: G38NiCrMo8-4-4 steel, boron, heat treatment, microstructure, wear, mechanical properties
Procedia PDF Downloads 196
3687 Comparing Deep Architectures for Selecting Optimal Machine Translation
Authors: Despoina Mouratidis, Katia Lida Kermanidis
Abstract:
Machine translation (MT) is a very important task in natural language processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system, and it also helps improve performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular ones in automatic MT evaluation are score-based, such as the BLEU score, while others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information like part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. This framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures have been tested with this schema: (i) fully-connected dense, (ii) convolutional neural network (CNN), and (iii) long short-term memory (LSTM). Experiments show that all tested architectures achieved better results when compared against those of some well-known basic approaches, such as random forest (RF) and support vector machine (SVM). Better accuracy results are obtained when LSTM layers are used in our schema. In terms of a balance between the results, better accuracy is obtained when dense layers are used, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis is carried out. In this context, problems have been identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results for Italian as compared to Greek, taking into account that the linguistic features employed are language-independent.
Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification
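A minimal sketch of the dense variant of such a pairwise classifier, assuming Keras/TensorFlow and illustrative embedding sizes (each sample concatenates vector representations of the SMT output, the NMT output, and the reference; the data is synthetic):

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(4)
dim = 100                                                # per-sentence embedding size (assumed)
X = rng.normal(size=(2000, 3 * dim)).astype("float32")   # [smt | nmt | reference] vectors
y = rng.integers(0, 2, 2000)                             # 1 if the NMT output is the better one

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3 * dim,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # which MT output wins
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)
# An LSTM variant would instead consume the token-level embedding sequences of the
# two translations, e.g., a tf.keras.layers.LSTM(64) branch per input sentence.
```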
Procedia PDF Downloads 133
3686 Assessment of Ultra-High Cycle Fatigue Behavior of EN-GJL-250 Cast Iron Using Ultrasonic Fatigue Testing Machine
Authors: Saeedeh Bakhtiari, Johannes Depessemier, Stijn Hertelé, Wim De Waele
Abstract:
High cycle fatigue, comprising up to 10⁷ load cycles, has been the subject of many studies, and the behavior of many materials has been recorded adequately in this regime. However, many applications involve larger numbers of load cycles during the lifetime of machine components. In this ultra-high cycle regime, other failure mechanisms come into play, and the concept of a fatigue endurance limit (assumed for materials such as steel) is often an oversimplification of reality. When machine component design demands high geometrical complexity, cast iron grades become interesting candidate materials. Grey cast iron is known for its low cost, high compressive strength, and good damping properties. However, the ultra-high cycle fatigue behavior of cast iron is poorly documented. The current work focuses on the ultra-high cycle fatigue behavior of EN-GJL-250 (GG25) grey cast iron by developing an ultrasonic (20 kHz) fatigue testing system. Moreover, the testing machine is instrumented to measure the temperature and the displacement of the specimen, and to control the temperature. The high resonance frequency allowed the behavior of the cast iron of interest to be assessed within a matter of days for ultra-high numbers of cycles, and the tests to be repeated to quantify the natural scatter in fatigue resistance.
Keywords: GG25, cast iron, ultra-high cycle fatigue, ultrasonic test
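A back-of-the-envelope check of why 20 kHz excitation brings the ultra-high cycle regime within reach (the target cycle counts are chosen for illustration):

```python
freq = 20_000  # ultrasonic test frequency, cycles per second

for cycles in (1e7, 1e9, 1e10):
    seconds = cycles / freq
    print(f"{cycles:.0e} cycles -> {seconds / 3600:.1f} h = {seconds / 86400:.2f} days")
# 1e7 cycles take ~8 minutes and 1e10 cycles ~5.8 days of continuous cycling;
# intermittent pulsing to control specimen temperature stretches this somewhat.
```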
Procedia PDF Downloads 177
3685 Design Optimization of Doubly Fed Induction Generator Performance by Differential Evolution
Authors: Mamidi Ramakrishna Rao
Abstract:
Doubly-fed induction generators (DFIG), due to their advantages such as speed variation and four-quadrant operation, find application in wind turbines. Besides supplying power to the grid, a DFIG has to support reactive power (kvar) under grid voltage variations, contribute minimum fault current during faults, have high efficiency and minimum weight, and provide adequate rotor protection during crowbar operation, from +20% to -20% of rated speed. To achieve optimum performance, a good electromagnetic design of the DFIG is required. In this paper, a simple and heuristic global optimization method, differential evolution, has been used. The variables considered are lamination details such as slot dimensions, stack diameters, and air gap length, as well as generator stator and rotor stack length. Two operating conditions have been considered: voltage and speed variations. The constraints included the reactive power supplied to the grid and limits on fault current and torque. The optimization has been executed separately for three objective functions: maximum efficiency, weight reduction, and grid fault stator currents. Subsequent calculations led to the conclusion that designs determined through differential evolution help in determining an optimum electrical design for each objective function.
Keywords: design optimization, performance, DFIG, differential evolution
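A sketch of such an optimization set-up using SciPy's differential evolution; the design variables, bounds, and the analysis function below are illustrative stand-ins for an electromagnetic design calculation, with a constraint handled by a penalty:

```python
from scipy.optimize import differential_evolution

# x = [slot_width_mm, slot_depth_mm, air_gap_mm, stack_length_mm]  (assumed variables)
bounds = [(8, 16), (20, 40), (0.8, 2.0), (400, 800)]

def analyze(x):
    """Stand-in for an electromagnetic design calculation returning
    (efficiency, fault_current_pu) for a candidate DFIG design."""
    slot_w, slot_d, gap, stack = x
    efficiency = 0.97 - 0.01 * gap - 1e-5 * abs(stack - 600)
    fault_current = 4.0 + 2.0 / gap
    return efficiency, fault_current

def objective(x):
    eff, i_fault = analyze(x)
    penalty = 10.0 * max(0.0, i_fault - 6.0)   # constraint: fault current <= 6 pu
    return -eff + penalty                      # minimize negative efficiency

result = differential_evolution(objective, bounds, seed=0, tol=1e-8)
print("optimal design:", result.x, "-> efficiency:", analyze(result.x)[0])
```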
Procedia PDF Downloads 150
3684 Machine Learning Prediction of Diabetes Prevalence in the U.S. Using Demographic, Physical, and Lifestyle Indicators: A Study Based on NHANES 2009-2018
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
The objective was to develop a machine learning model to predict diabetes (DM) prevalence in the U.S. population using demographic characteristics, physical indicators, and lifestyle habits, and to analyze how these factors contribute to the likelihood of diabetes. We analyzed data from 23,546 participants aged 20 and older, who were non-pregnant, from the 2009-2018 National Health and Nutrition Examination Survey (NHANES). The dataset included key demographic (age, sex, ethnicity), physical (BMI, leg length, total cholesterol [TCHOL], fasting plasma glucose), and lifestyle indicators (smoking habits). A weighted sample was used to account for NHANES survey design features such as stratification and clustering. A classification machine learning model was trained to predict diabetes status. The target variable was binary (diabetes or non-diabetes), based on fasting plasma glucose measurements. The following models were evaluated: logistic regression (baseline), random forest classifier, gradient boosting machine (GBM), and support vector machine (SVM). Model performance was assessed using accuracy, F1-score, AUC-ROC, and precision-recall metrics. Feature importance was analyzed using SHAP values to interpret the contributions of variables such as age, BMI, ethnicity, and smoking status. The gradient boosting machine (GBM) model outperformed the other classifiers with an AUC-ROC score of 0.85. Feature importance analysis revealed the following key predictors. Age: the most significant predictor, with diabetes prevalence increasing with age, peaking around the 60s for males and 70s for females. BMI: higher BMI was strongly associated with a higher risk of diabetes. Ethnicity: Black participants had the highest predicted prevalence of diabetes (14.6%), followed by Mexican-Americans (13.5%) and Whites (10.6%). TCHOL: diabetics had lower total cholesterol levels, particularly among White participants (mean decline of 23.6 mg/dL). Smoking: smoking showed a slight increase in diabetes risk among Whites (0.2%) but had a limited effect in other ethnic groups. Using machine learning models, we identified key demographic, physical, and lifestyle predictors of diabetes in the U.S. population. The results confirm that diabetes prevalence varies significantly across age, BMI, and ethnic groups, with lifestyle factors such as smoking contributing differently by ethnicity. These findings provide a basis for more targeted public health interventions and resource allocation for diabetes management.
Keywords: diabetes, NHANES, random forest, gradient boosting machine, support vector machine
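A sketch of the GBM-plus-SHAP step under stated assumptions (the features below are synthetic stand-ins; the real study used weighted NHANES samples and more variables):

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 2000
X = pd.DataFrame({
    "age": rng.integers(20, 85, n),
    "bmi": rng.normal(28, 6, n),
    "tchol": rng.normal(190, 35, n),
    "smoker": rng.integers(0, 2, n),
})
# Synthetic outcome: risk rises with age and BMI, loosely echoing the findings.
logit = 0.05 * (X["age"] - 50) + 0.08 * (X["bmi"] - 28)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

explainer = shap.TreeExplainer(gbm)
shap_values = explainer.shap_values(X_te)
importance = np.abs(shap_values).mean(axis=0)   # mean |SHAP| per feature
print(dict(zip(X.columns, importance.round(3))))
```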
Procedia PDF Downloads 12
3683 Evaluating the Seismic Stress Distribution in the High-Rise Structures Connections with Optimal Bracing System
Authors: H. R. Vosoughifar, Seyedeh Zeinab. Hosseininejad, Nahid Shabazi, Seyed Mohialdin Hosseininejad
Abstract:
In recent years, structural designers have advocated the further application of energy absorption devices for damping lateral loads. The unbonded braced frame (UBF) system is one of the efficient damping systems, made of a smart combination of steel and concrete or mortar. In this system, steel bears the earthquake-induced axial force, in compression or tension, without loss of strength. The concrete or mortar around the steel core acts as a constraint for the brace and prevents brace buckling under seismic axial load. In this study, the optimal bracing system for high-rise structures has been evaluated considering the seismic stress distribution in the connections. An actual 18-story structure was modeled using appropriate finite element (FE) software and braced with UBF, eccentrically braced frame (EBF), and concentrically braced frame (CBF) systems. Nonlinear static pushover and time-history analyses were then performed, and the acquired results demonstrate that the UBF system reduces drift values in high-rise buildings. Further statistical analyses show that there is a significant difference between the drift values of the UBF system and those resulting from the EBF and CBF systems. Hence, the seismic stress distribution in the connections of the proposed structure braced with the UBF system was investigated.
Keywords: optimal bracing system, high-rise structure, finite element analysis (FEA), seismic stress
Procedia PDF Downloads 430
3682 Machine Learning for Targeting of Conditional Cash Transfers: Improving the Effectiveness of Proxy Means Tests to Identify Future School Dropouts and the Poor
Authors: Cristian Crespo
Abstract:
Conditional cash transfers (CCTs) have been targeted towards the poor; thus, their targeting assessments check whether these schemes have been allocated to low-income households or individuals. However, CCTs have more than one goal and target group. An additional goal of CCTs is to increase school enrolment. Hence, students at risk of dropping out of school are also a target group. This paper analyses whether one of the most common targeting mechanisms of CCTs, a proxy means test (PMT), is suitable for identifying both the poor and future school dropouts. The PMT is compared with alternative approaches that use the outputs of a predictive model of school dropout. This model was built using machine learning algorithms and rich administrative datasets from Chile. The paper shows that using machine learning outputs in conjunction with the PMT increases targeting effectiveness by identifying more students who are either poor or future dropouts. This joint targeting approach increases effectiveness in different scenarios except when the social valuation of the two target groups differs largely. In these cases, the most likely optimal approach is to solely adopt the targeting mechanism designed to find the highly valued group.
Keywords: conditional cash transfers, machine learning, poverty, proxy means tests, school dropout prediction, targeting
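A toy sketch of the joint targeting rule (the thresholds and synthetic scores are illustrative, not the paper's calibration): a household is selected if its PMT score falls below the poverty cutoff, or if its predicted dropout probability exceeds a risk cutoff:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10_000
pmt_score = rng.normal(100, 15, n)         # proxy-means-test welfare score (synthetic)
p_dropout = rng.beta(2, 8, n)              # ML-predicted dropout probability (synthetic)

pmt_cutoff = np.quantile(pmt_score, 0.30)  # bottom 30% counted as poor (assumed)
dropout_cutoff = 0.5                       # risk threshold (assumed)

pmt_only = pmt_score <= pmt_cutoff
joint = pmt_only | (p_dropout >= dropout_cutoff)
print(f"PMT only: {pmt_only.mean():.1%} targeted")
print(f"joint:    {joint.mean():.1%} targeted "
      f"(+{(joint & ~pmt_only).sum()} at-risk students added)")
```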
Procedia PDF Downloads 205
3681 Computational Intelligence and Machine Learning for Urban Drainage Infrastructure Asset Management
Authors: Thewodros K. Geberemariam
Abstract:
The rapid physical expansion of urbanization, coupled with aging infrastructure, presents unique decision and management challenges for many big-city municipalities. Cities must therefore upgrade and maintain their existing aging urban drainage infrastructure systems to keep up with demand. Given the overall contribution of assets to municipal revenue and the importance of infrastructure to the success of a livable city, many municipalities are currently looking for a robust and smart urban drainage infrastructure asset management solution that combines management, financial, engineering, and technical practices. This robust decision-making shall rely on sound, complete, current, and relevant data that enables asset valuation, impairment testing, lifecycle modeling, and forecasting across multiple asset portfolios. In this paper, predictive computational intelligence (CI) and multi-class machine learning (ML), coupled with online, offline, and historical record data collected from an array of multi-parameter sensors, are used to extract the operational and non-conforming patterns hidden in structured and unstructured data and to produce actionable insight into the current and future states of the network. This paper aims to improve the strategic decision-making process by identifying all possible alternatives, evaluating the risk of each alternative, and choosing the alternative most likely to attain the required goal in a cost-effective manner, using historical and near real-time urban drainage infrastructure data for assets that have previously not benefited from computational intelligence and machine learning advancements.
Keywords: computational intelligence, machine learning, urban drainage infrastructure, classification, prediction, asset management
Procedia PDF Downloads 152
3680 Design of Fuzzy Logic Based Global Power System Stabilizer for Dynamic Stability Enhancement in Multi-Machine Power System
Authors: N. P. Patidar, J. Earnest, Laxmikant Nagar, Akshay Sharma
Abstract:
This paper describes the application of a new-input-signal-based fuzzy power system stabilizer in a multi-machine power system. Instead of conventional input pairs such as speed deviation (∆ω) and the derivative of speed deviation, i.e., acceleration (∆ω̇), or speed deviation and accelerating power deviation of each machine, in this paper the deviation of active power through the tie line connecting the two areas is used as one of the inputs to the fuzzy logic controller, in conjunction with the speed deviation. Fuzzy logic has the features of conceptual simplicity, easy implementation, and computational efficiency. The advantage of this input is that the same signal can be fed to the fuzzy logic controller connected to each machine. The simulated system comprises two fully symmetrical areas coupled together by two 230 kV lines. Each area is equipped with two identical generators rated 20 kV/900 MVA, and area 1 exports 413 MW to area 2. The effectiveness of the proposed control scheme has been assessed by performing small-signal and transient stability assessments. The proposed control scheme has been compared with a conventional PSS. Digital simulation is used to demonstrate the performance of the fuzzy logic controller.
Keywords: power system stabilizer (PSS), small signal stability, inter-area oscillation, fuzzy logic controller, membership function, rule base
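A toy sketch of the Mamdani-style inference such a stabilizer performs, with speed deviation and tie-line power deviation as the two inputs (the membership functions and rule table below are illustrative, not the paper's tuned design):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzify(v):
    # degrees for Negative / Zero / Positive on a normalized [-1, 1] universe
    return {"N": tri(v, -1.5, -1.0, 0.0), "Z": tri(v, -1.0, 0.0, 1.0),
            "P": tri(v, 0.0, 1.0, 1.5)}

# rule table: (speed_dev label, tie_line_dP label) -> crisp output center
rules = {("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
         ("Z", "N"): -0.5, ("Z", "Z"): 0.0, ("Z", "P"): 0.5,
         ("P", "N"): 0.0, ("P", "Z"): 0.5, ("P", "P"): 1.0}

def stabilizer_output(d_omega, d_ptie):
    mu_w, mu_p = fuzzify(d_omega), fuzzify(d_ptie)
    weights = {k: min(mu_w[k[0]], mu_p[k[1]]) for k in rules}   # AND = min
    total = sum(weights.values())
    # weighted-average (centroid-like) defuzzification
    return sum(w * rules[k] for k, w in weights.items()) / total if total else 0.0

print(stabilizer_output(0.3, -0.1))   # small positive stabilizing signal
```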
Procedia PDF Downloads 533
3679 Design and Implementation of a Software Platform Based on Artificial Intelligence for Product Recommendation
Authors: Giuseppina Settanni, Antonio Panarese, Raffaele Vaira, Maurizio Galiano
Abstract:
Nowadays, artificial intelligence is used successfully in academia and industry for its ability to learn from a large amount of data. In particular, in recent years the use of machine learning algorithms in the field of e-commerce has spread worldwide. In this research study, a prototype software platform was designed and implemented in order to suggest to users the most suitable products for their needs. The platform includes a chatbot and a recommender system based on artificial intelligence algorithms that provide suggestions and decision support to the customer. Recommendation systems perform the important function of automatically filtering and personalizing information, thus making it possible to manage the information overload to which the user is exposed on a daily basis. Recently, international research has experimented with the use of machine learning technologies with the aim of increasing the potential of traditional recommendation systems. Specifically, support vector machine algorithms have been implemented, combined with natural language processing techniques that allow the user to interact with the system, express their requests, and receive suggestions. The interested user can access the web platform on the internet using a computer, tablet, or mobile phone, register, provide the necessary information, and view the products that the system deems most appropriate for them. The platform also integrates a dashboard that allows the various functions the platform is equipped with to be used in an intuitive and simple way. The artificial intelligence algorithms have been implemented and trained on historical data collected from user browsing. Finally, the testing phase allowed the implemented model to be validated; it will be further tested by letting customers use it.
Keywords: machine learning, recommender system, software platform, support vector machine
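A minimal sketch of the request-to-product step such a platform might use: a TF-IDF plus SVM text classifier (scikit-learn) that routes a free-text user request to a product category; the training examples below are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

requests = [
    "I need a light laptop for travel",
    "looking for a gaming computer with a strong GPU",
    "phone with a good camera for photos",
    "cheap smartphone with long battery life",
    "quiet dishwasher for a small kitchen",
    "energy efficient washing machine",
]
categories = ["laptop", "laptop", "phone", "phone", "appliance", "appliance"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(requests, categories)
print(clf.predict(["smartphone that takes great pictures"]))   # -> ['phone']
```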
Procedia PDF Downloads 134
3678 Optimizing Cell Culture Performance in an Ambr15 Microbioreactor Using Dynamic Flux Balance and Computational Fluid Dynamic Modelling
Authors: William Kelly, Sorelle Veigne, Xianhua Li, Zuyi Huang, Shyamsundar Subramanian, Eugene Schaefer
Abstract:
The ambr15™ bioreactor is a single-use microbioreactor for cell line development and process optimization. The ambr system offers fully automatic liquid handling with the possibility of fed-batch operation and automatic control of pH and oxygen delivery. With operating conditions for large-scale biopharmaceutical production properly scaled down, microbioreactors such as the ambr15™ can potentially be used to predict the effect of process changes such as modified media or different cell lines. In this study, gassing rates and dilution rates were varied for a semi-continuous cell culture system in the ambr15™ bioreactor. The corresponding changes in metabolite production and consumption, as well as cell growth rate and therapeutic protein production, were measured. Conditions were identified in the ambr15™ bioreactor that produced metabolic shifts and specific metabolic and protein production rates also seen in the corresponding larger (5 liter) scale perfusion process. A dynamic flux balance model was employed to understand and predict the metabolic changes observed. The DFB model predicted the trends observed experimentally, including lower specific glucose consumption when CO₂ was maintained at higher levels (i.e., 100 mm Hg) in the broth. A computational fluid dynamic (CFD) model of the ambr15™ was also developed to understand the transfer of O₂ and CO₂ to the liquid. This CFD model predicted gas-liquid flow in the bioreactor using the ANSYS software. The two-phase flow equations were solved via an Eulerian method, with population balance equations tracking the size of the gas bubbles resulting from breakage and coalescence. Reasonable results were obtained, in that the carbon dioxide mass transfer coefficient (kLa) and the air hold-up increased with higher gas flow rate. Volume-averaged kLa values at 500 RPM increased as the gas flow rate was doubled and matched experimentally determined values. These results form a solid basis for optimizing the ambr15™, using both CFD and FBA modelling approaches together, for use in microscale simulations of larger-scale cell culture processes.
Keywords: cell culture, computational fluid dynamics, dynamic flux balance analysis, microbioreactor
Procedia PDF Downloads 283
3677 Using Machine Learning to Classify Human Fetal Health and Analyze Feature Importance
Authors: Yash Bingi, Yiqiao Yin
Abstract:
The reduction of child mortality is an ongoing struggle and a commonly used factor in determining progress in the medical field. The number of under-5 deaths is around 5 million worldwide, with many of the deaths being preventable. In light of this issue, cardiotocograms (CTGs) have emerged as a leading tool to determine fetal health. By using ultrasound pulses and reading the responses, CTGs help healthcare professionals assess the overall health of the fetus to determine the risk of child mortality. However, interpreting the results of the CTGs is time-consuming and inefficient, especially in underdeveloped areas where an expert obstetrician is hard to come by. Using a support vector machine (SVM) and oversampling, this paper proposes a model that classifies fetal health with an accuracy of 99.59%. To further explain the CTG measurements, an algorithm based on Randomized Input Sampling for Explanation of Black-box Models (RISE) was created, called Feature Alteration for explanation of Black Box Models (FAB), and its findings were compared to Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). This allows doctors and medical professionals to classify fetal health with high accuracy and determine which features were most influential in the process.
Keywords: machine learning, fetal health, gradient boosting, support vector machine, Shapley values, local interpretable model agnostic explanations
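A minimal sketch of the oversample-then-SVM pipeline, assuming imbalanced-learn for the oversampling step and synthetic stand-ins for the CTG features (the 99.59% figure comes from the paper's real dataset, not from this sketch):

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
X = rng.normal(size=(1500, 21))                          # stand-in for 21 CTG measurements
y = rng.choice([0, 1, 2], 1500, p=[0.78, 0.14, 0.08])    # normal / suspect / pathological

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)  # balance classes

svm = SVC(kernel="rbf").fit(X_bal, y_bal)
print(f"accuracy: {accuracy_score(y_te, svm.predict(X_te)):.4f}")
```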
Procedia PDF Downloads 144
3676 Development of Automatic Laser Scanning Measurement Instrument
Authors: Chien-Hung Liu, Yu-Fen Chen
Abstract:
This study used a triangulation laser probe and a three-axis mobile platform for surface measurement, programmed the system, and applied it to real-time analytical statistics of different measured data. This structure was used to design a system integration program: using the triangulation laser probe for scattering- or reflection-based non-contact measurement, transferring the captured signals to the computer through RS-232, and using RS-485 to control the three-axis platform for a wide range of measurements. The data captured by the laser probe are formed into a 3D surface. This study constructed an optical measurement application program based on the visual programming language concept. First, the signals are transmitted to the computer through RS-232/RS-485, and then the signals are stored and recorded in the graphical interface in real time. This programming concept analyzes the various messages, provides proper presentation graphs and data processing to give users friendly graphical interfaces and data-processing state monitoring, and graphically indicates whether the present data are normal. The major functions of the measurement system developed in this study are thickness measurement, SPC, surface smoothness analysis, and analytical calculation of trend lines. A result report can be generated and printed promptly. This study measured different heights and surfaces successfully, performed on-line data analysis and processing effectively, and developed a man-machine interface for users to operate.
Keywords: laser probe, non-contact measurement, triangulation measurement principle, statistical process control, LabVIEW
Procedia PDF Downloads 360
3675 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was previously determined by formal rules. Knowledge was represented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems have typically not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how such products of machine learning models can and should be protected by IP law and, for the purpose of this paper, by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural network methods and deep learning, but this approach can be more easily described by reference to the evolution of natural organisms, and with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world's most significant patent law regimes, such as those of China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in current patenting processes. Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. The inventor of a patent application must be a natural person or a group of persons according to the current legal situation in most patent law regimes. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing a part of the inventive concept. However, when machine learning or the AI algorithm has contributed a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches include identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
Procedia PDF Downloads 109
3674 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population
Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath
Abstract:
Gastric cancer is predominantly caused by demographic and diet factors, as compared to other cancer types. The aim of the study is to predict early gastric cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases were selected who had been followed for 3 years (2016-2019) at the Civil Hospital, Aizawl, Mizoram. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. The supervised machine learning algorithms logistic regression, Naive Bayes, support vector machine (SVM), multilayer perceptron, and random forest were used to analyze the dataset, using Python Jupyter Notebook Version 3. The classification results were evaluated using the metric parameters minimum false positives, Brier score, accuracy, precision, recall, F1 score, and the receiver operating characteristics (ROC) curve. The data analysis results showed Naive Bayes - 88, 0.11; random forest - 83, 0.16; SVM - 77, 0.22; logistic regression - 75, 0.25; and multilayer perceptron - 72, 0.27, with respect to accuracy (in percent) and Brier score. The Naive Bayes algorithm outperforms the others, with very low false positive rates, a low Brier score, and good accuracy. The Naive Bayes classification results in predicting EGC were very satisfactory using only diet and lifestyle factors, which will be very helpful for physicians in educating patients and the public; thereby, the mortality of gastric cancer can be reduced or avoided with this knowledge mining work.
Keywords: early gastric cancer, machine learning, diet, lifestyle characteristics
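A minimal sketch of the Naive Bayes evaluation with accuracy and Brier score (scikit-learn; the 11 synthetic features below stand in for the study's diet and lifestyle variables):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, brier_score_loss

rng = np.random.default_rng(8)
n = 240                                    # 160 controls + 80 cases, as in the study
X = rng.normal(size=(n, 11))               # stand-in risk-factor features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.8, n) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
nb = GaussianNB().fit(X_tr, y_tr)

p_case = nb.predict_proba(X_te)[:, 1]      # predicted probability of being a case
print(f"accuracy:    {accuracy_score(y_te, nb.predict(X_te)):.2f}")
print(f"Brier score: {brier_score_loss(y_te, p_case):.2f}")   # lower is better
```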
Procedia PDF Downloads 164
3673 Hybrid Approach for Country's Performance Evaluation
Authors: C. Slim
Abstract:
This paper presents an integrated model, which hybridizes data envelopment analysis (DEA) and support vector machines (SVM), to classify countries according to their efficiency and performance. This model takes into account aspects of multi-dimensional indicators, decision-making hierarchy, and relativity of measurement. Starting from a set of performance indicators as exhaustive as possible, a process of successive aggregations has been developed to attain an overall evaluation of a country's competitiveness.
Keywords: artificial neural networks (ANN), support vector machine (SVM), data envelopment analysis (DEA), aggregations, indicators of performance
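A sketch of the DEA building block such a hybrid needs: the input-oriented CCR efficiency of each country (decision-making unit), computed as a small linear program with SciPy. The indicator data below are invented for illustration; an SVM stage could then learn to classify countries (e.g., efficient vs. inefficient) from their indicator vectors using these DEA scores as labels.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])  # inputs per country
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                       # outputs per country
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

for k in range(n):
    # variables z = [u (output weights, s), v (input weights, m)]
    c = np.concatenate([-Y[k], np.zeros(m)])           # maximize u . y_k
    A_ub = np.hstack([Y, -X])                          # u . y_j - v . x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[k]])[None]   # normalization: v . x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    print(f"country {k}: CCR efficiency = {-res.fun:.3f}")
```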
Procedia PDF Downloads 340
3672 Behavior of Composite Reinforced Concrete Circular Columns with Glass Fiber Reinforced Polymer I-Section
Authors: Hiba S. Ahmed, Abbas A. Allawi, Riyadh A. Hindi
Abstract:
Pultruded materials made of fiber-reinforced polymer (FRP) come in a broad range of shapes, such as bars, I-sections, C-sections, and other structural sections. These FRP materials are starting to compete with steel as structural materials because of their high resistance, low self-weight, and low maintenance costs, especially in corrosive conditions. This study aimed to evaluate the effectiveness of glass fiber reinforced polymer (GFRP) profiles in hybrid columns, built by combining GFRP profiles with concrete columns, because of their low cost and high structural efficiency. To achieve the aims of this study, nine circular columns with a diameter of 150 mm and a height of 1000 mm were cast using normal concrete with a compressive strength of 35 MPa. The research involved three different types of reinforcement: hybrid circular columns of type IG, with a GFRP I-section and a 1% steel bar reinforcement ratio; hybrid circular columns of type IS, with a steel I-section and a 1% steel bar reinforcement ratio (where the cross-sectional area of the I-section was the same for GFRP and steel); and a reference column (R) without an I-section. The aim was to investigate the ultimate capacity, axial and lateral deformation, strain in the longitudinal and transverse reinforcement, and failure mode of the circular columns under different loading conditions (concentric and eccentric), with eccentricities of 25 mm and 50 mm, respectively. In the second part, a finite element model will be developed using ABAQUS software to validate the experimental results.
Keywords: composite, columns, reinforced concrete, GFRP, axial load
Procedia PDF Downloads 57