Search results for: genome-scale metabolic model
14991 Airplane Stability during Climb/Descend Phase Using a Flight Dynamics Simulation
Authors: Niloufar Ghoreishi, Ali Nekouzadeh
Abstract:
The stability of the flight during maneuvering and in response to probable perturbations is one of the most essential features of an aircraft that should be analyzed and designed for. In this study, we derived the non-linear governing equations of aircraft dynamics during the climb/descend phase and simulated a model aircraft. The corresponding force and moment dimensionless coefficients of the model and their variations with elevator angle and other relevant aerodynamic parameters were measured experimentally. The short-period mode and phugoid mode responses were simulated by solving the governing equations numerically and then compared with the desired stability parameters for the particular level, category, and class of the aircraft model. To meet the target stability, a controller was designed and used. This resulted in significant improvement in the stability parameters of the flight.
Keywords: flight stability, phugoid mode, short period mode, climb phase, damping coefficient
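As a rough companion to the phugoid simulation, the classical Lanchester approximation gives the phugoid natural frequency as roughly g√2/V, independent of aircraft details; the sketch below works out the implied period for an assumed trim airspeed (the airspeed is not a value taken from the study).

```python
# Order-of-magnitude check on the phugoid mode: Lanchester's approximation
# omega_phugoid ~ g*sqrt(2)/V, i.e. a period of pi*sqrt(2)*V/g.
# The airspeed below is an assumed value for illustration only.
import math

g = 9.81          # gravitational acceleration, m/s^2
V = 50.0          # assumed trim airspeed, m/s

omega_phugoid = g * math.sqrt(2) / V
period = 2 * math.pi / omega_phugoid
print(f"phugoid period ~ {period:.1f} s at V = {V:.0f} m/s")
```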
Procedia PDF Downloads 171
14990 Data-Driven Approach to Predict Inpatient's Estimated Discharge Date
Authors: Ayliana Dharmawan, Heng Yong Sheng, Zhang Xiaojin, Tan Thai Lian
Abstract:
To facilitate discharge planning, doctors are presently required to assign an Estimated Discharge Date (EDD) for each patient admitted to the hospital. This assignment of the EDD is largely based on the doctor's judgment, which can be difficult for cases that are complex or relatively new to the doctor. It is hypothesized that a data-driven approach would help doctors make accurate estimations of the discharge date. Making use of routinely collected data on inpatient discharges between January 2013 and May 2016, a predictive model was developed using machine learning techniques to predict the Length of Stay (and hence the EDD) of inpatients at the point of admission. The predictive performance of the model was compared to that of the clinicians using accuracy measures. Overall, the best-performing model was found to predict the EDD with a 38% improvement in Average Squared Error (ASE) compared to the first EDD determined by the present method. Important predictors of the EDD include the provisional diagnosis code, the patient's age, the attending doctor at admission, the medical specialty at admission, the accommodation type, and the mean length of stay of the patient in the past year. The predictive model can be used as a tool to accurately predict the EDD.
Keywords: inpatient, estimated discharge date, EDD, prediction, data-driven
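A minimal sketch of the idea described above, assuming synthetic data: a regressor is trained on admission-time features to predict length of stay, and its Average Squared Error is compared against a stand-in for the clinicians' first EDD estimate. The feature names, data generation, and model choice are illustrative assumptions, not the paper's pipeline.

```python
# Hypothetical length-of-stay regressor vs. a simulated clinician estimate (ASE comparison).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "diagnosis_code": rng.integers(0, 50, n),
    "age": rng.integers(18, 95, n),
    "specialty": rng.integers(0, 12, n),
    "accommodation_type": rng.integers(0, 3, n),
    "mean_los_past_year": rng.exponential(4.0, n),
})
df["length_of_stay"] = df["mean_los_past_year"] + 0.05 * df["age"] + rng.exponential(2.0, n)
df["doctor_first_edd_los"] = df["length_of_stay"] + rng.normal(0, 3, n)   # simulated first EDD

X_tr, X_te = train_test_split(df, test_size=0.3, random_state=0)
features = ["diagnosis_code", "age", "specialty", "accommodation_type", "mean_los_past_year"]
model = GradientBoostingRegressor().fit(X_tr[features], X_tr["length_of_stay"])

ase_model = mean_squared_error(X_te["length_of_stay"], model.predict(X_te[features]))
ase_doctor = mean_squared_error(X_te["length_of_stay"], X_te["doctor_first_edd_los"])
print(f"ASE model: {ase_model:.2f}  vs  ASE first EDD: {ase_doctor:.2f}")
```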
Procedia PDF Downloads 174
14989 Parameters of Main Stage of Discharge between Artificial Charged Aerosol Cloud and Ground in Presence of Model Hydrometeor Arrays
Authors: D. S. Zhuravkova, A. G. Temnikov, O. S. Belova, L. L. Chernensky, T. K. Gerastenok, I. Y. Kalugina, N. Y. Lysov, A.V. Orlov
Abstract:
Investigation of discharges from artificial charged water aerosol clouds in the presence of arrays of model hydrometeors could help to obtain new data about the peculiarities of return stroke formation between a thundercloud and the ground when large volumes of hail particles participate in lightning discharge initiation and propagation stimulation. Artificial charged water aerosol clouds of negative or positive polarity with a potential up to one million volts have been used. Hail has been simulated by a group of conductive model hydrometeors of different forms. The parameters of the impulse current of the main stage of the discharge between the artificial positively and negatively charged water aerosol clouds and the ground in the presence of the model hydrometeor array, and of its corresponding electromagnetic radiation, have been determined. It was established that the parameters of the array of model hydrometeors influence the parameters of the main stage of the discharge between the artificial thundercloud cell and the ground. The maximal values of the main stage current impulse parameters and of the electromagnetic radiation registered by the plate antennas were found for the array of model hydrometeors of the cylinder revolution form for the negatively charged aerosol cloud, and for the array of hydrometeors of the plate rhombus form for the positively charged aerosol cloud, correspondingly. It was found that the parameters of the main stage of the discharge between the artificial charged water aerosol cloud and the ground in the presence of a model hydrometeor array of the different considered forms depend on the polarity of the artificial charged aerosol cloud. On average, for all forms of the investigated model hydrometeor arrays, the values of the amplitude and the current rise of the main stage impulse current, and the amplitude of the corresponding electromagnetic radiation, were 1.1-1.9 times higher for the artificial charged aerosol cloud of positive polarity than for the charged aerosol cloud of negative polarity. Thus, the received results could indicate a possibly more important role of large volumes of hail in the thundercloud on the parameters of the return stroke for positive lightning.
Keywords: main stage of discharge, hydrometeor form, lightning parameters, negative and positive artificial charged aerosol cloud
Procedia PDF Downloads 256
14988 Stress Analysis of Water Wall Tubes of a Coal-fired Boiler during Soot Blowing Operation
Authors: Pratch Kittipongpattana, Thongchai Fongsamootr
Abstract:
This research aimed to study the influences of the soot blowing operation and geometrical variables on the stress characteristics of water wall tubes located in soot blowing areas, which caused the boilers of the Mae Moh power plant to lose generation hours. The research method is divided into two parts: (a) measuring the strain on water wall tubes by using 3-element rosette strain gages during full-capacity plant operation and during periods of soot blowing operations, and (b) creating a finite element model to calculate the stresses on the tubes and validating the model using experimental data from steady-state plant operation. Then, the geometrical variables in the model were changed to study the stresses on the tubes. The results revealed that the stress was not affected by the soot blowing process and that the finite element model gave results within 1.24% error of the experiment. The geometrical variables influenced the stress, with the optimal tube design in this research reducing the average stress by 31.28% compared with the present design.
Keywords: boiler water wall tube, finite element, stress analysis, strain gage rosette
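For context on step (a), here is a minimal sketch of how readings from a 3-element rosette can be converted to stresses, assuming a rectangular (0/45/90 degree) rosette layout, linear-elastic plane stress, and illustrative material constants and readings; none of these values are taken from the study.

```python
# Rectangular rosette -> plane-stress components -> principal stresses (assumed example).
import math

E, nu = 200e9, 0.3                                # assumed elastic modulus [Pa], Poisson ratio
eps_0, eps_45, eps_90 = 310e-6, 180e-6, -90e-6    # hypothetical gage readings [strain]

eps_x, eps_y = eps_0, eps_90
gamma_xy = 2 * eps_45 - eps_0 - eps_90            # shear strain from the 45-degree gage

sigma_x = E / (1 - nu**2) * (eps_x + nu * eps_y)
sigma_y = E / (1 - nu**2) * (eps_y + nu * eps_x)
tau_xy = E / (2 * (1 + nu)) * gamma_xy

center = (sigma_x + sigma_y) / 2
radius = math.hypot((sigma_x - sigma_y) / 2, tau_xy)
print(f"principal stresses: {(center + radius)/1e6:.1f} / {(center - radius)/1e6:.1f} MPa")
```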
Procedia PDF Downloads 389
14987 Evaluation of DNA Oxidation and Chemical DNA Damage Using Electrochemiluminescent Enzyme/DNA Microfluidic Array
Authors: Itti Bist, Snehasis Bhakta, Di Jiang, Tia E. Keyes, Aaron Martin, Robert J. Forster, James F. Rusling
Abstract:
DNA damage from metabolites of lipophilic drugs and pollutants, generated by enzymes, represents a major toxicity pathway in humans. These metabolites can react with DNA to form either 8-oxo-7,8-dihydro-2-deoxyguanosine (8-oxodG), which is the oxidative product of DNA, or covalent DNA adducts, both of which are genotoxic and hence considered important biomarkers for detecting cancer in humans. Therefore, detecting reactions of metabolites with DNA is an effective approach for the safety assessment of new chemicals and drugs. Here we describe a novel electrochemiluminescent (ECL) sensor array which can detect DNA oxidation and chemical DNA damage in a single array, facilitating a more accurate diagnostic tool for genotoxicity screening. DNA and enzyme are assembled layer-by-layer on the pyrolytic graphite array, which is housed in a microfluidic device for sequential detection of the two types of DNA damage. Multiple enzyme reactions are run on test compounds using the array, generating toxic metabolites in situ. These metabolites react with DNA in the films to cause DNA oxidation and chemical DNA damage, which are detected by an ECL-generating osmium compound and a ruthenium polymer, respectively. The method is further validated by the formation of 8-oxodG and DNA adducts using similar films of DNA/enzyme on magnetic bead biocolloid reactors, hydrolyzing the DNA, and analyzing by liquid chromatography-mass spectrometry (LC-MS). Hence, this combined DNA/enzyme array/LC-MS approach can efficiently explore metabolic genotoxic pathways for drugs and environmental chemicals.
Keywords: biosensor, electrochemiluminescence, DNA damage, microfluidic array
Procedia PDF Downloads 368
14986 Artificial Neural Networks and Hidden Markov Model in Landslides Prediction
Authors: C. S. Subhashini, H. L. Premaratne
Abstract:
Landslides are the most recurrent and prominent disaster in Sri Lanka. Sri Lanka has been subjected to a number of extreme landslide disasters that resulted in a significant loss of life, material damage, and distress. It is necessary to explore solutions for preparedness and mitigation to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of using Artificial Neural Networks and the Hidden Markov Model in landslide prediction and the possibility of applying these modern techniques to predict landslides in a prominent geographical area in Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the influencing factors for landslides. A landslide database was created using existing topographic, soil, drainage, and land cover maps and historical data. The landslide-related factors, which include external factors (Rainfall and Number of Previous Occurrences) and internal factors (Soil Material, Geology, Land Use, Curvature, Soil Texture, Slope, Aspect, Soil Drainage, and Soil Effective Thickness), are extracted from the landslide database. These factors are used to recognize the possibility of landslide occurrence using an ANN and an HMM. Each model acquires the relationship between the landslide factors and the hazard index during the training session. The models, with the landslide-related factors as inputs, are trained to predict three classes, namely 'landslide occurs', 'landslide does not occur', and 'landslide likely to occur'. Once trained, the models are able to predict the most likely class for the prevailing data. Finally, the two models were compared with regard to prediction accuracy, False Acceptance Rate, and False Rejection Rate. This research indicates that the Artificial Neural Network could be used as a strong decision support system to predict landslides more efficiently and effectively than the Hidden Markov Model.
Keywords: landslides, influencing factors, neural network model, hidden Markov model
Procedia PDF Downloads 384
14985 A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers
Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi
Abstract:
The Thermal Grill Illusion (TGI) elicits a strong and often painful burning sensation when interlaced warm and cold stimuli, each individually non-painful, excite thermoreceptors beneath the skin. Among several theories of TGI, the "disinhibition" theory is the most widely accepted in the literature. According to this theory, TGI is the result of the disinhibition or unmasking of the pain-sensitive HPC (Heat-Pinch-Cold) nerve fibers due to the inhibition of the cold-sensitive nerve fibers that are responsible for masking the HPC nerve fibers. Although researchers have focused on understanding TGI through experiments and models, none of them investigated the prediction of TGI pain intensity through a computational model. Furthermore, the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. The prediction of pain intensity through a computational model of TGI can help in optimizing thermal displays and in understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and on experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory and by utilizing the existing popular models of warm and cold receptors in the skin. The model aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between the warm and cold grills (each repeated three times). All the participants rated the perceived TGI pain sensation on a scale of one to ten. For the range of temperature differences, the experimentally observed perceived intensity of TGI is compared with the neuronal activity of the pain-sensitive HPC nerve fibers. The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers. Moreover, a similar monotonically increasing relationship is experimentally observed between the temperature differences and the perceived TGI intensity. This shows the potential for comparing the TGI pain intensity observed in the experimental study with the neuronal activity predicted by the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies on pain perception are needed to develop a more accurate version of the current model.
Keywords: thermal grill illusion, computational modelling, simulation, psychophysics, haptics
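A toy numerical illustration of the disinhibition mechanism described above (not the receptor models used in the study): HPC activity driven by cold input is unmasked as the interlaced warm bars suppress cold-fiber inhibition, producing an output that grows monotonically with the warm-cold temperature difference. All response functions and constants are invented for illustration.

```python
# Toy disinhibition model: pain-related HPC output = cold drive minus cold-fiber inhibition.
import numpy as np

def cold_fiber_rate(t_cold, t_warm):
    # cold-specific activity, reduced by spatial summation with the interlaced warm bars
    return np.maximum(0.0, 30.0 - t_cold) * np.exp(-0.08 * np.maximum(0.0, t_warm - 30.0))

def hpc_drive(t_cold):
    # HPC fibers also respond to innocuous cold
    return np.maximum(0.0, 30.0 - t_cold)

k_inhibition = 0.8
for delta in [4, 8, 12, 16, 20]:                 # warm-cold temperature differences [deg C]
    t_warm, t_cold = 32.0 + delta / 2, 32.0 - delta / 2
    unmasked = np.maximum(0.0, hpc_drive(t_cold) - k_inhibition * cold_fiber_rate(t_cold, t_warm))
    print(f"dT = {delta:2d} C  ->  predicted HPC activity {unmasked:.2f}")
```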
Procedia PDF Downloads 171
14984 A Model for Academic Coaching for Success and Inclusive Excellence in Science, Technology, Engineering, and Mathematics Education
Authors: Sylvanus N. Wosu
Abstract:
Research shows that factors such as low motivation, preparation, resources, emotional and social integration, and fear of risk-taking are the most common barriers to access, matriculation, and retention in science, technology, engineering, and mathematics (STEM) disciplines for underrepresented (URM) students. These factors have been shown to impact students' attraction to and success in STEM fields. Standardized tests such as the SAT and ACT, often used as predictors of success, are not always true predictors of success for African American and Hispanic American students. Without an adequate academic support environment, even a high SAT score does not guarantee academic success in science and engineering. This paper proposes a model of academic coaching for building success and inclusive excellence in STEM education. Academic coaching is framed as a process of motivating students to be independent learners through relational mentorship, facilitating learning supports inside and outside of the classroom or school environment, and developing problem-solving skills and success attitudes that lead to higher performance in specific subjects. The model is formulated based on best strategies and practices for enriching Academic Performance Impact skills and motivating students' interests in STEM. A scaled model for measuring the Academic Performance Impact (API) index in STEM is discussed. The study correlates the API with state standardized tests and shows that the average impact of those skills can be predicted by the Academic Performance Impact (API) index or Academic Preparedness Index.
Keywords: diversity, equity, graduate education, inclusion, inclusive excellence, model
Procedia PDF Downloads 201
14983 Engineering Photodynamic with Radioactive Therapeutic Systems for Sustainable Molecular Polarity: Autopoiesis Systems
Authors: Moustafa Osman Mohammed
Abstract:
This paper introduces Luhmann's autopoietic social systems, starting with the original concept of autopoiesis by biologists and scientists, including the modification of general systems based on socialized medicine. A specific type of autopoietic system is explained in the three existing groups of ecological phenomena: interaction, social and medical sciences. This hypothesis model, nevertheless, has a nonlinear interaction with its natural environment, an 'interactional cycle' for the exchange of photon energy with molecules without any changes in topology. The external forces in the system's environment might be concomitant with the influence of natural fluctuations (e.g., radioactive radiation, electromagnetic waves). The cantilever sensor deploys insights into the future chip processor for prevention of social metabolic systems. Thus, circuits with resonant electric and optical properties are prototyped on board as an intra-chip/inter-chip transmission for producing electromagnetic energy of approximately 1.7 mA at 3.3 V to service detection in locomotion with the least significant power losses. Nowadays, therapeutic systems assimilate materials from embryonic stem cells to aggregate multiple functions of the vessels' natural de-cellular structure for replenishment. Meanwhile, the interior actuators deploy base-pair complementarity of nucleotides for the symmetric arrangement, in particular bacterial nanonetworks of the sequence cycle, creating double-stranded DNA strings. The DNA strands must be sequenced, assembled, and decoded in order to reconstruct the original source reliably. The exterior actuators are designed with the ability to sense different variations in the corresponding patterns regarding beat-to-beat heart rate variability (HRV) for spatial autocorrelation of molecular communication, which consists of human electromagnetic, piezoelectric, electrostatic and electrothermal energy, to monitor and transfer the dynamic changes of all the cantilevers simultaneously in a real-time workspace with high precision. A prototype-enabled dynamic energy sensor has been investigated in the laboratory for inclusion of nanoscale devices in the architecture, with a fuzzy logic control for detection of thermal and electrostatic changes and with optoelectronic devices to interpret the uncertainty associated with signal interference. Ultimately, the controversial aspect of molecular frictional properties is adjusted, and the molecules form their unique spatial structure modules, providing the environment's mutual contribution to the investigation of mass temperature changes due to the pathogenic archival architecture of clusters.
Keywords: autopoiesis, nanoparticles, quantum photonics, portable energy, photonic structure, photodynamic therapeutic system
Procedia PDF Downloads 124
14982 Churn Prediction for Telecommunication Industry Using Artificial Neural Networks
Authors: Ulas Vural, M. Ergun Okay, E. Mesut Yildiz
Abstract:
Telecommunication service providers demand accurate and precise prediction of customer churn probabilities to increase the effectiveness of their customer relation services. The large amount of customer data owned by the service providers is suitable for analysis by machine learning methods. In this study, expenditure data of customers are analyzed by using an artificial neural network (ANN). The ANN model is applied to the data of customers with different billing durations. The proposed model successfully predicts the churn probabilities with 83% accuracy using only three months of expenditure data, and the prediction accuracy increases up to 89% when nine months of data are used. The experiments also show that the accuracy of the ANN model increases on an extended feature set with information on the changes in the bill amounts.
Keywords: customer relationship management, churn prediction, telecom industry, deep learning, artificial neural networks
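A hedged sketch of such an ANN churn classifier on expenditure features; the network size, preprocessing, and feature layout of the paper are not reproduced, and the data below are random placeholders.

```python
# Small feed-forward ANN producing a churn probability from monthly expenditure features.
import numpy as np
from tensorflow import keras

n_customers, n_months = 10_000, 9
X = np.random.rand(n_customers, n_months + 2)   # e.g. 9 monthly bills + bill-change features
y = np.random.randint(0, 2, n_customers)        # churned within the observation window?

model = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1],)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),   # churn probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.2, verbose=0)
```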
Procedia PDF Downloads 147
14981 Functional Expression and Characterization of a Novel Indigenous Endo-Beta 1,4- Glucanase from Apis mellifera
Authors: Amtul Jamil Sami
Abstract:
Apis mellifera is an insect of immense economic importance that lives on a rich carbohydrate diet including cellulose, nectar, honey, and pollen. Carbohydrate metabolism in A. mellifera has not been fully understood, as there are no data available on the functional expression of its cellulase gene. A cellulose-hydrolyzing enzyme is required for the digestion of the pollen cellulose wall to release important nutrients (amino acids, minerals, vitamins, etc.) from the pollen. A dissection of the Apis genome revealed that one gene is present for the expression of endo-beta-1,4-glucanase for cellulose hydrolysis. In the presented work, the functional expression of the endo-beta-1,4-glucanase gene is reported. Total soluble proteins of the honey bee were isolated and tested for cellulose-hydrolyzing enzyme activity using carboxy-methyl cellulose as a substrate. A. mellifera proteins were able to hydrolyze carboxy-methyl cellulose, confirming the enzyme's endo-type mode of action. Endo-beta-1,4-glucanase activity was only present in the gut tissues; no activity was detected in the salivary glands. The pH optimum of the enzyme was in the acidic range of 4.5-5.0, indicating its metabolic role in the acidic stomach of A. mellifera. The reported enzyme is unique, as the endo-beta-1,4-glucanase was able to generate non-reducing sugar as an end product. The results presented support the conclusion that the honey bee is capable of producing its own novel endo-beta-1,4-glucanase. Furthermore, this could be helpful in understanding carbohydrate metabolism in A. mellifera.
Keywords: honey bees, endo-beta-1,4-glucanase, Apis mellifera, functional expression
Procedia PDF Downloads 403
14980 Image Ranking to Assist Object Labeling for Training Detection Models
Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman
Abstract:
Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
Keywords: computer vision, deep learning, object detection, semiconductor
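A sketch of the iterative select-label-retrain loop described above; a random score stands in for the U-shaped novelty network, and labeling and detector training are reduced to placeholders so the structure of the loop stays visible and runnable.

```python
# Iterative active labeling: score unlabeled images for novelty, label the top batch, repeat.
import random

def score_novelty(image):
    return random.random()          # placeholder for the U-shaped novelty network

def active_labeling_loop(pool, labeled, rounds=5, batch_size=50):
    pool, labeled = list(pool), list(labeled)
    for _ in range(rounds):
        pool.sort(key=score_novelty, reverse=True)            # most novel images first
        batch, pool = pool[:batch_size], pool[batch_size:]
        labeled += [(img, "manual label") for img in batch]   # manual annotation step
    return labeled                                            # feed into detector training

curated = active_labeling_loop(pool=[f"img_{i}.png" for i in range(500)], labeled=[])
print(f"{len(curated)} curated, labeled examples")
```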
Procedia PDF Downloads 136
14979 A Novel Approach for the Analysis of Ground Water Quality by Using Classification Rules and Water Quality Index
Authors: Kamakshaiah Kolli, R. Seshadri
Abstract:
Water is a key resource in all economic activities, ranging from agriculture to industry. Only a tiny fraction of the planet's abundant water is available to us as fresh water. Assessment of water quality has always been paramount in the field of environmental quality management. It is the foundation for health, hygiene, progress, and prosperity. With the ever-increasing pressure of the human population, there is severe stress on water resources. Therefore, efficient water management is essential for civil society and the betterment of quality of life. The present study emphasizes groundwater quality, sources of groundwater contamination, variation of groundwater quality, and its spatial distribution. The basis for groundwater quality assessment is groundwater bodies and a representative monitoring network enabling determination of the chemical status of the groundwater body. For this study, water samples were collected from various areas of the entire corporation area of Guntur. Water is required by all living organisms, yet only 1.7% of the planet's water is available as groundwater. Water has no calories or nutrients, but it is essential for various metabolic activities in our body. Chemical and physical parameters can be tested to identify the potability of groundwater. Electrical conductivity, pH, total alkalinity, TDS, calcium, magnesium, sodium, potassium, chloride, and sulphate of the groundwater from different areas of the Guntur district were analyzed. Our aim is to check whether the groundwater from the above areas is potable or not. As multiple variables are present, a data mining technique using JRIP rules was employed for classifying the groundwater.
Keywords: groundwater, water quality standards, potability, data mining, JRIP, PCA, classification
Procedia PDF Downloads 430
14978 Extraction of Road Edge Lines from High-Resolution Remote Sensing Images Based on Energy Function and Snake Model
Authors: Zuoji Huang, Haiming Qian, Chunlin Wang, Jinyan Sun, Nan Xu
Abstract:
In this paper, a strategy to extract double road edge lines from an acquired road stripe image is explored. The workflow is as follows: the road stripes are first acquired by a probabilistic boosting tree algorithm and a morphological algorithm, and the road centerlines are detected by a thinning algorithm, so that the initial road edge lines can be acquired along the road centerlines. Then we refine the results in regions with large variation of the local curvature of the centerlines. Specifically, the energy function of the edge line is constructed from gradient features and spectral information, and the Dijkstra algorithm is used to optimize the initial road edge lines. The Snake model is constructed to solve the fracture problem at intersections, and a discrete dynamic programming algorithm is used to solve the model. After that, we obtain the final road network. Experimental results show that the strategy proposed in this paper can be used to extract continuous and smooth road edge lines from high-resolution remote sensing images with an accuracy of 88% in our study area.
Keywords: road edge lines extraction, energy function, intersection fracture, Snake model
Procedia PDF Downloads 338
14977 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models
Authors: I. V. Pinto, M. R. Sooriyarachchi
Abstract:
It is frequently observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood (order 1 & order 2) (MQL1, MQL2) and penalized quasi-likelihood (order 1 & order 2) (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset. Therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes, and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations in all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.
Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error
Procedia PDF Downloads 142
14976 A Unified Constitutive Model for the Thermoplastic/Elastomeric-Like Cyclic Response of Polyethylene with Different Crystal Contents
Authors: A. Baqqal, O. Abduhamid, H. Abdul-Hameed, T. Messager, G. Ayoub
Abstract:
In this contribution, the effect of crystal content on the cyclic response of semi-crystalline polyethylene is studied over a large strain range. Experimental observations on a high-density polyethylene with 72% crystal content and an ultralow-density polyethylene with 15% crystal content are reported. The cyclic stretching exhibits a thermoplastic-like response for high crystallinity and an elastomeric-like response for low crystallinity, both characterized by stress-softening, a hysteresis, and a residual strain, whose amount depends on the crystallinity and the applied strain. Based on the experimental observations, a unified viscoelastic-viscoplastic constitutive model capturing the features of the polyethylene cyclic response is proposed. A two-phase representation of the polyethylene microstructure takes into consideration the effective contribution of the crystalline and amorphous phases to the intermolecular resistance to deformation, which is coupled, to capture the strain hardening, to a resistance to molecular orientation. The features of the polyethylene cyclic response are captured by introducing evolution laws for the model parameters affected by the microstructure alteration due to the cyclic stretching.
Keywords: cyclic loading unloading, polyethylene, semi-crystalline polymer, viscoelastic-viscoplastic constitutive model
Procedia PDF Downloads 224
14975 Ion Thruster Grid Lifetime Assessment Based on Its Structural Failure
Authors: Juan Li, Jiawen Qiu, Yuchuan Chu, Tianping Zhang, Wei Meng, Yanhui Jia, Xiaohui Liu
Abstract:
This article develops a numerical 3D model of sputter erosion depth in an ion thruster optic system using the IFE-PIC (Immersed Finite Element Particle-in-Cell) and Monte Carlo methods, and calculates the sputter erosion rate of the downstream surface of the accelerator grid. Compared with LIPS-200 life test data, the results of the numerical model are in reasonable agreement with the measured data. Finally, we predict the lifetime of the 20 cm diameter ion thruster via the erosion data obtained with the model. The ultimate result demonstrates that, under normal operating conditions, the erosion rate of the grooves worn on the downstream surface of the accelerator grid is 34.6 μm/1000 h, which means the conservative lifetime until structural failure occurs on the accelerator grid is 11,500 hours.
Keywords: ion thruster, accelerator grid, sputter erosion, lifetime assessment
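A back-of-the-envelope check of the quoted numbers: at 34.6 μm per 1000 h, the 11,500-hour lifetime corresponds to roughly 0.4 mm of groove depth, which would be the depth at which structural failure is assumed to occur (the failure-depth criterion itself is not stated in the abstract).

```python
# Depth eroded over the quoted lifetime at the quoted groove erosion rate.
erosion_rate_um_per_1000h = 34.6
lifetime_h = 11_500

eroded_depth_um = erosion_rate_um_per_1000h * lifetime_h / 1000.0
print(f"depth eroded over quoted lifetime: {eroded_depth_um:.0f} um (~{eroded_depth_um/1000:.2f} mm)")
```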
Procedia PDF Downloads 565
14974 Modeling of Turbulent Flow for Two-Dimensional Backward-Facing Step Flow
Authors: Alex Fedoseyev
Abstract:
This study investigates a generalized hydrodynamic equation (GHE) simplified model for the simulation of turbulent flow over a two-dimensional backward-facing step (BFS) at Reynolds number Re=132000. The GHE were derived from the generalized Boltzmann equation (GBE). The GBE was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions. The GHE has additional terms, temporal and spatial fluctuations, compared to the Navier-Stokes equations (NSE). These terms have a timescale multiplier τ, and the GHE becomes the NSE when τ is zero. The nondimensional τ is a product of the Reynolds number and the squared length scale ratio, τ=Re*(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. The BFS flow modeling results obtained by 2D calculations cannot match the experimental data for Re>450. One or two additional equations are required for the turbulence model to be added to the NSE, which typically has two to five parameters to be tuned for specific problems. It is shown that the GHE does not require an additional turbulence model, whereas the turbulent velocity results are in good agreement with the experimental results. A review of several studies on the simulation of flow over the BFS from 1980 to 2023 is provided. Most of these studies used different turbulence models when Re>1000. In this study, the 2D turbulent flow over a BFS with height H=L/3 (where L is the channel height) at Reynolds number Re=132000 was investigated using numerical solutions of the GHE (by a finite-element method) and compared to the solutions from the Navier-Stokes equations, the k-ε turbulence model, and experimental results. The comparison included the velocity profiles at X/L=5.33 (near the end of the recirculation zone, available from the experiment), the recirculation zone length, and the velocity flow field. The mean velocity of the NSE was obtained by averaging the solution over the number of time steps. The solution with a standard k-ε model shows a velocity profile at X/L=5.33 which has no backward flow. A standard k-ε model underpredicts the experimental recirculation zone length X/L=7.0±0.5 by a substantial amount of 20-25%, and a more sophisticated turbulence model is needed for this problem. The obtained data confirm that the GHE results are in good agreement with the experimental results for turbulent flow over a two-dimensional BFS. A turbulence model was not required in this case. The computations were stable. The solution time for the GHE is the same as or less than that for the NSE, and significantly less than that for the NSE with the turbulence model. The proposed approach was limited to 2D and only one Reynolds number. Further work will extend this approach to 3D flow and a higher Re.
Keywords: backward-facing step, comparison with experimental data, generalized hydrodynamic equations, separation, reattachment, turbulent flow
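A worked example of the timescale multiplier defined above, τ = Re·(l/L)²; the length-scale ratio used here is an illustrative assumption, not a value taken from the BFS computations in the study.

```python
# Nondimensional timescale multiplier of the GHE: tau = Re * (l / L)^2.
Re = 132_000                      # Reynolds number of the BFS case
length_ratio = 1.0 / 500.0        # assumed apparent Kolmogorov-to-hydrodynamic ratio l/L

tau = Re * length_ratio**2
print(f"tau = {tau:.4f}")         # tau -> 0 recovers the Navier-Stokes equations
```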
Procedia PDF Downloads 61
14973 Landslide Susceptibility Mapping: A Comparison between Logistic Regression and Multivariate Adaptive Regression Spline Models in the Municipality of Oudka, Northern of Morocco
Authors: S. Benchelha, H. C. Aoudjehane, M. Hakdaoui, R. El Hamdouni, H. Mansouri, T. Benchelha, M. Layelmam, M. Alaoui
Abstract:
Logistic regression (LR) and multivariate adaptive regression splines (MarSpline) are applied and verified for the analysis of landslide susceptibility mapping in Oudka, Morocco, using a geographical information system. From a spatial database containing data such as landslide mapping, topography, soil, hydrology, and lithology, the eight factors related to landslides, namely elevation, slope, aspect, distance to streams, distance to roads, distance to faults, lithology, and Normalized Difference Vegetation Index (NDVI), were calculated or extracted. Using these factors, landslide susceptibility indexes were calculated by the two mentioned methods. Before the calculation, this database was divided into two parts, the first for the formation of the model and the second for validation. The results of the landslide susceptibility analysis were verified using success and prediction rates to evaluate the quality of these probabilistic models. This verification showed that the MarSpline model is the better model, with a success rate (AUC = 0.963) and a prediction rate (AUC = 0.951) higher than those of the LR model (success rate AUC = 0.918, prediction rate AUC = 0.901).
Keywords: landslide susceptibility mapping, logistic regression, multivariate adaptive regression spline, Oudka, Taounate
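A minimal sketch of the logistic-regression half of the comparison, with synthetic arrays standing in for the raster-derived conditioning factors and the landslide inventory: fit on a training partition and report success and prediction rates as AUC values.

```python
# Logistic-regression susceptibility model with AUC-based success and prediction rates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))            # elevation, slope, aspect, distances, lithology, NDVI...
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)

success_auc = roc_auc_score(y_train, lr.predict_proba(X_train)[:, 1])   # success rate
prediction_auc = roc_auc_score(y_test, lr.predict_proba(X_test)[:, 1])  # prediction rate
print(f"success AUC = {success_auc:.3f}, prediction AUC = {prediction_auc:.3f}")
```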
Procedia PDF Downloads 188
14972 Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening
Authors: Ksheeraj Sai Vepuri, Nada Attar
Abstract:
We as humans use words with accompanying visual and facial cues to communicate effectively. Classifying facial emotion using computer vision methodologies has been an active research area in the computer vision field. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset, which contains static images. Instead of using histogram equalization to preprocess the dataset, we used an unsharp mask to emphasize texture and details and to sharpen the edges. We also used ImageDataGenerator from the Keras library for data augmentation. Then we used a Convolutional Neural Network (CNN) model to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that using image preprocessing such as this sharpening technique with a CNN model can improve performance, even when the CNN model is relatively simple.
Keywords: facial expression recognition, image preprocessing, deep learning, CNN
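A hedged sketch of the preprocessing-plus-model idea: an unsharp mask applied to 48x48 faces followed by a small CNN with a 7-way softmax. The filter parameters and layer sizes are illustrative choices, and random arrays stand in for FER-2013; this is not the exact configuration reported in the paper.

```python
# Unsharp-mask sharpening followed by a small 7-class CNN (illustrative configuration).
import numpy as np
import cv2
from tensorflow import keras

def unsharp_mask(img, sigma=1.0, amount=1.5):
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    return cv2.addWeighted(img, 1 + amount, blurred, -amount, 0)   # emphasize edges/texture

X = np.random.randint(0, 256, (128, 48, 48), dtype=np.uint8)       # stand-in for FER-2013
X = np.stack([unsharp_mask(x) for x in X]).astype("float32")[..., None] / 255.0
y = keras.utils.to_categorical(np.random.randint(0, 7, 128), 7)

model = keras.Sequential([
    keras.layers.Input((48, 48, 1)),
    keras.layers.Conv2D(32, 3, activation="relu"), keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"), keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"), keras.layers.Dropout(0.5),
    keras.layers.Dense(7, activation="softmax"),                   # 7 facial expressions
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```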
Procedia PDF Downloads 143
14971 Robust Model Predictive Controller for Uncertain Nonlinear Wheeled Inverted Pendulum Systems: A Tube-Based Approach
Authors: Tran Gia Khanh, Dao Phuong Nam, Do Trong Tan, Nguyen Van Huong, Mai Xuan Sinh
Abstract:
This work addresses the problem of tube-based robust model predictive control for a class of continuous-time systems in the presence of input disturbances. The main objective is to ensure that the state trajectory of the closed-loop system is maintained inside a sequence of tubes. An estimate of the region of attraction of the closed-loop system is derived based on input state stability (ISS) theory and a linearized model in each time interval. The theoretical analysis and simulation results demonstrate the performance of the proposed algorithm for a wheeled inverted pendulum system.
Keywords: input state stability (ISS), tube-based robust MPC, continuous-time nonlinear systems, wheeled inverted pendulum
Procedia PDF Downloads 220
14970 A Data Envelopment Analysis Model in a Multi-Objective Optimization with Fuzzy Environment
Authors: Michael Gidey Gebru
Abstract:
Most Data Envelopment Analysis models operate in a static environment with input and output parameters chosen as deterministic data. However, due to the ambiguity brought on by shifting market conditions, input and output data are not always precisely gathered in real-world scenarios. Fuzzy numbers can be used to address this kind of ambiguity in input and output data. Therefore, this work aims to expand crisp Data Envelopment Analysis into Data Envelopment Analysis with a fuzzy environment. In this study, the input and output data are regarded as triangular fuzzy numbers. Then, the Data Envelopment Analysis model with a fuzzy environment is solved using a multi-objective method to gauge the efficiency of the Decision Making Units. Finally, the developed Data Envelopment Analysis model is illustrated with an application to real data from 50 educational institutions.
Keywords: efficiency, Data Envelopment Analysis, fuzzy, higher education, input, output
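For reference, a sketch of the crisp CCR efficiency score that the fuzzy, multi-objective extension above generalizes, solved as a linear program per Decision Making Unit; the data and solver choice are illustrative, and the fuzzy treatment itself is not reproduced here.

```python
# Crisp CCR (input-oriented, multiplier form) efficiency per DMU via linear programming.
import numpy as np
from scipy.optimize import linprog

X = np.array([[5.0, 14.0], [8.0, 15.0], [7.0, 12.0]])   # inputs, one row per DMU
Y = np.array([[9.0, 4.0], [5.0, 7.0], [4.0, 9.0]])      # outputs, one row per DMU

def ccr_efficiency(k):
    s, m = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[k], np.zeros(m)])             # maximize u . y_k
    A_ub = np.hstack([Y, -X])                            # u . y_j - v . x_j <= 0 for all j
    b_ub = np.zeros(len(X))
    A_eq = np.concatenate([np.zeros(s), X[k]])[None, :]  # v . x_k = 1 (normalization)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

for k in range(len(X)):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```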
Procedia PDF Downloads 60
14969 Fashion, Art and Culture in the Anthropological Management Model
Authors: Lucia Perez, Maria Gaton y Santa Palella
Abstract:
Starting from the etymology of the word culture, the Latin term 'colere', whose meaning is to cultivate, we understand that the society that cultivates its knowledge is laying the foundations for new possibilities. In this sense, art and fashion contain the same attributes: concept, aesthetic principles, and refined techniques. Both play a crucial role, communication, and this implies a sense of community, relationship with tradition, and innovation. This is the mirror in which to contemplate, but also the space that helps to grow. This is the framework where our object of study opens up: the anthropological management or the mission management model applied to fashion exhibitions in museums and cultural institutions. For this purpose, a bibliographic review has been carried out with its subsequent analysis, a case study of three successful exhibitions: 'Christian Dior: designer of dreams', 'Balenciaga and the Spanish painting', and 'China: Through the Looking Glass'. The methodology has been completed with interviews focused on the curators. Amongst the results obtained, it is worth highlighting the fundamental role of transcendent leadership, which, in addition to being results-oriented, must align the motivations of the collaborators with the mission. The anthropological management model conceives management as a service, and it is oriented to the interests of the staff and the public, in short, of the person; this is what enables the objectives of effectiveness, efficiency, and social value to be achieved; dimensions, all necessary for the proper development of the mission of the exhibitions. Fashion, understood as art, is at the service of culture, and therefore of the human being, which defines a transcendent mission. We conclude that the profile of an anthropological management model applied to fashion exhibitions in museums is the ideal one to achieve the purpose of these institutions.
Keywords: art, culture, fashion, anthropological model, fashion exhibitions
Procedia PDF Downloads 103
14968 Design and Simulation of a Double-Stator Linear Induction Machine with Short Squirrel-Cage Mover
Authors: David Rafetseder, Walter Bauer, Florian Poltschak, Wolfgang Amrhein
Abstract:
A flat double-stator linear induction machine (DSLIM) with a short squirrel-cage mover is designed for high thrust force at moderate speeds (< 5 m/s). The performance and motor parameters are determined on the basis of a 2D time-transient simulation with the finite element (FE) software Maxwell 2015. Design guidelines and transformation rules for the space vector theory of the LIM are presented. The resulting thrust calculated from flux and current vectors is compared with the FE results, showing good coherence and reduced noise. The parameters of the equivalent circuit model are obtained.
Keywords: equivalent circuit model, finite element model, linear induction motor, space vector theory
Procedia PDF Downloads 566
14967 Integrated Model for Enhancing Data Security Performance in Cloud Computing
Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali
Abstract:
Cloud computing has been an important and promising field in the recent decade. Cloud computing allows sharing resources, services, and information among the people of the whole world. Although the advantages of using clouds are great, there are many risks in a cloud. Data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed to ensure a secure communication system, hide information from other users, and save the user's time. In the proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For the user authentication process, a username and password are used; the password uses SHA-2 as a one-way hash. The proposed system shows an improvement in the processing time of uploading and downloading files on the cloud in secure form.
Keywords: cloud computing, data security, SAAS, PAAS, IAAS, Blowfish
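A hedged sketch of the named building blocks using the pycryptodome package: Blowfish for confidentiality of the uploaded data, SHA-2 (SHA-256) for an integrity digest, and SHA-256 again as the one-way hash of the password; key management and the authentication protocol of the proposed model are deliberately simplified here.

```python
# Blowfish confidentiality + SHA-256 integrity + SHA-256 one-way password hash (sketch).
import hashlib
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)
data = b"file contents to upload to the cloud"

digest = hashlib.sha256(data).hexdigest()                 # integrity check value
cipher = Blowfish.new(key, Blowfish.MODE_CBC)
ciphertext, iv = cipher.encrypt(pad(data, Blowfish.block_size)), cipher.iv

# download path: decrypt, then verify integrity before accepting the file
plain = unpad(Blowfish.new(key, Blowfish.MODE_CBC, iv=iv).decrypt(ciphertext), Blowfish.block_size)
assert hashlib.sha256(plain).hexdigest() == digest

stored_password_hash = hashlib.sha256(b"user password").hexdigest()   # one-way login hash
```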
Procedia PDF Downloads 477
14966 Saliency Detection Using a Background Probability Model
Authors: Junling Li, Fang Meng, Yichun Zhang
Abstract:
Image saliency detection has been studied for a long time, while several challenging problems remain unsolved, such as detecting saliency inaccurately in complex scenes or suppressing salient objects at the image borders. In this paper, we propose a new saliency detection algorithm to address these problems. We represent the image as a graph with superpixels as nodes. By considering the appearance similarity between the boundary and the background, the proposed method chooses non-salient boundary nodes as background priors to construct the background probability model. The probability that each node belongs to the model is computed, which measures its similarity with the background. Thus we can calculate saliency using the transformed probability as a metric. We compare our algorithm with ten state-of-the-art saliency detection methods on a public database. Experimental results show that our simple and effective approach can tackle those challenging problems that have been baffling in image saliency detection.
Keywords: visual saliency, background probability, boundary knowledge, background priors
Procedia PDF Downloads 429
14965 An Experimental Investigation into Fluid Forces on Road Vehicles in Unsteady Flows
Abstract:
In this research, the effect of unsteady flows acting on road vehicles was experimentally investigated using an advanced and recently introduced wind tunnel. The aims of this study were to extract the characteristics of fluid forces acting on road vehicles under unsteady wind conditions and to obtain new information on drag forces in a practical on-road test. We applied pulsating wind as a representative example of the atmospheric fluctuations that vehicles encounter on the road. That is, we considered the case where the vehicles are moving at constant speed in the air, with large wind oscillations. The experimental tests were performed on the Ahmed-type test model, which is a simplified vehicle model. This model was chosen because of its simplicity and the data accumulated under steady wind conditions. The experiments were carried out with a time-averaged Reynolds number of Re = 4.16x10⁵ and a pulsation period of T = 1.5 s, with an amplitude of η = 0.235. The unsteady fluid forces of drag and lift were obtained utilizing a multi-component load cell. It was observed that the unsteady aerodynamic forces differ significantly from those under steady wind conditions. They exhibit a phase shift and an enhanced response to the wind oscillations. Furthermore, their behavior depends on the slant angle of the rear shape of the model.
Keywords: Ahmed body, automotive aerodynamics, unsteady wind, wind tunnel test
Procedia PDF Downloads 293
14964 Concrete Mixes for Sustainability
Authors: Kristyna Hrabova, Sabina Hüblova, Tomas Vymazal
Abstract:
The structural design of a concrete structure must result in structural safety and serviceability, together with durability, robustness, sustainability, and resilience. A sustainable approach is at the heart of the research agenda around the world, and the fib (International Federation for Structural Concrete) commission is also working on the new Model Code 2020. It is now clear that the effects of mechanical and environmental loads, and even social coherence, need to be reflected and included in designing and evaluating structures. This study aimed to present a methodology for the sustainability assessment of various concrete mixtures.
Keywords: concrete, cement, sustainability, Model Code 2020
Procedia PDF Downloads 178
14963 Bridging the Data Gap for Sexism Detection in Twitter: A Semi-Supervised Approach
Authors: Adeep Hande, Shubham Agarwal
Abstract:
This paper presents a study on identifying sexism in online texts using various state-of-the-art deep learning models based on BERT. We experimented with different feature sets and model architectures and evaluated their performance using precision, recall, F1 score, and accuracy metrics. We also explored the use of a pseudolabeling technique to improve model performance. Our experiments show that the best-performing models were based on BERT, and the multilingual model achieved an F1 score of 0.83. Furthermore, the use of pseudolabeling significantly improved the performance of the BERT-based models. Our findings suggest that BERT-based models with pseudolabeling hold great promise for identifying sexism in online texts with high accuracy.
Keywords: large language models, semi-supervised learning, sexism detection, data sparsity
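A sketch of the pseudolabeling loop only: train on labeled text, keep high-confidence predictions on unlabeled text as pseudo-labels, and retrain on the union. A TF-IDF plus logistic-regression classifier stands in for the fine-tuned multilingual BERT model so the mechanics stay short and runnable; the example texts are made up.

```python
# Pseudolabeling: fit, predict on unlabeled data, keep confident predictions, refit.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_texts = ["women can't drive", "great football match"]
labels = [1, 0]                                              # 1 = sexist, 0 = not sexist
unlabeled_texts = ["she belongs in the kitchen", "the weather is nice today"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(labeled_texts, labels)

proba = clf.predict_proba(unlabeled_texts)
confident = np.max(proba, axis=1) >= 0.8                     # confidence threshold
pseudo_texts = [t for t, keep in zip(unlabeled_texts, confident) if keep]
pseudo_labels = list(np.argmax(proba, axis=1)[confident])

clf.fit(labeled_texts + pseudo_texts, labels + pseudo_labels)  # retrain on the union
```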
Procedia PDF Downloads 70
14962 Mixtures of Length-Biased Weibull Distributions for Loss Severity Modelling
Authors: Taehan Bae
Abstract:
In this paper, a class of length-biased Weibull mixtures is presented to model loss severity data. The proposed model generalizes the Erlang mixtures with a common scale parameter, and it shares many important modelling features with the Erlang mixtures, such as the flexibility to fit various data distribution shapes and weak denseness in the class of positive continuous distributions. We show that the asymptotic tail estimate of the length-biased Weibull mixture is of Weibull type, which makes the model effective for fitting loss severity data with heavy-tailed observations. A method of statistical estimation is discussed, with applications to real catastrophic loss data sets.
Keywords: Erlang mixture, length-biased distribution, transformed gamma distribution, asymptotic tail estimate, EM algorithm, expectation-maximization algorithm
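A minimal sketch of the density being described, assuming the standard size-biasing construction: each component is a length-biased Weibull density f_LB(x) = x·f(x)/E[X] with E[X] = scale·Γ(1 + 1/shape), and the mixture is a weighted sum of such components; the parameter values are illustrative, not fitted to any loss data set.

```python
# Length-biased Weibull mixture density evaluated on a small grid of loss amounts.
import numpy as np
from scipy.special import gamma
from scipy.stats import weibull_min

def length_biased_weibull_pdf(x, shape, scale):
    mean = scale * gamma(1.0 + 1.0 / shape)                  # E[X] of the base Weibull
    return x * weibull_min.pdf(x, c=shape, scale=scale) / mean

def mixture_pdf(x, weights, shapes, scales):
    return sum(w * length_biased_weibull_pdf(x, k, s)
               for w, k, s in zip(weights, shapes, scales))

x = np.linspace(0.01, 20, 5)
print(mixture_pdf(x, weights=[0.6, 0.4], shapes=[1.5, 3.0], scales=[2.0, 6.0]))
```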
Procedia PDF Downloads 224