Search results for: approximate posterior distribution
3431 Sensory Weighting and Reweighting for Standing Postural Control among Children and Adolescents with Autistic Spectrum Disorder Compared with Typically Developing Children and Adolescents
Authors: Eglal Y. Ali, Smita Rao, Anat Lubetzky, Wen Ling
Abstract:
Background: Postural abnormalities, rigidity, clumsiness, and frequent falls are common among children with autism spectrum disorders (ASD). The central nervous system’s ability to process all reliable sensory inputs (weighting) and disregard potentially perturbing sensory input (reweighting) is critical for successfully maintaining standing postural control. This study examined how sensory inputs (visual and somatosensory) are weighted and reweighted to maintain standing postural control in children with ASD compared with typically developing (TD) children. Subjects: Forty children and adolescents (20 TD and 20 with ASD) participated in this study. The groups were matched for age, weight, and height. Participants had normal somatosensory (no somatosensory hypersensitivity), visual, and vestibular perception. Participants with ASD were categorized as severity level 1 according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) and the Social Responsiveness Scale. Methods: Using one force platform, the center of pressure (COP) was measured during quiet standing for 30 seconds, three times, first while standing on a stable surface with eyes open (Condition 1), followed by the remaining three conditions in randomized order: Condition 2, standing on a stable surface with eyes closed (visual input perturbed); Condition 3, standing on a compliant foam surface with eyes open (somatosensory input perturbed); and Condition 4, standing on a compliant foam surface with eyes closed (both visual and somatosensory inputs perturbed). Standing postural control was measured by three outcome measures: COP sway area, COP anterior-posterior (AP) path length, and COP mediolateral (ML) path length (PL). A repeated-measures mixed-model analysis of variance was conducted to determine whether there was a significant difference between the two groups in the means of the three outcome measures across the four conditions. Results: According to all three outcome measures, both groups showed a gradual increase in postural sway from Condition 1 to Condition 4. However, TD participants showed a larger postural sway than those with ASD. There was a significant main effect of condition on all three outcome measures (p < 0.05). Only the COP AP PL showed a significant main effect of group (p < 0.05) and a significant group-by-condition interaction (p < 0.05). In COP AP PL, TD participants showed a significant difference between Condition 2 and the baseline (p < 0.05), whereas the ASD group did not. This suggests that the ASD group did not weight visual input as much as the TD group. A significant difference between conditions for the ASD group was seen only when participants stood on foam, regardless of the visual condition, suggesting that the ASD group relied more on somatosensory inputs to maintain standing postural control. Furthermore, the ASD group exhibited significantly smaller postural sway than TD participants while standing on a stable surface, whereas the postural sway of the ASD group was close to that of the TD group on foam. Conclusion: These results suggest that participants with high-functioning ASD (level 1, no somatosensory hypersensitivity in the ankles and feet) over-rely on somatosensory inputs and use a stiffening strategy for standing postural control. This deviation in the reweighting mechanism might explain the postural abnormalities mentioned above among children with ASD.
Keywords: autism spectrum disorders, postural sway, sensory weighting and reweighting, standing postural control
Procedia PDF Downloads 123
3430 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation
Authors: Li-hsing Shih, Wei-Jen Hsu
Abstract:
Recently, the sale of electric vehicles (EVs) has increased dramatically due to maturing technology development and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs due to their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study tries to forecast the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure on purchasing vehicles, while five product attributes of both EVs and internal combustion engine vehicles (ICEVs) are selected. A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate the market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities of the two vehicles are calculated for a respondent, and his/her choice is thereby determined. Once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers’ choice. This study adopts the assumption of a learning curve to predict the future price of EVs. Based on the learning curve method and past price data of EVs, a regression model is established and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents by using their part-worth utility functions. For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained with a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
Keywords: conjoint model, electrical vehicle, learning curve, Monte Carlo simulation
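A minimal sketch of the simulation phase described above; all utilities, the price coefficient, and the 2030 price distribution are hypothetical placeholders rather than the study's actual survey estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical part-worth utilities per respondent: utility = non-price utility + price_coef * price
n_respondents = 500
price_coef_ev = rng.normal(-0.004, 0.001, n_respondents)   # disutility per price unit (assumed)
base_util_ev = rng.normal(2.0, 1.0, n_respondents)          # non-price utility of the EV (assumed)
util_icev = rng.normal(1.0, 1.0, n_respondents)             # total utility of the competing ICEV (assumed)

# Learning-curve-based price forecast for 2030, treated as a random variable (assumed lognormal)
n_draws = 1000
ev_price_2030 = rng.lognormal(mean=np.log(800_000), sigma=0.15, size=n_draws)

# Monte Carlo: for each simulated price, each respondent picks the product with the higher total utility
shares = np.empty(n_draws)
for i, p in enumerate(ev_price_2030):
    util_ev = base_util_ev + price_coef_ev * p
    shares[i] = np.mean(util_ev > util_icev)   # fraction choosing the EV = market-share estimate

print(f"EV market share: mean={shares.mean():.3f}, "
      f"5th-95th pct=({np.percentile(shares, 5):.3f}, {np.percentile(shares, 95):.3f})")
```

The output is a distribution of market shares rather than a single number, which is the point made in the abstract about reflecting price uncertainty.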
Procedia PDF Downloads 74
3429 Classification of Red, Green and Blue Values from Face Images Using k-NN Classifier to Predict the Skin or Non-Skin
Authors: Kemal Polat
Abstract:
In this study, it has been estimated whether there is skin by using RGB values obtained from the camera and a k-nearest neighbor (k-NN) classifier. The dataset used in this study has an unbalanced distribution and a linearly non-separable structure. This problem can also be called a big data problem. The Skin dataset was taken from the UCI machine learning repository. As the classifier, we have used the k-NN method to handle this big data problem. The k value of the k-NN classifier has been set to 1. To train and test the k-NN classifier, a 50-50% training-testing partition has been used. As the performance metrics, TP rate, FP rate, precision, recall, f-measure, and AUC values have been used to evaluate the performance of the k-NN classifier. The obtained results are as follows: 0.999, 0.001, 0.999, 0.999, 0.999, and 1.00. As can be seen from the obtained results, this proposed method could be used to predict whether the image is skin or not.
Keywords: k-NN classifier, skin or non-skin classification, RGB values, classification
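A minimal sketch of the described workflow, assuming the UCI Skin Segmentation data are available locally as a whitespace-delimited file (the file name and column order are assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

# UCI Skin Segmentation dataset: columns B, G, R, label (1 = skin, 2 = non-skin); file name assumed
data = np.loadtxt("Skin_NonSkin.txt")
X, y = data[:, :3], data[:, 3]

# 50-50% training/testing partition, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0, stratify=y)

clf = KNeighborsClassifier(n_neighbors=1)    # k = 1
clf.fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

scores = (y_pred == 1).astype(float)         # hard predictions used as scores for AUC
print("precision:", precision_score(y_te, y_pred, pos_label=1))
print("recall   :", recall_score(y_te, y_pred, pos_label=1))
print("f-measure:", f1_score(y_te, y_pred, pos_label=1))
print("AUC      :", roc_auc_score(y_te == 1, scores))
```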
Procedia PDF Downloads 252
3428 Developing a Cybernetic Model of Interdepartmental Logistic Interactions in SME
Authors: Jonas Mayer, Kai-Frederic Seitz, Thorben Kuprat
Abstract:
In today’s competitive environment, production’s logistic objectives such as ‘delivery reliability’ and ‘delivery time’ and distribution’s logistic objectives such as ‘service level’ and ‘delivery delay’ are attributed great importance. Especially for small and mid-sized enterprises (SME), attaining these objectives poses a key challenge. Within this context, one of the difficulties is that interactions between departments within the enterprise and their specific objectives are insufficiently taken into account and aligned. Interdepartmental interdependencies, along with contradicting targets set within the different departments, result in enterprises having sub-optimal logistic performance capability. This paper presents a research project which will systematically describe the interactions between departments and convert them into a quantifiable form.
Keywords: department-specific actuating and control variables, interdepartmental interactions, cybernetic model, logistic objectives
Procedia PDF Downloads 376
3427 An Application of Quantile Regression to Large-Scale Disaster Research
Authors: Katarzyna Wyka, Dana Sylvan, JoAnn Difede
Abstract:
Background and significance: Following a disaster, population-based screening programs are routinely established to assess the physical and psychological consequences of exposure. These data sets are highly skewed as only a small percentage of trauma-exposed individuals develop health issues. Commonly used statistical methodology in post-disaster mental health generally involves population-averaged models. Such models aim to capture the overall response to the disaster and its aftermath; however, they may not be sensitive enough to accommodate population heterogeneity in symptomatology, such as post-traumatic stress or depressive symptoms. Methods: We use an archival longitudinal data set from the Weill-Cornell 9/11 Mental Health Screening Program established following the World Trade Center (WTC) terrorist attacks in New York in 2001. Participants are rescue and recovery workers who participated in the site cleanup and restoration (n=2960). The main outcome is the post-traumatic stress symptom (PTSD) severity score assessed via clinician interviews (CAPS). For a detailed understanding of response to the disaster and its aftermath, we are adapting quantile regression methodology with a particular focus on predictors of extreme distress and resilience to trauma. Results: The response variable was defined as the quantile of the CAPS score for each individual under two different scenarios specifying the unconditional quantiles based on: 1) clinically meaningful CAPS cutoff values and 2) the CAPS distribution in the population. We present graphical summaries of the differential effects. For instance, we found that the WTC exposures, namely seeing bodies and feeling that life was in danger during rescue/recovery work, were associated with very high PTSD symptoms. A similar effect was apparent in individuals with a prior psychiatric history. Differential effects were also present for age and education level of the individuals. Conclusion: We evaluate the utility of quantile regression in disaster research in contrast to the commonly used population-averaged models. We focused on assessing the distribution of risk factors for post-traumatic stress symptoms across quantiles. This innovative approach provides a comprehensive understanding of the relationship between dependent and independent variables and could be used for developing tailored training programs and response plans for different vulnerability groups.
Keywords: disaster workers, post traumatic stress, PTSD, quantile regression
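A minimal sketch of a conditional quantile fit of the kind described; the column names and file are placeholders, since the Weill-Cornell screening data are not public:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: caps (PTSD severity), saw_bodies, life_danger, prior_psych, age, education
df = pd.read_csv("wtc_workers.csv")   # placeholder file name

# Fit the same linear predictor at several quantiles of the CAPS distribution
for q in (0.25, 0.50, 0.75, 0.90):
    fit = smf.quantreg("caps ~ saw_bodies + life_danger + prior_psych + age + education", df).fit(q=q)
    print(f"tau = {q}")
    print(fit.params, "\n")
```

Comparing the coefficient estimates across quantiles is what exposes the differential effects (e.g., exposures mattering most in the upper tail) that a single population-averaged regression would miss.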
Procedia PDF Downloads 285
3426 Nonlinear Pollution Modelling for Polymeric Outdoor Insulator
Authors: Rahisham Abd Rahman
Abstract:
In this paper, a nonlinear pollution model has been proposed to compute the electric field distribution over the polymeric insulator surface under wet contaminated conditions. A 2D axisymmetric insulator geometry, energized with 11 kV, was developed and analysed using the Finite Element Method (FEM). A field-dependent conductivity with simplified assumptions was established to characterize the electrical properties of the pollution layer. Comparative field studies showed that simulation with the dynamic pollution model results in a more realistic field profile, offering better understanding of how the electric field behaves under wet polluted conditions.
Keywords: electric field distributions, pollution layer, dynamic model, polymeric outdoor insulators, finite element method (FEM)
Procedia PDF Downloads 403
3425 A Proposal of Local Indentation Techniques for Mechanical Property Evaluation
Authors: G. B. Lim, C. H. Jeon, K. H. Jung
Abstract:
General light metal alloys are often developed as materials for transportation equipment such as automobiles and aircraft. Among the light metal alloys, magnesium is the lightest structural material, with superior specific strength and many attractive physical and mechanical properties. However, the mechanical properties of magnesium alloys are difficult to obtain at warm temperatures. The aim of the present work was to establish an analytical relation between mechanical properties and the plastic flow induced by local indentation. An experimental investigation of the local strain distribution was carried out using specially designed local indentation equipment in conjunction with ARAMIS, based on the digital image correlation method.
Keywords: indentation, magnesium, mechanical property, lightweight material, ARAMIS
Procedia PDF Downloads 498
3424 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both protein-coding and non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient, sequence alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms. The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 after three iterations. The cost (the difference between the predicted and the actual outcome) also decreased from 1.446 to 0.842 and then to 0.718 for the first, second, and third iterations, respectively. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC area of 0.97, indicating an improved predictive ability. The PNRI identified both protein-coding and non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed Protein-coding and Non-coding Region Identifier model efficiently identified the protein-coding and non-coding transcriptomic regions. It could be used in genome annotation and in the analysis of transcriptomes.
Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
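A minimal sketch of the core learning step described above (six features, a sigmoid activation, gradient-based maximization of the log-likelihood, and a threshold chosen from the ROC curve); the feature matrix and labels here are synthetic stand-ins, not the study's transcriptome features:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
n, d = 37_503, 6                                           # data points and features, as in the abstract
X = rng.normal(size=(n, d))
true_w = np.array([0.04, 0.5, 0.7, 0.9, 1.2, 2.6])          # synthetic ground-truth weights
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(X @ true_w)))).astype(float)  # 1 = coding (synthetic)

sigmoid = lambda z: 1 / (1 + np.exp(-z))
w = np.zeros(d)
lr = 0.1
for epoch in range(390):                                    # iteration budget similar to the abstract
    p = sigmoid(X @ w)
    grad = X.T @ (y - p) / n                                # gradient of the mean log-likelihood
    w += lr * grad

# "Dynamic thresholding": pick the cutoff maximizing TPR - FPR (Youden's J) on the ROC curve
scores = sigmoid(X @ w)
fpr, tpr, thr = roc_curve(y, scores)
best = thr[np.argmax(tpr - fpr)]
pred = (scores >= best).astype(float)
print("AUC:", roc_auc_score(y, scores), "threshold:", best)
```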
Procedia PDF Downloads 72
3423 Investigation of the Possible Correlation of Earthquakes with a Red Tide Occurrence in the Persian Gulf and Oman Sea
Authors: Hadis Hosseinzadehnaseri
Abstract:
The red tide is a kind of algal bloom that causes problems of varying magnitude for human life and the environment, and it has therefore become one of the serious global concerns in the field of oceanography in recent decades. This phenomenon has affected Iran's waters, especially the Persian Gulf, over the last few years. Collecting data associated with this phenomenon and comparing them across different parts of the world is significant as a practical way to study and control it. Factors that promote this phenomenon either increase the nutrients required by the algae or provide a favorable environment for blooming. In this study, we examined the probability of a relation between earthquakes and harmful algal blooms in the Persian Gulf's waters by comparing earthquake data and recorded red tides. On the one hand, earthquakes can cause changes in seawater temperature, which is effective in creating a suitable environment; on the other hand, they increase the release of nutrients into the water and their transport from the seabed, so they can play a principal role in the development of red tide occurrences. Comparing the spatial-temporal distribution maps of earthquakes and deadly red tides in the Persian Gulf and Oman Sea supports the hypothesis that there is a meaningful relation between these two distributions. Comparing the number of earthquakes around the world as well as the number of red tides in many parts of the world also indicates a correlation between these two issues. Given the numerous earthquakes, especially in recent years and in the southern part of the country, this should be considered a warning of the possible re-occurrence of a critical, large-scale red tide, since in 2008 the number of recorded earthquakes was higher than in nearby years. In that year, the red tide phenomenon in the Persian Gulf covered about 140,000 square kilometers and the entire Oman Sea, persisting for 10 months in the area, which is considered a record among the algal blooms that have occurred in the world. In this paper, we obtained a logical and reasonable relation between earthquake frequency and the occurrence of this phenomenon through the compilation of statistics on earthquakes in southern Iran from 2000 to the end of the first half of 2013, statistics on the occurrence of red tide in the region, and an examination of similar data from different parts of the world. As shown in Figure 1, according to a survey conducted on the earthquake data, most earthquakes in southern Iran occur first in the fourth Gregorian calendar month, April (coinciding with Ordibehesht and Khordad in the Persian calendar), and then in the tenth Gregorian calendar month, October (coinciding with Aban and Azar in the Persian calendar).
Keywords: red tide, earthquake, persian gulf, harmful algae bloom
Procedia PDF Downloads 502
3422 Stress Analysis of Turbine Blades of Turbocharger Using Structural Steel
Authors: Roman Kalvin, Anam Nadeem, Saba Arif
Abstract:
A turbocharger is a device that is driven by the turbine and increases the efficiency and power output of the engine by forcing external air into the combustion chamber. This study focused on the distribution of stress on the turbine blades and the total deformation that may occur during operation, in order to carry out a static structural analysis of the turbine blades together with the turbocharger. Structural steel was selected as the material for the turbocharger. The assembly of the turbocharger and turbine blades was designed in PRO ENGINEER. Furthermore, the structural analysis was performed using ANSYS. This research concluded that by using structural steel, the efficiency of the engine is improved, and by increasing the number of turbine blades, more waste heat from the combustion chamber is emitted.
Keywords: turbocharger, turbine blades, structural steel, ANSYS
Procedia PDF Downloads 247
3421 Defect Profile Simulation of Oxygen Implantation into Si and GaAs
Authors: N. Dahbi, R. B. Taleb
Abstract:
This study concerns the ion implantation of oxygen into two semiconductors, Si and GaAs, realized by simulation using the SRIM tool. The goal of this study is to compare the effect of implantation energy on the distribution of implanted ions in the two targets and to examine the different processes resulting from the interaction between the oxygen ions and the target atoms (Si, GaAs). SRIM simulation results indicate that the implanted ions have a Gaussian-type depth profile; oxygen produces more vacancies and is implanted deeper in Si compared to GaAs. Also, most of the energy loss is due to ionization and phonon production, whereas vacancy production amounts to only a few percent of the total energy.
Keywords: defect profile, GaAs, ion implantation, SRIM, phonon production, vacancies
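To first order (neglecting the skewness and kurtosis that SRIM also reports), the Gaussian-type depth profile mentioned above can be written as:

```latex
N(x) = \frac{\Phi}{\sqrt{2\pi}\,\Delta R_p}\,
       \exp\!\left[-\frac{(x - R_p)^{2}}{2\,\Delta R_p^{2}}\right],
```

where Φ is the implanted fluence (ions/cm²), R_p the projected range, and ΔR_p the range straggling. A larger R_p in Si than in GaAs is consistent with the lower density and lighter target atoms of Si, which reduce the stopping per unit path length.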
Procedia PDF Downloads 190
3420 Prosperous Digital Image Watermarking Approach by Using DCT-DWT
Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar
Abstract:
Every day, tons of data are embedded in digital media or distributed over the internet. The data are so widely distributed that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information known as a watermark into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image, or text that is impressed onto paper, which provides evidence of its authenticity. Digital watermarking is an extension of the same concept. There are two types of watermarks: visible watermarks and invisible watermarks. In this project, we have concentrated on implementing a watermark in images. The main consideration for any watermarking scheme is its robustness to various attacks.
Keywords: watermarking, digital, DCT-DWT, security
Procedia PDF Downloads 424
3419 Airborne Molecular Contamination in Clean Room Environment
Authors: T. Rajamäki
Abstract:
In a clean room environment, molecular contamination at very small concentrations can cause significant harm to components and processes. This is commonly referred to as airborne molecular contamination (AMC). There is a shortage of high-sensitivity continuous measurement data on the existence and behavior of several of these contaminants. Accordingly, in most cases the correlation between the concentration of harmful molecules and their effect on processes is not known. In addition, the formation and distribution of contaminating molecules are unclear. In this work, sensitive optical techniques are applied in clean room facilities for the investigation of concentrations, forming mechanisms, and effects of contaminating molecules. Special emphasis is on the reactive acid and base gases ammonia (NH3) and hydrogen fluoride (HF). They are key chemicals in several operations taking place in clean room processes.
Keywords: AMC, clean room, concentration, reactive gas
Procedia PDF Downloads 285
3418 Multi-Scale Modeling of Ti-6Al-4V Mechanical Behavior: Size, Dispersion and Crystallographic Texture of Grains Effects
Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vidal, Farhad Rezai-Aria, Christine Boher
Abstract:
Ti-6Al-4V titanium alloy is one of the most widely used materials in the aeronautical and aerospace industries. Because of its high specific strength and good fatigue and corrosion resistance, this alloy is very suitable for moderate-temperature applications. At room temperature, Ti-6Al-4V mechanical behavior is generally controlled by the behavior of the alpha phase (the beta phase fraction is less than 8%). The plastic strain of this phase, based notably on crystallographic slip, can be hindered by various obstacles and mechanisms (crystal lattice friction, sessile dislocations, strengthening by solute atoms and grain boundaries, etc.). The grain aspects of the alpha phase (its morphology and texture) and the nature of its crystallographic lattice (which is hexagonal close-packed) give the plastic strain heterogeneous, discontinuous, and anisotropic characteristics at the local scale. The aim of this work is to develop a multi-scale model of Ti-6Al-4V mechanical behavior using a crystal plasticity approach; this multi-scale model is then used to investigate the effects of grain size, dispersion of grain size, crystallographic texture, and slip system activation on Ti-6Al-4V mechanical behavior under monotonic quasi-static loading. Nine representative elementary volumes (REVs) are built to take into account the physical elements (grain size, dispersion, and crystallographic texture) mentioned above, and then the boundary conditions of a tension test are applied. Finally, a simulation of the mechanical behavior of Ti-6Al-4V and a study of slip system activation in the alpha phase are reported. The results show that the macroscopic mechanical behavior of Ti-6Al-4V is strongly linked to the active slip system family (prismatic, basal, or pyramidal). The crystallographic texture determines which family of slip systems can be activated; therefore, it gives the plastic strain a heterogeneous character and thus an anisotropic macroscopic mechanical behavior of the modeled Ti-6Al-4V alloy. Grain size also influences the mechanical properties of Ti-6Al-4V, especially the yield stress; as the grain size decreases, the yield strength increases. Finally, the grain distribution, which characterizes the morphology aspect (homogeneous or heterogeneous), gives the deformation fields considerable heterogeneity, because crystallographic slip is easier in large grains compared to small grains, which generates a localization of plastic deformation in certain areas and a concentration of stresses in others.
Keywords: multi-scale modeling, Ti-6Al-4V alloy, crystal plasticity, grains size, crystallographic texture
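The reported grain-size trend (higher yield strength at smaller grain size) follows the classical Hall-Petch relation, which is not part of the crystal plasticity model itself but is a useful reference:

```latex
\sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}},
```

where σ₀ is the lattice friction stress, k_y the strengthening coefficient, and d the average grain diameter.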
Procedia PDF Downloads 159
3417 Case Report: Ocular Helminth – In Unusual Site (Lens)
Authors: Chandra Shekhar Majumder, Shamsul Haque, Khondaker Anower Hossain, Rafiqul Islam
Abstract:
Introduction: Ocular helminths are parasites that infect the eye or its adnexa. They can be either motile worms or sessile worms that form cysts. These parasites require two hosts for their life cycle, a definitive host (usually a human) and an intermediate host (usually an insect). While there have been reports of ocular helminths infecting various structures of the eye, including the anterior chamber and subconjunctival space, there is no previous record of such a case involving the lens. Research Aim: The aim of this case report is to present a rare case of ocular helminth infection in the lens and to contribute to the understanding of this unusual site of infection. Methodology: This study is a case report presenting the details and findings of an 80-year-old retired policeman who presented with severe pain, redness, and vision loss in the left eye. The examination revealed the presence of a thread-like helminth in the lens. The data for this case report were collected through clinical examination and the medical records of the patient. The findings are described and presented in a descriptive manner. No statistical analysis was conducted. Case report: An 80-year-old retired policeman attended the OPD, Faridpur Medical College Hospital, with complaints of severe pain, redness, and gross dimness of vision of the left eye for 5 days. He had a history of diabetes mellitus and hypertension for 3 years. On examination, left-eye visual acuity was PL only, with moderate ciliary congestion, KP 2+, cells 2+, and posterior synechiae from the 5 to 7 o’clock position. The lens was opaque. A thread-like helminth was found under the anterior part of the lens. The worm was moving and changing its position during the examination. On examination of the right eye, visual acuity was 6/36 unaided and 6/18 with pinhole. There was lental opacity. Slit-lamp and fundus examinations were within normal limits. The patient was admitted to Faridpur Medical College Hospital. Diabetes mellitus was controlled with insulin. ICCE with PI was done on the same day of admission under depomedrol coverage. The helminth was recovered from the lens. It was thread-like, about 5 to 6 mm in length, 1 mm in width, and pinkish in colour. The patient was followed up after 7 days; VA was HM, and mild ciliary congestion, a few KPs, and cells were present. The media was hazy due to vitreous opacity. The worm was sent to the Department of Parasitology, NIPSOM, Dhaka, for identification. Theoretical Importance: This case report contributes to the existing literature on ocular helminth infections by reporting a unique case involving the lens. It highlights the need for further research to understand the mechanism of entry of helminths into the lens. Conclusion: To the best of our knowledge, this is the first reported case of ocular helminth infection in the lens. The presence of the helminth in the lens raises interesting questions regarding its pathogenesis and entry mechanism. Further study and research are needed to explore these aspects. Ophthalmologists and parasitologists should be aware of the possibility of ocular helminth infections in unusual sites like the lens.
Keywords: helminth, lens, ocular, unusual
Procedia PDF Downloads 45
3416 Phytoremediation Potential of Tomato for Cd and Cr Removal from Polluted Soils
Authors: Jahanshah Saleh, Hossein Ghasemi, Ali Shahriari, Faezeh Alizadeh, Yaaghoob Hosseini
Abstract:
Cadmium and chromium are toxic to most organisms, and different mechanisms have been developed for overcoming the toxic effects of these heavy metals. We studied the uptake and distribution of cadmium and chromium in different organs of tomato (Lycopersicon esculentum L.) plants in nine heavy-metal-polluted soils in western Hormozgan province, Iran. The accumulation of chromium was in increasing pattern of fruit peel
Keywords: cadmium, chromium, phytoextraction, phytostabilization, tomato
Procedia PDF Downloads 351
3415 Modulational Instability of Ion-Acoustic Wave in Electron-Positron-Ion Plasmas with Two-Electron Temperature Distributions
Authors: Jitendra Kumar Chawla, Mukesh Kumar Mishra
Abstract:
The nonlinear amplitude modulation of the ion-acoustic wave is studied in the presence of a two-electron temperature distribution in unmagnetized electron-positron-ion plasmas. The Krylov-Bogoliubov-Mitropolsky (KBM) perturbation method is used to derive the nonlinear Schrödinger equation. The dispersive and nonlinear coefficients are obtained, which depend on the temperature and concentration of the hot and cold electron species as well as the positron density and temperature. The modulationally unstable regions are studied numerically for a wide range of wave numbers. The effects of the temperature and concentration of the hot and cold electrons on the modulational stability are investigated in detail.
Keywords: modulational instability, ion acoustic wave, KBM method
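In studies of this kind, the KBM reduction typically leads to a nonlinear Schrödinger equation for the slowly varying wave amplitude a, with modulational instability governed by the sign of the product of its coefficients; a generic form (the dispersive coefficient P and nonlinear coefficient Q depend on the plasma parameters listed above) is:

```latex
i\,\frac{\partial a}{\partial \tau} + P\,\frac{\partial^{2} a}{\partial \xi^{2}} + Q\,|a|^{2}a = 0,
\qquad \text{modulational instability for } PQ > 0 .
```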
Procedia PDF Downloads 669
3414 Microwave Dielectric Relaxation Study of Diethanolamine with Triethanolamine from 10 MHz-20 GHz
Authors: A. V. Patil
Abstract:
The microwave dielectric relaxation of the diethanolamine with triethanolamine binary mixture has been studied over the frequency range of 10 MHz to 20 GHz, at various temperatures, using the time domain reflectometry (TDR) method for 11 concentrations of the system. The present work reveals molecular interactions between the same multi-functional groups [−OH and –NH2] of the alkanolamines (diethanolamine and triethanolamine) using different models, such as the Debye model, excess model, and Kirkwood model. The dielectric parameters, viz. the static dielectric constant (ε0) and relaxation time (τ), have been obtained with the Debye equation, characterized by a single relaxation time without a relaxation time distribution, by the least squares fit method.
Keywords: diethanolamine, excess properties, kirkwood properties, time domain reflectometry, triethanolamine
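The single-relaxation-time Debye model referred to above relates the complex permittivity measured by TDR to the fitted parameters:

```latex
\varepsilon^{*}(\omega) = \varepsilon_{\infty} + \frac{\varepsilon_{0} - \varepsilon_{\infty}}{1 + j\omega\tau},
```

where ε₀ is the static dielectric constant, ε∞ the high-frequency permittivity, τ the relaxation time, and ω the angular frequency.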
Procedia PDF Downloads 306
3413 A Monitoring System to Detect Vegetation Growth along the Route of Power Overhead Lines
Authors: Eugene Eduful
Abstract:
This paper introduces an approach that utilizes a Wireless Sensor Network (WSN) to detect vegetation encroachment between segments of distribution lines. The WSN was designed and implemented, involving the seamless integration of Arduino Uno and Mega systems. This integration demonstrates a method for addressing the challenges posed by vegetation interference. The primary aim of the study is to improve the reliability of power supply in areas characterized by forested terrain, specifically targeting overhead powerlines. The experimental results validate the effectiveness of the proposed system, revealing its ability to accurately identify and locate instances of vegetation encroachment with a remarkably high degree of precision.
Keywords: wireless sensor network, vegetation encroachment, line of sight, Arduino Uno, XBEE
Procedia PDF Downloads 76
3412 Effect of Viscous Dissipation on 3-D MHD Casson Flow in Presence of Chemical Reaction: A Numerical Study
Authors: Bandari Shanker, Alfunsa Prathiba
Abstract:
The influence of viscous dissipation on MHD Casson 3-D fluid flow in two perpendicular directions past a linearly stretching sheet in the presence of a chemical reaction is explored in this work. For special cases, self-similar solutions are obtained and compared to the available data. With an enhancement in the values of the Eckert number, the temperature boundary layer increases. Further, the current findings are observed to be in great accord with the existing data. In both directions, non-dimensional velocities and the stress distribution are obtained. The relevant data are graphed and explained quantitatively in relation to changes in the Casson fluid parameter as well as other fluid flow parameters.
Keywords: viscous dissipation, 3-D Casson flow, chemical reaction, Eckert number
Procedia PDF Downloads 195
3411 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach
Authors: Jared Beard, Ali Baheri
Abstract:
As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, having high-dimensional state and action spaces. This gives rise to two problems. One is that analytic solutions may not be possible. The other is that in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. Its premise is that a learned model can be used to help find new failure scenarios, making better use of simulations. In spite of these strengths, AST fails to find particularly sparse failures and can be inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used to alleviate this overuse of information. That is, information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively in order to find a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using “knows what it knows” (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes. Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, thus demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, distinct failure modes found, and the relative effect of learning after a number of trials.
Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification
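A minimal single-fidelity sketch of the grid-world setup described above; a plain ε-greedy tabular Q-learning adversary stands in for the KWIK learner, and the grid size, rewards, and agent policy are assumptions rather than the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                                    # grid size (assumed)
GOAL = (N - 1, N - 1)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(pos, a):
    r, c = pos[0] + MOVES[a][0], pos[1] + MOVES[a][1]
    return (min(max(r, 0), N - 1), min(max(c, 0), N - 1))

def agent_policy(pos):
    # The system under test: greedily moves toward the goal
    dr, dc = GOAL[0] - pos[0], GOAL[1] - pos[1]
    if abs(dr) >= abs(dc):
        return 1 if dr > 0 else 0
    return 3 if dc > 0 else 2

Q = np.zeros((N, N, N, N, 4))            # adversary Q-table over joint agent/adversary positions
eps, alpha, gamma = 0.2, 0.5, 0.95
for episode in range(5000):
    agent, adv = (0, 0), (N - 1, 0)
    for t in range(3 * N):
        s = (*agent, *adv)
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
        agent, adv = step(agent, agent_policy(agent)), step(adv, a)
        caught, reached = adv == agent, agent == GOAL
        r = 1.0 if caught else (-1.0 if reached else -0.01)   # reward for inducing a "failure"
        s2 = (*agent, *adv)
        target = r if (caught or reached) else r + gamma * Q[s2].max()
        Q[s][a] += alpha * (target - Q[s][a])
        if caught or reached:
            break
```

In the multi-fidelity variant, a table like this trained with a coarse time step would seed (and be corrected by) a learner running at finer time steps.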
Procedia PDF Downloads 160
3410 Gas Pressure Evaluation through Radial Velocity Measurement of Fluid Flow Modeled by Drift Flux Model
Authors: Aicha Rima Cheniti, Hatem Besbes, Joseph Haggege, Christophe Sintes
Abstract:
In this paper, we consider a drift-flux mixture model of blood flow. The mixture consists of a gas phase, which is carbon dioxide, and a liquid phase, which is an aqueous carbon dioxide solution. This model was used to determine the distributions of the mixture velocity, the mixture pressure, and the carbon dioxide pressure. These theoretical data are used to derive a method for measuring the mean gas pressure through the determination of the radial velocity distribution. This method can be applied in the experimental domain.
Keywords: mean carbon dioxide pressure, mean mixture pressure, mixture velocity, radial velocity
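For reference, the basic drift-flux closure on which such mixture models are built (not the authors' specific formulation) expresses the gas velocity through the mixture volumetric flux j and a drift velocity, together with the mixture density and mass-averaged mixture velocity:

```latex
u_g = C_0\, j + u_{gj}, \qquad
\rho_m = \alpha_g \rho_g + (1 - \alpha_g)\,\rho_l , \qquad
u_m = \frac{\alpha_g \rho_g u_g + (1-\alpha_g)\rho_l u_l}{\rho_m},
```

where α_g is the gas volume fraction, C₀ the distribution parameter, and u_gj the drift velocity.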
Procedia PDF Downloads 327
3409 Subjective Temporal Resources: On the Relationship Between Time Perspective and Chronic Time Pressure to Burnout
Authors: Diamant Irene, Dar Tamar
Abstract:
Burnout, conceptualized within the framework of stress research, is to a large extent a result of a threat to time resources or a feeling of time shortage. In reaction to numerous tasks, deadlines, high output, and the management of different duties encompassing work-home conflicts, many individuals experience ‘time pressure’. Time pressure is characterized as the perception of a lack of available time in relation to the amount of workload. It can be a result of local objective constraints, but it can also be a chronic attribute in coping with life. As such, time pressure is associated in the literature with the general stress experience and can therefore be a direct, contributory burnout factor. The present study examines the relation of chronic time pressure (a feeling of time shortage and of being rushed) with another central aspect of subjective temporal experience: time perspective. Time perspective is a stable personal disposition, capturing the extent to which people subjectively remember the past, live the present, and/or anticipate the future. Based on Hobfoll’s Conservation of Resources Theory, it was hypothesized that individuals with chronic time pressure would experience a permanent threat to their time resources, resulting in relatively increased burnout. In addition, it was hypothesized that different time perspective profiles, based on Zimbardo’s typology of five dimensions (Past Positive, Past Negative, Present Hedonistic, Present Fatalistic, and Future), would be related to different magnitudes of chronic time pressure and of burnout. We expected that individuals with ‘Past Negative’ or ‘Present Fatalistic’ time perspectives would experience more burnout, with chronic time pressure being a moderator variable. Conversely, individuals with a ‘Present Hedonistic’ perspective, with little concern for the future consequences of actions, would experience less chronic time pressure and less burnout. Another angle of temporal experience examined in this study is the difference between the actual distribution of time (as in a typical day) and the desired distribution of time (such as would have been distributed optimally during a day). It was hypothesized that there would be a positive correlation between the gap between these time distributions and chronic time pressure and burnout. Data were collected through an online self-report survey distributed on social networks, with 240 participants (aged 21-65) recruited through convenience and snowball sampling methods from various organizational sectors. The results of the present study support the hypotheses and constitute a basis for future debate regarding the elements of burnout in the modern work environment, with an emphasis on subjective temporal experience. Our findings point to the importance of chronic and stable temporal experiences, such as time pressure and time perspective, in occupational experience. The findings are also discussed with a view to the development of practical methods of burnout prevention.
Keywords: conservation of resources, burnout, time pressure, time perspective
Procedia PDF Downloads 178
3408 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN
Authors: Mohamed Gaafar, Evan Davies
Abstract:
Many municipalities within Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequential public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from different water uses and thus into freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. Therefore, the current study was intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, and then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study, because the well-developed basin has various land-use types, including commercial, industrial, residential, parks, and recreational. The City of Edmonton had already built a MIKE URBAN stormwater model for modelling floods. Nevertheless, this model was built to the trunk level, which means that only the main drainage features were represented. Additionally, this model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which is not to the benefit of studying water quality. So the first goal was to complete the model by updating all stormwater network components. Then, available GIS data were used to calculate different catchment properties such as slope, length, and imperviousness. In order to calibrate and validate this model, data from two temporary pipe-flow monitoring stations, collected during the previous summer, were used along with records from two other permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on the model results was investigated. It was found that the model results were affected by the ratio of impervious areas. The catchment length, although calculated, was also tested because it is an approximate representation of the catchment shape. Surface roughness coefficients were calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63, respectively, were all found to be in acceptable ranges.
Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN
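A minimal sketch of the calibration statistics reported above, computed from paired observed/simulated flow series; the array values are synthetic stand-ins for a monitored storm event:

```python
import numpy as np

def calibration_stats(obs, sim):
    """Correlation, peak error, volume error and extreme differences between observed and simulated flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    peak_error = (sim.max() - obs.max()) / obs.max() * 100          # %
    volume_error = (sim.sum() - obs.sum()) / obs.sum() * 100        # %
    diff = sim - obs
    return {"r": r, "peak_error_%": peak_error, "volume_error_%": volume_error,
            "max_pos_diff": diff.max(), "max_neg_diff": diff.min()}

obs = np.array([0.2, 0.8, 1.9, 2.4, 1.6, 0.9, 0.4])
sim = np.array([0.25, 0.9, 2.0, 2.45, 1.5, 0.8, 0.35])
print(calibration_stats(obs, sim))
```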
Procedia PDF Downloads 301
3407 Further Investigation of α+12C and α+16O Elastic Scattering
Authors: Sh. Hamada
Abstract:
The current work aims to study the rainbow-like structure observed in the elastic scattering of alpha particles on both 12C and 16O nuclei. We reanalyzed the experimental elastic scattering angular distribution data for the α+12C and α+16O nuclear systems at different energies using both the optical model and double folding potentials of different interaction models, such as CDM3Y1, DDM3Y1, CDM3Y6, and BDM3Y1. The potential created by the BDM3Y1 interaction model has the shallowest depth, which reflects the necessity of using a higher renormalization factor (Nr). Both the optical model and the double folding potentials of the different interaction models fairly reproduce the experimental data.
Keywords: density distribution, double folding, elastic scattering, nuclear rainbow, optical model
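Schematically, the double folding potential used with the CDM3Y1/DDM3Y1/CDM3Y6/BDM3Y1 interactions is the effective nucleon-nucleon interaction folded over the projectile and target density distributions, scaled by the renormalization factor Nr:

```latex
V_{DF}(R) = N_r \int d\mathbf{r}_1 \int d\mathbf{r}_2\;
\rho_{\alpha}(\mathbf{r}_1)\,\rho_{T}(\mathbf{r}_2)\,
v_{NN}\!\left(\left|\mathbf{R} + \mathbf{r}_2 - \mathbf{r}_1\right|\right).
```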
Procedia PDF Downloads 238
3406 Evaluation of Soil Erosion Risk and Prioritization for Implementation of Management Strategies in Morocco
Authors: Lahcen Daoudi, Fatima Zahra Omdi, Abldelali Gourfi
Abstract:
In Morocco, as in most Mediterranean countries, water scarcity is a common situation because of low and unevenly distributed rainfall. The expansion of irrigated lands, as well as the growth of urban and industrial areas and tourist resorts, contributes to an increase in water demand. Therefore, in the 1960s Morocco embarked on an ambitious program to increase the number of dams to boost water retention capacity. However, the decrease in the capacity of these reservoirs caused by sedimentation is a major problem; it is estimated at 75 million m3/year. Dams and reservoirs become unusable for their intended purposes due to sedimentation in large rivers, which results from soil erosion. Soil erosion presents an important driving force in the processes affecting the landscape. It has become one of the most serious environmental problems and has raised much interest throughout the world. Monitoring soil erosion risk is an important part of soil conservation practices. The estimation of soil loss risk is the first step for successful control of water erosion. The aim of this study is to estimate the soil loss risk and its spatial distribution in the different regions of Morocco and to prioritize areas for soil conservation interventions. The approach followed is the Revised Universal Soil Loss Equation (RUSLE) using remote sensing and GIS, which is the most popular empirically based model used globally for erosion prediction and control. This model has been tested in many agricultural watersheds in the world, particularly for large-scale basins, due to the simplicity of the model formulation and the easy availability of the dataset. The spatial distribution of the annual soil loss was elaborated by the combination of several factors: rainfall erosivity, soil erodibility, topography, and land cover. The average annual soil loss estimated in several watersheds of Morocco varies from 0 to 50 t/ha/year. Watersheds characterized by high erosion vulnerability are located in the north (Rif Mountains) and, more particularly, in the central part of Morocco (High Atlas Mountains). This variation in vulnerability is highly correlated with slope variation, which indicates that the topography factor is the main agent of soil erosion within these catchments. These results could be helpful for the planning of natural resources management and for implementing sustainable long-term management strategies, which are necessary for soil conservation and for increasing the projected economic life of the dams implemented.
Keywords: soil loss, RUSLE, GIS-remote sensing, watershed, Morocco
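The RUSLE combination applied in such a GIS overlay is the product of the factor layers (the standard form also includes a support-practice factor P, which is often set to 1 where no practices are mapped):

```latex
A = R \times K \times L \times S \times C \times P ,
```

where A is the average annual soil loss (t/ha/yr), R the rainfall erosivity factor, K the soil erodibility factor, L and S the slope length and steepness factors, C the land-cover factor, and P the support-practice factor.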
Procedia PDF Downloads 466
3405 Spare Part Carbon Footprint Reduction with Reman Applications
Authors: Enes Huylu, Sude Erkin, Nur A. Özdemir, Hatice K. Güney, Cemre S. Atılgan, Hüseyin Y. Altıntaş, Aysemin Top, Muammer Yılman, Özak Durmuş
Abstract:
Remanufacturing (reman) applications allow manufacturers to contribute to the circular economy and help to introduce products of almost the same quality that are environmentally friendly and lower in cost. The objective of this study is to show that the carbon footprint of automotive spare parts used in vehicles can be reduced by reman applications, based on Life Cycle Analysis framed by ISO 14040 principles. The study aimed to investigate reman applications for 21 parts in total. So far, research and calculations have been completed for the alternator, turbocharger, starter motor, compressor, manual transmission, automatic transmission, and DPF (diesel particulate filter) parts, respectively. Since the aim of Ford Motor Company and Ford OTOSAN is to achieve net zero based on Science-Based Targets (SBT) and the European Green Deal, which sets out to make the European Union climate neutral by 2050, the effects of reman applications are researched. First, remanufacturing articles available in the literature were searched based on the yearly high volume of spare parts sold. Based on the review results related to material composition and the emissions released during the original production and remanufacturing phases, a base part was selected as a reference. Then, the data on the selected base part from the literature are used to make an approximate estimation of the carbon footprint reduction of the corresponding part used in Ford OTOSAN. The estimation model is based on the weight and material composition of the remanufacturing activity reported in the referenced paper. As a result of this study, it was seen that remanufacturing applications are technically and environmentally feasible, since they have significant effects on reducing the emissions released during the production phase of vehicle components. For this reason, the research and calculations for the total number of targeted products in yearly volume have been completed to a large extent. Thus, based on the targeted parts whose research has been completed, and in line with the net zero targets of Ford Motor Company and Ford OTOSAN by 2050, if remanufacturing applications are preferred instead of conventional production methods, it is possible to reduce a significant amount of the greenhouse gas (GHG) emissions associated with spare parts used in vehicles. Moreover, it is observed that remanufacturing helps to reduce the waste stream and causes less pollution than making products from raw materials, by reusing automotive components.
Keywords: greenhouse gas emissions, net zero targets, remanufacturing, spare parts, sustainability
Procedia PDF Downloads 84
3404 Application of Adaptive Particle Filter for Localizing a Mobile Robot Using 3D Camera Data
Authors: Maysam Shahsavari, Seyed Jamalaldin Haddadi
Abstract:
There are several methods to localize a mobile robot, such as relative, absolute, and probabilistic methods. In this paper, the particle filter is used due to its simple implementation and the fact that it does not need to know the starting position. This method estimates the position of the mobile robot using a probability distribution, relying on a known map of the environment instead of predicting it. Afterwards, it updates this estimate by reading input sensors and control commands. To receive information from the surrounding world, for example the distance to obstacles, a Kinect is used, which is much cheaper than a laser range finder. Finally, after explaining the adaptive particle filter method and its implementation in detail, we compare this method with the dead reckoning method and show that this method is much more suitable for situations in which we have a map of the environment.
Keywords: particle filter, localization, methods, odometry, kinect
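A minimal sketch of one predict-update-resample cycle of a basic (non-adaptive) particle filter of the kind described; the map, noise levels, and the toy range-to-wall measurement model are assumptions, and depth readings would come from the Kinect in practice:

```python
import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES = 1000
particles = rng.uniform([0, 0, -np.pi], [10, 10, np.pi], size=(N_PARTICLES, 3))  # x, y, heading
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def predict(particles, v, w, dt=0.1, sigma=(0.05, 0.02)):
    """Propagate particles with the odometry command (v, w) plus noise."""
    v_n = v + rng.normal(0, sigma[0], len(particles))
    w_n = w + rng.normal(0, sigma[1], len(particles))
    particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
    particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
    particles[:, 2] += w_n * dt
    return particles

def update(particles, weights, z, expected_range, sigma_z=0.2):
    """Re-weight particles by how well the predicted range matches the depth reading z."""
    w = np.exp(-0.5 * ((expected_range(particles) - z) / sigma_z) ** 2)
    weights = weights * w + 1e-300
    return weights / weights.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One cycle with a toy measurement model: range to a wall at x = 10
expected_range = lambda p: 10.0 - p[:, 0]
particles = predict(particles, v=1.0, w=0.0)
weights = update(particles, weights, z=8.7, expected_range=expected_range)
particles, weights = resample(particles, weights)
print("pose estimate:", np.average(particles, axis=0, weights=weights))
```

The adaptive variant additionally adjusts the number of particles between cycles; that step is omitted here.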
Procedia PDF Downloads 269
3403 On Musical Information Geometry with Applications to Sonified Image Analysis
Authors: Shannon Steinmetz, Ellen Gethner
Abstract:
In this paper, a theoretical foundation is developed for patterned segmentation of audio using the geometry of music and statistical manifolds. We demonstrate image content clustering using conic space sonification. The algorithm takes a geodesic curve as a model estimator of the three-parameter Gamma distribution. The random variable is parameterized by musical centricity and centric velocity. Model parameters predict audio segmentation in the form of duration and frame count based on the likelihood of a musical geometry transition. We provide an example using a database of randomly selected images, resulting in statistically significant clusters of similar image content.
Keywords: sonification, musical information geometry, image, content extraction, automated quantification, audio segmentation, pattern recognition
Procedia PDF Downloads 244
3402 Exploring the Impacts of Ogoni/African Indigenous Knowledge in Addressing Environmental Issues in Ogoniland, Nigeria
Authors: Lele Dominic Dummene
Abstract:
Environmental issues are predominant in rural areas where indigenous people reside. These issues cover environmental, health, social, economic, and political problems that emanate from poor environmental management and the unfair distribution of environmental resources. They have greatly affected the lives of the indigenous people and their daily activities. As these environmental issues grow in communities, environmental experts, scientists, and theorists have proposed and developed methods, policies, and strategies to address them in indigenous communities. Thus, this paper explores how Ogoni indigenous knowledge and cultural practices could be used to address environmental issues such as oil pollution and other environment-related issues that have degraded the Ogoni environment.
Keywords: Ogoniland, indigenous knowledge, environment, environmental education
Procedia PDF Downloads 125