Search results for: vehicle problem
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8367

5907 A Method to Compute Efficient 3D Helicopters Flight Trajectories Based On a Motion Polymorph-Primitives Algorithm

Authors: Konstanca Nikolajevic, Nicolas Belanger, David Duvivier, Rabie Ben Atitallah, Abdelhakim Artiba

Abstract:

Finding the optimal 3D path of an aerial vehicle under flight mechanics constraints is a major challenge, especially when the algorithm has to produce real-time results in flight. Kinematic models and Pythagorean Hodograph curves have been widely used in mobile robotics to solve this problem. The level of difficulty is mainly driven by the number of constraints to be saturated at the same time while minimizing the total length of the path. In this paper, we suggest a pragmatic algorithm capable of simultaneously saturating most of the constraints that dimension helicopter 3D trajectories, such as curvature, curvature derivative, torsion, torsion derivative, climb angle, climb angle derivative, and positions. The trajectory generation algorithm is able to generate versatile, complex 3D motion primitives feasible by a helicopter, with parameterization of the curvature and the climb angle. An upper-level "motion primitives concatenation" algorithm is also presented. In this article we introduce a new way of designing three-dimensional trajectories based on what we call the "Dubins gliding symmetry conjecture". This high-performance algorithm will soon be integrated into a real-time decision system dealing with in-flight safety issues.
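
As an illustration of the kind of primitive the abstract refers to, the sketch below integrates a simple kinematic model to produce a single 3D arc with constant curvature and climb angle. The model, parameter values, and function names are assumptions made for illustration only; they are not the authors' algorithm.

```python
import numpy as np

def motion_primitive(kappa, gamma, v=10.0, duration=5.0, dt=0.01):
    """Integrate one constant-curvature, constant-climb-angle 3D arc (illustrative kinematic model).

    kappa : horizontal curvature [1/m], gamma : climb angle [rad], v : speed [m/s].
    """
    n = int(duration / dt)
    x = y = z = psi = 0.0            # start at the origin, heading along +x
    path = np.zeros((n, 3))
    for i in range(n):
        x += v * np.cos(gamma) * np.cos(psi) * dt
        y += v * np.cos(gamma) * np.sin(psi) * dt
        z += v * np.sin(gamma) * dt
        psi += v * kappa * dt        # heading changes at a rate set by the curvature
        path[i] = (x, y, z)
    return path

# Example: a climbing right-hand turn; a library of such primitives could then be
# concatenated by a higher-level algorithm, as the abstract describes.
segment = motion_primitive(kappa=0.01, gamma=np.deg2rad(5.0))
print(segment[-1])                   # end point of the primitive
```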

Keywords: robotics, aerial robots, motion primitives, helicopter

Procedia PDF Downloads 613
5906 Artificial Intelligence in Penetration Testing of a Connected and Autonomous Vehicle Network

Authors: Phillip Garrad, Saritha Unnikrishnan

Abstract:

The recent popularity of connected and autonomous vehicles (CAV) corresponds with an increase in the risk of cyber-attacks. These cyber-attacks have been instigated both by researchers or white-hat hackers and by cyber-criminals. As connected vehicles move towards full autonomy, the impact of these cyber-attacks also grows. The current research details the challenges faced in cybersecurity testing of CAV, including the access to and cost of a representative test setup, as well as the lack of experts in the field. Possible solutions for overcoming these challenges are reviewed and discussed. From these findings, a software-simulated CAV network is established as a cost-effective representative testbed. Penetration tests are then performed on this simulation, demonstrating a cyber-attack on a CAV. Studies have shown Artificial Intelligence (AI) to improve runtime, increase efficiency, and comprehensively cover all the typical test aspects of penetration testing in other industries. There is an attempt to introduce similar AI models into the software simulation. The expectation from this implementation is to see similar improvements in runtime and efficiency for the CAV model. If proven to be an effective means of penetration testing for CAV, this methodology may be used on a full CAV test network.

Keywords: cybersecurity, connected vehicles, software simulation, artificial intelligence, penetration testing

Procedia PDF Downloads 103
5905 Delay-Dependent Passivity Analysis for Neural Networks with Time-Varying Delays

Authors: H. Y. Jung, Jing Wang, J. H. Park, Hao Shen

Abstract:

This brief addresses the passivity problem for neural networks with time-varying delays. The aim is to establish the passivity condition of the considered neural networks.

Keywords: neural networks, passivity analysis, time-varying delays, linear matrix inequality

Procedia PDF Downloads 563
5904 Application of Argumentation for Improving the Classification Accuracy in Inductive Concept Formation

Authors: Vadim Vagin, Marina Fomina, Oleg Morosin

Abstract:

This paper describes an argumentation approach to the problem of inductive concept formation. It is proposed to use argumentation, based on defeasible reasoning with justification degrees, to improve the quality of classification models obtained by generalization algorithms. Experimental results on both clear and noisy data are also presented.

Keywords: argumentation, justification degrees, inductive concept formation, noise, generalization

Procedia PDF Downloads 439
5903 Fully Eulerian Finite Element Methodology for the Numerical Modeling of the Dynamics of Heart Valves

Authors: Aymen Laadhari

Abstract:

During the last decade, an increasing number of contributions have been made in the fields of scientific computing and numerical methodologies applied to the study of the hemodynamics in the heart. In contrast, the numerical aspects concerning the interaction of pulsatile blood flow with highly deformable thin leaflets have been much less explored. This coupled problem remains extremely challenging, and the numerical difficulties include, e.g., the resolution of the full fluid-structure interaction problem with large deformations of extremely thin leaflets, substantial mesh deformations, high transvalvular pressure discontinuities, and contact between leaflets. Although the Lagrangian description of the structural motion and strain measures is naturally used, many numerical complexities can arise when studying large deformations of thin structures. Eulerian approaches represent a promising alternative to readily model large deformations and handle contact issues. We present a fully Eulerian finite element methodology tailored for the simulation of pulsatile blood flow in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets. Our method enables the use of a fluid solver on a fixed mesh, whilst being able to easily model the mechanical properties of the valve. We introduce a semi-implicit time integration scheme based on a consistent Newton-Raphson linearization. A variant of the classical Newton method is introduced and guarantees third-order convergence. High-fidelity computational geometries are built and simulations are performed under physiological conditions. We address in detail the main features of the proposed method, and we report several experiments with the aim of illustrating its accuracy and efficiency.
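
The abstract does not specify which Newton variant attains third-order convergence. Purely as an illustration, the sketch below applies one well-known cubically convergent scheme (a two-step, Traub-type Newton iteration with a frozen derivative) to a scalar equation; it should not be read as the authors' scheme.

```python
def third_order_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Two-step (Traub-type) Newton iteration with cubic convergence on a scalar equation.

    An illustrative third-order variant, not necessarily the one introduced in the paper.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / df(x)            # classical Newton predictor
        x = y - f(y) / df(x)          # corrector reuses the same (frozen) derivative
    return x

# Example: solve x**3 - 2 = 0
root = third_order_newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
print(root)   # ~1.259921
```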

Keywords: eulerian, level set, newton, valve

Procedia PDF Downloads 275
5902 The Response of 4-Hydroxybenzoic Acid on Kv1.4 Potassium Channel Subunit Expressed in Xenopus laevis Oocytes

Authors: Fatin H. Mohamad, Jia H. Wong, Muhammad Bilal, Abdul A. Mohamed Yusoff, Jafri M. Abdullah, Jingli Zhang

Abstract:

Kv1.4 is a Shaker-related member of the voltage-gated potassium channel family which can be associated with the cardiac action potential but can also be found in the Schaffer collaterals and dentate gyrus. It has two inactivation mechanisms: the fast N-type and the slow C-type. Kv1.4 produces rapid current inactivation, and this A-type behaviour makes it a target in antiepileptic drug (AED) selection. In this study, 4-hydroxybenzoic acid, which can be found naturally in bamboo shoots, was tested for its enhancement effect on the potassium current of the Kv1.4 channel expressed in Xenopus laevis oocytes using the two-microelectrode voltage clamp method. The currents obtained were recorded and analyzed with pClamp software, whereas statistical analysis was done with Student's t-test. The ratio of final/peak amplitude is an index of the activity of the Kv1.4 channel: the lower the ratio, the greater the function of Kv1.4. The decrease in this ratio produced by 1 µM 4-hydroxybenzoic acid (n = 7), compared with 0.1% DMSO (vehicle), was mean = 47.62%, SE = 13.76%, P = 0.026 (statistically significant), indicating more opening of Kv1.4 channels under 4-hydroxybenzoic acid. In conclusion, 4-hydroxybenzoic acid can enhance the function of Kv1.4 potassium channels, which is regarded as one of the mechanisms of antiepileptic treatment.
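
A minimal sketch of the analysis pipeline described (final/peak amplitude ratio per oocyte, then a Student's t-test against the vehicle group). The numeric arrays are placeholders for illustration, not the recorded data.

```python
import numpy as np
from scipy import stats

def final_peak_ratio(trace):
    """Final/peak amplitude ratio of a current trace; a lower ratio indicates greater Kv1.4 function."""
    trace = np.asarray(trace, dtype=float)
    return trace[-1] / trace.max()

# Placeholder per-oocyte ratios (illustrative values only)
ratios_4hba = np.array([0.31, 0.42, 0.28, 0.35, 0.40, 0.33, 0.30])   # 1 uM 4-hydroxybenzoic acid, n = 7
ratios_dmso = np.array([0.65, 0.72, 0.60, 0.70, 0.68, 0.66, 0.71])   # 0.1% DMSO vehicle

t_stat, p_value = stats.ttest_ind(ratios_4hba, ratios_dmso)           # Student's t-test, as in the abstract
decrease_pct = 100 * (ratios_dmso.mean() - ratios_4hba.mean()) / ratios_dmso.mean()
print(f"decrease in ratio: {decrease_pct:.1f}%, p = {p_value:.3f}")
```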

Keywords: antiepileptic, Kv1.4 potassium channel, two-microelectrode voltage clamp, Xenopus laevis oocytes, 4-hydroxybenzoic acid

Procedia PDF Downloads 357
5901 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

A DNA barcode is a short mitochondrial DNA fragment made up of nucleotides, each consisting of three subunits: a phosphate group, a sugar, and a nucleic base (A, T, C, or G). Barcodes provide a good source of the information needed to classify living species, an intuition that has been confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcode, and this task has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence-similarity methods. A large set of sequences can be simultaneously compared using Multiple Sequence Alignment, which is known to be NP-complete; to make this type of analysis feasible, heuristics, like progressive alignment, have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable; this approach avoids the complex problem of form and structure in different classes of organisms. The method is evaluated on empirical data, and its classification performance is compared with other methods. Our system consists of three phases. The first, called transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, Fourier transform, and power spectrum signal processing. The second, called approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of DNA barcodes, realized by applying a hierarchical classification algorithm.
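
A minimal sketch of the transformation phase described above: EIIP encoding of a barcode sequence followed by an FFT power spectrum. The EIIP values are the commonly cited ones, and the sequence is a placeholder, not a real barcode from the study.

```python
import numpy as np

# Commonly cited EIIP (electron-ion interaction pseudopotential) values per nucleotide
EIIP = {'A': 0.1260, 'C': 0.1340, 'G': 0.0806, 'T': 0.1335}

def barcode_power_spectrum(sequence):
    """Map a DNA barcode to its EIIP numerical signal and return the FFT power spectrum."""
    signal = np.array([EIIP[b] for b in sequence.upper() if b in EIIP])
    signal = signal - signal.mean()            # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return spectrum

# Placeholder barcode fragment; real COI barcodes are ~650 bp
seq = "ATGCGTACGTTAGCCTAGGCTAACGGTACGATCGATCGTAGCTAGCTAACG"
ps = barcode_power_spectrum(seq)
print(ps[:5])   # this feature vector would feed the MLWNN approximation stage
```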

Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)

Procedia PDF Downloads 313
5900 Rotary Machine Sealing Oscillation Frequencies and Phase Shift Analysis

Authors: Liliia N. Butymova, Vladimir Ya Modorskii

Abstract:

To ensure efficient operation of gas transmittal GCUs, leakages through the labyrinth packings (LP) should be minimized. Leakages can be minimized by decreasing the LP gap, which in turn depends on thermal processes and possible rotor vibrations and is designed to ensure the absence of mechanical contact. Vibration mitigation allows the LP gap to be minimized, so it is advantageous to study the influence of processes in the dynamic gas-structure system on LP vibrations. This paper considers the influence of rotor vibrations on LP gas dynamics and the influence of the latter on the rotor structure within a unidirectionally coupled dynamic FSI problem. Dependences of the nonstationary parameters of the gas-dynamic process in the LP on rotor vibrations were studied under various gas speeds and pressures, shaft rotation speeds, vibration amplitudes, and working media. The multi-processor ANSYS CFX code was chosen as the numerical computation tool, and the problem was solved using the PNRPU high-capacity computer complex. The deformed shaft vibrations are replaced with an unyielding profile that moves up and down in the fixed annulus according to a set harmonic rule; this allows the nonstationary gas-dynamic problem to be solved and the time dependence of the total gas-dynamic force acting on the shaft to be determined. A pressure increase from 0.1 to 10 MPa causes growth of the gas-dynamic force oscillation amplitude and frequency, while the phase shift angle between the gas-dynamic force oscillations and those of the shaft displacement decreases from 3π/4 to π/2; the damping constant has its maximum value at a pressure of 1 MPa in the gap. An increase of the shaft oscillation frequency from 50 to 150 Hz under P = 10 MPa causes growth of the gas-dynamic force oscillation amplitude; the damping constant has its maximum value, 1.012, at 50 Hz. An increase of the shaft vibration amplitude from 20 to 80 µm under P = 10 MPa raises the gas-dynamic force amplitude by up to 20 times, and the damping constant increases from 0.092 to 0.251. Calculations for various working substances (methane, perfect gas, air at 25 °C) show that the minimum persistent oscillating amplitude of the gas-dynamic force under P = 0.1 MPa is observed in methane and the maximum in air; the frequency remains almost unchanged, and the phase shift in air changes from 3π/4 to π/2. Under P = 10 MPa, the maximum gas-dynamic force oscillating amplitude is observed in methane and the minimum in air, and air demonstrates surging. An increase of the leakage speed from 0 to 20 m/s through the LP under P = 0.1 MPa causes the gas-dynamic force oscillating amplitude to decrease by three orders of magnitude, while the oscillation frequency and the phase shift increase two times and stabilize. An increase of the leakage speed from 0 to 20 m/s in the LP under P = 1 MPa causes the gas-dynamic force oscillating amplitude to decrease by almost four orders of magnitude; the phase shift angle increases from π/72 to π/2, and the oscillations become persistent. The flow rate proved to greatly influence the pressure oscillation amplitude and the phase shift angle, and the influence of the working medium depends on the operating conditions: at pressure growth, vibrations are most affected in methane (of the working substances considered), and at pressure decrease, in air at 25 °C.

Keywords: aeroelasticity, labyrinth packings, oscillation phase shift, vibration

Procedia PDF Downloads 291
5899 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, more so when the effects of the participating medium and wavelength properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the problem. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple, yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than the uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The history of some randomly sampled photon bundles is recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical and PMC models with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help in further reduction of computational cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried so that the problem environment can be fully represented to the ANN model. Better results can be achieved in this unexplored domain.
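
To illustrate the difference between pseudo-random and low-discrepancy sampling that the abstract relies on, the sketch below estimates a toy integral with both a uniform random generator and a scrambled Sobol sequence (via SciPy). The integrand is an arbitrary stand-in, not the radiative transfer kernel used in the paper.

```python
import numpy as np
from scipy.stats import qmc

def f(x):
    # Toy stand-in for a photon-bundle contribution; not the radiative transfer kernel itself
    return np.exp(-3.0 * x) * np.sin(2 * np.pi * x) ** 2

n = 2 ** 12
rng = np.random.default_rng(0)

mc_estimate = f(rng.random(n)).mean()            # standard pseudo-random Monte Carlo

sobol = qmc.Sobol(d=1, scramble=True, seed=0)
qmc_points = sobol.random(n).ravel()             # low-discrepancy Sobol points in [0, 1)
qmc_estimate = f(qmc_points).mean()

print(f"MC:  {mc_estimate:.6f}")
print(f"QMC: {qmc_estimate:.6f}   (typically lower variance across repeated runs)")
```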

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 217
5898 Providing a Road Pricing and Toll Allocation Method for Toll Roads

Authors: Ali Babaei

Abstract:

There is a growing worldwide tendency toward constructing infrastructure with private sector participation instead of free exploitation of public infrastructure. Road construction and development through private sector participation is performed in different countries because of its benefits, such as compensating for the public budget deficit in road construction and maintenance and responding to traffic (demand) growth. Toll is the most definite form of budget provision in road development. There are two issues in toll rate assignment: (a) the costing of transport, and (b) cost allocation, i.e., the distribution of cost between different types of vehicles so that each vehicle pays its own share. Toll collection can serve different goals, and its extent varies according to the toll collection strategy. Costing principles in different countries are based on the inclusion of the whole transport network and are not peculiar to toll roads; for example, a fuel tax policy functions where all road network users pay the transportation cost, not just the users of toll roads. Since transportation infrastructure in Iran is otherwise free, these methods are not applicable. In Iran, various toll freeways have been built by public investment, and the government encourages financial institutions to participate in road construction. In this paper, the existing policies on toll roads are studied, and then an appropriate method of costing and cost allocation to different vehicles is introduced.

Keywords: toll allocation, road pricing, transportation, financial and industrial systems

Procedia PDF Downloads 358
5897 Academic Motivation Maintenance for Students While Solving Mathematical Problems in the Middle School

Authors: M. Rodionov, Z. Dedovets

Abstract:

The level and type of student academic motivation are the key factors in their development and determine the effectiveness of their education. Improving motivation is very important with regard to courses on middle school mathematics. This article examines the general position regarding the practice of academic motivation. It also examines the particular features of mathematical problem solving in a school setting.

Keywords: teaching strategy, mathematics, motivation, student

Procedia PDF Downloads 444
5896 Legal Problems with the Thai Political Party Establishment

Authors: Paiboon Chuwatthanakij

Abstract:

Countries around the world are managed in different ways, and many of them depend on their people to administer the country. Thailand, for example, vests sovereignty in the Thai people under the constitution; however, the Thai voting system is not able to keep pace with the current political management system. Thai people exercise their sovereignty through representatives chosen in elections, who set new policy for the country in the House and the Cabinet; this is particularly important for a democracy developing under the current political institutions. The Organic Act on Political Parties 2007 is the framework in place today, and it is causing confrontations within the party system: many political parties will soon be abolished, and many have already been subsidized. This research study analyzes the legal problems with political party establishment under the Organic Act on Political Parties 2007, focusing on the freedom of each political establishment as compared with effective political operation. Textbooks and academic papers from studies at home and abroad are referenced. The study revealed that the Organic Act on Political Parties 2007 has strict provisions on political structure, including the number of members and the number of branches a party must have within the party system; such requirements shall be completed within one year, but under the existing laws the small parties are not able to participate with the bigger parties. The cities are capable of fulfilling the requirements for small political parties but fail to coalesce, because the current laws do not allow them to be united as one. It is important to allow all independent political parties to join the current political structure; under the existing Thai laws, board members cannot help the smaller parties become a large organization. Creating a new framework that functions efficiently throughout all branches would be one solution to these legal problems between political parties. With such an arrangement, individual political parties could participate with the bigger parties during elections. Until the current political institutions change their system to accommodate public opinion, the current Thai laws will continue to be a problem for all political parties in Thailand.

Keywords: coalesced, political party, sovereignty, elections

Procedia PDF Downloads 308
5895 Ethical Concerns in the Internet of Things and Smart Devices: Case Studies and Analysis

Authors: Mitchell Browe, Oriehi Destiny Anyaiwe, Zahraddeen Gwarzo

Abstract:

The Internet of Things (IoT) is a major evolution of technology and of the internet, which has the power to revolutionize the way people live. IoT has the power to change the way people interact with each other and with their homes; it can give people new ways to interact with and monitor their health; it can alter socioeconomic landscapes by providing new and efficient methods of resource management, saving time and money for both individuals and society as a whole; it even has the potential to save lives through autonomous vehicle technology and smart security measures. Unfortunately, nearly every revolution bears challenges which must be addressed to minimize the harm the new technology may cause its adopters. IoT represents an internet technology revolution which has the potential to put the privacy, safety, and security of its users at risk should devices be developed, implemented, or utilized improperly. This article examines past and current examples of these ethical faults in an attempt to highlight the importance of consumer awareness of the potential dangers of these technologies in making informed purchasing and utilization decisions, as well as to reveal how deficiencies and limitations of IoT devices should be better addressed both by companies and by regulatory bodies. Aspects such as consumer trust, corporate transparency, and misuse of individual data are all factors in the implementation of proper ethical boundaries in the IoT.

Keywords: IoT, ethical concerns, privacy, safety, security, smart devices

Procedia PDF Downloads 79
5894 Characterization of Shiga Toxin Escherichia coli Recovered from a Beef Processing Facility within Southern Ontario and Comparative Performance of Molecular Diagnostic Platforms

Authors: Jessica C. Bannon, Cleso M. Jordao Jr., Mohammad Melebari, Carlos Leon-Velarde, Roger Johnson, Keith Warriner

Abstract:

There has been an increased incidence of non-O157 Shiga Toxin Escherichia coli (STEC), with six serotypes (the Top 6) implicated in causing haemolytic uremic syndrome (HUS). Beef has been suggested to be a significant vehicle for non-O157 STEC, although conclusive evidence has yet to be obtained. The following study aimed to determine the prevalence of the Top 6 non-O157 STEC in beef processing using three different diagnostic platforms and then to characterize the recovered isolates. Hide, carcass, and environmental swab samples (n = 60) were collected from a beef processing facility over a 12-month period. Enriched samples were screened using the Biocontrol GDS, BAX, or PALLgene molecular diagnostic tests. Presumptive non-O157 STEC-positive samples were confirmed using conventional PCR and serology. STEC was detected by GDS (55% positive), BAX (85% positive), and PALLgene (93%). However, during confirmation testing only 8 of the 60 samples (13%) were found to harbour STEC. Interestingly, the presence of virulence factors in the recovered isolates was unstable and readily lost during subsequent sub-culturing. There is a low prevalence of Top 6 non-O157 STEC associated with beef, although other serotypes are encountered. Yet the instability of the virulence factors in the recovered strains calls their clinical relevance into question.

Keywords: beef, food microbiology, shiga toxin, STEC

Procedia PDF Downloads 458
5893 3-D Modeling of Particle Size Reduction from Micro to Nano Scale Using Finite Difference Method

Authors: Himanshu Singh, Rishi Kant, Shantanu Bhattacharya

Abstract:

This paper adopts a top-down approach to mathematical modeling to predict size reduction from the micro to the nano scale through persistent etching. The process is simulated using a finite difference approach. Previously, various researchers have simulated the etching process for 1-D and 2-D substrates. The process consists of two parts: 1) convection-diffusion in the etchant domain; 2) chemical reaction at the surface of the particle. Since the process requires analysis along a moving boundary, the partial differential equations involved cannot be solved using conventional methods. In 1-D, this problem is very similar to the Stefan problem of a moving ice-water boundary. A fixed-grid method using the finite volume method is very popular for modelling etching on one- and two-dimensional substrates; other popular approaches include the moving grid method and the level set method. In this work, the finite difference method was used to discretize the spherical diffusion equation. Due to the symmetrical distribution of the etchant, the angular terms in the equation can be neglected. The concentration is assumed to be constant at the outer boundary. At the particle boundary, the concentration of the etchant is assumed to be zero, since the rate of reaction is much faster than the rate of diffusion. The rate of reaction is proportional to the velocity of the moving boundary of the particle. Modelling of the above reaction was carried out using MATLAB. The initial particle size was taken to be 50 microns. The density, molecular weight, and diffusion coefficient of the substrate were taken as 2.1 g/cm³, 60, and 10⁻⁵ cm²/s, respectively. The etch rate was found to decline initially and gradually became constant at 0.02 µm/s (1.2 µm/min). The concentration profile was plotted against space at different time intervals. Initially, a sudden drop is observed at the particle boundary due to the high etch rate; this change becomes more gradual with time as the etch rate declines.
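
A heavily simplified sketch of the scheme described: an explicit finite-difference update of the radial (spherical) diffusion equation with zero etchant concentration at the particle surface, plus a crude boundary-recession step. The bulk concentration, grid, and time-stepping choices are assumptions made for illustration; the paper's MATLAB model tracks the moving boundary more carefully.

```python
import numpy as np

D      = 1e-9               # diffusion coefficient: 1e-5 cm^2/s expressed in m^2/s
rho    = 2.1e3              # substrate density, kg/m^3 (2.1 g/cm^3)
M      = 60e-3              # molar mass, kg/mol
C_bulk = 100.0              # etchant bulk concentration, mol/m^3 (assumed value)
R0     = 25e-6              # initial particle radius, m (50-micron particle, radius assumed)

r_out  = 10 * R0
N      = 400
r      = np.linspace(R0, r_out, N)
dr     = r[1] - r[0]
dt     = 0.2 * dr**2 / D    # explicit stability limit
C      = np.full(N, C_bulk)
C[0]   = 0.0                # reaction much faster than diffusion -> zero etchant at the surface

R = R0
for step in range(20000):
    # spherical Laplacian: d2C/dr2 + (2/r) dC/dr (angular terms neglected by symmetry)
    lap = (C[2:] - 2 * C[1:-1] + C[:-2]) / dr**2 + (2.0 / r[1:-1]) * (C[2:] - C[:-2]) / (2 * dr)
    C[1:-1] += dt * D * lap
    C[0], C[-1] = 0.0, C_bulk                 # boundary conditions
    flux = D * (C[1] - C[0]) / dr             # etchant flux arriving at the surface
    R -= dt * flux * M / rho                  # boundary recedes in proportion to the flux

print(f"radius after {(step + 1) * dt:.2f} s: {R * 1e6:.4f} um")
```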

Keywords: particle size reduction, micromixer, FDM modelling, wet etching

Procedia PDF Downloads 424
5892 Health Ramifications of Workplace Bullying: Gender, Race and Sexual Orientation as Risk Factors

Authors: Kathleen Canul

Abstract:

Bullying is on the rise according to several recent studies. Workplace bullying has garnered less attention than other forms, yet incidence rates range from 35-45%. The consequences of being bullied at work are broad, ranging from physiological to psychological to occupational. As the bullying progresses, employees begin to exhibit physical and psychological symptoms. Blood pressure rises, along with other cardiac-related concerns; for men, covert coping with job unfairness has been associated with a four-fold risk of heart attack and death. Gastrointestinal distress, headaches, muscle tension, sleep disorders, and exhaustion are also common. Workplace bullying appears to contribute to the risk of subsequent psychotropic medication as well. Emotionally, anxiety and depression increase, along with lowered self-esteem and problems concentrating on the duties of the job. In an attempt to cope, individuals may succumb to unhealthy practices involving food, alcohol, and other drugs. Patterns of bullying vary by gender, race and ethnicity, and sexual orientation, with women, ethnic minorities, and LGBTQ employees reporting higher rates of bullying in the workplace. Not only is this an issue of inequity on the job, but it is also a problem of health disparities: few mental health professionals are confident and competent in dealing with workplace bullying issues, and the lack of culturally competent clinicians exacerbates this inequality in receiving adequate care. The topic of workplace bullying alone is not unique; however, the diverse experiences of underrepresented groups, who are disproportionately affected on the job and suffer untreated health-related concerns, represent a significant and emerging problem requiring attention. Conference participants who have experienced, witnessed, or helped those bullied on the job would benefit most from this review of the literature on the consequences of bullying experienced by diverse and underrepresented groups in the workplace.

Keywords: bullying, ethnic minorities, health disparities, workplace conflict

Procedia PDF Downloads 278
5891 Accidental Electrocution, Reconstruction of Events

Authors: Y. P. Raghavendra Babu

Abstract:

Electrocution is a common cause of morbidity and mortality, as electricity is an indispensable part of today's world. Witnessed deaths due to electrocution pose no problem in determining the manner and cause of death; however, unwitnessed deaths can raise suspicion about the manner of death. A case of fatal electrocution is reported here that was determined to be accidental in manner with the help of a reconstruction of events through proper investigation.

Keywords: electrocution, manner of death, reconstruction of events, health information

Procedia PDF Downloads 258
5890 Energy Absorption Characteristic of a Coupler Rubber Buffer Used in Rail Vehicles

Authors: Zhixiang Li, Shuguang Yao, Wen Ma

Abstract:

The coupler rubber buffer has been widely applied on high-speed trains, and its main function is to dissipate the impact energy between vehicles. The rubber buffer consists of two groups of rubbers, which are both pre-compressed and then installed into the frame body. This paper focuses on the energy absorption characteristics of the rubber buffers in particular. Firstly, quasi-static compression tests were carried out for 1 and 3 pairs of rubber sheets, and the energy absorption relationships Eabn = n×Eab1, Edissn = n×Ediss1, and Ean = Ea1 were obtained. Next, a series of quasi-static tests were performed on 1 pair of rubber sheets to investigate the energy absorption performance at different compression ratios of the rubber buffers. Then impact tests at five impact velocities were conducted, and the coupler knuckle was destroyed when the impact velocity was 10.807 km/h. The impact test results showed that, with the increase of impact velocity, the Eab, Ediss, and Ea of the rear buffer increased considerably, whereas the three responses of the front buffer increased only slightly. Finally, the results of the impact tests and quasi-static tests were comparatively analysed, which showed that with the increase of the stroke, the values of Eab, Ediss, and Ea all increase; however, the rates of increase in the impact tests were all larger than those in the quasi-static tests. The maximum value of Ea was 68.76% in the impact tests, a relatively high value for a vehicle coupler buffer. The energy capacity of the rear buffer determined for dynamic loading was 22.98 kJ.
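
A minimal sketch of how such energy metrics can be extracted from a loading/unloading force-displacement cycle by trapezoidal integration. The cycle data are placeholders, and treating Ea as the dissipation ratio Ediss/Eab is an assumption, since the abstract does not define the symbols.

```python
import numpy as np

def buffer_energies(disp_load, force_load, disp_unload, force_unload):
    """Energy metrics from one loading/unloading force-displacement cycle.

    Eab  : energy absorbed during loading (area under the loading curve)
    Ediss: energy dissipated over the cycle (loading area minus unloading area)
    Ea   : dissipation ratio Ediss / Eab (assumed definition for illustration)
    """
    eab = np.trapz(force_load, disp_load)
    e_returned = abs(np.trapz(force_unload, disp_unload))
    ediss = eab - e_returned
    return eab, ediss, ediss / eab

# Illustrative cycle for one pair of rubber sheets (placeholder data, not the test records)
x_l = np.linspace(0.0, 0.05, 50); f_l = 4.0e5 * x_l ** 1.5    # stiffening loading branch
x_u = np.linspace(0.05, 0.0, 50); f_u = 2.5e5 * x_u ** 1.5    # softer unloading branch
eab1, ediss1, ea1 = buffer_energies(x_l, f_l, x_u, f_u)

# Reported quasi-static relationships: for n pairs, Eab_n = n*Eab_1, Ediss_n = n*Ediss_1, Ea_n = Ea_1
n = 3
print(eab1, ediss1, ea1, "->", n * eab1, n * ediss1, ea1)
```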

Keywords: rubber buffer, coupler, energy absorption, impact tests

Procedia PDF Downloads 190
5889 Comparative Assessment of Microplastic Pollution in Surface Water and Sediment of the Gomati and Saryu Rivers, India

Authors: Amit K. Mishra, Jaswant Singh

Abstract:

The menace of plastic, which significantly pollutes the aquatic environment, has emerged as a global problem, and there is growing concern about the accumulation of microplastics (MPs) in aquatic ecosystems. It is well known that the ultimate destination of most plastic debris is the ocean. Rivers are efficient carriers, transferring MPs from terrestrial to aquatic environments, from upstream to downstream areas, and ultimately to the oceans. Studying the root cause can provide an effective solution to a problem; hence, tracing MPs in the riverine system can illustrate long-term microplastic pollution. This study aimed to investigate the occurrence and distribution of microplastic contamination in the surface water and sediment of two major river systems of Uttar Pradesh, India: the Gomti River at Lucknow, a tributary of the Ganga, and the Saryu River, the lower part of the Ghagra River, which flows through the city of Ayodhya. The distribution and abundance of MPs in the surface water and sediments of the two rivers were compared. Water and sediment samples were collected from different sampling stations (four from each river) in the catchments of the two rivers. Plastic particles were classified according to type, shape, and color. In this study, 1523 (average abundance 254) and 143 (average abundance 26) microplastics were identified across all studied sites in the Gomati River and the Saryu River, respectively. The water samples showed average MP concentrations of 392 (±69.6) and 63 (±18.9) particles per 50 L of water, whereas the sediment samples showed average MP concentrations of 116 (±42.9) and 46 (±12.5) particles per 250 g of dry sediment in the Gomati River and the Saryu River, respectively. The high concentration of microplastics in the Lucknow area can be attributed to human activities, population density, and the entry of various effluents into the river. Fibrous shapes dominated the microplastics, followed by fragments, in all the samples. The present study is a pioneering effort to count MPs in the Gomati and Saryu river systems.

Keywords: freshwater, Gomati, microplastics, Saryu, sediment

Procedia PDF Downloads 73
5888 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder

Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh

Abstract:

In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for the identification of activities in the human brain remains a big challenge because of the random nature of the signals, and the feature extraction method is a key issue in solving this problem. Finding features that give distinctive pictures for different activities and similar pictures for the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set; further, more features result in high computational complexity, while fewer features compromise performance. In this paper, a novel idea for the selection of an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing gradient problem and the need for normalization of the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layers. The results reveal that higher accuracy can be achieved using optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than the other two in terms of classification accuracy.
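
A minimal sketch of the autoencoder layout described (four hidden layers, the bottleneck acting as the reduced feature set, MSE between encoder input and decoder output). Layer sizes and the synthetic input are assumptions; the sketch trains with plain gradient descent, whereas the paper replaces this step with a meta-heuristic search.

```python
import numpy as np
from tensorflow.keras import layers, models

n_features, code_dim = 128, 16        # hypothetical raw-feature and reduced-feature sizes

inp  = layers.Input(shape=(n_features,))
h1   = layers.Dense(64, activation="relu")(inp)
h2   = layers.Dense(32, activation="relu")(h1)
code = layers.Dense(code_dim, activation="relu")(h2)     # bottleneck: the reduced feature set
h3   = layers.Dense(64, activation="relu")(code)
out  = layers.Dense(n_features, activation="linear")(h3)

autoencoder = models.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")         # MSE between encoder input and decoder output

X = np.random.rand(1000, n_features).astype("float32")    # placeholder for the extracted EEG features
autoencoder.fit(X, X, epochs=20, batch_size=32, verbose=0)

encoder = models.Model(inp, code)
reduced = encoder.predict(X, verbose=0)                    # features passed on to the classifier
print(reduced.shape)                                       # (1000, 16)
```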

Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization

Procedia PDF Downloads 111
5887 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be hugely amplified during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration, in which the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 nm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%; in more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6 in all modes, the accuracy limit of ±0.03 is achieved. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
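
A minimal sketch of truncated SVD regularization on a deliberately ill-conditioned toy system, showing how discarding small singular values tames the noise amplification the abstract describes. The forward matrix, noise level, and truncation index are assumptions for illustration, not the lidar kernel or the paper's hybrid parameter triple.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated SVD regularization: keep only the k largest singular values when inverting A x = b."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]                  # small singular values are discarded, not inverted
    return Vt.T @ (s_inv * (U.T @ b))

# Ill-conditioned toy forward model standing in for the optical-data kernel
rng = np.random.default_rng(1)
n = 40
A = np.array([[np.exp(-((i - j) / 4.0) ** 2) for j in range(n)] for i in range(n)])
x_true = np.exp(-((np.arange(n) - 20.0) / 5.0) ** 2)            # mono-modal, PSD-like profile
b = A @ x_true + 0.15 * np.abs(A @ x_true).mean() * rng.standard_normal(n)   # ~15% noise, as in the study

x_naive = np.linalg.solve(A, b)     # noise is hugely amplified
x_tsvd  = tsvd_solve(A, b, k=8)     # the truncation index k plays the role of a regularization parameter
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_tsvd - x_true))
```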

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 339
5886 Parameters Estimation of Multidimensional Possibility Distributions

Authors: Sergey Sorokin, Irina Sorokina, Alexander Yazenin

Abstract:

We present a solution to the Maxmin u/E parameter estimation problem for possibility distributions in the m-dimensional case. Our method is based on a geometrical approach in which a minimal-area enclosing ellipsoid is constructed around the sample. We also demonstrate that one can improve the results of well-known algorithms in the fuzzy model identification task by using Maxmin u/E parameter estimation.
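
A minimal sketch of the geometric building block mentioned above: Khachiyan's algorithm for the smallest enclosing ellipsoid of a sample. This is a standard construction, not the paper's estimator itself; the data and tolerance are illustrative.

```python
import numpy as np

def min_volume_enclosing_ellipsoid(P, tol=1e-6):
    """Khachiyan's algorithm: smallest ellipsoid {x : (x-c)^T A (x-c) <= 1} containing the rows of P."""
    n, d = P.shape
    Q = np.vstack([P.T, np.ones(n)])                       # lift points to homogeneous coordinates
    u = np.full(n, 1.0 / n)
    err = tol + 1.0
    while err > tol:
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum("ij,ji->i", Q.T @ np.linalg.inv(X), Q)   # generalized distances of each point
        j = int(np.argmax(M))
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = P.T @ u                                            # ellipsoid center
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
    return c, A

pts = np.random.default_rng(0).normal(size=(200, 2))       # a 2-D sample
center, shape = min_volume_enclosing_ellipsoid(pts)
print(center, np.linalg.eigvalsh(shape))
```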

Keywords: possibility distribution, parameters estimation, Maxmin u\E estimator, fuzzy model identification

Procedia PDF Downloads 466
5885 Self-Calibration of Fish-Eye Camera for Advanced Driver Assistance Systems

Authors: Atef Alaaeddine Sarraj, Brendan Jackman, Frank Walsh

Abstract:

Tomorrow's car will be more automated and increasingly connected, and innovative, intuitive interfaces are essential to accompany this functional enrichment. To that end, automotive companies today are competing to offer advanced driver assistance systems (ADAS) able to provide enhanced navigation, collision avoidance, intersection support, and lane keeping. These vision-based functions require an accurately calibrated camera. Achieving such differentiation in ADAS requires sophisticated sensors and efficient algorithms. This paper explores the different calibration methods applicable to vehicle-mounted fish-eye cameras with arbitrary fields of view and defines the first steps towards a self-calibration method that adequately addresses ADAS requirements. In particular, we present a self-calibration method after comparing different camera calibration algorithms in the context of ADAS requirements. Our method gathers data from unknown scenes while the car is moving, estimates the camera intrinsic and extrinsic parameters, and corrects the wide-angle distortion. Our solution enables continuous, real-time detection of objects, pedestrians, road markings, and other cars. In contrast, other camera calibration algorithms for ADAS need pre-calibration, while the presented method calibrates the camera without prior knowledge of the scene and in real time.

Keywords: advanced driver assistance system (ADAS), fish-eye, real-time, self-calibration

Procedia PDF Downloads 246
5884 Mobility-Aware Relay Selection in Two Hop Unmanned Aerial Vehicles Network

Authors: Tayyaba Hussain, Sobia Jangsher, Saqib Ali, Saqib Ejaz

Abstract:

Unmanned aerial vehicles (UAVs) have gained great popularity due to their remote operation, ease of deployment, and high maneuverability in different applications like real-time surveillance, image capturing, weather and atmospheric studies, disaster site monitoring, and mapping. These applications can involve real-time communication with the ground station. However, altitude and mobility pose a few challenges for this communication: UAVs at high altitude usually require more transmit power. One possible solution is the use of multiple hops (UAVs acting as relays) together with the exploitation of the UAVs' mobility patterns. In this paper, we study relay selection (with UAVs acting as relays) for reliable transmission to a destination UAV. We exploit the mobility information of the UAVs to propose a Mobility-Aware Relay Selection (MARS) algorithm with the objective of achieving improved data rates. The results are compared with a non-mobility-aware relay selection scheme and with optimal values. Numerical results show that our proposed MARS algorithm gives 6% better achievable data rates for the mobile UAVs compared with the non-mobility-aware relay selection scheme. On average, a 20.2% decrease in data rate is observed with MARS compared with the SDP solver in YALMIP.
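
A simplified stand-in for the mobility-aware relay selection idea: relay positions are extrapolated with their known velocity vectors before two-hop rates are evaluated, and the relay maximizing the worst hop rate is chosen. The channel model, parameter values, and horizon are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def achievable_rate(d, p_tx=1.0, noise=1e-9, alpha=2.5, bandwidth=1e6):
    """Shannon rate over a simple distance-based path-loss link (illustrative channel model)."""
    snr = p_tx * d ** (-alpha) / noise
    return bandwidth * np.log2(1.0 + snr)

def mobility_aware_relay(src, dst, relays, velocities, horizon=5.0):
    """Pick the relay maximizing the worst two-hop rate over a short prediction horizon."""
    best, best_rate = None, -np.inf
    for k, (r, v) in enumerate(zip(relays, velocities)):
        r_future = r + horizon * v                        # exploit the mobility information
        rate = min(achievable_rate(np.linalg.norm(src - r_future)),
                   achievable_rate(np.linalg.norm(r_future - dst)))
        if rate > best_rate:
            best, best_rate = k, rate
    return best, best_rate

src, dst = np.array([0.0, 0.0, 100.0]), np.array([1000.0, 0.0, 100.0])
relays = np.array([[400.0, 50.0, 120.0], [600.0, -30.0, 110.0]])
vels   = np.array([[10.0, 0.0, 0.0], [-15.0, 0.0, 0.0]])
print(mobility_aware_relay(src, dst, relays, vels))
```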

Keywords: mobility aware, relay selection, time division multiple acess, unmanned aerial vehicle

Procedia PDF Downloads 233
5883 Kinetics of Sugar Losses in Hot Water Blanching of Water Yam (Dioscorea alata)

Authors: Ayobami Solomon Popoola

Abstract:

Yam is mainly a carbohydrate food grown in most parts of the world. It can be boiled, fried, or roasted for consumption in a variety of ways. Blanching is an established heat pre-treatment given to fruits and vegetables prior to further processing such as dehydration, canning, freezing, etc. Loss of soluble solids during blanching has been a great problem because a considerable quantity of the water-soluble nutrients is inevitably leached into the blanching water. Without blanching, the high residual levels of reducing sugars after extended storage produce a dark, bitter-tasting product because of the Maillard reactions of reducing sugars at frying temperature. Measurement and prediction of such losses are necessary for economic efficiency in production and to establish the level of effluent treatment required for the blanching water. This paper addresses this problem by investigating the effects of cube size and temperature on the rate of diffusional losses of reducing sugars and total sugars during hot water blanching of water yam. The study was carried out using four temperature levels (65, 70, 80, and 90 °C) and two cube sizes (0.02 m³ and 0.03 m³) at four time intervals (5, 10, 15, and 20 min). The data obtained were fitted to Fick's non-steady-state equation, from which diffusion coefficients (Da) were obtained. The Da values were subsequently fitted to an Arrhenius plot to obtain activation energies (Ea values) for the diffusional losses. The diffusion coefficients were independent of cube size and time but highly temperature-dependent: ≥ 1.0 × 10⁻⁹ m²s⁻¹ for reducing sugars and ≥ 5.0 × 10⁻⁹ m²s⁻¹ for total sugars. The Ea values ranged from 68.2 to 73.9 kJ mol⁻¹ for reducing sugar losses and from 7.2 to 14.30 kJ mol⁻¹ for total sugar losses. Predictive equations for estimating the amounts of reducing sugars and total sugars as functions of blanching time at various temperatures are also presented; these equations could be valuable in process design and optimization. However, the amount of other soluble solids that might have leached into the water along with the reducing and total sugars during blanching was not investigated in this study.
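
A minimal sketch of the Arrhenius step: fitting ln(Da) against 1/T to recover an activation energy from diffusion coefficients measured at the four blanching temperatures. The Da values below are placeholders for illustration, not the measured data.

```python
import numpy as np

R_GAS = 8.314  # J mol^-1 K^-1

def arrhenius_activation_energy(temps_c, d_values):
    """Fit ln(D) against 1/T to recover the activation energy Ea and pre-exponential factor D0."""
    T = np.asarray(temps_c, dtype=float) + 273.15
    slope, intercept = np.polyfit(1.0 / T, np.log(d_values), 1)
    return -slope * R_GAS, np.exp(intercept)     # Ea [J/mol], D0 [m^2/s]

# Placeholder diffusion coefficients at the four blanching temperatures (illustrative values)
temps = [65, 70, 80, 90]
Da = [1.0e-9, 1.3e-9, 2.1e-9, 3.4e-9]            # m^2 s^-1
Ea, D0 = arrhenius_activation_energy(temps, Da)
print(f"Ea ~ {Ea / 1000:.1f} kJ/mol, D0 ~ {D0:.2e} m^2/s")
```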

Keywords: blanching, kinetics, sugar losses, water yam

Procedia PDF Downloads 162
5882 Scour Damaged Detection of Bridge Piers Using Vibration Analysis - Numerical Study of a Bridge

Authors: Solaine Hachem, Frédéric Bourquin, Dominique Siegert

Abstract:

The sudden collapse of bridges is mainly due to scour: soil erosion in the riverbed around a pier modifies the embedding conditions of the structure, reduces its overall stiffness, and threatens its stability. Hence, finding an efficient technique that allows early scour detection becomes mandatory. Vibration analysis is an indirect method for scour detection that relies on real-time monitoring of the bridge; it indicates the presence of scour based on its consequences for the stability of the structure and its dynamic response. Most of the research in this field has focused on the dynamic behavior of a single pile and has examined the depth of the scour. In this paper, a bridge is fully modeled with all piles and spans, and the scour is represented by a reduction in the foundation stiffnesses. This work aims to identify the vibration modes sensitive to the loss of rigidity in the foundations so that their variations can be considered as a scour indicator: the decrease in soil-structure interaction rigidity leads to a decrease in the natural frequency values. Using the first-order perturbation method, an expression of the sensitivity, which depends only on the selected vibration modes, is established to determine the deficiency of the foundation stiffnesses. The solutions are obtained using the singular value decomposition method for the regularization of the inverse problem. The propagation of uncertainties is also calculated to verify the efficiency of the inverse problem method. Numerical simulations describing different scenarios of scour are investigated on a simplified model of a real composite steel-concrete bridge located in France. The results of the modal analysis show that the modes corresponding to in-plane and out-of-plane pier vibrations are sensitive to the loss of foundation stiffness, while the deck bending modes are not affected by this damage.
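
A minimal sketch of the first-order perturbation idea on a tiny spring-mass surrogate of a pier: the eigenvalue sensitivity to the foundation stiffness is computed from the mass-normalized modes, and a stiffness loss shifts the sensitive natural frequencies. The 3-DOF model and its numbers are assumptions for illustration, not the bridge model used in the paper.

```python
import numpy as np
from scipy.linalg import eigh

# Tiny 3-DOF surrogate: the first spring stands for the soil-structure interaction stiffness
m = np.diag([1.0e4, 1.0e4, 1.0e4])                 # kg
k_found, k1, k2 = 5.0e7, 8.0e7, 8.0e7              # N/m

def stiffness(kf):
    return np.array([[kf + k1, -k1, 0.0],
                     [-k1, k1 + k2, -k2],
                     [0.0, -k2, k2]])

lam, phi = eigh(stiffness(k_found), m)             # lam = omega^2, modes mass-normalized by eigh
freqs = np.sqrt(lam) / (2 * np.pi)

# First-order perturbation: d(lam_i)/d(k_found) = phi_i^T (dK/dk_found) phi_i
dK = np.zeros((3, 3)); dK[0, 0] = 1.0
sensitivity = np.array([phi[:, i] @ dK @ phi[:, i] for i in range(3)])

dk = -0.30 * k_found                               # 30% foundation stiffness loss due to scour
freqs_pred = np.sqrt(lam + sensitivity * dk) / (2 * np.pi)
print(freqs, freqs_pred)                           # modes with large sensitivity shift the most
```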

Keywords: bridge’s piers, inverse problems, modal sensitivity, scour detection, vibration analysis

Procedia PDF Downloads 98
5881 Titanium Alloys for Cryogenic Gas Bottle Applications: A Comparative Study

Authors: Bhanu Pant, Sanjay H. Upadhyay

Abstract:

Titanium alloys, owing to their high specific strength coupled with excellent resistance to corrosion in many severe environments, find extensive usage in the aerospace sector. Alpha and beta-lean titanium alloys have the additional characteristic of exhibiting high toughness, with an NTS/UTS ratio greater than one down to liquid oxygen and liquid helium temperatures. The cryogenic stages of high-performance rockets utilize pressurizing tanks submerged in cryogenic fluid to improve the volume-to-mass performance factor. A superior volume-to-mass ratio is achieved for LH2-submerged pressurizing tanks compared to those submerged in LOX. Such high-efficiency tanks for LH2-submerged applications necessitate the use of the difficult-to-process alpha-type Ti5Al2.5Sn-ELI alloy, which requires close control of process parameters during tank development. In the present paper, this alpha-type cryogenic titanium alloy is compared with the conventional alpha-beta Ti6Al4V-ELI alloy, which is usable down to LOX temperatures. Specific challenges faced during the development of these cryogenic pressurizing tanks for a launch vehicle, based on the authors' experience with the comparatively less-studied alpha Ti5Al2.5Sn-ELI alloy, are included in the paper.

Keywords: cryogenic tanks, titanium Alloys, NTS/UTS ratio, alpha and alpha-beta ELI alloys

Procedia PDF Downloads 55
5880 Application of Cube IQ Software to Optimize Heterogeneous Packing Products in Logistics Cargo and Minimize Transportation Cost

Authors: Muhammad Ganda Wiratama

Abstract:

XYZ company is an upstream chemical company that produces chemical products such as NaOH, HCl, NaClO, VCM, EDC, and PVC for downstream companies. The products are shipped by land using trucks and by sea using ships. Solid products such as flake caustic soda (F-NaOH) and PVC resin, in particular, are sold in loose-bag packing and in palletized packing (packed on pallets). The focus of this study is to increase the number of items that can be loaded as pallet packaging on the company's logistics vehicles. This is difficult because, with this packaging, the dimensions of the material to be loaded become larger and the loads much heavier than with loose-bag packing, which makes the arrangement and handling of materials in the transport mode more difficult. In particular, it is difficult to load pallets of different packing volumes and dimensions in one truck or container. By using the Cube-IQ software, it is hoped that planning the pallet stuffing activity will become easier, optimizing the existing space over the various possible loading combinations. In addition, the output of this software can be used as a reference for operators in material handling, including the order and orientation of the materials loaded in the truck or container. The more optimally the logistics cargo is loaded, the more transportation costs can be minimized.

Keywords: loading activity, container loading, palletize product, simulation

Procedia PDF Downloads 293
5879 A Leader-Follower Kinematic-Based Control System for a Cable-Driven Hyper-Redundant Manipulator

Authors: Abolfazl Zaraki, Yoshikatsu Hayashi, Harry Thorpe, Vincent Strong, Gisle-Andre Larsen, William Holderbaum

Abstract:

Thanks to the high maneuverability of cable-driven hyper-redundant manipulators (HRMs), this class of robots has shown superior capability in highly confined and unstructured space applications. Although the large number of degrees of freedom (DOF) of HRMs enhances motion flexibility and the robot's reachability range, it greatly increases the complexity of the kinematic configuration, which makes the kinematic control problem very challenging or even impossible to solve. This paper presents our current progress on the development of a kinematic-based leader-follower control system designed to control not only the robot's body posture but also the trajectory of the robot's movement in a semi-autonomous manner (the human operator is retained in the robot's control loop). To obtain the forward kinematic model, the coordinate frames are established by the classical Denavit-Hartenberg (D-H) convention for a hyper-redundant serial manipulator with a controlled cable-driven mechanism. To solve the inverse kinematics of the robot, unlike conventional methods, a leader-follower mechanism based on sequential inverse kinematics is followed. Using this mechanism, the inverse kinematic problem is solved for all joints sequentially, starting from the head joint and proceeding to the base joint of the robot. To verify the kinematic design and simulate the robot motion, the MATLAB robotics toolbox is used. The simulation results demonstrated the promising capability of the proposed leader-follower control system in controlling the robot motion and trajectory in our confined-space application.
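
A minimal sketch of the forward-kinematics building block mentioned above: the classical D-H homogeneous transform, chained over a serial hyper-redundant arm. The uniform link parameters and joint configuration are assumptions for illustration, not the actual manipulator's D-H table.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links under the classical D-H convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, d=0.0, a=0.05, alpha=np.deg2rad(10.0)):
    """Chain the D-H transforms of a serial arm (uniform illustrative link parameters)."""
    T = np.eye(4)
    for theta in joint_angles:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# A 20-joint configuration; the leader-follower scheme would then solve each joint sequentially,
# from the head joint back to the base joint.
q = np.deg2rad(np.full(20, 5.0))
print(forward_kinematics(q)[:3, 3])       # end-effector (head) position
```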

Keywords: hyper-redundant robots, kinematic analysis, semi-autonomous control, serial manipulators

Procedia PDF Downloads 153
5878 Using Q-Learning to Auto-Tune PID Controller Gains for Online Quadcopter Altitude Stabilization

Authors: Y. Alrubyli

Abstract:

Unmanned Aerial Vehicles (UAVs), and more specifically quadcopters, need to be stable during their flights. Altitude stability is usually achieved by using a PID controller that is built into the flight controller software. The PID controller has gains that need to be tuned to reach optimal altitude stabilization during the quadcopter's flight. For that, control system engineers typically tune those gains using extensive modeling of the environment, which might change from one environment and condition to another. As quadcopters penetrate more sectors, from the military to the consumer sector, they are being put into complex and challenging environments more than ever before. Hence, intelligent self-stabilizing quadcopters are needed to maneuver through those complex environments and situations. Here we show that by using online reinforcement learning with minimal background knowledge, altitude stability of the quadcopter can be achieved using a model-free approach. We found that by using background knowledge, instead of letting the online reinforcement learning algorithm wander for a while to tune the PID gains, altitude stabilization can be achieved faster. In addition, using this approach accelerates development by avoiding extensive simulations before applying the PID gains to the real-world quadcopter. Our results demonstrate the possibility of using the trial-and-error approach of reinforcement learning combined with background knowledge to achieve faster quadcopter altitude stabilization in different environments and conditions.
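
A heavily simplified, stateless illustration of the idea (closer to a bandit than to the full online Q-learning used in the paper): PID gains seeded with "background knowledge" are nudged up or down, and a reward based on the altitude tracking error of a crude 1-D quadcopter model steers the search. The dynamics, gains, and learning parameters are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_altitude(kp, ki, kd, target=10.0, steps=400, dt=0.02):
    """Crude 1-D quadcopter altitude model under a PID thrust command; returns mean |error|."""
    z, vz, integ, prev_err, total_err = 0.0, 0.0, 0.0, target, 0.0
    for _ in range(steps):
        err = target - z
        integ += err * dt
        deriv = (err - prev_err) / dt
        thrust = kp * err + ki * integ + kd * deriv
        az = np.clip(thrust, -20.0, 20.0) - 9.81      # net vertical acceleration
        vz += az * dt
        z += vz * dt
        prev_err = err
        total_err += abs(err)
    return total_err / steps

gains = np.array([2.0, 0.5, 0.5])                      # [kp, ki, kd]: the "background knowledge" seed
actions = [(i, s) for i in range(3) for s in (+0.1, -0.1)]   # nudge one gain up or down
Q = np.zeros(len(actions))
eps, alpha = 0.2, 0.3

for episode in range(300):
    a = int(rng.integers(len(actions))) if rng.random() < eps else int(np.argmax(Q))
    idx, step = actions[a]
    trial = gains.copy()
    trial[idx] = max(0.0, trial[idx] + step)
    reward = -simulate_altitude(*trial)                # smaller tracking error -> larger reward
    Q[a] += alpha * (reward - Q[a])                    # tabular value update
    if reward >= -simulate_altitude(*gains):           # keep the nudge only if it helped
        gains = trial

print("tuned gains:", gains, " mean |error|:", simulate_altitude(*gains))
```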

Keywords: reinforcement learning, Q-leanring, online learning, PID tuning, unmanned aerial vehicle, quadcopter

Procedia PDF Downloads 166