Search results for: Riemann problem
5678 Fully Eulerian Finite Element Methodology for the Numerical Modeling of the Dynamics of Heart Valves
Authors: Aymen Laadhari
Abstract:
During the last decade, an increasing number of contributions have been made in the fields of scientific computing and numerical methodologies applied to the study of hemodynamics in the heart. In contrast, the numerical aspects concerning the interaction of pulsatile blood flow with highly deformable thin leaflets have been much less explored. This coupled problem remains extremely challenging, and the numerical difficulties include, e.g., the resolution of the full fluid-structure interaction problem with large deformations of extremely thin leaflets, substantial mesh deformations, high transvalvular pressure discontinuities, and contact between leaflets. Although the Lagrangian description of the structural motion and strain measures is naturally used, many numerical complexities can arise when studying large deformations of thin structures. Eulerian approaches represent a promising alternative to readily model large deformations and handle contact issues. We present a fully Eulerian finite element methodology tailored for the simulation of pulsatile blood flow in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets. Our method enables the use of a fluid solver on a fixed mesh, whilst easily modeling the mechanical properties of the valve. We introduce a semi-implicit time integration scheme based on a consistent Newton-Raphson linearization. A variant of the classical Newton method is introduced and guarantees third-order convergence. High-fidelity computational geometries are built, and simulations are performed under physiological conditions. We address in detail the main features of the proposed method, and we report several experiments with the aim of illustrating its accuracy and efficiency.
Keywords: eulerian, level set, newton, valve
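The abstract names a Newton variant with guaranteed third-order convergence without spelling it out. As an illustration only, here is a minimal sketch of one classical third-order variant, the two-step Potra-Ptak scheme, which reuses the Jacobian within each iteration; the authors' actual linearization of the FSI system may differ.

```python
import numpy as np

def potra_ptak(f, jac, x0, tol=1e-12, max_iter=50):
    """Two-step Newton variant with third-order convergence.

    Each iteration solves two linear systems with the SAME Jacobian,
    so only one factorization per step is needed in practice.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        J = jac(x)
        d1 = np.linalg.solve(J, f(x))   # classical Newton correction
        y = x - d1                      # intermediate (Newton) point
        d2 = np.linalg.solve(J, f(y))   # second correction, same Jacobian
        x = x - d1 - d2                 # combined third-order update
        if np.linalg.norm(d1 + d2) < tol:
            break
    return x

# Usage on a trivial scalar system: solve x**2 - 2 = 0.
root = potra_ptak(lambda x: np.array([x[0] ** 2 - 2.0]),
                  lambda x: np.array([[2.0 * x[0]]]),
                  np.array([1.0]))
print(root)  # ~ [1.41421356]
```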
Procedia PDF Downloads 278
5677 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks
Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar
Abstract:
A DNA barcode is a short mitochondrial DNA fragment whose nucleotides are each made up of three subunits: a phosphate group, a sugar, and one of the nucleic bases (A, T, C, and G). Barcodes provide a good source of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes, a task that has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence-similarity methods. A large set of sequences can be simultaneously compared using Multiple Sequence Alignment, which is known to be NP-complete. To make this type of analysis feasible, heuristics, like progressive alignment, have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. This method avoids the complex problem of form and structure in different classes of organisms; it is evaluated on empirical data, and its classification performance is compared with that of other methods. Our system consists of three phases. The first is called transformation and is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) for the codification of DNA barcodes, Fourier transform, and power spectrum signal processing. The second is called approximation and is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of DNA barcodes, which is realized by applying a hierarchical classification algorithm.
Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)
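For orientation, the first (transformation) phase can be sketched in a few lines: each base is mapped to its published EIIP value, and the power spectrum of the resulting numerical signal serves as the feature vector. The mean-removal step and the one-sided FFT below are assumptions, since the abstract does not fix these details.

```python
import numpy as np

# Published EIIP (electron-ion interaction pseudopotential) values per base
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def barcode_power_spectrum(seq):
    """Map a DNA barcode to its EIIP numerical signal and return the
    power spectrum of that signal (the transformation phase)."""
    signal = np.array([EIIP[b] for b in seq.upper() if b in EIIP])
    signal = signal - signal.mean()              # remove the DC component
    return np.abs(np.fft.rfft(signal)) ** 2      # power spectrum features

features = barcode_power_spectrum("ATGCGTACGTTAGCCATAGC")
print(features[:5])
```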
Procedia PDF Downloads 317
5676 Rotary Machine Sealing Oscillation Frequencies and Phase Shift Analysis
Authors: Liliia N. Butymova, Vladimir Ya Modorskii
Abstract:
To ensure the efficient operation of gas-transmittal GCUs, leakages through the labyrinth packings (LP) should be minimized. Leakages can be minimized by decreasing the LP gap, which in turn depends on thermal processes and possible rotor vibrations and is designed to ensure the absence of mechanical contact. Vibration mitigation allows the LP gap to be minimized, so it is advantageous to research the influence of processes in the dynamic gas-structure system on LP vibrations. This paper considers the influence of rotor vibrations on LP gas dynamics and the influence of the latter on the rotor structure within a unidirectionally coupled dynamic FSI problem. Dependences of the nonstationary parameters of the gas-dynamic process in the LP on rotor vibrations under various gas speeds and pressures, shaft rotation speeds and vibration amplitudes, and working-medium features were studied. The multi-processor ANSYS CFX package was chosen as the numerical computation tool, and the problem was solved using the PNRPU high-capacity computer complex. Deformed shaft vibrations are replaced with an unyielding profile that moves in the fixed annulus "up-and-down" according to a set harmonic rule. This allows a nonstationary gas-dynamic problem to be solved and the time dependence of the total gas-dynamic force acting on the shaft to be determined. A pressure increase from 0.1 to 10 MPa causes growth of the gas-dynamic force oscillation amplitude and frequency, while the phase shift angle between gas-dynamic force oscillations and those of shaft displacement decreases from 3π/4 to π/2; the damping constant has its maximum value under 1 MPa pressure in the gap. An increase of shaft oscillation frequency from 50 to 150 Hz under P = 10 MPa causes growth of the gas-dynamic force oscillation amplitude; the damping constant has its maximum value at 50 Hz, equaling 1.012. An increase of shaft vibration amplitude from 20 to 80 µm under P = 10 MPa causes the gas-dynamic force amplitude to rise up to 20 times, and the damping constant increases from 0.092 to 0.251. Calculations for various working substances (methane, perfect gas, air at 25 °C) show that the minimum persistent gas-dynamic force oscillation amplitude under P = 0.1 MPa is observed in methane, and the maximum in air; the frequency remains almost unchanged, and the phase shift in air changes from 3π/4 to π/2. The same calculations show that the maximum gas-dynamic force oscillation amplitude under P = 10 MPa is observed in methane, and the minimum in air; air demonstrates surging. An increase of leakage speed from 0 to 20 m/s through the LP under P = 0.1 MPa causes the gas-dynamic force oscillation amplitude to decrease by three orders of magnitude, while the oscillation frequency and the phase shift double and stabilize. An increase of leakage speed from 0 to 20 m/s in the LP under P = 1 MPa causes the gas-dynamic force oscillation amplitude to decrease by almost four orders of magnitude; the phase shift angle increases from π/72 to π/2, and the oscillations become persistent. Flow rate proved to greatly influence the pressure oscillation amplitude and phase shift angle. The influence of the working medium depends on operating conditions: at pressure growth, vibrations are most affected in methane (of the working substances considered), and at pressure decrease, in air at 25 °C.
Keywords: aeroelasticity, labyrinth packings, oscillation phase shift, vibration
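The reported phase shift angles between the gas-dynamic force and shaft displacement can be extracted from sampled time histories in a standard way; the sketch below estimates amplitude and phase shift at the driving frequency from the FFT bin nearest that frequency. The sampling parameters and signals are synthetic placeholders, not the paper's data.

```python
import numpy as np

def amplitude_and_phase_shift(force, displacement, fs, f0):
    """Estimate the oscillation amplitude of `force` and its phase shift
    relative to `displacement` at driving frequency f0 (Hz), from signals
    sampled at fs (Hz)."""
    n = len(force)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))          # FFT bin nearest f0
    F = np.fft.rfft(force)[k]
    D = np.fft.rfft(displacement)[k]
    amplitude = 2.0 * np.abs(F) / n
    shift = np.angle(F) - np.angle(D)          # radians
    return amplitude, (shift + np.pi) % (2 * np.pi) - np.pi

# Synthetic check: 50 Hz displacement, force lagging by pi/2
fs, f0 = 10_000, 50.0
t = np.arange(0, 1, 1 / fs)
disp = 40e-6 * np.sin(2 * np.pi * f0 * t)
force = 3.0 * np.sin(2 * np.pi * f0 * t - np.pi / 2)
print(amplitude_and_phase_shift(force, disp, fs, f0))   # (~3.0, ~-pi/2)
```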
Procedia PDF Downloads 296
5675 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach
Authors: Utkarsh A. Mishra, Ankit Bansal
Abstract:
At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of a participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on the randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences; Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computation costs of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The history of some randomly sampled photon bundles was recorded to train an Artificial Neural Network (ANN) back-propagation model; the flux calculated using the standard quasi PMC was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to further reduce computation cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the environment of interest can be fully represented by the ANN model; better results can be achieved in this unexplored domain.
Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks
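The variance advantage of low-discrepancy sequences over pseudo-random numbers can be seen already on a toy one-dimensional integrand; the sketch below compares a plain Monte Carlo estimate with a scrambled-Sobol Quasi-Monte Carlo estimate using scipy. The exponential integrand is only a stand-in for a transport kernel, not the paper's radiative transfer equation.

```python
import numpy as np
from scipy.stats import qmc

# Toy 1-D "attenuation" integrand: exact integral of exp(-x) on [0, 1]
exact = 1.0 - np.exp(-1.0)
m = 12                                              # 2**12 = 4096 samples

u_mc = np.random.default_rng(0).random(2 ** m)       # pseudo-random
u_qmc = qmc.Sobol(d=1, scramble=True, seed=0).random_base2(m).ravel()

est_mc = np.exp(-u_mc).mean()
est_qmc = np.exp(-u_qmc).mean()
print(f"MC  error: {abs(est_mc - exact):.2e}")
print(f"QMC error: {abs(est_qmc - exact):.2e}")      # typically much smaller
```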
Procedia PDF Downloads 223
5674 Academic Motivation Maintenance for Students While Solving Mathematical Problems in the Middle School
Authors: M. Rodionov, Z. Dedovets
Abstract:
The level and type of student academic motivation are the key factors in their development and determine the effectiveness of their education. Improving motivation is very important with regard to courses on middle school mathematics. This article examines the general position regarding the practice of academic motivation. It also examines the particular features of mathematical problem solving in a school setting.
Keywords: teaching strategy, mathematics, motivation, student
Procedia PDF Downloads 445
5673 Legal Problems with the Thai Political Party Establishment
Authors: Paiboon Chuwatthanakij
Abstract:
Countries around the world are managed in different ways, and many depend on their people to administer the country. Thailand, for example, vests sovereignty in the Thai people under its constitution; however, the Thai voting system is not able to respond quickly enough under the current political management system. The sovereignty of the Thai people is expressed through representatives during elections, in order to set new policy and change the country's governing ideology in the House and the Cabinet. This is particularly important for a democracy being developed under the current political institutions. The Organic Act on Political Parties 2007 is the framework in place today, and it is causing confrontations within the party establishment: many political parties will soon be abolished, and many have already been subsidized. This research study analyzes the legal problems with the establishment of political parties under the Organic Act on Political Parties 2007, focusing on the freedom of each political party as compared with effective political operation. Textbooks and academic papers from studies at home and abroad are referenced. The study revealed that the Organic Act on Political Parties 2007 has strict provisions on political structure, the number of members, and the number of branches within the party system, and that such requirements must be completed within one year; under the existing laws, small parties are not able to compete alongside the bigger ones. The smaller parties can fulfill their local requirements but fail to coalesce, because the current laws do not allow them to unite as one, and board members cannot help the smaller parties grow into large organizations under the existing Thai laws. It is important to allow all independent political parties to join the current political structure. Creating a new framework that functions efficiently throughout all branches would be one solution to these legal problems between political parties; with such an arrangement, individual political parties could participate with the bigger parties during elections. Until the current political institutions change their system to accommodate public opinion, the current Thai laws will continue to be a problem for all political parties in Thailand.
Keywords: coalesced, political party, sovereignty, elections
Procedia PDF Downloads 314
5672 3-D Modeling of Particle Size Reduction from Micro to Nano Scale Using Finite Difference Method
Authors: Himanshu Singh, Rishi Kant, Shantanu Bhattacharya
Abstract:
This paper adopts a top-down approach to mathematical modeling to predict size reduction from the micro to the nano scale through persistent etching. The process is simulated using a finite difference approach. Previously, various researchers have simulated the etching process for 1-D and 2-D substrates. The process consists of two parts: 1) convection-diffusion in the etchant domain; 2) chemical reaction at the surface of the particle. Since the process requires analysis along a moving boundary, the partial differential equations involved cannot be solved using conventional methods. In 1-D, this problem is very similar to Stefan's problem of the moving ice-water boundary. A fixed-grid method using the finite volume method is very popular for modelling etching on one- and two-dimensional substrates; other popular approaches include the moving grid method and the level set method. In this work, the finite difference method was used to discretize the spherical diffusion equation. Due to the symmetrical distribution of the etchant, the angular terms in the equation can be neglected. The concentration is assumed to be constant at the outer boundary. At the particle boundary, the concentration of the etchant is assumed to be zero, since the rate of reaction is much faster than the rate of diffusion. The rate of reaction is proportional to the velocity of the moving boundary of the particle. Modelling of the above reaction was carried out using Matlab. The initial particle size was taken to be 50 microns. The density, molecular weight, and diffusion coefficient of the substrate were taken as 2.1 g/cm³, 60, and 10⁻⁵ cm²/s, respectively. The etch rate was found to decline initially and gradually became constant at 0.02 µm/s (1.2 µm/min). The concentration profile was plotted against radial position at different time intervals. Initially, a sudden drop is observed at the particle boundary due to the high etch rate; this change becomes more gradual with time as the etch rate declines.
Keywords: particle size reduction, micromixer, FDM modelling, wet etching
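A hedged sketch of the scheme described above (explicit finite differences for the spherically symmetric diffusion equation, zero concentration at the particle surface, boundary velocity from the surface flux) is given below. The bulk etchant concentration, outer-boundary radius, and the quasi-static treatment of a fixed grid are assumptions made for illustration; the paper's Matlab implementation may handle the moving boundary differently.

```python
import numpy as np

# Spherically symmetric diffusion around a shrinking particle:
#   dC/dt = D * (C_rr + (2/r) * C_r)   for R(t) < r < r_out,
# with C = 0 at the particle surface (fast reaction) and C = C_inf at the
# outer boundary.  Mass balance moves the surface:
#   dR/dt = -(M/rho) * D * dC/dr |_{r=R}   (unit stoichiometry assumed).
D = 1e-9                  # 10^-5 cm^2/s converted to m^2/s
rho, M = 2.1e3, 60e-3     # 2.1 g/cm^3 and 60 g/mol in SI units
C_inf = 17.5              # mol/m^3 -- assumed, chosen to give ~0.02 um/s
R = 25e-6                 # initial radius (50 micron diameter)
r_out = 250e-6            # outer-boundary radius (assumed)

n = 400
r = np.linspace(R, r_out, n)
dr = r[1] - r[0]
C = C_inf * (r - R) / (r_out - R)   # smooth initial profile
dt = 0.4 * dr * dr / D              # explicit stability limit

t, v = 0.0, 0.0
while t < 2.0:
    lap = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dr**2
    grad = (C[2:] - C[:-2]) / (2.0 * dr)
    C[1:-1] += D * dt * (lap + (2.0 / r[1:-1]) * grad)
    C[0], C[-1] = 0.0, C_inf
    v = (M / rho) * D * (C[1] - C[0]) / dr   # surface recession speed
    R -= v * dt                              # grid kept fixed: quasi-static
    t += dt

print(f"etch rate after {t:.2f} s: {v * 1e6:.4f} um/s; R = {R * 1e6:.3f} um")
```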
Procedia PDF Downloads 431
5671 Health Ramifications of Workplace Bullying: Gender, Race and Sexual Orientation as Risk Factors
Authors: Kathleen Canul
Abstract:
Bullying is on the rise according to several recent studies. Workplace bullying has garnered less attention than other forms, yet incidence rates range from 35-45%. The consequences of being bullied at work are broad, ranging from the physiological to the psychological to the occupational. As the bullying progresses, employees begin to exhibit physical and psychological symptoms. Blood pressure rises, along with other cardiac-related concerns; for men, covert coping with job unfairness has been associated with a four-fold risk of heart attack and death. Gastrointestinal distress, headaches, muscle tension, sleep disorders, and exhaustion are also common. Workplace bullying also appears to contribute to the risk of subsequent psychotropic medication. Emotionally, anxiety and depression increase, along with lowered self-esteem and problems concentrating on the duties of the job. In an attempt to cope, individuals may succumb to unhealthy practices involving food, alcohol, and other drugs. Patterns of bullying vary by gender, race and ethnicity, and sexual orientation, with women, ethnic minorities, and LGBTQ employees reporting higher rates of bullying in the workplace. This is not only an issue of inequity on the job but also a problem of health disparities, as few mental health professionals are confident and competent in dealing with workplace bullying issues, and the lack of culturally competent clinicians exacerbates this inequality in receiving adequate care. Alone, the topic of workplace bullying is not unique; however, the diverse experiences of underrepresented groups, who are disproportionately affected on the job and suffer untreated health-related concerns, represent a significant and emerging problem requiring attention. Conference participants who have experienced, witnessed, or helped those bullied on the job would benefit most from this review of the literature on the consequences of bullying experienced by diverse and underrepresented groups in the workplace.
Keywords: bullying, ethnic minorities, health disparities, workplace conflict
Procedia PDF Downloads 280
5670 Accidental Electrocution, Reconstruction of Events
Authors: Y. P. Raghavendra Babu
Abstract:
Electrocution is a common cause of morbidity and mortality, as electricity is an indispensable part of today's world. Witnessed deaths due to electrocution pose no problem in establishing the manner and cause of death; unwitnessed deaths, however, can raise suspicion about the manner of death. A case of fatal electrocution is reported here that was determined to be accidental in manner with the help of a reconstruction of events through proper investigation.
Keywords: electrocution, manner of death, reconstruction of events, health information
Procedia PDF Downloads 259
5669 Comparative Assessment of Microplastic Pollution in Surface Water and Sediment of the Gomati and Saryu Rivers, India
Authors: Amit K. Mishra, Jaswant Singh
Abstract:
The menace of plastic, which significantly pollutes the aquatic environment, has emerged as a global problem, and there is growing concern about the accumulation of microplastics (MPs) in aquatic ecosystems. It is well known that the ultimate destination of most plastic debris is the ocean. Rivers are efficient carriers, transferring MPs from terrestrial to aquatic environments, from upstream to downstream areas, and ultimately to the oceans. A root-cause study can provide an effective solution to a problem; hence, tracing MPs in the riverine system can illustrate long-term microplastic pollution. This study aimed to investigate the occurrence and distribution of microplastic contamination in the surface water and sediment of two major river systems of Uttar Pradesh, India: the Gomati River at Lucknow, a tributary of the Ganga, and the Saryu River, the lower part of the Ghagra River, which flows through the city of Ayodhya. The distribution and abundance of MPs in the surface water and sediments of the two rivers were compared. Samples of water and sediment were collected from different sampling stations (four from each river) in the catchments of the two rivers. Plastic particles were classified according to type, shape, and color. In total, 1523 microplastics (average abundance 254) were identified across all studied sites in the Gomati River and 143 (average abundance 26) in the Saryu River. The water samples showed average MP concentrations of 392 (±69.6) and 63 (±18.9) particles per 50 L of water, whereas the sediment samples showed average MP concentrations of 116 (±42.9) and 46 (±12.5) particles per 250 g of dry sediment in the Gomati River and the Saryu River, respectively. The high concentration of microplastics in the Lucknow area can be attributed to human activities, population density, and the entry of various effluents into the river. Fibrous microplastics dominated, followed by fragments, in all the samples. The present study is a pioneering effort to count MPs in the Gomati and Saryu River systems.
Keywords: freshwater, Gomati, microplastics, Saryu, sediment
Procedia PDF Downloads 82
5668 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder
Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh
Abstract:
In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for identifying activities in the human brain remains a big challenge because of the random nature of the signals. The feature extraction method is a key issue in solving this problem. Finding features that give distinctive pictures for different activities and similar pictures for the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set; further, more features result in higher computational complexity, while fewer features compromise performance. In this paper, a novel idea for the selection of an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing-gradient problem and the need for normalization of the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layers. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than the other two in terms of classification accuracy.
Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization
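A minimal sketch of the underlying idea (compressing a large EEG feature vector into a small code whose reconstruction MSE is minimized) is shown below in PyTorch. The layer sizes, code dimension, and placeholder data are assumptions, and plain gradient descent (Adam) is used here as a stand-in for the paper's meta-heuristic search over the autoencoder weights.

```python
import torch
import torch.nn as nn

class DeepAutoencoder(nn.Module):
    """Deep autoencoder: the bottleneck code is the reduced feature set."""
    def __init__(self, n_features=256, code_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, code_dim))
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

X = torch.randn(512, 256)               # placeholder EEG feature vectors
model = DeepAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                  # MSE between encoder input and output

for epoch in range(200):                # Adam as a stand-in for the
    opt.zero_grad()                     # paper's meta-heuristic search
    loss = loss_fn(model(X), X)
    loss.backward()
    opt.step()

codes = model.encoder(X).detach()       # reduced features for a classifier
print(codes.shape)                      # torch.Size([512, 16])
```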
Procedia PDF Downloads 114
5667 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task, because even very small measurement errors are most often hugely amplified during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Pade iteration, in which the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%; in more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6, the accuracy limit of ±0.03 is achieved in all modes. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
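For orientation, the core of truncated SVD regularization can be stated in a few lines: expand the solution in the singular basis of the forward operator and keep only the k largest singular values, with k playing the role of the regularization parameter. The toy smoothing kernel and noise level below are illustrative assumptions; the actual lidar kernels and the triple-parameter hybrid scheme are not reproduced.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD regularized solution of the ill-posed system A x = b:
    keep only the k largest singular values and invert those."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ b)[:k] / s[:k]     # truncate instead of damping
    return Vt[:k].T @ coeffs

# Ill-conditioned toy kernel (a discretized smoothing operator)
n = 64
x_grid = np.linspace(0, 1, n)
A = np.exp(-80.0 * (x_grid[:, None] - x_grid[None, :]) ** 2)
x_true = np.exp(-((x_grid - 0.4) / 0.1) ** 2)      # mono-modal "PSD"
rng = np.random.default_rng(1)
noise = 0.15 * np.linalg.norm(A @ x_true) / np.sqrt(n)
b = A @ x_true + noise * rng.standard_normal(n)

for k in (5, 10, 40):
    err = np.linalg.norm(tsvd_solve(A, b, k) - x_true) / np.linalg.norm(x_true)
    print(f"k = {k:2d}: relative error {err:.2f}")   # too-large k amplifies noise
```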
Procedia PDF Downloads 343
5666 Parameters Estimation of Multidimensional Possibility Distributions
Authors: Sergey Sorokin, Irina Sorokina, Alexander Yazenin
Abstract:
We present a solution to the Maxmin u/E parameter estimation problem for possibility distributions in the m-dimensional case. Our method is based on a geometrical approach, in which a minimal-area enclosing ellipsoid is constructed around the sample. We also demonstrate that one can improve the results of well-known algorithms in the fuzzy model identification task using Maxmin u/E parameter estimation.
Keywords: possibility distribution, parameters estimation, Maxmin u/E estimator, fuzzy model identification
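The enclosing-ellipsoid construction is not detailed in the abstract; a standard way to compute it in m dimensions is Khachiyan's algorithm for the minimum-volume (minimum-area in 2-D) enclosing ellipsoid, sketched below. The tolerances and test data are illustrative assumptions.

```python
import numpy as np

def min_volume_ellipsoid(P, tol=1e-7, max_iter=1000):
    """Khachiyan's algorithm for the minimum-volume enclosing ellipsoid of
    points P (n x d).  Returns (c, A) with (x - c)^T A (x - c) <= 1."""
    n, d = P.shape
    Q = np.vstack([P.T, np.ones(n)])        # lift points to d+1 dims
    u = np.full(n, 1.0 / n)                 # weights on the points
    for _ in range(max_iter):
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)
        j = np.argmax(M)                    # most "violating" point
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        if np.linalg.norm(new_u - u) < tol:
            u = new_u
            break
        u = new_u
    c = P.T @ u                             # ellipsoid center
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
    return c, A

pts = np.random.default_rng(0).normal(size=(200, 2))
c, A = min_volume_ellipsoid(pts)
print(c, np.linalg.eigvalsh(A))
```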
Procedia PDF Downloads 470
5665 Kinetics of Sugar Losses in Hot Water Blanching of Water Yam (Dioscorea alata)
Authors: Ayobami Solomon Popoola
Abstract:
Yam is mainly a carbohydrate food grown in most parts of the world. It can be boiled, fried, or roasted for consumption in a variety of ways. Blanching is an established heat pre-treatment given to fruits and vegetables prior to further processing such as dehydration, canning, freezing, etc. The loss of soluble solids during blanching has been a great problem, because a considerable quantity of the water-soluble nutrients is inevitably leached into the blanching water. Without blanching, the high residual levels of reducing sugars after extended storage produce a dark, bitter-tasting product because of the Maillard reactions of reducing sugars at frying temperature. Measurement and prediction of such losses are necessary for economic efficiency in production and to establish the level of effluent treatment of the blanching water. This paper aims at resolving this problem by investigating the effects of cube size and temperature on the rate of diffusional losses of reducing sugars and total sugars during hot water blanching of water yam. The study was carried out using four temperature levels (65, 70, 80, and 90 °C) and two cube sizes (0.02 m³ and 0.03 m³) at four time intervals (5, 10, 15, and 20 min). The data obtained were fitted to Fick's non-steady-state equation, from which diffusion coefficients (Da) were obtained. The Da values were subsequently fitted to an Arrhenius plot to obtain activation energies (Ea values) for the diffusional losses. The diffusion coefficients were independent of cube size and time but highly temperature-dependent. The diffusion coefficients were ≥ 1.0 × 10⁻⁹ m²s⁻¹ for reducing sugars and ≥ 5.0 × 10⁻⁹ m²s⁻¹ for total sugars. The Ea values ranged from 68.2 to 73.9 kJ mol⁻¹ for reducing sugar losses and from 7.2 to 14.3 kJ mol⁻¹ for total sugar losses. Predictive equations for estimating the amounts of reducing sugars and total sugars as functions of blanching time at various temperatures are also presented; these equations could be valuable in process design and optimization. However, the amounts of other soluble solids that might have leached into the water along with the reducing and total sugars during blanching were not investigated in this study.
Keywords: blanching, kinetics, sugar losses, water yam
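The Arrhenius step described above reduces to a linear fit of ln(Da) against 1/T, whose slope gives the activation energy. The sketch below shows this calculation; the diffusion-coefficient values are illustrative placeholders, not the paper's data.

```python
import numpy as np

# Activation energy from diffusion coefficients via the Arrhenius relation
#   D = D0 * exp(-Ea / (R * T))   =>   ln D = ln D0 - (Ea / R) * (1 / T)
R = 8.314                                         # J mol^-1 K^-1
T = np.array([65.0, 70.0, 80.0, 90.0]) + 273.15   # blanching temperatures, K
D = np.array([1.0e-9, 1.4e-9, 2.7e-9, 5.0e-9])    # m^2 s^-1 (assumed values)

slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea = -slope * R
print(f"Ea ~ {Ea / 1000:.1f} kJ/mol, D0 ~ {np.exp(intercept):.3g} m^2/s")
```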
Procedia PDF Downloads 165
5664 Scour Damaged Detection of Bridge Piers Using Vibration Analysis - Numerical Study of a Bridge
Authors: Solaine Hachem, Frédéric Bourquin, Dominique Siegert
Abstract:
The abrupt collapse of bridges is mainly due to scour. Indeed, soil erosion in the riverbed around a pier modifies the embedding conditions of the structure, reduces its overall stiffness, and threatens its stability. Hence, finding an efficient technique that allows early scour detection becomes mandatory. Vibration analysis is an indirect method for scour detection that relies on real-time monitoring of the bridge: it indicates the presence of scour based on its consequences for the stability of the structure and its dynamic response. Most of the research in this field has focused on the dynamic behavior of a single pile and has examined the depth of the scour. In this paper, a bridge is fully modeled with all piles and spans, and scour is represented by a reduction in the foundation stiffnesses. This work aims to identify the vibration modes sensitive to the loss of rigidity in the foundations so that their variations can be considered a scour indicator: a decrease in soil-structure interaction rigidity leads to a decrease in the natural frequency values. Using the first-order perturbation method, an expression for the sensitivity, which depends only on the selected vibration modes, is established to determine the deficiency in foundation stiffnesses. The solutions are obtained using the singular value decomposition method for the regularization of the inverse problem, and the propagation of uncertainties is also calculated to verify the efficiency of the inverse-problem method. Numerical simulations describing different scour scenarios are investigated on a simplified model of a real composite steel-concrete bridge located in France. The results of the modal analysis show that the modes corresponding to in-plane and out-of-plane pier vibrations are sensitive to the loss of foundation stiffness, while the deck bending modes are not affected by this damage.
Keywords: bridge piers, inverse problems, modal sensitivity, scour detection, vibration analysis
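A hedged toy illustration of the first-order perturbation idea follows: for mass-normalized mode shapes, a stiffness perturbation shifts the i-th eigenvalue by approximately phi_i^T dK phi_i, so stacking one row per monitored mode and one column per foundation spring yields a linear inverse problem for the stiffness deficiencies, solved here with an SVD-truncated pseudoinverse. The chain-like stiffness matrix and spring layout are invented for the sketch and are not the paper's bridge model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_springs = 12, 3

# Toy chain structure with unit masses; foundation springs at DOFs 0..2.
K0 = (np.diag(np.full(n_dof, 4.0))
      - np.diag(np.ones(n_dof - 1), 1) - np.diag(np.ones(n_dof - 1), -1))
dK_dk = []
for j in range(n_springs):
    dKj = np.zeros((n_dof, n_dof))
    dKj[j, j] = 1.0                       # each spring acts on one DOF (toy)
    dK_dk.append(dKj)

lam0, phi = np.linalg.eigh(K0)            # mass-normalized since M = I
modes = range(6)                          # modes assumed scour-sensitive
S = np.array([[phi[:, i] @ dK_dk[j] @ phi[:, i] for j in range(n_springs)]
              for i in modes])            # first-order sensitivity matrix

dk_true = np.array([-0.3, 0.0, -0.1])     # simulated stiffness losses (scour)
dlam = S @ dk_true + 1e-4 * rng.standard_normal(len(S))  # "measured" shifts

dk_est = np.linalg.pinv(S, rcond=1e-3) @ dlam   # SVD-regularized solution
print(dk_est)                                    # ~ [-0.3, 0.0, -0.1]
```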
Procedia PDF Downloads 104
5663 A Leader-Follower Kinematic-Based Control System for a Cable-Driven Hyper-Redundant Manipulator
Authors: Abolfazl Zaraki, Yoshikatsu Hayashi, Harry Thorpe, Vincent Strong, Gisle-Andre Larsen, William Holderbaum
Abstract:
Thanks to the high maneuverability of cable-driven hyper-redundant manipulators (HRMs), this class of robots has shown a superior capability in highly confined and unstructured space applications. Although the large number of degrees of freedom (DOF) of HRMs enhances the motion flexibility and the robot's reachability range, it greatly increases the complexity of the kinematic configuration, which makes the kinematic control problem very challenging or even impossible to solve. This paper presents our current progress on the development of a kinematic-based leader-follower control system designed to control not only the robot's body posture but also the trajectory of the robot's movement in a semi-autonomous manner (the human operator is retained in the robot's control loop). To obtain the forward kinematic model, the coordinate frames are established by the classical Denavit-Hartenberg (D-H) convention for a hyper-redundant serial manipulator with a controlled cable-driven mechanism. To solve the inverse kinematics of the robot, unlike conventional methods, a leader-follower mechanism based on sequential inverse kinematics is adopted. Using this mechanism, the inverse kinematic problem is solved for all joints sequentially, starting from the head joint down to the base joint of the robot. To verify the kinematic design and simulate the robot motion, the MATLAB robotics toolbox is used. The simulation results demonstrated the promising capability of the proposed leader-follower control system in controlling the robot motion and trajectory in our confined space application.
Keywords: hyper-redundant robots, kinematic analysis, semi-autonomous control, serial manipulators
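The D-H forward model mentioned above chains one homogeneous transform per joint; a minimal sketch (in Python rather than the MATLAB toolbox the authors use) is given below. The 10-segment chain and its parameters are assumptions for illustration, not the authors' calibrated model.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links under the classical
    Denavit-Hartenberg convention (rotate theta about z, offset d along z,
    length a along x, twist alpha about x)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_rows):
    """Chain the per-joint transforms; returns the head-frame pose."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T

# Illustrative 10-segment chain, each segment 5 cm long, bent 5 degrees.
rows = [(np.deg2rad(5.0), 0.0, 0.05, 0.0)] * 10
print(forward_kinematics(rows)[:3, 3])   # head (end-effector) position
```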
Procedia PDF Downloads 157
5662 Effect of Facilitation in a Problem-Based Environment on the Metacognition, Motivation and Self-Directed Learning in Nursing: A Quasi-Experimental Study among Nurse Students in Tanzania
Authors: Walter M. Millanzi, Stephen M. Kibusi
Abstract:
Background: There is currently a progressive shortage not only in the number but also in the quality of medical practitioners, nurses in particular. Moreover, some of those in practice exhibit unethical and illegal practices, substandard care, and malpractice. This raises concerns about the way they are prepared: something may be missing in nursing curricula or in how they are delivered, and there is a need to transform, or to test, new teaching modalities to produce a competent health workforce. Objective: to investigate the effect of Facilitation in a Problem-Based Environment (FPBE) on metacognition, self-directed learning, and learning motivation among undergraduate nurse students in Tanzanian higher learning institutions. Methods: a quasi-experimental study (quantitative research approach). A purposive sampling technique was employed to select institutions, achieving a sample size of 401 participants (intervention = 134 and control = 267). A self-administered semi-structured questionnaire was the main data collection method, and the Statistical Package for the Social Sciences (SPSS, v. 20) was used for data entry, analysis, and presentation. Results: The pre-post test results showed a noticeably significant change between groups in metacognition, with the intervention group (M = 1.52, SD = 0.501) against the control (M = 1.40, SD = 0.490), t(399) = 2.398, p < 0.05; in SDL, with the intervention group (M = 1.52, SD = 0.501) against the control (M = 1.40, SD = 0.490), t(399) = 2.398, p < 0.05; and in motivation to learn, with the intervention group (M = 62.67, SD = 14.14) against the control (n = 267, M = 57.75), t(399) = 2.907, p < 0.01. The FPBE teaching pedagogy was observed to be effective for metacognition (AOR = 1.603, p < 0.05), SDL (OR = 1.729, p < 0.05), and intrinsic motivation in learning (AOR = 1.720, p < 0.05) compared with conventional teaching pedagogy. Nevertheless, it was less likely to enhance extrinsic motivation (AOR = 0.676, p > 0.05) and amotivation (AOR = 0.538, p > 0.05). Conclusion and recommendation: FPBE teaching pedagogy can improve students' metacognition, self-directed learning, and intrinsic motivation to learn among nurse students. Nursing curricula developers should incorporate it to produce competent and qualified 21st-century nurses.
Keywords: facilitation, metacognition, motivation, self-directed
Procedia PDF Downloads 188
5661 Boundary Feedback Stabilization of an Overhead Crane Model
Authors: Abdelhadi Elharfi
Abstract:
A problem of boundary feedback (exponential) stabilization of an overhead crane model represented by a PDE is considered. For any $r>0$, exponential stability at the desired decay rate $r$ is achieved, in a semigroup setting, by a collocated-type stabilizer of a target system combined with a term involving the solution of an appropriate PDE.
Keywords: feedback stabilization, semigroup and generator, overhead crane system
Procedia PDF Downloads 406
5660 An Efficient Robot Navigation Model in a Multi-Target Domain amidst Static and Dynamic Obstacles
Authors: Michael Ayomoh, Adriaan Roux, Oyindamola Omotuyi
Abstract:
This paper presents an efficient robot navigation model in a multi-target domain amidst static and dynamic workspace obstacles. The problem is that of developing an optimal algorithm to minimize the total travel time of a robot as it visits all target points within its task domain amidst unknown workspace obstacles and finally returns to its initial position. In solving this problem, a classical algorithm was first developed to compute the optimal number of paths to be travelled by the robot amidst the network of paths. The principle of shortest distance between the robot and the targets was used to compute the target-point visitation order amidst workspace obstacles, as illustrated in the sketch after this abstract. An algorithm premised on the standard polar coordinate system was developed to determine the length of the obstacles encountered by the robot, giving room for a geometrical estimation of the total surface area occupied by an obstacle, especially when it is classified as a relevant obstacle, i.e., one that lies between the robot and its potential visitation point. A stochastic model was developed and used to estimate the likelihood of a dynamic obstacle bumping into the robot's navigation path, and finally, the navigation/obstacle avoidance algorithm was hinged on the hybrid virtual force field (HVFF) method. Significant modelling constraints herein include the choice of navigation path to the selected target points, the possible presence of static obstacles along a desired navigation path, the likelihood of encountering a dynamic obstacle along the robot's path, and the chance of it remaining at this position as a static obstacle, hence resulting in a case of re-routing after routing. The proposed algorithm demonstrated a high potential for optimal solutions in terms of efficiency and effectiveness.
Keywords: multi-target, mobile robot, optimal path, static obstacles, dynamic obstacles
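The shortest-distance visitation principle can be illustrated with a greedy nearest-target ordering that ends with the return to the initial position; in the sketch below, Euclidean distance stands in for the obstacle-aware path costs that the full model would use.

```python
import numpy as np

def greedy_visitation_order(start, targets):
    """Order targets by the shortest-distance principle: repeatedly visit
    the nearest unvisited target, then return to the start position."""
    pos = np.asarray(start, dtype=float)
    remaining = {i: np.asarray(t, dtype=float) for i, t in enumerate(targets)}
    order, total = [], 0.0
    while remaining:
        i = min(remaining, key=lambda j: np.linalg.norm(remaining[j] - pos))
        total += np.linalg.norm(remaining[i] - pos)
        pos = remaining.pop(i)
        order.append(i)
    total += np.linalg.norm(np.asarray(start, dtype=float) - pos)  # go home
    return order, total

order, dist = greedy_visitation_order((0, 0), [(4, 1), (1, 3), (5, 5), (2, 0)])
print(order, round(dist, 2))
```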
Procedia PDF Downloads 281
5659 Detection of Cryptosporidium Oocysts by Acid-Fast Staining Method and PCR in Surface Water from Tehran, Iran
Authors: Mohamad Mohsen Homayouni, Niloofar Taghipour, Ahmad Reza Memar, Niloofar Khalaji, Hamed Kiani, Seyyed Javad Seyyed Tabaei
Abstract:
Background and Objective: Cryptosporidium is a coccidian protozoan parasite, and its oocysts in surface water are a global health problem. Due to the low number of parasites in water resources and the lack of laboratory culture, a rapid and sensitive method for detecting the organism in water resources is necessarily required. We applied modified acid-fast staining and PCR for the detection of Cryptosporidium spp. and analysed the genotypes in 55 samples collected from surface water. Methods: Over a period of nine months, 55 surface water samples were collected from five rivers in Tehran, Iran. The samples were filtered using cellulose acetate membrane filters. Initial identification of Cryptosporidium oocysts in the surface water samples was carried out by the acid-fast method. A nested PCR assay was then designed for specific amplification and genotype analysis. Results: The modified Ziehl-Neelsen method revealed 5-20 Cryptosporidium oocysts detected per 10 L. Five of the 55 (9.09%) surface water samples were found positive for Cryptosporidium spp. by the Ziehl-Neelsen test, and seven (12.7%) were found positive by nested PCR; the staining results were consistent with the PCR. Seven Cryptosporidium PCR products were successfully sequenced, and five gp60 subtypes were detected. Analysis of the gp60 gene revealed that all of the positive isolates were Cryptosporidium parvum and belonged to subtype families IIa and IId. Conclusion: Our investigation showed that the water samples were contaminated by Cryptosporidium, posing a potentially significant health hazard. This study provides the first report on the detection and genotyping of Cryptosporidium species from surface water samples in Iran, and its results confirm the low clinical incidence of this parasite in the community.
Keywords: Cryptosporidium spp., membrane filtration, subtype, surface water, Iran
Procedia PDF Downloads 416
5658 The Emancipatory Methodological Approach to the Organizational Problems Management
Authors: Slavica P. Petrovic
Abstract:
One of the key dimensions of management problems in organizations concerns the relations between stakeholders. The relevant research subject is management problems characterized by conflict and coercion, in which participants do not agree on ends and means, and in which different groups or individuals strive, using the power they have, to impose their favoured strategies and decisions on others. Creatively managing coercive problems in organizations, in which the sources of power can be identified, implies the emancipatory paradigm and the use of the corresponding systems methodology. The main research aim is to critically reassess the theoretical foundations and the methodological and methodical development of Critical Systems Heuristics (CSH), as a valid representative of the emancipatory paradigm, in order to determine the conditions, ways, and achievements of its application in managing coercive problems in organizations. The basic hypothesis is that CSH, as the emancipatory methodology, given its theoretical foundations and methodological-methodical development, can be employed in a scientifically based and practically useful manner in creatively addressing coercive problems. The scientific instrumentarium corresponding to this research aim is critical systems thinking, with its three key commitments: a) critical awareness of the strengths and weaknesses of each research instrument (theory, methodology, method, technique, model) for structuring problem situations in organizations; b) improvement of the management of coercive problems in organizations; and c) pluralism, i.e., respecting the different perceptions and interpretations of problem situations and enabling the combined use of research instruments. The relevant research result is that CSH, considering its theoretical foundations and its methodological and methodical development, makes it possible to reveal the normative content of proposed or existing designs of organizational systems. Accordingly, it can be concluded that, through the use of critically heuristic categories and a dialectical debate between those involved in designing organizational systems and those affected by the designs but not included in them, CSH endeavours, in application, to support the process of improving the position of all stakeholders.
Keywords: coercion and conflict in organizations, creative management, critical systems heuristics, the emancipatory systems methodology
Procedia PDF Downloads 442
5657 Inverse Matrix in the Theory of Dynamical Systems
Authors: Renata Masarova, Bohuslava Juhasova, Martin Juhas, Zuzana Sutova
Abstract:
In dynamical system theory, a mathematical model is often used to describe a system's properties. In order to find the transfer matrix of a dynamic system, we need to calculate an inverse matrix. The paper fuses the classical theory with the procedures used in the theory of automated control for calculating the inverse matrix. The final part of the paper models the given problem in Matlab.
Keywords: dynamic system, transfer matrix, inverse matrix, modeling
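For a state-space model (A, B, C, D), the transfer matrix is G(s) = C(sI - A)^(-1)B + D, which is exactly where the inverse matrix enters. Below is a minimal numeric sketch of this relationship (written in Python rather than the paper's Matlab); the system matrices are illustrative values.

```python
import numpy as np

def transfer_matrix(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^(-1) B + D at a complex frequency s.
    Solving (sI - A) X = B is the numerically preferred way of applying
    the inverse matrix that defines the transfer matrix."""
    n = A.shape[0]
    X = np.linalg.solve(s * np.eye(n) - A, B)   # (sI - A)^(-1) B
    return C @ X + D

# Toy 2-state single-input, single-output system (illustrative values)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))
print(transfer_matrix(A, B, C, D, s=1j * 2.0))  # frequency response at w = 2
```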
Procedia PDF Downloads 516
5656 Main Tendencies of Youth Unemployment and the Regulation Mechanisms for Decreasing Its Rate in Georgia
Authors: Nino Paresashvili, Nino Abesadze
Abstract:
The modern world faces huge challenges. Globalization has changed the socio-economic conditions of many countries, and the current processes in the global environment affect countries with different cultures differently. However, the alleviation of poverty and the improvement of living conditions remain basic challenges for the majority of countries, because much of the population still lives below the official poverty threshold. It is very important to stimulate youth employment. In order to prepare young people for the labour market, it is essential to provide them with appropriate professional skills and knowledge. It is necessary to plan efficient activities for decreasing the unemployment rate and to develop effective mechanisms for regulating the labour market. Such planning requires a thorough study and analysis of the existing situation, as well as the development of corresponding mechanisms. Statistical analysis of unemployment is one of the main platforms for regulating the key mechanisms of the labour market, and the corresponding statistical methods should be used in the study: observation, data gathering, grouping, and the calculation of generalized indicators. Unemployment is one of the most severe socioeconomic problems in Georgia. According to past as well as current statistics, unemployment rates have always been the most problematic issue for policymakers to resolve, and analytical work on this problem will be the basis for the next sustainable steps toward solving it. The results of the study showed that young people's choices are often driven neither by their inclinations and interests nor by labour market demand; this wrong professional orientation in most cases leads to their unemployment. At the same time, it was shown that a number of professions in the labour market are in high demand because of a deficit of appropriately trained specialists. To achieve healthy competitiveness in youth employment, it is necessary to formulate regional employment programs that take regional infrastructure specifications into account.
Keywords: unemployment, analysis, methods, tendencies, regulation mechanisms
Procedia PDF Downloads 377
5655 Violent Conflict and the Protection of Women from Sex and Gender-Based Violence: A Third World Feminist Critique of the United Nations Women, Peace, and Security Agenda
Authors: Seember Susan Aondoakura
Abstract:
This paper examines the international legal framework established to address the challenges women and girls experience in situations of violent conflict. The United Nations (UN) women, peace, and security agenda (hereafter the WPS agenda, or the Agenda) aspires to make wars safer for women. It recognizes both women's agency in armed conflict and their victimization, and it formulates measures for their protection. The Agenda also acknowledges women's participation in conflict transformation and post-conflict reconstruction: it calls for the involvement of women in conflict transformation, encourages the protection of women from sex and gender-based violence (SGBV), and provides relief and recovery from conflict-related SGBV. Using Third World Critical Feminist Theory, this paper argues that the WPS agenda's overriding focus on protecting women from SGBV occurring in the less developed and conflict-ridden states of the global south obscures the complicity of western states and economies in the problem and silences the privileges that such states derive from the war economies that continue to fuel conflict. This protectionist approach of the UN also obliterates other equally pressing problems in need of attention, such as the high rates of economic degradation in conflict-ravaged societies of the global south. Prioritising protection also 'others' the problem, obliterating any sense of interconnection across geographical locations and situating women in the less developed economies of the global south as the victims and their men as the perpetrators. Prioritising protection ultimately casts western societies as the saviours of Third World women, with no acknowledgement of their own role in engendering and sustaining war. The paper demonstrates that this saviour mentality obliterates the chances of any meaningful coalition between the local and the international in framing and addressing the issue, as solutions are formulated through a specific lens: the white hegemonic lens.
Keywords: conflict, protection, security, SGBV
Procedia PDF Downloads 96
5654 Semantic Search Engine Based on Query Expansion with Google Ranking and Similarity Measures
Authors: Ahmad Shahin, Fadi Chakik, Walid Moudani
Abstract:
Our study elaborates a potential solution for a search engine that involves semantic technology to retrieve information and display it meaningfully. Semantic search engines are not widely used over the web, as the majority are still in the beta stage or under construction. Many problems face the current applications of semantic search; the major one is analyzing and calculating the meaning of a query in order to retrieve relevant information. Another problem is the ontology-based index and its updates, and ranking results according to concept meaning and its relation to the query is another challenge. In this paper, we offer a light meta-engine (QESM) which uses Google search, and therefore Google's index, and adapts the returned results by adding multi-query expansion. The mission was to find a reliable ranking algorithm that involves semantics and uses concepts and meanings to rank results. First, the engine finds synonyms of each query term entered by the user based on a lexical database. Then, query expansion is applied to generate different semantically analogous sentences, produced randomly by combining the found synonyms and the original query terms. Our model suggests the use of semantic similarity measures between two sentences; practically, we used this method to calculate the semantic similarity between each query and the description of each page's content generated by Google. The generated sentences are sent to the Google engine one by one and ranked again all together with the adapted ranking method (QESM). Finally, our system places the Google pages with higher similarities at the top of the results. We conducted experiments with 6 different queries and observed that most results ranked with QESM were reordered relative to Google's originally generated pages. With our experimental queries, QESM frequently achieves better accuracy than Google; in some worst cases, it behaves like Google.
Keywords: semantic search engine, Google indexing, query expansion, similarity measures
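A minimal sketch of the expansion-and-rerank pipeline is given below. WordNet (via nltk) stands in for the paper's unspecified lexical database, and a crude token-overlap (Jaccard) score stands in for its sentence-level semantic similarity measure; the query and snippet are invented for illustration.

```python
import itertools
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def synonyms(term):
    """Collect WordNet lemma names as a stand-in lexical database."""
    names = {l.name().replace('_', ' ')
             for s in wn.synsets(term) for l in s.lemmas()}
    return sorted(names | {term})

def expand_query(query, limit=10):
    """Generate semantically analogous variants of the query by combining
    synonyms of each term (multi-query expansion)."""
    options = [synonyms(t) for t in query.split()]
    combos = itertools.islice(itertools.product(*options), limit)
    return [' '.join(c) for c in combos]

def jaccard_similarity(a, b):
    """Token-overlap similarity: a placeholder for a proper sentence-level
    semantic similarity measure."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

variants = expand_query("cheap car insurance")
snippet = "affordable automobile insurance quotes and low-cost coverage"
ranked = sorted(variants, key=lambda v: jaccard_similarity(v, snippet),
                reverse=True)
print(ranked[:3])
```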
Procedia PDF Downloads 425
5653 Time, Uncertainty, and Technological Innovation
Authors: Xavier Everaert
Abstract:
Ever since the publication of "The Problem of Social Cost", Coasean insights on externalities, transaction costs, and the reciprocal nature of harms have been widely debated. What has been largely neglected, however, is the role of technological innovation in the mitigation of negative externalities or transaction costs. Incorporating future uncertainty about negligence standards or expected restitution costs, and the profit opportunities these uncertainties reveal to entrepreneurs, allows us to frame problems regarding social costs within the reality of rapid technological evolution.
Keywords: environmental law and economics, entrepreneurship, commons, pollution, wildlife
Procedia PDF Downloads 421
5652 Kemmer Oscillator in Cosmic String Background
Authors: N. Messai, A. Boumali
Abstract:
In this work, we aim to solve the two-dimensional Kemmer equation, including a Dirac oscillator interaction term, in the background space-time generated by a cosmic string that is subjected to a uniform magnetic field. The eigenfunctions and eigenvalues of our problem have been found, and the influence of the cosmic string space-time on the energy spectrum has been analyzed.
Keywords: Kemmer oscillator, cosmic string, Dirac oscillator, eigenfunctions
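The abstract does not display the equations. For orientation, the standard ingredients of such a setup are sketched below in LaTeX; the sign conventions, the form of the oscillator substitution, and the vector potential for the uniform field are assumptions drawn from the general Duffin-Kemmer-Petiau (DKP) oscillator literature and may differ from the authors' conventions.

```latex
% Cosmic-string metric with deficit parameter alpha = 1 - 4*G*mu:
\[
  ds^{2} = dt^{2} - dr^{2} - \alpha^{2} r^{2}\, d\varphi^{2} - dz^{2},
  \qquad \alpha = 1 - 4G\mu .
\]
% Free Kemmer (DKP) equation and the Dirac-oscillator substitution,
% with eta^0 built from the DKP beta matrices:
\[
  \left( i \beta^{\mu} \partial_{\mu} - m \right) \Psi = 0,
  \qquad
  \vec{p} \;\to\; \vec{p} - i m \omega\, \eta^{0} \vec{r},
  \qquad \eta^{0} = 2 \left( \beta^{0} \right)^{2} - 1 .
\]
% Minimal coupling for the uniform magnetic field B_0 (assumed gauge):
\[
  \partial_{\mu} \;\to\; \partial_{\mu} + i e A_{\mu},
  \qquad A_{\varphi} = \tfrac{1}{2}\, \alpha B_{0} r^{2} .
\]
```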
Procedia PDF Downloads 584
5651 One-Class Classification Approach Using Fukunaga-Koontz Transform and Selective Multiple Kernel Learning
Authors: Abdullah Bal
Abstract:
This paper presents a one-class classification (OCC) technique based on the Fukunaga-Koontz Transform (FKT) for binary classification problems. The FKT is originally a powerful tool for feature selection and ordering in two-class problems. To utilize the standard FKT for the data domain description problem (i.e., one-class classification), in this paper, a set of non-class samples lying outside the boundary formed by the limited positive-class (target-class) training data has been constructed synthetically. The tunnel-like decision boundary around the upper and lower borders of the target-class samples has been designed using the statistical properties of the feature vectors belonging to the training data. To capture higher-order statistics of the data and increase the discrimination ability, the proposed method, termed one-class FKT (OC-FKT), has been extended to its nonlinear version via kernel machines and is referred to as OC-KFKT for short. Multiple kernel learning (MKL) is a favorable family of machine learning methods that tries to find an optimal combination of a set of sub-kernels to achieve a better result. However, the discriminative ability of some of the base kernels may be low, and an OC-KFKT designed with this type of kernel leads to unsatisfactory classification performance. To address this problem, the quality of the sub-kernels should be evaluated, and the weak kernels must be discarded before the final decision-making process. MKL/OC-FKT and selective MKL/OC-FKT frameworks have been designed, inspired by ensemble learning (EL), to weight and then select the sub-classifiers using the discriminability and diversity measured by eigenvalue ratios; the eigenvalue ratios have been assessed based on their regions in the FKT subspaces. Comparative experiments, performed on various low- and high-dimensional data against state-of-the-art algorithms, confirm the effectiveness of our techniques, especially under small sample size (SSS) conditions.
Keywords: ensemble methods, fukunaga-koontz transform, kernel-based methods, multiple kernel learning, one-class classification
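A hedged sketch of the linear FKT underlying the method follows: whitening the summed class scatter matrices maps both classes into a shared eigenbasis in which the eigenvalues of the two classes sum to one, so directions dominant for the target class are weakest for the non-class samples. The synthetic "non-class" construction below is a toy stand-in for the paper's tunnel-like boundary design.

```python
import numpy as np

def fukunaga_koontz(X1, X2, eps=1e-10):
    """Fukunaga-Koontz Transform for two classes given as (n_i x d) arrays.
    Returns the shared basis W and the class-1 eigenvalues lam1; the same
    directions carry eigenvalues (1 - lam1) for class 2."""
    S1 = X1.T @ X1 / len(X1)
    S2 = X2.T @ X2 / len(X2)
    d_sum, V = np.linalg.eigh(S1 + S2)
    keep = d_sum > eps                        # discard the null space
    P = V[:, keep] / np.sqrt(d_sum[keep])     # whitening operator
    lam1, U = np.linalg.eigh(P.T @ S1 @ P)    # eigenvalues lie in [0, 1]
    return P @ U, lam1

rng = np.random.default_rng(0)
target = rng.normal(size=(200, 5)) * np.array([3, 1, 1, 1, 1])
# Synthetic non-class samples scattered around the target data (toy
# stand-in for the tunnel-like boundary construction):
outliers = target[:80] + rng.choice([-1, 1], (80, 5)) * 2.5
W, lam1 = fukunaga_koontz(target, outliers)
print(np.round(lam1, 3))   # near 1: target-dominant; near 0: non-class
```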
Procedia PDF Downloads 21
5650 The Changes of Chemical Composition of Rice Straw Treated by a Biodecomposer Developed from Rumen Bacterial of Buffalo
Authors: A. Natsir, M. Nadir, S. Syahrir, A. Mujnisa
Abstract:
In tropical countries such as Indonesia, rice straw plays an important role in fulfilling the need for ruminant feed, especially during the dry season, when the availability of forage is very limited. However, the main problem with using rice straw as a feedstuff is its low digestibility, due to the links between lignin and cellulose or hemicellulose and the imbalance of its mineral content. One alternative for solving this problem is the application of a biodecomposer (BS) derived from rumen bacteria of ruminants. This study was designed to assess the effects of BS application on the changes in the chemical composition of rice straw. Four adult local buffalo raised under typical feeding conditions were used as the source of inoculum for BS development. The animals were fed for a month with a diet consisting of rice straw and elephant grass before rumen fluid samples were taken. Samples of rumen fluid were inoculated in carboxymethyl cellulose (CMC) media under anaerobic conditions for 48 hours at 37°C. The mixture of CMC media and microbes was then ready to be used as a biodecomposer following incubation under anaerobic conditions for 7 days at 45°C. The effectiveness of the BS was then assessed by applying it to the straw according to a completely randomized design consisting of four treatments and three replications. One hundred g of ground coarse rice straw was used as the substrate. The BS was applied to the rice straw substrate with the following composition: rice straw without BS (P0), rice straw + 5% BS (P1), rice straw + 10% BS (P2), and rice straw + 15% BS (P3). The mixture of rice straw and BS was then fermented anaerobically for four weeks. Following the fermentation, the chemical composition of the rice straw was evaluated. The results indicated that the crude protein content of the rice straw significantly increased (P < 0.05) as the level of BS increased, while the crude fiber concentration significantly decreased (P < 0.05). Other nutrients, such as minerals, did not change (P > 0.05) with the treatments. In conclusion, the application of a BS developed from rumen bacteria of buffalo has a promising prospect as a biological agent for improving the quality of rice straw for ruminant feeding.
Keywords: biodecomposer, local buffalo, rumen microbial, chemical composition
Procedia PDF Downloads 208
5649 Phytoextraction of Heavy Metals in a Contaminated Site in Assam, India Using Indian Pennywort and Fenugreek: An Experimental Study
Authors: Chinumani Choudhury
Abstract:
Heavy metal contamination is an alarming problem that poses a serious risk to human health and the surrounding geology. Soils become contaminated with heavy metals due to the unregulated industrial discharge of toxic metal-rich effluents. Under such conditions, the remediation of contaminated sites becomes imperative for a sustainable, safe, and healthy environment. Phytoextraction, which involves the removal of heavy metals from the soil through root absorption and uptake, is a viable remediation technique that ensures the extraction of toxic inorganic compounds available in the soil even at low concentrations. The soil in the Silghat region of Assam, India, is mostly contaminated with zinc (Zn) and lead (Pb), at concentrations high enough to cause a serious environmental problem if proper measures are not taken. In the present study, an extensive experimental investigation was carried out to understand the effectiveness of two commonly planted species in Assam, namely, i) Indian pennywort and ii) fenugreek, in the removal of heavy metals from the contaminated soil. The basic characterization of the soil at the contaminated site in the Silghat region was performed, and the field concentrations of Zn and Pb were recorded. Various long-term laboratory pot tests were carried out by sowing the seeds of Indian pennywort and fenugreek in soil spiked with very high dosages of Zn and Pb. The tests were carried out for different concentrations of each heavy metal, and the individual effectiveness of the plants in absorbing the metals was studied. The concentration in the soil was monitored regularly to assess the rate of depletion and the simultaneous uptake of the heavy metal from the soil into the plant. The amount of heavy metal taken up by the plants was also quantified by analyzing plant samples at the end of the testing period. Finally, the study throws light on the applicability of the studied plants for the effective remediation of the contaminated sites of Assam.
Keywords: phytoextraction, heavy-metals, Indian pennywort, fenugreek
Procedia PDF Downloads 120