Search results for: maximum input
859 The Effect of Degraded Shock Absorbers on the Safety-Critical Stationary and Non-Stationary Lateral Dynamics of Passenger Cars
Authors: Tobias Schramm, Günther Prokop
Abstract:
The average age of passenger cars is rising steadily around the world. Older vehicles are more sensitive to the degradation of chassis components. A higher age and a higher mileage of passenger cars correlate with an increased failure rate of vehicle shock absorbers. The most common degradation mechanism of vehicle shock absorbers is the loss of oil and gas. It is not yet fully understood how the loss of oil and gas in twin-tube shock absorbers affects the lateral dynamics of passenger cars. The aim of this work is to estimate the effect of degraded twin-tube shock absorbers of passenger cars on their safety-critical lateral dynamics. A characteristic curve-based five-mass full vehicle model and a semi-physical phenomenological shock absorber model were set up, parameterized and validated. The shock absorber model is able to reproduce the damping characteristics of vehicle twin-tube shock absorbers with oil and gas loss for various excitations. The full vehicle model was used to simulate stationary cornering and steering wheel angle step maneuvers on road classes A to D. The simulations were carried out in a realistic parameter space in order to demonstrate the influence of various vehicle characteristics on the effect of degraded shock absorbers. As a result, it was shown that degraded shock absorbers have a negative effect on the understeer gradient of vehicles. For stationary lateral dynamics, degraded shock absorbers for high road excitations reduce the maximum lateral accelerations. Degraded rear axle shock absorbers can change the understeer gradient of a vehicle in the direction of oversteer. Degraded shock absorbers also lead to increased rolling angles. Furthermore, degraded shock absorbers have a major impact on driving stability during steering wheel angle steps. Degraded rear axle shock absorbers, in particular, can lead to unstable handling. 
The tire stiffness, the unsprung mass and the stabilizer stiffness in particular influence the effect of degraded shock absorbers on the lateral dynamics of passenger cars.
Keywords: driving dynamics, numerical simulation, road safety, shock absorber degradation, stationary and nonstationary lateral dynamics.
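The abstract's central quantity, the understeer gradient, is not defined there; as background, the steady-state gradient of the classical linear single-track ("bicycle") model can be sketched as follows. All vehicle parameters below are invented for illustration, and reduced effective rear-axle cornering stiffness stands in for the rear-damper degradation the study describes.

```python
# Sketch of the classical steady-state understeer gradient,
# K = (Wf/Caf - Wr/Car) per unit lateral acceleration. Not the paper's
# five-mass model -- a textbook background formula with assumed numbers.

def understeer_gradient(m_front, m_rear, c_front, c_rear, g=9.81):
    """Understeer gradient K [rad per m/s^2 of lateral acceleration]."""
    w_front = m_front * g  # front axle load [N]
    w_rear = m_rear * g    # rear axle load [N]
    return (w_front / c_front - w_rear / c_rear) / g

# Nominal mid-size car (assumed axle masses [kg], cornering stiffnesses [N/rad])
k_nominal = understeer_gradient(900, 700, 120e3, 110e3)

# Same car with effective rear cornering stiffness reduced by 20 %
k_degraded = understeer_gradient(900, 700, 120e3, 0.8 * 110e3)

print(k_nominal, k_degraded)
```

With these assumed numbers the gradient flips sign, i.e. the car moves from understeer toward oversteer, mirroring the abstract's finding for degraded rear-axle shock absorbers.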
Procedia PDF Downloads 13
858 Hydrological-Economic Modeling of Two Hydrographic Basins of the Coast of Peru
Authors: Julio Jesus Salazar, Manuel Andres Jesus De Lama
Abstract:
There are very few models that serve to analyze the use of water in the socio-economic process. On the supply side, the joint use of groundwater has been considered in addition to the simple limits on the availability of surface water. In addition, we have worked on waterlogging and the effects on water quality (mainly salinity). In this paper, a 'complex' water economy is examined; one in which demands grow differentially not only within but also between sectors, and one in which there are limited opportunities to increase consumptive use. In particular, high-value growth, the growth of the production of irrigated crops of high value within the basins of the case study, together with the rapidly growing urban areas, provides a rich context to examine the general problem of water management at the basin level. At the same time, the long-term aridity of nature has made the eco-environment in the basins located on the coast of Peru very vulnerable, and the exploitation and immediate use of water resources have further deteriorated the situation. The presented methodology is the optimization with embedded simulation. The wide basin simulation of flow and water balances and crop growth are embedded with the optimization of water allocation, reservoir operation, and irrigation scheduling. The modeling framework is developed from a network of river basins that includes multiple nodes of origin (reservoirs, aquifers, water courses, etc.) and multiple demand sites along the river, including places of consumptive use for agricultural, municipal and industrial, and uses of running water on the coast of Peru. The economic benefits associated with water use are evaluated for different demand management instruments, including water rights, based on the production and benefit functions of water use in the urban agricultural and industrial sectors. 
This work represents a new effort to analyze the use of water at the regional level and to evaluate the modernization of the integrated management of water resources and socio-economic territorial development in Peru. It will also allow the establishment of policies to improve the process of implementation of the integrated management and development of water resources. The input-output analysis is essential to present a theory about the production process, which is based on a particular type of production function. Also, this work presents the Computable General Equilibrium (CGE) version of the economic model for water resource policy analysis, which was specifically designed for analyzing large-scale water management. As the platform for CGE simulation, GEMPACK, a flexible system for solving CGE models, is used to formulate and solve the model through the percentage-change approach. GEMPACK automates the process of translating the model specification into a model solution program.
Keywords: water economy, simulation, modeling, integration
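The input-output analysis the abstract mentions rests on the open Leontief model, in which gross sectoral output x solves x = Ax + d, i.e. x = (I − A)⁻¹d. A minimal pure-Python sketch with an invented two-sector coefficient matrix (not the Peruvian case-study data):

```python
# Toy open Leontief input-output model: solve (I - A) x = d by Cramer's
# rule for a 2x2 system. Coefficients and demands are invented.

def leontief_output_2x2(a, d):
    """Gross outputs x for technical coefficients A and final demand d."""
    m = [[1 - a[0][0], -a[0][1]],
         [-a[1][0], 1 - a[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    x0 = (m[1][1] * d[0] - m[0][1] * d[1]) / det
    x1 = (m[0][0] * d[1] - m[1][0] * d[0]) / det
    return [x0, x1]

A = [[0.2, 0.3],   # agriculture inputs per unit of each sector's output
     [0.1, 0.25]]  # industry inputs per unit of each sector's output
final_demand = [100.0, 200.0]

gross_output = leontief_output_2x2(A, final_demand)
print(gross_output)  # each sector must produce more than its final demand
```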
Procedia PDF Downloads 155
857 Reactive Power Control Strategy for Z-Source Inverter Based Reconfigurable Photovoltaic Microgrid Architectures
Authors: Reshan Perera, Sarith Munasinghe, Himali Lakshika, Yasith Perera, Hasitha Walakadawattage, Udayanga Hemapala
Abstract:
This research presents a reconfigurable architecture for residential microgrid systems utilizing Z-Source Inverter (ZSI) to optimize solar photovoltaic (SPV) system utilization and enhance grid resilience. The proposed system addresses challenges associated with high solar power penetration through various modes, including current control, voltage-frequency control, and reactive power control. It ensures uninterrupted power supply during grid faults, providing flexibility and reliability for grid-connected SPV customers. Challenges and opportunities in reactive power control for microgrids are explored, with simulation results and case studies validating proposed strategies. From a control and power perspective, the ZSI-based inverter enhances safety, reduces failures, and improves power quality compared to traditional inverters. Operating seamlessly in grid-connected and islanded modes guarantees continuous power supply during grid disturbances. Moreover, the research addresses power quality issues in long distribution feeders during off-peak and night-peak hours or fault conditions. Using the Distributed Static Synchronous Compensator (DSTATCOM) for voltage stability, the control objective is nighttime voltage regulation at the Point of Common Coupling (PCC). In this mode, disconnection of PV panels, batteries, and the battery controller allows the ZSI to operate in voltage-regulating mode, with critical loads remaining connected. The study introduces a structured controller for Reactive Power Controlling mode, contributing to a comprehensive and adaptable solution for residential microgrid systems. Mathematical modeling and simulations confirm successful maximum power extraction, controlled voltage, and smooth voltage-frequency regulation.
Keywords: reconfigurable architecture, solar photovoltaic, microgrids, z-source inverter, STATCOM, power quality, battery storage system
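As a rough illustration of the nighttime voltage-regulation objective at the PCC (not the paper's controller), the standard radial-feeder approximation dV ≈ (R·P + X·Q)/V shows how a reactive-power injection can cancel the load-induced voltage drop. The feeder parameters below are invented:

```python
# Back-of-the-envelope DSTATCOM-style voltage support: on a radial feeder
# the PCC voltage deviation is roughly (R*P + X*Q) / V, so injecting
# Q = -(R/X) * P cancels the drop caused by active load P.
# All feeder values are assumed, not from the paper.

V_NOM = 400.0   # assumed low-voltage feeder nominal voltage [V]
R_LINE = 0.5    # assumed feeder resistance [ohm]
X_LINE = 1.0    # assumed feeder reactance [ohm]

def pcc_voltage_drop(p_w, q_var):
    """Approximate PCC voltage drop [V] for power drawn at the feeder end."""
    return (R_LINE * p_w + X_LINE * q_var) / V_NOM

def q_for_zero_drop(p_w):
    """Reactive power draw [var] that cancels the drop (negative = injection)."""
    return -R_LINE * p_w / X_LINE

p_night_load = 50e3  # assumed 50 kW night-peak load
q_comp = q_for_zero_drop(p_night_load)
print(q_comp, pcc_voltage_drop(p_night_load, q_comp))
```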
Procedia PDF Downloads 10
856 The Effect of Tele Rehabilitation Training on Complications of Hip Osteoarthritis: A Quasi-Experimental Study
Authors: Mahnaz Seyedoshohadaee, Azadeh Nematolahi, Parsa Rahimi
Abstract:
Introduction: Rehabilitation training after hip joint surgery is one of the priorities of nursing and, with the advancement of technology, can now be delivered remotely. This study was conducted to determine the effect of telerehabilitation education on the outcomes of hip osteoarthritis. Methods: The present study was a quasi-experimental study conducted on patients after hip replacement in the first half of 2023. Seventy patients recruited by convenience sampling were included in the study and divided into intervention and control groups by a non-random method. Inclusion criteria included: a maximum of 6 months had passed since the hip joint replacement, age between 30-70 years, the ability to follow instructions, the absence of accompanying orthopedic lesions such as fractures, and access to the Internet, a smartphone, and the Skype program. Exclusion criteria were severe speech disorder and non-participation in a training session. The research tools included a demographic profile form and the Hip Disability and Osteoarthritis Outcome Score (HOOS), which were completed by the patients before and after the training. Training for the intervention group was offered through Skype in four sessions covering an introduction to the disease, risk factors, symptoms, symptom management, medication, diet, appropriate exercises and pain relief methods, with one 30- to 45-minute session per week in groups of 4 to 6 people. SPSS version 22 statistical software was used to analyze the data. Results: The average score of osteoarthritis outcomes before the intervention was 112.74±29.64 in the test group and 110.41±16.34 in the control group, with no significant difference (P=0.682). After the intervention, the scores were 85.25±21.43 and 109.94±15.74, respectively, and this difference was significant (P<0.001).
The comparison of the average scores of osteoarthritis outcomes in the test group indicated a significant difference from the pre-test to the post-test (p<0.001). In the control group, however, this difference was not significant (p=0.130). Conclusion: The results showed that telerehabilitation education has a positive effect on reducing the complications of hip osteoarthritis, so it is recommended that nurses use telerehabilitation education in their training in order to empower patients.
Keywords: training, rehabilitation, hip osteoarthritides, patient, complications
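The reported post-intervention comparison can be sanity-checked from the published summary statistics alone. The sketch below computes Welch's t-statistic from the abstract's means and standard deviations, assuming an even 35/35 split of the 70 patients (the abstract does not state the group sizes):

```python
import math

# Welch's t-statistic from summary statistics (means, SDs, group sizes).
# Group sizes of 35 each are an assumption, not stated in the abstract.

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean1 - mean2) / se

# Post-intervention scores: 85.25 +/- 21.43 (test) vs 109.94 +/- 15.74 (control)
t_post = welch_t(85.25, 21.43, 35, 109.94, 15.74, 35)
print(t_post)
```

A |t| of roughly 5.5 on ~60 degrees of freedom is consistent with the reported P < 0.001.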
Procedia PDF Downloads 4
855 An Approach to Determine Proper Daylighting Design Solution Considering Visual Comfort and Lighting Energy Efficiency in High-Rise Residential Building
Authors: Zehra Aybike Kılıç, Alpin Köknel Yener
Abstract:
Daylight is a powerful driver in terms of improving human health, enhancing productivity and creating sustainable solutions by minimizing energy demand. A proper daylighting system allows not only a pleasant and attractive visual and thermal environment, but also reduces lighting energy consumption and heating/cooling energy load through the optimization of aperture size, glazing type and solar control strategy, which are the major design parameters of daylighting system design. Particularly in high-rise buildings, where large openings that allow maximum daylight and view out are preferred, evaluation of daylight performance by considering the major parameters of the building envelope design becomes crucial in terms of ensuring occupants’ comfort and improving energy efficiency. Moreover, it is increasingly necessary to examine the daylighting design of high-rise residential buildings, considering the share of residential buildings in the construction sector, the duration of occupation and the changing space requirements. This study aims to identify a proper daylighting design solution considering window area, glazing type and solar control strategy for a high-rise residential building in terms of visual comfort and lighting energy efficiency. The dynamic simulations are carried out using DIVA for Rhino version 4.1.0.12. The results are evaluated with Daylight Autonomy (DA) to demonstrate daylight availability in the space and Daylight Glare Probability (DGP) to describe the visual comfort conditions related to glare. Furthermore, the lighting energy consumption in each scenario is analyzed to determine the optimum solution that reduces lighting energy consumption by optimizing daylight performance.
The results revealed that reducing lighting energy consumption while providing visual comfort conditions in buildings is only possible with proper daylighting design decisions regarding glazing type, transparency ratio and solar control device.
Keywords: daylighting, glazing type, lighting energy efficiency, residential building, solar control strategy, visual comfort
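The Daylight Autonomy metric used for evaluation reduces to the fraction of occupied hours in which daylight alone meets a target illuminance. A minimal sketch, with a commonly used 300-lux threshold and invented sensor samples (the abstract states neither):

```python
# Minimal Daylight Autonomy (DA) computation: share of occupied hours in
# which daylight illuminance at a sensor point meets the target level.
# Threshold and hourly samples are assumed for illustration.

def daylight_autonomy(illuminances_lux, threshold_lux=300.0):
    """Fraction of occupied hours where daylight >= threshold."""
    met = sum(1 for e in illuminances_lux if e >= threshold_lux)
    return met / len(illuminances_lux)

# Invented hourly illuminance samples at a sensor point [lux]
samples = [50, 120, 310, 450, 800, 620, 400, 280, 150, 90]
da = daylight_autonomy(samples)
print(da)  # 0.5 -> daylight meets the target half of the occupied hours
```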
Procedia PDF Downloads 176
854 Blood Profile, Organs, and Carcass Analysis and Performance of Broilers Fed Cowpea Testa Based Diet
Authors: O. J. Osunkeye, P. O. Fakolade, B. E. Olorede
Abstract:
Broiler production depends on the provision of adequate, good quality feed containing all the nutrients, including proteins, carbohydrates, fats, vitamins, minerals and water. All these nutrients have to be provided in the required amounts to support maximum productivity and normal physiological functions and demands. Among these nutrients, proteins are particularly important, since they are essential for meat and muscle production, optimum growth and health status. The poultry production industry in developing countries is being threatened by over-dependency on soybean meal as one of the major conventional protein feedstuffs for livestock. Competition between man and livestock for soybean and other protein sources has also driven up the price of this feedstuff. Hence the need for an alternative feedstuff that is cheap and less contested. This study examined the blood profile, organ and carcass characteristics and performance of broilers fed Cowpea Testa Meal (CTM) based diets. Four diets were formulated with cowpea testa replacing soybean at 0%, 15%, 30%, and 50% graded levels. One hundred and twenty day-old unsexed broiler birds were allotted to these four treatments with 3 replicates of 10 birds per replicate. The results showed no significant differences in all the haematological parameters measured (P>0.05); the serum metabolite analysis revealed significant differences in cholesterol (99.8 mg/dl, 112.84 mg/dl, 131.07 mg/dl and 97.66 mg/dl, respectively) (P<0.05), among others. There were significant differences within the diets for average daily weight gain, average feed intake and feed to gain ratio. The birds on the control (0%) and 15% CTM diets gained more weight than those fed the 30% and 50% CTM diets.
The organs and carcass primal cuts of the broilers showed significant differences for the spleen (0.12 g, 0.09 g, 0.11 g and 0.14 g, respectively), lungs (0.97 g, 0.72 g, 0.77 g and 1.01 g, respectively) and proventriculus (0.96 g, 0.99 g, 0.81 g and 0.85 g, respectively) (P<0.05). For the carcass, there were no significant differences (P>0.05) in the breast, thigh, drumstick, wing and neck, except for the back (21.27 g, 21.04 g, 17.71 g, and 17.89 g, respectively). In conclusion, CTM inclusion in broilers' diets could be used as an alternative feedstuff to replace soybean meal at up to 15% without any adverse effects, as revealed by the blood profile, and to increase the growth performance of the birds.
Keywords: physiological functions, cholesterol, blood profiles, CTM and carcass analysis
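The performance indices reported above (average daily weight gain, feed-to-gain ratio) are simple ratios; a sketch with invented intake and weight figures (not the trial's data):

```python
# Broiler performance indices: average daily gain (ADG) and feed
# conversion / feed-to-gain ratio (FCR). All numbers are invented
# for illustration, not taken from the study.

def average_daily_gain(start_weight_g, end_weight_g, days):
    """Live-weight gain per day [g/day]."""
    return (end_weight_g - start_weight_g) / days

def feed_to_gain_ratio(total_feed_g, total_gain_g):
    """Grams of feed consumed per gram of live-weight gain."""
    return total_feed_g / total_gain_g

adg = average_daily_gain(40.0, 2440.0, 42)        # assumed 6-week trial
fcr = feed_to_gain_ratio(4320.0, 2440.0 - 40.0)   # assumed total intake
print(adg, fcr)
```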
Procedia PDF Downloads 613
853 Experiences of Social Participation among Community Elderly with Mild Cognitive Impairment: A Qualitative Research
Abstract:
Mild cognitive impairment (MCI) is a clinical stage that occurs between normal aging and dementia. Although MCI increases the risk of developing dementia, individuals with MCI may maintain stable cognitive function and even recover to a typical cognitive state. An intervention to prevent or delay the progression to dementia in individuals with MCI may involve promoting social engagement. Social participation is engagement in socially relevant exchanges and meaningful activities. Older adults with MCI may encounter restricted cognitive abilities, mood changes, and behavioral difficulties during social participation, influencing their willingness to engage. Therefore, this study aims to employ qualitative research methods to gain an in-depth comprehension of the authentic social participation experiences of older adults with mild cognitive impairment, which will establish a foundation for designing appropriate intervention programs. A phenomenological study was conducted. The study participants were selected using the purposive sampling method in combination with the maximum differentiation sampling strategy. Face-to-face semi-structured interviews were conducted with 12 elderly individuals suffering from mild cognitive impairment in a community in Zhengzhou City from May to July 2023. Colaizzi's seven-step method was used to analyze the data and extract the themes. The real experience of social participation in older adults with mild cognitive impairment can be summarized into 3 themes: (1) a single social relationship but a strong desire to participate, (2) a dual experience of social participation with both positive and negative aspects, (3) multiple barriers to social participation, including impaired memory capacity, heavy family responsibilities and lack of infrastructure. The study found that elderly individuals with mild cognitive impairment, despite having only a single social relationship, display a strong desire to engage in society.
To improve social participation levels and reduce cognitive function decline, healthcare providers should work with relevant government agencies and the community to create a comprehensive social participation system. It is important for healthcare providers to note the social participation status of the elderly with mild cognitive impairment.
Keywords: mild cognitive impairment, the elderly, social participation, qualitative research
Procedia PDF Downloads 92
852 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees
Authors: Alexandru-Ion Marinescu
Abstract:
There exist a plethora of methods in the scientific literature which tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc. and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority being classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. 
A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, of which we mention the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results, which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution
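The expression-tree representation described above can be mimicked in a few lines. This Python sketch is an analogue, not the authors' C#/LINQ implementation: a scoring formula is a tree whose internal nodes are operators and whose leaves are constants or applicant attributes, with the pre-order flattening that serves as the basis for mutation and crossover.

```python
# Analogue of the paper's expression-tree scheme: trees as nested tuples,
# evaluation with protected division, and pre-order flattening (the form
# the abstract says the genetic operators work on).

def evaluate(node, client):
    op = node[0]
    if op == "var":
        return client[node[1]]          # leaf: applicant attribute
    if op == "const":
        return node[1]                  # leaf: constant
    a, b = evaluate(node[1], client), evaluate(node[2], client)
    if op == "+": return a + b
    if op == "-": return a - b
    if op == "*": return a * b
    return a / b if b != 0 else 1.0     # protected division

def preorder(node, out=None):
    """Flatten a tree into a pre-order list of subtrees (crossover points)."""
    if out is None:
        out = []
    out.append(node)
    if node[0] not in ("var", "const"):
        preorder(node[1], out)
        preorder(node[2], out)
    return out

# Hypothetical evolved formula: score = age * 0.1 - loan_duration
tree = ("-", ("*", ("var", "age"), ("const", 0.1)), ("var", "loan_duration"))
client = {"age": 40, "loan_duration": 24}
score = evaluate(tree, client)
print(score, len(preorder(tree)))
```

Crossover then amounts to picking one entry from each parent's pre-order list and swapping the subtrees; leaves that never survive selection correspond to discarded client properties.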
Procedia PDF Downloads 117
851 Controlling RPV Embrittlement through Wet Annealing in Support of Life Extension
Authors: E. A. Krasikov
Abstract:
As the main barrier against the release of radioactivity, the reactor pressure vessel (RPV) is a key component in terms of NPP safety. Therefore, present-day demands for enhanced RPV reliability have to be met by all possible actions to mitigate RPV in-service embrittlement. Annealing treatment is known to be an effective measure to restore the RPV metal properties deteriorated by neutron irradiation. There are two approaches to annealing. The first one is the so-called 'dry' high temperature (~475°C) annealing. It achieves practically complete recovery, but requires the removal of the reactor core and internals, and an external heat source (furnace) is required to carry out the RPV heat treatment. The alternative approach is to anneal the RPV at the maximum coolant temperature that can be obtained using the reactor core or primary circuit pumps while operating within the RPV design limits. This low temperature 'wet' annealing, although it cannot be expected to produce complete recovery, is more attractive from the practical point of view, especially in cases when the removal of the internals is impossible. The first RPV 'wet' annealing was done using nuclear heat (US Army SM-1A reactor). The second one was done by means of primary pump heat (Belgian BR-3 reactor). As a rule, there is no recovery effect unless the annealing temperature exceeds the irradiation temperature by at least 70°C. It is known, however, that along with radiation embrittlement, neutron irradiation may also mitigate radiation damage in metals. Therefore, we have tried to test the possibility of exploiting this radiation-induced ductilization in 'wet' annealing technology, using the reactor as heat source and neutron irradiation source at once. In support of this conception, a 3-year reactor experiment on 15Cr3NiMoV-type steel was carried out, with preliminary irradiation in an operating PWR at 270°C followed by extra irradiation (87 h at 330°C) in the IR-8 test reactor.
In fact, embrittlement was partly suppressed, to a value equivalent to a 1.5-fold decrease in neutron fluence. The degree of recovery in the case of radiation-enhanced annealing is 27%, whereas furnace annealing yields zero effect under the same conditions. A mechanism for the radiation-induced damage mitigation is proposed. It is hoped that 'wet' annealing technology will help provide a better management of the RPV degradation as a factor affecting the lifetime of nuclear power plants which, together with associated management methods, will help facilitate safe and economic long-term operation of PWRs.
Keywords: controlling, embrittlement, radiation, steel, wet annealing
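The quoted 27% degree of recovery is consistent with the usual definition based on the irradiation-induced transition-temperature shift removed by annealing. The abstract does not give its formula, so the sketch below assumes this common definition, with shift values invented to reproduce the quoted figure:

```python
# Assumed common definition of annealing recovery: the percentage of the
# irradiation-induced ductile-brittle transition temperature (DBTT) shift
# removed by the anneal. Shift values below are invented for illustration.

def recovery_degree(shift_irradiated_c, shift_after_anneal_c):
    """Percent of the irradiation-induced DBTT shift removed by annealing."""
    return 100.0 * (shift_irradiated_c - shift_after_anneal_c) / shift_irradiated_c

# A 100 C shift reduced to 73 C by annealing gives the quoted 27 % recovery
print(recovery_degree(100.0, 73.0))  # 27.0
```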
Procedia PDF Downloads 380
850 An Analysis of Pick Travel Distances for Non-Traditional Unit Load Warehouses with Multiple P/D Points
Authors: Subir S. Rao
Abstract:
Existing warehouse models use non-traditional aisle designs with a central P/D point, which is mathematically simple but less practical. Many warehouses use multiple P/D points to avoid congestion for pickers, and different warehouses have different flow policies and infrastructure for using the P/D points. Standard warehouse models introduce one-sided multiple P/D points in a flying-V warehouse and minimize the pick distance for a one-way travel between an active P/D point and a pick location, assuming uniform flow rates. A simulation of the mathematical model generally uses four fixed configurations of P/D points on two different sides of the warehouse. It can be easily proved that if the source and destination P/D points are both chosen randomly, in a uniform way, then minimizing the one-way travel is the same as minimizing the two-way travel. Another warehouse configuration analytically models the warehouse for multiple one-sided P/D points while keeping the angle of the cross-aisles and picking aisles as a decision variable. The minimization of the one-way pick travel distance from the P/D point to the pick location, by finding the optimal position/angle of the cross-aisle and picking aisle for warehouses having different numbers of multiple P/D points with variable flow rates, is also one of the objectives. Most models of warehouses with multiple P/D points are one-way travel models; we extend these analytical models to minimize the two-way pick travel distance, wherein the destination P/D is chosen optimally for the return route, which is not the same as minimizing the one-way travel. In most warehouse models, the return P/D is chosen randomly, but in our research, the return route P/D point is chosen optimally.
Such warehouses are common in practice, where the flow rates at the P/D points are flexible and depend totally on the position of the picks. A good warehouse management system is efficient in consolidating orders over multiple P/D points in warehouses where the P/D is flexible in function. In the latter arrangement, pickers and shrink-wrap processes are not assigned to particular P/D points, which ultimately makes the P/D points more flexible and easy to use interchangeably for picking and deposits. The number of P/D points considered in this research uniformly increases from a single central one to a maximum of each aisle symmetrically having a P/D point below it.
Keywords: non-traditional warehouse, V cross-aisle, multiple P/D point, pick travel distance
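The claim that optimally choosing the return P/D differs from a random return can be illustrated with a toy Monte Carlo; the floor layout, distance metric and P/D positions below are all invented, not the paper's model:

```python
import random

# Toy contrast of two flow policies: after a pick, return either to a
# randomly chosen P/D point or to the nearest (optimal) one. Assumed
# 50 m x 30 m floor, rectilinear travel, P/D points on the front wall.

random.seed(1)
PD_POINTS = [(10.0, 0.0), (25.0, 0.0), (40.0, 0.0)]

def rectilinear(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def two_way_distance(pick, start_pd, return_pd):
    return rectilinear(start_pd, pick) + rectilinear(pick, return_pd)

total_random, total_optimal = 0.0, 0.0
trials = 10000
for _ in range(trials):
    pick = (random.uniform(0, 50), random.uniform(0, 30))
    start = random.choice(PD_POINTS)
    total_random += two_way_distance(pick, start, random.choice(PD_POINTS))
    total_optimal += two_way_distance(
        pick, start, min(PD_POINTS, key=lambda pd: rectilinear(pick, pd)))

print(total_optimal / trials, total_random / trials)
```

Since the nearest return P/D can never be worse than a random one, the optimal-return average is strictly smaller, which is exactly why minimizing two-way travel with an optimal return is not the same problem as the random-return case.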
Procedia PDF Downloads 40
849 Innovation Management in E-Health Care: The Implementation of New Technologies for Health Care in Europe and the USA
Authors: Dariusz M. Trzmielak, William Bradley Zehner, Elin Oftedal, Ilona Lipka-Matusiak
Abstract:
The use of new technologies should create new value for all stakeholders in the healthcare system. The article focuses on demonstrating that technologies or products typically enable new functionality, a higher standard of service, or a higher level of knowledge and competence for clinicians. It also highlights the key benefits that can be achieved through the use of artificial intelligence, such as relieving clinicians of many tasks and enabling the expansion and greater specialisation of healthcare services. The comparative analysis allowed the authors to create a classification of new technologies in e-health according to health needs and benefits for patients, doctors, and healthcare systems, i.e., the main stakeholders in the implementation of new technologies and products in healthcare. The added value of the development of new technologies in healthcare is diagnosed. The work is both theoretical and practical in nature. The primary research methods are bibliographic analysis and analysis of research data and market potential of new solutions for healthcare organisations. The bibliographic analysis is complemented by the author's case studies of implemented technologies, mostly based on artificial intelligence or telemedicine. In the past, patients were often passive recipients, the end point of the service delivery system, rather than stakeholders in the system. One of the dangers of powerful new technologies is that patients may become even more marginalised. Healthcare will be provided and delivered in an increasingly administrative, programmed way. The doctor may also become a robot, carrying out programmed activities - using 'non-human services'. An alternative approach is to put the patient at the centre, using technologies, products, and services that allow them to design and control technologies based on their own needs. 
An important contribution to the discussion is to open up the different dimensions of the user (carer and patient) and to make them aware of healthcare units implementing new technologies. The authors of this article outline the importance of three types of patients in the successful implementation of new medical solutions. The impact of implemented technologies is analysed based on: 1) "Informed users", who are able to use the technology based on a better understanding of it; 2) "Engaged users" who play an active role in the broader healthcare system as a result of the technology; 3) "Innovative users" who bring their own ideas to the table based on a deeper understanding of healthcare issues. The authors' research hypothesis is that the distinction between informed, engaged, and innovative users has an impact on the perceived and actual quality of healthcare services. The analysis is based on case studies of new solutions implemented in different medical centres. In addition, based on the observations of the Polish author, who is a manager at the largest medical research institute in Poland, with analytical input from American and Norwegian partners, the added value of the implementations for patients, clinicians, and the healthcare system will be demonstrated.
Keywords: innovation, management, medicine, e-health, artificial intelligence
Procedia PDF Downloads 20
848 Applying Biosensors’ Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle
Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores
Abstract:
This work introduces the use of EMG (electromyography) signals from muscle sensors to develop an Artificial Neural Network (ANN) for pattern recognition to control a small unmanned aerial vehicle. The objective of this endeavor is to demonstrate drone interfacing beyond direct manual control. The MyoWare Muscle Sensor contains three EMG electrodes (dual and single type) used to collect signals from the posterior (extensor) and anterior (flexor) forearm and the bicep. Raw voltages from each sensor were collected through an Arduino Uno, and a data processing algorithm was developed to interpret the voltage signals produced when flexing, resting, and moving the arm. Each sensor collected eight values over a two-second period for the duration of one minute, per assessment. During each two-second interval, the movements alternated between a resting reference class and an active motion class, resulting in controlling the motion of the drone with left and right movements. This paper further investigated adding up to three sensors to differentiate between hand gestures to control the principal motions of the drone (left, right, up, and land). The hand gestures chosen to execute these movements were: a resting position, a thumbs up, a hand swipe right motion, and a flexing position. The MATLAB software was utilized to collect, process, and analyze the signals from the sensors, and a machine learning tool was used to classify the hand gestures. To generate the input vector to the ANN, the mean, root mean square, and standard deviation were computed for every two-second interval of the hand gestures. The neuromuscular information was then trained using an artificial neural network with one hidden layer of 10 neurons to categorize the four targets, one for each hand gesture. Once the machine learning training was completed, the resulting network interpreted the processed inputs and returned the probabilities of each class.
Based on the resultant probability of the application process, once an output was greater than or equal to an 80% match with a specific target class, the drone would perform the expected motion. Afterward, each movement command was sent from the computer to the drone through a Wi-Fi network connection. These procedures have been successfully tested and integrated into trial flights, where the drone has responded successfully in real time to predefined command inputs from the machine learning algorithm through the MyoWare sensor interface. The full paper will describe in detail the database of hand gestures, the details of the ANN architecture, and the confusion matrix results.
Keywords: artificial neural network, biosensors, electromyography, machine learning, MyoWare muscle sensors, Arduino
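The windowed feature extraction and the 80% dispatch rule described above can be sketched as follows. This is a minimal Python illustration, not the paper's MATLAB implementation: the function and command names are hypothetical, and the rate of eight samples per two-second window follows the abstract.

```python
import numpy as np

def window_features(signal, fs, win_s=2.0):
    """Split an EMG voltage trace into fixed windows and compute the three
    features the abstract feeds to the ANN: mean, RMS, standard deviation."""
    n = int(fs * win_s)  # samples per window (8 at fs = 4 Hz, per the abstract)
    feats = []
    for start in range(0, len(signal) - n + 1, n):
        w = signal[start:start + n]
        feats.append((np.mean(w), np.sqrt(np.mean(w ** 2)), np.std(w)))
    return np.array(feats)

def dispatch(probabilities, commands, threshold=0.8):
    """Fire a drone command only when the network's class probability
    meets the 80% threshold described in the abstract; otherwise do nothing."""
    best = int(np.argmax(probabilities))
    if probabilities[best] >= threshold:
        return commands[best]
    return None  # no confident match: hold position
```

In use, each row of `window_features` output would become one ANN input vector, and `dispatch` would be called on the network's four-class probability output.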
Procedia PDF Downloads 174
847 Optimum Drilling States in Down-the-Hole Percussive Drilling: An Experimental Investigation
Authors: Joao Victor Borges Dos Santos, Thomas Richard, Yevhen Kovalyshen
Abstract:
Down-the-hole (DTH) percussive drilling is an excavation method that is widely used in the mining industry due to its high efficiency in fragmenting hard rock formations. A DTH hammer system consists of a fluid-driven (air or water) piston and a drill bit; the reciprocating movement of the piston transmits its kinetic energy to the drill bit by means of stress waves that propagate through the drill bit towards the rock formation. In the literature on percussive drilling, the existence of an optimum drilling state (sweet spot) is reported in some laboratory and field experimental studies. An optimum rate of penetration is achieved for a specific range of axial thrust (or weight-on-bit), beyond which the rate of penetration decreases. Several authors advance different explanations as possible root causes of the sweet spot, but a universal explanation or consensus does not yet exist. The experimental investigation in this work was initiated with drilling experiments conducted at a mining site. A full-scale drilling rig (equipped with a DTH hammer system) was instrumented with high-precision sensors sampled at a very high rate (kHz). Data were collected while two boreholes were being excavated, and an in-depth analysis of the recorded data confirmed that an optimum performance can be achieved for specific ranges of input thrust (weight-on-bit). The high sampling rate made it possible to identify the bit penetration at each single impact (of the piston on the drill bit) as well as the impact frequency. These measurements provide a direct method to identify when the hammer does not fire, so that drilling occurs without percussion and the bit propagates the borehole by shearing the rock. The second stage of the experimental investigation was conducted in a laboratory environment with a custom-built piece of equipment dubbed Woody. Woody allows the drilling of shallow holes, a few centimetres deep, by successive discrete impacts from a piston.
After each individual impact, the bit angular position is incremented by a fixed amount, the piston is moved back to its initial position at the top of the barrel, and the air pressure and thrust are set back to their pre-set values. The goal is to explore whether the observed optimum drilling state stems from the interaction between the drill bit and the rock (during impact) or is governed by the overall system dynamics (between impacts). The experiments were conducted on samples of Calca Red, with a drill bit of 74 millimetres (outside diameter) and with weight-on-bit ranging from 0.3 kN to 3.7 kN. Results show that under the same piston impact energy and a constant angular displacement of 15 degrees between impacts, the average drill bit rate of penetration is independent of the weight-on-bit, which suggests that the sweet spot is not caused by intrinsic properties of the bit-rock interface.
Keywords: optimum drilling state, experimental investigation, field experiments, laboratory experiments, down-the-hole percussive drilling
Procedia PDF Downloads 89
846 Nitrification and Denitrification Kinetic Parameters of a Mature Sanitary Landfill Leachate
Authors: Tânia F. C. V. Silva, Eloísa S. S. Vieira, João Pinto da Costa, Rui A. R. Boaventura, Vitor J. P. Vilar
Abstract:
Sanitary landfill leachates are characterized as a complex mixture of diverse organic and inorganic contaminants, which are usually removed by combining different treatment processes. Owing to its simplicity, reliability and high cost-effectiveness, and to the high nitrogen content (mostly in the ammonium form) inherent in this type of effluent, the activated sludge biological process is almost always applied in leachate treatment plants (LTPs). The purpose of this work is to assess the effect of the main nitrification and denitrification variables on the biological removal of nitrogen from mature leachates. The leachate samples were collected after an aerated lagoon, at an LTP near Porto, and presented a high amount of dissolved organic carbon (1.0-1.3 g DOC/L) and ammonium nitrogen (1.1-1.7 g NH4+-N/L). The experiments were carried out in a 1-L lab-scale batch reactor, equipped with a pH, temperature and dissolved oxygen (DO) control system, in order to determine the reaction kinetic constants under unchanging conditions. The nitrification reaction rate was evaluated while varying the (i) operating temperature (15, 20, 25 and 30ºC), (ii) DO concentration interval (0.5-1.0, 1.0-2.0 and 2.0-4.0 mg/L) and (iii) solution pH (not controlled, 7.5-8.5 and 6.5-7.5). At the beginning of most assays, it was verified that ammonium stripping occurred simultaneously with nitrification, reaching up to 37% removal of total dissolved nitrogen. The denitrification kinetic constants and the methanol consumptions were calculated for different values of (i) volatile suspended solids (VSS) content (25, 50 and 100 mL of centrifuged sludge in 1 L of solution), (ii) pH interval (6.5-7.0, 7.5-8.0 and 8.5-9.0) and (iii) temperature (15, 20, 25 and 30ºC), using previously nitrified effluent. The maximum nitrification rate obtained was 38±2 mg NH4+-N/h/g VSS (25ºC, 0.5-1.0 mg O2/L, pH not controlled), consuming 4.4±0.3 mg CaCO3/mg NH4+-N.
The highest denitrification rate achieved was 19±1 mg (NO2--N+NO3--N)/h/g VSS (30ºC, 50 mL of sludge and pH between 7.5 and 8.0), with a C/N consumption ratio of 1.1±0.1 mg CH3OH/mg (NO2--N+NO3--N) and an overall alkalinity production of 3.7±0.3 mg CaCO3/mg (NO2--N+NO3--N). The denitrification process proved to be sensitive to all studied parameters, while the nitrification reaction did not suffer significant change when the DO content was changed.
Keywords: mature sanitary landfill leachate, nitrogen removal, nitrification and denitrification parameters, lab-scale activated sludge biological reactor
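The per-biomass normalization behind rates such as 38 mg NH4+-N/h/g VSS can be sketched as below. This is a minimal illustration only; the numbers in the usage line are hypothetical, chosen merely to reproduce the reported order of magnitude, not measured values from the study.

```python
def specific_rate(removed_mg_per_l, volume_l, duration_h, vss_g):
    """Specific removal rate in mg N per hour per g VSS: the normalization
    used for the nitrification/denitrification rates reported above."""
    return removed_mg_per_l * volume_l / (duration_h * vss_g)

# Hypothetical example: 380 mg/L NH4+-N removed in a 1-L reactor
# over 2 h with 5 g VSS of sludge.
rate = specific_rate(380.0, 1.0, 2.0, 5.0)  # mg NH4+-N/h/g VSS
```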
Procedia PDF Downloads 277
845 Electrical Geophysical and Physiochemical Assessment of the Impact of Environmental Pollution on the Groundwater Potential of a Waste Landfill at Tudun Murtala in Nassarawa Local Government Area, Kano State, Nigeria
Authors: Abubakar Maitama Yusuf Hotoro, Olokpo Israel Olofu, Yusuf U. Tarauni, Mudassir A. Umar, Aliyu A, Dahiru Garba Diso, Usman H. Jamoh, M. Sale
Abstract:
The study assessed the impact of environmental pollution on the groundwater potential at the Tudun Murtala waste landfill using electrical resistivity, induced polarization and physiochemical methods. The study area is located between latitude 12.023678 N and longitude 8.573676 E. Geophysical data were collected over a maximum length of 140 m along twelve profiles using an ABEM Terrameter SAS 1000. Results from the geophysical analysis showed that the profiles were underlain by three lithological layers, the top layer consisting of loamy and sandy soils, alluvium, granite, shale and sandstone. The second and third layers were predominantly made of weathered and fractured basements, respectively. The potential groundwater-bearing zones of the study area occurred at VES2, VES4, VES5, VES6 and VES7. The thicknesses at the sounding points were found to be 20.8 m at VES2, 25.2 m at VES4, 13.2 m at VES5, 50.8 m at VES6 and 13.3 m at VES7. The corresponding depths for the sounding points were 20.8 m at VES2, 27.9 m at VES4, 26.7 m at VES5, 51.6 m at VES6 and 24.9 m at VES7, respectively. The physiochemical study of selected groundwater samples assessed parameters such as the electrical conductivity, EC (288 dS/m to 1365 dS/m), TDS (170.8 mg/L to 820 mg/L), Pb (0.546 mg/L to 0.629 mg/L), Cu (-0.001 mg/L to 0.004 mg/L), and Cd (0.031 mg/L to 0.092 mg/L). The physiochemical results showed that the groundwater around the dumpsite may have been contaminated, especially in Dumpsite Hole 1 and Hole 2 at VES4 and VES6, respectively. There are indications of suspected leachate migration around the two VES points, even though the pH values of 6.4 and 6.2 at the two sounding points were considered close to the permissible pH range (6.5 to 6.8). The values of other elements present in the groundwater samples at the other VES points were found to be above the permissible WHO and Nigerian Standards for Drinking Water.
Keywords: resistivity, induced polarization, chargeability, landfill, leachate, contamination
Procedia PDF Downloads 62
844 Liquid Unloading of Wells with Scaled Perforation via Batch Foamers
Authors: Erwin Chan, Aravind Subramaniyan, Siti Abdullah Fatehah, Steve Lian Kuling
Abstract:
Foam-assisted lift technology is proven across the industry to provide efficient deliquification in gas wells. Such deliquification is typically achieved by delivering the foamer chemical downhole via capillary strings. In highly liquid-loaded wells where capillary strings are not readily available, foamer can be delivered via batch injection or bull-heading. The latter techniques differ from the former in that capillary strings allow liquid to be unloaded continuously, whereas foamer batches require periodic batching for the liquid to be unloaded. Although batch injection allows liquid to be unloaded in wells with a suitable water-to-gas ratio (WGR) and condensate-to-gas ratio (CGR) without well intervention for capillary string installation, this technique comes with its own set of challenges: for foamer to de-liquify liquids, the chemical needs to reach perforation locations where gas bubbling is observed. In highly scaled perforation zones in certain wells, foamer delivered in batches is unable to reach the gas bubbling zone, thus achieving poor lift efficiency. This paper aims to discuss the techniques and challenges for unloading liquid via batch injection in scaled perforation wells X and Y, whose WGR is 6 bbl/MMscf, whose scale build-up is observed at the bottom of the perforation interval, whose water column is 400 feet, and whose ‘bubbling zone’ is less than 100 feet. Variables such as foamer Z dosage, batching technique, and well flow control valve opening times were manipulated during the trial to achieve maximum liquid unloading and gas rates. During the field trial, the team found optimal values for the three aforementioned parameters that gave the best unloading results, in which each cycle’s gas and liquid rates were compared with baselines at similar flowing tubing head pressures (FTHP). It was discovered that, amongst other factors, a good agitation technique is a primary determinant of efficient liquid unloading.
An average increment of 2 MMscf/d against an average production of 4 MMscf/d at stable FTHP was recorded during the trial.
Keywords: foam, foamer, gas lift, liquid unloading, scale, batch injection
Procedia PDF Downloads 184
843 Determination of the Structural Parameters of Calcium Phosphate for Biomedical Use
Authors: María Magdalena Méndez-González, Miguel García Rocha, Carlos Manuel Yermo De la Cruz
Abstract:
Calcium phosphate (Ca5(PO4)3(X)) is widely used in orthopedic applications, typically as powder and granules. In bone, however, it is present in the form of nanometric needles, 60 nm in length, with a non-stoichiometric apatite phase containing CO3^2-, Na+, OH-, F- and other ions, in a matrix of collagen fibers. Control of crystal size, morphology and interaction with cells is essential for the development of nanotechnology. The structural results of calcium phosphate synthesized by chemical precipitation, with a crystal size of 22.85 nm, are presented in this paper. The calcium phosphate powders were analyzed by X-ray diffraction, energy-dispersive spectroscopy (EDS), Fourier transform infrared (FT-IR) spectroscopy and transmission electron microscopy. Lattice parameters, atomic positions, the indexing of the planes and the calculation of the FWHM (full width at half maximum) were obtained. The crystal size was also calculated using the Scherrer equation, d(hkl) = cλ/(β cos θ), where c is a constant related to the shape of the crystal, λ is the wavelength of the radiation (1.54060 Å for a copper anode), θ is the Bragg diffraction angle, and β is the width at half the height of the peak of greatest intensity. A diffraction pattern corresponding to the calcium phosphate phase called hydroxyapatite, of a hexagonal crystal system, was obtained. It belongs to the space group P63/m with lattice parameters a = 9.4394 Å and c = 6.8861 Å. The most intense peak is obtained at 2θ = 31.55° (FWHM = 0.4798), with a preferred orientation in (121). The intensity difference between the experimental data and the calculated values is attributable to the temperature at which the sintering was performed. The intensity of the highest peak is at the angle 2θ = 32.11°. The structure of the calcium phosphate obtained was a hexagonal configuration. The intensity changes in the peaks of the diffraction pattern and in the lattice parameters indicate the possible presence of a dopant.
Infrared spectra showed that each calcium atom is surrounded by a tetrahedron of oxygen and hydrogen. The unit cell pattern corresponds to hydroxyapatite, and transmission electron microscopy gave a crystal morphology corresponding to the hexagonal phase with preferential growth along the c-plane.
Keywords: structure, nanoparticles, calcium phosphate, metallurgical and materials engineering
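The Scherrer calculation described above can be reproduced as follows. This is a sketch under stated assumptions: a shape constant c = 0.9 is assumed, and no instrumental-broadening correction is applied to the FWHM, so the result will differ from a corrected crystallite-size estimate such as the one reported in the abstract.

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.154060, shape_c=0.9):
    """Crystallite size d(hkl) = c*lambda / (beta * cos(theta)).
    beta is the FWHM converted to radians; theta is half the 2-theta angle.
    Default wavelength is Cu K-alpha (1.54060 A = 0.154060 nm)."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return shape_c * wavelength_nm / (beta * math.cos(theta))

# Values from the abstract: most intense peak at 2-theta = 31.55 deg, FWHM = 0.4798 deg.
d = scherrer_size_nm(31.55, 0.4798)
```

With these inputs the uncorrected estimate comes out near 17 nm; the choice of c and any broadening correction shift the value.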
Procedia PDF Downloads 504
842 The On-Board Critical Message Transmission Design for Navigation Satellite Delay/Disruption Tolerant Network
Authors: Ji-yang Yu, Dan Huang, Guo-ping Feng, Xin Li, Lu-yuan Wang
Abstract:
The navigation satellite network, especially the Beidou MEO constellation, can relay data effectively with wide coverage and is widely applied in navigation, detection, and positioning. But the constellation has not been completed, and the number of satellites on orbit is not enough to cover the earth, which leaves the data relay disrupted or delayed in the transmission process. The data-relay function needs to tolerate the delay or disruption to some extent, which makes the Beidou MEO constellation a delay/disruption-tolerant network (DTN). Traditional DTN designs mainly employ the relay table as the basis of data-path schedule computing. But in practical applications, especially in critical conditions, such as wartime or heavy losses inflicted on the constellation, parts of the nodes may become invalid, and the traditional DTN design could then be useless. Furthermore, when transmitting a critical message in the navigation system, the maximum-priority strategy is used, but the nodes still query the relay table to design the path, which makes the delay more than minutes. Under these circumstances, a function is needed that can compute the optimum data path on board in real time according to the constellation states. An on-board critical message transmission design for the navigation satellite delay/disruption-tolerant network (DTN) is proposed, according to the characteristics of the navigation satellite network. With real-time computation of the parameters of the network links, the least-delay transmission path is deduced to retransmit the critical message in urgent conditions. First, the DTN model for the constellation is established based on the time-varying matrix (TVM) instead of the time-varying graph (TVG); then, the least-transmission-delay data path is deduced with the parameters of the current node; at last, the critical message transits to the next best node.
With on-board real-time computing, the time delay and misjudgments of constellation states in ground stations are eliminated, and the residual information channel for each node can be used flexibly. Compared with the minutes of delay of the traditional DTN, the proposed design transmits the critical message in seconds, which improves the re-transmission efficiency. The hardware is implemented in an FPGA based on the proposed model, and the tests prove its validity.
Keywords: critical message, DTN, navigation satellite, on-board, real-time
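A minimal sketch of the "next best node" selection over a time-varying matrix might look like the following. The data layout and function name are our assumptions, not the paper's FPGA implementation: in the real system the delay matrix would be recomputed on board as link states change, whereas here it is a fixed snapshot.

```python
import math

def next_hop(delays, current, alive):
    """Pick the neighbour with the least current link delay, skipping
    failed nodes. delays[i][j] is the present transmission delay i -> j
    (math.inf when no link exists); alive[j] marks node validity."""
    best, best_d = None, math.inf
    for j, d in enumerate(delays[current]):
        if j != current and alive[j] and d < best_d:
            best, best_d = j, d
    return best  # None when no valid neighbour remains
```

Iterating this choice hop by hop, with the matrix refreshed at each node, approximates the greedy forwarding of a critical message toward its destination.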
Procedia PDF Downloads 343
841 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube
Authors: Nirjhar Dhang, S. Vinay Kumar
Abstract:
Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids and the interfacial transition zone (ITZ) around aggregates. Adoption of these complex structures and material properties in numerical simulation would lead us to better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under the influence of compressive displacement. The effect of shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength has been investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices. In total, 49 slices are obtained from a cube of 150 mm, at an interval of approximately 3 mm. Because CT scanning is non-destructive, the same cube can be scanned and later compression-tested in a universal testing machine (UTM) to find its strength. The image processing and extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. A digital colour image consists of red, green and blue (RGB) pixels. The RGB image is converted to a black-and-white (BW) image, and identification of the mesoscale constituents is made by assigning values between 0 and 255. The pixel matrix is created for modeling of mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale reflecting relative strength: zero is assigned to voids, 4-6 to mortar and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated, depending on the option given.
Properties of materials, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacements, stresses, and damage are evaluated by importing the input file into ABAQUS. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effect of the shape and distribution of aggregates, the inclusion of voids and the variation of the thickness of the ITZ layer on the load-carrying capacity, stress-strain response and strain localization of concrete has been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used for finding the shape and content of aggregates in concrete. This may be further compared with test results of concrete cores and can be used as an important tool for strength evaluation of concrete.
Keywords: concrete, image processing, plane strain, interfacial transition zone
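The 0-255 to 0-9 normalization and constituent labelling described above can be sketched in Python. The linear grayscale-to-scale mapping shown is an illustrative assumption; the paper's actual thresholds and normalization may differ.

```python
import numpy as np

def classify_pixels(gray):
    """Map 0-255 grayscale CT values onto the 0-9 relative-strength scale
    used in the text: 0 = void, 1-3 = ITZ boundary, 4-6 = mortar,
    7-9 = aggregate. The linear binning here is an illustrative choice."""
    scale = np.clip((gray.astype(float) / 255.0 * 9.0).round().astype(int), 0, 9)
    labels = np.empty(scale.shape, dtype="<U9")
    labels[scale == 0] = "void"
    labels[(scale >= 1) & (scale <= 3)] = "itz"
    labels[(scale >= 4) & (scale <= 6)] = "mortar"
    labels[scale >= 7] = "aggregate"
    return scale, labels
```

The resulting integer matrix could then drive element-wise material assignment when the mesh is generated.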
Procedia PDF Downloads 241
840 Comparison between Two Software Packages GSTARS4 and HEC-6 about Prediction of the Sedimentation Amount in Dam Reservoirs and to Estimate Its Efficient Life Time in the South of Iran
Authors: Fatemeh Faramarzi, Hosein Mahjoob
Abstract:
Building dams on rivers for the utilization of water resources disturbs the hydrodynamic equilibrium and results in all or part of the sediments carried by the water being left in the dam reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and in the long term can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir, which threatens sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the sedimentation amount in dam reservoirs and to estimate their efficient lifetime, but recently mathematical and computational models have become widely used as a suitable tool in sedimentation studies of dam reservoirs. These models usually solve the equations using the finite element method. This study compares the results from two software packages, GSTARS4 and HEC-6, in the prediction of the sedimentation amount in the Dez dam, southern Iran. Each model provides a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the equations of momentum, flow and sediment continuity, and sediment transport. GSTARS4 (Generalized Sediment Transport Model for Alluvial River Simulation) is based on a one-dimensional mathematical model that simulates bed changes in both longitudinal and transverse directions by using flow tubes in a quasi-two-dimensional scheme; it was used to calibrate a period of 47 years and forecast the next 47 years of sedimentation in the Dez dam. This dam is among the highest dams in the world (203 m high), irrigates more than 125000 hectares of downstream lands and plays a major role in flood control in the region. The input data, including geometry, hydraulic and sedimentary data, run from 1955 to 2003 on a daily basis. To predict future river discharge, in this research, the time series data were assumed to repeat after 47 years.
Finally, the obtained result was very satisfactory in the delta region, so that the output from GSTARS4 was almost identical to the hydrographic profile in 2003. In the Dez dam, because of the long (65 km) and large reservoir, vertical currents are dominant, causing the calculations by the above-mentioned method to be inaccurate. To solve this problem, we used the empirical reduction method to calculate the sedimentation in the downstream area, which gave very good answers. Thus, we demonstrated that by combining these two methods a very suitable model for sedimentation in the Dez dam for the study period can be obtained. The present study demonstrated successfully that the outputs of both methods are the same.
Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6
Procedia PDF Downloads 313
839 The Effect of Foot Progression Angle on Human Lower Extremity
Authors: Sungpil Ha, Ju Yong Kang, Sangbaek Park, Seung-Ju Lee, Soo-Won Chae
Abstract:
The growing number of obese patients in aging societies has led to an increase in the number of patients with knee medial osteoarthritis (OA). Artificial joint insertion is the most common treatment for knee medial OA. Surgery is effective for patients with serious arthritic symptoms, but it is costly and dangerous, and it is an inappropriate way to prevent the disease at an early stage. Therefore, non-operative treatments such as toe-in gait have recently been proposed. Toe-in gait is a non-surgical intervention that restrains the progression of arthritis and relieves pain by reducing the knee adduction moment (KAM) to facilitate lateral distribution of load onto the knee medial cartilage. Numerous studies have measured KAM at various foot progression angles (FPA), and KAM data can be obtained by motion analysis. However, variations in stress in the knee cartilage cannot be directly observed or evaluated by these experiments measuring KAM. Therefore, this study applied motion analysis to the major gait points (1st peak, mid-stance, 2nd peak) with regard to FPA, and the finite element (FE) method was employed to evaluate the effects of FPA on the human lower extremity. Three types of gait analysis (toe-in, toe-out, baseline gait) were performed with markers placed on the lower extremity. Ground reaction forces (GRF) were obtained from force plates. The forces associated with the major muscles were computed using GRF and marker trajectory data. MRI data provided by the Visible Human Project were used to develop a human lower extremity FE model. FE analyses for the three types of gait simulations were performed based on the calculated muscle forces and GRF. By comparing the results of the FE analyses at the 1st peak across gait types, we observed that the maximum stress during toe-in gait was lower than for the other types. This is the same trend as exhibited by KAM measured through motion analysis in other papers.
This indicates that the progression of knee medial OA could be suppressed by adopting toe-in gait. This study integrated motion analysis with FE analysis. One advantage of this method is that re-modeling is not required even with changes in posture. Therefore, other types of gait simulation or various motions of the lower extremity can be easily analyzed using this method.
Keywords: finite element analysis, gait analysis, human model, motion capture
Procedia PDF Downloads 336
838 Bulk-Density and Lignocellulose Composition: Influence of Changing Lignocellulosic Composition on Bulk-Density during Anaerobic Digestion and Implication of Compacted Lignocellulose Bed on Mass Transfer
Authors: Aastha Paliwal, H. N. Chanakya, S. Dasappa
Abstract:
Lignocellulose, as an alternative feedstock for biogas production, has been an active area of research. However, lignocellulose poses many operational difficulties: widespread variation in the structural organization of the lignocellulosic matrix, limited amenability to degradation, and low bulk density, to name a few. Amongst these, the low bulk density of the lignocellulosic feedstock is crucial to process operation and optimization. Low bulk densities leave the feedstock floating in conventional liquid/wet digesters, and they also restrict the maximum achievable organic loading rate (OLR) in the reactor, decreasing the power density of the reactor. During digestion, however, lignocellulose undergoes very high compaction (up to 26 times the feeding density). The low feeding density first limits the achievable OLR, and compaction during digestion then renders the reactor space underutilized and imposes significant mass transfer limitations. The objective of this paper was to understand the effects of compacting lignocellulose on mass transfer, and the influence of the loss of different components on the bulk density, and hence the structural integrity, of the digesting lignocellulosic feedstock. Ten different lignocellulosic feedstocks (monocots and dicots) were digested anaerobically in a fed-batch leach-bed reactor, the solid-state stratified bed reactor (SSBR). Percolation rates of the recycled bio-digester liquid (BDL) were also measured during the reactor run period to understand the implication of compaction on mass transfer. After 95 days, in a destructive sampling, the lignocellulosic feedstocks digested at different SRT were investigated to quantify the weekly changes in bulk density and lignocellulosic composition. Further, the percolation rate data were compared to the bulk density data.
Results from the study indicate that the loss of hemicellulose (r²=0.76), hot water extractives (r²=0.68), and oxalate extractives (r²=0.64) had the dominant influence on changing the structural integrity of the studied lignocellulose during anaerobic digestion. Further, the feeding bulk density of the lignocellulose can be maintained between 300-400 kg/m³ to achieve a higher OLR, while a bulk density of 440-500 kg/m³ incurs significant mass transfer limitation for highly compacting beds of dicots.
Keywords: anaerobic digestion, bulk density, feed compaction, lignocellulose, lignocellulosic matrix, cellulose, hemicellulose, lignin, extractives, mass transfer
Procedia PDF Downloads 168
837 Generation of Knowledge with Self-Learning Methods for Ophthalmic Data
Authors: Klaus Peter Scherer, Daniel Knöll, Constantin Rieder
Abstract:
Problem and Purpose: Intelligent systems are available and helpful to support the human decision process, especially when complex surgical eye interventions are necessary and must be performed. Normally, such a decision support system consists of a knowledge-based module, which is responsible for the real assistance power, provided by explanation and logical reasoning processes. The interview-based acquisition and generation of the complex knowledge itself is crucial, because there are different correlations between the complex parameters. So, in this project, (semi-)automated self-learning methods are researched and developed to enhance the quality of such a decision support system. Methods: For ophthalmic data sets of real patients in a hospital, advanced data mining procedures seem to be very helpful. In particular, subgroup analysis methods are developed, extended and used to analyze and find the correlations and conditional dependencies between the structured patient data. After finding causal dependencies, a ranking must be performed for the generation of rule-based representations. For this, anonymized patient data are transformed into a special machine language format. The imported data are used as input for conditional probability algorithms to calculate the parameter distributions concerning a given goal parameter. Results: In the field of knowledge discovery, advanced methods and applications could be performed to produce operation- and patient-related correlations. New knowledge was generated by finding causal relations between the operational equipment, the medical instances and the patient-specific history through a dependency ranking process. After transformation into association rules, logically based representations were available for the clinical experts to evaluate the new knowledge. The structured data sets take account of about 80 parameters as special characteristic features per patient.
For differently sized patient groups (100, 300, 500), both single-target and multi-target values were set for the subgroup analysis, so the newly generated hypotheses could be interpreted regarding their dependency on, or independency of, the patient number. Conclusions: The aim and the advantage of such a semi-automated self-learning process are the extension of the knowledge base by finding new parameter correlations. The discovered knowledge is transformed into association rules and serves as a rule-based representation of the knowledge in the knowledge base. Moreover, more than one goal parameter of interest can be considered by the semi-automated learning process. With ranking procedures, the strongest premises and conjunctively associated conditions can be found to conclude the goal parameter of interest. So the knowledge hidden in structured tables or lists can be extracted as a rule-based representation. This is a real assistance power for the communication with the clinical experts.
Keywords: expert system, knowledge-based support, ophthalmic decision support, self-learning methods
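The support/confidence ranking of association rules described above can be illustrated with a small sketch. The record fields and rule shapes are hypothetical; the point is only that rule confidence plays the role of the conditional probability P(goal parameter | premises) computed in the project.

```python
def rule_stats(records, premise, target):
    """Support and confidence of an association rule 'premise -> target'
    over dict-shaped patient records. Confidence approximates the
    conditional probability P(target | premise)."""
    match = [r for r in records if all(r.get(k) == v for k, v in premise.items())]
    hit = [r for r in match if all(r.get(k) == v for k, v in target.items())]
    support = len(hit) / len(records) if records else 0.0
    confidence = len(hit) / len(match) if match else 0.0
    return support, confidence
```

Ranking candidate premises by confidence (and filtering by support) yields the "strongest premises" for a chosen goal parameter.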
Procedia PDF Downloads 253
836 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time
Authors: Yemane Hailu Fissuh, Zhongzhan Zhang
Abstract:
In biomedical research and randomized clinical trials, the outcomes of most common interest are time-to-event, so-called survival data. The importance of robust models in this context is to compare, with a sense of causality, the effects across randomly controlled experimental groups. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association of response and predictors. Hence, a causal-effect-based semiparametric transformation model was proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has been given much attention for the estimation of causal effects in modeling left-truncated and right-censored survival data. Despite its wide application and popularity, the maximum likelihood estimation technique is quite complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. Thus, to ease the complexity, we propose modified estimating equations. After the estimation procedures, the consistency and asymptotic properties of the estimators were derived, and the finite-sample performance of the proposed model was illustrated via simulation studies and the Stanford heart transplant real data example. To sum up the study, the bias of the covariates was adjusted by estimating the density function of the truncation variable, which was also incorporated in the model as a covariate in order to relax the independence assumption between failure time and truncation time. Moreover, the expectation-maximization (EM) algorithm was described for the estimation of the iterative unknown parameters and the unspecified transformation function.
In addition, the causal effect was derived as the ratio of the cumulative hazard functions of the active and passive experimental arms, after adjusting for the bias introduced into the model by the truncation variable.
Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate
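The ratio-of-cumulative-hazards idea can be illustrated with a minimal Nelson-Aalen sketch. This is only an illustration on made-up follow-up data; it is not the paper's estimating-equation procedure, which additionally handles left truncation and time-varying covariates:

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard at the last observed time.

    times  -- observed (possibly right-censored) follow-up times
    events -- 1 if the failure was observed, 0 if the time is censored
    """
    order = np.argsort(times)
    times = np.asarray(times, dtype=float)[order]
    events = np.asarray(events)[order]
    n = len(times)
    h = 0.0
    for i in range(n):
        if events[i] == 1:
            at_risk = n - i        # subjects still under observation at this time
            h += 1.0 / at_risk     # increment: d_i / Y_i with one event here
    return h

# Hypothetical follow-up data for the two arms (illustration only)
treated = ([2.0, 3.5, 5.0, 7.0, 9.0], [1, 0, 1, 1, 0])
control = ([1.0, 1.5, 2.5, 4.0, 6.0], [1, 1, 1, 0, 1])

# A crude "causal effect" in the spirit of the abstract: ratio of the
# cumulative hazards of the active and passive arms (no truncation adjustment).
ratio = nelson_aalen(*treated) / nelson_aalen(*control)
```

A ratio below one indicates a lower cumulative hazard in the treated arm over the observation window.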
Procedia PDF Downloads 125
835 Using Real Truck Tours Feedback for Address Geocoding Correction
Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle
Abstract:
When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total distance travelled or the total time spent on tours by the trucks, and on maximizing the number of visited customers. They assume that the upstream data used to optimize a transporter's tours is free from errors, in particular the customers' real constraints, their addresses, and their GPS coordinates. In real transport operations, however, upstream data is often of poor quality because of address geocoding errors and irrelevant addresses received through EDI (Electronic Data Interchange). Geocoders are not exempt from errors and can return incorrect GPS coordinates; likewise, even a good geocoder will produce a bad geocoding from an inaccurate address. For instance, when a geocoder has trouble geocoding an address, it may return the coordinates of the city centre. Another common issue is that the maps used by geocoders are not regularly updated, so new buildings may be absent from them until the next update. Trying to optimize tours with wrong customer GPS coordinates, which are the most important and basic input data of a vehicle routing problem, is therefore of little use and leads to incoherent solution tours, because the customer locations used for the optimization differ widely from their real positions. Our work is supported by a logistics software editor, Tedies, and a transport company, Upsilon, whose truck route data we use in our experiments. These trucks are equipped with TomTom GPS units that continuously record their tour data (positions, speeds, tachograph information, etc.), from which we extract the real truck routes.
The aim of this work is to use the driver's experience and the feedback from real truck tours to validate the GPS coordinates of well-geocoded addresses and to correct the badly geocoded ones. Thereby, when a vehicle makes its tour, it might have trouble finding a given customer's address at most once; in other words, the vehicle would be wrong at most once per customer address. Our method significantly improves the quality of the geocoding: on average, 70% of the GPS coordinates of a tour's addresses are corrected automatically, and the remaining coordinates are corrected manually, with indications given to the user to help him. This study shows the importance of taking the trucks' feedback into account to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and of its GPS coordinates plays a major role in tour optimization, and address writing errors are very frequent. This feedback is naturally and usually exploited by transporters (by asking drivers, calling customers, etc.) to learn about their tours and improve the upcoming ones; we developed a method to automate a large part of that work.
Keywords: driver experience feedback, geocoding correction, real truck tours
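The validate-or-correct step described above can be sketched as follows. The threshold value and the coordinates are hypothetical; the paper's actual pipeline (extraction of stops from TomTom traces, manual fallback) is not reproduced here:

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def correct_geocoding(geocoded, recorded_stop, threshold_m=200.0):
    """Check a geocoded customer position against the truck's recorded stop.

    Returns the trusted coordinate: the geocode if it lies within threshold_m
    of where the driver actually stopped, otherwise the recorded stop itself.
    """
    if haversine_m(geocoded, recorded_stop) <= threshold_m:
        return geocoded        # geocode validated by the tour feedback
    return recorded_stop       # geocode corrected from the real tour

# Hypothetical case: the geocoder returned the city centre,
# but the truck actually stopped about 3 km away.
geocode = (47.322, 5.041)
stop = (47.300, 5.010)
corrected = correct_geocoding(geocode, stop)
```

After the first visit the corrected coordinate replaces the geocode, which matches the abstract's "wrong at most once per customer address" property.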
Procedia PDF Downloads 674
834 Enhanced Performance of Supercapacitor Based on Boric Acid Doped Polyvinyl Alcohol-H₂SO₄ Gel Polymer Electrolyte System
Authors: Hamide Aydin, Banu Karaman, Ayhan Bozkurt, Umran Kurtan
Abstract:
Recently, proton-conducting gel polymer electrolytes (GPEs) have drawn much attention in supercapacitor applications owing to their physical and electrochemical characteristics and their stability at low temperatures. In this research, a PVA-H₂SO₄-H₃BO₃ GPE has been used in an electric double-layer capacitor (EDLC) in which electrospun free-standing carbon nanofibers serve as the electrodes. The introduced PVA-H₂SO₄-H₃BO₃ GPE acts as both separator and electrolyte in the supercapacitor. Symmetric Swagelok cells including the GPEs were assembled in a two-electrode arrangement, and their electrochemical properties were investigated. Electrochemical performance studies demonstrated that the PVA-H₂SO₄-H₃BO₃ GPE had a maximum specific capacitance (Cs) of 134 F g⁻¹ and showed excellent capacitance retention (100%) after 1000 charge/discharge cycles. Furthermore, it yielded an energy density of 67 Wh kg⁻¹ with a corresponding power density of 1000 W kg⁻¹ at a current density of 1 A g⁻¹. The PVA-H₂SO₄ electrolyte was produced as follows: first, 1 g of commercial PVA was dissolved in distilled water at 90°C and stirred until the solution became transparent; diluted H₂SO₄ (1 g of H₂SO₄ in distilled water) was then added to obtain PVA-H₂SO₄. The PVA-H₂SO₄-H₃BO₃ electrolyte was produced by dissolving H₃BO₃ in hot distilled water and adding it to the PVA-H₂SO₄ solution, with the mole fraction adjusted to 1/4 of the PVA repeating unit. After stirring for 2 h at room temperature, the gel polymer electrolytes were obtained; the final electrolytes for supercapacitor testing contained 20 wt% water. Several blending combinations of PVA/H₂SO₄ and H₃BO₃ were studied to find the optimum combination in terms of conductivity as well as electrolyte stability.
As the amount of boric acid in the matrix increased, excess sulfuric acid was excluded due to cross-linking, especially at lower solvent content, which reduced the proton conductivity. Therefore, the mole fraction of H₃BO₃ was chosen as 1/4 of the PVA repeating unit. Within these optimized limits, the polymer electrolytes showed better conductivities as well as stability.
Keywords: electrical double layer capacitor, energy density, gel polymer electrolyte, ultracapacitor
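The relation between specific capacitance, energy density, and power density used in such EDLC studies can be sketched with the textbook formulas E = ½CV² and P = E/t. Note that conventions vary (symmetric two-electrode cells often carry extra factors of 2 or 4 depending on whether cell or electrode capacitance is quoted), so the assumed 1 V window below is illustrative and the output does not reproduce the abstract's reported 67 Wh kg⁻¹:

```python
def energy_density_wh_per_kg(cs_f_per_g, v_window):
    """E = 1/2 * C * V^2, converted from J/g to Wh/kg (x1000, then /3600)."""
    return 0.5 * cs_f_per_g * v_window ** 2 * 1000.0 / 3600.0

def power_density_w_per_kg(e_wh_per_kg, discharge_time_s):
    """Average power P = E / t, with E converted back from Wh/kg to J/kg."""
    return e_wh_per_kg * 3600.0 / discharge_time_s

# Assumed values: the abstract's 134 F/g, a hypothetical 1 V window,
# and the constant-current discharge time t = C * dV / I at 1 A/g.
cs = 134.0                      # F/g, from the abstract
v = 1.0                         # V, assumed voltage window
t = cs * v / 1.0                # s, discharge time at 1 A/g
e = energy_density_wh_per_kg(cs, v)
p = power_density_w_per_kg(e, t)
```

Swapping in the cell's real voltage window and measured discharge time gives the Ragone-plot coordinates reported in such papers.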
Procedia PDF Downloads 227
833 Influence of Strike-Slip Faulting in the Tectonic Evolution of North-Eastern Tunisia
Authors: Aymen Arfaoui, Abdelkader Soumaya, Ali Kadri, Noureddine Ben Ayed
Abstract:
The major contractional events, characterized by strike-slip faulting, folding, and thrusting, occurred in the Eocene, Late Miocene, and Quaternary in the NE Tunisian domain between Bou Kornine-Ressas-Msella and the Cap Bon Peninsula. During the Plio-Quaternary, the Grombalia and Mornag grabens show a maximum of collapse parallel to the NNW-SSE SHmax direction and developed as third-order extensive regions within a regional compressional regime. Using available tectonic and geophysical data supplemented by new fault-kinematic observations, we show that the Cenozoic deformations are dominated by the reactivation of first-order N-S faults; this sinistral wrench system is responsible for the formation of strike-slip duplexes, thrusts, folds, and grabens. Based on our new structural interpretation, the major faults of the N-S Axis, Bou Kornine-Ressas-Messella (MRB), and Hammamet-Korbous (HK) form an N-S first-order restraining stepover within a left-lateral strike-slip duplex. The N-S master MRB fault is dominated by contractional imbricate fans, while the parallel HK fault is characterized by trailing extensional imbricate fans. The Eocene and Miocene compression phases in the study area caused sinistral strike-slip reactivation of pre-existing N-S faults, reverse reactivation of NE-SW trending faults, and normal-oblique reactivation of NW-SE faults, creating a NE-SW to N-S trending system of east-verging folds and overlaps. Seismic tomography images reveal a key role of a lithospheric subvertical tear, or STEP (Slab Transfer Edge Propagator) fault, evidenced below this region, in the development of the MRB and HK relay zone.
The presence of extensive syntectonic Pliocene sequences above this crustal-scale fault may be the result of recent lithospheric vertical motion along this STEP fault due to the rollback and eastward lateral migration of the Calabrian slab.
Keywords: Tunisia, strike-slip fault, contractional duplex, tectonic stress, restraining stepover, STEP fault
Procedia PDF Downloads 131
832 Experimental Analysis on Heat Transfer Enhancement in Double Pipe Heat Exchanger Using Al₂O₃/Water Nanofluid and Baffled Twisted Tape Inserts
Authors: Ratheesh Radhakrishnan, P. C. Sreekumar, K. Krishnamoorthy
Abstract:
Heat transfer augmentation techniques ultimately reduce the thermal resistance of a conventional heat exchanger by generating a higher convective heat transfer coefficient. They also allow a reduction in size, an increase in heat duty, a decrease in approach temperature difference, and a reduction in the pumping power required by heat exchangers. The present study deals with a compound augmentation technique, which is not widely used: the combined use of an alumina (Al₂O₃)/water nanofluid and baffled twisted tape inserts in a double pipe heat exchanger. Experiments were conducted to evaluate the heat transfer coefficient and friction factor for flow through the inner tube of the heat exchanger in the turbulent flow range (8000
831 Performance Study of Neodymium Extraction by Carbon Nanotubes Assisted Emulsion Liquid Membrane Using Response Surface Methodology
Authors: Payman Davoodi-Nasab, Ahmad Rahbar-Kelishami, Jaber Safdari, Hossein Abolghasemi
Abstract:
High-purity rare earth elements (REEs) have been used extensively in chemical engineering, metallurgy, nuclear energy, optical, magnetic, luminescence and laser materials, superconductors, ceramics, alloys, and catalysts. Neodymium is one of the most abundant rare earths, and the development of the neodymium-iron-boron (Nd-Fe-B) permanent magnet has dramatically increased its importance. Solvent extraction processes have many operational limitations, such as the large inventory of extractants, the loss of solvent due to its solubility in aqueous solutions, and the volatilization of diluents. Emulsion liquid membrane (ELM), one of the promising liquid membrane processes, offers an alternative to solvent extraction. In this work, Nd extraction through a multi-walled carbon nanotube (MWCNT) assisted ELM was studied using response surface methodology (RSM). The ELM was composed of diisooctylphosphinic acid (CYANEX 272) as carrier, MWCNTs as nanoparticles, Span-85 (sorbitan trioleate) as surfactant, kerosene as organic diluent, and nitric acid as internal phase. The effects of the important operating variables, namely surfactant concentration, MWCNT concentration, and treatment ratio, were investigated. The results were optimized using a central composite design (CCD), and a regression model for the extraction percentage was developed. The 3D response surfaces of Nd(III) extraction efficiency were obtained, and the significance of the three variables and of their interactions on the extraction efficiency was determined. The results indicated that introducing MWCNTs into the ELM process increased the Nd extraction, owing to the higher stability of the membrane and to mass transfer enhancement. A MWCNT concentration of 407 ppm, a Span-85 concentration of 2.1% (v/v), and a treatment ratio of 10 were found to be the optimum conditions.
At the optimum conditions, the extraction of Nd(III) reached a maximum of 99.03%.
Keywords: emulsion liquid membrane, extraction of neodymium, multi-walled carbon nanotubes, response surface method
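The CCD/RSM step above amounts to fitting a full second-order polynomial to the design points by least squares. A minimal two-factor sketch on made-up coded data (the paper uses three factors and real bioassay responses):

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Fit the full second-order RSM model for two coded factors:
    y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
    """
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Hypothetical CCD runs in coded units: 4 factorial points, 2 centre
# points, and 4 axial points at +/- alpha = 1.414 (illustration only).
X = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0], [1.0, 1.0],
              [0.0, 0.0], [0.0, 0.0],
              [-1.414, 0.0], [1.414, 0.0], [0.0, -1.414], [0.0, 1.414]])

def true_surface(x1, x2):
    # Assumed noise-free response surface, so the fit recovers it exactly.
    return 80 + 5 * x1 + 3 * x2 - 2 * x1 * x2 - 4 * x1**2 - 1.5 * x2**2

y = true_surface(X[:, 0], X[:, 1])
coef = fit_quadratic_rsm(X, y)
```

With real replicated runs, the same design matrix also yields lack-of-fit and significance statistics for each term, which is how the abstract's variable interactions are judged.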
Procedia PDF Downloads 255
830 Insecticidal Effect of a Botanical Plant Extracts (Ultra Act®) on Bactrocera oleae (Diptera: Tephritidae) Preimaginal Development and Pupa Survival
Authors: Imen Blibech, Mohieddine Ksantini, Manohar Shete
Abstract:
Bactrocera oleae is one of the most economically damaging insect pests of olive in Tunisia and in other olive-producing countries. As a reliable alternative to synthetic chemical insecticides, botanical insecticides are considered natural control methods that are safe for the environment and human health. The botanical insecticide ULTRA-ACT®, effective against a large range of insects, is certified organic per Indian and international organic standards. Olives with signs of olive fly infestation were collected from productive olive trees in three localities of the Tunisian Sahel. For control of the larval stage, infested fruits were separated daily into new rearing boxes under microclimatic conditions of 75% R.H., 25 ± 3°C, and an 8L:16D photoperiod. Treatment with ULTRA-ACT® extract solutions was applied by dipping: each fruit was immersed in 5 mL of extract for 10 seconds and then air-dried. Five doses of ULTRA-ACT® were used for the bioassay, plus a water-only control. A total of 200 infested olive fruits were treated in separate dishes, 10 olives per dish, with 20 dishes used for each concentration as well as 20 dishes as control; the bioassay was conducted with 3 replicates. The development of the larval and pupal stages was recorded from egg hatching until the emergence of adults. The ULTRA-ACT® extracts at increasing concentrations (0.25, 0.5, 1, and 2%) showed a significant effect on the biology of the pest: increasing the concentration significantly decreased adult emergence from pupae and affected the egg hatchability percentage, and larval mortality also rose with the product concentration. The 2nd instar larvae were the most susceptible to the product, and after 72 hours the maximum mortality (75%) was observed with ULTRA-ACT® at 2%. The present work aims to offer a possible and efficient alternative solution for B.
oleae biological control with a promising botanical insecticide.
Keywords: Bactrocera oleae, olive insect pest, Ultra Act®, larval mortality, pupal emergence, biological control
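Bioassays with a water-only control such as this one commonly report mortality corrected by Abbott's formula. A minimal sketch; the 5% control mortality below is an assumed value, not one reported in the abstract:

```python
def abbott_corrected_mortality(treated_pct, control_pct):
    """Abbott's formula: treatment mortality corrected for the mortality
    observed in the untreated (water-only) control, as a percentage."""
    return (treated_pct - control_pct) / (100.0 - control_pct) * 100.0

# Abstract's maximum observed mortality (75% at ULTRA-ACT 2%),
# with a hypothetical 5% mortality in the water-only control.
corrected = abbott_corrected_mortality(75.0, 5.0)
```

The corrected figure (about 73.7% here) is what dose-response analyses such as probit regression would then be fitted to.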
Procedia PDF Downloads 134