Search results for: optimal scheme
270 Experimental Study of Impregnated Diamond Bit Wear During Sharpening
Authors: Rui Huang, Thomas Richard, Masood Mostofi
Abstract:
The lifetime of impregnated diamond bits and their drilling efficiency are in part governed by the bit wear conditions, not only the extent of the diamonds’ wear but also their exposure or protrusion out of the bonding matrix. As individual diamonds wear, the bonding matrix also wears, through two-body abrasion (direct matrix-rock contact) and three-body erosion (cuttings trapped in the space between rock and matrix). Although there is some work dedicated to the study of diamond bit wear, there is still a lack of understanding of how matrix erosion and diamond exposure relate to the bit drilling response and drilling efficiency, and there is no literature on the process that governs bit sharpening, a procedure commonly implemented by drillers when the extent of diamond polishing yields an extremely low rate of penetration. The aim of this research is (i) to derive a correlation between the wear state of the bit and the drilling performance and (ii) to gain a better understanding of the process associated with tool sharpening. The research effort combines specific drilling experiments and precise mapping of the tool cutting face (impregnated diamond bits and segments). Bit wear is produced by drilling through a rock sample at a fixed rate of penetration for a given period of time. Before and after each wear test, the bit drilling response, and thus its efficiency, is mapped out using a tailor-designed experimental protocol. After each drilling test, the bit or segment cutting face is scanned with an optical microscope. The test results show that, under the fixed rate of penetration, diamond exposure increases with drilling distance but at a decreasing rate, up to a threshold exposure that corresponds to the optimum drilling condition for this feed rate. The data further show that the threshold exposure scales with the rate of penetration up to a point where exposure reaches a maximum, beyond which no more matrix can be eroded under normal drilling conditions. The second phase of this research focuses on the wear process referred to as bit sharpening. Drillers rely on different approaches (increasing the feed rate or decreasing the flow rate) with the aim of tearing worn diamonds away from the bit matrix, wearing out some of the matrix, and thus exposing fresh sharp diamonds and recovering a higher rate of penetration. Although it is a common procedure, there is no rigorous methodology to sharpen the bit while avoiding excessive wear or bit damage. This paper aims to gain some insight into the mechanisms that accompany bit sharpening by carefully tracking diamond fracturing, matrix wear, and erosion, and how they relate to the drilling parameters recorded while sharpening the tool. The results show that there exist optimal conditions (operating parameters and duration of the procedure) for sharpening that minimize overall bit wear, and that the extent of bit sharpening can be monitored in real time.
Keywords: bit sharpening, diamond exposure, drilling response, impregnated diamond bit, matrix erosion, wear rate
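As an aside, the saturating exposure-versus-distance trend reported above is the kind of relationship that can be summarized with a simple curve fit; the sketch below assumes an exponential saturation model and made-up data, not the authors' measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative data: drilling distance (m) vs. mean diamond exposure (um).
distance = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
exposure = np.array([12.0, 21.0, 33.0, 44.0, 50.0, 52.0])

def saturating(d, h_max, lam):
    # Exposure grows with distance at a decreasing rate, up to a threshold.
    return h_max * (1.0 - np.exp(-d / lam))

(h_max, lam), _ = curve_fit(saturating, distance, exposure, p0=(50.0, 2.0))
print(f"threshold exposure ~ {h_max:.1f} um, characteristic distance ~ {lam:.1f} m")
```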
Procedia PDF Downloads 99
269 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition
Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan
Abstract:
Indian traffic can be considered mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on the availability of gaps. The choice of lateral positioning is an important component in representing and characterizing mixed traffic. Field data provide evidence that the trajectories of vehicles on Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway, which is widely used for homogeneous traffic simulation, is not well defined in conditions lacking lane discipline. From field data, it is clear that following is not as strict as in homogeneous, lane-disciplined conditions, and that neighbouring vehicles ahead of a given vehicle, as well as those adjacent to it, could also influence the subject vehicle's choice of position, speed and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable, and they need to be modified appropriately. To address these issues, this paper attempts to analyze the time gap distribution between consecutive vehicles (in a time sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre type, and lateral placement characteristics on the time gap distribution is quantified in this paper. The developed model is used for evaluating various performance measures such as link speed, midblock delay and intersection delay, which further helps to characterize vehicular fuel consumption and emissions on the urban roads of India. Identifying and analyzing the exact interactions between various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling. In this regard, this study aims to develop and analyze time gap distribution models, quantified by lead-lag pair, manoeuvre type and lateral position characteristics, in heterogeneous non-lane-based traffic. Once the modelling scheme is developed, it can be used for estimating the vehicle kilometres travelled for the entire traffic system, which helps to determine vehicular fuel consumption and emissions. The approach to this objective involves: data collection; statistical modelling and parameter estimation; simulation using the calibrated time gap distributions and its validation; empirical analysis of the simulation results and associated traffic flow parameters; and application to analyze illustrative traffic policies. In particular, videographic methods are used for data extraction from urban midblock sections in Chennai, where the data comprise vehicle type, vehicle position (both longitudinal and lateral), speed and time gap. Statistical tests are carried out to compare the simulated data with the actual data, and the model performance is evaluated. The effect of integrating the above-mentioned factors into vehicle generation is studied by comparing performance measures like density, speed, flow, capacity, area occupancy, etc., under various traffic conditions and policies. The implications of the quantified distributions and the simulation model for estimating PCU (Passenger Car Units), capacity and level of service of the system are also discussed.
Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models
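As context for the modelling step, time gaps of this kind are often summarized with a positive-valued distribution such as the lognormal; the sketch below fits one per lead-lag pair on made-up data (the pair labels and values are assumptions, and the paper's actual distributional choice may differ).

```python
import numpy as np
from scipy import stats

# Illustrative time gaps (s) between consecutive vehicles crossing a section,
# grouped by lead-lag vehicle pair type (names and values are assumptions).
gaps = {
    "car-car": np.array([1.2, 0.9, 2.1, 1.6, 3.0, 1.1, 2.4]),
    "car-two_wheeler": np.array([0.6, 0.8, 1.4, 0.5, 1.0, 0.7]),
}

for pair, sample in gaps.items():
    # Fit a lognormal with the location fixed at zero (gaps are positive).
    shape, _, scale = stats.lognorm.fit(sample, floc=0.0)
    # Kolmogorov-Smirnov test of the fitted distribution against the data.
    ks = stats.kstest(sample, "lognorm", args=(shape, 0.0, scale))
    print(f"{pair}: median gap ~ {scale:.2f} s, KS p-value = {ks.pvalue:.2f}")
```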
Procedia PDF Downloads 342
268 Personality Based Tailored Learning Paths Using Cluster Analysis Methods: Increasing Students' Satisfaction in Online Courses
Authors: Orit Baruth, Anat Cohen
Abstract:
Online courses have become common in many learning programs and various learning environments, particularly in higher education. The social distancing enforced in response to the COVID-19 pandemic has increased the demand for these courses. Yet, despite the frequency of use, online learning is not free of limitations and may not suit all learners. Hence, the growth of online learning alongside learners' diversity raises the question: does online learning, as currently offered, meet the needs of each learner? Fortunately, today's technology makes it possible to produce tailored learning platforms, namely, personalization. Personality influences a learner's satisfaction and therefore has a significant impact on learning effectiveness. A better understanding of personality can lead to a greater appreciation of learning needs, as well as assist educators in ensuring that an optimal learning environment is provided. In the context of online learning and personality, research on learning design according to personality traits is lacking. This study explores the relations between personality traits (using the 'Big Five' model) and students' satisfaction with five techno-pedagogical learning solutions (TPLS): discussion groups, digital books, online assignments, surveys/polls, and media, in order to provide an online learning process to students' satisfaction. The satisfaction level and personality of 108 students who participated in a fully online course at a large, accredited university were measured. Cluster analysis methods (k-means) were applied to identify learners' clusters according to their personality traits. Correlation analysis was performed to examine the relations between the obtained clusters and satisfaction with the offered TPLS. Findings suggest that learners associated with the 'Neurotic' cluster showed low satisfaction with all TPLS compared to learners associated with the 'Non-neurotic' cluster. Learners associated with the 'Conscientious' cluster were satisfied with all TPLS except discussion groups, and those in the 'Open-Extrovert' cluster were satisfied with assignments and media. All clusters except the 'Neurotic' one were highly satisfied with the online course in general. According to the findings, dividing learners into four clusters based on personality traits may help define tailored learning paths for them, combining various TPLS to increase their satisfaction. As personality comprises a set of traits, several TPLS may be offered in each learning path. For the neurotics, however, an extended selection may be more suitable, or, alternatively, they may be offered the TPLS they dislike least. The study findings clearly indicate that personality plays a significant role in a learner's satisfaction level. Consequently, personality traits should be considered when designing personalized learning activities. The current research seeks to bridge the theoretical gap in this specific research area. Establishing the assumption that different personalities need different learning solutions may contribute towards a better design of online courses, leaving no learner behind, whether they like online learning or not.
Keywords: online learning, personality traits, personalization, techno-pedagogical learning solutions
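A minimal sketch of the clustering-plus-correlation workflow named above (k-means on Big-Five traits, then correlating cluster membership with satisfaction); the data here are simulated placeholders, not the study's survey responses.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from scipy.stats import spearmanr

# Illustrative Big-Five scores (rows: students; columns: O, C, E, A, N)
# and satisfaction with one TPLS; real inputs would come from the surveys.
rng = np.random.default_rng(0)
traits = rng.normal(3.0, 0.8, size=(108, 5))
satisfaction = rng.integers(1, 6, size=108)

X = StandardScaler().fit_transform(traits)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Relate cluster membership to satisfaction, one cluster at a time.
for k in range(4):
    rho, p = spearmanr((labels == k).astype(int), satisfaction)
    print(f"cluster {k}: rho = {rho:+.2f} (p = {p:.2f})")
```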
Procedia PDF Downloads 103
267 Participatory Action Research for Sustainability with Special Focus on Student Initiatives
Authors: Soni T. L.
Abstract:
Environmental stress is a major sustainability concern that needs immediate attention. This paper is an attempt to present participatory action research for sustainable agriculture. As the first and best of cultures, agriculture protects and improves the natural environment and the social and economic conditions of people, and safeguards the health and welfare of all groups. In the course of time, agriculture turned into agribusiness, and these values were no longer safeguarded. Moreover, in today's busy life, many do not make the effort to take part in agricultural production, so children do not get the opportunity to understand agriculture and farming practices. Student initiatives are therefore vital to make them aware. The programmes structured by the researcher come under the auspices of the National Service Scheme, a student-centered educational programme organized by the Ministry of Youth Affairs, Government of India. The twin objectives of the study are to examine the role of student initiatives in sustainable agriculture and the role of participatory action research in student initiatives. A SWOT analysis is made to study strengths, weaknesses, opportunities and threats. The methodology adopted is participatory action research. The method is participatory, in the sense that there is collaboration through participation; it is action, in that there are real lab-to-land experiences; and it is research, in that lessons are documented and new knowledge is created. The plan of action covers the measures adopted and strategies taken, i.e., bhavana – kalpana – yojana – sadhana. Through team effort, the team was successful in converting more than 10 hectares of barren land into cultivable land within and outside the campus. The team efforts of students saved a huge amount of labour cost, produced a huge quantity of organic output, and also succeeded in creating 1000 rain pits on the premises of the college for rainwater harvesting. The findings include conveyance of the message that food production is superior to food donation. Moreover, the study fostered a good work ethic and social responsibility among students. Students undertook innovative programmes addressing social and environmental issues, and participants got increased opportunities to interact with local and less privileged people and acquired greater awareness of real-life experiences, which made them confident in interacting with people and resulted in the strengthening of social capital: cooperation, team spirit and social commitment among students. Participants promoted sustainable domestic efforts, and ultimately environmental protection is ensured. Finally, there was recognition for the team, the institution and the researcher at the university, state and national levels. The lessons learned are that if the approach is good, the response is good, and success generates success. Participatory action research is an empowering experience for practitioners; by focusing the combined time, energy and creativity of a committed group, we can lead many programmes that make the institution a centre of excellence. Authorities should take the necessary steps for the inclusion of community development activities in the curriculum. Action research is problem-, client- and action-centered. So, we must adapt and adopt, coordinate and correlate measures which preserve and conserve the environment.
Keywords: participatory action research, student initiatives, sustainable development, sustainability
Procedia PDF Downloads 156
266 Colloid-Based Biodetection at Aqueous Electrical Interfaces Using Fluidic Dielectrophoresis
Authors: Francesca Crivellari, Nicholas Mavrogiannis, Zachary Gagnon
Abstract:
Portable diagnostic methods have become increasingly important for a number of different purposes: point-of-care screening in developing nations, environmental contamination studies, bio/chemical warfare agent detection, and end-user commercial health monitoring. The cheapest and most portable methods currently available are paper-based: lateral flow and dipstick methods are widely available in drug stores for use in pregnancy detection and blood glucose monitoring. These tests are successful because they are cheap to produce, easy to use, and require minimally invasive sampling. While adequate for their intended uses, in the realm of blood-borne pathogens and numerous cancers these paper-based methods become unreliable, as they lack the nM/pM sensitivity currently achieved by clinical diagnostic methods. Clinical diagnostics, however, utilize techniques involving surface plasmon resonance (SPR) and enzyme-linked immunosorbent assays (ELISAs), which are expensive and unfeasible in terms of portability. To develop a better, competitive biosensor, we must reduce the cost of one or increase the sensitivity of the other. Electric fields are commonly utilized in microfluidic devices to manipulate particles, biomolecules, and cells. Applications in this area, however, are primarily limited to interfaces formed between immiscible fluids. Miscible liquid-liquid interfaces are common in microfluidic devices and are easily reproduced with simple geometries. Here, we demonstrate the use of electric fields at liquid-liquid electrical interfaces, known as fluidic dielectrophoresis (fDEP), for biodetection in a microfluidic device. In this work, we apply an AC electric field across concurrent laminar streams with differing conductivities and permittivities to polarize the interface and induce a discernible, near-immediate, frequency-dependent interfacial tilt. We design this aqueous electrical interface, which becomes the biosensing "substrate," to be intelligent: it "moves" only when a target of interest is present. This motion requires neither labels nor expensive electrical equipment, so the biosensor is inexpensive and portable, yet still capable of sensitive detection. Nanoparticles, due to their high surface-area-to-volume ratio, are often incorporated to enhance the detection capabilities of schemes like SPR and fluorimetric assays. Most studies currently investigate binding at an immobilized solid-liquid or solid-gas interface, where particles are adsorbed onto a planar surface, functionalized with a receptor to create a reactive substrate, and subsequently flushed with a fluid or gas containing the relevant analyte. These approaches typically involve many preparation and rinsing steps and are susceptible to surface fouling. Our microfluidic device continuously flows and renews the "substrate," and is thus not subject to fouling. In this work, we demonstrate the ability to electrokinetically detect biomolecules binding to functionalized nanoparticles at liquid-liquid interfaces using fDEP. In biotin-streptavidin experiments, we report binding detection limits on the order of 1-10 pM, without amplifying signals or concentrating samples. We also demonstrate the ability to detect this interfacial motion, and thus the presence of binding, using impedance spectroscopy, allowing this scheme to become non-optical, in addition to being label-free.
Keywords: biodetection, dielectrophoresis, microfluidics, nanoparticles
Procedia PDF Downloads 388
265 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna
Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov
Abstract:
This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because the optical signal is detected not only from the subwavelength area beneath the tip but also from the wider, diffraction-limited area of the laser's waist, which might contain another substance. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. This requires a grating with parameters perfectly matched to the given incident light for effective light coupling. This work is devoted to an analysis of the light-grating coupling and a search for grating parameters that enhance the near field beneath the tip apex. The aim of this work is to find the figure of merit of plasmon excitation depending on the grating period and the location of the grating with respect to the apex. In our model, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave, with the electric field perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every single slit of the grating due to the lightning-rod effect. Hence, the grating causes amplitude and phase modulation of the incident field in various ways, depending on its geometry and material. The phase-modulating grating on the probe is a sort of metasurface that provides manipulation of the spatial frequencies of the incident field. The spatial-frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, then one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light. During propagation towards the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; one finds this value by matching the quadratic law of mode compression against the exponential law of light extinction. Finally, the performed theoretical analysis and numerical simulations of plasmon excitation demonstrate that various surface waves can be effectively excited by using overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with the grating effectively enhances and localizes the incident field at the sample.
Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna
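For reference, the phase-matching condition invoked above can be written in the standard grating-coupling form, together with the figure of merit as defined in the abstract (the symbols here are generic, not taken from the paper):

```latex
k_{\mathrm{spp}} \;=\; k_0 \sin\theta_i \;+\; m\,\frac{2\pi}{\Lambda},
\quad m \in \mathbb{Z},
\qquad
\mathrm{FOM} \;=\; \frac{I_{\mathrm{spp}}}{I_{\mathrm{inc}}}
```

where k0 is the free-space wavenumber, θi the angle of incidence, Λ the grating period, and m the diffraction order; the "overtones of the period" mentioned above correspond to orders with |m| > 1.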
Procedia PDF Downloads 283
264 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator
Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic
Abstract:
The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to the conventional wound-rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and the similarly low failure rates of a typically 30%-rated back-to-back power electronics converter over 2:1 speed ranges, but with the following important reliability and cost advantages over the DFIG: the maintenance-free operation afforded by its brushless structure; 50% synchronous speed with the same number of rotor poles (allowing the use of a more compact and more efficient two-stage gearbox instead of a vulnerable three-stage one); and superior grid integration properties, including simpler protection for low-voltage ride-through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector-controlled pulse-width-modulated converters generally feature a much lower total harmonic distortion relative to hysteresis counterparts with variable switching rates, and as such have been the predominant choice for BDFRG (and DFIG) wind turbines. Eliminating the shaft position sensor, which is often required for control implementation in this case, would be desirable to address the associated reliability issues. This fact has largely motivated the recent growing research into sensorless methods and the development of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to identify accurately by off-line testing. The model reference adaptive system (MRAS) based sensorless vector control scheme to be presented overcomes this shortcoming. The true machine-parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio on which the underlying MRAS observer of rotor angular velocity and position relies. Such an observer configuration is more practical to implement and clearly preferable to the existing machine-parameter-dependent solutions, especially bearing in mind that, with very few modifications, it can be adapted for commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoderless controller performance with maximum power point tracking in the base speed region is demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.
Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion
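As background, a generic MRAS speed observer of the kind referred to above drives the mismatch between a reference model and an adjustable model through a PI adaptation law; this is the textbook structure, not the specific parameter-free design of the paper:

```latex
% Speed-tuning error from the reference and adjustable model outputs,
% followed by the PI adaptation mechanism and position integration:
\varepsilon \;=\; \Im\{\,\underline{x}_{\mathrm{ref}}\,\hat{\underline{x}}^{*}\},
\qquad
\hat{\omega}_r \;=\; \left(K_p + \frac{K_i}{s}\right)\varepsilon,
\qquad
\hat{\theta}_r \;=\; \int \hat{\omega}_r\,dt
```

Here x_ref and x̂ are the reference and adaptive model quantities (e.g., flux estimates), and Kp, Ki are the adaptation gains tuned for observer convergence.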
Procedia PDF Downloads 62
263 Predictors for Success in Methadone Maintenance Treatment Clinic: 24 Years of Experience
Authors: Einat E. Peles, Shaul Schreiber, Miriam Adelson
Abstract:
Background: Since it was established more than 50 years ago, methadone maintenance treatment (MMT) has been the most effective treatment for opioid addiction, a chronic relapsing brain disorder that has become an epidemic in western societies. Treatment includes a daily, individually optimized dose of methadone (a long-acting mu-opioid receptor full agonist), accompanied by psychosocial therapy. It is well established that the longer the retention in treatment, the better the outcome and survival. Treatment reduces the likelihood of the infectious diseases and overdose deaths associated with drug injecting, enhances social rehabilitation, eliminates criminal activity, and leads to a healthy, productive life. Aim: To evaluate predictors of long-term retention in treatment, we analyzed our prospective follow-up of a major MMT clinic affiliated with a large tertiary medical center. Population and Methods: Between June 25, 1993, and June 24, 2016, all 889 patients (≥ 18 y) ever admitted to the clinic were prospectively followed up until May 2017. Duration in treatment, from first admission until the patient quit treatment or until the end of follow-up (24 years), was used to calculate cumulative retention in treatment using survival analyses (Kaplan-Meier) with log-rank tests and Cox regression for multivariate analyses. Results: Of the 889 patients, 25.2% were females, who were admitted to treatment at a younger age (35.0 ± 7.9 vs. 40.6 ± 9.8, p < .0005) but started opioid usage at the same age (22.3 ± 6.9). In addition to opioid use, on admission to MMT 58.5% had urine positive for benzodiazepines, 25% for cocaine, 12.4% for cannabis and 6.9% for amphetamines. Hepatitis C antibody tested positive in 55% of the patients and HIV in 7.8%. Of all patients, 75.7% stayed at least one year in treatment, and of them, 67.7% stopped opioid usage (based on urine tests), with a net reduction observed in all other substance abuse (the proportion of those who stopped minus the proportion of those who started). Long-term retention over up to 24 years was 8.0 years (95% Confidence Interval (CI) 7.4-8.6). Predictors of longer retention in treatment (Cox regression) were being older on admission (≥ 30 y), Odds Ratio (OR) = 1.4 (CI 1.1-1.8); not abusing opioids after one year, OR = 1.8 (CI 1.5-2.1); not abusing benzodiazepines after one year, OR = 1.7 (CI 1.4-2.1); and being treated with a methadone dose ≥ 100 mg/day, OR = 1.8 (CI 1.5-2.3). Conclusions: Treating and following patients over 24 years indicates success on two main outcomes: a high rate of retention after one year (75.7%) and a high proportion of opiate abuse cessation (67.7%). As expected, longer cumulative retention was associated with patients treated with a high, adequate methadone dose that successfully resulted in opioid cessation. Based on these findings, and in order to reduce morbidity and mortality, we find the establishment of more MMT clinics within general hospitals a most urgent necessity.
Keywords: methadone maintenance treatment, epidemic, opioids, retention
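A minimal sketch of the survival workflow named above (a Kaplan-Meier retention curve plus a Cox proportional hazards model), here using the lifelines Python library on made-up follow-up records; the column names and values are illustrative assumptions.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Illustrative follow-up data: years retained, whether the patient left
# treatment (event=1) or was censored at end of follow-up (event=0),
# and baseline predictors like those examined in the study.
df = pd.DataFrame({
    "years":       [1.2, 8.0, 3.5, 24.0, 0.7, 12.3, 5.1, 2.0],
    "event":       [1,   1,   1,   0,    1,   0,    1,   1],
    "age_ge_30":   [1,   1,   0,   1,    0,   1,    0,   0],
    "dose_ge_100": [0,   1,   0,   1,    0,   1,    1,   0],
})

kmf = KaplanMeierFitter().fit(df["years"], df["event"])
print(kmf.median_survival_time_)           # median retention (years)

cph = CoxPHFitter().fit(df, duration_col="years", event_col="event")
cph.print_summary()                         # hazard ratios for the predictors
```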
Procedia PDF Downloads 192
262 Story Telling Method as a Bastion of Local Wisdom in the Frame of Education Technology Development in Medan, North Sumatra-Indonesia
Authors: Mardianto
Abstract:
Education and learning are now growing rapidly. The synergy of technology, especially instructional technology, in learning activities has a very large influence on the effectiveness of learning and on the creativity needed to achieve optimal results. On the other hand, there are educational values that are difficult to articulate through technology, character-forming values such as honesty, discipline, hard work, heroism, and so forth. From the past until today, the learning strategy of storytelling has remained an option for teachers to convey the message of character values. With material drawn from the local culture (folklore stories), the combination of the learning objective (building children's character), strategy, and traditional methods (storytelling), together with the preservation of local culture (unearthing folklore tales), is critical to maintaining the nation's culture. In the context of maintaining the nation's culture, starting from childhood at the government elementary school level is therefore a necessity. Globalization, the internet and technology sometimes seem able to displace the role of the teacher in learning activities; the oral tradition on which storytelling relies should therefore be maintained and preserved. This research was conducted at elementary schools in the city of Medan, North Sumatra, Indonesia, with a random sampling technique; the 27 class teachers who served as respondents were randomly drawn from Madrasah Ibtidaiyah (Islamic elementary schools), both public and private. The research, conducted at the beginning of 2014, refers to a curriculum that is being transformed within the Ministry of Religion of the Republic of Indonesia. The results of this study indicate declining skills of teachers in developing storytelling, as can be seen from the following: 74.07% of teachers have never attended special storytelling training, 85.19% no longer write new story scripts, and only 22.22% of teachers incorporate the story method into their learning plans. Most teachers are no longer concerned with storytelling; among other reasons, they experience difficulty in developing the story method because 66.67% of children are more interested in children's cartoons like Bobo boy, Angrybirds and others, and 59.26% of children prefer other activities to listening to a story. The teachers hope that folklore books will be preserved, that storytelling training will be provided by the government through the Ministry of Religion, that storytelling competitions will be scheduled, and that the writing of new, popularly rooted storytelling scripts will be supported immediately. The teachers' hopes are certainly not excessive: by realizing the story method as an articulation of popularly based child character development, local wisdom can become a strong fortress for society in the present era of progress and in the future.
Keywords: story telling, local wisdom, education, technology development
Procedia PDF Downloads 278
261 Wheeled Robot Stable Braking Process under Asymmetric Traction Coefficients
Authors: Boguslaw Schreyer
Abstract:
During the wheeled robot’s braking process, extra dynamic vertical forces act on all wheels: left, right, front and rear. Those forces are directed downward on the front wheels and upward on the rear wheels. In order to maximize the deceleration, and therefore minimize the braking time and braking distance, we need to calculate a correct torque distribution: the front braking torque should be increased, and the rear torque decreased. At the same time, we need to provide good transversal stability. In the simple case of all adhesion coefficients being the same under all wheels, this torque distribution may secure optimal (maximal) control of the robot braking process, securing the minimum braking distance and minimum braking time, while the transversal stability remains relatively good. At any time, we monitor the transversal acceleration; in the case of transversal movement, we stop the braking process and re-apply the braking torque after a defined period of time. If we calculate the value of the torques correctly, we may keep the traction coefficient under the front and rear wheels close to its maximum. To provide optimum braking control, we also need to calculate the timing of the braking torque application and of its release. The braking torques should be released shortly after the wheels pass the maximum of the traction coefficient (while the wheel slip increases) and applied again after the wheels pass this maximum (while the slip decreases). A correct braking torque distribution ensures that the front and rear wheels pass this maximum at the same time, which guarantees optimum deceleration control and, therefore, minimum braking time. In order to calculate a correct torque distribution, a control unit should receive the input signals of the rear torque value (which changes independently), the robot's deceleration, and the values of the vertical front and rear forces. To calculate the timing of torque application and release, more signals are needed: the speed of the robot, and the angular speed and angular deceleration of the wheels. In the case of different adhesion coefficients under the left and right wheels, but the same under each pair of wheels (the same under the right wheels and the same under the left wheels), the Select-Low (SL) and Select-High (SH) methods are applied. The SL method is suggested if transversal stability is more important than braking efficiency. For the robot, braking efficiency is often more important; therefore, the SH method is applied with some control of the transversal stability. In the case that all adhesion coefficients are different under all wheels, the front-rear torque distribution is maintained as in all previous cases; however, the timing of the braking torque application and release is controlled by the lowest adhesion coefficient under the rear wheels. The Lagrange equations have been used to describe the robot dynamics, and MATLAB has been used to simulate the wheeled robot braking process; on this basis, the braking methods have been selected.
Keywords: wheeled robots, braking, traction coefficient, asymmetric
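A minimal sketch of the torque-distribution and select-low/select-high logic described above, assuming a front/rear split proportional to the vertical wheel loads; the function names and the split rule are illustrative assumptions, not the authors' controller.

```python
def distribute_braking_torque(total_torque, f_front, f_rear,
                              mu_left, mu_right, prefer_stability=False):
    """Split a demanded braking torque between axles and pick the
    adhesion coefficient that gates torque application timing."""
    # Front/rear split proportional to dynamic vertical loads: more load
    # transfer to the front under braking means more front torque.
    front_share = f_front / (f_front + f_rear)
    t_front = total_torque * front_share
    t_rear = total_torque - t_front

    # Select-Low favours transversal stability (both sides limited by the
    # worse adhesion); Select-High favours braking efficiency.
    mu_ref = min(mu_left, mu_right) if prefer_stability else max(mu_left, mu_right)
    return t_front, t_rear, mu_ref

# Example: 3 kN front load, 1 kN rear load, asymmetric adhesion.
print(distribute_braking_torque(100.0, 3000.0, 1000.0, 0.3, 0.8))
```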
Procedia PDF Downloads 165
260 Cardiac Arrest after Cardiac Surgery
Authors: Ravshan A. Ibadov, Sardor Kh. Ibragimov
Abstract:
Objective. The aim of the study was to optimize the protocol of cardiopulmonary resuscitation (CPR) after cardiovascular surgical interventions. Methods. The experience of CPR conducted on patients after cardiovascular surgical interventions in the Department of Intensive Care and Resuscitation (DIR) of the Republican Specialized Scientific-Practical Medical Center of Surgery named after Academician V. Vakhidov is presented. The key to the new approach is the rapid elimination of reversible causes of cardiac arrest, followed by either defibrillation or electrical cardioversion (depending on the situation) before external heart compression, which may damage the sternotomy. Careful use of adrenaline is emphasized due to the potential recurrence of hypertension, and timely resternotomy (within 5 minutes) is performed to ensure optimal cerebral perfusion through direct massage. Out of 32 patients, cardiac arrest in the form of asystole was observed in 16 (50%), with hypoxemia as the cause, while the remaining 16 (50%) experienced ventricular fibrillation caused by arrhythmogenic reactions. The age of the patients ranged from 6 to 60 years. All patients were evaluated before the operation using the ASA and EuroSCORE scales, falling into the moderate-risk group (3-5 points). CPR was conducted for the restoration of cardiac activity according to the American Heart Association and European Resuscitation Council guidelines (Ley SJ. Standards for Resuscitation After Cardiac Surgery. Critical Care Nurse. 2015;35(2):30-38). The duration of CPR ranged from 8 to 50 minutes. The ARASNE II scale was used to assess the severity of the patients' conditions after CPR, and the Glasgow Coma Scale was employed to evaluate patients' consciousness after the restoration of cardiac activity and withdrawal of sedation. Results. In all patients, immediate chest compressions of the necessary depth (4-5 cm) at a frequency of 100-120 compressions per minute were initiated upon detection of cardiac arrest. Regardless of the type of cardiac arrest, defibrillation with a manual defibrillator was performed 3-5 minutes later, and adrenaline was administered in doses ranging from 100 to 300 mcg. Persistent ventricular fibrillation was also treated with antiarrhythmic therapy (amiodarone, lidocaine). When necessary, infusion of inotropes and vasopressors was used, and, for the prevention of brain edema and the restoration of adequate neurological status within 1-3 days, sedation, a magnesium-lidocaine mixture, mechanical intranasal cooling of the brain stem, and neuroprotective drugs were employed. A coordinated effort by the resuscitation team and proper role allocation within the team were essential for effective CPR. All these measures contributed to the improvement of CPR outcomes. Conclusion. Successful CPR following cardiac surgical interventions requires interdisciplinary collaboration. The application of an optimized CPR standard leads to a reduction in mortality rates and favorable neurological outcomes.
Keywords: cardiac surgery, cardiac arrest, resuscitation, critically ill patients
Procedia PDF Downloads 53
259 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock
Authors: Vahid Bairami Rad
Abstract:
Making agricultural fields intelligent allows the temperature, humidity, and other variables affecting the growth of agricultural products to be controlled online, from a mobile phone or computer. Making agricultural fields and gardens smart is one of the best ways to optimize agricultural equipment and has a direct effect on the growth of plants, agricultural products and farms. Smart farms are the topic we are going to discuss today, together with the Internet of Things and artificial intelligence. Agriculture is becoming smarter every day. From large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data. Modern farmers have more tools to collect intelligent data than in previous years. Data on soil chemistry also allow people to make informed decisions about fertilizing farmland. Moisture sensors and accurate irrigation controllers have made it possible to optimize irrigation processes and, at the same time, reduce the cost of water consumption. Drones can apply pesticides precisely at the desired point. Automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on. Almost any process related to agriculture can use sensors that collect data to optimize existing processes and support informed decisions. The Internet of Things (IoT) is at the center of this great transformation. Internet of Things hardware has grown and developed rapidly to provide low-cost sensors for people's needs. These sensors are embedded in battery-powered IoT devices, can operate for years, and have access to low-power and cost-effective mobile networks. IoT device management platforms have also evolved rapidly and can now be used to securely manage existing devices at scale. IoT cloud services additionally provide a set of application enablement services that can easily be used by developers, allowing them to focus on building the application business logic. These developments have created powerful new applications in the field of the Internet of Things, and such applications can be used in various industries, including agriculture, to build smart farms. But the question is, what makes today's farms truly smart farms? Let us put this question another way: when will the technologies associated with smart farms reach the point where the intelligence they provide exceeds the intelligence of experienced and professional farmers?
Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, Arduino Uno
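A minimal sketch of the kind of sensor-driven control loop such a smart farm might run, with a soil-moisture threshold deciding irrigation; the threshold, poll interval, and the read/actuate functions are placeholders for real device integrations (e.g., an Arduino-based sensor node).

```python
import time

MOISTURE_THRESHOLD = 35.0  # percent volumetric water content (assumed)

def read_soil_moisture() -> float:
    # Placeholder for a real sensor read (e.g., over MQTT or a GPIO ADC).
    return 31.2

def set_irrigation_valve(open_valve: bool) -> None:
    # Placeholder for a real actuator command.
    print("valve", "OPEN" if open_valve else "CLOSED")

def control_loop(poll_seconds: float = 60.0, iterations: int = 3) -> None:
    for _ in range(iterations):
        moisture = read_soil_moisture()
        # Irrigate only while the field is drier than the target band.
        set_irrigation_valve(moisture < MOISTURE_THRESHOLD)
        time.sleep(poll_seconds)

control_loop(poll_seconds=0.1)  # short poll interval for the demo
```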
Procedia PDF Downloads 56
258 A Delphi Study to Build Consensus for Tuberculosis Control Guideline to Achieve WHO End TB 2035 Strategy
Authors: Pui Hong Chung, Cyrus Leung, Jun Li, Kin On Kwok, Ek Yeoh
Abstract:
Introduction: Studies of TB control in intermediate tuberculosis-burden countries (IBCs) comprise a relatively small proportion of the TB control literature compared with the effort devoted to their high- and low-burden counterparts. There is currently a lack of consensus on the optimal weapons and strategies for combating TB in IBCs; guidelines for TB control are inadequate and thus pose a great obstacle to eliminating TB in these countries. To fill in these research and service gaps and to seek consensus in policy making for TB control, we have devised a series of scoping and Delphi studies. Method: The scoping and Delphi studies are conducted in parallel, each feeding information to the other. Before the Delphi iterations, we invited three local experts in TB control in Hong Kong to participate in a pre-assessment round of the Delphi study to comment on the validity, relevance, and clarity of the Delphi questionnaire. Result: Two scoping studies have been conducted, regarding LTBI control in health care workers in IBCs and TB control in the elderly of IBCs, respectively. The results of these two studies were used as the foundation for developing the Delphi questionnaire, which taps seven areas: characteristics of IBCs; adequacy of research and services in LTBI control in IBCs; importance and feasibility of interventions for TB control and prevention in hospital; screening and treatment of LTBI in the community; reasons for refusal of, or default from, LTBI treatment; medical adherence to LTBI treatment; and importance and feasibility of interventions for TB control and prevention in the elderly in IBCs. The local experts also commented on the two scoping studies, thus acting as the sixth phase of expert consultation in the Arksey and O'Malley framework for scoping studies, both to nourish the scope and strategies used in these studies and to supplement ideas for further scoping or systematic review studies. In the subsequent stage, an international expert panel comprising 15 to 20 experts from IBCs in the Western Pacific Region will be recruited to join two rounds of anonymous Delphi iterations. Four categories of TB control experts, namely clinicians, policy makers, microbiologists/laboratory personnel, and public health clinicians, are our target groups. A consensus level of 80% is used to determine the achievement of consensus on particular issues. Key messages: 1. Scoping reviews and the Delphi method are useful to identify gaps and then achieve consensus in research. 2. Many resources are currently devoted to high-burden countries; however, the usually neglected intermediate-burden countries are an indispensable part of achieving the ambitious WHO End TB 2035 target.
Keywords: Delphi questionnaire, tuberculosis, WHO, latent TB infection
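A minimal sketch of how the 80% consensus rule could be scored from panel responses, assuming each expert rates each questionnaire item as agree/disagree; the item names and ratings are illustrative assumptions.

```python
import pandas as pd

# Rows: experts; columns: questionnaire items; values: 1 = agree, 0 = disagree.
responses = pd.DataFrame({
    "hospital_screening":  [1, 1, 1, 1, 0, 1, 1, 1, 1, 1],
    "community_treatment": [1, 0, 1, 0, 1, 1, 0, 1, 0, 1],
})

agreement = responses.mean()           # proportion agreeing per item
consensus = agreement >= 0.80          # 80% threshold from the protocol
for item in responses.columns:
    status = "consensus" if consensus[item] else "no consensus"
    print(f"{item}: {agreement[item]:.0%} agreement -> {status}")
```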
Procedia PDF Downloads 301
257 Synthesis of LiMₓMn₂₋ₓO₄ Doped Co, Ni, Cr and Its Characterization as Lithium Battery Cathode
Authors: Dyah Purwaningsih, Roto Roto, Hari Sutrisno
Abstract:
Manganese dioxide (MnO₂) and its derivatives are among the most widely used materials for the positive electrode in both primary and rechargeable lithium batteries. The MnO₂-derived compound LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is one of the leading candidates for positive electrode materials in lithium batteries, as it is abundant, low cost and environmentally friendly. Over the years, the synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) has been carried out using various methods, including sol-gel, gas condensation, spray pyrolysis, and ceramic routes. Problems with these methods persist, including high cost (making them commercially inapplicable) and the need for high processing temperatures (environmentally unfriendly). This research aims to: (1) synthesize LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) by the reflux technique; (2) develop a microstructure analysis method for LiMₓMn₂₋ₓO₄ powder XRD data with the two-stage method; (3) study the electrical conductivity of LiMₓMn₂₋ₓO₄. This research developed the synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) by reflux. The starting materials, Mn(CH₃COO)₂·4H₂O and Na₂S₂O₈, were refluxed for 10 hours at 120°C to form β-MnO₂. The doping with Co, Ni and Cr was carried out by a solid-state method with LiOH to form LiMₓMn₂₋ₓO₄. The instruments used included XRD, SEM-EDX, XPS, TEM, SAA, TG/DTA, FTIR, an LCR meter and an eight-channel battery analyzer. Microstructure analysis of LiMₓMn₂₋ₓO₄ was carried out on powder XRD data by the two-stage method, using the FullProf program integrated into WinPlotR and the Oscail program, as well as on binding energy data from XPS. The morphology of LiMₓMn₂₋ₓO₄ was studied with SEM-EDX, TEM, and SAA, the thermal stability was tested with TG/DTA, and the electrical conductivity was studied from the LCR meter data. The specific capacity of LiMₓMn₂₋ₓO₄ as a lithium battery cathode was tested using an eight-channel battery analyzer. The results showed that the synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) was successfully carried out by reflux. The optimal calcination temperature is 750°C. XRD characterization shows that LiMn₂O₄ has a cubic crystal structure with the Fd3m space group. Using CheckCell in WinPlotR, increasing the Li/Mn mole ratio does not result in changes to the LiMn₂O₄ crystal structure. The doping with Co, Ni and Cr in LiMₓMn₂₋ₓO₄ (x = 0.02, 0.04, 0.06, 0.08, 0.10) does not change the cubic Fd3m crystal structure. All the formed crystals are polycrystalline, with sizes of 100-450 nm. Characterization of the LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) microstructure by the two-stage method shows shrinkage of the lattice parameter and cell volume. Based on its range of capacitance, the conductivity obtained for LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is ionic conductivity with varying capacitance. The specific battery capacities at a voltage of 4799.7 mV for LiMn₂O₄, Li₁.₀₈Mn₁.₉₂O₄, LiCo₀.₁Mn₁.₉O₄, LiNi₀.₁Mn₁.₉O₄ and LiCr₀.₁Mn₁.₉O₄ are 88.62 mAh/g, 2.73 mAh/g, 89.39 mAh/g, 85.15 mAh/g, and 1.48 mAh/g, respectively.
Keywords: LiMₓMn₂₋ₓO₄, solid-state, reflux, two-stage method, ionic conductivity, specific capacity
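For context, measured capacities like those above are commonly compared against the theoretical gravimetric capacity of an intercalation cathode; this is the standard formula, with textbook values for LiMn₂O₄, not figures from the paper:

```latex
Q_{\mathrm{theor}} \;=\; \frac{n\,F}{3.6\,M_w}\ \ \left[\mathrm{mAh\,g^{-1}}\right]
```

where n is the number of electrons transferred per formula unit, F = 96485 C/mol is the Faraday constant, and M_w is the molar mass in g/mol; for LiMn₂O₄ (n = 1, M_w ≈ 180.8 g/mol), this gives approximately 148 mAh/g.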
Procedia PDF Downloads 193
256 Application of Nanoparticles on Surface of Commercial Carbon-Based Adsorbent for Removal of Contaminants from Water
Authors: Ahmad Kayvani Fard, Gordon Mckay, Muataz Hussien
Abstract:
Adsorption/sorption is believed to be one of the optimal processes for the removal of heavy metals from water due to its low operational and capital cost as well as its high removal efficiency. Different materials have been reported in the literature as adsorbents for heavy metal removal from wastewater, such as natural sorbents, synthetic organic polymers and inorganic mineral materials. The selection of adsorbents and the development of new functional materials that achieve good removal of heavy metals from water is an important practice and depends on many factors, such as the availability of the material, its cost, and its safety. In this study, we report the synthesis of activated carbon (AC) and carbon nanotubes (CNT) doped with different loadings of metal oxide nanoparticles, such as Fe2O3, Fe3O4, Al2O3, TiO2 and SiO2, as well as Ag nanoparticles, and their application in the removal of heavy metals, hydrocarbons, and organics from wastewater. Commercial AC and CNT with different loadings of the mentioned nanoparticles were prepared, the effects of pH, adsorbent dosage, sorption kinetics, and concentration were studied, and the optimum conditions for the removal of heavy metals from water are reported. The prepared composite sorbents were characterized using field emission scanning electron microscopy (FE-SEM), high-resolution transmission electron microscopy (HR-TEM), thermogravimetric analysis (TGA), X-ray diffraction (XRD), the Brunauer-Emmett-Teller (BET) nitrogen adsorption technique, and zeta potential measurements. The composite materials showed higher removal efficiency and superior adsorption capacity compared to commercially available carbon-based adsorbents. The specific surface area of the AC increased by 50%, reaching up to 2000 m2/g, while the specific surface area of the CNT increased by more than 8 times, reaching a value of 890 m2/g. The increased surface area, along with the surface charge of the material, is one of the key parameters determining the removal efficiency and adsorption capacity. Moreover, the surface charge densities of the impregnated CNT and AC were enhanced significantly, which can benefit the adsorption process. The nanoparticles also enhance the catalytic activity of the material and reduce agglomeration and aggregation, which provides more active sites for adsorbing contaminants from water. Results for treating wastewater include 100% removal of BTEX, arsenic, strontium, barium, phenolic compounds, and oil from water. The results obtained are promising for the use of AC and CNT loaded with metal oxide nanoparticles in the treatment and pretreatment of wastewater and produced water before the desalination process. Adsorption can be very efficient, with low energy consumption and economic feasibility.
Keywords: carbon nanotube, activated carbon, adsorption, heavy metal, water treatment
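As an illustration of how adsorption capacity is commonly extracted from equilibrium data of this kind, the sketch below fits a Langmuir isotherm; the model choice and the numbers are assumptions for the sketch, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative equilibrium data: residual metal concentration Ce (mg/L)
# and adsorbed amount qe (mg/g) at fixed dosage and pH.
ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([18.0, 35.0, 52.0, 68.0, 79.0, 86.0])

def langmuir(c, q_max, k_l):
    # q_max: monolayer capacity (mg/g); k_l: affinity constant (L/mg).
    return q_max * k_l * c / (1.0 + k_l * c)

(q_max, k_l), _ = curve_fit(langmuir, ce, qe, p0=(90.0, 0.05))
print(f"q_max ~ {q_max:.1f} mg/g, K_L ~ {k_l:.3f} L/mg")
```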
Procedia PDF Downloads 234
255 Effect of Plant Growth Regulators on in vitro Biosynthesis of Antioxidative Compounds in Callus Culture and Regenerated Plantlets Derived from Taraxacum officinale
Authors: Neha Sahu, Awantika Singh, Brijesh Kumar, K. R. Arya
Abstract:
Taraxacum officinale Weber, or dandelion (Asteraceae), is an important Indian traditional herb used for liver detoxification and in the treatment of digestive problems and spleen, hepatic and kidney disorders, among others. The plant is well known to possess important phenolics and flavonoids and to serve as a potential source of antioxidative and chemoprotective agents. Biosynthesis of bioactive compounds through in vitro cultures is a requisite for natural resource conservation and provides an alternative source for pharmaceutical applications. Thus, an efficient and reproducible protocol was developed for the in vitro biosynthesis of bioactive antioxidative compounds from leaf-derived callus and in vitro regenerated cultures of Taraxacum officinale using MS media fortified with various combinations of auxins and cytokinins. MS medium containing 0.25 mg/l 2,4-D (2,4-dichlorophenoxyacetic acid) with 0.05 mg/l 2-iP [N6-(2-isopentenyl)adenine] was found to be an effective combination for the establishment of callus, with a 92% callus induction frequency. Moreover, 2.5 mg/l NAA (α-naphthaleneacetic acid) with 0.5 mg/l BAP (6-benzylaminopurine) showed the optimal response for in vitro plant regeneration, with an 80% regeneration frequency, and 1.5 mg/l NAA for rooting. In vitro regenerated plantlets were further transferred to soil and acclimatized. The quantitative variability of the accumulated bioactive compounds in the cultures (in vitro callus, plantlets and acclimatized plants) was determined through UPLC-MS/MS (ultra-performance liquid chromatography-triple quadrupole-linear ion trap mass spectrometry) and compared with wild plants. The phytochemical determination of in vitro and wild-grown samples showed the accumulation of 6 compounds. In in vitro callus cultures and regenerated plantlets, two major antioxidative compounds, chlorogenic acid (14950.0 µg/g and 4086.67 µg/g) and umbelliferone (10400.00 µg/g and 2541.67 µg/g), respectively, were found. Scopoletin was found to be highest in in vitro regenerated plants (83.11 µg/g) as compared to wild plants (52.75 µg/g). Notably, scopoletin was not detected in callus and acclimatized plants, but quinic acid (6433.33 µg/g) and protocatechuic acid (92.33 µg/g) accumulated at the highest levels in acclimatized plants as compared to the other samples. Wild-grown plants contained the highest content (948.33 µg/g) of the flavonoid glycoside luteolin-7-O-glucoside. Our data suggest that in vitro callus and regenerated plants biosynthesize higher contents of antioxidative compounds under controlled conditions than wild-grown plants. These standardized culture conditions may be explored as a sustainable source of plant material for the enhanced production and adequate supply of antioxidative polyphenols.
Keywords: anti-oxidative compounds, in vitro cultures, Taraxacum officinale, UPLC-MS/MS
Procedia PDF Downloads 201
254 Investigation of Mass Transfer for RPB Distillation at High Pressure
Authors: Amiza Surmi, Azmi Shariff, Sow Mun Serene Lock
Abstract:
In recent decades, there has been significant emphasis on the pivotal role of Rotating Packed Beds (RPBs) in absorption processes, encompassing the removal of Volatile Organic Compounds (VOCs) from groundwater, deaeration, CO2 absorption, desulfurization, and similar critical applications. The primary focus is on elevating mass transfer rates, enhancing separation efficiency, curbing power consumption, and mitigating pressure drops. Additionally, substantial effort has been invested in exploring the adaptation of RPB technology for offshore deployment. This comprehensive study delves into the intricacies of nitrogen removal under low-temperature and high-pressure conditions, employing the high-gravity principle via an innovative RPB distillation concept, with a specific emphasis on optimizing mass transfer. To the authors' knowledge, based on comprehensive research, no cryogenic experimental testing of nitrogen removal via an RPB has previously been conducted. The research identifies pivotal process control factors through meticulous experimental testing, with pressure, reflux ratio, and reboil ratio emerging as the critical determinants of the desired separation performance. The results are remarkable: nitrogen is reduced to less than one mol% in the Liquefied Natural Gas (LNG) product, with less than three mol% methane in the nitrogen-rich gas stream. The study further characterizes the mass transfer coefficient, revealing a noteworthy trend of decreasing Number of Transfer Units (NTU) and Area of Transfer Units (ATU) as the rotational speed escalates. Notably, the condenser and reboiler impose varying duties depending on the operating pressure, with operation at the lower pressure of 12 bar requiring a more substantial duty than operation of the RPB at 15 bar. In pursuit of optimal energy efficiency, a meticulous sensitivity analysis is conducted, pinpointing the combination of pressure and rotating speed that minimizes overall energy consumption. These findings underscore the ability of the RPB distillation approach to effect efficient separation even when operating under the challenging conditions of low temperature and high pressure. This achievement is attributed to a rigorous process control framework that diligently manages the operating pressure and temperature profile of the RPB. Nonetheless, the study's conclusions point towards the need for further research to address potential scaling challenges and associated risks, paving the way for the industrial implementation of this transformative technology.
Keywords: mass transfer coefficient, nitrogen removal, liquefaction, rotating packed bed
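For reference, the transfer units quoted above are conventionally defined from the vapour-phase composition driving force (standard packed-bed form; the symbols are generic, not taken from the paper):

```latex
\mathrm{NTU} \;=\; \int_{y_{\mathrm{in}}}^{y_{\mathrm{out}}} \frac{dy}{\,y^{*}-y\,},
\qquad
Z \;=\; \mathrm{HTU}\times\mathrm{NTU}
```

where y is the vapour mole fraction, y* its equilibrium value, Z the packing depth, and HTU the height of a transfer unit; a falling NTU at a higher rotational speed is consistent with a shorter effective contact path being needed for the same separation.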
Procedia PDF Downloads 54
253 Analysis of Non-Conventional Roundabout Performance in Mixed Traffic Conditions
Authors: Guneet Saini, Shahrukh, Sunil Sharma
Abstract:
Traffic congestion is the most critical issue faced by transportation professionals today. Over the past few years, roundabouts have been recognized globally as a measure to promote efficiency at intersections. In developing countries like India, this type of intersection still faces many issues, such as bottleneck situations, long queues and increased waiting times due to increasing traffic, which in turn affect the performance of the entire urban network. This research is a case study of a roundabout that is non-conventional in terms of geometric design, located in a small town in India. Such roundabouts should be analyzed for their functionality in the mixed traffic conditions prevalent in many developing countries. Microscopic traffic simulation is an effective tool for analyzing traffic conditions and estimating various measures of the operational performance of intersections, such as capacity, vehicle delay, queue length and Level of Service (LOS) of an urban roadway network. This study involves the analysis of an unsymmetrical, non-circular, 6-legged roundabout known as "Kala Aam Chauraha" in the small town of Bulandshahr in Uttar Pradesh, India, using the VISSIM simulation package, the most widely used software for microscopic traffic simulation. For coding in VISSIM, data were collected from the site during the morning and evening peak hours of a weekday and then analyzed for base model building. The model was calibrated on driving behavior and vehicle parameters to obtain an optimal set of calibrated parameters, followed by validation of the model to obtain a base model that can replicate real field conditions. This calibrated and validated model was then used to analyze the prevailing operational traffic performance of the roundabout, which was compared with a proposed alternative designed to improve the efficiency of the roundabout network and to accommodate pedestrians in the geometry. The study results show that the proposed alternative is an improvement over the present roundabout, as it considerably reduces congestion, vehicle delay and queue length and hence successfully improves roundabout performance without compromising pedestrian safety. The study proposes similar designs for the modification of existing non-conventional roundabouts experiencing excessive delays and queues in order to improve their efficiency, especially in developing countries. From this study, it can be concluded that there is a need to improve the current geometry of such roundabouts to ensure better traffic performance and the safety of drivers and pedestrians negotiating the intersection; hence, this proposal may be considered a best fit.
Keywords: operational performance, roundabout, simulation, VISSIM
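As a side note, one statistic commonly used when calibrating simulation models such as this VISSIM model against field counts is the GEH statistic; the abstract does not name its specific tests, so the sketch below is an assumed, standard example.

```python
import math

def geh(model_flow: float, observed_flow: float) -> float:
    """GEH statistic for hourly flows; values below 5 are usually
    taken to indicate an acceptable match for a link or turn."""
    return math.sqrt(2.0 * (model_flow - observed_flow) ** 2
                     / (model_flow + observed_flow))

# Example: simulated vs. counted approach volumes (veh/h) on three legs.
pairs = [(820, 790), (1150, 1240), (430, 445)]
for m, c in pairs:
    print(f"model={m}, count={c}, GEH={geh(m, c):.2f}")
```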
Procedia PDF Downloads 139252 Preoperative Anxiety Evaluation: Comparing the Visual Facial Anxiety Scale/Yumul Faces Anxiety Scale, Numerical Verbal Rating Scale, Categorization Scale, and the State-Trait Anxiety Inventory
Authors: Roya Yumul, CHSE, Ofelia Loani Elvir Lazo, David Chernobylsky, Omar Durra
Abstract:
Background: Preoperative anxiety has been shown to be caused by the fear associated with surgical and anesthetic complications; however, the current gold standard for assessing patient anxiety, the STAI, is problematic to use in the preoperative setting given the duration and concentration required to complete the extensive 40-item questionnaire. Our primary aim in the study is to investigate the correlation of the Visual Facial Anxiety Scale (VFAS) and Numerical Verbal Rating Scale (NVRS) to the State-Trait Anxiety Inventory (STAI) to determine the optimal anxiety scale to use in the perioperative setting. Methods: A clinical study of patients undergoing various surgeries was conducted utilizing each of the preoperative anxiety scales. Inclusion criteria included patients undergoing elective surgeries, while exclusion criteria included patients with anesthesia contraindications, inability to comprehend instructions, impaired judgement, a history of substance abuse, and those pregnant or lactating. 293 patients were analyzed in terms of demographics, anxiety scale survey results, and anesthesia data, with Spearman coefficients, chi-squared analysis, and Fisher's exact test utilized for comparison analysis. Results: Statistical analysis showed that VFAS had a higher correlation to STAI than NVRS (rs=0.66, p<0.0001 vs. rs=0.64, p<0.0001). The combined VFAS-Categorization Scores showed the highest correlation with the gold standard (rs=0.72, p<0.0001). Subgroup analysis showed similar results. STAI evaluation time (247.7 ± 54.81 sec) far exceeds VFAS (7.29 ± 1.61 sec), NVRS (7.23 ± 1.60 sec), and Categorization scales (7.29 ± 1.99 sec). Patients preferred VFAS (54.4%), Categorization (11.6%), and NVRS (8.8%). Anesthesiologists preferred VFAS (63.9%), NVRS (22.1%), and Categorization Scales (14.0%). Of note, the top five causes of preoperative anxiety were determined to be waiting (56.5%), pain (42.5%), family concerns (40.5%), no information about the surgery (40.1%), or the anesthesia (31.6%). Conclusions: The combined VFAS-Categorization Score (VCS) demonstrates the highest correlation to the gold standard, STAI. Both VFAS and Categorization tests also take significantly less time than STAI, which is critical in the preoperative setting. Among both patients and anesthesiologists, VFAS was the most preferred scale. This forms the basis of the Yumul FACES Anxiety Scale, designed for quick quantification and assessment in the preoperative setting while maintaining a high correlation to the gold standard. Additional studies using the formulated Yumul FACES Anxiety Scale are merited. Keywords: numerical verbal anxiety scale, preoperative anxiety, state-trait anxiety inventory, visual facial anxiety scale
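A minimal sketch of the kind of rank-correlation comparison reported here, using synthetic placeholder scores rather than the study's patient data, could look as follows:

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder scores for illustration only (not study data): each row is
# one patient's VFAS (0-10), NVRS (0-10), and STAI (20-80) score.
rng = np.random.default_rng(0)
stai = rng.integers(20, 80, size=50)
vfas = np.clip(stai / 8 + rng.normal(0, 1.0, 50), 0, 10).round()
nvrs = np.clip(stai / 8 + rng.normal(0, 1.3, 50), 0, 10).round()

rs_vfas, p_vfas = spearmanr(vfas, stai)   # rank correlation vs. gold standard
rs_nvrs, p_nvrs = spearmanr(nvrs, stai)
print(f"VFAS vs STAI: rs={rs_vfas:.2f} (p={p_vfas:.4f})")
print(f"NVRS vs STAI: rs={rs_nvrs:.2f} (p={p_nvrs:.4f})")
```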
Procedia PDF Downloads 140251 Economic Decision Making under Cognitive Load: The Role of Numeracy and Financial Literacy
Authors: Vânia Costa, Nuno De Sá Teixeira, Ana C. Santos, Eduardo Santos
Abstract:
Financial literacy and numeracy have been regarded as paramount for rational household decision making given the increasing complexity of financial markets. However, financial decisions are often made under sub-optimal circumstances, including cognitive overload. The present study aims to clarify how financial literacy and numeracy, taken as relevant expert knowledge for financial decision-making, modulate possible effects of cognitive load. Participants were required to choose between a sure loss and a gamble pertaining to a financial investment, either with or without a competing memory task. Two experiments were conducted, varying only the content of the competing task. In the first, the financial choice task was performed while maintaining in working memory a list of five random letters. In the second, cognitive load was based upon the retention of six random digits. In both experiments, one of the items in the list had to be recalled given its serial position. Outcomes of the first experiment revealed no significant main effect or interactions involving the cognitive load manipulation and numeracy and financial literacy skills, strongly suggesting that retaining a list of random letters did not interfere with the cognitive abilities required for financial decision making. Conversely, in the second experiment, a significant interaction between the competing memory task and level of financial literacy (but not numeracy) was found for the frequency of choice of the gambling option. Overall, in the control condition, both participants with high financial literacy and those with high numeracy were more prone to choose the gambling option. However, when under cognitive load, participants with high financial literacy were as likely as their less financially literate counterparts to choose the gambling option. This outcome is interpreted as evidence that financial literacy prevents intuitive risk-aversion reasoning only under highly favourable conditions, as is the case when no other task is competing for cognitive resources. In contrast, participants with higher levels of numeracy were consistently more prone to choose the gambling option in both experimental conditions. These results are discussed in the light of the opposition between classical dual-process theories and fuzzy-trace theories of intuitive decision making, suggesting that while some instances of expertise (such as numeracy) are prone to support easily accessible gist representations, other expert skills (such as financial literacy) depend upon deliberative processes. It is furthermore suggested that this dissociation between types of expert knowledge might depend on the degree to which they are generalizable across disparate settings. Finally, applied implications of the present study are discussed with a focus on how it informs financial regulators and on the importance and limits of promoting financial literacy and general numeracy. Keywords: decision making, cognitive load, financial literacy, numeracy
Procedia PDF Downloads 182250 Approximate Spring Balancing for the Arm of a Humanoid Robot to Reduce Actuator Torque
Authors: Apurva Patil, Ashay Aswale, Akshay Kulkarni, Shubham Bharadiya
Abstract:
The potential benefit of gravity compensation of linkages in mechanisms using springs to reduce actuator requirements is well recognized, but practical applications have been elusive. Although existing methods provide exact spring balance, they require additional masses or auxiliary links, or all the springs used originate from the ground, which makes the resulting device bulky and space-inefficient. This paper uses a method of static balancing of mechanisms with conservative loads, such as gravity and spring loads, using non-zero-free-length springs with child-parent connections and no auxiliary links. The application of this method to the developed arm of a humanoid robot is presented here. Spring balancing is particularly important in this case because the serial chain of linkages has to work against gravity. This work involves approximate spring balancing of the open-loop chain of linkages by minimizing the variance of the potential energy. It takes the approach of flattening the potential energy distribution over the workspace and fuses it with numerical optimization. The results show a considerable reduction in actuator torque requirements with a practical spring design and arrangement. Reduced actuator torque facilitates the use of lower-end actuators, which are generally smaller in weight and volume, thereby lowering the space requirements and the total weight of the arm. This is particularly important for humanoid robots, where a parent actuator has to handle the weight of the subsequent actuators as well. Actuators with lower actuation requirements are more energy efficient, thereby reducing the energy consumption of the mechanism. Lower-end actuators are also lower in cost and facilitate the development of low-cost devices. Although the method provides only approximate balancing, it is versatile, flexible in choosing appropriate control variables that are relevant to the design problem, and easy to implement. The true potential of this technique lies in the fact that it uses a very simple optimization to find the spring constant, the free length of the spring, and the optimal attachment points subject to the optimization constraints. Also, it uses physically realizable non-zero-free-length springs directly, thereby reducing the complexity involved in simulating zero-free-length springs from non-zero-free-length springs. This method allows springs to be attached to the preceding parent link, which makes the implementation of spring balancing practical. Because auxiliary linkages can be avoided, the resultant arm of the humanoid robot is compact. The cost benefits and reduced complexity can be significant advantages in the development of this arm of the humanoid robot. Keywords: actuator torque, child-parent connections, spring balancing, the arm of a humanoid robot
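To make the optimization concrete, the following is a minimal sketch, under assumed geometry and mass values, of approximate spring balancing for a single gravity-loaded link: the variance of the total potential energy over a workspace sweep is minimized with respect to the spring constant, free length, and attachment radius. It is an illustration of the variance-flattening idea, not the paper's multi-link formulation.

```python
import numpy as np
from scipy.optimize import minimize

# One gravity-loaded link of mass m, centre of mass at lc, pivoting at
# the origin. A spring (stiffness k, free length L0) joins a point at
# radius r on the link to a fixed parent anchor at height h. All values
# below are assumptions for illustration.
m, g, lc, h = 2.0, 9.81, 0.20, 0.25
theta = np.linspace(-np.pi / 2, np.pi / 2, 181)   # workspace sweep (rad)

def total_pe(params):
    k, L0, r = params
    # distance between link point (r*sin(t), -r*cos(t)) and anchor (0, h)
    L = np.sqrt(r**2 + h**2 + 2.0 * r * h * np.cos(theta))
    pe_gravity = m * g * lc * (1.0 - np.cos(theta))
    pe_spring = 0.5 * k * (L - L0) ** 2
    return pe_gravity + pe_spring

def pe_variance(params):
    # flatten the energy landscape: a constant total PE means zero
    # static torque is needed from the actuator
    return np.var(total_pe(params))

res = minimize(pe_variance, x0=[200.0, 0.10, 0.10],
               bounds=[(1.0, 2000.0), (0.01, 0.5), (0.01, 0.3)])
k, L0, r = res.x
print(f"k={k:.1f} N/m, free length={L0:.3f} m, attachment radius={r:.3f} m")
```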
Procedia PDF Downloads 244249 Coupling Random Demand and Route Selection in the Transportation Network Design Problem
Authors: Shabnam Najafi, Metin Turkay
Abstract:
The network design problem (NDP) is used to determine the set of optimal values for certain pre-specified decision variables, such as the capacity expansion of nodes and links, by optimizing various system performance measures, including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while considering the route choice behaviors of network users at the same time. NDP studies have mostly investigated the random demand and route selection constraints separately due to computational challenges. In this work, we consider both random demand and route selection constraints simultaneously. This work presents a nonlinear stochastic model for the land use and road network design problem to address the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes the cost function and air pollution simultaneously, with random demand and a stochastic route selection constraint, aiming to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used to determine the travel time function for each link. We consider a city with origin and destination nodes, which can be residential, employment, or both. There is a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine the amount of road capacity and the origin zones simultaneously. Minimizing the travel and expansion costs of routes and origin zones on the one hand, and minimizing CO emissions on the other, are considered in this analysis at the same time. In this work, the demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically a logit route choice model. Treating both demand and route choice as random is more applicable to the design of urban networks. The epsilon-constraint method is one way to solve both linear and nonlinear multi-objective problems, and it is used to solve the problem in this work. The problem was solved by keeping the first objective (the cost function) as the objective function of the problem and treating the second objective as a constraint that should be less than an epsilon, where epsilon is an upper bound on the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones, and 7 links was solved in GAMS, and the set of Pareto points was obtained. There are 15 efficient solutions. According to these solutions, as the cost function value increases, the emission function value decreases, and vice versa. Keywords: epsilon-constraint, multi-objective, network design, stochastic
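For reference, below is a minimal sketch of the two building blocks named in this abstract, the BPR link impedance function and a logit route choice model. The BPR parameters (alpha = 0.15, beta = 4) are the customary defaults, and all volumes, capacities, and the logit scale are assumed values, not the paper's calibrated ones.

```python
import numpy as np

def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4.0):
    """BPR link impedance: t = t0 * (1 + alpha * (v/c)^beta).
    alpha=0.15 and beta=4 are the customary defaults, assumed here."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

def logit_route_probabilities(route_times, theta=0.5):
    """Logit route choice: P_r proportional to exp(-theta * t_r)."""
    u = np.exp(-theta * np.asarray(route_times))
    return u / u.sum()

# Illustrative two-route O-D pair (free-flow times, volumes, capacities assumed):
t = [bpr_travel_time(10.0, 900, 1200), bpr_travel_time(12.0, 400, 1000)]
print(np.round(t, 2), np.round(logit_route_probabilities(t), 3))
```

In an equilibrium model the volumes fed into the BPR function would themselves be outputs of the logit assignment, solved as a fixed point; the sketch only shows one evaluation of that loop.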
Procedia PDF Downloads 647248 A Systematic Review on the Whole-Body Cryotherapy versus Control Interventions for Recovery of Muscle Function and Perceptions of Muscle Soreness Following Exercise-Induced Muscle Damage in Runners
Authors: Michael Nolte, Iwona Kasior, Kala Flagg, Spiro Karavatas
Abstract:
Background: Cryotherapy has been used as a post-exercise recovery modality for decades. Whole-body cryotherapy (WBC) is an intervention that involves brief exposures to extremely cold air in order to induce therapeutic effects. It is currently being investigated for its effectiveness in treating certain exercise-induced impairments. Purpose: The purpose of this systematic review was to determine whether WBC as a recovery intervention is more effective than, less effective than, or equally as effective as other interventions at reducing perceived levels of muscle soreness and promoting recovery of muscle function after exercise-induced muscle damage (EIMD) from running. Methods: A systematic review of the current literature was performed utilizing the following MeSH terms: cryotherapy, whole-body cryotherapy, exercise-induced muscle damage, muscle soreness, muscle recovery, and running. The databases utilized were PubMed, CINAHL, EBSCO Host, and Google Scholar. Articles were included if they were published within the last ten years, had a CEBM level of evidence of IIb or higher, had a PEDro scale score of 5 or higher, studied runners as primary subjects, and utilized both perceived levels of muscle soreness and recovery of muscle function as dependent variables. Articles were excluded if the subjects did not include runners, if the interventions included partial-body cryotherapy (PBC) instead of WBC, or if both muscle performance and perceived muscle soreness were not assessed within the study. Results: Two of the four articles revealed that WBC was significantly more effective than treatment interventions such as far-infrared radiation and passive recovery at reducing perceived levels of muscle soreness and restoring muscle power and endurance following simulated trail runs and high-intensity interval running, respectively. One of the four articles revealed no significant difference between WBC and passive recovery in terms of reducing perceived muscle soreness and restoring muscle power following sprint intervals. One of the four articles revealed that WBC had a harmful effect compared to cold-water immersion (CWI) and passive recovery on both perceived muscle soreness and recovery of muscle strength and power following a marathon. Discussion/Conclusion: Though there was no consensus on WBC's effectiveness at treating exercise-induced muscle damage following running compared to other interventions, it seems that WBC may at least have a time-dependent positive effect on muscle soreness and recovery following high-intensity interval runs and endurance running, marathons excluded. More research needs to be conducted to determine the most effective way to implement WBC as a recovery method for exercise-induced muscle damage, including the optimal temperature, timing, duration, and frequency of treatment. Keywords: cryotherapy, physical therapy intervention, physical therapy, whole body cryotherapy
Procedia PDF Downloads 241247 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measures based on the number of oracle iterations, but to evaluate the real circuit and time costs of the quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time makes it possible to transform the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobabilistic superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, a node index calculator, a uniqueness checker, and a comparator, all of which were created using only quantum Toffoli gates, including their special forms, the Feynman (controlled-NOT) and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs. Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
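As a small illustration of the "optimal number of times," the sketch below computes the standard Grover iteration count, floor((pi/4) * sqrt(N/M)). It is written in Python rather than the paper's Q#, and the instance size is an assumed example.

```python
import math

def grover_iterations(num_qubits, num_marked=1):
    """Optimal Grover iteration count, floor((pi/4) * sqrt(N/M)), for a
    search space of N = 2**num_qubits states with num_marked marked
    states. A standard textbook result, not code from the paper."""
    n_states = 2 ** num_qubits
    return math.floor((math.pi / 4.0) * math.sqrt(n_states / num_marked))

# e.g. an assumed 5-node, K=4 bounded-degree instance encodes candidate
# edge sets on N * log2(K) = 10 qubits:
print(grover_iterations(10))   # -> 25 iterations for one marked state
```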
Procedia PDF Downloads 190246 Anthelmintic Property of Pomegranate Peel Aqueous Extraction Against Ascaris Suum: An In-vitro Analysis
Authors: Edison Ramos, John Peter V. Dacanay, Milwida Josefa Villanueva
Abstract:
Soil-transmitted helminth (STH) infections are the most prevalent neglected tropical diseases (NTDs). They are commonly found in warm, humid regions and developing countries, particularly in rural areas with poor hygiene. Occasionally, human hosts exposed to pig manure may harbor Ascaris suum parasites without experiencing any symptoms. To address the significant issue of helminth infections, an effective anthelmintic is necessary. However, the effectiveness of various medications as anthelmintics can be reduced by mutations. In recent years, there has been growing interest in using plants as a source of medicine due to their natural origin, accessibility, affordability, and potential lack of complications. Herbal medicine has been advocated as an alternative treatment for helminth infections, especially in underdeveloped countries, considering the numerous adverse effects and drug resistance associated with commercially available anthelmintics. Medicinal plants are considered suitable replacements for current anthelmintics due to their historical usage in treating helminth infections. The objective of this research was to investigate the in vitro anthelmintic effects of an aqueous extract of pomegranate peel (Punica granatum L.) on female Ascaris suum. The in vitro assay involved observing the motility of Ascaris suum in different concentrations (25%, 50%, 75%, and 100%) of the pomegranate peel aqueous extract, along with mebendazole as a positive control. The results indicated that as the concentration of the extract increased, the time required to paralyze the worms decreased. At 25% concentration, the average time to paralysis was 362.0 minutes, which decreased to 181.0 minutes at 50% concentration, 122.7 minutes at 75% concentration, and 90.0 minutes at 100% concentration. The time to death of the worms likewise decreased as the concentration of the extract increased: death was observed at an average time of 240.7 minutes at 75% concentration and 147.7 minutes at 100% concentration. These findings indicate a concentration-dependent relationship, where higher concentrations of the extract exhibit greater effectiveness in inducing paralysis and causing the death of the worms, and they emphasize the potential anthelmintic properties of pomegranate peel extract and its ability to effectively combat Ascaris suum infestations. There was no significant difference in anthelmintic effectiveness between the pomegranate peel extract and mebendazole. These findings highlight the potential of pomegranate peel extract as an alternative anthelmintic treatment for Ascaris suum infections. The researchers recommend determining the optimal dose and administration route to maximize the effectiveness of pomegranate peel as an anthelmintic therapeutic against Ascaris suum. Keywords: pomegranate peel, aqueous extract, anthelmintic, in vitro
Procedia PDF Downloads 114245 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media
Authors: Swati Tomar, Sunil Kumar Gupta
Abstract:
Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates the direct oxidation of ammoniacal nitrogen under anaerobic conditions, with nitrite as the electron acceptor and without the addition of external carbon sources. The present study investigated the feasibility of an anammox hybrid reactor (AHR), combining the dual advantages of suspended and attached growth media, for the biodegradation of ammoniacal nitrogen in wastewater. The experimental unit consisted of four AHRs of 5 L capacity, inoculated with a mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH4-N and NO2-N in the ratio 1:1 at a hydraulic retention time (HRT) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/L. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25 to 3.0 d, with the nitrogen loading rate (NLR) increasing from 0.4 to 4.8 kg N/m³·d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. The filter media in the AHR contributed an additional 27.2% ammonium removal, along with a 72% reduction in the sludge washout rate. This may be attributed to the functional mechanism of the filter media, which acts as a mechanical sieve and reduces the sludge washout rate many-fold. This enhances the biomass retention capacity of the reactor by 25%, which is the key parameter for the successful operation of high-rate bioreactors. The effluent nitrate concentration, one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d⁻¹. Model validation revealed that the Grau second-order model was more precise and predicted the effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict N2 gas production in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R²=0.986) and predicted N2 gas with the least error of precision (0.12±8.49%). An SEM study of the biomass indicated the presence of a heterogeneous population of cocci and rod-shaped bacteria with average diameters varying from 1.2 to 1.5 µm. Owing to its enhanced nitrogen removal efficiency (NRE), coupled with meagre production of effluent nitrate and its ability to retain high biomass, the AHR proved to be the most competitive reactor configuration for dealing with nitrogen-laden wastewater. Keywords: anammox, filter media, kinetics, nitrogen removal
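A minimal sketch of how the linearized Grau second-order model, S0*HRT/(S0-Se) = a + b*HRT, can be fitted to data of this kind is given below. The HRT and effluent values are placeholders, not the study's measurements.

```python
import numpy as np

# Placeholder observations (not the study's data): influent total
# nitrogen S0 (mg/L), HRT (d), and effluent total nitrogen Se (mg/L).
S0 = 1200.0
hrt = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0])
se = np.array([420.0, 210.0, 60.0, 45.0, 38.0, 30.0])

# Linearized Grau second-order model: S0*HRT/(S0 - Se) = a + b*HRT,
# fitted by ordinary least squares on the transformed variable.
y = S0 * hrt / (S0 - se)
b, a = np.polyfit(hrt, y, 1)          # slope b, intercept a
pred_se = S0 - S0 * hrt / (a + b * hrt)
mae = np.mean(np.abs(pred_se - se))
print(f"a={a:.3f} d, b={b:.3f}; mean abs. error={mae:.1f} mg/L")
```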
Procedia PDF Downloads 382244 Molecular Docking Analysis of Flavonoids Reveal Potential of Eriodictyol for Breast Cancer Treatment
Authors: Nicole C. Valdez, Vincent L. Borromeo, Conrad C. Chong, Ahmad F. Mazahery
Abstract:
Breast cancer is the most prevalent cancer worldwide; the majority of cases are estrogen-receptor positive and involve two receptor proteins. The binding of estrogen to estrogen receptor alpha (ERα) promotes breast cancer growth, while its binding to estrogen receptor beta (ERβ) inhibits tumor growth. While natural products have been a promising source of chemotherapeutic agents, the challenge remains to find a bioactive compound that specifically targets cancer cells, minimizing side effects on normal cells. Flavonoids are natural products that act as phytoestrogens and induce the same response as estrogen. They are able to compete with estrogen for binding to ERα; however, they have a higher binding affinity for ERβ. Their abundance in nature and low toxicity make them potential candidates for breast cancer treatment. This study aimed to determine, through molecular docking, which particular flavonoids can specifically recognize ERβ and potentially be used for breast cancer treatment. A total of 206 flavonoids, comprising 97 isoflavones and 109 flavanones, were collected from ZINC15, while the 3D structures of ERβ and ERα were obtained from the Protein Data Bank. These flavonoid subclasses were chosen as they bind more strongly to ERs due to their chemical structure. The structures of the flavonoid ligands were converted using Open Babel, while the estrogen receptor protein structures were prepared using the AutoDock MGL Tools. The optimal binding site was found using BIOVIA Discovery Studio Visualizer before docking all flavonoids to both ERβ and ERα with AutoDock Vina. Genistein is a flavonoid that exhibits anticancer effects by binding to ERβ, so its binding affinity was used as a baseline. Eriodictyol and 4”,6”-Di-O-Galloylprunin both exceeded genistein’s binding affinity for ERβ while falling below its binding affinity for ERα. Of the two, eriodictyol was pursued due to its antitumor properties in a lung cancer cell line and in glioma cells. It is able to arrest the cell cycle at the G2/M phase by inhibiting the mTOR/PI3K/Akt cascade and is able to induce apoptosis via the PI3K/Akt/NF-κB pathway. Protein pathway and gene analyses were also conducted using ChEMBL and PANTHER, showing that eriodictyol might induce anticancer effects through the ROS1, CA7, KMO, and KDM1A genes, which are involved in cell proliferation in breast cancer, non-small cell lung cancer, and other diseases. The high binding affinity of eriodictyol to ERβ, as well as its potentially affected genes and antitumor effects, therefore makes it a candidate for the development of new breast cancer treatments. Verification through in vitro experiments, such as checking the upregulation and downregulation of genes through qPCR and checking cell cycle arrest using a flow cytometry assay, is recommended. Keywords: breast cancer, estrogen receptor, flavonoid, molecular docking
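A minimal sketch of the screening logic described here, driving AutoDock Vina from Python, might look as follows. The receptor and ligand file names and the docking box coordinates are hypothetical placeholders, the flags follow Vina's standard command line, and the score parsing is a simplification rather than the study's actual pipeline.

```python
import re
import subprocess

def best_affinity(receptor, ligand, center, size=(20, 20, 20)):
    """Run AutoDock Vina on one receptor/ligand pair and return the best
    (most negative) binding affinity in kcal/mol. File names here are
    hypothetical; flags follow Vina's standard CLI."""
    cmd = ["vina", "--receptor", receptor, "--ligand", ligand,
           "--center_x", str(center[0]), "--center_y", str(center[1]),
           "--center_z", str(center[2]),
           "--size_x", str(size[0]), "--size_y", str(size[1]),
           "--size_z", str(size[2])]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    # Result table rows look like "   1   -8.2   0.000   0.000";
    # pull the affinity column and keep the best pose.
    scores = [float(m) for m in re.findall(r"^\s*\d+\s+(-\d+\.\d+)", out, re.M)]
    return min(scores)

# Screening rule from the abstract: keep flavonoids that bind ERbeta more
# strongly than the genistein baseline (box center assumed for illustration).
genistein_beta = best_affinity("erbeta.pdbqt", "genistein.pdbqt", (30, 10, 25))
erio_beta = best_affinity("erbeta.pdbqt", "eriodictyol.pdbqt", (30, 10, 25))
print("candidate" if erio_beta < genistein_beta else "rejected")
```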
Procedia PDF Downloads 89243 Lead Removal From Ex- Mining Pond Water by Electrocoagulation: Kinetics, Isotherm, and Dynamic Studies
Authors: Kalu Uka Orji, Nasiman Sapari, Khamaruzaman W. Yusof
Abstract:
The exposure of galena (PbS), teallite (PbSnS2), and other associated minerals during mining activities releases lead (Pb) and other heavy metals into the mining water through oxidation and dissolution. Heavy metal pollution has become an environmental challenge. Lead, for instance, can have toxic effects on human health, including brain damage. Ex-mining pond water has been reported to contain lead concentrations as high as 69.46 mg/L. Conventional treatment does not easily remove lead from water. A promising and emerging treatment technology for lead removal is the electrocoagulation (EC) process. However, some of the problems associated with EC are systematic reactor design, the selection of optimal EC operating parameters, and scale-up, among others. This study investigated an EC process for the removal of lead from synthetic ex-mining pond water using a batch reactor and Fe electrodes. The effects of various operating parameters on lead removal efficiency were examined. The results obtained indicated that a maximum removal efficiency of 98.6% was achieved at an initial pH of 9, a current density of 15 mA/cm2, an electrode spacing of 0.3 cm, a treatment time of 60 minutes, Liquid Motion of Magnetic Stirring (LM-MS), and electrode arrangement = BP-S. The above experimental data were further modeled and optimized using a 2-level, 4-factor full factorial design, a Response Surface Methodology (RSM). The four factors optimized were the current density, electrode spacing, electrode arrangement, and Liquid Motion Driving Mode (LM). Based on the regression model and the analysis of variance (ANOVA) at 0.01%, the results showed that increases in current density and LM-MS increased the removal efficiency, while the reverse was the case for electrode spacing. The model predicted an optimal lead removal efficiency of 99.962% with an electrode spacing of 0.38 cm, among other settings. Applying the predicted parameters, a lead removal efficiency of 100% was achieved. The electrode and energy consumptions were 0.192 kg/m³ and 2.56 kWh/m³, respectively. Meanwhile, the adsorption kinetic studies indicated that the overall lead adsorption system follows the pseudo-second-order kinetic model. The adsorption dynamics were also random, spontaneous, and endothermic; higher process temperatures enhance the adsorption capacity. Furthermore, the adsorption isotherm fitted the Freundlich model better than the Langmuir model, describing adsorption on a heterogeneous surface and indicating good adsorption efficiency of the Fe electrodes. The adsorption of Pb2+ onto the Fe electrodes was a complex reaction involving more than one mechanism. The overall results proved that EC is an efficient technique for lead removal from synthetic mining pond water. The findings of this study would have application in the scale-up of EC reactors and in the design of water treatment plants for feed-water sources that contain lead, using the electrocoagulation method. Keywords: ex-mining water, electrocoagulation, lead, adsorption kinetics
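For illustration, the sketch below fits the linearized pseudo-second-order model named in the abstract, t/qt = 1/(k2*qe^2) + t/qe, to placeholder uptake data; the numbers are assumptions, not the study's measurements.

```python
import numpy as np

# Placeholder uptake data (not the study's measurements):
# contact time t (min) and adsorbed lead q_t (mg/g).
t = np.array([5.0, 10.0, 20.0, 30.0, 45.0, 60.0])
qt = np.array([8.1, 13.4, 19.8, 23.5, 26.9, 28.7])

# Linearized pseudo-second-order model: t/qt = 1/(k2*qe^2) + (1/qe)*t,
# so plotting t/qt against t gives slope 1/qe and intercept 1/(k2*qe^2).
slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope                  # equilibrium adsorption capacity (mg/g)
k2 = slope**2 / intercept         # rate constant (g/(mg*min))
print(f"qe={qe:.1f} mg/g, k2={k2:.5f} g/(mg*min)")
```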
Procedia PDF Downloads 149242 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines a method for developing a surrogate life cycle model based on fuzzy logic, using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) a hybrid of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions, and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were subsequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani-type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of ANFIS models depends strongly on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE. In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that can be used for prediction. In the PV case study, conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in industry and policy-making sectors. While the methodology does not guarantee a more accurate result than those generated by the conventional life cycle methodology, it does provide a relatively simple way of generating knowledge- and data-based estimates that can be used during the initial design of a system. Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
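To make the inference step concrete, the following is a minimal from-scratch sketch of a zero-order Sugeno-type fuzzy inference with triangular membership functions, using two of the paper's three inputs. All membership shapes, rules, and consequent constants are assumptions for illustration, not the calibrated surrogate model.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def sugeno_lcoe(irradiation, efficiency):
    """Zero-order Sugeno-type inference for LCOE ($/kWh). Membership
    shapes, rules, and consequents are illustrative assumptions only."""
    low_irr = trimf(irradiation, 800, 1000, 1600)     # kWh/m^2/yr, assumed
    high_irr = trimf(irradiation, 1000, 2000, 2400)
    low_eff = trimf(efficiency, 0.10, 0.13, 0.18)      # module efficiency
    high_eff = trimf(efficiency, 0.13, 0.22, 0.25)
    # Rule firing strengths (product t-norm) with constant consequents:
    rules = [(low_irr * low_eff, 0.22), (low_irr * high_eff, 0.15),
             (high_irr * low_eff, 0.12), (high_irr * high_eff, 0.07)]
    w = np.array([r[0] for r in rules])
    z = np.array([r[1] for r in rules])
    return float((w * z).sum() / w.sum())  # weighted-average defuzzification

print(f"LCOE ~ {sugeno_lcoe(1500, 0.17):.3f} $/kWh")
```

In an ANFIS, the same Sugeno structure would be retained, but the membership parameters and consequents would be tuned against calibration data rather than fixed by hand.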
Procedia PDF Downloads 164241 Benefits of Shaping a Balance on Environmental and Economic Sustainability for Population Health
Authors: Edna Negron-Martinez
Abstract:
Our time's global challenges and trends, like those associated with climate change, demographic displacements, growing health inequalities, and the increasing burden of disease, have complex connections to the determinants of health. Information on the causes and prevention of the burden of disease is fundamental for public health actions, such as preparedness for and responses to disasters, and for recovery resources after the event. For instance, there is increasing consensus about key findings on the effects and connections of the global burden of disease: it generates substantial healthcare costs, consumes essential resources, and prevents the attainment of optimal health and well-being. The goal of this research endeavor is to promote a comprehensive understanding of the connections between social, environmental, and economic influences on health. These connections are illustrated by drawing clearly from the core curricula of multidisciplinary areas, such as urban design, energy, housing, and economics, as well as from the health system itself. A systematic review of primary and secondary data covered a variety of issues, such as global health, natural disasters, and the impacts of critical pollution on people's health and on ecosystems. Environmental health is challenged by the unsustainable consumption patterns and the resulting contaminants that abound in many cities and urban settings around the world. Poverty, inadequate housing, and poor health are usually linked. The house is a primary environmental health context for any individual, and especially for more vulnerable groups such as children, older adults, and those who are sick. Nevertheless, very few countries show strong decoupling of environmental degradation from economic growth, as indicated by a 2017 World Bank report. Notably, a 2016 World Health Organization (WHO) report on the environmental fraction of the global burden of disease estimated that 12.6 million deaths globally, accounting for 23% (95% CI: 13-34%) of all deaths, were attributable to the environment. Environmental contaminants and stressors include heavy metals, noise pollution, light pollution, and urban sprawl. These key findings underscore the urgency of adopting, on a global scale, the United Nations post-2015 Sustainable Development Goals (SDGs). The SDGs address the social, environmental, and economic factors that influence health and health inequalities, while advising how these sectors, in turn, benefit from a healthy population. Consequently, more action is necessary, from an inter-sectoral and systemic paradigm, to enforce integrated sustainability policy implementation aimed at the environmental, social, and economic determinants of health. Keywords: building capacity for workforce development, ecological and environmental health effects of pollution, public health education, sustainability
Procedia PDF Downloads 107