Search results for: friction surfaces of airport emergency plan
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4520


830 Cost Analysis of Neglected Tropical Disease in Nigeria: Implication for Programme Control and Elimination

Authors: Lawong Damian Bernsah

Abstract:

Neglected Tropical Diseases (NTDs) are most predominant among poor and rural populations and are endemic in 149 countries. These diseases infect 1.4 billion people worldwide. The 17 neglected tropical diseases recognized by WHO constitute the fourth largest health and economic burden of all communicable diseases. Five of these 17 diseases are considered in the cost analysis of this paper: lymphatic filariasis, onchocerciasis, trachoma, schistosomiasis, and soil-transmitted helminth infections. WHO has proposed a roadmap for eradication and elimination by 2020, and treatments have been donated by pharmaceutical manufacturers through the London Declaration. This paper estimates the cost of the NTD control and elimination programme in Nigeria, both for each NTD and in total. This is necessary because it forms the basis upon which the programme budget and expenditure could be planned. Moreover, given the opportunity cost that resources for NTDs face, estimating the cost provides a basis for comparison. The cost of the NTD control and elimination programme is estimated using the population at risk for each NTD and for the total. The population at risk is obtained from the national master plan for 2015-2020, while the cost per person is taken from similar studies conducted in comparable settings and ranges from US$0.1 to US$0.5 for Mass Administration of Medicine (MAM) and between US$1 and US$1.5 for each NTD. The combined cost for all the NTDs was estimated at US$634.88 million for the period 2015-2020, and at US$1.9 billion when costed per NTD for the same period. For sensitivity analysis and robustness, the cost per person was varied, and all estimates remained high. Given that health expenditure in Nigeria (% of GDP) averaged 3.5% over the period 1995-2014, it is clear that efforts have to be made to improve allocation to the health sector in general, which, it is hoped, would trickle down to NTD control and elimination. Thus, the government and donor partners need to step up budgetary allocations and be aware of the costs of the NTD control and elimination programme, since these resources have alternative uses.
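
As a minimal sketch of the costing arithmetic described above, the total programme cost follows from multiplying the population at risk by a unit cost per person per year and summing over the programme period. The population figures and the chosen unit cost below are hypothetical placeholders, not values taken from the national master plan.

# Hypothetical illustration of the cost-per-person approach described in the abstract.
populations_at_risk = {                      # persons at risk per NTD (placeholder values)
    "lymphatic_filariasis": 120e6,
    "onchocerciasis": 50e6,
    "trachoma": 30e6,
    "schistosomiasis": 70e6,
    "soil_transmitted_helminths": 100e6,
}
cost_per_person_mam = 0.30                   # US$ per person per year, within the US$0.1-0.5 MAM range
years = 6                                    # programme period 2015-2020

total_cost = sum(pop * cost_per_person_mam * years
                 for pop in populations_at_risk.values())
print(f"Illustrative MAM cost, 2015-2020: US${total_cost / 1e6:,.1f} million")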

Keywords: Neglected Tropical Disease, Cost Analysis, Neglected Tropical Disease Programme Control and Elimination, Cost per Person

Procedia PDF Downloads 251
829 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against enemy attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which can damage internal components or injure the crew. The penetration equations are derived from penetration experiments, which require a long time and great effort. However, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating penetration depth. However, it is very important to model the targets and select the input parameters properly in order to obtain an accurate penetration depth. This paper performed a sensitivity analysis of ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the ANSYS input parameters was performed and the RMS error against the experimental data was calculated. The input parameters, including mesh size, boundary condition, material properties, and target diameter, were tested and selected to minimize the error between the simulated results and the experimental data from papers on the penetration equation. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted to obtain optimized overall performance. The analysis found the following: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the target diameter decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives a greater penetration depth than the one with both the side and rear surfaces fixed. Using the above findings, the input parameters can be tuned to minimize the error between simulation and experiments. With ANSYS and carefully tuned input parameters, penetration analysis can be performed on a computer without actual experiments. Penetration experiment data are usually hard to obtain for security reasons, and published papers provide them only for a limited range of target materials. The next step of this research is to generalize the approach to predict penetration depth by interpolating between known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the early modelling and simulation stage of the AGCV design process.
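
As a small sketch of the accuracy metric used above, each candidate input-parameter setting (for example, a mesh size) can be scored by the RMS error between the simulated penetration depths and the experimental values. The depth values below are hypothetical placeholders, not the study's data.

import math

experimental_depth = [12.1, 18.4, 25.0, 31.7]        # mm, from published experiments (hypothetical)

simulated_depth = {                                   # mm, one list per candidate mesh size (hypothetical)
    0.9: [10.8, 16.9, 23.1, 29.5],
    0.7: [11.6, 17.8, 24.3, 30.9],
    0.5: [12.0, 18.2, 24.8, 31.4],
}

def rms_error(sim, exp):
    # Root-mean-square error between simulated and experimental depths.
    return math.sqrt(sum((s - e) ** 2 for s, e in zip(sim, exp)) / len(exp))

for mesh, depths in simulated_depth.items():
    print(f"mesh {mesh} mm -> RMS error {rms_error(depths, experimental_depth):.2f} mm")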

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 363
828 Three Types of Mud-Huts with Courtyards in Composite Climate: Thermal Performance in Summer and Winter

Authors: Janmejoy Gupta, Arnab Paul, Manjari Chakraborty

Abstract:

Jharkhand is a state located in the eastern part of India. The Tropic of Cancer (the 23.5 degree North latitude line) passes through Ranchi district in Jharkhand. Mud huts with burnt clay tiled roofs are an integral component of the state's vernacular architecture. They come in various shapes, with a number of them having a courtyard type of plan. In general, it has been stated that designing dwellings with courtyards is a climate-responsive strategy in a composite climate. The truth behind this hypothesis is investigated in this paper. Three types of mud huts with courtyards situated in Ranchi district in Jharkhand are taken as case studies, and through temperature measurements in the south-side rooms and courtyards, in addition to Autodesk Ecotect (Version 2011) software simulations, their thermal performance throughout the year is observed. Temperature measurements are taken specifically during the peaks of summer and winter, and the average temperatures in the rooms and courtyards over seven-day periods at the peak of summer and the peak of winter are plotted graphically. Thereafter, on the basis of the measurements and software simulations, the hypothesis is verified and the thermally better-performing dwelling types in summer and winter are identified among the three sub-types studied. Certain recommendations for increasing thermal comfort in courtyard-type mud huts in general are also made. It is found that not all courtyard-type dwellings show better thermal performance in summer and winter in a composite climate. The U-shaped dwelling with an open courtyard on the southern side offers the greatest thermal comfort inside the rooms in the hotter part of the year, while the square hut with a central courtyard closed on all sides shows superior thermal performance in winter. The courtyards in all three case studies are found to heat up excessively during summer.
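
Since the seven-day averaging step above lends itself to a simple script, the following sketch shows one way it could be done with pandas; the CSV file and column names are hypothetical placeholders, not the authors' actual data.

import pandas as pd

# Expected columns: timestamp, hut_type, location ("room" or "courtyard"), temperature_c
readings = pd.read_csv("peak_summer_readings.csv", parse_dates=["timestamp"])

# Daily mean temperature per hut type and measurement location.
daily = (readings
         .set_index("timestamp")
         .groupby(["hut_type", "location"])["temperature_c"]
         .resample("1D").mean())

# Average over the seven-day peak period for comparison between hut types.
seven_day_mean = daily.groupby(level=["hut_type", "location"]).mean()
print(seven_day_mean)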

Keywords: courtyard, mud huts, simulations, temperature measurements, thermal performance

Procedia PDF Downloads 379
827 Utility of Thromboelastography Derived Maximum Amplitude and R-Time (MA-R) Ratio as a Predictor of Mortality in Trauma Patients

Authors: Arulselvi Subramanian, Albert Venencia, Sanjeev Bhoi

Abstract:

Coagulopathy of trauma is an early endogenous coagulation abnormality that occurs shortly after injury and results in high mortality. In emergency trauma situations, viscoelastic tests may be better at identifying the various phenotypes of coagulopathy and at demonstrating the contribution of platelet function to coagulation. We aimed to assess thrombin generation and clot strength by estimating a ratio of maximum amplitude to R-time (the MA-R ratio) for identifying trauma coagulopathy and predicting subsequent mortality. Methods: We conducted a prospective cohort analysis of acutely injured adult trauma patients (18-50 years), admitted within 24 hours of injury, over one year at a Level I trauma center, with follow-up on the 3rd and 5th days after injury. Patients with a history of coagulation abnormalities, liver disease, renal impairment, or drug intake were excluded. Thromboelastography was performed, and a ratio was calculated by dividing the MA by the R-time (MA-R). Patients were further stratified into subgroups based on the calculated MA-R quartiles. The first sample was taken within 24 hours of injury, with follow-up on the 3rd and 5th days after injury. Mortality was the primary outcome. Results: 100 acutely injured patients [average age 36.6 ± 14.3 years; 94% male; injury severity score 12.2 (9-32)] were included in the study. The median (min-max) MA-R ratio on admission was 15.01 (0.4-88.4), which declined to 11.7 (2.2-61.8) on day three and rose slightly to 13.1 (0.06-68) on day 5. There were no significant differences between subgroups with regard to age or gender. In the subgroup with the lowest MA-R ratios, MA-R1 (<8.90; n = 27), the injury severity score was significantly elevated. MA-R2 (8.91-15.0; n = 23), MA-R3 (15.01-19.30; n = 24) and MA-R4 (>19.3; n = 26) showed no differences in their admission laboratory investigations, although a slight decline was observed in hemoglobin, red blood cell count, and platelet counts compared to the other subgroups. A significantly prolonged R-time and shortened alpha angle and MA were also seen in MA-R1. An elevated incidence of mortality also correlated significantly with low MA-R ratios on admission (p = 0.003). Temporal changes in the MA-R ratio did not correlate with mortality. Conclusion: The MA-R ratio provides a snapshot of early clot function, focusing specifically on thrombin burst and clot strength. In our observation, patients with the lowest MA-R ratios (MA-R1) had significantly increased mortality compared with all other groups (45.5% in MA-R1 compared with <25% in MA-R2 to MA-R3, and 9.1% in MA-R4; p < 0.003). The MA-R ratio may prove highly useful for identifying at-risk patients early, when other physiologic indicators are absent.
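
As a small sketch of the ratio and subgrouping described above, the MA-R ratio is the MA divided by the R-time, and patients are split into quartile subgroups MA-R1 to MA-R4. The TEG values below are hypothetical placeholders, not study data.

import pandas as pd

teg = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6, 7, 8],
    "MA_mm": [62.0, 55.5, 48.0, 66.2, 58.9, 40.1, 70.4, 52.3],   # maximum amplitude
    "R_min": [4.1, 6.0, 9.5, 3.2, 5.1, 12.0, 2.8, 7.4],          # R-time in minutes
})

teg["MA_R_ratio"] = teg["MA_mm"] / teg["R_min"]

# Quartile-based subgroups: MA-R1 holds the lowest ratios, MA-R4 the highest.
teg["subgroup"] = pd.qcut(teg["MA_R_ratio"], q=4,
                          labels=["MA-R1", "MA-R2", "MA-R3", "MA-R4"])
print(teg[["patient_id", "MA_R_ratio", "subgroup"]])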

Keywords: coagulopathy, trauma, thromboelastography, mortality

Procedia PDF Downloads 137
826 Synthesis and Analytical Characterisation of Polymer-Silica Nanoparticles Composite for the Protection and Preservation of Stone Monuments

Authors: Sayed M. Ahmed, Sawsan S. Darwish, Nagib A. Elmarzugi, Mohammad A. Al-Dosari, Mahmoud A. Adam, Nadia A. Al-Mouallimi

Abstract:

Historical stone surfaces and architectural heritage may undergo unwanted changes due to exposure to many physical and chemical deterioration factors; the innovative properties of nanomaterials can find advantageous application in the restoration and conservation of cultural heritage, in relation to the tailoring of new products for the protection and consolidation of stone. The current work evaluates the effectiveness of compatible inorganic treatments based on nanosized silica (SiO2) particles dispersed in a silicon-based product commonly used as a water repellent/consolidant for construction materials affected by different kinds of decay. The nanocomposites were obtained by dispersing the silica nanoparticles in the polymeric matrix SILRES® BS OH 100 (a solventless mixture of ethyl silicates) in order to obtain a new nanocomposite with hydrophobic and consolidating properties and to improve the physical and mechanical properties of the stone material. The nanocomposites obtained and pure SILRES® BS OH 100 were applied by brush to experimental stone blocks. The efficacy of the treatments was evaluated after consolidation and artificial thermal aging through capillary water absorption measurements and ultraviolet-light exposure to assess the photo-induced and hydrophobic effects of the treated surface. Scanning electron microscopy (SEM) examination was performed to evaluate the penetration depth, the re-aggregating effect of the deposited phase, and the surface morphology before and after artificial aging. Stereo microscopy investigation was performed to evaluate resistance to the effects of erosion, acids, and salts. Improvement of the stone's mechanical properties was evaluated by compressive strength tests, and colorimetric measurements were used to evaluate the optical appearance. Taken together, the results show that the silica/polymer nanocomposite is an efficient material for the consolidation of artistic and architectural sandstone monuments: it is fully compatible and enhanced the durability of the sandstone toward thermal and UV aging. In addition, the obtained nanocomposite improved the stone's mechanical properties and its resistance to the effects of erosion, acids, and salts compared to samples treated with pure SILRES® BS OH 100 without silica nanoparticles.

Keywords: colorimetric measurements, compressive strength, nanocomposites, porous stone consolidation, silica nanoparticles, sandstone

Procedia PDF Downloads 230
825 Floodnet: Classification for Post-Flood Scene with a High-Resolution Aerial Imagery Dataset

Authors: Molakala Mourya Vardhan Reddy, Kandimala Revanth, Koduru Sumanth, Beena B. M.

Abstract:

Emergency response and recovery operations are severely hampered by natural catastrophes, especially floods. Understanding post-flood scenarios is essential to disaster management because it facilitates quick evaluation and decision-making. To this end, we introduce FloodNet, a brand-new high-resolution aerial picture collection created especially for comprehending post-flood scenes. A varied collection of excellent aerial photos taken during and after flood occurrences makes up FloodNet, which offers comprehensive representations of flooded landscapes, damaged infrastructure, and changed topographies. The dataset provides a thorough resource for training and assessing computer vision models designed to handle the complexity of post-flood scenarios, including a variety of environmental conditions and geographic regions. Pixel-level semantic segmentation masks are used to label the pictures in FloodNet, allowing for a more detailed examination of flood-related characteristics, including debris, water bodies, and damaged structures. Furthermore, temporal and positional metadata improve the dataset's usefulness for longitudinal research and spatiotemporal analysis. For activities like flood extent mapping, damage assessment, and infrastructure recovery projection, we provide baseline standards and evaluation metrics to promote research and development in the field of post-flood scene comprehension. By integrating FloodNet into machine learning pipelines, it will be easier to create reliable algorithms that will help politicians, urban planners, and first responders make choices both before and after floods. The goal of the FloodNet dataset is to support advances in computer vision, remote sensing, and disaster response technologies by providing a useful resource for researchers. FloodNet helps to create creative solutions for boosting communities' resilience in the face of natural catastrophes by tackling the particular problems presented by post-flood situations.
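
As a minimal sketch of a standard evaluation metric for the pixel-level masks described above, per-class intersection-over-union (IoU) can be computed between a predicted and a ground-truth segmentation mask. The class indices and mask contents below are hypothetical placeholders, not the dataset's actual label scheme.

import numpy as np

def per_class_iou(pred, target, num_classes):
    # pred and target are integer class maps of equal shape.
    ious = {}
    for c in range(num_classes):
        pred_c, target_c = (pred == c), (target == c)
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        ious[c] = intersection / union if union > 0 else float("nan")
    return ious

# Toy 4x4 masks with three classes: 0 = background, 1 = water, 2 = damaged structure.
target = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [2, 2, 1, 0], [2, 2, 0, 0]])
pred   = np.array([[0, 1, 1, 1], [0, 1, 1, 0], [2, 2, 1, 0], [2, 0, 0, 0]])
print(per_class_iou(pred, target, num_classes=3))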

Keywords: image classification, segmentation, computer vision, natural disaster, unmanned aerial vehicle (UAV), machine learning

Procedia PDF Downloads 34
824 The Impact of Supply Chain Strategy and Integration on Supply Chain Performance: Supply Chain Vulnerability as a Moderator

Authors: Yi-Chun Kuo, Jo-Chieh Lin

Abstract:

The objective of a supply chain strategy is to reduce waste and increase efficiency to attain cost benefits, and to guarantee supply chain flexibility when facing an ever-changing market environment in order to meet customer requirements. Strategy implementation aims to fulfill common goals and attain benefits by integrating upstream and downstream enterprises, sharing information, conducting common planning, and taking part in decision making, so as to enhance the overall performance of the supply chain. With the rise of outsourcing and globalization, the increasing dependence on suppliers and customers, and the rapid development of information technology, the complexity and uncertainty of the supply chain have intensified and supply chain vulnerability has surged, resulting in adverse effects on supply chain performance. Thus, this study uses supply chain vulnerability as a moderating variable and applies structural equation modeling (SEM) to determine the relationships among supply chain strategy, supply chain integration, and supply chain performance, as well as the moderating effect of supply chain vulnerability on supply chain performance. Data were collected via questionnaires from the management level of enterprises in Taiwan and China; 149 questionnaires were received. The result of confirmatory factor analysis shows that the path coefficients of supply chain strategy on supply chain integration and supply chain performance are positive (0.497, t = 4.914; 0.748, t = 5.919), indicating a significantly positive effect. Supply chain integration is also significantly positively related to supply chain performance (0.192, t = 2.273). The moderating effects of supply chain vulnerability on the relationships of supply chain strategy and supply chain integration with supply chain performance are significant (7.407; 4.687). In Taiwan, 97.73% of enterprises are small- and medium-sized enterprises (SMEs) focusing on receiving original equipment manufacturer (OEM) and original design manufacturer (ODM) orders. In order to meet the needs of customers and respond to market changes, these enterprises especially focus on supply chain flexibility and their integration with upstream and downstream enterprises. According to the observations of this research, the effect of supply chain vulnerability on supply chain performance is significant, so enterprises need to attach great importance to the management of supply chain risk and conduct risk analysis of their suppliers in order to formulate response strategies for emergency situations. At the same time, risk management should be incorporated into the supply chain so as to reduce the effect of supply chain vulnerability on overall supply chain performance.
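
As a simplified sketch in the spirit of the analysis above, a moderating effect can be represented by an interaction term in a linear model. This is an illustrative stand-in using synthetic data, not the authors' full structural equation model or their measurement instrument.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 149                                               # matches the sample size reported above
integration = rng.normal(size=n)
vulnerability = rng.normal(size=n)
performance = (0.5 * integration
               - 0.3 * integration * vulnerability    # assumed moderation effect for illustration
               + rng.normal(scale=0.5, size=n))

df = pd.DataFrame({"integration": integration,
                   "vulnerability": vulnerability,
                   "performance": performance})

# The coefficient on integration:vulnerability captures the moderation.
model = smf.ols("performance ~ integration * vulnerability", data=df).fit()
print(model.summary().tables[1])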

Keywords: supply chain integration, supply chain performance, supply chain vulnerability, structural equation modeling

Procedia PDF Downloads 291
823 Translation and Adaptation of the Assessment Instrument “Kiddycat” for European Portuguese

Authors: Elsa Marta Soares, Ana Rita Valente, Cristiana Rodrigues, Filipa Gonçalves

Abstract:

Background: The assessment of the feelings and attitudes of preschool children in relation to stuttering is crucial. Negative experiences can lead to anxiety, worry, or frustration. To avoid the worsening of attitudes and feelings related to stuttering, early detection is important in order to intervene as soon as possible through an individualized intervention plan. It is therefore important to have Portuguese instruments that allow this assessment. Aims: The aim of the present study is to carry out the translation and adaptation of the Communication Attitude Test for Children in Preschool Age and Kindergarten (KiddyCat) for European Portuguese (EP). Methodology: For the translation and adaptation process, a methodological study was carried out with the following steps: translation, back-translation, assessment by a committee of experts, and pre-test. This abstract describes the results of the first two phases of this process. The translation was accomplished by two bilingual individuals with no experience in health care and no knowledge of the instrument; one was an English teacher and the other a translator. The back-translation was conducted by two senior class teachers living in the United Kingdom, also with no background in health care and no knowledge of the instrument. Results and Discussion: In the translation there were differences in the semantic equivalence of various expressions and concepts. A discussion between the two translators, mediated by the researchers, allowed a consensus version of the translated instrument to be reached. Compared against the original version of KiddyCAT, the results demonstrated that the back-translation versions were similar to the original version of this assessment instrument. Although the back-translators used different words, these were synonymous, maintaining the semantic and idiomatic equivalence of the instrument's items. Conclusion: This project contributes an important resource that can be used in the assessment of the feelings and attitudes of preschool children who stutter. This was the first phase of the research; the expert panel and pre-test are being developed. It is therefore expected that this instrument will contribute to a holistic therapeutic intervention, taking into account the individual characteristics of each child.

Keywords: assessment, feelings and attitudes, preschool children, stuttering

Procedia PDF Downloads 127
822 Exploring Bidirectional Encoder Representations from the Transformers’ Capabilities to Detect English Preposition Errors

Authors: Dylan Elliott, Katya Pertsova

Abstract:

Preposition errors are some of the most common errors made by L2 speakers. In addition, improving error correction and detection methods remains an open issue in the realm of Natural Language Processing (NLP). This research investigates whether the Bidirectional Encoder Representations from Transformers model (BERT) has the potential to correct preposition errors accurately enough to be useful in error correction software. This research finds that BERT performs strongly when the scope of its error correction is limited to preposition choice. The researchers used an open-source BERT model and over three hundred thousand edited sentences from Wikipedia, tagged for part of speech, where only a preposition edit had occurred. To test BERT's ability to detect errors, a technique known as multi-level masking was used to generate suggestions based on sentence context for every prepositional environment in the test data. These suggestions were compared with the original errors in the data and their known corrections to evaluate BERT's performance. The suggestions were further analyzed to determine whether BERT more often agreed with the judgements of the Wikipedia editors. Both the untrained and fine-tuned models were compared. Fine-tuning led to a greater rate of error detection, which significantly improved recall but lowered precision due to an increase in false positives, or falsely flagged errors. However, in most cases, these false positives were not errors in preposition usage but merely cases where more than one preposition was possible. Furthermore, when BERT correctly identified an error, the model largely agreed with the Wikipedia editors, suggesting that BERT's ability to detect misused prepositions is better than previously believed. To evaluate to what extent BERT's false positives were grammatical suggestions, we plan to conduct a further crowd-sourcing study to test the grammaticality of BERT's suggested sentence corrections against native speakers' judgments.
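
As a minimal sketch of masked preposition suggestion with a pretrained BERT model, the Hugging Face fill-mask pipeline can rank candidate fillers for a masked slot by context. This illustrates the general idea of context-based preposition ranking; it is not the authors' exact multi-level masking procedure, and the test sentence is a hypothetical example.

from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

sentence = "She is interested [MASK] machine learning."   # hypothetical test sentence
prepositions = {"in", "on", "at", "of", "for", "to", "with", "about"}

# Request many candidates, then keep only those that are prepositions.
candidates = fill_mask(sentence, top_k=50)
suggestions = [(c["token_str"], round(c["score"], 4))
               for c in candidates if c["token_str"] in prepositions]

print(suggestions[:5])   # highest-scoring preposition choices for the masked slot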

Keywords: BERT, grammatical error correction, preposition error detection, prepositions

Procedia PDF Downloads 120
821 Consumption and Diffusion Based Model of Tissue Organoid Development

Authors: Elena Petersen, Inna Kornienko, Svetlana Guryeva, Sergey Simakov

Abstract:

In vitro organoid cultivation requires the simultaneous provision of adequate vascularization and nutrient perfusion of cells during organoid development. However, many aspects of this problem remain unsolved. The functionality of vascular network ingrowth is limited during the early stages of organoid development, since the vascular network only becomes functional in the final stages of in vitro organoid cultivation. Therefore, a microchannel network should be created in the hydrogel matrix at the early stages of organoid cultivation, aimed at conducting and maintaining the minimally required level of nutrient perfusion for all cells in the expanding organoid. The network configuration should be designed properly in order to exclude hypoxic and necrotic zones in the expanding organoid at all stages of its cultivation. In vitro vascularization is currently the main issue within the field of tissue engineering. As perfusion and oxygen transport have direct effects on cell viability and differentiation, researchers are currently limited to tissues of a few millimeters in thickness. These limitations are imposed by mass transfer and are defined by the balance between the metabolic demand of the cellular components in the system and the size of the scaffold. Current approaches include growth factor delivery, channeled scaffolds, perfusion bioreactors, microfluidics, cell co-cultures, cell functionalization, modular assembly, and in vivo systems. These approaches may improve cell viability or generate capillary-like structures within a tissue construct. Thus, there is a fundamental disconnect between defining the metabolic needs of tissue through quantitative measurements of oxygen and nutrient diffusion and the potential ease of integration into host vasculature for future in vivo implantation. A model is proposed for predicting the growth and perfusion of the organoid based on joint simulations of general nutrient diffusion, nutrient diffusion into the hydrogel matrix through the contact surfaces and microchannel walls, and nutrient consumption by the cells of the expanding organoid, including biomatrix contraction during tissue development, which is associated with a changing consumption rate of the growing organoid cells. The model allows computing an effective microchannel network design that provides the minimally required level of nutrient concentration in all parts of the growing organoid. It can be used for preliminary planning of the microchannel network design and for simulations of the nutrient supply rate depending on the stage of organoid development.
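
A consumption-diffusion balance of the kind described above is commonly written as a reaction-diffusion equation; one plausible form, assuming Michaelis-Menten consumption kinetics (an assumption for illustration, not the authors' stated formulation), is

\[
\frac{\partial c}{\partial t} = \nabla \cdot \big( D(\mathbf{x})\, \nabla c \big) \;-\; \rho(\mathbf{x},t)\, \frac{V_{\max}\, c}{K_m + c},
\]

where c is the nutrient concentration, D(x) the diffusion coefficient (different in the hydrogel matrix and the organoid tissue), rho the local cell density (which changes with biomatrix contraction), and V_max, K_m the consumption parameters; the microchannel walls enter as boundary conditions, e.g. c = c_channel on the channel surfaces.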

Keywords: 3D model, consumption model, diffusion, spheroid, tissue organoid

Procedia PDF Downloads 291
820 Non-Newtonian Fluid Flow Simulation for a Vertical Plate and a Square Cylinder Pair

Authors: Anamika Paul, Sudipto Sarkar

Abstract:

The flow behaviour of non-Newtonian fluids is quite complicated, although both the pseudoplastic (n < 1, n being the power-law index) and dilatant (n > 1) fluids in this category are used immensely in the chemical and process industries. Limited research has been carried out on flow over a bluff body in a non-Newtonian flow environment. In the present numerical simulation, we control the vortices of a square cylinder by placing an upstream vertical splitter plate for pseudoplastic (n = 0.8), Newtonian (n = 1) and dilatant (n = 1.2) fluids. The position of the upstream plate is also varied to calculate the critical distance between the plate and the cylinder below which the vortex shedding of the cylinder is suppressed. Here the Reynolds number is taken as Re = 150 (Re = U∞a/ν, where U∞ is the free-stream velocity of the flow, a is the side of the cylinder and ν is the maximum value of the kinematic viscosity of the fluid), which falls in the laminar periodic vortex shedding regime. The vertical plate has dimensions of 0.5a × 0.05a and is placed on the cylinder centre-line. Gambit 2.2.30 is used to construct the flow domain and to impose the boundary conditions. In detail, we imposed a velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry (free-slip boundary condition) at the upper and lower domain boundaries. A wall boundary condition (u = v = 0) is applied on both the cylinder and the splitter plate surfaces. The unsteady 2-D Navier-Stokes equations in fully conservative form are then discretized with second-order spatial and first-order temporal accuracy. These discretized equations are solved with Ansys Fluent 14.5 implementing the SIMPLE algorithm within the finite volume method. Fine meshing is used around the plate and cylinder; away from the cylinder, the grid is slowly stretched out in all directions. After a grid-independence study, a total of 297 × 208 grid points is used in the streamwise and flow-normal directions, respectively, for G/a = 3 (G being the gap between the plate and cylinder). The computed mean flow quantities obtained for Newtonian flow agree well with the available literature. The results are presented with the help of instantaneous and time-averaged flow fields. Noteworthy qualitative and quantitative differences are obtained in the flow field with changes in the rheology of the fluid. The aerodynamic forces and vortex shedding frequencies also differ with the gap-ratio and the power-law index of the fluid. We can conclude from the present simulation that Fluent is capable of capturing the vortex dynamics of the unsteady laminar flow regime even in a non-Newtonian flow environment.
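
For reference, the power-law (Ostwald-de Waele) model that underlies the pseudoplastic and dilatant cases above relates the apparent viscosity to the shear rate as

\[
\mu_{\mathrm{app}} = K\, \dot{\gamma}^{\,n-1}, \qquad Re = \frac{U_\infty\, a}{\nu},
\]

where K is the consistency index and n the power-law index (n < 1 pseudoplastic, n = 1 Newtonian, n > 1 dilatant); the Reynolds number definition is the one used in the abstract, with ν taken as the maximum kinematic viscosity of the fluid.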

Keywords: CFD, critical gap-ratio, splitter plate, wake-wake interactions, dilatant, pseudoplastic

Procedia PDF Downloads 93
819 Comparison and Effectiveness of Cranial Electrical Stimulation Treatment, Brain Training and Their Combination on Language and Verbal Fluency of Patients with Mild Cognitive Impairment: A Single Subject Design

Authors: Firoozeh Ghazanfari, Kourosh Amraei, Parisa Poorabadi

Abstract:

Mild cognitive impairment is a neurocognitive disorder that goes beyond age-related decline in cognitive functions but is not so severe as to affect daily activities. This study aimed to investigate and compare the effectiveness of treatment with cranial electrical stimulation, brain training, and their combination on the language and verbal fluency of elderly people with mild cognitive impairment. The study uses a single-subject method with a comparative intervention design. Four patients with a definitive diagnosis of mild cognitive impairment by a psychiatrist were selected via purposive and convenience sampling. Addenbrooke's Cognitive Examination Scale (2017) was used to assess language and verbal fluency. Two groups were formed with different orders of cranial electrical stimulation treatment, paper-and-pencil brain training, and their double combination, and two patients were randomly placed in each group. The sequence for the first group was cranial electrical stimulation, brain training, and the double combination, and for the second group the double combination, cranial electrical stimulation, and brain training, respectively. The treatment plan comprised A1, B, A2, C, A3, D, A4, where electrical stimulation treatment was given in ten 30-minute sessions (5 mA and a frequency of 0.5-500 Hz) and brain training in ten 30-minute sessions. Each baseline lasted four weeks. Patients in the first group, who received cranial electrical stimulation treatment first, showed a higher percentage of improvement in the language and verbal fluency subscale of Addenbrooke's Cognitive Examination than patients in the second group. Based on the results, it seems that cranial electrical stimulation, through its effect on neurotransmitters and brain blood flow, especially in the brain stem, may prepare the brain at the neurochemical and molecular level for better effectiveness of brain training at the behavioral level, and that electrical stimulation delivered alone in the first place may be more effective than combining it with paper-and-pencil brain training.

Keywords: cranial electrical stimulation, treatment, brain training, verbal fluency, cognitive impairment

Procedia PDF Downloads 64
818 Low Pertussis Vaccine Coverage Rates among Polish Nurses

Authors: Aneta Nitsch-Osuch, Sylwia Dyk, Izabela Gołebiak

Abstract:

Background: Since 2014 the pertussis vaccine has been recommended for Polish health care workers who have close contact with infants. Although this recommendation is included in the National Immunization Programme, its uptake has remained unknown. Purpose: The aim of the study, conducted at the Department of Social Medicine and Public Health (Medical University of Warsaw, Poland), was to describe the perception, knowledge, and coverage rates regarding pertussis vaccination among nursing staff. To the authors' knowledge, it was the first study on this topic in our country. Material and Methods: A total of 543 nurses working on pediatric or neonatal wards were included in the study (501 women and 42 men); the average age was 47 years. All nurses were asked to complete a previously validated anonymous survey. Results: 1. Coverage rates: The analysis revealed that only 4% of responders reported being vaccinated with Tdpa within the past 10 years, while 8% declared they planned to be vaccinated in the future. 35% of responders would consider the Tdpa vaccine if some kind of reimbursement were available. 2. Perception and knowledge of the disease and vaccination: The majority (82%) of nurses did not recognize pertussis as a re-emerging infectious disease. 54% of them believed that obligatory childhood vaccinations protect against the disease and that the protection is life-long. Only 15% of nurses considered pertussis a possible nosocomial infection. The current epidemiology of the disease was known to 6% of responders, while 24% were familiar with pertussis vaccination schedules for infants, children, and adolescents, but only 9% knew that adults older than 19 years are recommended to be vaccinated with Tdpa every 10 years. Many nurses (82%) would welcome more educational activities related to pertussis and methods of its prophylaxis. Conclusions: The pertussis vaccine coverage rate among Polish nurses is extremely low. This is a result of insufficient knowledge about the disease and its prevention. Educational activities addressed to health care workers and reimbursement of the pertussis vaccine are required to improve awareness and increase vaccine coverage rates in the future.

Keywords: coverage, nurse, pertussis, vaccine

Procedia PDF Downloads 186
817 Role of Community Based Forest Management to Address Climate Change Problem: A Case of Nepalese Community Forestry

Authors: Bikram Jung Kunwar

Abstract:

Forests have a central role in climate change. The conservation of forests sequesters carbon from the atmosphere and also regulates the carbon cycle. However, knowingly and unknowingly, the world's forests are deforested and degraded at an annual rate of 0.18%, emitting carbon to the atmosphere. IPCC reports state that deforestation and forest degradation account for one-fifth of total carbon emissions, second only to fossil fuels. Since 1.6 billion people depend to varying degrees on forests for their daily livelihood, not all deforestation is undesirable. Therefore, conserving forests while providing livelihood opportunities for forest-surrounding people is a prerequisite for addressing climate change problems, especially in developing countries, and a growing concern for forestry sector researchers, planners, and policy makers. The study examines the role of community-based forest management in carbon mitigation and adaptation, taking the example of Nepal's community forestry program. In the program, the government hands over a part of the national forests to local communities with sole forest management authority. However, the government itself retains the ownership rights to the forestland. Local communities, organized through a local institution called a Community Forest User Group (CFUG), manage the forests. They also formulate an operational plan with technical prescriptions and a constitution with forest management rules and regulations. The implementation results show that CFUGs are not only effective in organizing local people and building a local institution for forest conservation and management activities, but are also able to collect a community fund from the sale of forest products and carry out various community development activities. These development activities play a decisive role in improving the livelihoods of forest-surrounding people and, eventually, in addressing climate change problems.

Keywords: climate change, community forestry, local institution, Nepal

Procedia PDF Downloads 271
816 Investigation of Residual Stress Relief by in-situ Rolling Deposited Bead in Directed Laser Deposition

Authors: Ravi Raj, Louis Chiu, Deepak Marla, Aijun Huang

Abstract:

Hybridization of the directed laser deposition (DLD) process using an in-situ micro-roller to impart a vertical compressive load on the deposited bead at elevated temperatures can relieve tensile residual stresses incurred in the process. To investigate this stress relief mechanism and its relationship with the in-situ rolling parameters, a fully coupled dynamic thermo-mechanical model is presented in this study. A single bead deposition of Ti-6Al-4V alloy with an in-situ roller made of mild steel moving at a constant speed with a fixed nominal bead reduction is simulated using the explicit solver of the finite element software, Abaqus. The thermal model includes laser heating during the deposition process and the heat transfer between the roller and the deposited bead. The laser heating is modeled using a moving heat source with a Gaussian distribution, applied along the pre-formed bead’s surface using the VDFLUX Fortran subroutine. The bead’s cross-section is assumed to be semi-elliptical. The interfacial heat transfer between the roller and the bead is considered in the model. In addition, the roller is cooled internally by axial water flow, which is considered in the model through convective heat transfer. The mechanical model for the bead and substrate includes the effects of rolling along with the deposition process, and their elastoplastic material behavior is captured using the J2 plasticity theory. The model accounts for strain, strain rate, and temperature effects on the yield stress based on Johnson-Cook’s theory. Various aspects of this material behavior are captured in the FE software using the subroutines VUMAT for elastoplastic behavior, VUHARD for yield stress, and VUEXPAN for thermal strain. The roller is assumed to be elastic and does not undergo any plastic deformation. Contact friction at the roller-bead interface is also considered in the model. Based on the thermal results of the bead, the distance between the roller and the deposition nozzle (roller offset) can be determined to ensure rolling occurs around the beta-transus temperature for the Ti-6Al-4V alloy. It is identified that the roller offset and the nominal bead height reduction are crucial parameters that influence the residual stresses in the hybrid process. The results obtained from a simulation at a roller offset of 20 mm and a nominal bead height reduction of 7% reveal that the tensile residual stresses decrease to about 52% due to in-situ rolling throughout the deposited bead. This model can be used to optimize the rolling parameters to minimize the residual stresses in the hybrid DLD process with in-situ micro-rolling.
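
For reference, the Johnson-Cook flow stress referred to above is commonly written in the following standard form (given here as an assumed sketch rather than the exact parameterization used in the model):

\[
\sigma_y = \left(A + B\,\varepsilon_p^{\,n}\right)\left(1 + C\,\ln\dot{\varepsilon}^{*}\right)\left(1 - T^{*\,m}\right),
\qquad
T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}},
\]

where ε_p is the equivalent plastic strain, ε̇* = ε̇/ε̇₀ is the dimensionless plastic strain rate, and A, B, C, n, m are material constants for Ti-6Al-4V.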

Keywords: directed laser deposition, finite element analysis, hybrid in-situ rolling, thermo-mechanical model

Procedia PDF Downloads 83
815 Displacement Based Design of a Dual Structural System

Authors: Romel Cordova Shedan

Abstract:

Traditional seismic design follows the methodology of Force-Based Design (FBD). Displacement-Based Design (DBD) is a seismic design approach that considers structural damage so that the structure achieves a failure mechanism before collapse. It is easier to quantify the damage of a structure with displacements rather than forces; therefore, for a structure to achieve an inelastic design displacement with good ductility, it must undergo damage. The first part of this investigation addresses the differences between the DBD and FBD methodologies and some advantages of DBD. The second part presents a case study of a 5-story dual-system building that is regular in plan and elevation. The building is located in a seismic zone whose design acceleration on firm soil is 45% of the acceleration of gravity. Both methodologies are then applied to the case study to compare displacements, shear forces, and overturning moments. In the third part, Dynamic Time History Analysis (DTHA) is performed to compare the displacements with those of the DBD and FBD methodologies. Three accelerograms were used, with the acceleration magnitudes scaled to be compatible with the design spectrum. Then, using the ASCE 41-13 guidelines, plastic hinges were assigned to the structure. Finally, the results of both methodologies for the case study are compared. It is important to note that the seismic performance level of the building for DBD is greater than for the FBD method, because the DBD drifts are in the order of 2.0% to 2.5% compared with FBD drifts of 0.7%; therefore, the displacements of DBD are greater than those of the FBD method. The shear forces from DBD are also greater than those from the FBD methodology. These strengths of the DBD method ensure that the structure achieves the design inelastic displacements, because they were obtained using a displacement spectrum reduction factor that depends on the damping and ductility of the dual system. The displacements for the DBD case study also turn out to be greater than those of FBD and DTHA, which confirms that the seismic performance level of the building for DBD is greater than for the FBD method, with DBD drifts in the order of 2.0% to 2.5% compared with small FBD drifts of 0.7%.
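
For reference, one commonly used form of the displacement spectrum reduction factor mentioned above (for example, the expression adopted in direct displacement-based design practice, given here as an assumed sketch rather than the exact expression used in this study) is

\[
R_{\xi} = \left( \frac{0.07}{0.02 + \xi_{eq}} \right)^{0.5},
\qquad
\Delta(T, \xi_{eq}) = R_{\xi}\, \Delta(T, 5\%),
\]

where ξ_eq is the equivalent viscous damping implied by the ductility of the dual system and Δ(T, 5%) is the 5%-damped design displacement spectrum.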

Keywords: displacement-based design, displacement spectrum reduction factor, dynamic time history analysis, force-based design

Procedia PDF Downloads 206
814 A Multi-Role Oriented Collaboration Platform for Distributed Disaster Reduction in China

Authors: Linyao Qiu, Zhiqiang Du

Abstract:

With the rapid urbanization, economic development, and steady population growth in China, the widespread devastation, economic damage, and loss of human life caused by numerous forms of natural disasters are becoming increasingly serious every year. Disaster management requires the availability and effective cooperation of different roles and organizations throughout the whole process, including mitigation, preparedness, response, and recovery. Due to the imbalance of regional development in China, the disaster management capabilities of the national and provincial disaster reduction centers are uneven. When an undeveloped area suffers a disaster, the local reduction department can neither independently obtain first-hand information, such as high-resolution remote sensing images from satellites and aircraft, nor is a sharing mechanism provided for the department to directly access data resources deployed elsewhere. Most existing disaster management systems operate in a typical passive, data-centric mode and serve a single department, so resources cannot be fully shared. This impediment prevents local departments and groups from rapid emergency response and decision-making. In this paper, we introduce a collaborative platform for distributed disaster reduction. To address the imbalance in shared data sources and technology in the disaster reduction process, we propose a multi-role oriented collaboration business mechanism, capable of scheduling and allocating multiple resources for optimum utilization, to link the various roles in different places for collaborative reduction business. The platform fully considers the differences in equipment conditions across provinces and provides several service modes to satisfy the technology needs of disaster reduction. An integrated collaboration system based on a focusing services mechanism is designed and implemented for resource scheduling, functional integration, data processing, task management, collaborative mapping, and visualization. Actual applications illustrate that the platform can well support data sharing and business collaboration between national and provincial departments. It could significantly improve the disaster reduction capability of China.

Keywords: business collaboration, data sharing, distributed disaster reduction, focusing service

Procedia PDF Downloads 275
813 Enhancing Health Information Management with Smart Rings

Authors: Bhavishya Ramchandani

Abstract:

A smart ring is a small electronic device worn on the finger. It incorporates mobile technology and features that make the device simple to use. These gadgets, which resemble conventional rings and are usually made to fit on the finger, are outfitted with features including access management, gesture control, mobile payment processing, and activity tracking. Poor sleep patterns, irregular schedules, and bad eating habits are among the health problems that many people face today. Diets lacking fruits, vegetables, legumes, nuts, and whole grains are common, and individuals in India also experience metabolic issues. In the medical field, smart rings can help patients with problems relating to stomach illnesses and the inability to consume meals tailored to their bodies' needs. The smart ring tracks bodily functions, including blood sugar and glucose levels, and presents the information instantly. Based on these data, the ring generates insights and a workable plan that suit the body. As part of our core approach, we conducted focus groups and individual interviews and discussed the difficulties participants have in maintaining the right diet, as well as whether a smart ring would be beneficial to them. Participants were very enthusiastic about and supportive of the concept of using smart rings in healthcare and believed that these rings may assist them in maintaining their health and following a well-balanced diet plan. This response came from the primary data; in addition, working on the Emerging Technology Canvas Analysis of smart rings in healthcare significantly improved our understanding of the technology's application in the medical field. It is believed that demand for smart health care will grow as people become more conscious of their health, that the majority of individuals will eventually use such rings within three to four years as demand increases, and that their daily lives will be significantly affected by them.

Keywords: smart ring, healthcare, electronic wearable, emerging technology

Procedia PDF Downloads 34
812 Factors Determining the Vulnerability to Occupational Health Risk and Safety of Call Center Agents in the Philippines

Authors: Lito M. Amit, Venecio U. Ultra, Young-Woong Song

Abstract:

Business process outsourcing (BPO) in the Philippines is expanding rapidly, attracting more than 2% of total employment. Currently, the BPO industry is confronted with several issues pertaining to sustainable productivity, such as meeting the staffing gap, the high rate of employee turnover and workforce retention, and the occupational health and safety (OHS) of call center agents. We conducted a survey of OHS programs and health concerns among call center agents in the Philippines and determined the sociocultural factors that affect the vulnerability of call center agents to occupational health risks and hazards. The majority of the agents affirmed that OHS programs are implemented and that OHS orientation and emergency procedures were conducted at the start of employment, and they perceived a favorable and convenient working environment except for occasional noise disturbances, acoustic shock, and visual and voice fatigue. Male agents adjust more easily than female agents to the demands and changes of their work environment and to flexible work schedules. Female agents have a higher tendency to be pressured and humiliated over low work performance, experience a higher incidence of emotional and psychological abuse, and experience more physical stress than male agents. The majority of the call center agents had a night-shift schedule, and regardless of other factors, night-shift work brings higher stress to agents. While working in a call center, a higher incidence of headaches, insomnia, burnout, suppressed anger, anxiety, and depression was experienced by female, younger (21-25 years old), and night-shift agents than by their counterparts. The most common musculoskeletal disorders include pain in the neck, shoulders, and back, as well as hand and wrist disorders, and these are most commonly experienced by female and younger workers. About 30% experienced symptoms of cardiovascular and gastrointestinal disorders and weakened immune systems. Overall, these findings show the variable vulnerability of different subpopulations of call center agents and are important for occupational health risk prevention and management towards a sustainable workforce for the BPO industry in the Philippines.

Keywords: business process outsourcing industry, health risk of call center agents, socio-cultural determinants, Philippines

Procedia PDF Downloads 467
811 Biological Control of Woolly Apple Aphid, Eriosoma Lanigerum (Hausmann) in the Nursery Production of Spruce

Authors: Snezana Rajkovic, Miroslava Markovic, Ljubinko Rakonjac, Aleksandar Lucic, Radoslav Rajkovic

Abstract:

The woolly apple aphid, Eriosoma lanigerum (Hausmann), is a widely distributed pest of apple trees, especially where its parasites have been killed by insecticides. It can also be found on pear, hawthorn, mountain ash, and elm trees. These are relatively small to medium-sized aphids, characterized by a reddish-brown body, a blood-red stain when crushed, and a fluffy, flocculent wax covering. Specialized dermal glands produce the characteristic fluffy or powdery wax, which gives E. lanigerum its characteristic 'woolly' appearance. The woolly apple aphid is also a problem in the nursery production of spruce. The experiments were carried out in the nursery “Nevade” in Gornji Milanovac ("Srbijasume") on spruce seedlings aged 2 years. In this study, the organic insecticide King Bo, an aqueous solution (a.i. oxymatrine 0.2% + psoralen 0.4%) manufactured by Beijing Kingbo Biotech Co. Ltd., Beijing, China, extracted from plants and used as a pesticide in nursery production, was investigated. The King Bo bioinsecticide is manufactured from refined natural herbal extracts of several wild medicinal plants, such as Sophora flavescens Ait., Veratrum nigrum L., A. Carmichael, etc. Oxymatrine 2.4 SL is a stomach poison that has antifeeding and repellent action. This substance stimulates development and growth in the host plant and also controls the appearance of downy mildew. The trials were set up according to the method's instructions, monitoring changes in the number of larvae and adults compared to before treatment. The treatment plan was made according to a fully randomized block design. The experiment was conducted in four replications, and the basic plot had an area of 25 m2. Phytotoxicity was estimated by the PP 1/135 (2) method, the intensity of infection according to Townsend-Heuberger, the efficacy by Abbott, and the analysis of variance with Duncan's test and PP 1/181 (2).
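
For reference, the two assessment formulas cited above, in their commonly used forms (given here as a sketch, not necessarily the exact variants applied in the trial), are

\[
E\,(\%) = \frac{C - T}{C} \times 100
\qquad \text{and} \qquad
P\,(\%) = \frac{\sum (n \cdot v)}{Z \cdot N} \times 100,
\]

where, in Abbott's efficacy E, C and T are the pest counts in the untreated control and the treated plot, and, in the Townsend-Heuberger intensity P, n is the number of individuals in each category of value v, Z is the highest category value, and N is the total number assessed.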

Keywords: bioinsecticide, efficacy, nursery production, woolly apple aphid

Procedia PDF Downloads 502
810 Uncertainty and Multifunctionality as Bridging Concepts from Socio-Ecological Resilience to Infrastructure Finance in Water Resource Decision Making

Authors: Anita Lazurko, Laszlo Pinter, Jeremy Richardson

Abstract:

Uncertain climate projections, multiple possible development futures, and a financing gap create challenges for water infrastructure decision making. In contrast to conventional predict-plan-act methods, an emerging decision paradigm that enables social-ecological resilience supports decisions that are appropriate for uncertainty and leverage social, ecological, and economic multifunctionality. Concurrently, water infrastructure project finance plays a powerful role in sustainable infrastructure development but remains disconnected from discourse in socio-ecological resilience. At the time of research, a project to transfer water from Lesotho to Botswana through South Africa in the Orange-Senqu River Basin was at the pre-feasibility stage. This case was analysed through documents and interviews to investigate how uncertainty and multifunctionality are conceptualised and considered in decisions for the resilience of water infrastructure and to explore bridging concepts that might allow project finance to better enable socio-ecological resilience. Interviewees conceptualised uncertainty as risk, ambiguity and ignorance, and multifunctionality as politically-motivated shared benefits. Numerous efforts to adopt emerging decision methods that consider these terms were in use but required compromises to accommodate the persistent, conventional decision paradigm, though a range of future opportunities was identified. Bridging these findings to finance revealed opportunities to consider a more comprehensive scope of risk, to leverage risk mitigation measures, to diffuse risks and benefits over space, time and to diverse actor groups, and to clarify roles to achieve multiple objectives for resilience. In addition to insights into how multiple decision paradigms interact in real-world decision contexts, the research highlights untapped potential at the juncture between socio-ecological resilience and project finance.

Keywords: socio-ecological resilience, finance, multifunctionality, uncertainty

Procedia PDF Downloads 100
809 An Intelligent Steerable Drill System for Orthopedic Surgery

Authors: Wei Yao

Abstract:

A steerable and flexible drill is needed in orthopaedic surgery. For example, osteoarthritis is a common condition affecting millions of people, for which joint replacement is an effective treatment that improves the quality and duration of life in elderly sufferers. Conventional surgery is not very accurate, and computer navigation and robotics can help increase the accuracy. For example, in Total Hip Arthroplasty (THA), robotic surgery is currently practiced mainly on the acetabular side, helping with cup positioning and orientation. However, femoral stem positioning mostly uses a hand-rasping method rather than robots for accurate positioning. The other case for using a flexible drill in surgery is Anterior Cruciate Ligament (ACL) reconstruction. The majority of ACL reconstruction failures are primarily caused by technical mistakes and surgical errors resulting from drilling the anatomical bone tunnels required to accommodate the ligament graft. The proposed new steerable drill system will perform orthopedic surgery through curved tunneling, leading to better accuracy and patient outcomes. It may reduce intra-operative fractures, dislocations, early failure, and leg length discrepancy by making possible a new level of precision. This technology is based on a robotically assisted, steerable, hand-held flexible drill, with a drill-tip tracking device and a multi-modality navigation system. The critical differentiator is that this robotically assisted surgical technology now allows the surgeon to prepare 'patient-specific' and more anatomically correct 'curved' bone tunnels during orthopedic surgery rather than drilling straight holes, as occurs currently with existing surgical tools. The flexible and steerable drill and its navigation system for femoral milling in total hip arthroplasty have been tested on sawbones to evaluate the accuracy of the positioning and orientation of the femoral stem relative to the pre-operative plan. The data show that the accuracy of the navigation system is better than that of the traditional hand-rasping method.

Keywords: navigation, robotic orthopedic surgery, steerable drill, tracking

Procedia PDF Downloads 146
808 The Integrated Methodological Development of Reliability, Risk and Condition-Based Maintenance in the Improvement of the Thermal Power Plant Availability

Authors: Henry Pariaman, Iwa Garniwa, Isti Surjandari, Bambang Sugiarto

Abstract:

Availability of a complex system such as a thermal power plant is strongly influenced by the reliability of spare parts and by maintenance management policies. Reliability-centered maintenance (RCM) is an established method of analysis and the main reference for maintenance planning. This method considers the consequences of failure, but it does not address the further risk of downtime associated with failures, loss of production, or high maintenance costs. The risk-based maintenance (RBM) technique provides support strategies to minimize the risks posed by failure and to derive maintenance tasks with cost effectiveness in mind. Meanwhile, condition-based maintenance (CBM) focuses on condition monitoring so that maintenance or other actions can be planned and scheduled to avoid failure before time-based maintenance would intervene. RCM, RBM, and CBM are applied in thermal power plants individually or in combinations of two techniques (RCM with RBM, or RCM with CBM). Implementing all three techniques in an integrated maintenance programme will increase the availability of thermal power plants compared with using the techniques individually or in pairs. This study uses reliability-, risk-, and condition-based maintenance in an integrated manner to increase the availability of thermal power plants. The method generates an MPI (Priority Maintenance Index), obtained by multiplying the RPN (Risk Priority Number) by the RI (Risk Index), and an FDT (Failure Defense Task), which can generate condition monitoring and assessment tasks in addition to maintenance tasks. Both MPI and FDT, obtained from the development of a functional tree, failure mode and effects analysis, fault-tree analysis, and risk analysis (risk assessment and risk evaluation), were then used to develop and implement maintenance plans and schedules and condition monitoring and assessment, and ultimately to perform the availability analysis. The results of this study indicate that reliability-, risk-, and condition-based maintenance methods, applied in an integrated manner, can increase the availability of thermal power plants.
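To make the prioritisation step concrete, the following is a minimal sketch of ranking failure modes by MPI = RPN x RI, assuming the conventional FMEA scoring (severity x occurrence x detection) for the RPN; the scoring scales, example failure modes, and risk index values are illustrative assumptions rather than plant data or the authors' exact formulation.

```python
# Rank failure modes by MPI = RPN x RI (sketch with assumed inputs).
failure_modes = [
    # (name, severity, occurrence, detection, risk_index)
    ("boiler tube leak",           9, 4, 6, 0.8),
    ("feedwater pump bearing wear", 6, 6, 3, 0.5),
    ("air preheater fouling",       4, 7, 2, 0.3),
]

def mpi(severity, occurrence, detection, risk_index):
    rpn = severity * occurrence * detection   # Risk Priority Number (FMEA)
    return rpn * risk_index                   # Priority Maintenance Index

ranked = sorted(failure_modes, key=lambda fm: mpi(*fm[1:]), reverse=True)
for name, s, o, d, ri in ranked:
    print(f"{name:28s} RPN={s*o*d:4d}  MPI={mpi(s, o, d, ri):6.1f}")
```

Items at the top of such a ranking would be the first candidates for the maintenance, monitoring, or condition-assessment tasks produced by the FDT analysis.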

Keywords: integrated maintenance techniques, availability, thermal power plant, MPI, FDT

Procedia PDF Downloads 767
807 Mesenteric Ischemia Presenting as Acalculous Cholecystitis: A Case Review of a Rare Complication and Aberrant Anatomy

Authors: Joshua Russell, Omar Zubair, Reuben Ndegwa

Abstract:

Introduction: Mesenteric ischemia is an uncommon condition that can be challenging to diagnose in the acute setting and carries the potential for significant morbidity and mortality. Acute acalculous cholecystitis has very rarely been described in the setting of mesenteric ischemia. Case: This was the case in a 78-year-old male who initially presented with clinical and radiological evidence of small bowel obstruction, thought likely secondary to malignancy. The patient had a 6-week history of anorexia and worsening lower abdominal pain, with approximately 30 kg of unintentional weight loss over a 12-month period, and a CT scan demonstrated a transition point in the distal ileum. The patient became increasingly hemodynamically unstable and peritonitic, and an emergency laparotomy was performed. Intra-operatively, however, no obvious transition point was identified; instead, the gallbladder was markedly gangrenous and oedematous, consistent with acalculous cholecystitis. An open total cholecystectomy was subsequently performed. The patient was admitted to the Intensive Care Unit post-operatively and continued to deteriorate over the following 48 hours, with two re-look laparotomies demonstrating progressively worsening bowel ischemia, initially in the distribution of the superior mesenteric artery and then the coeliac trunk. On review, the patient was found to have an aberrant right hepatic artery arising from the superior mesenteric artery. The extent of ischemia was considered non-survivable, and the patient was palliated. Discussion: Multiple theories currently exist for the underlying pathophysiology of acalculous cholecystitis, including biliary stasis, sepsis, and ischemia. This case lends further support to ischemia as the underlying etiology, particularly when considered in the context of the patient’s aberrant right hepatic artery arising from the superior mesenteric artery, a variant that occurs in 11-14% of patients. Conclusion: This case report adds further insight into the debate surrounding the pathophysiology of acalculous cholecystitis. It also presents acalculous cholecystitis as a complication of mesenteric ischemia that should always be considered, especially in elderly patients and in the context of relatively common anatomical variations. Furthermore, the case highlights the importance of maintaining dynamic working diagnoses in the setting of evolving pathophysiology and clinical presentations.

Keywords: acalculous cholecystitis, anatomical variation, general surgery, mesenteric ischemia

Procedia PDF Downloads 164
806 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and is expected to cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to reduce the inaccuracies, weaknesses, and biases of any individual model. Over time, the framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework that leverages multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can, in turn, be used to save and improve lives by allowing individuals to protect their health and governments to implement effective pollution control measures.
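The core of the framework, averaging the predictions of the three top-performing model families, can be sketched as soft voting over a logistic regression, a random forest, and a neural network. The features and data below are synthetic stand-ins for the RP4 weather and EPA pollutant inputs described above, so this is an illustration of the averaging idea rather than the study's prototype.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 4, n),      # season (0-3)
    rng.integers(0, 2, n),      # weekend flag
    rng.normal(20, 8, n),       # forecast temperature
    rng.normal(10, 4, n),       # forecast wind speed
    rng.normal(35, 15, n),      # yesterday's PM2.5
])
# Synthetic target: "unhealthy air" is more likely when it is hot, calm,
# and yesterday's PM2.5 was already high.
y = ((0.05 * X[:, 2] - 0.1 * X[:, 3] + 0.04 * X[:, 4]
      + rng.normal(0, 1, n)) > 2.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", make_pipeline(StandardScaler(),
                                 LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(32,),
                                            max_iter=1000, random_state=0))),
    ],
    voting="soft",   # average predicted probabilities across the models
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```

Retraining the ensemble as new observations arrive is one simple way to realise the self-adjustment the abstract describes; feature-importance style measures (e.g., permutation importance) can likewise stand in for the mean-decrease-in-accuracy ranking of predictors.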

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 94
805 Development of an Optimised, Automated Multidimensional Model for Supply Chains

Authors: Safaa H. Sindi, Michael Roe

Abstract:

This project divides supply chain (SC) models into seven Eras, according to the evolution of the market’s needs over time. The five earliest Eras describe the emergence of supply chains, while the last two Eras are yet to be created. Research objectives: the aim is to generate the two latest Eras, with their respective models, focusing on consumable goods. Era Six contains the Optimal Multidimensional Matrix (OMM), which incorporates most characteristics of the SC and allocates them into four quarters (Agile, Lean, Leagile, and Basic SC). This will help companies, especially small and medium-sized enterprises (SMEs), plan their optimal SC route. Era Seven creates an Automated Multidimensional Model (AMM), which upgrades the matrix of Era Six by accounting for all the supply chain factors (e.g., offshoring, sourcing, risk) in an interactive system with heuristic learning that helps larger companies and industries select the best SC model for their market. Methodologies: data collection is based on a Fuzzy-Delphi study that analyses statements using fuzzy logic. The first round of the Delphi study contains statements (fuzzy rules) about the matrix of Era Six; the second round incorporates the feedback from the first round, and so on. Preliminary findings: both models are applicable. The matrix of Era Six reduces the complexity of choosing the best SC model for SMEs by helping them identify the strategy, among Basic SC, Lean, Agile, and Leagile SC, that is tailored to their needs. The interactive heuristic learning in the AMM of Era Seven will help mitigate error and aid large companies in identifying and re-strategising the best SC model and distribution system for their market and commodity, hence increasing efficiency. Potential contributions to the literature: the problematic issue facing many companies is deciding which SC model or strategy to adopt, given the many models and definitions developed over the years. This research simplifies the choice by putting most definitions in a template and most models in the matrix of Era Six. The research is original in that the division of SCs into Eras, the OMM of Era Six with Fuzzy-Delphi, and the heuristic learning in the AMM of Era Seven provide a synergy of tools not previously combined in the SC field. Additionally, the OMM of Era Six is unique as it combines most characteristics of the SC, which is an original concept in itself.
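Purely as an illustration of the four-quarter idea in the OMM, the sketch below classifies a supply chain into Agile, Lean, Leagile, or Basic from two hypothetical scores; the abstract does not specify which characteristics or thresholds the matrix actually uses, so the axes and cut-offs here are assumptions.

```python
# Hypothetical quadrant classification, not the OMM's actual dimensions.
def classify_supply_chain(demand_uncertainty: float, cost_pressure: float) -> str:
    """Both inputs are assumed scores in [0, 1]."""
    high_uncertainty = demand_uncertainty >= 0.5
    high_cost_pressure = cost_pressure >= 0.5
    if high_uncertainty and high_cost_pressure:
        return "Leagile SC"   # respond to volatility while controlling cost
    if high_uncertainty:
        return "Agile SC"     # prioritise responsiveness
    if high_cost_pressure:
        return "Lean SC"      # prioritise waste and cost reduction
    return "Basic SC"

print(classify_supply_chain(0.8, 0.7))   # -> Leagile SC
print(classify_supply_chain(0.2, 0.9))   # -> Lean SC
```

In the AMM of Era Seven, such a rule base would be replaced by fuzzy rules refined through the Delphi rounds and adjusted over time by the heuristic learning component.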

Keywords: Leagile, automation, heuristic learning, supply chain models

Procedia PDF Downloads 368
804 Non-Conformance Clearance through an Intensified Mentorship towards ISO 15189 Accreditation: The Case of Jimma and Hawassa Hospital Microbiology Laboratories, Ethiopia

Authors: Dawit Assefa, Kassaye Tekie, Gebrie Alebachew, Degefu Beyene, Bikila Alemu, Naji Mohammed, Asnakech Agegnehu, Seble Tsehay, Geremew Tasew

Abstract:

Background: Implementation of a Laboratory Quality Management System (LQMS) is critical to ensure accurate, reliable, and efficient laboratory testing of antimicrobial resistance (AMR). However, LQMS implementation and progress toward accreditation remain limited in Ethiopia’s AMR surveillance laboratory testing settings. By addressing non-conformances (NCs) and working towards accreditation, microbiology laboratories can improve the quality of their services, increase staff competence, and contribute to mitigating the spread of AMR. Methods: Using standard ISO 15189 horizontal and vertical assessment checklists, certified assessors identified NCs at the Hawassa and Jimma Hospital microbiology laboratories. The Ethiopian Public Health Institute AMR mentors and IDDS staff prioritized closing the NCs through an intensified mentorship program that included ISO 15189 orientation training, resource allocation, and action plan development. Results: To clear the NCs at the two facilities, an intensified mentorship approach was adopted that provided ISO 15189 orientation training; supplied buffer reagents, controls, standards, and ancillary equipment; and facilitated equipment maintenance and calibration. Method verification and competency assessment were also conducted, along with the implementation of standard operating procedures and the recommended corrective actions. This approach enhanced the laboratories’ readiness for accreditation. After addressing their NCs, the two laboratories applied to the Ethiopian Accreditation Services for ISO 15189 accreditation. Conclusions: Clearing NCs through intensified mentorship was crucial in preparing the two laboratories for accreditation and improving the quality of laboratory test results. This approach can guide other microbiology laboratories in their efforts to attain accreditation.

Keywords: non-conformance clearance, intensified mentorship, accreditation, ISO 15189

Procedia PDF Downloads 43
803 Doing Bad for a Greater Good: Moral Disengagement in Social and Commercial Entrepreneurial Contexts

Authors: Thorsten Auer, Sumaya Islam, Sabrina Plaß, Colin Wooldridge

Abstract:

Whether individuals are more likely to forgo some ethical values when doing so serves a “great” social mission remains an open question. Research interest in the mechanism of moral disengagement has risen sharply in the organizational context over the last decades. Moral disengagement offers an explanation for why individuals decide against their moral intent; it describes the tendency to make unethical decisions due to a lack of self-regulation across various actions and their consequences. In our study, we examine differences in individual decision-making between a commercial and a social entrepreneurial context. Specifically, we investigate whether individuals in a social entrepreneurial context, characterized by pro-social goals and a purpose beyond profit maximization, tend to make more or less “unethical” decisions in trade-off situations than those in a profit-focused commercial entrepreneurial context. While a general priming effect may explain a tendency for individuals to make fewer unethical decisions in a social context, it remains unclear how individuals decide when facing a trade-off in that specific context. The trade-off in our study is characterized by the option to decide (un)ethically in order to enhance the business purpose (in the social context, a social purpose; in the commercial context, profit maximization). To investigate which characteristics of the context, and specifically of a trade-off, lead individuals to disregard and override their ethical values for a “greater good”, we design a conjoint analysis. This approach allows us to vary attributes and scenarios and to test which attributes of a trade-off increase the probability of an unethical choice. We add survey data to examine the individual propensity to morally disengage as a factor influencing preferences for certain attributes. Currently, we are in the final stage of designing the conjoint analysis and plan to conduct the study by December 2022. We contribute to a better understanding of the role of moral disengagement in individual decision-making in (social) entrepreneurial trade-offs.
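The conjoint logic can be sketched as follows: enumerate choice scenarios from attribute levels, then estimate how each attribute shifts the probability of choosing the unethical option. The attributes, levels, and simulated responses below are illustrative assumptions, not the study's actual design or data.

```python
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical conjoint attributes and levels.
attributes = {
    "context":       ["commercial", "social"],
    "benefit_size":  ["small", "large"],
    "harm_salience": ["low", "high"],
}
profiles = [dict(zip(attributes, combo))
            for combo in itertools.product(*attributes.values())]

# Dummy-code the profiles (1 = second level of each attribute).
X = np.array([[p["context"] == "social",
               p["benefit_size"] == "large",
               p["harm_salience"] == "high"] for p in profiles], dtype=float)

# Simulated respondent choices: 1 = chose the unethical option.
rng = np.random.default_rng(1)
X_rep = np.repeat(X, 50, axis=0)                        # 50 responses/profile
logit = -0.5 + 0.8 * X_rep[:, 1] - 1.0 * X_rep[:, 2]    # assumed true effects
y = (rng.random(len(X_rep)) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X_rep, y)
for name, coef in zip(attributes, model.coef_[0]):
    print(f"{name:14s} effect on log-odds of unethical choice: {coef:+.2f}")
```

In the actual study, the individual propensity to morally disengage (from the survey) could enter such a model as an additional predictor or interaction term.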

Keywords: moral disengagement, social entrepreneurship, unethical decision, conjoint analysis

Procedia PDF Downloads 66
802 Community Participation in Decentralized Management of Natural Resources in the Sudano-Sahelian Zone of West Africa

Authors: Clarisse Umutoni, Augustine Ayantunde, Matthew Turner, Germain J. Sawadogo

Abstract:

Decentralized governance of natural resources is considered one of the key strategies for promoting sustainable management of natural resources at the local level. The rationale behind decentralization is that local populations are both better situated and more highly motivated than outside agencies to manage their resources in an ecologically and economically sustainable manner. Effective decentralized natural resource management requires strong local natural resource institutions; therefore, strengthening the local institutions governing natural resource management is essential to promoting strong participation of local communities in managing their resources. This paper investigated the existing local institutions (rules, norms, and/or local conventions) governing the management of natural resources and the forms of community participation in the development of these institutions. Group discussions and individual interviews were conducted to collect data. Our findings showed significant variation across the study sites in respondents’ knowledge of the existing local rules and norms governing natural resource management. The results also show that participation was dominated by a small group of individuals, often community leaders and elites, and suggest that women are marginalized. In general, the factors influencing the level of participation include age, years of residence in the community, gender, and education level. The study also highlights the strengths of local natural resource institutions, especially when they are enforced. Presently, the main challenge facing the institutions governing natural resource use in the study area is representativeness in the development of local rules and norms: community leaders and household heads often dominate, which does not encourage active participation by other community members. Therefore, for effective implementation of local natural resource institutions, the interests of key natural resource users should be taken into account. It is also important to promote rules and norms that protect or strengthen women’s access to natural resources in the community.

Keywords: decentralization, land use plan, local institutions, Mali

Procedia PDF Downloads 365
801 Development of an Experiment for Impedance Measurement of Structured Sandwich Sheet Metals by Using a Full Factorial Multi-Stage Approach

Authors: Florian Vincent Haase, Adrian Dierl, Anna Henke, Ralf Woll, Ennes Sarradj

Abstract:

Structured sheet metals and structured sandwich sheet metals are three-dimensional, lightweight structures with increased stiffness that are used in the automotive industry. The impedance, a measure of a structure’s resistance to vibration, will be determined for plain sheets, structured sheets, and structured sandwich sheets. The aim of this paper is to generate an experimental design that minimizes the cost and duration of the experiments. Design of experiments is used to reduce the large number of individual tests required to determine the correlation between the impedance and its influencing factors. Full and fractional factorial designs are applied in order to systematize and plan the experiments. Their major advantages are high-quality results from a relatively small number of trials and the ability to identify the most important influencing factors, including their interactions. The developed full factorial experimental design for the study of plain sheets includes three factor levels. In contrast to the study of plain sheets, the impedance analysis of structured sheets and structured sandwich sheets is split into three phases. The first phase consists of preliminary tests that identify the relevant factor levels. These factor levels are subsequently employed in the main tests, which have the objective of identifying complex relationships between the parameters and the reference variable. Post-tests can follow if additional factor levels or other factors need to be studied. By using full and fractional factorial experimental designs, the required number of tests is reduced by half. In the context of this paper, the benefits of applying design of experiments are presented, and a multi-stage approach is shown that takes unrealizable factor combinations into account and minimizes the number of experiments.
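As a minimal sketch of how such a test plan can be generated, the snippet below enumerates a full factorial design and a two-level half fraction; the factors and levels are placeholders, since the paper does not list them here, and the defining relation is only one common choice.

```python
import itertools

# Placeholder factors and levels (illustrative, not the paper's factors).
factors = {
    "sheet_type":      ["plain", "structured", "sandwich"],
    "excitation_freq": ["low", "mid", "high"],
    "clamping":        ["free", "clamped", "mixed"],
}

# Full factorial: every combination of factor levels.
full_factorial = list(itertools.product(*factors.values()))
print("full factorial runs:", len(full_factorial))        # 3^3 = 27

# Half fraction for three coded two-level factors (-1, +1), keeping only
# runs that satisfy the defining relation C = A*B, which halves the runs.
two_level = list(itertools.product([-1, 1], repeat=3))
half_fraction = [(a, b, c) for a, b, c in two_level if c == a * b]
print("2^3 runs:", len(two_level), "-> half fraction:", len(half_fraction))
```

Unrealizable factor combinations identified in the preliminary phase can simply be filtered out of the enumerated run list before the main tests, which is the practical effect of the multi-stage approach described above.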

Keywords: structured sheet metals, structured sandwich sheet metals, impedance measurement, design of experiment

Procedia PDF Downloads 350