Search results for: time constraint
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18463

14503 Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques

Authors: Gabriela V. Angeles Perez, Jose Castillejos Lopez, Araceli L. Reyes Cabello, Emilio Bravo Grajales, Adriana Perez Espinosa, Jose L. Quiroz Fabian

Abstract:

Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses and material damage. Traditional studies of road traffic accidents in urban zones represent a very high investment of time and money, and their results are often outdated. However, in many countries crowdsourced GPS-based traffic and navigation apps have emerged as a low-cost source of information for studying road traffic accidents and the urban congestion they cause. In this article we identify the zones, roads and specific times in Mexico City (CDMX) in which the largest number of road traffic accidents was concentrated during 2016. We built a database compiling information obtained from the social network Waze. The methodology employed was Knowledge Discovery in Databases (KDD) to find patterns in the accident reports, supported by data mining techniques implemented in Weka. The selected algorithms were Expectation Maximization (EM), used to obtain the ideal number of clusters for the data, and k-means as the grouping method. Finally, the results were visualized with the QGIS Geographic Information System.
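
A minimal sketch of the clustering step described above, written in Python with scikit-learn rather than Weka (which the paper used): a Gaussian-mixture EM fit scored by BIC stands in for the cluster-number selection, and k-means does the final grouping. The file name and column names are hypothetical.

```python
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

# Hypothetical export of the Waze accident reports for 2016.
reports = pd.read_csv("waze_accident_reports_2016.csv")
X = reports[["latitude", "longitude"]].to_numpy()

# EM: fit Gaussian mixtures for several cluster counts, keep the best BIC.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(2, 11)}
best_k = min(bics, key=bics.get)

# k-means grouping with the EM-selected number of clusters.
reports["cluster"] = KMeans(n_clusters=best_k, n_init=10,
                            random_state=0).fit_predict(X)
# The labeled points can then be exported and visualized in QGIS.
```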

Keywords: data mining, k-means, road traffic accidents, Waze, Weka

Procedia PDF Downloads 418
14502 Evaluating the Nexus between Energy Demand and Economic Growth Using the VECM Approach: Case Study of Nigeria, China, and the United States

Authors: Rita U. Onolemhemhen, Saheed L. Bello, Akin P. Iwayemi

Abstract:

The effectiveness of energy demand policy depends on identifying the key drivers of energy demand in both the short run and the long run. This paper examines the influence of regional differences on the link between energy demand and other explanatory variables for Nigeria, China and the USA using the Vector Error Correction Model (VECM) approach. The study employed annual time series data on energy consumption (ED), real gross domestic product (GDP) per capita (RGDP), real energy prices (P) and urbanization (N) for a thirty-six-year sample period. The time-series data are sourced from the World Bank's World Development Indicators (WDI, 2016) and the US Energy Information Administration (EIA). Results from the study show that all the independent variables (income, urbanization and price) substantially affect long-run energy consumption in Nigeria, the USA and China, whereas income has no significant effect on short-run energy demand in the USA and Nigeria. In addition, the long-run effect of urbanization is relatively stronger in China. Since urbanization is a key factor in energy demand, it is therefore recommended that more attention be given to the development of rural communities to reduce the inflow of migrants into urban communities, which drives up energy demand; energy excesses should be penalized while energy management should be incentivized.
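
A minimal sketch, assuming hypothetical variable names and data, of the kind of VECM fit described above, using Python's statsmodels; in practice the lag order and cointegration rank would be chosen by formal tests.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Hypothetical annual series for one country (e.g., Nigeria), 36 years.
data = pd.read_csv("nigeria_energy.csv", index_col="year")
endog = data[["energy_consumption", "rgdp_per_capita", "price", "urbanization"]]

# Johansen-based choice of cointegration rank, then the VECM fit.
rank = select_coint_rank(endog, det_order=0, k_ar_diff=1).rank
res = VECM(endog, k_ar_diff=1, coint_rank=rank, deterministic="ci").fit()
print(res.summary())  # long-run (beta) and short-run (gamma) coefficients
```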

Keywords: economic growth, energy demand, income, real GDP, urbanization, VECM

Procedia PDF Downloads 312
14501 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using Six Sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the Six Sigma define, measure, analyze, improve, and control (DMAIC) approach was implemented in this study. The Six Sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors are cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability indices improved dramatically. The purpose of this study is to show that the Six Sigma and Taguchi methodologies can be used efficiently to determine the important factors that improve the process capability index of an injection molding process.
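
For reference, the capability indices named above follow from the standard formulas Cp = (USL - LSL)/6σ and Cpk = min(USL - μ, μ - LSL)/3σ; the sketch below computes them in Python on hypothetical shrinkage data and specification limits.

```python
import numpy as np

shrinkage = np.array([1.92, 2.05, 1.98, 2.10, 1.95, 2.01, 1.99, 2.03])  # %
usl, lsl = 2.20, 1.80  # hypothetical customer spec limits (%)

mu, sigma = shrinkage.mean(), shrinkage.std(ddof=1)
cp = (usl - lsl) / (6 * sigma)               # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # capability with centering
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```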

Keywords: injection molding, shrinkage, six sigma, Taguchi parameter design

Procedia PDF Downloads 178
14500 Ultrasonic Techniques to Characterize and Monitor Water-in-Oil Emulsion

Authors: E. A. Alshaafi, A. Prakash

Abstract:

Oil-water emulsions are commonly encountered in various industrial operations and at different stages of crude oil production and processing. Emulsions are often difficult to track and treat and can cause a number of costly problems which need to be avoided. The characteristics of the emulsion phase can vary with crude composition and the types of impurities present in the oil. The objective of this study is the development of ultrasonic techniques to track and characterize the emulsion phase generated during the production and cleaning of crude oil. The position of the emulsion layer is monitored with the help of ultrasonic probes suitably placed in the vessel. The sensitivity of the technique and its potential have been demonstrated through extensive testing with different oil samples. The technique is also being developed to monitor emulsion phase characteristics such as stability, composition, and droplet size distribution. The ultrasonic parameters recorded are changes in acoustic velocity, signal attenuation, and the attenuation frequency spectrum. Emulsions have been prepared with a light mineral oil sample, and the effects of various factors, including mixing speed, temperature, surfactant concentration, and solid particle concentration, have been investigated. The applied frequency of the ultrasonic waves has been varied from 1 to 5 MHz to carry out a sensitivity analysis. The emulsion droplet structure is observed with optical microscopy, and stability is examined by tracking the changes in ultrasonic parameters with time. A model based on ultrasonic attenuation spectroscopy is being developed and tested to track changes in the droplet size distribution with time.
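
The attenuation spectrum mentioned above is typically estimated by comparing the spectra of a reference signal and a signal transmitted through the emulsion; the following is an illustrative Python sketch with synthetic placeholder signals (sampling rate, path length and waveforms are assumed, not the study's data).

```python
import numpy as np

fs = 50e6                                  # sampling rate (Hz), assumed
t = np.arange(2048) / fs
ref = np.sin(2 * np.pi * 3e6 * t) * np.exp(-((t - 10e-6) / 3e-6) ** 2)
sample = 0.6 * np.roll(ref, 12)            # attenuated, delayed placeholder

freqs = np.fft.rfftfreq(t.size, 1 / fs)
R, S = np.fft.rfft(ref), np.fft.rfft(sample)

path = 0.02                                # acoustic path length (m), assumed
band = (freqs > 2.5e6) & (freqs < 3.5e6)   # band where both spectra have energy
alpha = 20 * np.log10(np.abs(R[band]) / np.abs(S[band])) / path  # dB/m
print(alpha[:5])
```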

Keywords: ultrasonic techniques, emulsion, characterization, droplet size

Procedia PDF Downloads 175
14499 Teacher-Child Interactions within Learning Contexts in Prekindergarten

Authors: Angélique Laurent, Marie-Josée Letarte, Jean-Pascal Lemelin, Marie-France Morin

Abstract:

This study aims at exploring teacher-child interactions within learning contexts in public prekindergartens in the province of Québec (Canada). It is based on previous research showing that teacher-child interactions in preschools have direct and determining effects on the quality of early childhood education and could directly or indirectly influence child development. However, throughout a typical preschool day, children experience different learning contexts that promote their learning opportunities. Depending on these specific contexts, teacher-child interactions could vary, for example, between free play and shared book reading. Indeed, some studies have found that teacher-directed or child-directed contexts might lead to significant variations in teacher-child interactions. This study drew upon both the bioecological and the Teaching Through Interactions frameworks. It was conducted through a descriptive and correlational design. Fifteen teachers were recruited to participate in the study. At Time 1, in October, they completed a diary to report the learning contexts they proposed in their classroom during a typical week. At Time 2, seven months later (May), they were videotaped three times during a typical morning class, with two weeks between each recording. The quality of teacher-child interactions was then coded with the Classroom Assessment Scoring System (CLASS) across the contexts identified. This tool measures three main domains of interactions (emotional support, classroom organization, and instructional support) and 10 dimensions scored on a scale from 1 (low quality) to 7 (high quality). Based on the teachers' reports, five learning contexts were identified: 1) shared book reading, 2) free play, 3) morning meeting, 4) teacher-directed activity (such as craft), and 5) snack. Based on preliminary statistical analyses, little variation was observed across the learning contexts for each domain of the CLASS. However, the instructional support domain showed lower scores during specific learning contexts, namely free play and teacher-directed activity. Practical implications for how preschool teachers could foster specific domains of interactions depending on learning contexts, to enhance children's social and academic development, will be discussed.

Keywords: teacher practices, teacher-child interactions, preschool education, learning contexts, child development

Procedia PDF Downloads 109
14498 Signal Amplification Using Graphene Oxide in Label Free Biosensor for Pathogen Detection

Authors: Agampodi Promoda Perera, Yong Shin, Mi Kyoung Park

Abstract:

The successful detection of pathogenic bacteria in blood provides important information for early detection, diagnosis, and the prevention and treatment of infectious diseases. Silicon microring resonators are refractive-index-based optical biosensors that provide highly sensitive, label-free, real-time multiplexed detection of biomolecules. We demonstrate the technique of using graphene oxide (GO) to enhance the signal output of a silicon microring optical sensor. The activated carboxylic groups in GO molecules bind directly to single-stranded DNA with an amino-modified 5' end. This conjugation amplifies the shift in resonant wavelength in a real-time manner. We designed a 21 bp capture probe for the strain Staphylococcus aureus and a longer complementary target sequence of 70 bp. The mismatched target sequence used was a 70 bp sequence from Streptococcus agalactiae. GO is added after the complementary binding of the probe and target. GO conjugates to the unbound single-stranded segment of the target and increases the wavelength shift on the silicon microring resonator. Furthermore, our results show that GO could successfully differentiate the mismatched DNA sequences from the complementary DNA sequence. Therefore, the proposed concept could effectively enhance the sensitivity of pathogen detection sensors.

Keywords: label free biosensor, pathogenic bacteria, graphene oxide, diagnosis

Procedia PDF Downloads 469
14497 An Historical Revision of Change and Configuration Management Process

Authors: Expedito Pinto De Paula Junior

Abstract:

Current systems such as artificial satellites, airplanes, automobiles, turbines, power systems and air traffic controls are becoming increasingly complex and/or highly integrated, as defined in SAE-ARP-4754A (the Society of Automotive Engineers standard on certification considerations for highly-integrated or complex aircraft systems). Among other processes, the development of such systems requires careful Change and Configuration Management (CCM) to establish and maintain product integrity. Understanding the maturity of the CCM process from a historical perspective is crucial for its better implementation in the hardware and software lifecycle. The sense of work organization, in all fields of development, is directly related to the order and interrelation of the parts, changes over time, and the recording of these changes. It is generally observed that engineers, administrators and managers invest more time in technical activities than in the organization of work. Moreover, these professionals are focused on solving complex problems with a purely technical bias. The CCM process is fundamental to the development, production and operation of new products, especially in safety-critical systems. The objective of this paper is to open a discussion of the history of CCM, based on standards from around the world, in order to understand and reflect on its importance across the years, its contribution to technology evolution, the maturity of organizations across the system lifecycle, and the benefits of CCM in avoiding errors and mistakes during the product lifecycle.

Keywords: changes, configuration management, historical, revision

Procedia PDF Downloads 201
14496 3D Steady and Transient Centrifugal Pump Flow within Ansys CFX and OpenFOAM

Authors: Clement Leroy, Guillaume Boitel

Abstract:

This paper presents a comparative benchmarking review of steady and transient three-dimensional (3D) flow computations in a centrifugal pump using commercial (Ansys CFX) and open source (OpenFOAM) computational fluid dynamics (CFD) software. In a centrifugal rotordynamic pump, the fluid enters the impeller along the rotation axis and is accelerated in order to increase the pressure, flowing radially outward into another stage, a vaned diffuser or a volute casing, from where it finally exits into a downstream pipe. Simulations are carried out at the best efficiency point (BEP) and at part load, for single-phase flow, with several turbulence models. The results are compared with overall performance reports from experimental data. The use of CFD technology in industry is still limited by the high computational costs, and even more by the high cost of commercial CFD software and high-performance computing (HPC) licenses. The main objective of the present study is to define an OpenFOAM methodology for high-quality 3D steady and transient turbomachinery CFD simulation in order to conduct a thorough time-accurate performance analysis. In addition, a detailed comparison between the computational methods and features of the latest Ansys release 18 and OpenFOAM is carried out to assess the accuracy and industrial applicability of those solvers. Finally, an automated connected workflow (IoT) for turbine blade applications is presented.

Keywords: benchmarking, CFX, internet of things, openFOAM, time-accurate, turbomachinery

Procedia PDF Downloads 205
14495 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Iran: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap in comprehensive country-level case studies using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions and gross domestic product (GDP) for Iran, using time series analysis over the years 1980-2010. To investigate the relationships between the variables, the paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen's maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. All variables in this study, except the CO2 emissions, show significant effects on GDP in the long term. The long-run equilibrium in the VECM suggests that the consumption of petroleum products and the direct combustion of crude oil and natural gas have positive impacts on GDP, while the consumption of electricity and coal has adverse impacts on GDP in the long term. In the short run, only electricity use enhances GDP, consistent with its long-run effect. Overall, the results partly support arguments that there are relationships between energy use and economic output, but the associations differ by source of energy in the case of Iran over the period 1980-2010. However, there is no significant relationship between the CO2 emissions and the GDP, or between the CO2 emissions and energy use, in either the short term or the long term.
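
A minimal sketch of the stationarity and cointegration tests named above, using Python's statsmodels; the data file and column names are hypothetical.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen

data = pd.read_csv("iran_1980_2010.csv", index_col="year")  # hypothetical
series = data[["gdp", "oil", "coal", "gas", "electricity", "co2"]]

# ADF test per series: p > 0.05 suggests a unit root (non-stationarity).
for name, col in series.items():
    stat, pvalue, *_ = adfuller(col.dropna())
    print(f"{name}: ADF = {stat:.2f}, p = {pvalue:.3f}")

# Johansen trace test for the number of cointegrating relations,
# a precursor to fitting the VECM.
joh = coint_johansen(series, det_order=0, k_ar_diff=1)
print(joh.lr1)  # trace statistics
print(joh.cvt)  # 90/95/99% critical values
```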

Keywords: CO2 emissions, energy consumption, GDP, Iran, time series analysis

Procedia PDF Downloads 592
14494 The Effects of Different Types of Cement on the Permeability of Deep Mixing Columns

Authors: Mojebullah Wahidy, Murat Olgun

Abstract:

In this study, four different types of cement are used to investigate the permeability of DMCs (Deep Mixing Columns) in clay. The clay used in this research is in the kaolin group, and the types of cement are CEM I 42.5 R normal Portland cement, CEM II/A-M (P-L) pozzolan-doped cement, CEM III/A 42.5 N blast furnace slag cement, and DMFC-800 fine-grained Portland cement. Firstly, rheological tests were performed on each cement, and a water/cement ratio of 0.9 was selected as the appropriate ratio. This ratio was used to prepare small-scale DMCs for all types of cement with contents of 6%, 9%, 12% and 15%, determined as percentages of the dry weight of the clay. For each type of cement, three samples were prepared at every percentage and cured for 7, 14 and 28 days before permeability testing. Based on the permeability tests on the small-scale DMCs, a cement content of 12% was selected for the big-scale DMCs. A total of five big-scale DMCs were prepared using 12% cement and cured for 28 days before permeability testing. The results of the permeability tests show that increasing the cement percentage and curing time of all DMCs decreases the permeability coefficient (k). Despite variable results across cement ratios and curing times, in general the samples treated with DMFC-800 fine-grained cement have the lowest permeability coefficient. Samples treated with the CEM II and CEM I cement types were the second and third lowest permeable samples, and the highest permeability coefficient belongs to the samples treated with the CEM III cement type.
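
For context, the coefficient k in a falling-head laboratory test follows from k = (aL/At)·ln(h1/h2); the sketch below evaluates it in Python with entirely hypothetical dimensions and head readings.

```python
import math

a = 0.5e-4           # standpipe cross-section (m^2), assumed
A = 78.5e-4          # sample cross-section (m^2), assumed
L = 0.10             # sample length (m), assumed
t = 3600.0           # elapsed time (s), assumed
h1, h2 = 1.00, 0.82  # initial and final heads (m), assumed

k = (a * L) / (A * t) * math.log(h1 / h2)  # falling-head formula
print(f"k = {k:.2e} m/s")
```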

Keywords: deep mixing column, rheological test, DMFC-800, permeability test

Procedia PDF Downloads 79
14493 Design of Aesthetic Acoustic Metamaterial Window Panel Based on Sierpiński Fractal Triangle for Sound-Silencing with Free Airflow

Authors: Sanjeet Kumar Singh, Shanatanu Bhattacharaya

Abstract:

The design of a high-efficiency, low-frequency (<1000 Hz) soundproof window or wall absorber which is transparent to airflow is presented. Due to the massive rise in human population and modernization, environmental noise has risen significantly worldwide. Prolonged noise exposure can cause severe physiological and psychological symptoms like nausea, headaches, fatigue, and insomnia. There has been continuous growth in building construction and infrastructure like offices, bus stops, and airports due to the growing urban population. Generally, a ventilated window is used for getting fresh air into a room, but unwanted noise comes along with it. Researchers have used traditional approaches like noise barrier mats in front of the window, or designed the entire window using sound-absorbing materials. However, such solutions are not aesthetically pleasing; at the same time, they are heavy and not adequate for low-frequency noise shielding. To address this challenge, we design a transparent hexagonal panel based on the Sierpiński fractal triangle, which is aesthetically pleasing, demonstrates a normal-incidence sound absorption coefficient above 0.96 around 700 Hz and a transmission loss of around 23 dB, and maintains air circulation through the triangular cutouts. We then present a concept for fabricating large acoustic panels for large-scale applications, which can help suppress urban noise pollution.
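
As a purely geometric illustration (not the authors' CAD workflow), the Sierpiński triangle underlying the cutout pattern can be generated with the classic "chaos game":

```python
import random

# Vertices of an equilateral triangle.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]

x, y = 0.25, 0.25  # arbitrary starting point
points = []
for _ in range(20000):
    vx, vy = random.choice(vertices)
    x, y = (x + vx) / 2, (y + vy) / 2  # jump halfway toward a random vertex
    points.append((x, y))

print(points[:3])  # scatter-plot these points to see the fractal emerge
```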

Keywords: acoustic metamaterials, noise, functional materials, ventilated

Procedia PDF Downloads 82
14492 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence

Authors: Nasser Salah Eldin Mohammed Salih Shebka

Abstract:

Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which are in turn reflected in the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding the human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering and knowledge generation. And although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, thus causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation conceptions. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are affected greatly by the very materialistic nature of the cognitive sciences. This nature caused what we define as methodological deficiencies in the theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to applications of knowledge representation theories throughout AI fields, but also extend to cover the scientific nature of the cognitive sciences. The methodological deficiencies we investigated in our work are: the segregation between cognitive abilities in knowledge-driven models; the insufficiency of the two-valued logic used to represent knowledge, particularly at the machine language level, in relation to the problematic issues of semantics and meaning theories; and the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires that we present a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge. This does not imply that it is easy to apply in structures of knowledge representation systems, but outlining a deficiency caused by the absence of such essential parameters can be considered an attempt to redefine knowledge representation conceptual approaches or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning shifts the role of the existence and time factors to the framework environment of the knowledge structure, and therefore to knowledge representation conceptual theories. Findings of our work indicate the necessity to differentiate between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as evaluation criteria to determine AI's capability to achieve its ultimate objectives. Ultimately, we argue some of the implications of our findings, which suggest that even though scientific progress may not have reached its peak, or human scientific evolution may not yet be able to discover evolutionary facts about the human brain and detailed descriptions of how it represents knowledge, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.

Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic

Procedia PDF Downloads 113
14491 Received Signal Strength Indicator Based Localization of Bluetooth Devices Using Trilateration: An Improved Method for the Visually Impaired People

Authors: Muhammad Irfan Aziz, Thomas Owens, Uzair Khaleeq uz Zaman

Abstract:

Instantaneous spatial localization for visually impaired people in dynamically changing environments, with unexpected hazards and obstacles, is the most demanding and challenging issue faced by navigation systems today. Since Bluetooth cannot utilize techniques like Time Difference of Arrival (TDOA) and Time of Arrival (TOA), it uses the received signal strength indicator (RSSI) to measure received signal strength (RSS). Measurements using RSSI can be improved significantly by improving the existing RSSI-related methodologies. Therefore, the current paper focuses on proposing an improved trilateration-based method for the localization of Bluetooth devices for visually impaired people. To validate the method, class 2 Bluetooth devices were used along with purpose-built software. Experiments were then conducted to obtain surface plots that showed the signal interference and other environmental effects. Finally, the results obtained show the surface plots for all Bluetooth modules used, with the strong and weak points depicted by color codes in red, yellow and blue. It was concluded that the suggested improved method of measuring RSS using trilateration helped not only to measure signal strength effectively but also to highlight how the signal strength can be influenced by atmospheric conditions such as noise, reflections, etc.
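
A minimal sketch of the trilateration step, assuming the common log-distance path-loss model for the RSSI-to-distance conversion and illustrative beacon positions and calibration constants (the paper's exact improvements are not reproduced here):

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance model: rssi = tx_power - 10*n*log10(d); constants assumed."""
    return 10 ** ((tx_power - rssi) / (10 * n))

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # assumed (m)
rssi = np.array([-65.0, -72.0, -70.0])                      # example readings
d = rssi_to_distance(rssi)

# Subtract the first circle equation from the others to linearize,
# then solve for the position by least squares.
A = 2 * (beacons[1:] - beacons[0])
b = (d[0] ** 2 - d[1:] ** 2
     + np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2))
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"estimated position: {pos}")
```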

Keywords: Bluetooth, indoor/outdoor localization, received signal strength indicator, visually impaired

Procedia PDF Downloads 134
14490 Jejunostomy and Protective Ileostomy in a Patient with Massive Necrotizing Enterocolitis: A Case Report

Authors: Rafael Ricieri, Rogerio Barros

Abstract:

Objective: This study reports a case of massive necrotizing enterocolitis in a six-month-old patient, requiring an ileostomy and a protective jejunostomy as a damage control measure in the first exploratory laparotomy for massive enterocolitis without a previous diagnosis. Methods: This study is a case report of the successful creation and closure of a protective jejunostomy. The low number of publications on this staged and risky surgical measure encouraged the team to study the indication and, especially, the correct time for closing the patient's protective jejunostomy. The main study instrument was the six-month-old patient's medical record. Results: Based on observation of the case described, the time to closure of the described procedure (protective jejunostomy) varies according to the degree of compromise of the patient's health status and is individual to each patient. Early closure, or failure to close, can create problems for the patient, since several complications can result from this closure, such as new intestinal perforations and hydroelectrolytic disturbances. Despite the risk of new perforations, we suggest closing the protective jejunostomy around the 14th day after the procedure, keeping the patient on broad-spectrum antibiotic therapy and absolute fasting, thus reducing the chances of new intestinal perforations. In association with the closure of the jejunostomy, a gastric tube for decompression is necessary, as are care in an intensive care unit and electrolyte replacement to maintain the stability of the case.

Keywords: jejunostomy, ileostomy, enterocolitis, pediatric surgery, gastric surgery

Procedia PDF Downloads 84
14489 Strategies for Good Governance during Crisis in Higher Education

Authors: Naziema B. Jappie

Abstract:

Over the last 23 years, leaders in government, political parties and universities have spent much time identifying and discussing various gaps in the system that impact systematically on students, especially those from historically Black communities. Equity and access to higher education were two critical aspects featured in achieving the transformation goals, together with a funding model for those previously disadvantaged. Free education was not a feasible option for the government. Institutional leaders in higher education face many demands on their time and resources. Often, crisis management planning, or consideration of being proactive and preventative, is not a standing agenda item. With many issues taking priority in academia, people become complacent and think that crisis may not affect them, or that they will cross the bridge when they get to it. Historically, South Africa has proven to be a country of militancy, strikes and protests in most industries, some leading to disastrous outcomes. Higher education was no different between October 2015 and late 2016, when the #RhodesMustFall movement, which morphed into the #FeesMustFall protest, challenged the establishment, changed the social fabric of universities and brought the sector to a standstill. Some institutional leaders and administrators were better at handling unexpected, high-consequence situations than others. For the most part, crisis leadership is viewed as a situation more than a style of leadership, and is usually characterized by crisis management. The objective of this paper is to show how institutions managed catastrophes of disastrous proportions, down to the unexpected incidents of 2015/2016. The content draws on the extensive crisis management experience of the presenter and includes an event timeline of the recent protests. Responses from interviews with institutional leaders, administrators and students provide first-hand information on their experiences and the outcomes. Students have tasted the power of organized action, and they demand immediate change; if not, the revolt will continue. This paper examines the approaches that guided institutional leaders and their crisis teams, and the sector's crisis response. It further considers whether the solutions effectively changed governance in higher education or minimized the need for more protests. The conclusion gives an insight into the future of higher education in South Africa from a leadership perspective.

Keywords: crisis, governance, intervention, leadership, strategies, protests

Procedia PDF Downloads 147
14488 Northern Nigeria Vaccine Direct Delivery System

Authors: Evelyn Castle, Adam Thompson

Abstract:

Background: In 2013, the Kano State Primary Health Care Management Board redesigned its routine immunization supply chain from a diffused pull model to direct-delivery push. The redesign addressed issues around stockouts and reduced the time health facility staff spent collecting vaccines and reporting on vaccine usage. The board sought the help of a third-party logistics provider (3PL) for twice-monthly deliveries from its cold store to 484 facilities across 44 local governments. eHA's Health Delivery Systems group formed a 3PL to serve 326 of these facilities in partnership with the State, focusing on designing and implementing a technology system throughout. Basic methodologies: GIS mapping: planning the delivery of vaccines to hundreds of health facilities requires detailed route planning for delivery vehicles. Mapping the road networks across Kano and Bauchi with a custom routing tool provided information for the optimization of deliveries, reducing the number of kilometers driven each round by 20% and thereby reducing cost and delivery time. Direct Delivery Information System: vaccine direct deliveries are facilitated through pre-round planning (driven by a health facility database, extensive GIS, and inventory workflow rules), manager and driver control panels for customizing delivery routines and reporting, a progress dashboard, schedules/routes, packing lists, delivery reports, and driver data collection applications. MOVE, a last-mile logistics management system: MOVE has made vaccine supply information management timely, accurate and actionable. It provides stock management workflow support, alerts management for cold chain exceptions and stockouts, and on-device analytics for health and supply chain staff. The software was built offline-first with a user-validated interface and experience. Deployed to hundreds of vaccine storage sites, the improved information tools help facilitate the process of system redesign and change management. Findings: stockouts reduced from 90% to 33%; health systems redesigned and vaccine supply managed for 68% of Kano's wards; near real-time reporting and data availability to track stock; paperwork burdens of health staff dramatically reduced; medicine available when the community needs it; consistent vaccination dates for children under one to prevent polio, yellow fever and tetanus; higher immunization rates mean lower infection rates; hundreds of millions of naira worth of vaccines successfully transported; fortnightly service to 326 facilities in 326 wards across 30 local government areas, covering 500 facilities in those wards; 6,031 cumulative deliveries; over 3.44 million doses transported; minimum travel distance covered in a round of delivery of 2,000 km and maximum of 6,297 km; 153,409 km travelled by 6 drivers; data captured and synchronized for the first time; data-driven decision making now possible. Conclusion: eHA's vaccine direct delivery has met challenges in Kano and Bauchi States and provided a reliable delivery service of vaccines that ensures health facilities can run vaccination clinics for children under one. eHA uses innovative technology that delivers vaccines from northern Nigerian zonal stores straight to healthcare facilities. It has helped healthcare workers spend less time managing supplies and more time delivering care, and will be rolled out nationally across Nigeria.

Keywords: direct delivery information system, health delivery system, GIS mapping, Northern Nigeria, vaccines

Procedia PDF Downloads 373
14487 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems

Authors: Rodolfo Lorbieski, Silvia Modesto Nassar

Abstract:

Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition and medical diagnostics. The objective of this article is to analyze an adapted Stacking method for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to Stacking was used, but with three levels, where the final decision-maker (level 2) performs its training by combining the outputs of a pair of tree-based meta-classifiers (level 1) from Bayesian families, which are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used, (1) accuracy, (2) area under the ROC curve, and (3) time, across three factors: (a) dataset, (b) experiment and (c) level. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
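
The nesting described above can be sketched with scikit-learn's StackingClassifier, as below; the classifier choices are illustrative, not the authors' exact tree-based/Bayesian configuration.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Level 1: each meta-classifier is itself a stack of level-0 base learners.
level1_a = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier()), ("nb", GaussianNB())],
    final_estimator=DecisionTreeClassifier())
level1_b = StackingClassifier(
    estimators=[("rf", RandomForestClassifier()), ("nb", GaussianNB())],
    final_estimator=GaussianNB())

# Level 2: the final decision-maker combines the level-1 ensembles.
level2 = StackingClassifier(
    estimators=[("a", level1_a), ("b", level1_b)],
    final_estimator=LogisticRegression(max_iter=1000))

X, y = load_iris(return_X_y=True)  # a standard multi-class dataset
print(cross_val_score(level2, X, y, cv=5).mean())
```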

Keywords: stacking, multi-layers, ensemble, multi-class

Procedia PDF Downloads 269
14486 Thermal-Mechanical Analysis of a Bridge Deck to Determine Residual Weld Stresses

Authors: Evy Van Puymbroeck, Wim Nagy, Ken Schotte, Heng Fang, Hans De Backer

Abstract:

The knowledge of residual stresses in welded bridge components is essential to determine their effect on fatigue life behavior. The residual stresses of an orthotropic bridge deck are determined by simulating the welding process with finite element modelling. The stiffener is placed on top of the deck plate before welding. A chained thermal-mechanical analysis is set up to determine the distribution of residual stresses for the bridge deck. First, a thermal analysis is used to determine the temperatures of the orthotropic deck at different time steps during the welding process. Twin-wire submerged arc welding is used to construct the orthotropic plate. A double-ellipsoidal volume heat source model is used to describe the heat flow through the material for a moving heat source. The heat input is used to determine the heat flux, which is applied as a thermal load during the thermal analysis. The heat flux for each element is calculated at different time steps to simulate the passage of the welding torch at the considered welding speed. This results in a time-dependent heat flux that is applied as a thermal loading. Thermal material behavior is specified by assigning the properties of the material as a function of the high temperatures reached during welding. Isotropic hardening behavior is included in the model. The thermal analysis simulates the heat introduced into the two plates of the orthotropic deck and calculates the temperatures during the welding process. After the calculation of the temperatures introduced during the welding process in the thermal analysis, a subsequent mechanical analysis is performed. For the boundary conditions of the mechanical analysis, the actual welding conditions are considered. Before welding, the stiffener is connected to the deck plate by tack welds, which are implemented in the model. The deck plate is allowed to expand freely in the upward direction while it rests on a firm and flat surface; this behavior is modelled using grounded springs. Furthermore, symmetry points and lines are used to prevent the model from moving freely in other directions. In the mechanical analysis, a mechanical material model is used, and the temperatures calculated during the thermal analysis are introduced as a time-dependent load. The connection of the elements of the two plates in the fusion zone is realized with a glued connection, which is activated when the welding temperature is reached. The mechanical analysis results in a distribution of the residual stresses. The distribution of the residual stresses of the orthotropic bridge deck is compared with results from the literature. The literature proposes uniform tensile yield stresses in the weld, while the finite element modelling showed tensile yield stresses at a short distance from the weld root or the weld toe. The chained thermal-mechanical analysis thus results in a distribution of residual weld stresses for an orthotropic bridge deck. In future research, the effect of these residual stresses on the fatigue life behavior of welded bridge components can be studied.
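
For reference, the double-ellipsoidal (Goldak) heat source named above has the closed form q(x, y, z) = 6√3 f Q / (a b c π√π) · exp(-3x²/c² - 3y²/a² - 3z²/b²), with separate front and rear semi-axis lengths cf, cr and fractions ff + fr = 2. The sketch below evaluates it in Python with illustrative parameters (not the paper's welding settings).

```python
import math

def goldak_flux(x, y, z, Q, a, b, cf, cr, ff=0.6, fr=1.4):
    """Volumetric heat flux (W/m^3) at (x, y, z) in torch coordinates;
    x points along the travel direction (front half: x >= 0). ff + fr = 2."""
    f, c = (ff, cf) if x >= 0 else (fr, cr)
    coeff = 6 * math.sqrt(3) * f * Q / (a * b * c * math.pi ** 1.5)
    return coeff * math.exp(-3 * (x**2 / c**2 + y**2 / a**2 + z**2 / b**2))

Q = 0.9 * 30e3  # effective heat input (W): assumed efficiency x arc power
q = goldak_flux(0.002, 0.0, 0.001, Q, a=0.004, b=0.003, cf=0.004, cr=0.008)
print(f"{q:.3e} W/m^3")
```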

Keywords: finite element modelling, residual stresses, thermal-mechanical analysis, welding simulation

Procedia PDF Downloads 171
14485 Accelerated Molecular Simulation: A Convolution Approach

Authors: Jannes Quer, Amir Niknejad, Marcus Weber

Abstract:

Computational drug design is often based on molecular dynamics simulations of molecular systems. Molecular dynamics can be used to simulate, e.g., the binding and unbinding event of a small drug-like molecule with regard to the active site of an enzyme or a receptor. However, the time-scale of the overall binding event is many orders of magnitude longer than the time-scale of the simulation. Thus, there is a need to speed up molecular simulations. In order to do so, the molecular dynamics trajectories have to be "steered" out of local minimizers of the potential energy surface, the so-called metastabilities, of the molecular system. Increasing the kinetic energy (temperature) is one possibility to accelerate simulated processes. However, with temperature the entropy of the molecular system increases, too, and this kind of steering is not directed enough to steer the molecule out of the minimum toward the saddle point. In this article, we give a new mathematical idea of how a potential energy surface can be changed in such a way that entropy is kept under control while the trajectories are still steered out of the metastabilities. In order to compute the unsteered transition behaviour based on a steered simulation, we propose to use extrapolation methods. Finally, we show mathematically that our method accelerates the simulations along the direction in which the curvature of the potential energy surface changes the most, i.e., from local minimizers towards saddle points.
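
A one-dimensional cartoon of the convolution idea (not the paper's actual construction): smoothing a double-well potential with a Gaussian kernel lowers the barrier between the metastable minima while preserving the landscape's overall shape.

```python
import numpy as np

x = np.linspace(-2.5, 2.5, 1001)
V = (x ** 2 - 1) ** 2                 # double-well potential

sigma = 0.3                           # kernel width, assumed
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()
V_smooth = np.convolve(V, kernel, mode="same")  # convolved surface

mid = np.abs(x).argmin()              # index of the barrier top at x = 0
print(f"barrier: {V[mid] - V.min():.3f} -> "
      f"{V_smooth[mid] - V_smooth.min():.3f}")
```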

Keywords: extrapolation, Eyring-Kramers, metastability, multilevel sampling

Procedia PDF Downloads 328
14484 Energy Management Method in DC Microgrid Based on the Equivalent Hydrogen Consumption Minimum Strategy

Authors: Ying Han, Weirong Chen, Qi Li

Abstract:

An energy management method based on the equivalent hydrogen consumption minimization strategy is proposed in this paper for a direct-current (DC) microgrid consisting of photovoltaic cells, fuel cells, energy storage devices, converters and DC loads. The rational allocation of fuel cell and battery power is achieved by adopting the equivalent minimum hydrogen consumption strategy, with full use of the power generated by the photovoltaic cells. Considering the balance of the battery's state of charge (SOC), the optimal power of the battery under different SOC conditions is obtained and the reference output power of the fuel cell is calculated. A droop control method based on a time-varying droop coefficient is then proposed to realize automatic charge and discharge control of the battery, balance the system power and maintain the bus voltage. The proposed control strategy is verified on the RT-LAB hardware-in-the-loop simulation platform. The simulation results show that the designed control algorithm can realize the rational allocation of DC microgrid energy and improve the stability of the system.
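
A minimal sketch, with illustrative numbers, of droop control in which the droop coefficient varies with battery SOC so the converter backs off as the battery approaches its limits; this shows the control form V = Vref - Rd·I rather than the paper's exact scheduling law.

```python
def droop_voltage(v_ref, i_out, soc, r0=0.5, soc_min=0.2, soc_max=0.9):
    """DC droop law V = V_ref - R_d(SOC) * I, with R_d growing as the
    SOC nears either limit (all constants assumed)."""
    headroom = max(min(soc - soc_min, soc_max - soc), 1e-3)
    r_d = r0 / headroom          # time-varying droop coefficient
    return v_ref - r_d * i_out

for soc in (0.25, 0.50, 0.85):
    print(soc, round(droop_voltage(48.0, 2.0, soc), 2))
```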

Keywords: DC microgrid, equivalent minimum hydrogen consumption strategy, energy management, time-varying droop coefficient, droop control

Procedia PDF Downloads 303
14483 A Mathematical Analysis of a Model in Capillary Formation: The Roles of Endothelial, Pericyte and Macrophages in the Initiation of Angiogenesis

Authors: Serdal Pamuk, Irem Cay

Abstract:

Our model is based on the theory of reinforced random walks coupled with Michaelis-Menten mechanisms, which view endothelial cell receptors as the catalysts for transforming both tumor- and macrophage-derived tumor angiogenesis factor (TAF) into proteolytic enzyme, which in turn degrades the basal lamina. The model consists of two main parts. The first part has seven differential equations (DEs) in one space dimension along the capillary, whereas the second part has the same number of DEs in two space dimensions in the extracellular matrix (ECM). We connect these two parts via boundary conditions that move the cells into the ECM in order to initiate capillary formation. But when does this movement begin? To address this question, we estimate the thresholds that activate the transport equations in the capillary. We do this by using a steady-state analysis of the TAF equation under some assumptions. Once these equations are activated, endothelial, pericyte and macrophage cells begin to move into the ECM for the initiation of angiogenesis. We believe that our results shed light on the mechanisms of cell migration which are crucial for tumor angiogenesis. Furthermore, we estimate the long-time tendency of these three cell types, and find that they tend to the transition probability functions as time evolves. We provide our numerical solutions, which are in good agreement with our theoretical results.

Keywords: angiogenesis, capillary formation, mathematical analysis, steady-state, transition probability function

Procedia PDF Downloads 156
14482 A Study on the Improvement of Mobile Device Call Buzz Noise Caused by Audio Frequency Ground Bounce

Authors: Jangje Park, So Young Kim

Abstract:

The market demand for audio quality in mobile devices continues to increase, and the audible buzz noise generated in time-division communication is a chronic problem that runs against this demand. In time-division communication, the RF power amplifier (RF PA) is driven at the audio-frequency cycle, and it influences the audio signal in various ways. In this paper, we measured the ground bounce noise generated by the peak current flowing through the ground network of the RF PA at the audio frequency; this noise was confirmed to be the cause of the audible buzz noise during a call. In addition, a grounding method for the microphone device that can improve the buzz noise is proposed. Considering that the level of the audio signal generated by the microphone device is -38 dBV at 94 dB sound pressure level (SPL), even ground bounce noise of several hundred µV will fall within the range of audible noise if it is induced into the audio amplifier. With the grounding method of the microphone device proposed in this paper, the audible buzz noise power density at the RF PA driving frequency was improved by more than 5 dB under the conditions of the printed circuit board (PCB) used in the experiment. A fundamental improvement method is thus presented for the buzzing noise during a mobile phone call.
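
A quick sanity check of the levels quoted above: converting -38 dBV to volts shows why a few hundred microvolts of induced ground bounce is significant next to the microphone signal (the 300 µV figure below is an assumed example).

```python
def dbv_to_volts(dbv):
    return 10 ** (dbv / 20)  # dBV is referenced to 1 V

mic = dbv_to_volts(-38)      # microphone output at 94 dB SPL
bounce = 300e-6              # assumed ground bounce amplitude (V)
print(f"mic = {mic * 1e3:.1f} mV, bounce/mic = {bounce / mic:.1%}")
```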

Keywords: audio frequency, buzz noise, ground bounce, microphone grounding

Procedia PDF Downloads 136
14481 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping

Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello

Abstract:

Batch processes are widely used in the food industry and have an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure, and are usually monitored using control charts based on multiway principal components analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; it is clear that proper determination of the reference set is key for the correct signalization of non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassifications of non-conforming batches in the conching phase may lead to significant financial losses; in such a context, the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process, and as a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables' trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts' evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when inputted into the KNN classification algorithm. The method of Kassidas, MacGregor and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. The KMT method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy in the testing portion using the KNN classification technique.
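
A minimal textbook DTW sketch of the kind used to synchronize batch trajectories of unequal duration before MPCA-style monitoring (generic dynamic programming, not the KMT variant itself):

```python
import numpy as np

def dtw_distance(a, b):
    """Cumulative DTW cost between two 1-D trajectories of any lengths."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

ref = np.sin(np.linspace(0, 3, 495))   # reference batch, 495 samples
new = np.sin(np.linspace(0, 3, 1170))  # longer batch, 1170 samples
print(dtw_distance(ref, new))          # small despite unequal durations
```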

Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration

Procedia PDF Downloads 167
14480 Skin-Dose Mapping for Patients Undergoing Interventional Radiology Procedures: Clinical Experimentations versus a Mathematical Model

Authors: Aya Al Masri, Stefaan Carpentier, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: During an interventional radiology (IR) procedure, the patient's skin dose may become high enough for burns, necrosis and ulceration to appear. In order to prevent these deterministic effects, an accurate calculation of the patient skin-dose mapping is essential. For most machines, the dose-area product (DAP) and the fluoroscopy time are the only information available to the operator, and these two parameters are a very poor indicator of the peak skin dose. We developed a mathematical model that reconstructs the magnitude (delivered dose), shape, and localization of each irradiation field on the patient's skin. In case a critical dose is exceeded, the system generates warning alerts. We present the results of its comparison with clinical studies. Materials and methods: Two series of comparisons of the skin-dose mapping of our mathematical model with clinical studies were performed. 1. First, clinical tests were performed on patient phantoms. Gafchromic films were placed on the table of the IR machine under PMMA plates (thickness = 20 cm) that simulate the patient. After irradiation, the film darkening is proportional to the radiation dose received by the patient's back and reflects the shape of the X-ray field. After film scanning and analysis, the exact dose value can be obtained at each point of the mapping. Four experiments were performed, constituting a total of 34 acquisition incidences covering all possible exposure configurations. 2. Second, clinical trials were launched on real patients during chronic total occlusion (CTO) procedures, for a total of 80 cases. Gafchromic films were placed at the backs of the patients. We compared the dose values, as well as the distribution and shape of the irradiation fields, between the skin-dose mapping of our mathematical model and the Gafchromic films. Results: The comparison between the dose values shows a difference of less than 15%. Moreover, our model shows very good geometric accuracy: all fields have the same shape, size and location (uncertainty < 5%). Conclusion: This study shows that our model is a reliable tool to warn physicians when a high radiation dose is reached, so that deterministic effects can be avoided.
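
As background on why DAP alone is a poor skin-dose indicator: DAP is dose integrated over the beam area, so the same DAP concentrated in a smaller field means a higher local skin dose, which is what field-by-field reconstruction captures. The numbers below are illustrative only (backscatter and attenuation ignored).

```python
dap = 50.0               # cumulative dose-area product (Gy*cm^2), example
for area in (100, 25):   # field area at the skin (cm^2)
    dose = dap / area    # mean entrance dose over that field (Gy)
    print(f"field {area} cm^2 -> mean skin dose {dose:.1f} Gy")
```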

Keywords: clinical experimentation, interventional radiology, mathematical model, patient's skin-dose mapping

Procedia PDF Downloads 140
14479 Viability Study of the Use of Solar Energy for Water Heating in Homes in Brazil

Authors: Elmo Thiago Lins Cöuras Ford, Valentina Alessandra Carvalho do Vale

Abstract:

The sun is an inexhaustible source, and harnessing its potential both for heating and for power generation is one of the most promising and necessary alternatives, mainly due to environmental issues. It should be noted, however, that solar energy has always been present in the generation of energy on the planet, if only indirectly, as it is responsible for virtually all other energy sources: it drives the evaporation stage of the water cycle, which allows impoundment and the consequent generation of electricity (hydroelectricity); winds are caused by large-scale atmospheric circulation driven by solar radiation; and oil, coal and natural gas were generated from the remains of plants and animals that originally obtained the energy needed for their development from solar radiation. Thus, the idea of using solar energy for practical purposes for the benefit of humankind is not new, as it has accompanied us since the beginning of time: the sun was always of the utmost importance in the design of shelters and homes, which were constructed taking sunlight into consideration, a practice gradually lost through the centuries until buildings came to be designed completely independently of the sun. The rigors of climate still had to be countered, however, now artificially, through additional energy-consuming installations that are today seen as unsustainable. This paper presents a study on the feasibility of using solar energy for heating water in homes, developing a simplified methodology covering the mode of operation of solar water heaters, the existing solar potential and alternative systems in Brazil, the international market, and the barriers encountered.

Keywords: solar energy, solar heating, solar project, water heating

Procedia PDF Downloads 333
14478 Rhythm-Reading Success Using Conversational Solfege

Authors: Kelly Jo Hollingsworth

Abstract:

Conversational Solfege, a research-based, 12-step music literacy instructional method using the sound-before-sight approach, was used to teach rhythm-reading to 128 second-grade students at a public school in the southeastern United States. For each step, multiple scripted techniques are supplied to teach each skill. Unit one, which covers quarter-note and barred eighth-note rhythms, was the focus of this study. During regular weekly music instruction, students completed method steps one through five, which include aural discrimination, decoding familiar and unfamiliar rhythm patterns, and improvising rhythmic phrases using quarter notes and barred eighth notes. Intact classes were randomly assigned to two treatment groups for teaching steps six through eight: the visual presentation and identification of quarter notes and barred eighth notes, visually presenting and decoding familiar patterns, and visually presenting and decoding unfamiliar patterns using said notation. For three weeks, students practiced steps six through eight during regular weekly music class. One group spent five minutes of class time on steps six through eight technique work, while the other group spent ten minutes of class time practicing the same techniques. A pretest and posttest were administered, and ANOVA results reveal that both the five-minute group (p < .001) and the ten-minute group (p < .001) reached statistical significance, suggesting Conversational Solfege is an efficient, effective approach to teaching rhythm-reading to second-grade students. After two weeks of no instruction, students were retested to measure retention. Using a repeated-measures ANOVA, both groups reached statistical significance (p < .001) on the second posttest, suggesting both the five-minute and ten-minute groups retained rhythm-reading skill after two weeks of no instruction. Statistical significance was not reached between groups (p = .252), suggesting five minutes is as effective as ten minutes of rhythm-reading practice using Conversational Solfege techniques. Future research includes replicating the study with other grades and units in the text.

Keywords: conversational solfege, length of instructional time, rhythm-reading, rhythm instruction

Procedia PDF Downloads 157
14477 Influence of Densification Process and Material Properties on Final Briquettes Quality from Fast-Growing Willows

Authors: Peter Križan, Juraj Beniak, Ľubomír Šooš, Miloš Matúš

Abstract:

Biomass treatment through densification is a suitable and important technology prior to effective energy recovery. The densification process of biomass is significantly influenced by various technological and material parameters, which are ultimately reflected in the quality of the final solid biofuel. The paper deals with experimental research into the relationship between technological and material parameters during the densification of fast-growing trees, namely fast-growing willow. The main goal of the presented experimental research is to determine the relationship between pressing pressure and raw-material fraction size from the point of view of final briquette density. The experimental research was carried out by single-axis densification. The impact of fraction size, in interaction with pressing pressure and stabilization time, on the quality properties of briquettes was determined. The interaction of these parameters affects the quality of the final solid biofuel (briquettes). Both from the point of view of briquette production and from that of densification machine design, it is very important to understand the mutual interaction of these parameters and their effect on final briquette quality. The experimental findings presented here show the importance of the mentioned parameters during the densification process.
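
As a rough illustration of how such parameter interactions might be tested, the following sketch fits an ordinary least squares model with a pressure × fraction-size interaction term. The measurements and parameter values are invented for the example and are not the paper's data.

```python
# Sketch of testing parameter interactions on briquette density with an
# OLS regression; the runs below are fabricated for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical single-axis densification runs
data = pd.DataFrame({
    "pressure_MPa": [95, 95, 159, 159, 223, 223, 286, 286],
    "fraction_mm":  [1, 4, 1, 4, 1, 4, 1, 4],
    "density_kgm3": [980, 940, 1060, 1010, 1120, 1070, 1150, 1100],
})

# Main effects plus a pressure x fraction-size interaction term
model = smf.ols("density_kgm3 ~ pressure_MPa * fraction_mm", data=data).fit()
print(model.summary())
```

A significant interaction coefficient in such a model would indicate that the effect of pressing pressure on briquette density depends on the fraction size used.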

Keywords: briquettes density, densification, fraction size, pressing pressure, stabilization time

Procedia PDF Downloads 368
14476 Workforce Optimization: Fair Workload Balance and Near-Optimal Task Execution Order

Authors: Alvaro Javier Ortega

Abstract:

A large number of companies face the challenge of matching highly skilled professionals to high-end positions through human resource deployment professionals. However, when the list of professionals and tasks to be matched grows beyond a few dozen, the result of this process is far from optimal and takes a long time to produce. An automated assignment algorithm for this workforce management problem is therefore needed. The majority of companies are divided into several sectors or departments, where trained employees with different experience levels deal with a large number of tasks daily. The execution order of all tasks is also of material consequence, since some tasks can only run once the result of another task is available; a wrong execution order thus leads to long waiting times between consecutive tasks. The desired goal is therefore to create accurate matches and a near-optimal execution order that maximizes the number of tasks performed and minimizes the idle time of expensive skilled employees. The problem described above can be modeled as a mixed-integer non-linear program (MINLP), as will be shown in detail in this paper. A large number of MINLP algorithms have been proposed in the literature. Here, genetic algorithm solutions are considered, and a comparison between two different mutation approaches is presented. Simulated results considering different complexity levels of assignment decisions show the appropriateness of the proposed model.
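
As a rough sketch of the kind of genetic algorithm the abstract describes, the toy below assigns tasks to employees via a permutation encoding with a swap mutation (one plausible mutation operator; the paper's two operators are not specified here). The cost matrix, population size, and generation count are illustrative assumptions.

```python
# Minimal genetic-algorithm sketch for a task-assignment problem; the cost
# matrix and GA parameters are illustrative, not the paper's model.
import random

random.seed(1)
N = 8  # tasks == employees, for a toy permutation encoding
# cost[i][j]: cost of employee j performing task i (fabricated)
cost = [[random.randint(1, 20) for _ in range(N)] for _ in range(N)]

def fitness(perm):
    # Lower total assignment cost is better
    return sum(cost[task][emp] for task, emp in enumerate(perm))

def swap_mutation(perm):
    # Swap two positions -- one mutation operator a GA comparison might use
    a, b = random.sample(range(N), 2)
    child = perm[:]
    child[a], child[b] = child[b], child[a]
    return child

def evolve(pop_size=50, generations=200):
    pop = [random.sample(range(N), N) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                   # ascending: best first
        survivors = pop[: pop_size // 2]        # truncation selection
        children = [swap_mutation(random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("assignment:", best, "cost:", fitness(best))
```

A full implementation would add crossover and a precedence-aware decoding of the permutation so that task-ordering constraints are respected.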

Keywords: employees, genetic algorithm, industry management, workforce

Procedia PDF Downloads 168
14475 Satisfaction Level of Teachers on the Human Resource Management Practices

Authors: Mark Anthony A. Catiil

Abstract:

Teachers are the principal actors in the delivery of quality education to learners. Unfortunately, as time goes by, some of them lose motivation at work; absenteeism, tardiness, undertime, and non-compliance with school policies are some of the results. There is, therefore, a need to review the different human resource management practices of the school that contribute to teachers' work satisfaction and motivation. Hence, this study determined the level of satisfaction of teachers with the human resource management practices of Gingoog City Comprehensive National High School. This mixed-methods research focused on 45 teachers chosen using a stratified random sampling technique. Reliability-tested questionnaires, interviews, and focus group discussions were used to gather the data. Results revealed that the majority of the respondents are female, hold the Teacher I position, have earned MA units, and have served for 11-20 years. Among the school's human resource management practices, the respondents rated recruitment and selection with the lowest satisfaction (mean = 2.15; n = 45). This could mean that most of the school's recruitment and selection practices are not well communicated, disseminated, and implemented. On the other hand, the school's retirement practices were rated with the highest satisfaction (mean = 2.73; n = 45), which could mean that most of them are communicated, disseminated, implemented, and functional. It was recommended that the existing human resource management practices on recruitment and selection be reviewed to find their deficiencies and possible improvements. Future researchers may also conduct a comparative study of private and public schools in Gingoog City on the same topic.
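
For readers unfamiliar with the sampling step mentioned above, the short sketch below draws a proportional stratified random sample with pandas; the roster, strata, and sampling fraction are fabricated for illustration.

```python
# Sketch of proportional stratified random sampling; the roster below is
# a fabricated example, not the study's respondent list.
import pandas as pd

roster = pd.DataFrame({
    "teacher": [f"T{i:02d}" for i in range(1, 91)],
    "position": ["Teacher I"] * 50 + ["Teacher II"] * 25 + ["Teacher III"] * 15,
})

# Sample half of each stratum so every position is proportionally represented
sample = roster.groupby("position").sample(frac=0.5, random_state=7)
print(sample["position"].value_counts())
```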

Keywords: education, human resource management practices, satisfaction, teachers

Procedia PDF Downloads 129
14474 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection

Authors: Muhammad Ali

Abstract:

Cyber-attacks and anomaly detection on Internet of Things (IoT) infrastructure are emerging concerns in the domain of data-driven intrusion detection. Rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data-type probing, malicious operation, DDoS, scanning, spying, and wrong setup are attacks and anomalies that can cause an IoT system to fail. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction, yet IoT devices expose a wide variety of new cyber security attack vectors in network traffic. For IoT development, and mainly for smart IoT applications, intelligent processing and analysis of data are necessary to keep such systems secure. We therefore train and compare several machine learning models for accurately predicting attacks and anomalies on IoT systems, using ANOVA-based feature selection to reduce the number of features the prediction models must evaluate in network traffic and so help protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, DT, and RF, evaluated for the most satisfactory test accuracy with fast detection. The evaluated ML metrics include precision, recall, F1 score, FPR, NPV, geometric mean (GM), MCC, and AUC-ROC. The Random Forest algorithm achieved the best results, with an accuracy of 99.98% and the shortest prediction time.
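
A minimal sketch of such a pipeline, pairing an ANOVA F-test feature selector with a Random Forest classifier in scikit-learn, is shown below. The data is synthetic, and the retained feature count k=10 is an assumption, not the paper's configuration.

```python
# Sketch of ANOVA-based feature selection feeding a Random Forest, in the
# spirit of the pipeline described above; the data is synthetic, not an
# IoT traffic capture.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for labeled IoT network-traffic features
X, y = make_classification(n_samples=5000, n_features=30, n_informative=10,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# ANOVA F-test keeps the k most class-discriminative features
clf = make_pipeline(SelectKBest(f_classif, k=10),
                    RandomForestClassifier(n_estimators=100, random_state=42))
clf.fit(X_tr, y_tr)

y_pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, y_pred))
print(classification_report(y_te, y_pred))
```

Swapping the final estimator lets the same pipeline compare KNN, SVM, NB, and DT under identical feature selection.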

Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection

Procedia PDF Downloads 125