Search results for: gait time
15127 Water Sources in 3 Local Municipalities of O. R. Tambo District Municipality, South Africa: A Comparative Study
Authors: Betek Cecilia Kunseh, Musampa Christopher
Abstract:
Despite significant investment and important progress, access to safe potable water continues to be one of the most pressing challenges for rural communities in O. R. Tambo District Municipality. This is coupled with the low income of most residents and the government's policy, which obliges municipalities to supply basic water, usually set at 6 kilolitres per month per household, free of charge. During the research, data was collected from three local municipalities of O. R. Tambo, i.e. King Sabata Dalindyebo, Mhlontlo and Ingquza Hill local municipalities. According to the results, significant differences exist between the sources of water in the different local municipalities from which data was collected. The chi-square test was used to calculate the differences between the sources of water, and the calculated value for the District Municipality was 18.77, which is more than the stipulated critical value of 3.84. More people in Mhlontlo Local Municipality got water from the taps, while a greater percentage of households in King Sabata Dalindyebo and Ingquza Hill local municipalities got their water from natural sources. 77% of the sample population complained that there have been no improvements in water provision because they still get water from natural sources, and even the remaining 33% that were getting water from the taps still have to depend on natural sources because the taps are broken most of the time and it takes a long time to fix them.
Keywords: availability, water, sources, supply
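The chi-square comparison described above can be reproduced in outline. The following is a minimal sketch assuming a hypothetical 3x2 contingency table of municipality versus water source; the counts are illustrative placeholders, not the study's data.

```python
# Minimal sketch: chi-square test of independence between municipality
# and water source. The counts below are hypothetical placeholders.
from scipy.stats import chi2, chi2_contingency

observed = [
    [120, 80],   # Mhlontlo: [tap, natural source]
    [60, 140],   # King Sabata Dalindyebo
    [55, 150],   # Ingquza Hill
]

stat, p, dof, expected = chi2_contingency(observed)
critical = chi2.ppf(0.95, dof)  # critical value at the 5% level
print(f"chi2 = {stat:.2f}, dof = {dof}, critical = {critical:.2f}, p = {p:.4f}")
# The null hypothesis (same source distribution across municipalities)
# is rejected when stat exceeds the critical value.
```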
Procedia PDF Downloads 341
15126 Enhanced Poly Fluoroalkyl Substances Degradation in Complex Wastewater Using Modified Continuous Flow Nonthermal Plasma Reactor
Authors: Narasamma Nippatlapallia
Abstract:
Communities across the world are desperate to get their environment free of toxic per- and polyfluoroalkyl substances (PFAS), especially when these chemicals are in aqueous media. In the present study, two PFAS of different chain lengths (PFHxA (C6), PFDA (C10)) are selected for degradation using a modified continuous flow nonthermal plasma. The results showed 82.3% PFHxA and 94.1% PFDA degradation efficiencies, respectively. The defluorination efficiency is also evaluated, which is 28% and 34% for PFHxA and PFDA, respectively. The results clearly indicate that the structure of PFAS has a great impact on degradation efficiency. The effect of flow rate was also studied: with an increase in flow rate beyond 2 mL/min, a decrease in the degradation efficiency of the targeted PFAS was noticed. PFDA degradation decreased from 85% to 42%, and PFHxA decreased from 64% to 32% with an increase in flow rate from 2 to 5 mL/min. Similarly, with increasing flow rate, the percentage defluorination decreased for both the C10 and C6 compounds. This observation can be attributed mainly to the change in residence time (contact time). Real water/wastewater is a composition of various organic and inorganic ions that may affect the activity of oxidative species such as OH· radicals on the target pollutants. Therefore, it is important to consider radical-quenching chemicals to understand the efficiency of the reactor. In gas-liquid NTP discharge reactors, OH·, NO2−, O·, O3, H2O2, and H· are often considered as reactive species for oxidation and reduction of pollutants. In this work, the role played by two distinct ·OH scavengers, ethanol and glycerol, on PFAS percentage degradation and defluorination efficiency (i.e., fluorine removal) was studied. The addition of scavenging agents to the PFAS solution diminished the PFAS degradation to different extents depending on the target compound's molecular structure. In comparison with the degradation of the PFAS-only solution, the addition of 1.25 M ethanol inhibited C10 and C6 degradation by 8% and 12%, respectively. This research was supported with energy efficiency, production rate, specific yield, fluoride, and PFAS concentration analyses with respect to the optimum hydraulic retention time (HRT) of the continuous flow reactor.
Keywords: wastewater, PFAS, nonthermal plasma, mineralization, defluorination
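The flow-rate effect reported above is attributed to residence (contact) time. As a rough illustration, a pseudo-first-order residence-time model captures the qualitative trend; the reactor volume V and rate constant k below are assumed values, not parameters reported in the study.

```python
# Illustrative first-order residence-time model: degradation = 1 - exp(-k*tau),
# with tau = V / Q. V and k are assumed for the sketch only.
import numpy as np

V = 10.0   # effective plasma-contact volume, mL (assumed)
k = 0.35   # pseudo-first-order rate constant, 1/min (assumed)

for Q in (2.0, 3.0, 4.0, 5.0):        # flow rate, mL/min
    tau = V / Q                        # residence (contact) time, min
    eta = 1.0 - np.exp(-k * tau)       # fraction degraded
    print(f"Q = {Q:.0f} mL/min -> tau = {tau:.1f} min, degradation ~ {eta:.0%}")
```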
Procedia PDF Downloads 29
15125 Thermal Ageing of a 316 Nb Stainless Steel: From Mechanical and Microstructural Analyses to Thermal Ageing Models for Long Time Prediction
Authors: Julien Monnier, Isabelle Mouton, Francois Buy, Adrien Michel, Sylvain Ringeval, Joel Malaplate, Caroline Toffolon, Bernard Marini, Audrey Lechartier
Abstract:
Chosen to design and assemble massive components for the nuclear industry, the 316 Nb austenitic stainless steel (also called 316 Nb) suits this function well thanks to its mechanical, heat and corrosion handling properties. However, these properties might change during the steel's life due to thermal ageing causing changes within its microstructure. Our main purpose is to determine if the 316 Nb will keep its mechanical properties after exposure to industrial temperatures (around 300 °C) during a long period of time (< 10 years). The 316 Nb is composed of different phases, which are austenite as the main phase, niobium carbides, and ferrite remaining from the ferrite to austenite transformation during the process. Our purpose is to understand thermal ageing effects on the material microstructure and properties and to propose a model predicting the evolution of 316 Nb properties as a function of temperature and time. To do so, based on Fe-Cr and 316 Nb phase diagrams, we studied the thermal ageing of 316 Nb steel alloys (1%v of ferrite) and welds (10%v of ferrite) for various temperatures (350, 400, and 450 °C) and ageing times (from 1 to 10,000 hours). Higher temperatures have been chosen to reduce thermal treatment time by exploiting a kinetic effect of temperature on 316 Nb ageing without modifying reaction mechanisms. Our results from early times of ageing show no effect on the steel's global properties linked to austenite stability, but an increase of ferrite hardness during thermal ageing has been observed. It has been shown that austenite's crystalline structure (fcc) grants it thermal stability; however, the ferrite crystalline structure (bcc) favours iron-chromium demixing and the formation of iron-rich and chromium-rich phases within the ferrite. Observations of thermal ageing effects on the ferrite's microstructure were necessary to understand the changes caused by the thermal treatment. Analyses have been performed using different techniques like Atom Probe Tomography (APT) and Differential Scanning Calorimetry (DSC). A demixing of the alloy's elements leading to the formation of iron-rich (α phase, bcc structure), chromium-rich (α' phase, bcc structure), and nickel-rich (fcc structure) phases within the ferrite has been observed and associated with the increase of the ferrite's hardness. APT results give information about the phases' volume fractions and compositions, allowing us to associate hardness measurements with the volume fractions of the different phases and to set up a way to calculate the growth rate of α' and nickel-rich particles depending on temperature. The same methodology has been applied to DSC results, which allowed us to measure the enthalpy of α' phase dissolution between 500 and 600 °C. To summarize, we started from mechanical and macroscopic measurements and explained the results through microstructural study. The data obtained has been matched to CALPHAD models' predictions and used to improve these calculations and employ them to predict the change of 316 Nb properties during the industrial process.
Keywords: stainless steel characterization, atom probe tomography APT, Vickers hardness, differential scanning calorimetry DSC, thermal ageing
Procedia PDF Downloads 93
15124 IX Operation for the Concentration of Low-Grade Uranium Leach Solution
Authors: Heba Ahmed Nawafleh
Abstract:
In this study, two commercial resins were evaluated to concentrate uranium from real solutions that were produced from an alkaline leaching process of carbonate deposits. The adsorption was examined using a batch process. Different parameters were evaluated, including initial pH, contact time, temperature, adsorbent dose, and finally, uranium initial concentration. Both resins were effective and selective for uranium ions from the tested leaching solution. The adsorption isotherm data were well fitted for both resins using the Langmuir model. Thermodynamic functions (Gibbs free energy change ΔG, enthalpy change ΔH, and entropy change ΔS) were calculated for the adsorption of uranium. The results show that the adsorption process is endothermic and spontaneous, and that chemisorption took place for both resins. The kinetic studies showed that the equilibrium time for uranium ions is about two hours, where the maximum uptake levels were achieved. The kinetics studies were carried out for the adsorption of U ions, and the data was found to follow pseudo-second-order kinetics, which indicates that the adsorption of U ions was chemically controlled. In addition, the reusability (adsorption/desorption) process was tested for both resins for five cycles; these adsorbents maintained removal efficiencies close to the first-cycle efficiencies of about 91% and 80%.
Keywords: uranium, adsorption, ion exchange, thermodynamic and kinetic studies
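A Langmuir isotherm fit of the kind used above can be sketched as follows. This is a minimal illustration; the (Ce, q) data points and starting guesses are hypothetical placeholders, not the study's measurements.

```python
# Minimal sketch of a Langmuir isotherm fit, q = qmax*K*Ce / (1 + K*Ce).
# Data points are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, K):
    return qmax * K * Ce / (1.0 + K * Ce)

Ce = np.array([5, 10, 25, 50, 100, 200], dtype=float)   # mg/L, assumed
q  = np.array([18, 30, 52, 68, 80, 88], dtype=float)    # mg/g, assumed

(qmax, K), _ = curve_fit(langmuir, Ce, q, p0=(100.0, 0.01))
print(f"qmax = {qmax:.1f} mg/g, K = {K:.4f} L/mg")
# Thermodynamics then follow from the equilibrium constant at temperature T:
# dG = -R*T*ln(K'), with dG = dH - T*dS indicating spontaneity/endothermicity.
```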
Procedia PDF Downloads 92
15123 Inverse Heat Conduction Analysis of Cooling on Run-Out Tables
Authors: M. S. Gadala, Khaled Ahmed, Elasadig Mahdi
Abstract:
In this paper, we introduced a gradient-based inverse solver to obtain the missing boundary conditions based on the readings of internal thermocouples. The results show that the method is very sensitive to measurement errors and becomes unstable when small time steps are used. The artificial neural networks are shown to be capable of capturing the whole thermal history on the run-out table but are not very effective in restoring the detailed behavior of the boundary conditions. Also, they behave poorly in nonlinear cases and where the boundary condition profile is different. GA and PSO are more effective in finding a detailed representation of the time-varying boundary conditions, as well as in nonlinear cases. However, their convergence takes longer. A variation of the basic PSO, called CRPSO, showed the best performance among the three versions. Also, PSO proved to be effective in handling noisy data, especially when its performance parameters were tuned. An increase in the self-confidence parameter was also found to be effective, as it increased the global search capabilities of the algorithm. RPSO was the most effective variation in dealing with noise, closely followed by CRPSO. The latter variation is recommended for inverse heat conduction problems, as it combines the efficiency and effectiveness required by these problems.
Keywords: inverse analysis, function specification, neural networks, particle swarm, run-out table
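A minimal PSO sketch of the kind applied to this inverse problem is given below, with a toy objective standing in for the paper's forward conduction solver; the inertia (w), self-confidence (c1) and social (c2) parameters are illustrative assumptions.

```python
# Minimal PSO sketch: particles search for boundary-condition parameters
# that minimize the misfit to thermocouple readings. The objective here
# is a toy stand-in, not the paper's conduction model.
import numpy as np

rng = np.random.default_rng(0)

def misfit(params):                       # toy objective (assumed)
    target = np.array([3.0, -1.5])
    return np.sum((params - target) ** 2)

n, dim, iters = 30, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, self-confidence, social
x = rng.uniform(-10, 10, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()
pbest_f = np.array([misfit(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([misfit(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("recovered parameters:", np.round(gbest, 3))
```

Raising c1 strengthens each particle's pull toward its own best point, which is the "increase in the self-confidence parameter" noted above as improving global search.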
Procedia PDF Downloads 240
15122 Safe and Scalable Framework for Participation of Nodes in Smart Grid Networks in a P2P Exchange of Short-Term Products
Authors: Maciej Jedrzejczyk, Karolina Marzantowicz
Abstract:
The traditional utility value chain has been transformed over the last few years into unbundled markets. Increased distributed generation of energy is one of the considerable challenges faced by Smart Grid networks. New sources of energy introduce volatile demand response, which has a considerable impact on traditional middlemen in the E&U market. The purpose of this research is to search for ways to allow near-real-time electricity markets to transact with surplus energy based on accurate time-synchronous measurements. A proposed framework evaluates the use of secure peer-to-peer (P2P) communication and distributed transaction ledgers to provide a flat hierarchy and allow real-time insights into present and forecasted grid operations, as well as the state and health of the network. An objective is to achieve dynamic grid operations with more efficient resource usage, higher security of supply and a longer grid infrastructure life cycle. Methods used for this study are based on comparative analysis of different distributed ledger technologies in terms of scalability, transaction performance, pluggability with external data sources, data transparency, privacy, end-to-end security and adaptability to various market topologies. An intended output of this research is a design of a framework for a safer, more efficient and scalable Smart Grid network which bridges the gap between traditional components of the energy network and individual energy producers. Results of this study are ready for detailed measurement testing, a likely follow-up in separate studies. New platforms for Smart Grid achieving measurable efficiencies will allow for the development of new types of Grid KPI, multi-smart grid branches, markets, and businesses.
Keywords: autonomous agents, distributed computing, distributed ledger technologies, large scale systems, micro grids, peer-to-peer networks, self-organization, self-stabilization, smart grids
Procedia PDF Downloads 300
15121 Compression-Extrusion Test to Assess Texture of Thickened Liquids for Dysphagia
Authors: Jesus Salmeron, Carmen De Vega, Maria Soledad Vicente, Mireia Olabarria, Olaia Martinez
Abstract:
Dysphagia or difficulty in swallowing mostly affects elderly people: 56-78% of the institutionalized and 44% of the hospitalized. Liquid food thickening is a necessary measure in this situation because it reduces the risk of penetration-aspiration. Until now, and as proposed by the American Dietetic Association in 2002, possible consistencies have been categorized in three groups according to their viscosity: nectar (50-350 mPa·s), honey (350-1750 mPa·s) and pudding (>1750 mPa·s). The adequate viscosity level should be identified for every patient, according to her/his impairment. Nevertheless, a systematic review on dysphagia diet performed recently indicated that there is no evidence to suggest that there is any transition of clinical relevance between the three levels proposed. It was also stated that other physical properties of the bolus (slipperiness, density or cohesiveness, among others) could influence swallowing in affected patients and could contribute to the amount of remaining residue. Texture parameters need to be evaluated as a possible alternative to viscosity. The aim of this study was to evaluate the instrumental extrusion-compression test as a possible tool to characterize changes over time in water thickened with various products and in the three theoretical consistencies. Six commercial thickeners were used: NM® (NM), Multi-thick® (M), Nutilis Powder® (Nut), Resource® (R), Thick&Easy® (TE) and Vegenat® (V), all of them with a modified starch base. Only one of them, Nut, also had 6.4% of gum (guar, tara and xanthan). They were prepared as indicated in the instructions of each product, dispensing the corresponding amount for nectar, honey and pudding consistencies in 300 mL of tap water at 18-20 °C. The mixture was stirred for about 30 s. Once it was homogeneously spread, it was dispensed in 30 mL plastic glasses, always to the same height. Each of these glasses was used as a measuring point. Viscosity was measured using a rotational viscometer (ST-2001, Selecta, Barcelona). The extrusion-compression test was performed using a TA.XT2i texture analyzer (Stable Micro Systems, UK) with a 25 mm diameter cylindrical probe (SMSP/25). Penetration distance was set at 10 mm and speed at 3 mm/s. Measurements were made at 1, 5, 10, 20, 30, 40, 50 and 60 minutes from the moment the samples were mixed. From the force (g)-time (s) curves obtained in the instrumental assays, the maximum force peak (F) was chosen as a reference parameter. Viscosity (mPa·s) and F (g) were shown to be highly correlated and had similar development over time, following time-dependent quadratic models. It was possible to predict viscosity using F as an independent variable, as they were linearly correlated. In conclusion, the compression-extrusion test could be an alternative and a useful tool to assess physical characteristics of thickened liquids.
Keywords: compression-extrusion test, dysphagia, texture analyzer, thickener
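The two fits described above (a quadratic model of F over time, and a linear mapping from F to viscosity) can be sketched as follows. All numbers are illustrative placeholders, not the study's measurements.

```python
# Sketch of the quadratic F(t) model and the linear F -> viscosity relation.
# All data values are assumed for illustration.
import numpy as np

t = np.array([1, 5, 10, 20, 30, 40, 50, 60], dtype=float)       # minutes
F = np.array([55, 70, 82, 98, 108, 114, 118, 120], dtype=float)  # g, assumed

quad = np.polyfit(t, F, 2)               # time-dependent quadratic model
print("F(t) coefficients (a, b, c):", np.round(quad, 4))

visc = np.array([400, 620, 800, 1050, 1210, 1300, 1370, 1400])   # mPa*s, assumed
slope, intercept = np.polyfit(F, visc, 1)  # linear F -> viscosity mapping
print(f"viscosity ~ {slope:.1f} * F + {intercept:.1f}")
```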
Procedia PDF Downloads 368
15120 The Effect of Flight Schedules on Airline Choice Model for International Round-Trip Flights
Authors: Claudia Munoz, Henry Laniado
Abstract:
In this research, the impact of outbound and return flight schedule preferences on airline choice for international trips is quantified. Several studies have used airline choice data to identify preferences and trade-offs of different air carrier service attributes, such as travel time, fare and frequencies. However, estimation of the effect return flight schedules have on airline choice for an international round-trip flight has not yet been studied in detail. The multinomial logit model shows that airfare, travel time, arrival schedule preference on the outward journey, departure schedule preference on the return journey and the schedule combination of round-trip flights significantly affect passenger choice behavior in international round-trip flights. The results indicated that return flight schedule preference plays a substantial role in air carrier choice and has a similar effect to outbound flight schedule preference. Thus, this study provides an analytical tool designed to provide a better understanding of international round-trip flight demand determinants and support carrier decisions.
Keywords: flight schedule, airline choice, return flight, passenger choice behavior
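A minimal multinomial logit sketch of the kind estimated above: the choice probability for airline i is exp(V_i) / sum_j exp(V_j), with utilities built from fare, travel time and schedule terms. The coefficients and alternatives below are illustrative, not the paper's estimates.

```python
# Multinomial logit choice probabilities from assumed utility coefficients.
import numpy as np

beta = {"fare": -0.004, "time": -0.08, "sched": 0.6}  # assumed coefficients
alternatives = [
    {"fare": 850.0, "time": 14.0, "sched": 1.0},   # airline A
    {"fare": 790.0, "time": 16.5, "sched": 0.0},   # airline B
    {"fare": 910.0, "time": 13.0, "sched": 1.0},   # airline C
]

V = np.array([sum(beta[k] * a[k] for k in beta) for a in alternatives])
P = np.exp(V) / np.exp(V).sum()          # logit choice probabilities
print("choice probabilities:", np.round(P, 3))
```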
Procedia PDF Downloads 17
15119 Optimizing the Passenger Throughput at an Airport Security Checkpoint
Authors: Kun Li, Yuzheng Liu, Xiuqi Fan
Abstract:
High security standards and high efficiency of screening seem to be contradictory to each other in the airport security check process. Improving the efficiency as far as possible while maintaining the same security standard is significantly meaningful. This paper utilizes the knowledge of Operations Research and Stochastic Processes to establish mathematical models to explore this problem. We analyze the current process of airport security checks and use the M/G/1 and M/G/k models in queuing theory to describe the process. We then find that the least efficient part is the pre-check lane, the bottleneck of the queuing system. To improve passenger throughput and reduce the variance of passengers' waiting time, we adjust our models and use the Monte Carlo method, then put forward three modifications: adjust the ratio of Pre-Check lanes to regular lanes flexibly, determine the optimal number of security check screening lines based on cost analysis, and adjust the distribution of arrival and service times based on Monte Carlo simulation results. We also analyze the impact of cultural differences as a sensitivity analysis. Finally, we give recommendations for the current airport security check process.
Keywords: queuing theory, security check, stochastic process, Monte Carlo simulation
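The M/G/1 waiting-time analysis used above follows the Pollaczek-Khinchine formula, Wq = lam * E[S^2] / (2 * (1 - rho)), and can be cross-checked with a small Monte Carlo simulation. The arrival and service parameters below are assumed for illustration.

```python
# M/G/1 mean waiting time: analytic Pollaczek-Khinchine formula versus
# a Monte Carlo single-server simulation. Parameters are assumed.
import numpy as np

lam = 1.8                    # arrivals per minute (assumed)
mean_s, sd_s = 0.45, 0.20    # service time mean and sd, minutes (assumed)

rho = lam * mean_s
ES2 = sd_s**2 + mean_s**2    # second moment of service time
Wq = lam * ES2 / (2 * (1 - rho))
print(f"analytic mean wait: {Wq:.3f} min (utilization {rho:.2f})")

rng = np.random.default_rng(1)
n = 200_000
arrivals = np.cumsum(rng.exponential(1 / lam, n))
# gamma service times matching the chosen mean and sd
service = rng.gamma((mean_s / sd_s) ** 2, sd_s**2 / mean_s, n)
free, wait = 0.0, 0.0
for a, s in zip(arrivals, service):
    begin = max(a, free)     # service starts when server is free
    wait += begin - a
    free = begin + s
print(f"simulated mean wait: {wait / n:.3f} min")
```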
Procedia PDF Downloads 200
15118 Impact of Endogenous Risk Factors on Risk Cost in KSA PPP Projects
Authors: Saleh Alzahrani, Halim Boussabaine
Abstract:
Public Private Partnership (PPP) contracts are structured on the premise that the design, construction, operation, and financing of a public project are entrusted to a private party within a single contractual framework. PPP project risks are ordinarily connected with the development and construction of a new asset, as well as its operation for a considerable length of time. Without a doubt, the most serious outcomes of risks during the construction period are cost and time overruns. These events are among the most extensively used scenarios in value-for-money analysis of risks. The sources of risk change over the life cycle of a PPP project. In traditional procurement, the public sector ordinarily needs to cover all cost distress from these risks. At any rate, there is plenty of evidence to suggest that cost distress is the norm in a percentage of the projects that are delivered under traditional procurement. This paper aims to investigate the effect of endogenous risks on cost overrun in KSA PPP projects. The paper presents a brief literature review on PPP risk pricing strategies, and then presents an association model between risk events and cost overrun in KSA. The paper concludes with considerations for future research.
Keywords: PPP, risk pricing, impact of risk, endogenous risks
Procedia PDF Downloads 452
15117 Literature Review on the Controversies and Changes in the Insanity Defense since the Wild Beast Standard in 1723 until the Federal Insanity Defense Reform Act of 1984
Authors: Jane E. Hill
Abstract:
Many variables led to the changes in the insanity defense since the Wild Beast Standard of 1723 until the Federal Insanity Defense Reform Act of 1984. The insanity defense is used in criminal trials to argue that the defendant is "not guilty by reason of insanity" because the individual was unable to distinguish right from wrong during the time they were breaking the law. The issue surrounding whether or not to use the insanity defense in the criminal court depends on the mental state of the defendant at the time the criminal act was committed. This leads us to the question: did the defendant know right from wrong when they broke the law? In 1723, the Wild Beast Test stated that to be exempted from punishment, the individual must be totally deprived of their understanding and memory and doth not know what they are doing. The Wild Beast Test became the standard in England for over seventy-five years. In 1800, James Hadfield attempted to assassinate King George III. He only made the attempt because he was having delusional beliefs. The jury and the judge gave a verdict of not guilty. However, to legally confine him, the Criminal Lunatics Act was enacted. Individuals that were deemed "criminal lunatics" and were given a verdict of not guilty would be taken into custody and not be freed into society. In 1843, the M'Naghten test required that the individual did not know the quality or the wrongfulness of the offense at the time they committed the criminal act(s). Daniel M'Naghten was acquitted on grounds of insanity. The M'Naghten Test is still a modern concept of the insanity defense used in many courts today. The Irresistible Impulse Test was enacted in the United States in 1887. The Irresistible Impulse Test suggested that offenders who could not control their behavior while they were committing a criminal act were not deterrable by the criminal sanctions in place; therefore, no purpose would be served by convicting the offender. Due to the criticisms of the latter two contentions, the federal District of Columbia Court of Appeals ruled in 1954 to adopt the "product test" of Isaac Ray for insanity. The Durham Rule, also known as the "product test", stated an individual is not criminally responsible if the unlawful act was the product of mental disease or defect. Therefore, the two questions that need to be asked and answered are (1) did the individual have a mental disease or defect at the time they broke the law? and (2) was the criminal act the product of their disease or defect? The Durham courts failed to clearly define "mental disease" or "product." Therefore, trial courts had difficulty defining the meaning of the terms, and the controversy continued until 1972, when the Durham rule was overturned in most places. Therefore, the American Law Institute combined the M'Naghten test with the irresistible impulse test, and the United States Congress adopted an insanity test for the federal courts in 1984.
Keywords: insanity defense, psychology law, The Federal Insanity Defense Reform Act of 1984, The Wild Beast Standard in 1723
Procedia PDF Downloads 143
15116 A Virtual Grid Based Energy Efficient Data Gathering Scheme for Heterogeneous Sensor Networks
Authors: Siddhartha Chauhan, Nitin Kumar Kotania
Abstract:
Traditional Wireless Sensor Networks (WSNs) generally use static sinks to collect data from the sensor nodes via multiple forwarding. Therefore, the network suffers from problems like long message relay time and the bottleneck problem, which reduce the performance of the network. Many approaches have been proposed to prevent this problem with the help of a mobile sink to collect the data from the sensor nodes, but these approaches still suffer from the buffer overflow problem due to the limited memory size of sensor nodes. This paper proposes an energy efficient scheme for data gathering which overcomes the buffer overflow problem. The proposed scheme creates a virtual grid structure of heterogeneous nodes. The scheme has been designed for sensor nodes having variable sensing rates. Every node finds out its buffer overflow time, and on this basis cluster heads are elected. A controlled traversing approach is used by the proposed scheme in order to transmit data to the sink. The effectiveness of the proposed scheme is verified by simulation.
Keywords: buffer overflow problem, mobile sink, virtual grid, wireless sensor networks
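One plausible reading of the overflow-time computation above is sketched below: each node estimates when its buffer will fill from its remaining capacity and sensing rate, and the most urgent nodes are elected cluster heads. Buffer sizes, rates, and the election rule are assumptions for illustration only.

```python
# Sketch: per-node buffer overflow time and a simple urgency-based
# cluster-head election. All node values are assumed.
nodes = [
    {"id": 1, "buffer_kb": 64, "used_kb": 20, "rate_kb_per_s": 0.8},
    {"id": 2, "buffer_kb": 32, "used_kb": 25, "rate_kb_per_s": 0.5},
    {"id": 3, "buffer_kb": 64, "used_kb": 10, "rate_kb_per_s": 1.6},
]

for n in nodes:
    # time until the remaining buffer capacity is consumed
    n["overflow_s"] = (n["buffer_kb"] - n["used_kb"]) / n["rate_kb_per_s"]

heads = sorted(nodes, key=lambda n: n["overflow_s"])[:1]  # most urgent first
print("cluster head(s):", [n["id"] for n in heads])
```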
Procedia PDF Downloads 391
15115 Effect of Piston and its Weight on the Performance of a Gun Tunnel via Computational Fluid Dynamics
Authors: A. A. Ahmadi, A. R. Pishevar, M. Nili
Abstract:
The test gas in a gun tunnel is non-isentropically compressed and heated by a lightweight piston. Here, the first consideration is the optimum piston weight. Although various aspects of the influence of piston weight on gun tunnel performance have been studied, it is not possible to decide from the existing literature what piston weight is required for optimum performance in various conditions. The technique whereby the piston is rapidly brought to rest at the end of the gun tunnel barrel, and the resulting peak pressure is equal in magnitude to the final equilibrium pressure, is called the equilibrium piston technique. The equilibrium piston technique was developed to estimate the equilibrium piston mass, but this technique cannot give an appropriate estimate for the optimum piston weight. In the present work, a gun tunnel with a diameter of 3 in. is described and its performance is investigated numerically to obtain the effect of the piston and its weight. Numerical results in the present work are in very good agreement with experimental results. The significant influence of the existence of a piston is shown by comparing the gun tunnel results with the results of a conventional shock tunnel of the same dimensions and the same initial conditions. In the gun tunnel, an increase of around 250% in running time is gained relative to the shock tunnel. Also, numerical results show that the equilibrium piston technique is not a good way to estimate a suitable piston weight, and there is a lighter piston that can increase the running time of the gun tunnel by around 60%.
Keywords: gun tunnel, hypersonic flow, piston, shock tunnel
Procedia PDF Downloads 373
15114 Real-Time Working Environment Risk Analysis with Smart Textiles
Authors: Jose A. Diaz-Olivares, Nafise Mahdavian, Farhad Abtahi, Kaj Lindecrantz, Abdelakram Hafid, Fernando Seoane
Abstract:
Despite new recommendations and guidelines for the evaluation of occupational risk assessments and their prevention, work-related musculoskeletal disorders are still one of the biggest causes of work activity disruption, productivity loss, sick leave and chronic work disability. They affect millions of workers throughout Europe, with a large-scale economic and social burden. These efforts have failed to produce significant results yet, probably due to the limited availability and high costs of occupational risk assessment at work, especially when the methods are complex, consume excessive resources or depend on self-evaluations and observations of poor accuracy. To overcome these limitations, a pervasive system of real-time risk assessment tools has been developed, which has the characteristics of a systematic approach, with good precision, usability and resource efficiency, essential to facilitate the prevention of musculoskeletal disorders in the long term. The system allows the combination of different wearable sensors, placed on different limbs, to be used for data collection and evaluation by a software solution, according to the needs and requirements of each individual working environment. This is done in a non-disruptive manner for both the occupational health expert and the workers. The creation of this solution allows us to address different research activities that require, as an essential starting point, the recording of data of ergonomic value of very diverse origin, especially in real work environments. The software platform is presented here with a complementary smart clothing system for data acquisition, comprised of a T-shirt containing inertial measurement units (IMU), a vest sensorized with textile electronics, a wireless electrocardiogram (ECG) and thoracic electrical bio-impedance (TEB) recorder, and a glove sensorized with variable resistors dependent on the angular position of the wrist. The collected data is processed in real time through a mobile application software solution, implemented in commercially available Android-based smartphone and tablet platforms. Based on the collection of this information and its analysis, real-time risk assessment and feedback about postural improvement are possible, adapted to different contexts. The result is a tool which provides added value to ergonomists and occupational health agents, as in situ analysis of postural behavior can assist in a quantitative manner in the evaluation of work techniques and the occupational environment.
Keywords: ergonomics, mobile technologies, risk assessment, smart textiles
Procedia PDF Downloads 118
15113 A Portable Cognitive Tool for Engagement Level and Activity Identification
Authors: Terry Teo, Sun Woh Lye, Yufei Li, Zainuddin Zakaria
Abstract:
Wearable devices such as electroencephalography (EEG) hold immense potential in the monitoring and assessment of a person's task engagement. This is especially so in remote or online sites. Research into its use in measuring an individual's cognitive state while performing task activities is therefore expected to increase. Despite the growing number of EEG studies into the brain functioning activities of a person, key challenges remain in adopting EEG for real-time operations. These include limited portability, long preparation time, high channel dimensionality, intrusiveness, as well as the level of accuracy in acquiring neurological data. This paper proposes an approach using 4-6 EEG channels to determine the cognitive states of a subject undertaking a set of passive and active monitoring tasks. Air traffic controller (ATC) dynamic tasks are used as a proxy. The work found that when using the channel reduction and identifier algorithm, a good trend adherence of 89.1% can be obtained between a commercially available 14-channel BCI Emotiv EPOC+ EEG headset and a carefully selected reduced set of 4-6 channels. The approach can also identify different levels of engagement activities, ranging from general monitoring to ad hoc and repeated active monitoring activities involving information search, extraction, and memory activities.
Keywords: assessment, neurophysiology, monitoring, EEG
Procedia PDF Downloads 76
15112 A Witty Relief Ailment Based on the Integration of IoT and Cloud
Authors: Sai Shruthi Sridhar, A. Madhumidha, Kreethika Guru, Priyanka Sekar, Ananthi Malayappan
Abstract:
Numerous changes in technology and its recent developments are having a long-lasting effect on our world; one among them is the emergence of the "Internet of Things" (IoT). Similar to the technology world, one industry stands out in everyday life: healthcare. Attention to the "quality of health care" is an increasingly important issue in a global economy and for every individual. As per the WHO (World Health Organization), it is estimated that less than 50% adhere to the medication provided and only about 20% get their medicine on time. Medication adherence is one of the top problems in healthcare that is fixable by the use of technology. In the recent past, there were only minor provisions for the elderly and specially-abled to get motivated and to adhere to prescribed medicines. This paper proposes a novel solution that uses an IoT-based RFID medication reminder solution to provide personal health care services. This employs real-time tracking, which offers quick countermeasures. The proposed solution builds on the recent digital advances in sensor technologies, smart phones and cloud services. This novel solution is easily adoptable and can benefit millions of people with a direct impact on the nation's health care expenditure, with innovative scenarios and pervasive connectivity.
Keywords: cloud services, IoT, RFID, sensors
Procedia PDF Downloads 347
15111 Factors Affecting Transportation Services in Addis Ababa City
Authors: Yared Yitagesu Tilahun
Abstract:
Every nation, developed or developing, relies on transportation, but Addis Ababa City's transportation service is impacted by a number of variables. The current study's objectives are to determine the factors that influence transportation and gauge consumer satisfaction with such services in Addis Ababa. Customers and employees of Addis Ababa's transportation service authority were the study's target group: 40 workers of the authority and the roughly 310,000 clients that make up the service's customer population. Using a simple random selection technique, the researcher chose only 99 customers and 28 staff from this large group due to the considerable cost and time involved. Data gathering and analysis included both quantitative and qualitative approaches. The results of this survey show that young people between the ages of 18 and 25 make up the majority of respondents (51.6%). The majority of employees and customers indicated that they are not satisfied with Addis Ababa's overall transportation system. The Addis Ababa Transportation Authority prioritizes client satisfaction by providing fair service. The authority should have a system in place for managing time, resources, and people effectively. It should also provide employees the opportunity to contribute to client handling policies.
Keywords: transportation, customer satisfaction, services, determinants
Procedia PDF Downloads 125
15110 Design of Low Latency Multiport Network Router on Chip
Authors: P. G. Kaviya, B. Muthupandian, R. Ganesan
Abstract:
On-chip routers typically have buffers at input or output ports for temporarily storing packets. The buffers consume some router area and power. Multiple queues can be placed in parallel, as in a virtual-channel (VC) router. While running a traffic trace, not all input ports have incoming packets that need to be transferred; therefore, large numbers of queues are empty while others are busy in the network, and the time consumption becomes high at high traffic. Therefore, a RoShaQ architecture is used to minimize the buffer area and time. In the RoShaQ architecture, input packets travel through the shared queues at low traffic; at high traffic load, the input packets bypass the shared queues. In this way, the power and area consumption are reduced. A parallel crossbar architecture is proposed in this project in order to reduce the power consumption. Also, a new adaptive weighted routing algorithm for an 8-port router architecture is proposed in order to decrease the delay of the network-on-chip router. The proposed system is simulated using ModelSim and synthesized using Xilinx Project Navigator.
Keywords: buffer, RoShaQ architecture, shared queue, VC router, weighted routing algorithm
Procedia PDF Downloads 542
15109 Optimization of Operational Parameters and Design of an Electrochlorination System to Produce NaClO
Authors: Pablo Ignacio HernĂĄndez Arango, Niels Lindemeyer
Abstract:
Chlorine, as a sodium hypochlorite (NaClO) solution in water, is an effective, widespread, and economical substance for eliminating germs in water. The disinfection potential of chlorine lies in its ability to degrade the outer surfaces of bacterial cells and viruses. This contribution reports the main parameters of the brine electrolysis for the production of NaClO, which is afterward used for the disinfection of water, either for drinking or recreative uses. Herein, the system design was simulated, optimized, built, and tested based on titanium electrodes. The process optimization considers the whole process, from the salt (NaCl) dilution tank, whose operation time is to be maximized, to the electrolysis itself, where chlorine production is maximized while reducing energy and raw material (salt and water) consumption. One novel idea behind this optimization process is the modification of the flow pattern inside the electrochemical reactors. The increased turbulence and residence time positively impact the operational figures. The operational parameters defined in this study were compared and benchmarked against the parameters of actual commercial systems in order to validate the pertinence of the results.
Keywords: electrolysis, water disinfection, sodium hypochlorite, process optimization
Procedia PDF Downloads 128
15108 On the Internal Structure of the "Enigmatic Electrons"
Authors: Natarajan Tirupattur Srinivasan
Abstract:
Quantum mechanics (QM) and (special) relativity (SR) have indeed revolutionized the very thinking of physicists, and the spectacular successes achieved over a century due to these two theories are mind-boggling. However, there is still a strong disquiet among some physicists. While the mathematical structure of these two theories has been established beyond any doubt, their physical interpretations are still being contested by many. Even after a hundred years of their existence, we cannot answer a very simple question, "What is an electron?" Physicists are struggling even now to come to grips with the different interpretations of quantum mechanics with all their ramifications. However, it is indeed strange that the (special) relativity theory of Einstein enjoys many orders of magnitude of "acceptance", though both theories have their own stocks of weirdness in the results, like time dilation, mass increase with velocity, the collapse of the wave function, quantum jump, tunnelling, etc. Here, in this paper, it would be shown that by postulating an intrinsic internal motion to these enigmatic electrons, one can build a fairly consistent picture of reality, revealing a very simple picture of nature. This is also evidenced by Schrodinger's "Zitterbewegung" motion, about which so much has been written. This leads to a helical trajectory of electrons when they move in a laboratory frame. It will be shown that the helix is a three-dimensional wave having all the characteristics of our familiar 2D wave. Again, the helix, being a geodesic on an imaginary cylinder, supports "quantization", and its representation is just the complex exponentials matching the wave function of quantum mechanics. By postulating the instantaneous velocity of the electrons to be always "c", the velocity of light, the entire theory of relativity comes alive, and we can interpret "time dilation", "mass increase with velocity", etc., in a very simple way. Thus, this model unifies both QM and SR without the need for the counterintuitive postulate of Einstein about the constancy of the velocity of light for all inertial observers. After all, if the motion of an inertial frame cannot affect the velocity of light, the converse, that this constant also cannot affect the events in the frame, must be true. But the entire theory of relativity is about how "c" affects time, length, mass, etc., in different frames.
Keywords: quantum reconstruction, special theory of relativity, quantum mechanics, zitterbewegung, complex wave function, helix, geodesic, Schrodinger's wave equations
Procedia PDF Downloads 73
15107 The Pathology of Bovine Rotavirus Infection in Calves Confirmed by Enzyme Linked Immunosorbent Assay, Reverse Transcription Polymerase Chain Reaction and Real-Time RT-PCR
Authors: Shama Ranjan Barua, Tofazzal M. Rakib, Mohammad Alamgir Hossain, Tania Ferdushy, Sharmin Chowdhury
Abstract:
Rotavirus is one of the main etiologies of neonatal diarrhea in bovine calves that causes significant economic loss in Bangladesh. The present study was carried out to investigate the pathology of neonatal enteritis in calves due to bovine rotavirus infection in the south-eastern part of Bangladesh. Rotavirus was identified by using ELISA, RT-PCR (Reverse Transcription Polymerase Chain Reaction) and real-time RT-PCR. We examined 12 dead calves with a history of diarrhea during necropsy. On gross examination, 6 of the 12 dead calves were found with pathological changes in the intestine, 5 calves had congestion of the small intestine, and the remaining one had no distinct pathological changes. Intestinal contents and/or faecal samples of all dead calves were collected and examined to confirm the presence of bovine rotavirus A using Enzyme Linked Immunosorbent Assay (ELISA), RT-PCR and real-time RT-PCR. Out of 12 samples, 5 (42%) revealed the presence of bovine rotavirus A in the three diagnostic tests. The histopathological changes were found almost exclusively limited to the small intestine. The lesions of rotaviral enteritis ranged from slight to moderate shortening (atrophy) of villi in the jejunum and ileum with necrotic crypts. The villi were blunt and covered by immature epithelial cells. Infected cells, stained with the haematoxylin and eosin method, showed characteristic syncytia and eosinophilic intracytoplasmic inclusion bodies. The presence of intracytoplasmic inclusion bodies in enterocytes is an indication of viral etiology. The presence of rotavirus in the affected tissues and/or lesions was confirmed by three different immunological and molecular tests. The findings of the histopathological changes will be helpful in the future diagnosis of rotaviral infection in dead calves.
Keywords: calves, diarrhea, pathology, rotavirus
Procedia PDF Downloads 252
15106 Measuring the Effectiveness of Response Inhibition with Regard to Motor Complexity: Evidence from the Stroop Effect
Authors: Germán Gálvez-García, Marta Lavin, Javiera Peña, Javier Albayay, Claudio Bascour, Jesus Fernandez-Gomez, Alicia Pérez-Gálvez
Abstract:
We studied the effectiveness of response inhibition in movements with different degrees of motor complexity when they were executed in isolation and alternately. Sixteen participants performed the Stroop task, which was used as a measure of response inhibition. Participants responded by lifting the index finger and reaching the screen with the same finger. Both actions were performed separately and alternately in different experimental blocks. Repeated measures ANOVAs were used to compare reaction time, movement time, kinematic errors and movement errors across conditions (experimental block, movement, and congruency). Delta plots were constructed to perform distributional analyses of response inhibition and accuracy rate. The effectiveness of response inhibition did not show differences when the movements were performed in separate blocks. Nevertheless, it showed differences when they were performed alternately in the same experimental block, being more effective for the lifting action. This could be due to competition for the available resources in a more complex scenario, which also demands adopting some strategy to avoid errors.
Keywords: response inhibition, motor complexity, Stroop task, delta plots
Procedia PDF Downloads 394
15105 The Greek Theatre in Australia Until 1950
Authors: Papazafeiropoulou Olga
Abstract:
The first Greek expatriates created centers of culture in Australia from the beginning of the 19th century, in the large urban centers (Sydney, Melbourne, Brisbane, Adelaide, Perth). They created community theater according to their cultural standards, their socio-spiritual progress and development, and their relationship with theatrical creation. At the same time, the Greek immigrants of the small towns, especially of NSW, created their own temples of art, rebuilding theater buildings (theatres and cinemas), many of which are preserved to this day. Hellenism in Australia operated in the field of entertainment, reflecting the currents of the time and the global spread of mechanical developments. The Australian-born young people of the parish, as well as pioneering expatriates, joined the theater and cinematographic events of Australia. They mobilized beyond the narrow confines of the parish, gaining recognition and projecting Hellenism to the Australian establishment. G. Paizis (A. Haggard), Dimitrios Ioannidis, Stelios Saligaros, Angela Parselli, Sofia Pergamali, Raoul Kardamatis, Adam Tavlaridis, John Lemonne, Rudy Ricco and Artemis Linou distinguished themselves, writing their names in the history of Australian theater as they served the theatrical process, elevating the sentiment of the expatriates during the early years of their settlement in the Australian Commonwealth until 1950.
Keywords: Greeks, community, Australia, theatre
Procedia PDF Downloads 68
15104 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data
Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora
Abstract:
Optimizing the drilling process for cost and efficiency requires the optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour. It is the primary indicator for measuring drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model prior to drilling, and (2) real-time adjustments of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Afterward, the top-rated wells, as a function of high-instance ROP, are distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase is concluded by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before the commencement of drilling. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering live adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations will reveal new drilling conditions not explored before through offset wells. The data is then consolidated into a heat-map as a function of ROP. A more optimum ROP performance is identified through the heat-map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
Keywords: drilling optimization, geological formations, machine learning, rate of penetration
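The IDW conditioned mean used in phase one can be sketched as follows: parameter values (such as WOB) from offset-well records are averaged with weights 1/d^p around a target depth. The well records and the power p below are hypothetical placeholders.

```python
# Sketch of the inverse-distance-weighted (IDW) conditioned mean for a
# controllable drilling parameter. Depths and values are assumed.
import numpy as np

def idw(depths, values, target, power=2.0, eps=1e-9):
    d = np.abs(np.asarray(depths, float) - target) + eps  # avoid div by zero
    w = 1.0 / d**power
    return float(np.sum(w * np.asarray(values, float)) / np.sum(w))

depths = [4950, 5000, 5080, 5200]   # ft, offset-well sample depths (assumed)
wob    = [22.0, 24.0, 25.5, 21.0]   # klbf at those depths (assumed)

print(f"WOB at 5050 ft ~ {idw(depths, wob, 5050.0):.1f} klbf")
```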
Procedia PDF Downloads 131
15103 Effect of the Deposition Time of Hydrogenated Nanocrystalline Si Grown on Porous Alumina Film on Glass Substrate by Plasma Processing Chemical Vapor Deposition
Authors: F. Laatar, S. Ktifa, H. Ezzaouia
Abstract:
The Plasma Enhanced Chemical Vapor Deposition (PECVD) method is used to deposit hydrogenated nanocrystalline silicon films (nc-Si:H) on Porous Anodic Alumina Films (PAF) on glass substrates at different deposition durations. The influence of the deposition time on the physical properties of nc-Si:H grown on PAF was investigated through an extensive correlation between the micro-structural and optical properties of these films. In this paper, we present an extensive study of the morphological, structural and optical properties of these films by Atomic Force Microscopy (AFM) and X-Ray Diffraction (XRD) techniques and a UV-Vis-NIR spectrometer. It was found that changes in the deposition time (DT) can modify the film thickness and the surface roughness and eventually improve the optical properties of the composite. Optical properties (optical thicknesses, refractive indexes (n), absorption coefficients (α), extinction coefficients (k), and the values of the optical transitions EG) of this kind of samples were obtained using the transmittance T and reflectance R spectra recorded by the UV-Vis-NIR spectrometer. We used the Cauchy and Wemple-DiDomenico models for the analysis of the dispersion of the refractive index and the determination of the optical properties of these films.
Keywords: hydrogenated nanocrystalline silicon, plasma processing chemical vapor deposition, X-ray diffraction, optical properties
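A Cauchy dispersion fit of the kind mentioned above, n(lambda) = A + B/lambda^2 + C/lambda^4, can be sketched as follows; the (wavelength, n) pairs are illustrative placeholders, not the films' measured values.

```python
# Sketch: fitting the Cauchy dispersion relation to refractive-index data.
# The data points are assumed for illustration.
import numpy as np
from scipy.optimize import curve_fit

def cauchy(lam_um, A, B, C):
    return A + B / lam_um**2 + C / lam_um**4

lam = np.array([0.5, 0.6, 0.8, 1.0, 1.5, 2.0])       # wavelength, um (assumed)
n   = np.array([3.90, 3.75, 3.60, 3.55, 3.50, 3.48])  # refractive index (assumed)

(A, B, C), _ = curve_fit(cauchy, lam, n)
print(f"A = {A:.3f}, B = {B:.4f} um^2, C = {C:.5f} um^4")
```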
Procedia PDF Downloads 377
15102 Exploring the Applications of Neural Networks in the Adaptive Learning Environment
Authors: Baladitya Swaika, Rahul Khatry
Abstract:
Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), which relies on item selection and ability estimation using the statistical methods of maximum-information selection/selection from the posterior and maximum-likelihood (ML)/maximum a posteriori (MAP) estimators, respectively. This study aims at combining both classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and then comparing it to traditional CAT models designed using IRT. This study uses Python as the base coding language, pymc for statistical modelling of the IRT, and scikit-learn for neural network implementations. On creation of the model and on comparison, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimations. Although performing poorly compared to the IRT model, the neural network model can be beneficially used in back-ends for reducing time complexity, as the IRT model would have to re-calculate the ability every time it gets a request, whereas the prediction from a neural network could be done in a single step with an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could be used to incorporate feature sets other than the normal IRT feature set and use a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features like test type, etc. could be learnt and incorporated in IRT functions with the help of techniques like logistic regression, and can be used to learn functions and expressed as models which may not be trivial to express via equations. This kind of framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview as to how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher quality testing.
Keywords: computer adaptive tests, item response theory, machine learning, neural networks
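The ML ability estimate mentioned above can be sketched under a standard 2PL IRT model, P(correct) = 1 / (1 + exp(-a * (theta - b))). The item parameters and responses below are illustrative, not values from the study.

```python
# Sketch: maximum-likelihood ability estimate under a 2PL IRT model.
# Item parameters and the response vector are assumed.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0])     # discriminations (assumed)
b = np.array([-0.5, 0.0, 0.7, 1.2])    # difficulties (assumed)
u = np.array([1, 1, 0, 0])             # observed responses (assumed)

def neg_log_lik(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return -np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

theta_ml = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x
print(f"ML ability estimate: {theta_ml:.3f}")
```

A neural network trained on (response pattern, item features) pairs, e.g. with scikit-learn's MLPRegressor, would replace this per-request optimization with a single forward pass, which is the back-end speed-up argued for above.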
Procedia PDF Downloads 175
15101 Implementation-Oriented Discussion for Historical and Cultural Villages' Conservation Planning
Authors: Xing Zhang
Abstract:
Since the State Council of China issued the Regulations on the Conservation of Historical Cultural Towns and Villages in 2008, the formulation of conservation planning has been carried out in national, provincial and municipal historical and cultural villages for protection needs, which provides a legal basis for the inheritance of historical culture and the protection of historical resources. Although the quantity and content of conservation planning are continually increasing, the implementation and application are still ambiguous. To solve the aforementioned problems, this paper explores methods to enhance the implementation of conservation planning from the perspective of planning formulation. Specifically, the technical framework of "overall objectives planning - sub-objectives planning - zoning guidelines - implementation by stages" is proposed to implement the planning objectives in different classifications and stages. Then, combined with details of the Qiqiao historical and cultural village conservation planning project in Ningbo, five sub-objectives are set, which are implemented through the village zoning guidelines. At the same time, the key points and specific projects in the near-term, medium-term and long-term work are clarified, and the spatial planning is transformed into an action plan with a time scale. The proposed framework and method provide a reference for the implementation and management of the conservation planning of historical and cultural villages in the future.
Keywords: conservation planning, planning by stages, planning implementation, zoning guidelines
Procedia PDF Downloads 242
15100 Trend Analysis of Rainfall: A Climate Change Paradigm
Authors: Shyamli Singh, Ishupinder Kaur, Vinod K. Sharma
Abstract:
Climate change refers to the change in climate over an extended period of time. The climate has been changing throughout Earth's history, but anthropogenic activities accelerate this rate of change, which is now a global issue. The increase in greenhouse gas emissions is causing global warming and climate-change-related issues at an alarming rate. Increasing temperature results in climate variability across the globe. Changes in rainfall patterns, intensity and extreme events are some of the impacts of climate change. Rainfall variability refers to the degree to which rainfall patterns vary over a region (spatial) or through a time period (temporal). Temporal rainfall variability can be directly or indirectly linked to climate change. Such variability in rainfall increases the vulnerability of communities to climate change. With increasing urbanization and unplanned developmental activities, air quality is also deteriorating. This paper mainly focuses on the rainfall variability due to the increasing level of greenhouse gases. Rainfall data of 65 years (1951-2015) for the Safdarjung station of Delhi was collected from the India Meteorological Department and analyzed using the Mann-Kendall test for time-series data analysis. The Mann-Kendall test is a statistical tool that helps in the analysis of trends in given data sets. The slope of the trend can be measured through Sen's slope estimator. Data was analyzed monthly, seasonally and yearly across the period of 65 years. The monthly rainfall data for the said period do not follow any increasing or decreasing trend. The monsoon season shows no increasing trend, but there was an increasing trend in the pre-monsoon season. Hence, the actual rainfall differs from the normal trend of the rainfall. Through this analysis, it can be projected that there will be an increase in pre-monsoon rainfall relative to the monsoon season. Pre-monsoon rainfall causes a cooling effect and results in a drier monsoon season. This will increase the vulnerability of communities to climate change and also affect related developmental activities.
Keywords: greenhouse gases, Mann-Kendall test, rainfall variability, Sen's slope
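The Mann-Kendall S statistic and Sen's slope used above can be computed directly. The following is a minimal sketch for a series without ties; the rainfall values are illustrative placeholders, not the Safdarjung record.

```python
# Sketch: Mann-Kendall trend test (no-ties variance) and Sen's slope.
# The rainfall series is assumed for illustration.
import numpy as np

x = np.array([612, 580, 705, 640, 698, 720, 655, 760, 700, 745], dtype=float)
n = len(x)

S = int(sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n)))
var_S = n * (n - 1) * (2 * n + 5) / 18.0          # variance, no ties
Z = (S - np.sign(S)) / np.sqrt(var_S) if S != 0 else 0.0

sen = np.median([(x[j] - x[i]) / (j - i)
                 for i in range(n - 1) for j in range(i + 1, n)])
print(f"S = {S}, Z = {Z:.2f}, Sen's slope = {sen:.2f} mm/year")
# |Z| > 1.96 indicates a significant trend at the 5% level.
```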
Procedia PDF Downloads 208
15099 Optimization of Thermopile Sensor Performance of Polycrystalline Silicon Film
Authors: Li Long, Thomas Ortlepp
Abstract:
A theoretical model for the optimization of thermopile sensor performance is developed for thermoelectric-based infrared radiation detection. It is shown that the performance of a polycrystalline silicon film thermopile sensor can be optimized according to the thermoelectric quality factor, the sensor layer structure factor, and the sensor layout geometrical form factor. Based on the properties of electrons, phonons, grain boundaries, and their interactions, the thermoelectric quality factor of polycrystalline silicon is analyzed with the relaxation time approximation of the Boltzmann transport equation. The model includes the effects of grain structure, grain boundary trap properties, and doping concentration. The layer structure factor is analyzed with respect to the infrared absorption coefficient. The optimization of the layout design is characterized by the form factor, which is calculated for different sensor designs. A double-layer polycrystalline silicon thermopile infrared sensor on a suspended membrane has been designed and fabricated with a CMOS-compatible process. The theoretical approach is confirmed by measurement results.
Keywords: polycrystalline silicon, relaxation time approximation, specific detectivity, thermal conductivity, thermopile infrared sensor
Procedia PDF Downloads 140
15098 Distributed Acoustic Sensing Signal Model under Static Fiber Conditions
Authors: G. Punithavathy
Abstract:
The research proposes a statistical model for distributed acoustic sensor interrogation units that broadcast a laser pulse into the optical fiber, where interactions within the fiber determine the localized acoustic energy that causes light reflections known as backscatter. The backscattered signal's amplitude and phase can be calculated using explicit equations. The created model makes amplitude signal spectrum and autocorrelation predictions that are confirmed by experimental findings. Phase signal characteristics that are useful for researching optical time domain reflectometry (OTDR) system sensing applications are provided and examined, showing good agreement with the experiment. The analysis was successfully carried out with the use of Python code. In this research, we can analyze all the component parts of the distributed acoustic sensing (DAS) system separately. This model assumes that the fiber is in a static condition, meaning that there is no external force or vibration applied to the cable, i.e., no external acoustic disturbances are present. The backscattered signal consists of a random noise component, which is caused by the intrinsic imperfections of the fiber, and a coherent component, which is due to the laser pulse interacting with the fiber.
Keywords: distributed acoustic sensing, optical fiber devices, optical time domain reflectometry, Rayleigh scattering
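A common way to simulate the static-fiber backscatter described above is as a coherent sum of many random scatterer phasors per gauge section, which yields a Rayleigh-distributed amplitude and uniform phase. The scatterer counts and strengths below are assumptions for the sketch.

```python
# Sketch: Rayleigh backscatter as a random phasor sum per fiber section.
# Scatterer counts and amplitude statistics are assumed.
import numpy as np

rng = np.random.default_rng(7)
n_sections, n_scatterers = 2000, 50

phases = rng.uniform(0, 2 * np.pi, (n_sections, n_scatterers))
amps = rng.normal(1.0, 0.1, (n_sections, n_scatterers))   # scatterer strengths
field = np.sum(amps * np.exp(1j * phases), axis=1)        # coherent sum

amplitude = np.abs(field)      # ~ Rayleigh distributed across sections
phase = np.angle(field)        # ~ uniform on (-pi, pi]
print(f"mean amplitude = {amplitude.mean():.2f}, "
      f"phase std = {phase.std():.2f} rad")
```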
Procedia PDF Downloads 70