Search results for: urban simulation environment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15851

2141 Rural Sanitation in India: Special Context in the State of Odisha

Authors: Monalisha Ghosh, Asit Mohanty

Abstract:

The lack of sanitation increases living costs, decreases spending on education and nutrition, lowers income-earning potential, and threatens safety and welfare. This is especially true for rural India. Only 32% of rural households have their own toilets, and less than half of Indian households have a toilet at home. Of the estimated one billion people in the world who defecate in the open, more than half reside in rural India. It is empirically established that poor sanitation leads to a high infant mortality rate and low income generation in rural India. In India, 1,600 children die every day before reaching their fifth birthday, and 24% of girls drop out of school because of the lack of basic sanitation. Above all, lack of sanitation is not a symptom of poverty but a major contributing factor. According to the 2011 census, 67.3% of the rural households in the country still did not have access to sanitation facilities. India’s sanitation deficit leads to losses worth roughly 6% of its gross domestic product (GDP), according to World Bank estimates, by raising the disease burden in the country. The dropout rate for girls in rural schools is thirty percent because of the lack of sanitation facilities for girl students. The productivity loss per skilled laborer during a year is calculated at Rs. 44,160 in Odisha. The performance of the state of Odisha has not been satisfactory in improving sanitation facilities. The biggest challenge is triggering behavior change in a vast section of the rural population regarding the need to use toilets. Another major challenge is funding and implementation for the improvement of sanitation facilities. In an environment of constrained economic resources, Public-Private Partnership in the form of performance-based management or maintenance contracts will be all the more relevant to improve the sanitation status in the rural sector.

Keywords: rural sanitation, infant mortality rate, income, granger causality, pooled OLS method, public private partnership

Procedia PDF Downloads 412
2140 The Influence of Phosphate Fertilizers on Radiological Situation of Cultivated Lands: ²¹⁰Po, ²²⁶Ra, ²³²Th, ⁴⁰K and ¹³⁷Cs Concentrations in Soil

Authors: Grzegorz Szaciłowski, Marta Konop, Małgorzata Dymecka, Jakub Ośko

Abstract:

In 1996, European Council Directive 96/29/EURATOM identified phosphate fertilizers as having a potentially negative influence on the environment from the radiation protection point of view. Fertilizers, along with irrigation and crop rotation, were the milestones that allowed agricultural productivity to increase. First based on natural materials such as compost, manure, fish processing waste, etc., and, since the 19th century, created synthetically, fertilizers caused a boom in crop yield and helped to propel global food production, especially after World War II. In this work, the concentrations of ²¹⁰Po, ²²⁶Ra, ²³²Th, ⁴⁰K, and ¹³⁷Cs in selected fertilizers and soil samples were determined. The results were used to calculate the annual addition of natural radionuclides and the increment of external radiation exposure caused by the use of the studied fertilizers. Soils intended for different types of crops were sampled in early spring, before any vegetation had occurred. The fertilizers analysed were those with which the soil had previously been fertilized. For gamma-emitting radionuclides, a high-purity germanium detector GX3520 from Canberra was used. The polonium concentration was determined by radiochemical separation followed by measurement by means of alpha spectrometry. The spectrometer used in this study was equipped with a 450 cm² PIPS detector from Canberra. The results obtained showed significant differences in radionuclide composition between phosphate and nitrogenous fertilizers (e.g., the radium equivalent activity for phosphate fertilizer was 207.7 Bq/kg, compared with <5.6 Bq/kg for nitrogenous fertilizer). The calculated increase of external radiation exposure due to the use of phosphate fertilizer ranged between 3.4 and 5.4 nGy/h, which represents up to 10% of the Polish average outdoor exposure due to terrestrial gamma radiation (45 nGy/h).
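
For reference, the radium equivalent activity and the outdoor gamma dose rate quoted above are conventionally derived from the ²²⁶Ra, ²³²Th and ⁴⁰K activity concentrations. The Python sketch below illustrates these standard formulas with hypothetical input concentrations; it is not the authors' code, and whether they used exactly these UNSCEAR-style dose coefficients is an assumption.

# Illustrative sketch: radium equivalent activity and outdoor absorbed dose rate
# from Ra-226, Th-232 and K-40 activity concentrations (Bq/kg).

def radium_equivalent(c_ra: float, c_th: float, c_k: float) -> float:
    """Ra_eq (Bq/kg) = C_Ra + 1.43*C_Th + 0.077*C_K."""
    return c_ra + 1.43 * c_th + 0.077 * c_k

def absorbed_dose_rate(c_ra: float, c_th: float, c_k: float) -> float:
    """Outdoor absorbed gamma dose rate in air (nGy/h), UNSCEAR-style coefficients."""
    return 0.462 * c_ra + 0.604 * c_th + 0.0417 * c_k

# Hypothetical concentrations for a phosphate fertilizer sample (Bq/kg):
c_ra, c_th, c_k = 150.0, 25.0, 300.0
print(f"Ra_eq = {radium_equivalent(c_ra, c_th, c_k):.1f} Bq/kg")
print(f"Dose rate = {absorbed_dose_rate(c_ra, c_th, c_k):.1f} nGy/h")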

Keywords: ²¹⁰Po, alpha spectrometry, exposure, gamma spectrometry, phosphate fertilizer, soil

Procedia PDF Downloads 291
2139 The Impact of Acoustic Performance on Neurodiverse Students in K-12 Learning Spaces

Authors: Michael Lekan-Kehinde, Abimbola Asojo, Bonnie Sanborn

Abstract:

Good acoustic performance has been identified by the National Research Council as one of the critical Indoor Environmental Quality (IEQ) factors for student learning and development. Childhood presents the opportunity for children to develop lifelong skills that will support them throughout their adult lives. The acoustic performance of a space has been identified as a factor that can impact language acquisition, concentration, information retention, and general comfort within the environment. Increasingly, students learn through communication between teachers and fellow students, making speaking and listening crucial. Neurodiversity, while initially coined to describe individuals with autism spectrum disorder (ASD), widely describes anyone whose brain processes information differently. As understanding from the cognitive and neurosciences increases, the number of people identified as neurodiverse approaches 30% of the population. This research looks at guidelines and standards for spaces with good acoustical quality and relates them to the experiences of students with autism spectrum disorder (ASD), their parents, teachers, and educators through a mixed-methods approach, including selected case studies, interviews, and mixed surveys. The information obtained from these sources is used to determine whether selected materials, especially properties relating to sound absorption and reverberation reduction, are equally useful in small, medium-sized, and large learning spaces. The results describe the potential impact of acoustics on neurodiverse students, considering factors that determine the complexity of sound in relation to the auditory processing capabilities of ASD students. In conclusion, this research extends the knowledge of how materials selection influences the better development of acoustical environments for students with autism.

Keywords: acoustics, autism spectrum disorder (ASD), children, education, learning, learning spaces, materials, neurodiversity, sound

Procedia PDF Downloads 99
2138 Strengths and Weaknesses of Tally, an LCA Tool for Comparative Analysis

Authors: Jacob Seddlemeyer, Tahar Messadi, Hongmei Gu, Mahboobeh Hemmati

Abstract:

The main purpose of the first tier of this study is to quantify and compare the embodied environmental impacts associated with alternative materials applied to Adohi Hall, a residence building on the University of Arkansas campus, Fayetteville, AR. This 200,000 square foot building has 5 stories built with mass timber and is compared to another scenario where the same edifice is built with a steel frame. Based on the defined goal and scope of the project, the materials respective to the two building options are compared in terms of Global Warming Potential (GWP), from cradle to the construction site, which includes the material manufacturing stage (raw material extraction, processing, supply, transport, and manufacture) plus transportation to the site (modules A1-A4, based on the standard EN 15804 definition). The fossil fuels consumed and the CO2 emitted in association with the buildings are the major reason for the environmental impacts of climate change. In this study, GWP is primarily assessed, to the exclusion of other environmental factors. The second tier of this work is to evaluate Tally’s performance in the decision-making process through the design phases, as well as to determine its strengths and weaknesses. Tally is a Life Cycle Assessment (LCA) tool capable of conducting a cradle-to-grave analysis. As opposed to other software applications, Tally is specifically targeted at building LCA. As a peripheral application, this software tool runs directly within the core modeling platform, Revit. This unique functionality makes Tally stand out from other similar tools for LCA analysis in the building sector. The results of this study also provide insights for making more environmentally efficient decisions in the built environment and help the move forward to reduce greenhouse gas (GHG) emissions and mitigate GWP.
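
To make the A1-A4 comparison scope concrete, here is a minimal sketch of how module-level GWP results for the two structural options could be totalled; all figures are placeholders, not values from the Adohi Hall study or from Tally.

# Illustrative sketch: cradle-to-construction-site GWP (modules A1-A4) for two
# structural options. All numbers are hypothetical placeholders.

gwp_kg_co2e = {
    "mass timber": {"A1-A3 manufacturing": 1.8e6, "A4 transport": 0.2e6},
    "steel frame": {"A1-A3 manufacturing": 3.1e6, "A4 transport": 0.3e6},
}

for option, modules in gwp_kg_co2e.items():
    total = sum(modules.values())          # sum the declared modules for this option
    print(f"{option}: {total / 1e6:.2f} kt CO2e (A1-A4)")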

Keywords: comparison, GWP, LCA, materials, tally

Procedia PDF Downloads 219
2137 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to be able to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed, evaluating the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time allows the decision problem to be transformed into an optimization problem, finding the minimum cost among Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate on each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including their special forms, the Feynman (CNOT) and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits and adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
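
As a classical companion to the abstract above, the short Python sketch below illustrates two of the bookkeeping steps it describes: sizing the N log₂ K edge-encoding register and estimating the optimal number of Grover iterations for a search space with M marked states. It is an illustrative sketch, not the authors' Q# implementation, and the example graph size is an assumption.

# Illustrative sketch: register sizing and Grover iteration count for the
# bounded-degree TSP encoding described above.
import math

def register_size(n_nodes: int, k_degree: int) -> int:
    """N*log2(K) qubits encode, for every node, which of its K edges is used."""
    return n_nodes * math.ceil(math.log2(k_degree))

def grover_iterations(n_qubits: int, n_marked: int) -> int:
    """Roughly floor(pi/4 * sqrt(S/M)) oracle calls for S = 2**n_qubits basis states."""
    search_space = 2 ** n_qubits
    return math.floor(math.pi / 4 * math.sqrt(search_space / max(n_marked, 1)))

n_qubits = register_size(n_nodes=8, k_degree=4)   # hypothetical 8-node, degree-4 graph
print(n_qubits, grover_iterations(n_qubits, n_marked=2))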

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 179
2136 An Approach to Automate the Modeling of Life Cycle Inventory Data: Case Study on Electrical and Electronic Equipment Products

Authors: Axelle Bertrand, Tom Bauer, Carole Charbuillet, Martin Bonte, Marie Voyer, Nicolas Perry

Abstract:

The complexity of Life Cycle Assessment (LCA) can be identified as the ultimate obstacle to its massification. Because of this obstacle, the diffusion of eco-design and LCA methods in the manufacturing sectors may prove impossible. This article addresses the research question: how can the LCA method be adapted so that it can be generalized massively while improving its performance? This paper aims to develop an approach for automating LCA in order to carry out assessments on a massive scale. To answer this, we proceeded in three steps. First, we analysed the literature to identify existing automation methods. Given the constraints of large-scale manual processing, it was necessary to define a new approach, drawing inspiration from certain methods and combining them with new ideas and improvements. In the second part, our development of automated model construction is presented (reconciliation and implementation of data). Finally, the LCA case study of a conduit is presented to demonstrate the feature-based approach offered by the developed tool. A computerized environment supports effective and efficient decision-making related to materials and processes, facilitating the process of data mapping and hence product modeling. This method is also able to complete the LCA process on its own within minutes, so the calculations and the LCA report are generated automatically. The tool developed has shown that automation by code is a viable solution to meet LCA's massification objectives. It has major advantages over the traditional LCA method and overcomes the complexity of LCA. Indeed, the case study demonstrated the time savings associated with this methodology and, therefore, the opportunity to increase the number of LCA reports generated and to meet regulatory requirements. The case study also shows the potential of the proposed method for a wide range of applications.
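
As an illustration of the kind of automated data mapping described above (not the authors' tool), the sketch below reconciles a hypothetical bill of materials with placeholder LCI factors and sums a single GWP indicator.

# Illustrative sketch: reconcile a bill of materials with an LCI database and
# total a GWP indicator. Names and factors are hypothetical placeholders.

lci_gwp = {  # kg CO2e per kg of material (placeholder values)
    "copper": 4.1,
    "PVC": 2.6,
    "steel": 1.9,
}

bill_of_materials = [  # (material name as found in the CAD/PLM export, mass in kg)
    ("Copper", 0.35),
    ("pvc", 1.20),
    ("Steel", 0.80),
]

def reconcile(name: str) -> str:
    """Very simple data-mapping step: normalise the exported name to an LCI key."""
    key = name.strip().lower()
    matches = [k for k in lci_gwp if k.lower() == key]
    if not matches:
        raise KeyError(f"No LCI dataset mapped for material '{name}'")
    return matches[0]

total_gwp = sum(lci_gwp[reconcile(name)] * mass for name, mass in bill_of_materials)
print(f"Cradle-to-gate GWP of the product: {total_gwp:.2f} kg CO2e")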

Keywords: automation, EEE, life cycle assessment, life cycle inventory, massively

Procedia PDF Downloads 81
2135 Designing Energy Efficient Buildings for Seasonal Climates Using Machine Learning Techniques

Authors: Kishor T. Zingre, Seshadhri Srinivasan

Abstract:

Energy consumption by the building sector is increasing at an alarming rate throughout the world, leading to more building-related CO₂ emissions into the environment. In buildings, the main contributors to energy consumption are heating, ventilation, and air-conditioning (HVAC) systems, lighting, and electrical appliances. It is hypothesised that energy efficiency in buildings can be achieved by implementing sustainable technologies such as i) enhancing the thermal resistance of fabric materials to reduce heat gain (in hotter climates) and heat loss (in colder climates), ii) enhancing daylighting and lighting systems, iii) improving HVAC systems, and iv) occupant localization. The energy performance of these sustainable technologies is highly dependent on climatic conditions. This paper investigated the use of machine learning techniques for accurate prediction of air-conditioning energy in seasonal climates. The data required to train the machine learning techniques were obtained from computational simulations performed on a 3-story commercial building using the EnergyPlus program together with OpenStudio and Google SketchUp. The EnergyPlus model was calibrated against experimental measurements of surface temperatures and heat flux prior to being employed for the simulations. It was observed from the simulations that the performance of sustainable fabric materials (for walls, roof, and windows) such as phase change materials, insulation, cool roofs, etc., varies with the climate conditions. Various renewable technologies were also applied to the building's flat roof in various climates to investigate the potential for electricity generation. It was observed that the proposed technique overcomes the shortcomings of existing approaches, such as local linearization or over-simplifying assumptions. In addition, the proposed method can be used for real-time estimation of building air-conditioning energy.
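
As a minimal sketch of the workflow described above, the following Python code fits a machine-learning regressor to (hypothetical) EnergyPlus simulation outputs to predict air-conditioning energy; the file name, feature columns and model choice are assumptions, not details taken from the paper.

# Illustrative sketch: train a regressor on calibrated EnergyPlus runs to predict
# cooling energy from climate and envelope features (all column names assumed).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("energyplus_runs.csv")  # hypothetical export of simulation results
features = ["outdoor_temp", "solar_radiation", "wall_u_value", "roof_type", "window_shgc"]
X = pd.get_dummies(df[features])         # one-hot encode categorical envelope options
y = df["cooling_energy_kwh"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("MAE (kWh):", mean_absolute_error(y_test, model.predict(X_test)))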

Keywords: building energy efficiency, energyplus, machine learning techniques, seasonal climates

Procedia PDF Downloads 108
2134 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice

Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer

Abstract:

The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of the creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during the process of transient heat conduction increases the salinity of brine pockets until they reach a local equilibrium state. In this process, changing the sensible heat of the ice and brine pockets is not the only effect of heat passing through the medium; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. Properties of brine-spongy ice are obtained using the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to reach a set of ordinary differential equations. Boundary conditions are chosen using one of the applicable cases of this type of ice: one side is considered a thermally isolated surface, and the other side is assumed to be suddenly affected by a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted using different salinities from 5 to 60 ppt. Time steps and space intervals are chosen properly to maintain the most stable and fast solution. The variation of temperature, volume fraction of brine, and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine pocket salinities, from the initial salinity up to 180 ppt. The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation; this rate decreases as time passes. Brine pockets are smaller in portions closer to the colder side than in those closer to the warmer side. At the start of the solution, the numerical scheme tends toward instability. This is because of the sharp variation of temperature at the start of the process; refining the intervals improves the unstable situation. The analytical model, solved with a numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the process of freezing of salt water and ice accretion on cold structures.
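
For readers unfamiliar with the Method of Lines, the simplified sketch below discretizes a 1D heat conduction problem with the boundary conditions described above (insulated on one side, suddenly imposed constant temperature on the other) and integrates the resulting ODE system. It omits the latent-heat and salinity terms of the authors' full brine-spongy ice model, and all property values are assumptions.

# Minimal Method of Lines sketch: 1D slab, constant properties, no phase-change source term.
import numpy as np
from scipy.integrate import solve_ivp

L, n = 0.05, 51                 # slab thickness (m) and number of grid points
alpha = 1.2e-6                  # assumed thermal diffusivity (m^2/s)
dx = L / (n - 1)
T_init, T_cold = -2.0, -20.0    # initial and imposed boundary temperatures (deg C)

def rhs(t, T):
    dTdt = np.zeros_like(T)
    dTdt[1:-1] = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    dTdt[0] = alpha * 2 * (T[1] - T[0]) / dx**2   # insulated (zero-flux) boundary
    dTdt[-1] = 0.0                                # fixed-temperature boundary
    return dTdt

T0 = np.full(n, T_init)
T0[-1] = T_cold                 # boundary suddenly held at the cold temperature
sol = solve_ivp(rhs, (0.0, 3600.0), T0, method="BDF", t_eval=[0, 600, 1800, 3600])
print(sol.y[:, -1].round(2))    # temperature profile after one hour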

Keywords: method of lines, brine-spongy ice, heat conduction, salt water

Procedia PDF Downloads 211
2133 Main Tendencies of Youth Unemployment and the Regulation Mechanisms for Decreasing Its Rate in Georgia

Authors: Nino Paresashvili, Nino Abesadze

Abstract:

The modern world faces huge challenges. Globalization has changed the socio-economic conditions of many countries. The current processes in the global environment have a different impact on countries with different cultures. However, alleviation of poverty and improvement of living conditions are still the basic challenges for the majority of countries, because much of the population still lives under the official threshold of poverty. It is very important to stimulate youth employment. In order to prepare young people for the labour market, it is essential to provide them with the appropriate professional skills and knowledge. It is necessary to plan efficient activities for decreasing the unemployment rate and for developing proper mechanisms for the regulation of the labour market. Such planning requires thorough study and analysis of the existing reality, as well as the development of corresponding mechanisms. Statistical analysis of unemployment is one of the main platforms for regulating the labour market's key mechanisms. The corresponding statistical methods should be used in the study process, namely observation, data gathering, grouping, and calculation of generalized indicators. Unemployment is one of the most severe socioeconomic problems in Georgia. According to past as well as current statistics, unemployment rates have always been the most problematic issue for policy makers to resolve. Analytical work on the above-mentioned problem will be the basis for the next sustainable steps towards solving the main problem. The results of the study showed that the choices of young people are often not driven by their inclinations, their interests, or labour market demand. That is why the wrong professional orientation of young people in most cases leads to their unemployment. At the same time, it was shown that there are a number of professions in the labour market with high demand because of a deficit of the appropriate specialists. To achieve healthy competitiveness in youth employment, it is necessary to formulate regional employment programs that take into account the regional infrastructure specifications.

Keywords: unemployment, analysis, methods, tendencies, regulation mechanisms

Procedia PDF Downloads 372
2132 Inadequacy and Inefficiency of the Scoping Requirements in the Preparation of Environmental Impact Assessment Reports for Dam and Reservoir Projects in Thailand

Authors: Natsuda Rattamanee

Abstract:

Like other countries, Thailand continually experiences strong protests against dam and reservoir proposals, especially large-scale projects. The protestors are constantly worried about the potentially significant adverse impacts of the projects on the environment and society. Although project proponents are required by law to assess the environmental and social impacts of dam proposals by preparing environmental impact assessment (EIA) reports and finding mitigation measures before implementing the plans, the outcomes of the assessments often do not lessen the concerns of the affected people and the public about the potential negative effects of the projects. One of the main reasons is that Thailand does not have a proper and efficient law to regulate project proponents when they determine the scope of environmental impact assessments. Scoping is the crucial second stage of the preparation of an EIA report. An appropriate scope allows EIA studies to focus only on the significant effects of the proposed project on particular resources, areas, and communities, and offers crucial and sufficient information to decision-makers and the public. Decisions to implement dam and reservoir projects that are based on assessments with proper scoping will eventually be more widely accepted by the public and will reduce community opposition. This research work seeks to identify flaws in the current requirements of the scoping step under Thai laws and regulations and proposes recommendations to improve the legal scheme. The paper explores the well-established United States laws and relevant rules regulating how lead agencies determine the scope of their environmental impact assessments, as well as scoping guidelines published by leading institutions. Policymakers and the legislature will find the results of this study helpful in improving the scoping-step requirements of EIA for dam and reservoir projects and in reducing the level of anti-dam protests in Thailand.

Keywords: dam and reservoir, EIA, environmental impact assessment, law, scoping, Thailand

Procedia PDF Downloads 81
2131 Persistent Organochlorine Pesticides (POPs) in Water, Sediment, Fin Fishes (Schilbes mystus and Hemichromis fasciatus) from River Ogun, Lagos, Nigeria

Authors: Edwin O. Clarke, Akintade O. Adeboyejo

Abstract:

The intensive use of pesticides has resulted in the dispersal of pollutants throughout the globe. This study was carried out to investigate persistent organochlorine pesticides (POPs) in water, sediment and fin fishes, Schilbes mystus and Hemichromis fasciatus, from two different sampling stations along River Ogun between June 2012 and January 2013. The organochlorine pesticides analyzed include DDT (p,p'-1,1,1-trichloro-2,2-bis(4-chlorophenyl)ethane), DDD, DDE (p,p'-1,1-dichloro-2,2-bis(4-chlorophenyl)ethylene), HCH (gamma-1,2,3,4,5,6-hexachlorocyclohexane), HCB (hexachlorobenzene), and Dieldrin (1,2,3,4,10,10-hexachloro-6,7-epoxy-1,4,4a,5,6,7,8,8a-octahydro-1,4,5,8-dimethanonaphthalene). The analysis was done using a gas chromatograph with an electron capture detector. In the water samples, the results showed that PP DDT, endrin aldehyde, and endrin ketone concentrations were high at both stations. The mean values of the organochlorines analyzed in water ranged from Beta BHC (0.50 ± 0.10 µg/l) to PP DDT (162.86 ± 0.21 µg/l) at the Kara sampling station and from Beta BHC (0.20 ± 0.07 µg/l) to Endrin Aldehyde (76.47 ± 0.02 µg/l) at the Odo-Ogun sampling station. The levels of POPs obtained in sediments ranged from 0.40 ± 0.23 µg/g (Beta BHC) to 259.90 ± 1.00 µg/kg (Endosulfan sulfate) at the Kara sampling station and from 0.64 ± 0.00 µg/g (Beta BHC) to 379.77 ± 0.15 µg/g (Endosulfan sulfate) at the Odo-Ogun sampling station. The levels of POPs obtained in fin fish samples ranged from 0.29 ± 0.00 µg/g (Delta BHC) to 197.87 ± 0.31 µg/g (PP DDT) at the Kara sampling station, and at the Odo-Ogun sampling station the mean values for fish samples ranged from 0.29 ± 0.00 µg/g (Delta BHC) to 197.87 ± 0.32 µg/g (PP DDT). The study showed that the accumulation of POPs affects the environment and reduces water quality. The concentrations were found to exceed the maximum acceptable concentration of 0.10 µg/l set by the European Union for the protection of freshwater aquatic life, and this can be hazardous if the trend is not checked.

Keywords: hazardous, persistent, pesticides, biomes

Procedia PDF Downloads 278
2130 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, and there are still some slight deviations in terms of scale differences. However, insufficient parameters or poor surface mesh quality are likely to occur if these small deviations are embedded in a future civil aircraft with a size quite different from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, a study on the geometric similarity of airfoil parameters and surface mesh quality in CFD calculation is conducted to assess how different parameterization methods perform when applied to different airfoil scales. The research objects are three airfoil scales, including the wing root and wingtip of a conventional civil aircraft and the wing root of a giant hybrid wing, parameterized by three methods to compare the calculation differences between airfoils of different sizes. In this study, the constants include the NACA 0012 profile, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study also uses different numbers of edge mesh divisions and the same bias factor in the CFD simulation. The results show that as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of mesh divisions should be used to improve the accuracy of the aerodynamic performance of the wing. When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to support the accuracy of the airfoil's aerodynamic performance, which faces the severe test of insufficient computer capacity. On the other hand, when using the B-spline curve method, the number of control points and mesh divisions should be set appropriately to obtain higher accuracy; however, the quantitative balance cannot be defined directly, and the decisions must be made iteratively by adding and subtracting. Lastly, when using the CST method, it is found that a limited number of control points is enough to accurately parameterize the larger-sized wing, and a higher degree of accuracy and stability can be obtained even with a lower-performance computer.
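
As background on the CST method mentioned above, the sketch below evaluates a class/shape function transformation surface with a Bernstein-polynomial shape function; the coefficients and class-function exponents are illustrative assumptions, not the parameter sets used in the study.

# Illustrative CST sketch: class function psi**0.5 * (1 - psi) gives a round nose
# and sharp trailing edge; a Bernstein polynomial with a few coefficients shapes
# the thickness distribution.
import numpy as np
from math import comb

def cst_surface(psi, coeffs, n1=0.5, n2=1.0, dz_te=0.0):
    """Non-dimensional surface y/c at chordwise stations psi = x/c in [0, 1]."""
    n = len(coeffs) - 1
    class_fn = psi**n1 * (1.0 - psi)**n2
    shape_fn = sum(a * comb(n, i) * psi**i * (1.0 - psi)**(n - i)
                   for i, a in enumerate(coeffs))
    return class_fn * shape_fn + psi * dz_te

psi = np.linspace(0.0, 1.0, 101)
upper = cst_surface(psi, coeffs=[0.17, 0.16, 0.15])    # hypothetical upper-surface coefficients
lower = cst_surface(psi, coeffs=[-0.17, -0.16, -0.15]) # hypothetical lower-surface coefficients
print(float(upper.max()), float(lower.min()))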

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 214
2129 Wheat Dihaploid and Somaclonal Lines Screening for Resistance to P. nodorum

Authors: Lidia Kowalska, Edward Arseniuk

Abstract:

Glume and leaf blotch is a disease of wheat caused by the necrotrophic fungus Parastagonospora nodorum. It is a serious pathogen in many wheat-growing areas throughout the world. The use of resistant cultivars is the most effective and economical means to control the above-mentioned disease. Plant breeders and pathologists have worked intensively to incorporate resistance to the pathogen into new cultivars. Conventional methods of breeding for resistance can be supported by biotechnological ones, i.e., somatic embryogenesis and androgenesis. Therefore, an effort was undertaken to compare genetic variation in P. nodorum resistance among winter wheat somaclones, dihaploids and conventional varieties. For this purpose, a population of 16 somaclonal and 4 dihaploid wheat lines from six crosses was used to assess resistance to P. nodorum under field conditions. Lines were grown in disease-free (fungicide-protected) and inoculated microplots in two replications of a split-plot design in a single environment. The plant leaves were inoculated with a mixture of P. nodorum isolates three times. Spore concentrations were adjusted to 4 × 10⁶ viable spores per milliliter. Disease severity was rated on a scale where >90% indicated susceptible and <10% resistant. Disease ratings of plant leaves showed statistically significant differences among all lines tested. Higher resistance to P. nodorum was observed more often on leaves of somaclonal lines than on dihaploid ones. On average, disease severity reached 15% on leaves of somaclones and 30% on leaves of dihaploids. Some genotypes showed low leaf infection, e.g., the dihaploid D-33 (disease severity 4%) and the somaclone S-1 (disease severity 2%). The results of this study show that dihaploid and somaclonal variation might be successfully used as an additional source of wheat resistance to the pathogen and could be recommended for use in commercial breeding programs. The reported results show that biotechnological methods may effectively be used in breeding wheat for resistance to fungal necrotrophic pathogens.

Keywords: glume and leaf blotch, somaclonal, androgenic variation, wheat, resistance breeding

Procedia PDF Downloads 114
2128 Sustainable Technologies for Decommissioning of Nuclear Facilities

Authors: Ahmed Stifi, Sascha Gentes

Abstract:

The German nuclear industry, while implementing German policy, believes that the journey towards the green field, namely the phasing out of nuclear energy, should be achieved through green techniques. The most important techniques required for the wide range of decommissioning activities are decontamination techniques, cutting techniques, radioactivity measuring techniques, remote control techniques, techniques for worker and environmental protection, and techniques for treating, preconditioning and conditioning nuclear waste. Many decontamination techniques are used for removing contamination from metal, concrete or other surfaces, such as the scales inside pipes. As the pipeline system is one of the important components of nuclear power plants, the process of decontaminating tubing is of particular significance. The development of the oil, gas and nuclear energy sectors since the middle of the 20th century has expanded the pipeline industry, and research on the decontamination of tubing in each sector is found to serve the others. The extraction of natural products and materials through pipelines can result in scale formation. These scales can be radioactively contaminated through an accumulation process, especially in the petrochemical industry when oil and gas are extracted from underground reservoirs. The radioactivity measured in these scales can be significantly high and pose a great threat to people and the environment. At present, the decontamination process involves using high-pressure water jets with or without abrasive material, and this technology produces a large amount of secondary waste. In order to overcome this, the research team within the Karlsruhe Institute of Technology developed a new sustainable method to carry out the decontamination of tubing without producing any secondary waste. This method is based on a vibration technique which removes scales and does not require any auxiliary materials. The outcome of the research project shows that the vibration technique used for the decontamination of tubing is environmentally friendly, in other words, a sustainable technique.

Keywords: sustainable technologies, decontamination, pipeline, nuclear industry

Procedia PDF Downloads 297
2127 The Clash between Environmental and Heritage Laws: An Australian Case Study

Authors: Andrew R. Beatty

Abstract:

The exploitation of Australia’s vast mineral wealth is regulated by a matrix of planning, environment and heritage legislation, and despite the desire for a ‘balance’ between economic, environmental and heritage values, Aboriginal objects and places are often detrimentally impacted by mining approvals. The Australian experience is not novel. There are other cases of clashes between the rights of traditional landowners and businesses seeking to exploit mineral or other resources on or beneath those lands, including in the United States, Canada, and Brazil. How one reconciles the rights of traditional owners with those of resource companies is an ongoing legal problem of general interest. In Australia, planning and environmental approvals for resource projects are ordinarily issued by State or Territory governments. Federal legislation such as the Aboriginal and Torres Strait Islander Heritage Protection Act 1984 (Cth) is intended to act as a safety net when State or Territory legislation is incapable of protecting Indigenous objects or places in the context of approvals for resource projects. This paper will analyse the context and effectiveness of legislation enacted to protect Indigenous heritage in the planning process. In particular, the paper will analyse how the statutory objects of such legislation need to be weighed against the statutory objects of competing legislation designed to facilitate and control resource exploitation. Using a current claim in the Federal Court of Australia for the protection of a culturally significant landscape as a case study, this paper will examine the challenges faced in ascribing value to cultural heritage within the wider context of environmental and planning laws. Our findings will reveal that there is an inherent difficulty in defining and weighing competing economic, environmental and heritage considerations. An alternative framework will be proposed to guide regulators towards making decisions that result in better protection of Indigenous heritage in the context of resource management.

Keywords: environmental law, heritage law, indigenous rights, mining

Procedia PDF Downloads 93
2126 Thermosonic Devulcanization of Waste Ground Rubber Tires by Quaternary Ammonium-Based Ternary Deep Eutectic Solvents and the Effect of α-Hydrogen

Authors: Ricky Saputra, Rashmi Walvekar, Mohammad Khalid

Abstract:

Landfills, water contamination, and toxic gas emissions are a few of the impacts faced by the environment due to the increasing number of waste rubber tires (WRT). In spite of such a concerning issue, only minimal efforts are taken to reclaim or recycle these wastes, as their products are generally not profitable for companies. Unlike the typical reclamation process, devulcanization is a method that selectively cleaves sulfidic bonds within vulcanizates to avoid the polymeric scissions that compromise an elastomer's mechanical and tensile properties. The process also produces devulcanizates that are reprocessable, similar to virgin rubber. Often, a devulcanizing agent is needed. In the current study, novel and sustainable ammonium chloride-based ternary deep eutectic solvents (TDES), with different numbers of α-hydrogens, were utilised to devulcanize ground rubber tire (GRT) in an effort to apply green chemistry to this issue. 40-mesh GRT was soaked for 1 day in different TDESs, sonicated at 37-80 kHz for 60-120 mins, and heated at 100-140 °C for 30-90 mins. Devulcanizates were then filtered, dried, and evaluated based on the degree of devulcanization by means of the Flory-Rehner calculation and the swelling index. The results show that an increasing number of α-Hs increases the degree of devulcanization; the value achieved was around eighty percent, thirty percent higher than the typical industrial autoclave method. The resulting bond structures of the devulcanizates were also analysed by Fourier transform infrared spectroscopy (FTIR), Horikx fitting, and thermogravimetric analysis (TGA). The first two confirm that only sulfidic scissions occurred in the GRT through the treatment, while the latter shows the absence or negligibility of carbon-chain scission.
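
As context for the reported degree of devulcanization, the sketch below applies the Flory-Rehner relation to estimate crosslink density before and after treatment; the swelling fractions, interaction parameter and solvent molar volume are hypothetical placeholders, not the study's measurements.

# Illustrative Flory-Rehner sketch: crosslink density from equilibrium swelling,
# and the percentage drop used as the degree of devulcanization.
import math

def crosslink_density(v_r: float, chi: float, v_s: float) -> float:
    """nu = -[ln(1-Vr) + Vr + chi*Vr^2] / [Vs*(Vr^(1/3) - Vr/2)], in mol/cm^3."""
    return -(math.log(1.0 - v_r) + v_r + chi * v_r**2) / (v_s * (v_r**(1.0 / 3.0) - v_r / 2.0))

def devulcanization_pct(nu_before: float, nu_after: float) -> float:
    """Percentage drop in crosslink density relative to the untreated rubber."""
    return (1.0 - nu_after / nu_before) * 100.0

chi, v_s = 0.39, 106.3   # assumed rubber-solvent interaction parameter and solvent molar volume (cm^3/mol)
nu_0 = crosslink_density(v_r=0.30, chi=chi, v_s=v_s)   # untreated GRT (hypothetical Vr)
nu_1 = crosslink_density(v_r=0.12, chi=chi, v_s=v_s)   # TDES-treated sample (hypothetical Vr)
print(f"Degree of devulcanization: {devulcanization_pct(nu_0, nu_1):.0f}%")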

Keywords: ammonium, sustainable, deep eutectic solvent, α-hydrogen, waste rubber tire

Procedia PDF Downloads 120
2125 The Relationship Between Car Drivers' Background Information and Risky Events in the i-DREAMS Project

Authors: Dagim Dessalegn Haile

Abstract:

This study investigated the interaction between drivers' socio-demographic background information (age, gender, and driving experience) and the risky-event scores in the i-DREAMS platform. Further, the relationship between the participants' background driving behavior and the i-DREAMS platform's behavioral output scores for risky events was also investigated. The i-DREAMS acronym stands for Smart Driver and Road Environment Assessment and Monitoring System. It is a European Union Horizon 2020 funded project consisting of 13 partners, both researchers and industry partners, from 8 countries. A total of 25 Belgian car drivers (16 male and 9 female) were considered for analysis. Drivers' ages were categorized into the groups 18-25, 26-45, 46-65, and 65 and older. Drivers' driving experience was also categorized into four groups: 1-15, 16-30, 31-45, and 46-60 years. Drivers were classified into two clusters based on the scores recorded during phase 1 (baseline) for the risky events acceleration, deceleration, speeding, tailgating, overtaking, and lane discipline. Agglomerative hierarchical clustering using SPSS shows that Cluster 1 drivers are safer drivers, while Cluster 2 drivers are identified as risky drivers. The analysis results indicated no significant relationship between age groups, gender, and experience groups, except for risky events like acceleration, tailgating, and overtaking in a few phases. This is mainly because the small number of participants creates little variability across socio-demographic background groups. Repeated-measures ANOVA shows that Cluster 2 drivers improved more than Cluster 1 drivers for tailgating, lane discipline, and speeding events. A positive relationship between background driver behavior and the i-DREAMS platform behavioral output scores is observed. It implies that car drivers who indicate in the questionnaire data that they commit more risky driving behavior also demonstrate more risky driving behavior in the i-DREAMS observed driving data.
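
As an illustrative counterpart to the SPSS workflow described above, the following Python sketch performs agglomerative hierarchical clustering of drivers into two groups from their baseline risky-event scores; the input file and column names are assumptions.

# Illustrative sketch: two-cluster agglomerative clustering of driver risk profiles.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

events = ["acceleration", "deceleration", "speeding", "tailgating", "overtaking", "lane_discipline"]
scores = pd.read_csv("baseline_scores.csv")          # hypothetical: one row per driver

X = StandardScaler().fit_transform(scores[events])   # put event scores on a common scale
labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)
scores["cluster"] = labels

# Inspect cluster profiles to decide which one represents the "safer" drivers.
print(scores.groupby("cluster")[events].mean().round(2))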

Keywords: i-dreams, car drivers, socio-demographic background, risky events

Procedia PDF Downloads 64
2124 Adopting Data Science and Citizen Science to Explore the Development of African Indigenous Agricultural Knowledge Platform

Authors: Steven Sam, Ximena Schmidt, Hugh Dickinson, Jens Jensen

Abstract:

The goal of this study is to explore the potential of data science and citizen science approaches to develop an interactive, digital, open infrastructure that pulls together African indigenous agriculture and food systems data from multiple sources, making it accessible and reusable for policy, research and practice in modern food production efforts. The World Bank has recognised that African Indigenous Knowledge (AIK) is innovative and unique among local and subsistence smallholder farmers, and it is central to sustainable food production and to enhancing biodiversity and natural resources in many poor, rural societies. AIK refers to tacit knowledge held in different languages, cultures and skills, passed down from generation to generation by word of mouth. AIK is a key driver of food production, preservation, and consumption for more than 80% of citizens in Africa, and can therefore assist modern efforts to reduce food insecurity and hunger. However, the documentation and dissemination of AIK remain a big challenge confronting librarians and other information professionals in Africa, and there is a risk of losing AIK owing to urban migration, modernisation, land grabbing, and the emergence of relatively small-scale commercial farming businesses. There is also a clear disconnect between AIK and scientific knowledge and modern efforts for sustainable food production. The study combines data science and citizen science approaches, through active community participation, to generate and share AIK for facilitating learning and promoting knowledge that is relevant for policy intervention and sustainable food production through a curated digital platform based on FAIR principles. The study adopts key informant interviews along with a participatory photo and video elicitation approach, where farmers are given digital devices (mobile phones) to record and document their every practice involving agriculture, food production, processing, and consumption by traditional means. Data collected are analysed using the UK Science and Technology Facilities Council's proven methodology of citizen science (Zooniverse) and data science. Outcomes are presented in participatory stakeholder workshops, where the researchers outline plans for creating the platform and developing the knowledge-sharing standard framework and copyright agreements. Overall, the study shows that learning from AIK, by investigating what local communities know and have, can improve understanding of food production and consumption, in particular in times of stress or shocks affecting food systems and communities. Thus, the platform can be useful for local populations, research, and policy-makers, and it could lead to transformative innovation in the food system, creating a fundamental shift in the way the North supports sustainable, modern food production efforts in Africa.

Keywords: Africa indigenous agriculture knowledge, citizen science, data science, sustainable food production, traditional food system

Procedia PDF Downloads 78
2123 Shaping and Improving the Human Resource Management in Small and Medium Enterprises in Poland

Authors: Małgorzata Smolarek

Abstract:

One of the barriers to the development of small and medium-sized enterprises (SME) is the difficulty connected with the management of human resources. The first part of the article defines the specifics of staff management in small and medium enterprises. The practical part presents the results of the authors' own studies in the area of diagnosing the state of human resources management in small and medium-sized enterprises in Poland, taking into account its impact on the functioning of SME in a variable environment. This part presents the findings of empirical studies, which enabled verification of the hypotheses and formulation of conclusions. The findings presented in this paper were obtained during the implementation of the project entitled 'Tendencies and challenges in strategic managing SME in Silesian Voivodeship.' The aim of the studies was to diagnose the state of strategic management and human resources management, taking into account its impact on the functioning of small and medium enterprises operating in Silesian Voivodeship in Poland, and to indicate areas for improvement of the model under diagnosis. One of the specific objectives of the studies was to diagnose the state of the process of strategic management of human resources and to identify fundamental problems. In this area, the main hypothesis was formulated: the enterprises analysed do not have comprehensive strategies for the management of human resources. The survey was conducted by questionnaire. Main research results: human resource management in SMEs is characterized by simple procedures and a lack of sophisticated tools, and its specificity depends on the size of the company. The process of human resources management in SME has to be adjusted to the structure of an organisation and result from its objectives, so that the organisation can fully implement its strategic plans and achieve success and competitive advantage in the market. A guarantee of success is an accurately developed policy of human resources management based on prior analyses of the existing procedures and the human resources possessed.

Keywords: human resources management, human resources policy, personnel strategy, small and medium enterprises

Procedia PDF Downloads 233
2122 The Impact of Client Leadership, Building Information Modelling (BIM) and Integrated Project Delivery (IPD) on Construction Project: A Case Study in UAE

Authors: C. W. F. Che Wan Putra, M. Alshawi, M. S. Al Ahbabi, M. Jabakhanji

Abstract:

The construction industry is a multi-disciplinary and multi-national industry, which has an important role to play within the overall economy of any country. There are major challenges to improved performance within the industry. Particularly lacking is the ability to capture the large amounts of information generated during the life-cycle of projects and to make these available, in the right format, so that professionals can evaluate alternative solutions based on life-cycle analysis. The fragmented nature of the industry is the main reason behind the unavailability and poor utilisation of project information. The lack of adequately engaging clients and managing their requirements contributes adversely to construction budget and schedule overruns. This is a difficult task to achieve, particularly if clients are not continuously and formally involved in the design and construction process, which means that the design intent is left to designers who may not always satisfy clients' requirements. Client leadership is strongly recognised as a driver of change through better collaboration between project stakeholders. However, one of the major challenges is that collaboration is operated under conventional procurement methods, which hugely limit the stakeholders' roles and responsibilities in bringing about the required level of collaboration. Research was conducted on a typical project in the UAE. A qualitative research work was carried out, including semi-structured interviews with project partners, to discover the real reasons behind this delay. The case study also investigated the real causes of the problems and whether they can be adequately addressed by BIM and IPD. Special focus was also placed on client leadership and the role the client can play to eliminate or minimize these problems. It was found that part of the 'key elements' from which the problems arise can be attributed to client leadership, the collaborative environment, and BIM.

Keywords: client leadership, building information modelling (BIM), integrated project delivery (IPD), case study

Procedia PDF Downloads 320
2121 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data is more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least squares estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally and, hence, the well-known least squares method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared to the widely used least squares estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least squares estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation, etc.
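
As a sketch of the linearization step described above (the general modified maximum likelihood idea; the notation here is assumed rather than taken from the paper):

\[
g\bigl(z_{(i)}\bigr) \;\approx\; \alpha_i + \beta_i\, z_{(i)}, \qquad
\beta_i = g'\bigl(t_{(i)}\bigr), \qquad
\alpha_i = g\bigl(t_{(i)}\bigr) - t_{(i)}\, g'\bigl(t_{(i)}\bigr),
\]

where \(z_{(i)} = (y_{(i)} - \mu)/\sigma\) are the standardized ordered variates, \(g(\cdot)\) is the intractable term appearing in the likelihood equations, and \(t_{(i)}\) is the population quantile satisfying \(F(t_{(i)}) = i/(n+1)\). Substituting this first-order Taylor approximation makes the likelihood equations linear in \(\mu\) and \(\sigma\), which is why the modified maximum likelihood estimators can be written in closed form.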

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 393
2120 Evaluating Gene-Gene Interaction among Nicotine Dependence Genes on the Risk of Oral Clefts

Authors: Mengying Wang, Dongjing Liu, Holger Schwender, Ping Wang, Hongping Zhu, Tao Wu, Terri H Beaty

Abstract:

Background: Maternal smoking is a recognized risk factor for nonsyndromic cleft lip with or without cleft palate (NSCL/P). It has been reported that the effect of maternal smoking on oral clefts is mediated through genes that influence nicotine dependence. The polymorphisms of cholinergic receptor nicotinic alpha (CHRNA) and beta (CHRNB) subunit genes have previously shown strong associations with nicotine dependence. Here, we attempted to investigate whether the above genes are associated with clefting risk through testing for potential gene-gene (G×G) and gene-environment (G×E) interaction. Methods: We selected 120 markers in 14 genes associated with nicotine dependence to conduct transmission disequilibrium tests among 806 Chinese NSCL/P case-parent trios ascertained in an international consortium which conducted a genome-wide association study (GWAS) of oral clefts. We applied Cordell's method using the 'TRIO' package in R to explore G×G as well as G×E interaction involving environmental tobacco smoke (ETS) based on a conditional logistic regression model. Results: While no SNP showed significant association with NSCL/P after Bonferroni correction, we found signals for G×G interaction between 10 pairs of SNPs in CHRNA3, CHRNA5, and CHRNB4 (p < 10⁻⁸), among which the most significant interaction was found between RS3743077 (CHRNA3) and RS11636753 (CHRNB4, p < 8.2 × 10⁻¹²). Linkage disequilibrium (LD) analysis revealed only a low level of LD between these markers. However, there were no significant results for G×ETS interaction. Conclusion: This study fails to detect an association between nicotine dependence genes and NSCL/P, but illustrates the importance of taking into account potential G×G interaction in genetic association analysis of NSCL/P. This study also suggests that nicotine dependence genes should be considered as important candidate genes for NSCL/P in future studies.

Keywords: Gene-Gene Interaction, Maternal Smoking, Nicotine Dependence, Non-Syndromic Cleft Lip with or without Cleft Palate

Procedia PDF Downloads 331
2119 Circle Work as a Relational Praxis to Facilitate Collaborative Learning within Higher Education: A Decolonial Pedagogical Framework for Teaching and Learning in the Virtual Classroom

Authors: Jennifer Nutton, Gayle Ployer, Ky Scott, Jenny Morgan

Abstract:

Working in a circle within higher education creates a decolonial space of mutual respect, responsibility, and reciprocity that facilitates collaborative learning and deep connections among learners and instructors. This approach goes beyond simply facilitating a group in a circle; it opens the door to creating a sacred space connecting each member to the land, to the Indigenous peoples who have taken care of the lands since time immemorial, to one another, and to one's own positionality. These deep connections not only center human knowledges and relationships but also acknowledge responsibilities to the land. Working in a circle as a relational pedagogical praxis also disrupts institutional power dynamics by creating a space of collaborative learning and deep connections in the classroom. Inherent within circle work is the facilitation of connections that are not just academic but also emotional, physical, cultural, and spiritual. Recent literature supports the use of online talking circles, finding that they can offer a more relational and experiential learning environment, which is often absent in the virtual world and has been made more evident and necessary since the pandemic. These deeper experiences of learning and connection, rooted in both knowledge and the land, can then be shared with openness and vulnerability with one another, facilitating growth and change. This process of beginning with the land is critical to ensure we have the grounding to obstruct the ongoing realities of colonialism. The authors, who identify as both Indigenous and non-Indigenous, and as both educators and learners, reflect on their teaching and learning experiences in circle. They share a relational pedagogical praxis framework that has been successful in educating future social workers, environmental activists, and leaders in the social and human services, health, legal and political fields.

Keywords: circle work, relational pedagogies, decolonization, distance education

Procedia PDF Downloads 74
2118 DIF-JACKET: a Thermal Protective Jacket for Firefighters

Authors: Gilda Santos, Rita Marques, Francisca Marques, João Ribeiro, André Fonseca, João M. Miranda, João B. L. M. Campos, Soraia F. Neves

Abstract:

Every year, an unacceptable number of firefighters are seriously burned during firefighting operations, with some of them eventually losing their lives. Although thermal protective clothing research and development has been searching for solutions to minimize firefighters’ heat load and skin burns, currently available commercial solutions focus on solving isolated problems, for example, radiant heat or water-vapor resistance. Therefore, episodes of severe burns and heat stroke are still frequent. Taking this into account, a consortium of Portuguese entities has joined synergies to develop an innovative protective clothing system, following a procedure based on the application of numerical models to optimize the design and using a combination of protective clothing components disposed in different layers. Recently, it has been shown that Phase Change Materials (PCMs) can contribute to the reduction of potential heat hazards in fire extinguishing operations, and consequently, their incorporation into firefighting protective clothing has advantages. The greatest challenge is to integrate these materials without compromising garment ergonomics while still complying with the international standard for protective clothing for firefighters – laboratory test methods and performance requirements for wildland firefighting clothing. The incorporation of PCMs into the firefighter’s protective jacket will result in the absorption of heat from the fire and consequently increase the time the firefighter can be exposed to it. According to the project studies and developments, to favor a higher use of the PCM storage capacity and to take advantage of its high thermal inertia more efficiently, the PCM layer should be closer to the external heat source. Therefore, at this stage, to integrate PCMs in firefighting clothing, a mock-up of a vest specially designed to protect the torso (back, chest, and abdomen) and to be worn over a fire-resistant jacket was envisaged. Different configurations of PCMs, as well as multilayer approaches, were studied using suitable joining technologies such as bonding, ultrasound, and radiofrequency. Concerning firefighters’ protective clothing, it is important to balance heat protection and flame resistance with comfort parameters, namely thermal and water-vapor resistance. The impact of the most promising solutions on thermal comfort was evaluated to refine the performance of the global solutions. Results obtained with an experimental bench-scale model and numerical simulation regarding the integration of PCMs in a vest designed as protective clothing for firefighters will be presented.
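To illustrate why a PCM layer buys exposure time, the sketch below runs a minimal lumped (zero-dimensional) energy balance in Python: incoming heat first raises the layer to its melting point, is then absorbed as latent heat at a roughly constant temperature, and only afterwards drives the temperature up again. All parameter values (heat flux, mass, melting point, latent heat) are hypothetical placeholders, not measurements or model settings from the DIF-JACKET project.

```python
def pcm_temperature_history(q_in=500.0, area=0.1, mass=0.2,
                            cp_solid=2000.0, cp_liquid=2200.0,
                            t_melt=55.0, latent_heat=180e3,
                            t_start=25.0, dt=1.0, t_end=600.0):
    """Lumped energy balance of a PCM layer under a constant heat flux.

    q_in [W/m^2], area [m^2], mass [kg], cp [J/(kg K)],
    latent_heat [J/kg], temperatures [degC], dt and t_end [s].
    Returns (time, temperature) lists.
    """
    power = q_in * area             # absorbed heat rate [W]
    temp, melted = t_start, 0.0     # melted = latent energy absorbed so far [J]
    times, temps = [0.0], [temp]
    t = 0.0
    while t < t_end:
        if temp < t_melt:                        # sensible heating of solid
            temp = min(temp + power * dt / (mass * cp_solid), t_melt)
            # (small step-boundary energy is neglected in this sketch)
        elif melted < mass * latent_heat:        # melting plateau
            melted += power * dt
        else:                                    # sensible heating of liquid
            temp += power * dt / (mass * cp_liquid)
        t += dt
        times.append(t)
        temps.append(temp)
    return times, temps

times, temps = pcm_temperature_history()
print(f"temperature after 10 min: {temps[-1]:.1f} degC")  # still on the melting plateau
```

With these placeholder values the layer is still sitting at the melting plateau after ten minutes, which is the qualitative effect the project exploits; the real design work lies in the detailed numerical models and layer configurations described above.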

Keywords: firefighters, multilayer system, phase change material, thermal protective clothing

Procedia PDF Downloads 154
2117 A Little-Known Side: Florence Nightingale as a Passionate Statistician

Authors: Gülcan Taşkıran, Ayla Bayık Temel

Abstract:

Background: Florence Nightingale, the founder of modern nursing, is most famous for her role as a nurse, but much less is known about her contributions as a mathematician and statistician. Aim: This conceptual article examines Florence Nightingale’s statistical education, how she used her passion for statistics and applied statistical data in nursing care, and her scientific contributions to statistical science. Design: A literature review method was used. The databases of the Istanbul University Library Search Engine, the Turkish Medical Directory, the Thesis Scanning Center of the Higher Education Council, PubMed, Google Scholar, EBSCO Host, and Web of Science were searched. The keywords ‘statistics’ and ‘Florence Nightingale’ were used in Turkish and English during screening. As a result of the screening, a total of 41 studies from the national and international literature were examined. Results: Florence Nightingale was interested in mathematics and statistics from an early age and received various training in these subjects. The lessons she learned in a cultured family environment, her talent for mathematics and numbers, and her religious beliefs played a crucial role in directing her toward statistics. She was influenced by Quetelet’s ideas in forming her statistical philosophy and received support from William Farr in her statistical studies. During the Crimean War, she applied statistical knowledge to nursing care and developed many statistical methods and graphics, thereby making revolutionary reforms in the health field. Conclusions: Nightingale’s interest in statistics, her broad vision, her statistical ideas fused with religious beliefs, the innovative graphics she developed, and the extraordinary statistical projects she carried out underpinned her professional achievements. Florence Nightingale has also become a model for women in statistics. Today, the use and teaching of statistics and research in nursing care practice and education programs continue in the light she provided.

Keywords: Crimean war, Florence Nightingale, nursing, statistics

Procedia PDF Downloads 289
2116 Feasibility Study on Developing and Enhancing of Flood Forecasting and Warning Systems in Thailand

Authors: Sitarrine Thongpussawal, Dasarath Jayasuriya, Thanaroj Woraratprasert, Sakawtree Prajamwong

Abstract:

Thailand grapples with recurrent floods causing substantial repercussions on its economy, society, and environment. In 2021, the economic toll of these floods amounted to an estimated 53,282 million baht, primarily impacting the agricultural sector. The existing flood monitoring system in Thailand suffers from inaccuracies and insufficient information, resulting in delayed warnings and ineffective communication to the public. The Office of the National Water Resources (ONWR) is tasked with developing and integrating data and information systems for efficient water resources management, yet faces challenges in monitoring accuracy, forecasting, and timely warnings. This study endeavors to evaluate the viability of enhancing Thailand's Flood Forecasting and Warning (FFW) systems. Additionally, it aims to formulate a comprehensive work package grounded in international best practices to enhance the country's FFW systems. Employing qualitative research methodologies, the study conducted in-depth interviews and focus groups with pertinent agencies. Data analysis involved techniques like note-taking and document analysis. The study substantiates the feasibility of developing and enhancing FFW systems in Thailand. Implementation of international best practices can augment the precision of flood forecasting and warning systems, empowering local agencies and residents in high-risk areas to prepare proactively, thereby minimizing the adverse impact of floods on lives and property. This research underscores that Thailand can feasibly advance its FFW systems by adopting international best practices, enhancing accuracy, and improving preparedness. Consequently, the study enriches the theoretical understanding of flood forecasting and warning systems and furnishes valuable recommendations for their enhancement in Thailand.

Keywords: flooding, forecasting, warning, monitoring, communication, Thailand

Procedia PDF Downloads 52
2115 A Literature Review on the Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster

Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon

Abstract:

In a disaster event, sharing patient information between pre-hospital Emergency Medical Services (EMS) and hospital Emergency Departments (EDs) is a complex process during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base on communication between pre-hospital EMS and ED professionals through the use of Information and Communication Technology (ICT). The study followed a systematic approach: six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and IEEE Xplore Digital Library) were comprehensively searched in January 2018, and a second search was completed in April 2020 to capture more recent publications. The study selection process was undertaken independently by the study authors. Both qualitative and quantitative studies were chosen that focused on factors positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analysed according to the key screening themes which emerged from the literature search. Twenty-two studies were included: eleven employed quantitative methods, seven used qualitative methods, and four used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination. This theme reported that disaster plans are in place in hospitals, and in some cases, there are interagency agreements with pre-hospital services and relevant stakeholders. However, the findings showed that the disaster plans highlighted in these studies lacked information regarding coordinated communication within and between the pre-hospital and hospital settings. (2) Communication systems used in disasters. This theme highlighted that although various communication systems are used between and within hospitals and pre-hospital services, technical issues have hindered communication between teams during disasters. (3) Integrated information management systems. This theme suggested the need for an integrated health information system that can help pre-hospital and hospital staff to record patient data and ensure the data are shared. (4) Disaster training and drills. While some studies analysed disaster drills and training, the majority focused on hospital departments other than EMTs. These studies suggest the need for simulation-based disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of communication between EMS and ED hospital staff during disaster response. The review also shows that although different types of ICT are used, various issues remain which affect coordinated communication among the relevant professionals.

Keywords: communication, emergency communication services, emergency medical teams, emergency physicians, emergency nursing, paramedics, information and communication technology, communication systems

Procedia PDF Downloads 81
2114 Synthesis and Characterization of Cobalt Oxide and Cu-Doped Cobalt Oxide as Photocatalyst for Model Dye Degradation

Authors: Vrinda P. S. Borker

Abstract:

Dyes from industrial effluents are among the major water pollutants. Different methods have been tried to degrade or treat effluent before it is released to the environment. In order to understand the degradation process and later apply it to effluents, a solar degradation study of the model dyes methylene blue (MB) and methyl red (MR) was carried out in the presence of photocatalysts: cobalt oxide Co₃O₄ and the copper-doped cobalt oxides (Co₀.₉Cu₀.₁)₃O₄ and (Co₀.₉₅Cu₀.₀₅)₃O₄. They were prepared from the oxalate complex and the hydrazinated oxalate complex of cobalt, as well as of the mixed metals copper and cobalt. The complexes were synthesized and characterized by FTIR, then decomposed to form oxides, which were characterized by XRD and found to be monophasic. Solar degradation of MR and MB was carried out in the presence of these oxides in acidic and basic media. Degradation was faster in alkaline medium in the presence of Co₃O₄ obtained from the hydrazinated oxalate. Doping of nanomaterial oxides modifies their characteristics, and the doped cobalt oxides were found to photo-decolourise MR efficiently in alkaline media. In the absence of a photocatalyst, solar degradation of alkaline MR does not occur. In acidic medium, MR is only minimally decolourised, even in the presence of photocatalysts. Industrial textile effluent contains chemicals such as NaCl and Na₂CO₃ along with the unabsorbed dye; these two chemicals are reported to hamper dye degradation, whereas K₂S₂O₈ and H₂O₂ are reported to enhance it. The solar degradation study of MB in the presence of the photocatalyst (Co₀.₉Cu₀.₁)₃O₄ and these four chemicals reveals that the presence of K₂S₂O₈ and H₂O₂ enhances degradation. This indicates that H₂O₂ generates the hydroxyl radicals required for dye degradation, while the sulphate anion radical, being a strong oxidant, attacks dye molecules and leads to their rapid fragmentation. Thus, the addition of K₂S₂O₈ and H₂O₂ during solar degradation in the presence of (Co₀.₉Cu₀.₁)₃O₄ helps to break down the organic moiety efficiently.
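Photocatalytic decolourisation of this kind is commonly summarised with pseudo-first-order kinetics, ln(C₀/C) = k·t. The Python sketch below fits an apparent rate constant k to concentration-versus-time data; the numbers are entirely hypothetical and are not taken from this study, which reports qualitative comparisons only.

```python
import numpy as np

def apparent_rate_constant(times_min, concentrations):
    """Fit ln(C0/C) = k*t by least squares through the origin; return k in 1/min."""
    c = np.asarray(concentrations, dtype=float)
    t = np.asarray(times_min, dtype=float)
    y = np.log(c[0] / c)                    # ln(C0/C) at each time point
    return float(np.sum(t * y) / np.sum(t * t))

# hypothetical dye concentrations (mg/L) vs irradiation time (min)
times = [0, 15, 30, 60, 90, 120]
conc = [10.0, 8.1, 6.6, 4.4, 2.9, 1.9]
print(f"apparent k = {apparent_rate_constant(times, conc):.4f} per min")
```

Comparing such apparent rate constants across catalysts and additive combinations (e.g., with and without K₂S₂O₈ or H₂O₂) is one conventional way to quantify the enhancement effects described above.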

Keywords: cobalt oxides, Cu-doped cobalt oxides, H₂O₂ in dye degradation, photo-catalyst, solar dye degradation

Procedia PDF Downloads 168
2113 Corrosion Protection and Failure Mechanism of ZrO₂ Coating on Zirconium Alloy Zry-4 under Varied LiOH Concentrations in Lithiated Water at 360°C and 18.5 MPa

Authors: Guanyu Jiang, Donghai Xu, Huanteng Liu

Abstract:

After the Fukushima-Daiichi accident, the development of accident-tolerant fuel cladding materials to improve reactor safety has become a hot topic in the nuclear industry. ZrO₂ has a satisfactory neutron economy and does not impair the fission chain reaction, which makes it a promising coating for zirconium alloy cladding. Maintaining good corrosion resistance in the primary coolant loop during normal operation of Pressurized Water Reactors is a prerequisite for ZrO₂ as a protective coating on zirconium alloy cladding. Research on the corrosion performance of ZrO₂ coatings in nuclear water chemistry is relatively scarce, and existing reports have failed to provide an in-depth explanation of the causes of ZrO₂ coating failure. Herein, a detailed corrosion process of a ZrO₂ coating in lithiated water at 360 °C and 18.5 MPa is proposed based on experimental research and molecular dynamics simulation. The lithiated water used in the present work was deaerated and had a dissolved oxygen concentration of < 10 ppb. The concentration of Li (as LiOH) was set to 2.3 ppm, 70 ppm, and 500 ppm, respectively. Corrosion tests were conducted in a static autoclave. Modeling and the corresponding calculations were performed with Materials Studio software; adsorption energies and dynamics parameters were calculated with the Energy and Dynamics tasks of the Forcite module, respectively. The protective effect and failure mechanism of the ZrO₂ coating on Zry-4 under varied LiOH concentrations were further revealed by comparison with the coating's corrosion performance in pure water (namely, 0 ppm Li). The ZrO₂ coating provided favorable corrosion protection at low LiOH concentrations, with the occurrence of localized corrosion. Factors influencing corrosion resistance mainly include pitting corrosion extension, enhanced Li⁺ permeation, short-circuit diffusion of O²⁻, and ZrO₂ phase transformation. In highly concentrated LiOH solutions, intergranular corrosion, internal oxidation, and perforation resulted in coating failure. Zr ions were released to the coating surface to form flocculent ZrO₂ and ZrO₂ clusters due to the strong diffusion and dissolution tendency of α-Zr in the Zry-4 substrate. Considering that the primary water of Pressurized Water Reactors usually contains 2.3 ppm Li, the stability of ZrO₂ makes it a candidate fuel cladding coating material. Under unfavorable conditions with high Li concentrations, more boric acid should be added to alleviate caustic corrosion of the ZrO₂ coating once it is used. This work can provide references for understanding the service behavior of nuclear coatings under variable water chemistry conditions and promote the in-pile application of ZrO₂ coatings.
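For context on the molecular dynamics part of the work, adsorption energies of the kind computed here conventionally follow E_ads = E(surface + adsorbate) − E(surface) − E(adsorbate), with a negative value indicating favourable adsorption. The Python sketch below encodes only that arithmetic convention; the species and energy values are placeholders, not outputs of the Forcite calculations described above.

```python
def adsorption_energy(e_complex, e_surface, e_adsorbate):
    """E_ads = E(surface + adsorbate) - E(surface) - E(adsorbate).
    All energies in the same units (e.g. kcal/mol); a negative E_ads
    indicates energetically favourable adsorption."""
    return e_complex - (e_surface + e_adsorbate)

# hypothetical example: an ion on a ZrO2 slab (placeholder energies)
print(adsorption_energy(e_complex=-1250.4, e_surface=-1198.7, e_adsorbate=-32.1))
# -> -19.6, i.e. favourable adsorption under this convention
```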

Keywords: ZrO₂ coating, Zry-4, corrosion behavior, failure mechanism, LiOH concentration

Procedia PDF Downloads 75
2112 Alumni Experiences of How Their Undergraduate Medical Education Instilled and Fostered a Commitment to Community-Based Work in Later Life: A Sequential Exploratory Mixed-Methods Study

Authors: Harini Aiyer, Kalyani Premkumar

Abstract:

Health professionals are key players in achieving the goals of population health equity. The social accountability (SA) of health professionals emphasizes their role in addressing issues of equity in the populations they serve. Therefore, health professional education must focus on instilling SA in health professionals. There is limited literature offering a longitudinal perspective on how students sustain the practice of SA in later life. This project aims to identify the drivers of social accountability among physicians. The study employed an exploratory mixed-methods design (QUAL → quant) to explore alumni perceptions and experiences. The qualitative data, collected via 20 in-depth, semi-structured interviews, provided an understanding of alumni perceptions of the influence of their undergraduate learning environment on their SA. This was followed by a quantitative phase: a questionnaire designed from the themes identified in the qualitative data. Emerging themes highlighted community-centered education and a focus on social and preventive medicine among both curricular and non-curricular facilitators of SA among physicians. Curricular components included opportunities to engage with the community, such as roadside clinics, community-orientation programs, and postings at a secondary hospital. Other facilitators that emerged were faculty leading by example, a subsidized fee structure, and a system that prepared students for practice in rural and remote areas. The study offers a fresh perspective on how SA is addressed by medical schools. The findings may be adapted by medical schools to understand how their own SA initiatives have been sustained among physicians over the long run.

Keywords: community-based work, global health, health education, medical education, providing health in remote areas, social accountability

Procedia PDF Downloads 74