Search results for: physical and chemical parameters
124 The Procedural Sedation Checklist Manifesto, Emergency Department, Jersey General Hospital
Authors: Jerome Dalphinis, Vishal Patel
Abstract:
The Bailiwick of Jersey is an island British Crown dependency situated off the coast of France. Jersey General Hospital’s emergency department sees approximately 40,000 patients a year. It lies outside the NHS, with secondary care free at the point of care. Sedation is a continuum extending from a normal conscious level to full unresponsiveness. Procedural sedation produces a minimally depressed level of consciousness in which the patient retains the ability to maintain an airway and responds appropriately to physical stimulation. Its goals are to improve patient comfort and tolerance of the procedure and to alleviate associated anxiety. Indications can be stratified by acuity into emergency (cardioversion for life-threatening dysrhythmia) and urgent (joint reduction) procedures. In the emergency department, this is most often achieved using a combination of opioids and benzodiazepines. Some departments also use ketamine to produce dissociative sedation, a cataleptic state of profound analgesia and amnesia. The response to pharmacological agents is highly individual, and the drugs used occasionally have unpredictable pharmacokinetics and pharmacodynamics, which can result in unintended progression between levels of sedation irrespective of the intention. Therefore, practitioners must be able to ‘rescue’ patients from deeper sedation. These practitioners need to be senior clinicians with advanced airway skills (AAS) training. If incorrectly undertaken, procedural sedation can lead to adverse effects such as dangerous hypoxia and unintended loss of consciousness; studies by the National Confidential Enquiry into Patient Outcome and Death (NCEPOD) have reported avoidable deaths. The Royal College of Emergency Medicine, UK (RCEM) released updated ‘Safe Sedation of Adults in the Emergency Department’ guidance in 2017, detailing standards for staff competencies, environment, and equipment required for each target sedation depth.
The emergency department in Jersey undertook an audit in 2018 to assess current practice. It revealed gaps in clinical competency and the need for uniform care and improved documentation. This spurred the development of a checklist incorporating the above RCEM standards, including contraindications to procedural sedation and difficult airway assessment. The checklist was approved following discussion with the relevant heads of department and the patient safety directorates. A second audit was then carried out in 2019 with 17 completed checklists (11 joint relocations, 6 cardioversions). Data were obtained from the controlled drugs book in resuscitation, which documents use of ketamine, alfentanil, and fentanyl. TrakCare, the patient electronic record system, was then referenced to obtain further information. The results showed dramatic improvement compared to 2018 and were subdivided into six categories: pre-procedure assessment recording of significant medical history and ASA grade (two-fold increase), informed consent (100% documentation), pre-oxygenation (88%), staff (90% were AAS practitioners) and monitoring (92% use of non-invasive blood pressure, pulse oximetry, capnography, and cardiac rhythm monitoring) during the procedure, and discharge instructions including documented return of normal vital signs and consciousness (82%). This procedural sedation checklist is a safe intervention that identifies pertinent information about the patient and provides a standardised checklist for the delivery of a gold standard of care.
Keywords: advanced airway skills, checklist, procedural sedation, resuscitation
Procedia PDF Downloads 117
123 Medical Examiner Collection of Comprehensive, Objective Medical Evidence for Conducted Electrical Weapons and Their Temporal Relationship to Sudden Arrest
Authors: Michael Brave, Mark Kroll, Steven Karch, Charles Wetli, Michael Graham, Sebastian Kunz, Dorin Panescu
Abstract:
Background: Conducted electrical weapons (CEW) are now used in 107 countries and are a common law-enforcement less-lethal force practice in the United Kingdom (UK), United States of America (USA), Canada, Australia, New Zealand, and others. Use of these devices is rarely temporally associated with the occurrence of sudden arrest-related deaths (ARD). Because such deaths are uncommon, few Medical Examiners (MEs) ever encounter one, and even fewer offices have established comprehensive investigative protocols. Without sufficient scientific data, the role, if any, played by a CEW in a given case is largely supplanted by conjecture, often defaulting to a CEW-induced fatal cardiac arrhythmia. In addition to the difficulty in investigating individual deaths, the lack of information also detrimentally affects the ability to define and evaluate the ARD cohort generally. More comprehensive, better information leads to better interpretation in individual cases and also to better research. The purpose of this presentation is to provide MEs with a comprehensive evidence-based checklist to assist in the assessment of CEW-ARD cases. Methods: PUBMED and sociology/criminology databases were queried to find all medical, scientific, electrical, modeling, engineering, and sociology/criminology peer-reviewed literature for mentions of CEW or synonymous terms. Each paper was then individually reviewed to identify those that discussed possible bioelectrical mechanisms relating CEW to ARD. A Naranjo-type pharmacovigilance algorithm was also employed, when relevant, to identify and quantify possible direct CEW electrical myocardial stimulation. Additionally, CEW operational manuals and training materials were reviewed to allow incorporation of CEW-specific technical parameters. Results: Total relevant PUBMED citations of CEWs numbered fewer than 250, and reports of death were extremely rare. Much relevant information was available from sociology/criminology databases.
Once the relevant published papers were identified and reviewed, we compiled an annotated checklist of data that we consider critical to a thorough CEW-involved ARD investigation. Conclusion: We have developed an evidence-based checklist that can be used by MEs and their staffs to assist them in identifying, collecting, documenting, maintaining, and objectively analyzing the role, if any, played by a CEW in any specific case of sudden death temporally associated with the use of a CEW. Even in cases where the collected information is deemed by the ME as insufficient for formulating an opinion or diagnosis to a reasonable degree of medical certainty, information collected as per the checklist will often be adequate for other stakeholders to use as a basis for informed decisions. Having reviewed the appropriate materials in a significant number of cases, careful examination of the heart and brain is likely adequate. Channelopathy testing should be considered in some cases; however, it may be cost-prohibitive (approximately $3,000). Law enforcement agencies may want to consider establishing a reserve fund to help manage such rare cases. The expense may forestall the enormous costs associated with incident-precipitated litigation.
Keywords: ARD, CEW, police, TASER
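The Naranjo-type scoring mentioned in the Methods can be sketched in code. In this minimal Python sketch, the weights and category cut-offs follow the standard published Naranjo adverse-reaction probability scale, while the short question keys are our own shorthand; applying the scale to CEW exposure, as the authors do, is an adaptation rather than its original pharmacovigilance use.

```python
# (score if yes, score if no); "do not know" scores 0.
# Question keys are our shorthand for the ten standard Naranjo items.
NARANJO_WEIGHTS = {
    "previous_conclusive_reports": (1, 0),
    "event_after_exposure": (2, -1),
    "improved_when_stopped": (1, 0),
    "reappeared_on_rechallenge": (2, -1),
    "alternative_causes": (-1, 2),
    "placebo_reaction": (-1, 1),
    "toxic_level_detected": (1, 0),
    "dose_response": (1, 0),
    "similar_past_reaction": (1, 0),
    "objective_evidence": (1, 0),
}

def naranjo_score(answers):
    """Sum weighted answers. answers: question key -> True/False/None."""
    total = 0
    for question, (yes, no) in NARANJO_WEIGHTS.items():
        answer = answers.get(question)
        if answer is True:
            total += yes
        elif answer is False:
            total += no
    return total

def causality_category(score):
    """Map a total score to the standard Naranjo causality category."""
    if score >= 9:
        return "definite"
    if score >= 5:
        return "probable"
    if score >= 1:
        return "possible"
    return "doubtful"
```

For example, an event that followed exposure, had no identified alternative cause, and showed a dose-response relationship would score 5, i.e. "probable".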
Procedia PDF Downloads 346
122 Hydrodynamic Characterisation of a Hydraulic Flume with Sheared Flow
Authors: Daniel Rowe, Christopher R. Vogel, Richard H. J. Willden
Abstract:
The University of Oxford’s recirculating water flume is a combined wave and current test tank with a 1 m depth, 1.1 m width, and 10 m long working section, and is capable of flow speeds up to 1 m/s. This study documents the hydrodynamic characteristics of the facility in preparation for experimental testing of horizontal-axis tidal stream turbine models. The turbine to be tested has a rotor diameter of 0.6 m and is a modified version of one of two model-scale turbines tested in previous experimental campaigns. An Acoustic Doppler Velocimeter (ADV) was used to measure the flow at high temporal resolution at various locations throughout the flume, enabling the spatial uniformity and turbulence flow parameters to be investigated. The mean velocity profiles exhibited high levels of spatial uniformity at the design speed of the flume, 0.6 m/s, with variations in the three-dimensional velocity components on the order of ±1% at the 95% confidence level, along with a modest streamwise acceleration through the measurement domain, a target 5 m span of the working section of the flume. A high degree of uniformity was also apparent for the turbulence intensity, with values ranging between 1% and 2% across the intended swept area of the turbine rotor. The integral scales of turbulence exhibited a far higher degree of variation throughout the water column, particularly in the streamwise and vertical scales. This behaviour is believed to be due to the high signal noise content leading to decorrelation in the sampling records. To achieve more realistic levels of vertical velocity shear in the flume, a simple procedure to practically generate target vertical shear profiles in open-channel flows is described. Here, the authors arranged a series of non-uniformly spaced parallel bars placed across the width of the flume and normal to the onset flow.
By adjusting the resistance grading across the height of the working section, the downstream profiles could be modified accordingly, characterised by changes in the velocity profile power-law exponent, 1/n. Considering the significant temporal variation in a tidal channel, the choice of the exponent denominators, n = 6 and n = 9, effectively provides an achievable range around the much-cited value of n = 7 observed at many tidal sites. The resulting flow profiles, which we intend to use in future turbine tests, have been characterised in detail. The results indicate non-uniform vertical shear across the survey area and reveal substantial corner flows, arising from the differential shear between the target vertical and cross-stream shear profiles throughout the measurement domain. In vertically sheared flow, the rotor-equivalent turbulence intensity ranges between 3.0% and 3.8% throughout the measurement domain for both bar arrangements, while the streamwise integral length scale grows from a characteristic dimension on the order of the bar width, similar to the flow downstream of a turbulence-generating grid. The experimental tests are well-defined and repeatable and serve as a reference for other researchers who wish to undertake similar investigations.
Keywords: Acoustic Doppler Velocimeter, experimental hydrodynamics, open-channel flow, shear profiles, tidal stream turbines
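The power-law shear profiles described above can be sketched numerically. The following minimal Python sketch evaluates u(z) = u_ref·(z/z_ref)^(1/n) for the exponents discussed, together with the usual definition of turbulence intensity (standard deviation over mean of the streamwise velocity); the reference height and speed are illustrative, not the authors' calibration.

```python
import numpy as np

def power_law_profile(z, u_ref, z_ref, n):
    """Vertical velocity profile u(z) = u_ref * (z / z_ref)**(1/n)."""
    return u_ref * (np.asarray(z, dtype=float) / z_ref) ** (1.0 / n)

def turbulence_intensity(u_samples):
    """Turbulence intensity: std of the velocity record over its mean."""
    u = np.asarray(u_samples, dtype=float)
    return u.std(ddof=1) / u.mean()

# Illustrative use: profiles over a 1 m water column, 0.6 m/s at mid-depth,
# for the two bar arrangements (n = 6, 9) bracketing the classic n = 7.
z = np.linspace(0.05, 1.0, 20)
profiles = {n: power_law_profile(z, u_ref=0.6, z_ref=0.5, n=n) for n in (6, 7, 9)}
```

Below the reference height, a larger n yields a flatter (less sheared) profile, which is the effect the graded bar resistance is tuning.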
Procedia PDF Downloads 86
121 Combustion Variability and Uniqueness in Cylinders of a Radial Aircraft Piston Engine
Authors: Michal Geca, Grzegorz Baranski, Ksenia Siadkowska
Abstract:
The work is part of a project which aims at developing innovative power and control systems for the high-power aircraft piston engine ASz62IR. The developed electronically controlled ignition system will reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion, and engine capability of efficient combustion of ecological fuels. The tested unit is an air-cooled four-stroke gasoline engine of 9 cylinders in a radial setup, mechanically charged by a radial compressor powered by the engine crankshaft. The total engine cubic capacity is 29.87 dm3, and the compression ratio is 6.4:1. The maximum take-off power is 1000 HP at 2200 rpm. The maximum fuel consumption is 280 kg/h. The engine powers the aircraft An-2, M-18 “Dromader”, DHC-3 “OTTER”, DC-3 “Dakota”, GAF-125 “HAWK”, and Y5. The main problems of the engine include imbalanced operation of its cylinders; the non-uniformity in each cylinder results in non-uniformity of their work. In a radial engine, the cylinder arrangement causes the mixture movement to occur either with (lower cylinders) or against (upper cylinders) the direction of gravity. Preliminary tests confirmed the presence of uneven operation of individual cylinders. The phenomenon is most intense at low speed. The non-uniformity is visible on the waveform of cylinder pressure. Therefore two studies were conducted to determine the impact of this phenomenon on engine performance: simulation and real tests. A simplified simulation was conducted on an intake-system element coated with a fuel film. The study shows that there is an effect of gravity on the movement of the fuel film inside the radial engine intake channels. In both the lower and the upper inlet channels the film flows downwards. It follows from the fact that gravity assists the movement of the film in the lower cylinder channels and prevents the movement in the upper cylinder channels.
Real tests on the ASz62IR aircraft engine were conducted in transient conditions (rapid changes of the excess air ratio in each cylinder). The mass of fuel reaching the cylinders was calculated both theoretically and from measurements, and on this basis the fuel evaporation factors “x” were determined. A simplified model of the fuel supply to the cylinder was therefore adopted. The model includes the time constant of the fuel film τ, the number of engine cycles γ over which non-evaporating fuel is transported along the intake pipe, and the time Δt between successive cycles. The calculation results of the identification of the model parameters are presented in the form of radar graphs. The figures show the average declines and increases in injection time and the average values for both types of stroke. These studies showed that a change in the position of the cylinder causes changes in the formation of the fuel-air mixture and thus changes in the combustion process. Based on the results of the simulations and experiments, it was possible to develop individual algorithms for ignition control. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: radial engine, ignition system, non-uniformity, combustion process
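The kind of simplified fuel-film model described above (evaporation factor x, film time constant τ, cycle spacing Δt) can be sketched as a discrete per-cycle mass balance. The sketch below is our own minimal Aquino-style reconstruction under stated assumptions, not the authors' identified model: a fraction x of each injection evaporates directly, the remainder joins the wall film, and the film re-evaporates with time constant τ.

```python
def fuel_film_step(m_film, m_inj, x, tau, dt):
    """One engine-cycle update of a wall-film fuel model (Aquino-style sketch).

    m_film : fuel mass currently stored in the intake wall film
    m_inj  : fuel mass injected this cycle
    x      : fraction of injected fuel that evaporates directly
    tau    : film evaporation time constant (same time unit as dt)
    dt     : time between successive cycles (requires dt < tau for stability)

    Returns (new film mass, fuel mass reaching the cylinder this cycle).
    """
    evap_from_film = m_film * dt / tau           # film evaporation this cycle
    m_cyl = x * m_inj + evap_from_film           # direct + re-evaporated fuel
    m_film_new = m_film + (1.0 - x) * m_inj - evap_from_film
    return m_film_new, m_cyl
```

Iterated with constant injection, the film mass settles at (1 − x)·m_inj·τ/Δt and the cylinder then receives exactly m_inj per cycle; during transients, as in the tests described above, the film lag distorts the delivered mixture, which is why x and τ differ between upper and lower cylinders.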
Procedia PDF Downloads 366
120 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples
Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges
Abstract:
Soils are at the crossing of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, when they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary co-variates” that come from other available spatial products. The model is then generalized on grids where soil parameters are unknown in order to predict them, and the prediction performances are validated using various methods. With the growing demand for soil information at a national and global scale and the increase of available spatial co-variates, national and continental DSM initiatives are continuously increasing.
This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and the databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing and new models and approaches are developed at an incredible speed. Most of the digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and to education, especially on the use of uncertainty. Overall, the progress is very important and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and also promising to provide tools to improve and monitor soil quality at national, EU, and global levels.
Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review
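The calibrate-then-generalize DSM workflow described above can be illustrated with a toy model. In this minimal Python sketch, a linear least-squares model stands in for the machine-learning predictors used in practice; the covariates (elevation, slope, NDVI) and all values are hypothetical, and real DSM work would add uncertainty estimation and independent validation.

```python
import numpy as np

def fit_dsm_model(X, y):
    """Least-squares fit of soil property ~ covariates (with intercept)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_dsm(coef, X_grid):
    """Generalize the calibrated model onto grid cells with unknown soil."""
    return np.column_stack([np.ones(len(X_grid)), X_grid]) @ coef

# Hypothetical calibration points: columns are elevation (m), slope (deg),
# NDVI; target is soil organic carbon (g/kg). All figures are invented.
X_obs = np.array([[120.0, 2.0, 0.61], [340.0, 8.0, 0.45], [220.0, 5.0, 0.70],
                  [90.0, 1.0, 0.55], [410.0, 11.0, 0.38]])
y_obs = np.array([21.3, 14.8, 24.6, 18.9, 12.1])
coef = fit_dsm_model(X_obs, y_obs)
soc_grid = predict_dsm(coef, X_obs)  # in practice: a full covariate grid
```

The same two-step structure (calibrate on observed points, predict on exhaustive covariate grids) carries over when the linear model is replaced by random forests or other ML predictors.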
Procedia PDF Downloads 184
119 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System
Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim
Abstract:
The general transport equation has a wide range of application in fluid mechanics and heat transfer problems. When the variable φ, which represents a flow property, is taken to be a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, numerical solutions are sought more frequently than analytic ones. The finite difference method is a commonly used numerical solution method. In these equations, using velocity and pressure gradients instead of stress tensors decreases the number of unknowns. By adding the continuity equation to the system, the number of equations matches the number of unknowns. In this situation, velocity and pressure components emerge as two important parameters. In the solution of the differential equation system, velocities and pressures must be solved together. However, in the considered grid system, some problems arise when pressure and velocity values are solved jointly at the same nodal points. To overcome this problem, a staggered grid system is the preferred solution method. For computerized solutions of the staggered grid system, various algorithms were developed; two of the most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were numerically solved for a Newtonian, incompressible, laminar flow with body (gravitational) forces neglected, in a hydrodynamically fully developed region, in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure and Reynolds numbers were used. The differential equations were discretized using central difference and hybrid schemes. The discretized equation system was solved by the Gauss-Seidel iteration method.
SIMPLE and SIMPLER were used as solution algorithms. The obtained results were compared for the central difference and hybrid discretization methods, and the SIMPLE and SIMPLER solution algorithms were compared to each other. As a result, it was observed that the hybrid discretization method gave better results over a larger area. Furthermore, despite some disadvantages, the SIMPLER algorithm proved more practical and gave results in a shorter time. For this study, a code was developed in the Delphi programming language. The values obtained from the computer program were converted into graphs and discussed. During plotting, the quality of the graphs was improved by adding intermediate values to the obtained results using the Lagrange interpolation formula. For the solution of the system, the required numbers of grid cells and nodes were estimated. At the same time, to show that the obtained results are sufficiently accurate, a grid-independence (GCI) analysis was performed for coarse, medium, and fine grid systems over the solution domain. It was observed that when graphs and program outputs were compared with similar studies, highly satisfactory results were achieved.
Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms
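The Gauss-Seidel iteration used to solve the discretized system can be illustrated on the simplest possible case. The minimal Python sketch below solves a 1-D Laplace problem, far simpler than the staggered-grid momentum-pressure system of the study, but with the same sweeping idea: each interior node is updated from its central-difference stencil, reusing values already updated earlier in the current sweep.

```python
def gauss_seidel_1d_laplace(u, tol=1e-12, max_sweeps=100000):
    """Solve the 1-D Laplace equation d2u/dx2 = 0 on a uniform grid.

    u holds the initial guess; u[0] and u[-1] are Dirichlet boundary
    values and are kept fixed. The central-difference stencil reduces
    each interior update to the average of the two neighbours, and
    Gauss-Seidel reuses already-updated values within each sweep.
    """
    u = list(u)
    for _ in range(max_sweeps):
        max_change = 0.0
        for i in range(1, len(u) - 1):
            new = 0.5 * (u[i - 1] + u[i + 1])  # central-difference stencil
            max_change = max(max_change, abs(new - u[i]))
            u[i] = new                          # in-place: Gauss-Seidel
        if max_change < tol:                    # converged sweep
            break
    return u
```

Deferring the in-place update to the end of each sweep would instead give Jacobi iteration, which converges more slowly; the in-place update is what distinguishes Gauss-Seidel.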
Procedia PDF Downloads 390
118 Nephrotoxicity and Hepatotoxicity Induced by Chronic Aluminium Exposure in Rats: Impact of Nutrients Combination versus Social Isolation and Protein Malnutrition
Authors: Azza A. Ali, Doaa M. Abd El-Latif, Amany M. Gad, Yasser M. A. Elnahas, Karema Abu-Elfotuh
Abstract:
Background: Exposure to aluminium (Al) has increased recently. It is found in food products, food additives, drinking water, cosmetics and medicines. Chronic consumption of Al causes oxidative stress and has been implicated in several chronic disorders. The liver is considered the major site for detoxification, while the kidney is involved in the elimination of toxic substances and is a target organ of metal toxicity. Social isolation (SI) or protein malnutrition (PM) also causes oxidative stress and has a negative impact on Al-induced nephrotoxicity as well as hepatotoxicity. Coenzyme Q10 (CoQ10) is a powerful intracellular antioxidant with mitochondrial membrane-stabilizing ability, wheatgrass is a natural product with antioxidant, anti-inflammatory and other protective activities, and cocoa is also a potent antioxidant that can protect against many diseases. They provide different degrees of protection from the impact of oxidative stress. Objective: To study the impact of social isolation together with protein malnutrition on nephro- and hepatotoxicity induced by chronic Al exposure in rats, and to investigate the postulated protection using a combination of CoQ10, wheatgrass and cocoa. Methods: Eight groups of rats were used; four served as protected groups and four as unprotected. Al-toxicity model groups received AlCl3 (70 mg/kg, IP) daily for five weeks, except for one group which served as control. The Al-toxicity model groups were divided into Al-toxicity alone, SI-associated PM (10% casein diet), and Al-associated SI&PM groups. Protection was induced by oral co-administration of a combination of CoQ10 (200 mg/kg), wheatgrass (100 mg/kg) and cocoa powder (24 mg/kg) together with Al.
Biochemical changes in total bilirubin, lipids, cholesterol, triglycerides, glucose, proteins, creatinine and urea, as well as alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP) and lactate dehydrogenase (LDH), were measured in the serum of all groups. Specimens of kidney and liver were used for assessment of oxidative parameters (MDA, SOD, TAC, NO), inflammatory mediators (TNF-α, IL-6β, nuclear factor kappa B (NF-κB), caspase-3) and DNA fragmentation, in addition to evaluation of histopathological changes. Results: SI together with PM severely enhanced the nephro- and hepatotoxicity induced by chronic Al exposure. The CoQ10, wheatgrass and cocoa combination showed clear protection against the hazards of Al exposure, either alone or when associated with SI&PM. Their protection was indicated by the significant decrease in Al-induced elevations in total bilirubin, lipids, cholesterol, triglycerides, glucose, creatinine and urea levels, as well as ALT, AST, ALP and LDH. Liver and kidney of the treated groups also showed significant decreases in MDA, NO, TNF-α, IL-6β, NF-κB, caspase-3 and DNA fragmentation, together with significant increases in total proteins, SOD and TAC. The biochemical results were confirmed by the histopathological examinations. Conclusion: SI together with PM represents a risk factor enhancing the nephro- and hepatotoxicity induced by Al in rats. The CoQ10, wheatgrass and cocoa combination provides clear protection against nephro- and hepatotoxicity, as well as the consequent degenerations induced by chronic Al exposure, even when associated with the risk of SI together with PM.
Keywords: aluminum, nephrotoxicity, hepatotoxicity, isolation and protein malnutrition, coenzyme Q10, wheatgrass, cocoa, nutrients combinations
Procedia PDF Downloads 247
117 EEG and DC-Potential Level Changes in the Elderly
Authors: Irina Deputat, Anatoly Gribanov, Yuliya Dzhos, Alexandra Nekhoroshkova, Tatyana Yemelianova, Irina Bolshevidtseva, Irina Deryabina, Yana Kereush, Larisa Startseva, Tatyana Bagretsova, Irina Ikonnikova
Abstract:
In the modern world the number of elderly people is increasing, and preservation of the functionality of the organism in the elderly is becoming very important. During aging, higher cortical functions such as sensation, perception, attention, memory, and ideation gradually decline. This is expressed in a reduced rate of information processing, loss of working memory capacity, and a decreased ability to learn and store new information. Promising directions in studying neurophysiological parameters of aging are brain imaging: computer electroencephalography, neuroenergy mapping of the brain, and methods of studying neurodynamic brain processes. Research aim: to study features of brain aging in elderly people by electroencephalogram (EEG) and the DC-potential level. We examined 130 people aged 55–74 years who did not have psychiatric disorders or chronic conditions in a decompensation stage. EEG was recorded with a 128-channel GES-300 system (USA). EEG recordings were collected while the participants sat at rest with their eyes closed for 3 minutes. For a quantitative assessment of the EEG we used spectral analysis. The spectrum was analyzed in the delta (0.5–3.5 Hz), theta (3.5–7.0 Hz), alpha-1 (7.0–11.0 Hz), alpha-2 (11.0–13.0 Hz), beta-1 (13.0–16.5 Hz) and beta-2 (16.5–20.0 Hz) ranges. In each frequency range, spectral power was estimated. The 12-channel hardware-software diagnostic ‘Neuroenergometr-KM’ complex was applied for registration, processing and analysis of the brain constant potential level. The DC-potential level was registered in monopolar leads. It was revealed that the EEGs of elderly people show higher spectral power in the delta (p < 0.01) and theta (p < 0.05) ranges with aging, especially in frontal areas.
The comparative analysis showed that elderly people aged 60–64 have higher values of spectral power in the alpha-2 range in the left frontal and central areas (p < 0.05), and also higher beta-1 values in frontal and parieto-occipital areas (p < 0.05). Study of the distribution of the brain constant potential level revealed an increase of total energy consumption across the main areas of the brain. In frontal leads we registered the lowest values of the constant potential level, which perhaps indicates decreased energy metabolism in this area and difficulties with executive functions. The comparative analysis of the potential differences across the main leads testifies to uneven lateralization of brain functions in elderly people. The potential difference between the right and left hemispheres testifies to a prevalence of left-hemisphere activity. Thus, higher functional activity of the cerebral cortex is characteristic of people of early advanced age (60–64 years), which points to greater reserve capacity of the central nervous system. By age 70, there are age-related changes in cerebral energy exchange and in the level of brain electrogenesis, which reflect deterioration of homeostatic self-regulation mechanisms and of the processing of the incoming flow of perceptual data.
Keywords: brain, DC-potential level, EEG, elderly people
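The band-wise spectral power estimation described above can be sketched in a few lines. In this minimal Python sketch the band edges follow the abstract, while the plain FFT periodogram is a generic estimator, not necessarily the one implemented in the authors' analysis software.

```python
import numpy as np

# EEG bands (Hz) as defined in the abstract
BANDS = {"delta": (0.5, 3.5), "theta": (3.5, 7.0), "alpha1": (7.0, 11.0),
         "alpha2": (11.0, 13.0), "beta1": (13.0, 16.5), "beta2": (16.5, 20.0)}

def band_powers(signal, fs):
    """Integrate a raw periodogram over each EEG band.

    signal : 1-D array of EEG samples; fs : sampling rate in Hz.
    Returns a dict of band name -> spectral power (signal units squared).
    """
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                           # remove DC offset
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / (fs * sig.size)  # periodogram
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}
```

Feeding in a 3-minute eyes-closed record per channel and comparing band powers across age groups is the essence of the analysis; production pipelines would typically use Welch averaging and artifact rejection on top of this.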
Procedia PDF Downloads 484
116 Comparison of On-Site Stormwater Detention Policies in Australian and Brazilian Cities
Authors: Pedro P. Drumond, James E. Ball, Priscilla M. Moura, Márcia M. L. P. Coelho
Abstract:
In recent decades, On-site Stormwater Detention (OSD) systems have been implemented in many cities around the world. In Brazil, urban drainage source control policies were created in the 1990s and were mainly based on OSD. The concept of this technique is to promote the detention of the additional stormwater runoff caused by impervious areas, in order to maintain pre-urbanization peak flow levels. In Australia, OSD was first adopted in the early 1980s by Ku-ring-gai Council in Sydney’s northern suburbs and by Wollongong City Council, and many papers on the topic were published at that time. However, source control techniques related to stormwater quality have since come to the forefront, and OSD has been relegated to the background. In order to evaluate the effectiveness of current regulations regarding OSD, existing policies were compared in Australian cities, in a country considered experienced in the use of this technique, and in Brazilian cities, where OSD adoption has been increasing. The cities selected for analysis were Wollongong and Belo Horizonte, the first municipalities to adopt OSD in their respective countries, and Sydney and Porto Alegre, cities where these policies are local references. The Australian and Brazilian cities are located in the Southern Hemisphere, and similar rainfall intensities can be observed, especially in storm bursts longer than 15 minutes. Regarding technical criteria, the Brazilian cities take a site-based approach, analyzing only on-site system drainage. This approach is criticized for not evaluating impacts on urban drainage systems and in rare cases may even increase peak flows downstream. The city of Wollongong and most of the Sydney councils adopted a catchment-based approach, requiring the use of Permissible Site Discharge (PSD) and Site Storage Requirement (SSR) values based on analysis of entire catchments via hydrograph-producing computer models.
Based on the premise that OSD should be designed to attenuate storms of 100-year Average Recurrence Interval (ARI), the values of PSD and SSR in these four municipalities were compared. In general, the Brazilian cities presented low values of PSD and high values of SSR. This can be explained by the site-based approach and the low runoff coefficient value adopted for pre-development conditions. The results clearly show the differences between the approaches and methodologies adopted in OSD designs in Brazilian and Australian municipalities, especially with regard to PSD values, which sit at opposite ends of the scale. However, the lack of research regarding the real performance of constructed OSD does not allow determining which is best. It is necessary to investigate OSD performance in real situations, assessing the damping provided throughout its useful life, maintenance issues, debris blockage problems and the parameters related to rainfall-runoff methods. Acknowledgments: The authors wish to thank CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico (Chamada Universal – MCTI/CNPq Nº 14/2014), FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais, and CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior for their financial support.
Keywords: on-site stormwater detention, source control, stormwater, urban drainage
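The PSD/SSR concepts can be made concrete with a toy calculation. The minimal Python sketch below uses strong simplifying assumptions: rational-method peak flows and a rectangular inflow hydrograph, where real catchment-based designs use full hydrograph-producing models, as the abstract notes, and all numbers are illustrative.

```python
def rational_peak_flow(runoff_coef, intensity_mm_h, area_ha):
    """Rational method peak flow Q = C*i*A, returned in m3/s.

    With i in mm/h and A in hectares, Q(m3/s) = C * i * A / 360.
    """
    return runoff_coef * intensity_mm_h * area_ha / 360.0

def site_storage_requirement(q_post, psd, storm_duration_s):
    """Crude SSR (m3): inflow volume above the permissible site discharge,
    assuming a constant (rectangular) inflow rate over the storm."""
    return max(q_post - psd, 0.0) * storm_duration_s

# Illustrative 1 ha site: PSD taken as the pre-development peak
q_pre = rational_peak_flow(0.3, 100.0, 1.0)    # C = 0.3 before development
q_post = rational_peak_flow(0.9, 100.0, 1.0)   # C = 0.9 after development
ssr = site_storage_requirement(q_post, psd=q_pre, storm_duration_s=1800.0)
```

The sketch also illustrates the sensitivity the abstract highlights: choosing a lower pre-development runoff coefficient lowers the PSD and inflates the SSR, which is one reason the Brazilian site-based values diverge from the Australian catchment-based ones.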
Procedia PDF Downloads 180
115 Circular Tool and Dynamic Approach to Grow the Entrepreneurship of Macroeconomic Metabolism
Authors: Maria Areias, Diogo Simões, Ana Figueiredo, Anishur Rahman, Filipa Figueiredo, João Nunes
Abstract:
It is expected that close to 7 billion people will live in urban areas by 2050. In order to improve the sustainability of territories and their transition towards a circular economy, it is necessary to understand their metabolism and to promote and guide the entrepreneurial response. The study of a macroeconomic metabolism involves the quantification of the inputs, outputs, and storage of energy, water, materials, and wastes for an urban region. This quantification and analysis represent an opportunity for the promotion of green entrepreneurship. There are several methods to assess the environmental impacts of an urban territory, such as human and environmental risk assessment (HERA), life cycle assessment (LCA), ecological footprint assessment (EF), material flow analysis (MFA), physical input-output tables (PIOT), ecological network analysis (ENA), and multicriteria decision analysis (MCDA), among others. However, no consensus exists about which of these assessment methods is best suited to analyze the sustainability of such complex systems. Taking into account the weaknesses and needs identified, the CiiM - Circular Innovation Inter-Municipality project aims to define a uniform and globally accepted methodology through the integration of various methodologies and dynamic approaches, increasing the efficiency of macroeconomic metabolisms and promoting entrepreneurship in a circular economy. The pilot territory considered in the CiiM project has a total area of 969,428 ha and a total of 897,256 inhabitants (about 41% of the population of the Center Region). The main economic activities in the pilot territory, which contribute to a gross domestic product of 14.4 billion euros, are: social support activities for the elderly; construction of buildings; road transport of goods; retailing in supermarkets and hypermarkets; mass production of other garments; inpatient health facilities; and the manufacture of other components and accessories for motor vehicles.
The region's business network consists mostly of micro and small companies (as in the Central Region of Portugal as a whole), with a total of 53,708 companies identified in the CIM Region of Coimbra (39 large companies), 28,146 in the CIM Viseu Dão Lafões (22 large companies), and 24,953 in the CIM Beiras and Serra da Estrela (13 large companies). The database was constructed taking into account data available from the National Institute of Statistics (INE), the General Directorate of Energy and Geology (DGEG), Eurostat, Pordata, the Strategy and Planning Office (GEP), the Portuguese Environment Agency (APA), the Commissions for Coordination and Regional Development (CCDR), and the Inter-municipal Communities (CIM), as well as dedicated databases. In addition to the collection of statistical data, it was necessary to identify and characterize the different stakeholder groups in the pilot territory that are relevant to the different metabolism components under analysis. The CiiM project also adds the potential of a Geographic Information System (GIS), making it possible to obtain geospatial results for the territorial metabolisms (rural and urban) of the pilot region. This platform will be a powerful tool for visualizing the flows of products/services that occur within the region and will support the stakeholders, improving their circular performance and identifying new business ideas and symbiotic partnerships.
Keywords: circular economy tools, life cycle assessment, macroeconomic metabolism, multicriteria decision analysis, decision support tools, circular entrepreneurship, industrial and regional symbiosis
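The metabolism quantification described above (inputs, outputs, and storage per flow category) reduces, at its simplest, to a mass-balance bookkeeping exercise. The sketch below is only an illustration of that balance, not the CiiM methodology itself, and the flow categories and values are invented for the example.

```python
# Illustrative material flow analysis (MFA) balance for one territory:
# the net addition to stock in each category equals inputs minus outputs.

def mfa_balance(inputs, outputs):
    """Return net stock change per flow category; missing flows count as zero."""
    categories = set(inputs) | set(outputs)
    return {c: inputs.get(c, 0.0) - outputs.get(c, 0.0) for c in categories}

# Hypothetical annual flows for a territory (arbitrary units)
flows_in = {"water": 120.0, "energy": 80.0, "materials": 45.0}
flows_out = {"water": 100.0, "energy": 78.0, "materials": 20.0, "waste": 30.0}
stock = mfa_balance(flows_in, flows_out)
```

A negative entry (here, "waste") flags a category exported from the territory with no corresponding input, which is exactly the kind of imbalance a circular-economy analysis would target.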
Procedia PDF Downloads 101
114 Geochemistry and Tectonic Framework of Malani Igneous Suite and Their Effect on Groundwater Quality of Tosham, India
Authors: Naresh Kumar, Savita Kumari, Naresh Kochhar
Abstract:
The objective of this study was to assess the role of mineralogy and subsurface structure in the water quality of Tosham, Malani Igneous Suite (MIS), Western Rajasthan, India. The MIS is the largest (55,000 km2) A-type, anorogenic, high-heat-producing acid magmatic province in peninsular India and owes its origin to hotspot tectonics. Apart from agricultural and industrial wastes, geogenic activities cause fluctuations in the quality parameters of water resources. Twenty (20) water samples selected from Tosham and surrounding areas were analyzed for As, Pb, B, Al, Zn, Fe, and Ni using inductively coupled plasma emission spectrometry and for F by ion chromatography. The concentrations of As, Pb, B, Ni, and F were above the stipulated levels specified by the BIS (Bureau of Indian Standards, IS-10500, 2012). The concentrations of As and Pb in the surrounding areas of Tosham ranged from 1.2 to 4.1 mg/l and from 0.59 to 0.9 mg/l, respectively, which is higher than the limits of 0.05 mg/l (As) and 0.01 mg/l (Pb). Excess trace metal accumulation in water is toxic to humans; it adversely affects the central nervous system, kidneys, gastrointestinal tract, and skin, and causes mental confusion. Groundwater quality is defined by the nature of the rock formation, mineral-water reactions, physiography, soils, environment, and the recharge and discharge conditions of the area. Fluoride content in groundwater is due to the solubility of fluoride-bearing minerals like fluorite, cryolite, topaz, and mica. Tosham comprises quartz mica schist, quartzite, schorl, tuff, quartz porphyry, and associated granites; thus, fluoride is leached out and dissolved in groundwater. In the study area, the Ni concentration ranged from 0.07 to 0.5 mg/l (permissible limit 0.02 mg/l). The primary source of nickel in drinking water is nickel leached from ore-bearing rocks. A higher concentration of As is found in some igneous rocks, specifically those containing minerals such as arsenopyrite (FeAsS), realgar (AsS), and orpiment (As2S3).
The MIS consists of granite (hypersolvus and subsolvus), rhyolite, dacite, trachyte, andesite, pyroclasts, basalt, gabbro, and dolerite, which increase the trace element concentrations in groundwater. Nakora, a part of the MIS, has high concentrations of trace and rare earth elements (Ni, Rb, Pb, Sr, Y, Zr, Th, U, La, Ce, Nd, Eu and Yb), from which Ni and Pb percolate into groundwater via weathering, contacts, and joints/fractures in the rocks. Additionally, the geological setting of the MIS causes dissolution of trace elements in water resources beneath the surface. The NE–SW tectonic lineament, the radial pattern of dykes, and the volcanic vent at Nakora created pathways for leaching of these elements into groundwater. Rainwater quality may be altered by the major mineral constituents of the host Tosham rocks during percolation through rock fractures and joints before becoming an integral part of the groundwater aquifer. Weathering processes such as hydration, hydrolysis, and solution may be the cause of the change in water chemistry of the area. These studies suggest that the geological relation of the soil-water horizon with MIS rocks, via mineralogical variations, structures, and tectonic setting, affects the water quality of the studied area.
Keywords: geochemistry, groundwater, Malani Igneous Suite, Tosham
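The exceedance comparison running through this abstract (measured concentration versus BIS IS-10500 limit) can be sketched as a simple screening routine. The limits below are the three quoted in the abstract; the sample values are hypothetical, and this is only an illustration of the comparison, not the authors' analytical workflow.

```python
# Minimal screening sketch: flag analytes whose measured concentration
# exceeds its BIS IS-10500 limit (mg/l). Limits are those quoted in the
# abstract; the sample values are invented for illustration.

LIMITS_MG_L = {"As": 0.05, "Pb": 0.01, "Ni": 0.02}

def exceedances(sample):
    """Return {analyte: measured/limit ratio, rounded} for analytes above the limit."""
    return {a: round(v / LIMITS_MG_L[a], 1)
            for a, v in sample.items()
            if a in LIMITS_MG_L and v > LIMITS_MG_L[a]}

# Hypothetical sample using concentrations at the low end of the reported ranges
sample = {"As": 1.2, "Pb": 0.59, "Ni": 0.07, "Zn": 0.5}
flags = exceedances(sample)
```

Even at the lower bounds reported (1.2 mg/l As, 0.59 mg/l Pb), the ratios show exceedances of more than an order of magnitude, consistent with the geogenic enrichment the study describes.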
Procedia PDF Downloads 219
113 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays
Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal
Abstract:
Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since a design’s circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on electronics used in space are much greater than on Earth. Thus, developing fault-tolerant techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance, and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is highly required for compromising between the overhead introduced by fault tolerance techniques and system robustness. We study applications in which the exact final output value is not necessarily always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. Unlike conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, in our proposed method a threshold margin is considered depending on the use-case application. Given the proposed threshold margin in our model, a failure occurs only when the difference between the erroneous output value and the expected output value is greater than this margin.
The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. In our work, a test bench for emulating SEUs and calculating the ACMVF was implemented on a Zynq-7000 FPGA platform. This system makes use of the Soft Error Mitigation (SEM) IP core to inject SEUs into the configuration memory bits of the target design implemented in the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when a 1% to 10% deviation from the correct output is considered acceptable, the counted number of failures is reduced by 41% to 59% compared with the number of failures counted by the conventional vulnerability factor calculation. This means the estimation accuracy of configuration memory vulnerability to SEUs is improved by up to 58% in the case that a 10% deviation is acceptable in the output results. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
Keywords: fault tolerance, FPGA, single event upset, approximate computing
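The thresholded failure counting that defines the ACMVF can be sketched directly from the description above: an injection counts as a failure only if the output deviates from the golden value by more than the margin, and the factor is the failure ratio. The injected-output values below are invented for illustration; only the formula follows the abstract.

```python
# Sketch of the Approximate-based Configuration Memory Vulnerability Factor
# (ACMVF): an SEU injection counts as a failure only when the output deviates
# from the golden value by more than a user-defined relative margin.

def acmvf(golden, observed_outputs, margin_fraction):
    """Ratio of injections whose deviation exceeds margin_fraction * |golden|."""
    failures = sum(1 for out in observed_outputs
                   if abs(out - golden) > margin_fraction * abs(golden))
    return failures / len(observed_outputs)

# Adder example with hypothetical post-injection outputs:
golden = 1000
outputs = [1000, 1001, 1090, 2024, 1000, 850]
strict = acmvf(golden, outputs, 0.0)    # conventional: any deviation is a failure
relaxed = acmvf(golden, outputs, 0.10)  # 10% margin tolerated
```

With a 10% margin, small bit-flip-induced deviations (here 1001 and 1090) no longer count as failures, which is the mechanism by which the ACMVF avoids overestimating vulnerability.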
Procedia PDF Downloads 198
112 Characterization of Aluminosilicates and Verification of Their Impact on Quality of Ceramic Proppants Intended for Shale Gas Output
Authors: Joanna Szymanska, Paulina Wawulska-Marek, Jaroslaw Mizera
Abstract:
Nowadays, the rapid growth of global energy consumption and the uncontrolled depletion of natural resources have become a serious problem. Shale rocks are among the largest potential global reservoirs of hydrocarbons, trapped in the closed pores of the shale matrix. Regardless of the shales' origin, mining conditions are extremely unfavourable due to high reservoir pressure, great depths, increased clay mineral content, and the limited permeability (nanodarcy) of the rocks. Taking into consideration such geomechanical barriers, effective extraction of natural gas from shales with plastic zones demands effective operations. Currently, hydraulic fracturing is the most developed technique, based on the injection of pressurized fluid into a wellbore to initiate fracture propagation. However, a rapid drop of pressure after fluid suction to the ground induces fracture closure and conductivity reduction. In order to minimize this risk, proppants should be applied. They are solid granules transported with hydraulic fluids to locate inside the rock. Proppants act as a prop for the closing fracture, so that gas migration to a borehole remains effective. Quartz sands are commonly applied as proppants only at shallow deposits (USA), whereas ceramic proppants are designed to meet rigorous downhole conditions and intensify output. Ceramic granules stand out with their higher mechanical strength, stability in strongly acidic environments, spherical shape, and homogeneity. The quality of ceramic proppants is conditioned by raw material selection. The aim of this study was to obtain proppants from aluminosilicates (the kaolinite subgroup) and a mix of minerals with a high alumina content. These loamy minerals exhibit a tubular and platy morphology that improves mechanical properties and reduces their specific weight.
Moreover, they are distinguished by a well-developed surface area, high porosity, fine particle size, superb dispersion, and nontoxic properties - all crucial for consolidating particles into spherical, crush-resistant granules in the mechanical granulation process. The aluminosilicates were mixed with water and a natural organic binder to improve liquid-bridge and pore formation between particles. Afterward, the green proppants were sintered at high temperatures. Evaluation of the minerals' utility was based on their particle size distribution (laser diffraction) and thermal stability (thermogravimetry). Scanning Electron Microscopy was used for morphology and shape identification, combined with specific surface area measurement (BET). Chemical composition was verified by Energy Dispersive Spectroscopy and X-ray Fluorescence. Moreover, bulk density and specific weight were measured. Such comprehensive characterization of the loamy materials confirmed their favourable impact on proppant granulation. The sintered granules were analyzed by SEM to verify the surface topography and phase transitions after sintering. Pore distribution was identified by X-ray tomography. This method also enabled simulation of proppant settlement in a fracture, while measurement of bulk density was essential to predict the amount needed to fill a well. The roundness coefficient was also evaluated, whereas the impact on the mining environment was identified by turbidity and solubility in acid - to indicate the risk of material decay in a well. The obtained outcomes confirmed a positive influence of the loamy minerals on ceramic proppant properties with respect to the strict norms. This research is promising for producing higher-quality proppants at reduced cost.
Keywords: aluminosilicates, ceramic proppants, mechanical granulation, shale gas
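One common way to quantify the roundness coefficient mentioned above, used in image-analysis tools such as ImageJ, is the circularity measure 4*pi*A/P^2, which equals 1.0 for a perfect circle and decreases for irregular outlines. The abstract does not specify which coefficient the authors used, so the sketch below is a hedged illustration of one standard choice, with an invented granule radius.

```python
import math

# Circularity (one common roundness measure): 4*pi*A / P^2.
# Equals 1.0 for a circular cross-section, smaller for irregular shapes.

def circularity(area, perimeter):
    """Dimensionless shape factor; inputs must use consistent length units."""
    return 4.0 * math.pi * area / perimeter ** 2

r = 0.5  # mm, idealized spherical granule cross-section (hypothetical size)
ideal = circularity(math.pi * r**2, 2.0 * math.pi * r)
irregular = circularity(0.6, 3.5)  # same area scale, longer perimeter
```

An irregular granule with the same area but a longer outline scores well below 1.0, which is why a sphericity/roundness threshold is part of proppant quality norms.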
Procedia PDF Downloads 162
111 Examining Influence of The Ultrasonic Power and Frequency on Microbubbles Dynamics Using Real-Time Visualization of Synchrotron X-Ray Imaging: Application to Membrane Fouling Control
Authors: Masoume Ehsani, Ning Zhu, Huu Doan, Ali Lohi, Amira Abdelrasoul
Abstract:
Membrane fouling poses severe challenges in membrane-based wastewater treatment applications. Ultrasound (US) has been considered an effective fouling remediation technique in filtration processes. Bubble cavitation in the liquid medium results from the alternating rarefaction and compression cycles during US irradiation at sufficiently high acoustic pressure. Cavitation microbubbles generated under US irradiation can cause eddy currents and turbulent flow within the medium, either by oscillating or by discharging energy into the system through microbubble explosion. The turbulent flow regime and shear forces created close to the membrane surface disturb the cake layer and dislodge the foulants, which in turn improves cleaning efficiency and filtration performance. Therefore, the number, size, velocity, and oscillation pattern of the microbubbles created in the liquid medium play a crucial role in foulant detachment and permeate flux recovery. The goal of the current study is to gain an in-depth understanding of the influence of US power intensity and frequency on the dynamics and characteristics of microbubbles generated under US irradiation. In comparison with other imaging techniques, the synchrotron in-line Phase Contrast Imaging technique at the Canadian Light Source (CLS) allows in-situ observation and real-time visualization of microbubble dynamics. At the CLS BioMedical Imaging and Therapy (BMIT) polychromatic beamline, the effective parameters were optimized to enhance the contrast at the gas/liquid interface for accurate qualitative and quantitative analysis of bubble cavitation within the system. With the high photon flux and a high-speed camera, a high projection speed was achieved, and each projection of microbubbles in water was captured in 0.5 ms. ImageJ software was used for post-processing the raw images for detailed quantitative analyses of the microbubbles.
The imaging was performed at US power intensity levels of 50 W, 60 W, and 100 W, and at US frequency levels of 20 kHz, 28 kHz, and 40 kHz. Over an imaging duration of 2 seconds, the effect of US power and frequency on the average number, size, and area fraction occupied by bubbles was analyzed. Microbubble dynamics, in terms of velocity in water, was also investigated. As the US power increased from 50 W to 100 W, the average bubble number increased from 746 to 880 and the average bubble diameter from 36.7 µm to 48.4 µm. In terms of the influence of US frequency, fewer bubbles were created at 20 kHz (an average of 176 bubbles rather than 808 bubbles at 40 kHz), while the average bubble size was significantly larger than that at 40 kHz (almost seven times). The majority of bubbles were captured close to the membrane surface in the filtration unit. According to these observations, membrane cleaning efficiency is expected to improve at higher US power and lower US frequency due to the higher energy released to the system by increasing the number of bubbles or growing their size during oscillation (the optimum condition is expected to be at 20 kHz and 100 W).
Keywords: bubble dynamics, cavitational bubbles, membrane fouling, ultrasonic cleaning
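The per-frame statistics reported above (average bubble count and average diameter) are typical outputs of the ImageJ post-processing step. The sketch below shows one plausible way to aggregate such measurements, converting segmented bubble areas to equivalent circular diameters; the frame data are invented, and this is not the authors' actual analysis script.

```python
import math

# Aggregate per-frame bubble measurements: average count per frame and
# average equivalent diameter d = 2*sqrt(A/pi) over all detected bubbles.
# Areas are hypothetical, in um^2.

def bubble_stats(frames):
    """frames: list of lists of bubble areas (um^2). Returns (avg count, avg diameter)."""
    counts = [len(f) for f in frames]
    diameters = [2.0 * math.sqrt(a / math.pi) for f in frames for a in f]
    avg_count = sum(counts) / len(frames)
    avg_diameter = sum(diameters) / len(diameters)
    return avg_count, avg_diameter

frames = [[1000.0, 1200.0], [900.0, 1100.0, 1300.0]]
avg_n, avg_d = bubble_stats(frames)
```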
Procedia PDF Downloads 149
110 Peculiarities of Absorption near the Edge of the Fundamental Band of Irradiated InAs-InP Solid Solutions
Authors: Nodar Kekelidze, David Kekelidze, Elza Khutsishvili, Bela Kvirkvelia
Abstract:
Semiconductor devices are irreplaceable elements for investigations in space (artificial Earth satellites, interplanetary spacecraft, probes, rockets), for the investigation of elementary particles in accelerators, for atomic power stations and nuclear reactors, and for robots operating in heavily radiation-contaminated territories (Chernobyl, Fukushima). Unfortunately, the most important parameters of semiconductors worsen dramatically under irradiation. The creation of radiation-resistant semiconductor materials for opto- and microelectronic devices is therefore an actual problem, as is the investigation of the complicated processes that develop in irradiated solid states. Homogeneous single crystals of InP-InAs solid solutions were grown by the zone melting method. The dependence of the optical absorption coefficient on photon energy near the fundamental absorption edge was studied. This dependence changes dramatically with irradiation. The experiments were performed on InP, InAs, and InP-InAs solid solutions before and after irradiation with electrons and fast neutrons. The investigations of optical properties were carried out on an infrared spectrophotometer in the temperature range of 10 K-300 K and the 1 µm-50 µm spectral region. The radiation fluence of fast neutrons was 2·10¹⁸ neutron/cm², and electrons of 3 MeV and 50 MeV were applied up to fluences of 6·10¹⁷ electron/cm². Under irradiation, an exponential dependence of the optical absorption coefficient on photon energy, with an energy deficiency, was revealed. This phenomenon takes place at high and low temperatures, at different impurity concentrations, and in practically all cases of irradiation by electrons of various energies and by fast neutrons. We have developed a common mechanism for this phenomenon in unirradiated materials and implemented quantitative calculations of the distinctive parameter; this is in satisfactory agreement with experimental data. For irradiated crystals, the picture becomes more complicated.
In this work, the corresponding analysis is carried out. It has been shown that in the case of InP irradiated with electrons (Φ=1·10¹⁷ el/cm²), the optical absorption curve is shifted to lower energies. This is caused by the appearance of tails in the density of states in the forbidden band due to local fluctuations of ionized impurity (defect) concentration. The situation is more complicated in the case of InAs, and for solid solutions with compositions close to InAs, where besides the above phenomenon the Burstein effect takes place, caused by the increase of electron concentration as a result of irradiation. We have shown that under certain conditions the Burstein effect can prevail. This causes the opposite effect: a shift of the optical absorption edge to higher energies. Thus, in the given solid solutions, two different, oppositely directed processes take place. By selection of the solid solution composition and doping impurity, we obtained an InP-InAs solid solution in which mutual compensation of the optical absorption curve displacements occurs under radiation. The obtained result makes it possible to create radiation-resistant optical materials on the basis of InP-InAs solid solutions. Conclusion: the nature of optical absorption near the fundamental edge in semiconductor materials was established, and a radiation-resistant optical material was created.
Keywords: InAs-InP, electron concentration, irradiation, solid solutions
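The two competing shifts described above are commonly modelled as an exponential (Urbach-like) sub-gap tail, alpha(E) = alpha0*exp((E - Eg)/Eu), which moves the apparent edge to lower energies, and a Burstein-Moss band-filling term, dE = (hbar^2/2m*)(3*pi^2*n)^(2/3), which moves it to higher energies. The sketch below is a hedged illustration of these textbook expressions, not the authors' quantitative model; the carrier concentration and effective mass are illustrative InAs-like values.

```python
import math

# Illustrative model terms for the absorption-edge shifts discussed above.
# Physical constants (SI):
HBAR = 1.054571817e-34  # J*s
M0 = 9.1093837015e-31   # electron rest mass, kg
EV = 1.602176634e-19    # J per eV

def urbach_alpha(e_ev, alpha0, eg_ev, eu_ev):
    """Exponential sub-gap absorption tail; alpha0 at the gap energy."""
    return alpha0 * math.exp((e_ev - eg_ev) / eu_ev)

def burstein_moss_shift_ev(n_per_m3, m_eff_ratio):
    """Band-filling (Burstein-Moss) shift of the absorption edge, in eV."""
    k_f = (3.0 * math.pi**2 * n_per_m3) ** (1.0 / 3.0)  # Fermi wavevector
    return HBAR**2 * k_f**2 / (2.0 * m_eff_ratio * M0) / EV

# InAs-like illustration: a small effective mass (~0.023 m0) makes the
# shift large even at moderate electron concentrations (here 1e18 cm^-3).
shift = burstein_moss_shift_ev(1e24, 0.023)
```

The opposite signs of the two contributions are what allow the compensation the abstract reports: a composition can be chosen where the tail-induced red shift and the band-filling blue shift roughly cancel.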
Procedia PDF Downloads 201
109 Closing down the Loop Holes: How North Korea and Other Bad Actors Manipulate Global Trade in Their Favor
Authors: Leo Byrne, Neil Watts
Abstract:
In the complex and evolving landscape of global trade, maritime sanctions emerge as a critical tool wielded by the international community to curb illegal activities and alter the behavior of non-compliant states and entities. These sanctions, designed to restrict or prohibit trade by sea with sanctioned jurisdictions, entities, or individuals, face continuous challenges due to the sophisticated evasion tactics employed by countries like North Korea. As the Democratic People's Republic of Korea (DPRK) diverts significant resources to circumvent these measures, understanding the nuances of its methodologies becomes imperative for maintaining the integrity of global trade systems. The DPRK, one of the most sanctioned nations globally, has developed an intricate network to facilitate its trade in illicit goods, ensuring that the flow of revenue from designated activities continues unabated. Given its geographic and economic conditions, North Korea predominantly relies on maritime routes, utilizing foreign ports to route its illicit trade. This reliance on the sea is exploited through various sophisticated methods, including the use of front companies, falsification of documentation, commingling of bulk cargos, and physical alterations to vessels. These tactics enable the DPRK to navigate through the gaps in regulatory frameworks and lax oversight, effectively undermining international sanctions regimes. Maritime sanctions carry significant implications for global trade, imposing heightened risks in the maritime domain. The deceptive practices employed not only by the DPRK but also by other high-risk jurisdictions necessitate a comprehensive understanding of UN targeted sanctions. For stakeholders in the maritime sector - including maritime authorities, vessel owners, shipping companies, flag registries, and financial institutions serving the shipping industry - awareness and compliance are paramount.
Violations can lead to severe consequences, including reputational damage, sanctions, hefty fines, and even imprisonment. To mitigate risks associated with these deceptive practices, it is crucial for maritime sector stakeholders to employ rigorous due diligence and regulatory compliance screening measures. Effective sanctions compliance serves as a protective shield against legal, financial, and reputational risks, preventing exploitation by international bad actors. This requires not only a deep understanding of the sanctions landscape but also the capability to identify and manage risks through informed decision-making and proactive risk management practices. As the DPRK and other sanctioned entities continue to evolve their sanctions evasion tactics, the international community must enhance its collective efforts to demystify and counter these practices. By leveraging more stringent compliance measures, stakeholders can safeguard against the illicit use of the maritime domain, reinforcing the effectiveness of maritime sanctions as a tool for global security. This paper seeks to dissect North Korea's adaptive strategies in the face of maritime sanctions. By examining up-to-date, geographically, and temporally relevant case studies, it aims to shed light on the primary nodes through which Pyongyang evades sanctions and smuggles goods via third-party ports. The goal is to propose multi-level interaction strategies, ranging from governmental interventions to localized enforcement mechanisms, to counteract these evasion tactics.
Keywords: maritime, maritime sanctions, international sanctions, compliance, risk
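The due-diligence screening described above can be caricatured as a simple rule check: match a vessel against a designated-vessel list and flag common deception indicators such as frequent flag changes. This is purely an illustrative toy, not an operational screening tool; the IMO numbers, list contents, and risk rules are all invented.

```python
# Toy due-diligence screen (illustrative only): check a vessel's IMO number
# against a hypothetical designated-vessel list and flag a simple deceptive-
# practice indicator (frequent reflagging). All data below are invented.

DESIGNATED_IMO = {"9123456", "9234567"}

def screen_vessel(imo, flag_history, designated=DESIGNATED_IMO):
    """Return a list of risk flags raised for the vessel."""
    flags = []
    if imo in designated:
        flags.append("designated vessel")
    if len(set(flag_history)) > 2:  # crude proxy for flag-hopping
        flags.append("frequent flag changes")
    return flags

risk = screen_vessel("9123456", ["Panama", "Togo", "Palau"])
clean = screen_vessel("9999999", ["Panama"])
```

Real screening systems combine many more signals (AIS gaps, ship-to-ship transfer patterns, ownership chains); the point of the sketch is only that compliance checks are rule-based and automatable.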
Procedia PDF Downloads 70
108 Avoidance of Brittle Fracture in Bridge Bearings: Brittle Fracture Tests and Initial Crack Size
Authors: Natalie Hoyer
Abstract:
Bridges in both roadway and railway systems depend on bearings to ensure extended service life and functionality. These bearings enable proper load distribution from the superstructure to the substructure while permitting controlled movement of the superstructure. The design of bridge bearings according to Eurocode DIN EN 1337 and the relevant sections of DIN EN 1993 increasingly requires the use of thick plates, especially for long-span bridges. However, these plate thicknesses exceed the limits specified in the national annex of DIN EN 1993-2. Furthermore, compliance with the DIN EN 1993-1-10 regulations regarding material toughness and through-thickness properties necessitates further modifications. Consequently, these standards cannot be directly applied to the selection of bearing materials without supplementary guidance and design rules. In this context, a recommendation was developed in 2011 to regulate the selection of appropriate steel grades for bearing components. Prior to the initiation of the research project underlying this contribution, this recommendation had only been available as a technical bulletin. Since July 2023, it has been integrated into Guideline 804 of the German railway. However, recent findings indicate that certain bridge-bearing components are exposed to high fatigue loads, which must be considered in structural design, material selection, and calculations. Therefore, the German Centre for Rail Traffic Research commissioned a research project with the objective of proposing an expansion of the current standards, enabling an adequate choice of steel material for bridge bearings to avoid brittle fracture, even for thick plates and components subjected to specific fatigue loads. The results obtained from theoretical considerations, such as finite element simulations and analytical calculations, are validated through large-scale component tests.
Additionally, experimental observations are used to calibrate the calculation models and modify the input parameters of the design concept. Within the large-scale component tests, a brittle failure is artificially induced in a bearing component. For this purpose, an artificially generated initial defect is introduced into the specimen at a previously defined hotspot using spark erosion. Then, a dynamic load is applied until crack initiation occurs, achieving realistic conditions in the form of a sharp notch similar to a fatigue crack. This initiation process continues until the crack length reaches a predetermined size. Afterward, the actual test begins, which requires cooling the specimen with liquid nitrogen until a temperature is reached at which brittle fracture failure is expected. In the next step, the component is subjected to a quasi-static tensile test until failure occurs in the form of a brittle fracture. The proposed paper will present the latest research findings, including the results of the conducted component tests and the derived definition of the initial crack size in bridge bearings.
Keywords: bridge bearings, brittle fracture, fatigue, initial crack size, large-scale tests
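The link between an initial crack size and brittle fracture can be sketched with the standard linear-elastic fracture mechanics relation K = Y*sigma*sqrt(pi*a): the critical crack size follows as a_c = (1/pi)*(K_Ic/(Y*sigma))^2. This is the textbook relation, not the calibrated design concept from the project; the toughness, stress, and geometry factor below are invented illustrative values, not measured bearing properties.

```python
import math

# LEFM sketch: for stress intensity K = Y * sigma * sqrt(pi * a), the
# critical crack size at fracture toughness K_Ic is
# a_c = (1/pi) * (K_Ic / (Y * sigma))^2.

def critical_crack_size_mm(k_ic_mpa_sqrt_m, sigma_mpa, y=1.12):
    """Critical crack size in mm; Y=1.12 is a common edge-crack geometry factor."""
    a_m = (1.0 / math.pi) * (k_ic_mpa_sqrt_m / (y * sigma_mpa)) ** 2
    return a_m * 1000.0

# Illustrative low-temperature toughness 40 MPa*sqrt(m), applied stress 300 MPa
a_c = critical_crack_size_mm(40.0, 300.0)
```

Because toughness drops sharply at low temperature, cooling the specimen (as in the liquid-nitrogen tests described above) shrinks a_c until the eroded-and-sharpened initial defect becomes critical under quasi-static load.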
Procedia PDF Downloads 44
107 Stabilizing Additively Manufactured Superalloys at High Temperatures
Authors: Keivan Davami, Michael Munther, Lloyd Hackel
Abstract:
The control of properties and material behavior by thermal-mechanical processing is based on mechanical deformation and annealing according to a precise schedule that produces a unique and stable combination of grain structure, dislocation substructure, texture, and dispersion of precipitated phases. The authors recently developed a thermal-mechanical technique to stabilize the microstructure of additively manufactured nickel-based superalloys even after exposure to high temperatures. However, the mechanism(s) controlling this stability is still under investigation. Laser peening (LP), also called laser shock peening (LSP), is a shock-based (50 ns duration) post-processing technique used to extend performance levels and improve the service life of critical components by developing deep levels of plastic deformation, thereby generating a high density of dislocations and inducing compressive residual stresses in the surface and deep subsurface of components. These compressive residual stresses are usually accompanied by an increase in hardness and enhance the material's resistance to surface-related failures such as creep, fatigue, contact damage, and stress corrosion cracking. While the LP process enhances the life span and durability of the material, the induced compressive residual stresses relax at high temperatures (>0.5Tm, where Tm is the absolute melting temperature), limiting the applicability of the technology. At temperatures above 0.5Tm, the compressive residual stresses relax and yield strength begins to drop dramatically. The principal reason is the increasing rate of solid-state diffusion, which affects both the dislocations and the microstructural barriers. Dislocation configurations commonly recover by mechanisms such as climb, and they recombine rapidly at high temperatures.
Furthermore, precipitates coarsen and grains grow; virtually all of the available microstructural barriers become ineffective. Our results indicate that by using "cyclic" treatments with sequential LP and annealing steps, the compressive stresses survive and the microstructure is stable after exposure to temperatures exceeding 0.5Tm for a long period of time. When the laser peening process is combined with annealing, the dislocations formed as a result of LP and the precipitates formed during annealing have a complex interaction that provides further stability at high temperatures. From a scientific point of view, this research lays the groundwork for studying a variety of physics, materials science, and mechanical engineering concepts. It could lead to metals operating at higher sustained temperatures, enabling improved system efficiencies. The strengthening of metals by a variety of means (alloying, work hardening, and other processes) has been of interest for a wide range of applications. However, the mechanistic understanding of the often complex interactions between dislocations, solute atoms, and precipitates during plastic deformation has largely remained scattered in the literature. In this research, the actual mechanisms involved in the novel cyclic LP/annealing processes are elucidated through parallel studies of dislocation theory and the implementation of advanced experimental tools. The results help validate a novel laser processing technique for high-temperature applications. This will greatly expand the applications of laser peening technology, originally devised only for temperatures lower than half of the melting temperature.
Keywords: laser shock peening, mechanical properties, indentation, high temperature stability
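The thermally activated relaxation described above is often represented in the literature by a Zener-Wert-Avrami-type expression, sigma/sigma0 = exp(-(C*t)^m) with an Arrhenius rate C = B*exp(-dH/(k*T)). The sketch below uses that generic form purely to illustrate the trend (faster relaxation at higher homologous temperature); every constant is invented and none is fitted to the superalloys studied here.

```python
import math

# Zener-Wert-Avrami-type sketch of residual stress relaxation:
#   sigma(T, t) / sigma0 = exp(-(C * t)^m),  C = B * exp(-dH / (k_B * T)).
# All parameter values below are invented to show the qualitative trend only.

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def residual_stress_ratio(t_s, temp_k, dh_ev=2.5, b=1e9, m=0.1):
    """Fraction of the initial residual stress remaining after time t_s at temp_k."""
    c = b * math.exp(-dh_ev / (K_B * temp_k))
    return math.exp(-((c * t_s) ** m))

low = residual_stress_ratio(3600.0, 800.0)    # 1 h, below ~0.5*Tm for Ni alloys
high = residual_stress_ratio(3600.0, 1100.0)  # 1 h, above ~0.5*Tm
```

The steep drop between the two temperatures mirrors the behavior the abstract attributes to accelerated solid-state diffusion above 0.5Tm, and it is this loss that the cyclic LP/annealing treatment is designed to counteract.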
Procedia PDF Downloads 149
106 Snake Locomotion: From Sinusoidal Curves and Periodic Spiral Formations to the Design of a Polymorphic Surface
Authors: Ennios Eros Giogos, Nefeli Katsarou, Giota Mantziorou, Elena Panou, Nikolaos Kourniatis, Socratis Giannoudis
Abstract:
In the context of the postgraduate course Productive Design, Department of Interior Architecture of the University of West Attica in Athens, under the guidance of Professors Nikolaos Kourniatis and Socratis Giannoudis, kinetic mechanisms with parametric models were examined for their further application in the design of objects. In the first phase, the students studied a motion mechanism that they chose from daily experience and then analyzed its geometric structure in relation to the geometric transformations involved. In the second phase, the students designed it as a parametric model in the Grasshopper algorithmic editor for Rhino 3D and planned its application in an everyday object. For the project presented, our team began by studying the movement of living beings, specifically the snake. By studying the snake and the role that the environment plays in its movement, four basic typologies were recognized: serpentine, concertina, sidewinding and rectilinear locomotion, as well as its ability to perform spiral formations. Most typologies are characterized by ripples, a series of sinusoidal curves. For the application of the snake movement in a polymorphic space divider, the use of a coil-type joint was studied. In the Grasshopper program, the simulation of the desired motion for the polymorphic surface was tested by applying a coil on a sinusoidal curve and a spiral curve. It was important throughout the process that the points corresponding to the nodes of the real object remain constant in number, as do the distances between them, and that the elasticity of the construction be achieved through a modular movement of the coil rather than an elastic element (material) at the nodes. Using a mesh (repeating coil), the whole construction is transformed into a supporting body and combines functionality with aesthetics. The set of elements functions as a vertical spatial network, where each element participates in its coherence and stability. 
Depending on the positions of the elements in terms of the level of support, different perspectives are created in terms of the visual perception of the adjacent space. For the implementation of the model at 1:3 scale (0.50 m x 2.00 m), the load-bearing structure studied has aluminum rods of Φ6 mm for the basic pillars and Φ2.50 mm for the secondary columns. Filling elements and nodes are of similar material and were made of MDF surfaces. During the design process, four trapezoidal patterns were picked, which function as filling elements, while to support their assembly, a different engraved facet was made for each. The nodes have holes through which the rods pass, while their connection point with the patterns has a half-carved recess; the patterns have a corresponding recess. The nodes are of two different types, depending on the column that passes through them. The patterns and nodes were designed to be cut and engraved using a laser cutter and attached to the nodes using glue. The parameters participate in the design as mechanisms that generate complex forms and structures through the repetition of constantly changing versions of the parts that compose the object. Keywords: polymorphic, locomotion, sinusoidal curves, parametric
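The constraint above (a constant number of nodes at constant spacing along the guide curve) amounts to equal-arc-length resampling of the sinusoid. The following is a plain-Python sketch of that idea, not the actual Grasshopper definition; curve amplitude, wave count, and length are invented illustration values.

```python
import math

# Sketch: sample a sinusoidal "serpentine" guide curve, then resample it so
# the coil nodes stay constant in number and (near-)equally spaced along the
# curve, as required for the modular, non-elastic movement of the coil.

def sinusoid(t: float, amplitude: float = 0.25, waves: int = 4,
             length: float = 2.0) -> tuple:
    """Point on a sinusoidal curve; t runs from 0 to 1."""
    x = t * length
    y = amplitude * math.sin(2 * math.pi * waves * t)
    return (x, y)

def resample_equal_arclength(n_nodes: int, samples: int = 2000):
    """Return n_nodes points with near-equal spacing along the curve."""
    pts = [sinusoid(i / samples) for i in range(samples + 1)]
    cum = [0.0]                       # cumulative arc length
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    nodes, j = [], 0
    for k in range(n_nodes):
        target = total * k / (n_nodes - 1)
        while cum[j] < target:        # walk forward to the target arc length
            j += 1
        nodes.append(pts[j])
    return nodes

nodes = resample_equal_arclength(12)
print(len(nodes))  # node count stays fixed regardless of curve shape
```

Changing the amplitude or wave count deforms the curve, but the node count and their spacing along the curve are preserved, mirroring the coil's modular movement.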
Procedia PDF Downloads 105
105 A Human Factors Approach to Workload Optimization for On-Screen Review Tasks
Authors: Christina Kirsch, Adam Hatzigiannis
Abstract:
Rail operators and maintainers worldwide are increasingly replacing walking patrols in the rail corridor with mechanized track patrols (essentially data capture on trains) and on-screen reviews of track infrastructure in centralized review facilities. The benefit is that infrastructure workers are less exposed to the dangers of the rail corridor. The impact is a significant change in work design, from walking track sections and direct observation in the real world to sedentary jobs in the review facility reviewing captured data on screens. Defects in rail infrastructure can have catastrophic consequences. Reviewer performance regarding accuracy and efficiency of reviews within the available time frame is essential to ensure safety and operational performance. Rail operators must optimize workload and resource loading to transition to on-screen reviews successfully. Therefore, they need to know which workload assessment methodologies will provide reliable and valid data to optimize resourcing for on-screen reviews. This paper compares objective workload measures, including track difficulty ratings and review distance covered per hour, with subjective workload assessments (NASA TLX) and analyses the link between workload and reviewer performance, including sensitivity, precision, and overall accuracy. An experimental study was completed with eight on-screen reviewers, including infrastructure workers and engineers, reviewing track sections with different levels of track difficulty over nine days. Each day the reviewers completed four 90-minute sessions of on-screen inspection of the track infrastructure. Data regarding the speed of review (km/hour), detected defects, false negatives, and false positives were collected. Additionally, all reviewers completed a subjective workload assessment (NASA TLX) after each 90-minute session and a short employee engagement survey at the end of the study period that captured impacts on job satisfaction and motivation. 
The results showed that objective measures of track difficulty align with subjective mental demand, temporal demand, effort, and frustration in the NASA TLX. Interestingly, review speed correlated with subjective assessments of physical and temporal demand, but not with mental demand. Subjective performance ratings correlated with all accuracy measures and review speed. The results showed that subjective NASA TLX workload assessments accurately reflect objective workload. The analysis of the impact of workload on performance showed that subjective mental demand correlated with high precision (accurately detected defects, not false positives). Conversely, high temporal demand was negatively correlated with sensitivity, the percentage of detected existing defects. Review speed was significantly correlated with false negatives: with an increase in review speed, accuracy declined. On the other hand, review speed correlated with subjective performance assessments; reviewers thought their performance was higher when they reviewed the track sections faster, despite the decline in accuracy. The study results were used to optimize resourcing and ensure that reviewers had enough time to review the allocated track sections to improve defect detection rates in accordance with the efficiency-thoroughness trade-off. Overall, the study showed the importance of a multi-method approach to workload assessment and optimization, combining subjective workload assessments with objective workload and performance measures to ensure that recommendations for work system optimization are evidence-based and reliable. Keywords: automation, efficiency-thoroughness trade-off, human factors, job design, NASA TLX, performance optimization, subjective workload assessment, workload analysis
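The performance measures and the workload-performance correlations discussed above can be sketched as follows. The per-session records below are invented placeholders for illustration, not the study's data.

```python
# Sketch: sensitivity/precision from review outcomes, plus a Pearson
# correlation linking review speed to missed defects (false negatives).

def precision(tp: int, fp: int) -> float:
    """Share of flagged defects that are real (penalizes false positives)."""
    return tp / (tp + fp) if tp + fp else 0.0

def sensitivity(tp: int, fn: int) -> float:
    """Share of existing defects that were detected."""
    return tp / (tp + fn) if tp + fn else 0.0

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# hypothetical sessions: (review speed km/h, TP, FP, FN, TLX mental demand)
sessions = [(8, 18, 2, 2, 70), (10, 16, 3, 4, 60),
            (12, 14, 5, 6, 55), (14, 11, 7, 9, 45)]
speeds = [s[0] for s in sessions]
false_negatives = [s[3] for s in sessions]
# positive r here mirrors the finding: faster reviews, more missed defects
print(pearson_r(speeds, false_negatives))
```

In practice the study's analysis would run such correlations per reviewer and per TLX sub-scale; this only illustrates the arithmetic behind the reported relationships.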
Procedia PDF Downloads 121
104 Wind Resource Classification and Feasibility of Distributed Generation for Rural Community Utilization in North Central Nigeria
Authors: O. D. Ohijeagbon, Oluseyi O. Ajayi, M. Ogbonnaya, Ahmeh Attabo
Abstract:
This study analyzed the electricity generation potential from wind at seven sites spread across seven states of the North-Central region of Nigeria. Twenty-one years (1987 to 2007) of wind speed data at a height of 10 m were obtained from the Nigeria Meteorological Department, Oshodi. The data were subjected to different statistical tests and also compared with the two-parameter Weibull probability density function. The outcome shows that the monthly average wind speeds ranged between 2.2 m/s in November for Bida and 10.1 m/s in December for Jos. The yearly average ranged between 2.1 m/s in 1987 for Bida and 11.8 m/s in 2002 for Jos. Also, the power density for each site was determined to range between 29.66 W/m2 for Bida and 864.96 W/m2 for Jos. The two parameters (k and c) of the Weibull distribution were found to range between 2.3 in Lokoja and 6.5 in Jos for k, while c ranged between 2.9 m/s in Bida and 9.9 m/s in Jos. These outcomes point to the fact that wind speeds at Jos, Minna, Ilorin, Makurdi and Abuja are compatible with the cut-in speeds of modern wind turbines and hence may be economically feasible for wind-to-electricity generation at and above the height of 10 m. The study further assessed the potential and economic viability of standalone wind generation systems for off-grid rural communities located in each of the studied sites. A specific electric load profile was developed to suit hypothetical communities, each consisting of 200 homes, a school and a community health center. Assessment of the design that will optimally meet the daily load demand with a loss of load probability (LOLP) of 0.01 was performed, considering two stand-alone applications, wind and diesel. The diesel standalone system (DSS) was taken as the basis of comparison since the experimental locations have no connection to a distribution network. The HOMER® software optimizing tool was utilized to determine the optimal combination of system components that will yield the lowest life cycle cost. 
Sequel to the analysis for rural community utilization, a Distributed Generation (DG) analysis that considered the possibility of generating wind power in the MW range, in order to take advantage of Nigeria’s tariff regime for embedded generation, was carried out for each site. The DG design incorporated each community of 200 homes, catered for free of charge and offset by the excess electrical energy generated above the minimum requirement and sold to a nearby distribution grid. Wind DG systems were found suitable and viable for producing environmentally friendly energy, in terms of life cycle cost and levelised cost of producing energy, at Jos ($0.14/kWh), Minna ($0.12/kWh), Ilorin ($0.09/kWh), Makurdi ($0.09/kWh), and Abuja ($0.04/kWh) at a particular turbine hub height. These outputs reveal the value retrievable from the project after the breakeven point as a function of energy consumed. Based on the results, the study demonstrated that including renewable energy in the rural development plan will accelerate the upgrading of rural communities. Keywords: wind speed, wind power, distributed generation, cost per kilowatt-hour, clean energy, North-Central Nigeria
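The Weibull quantities used in the study can be sketched from the standard two-parameter formulas: mean speed v = c·Γ(1 + 1/k) and mean power density P/A = ½·ρ·c³·Γ(1 + 3/k). The example below uses the paper's Jos parameters; the air density ρ = 1.225 kg/m³ is a standard sea-level assumption, not a value from the paper, so the result will not reproduce the site-specific 864.96 W/m² figure computed from measured data.

```python
import math

# Sketch: standard two-parameter Weibull wind statistics.
# k: dimensionless shape parameter, c: scale parameter (m/s).

def weibull_mean_speed(k: float, c: float) -> float:
    """Mean wind speed (m/s): v = c * Gamma(1 + 1/k)."""
    return c * math.gamma(1.0 + 1.0 / k)

def weibull_power_density(k: float, c: float, rho: float = 1.225) -> float:
    """Mean wind power density (W/m^2): 0.5 * rho * c^3 * Gamma(1 + 3/k)."""
    return 0.5 * rho * c**3 * math.gamma(1.0 + 3.0 / k)

# Jos parameters reported above: k = 6.5, c = 9.9 m/s
print(round(weibull_mean_speed(6.5, 9.9), 2))
print(round(weibull_power_density(6.5, 9.9), 1))
```

A high k (as at Jos) indicates steady winds; power density grows with the cube of c, which is why Jos dominates the site ranking.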
Procedia PDF Downloads 512
103 Computational, Human, and Material Modalities: An Augmented Reality Workflow for Building Form-Found Textile Structures
Authors: James Forren
Abstract:
This research paper details a recent demonstrator project in which digitally form-found textile structures were built by human craftspersons wearing augmented reality (AR) head-worn displays (HWDs). The project utilized a wet-state natural fiber / cementitious matrix composite to generate minimal-bending shapes in tension which, when cured and rotated, performed as minimal-bending compression members. The significance of the project is that it synthesizes computational structural simulations with visually guided handcraft production. Computational and physical form-finding methods with textiles are well characterized in the development of architectural form. One difficulty, however, is physically building computer simulations, which often requires complicated digital fabrication workflows. AR HWDs, by contrast, have been used to build complex digital forms from bricks, wood, plastic, and steel without digital fabrication devices. These projects utilize, instead, the tacit-knowledge motor schemas of the human craftsperson. Computational simulations offer unprecedented speed and performance in solving complex structural problems. Human craftspersons possess highly efficient complex spatial reasoning motor schemas. And textiles offer efficient form-generating possibilities for individual structural members and overall structural forms. This project proposes that the synthesis of these three modalities of structural problem-solving (computational, human, and material) may not only develop efficient structural form but also offer further creative potentialities when the respective intelligence of each modality is productively leveraged. The project methodology pertains to its three modalities of production: 1) computational, 2) human, and 3) material. A proprietary three-dimensional graphic statics simulator generated a three-legged arch as a wireframe model. This wireframe was discretized into nine modules, three modules per leg. 
Each module was modeled as a woven matrix of one-inch diameter chords. And each woven matrix was transmitted to a holographic engine running on HWDs. Craftspersons wearing the HWDs then wove wet cementitious chords within a simple falsework frame to match the minimal-bending form displayed in front of them. Once the woven components cured, they were demounted from the frame. The components were then assembled into a full structure using the holographically displayed computational model as a guide. The assembled structure was approximately eighteen feet in diameter and ten feet in height and matched the holographic model to within an inch of tolerance. The construction validated the computational simulation of the minimal-bending form, as it was dimensionally stable for a ten-day period, after which it was disassembled. The demonstrator illustrated the facility with which a computationally derived, structurally stable form could be achieved by the holographically guided, complex three-dimensional motor schema of the human craftsperson. However, the workflow traveled unidirectionally from computer to human to material, failing to fully leverage the intelligence of each modality. Subsequent research (a workshop testing human interaction with a physics-engine simulation of string networks, and work on the use of HWDs to capture hand gestures in weaving) seeks to develop further interactivity with rope and chord towards a bi-directional workflow within full-scale building environments. Keywords: augmented reality, cementitious composites, computational form finding, textile structures
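The core form-finding idea (find a pure-tension hanging shape, then invert it into a minimal-bending compression arch) can be illustrated with a generic force-density example. This is our own sketch of the principle, not the project's proprietary graphic statics simulator; node count, span, force density q, and node weight w are invented illustration values.

```python
# Sketch: force-density form-finding for a 2D hanging chain, then flipped
# to obtain a compression arch. Interior-node vertical equilibrium with
# uniform force density q and node weight w gives the tridiagonal system
#   -q*y[i-1] + 2q*y[i] - q*y[i+1] = -w   (sag is negative y),
# solved here with the Thomas algorithm.

def hanging_chain(n_nodes: int = 9, q: float = 4.0, w: float = 1.0):
    """Return node heights of a chain pinned at y = 0 at both ends."""
    n = n_nodes - 2                      # interior nodes
    a, b, c = [-q] * n, [2 * q] * n, [-q] * n
    d = [-w] * n
    for i in range(1, n):                # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    y = [0.0] * n
    y[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        y[i] = (d[i] - c[i] * y[i + 1]) / b[i]
    return [0.0] + y + [0.0]

tension_form = hanging_chain()
compression_arch = [-y for y in tension_form]  # flip: tension -> compression
print(max(compression_arch))                   # rise at midspan
```

The flipped shape carries the same loads in pure compression, which is the logic behind curing the tension-formed woven modules and rotating them into compression members.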
Procedia PDF Downloads 175
102 Regulatory Governance as a De-Parliamentarization Process: A Contextual Approach to Global Constitutionalism and Its Effects on New Arab Legislatures
Authors: Abderrahim El Maslouhi
Abstract:
The paper aims to analyze an often-overlooked dimension of global constitutionalism, which is the rise of the regulatory state and its impact on parliamentary dynamics in transition regimes. In contrast to Majone’s technocratic vision of convergence towards a single regulatory system based on competence and efficiency, national transpositions of regulatory governance and, in general, the relationship to global standards primarily depend upon a number of distinctive parameters. These include the policy formation process, the speed of change, the depth of parliamentary tradition, and greater or lesser vulnerability to the normative conditionality of donors, interstate groupings and transnational regulatory bodies. Based on a comparison between three post-Arab Spring countries (Morocco, Tunisia, and Egypt, whose constitutions underwent substantive review in the period 2011-2014) and some European Union member states, the paper intends, first, to assess the degree of permeability to global constitutionalism in different contexts. A noteworthy divide emerges from this comparison. Whereas European constitutions still seem impervious to the lexicon of global constitutionalism, the influence of the latter is obvious in the recently drafted constitutions of Morocco, Tunisia, and Egypt. This is evidenced by their reference to notions such as ‘governance’, ‘regulators’, ‘accountability’, ‘transparency’, ‘civil society’, and ‘participatory democracy’. Second, the study will provide a contextual account of the internal and external rationales underlying the constitutionalization of regulatory governance in the cases examined. 
Unlike European constitutionalism, where parliamentarism and the tradition of representative government function as a structural mechanism that moderates the de-parliamentarization effect induced by global constitutionalism, Arab constitutional transitions have led to a paradoxical situation: contrary to public demands for further parliamentarization, the 2011 constitution-makers opted for a de-parliamentarization pattern. This is particularly reflected in the procedures established by constitutions and regular legislation to handle the interaction between lawmakers and regulatory bodies. Once the ‘constitutional’ and ‘independent’ nature of these agencies is formally endorsed, the birth of these ‘fourth power’ entities, which are neither elected nor directly responsible to elected officials, raises the question of their accountability. Third, the paper shows that, even in the three selected countries, the de-parliamentarization intensity varies significantly. In contrast to the radical stance of the Moroccan and Egyptian constituents, who showed greater concern to shield regulatory bodies from legislatures’ scrutiny, the Tunisian case indicates a certain tendency to provide lawmakers with some essential control instruments (e.g., exclusive appointment power, adversarial discussion of regulators’ annual reports, and dismissal power, later held unconstitutional). In sum, the comparison reveals that the transposition of the regulatory state model and, more generally, sensitivity to the legal implications of global conditionality essentially rely on the evolution of real-world power relations at both national and international levels. Keywords: Arab legislatures, de-parliamentarization, global constitutionalism, normative conditionality, regulatory state
Procedia PDF Downloads 138
101 Surveillance of Artemisinin Resistance Markers and Their Impact on Treatment Outcomes in Malaria Patients in an Endemic Area of South-Western Nigeria
Authors: Abiodun Amusan, Olugbenga Akinola, Kazeem Akano, María Hernández-Castañeda, Jenna Dick, Akintunde Sowunmi, Geoffrey Hart, Grace Gbotosho
Abstract:
Introduction: Artemisinin-based combination therapies (ACTs) are the cornerstone malaria treatment option in most malaria-endemic countries. Unfortunately, the malaria control effort is constantly being threatened by resistance of Plasmodium falciparum to ACTs. The recent evidence of artemisinin resistance in East Africa and the possibility of its spreading to other African regions portend an imminent health catastrophe. This study aimed at evaluating the occurrence, prevalence, and influence of artemisinin-resistance markers on treatment outcomes in Ibadan before and after the adoption of ACTs in Nigeria in 2005. Method: The study involved day-zero dried blood spots (DBS) obtained from malaria patients during retrospective (2000-2005) and prospective (2021) studies. A cohort in the prospective study received oral dihydroartemisinin-piperaquine and underwent a 42-day follow-up to observe treatment outcomes. Genomic DNA was extracted from the DBS samples using a QIAamp blood extraction kit. Fragments of the P. falciparum kelch13 (Pfkelch13), P. falciparum coronin (Pfcoronin), P. falciparum multidrug resistance 2 (PfMDR2), and P. falciparum chloroquine resistance transporter (PfCRT) genes were amplified and sequenced on a Sanger sequencing platform to identify artemisinin resistance-associated mutations. Mutations were identified by aligning sequenced data with reference sequences obtained from the National Center for Biotechnology Information. Data were analyzed using descriptive statistics and Student's t-tests. Results: Mean parasite clearance time (PCT) and fever clearance time (FCT) were 2.1 ± 0.6 days (95% CI: 1.97-2.24) and 1.3 ± 0.7 days (95% CI: 1.1-1.6), respectively. Four mutations, K189T [34/53 (64.2%)], R255K [2/53 (3.8%)], K189N [1/53 (1.9%)] and N217H [1/53 (1.9%)], were identified within the N-terminal (coiled-coil-containing) domain of Pfkelch13. 
None of the artemisinin resistance-associated mutations usually found within the β-propeller domain of the Pfkelch13 gene were found in the analyzed samples. However, the K189T and R255K mutations showed a significant correlation with longer parasite clearance time in the patients (P<0.002). The observed Pfkelch13 gene changes did not influence the baseline mean parasitemia (P = 0.44). P76S [17/100 (17%)] and V62M [1/100 (1%)] changes were identified in the Pfcoronin gene fragment without any influence on the parasitological parameters. No change was observed in the PfMDR2 gene, while no artemisinin resistance-associated mutation was found in the PfCRT gene. Furthermore, one sample each in the retrospective study contained the Pfkelch13 K189T and Pfcoronin P76S mutations. Conclusion: The study revealed an absence of genetic evidence of artemisinin resistance in the study population at the time of study. The high frequency of the K189T Pfkelch13 mutation and its correlation with increased parasite clearance time in this study may depict geographical variation of resistance mediators and imminent artemisinin resistance, respectively. The study also revealed an inherent potential of parasites to harbour drug-resistant genotypes before the introduction of ACTs in Nigeria. Keywords: artemisinin resistance, plasmodium falciparum, Pfkelch13 mutations, Pfcoronin
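The mutation-calling step described in the Method (aligning sample sequences to an NCBI reference and reporting substitutions in ref-position-alt notation, e.g., K189T) can be sketched as follows. The sequence fragments below are short invented strings for illustration only, not real Pfkelch13 sequence.

```python
# Sketch: report amino-acid substitutions between two aligned sequences in
# the conventional <ref><position><alt> notation used above (e.g., K189T).

def call_mutations(reference: str, sample: str, start_position: int = 1):
    """List substitutions between two aligned, equal-length sequences."""
    if len(reference) != len(sample):
        raise ValueError("sequences must be aligned to equal length")
    return [f"{r}{start_position + i}{s}"
            for i, (r, s) in enumerate(zip(reference, sample))
            if r != s and s != "-"]      # ignore gaps in the sample

# invented 12-residue fragment beginning at residue 185
ref_fragment = "QDNLKAGHTRSV"
sample_fragment = "QDNLTAGHTRSV"         # K -> T at position 189
print(call_mutations(ref_fragment, sample_fragment, start_position=185))
```

In practice this comparison follows translation of the Sanger reads and alignment against the NCBI reference; the function only illustrates the final reporting step.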
Procedia PDF Downloads 49
100 Improved Morphology in Sequential Deposition of the Inverted Type Planar Heterojunction Solar Cells Using Cheap Additive (DI-H₂O)
Authors: Asmat Nawaz, Ceylan Zafer, Ali K. Erdinc, Kaiying Wang, M. Nadeem Akram
Abstract:
Hybrid halide perovskites with the general formula ABX₃, where X = Cl, Br or I, are considered ideal candidates for the preparation of photovoltaic devices. The most commonly and successfully used hybrid halide perovskite for photovoltaic applications is CH₃NH₃PbI₃ and its analogue prepared from lead chloride, commonly symbolized as CH₃NH₃PbI₃₋ₓClₓ. Some research groups are using lead-free (Sn replacing Pb) and mixed halide perovskites for the fabrication of devices. Both mesoporous and planar structures have been developed. Compared with the mesoporous structure, in which the perovskite material infiltrates a mesoporous metal oxide scaffold, the planar architecture is much simpler and easier for device fabrication. In a typical perovskite solar cell, a perovskite absorber layer is sandwiched between the hole and electron transport layers. Upon irradiation, carriers are created in the absorber layer that can travel through the hole and electron transport layers and the interface in between. We fabricated an inverted planar heterojunction solar cell, ITO/PEDOT/perovskite/PCBM/Al, via a two-step spin-coating method, also called the sequential deposition method. A small amount of a cheap additive, H₂O, was added to the PbI₂/DMF solution to make it homogeneous. We prepared four different solutions (without H₂O, and with 1%, 2%, and 3% H₂O). After preparation, overnight stirring at 60 °C was essential to obtain homogeneous precursor solutions. We observed that the solution with 1% H₂O was much more homogeneous at room temperature than the others, while the solution with 3% H₂O precipitated at once at room temperature. Four different PbI₂ films were formed on PEDOT substrates by spin coating, and immediately afterwards (before the PbI₂ dried) the substrates were immersed in a methylammonium iodide solution (prepared in isopropanol) to complete the desired perovskite film. 
After obtaining the desired films, the substrates were rinsed with isopropanol to remove the excess methylammonium iodide and finally dried on a hot plate for only 1-2 minutes. In this study, we added H₂O to the PbI₂/DMF precursor solution. The concept of additives is widely used in bulk-heterojunction solar cells to manipulate the surface morphology, leading to enhancement of the photovoltaic performance. Two parameters are most important for the selection of additives: (a) a higher boiling point with respect to the host material and (b) good interaction with the precursor materials. We observed that the morphology of the films was improved, and we achieved denser, more uniform films with fewer cavities and almost full surface coverage, but only when using the precursor solution with 1% H₂O. Therefore, we fabricated the complete perovskite solar cell by the sequential deposition technique with the precursor solution containing 1% H₂O. We concluded that with the addition of additives to the precursor solutions, one can easily manipulate the morphology of the perovskite film. In the sequential deposition method, the thickness of the perovskite film is on the µm scale, while the charge diffusion length of PbI₂ is on the nm scale; therefore, by controlling the thickness using other deposition methods for the fabrication of solar cells, we can achieve better efficiency. Keywords: methylammonium lead iodide, perovskite solar cell, precursor composition, sequential deposition
Procedia PDF Downloads 246
99 Development of Knowledge Discovery Based Interactive Decision Support System on Web Platform for Maternal and Child Health System Strengthening
Authors: Partha Saha, Uttam Kumar Banerjee
Abstract:
Maternal and Child Healthcare (MCH) has always been regarded as one of the most important issues globally. Reduction of maternal and child mortality rates and increased healthcare service coverage were declared among the targets in the Millennium Development Goals until 2015 and thereafter as an important component of the Sustainable Development Goals. Over the last decade, worldwide MCH indicators have improved but could not match the expected levels. Progress of both maternal and child mortality rates has been monitored by several researchers. Each of the studies has stated that fewer than 26% of low-income and middle-income countries (LMICs) were on track to achieve the targets prescribed by MDG4. The average worldwide annual rates of reduction of the under-five mortality rate and the maternal mortality rate were 2.2% and 1.9%, respectively, as of 2011, whereas rates of at least 4.4% and 5.5% annually are needed to achieve the targets. In spite of proven healthcare interventions for both mothers and children, these could not be scaled up to the required volume due to fragmented health systems, especially in developing and under-developed countries. In this research, a knowledge discovery based interactive Decision Support System (DSS) has been developed on a web platform to assist healthcare policy makers in developing evidence-based policies. To achieve desirable results in MCH, efficient resource planning is very much required. In most LMICs, resources are a major constraint. Knowledge generated through this system would help healthcare managers develop strategic resource planning for combating issues like high inequity and low coverage in MCH. This system would help healthcare managers accomplish the following four tasks. 
These are: a) comprehending region-wise conditions of variables related to MCH, b) identifying relationships among variables, c) segmenting regions based on variable status, and d) finding segment-wise key influential variables which have a major impact on healthcare indicators. The whole system development process was divided into three phases: i) identifying contemporary issues related to MCH services and policy making; ii) development of the system; and iii) verification and validation of the system. More than 90 variables under three categories, namely a) educational, social, and economic parameters; b) MCH interventions; and c) health system building blocks, have been included in this web-based DSS, and five separate modules have been developed under the system. The first module has been designed for analysing the current healthcare scenario. The second module helps healthcare managers understand correlations among variables. The third module reveals frequently occurring incidents along with different MCH interventions. The fourth module segments regions based on the three previously mentioned categories, and in the fifth module, segment-wise key influential interventions are identified. India has been considered as the case study area in this research. Data from 601 districts of India have been used for inspecting the effectiveness of the developed modules. The system has been developed by implementing different statistical and data mining techniques on a web platform. Policy makers would be able to generate different scenarios from the system before drawing any inference, aided by its interactive capability. Keywords: maternal and child healthcare, decision support systems, data mining techniques, low and middle income countries
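The segmentation and key-variable tasks above can be sketched in miniature: split regions on an outcome indicator, then rank variables by strength of association with that indicator. The district records below are invented placeholders for illustration, not the Indian data, and the variable names are assumed examples of MCH interventions.

```python
# Sketch of tasks (c) and (d): median-split segmentation on under-5 mortality,
# then ranking candidate variables by |Pearson r| with the indicator.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# columns: antenatal-care coverage %, skilled-birth attendance %, under-5 mortality
districts = [
    (45, 40, 80), (50, 55, 70), (62, 60, 58), (70, 72, 45),
    (78, 80, 38), (85, 88, 30), (90, 92, 24), (95, 96, 18),
]
u5mr = [d[2] for d in districts]
median = sorted(u5mr)[len(u5mr) // 2]
high_burden = [d for d in districts if d[2] >= median]   # task (c): segmentation

# task (d): rank variables by strength of association with the indicator
names = ["ANC coverage", "skilled birth attendance"]
scores = {name: abs(pearson_r([d[i] for d in districts], u5mr))
          for i, name in enumerate(names)}
print(sorted(scores, key=scores.get, reverse=True))
```

The actual DSS applies richer data mining techniques (clustering, association rules) over 90+ variables; this only illustrates the shape of the analysis its modules automate.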
Procedia PDF Downloads 258
98 Satellite Connectivity for Sustainable Mobility
Authors: Roberta Mugellesi Dow
Abstract:
As the climate crisis becomes impossible to ignore, it is imperative that new services are developed addressing not only the needs of customers but also their impact on the environment. The Telecommunication and Integrated Application (TIA) Directorate of ESA is supporting the green transition with particular attention to sustainable mobility. “Accelerating the shift to sustainable and smart mobility” is at the core of the European Green Deal strategy, which seeks a 90% reduction in related emissions by 2050. Transforming the way that people and goods move is essential to increasing mobility while decreasing environmental impact, and transport must be considered holistically to produce a shared vision of green intermodal mobility. The use of space technologies, integrated with terrestrial technologies, is an enabler of smarter traffic management and increased transport efficiency for automated and connected multimodal mobility. Satellite connectivity, including future 5G networks, and digital technologies such as Digital Twin, AI, Machine Learning, and cloud-based applications are key enablers of sustainable mobility. SatCom is essential to ensure that connectivity is ubiquitously available, even in remote and rural areas or in case of a failure, through the convergence of terrestrial and SatCom connectivity networks. This is especially crucial when there are risks of network failures or cyber-attacks targeting terrestrial communication. SatCom ensures communication network robustness and resilience. The combination of terrestrial and satellite communication networks is making possible intelligent and ubiquitous V2X systems and PNT services with significantly enhanced reliability and security, hyper-fast wireless access, as well as seamless communication coverage. SatNav is essential in providing accurate tracking and tracing capabilities for automated vehicles and in guiding them to target locations. 
SatNav can also enable location-based services like car-sharing applications, parking assistance, and fare payment. In addition to GNSS receivers, wireless connections, radar, lidar, and other installed sensors can enable automated vehicles to monitor their surroundings, to ‘talk to each other’ and with infrastructure in real time, and to respond to changes instantaneously. SatEO can be used to provide the maps required for traffic management, as well as to evaluate conditions on the ground, assess changes, and provide key data for monitoring and forecasting air pollution and other important parameters. Earth Observation derived data are used to provide meteorological information such as wind speed and direction, humidity, and other parameters that must be incorporated into models contributing to traffic management services. The paper will provide examples of services and applications that have been developed with the aim of identifying innovative solutions and new business models enabled by new digital technologies, engaging the space and non-space ecosystems together to deliver value and provide innovative, greener solutions in the mobility sector. Examples include connected autonomous vehicles, electric vehicles, green logistics, and others. Relevant technologies include hybrid SatCom and 5G providing ubiquitous coverage, IoT integration with non-space technologies, as well as navigation and PNT technology, and other space data. Keywords: sustainability, connectivity, mobility, satellites
Procedia PDF Downloads 133
97 Exploring the Ethics and Impact of Slum Tourism in Kenya: A Critical Examination on the Ethical Implications, Legalities and Beneficiaries of This Trade and Long-Term Implications to the Slum Communities
Authors: Joanne Ndirangu
Abstract:
Delving into the intricate landscape of slum tourism in Kenya, this study critically evaluates its ethical implications, legal frameworks, and beneficiaries. By examining the complex interplay between tourism operators, visitors, and slum residents, it seeks to uncover the long-term consequences for the communities involved. Through an exploration of ethical considerations, legal parameters, and the distribution of benefits, this examination aims to shed light on the broader socio-economic impacts of slum tourism in Kenya, particularly on the lives of those residing in these marginalized communities. The study assesses the ethical considerations surrounding slum tourism in Kenya, including the potential exploitation of residents and cultural sensitivities, and examines the legal frameworks governing slum tourism in Kenya, evaluating their effectiveness in protecting the rights and well-being of slum dwellers. It identifies the primary beneficiaries of slum tourism in Kenya, including tour operators, local businesses, and residents, and analyses the distribution of economic benefits. It explores the long-term socio-economic impacts of slum tourism on the lives of residents, including changes in living conditions, access to resources, and community development. It also seeks to understand the motivations and perceptions of tourists participating in slum tourism in Kenya, to assess their role in shaping the industry's dynamics, and to investigate the potential for sustainable and responsible forms of slum tourism that prioritize community empowerment, cultural exchange, and mutual respect. Finally, it provides recommendations for policymakers, tourism stakeholders, and community organizations to promote ethical and sustainable practices in slum tourism in Kenya. 
The main contributions of researching slum tourism in Kenya include the following. Ethical Awareness: by critically examining the ethical implications of slum tourism, the research can raise awareness among tourists, operators, and policymakers about the potential exploitation of marginalized communities. Beneficiary Analysis: by identifying the primary beneficiaries of slum tourism, the research can inform discussions on the fair distribution of economic benefits and potential strategies for ensuring that local communities derive meaningful advantages from tourism activities. Socio-Economic Understanding: by exploring the long-term socio-economic impacts of slum tourism, the research can deepen understanding of how tourism activities affect the lives of slum residents, potentially informing policies and initiatives aimed at improving living conditions and promoting community development. Tourist Perspectives: understanding the motivations and perceptions of tourists participating in slum tourism can provide valuable insights into consumer behaviour and preferences, informing the development of responsible tourism practices and marketing strategies. Promotion of Responsible Tourism: by providing recommendations for promoting ethical and sustainable practices in slum tourism, the research can contribute to the development of guidelines and initiatives aimed at fostering responsible tourism and minimizing negative impacts on host communities. Overall, the research can contribute to a more comprehensive understanding of slum tourism in Kenya and its broader implications, while also offering practical recommendations for promoting ethical and sustainable tourism practices. Keywords: slum tourism, dark tourism, ethical tourism, responsible tourism
Procedia PDF Downloads 68
96 Case Study Hyperbaric Oxygen Therapy for Idiopathic Sudden Sensorineural Hearing Loss
Authors: Magdy I. A. Alshourbagi
Abstract:
Background: The National Institute on Deafness and Other Communication Disorders defines idiopathic sudden sensorineural hearing loss (ISSNHL) as the idiopathic loss of hearing of at least 30 dB across 3 contiguous frequencies occurring within 3 days. The most common clinical presentation involves an individual experiencing a sudden unilateral hearing loss, tinnitus, a sensation of aural fullness, and vertigo. The etiologies and pathologies of ISSNHL remain unclear. Several pathophysiological mechanisms have been described, including vascular occlusion, viral infections, labyrinthine membrane breaks, immune-associated disease, abnormal cochlear stress response, trauma, abnormal tissue growth, toxins, ototoxic drugs, and cochlear membrane damage. The rationale for the use of hyperbaric oxygen to treat ISSNHL is supported by an understanding of the high metabolism and paucity of vascularity of the cochlea. The cochlea and the structures within it require a high oxygen supply. The direct vascular supply, particularly to the organ of Corti, is minimal. Tissue oxygenation to the structures within the cochlea occurs via oxygen diffusion from cochlear capillary networks into the perilymph and the cortilymph. The perilymph is the primary oxygen source for these intracochlear structures. Unfortunately, perilymph oxygen tension is decreased significantly in patients with ISSNHL. To achieve a consistent rise of perilymph oxygen content, the arterial-perilymphatic oxygen concentration difference must be extremely high. This can be restored with hyperbaric oxygen therapy. Subject and Methods: A 37-year-old man presented at the clinic with a five-day history of muffled hearing and tinnitus of the right ear. Symptoms were of sudden onset, with no associated pain, dizziness, or otorrhea, and no past history of hearing problems or medical illness. Family history was negative. Physical examination was normal. 
Otologic examination revealed normal tympanic membranes bilaterally, with no evidence of cerumen or middle ear effusion. Tuning fork examination showed a positive Rinne test bilaterally, but with lateralization of the Weber test to the left side, indicating right-ear sensorineural hearing loss. Audiometric analysis confirmed a sensorineural hearing loss of about 70 dB across all frequencies in the right ear. Routine lab work was all within normal limits. A clinical diagnosis of idiopathic sudden sensorineural hearing loss of the right ear was made, and the patient began medical treatment (corticosteroid, vasodilator, and HBO therapy). The recommended treatment profile consists of 100% O2 at 2.5 atmospheres absolute for 60 minutes daily (six days per week) for 40 treatments. The optimal number of HBOT treatments will vary, depending on the severity and duration of symptomatology and the response to treatment. Results: As HBOT is not yet a standard for idiopathic sudden sensorineural hearing loss, it was introduced to this patient as an adjuvant therapy. The HBOT program was scheduled for 40 sessions; we used a 12-seat multiplace chamber, and treatment was started at day seven after the hearing loss onset. After the tenth session of HBOT, improvement of both hearing (by audiogram) and tinnitus was obtained in the affected (right) ear. Conclusions: HBOT may be used for idiopathic sudden sensorineural hearing loss as an adjuvant therapy. It may promote oxygenation to the inner ear apparatus and revive hearing ability. Patients who fail to respond to oral and intratympanic steroids may benefit from this treatment. Further investigation is warranted, including animal studies to understand the molecular and histopathological aspects of HBOT and randomized controlled clinical studies. Keywords: idiopathic sudden sensorineural hearing loss (ISSNHL), hyperbaric oxygen therapy (HBOT), decibel (dB), oxygen (O2)
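The audiometric criterion in the definition above (a loss of at least 30 dB at 3 contiguous test frequencies, within 3 days) can be checked mechanically against a pair of audiograms. The sketch below is illustrative only: the function name, the frequency ordering convention, and the threshold values are assumptions, not data from this case report.

```python
def meets_issnhl_criterion(baseline_db, current_db, min_loss_db=30, span=3):
    """Check the audiometric part of the ISSNHL definition: a hearing
    loss of at least `min_loss_db` dB at `span` contiguous frequencies.
    Both inputs are threshold lists ordered by test frequency."""
    losses = [cur - base for base, cur in zip(baseline_db, current_db)]
    run = 0  # length of the current run of contiguous qualifying frequencies
    for loss in losses:
        run = run + 1 if loss >= min_loss_db else 0
        if run >= span:
            return True
    return False

# Roughly the situation in this case: ~70 dB loss across all frequencies
# (thresholds below are illustrative, e.g. 250 Hz ... 8 kHz).
baseline = [10, 10, 15, 10, 15, 20]
affected = [80, 80, 85, 80, 85, 90]
print(meets_issnhl_criterion(baseline, affected))  # True
```

A uniform 70 dB drop trivially satisfies the 3-contiguous-frequency rule; the function matters for borderline audiograms where only isolated frequencies drop.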
Procedia PDF Downloads 431
95 Effect of the Incorporation of Modified Starch on the Physicochemical Properties and Consumer Acceptance of Puff Pastry
Authors: Alejandra Castillo-Arias, Santiago Amézquita-Murcia, Golber Carvajal-Lavi, Carlos M. Zuluaga-Domínguez
Abstract:
The intricate relationship between health and nutrition has driven the food industry to seek healthier and more sustainable alternatives. A key strategy currently employed is the reduction of saturated fats and the incorporation of ingredients that align with new consumer trends. Modified starch, a polysaccharide widely used in baking, also serves as a functional ingredient to boost dietary fiber content. However, its use in puff pastry remains challenging due to the technological difficulties in achieving a buttery pastry with the necessary strength to create thin, flaky layers. This study explored the potential of incorporating modified starch into puff pastry formulations. To evaluate the physicochemical properties of wheat flour mixed with modified starch, five different flour samples were prepared: T1, T2, T3, and T4, containing 10 g, 20 g, 30 g, and 40 g of modified starch per 100 g of mixture, respectively, alongside a control sample (C) with no added starch. The analysis focused on various physicochemical indices, including the Water Absorption Index (WAI), Water Solubility Index (WSI), Swelling Power (SP), and Water Retention Capacity (WRC). The puff pastry was further characterized by color measurement and sensory analysis. For the preparation of the puff pastry dough, the flour, modified starch, and salt were mixed, followed by the addition of water until a homogeneous dough was achieved. The margarine was later incorporated into the dough, which was folded and rolled multiple times to create the characteristic layers of puff pastry. The dough was then cut into equal pieces, baked at 170°C, and allowed to cool. The results indicated that the addition of modified starch did not significantly alter the specific volume or texture of the puff pastries, as reflected by the stable WAI and SP values across the samples. 
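The abstract does not state the measurement protocol behind WAI, WSI, and SP, but a widely used set of definitions (based on centrifuging a starch-water suspension and weighing the sediment and the dried supernatant solids) can be sketched as follows. The function names and the sample masses are illustrative assumptions, not values from this study.

```python
def wai(sediment_g, dry_sample_g):
    """Water Absorption Index: mass of hydrated sediment (gel) recovered
    after centrifugation, per gram of dry sample (g/g)."""
    return sediment_g / dry_sample_g

def wsi_percent(supernatant_solids_g, dry_sample_g):
    """Water Solubility Index: solids dissolved into the supernatant,
    as a percentage of the dry sample mass."""
    return 100.0 * supernatant_solids_g / dry_sample_g

def swelling_power(sediment_g, dry_sample_g, wsi_pct):
    """Swelling Power: sediment mass per gram of *insoluble* dry matter,
    i.e. the dry sample corrected for the fraction that dissolved."""
    return sediment_g / (dry_sample_g * (1.0 - wsi_pct / 100.0))

# Illustrative numbers only, not measurements from the study:
dry, sediment, solids = 2.0, 5.6, 0.14   # grams
wsi = wsi_percent(solids, dry)           # 7.0 %
print(wai(sediment, dry))                # 2.8 (g gel per g dry sample)
print(round(swelling_power(sediment, dry, wsi), 2))  # 3.01
```

Under these definitions, a stable WAI and SP across treatments while WRC rises is consistent with the starch binding water without dissolving into the supernatant.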
However, the WRC increased with higher starch content, highlighting the hydrophilic nature of the modified starch, which necessitated additional water during dough preparation. Color analysis revealed significant variations in the L* (lightness) and a* (red-green) parameters, with no consistent relationship between the modified starch treatments and the control. However, the b* (yellow-blue) parameter showed a strong correlation across most samples, except for treatment T3. Thus, modified starch affected the a* component of the CIELAB color space, influencing the reddish hue of the puff pastries. Variations in baking time due to the increased water content of the dough likely contributed to differences in lightness among the samples. Sensory analysis revealed that consumers preferred the sample with a 20% starch substitution (T2), which was rated similarly to the control in terms of texture. However, treatment T3 exhibited unusual behavior in the texture analysis, and the color analysis showed that treatment T1 most closely resembled the control, indicating that starch addition is most noticeable to consumers in the visual aspect of the product. In conclusion, while the modified starch successfully maintained the desired texture and internal structure of the puff pastry, its impact on water retention and color requires careful consideration in product formulation. This study underscores the importance of balancing product quality with consumer expectations when incorporating modified starches in baked goods. Keywords: consumer preferences, modified starch, physicochemical properties, puff pastry
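Statements like "treatment T1 most closely resembled the control" in CIELAB space are usually quantified with a color difference metric; the simplest is the CIE76 ΔE*ab, the Euclidean distance between two (L*, a*, b*) triples. The sketch below uses that metric; the numeric color values are illustrative, not the study's measurements.

```python
import math

def delta_e_cielab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two
    (L*, a*, b*) triples. Smaller values mean more similar colors."""
    return math.dist(lab1, lab2)

# Illustrative readings only (not measurements from the study):
control = (72.0, 3.5, 28.0)
t1      = (71.2, 4.1, 27.4)
print(round(delta_e_cielab(control, t1), 2))  # 1.17
```

A ΔE*ab near 1 is around the threshold of a just-noticeable difference for many observers, which is one way to frame whether consumers would perceive the color shift introduced by the starch.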
Procedia PDF Downloads 26