Search results for: DDoS distribution flooding intrusion layer
754 The Use of Corpora in Improving Modal Verb Treatment in English as Foreign Language Textbooks
Authors: Lexi Li, Vanessa H. K. Pang
Abstract:
This study aims to demonstrate how native and learner corpora can be used to enhance modal verb treatment in EFL textbooks in mainland China. It contributes to a corpus-informed and learner-centered design of grammar presentation in EFL textbooks that enhances the authenticity and appropriateness of textbook language for target learners. The linguistic focus is will, would, can, could, may, might, shall, should, must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014). The spoken part is chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL). All the essays under the 'secondary school' section were selected. A series of five secondary coursebooks comprise the textbook corpus. All the data in both the learner and the textbook corpora are retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was analyzed in terms of the use (distributional features, semantic functions, and co-occurring constructions) and the misuse (syntactic errors, e.g., *she can sings) of the nine modal verbs to uncover potential difficulties that confront learners. The analysis of distribution indicates several discrepancies between the textbook corpus and BNCS2014. The four most frequent modal verbs in BNCS2014 are can, would, will, could, while can, will, should, could are the top four in the textbooks. Most strikingly, there is an unusually high proportion of can (41.1%) in the textbooks. The results on different meanings show that will, would, and must are the most problematic. For example, for will, the textbooks contain 20% more occurrences of 'volition' and 20% fewer occurrences of 'prediction' than BNCS2014. Regarding co-occurring structures, the textbooks over-represent the structure 'modal + do' across the nine modal verbs. Another major finding is that the structure 'modal + have done', which frequently co-occurs with could, would, should, and must, is underused in textbooks. Besides, these four modal verbs are the most difficult for learners, as the error analysis shows. This study demonstrates how the synergy of native and learner corpora can be harnessed to improve the EFL textbook presentation of modal verbs so that textbooks provide not only authentic language used in natural discourse but also an appropriate design tailored to the needs of target learners.
Keywords: English as Foreign Language, EFL textbooks, learner corpus, modal verbs, native corpus
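A minimal Python sketch of the kind of distributional comparison the abstract describes; the token lists below are toy stand-ins, not BNCS2014 or textbook data:

```python
from collections import Counter

MODALS = ["will", "would", "can", "could", "may", "might", "shall", "should", "must"]

def modal_distribution(tokens):
    """Relative frequency of each modal among all modal occurrences."""
    counts = Counter(t for t in tokens if t in MODALS)
    total = sum(counts.values()) or 1
    return {m: counts[m] / total for m in MODALS}

# Toy token streams standing in for the native and textbook corpora.
native_tokens = "you can go but it would help if you could wait".split()
textbook_tokens = "we can read and we can write so we will learn".split()

native = modal_distribution(native_tokens)
textbook = modal_distribution(textbook_tokens)
for m in MODALS:
    gap = textbook[m] - native[m]
    print(f"{m:>6}: textbook {textbook[m]:.2f} vs native {native[m]:.2f} (gap {gap:+.2f})")
```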
753 Motor Coordination and Body Mass Index in Primary School Children
Authors: Ingrid Ruzbarska, Martin Zvonar, Piotr Oleśniewicz, Julita Markiewicz-Patkowska, Krzysztof Widawski, Daniel Puciato
Abstract:
Obese children will probably become obese adults, consequently exposed to an increased risk of comorbidity and premature mortality. Body weight may be indirectly determined by the continuous development of coordination and motor skills. The level of motor skills and abilities is an important factor that promotes physical activity from early childhood. The aim of the study is to thoroughly understand the internal relations between motor coordination abilities and the somatic development of prepubertal children, and to determine the effect of excess body weight on motor coordination by comparing the motor ability levels of children with different body mass index (BMI) values. The data were collected from 436 children aged 7–10 years, without health limitations, fully participating in school physical education classes. Body height was measured with portable stadiometers (Harpenden, Holtain Ltd.) and body mass with a digital scale (HN-286, Omron). Motor coordination was evaluated with the Kiphard-Schilling body coordination test, Körperkoordinationstest für Kinder (KTK). The Shapiro-Wilk normality test was used to verify the data distribution. The correlation analysis revealed a statistically significant negative association between dynamic balance and BMI, as well as between the motor quotient and BMI (p<0.01), for both boys and girls. The results showed no effect of gender on the difference in the observed trends. The analysis of variance proved statistically significant differences between normal weight children and their overweight or obese counterparts. Coordination abilities probably play an important role in preventing or moderating the negative trajectory leading to childhood overweight and obesity. At this age, the development of coordination abilities should become a key strategy, targeted at long-term prevention of obesity and the promotion of an active lifestyle in adulthood. Motor performance is thus essential for establishing a healthy lifestyle already in childhood. Physical inactivity apparently results in motor deficits and a sedentary lifestyle in children, which may be accompanied by excess energy intake and overweight.
Keywords: childhood, KTK test, physical education, psychomotor competence
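For illustration, a hedged Python sketch of the reported analysis pipeline (Shapiro-Wilk normality check, BMI-motor-quotient correlation, one-way ANOVA across weight groups); all data and group cut-offs below are simulated, not the study's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 120
bmi = rng.normal(17.5, 2.5, n)                       # simulated BMI values
mq = 100 - 2.0 * (bmi - 17.5) + rng.normal(0, 8, n)  # motor quotient, negatively tied to BMI

# Normality check, as in the study
print("Shapiro-Wilk p:", stats.shapiro(mq).pvalue)

# Negative association between motor quotient and BMI
r, p = stats.pearsonr(bmi, mq)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

# One-way ANOVA across weight-status groups (cut-offs are illustrative only)
groups = [mq[bmi < 17], mq[(bmi >= 17) & (bmi < 19)], mq[bmi >= 19]]
f, p = stats.f_oneway(*groups)
print(f"ANOVA F = {f:.2f}, p = {p:.4f}")
```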
752 Creation of a Test Machine for the Scientific Investigation of Chain Shot
Authors: Mark McGuire, Eric Shannon, John Parmigiani
Abstract:
Timber harvesting increasingly involves mechanized equipment. This has increased the efficiency of harvesting but has also introduced worker-safety concerns. One such concern arises from the use of harvesters. During operation, harvesters subject saw chain to large dynamic mechanical stresses. These stresses can, under certain conditions, cause the saw chain to fracture. The high speed of harvester saw chain can cause the resulting open chain loop to fracture a second time due to the dynamic loads placed upon it as it travels through space. If a second fracture occurs, it can result in a projectile consisting of one to several chain links. This projectile is referred to as a chain shot. It has speeds similar to a bullet but typically has greater mass and is a significant safety concern. Numerous examples exist of chain shots penetrating bullet-proof barriers and causing severe injury and death. Improved harvester-cab barriers can help prevent injury; however, a comprehensive scientific understanding of chain shot is required to consistently reduce or prevent it. Obtaining this understanding requires a test machine with the capability to cause chain shot to occur under carefully controlled conditions and to accurately measure the response. Worldwide, few such test machines exist. Those that do focus on validating the ability of barriers to withstand a chain shot impact rather than on obtaining a scientific understanding of the chain shot event itself. The purpose of this paper is to describe the design, fabrication, and use of a test machine capable of a comprehensive scientific investigation of chain shot. The capabilities of this machine are to test all commercially available saw chains and bars at chain tensions and speeds meeting and exceeding those typically encountered in harvester use, and to accurately measure the corresponding key technical parameters. The test machine was constructed inside a standard shipping container. This provides space for both an operator station and a test chamber. In order to contain the chain shot under any possible test conditions, the test chamber was lined with a base layer of AR500 steel followed by an overlay of HDPE. To accommodate varying bar orientations and fracture-initiation sites, the entire saw chain drive unit and bar mounting system is modular and capable of being located anywhere in the test chamber. The drive unit consists of a high-speed electric motor with a flywheel. Standard Ponsse harvester head components are used for bar mounting and chain tensioning. Chain lubrication is provided by a separate peristaltic pump. Chain fracture is initiated following ISO standard 11837. Measured parameters include shaft speed, motor vibration, bearing temperatures, motor temperature, motor current draw, hydraulic fluid pressure, chain force at fracture, and high-speed camera images. Results show that the machine is capable of consistently causing chain shot. Measurement output shows the fracture location and the force associated with fracture as a function of saw chain speed and tension. Use of this machine will result in a scientific understanding of chain shot and consequently improved products and greater harvester operator safety.
Keywords: chain shot, safety, testing, timber harvesters
751 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant
Authors: John K. Avor, Choong-Koo Chang
Abstract:
The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT), and vice versa, is through a fast bus transfer scheme. Fast bus transfer is a time-critical application where the transfer process depends on various parameters, so transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical class 1E electrical loads. Bus transfers must, therefore, be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, the main problem is that there are instances where transfer schemes have malfunctioned due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, the combination of Artificial Neural Networks and Fuzzy Systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the concept of Neuro-Fuzzy systems to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm to be selected, based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting a Neuro-Fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of artificial neural networks, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined effect of artificial neural networks and fuzzy systems in accurately interpreting key bus transfer parameters, such as the magnitude of the residual voltage, its decay time, and the associated phase angle of the residual voltage, in order to determine the possibility of high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The scheme is implemented on the APR1400 nuclear power plant auxiliary system.
Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability
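A toy Python sketch of the fuzzy-inference step described above, selecting a transfer scheme from first-cycle residual-voltage features; the membership breakpoints and rules are illustrative assumptions, and the ANN signal-validation stage is omitted:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership function."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def choose_scheme(v_residual_pu, phase_deg):
    # Fuzzy grades (breakpoints are illustrative, not plant data)
    v_high = tri(v_residual_pu, 0.6, 1.0, 1.4)
    v_low = tri(v_residual_pu, -0.4, 0.0, 0.6)
    angle_small = tri(abs(phase_deg), -30, 0, 30)
    # Rule 1: high residual voltage AND small phase angle -> fast transfer is safe
    fast = min(v_high, angle_small)
    # Rule 2: low residual voltage -> wait for residual bus transfer
    residual = v_low
    return "sequential fast transfer" if fast > residual else "residual bus transfer"

print(choose_scheme(0.95, 8))   # -> sequential fast transfer
print(choose_scheme(0.25, 70))  # -> residual bus transfer
```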
750 Skin-Dose Mapping for Patients Undergoing Interventional Radiology Procedures: Clinical Experimentations versus a Mathematical Model
Authors: Aya Al Masri, Stefaan Carpentier, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: During an 'Interventional Radiology (IR)' procedure, the patient's skin-dose may become high enough for burns, necrosis, and ulceration to appear. In order to prevent these deterministic effects, an accurate calculation of the patient skin-dose mapping is essential. For most machines, the 'Dose Area Product (DAP)' and the fluoroscopy time are the only information available to the operator. These two parameters are very poor indicators of the peak skin dose. We developed a mathematical model that reconstructs the magnitude (delivered dose), shape, and localization of each irradiation field on the patient's skin. In case a critical dose is exceeded, the system generates warning alerts. We present the results of its comparison with clinical studies. Materials and methods: Two series of comparisons of the skin-dose mapping of our mathematical model with clinical studies were performed: 1. First, clinical tests were performed on patient phantoms. Gafchromic films were placed on the table of the IR machine under PMMA plates (thickness = 20 cm) that simulate the patient. After irradiation, the film darkening is proportional to the radiation dose received by the patient's back and reflects the shape of the X-ray field. After film scanning and analysis, the exact dose value can be obtained at each point of the mapping. Four experiments were performed, constituting a total of 34 acquisition incidences covering all possible exposure configurations. 2. Second, clinical trials were launched on real patients during real 'Chronic Total Occlusion (CTO)' procedures, for a total of 80 cases. Gafchromic films were placed at the back of the patients. We performed comparisons of the dose values, as well as the distribution and shape of the irradiation fields, between the skin-dose mapping of our mathematical model and the Gafchromic films. Results: The comparison of the dose values shows a difference of less than 15%. Moreover, our model shows very good geometric accuracy: all fields have the same shape, size, and location (uncertainty < 5%). Conclusion: This study shows that our model is a reliable tool to warn physicians when a high radiation dose is reached. Thus, deterministic effects can be avoided.
Keywords: clinical experimentation, interventional radiology, mathematical model, patient's skin-dose mapping
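A small Python sketch of the kind of model-versus-film comparison reported (point-wise relative dose difference on a common grid); the dose maps below are synthetic placeholders, not clinical data:

```python
import numpy as np

# Hypothetical 2D dose maps (mGy) on a common grid: film measurement vs. model
film = np.random.default_rng(1).uniform(50, 300, (64, 64))
model = film * np.random.default_rng(2).uniform(0.9, 1.1, (64, 64))

# Point-wise relative difference, evaluated where the film registered dose
mask = film > 5.0  # ignore unexposed film regions
rel_diff = np.abs(model[mask] - film[mask]) / film[mask]
print(f"median relative difference: {100 * np.median(rel_diff):.1f}%")
print(f"peak skin dose (model): {model.max():.0f} mGy vs (film): {film.max():.0f} mGy")
```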
749 Effect of Fertilization and Combined Inoculation with Azospirillum brasilense and Pseudomonas fluorescens on Rhizosphere Microbial Communities of Avena sativa (Oats) and Secale Cereale (Rye) Grown as Cover Crops
Authors: Jhovana Silvia Escobar Ortega, Ines Eugenia Garcia De Salamone
Abstract:
Cover crops are an agri-technological alternative to improve soil properties. Cover crops such as oats and rye can be used to reduce erosion and favor system sustainability when they are grown in the same agricultural cycle as the soybean crop. Soybean is very profitable, but its low contribution of easily decomposable residues, due to its low C/N ratio, leaves the soil exposed to erosive action and raises the need to reduce its monoculture. Furthermore, inoculation with plant growth promoting rhizobacteria contributes to the establishment, development, and production of several cereal crops. However, there is little information on its effects on forage crops, which are often used as cover crops to improve soil quality. In order to evaluate the effect of combined inoculation with Azospirillum brasilense and Pseudomonas fluorescens on rhizosphere microbial communities, field experiments were conducted in the west of Buenos Aires province, Argentina, with a split-split plot randomized complete block factorial design with three replicates. The factors were: type of cover crop, inoculation, and fertilization. In the main plot, two levels of fertilization, 0 and 7 40-0-5 (NPKS), were established at sowing. Rye (Secale cereale cultivar Quehué) and oats (Avena sativa var. Aurora) were sown in the subplots. In the sub-subplots, two inoculation treatments were applied: without and with application of a combined inoculant of A. brasilense and P. fluorescens. Because the growth of cover crops usually has to be stopped with the herbicide glyphosate, rhizosphere soil of the 0-20 and 20-40 cm layers was sampled at three sampling times: before glyphosate application (BG), a month after glyphosate application (AG), and at soybean harvest (SH). Community-level physiological profiles (CLPP) and the Shannon index of microbial diversity (H) were obtained by multivariate Principal Component Analysis. Also, the most probable number (MPN) of nitrifiers and cellulolytics was determined using selective liquid media for each functional group. The CLPP of rhizosphere microbial communities showed significant differences between sampling times. There was no interaction between sampling times and either the type of cover crop or inoculation. Rhizosphere microbial communities of samples obtained BG had different CLPP with respect to the samples obtained at the sampling times AG and SH. Fertilizer and depth of sampling also caused changes in the CLPP. The H diversity index of rhizosphere microbial communities of rye at the sampling time BG was higher than that associated with oats. The MPN of both microbial functional types was lower in the deeper layer, since these microorganisms are mostly aerobic. The MPN of nitrifiers decreased in the rhizosphere of both cover crops only AG. At the sampling time BG, the MPN of both microbial types was larger than those obtained AG and SH. This may mean that the glyphosate application could cause fairly permanent changes in these microbial communities, which can be considered bio-indicators of soil quality. Inoculation and fertilizer inputs could be included to improve management of these cover crops because they can have a significant positive effect on the sustainability of the agro-ecosystem.
Keywords: community level of physiological profiles, microbial diversity, plant growth promoting rhizobacteria, rhizosphere microbial communities, soil quality, system sustainability
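A brief Python sketch of the two quantitative tools named above, the Shannon diversity index H and PCA of CLPP data; the substrate-activity matrix below is randomly generated, not the field data:

```python
import numpy as np

def shannon_index(activity):
    """Shannon diversity H from substrate-utilization activities (CLPP wells)."""
    p = np.asarray(activity, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def pca_scores(X, n_components=2):
    """Principal-component scores via SVD of the centred data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy CLPP data: rows = rhizosphere samples, columns = carbon substrates
rng = np.random.default_rng(3)
clpp = rng.uniform(0, 2, (12, 31))
print("H per sample:", np.round([shannon_index(row) for row in clpp], 2))
print("PC1/PC2 scores:\n", np.round(pca_scores(clpp), 2))
```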
748 Aerosol Direct Radiative Forcing Over the Indian Subcontinent: A Comparative Analysis from the Satellite Observation and Radiative Transfer Model
Authors: Shreya Srivastava, Sagnik Dey
Abstract:
Aerosol direct radiative forcing (ADRF) refers to the alteration of the Earth's energy balance by the scattering and absorption of solar radiation by aerosol particles. India experiences substantial ADRF due to high aerosol loading from various sources. The radiative impact of these aerosols depends on their physical characteristics (such as size, shape, and composition) and atmospheric distribution. Quantifying ADRF is crucial for understanding the impact of aerosols on the regional climate and the Earth's radiative budget. In this study, we have taken radiation data from the Clouds and the Earth's Radiant Energy System (CERES, spatial resolution = 1° x 1°) for 22 years (2000-2021) over the Indian subcontinent. Except for a few locations, the short-wave ADRF exhibits aerosol cooling at the top of the atmosphere (TOA), with values ranging from +2.5 W/m2 to -22.5 W/m2. Cooling due to aerosols is more pronounced in the absence of clouds. Being an aerosol hotspot, the Indo-Gangetic Plain (IGP) shows a higher negative ADRF. Aerosol Forcing Efficiency (AFE) shows a decreasing seasonal trend in winter (DJF) over the entire study region, and an increasing trend over the IGP and western south India during the post-monsoon season (SON) in clear-sky conditions. Analysing atmospheric heating and AOD trends, we found that the change in atmospheric heating is governed not by aerosol loading alone but also by the aerosol composition and/or their vertical profile. We used Multi-angle Imaging Spectro-Radiometer (MISR) Level-2 Version 23 aerosol products to look into aerosol composition. MISR incorporates 74 aerosol mixtures in its retrieval algorithm, based on size, shape, and absorbing properties. This aerosol mixture information was used for analysing long-term changes in aerosol composition and the dominating aerosol species corresponding to the aerosol forcing value. Further, the ADRF derived from this method is compared with around 35 studies across India in which a plane-parallel radiative transfer model was used and the model inputs were taken from OPAC (Optical Properties of Aerosols and Clouds), utilizing only limited aerosol parameter measurements. The result shows a large overestimation of TOA warming by the latter (i.e., the model-based method).
Keywords: aerosol radiative forcing (ARF), aerosol composition, MISR, CERES, SBDART
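A minimal Python sketch of the two derived quantities discussed, ADRF at TOA and forcing efficiency (AFE = ADRF/AOD); the fluxes, AOD values, and the net-downward-flux sign convention are illustrative assumptions, not CERES data:

```python
import numpy as np

def adrf_toa(flux_no_aerosol, flux_with_aerosol):
    """Aerosol direct radiative forcing at TOA (W/m2): negative = cooling.
    Fluxes are net downward shortwave at TOA."""
    return flux_with_aerosol - flux_no_aerosol

# Toy monthly TOA shortwave fluxes (W/m2) for one grid cell
f_clean = np.array([238.0, 241.5, 240.2])  # aerosol-free sky
f_aero = np.array([231.0, 230.5, 233.8])   # with aerosols
aod = np.array([0.45, 0.62, 0.38])         # co-located aerosol optical depth

forcing = adrf_toa(f_clean, f_aero)
efficiency = forcing / aod  # aerosol forcing efficiency, W/m2 per unit AOD
print("ADRF:", forcing, "-> AFE:", np.round(efficiency, 1))
```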
747 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters
Authors: Trevor C. Brown, David J. Miron
Abstract:
Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size, and porosity for applications such as heterogeneous catalysis, and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants, and Gibbs free energy are dependent on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to provide information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high surface area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated adsorbent uptake at the monolayer and the equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K, and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on the adsorbent (accessible surface area and micropore volume), the adsorbate (molecular areas and volumes), and the thermodynamic (Gibbs free energy) variations of the adsorption sites.
Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics
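As a concrete reference point, a short Python sketch fitting the constant-parameter Langmuir isotherm that the pressure-varying method generalizes; the data are synthetic. In the FLS-PVLR extension, the single (n_mono, K) pair would be replaced by a smooth sequence of per-pressure parameters, with an added penalty on parameter changes between successive pressures:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, n_mono, k):
    """Constant-parameter Langmuir isotherm: n(p) = n_mono * K p / (1 + K p)."""
    return n_mono * k * p / (1.0 + k * p)

# Synthetic isotherm data (pressure in kPa, uptake in mmol/g)
p = np.linspace(1, 100, 25)
n_obs = langmuir(p, 2.0, 0.05) + np.random.default_rng(4).normal(0, 0.02, p.size)

(n_mono, k), _ = curve_fit(langmuir, p, n_obs, p0=(1.0, 0.01))
print(f"monolayer capacity = {n_mono:.2f} mmol/g, K = {k:.3f} 1/kPa")
```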
746 Explosive Clad Metals for Geothermal Energy Recovery
Authors: Heather Mroz
Abstract:
Geothermal fluids can provide a nearly unlimited source of renewable energy but are often highly corrosive due to dissolved carbon dioxide (CO2), hydrogen sulphide (H2S), ammonia (NH3), and chloride ions. The corrosive environment drives material selection for many components, including piping, heat exchangers, and pressure vessels, to higher alloys of stainless steel, nickel-based alloys, and titanium. The use of these alloys is cost-prohibitive and does not offer the pressure rating of carbon steel. One solution, explosion cladding, has been proven to reduce the capital cost of geothermal equipment while retaining the mechanical and corrosion properties of both the base metal and the cladded surface metal. Explosion cladding is a solid-state welding process that uses precision explosions to bond two dissimilar metals while retaining their mechanical, electrical, and corrosion properties. The process is commonly used to clad steel with a thin layer of corrosion-resistant alloy metal, such as stainless steel, brass, nickel, silver, titanium, or zirconium. Additionally, explosion welding can join a wide array of compatible and non-compatible metals, with more than 260 metal combinations possible. The explosion weld is achieved in milliseconds; therefore, no bulk heating occurs, and the metals experience no dilution. By adhering to a strict set of manufacturing requirements, both the shear strength and tensile strength of the bond will exceed the strength of the weaker metal, ensuring the reliability of the bond. For over 50 years, explosion cladding has been used in the oil and gas and chemical processing industries and has provided significant economic benefit in reduced maintenance and lower capital costs over solid construction. The focus of this paper will be on the many benefits of using explosion clad in process equipment instead of more expensive solid alloy construction. It will cover the method of clad-plate production with explosion welding, as well as the methods employed to ensure sound bonding of the metals. It will also include the origins of explosion cladding as well as recent technological developments. Traditionally, explosion clad plate was formed into vessels, tube sheets, and heads, but recent advances include explosion welded piping. The final portion of the paper will give examples of the use of explosion-clad metals in geothermal energy recovery. The classes of materials used for geothermal brine will be discussed, including stainless steels, nickel alloys, and titanium. These examples will include heat exchangers (tube sheets), high-pressure and horizontal separators, standard-pressure crystallizers, piping, and well casings. It is important to educate engineers and designers on material options as they develop equipment for geothermal resources. Explosion cladding is a niche technology that can be successful in many situations, like geothermal energy recovery, where high-temperature, high-pressure, and corrosive environments are typical. Applications for explosion clad metals include vessel and heat exchanger components as well as piping.
Keywords: clad metal, explosion welding, separator material, well casing material, piping material
745 Optimizing Electric Vehicle Charging Networks with Dynamic Pricing and Demand Elasticity
Authors: Chiao-Yi Chen, Dung-Ying Lin
Abstract:
With the growing awareness of environmental protection and the implementation of government carbon reduction policies, the number of electric vehicles (EVs) has rapidly increased, leading to a surge in charging demand and imposing significant challenges on the existing power grid’s capacity. Traditional urban power grid planning has not adequately accounted for the additional load generated by EV charging, which often strains the infrastructure. This study aims to optimize grid operation and load management by dynamically adjusting EV charging prices based on real-time electricity supply and demand, leveraging consumer demand elasticity to enhance system efficiency. This study uniquely addresses the intricate interplay between urban traffic patterns and power grid dynamics in the context of electric vehicle (EV) adoption. By integrating Hsinchu City's road network with the IEEE 33-bus system, the research creates a comprehensive model that captures both the spatial and temporal aspects of EV charging demand. This approach allows for a nuanced analysis of how traffic flow directly influences the load distribution across the power grid. The strategic placement of charging stations at key nodes within the IEEE 33-bus system, informed by actual road traffic data, enables a realistic simulation of the dynamic relationship between vehicle movement and energy consumption. This integration of transportation and energy systems provides a holistic view of the challenges and opportunities in urban EV infrastructure planning, highlighting the critical need for solutions that can adapt to the ever-changing interplay between traffic patterns and grid capacity. The proposed dynamic pricing strategy effectively reduces peak charging loads, enhances the operational efficiency of charging stations, and maximizes operator profits, all while ensuring grid stability. These findings provide practical insights and a valuable framework for optimizing EV charging infrastructure and policies in future smart cities, contributing to more resilient and sustainable urban energy systems.
Keywords: dynamic pricing, demand elasticity, EV charging, grid load balancing, optimization
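A hedged Python sketch of the demand-elasticity mechanism underlying the pricing strategy; the constant-elasticity form and all numbers are assumptions for illustration, not the study's calibrated model:

```python
def demand_after_price_change(base_demand_kw, base_price, new_price, elasticity=-0.3):
    """Constant-elasticity response: demand scales with (p_new / p_base)^elasticity."""
    return base_demand_kw * (new_price / base_price) ** elasticity

# Raise the charging tariff 40% during a peak hour (numbers illustrative)
peak = demand_after_price_change(500.0, 0.10, 0.14)
print(f"peak charging load drops from 500.0 to {peak:.1f} kW")
```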
744 A Method for Evaluating Gender Equity of Cycling from Rawls Justice Perspective
Authors: Zahra Hamidi
Abstract:
Promoting cycling, an affordable, environmentally friendly mode of transport that can replace private car use, has been central to sustainable transport policies. Cycling is faster than walking and, combined with public transport, has the potential to extend the opportunities that people can access. In other words, cycling, besides its direct positive health impacts, can improve people's mobility and ultimately their quality of life. The transport literature well supports the close relationship between mobility, quality of life, and well-being. At the same time, inequity in the distribution of access and mobility has been associated with key aspects of injustice and social exclusion. Patterns of social exclusion and inequality in access are also often related to population characteristics such as age, gender, income, health, and ethnic background. Therefore, while investing in transport infrastructure, it is important to consider the equity of the provided access for different population groups. This paper proposes a method to evaluate the equity of cycling in a city from a Rawlsian egalitarian perspective. Since this perspective is concerned with differences between individuals and social groups, the method combines accessibility measures with the Theil index of inequality, which allows capturing the inequalities 'within' and 'between' groups. The paper specifically focuses on two population characteristics, gender and ethnic background. Following Rawls' equity principles, this paper measures accessibility by bike to a selection of urban activities that can be linked to the concept of social primary goods. Moreover, as a growing number of cities around the world have launched bike-sharing systems (BSS), this paper incorporates both private and public bike networks in the estimation of accessibility levels. Additionally, the typology of bike lanes (separated from or shared with roads), the presence of a bike-sharing system in the network, and bike facilities (e.g., parking racks) have been included in the developed accessibility measures. Application of the proposed method to a real case study, the city of Malmö, Sweden, shows its effectiveness and efficiency. Although the accessibility levels were estimated only on the basis of gender and ethnic background, the author suggests that the analysis can be applied to other contexts and further developed using other properties, such as age, income, or health.
Keywords: accessibility, cycling, equity, gender
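A short Python sketch of the Theil index with the 'between' and 'within' group decomposition used above; the accessibility scores and groups are toy values:

```python
import numpy as np

def theil_index(x):
    """Theil T index of inequality for positive accessibility scores."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    return np.mean((x / mean) * np.log(x / mean))

def theil_decomposition(values, groups):
    """Split total Theil T into between-group and within-group components."""
    values = np.asarray(values, dtype=float)
    total_mean, n = values.mean(), len(values)
    between, within = 0.0, 0.0
    for g in set(groups):
        v = values[np.asarray(groups) == g]
        share, ratio = len(v) / n, v.mean() / total_mean
        between += share * ratio * np.log(ratio)
        within += share * ratio * theil_index(v)
    return between, within

# Toy accessibility scores for grid cells, grouped by population subgroup
scores = [12.0, 9.5, 14.2, 6.1, 7.3, 11.8]
groups = ["women", "women", "men", "women", "men", "men"]
b, w = theil_decomposition(scores, groups)
print(f"between-group: {b:.3f}, within-group: {w:.3f}, total: {b + w:.3f}")
```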
743 Different Stages for the Creation of Electric Arc Plasma through Slow Rate Current Injection to Single Exploding Wire, by Simulation and Experiment
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
This work simulates the voltage drop across and the resistance of exploding copper wires of diameters 25, 40, and 100 µm, surrounded by 1 bar nitrogen and exposed to a 150 A current, before plasma formation. The absorption of electrical energy in an exploding wire is greatly diminished once the plasma is formed. This study shows the importance of considering radiation and heat conductivity for the accuracy of the circuit simulations. The radiation of the dense plasma formed on the wire surface is modeled with the Net Emission Coefficient (NEC) and combined with heat conductivity through PLASIMO® software. A time-transient code for analyzing wire explosions driven by a slow current rise rate is developed. It solves a circuit equation coupled with one-dimensional (1D) equations for the copper electrical conductivity as a function of its physical state and NEC radiation. First, the initial voltage drop over the copper wire, the current, and the temperature distribution at the time of expansion are derived. The experiments have demonstrated that wires remain rather uniform lengthwise during the explosion and can be simulated utilizing 1D simulations. Data from the first stage are then used as the initial conditions of the second stage, in which a simplified 1D model for high-Mach-number flows is adopted to describe the expansion of the core. The current was carried by the vaporized wire material before it was dispersed in nitrogen by the shock wave. In the third stage, using a three-dimensional model of the test bench, the streamer threshold is estimated. The electrical breakdown voltage is calculated without solving a full-blown plasma model by integrating Townsend growth coefficients (TdGC) along electric field lines. The BOLSIG⁺ and LAPLACE databases are used to calculate the TdGC at different mixture ratios of nitrogen/copper vapor. The simulations show that both radiation and heat conductivity should be considered for an adequate description of the wire resistance, and that gaseous discharges start at lower voltages than expected due to ultraviolet radiation and the exploding shocks, which may have ionized the nitrogen.
Keywords: exploding wire, Townsend breakdown mechanism, streamer, metal vapor, shock waves
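An illustrative Python sketch of the third-stage criterion, integrating an effective Townsend coefficient along a discretized field line and testing a Meek-type threshold; the alpha/N fit is a crude linear ramp standing in for BOLSIG⁺ output, and all field values are invented:

```python
import numpy as np

def effective_townsend(e_field_td):
    """Illustrative fit for the effective ionization coefficient alpha/N in m^2
    (zero below an assumed ~120 Td threshold); NOT real BOLSIG+ data."""
    return np.maximum(0.0, 5e-24 * (e_field_td - 120.0))

def streamer_integral(e_profile_td, gas_density, dl):
    """Integrate alpha_eff along a field line; Meek-type criterion K >~ 18."""
    alpha = effective_townsend(e_profile_td) * gas_density  # 1/m
    return np.sum(alpha) * dl

# Field line discretized in 1 mm steps; E/N in Townsend; N for 1 bar N2 at 300 K
e_line = np.linspace(400, 150, 50)
K = streamer_integral(e_line, 2.4e25, 1e-3)
print(f"K = {K:.1f} ->", "streamer onset likely" if K >= 18 else "no streamer")
```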
742 Establishment of Diagnostic Reference Levels for Computed Tomography Examination at the University of Ghana Medical Centre
Authors: Shirazu Issahaku, Isaac Kwesi Acquah, Simon Mensah Amoh, George Nunoo
Abstract:
Introduction: Diagnostic Reference Levels are important indicators for monitoring and optimizing protocols and procedures in medical imaging across facilities and equipment. They help to evaluate whether, under routine clinical conditions, the median value obtained for a representative group of patients from a specified procedure falls within an agreed range or is unusually high or low for that procedure. This study aimed to propose Diagnostic Reference Levels for the most common routine Computed Tomography examinations of the head, chest, and abdominopelvic regions at the University of Ghana Medical Centre. Methods: The Diagnostic Reference Levels were determined based on the investigation of the most common routine examinations, including head Computed Tomography examination with and without contrast, abdominopelvic Computed Tomography examination with and without contrast, and chest Computed Tomography examination without contrast. The study was based on two dose indicators: the volumetric Computed Tomography Dose Index and the Dose-Length Product. Results: The estimated median values for head Computed Tomography with contrast for the volumetric Computed Tomography Dose Index and Dose-Length Product were 38.33 mGy and 829.35 mGy.cm, while without contrast they were 38.90 mGy and 860.90 mGy.cm, respectively. For an abdominopelvic Computed Tomography examination with contrast, the estimated volumetric Computed Tomography Dose Index and Dose-Length Product values were 40.19 mGy and 2096.60 mGy.cm. In the absence of contrast, the calculated values were 14.65 mGy and 800.40 mGy.cm, respectively. Additionally, for a chest Computed Tomography examination, the estimated values were 12.75 mGy and 423.95 mGy.cm for the volumetric Computed Tomography Dose Index and Dose-Length Product, respectively. These median values represent the proposed diagnostic reference values for the head, chest, and abdominopelvic regions. Conclusions: The proposed Diagnostic Reference Levels are comparable to those recommended by the International Atomic Energy Agency and International Commission on Radiological Protection Publication 135, and to other regional published data by the European Commission and Regional National Diagnostic Reference Levels in Africa. These reference levels will serve as benchmarks to guide clinicians in optimizing radiation dose levels while ensuring accurate diagnostic image quality at the facility.
Keywords: diagnostic reference levels, computed tomography dose index, computed tomography, radiation exposure, dose-length product, radiation protection
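A minimal Python sketch of how such reference values are derived from per-patient dose samples (the study reports medians; the 75th percentile is also common in DRL work); the CTDIvol readings below are hypothetical:

```python
import numpy as np

def drl_values(dose_samples):
    """Median (typical value) and 75th percentile commonly used for DRL setting."""
    d = np.asarray(dose_samples, dtype=float)
    return np.median(d), np.percentile(d, 75)

# Hypothetical per-patient CTDIvol readings (mGy) for head CT with contrast
ctdi_head = [35.1, 41.0, 37.8, 39.2, 36.5, 40.3, 38.9, 38.0]
median, p75 = drl_values(ctdi_head)
print(f"median CTDIvol = {median:.1f} mGy, 75th percentile = {p75:.1f} mGy")
```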
741 The Effect of Balance Training on Stable and Unstable Surfaces under Cognitive Dual-Task Condition on the Two Directions of Body Sway, Functional Balance and Fear of Fall in Non-Fallers Older Adults
Authors: Elham Azimzadeh, Fahimeh Khorshidi, Alireza Farsi
Abstract:
Balance impairment and fear of falling in older adults may reduce their quality of life. Reactive balance training can improve rapid postural responses and fall prevention in the elderly during daily tasks, and performing postural training with a simultaneous cognitive dual task resembles daily circumstances. Purpose: This study aimed to determine the effect of balance training on stable and unstable surfaces under dual cognitive task conditions on postural control and fear of falling in the elderly. Methods: Thirty non-faller older adults (65-75 years) were randomly assigned to two training groups, stable-surface (n=10) and unstable-surface (n=10), or a control group (n=10). The intervention groups underwent six weeks of balance training, either on a stable surface (balance board) or an unstable surface (wobble board), while performing a cognitive dual task. The control group received no balance intervention. COP displacements in the anteroposterior (AP) and mediolateral (ML) directions using a computerized balance board, functional balance using the TUG, and fear of falling using the FES-I were measured in all participants before and after the interventions. Summary of Results: A mixed ANOVA (3 groups × 2 times) with repeated measures and post hoc tests showed a significant improvement in both intervention groups in the AP index (F = 11.652, p = 0.0002) and in functional balance (F = 9.961, p = 0.0001), with the unstable-surface training group showing greater improvement. The fear of falling improved significantly only after training on an unstable surface (p = 0.035). No group showed a significant improvement in the ML index (p = 0.817). In the present study, there was an improvement in the AP index after balance training. Conclusion: Unstable-surface training may reduce the reaction time of posterior ankle muscle activity. Furthermore, focusing attention on cognitive tasks can lead to maintaining balance unconsciously. Most daily activities require distributing attention among several activities, so balance training concurrent with a dual cognitive task is challenging and more similar to the real world. According to the specificity-of-training principle, it may improve functional independence and fall prevention in the elderly.
Keywords: cognitive dual task, elderly, fear of falling, postural control, unstable surface
740 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection
Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy
Abstract:
Many facial expression and emotion recognition methods have been proposed in traditional approaches using LDA, PCA, and EBGM. In recent years, deep learning models have provided a unique platform by automatically extracting the features for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In doing so, we leverage long-range dependencies, addressing one of the main drawbacks of CNNs. We develop this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too quickly, which drives the model to over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduced the risk of over-fitting by using a dynamic rather than static shape of the input tensor in the SoftMax layer and by specifying a desired soft margin. In effect, the margin acts as a controller for how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting the same class labels and separating different class labels in the normalized log domain. We select a penalty for those predictions with high divergence from the ground-truth labels; that is, we shorten correct feature vectors and enlarge false prediction tensors, assigning more weight to those classes in conjunction with each other (namely, 'hard labels to learn'). By doing this, we constrain the model to generate more discriminant feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on solving the weak convergence of the Adam optimizer for a non-convex problem. Our optimizer works by an alternative gradient-updating procedure with an exponentially weighted moving average function for faster convergence, and exploits a weight decay method to drastically reduce the learning rate near optima in order to reach the dominant local minimum. We demonstrate the superiority of our proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013; 90.73% on RAF-DB, a 16% improvement over the first rank after 10 years; and 100% k-fold average accuracy on the CK+ dataset. The proposed network is thus shown to provide top performance compared to networks that require much larger training datasets.
Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks
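Since the abstract does not fully specify the Dynamic Soft-Margin SoftMax, the following numpy sketch shows one plausible fixed-margin reading (an additive-margin softmax cross-entropy); the dynamic margin scheduling and tensor-shape mechanism described above are not reproduced:

```python
import numpy as np

def margin_softmax_loss(logits, labels, margin=0.35, scale=30.0):
    """Additive-margin softmax cross-entropy: one plausible reading of a
    soft-margin SoftMax; the paper's dynamic variant is not fully specified."""
    z = logits.copy()
    rows = np.arange(len(labels))
    z[rows, labels] -= margin          # make the true class work harder
    z *= scale
    z -= z.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()

# Toy batch: 4 samples, 7 emotion classes (stand-in embedding-logit scores)
rng = np.random.default_rng(5)
logits = rng.normal(0, 1, (4, 7))
labels = np.array([0, 3, 6, 2])
print("loss with margin:", margin_softmax_loss(logits, labels))
print("loss without margin:", margin_softmax_loss(logits, labels, margin=0.0))
```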
739 Seasonal Variations, Environmental Parameters, and Standing Crop Assessment of Benthic Foraminifera in Western Bahrain, Arabian Gulf
Authors: Muhammad Arslan, Michael A. Kaminski, Bassam S. Tawabini, Fabrizio Frontalini
Abstract:
We conducted a survey of living benthic foraminifera at a relatively unpolluted site of Bahrain in the Arabian Gulf, with the aim of determining the seasonal variability in their populations, as well as the various environmental parameters that affect their distribution. The maximum standing crop was observed during winter, with the highest population of rotaliids, followed by a peak in miliolids. The highest population is attributed to an increasing number of juveniles observed along the depth transect. A strong correlation between sediment grain size and the foraminiferal population indicates that juveniles were most abundant on coarser sandy substrate and less abundant on fine substrate. In spring, the total living population decreased, and the lowest values were observed in the summer. The population started to increase again in the autumn, with the highest juvenile/adult ratios. Moreover, the results on relative abundance and species consistency show that Ammonia is consistent from the shallowest to the deepest station, whereas miliolids start appearing in the deeper stations. The average numbers of Peneroplis and Elphidium also increase along the depth transect. Environmental characterization reveals that although the site is subject to eutrophication caused by nitrates and sulfates, pollution caused by hydrocarbons and heavy metals is not significant. The assessment of 63 heavy metals showed that none of the metals had concentrations exceeding internationally accepted norms [the devised level of Effect Range-Low], with the exception of strontium. The lack of a significant environmental effect of heavy metals is confirmed by a Foraminiferal Deformities Index value of less than 2%. Likewise, no hydrocarbon contamination was detected in the water or sediment samples. Lastly, observations of cytoplasmic streaming and pseudopodial activity in Petri dishes suggest that the foraminiferal population is not stressed. We conclude that the site in Bahrain is not yet adversely affected by human development and can therefore provide baseline information for future comparison and assessment of foraminiferal assemblages in contaminated zones of the Arabian Gulf.
Keywords: Arabian Gulf, benthic foraminifera, standing crop, Western Bahrain
738 Inter-Complex Dependence of Production Technique and Preforms Construction on the Failure Pattern of Multilayer Homo-Polymer Composites
Authors: Ashraf Nawaz Khan, R. Alagirusamy, Apurba Das, Puneet Mahajan
Abstract:
Thermoplastic-based fibre composites are acquiring a market share from conventional as well as thermoset composites. However, replacing a thermoset with a thermoplastic composite has never been an easy task: the inherently high viscosity of thermoplastic resin leads to poor interface properties. In this work, a homo-polymer towpreg is produced through an electrostatic powder spray coating methodology. The produced flexible towpreg offers a low melt-flow distance during the consolidation of the laminate. The reduced melt-flow distance yields a homogeneous fibre/matrix distribution (and low void content) on consolidation. The composite laminates were fabricated with two manufacturing techniques, the conventional film-stack (FS) technique and the powder-coated (PC) technique. This helps in understanding the distinct responses of the produced laminates under applied load, since the laminates produced through the two techniques comprise the same constituent fibre and matrix (constant fibre volume fraction). The changed behaviour is observed mainly due to the different fibre/matrix configurations within the laminate. The interface adhesion influences the load transfer between the fibre and matrix; therefore, it influences the elastic, plastic, and failure patterns of the laminates. Moreover, the effect of preform geometries (plain weave and satin weave structures) is also studied for the corresponding composite laminates in terms of various mechanical properties. Fracture analysis is carried out to study the effect of resin at the interlacement points through micro-CT analysis. The PC laminate reveals considerably smaller matrix-rich and matrix-deficient zones in comparison to the FS laminate. Different loads (tensile, shear, fracture toughness, and drop-weight impact tests) are applied to the laminates, and the corresponding damage behaviour is analysed in the successive stages of failure. The PC composite has shown superior mechanical properties in comparison to the FS composite. The damage that occurs in the laminate is captured through SEM analysis to identify the prominent modes of failure, such as matrix cracking, fibre breakage, delamination, debonding, and other phenomena.
Keywords: composite, damage, fibre, manufacturing
737 Spatial Analysis as a Tool to Assess Risk Management in Peru
Authors: Josué Alfredo Tomas Machaca Fajardo, Jhon Elvis Chahua Janampa, Pedro Rau Lavado
Abstract:
A flood vulnerability index was developed for the Piura River watershed in northern Peru using Principal Component Analysis (PCA) to assess flood risk. The official methodology to assess risk from natural hazards in Peru was introduced in 1980 and proved effective for aiding complex decision-making. This method relies in part on decision-makers defining subjective correlations between variables to identify high-risk areas. While risk identification and the ensuing response activities benefit from a qualitative understanding of influences, this method does not take advantage of national and international data collection efforts, which can supplement our understanding of risk. Furthermore, it does not take advantage of broadly applied statistical methods such as PCA, which highlight central indicators of vulnerability. Nowadays, information processing is much faster and allows for more objective decision-making tools, such as PCA. The approach presented here develops a tool to improve the current flood risk assessment in the Peruvian basin. Hence, the spatial analysis of census and other datasets provides a better understanding of current land occupation and a basin-wide distribution of services and human populations, a necessary step toward ultimately reducing flood risk in Peru. PCA allows the simplification of a large number of variables into a few factors covering the social, economic, physical, and environmental dimensions of vulnerability. There is a correlation between where people settle and the availability of water, which is mainly found in rivers. For this reason, a comprehensive view of the population's location across the river basin is necessary to establish flood prevention policies. The grouping of 5x5 km gridded areas allows flood risk to be analyzed spatially rather than by political divisions of the territory. The index was applied to the Peruvian region of Piura, where several flood events have occurred in recent years; it was one of the most affected regions during the ENSO events in Peru. The analysis evidenced inequalities in access to basic services, such as water, electricity, internet, and sewage, between rural and urban areas.
Keywords: assess risk, flood risk, indicators of vulnerability, principal component analysis
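A compact Python sketch of a PCA-based composite index of the kind described: indicators are standardized, the first principal component is taken as the index, and scores are rescaled to [0, 1]; the indicators and cells below are invented, and the PC sign must be oriented by the analyst:

```python
import numpy as np

def vulnerability_index(indicators):
    """First-principal-component score as a composite flood-vulnerability index."""
    X = np.asarray(indicators, dtype=float)
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each indicator
    _, _, Vt = np.linalg.svd(Xz, full_matrices=False)
    pc1 = Xz @ Vt[0]  # note: PC sign is arbitrary; orient so higher = more vulnerable
    return (pc1 - pc1.min()) / (pc1.max() - pc1.min())  # rescale to [0, 1]

# Toy 5x5 km cells: % without piped water, % without sewage, population density
cells = [[60, 70, 1200], [10, 15, 300], [35, 40, 800], [80, 85, 2000]]
print(np.round(vulnerability_index(cells), 2))
```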
736 Using Corpora in Semantic Studies of English Adjectives
Authors: Oxana Lukoshus
Abstract:
The methods of corpus linguistics, a well-established field of research, are being increasingly applied in cognitive linguistics. Corpus data are especially useful for various quantitative studies of grammatical and other aspects of language. The main objective of this paper is to demonstrate how present-day corpora can be applied in semantic studies in general and in semantic studies of adjectives in particular. Polysemantic adjectives have been the subject of numerous studies, but most of them have been carried out on dictionaries. Undoubtedly, dictionaries are viewed as one of the basic data sources, but only at the initial steps of a research project: the author usually starts with the analysis of the lexicographic data, after which s/he comes up with a hypothesis. In the research conducted, three polysemantic synonyms, true, loyal, and faithful, have been analyzed in terms of differences and similarities in their semantic structure. A corpus-based approach to the study of the above-mentioned adjectives involves the following. After the analysis of the dictionary data, reference was made to the following corpora to study the distributional patterns of the words under study: the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA). These corpora are continually updated and contain thousands of examples of the words under research, which makes them a useful and convenient data source. For the purpose of this study, there were no special requirements regarding genre, mode, or time of the texts included in the corpora. Out of the range of possibilities offered by corpus-analysis software (e.g., word lists, statistics of word frequencies, etc.), the most useful tool for the semantic analysis was extracting a list of co-occurrences for the given search words. Searching by lemmas (e.g., true, true to) and grouping the results by lemmas proved to be the most efficient corpus features for the adjectives under study. Following the search process, the corpora provided a list of co-occurrences, which were then analyzed and classified. Not every co-occurrence was relevant for the analysis. For example, phrases like 'An enormous sense of responsibility to protect the minds and hearts of the faithful from incursions by the state was perceived to be the basic duty of the church leaders' or ''True,' said Phoebe, 'but I'd probably get to be a Union Official immediately' were left out, as in the first example the faithful is a substantivized adjective and in the second example true is used alone, with no other parts of speech. The subsequent analysis of the corpus data provided the grounds for the distribution groups of the adjectives under study, which were then investigated with the help of a semantic experiment. To sum up, the corpus-based approach has proved to be a powerful, reliable, and convenient tool for obtaining data for further semantic study.
Keywords: corpora, corpus-based approach, polysemantic adjectives, semantic studies
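A minimal Python sketch of the co-occurrence extraction step described above; the window size and toy sentence are stand-ins for real BNC/COCA concordance output:

```python
from collections import Counter

def collocates(tokens, node, window=3):
    """Count words co-occurring with the node word within +/- `window` tokens."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

# Toy corpus line standing in for concordance output
text = "he stayed true to his word and remained loyal to the cause".split()
print(collocates(text, "true").most_common(5))
```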
735 A Survey and Analysis on Inflammatory Pain Detection and Standard Protocol Selection Using Medical Infrared Thermography from Image Processing View Point
Authors: Mrinal Kanti Bhowmik, Shawli Bardhan Jr., Debotosh Bhattacharjee
Abstract:
Human skin, having a temperature above absolute zero, emits infrared radiation related to the body temperature. Differences in the infrared radiation from the skin surface reflect abnormalities present in the human body. Accordingly, detecting and forecasting the temperature variation of the skin surface is the main objective of using Medical Infrared Thermography (MIT) as a diagnostic tool for pain detection. MIT is a non-invasive imaging technique that records and monitors the temperature distribution of the body by receiving the infrared radiation emitted from the skin and representing it as a thermogram. The intensity of the thermogram measures the inflammation of the skin surface related to pain in the human body. Analysis of thermograms provides automated anomaly detection associated with suspicious pain regions by following several image processing steps. This paper presents a rigorous study-based survey of the processing and analysis of thermograms, based on previous work published in the area of infrared thermal imaging for detecting inflammatory pain diseases such as arthritis, spondylosis, shoulder impingement, etc. The study also explores the performance of thermogram processing, together with thermogram acquisition protocols, thermography camera specifications, and the types of pain detected by thermography, in a summarized tabular format. The tabular format provides a clear structural vision of the past works. As its major contribution, the paper introduces a new thermogram acquisition standard for inflammatory pain detection in the human body, intended to enhance the performance rate. The FLIR T650sc infrared camera, with high sensitivity and resolution, is adopted to increase the accuracy of thermogram acquisition and analysis. The survey of previous research highlights that intensity-distribution-based comparison of comparable and symmetric regions of interest, together with their statistical analysis, yields adequate results in identifying and detecting physiological disorders related to inflammatory diseases.
Keywords: acquisition protocol, inflammatory pain detection, medical infrared thermography (MIT), statistical analysis
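A small Python sketch of the intensity-distribution comparison of symmetric regions of interest that the survey highlights; the thermogram is synthetic, and the asymmetry threshold mentioned in the final comment varies across the literature:

```python
import numpy as np
from scipy import stats

def roi_asymmetry(thermogram, left_roi, right_roi):
    """Compare temperature distributions of two contralateral ROIs.
    ROIs are (row_slice, col_slice); returns mean difference and t-test p-value."""
    left = thermogram[left_roi].ravel()
    right = thermogram[right_roi].ravel()
    t, p = stats.ttest_ind(left, right, equal_var=False)
    return left.mean() - right.mean(), p

# Synthetic 240x320 thermogram (deg C) with a warmer patch on one side
img = np.random.default_rng(6).normal(31.0, 0.3, (240, 320))
img[100:140, 40:90] += 1.2  # inflammation-like hot spot
dT, p = roi_asymmetry(img, (slice(100, 140), slice(40, 90)),
                      (slice(100, 140), slice(230, 280)))
print(f"mean dT = {dT:.2f} C, p = {p:.3g}")  # asymmetries of ~0.5-1 C are often flagged
```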
734 Examining Influence of The Ultrasonic Power and Frequency on Microbubbles Dynamics Using Real-Time Visualization of Synchrotron X-Ray Imaging: Application to Membrane Fouling Control
Authors: Masoume Ehsani, Ning Zhu, Huu Doan, Ali Lohi, Amira Abdelrasoul
Abstract:
Membrane fouling poses severe challenges in membrane-based wastewater treatment applications. Ultrasound (US) has been considered an effective fouling remediation technique in filtration processes. Bubble cavitation in the liquid medium results from the alternating rarefaction and compression cycles during US irradiation at sufficiently high acoustic pressure. Cavitation microbubbles generated under US irradiation can cause eddy currents and turbulent flow within the medium by either oscillating or discharging energy into the system through microbubble explosion. The turbulent flow regime and the shear forces created close to the membrane surface disturb the cake layer and dislodge the foulants, which in turn improves the cleaning efficiency and filtration performance. Therefore, the number, size, velocity, and oscillation pattern of the microbubbles created in the liquid medium play a crucial role in foulant detachment and permeate flux recovery. The goal of the current study is to gain an in-depth understanding of the influence of US power intensity and frequency on the dynamics and characteristics of the microbubbles generated under US irradiation. In comparison with other imaging techniques, the synchrotron in-line Phase Contrast Imaging technique at the Canadian Light Source (CLS) allows in-situ observation and real-time visualization of microbubble dynamics. At the CLS BioMedical Imaging and Therapy (BMIT) polychromatic beamline, the effective parameters were optimized to enhance the contrast at the gas/liquid interface for the accuracy of the qualitative and quantitative analysis of bubble cavitation within the system. With the high photon flux and the high-speed camera, a typically high projection speed was achieved, and each projection of microbubbles in water was captured in 0.5 ms. ImageJ software was used for post-processing the raw images for the detailed quantitative analyses of microbubbles. The imaging was performed at US power intensity levels of 50 W, 60 W, and 100 W, and at US frequency levels of 20 kHz, 28 kHz, and 40 kHz. Over 2 seconds of imaging, the effect of US power and frequency on the average number, size, and fraction of the area occupied by bubbles was analyzed. Microbubble dynamics in terms of velocity in water was also investigated. For a US power increase from 50 W to 100 W, the average bubble number increased from 746 to 880 and the average bubble diameter from 36.7 µm to 48.4 µm. In terms of the influence of US frequency, fewer bubbles were created at 20 kHz (an average of 176 bubbles rather than 808 bubbles at 40 kHz), while the average bubble size was significantly larger than that at 40 kHz (almost seven times). The majority of bubbles were captured close to the membrane surface in the filtration unit. According to these observations, membrane cleaning efficiency is expected to improve at higher US power and lower US frequency due to the higher energy release into the system by increasing the number of bubbles or growing their size during oscillation (the optimum condition is expected to be 20 kHz and 100 W).Keywords: bubble dynamics, cavitational bubbles, membrane fouling, ultrasonic cleaning
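The bubble statistics reported above (count, mean diameter, occupied area fraction) were extracted with ImageJ; the sketch below reproduces the same measurements on a synthetic binary frame with scipy.ndimage, and the frame contents and pixel size are assumed for illustration.

```python
# Label connected "bubbles" in a synthetic binary frame and compute the
# count, equivalent-circle diameters, and area fraction.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
frame = np.zeros((400, 400), dtype=bool)
yy, xx = np.mgrid[0:400, 0:400]
for _ in range(50):                                  # stamp 50 random discs (overlaps merge)
    cy, cx = rng.integers(20, 380, 2)
    r = rng.integers(3, 10)
    frame |= (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2

labels, n_bubbles = ndimage.label(frame)
areas = ndimage.sum(frame, labels, index=range(1, n_bubbles + 1))
pixel_um = 5.0                                       # assumed pixel size (um)
diameters = 2 * np.sqrt(areas / np.pi) * pixel_um    # equivalent-circle diameters

print(f"bubbles: {n_bubbles}")
print(f"mean diameter: {diameters.mean():.1f} um")
print(f"area fraction: {frame.mean():.3f}")
```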
Procedia PDF Downloads 148733 GIS Technology for Environmentally Polluted Sites with Innovative Process to Improve the Quality and Assesses the Environmental Impact Assessment (EIA)
Authors: Hamad Almebayedh, Chuxia Lin, Yu Wang
Abstract:
The environmental impact assessment (EIA) must be improved, assessed, and quality-checked for human and environmental health and safety. Soil contamination is expanding, and site and soil remediation activities are proceeding around the world; put simply, quality soil characterization leads to a quality EIA that illuminates the level and extent of contamination and reveals the unknowns, informing the way forward for remediating, quantifying, containing, minimizing, and eliminating the environmental damage. Spatial interpolation methods play a significant role in decision making, planning remediation strategies, environmental management, and risk assessment, as they provide essential elements of site characterization that need to be fed into the EIA. The innovative 3D soil mapping and soil characterization technology presented in this research paper reveals unknown information and the extent of the contaminated soil at specific sites, and enhances soil characterization information in general, which is reflected in improving the information provided in EIAs developed for such sites. The foremost aims of this research paper are to present a novel 3D mapping technology for characterizing and estimating, with high quality and cost-effectively, the distribution of key soil characteristics in contaminated sites, and to develop an innovative process/procedure of 'assessment measures' for EIA quality and assessment. The contaminated-site field investigation was conducted with the innovative 3D mapping technology to characterize the composition of petroleum-hydrocarbon-contaminated soils in a decommissioned oilfield waste pit in Kuwait. The results show the depth and extent of the contamination, which were entered into the developed assessment process and procedure for the EIA quality review checklist to enhance the EIA and drive remediation and risk assessment strategies. We have concluded that, to minimize the possible adverse environmental impacts on the investigated site in Kuwait, a soil-capping approach may be sufficient and may represent a cost-effective management option, as the environmental risk from the contaminated soils is considered to be relatively low. This research paper adopts a multi-method approach involving a review of the existing literature related to the research area, case studies, and computer simulation.Keywords: quality EIA, spatial interpolation, soil characterization, contaminated site
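As a concrete example of the spatial interpolation step, here is a minimal sketch of inverse-distance weighting (IDW), one common interpolation method for site characterization; the borehole coordinates and total petroleum hydrocarbon (TPH) concentrations are hypothetical, and the paper's 3D mapping technology is not claimed to use this particular scheme.

```python
# Inverse-distance-weighted interpolation of hypothetical borehole
# measurements of petroleum-hydrocarbon concentration (mg/kg).
import numpy as np

samples = np.array([[10.0, 12.0], [35.0, 40.0], [60.0, 15.0], [80.0, 70.0]])  # x, y (m)
values = np.array([1200.0, 300.0, 950.0, 80.0])                               # TPH (mg/kg)

def idw(points, vals, query, power=2.0, eps=1e-12):
    """Estimate a value at `query` as a distance-weighted mean of the samples."""
    d = np.linalg.norm(points - query, axis=1)
    if d.min() < eps:                      # query coincides with a sample point
        return float(vals[d.argmin()])
    w = 1.0 / d ** power
    return float(np.sum(w * vals) / np.sum(w))

estimate = idw(samples, values, np.array([50.0, 30.0]))
print(f"estimated concentration at (50, 30): {estimate:.0f} mg/kg")
```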
Procedia PDF Downloads 86732 Efficacy of Single-Dose Azithromycin Therapy for the Treatment of Chlamydia trachomatis in Patients Evaluated for Child Sexual Abuse in an Urban Health Center 2006-16
Authors: Trenton Hubbard, Kenneth Soyemi, Emily Siffermann
Abstract:
Introduction: According to the American Academy of Pediatrics (AAP), there are different weight-based recommendations for the treatment of Chlamydia trachomatis (CT) in patients who are being evaluated for sexual assault. Current AAP Red Book guidelines recommend that uncomplicated C. trachomatis anogenital infection in prepubertal patients weighing ≤45 kg be treated with oral erythromycin 50 mg/kg/day QID for 14 days, with no alternative therapies, and that patients weighing >45 kg receive azithromycin 1 g PO once. Our study objective was to determine the efficacy of single-dose azithromycin therapy for the treatment of Chlamydia trachomatis in patients weighing less than 50 kg who were evaluated for child sexual abuse in an urban setting. Methods: We conducted a retrospective chart review of historical medical records (paper and electronic) of patients weighing less than 50 kg who were evaluated for child sexual abuse, subsequently treated for C. trachomatis infection with azithromycin (20 mg/kg PO once, up to a maximum of 1 g), and received a Test of Cure (TOC) from 2006-2016. Qualitative variables were expressed as percentages. Quantitative variables were expressed as mean values (+/- standard deviation [SD]) if they followed a normal distribution or as median values (interquartile range [IQR]) if they did not. The Wilcoxon two-sample test was used to compare means of azithromycin dose, mg/kg, and TOC timing between treatment responders and non-responders. Results: We reviewed records of 34 patients; the average age (SD) was 5.4 (2.0) years, 33 (97%) were treated for CT and 1 (3%) for both GC and CT, and 25 (74%) were female. Urine PCR was the most commonly used test at evaluation and as TOC, with 13 (38%) patients completing both tests. The average (SD) dose of azithromycin at treatment was 470 (136) mg, with an average (SD) weight-based dose of 20 (1.9) mg/kg for all patients. Median (IQR) timing for TOC testing was 19 (14-26) days. Of the 33 patients with complete data, 25 (74%) had a negative TOC. When compared with treatment non-responders (TOC failures), treatment responders received higher doses (average (SD) dose 495 (139) mg vs 401 (110) mg, P=0.06), similar average (SD) weight-based dosing (20.8 (2.0) vs 19.7 (1.5) mg/kg, P=0.15), and earlier average (SD) TOC test timing (18.8 (5.6) vs 32 (28.6) days, P=0.02). Conclusion: Azithromycin dosing appears to be efficacious in the treatment of CT post sexual assault, as the majority of patients responded. Although treatment responders and non-responders received similar weight-based doses, there is a need for additional studies to understand variances and predictors of response.Keywords: child sexual abuse, chlamydia trachomatis infection, single-dose azithromycin, weight less than or equal to 45 kilograms
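For illustration only (not clinical guidance), the weight-based rule described in the Methods can be sketched as below; the function name and sample weights are hypothetical.

```python
# Single-dose azithromycin per the study's treatment rule:
# 20 mg/kg PO once, capped at the 1 g adult maximum.
def azithromycin_dose_mg(weight_kg, mg_per_kg=20.0, max_mg=1000.0):
    """Return the single oral dose in mg, capped at max_mg."""
    return min(weight_kg * mg_per_kg, max_mg)

for w in (15, 23.5, 49, 60):
    print(f"{w} kg -> {azithromycin_dose_mg(w):.0f} mg")
```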
Procedia PDF Downloads 291731 Spatial Suitability Assessment of Onshore Wind Systems Using the Analytic Hierarchy Process
Authors: Ayat-Allah Bouramdane
Abstract:
Since 2010, there have been sustained decreases in the unit costs of onshore wind energy and large increases in its deployment, varying widely across regions. Onshore wind production is affected by air density (cold air is denser and therefore more effective at producing wind power) and by wind speed (wind turbines cannot operate in very low or extremely stormy winds). Wind speed is essentially affected by surface friction, or the roughness and other topographic features of the land, which slow down winds significantly over the continent. Hence, the identification of the most appropriate locations for onshore wind systems is crucial to maximize their energy output and therefore minimize their Levelized Cost of Electricity (LCOE). This study focuses on a preliminary assessment of onshore wind energy potential in several areas in Morocco, with a particular focus on the city of Dakhla, by analyzing the diurnal and seasonal variability of wind speed for different hub heights, the frequency distribution of wind speed, the wind rose, and wind performance indicators such as wind power density, capacity factor, and LCOE. In addition to the climate criterion, other criteria (i.e., topography, location, environment) were selected from Geographic Referenced Information (GRI), reflecting different considerations. The impact of each criterion on the suitability map of onshore wind farms was identified using the Analytic Hierarchy Process (AHP). We find that the majority of suitable zones are located along the Atlantic Ocean and the Mediterranean Sea. We discuss the sensitivity of the onshore wind site suitability to different aspects, such as the methodology (by comparing the Multi-Criteria Decision-Making (MCDM)-AHP results to the Mean-Variance Portfolio optimization framework) and the potential impact of climate change on this suitability map, and we provide final recommendations for the Moroccan energy strategy by analyzing whether Morocco's actual onshore wind installations are located within areas deemed suitable. This analysis may serve as a decision-making framework for cost-effective investment in onshore wind power in Morocco and for shaping the future sustainable development of the city of Dakhla.Keywords: analytic hierarchy process (AHP), Dakhla, geographic referenced information, Morocco, multi-criteria decision-making, onshore wind, site suitability
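As a sketch of the AHP weighting step, the example below derives criterion weights from a hypothetical 3x3 pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio; the matrix entries are illustrative, not the judgments used in the paper.

```python
# AHP in miniature: priority weights from a pairwise comparison matrix,
# plus the consistency ratio CR = CI / RI (RI = 0.58 for n = 3).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],     # climate vs topography vs location (illustrative)
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
ri = 0.58                                   # Saaty random index for n = 3
print("weights:", np.round(weights, 3), " CR:", round(ci / ri, 3))
```

A CR below 0.10 is the conventional threshold for accepting the pairwise judgments as consistent.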
Procedia PDF Downloads 167730 Study of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans Dispersion in the Environment of a Municipal Solid Waste Incinerator
Authors: Gómez R. Marta, Martín M. Jesús María
Abstract:
The general aim of this paper is to identify the areas of highest concentration of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) around the incinerator through the use of dispersion models. Atmospheric dispersion models are useful tools for estimating and preventing the impact of emissions from a particular source on air quality. These models allow different factors that influence air pollution to be considered: source characteristics, the topography of the receiving environment, and the weather conditions, in order to predict pollutant concentrations. After their emission into the atmosphere, PCDD/Fs are deposited on water or land, near or far from the emission source, depending on the size of the associated particles and the climatology. In this way, they are transferred and mobilized through environmental compartments. The modelling of PCDD/Fs was carried out with the following tools: the Atmospheric Dispersion Modelling System (ADMS) and Surfer. ADMS is a Gaussian plume dispersion model used to model the air quality impact of industrial facilities, and Surfer is a surface-mapping program used to represent the dispersion of pollutants on a map. For the modelling of emissions, the ADMS software mainly requires the following input parameters: characterization of the emission sources (source type, height, diameter, release temperature, flow rate, etc.) and meteorological and topographical data (coordinate system). The study area was set at 5 km around the incinerator, and the population center nearest to the PCDD/F emission source is approximately 2.5 km away. Data on both the incinerator's PCDD/F emissions and the meteorology in the study area were collected during one year (2013). The study was carried out over the averaging periods established by legislation; that is to say, the output parameters take the current legislation into account. Once all the data required by ADMS, described previously, were entered, the modelling proceeded in order to represent the spatial distribution of PCDD/F concentrations and the areas they affect. In general, the dispersion plume follows the direction of the predominant winds (southwest and northeast). Total levels of PCDD/Fs usually found in air samples are <2 pg/m3 for remote rural areas, 2-15 pg/m3 in urban areas, and 15-200 pg/m3 for areas near important sources, such as an incinerator. The dispersion maps show that the maximum concentrations are on the order of 10⁻⁸ ng/m3, well below the values considered for areas close to an incinerator, as in this case.Keywords: atmospheric dispersion, dioxin, furan, incinerator
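For orientation, the sketch below evaluates the classical Gaussian plume equation that underlies ADMS-style models, using Briggs rural (neutral stability) dispersion coefficients; the emission rate, wind speed, and stack height are assumed values, and ADMS itself applies considerably more sophisticated parameterizations.

```python
# Ground-level concentration from a continuous point source via the
# Gaussian plume equation with reflection at the ground.
import numpy as np

def gaussian_plume(x, y, z, Q=1e-6, u=3.0, H=40.0):
    """Concentration (g/m^3) at downwind x (m), crosswind y (m), height z (m).
    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m)."""
    sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)   # Briggs rural, neutral class
    sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    return (Q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * (np.exp(-(z - H)**2 / (2 * sigma_z**2))
               + np.exp(-(z + H)**2 / (2 * sigma_z**2))))

for x in (500.0, 1000.0, 2500.0, 5000.0):
    print(f"x = {x:6.0f} m : C = {gaussian_plume(x, 0.0, 0.0):.3e} g/m^3")
```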
Procedia PDF Downloads 215729 Laboratory Diagnostic Testing of Peste des Petits Ruminants in Georgia
Authors: Nino G. Vepkhvadze, Tea Enukidze
Abstract:
Every year, countries around the world face the risk of the spread of infectious diseases that bring significant ecological and socio-economic damage. Hence the emphasized importance of food product safety, an issue of interest for many countries. Addressing it requires conducting preventive measures against the diseases, as well as accurate diagnostic results, leadership, and management. Peste des petits ruminants (PPR) is caused by a morbillivirus closely related to the rinderpest virus. PPR is a transboundary disease and, as it emerges and evolves, is considered one of the most damaging animal diseases. The disease poses a serious threat to sheep breeding at a time when sheep and goat farming is growing significantly within the country. In January 2016, PPR was detected in Georgia. Up to the present, the origin of the virus, the age relationship of affected ruminants, and the distribution of PPRV in Georgia remain unclear. Due to the nature of PPR and breeding practices in the country, re-emergence of the disease in Georgia is highly likely. The purpose of the studies is to provide laboratories with efficient tools allowing the early detection of PPR emergence and re-emergence. This study is being accomplished under the Biological Threat Reduction Program project with the support of the Defense Threat Reduction Agency (DTRA), and aims to investigate samples and identify areas at high risk of the disease. Georgia has a high density of small ruminant herds bred as free-ranging, close to international borders. The Kakheti region, in Eastern Georgia, will be considered an area of high priority for PPR surveillance. For this reason, in 2019, n=484 sheep and goat serum and blood samples from the same animals were investigated in the Kakheti region, utilizing serology and molecular biology methods. All samples were negative by RT-PCR, and n=6 sheep samples were seropositive by ELISA-Ab. Future efforts will be concentrated in areas where the risk of PPR might be high, such as the regions of Georgia bordering other countries. For diagnostics, it is important to integrate PPRV knowledge with epidemiological data. Based on these diagnostics, the relevant agencies will be able to control the disease through surveillance.Keywords: animal disease, especially dangerous pathogen, laboratory diagnostics, virus
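The reported figures (6 seropositive of n=484) imply an apparent seroprevalence that can be summarized with a confidence interval; the Wilson interval below is an editorial illustration, not a statistic reported by the study.

```python
# Apparent seroprevalence with a Wilson 95% score interval,
# computed from first principles.
from math import sqrt

def wilson_ci(pos, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = pos / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(6, 484)
print(f"seroprevalence = {6/484:.2%}  (95% CI {lo:.2%} - {hi:.2%})")
```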
Procedia PDF Downloads 114728 Investigation of Several New Ionic Liquids’ Behaviour during ²¹⁰PB/²¹⁰BI Cherenkov Counting in Waters
Authors: Nataša Todorović, Jovana Nikolov, Ivana Stojković, Milan Vraneš, Jovana Panić, Slobodan Gadžurić
Abstract:
The detection of ²¹⁰Pb levels in aquatic environments evokes interest in various scientific studies. Its precise determination is important not only for the radiological assessment of drinking waters; the distributions of ²¹⁰Pb and ²¹⁰Po in the marine environment are also significant for assessing the removal rates of particles from the ocean and particle fluxes during transport along the coast, as well as particulate organic carbon export in the upper ocean. Measurement techniques for ²¹⁰Pb determination (gamma spectrometry, alpha spectrometry, or liquid scintillation counting (LSC)) are either time-consuming or demand expensive equipment or complicated chemical pre-treatments. One other possibility, however, is to measure ²¹⁰Pb on an LS counter, if it is in equilibrium with its progeny ²¹⁰Bi, through the Cherenkov counting method. This method is unaffected by chemical quenching and allows easy sample preparation, but has the drawback of lower counting efficiencies than standard LSC methods, typically from 10% up to 20%. The aim of the research presented in this paper is to investigate a possible increment of the detection efficiency of Cherenkov counting during ²¹⁰Pb/²¹⁰Bi detection on a Quantulus 1220 LS counter. Considering the naturally low levels of ²¹⁰Pb in aqueous samples, the addition of ionic liquids to the counting vials with the analysed samples has the benefit of decreasing the detection limit during ²¹⁰Pb quantification. Our results demonstrated that the ionic liquid 1-butyl-3-methylimidazolium salicylate is more efficient at increasing the Cherenkov counting efficiency than the previously explored 2-hydroxypropan-1-amminium salicylate. Consequently, the impact of a few other ionic liquids synthesized with the same cation group (1-butyl-3-methylimidazolium benzoate, 1-butyl-3-methylimidazolium 3-hydroxybenzoate, and 1-butyl-3-methylimidazolium 4-hydroxybenzoate) was explored in order to test their potential influence on the Cherenkov counting efficiency. It was confirmed that, among the explored ones, only the ionic liquids in the form of salicylates exhibit a wavelength-shifting effect. Namely, the addition of small amounts (around 0.8 g) of 1-butyl-3-methylimidazolium salicylate increases the detection efficiency from 16% to >70%, consequently reducing the detection threshold by more than four times. Moreover, the addition of ionic liquids could find application in the quantification of other radionuclides besides ²¹⁰Pb/²¹⁰Bi via the Cherenkov counting method.Keywords: liquid scintillation counting, ionic liquids, Cherenkov counting, ²¹⁰Pb/²¹⁰Bi in water
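The reported gain can be connected to the detection threshold through the Currie formulation of the minimum detectable activity (MDA), in which the MDA scales inversely with counting efficiency; the background counts and counting time in the sketch below are assumed values for illustration.

```python
# Why raising Cherenkov counting efficiency from ~16% to >70% cuts the
# detection limit by more than four times, via the Currie MDA formula.
from math import sqrt

def mda_bq(efficiency, t_s=3600.0, background_counts=200.0):
    """Currie MDA (Bq): L_D = 2.71 + 4.65*sqrt(B), divided by efficiency*time."""
    l_d = 2.71 + 4.65 * sqrt(background_counts)
    return l_d / (efficiency * t_s)

for eff in (0.16, 0.70):
    print(f"efficiency {eff:.0%}: MDA = {mda_bq(eff) * 1000:.3f} mBq")
print(f"improvement factor: {mda_bq(0.16) / mda_bq(0.70):.1f}x")
```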
Procedia PDF Downloads 100727 Mesenteric Ischemia Presenting as Acalculous Cholecystitis: A Case Review of a Rare Complication and Aberrant Anatomy
Authors: Joshua Russell, Omar Zubair, Reuben Ndegwa
Abstract:
Introduction: Mesenteric ischemia is an uncommon condition that can be challenging to diagnose in the acute setting, with the potential for significant morbidity and mortality. Very rarely has acute acalculous cholecystitis been described in the setting of mesenteric ischemia. Case: This was the case in a 78-year-old male who initially presented with clinical and radiological evidence of small bowel obstruction, thought likely secondary to malignancy. The patient had a 6-week history of anorexia, worsening lower abdominal pain, and ~30 kg of unintentional weight loss over a 12-month period, and a CT scan demonstrated a transition point in the distal ileum. The patient became increasingly hemodynamically unstable and peritonitic, and an emergency laparotomy was performed. Intra-operatively, however, no obvious transition point was identified; instead, the gallbladder was markedly gangrenous and oedematous, consistent with acalculous cholecystitis. An open total cholecystectomy was subsequently performed. The patient was admitted to the Intensive Care Unit post-operatively and continued to deteriorate over the following 48 hours, with two re-look laparotomies demonstrating progressively worsening bowel ischemia, initially in the distribution of the superior mesenteric artery and then the coeliac trunk. On review, the patient was found to have an aberrant right hepatic artery arising from the superior mesenteric artery. The extent of ischemia was considered non-survivable, and the patient was palliated. Discussion: Multiple theories currently exist for the underlying pathophysiology of acalculous cholecystitis, including biliary stasis, sepsis, and ischemia. This case lends further support to ischemia as the underlying etiology of acalculous cholecystitis, particularly when considered in the context of the patient's aberrant right hepatic artery arising from the superior mesenteric artery, a variant that occurs in 11-14% of patients. Conclusion: This case report adds further insight to the debate surrounding the pathophysiology of acalculous cholecystitis. It also presents acalculous cholecystitis as a complication of mesenteric ischemia that should always be considered, especially in the elderly patient and in the context of relatively common anatomical variations. Furthermore, the case brings to attention the importance of maintaining dynamic working diagnoses in the setting of evolving pathophysiology and clinical presentations.Keywords: acalculous cholecystitis, anatomical variation, general surgery, mesenteric ischemia
Procedia PDF Downloads 190726 A Systematic Review: Prevalence and Risk Factors of Low Back Pain among Waste Collection Workers
Authors: Benedicta Asante, Brenna Bath, Olugbenga Adebayo, Catherine Trask
Abstract:
Background: Waste Collection Workers' (WCWs) activities contribute greatly to the recycling sector and are an important component of the waste management industry. As the recycling sector evolves, reports of injuries and fatal accidents in the industry demand notice, particularly for common and debilitating musculoskeletal disorders such as low back pain (LBP). WCWs are likely exposed to diverse work-related hazards that could contribute to LBP. However, to our knowledge, there has never been a systematic review or other synthesis of LBP findings within this workforce. The aim of this systematic review was to determine the prevalence and risk factors of LBP among WCWs. Method: A comprehensive search was conducted in Ovid Medline, EMBASE, and Global Health e-publications with the search term categories 'low back pain' and 'waste collection workers'. Articles were screened at the title, abstract, and full-text stages by two reviewers. Data were extracted on study design, sampling strategy, socio-demographics, geographical region, exposure definition, definition of LBP, risk factors, response rate, statistical techniques, and LBP prevalence. Risk of bias (ROB) was assessed based on the ROB scale of Hoy et al. Results: The search of three databases generated 79 studies. Thirty-two studies met the study inclusion criteria at both the title and abstract stages; thirteen articles met the study criteria at the full-text stage. Seven articles (54%) reported a 12-month prevalence of LBP between 42% and 82% among WCWs. The major risk factors for LBP among WCWs included awkward posture, lifting, pulling, pushing, repetitive motions, work duration, and physical loads. Summary data and syntheses of findings were presented in trend lines and tables to establish the several prevalence periods based on age and region distribution. Public health implications: LBP is a major occupational hazard among WCWs. In light of these risks and future growth in this industry, further research should focus on more detailed ergonomic exposure assessment and LBP prevention efforts.Keywords: low back pain, scavenger, waste collection workers, waste pickers
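Had a quantitative synthesis been pursued, study prevalences could be pooled on the logit scale with inverse-variance weights, as sketched below; the four (prevalence, sample size) pairs are illustrative stand-ins within the reported 42-82% range, not the review's extracted data.

```python
# Fixed-effect inverse-variance pooling of study prevalences on the
# logit scale, then back-transformation to a proportion.
import math

studies = [(0.42, 150), (0.55, 90), (0.68, 200), (0.82, 60)]  # (prevalence, n), illustrative

num = den = 0.0
for p, n in studies:
    logit = math.log(p / (1 - p))
    var = 1 / (n * p) + 1 / (n * (1 - p))   # variance of the logit prevalence
    w = 1 / var                             # inverse-variance weight
    num += w * logit
    den += w

pooled = 1 / (1 + math.exp(-num / den))
print(f"pooled 12-month LBP prevalence: {pooled:.1%}")
```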
Procedia PDF Downloads 326725 Valorization of Plastic and Cork Wastes in Design of Composite Materials
Authors: Svetlana Petlitckaia, Toussaint Barboni, Paul-Antoine Santoni
Abstract:
Plastic is a revolutionary material. However, the pollution caused by plastics damages the environment, human health, and the economies of different countries. It is important to find new ways to recycle and reuse plastic material. The use of waste materials as filler and as a matrix for composite materials is receiving increasing attention as an approach to increasing the economic value of waste streams. In this study, a new composite material was developed based on high-density polyethylene (HDPE) and polypropylene (PP) wastes from bottle caps and cork powder from unused (virgin) cork, which has a high capacity for thermal insulation. The composites were prepared with virgin and modified cork and obtained through twin-screw extrusion and injection molding, with proportions of 0%, 5%, 10%, 15%, and 20% of cork powder in the polymer matrix, with and without coupling agent and flame retardant. These composites were investigated in terms of their mechanical, structural, and thermal properties. The effects of cork fraction, particle size, and the use of flame retardant on the properties of the composites were investigated. It was observed that the morphology of the HDPE/cork and PP/cork composites revealed good distribution and dispersion of cork particles without agglomeration. The results showed that the addition of cork powder to the polymer matrix reduced the density of the composites. However, the incorporation of the natural additive does not have a significant effect on water absorption. Regarding the mechanical properties, the value of tensile strength decreases with the addition of cork powder, ranging from 30 MPa to 19 MPa for PP composites and from 19 MPa to 17 MPa for HDPE composites. The thermal conductivity of the HDPE/cork and PP/cork composites is about 0.230 W/mK and 0.170 W/mK, respectively. Evaluation of the flammability of the composites was performed using a cone calorimeter. The results of the thermal analysis and fire tests show that it is important to add flame retardants to improve fire resistance. The samples elaborated with the coupling agent and flame retardant have better mechanical properties and fire resistance. The feasibility of composites based on cork and PP/HDPE wastes opens new ways of valorizing plastic waste and virgin cork. The formulation of the composite materials must be optimized.Keywords: composite materials, cork and polymer wastes, flammability, modified cork
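The observed density reduction follows from simple mixture arithmetic: a minimal rule-of-mixtures sketch is given below, with typical literature densities for cork and PP assumed in place of the measured values.

```python
# Composite density from cork weight fraction via the inverse rule of
# mixtures: 1/rho_c = w_f/rho_f + (1 - w_f)/rho_m. Densities assumed.
def composite_density(wf_cork, rho_cork=0.20, rho_matrix=0.905):
    """Density (g/cm^3) for a cork weight fraction in a PP matrix (ideal mixing)."""
    return 1.0 / (wf_cork / rho_cork + (1 - wf_cork) / rho_matrix)

for wf in (0.0, 0.05, 0.10, 0.15, 0.20):
    print(f"{wf:.0%} cork -> {composite_density(wf):.3f} g/cm^3")
```

Real composites typically sit above these ideal-mixing estimates because the cork cells compress during extrusion and molding.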
Procedia PDF Downloads 85