Search results for: interdisciplinary production processes
392 Implementation of a Web-Based Clinical Outcomes Monitoring and Reporting Platform across the Fortis Network
Authors: Narottam Puri, Bishnu Panigrahi, Narayan Pendse
Abstract:
Background: Clinical outcomes are the globally agreed-upon, evidence-based, measurable changes in health or quality of life resulting from patient care. Reporting of outcomes and their continuous monitoring provide an opportunity for both assessing and improving the quality of patient care. In 2012, the International Consortium for Health Outcomes Measurement (ICHOM) was founded; it has defined global Standard Sets for measuring the outcomes of various treatments. Method: Monitoring of clinical outcomes was identified as a pillar of Fortis’ core value of Patient Centricity. The project began as a Clinical Outcomes Reporting Portal developed in-house by the Fortis Medical IT team, using the Standard Sets of outcome measurement developed by ICHOM. A pilot was run at Fortis Escorts Heart Institute from August to December 2013. Starting January 2014, the portal was implemented across 11 hospitals of the group. The scope was hospital-wide, covering the major clinical specialties of Cardiac Sciences and Orthopedics & Joint Replacement. The internally developed portal had limitations in report generation, and its capture of patient-related outcomes was restricted. A year later, the company provisioned an ICHOM-certified software product that could provide a platform for data capture and reporting and ensure compliance with all ICHOM requirements. A year after the launch of the software (January 2016), Fortis Healthcare became the first healthcare provider in Asia to publish clinical outcomes data in the public domain for the Coronary Artery Disease Standard Set (comprising Coronary Artery Bypass Graft and Percutaneous Coronary Interventions). Results: This project has helped firmly establish a culture of monitoring and reporting clinical outcomes across Fortis hospitals.
Given the diverse nature of the healthcare delivery model at the Fortis network, which comprises hospitals of varying size and specialty mix and covers practically the entire span of the country, standardization of the data collection and reporting methodology is a huge achievement in itself. 95% case reporting was achieved, with more than 90% data completion, at the end of Phase 1 (March 2016). Post implementation, the group now has one year of data from its own hospitals. This has helped identify gaps, plan ways to bridge them, and establish internal benchmarks for continual improvement. Beyond this, the value created for the group includes: 1. The entire Fortis community has been sensitized to the importance of clinical outcomes monitoring for patient-centric care; initial skepticism and cynicism were countered by effective stakeholder engagement and automation of processes. 2. Measuring quality is the first step in improving quality; data analysis has helped compare clinical results with best-in-class hospitals and identify improvement opportunities. 3. The clinical fraternity is extremely pleased to be part of this initiative and has taken ownership of the project. Conclusion: Fortis Healthcare is a pioneer in the monitoring of clinical outcomes. Implementation of ICHOM standards has helped the Fortis Clinical Excellence Program improve patient engagement and strengthen its commitment to its core value of Patient Centricity. Validation and certification of the clinical outcomes data by an ICHOM-certified supplier adds confidence to its claim of being a leader in this space.
Keywords: clinical outcomes, healthcare delivery, patient centricity, ICHOM
Procedia PDF Downloads 237
391 Petrogenetic Model of Formation of Orthoclase Gabbro of the Dzirula Crystalline Massif, the Caucasus
Authors: David Shengelia, Tamara Tsutsunava, Manana Togonidze, Giorgi Chichinadze, Giorgi Beridze
Abstract:
The orthoclase gabbro intrusive is exposed in the eastern part of the Dzirula crystalline massif of the Central Transcaucasian microcontinent, intruded into the Baikal quartz-diorite gneisses as a stock-like body. The intrusive is characterized by heterogeneity of rock composition: variable mineral content and irregular distribution of rock-forming minerals. The rocks are represented by pyroxenites, gabbro-pyroxenites, and gabbros of different composition: K-feldspar-, pyroxene-hornblende-, and biotite-bearing varieties. Scientific views on the genesis and age of the orthoclase gabbro intrusive differ considerably. Based on long-term petrogeochemical and geochronological investigations of this intrusive of such extraordinary composition, the authors came to the following conclusions. According to geological and geophysical data, horizontal tectonic layering of the Earth’s crust of the Central Transcaucasian microcontinent took place during the Saurian orogeny, and precisely this fact explains the formation of the orthoclase gabbro intrusive. During the tectonic doubling of the crust of this microcontinent, thick tectonic nappes of mafic and sialic layers overlapped the sialic basement (the ‘inversion’ layer). The initial magma of the intrusive was of high-temperature basite-ultrabasite composition, the crystallization products of which are the pyroxenites and gabbro-pyroxenites. Petrochemical data attest to the magma’s formation in the upper mantle and partially in the ‘crustal asthenolayer’. Then, the newly formed, overheated dry magma, carrying phenocrysts of clinopyroxene and basic plagioclase, intruded into the ‘inversion’ layer. From the new medium it was enriched with volatile components, causing selective melting and, as a result, the formation of leucocratic quartz-feldspar material. At the same time, intensive transformation of pyroxene to hornblende was going on in the basic magma.
The basic magma partially mixed with the newly formed acid magma. These different magmas intruded first into the allochthonous basite layer, without significantly transforming it, and then into the upper sialic layer, where they crystallized at a depth of 7-10 km. By petrochemical data, the newly formed leucocratic granite magma belongs to the S-type granites, while the above-mentioned mixed magma belongs to the H (hybrid) type. During the final stage of the magmatic processes, the gabbroic rocks were impregnated with high-temperature feldspar-bearing material, forming anorthoclase or orthoclase. Thus, the so-called ‘orthoclase gabbro’ includes rocks of various genetic groups: 1. the protolith of the gabbroic intrusive; 2. hybrid rock (K-feldspar gabbro); and 3. leucocratic quartz-feldspar-bearing rock. Petrochemical and geochemical data obtained from the hybrid gabbro and from the intrusive protolith differ from each other. To identify the petrogenetic model of formation of the orthoclase gabbro intrusive, LA-ICP-MS U-Pb zircon dating was conducted on all three genetic types of gabbro. The zircon ages of the protolith (mean 221.4±1.9 Ma) and of the hybrid K-feldspar gabbro (mean 221.9±2.2 Ma) record the crystallization time of the intrusive, whereas the zircon age of the quartz-feldspar-bearing rocks (mean 323±2.9 Ma), as well as the inherited ages (323±9, 329±8.3, 332±10, and 335±11 Ma) in the hybrid K-feldspar gabbro, corresponds to the formation age of the Late Variscan granitoids widespread in the Dzirula crystalline massif.
Keywords: the Caucasus, isotope dating, orthoclase-bearing gabbro, petrogenetic model
Procedia PDF Downloads 343
390 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling
Authors: Danlei Yang, Luofeng Huang
Abstract:
The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. The core of a digital twin, however, should be its model, one that can mirror, shadow, and thread with the real-world entity, and this core is still underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate the real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above the solar panel and two inclined at a 45° angle in front of and behind it. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability of the floating structure is achieved through ropes attached to anchors at the bottom of the tank, simulating the mooring systems used in real-world floating solar applications. The floating solar energy system’s sensor setup includes various devices to monitor environmental and operational parameters.
An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record the heave and pitch amplitudes of the floating system’s motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to developing the digital model, which processes historical and real-time data, identifies patterns, and predicts the system’s performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system’s operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It offers useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve the overall solar energy yield whilst minimising operational costs and risks.
Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence
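The train/validation workflow described above can be sketched in a few lines. This is a hedged illustration, not the authors' model: the synthetic "sensor" data, the two chosen inputs (irradiance and panel temperature), the network size, and all numeric values are assumptions made for the sake of a runnable example.

```python
import numpy as np

# Illustrative sketch of the ANN digital-model pipeline: synthetic sensor
# readings are split into training and validation sets, and a tiny
# one-hidden-layer network learns to predict panel power output.
rng = np.random.default_rng(0)

# Synthetic "sensor" data: power rises with irradiance, drops with heat.
irradiance = rng.uniform(200.0, 1000.0, 500)          # W/m^2 (assumed)
panel_temp = rng.uniform(15.0, 60.0, 500)             # deg C (assumed)
power = 0.18 * irradiance - 0.5 * (panel_temp - 25.0) + rng.normal(0, 2.0, 500)

X = np.column_stack([irradiance, panel_temp])
y = power.reshape(-1, 1)

# Normalise inputs and outputs so gradient descent behaves well.
X = (X - X.mean(axis=0)) / X.std(axis=0)
y_mean, y_std = y.mean(), y.std()
y = (y - y_mean) / y_std

# Train/validation split, as in the text: part of the data trains the
# model, the remainder is held back for validation.
n_train = 400
Xtr, Xval = X[:n_train], X[n_train:]
ytr, yval = y[:n_train], y[n_train:]

# One-hidden-layer network: 2 inputs -> 8 tanh units -> 1 output.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):
    h = np.tanh(Xtr @ W1 + b1)               # forward pass
    pred = h @ W2 + b2
    err = pred - ytr                          # MSE gradient (up to a constant)
    gW2 = h.T @ err / n_train; gb2 = err.mean(axis=0)
    gh = err @ W2.T * (1 - h**2)              # backprop through tanh
    gW1 = Xtr.T @ gh / n_train; gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Validation error, converted back to watts.
val_pred = np.tanh(Xval @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(np.mean((val_pred - yval) ** 2))) * y_std
print(f"validation RMSE: {rmse:.2f} W")
```

A real digital twin would stream live sensor data through the same trained model, but the split-train-validate loop above is the core of the approach.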
Procedia PDF Downloads 9
389 Treatment of Onshore Petroleum Drill Cuttings via Soil Washing Process: Characterization and Optimal Conditions
Authors: T. Poyai, P. Painmanakul, N. Chawaloesphonsiya, P. Dhanasin, C. Getwech, P. Wattana
Abstract:
Drilling is a key activity in oil and gas exploration and production. Drilling always requires the use of drilling mud for lubricating the drill bit and controlling the subsurface pressure. As drilling proceeds, a considerable amount of cuttings, or rock fragments, is generated. In general, water or water-based mud (WBM) serves as the drilling fluid for the top-hole section. The cuttings generated from this section are non-hazardous and are normally applied as fill material. On the other hand, drilling the bottom-hole to reservoir section uses synthetic-based mud (SBM), which is composed of synthetic oils. The bottom-hole cuttings (SBM cuttings) are regarded as hazardous waste, in accordance with government regulations, due to the presence of hydrocarbons. Currently, the SBM cuttings are disposed of as an alternative fuel and raw material in cement kilns. Instead of burning, this work aims to propose an alternative for drill cuttings management with two ultimate goals: (1) reduction of hazardous waste volume; and (2) making use of the cleaned cuttings. Soil washing was selected as the major treatment process. The physicochemical properties of the drill cuttings were analyzed, including size fraction, pH, moisture content, and hydrocarbons. The particle size of the cuttings was analyzed via the light scattering method. Oil present in the cuttings was quantified in terms of total petroleum hydrocarbon (TPH) through gas chromatography with a flame ionization detector (GC-FID). Other components were measured by the standard methods for soil analysis. The effects of different washing agents, liquid-to-solid (L/S) ratio, washing time, mixing speed, rinse-to-solid (R/S) ratio, and rinsing time were also evaluated. It was found that the drill cuttings held an electrical conductivity of 3.84 dS/m, a pH of 9.1, and a moisture content of 7.5%. The TPH in the cuttings fell in the diesel range, with concentrations from 20,000 to 30,000 mg/kg dry cuttings.
A majority of the cuttings particles held a mean diameter of 50 µm, corresponding to the silt fraction. The results also suggested that a green solvent was the most promising washing agent for cuttings treatment with regard to occupational health, safety, and environmental benefits. The optimal washing conditions were an L/S of 5, a washing time of 15 min, a mixing speed of 60 rpm, an R/S of 10, and a rinsing time of 1 min. After the washing process, three fractions (clean cuttings, spent solvent, and wastewater) were considered and provided with recommendations. Residual TPH of less than 5,000 mg/kg was detected in the clean cuttings, which can then be used for various purposes. The spent solvent held a calorific value higher than 3,000 cal/g and can be used as an alternative fuel; otherwise, the used solvent can be recovered using distillation or chromatography techniques. Finally, the generated wastewater can be combined with the produced water and simultaneously managed by re-injection into the reservoir.
Keywords: drill cuttings, green solvent, soil washing, total petroleum hydrocarbon (TPH)
Procedia PDF Downloads 155
388 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity
Authors: Yuri Laevsky, Tatyana Nosova
Abstract:
The phenomenon of filtration gas combustion (FGC) was discovered experimentally at the beginning of the 1980s. It has a number of important applications in areas such as chemical technology, fire and explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction through a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front has different modes; this investigation focuses on the low-velocity regime. The main characteristic of the process is the velocity of combustion front propagation. Computation of this characteristic encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by energy conservation laws for the temperature of the porous medium and the temperature of the gas, and by the mass conservation law for the relative concentration of the reacting component of the gas mixture. The model is homogenized using the two-temperature approach, in which at each point of the continuous medium we specify solid and gas phases with Newtonian heat exchange between them. The construction of the computational scheme is based on the principles of the mixed finite element method on a regular mesh. The approximation in time is performed by an explicit-implicit difference scheme. Special attention was given to determining the velocity of combustion front propagation: straightforward computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term ‘front propagation velocity’ makes sense for settled motion, for which analytical formulae linking the velocity to the equilibrium temperature hold.
The numerical implementation of one such formula, leading to stable computation of the instantaneous front velocity, has been proposed. The resulting algorithm was applied in a subsequent numerical investigation of the FGC process, in which the dependence of the main characteristics of the process on various physical parameters was studied. In particular, the influence of combustible gas mixture consumption on the front propagation velocity was investigated. It was also reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a breakdown occurs from slow combustion front propagation to rapid propagation. Approximate boundaries of this interval were calculated for specific parameters. All results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and of the algorithm constructed. The availability of stable techniques for calculating the instantaneous velocity of the combustion wave allows a semi-Lagrangian approach to the solution of the problem to be considered.
Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation
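The instability of the "grid derivative" velocity mentioned above can be illustrated with a small sketch. This is not the authors' scheme or formula: it is a hypothetical 1D example (assumed tanh-shaped travelling temperature front, assumed grid spacing and front speed) showing why differencing a grid-quantized front position is noisy, while sub-grid interpolation of a temperature-level crossing yields a stable instantaneous velocity.

```python
import numpy as np

# A front moving at constant speed v_true is sampled on a coarse grid.
# Its grid-node position advances in whole-cell jumps, so a simple
# difference of positions oscillates between 0 and dx/dt; interpolating
# the T = T_half crossing between nodes removes the quantization.
v_true = 2.0e-3          # assumed front speed, m/s
dx, dt = 1.0e-3, 0.1     # assumed grid spacing (m) and time step (s)
x = np.arange(0.0, 0.1, dx)
times = np.arange(0.0, 20.0, dt)
T_eq, T_amb, width = 1200.0, 300.0, 2.5e-3   # assumed temperatures and front width

def profile(t):
    # Smooth travelling temperature front (tanh shape, illustrative only).
    return T_amb + 0.5 * (T_eq - T_amb) * (1 - np.tanh((x - v_true * t) / width))

T_half = 0.5 * (T_eq + T_amb)

pos_grid, pos_interp = [], []
for t in times:
    T = profile(t)
    i = int(np.argmax(T < T_half))      # first node ahead of the front
    pos_grid.append(x[i])               # grid-quantized front position
    # Linear sub-grid interpolation of the T = T_half crossing.
    frac = (T[i - 1] - T_half) / (T[i - 1] - T[i])
    pos_interp.append(x[i - 1] + frac * dx)

v_grid = np.diff(pos_grid) / dt         # jumps between 0 and dx/dt
v_interp = np.diff(pos_interp) / dt     # stable instantaneous estimate

print("grid-derivative velocity scatter:", float(v_grid.std()))
print("interpolated velocity scatter:  ", float(v_interp.std()))
```

Both estimators recover the correct mean speed, but the scatter of the grid-derivative version is orders of magnitude larger, which is the behaviour the abstract describes for naive grid differentiation.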
Procedia PDF Downloads 302
387 Re-Presenting the Egyptian Informal Urbanism in Films between 1994 and 2014
Authors: R. Mofeed, N. Elgendy
Abstract:
Cinema constructs mind-spaces that reflect inherent human thoughts and emotions. As a representational art, cinema introduces comprehensive images of life phenomena in different ways. The term “represent” suggests a variety of meanings: to bring into presence, to replace, or to typify. In that sense, cinema may present a phenomenon through direct embodiment, introduce a substitute image that replaces the original phenomenon, or typify it by relating the produced image to a more general category through a process of abstraction. This research questions the type of images that Egyptian cinema introduces of informal urbanism and how these images were conditioned and reshaped over the last twenty years. The informalities/slums phenomenon first appeared in Egypt, and particularly Cairo, in the early sixties; however, it was completely ignored by the state and society until the eighties, and its first evident representation in cinema came only in the mid-nineties. The informal city comprises illegal housing developments and is a fast-growing form of urbanization in Cairo. Yet this expanding phenomenon is still depicted as minor, exceptional, and marginal through the cinematic lens. This paper aims at tracing the forms of representation of urban informalities in Egyptian cinema between 1994 and 2014, and how they affected the popular mind and its perception of these areas. The paper runs two main lines of inquiry. The first traces the phenomenon through a chronological and geographical mapping of how informal urbanism has been portrayed in films. This analysis is based on academic research work at Cairo University in Fall 2014. The visual tracing through maps and timelines allowed a reading of the phases of ignorance, presence, typifying, and repetition in the representation of this huge sector of the city across the more than 50 films investigated.
The analysis clearly revealed the “portrayed image” of informality in cinema over the examined period. The second part of the paper explores the “perceived image”. A designed questionnaire is applied to highlight the main features of the image perceived by both inhabitants of informalities and other Cairenes, based on watching selected films. The questionnaire covers the different images of informalities proposed in cinema, whether against a comic or a melodramatic background, and highlights the descriptive terms used, to see which of them resonate with mass perceptions and affect mental images. The two images, “portrayed” and “perceived”, are then compared to reflect on issues of repetition, stereotyping, and reality. The resulting stereotype of informal urbanism is finally outlined and justified in relation to both the production and consumption mechanisms of films and the State’s official vision of informalities.
Keywords: cinema, informal urbanism, popular mind, representation
Procedia PDF Downloads 296
386 The Possible Interaction between Bisphenol A, Caffeine and Epigallocatechin-3-Gallate on Neurotoxicity Induced by Manganese in Rats
Authors: Azza A. Ali, Hebatalla I. Ahmed, Asmaa Abdelaty
Abstract:
Background: Manganese (Mn) is a naturally occurring element. Exposure to high levels of Mn causes neurotoxic effects and represents an environmental risk factor. Mn neurotoxicity is poorly understood, but changes in AChE activity, monoamines, and oxidative stress have been established. Bisphenol A (BPA) is a synthetic compound widely used in the production of polycarbonate plastics; there is considerable debate about whether exposure to it represents an environmental risk. Caffeine is one of the major contributors to dietary antioxidants, which prevent oxidative damage and may reduce the risk of chronic neurodegenerative diseases. Epigallocatechin-3-gallate (EGCG) is another major component of green tea; it has known interactions with caffeine and also has health-promoting effects in the CNS. Objective: To evaluate the potential protective effects of caffeine and/or EGCG against Mn-induced neurotoxicity, either alone or in the presence of BPA, in rats. Methods: Seven groups of rats were used; all received MnCl2.4H2O (10 mg/kg, IP) daily for 5 weeks, except the control group, which received saline, corn oil, and distilled H2O. Mn was injected either alone or in combination with each of the following: BPA (50 mg/kg, PO), caffeine (10 mg/kg, PO), EGCG (5 mg/kg, IP), caffeine + EGCG, and BPA + caffeine + EGCG. All rats were examined in five behavioral tests (grid, bar, swimming, open field, and Y-maze tests). Biochemical changes in monoamines, caspase-3, PGE2, GSK-3B, glutamate, acetylcholinesterase, and oxidative parameters, as well as histopathological changes in the brain, were also evaluated for all groups. Results: Mn significantly increased MDA and nitrite content as well as caspase-3, GSK-3B, PGE2, and glutamate levels, while significantly decreasing TAC, SOD, and cholinesterase in the striatum. It also decreased DA, NE, and 5-HT levels in the striatum and frontal cortex.
BPA together with Mn enhanced the oxidative stress generation induced by Mn, while increasing the monoamine content that Mn had decreased in the rat striatum. BPA abolished the neuronal degeneration induced by Mn in the hippocampus, but not in the substantia nigra, striatum, or cerebral cortex. Behavioral examinations showed that caffeine and EGCG co-administration had a more pronounced protective effect against Mn-induced neurotoxicity than either one alone. EGCG, alone or in combination with caffeine, prevented the neuronal degeneration induced by Mn in the substantia nigra, striatum, hippocampus, and cerebral cortex, while caffeine alone prevented neuronal degeneration in the substantia nigra and striatum but still showed some nuclear pyknosis in the cerebral cortex and hippocampus. The marked protection of caffeine and EGCG co-administration was also confirmed by the significant increase in TAC, SOD, AChE, DA, NE, and 5-HT, as well as the decrease in MDA, nitrite, caspase-3, PGE2, GSK-3B, and glutamic acid in the striatum. Conclusion: The neuronal degeneration induced by Mn showed some inhibition with BPA exposure, despite the enhancement of oxidative stress generation. Co-administration of EGCG and caffeine can protect against the neuronal degeneration induced by Mn and improve the behavioral deficits associated with its neurotoxicity. The protective effect of EGCG was more pronounced than that of caffeine, even with BPA co-exposure.
Keywords: manganese, bisphenol A, caffeine, epigallocatechin-3-gallate, neurotoxicity, behavioral tests, rats
Procedia PDF Downloads 228
385 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette
Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida
Abstract:
Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine, and flavoring agents). However, caution is still needed when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. On another note, low actuation power is beneficial in aerosol-generating devices since it yields reduced emission of toxic chemicals. In the case of e-cigarettes, low heating powers can be considered powers below 10 W, within the wide range of powers (0.6 to 70.0 W) studied in the literature. Given their importance for inhalation risk reduction, the particle size characteristics of e-cigarettes demand thorough investigation. However, a comprehensive study of the PSD and velocities of e-cigarettes under a standard testing condition at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of the undiluted aerosols of a recent fourth-generation e-cigarette at low powers, within 6.5 W, using a real-time particle counter (time-of-flight method). The temporal and spatial evolution of the particle size and velocity distributions of the aerosol jets is also examined using the phase Doppler anemometry (PDA) technique. To the authors’ best knowledge, the application of PDA to e-cigarette aerosol measurement is rarely reported.
In the present study, preliminary results on the particle number count of undiluted aerosols measured by the time-of-flight method showed that an increase of heating power from 3.5 W to 6.5 W resulted in enhanced asymmetry in the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation, and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression combining exponential, Gaussian, and polynomial (EGP) distributions was proposed and successfully describes the asymmetric PSD. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, as the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements showed a typical decay of the centerline streamwise mean velocity of the aerosol jet, along with a reduction in particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure, and a discussion of the results will be provided. The particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity, and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices.
Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry
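The two summary statistics reported above can be illustrated with a short sketch. For a (near) log-normal PSD, the count median aerodynamic diameter (CMAD) is the median of the measured diameters and the geometric standard deviation (GSD) is the exponential of the standard deviation of the log-diameters. The synthetic sample below assumes a log-normal aerosol with parameters in the reported range (CMAD ~0.70 μm, GSD ~1.35); it is not the authors' measured data.

```python
import numpy as np

# Synthetic particle diameters drawn from an assumed log-normal PSD.
rng = np.random.default_rng(1)
cmad_true, gsd_true = 0.70, 1.35          # um, dimensionless (assumed)
diam = rng.lognormal(mean=np.log(cmad_true),
                     sigma=np.log(gsd_true), size=20000)

# CMAD: median of the diameters; GSD: exp of the std of log-diameters.
cmad = float(np.median(diam))
gsd = float(np.exp(np.std(np.log(diam))))

print(f"CMAD = {cmad:.2f} um, GSD = {gsd:.2f}")
```

With real time-of-flight data the same two lines would be applied to the measured diameter sample; departures of the empirical histogram from this log-normal shape are what motivate the asymmetric EGP expression proposed in the abstract.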
Procedia PDF Downloads 49
384 The Social Ecology of Serratia entomophila: Pathogen of Costelytra giveni
Authors: C. Watson, T. Glare, M. O'Callaghan, M. Hurst
Abstract:
The endemic New Zealand grass grub (Costelytra giveni, Coleoptera: Scarabaeidae) is an economically significant grassland pest in New Zealand. Due to its impact on production within the agricultural sector, one of New Zealand's primary industries, several methods are being used to either control or prevent the establishment of new grass grub populations in pasture. One such method involves the use of a biopesticide based on the bacterium Serratia entomophila. This species is one of the causative agents of amber disease, a chronic disease of the larvae which results in death via septicaemia after approximately 2 to 3 months. The ability of S. entomophila to cause amber disease is dependent upon the presence of the amber disease associated plasmid (pADAP), which encodes the key virulence determinants required for the establishment and maintenance of the disease. Following the collapse of grass grub populations within the soil, resulting from either natural population build-up or application of the bacteria, non-pathogenic plasmid-free Serratia strains begin to predominate in the soil. Whilst the interactions between S. entomophila and grass grub larvae are well studied, less is known about the interactions between plasmid-bearing and plasmid-free strains, particularly the potential impact of these interactions upon the efficacy of an applied biopesticide. Using a range of constructed strains with antibiotic tags, in vitro (broth culture) and in vivo (soil and larvae) experiments were conducted using inoculants comprising differing ratios of isogenic pathogenic and non-pathogenic Serratia strains, enabling the relative growth of pADAP+ and pADAP- strains under competition to be assessed.
In nutrient-rich broth, the non-pathogenic pADAP- strain outgrew the pathogenic pADAP+ strain by day 3 when the strains were inoculated in equal quantities, and by day 5 when it was applied as the minority inoculant; however, there was an overall gradual decline in the number of viable bacteria of both strains over a 7-day period. Similar results were obtained in additional experiments using the same strains and continuous broth cultures re-inoculated at 24-hour intervals, although in these cultures the viable cell count did not diminish over the 7-day period. When the same ratios were assessed in soil microcosms with limited available nutrients, the strains remained relatively stable over a 2-month period. Additionally, in vivo grass grub co-infection assays using the same ratios of tagged Serratia strains gave results similar to those observed in the soil, but there was also evidence of horizontal transfer of pADAP from the pathogenic to the non-pathogenic strain within the larval gut after a period of 4 days. Whilst the influence of competition is more apparent in broth cultures than within the soil or larvae, further testing is required to determine whether this competition between pathogenic and non-pathogenic Serratia strains has any influence on efficacy and disease progression, and how this may impact the ability of S. entomophila to cause amber disease within grass grub larvae when applied as a biopesticide.
Keywords: biological control, entomopathogen, microbial ecology, New Zealand
Procedia PDF Downloads 156
383 Integrated Approach Towards Safe Wastewater Reuse in Moroccan Agriculture
Authors: Zakia Hbellaq
Abstract:
The Mediterranean region is considered a hotspot of climate change. Morocco is a semi-arid Mediterranean country facing water shortages and poor water quality, and its limited water resources constrain the activities of various economic sectors. Most of Morocco's territory lies in arid and desert areas. The potential water resources are estimated at 22 billion m3, equivalent to about 700 m3/inhabitant/year, which places Morocco in a state of structural water stress. Strictly speaking, the Kingdom of Morocco is one of the “very riskiest” countries according to the World Resources Institute (WRI), which calculates water stress risk for 167 countries. The Institute's results rank Morocco among the riskiest countries in terms of water scarcity, with a score of 3.89 out of 5, placing it 23rd out of the 167 countries; this indicates that the demand for water exceeds the available resources. Agriculture is the sector most affected by water stress through irrigation and places a heavy burden on the water table. Irrigation is an unavoidable technical need and has undeniable economic and social benefits given the available resources and climatic conditions. Irrigation, and therefore the agricultural sector, currently uses 86% of the country's water resources, while industry uses 5.5%. Although its development has undeniable economic and social benefits, it also contributes to the overexploitation of most groundwater resources and to a striking decline in the levels, and deterioration of the quality, of water in some aquifers. In this context, REUSE is one of the proposed solutions to reduce the water footprint of the agricultural sector and alleviate the shortage of water resources. Indeed, wastewater reuse, also known as REUSE (reuse of treated wastewater), is a step forward not only for the circular economy but also for the future, especially in the context of climate change.
In particular, water reuse provides an alternative to existing water supplies and can be used to improve water security, sustainability, and resilience. However, given the introduction of organic trace pollutants (organic micro-pollutants), the presence of emerging contaminants, and salinity concerns, innovative treatment capabilities must be mobilized to overcome these problems and ensure food and health safety. To this end, attention will be paid to an integrated approach based on reinforcing and optimizing the treatments proposed for eliminating the organic load, with particular attention to the elimination of emerging pollutants. Membrane bioreactors (MBR) as stand-alone technologies are not able to meet the requirements of WHO guidelines; they will therefore be combined with heterogeneous Fenton processes using persulfate or hydrogen peroxide oxidants. Similarly, adsorption and filtration are applied as tertiary treatment. In addition, crop performance will be evaluated in terms of yield, productivity, quality, and safety, through the optimization of Trichoderma sp. strains used to increase crop resistance to abiotic stresses, as well as through modern omics tools such as transcriptomic analysis using RNA sequencing and methylation profiling to identify adaptive traits and the associated genetic diversity that is tolerant/resistant/resilient to biotic and abiotic stresses. Hence, this approach will undoubtedly alleviate water scarcity and, likewise, reduce the negative and harmful impact of wastewater irrigation on the condition of crops and the health of their consumers.
Keywords: water scarcity, food security, irrigation, agricultural water footprint, reuse, emerging contaminants
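The per-capita availability quoted in the abstract follows directly from the national resource estimate; a quick arithmetic check (a sketch; the population figure is implied by the two stated values, not given in the abstract):

```python
# Implied population behind the abstract's water-availability figures.
# Assumption (ours): the 22 billion m3 total and ~700 m3/inhabitant/year
# are mutually consistent; the population itself is not stated.
total_resources_m3 = 22e9   # potential water resources
per_capita_m3 = 700         # m3 per inhabitant per year
implied_population = total_resources_m3 / per_capita_m3
print(round(implied_population / 1e6, 1))  # -> 31.4 (million inhabitants)
```

The result is consistent with Morocco's population at the time such per-capita figures are usually quoted.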
382 Diversity and Use of Agroforestry Yards of Family Farmers of Ponte Alta – Gama, Federal District, Brazil
Authors: Kever Bruno Paradelo Gomes, Rosana Carvalho Martins
Abstract:
Home gardens are production systems located near dwellings and are quite common in the tropics. They consist of agricultural and forest species and may also involve the raising of small animals, producing food for subsistence as well as generating income, with a special focus on the conservation of biodiversity. Home gardens are diverse agroforestry systems with multiple uses, among them food security, supplementary income, and traditional medicine. The work was carried out on rural properties of the family farmers of the Ponte Alta Rural Nucleus, Gama Administrative Region, in the city of Brasília, Federal District, Brazil. The present research is characterized methodologically as quantitative, exploratory, and descriptive. The instruments used in this research were a bibliographic survey and a semi-structured questionnaire. Data collection was performed by applying a semi-structured questionnaire containing questions on the perception and behavior of the interviewed producer regarding the subject under analysis. In each question, the respondent explained his or her knowledge of sustainability, agroecological practices, environmental legislation, conservation methods, forest and medicinal species, social and socioeconomic characteristics, use and purpose of agroforestry, and technical assistance. The sample represented 55.62% of the study universe. We interviewed 99 people aged 18-83 years, with a mean age of 49 years. The low level of education, coupled with the lack of training and guidance for small family farmers in the Ponte Alta Rural Nucleus, is one of the limitations to the development of practices oriented towards sustainable and agroecological agriculture in the nucleus. It was observed that 50.5% of those interviewed established their agroforestry yards less than 20 years ago, and only 16.17% of the yards are older than 35 years.
Agriculture was identified as the main activity of most of the rural properties studied, with medicinal plants, fruits, and crops standing out as the most extracted products. However, the crops in the backyards are used exclusively for family consumption; this could be complemented by marketing the surplus and by adding value to the cultivated products. Initiatives such as this may contribute to increasing family income and to the motivation for, and valuing of, cultivation in agroecological gardens. We conclude that the home gardens of Ponte Alta are highly diverse, thus contributing to local biodiversity conservation; they are largely managed by women, ensure food security, and allow income generation. The tradition of existing knowledge on the use and management of the diversity of resources used in agroforestry yards is of paramount importance for the development of sustainable alternative practices.
Keywords: agriculture, agroforestry system, rural development, sustainability
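The sampling figures quoted above imply the size of the study universe; a minimal back-calculation (our illustration; the universe size itself is not stated in the abstract):

```python
# Back-calculating the study universe from the reported sample:
# 99 respondents are said to represent 55.62% of the universe.
respondents = 99
sample_fraction = 0.5562          # 55.62% as a fraction
universe = respondents / sample_fraction
print(round(universe))            # -> 178 (implied number of properties)
```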
381 Cross-Language Variation and the ‘Fused’ Zone in Bilingual Mental Lexicon: An Experimental Research
Authors: Yuliya E. Leshchenko, Tatyana S. Ostapenko
Abstract:
Language variation is a widespread linguistic phenomenon that can affect different levels of a language system: phonological, morphological, lexical, syntactic, etc. Obviously, the scope of possible standard alternations within a particular language is limited by a variety of norms and regulations, which set more or less clear boundaries between what is and is not possible for the speakers. The possibility of lexical variation (alternate usage of lexical items within the same contexts) rests on the fact that the meanings of words are not clearly and rigidly defined in the consciousness of the speakers. Therefore, lexical variation is usually connected with an unstable relationship between words and their referents: a case when a particular lexical item refers to different types of referents, or when a particular referent can be named by various lexical items. We assume that the scope of lexical variation in bilingual speech is generally wider than that observed in monolingual speech because, besides 'lexical item - referent' relations, it involves the possibility of cross-language variation between L1 and L2 lexical items. We use the term 'cross-language variation' to denote a case when two equivalent words of different languages are treated by a bilingual speaker as freely interchangeable within a common linguistic context. As distinct from code-switching, which is traditionally defined as the conscious use of more than one language within one communicative act, in cross-language lexical variation the speaker does not perceive the alternate lexical items as belonging to different languages and, therefore, does not realize the change of language code. In this paper, the authors present research on the lexical variation of adult Komi-Permyak – Russian bilingual speakers.
The two languages co-exist in the Komi-Permyak District in Russia (Komi-Permyak as the ethnic language and Russian as the official state language), are usually acquired from birth in a natural linguistic environment and, according to sociolinguistic surveys, are both identified by the speakers as coordinate mother tongues. The experimental research demonstrated that alternation of Komi-Permyak and Russian words within one utterance/phrase is highly frequent in both speech perception and production. Moreover, our participants rated cross-language word combinations like ‘маленькая /Russian/ нывка /Komi-Permyak/’ (‘a little girl’) or ‘мунны /Komi-Permyak/ домой /Russian/’ (‘go home’) as regular/habitual, containing no violation of any linguistic rules and being just as possible in speech as the equivalent intra-language word combinations (‘учöтик нывка’ /Komi-Permyak/ or ‘идти домой’ /Russian/). All facts considered, we claim that constant concurrent use of the two languages results in a large number of their words tending to be intuitively interpreted by the speakers as lexical variants related not only to the same referent but also to both languages or, more precisely, to neither of them in particular. Consequently, we can suppose that the bilingual mental lexicon includes an extensive 'fused' zone of lexical representations that provides the basis for cross-language variation in bilingual speech.
Keywords: bilingualism, bilingual mental lexicon, code-switching, lexical variation
380 Effect of Energy Management Practices on Sustaining Competitive Advantage among Manufacturing Firms: A Case of Selected Manufacturers in Nairobi, Kenya
Authors: Henry Kiptum Yatich, Ronald Chepkilot, Aquilars Mutuku Kalio
Abstract:
Studies on energy management have focused on environmental conservation and the reduction of production and operating expenses. However, transferring the gains of energy management practices into competitive advantage is important to manufacturers in Kenya. Success in managing competitive advantage arises out of a firm's ability to identify and implement actions that can give the company an edge over its rivals. Manufacturing firms in Kenya are the largest consumers of both electricity and petroleum products. In this regard, the study posits that transferring the gains of energy management practices into competitive advantage is imperative. The study was carried out in Nairobi and its environs, which host the largest number of manufacturers. The study objectives were: to determine the effect of implementing energy management regulations on sustaining competitive advantage; to determine the effect of implementing company energy management policy on competitive advantage; to examine the effect of implementing energy-efficient technology on sustaining competitive advantage; and to assess the effect of percentage energy expenditure on sustaining competitive advantage among manufacturing firms. The study adopted a survey research design, with a study population of 145,987. A sample of 384 respondents was selected randomly from 21 proportionately selected firms. Structured questionnaires were used to collect data. Data analysis was done using descriptive statistics (means and standard deviations) and inferential statistics (correlation, regression, and t-tests). Data are presented using tables and diagrams. The study found that Energy Management Regulations, Company Energy Management Policies, and Energy Expenses are significant predictors of Competitive Advantage (CA). However, Energy Efficient Technology, as a component of Energy Management Practices, did not have a significant relationship with Competitive Advantage. The study revealed that the level of awareness in the sector stood at 49.3%.
Energy expenses in the sector stood at an average of 10.53% of the firms' total revenue. The study showed that gains from energy efficiency practices can be transferred to competitive strategies so as to improve firm competitiveness. The study recommends that manufacturing firms consider energy management practices as part of their strategic agenda, assessing and reviewing these practices as possible strategies for sustaining competitiveness. Government agencies such as the Energy Regulatory Commission, the Ministry of Energy and Petroleum, and the Kenya Association of Manufacturers should enforce the Energy Management Regulations 2012, with enhanced stakeholder involvement and sensitization, so as to promote sustained firm competitiveness. Government support in providing incentives and rebates for the acquisition of energy-efficient technologies should be pursued. Given the study's limitations, future experimental and longitudinal studies need to be carried out. It should be noted that energy management practices yield enormous benefits to all stakeholders and that the practice should not be considered a competitive tool but rather a universal practice.
Keywords: energy, efficiency, management, guidelines, policy, technology, competitive advantage
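The inferential analysis described above rests on regressing competitive advantage on the energy-management predictors; a minimal ordinary-least-squares sketch with invented data (purely illustrative; neither the variable coding nor the data points come from the study):

```python
# Minimal OLS fit of the kind used to test whether an energy-management
# predictor is associated with Competitive Advantage.
# The data below are hypothetical, not the study's dataset.
def ols(x, y):
    """Simple linear regression: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

x = [1, 2, 3, 4, 5]             # hypothetical policy-implementation level
y = [2.1, 2.9, 4.2, 4.8, 6.0]   # hypothetical CA index
slope, intercept = ols(x, y)
print(round(slope, 2))          # -> 0.97 (positive association)
```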
379 Vibration and Freeze-Thaw Cycling Tests on Fuel Cells for Automotive Applications
Authors: Gema M. Rodado, Jose M. Olavarrieta
Abstract:
Hydrogen fuel cell technologies have experienced a great boost in recent decades, significantly increasing the production of these devices for both stationary and portable (mainly automotive) applications; this growth is driven by two main factors: environmental pollution and energy shortage. A fuel cell is an electrochemical device that converts chemical energy directly into electricity, using hydrogen and oxygen gases as reactants and producing water and heat as byproducts of the chemical reaction. Fuel cells, specifically those of Proton Exchange Membrane (PEM) technology, are considered an alternative to internal combustion engines, mainly because of their near-zero emissions, high efficiency, and low operating temperatures (< 373 K). The introduction and use of fuel cells in the automotive market require the development of standardized and validated procedures to test and evaluate their performance in different environmental conditions, including vibrations and freeze-thaw cycles. Such vibration and extremely low/high temperature conditions can affect the physical integrity, or even the proper operation and performance, of a fuel cell stack placed in a vehicle in circulation or exposed to different climatic conditions. The main objective of this work is the development and validation of vibration and freeze-thaw cycling test procedures for fuel cell stacks that can be used in a vehicle, in order to consolidate their safety, performance, and durability. In this context, different experimental tests were carried out at the facilities of the National Hydrogen Centre (CNH2). The experimental equipment used was: a vibration platform (shaker) for vibration test analysis on fuel cells in three axis directions with different vibration profiles; a walk-in climatic chamber to test the starting, operating, and stopping behavior of fuel cells under defined extreme conditions.
A test station designed and developed by the CNH2 was used to test and characterize PEM fuel cell stacks up to 10 kWe. A 5 kWe PEM fuel cell stack in non-operating mode was used to carry out two independent experimental procedures. On the one hand, the fuel cell was subjected to a sinusoidal vibration test on the shaker in the three axis directions, defined by acceleration and amplitude profiles in the frequency range of 7 to 200 Hz, for a total of three hours in each direction. On the other hand, the climatic chamber was used to simulate freeze-thaw cycles over a temperature range between 313 K and 243 K, with an average relative humidity of 50% and a recommended ramp-up and ramp-down rate of 1 K/min. The polarization curve and gas leakage rate were determined before and after the vibration and freeze-thaw tests at the fuel cell stack test station in order to evaluate the robustness of the stack. The results were very similar, which indicates that the tests did not affect the fuel cell stack's structure or performance. The proposed procedures were verified and can be used as a starting point for further tests with different fuel cells.
Keywords: climatic chamber, freeze-thaw cycles, PEM fuel cell, shaker, vibration tests
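The freeze-thaw protocol fixes a ramp rate, so the duration of each thermal ramp follows directly; a sketch assuming the lower temperature bound is 243 K (a negative kelvin value is unphysical, so the stated "-243 K" is read here as a sign typo):

```python
# Duration of one thermal ramp in the freeze-thaw cycling described above.
# Assumption (ours): lower bound 243 K; negative kelvin is unphysical.
t_high_k = 313.0        # upper temperature bound
t_low_k = 243.0         # lower temperature bound (assumed)
ramp_k_per_min = 1.0    # recommended ramp rate
ramp_minutes = (t_high_k - t_low_k) / ramp_k_per_min
print(ramp_minutes)     # -> 70.0 minutes per ramp
```

At that rate, one full freeze-thaw cycle (down and back up) would take roughly 140 minutes plus any dwell time at the extremes.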
378 Facies, Diagenetic Analysis and Sequence Stratigraphy of Habib Rahi Formation Dwelling in the Vicinity of Jacobabad Khairpur High, Southern Indus Basin, Pakistan
Authors: Muhammad Haris, Syed Kamran Ali, Mubeen Islam, Tariq Mehmood, Faisal Shah
Abstract:
Jacobabad Khairpur High, part of the Sukkur rift zone, is the boundary separating the Central and Southern Indus Basins, formed as a result of Post-Jurassic uplift after the deposition of the Middle Jurassic Chiltan Formation. The Habib Rahi Formation of Middle to Late Eocene age crops out in the vicinity of Jacobabad Khairpur High; a section at Rohri near Sukkur was measured in detail for lithofacies, microfacies, diagenetic analysis, and sequence stratigraphy. The Habib Rahi Formation is richly fossiliferous and consists mostly of limestone with subordinate clays and marl. The total thickness of the formation in this section is 28.8 m. The bottom of the formation is not exposed, while the upper contact with the Sirki Shale of Middle Eocene age is unconformable in places. The section was measured using the Jacob's Staff method, with traverses made perpendicular to strike. Four lithofacies were identified based on outcrop geology: coarse-grained limestone facies (HR-1 to HR-5), massive bedded limestone facies (HR-6 to HR-7), micritic limestone facies (HR-8 to HR-13), and algal dolomitic limestone facies (HR-14). A total of 14 rock samples were collected from the outcrop for detailed petrographic studies, and thin sections of the samples were prepared and analyzed under the microscope. On the basis of Dunham's (1962) classification system, after studying textures, grain size, and fossil content, and Folk's (1959) classification system, after reviewing allochem types, four microfacies were identified. These microfacies are HR-MF 1: Benthonic Foraminiferal Wackestone/Biomicrite Microfacies; HR-MF 2: Foraminiferal Nummulites Wackestone-Packstone/Biomicrite Microfacies; HR-MF 3: Benthonic Foraminiferal Packstone/Biomicrite Microfacies; and HR-MF 4: Bioclast Carbonate Mudstone/Micrite Microfacies. The abundance of larger benthic foraminifera (LBF), including Assilina sp., A. spiral abrade, A. granulosa, A. dandotica, A. laminosa, Nummulites sp., N.
fabiani, N. striatus, N. globulus, Textularia, bioclasts, and red algae indicates a shallow marine (tidal flat) environment of deposition. Based on variations in rock types, grain size, and marine fauna, the Habib Rahi Formation shows progradational stacking patterns, which indicate coarsening-upward cycles. A second-order sea-level rise is identified (spanning the Ypresian to Bartonian) that represents a Transgressive Systems Tract (TST), along with a third-order Regressive Systems Tract (RST) (spanning the Bartonian to Priabonian). Diagenetic processes include replacement of fossils by mud, dolomitization, stylolites associated with pressure dissolution, and filling with dark organic matter. The presence of microfossils including Nummulites striatus, N. fabiani, and Assilina dandotica signifies a Bartonian to Priabonian age for the Habib Rahi Formation.
Keywords: Jacobabad Khairpur High, Habib Rahi Formation, lithofacies, microfacies, sequence stratigraphy, diagenetic history
377 Enhancing Photocatalytic Activity of Oxygen Vacancies-Rich Tungsten Trioxide (WO₃) for Sustainable Energy Conversion and Water Purification
Authors: Satam Alotibi, Osama A. Hussein, Aziz H. Al-Shaibani, Nawaf A. Al-Aqeel, Abdellah Kaiba, Fatehia S. Alhakami, Mohammed Alyami, Talal F. Qahtan
Abstract:
The demand for sustainable and efficient energy conversion using solar energy has grown rapidly in recent years. In this pursuit, solar-to-chemical conversion has emerged as a promising approach, with oxygen vacancies-rich tungsten trioxide (WO₃) playing a crucial role. This study presents a method for synthesizing oxygen vacancies-rich WO₃ that significantly enhances its photocatalytic activity, a notable step towards sustainable energy solutions. Experimental results underscore the importance of oxygen vacancies in modifying the properties of WO₃. These vacancies introduce additional energy states within the material, leading to a reduction in the bandgap and increased light absorption, and they act as electron traps, thereby reducing radiative emission. Our focus lies in developing oxygen vacancies-rich WO₃, which demonstrates strong potential for improved photocatalytic applications. The effectiveness of oxygen vacancies-rich WO₃ in solar-to-chemical conversion was showcased through rigorous assessments of its photocatalytic degradation performance. Sunlight irradiation was employed to evaluate the material's effectiveness in degrading organic pollutants in wastewater. The results demonstrate the superior photocatalytic performance of oxygen vacancies-rich WO₃ compared to conventional WO₃ nanomaterials, establishing its efficacy for sustainable and efficient energy conversion. Furthermore, the synthesized material was used to fabricate films, which were subsequently employed in immobilized WO₃ and oxygen vacancies-rich WO₃ reactors for water purification under natural sunlight irradiation. This application offers a sustainable and efficient solution for water treatment, harnessing solar energy for effective decontamination. In addition to investigating the photocatalytic capabilities, we extensively analyze the structural and chemical properties of the synthesized material.
The synthesis process involves in situ thermal reduction of WO₃ nano-powder in a nitrogen environment, meticulously monitored using thermogravimetric analysis (TGA) to ensure precise control over the synthesis of oxygen vacancies-rich WO₃. Comprehensive characterization techniques such as UV-Vis spectroscopy, X-ray photoelectron spectroscopy (XPS), FTIR, Raman spectroscopy, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and selected area electron diffraction (SAED) provide deep insights into the material's optical properties, chemical composition, elemental states, structure, surface properties, and crystalline structure. This study represents a significant advancement in sustainable energy conversion through solar-to-chemical processes and water purification. By harnessing the unique properties of oxygen vacancies-rich WO₃, we not only enhance our understanding of energy conversion mechanisms but also pave the way for the development of highly efficient and environmentally friendly photocatalytic materials. The application of this material in water purification demonstrates its versatility and potential to address critical environmental challenges. These findings bring us closer to a sustainable energy future and cleaner water resources, laying a solid foundation for a more sustainable planet.
Keywords: sustainable energy conversion, solar-to-chemical conversion, oxygen vacancies-rich tungsten trioxide (WO₃), photocatalytic activity enhancement, water purification
376 ENDO-β-1,4-Xylanase from Thermophilic Geobacillus stearothermophilus: Immobilization Using Matrix Entrapment Technique to Increase the Stability and Recycling Efficiency
Authors: Afsheen Aman, Zainab Bibi, Shah Ali Ul Qader
Abstract:
Introduction: Xylan is a heteropolysaccharide composed of xylose monomers linked through β-1,4 linkages within a complex xylan network. Owing to the wide applications of xylan hydrolysis products (xylose, xylobiose, and xylooligosaccharides), researchers are focusing on the development of various strategies for efficient xylan degradation. One of the most important strategies is the use of heat-tolerant biocatalysts, which act as strong and specific cleaving agents. Therefore, the exploration of the microbial pool of extremely diversified ecosystems is considerably vital. Microbial populations from extreme habitats are keenly explored for the isolation of thermophilic entities. These thermozymes usually demonstrate fast hydrolytic rates, can produce high yields of product, and are less prone to microbial contamination. Another way of degrading xylan continuously is the use of immobilization techniques. The current work is an effort to merge the positive aspects of both thermozymes and immobilization. Methodology: Geobacillus stearothermophilus was isolated from a soil sample collected near a blast furnace site. This thermophile is capable of producing a thermostable endo-β-1,4-xylanase which cleaves xylan effectively. In the current study, this thermozyme was immobilized within a synthetic and a non-synthetic matrix for continuous production of metabolites using the entrapment technique. The kinetic parameters of the free and immobilized enzyme were studied. For this purpose, calcium alginate and polyacrylamide beads were prepared. Results: For the synthesis of the immobilized beads, sodium alginate (40.0 g L-1) and calcium chloride (0.4 M) were used. The temperature (50°C) and pH (7.0) optima of the immobilized enzyme remained the same for xylan hydrolysis; however, the enzyme-substrate catalytic reaction time rose from 5.0 to 30.0 minutes compared with the free counterpart.
The diffusion limit of high-molecular-weight xylan (corncob) caused a decline in the Vmax of the immobilized enzyme from 4773 to 203.7 U min-1, whereas the Km value increased from 0.5074 to 0.5722 mg ml-1 with reference to the free enzyme. Immobilized endo-β-1,4-xylanase showed stability at high temperatures compared to the free enzyme: it retained 18% and 9% residual activity at 70°C and 80°C, respectively, whereas the free enzyme completely lost its activity at both temperatures. The immobilized thermozyme displayed sufficient recycling efficiency and could be reused for up to five reaction cycles, indicating that this enzyme can be a plausible candidate for the paper-processing industry. Conclusion: This thermozyme showed better immobilization yield and operational stability for hydrolyzing high-molecular-weight xylan. However, its immobilization properties could be improved further by immobilizing it on different supports for industrial purposes.
Keywords: immobilization, reusability, thermozymes, xylanase
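The Vmax and Km values reported above define Michaelis-Menten kinetics for both enzyme forms; a sketch evaluating the implied reaction rates at an example substrate load (the substrate concentration is our choice, not taken from the abstract):

```python
# Michaelis-Menten rates implied by the abstract's kinetic constants,
# evaluated at an illustrative substrate load of 1.0 mg/ml.
def mm_rate(vmax, km, s):
    """Michaelis-Menten velocity: v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

s = 1.0                                 # mg/ml xylan (our choice)
v_free = mm_rate(4773.0, 0.5074, s)     # free enzyme, U/min
v_immob = mm_rate(203.7, 0.5722, s)     # immobilized enzyme, U/min
print(round(v_free, 1), round(v_immob, 1))
```

The roughly 24-fold drop in rate at this substrate level mirrors the diffusion limitation discussed in the abstract.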
375 Sea Level Rise and Sediment Supply Explain Large-Scale Patterns of Saltmarsh Expansion and Erosion
Authors: Cai J. T. Ladd, Mollie F. Duggan-Edwards, Tjeerd J. Bouma, Jordi F. Pages, Martin W. Skov
Abstract:
Salt marshes are valued for their role in coastal flood protection and carbon storage, and for supporting biodiverse ecosystems. As a biogeomorphic landscape, marshes evolve through complex interactions between sea level rise, sediment supply, wave/current forcing, and socio-economic factors. Climate change and direct human modification could lead to a global decline in marsh extent if left unchecked. Whilst the processes of saltmarsh erosion and expansion are well understood, empirical evidence on the key drivers of long-term lateral marsh dynamics is lacking. In a GIS, saltmarsh areal extent in 25 estuaries across Great Britain was calculated from historical maps and aerial photographs at intervals of approximately 30 years between 1846 and 2016. Data on the key perceived drivers of lateral marsh change (namely sea level rise rates, suspended sediment concentration, bedload sediment flux rates, and the frequency of both river flood and storm events) were collated from national monitoring centres. Continuous datasets did not extend beyond 1970; therefore, the predictor variables that best explained the rate of change of marsh extent between 1970 and 2016 were identified using a Partial Least Squares Regression model. Information about the spread of Spartina anglica (an invasive marsh plant responsible for marsh expansion around the globe) and about coastal engineering works that may have affected marsh extent was also recorded from historical documents, and their impacts on long-term, large-scale marsh extent change were assessed. Results showed that salt marshes in the northern regions of Great Britain expanded by an average of 2.0 ha/yr, whilst marshes in the south eroded by an average of -5.3 ha/yr. Spartina invasion and coastal engineering works could not explain these trends, since a trend of either expansion or erosion preceded these events.
Results from the Partial Least Squares Regression model indicated that the rate of relative sea level rise (RSLR) and suspended sediment concentration (SSC) best explained the patterns of marsh change. RSLR increased from 1.6 to 2.8 mm/yr as SSC decreased from 404.2 to 78.56 mg/l along the north-to-south gradient of Great Britain, coinciding with the shift from marsh expansion to erosion. Regional differences in RSLR and SSC are due to isostatic rebound since deglaciation and to tidal amplitudes, respectively. Marshes exposed to low RSLR and high SSC likely accumulate sediment at the coast suitable for colonisation by marsh plants, and thus expand laterally. In contrast, high RSLR is likely not offset by deposition under low SSC, so the average water depth at the marsh edge increases, allowing larger wind-waves to trigger marsh erosion. Current global declines in sediment flux to the coast are likely to diminish the resilience of salt marshes to RSLR. Monitoring and managing suspended sediment supply is not commonplace but may be critical to mitigating coastal impacts of climate change.
Keywords: lateral saltmarsh dynamics, sea level rise, sediment supply, wave forcing
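The average expansion and erosion rates above can be projected over the 1970-2016 analysis window; a rough illustration (our extrapolation; the abstract's averages may cover the longer 1846-2016 record):

```python
# Cumulative extent change implied by the reported average rates,
# projected over the 1970-2016 modelling window (illustrative only).
years = 2016 - 1970                       # 46-year window
north_rate_ha_yr = 2.0                    # average expansion, north
south_rate_ha_yr = -5.3                   # average erosion, south
print(round(years * north_rate_ha_yr, 1))  # -> 92.0 ha gained on average
print(round(years * south_rate_ha_yr, 1))  # -> -243.8 ha lost on average
```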
374 Testing of Infill Walls with Joint Reinforcement Subjected to in Plane Lateral Load
Authors: J. Martin Leal-Graciano, Juan J. Pérez-Gavilán, A. Reyes-Salazar, J. H. Castorena, J. L. Rivera-Salas
Abstract:
The experimental results on the global behavior of twelve 1:2-scale reinforced concrete frames subjected to in-plane lateral load are presented. The main objective was to generate experimental evidence about the use of steel bars within mortar bed-joints as shear reinforcement in infill walls. Similar to the Canadian and New Zealand standards, the Mexican code includes specifications for this type of reinforcement. However, these specifications were obtained through experimental studies of load-bearing walls, mainly confined walls. Little information is found in the existing literature about the effects of joint reinforcement on the seismic behavior of infill masonry walls. Consequently, the Mexican code establishes the same equations to estimate the contribution of joint reinforcement for both confined walls and infill walls. A confined masonry construction and a reinforced concrete frame infilled with masonry walls have similar appearances. However, substantial differences exist between these two construction systems, mainly related to the sequence of construction and to how the structures support vertical and lateral loads. To achieve the stated objective, ten reinforced concrete frames with masonry infill walls were built and tested in pairs; the two specimens in each pair had identical characteristics except that one of them included joint reinforcement. The variables between pairs were the type of units, the size of the columns of the frame, and the aspect ratio of the wall. All cases included tie-columns and tie-beams on the perimeter of the wall to anchor the joint reinforcement. In addition, two bare frames with characteristics identical to those of the infilled frames were tested, in order to investigate the effects of the infill wall on the behavior of the system under in-plane lateral load. Finally, the experimental results were compared with the predictions of the Mexican code.
All specimens were tested as cantilevers under reversible cyclic lateral load. To simulate gravity load, a constant vertical load was applied at the top of the columns. The results indicate that the contribution of the joint reinforcement to lateral strength depends on the size of the columns of the frame. Larger columns produce a failure mode that is predominantly a sliding mode. Sliding inhibits the formation of new inclined cracks, which are necessary to activate (deform) the joint reinforcement. Regarding the effects of joint reinforcement on the performance of confined masonry walls, many findings were confirmed for infill walls: this type of reinforcement increases the lateral strength of the wall, produces more distributed cracking, and reduces the width of the cracks. Moreover, it reduces the ductility demand of the system at maximum strength. The lateral strength predicted by the Mexican code is adequate in some cases; however, the effect of column size on the contribution of joint reinforcement needs to be better understood.
Keywords: experimental study, infill wall, infilled frame, masonry wall
Procedia PDF Downloads 77
373 Bisphenol-A Concentrations in Urine and Drinking Water Samples of Adults Living in Ankara
Authors: Hasan Atakan Sengul, Nergis Canturk, Bahar Erbas
Abstract:
Drinking water is indispensable for life. With increasing public awareness, the content of drinking water and tap water has become a matter of curiosity, and the presence of Bisphenol-A tops the list of concerns. Bisphenol-A is the chemical most widely used worldwide for the production of polycarbonate plastics and epoxy resins. People are exposed to Bisphenol-A, a chemical that disrupts the endocrine system, almost every day. An average of 5.4 billion kilograms of Bisphenol-A is manufactured each year. The linear formula of Bisphenol-A is (CH₃)₂C(C₆H₄OH)₂, its molecular weight is 228.29, and its CAS number is 80-05-7. Bisphenol-A is known to be used in the manufacturing of plastics, along with various other chemicals. Bisphenol-A, an industrial chemical, is used as a raw material of packaging materials in the monomers of polycarbonate and epoxy resins. Bisphenol-A passes into nutrients through packaging; it contaminates food and penetrates the body when the food is consumed. International research shows that BPA is transported through body fluids, leading to hormonal disorders in animals. Experimental studies on animals report that BPA exposure also affects the gender of the newborn and its time to reach adolescence. The extent to which similar endocrine-disrupting effects occur in humans is a topic of debate in much of the research. In our country, detailed studies on BPA have not been done. However, 'BPA-free' phrases are beginning to appear on plastic packaging such as baby products and water carboys. Accordingly, this situation increases the interest of society in the subject, yet it also causes information pollution. In our country, all national and international studies on exposure to BPA have been examined, and Ankara province has been designated as the testing region.
To assess the effects of plastic use in people's daily habits and the amounts of plastic-derived BPA removed from the body, samples from volunteers living in Ankara were analyzed with a Sciex instrument by means of LC-MS/MS in the laboratory, and the amounts of BPA exposure and removal were determined by comparison with the survey results collected beforehand. The results were compared with similar international studies and the relation between them has been exhibited. For the amount of BPA in drinking water, a minimum of 0.028 µg/L, a maximum of 1.136 µg/L, a mean of 0.29194 µg/L, and SD (standard deviation) = 0.199 were detected. For the amount of BPA in urine, a minimum of 0.028 µg/L, a maximum of 0.48 µg/L, a mean of 0.19181 µg/L, and SD = 0.099 were detected. In conclusion, no linear correlation was found between the amount of BPA in drinking water and the amount of BPA in urine (r = -0.151). The p value of the comparison between the BPA amounts in drinking water and urine is 0.004, which indicates a significant difference and that the amounts of BPA in urine depend on the amounts in drinking water (p < 0.05). This reveals that environmental exposure and daily plastic use habits also have direct effects on the human body.
Keywords: analysis of bisphenol-A, BPA, BPA in drinking water, BPA in urine
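As a rough illustration of the correlation analysis described above, the Pearson coefficient r reported for the paired water/urine BPA measurements can be computed as follows. The data and function name below are invented for demonstration only; they are not the study's measurements.

```python
def pearson_r(xs, ys):
    # Pearson correlation coefficient between paired samples:
    # covariance of the pairs divided by the product of the
    # standard-deviation-like sums of squared deviations.
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired BPA concentrations (µg/L), water vs. urine.
water = [0.10, 0.25, 0.40, 0.55, 0.70]
urine = [0.30, 0.12, 0.28, 0.15, 0.22]
r = pearson_r(water, urine)
```

For this synthetic sample r is weakly negative, analogous to the study's r = -0.151: a value of |r| that small is conventionally read as no meaningful linear correlation, even when a paired comparison of the two distributions is statistically significant.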
Procedia PDF Downloads 128
372 Chemicals to Remove and Prevent Biofilm
Authors: Cynthia K. Burzell
Abstract:
Aequor's Founder, a Marine and Medical Microbiologist, discovered novel, non-toxic chemicals in the ocean that uniquely remove biofilm in minutes and prevent its formation for days. These chemicals, and the over 70 synthesized analogs that Aequor developed, can replace thousands of toxic biocides used in consumer and industrial products and, as new drug candidates, kill biofilm-forming bacteria and fungi Superbugs: the antimicrobial-resistant (AMR) pathogens for which there is no cure. Cynthia Burzell, Ph.D., is a Marine and Medical Microbiologist studying natural mechanisms that inhibit biofilm formation on surfaces in contact with water. In 2002, she discovered a new genus and several new species of marine microbes that produce small molecules that remove biofilm in minutes and prevent its formation for days. The molecules include new antimicrobials that can replace thousands of toxic biocides used in consumer and industrial products and can be developed into new drug candidates to kill the biofilm-forming bacteria and fungi, including the antimicrobial-resistant (AMR) Superbugs for which there is no cure. Today, Aequor has over 70 chemicals that are divided into categories: (1) Novel natural chemicals. Lonza validated that the primary natural chemical removed biofilm in minutes and stated: "Nothing else known can do this at non-toxic doses." (2) Specialty chemicals. 25 of these structural analogs are already approved under the U.S. Environmental Protection Agency (EPA)'s Toxic Substances Control Act, certified as "green" and available for immediate sale. These have been validated for the following agro-industrial verticals: (a) Surface cleaners: The U.S.
Department of Agriculture validated that low concentrations of Aequor's formulations provide deep cleaning of inert, nano, and organic surfaces and materials; (b) Water treatments: NASA validated that one dose of Aequor's treatment in the International Space Station's water reuse/recycling system lasted 15 months without replenishment. DOE validated that our treatments lower energy consumption by over 10% in buildings and industrial processes. Future validations include pilot projects with the EPA to test efficacy in hospital plumbing systems. (c) Algae cultivation and yeast fermentation: The U.S. Department of Energy (DOE) validated that Aequor's treatment boosted the biomass of renewable feedstocks by 40% in half the time, increasing the profitability of biofuels and biobased co-products. DOE also validated increased yields and crop protection of algae under cultivation in open ponds. A private oil and gas company validated decontamination of oilfield water. (3) New structural analogs. These kill Gram-negative and Gram-positive bacteria and fungi alone, in combinations with each other, and in combination with low doses of existing, ineffective antibiotics (including Penicillin), "potentiating" them to kill AMR pathogens at doses too low to trigger resistance. Both the U.S. National Institutes of Health (NIH) and the Department of Defense (DOD) have executed contracts with Aequor to provide the pre-clinical trials needed for these new drug candidates to enter the regulatory approval pipelines. Aequor seeks partners/licensees to commercialize its specialty chemicals, and support to evaluate the optimal methods to scale up several new structural analogs via activity-guided fractionation and/or biosynthesis in order to initiate the NIH and DOD pre-clinical trials.
Keywords: biofilm, potentiation, prevention, removal
Procedia PDF Downloads 99
371 Effect of Graded Level of Nano Selenium Supplementation on the Performance of Broiler Chicken
Authors: Raj Kishore Swain, Kamdev Sethy, Sumanta Kumar Mishra
Abstract:
Selenium is an essential trace element for the chicken, with a variety of biological functions in growth, fertility, the immune system, hormone metabolism, and antioxidant defense systems. Selenium deficiency in chicken causes exudative diathesis, pancreatic dystrophy, and nutritional muscle dystrophy of the gizzard, heart, and skeletal muscle. Additionally, insufficient immunity, lowered production ability, decreased feathering of chickens, and increased embryo mortality may occur due to selenium deficiency. Nano elemental selenium, which is bright red, highly stable, soluble, and of nanometer size in the redox state of zero, has high bioavailability and low toxicity due to its greater surface area, high surface activity, high catalytic efficiency, and strong adsorbing ability. To assess the effect of dietary nano-Se on performance and gene expression in Vencobb broiler birds in comparison to its inorganic form (sodium selenite), four hundred fifty day-old Vencobb broiler chicks were randomly distributed into 9 dietary treatment groups with two replicates of 25 chicks each. The dietary treatments were: T1 (Control group): Basal diet; T2: Basal diet with 0.3 ppm of inorganic Se; T3: Basal diet with 0.01875 ppm of nano-Se; T4: Basal diet with 0.0375 ppm of nano-Se; T5: Basal diet with 0.075 ppm of nano-Se; T6: Basal diet with 0.15 ppm of nano-Se; T7: Basal diet with 0.3 ppm of nano-Se; T8: Basal diet with 0.60 ppm of nano-Se; T9: Basal diet with 1.20 ppm of nano-Se. Nano selenium was synthesized by mixing sodium selenite with reduced glutathione and bovine serum albumin. The experiment was carried out in two phases, a starter phase (0-3 weeks) and a finisher phase (4-5 weeks), in a deep litter system. The highest body weight at the 5th week was observed in T4. The best feed conversion ratio at the end of the 5th week was also observed in T4.
Erythrocytic catalase, glutathione peroxidase, and superoxide dismutase activities were significantly (P < 0.05) higher in all the nano selenium treated groups at the 5th week. The antibody titers (log2) against Ranikhet disease vaccine immunization of 5th-week broiler birds were significantly higher (P < 0.05) in treatments T4 to T7. The selenium levels in liver, breast, kidney, brain, and gizzard significantly (P < 0.05) increased with increasing dietary nano-Se, indicating higher bioavailability of nano-Se compared to inorganic Se. Real-time polymerase chain reaction analysis showed an increase in the expression of the antioxidative gene in the T4 and T7 groups. Therefore, it is concluded that supplementation of nano-selenium at 0.0375 ppm over and above the basal level can improve the body weight, antioxidant enzyme activity, Se bioavailability, and expression of the antioxidative gene in broiler birds.
Keywords: chicken, growth, immunity, nano selenium
Procedia PDF Downloads 177
370 Lactic Acid Solution and Aromatic Vinegar Nebulization to Improve Hunted Wild Boar Carcass Hygiene at Game-Handling Establishment: Preliminary Results
Authors: Rossana Roila, Raffaella Branciari, Lorenzo Cardinali, David Ranucci
Abstract:
The wild boar (Sus scrofa) population has increased strongly across Europe in the last decades, causing severe fauna management issues. In central Italy, wild boar is the main hunted wild game species, with approximately 40,000 animals killed per year in the Umbria region alone. Game meat is characterized by high nutritional value as well as a peculiar taste and aroma, largely appreciated by consumers. This type of meat and products thereof can meet the current consumer demand for higher quality foodstuffs, not only from a nutritional and sensory point of view but also in relation to environmental sustainability, the non-use of chemicals, and animal welfare. The game meat production chain has some gaps from a hygienic point of view: the harvest process is usually conducted in a wild environment where animals can be more easily contaminated during hunting and subsequent practices. The definition and implementation of a certified and controlled supply chain could ensure quality, traceability, and safety for the final consumer and therefore promote game meat products. According to European legislation, in some animal species, such as bovines, the use of weak acid solutions for carcass decontamination is envisaged in order to ensure the maintenance of optimal hygienic characteristics. A preliminary study was carried out to evaluate the applicability of similar strategies to control the hygienic level of wild boar carcasses. The carcasses, harvested according to the selective method and processed in the game-handling establishment, were treated by nebulization with two different solutions: a 2% food-grade lactic acid solution and an aromatic vinegar. Swab samples were taken before treatment and at different moments after treatment of the carcass surfaces and subsequently tested for Total Aerobic Mesophilic Load, Total Aerobic Psychrophilic Load, Enterobacteriaceae, Staphylococcus spp., and lactic acid bacteria.
The results obtained for the targeted microbial populations showed a positive effect of the application of the lactic acid solution on all the populations investigated, while the aromatic vinegar showed a lower effect on bacterial growth. This study could lay the foundations for the optimization of the use of a lactic acid solution to treat wild boar carcasses, aiming to guarantee a good hygienic level and the safety of the meat.
Keywords: game meat, food safety, process hygiene criteria, microbial population, microbial growth, food control
Procedia PDF Downloads 159
369 Improving the Utility of Social Media in Pharmacovigilance: A Mixed Methods Study
Authors: Amber Dhoot, Tarush Gupta, Andrea Gurr, William Jenkins, Sandro Pietrunti, Alexis Tang
Abstract:
Background: The COVID-19 pandemic has driven pharmacovigilance towards a new paradigm. Nowadays, more people than ever before are recognising and reporting adverse reactions from medications, treatments, and vaccines. In the modern era, with over 3.8 billion users, social media has become the most accessible medium for people to voice their opinions and so provides an opportunity to engage with more patient-centric and accessible pharmacovigilance. However, the pharmaceutical industry has been slow to incorporate social media into its modern pharmacovigilance strategy. This project aims to make social media a more effective tool in pharmacovigilance, and so reduce drug costs, improve drug safety and improve patient outcomes. This will be achieved by firstly uncovering and categorising the barriers facing the widespread adoption of social media in pharmacovigilance. Following this, the potential opportunities of social media will be explored. We will then propose realistic, practical recommendations to make social media a more effective tool for pharmacovigilance. Methodology: A comprehensive systematic literature review was conducted to produce a categorised summary of these barriers. This was followed by conducting 11 semi-structured interviews with pharmacovigilance experts to confirm the literature review findings whilst also exploring the unpublished and real-life challenges faced by those in the pharmaceutical industry. Finally, a survey of the general public (n = 112) ascertained public knowledge, perception, and opinion regarding the use of their social media data for pharmacovigilance purposes. This project stands out by offering perspectives from the public and pharmaceutical industry that fill the research gaps identified in the literature review. Results: Our results gave rise to several key analysis points. 
Firstly, inadequacies of current Natural Language Processing algorithms hinder effective pharmacovigilance data extraction from social media, and where data extraction is possible, there are significant questions over its quality. Social media also contains a variety of biases towards common drugs, mild adverse drug reactions, and the younger generation. Additionally, outdated regulations for social media pharmacovigilance do not align with the new General Data Protection Regulation (GDPR), creating ethical ambiguity about data privacy and level of access. This leads to an underlying mindset of avoidance within the pharmaceutical industry, as firms are disincentivised by the legal, financial, and reputational risks associated with breaking ambiguous regulations. Conclusion: Our project uncovered several barriers that prevent effective pharmacovigilance on social media. As such, social media should be used to complement traditional sources of pharmacovigilance rather than as a sole source of pharmacovigilance data. However, this project adds further value by proposing five practical recommendations that improve the effectiveness of social media pharmacovigilance. These include: prioritising health-orientated social media; improving technical capabilities through investment and strategic partnerships; setting clear regulatory guidelines using multi-stakeholder processes; creating an adverse drug reaction reporting interface inbuilt into social media platforms; and, finally, developing educational campaigns to raise awareness of the use of social media in pharmacovigilance. Implementation of these recommendations would speed up the efficient, ethical, and systematic adoption of social media in pharmacovigilance.
Keywords: adverse drug reaction, drug safety, pharmacovigilance, social media
Procedia PDF Downloads 82
368 The Negative Effects of Controlled Motivation on Mathematics Achievement
Authors: John E. Boberg, Steven J. Bourgeois
Abstract:
The decline in student engagement and motivation through the middle years is well documented and clearly associated with a decline in mathematics achievement that persists through high school. To combat this trend and, very often, to meet high-stakes accountability standards, a growing number of parents, teachers, and schools have implemented various methods to incentivize learning. However, according to Self-Determination Theory, forms of incentivized learning such as public praise, tangible rewards, or threats of punishment tend to undermine intrinsic motivation and learning. By focusing on external forms of motivation that thwart autonomy in children, adults also potentially threaten relatedness measures such as trust and emotional engagement. Furthermore, these controlling motivational techniques tend to promote shallow forms of cognitive engagement at the expense of more effective deep processing strategies. Therefore, any short-term gains in apparent engagement or test scores are overshadowed by long-term diminished motivation, resulting in inauthentic approaches to learning and lower achievement. The current study focuses on the relationships between student trust, engagement, and motivation during these crucial years as students transition from elementary to middle school. In order to test the effects of controlled motivational techniques on achievement in mathematics, this quantitative study was conducted on a convenience sample of 22 elementary and middle schools from a single public charter school district in the south-central United States. The study employed multi-source data from students (N = 1,054), parents (N = 7,166), and teachers (N = 356), along with student achievement data and contextual campus variables. Cross-sectional questionnaires were used to measure the students’ self-regulated learning, emotional and cognitive engagement, and trust in teachers. 
Parents responded to a single item on incentivizing the academic performance of their child, and teachers responded to a series of questions about their acceptance of various incentive strategies. Structural equation modeling (SEM) was used to evaluate model fit and analyze the direct and indirect effects of the predictor variables on achievement. Although a student’s trust in the teacher positively predicted both emotional and cognitive engagement, none of these three predictors accounted for any variance in achievement in mathematics. The parents’ use of incentives, on the other hand, predicted a student’s perception of his or her controlled motivation, and these two variables had significant negative effects on achievement. While controlled motivation had the greatest effects on achievement, parental incentives demonstrated both direct and indirect effects on achievement through the students’ self-reported controlled motivation. Comparing upper elementary student data with middle-school student data revealed that controlling forms of motivation may be taking their toll on student trust and engagement over time. While parental incentives positively predicted both cognitive and emotional engagement in the younger sub-group, such forms of controlling motivation negatively predicted both trust in teachers and emotional engagement in the middle-school sub-group. These findings support the claims, posited by Self-Determination Theory, about the dangers of incentivizing learning. Short-term gains belie the underlying damage to motivational processes that lead to decreased intrinsic motivation and achievement. Such practices also appear to thwart basic human needs such as relatedness.
Keywords: controlled motivation, student engagement, incentivized learning, mathematics achievement, self-determination theory, student trust
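The direct/indirect effect decomposition that SEM reports can be sketched numerically. In a simple mediation path of the kind described (incentives influence achievement both directly and through controlled motivation), the indirect effect is the product of the two component paths. The coefficients below are hypothetical, invented purely for illustration; they are not the study's estimates.

```python
def total_effect(direct, path_a, path_b):
    # Mediation decomposition: the indirect effect of the predictor on the
    # outcome through the mediator is path_a * path_b; the total effect is
    # the direct effect plus the indirect effect.
    indirect = path_a * path_b
    return direct + indirect, indirect

# Hypothetical standardized coefficients:
#   incentives -> controlled motivation (path_a),
#   controlled motivation -> achievement (path_b),
#   incentives -> achievement (direct).
total, indirect = total_effect(direct=-0.10, path_a=0.40, path_b=-0.30)
```

Under these made-up values the indirect effect (-0.12) is larger in magnitude than the direct effect, which is the pattern the abstract describes: much of the damage of incentives runs through the student's controlled motivation.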
Procedia PDF Downloads 220
367 Environmental Catalysts for Refining Technology Application: Reduction of CO Emission and Gasoline Sulphur in Fluid Catalytic Cracking Unit
Authors: Loganathan Kumaresan, Velusamy Chidambaram, Arumugam Velayutham Karthikeyani, Alex Cheru Pulikottil, Madhusudan Sau, Gurpreet Singh Kapur, Sankara Sri Venkata Ramakumar
Abstract:
Environmentally driven regulations throughout the world stipulate dramatic improvements in the quality of transportation fuels and refining operations. Exhaust gases such as CO, NOx, and SOx from stationary sources (e.g., refineries) and motor vehicles contribute to a large extent to air pollution. The refining industry is under constant environmental pressure to achieve more rigorous standards on the sulphur content of transportation fuels and on other off-gas emissions. The fluid catalytic cracking unit (FCCU) is a major secondary process in the refinery for gasoline and diesel production. The CO-combustion promoter additive and the gasoline sulphur reduction (GSR) additive are catalytic systems used in the FCCU, along with the main FCC catalyst, to assist the combustion of CO to CO₂ in the regenerator and to regulate sulphur in the gasoline fraction, respectively. The effectiveness of these catalysts is governed by the active metal used, its dispersion, the type of base material employed, and the retention characteristics of the additive in the FCCU, such as attrition resistance and density. The challenge is to have a high-density microsphere catalyst support for retention and high activity of the active metals, as these catalyst additives are used in low concentration compared to the main FCC catalyst. The first part of the present paper discusses the development of high-density microspheres of nanocrystalline alumina by a hydrothermal method for the CO combustion promoter application. Performance evaluation of the additive was conducted under simulated regenerator conditions and shows CO combustion efficiency above 90%. The second part discusses the efficacy of a co-precipitation method for the generation of active crystalline spinels of Zn, Mg, and Cu with aluminium oxides as an additive. The characterization and a micro activity test using heavy combined hydrocarbon feedstock at FCC unit conditions to evaluate gasoline sulphur reduction activity are presented.
These additives were characterized by X-Ray Diffraction, NH₃-TPD, N₂ sorption analysis, and TPR analysis to establish structure-activity relationships. Sulphur removal mechanisms involving hydrogen transfer, aromatization, and alkylation functionalities are established to rank GSR additives for their activity, selectivity, and gasoline sulphur removal efficiency. The sulphur shifting to other liquid products such as heavy naphtha, light cycle oil, and clarified oil was also studied. PIONA analysis of the liquid product reveals a 20-40% reduction of sulphur in gasoline without compromising the research octane number (RON) of gasoline or the olefins content.
Keywords: hydrothermal, nanocrystalline, spinel, sulphur reduction
Procedia PDF Downloads 97
366 i2kit: A Tool for Immutable Infrastructure Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservice architectures are increasingly used in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency, and the time to market of business logic. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, and data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, which would affect running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers.
Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set into other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing, and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer implies more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35MB). Also, the system is more secure, since linuxkit installs the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
Keywords: container, deployment, immutable infrastructure, microservice
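The transformation described above (declarative microservice definition in, CloudFormation-style template out) can be sketched as follows. This is a minimal illustration only: the real i2kit input format is not shown in the abstract, so the field names, function name, and template fragment here are all assumptions, and the AMI id is a placeholder for the image that linuxkit would build on the fly.

```python
def to_cloudformation(service):
    # Expand one declarative microservice definition (a pod of containers
    # deployed as a single VM) into a CloudFormation-style template dict:
    # one EC2 instance plus a load balancer that exposes it.
    name = service["name"]
    resources = {
        name + "Instance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                # In i2kit the AMI would be built on the fly with linuxkit
                # from the pod's container images; a placeholder is used here.
                "ImageId": "ami-PLACEHOLDER",
                "InstanceType": service.get("instance_type", "t2.micro"),
            },
        },
        name + "LoadBalancer": {
            "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
            "Properties": {"Name": name + "-lb"},
        },
    }
    return {"AWSTemplateFormatVersion": "2010-09-09", "Resources": resources}

# Hypothetical definition: a microservice named "Api" whose pod has two containers.
svc = {"name": "Api", "containers": ["nginx", "app"], "instance_type": "t3.small"}
template = to_cloudformation(svc)
```

The load balancer's endpoint would then be injected into dependent microservices as an environment variable, which is the service-discovery mechanism the paper describes.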
Procedia PDF Downloads 179
365 Scalable Performance Testing: Facilitating The Assessment Of Application Performance Under Substantial Loads And Mitigating The Risk Of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools like JMeter, EC2 instances (virtual machines), cloud logs (for monitoring errors and logs), EFS (file storage system), and security groups offers several key benefits for organizations. Creating a performance test framework with this approach helps optimize resource utilization and provides effective benchmarking, increased reliability, and cost savings by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance.
The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master. The master then consolidates these results into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application.
Keywords: identify crashes of application under heavy load, JMeter with cloud services, scalable performance testing, JMeter master and slave using cloud services
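The master's consolidation step described above can be sketched in a few lines. This is not JMeter code; it is an illustrative stand-in for how per-slave result summaries might be merged into one report, and the field names are invented for the example.

```python
def consolidate(slave_results):
    # Merge per-slave summaries into one consolidated report, mirroring the
    # metrics the master aggregates: total requests, errors, error rate, and
    # a request-weighted mean response time across slaves.
    total = sum(r["requests"] for r in slave_results)
    errors = sum(r["errors"] for r in slave_results)
    mean_rt = sum(r["requests"] * r["mean_response_ms"] for r in slave_results) / total
    return {
        "requests": total,
        "errors": errors,
        "error_rate": errors / total,
        "mean_response_ms": mean_rt,
    }

# Hypothetical summaries transmitted back by two slave instances.
report = consolidate([
    {"requests": 1000, "errors": 5, "mean_response_ms": 120.0},
    {"requests": 2000, "errors": 10, "mean_response_ms": 150.0},
])
```

Weighting the mean response time by request count matters here: a slave that sent twice as many requests should influence the consolidated mean twice as much.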
Procedia PDF Downloads 27
364 Comparing Practices of Swimming in the Netherlands against a Global Model for Integrated Development of Mass and High Performance Sport: Perceptions of Coaches
Authors: Melissa de Zeeuw, Peter Smolianov, Arnold Bohl
Abstract:
This study was designed to help improve international performance as well as increase swimming participation in the Netherlands. Over 200 sources of literature on sport delivery systems from 28 Australasian, North and South American, and Western and Eastern European countries were analyzed to construct a globally applicable model of high performance swimming integrated with mass participation, comprising the following seven elements across three levels: Micro level (operations, processes, and methodologies for the development of individual athletes): 1. Talent search and development, 2. Advanced athlete support. Meso level (infrastructures, personnel, and services enabling sport programs): 3. Training centers, 4. Competition systems, 5. Intellectual services. Macro level (socio-economic, cultural, legislative, and organizational): 6. Partnerships with supporting agencies, 7. Balanced and integrated funding and structures of mass and elite sport. This model emerged from the integration of instruments that have been used to analyse and compare national sport systems. The model has received scholarly validation and has been shown to be a framework for program analysis that is not culturally bound. It has recently been accepted as a model for further understanding North American sport systems, including (in chronological order of publications) US rugby, tennis, soccer, swimming, and volleyball. The above model was used to design a questionnaire of 42 statements reflecting desired practices. The statements were validated by 12 international experts, including executives from sport governing bodies, academics who have published on high performance and sport development, and swimming coaches and administrators. This study used both highly structured and open-ended qualitative analysis tools, including a survey of swim coaches in which open responses accompanied structured questions.
After collection of the surveys, semi-structured discussions with Federation coaches were conducted to add triangulation to the findings. Lastly, a content analysis of Dutch Swimming’s website and organizational documentation was conducted. A representative sample of 1,600 Dutch swim coaches and administrators was contacted via email addresses from the Royal Dutch Swimming Federation’s database. Fully completed questionnaires were returned by 122 coaches from all of the country’s key regions, for a response rate of 7.63%, higher than the response rates of the previously mentioned US studies that used the same model and method. Results suggest possible enhancements at the macro level (e.g., greater public and corporate support to prepare and hire more coaches and to address the lack of facilities, money, and publicity at the mass participation level in order to make swimming affordable for all), at the meso level (e.g., comprehensive education for all coaches and a full spectrum of swimming pools, particularly 50-meter pools), and at the micro level (e.g., better preparation of athletes for a future outside swimming and better use of swimmers to stimulate swimming development). Best Dutch swimming management practices (e.g., comprehensive support for the most talented swimmers, who win Olympic medals) as well as relevant international practices available for transfer to the Netherlands (e.g., high school competitions) are discussed.
Keywords: sport development, high performance, mass participation, swimming
363 The Political Economy of Media Privatisation in Egypt: State Mechanisms and Continued Control
Authors: Mohamed Elmeshad
Abstract:
During the mid-1990s, Egypt became obliged to implement the Economic Reform and Structural Adjustment Program, which included broad economic liberalization, expansion of the private sector, and a contraction in government spending. This coincided with attempts to appear more democratic and open to liberalizing public space and discourse. At the same time, economic pressures and the proliferation of social media access and activism had increased pressure to open the mediascape and remove it from the clutches of the government, which had monopolized print and broadcast mass media for over four decades by that point. However, the mechanisms that governed the privatization of mass media allowed for sustained government control, even through the prism of ostensibly privately owned newspapers and television stations. These mechanisms involve barriers to entry from financial and security perspectives, as well as operational capacities of distribution and access to the means of production. The power dynamics between mass media establishments and the state were moulded during this period in a novel way, as were power dynamics within media establishments; the changes in the country’s political economy itself in some ways mirrored these developments. This paper will examine these dynamics and shed light on the political economy of Egypt’s newly privatized mass media, especially in the early 2000s. Methodology: This study will rely on semi-structured interviews with individuals involved in these changes from the perspective of the media organizations. It will also map out the process of media privatization by looking at the administrative, operative, and legislative institutions and contexts in order to draw conclusions on methods of control and the role of the state during the process of privatization.
Finally, a brief discourse analysis will be necessary in order to aptly convey how these factors ultimately reflected on media output. Findings and conclusion: The development of Egypt’s private, “independent” media mirrored the trajectory of transitions in the country’s political economy. Liberalization of the economy meant that a growing class of business owners would explore the opportunities that such new markets offered. However, the regime’s attempts to control access to certain forms of capital, especially in sectors such as the media, affected the structure of print and broadcast media, as well as the institutions that would govern them. Like the process of liberalisation itself, much of the regime’s manoeuvring with regard to the privatization of media was used, haphazardly, to indirectly expand the regime’s and its ruling party’s ability to retain influence, while creating a believable façade of openness. In this paper, we attempt to uncover these mechanisms and analyse our findings in ways that explain how the manifestations prevalent in the context of a privatizing media space in a transitional Egypt provide evidence of both the intentions of this transition and the ways in which it was being held back.
Keywords: business, mass media, political economy, power, privatisation