Search results for: mitigation tubes
338 The Impact of PM-Based Regulations on the Concentration and Sources of Fine Organic Carbon in the Los Angeles Basin from 2005 to 2015
Authors: Abdulmalik Altuwayjiri, Milad Pirhadi, Sina Taghvaee, Constantinos Sioutas
Abstract:
A significant portion of PM₂.₅ mass concentration is carbonaceous matter (CM), which exists mainly in the form of organic carbon (OC). Ambient OC originates from a multitude of sources and plays an important role in global climate effects, visibility degradation, and human health. In this study, positive matrix factorization (PMF) was utilized to identify and quantify the long-term contribution of PM₂.₅ sources to total OC mass concentration in central Los Angeles (CELA) and Riverside (i.e., the receptor site), using the chemical speciation network (CSN) database between 2005 and 2015, a period during which several state and local regulations on tailpipe emissions were implemented in the area. Our PMF resolved five different factors, including tailpipe emissions, non-tailpipe emissions, biomass burning, secondary organic aerosol (SOA), and local industrial activities for both sampling sites. The contribution of vehicular exhaust emissions to the OC mass concentrations significantly decreased from 3.5 µg/m³ in 2005 to 1.5 µg/m³ in 2015 (by about 58%) at CELA, and from 3.3 µg/m³ in 2005 to 1.2 µg/m³ in 2015 (by nearly 62%) at Riverside. Additionally, SOA contribution to the total OC mass, showing higher levels at the receptor site, increased from 23% in 2005 to 33% and 29% in 2010 and 2015, respectively, in Riverside, whereas the corresponding contribution at the CELA site was 16%, 21% and 19% during the same period. Biomass burning maintained an almost constant relative contribution over the whole period. Moreover, while the adopted regulations and policies were very effective at reducing the contribution of tailpipe emissions, they have led to an overall increase in the fractional contributions of non-tailpipe emissions to total OC in CELA (about 14%, 28%, and 28% in 2005, 2010 and 2015, respectively) and Riverside (22%, 27% and 26% in 2005, 2010 and 2015), underscoring the necessity to develop equally effective mitigation policies targeting non-tailpipe PM emissions.
Keywords: PM₂.₅, organic carbon, Los Angeles megacity, PMF, source apportionment, non-tailpipe emissions
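As an illustration of the receptor-modelling step described in this abstract, the sketch below factorizes a species-by-sample matrix into source contributions and profiles. It is a minimal sketch only: scikit-learn's NMF stands in for EPA PMF (which additionally weights each observation by its measurement uncertainty), and the matrix dimensions, the assumption that species index 0 is OC, and the concentration values are all illustrative, not taken from the study.

```python
# Minimal receptor-model source-apportionment sketch on synthetic data.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_samples, n_species, n_factors = 120, 25, 5      # e.g., CSN samples, measured species, 5 sources

X = rng.gamma(shape=2.0, scale=1.0, size=(n_samples, n_species))  # placeholder concentrations

model = NMF(n_components=n_factors, init="nndsvda", max_iter=1000)
G = model.fit_transform(X)     # factor contributions per sample (n_samples x n_factors)
F = model.components_          # factor profiles per species    (n_factors x n_species)

# Fractional contribution of each factor to total OC, assuming species index 0 is OC
oc_by_factor = G * F[:, 0]                      # OC mass attributed to each factor, per sample
share = oc_by_factor.sum(axis=0) / oc_by_factor.sum()
print("Factor shares of OC:", np.round(share, 3))
```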
Procedia PDF Downloads 198
337 Post-Pandemic Public Space, Case Study of Public Parks in Kerala
Authors: Nirupama Sam
Abstract:
COVID-19, the greatest pandemic since the turn of the century, presents several issues for urban planners, the most significant of which is determining appropriate mitigation techniques for creating pandemic-friendly and resilient public spaces. The study is conducted in four stages. The first stage consisted of literature reviews to examine the evolution and transformation of public spaces during pandemics throughout history and the role of public spaces during pandemic outbreaks. The second stage is to determine the factors that influence the success of public spaces, which was accomplished by an analysis of current literature and case studies. The influencing factors are categorized under comfort and image, uses and activity, access and linkages, and sociability. The third stage is to establish the priority of the identified factors, for which a questionnaire survey of stakeholders is conducted and certain factors are analyzed with the help of GIS tools. COVID-19 has been in effect in India for the last two years. Kerala has the highest daily COVID-19 prevalence due to its high population density, making it more susceptible to viral outbreaks. Despite all preventive measures taken against COVID-19, Kerala remains the worst-affected state in the country. Finally, two live case studies of the hardest-hit localities, namely Subhash Bose Park and Napier Museum Park in the Ernakulam and Trivandrum districts of Kerala, respectively, were chosen as study areas for the survey. The responses to the questionnaire were analyzed using SPSS to determine the weights of the influencing factors. The spatial success of the selected case studies was examined using a GIS interpolation model. Following the overall assessment, the fourth stage is to develop strategies and guidelines for planning public spaces to make them more efficient and robust, which further leads to improved quality, safety and resilience to future pandemics.
Keywords: urban design, public space, covid-19, post-pandemic, public spaces
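The abstract mentions a GIS interpolation model for mapping the spatial "success" of each park but does not name the method. The sketch below assumes inverse-distance weighting (IDW) purely for illustration; the survey points, weighted scores, and grid are hypothetical placeholders.

```python
# Minimal IDW sketch for spreading questionnaire-derived scores over a park grid.
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Interpolate values at xy_query from scattered xy_known points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                 # avoid division by zero at sample locations
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Hypothetical survey points (x, y in metres) and weighted success scores (0-5)
pts = np.array([[10.0, 20.0], [50.0, 80.0], [90.0, 30.0], [40.0, 60.0]])
scores = np.array([3.8, 4.2, 2.9, 3.5])
grid = np.array([[x, y] for x in range(0, 100, 25) for y in range(0, 100, 25)], dtype=float)
print(idw(pts, scores, grid).round(2))
```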
Procedia PDF Downloads 137
336 Modelling Insider Attacks in Public Cloud
Authors: Roman Kulikov, Svetlana Kolesnikova
Abstract:
Over the last decade, Cloud Computing technologies have rapidly become ubiquitous. Each year more and more organizations, corporations, internet services and social networks trust their business-sensitive information to the Public Cloud. The data storage in the Public Cloud is protected by security mechanisms such as firewalls, cryptography algorithms, backups, etc. In this way, however, only outsider attacks can be prevented, whereas virtualization tools can be easily compromised by an insider. The protection of the Public Cloud's critical elements from an internal intruder remains extremely challenging. A hypervisor, also called a virtual machine manager, is a program that allows multiple operating systems (OS) to share a single hardware processor in Cloud Computing. One of the hypervisor's functions is to enforce access control policies. Furthermore, it prevents guest OS from disrupting each other and from accessing each other's memory or disk space. The hypervisor is one of the most critical and vulnerable elements in Cloud Computing infrastructure. Nevertheless, it has been poorly protected from being compromised by an insider. By exploiting certain vulnerabilities, privilege escalation can be easily achieved in insider attacks on the hypervisor. In this way, an internal intruder who has compromised one process is able to gain control of the entire virtual machine. Thereafter, the consequences of insider attacks in the Public Cloud might be more catastrophic and significant to virtual tools and sensitive data than those of outsider attacks. So far, almost no preventive security countermeasures have been developed. There has been little attention paid to developing models to assist risk mitigation strategies. In this paper, a formal model of insider attacks on the hypervisor is designed. Our analysis identifies critical hypervisor vulnerabilities that can be easily compromised by an internal intruder. Consequently, possible conditions for successful attack implementation are uncovered. Hence, the development of preventive security countermeasures can be improved on the basis of the proposed model.
Keywords: insider attack, public cloud, cloud computing, hypervisor
Procedia PDF Downloads 361
335 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling
Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed
Abstract:
The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm on its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) single-hidden layer, and (2) double-hidden layer feedforward back propagation network. Results obtained revealed generally that the GDM optimisation algorithm, with its adaptive learning capability, used a relatively shorter time in both training and validation phases than the Levenberg-Marquardt (LM) and Bayesian regularisation (Br) algorithms, though learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions for 1-day and 5-day ahead forecasts with the ANN model. In specific statistical terms, on average, model performance efficiency using the coefficient of efficiency (CE) statistic was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, respectively, for the training and validation phases. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, it is imperative to state that the adoption of ANN for real-time forecasting should employ training algorithms that do not have the computational overhead of LM, which requires computation of the Hessian matrix, protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and quality of the forecast as well as mitigation of network overfitting. On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions on the overall network forecast performance.
Keywords: streamflow, neural network, optimisation, algorithm
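The evaluation statistics named in this abstract are straightforward to compute. The sketch below implements the coefficient of efficiency (in its common Nash-Sutcliffe form for streamflow modelling, which is assumed here) together with MAE, MAPE and MSRE, applied to a placeholder pair of observed and ANN-simulated flow series.

```python
# Minimal sketch of forecast-evaluation statistics for observed vs. simulated streamflow.
import numpy as np

def ce(obs, sim):     # coefficient of efficiency (Nash-Sutcliffe form)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mae(obs, sim):    # mean absolute error
    return np.mean(np.abs(obs - sim))

def mape(obs, sim):   # mean absolute percentage error
    return 100.0 * np.mean(np.abs((obs - sim) / obs))

def msre(obs, sim):   # mean squared relative error
    return np.mean(((obs - sim) / obs) ** 2)

obs = np.array([12.4, 15.1, 30.2, 55.7, 41.0, 22.3, 18.9])   # placeholder flows (m3/s)
sim = np.array([11.8, 16.0, 28.5, 58.1, 39.2, 23.0, 17.5])
for name, f in [("CE", ce), ("MAE", mae), ("MAPE %", mape), ("MSRE", msre)]:
    print(f"{name}: {f(obs, sim):.3f}")
```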
Procedia PDF Downloads 152
334 Experimental Simulations of Aerosol Effect to Landfalling Tropical Cyclones over Philippine Coast: Virtual Seeding Using WRF Model
Authors: Bhenjamin Jordan L. Ona
Abstract:
Weather modification is an act of altering weather systems that attracts interest in scientific studies. Cloud seeding is a common form of weather alteration. On the same principle, tropical cyclone mitigation experiments follow the methods of cloud seeding, with storm intensity as the quantity to account for. This study will present the effects of aerosols on tropical cyclone cloud microphysics and intensity. The framework of the Weather Research and Forecasting (WRF) model incorporated with the Thompson aerosol-aware scheme is the prime host to support the aerosol-cloud microphysics calculations of cloud condensation nuclei (CCN) ingested into the tropical cyclones before making landfall over the Philippine coast. The coupled microphysical and radiative effects of aerosols will be analyzed using numerical data conditions of Tropical Storm Ketsana (2009), Tropical Storm Washi (2011), and Typhoon Haiyan (2013) associated with varying CCN number concentrations per simulation per typhoon: clean maritime, polluted, and very polluted, having 300 cm⁻³, 1000 cm⁻³, and 2000 cm⁻³ initial aerosol number concentrations, respectively. Aerosol species like sulphates, sea salts, black carbon, and organic carbon will be used as cloud nuclei and mineral dust as ice nuclei (IN). To make the study as realistic as possible, the period of biomass burning due to forest fires in Indonesia starting October 2015, during which Typhoons Mujigae/Kabayan and Koppu/Lando were effectively seeded with aerosol emissions comprising mainly black carbon and organic carbon, will be considered. The emission data that will be used are from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). The physical mechanism/s of intensification or deintensification of tropical cyclones will be determined after the seeding experiment analyses.
Keywords: aerosol, CCN, IN, tropical cyclone
Procedia PDF Downloads 296
333 Mitigation of Profitable Problems: Level of Hotel Quality Management Program and Environmental Management Practices Towards Performance
Authors: Siti Anis Nadia Abu Bakar, Vani Tanggamani
Abstract:
Over recent years, quality and environmental management practices have become necessary tasks in the hospitality industry in order to provide high-quality services, a comfortable and safe environment for occupants, an innovative character and shareholders' satisfaction, and sustainable environmental and social added value. Numerous studies have observed and measured quality management program (QMProg) and environmental management practices (EMPrac) independently. This paper analyzed the level of QMProg and EMPrac in the hospitality industry, particularly their effect on hotel performance, specifically in the context of Malaysia, as the hotel industry in Malaysia has contributed tremendously to the development of the Malaysian tourism industry. The research objectives are: (1) to analyze how the level of QMProg influences firm performance; (2) to investigate the level of EMPrac and its influence on firm performance. This paper contributes to the literature by providing added value to service industry strategic decision-making processes by helping to predict the varying impacts of positive and negative corporate social responsibility (CSR) activities on financial performance in their respective industries. Further, this paper also contributes to developing more applicable CSR strategies. As a matter of fact, the findings of this paper have contributed towards an integrated management system that will assist a firm in implementing its environmental strategy by creating a higher level of accountability for environmental performance. The best results in environmental systems have instigated managers to explore more options when dealing with problems, especially problems involving the reputation of their hotel. In conclusion, the results of the study indicate that the best CSR strategies of the quality and environmental management practices influence hotel performance.
Keywords: corporate social responsibility (CSR), environmental management practices (EMPrac), performance (PERF), quality management program (QMProg)
Procedia PDF Downloads 374
332 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters
Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev
Abstract:
Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain, as early as possible, signals about the events which are occurring or may occur and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the global Internet are developed. Information in Romanian is of special interest for us. In order to obtain the mentioned tools, we should follow several steps, divided into a preparatory stage and a processing stage. Throughout the first stage, we manually collected over 724 news articles and classified them into 10 categories of social disasters. This corpus constitutes more than 150 thousand words. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets has been used. We deal with the problem of inhabitants' evacuation in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique will be used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended by adding time, the analysis modules of PIPE such as Generalized Stochastic Petri Nets (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis have been used. These modules helped us to obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to its dynamics.
Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters
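The Petri-net firing rule that underlies the reachability and invariant analyses mentioned above can be stated compactly with incidence matrices. The sketch below is a toy two-room evacuation net written for illustration only; its places, transitions and weights are assumptions and are not taken from the study's PIPE models.

```python
# Minimal Petri-net sketch: a transition is enabled when the marking covers its
# input arcs, and firing it updates the marking m' = m - Pre[:, t] + Post[:, t].
import numpy as np

places = ["room_A", "corridor", "exit"]
# Rows = places, columns = transitions (leave_A, reach_exit)
pre  = np.array([[1, 0],
                 [0, 1],
                 [0, 0]])          # tokens consumed by each transition
post = np.array([[0, 0],
                 [1, 0],
                 [0, 1]])          # tokens produced by each transition
marking = np.array([5, 0, 0])      # five persons initially in room_A

def enabled(m, t):
    return np.all(m >= pre[:, t])

def fire(m, t):
    return m - pre[:, t] + post[:, t]

# Fire enabled transitions until none remains enabled (all persons have reached the exit)
while any(enabled(marking, t) for t in range(pre.shape[1])):
    t = next(t for t in range(pre.shape[1]) if enabled(marking, t))
    marking = fire(marking, t)
print(dict(zip(places, marking.tolist())))   # all five tokens (persons) end at the exit
```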
Procedia PDF Downloads 197
331 Rights-Based Approach to Artificial Intelligence Design: Addressing Harm through Participatory ex ante Impact Assessment
Authors: Vanja Skoric
Abstract:
The paper examines whether the impacts of artificial intelligence (AI) can be meaningfully addressed through the rights-based approach to AI design, investigating in particular how the inclusive, participatory process of assessing the AI impact would make this viable. There is a significant gap between envisioning rights-based AI systems and their practical application. Plausibly, internalizing a human rights approach within the AI design process might be achieved through identifying and assessing the implications of AI features for human rights, especially considering the case of vulnerable individuals and communities. However, there is no clarity or consensus on how such an instrument should be operationalised to usefully identify the impact, mitigate harms and meaningfully ensure relevant stakeholders' participation. In practice, ensuring the meaningful inclusion of those individuals, groups, or entire communities who are affected by the use of the AI system is a prerequisite for a process seeking to assess human rights impacts and risks. Engagement in the entire process of the impact assessment should enable those affected and interested to access information and better understand the technology, product, or service and resulting impacts, but also to learn about their rights and the respective obligations and responsibilities of developers and deployers to protect and/or respect these rights. This paper will provide an overview of the study and practice of the participatory design process for AI, including inclusive impact assessment and its main elements, propose a framework, and discuss the lessons learned from the existing theory. In addition, it will explore pathways for enhancing and promoting individual and group rights through such engagement by discussing when, how, and whom to include, at which stage of the process, and what the prerequisites for meaningful engagement are. The overall aim is to ensure use of the technology in a way that works for the benefit of society, individuals, and particular (historically marginalised) groups.
Keywords: rights-based design, AI impact assessment, inclusion, harm mitigation
Procedia PDF Downloads 150
330 Numerical Simulation of Air Pollutant Using Coupled AERMOD-WRF Modeling System over Visakhapatnam: A Case Study
Authors: Amit Kumar
Abstract:
Accurate identification of deteriorated air quality regions is very helpful in devising better environmental practices and mitigation efforts. In the present study, an attempt has been made to identify the air pollutant dispersion patterns, especially of NOX due to vehicular and industrial sources, over a rapidly developing urban city, Visakhapatnam (17°42' N, 83°20' E), India, during April 2009. Using the emission factors of different vehicles as well as the industry, a high-resolution 1 km x 1 km gridded emission inventory has been developed for Visakhapatnam city. The dispersion model AERMOD, with explicit representation of planetary boundary layer (PBL) dynamics and offline coupled through a developed coupler mechanism with the high-resolution mesoscale model WRF-ARW, is used in the work for simulating the dispersion patterns of NOX. The meteorological as well as PBL parameters obtained by employing two PBL schemes of the WRF-ARW model, viz. the non-local Yonsei University (YSU) and the local Mellor-Yamada-Janjic (MYJ) schemes, which reasonably represent the boundary layer parameters, are considered for integrating AERMOD. Significantly different dispersion patterns of NOX have been noticed between summer and winter months. The simulated NOX concentration is validated with six available monitoring stations of the Central Pollution Control Board, India. Statistical analysis of model-evaluated concentrations with the observations reveals that WRF-ARW with the YSU scheme coupled to AERMOD has shown better performance. The deteriorated air quality locations are identified over Visakhapatnam based on the validated model simulations of NOX concentrations. The present study advocates the utility of the developed gridded emission inventory of NOX with the coupled WRF-AERMOD modeling system for air quality assessment over the study region.
Keywords: WRF-ARW, AERMOD, planetary boundary layer, air quality
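The gridded-emission-inventory step described above amounts to multiplying activity data by emission factors and accumulating the products into 1 km cells. The sketch below shows that bookkeeping on a toy domain; all source locations, activity rates and emission factors are placeholders, not the Visakhapatnam inventory values.

```python
# Minimal sketch of building a 1 km x 1 km gridded NOx emission inventory.
import numpy as np

nx, ny = 30, 30                      # 30 km x 30 km domain at 1 km resolution
inventory = np.zeros((ny, nx))       # NOx, g/hr per cell

# Hypothetical sources: (x_km, y_km, activity, emission factor in g NOx per activity unit)
sources = [
    (12.3, 18.7, 5200.0, 0.45),      # arterial road segment: veh-km/hr x g/veh-km
    (25.1,  6.4,  140.0, 8.90),      # industrial stack: kg fuel/hr x g/kg
    ( 5.8, 22.0, 3100.0, 0.45),
]
for x, y, activity, ef in sources:
    i, j = int(y), int(x)            # index of the 1 km cell containing the source
    inventory[i, j] += activity * ef

print("Total NOx (kg/hr):", inventory.sum() / 1e3)
print("Peak cell (g/hr):", inventory.max())
```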
Procedia PDF Downloads 280
329 Endoscopic Stenting of the Main Pancreatic Duct in Patients With Pancreatic Fluid Collections After Pancreas Transplantation
Authors: Y. Teterin, S. Suleymanova, I. Dmitriev, P. Yartcev
Abstract:
Introduction: One of the most common complications after pancreas transplantation is pancreatic fluid collections (PFCs), which are often complicated not only by infection and subsequent dysfunction of the pancreatoduodenal graft (PDG), but also by a rather high mortality rate of recipients. Drainage is not always effective and often requires repeated open surgical interventions, which worsens the outcome of the surgery. Percutaneous drainage of PFCs combined with endoscopic stenting of the main pancreatic duct of the pancreatoduodenal graft (MPDPDG) showed high efficiency in the treatment of PFCs. Aims & Methods: From 01.01.2012 to 31.12.2021, 64 PDG transplantations were performed at the Sklifosovsky Research Institute for Emergency Medicine. In 11 cases (17.2%), the early postoperative period was complicated by the formation of PFCs. Of these, 7 patients underwent percutaneous drainage of pancreonecrosis with high efficiency and did not require additional methods of treatment. In the remaining 4 patients, drainage was ineffective and was an indication for endoscopic stenting of the MPDPDG. These patients made up the study group. Among them were 3 men and 1 woman. The mean age of the patients was 36.4 years. PFCs in these patients formed on days 1, 12, 18, and 47 after PDG transplantation. We used a gastroscope to stent the MPDPDG, due to anatomical features of the location of the duodenoduodenal anastomosis after PDG transplantation. Selective catheterization of the MPDPDG was performed through the endoscope channel using a catheter and a guidewire, followed by contrasting with a water-soluble contrast agent. The localization of the defect in the PDG duct system was determined from the extravasation of the contrast. After that, a plastic pancreatic stent with a diameter of 7 Fr and a length of 7 cm was installed along the guidewire. The stent was installed in such a way that its proximal edge completely covered the defect zone, and the distal one lay in the intestinal lumen. Results: In all patients, PDG pancreaticography revealed extravasation of contrast in the area of the isthmus and body of the pancreas, which required stenting of the MPDPDG. In 1 (25%) case, the patient had a dislocation of the stent into the intestinal lumen (grade III according to Clavien-Dindo (2009)). This patient underwent repeated endoscopic stenting of the MPDPDG. On average 23 days after endoscopic stenting of the MPDPDG, the drainage tubes were removed, and after approximately 40 days all patients were discharged in a satisfactory condition with follow-up endocrinologist and surgeon consultation. Pancreatic stents were removed after 6 months ± 7 days. Conclusion: Endoscopic stenting of the main pancreatic duct of the donor pancreas is by far the most highly effective and minimally invasive method in the treatment of PFCs after transplantation of the pancreatoduodenal complex.
Keywords: pancreas transplantation, endoscopy surgery, diabetes, stenting, main pancreatic duct
Procedia PDF Downloads 86
328 Transient Simulation Using SPACE for ATLAS Facility to Investigate the Effect of Heat Loss on Major Parameters
Authors: Suhib A. Abu-Seini, Kyung-Doo Kim
Abstract:
A heat loss model for the ATLAS facility was introduced using SPACE code predefined correlations and various dialing factors. All previous simulations were carried out using a heat-loss-free input: the facility was considered to be completely insulated, and the core power was reduced by the experimentally measured values of heat loss to compensate for the loss of heat. This study, in contrast, will consider heat loss throughout the simulation. The new heat loss model will affect the SPACE code simulation, as heat leaking out of the system throughout a transient will alter many parameters corresponding to temperature and temperature difference. For that, a Station Blackout followed by a multiple Steam Generator Tube Rupture accident will be simulated using both the insulated-system approach and the newly introduced heat loss input of the steady state. Major parameters such as system temperatures, pressure values, and flow rates will be put into comparison, and various analyses will be suggested upon them, as the experimental values will not be the reference used to validate the expected outcome. This study will not only show the significance of heat loss consideration in the processes of prevention and mitigation of various incidents, design-basis and beyond-design-basis accidents, as it will give the detailed behavior of the ATLAS facility during both steady state and a major transient, but will also present a verification of how credible the data acquired from ATLAS are, since heat loss values for steady state were already mismatched between the SPACE simulation results and the ATLAS data acquisition system. Acknowledgement: This work was supported by the Korean Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea.
Keywords: ATLAS, heat loss, simulation, SPACE, station blackout, steam generator tube rupture, verification
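To convey what a dialing factor does in a heat-loss term, the sketch below scales a simple convective loss so that the simulated loss can be tuned toward an experimentally measured value. This is a minimal sketch under stated assumptions: the correlation form and all numbers are illustrative and are not the SPACE code's built-in correlations or ATLAS data.

```python
# Minimal sketch of a tunable (dialed) convective heat-loss term.
def heat_loss_kw(t_wall_c, t_amb_c, area_m2, h_w_m2k, dialing_factor=1.0):
    """Convective heat loss Q = f * h * A * (T_wall - T_amb), returned in kW."""
    return dialing_factor * h_w_m2k * area_m2 * (t_wall_c - t_amb_c) / 1e3

# Example: one primary-loop piping section during steady state (placeholder values)
q = heat_loss_kw(t_wall_c=180.0, t_amb_c=25.0, area_m2=12.0, h_w_m2k=6.5,
                 dialing_factor=1.2)
core_power_kw = 1560.0                         # assumed scaled core power for illustration
print(f"Heat loss: {q:.1f} kW ({100 * q / core_power_kw:.1f} % of scaled core power)")
```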
Procedia PDF Downloads 224
327 Beta-Carotene Attenuates Cognitive and Hepatic Impairment in Thioacetamide-Induced Rat Model of Hepatic Encephalopathy via Mitigation of MAPK/NF-κB Signaling Pathway
Authors: Marawan Abd Elbaset Mohamed, Hanan A. Ogaly, Rehab F. Abdel-Rahman, Ahmed-Farid O.A., Marwa S. Khattab, Reham M. Abd-Elsalam
Abstract:
Liver fibrosis is a severe worldwide health concern due to various chronic liver disorders. Hepatic encephalopathy (HE) is one of its most common complications, affecting liver and brain cognitive function. Beta-Carotene (B-Car) is an organic, strongly colored red-orange pigment abundant in fungi, plants, and fruits. The study attempted to assess the neuroprotective potential of B-Car against thioacetamide (TAA)-induced neurotoxicity and cognitive decline in HE in rats. Hepatic encephalopathy was induced by TAA (100 mg/kg, i.p.) three times per week for two weeks. B-Car was given orally (10 or 20 mg/kg) daily for two weeks after TAA injections. Organ-to-body weight ratio, serum transaminase activities, the liver's antioxidant parameters, ammonia, and liver histopathology were assessed. Also, the brain's mitogen-activated protein kinase (MAPK), nuclear factor kappa B (NF-κB), antioxidant parameters, adenosine triphosphate (ATP), adenosine monophosphate (AMP), norepinephrine (NE), dopamine (DA), serotonin (5-HT), 5-hydroxyindoleacetic acid (5-HIAA), cAMP response element-binding protein (CREB) expression and B-cell lymphoma 2 (Bcl-2) expression were measured. The brain's cognitive functions (spontaneous locomotor activity, rotarod performance test, object recognition test) were assessed. B-Car prevented alteration of the brain's cognitive function in a dose-dependent manner. The histopathological outcomes supported this biochemical evidence. Based on these results, it could be established that B-Car could be considered for treating the brain's neurotoxic consequences of HE via downregulation of the MAPK/NF-κB signaling pathways.
Keywords: beta-carotene, liver injury, MAPK, NF-κB, rat, thioacetamide
Procedia PDF Downloads 154
326 The Diverse and Flexible Coping Strategies Simulation for Maanshan Nuclear Power Plant
Authors: Chin-Hsien Yeh, Shao-Wen Chen, Wen-Shu Huang, Chun-Fu Huang, Jong-Rong Wang, Jung-Hua Yang, Yuh-Ming Ferng, Chunkuan Shih
Abstract:
In this research, Fukushima-like conditions are simulated with TRACE and RELAP5. The Fukushima Daiichi Nuclear Power Plant (NPP) disaster was caused by the earthquake and tsunami. This disaster caused an extended loss of all AC power (ELAP). Hence, loss of the ultimate heat sink (LUHS) finally occurred. In order to handle Fukushima-like conditions, the Taiwan Atomic Energy Council (AEC) required that the Taiwan Power Company propose strategies to ensure nuclear power plant safety. One of the diverse and flexible coping strategies (FLEX) is a different water injection strategy. It can execute core injection at 20 kg/cm² without depressurization. In this study, TRACE and RELAP5 were used to simulate the Maanshan nuclear power plant, which is a three-loop PWR in Taiwan, under Fukushima-like conditions and to verify the success criteria of FLEX. Reduced core cooling ability is due to failure of the emergency core cooling system (ECCS) in an extended loss of all AC power situation. The core water level continues to decline because of the seal leakage, and then FLEX is used to recover the core water level and keep the fuel rods covered by water. The result shows that this mitigation strategy can cool the reactor pressure vessel (RPV) as soon as possible under Fukushima-like conditions and keep the core water level higher than the Top of Active Fuel (TAF). FLEX can ensure that the peak cladding temperature (PCT) stays below the 1088.7 K criterion. Finally, FLEX can provide protection for the nuclear power plant and ensure plant safety.
Keywords: TRACE, RELAP5/MOD3.3, ELAP, FLEX
Procedia PDF Downloads 250
325 Impact of Urbanization on Natural Drainage Pattern in District of Larkana, Sindh Pakistan
Authors: Sumaira Zafar, Arjumand Zaidi
Abstract:
During the past few years, several floods have adversely affected the areas along the lower Indus River. Besides other climate-related anomalies, rapidly increasing urbanization and blockage of natural drains due to siltation or encroachments are two other critical causes that may be responsible for these disasters. Due to the flat topography of the Indus River plains and blockage of natural waterways, drainage of storm water takes time, adversely affecting the crop health and soil properties of the area. The Government of Sindh is taking a keen interest in the revival of the natural drainage network in the province and has initiated this work under the Sindh Irrigation and Drainage Authority. In this paper, geospatial techniques are used to analyze landuse/land-cover changes of Larkana district over the past three decades (1980-present) and their impact on the natural drainage system. A satellite-derived Digital Elevation Model (DEM) and topographic sheets (recent and 1950) are used to delineate the natural drainage pattern of the district. The urban landuse map developed in this study is further overlaid on the drainage line layer to identify the critical areas where the natural floodwater flows are being inhibited by urbanization. Rainfall and flow data are utilized to identify areas of heavy flow, whereas satellite data, including Landsat 7 and Google Earth, are used to map previous flood extents and landuse/cover of the study area. Alternatives to natural drainage systems are also suggested wherever possible. The output maps of the natural drainage pattern can be used to develop a decision support system for urban planners, Sindh development authorities and flood mitigation and management agencies.
Keywords: geospatial techniques, satellite data, natural drainage, flood, urbanization
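Delineating a drainage pattern from a DEM typically starts by assigning each cell a flow direction toward its steepest downslope neighbour. The abstract does not specify the GIS routine used, so the sketch below is an illustrative D8 steepest-descent pass over a toy elevation grid, not the study's workflow.

```python
# Minimal D8 flow-direction sketch over a toy DEM.
import numpy as np

dem = np.array([[10.0, 9.5, 9.0],
                [ 9.8, 9.0, 8.2],
                [ 9.6, 8.4, 7.5]])     # placeholder elevations (m)

offsets = [(-1, -1), (-1, 0), (-1, 1),
           ( 0, -1),          ( 0, 1),
           ( 1, -1), ( 1, 0), ( 1, 1)]

def d8_direction(dem, i, j):
    """Return the (di, dj) of the steepest downslope neighbour, or None for a sink/outlet."""
    best, best_drop = None, 0.0
    for di, dj in offsets:
        ni, nj = i + di, j + dj
        if 0 <= ni < dem.shape[0] and 0 <= nj < dem.shape[1]:
            dist = (di * di + dj * dj) ** 0.5
            drop = (dem[i, j] - dem[ni, nj]) / dist
            if drop > best_drop:
                best, best_drop = (di, dj), drop
    return best

for i in range(dem.shape[0]):
    for j in range(dem.shape[1]):
        print((i, j), "->", d8_direction(dem, i, j))
```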
Procedia PDF Downloads 508
324 Adoption of Climate-Smart Agriculture Practices Among Farmers and Its Effect on Crop Revenue in Ethiopia
Authors: Fikiru Temesgen Gelata
Abstract:
Food security, adaptation, and climate change mitigation are all problems that can be resolved simultaneously with Climate-Smart Agriculture (CSA). This study examines determinants of climate-smart agriculture (CSA) practices among smallholder farmers, aiming to understand the factors guiding adoption decisions and to evaluate the impact of CSA on smallholder farmer income in the study areas. For this study, three-stage sampling techniques were applied to select 230 smallholders randomly. The Mann-Kendall test and a multinomial endogenous switching regression model were used to analyze trends of decrease or increase within long-term temporal data and the impact of CSA on smallholder farmer income, respectively. Findings revealed that education level, household size, land ownership, off-farm income, climate information, and contact with extension agents were associated with highly adopted CSA practices. On the contrary, erosion exerted a detrimental impact on all the agricultural practices examined within the study region. Various factors such as farming methods, the size of farms, proximity to irrigated farmlands, availability of extension services, distance to market hubs, and access to weather forecasts were recognized as key determinants influencing the adoption of CSA practices. The multinomial endogenous switching regression model (MESR) revealed that joint adoption of crop rotation and soil and water conservation practices significantly increased farm income by 1,107,245 ETB. The study recommends that counties and governments should prioritize addressing climate change in their development agendas to increase the adoption of climate-smart farming techniques.
Keywords: climate-smart practices, food security, income, MESR, Ethiopia
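The Mann-Kendall test used above for long-term climate series reduces to a simple rank-based statistic. The sketch below implements the standard S statistic and normal approximation without tie correction; the annual rainfall series is a placeholder, not the study's data.

```python
# Minimal Mann-Kendall trend-test sketch (no tie correction).
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance of S, ignoring ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                # two-sided p-value
    return s, z, p

annual_rainfall_mm = [812, 790, 845, 760, 730, 755, 700, 690, 710, 665]  # placeholder series
print(mann_kendall(annual_rainfall_mm))
```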
Procedia PDF Downloads 34
323 Effectiveness of Climate Smart Agriculture in Managing Field Stresses in Robusta Coffee
Authors: Andrew Kirabira
Abstract:
This study is an investigation into the effectiveness of climate-smart agriculture (CSA) technologies in improving productivity through managing biotic and abiotic stresses in the coffee agroecological zones of Uganda. The motive is to enhance farmer livelihoods. The study was initiated as a result of the decreasing productivity of the crop in Uganda caused by the increasing prevalence of pests, diseases and abiotic stresses. Despite 9 years of farmers' application of CSA, productivity has stagnated between 700-800 kg/ha/yr, which is only 26% of the 3-5 tn/ha/yr that CSA is capable of delivering if properly applied. This has negatively affected the incomes of the 10.6 million people along the crop value chain, which has in essence affected the country's national income. In the 2019/20 FY, for example, Uganda suffered a deficit of $40m solely from the increasing incidence of one pest, BCTB. The amalgamation of such trends cripples the realization of SDGs #1 and #13, which are the eradication of poverty and the mitigation of climate change, respectively. In probing CSA's effectiveness in curbing such a trend, this study is guided by the objectives of: determining the existing farmers' knowledge and perceptions of CSA amongst the coffee farmers in the diverse coffee agro-ecological zones of Uganda; examining the relationship between the use of CSA and the prevalence of selected coffee pests, diseases and abiotic stresses; ascertaining the difference in market organization and pricing between conventionally and CSA-produced coffee; and analyzing the prevailing policy environment concerning the use of CSA in coffee production. The data collection research design is descriptive in nature, collecting data from farmers and agricultural extension workers in the districts of Ntungamo, Iganga and Luweero, each of which represents a distinct coffee agroecological zone. Policy custodian officers at the district level, in cooperatives, and at the crop's overseeing national authority were also interviewed.
Keywords: climate change, food security, field stresses, productivity
Procedia PDF Downloads 57
322 Raman Tweezers Spectroscopy Study of Size Dependent Silver Nanoparticles Toxicity on Erythrocytes
Authors: Surekha Barkur, Aseefhali Bankapur, Santhosh Chidangil
Abstract:
The Raman Tweezers technique has become prevalent in single-cell studies. This technique combines Raman spectroscopy, which gives information about molecular vibrations, with optical tweezers, which use a tightly focused laser beam for trapping single cells. Raman Tweezers have thus enabled researchers to analyze single cells and explore different applications. The applications of Raman Tweezers include studying blood cells, monitoring blood-related disorders, silver nanoparticle-induced stress, etc. There is increased interest in the toxic effects of nanoparticles with the increase in their various applications. The interaction of these nanoparticles with cells may vary with their size. We have studied the effect of silver nanoparticles of sizes 10 nm, 40 nm, and 100 nm on erythrocytes using the Raman Tweezers technique. Our aim was to investigate the size dependence of the nanoparticle effect on RBCs. We used a 785 nm laser (Starbright Diode Laser, Torsana Laser Tech, Denmark) for both trapping and Raman spectroscopic studies. A 100x oil immersion objective with high numerical aperture (NA 1.3) was used to focus the laser beam into a sample cell. The back-scattered light is collected using the same microscope objective and focused into the spectrometer (Horiba Jobin Yvon iHR320 with a 1200 grooves/mm grating blazed at 750 nm). A liquid-nitrogen-cooled CCD (Symphony CCD-1024x256-OPEN-1LS) was used for signal detection. Blood was drawn from healthy volunteers in vacutainer tubes and centrifuged to separate the blood components. 1.5 ml of silver nanoparticles was washed twice with distilled water, leaving 0.1 ml of silver nanoparticles at the bottom of the vial. The concentration of silver nanoparticles is 0.02 mg/ml, so 0.03 mg of nanoparticles will be present in the 0.1 ml obtained. 25 µl of RBCs was diluted in 2 ml of PBS solution, then treated with 50 µl (0.015 mg) of nanoparticles and incubated in a CO₂ incubator. Raman spectroscopic measurements were done after 24 hours and 48 hours of incubation. All the spectra were recorded with 10 mW laser power (785 nm diode laser), 60 s accumulation time and 2 accumulations. Major changes were observed in the peaks at 565 cm⁻¹, 1211 cm⁻¹, 1224 cm⁻¹, 1371 cm⁻¹, and 1638 cm⁻¹. A decrease in intensity at 565 cm⁻¹, an increase at 1211 cm⁻¹ with a reduction at 1224 cm⁻¹, an increase in intensity at 1371 cm⁻¹, and the disappearance of the peak at 1635 cm⁻¹ indicate deoxygenation of hemoglobin. Nanoparticles of larger size showed the maximum spectral changes. Fewer changes were observed in the spectra of 10 nm nanoparticle-treated erythrocytes.
Keywords: erythrocytes, nanoparticle-induced toxicity, Raman tweezers, silver nanoparticles
Procedia PDF Downloads 291
321 A Hydrometallurgical Route for the Recovery of Molybdenum from Spent Mo-Co Catalyst
Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra
Abstract:
Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and in the making of steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aiming at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts, which are used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts get contaminated with toxic material and are dumped as waste, which leads to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both economic and environmental points of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. Spent catalyst was leached with 3.0 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2.0 mol/L HNO₃ can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1. Around 95.4% extraction of molybdenum was achieved in a two-stage counter-current operation at A/O = 1:1 with negligible extraction of Co and Al. However, iron was coextracted and removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO₃ in two stages at O/A = 1:1. Overall, ~95.0% molybdenum with 99% purity was recovered from the Mo-Co spent catalyst. From the strip solution, MoO₃ was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. XRD peaks of MoO₃ correspond to the molybdite Syn-MoO₃ structure. FE-SEM depicts the rod-like morphology of the synthesized MoO₃. EDX analysis of MoO₃ shows a 1:3 atomic percentage of molybdenum and oxygen. The synthesised MoO₃ can find application in gas sensors, electrodes of batteries, display devices, smart windows, lubricants and as a catalyst.
Keywords: Cyphos IL 102, extraction, spent Mo-Co catalyst, recovery
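The stage-counting question answered graphically by a McCabe-Thiele construction can also be approximated analytically when a constant distribution ratio is assumed. The sketch below uses the Kremser-type expression for ideal counter-current stages; the distribution ratio and feed value are placeholders, not the measured Cyphos IL 102 isotherm.

```python
# Minimal counter-current extraction sketch (Kremser form, constant distribution ratio).
def countercurrent_extraction(d_ratio, a_to_o, n_stages):
    """Fraction of metal extracted after n ideal counter-current stages."""
    e = d_ratio / a_to_o                      # extraction factor E = D * (O/A)
    if abs(e - 1.0) < 1e-12:
        raffinate_fraction = 1.0 / (n_stages + 1)
    else:
        raffinate_fraction = (e - 1.0) / (e ** (n_stages + 1) - 1.0)
    return 1.0 - raffinate_fraction

feed_mo_ppm = 870.0                           # Mo in the leach liquor (from the abstract)
for n in (1, 2, 3):
    frac = countercurrent_extraction(d_ratio=6.0, a_to_o=1.0, n_stages=n)
    print(f"{n} stage(s) at A/O = 1:1: {100 * frac:.1f} % of {feed_mo_ppm:.0f} ppm Mo extracted")
```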
Procedia PDF Downloads 172
320 Silymarin Reverses Scopolamine-Induced Memory Deficit in Object Recognition Test in Rats: A Behavioral, Biochemical, Histopathological and Immunohistochemical Study
Authors: Salma A. El-Marasy, Reham M. Abd-Elsalam, Omar A. Ahmed-Farid
Abstract:
Dementia is characterized by impairments in memory and other cognitive abilities. This study aims to elucidate the possible ameliorative effect of silymarin on scopolamine-induced dementia using the object recognition test (ORT). The study was extended to demonstrate the role of cholinergic activity, oxidative stress, neuroinflammation, brain neurotransmitters and histopathological changes in the anti-amnestic effect of silymarin in demented rats. Wistar rats were pretreated with silymarin (200, 400, 800 mg/kg) or donepezil (10 mg/kg) orally for 14 consecutive days. Dementia was induced after the last drug administration by a single intraperitoneal dose of scopolamine (16 mg/kg). Behavioral, biochemical, histopathological, and immunohistochemical analyses were then performed. Rats pretreated with silymarin showed counteraction of scopolamine-induced non-spatial working memory impairment in the ORT, decreased acetylcholinesterase (AChE) activity, reduced malondialdehyde (MDA), elevated reduced glutathione (GSH), and restored gamma-aminobutyric acid (GABA) and dopamine (DA) contents in the cortical and hippocampal brain homogenates. Silymarin dose-dependently reversed scopolamine-induced histopathological changes. Immunohistochemical analysis showed that silymarin dose-dependently mitigated protein expression of glial fibrillary acidic protein (GFAP) and nuclear factor kappa-B (NF-κB) in the brain cortex and hippocampus. All these effects of silymarin were similar to those of the standard anti-amnestic drug, donepezil. This study reveals that the ameliorative effect of silymarin on scopolamine-induced dementia in rats using the ORT may be in part mediated by enhancement of cholinergic activity, anti-oxidant and anti-inflammatory activities, as well as modulation of brain neurotransmitters and mitigation of histopathological changes.
Keywords: dementia, donepezil, object recognition test, rats, silymarin, scopolamine
Procedia PDF Downloads 138
319 Safeguarding the Construction Industry: Interrogating and Mitigating Emerging Risks from AI in Construction
Authors: Abdelrhman Elagez, Rolla Monib
Abstract:
This empirical study investigates the observed risks associated with adopting Artificial Intelligence (AI) technologies in the construction industry and proposes potential mitigation strategies. While AI has transformed several industries, the construction industry is slowly adopting advanced technologies like AI, introducing new risks that lack critical analysis in the current literature. A comprehensive literature review identified a research gap, highlighting the lack of critical analysis of risks and the need for a framework to measure and mitigate the risks of AI implementation in the construction industry. Consequently, an online survey was conducted with 24 project managers and construction professionals, possessing experience ranging from 1 to 30 years (with an average of 6.38 years), to gather industry perspectives and concerns relating to AI integration. The survey results yielded several significant findings. Firstly, respondents exhibited a moderate level of familiarity (66.67%) with AI technologies, while the industry's readiness for AI deployment and current usage rates remained low at 2.72 out of 5. Secondly, the top-ranked barriers to AI adoption were identified as lack of awareness, insufficient knowledge and skills, data quality concerns, high implementation costs, absence of prior case studies, and the uncertainty of outcomes. Thirdly, the most significant risks associated with AI use in construction were perceived to be a lack of human control (decision-making), accountability, algorithm bias, data security/privacy, and lack of legislation and regulations. Additionally, the participants acknowledged the value of factors such as education, training, organizational support, and communication in facilitating AI integration within the industry. These findings emphasize the necessity for tailored risk assessment frameworks, guidelines, and governance principles to address the identified risks and promote the responsible adoption of AI technologies in the construction sector.
Keywords: risk management, construction, artificial intelligence, technology
Procedia PDF Downloads 98
318 Low-Impact Development Strategies Assessment for Urban Design
Abstract:
Climate change and land-use change caused by urban expansion increase the frequency of urban flooding. To mitigate the increase in runoff volume, low-impact development (LID) is a green approach for reducing the area of impervious surface and managing stormwater at the source with decentralized micro-scale control measures. However, the current benefit assessment and practical application of LID in Taiwan still tend to focus on development plans at the community and building-site scales. As for urban design, site-based moisture-holding capacity has been the common index for evaluating LID's effectiveness, which ignores the diversity and complexity of urban built environments, such as different densities, positive and negative spaces, building volumes and so on. Such inflexible regulations are not only likely to be difficult for most developed areas to implement, but are also unsuitable for every type of built environment, bringing little benefit to some of them. Looking toward enabling LID to strengthen its link with urban design and reduce runoff in coping with urban flooding, this research considers the different characteristics of different types of built environments when developing LID strategies. The built environments are classified by cluster analysis based on density measures, such as Ground Space Index (GSI), Floor Space Index (FSI), Floors (L), and Open Space Ratio (OSR), and their impervious surface rates and runoff volumes are analyzed. Flood situations are simulated using a quasi-two-dimensional flood plain flow model, and the flood mitigation effectiveness of the different types of built environments under different low-impact development strategies is evaluated. The information from the results of the assessment can be implemented more precisely in urban design. In addition, it helps in enacting regulations for low-impact development strategies in urban design that are more suitable for each type of built environment.
Keywords: low-impact development, urban design, flooding, density measures
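The density measures named above are linked by the standard Spacematrix relations (L = FSI / GSI and OSR = (1 - GSI) / FSI), which makes the classification step easy to illustrate. The sketch below assumes k-means as the clustering routine and uses hypothetical block values; the study's actual clustering method and data are not specified in the abstract.

```python
# Minimal sketch: classify built-environment blocks by density measures with k-means.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [FSI, GSI] for one block (hypothetical values)
blocks = np.array([[0.6, 0.30], [2.5, 0.50], [1.2, 0.40],
                   [4.0, 0.60], [0.8, 0.25], [3.2, 0.55]])
fsi, gsi = blocks[:, 0], blocks[:, 1]
floors = fsi / gsi                    # average number of floors, L = FSI / GSI
osr = (1.0 - gsi) / fsi               # open space ratio, OSR = (1 - GSI) / FSI

features = np.column_stack([fsi, gsi, floors, osr])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for row, lab in zip(features.round(2), labels):
    print(row, "-> cluster", lab)
```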
Procedia PDF Downloads 334
317 Design, Simulation and Fabrication of Electro-Magnetic Pulse Welding Coil and Initial Experimentation
Authors: Bharatkumar Doshi
Abstract:
Electro-Magnetic Pulse Welding (EMPW) is a solid-state welding process carried out at almost room temperature, in which joining is enabled by high-impact-velocity deformation. In this process, the energy stored in a high-voltage capacitor is discharged into an EM coil, resulting in a damped, sinusoidal current with an amplitude of several hundred kiloamperes. Due to this, transient magnetic fields of a few tens of tesla are generated near the coil. As the conductive (tube) part is positioned in this area, an opposing eddy current is induced in this part. Consequently, high Lorentz forces act on the part, leading to acceleration away from the coil. In the case of a tube, it gets compressed under forming velocities of more than 300 meters per second. After passing the joining gap, it collides with the second metallic joining rod, leading to the formation of a jet under appropriate collision conditions. Due to the prevailing high pressure, metallurgical bonding takes place. A characteristic feature is the wavy interface resulting from the heavy plastic deformations. In the process, the formation of intermetallic compounds, which might deteriorate the weld strength, can be avoided, even for metals with dissimilar thermal properties. Process parameters such as current, voltage, inductance, coil dimensions, workpiece dimensions, air gap, impact velocity, effective plastic strain, and the shear stress acting in the welding/impact zone are very critical and important to establish and optimize. These process parameters could be determined by simulation using Finite Element Methods (FEM), in which coupled electromagnetic-structural field analysis is performed. The feasibility of welding could thus be investigated by varying the parameters in the simulation using COMSOL. Simulation results shall be applied in performing the preliminary experiments of welding different alloy steel tubes and/or alloy steel to other materials. A single-turn coil (SS 304) with a field shaper (copper) has been designed and manufactured. The preliminary experiments are performed using the existing EMPW facility available at the Institute for Plasma Research, Gandhinagar, India. The experiments are performed at 22 kV charged into a 64 µF capacitor bank, and the energy is discharged into the single-turn EM coil. Welding of axisymmetric components such as an aluminum tube and rod has been proven experimentally using EMPW techniques. In this paper, the EM coil design, manufacturing, electromagnetic-structural FEM simulation of Magnetic Pulse Welding and preliminary experiment results are reported.
Keywords: COMSOL, EMPW, FEM, Lorentz force
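The damped sinusoidal discharge current described above follows directly from treating the capacitor bank, coil and leads as a series RLC circuit. The sketch below evaluates that waveform; the 22 kV and 64 µF values come from the abstract, while the circuit resistance and inductance are assumed for illustration only.

```python
# Minimal sketch of the damped sinusoidal capacitor-bank discharge current (series RLC).
import numpy as np

V0 = 22e3            # charging voltage (V), from the abstract
C = 64e-6            # capacitor bank (F), from the abstract
R = 10e-3            # assumed total circuit resistance (ohm)
L = 1.0e-6           # assumed total circuit inductance (H)

alpha = R / (2 * L)                          # damping coefficient (1/s)
omega = np.sqrt(1 / (L * C) - alpha ** 2)    # damped angular frequency (rad/s)

t = np.linspace(0, 200e-6, 2001)             # first 200 microseconds of the discharge
i = (V0 / (omega * L)) * np.exp(-alpha * t) * np.sin(omega * t)

print(f"Peak current ~ {i.max() / 1e3:.0f} kA at t = {t[i.argmax()] * 1e6:.1f} us")
print(f"Discharge frequency ~ {omega / (2 * np.pi) / 1e3:.1f} kHz")
```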
Procedia PDF Downloads 184
316 Exploring Community Benefits Frameworks as a Tool for Addressing Intersections of Equity and the Green Economy in Toronto's Urban Development
Authors: Cheryl Teelucksingh
Abstract:
Toronto is in the midst of an urban development and infrastructure boom. Population growth and concerns about urban sprawl and carbon emissions have led to pressure on the municipal and provincial governments to re-think urban development. Toronto's approach to climate change mitigation and adaptation has positioned the emerging green economy as part of the solution. However, the emerging green economy may not benefit all Torontonians in terms of jobs, improved infrastructure, and enhanced quality of life. Community benefits agreements (CBAs) are comprehensive, negotiated commitments, in which founders and builders of major infrastructure projects formally agree to work with community interest groups based in the community where the development is taking place, toward mutually beneficial environmental and labor market outcomes. When community groups are equitably represented in the process, they stand not only to benefit from the jobs created from the project itself, but also from the longer-term community benefits related to the quality of the completed work, including advocating for communities' environmental needs. It is believed that green employment initiatives in Toronto should give greater consideration to best practices learned from community benefits agreements. Drawing on the findings of a funded qualitative study in Toronto (Canada), "The Green Gap: Toward Inclusivity in Toronto's Green Economy" (2013-2016), this paper examines the emergent CBA in Toronto in relation to the development of a light rail transit project. Theoretical and empirical consideration will be given to the research gaps around CBAs and the role of various stakeholders, and the potential for CBAs to gain traction in Toronto's urban development context will be discussed. The narratives of various stakeholders across Toronto's green economy will be interwoven with a discussion of the CBA model in Toronto and other jurisdictions.
Keywords: green economy in Toronto, equity, community benefits agreements, environmental justice, community sustainability
Procedia PDF Downloads 342
315 A Hydrometallurgical Route for the Recovery of Molybdenum from Mo-Co Spent Catalyst
Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra
Abstract:
Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and in the making of steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aiming at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts, which are used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts get contaminated with toxic material and are dumped as waste, which leads to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both economic and environmental points of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. Spent catalyst was leached with 3 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2 mol/L HNO3 can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1. Around 95.4% extraction of molybdenum was achieved in a two-stage counter-current operation at A/O = 1:1 with negligible extraction of Co and Al. However, iron was coextracted and removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO3 in two stages at O/A = 1:1. Overall, ~95.0% molybdenum with 99% purity was recovered from the Mo-Co spent catalyst. From the strip solution, MoO3 was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. XRD peaks of MoO3 correspond to the molybdite Syn-MoO3 structure. FE-SEM depicts the rod-like morphology of the synthesized MoO3. EDX analysis of MoO3 shows a 1:3 atomic percentage of molybdenum and oxygen. The synthesised MoO3 can find application in gas sensors, electrodes of batteries, display devices, smart windows, lubricants and as a catalyst.
Keywords: Cyphos IL 102, extraction, Mo-Co spent catalyst, recovery
Procedia PDF Downloads 268
314 Practices of Waterwise Circular Economy in Water Protection: A Case Study on Pyhäjärvi, SW Finland
Authors: Jari Koskiaho, Teija Kirkkala, Jani Salminen, Sarianne Tikkanen, Sirkka Tattari
Abstract:
Here, phosphorus (P) loading to Lake Pyhäjärvi (SW Finland) was reviewed, load reduction targets were determined, and different measures of the waterwise circular economy to reach the targets were evaluated. In addition to the P loading from the lake's catchment, there is a significant amount of internal P loading occurring in the lake. There are no point-source emissions into the lake. Thus, the most important source of external nutrient loading is agriculture. According to the simulations made with the LLR model, the chemical state of the lake is at the border of the classes 'Satisfactory' and 'Good'. The LLR simulations suggest that a reduction of some hundreds of kilograms in annual P loading would be needed to reach an unquestionably 'Good' state. Evaluation of the measures of the waterwise circular economy suggested that they possess great potential for reaching the target P load reduction. If they were applied extensively and in a versatile, targeted manner in the catchment, their combined effect would reach the target reduction. In terms of cost-effectiveness, the waterwise measures were ranked as follows: best: fishing; 2nd best: recycling of the vegetation of reed beds, wetlands and buffer zones; 3rd best: recycling field drainage waters stored in wetlands and ponds for irrigation; 4th best: controlled drainage and irrigation; and 5th best: recycling of the sediments of wetlands and ponds for soil enrichment. We also identified various waterwise nutrient recycling measures to decrease the P content of arable land. The cost-effectiveness of such measures may be very good. Solutions are needed for Finnish water protection in general, and particularly for regions like the Lake Pyhäjärvi catchment with intensive domestic animal production, of which the 'P-hotspots' are a crucial issue.
Keywords: circular economy, lake protection, mitigation measures, phosphorus
Procedia PDF Downloads 106
313 Earth Observations and Hydrodynamic Modeling to Monitor and Simulate the Oil Pollution in the Gulf of Suez, Red Sea, Egypt
Authors: Islam Abou El-Magd, Elham Ali, Moahmed Zakzouk, Nesreen Khairy, Naglaa Zanaty
Abstract:
The marine environment and coastal zone are rich in natural resources that contribute to the local economy of Egypt. The Gulf of Suez and Red Sea area accommodates diverse human activities, including oil exploration and production, touristic activities, and export and import harbors; however, it is constantly under threat of pollution due to human activities. This research aimed at integrating in-situ measurements and remotely sensed data with a hydrodynamic model to map and simulate oil pollution. High-resolution satellite sensors, including Sentinel-2 and Planet Labs, were used to trace the oil pollution. The spectral band ratio of band 4 (infrared) over band 3 (red) underpinned the mapping of point-source pollution from the oil industrial estates. This ratio is supported by the absorption windows detected in the hyperspectral profiles. An ASD in-situ hyperspectral device was used to measure the oil pollution in the marine environment experimentally. The experiment measured water behavior in three cases: a) clear water without oil, b) water covered with raw oil, and c) water some time after the raw oil was introduced. The spectral curves clearly identified absorption windows for oil pollution, particularly at 600-700 nm. The MIKE 21 model was applied to simulate the dispersion of the oil contamination and to create scenarios for crisis management. The model requires careful preparation of bathymetry, tide, wave and atmospheric data, which were obtained partly from online modeled data and partly from historical in-situ stations. The simulation made it possible to project the movement of the oil spill and could underpin a warning system for mitigation. Details of the research results will be described in the paper. Keywords: oil pollution, remote sensing, modelling, Red Sea, Egypt
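A minimal sketch of the band-ratio step is given below: given co-registered infrared and red reflectance arrays (such as band 4 and band 3 of the sensor used), it computes the ratio and flags pixels that exceed a threshold as potential oil contamination. The threshold value, the direction of the effect, and the use of plain NumPy arrays on synthetic data are all illustrative assumptions; the study's actual processing chain is not specified in the abstract.

```python
import numpy as np

# Minimal sketch: flag potential oil-affected pixels with an infrared/red band
# ratio. 'nir' and 'red' stand in for co-registered reflectance arrays; the 0.9
# threshold and the assumption that oil raises the ratio are hypothetical and
# chosen only for illustration (the real direction depends on oil type and
# viewing conditions).

def oil_ratio_mask(nir: np.ndarray, red: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Return a boolean mask where the band ratio suggests possible oil."""
    ratio = nir / np.clip(red, 1e-6, None)   # avoid division by zero
    return ratio > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nir = rng.uniform(0.02, 0.15, size=(100, 100))   # synthetic reflectance values
    red = rng.uniform(0.03, 0.12, size=(100, 100))
    mask = oil_ratio_mask(nir, red)
    print(f"Flagged {mask.sum()} of {mask.size} pixels as potential oil.")
```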
Procedia PDF Downloads 347
312 Study of Radiation Response in Lactobacillus Species
Authors: Kanika Arora, Madhu Bala
Abstract:
The small intestine epithelium is highly sensitive to and a major target of ionizing radiation. Radiation causes gastrointestinal toxicity either by direct deposition of energy or indirectly (through inflammation or bystander effects) by generating free radicals and reactive oxygen species. The oxidative stress generated as a result of radiation causes active inflammation within the intestinal mucosa, leading to structural and functional impairment of the gut epithelial barrier. As a result, there is a loss of tolerance to normal dietary antigens and commensal flora, together with an exaggerated response to pathogens. Dysbiosis is therefore thought to play a role in radiation enteropathy and can contribute to radiation-induced bowel toxicity. Lactobacilli residing in the gut share a long evolutionary history with their hosts, and in doing so these organisms have developed intimate and complex symbiotic relationships. The objective of this study was to look for strains with varying resistance to ionizing radiation and to see whether the niche of the bacteria plays any role in their radiation resistance. In this study, we isolated Lactobacillus spp. from a probiotic preparation and from the murine gastrointestinal tract, both of which are considered important sources for its isolation. Biochemical characterization did not show significant differences in properties, while a significant preference in carbohydrate utilization capacity was observed among the isolates. The effect of ionizing radiation from a Co-60 gamma source (10 Gy) on lactobacilli cells was investigated, and a cellular survival curve versus absorbed dose was determined. Radiation resistance studies showed that the responses of the isolates to cobalt-60 gamma radiation differed from one another, and a significant, dose-dependent decrease in survival was observed. Thus, the present study revealed that radioresistance in Lactobacillus depends upon the source from which the strains were isolated. Keywords: dysbiosis, lactobacillus, mitigation, radiation
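As an illustration of how a gamma-radiation survival curve can be reduced to a single radioresistance metric for comparing isolates, the sketch below fits a log-linear model of surviving fraction versus absorbed dose and reports the D10 value (the dose reducing survival to 10%). The colony-count data and the two isolate labels are invented placeholders, not measurements from this study.

```python
import numpy as np

# Minimal sketch: estimate D10 (dose reducing survival to 10%) from a
# log-linear fit of surviving fraction versus absorbed dose. The CFU
# counts below are invented placeholders, not data from this study.

def d10_from_survival(doses_gy, cfu_counts):
    """Fit log10(survival) = a + b*dose and return D10 = -1/b (in Gy)."""
    doses = np.asarray(doses_gy, dtype=float)
    survival = np.asarray(cfu_counts, dtype=float) / cfu_counts[0]
    slope, _intercept = np.polyfit(doses, np.log10(survival), deg=1)
    return -1.0 / slope

if __name__ == "__main__":
    doses = [0, 2, 4, 6, 8, 10]                      # absorbed dose, Gy
    isolates = {
        "probiotic isolate":  [1.0e8, 4.0e7, 1.5e7, 6.0e6, 2.5e6, 1.0e6],
        "murine gut isolate": [1.0e8, 6.5e7, 4.0e7, 2.5e7, 1.6e7, 1.0e7],
    }
    for name, cfu in isolates.items():
        print(f"{name}: D10 ~ {d10_from_survival(doses, cfu):.1f} Gy")
```

A higher D10 indicates a more radioresistant isolate, which is the kind of source-dependent difference the abstract reports.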
Procedia PDF Downloads 137
311 Ports and Airports: Gateways to Vector-Borne Diseases in Portugal Mainland
Authors: Maria C. Proença, Maria T. Rebelo, Maria J. Alves, Sofia Cunha
Abstract:
Vector-borne diseases are transmitted to humans by mosquitoes, sandflies, bugs, ticks, and other vectors. Some are re-transmitted between vectors if the infected human has a new contact while their levels of infection are high. The vector remains infected for its lifetime and can transmit infectious diseases not only between humans but also from animals to humans. Some vector-borne diseases are very disabling and account for more than one million deaths worldwide. Mosquitoes from the Culex pipiens s.l. complex are the most abundant in Portugal, and a data set is available from the surveillance program that has been carried out across the country since 2006. All mosquito species are included, but the broad coverage of Culex pipiens s.l. and its importance for public health make this vector an interesting candidate for assessing the risk of disease amplification. This work focuses on ports and airports identified as key areas of high vector density. Since mosquitoes are ectothermic organisms, the main factor for vector survival and pathogen development is temperature. Minimum and maximum local air temperatures for each area of interest are averaged by month from daily data gathered at the national network of meteorological stations and interpolated in a geographic information system (GIS). The temperature ranges ideal for several pathogens are known, and this work shows how to combine them with the meteorological data at each port and airport facility in order to focus an efficient implementation of countermeasures and simultaneously reduce transmission risk and mitigation costs. The results show an increased alert level with decreasing latitude, which corresponds to higher minimum and maximum temperatures and a lower daily temperature amplitude. Keywords: human health, risk assessment, risk management, vector-borne diseases
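The sketch below illustrates the kind of cross-check described above: for each port or airport, the monthly minimum/maximum air temperatures are compared with a pathogen's known development window, and months where the two ranges overlap are flagged for targeted countermeasures. The facility names, monthly temperatures, and the 18-32 °C window are placeholder assumptions for illustration, not values from the surveillance program.

```python
# Minimal sketch: flag months in which local temperatures overlap a pathogen's
# development window at each facility. All temperatures, facility names, and
# the 18-32 °C window are hypothetical placeholders.

PATHOGEN_WINDOW_C = (18.0, 32.0)   # assumed viable range for pathogen development

# (min, max) monthly air temperature per facility, a few warm months only
facilities = {
    "Port of Lisbon": {"Jun": (16, 26), "Jul": (18, 29), "Aug": (18, 30), "Sep": (17, 27)},
    "Faro Airport":   {"Jun": (18, 29), "Jul": (20, 32), "Aug": (21, 33), "Sep": (19, 30)},
}

def months_at_risk(monthly_temps, window):
    lo, hi = window
    risky = []
    for month, (tmin, tmax) in monthly_temps.items():
        # interval-overlap test between [tmin, tmax] and the pathogen window [lo, hi]
        if tmax >= lo and tmin <= hi:
            risky.append(month)
    return risky

for name, temps in facilities.items():
    flagged = months_at_risk(temps, PATHOGEN_WINDOW_C)
    print(f"{name}: countermeasures advised in {', '.join(flagged)}")
```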
Procedia PDF Downloads 418
310 Indigenous Understandings of Climate Vulnerability in Chile: A Qualitative Approach
Authors: Rosario Carmona
Abstract:
This article aims to discuss the importance of indigenous peoples' participation in climate change mitigation and adaptation. Specifically, it analyses different understandings of climate vulnerability among diverse actors involved in climate change policies in Chile: indigenous people, state officials, and academics. These data were collected through participant observation and interviews conducted between October 2017 and January 2019 in Chile. Following Karen O’Brien, there are two types of vulnerability: outcome vulnerability and contextual vulnerability. How vulnerability to climate change is understood determines the approach taken, which actors are involved, and which knowledge is considered in addressing it. Because climate change is a very complex phenomenon, it is necessary to transform institutions and their responses. To do so, it is fundamental to consider both perspectives and different types of knowledge, particularly those of the most vulnerable, such as indigenous people. Over centuries, and thanks to a long coexistence with the environment, indigenous societies have elaborated coping strategies, and some of them are already adapting to climate change. Indigenous people in Chile are no exception. However, indigenous people tend to be excluded from decision-making processes, and indigenous knowledge is frequently seen as subjective and arbitrary in relation to science. Nevertheless, in recent years indigenous knowledge has gained particular relevance in the academic world, and indigenous actors are gaining prominence in international negotiations. There are mechanisms that promote their participation (e.g., the Cancun safeguards, World Bank operational policies, REDD+), although these are not free of difficulties, and since 2016 the parties have been working on a Local Communities and Indigenous Peoples Platform. This paper also explores the incidence of this process in Chile. Although there is progress in the participation of indigenous people, this participation responds to the operational policies of the funding agencies and not to a real commitment by the state to this sector. The State of Chile omits any review of the structures that promote inequality and the exclusion of indigenous people. In this way, climate change policies could become a new mechanism of coloniality that validates a single type of knowledge and leads to new territorial control strategies, which increases vulnerability. Keywords: indigenous knowledge, climate change, vulnerability, Chile
Procedia PDF Downloads 126
309 Enhancing Sewage Sludge Management through Integrated Hydrothermal Liquefaction and Anaerobic Digestion: A Comparative Study
Authors: Harveen Kaur Tatla, Parisa Niknejad, Rajender Gupta, Bipro Ranjan Dhar, Mohd. Adana Khan
Abstract:
Sewage sludge management presents a pressing challenge in the realm of wastewater treatment, calling for sustainable and efficient solutions. This study explores the integration of Hydrothermal Liquefaction (HTL) and Anaerobic Digestion (AD) as a promising approach to address the complexities associated with sewage sludge treatment. The integration of these two processes offers a complementary and synergistic framework, allowing for the mitigation of inherent limitations, thereby enhancing overall efficiency, product quality, and the comprehensive utilization of sewage sludge. In this research, we investigate the optimal sequencing of HTL and AD within the treatment framework, aiming to discern which sequence, whether HTL followed by AD or AD followed by HTL, yields superior results. We explore a range of HTL working temperatures, including 250°C, 300°C, and 350°C, coupled with residence times of 30 and 60 minutes. To evaluate the effectiveness of each sequence, a battery of tests is conducted on the resultant products, encompassing Total Ammonia Nitrogen (TAN), Chemical Oxygen Demand (COD), and Volatile Fatty Acids (VFA). Additionally, elemental analysis is employed to determine which sequence maximizes energy recovery. Our findings illuminate the intricate dynamics of HTL and AD integration for sewage sludge management, shedding light on the temperature-residence time interplay and its impact on treatment efficiency. This study not only contributes to the optimization of sewage sludge treatment but also underscores the potential of integrated processes in sustainable waste management strategies. The insights gleaned from this research hold promise for advancing the field of wastewater treatment and resource recovery, addressing critical environmental and energy challenges. Keywords: Anaerobic Digestion (AD), aqueous phase, energy recovery, Hydrothermal Liquefaction (HTL), sewage sludge management, sustainability
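To illustrate how elemental analysis can be turned into an energy-recovery comparison between the two sequences, the sketch below applies the Dulong correlation to estimate higher heating values from C, H, O, and S mass percentages. The Dulong coefficients are a widely used approximation (variants differ slightly between sources), and the elemental compositions and sequence labels shown are invented placeholders rather than results from this study.

```python
# Minimal sketch: estimate higher heating value (HHV, MJ/kg) from elemental
# analysis using the Dulong correlation, then compare hypothetical products of
# the two integration sequences. Compositions are invented placeholders.

def hhv_dulong(c_pct, h_pct, o_pct, s_pct=0.0):
    """Dulong approximation: HHV [MJ/kg] from wt% C, H, O, S (dry basis)."""
    return 0.3383 * c_pct + 1.422 * (h_pct - o_pct / 8.0) + 0.0942 * s_pct

products = {
    "biocrude (HTL -> AD sequence)": {"C": 72.0, "H": 8.5, "O": 12.0, "S": 0.8},
    "biocrude (AD -> HTL sequence)": {"C": 68.0, "H": 8.0, "O": 16.0, "S": 0.9},
}

for name, e in products.items():
    hhv = hhv_dulong(e["C"], e["H"], e["O"], e["S"])
    print(f"{name}: HHV ~ {hhv:.1f} MJ/kg")
```

Multiplying such an HHV estimate by the product yield per unit of sludge gives a simple energy-recovery figure for ranking the two sequences.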
Procedia PDF Downloads 80