Search results for: detection and estimation
957 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine (EPB TBM): A Case Study of Guadalajara Metro Line 3 (Mexico)
Authors: Silvia Arrate, Waldo Salud, Eloy París
Abstract:
The wear of cutting tools is one of the most decisive factors when planning tunneling works, scheduling maintenance stops, and keeping an optimum stock of spare parts as the excavation progresses. Being able to predict the behavior of cutting tools can give a strong competitive advantage in terms of costs and excavation performance, tailored to the needs of the TBM itself. The rapid evolution of data science in recent years makes it possible to analyze the key and most critical machine parameters in order to know how the cutting head is performing against the excavated ground. Taking Metro Line 3 of Guadalajara in Mexico as a case study, the feasibility of using Specific Energy versus data science applied to parameters such as Torque, Penetration, and Contact Force, among others, is examined to predict the behavior and status of cutting tools. The results obtained through both techniques are analyzed and verified against the wear and the field situations observed in the excavation in order to determine their effectiveness and predictive capacity. In conclusion, the possibilities and improvements offered by digital tools and calculation algorithms for analyzing the wear of cutting-head elements, compared with purely empirical methods, allow early detection of possible damage to cutting tools, which translates into optimized excavation performance and a significant improvement in costs and deadlines.
Keywords: cutting tools, data science, prediction, TBM, wear
Procedia PDF Downloads 469
956 The Economic Burden of Breast Cancer on Women in Nigeria: Implication for Socio-Economic Development
Authors: Tolulope Allo, Mofoluwake P. Ajayi, Adenike E. Idowu, Emmanuel O. Amoo, Fadeke Esther Olu-Owolabi
Abstract:
Breast cancer, which was more prevalent in Europe and America in the past, is gradually being mirrored across the world today, with a greater economic burden on low- and middle-income countries (LMCs). Breast cancer is the most common cancer among women globally, and current studies have shown that a woman dies with a diagnosis of breast cancer every thirteen minutes. The economic cost of breast cancer is overwhelming, particularly for developing economies. While it causes billions of dollars in losses of national income, it pushes millions of people below the poverty line. This study examined the economic burden of breast cancer on Nigerian women, its impacts on their standard of living and its effects on Nigeria's socio-economic development. The study adopts a qualitative research approach using the in-depth interview technique to elicit valuable information from respondents with cancer experience from the southern part of Nigeria. Respondents were women of reproductive age (15-49 years) who had experienced and survived cancer as well as those currently receiving treatment. Excerpts from the interviews revealed that the cost of treatment is one of the major factors contributing to the late presentation of breast cancer among women, as many of them could not afford to pay for their own treatment. The study also revealed that many women prefer to explore other options such as herbal treatments and spiritual consultations, which are less expensive and more affordable. The study therefore concludes that breast cancer diagnosis and treatment should be subsidized by the government in order to facilitate easy access and affordability, thereby promoting early detection and reducing the economic burden of treatment on women.
Keywords: breast cancer, development, economic burden, women
Procedia PDF Downloads 357
955 A Bayesian Parameter Identification Method for Thermorheological Complex Materials
Authors: Michael Anton Kraus, Miriam Schuster, Geralt Siebert, Jens Schneider
Abstract:
Polymers have gained increasing interest as construction materials in civil engineering applications over the last years. Polymeric materials typically show time- and temperature-dependent material behavior, which is accounted for in the context of the theory of linear viscoelasticity. Within the context of this paper, the authors show that some polymeric interlayers for laminated glass cannot be considered thermorheologically simple, as they do not follow a single time-temperature superposition principle (TTSP); thus a methodology for identifying the thermorheologically complex constitutive behavior is needed. Dynamic-Mechanical-Thermal Analysis (DMTA) in tensile and shear mode as well as Differential Scanning Calorimetry (DSC) tests are carried out on the interlayer material ethylene-vinyl acetate (EVA). A novel Bayesian framework for the master curving process as well as the detection and parameter identification of the TTSPs, along with their associated Prony series, is derived and applied to the EVA material data. To the best of our knowledge, this is the first time an uncertainty quantification of the Prony series in a Bayesian context has been shown. Within this paper, we successfully apply the derived Bayesian methodology to the EVA material data to obtain meaningful master curves and TTSPs. Uncertainties occurring in this process can be well quantified. We found that EVA requires two TTSPs with two associated generalized Maxwell models. As the methodology is kept general, the derived framework could also be applied to other thermorheologically complex polymers for parameter identification purposes.
Keywords: Bayesian parameter identification, generalized Maxwell model, linear viscoelasticity, thermorheological complex
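As a point of reference for the Prony-series and TTSP machinery mentioned above, the following minimal Python sketch evaluates a generalized Maxwell (Prony-series) relaxation modulus on a reduced time axis obtained from a WLF shift. It is not the Bayesian identification itself, and all parameter values (branch moduli, relaxation times, WLF constants, temperatures) are illustrative assumptions rather than the EVA values identified in the paper.

```python
import numpy as np

def prony_relaxation_modulus(t, g_inf, g_i, tau_i):
    """Relaxation modulus of a generalized Maxwell model as a Prony series:
    G(t) = G_inf + sum_i G_i * exp(-t / tau_i)."""
    t = np.asarray(t, dtype=float)
    return g_inf + np.sum(g_i[:, None] * np.exp(-t[None, :] / tau_i[:, None]), axis=0)

def wlf_shift_factor(T, T_ref, c1, c2):
    """WLF time-temperature shift factor: log10(a_T) = -c1*(T - T_ref) / (c2 + T - T_ref)."""
    return 10.0 ** (-c1 * (T - T_ref) / (c2 + (T - T_ref)))

# Illustrative (not measured) parameters for a two-branch Prony series
g_inf = 0.5                     # long-term modulus [MPa]
g_i = np.array([4.0, 1.5])      # branch stiffnesses [MPa]
tau_i = np.array([0.1, 10.0])   # relaxation times [s]

t = np.logspace(-3, 3, 50)      # time axis [s]
a_T = wlf_shift_factor(T=40.0, T_ref=20.0, c1=17.4, c2=51.6)   # "universal" WLF constants
G_shifted = prony_relaxation_modulus(t / a_T, g_inf, g_i, tau_i)  # modulus shifted to 40 degC
print(np.round(G_shifted[:5], 3))
```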
Procedia PDF Downloads 263
954 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images
Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang
Abstract:
Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure data such as canopy height. However, LiDAR's coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, have the ability to cover large forest areas with a high repeat rate, but they do not contain height information. Hence, exploring the solution of integrating LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model, respectively, to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, with a 10m ground sampling distance (GSD), for the years 2018 and 2021. Two 10m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained RFR and CNN models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained RFR and CNN models and 10m Sentinel-2 images of 2018 and 2021, respectively. The two 10m predicted CHMs from Sentinel-2 images are then compared with the two 10m airborne LiDAR-derived canopy height models for accuracy assessment. The validation results show that for 2018 the mean absolute error (MAE) of the RFR model is 2.93m and of the CNN model 1.71m, while for 2021 the MAE of the RFR model is 3.35m and of the CNN model 3.78m. These results demonstrate the feasibility of using the RFR and CNN models developed in this research for predicting large-coverage forest canopy height at 10m spatial resolution and a high revisit rate.
Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network
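A minimal sketch of the Random Forest Regression step described above is given below, assuming per-pixel Sentinel-2 band values as predictors and a LiDAR-derived canopy height model as the target. The data here are synthetic placeholders, and the band count, hyperparameters and resulting error are illustrative only; the CNN branch is not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Placeholder predictors: per-pixel Sentinel-2 band reflectances (random numbers here);
# in practice these would be co-registered 10 m pixels from the study area.
n_pixels, n_bands = 5000, 10
X = rng.random((n_pixels, n_bands))
# Placeholder target: LiDAR-derived canopy height [m] used as ground truth.
y = 30.0 * X[:, 3] + 5.0 * rng.standard_normal(n_pixels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
rfr = RandomForestRegressor(n_estimators=200, random_state=0)
rfr.fit(X_train, y_train)
print("MAE [m]:", round(mean_absolute_error(y_test, rfr.predict(X_test)), 2))
```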
Procedia PDF Downloads 90
953 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling discrete agent actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangean relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization
Procedia PDF Downloads 370
952 Design of a Portable Shielding System for a Newly Installed NaI(Tl) Detector
Authors: Mayesha Tahsin, A.S. Mollah
Abstract:
Recently, a 1.5 x 1.5 inch NaI(Tl) detector-based gamma-ray spectroscopy system has been installed in the laboratory of the Nuclear Science and Engineering Department of the Military Institute of Science and Technology for radioactivity detection purposes. The newly installed NaI(Tl) detector has a circular lead shield of 22 mm width. An important consideration in any gamma-ray spectroscopy is the minimization of natural background radiation not originating from the radioactive sample that is being measured. Natural background gamma-ray radiation comes from naturally occurring or man-made radionuclides in the environment or from cosmic sources. Moreover, the main problem with this system is that it is not suitable for measurements of radioactivity with a large sample container such as a Petri dish or Marinelli beaker geometry. When any laboratory installs a new detector and/or a new shield, it must first carry out quality and performance tests for the detector and shield. This paper describes a new portable shielding system with lead that can reduce the background radiation. The intensity of gamma radiation after passing through the shielding will be calculated using the shielding equation I = I0·e^(−µx), where I0 is the initial intensity of the gamma source, I is the intensity after passing through the shield, µ is the linear attenuation coefficient of the shielding material, and x is the thickness of the shielding material. The height and width of the shielding will be selected in order to accommodate the large sample container. The detector will be surrounded by a 4π-geometry low-activity lead shield. An additional 1.5 mm thick shield of tin and a 1 mm thick shield of copper covering the inner part of the lead shielding will be added in order to suppress the characteristic X-rays from the lead shield.
Keywords: shield, NaI(Tl) detector, gamma radiation, intensity, linear attenuation coefficient
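The shielding equation quoted above can be turned into a small worked example. In the sketch below, the linear attenuation coefficient of lead (roughly 1.2 cm^-1 at 662 keV) is an assumed illustrative value, not a design figure for the shield described in the abstract.

```python
import math

def transmitted_intensity(i0, mu, x):
    """Attenuation of a narrow gamma beam: I = I0 * exp(-mu * x)."""
    return i0 * math.exp(-mu * x)

def thickness_for_transmission(fraction, mu):
    """Shield thickness x so that I / I0 = fraction: x = -ln(fraction) / mu."""
    return -math.log(fraction) / mu

mu_pb = 1.2   # cm^-1, assumed value for lead at 662 keV (illustration only)
print(transmitted_intensity(1.0, mu_pb, 2.2))    # relative intensity after 22 mm of lead
print(thickness_for_transmission(0.01, mu_pb))   # thickness [cm] needed for 99 % attenuation
```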
Procedia PDF Downloads 157
951 A Ten-Year Rabies Exposure and Death Surveillance Data Analysis in Tigray Region, Ethiopia, 2023
Authors: Woldegerima G. Medhin, Tadele Araya
Abstract:
Background: Rabies is an acute viral encephalitis affecting mainly carnivores and insectivorous bats but can affect any mammal. The case fatality rate is 100% once clinical signs appear. Rabies has a worldwide distribution, particularly in the continental regions of Asia and Africa. Globally, rabies is responsible for more than 61,000 human deaths annually. Estimated annual human deaths from rabies in Asia and Africa exceed 35,172 and 21,476, respectively. In Ethiopia, approximately 2,900 people are estimated to die of rabies annually; in the Tigray region, approximately 98 people are estimated to die annually. The aim of this study is to analyze trends in, describe, and evaluate ten years of rabies data in Tigray, Ethiopia. Methods: We conducted a descriptive epidemiological study from 15-30 February 2023 of rabies exposures and deaths in humans by reviewing the health management information system reports from the Tigray Regional Health Bureau and dog-population vaccination coverage from 2013 to 2022. We used the following case definitions: suspected cases were those bitten by dogs displaying clinical signs consistent with rabies, and confirmed cases were deaths from rabies following such exposure. Results: A total of 21,031 dog bites, 375 reported rabies deaths, and 18,222 post-exposure treatments for humans in the Tigray region were analyzed. Suspected rabies cases showed an increasing trend from 2013 to 2015 and from 2018 to 2019. The overall mortality rate was 19/1000 in Tigray. The largest share of suspected patients (45%) were aged <15 years. According to estimates by the Agriculture Bureau of the Tigray Region, about 12,000 owned and 2,500 stray dogs are present in the region, but yearly dog vaccination coverage remains low (50%). Conclusion: Rabies is a public health problem in the Tigray region. It is highly recommended that owned dogs be vaccinated and that the concerned sectors control stray dogs. The surveillance system should be strengthened to estimate the real magnitude of the problem and to launch prevention and control measures.
Keywords: rabies, virus, transmission, prevalence
Procedia PDF Downloads 71
950 Code Embedding for Software Vulnerability Discovery Based on Semantic Information
Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson
Abstract:
Deep learning methods have been seeing increasing application to the long-standing security research goal of automatic vulnerability detection for source code. Attention, however, must still be paid to the task of producing vector representations for source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning this input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph at the expense of the information contained by the nodes in the graph. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select features which are most indicative of the presence or absence of vulnerabilities. This model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It is able to improve on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
Keywords: code representation, deep learning, source code semantics, vulnerability discovery
Procedia PDF Downloads 155
949 SEM Detection of Folate Receptor in a Murine Breast Cancer Model Using Secondary Antibody-Conjugated, Gold-Coated Magnetite Nanoparticles
Authors: Yasser A. Ahmed, Juleen M Dickson, Evan S. Krystofiak, Julie A. Oliver
Abstract:
Cancer cells urgently need folate to support their rapid division. Folate receptors (FR) are over-expressed on a wide range of tumor cells, including breast cancer cells. FR are distributed over the entire surface of cancer cells but are polarized to the apical surface of normal cells. Targeting of cancer cells using specific surface molecules such as folate receptors may be one of the strategies used to kill cancer cells without hurting the neighboring normal cells. The aim of the current study was to test a method for SEM detection of FR in a murine breast cancer cell model (4T1 cells) using secondary antibodies conjugated to gold or gold-coated magnetite nanoparticles. 4T1 cells were suspended in RPMI medium with FR antibody and incubated with secondary antibody for fluorescence microscopy. The cells were cultured on 30mm Thermanox coverslips for 18 hours, labeled with FR antibody, then incubated with secondary antibody conjugated to gold or gold-coated magnetite nanoparticles, and processed for scanning electron microscopy (SEM) analysis. The fluorescence microscopy study showed strong punctate FR expression on the 4T1 cell membrane. With SEM, the labeling with gold or gold-coated magnetite conjugates showed a similar pattern. Specific labeling occurred in nanoparticle clusters, which are clearly visualized in backscattered electron images. The 4T1 tumor cell model may be useful for the development of FR-targeted tumor therapy using gold-coated magnetite nanoparticles.
Keywords: cancer cell, nanoparticles, cell culture, SEM
Procedia PDF Downloads 732
948 Toxicity of PPCPs on Adapted Sludge Community
Authors: G. Amariei, K. Boltes, R. Rosal, P. Leton
Abstract:
Wastewater treatment plants (WWTPs) are supposed to hold an important place in the reduction of emerging contaminants, but they provide an environment with potential for the development and/or spread of adaptation, as bacteria are continuously mixed with contaminants at sub-inhibitory concentrations. Reviewing the literature, there are few data available on the use of adapted bacteria forming an activated sludge community for toxicity assessment, and only individual validations have been performed. Therefore, the aim of this work was to study the toxicity of triclosan (TCS) and ibuprofen (IBU), individually and in binary combination, on adapted activated sludge (AS). For this purpose, a battery of biomarkers was assessed, involving oxidative stress and cytotoxicity responses: glutathione-S-transferase (GST), catalase (CAT) and viable cells with FDA. In addition, we compared the toxic effects on adapted bacteria with those on unadapted bacteria from previous research. The adapted AS came from three continuous-flow AS laboratory systems; two systems received IBU and TCS individually, while the other received the binary combination, for 14 days. After adaptation, each bacterial culture condition was exposed to IBU, TCS and the combination for 12 h. The concentrations of IBU and TCS ranged from 0.5-4 mg/L and 0.012-0.1 mg/L, respectively. Batch toxicity experiments were performed using an Oxygraph system (Hansatech) to determine the activity of the CAT enzyme based on the quantification of the oxygen production rate. A fluorimetric technique was applied as well, using a Fluoroskan Ascent FL (Thermo), to determine the activity of the GST enzyme, using monochlorobimane-GSH as substrate, and to estimate the viable cells of the sludge by fluorescence staining with fluorescein diacetate (FDA). For the IBU-adapted sludge, CAT activity increased at low concentrations of IBU, TCS and the mixture. However, with increasing concentration the behavior differed: while IBU tended to stabilize CAT activity, TCS and the mixture decreased it. GST activity was significantly increased by TCS and the mixture; for IBU, no variation was observed. For the TCS-adapted sludge, no significant variation in CAT activity was observed, and GST activity was significantly decreased for all contaminants. For the mixture-adapted sludge, the behavior of CAT activity was similar to that of the IBU-adapted sludge; GST activity decreased at all concentrations of IBU, while the presence of TCS and of the mixture increased it. These findings were consistent with the cell viability evaluation, which clearly showed a variation in sludge viability. Our results suggest that, compared with unadapted bacteria, the adaptation of the bacteria plays a relevant role in the toxicity behavior towards activated sludge communities.
Keywords: adapted sludge community, mixture, PPCPs, toxicity
Procedia PDF Downloads 398
947 Evaluation of Firearm Injury Syndromic Surveillance in Utah
Authors: E. Bennion, A. Acharya, S. Barnes, D. Ferrell, S. Luckett-Cole, G. Mower, J. Nelson, Y. Nguyen
Abstract:
Objective: This study aimed to evaluate the validity of a firearm injury query in the Early Notification of Community-based Epidemics syndromic surveillance system. Syndromic surveillance data are used at the Utah Department of Health for early detection of and rapid response to unusually high rates of violence and injury, among other health outcomes. The query of interest was defined by the Centers for Disease Control and Prevention and used chief complaint and discharge diagnosis codes to capture initial emergency department encounters for firearm injury of all intents. Design: Two epidemiologists manually reviewed electronic health records of emergency department visits captured by the query from April-May 2020, compared results, and sent conflicting determinations to two arbiters. Results: Of the 85 unique records captured, 67 were deemed probable, 19 were ruled out, and two were undetermined, resulting in a positive predictive value of 75.3%. Common reasons for false positives included non-initial encounters and misleading keywords. Conclusion: Improving the validity of syndromic surveillance data would better inform outbreak response decisions made by state and local health departments. The firearm injury definition could be refined to exclude non-initial encounters by negating words such as “last month,” “last week,” and “aftercare”; and to exclude non-firearm injury by negating words such as “pellet gun,” “air gun,” “nail gun,” “bullet bike,” and “exit wound” when a firearm is not mentioned.
Keywords: evaluation, health information system, firearm injury, syndromic surveillance
Procedia PDF Downloads 165
946 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence
Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács
Abstract:
The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in the statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility
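A minimal sketch of the sample-generation step described above (simulating a fractional Ornstein-Uhlenbeck path that could feed a neural estimator) is given below. It builds fractional Gaussian noise from a Cholesky factorization of its covariance, which is exact but slow; the fast generator the authors mention is not reproduced here, and the Hurst exponent, mean-reversion and volatility values are assumptions for illustration.

```python
import numpy as np

def fgn_increments(n, hurst, dt, rng):
    """Fractional Gaussian noise (increments of fBm) via Cholesky of its covariance."""
    k = np.arange(n)
    lags = np.abs(k[:, None] - k[None, :]).astype(float)
    cov = 0.5 * dt ** (2 * hurst) * (
        np.abs(lags + 1) ** (2 * hurst)
        + np.abs(lags - 1) ** (2 * hurst)
        - 2 * lags ** (2 * hurst)
    )
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # small jitter for stability
    return L @ rng.standard_normal(n)

def simulate_fou(n, dt, hurst, theta, mu, sigma, x0, rng):
    """Euler scheme for dX_t = theta * (mu - X_t) dt + sigma dB^H_t."""
    db = fgn_increments(n, hurst, dt, rng)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * db[k]
    return x

rng = np.random.default_rng(42)
path = simulate_fou(n=1000, dt=1 / 252, hurst=0.1, theta=2.0, mu=0.0,
                    sigma=0.3, x0=0.0, rng=rng)   # "rough" path (H < 0.5)
print(np.round(path[:5], 4))
```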
Procedia PDF Downloads 117
945 Fatigue Crack Growth Rate Measurement by Means of Classic Method and Acoustic Emission
Authors: V. Mentl, V. Koula, P. Mazal, J. Volák
Abstract:
Nowadays, acoustic emission is a widely recognized method for material damage investigation, mainly for the observation and evaluation of crack initiation and growth. This is highly important in structures, e.g., pressure vessels and large steam turbine rotors, used in both conventional and nuclear power plants. Nevertheless, acoustic emission signals must be correlated with the real crack progress so that cracks and their growth can be evaluated by this non-destructive technique alone in real situations, and so that reliable results are obtained when assessing the safety and reliability of structures and their remaining lifetime. The main aim of this study was to propose a methodology for evaluating the early manifestations of fatigue cracks and their growth and thus to quantify material damage by acoustic emission parameters. Specimens made of several steels used in the power-producing industry were subjected to fatigue loading in the low- and high-cycle regimes. This study presents results of crack growth rate measurement obtained by the classic compliance-change method and by acoustic emission signal analysis. The experiments were carried out in cooperation between the laboratories of Brno University of Technology and West Bohemia University in Pilsen within the project of the Czech Ministry of Industry and Commerce "A diagnostic complex for the detection of pressure media and material defects in pressure components of nuclear and classic power plants" and the project "New Technologies for Mechanical Engineering".
Keywords: fatigue, crack growth rate, acoustic emission, material damage
Procedia PDF Downloads 370
944 Disaggregate Travel Behavior and Transit Shift Analysis for a Transit Deficient Metropolitan City
Authors: Sultan Ahmad Azizi, Gaurang J. Joshi
Abstract:
Urban transportation has come into the limelight in recent times due to deteriorating travel quality. The economic growth of India has driven a significant rise in private vehicle ownership in cities, whereas public transport systems have largely been ignored in metropolitan cities. Even though there is latent demand for public transport systems like organized bus services, most of the metropolitan cities have an unsustainably low share of public transport. Unfortunately, Indian metropolitan cities have failed to maintain a balance in the mode share of various travel modes in the absence of the timely introduction of mass transit systems of the required capacity and quality. As a result, personalized travel modes like two-wheelers have become the principal modes of travel, which cause significant environmental, safety and health hazards to citizens. Of late, policy makers have realized the need to improve public transport systems in metro cities to sustain development. However, the challenge to the transit planning authorities is to design a transit system for cities that may attract people to switch over from their existing and rather convenient mode of travel to the transit system under the influence of household socio-economic characteristics and the given travel pattern. In this context, the fast-growing industrial city of Surat is taken up as a case for the study of the likely shift to bus transit. Deterioration of the public bus transport system after 1998 has led to tremendous growth in two-wheeler traffic on city roads. The inadequate and poor service quality of the present bus transit has failed to attract riders and correct the mode-use balance in the city. The disaggregate travel behavior for trip generation and travel mode choice has been studied for the West Adajan residential sector of the city. Mode-specific utility functions are calibrated in a multinomial logit framework for two-wheelers, cars and auto rickshaws with respect to bus transit using SPSS. The estimation of the shift to bus transit indicates that an average of 30% of auto rickshaw users and nearly 5% of two-wheeler users are likely to shift to bus transit if service quality is improved. However, car users are not expected to shift to the bus transit system.
Keywords: bus transit, disaggregate travel behavior, mode choice behavior, public transport
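The multinomial logit calibration mentioned above rests on the closed-form choice probabilities P_i = exp(V_i) / sum_j exp(V_j). The sketch below evaluates these probabilities for four modes; the utility coefficients, travel times, costs and alternative-specific constants are illustrative assumptions, not the values calibrated for the West Adajan sample.

```python
import numpy as np

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities P_i = exp(V_i) / sum_j exp(V_j)."""
    v = np.asarray(utilities, dtype=float)
    e = np.exp(v - v.max())          # subtract the max for numerical stability
    return e / e.sum()

# Assumed systematic utilities for two-wheeler, car, auto rickshaw and bus.
beta_time, beta_cost = -0.08, -0.02                 # per minute, per rupee
travel_time = np.array([20.0, 25.0, 30.0, 40.0])    # minutes
travel_cost = np.array([15.0, 60.0, 40.0, 10.0])    # rupees
asc = np.array([0.0, -0.5, -0.3, -1.0])             # alternative-specific constants

V = asc + beta_time * travel_time + beta_cost * travel_cost
print(dict(zip(["two-wheeler", "car", "auto rickshaw", "bus"],
               np.round(mnl_probabilities(V), 3))))
```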
Procedia PDF Downloads 260
943 Urban Land Use Type Analysis Based on Land Subsidence Areas Using X-Band Satellite Image of Jakarta Metropolitan City, Indonesia
Authors: Ratih Fitria Putri, Josaphat Tetuko Sri Sumantyo, Hiroaki Kuze
Abstract:
Jakarta Metropolitan City is located on the northwest coast of West Java province, with a geographical location between 106º33’00”-107º00’00”E longitude and 5º48’30”-6º24’00”S latitude. The Jakarta urban area has suffered from land subsidence in several land use types such as trading, industrial and settlement areas. Land subsidence hazard is one of the consequences of urban development in Jakarta. This hazard is caused by intensive human activities in groundwater extraction and land use mismanagement. Geologically, the Jakarta urban area is mostly dominated by alluvial fan sediment. The objective of this research is to analyze Jakarta urban land use types within the land subsidence zones. Producing safer land use and settlements in the land subsidence areas is very important. The spatial distribution of detected land subsidence is a necessary tool for land use management planning. For this purpose, the Differential Synthetic Aperture Radar Interferometry (DInSAR) method is used. DInSAR is complementary to ground-based methods such as leveling and global positioning system (GPS) measurements, yielding information over a wide coverage area even when the area is inaccessible. The data were fine-tuned by using X-band satellite image data from 2010 to 2013 and land use mapping data. Our land use analysis shows that land subsidence occurred in the northern part of Jakarta Metropolitan City, varying from 7.5 to 17.5 cm/year, in industrial and settlement land use areas.
Keywords: land use analysis, land subsidence mapping, urban area, X-band satellite image
Procedia PDF Downloads 273
942 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Load forecasting has become crucial in recent years and has become a popular research area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and for healthy and reliable grid systems. Effective power forecasting of renewable energy load helps decision makers minimize the costs of electric utilities and power plants. Forecasting tools are required that can be used to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. In this study, we present models for predicting renewable energy loads based on deep neural networks, especially Long Short-Term Memory (LSTM) networks. Deep learning allows multiple layers of models to learn representations of data. LSTM networks are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources such as wind and solar power. Historical load and weather information represent the most important input variables within power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies have been carried out with these data via a deep neural network approach, including the LSTM technique, for Turkish electricity markets. A total of 432 different models are created by varying the number of layers, cell counts and dropout. The adaptive moment estimation (ADAM) algorithm is used for training as a gradient-based optimizer instead of SGD (stochastic gradient descent). ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results out of the 432 tested models are 0.66, 0.74, 0.85 and 1.09. The forecasting performance of the proposed LSTM models gives successful results compared to the literature.
Keywords: deep learning, long short-term memory, energy, renewable energy load forecasting
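A minimal sketch of one LSTM configuration of the kind described above (a single LSTM layer with dropout, trained with the ADAM optimizer on hourly windows) is given below. It assumes TensorFlow/Keras is available and uses a synthetic hourly series in place of the 2016-2017 Turkish renewable-generation data, so the layer size, window length and scores are illustrative only.

```python
import numpy as np
import tensorflow as tf

# Synthetic hourly load series standing in for the study's one-hour-resolution data.
rng = np.random.default_rng(0)
series = np.sin(np.arange(2000) * 2 * np.pi / 24) + 0.1 * rng.standard_normal(2000)

def make_windows(x, lookback=24):
    """Turn a 1-D series into (samples, lookback, 1) inputs and next-hour targets."""
    X = np.stack([x[i:i + lookback] for i in range(len(x) - lookback)])
    return X[..., None], x[lookback:]

X, y = make_windows(series)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])   # ADAM, as in the study
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))   # [MSE, MAE] on the training windows
```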
Procedia PDF Downloads 263
941 Assessment of Airtightness Through a Standardized Procedure in a Nearly-Zero Energy Demand House
Authors: Mar Cañada Soriano, Rafael Royo-Pastor, Carolina Aparicio-Fernández, Jose-Luis Vivancos
Abstract:
The lack of insulation, along with the existence of air leakages, has a meaningful impact on the energy performance of buildings. Both lead to increases in the energy demand through additional heating and/or cooling loads. Additionally, they cause thermal discomfort. In order to quantify these uncontrolled air currents, pressurization and depressurization tests can be performed. Among them, the blower door test is a standardized procedure to determine the airtightness of a space; it characterizes the rate of air leakage through the envelope surface by calculating an air flow rate indicator. In this sense, low-energy buildings complying with the Passive House design criteria are required to achieve high levels of airtightness. Due to the invisible nature of air leakages, additional tools are often considered to identify where the infiltrations take place. Among them, infrared thermography is a valuable technique for this purpose since it enables their detection. The aim of this study is to assess the airtightness of a typical Mediterranean dwelling located in the Valencian orchard area (Spain) and restored under the Passive House standard, using the blower door test for this purpose. Moreover, the building energy performance modelling tools TRNSYS (TRaNsient System Simulation program) and TRNFlow (TRaNsient Flow) have been used to determine its energy performance, and the infiltrations were identified by means of infrared thermography. The low levels of infiltration obtained suggest that this house may comply with the Passive House standard.
Keywords: airtightness, blower door, TRNFlow, infrared thermography
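The air-flow indicator determined in a blower door test typically comes from fitting the empirical power law Q = C·Δp^n to the measured pressure/flow pairs and reading off the flow at 50 Pa. The sketch below shows that fit; the pressure/flow readings and the dwelling volume are assumed illustrative values, not measurements from the case-study house (for reference, the Passive House criterion is n50 ≤ 0.6 air changes per hour).

```python
import numpy as np

# Assumed blower-door readings: pressure difference [Pa] and air flow [m3/h].
dp   = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
flow = np.array([310.0, 400.0, 470.0, 540.0, 600.0, 650.0])

# Fit Q = C * dp**n by linear regression in log-log space.
n, logC = np.polyfit(np.log(dp), np.log(flow), 1)
C = np.exp(logC)

q50 = C * 50.0 ** n        # air flow at 50 Pa [m3/h]
volume = 350.0             # internal volume of the dwelling [m3] (assumed)
n50 = q50 / volume         # air change rate at 50 Pa [1/h]
print(f"n = {n:.2f}, C = {C:.1f}, n50 = {n50:.2f} 1/h")
```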
Procedia PDF Downloads 122
940 Conflation Methodology Applied to Flood Recovery
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The CFR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance or averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
Keywords: community resilience, conflation, flood risk, nuisance flooding
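The conflation defined above (the normalized product of the individual probability density functions) can be computed numerically on a common grid, as in the sketch below; the two exponential recovery-time densities and their rates are illustrative assumptions, not the calibrated severe-event and nuisance-flooding distributions of the study.

```python
import numpy as np

def conflate(pdfs, x):
    """Conflation of several densities on a grid: Q(x) = prod_i f_i(x), normalized."""
    prod = np.ones_like(x)
    for f in pdfs:
        prod *= f(x)
    return prod / np.trapz(prod, x)

# Assumed recovery-time densities (exponential): one for severe events, one for nuisance flooding.
f_severe   = lambda x: 0.10 * np.exp(-0.10 * x)   # mean 10 days
f_nuisance = lambda x: 0.50 * np.exp(-0.50 * x)   # mean 2 days

x = np.linspace(0.0, 60.0, 2001)
q = conflate([f_severe, f_nuisance], x)
print("conflated mean recovery time [days]:", round(np.trapz(x * q, x), 2))
# For exponentials the conflation is again exponential with the summed rate,
# so the mean should be close to 1 / (0.10 + 0.50) = 1.67 days.
```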
Procedia PDF Downloads 102
939 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea
Authors: Kyomin Lee, Joohee Kim, Sangho Kang
Abstract:
The first commercial nuclear power reactor in South Korea, Kori Unit 1, a 587 MWe pressurized water reactor that started operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is scheduled to become the first nuclear power unit in South Korea to enter the decommissioning phase. In this study, the preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of processes: first, the plant inventory is investigated based on various documents (i.e., equipment/component lists, construction records, general arrangement drawings). Second, the radiological conditions of systems, structures and components (SSCs) are established to estimate the amount of radioactive waste by waste classification. Third, the waste management strategies for Kori Unit 1, including waste packaging, are established. Fourth, the selection of the proper decontamination and dismantling (D&D) technologies is made considering various factors. Finally, the amount of decommissioning waste by classification for Kori 1 is estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results have shown that the expected amounts of decommissioning wastes were less than about 2% and 8% of the total wastes generated (i.e., the sum of clean wastes and radwastes) before/after waste processing, respectively, and it was found that the majority of contaminated material was carbon or alloy steel and stainless steel. In addition, within the range of available information, the results of the evaluation were compared with results from various decommissioning experience data and international/national decommissioning studies. The comparison results have shown that the radioactive waste amounts from Kori Unit 1 decommissioning were much less than those from plants decommissioned in the U.S. and were comparable to those from plants in Europe. This result comes from the differences in disposal cost and clearance criteria (i.e., free release level) between the U.S. and other countries. The preliminary evaluation performed using the methodology established in this study will be useful as important information for decommissioning planning, covering the decommissioning schedule and the waste management strategy, including the transportation, packaging, handling, and disposal of radioactive wastes.
Keywords: characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste
Procedia PDF Downloads 208
938 16S rRNA Based Metagenomic Analysis of Palm Sap Samples From Bangladesh
Authors: Ágota Ábrahám, Md Nurul Islam, Karimane Zeghbib, Gábor Kemenesi, Sazeda Akter
Abstract:
Collecting palm sap as a food source is an everyday practice in some parts of the world. However, the consumption of palm juice has been associated with regular infections and epidemics in parts of Bangladesh. This is attributed to fruit-eating bats and other vertebrates or invertebrates native to the area contaminating the food with their body secretions during the collection process. The frequent intake of palm juice, whether as a processed food product or in its unprocessed form, is a common phenomenon in large areas. The range of pathogens suitable for human infection resulting from this practice is not yet fully understood. Additionally, the high sugar content of the liquid makes it an ideal culture medium for certain bacteria, which can easily propagate and potentially harm consumers. Rapid diagnostics, especially in remote locations, could mitigate health risks associated with palm juice consumption. The primary objective of this research is the rapid genomic detection and risk assessment of bacteria that may cause infections in humans through the consumption of palm juice. Utilizing state-of-the-art third-generation Nanopore metagenomic sequencing technology based on 16S rRNA, we identified bacteria primarily involved in fermentation processes. The swift metagenomic analysis, coupled with the widespread availability and portability of Nanopore products (including real-time analysis options), proves advantageous for detecting harmful pathogens in food sources without relying on extensive industry resources and testing.
Keywords: raw date palm sap, NGS, metabarcoding, food safety
Procedia PDF Downloads 53
937 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture
Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy
Abstract:
Background: Two new, simple and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique, and second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, Coenzyme Q10 (Q), also known as ubidecarenone or ubiquinone-10, and Vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, by applying the first derivative, Q and E were each determined at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curves are linear over the concentration ranges of 10-60 and 5.6-70 μg mL-1 for Q and E, respectively. For the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or concentration set) of 90 different synthetic mixtures containing Q and E, in wide concentration ranges between 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training sets were recorded in the spectral region of 230-300 nm. A gradient-descent back-propagation ANN chemometric calibration was computed by relating the concentration sets (x-block) to their corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in a defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage, nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied for the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet with excellent recoveries. The ANN method was superior to the derivative technique, as the former determined both drugs under non-linear experimental conditions. It also offers rapidity, high accuracy, and savings in effort and cost. Moreover, it requires no analyst intervention for its application. Although the ANN technique needed a large training set, it is the method of choice for the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network
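A minimal numerical illustration of the zero-crossing first-derivative principle described above is given below: at the wavelength where the derivative of pure E crosses zero, the derivative of the mixture depends only on Q and can therefore be calibrated against its concentration. The Gaussian band shapes, positions and the wavelength window are assumptions for illustration, not the measured Q and E spectra.

```python
import numpy as np

wavelength = np.linspace(220, 320, 501)                          # nm
band = lambda a, mu, w: a * np.exp(-0.5 * ((wavelength - mu) / w) ** 2)

spectrum_q = band(1.0, 275.0, 12.0)    # assumed absorption band for Q
spectrum_e = band(0.8, 240.0, 10.0)    # assumed absorption band for E
mixture = spectrum_q + spectrum_e

d1_mix = np.gradient(mixture, wavelength)     # first-derivative spectrum of the mixture
d1_e = np.gradient(spectrum_e, wavelength)    # derivative of pure E

# Zero-crossing of the pure-E derivative (its band maximum): there the mixture's D1
# amplitude is due to Q alone and can be calibrated against Q concentration.
window = (wavelength > 230) & (wavelength < 250)
zc_idx = np.where(window)[0][np.argmin(np.abs(d1_e[window]))]
print(f"zero-crossing of E near {wavelength[zc_idx]:.1f} nm, "
      f"mixture D1 amplitude there = {d1_mix[zc_idx]:.4f}")
```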
Procedia PDF Downloads 443
936 An Improved Total Variation Regularization Method for Denoising Magnetocardiography
Authors: Yanping Liao, Congcong He, Ruigang Zhao
Abstract:
The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUID) and has considerable advantages over electrocardiography (ECG). It is difficult to extract the magnetocardiography (MCG) signal, which is buried in noise; this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a step effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement of this method is mainly divided into three parts. First, high-order TV is applied to reduce the step effect, and the corresponding second-derivative matrix is used in place of the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by the detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation
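For reference, a minimal sketch of the baseline first-order total variation denoising solved by majorization-minimization is given below; it is not the improved high-order, adaptive method proposed in the paper, and the synthetic step signal merely stands in for an MCG trace.

```python
import numpy as np

def tv_denoise_mm(y, lam, n_iter=50, eps=1e-8):
    """1-D total variation denoising, min_x 0.5*||y - x||^2 + lam*||D x||_1,
    via the majorization-minimization iteration
    x <- y - D.T @ solve(diag(|D x|)/lam + D @ D.T, D @ y)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)       # first-order difference matrix, (n-1) x n
    x = y.copy()
    for _ in range(n_iter):
        w = np.abs(D @ x) + eps          # majorizer weights
        F = np.diag(w) / lam + D @ D.T
        x = y - D.T @ np.linalg.solve(F, D @ y)
    return x

# Synthetic noisy step signal standing in for an MCG trace.
rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(100), np.ones(100), np.zeros(100)])
noisy = clean + 0.2 * rng.standard_normal(clean.size)
denoised = tv_denoise_mm(noisy, lam=1.0)
print(np.round(denoised[95:105], 2))
```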
Procedia PDF Downloads 152
935 Partial Least Square Regression for High-Dimensional and Highly Correlated Data
Authors: Mohammed Abdullah Alshahrani
Abstract:
The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from the original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data
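A minimal sketch of standard (non-sparse) PLS regression on p >> n data is given below, using scikit-learn; the synthetic latent-factor data stands in for NIR absorbances or CNA profiles, and the number of components is an assumption for illustration. The sparse Cauchy-lasso weight estimation proposed in the study is not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic p >> n data with strongly correlated predictors built from a few latent factors.
n, p = 60, 500
latent = rng.standard_normal((n, 3))
X = latent @ rng.standard_normal((3, p)) + 0.05 * rng.standard_normal((n, p))
y = latent @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=3)        # latent components built from the predictors
pls.fit(X_tr, y_tr)
y_hat = np.ravel(pls.predict(X_te))
print("test R2:", round(r2_score(y_te, y_hat), 3))
print("predictor weights per component:", pls.x_weights_.shape)   # (p, n_components)
```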
Procedia PDF Downloads 49
934 Detection of Trends and Break Points in Climatic Indices: The Case of Umbria Region in Italy
Authors: A. Flammini, R. Morbidelli, C. Saltalippi
Abstract:
The increase of surface air temperature at the global scale is a fact, with values of around 0.85 ºC since the late nineteenth century, along with significant changes in the main features of the rainfall regime. Nevertheless, the detected climatic changes are not equally distributed all over the world, but exhibit specific characteristics in different regions. Therefore, studying the evolution of climatic indices in different geographical areas with a predefined standard approach becomes very useful in order to analyze the existence of climatic trends and compare results. In this work, a methodology to investigate climatic change and its effects on a wide set of climatic indices is proposed and applied at the regional scale in the case study of a Mediterranean area, the Umbria region in Italy. From the data of the available temperature stations, nine temperature indices have been obtained and the existence of trends has been checked by applying the non-parametric Mann-Kendall test, while the non-parametric Pettitt test and the parametric Standard Normal Homogeneity Test (SNHT) have been applied to detect the presence of break points. In addition, to characterize the rainfall regime, data from 11 rainfall stations have been used and a trend analysis has been performed on cumulative annual rainfall depth, daily rainfall, rainy days, and dry period length. The results show a general increase in all temperature indices, although with a trend pattern depending on the index and the station, and a general decrease in cumulative annual rainfall and average daily rainfall, with a rainfall distribution over the year that differs from the past.
Keywords: climatic change, temperature, rainfall regime, trend analysis
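A minimal implementation of the non-parametric Mann-Kendall trend test used above is sketched below (without the tie correction that is applied in practice); the synthetic annual-temperature series and its trend are illustrative only, not the Umbria station data.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic, the standardized Z score
    and the two-sided p-value (no tie correction in this sketch)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()     # S = sum_{i<j} sign(x_j - x_i)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# Synthetic annual-mean temperature series with a weak warming trend.
rng = np.random.default_rng(3)
years = np.arange(1960, 2021)
temps = 13.0 + 0.02 * (years - years[0]) + 0.4 * rng.standard_normal(years.size)
print(mann_kendall(temps))
```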
Procedia PDF Downloads 118
933 Invasion of Pectinatella magnifica in Freshwater Resources of the Czech Republic
Authors: J. Pazourek, K. Šmejkal, P. Kollár, J. Rajchard, J. Šinko, Z. Balounová, E. Vlková, H. Salmonová
Abstract:
Pectinatella magnifica (Leidy, 1851) is an invasive freshwater animal that lives in colonies. A colony of Pectinatella magnifica (a gelatinous blob) can be up to several feet in diameter, and under favorable conditions it exhibits an extreme growth rate. Recently, European countries along the Elbe, Oder, Danube, Rhine and Vltava rivers have confirmed the invasion of Pectinatella magnifica, including in freshwater reservoirs in South Bohemia (Czech Republic). Our project (Czech Science Foundation, GAČR P503/12/0337) is focused on the biology and chemistry of Pectinatella magnifica. We have monitored the occurrence of the organism in selected South Bohemian ponds and sandpits during the last years, collecting information about the physical properties of the surrounding water and sampling the colonies for various analyses (classification, maps of secondary metabolites, toxicity tests). Because the gelatinous matrix also hosts algae, bacteria and cyanobacteria (co-habitants) during the colony's lifetime, in this contribution we also applied a high-performance liquid chromatography (HPLC) method for the determination of potentially present cyanobacterial toxins (microcystin-LR, microcystin-RR, nodularin). Results from the last three years of monitoring show that these toxins are below the limit of detection (LOD), so they do not yet represent a danger. The final goal of our study is to assess the toxicity risks related to freshwater resources invaded by Pectinatella magnifica, and to understand the process of invasion, which may make it possible to control it.
Keywords: cyanobacteria, fresh water resources, Pectinatella magnifica invasion, toxicity monitoring
Procedia PDF Downloads 238
932 Speciation, Preconcentration, and Determination of Iron(II) and (III) Using 1,10-Phenanthroline Immobilized on Alumina-Coated Magnetite Nanoparticles as a Solid Phase Extraction Sorbent in Pharmaceutical Products
Authors: Hossein Tavallali, Mohammad Ali Karimi, Gohar Deilamy-Rad
Abstract:
The proposed method for the speciation, preconcentration and determination of Fe(II) and Fe(III) in pharmaceutical products was developed using alumina-coated magnetite nanoparticles (Fe3O4/Al2O3 NPs) as a solid phase extraction (SPE) sorbent in the magnetic mixed hemimicelle solid phase extraction (MMHSPE) technique, followed by flame atomic absorption spectrometry analysis. The procedure is based on the complexation of Fe(II) with 1,10-phenanthroline (OP) as the complexing reagent, immobilized on the modified Fe3O4/Al2O3 NPs. The extraction and concentration process for the pharmaceutical sample was carried out in a single step by mixing the extraction solvent and the magnetic adsorbents under ultrasonic action. Then, the adsorbents were easily isolated from the complicated matrix with an external magnetic field. Fe(III) ions were determined after being readily reduced to Fe(II) by adding a proper reducing agent to the sample solutions. Compared with traditional methods, the MMHSPE method simplified the operation procedure and reduced the analysis time. Various parameters influencing the speciation and preconcentration of trace iron, such as pH, sample volume, amount of sorbent, and type and concentration of eluent, were studied. Under the optimized operating conditions, a preconcentration factor of 167 was obtained for Fe(II) with the modified nano-magnetite. The detection limit and linear range of this method for iron were 1.0 and 9.0-175 ng mL−1, respectively. The relative standard deviation for five replicate determinations of 30.00 ng mL−1 Fe2+ was 2.3%.
Keywords: alumina-coated magnetite nanoparticles, magnetic mixed hemimicelle solid-phase extraction, Fe(II) and Fe(III), pharmaceutical sample
Procedia PDF Downloads 291
931 Formulation and Evaluation of Curcumin-Zn (II) Microparticulate Drug Delivery System for Antimalarial Activity
Authors: M. R. Aher, R. B. Laware, G. S. Asane, B. S. Kuchekar
Abstract:
Objective: Studies have shown that a new combination therapy with artemisinin derivatives and curcumin is unique, with potential advantages over known ACTs. In the present study, an attempt was made to prepare a microparticulate drug delivery system of the curcumin-Zn complex and evaluate it in combination with artemether for antimalarial activity. Material and method: The curcumin-Zn complex was prepared and encapsulated using sodium alginate. The microparticles thus obtained were further coated with various enteric polymers at different coating thicknesses to control the release. The microparticles were evaluated for encapsulation efficiency, drug loading and in vitro drug release. Roentgenographic studies were conducted in rabbits with a BaSO4-tagged formulation. The optimized formulation was screened for antimalarial activity using the P. berghei-infected mice survival test and % parasitemia inhibition, alone (three oral doses of 5 mg/day) and in combination with artemether (i.p. 500, 1000 and 1500 µg). Curcumin-Zn(II) was estimated in serum after oral administration to rats by spectrofluorometry. Result: Microparticles coated with cellulose acetate phthalate showed the most satisfactory and controlled release, with a time of 479 min for 60% drug release. X-ray images taken at different time intervals confirmed the retention of the formulation in the GI tract. Estimation of curcumin in serum by spectrofluorometry showed that the drug concentration is maintained in the blood for a longer time, with a tmax of 6 hours. The survival time (40 days post-treatment) of mice infected with P. berghei was compared after treatment with the curcumin-Zn(II) microparticle-artemether combination, the curcumin-Zn complex, or artemether. Oral administration of curcumin-Zn(II)-artemether prolonged the survival of P. berghei-infected mice. All the mice treated with curcumin-Zn(II) microparticles (5 mg/day) plus artemether (1000 µg) survived for more than 40 days and recovered with no detectable parasitemia. Administration of the curcumin-Zn(II)-artemether combination reduced the parasitemia in mice by more than 90% compared to that in control mice for the first 3 days after treatment. Conclusion: The antimalarial activity of the curcumin-Zn-artemether combination was more pronounced than that of monotherapy. A single dose of 1000 µg of artemether in the curcumin-Zn combination gives complete protection in P. berghei-infected mice, which may reduce the chances of drug resistance in malaria management.
Keywords: formulation, microparticulate drug delivery, antimalarial, pharmaceutics
Procedia PDF Downloads 392930 Analysis of Seismic Waves Generated by Blasting Operations and their Response on Buildings
Authors: S. Ziaran, M. Musil, M. Cekan, O. Chlebo
Abstract:
The paper analyzes the response of buildings and industrial structures to seismic waves (low-frequency mechanical vibration) generated by blasting operations. The principles of seismic analysis can be applied to different kinds of excitation, such as earthquakes, wind, explosions, random excitation from local transportation, periodic excitation from large rotating and/or reciprocating machines, metal-forming processes such as forging, shearing and stamping, chemical reactions, construction and earth-moving work, and other strong deterministic and random energy sources caused by human activities. The article deals with the response of a residential home to seismic, low-frequency mechanical vibrations generated by nearby blasting operations. The goal was to determine the fundamental natural frequencies of the measured structure, since the resonant frequencies must be known to design suitable modal damping. The article also analyzes the package of seismic waves generated by blasting (primary P-waves and secondary S-waves) and investigates the transfer regions. For the detection of seismic waves resulting from an explosion, the Fast Fourier Transform (FFT) and modal analysis are used in the frequency domain, and the signal is also acquired and analyzed in the time domain. In the conclusions, the measured seismic waves caused by blasting in a nearby quarry and their effect on a nearby structure (a house) are analyzed, and the response of the house, including its fundamental natural frequency and possible fatigue damage, is assessed. Keywords: building structure, seismic waves, spectral analysis, structural response
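A minimal sketch of the frequency-domain step described above: a vibration record is windowed, transformed with the FFT, and the dominant spectral peak is read off as a candidate fundamental natural frequency. The signal, sampling rate, and component frequencies below are synthetic assumptions, not blast measurements.

```python
import numpy as np

# Synthetic vibration record: two low-frequency components plus noise (assumed values,
# not blast data). The dominant spectral peak approximates a fundamental natural frequency.
fs = 200.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
signal = (np.sin(2 * np.pi * 4.5 * t)        # e.g. fundamental structural response
          + 0.4 * np.sin(2 * np.pi * 12.0 * t)
          + 0.1 * np.random.randn(t.size))   # measurement noise

# Window the record and compute the one-sided amplitude spectrum
spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

print(f"Dominant frequency: {freqs[spectrum.argmax()]:.2f} Hz")   # ~4.5 Hz for this synthetic record
```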
Procedia PDF Downloads 399929 Estimation of Biomedical Waste Generated in a Tertiary Care Hospital in New Delhi
Authors: Priyanka Sharma, Manoj Jais, Poonam Gupta, Suraiya K. Ansari, Ravinder Kaur
Abstract:
Introduction: Just as health care is necessary for the population, so is the management of the biomedical waste it produces. Biomedical waste is a broad term for the waste material produced during the diagnosis, treatment or immunization of human beings and animals, in research, or in the production or testing of biological products. Biomedical waste management is a chain of processes from the point of generation of biomedical waste to its final disposal in the correct and proper way assigned for that particular type of waste. Any deviation from these processes leads to improper disposal of biomedical waste, which is itself a major health hazard. Proper segregation of biomedical waste is the key to biomedical waste management. Improper disposal of BMW can cause sharps injuries, which may lead to HIV, hepatitis B virus and hepatitis C virus infections. Therefore, proper disposal of BMW is of utmost importance. Health care establishments segregate biomedical waste and dispose of it as per the biomedical waste management rules in India. Objectives: This study was done to observe the current trends of biomedical waste generated in a tertiary care hospital in Delhi. Methodology: Biomedical waste management rounds were conducted in the hospital wards. Relevant details were collected and analysed, and the sites with maximum biomedical waste generation were identified. All data were cross-checked at the common collection site. Results: The total amount of waste generated in the hospital from January 2014 to December 2014 was 639,547 kg, of which 70.5% was general (non-hazardous) waste and the remaining 29.5% was BMW, consisting of highly infectious waste (12.2%), disposable plastic waste (16.3%) and sharps (1%). The sites producing the largest quantities of biomedical waste were the Obstetrics and Gynaecology wards, accounting for 45.8% of the total, followed by the Paediatrics, Surgery and Medicine wards with 21.2%, 4.6% and 4.3%, respectively. The highest average biomedical waste generation was in the Obstetrics and Gynaecology wards at 0.7 kg/bed/day, followed by the Paediatrics, Surgery and Medicine wards with 0.29, 0.28 and 0.18 kg/bed/day, respectively. Conclusions: Hospitals should pay attention to the sites that produce large amounts of BMW to avoid improper segregation of biomedical waste. Induction and refresher training programs on biomedical waste management should also be conducted to avoid improper management of biomedical waste, and healthcare workers should be made aware of the risks of poor biomedical waste management. Keywords: biomedical waste, biomedical waste management, hospital-tertiary care, New Delhi
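The per-ward figures above normalize annual waste mass by bed capacity. The short Python sketch below reproduces that kg/bed/day calculation; the bed counts and annual waste masses are assumed for illustration only, chosen merely to be of the same order as the reported values.

```python
# kg/bed/day normalization; bed counts and annual waste masses are assumed, not the study's data.
annual_bmw_kg = {"Obstetrics and Gynaecology": 86_400, "Paediatrics": 40_000}
beds = {"Obstetrics and Gynaecology": 340, "Paediatrics": 378}

for ward, waste_kg in annual_bmw_kg.items():
    per_bed_per_day = waste_kg / (beds[ward] * 365)   # spread the annual mass over every bed-day
    print(f"{ward}: {per_bed_per_day:.2f} kg/bed/day")
```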
Procedia PDF Downloads 244928 Extreme Heat and Workforce Health in Southern Nevada
Authors: Erick R. Bandala, Kebret Kebede, Nicole Johnson, Rebecca Murray, Destiny Green, John Mejia, Polioptro Martinez-Austria
Abstract:
Summer temperature data from Clark County were collected and used to estimate two heat-related indexes: the heat index (HI) and the excess heat factor (EHF). These two indexes were used jointly with data on health-related deaths in Clark County to assess the effect of extreme heat on the exposed population. The trends of the heat indexes were then analyzed for the 2007-2016 decade, and the correlation between heat wave episodes and the number of heat-related deaths in the area was estimated. The HI increased significantly in June, July, and August over the last ten years. The same trend was found for the EHF, which showed a clear increase in the severity and number of these events per year. The number of heat wave episodes increased from 1.4 per year during the 1980-2016 period to 1.66 per year during the 2007-2016 period. However, a different trend was found for heat-wave-event duration, which decreased from an average of 20.4 days during the trans-decadal period (1980-2016) to 18.1 days during the most recent decade (2007-2016). The number of heat-related deaths also increased from 2007 to 2016, with 2016 having the highest number of heat-related deaths. Both the HI and the number of deaths showed a normal-like distribution for June, July, and August, with peak values reached in late July and early August. The average maximum HI values correlated better with the number of deaths registered in Clark County than the EHF did, probably because the HI uses the maximum temperature and humidity in its estimation, whereas the EHF uses the average daily mean temperature. However, the EHF is worth testing for the study zone because it has been reported to fit heat-related morbidity well. For the overall period, 437 heat-related deaths were registered in Clark County, with 20% of the deaths occurring in June, 52% in July, 18% in August, and the remaining 10% in the other months of the year. The most vulnerable subpopulation was people over 50 years old, who accounted for 76% of the heat-related deaths; most of these cases were associated with pre-existing heart disease. The second most vulnerable subpopulation was young adults (20-50), who accounted for 23% of the heat-related deaths; these deaths were associated with alcohol or illegal drug intoxication. Keywords: heat, health, hazards, workforce
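The excess heat factor mentioned above is commonly computed from three-day mean temperatures compared against a climatological 95th percentile and a 30-day acclimatisation baseline (the Nairn-Fawcett formulation). The sketch below illustrates that calculation on synthetic data and is an assumption about the general EHF definition, not the index implementation used in this study.

```python
import numpy as np

# Excess heat factor (EHF), Nairn-Fawcett style; the temperature series and the 95th
# percentile threshold below are synthetic assumptions, not Clark County observations.

def excess_heat_factor(daily_mean_temp: np.ndarray, t95: float) -> np.ndarray:
    """EHF for each day that has at least 30 prior days of record."""
    ehf = []
    for i in range(30, len(daily_mean_temp) - 2):
        three_day = daily_mean_temp[i:i + 3].mean()    # short-term heat load
        prior_30 = daily_mean_temp[i - 30:i].mean()    # acclimatisation baseline
        ehi_sig = three_day - t95                      # excess heat significance index
        ehi_accl = three_day - prior_30                # acclimatisation index
        ehf.append(ehi_sig * max(1.0, ehi_accl))
    return np.array(ehf)

temps = 30 + 8 * np.random.rand(120)                   # synthetic summer daily means, deg C
print(excess_heat_factor(temps, t95=36.5)[:5])
```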
Procedia PDF Downloads 103