Search results for: receiver operator curve (ROC)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1668

228 The Optimal Order Policy for the Newsvendor Model under Worker Learning

Authors: Sunantha Teyarachakul

Abstract:

We consider the worker-learning Newsvendor Model, under the case of lost sales for unmet demand, with the research objective of proposing the cost-minimization order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the Newsvendor Model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it is used when the product demand is stochastic and available for a single selling season, and when there is only a one-time opportunity for the vendor to purchase, possibly with long ordering lead times. Our work differs from the classical Newsvendor Model in that we incorporate the human factor (specifically worker learning) and its influence on the costs of processing units into the model. We describe this by using the well-known Wright’s Learning Curve. Most of the assumptions of the classical Newsvendor Model are still maintained in our work, such as the constant per-unit cost of leftover and shortage, the zero initial inventory, as well as continuous time. Our problem is challenging in that the best order quantity in the classical model, which balances the over-stocking and under-stocking costs, is no longer optimal. Specifically, when adding the cost saving from worker learning to the expected total cost, the convexity of the cost function will likely not be maintained. This calls for a new way of determining the optimal order policy. In response to such challenges, we found a number of characteristics related to the expected cost function and its derivatives, which we then used in formulating the optimal ordering policy. Examples of such characteristics are: the optimal order quantity exists and is unique if the demand follows a Uniform Distribution; if the demand follows the Beta Distribution with some specific properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific level of lot size that satisfies the first-order condition. Our research results could be helpful for analysis of supply chain coordination and of the periodic review system for similar problems.
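As an illustration of the kind of computation involved, the sketch below estimates the expected total cost of a candidate lot size under Uniform demand with Wright's learning curve applied to the per-unit processing cost, and finds the minimiser by brute force. It is a minimal numerical sketch with illustrative parameter values, not the authors' analytical formulation.

```python
import numpy as np

def wright_unit_cost(n, c1=10.0, learning_rate=0.85):
    """Cost of processing the n-th unit under Wright's learning curve."""
    b = -np.log2(learning_rate)          # progress exponent
    return c1 * n ** (-b)

def expected_total_cost(q, demand_low=50, demand_high=150,
                        unit_price=25.0, leftover_cost=4.0, shortage_cost=8.0,
                        n_samples=20000, seed=0):
    """Monte Carlo estimate of the expected cost of ordering q units when
    demand is Uniform(demand_low, demand_high); illustrative parameters only."""
    rng = np.random.default_rng(seed)
    demand = rng.uniform(demand_low, demand_high, n_samples)
    sold = np.minimum(q, demand)
    leftover = np.maximum(q - demand, 0.0)
    shortage = np.maximum(demand - q, 0.0)
    processing = wright_unit_cost(np.arange(1, q + 1)).sum()  # cumulative processing cost
    cost = processing + leftover_cost * leftover + shortage_cost * shortage - unit_price * sold
    return cost.mean()

# brute-force search over candidate lot sizes
candidates = np.arange(40, 161)
costs = [expected_total_cost(q) for q in candidates]
q_star = candidates[int(np.argmin(costs))]
print("cost-minimising order quantity (sketch):", q_star)
```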

Keywords: inventory management, Newsvendor model, order policy, worker learning

Procedia PDF Downloads 388
227 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments

Authors: Rahul Paul, Peter Mctaggart, Luke Skinner

Abstract:

Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection And Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine learning are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters fitted to the captured LiDAR data points using a least squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
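To make the fitting step concrete, the sketch below fits a catenary profile to noisy points by least squares using scipy's curve_fit. It is a simplified 2D sketch with synthetic points standing in for a projected conductor span; the clustering, 3D modelling and iterative pole-pair search described in the abstract are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, x0, z0, c):
    """Catenary profile z(x) with lowest point (x0, z0) and parameter c."""
    return z0 + c * (np.cosh((x - x0) / c) - 1.0)

# synthetic points standing in for LiDAR returns projected onto the
# vertical plane of one conductor span (hypothetical values)
x = np.linspace(0.0, 120.0, 200)
z_true = catenary(x, 60.0, 35.0, 300.0)
rng = np.random.default_rng(1)
z_obs = z_true + rng.normal(0.0, 0.05, x.size)   # measurement noise

# least-squares fit of the catenary parameters
p0 = [x.mean(), z_obs.min(), 200.0]              # initial guess
popt, pcov = curve_fit(catenary, x, z_obs, p0=p0)
print("fitted x0, z0, c:", popt)
```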

Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry

Procedia PDF Downloads 71
226 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health

Authors: Minna Pikkarainen, Yueqiang Xu

Abstract:

The integration of digital technology with existing healthcare processes has been painfully slow; a huge gap exists between the strictly regulated field of official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum: “correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions”, which we call the connected health approach. Currently, issues related to security, privacy, consumer consent and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today’s unprecedented challenges of enabling and stimulating multiple healthcare data providers and stakeholders to have more active participation in the digital health ecosystem. First, the paper systematically proposes the MyData approach for the healthcare and preventive health data ecosystem. In this research, the work is targeted at health and wellness ecosystems. Each ecosystem consists of key actors, such as 1) the individual (citizen or professional controlling/using the services), i.e. the data subject, 2) services providing personal data (e.g. startups providing data collection apps or data collection devices), 3) health and wellness services utilizing the aforementioned data, and 4) services authorizing access to this data under the individual’s explicit consent. Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models for the healthcare industry and proposes a fifth type of healthcare data model, the MyData Blockchain Platform. This new architecture is developed using the Action Design Research approach, which is a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain, thus the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.

Keywords: blockchain, health data, platform, action design

Procedia PDF Downloads 75
225 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in planning, scheduling, and control of emergency response operations, especially people rescue and evacuation from the dangerous zone of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, the accident may cause loss or damage whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of saved people within the allowable response time. We consider a special situation when the autonomous mobile robots (AMR), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during the search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR’s search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered (‘overlooked') by the AMR’s sensors even though the AMR is in the close neighborhood of the target, and (ii) a 'false-positive' detection error, also known as ‘a false alarm’, in which a clean place or area is wrongly classified by the AMR’s sensors as a target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. A specificity of the considered operational research problem in comparison with the traditional Kadane-De Groot-Stone search models is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting any next location. We provide a fast approximation algorithm for finding the AMR route adopting a greedy search strategy in which, in each step, the on-board computer computes a current search effectiveness value for each location in the zone and then searches the location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and corresponding algorithm.
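The sketch below illustrates one plausible reading of such a greedy strategy: visit the cell with the highest effectiveness (prior probability times detection probability), then apply a Bayesian update for an unsuccessful look so that the search history influences later choices. It is a simplified sketch with hypothetical probabilities; it ignores travel costs and false alarms, which the authors' model also handles.

```python
import numpy as np

def greedy_search(prior, p_detect, n_steps=5):
    """Greedy search over cells: at each step visit the cell with the
    highest detection effectiveness prior[i] * p_detect[i], then apply a
    Bayesian update for an unsuccessful look (false negatives possible)."""
    prior = np.asarray(prior, dtype=float).copy()
    p_detect = np.asarray(p_detect, dtype=float)
    route = []
    for _ in range(n_steps):
        effectiveness = prior * p_detect
        i = int(np.argmax(effectiveness))
        route.append(i)
        # posterior after an unsuccessful search of cell i
        prior[i] *= (1.0 - p_detect[i])
        prior /= prior.sum()
    return route

# hypothetical 6-cell search zone
prior = [0.05, 0.30, 0.10, 0.25, 0.20, 0.10]          # target location probabilities
p_detect = [0.9, 0.6, 0.8, 0.5, 0.7, 0.9]             # per-look detection probability
print("greedy route (cell indices):", greedy_search(prior, p_detect))
```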

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 151
224 Targeting Mre11 Nuclease Overcomes Platinum Resistance and Induces Synthetic Lethality in Platinum Sensitive XRCC1 Deficient Epithelial Ovarian Cancers

Authors: Adel Alblihy, Reem Ali, Mashael Algethami, Ahmed Shoqafi, Michael S. Toss, Juliette Brownlie, Natalie J. Tatum, Ian Hickson, Paloma Ordonez Moran, Anna Grabowska, Jennie N. Jeyapalan, Nigel P. Mongan, Emad A. Rakha, Srinivasan Madhusudan

Abstract:

Platinum resistance is a clinical challenge in ovarian cancer. Platinating agents induce DNA damage which activates Mre11 nuclease directed DNA damage signalling and response (DDR). Upregulation of DDR may promote chemotherapy resistance. Here we have comprehensively evaluated Mre11 in epithelial ovarian cancers. In a clinical cohort that received platinum-based chemotherapy (n=331), Mre11 protein overexpression was associated with an aggressive phenotype and poor progression-free survival (PFS) (p=0.002). In The Cancer Genome Atlas (TCGA) ovarian cancer cohort (n=498), Mre11 gene amplification was observed in a subset of serous tumours (5%), which correlated highly with Mre11 mRNA levels (p<0.0001). Altered Mre11 levels were linked with genome-wide alterations that can influence platinum sensitivity. At the transcriptomic level (n=1259), Mre11 overexpression was associated with poor PFS (p=0.003). ROC analysis showed an area under the curve (AUC) of 0.642 for response to platinum-based chemotherapy. Pre-clinically, Mre11 depletion by gene knockdown or blockade by a small-molecule inhibitor (mirin) reversed platinum resistance in ovarian cancer cells and in 3D spheroid models. Importantly, Mre11 inhibition was synthetically lethal in platinum-sensitive XRCC1-deficient ovarian cancer cells and 3D spheroids. Selective cytotoxicity was associated with DNA double-strand break (DSB) accumulation, S-phase cell cycle arrest and increased apoptosis. We conclude that pharmaceutical development of Mre11 inhibitors is a viable clinical strategy for platinum sensitization and synthetic lethality in ovarian cancer.

Keywords: MRE11, XRCC1, ovarian cancer, platinum sensitization, synthetic lethality

Procedia PDF Downloads 101
223 Predictive Value of the Modified Sick Neonatal Score (MSNS) for the Outcome of Critically Ill Neonates Treated in the Neonatal Intensive Care Unit (NICU)

Authors: Oktavian Prasetia Wardana, Martono Tri Utomo, Risa Etika, Kartika Darma Handayani, Dina Angelika, Wurry Ayuningtyas

Abstract:

Background: Critically ill neonates are newborn babies with high-risk factors that potentially cause disability and/or death. Scoring systems for determining the severity of disease have been widely developed, including some designed for use in neonates. The SNAPPE-II method, which has been used as a mortality prediction scoring system in several referral centers, was found to be slow in assessing the outcome of critically ill neonates in the Neonatal Intensive Care Unit (NICU). Objective: To analyze the predictive value of the MSNS for the outcome of critically ill neonates from the time of arrival up to 24 hours after admission to the NICU. Methods: A longitudinal observational analytic study based on medical record data was conducted from January to August 2022. For each subject, medical record data were collected, including gestational age, mode of delivery, APGAR score at birth, resuscitation measures at birth, duration of resuscitation, post-resuscitation ventilation, physical examination at birth (including vital signs and any congenital abnormalities), the results of routine laboratory examinations, and the neonatal outcome. Results: This study involved 105 critically ill neonates who were admitted to the NICU. Of these, 50 (47.6%) neonates died and 55 (52.4%) survived. There were more males than females (61% vs. 39%). The mean gestational age of the subjects was 33.8 ± 4.28 weeks, and the mean birth weight was 1820.31 ± 33.18 g. The mean MSNS score of neonates with a fatal outcome was lower than that of those who survived. The ROC curve with a cut-off MSNS score <10.5 gave an AUC of 93.5% (95% CI: 88.3-98.6), with a sensitivity of 84% (95% CI: 80.5-94.9), specificity of 80% (95% CI: 88.3-98.6), Positive Predictive Value (PPV) of 79.2%, Negative Predictive Value (NPV) of 84.6%, and Risk Ratio (RR) of 5.14, with Hosmer & Lemeshow test p>0.05. Conclusion: The MSNS score has good predictive value and good calibration for the outcomes of critically ill neonates admitted to the NICU.
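For reference, the sketch below shows how sensitivity, specificity, PPV and NPV follow from the 2x2 table obtained by dichotomising a score at a chosen cut-off (such as MSNS < 10.5). The counts used are hypothetical, not the study data.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table obtained by
    dichotomising a score at a chosen cut-off (e.g. MSNS < 10.5 = 'positive')."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# hypothetical counts, not the study data
print(diagnostic_metrics(tp=42, fn=8, fp=11, tn=44))
```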

Keywords: critically ill neonate, outcome, MSNS, NICU, predictive value

Procedia PDF Downloads 44
222 Gene Expressions in Left Ventricle Heart Tissue of Rat after 150 MeV Proton Irradiation

Authors: R. Fardid, R. Coppes

Abstract:

Introduction: In mediastinal radiotherapy, and to a lesser extent also in total-body irradiation (TBI), radiation exposure may lead to the development of cardiac diseases. Radiation-induced heart disease is dose-dependent and is characterized by a loss of cardiac function associated with progressive degeneration of heart cells. We aimed to determine the in-vivo radiation effects on fibronectin, ColaA1, ColaA2, galectin and TGFb1 gene expression levels in the left ventricle heart tissue of rats after irradiation. Material and method: Four untreated adult Wistar rats served as the control group (group A). In group B, 4 adult Wistar rats were irradiated locally to the heart only with a single dose of 20 Gy of a 150 MeV proton beam. In the heart-plus-lung irradiation group (group C), 4 adult rats received the heart irradiation described for group B plus lateral irradiation of 50% of the lung. At 8 weeks after irradiation, the animals were sacrificed and the left ventricle was dropped into liquid nitrogen for RNA extraction using the Absolutely RNA® Miniprep Kit (Stratagene, Cat no. 400800). cDNA was synthesized using M-MLV reverse transcriptase (Life Technologies, Cat no. 28025-013). We used a Bio-Rad iQ5 Real-Time PCR machine for qPCR testing with the relative standard curve method. Results: We found that fibronectin gene expression in group C significantly increased compared to the control group, but it did not show a significant change in group B compared to group A. The mRNA expression levels of ColaA1 and ColaA2 did not show any significant changes between the normal and irradiated groups. Galectin expression significantly increased only in group C compared to group A. TGFb1 expression showed a significant enhancement compared to group A, more pronounced in group C than in group B. Conclusion: In summary, 20 Gy of proton exposure of heart tissue may lead to detectable damage in heart cells and may disturb their function as components of the heart tissue structure at the molecular level.
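As a reminder of how the relative standard curve method works, the sketch below fits Ct against log10 quantity for a dilution series, interpolates sample quantities, and normalises the target gene to a reference gene. All Ct values and series are hypothetical placeholders, not data from the study.

```python
import numpy as np

def quantity_from_ct(ct_samples, ct_standards, log10_quantity_standards):
    """Interpolate relative quantities from a Ct standard curve
    (linear fit of Ct against log10 quantity)."""
    slope, intercept = np.polyfit(log10_quantity_standards, ct_standards, 1)
    return 10 ** ((np.asarray(ct_samples) - intercept) / slope)

# hypothetical dilution series and Ct values for a target and a reference gene
log10_q = np.array([0, 1, 2, 3, 4])                       # log10 relative quantity
ct_std_target = np.array([30.1, 26.8, 23.4, 20.1, 16.7])
ct_std_ref = np.array([29.5, 26.2, 22.9, 19.6, 16.3])

q_target = quantity_from_ct([24.0, 22.5], ct_std_target, log10_q)
q_ref = quantity_from_ct([23.1, 23.0], ct_std_ref, log10_q)
print("normalised expression (target / reference):", q_target / q_ref)
```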

Keywords: gene expression, heart damage, proton irradiation, radiotherapy

Procedia PDF Downloads 462
221 Development of Ketorolac Tromethamine Encapsulated Stealth Liposomes: Pharmacokinetics and Biodistribution

Authors: Yasmin Begum Mohammed

Abstract:

Ketorolac tromethamine (KTM) is a non-steroidal anti-inflammatory drug with potent analgesic and anti-inflammatory activity due to its prostaglandin-related inhibitory effect. It is a non-selective cyclo-oxygenase inhibitor. The drug is currently used orally and intramuscularly in multiple divided doses, clinically for the management of arthritis, cancer pain, post-surgical pain, and in the treatment of migraine pain. KTM has a short biological half-life of 4 to 6 hours, which necessitates frequent dosing to retain the action. The frequent occurrence of gastrointestinal bleeding, perforation, peptic ulceration, and renal failure has led to the development of other drug delivery strategies for the appropriate delivery of KTM. The ideal solution would be to target the drug only to the cells or tissues affected by the disease. Drug targeting could be achieved effectively by liposomes, which are biocompatible and biodegradable. The aim of the study was to develop a parenteral liposome formulation of KTM with improved efficacy while reducing side effects by targeting the inflammation due to arthritis. PEG-anchored (stealth) and non-PEG-anchored liposomes were prepared by the thin film hydration technique followed by an extrusion cycle and characterized in vitro and in vivo. Stealth liposomes (SLs) exhibited a high encapsulation efficiency (94%) and 52% drug retention during release studies over 24 h, with good stability for a period of 1 month at -20°C and 4°C. SLs showed a maximum of about 55% edema inhibition with a significant analgesic effect. SLs produced marked differences over the non-SL formulations, with an increase in area under the plasma concentration-time curve, t₁/₂, and mean residence time, and reduced clearance. 0.3% of the drug was detected in the arthritis-induced paw, with significantly reduced drug localization in the liver, spleen, and kidney for SLs when compared to other conventional liposomes. Thus SLs help to increase the therapeutic efficacy of KTM by increasing the targeting potential at the inflammatory region.

Keywords: biodistribution, ketorolac tromethamine, stealth liposomes, thin film hydration technique

Procedia PDF Downloads 273
220 Building Education Leader Capacity through an Integrated Information and Communication Technology Leadership Model and Tool

Authors: Sousan Arafeh

Abstract:

Educational systems and schools worldwide are increasingly reliant on information and communication technology (ICT). Unfortunately, most educational leadership development programs do not offer formal curricular and/or field experiences that prepare students for managing ICT resources, personnel, and processes. The result is a steep learning curve for the leader and his/her staff and dissipated organizational energy that compromises desired outcomes. To address this gap in education leaders’ development, Arafeh’s Integrated Technology Leadership Model (AITLM) was created. It is a conceptual model and tool that educational leadership students can use to better understand the ICT ecology that exists within their schools. The AITL Model consists of six 'infrastructure types' where ICT activity takes place: technical infrastructure, communications infrastructure, core business infrastructure, context infrastructure, resources infrastructure, and human infrastructure. These six infrastructures are further divided into 16 key areas that need management attention. The AITL Model was created by critically analyzing existing technology/ICT leadership models and working to make something more authentic and comprehensive regarding school leaders’ purview and experience. The AITL Model then served as a tool when it was distributed to over 150 educational leadership students who were asked to review it and qualitatively share their reactions. Students said the model presented crucial areas of consideration that they had not been exposed to before and that the exercise of reviewing and discussing the AITL Model as a group was useful for identifying areas of growth that they could pursue in the leadership development program and in their professional settings. While development in all infrastructures and key areas was important for students’ understanding of ICT, they noted that they were least aware of the importance of the intangible area of the resources infrastructure. The AITL Model will be presented and session participants will have an opportunity to review and reflect on its impact and utility. Ultimately, the AITL Model is one that could have significant policy and practice implications. At the very least, it might help shape ICT content in educational leadership development programs through curricular and pedagogical updates.

Keywords: education leadership, information and communications technology, ICT, leadership capacity building, leadership development

Procedia PDF Downloads 90
219 Identifying and Quantifying Factors Affecting Traffic Crash Severity under Heterogeneous Traffic Flow

Authors: Praveen Vayalamkuzhi, Veeraragavan Amirthalingam

Abstract:

Studies on highway safety are becoming the need of the hour, as over 400 lives are lost every day in India due to road crashes. In order to evaluate the factors that lead to different levels of crash severity, it is necessary to investigate the level of safety of highways and their relation to crashes. In the present study, an attempt is made to identify the factors that contribute to road crashes and to quantify their effect on the severity of road crashes. The study was carried out on a four-lane divided rural highway in India. The variables considered in the analysis include components of the horizontal alignment of the highway, viz., straight or curved section; time of day; driveway density; presence of median; median opening; gradient; operating speed; and annual average daily traffic. These variables were considered after a preliminary analysis. The major complexities in the study are the heterogeneous traffic and the speed variation between different classes of vehicles along the highway. To quantify the impact of each of these factors, statistical analyses were carried out using a logit model and also negative binomial regression. The output from the statistical models showed that the variables viz., horizontal components of the highway alignment, driveway density, time of day, operating speed, and annual average daily traffic have a significant relation with the severity of crashes, viz., fatal as well as injury crashes. Further, the annual average daily traffic has a more significant effect on severity than the other variables. The contribution of highway horizontal components to crash severity is also significant. Logit models can predict crashes better than the negative binomial regression models. The results of the study will help transport planners to look into these aspects at the planning stage itself in the case of highways operated under heterogeneous traffic flow conditions.
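The sketch below shows how such a binary logit model of crash severity can be fitted with statsmodels; the crash records are simulated and the column names are hypothetical, so it only illustrates the modelling step, not the study's data or coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# hypothetical crash-level data: severe = 1 for fatal/injury, 0 otherwise
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "on_curve": rng.integers(0, 2, n),            # horizontal alignment: curve vs straight
    "driveway_density": rng.uniform(0, 20, n),    # driveways per km
    "night": rng.integers(0, 2, n),               # time of day
    "operating_speed": rng.uniform(40, 100, n),   # km/h
    "aadt_thousands": rng.uniform(5, 60, n),      # annual average daily traffic / 1000
})
logit_p = -4 + 0.6 * df.on_curve + 0.03 * df.operating_speed + 0.02 * df.aadt_thousands
df["severe"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# binary logit model of crash severity
X = sm.add_constant(df[["on_curve", "driveway_density", "night",
                        "operating_speed", "aadt_thousands"]])
model = sm.Logit(df["severe"], X).fit(disp=False)
print(model.summary())
```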

Keywords: geometric design, heterogeneous traffic, road crash, statistical analysis, level of safety

Procedia PDF Downloads 267
218 Risk Analysis of Flood Physical Vulnerability in Residential Areas of Mathare Nairobi, Kenya

Authors: James Kinyua Gitonga, Toshio Fujimi

Abstract:

Vulnerability assessment and analysis is essential to determining the degree of damage and loss resulting from natural disasters. Urban flooding causes major economic loss and casualties in the Mathare residential area in Nairobi, Kenya. High population density caused by rural-urban migration, unemployment, and unplanned urban development are among the factors that increase flood vulnerability in the Mathare area. This study aims to analyse the physical vulnerability to flood risk in Mathare based on scientific data; rainfall data, Mathare River discharge rate data, water runoff data, field survey data and a questionnaire survey through sampling of the study area have been used to develop the risk curves. Three structural types of building were identified in the study area, and vulnerability and risk curves were made for these three structural types by plotting the relationship between flood depth and damage for each structural type. The results indicate that the structural type with mud walls and mud floors is the most vulnerable to flooding, while the structural type with stone walls and concrete floors is the least vulnerable. The vulnerability of building contents is mainly determined by the number of floors, where households with two floors are least vulnerable and households with one floor are most vulnerable. Therefore, more than 80% of the residential buildings, including the property within them, are highly vulnerable to floods and consequently exposed to high risk. When estimating the potential casualties/injuries, we discovered that the structural type of house was a major determinant: the mud/adobe structural type had casualties of 83.7%, while the masonry structural type had casualties of 10.71% of the people living in these houses. This research concludes that flood awareness, warnings and observing the building codes will help reduce damage to the structural types of building, deaths, and damage to the building contents.
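The sketch below shows the basic mechanics of such a depth-damage (vulnerability) curve: interpolating a damage ratio at a given flood depth for each structural type and scaling by the exposed value. The curve points and values are hypothetical, chosen only to mirror the reported ordering (mud-wall buildings most vulnerable, stone-wall buildings least).

```python
import numpy as np

# hypothetical depth-damage (vulnerability) curves: damage ratio vs flood depth (m)
depths = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
damage_ratio = {
    "mud_wall_mud_floor":        np.array([0.0, 0.45, 0.75, 0.90, 1.00]),
    "stone_wall_concrete_floor": np.array([0.0, 0.10, 0.25, 0.40, 0.55]),
}

def expected_damage(structure, flood_depth, building_value):
    """Interpolate the damage ratio at a given flood depth and scale by value."""
    ratio = np.interp(flood_depth, depths, damage_ratio[structure])
    return ratio * building_value

print(expected_damage("mud_wall_mud_floor", 0.8, building_value=2000.0))
print(expected_damage("stone_wall_concrete_floor", 0.8, building_value=2000.0))
```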

Keywords: flood loss, Mathare Nairobi, risk curve analysis, vulnerability

Procedia PDF Downloads 213
217 Surface Water Flow of Urban Areas and Sustainable Urban Planning

Authors: Sheetal Sharma

Abstract:

Urban planning is associated with land transformation from natural areas to modified and developed ones, which leads to modification of the natural environment. The basic knowledge of the relationship between the two should be ascertained before proceeding with the development of natural areas. Changes to the land surface due to built-up pavements, roads and similar land cover affect surface water flow. There is a gap between urban planning and the basic knowledge of hydrological processes that should be known to planners. The paper aims to identify these variations in surface flow due to urbanization over a temporal scale of 40 years using the Storm Water Management Model (SWMM), and to correlate these findings with the urban planning guidelines in the study area, along with the geological background, to find suitable combinations of land cover, soil and guidelines. For the purpose of identifying the changes in surface flows, 19 catchments were identified with different geology, different growth over the 40 years, and different groundwater level fluctuations. The increasing built-up area and varying surface runoff were studied using ArcGIS, SWMM modelling, and regression analysis of runoff. The resulting runoff for various land covers and soil groups with varying built-up conditions was observed. The modelling procedure also included observations for varying precipitation and constant built-up conditions in all catchments. All these observations were combined for each catchment, and a single regression curve was obtained for runoff. It was observed that alluvium with suitable land cover was better for infiltration and generated the least runoff, but excess built-up area could not be sustained on alluvial soil. Similarly, basalt had the least recharge and the most runoff, demanding maximum vegetation over it. Sandstone resulted in good recharge if planned with more open spaces and natural soils with intermittent vegetation. Hence, these observations form a key basis for planners while planning various land uses on different soils. This paper contributes and provides a solution to the basic knowledge gap which urban planners face during the development of natural surfaces.

Keywords: runoff, built up, roughness, recharge, temporal changes

Procedia PDF Downloads 255
216 Application of Seismic Refraction Method in Geotechnical Study

Authors: Abdalla Mohamed M. Musbahi

Abstract:

The study area lies in the Al-Falah area on Airport-Tripoli, in Zone 16, where the establishment of a multi-floor residential and commercial complex is planned; this part was divided into seven subzones. In each subzone, orthogonal profiles were collected using the seismic refraction method. The overall aim of this project is to investigate the applicability of the seismic refraction method, a commonly used traditional geophysical technique to determine depth to bedrock, competence of bedrock, depth to the water table, or depth to other seismic velocity boundaries. The purpose of the work is to make engineers and decision makers recognize the importance of planning and executing a pre-investigation program including geophysics, and in particular the seismic refraction method. This aim is achieved by evaluating the seismic refraction method at different scales, determining the depth and velocity of the base layer (bedrock), and calculating the elastic properties of each layer in the region using the seismic refraction method. The orthogonal profiles were carried out in every subzone of Zone 16. In the seismic refraction layout, the geophones are placed on an imaginary straight line with 5 m spacing, and three shot points (at the beginning, middle, and end of the layout) are used in order to generate the P and S waves. The first and last shot points are placed about 5 meters from the geophones, and the middle shot point is placed between the 12th and 13th geophones; from the time-distance curves, the P- and S-wave velocities were calculated and the thicknesses were estimated for up to three layers. Any change in the physical properties of the medium (shear modulus, bulk modulus, density) leads to a change in the velocity of the waves passing through it, because the velocity of waves travelling in rocks is closely related to these parameters: density (ρ), bulk modulus (κ), and shear modulus (μ). Therefore, these parameters can be estimated from the primary and secondary wave velocities (P-wave, S-wave).
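The sketch below spells out the standard relations used for this last step: dynamic shear modulus, bulk modulus, Poisson's ratio and Young's modulus from P-wave velocity, S-wave velocity and density. The numerical values are hypothetical layer values, not results from the survey.

```python
def elastic_moduli(vp, vs, rho):
    """Dynamic elastic moduli from P-wave velocity vp (m/s), S-wave velocity
    vs (m/s) and bulk density rho (kg/m^3), using the standard relations."""
    mu = rho * vs ** 2                                         # shear modulus (Pa)
    k = rho * (vp ** 2 - (4.0 / 3.0) * vs ** 2)                # bulk modulus (Pa)
    nu = (vp ** 2 - 2 * vs ** 2) / (2 * (vp ** 2 - vs ** 2))   # Poisson's ratio
    e = 2 * mu * (1 + nu)                                      # Young's modulus (Pa)
    return {"shear_GPa": mu / 1e9, "bulk_GPa": k / 1e9,
            "young_GPa": e / 1e9, "poisson": nu}

# hypothetical layer values, not results from the survey
print(elastic_moduli(vp=1800.0, vs=900.0, rho=1900.0))
```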

Keywords: application of seismic, geotechnical study, physical properties, seismic refraction

Procedia PDF Downloads 466
215 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images

Authors: Shenlun Chen, Leonard Wee

Abstract:

Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland forming: >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and can cause observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists’ grading is usually coarse, while a finer and continuous differentiation grade may help to stratify CRC patients better. In this study, a deep learning based automatic differentiation grading algorithm was developed and evaluated by survival analysis. Firstly, a gland segmentation model was developed for segmenting gland structures. Gland regions of WSIs were delineated and used for differentiation annotation. Tumor regions were annotated by experienced pathologists into high-, medium-, low-differentiation and normal tissue, which correspond to tumor with clear, unclear, and no gland structure and non-tumor, respectively. Then a differentiation prediction model was developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models’ predictions of tumor regions and tumor differentiation status according to the WHO definitions. If multiple WSIs belonged to a patient, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized to a scale between 0 and 1. The Cancer Genome Atlas colon adenocarcinoma (TCGA-COAD) project was enrolled in this study. For the gland segmentation model, the area under the receiver operating characteristic curve (ROC AUC) reached 0.981 and accuracy reached 0.932 in the validation set. For the differentiation prediction model, ROC AUC reached 0.983, 0.963, 0.963, 0.981 and accuracy reached 0.880, 0.923, 0.668, 0.881 for the groups of low-, medium-, high-differentiation and normal tissue in the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the “maxstat” method, which was almost the same as the WHO system’s cut-off point of 50%. Both the WHO system’s cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both log-rank test p-values were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were proven to be predictable through deep learning methods. A finer and continuous differentiation grade can also be automatically calculated with the above models. The differentiation grade was proven to stratify CRC patients well in survival analysis, with an optimized cut-off point almost the same as that of the WHO tumor grading system. A tool that automatically calculates the differentiation grade may show potential in the field of therapy decision-making and personalized treatment.
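The sketch below shows one plausible way to turn tile-level model predictions into a continuous grade (fraction of tumor tiles predicted as gland-forming) and to compare survival above and below a cut-off with a log-rank test, here using the lifelines package. The grades, follow-up times and events are simulated and the 0.51 cut-off merely echoes the abstract; none of this reproduces the study's pipeline or data.

```python
import numpy as np
from lifelines.statistics import logrank_test

def differentiation_grade(tile_is_tumor, tile_is_gland_forming):
    """Continuous grade in [0, 1]: fraction of tumor tiles predicted as
    gland-forming (a proxy for the WHO percentage-based definition)."""
    tumor = np.asarray(tile_is_tumor, dtype=bool)
    gland = np.asarray(tile_is_gland_forming, dtype=bool)
    return gland[tumor].mean()

# hypothetical patient-level grades and follow-up data
rng = np.random.default_rng(0)
grades = rng.uniform(0, 1, 200)
time = rng.exponential(40, 200)          # follow-up in months
event = rng.integers(0, 2, 200)          # 1 = death observed

cutoff = 0.51                            # e.g. a maxstat-style optimum
high, low = grades >= cutoff, grades < cutoff
result = logrank_test(time[high], time[low],
                      event_observed_A=event[high],
                      event_observed_B=event[low])
print("log-rank p-value:", result.p_value)
```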

Keywords: colorectal cancer, differentiation, survival analysis, tumor grading

Procedia PDF Downloads 115
214 Necessity for a Standardized Occupational Health and Safety Management System: An Exploratory Study from the Danish Offshore Wind Sector

Authors: Dewan Ahsan

Abstract:

Denmark is well ahead in generating electricity from renewable sources, and the offshore wind sector is playing the pivotal role in achieving this target. Though there is rapid growth of the offshore wind sector in Denmark, there is still a lack of harmonization in OHS (occupational health and safety) regulations and standards. Therefore, this paper attempts to ascertain: i) what are the major challenges of company-specific OHS standards? ii) why does the offshore wind industry need a standardized OHS management system? and iii) who can play the key role in this process? To achieve these objectives, this research applies interview and survey techniques. This study has identified several key challenges in the OHS management system, which are: gaps in coordination and communication among the stakeholders, gaps in incident reporting systems, absence of a harmonized OHS standard, and a blame culture. Furthermore, this research has identified eleven key stakeholders who are actively involved with the offshore wind business in Denmark. As noticed, the relationships among these stakeholders are very complex, especially between operators and sub-contractors. The respondent technicians are concerned about compliance with various third-party OHS standards (e.g. ISO 31000, ISO 29400, Good practice guidelines by G+) which are applied by various offshore companies. On top of these standards, operators also impose their own OHS standards. From the technicians’ point of view, many of these standards are not even specific to the offshore wind sector. So, it is a big challenge for the technicians and sub-contractors to comply with different company-specific standards, which also elevates the price of the services they offer to the operators. For instance, when a sub-contractor is competing in a bidding process, it must fulfill a number of OHS requirements (which demand much extra documentation) set by the individual operator and/or the turbine supplier. From the sub-contractors’ point of view, these extra requirements consume too much time when preparing the bidding documents, and they also need to train their employees to pass the specific OHS certification courses demanded by individual clients and individual projects. The sub-contractors argued that in many cases this extra documentation and these OHS certificates are inessential to ensuring service quality. So, a standardized OHS management procedure (which could be applicable for all clients) could easily solve this problem. In conclusion, this study highlights that i) development of a harmonized OHS standard applicable for all operators and turbine suppliers, ii) encouragement of technicians’ active participation in OHS management, iii) development of good safety leadership, and iv) sharing of experiences among the stakeholders (especially operators-operators-sub-contractors) are the most vital strategies to overcome the existing challenges and to achieve the goal of 'zero accident/harm' in the offshore wind industry.

Keywords: green energy, offshore, safety, Denmark

Procedia PDF Downloads 190
213 Geospatial Analysis of Hydrological Response to Forest Fires in Small Mediterranean Catchments

Authors: Bojana Horvat, Barbara Karleusa, Goran Volf, Nevenka Ozanic, Ivica Kisic

Abstract:

Forest fire is a major threat in many regions of Croatia, especially in coastal areas. Although forest fires are sometimes caused by natural processes, the most common cause is the human factor, intentional or unintentional. Forest fires drastically transform landscapes and influence natural processes. The main goal of the presented research is to analyse and quantify the impact of a forest fire on hydrological processes and to propose the model that best describes changes in hydrological patterns in the analysed catchments. Keeping in mind the spatial component of the processes, geospatial analysis is performed to gain better insight into the spatial variability of the hydrological response to disastrous events. In that respect, two catchments that experienced severe forest fire were delineated, and various hydrological and meteorological data, both attribute and spatial, were collected. The major drawback is certainly the lack of hydrological data, common in small torrential karstic streams; hence modelling results should be validated with data collected in a catchment that has similar characteristics and established hydrological monitoring. The event chosen for the modelling is the forest fire that occurred in July 2019 and burned nearly 10% of the analysed area. Surface (land use/land cover) conditions before and after the event were derived from two Sentinel-2 images. The mapping of the burnt area is based on a comparison of the Normalized Burn Ratio (NBR) computed from both images. To estimate and compare hydrological behaviour before and after the event, curve number (CN) values are assigned to the land use/land cover classes derived from the satellite images. Hydrological modelling resulted in surface runoff generation and hence prediction of the hydrological responses of the catchments to a forest fire event. The research was supported by the Croatian Science Foundation through the project 'Influence of Open Fires on Water and Soil Quality' (IP-2018-01-1645).
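To make the two remote-sensing/hydrology steps concrete, the sketch below computes NBR and dNBR from NIR and SWIR reflectance and estimates direct runoff with the SCS curve number equation for pre- and post-fire CN values. The reflectances and CN values are assumed for illustration only and are not results from the analysed catchments.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance
    (Sentinel-2 bands B8 and B12 are commonly used)."""
    return (nir - swir) / (nir + swir)

def scs_runoff(p_mm, cn):
    """SCS curve number direct runoff depth (mm) for rainfall depth p_mm."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction
    return float(np.where(p_mm > ia, (p_mm - ia) ** 2 / (p_mm - ia + s), 0.0))

# hypothetical reflectance values before/after the fire
dnbr = nbr(0.45, 0.18) - nbr(0.25, 0.35)   # positive dNBR indicates burning
print("dNBR:", round(dnbr, 3))

# same storm, pre-fire vs post-fire curve numbers (assumed values)
print("runoff pre-fire  (mm):", round(scs_runoff(60.0, cn=70), 1))
print("runoff post-fire (mm):", round(scs_runoff(60.0, cn=85), 1))
```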

Keywords: Croatia, forest fire, geospatial analysis, hydrological response

Procedia PDF Downloads 103
212 Open Joint Surgery for Temporomandibular Joint Internal Derangement: Wilkes Stages III-V

Authors: T. N. Goh, M. Hashmi, O. Hussain

Abstract:

Temporomandibular joint (TMJ) dysfunction (TMD) is a condition that may affect patients via restricted mouth opening, significant pain during normal functioning, and/or reproducible joint noise. TMD includes myofascial pain, TMJ functional derangements (internal derangement, dislocation), and TMJ degenerative/inflammatory joint disease. Internal derangement (ID) is the most common cause of TMD-related clicking and locking. These patients are managed in a stepwise approach, from patient education (homecare advice and analgesia), splint therapy, physiotherapy, and botulinum toxin treatment, to arthrocentesis. Arthrotomy is offered when the aforementioned treatment options fail to alleviate symptoms and improve quality of life. The aim of this prospective study was to review the outcomes of open jaw joint surgery in TMD patients. Patients who presented from 2015-2022 at the Oral and Maxillofacial Surgery Department in the Doncaster NHS Foundation Trust, UK, with a Wilkes classification of III-V were included. These patients underwent either i) discopexy with bone-anchoring suture (9); ii) intrapositional temporalis flap (ITF) with bone-anchoring suture (3); iii) eminoplasty and discopexy with suturing to the capsule (3); iv) discectomy + ITF with bone-anchoring suture (1); v) discoplasty + bone-anchoring suture (1); or vi) ITF (1). Maximum incisal opening (MIO) was assessed pre-operatively and at each follow-up. Pain score, determined via the visual analogue scale (VAS, with 0 being no pain and 10 being the worst pain), was also recorded. A total of 18 eligible patients were identified, with a mean age of 45 (range 22-79), of which 16 were female. The patients were scored by Wilkes classification as III (14), IV (1), or V (4). Twelve patients had anterior disc displacement without reduction (66%) and six had degenerative/arthritic changes (33%) of the TMJ. The open joint procedure resulted in an increase in MIO and a reduction in pain VAS for the majority of patients, across all Wilkes classifications. Pre-procedural MIO was 22.9 ± 7.4 mm and VAS was 7.8 ± 1.5. At three months post-procedure there was an increase in MIO to 34.4 ± 10.4 mm (p < 0.01) and a decrease in the VAS to 1.5 ± 2.9 (p < 0.01). Three patients were lost to follow-up prior to six months. Six were discharged at the six-month review and five patients were discharged at the 12-month review as they were asymptomatic with good mouth opening. Four patients are still attending for annual botulinum toxin treatment. Two patients (Wilkes III and V) subsequently underwent TMJ replacement (11%). One of these patients (Wilkes III) had improved initially to an MIO of 40 mm, but subsequently relapsed to less than 20 mm due to lack of compliance with the jaw rehabilitation device post-operatively. Clinical improvements were found in 89% of patients within the study group, with a return to a near-normal MIO range and reduced pain score. Intraoperatively, the operator found the bone-anchoring suture used for discopexy/discoplasty more secure than the soft tissue anchoring suturing technique.

Keywords: bone anchoring suture, open temporomandibular joint surgery, temporomandibular joint, temporomandibular joint dysfunction

Procedia PDF Downloads 79
211 Electric Vehicle Fleet Operators in the Energy Market - Feasibility and Effects on the Electricity Grid

Authors: Benjamin Blat Belmonte, Stephan Rinderknecht

Abstract:

The transition to electric vehicles (EVs) stands at the forefront of innovative strategies designed to address environmental concerns and reduce fossil fuel dependency. As the number of EVs on the roads increases, so too does the potential for their integration into energy markets. This research dives deep into the transformative possibilities of using electric vehicle fleets, specifically electric bus fleets, not just as consumers but as active participants in the energy market. This paper investigates the feasibility and grid effects of electric vehicle fleet operators in the energy market. Our objective centers around a comprehensive exploration of the sector coupling domain, with an emphasis on the economic potential in both electricity and balancing markets. Methodologically, our approach combines data mining techniques with thorough pre-processing, pulling from a rich repository of electricity and balancing market data. Our findings are grounded in the actual operational realities of the bus fleet operator in Darmstadt, Germany. We employ a Mixed Integer Linear Programming (MILP) approach, with the bulk of the computations being processed on the High-Performance Computing (HPC) platform ‘Lichtenbergcluster’. Our findings underscore the compelling economic potential of EV fleets in the energy market. With electric buses becoming more prevalent, the considerable size of these fleets, paired with their substantial battery capacity, opens up new horizons for energy market participation. Notably, our research reveals that economic viability is not the sole advantage. Participating actively in the energy market also translates into pronounced positive effects on grid stabilization. Essentially, EV fleet operators can serve a dual purpose: facilitating transport while simultaneously playing an instrumental role in enhancing grid reliability and resilience. This research highlights the symbiotic relationship between the growth of EV fleets and the stabilization of the energy grid. Such systems could lead to both commercial and ecological advantages, reinforcing the value of electric bus fleets in the broader landscape of sustainable energy solutions. In conclusion, the electrification of transport offers more than just a means to reduce local greenhouse gas emissions. By positioning electric vehicle fleet operators as active participants in the energy market, there lies a powerful opportunity to drive forward the energy transition. This study serves as a testament to the synergistic potential of EV fleets in bolstering both economic viability and grid stabilization, signaling a promising trajectory for future sector coupling endeavors.
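As a concrete, much-reduced illustration of the optimization step, the sketch below formulates a tiny MILP with PuLP: charging a single bus battery against hourly day-ahead prices, with a binary on/off decision for the charger and an energy requirement at departure. The prices, battery parameters and constraints are hypothetical placeholders and do not represent the authors' model of the Darmstadt fleet or the balancing market.

```python
import pulp

# hypothetical hourly day-ahead prices (EUR/MWh) for a depot charging window
prices = [90, 80, 60, 40, 35, 45, 70, 110]
hours = range(len(prices))

capacity_mwh, p_max_mw, eff = 0.45, 0.15, 0.95   # one e-bus battery (assumed)
e_start, e_required = 0.10, 0.40                 # state of energy at start / at departure

m = pulp.LpProblem("depot_charging", pulp.LpMinimize)
charge = pulp.LpVariable.dicts("charge_mw", hours, lowBound=0, upBound=p_max_mw)
on = pulp.LpVariable.dicts("charger_on", hours, cat="Binary")   # integer decision

# objective: minimise the energy purchase cost
m += pulp.lpSum(prices[t] * charge[t] for t in hours)

# charging only possible when the charger is switched on
for t in hours:
    m += charge[t] <= p_max_mw * on[t]

# reach the required state of energy by departure, within battery capacity
m += e_start + eff * pulp.lpSum(charge[t] for t in hours) >= e_required
m += e_start + eff * pulp.lpSum(charge[t] for t in hours) <= capacity_mwh

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("cost (EUR):", pulp.value(m.objective))
print("schedule (MW):", [round(charge[t].value(), 3) for t in hours])
```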

Keywords: electric vehicle fleet, sector coupling, optimization, electricity market, balancing market

Procedia PDF Downloads 48
210 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs

Authors: Michela Quadrini

Abstract:

Chord diagrams occur in mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation to study chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one Watson-Crick base-pair interaction between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which are attached a number of chords with distinct endpoints. There is a natural fattening of any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus, that of its associated surface. To each chord diagram and linear chord diagram it is possible to associate the intersection graph. It is a graph whose vertices correspond to the chords of the diagram, and whose edges represent the chord intersections. Such an intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modelling LCDs in terms of the relations among chords. This set is composed of crossing, nesting, and concatenation. The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows us to associate a unique algebraic term to each linear chord diagram, while the remaining operators allow rewriting the term through a set of appropriate rewriting rules. Such rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and the linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than existing ones. Such an LCD equivalence class could be useful to obtain a more accurate estimate of the link between the crossing number and the topological genus and to study the relations among other invariants.
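The sketch below builds the intersection graph of a linear chord diagram from a minimal representation in which each chord is a pair of endpoint positions on the backbone; two chords are joined by an edge exactly when their endpoints interleave (the crossing relation). The representation and the example diagram are assumptions made for illustration, not the authors' grammar or rewriting system.

```python
from itertools import combinations

def intersection_graph(chords):
    """Build the intersection graph of a linear chord diagram.

    Each chord is a pair (a, b) of distinct endpoint positions on the
    backbone; two chords cross iff their endpoints interleave."""
    edges = set()
    for (i, (a1, b1)), (j, (a2, b2)) in combinations(enumerate(chords), 2):
        a1, b1 = sorted((a1, b1))
        a2, b2 = sorted((a2, b2))
        if a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1:   # interleaving endpoints
            edges.add((i, j))
    return {"vertices": list(range(len(chords))), "edges": sorted(edges)}

# example: chords 0 and 1 cross; chord 2 follows by concatenation and crosses nothing
print(intersection_graph([(1, 4), (2, 6), (7, 8)]))
```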

Keywords: chord diagrams, linear chord diagram, equivalence class, topological language

Procedia PDF Downloads 179
209 Diversity of Rhopalocera in Different Vegetation Types of PC Hills, Philippines

Authors: Sean E. Gregory P. Igano, Ranz Brendan D. Gabor, Baron Arthur M. Cabalona, Numeriano Amer E. Gutierrez

Abstract:

Distribution patterns and abundance of butterflies respond in the long term to variations in habitat quality. Studying butterfly populations provides evidence of how vegetation types influence their diversity. In this research, the Rhopalocera diversity of PC Hills was assessed to provide information on diversity trends in varying vegetation types. PC Hills, located in Palo, Leyte, Philippines, is a relatively undisturbed area with forests and rivers. Despite being situated near inhabited villages, the area is observed to have a possibly rich butterfly population. To assess Rhopalocera species richness and diversity, a transect sampling technique was applied to monitor and document butterflies. Transects were placed in locations that can be mapped, described and relocated easily. Three transects measuring three hundred meters each, with a 5-meter diameter, were established based on the different vegetation types present. The three main vegetation types identified were the agroecosystem (transect 1), dipterocarp forest (transect 2), and riparian (transect 3). Sample collections were done only from 9:00 A.M. to 3:00 P.M. under warm and bright weather, with no more than moderate winds and when it was not raining. When weather conditions did not permit collection, it was moved to another day. A GPS receiver was used to record the location of the selected sample sites and the coordinates of where each sample was collected. Morphological analysis was done in the first phase of the study to identify the voucher specimens to the lowest taxonomic level possible, using butterfly identification guides and species lists as references. In the second phase, DNA barcoding will be used to further identify the voucher specimens to the species taxonomic level. After eight (8) sampling sessions, seven hundred forty-two (742) individuals were seen, and twenty-two (22) Rhopalocera genera were identified through morphological identification. The Nymphalidae genus Ypthima and the Pieridae genera Eurema and Leptosia were the most dominant taxa observed. Twenty (20) of the thirty-one (31) voucher specimens have already been identified to the species taxonomic level using DNA barcoding. The Shannon-Wiener index showed that the highest diversity level was observed in the third transect (H’ = 2.947), followed by the second transect (H’ = 2.6317), with the lowest in the first transect (H’ = 1.767). This indicates that butterflies are more likely to inhabit dipterocarp and riparian vegetation types than agroecosystems, which influences their species composition and diversity. Moreover, the presence of a river in the riparian vegetation supported its diversity value, since butterflies have a tendency to fly into areas near rivers. Species identification of the other voucher specimens will be done in order to compute the overall species richness in PC Hills. Further butterfly sampling sessions at PC Hills are recommended for a more reliable diversity trend and to discover more butterfly species. Expanding the research by assessing Rhopalocera diversity in other locations should be considered, along with studying factors that affect butterfly species composition other than vegetation types.
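For reference, the sketch below computes the Shannon-Wiener diversity index H' = -Σ p_i ln p_i from per-taxon abundance counts; the counts used are hypothetical, not the transect data.

```python
import numpy as np

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln p_i)."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# hypothetical per-genus abundances for one transect
print(round(shannon_wiener([120, 80, 30, 25, 10, 5]), 3))
```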

Keywords: distribution patterns, DNA barcoding, morphological analysis, Rhopalocera

Procedia PDF Downloads 122
208 A Study on the Effect of Different Climate Conditions on Time of Balance of Bleeding and Evaporation in Plastic Shrinkage Cracking of Concrete Pavements

Authors: Hasan Ziari, Hassan Fazaeli, Seyed Javad Vaziri Kang Olyaei, Asma Sadat Dabiri

Abstract:

The presence of cracks in concrete pavements provides a path for the ingress of corrosive substances, acids, oils, and water into the pavement and reduces its long-term durability and level of service. One of the causes of early cracks in concrete pavements is plastic shrinkage. This shrinkage occurs due to the formation of negative capillary pressures after the bleeding and evaporation rates at the pavement surface reach equilibrium. These cracks form if the tensile stresses caused by the restrained shrinkage exceed the tensile strength of the concrete. Different climate conditions change the rate of evaporation and thus change the balance time of bleeding and evaporation, which changes the severity of cracking in the concrete. The present study examined the relationship between the balance time of bleeding and evaporation and the area of cracking in concrete slabs using the standard method ASTM C1579 under 27 different environmental conditions, by means of continuous video recording and digital image analysis. The results showed that as the evaporation rate increased and the balance time decreased, the crack severity significantly increased, such that by reducing the balance time from its maximum value to its minimum value, the cracking area increased more than four times. It was also observed that the cracking area-balance time curve can be interpreted in three sections. An examination of these three parts showed that the combination of climate conditions has a significant effect on increasing or decreasing these two variables. The criticality of a single factor cannot by itself cause the critical conditions for plastic cracking. By combining two mild environmental factors with a severe climate factor (in terms of surface evaporation rate), a considerable reduction in balance time and a sharp increase in cracking severity can be prevented. The results of this study showed that balance time could be an essential factor in controlling and predicting plastic shrinkage cracking in concrete pavements. It is necessary to control this factor when constructing concrete pavements in different climate conditions.
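Since crack severity is driven by the surface evaporation rate, a common single-equation approximation of the ACI evaporation nomograph (attributed to Uno, 1998) can illustrate how the climate factors combine. It is quoted here as background, not as a formula used in the abstract, and the example conditions are hypothetical.

```python
def evaporation_rate(t_concrete_c, t_air_c, rel_humidity, wind_kmh):
    """Surface moisture evaporation rate (kg/m^2/h) using Uno's (1998)
    approximation of the ACI nomograph; rel_humidity is a fraction (0-1)."""
    return 5.0 * ((t_concrete_c + 18) ** 2.5
                  - rel_humidity * (t_air_c + 18) ** 2.5) * (wind_kmh + 4) * 1e-6

# hypothetical mild vs severe climate conditions
print("mild  :", round(evaporation_rate(25, 22, 0.70, 5), 3))
print("severe:", round(evaporation_rate(35, 38, 0.30, 25), 3))
```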

Keywords: bleeding and cracking severity, concrete pavements, climate conditions, plastic shrinkage

Procedia PDF Downloads 124
207 Design, Construction, Validation And Use Of A Novel Portable Fire Effluent Sampling Analyser

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Current large-scale fire tests focus on flammability and heat release measurements. Smoke toxicity isn’t considered despite it being a leading cause of death and injury in unwanted fires. A key reason could be that the practical difficulties associated with quantifying individual toxic components present in a fire effluent often require specialist equipment and expertise. Fire effluent contains a mixture of unreactive and reactive gases, water, organic vapours and particulate matter, which interact with each other. This interferes with the operation of the analytical instrumentation and must be removed without changing the concentration of the target analyte. To mitigate the need for expensive equipment and time-consuming analysis, a portable gas analysis system was designed, constructed and tested for use in large-scale fire tests as a simpler and more robust alternative to online FTIR measurements. The novel equipment aimed to be easily portable and able to run on battery or mains electricity; be able to be calibrated at the test site; be capable of quantifying CO, CO2, O2, HCN, HBr, HCl, NOx and SO2 accurately and reliably; be capable of independent data logging; be capable of automated switchover of 7 bubblers; be able to withstand fire effluents; be simple to operate; allow individual bubbler times to be pre-set; and be capable of being controlled remotely. To test the analyser’s functionality, it was used alongside the ISO/TS 19700 Steady State Tube Furnace (SSTF). A series of tests was conducted to assess the validity of the box analyser measurements and the data logging abilities of the apparatus, using PMMA and PA 6.6. The data obtained from the bench-scale assessments showed excellent agreement. Following this, the portable analyser was used to monitor gas concentrations during large-scale testing using the ISO 9705 room corner test. The analyser was set up, calibrated and set to record smoke toxicity measurements in the doorway of the test room. The analyser operated without manual interference and successfully recorded data for all 12 tests conducted in the ISO room. At the end of each test, the analyser created a data file (formatted as .csv) containing the measured gas concentrations throughout the test, which does not require specialist knowledge to interpret. This validated the portable analyser’s ability to monitor fire effluent without operator intervention at both bench and large scale. The portable analyser is a validated and significantly more practical alternative to FTIR, proven to work in large-scale fire testing for the quantification of smoke toxicity. The analyser is a cheaper, more accessible option to assess smoke toxicity, mitigating the need for expensive equipment and specialist operators.

Keywords: smoke toxicity, large-scale tests, ISO 9705, analyser, novel equipment

Procedia PDF Downloads 50
206 Processing and Characterization of Oxide Dispersion Strengthened (ODS) Fe-14Cr-3W-0.5Ti-0.3Y₂O₃ (14YWT) Ferritic Steel

Authors: Farha Mizana Shamsudin, Shahidan Radiman, Yusof Abdullah, Nasri Abdul Hamid

Abstract:

Oxide dispersion strengthened (ODS) ferritic steels are amongst the most promising candidates for large-scale structural materials in next-generation fission and fusion nuclear power reactors. This kind of material is relatively stable at high temperature, possesses remarkable mechanical properties, and offers comparatively good resistance to neutron radiation damage. The superior performance of ODS ferritic steels over their conventional counterparts is attributed to the high number density of nano-sized dispersoids, which act as nucleation sites and stable sinks for the many small helium bubbles resulting from irradiation, and also as pinning points for dislocation movement and grain growth. ODS ferritic steels are usually produced by powder metallurgical routes involving mechanical alloying (MA) of Y₂O₃ and pre-alloyed or elemental metallic powders, followed by consolidation using hot isostatic pressing (HIP) or hot extrusion (HE). In this study, Fe-14Cr-3W-0.5Ti-0.3Y₂O₃ (designated 14YWT) was produced by mechanical alloying followed by hot isostatic pressing (HIP). The crystal structure and morphology of the sample were characterized using X-ray diffraction (XRD) and field emission scanning electron microscopy (FESEM), respectively, and its magnetic behaviour at room temperature was measured with a vibrating sample magnetometer (VSM). FESEM micrographs revealed a homogeneous microstructure consisting of fine grains less than 650 nm in size. Ultra-fine dispersoids between 5 nm and 19 nm in size were observed homogeneously distributed within the BCC matrix. EDS mapping revealed that the dispersoids contain Y-Ti-O nanoclusters, and the magnetization curve recorded by VSM showed that the sample approaches the behavior of soft ferromagnetic materials. In conclusion, ODS Fe-14Cr-3W-0.5Ti-0.3Y₂O₃ (14YWT) ferritic steel was successfully produced by the HIP technique in the present study.

Keywords: hot isostatic pressing, magnetization, microstructure, ODS ferritic steel

Procedia PDF Downloads 293
205 Flow Duration Curves and Recession Curves Connection through a Mathematical Link

Authors: Elena Carcano, Mirzi Betasolo

Abstract:

This study helps public water bureaus give reliable answers to water concession requests. Rapidly increasing water requests can be supported provided that further uses of a river course are not totally compromised and environmental features are protected. Strictly speaking, a water concession can be regarded as a continuous withdrawal from the source and causes a reduction in the mean annual streamflow. Therefore, deciding whether a water concession is appropriate might seem easily solved by comparing the generic demand with the mean annual streamflow available. Still, the immediate shortcoming of such a comparison is that streamflow data are available for only a few catchments and, most often, are limited to specific sites. Moreover, comparing the generic water demand with the mean daily discharge is far from satisfactory, since the mean daily streamflow is greater than the water withdrawal for a long period of the year. Consequently, such a comparison is of little value for preserving the quality and quantity of the river. In order to overcome this limit, this study aims to complete the information provided by flow duration curves by introducing a link between Flow Duration Curves (FDCs) and recession curves, showing the chronological sequence of flows with a particular focus on low-flow data. The analysis is carried out on 25 catchments located in North-Eastern Italy for which daily data are available. The results identify groups of hydrologically homogeneous catchments whose lower part of the FDC (the streamflow interval between Q(300) and Q(335)) is smoothly reproduced by a common recession curve. In conclusion, the results help provide more reliable answers to water requests, especially for catchments that show a similar hydrological response, and can be used for a regionalization approach focused on low-flow data. A mathematical link between flow duration curves and recession curves is herein provided, thus endowing flow duration curves with information on the temporal sequence of flows. In this way, by introducing assumptions on the recession curves, a chronological sequence can also be attributed to the low-flow portion of the FDCs, which by nature lack this information.
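
To make the Q(300)/Q(335) notation concrete, the sketch below builds an empirical flow duration curve from a daily streamflow series and reads off the flows equalled or exceeded on 300 and 335 days per year. The synthetic record and the exceedance-days reading of Q(d) are assumptions for illustration, not the authors' regionalization procedure.

```python
# Minimal sketch: empirical flow duration curve from daily streamflow,
# reading Q(d) as the flow equalled or exceeded on d days in an average year.
# The synthetic series below stands in for the gauged records used in the study.
import numpy as np

rng = np.random.default_rng(0)
daily_q = rng.lognormal(mean=1.0, sigma=0.8, size=365 * 10)  # 10 years, m^3/s

def q_exceeded(daily_flows, days_per_year):
    """Flow equalled or exceeded on `days_per_year` days in an average year."""
    sorted_q = np.sort(daily_flows)[::-1]          # descending order
    frac = days_per_year / 365.0                   # exceedance fraction
    idx = int(frac * (len(sorted_q) - 1))
    return sorted_q[idx]

q300 = q_exceeded(daily_q, 300)
q335 = q_exceeded(daily_q, 335)
print(f"Q(300) = {q300:.2f} m3/s, Q(335) = {q335:.2f} m3/s")
```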

Keywords: chronological sequence of discharges, recession curves, streamflow duration curves, water concession

Procedia PDF Downloads 156
204 A Foodborne Cholera Outbreak in a School Caused by Eating Contaminated Fried Fish: Hoima Municipality, Uganda, February 2018

Authors: Dativa Maria Aliddeki, Fred Monje, Godfrey Nsereko, Benon Kwesiga, Daniel Kadobera, Alex Riolexus Ario

Abstract:

Background: Cholera is a severe gastrointestinal disease caused by Vibrio cholerae and has caused several pandemics. On 26 February 2018, a suspected cholera outbreak, with one death, occurred at School X in Hoima Municipality, western Uganda. We investigated the outbreak to identify its scope and mode of transmission and to recommend evidence-based control measures. Methods: We defined a suspected case as onset of diarrhea, vomiting, or abdominal pain in a student or staff member of School X or their family members during 14 February–10 March. A confirmed case was a suspected case with V. cholerae cultured from stool. We reviewed medical records at Hoima Hospital and searched for cases at School X. We conducted descriptive epidemiologic analysis and hypothesis-generating interviews of 15 case-patients. In a retrospective cohort study, we compared attack rates between exposed and unexposed persons. Results: We identified 15 cases among the 75 students, staff of School X, and their family members (attack rate=20%), with onset from 25 to 28 February. One patient died (case-fatality rate=6.6%). The epidemic curve indicated a point-source exposure. On 24 February, a student had brought fried fish from her home in a fishing village where a cholera outbreak was ongoing. Of the 21 persons who ate the fish, 57% developed cholera, compared with 5.6% of the 54 persons who did not eat it (RR=10; 95% CI=3.2-33). None of the 4 persons who recooked the fish before eating developed cholera, compared with 71% of the 17 who did not recook it (RR=0.0; 95% CI, Fisher exact=0.0-0.95). Of 12 stool specimens cultured, 6 yielded V. cholerae. Conclusion: This cholera outbreak was caused by eating fried fish, which might have been contaminated with V. cholerae in a village with an ongoing outbreak. Lack of thorough cooking of the fish might have facilitated the outbreak. We recommended thoroughly cooking fish before consumption.
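
The reported risk ratio can be reproduced from the counts implied by the attack rates (12 of 21 who ate the fish, 3 of 54 who did not); the sketch below computes the attack rates and a 95% confidence interval for the risk ratio using the standard log transformation. The counts are inferred from the percentages in the abstract rather than stated explicitly there.

```python
# Minimal sketch: attack rates and risk ratio with a log-transformed 95% CI.
# Counts (12/21 exposed, 3/54 unexposed) are inferred from the reported
# attack rates of 57% and 5.6%.
import math

a, n1 = 12, 21   # ill / total among those who ate the fish
c, n0 = 3, 54    # ill / total among those who did not

ar_exposed, ar_unexposed = a / n1, c / n0
rr = ar_exposed / ar_unexposed
se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)
lo, hi = (math.exp(math.log(rr) + z * se_log_rr) for z in (-1.96, 1.96))

print(f"attack rates: {ar_exposed:.0%} vs {ar_unexposed:.1%}")
print(f"RR = {rr:.1f} (95% CI {lo:.1f}-{hi:.0f})")   # ~10 (3.2-33)
```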

Keywords: cholera, disease outbreak, foodborne, global health security, Uganda

Procedia PDF Downloads 171
203 Diffusion Magnetic Resonance Imaging and Magnetic Resonance Spectroscopy in Detecting Malignancy in Maxillofacial Lesions

Authors: Mohamed Khalifa Zayet, Salma Belal Eiid, Mushira Mohamed Dahaba

Abstract:

Introduction: Malignant tumors may not be easily detected by traditional radiographic techniques, especially in an anatomically complex area like the maxillofacial region. At the same time, the advent of biological, functional MRI was a significant milestone in the diagnostic imaging field. Objective: The purpose of this study was to define the malignant metabolic profile of maxillofacial lesions using diffusion MRI and magnetic resonance spectroscopy as adjunctive aids in the diagnosis of such lesions. Subjects and Methods: Twenty-one patients with twenty-two lesions were enrolled in this study. Both morphological and functional MRI scans were performed: T1- and T2-weighted images and diffusion-weighted MRI with four apparent diffusion coefficient (ADC) maps were constructed for analysis, and magnetic resonance spectroscopy with qualitative and semi-quantitative analyses of the choline and lactate peaks was applied. All patients then underwent incisional or excisional biopsies within two weeks of the MR scans. Results: Statistical analysis revealed that not all parameters had the same diagnostic performance: lactate had the highest area under the curve (AUC), 0.9, whereas choline had the lowest, with no significant diagnostic value. The best cut-off value suggested for lactate was 0.125; any lesion above this value is classified as malignant, with 90% sensitivity and 83.3% specificity. Although the ADC maps had comparable AUCs, the statistical measure that had the final say was the interpretation of the likelihood ratios. As expected, lactate again showed the best combination of positive and negative likelihood ratios, whereas among the maps, the ADC map with b-values of 500 and 1000 showed the most realistic combination of likelihood ratios, albeit with lower sensitivity and specificity than lactate. Conclusion: Diffusion-weighted imaging and magnetic resonance spectroscopy are state-of-the-art tools in the diagnostic arena and proved to be key players in the differentiation of orofacial tumors. The complete biological profile of malignancy can be decoded as low ADC values, high choline, and/or high lactate, whereas that of benign entities can be translated as high ADC values, low choline, and no lactate.
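
The reported sensitivity and specificity for the lactate cut-off translate directly into likelihood ratios, the measure the abstract describes as decisive. A short worked computation is sketched below using the published figures (90% sensitivity, 83.3% specificity); the 50% pre-test probability in the last step is an assumed value for illustration.

```python
# Minimal sketch: positive and negative likelihood ratios for the lactate
# cut-off of 0.125, using the sensitivity and specificity reported above.
sensitivity = 0.90
specificity = 0.833

lr_positive = sensitivity / (1 - specificity)        # ~5.4
lr_negative = (1 - sensitivity) / specificity        # ~0.12
print(f"LR+ = {lr_positive:.1f}, LR- = {lr_negative:.2f}")

# Post-test probability for an assumed 50% pre-test probability of malignancy
pretest_odds = 0.5 / (1 - 0.5)
posttest_prob = (pretest_odds * lr_positive) / (1 + pretest_odds * lr_positive)
print(f"post-test probability after a positive lactate finding: {posttest_prob:.0%}")
```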

Keywords: diffusion magnetic resonance imaging, magnetic resonance spectroscopy, malignant tumors, maxillofacial

Procedia PDF Downloads 150
202 Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design

Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian

Abstract:

Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead plays a toxic role in the human body and may cause serious problems even at low concentrations, owing to its several adverse effects on humans. Therefore, the determination of lead in different samples is an important procedure in studies of environmental pollution. In this work, an ultrasonic-assisted, ionic-liquid-based dispersive liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce was established and developed using a flame atomic absorption spectrometer (FAAS). For the UA-IL-DLLME procedure, 10 mL of the sample solution containing Pb2+ was adjusted to pH=5 in a glass test tube with a conical bottom; then, 120 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate (CMIM)(PF6) was rapidly injected into the sample solution with a microsyringe. The resulting cloudy mixture was treated ultrasonically for 5 min, the two phases were separated by centrifugation for 5 min at 3000 rpm, the IL phase was diluted with 1 mL of ethanol, and the analytes were determined by FAAS. The effects of different experimental parameters in the extraction step, including ionic liquid volume, sonication time, and pH, were studied and optimized simultaneously using Response Surface Methodology (RSM) employing a central composite design (CCD). The optimal conditions were an ionic liquid volume of 120 μL, a sonication time of 5 min, and pH=5. The linear range of the calibration curve for the determination of lead by FAAS was 0.1-4 ppm, with R²=0.992. Under the optimized conditions, the limit of detection (LOD) for lead was 0.062 μg mL⁻¹, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) for lead was calculated as 2.29%. The lead levels in pomegranate, zucchini, and lettuce were calculated as 2.88 μg g⁻¹, 1.54 μg g⁻¹, and 2.18 μg g⁻¹, respectively. This method was therefore successfully applied to the analysis of the lead content of different food samples by FAAS.
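
Figures of merit such as the detection limit follow directly from the calibration data; the sketch below fits a least-squares calibration line and applies the common 3σ/slope definition of the LOD. The absorbance readings and blank standard deviation are invented for illustration only — the reported LOD of 0.062 μg mL⁻¹ and R² of 0.992 come from the authors' own calibration.

```python
# Minimal sketch: linear calibration for FAAS and a 3-sigma/slope detection
# limit. Absorbance readings and the blank standard deviation are hypothetical;
# only the structure of the calculation mirrors the procedure described above.
import numpy as np

conc = np.array([0.1, 0.5, 1.0, 2.0, 4.0])                   # Pb standards, ppm
absorbance = np.array([0.008, 0.041, 0.079, 0.162, 0.318])   # invented readings

slope, intercept = np.polyfit(conc, absorbance, 1)
pred = slope * conc + intercept
r_squared = 1 - np.sum((absorbance - pred) ** 2) / np.sum((absorbance - absorbance.mean()) ** 2)

sd_blank = 0.0016                      # assumed blank standard deviation
lod = 3 * sd_blank / slope             # ppm (ug/mL)

print(f"slope = {slope:.4f}, R^2 = {r_squared:.3f}, LOD = {lod:.3f} ppm")
```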

Keywords: dispersive liquid-liquid microextraction, central composite design, food samples, flame atomic absorption spectrometry

Procedia PDF Downloads 258
201 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to the multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Information and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created from the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate explanations for the DNN model, we defined a new concept called the impact score, based on the impact that the presence or value of a clinical condition has on the predicted outcome. Like the (log) odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black-box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman's rho = 0.74), which helped validate our explanation.
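
The abstract defines the impact score only informally. One plausible reading, sketched below, is an occlusion-style measure in which a binary clinical condition is toggled on and off in the input and the average change in predicted 1-year mortality is recorded; the function names and this toggle-based definition are assumptions for illustration, not the authors' exact formulation.

```python
# Minimal sketch: an occlusion-style "impact score" for a binary feature.
# `predict_proba` stands for any fitted model (DNN or otherwise) that maps a
# feature matrix to predicted 1-year mortality probabilities; the toggle-based
# definition here is an assumed reading of the concept described above.
import numpy as np

def impact_score(predict_proba, X, feature_index):
    """Mean change in predicted risk when the condition is present vs. absent."""
    X_present, X_absent = X.copy(), X.copy()
    X_present[:, feature_index] = 1.0
    X_absent[:, feature_index] = 0.0
    return float(np.mean(predict_proba(X_present) - predict_proba(X_absent)))

# Toy usage with a stand-in "model": risk rises with feature 0, falls with feature 1
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(1000, 5)).astype(float)
toy_model = lambda M: 1 / (1 + np.exp(-(0.8 * M[:, 0] - 0.4 * M[:, 1] - 2.0)))

print([round(impact_score(toy_model, X, j), 3) for j in range(3)])
```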

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 135
200 Development of Adsorbents for Removal of Hydrogen Sulfide and Ammonia Using Pyrolytic Carbon Black from Waste Tires

Authors: Yang Gon Seo, Chang-Joon Kim, Dae Hyeok Kim

Abstract:

It is estimated that 1.5 billion tires are produced worldwide each year, and these will eventually end up as waste tires, representing a major potential waste and environmental problem. Pyrolysis has attracted great interest as an alternative treatment process for waste tires, producing valuable oil, gas, and solid products. The oil and gas products may be used directly as a fuel or as a chemical feedstock. The solid produced from the pyrolysis of tires typically accounts for 30 to 45 wt% of the feed and has a high carbon content of up to 90 wt%. Most notably, however, the solid has a high sulfur content of 2 to 3 wt% and an ash content of 8 to 15 wt%, related to the additive metals. Efforts to upgrade tire pyrolysis products have therefore concentrated on upgrading the solid to higher-quality carbon black and to activated carbon. Hydrogen sulfide and ammonia are among the common malodorous compounds found in emissions from many sewage treatment plants and industrial plants. Removing these harmful gases from emissions is of significance in both everyday life and industry, because they can cause health problems in humans and have detrimental effects on catalysts. In this work, pyrolytic carbon black from waste tires was used to develop adsorbents with good adsorption capacity for the removal of hydrogen sulfide and ammonia. Pyrolytic carbon blacks were prepared by pyrolysis of waste tire chips ranging from 5 to 20 mm under a nitrogen atmosphere at 600℃ for 1 hour. Pellet-type adsorbents were prepared from a mixture of carbon black, metal oxide, and sodium hydroxide or hydrochloric acid, and their adsorption capacities were estimated using the breakthrough curve of a continuous fixed-bed adsorption column at ambient conditions. The adsorbent manufactured from a mixture of carbon black, iron(III) oxide, and sodium hydroxide showed the maximum working capacity for hydrogen sulfide. For ammonia, the maximum working capacity was obtained with the adsorbent manufactured from a mixture of carbon black, copper(II) oxide, and hydrochloric acid.
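
Working capacity from a fixed-bed breakthrough curve is obtained by integrating the area between the inlet and outlet concentrations up to breakthrough; the sketch below shows the standard mass-balance calculation with invented flow rate, concentration, bed mass, and breakthrough profile, since the abstract does not report the operating conditions.

```python
# Minimal sketch: adsorption (working) capacity from a breakthrough curve by
# mass balance, q = (Q * C0 / m) * integral of (1 - C/C0) dt up to breakthrough.
# Flow rate, inlet concentration, bed mass and the C/C0 profile are hypothetical.
import numpy as np

Q  = 0.5          # gas flow rate, L/min
C0 = 0.5          # inlet H2S concentration, mg/L (illustrative)
m  = 2.0          # adsorbent mass, g

t       = np.linspace(0, 120, 121)                   # min
c_ratio = 1 / (1 + np.exp(-(t - 80) / 6))            # assumed S-shaped C/C0 curve

t_break = t[np.argmax(c_ratio >= 0.05)]              # 5% breakthrough time
mask = t <= t_break
capacity = Q * C0 / m * np.trapz(1 - c_ratio[mask], t[mask])   # mg adsorbate / g

print(f"breakthrough at {t_break:.0f} min, working capacity ~ {capacity:.1f} mg/g")
```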

Keywords: adsorbent, ammonia, pyrolytic carbon black, hydrogen sulfide, metal oxide

Procedia PDF Downloads 229
199 Fatigue Analysis and Life Estimation of the Helicopter Horizontal Tail under Cyclic Loading by Using Finite Element Method

Authors: Defne Uz

Abstract:

The horizontal tail of a helicopter is exposed to repeated oscillatory loading generated by aerodynamic and inertial loads and by bending moments that depend on the operating conditions and maneuvers of the helicopter. In order to ensure that maximum stress levels do not exceed the fatigue limit of the material and to prevent damage, a numerical analysis approach can be applied through the Finite Element Method. In this paper, therefore, fatigue analysis of a horizontal tail model is carried out numerically to predict the high-cycle and low-cycle fatigue life under the defined loading. The analysis estimates the stress field at stress concentration regions, such as around fastener holes, where the maximum principal stresses are considered for each load case. Critical elements of the main load-carrying structural components with rivet holes are identified as a post-processing step, since the critical regions with high stress values serve as input for the fatigue life calculation. Once the maximum stress at the critical element and its related mean and alternating components are obtained, they are compared with the endurance limit by applying the Soderberg approach, in which the constant-life straight line provides the limit for combinations of mean and alternating stresses. A life calculation based on the S-N (stress versus number of cycles) curve is also applied for fully reversed loading to determine the number of cycles corresponding to the oscillatory stress with zero mean. The results establish whether the design of the model is adequate in terms of fatigue strength and the number of cycles the model can withstand at the calculated stress. The effect of correctly identifying the critical rivet holes is investigated by analyzing stresses at different structural parts of the model. In the case of a low life prediction, alternative design solutions are developed, and flight hours can be estimated for fatigue-safe operation of the model.
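
The Soderberg comparison and the S-N life estimate described above reduce to a few algebraic steps once the mean and alternating stress components at the critical rivet hole are known; the sketch below works through both with assumed material properties and stresses, since the abstract does not report numerical values.

```python
# Minimal sketch: Soderberg safety factor and a Basquin-type S-N life estimate
# for an assumed stress state at a critical fastener hole. Material constants
# (yield, endurance limit, Basquin coefficients) and stresses are illustrative.
sigma_mean, sigma_alt = 120.0, 80.0      # MPa, assumed at the critical element
S_y, S_e = 350.0, 160.0                  # MPa, assumed yield and endurance limit

# Soderberg criterion: sigma_alt/S_e + sigma_mean/S_y = 1/n
n = 1.0 / (sigma_alt / S_e + sigma_mean / S_y)

# Equivalent fully reversed stress (Soderberg mean-stress correction),
# then Basquin life: sigma_ar = sigma_f' * (2N)^b
sigma_ar = sigma_alt / (1.0 - sigma_mean / S_y)
sigma_f_prime, b = 900.0, -0.095         # assumed Basquin parameters
N = 0.5 * (sigma_ar / sigma_f_prime) ** (1.0 / b)

print(f"Soderberg safety factor n = {n:.2f}")
print(f"equivalent fully reversed stress = {sigma_ar:.0f} MPa, life ~ {N:,.0f} cycles")
```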

Keywords: fatigue analysis, finite element method, helicopter horizontal tail, life prediction, stress concentration

Procedia PDF Downloads 122