401 Frequency Response of Complex Systems with Localized Nonlinearities
Authors: E. Menga, S. Hernandez
Abstract:
Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and predictions are usually much more accurate for a single component than for an assembly. For structural dynamics studies in the low and middle frequency range in particular, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs run nonlinear analyses in the time domain: they treat the whole structure as nonlinear even if there is one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology is presented for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered localized sources. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and yields the Nonlinear Frequency Response Functions (NLFRFs) through an 'updating' process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the NL SDMM, by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems.
The first is a two-DOF spring-mass-damper system, and the second example considers a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure that allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered to act linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analysis and easier implementation of optimization procedures for the calibration of nonlinear models.
Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber
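The FRF-updating idea at the heart of the method can be illustrated with a minimal single-DOF sketch (not the authors' MATLAB implementation): a localized cubic spring is folded into the linear receptance as an amplitude-dependent equivalent stiffness from first-order harmonic balance. All parameter values and the under-relaxation factor are illustrative assumptions.

```python
def linear_frf(omega, m, c, k):
    """Linear receptance X/F = 1 / (k - m*w^2 + i*c*w)."""
    return 1.0 / (k - m * omega**2 + 1j * c * omega)

def nonlinear_amplitude(omega, m, c, k, k3, force, relax=0.5, iters=500):
    """Update the underlying linear behavior with an equivalent stiffness
    dk = (3/4)*k3*X^2 from first-order harmonic balance for a cubic
    spring, iterating with under-relaxation until the amplitude settles."""
    amp = abs(linear_frf(omega, m, c, k)) * force
    for _ in range(iters):
        dk = 0.75 * k3 * amp**2          # localized nonlinear modification
        new = force * abs(1.0 / (k + dk - m * omega**2 + 1j * c * omega))
        amp = (1.0 - relax) * amp + relax * new
    return amp

# At the linear resonance, a hardening spring (k3 > 0) lowers the peak.
lin_peak = abs(linear_frf(1.0, 1.0, 0.05, 1.0)) * 0.1
nl_peak = nonlinear_amplitude(1.0, 1.0, 0.05, 1.0, 1.0, 0.1)
```

Sweeping `omega` and plotting `nonlinear_amplitude` against `abs(linear_frf)` would reproduce the qualitative 'updated' NLFRF the abstract describes; the linear model is solved once, and only the scalar update is iterated.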
Procedia PDF Downloads 266
400 Heat Transfer Dependent Vortex Shedding of Thermo-Viscous Shear-Thinning Fluids
Authors: Markus Rütten, Olaf Wünsch
Abstract:
Non-Newtonian fluid properties can change the flow behaviour significantly, and its prediction becomes more difficult when thermal effects come into play. Hence, the focal point of this work is the wake flow behind a heated circular cylinder in the laminar vortex shedding regime for thermo-viscous shear-thinning fluids. For isothermal flows of Newtonian fluids the vortex shedding regime is characterised by a distinct Reynolds number and an associated Strouhal number. For thermo-viscous shear-thinning fluids the flow regime can change significantly depending on the temperature of the viscous wall of the cylinder: the Reynolds number alters locally and, consequently, the Strouhal number globally. In the present CFD study the temperature dependence of the Reynolds and Strouhal numbers is investigated for the flow of a Carreau fluid around a heated cylinder. The temperature dependence of the fluid viscosity has been modelled by applying the standard Williams-Landel-Ferry (WLF) equation. In the present simulation campaign, thermal boundary conditions have been varied over a wide range in order to derive a relation between dimensionless heat transfer, Reynolds number, and Strouhal number. Together with the shear thinning due to the high shear rates close to the cylinder wall, this leads to a significant decrease of viscosity of three orders of magnitude in the near field of the cylinder and a reduction of two orders of magnitude in the wake field. The shear-thinning effect is even able to change the flow topology: a complex Kármán vortex street occurs, also revealing distinct characteristic frequencies associated with the dominant and sub-dominant vortices. Heating the cylinder wall leads to delayed flow separation and a narrower wake flow, leaving less space for the sequence of counter-rotating vortices.
This spatial limitation not only reduces the amplitude of the oscillating wake flow, it also shifts the dominant frequency to higher frequencies and damps higher harmonics. Eventually the locally heated wake flow smears out. Finally, the CFD simulation results of the systematically varied thermal flow parameter study have been used to describe a relation for the main characteristic order parameters.
Keywords: heat transfer, thermo-viscous fluids, shear thinning, vortex shedding
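The two constitutive ingredients named above can be sketched directly: a WLF temperature shift for the viscosity and a Carreau model for shear thinning. The WLF constants used here are the common 'universal' values (C1 = 17.44, C2 = 51.6 K), and all other parameters are illustrative, not the study's fitted values.

```python
def wlf_shift(T, T_ref, C1=17.44, C2=51.6):
    """WLF shift factor: log10(a_T) = -C1*(T - T_ref) / (C2 + T - T_ref)."""
    return 10.0 ** (-C1 * (T - T_ref) / (C2 + (T - T_ref)))

def carreau_viscosity(shear_rate, eta0, eta_inf, lam, n):
    """Carreau model: eta = eta_inf + (eta0 - eta_inf)*(1 + (lam*g)^2)^((n-1)/2)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# Heating 50 K above the reference temperature collapses the viscosity...
a_hot = wlf_shift(T=423.0, T_ref=373.0)
# ...and the high shear rates near the cylinder wall thin the fluid further.
eta_wall = carreau_viscosity(100.0, eta0=10.0, eta_inf=1e-3, lam=1.0, n=0.3)
eta_far = carreau_viscosity(0.01, eta0=10.0, eta_inf=1e-3, lam=1.0, n=0.3)
```

Multiplying the Carreau viscosity by the WLF shift factor gives the combined thermo-viscous shear-thinning law the simulations vary; the orders-of-magnitude viscosity drops quoted in the abstract follow from exactly this coupling.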
Procedia PDF Downloads 297
399 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts to generate successful trades in financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) addresses these drawbacks of SL approaches by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e. the agent-environment interaction, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment.
From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and establishes its credibility and advantages for strategic decision-making.
Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent
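The state-augmentation step can be sketched as follows, assuming a simple-average RSI variant and illustrative window lengths rather than the paper's exact ten indicators; the sentiment score is an invented placeholder input.

```python
def sma(prices, window):
    """Simple moving average over the trailing window."""
    return sum(prices[-window:]) / window

def rsi(prices, period=14):
    """Relative Strength Index (simple-average variant, not Wilder smoothing)."""
    deltas = [b - a for a, b in zip(prices[:-1], prices[1:])]
    recent = deltas[-period:]
    gains = sum(d for d in recent if d > 0) / period
    losses = -sum(d for d in recent if d < 0) / period
    if losses == 0:
        return 100.0          # no down moves in the window
    return 100.0 - 100.0 / (1.0 + gains / losses)

def observation(prices, sentiment):
    """One state vector: latest price, trend and momentum indicators,
    and a news-sentiment score for the stock."""
    return [prices[-1], sma(prices, 5), rsi(prices), sentiment]

uptrend = [float(p) for p in range(1, 21)]   # strictly rising toy prices
state = observation(uptrend, sentiment=0.2)
```

In a full implementation each stock contributes such a vector per time step, and the concatenated vectors form the partially observed state the TD3 agent acts on.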
Procedia PDF Downloads 178
398 Motives for Reshoring from China to Europe: A Hierarchical Classification of Companies
Authors: Fabienne Fel, Eric Griette
Abstract:
Reshoring, whether back-reshoring or near-reshoring, is a quite recent phenomenon. Despite the economic and political interest of this topic, academic research questioning the determinants of reshoring remains rare. Our paper aims to contribute to filling this gap. In order to better understand the reasons for reshoring, we conducted a study among 280 French firms during spring 2016, three-quarters of which sourced, or source, in China. 105 firms in the sample have reshored all or part of their Chinese production or supply in recent years, and we aimed to establish a typology of the motives that drove them to this decision. We asked our respondents about the history of their Chinese supplies, their current reshoring strategies, and their motivations. Statistical analysis was performed with SPSS 22 and SPAD 8. Our results show that a change in commercial and financial terms with China is the first motive explaining the current reshoring movement from this country (it applies to 54% of our respondents). A change in corporate strategy is the second motive (30% of our respondents); the reshoring decision follows a change in companies' strategies (upgrading, implementation of a CSR policy, or a 'lean management' strategy). The third motive (14% of our sample) is a mere correction of the initial offshoring decision, considered a mistake (under-estimation of hidden costs, non-quality and non-responsiveness problems). Some authors emphasize that developing a short supply chain, involving geographic proximity between design and production, gives a competitive advantage to companies wishing to offer innovative products. Admittedly, 40% of our respondents indicate that this motive could have played a part in their decision to reshore, but this reason alone was sufficient for none of them and is not an intrinsic motive for leaving Chinese suppliers.
Having questioned our respondents about the importance given to the various problems leading them to reshore, we then performed a Principal Components Analysis (PCA), associated with an Ascending Hierarchical Classification (AHC) based on the Ward criterion, so as to point out more specific motivations. Three main classes of companies can be distinguished: the 'Cost Killers' (23% of the sample), which reshore their supplies from China only because of higher procurement costs and so as to find lower costs elsewhere; the 'Realists' (50% of the sample), which give equal weight to increasing procurement costs in China and, to a large extent, to the quality of their supplies (companies in this class tend to take advantage of this changing environment to change their procurement strategy, seeking suppliers offering better quality and responsiveness); and the 'Voluntarists' (26% of the sample), which choose to reshore their Chinese supplies regardless of higher Chinese costs, in order to obtain better quality and greater responsiveness. We emphasize that while the main driver for reshoring from China is indeed higher local costs, it should not be regarded as an exclusive motivation; 77% of the companies in the sample are also seeking, sometimes exclusively, more reactive suppliers attentive to quality, respect for the environment, and intellectual property.
Keywords: China, procurement, reshoring, strategy, supplies
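The AHC step can be sketched as a greedy agglomeration under the Ward criterion; the toy "respondent scores" below are invented, and the study itself used SPSS/SPAD rather than custom code.

```python
def ward_clusters(points, k):
    """Greedy agglomeration under the Ward criterion: repeatedly merge
    the pair of clusters whose merge least increases the within-cluster
    sum of squares, dE = n_i*n_j/(n_i+n_j) * ||c_i - c_j||^2."""
    clusters = [[p] for p in points]

    def centroid(cluster):
        dims = len(cluster[0])
        return [sum(p[d] for p in cluster) / len(cluster) for d in range(dims)]

    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                ci, cj = centroid(clusters[i]), centroid(clusters[j])
                d2 = sum((a - b) ** 2 for a, b in zip(ci, cj))
                ni, nj = len(clusters[i]), len(clusters[j])
                cost = ni * nj / (ni + nj) * d2
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

# Toy respondents scored on (cost pressure, quality concern), two axes
# standing in for the retained principal components.
data = [(9, 1), (8, 2), (5, 5), (6, 6), (1, 9), (2, 8)]
groups = ward_clusters(data, 3)
```

Cutting the agglomeration at k = 3 mirrors the three-class typology (Cost Killers, Realists, Voluntarists) reported above.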
Procedia PDF Downloads 326
397 The Impact of Artificial Intelligence on Food Industry
Authors: George Hanna Abdelmelek Henien
Abstract:
Quality and safety issues are common in Ethiopia's food processing industry and can negatively impact consumers' health and livelihoods. The country is known for its various agricultural products that are important to the economy. However, weak food quality and safety policies and management practices in the food processing industry have led to many health problems, foodborne illnesses, and economic losses. This article aims to show the causes and consequences of food safety and quality problems in the food processing industry in Ethiopia and to discuss possible solutions. One of the main reasons for poor food quality and safety in Ethiopia's food processing industry is the lack of adequate regulation and enforcement mechanisms. Inadequate food safety and quality policies have led to inefficiencies in food production. Additionally, the failure to monitor and enforce existing regulations has created an opportunity for unscrupulous companies to engage in harmful practices that endanger the lives of citizens. The impact on food quality and safety is significant: loss of life, high medical costs, and loss of consumer confidence in the food processing industry. Foodborne diseases such as diarrhoea, typhoid, and cholera are common in Ethiopia, and food quality and safety play an important role in their prevention. Additionally, food recalls due to contamination often cause significant economic losses in the food processing industry. To solve these problems, the Ethiopian government has begun taking measures to improve food quality and safety in the food processing industry. One of the most prominent initiatives is the Ethiopian Food and Drug Administration (EFDA), established in 2010 to monitor and control the quality and safety of food and beverage products in the country. The EFDA has implemented many measures to improve food safety, such as carrying out routine inspections, monitoring the import of food products, and implementing labelling requirements.
Another solution that can improve food quality and safety in the food processing industry in Ethiopia is the implementation of a food safety management system (FSMS). An FSMS is a set of procedures and policies designed to identify, assess, and control food safety risks during food processing. Implementing an FSMS can help companies in the food processing industry identify and address potential risks before they harm consumers. Additionally, implementing an FSMS can help companies comply with current safety regulations. Consequently, improving food safety policy and management systems in Ethiopia's food processing industry is important to protect people's health and improve the country's economy. This requires addressing the root causes of food quality and safety problems and implementing practical solutions, such as establishing regulatory bodies and implementing food safety management systems, that can help improve overall food safety and quality in the country.
Keywords: food quality, food safety, policy, management system, food processing industry, food traceability, industry 4.0, internet of things, blockchain, best worst method, MARCOS
Procedia PDF Downloads 63
396 Unlocking New Room of Production in Brown Field; Integration of Geological Data Conditioned 3D Reservoir Modelling of Lower Senonian Matulla Formation, RAS Budran Field, East Central Gulf of Suez, Egypt
Authors: Nader Mohamed
Abstract:
The Late Cretaceous deposits are well developed throughout Egypt, due to a transgression phase associated with the subsidence caused by the neo-Tethyan rift event that took place across the northern margin of Africa, resulting in a period of dominantly marine deposits in the Gulf of Suez. The Late Cretaceous Nezzazat Group represents the Cenomanian, Turonian, and Lower Senonian clastic sediments. The Nezzazat Group has been divided into four formations, namely, from base to top, the Raha Formation, the Abu Qada Formation, the Wata Formation, and the Matulla Formation. The Cenomanian Raha and the Lower Senonian Matulla formations are the most important clastic sequences in the Nezzazat Group because they provide the highest net reservoir thickness and the highest net/gross ratio. This study emphasizes the Matulla Formation, located in the eastern part of the Gulf of Suez. The three stratigraphic surface sections (Wadi Sudr, Wadi Matulla and Gabal Nezzazat), which represent the exposed Coniacian-Santonian sediments in Sinai, are used for correlating the Matulla sediments of the Ras Budran field. Cutting descriptions, petrographic examination, log behaviour, and biostratigraphy, together with the outcrops, are used to identify the reservoir characteristics, lithology, and facies environments, and to subdivide the Matulla Formation into three units. The lower unit is believed to be the main reservoir, as it consists mainly of sands with shale and sandy carbonates, while the other units are mainly carbonate with some streaks of shale and sand. Reservoir modelling is an effective technique that assists in reservoir management, underpinning decisions concerning the development and depletion of hydrocarbon reserves, so it was essential to model the Matulla reservoir as accurately as possible in order to better evaluate and calculate the reserves and to determine the most effective way of recovering as much of the petroleum as economically possible.
All available data on the Matulla Formation are used to build the reservoir structure model and the lithofacies, porosity, permeability, and water saturation models, which are the main parameters that describe the reservoir and provide information for effectively evaluating the need to develop its oil potential. This study has shown the effectiveness of: 1) the integration of geological data to evaluate and subdivide the Matulla Formation into three units; 2) lithology and facies environment interpretation, which helped in defining the nature of deposition of the Matulla Formation; 3) 3D reservoir modelling technology as a tool for adequately understanding the spatial distribution of properties and for evaluating the newly unlocked reservoir areas of the Matulla Formation, which have to be drilled to investigate and exploit the undrained oil; 4) adding a new room of production and additional reserves to the Ras Budran field.
Keywords: geology, oil and gas, geoscience, sequence stratigraphy
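As an illustration of how measured well properties are spread across a model grid, here is a minimal inverse-distance-weighting sketch. The study itself used full 3D geostatistical modelling; the well locations and porosity values below are invented.

```python
def idw(x, y, wells, power=2.0):
    """Inverse-distance-weighted estimate of a property at (x, y)
    from well control points given as (wx, wy, value) tuples."""
    num = den = 0.0
    for wx, wy, value in wells:
        d2 = (x - wx) ** 2 + (y - wy) ** 2
        if d2 == 0.0:
            return value            # exactly at a well: honour the data
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return num / den

# Illustrative well porosities (fractions) for the lower sandy unit.
wells = [(0.0, 0.0, 0.20), (10.0, 0.0, 0.10)]
mid = idw(5.0, 0.0, wells)          # halfway between the two wells
```

Evaluating `idw` over every cell of a grid populates a porosity model between control points; the same interpolation idea, with geostatistical refinements, underlies the property models listed above.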
Procedia PDF Downloads 106
395 Properties Optimization of Keratin Films Produced by Film Casting and Compression Moulding
Authors: Mahamad Yousif, Eoin Cunningham, Beatrice Smyth
Abstract:
Every year ~6 million tonnes of feathers are produced globally. Due to feathers' low density and possible contamination with pathogens, their disposal causes health and environmental problems. The extraction of keratin, which represents >90% of feathers' dry weight, could offer a solution due to its wide range of applications in the food, medical, cosmetics, and biopolymer industries. One of these applications is the production of biofilms, which can be used for packaging, edible films, drug delivery, wound healing, etc. Several studies in the last two decades have investigated keratin film production and its properties. However, the effects of many parameters on the properties of the films remain to be investigated, including the extraction method, crosslinker type and concentration, and the film production method. These parameters were investigated in this study. Keratin was extracted from chicken feathers using two methods: alkaline extraction with 0.5 M NaOH at 80 °C, or sulphitolysis extraction with 0.5 M sodium sulphite, 8 M urea, and 0.25-1 g sodium dodecyl sulphate (SDS) at 100 °C. The extracted keratin was mixed with different types and concentrations of plasticizers (glycerol and polyethylene glycol) and crosslinkers (formaldehyde (FA), glutaraldehyde, cinnamaldehyde, glyoxal, and 1,4-butanediol diglycidyl ether (BDE)). The mixtures were either cast in a mould or compression moulded to produce films. For casting, keratin powder was initially dissolved in water to form a 5% keratin solution, and the mixture was dried in an oven at 60 °C. For compression moulding, 10% water was added, and the compression moulding temperature and pressure were in the ranges of 60-120 °C and 10-30 bar. Finally, the tensile properties, solubility, and transparency of the films were analysed. The films prepared using the sulphitolysis keratin had tensile properties superior to those of the alkaline keratin and formed successfully with lower plasticizer concentrations.
Lowering the SDS concentration from 1 to 0.25 g/g feathers improved all the tensile properties. All the films prepared without crosslinkers were 100% water soluble, but adding crosslinkers reduced solubility to as low as 21%. FA and BDE were found to be the best crosslinkers, increasing the tensile strength and elongation at break of the films. Higher compression moulding temperatures and pressures lowered the tensile properties of the films; therefore, 80 °C and 10 bar were considered the optimal compression moulding temperature and pressure. The films prepared by casting had higher tensile properties than those prepared by compression moulding but were less transparent. Two optimal films, prepared by film casting, were identified, with the following compositions: (a) sulphitolysis keratin, 20% glycerol, 10% FA, and 10% BDE; (b) sulphitolysis keratin, 20% glycerol, and 10% BDE. Their tensile strength, elongation at break, Young's modulus, solubility, and transparency were, respectively: (a) 4.275±0.467 MPa, 86.12±4.24%, 22.227±2.711 MPa, 21.34±1.11%, and 8.57±0.94; (b) 3.024±0.231 MPa, 113.65±14.61%, 10±1.948 MPa, 25.03±5.3%, and 4.8±0.15 (a higher transparency value indicates a less transparent film). The extraction method, film composition, and production method had a significant influence on the properties of keratin films and should therefore be tailored to meet the desired properties and applications.
Keywords: compression moulding, crosslinker, film casting, keratin, plasticizer, solubility, tensile properties, transparency
Procedia PDF Downloads 34
394 Data Mining in Healthcare for Predictive Analytics
Authors: Ruzanna Muradyan
Abstract:
Medical data mining is a crucial field in contemporary healthcare that offers cutting-edge techniques with enormous potential to transform patient care. This abstract examines how sophisticated data mining techniques could transform the healthcare industry, with a special focus on how they might improve patient outcomes. Healthcare data repositories have evolved dynamically, producing a rich tapestry of diverse, multi-dimensional information that includes genetic profiles, lifestyle markers, electronic health records, and more. By applying data mining techniques to this vast library, a variety of prospects for precision medicine, predictive analytics, and insight generation become visible. Predictive modelling for illness prediction, risk stratification, and therapy efficacy evaluation are important points of focus. Healthcare providers may use this abundance of data to tailor treatment plans, identify high-risk patient populations, and forecast disease trajectories by applying machine learning algorithms and predictive analytics. Better patient outcomes, more efficient use of resources, and early interventions are made possible by this proactive strategy. Furthermore, data mining techniques act as catalysts to reveal complex relationships between apparently unrelated data elements, providing enhanced insights into the causes of disease, genetic susceptibilities, and environmental factors. Healthcare practitioners can obtain practical insights that guide disease prevention, customized patient counselling, and focused therapies by analyzing these associations. The abstract also explores the problems and ethical issues that come with using data mining techniques in the healthcare industry. In order to use these approaches properly, it is essential to find a balance between data privacy, security concerns, and the interpretability of complex models.
Finally, this abstract demonstrates the transformative power of modern data mining methodologies in the healthcare sector. Healthcare practitioners and researchers can uncover unique insights, enhance clinical decision-making, and ultimately elevate patient care to unprecedented levels of precision and efficacy by employing such cutting-edge methodologies.
Keywords: data mining, healthcare, patient care, predictive analytics, precision medicine, electronic health records, machine learning, predictive modeling, disease prognosis, risk stratification, treatment efficacy, genetic profiles, precision health
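The predictive-modelling and risk-stratification workflow described above can be sketched under strong simplifying assumptions: one invented feature, a from-scratch logistic model, and arbitrary tier thresholds, purely for illustration.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit w, b for p = sigmoid(w.x + b) by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                     # gradient of log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def risk_tier(p):
    """Stratify a predicted probability into an actionable tier
    (thresholds are arbitrary illustrations, not clinical cut-offs)."""
    return "high" if p >= 0.7 else "medium" if p >= 0.3 else "low"

# Toy cohort: one feature (say, a normalised biomarker); 1 = disease.
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w[0] * x + b)))
```

Patients whose predicted probability falls in the "high" tier would be flagged for early intervention; this is the thresholding step behind the risk stratification the abstract discusses.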
Procedia PDF Downloads 62
393 Vagal Nerve Stimulator as a Treatment Approach in CHARGE Syndrome: A Case Report
Authors: Roya Vakili, Lekaa Elhajjmoussa, Barzin Omidi-Shal, Kim Blake
Abstract:
Objective: The purpose of this case report is to highlight the successful treatment of a patient with CHARGE syndrome (Coloboma, Heart defect, Atresia choanae, Retarded growth and development, Genital hypoplasia, Ear anomalies/deafness) using a vagal nerve stimulator (VNS). Background: This is, to the authors' best knowledge, the first documented case report of a patient with CHARGE syndrome, epilepsy, autism, and postural orthostatic tachycardia syndrome (POTS) successfully treated with an implanted VNS therapeutic device. Methodology: The study is a case report. Results: This is the case of a 24-year-old female patient with CHARGE syndrome and several other comorbidities, including refractory epilepsy, Patent Ductus Arteriosus (PDA), and POTS, who had significant improvement of her symptoms after VNS implantation. She was a VNS candidate given her longstanding history of drug-resistant epilepsy and current disposition secondary to CHARGE syndrome. Prior to VNS implantation, she experienced three generalized seizures a year and daily POTS-related symptoms. She was having frequent lightheadedness and syncopal spells due to a rapid heart rate and low blood pressure. The VNS device was set to detect a rapid heart rate and deliver stimulation whenever the heart rate exceeded the patient's normal baseline by 20%. The VNS device recorded frequent elevated heart rates with concurrent VNS discharge every 8 minutes, in addition to the programmed events. Following VNS implantation, the patient became more active, alert, and communicative, and was able to verbally communicate with words she was unable to say prior. Her GI symptoms also improved, as she was able to tolerate food better orally in addition to her G and J tubes, likely another result of the vagal nerve stimulation.
Additionally, the patient's seizures and POTS-related cardiac events appeared to be well controlled. She had prolonged electroencephalogram (EEG) testing, showing no significant change in epileptiform activity. Improvements in the patient's disposition are believed to be secondary to parasympathetic stimulation, adequate heart rate control, and GI stimulation, in addition to behavioural changes and other benefits via her implanted VNS. Conclusion: VNS showed promising results in improving the patient's quality of life and managing her diverse symptoms, including dysautonomia, POTS, gastrointestinal motility, and cognitive functioning, as well as seizure control.
Keywords: autism, POTS, CHARGE, VNS
Procedia PDF Downloads 85
392 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, and the Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the prediction accuracy of breast cancer progression using an original mathematical model referred to as CoMPaS and the corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS which reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the CoMPaS scope of application; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, which is described by deterministic nonlinear and linear equations. CoMPaS corresponds to the TNM classification. It allows calculating different growth periods of the primary tumor and the secondary distant metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for the secondary distant metastases; 3) the 'visible period' for the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes a forecast using only current patient data, whereas the others are based on additional statistical data.
The CoMPaS model and predictive software: a) fit clinical trials data; b) detect different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period in which the secondary distant metastases appear; d) have higher average prediction accuracy than the other tools; e) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. CoMPaS calculates the number of doublings for the 'non-visible' and 'visible' growth periods of the secondary distant metastases, and the tumor volume doubling time (days) for the 'non-visible' and 'visible' growth periods of the secondary distant metastases. CoMPaS enables, for the first time, prediction of the 'whole natural history' of the primary tumor and the secondary distant metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on the primary tumor sizes. Summarizing: a) CoMPaS correctly describes the primary tumor growth of IA, IIA, IIB, IIIB (T1-4N0M0) stages without metastases in lymph nodes (N0); b) it facilitates the understanding of the appearance period and inception of the secondary distant metastases.
Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival
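The exponential ingredient that CoMPaS builds on reduces to counting volume doublings between sizes. The cell volume, detectability diameter, and doubling time below are illustrative textbook-style values, not the model's calibrated parameters.

```python
import math

def doublings_between(v_start, v_end):
    """Number of volume doublings to grow from v_start to v_end
    under exponential growth: n = log2(v_end / v_start)."""
    return math.log2(v_end / v_start)

def sphere_volume(diameter_mm):
    """Volume (mm^3) of a sphere of the given diameter."""
    return math.pi / 6.0 * diameter_mm ** 3

# From a single ~1e-6 mm^3 cell to a 10 mm ('visible') tumour:
n_visible = doublings_between(1e-6, sphere_volume(10.0))
# Time to traverse that span at an assumed 100-day doubling time:
years = n_visible * 100.0 / 365.0
```

The long 'non-visible period' the model quantifies falls out of exactly this arithmetic: roughly thirty doublings separate a single cell from a clinically visible tumour, which at plausible doubling times spans years.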
Procedia PDF Downloads 341
391 Antineoplastic Effect of Tridham and Penta Galloyl Glucose in Experimental Mammary Carcinoma Bearing Rats
Authors: Karthick Dharmalingam, Stalin Ramakrishnan, Haseena Banu Hedayathullah Khan, Sachidanandanam Thiruvaiyaru Panchanadham, Shanthi Palanivelu
Abstract:
Background: Breast cancer is emerging as the most dreadful cancer affecting women worldwide. Hence, there arises a need to search for and test new drugs. Herbal formulations used in Siddha preparations have proved effective against various types of cancer. They also offer the advantages of synergistic amplification and diminished adverse effects. Tridham (TD) is a herbal formulation prepared in our laboratory, consisting of Terminalia chebula, Elaeocarpus ganitrus and Prosopis cineraria in a definite ratio, and has been used for the treatment of mammary carcinoma. Objective: To study the restorative effect of Tridham and penta galloyl glucose (a component of TD) on DMBA-induced mammary carcinoma in female Sprague Dawley rats. Materials and Methods: Rats were divided into seven groups of six animals each. Group I (control) received corn oil. In Group II, mammary carcinoma was induced by a single oral dose of DMBA dissolved in corn oil. Groups III and IV were induced with DMBA and subsequently treated with Tridham and penta galloyl glucose, respectively, for 48 days. Group V was treated with DMBA and subsequently with a standard drug, cyclophosphamide. Groups VI and VII were given Tridham and penta galloyl glucose alone, respectively, for 48 days. After the experimental period, the animals were sacrificed by cervical decapitation. The mammary gland tissue was excised and the levels of antioxidants were determined by biochemical assay. p53 and PCNA expression were assessed using immunohistochemistry. Nrf-2, Cox-2 and caspase-3 protein expression were studied by Western blotting analysis. p21, Bcl-2, Bax, Bad and caspase-8 gene expression were studied by RT-PCR. Results: Histopathological studies confirmed the induction of mammary carcinoma in DMBA-induced rats, and treatment with TD and PGG resulted in regression of the tumour. The levels of enzymic and non-enzymic antioxidants were decreased in DMBA-induced rats when compared to control rats.
The levels of cell cycle inhibitory markers and apoptotic markers were decreased in DMBA-induced rats compared to control rats. These parameters were restored to near-normal levels on treatment with Tridham and PGG. Conclusion: The results of the present study indicate that the antineoplastic effects of Tridham and PGG are exerted through the modulation of antioxidant status and the expression of cell cycle regulatory markers as well as apoptotic markers. Acknowledgment: Financial assistance provided in the form of an ICMR-SRF by the Indian Council of Medical Research (ICMR), India is gratefully acknowledged here. Keywords: antioxidants, mammary carcinoma, penta galloyl glucose, Tridham
Procedia PDF Downloads 278
390 Coordinative Remote Sensing Observation Technology for a High Altitude Barrier Lake
Authors: Zhang Xin
Abstract:
Barrier lakes are lakes formed when water accumulates in valleys or riverbeds after they are blocked by landslides, earthquakes, debris flows, or other factors. They pose great potential safety hazards: when enough water has accumulated, the dam may burst during a strong earthquake or rainstorm, and the overflowing lake water can cause large-scale flood disasters. To ensure the safety of people's lives and property downstream, it is essential to monitor barrier lakes. However, manual monitoring of barrier lakes in high altitude areas is difficult and time-consuming due to the harsh climate and steep terrain. With the development of earth observation technology, remote sensing monitoring has become one of the main ways to obtain observation data. Compared with a single satellite, multi-satellite cooperative remote sensing observation has many advantages: its spatial coverage is extensive, its observation time is continuous, its imaging types and bands are abundant, and it can monitor and respond quickly to emergencies and complete complex monitoring tasks. Monitoring with multi-temporal and multi-platform remote sensing satellites can promptly provide a variety of observation data, acquire key information such as the water level and water storage capacity of the barrier lake, scientifically assess the situation of the lake and reasonably predict its future development trend. In this study, Lake Sarez is selected as the research area. It formed on February 18, 1911, in the central part of the Pamir when the Murgab River valley was blocked by a landslide triggered by a strong earthquake with a magnitude of 7.4 and an intensity of 9. Since its formation, Lake Sarez has aroused widespread international concern about its safety. At present, mechanical methods are commonly used internationally to analyze the safety of Lake Sarez, and remote sensing methods are seldom used. 
This study combines remote sensing data with field observation data and uses 'space-air-ground' joint observation technology to study the changes in the water level and water storage capacity of Lake Sarez over recent decades and to evaluate its safety. The outcome of a potential collapse is simulated, and the future development trend of Lake Sarez is predicted. The results show that: 1) in recent decades, the water level of Lake Sarez has changed little and remained stable; 2) unless there is a strong earthquake or heavy rain, an outburst of Lake Sarez is unlikely under normal conditions; 3) Lake Sarez will remain stable in the future, but it is necessary to establish a remote-sensing-based early warning system for the Lake Sarez area; 4) coordinative remote sensing observation technology is feasible for a high altitude barrier lake such as Sarez. Keywords: coordinative observation, disaster, remote sensing, geographic information system, GIS
Procedia PDF Downloads 127
389 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen
Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin
Abstract:
Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to the manufacturer package inserts and pre-determined testing algorithms. The false initial reactive rate was determined on fresh hospitalized patient samples. The sensitivity for the early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86 – 99.99%) on 6047 blood donors and 99.71% (95% CI, 99.15 – 99.94%) on 1023 hospitalized patient samples. A total of six (6) samples were found false positive with the ACCESS HBsAg assay; none were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19 – 100.00%) for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized patient samples was 0.24% (95% CI, 0.03 – 0.87%). 
Results obtained on 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had sensitivity equivalent to the Abbott ARCHITECT HBsAg Qualitative II assay, with an average difference since the first reactive bleed of 0.13 bleeds. All bleeds found reactive in the ACCESS HBsAg assay were confirmed in the ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter have demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners. Keywords: DxI 9000 ACCESS Immunoassay Analyzer, HBsAg, HBV, hepatitis B surface antigen, hepatitis B virus, immunoassay
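The confidence intervals quoted in the abstract are presumably exact (Clopper-Pearson) binomial intervals computed from the raw counts. As an illustration of how such bounds arise, the sketch below computes the closely related Wilson score interval with only the standard library; the three-false-positive donor count is inferred from 99.95% of 6047 and is an assumption, not a figure stated in the abstract.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion.

    Approximates, but does not exactly reproduce, the exact Clopper-Pearson
    intervals typically reported in assay evaluations."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Specificity on 6047 blood donors, assuming 3 false positives (6044 true negatives)
lo, hi = wilson_ci(6044, 6047)
print(f"specificity 95% CI: {lo:.4%} - {hi:.4%}")
```

The resulting interval is close to, though slightly wider at the lower end than, the published exact bounds.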
Procedia PDF Downloads 90
388 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language
Authors: Ghazal Faraj, András Micsik
Abstract:
Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, defines well-formed shapes in RDF graphs, called "shapes graphs". These shapes graphs validate other resource description framework (RDF) graphs, called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string matching patterns, value types, and other constraints. Moreover, the framework of SHACL supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL comprises two parts: SHACL Core, which includes the shapes covering the most frequent constraint components, and SHACL-SPARQL, an extension that allows SHACL to express more complex customized constraints. Validating the efficacy of dataset mapping is an essential component of reconciled data mechanisms, as the enhancement of linking between different datasets is a sustainable process. The conventional validation methods are a semantic reasoner and SPARQL queries. The former checks formalization errors and data type inconsistency, while the latter detects data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects for linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both the source and target ontologies was required. 
Subsequently, the proper environment to run SHACL and its shapes graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running the Pellet reasoner via the Protégé program. The applied validation falls under multiple categories: a) data type validation, which checks whether the source data is mapped to the correct data type, for instance, whether a birthdate is typed as xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; b) data integrity validation, which detects inconsistent data, for instance, by inspecting whether a person's birthdate occurred before the creation dates of any linked events. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for the various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically. Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping
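As a minimal, purely illustrative sketch of the two validation categories just described, the toy Python below runs a datatype check and a temporal-integrity check over hand-made triples. A real pipeline would express these as SHACL shapes and execute them with an engine such as pySHACL; the triples, the "created" predicate, and the data values here are invented for illustration only (the CIDOC-CRM property name follows the abstract).

```python
from datetime import datetime

# Toy triples: (subject, predicate, object). All data is illustrative.
data = [
    ("person:1", "crm:P82a_begin_of_the_begin", datetime(1890, 5, 4)),
    ("person:1", "created", "event:1"),
    ("event:1", "crm:P82a_begin_of_the_begin", datetime(1921, 3, 1)),
]

def datatype_check(graph, predicate, expected_type):
    """Category (a): every value of `predicate` must have the expected datatype."""
    return [(s, o) for s, p, o in graph
            if p == predicate and not isinstance(o, expected_type)]

def integrity_check(graph):
    """Category (b): a person's birthdate must precede linked event dates."""
    dates = {s: o for s, p, o in graph if p == "crm:P82a_begin_of_the_begin"}
    violations = []
    for s, p, o in graph:
        if p == "created" and s in dates and o in dates and dates[s] >= dates[o]:
            violations.append((s, o))
    return violations

# The toy data passes both checks.
assert datatype_check(data, "crm:P82a_begin_of_the_begin", datetime) == []
assert integrity_check(data) == []
```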
Procedia PDF Downloads 253
387 A Case of Prosthetic Vascular-Graft Infection Due to Mycobacterium fortuitum
Authors: Takaaki Nemoto
Abstract:
Case presentation: A 69-year-old Japanese man presented with a low-grade fever and fatigue that had persisted for one month. The patient had had an aortic dissection of the aortic arch 13 years prior, an abdominal aortic aneurysm seven years prior, and an aortic dissection of the distal aortic arch one year prior, all treated with artificial blood-vessel replacement surgery. Laboratory tests revealed an inflammatory response (CRP 7.61 mg/dL), high serum creatinine (Cr 1.4 mg/dL), and elevated transaminases (AST 47 IU/L, ALT 45 IU/L). The patient was admitted to our hospital on suspicion of a prosthetic vascular graft infection. Following further workup of the inflammatory response, enhanced chest computed tomography (CT) and non-enhanced chest diffusion-weighted MRI (DWI) were performed. The patient was diagnosed with a pulmonary fistula and a prosthetic vascular graft infection of the distal aortic arch. After admission, the patient was administered ceftriaxone and vancomycin for 10 days, but his fever and inflammatory response did not improve. On day 13 of hospitalization, lung fistula repair surgery and an omental filling operation were performed, and meropenem and vancomycin were administered. The fever and inflammatory response continued, and therefore we took repeated blood cultures. Mycobacterium fortuitum was detected in a blood culture on day 16 of hospitalization. As a result, we changed the treatment regimen to amikacin (400 mg/day), meropenem (2 g/day), and cefmetazole (4 g/day), and the fever and inflammatory response began to decrease gradually. Susceptibility testing of the M. fortuitum isolate showed a low MIC for fluoroquinolone antibacterial agents. The clinical course was good, and the patient was discharged after a total of 8 weeks of intravenous antibiotic administration. 
At discharge, we changed the treatment regimen to levofloxacin (500 mg/day) and clarithromycin (800 mg/day), prescribing these two drugs as long-term suppressive therapy. Discussion: There are few reported cases of prosthetic vascular graft infection caused by mycobacteria, and a standard therapy remains to be established. For prosthetic vascular graft infections, it is ideal to provide surgical and medical treatment in parallel, but in this case, surgical treatment was difficult, and therefore conservative treatment was chosen. We attempted to increase the treatment success rate for this refractory disease by conducting susceptibility testing of the mycobacteria and treating with different combinations of antimicrobial agents, which was ultimately effective. With our treatment approach, a good clinical course was obtained and continues at present. Conclusion: Although prosthetic vascular graft infection caused by mycobacteria is a refractory infectious disease, it may be curable with appropriate antibiotics chosen on the basis of susceptibility testing, in addition to surgical treatment. Keywords: prosthetic vascular graft infection, lung fistula, Mycobacterium fortuitum, conservative treatment
Procedia PDF Downloads 156
386 Multicollinearity and MRA in Sustainability: Application of the Raise Regression
Authors: Claudia García-García, Catalina B. García-García, Román Salmerón-Gómez
Abstract:
Much economic-environmental research includes the analysis of possible interactions by using Moderated Regression Analysis (MRA), a specific application of multiple linear regression analysis. This methodology analyzes how the effect of one independent variable is moderated by a second independent variable by adding a cross-product term between them as an additional explanatory variable. Due to the very specification of the methodology, the moderating factor is often highly correlated with the constitutive terms, so severe multicollinearity problems arise. The appearance of strong multicollinearity in a model has important consequences: the variances of the estimators may be inflated; regressors may appear non-significant even though they probably are significant, while the coefficient of determination remains very high; coefficients may show incorrect signs; and the results become highly sensitive to small changes in the dataset. Finally, the strong relationship among explanatory variables makes it difficult to isolate the individual effect of each one on the model under study. Transferred to moderated analysis, these consequences may imply that it is not worth including an interaction term that may be distorting the model. Thus, it is important to manage the problem with a methodology that allows obtaining reliable results. A review of the works that applied MRA in the ten top journals of the field makes clear that multicollinearity is mostly disregarded: less than 15% of the reviewed works take potential multicollinearity problems into account. To overcome the issue, this work studies the possible application of recent methodologies to MRA. Particularly, raise regression is analyzed. 
This methodology mitigates collinearity from a geometrical point of view: the collinearity problem arises because the variables under study are very close geometrically, so by separating the variables, the problem can be mitigated. Raise regression retains the available information and modifies the problematic variables instead of, for example, deleting variables. Furthermore, the global characteristics of the initial model are maintained (sum of squared residuals, estimated variance, coefficient of determination, global significance test, and prediction). The proposal is applied to data from countries of the European Union for the most recent year available, regarding greenhouse gas emissions, per capita GDP, and a dummy variable that represents the topography of the country. The use of a dummy variable as the moderator is a special variant of MRA, sometimes called "subgroup regression analysis." The main conclusion of this work is that applying new techniques to the field can substantially improve the results of the analysis. In particular, the use of raise regression mitigates severe multicollinearity problems, so the researcher can rely on the interaction term when interpreting the results of a particular study. Keywords: multicollinearity, MRA, interaction, raise
Procedia PDF Downloads 104
385 Modulation of Receptor-Activation Due to Hydrogen Bond Formation
Authors: Sourav Ray, Christoph Stein, Marcus Weber
Abstract:
A new class of drug candidates, initially derived from mathematical modeling of ligand-receptor interactions, activates the μ-opioid receptor (MOR) preferentially at acidic extracellular pH levels, as present in injured tissues. This is of commercial interest because it may preclude the adverse effects of conventional MOR agonists like fentanyl, which include but are not limited to addiction, constipation, sedation, and apnea. Animal studies indicate the importance of taking the pH value of the chemical environment of the MOR into account when designing new drugs. Hydrogen bonds (HBs) play a crucial role in stabilizing protein secondary structure and molecular interactions, such as ligand-protein interactions. These bonds may depend on the pH value of the chemical environment. For the MOR, the antagonist naloxone and the agonist [D-Ala2,N-Me-Phe4,Gly5-ol]-enkephalin (DAMGO) form HBs with the ionizable residue HIS 297 at physiological pH to modulate signaling. However, such interactions were markedly reduced at acidic pH. Although fentanyl-induced signaling is also diminished at acidic pH, HBs with the HIS 297 residue are not observed at either acidic or physiological pH for this strong agonist of the MOR. Molecular dynamics (MD) simulations can provide greater insight into the interaction between the ligand of interest and the HIS 297 residue. Amino acid protonation states were adjusted to model the difference in system acidity. Unbiased and unrestrained MD simulations were performed with the ligand in the proximity of the HIS 297 residue. Ligand-receptor complexes were embedded in a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC) bilayer to mimic the membrane environment. The occurrence of HBs between the different ligands and the HIS 297 residue of the MOR at acidic and physiological pH values was tracked across the various simulation trajectories. No HB formation was observed between fentanyl and the HIS 297 residue at either acidic or physiological pH. 
Naloxone formed some HBs with HIS 297 at pH 5, but no such HBs were noted at pH 7. Interestingly, DAMGO displayed an opposite and more pronounced HB formation trend compared to naloxone: whereas only a marginal number of HBs was observed at pH 5, HBs with HIS 297 were more stable and more prevalent at pH 7. HB formation thus plays no role in the interaction of fentanyl, and only a marginal role in the interaction of naloxone, with the HIS 297 residue of the MOR. However, HBs play a significant role in the DAMGO-HIS 297 interaction. Post DAMGO administration, these HBs might be crucial for the remediation of opioid tolerance and the restoration of opioid sensitivity. Although experimental studies concur with our observations regarding the influence of HB formation on the fentanyl and DAMGO interactions with HIS 297, the same could not be conclusively stated for naloxone. Therefore, some other supplementary interactions might be responsible for the modulation of MOR activity by naloxone binding at pH 7 but not at pH 5. Further elucidation of the mechanism of naloxone action on the MOR could assist in the formulation of cost-effective naloxone-based treatments for opioid overdose or opioid-induced side effects. Keywords: effect of system acidity, hydrogen bond formation, opioid action, receptor activation
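For readers unfamiliar with how HB occurrence is counted along MD trajectories, a common geometric criterion, donor-acceptor distance below about 3.5 Å together with a near-linear donor-H-acceptor angle, can be sketched as below. The cutoffs are typical literature values, not the ones used in this study, and the coordinates are invented for illustration.

```python
import math

def hydrogen_bond(donor, hydrogen, acceptor, d_cut=3.5, angle_cut=150.0):
    """Geometric HB criterion often used in MD trajectory analysis:
    donor-acceptor distance below d_cut (in Angstrom) and donor-H-acceptor
    angle above angle_cut (degrees). Cutoffs are common choices, assumed here."""
    if math.dist(donor, acceptor) >= d_cut:
        return False
    # Angle at the hydrogen between the H->donor and H->acceptor vectors
    v1 = [d - h for d, h in zip(donor, hydrogen)]
    v2 = [a - h for a, h in zip(acceptor, hydrogen)]
    cosang = sum(p * q for p, q in zip(v1, v2)) / (math.hypot(*v1) * math.hypot(*v2))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
    return angle > angle_cut

# A linear O-H...N arrangement at bonding distance counts as an HB;
# the same geometry stretched beyond the distance cutoff does not.
assert hydrogen_bond((0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (2.9, 0.0, 0.0))
assert not hydrogen_bond((0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (5.0, 0.0, 0.0))
```

In practice, such a predicate is evaluated per frame per candidate pair, and occupancy over the trajectory gives the HB statistics reported above.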
Procedia PDF Downloads 175
384 The Mental Health Policy in the State of Espírito Santo, Brazil: Judicialization
Authors: Fabiola Xavier Leal, Lara Campanharo, Sueli Aparecida Rodrigues Lucas
Abstract:
The phenomenon of judicialization in health policy raises many problems, but in general it means that some issues that were previously resolved by traditional political bodies are now being decided by the judiciary. It is, therefore, a controversial topic that has generated many reflections in both the academic and political fields, considering that not only a dispute over public funds is at stake, but also the debate on access to the social rights provided for in the Brazilian Federal Constitution of 1988 and in various public policies, such as healthcare. With regard to this phenomenon in mental health policy, focusing on people who use drugs, the disputes that permeate this scenario are evident: moral, cultural, sanitary, economic, and psychological aspects, as well as the individual and collective dimensions of suffering. We all question: what is the role of the Brazilian State in this matter? In this context, another question that needs to be answered is the amount spent on this procedure in the state of Espírito Santo (ES), Brazil (in the last four years, around R$121,978,591.44 was paid solely for the compulsory hospitalization of individuals) in the field in question, that is, the financing of the services of the Psychosocial Care Network (RAPS). Therefore, this article aims to problematize the phenomenon of judicialization in mental health policy through the compulsory hospitalization of people who use drugs in Espírito Santo (ES). We proposed a study that sought to understand how this has been occurring and affecting the provision of RAPS services in the Espírito Santo scenario. The general objective of this study is to analyze the expenses for compulsory hospitalizations for drug use incurred by the State Health Department (SESA) between 2014 and 2019, seeking to identify their destination and the impact of these actions on public health policy. 
For the purposes of this article, we present the preliminary data of this study, such as the amount spent by the state and the receiving institutions. For data collection, the following data sources were used: documents publicly available on the Transparency Portal (payments made per year, receiving institutions, hospitalized subjects, periods, and the amounts of the daily rates paid), as well as the processes generated by SESA through its own system, ONBASE. For qualitative analysis, content analysis was used; for quantitative analysis, descriptive statistics were used. Thus, we seek to problematize the issue of judicialization of compulsory hospitalizations, considering the current situation in which this resource has been widely requested to legitimize the war on drugs. This scenario highlights the moral-legal discourse, pointing out strategies of control over bodies and of faith as an alternative. Keywords: compulsory hospitalization, drugs, judicialization, mental health
Procedia PDF Downloads 170
383 The Effect of Fish and Krill Oil on Warfarin Control
Authors: Rebecca Pryce, Nijole Bernaitis, Andrew K. Davey, Shailendra Anoopkumar-Dukie
Abstract:
Background: Warfarin is an oral anticoagulant widely used in the prevention of strokes in patients with atrial fibrillation (AF) and in the treatment and prevention of deep vein thrombosis (DVT). Regular monitoring of the International Normalised Ratio (INR) is required to ensure therapeutic benefit, with time in therapeutic range (TTR) used to measure warfarin control. A number of factors influence TTR, including diet, concurrent illness, and drug interactions. Extensive literature exists regarding the effect of conventional medicines on warfarin control, but documented interactions relating to complementary medicines are limited. It has been postulated that fish oil and krill oil supplementation may affect warfarin due to their association with bleeding events. However, to date, little is known as to whether fish and krill oil significantly alter the incidence of bleeding with warfarin or impact warfarin control. Aim: To assess the influence of fish oil and krill oil supplementation on warfarin control in AF and DVT patients by determining the influence of these supplements on TTR and bleeding events. Methods: A retrospective cohort analysis was conducted utilising patient information from a large private pathology practice in Queensland. AF and DVT patients receiving warfarin management by the pathology practice were identified and their TTR calculated using the Rosendaal method. Concurrent medications were analysed, and patients taking no other interacting medicines were identified and divided into users of fish oil and krill oil supplements and those taking no supplements. Study variables included TTR and the incidence of bleeding, with the exclusion criterion being less than 30 days of treatment with warfarin. Subject characteristics were reported as the mean and standard deviation for continuous data and numbers and percentages for nominal or categorical data. 
Data were analysed using GraphPad InStat Version 3, with a p value of <0.05 considered statistically significant. Results: Of the 2081 patients assessed for inclusion in this study, a total of 573 warfarin users met the inclusion criteria. Of these, 416 (72.6%) were AF patients and 157 (27.4%) DVT patients; overall, there were 316 (55.1%) male and 257 (44.9%) female patients. 145 patients were included in the fish oil/krill oil (supplement) group and 428 in the control group. The mean TTR of supplement users was 86.9% and that of the control group 84.7%, with no significant difference between the groups. Control patients experienced 1.6 times the number of minor bleeds per person compared to supplement patients and 1.2 times the number of major bleeds per person. However, this was not statistically significant, nor was the comparison between thrombotic events. Conclusion: No significant difference was found between supplement and control patients in terms of mean TTR, the number of bleeds, or thrombotic events. Fish oil and krill oil supplements, when used concurrently with warfarin, do not significantly affect warfarin control as measured by TTR and bleeding incidence. Keywords: atrial fibrillation, deep vein thrombosis, fish oil, krill oil, warfarin
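The Rosendaal method used above interpolates INR linearly between clinic visits and scores each day as in or out of the target range. A minimal sketch, with invented visit data and an assumed target range of 2.0-3.0:

```python
from datetime import date

def rosendaal_ttr(visits, low=2.0, high=3.0):
    """Time in therapeutic range by the Rosendaal method: INR is linearly
    interpolated between consecutive measurements, and each day is classified
    as in or out of the target range. Returns TTR as a percentage."""
    days_total = days_in = 0
    for (d0, inr0), (d1, inr1) in zip(visits, visits[1:]):
        span = (d1 - d0).days
        for day in range(span):
            inr = inr0 + (inr1 - inr0) * day / span
            days_total += 1
            days_in += low <= inr <= high
    return 100.0 * days_in / days_total

# Illustrative (date, INR) measurements, not patient data
visits = [(date(2023, 1, 1), 1.5), (date(2023, 1, 11), 2.5),
          (date(2023, 1, 31), 3.5)]
print(f"TTR = {rosendaal_ttr(visits):.1f}%")
```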
Procedia PDF Downloads 304
382 Rain Gauges Network Optimization in Southern Peninsular Malaysia
Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno
Abstract:
Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and to support flood modelling and prediction. One study showed that, even when using lumped models for flood forecasting, a proper gauge network can significantly improve the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the level of accuracy required by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of rain gauges. Rain gauge network structure does not depend only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November – February) for the period 1975 – 2008. Three different semivariogram models, Spherical, Gaussian and Exponential, were used, and their performances were compared. Cross validation was applied to compute the errors, and the results showed that the Exponential model is the best semivariogram. It was found that the requirements were satisfied by a network of 64 rain gauges with the minimum estimated variance, and 20 of the existing gauges were removed and relocated. An existing network may contain redundant stations that make little or no contribution to network performance in providing quality data. Therefore, two different cases were considered in this study. 
The first case optimally relocated the removed stations into new locations to investigate their influence on the calculated estimated variance, and the second case explored the possibility of relocating all 84 existing stations into new locations to determine the optimal positions. The relocation of stations in both cases showed that the new optimal locations reduced the estimated variance, proving that location plays an important role in determining the optimal network. Keywords: geostatistics, simulated annealing, semivariogram, optimization
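The three semivariogram models compared in the study have standard closed forms; they can be sketched as below, using the common "practical range" parameterisation (factor 3 in the exponential and Gaussian exponents), which may differ from the exact convention used in the study.

```python
import math

def spherical(h, nugget, sill, a):
    """Spherical semivariogram: rises to the sill exactly at range a, flat beyond."""
    if h >= a:
        return nugget + sill
    return nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)

def exponential(h, nugget, sill, a):
    """Exponential semivariogram: approaches the sill asymptotically,
    reaching ~95% of it at the practical range a."""
    return nugget + sill * (1 - math.exp(-3 * h / a))

def gaussian(h, nugget, sill, a):
    """Gaussian semivariogram: parabolic near the origin (very smooth fields)."""
    return nugget + sill * (1 - math.exp(-3 * (h / a) ** 2))
```

In cross validation, each gauge is left out in turn, kriged from the others under each fitted model, and the model with the smallest prediction errors (here, the Exponential) is retained.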
Procedia PDF Downloads 302
381 Modern Contraceptives versus Traditional Contraceptives and Abortion: An Ethnography of Fertility Control Practices in Burkina Faso
Authors: Seydou Drabo
Abstract:
This paper examines how traditional contraceptive and abortion practices challenge the use of modern contraceptives in Burkina Faso. It demonstrates how fears and 'superstitions' interact with knowledge about modern contraceptive methods to determine use in a context where other ways of controlling fertility (traditional contraceptives, abortion) are available to women in the public, private, and traditional health sectors. Furthermore, these issues arise at a time when Burkina Faso is among the countries with a high fertility rate (6.0 in 2010) and very low use of contraceptives: only 16% of married women of childbearing age were using a contraceptive method in 2010. The country also has a young population, since 33% of the population is between 10 and 24 years old, and this number is expected to increase by 2050, generating fears that a growing population of youth will put excessive pressure on available resources, including access to education, health services, and employment. Despite over two decades of dedicated policy attention, 24% of women of reproductive age (15-49) were estimated to have an unmet need for contraception in 2010. This paper draws on ethnographic fieldwork conducted since March 2016 in Burkina Faso (the research is still in progress). Data were collected from 25 women (users and non-users of modern and/or traditional contraceptives, post-abortion care patients), 4 street drug vendors, and 3 traditional healers through formal and informal interviews, as well as direct observation. The findings show that a variety of contraceptive methods and abortion drugs or methods, both traditional and modern, circulate and are available to women. Traditional contraceptives, called African contraceptives by some of our participants, refer to several birth control methods, including plant decoctions, magical rings, waist necklaces, and a ritual performed with a mixture of clay from a termite mound and menses. 
Abortion is practised in secret through the use of abortion drugs or intrauterine manoeuvres. Modern contraceptives include oral contraceptives, implants, and injectables. Stereotypes about modern contraceptives, having regular menstrual cycles and adopting natural birth control methods, bad experiences with modern contraceptive methods, their side effects, the irregularity of sexual activity, and the availability of emergency contraceptives are among the factors that limit their use among women. In addition, a negative perception has been built around modern contraceptives, which are seen as the drugs of 'white people'. In general, information on these drugs circulates in women's social networks (the first line of information on contraception). Some women prefer using what they call African contraceptives, or inducing an abortion, over modern contraceptives because of the latter's side effects. Furthermore, the findings show that women's practices and attitudes in controlling births vary throughout different phases of their lives. Beyond global discourses and technical solutions, the issue of family planning is fundamentally about social practices. Keywords: abortion, Burkina Faso, contraception, culture, women
Procedia PDF Downloads 206
380 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor
Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng
Abstract:
Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin. However, they are subjective, or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), a method based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising way to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term and preterm labor. A free-access database was used, with 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term-labor and 38 preterm-labor recordings were preprocessed with band-pass Butterworth filters of 0.08–4 Hz. Then, EHG signal features were extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number; spectral parameters including peak frequency, mean frequency and median frequency; wavelet packet coefficients; autoregression (AR) model coefficients; and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for distinguishing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five coefficients of the AR model showed a significant difference between term and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term.
The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference between the term- and preterm-labor groups for the other features. Any future work regarding classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy being among the prime candidates. Even if these methods are not yet useful for clinical practice, they provide the most promising indicators for preterm labor.
Keywords: electrohysterogram, feature, preterm labor, term labor
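As a rough illustration of the feature extraction described above, the sketch below computes a few of the time-domain and spectral features from a single EHG channel. The sampling rate, filter order and Welch segment length are assumptions for demonstration, not values taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def ehg_features(signal, fs=20.0):
    """A few of the EHG features named above (illustrative sketch only)."""
    # Band-pass Butterworth filter, 0.08-4 Hz as in the study (order assumed)
    b, a = butter(4, [0.08, 4.0], btype="band", fs=fs)
    x = filtfilt(b, a, signal)
    rms = float(np.sqrt(np.mean(x ** 2)))              # root mean square
    zc = int(np.count_nonzero(np.diff(np.sign(x))))    # zero-crossing number
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 1024))
    peak_f = float(f[np.argmax(pxx)])                  # peak frequency
    mean_f = float(np.sum(f * pxx) / np.sum(pxx))      # mean frequency
    cum = np.cumsum(pxx)
    median_f = float(f[np.searchsorted(cum, cum[-1] / 2)])  # median frequency
    return {"rms": rms, "zero_crossings": zc, "peak_freq": peak_f,
            "mean_freq": mean_f, "median_freq": median_f}
```

The nonlinear measures (maximal Lyapunov exponent, sample entropy, correlation dimension) require more involved algorithms and are omitted here.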
Procedia PDF Downloads 571
379 Pregnancy Outcome in Women with HIV Infection from a Tertiary Care Centre of India
Authors: Kavita Khoiwal, Vatsla Dadhwal, K. Aparna Sharma, Dipika Deka, Plabani Sarkar
Abstract:
Introduction: About 2.4 million (1.93-3.04 million) people are living with HIV/AIDS in India. Of all HIV infections, 39% (930,000) are among women. 5.4% of infections result from mother-to-child transmission (MTCT), and 25,000 infected children are born every year. Besides the risk of mother-to-child transmission of HIV, these women are at risk of higher adverse pregnancy outcomes. The objectives of the study were to compare the obstetric and neonatal outcomes of HIV-positive women with those of low-risk HIV-negative women, and to assess the effect of antiretroviral drugs on preterm birth and IUGR. Materials and Methods: This is a retrospective case record analysis of 212 HIV-positive women delivering between 2002 and 2015 in a tertiary health care centre, compared with 238 HIV-negative controls. Women who underwent medical termination of pregnancy and abortion were excluded from the study. The obstetric outcomes analyzed were pregnancy-induced hypertension, intrauterine growth restriction, preterm birth, anemia, gestational diabetes and intrahepatic cholestasis of pregnancy. The neonatal outcomes analysed were birth weight, Apgar score, NICU admission and perinatal transmission. Out of 212 women, 204 received antiretroviral therapy (ART) to prevent MTCT: 27 women received single-dose nevirapine (sdNVP) or sdNVP tailed with 7 days of zidovudine and lamivudine (ZDV + 3TC), 15 received ZDV, 82 received duovir and 80 received triple drug therapy, depending upon the time period of presentation. Results: The mean age of the 212 HIV-positive women was 25.72 ± 3.6 years, and 101 women (47.6%) were primigravida. HIV-positive status was diagnosed during pregnancy in 200 women, while 12 women were diagnosed prior to conception. Among the 212 HIV-positive women, 20 (9.4%) had preterm delivery (< 37 weeks), 194 (91.5%) delivered by cesarean section and 18 (8.5%) delivered vaginally.
178 neonates (83.9%) received exclusive top feeding and 34 neonates (16.0%) received exclusive breast feeding. When compared to low-risk HIV-negative women (n=238), HIV-positive women were more likely to deliver preterm (OR 1.27), have anemia (OR 1.39) and have intrauterine growth restriction (OR 2.07). The incidence of pregnancy-induced hypertension, diabetes mellitus and ICP was not increased. Mean birth weight was significantly lower in HIV-positive women (2593.6 ± 499 g) than in HIV-negative women (2919 ± 459 g). Complete follow-up is available for 148 neonates to date; the rest are under evaluation. Of these, 7 neonates were found to be HIV positive. The risk of preterm birth (p = 0.039) and IUGR (p = 0.739) was higher in HIV-positive women who did not receive any ART during pregnancy than in women who received ART. Conclusion: HIV-positive pregnant women are at increased risk of adverse pregnancy outcomes. A multidisciplinary team approach and the use of highly active antiretroviral therapy can optimize maternal and perinatal outcomes.
Keywords: antiretroviral therapy, HIV infection, IUGR, preterm birth
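For readers unfamiliar with the odds ratios (OR) quoted above, the snippet below shows the standard 2×2-table calculation. The control-group count is a hypothetical number chosen only to illustrate how an OR near 1.27 for preterm birth could arise; it is not the study's raw data.

```python
def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a, b = outcome yes/no in the exposed group;
    c, d = outcome yes/no in the unexposed group."""
    return (a / b) / (c / d)

# Hypothetical illustration: 20 of 212 HIV-positive women delivered preterm
# (as reported above); assume 18 of 238 controls did (invented for the example).
or_preterm = odds_ratio(20, 212 - 20, 18, 238 - 18)  # ~1.27
```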
Procedia PDF Downloads 260
378 The Problems of Women over 65 with Incontinence Diagnosis: A Case Study in Turkey
Authors: Birsel Canan Demirbag, Kıymet Yesilcicek Calik, Hacer Kobya Bulut
Abstract:
Objective: This study was conducted to evaluate the problems of women over 65 with an incontinence diagnosis. Methods: This descriptive study was conducted with women over 65 diagnosed with incontinence in four Family Health Centers in a city in the Eastern Black Sea region between November 1 and December 20, 2015. A total of 203, 107, 178 and 180 women over 65 were registered in these centers, respectively; 262 had been diagnosed with incontinence at least once and had an ongoing complaint, and 177 women volunteered for the study. During home visits, using a face-to-face survey methodology, participants were given a socio-demographic characteristics survey, the Sandvik severity scale, the Incontinence Quality of Life Scale, the Urogenital Distress Inventory and a questionnaire on the challenges experienced due to incontinence developed by the researcher. Data were analyzed with SPSS using percentages, numbers, the chi-square test, the Mann-Whitney U test and the t test, with a 95% confidence interval and a significance level of p < 0.05. Findings: The mean age was 67 ± 1.4 years, mean parity was 2.05 ± 0.04 and mean age at menopause was 44.5 ± 2.12 years. 66.3% were primary school graduates, 45.7% had a deceased spouse, 44.4% lived in a large family, 67.2% had their own room, 77.8% had an income, 89.2% could meet their own self-care needs, 73.2% had a diagnosis of mixed incontinence, 87.5% had suffered for 6-20 years, 78.2% took diuretics, antidepressants or heart medicines, 20.5% had combined urinary and fecal incontinence, 80.5% had received bladder training at least once, 90.1% had not had bladder diary calendar/control training programs, 31.1% had had a hysterectomy for prolapse, 97.1% had been treated for lower urinary tract infection at least once, 66.3% had seen a doctor to get medication in the last three months, 76.2% could not go out alone, 99.2% had at least one chronic disease, 87.6% complained of constipation, 2.9% had a chronic cough, and 45.1% had fallen when rising suddenly to go to the toilet.
The mean Incontinence Quality of Life (QOL) score was 54.3 ± 21.1, the Sandvik score was 12.1 ± 2.5 and the Urogenital Distress Inventory score was 47.7 ± 9.2. Difficulties experienced due to incontinence were: 99.5% felt unhappy, 67.1% had a constant feeling of urine smell due to failing to change briefs frequently, 87.2% withdrew from social life, 89.7% were unable to use pads, 99.2% felt they disturbed household members or other individuals, 87.5% felt dizziness or fell due to rising suddenly, 87.4% felt that others did not understand their situation, 94.3% had insomnia, 78.2% lacked assistance, and 84.7% could not afford urine protection briefs. Results: This study found many unsolved issues at the individual and community level affecting the quality of life of women with incontinence. Given how common this problem is among women, it is clear that regular home care training programs at the institutional level in our country would be effective in facilitating daily life.
Keywords: health problems, incontinence, incontinence quality of life questionnaire, old age, urogenital distress inventory, Sandvik severity, women
Procedia PDF Downloads 321
377 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method
Authors: Dangut Maren David, Skaf Zakwan
Abstract:
Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance. The major challenge faced by businesses in industry is the significant cost associated with delays in service delivery due to system downtime. Most of those businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need to monitor system activities and enhance system-to-system or component-to-component interactions; this has resulted in the generation of large volumes of data, known as big data. Analysis of big data is increasingly important; however, due to complexity inherent in the dataset, such as imbalanced class distributions, it becomes extremely difficult to build a model with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large volumes of data generated by industrial processes inherently come with different degrees of complexity, which poses a challenge for analytics. Thus, the imbalanced classification problem exists pervasively in industrial datasets and can affect the performance of learning algorithms, yielding poor classifier accuracy during model development. Misclassification of faults can result in unplanned breakdowns, leading to economic loss.
In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for predicting aircraft component replacement in advance is then developed by exploring aircraft historical data. The approach is based on a hybrid ensemble method that improves prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate the feasibility and effectiveness of our approach in terms of performance using real-world aircraft operation and maintenance datasets spanning over 7 years. Our approach shows better performance compared to other similar approaches. We also validate the strength of our approach for handling multiclass imbalanced datasets, where our results again show good performance compared to other baseline classifiers.
Keywords: prognostics, data-driven, imbalance classification, deep learning
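A minimal sketch of one common hybrid ensemble strategy for imbalanced data: train each base learner on a balanced bootstrap (all minority samples plus an equal-sized random draw from the majority class) and combine by majority vote. This illustrates the general technique only; it is not the authors' actual model, and the base learner, tree depth and ensemble size are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class BalancedBaggingSketch:
    """Toy ensemble for binary imbalance: each tree sees a balanced subset."""
    def __init__(self, n_estimators=25, random_state=0):
        self.n_estimators = n_estimators
        self.rng = np.random.default_rng(random_state)
        self.trees = []

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        minority = np.flatnonzero(y == 1)   # assumes 1 = rare class
        majority = np.flatnonzero(y == 0)
        for _ in range(self.n_estimators):
            # undersample the majority class to the minority-class size
            maj_draw = self.rng.choice(majority, size=len(minority),
                                       replace=False)
            idx = np.concatenate([minority, maj_draw])
            tree = DecisionTreeClassifier(max_depth=5, random_state=0)
            tree.fit(X[idx], y[idx])
            self.trees.append(tree)
        return self

    def predict(self, X):
        votes = np.mean([t.predict(X) for t in self.trees], axis=0)
        return (votes >= 0.5).astype(int)
```

Each tree is trained on balanced data, so the minority class is not drowned out, while the vote over many random majority draws recovers information from the full majority class.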
Procedia PDF Downloads 174
376 Conservation Detection Dogs to Protect Europe's Native Biodiversity from Invasive Species
Authors: Helga Heylen
Abstract:
With dogs saving wildlife in New Zealand since 1890 and governments in Africa, Australia and Canada trusting them to give the best results, Conservation Dogs Ireland wants to introduce more detection dogs to protect Europe's native wildlife. Conservation detection dogs are fast, portable and endlessly trainable. They are a cost-effective, highly sensitive and non-invasive way to detect protected and invasive species and wildlife disease. Conservation dogs find targets up to 40 times faster than any other method. They give results instantly, with near-perfect accuracy. They can search for multiple targets simultaneously, with no reduction in efficacy. The European Red List indicates that the decline in biodiversity has been most rapid in the past 50 years, and the risk of extinction has never been higher. Two examples of major threats dogs are trained to tackle are: (i) Japanese knotweed (Fallopia japonica), not only a serious threat to ecosystems, crops and structures like bridges and roads - it can wipe out the entire value of a house. The property industry and homeowners are only just waking up to the full extent of the nightmare. When road construction workers move topsoil with a trace of Japanese knotweed, that suffices to start a new colony. Japanese knotweed grows up to 7 cm a day. It can stay dormant and resprout after 20 years. In the UK, the cost of removing Japanese knotweed from the London Olympic site in 2012 was around £70m (€83m). UK banks already no longer lend on a house that has Japanese knotweed on site. Legally, landowners are now obliged to excavate Japanese knotweed and have it removed to a landfill. More and more, we see Japanese knotweed growing where a new house has been constructed and topsoil has been brought in. Conservation dogs are trained to detect small fragments of any part of the plant on sites and in topsoil. (ii) Zebra mussels (Dreissena polymorpha) are a threat to many waterways in the world.
They colonize rivers, canals, docks, lakes, reservoirs, water pipes and cooling systems. They live up to 3 years and release up to one million eggs each year. Zebra mussels attach to surfaces like rocks, anchors, boat hulls, intake pipes and boat engines. They cause changes in nutrient cycles, a reduction in plankton and increased plant growth around lake edges, leading to the decline of Europe's native mussel and fish populations. There is no solution, only costly measures to keep them at bay. With many interconnected networks of waterways, they have spread uncontrollably. Conservation detection dogs can detect the zebra mussel from its early larval stage, which is still invisible to the human eye. Detection dogs are more thorough and cost-effective than any other conservation method, and will greatly complement and speed up the work of biologists, surveyors, developers, ecologists and researchers.
Keywords: native biodiversity, conservation detection dogs, invasive species, Japanese Knotweed, zebra mussel
Procedia PDF Downloads 196
375 Sustainable Wood Harvesting from Juniperus procera Trees Managed under a Participatory Forest Management Scheme in Ethiopia
Authors: Mindaye Teshome, Evaldo Muñoz Braz, Carlos M. M. Eleto Torres, Patricia Mattos
Abstract:
Sustainable forest management planning requires up-to-date information on the structure, standing volume, biomass and growth rate of trees in a given forest. This kind of information is lacking for many forests in Ethiopia. The objective of this study was to quantify the population structure, diameter growth rate and standing volume of wood of Juniperus procera trees in the Chilimo forest. A total of 163 sample plots were set up in the forest to collect the relevant vegetation data. Growth ring measurements were conducted on stem disc samples collected from 12 J. procera trees. Diameter and height measurements were recorded from a total of 1399 individual trees with dbh ≥ 2 cm. The growth rate, maximum current and mean annual increments, minimum logging diameter and cutting cycle were estimated, and alternative cutting cycles were established. Using these data, the harvestable volume of wood was projected by combining four minimum logging diameters with five cutting cycles following the stand table projection method. The results show that J. procera trees have an average density of 183 stems ha⁻¹, a total basal area of 12.1 m² ha⁻¹ and a standing volume of 98.9 m³ ha⁻¹. The mean annual diameter growth ranges between 0.50 and 0.65 cm year⁻¹, with an overall mean of 0.59 cm year⁻¹. The population of J. procera trees follows a reverse J-shaped diameter distribution pattern. The maximum current annual increment in volume (CAI) occurred at around 49 years, when trees reached 30 cm in diameter. Trees showed the maximum mean annual increment in volume (MAI) at around 91 years, with a diameter of 50 cm. The simulation analysis revealed that a 40 cm minimum logging diameter (MLD) and a 15-year cutting cycle are the best combination. This combination showed the largest harvestable volume of wood, the largest volume increments and a 35% recovery of the initially harvested volume.
It is concluded that the forest is well stocked and has a large harvestable volume of wood from J. procera trees. This will enable the country to partly meet the national wood demand through domestic wood production. The use of the current population structure and diameter growth data from tree ring analysis enables accurate prediction of the harvestable volume of wood. The developed model gives an idea of the productivity of the J. procera tree population and enables policymakers to develop specific management criteria for wood harvesting.
Keywords: logging, growth model, cutting cycle, minimum logging diameter
Procedia PDF Downloads 88
374 Polymer Matrices Based on Natural Compounds: Synthesis and Characterization
Authors: Sonia Kudlacik-Kramarczyk, Anna Drabczyk, Dagmara Malina, Bozena Tyliszczak, Agnieszka Sobczak-Kupiec
Abstract:
Introduction: In the preparation of polymer materials, compounds of natural origin are currently gaining increasing interest. This is particularly noticeable in the synthesis of materials considered for biomedical use, where the selected material has to meet many requirements: it should be non-toxic, biodegradable and biocompatible. Therefore, special attention is directed to substances such as polysaccharides, proteins, or the basic building components of proteins, i.e. amino acids such as cysteine or histidine. These compounds may be crosslinked with other reagents, which leads to the preparation of polymer matrices. On the other hand, the previously mentioned requirements may also be met by polymers obtained by biosynthesis, e.g. polyhydroxybutyrate. This polymer belongs to the group of aliphatic polyesters and is synthesized by microorganisms (selected strains of bacteria) under specific conditions. It is possible to modify matrices based on a given polymer with substances of various origins. Such a modification may change their properties and/or provide the material with new features desirable for a specific application. The described materials are synthesized using UV radiation. The photopolymerization process is fast, waste-free and yields final products with favorable properties. Methodology: Polymer matrices were prepared by means of photopolymerization. The first step involved preparing solutions of the particular reagents and mixing them in the appropriate ratio. Next, a crosslinking agent and a photoinitiator were added to the reaction mixture, and the whole was poured into a Petri dish and treated with UV radiation. After the synthesis, polymer samples were dried at room temperature and subjected to numerous analyses aimed at determining their physicochemical properties.
Firstly, the sorption properties of the obtained polymer matrices were determined. Next, mechanical properties, i.e. tensile strength, were characterized. The ability of all prepared polymer matrices to deform under applied stress was checked. Such a property is important in view of the application of the analyzed materials, e.g. as wound dressings. Wound dressings have to be elastic because, depending on the location of the wound and its mobility, the dressing has to adhere properly to the wound. Furthermore, considering the use of the materials for biomedical purposes, it is essential to determine their behavior in environments simulating those occurring in the human body. Therefore, incubation studies using selected liquids were also conducted. Conclusions: As a result of the photopolymerization process, polymer matrices based on natural compounds were prepared. These exhibited favorable mechanical properties and swelling ability. Moreover, biocompatibility with simulated body fluids was confirmed. It can therefore be concluded that the analyzed polymer matrices constitute interesting materials that may be considered for biomedical use and subjected to further, more advanced analyses using specific cell lines.
Keywords: photopolymerization, polymer matrices, simulated body fluids, swelling properties
Procedia PDF Downloads 128
373 Evaluation of Soil Erosion Risk and Prioritization for Implementation of Management Strategies in Morocco
Authors: Lahcen Daoudi, Fatima Zahra Omdi, Abldelali Gourfi
Abstract:
In Morocco, as in most Mediterranean countries, water scarcity is a common situation because of low and unevenly distributed rainfall. The expansion of irrigated lands, as well as the growth of urban and industrial areas and tourist resorts, contributes to an increase in water demand. Therefore, in the 1960s, Morocco embarked on an ambitious program to increase the number of dams and boost water retention capacity. However, the decrease in the capacity of these reservoirs caused by sedimentation is a major problem; it is estimated at 75 million m³/year. Dams and reservoirs become unusable for their intended purposes due to sedimentation in large rivers, which results from soil erosion. Soil erosion is an important driving force in processes affecting the landscape and has become one of the most serious environmental problems, raising much interest throughout the world. Monitoring soil erosion risk is an important part of soil conservation practice, and estimating soil loss risk is the first step toward successful control of water erosion. The aim of this study is to estimate soil loss risk and its spatial distribution across the different regions of Morocco and to prioritize areas for soil conservation interventions. The approach follows the Revised Universal Soil Loss Equation (RUSLE) using remote sensing and GIS, the most popular empirically based model used globally for erosion prediction and control. This model has been tested in many agricultural watersheds around the world, particularly in large-scale basins, due to the simplicity of the model formulation and the easy availability of the dataset. The spatial distribution of annual soil loss was mapped by combining several factors: rainfall erosivity, soil erodibility, topography and land cover. The average annual soil loss estimated in several watersheds of Morocco varies from 0 to 50 t/ha/year.
Watersheds characterized by high erosion vulnerability are located in the north (Rif Mountains) and, more particularly, in the central part of Morocco (High Atlas Mountains). This variation in vulnerability is highly correlated with slope variation, which indicates that topography is the main agent of soil erosion within these catchments. These results could be helpful for planning natural resource management and for implementing the sustainable long-term management strategies necessary for soil conservation and for extending the projected economic life of the dams.
Keywords: soil loss, RUSLE, GIS-remote sensing, watershed, Morocco
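The RUSLE combination of factors described above is a simple per-cell product, A = R · K · LS · C · P, where A is the annual soil loss (t/ha/year). The sketch below applies it to a toy raster; every factor value is invented for illustration and none is taken from the Moroccan study.

```python
import numpy as np

def rusle(R, K, LS, C, P):
    """Annual soil loss A (t/ha/year) per grid cell: A = R*K*LS*C*P."""
    return R * K * LS * C * P

# 2x2 toy raster of factor grids (values invented for illustration)
R  = np.array([[60.0, 80.0], [70.0, 90.0]])   # rainfall erosivity
K  = np.array([[0.30, 0.25], [0.35, 0.40]])   # soil erodibility
LS = np.array([[1.2, 4.5], [0.8, 6.0]])       # slope length/steepness
C  = np.array([[0.20, 0.45], [0.10, 0.50]])   # cover management
P  = np.full((2, 2), 1.0)                     # support practice (none)

A = rusle(R, K, LS, C, P)  # cells with steep slopes and poor cover dominate
```

In a GIS workflow, each factor grid is derived from its own data source (rainfall records, soil maps, a DEM, land-cover classification) before the cell-wise multiplication.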
Procedia PDF Downloads 461
372 DTI Connectome Changes in the Acute Phase of Aneurysmal Subarachnoid Hemorrhage Improve Outcome Classification
Authors: Sarah E. Nelson, Casey Weiner, Alexander Sigmon, Jun Hua, Haris I. Sair, Jose I. Suarez, Robert D. Stevens
Abstract:
Aneurysmal subarachnoid hemorrhage (aSAH) can lead to significant morbidity and mortality and has traditionally been fraught by poor methods to predict outcome. Graph-theoretical information from structural connectomes indicated significant connectivity changes and improved acute prognostication in a Random Forest (RF) model. This study’s hypothesis was that structural connectivity changes occur in canonical brain networks of acute aSAH patients, and that these changes are associated with functional outcome at six months. In a prospective cohort of patients admitted to a single institution for management of acute aSAH, patients underwent diffusion tensor imaging (DTI) as part of a multimodal MRI scan. A weighted undirected structural connectome was created from each patient’s images using Constant Solid Angle (CSA) tractography, with 176 regions of interest (ROIs) defined by the Johns Hopkins Eve atlas. ROIs were sorted into four networks: Default Mode Network, Executive Control Network, Salience Network, and Whole Brain. The resulting nodes and edges were characterized using graph-theoretic features, including Node Strength (NS), Betweenness Centrality (BC), Network Degree (ND), and Connectedness (C). Clinical features (including demographics and the World Federation of Neurosurgical Societies scale) and graph features were used separately and in combination to train RF and Logistic Regression classifiers to predict two outcomes: dichotomized modified Rankin Score (mRS) at discharge and at six months after discharge (favorable outcome mRS 0-2, unfavorable outcome mRS 3-6). A total of 56 aSAH patients underwent DTI a median (IQR) of 7 (8.5) days after admission. The best performing model (RF), combining clinical and DTI graph features, had a mean Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.88 ± 0.00 and Area Under the Precision-Recall Curve (AUPRC) of 0.95 ± 0.00 over 500 trials.
The combined model performed better than the clinical model alone (AUROC 0.81 ± 0.01, AUPRC 0.91 ± 0.00). The highest-ranked graph features for prediction were NS, BC, and ND. These results indicate reorganization of the connectome early after aSAH. The performance of clinical prognostic models was increased significantly by the inclusion of DTI-derived graph connectivity metrics. This methodology could significantly improve prognostication after aSAH.
Keywords: connectomics, diffusion tensor imaging, graph theory, machine learning, subarachnoid hemorrhage
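The node-level graph features named above can be illustrated on a toy weighted adjacency matrix. The 4-node graph and its weights below are invented; betweenness centrality, omitted here for brevity, is typically computed with a graph library such as NetworkX.

```python
import numpy as np

# Toy symmetric weighted adjacency matrix for a 4-node undirected connectome
W = np.array([[0.0, 2.0, 0.5, 0.0],
              [2.0, 0.0, 1.0, 0.0],
              [0.5, 1.0, 0.0, 3.0],
              [0.0, 0.0, 3.0, 0.0]])

node_strength = W.sum(axis=1)         # NS: sum of edge weights at each node
network_degree = (W > 0).sum(axis=1)  # ND: number of edges at each node

def is_connected(W):
    """C: whether the graph forms a single component (depth-first search)."""
    n = len(W)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in range(n):
            if W[u, v] > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n
```

In the study's setting, W would be the tractography-derived connectome over the 176 atlas ROIs, and these per-node values would be summarized within each canonical network before being fed to the classifiers.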
Procedia PDF Downloads 189