Search results for: injury prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2983

103 Pathomorphological Markers of the Explosive Wave Action on Human Brain

Authors: Sergey Kozlov, Juliya Kozlova

Abstract:

Introduction: The increased attention of researchers worldwide to explosive trauma is associated with the constant renewal of military weapons and a significant increase in terrorist activity involving explosive devices. The blast wave is a well-known damaging factor of an explosion. The organs most sensitive to blast-wave action in the human body are the brain, lungs, intestines, and urinary bladder. The severity of damage to these organs depends on the distance from the epicenter of the explosion, the power of the explosion, the presence of barriers, the position of the body, and the presence of protective clothing. One of the sites where a shock wave acts in human tissues and organs is the vascular endothelial barrier, which suffers the greatest damage in the brain and lungs. The objective of the study was to determine the pathomorphological changes of the brain following the action of a blast wave. Materials and methods: Six male corpses delivered to the morgue of the Municipal Institution "Dnipropetrovsk regional forensic bureau" during 2014-2016 were studied. The cause of death in all cases was a military explosive injury. After visual external assessment of the brain, 1 x 1 x 1 cm samples were taken for histological study from different parts of the brain: the frontal, parietal, temporal, and occipital regions, as well as the cerebellum, pons, medulla oblongata, thalamus, walls of the lateral ventricles, and the floor of the fourth ventricle. The samples were immersed in 10% formalin solution for 24 hours. After fixation, paraffin blocks were prepared from the material using the standard method. Sections of 4-6 micron thickness were then cut from the paraffin blocks with a microtome and stained with hematoxylin and eosin. Microscopic analysis was performed with a light microscope using x4, x10, and x40 lenses.
Results of the study: Injuries of the brain were divided into macroscopic and microscopic. Macroscopic injuries were recorded on visual assessment: haemorrhages under the membranes and into the substance of the brain, their nature and localisation, and areas of softening. In the microscopic study, attention was drawn to changes in the vessels as well as in neurons and glial cells. Qualitative microscopic analysis of histological sections from different parts of the brain revealed a number of structural changes at both the cellular and tissue levels. Changes typical of most of the studied areas included damage to the vascular system. The most characteristic microscopic sign was separation of the vascular walls from the neuroglia with formation of a perivascular space. Alongside this sign, fragmentation of the vessel walls, haemolysis of erythrocytes, and haemorrhages into the newly formed perivascular spaces were found. In addition to damage to the cerebrovascular system, destruction of neurons and oedema of the brain tissue were observed in the histological sections. In some sections, the brain tissue had a heterogeneous step-like or wave-like appearance. Conclusions: The pathomorphological microscopic changes identified in the brains of those who died of explosive trauma can be used for diagnostic purposes, in conjunction with other characteristic signs of explosive trauma, in forensic and pathological studies. The complex of microscopic signs in the brain, i.e. separation of blood vessel walls from the neuroglia with perivascular space formation, fragmentation of the walls of these vessels, erythrocyte haemolysis, and haemorrhages into the newly formed perivascular spaces, is a direct indication of blast-wave action.

Keywords: blast wave, neurotrauma, human, brain

Procedia PDF Downloads 168
102 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness

Authors: Keda Li, Hong Hu

Abstract:

Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative Poisson's ratio behavior and tunable properties. Conventional auxetic lattice structures, whose deformation is governed by a bending-dominated mechanism, suffer from poor mechanical performance, which limits many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports a finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame; the stretching-dominated deformation mechanism was established according to Maxwell's stability criterion. Finite element (FE) models of the 2D lattice structures, assigned stainless steel material properties, were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, failure process, and compressive elastic properties. Based on the computational simulations, a parametric analysis was conducted to investigate the effect of the structural parameters on Poisson's ratio and mechanical properties. A geometrical optimization was then implemented to find the Poisson's ratio that maximizes specific energy absorption. In addition, the optimized 2D lattice structure was converted into a 3D geometric configuration using an orthogonal splicing method. The numerical results for the 2D and 3D structures under quasi-static compressive loading were compared with those of the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's modulus, Poisson's ratio, and specific energy absorption.
As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson's ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability, and yield strength of the proposed structure can be adjusted through different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical predictions suggest that the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications that simultaneously demand load-bearing capacity. Experimental analysis is required in future research to validate the numerical simulation.
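Maxwell's stability criterion mentioned in the abstract can be checked with a short calculation. The sketch below (not code from the paper) evaluates the 2D Maxwell number M = b - 2j + 3 for a pin-jointed frame with b struts and j joints; M >= 0 suggests a stretching-dominated mechanism, M < 0 a bending-dominated one.

```python
def maxwell_number_2d(b, j):
    """Maxwell number for a 2D pin-jointed frame:
    b = number of struts, j = number of joints."""
    return b - 2 * j + 3

def deformation_mechanism(b, j):
    """Classify the lattice deformation mechanism from the Maxwell number.
    M >= 0: stretching-dominated (stiff); M < 0: bending-dominated (compliant)."""
    return "stretching-dominated" if maxwell_number_2d(b, j) >= 0 else "bending-dominated"
```

For example, a triangle (3 struts, 3 joints) gives M = 0 and is stretching-dominated, while a square frame (4 struts, 4 joints) gives M = -1 and is bending-dominated.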

Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb

Procedia PDF Downloads 67
101 Integrating Multiple Types of Value in Natural Capital Accounting Systems: Environmental Value Functions

Authors: Pirta Palola, Richard Bailey, Lisa Wedding

Abstract:

Societies and economies worldwide fundamentally depend on natural capital. Alarmingly, natural capital assets are quickly depreciating, posing an existential challenge for humanity. The development of robust natural capital accounting systems is essential for transitioning towards sustainable economic systems and ensuring sound management of capital assets. However, the accurate, equitable and comprehensive estimation of natural capital asset stocks and their accounting values still faces multiple challenges. In particular, the representation of socio-cultural values held by groups or communities has arguably been limited, as to date, the valuation of natural capital assets has primarily been based on monetary valuation methods and assumptions of individual rationality. People relate to and value the natural environment in multiple ways, and no single valuation method can provide a sufficiently comprehensive image of the range of values associated with the environment. Indeed, calls have been made to improve the representation of multiple types of value (instrumental, intrinsic, and relational) and diverse ontological and epistemological perspectives in environmental valuation. This study addresses this need by establishing a novel valuation framework, Environmental Value Functions (EVF), that allows for the integration of multiple types of value in natural capital accounting systems. The EVF framework is based on the estimation and application of value functions, each of which describes the relationship between the value and quantity (or quality) of an ecosystem component of interest. In this framework, values are estimated in terms of change relative to the current level instead of calculating absolute values. Furthermore, EVF was developed to also support non-marginalist conceptualizations of value: it is likely that some environmental values cannot be conceptualized in terms of marginal changes. 
For example, ecological resilience value may, in some cases, be best understood as a binary: it either exists (1) or is lost (0). In such cases, a logistic value function may be used as the discriminator. Uncertainty in the value function parameterization can be considered through, for example, Monte Carlo sampling analysis. The use of EVF is illustrated with two conceptual examples. For the first time, EVF offers a clear framework and concrete methodology for the representation of multiple types of value in natural capital accounting systems, simultaneously enabling 1) the complementary use and integration of multiple valuation methods (monetary and non-monetary); 2) the synthesis of information from diverse knowledge systems; 3) the recognition of value incommensurability; 4) marginalist and non-marginalist value analysis. Furthermore, with this advancement, the coupling of EVF and ecosystem modeling can offer novel insights to the study of spatial-temporal dynamics in natural capital asset values. For example, value time series can be produced, allowing for the prediction and analysis of volatility, long-term trends, and temporal trade-offs. This approach can provide essential information to help guide the transition to a sustainable economy.
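The binary resilience example above can be illustrated with a small sketch. The function names and parameter values below are illustrative, not from the study: a logistic value function acts as the discriminator between "value exists" and "value lost", and Monte Carlo sampling propagates uncertainty in the threshold parameter.

```python
import math
import random

def logistic_value(q, q_crit, steepness):
    """Logistic value function of ecosystem quantity/quality q:
    value is ~0 below the critical threshold q_crit and ~1 above it."""
    return 1.0 / (1.0 + math.exp(-steepness * (q - q_crit)))

def mc_expected_value(q, q_crit_mean, q_crit_sd, steepness, n=10000, seed=42):
    """Propagate parameter uncertainty (here, a Gaussian prior on q_crit)
    by Monte Carlo sampling, returning the expected value at quantity q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        q_crit = rng.gauss(q_crit_mean, q_crit_sd)
        total += logistic_value(q, q_crit, steepness)
    return total / n
```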

Keywords: economics of biodiversity, environmental valuation, natural capital, value function

Procedia PDF Downloads 165
100 Evaluation of Cyclic Steam Injection in Multi-Layered Heterogeneous Reservoir

Authors: Worawanna Panyakotkaew, Falan Srisuriyachai

Abstract:

Cyclic steam injection (CSI) is a thermal recovery technique in which heated steam is periodically injected into a heavy oil reservoir. Oil viscosity is substantially reduced by the heat transferred from the steam; together with gas pressurization, oil recovery is greatly improved. Nevertheless, predicting the effectiveness of the process is difficult when the reservoir contains a degree of heterogeneity. Therefore, heterogeneity, together with the reservoir properties of interest, must be evaluated prior to field implementation. In this study, a thermal reservoir simulation program is utilized. The reservoir model is first constructed as multi-layered with a coarsening-upward sequence: the highest permeability is located in the top layer, with permeability values descending in the lower layers. Steam is injected from two wells located diagonally in a quarter five-spot pattern. Heavy oil is produced while adjusting operating parameters, including soaking period and steam quality. After selecting the conditions for both parameters that yield the highest oil recovery, the effects of the degree of heterogeneity (represented by the Lorenz coefficient), vertical permeability, and permeability sequence are evaluated. Surprisingly, the simulation results show that reservoir heterogeneity benefits the CSI technique. Increasing reservoir heterogeneity worsens the permeability distribution; a high permeability contrast causes steam to intrude into the upper layers. Once the temperature cools down during the back-flow period, condensed water percolates downward, resulting in high oil saturation in the top layers. Gas saturation appears at the top after a while, enabling better propagation of steam in the following cycle due to the high compressibility of gas. A large steam chamber therefore covers most of the area in the upper zone. Oil recovery reaches approximately 60%, about 20% higher than in the homogeneous case. Vertical permeability also benefits CSI.
Expansion of the steam chamber occurs within a shorter time from the upper to the lower zone. For the fining-upward permeability sequence, in which the permeability values are reversed from the previous case, steam does not override into the top layers because of their low permeability. Propagation of the steam chamber occurs in the middle of the reservoir, where the permeability is high enough. The rate of oil recovery is slower than in the coarsening-upward case due to the lower permeability at the location where the steam chamber propagates. Although the CSI technique produces oil quite slowly in the early cycles, once the steam chamber is formed deep in the reservoir, heat is delivered to the formation quickly in later cycles. Since reservoir heterogeneity is unavoidable, its effects must be thoroughly understood. This study shows that the CSI technique might be a compatible solution for highly heterogeneous reservoirs. The technique also shows benefits in terms of heat consumption, as steam is injected periodically.
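The Lorenz coefficient used above to represent the degree of heterogeneity can be computed from layer permeability, porosity, and thickness data. The sketch below is a generic implementation, not code from the study: layers are sorted by decreasing k/phi, and the coefficient is twice the area between the flow-capacity-versus-storage-capacity curve and the diagonal (0 for a homogeneous reservoir, approaching 1 for extreme heterogeneity).

```python
def lorenz_coefficient(perms, poros, thick):
    """Lorenz coefficient of heterogeneity from per-layer permeability (perms),
    porosity (poros), and thickness (thick) lists."""
    # Sort layers by decreasing flow/storage ratio k/phi.
    layers = sorted(zip(perms, poros, thick), key=lambda l: l[0] / l[1], reverse=True)
    kh_tot = sum(k * h for k, _, h in layers)   # total flow capacity
    ph_tot = sum(p * h for _, p, h in layers)   # total storage capacity
    x, y = [0.0], [0.0]
    ckh = cph = 0.0
    for k, p, h in layers:
        ckh += k * h / kh_tot
        cph += p * h / ph_tot
        x.append(cph)
        y.append(ckh)
    # Trapezoidal area under the curve; diagonal area is 0.5.
    area = sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2 for i in range(len(x) - 1))
    return 2 * (area - 0.5)
```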

Keywords: cyclic steam injection, heterogeneity, reservoir simulation, thermal recovery

Procedia PDF Downloads 440
99 Possibilities of Psychodiagnostics in the Context of Highly Challenging Situations in Military Leadership

Authors: Markéta Chmelíková, David Ullrich, Iva Burešová

Abstract:

The paper maps the possibilities and limits of diagnosing selected personality and performance characteristics of military leadership and psychology students in the context of coping with challenging situations. Individuals vary greatly in their ability to manage extreme situations effectively, yet existing diagnostic tools are often criticized, mainly for their low predictive power. Every modern army today focuses primarily on the systematic minimization of potential risks, including the prediction of desirable forms of behavior and of the performance of military commanders. The context of military leadership is well known for its life-threatening nature. It is therefore crucial to research stress load in this specific context in order to anticipate human failure in managing extreme situations. The aim of the submitted pilot study, using an experiment of 24 hours' duration, is to verify whether a specific combination of psychodiagnostic methods can identify people who are well equipped to cope with an increased stress load. In the pilot study, we conducted a 24-hour experiment with an experimental group (N=13) in a bomb shelter and a control group (N=11) in a classroom. Both groups comprised military leadership students (N=11) and psychology students (N=13) and were equalized in terms of study type and gender. Participants were administered the following test battery of personality characteristics: Big Five Inventory 2 (BFI-2), Short Dark Triad (SD-3), Emotion Regulation Questionnaire (ERQ), Fatigue Severity Scale (FSS), and Impulsive Behavior Scale (UPPS-P). This battery was administered only once, at the beginning of the experiment. Alongside it, participants completed a test battery consisting of the Test of Attention (d2) and the Bourdon test four times in total, at 6-hour intervals.
To better simulate an extreme situation, we tried to induce sleep deprivation: participants were required to try not to fall asleep throughout the experiment. Despite the assumption that a stay in an underground bomb shelter would manifest in impaired cognitive performance, this expectation was significantly confirmed in only one measurement, which can be interpreted as marginal in the context of multiple testing. This finding is a fundamental insight into the issue of stress management in extreme situations, which is crucial for effective military leadership. The results suggest that a 24-hour stay in a shelter, together with sleep deprivation, does not simulate sufficient stress to be reflected in the level of cognitive performance. In light of these findings, it would be interesting in the future to extend the diagnostic battery with physiological indicators of stress, such as heart rate, stress score, physical stress, and mental stress.

Keywords: bomb shelter, extreme situation, military leadership, psychodiagnostic

Procedia PDF Downloads 73
98 Predicting OpenStreetMap Coverage by Means of Remote Sensing: The Case of Haiti

Authors: Ran Goldblatt, Nicholas Jones, Jennifer Mannix, Brad Bottoms

Abstract:

Accurate, complete, and up-to-date geospatial information is the foundation of successful disaster management. When the 2010 Haiti Earthquake struck, accurate and timely information on the distribution of critical infrastructure was essential for the disaster response community to conduct effective search and rescue operations. Existing geospatial datasets such as Google Maps did not have comprehensive coverage of these features. In the days following the earthquake, many organizations released high-resolution satellite imagery, catalyzing a worldwide effort to map Haiti and support the recovery operations. Among these organizations, OpenStreetMap (OSM), a collaborative project to create a free editable map of the world, used the imagery to support volunteers in digitizing roads, buildings, and other features, creating the most detailed map of Haiti in existence in just a few weeks. However, large portions of the island are still not fully covered by OSM, and there is an increasing need for a tool that automatically identifies which areas in Haiti, as well as in other disaster-vulnerable countries, are not fully mapped. The objective of this project is to leverage different types of remote sensing measurements, together with machine learning approaches, to identify geographical areas where OSM coverage of building footprints is incomplete. Several remote sensing measures and derived products were assessed as potential predictors of OSM building footprint coverage, including: intensity of light emitted at night (based on VIIRS measurements); spectral indices derived from the Sentinel-2 satellite (normalized difference vegetation index (NDVI), normalized difference built-up index (NDBI), soil-adjusted vegetation index (SAVI), and urban index (UI)); surface texture (based on Sentinel-1 SAR measurements); and elevation and slope.
Additional remote sensing derived products, such as Hansen Global Forest Change, DLR's Global Urban Footprint (GUF), and the World Settlement Footprint (WSF), were also evaluated as predictors, as was the OSM street and road network (including junctions). A supervised classification with a random forest classifier predicted 89% of the variation of OSM building footprint area in a given cell. These predictions allowed for the identification of cells that are predicted to be covered but are not actually mapped yet. With these results, the methodology could be adapted to any location to assist with preparing for future disastrous events and to ensure that essential geospatial information is available to support response and recovery efforts during and following major disasters.
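The spectral indices listed in the abstract follow standard normalized-difference formulas. As a minimal illustration (band values are assumed to be surface reflectances; this is not code from the study):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: high over vegetation."""
    return (nir - red) / (nir + red)

def ndbi(swir, nir):
    """Normalized Difference Built-up Index: high over built-up areas."""
    return (swir - nir) / (swir + nir)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L is the soil-brightness correction."""
    return (1 + L) * (nir - red) / (nir + red + L)
```

Per-cell aggregates of such indices, together with nighttime lights, SAR texture, and terrain, would form the feature vector fed to the random forest.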

Keywords: disaster management, Haiti, machine learning, OpenStreetMap, remote sensing

Procedia PDF Downloads 103
97 Verification of Geophysical Investigation during Subsea Tunnelling in Qatar

Authors: Gary Peach, Furqan Hameed

Abstract:

Musaimeer outfall tunnel is one of the longest storm water tunnels in the world, with a total length of 10.15 km. The tunnel will accommodate surface and rain water received from drainage networks covering 270 km² of urban areas in southern Doha, with a pumping capacity of 19.7 m³/sec. The tunnel is excavated by a Tunnel Boring Machine (TBM) through the Rus Formation, Midra Shales, and Simsima Limestone. Water inflows at high pressure, complex mixed ground, and weaker ground strata prone to karstification, with vertical and lateral fractures connected to the sea bed, were also encountered during mining. In addition to the pre-tender geotechnical investigations, the Contractor carried out a supplementary offshore geophysical investigation to fine-tune the existing results of the geophysical and geotechnical investigations. Electrical resistivity tomography (ERT) and a seismic reflection survey were carried out. The offshore geophysical survey was performed, and interpretations of rock mass conditions were made to provide an overall picture of underground conditions along the tunnel alignment. This allowed the critical tunnelling areas and cutter head interventions to be planned accordingly. Karstification was monitored with a non-intrusive system installed on the TBM: Bore-tunnelling Electrical Ahead Monitoring (BEAM) was installed at the cutter head and was able to predict the rock mass up to three tunnel diameters ahead of the cutter head. The BEAM system was provided with an online facility for real-time monitoring of rock mass conditions, which were then correlated with the conditions predicted during the interpretation phase of the offshore geophysical surveys. Further correlation was carried out using samples of the rock mass taken during tunnel face inspections and from excavated material produced by the TBM. The BEAM data were continuously monitored to check the variations in resistivity and the percentage frequency effect (PFE) of the ground.
This system provided information about rock mass conditions, potential karst risk, and the potential for water inflow. The BEAM system was found to be more than 50% accurate in picking up the difficult ground conditions and faults predicted in the geotechnical interpretative report before the start of tunnelling operations. Upon completion of the project, it was concluded that the combined use of different geophysical investigation results allows the execution stage to be carried out with greater confidence and less geotechnical risk. The approaches used to predict rock mass conditions in the Geotechnical Interpretative Report (GIR) and in the seismic reflection and electrical resistivity tomography (ERT) surveys were concluded to be reliable, as the same rock mass conditions were encountered during tunnelling operations.
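The kind of real-time screening described for the BEAM data can be sketched as a simple rule: flag the ground ahead of the cutter head when resistivity drops (suggesting water-filled voids) or the percentage frequency effect rises (suggesting fractured or karstified rock). The thresholds below are illustrative placeholders, not values from the project.

```python
def flag_ground_risk(resistivity_ohm_m, pfe_percent,
                     res_threshold=50.0, pfe_threshold=10.0):
    """Flag potential hazards ahead of the cutter head from monitoring data.
    Thresholds are hypothetical placeholders, not project values.
    Low resistivity suggests water-bearing ground; high percentage
    frequency effect (PFE) suggests fracturing or karstification."""
    risks = []
    if resistivity_ohm_m < res_threshold:
        risks.append("possible water inflow")
    if pfe_percent > pfe_threshold:
        risks.append("possible karstification")
    return risks or ["no anomaly"]
```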

Keywords: tunnel boring machine (TBM), subsea, karstification, seismic reflection survey

Procedia PDF Downloads 208
96 Effect of Compaction Method on the Mechanical and Anisotropic Properties of Asphalt Mixtures

Authors: Mai Sirhan, Arieh Sidess

Abstract:

An asphaltic mixture is a heterogeneous material composed of three main components: aggregates, bitumen, and air voids. Professional experience and the scientific literature categorize asphaltic mixtures as viscoelastic materials whose behavior is determined by temperature and loading rate. The properties of the asphaltic mixture used under service conditions are characterized by compacting and testing cylindrical asphalt samples in the laboratory. These samples must closely resemble the internal structure of the mixture achieved in service and the mechanical characteristics of the compacted asphalt layer in the pavement. Laboratory samples are usually compacted at temperatures between 140 and 160 degrees Celsius; in this temperature range, the asphalt has a low degree of strength. The laboratory samples are compacted using dynamic or vibratory compaction methods. In the compaction process, the aggregates tend to align themselves in certain directions, which leads to anisotropic behavior of the asphaltic mixture. This issue was studied in the Strategic Highway Research Program (SHRP), which recommended using the gyratory compactor on the assumption that this method best mimics compaction in service. In Israel, the Netivei Israel company is considering adopting the gyratory method as a replacement for the Marshall method used today. Therefore, the compatibility of the gyratory method with Israeli asphaltic mixtures should be investigated. In this research, we aimed to examine the impact of the compaction method on the mechanical characteristics of asphaltic mixtures and to evaluate the degree of anisotropy in relation to the compaction method. To carry out this research, samples were compacted in vibratory and gyratory compactors. These samples were cored cylindrically both vertically (in the compaction direction) and horizontally (perpendicular to the compaction direction).
These specimens were tested under dynamic modulus and permanent deformation tests. The comparative results showed that: (1) specimens compacted by the vibratory compactor had higher dynamic modulus values than specimens compacted by the gyratory compactor; (2) both vibratory- and gyratory-compacted specimens behaved anisotropically, especially at high temperatures, and the degree of anisotropy was higher in specimens compacted by the gyratory method; (3) specimens compacted by the vibratory method and cored vertically had the highest resistance to rutting, whereas specimens compacted by the vibratory method and cored horizontally had the lowest; (4) these differences between specimen types arise mainly from the different internal arrangements of aggregates resulting from the compaction method; and (5) based on an initial prediction of the performance of a flexible pavement containing an asphalt layer with the characteristics measured in this research, the compaction method and the degree of anisotropy have a significant impact on the strains that develop in the pavement and on its resistance to fatigue and rutting defects.
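A simple way to quantify the degree of anisotropy reported above is the ratio of dynamic moduli measured on specimens cored in the two directions. The sketch below is a generic illustration, not the authors' analysis; the example modulus values are hypothetical.

```python
def anisotropy_ratio(E_vertical, E_horizontal):
    """Ratio of dynamic moduli from specimens cored in the compaction
    direction (vertical) vs. perpendicular to it (horizontal).
    A ratio of 1.0 indicates isotropic behavior."""
    return E_vertical / E_horizontal

def anisotropy_percent(E_vertical, E_horizontal):
    """Degree of anisotropy expressed as percent deviation from isotropy."""
    return abs(anisotropy_ratio(E_vertical, E_horizontal) - 1.0) * 100.0
```

For hypothetical moduli of 10,000 MPa (vertical) and 8,000 MPa (horizontal), the ratio is 1.25, i.e., a 25% degree of anisotropy.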

Keywords: anisotropy, asphalt compaction, dynamic modulus, gyratory compactor, mechanical properties, permanent deformation, vibratory compactor

Procedia PDF Downloads 100
95 Potential of Aerodynamic Feature on Monitoring Multilayer Rough Surfaces

Authors: Ibtissem Hosni, Lilia Bennaceur Farah, Saber Mohamed Naceur

Abstract:

In order to assess water availability in the soil, it is crucial to have information about the distributed soil moisture content; this parameter helps in understanding the effect of humidity on the exchange between soil, plant cover, and atmosphere, in addition to fully understanding surface processes and the hydrological cycle. The aerodynamic roughness length, in turn, is a surface parameter that scales the vertical profile of the horizontal component of the wind speed and characterizes the ability of the surface to absorb the momentum of the airflow. In numerous applications of surface hydrology and meteorology, the aerodynamic roughness length is an important parameter for estimating momentum, heat, and mass exchange between the soil surface and the atmosphere. It is important, in this respect, to consider the impact of atmospheric factors in general, and natural erosion in particular, on the evolution of the soil and on the characterization and prediction of its physical parameters. The study of wind-induced movements over vegetated soil surfaces, whether spaced plants or a continuous plant cover, is motivated by significant research efforts in agronomy and biology; a major problem in this area is crop damage by wind, a growing field of research. Most models of the soil surface require information about the aerodynamic roughness length and its temporal and spatial variability. We have used a bi-dimensional multi-scale (2D MLS) roughness description in which the surface is considered a superposition of a finite number of one-dimensional Gaussian processes, each with its own spatial scale, using the wavelet transform and the Mallat algorithm to describe natural surface roughness. We have introduced the multi-layer aspect of soil surface humidity to take into account a volume component in the problem of radar signal backscattering.
As humidity increases, the dielectric constant of the soil-water mixture increases, and this change is detected by microwave sensors. Nevertheless, many existing models in the field of radar imagery cannot be applied directly to areas covered with vegetation because of the vegetation's own backscattering: the radar response corresponds to the combined signature of the vegetation layer and the soil surface layer. Therefore, the key issue in the numerical estimation of soil moisture is to separate the two contributions and calculate the scattering behaviors of both layers. This paper presents a synergistic methodology for estimating roughness and soil moisture from C-band radar measurements. The methodology is represented by a microwave/optical model that has been used to calculate the scattering behavior of the vegetation-covered area by defining the scattering of the vegetation and of the soil below.
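The 2D MLS idea of describing the surface as a superposition of Gaussian processes at different spatial scales can be illustrated in one dimension. The sketch below is a simplified stand-in (moving-average smoothing at dyadic window sizes rather than a true wavelet/Mallat decomposition), not the authors' implementation.

```python
import numpy as np

def multiscale_surface(n=256, n_scales=4, sigma0=1.0, seed=0):
    """Generate a 1D rough profile as a superposition of band-limited
    Gaussian processes, one per dyadic spatial scale (a toy analogue of
    the wavelet-based 2D multi-scale roughness description)."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n)
    for s in range(n_scales):
        # Smooth white noise with a moving average whose window doubles
        # at each scale, so each component varies on a coarser length scale.
        w = 2 ** (s + 1)
        noise = rng.normal(0.0, sigma0 / (s + 1), n + w)
        kernel = np.ones(w) / w
        z += np.convolve(noise, kernel, mode="valid")[:n]
    return z
```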

Keywords: aerodynamic, bi-dimensional, vegetation, synergistic

Procedia PDF Downloads 246
94 Toward Understanding the Glucocorticoid Receptor Network in Cancer

Authors: Swati Srivastava, Mattia Lauriola, Yuval Gilad, Adi Kimchi, Yosef Yarden

Abstract:

The glucocorticoid receptor (GR) has been proposed to play important but incompletely understood roles in cancer. Glucocorticoids (GCs) are widely used as co-medication in various carcinomas due to their ability to reduce the toxicity of chemotherapy. Furthermore, GR antagonism has proven to be a strategy for treating triple-negative breast cancer and castration-resistant prostate cancer. These observations suggest differential GR involvement in cancer subtypes. The goal of our study has been to elaborate the current understanding of GR signaling in tumor progression and metastasis. Our study involves two cellular models: non-tumorigenic breast epithelial cells (MCF10A) and Ewing sarcoma cells (CHLA9). In the breast cell model, the results indicated that the GR agonist dexamethasone inhibits EGF-induced mammary cell migration, and this effect was blocked when cells were treated with a GR antagonist, RU486. Microarray analysis of gene expression revealed that the mechanism underlying this inhibition involves dexamethasone-mediated repression of well-known activators of EGFR signaling, alongside enhancement of several of EGFR's negative feedback loops. Because GR acts primarily through glucocorticoid response elements (GREs) or via a tethering mechanism, our next aim was to find the transcription factors (TFs) that can interact with GR in MCF10A cells. The TF-binding motifs overrepresented at the promoters of dexamethasone-regulated genes were predicted using bioinformatics. To validate the predictions, we performed high-throughput protein complementation assays (PCA). For this, we utilized the Gaussia luciferase PCA strategy, which enabled analysis of protein-protein interactions between GR and the predicted TFs of mammary cells.
A library comprising both nuclear receptors (estrogen receptor, mineralocorticoid receptor, GR) and TFs was fused to fragments of GLuc, namely GLuc(1)-X, X-GLuc(1), and X-GLuc(2), where GLuc(1) and GLuc(2) correspond to the N-terminal and C-terminal fragments of the luciferase gene. The resulting library was screened in human embryonic kidney 293T (HEK293T) cells for all possible interactions between nuclear receptors and TFs. By screening all combinations of TFs and nuclear receptors, we identified several positive interactions, which were strengthened in response to dexamethasone and abolished in response to RU486. Furthermore, the interactions between GR and the candidate TFs were validated by co-immunoprecipitation in MCF10A and CHLA9 cells. Currently, the roles played by the uncovered interactions are being evaluated in various cellular processes, such as cellular proliferation, migration, and invasion. In conclusion, our assay provides an unbiased network analysis of interactions between nuclear receptors and other TFs, which can lead to important insights into transcriptional regulation by nuclear receptors in various diseases, in this case cancer.

Keywords: epidermal growth factor, glucocorticoid receptor, protein complementation assay, transcription factor

Procedia PDF Downloads 204
93 Transient Heat Transfer: Experimental Investigation near the Critical Point

Authors: Andreas Kohlhepp, Gerrit Schatte, Wieland Christoph, Spliethoff Hartmut

Abstract:

In recent years, research on heat transfer phenomena of water and other working fluids near the critical point has experienced growing interest for power engineering applications. To match the highly volatile characteristics of renewable energies, conventional power plants need to shift towards flexible operation. This requires speeding up the load change dynamics of steam generators and their heating surfaces near the critical point. In dynamic load transients, both a high heat flux with an unfavorable ratio to the mass flux and a high difference between fluid and wall temperatures may cause problems. These conditions may lead to deteriorated heat transfer (at supercritical pressures), dry-out or departure from nucleate boiling (at subcritical pressures), all cases leading to an extensive rise of temperatures. For relevant technical applications, the heat transfer coefficients need to be predicted correctly in transient scenarios to prevent damage to the heated surfaces (membrane walls, tube bundles or fuel rods). In transient processes, the state-of-the-art method of calculating heat transfer coefficients is to apply a multitude of different steady-state correlations to the momentary local parameters at each time step. This approach does not necessarily reflect the different cases that may lead to a significant variation of the heat transfer coefficients and shows gaps in the individual ranges of validity. An algorithm was implemented to calculate the transient behavior of steam generators during load changes. It is used to assess existing correlations for transient heat transfer calculations. It is also desirable to validate the calculation using experimental data. By means of a new full-scale supercritical thermo-hydraulic test rig, experimental data are obtained to describe the transient phenomena under dynamic boundary conditions as mentioned above and to serve for validation of transient steam generator calculations. 
Aiming to improve correlations for the prediction of the onset of deteriorated heat transfer in both stationary and transient cases, the test rig was specially designed for this task. It is a closed-loop design with a directly electrically heated evaporation tube; the total heating power of the evaporator tube and the preheater is 1 MW. To allow a wide range of parameters, including supercritical pressures, the maximum pressure rating is 380 bar. The measurements contain the most important extrinsic thermo-hydraulic parameters. Moreover, a high geometric resolution allows accurate prediction of the local heat transfer coefficients and fluid enthalpies.
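The time-stepped approach described above can be sketched in a minimal form: a single steady-state correlation is applied to the local fluid parameters of each time step. The sketch below uses the Dittus-Boelter correlation, a standard single-phase correlation not named in the abstract, with hypothetical fluid states and tube diameter.

```python
def dittus_boelter_htc(re, pr, k, d):
    """Heat transfer coefficient h = Nu * k / d [W/m^2K] from the
    Dittus-Boelter correlation Nu = 0.023 Re^0.8 Pr^0.4 (turbulent,
    single-phase flow)."""
    nu = 0.023 * re ** 0.8 * pr ** 0.4
    return nu * k / d

def transient_htc_series(states, d):
    """Apply the steady-state correlation to the momentary local
    parameters of each time step, as described in the abstract."""
    return [dittus_belter if False else dittus_boelter_htc(s["Re"], s["Pr"], s["k"], d)
            for s in states]

# hypothetical load-change transient: Reynolds number ramping down
states = [
    {"Re": 2.0e5, "Pr": 1.1, "k": 0.45},
    {"Re": 1.5e5, "Pr": 1.2, "k": 0.43},
    {"Re": 1.0e5, "Pr": 1.4, "k": 0.40},
]
h = transient_htc_series(states, d=0.02)  # inner tube diameter 20 mm
```

In a real load transient the resulting coefficient series would be checked against the deterioration criteria mentioned above; here it simply falls as the mass flux drops.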

Keywords: departure from nucleate boiling, deteriorated heat transfer, dryout, supercritical working fluid, transient operation of steam generators

Procedia PDF Downloads 200
92 In Silico Modeling of Drugs Milk/Plasma Ratio in Human Breast Milk Using Structures Descriptors

Authors: Navid Kaboudi, Ali Shayanfar

Abstract:

Introduction: Feeding infants with safe milk from the beginning of their life is an important issue. Drugs used by mothers can affect the composition of milk in a way that is not only unsuitable but also toxic for infants. Consuming permeable drugs during that sensitive period could lead to serious side effects in the infant. Due to the ethical restrictions of drug testing on humans, especially women during their lactation period, computational approaches based on structural parameters could be useful. The aim of this study is to develop mechanistic models to predict the M/P ratio of drugs during the breastfeeding period based on their structural descriptors. Methods: Two hundred and nine different chemicals with their M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value according to the Malone classification: 1: drugs with M/P > 1, which are considered high risk; 2: drugs with M/P ≤ 1, which are considered low risk. Thirty-eight chemical descriptors were calculated with ACD/Labs 6.00 and DataWarrior software in order to assess penetration during the breastfeeding period. Later on, four specific models based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens were established for the prediction. The mentioned descriptors can predict the penetration with acceptable accuracy. For the remaining compounds of each model (N=147, 158, 160, and 174 for models 1 to 4, respectively), binary logistic regression with SPSS 21 was performed in order to obtain a model to predict the penetration ratio of compounds. Only structural descriptors with p-value<0.1 remained in the final model. 
Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, and total surface area were obtained in order to predict the penetration of drugs into human milk during the breastfeeding period. About 3-4% of milk consists of lipids, and the amount of lipid increases after parturition. Lipid-soluble drugs diffuse along with fats from plasma to the mammary glands. Lipophilicity plays a vital role in predicting the penetration class of drugs during the lactation period. It was shown in the logistic regression models that compounds with a number of hydrogen bond acceptors, PSA and TSA above 5, 90 and 25, respectively, are less permeable to milk because they are less soluble in milk fat. The pH of milk is acidic and, due to that, basic compounds tend to be more concentrated in milk than in plasma, while acidic compounds may reach lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during the lactation period. The obtained models can lead to a higher speed in the drug development process, saving energy and costs. Milk/plasma ratio assessment of drugs requires multiple steps of animal testing, which has its own ethical issues. QSAR modeling could help scientists reduce the amount of animal testing, and our models are also eligible to do that.
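The binary classification step described above can be sketched as a from-scratch logistic regression. The descriptor values below are invented and pre-scaled for illustration (H-bond acceptors / 10, PSA / 100); this is a sketch of the technique, not the authors' SPSS models or data.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=3000):
    """Minimal stochastic gradient-descent logistic regression.
    X rows are (scaled) descriptor vectors; y is the binary Malone
    class (1 = M/P > 1, i.e. high risk)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5 else 0

# invented training set: low descriptor values -> more permeable (class 1)
X = [[0.2, 0.40], [0.3, 0.35], [0.4, 0.50],
     [0.7, 0.95], [0.8, 1.10], [0.6, 1.20]]
y = [1, 1, 1, 0, 0, 0]
w, b = fit_logistic(X, y)
```

The learned decision boundary reproduces the qualitative finding in the abstract: compounds with descriptor values above the thresholds fall into the less permeable class.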

Keywords: logistic regression, breastfeeding, descriptors, penetration

Procedia PDF Downloads 44
91 Unidentified Remains with Extensive Bone Disease without a Clear Diagnosis

Authors: Patricia Shirley Almeida Prado, Selma Paixão Argollo, Maria De Fátima Teixeira Guimarães, Leticia Matos Sobrinho

Abstract:

Skeletal differential diagnosis is essential in forensic anthropology in order to distinguish skeletal trauma from normal osseous variation and pathological processes. Thus, part of the forensic anthropological task is to differentiate criminal skeletal injuries from normal skeletal variation (bone fusion or nonunion, transitional vertebrae and other non-metric traits), and non-traumatic skeletal pathology (myositis ossificans, arthritis, bone metastasis, osteomyelitis) from traumatic skeletal pathology (myositis ossificans traumatica), avoiding misdiagnosis. This case shows the importance of effective pathological diagnosis in order to accelerate the identification process of skeletonized human remains. THE CASE: Unidentified skeletal remains at the Medico-Legal Institute Nina Rodrigues (Salvador), of a young adult male (estimated 29 to 40 years), showing a massive heterotopic ossification of the right tibia at the upper epiphysis and the adjacent femoral articular surface; an extensive ossification of the right clavicle (at the sternal extremity); and heterotopic ossifications of the right scapula (upper third of the lateral margin and infraglenoid tubercle) and of the head of the right humerus at the shoulder joint area. Curiously, this case also shows unusual porosity in certain vertebral bodies and in some tarsal and carpal bones. Likewise, the fifth metacarpal bones (right and left) showed healed fractures that left both bones distorted. Based on the literature and protocols for the identification of pathological conditions in human skeletal remains, these alterations can be misdiagnosed, and this skeleton may present more than one pathological process. The anthropological forensic lab at the Medico-Legal Institute Nina Rodrigues in Salvador (Brazil) adopts international protocols for ancestry, sex, age and stature estimation, and has also implemented well-established conventions to identify pathological disease and skeletal alterations. 
The most compatible diagnosis for this case is hematogenous osteomyelitis, due to the following findings: 1: the healed fracture pattern of the clavicle shows a cloaca, which is pathognomonic for osteomyelitis; 2: the healed metacarpal fractures do not present cloacae, although they developed periosteal new bone formation; 3: the superior articular surface of the right tibia shows an extensive inflammatory healing process that extends to the adjacent femoral articular surface, with some cloacae in the diseased tibial bone; 4: the uncommon porosities may result from a hematogenous infectious process. The fractures probably occurred at different times, based on the healing process: the tibial injury is more extensive and has not been remodeled, while the metacarpal and clavicle fractures are properly healed. We suggest that the clavicle and tibia fractures were infected by an existing infectious disease (syphilis, tuberculosis, brucellosis) or an existing syndrome (Gorham's disease), which led to the development of osteomyelitis. This hypothesis is supported by the fact that different bones are affected to different degrees: the metacarpals show no cloaca but do show periosteal new bone formation, and the unusual porosities lack the classical findings of osteoarthritic processes, such as marginal osteophytes, pitting and new bone formation, showing instead an erosive process without bone formation or osteophytes. To confirm our hypothesis, we are working on different clinical approaches, such as DNA analysis, histopathology and other imaging exams, to reach the correct diagnosis.

Keywords: bone disease, forensic anthropology, hematogenous osteomyelitis, human identification, human remains

Procedia PDF Downloads 304
90 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making processes, but most of them are linear, and linear models reach their limitations with the non-linearity of data from the natural environment; therefore, accurate estimation is difficult. Artificial Neural Networks (ANNs) have found extensive acceptance in addressing the modeling of the complex real world in non-linear environments. ANNs have more general and flexible functional forms than traditional statistical methods. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of crop response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management, detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for real-time wheat chlorophyll estimation. Cloud-free scenes of LANDSAT 8 were acquired (Feb-March 2016-17) at the same time as a ground-truthing campaign was performed for chlorophyll estimation using a SPAD-502. Different vegetation indices were derived from LANDSAT 8 imagery using ERDAS Imagine (v. 2014) software for chlorophyll determination. The vegetation indices included the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI) and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used. The Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. 
For the MLP, 61.7% of the data were used for training, 28.3% for validation, and the remaining 10% for evaluating the ANN model results. For error evaluation, the sum of squares error and relative error were used. The ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. The MCARI and NDVI were revealed to be the more sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination (R²=0.93 and 0.90, respectively). The results suggested that the use of high spatial resolution satellite imagery for the retrieval of crop chlorophyll content by using an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
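The spectral indices named above have standard per-pixel formulations; a minimal sketch with hypothetical surface reflectances follows. Note that MCARI's 550/670/700 nm bands include a red-edge band that Landsat 8 lacks, so any Landsat-derived version is an approximation using the nearest bands.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green Normalized Difference Vegetation Index."""
    return (nir - green) / (nir + green)

def mcari(r700, r670, r550):
    """Modified Chlorophyll Absorption Ratio Index, standard formulation:
    ((R700 - R670) - 0.2 (R700 - R550)) * (R700 / R670)."""
    return ((r700 - r670) - 0.2 * (r700 - r550)) * (r700 / r670)

# hypothetical surface reflectances for a healthy wheat pixel
nir, red, green = 0.45, 0.08, 0.12
print(round(ndvi(nir, red), 3))  # -> 0.698
```

In practice these functions would be applied band-wise over the whole image array, and the resulting index maps fed to the ANN as inputs.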

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 122
89 Birth Weight, Weight Gain and Feeding Pattern as Predictors for the Onset of Obesity in School Children

Authors: Thimira Pasas P, Nirmala Priyadarshani M, Ishani R

Abstract:

Obesity is a global health issue. Early identification is essential to plan interventions and to reduce the worsening of obesity and its consequences for the individual's health. Childhood obesity is multifactorial, with both modifiable and unmodifiable risk factors. A genetically susceptible individual (unmodifiable), when placed in an obesogenic environment (modifiable), is likely to develop obesity. The present study was conducted to identify the age of onset of childhood obesity and the influence of modifiable risk factors for childhood obesity among school children living in a suburban area of Sri Lanka. Data were collected from 11-12-year-old children attending government schools in the Piliyandala Educational Zone. A stratified random sampling method was used to select schools so that the representative sample included all 3 types of government schools; due to the prevailing pandemic situation, information from the last school medical inspection (2020 data) was used for this purpose. For each obese child identified, 2 non-obese children were selected as controls. A single representative from the area was selected by using a systematic random sampling method with a sampling interval of 3. Data were collected using a validated, pre-tested self-administered questionnaire and the Child Health Development Record of the child. An introduction, which included explanations and instructions for filling in the questionnaire, was carried out as a group activity prior to distributing the questionnaire among the sample. The results of the present study aligned with the hypothesis that the age of onset of childhood obesity, and hence the window for its prediction, lies within the first two years of life. 
A total of 130 children (66 males: 64 females) participated in the study. The age of onset of obesity was seen to be within the first two years of life. Obesity risk at 11-12 years of age was identified to be 3 times higher among females who underwent rapid weight gain during infancy. Consuming milk prior to breakfast emerged as a risk factor, increasing the obesity risk 3-fold, especially among females. Proper monitoring must be carried out to identify rapid weight gain, especially within the first 2 years of life. Identification of the confounding factors, proper awareness of mothers/guardians and effective interventions need to be carried out to reduce the obesity risk among school children in the future.

Keywords: childhood obesity, school children, age of onset, weight gain, feeding pattern, activity level

Procedia PDF Downloads 118
88 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices

Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese

Abstract:

Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers’ acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are getting a foothold in the meat market, envisioning more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted into grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing and pattern of residuals. 
Good regression models were obtained, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure yielded good fat segmentation, making the visual approach for quantification of the different fat fractions in dry-cured ham slices simple, accurate and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious and time-consuming chemical determinations. As future perspectives, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method of dry-cured hams based on fat distribution. Therefore, the system will be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
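The final quantification step, fat area as a percentage of slice area, can be sketched as follows. This assumes segmentation has already produced a slice mask and that fat appears as bright pixels in the grey-scale scan; the threshold and the toy image are invented, and the Canny edge-enhancement stage described above is omitted.

```python
def fat_fraction(gray, slice_mask, fat_threshold=200):
    """Estimate fat area as the percentage of slice pixels brighter than
    a threshold (fat is light against darker muscle in grey-scale scans).
    gray: 2-D list of 0-255 intensities; slice_mask: 2-D list of 0/1,
    where 1 marks pixels belonging to the ham slice."""
    slice_px = fat_px = 0
    for row_g, row_m in zip(gray, slice_mask):
        for g, m in zip(row_g, row_m):
            if m:
                slice_px += 1
                fat_px += g >= fat_threshold
    return 100.0 * fat_px / slice_px

# toy 3x4 "slice": bright pixels are fat, masked-out pixel is background
gray = [[230, 90, 80, 220],
        [210, 70, 60, 215],
        [ 40, 50, 45, 205]]
mask = [[1, 1, 1, 1],
        [1, 1, 1, 1],
        [0, 1, 1, 1]]
pct = fat_fraction(gray, mask)
```

The same counting logic, applied separately to intermuscular and intramuscular masks, yields the per-fraction percentages used in the regression models.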

Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis

Procedia PDF Downloads 155
87 Monitoring the Responses to Nociceptive Stimuli During General Anesthesia Based on Electroencephalographic Signals in Surgical Patients Undergoing General Anesthesia with Laryngeal Mask Airway (LMA)

Authors: Ofelia Loani Elvir Lazo, Roya Yumul, Sevan Komshian, Ruby Wang, Jun Tang

Abstract:

Background: Monitoring the anti-nociceptive effects of perioperative medications has long been desired as a way to provide anesthesiologists with information regarding a patient’s level of antinociception and to preclude untoward autonomic responses and reflexive muscular movements from sudden, strong painful stimuli intraoperatively. To this end, electroencephalogram (EEG) based tools including BIS and qCON were designed to provide information about the depth of sedation, while qNOX was produced to inform on the degree of antinociception. The goal of this study was to compare the reliability of qCON/qNOX to BIS as specific indicators of response to nociceptive stimulation. Methods: Sixty-two patients undergoing general anesthesia with LMA were included in this study. Institutional Review Board (IRB) approval was obtained, and informed consent was acquired prior to patient enrollment. Inclusion criteria included American Society of Anesthesiologists (ASA) class I-III, 18 to 80 years of age, and either gender. Exclusion criteria included the inability to consent. Withdrawal criteria included conversion to endotracheal tube and EEG malfunction. BIS and qCON/qNOX electrodes were simultaneously placed on all patients prior to induction of anesthesia and were monitored throughout the case, along with other perioperative data, including patient response to noxious stimuli. All intraoperative decisions were made by the primary anesthesiologist without influence from qCON/qNOX. Student’s t-distribution, prediction probability (Pk), and ANOVA were used to statistically compare the relative ability to detect nociceptive stimuli for each index. Twenty patients were included in the preliminary analysis. 
Results: A comparison of overall intraoperative BIS, qCON and qNOX indices demonstrated no significant difference between the three measures (N=62, p>0.05). Meanwhile, index values for qNOX (62±18) were significantly higher than those for BIS (46±14) and qCON (54±19) immediately preceding patient responses to nociceptive stimulation in a preliminary analysis (N=20, p=0.0408). Notably, certain hemodynamic measurements demonstrated a significant increase in response to painful stimuli (MAP increased from 74±13 mm Hg at baseline to 84±18 mm Hg during noxious stimuli [p=0.032], and HR from 76±12 BPM at baseline to 80±13 BPM during noxious stimuli [p=0.078]). Conclusion: In this observational study, BIS and qCON/qNOX provided comparable information on patients’ level of sedation throughout the course of an anesthetic. Meanwhile, increases in qNOX values demonstrated a superior correlation to an imminent response to stimulation relative to all other indices.

Keywords: antinociception, bispectral index (BIS), general anesthesia, laryngeal mask airway, qCON/qNOX

Procedia PDF Downloads 73
86 Diagnostic Yield of CT PA and Value of Pre Test Assessments in Predicting the Probability of Pulmonary Embolism

Authors: Shanza Akram, Sameen Toor, Heba Harb Abu Alkass, Zainab Abdulsalam Altaha, Sara Taha Abdulla, Saleem Imran

Abstract:

Acute pulmonary embolism (PE) is a common disease and can be fatal. The clinical presentation is variable and nonspecific, making accurate diagnosis difficult. Testing of patients with suspected acute PE has increased dramatically. However, the overuse of some tests, particularly CT and D-dimer measurement, may not improve care while potentially leading to patient harm and unnecessary expense. CTPA is the investigation of choice for PE. Its easy availability, accuracy and ability to provide an alternative diagnosis have lowered the threshold for performing it, resulting in its overuse. Guidelines have recommended the use of clinical pretest probability tools such as the ‘Wells score’ to assess the risk of suspected PE. Unfortunately, implementation of guidelines in clinical practice is inconsistent. This has led to low-risk patients being subjected to unnecessary imaging, exposure to radiation and possible contrast-related complications. Aim: To study the diagnostic yield of CTPA and the clinical pretest probability of patients according to the Wells score, and to determine whether or not there was overuse of CTPA in our service. Methods: CT scans done on patients with suspected PE in our hospital from 1st January 2014 to 31st December 2014 were retrospectively reviewed. Medical records were reviewed to study demographics, clinical presentation and final diagnosis, and to establish whether the Wells score and D-dimer were used correctly in predicting the probability of PE and the need for subsequent CTPA. Results: 100 patients (51 male) underwent CTPA in the time period. Mean age was 57 years (24-91 years). The majority of patients presented with shortness of breath (52%). Other presenting symptoms included chest pain 34%, palpitations 6%, collapse 5% and haemoptysis 5%. A D-dimer test was done in 69%. The overall Wells score was low (<2) in 28%, moderate (2-6) in 47% and high (>6) in 15% of patients. The Wells score was documented in the medical notes of only 20% of patients. 
PE was confirmed in 12% (8 male) of patients. 4 had bilateral PEs. In the high-risk group (Wells >6) (n=15), there were 5 diagnosed PEs. In the moderate-risk group (Wells 2-6) (n=47), there were 6, and in the low-risk group (Wells <2) (n=28), one case of PE was confirmed. CT scans negative for PE showed pleural effusion in 30, consolidation in 20, atelectasis in 15 and a pulmonary nodule in 4 patients. 31 scans were completely normal. Conclusion: The yield of CT for pulmonary embolism was low in our cohort at 12%. A significant number of our patients who underwent CTPA had a low Wells score. This suggests that CTPA is overutilized in our institution. The Wells score was poorly documented in medical notes. CTPA was able to detect alternative pulmonary abnormalities explaining the patients' clinical presentation. CTPA requires concomitant pretest clinical probability assessment to be an effective diagnostic tool for confirming or excluding PE. Clinicians should use validated clinical prediction rules to estimate pretest probability in patients in whom acute PE is being considered. Combining Wells scores with clinical and laboratory assessment may reduce the need for CTPA.
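The pretest tool discussed above can be sketched as code. The weights below are the standard Wells criteria for PE, and the three-tier cut-offs are those reported in this study (<2 low, 2-6 moderate, >6 high).

```python
def wells_score(signs_of_dvt, pe_most_likely, hr_over_100, immobilization,
                previous_dvt_pe, haemoptysis, malignancy):
    """Wells criteria for PE with the standard weights:
    clinical signs of DVT (3), PE the most likely diagnosis (3),
    heart rate > 100 bpm (1.5), immobilization or recent surgery (1.5),
    previous DVT/PE (1.5), haemoptysis (1), malignancy (1)."""
    return (3.0 * signs_of_dvt + 3.0 * pe_most_likely
            + 1.5 * hr_over_100 + 1.5 * immobilization
            + 1.5 * previous_dvt_pe
            + 1.0 * haemoptysis + 1.0 * malignancy)

def risk_category(score):
    """Three-tier stratification used in the study above."""
    if score < 2:
        return "low"
    if score <= 6:
        return "moderate"
    return "high"

# example: clinical signs of DVT, PE most likely, tachycardia
score = wells_score(True, True, True, False, False, False, False)  # 7.5
```

Routinely computing and documenting this score before requesting imaging is precisely the practice the abstract finds lacking (documented in only 20% of notes).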

Keywords: CT PA, D dimer, pulmonary embolism, wells score

Procedia PDF Downloads 196
85 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review

Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni

Abstract:

Water used in agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim could be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment: assessing plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), monitoring changes, and mapping irrigated areas. Calculating thresholds of soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining Remote Sensing (RS), the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning Algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This paper is a review of about 100 recent research studies, structured to analyze varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral band to optimize classification techniques, and to analyze Proximal Optical Sensors (POSs) for monitoring changes. 
The innovation of this paper can be defined as categorizing evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea of this research is then to analyze the sources and magnitudes of error in employing different approaches in the three proposed parts, as reported by recent studies. Additionally, as an overall conclusion, the different approaches are decomposed into optimizing indices, calibration methods for the sensors, thresholding and prediction models prone to errors, and improvements in classification accuracy for mapping changes.
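The threshold-based scheduling logic reviewed above ("the right quantity at the right time", triggered by soil moisture falling below a crop stress threshold) can be sketched minimally; all parameter values below are invented for illustration.

```python
def irrigation_depth(theta, theta_fc, theta_threshold, root_zone_mm):
    """Trigger irrigation when volumetric soil moisture (theta, m^3/m^3)
    falls below a crop-specific stress threshold; refill the root zone
    to field capacity (theta_fc). Returns the required water depth in mm
    (0.0 means no irrigation needed)."""
    if theta >= theta_threshold:
        return 0.0
    return (theta_fc - theta) * root_zone_mm

# invented values: field capacity 0.30, stress threshold 0.18,
# 400 mm effective root zone, current sensor reading 0.15
depth = irrigation_depth(0.15, 0.30, 0.18, 400)
```

In an automated system this check would run per zone on each sensor update, with RS- or IoT-derived moisture estimates substituted for the invented reading.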

Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing

Procedia PDF Downloads 48
84 Correlation of Unsuited and Suited 5ᵗʰ Female Hybrid III Anthropometric Test Device Model under Multi-Axial Simulated Orion Abort and Landing Conditions

Authors: Christian J. Kennett, Mark A. Baldwin

Abstract:

As several companies are working towards returning American astronauts back to space on US-made spacecraft, NASA developed a human flight certification-by-test and analysis approach due to the cost-prohibitive nature of extensive testing. This process relies heavily on the quality of analytical models to accurately predict crew injury potential specific to each spacecraft and under dynamic environments not tested. As the prime contractor on the Orion spacecraft, Lockheed Martin was tasked with quantifying the correlation of analytical anthropometric test devices (ATDs), also known as crash test dummies, against test measurements under representative impact conditions. Multiple dynamic impact sled tests were conducted to characterize Hybrid III 5th ATD lumbar, head, and neck responses with and without a modified shuttle-era advanced crew escape suit (ACES) under simulated Orion landing and abort conditions. Each ATD was restrained via a 5-point harness in a mockup Orion seat fixed to a dynamic impact sled at the Wright Patterson Air Force Base (WPAFB) Biodynamics Laboratory in the horizontal impact accelerator (HIA). ATDs were subject to multiple impact magnitudes, half-sine pulse rise times, and XZ - ‘eyeballs out/down’ or Z-axis ‘eyeballs down’ orientations for landing or an X-axis ‘eyeballs in’ orientation for abort. Several helmet constraint devices were evaluated during suited testing. Unique finite element models (FEMs) were developed of the unsuited and suited sled test configurations using an analytical 5th ATD model developed by LSTC (Livermore, CA) and deformable representations of the seat, suit, helmet constraint countermeasures, and body restraints. Explicit FE analyses were conducted using the non-linear solver LS-DYNA. 
Head linear and rotational acceleration, head rotational velocity, upper neck force and moment, and lumbar force time histories were compared between test and analysis using the enhanced error assessment of response time histories (EEARTH) composite score index. The EEARTH rating paired with the correlation and analysis (CORA) corridor rating provided a composite ISO score that was used to assess model correlation accuracy. NASA occupant protection subject matter experts established an ISO score of 0.5 or greater as the minimum expectation for correlating analytical and experimental ATD responses. Unsuited 5th ATD head X, Z, and resultant linear accelerations, head Y rotational accelerations and velocities, neck X and Z forces, and lumbar Z forces all showed consistent ISO scores above 0.5 in the XZ impact orientation, regardless of peak g-level or rise time. Upper neck Y moments were near or above the 0.5 score for most of the XZ cases. Similar trends were found in the XZ and Z-axis suited tests despite the addition of several different countermeasures for restraining the helmet. For the X-axis ‘eyeballs in’ loading direction, only resultant head linear acceleration and lumbar Z-axis force produced ISO scores above 0.5 whether unsuited or suited. The analytical LSTC 5th ATD model showed good correlation across multiple head, neck, and lumbar responses in both the unsuited and suited configurations when loaded in the XZ ‘eyeballs out/down’ direction. Upper neck moments were consistently the most difficult to predict, regardless of impact direction or test configuration.

Keywords: impact biomechanics, manned spaceflight, model correlation, multi-axial loading

Procedia PDF Downloads 92
83 Management of Non-Revenue Municipal Water

Authors: Habib Muhammetoglu, I. Ethem Karadirek, Selami Kara, Ayse Muhammetoglu

Abstract:

The problem of non-revenue water (NRW) from municipal water distribution networks is common in many countries such as Turkey, where the average yearly water losses are around 50%. Water losses can be divided into two major types, namely: 1) real or physical water losses, and 2) apparent or commercial water losses. Total water losses in Antalya city, Turkey are around 45%. Methods: A research study was conducted to develop appropriate methodologies to reduce NRW. A pilot study area of about 60,000 inhabitants was chosen for the study. The pilot study area has a supervisory control and data acquisition (SCADA) system for the monitoring and control of many water quantity and quality parameters at the groundwater drinking wells, pumping stations, distribution reservoirs, and along the water mains. The pilot study area was divided into 18 District Metered Areas (DMAs) whose number of service connections ranged from a few to less than 3000. The flow rate and water pressure to each DMA were continuously measured on-line by accurate flow and water pressure meters connected to the SCADA system. Customer water meters were installed for all billed and unbilled water users. The monthly water consumption given by the water meters was recorded regularly. A water balance was carried out for each DMA using the well-known standard IWA approach. There were considerable variations in the water loss percentages and the components of the water losses among the DMAs of the pilot study area. Old Class B customer water meters in one DMA were replaced by more accurate new Class C water meters. Hydraulic modelling using the US-EPA EPANET model was carried out in the pilot study area for the prediction of water pressure variations at each DMA. The data sets required to calibrate and verify the hydraulic model were supplied by the SCADA system. It was noticed that a number of the DMAs exhibited high water pressure values.
Therefore, pressure reducing valves (PRVs) with constant head were installed to reduce the pressure to a suitable level determined by the hydraulic model. On the other hand, the hydraulic model revealed that the water pressure at the other DMAs could not be reduced while still complying with the minimum pressure requirement (3 bars) stated by the related standards. Results: Physical water losses were reduced considerably as a result of merely reducing water pressure. Further physical water loss reduction was achieved by applying acoustic methods. The results of the water balances helped in identifying the DMAs with considerable physical losses. Many bursts were detected, especially in the DMAs with high physical water losses. The SCADA system was very useful for assessing the efficiency of this method and checking the quality of repairs. Regarding apparent water loss reduction, replacing the customer water meters increased water revenue by more than 20%. Conclusions: DMAs, SCADA, hydraulic modelling, pressure management, leakage detection, and accurate customer water meters are efficient tools for NRW reduction.
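The standard IWA top-down balance the study applies to each DMA can be sketched as follows. Volumes and the split into components are illustrative placeholders, not the pilot-area data; real (physical) losses fall out as the residual once authorized consumption and apparent losses are accounted for.

```python
def iwa_water_balance(system_input, billed_authorized, unbilled_authorized,
                      apparent_losses):
    """Simplified top-down IWA water balance for one DMA.

    All volumes in m^3 per period. Variable names condense the full IWA
    terminology; real (physical) losses are obtained as the residual.
    """
    revenue_water = billed_authorized
    non_revenue_water = system_input - revenue_water
    # Total losses = input minus all authorized consumption (billed + unbilled).
    water_losses = system_input - (billed_authorized + unbilled_authorized)
    real_losses = water_losses - apparent_losses
    return {
        "non_revenue_water": non_revenue_water,
        "water_losses": water_losses,
        "real_losses": real_losses,
        "nrw_percent": 100.0 * non_revenue_water / system_input,
    }
```

For example, a DMA supplied with 1000 m^3 that bills 550 m^3 has 45% NRW, matching the order of magnitude reported for Antalya.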

Keywords: NRW, water losses, pressure management, SCADA, apparent water losses, urban water distribution networks

Procedia PDF Downloads 371
82 42CrMo4 Steel Flow Behavior Characterization for High Temperature Closed Dies Hot Forging in Automotive Components Applications

Authors: O. Bilbao, I. Loizaga, F. A. Girot, A. Torregaray

Abstract:

The current energy situation and the high competitiveness in industrial sectors such as the automotive industry have made the development of new manufacturing processes with lower energy and raw material consumption a real necessity. As a consequence, new forming processes related to high temperature hot forging in closed dies have emerged in recent years as solutions to expand the possibilities of hot forging and iron casting in the automotive industry. These technologies are mid-way between hot forging and semi-solid metal processes, working at temperatures higher than in hot forging but below the solidus temperature or the semi-solid range, where no liquid phase is expected. This represents an advantage compared with semi-solid forming processes such as thixoforging, because such high temperatures do not need to be reached in the case of high-melting-point alloys such as steels, reducing the manufacturing costs and the difficulties associated with their semi-solid processing. Compared with hot forging, these technologies allow the production of parts with as-forged properties and more complex and near-net shapes (thinner sidewalls), enhancing the possibility of designing lightweight components. From the process viewpoint, the forging forces are significantly decreased, and significant reductions in raw material, energy consumption, and forging steps have been demonstrated. Despite the mentioned advantages, from the material behavior point of view, the expansion of these technologies has shown the necessity of developing new material flow behavior models in the process working temperature range to make the simulation or the prediction of these new forming processes feasible. Moreover, knowledge of the material flow behavior in the working temperature range also allows the design of the new closed-die concepts required.
In this work, the flow behavior in the mentioned temperature range of 42CrMo4 steel, widely used in commercial automotive components, has been characterized. For that, hot compression tests have been carried out in a thermomechanical tester over a temperature range that covers the material behavior from hot forging up to the NDT (Nil Ductility Temperature) temperature (1250 ºC, 1275 ºC, 1300 ºC, 1325 ºC, 1350 ºC, and 1375 ºC). As for the strain rates, three different orders of magnitude have been considered (0.1 s-1, 1 s-1, and 10 s-1). The results obtained from the hot compression tests have then been treated in order to adapt or rewrite the Spittel model, widely used in commercial automotive software such as FORGE®, whose existing models are restricted to temperatures up to 1250 ºC. Finally, the new flow behavior model has been validated by simulating the process for a commercial automotive component and comparing the simulation results with experimental tests already performed in a laboratory cell of the new technology. In conclusion, a new flow behavior model for 42CrMo4 steel in the new working temperature range has been obtained, and its application to the process simulation of a commercial automotive component will be shown.
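The Spittel (Hensel-Spittel) flow stress law referred to in the abstract is commonly written as sigma = A·exp(m1·T)·eps^m2·epsdot^m3·exp(m4/eps). A minimal sketch of evaluating it is shown below; the coefficient values are placeholders for illustration, not the fitted 42CrMo4 constants from the study.

```python
import math

def hensel_spittel_stress(strain, strain_rate, temp_c,
                          A=3000.0, m1=-0.0025, m2=0.15, m3=0.12, m4=-0.05):
    """Hensel-Spittel flow stress (MPa), the functional form behind the
    'Spittel model' used in forging software.

    sigma = A * exp(m1*T) * eps^m2 * epsdot^m3 * exp(m4/eps)

    Coefficients here are illustrative placeholders only; `strain` must
    be > 0 because of the exp(m4/eps) term.
    """
    return (A * math.exp(m1 * temp_c)
            * strain ** m2
            * strain_rate ** m3
            * math.exp(m4 / strain))
```

With m1 < 0 and m3 > 0 the model reproduces the expected trends: the material softens as temperature rises and hardens as strain rate increases, which is exactly what the fitted high-temperature extension must capture between 1250 ºC and 1375 ºC.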

Keywords: 42CrMo4 high temperature flow behavior, high temperature hot forging in closed dies, simulation of automotive commercial components, Spittel flow behavior model

Procedia PDF Downloads 98
81 Feasibility and Acceptability of an Emergency Department Digital Pain Self-Management Intervention: A Randomized Controlled Trial Pilot Study

Authors: Alexandria Carey, Angela Starkweather, Ann Horgas, Hwayoung Cho, Jason Beneciuk

Abstract:

Background/Significance: Over 3.4 million acute axial low back pain (aLBP) cases are treated annually in United States (US) emergency departments (EDs). ED patients with aLBP receive varying verbal and written discharge routine care (RC), leading to ineffective patient self-management. Ineffective self-management increases the risk of transition to chronic low back pain (cLBP), a chief cause of worldwide disability, with associated costs >$60 million annually. This research addresses this significant problem by evaluating an ED digital pain self-management intervention (EDPSI) focused on improving self-management through improved knowledge retention, skills, and self-efficacy (confidence) (KSC), thus reducing the aLBP to cLBP transition in ED patients discharged with aLBP. The research has significant potential to increase self-efficacy, one of the most potent mechanisms of behavior change, and to improve health outcomes. Focusing on accessibility and usability, the intervention may reduce discharge disparities in aLBP self-management, especially for patients with low health literacy. Study Questions: This research will answer the following questions: 1) Will an EDPSI focused on improving KSC improve patient self-management behaviors and health status? 2) Is the EDPSI sustainable in improving pain severity, interference, and pain recurrence? 3) Will an EDPSI reduce the aLBP to cLBP transition in patients discharged with aLBP? Aims: The pilot randomized controlled trial (RCT) assesses the effects of a 12-week digital self-management discharge tool in patients with aLBP. We aim to 1) primarily assess the feasibility (recruitment, enrollment, and retention), acceptability, and sustainability of the EDPSI for participants' pain self-management; 2) determine the effectiveness and sustainability of the EDPSI on pain severity/interference among participants; 3) explore patient preferences, health literacy, and changes among participants experiencing the transition to cLBP.
We anticipate that the EDPSI will increase the likelihood of achieving self-management milestones and significantly improve pain-related symptoms in aLBP. Methods: The study uses a two-group pilot RCT to enroll 30 individuals who have been seen in the ED with aLBP. Participants are randomized into RC (n=15) or RC + EDPSI (n=15) and receive follow-up surveys for 12 weeks post-intervention. The EDPSI content focuses on 1) discharge education; 2) self-management treatment options; 3) actor demonstrations of ergonomics, range-of-motion movements, safety, and sleep; 4) complementary and alternative medicine (CAM) options including acupuncture, yoga, and Pilates; 5) combination therapies including thermal application, spinal manipulation, and PT treatments. The intervention group receives booster sessions via Zoom in weeks two and eight to assess and reinforce knowledge retention of the techniques and to provide return demonstrations reinforcing ergonomics. Outcome Measures: All participants are followed for 12 weeks, assessing pain severity/interference using the Brief Pain Inventory short form (BPI-sf), self-management (measuring KSC) using the short 13-item Patient Activation Measure (PAM), and self-efficacy using the Pain Self-Efficacy Questionnaire (PSEQ) at weeks 1, 6, and 12. Feasibility is measured by recruitment, enrollment, and retention percentages. Acceptability and education satisfaction are measured using the Education-Preference and Satisfaction Questionnaire (EPSQ) post-intervention. Self-management sustainment is measured using the PSEQ, the PAM, and a patient satisfaction and healthcare utilization (PSHU) questionnaire requesting overall satisfaction, additional healthcare utilization, and pain management related to continued back pain or complications post-injury.

Keywords: digital, pain self-management, education, tool

Procedia PDF Downloads 19
80 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule their appointments and to manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate the appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study has been obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. Also, publicly available information on doctors' characteristics such as gender and experience has been extracted from online sources. This research develops three popular ML models (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance.
The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the current approach of experience-based appointment duration estimation adopted by the clinics resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forests (14.71%). This research also identified the critical variables affecting consultation duration to be patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights are obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
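The MAPE metric by which the models above are compared is straightforward to compute; a minimal sketch follows, with the durations in the usage note being invented toy values rather than the clinics' EMR data.

```python
def mape(actual, predicted):
    """Mean absolute percentage error (in %), the comparison metric
    used for the consultation-duration models: the average of
    |actual - predicted| / |actual| across appointments, times 100.
    """
    if len(actual) != len(predicted) or not actual:
        raise ValueError("inputs must be non-empty and equal length")
    return 100.0 / len(actual) * sum(
        abs(a - p) / abs(a) for a, p in zip(actual, predicted))
```

A model predicting [18, 33, 40] minutes against true durations [20, 30, 40] scores a MAPE of about 6.7%; lower is better, which is how the 12.24% deep-learning result beats the 25.8% experience-based baseline.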

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 95
79 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete

Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml

Abstract:

Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints affecting reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in current design codes, for example, DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs or beams show a crack spacing that is oriented to the transverse reinforcement bars or to the stirrups. In most Finite Element Analysis studies, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack at the element level cannot be seen. The crack propagation in concrete is a discontinuous process characterized by different factors such as the initial random distribution of defects or the scatter of material properties. Such behavior presupposes the elaboration of adequate models and methods of simulation, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with modelling the initiation and propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending action. Therefore, a parameter study was carried out to investigate: (I) the influence of the transverse reinforcement on the stress distribution in concrete in bending mode, and (II) the crack initiation as a function of the diameter and spacing of the transverse reinforcement.
The numerical investigations on the crack initiation and propagation were carried out with a 2D reinforced concrete structure subjected to quasi-static loading and given boundary conditions. To model the uncertainty in the tensile strength of concrete in the Finite Element Analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods to generate random fields, e.g. the Covariance Matrix Decomposition Method. For all computations, a plastic constitutive law with softening was used to model the crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions were validated against experimental studies on R/C panels which were carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. Also, a recommendation for the parameters of the random field for realistically modelling the uncertainty of the tensile strength is given. The aim of this research was to show a method in which the localization of strains and cracks, as well as the influence of transverse reinforcement on the crack initiation and propagation, can be seen in Finite Element Analysis.
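The Covariance Matrix Decomposition Method mentioned above can be sketched in a few lines: assemble the covariance matrix from a kernel, factor it (here via Cholesky), and colour white noise with the factor. The exponential kernel and all parameter values are illustrative assumptions, not the ones recommended in the paper; a lognormal field is obtained by exponentiating the Gaussian one.

```python
import numpy as np

def gaussian_random_field_1d(x, sigma=1.0, corr_length=0.5, rng=None):
    """Correlated Gaussian random field via covariance matrix decomposition.

    Builds C_ij = sigma^2 * exp(-|x_i - x_j| / corr_length) (an assumed
    exponential kernel), Cholesky-factors it, and colours standard normal
    noise. np.exp() of the result gives a lognormal field.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])
    cov = sigma ** 2 * np.exp(-dist / corr_length)
    # Small jitter keeps the matrix numerically positive definite.
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))
    return L @ rng.standard_normal(len(x))
```

Shorter correlation lengths produce rougher strength fields, which is why the computed crack-spacing and crack-width distributions depend so strongly on the field parameters.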

Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic

Procedia PDF Downloads 120
78 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming

Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter

Abstract:

High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, where e.g. scratch resistant or high gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to the heat transfer to the cooled mold the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and trial-and-error procedures. Repeated mold design and testing cycles are however both time- and cost-intensive. It is, therefore, desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g. temperature levels, non-uniform heating or timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors and electrical and optical paths, which are far more sensitive to deformation than the polymers. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but different strain fields may be created by varying the orientation of the film with respect to the mold. 
The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for engineers to choose from when developing a new part. The effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We, therefore, propose a class of models for compressible hyperelasticity, which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that the models are determined from a single uniaxial tensile test for isotropic materials, or two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high-pressure forming of an orthotropic polymer film is finally done using an orthotropic formulation of the hyperelastic model.
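To make the notion of an invariant-based anisotropic model concrete, the sketch below computes the two invariants most often used for transverse isotropy (I1 from the right Cauchy-Green tensor, I4 as the squared stretch along the preferred direction) and evaluates a deliberately simple toy energy. The paper's actual model is semi-numerical and more general; the energy form W = c1*(I1-3) + c4*(I4-1)^2 and the constants here are textbook-style assumptions for illustration only.

```python
import numpy as np

def invariants(F, fiber_dir):
    """I1 and I4 for deformation gradient F and preferred direction a."""
    C = F.T @ F                       # right Cauchy-Green tensor
    a = np.asarray(fiber_dir, dtype=float)
    a = a / np.linalg.norm(a)
    I1 = np.trace(C)
    I4 = a @ C @ a                    # squared stretch along the fiber
    return I1, I4

def strain_energy(F, fiber_dir, c1=1.0, c4=5.0):
    """Toy transversely isotropic energy: W = c1*(I1-3) + c4*(I4-1)^2."""
    I1, I4 = invariants(F, fiber_dir)
    return c1 * (I1 - 3.0) + c4 * (I4 - 1.0) ** 2
```

The anisotropy shows up immediately: stretching the same amount along the preferred direction stores more energy than stretching across it, which is why film orientation relative to the mold changes the resulting strain field.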

Keywords: hyperelastic, anisotropic, polymer film, thermoforming

Procedia PDF Downloads 597
77 The Effects of Goal Setting and Feedback on Inhibitory Performance

Authors: Mami Miyasaka, Kaichi Yanaoka

Abstract:

Attention Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, hyperactivity, and impulsivity; symptoms often manifest during childhood. In children with ADHD, the development of inhibitory processes is impaired. Inhibitory control allows people to avoid processing unnecessary stimuli and to behave appropriately in various situations; thus, people with ADHD require interventions to improve inhibitory control. Positive or negative reinforcements (i.e., reward or punishment) help improve the performance of children with such difficulties. However, in order to optimize impact, reward and punishment must be presented immediately following the relevant behavior. In regular elementary school classrooms, such supports are uncommon; hence, an alternative practical intervention method is required. One potential intervention involves setting goals to keep children motivated to perform tasks. This study examined whether goal setting improved inhibitory performances, especially for children with severe ADHD-related symptoms. We also focused on giving feedback on children's task performances. We expected that giving children feedback would help them set reasonable goals and monitor their performance. Feedback can be especially effective for children with severe ADHD-related symptoms because they have difficulty monitoring their own performance, perceiving their errors, and correcting their behavior. Our prediction was that goal setting by itself would be effective for children with mild ADHD-related symptoms, and goal setting based on feedback would be effective for children with severe ADHD-related symptoms. Japanese elementary school children and their parents were the sample for this study. Children performed two kinds of go/no-go tasks, and parents completed a checklist about their children's ADHD symptoms, the ADHD Rating Scale-IV, and the Conners 3rd edition. 
The go/no-go task is a cognitive task that measures inhibitory performance. Children were asked to press a key on the keyboard when a particular symbol appeared on the screen (go stimulus) and to refrain from doing so when another symbol was displayed (no-go stimulus). Errors in response to a no-go stimulus indicate inhibitory impairment. To examine the effect of goal setting on inhibitory control, 37 children (Mage = 9.49 ± 0.51) were required to set a performance goal, and 34 children (Mage = 9.44 ± 0.50) were not. Further, to manipulate the presence of feedback, in one go/no-go task no information about children’s scores was provided, while in the other, scores were revealed. The results revealed a significant interaction between goal setting and feedback. However, the three-way interaction between ADHD-related inattention, feedback, and goal setting was not significant. These results indicated that goal setting was effective for improving go/no-go task performance only with feedback, regardless of ADHD severity. Furthermore, we found an interaction between ADHD-related inattention and feedback, indicating that informing inattentive children of their scores made them unexpectedly more impulsive. Taken together, giving feedback alone was, unexpectedly, too demanding for children with severe ADHD-related symptoms, but the combination of goal setting with feedback was effective for improving their inhibitory control. We discuss effective interventions for children with ADHD from the perspective of goal setting and feedback. This work was supported by the 14th Hakuho Research Grant for Child Education of the Hakuho Foundation.
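The scoring of a go/no-go run reduces to two error rates: commission errors (responding to a no-go stimulus, the inhibitory failures of interest here) and omission errors (missing a go stimulus). The sketch below assumes a trial format of (stimulus, responded) pairs; this format and the function are illustrative, not the study's actual scoring code.

```python
def gonogo_scores(trials):
    """Commission/omission error rates for a go/no-go run.

    `trials` is a list of (stimulus, responded) pairs with stimulus in
    {"go", "nogo"} and responded as 1 (key pressed) or 0 (withheld).
    Commission rate = responses to no-go stimuli (inhibitory failures);
    omission rate = missed go stimuli.
    """
    go = [r for s, r in trials if s == "go"]
    nogo = [r for s, r in trials if s == "nogo"]
    commission = sum(nogo) / len(nogo) if nogo else 0.0
    omission = 1.0 - (sum(go) / len(go) if go else 1.0)
    return {"commission_rate": commission, "omission_rate": omission}
```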

Keywords: attention deficit disorder with hyperactivity, feedback, goal-setting, go/no-go task, inhibitory control

Procedia PDF Downloads 83
76 Identification of Hub Genes in the Development of Atherosclerosis

Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that can obstruct blood flow and trigger various cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease’s molecular mechanisms. Through the analysis of microarray data, we examined gene expression in media and neo-intima from plaques, as well as distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green, and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphologies), a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2509 key genes and the 102 DEGs with lipid-related genes from the GeneCards database.
The discriminative power of the six hub genes was estimated with a robust classifier that achieved an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated a clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
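The hub-gene selection pipeline described above is, at its core, a thresholded filter followed by a three-way set intersection. A minimal sketch is shown below; the gene names and statistics in the test are invented toy values, not the study's data.

```python
def select_key_genes(gene_stats, gs_min=0.2, mm_min=0.8):
    """Apply the WGCNA thresholds quoted in the abstract:
    gene significance > 0.2 and module membership > 0.8.

    `gene_stats` maps gene symbol -> (gene_significance, module_membership).
    """
    return {g for g, (gs, mm) in gene_stats.items()
            if gs > gs_min and mm > mm_min}

def hub_genes(key_genes, degs, lipid_genes):
    """Hub genes = key genes that are also DEGs and lipid-related,
    mirroring the three-way intersection in the analysis."""
    return sorted(set(key_genes) & set(degs) & set(lipid_genes))
```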

Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics

Procedia PDF Downloads 36
75 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia

Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger

Abstract:

Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by usually innocuous stimuli. TN has a low prevalence of less than 0.1%; 80% to 90% of cases are caused by compression of the trigeminal nerve by an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), causing dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated according to the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatments. There is discordance between neurosurgeons and radiologists in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI). To the best of our knowledge, the prognostic impact for MVD of this difference of interpretation has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the proposed scoring systems by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's and a radiologist's interpretation of NVC on MRI. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients' hospital records and the neurosurgeon's correspondence from perioperative clinic reviews.
Patient demographics, type of TN, distribution of TN, response to carbamazepine, and the neurosurgeon's and radiologist's interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. Scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over 1 year. Categorical data were analysed using Pearson chi-square testing; independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score of greater than 3 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 1.81 (95% CI 1.41-2.61, p=0.032). The composite score using the neurosurgeon's impression of NVC had an OR of 2.96 (95% CI 2.28-3.31, p=0.048). A Hardaway composite score of greater than 2 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 3.41 (95% CI 2.58-4.37, p=0.028). The composite score using the neurosurgeon's impression of NVC had an OR of 3.96 (95% CI 3.01-4.65, p=0.042). Conclusion: The composite scores developed by Panczykowski and Hardaway were validated for the prediction of response to MVD in TN. A composite score based on the neurosurgeon's interpretation of NVC on MRI had a greater correlation with pain-free outcomes 1 year post-MVD than one based on the radiologist's.
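The odds ratios reported above come from exponentiating logistic-regression coefficients, with Wald confidence intervals from the coefficient's standard error. The helper below illustrates only that conversion; the coefficient and standard error in the test are invented values, not the study's fitted model.

```python
import math

def odds_ratio(beta, se, z=1.96):
    """Convert a logistic-regression coefficient `beta` and its standard
    error `se` into an odds ratio with a ~95% Wald confidence interval,
    the (OR, (lower, upper)) form in which such results are reported.
    """
    return (math.exp(beta),
            (math.exp(beta - z * se), math.exp(beta + z * se)))
```

For instance, a coefficient of ln(2) with a standard error of 0.1 gives OR = 2.0 with a CI of roughly (1.64, 2.43); a CI excluding 1.0 corresponds to a statistically significant association, as with the ORs quoted in the abstract.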

Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia

74 Comparison of Incidence and Risk Factors of Early Onset and Late Onset Preeclampsia: A Population Based Cohort Study

Authors: Sadia Munir, Diana White, Aya Albahri, Pratiwi Hastania, Eltahir Mohamed, Mahmood Khan, Fathima Mohamed, Ayat Kadhi, Haila Saleem

Abstract:

Preeclampsia is a major complication of pregnancy, and its prediction and management remain a challenge for obstetricians. To our knowledge, no major progress has been achieved in its prevention or early detection, and little is known about a clear treatment pathway for this disorder. Preeclampsia puts both mother and baby at risk of several short-term and long-term health problems later in life, and preeclampsia and its complications impose a substantial cost burden on the health care system. Preeclampsia is divided into two types: early onset preeclampsia develops before 34 weeks of gestation, and late onset develops at or after 34 weeks of gestation. The two forms differ in their associated genetic and environmental factors, prognosis, heritability, and biochemical and clinical features. The prevalence of preeclampsia varies greatly across the world and depends on the ethnicity of the population and the geographic region. To the best of the authors' knowledge, no data on preeclampsia have been published for Qatar. In this study, we report the incidence of preeclampsia in Qatar; the purpose of the study is to compare the incidence and risk factors of early onset and late onset preeclampsia there. This retrospective longitudinal cohort study was conducted using data from the hospital records of the Women's Hospital, Hamad Medical Corporation (HMC), from May 2014 to May 2016. The data collection tool, approved by HMC, was a researcher-made extraction sheet that captured information such as blood pressure on admission, sociodemographic characteristics, delivery mode, and newborn details. A total of 1929 patient files were identified by hospital information management using preeclampsia diagnosis codes.
Of the 1929 files, 878 had significant gestational hypertension without proteinuria, 365 had preeclampsia, 364 had severe preeclampsia, and 188 had preexisting hypertension with superimposed proteinuria. In this study, 78% of the data were obtained from the hospital electronic system (Cerner) and the remaining 22% from paper patient records. We performed detailed data extraction on 560 files. Initial data analysis revealed that 15.02% of pregnancies were complicated by preeclampsia between May 2014 and May 2016. We analysed how the two disease entities differ in ethnicity, maternal age, severity of hypertension, mode of delivery, and infant birth weight, and we identified promising differences in the risk factors of early onset and late onset preeclampsia. These clinical findings will contribute to knowledge about the two disease entities, their etiology, and their similarities and differences. The findings of this study can also be used to predict health challenges, improve the health care system, set up guidelines, and provide the best care for women suffering from preeclampsia.
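The reported category counts can be cross-checked with a short tally; note that the four diagnostic categories sum to 1795, so 134 of the 1929 coded files fall outside these four groups.

```python
# Diagnostic category counts as reported for the 1929 coded files
counts = {
    "gestational hypertension without proteinuria": 878,
    "preeclampsia": 365,
    "severe preeclampsia": 364,
    "preexisting hypertension with superimposed proteinuria": 188,
}

total = sum(counts.values())  # files falling in the four categories
# Percentage share of each category among those files
shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(total, shares)
```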

Keywords: preeclampsia, incidence, risk factors, maternal
