Search results for: multiple group analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 35195


28475 Applying Hybrid Graph Drawing and Clustering Methods on Stock Investment Analysis

Authors: Mouataz Zreika, Maria Estela Varua

Abstract:

Stock investment decisions are often made based on current events in the global economy and the analysis of historical data. Visual representation could help investors gain a deeper understanding of and better insight into stock market trends more efficiently. The trend analysis is based on long-term data collection. The study adopts a hybrid method that combines a clustering algorithm and a force-directed algorithm to overcome the scalability problem when visualizing large data sets. This method reveals the potential relationships between stocks and determines their degree of strength and connectivity, giving investors another perspective on stock relationships for reference. Information derived from the visualization will also help them make informed decisions. The results of the experiments show that the proposed method is able to produce aesthetically pleasing visualizations that provide clearer views of connectivity and edge weights.
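As an illustrative sketch only (not the authors' implementation), the hybrid idea of clustering a stock-correlation graph and then laying it out with a force-directed algorithm can be expressed compactly; the tickers, correlation weights, and threshold below are invented:

```python
import math
import random

def cluster(nodes, edges, threshold=0.6):
    """Connected components of the subgraph whose edges have weight >= threshold."""
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    for a, b, w in edges:
        if w >= threshold:
            parent[find(a)] = find(b)
    groups = {}
    for n in nodes:
        groups.setdefault(find(n), []).append(n)
    return list(groups.values())

def force_layout(nodes, edges, iters=200, seed=1):
    """Minimal Fruchterman-Reingold-style layout: all pairs repel,
    correlated stocks attract in proportion to edge weight."""
    rnd = random.Random(seed)
    pos = {n: [rnd.uniform(-1, 1), rnd.uniform(-1, 1)] for n in nodes}
    k = 1.0 / math.sqrt(len(nodes))            # ideal node spacing
    for step in range(iters):
        disp = {n: [0.0, 0.0] for n in nodes}
        for i, a in enumerate(nodes):          # repulsion between all pairs
            for b in nodes[i + 1:]:
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[a][0] += f * dx / d; disp[a][1] += f * dy / d
                disp[b][0] -= f * dx / d; disp[b][1] -= f * dy / d
        for a, b, w in edges:                  # attraction along weighted edges
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = w * d * d / k
            disp[a][0] -= f * dx / d; disp[a][1] -= f * dy / d
            disp[b][0] += f * dx / d; disp[b][1] += f * dy / d
        t = 0.1 * (1.0 - step / iters)         # cooling schedule
        for n in nodes:
            dx, dy = disp[n]
            d = math.hypot(dx, dy) or 1e-9
            pos[n][0] += dx / d * min(d, t)
            pos[n][1] += dy / d * min(d, t)
    return pos

stocks = ["AAA", "BBB", "CCC", "DDD", "EEE"]   # hypothetical tickers
corr_edges = [("AAA", "BBB", 0.9), ("AAA", "CCC", 0.8),
              ("BBB", "CCC", 0.7), ("DDD", "EEE", 0.85),
              ("CCC", "DDD", 0.2)]             # made-up pairwise correlations
clusters = cluster(stocks, corr_edges)
layout = force_layout(stocks, corr_edges)
```

Clusters come from connected components over strongly correlated pairs, while the layout pulls correlated stocks together and pushes all others apart, so cluster membership and edge strength both become visible.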

Keywords: clustering, force-directed, graph drawing, stock investment analysis

Procedia PDF Downloads 290
28474 Railway Accidents: Using the Global Railway Accident Database and Evaluation for Risk Analysis

Authors: Mathias Linden, André Schneider, Harald F. O. von Korflesch

Abstract:

The risk of train accidents is an ongoing concern for railway organizations, governments, insurance companies and other dependent sectors. Safety technologies are installed to reduce and prevent the potential damage of train accidents. Since the safety budgets of railway organizations are limited, it is necessary not only to achieve high availability and a high safety standard but also to be cost-effective. Therefore, an economic assessment of safety technologies is fundamental to an accurate risk analysis. In order to conduct an economic assessment of a railway safety technology and to quantify the costs of accident causes, the Global Railway Accident Database & Evaluation (GRADE) has been developed. The aim of this paper is to describe the structure of this accident database and to show how it can be used for risk analyses. A number of risk analysis methods, such as the probabilistic safety assessment method (PSA), were used to demonstrate the database's different possibilities for risk analysis. In conclusion, it can be noted that these analyses would not be as accurate without GRADE, as the information gathered in the accident database was not previously available in this form. Our findings are relevant for railway operators, safety technology suppliers, insurers, governments and other concerned railway organizations.

Keywords: accident causes, accident costs, accident database, global railway accident database & evaluation, GRADE, probabilistic safety assessment, PSA, railway accidents, risk analysis

Procedia PDF Downloads 344
28473 Study of Hypertension at Sohag City: Upper Egypt Experience

Authors: Aly Kassem, Eman Sapet, Eman Abdelbaset, Hosam Mahmoud

Abstract:

Objective: Hypertension is an important public health challenge, being one of the most common diseases affecting humans worldwide. Our aim is to study the clinical characteristics, therapeutic regimens, treatment compliance, and risk factors in a sector of hypertensive patients in Sohag City. Subjects and Methods: A cross-sectional study conducted in Sohag City involving 520 patients: males (45.7%) and females (54.3%), aged 35-85 years. BP measurements, BMI, blood glucose, serum creatinine, urine analysis, serum lipids, blood picture and ECG were done for all the studied patients. Results: Hypertension was more prevalent among non-smokers (72.55%), females (54.3%), educated patients (50.99%) and patients with low SES (54.9%). CAD was present in 51.63% of patients, while laboratory investigations showed hyperglycaemia in 28.7%, anemia in 18.3%, high serum creatinine in 8.49% and proteinuria in 10.45% of patients. Adequate BP control was achieved in 49.67%; older patients had lower adequacy of BP control in spite of the extensive use of multiple-drug therapy. Most hypertensive patients had more than one coexistent CV risk factor. Aging, female sex (54.3%), DM (32.3%), family history of hypertension (28.7%), family history of CAD (25.4%), and obesity (10%) were the common contributing risk factors. ACE inhibitors were prescribed for 58.16% and beta-blockers for 34.64% of the patients. Monotherapy was prescribed for 41.17% of the patients, and 75.81% of patients used their drug regimens regularly. Only 49.67% of patients had their condition under control; the number of drugs was inversely related to BP control. Conclusion: Hypertensive patients in Sohag City had a profile of high CV risk and poor blood pressure control, particularly in the elderly. A multidisciplinary approach with routine clinical check-ups, follow-up, physician and patient training, prescribing simple once-daily regimens and encouraging lifestyle modifications is recommended.

Keywords: anti hypertensives, hypertension, elderly patients, risk factors, treatment compliance

Procedia PDF Downloads 285
28472 The Power of in situ Characterization Techniques in Heterogeneous Catalysis: A Case Study of Deacon Reaction

Authors: Ramzi Farra, Detre Teschner, Marc Willinger, Robert Schlögl

Abstract:

Introduction: The conventional approach of characterizing solid catalysts under static conditions, i.e., before and after reaction, does not provide sufficient knowledge of the physicochemical processes occurring under dynamic conditions at the molecular level. Hence, developing new in situ characterization techniques that can be used under real catalytic reaction conditions is highly desirable. In situ Prompt Gamma Activation Analysis (PGAA) is a rapidly developing chemical analytical technique that enables us to experimentally assess the coverage of surface species under catalytic turnover and correlate it with reactivity. The catalytic HCl oxidation (Deacon reaction) over bulk ceria serves as our example. Furthermore, in situ Transmission Electron Microscopy is a powerful technique that can contribute to the study of atmosphere- and temperature-induced morphological or compositional changes of a catalyst at atomic resolution. The application of such techniques (PGAA and TEM) will pave the way to a deeper understanding of the dynamic nature of active catalysts. Experimental/Methodology: In situ PGAA experiments were carried out to determine the Cl uptake and the degree of surface chlorination under reaction conditions by varying p(O2), p(HCl), p(Cl2), and the reaction temperature. The abundance and dynamic evolution of OH groups on the working catalyst under various steady-state conditions were studied by means of in situ FTIR with a specially designed homemade transmission cell. For real in situ TEM we use a commercial in situ holder with a home-built gas feeding system and gas analytics. Conclusions: Two complementary in situ techniques, namely in situ PGAA and in situ FTIR, were utilized to investigate the surface coverage of the two most abundant species (Cl and OH).
The OH density and Cl uptake were followed under multiple steady-state conditions as a function of p(O2), p(HCl), p(Cl2), and temperature. These experiments have shown that the OH density correlates positively with the reactivity, whereas the Cl coverage correlates negatively. Increasing p(HCl) gives rise to increased activity accompanied by an increase in Cl coverage (the opposite trend to p(O2) and T). Cl2 strongly inhibits the reaction, but no measurable increase in Cl uptake was found. Considering all of these observations, we conclude that only a minority of the available adsorption sites contribute to the reactivity. In addition, a mechanism for the catalysed reaction is proposed: the chlorine-oxygen competition for the available active sites renders re-oxidation the rate-determining step. Further investigations using in situ TEM are planned and will be conducted in the near future. Such experiments allow us to monitor active catalysts at the atomic scale under the most realistic conditions of temperature and pressure. The talk will shed light on the potential and limitations of in situ PGAA and in situ TEM in the study of catalyst dynamics.

Keywords: CeO2, deacon process, in situ PGAA, in situ TEM, in situ FTIR

Procedia PDF Downloads 272
28471 Metabolic Pathway Analysis of Microbes using the Artificial Bee Colony Algorithm

Authors: Serena Gomez, Raeesa Tanseen, Netra Shaligram, Nithin Francis, Sandesh B. J.

Abstract:

The human gut hosts a community of microbes that has substantial effects on human health and disease. Metabolic modeling can help to predict the relative populations of stable microbes and their effect on health and disease. In order to study and visualize microbes in the human gut, we developed a tool that offers the following modules: a module to perform Flux Balance Analysis for microbes in the human gut using the Artificial Bee Colony optimization algorithm, and a module to run simulations for an individual microbe under different conditions, such as aerobic and anaerobic, and visualize the results of these simulations.
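The abstract gives no implementation details, but the Artificial Bee Colony algorithm it names can be sketched generically as follows. The toy quadratic objective merely stands in for an FBA objective (real FBA optimizes flux through a stoichiometric network, typically with a dedicated solver), and all parameter values are illustrative:

```python
import random

def abc_minimize(f, bounds, n_bees=15, limit=20, iters=150, seed=0):
    """Minimal Artificial Bee Colony optimizer with the three standard
    phases: employed bees, onlooker bees, and scouts."""
    rnd = random.Random(seed)
    dim = len(bounds)

    def rand_sol():
        return [rnd.uniform(lo, hi) for lo, hi in bounds]

    def try_move(i):
        # Perturb one coordinate of source i toward a random partner source.
        j = rnd.randrange(dim)
        k = rnd.choice([p for p in range(n_bees) if p != i])
        cand = sols[i][:]
        cand[j] += rnd.uniform(-1, 1) * (sols[i][j] - sols[k][j])
        lo, hi = bounds[j]
        cand[j] = min(max(cand[j], lo), hi)
        if f(cand) < f(sols[i]):          # greedy selection
            sols[i] = cand
            trials[i] = 0
        else:
            trials[i] += 1

    sols = [rand_sol() for _ in range(n_bees)]
    trials = [0] * n_bees
    best = min(sols, key=f)[:]
    for _ in range(iters):
        for i in range(n_bees):            # employed-bee phase
            try_move(i)
        fits = [1.0 / (1.0 + f(s)) for s in sols]
        total = sum(fits)
        for _ in range(n_bees):            # onlooker phase: favor fitter sources
            r, acc, pick = rnd.uniform(0, total), 0.0, 0
            for idx, ft in enumerate(fits):
                acc += ft
                if acc >= r:
                    pick = idx
                    break
            try_move(pick)
        for i in range(n_bees):            # scout phase: re-seed stale sources
            if trials[i] > limit:
                sols[i] = rand_sol()
                trials[i] = 0
        cur = min(sols, key=f)
        if f(cur) < f(best):               # keep the best-so-far solution
            best = cur[:]
    return best

# Stand-in objective: squared distance of a 3-D "flux vector" from (1, 1, 1).
best = abc_minimize(lambda x: sum((xi - 1.0) ** 2 for xi in x),
                    bounds=[(-5.0, 5.0)] * 3)
```

The three phases mirror the standard ABC scheme: employed bees refine their own food sources, onlookers resample in proportion to fitness, and scouts replace sources that have stopped improving.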

Keywords: microbes, metabolic modeling, flux balance analysis, artificial bee colony

Procedia PDF Downloads 83
28470 Sociolinguistic and Critical Discourse Analysis of Nigerian Proverbs: The Differences between the Representation of the Genders

Authors: Crescentia Ugwuona

Abstract:

Considering the importance of proverbs in socio-cultural life and socialization in any given society, it is important for people to understand the hidden meanings that proverbs may convey. So far, there has been hardly any systematic research on the representation of the different genders in Nigeria. Although there are writings on the representation of women in Nigerian proverbs, they are based on the writers' introspection, and investigators often overlook the representation of men in proverbs. This study therefore explores, from the perspectives of sociolinguistics and critical discourse analysis (CDA), how the genders (men and women) are represented in Nigerian proverbs, with particular reference to the Igbo, with the aim of uncovering the hidden gender inequalities within them. The analysis reveals that Igbo proverbs consistently perpetuate an ideology of gender inequality: proverbs about men depict male achievement, power, bravery, and supremacy, while proverbs about women connote submission to cultural and traditional female domestic roles, chastity, lesser competence, and subjugation. The study alerts us to how gendered language in proverbs can reflect, create, and sustain gender inequality in societies, and contributes to education aimed at gender equality, the emancipatory use of appropriate language in proverbs, respect for human rights, and the need to develop strategies for addressing the problem.

Keywords: critical discourse analysis, gender representation, gender stereotypes, Igbo-Nigerian, sociolinguistics analysis, proverbs

Procedia PDF Downloads 257
28469 Sentiment Analysis of Social Media Comments with Bidirectional Long Short-Term Memory, Gated Recurrent Unit and GloVe Model in Portuguese

Authors: Leonardo Alfredo Mendoza, Cristian Munoz, Marco Aurelio Pacheco, Manoela Kohler, Evelyn Batista, Rodrigo Moura

Abstract:

Natural Language Processing (NLP) techniques are increasingly powerful at interpreting people's feelings about and reactions to a product or service. Sentiment analysis has become a fundamental tool for this interpretation but has few applications in languages other than English. This paper presents a sentiment analysis classification in Portuguese based on a corpus of social network comments in Portuguese. A word-embedding representation was used with a 50-dimension pre-trained GloVe model generated from a corpus entirely in Portuguese. For the classification, bidirectional Long Short-Term Memory (LSTM) and bidirectional Gated Recurrent Unit (GRU) models were used, reaching results of 99.1%.
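To make the architecture concrete, here is a minimal, framework-free sketch (not the authors' code) of a bidirectional GRU encoder over 50-dimension embeddings like the GloVe vectors described; the weights and the input sequence are random stand-ins:

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU cell step: update gate z, reset gate r, candidate state h~."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(x @ Wz + h @ Uz)                    # update gate
    r = sig(x @ Wr + h @ Ur)                    # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)    # candidate hidden state
    return (1 - z) * h + z * h_tilde

def bi_gru(seq, hidden, params_f, params_b):
    """Run a GRU over the sequence forward and backward, then concatenate
    the two final states -- the representation fed to a classifier head."""
    hf = np.zeros(hidden)
    for x in seq:                               # forward direction
        hf = gru_step(x, hf, *params_f)
    hb = np.zeros(hidden)
    for x in seq[::-1]:                         # backward direction
        hb = gru_step(x, hb, *params_b)
    return np.concatenate([hf, hb])

rng = np.random.default_rng(0)
emb_dim, hidden, seq_len = 50, 8, 6             # 50 matches the GloVe dimension
make = lambda: tuple(rng.normal(scale=0.1, size=s)
                     for s in [(emb_dim, hidden), (hidden, hidden)] * 3)
seq = rng.normal(size=(seq_len, emb_dim))       # stand-in for embedded tokens
feats = bi_gru(seq, hidden, make(), make())     # shape: (2 * hidden,)
```

In the paper's setting, `seq` would hold the GloVe vectors of a comment's tokens and `feats` would feed a sigmoid or softmax classifier; a bidirectional LSTM differs only in the cell equations.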

Keywords: natural language processing, sentiment analysis, bidirectional long short-term memory, BI-LSTM, gated recurrent unit, GRU

Procedia PDF Downloads 138
28468 Magnetofluidics for Mass Transfer and Mixing Enhancement in a Micro Scale Device

Authors: Majid Hejazian, Nam-Trung Nguyen

Abstract:

Over the past few years, microfluidic devices have attracted significant attention from industry and academia due to advantages such as small sample volume, low cost and high efficiency. Microfluidic devices have applications in chemical, biological and industrial analysis and can facilitate assays of biomaterials, chemical reactions, separation, and sensing. Micromixers are one of the important microfluidic concepts. Micromixers can work as stand-alone devices or be integrated in a more complex microfluidic system such as a lab on a chip (LOC). Micromixers are categorized as passive or active. Passive micromixers rely only on the arrangement of the phases to be mixed, contain no moving parts and require no energy. Active micromixers require external fields such as pressure, temperature, electric and acoustic fields. Rapid and efficient mixing is important for many applications such as biological, chemical and biochemical analysis. Achieving fast and homogeneous mixing of multiple samples in microfluidic devices has been studied and discussed in the literature recently. Improvements in mixing rely on effective mass transport at the microscale but are currently limited by molecular diffusion due to the predominantly laminar flow at this size scale. Using a magnetic field to enhance mass transport is an effective solution for mixing enhancement in microfluidics. The use of a non-uniform magnetic field to improve mass transfer performance in a microfluidic device is demonstrated in this work. The phenomenon of mixing ferrofluid and DI-water streams has been reported before, but mass transfer enhancement for other, non-magnetic species through a magnetic field has not been studied and evaluated extensively. In the present work, permanent magnets were used in a simple microfluidic device to create a non-uniform magnetic field.
Two streams are introduced into the microchannel: one contains fluorescent dye mixed with diluted ferrofluid to induce enhanced mass transport of the dye, and the other is a non-magnetic DI-water stream. Mass transport enhancement of the fluorescent dye is evaluated using fluorescence measurement techniques, and the concentration field is measured for different flow rates. Under the magnetic field, a body force is exerted on the paramagnetic stream and expands the ferrofluid stream into the non-magnetic DI-water flow. The experimental results demonstrate that without a magnetic field, both the magnetic nanoparticles of the ferrofluid and the fluorescent dye rely solely on molecular diffusion to spread. The non-uniform magnetic field created by the permanent magnets around the microchannel, together with the diluted ferrofluid, can improve the mass transport of non-magnetic solutes in a microfluidic device. The susceptibility mismatch between the fluids results in a magnetoconvective secondary flow towards the magnets, which in turn enhances the mass transport of the non-magnetic fluorescent dye. A significant enhancement in mass transport of the fluorescent dye was observed. The platform presented here could be used as a microfluidics-based micromixer for chemical and biological applications.

Keywords: ferrofluid, mass transfer, micromixer, microfluidics, magnetic

Procedia PDF Downloads 208
28467 Civilization and Violence: Islam, the West, and the Rest

Authors: Imbesat Daudi

Abstract:

One of the most discussed topics of the last century is whether Islamic civilization is violent. Many Western intellectuals have promoted the notion that it is. Citing 9/11, in which 3,000 civilians were killed, they argue that Muslims are prone to violence because Islam promotes violence; Muslims reject this notion as nonsense. The topic has not been properly addressed. First, the violence of civilizations cannot be proven by citing religious texts, which have nevertheless been used in discussions of civilizational violence. Second, the question of whether Muslims are violent is inappropriate, as it carries an implicit bias suggesting that Islamic civilization is violent; a proper question is which civilization is more violent. Third, that Islamic civilization is indeed violent can only be established if more war-related casualties can be documented within its borders than within those of its counterparts, which has never been done. Finally, the violent behavior of Muslim countries can be examined by comparing their acts of violence with those of groups of nations belonging to other civilizations, using appropriate parameters of violence. Therefore, parameters reflecting group violence were defined; violent conflicts of the various civilizations of the last two centuries were documented, quantified by number of conflicts and number of victims, and compared with each other following established principles of statistics. The results show that whereas 80% of genocides and massacres were conducted by Western nations, less than 5% of acts of violence were committed by Muslim countries. Furthermore, the West has the highest incidence (new) and prevalence (new and old) of violent conflicts among all groups of nations. The result is unambiguous and statistically significant.
Becoming informed requires the methodical collection of relevant data, objective analysis of those data, and unbiased presentation of the information, a process this paper follows.

Keywords: Islam and violence, demonization of Muslims, violence and the West, comparison of civilizational violence

Procedia PDF Downloads 39
28466 Time of Death Determination in Medicolegal Death Investigations

Authors: Michelle Rippy

Abstract:

Medicolegal death investigation has historically received little research attention or advancement, as all of its subjects are deceased. Public health threats, drug epidemics and contagious diseases are typically recognized in decedents first, and thorough, accurate death investigations can assist epidemiological research and prevention programs. One vital component of medicolegal death investigation is determining the decedent's time of death. An accurate time of death can help corroborate alibis, determine the sequence of deaths in multiple-casualty circumstances and provide vital facts in civil cases. Popular television portrays an unrealistic forensic ability to give the time of death to the minute for someone found deceased with no witnesses present; in actuality, an unattended decedent's time of death can generally only be narrowed to a 4-6 hour window. In the mid- to late-20th century, death investigators took invasive liver temperatures to determine the decedent's core temperature, which was entered into an equation to approximate the time of death. Due to many inconsistencies in the placement of the thermometer and other variables, the accuracy of liver temperatures was discredited and this once commonplace practice lost scientific support. Currently, medicolegal death investigators rely on three major post-mortem changes at a death scene. Many factors enter the subjective determination of the time of death, including the cooling of the decedent, stiffness of the muscles, internal settling of blood, clothing, ambient temperature, disease and recent exercise. Current research is using non-invasive, hospital-grade tympanic thermometers to measure the temperature in each of the decedent's ears. This tool can be used at the scene and, in conjunction with scene indicators, may provide a more accurate time of death.
The research is significant to investigations, bringing accuracy to a historically imprecise determination and thereby improving criminal and civil death investigations. Its goal is to provide a scientific basis for unwitnessed death determinations, instead of the art that the determination currently is. The research is in progress, with expected completion in December 2018. There are currently 15 completed case studies recording vital information including the ambient temperature; the decedent's height, weight, sex and age; layers of clothing; found position; whether medical intervention occurred; and whether the death was witnessed. These data will be analyzed across the variables studied and will be available for presentation in January 2019.

Keywords: algor mortis, forensic pathology, investigations, medicolegal, time of death, tympanic

Procedia PDF Downloads 104
28465 Place of Surgery in the Treatment of Painful Lumbar Degenerative Disc Disease

Authors: Ghoul Rachid Brahim

Abstract:

Introduction: Back pain is a real public health problem with a significant socio-economic impact. It is the consequence of degeneration of the lumbar intervertebral disc (IVD). This often asymptomatic pathology is compatible with an active life, and once it becomes symptomatic, conservative treatment is recommended in the majority of cases. When physical or functional disability resists well-monitored conservative treatment, a surgical alternative is justified, which requires careful consideration of the objectives to be achieved. Objective: To evaluate the indication for and the short- and medium-term contribution of surgery in the management of painful lumbar degenerative disc disease, and to demonstrate the effectiveness of surgical treatment in its management. Materials and Methods: This is a prospective, descriptive, single-center study without a comparison group, comprising a series of 104 patients suffering from painful lumbar degenerative disc disease treated surgically, with retrospective analysis of data collected prospectively. Pre- and postoperative clinical status were compared using pain self-assessment scores and measures of the impact on quality of life (at 3, 6 and 12 months). Results: This study showed that patients who received surgical treatment had great improvements in symptoms, function and several health-related quality-of-life measures in the first year after surgery. Conclusions: The surgery had a significantly positive impact on patients' pain, disability and quality of life. Overall, 97% of the patients were satisfied.

Keywords: degenerative disc disease, intervertebral disc, health-related quality of life, painful lumbar disc disease

Procedia PDF Downloads 90
28464 Algorithm for Automatic Real-Time Electrooculographic Artifact Correction

Authors: Norman Sinnigen, Igor Izyurov, Marina Krylova, Hamidreza Jamalabadi, Sarah Alizadeh, Martin Walter

Abstract:

Background: EEG is a non-invasive brain activity recording technique with a high temporal resolution that allows real-time applications, such as neurofeedback. However, EEG data are susceptible to electrooculographic (EOG) and electromyographic (EMG) artifacts (i.e., jaw clenching, teeth squeezing and forehead movements). Due to their non-stationary nature, these artifacts greatly obscure the information and power spectrum of EEG signals. Many EEG artifact correction methods are too time-consuming even when applied to low-density EEG and have focused on offline processing or on handling a single type of EEG artifact. A software-only real-time method for correcting multiple types of artifact in high-density EEG remains a significant challenge. Methods: We demonstrate an improved approach for automatic real-time correction of EOG and EMG artifacts. The method was tested on three healthy subjects using 64 EEG channels (Brain Products GmbH) and a sampling rate of 1,000 Hz. Captured EEG signals were imported into MATLAB via the Lab Streaming Layer interface, which allows buffering of EEG data. EMG artifacts were detected by channel variance and adaptive thresholding and corrected using channel interpolation. Real-time independent component analysis (ICA) was applied to correct EOG artifacts. Results: Our results demonstrate that the algorithm effectively reduces EMG artifacts, such as jaw clenching, teeth squeezing and forehead movements, and EOG artifacts (horizontal and vertical eye movements) in high-density EEG while preserving information about brain neuronal activity. The average computation time for EOG and EMG artifact correction of 80 s (80,000 data points) of 64-channel data is 300-700 ms, depending on the convergence of ICA and the type and intensity of the artifact.
Conclusion: An automatic EEG artifact correction algorithm based on channel variance, adaptive thresholding, and ICA improves high-density EEG recordings contaminated with EOG and EMG artifacts in real time.
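A toy sketch of the variance/adaptive-threshold part of such a pipeline (the ICA stage and the authors' actual MATLAB implementation are not reproduced here): channels whose variance is a robust outlier are treated as EMG-contaminated and rebuilt from neighbors. The synthetic data, threshold value, and neighbor map are all assumptions:

```python
import numpy as np

def correct_emg(eeg, neighbors, z_thresh=3.0):
    """Flag channels whose variance is an outlier (adaptive threshold via a
    robust z-score on channel variances) and replace them with the mean of
    their spatial neighbors -- a crude stand-in for spline interpolation."""
    var = eeg.var(axis=1)                        # per-channel variance
    med = np.median(var)
    mad = np.median(np.abs(var - med)) or 1e-12  # median absolute deviation
    z = 0.6745 * (var - med) / mad               # robust z-score
    bad = np.where(z > z_thresh)[0]
    cleaned = eeg.copy()
    for ch in bad:                               # rebuild from neighbor average
        cleaned[ch] = eeg[neighbors[ch]].mean(axis=0)
    return cleaned, bad

rng = np.random.default_rng(42)
eeg = rng.normal(scale=1.0, size=(8, 1000))      # 8 channels, 1 s at 1 kHz
eeg[3] += rng.normal(scale=12.0, size=1000)      # simulated jaw-clench burst
neighbors = {ch: [c for c in range(8) if c != ch] for ch in range(8)}
cleaned, bad = correct_emg(eeg, neighbors)
```

The robust z-score keeps the threshold adaptive to each data buffer, which is the property the abstract relies on for non-stationary artifacts.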

Keywords: EEG, muscle artifacts, ocular artifacts, real-time artifact correction, real-time ICA

Procedia PDF Downloads 155
28463 Multiphysics Coupling Between Hypersonic Reactive Flow and Thermal Structural Analysis with Ablation for TPS of Space Launchers

Authors: Margarita Dufresne

Abstract:

This study is devoted to the development of a TPS for small reusable space launchers, using the SIRIUS design for the S1 prototype. Multiphysics coupling of the hypersonic reactive flow with the thermo-structural analysis, with and without ablation, is provided by STAR-CCM+ with COMSOL Multiphysics and by FASTRAN with ACE+. Flow around hypersonic flight vehicles involves the interaction of multiple shocks and the interaction of shocks with boundary layers; these interactions can have a very strong impact on the aeroheating experienced by the flight vehicle. A real-gas treatment accounts for the gas being in equilibrium or non-equilibrium. The Mach number ranges from 5 to 10 for first-stage flight. The goals of this effort are to validate the iterative coupling of the hypersonic physics models in STAR-CCM+ and FASTRAN with COMSOL Multiphysics and ACE+. COMSOL Multiphysics and ACE+ are used for the thermal structural analysis to simulate conjugate heat transfer, with conduction, free convection and radiation, driven by the heat flux from the hypersonic flow. The reactive simulations involve an air chemistry model of five species: N, N2, NO, O and O2. Seventeen chemical reactions, involving the calculation of dissociation and recombination probabilities, are included in the Dunn/Kang mechanism. Forward reaction rate coefficients based on a modified Arrhenius equation are computed for each reaction. The reactive equations are solved with a second-order numerical scheme obtained by a MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) extrapolation process in the structured case, with the coupled inviscid flux computed by AUSM+ flux-vector splitting. The MUSCL third-order scheme in STAR-CCM+ provides third-order spatial accuracy, except in the vicinity of strong shocks, where, due to limiting, the spatial accuracy is reduced to second order, and provides reduced dissipation compared to the second-order discretization scheme.
The initial unstructured mesh is refined using an initial pressure-gradient technique for the shock/shock interaction test case. The turbulence models suggested by NASA are k-omega SST with a1 = 0.355 and the QCR (quadratic) constitutive option. k and omega are specified explicitly in the initial conditions and in regions: k = 1e-6 * Uinf^2 and omega = 5 * Uinf / (mean aerodynamic chord or characteristic length). We put into practice modelling tips for hypersonic flow, such as the automatic coupled solver, adaptive mesh refinement to capture and refine the shock front, the advancing-layer mesher, and a larger prism-layer thickness to capture the shock front on blunt surfaces. The temperature ranges from 300 K to 30,000 K and the pressure from 1e-4 to 100 atm. FASTRAN and ACE+ are coupled to provide a high-fidelity solution for the hot hypersonic reactive flow with conjugate heat transfer. The results of both approaches agree with the CIRCA wind tunnel results.
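The modified Arrhenius form used for the forward rate coefficients can be written compactly; the coefficients below are order-of-magnitude placeholders for an O2-dissociation-style reaction, not the actual Dunn/Kang values:

```python
import math

def arrhenius(A, n, Ta, T):
    """Modified Arrhenius forward rate coefficient:
    k_f = A * T**n * exp(-Ta / T), with Ta = Ea / R the activation
    temperature in kelvin."""
    return A * T ** n * math.exp(-Ta / T)

# Illustrative parameters only (NOT the Dunn/Kang mechanism values):
A, n, Ta = 2.0e21, -1.5, 59500.0
rates = {T: arrhenius(A, n, Ta, T) for T in (2000, 5000, 10000)}
```

Despite the negative temperature exponent, the exponential factor dominates, so dissociation rates grow steeply across the 300 K to 30,000 K range the study covers.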

Keywords: hypersonic, first stage, high speed compressible flow, shock wave, aerodynamic heating, conjugate heat transfer, conduction, free convection, radiation, fastran, ace+, comsol multiphysics, star-ccm+, thermal protection system (tps), space launcher, wind tunnel

Procedia PDF Downloads 45
28462 Analysis of Structure-Flow Interaction for Water Brake Mechanism

Authors: Murat Avci, Fatih Kosar, Ismail Yilmaz

Abstract:

In this study, the structure-flow interaction for a water brake mechanism is studied with the Abaqus CEL approach. Water brake mechanisms are used in dynamic systems such as rail-mounted sled systems. To carry out such system tests, the structure-flow interaction should be investigated in detail. This study concerns a sled test of an aircraft subsystem that reaches supersonic speeds thanks to rocket engines. To decelerate or stop the thrusting rocket sleds, water brake mechanisms are used: the water brake provides the deceleration of structures traveling at supersonic speeds, so the structure-flow interaction may damage the mechanism itself. Verifying every design revision with system tests is very costly, so some decisions are taken on the basis of numerical methods. Here, the structure-flow interaction of the water brake mechanism is solved with the Abaqus CEL approach, in which the fluid behavior and the deformation of the structure are modeled at the same time. The analysis results are validated against the dynamic tests: the deformation zones seen in the numerical analysis are also observed in the dynamic tests. Finally, the Johnson-Cook material model parameters used for this analysis are proven, and it is understood that these parameters can be used for dynamic analyses like the water brake mechanism.

Keywords: aircraft, rocket, structure-flow, supersonic

Procedia PDF Downloads 142
28461 Comparison of Deep Brain Stimulation Targets in Parkinson's Disease: A Systematic Review

Authors: Hushyar Azari

Abstract:

Aim and background: Deep brain stimulation (DBS) is regarded as an important therapeutic choice for Parkinson's disease (PD). The two most common targets for DBS are the subthalamic nucleus (STN) and the globus pallidus internus (GPi). This review was conducted to compare the clinical effectiveness of these two targets. Methods: A systematic literature search in the electronic databases Embase, Cochrane Library and PubMed was restricted to English-language publications from 2010 to 2021. Specified MeSH terms were searched in all databases. Studies that evaluated the Unified Parkinson's Disease Rating Scale (UPDRS) III were selected if they met the following criteria: (1) compared both GPi and STN DBS; (2) had at least a three-month follow-up period; (3) had at least five participants in each group; (4) were conducted after 2010. Study quality assessment was performed using the Modified Jadad Scale. Results: 3,577 potentially relevant articles were identified; of these, 3,569 were excluded based on title and abstract, duplication or unsuitability. Eight articles satisfied the inclusion criteria and were scrutinized (458 PD patients). According to the Modified Jadad Scale, the majority of included studies had low evidence quality, which is a limitation of this review. Five studies reported no statistically significant between-group difference in UPDRS III score improvements. At the same time, some results in terms of pain, action tremor, rigidity, and urinary symptoms indicated that STN DBS might be the better choice, while regarding adverse effects, GPi was superior. Conclusion: Larger randomized clinical trials with longer follow-up periods and control groups are needed to decide which target is more efficient for deep brain stimulation in Parkinson's disease and imposes fewer adverse effects on patients. Meanwhile, STN seems the more reasonable choice according to the results of this systematic review.

Keywords: brain stimulation, globus pallidus, Parkinson's disease, subthalamic nucleus

Procedia PDF Downloads 170
28460 Database Management System for Orphanages to Help Track of Orphans

Authors: Srivatsav Sanjay Sridhar, Asvitha Raja, Prathit Kalra, Soni Gupta

Abstract:

A database management system keeps track of details about the people in an organisation. Few orphanages today have shifted to a computer- and program-based system; most keep only pen-and-paper records, which consume space and are not eco-friendly. Viewing a person's record is also a hassle, as one has to search through many physical files, which takes time. The program presented here organises all the data and can retrieve any information about anyone whose details have been entered. Electronic storage is also safer, since physical records degrade over time or, worse, are destroyed by natural disasters. In this developing world, it is only sensible to shift all data to an electronic storage system. The program provides all the essential features, including creating, inserting, searching, deleting, and printing records.
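The abstract describes a C++ program; the create/insert/search/delete operations it lists can be sketched as below. This is an illustrative Python stand-in, and the class and field names (`OrphanRecordDB`, `name`, `age`, `guardian`) are assumptions, not the authors' design.

```python
# Minimal sketch of the record operations the abstract describes.
# Field names and the in-memory dict storage are illustrative assumptions.

class OrphanRecordDB:
    def __init__(self):
        self.records = {}  # keyed by a unique record id

    def insert(self, record_id, name, age, guardian=None):
        if record_id in self.records:
            raise KeyError(f"record {record_id} already exists")
        self.records[record_id] = {"name": name, "age": age, "guardian": guardian}

    def search(self, name):
        # return all records whose name contains the query (case-insensitive)
        q = name.lower()
        return {rid: r for rid, r in self.records.items() if q in r["name"].lower()}

    def delete(self, record_id):
        return self.records.pop(record_id, None)


db = OrphanRecordDB()
db.insert(1, "Asha", 9)
db.insert(2, "Ravi", 11)
print(db.search("asha"))   # finds record 1
db.delete(2)
print(len(db.records))     # 1
```

A real deployment would back this with persistent storage (files or an embedded database) so records survive restarts, which is the core advantage over paper the abstract argues for.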

Keywords: database, orphans, programming, C++

Procedia PDF Downloads 129
28459 Investigating Spatial Disparities in Health Status and Access to Health-Related Interventions among Tribals in Jharkhand

Authors: Parul Suraia, Harshit Sosan Lakra

Abstract:

Indigenous communities represent some of the most marginalized populations globally; in India, where they are referred to as tribals, they experience particularly pronounced marginalization and a concerning decline in their numbers. These communities often inhabit geographically challenging regions with low population densities, which poses significant challenges to providing essential infrastructure services. Jharkhand, a Schedule 5 state, is known for its poor health status owing to disparities in access to health care. The primary objective of this study is to investigate the spatial inequalities in healthcare accessibility among tribal populations within the state and to pinpoint critical areas requiring immediate attention. Health indicators were selected based on the tribal perspective and on the association of Sustainable Development Goal 3 (Good Health and Well-being) with the other SDGs. Focus group discussions with tribal people and tribal experts were conducted to finalize the indicators. Employing Principal Component Analysis, two essential indices were constructed: the Tribal Health Index (THI) and the Tribal Health Intervention Index (THII). Index values were calculated from district-wise secondary data for Jharkhand. The bivariate spatial association technique Moran's I was used to assess the spatial pattern of the variables and to determine whether there is clustering (positive spatial autocorrelation) or dispersion (negative spatial autocorrelation) of values across Jharkhand. The results help target policy interventions to deprived areas of Jharkhand.
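The two computational steps described above can be sketched briefly: a composite index from the first principal component of district-level indicators, followed by Moran's I to test for spatial clustering. The data, district count, and adjacency matrix below are synthetic stand-ins, not the study's.

```python
import numpy as np

# Synthetic stand-in: 24 districts x 5 health indicators.
rng = np.random.default_rng(0)
X = rng.normal(size=(24, 5))

# --- Principal Component Analysis (via SVD of the standardized matrix) ---
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
index = Z @ Vt[0]                            # PC1 scores = composite index

# --- Moran's I for spatial autocorrelation ---
def morans_i(values, W):
    z = values - values.mean()
    n = len(values)
    return n * (z @ W @ z) / (W.sum() * (z @ z))

W = rng.integers(0, 2, size=(24, 24)).astype(float)   # synthetic adjacency
np.fill_diagonal(W, 0)
W = np.maximum(W, W.T)                                # symmetrize

I = morans_i(index, W)
print(round(I, 3))   # values near -1/(n-1) suggest no spatial autocorrelation
```

Positive I indicates clustering of similar index values among neighbouring districts; negative I indicates dispersion, matching the interpretation in the abstract.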

Keywords: tribal health, health spatial disparities, health status, Jharkhand

Procedia PDF Downloads 72
28458 Design, Implementation, and Evaluation of ALS-PBL Model in the EMI Classroom

Authors: Yen-Hui Lu

Abstract:

In the past two decades, in order to increase university visibility and internationalization, English as a medium of instruction (EMI) has become one of the main language policies in higher education institutions where English is not a dominant language. However, given the complex, discipline-embedded nature of academic communication, academic literacy does not come with students’ everyday language experience, and it is a challenge for all students. Particularly, to engage students in the effective learning process of discipline concepts in the EMI classrooms, teachers need to provide explicit academic language instruction to assist students in deep understanding of discipline concepts. To bridge the gap between academic language development and discipline learning in the EMI classrooms, the researcher incorporates academic language strategies and key elements of project-based learning (PBL) into an Academic Language Strategy driven PBL (ALS-PBL) model. With clear steps and strategies, the model helps EMI teachers to scaffold students’ academic language development in the EMI classrooms. ALS-PBL model includes three major stages: preparation, implementation, and assessment. First, in the preparation stage, ALS-PBL teachers need to identify learning goals for both content and language learning and to design PBL topics for investigation. Second, during the implementation stage, ALS-PBL teachers use the model as a guideline to create a lesson structure and class routine. There are five important elements in the implementation stage: (1) academic language preparation, (2) connecting background knowledge, (3) comprehensible input, (4) academic language reinforcement, and (5) sustained inquiry and project presentation. Finally, ALS-PBL teachers use formative assessments such as student learning logs, teachers’ feedback, and peer evaluation to collect detailed information that demonstrates students’ academic language development in the learning process. 
In this study, the ALS-PBL model was implemented in an interdisciplinary course entitled “Science is Everywhere”, which was co-taught by five professors from different discipline backgrounds: English education, civil engineering, business administration, international business, and chemical engineering. The purpose of the course was to cultivate students' interdisciplinary knowledge as well as English competency in disciplinary areas. This study used a case-study design to systematically investigate students' learning experiences in the class using the ALS-PBL model. The participants were 22 college students with different majors. This course was one of the elective EMI courses at the focal university; the students enrolled in it to fulfill the school language policy, which requires them to complete two EMI courses before graduation. For credibility, this study used multiple methods to collect data, including classroom observation, teachers' feedback, peer assessment, student learning logs, and student focus-group interviews. Research findings show four major successful aspects of implementing the ALS-PBL model in the EMI classroom: (1) clear focus on both content and language learning, (2) meaningful practice in authentic communication, (3) reflective learning in academic language strategies, and (4) collaborative support in content knowledge. This study will be of value to teachers delivering English as well as content lessons to language learners by providing a theoretically sound, practical model for application in the classroom.

Keywords: academic language development, content and language integrated learning, english as a medium of instruction, project-based learning

Procedia PDF Downloads 68
28457 Developing a Sustainable Transit Planning Index Using Analytical Hierarchy Process Method for ZEB Implementation in Canada

Authors: Mona Ghafouri-Azar, Sara Diamond, Jeremy Bowes, Grace Yuan, Aimee Burnett, Michelle Wyndham-West, Sara Wagner, Anand Pariyarath

Abstract:

Transportation is the fastest-growing source of greenhouse gas emissions worldwide. In Canada, it is responsible for 23% of total CO₂ emissions from fuel combustion, and emissions from the transportation sector are the second-largest source after the oil and gas sector. Currently, most Canadian public transportation systems rely on buses that operate on fossil fuels. Canada is investing billions of dollars to replace diesel buses with electric buses, as this is perceived to have a significant impact on climate mitigation. This paper focuses on the possible impacts of zero emission buses (ZEBs) on sustainable development, considering three dimensions of sustainability: environmental quality, economic growth, and social development. A sustainable transportation system is one that is safe, affordable, accessible, efficient, and resilient, and that contributes minimal emissions of carbon and other pollutants. To enable implementation of these goals, relevant indicators were selected and defined that measure progress towards a sustainable transportation system; these were drawn from Canadian and international examples. Studies compare different European cities in terms of development, sustainability, and infrastructure by using transport performance indicators, and a normalized transport sustainability index measures and compares policies across urban areas and allows fine-tuning of policies. Analysts use a number of methods for sustainability analysis, such as cost-benefit analysis (CBA) to assess economic benefit, life-cycle assessment (LCA) to assess social, economic, and environmental factors and goals, and multi-criteria decision making (MCDM) analysis, which can compare differing stakeholder preferences. A multi-criteria decision making approach is an appropriate methodology to plan and evaluate sustainable transit development and to provide insights and meaningful information for decision makers and transit agencies. It is essential to develop a system that aggregates specific discrete indices to assess the sustainability of transportation systems and prioritizes the indicators appropriate to the different Canadian transit agencies and their preferences and requirements. This study develops an integrating index that allies existing discrete indexes to support a reliable comparison between the current transportation system (diesel buses) and the new ZEB system emerging in Canada. As a first step, the indexes for each category are selected and the index matrix is constructed. Second, the selected indicators are normalized to remove any inconsistency between them. Next, the normalized matrix is weighted based on the relative importance of each index to the main domains of sustainability using the analytical hierarchy process (AHP) method. This is accomplished through expert judgement on the relative importance of different attributes with respect to the goals, expressed as a pairwise comparison matrix. The consideration of multiple environmental, economic, and social factors (including equity and health) is integrated into a sustainable transit planning index (STPI) that supports realistic ZEB implementation in Canada and beyond and is useful to different stakeholders, agencies, and ministries.
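The AHP weighting step described above can be sketched as follows: a pairwise comparison matrix over the three sustainability domains is reduced to priority weights via its principal eigenvector, and the judgements are checked with Saaty's consistency ratio. The comparison values below are illustrative, not the study's expert judgements.

```python
import numpy as np

# Illustrative pairwise comparison matrix over (environment, economy, social).
# A[i, j] = how much more important domain i is than domain j (Saaty scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency check: CI from lambda_max, CR against the random index.
lam_max = eigvals[k].real
n = A.shape[0]
CI = (lam_max - n) / (n - 1)   # consistency index
RI = 0.58                      # Saaty's random index for n = 3
CR = CI / RI                   # CR < 0.10 is conventionally acceptable

print(np.round(w, 3), round(CR, 3))
```

The resulting weight vector is then applied to the normalized indicator matrix to produce the composite STPI score for each transit option.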

Keywords: zero emission buses, sustainability, sustainable transit, transportation, analytical hierarchy process, environment, economy, social

Procedia PDF Downloads 111
28456 A Crossover Study of Therapeutic Equivalence of Generic Product Versus Reference Product of Ivabradine in Patients with Chronic Heart Failure

Authors: Hadeer E. Eliwa, Naglaa S. Bazan, Ebtissam A. Darweesh, Nagwa A. Sabri

Abstract:

Background: Generic substitution of brand ivabradine prescriptions can reduce drug expenditure and improve adherence. However, practitioners' and patients' distrust of generic medicines, stemming from doubts about their quality and fear of counterfeiting, compromises the acceptance of this practice. Aim: The goal of this study is to compare the therapeutic equivalence of the brand product and a generic product of ivabradine in adult patients with chronic heart failure with reduced ejection fraction (≤ 40%) (HFrEF). Methodology: Thirty-two Egyptian patients with HFrEF were treated with branded ivabradine (Procrolan©) and the generic (Bradipect©) over 24 (2×12) weeks. Primary outcomes were resting heart rate (HR), NYHA functional class (FC), quality of life (QoL) assessed with the Minnesota Living with Heart Failure (MLWHF) questionnaire, and ejection fraction (EF). Secondary outcomes were the number of hospitalizations for worsening HFrEF and adverse effects. No washout period was allowed. Findings: At week 12, the reduction in HR was comparable in the two groups (90.13±7.11 to 69±11.41 vs. 96.13±17.58 to 67.31±8.68 bpm in the brand and generic groups, respectively), as was the increase in EF (27.44±4.59 to 33.38±5.62 vs. 32±5.96 to 39.31±8.95). The improvement in NYHA FC was comparable in both groups (87.5% in the brand group vs. 93.8% in the generic group). The mean QoL score improved from 31.63±15.8 to 19.6±14.7 vs. 35.68±17.63 to 22.9±15.1 for the brand and generic groups, respectively. Similarly, at the end of 24 weeks, no significant changes from the week-12 data were observed in HR, EF, QoL, or NYHA FC. Only minor side effects, mainly phosphenes, and a comparable number of hospitalizations were observed in both groups. Conclusion: The study revealed no statistically significant differences in therapeutic effect or safety between generic and branded ivabradine, suggesting that practitioners can safely interchange them for economic reasons.

Keywords: Bradipect©, heart failure, ivabradine, Procrolan©, therapeutic equivalence

Procedia PDF Downloads 448
28455 Design of Traffic Counting Android Application with Database Management System and Its Comparative Analysis with Traditional Counting Methods

Authors: Muhammad Nouman, Fahad Tiwana, Muhammad Irfan, Mohsin Tiwana

Abstract:

Traffic congestion has been increasing significantly in major metropolitan areas as a result of increased motorization, urbanization, population growth, and changes in urban density. Traffic congestion compromises the efficiency of transport infrastructure and causes multiple traffic concerns, including but not limited to increased travel time, safety hazards, air pollution, and fuel consumption. Traffic management has become a serious challenge for federal and provincial governments, as well as for exasperated commuters. Effective, flexible, efficient, and user-friendly traffic information/database management systems characterize traffic conditions by making use of traffic counts for storage, processing, and visualization. While emerging data collection technologies continue to proliferate, their accuracy can be verified through comparison of the observed data with manual handheld counts. This paper presents the design of a tablet-based manual traffic counting application and a framework for the development of a traffic database management system for Pakistan. The database management system comprises three components: a traffic counting Android application, an online database, and its visualization using Google Maps. An Oracle relational database was chosen to develop the data structure, and Structured Query Language (SQL) was adopted to program the system architecture. The GIS application links the data from the database and projects it onto a dynamic map for visualization of traffic conditions. The traffic counting device and the example of a database application in a real-world problem provide a creative outlet to visualize the uses and advantages of a database management system in real time. Traffic counts collected by means of the handheld tablet/mobile application can also be used for transportation planning and forecasting.
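The data structure described above can be sketched with a small relational schema. The study uses Oracle; SQLite is used here only so the SQL is runnable, and the table and column names (and the station name) are illustrative assumptions, not the authors' schema.

```python
import sqlite3

# In-memory stand-in for the traffic-count database.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("""
    CREATE TABLE traffic_counts (
        id INTEGER PRIMARY KEY,
        station TEXT NOT NULL,          -- counting location
        vehicle_class TEXT NOT NULL,    -- car, bus, truck, motorcycle ...
        counted_at TEXT NOT NULL,       -- timestamp from the tablet app
        count INTEGER NOT NULL
    )
""")
rows = [
    ("N5-Lahore", "car", "2024-05-01T08:00", 412),
    ("N5-Lahore", "bus", "2024-05-01T08:00", 37),
    ("N5-Lahore", "car", "2024-05-01T09:00", 390),
]
cur.executemany(
    "INSERT INTO traffic_counts (station, vehicle_class, counted_at, count) "
    "VALUES (?, ?, ?, ?)", rows)

# Hourly totals per station: the kind of aggregate a GIS layer would map.
cur.execute("""
    SELECT station, counted_at, SUM(count)
    FROM traffic_counts
    GROUP BY station, counted_at
    ORDER BY counted_at
""")
print(cur.fetchall())
# [('N5-Lahore', '2024-05-01T08:00', 449), ('N5-Lahore', '2024-05-01T09:00', 390)]
```

Aggregates like these, joined with station coordinates, are what the GIS component would project onto the dynamic map.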

Keywords: manual count, emerging data sources, traffic information quality, traffic surveillance, traffic counting device, android, data visualization, traffic management

Procedia PDF Downloads 178
28454 Employing GIS to Analyze Areas Prone to Flooding: Case Study of Thailand

Authors: Sanpachai Huvanandana, Settapong Malisuwan, Soparwan Tongyuak, Prust Pannachet, Anong Phoepueak, Navneet Madan

Abstract:

Many regions of Thailand are prone to flooding due to the tropical climate, and increasing precipitation in the region raises the risk of flooding. Many measures have been implemented, such as drainage control systems, multiple dams, and irrigation canals. To decide where drainage systems, dams, and canals should be located, the flood-risk areas must first be determined. This paper aims to identify the appropriate features that can be used to classify flood-risk areas in Thailand. Several features have been analyzed and used to classify the areas. Unsupervised clustering techniques were used, and the results were compared with the ten-year average of actual flooded areas.
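The unsupervised clustering step described above can be sketched with a tiny k-means implementation. The two features and their values below (e.g., rainfall and elevation) are synthetic stand-ins; the paper does not specify its feature set or algorithm in this abstract.

```python
import numpy as np

# Synthetic areas: one low-risk cloud (low rain, high ground) and one
# high-risk cloud (heavy rain, low ground), 20 areas each.
rng = np.random.default_rng(1)
low_risk  = rng.normal([40.0, 12.0], 2.0, size=(20, 2))
high_risk = rng.normal([90.0,  2.0], 2.0, size=(20, 2))
X = np.vstack([low_risk, high_risk])

def kmeans(X, k, iters=50):
    # deterministic init: one seed point from each end of the data
    centers = X[[0, len(X) - 1]].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=2)
# areas in the same cluster share a similar flood-risk profile
print(sorted(np.bincount(labels).tolist()))   # [20, 20]
```

In practice each cluster would then be compared against the historical flooded-area record to label it as high- or low-risk.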

Keywords: flood area clustering, geographical information system, flood features

Procedia PDF Downloads 276
28453 Comparative Stem Cells Therapy for Regeneration of Liver Fibrosis

Authors: H. M. Imam, H. M. Rezk, A. F. Tohamy

Abstract:

Background: Human umbilical cord blood (HUCB) is considered a unique source of stem cells; it contains different types of progenitor cells that can differentiate into hepatocytes. Aims: To investigate the potential of human umbilical cord mesenchymal stem cells (hUCMSCs) to repair liver damage in the rat, including recovery from acute damage and repair of the fibrotic liver, using the CCl₄-induced liver damage model. Methods: Rats were injected with 0.5 ml/kg CCl₄ to induce liver damage and progressive liver fibrosis. hUCMSCs were injected through the tail vein at a dose of 1×10⁶ cells/rat 72 hours after CCl₄ injection, without any immunosuppressant. At 6 and 8 weeks after transplantation, blood samples were collected to assess liver function (ALT, AST, GGT, and ALB) and the level of procollagen III as a liver fibrosis marker. In addition, hepatic tissue regeneration was assessed histopathologically and immunohistochemically using anti-human monoclonal antibodies against CD34, CK19, and albumin. Results: Biochemical and histopathological analyses showed significantly increased recovery from liver damage in the transplanted group. In addition, the HUCB stem cells transdifferentiated into functional hepatocytes in rats with hepatic injury, improving liver structure and function. Conclusion: Our findings suggest that transplantation of hUCMSCs may be a novel therapeutic approach for treating liver fibrosis; hUCMSCs are therefore a potential option for the treatment of liver cirrhosis.

Keywords: carbon tetra chloride, liver fibrosis, mesenchymal stem cells, rat

Procedia PDF Downloads 324
28452 Suitability of the Sport Motivation Scale–II for Use in Jr. High Physical Education: A Confirmatory Factor Analysis

Authors: Keven A. Prusak, William F. Christensen, Zack Beddoes

Abstract:

Background: For more than a decade, the Sport Motivation Scale (SMS) has been used to measure contextual motivation across a variety of sport and physical education (PE) settings, but not without criticism as to its tenability. Consequently, a new version of the scale (SMS-II) was created to address certain weaknesses of the original SMS. Purpose: The purpose of this study is to assess the suitability of the SMS-II in the secondary PE setting. Methods: Three hundred and twenty students (204 females and 116 males; grades 7-9) completed the 18-item, six-subscale SMS-II at the end of a required PE class. Factor means, standard deviations, and correlations were calculated and further examined via confirmatory factor analysis (CFA). Model parameters were estimated using the maximum likelihood method. Results: Participants held generally positive perceptions toward PE as a context (more so males than females). Reliability analysis yielded adequate alphas (rα = 0.71 to 0.87, Mα = 0.78), with the exception of the amotivation (AM) subscale (αAM = 0.64). Correlation analysis offered some support for the simplex pattern, but the distal ends of the motivation continuum displayed no relationship. CFA yielded robust fit indices for the proposed structure of the SMS-II for PE. A small but significant variance across genders was noted and discussed. Conclusions: In all, the SMS-II suitably assesses PE context-specific motivational indices within junior high PE.
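The reliability step reported above (subscale alphas) can be sketched for one three-item subscale. The 200 synthetic 7-point responses below are illustrative, not the study's data.

```python
import numpy as np

# Synthetic Likert responses: a latent motivation trait plus item noise,
# clipped to the 1-7 response scale. Three items per SMS-II subscale.
rng = np.random.default_rng(2)
trait = rng.normal(4.0, 1.0, size=200)
items = np.clip(trait[:, None] + rng.normal(0, 0.8, size=(200, 3)), 1, 7)

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

alpha = cronbach_alpha(items)
print(round(alpha, 2))   # typically in the 0.7-0.9 range for this setup
```

Alphas in the 0.71-0.87 range reported in the study indicate that the items of each subscale covary strongly relative to their individual variances.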

Keywords: motivation, self-determination theory, physical education, confirmatory factor analysis

Procedia PDF Downloads 315
28451 Strength Analysis of RCC Dams Subject to the Layer-by-Layer Construction Method

Authors: Archil Motsonelidze, Vitaly Dvalishvili

Abstract:

Existing roller-compacted concrete (RCC) dams indicate that the layer-by-layer construction method yields considerable economies compared with conventional methods. RCC dams have also gained acceptance in regions of high seismic activity. An earthquake resistance analysis of RCC gravity dams based on a nonlinear finite element technique is presented. An elastic-plastic approach is used to describe the material of the dam under static conditions (the period of construction). Seismic force, as an acceleration equivalent to that produced by a real earthquake, is assumed to act once the dam is completed. The materials of the dam and foundation may be nonhomogeneous and anisotropic. The dam-foundation system is idealized as a plane strain problem.

Keywords: finite element method, layer-by-layer construction, RCC dams, strength analysis

Procedia PDF Downloads 535
28450 A Systematic Review and Meta-Analysis of Diabetes Ketoacidosis in Ethiopia

Authors: Addisu Tadesse Sahile, Mussie Wubshet Teka, Solomon Muluken Ayehu

Abstract:

Background: Diabetes is one of the common public health problems of the century and was estimated to affect one-tenth of the world population by the year 2030; diabetic ketoacidosis (DKA) is one of its common acute complications. Objectives: The aim of this review was to assess the magnitude of diabetic ketoacidosis among patients with type 1 diabetes in Ethiopia. Methods: A systematic data search was conducted across Google Scholar, PubMed, Web of Science, and African Journals Online. Two reviewers carried out the selection, reviewing, screening, and extraction of the data independently using a Microsoft Excel spreadsheet. The Joanna Briggs Institute's prevalence critical appraisal tool was used to assess the quality of evidence. All studies conducted in Ethiopia that reported diabetic ketoacidosis rates among type 1 diabetes patients were included. The extracted data were imported into Comprehensive Meta-Analysis version 3.0 for further analysis. Heterogeneity was checked by Higgins' method, whereas publication bias was checked using Begg's and Egger's tests. A random-effects meta-analysis model with a 95% confidence interval was computed to estimate the pooled prevalence. Furthermore, subgroup analysis based on the study area (region) and the sample size was carried out. Result and Conclusion: A total of 51 articles were reviewed, of which 12 fulfilled the inclusion criteria and were included in the meta-analysis. The pooled prevalence of diabetic ketoacidosis among type 1 diabetes patients in Ethiopia was 53.2% (95% CI: 43.1%-63.1%). The highest prevalence of DKA was reported in the Tigray region of Ethiopia, whereas the lowest was reported in the Southern region. Concerned bodies are urged to address the escalating burden of diabetic ketoacidosis in Ethiopia.
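The random-effects pooling step described above can be sketched with the DerSimonian-Laird estimator. The four study prevalences and sample sizes below are illustrative, not the 12 studies in the review, and proportions are pooled on the raw scale for simplicity.

```python
import math

# Illustrative (prevalence, n) pairs - not the review's data.
studies = [(0.48, 120), (0.61, 85), (0.55, 200), (0.40, 60)]

# within-study variance of a proportion: p(1-p)/n
p = [s[0] for s in studies]
v = [s[0] * (1 - s[0]) / s[1] for s in studies]
w = [1 / vi for vi in v]                      # fixed-effect weights

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2
p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
Q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))
df = len(studies) - 1
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)

# random-effects weights, pooled prevalence, and 95% CI
w_re = [1 / (vi + tau2) for vi in v]
p_re = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"pooled = {p_re:.3f}, "
      f"95% CI = ({p_re - 1.96 * se:.3f}, {p_re + 1.96 * se:.3f})")
```

The random-effects weights shrink toward equality as tau² grows, which is why heterogeneous reviews like this one report wider confidence intervals than a fixed-effect pooling would.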

Keywords: DKA, type 1 diabetes, Ethiopia, systematic review, meta-analysis

Procedia PDF Downloads 39
28449 Thermodynamic Analysis and Experimental Study of Agricultural Waste Plasma Processing

Authors: V. E. Messerle, A. B. Ustimenko, O. A. Lavrichshev

Abstract:

A large amount of manure and its irrational use negatively affect the environment. Compared with biomass fermentation, plasma processing of manure makes it possible to intensify the process of obtaining fuel gas, which consists mainly of synthesis gas (CO + H₂), and to increase plant productivity by 150–200 times. This is achieved due to the high temperature in the plasma reactor and a multiple reduction in waste processing time. This paper examines the plasma processing of biomass using the example of dried mixed animal manure (dung with a moisture content of 30%). Characteristic composition of dung, wt.%: Н₂О – 30, С – 29.07, Н – 4.06, О – 32.08, S – 0.26, N – 1.22, P₂O₅ – 0.61, K₂O – 1.47, СаО – 0.86, MgO – 0.37. The thermodynamic code TERRA was used to numerically analyze dung plasma gasification and pyrolysis. Plasma gasification and pyrolysis of dung were analyzed in the temperature range 300–3,000 K at a pressure of 0.1 MPa for the following thermodynamic systems: 100% dung + 25% air (plasma gasification) and 100% dung + 25% nitrogen (plasma pyrolysis). Calculations were conducted to determine the composition of the gas phase, the degree of carbon gasification, and the specific energy consumption of the processes. At an optimum temperature of 1,500 K, which provides complete gasification of dung carbon, the maximum yield of combustible components (99.4 vol.% during dung gasification and 99.5 vol.% during pyrolysis), and decomposition of the toxic compounds furan, dioxin, and benzo(a)pyrene, the following composition of combustible gas was obtained, vol.%: СО – 29.6, Н₂ – 35.6, СО₂ – 5.7, N₂ – 10.6, H₂O – 17.9 (gasification) and СО – 30.2, Н₂ – 38.3, СО₂ – 4.1, N₂ – 13.3, H₂O – 13.6 (pyrolysis). The specific energy consumption of gasification and pyrolysis of dung at 1,500 K is 1.28 and 1.33 kWh/kg, respectively.
An installation with a DC plasma torch with a rated power of 100 kW and a plasma reactor with a dung capacity of 50 kg/h was used for dung processing experiments. The dung was gasified in an air (or nitrogen during pyrolysis) plasma jet, which provided a mass-average temperature in the reactor volume of at least 1,600 K. The organic part of the dung was gasified, and the inorganic part of the waste was melted. For pyrolysis and gasification of dung, the specific energy consumption was 1.5 kWh/kg and 1.4 kWh/kg, respectively. The maximum temperature in the reactor reached 1,887 K. At the outlet of the reactor, a gas of the following composition was obtained, vol.%: СO – 25.9, H₂ – 32.9, СO₂ – 3.5, N₂ – 37.3 (pyrolysis in nitrogen plasma); СO – 32.6, H₂ – 24.1, СO₂ – 5.7, N₂ – 35.8 (air plasma gasification). The specific heat of combustion of the combustible gas formed during pyrolysis and plasma-air gasification of agricultural waste is 10,500 and 10,340 kJ/kg, respectively. Comparison of the integral indicators of dung plasma processing showed satisfactory agreement between the calculation and experiment.

Keywords: agricultural waste, experiment, plasma gasification, thermodynamic calculation

Procedia PDF Downloads 29
28448 Failure Analysis of a Medium Duty Vehicle Leaf Spring

Authors: Gül Çevik

Abstract:

This paper summarizes the work conducted to assess the root cause of the failure of a medium commercial vehicle leaf spring that failed in service. Macro- and micro-fractographic analyses by scanning electron microscopy, as well as material verification tests, were conducted in order to understand the failure mechanisms and the root cause of the failure. Findings from the fractographic analyses indicated that the failure mechanism is fatigue. Crack initiation was identified as having occurred from a point on the top surface, near the front face and to the left side. Two other crack initiation points were also observed; however, these cracks did not propagate. The propagation mode of the fatigue crack revealed that the cyclic loads resulting in crack initiation and propagation were unidirectional bending. The fractographic analyses also showed that the root cause of fatigue crack initiation and propagation was loading of the part above the design stress. The material properties of the part were verified by chemical composition analysis, microstructural analysis by optical microscopy, and hardness tests.

Keywords: leaf spring, failure analysis, fatigue, fractography

Procedia PDF Downloads 121
28447 The Main Steamline Break Transient Analysis for Advanced Boiling Water Reactor Using TRACE, PARCS, and SNAP Codes

Authors: H. C. Chang, J. R. Wang, A. L. Ho, S. W. Chen, J. H. Yang, C. Shih, L. C. Wang

Abstract:

To confirm the reactor and containment integrity of the Advanced Boiling Water Reactor (ABWR), we performed an analysis of the main steamline break (MSLB) transient using the TRACE, PARCS, and SNAP codes. The research proceeded in four steps. First, an ABWR nuclear power plant (NPP) model was developed using the above codes. Second, a steady-state analysis was performed with this model. Third, the ABWR model was used to run the MSLB transient analysis. Fourth, the predictions of TRACE and PARCS were compared with the final safety analysis report (FSAR) data. The TRACE/PARCS results and the FSAR data are similar. According to the TRACE/PARCS results, the reactor and containment integrity of the ABWR can be maintained in a safe condition during an MSLB.

Keywords: advanced boiling water reactor, TRACE, PARCS, SNAP

Procedia PDF Downloads 197
28446 Analysis of the Treatment Hemorrhagic Stroke in Multidisciplinary City Hospital №1 Nur-Sultan

Authors: M. G. Talasbayen, N. N. Dyussenbayev, Y. D. Kali, R. A. Zholbarysov, Y. N. Duissenbayev, I. Z. Mammadinova, S. M. Nuradilov

Abstract:

Background. Hemorrhagic stroke is an acute cerebrovascular accident resulting from rupture of a cerebral vessel or from increased permeability of the vessel wall with imbibition of blood into the brain parenchyma. Arterial hypertension is a common cause of hemorrhagic stroke, and male gender and age over 55 years are risk factors for intracerebral hemorrhage. Treatment of intracerebral hemorrhage is aimed at the primary pathophysiological links: relief of coagulopathy and control of arterial hypertension. Early surgical treatment can limit cerebral compression and prevent the toxic effects of blood on the brain parenchyma. Despite progress in neuroimaging, minimally invasive techniques, and navigation systems, mortality from intracerebral hemorrhage remains high. Materials and methods. The study included 78 patients (62.82% male and 37.18% female) with a verified diagnosis of hemorrhagic stroke in the period from 2019 to 2021. The age of the patients ranged from 25 to 80 years; the average age was 54.66±11.9 years. Demographic data, brain CT data (localization and volume of hematomas), treatment methods, and disease outcomes were analyzed. Results. The retrospective analysis demonstrates that 78.2% of all patients underwent surgical treatment: decompressive craniectomy in 37.7%, craniotomy with hematoma evacuation in 29.5%, and hematoma drainage in 24.59% of cases. A study of the proportion of deaths depending on the volume of intracerebral hemorrhage shows that the number of deaths was higher in the group with a hematoma volume of more than 60 ml. Evaluation of the relationship between time to surgery and mortality demonstrates that the most favorable outcomes were observed with surgical treatment in the interval from 3 to 24 hours. Mortality did not differ significantly between age groups. An analysis of the impact of surgery type on mortality reveals that decompressive craniectomy with or without hematoma evacuation led to an unfavorable outcome in 73.9% of cases, while craniotomy with hematoma evacuation and drainage led to mortality in only 28.82% of cases. Conclusion. Despite multimodal approaches, the development of surgical techniques and equipment, and the selection of optimal conservative therapy, the tactics of managing and treating hemorrhagic stroke remain controversial. Nevertheless, our experience shows that surgical intervention within 24 hours of admission and craniotomy with hematoma evacuation improve the prognosis of treatment outcomes.

Keywords: hemorrhagic stroke, intracerebral hemorrhage, surgical treatment, stroke mortality

Procedia PDF Downloads 94