Search results for: smooth baseline hazard

1944 Strong Convergence of an Iterative Sequence in Real Banach Spaces with Kadec-Klee Property

Authors: Umar Yusuf Batsari

Abstract:

Let E be a uniformly smooth and uniformly convex real Banach space and C be a nonempty, closed and convex subset of E. Let $V= \{S_i : C\to C, ~i=1, 2, 3,\cdots, N\}$ be a convex set of relatively nonexpansive mappings containing the identity. In this paper, an iterative sequence obtained from the CQ algorithm is shown to converge strongly to a point $\hat{x}$ which is a common fixed point of the relatively nonexpansive mappings in V and which also solves a system of equilibrium problems in E. The result improves some existing results in the literature.
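For orientation, a generic form of the CQ (hybrid projection) iteration for a single relatively nonexpansive mapping $T$, written with the Lyapunov functional $\phi$ and the generalized projection $\Pi$, runs as follows; this is a standard sketch of the scheme, and the paper's step weights, stopping rule and equilibrium-problem terms may differ:

\[
\begin{aligned}
&\phi(z,x) = \|z\|^2 - 2\langle z, Jx\rangle + \|x\|^2, \qquad x_0 \in C,\\
&y_n = J^{-1}\big(\alpha_n J x_n + (1-\alpha_n) J T x_n\big),\\
&C_n = \{z \in C : \phi(z, y_n) \le \phi(z, x_n)\},\\
&Q_n = \{z \in C : \langle x_n - z,\; J x_0 - J x_n\rangle \ge 0\},\\
&x_{n+1} = \Pi_{C_n \cap Q_n}(x_0),
\end{aligned}
\]

where $J$ is the normalized duality mapping of $E$ and $\alpha_n \in [0,1)$.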

Keywords: relatively nonexpansive mappings, strong convergence, equilibrium problems, uniformly smooth space, uniformly convex space, convex set, Kadec-Klee property

Procedia PDF Downloads 394
1943 A Hazard Rate Function for the Time of Ruin

Authors: Sule Sahin, Basak Bulut Karageyik

Abstract:

This paper introduces a hazard rate function for the time of ruin, used to calculate the conditional probability of ruin over very small intervals. We call this function the force of ruin (FoR). We obtain the expected time of ruin and the conditional expected time of ruin from the exact finite-time ruin probability with exponential claim amounts. We then introduce the FoR, which gives the probability of ruin conditional on ruin not having occurred by time t. We analyze the behavior of the FoR function for different initial surpluses over a specific time interval. We also obtain the FoR under an excess of loss reinsurance arrangement and examine the effect of reinsurance on the FoR.
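In hazard-rate terms, if $T$ denotes the time of ruin for initial surplus $u$ and $\psi(u,t)$ the finite-time ruin probability, the FoR corresponds to the standard hazard construction (a sketch of the definition in generic notation, not necessarily the authors' exact formulation):

\[
\mathrm{FoR}(t) = \lim_{\Delta t \to 0} \frac{\Pr(t < T \le t + \Delta t \mid T > t)}{\Delta t} = \frac{\partial \psi(u,t)/\partial t}{1 - \psi(u,t)}.
\]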

Keywords: conditional time of ruin, finite time ruin probability, force of ruin, reinsurance

Procedia PDF Downloads 360
1942 Improvement of the Aerodynamic Behaviour of a Land Rover Discovery 4 in Turbulent Flow Using Computational Fluid Dynamics (CFD)

Authors: Ahmed Al-Saadi, Ali Hassanpour, Tariq Mahmud

Abstract:

The main objective of this study is to investigate ways to reduce the aerodynamic drag coefficient and to increase the stability of a full-size sport utility vehicle using three-dimensional Computational Fluid Dynamics (CFD) simulation. The baseline model in the simulation was the Land Rover Discovery 4. Many aerodynamic devices and external design modifications were tested in this study, individually or in combination, to find the best design. These drag-reduction techniques preserve the capacity and comfort of the baseline model. A uniform freestream air velocity at the inlet, ranging from 28 m/s to 40 m/s, was used. ANSYS Fluent software (version 16.0) was used to simulate all models. The drag coefficient obtained from ANSYS Fluent for the baseline model was validated against experimental data. It is found that the use of modern aerodynamic add-on devices and modifications has a significant effect in reducing the aerodynamic drag coefficient.
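As a reminder of the quantity being validated, the drag coefficient is obtained from the integrated drag force, the freestream dynamic pressure and the vehicle frontal area. A minimal sketch with illustrative values only (not the study's data):

```python
# Drag coefficient from a CFD-integrated drag force: Cd = Fd / (0.5 * rho * U^2 * A)
def drag_coefficient(drag_force_n, air_density, freestream_velocity, frontal_area):
    dynamic_pressure = 0.5 * air_density * freestream_velocity ** 2
    return drag_force_n / (dynamic_pressure * frontal_area)

# Hypothetical numbers for a full-size SUV at the lower inlet speed used in the study
print(drag_coefficient(drag_force_n=650.0, air_density=1.225,
                       freestream_velocity=28.0, frontal_area=3.2))  # ~0.42
```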

Keywords: aerodynamics, RANS, sport utility vehicle, turbulent flow

Procedia PDF Downloads 284
1941 Effect of Kinesio Taping on Anaerobic Power and Maximum Oxygen Consumption after Eccentric Exercise

Authors: Disaphon Boobpachat, Nuttaset Manimmanakorn, Apiwan Manimmanakorn, Worrawut Thuwakum, Michael J. Hamlin

Abstract:

Objectives: To evaluate the effect of kinesio tape compared to placebo tape and static stretching on recovery of anaerobic power and maximal oxygen uptake (VO₂max) after intensive exercise. Methods: Thirty-nine untrained healthy volunteers were randomized to three intervention groups: elastic tape, placebo tape and stretching. The participants performed intensive exercise of the dominant quadriceps using an isokinetic dynamometer. The recovery process was evaluated by creatine kinase (CK), pressure pain threshold (PPT), muscle soreness scale (MSS), maximum voluntary contraction (MVC), jump height, anaerobic power and VO₂max at baseline, immediately post-exercise and on days 1, 2, 3 and 7 post-exercise. Results: The kinesio tape, placebo tape and stretching groups showed significant changes in PPT, MVC and jump height immediately post-exercise compared to baseline (p < 0.05), and changes in MSS, CK, anaerobic power and VO₂max at day 1 post-exercise compared to baseline (p < 0.05). There was no significant difference in these outcomes among the three groups. Additionally, all interventions had little effect on anaerobic power and VO₂max compared to baseline and compared among the three groups (p > 0.05). Conclusion: Kinesio tape and stretching did not improve recovery of anaerobic power and VO₂max after eccentric exercise compared to placebo tape.

Keywords: stretching, eccentric exercise, Wingate test, muscle soreness

Procedia PDF Downloads 108
1940 Comparing Trastuzumab-Related Cardiotoxicity between Elderly and Younger Patients with Breast Cancer: A Prospective Cohort Study

Authors: Afrah Aladwani, Alexander Mullen, Mohammad AlRashidi, Omamah Alfarisi, Faisal Alterkit, Abdulwahab Aladwani, Asit Kumar, Emad Eldosouky

Abstract:

Introduction: Trastuzumab is a HER-2-targeted humanized monoclonal antibody that significantly improves the therapeutic outcomes of metastatic and non-metastatic breast cancer. However, it is associated with an increased risk of cardiotoxicity, ranging from a mild decline in the cardiac ejection fraction to permanent cardiomyopathy. Concerns have been raised about treating eligible older patients. This study compares trastuzumab outcomes between two age cohorts in the Kuwait Cancer Control Centre (KCCC). Methods: In a prospective comparative observational study, 93 HER-2-positive breast cancer patients undergoing different chemotherapy protocols plus trastuzumab were included and divided into two cohorts based on their age (<60 and ≥60 years old). The baseline left ventricular ejection fraction (LVEF) was assessed and monitored every three months during trastuzumab treatment. A cardiotoxicity event was defined as a ≥10% decline in LVEF from baseline. The lower accepted normal limit of LVEF was 50%. Results: The median baseline LVEF was 65% in both age cohorts (IQR 8% and 9% for older and younger patients, respectively), whereas the median LVEF after trastuzumab treatment was 51% and 55% in older and younger patients, respectively (IQR 8%; p = 0.22), despite the fact that older patients had significantly lower exposure to anthracyclines than younger patients (60% vs. 84.1%; p < 0.001). 86.7% of older and 55.6% of younger patients developed a ≥10% decline in LVEF from baseline. Among those, only 29% of older and 27% of younger patients reached an LVEF value below 50% (p = 0.88). Statistically, age was the only factor that significantly correlated with trastuzumab-induced cardiotoxicity (OR 4; p = 0.012), but it did not increase the requirement for permanent discontinuation of treatment. A baseline LVEF value below 60% contributed to developing a post-treatment value below the normal range (50%). Conclusion: Breast cancer patients aged 60 years and above in Kuwait were at 4-fold higher risk of developing a ≥10% decline in LVEF from baseline than younger patients during trastuzumab treatment. Surprisingly, previous exposure to anthracyclines and multiple comorbidities were not associated with a significantly increased risk of cardiotoxicity.

Keywords: breast cancer, elderly, Trastuzumab, cardiotoxicity

Procedia PDF Downloads 181
1939 Efficacy of In-Situ Surgical vs. Needle Revision on Late Failed Trabeculectomy Blebs

Authors: Xie Xiaobin, Zhang Yan, Shi Yipeng, Sun Wenying, Chen Shuang, Cai Zhipeng, Zhang Hong, Zhang Lixia, Xie Like

Abstract:

Objective: To compare the efficacy of late in-situ surgical revision augmented with continuous infusion versus needle revision of failed trabeculectomy blebs. Methods: From December 2018 to December 2021, a prospective randomized controlled trial was performed on 44 glaucoma patients with blebs that had failed ≥ 6 months earlier and medically uncontrolled IOP at the Eye Hospital, China Academy of Chinese Medical Sciences. They were randomly divided into two groups: 22 eyes of 22 patients underwent late in-situ surgical revision with continuous anterior chamber infusion in the study group, and 22 eyes of 22 patients were treated with needle revision in the control group. Main outcome measures included preoperative and postoperative intraocular pressure (IOP), the number of anti-glaucoma medicines, the operation success rate, and postoperative complications. Results: The postoperative IOP values decreased significantly from baseline in both groups (both P<0.05). IOP was significantly lower in the study group than in the control group at 1 week, 1 month and 3 months postoperatively (all P<0.05). IOP reductions in the study group were substantially more prominent than in the control group at all postoperative time points (all P<0.05). The complete success rate in the study group was significantly higher than in the control group (71.4% vs. 33.3%, P<0.05), while the complete failure rate was significantly lower in the study group (0% vs. 28.5%, P<0.05). According to Cox proportional hazards regression analysis, high IOP at baseline was independently associated with an increased risk of complete failure (adjusted hazard ratio=1.141, 95% confidence interval=1.021-1.276, P<0.05). There was no significant difference in the incidence of postoperative complications between the two groups (P>0.05). Conclusion: Both in-situ surgical and needle revision have acceptable success rates and safety for late failed trabeculectomy blebs, while the former is likely to have a higher level of efficacy. Needle revision may be insufficient for eyes with a low target IOP.

Keywords: glaucoma, trabeculectomy blebs, in-situ surgical revision, needle revision

Procedia PDF Downloads 65
1938 Serum 25-Hydroxyvitamin D Levels and Depression in Persons with Human Immunodeficiency Virus Infection: A Cross-Sectional and Prospective Study

Authors: Kalpana Poudel-Tandukar

Abstract:

Background: Human Immunodeficiency Virus (HIV) infection has frequently been associated with vitamin D deficiency and depression. Vitamin D deficiency increases the risk of depression in people without HIV. We assessed the cross-sectional and prospective associations between serum concentrations of 25-hydroxyvitamin D (25[OH]D) and depression in HIV-positive people. Methods: A survey was conducted among 316 HIV-positive people aged 20-60 years residing in Kathmandu, Nepal, for the cross-sectional association at baseline, and among 184 participants without depressive symptoms at baseline who responded to both the baseline (2010) and follow-up (2011) surveys for the prospective association. The competitive protein-binding assay was used to measure 25(OH)D levels, and the Beck Depression Inventory-Ia was used to measure depression, with a cutoff score of 20 or higher. Relationships were assessed using multiple logistic regression analysis with adjustment for potential confounders. Results: The proportions of participants with 25(OH)D levels of <20 ng/mL, 20-30 ng/mL, and >30 ng/mL were 83.2%, 15.5%, and 1.3%, respectively. The four participants with 25(OH)D levels of >30 ng/mL were excluded from further analysis. The mean 25(OH)D levels in men and women were 15.0 ng/mL and 14.4 ng/mL, respectively. Twenty-six percent of participants (men: 23%; women: 29%) were depressed. Participants with 25(OH)D levels of <20 ng/mL had 1.4-fold higher odds of depression cross-sectionally and 1.3-fold higher odds of depression 18 months after baseline compared to those with 25(OH)D levels of 20-30 ng/mL (p=0.40 and p=0.78, respectively). Conclusion: Vitamin D may not have a significant impact on depression among HIV-positive people with 25(OH)D levels below normal (<30 ng/mL).

Keywords: depression, HIV, Nepal, vitamin D

Procedia PDF Downloads 305
1937 Schema Therapy as Treatment for Adults with Autism Spectrum Disorder and Comorbid Personality Disorder: A Multiple Baseline Case Series Study Testing Cognitive-Behavioral and Experiential Interventions

Authors: Richard Vuijk, Arnoud Arntz

Abstract:

Rationale: To our knowledge, treatment of personality disorder comorbidity in adults with autism spectrum disorder (ASD) is understudied and still in its infancy: we do not know whether treatments for personality disorders are applicable to adults with ASD. In particular, it is unknown whether patients with ASD benefit from the experiential techniques that are part of schema therapy developed for the treatment of personality disorders. Objective: The aim of the study is to investigate the efficacy of a schema mode focused treatment in adult clients with ASD and comorbid personality pathology (i.e., at least one personality disorder). Specifically, we investigate whether they can benefit from both cognitive-behavioral and experiential interventions. Study design: A multiple baseline case series study. Study population: Adult individuals (age > 21 years) with ASD and at least one personality disorder. Participants will be recruited from the Sarr expertise center for autism in Rotterdam. The study requires 12 participants. Intervention: The treatment protocol consists of 35 weekly sessions, followed by 10 monthly booster sessions. A multiple baseline design will be used, with baseline varying from 5 to 10 weeks, with weekly supportive sessions. After baseline, a 5-week exploration phase follows, with weekly sessions during which current and past functioning, psychological symptoms and schema modes are explored, and information about the treatment is given. Then 15 weekly sessions with cognitive-behavioral interventions and 15 weekly sessions with experiential interventions will be given. Finally, there will be a 10-month follow-up phase with monthly booster sessions. Participants are randomly assigned to baseline length, rate the belief strength of negative core beliefs (by VAS) weekly during treatment and monthly at follow-up, and fill out the SMI, SCL-90 and SRS-A seven times: during the screening procedure (i.e., before baseline), after baseline, after exploration, after the cognitive-behavioral interventions, after the experiential interventions, and after the 5- and 10-month follow-ups. The SCID-II will be administered during the screening procedure (i.e., before baseline) and at the 5- and 10-month follow-ups. Main study parameters: The primary study parameter is negative core beliefs. Secondary study parameters include schema modes, personality disorder manifestations, psychological symptoms, and social interaction and communication. Discussion: To the best of the authors' knowledge, no study has yet been published on the application of schema mode focused interventions in adult patients with ASD and comorbid PD(s). This study offers the first systematic test of schema therapy for adults with ASD. The results will provide initial evidence for the effectiveness of schema therapy in treating adults with both ASD and PD(s), and valuable information for the future development and implementation of therapeutic interventions for this group.

Keywords: adults, autism spectrum disorder, personality disorder, schema therapy

Procedia PDF Downloads 205
1936 Effects of Oral L-Carnitine on Liver Functions after Transarterial Chemoembolization in Hepatocellular Carcinoma Patients

Authors: Ali Kassem, Aly Taha, Abeer Hassan, Kazuhide Higuchi

Abstract:

Introduction: Transarterial chemoembolization (TACE) for hepatocellular carcinoma (HCC) is usually followed by hepatic dysfunction that limits its efficacy. L-carnitine has recently been studied as a hepatoprotective agent. Our aim is to evaluate the effects of L-carnitine against the deterioration of liver functions after TACE. Method: 53 patients with intermediate-stage HCC were assigned to two groups: an L-carnitine group (26 patients), who received L-carnitine 300 mg tablets twice daily from 2 weeks before to 12 weeks after TACE, and a control group (27 patients) without L-carnitine therapy. 28 of the studied patients received branched-chain amino acid (BCAA) granules. Results: There were significant differences between the L-carnitine and control groups in mean serum albumin change from baseline to 1 week and 4 weeks after TACE (p < 0.05). L-carnitine maintained the Child-Pugh score at 1 week after TACE and exhibited improvement at 4 weeks after TACE (p < 0.01 vs. 1 week after TACE). The control group showed significant Child-Pugh score deterioration from baseline to 1 week after TACE (p < 0.05) and 12 weeks after TACE (p < 0.05). There were significant differences between the L-carnitine and control groups in mean Child-Pugh score change from baseline to 4 weeks (p < 0.05) and 12 weeks after TACE (p < 0.05). L-carnitine displayed improvement in prothrombin time (PT) from baseline to 1 week, 4 weeks (p < 0.05) and 12 weeks after TACE, whereas PT in the control group remained below baseline at all follow-up intervals. Total bilirubin in the L-carnitine group decreased at 1 week post-TACE, while in the control group it significantly increased at 1 week (p = 0.01). ALT and C-reactive protein elevations were suppressed at 1 week after TACE in the L-carnitine group. The hepatoprotective effects of L-carnitine were enhanced by concomitant use of branched-chain amino acids. Conclusion: L-carnitine and BCAA combination therapy offers a novel supportive strategy after TACE in HCC patients.

Keywords: hepatocellular carcinoma, L-carnitine, liver functions, transarterial chemoembolization

Procedia PDF Downloads 121
1935 Dietary Modification and Its Effects in Overweight or Obese Saudi Women with or without Type 2 Diabetes Mellitus

Authors: Nasiruddin Khan, Nasser M. Al-Daghri, Dara A. Al-Disi, Asim Al-Fadda, Mohamed Al-Seif, Gyanendra Tripathi, A. L. Harte, Philip G. Mcternan

Abstract:

Over the last few decades, the prevalence of type 2 diabetes mellitus (T2DM) in the Kingdom of Saudi Arabia (KSA) has risen alarmingly and is unprecedented at 31.6%. Preventive measures should be taken to curb the increasing incidence. In this prospective 3-month study, we aimed to determine whether a dietary modification program would confer favorable effects in overweight and obese adult Saudi women with or without T2DM. A total of 92 Saudi women [18 healthy controls, 24 overweight subjects and 50 overweight or obese patients with early-onset T2DM] were included in this prospective study. Anthropometrics were recorded and fasting blood samples taken at baseline and after 3 months. Fasting blood sugar and lipid profile were measured routinely. A diet with a 500 kcal deficit relative to the daily recommended dietary allowance was prescribed to all participants. After 3 months of follow-up, significant improvements were observed in both the overweight and T2DM groups compared to baseline, with decreased mean BMI [overweight group 28.54±1.49 versus 27.95±2.25, p<0.05; T2DM group 35.24±7.67 versus 35.04±8.07, p<0.05] and hip circumference [overweight group 109.67±5.01 versus 108.07±4.07, p<0.05; T2DM group 112.3±13.43 versus 109.21±12.71, p<0.01]. Moreover, in the overweight group, baseline HDL cholesterol was significantly associated with protein intake and inversely associated with carbohydrate intake in controls. In the T2DM group, carbohydrate intake at baseline was significantly associated with BMI. A 3-month 500 kcal/day deficit dietary modification alone is probably effective among adult overweight or obese Saudi females with or without T2DM. Longer prospective studies are needed to determine whether dietary intervention alone can reduce progression of T2DM among high-risk adult Arabs.

Keywords: diet, lipid, obesity, T2DM

Procedia PDF Downloads 448
1934 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA models that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. Integrating faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; for low strain rate regions, where such data are scarce, this is especially challenging. Using faults in PSHA requires converting the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled with a truncated scheme, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected threshold may also occur in the background and not only at the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture. It is therefore essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology that calculates the earthquake rates in a fault system by converting the slip-rate budget of each fault into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast France) where the fault is assumed to have a low strain rate.
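To make the slip-rate-to-activity-rate conversion concrete, the sketch below balances a fault's seismic moment budget against a truncated Gutenberg-Richter magnitude distribution. It is a simplified illustration of the general principle, not the SHERIFS algorithm itself, and all parameter values are hypothetical:

```python
import numpy as np

def gr_rates_from_slip(slip_rate_m_per_yr, area_m2, b=1.0, m_min=5.0, m_max=7.0,
                       shear_modulus=3.0e10, n_bins=40):
    """Annual event rates per magnitude bin from a fault's moment budget
    (truncated Gutenberg-Richter shape, moments via Hanks-Kanamori)."""
    moment_rate = shear_modulus * area_m2 * slip_rate_m_per_yr  # N*m per year
    mags = np.linspace(m_min, m_max, n_bins)
    pdf = 10.0 ** (-b * mags)               # unnormalized G-R density
    pdf /= pdf.sum()
    moments = 10.0 ** (1.5 * mags + 9.05)   # Hanks & Kanamori (1979), N*m
    scale = moment_rate / np.sum(pdf * moments)  # events/yr so moments balance
    return mags, scale * pdf

# Hypothetical fault: 30 km x 15 km plane slipping at 1 mm/yr
mags, rates = gr_rates_from_slip(slip_rate_m_per_yr=1e-3, area_m2=30e3 * 15e3)
print(f"annual rate of M>=6: {rates[mags >= 6.0].sum():.5f}")
```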

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 28
1933 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea

Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal

Abstract:

Tectonically induced tsunamis, landslides, ground shaking leading to liquefaction, infrastructure collapse and conflagration are common earthquake hazards experienced worldwide. Apart from human casualties, damage to built-up infrastructure such as roads, bridges and buildings, and to other property, is a collateral consequence. Appropriate planning must therefore come first, with a view to safeguarding people's welfare, infrastructure and other property at a site, based on proper evaluation and assessment of the potential level of earthquake hazard. The resulting information can serve as a tool to minimize earthquake risk and can foster appropriate construction design and the formulation of building codes at a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. For the present study, GIS and remote sensing were utilized to evaluate and assess the earthquake hazards of the study region. Subsurface geology and geomorphology were the factors assessed and integrated within a GIS environment, coupled with seismicity data layers such as peak ground acceleration (PGA), historical earthquake magnitude and earthquake depth, to evaluate and prepare liquefaction potential zones (LPZ), culminating in an earthquake hazard zonation of our study sites. Liquefaction can eventuate in the aftermath of severe ground shaking where the site soil conditions, geology and geomorphology are amenable; these site conditions, the wave propagation media, were assessed to identify the potential zones. The precept is that during an earthquake, seismic waves are generated at the focus and propagate to the surface, passing through particular geological, geomorphological and soil features; according to their strength, stiffness and moisture content, these features amplify or attenuate the propagating waves, and the resulting intensity of shaking may or may not culminate in the collapse of built-up infrastructure. For the earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-criteria evaluation (MCE) with Saaty's Analytic Hierarchy Process (AHP) was adopted for this study. This GIS technique integrates several factors (thematic layers) that can potentially contribute to earthquake-triggered liquefaction. The factors are weighted and ranked in order of their contribution to earthquake-induced liquefaction, and the weights and rankings assigned to each factor are normalized with the AHP technique. The spatial analysis tools of ArcGIS 10 (raster calculator, reclassify, overlay analysis) were mainly employed in the study. The final LPZ and earthquake hazard outputs were reclassified into 'Very High', 'High', 'Moderate', 'Low' and 'Very Low' to indicate the levels of hazard within the study region.
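As an illustration of the AHP weighting step described above, the sketch below derives normalized factor weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio; the matrix entries and factor names are hypothetical, not those used in the study:

```python
import numpy as np

def ahp_weights(pairwise):
    """Normalized AHP weights (principal eigenvector) and consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index
    return w, ci / ri

# Hypothetical pairwise judgments for geology, geomorphology, PGA and magnitude layers
M = [[1,   3,   2,   4],
     [1/3, 1,   1/2, 2],
     [1/2, 2,   1,   3],
     [1/4, 1/2, 1/3, 1]]
w, cr = ahp_weights(M)
print(np.round(w, 3), f"CR = {cr:.3f}")  # CR < 0.1 indicates acceptable consistency
```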

Keywords: hazard micro-zonation, liquefaction, multi-criteria evaluation, tectonism

Procedia PDF Downloads 236
1932 Seismotectonics of Southern Haiti: A Faulting Model for the 12 January 2010 M7 Earthquake

Authors: Newdeskarl Saint Fleur, Nathalie Feuillet, Raphaël Grandin, Éric Jacques, Jennifer Weil-Accardo, Yann Klinger

Abstract:

The prevailing consensus is that the 2010 Mw 7.0 Haiti earthquake left the Enriquillo-Plantain Garden strike-slip Fault (EPGF) unruptured but broke unmapped blind north-dipping thrusts. Using high-resolution topography, aerial images, bathymetry and geology, we identified previously unrecognized south-dipping NW-SE-striking active thrusts in southern Haiti. One of them, the Lamentin thrust (LT), cuts across the crowded city of Carrefour, extends offshore into Port-au-Prince Bay and connects at depth with the EPGF. We propose that both faults broke in 2010. The rupture likely initiated on the thrust and propagated further along the EPGF due to unclamping. This scenario is consistent with geodetic, seismological and field data. The 2010 earthquake increased the stress toward failure on the unruptured segments of the EPGF and on neighboring thrusts, significantly increasing the seismic hazard in the Port-au-Prince urban area. The numerous active thrusts recognized in this area must be considered in future evaluations of the seismic hazard.

Keywords: active faulting, enriquillo-plantain garden fault, Haiti earthquake, seismic hazard

Procedia PDF Downloads 1206
1931 Process Safety Evaluation of a Nuclear Power Plant through Virtual Process Hazard Analysis (PHA) using the What-If Technique

Authors: Lormaine Anne Branzuela, Elysa Largo, Julie Marisol Pagalilauan, Neil Concibido, Monet Concepcion Detras

Abstract:

Energy is a necessity both for the people and for the country. The demand for energy is continually increasing, but the supply is not keeping pace. The reopening of the Bataan Nuclear Power Plant (BNPP) in the Philippines has recently been circulating in the media. The general public has been hesitant to accept the inclusion of nuclear energy in the Philippine energy mix due to the perceived unsafe conditions of the plant. This study evaluated the possible operation of a nuclear power plant of the same type as the BNPP, considering the safety of the workers, the public and the environment, using a Process Hazard Analysis (PHA) method. The What-If technique was utilized to identify the hazards and consequences of the plant's operations, together with the level of risk each entails. Through the brainstorming sessions of the PHA team, it was found that the most critical system in the plant is the primary system. Leakage from pipes and equipment due to weakened seals and welds, and blockage of the coolant path due to fouling, were the most common scenarios identified; these further lead to the most critical scenarios: radioactive leakage through sump contamination, nuclear meltdown, and equipment damage and explosion, which could result in multiple injuries and fatalities as well as environmental impacts.

Keywords: process safety management, process hazard analysis, What-If technique, nuclear power plant

Procedia PDF Downloads 179
1930 Study of Natural Radioactivity and Radiation Hazard Indices of Soil from the Sembrong Catchment Area, Johor, Malaysia

Authors: M. I. A. Adziz, J. Sharib Sarip, M. T. Ishak, D. N. A. Tugi

Abstract:

Radiation exposure to humans and the environment arises from natural radioactive material sources. Given that exposure of people and communities can occur through several pathways, it is necessary to pay attention to increases in naturally radioactive material, particularly in soil. Continuous research on, and monitoring of, the distribution and activity of these natural radionuclides is beneficial as a guide and reference, especially in the event of an accidental exposure. Surface soil/sediment samples were taken for the study from several locations identified around the Sembrong catchment area. After 30 days of secular equilibrium with their daughters, the activity concentrations of the naturally occurring radioactive material (NORM) members, i.e. ²²⁶Ra, ²²⁸Ra, ²³⁸U, ²³²Th and ⁴⁰K, were measured using a high-purity germanium (HPGe) gamma spectrometer. The results showed that the radioactivity concentration of ²³⁸U ranged between 17.13 - 30.13 Bq/kg, ²³²Th between 22.90 - 40.05 Bq/kg, ²²⁶Ra between 19.19 - 32.10 Bq/kg, ²²⁸Ra between 21.08 - 39.11 Bq/kg and ⁴⁰K between 9.22 - 51.07 Bq/kg, with average values of 20.98 Bq/kg, 27.39 Bq/kg, 23.55 Bq/kg, 26.93 Bq/kg and 23.55 Bq/kg, respectively. The values obtained from this study were lower than or comparable to those reported in previous studies. The mean values obtained for the four radiation hazard parameters, namely radium equivalent activity (Raeq), external dose rate (D), annual effective dose and external hazard index (Hₑₓ), were 65.40 Bq/kg, 29.33 nGy/h, 19.18 × 10⁻⁶ Sv and 0.19, respectively. These values are low compared to the world averages and to globally applied standards. Comparison with previous studies (dry season) also found that the values for all four parameters were low and equivalent. This indicates that the level of radiation hazard in the area around the study sites is safe for the public.
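The hazard indices reported here follow standard UNSCEAR-style formulas; a minimal sketch of their computation is given below. The coefficients are the commonly used literature values, and the sample activities are the mean values reported in this abstract:

```python
def radiological_indices(c_ra, c_th, c_k):
    """Radium equivalent, outdoor absorbed dose rate and external hazard index
    from 226Ra, 232Th and 40K activity concentrations (Bq/kg)."""
    ra_eq = c_ra + 1.43 * c_th + 0.077 * c_k           # Bq/kg, limit 370
    dose = 0.462 * c_ra + 0.604 * c_th + 0.0417 * c_k  # nGy/h in outdoor air
    h_ex = c_ra / 370.0 + c_th / 259.0 + c_k / 4810.0  # should be < 1
    return ra_eq, dose, h_ex

# Mean activities from this study; output is close to the reported 65.40 Bq/kg,
# 29.33 nGy/h and 0.19
print(radiological_indices(c_ra=23.55, c_th=27.39, c_k=23.55))
```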

Keywords: catchment area, gamma spectrometry, naturally occurring radioactive material (NORM), soil

Procedia PDF Downloads 71
1929 Frailty Models for Modeling Heterogeneity: Simulation Study and Application to Quebec Pension Plan

Authors: Souad Romdhane, Lotfi Belkacem

Abstract:

In actuarial analyses of lifetime, mostly models accounting for observable risk factors have been developed. Within this context, the Cox proportional hazards model (CPH model) is commonly used to assess the effects of observable covariates, such as gender, age and smoking habits, on the hazard rate. These covariates may fail to fully account for the true lifetime. This may be due to the existence of another random variable (frailty) that is being ignored. The aim of this paper is to examine the shared frailty issue in the Cox proportional hazards model by including two different parametric forms of frailty in the hazard function. Four estimation methods are used to fit them. The performance of the parameter estimates is assessed and compared between the classical Cox model and these frailty models, first through a real-life data set from the Quebec Pension Plan and then using a more general simulation study. This performance is investigated in terms of the bias of the point estimates and their empirical standard errors in both the fixed and random effect parts. Both the simulation and the real dataset studies showed differences between the classical Cox model and the shared frailty model.
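A minimal sketch of a shared-frailty data-generating process of the kind studied here, assuming a gamma frailty multiplying an exponential baseline hazard (the parameter values are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_shared_frailty(n_groups=200, group_size=2, theta=0.5,
                            baseline_rate=0.1, beta=0.5):
    """Lifetimes with hazard h_ij(t) = Z_i * h0 * exp(beta * x_ij), where the
    shared frailty Z_i ~ Gamma(1/theta, theta), so E[Z]=1 and Var[Z]=theta."""
    rows = []
    for i in range(n_groups):
        z = rng.gamma(shape=1.0 / theta, scale=theta)  # frailty shared by the group
        for _ in range(group_size):
            x = rng.binomial(1, 0.5)                   # observable covariate, e.g. gender
            rate = z * baseline_rate * np.exp(beta * x)
            rows.append((i, x, rng.exponential(1.0 / rate)))
    return rows  # (group id, covariate, lifetime)

data = simulate_shared_frailty()
print(data[:3])
```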

Keywords: life insurance-pension plan, survival analysis, risk factors, Cox proportional hazards model, multivariate failure-time data, shared frailty, simulation study

Procedia PDF Downloads 331
1928 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya

Authors: Abdel Rahman Khider Hassan

Abstract:

This paper describes a multi-criteria geospatial model for prediction of landslide hazard zonation (LHZ) for the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of intrinsic location parameters (slope stability factors) and external landslide-triggering factors (natural and man-made). The intrinsic dataset included lithology, slope geometry (inclination, aspect, elevation and curvature) and land use/land cover. The triggering factors included rainfall as the climatic factor, in addition to the destructive effects reflected by the proximity of roads and the drainage network to areas susceptible to landslides. No published study on landslides was available for this area. Digital datasets of the above spatial parameters were therefore acquired, stored, manipulated and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in an ArcGIS 10.2.2 environment). The landslide hazard zonation was derived by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids were overlaid to generate a map of the potential landslide hazard zonation (LHZ) for the lake catchment. Of the total catchment surface of 3200 km², most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazard, whilst about 13% (416 km²) falls under high hazard. Only 1.0% (32 km²) of the catchment displays very high landslide hazard, and the remaining area (7.3%; 233.6 km²) displays a low probability of landslide hazard. This result confirms the importance of steep slope angles, lithology, vegetation land cover and slope orientation (aspect) as the major determining factors of slope failure. The information provided by the produced LHZ map could lay the basis for decision making, as well as for mitigation and for avoiding the potential losses caused by landslides in the Lake Naivasha catchment in the Kenya Highlands.
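The weighted-overlay step itself reduces to a cell-wise weighted sum of the reclassified factor rasters, followed by a reclassification into hazard classes. A minimal numpy sketch, with hypothetical weights and toy 2x2 rasters:

```python
import numpy as np

# Reclassified factor rasters (susceptibility scores 1-5) and weights summing to 1
slope     = np.array([[5, 4], [2, 1]])
lithology = np.array([[3, 3], [2, 2]])
rainfall  = np.array([[4, 2], [4, 1]])
weights = {"slope": 0.5, "lithology": 0.3, "rainfall": 0.2}

lhz_score = (weights["slope"] * slope
             + weights["lithology"] * lithology
             + weights["rainfall"] * rainfall)

# Reclassify continuous scores into the five hazard classes used in the study
classes = np.digitize(lhz_score, bins=[1.5, 2.5, 3.5, 4.5])  # 0=Very Low ... 4=Very High
print(lhz_score, classes, sep="\n")
```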

Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha

Procedia PDF Downloads 176
1927 Periodically Forced Oscillator with Noisy Chaotic Dynamics

Authors: Adedayo Oke Adelakun

Abstract:

The chaotic dynamics of periodically forced oscillators with smooth potentials have been extensively investigated via theoretical, numerical and experimental simulations. With the study of chaotic dynamics by means of multiple-time-scale analysis, Melnikov theory, bifurcation diagrams, Poincaré maps and Lyapunov exponents, it has become necessary to seek a better understanding of nonlinear oscillators with noisy terms. In this paper, we examine the influence of noise on the complex dynamical behaviour of the periodically forced F6-Duffing oscillator for specific choices of the noise parameters. The inclusion of a noise term enriches the dynamical behaviour of the oscillator, which may have wider application in secure communication than the smooth potential.
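A minimal sketch of how such noisy dynamics can be simulated, assuming an F6-Duffing-type oscillator (sextic potential, hence a quintic restoring force) driven by periodic forcing plus additive Gaussian white noise and integrated with the Euler-Maruyama scheme; all coefficients are illustrative, not the paper's:

```python
import numpy as np

def noisy_f6_duffing(t_end=200.0, dt=1e-3, gamma=0.5, f0=7.5, omega=1.0,
                     a=1.0, b=-2.0, c=1.0, sigma=0.1, x0=0.1, v0=0.0):
    """x'' + gamma*x' + a*x + b*x^3 + c*x^5 = f0*cos(omega*t) + sigma*xi(t),
    integrated with the Euler-Maruyama scheme (xi is Gaussian white noise)."""
    n = int(t_end / dt)
    x, v = np.empty(n), np.empty(n)
    x[0], v[0] = x0, v0
    dW = np.random.default_rng(0).normal(0.0, np.sqrt(dt), n)  # Wiener increments
    for k in range(n - 1):
        t = k * dt
        acc = (-gamma * v[k] - (a * x[k] + b * x[k]**3 + c * x[k]**5)
               + f0 * np.cos(omega * t))
        x[k + 1] = x[k] + v[k] * dt
        v[k + 1] = v[k] + acc * dt + sigma * dW[k]
    return x, v

x, v = noisy_f6_duffing()
print(x[-5:])
```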

Keywords: hierarchical structure, periodically forced oscillator, noisy parameters, dynamical behaviour, F6-Duffing oscillator

Procedia PDF Downloads 300
1926 Efficacy of Heart Failure Reversal Treatment Followed by 90-Day Follow-Up in Chronic Heart Failure Patients with Low Ejection Fraction

Authors: Rohit Sane, Snehal Dongre, Pravin Ghadigaonkar, Rahul Mandole

Abstract:

The present study was designed to evaluate the efficacy of heart failure reversal therapy (HFRT), which uses a herbal procedure (panchakarma) and allied therapies, in chronic heart failure (CHF) patients with low ejection fraction. Methods: This efficacy study was conducted in CHF patients (aged 25-65 years, ejection fraction (EF) < 30%) wherein HFRT (60-75 minutes), consisting of snehana (external oleation), swedana (passive heat therapy), hrudaydhara (concoction dripping treatment) and basti (enema), was administered twice daily for 7 days. During this therapy and for the next 30 days, patients followed the study dinacharya (daily regimen) and were prescribed ARJ kadha in addition to their conventional treatment. The primary endpoint of this study was the evaluation of maximum aerobic capacity (MAC), as assessed from the 6-minute walk distance (6MWD) using Cahalin's equation, at baseline, at the end of the 7-day treatment, and at follow-ups after 30 and 90 days. EF was assessed by 2D echocardiography at baseline and after 30 days of follow-up. Results: CHF patients with EF < 30% (N=52, mean [SD] age: 58.8 [10.8] years, 85% men) were enrolled in the study. There was 100% compliance with the study therapy. A significant improvement was observed in MAC levels (7.11%, p = 0.029) at the end of the 7-day therapy compared to baseline. This improvement was maintained at the two follow-up visits. Moreover, the ejection fraction increased by 6.38% (p = 0.012) from baseline at day 7 of the therapy. Conclusions: This study with 90-day follow-up highlights the benefit of HFRT as part of maintenance treatment for CHF patients with reduced ejection fraction.
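For reference, the Cahalin relation commonly used to estimate peak oxygen uptake from the 6-minute walk distance is, to the best of our knowledge (readers should verify against the original reference),

\[
\dot{V}\mathrm{O}_{2\,\mathrm{peak}}\;(\mathrm{mL\,kg^{-1}\,min^{-1}}) \approx 0.03 \times 6\mathrm{MWD\,(m)} + 3.98 .
\]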

Keywords: chronic heart failure, functional capacity, heart failure reversal therapy, oxygen uptake, panchakarma

Procedia PDF Downloads 203
1925 Rosuvastatin Improves Endothelial Progenitor Cells in Rheumatoid Arthritis

Authors: Ashit Syngle, Nidhi Garg, Pawan Krishan

Abstract:

Background: Endothelial progenitor cells (EPCs) are depleted in rheumatoid arthritis (RA) and contribute to its increased cardiovascular (CV) risk. Statins exert a protective effect in coronary artery disease partly by promoting EPC mobilization. This vasculoprotective effect of statins has not yet been investigated in RA. We aimed to investigate the effect of rosuvastatin on EPCs in RA. Methods: 50 RA patients were randomized to receive 6 months of treatment with rosuvastatin (10 mg/day, n=25) or placebo (n=25) as an adjunct to existing stable antirheumatic drugs. EPCs (CD34+/CD133+) were quantified by flow cytometry. Inflammatory measures, including DAS28, CRP and ESR, were assessed at baseline and after treatment. Lipids and pro-inflammatory cytokines (TNF-α, IL-6 and IL-1) were estimated at baseline and after treatment. Results: At baseline, inflammatory measures and pro-inflammatory cytokines were elevated and EPCs depleted in both groups, and EPCs correlated inversely with DAS28 and TNF-α. EPCs increased significantly (p < 0.01) after treatment with rosuvastatin but did not change significantly with placebo. Rosuvastatin exerted a positive effect on the lipid spectrum, lowering total cholesterol, LDL and non-HDL cholesterol and raising HDL, as compared with placebo. At 6 months, DAS28, ESR, CRP, TNF-α and IL-6 improved significantly in the rosuvastatin group. A significant negative correlation was observed between EPCs and DAS28, CRP, TNF-α and IL-6 after treatment with rosuvastatin. Conclusion: This is the first study to show that rosuvastatin improves inflammation and EPC biology in RA, possibly through its anti-inflammatory and lipid-lowering effects. This beneficial effect of rosuvastatin may provide a novel strategy to prevent cardiovascular events in RA.

Keywords: RA, Endothelial Progenitor Cells, rosuvastatin, cytokines

Procedia PDF Downloads 232
1924 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles

Authors: Nozar Kishi, Babak Kamrani, Filmon Habte

Abstract:

Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Japan experiences, on average, more than 10 tropical cyclones that come within damaging reach every year, as well as earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs and governmental institutions. KCC's (Karen Clark and Company) catastrophe models are procedures constituted of four modular segments: 1) stochastic event sets that represent the statistics of past events; 2) hazard attenuation functions that model the local intensity; 3) vulnerability functions that address the repair need for local buildings exposed to the hazard; and 4) a financial module addressing policy conditions, which estimates the resulting losses. The events module is comprised of events (faults or tracks) with different intensities and corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need, as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that results in events with intensities corresponding to annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called Characteristic Events, CEs) are selected through a super-stratified sampling approach based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as steel-reinforced concrete (SRC) and high-rise construction.
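A toy sketch of how the four-module chain (event, hazard, vulnerability, financial) composes, with entirely hypothetical functions and numbers; the study's actual attenuation and vulnerability curves are far more detailed:

```python
def hazard_intensity(event_magnitude, distance_km):
    # Hypothetical attenuation: intensity decays with distance from the source
    return event_magnitude * 10.0 / (10.0 + distance_km)

def damage_ratio(intensity):
    # Hypothetical vulnerability curve: repair cost as fraction of replacement value
    return min(1.0, max(0.0, 0.02 * intensity ** 2))

def financial_loss(ground_up_loss, deductible, limit):
    # Policy terms applied to the ground-up loss
    return min(max(ground_up_loss - deductible, 0.0), limit)

# One stochastic event applied to one exposure with a 500,000 replacement value
intensity = hazard_intensity(event_magnitude=7.0, distance_km=15.0)
gu_loss = damage_ratio(intensity) * 500_000
print(financial_loss(gu_loss, deductible=10_000, limit=300_000))
```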

Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM

Procedia PDF Downloads 238
1923 An Online Adaptive Thresholding Method to Classify Google Trends Data Anomalies for Investor Sentiment Analysis

Authors: Duygu Dere, Mert Ergeneci, Kaan Gokcesu

Abstract:

Google Trends data has gained increasing popularity in applications of behavioral finance, decision science and risk management. Because of Google's wide range of use, Trends statistics provide significant information about investor sentiment and intention, which can be used as decisive factors in corporate and risk management. However, an anomaly, i.e., a significant increase or decrease in a certain query, cannot be detected by state-of-the-art computational applications due to the random baseline noise of the Trends data, which is modelled as additive white Gaussian noise (AWGN). Since the baseline noise power changes gradually over time, an adaptive thresholding method is required to track and learn the baseline noise for correct classification. To this end, we introduce an online method to classify meaningful deviations in Google Trends data. Through extensive experiments, we demonstrate that our method can successfully classify various anomalies across plenty of different data.
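A minimal sketch of the kind of online scheme described: an exponentially weighted tracker of the baseline noise statistics that flags samples exceeding an adaptive threshold. The smoothing constant and threshold multiplier are illustrative, and this is a generic detector, not the authors' algorithm:

```python
import numpy as np

def online_anomalies(series, alpha=0.05, k=3.0):
    """Flag points whose deviation from a running mean exceeds k adaptive
    standard deviations; mean and variance are updated only on normal points."""
    mean, var = series[0], 1.0
    flags = []
    for x in series[1:]:
        deviation = x - mean
        is_anomaly = abs(deviation) > k * np.sqrt(var)
        flags.append(is_anomaly)
        if not is_anomaly:  # learn the baseline noise from normal samples only
            mean += alpha * deviation
            var = (1 - alpha) * var + alpha * deviation ** 2
    return flags

rng = np.random.default_rng(1)
data = rng.normal(50, 2, 300)
data[150] += 25  # injected query spike
print([i for i, f in enumerate(online_anomalies(data), start=1) if f])
```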

Keywords: adaptive data processing, behavioral finance, convex optimization, online learning, soft minimum thresholding

Procedia PDF Downloads 135
1922 PatchMix: Learning Transferable Semi-Supervised Representation by Predicting Patches

Authors: Arpit Rai

Abstract:

In this work, we propose PatchMix, a semi-supervised method for pre-training visual representations. PatchMix mixes patches of two images and then solves an auxiliary task of predicting the label of each patch in the mixed image. Our experiments on the CIFAR-10, CIFAR-100 and SVHN datasets show that the representations learned by this method encode useful information for transfer to new tasks, and outperform the baseline residual network encoders by 12% (ResNet-101) and 2% (ResNet-56) on CIFAR-10, by 4% (ResNet-101) on CIFAR-100, and by 6% (ResNet-101) on SVHN.
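A rough numpy sketch of the patch-mixing idea as described: grid patches of two images are interleaved by a random mask, and the per-patch source labels serve as the auxiliary prediction target. Details such as the patch size and mixing ratio are assumptions, not taken from the paper:

```python
import numpy as np

def patch_mix(img_a, img_b, patch=8, p=0.5, rng=np.random.default_rng(0)):
    """Mix patch x patch blocks of two HxWxC images; return the mixed image
    and the per-patch source labels (0 = img_a, 1 = img_b)."""
    h, w, _ = img_a.shape
    gh, gw = h // patch, w // patch
    labels = (rng.random((gh, gw)) < p).astype(int)
    mixed = img_a.copy()
    for i in range(gh):
        for j in range(gw):
            if labels[i, j]:
                ys = slice(i * patch, (i + 1) * patch)
                xs = slice(j * patch, (j + 1) * patch)
                mixed[ys, xs] = img_b[ys, xs]
    return mixed, labels  # labels are the auxiliary prediction target

a = np.zeros((32, 32, 3), np.float32)
b = np.ones((32, 32, 3), np.float32)
mixed, labels = patch_mix(a, b)
print(labels)
```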

Keywords: self-supervised learning, representation learning, computer vision, generalization

Procedia PDF Downloads 59
1921 Numerical Study on the Effects of Truncated Ribs on Film Cooling with Ribbed Cross-Flow Coolant Channel

Authors: Qijiao He, Lin Ye

Abstract:

To evaluate the effect of ribs on the internal flow structure in the film hole and on the film cooling performance of the outer surface, this numerical study investigates the effects of rib configuration on film cooling performance with a ribbed cross-flow coolant channel. A baseline smooth case and three ribbed cases, comprising a continuous rib case and two cross-truncated rib cases with different arrangements, are studied. The distributions of adiabatic film cooling effectiveness and heat transfer coefficient are obtained at blowing ratios of 0.5 and 1.0. A commercial steady RANS (Reynolds-averaged Navier-Stokes) code with the realizable k-ε turbulence model and enhanced wall treatment was used for the numerical simulations. The numerical model is validated against available experimental data. The two cross-truncated rib cases produce approximately the same cooling effectiveness as the smooth case at the lower blowing ratio, while the continuous rib case significantly outperforms the other cases. With the increase of the blowing ratio, the ribbed cases become inferior to the smooth case, especially in the upstream region; the cross-truncated rib I case produces the highest cooling effectiveness among the studied ribbed channel cases. It is found that film cooling effectiveness deteriorates as the spiral intensity of the cross-flow inside the film hole increases: lower spiral intensity leads to better film coverage and thus better cooling effectiveness. The distinct relative merits among the cases at different blowing ratios are explained by this dominant mechanism. With regard to the heat transfer coefficient, the smooth case has higher heat transfer intensity than the ribbed cases at the studied blowing ratios, and the laterally averaged heat transfer coefficient of the cross-truncated rib I case is higher than that of the cross-truncated rib II case.
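For reference, the two quantities evaluated here are conventionally defined as the blowing ratio $M$ and the adiabatic film cooling effectiveness $\eta$ (standard definitions quoted for the reader, not reproduced from the paper):

\[
M = \frac{\rho_c U_c}{\rho_\infty U_\infty}, \qquad \eta = \frac{T_\infty - T_{aw}}{T_\infty - T_c},
\]

where the subscripts $c$ and $\infty$ denote coolant and mainstream conditions, and $T_{aw}$ is the adiabatic wall temperature.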

Keywords: cross-flow, cross-truncated rib, film cooling, numerical simulation

Procedia PDF Downloads 110
1920 InSAR Time-Series Phase Unwrapping for Urban Areas

Authors: Hui Luo, Zhenhong Li, Zhen Dong

Abstract:

The analysis of multi-temporal InSAR (MTInSAR) techniques such as persistent scatterer (PS) and small baseline subset (SBAS) usually relies on temporal/spatial phase unwrapping (PU). Unfortunately, PU often fails for two reasons: 1) spatial phase jumps between adjacent pixels larger than π, as in layover and highly discontinuous terrain; 2) temporal phase discontinuities such as time-varying atmospheric delay. To overcome these limitations, a least-squares-based PU method is introduced in this paper, which incorporates baseline-combination interferograms and an adjacent phase-gradient network. Firstly, permanent scatterers (PS) are selected for study. Starting with the linear baseline-combination method, we obtain equivalent 'small baseline interferograms' to limit the spatial phase differences. Then, phase differencing is performed between connected PSs (connected by a specific networking rule) to suppress spatially correlated phase errors such as atmospheric artifacts. After that, the interval phase differences along arcs are computed by the least-squares method, followed by an outlier detector that removes arcs with phase ambiguities. The unwrapped phase is then obtained by spatial integration. The proposed method is tested on real TerraSAR-X data, and the results are compared with those obtained by StaMPS (a software package with 3D PU capabilities). The comparison shows that the proposed method can successfully unwrap interferograms in urban areas even when strong discontinuities exist, where StaMPS fails. Finally, precise DEM errors can be obtained from the unwrapped interferograms.
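The least-squares integration step can be written as an overdetermined linear system on the arc network: each arc contributes one equation relating the unknown node phases to the measured arc phase gradient. A minimal sketch with a hypothetical 4-node network (not the paper's networking rule):

```python
import numpy as np

def lsq_unwrap(n_nodes, arcs, arc_gradients):
    """Estimate node phases from arc phase differences by least squares.
    arcs: list of (i, j) node pairs; arc_gradients: measured phi_j - phi_i."""
    A = np.zeros((len(arcs), n_nodes))
    for row, (i, j) in enumerate(arcs):
        A[row, i], A[row, j] = -1.0, 1.0
    A[:, 0] = 0.0  # fix node 0 as the phase reference (phi_0 = 0)
    phases, *_ = np.linalg.lstsq(A, np.asarray(arc_gradients), rcond=None)
    return phases

# Hypothetical PS points connected by arcs, gradients in radians after
# outlier rejection
arcs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
grads = [0.5, 0.4, 0.6, 0.9, 1.0]
print(np.round(lsq_unwrap(4, arcs, grads), 3))  # -> [0. 0.5 0.9 1.5]
```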

Keywords: phase unwrapping, time series, InSAR, urban areas

Procedia PDF Downloads 122
1919 Probabilistic-Based Design of Bridges under Multiple Hazards: Floods and Earthquakes

Authors: Kuo-Wei Liao, Jessica Gitomarsono

Abstract:

Bridge reliability against natural hazards such as floods or earthquakes is an interdisciplinary problem that involves a wide range of knowledge. Moreover, due to global climate change, engineers have to design structures against multi-hazard threats. Currently, few practical design guidelines include such a concept. Bridge foundations in Taiwan often do not have a uniform width, yet few studies have focused on the safety evaluation of bridges with complex piers; investigation of the scouring depth in such situations is very important. Thus, this study first focuses on investigating and improving the scour prediction formula for a bridge with a complicated foundation via experiments and artificial intelligence. Secondly, a probabilistic design procedure using the established prediction formula is proposed for practicing engineers facing multi-hazard attacks.

Keywords: bridge, reliability, multi-hazards, scour

Procedia PDF Downloads 344
1918 Measurement of 238U, 232Th and 40K in Soil Samples Collected from Coal City Dhanbad, India

Authors: Zubair Ahmad

Abstract:

Specific activities of the natural radionuclides 238U, 232Th and 40K were measured using the γ-ray spectrometric technique in soil samples collected from the city of Dhanbad, which is located near coal mines. Mean activity values for 238U, 232Th and 40K were found to be 60.29 Bq/kg, 64.50 Bq/kg and 481.0 Bq/kg, respectively. The mean radium equivalent activity, absorbed dose rate, outdoor dose, external hazard index and internal hazard index for the area under study were determined as 189.53 Bq/kg, 87.21 nGy/h, 0.37 mSv/y, 0.52 and 0.64, respectively. The annual effective dose to the general public was found to be 0.44 mSv/y. This value lies well below the limit of 1 mSv/y recommended by the International Commission on Radiological Protection. The measured values were found to be safe for the environment and public health.

Keywords: coal city Dhanbad, gamma-ray spectroscopy, natural radioactivity, soil samples

Procedia PDF Downloads 240
1917 An Alternative Stratified Cox Model for Correlated Variables in Infant Mortality

Authors: K. A. Adeleke

Abstract:

Often in epidemiological research, introducing a stratified Cox model can account for the existence of interactions of some inherent factors with major/noticeable factors. This research work aimed at modelling correlated variables in infant mortality in the presence of inherent factors affecting the infant survival function. An alternative semiparametric stratified Cox model is proposed, with a view to handling multilevel factors that interact with others. The model was used as a tool for infant mortality data from the Nigeria Demographic and Health Survey (NDHS), with some multilevel factors (tetanus, polio and breastfeeding) correlated with the main factors (sex, size and mode of delivery). Asymptotic properties of the estimators are also studied via simulation. The model showed a good fit to the data and performed differently depending on the levels of interaction of the strata variable Z*. Evidence that the baseline hazard functions and regression coefficients are not the same from stratum to stratum provides a gain in information over the usage of the standard Cox model. Simulation results showed that the present method produced better estimates in terms of bias and lower standard errors and/or mean square errors.
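For readers who want a concrete starting point, a standard stratified Cox fit (the comparison point for the proposed alternative, not the proposed model itself) can be expressed with the lifelines package. The sketch below uses synthetic data; the NDHS variables and the paper's interaction structure are not reproduced:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "sex": rng.binomial(1, 0.5, n),
    "size": rng.binomial(1, 0.4, n),        # e.g. small-birth-size indicator
    "strata_z": rng.integers(0, 3, n),      # multilevel factor, e.g. breastfeeding level
})
# Stratum-specific baseline hazard, plus covariate effects
base = np.where(df["strata_z"] == 0, 0.05, 0.10)
t = rng.exponential(1.0 / (base * np.exp(0.4 * df["sex"] + 0.3 * df["size"])))
df["time"] = np.minimum(t, 60.0)            # administrative censoring at 60 months
df["event"] = (t <= 60.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", strata=["strata_z"])
cph.print_summary()  # separate baseline hazards per stratum, shared coefficients
```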

Keywords: stratified Cox, semiparametric model, infant mortality, multilevel factors, confounding variables

Procedia PDF Downloads 535
1916 Seismic Hazard Assessment of Tehran

Authors: Dorna Kargar, Mehrasa Masih

Abstract:

Due to its special geological and geographical conditions, Iran has always been exposed to various natural hazards. The earthquake is a natural hazard of random nature that can cause significant financial damage and casualties, a serious threat especially in areas with active faults. Therefore, considering the population density in some parts of the country, locating and zoning high-risk areas is necessary and significant. In the present study, a seismic hazard assessment via probabilistic and deterministic methods was carried out for Tehran, the capital of Iran, which is located in the Alborz-Azerbaijan seismotectonic province. The seismicity study covers a radius of 200 km around northern Tehran (35.74° N, 51.37° E) to identify the seismic sources and seismicity parameters of the study region. To identify the seismic sources, geological maps at the scale of 1:250,000 are used. In this study, we used the Kijko-Sellevoll method (1992) to estimate the seismicity parameters: the maximum likelihood estimation of the earthquake hazard parameters (maximum regional magnitude Mmax, activity rate λ, and the Gutenberg-Richter parameter b) from incomplete data files, extended to the case of uncertain magnitude values. By combining the seismicity and seismotectonic studies of the site, the acceleration that may occur with a given exceedance probability during the useful life of a structure is calculated with probabilistic and deterministic methods. Applying the results of the seismicity and seismotectonic studies, and applying proper weights to the attenuation relationships used, the maximum horizontal and vertical accelerations for return periods of 50, 475, 950 and 2475 years are calculated. The horizontal peak ground accelerations on the seismic bedrock for the 50, 475, 950 and 2475-year return periods are 0.12g, 0.30g, 0.37g and 0.50g, and the vertical peak ground accelerations are 0.08g, 0.21g, 0.27g and 0.36g, respectively.
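To illustrate how the estimated seismicity parameters feed the hazard calculation, the sketch below computes annual exceedance rates and return periods from a doubly truncated Gutenberg-Richter recurrence model, the family of model whose parameters (λ, b, Mmax) the Kijko-Sellevoll estimator returns; the parameter values are illustrative, not the study's:

```python
import numpy as np

def annual_exceedance_rate(m, rate_mmin, b, m_min=4.0, m_max=7.5):
    """Annual rate of events with magnitude >= m under a doubly truncated
    Gutenberg-Richter distribution, with activity rate 'rate_mmin' at m_min."""
    beta = b * np.log(10.0)
    num = np.exp(-beta * (m - m_min)) - np.exp(-beta * (m_max - m_min))
    den = 1.0 - np.exp(-beta * (m_max - m_min))
    return rate_mmin * np.clip(num / den, 0.0, None)

for m in (5.0, 6.0, 7.0):
    rate = annual_exceedance_rate(m, rate_mmin=2.0, b=0.9)
    print(f"M>={m}: {rate:.4f}/yr, return period {1.0 / rate:.0f} yr")
```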

Keywords: peak ground acceleration, probabilistic and deterministic, seismic hazard assessment, seismicity parameters

Procedia PDF Downloads 43
1915 Genetic Algorithm Methods for Determination of the Overflow Coefficient of Medium-Throat-Length Morning Glory Spillways Equipped with Crest Vortex Breakers

Authors: Roozbeh Aghamajidi

Abstract:

Shaft spillways are circular spillways generally used to release unexpected floods at earth and concrete dams. There are different types of shaft spillways: stepped and smooth. Stepped spillways pass larger discharges than smooth spillways, so awareness of the flow behavior of these spillways helps to use them better and more efficiently. Moreover, the use of vortex breakers has a great effect on the flow passing through a shaft spillway; to use them efficiently, the risk of the flow pressure decreasing below the fluid vapor pressure, called cavitation, should be prevented as far as possible. This research studies the behavior of the spillway with different vortex breaker shapes on the spillway crest, considering the effects of flow regime changes, changes of step dimensions and changes of discharge type. Two spillway models with three different vortex breakers in three arrangements were therefore used to assess the hydraulic characteristics of the flow. For each inlet discharge to the spillway, the pressure and flow velocity on the spillway surface were measured at several points after each run. This information leads to better design criteria for the spillway profile. To achieve these purposes, optimization plays an important role, and a genetic algorithm is utilized to study the emptying discharge. As a result, it turned out that the best spillway type, with the maximum discharge coefficient, is the smooth spillway with ogee-shaped vortex breakers in a three-breaker arrangement. It was also concluded that the genetic algorithm can be used to optimize the results.
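A compact sketch of the genetic-algorithm loop used for this kind of parameter search (selection, crossover and mutation over breaker shape and arrangement). The fitness function here is a hypothetical stand-in for the discharge-coefficient evaluation, which in the study would come from the physical model runs:

```python
import random

SHAPES = ["ogee", "triangular", "rectangular"]
ARRANGEMENTS = [3, 4, 6]

def fitness(ind):
    # Hypothetical surrogate for the measured discharge coefficient
    shape_score = {"ogee": 0.75, "triangular": 0.68, "rectangular": 0.64}[ind[0]]
    return shape_score - 0.01 * abs(ind[1] - 3)  # favors the 3-breaker arrangement

def evolve(pop_size=20, generations=40, mut_rate=0.2, rng=random.Random(7)):
    pop = [(rng.choice(SHAPES), rng.choice(ARRANGEMENTS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])                  # one-point crossover
            if rng.random() < mut_rate:           # mutation of the shape gene
                child = (rng.choice(SHAPES), child[1])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # -> ('ogee', 3) under this surrogate fitness
```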

Keywords: shaft spillway, vortex breaker, flow, genetic algorithm

Procedia PDF Downloads 348