Search results for: Premature Failure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2487

1647 Static and Dynamic Tailings Dam Monitoring with Accelerometers

Authors: Cristiana Ortigão, Antonio Couto, Thiago Gabriel

Abstract:

In the wake of the Samarco Fundão tailings dam failure in 2015, followed by Vale’s Brumadinho disaster in 2019, the Brazilian National Mining Agency started a comprehensive dam safety program to rank dam safety risks and establish monitoring and analysis procedures. This paper focuses on the use of accelerometers for static and dynamic applications. Static applications may employ tiltmeters, as shown in an example later in this paper. Dynamic monitoring of a structure with accelerometers yields its dynamic signature; this technique has also been used successfully in Brazil, and this paper gives an example from a tailings dam.

Keywords: instrumentation, dynamic, monitoring, tailings, dams, tiltmeters, automation

Procedia PDF Downloads 116
1646 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and is projected to cost world economies $2.6 trillion by 2060 through sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to reduce the inaccuracies, weaknesses, and biases of any one individual model. Over time, the framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurements. The results showed good agreement between the framework’s predictions and real-life observations, with an overall model accuracy of 92%. The combined model predicts more accurately than any of the individual models and reliably forecasts season-based variations in air quality levels. The top air quality predictor variables were identified by measuring the mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
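The ensemble step described above, averaging the outputs of the three top-performing models, can be sketched as follows. This is a minimal illustration only: the actual models, features, and any weighting scheme are not specified in the abstract, and the probability values below are hypothetical.

```python
from statistics import mean

def ensemble_predict(model_probs):
    """Combine several models by averaging their predicted probabilities
    that air quality falls in a given category (e.g. 'unhealthy').
    An unweighted mean stands in for the paper's averaging of its
    three best-performing models."""
    return mean(model_probs)

# hypothetical probabilities from three trained models
combined = ensemble_predict([0.80, 0.70, 0.90])
```

Averaging tends to cancel the individual models' uncorrelated errors, which is the stated motivation for combining the three top performers rather than trusting any single one.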

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 103
1645 Reliability-Based Life-Cycle Cost Model for Engineering Systems

Authors: Reza Lotfalian, Sudarshan Martins, Peter Radziszewski

Abstract:

The effect of reliability on the life-cycle cost of a system, including its initial and maintenance costs, is studied. The failure probability of a component is used to calculate the average maintenance cost over the component's operating cycle. The standard deviation of the life-cycle cost is also calculated as an error measure for the average life-cycle cost. As a numerical example, the model is used to study the average life-cycle cost of an electric motor.
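A rough sketch of the kind of model described is given below, under stated simplifying assumptions: one failure opportunity per year with a constant probability, a fixed repair cost per failure, and independence between years. The paper's actual reliability formulation may differ, and the motor figures are hypothetical.

```python
import math

def life_cycle_cost(initial, repair_cost, p_fail, years):
    """Expected life-cycle cost and its standard deviation when, in each
    of `years` operating years, the component fails with probability
    p_fail and each failure incurs repair_cost. The standard deviation
    follows from the binomial count of failures over the cycle."""
    mean_cost = initial + years * p_fail * repair_cost
    std_cost = repair_cost * math.sqrt(years * p_fail * (1.0 - p_fail))
    return mean_cost, std_cost

# hypothetical electric motor: 1000 USD initial cost, 200 USD per repair,
# 5% annual failure probability, 10-year operating cycle
m, s = life_cycle_cost(1000.0, 200.0, 0.05, 10)
```

Here the standard deviation plays exactly the role the abstract assigns it: an error bar on the average life-cycle cost.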

Keywords: initial cost, life-cycle cost, maintenance cost, reliability

Procedia PDF Downloads 581
1644 Magnitude of Meconium Stained Amniotic Fluid and Associated Factors among Women Who Gave Birth in North Shoa Zone Hospital’s Amhara Region Ethiopia 2022

Authors: Mitiku Tefera

Abstract:

Background: Meconium-stained amniotic fluid is one of the primary causes of birth asphyxia. Each year, over five million neonatal deaths occur worldwide due to meconium-stained amniotic fluid, with 90% of these deaths due to birth asphyxia. In Ethiopia, meconium-stained amniotic fluid is under-investigated, specifically in the North Shoa Zone of the Amhara region. Objective: The aim of this study was to assess the magnitude of meconium-stained amniotic fluid and associated factors among women who gave birth in North Shoa Zone hospitals, Amhara Region, Ethiopia, in 2022. Methods: An institution-based, cross-sectional study was conducted among 628 women who gave birth at North Shoa Zone hospitals, Amhara, Ethiopia. The study was conducted from 08 June to 08 August 2022. Two-stage cluster sampling was used to recruit study participants. The data were collected using a structured interviewer-administered questionnaire and chart review, entered into Epi-Data version 4.6, and exported to SPSS version 25. Logistic regression was employed, and a p-value < 0.05 was considered significant. Result: The magnitude of meconium-stained amniotic fluid was 30.3%. Women presenting with a normal hematocrit level were 83% less likely to develop meconium-stained amniotic fluid. A mid-upper arm circumference of less than 22.9 cm (AOR = 1.9; 95% CI: 1.18-3.20), obstructed labor (AOR = 3.6; 95% CI: 1.48-8.83), prolonged labor ≥ 15 h (AOR = 7.5; 95% CI: 7.68-13.3), premature rupture of the membranes (AOR = 1.7; 95% CI: 3.22-7.40), fetal tachycardia (AOR = 6.2; 95% CI: 2.41-16.3), and bradycardia (AOR = 3.1; 95% CI: 1.93-5.28) were significantly associated with meconium-stained amniotic fluid. Conclusion: The magnitude of meconium-stained amniotic fluid was high. In this study, a MUAC value < 22.9 cm, obstructed and prolonged labor, PROM, bradycardia, and tachycardia were factors associated with meconium-stained amniotic fluid. A follow-up study and pooling of similar articles are recommended for better evidence, enhancing intrapartum services and strengthening early detection of meconium-stained amniotic fluid for the health of mother and baby.

Keywords: women, meconium-stained amniotic fluid, magnitude, Ethiopia

Procedia PDF Downloads 107
1643 Serious Digital Video Game for Solving Algebraic Equations

Authors: Liliana O. Martínez, Juan E González, Manuel Ramírez-Aranda, Ana Cervantes-Herrera

Abstract:

A serious-game mobile application called Math Dominoes is presented. The main objective of this application is to strengthen the teaching-learning process of solving algebraic equations; it is based on the board game "Double 6" dominoes. Math Dominoes allows the practice of solving first-, second-, and third-degree algebraic equations. The application is aimed at students who seek to strengthen their skills in solving algebraic equations in a dynamic, interactive, and fun way, reducing the risk of failure in subsequent courses that require mastery of this algebraic tool.

Keywords: algebra, equations, dominoes, serious games

Procedia PDF Downloads 111
1642 Effect of Soil Corrosion in Failures of Buried Gas Pipelines

Authors: Saima Ali, Pathamanathan Rajeev, Imteaz A. Monzur

Abstract:

In this paper, a brief review of the corrosion mechanisms and failure modes of buried pipes is provided, together with the available corrosion models. A sensitivity analysis is performed to understand the influence of the corrosion model parameters on the remaining-life estimate, and a probabilistic analysis propagates the uncertainty in the corrosion model into the estimate of the remaining life of the pipe. Finally, the corrosion models are compared on the basis of their remaining-life estimates in order to improve the renewal plan.
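As an illustration of how a corrosion model feeds a remaining-life estimate, consider the widely used power-law pit-growth form d(t) = k·tⁿ. The abstract does not name the specific models compared, so the form, and the values of k and n below, are assumptions for the sketch.

```python
def remaining_life(wall_thickness_mm, k, n, age_years):
    """Years until a corrosion pit growing as d(t) = k * t**n penetrates
    the pipe wall, minus the pipe's current age. Inverting d(t) for the
    wall thickness gives the time to failure; parameters illustrative."""
    t_fail = (wall_thickness_mm / k) ** (1.0 / n)
    return max(t_fail - age_years, 0.0)

# hypothetical pipe: 6 mm wall, linear pit growth of 0.3 mm/year, 12 years old
life = remaining_life(6.0, 0.3, 1.0, 12.0)
```

The sensitivity analysis in the paper amounts to perturbing k and n in a model of this kind and observing how strongly the remaining-life output moves.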

Keywords: corrosion, pit depth, sensitivity analysis, exposure period

Procedia PDF Downloads 506
1641 Murine Pulmonary Responses after Sub-Chronic Exposure to Environmental Ultrafine Particles

Authors: Yara Saleh, Sebastien Antherieu, Romain Dusautoir, Jules Sotty, Laurent Alleman, Ludivine Canivet, Esperanza Perdrix, Pierre Dubot, Anne Platel, Fabrice Nesslany, Guillaume Garcon, Jean-Marc Lo-Guidice

Abstract:

Air pollution is one of the leading causes of premature death worldwide. Among air pollutants, particulate matter (PM) is a major health risk factor through the induction of cardiopulmonary diseases and lung cancers. It comprises coarse, fine, and ultrafine particles (PM10, PM2.5, and PM0.1, respectively). Ultrafine particles are emerging unregulated pollutants that might have greater toxicity than larger particles, since, for a given mass, they are more numerous and consequently have a higher surface area per unit of mass. Our project aims to develop a relevant in vivo model of sub-chronic exposure to atmospheric particles in order to elucidate the specific respiratory impact of ultrafine particles compared to fine particulate matter. Quasi-ultrafine (PM0.18) and fine (PM2.5) particles were collected in the urban industrial zone of Dunkirk in northern France during a 7-month campaign and submitted to physico-chemical characterization. BALB/c mice were then exposed intranasally to 10 µg of PM0.18 or PM2.5 three times a week. After 1 or 3 months of exposure, bronchoalveolar lavages (BAL) were performed and lung tissues were harvested for histological and transcriptomic analyses. The physico-chemical study of the collected particles shows no major difference in elemental and surface chemical composition between PM0.18 and PM2.5. Furthermore, the cytological analyses show that both types of particulate fractions can be internalized in lung cells. However, the cell count in BAL and preliminary transcriptomic data suggest that PM0.18 could be more reactive and induce a stronger lung inflammation in exposed mice than PM2.5. Complementary studies are in progress to confirm these first data and to identify the metabolic pathways more specifically associated with the toxicity of ultrafine particles.

Keywords: environmental pollution, lung effects, mice, ultrafine particles

Procedia PDF Downloads 223
1640 Coffee Consumption and Glucose Metabolism: a Systematic Review of Clinical Trials

Authors: Caio E. G. Reis, Jose G. Dórea, Teresa H. M. da Costa

Abstract:

Objective: Epidemiological data show an inverse association between coffee consumption and the risk of type 2 diabetes mellitus. However, the clinical effects of coffee consumption on glucose metabolism biomarkers remain controversial. Thus, this paper reviews clinical trials that evaluated the effects of coffee consumption on glucose metabolism. Research Design and Methods: We identified studies published up to December 2014 by searching electronic databases and reference lists. We included randomized clinical trials in which the intervention group received caffeinated and/or decaffeinated coffee, the control group received water or placebo treatments, and biomarkers of glucose metabolism were measured. The Jadad score was applied to evaluate the quality of the studies, and only studies scoring ≥ 3 points were considered for the analyses. Results: Seven clinical trials (237 subjects in total) were analyzed, involving healthy, overweight, and diabetic adults. The studies were divided into short-term (1 to 3 h) and long-term (2 to 16 weeks) durations. The results for short-term studies showed that caffeinated coffee consumption may increase the area under the curve of the glucose response, while in long-term studies caffeinated coffee may improve glycemic metabolism by reducing the glucose curve and increasing the insulin response. These results suggest that the benefits of coffee consumption occur in the long term, as has been shown by the reduction of type 2 diabetes mellitus risk in epidemiological studies. Nevertheless, until the relationship between long-term coffee consumption and type 2 diabetes mellitus is better understood and any mechanism involved is identified, it is premature to make claims about coffee preventing type 2 diabetes mellitus. Conclusion: The findings suggest that caffeinated coffee may impair glucose metabolism in the short term, but in the long term the studies indicate a reduction of type 2 diabetes mellitus risk. More clinical trials with comparable methodology are needed to unravel this paradox.
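The acute glucose response discussed above is typically quantified as an area under the curve; a minimal trapezoidal-rule sketch follows. The trials' exact AUC method is not stated in the abstract, and the sampled glucose values below are hypothetical.

```python
def glucose_auc(times_min, glucose_mmol):
    """Area under a glucose-response curve by the trapezoidal rule,
    a standard way to quantify the acute glycaemic response
    (units here: min x mmol/L)."""
    auc = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        auc += dt * (glucose_mmol[i] + glucose_mmol[i - 1]) / 2.0
    return auc

# hypothetical 2-hour response sampled every 30 minutes
auc = glucose_auc([0, 30, 60, 90, 120], [5.0, 8.0, 7.0, 6.0, 5.5])
```

Comparing this quantity between coffee and placebo arms is what lets a short-term trial say that caffeinated coffee "increases the area under the curve."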

Keywords: coffee, diabetes mellitus type 2, glucose, insulin

Procedia PDF Downloads 446
1639 Shear Strength Characterization of Coal Mine Spoil in Very-High Dumps with Large Scale Direct Shear Testing

Authors: Leonie Bradfield, Stephen Fityus, John Simmons

Abstract:

The shearing behavior of current and planned coal mine spoil dumps up to 400m in height is studied using large-sample-high-stress direct shear tests performed on a range of spoils common to the coalfields of Eastern Australia. The motivation for the study is to address industry concerns that some constructed spoil dump heights ( > 350m) are exceeding the scale ( ≤ 120m) for which reliable design information exists, and because modern geotechnical laboratories are not equipped to test representative spoil specimens at field-scale stresses. For more than two decades, shear strength estimation for spoil dumps has been based on either infrequent, very small-scale tests where oversize particles are scalped to comply with device specimen size capacity such that the influence of prototype-sized particles on shear strength is not captured; or on published guidelines that provide linear shear strength envelopes derived from small-scale test data and verified in practice by slope performance of dumps up to 120m in height. To date, these published guidelines appear to have been reliable. However, in the field of rockfill dam design there is a broad acceptance of a curvilinear shear strength envelope, and if this is applicable to coal mine spoils, then these industry-accepted guidelines may overestimate the strength and stability of dumps at higher stress levels. The pressing need to rationally define the shearing behavior of more representative spoil specimens at field-scale stresses led to the successful design, construction and operation of a large direct shear machine (LDSM) and its subsequent application to provide reliable design information for current and planned very-high dumps. The LDSM can test at a much larger scale, in terms of combined specimen size (720mm x 720mm x 600mm) and stress (σn up to 4.6MPa), than has ever previously been achieved using a direct shear machine for geotechnical testing of rockfill. 
The results of an extensive LDSM testing program on a wide range of coal-mine spoils are compared to a published framework that is widely accepted by the Australian coal mining industry as the standard for shear strength characterization of mine spoil. A critical outcome is that the LDSM data highlights several non-compliant spoils, and stress-dependent shearing behavior, for which the correct application of the published framework will not provide reliable shear strength parameters for design. Shear strength envelopes developed from the LDSM data are also compared with dam engineering knowledge, where the failure envelopes of rockfills are curved in a concave-down manner. The LDSM data indicates that shear strength envelopes for coal-mine spoils abundant in rock fragments are not in fact curved, and that the shape of the failure envelope is ultimately determined by the strength of the rock fragments. Curvilinear failure envelopes were found to be appropriate for soil-like spoils containing minor or no rock fragments, or hard-soil aggregates.
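A concave-down (curvilinear) strength envelope of the kind discussed above is commonly parameterised as a power law, τ = A·σᵇ with b < 1, fitted in log-log space. The sketch below shows such a fit; the data points are synthetic, not LDSM results, and the power-law form is an assumption borrowed from rockfill practice rather than the paper's own fitting procedure.

```python
import math

def fit_power_envelope(sigmas, taus):
    """Fit the curvilinear strength envelope tau = A * sigma**b by
    least squares on log(tau) vs log(sigma); b < 1 gives the
    concave-down shape seen in rockfill failure envelopes."""
    xs = [math.log(s) for s in sigmas]
    ys = [math.log(t) for t in taus]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    A = math.exp(my - b * mx)
    return A, b

# normal stresses (MPa) and synthetic concave-down shear strengths (MPa)
sigmas = [0.5, 1.0, 2.0, 4.0]
taus = [0.9 * s ** 0.85 for s in sigmas]
A, b = fit_power_envelope(sigmas, taus)
```

Whether b is meaningfully below 1 at high stress is precisely the question the very-high-dump data addresses: a fitted b near 1 corresponds to the linear envelopes found for rock-fragment-rich spoils.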

Keywords: coal mine, direct shear test, high dump, large scale, mine spoil, shear strength, spoil dump

Procedia PDF Downloads 151
1638 Understanding the Influence of Fibre Meander on the Tensile Properties of Advanced Composite Laminates

Authors: Gaoyang Meng, Philip Harrison

Abstract:

When manufacturing composite laminates, the fibre directions within the laminate are never perfectly straight and inevitably contain some degree of stochastic in-plane waviness or ‘meandering’. In this work we aim to understand the relationship between the degree of meandering of the fibre paths and the resulting uncertainty in the laminate’s final mechanical properties. To do this, a numerical tool is developed to automatically generate meandering fibre paths in each of the laminate's 8 plies (using Matlab); after mapping this information into finite element simulations (using Abaqus), the statistical variability of the tensile mechanical properties of a [45°/90°/-45°/0°]s carbon/epoxy (IM7/8552) laminate is predicted. The stiffness, first-ply failure strength, and ultimate failure strength are obtained. Results are generated by inputting the degree of variability in the fibre paths, and the laminate is then examined in all directions (from 0° to 359° in increments of 1°). The resulting predictions are output as flower (polar) plots for convenient analysis. The average fibre orientation of each ply in a given laminate is determined by the laminate layup code [45°/90°/-45°/0°]s. However, in each case, the plies contain increasingly large amounts of in-plane waviness (quantified by the standard deviation of the fibre direction in each ply across the laminate). Four different amounts of variability in the fibre direction are tested (2°, 4°, 6° and 8°). Results show that both the average tensile stiffness and the average tensile strength decrease, while the standard deviations increase, with an increasing degree of fibre meander. The variability in stiffness is found to be relatively insensitive to the rotation angle, but the variability in strength is sensitive. Specifically, the uncertainty in laminate strength is relatively low at orientations centred around multiples of the 45° rotation angle, and relatively high between these rotation angles.
To concisely represent all the information contained in the various polar plots, rotation-angle-dependent Weibull distribution equations are fitted to the data. The resulting equations can be used to quickly estimate the size of the error bars for the different mechanical properties resulting from the amount of fibre directional variability contained within the laminate. A longer-term goal is to use these equations to quickly introduce realistic variability at the component level.
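The first step of the workflow above, generating stochastic fibre directions with a chosen standard deviation per ply, can be sketched as Gaussian perturbations around the nominal ply angle. This is a stand-in for the authors' Matlab path generator, whose details (spatial correlation of the waviness, in particular) are not given in the abstract.

```python
import random

def meandered_ply_angles(nominal_deg, std_deg, n_points, seed=0):
    """Sample fibre directions along a ply as the nominal layup angle
    plus Gaussian in-plane waviness with standard deviation std_deg
    (2, 4, 6 or 8 degrees in the study). Samples are independent here;
    a real fibre path would vary smoothly along its length."""
    rng = random.Random(seed)
    return [nominal_deg + rng.gauss(0.0, std_deg) for _ in range(n_points)]

# a 45-degree ply with 4-degree waviness, sampled at 100 points
angles = meandered_ply_angles(45.0, 4.0, 100)
```

Feeding fields like this into element-wise material orientations is what lets the finite element model turn a waviness standard deviation into distributions of stiffness and strength.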

Keywords: advanced composite laminates, FE simulation, in-plane waviness, tensile properties, uncertainty quantification

Procedia PDF Downloads 71
1637 Criticality Assessment Model for Water Pipelines Using Fuzzy Analytical Network Process

Authors: A. Assad, T. Zayed

Abstract:

Water networks (WNs) are responsible for providing adequate amounts of safe, high-quality water to the public. Like other critical infrastructure systems, WNs are subject to deterioration, which increases the number of breaks and leaks and lowers water quality. In Canada, 35% of water assets require critical attention, and there is a significant gap between the needed and the implemented investments. Thus, the need for efficient rehabilitation programs is becoming more urgent given the paradigm of aging infrastructure and tight budgets. The first step towards developing such programs is to formulate a performance index that reflects the current condition of water assets along with their criticality. While numerous studies in the literature have focused on various aspects of condition assessment and reliability, limited efforts have investigated the criticality of such components. Critical water mains are those whose failure causes significant economic, environmental, or social impacts on a community. Including criticality in computing the performance index will serve as a prioritizing tool for the optimal allocation of the available resources and budget. In this study, several social, economic, and environmental factors that dictate the criticality of water pipelines were elicited from the literature. Expert opinions were sought to provide pairwise comparisons of the importance of these factors. Subsequently, fuzzy logic along with the Analytical Network Process (ANP) was utilized to calculate the weights of the criteria factors. Multi-Attribute Utility Theory (MAUT) was then employed to integrate these weights with the attribute values of several pipelines in the Montreal WN. The result is a criticality index (0-1) that quantifies the severity of the consequences of failure of each pipeline.
A novel contribution of this approach is that it accounts both for the interdependency between criteria factors and for the inherent uncertainties in calculating the criticality. The practical value of the current study is represented by the automated Excel-MATLAB tool, which can be used by utility managers and decision makers in planning future maintenance and rehabilitation activities where high efficiency in the use of materials and time is required.
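The final MAUT aggregation step can be sketched as a weighted sum of normalised attribute utilities. The factor names, weights, and utility values below are hypothetical; in the paper the weights come from fuzzy ANP applied to expert pairwise comparisons, which this sketch does not reproduce.

```python
def criticality_index(weights, utilities):
    """MAUT-style aggregation: criticality = sum of criteria weights
    times normalised attribute utilities, both in [0, 1], yielding a
    0-1 index of the severity of the consequence of failure."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * u for w, u in zip(weights, utilities))

# hypothetical pipeline: economic, social, environmental utilities
ci = criticality_index([0.5, 0.3, 0.2], [0.8, 0.6, 0.4])
```

Because the weights sum to one and each utility lies in [0, 1], the resulting index is guaranteed to stay in [0, 1], which is what makes it usable for ranking pipelines against each other.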

Keywords: water networks, criticality assessment, asset management, fuzzy analytical network process

Procedia PDF Downloads 132
1636 Vancomycin Resistance Enterococcus and Implications to Trauma and Orthopaedic Care

Authors: O. Davies, K. Veravalli, P. Panwalkar, M. Tofighi, P. Butterick, B. Healy, A. Mofidi

Abstract:

Vancomycin-resistant enterococcus (VRE) infection is a condition that usually impacts ICUs and transplant, dialysis, and cancer units, often as a nosocomial infection. After an outbreak in the acute trauma and orthopaedic unit in Morriston Hospital, we aimed to assess the conditions that predispose to VRE infections in our unit. Thirteen cases of VRE infection and five cases of VRE colonisation were identified in patients treated for orthopaedic care between 1/1/2020 and 1/11/2021. Cases were reviewed to identify predisposing factors, specifically looking at age, presenting condition and treatment, presence of infection and antibiotic care, active haemo-oncological conditions, long-term renal dialysis, previous hospitalisation, VRE predisposition and clearance (PREVENT) scores, and outcome of care. The presenting condition, treatment, presence of postoperative infection, VRE scores, and age were compared between the colonised and infected cohorts. The VRE type in both the colonised and infected groups was Enterococcus faecium in all but one patient. The colonised group had the same age (t = 0.6, p > 0.05) and sex distribution (χ² = 0.115, p = 0.74), and the same presenting condition and treatment, which consisted of peri-femoral fixation or arthroplasty in all patients. The infected group had one case of myelodysplasia and four cases of chronic renal failure requiring dialysis. All of the infected patients had sustained an infected complication of their fracture fixation or arthroplasty requiring reoperation and antibiotics. The infected group had an average VRE predisposition score of 8.5 versus 3 in the colonised group (F = 36, p < 0.001). The PREVENT score was 7 in the infected group and 2 in the colonised group (F = 153, p < 0.001). Six patients (55%) succumbed to their infection, and one VRE infection resulted in limb loss. In the orthopaedic cohort, VRE infection is a nosocomial condition that has a peri-femoral predilection and is seen in association with immunosuppression or renal failure.
The VRE-infected cohort had been treated for an infective complication of the original surgery in the weeks prior to VRE infection. Based on our findings, we advise avoidance of infective complications, a change of practice in the use of antibiotics, and the use of radical surgery and surveillance for VRE infections beyond infective precautions. The PREVENT score indicates that the infected group, unlike the colonised group, is unlikely to clear VRE in the future.

Keywords: surgical site infection, enterococcus, orthopaedic surgery, vancomycin resistance

Procedia PDF Downloads 122
1635 Production Structures of Energy Based on Water Force, Its Infrastructure Protection, and Possible Causes of Failure

Authors: Gabriela-Andreea Despescu, Mădălina-Elena Mavrodin, Gheorghe Lăzăroiu, Florin Adrian Grădinaru

Abstract:

The purpose of this paper is to contribute to the enhancement of hydroelectric plant protection by coordinating existing protection and security measures and introducing new measures under a risk management process. The plan also identifies the key critical elements of a hydroelectric plant and the vulnerabilities and threats to which it is subjected, in order to establish the protection measures necessary to reduce the level of risk.

Keywords: critical infrastructure, risk analysis, critical infrastructure protection, vulnerability, risk management, turbine, impact analysis

Procedia PDF Downloads 528
1634 Economic Cost of Malaria: A Threat to Household Income in Nigeria

Authors: Nsikan Affiah, Kayode Osungbade, Williams Uzoma

Abstract:

Malaria remains one of the major killers of humans worldwide, threatening the lives of more than one-third of the world’s population. Some people refers it to; a disease of poverty because it contributes towards national poverty through its impact on foreign direct investment, tourism, labour productivity, and trade. At the micro level, it may cause poverty through spending on health care, income losses, and premature deaths. Unfortunately, malaria is a disease that affects both low-income household and its high-income counterpart, but low-income households are still at greater risk because significant part of the available monthly income is dedicated to various preventive and treatment measures. The objective of this study is to estimate direct and indirect cost of malaria treatment in households in a section of South-South Region (Akwa Ibom State) of Nigeria. A cross-sectional study of Six Hundred and Forty (640) heads of households or any adult representative of households in three local government areas of Akwa Ibom State, Nigeria from May 1-31, 2015 were ascertained through interviewer-administered questionnaire adapted from Nigerian Malaria Indicator Survey Report. The clustering technique was used to select 640 households with the help of Primary Health Care (PHC) house numbering system. Using exchange rate of 197 Naira/USD, result shows that direct cost of malaria treatment was 8,894.44 USD while the indirect cost of malaria treatment was 11,012.81 USD. Total cost of treatment made up of 44.7% direct cost and 55.3% indirect cost, with average direct cost of malaria treatment per household estimated at 20.6 USD and the average indirect cost of treatment per household estimated at 25.1 USD. Average total cost for each episode (888) of malaria was estimated at 22.4 USD. While at household level, the average total cost was estimated at 45.5 USD. 
From the average total cost, low-income households would spend 36% of monthly household income on treating malaria and the impact could be said to be catastrophic, compared to high-income households where only 1.2% of monthly household income is spent on malaria treatment. It could be concluded that the cost of malaria treatment is well beyond the means of households and given the reality of repeated bouts of malaria and its contribution to the impoverishment of households, there is a need for urgent action.
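The reported cost breakdown can be reproduced with a few lines of arithmetic. The sketch below uses only the totals quoted in the abstract; the per-household averages reported in the text appear to use a different denominator and are not recomputed here.

```python
# Cost aggregation using the figures quoted in the abstract (USD, at 197 Naira/USD).
direct_total = 8894.44     # direct cost of malaria treatment
indirect_total = 11012.81  # indirect cost of malaria treatment
episodes = 888             # malaria episodes recorded

total = direct_total + indirect_total
direct_share = direct_total / total   # fraction of the total that is direct cost
avg_per_episode = total / episodes    # average total cost per episode

print(f"direct {direct_share:.1%}, indirect {1 - direct_share:.1%}, "
      f"per episode {avg_per_episode:.1f} USD")
```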

Keywords: direct cost, indirect cost, low income households, malaria

Procedia PDF Downloads 235
1633 Reliability Analysis of a Fuel Supply System in Automobile Engine

Authors: Chitaranjan Sharma

Abstract:

The present paper deals with the analysis of the fuel supply system in the engine of a four-wheeled automobile that can run on either of two fuels, petrol or CNG. Since CNG is cheaper than petrol, priority is given to consuming CNG rather than petrol, and an automatic switch starts the petrol supply upon failure of the CNG supply. Using the regenerative point technique with a Markov renewal process, reliability characteristics which are useful to system designers are obtained.
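As a minimal illustration of the kind of Markov model involved, the sketch below computes the steady-state availability of a three-state continuous-time Markov chain for a primary CNG supply with a petrol standby. The failure and repair rates are hypothetical placeholders, not values from the paper, and this simple chain stands in for the full regenerative point analysis.

```python
import numpy as np

# Hypothetical rates (per hour) -- illustrative, not from the paper.
lam_cng, lam_pet, mu = 0.01, 0.02, 0.5

# States: 0 = CNG supplying (petrol on standby), 1 = CNG failed, petrol
# supplying (automatic switch), 2 = both supplies failed (system down).
Q = np.array([
    [-lam_cng,          lam_cng,     0.0],
    [mu,       -(mu + lam_pet),  lam_pet],
    [0.0,               mu,          -mu],
])

# Steady state: solve pi @ Q = 0 subject to pi.sum() == 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]  # system is up whenever either fuel is supplied
print(f"steady-state availability = {availability:.6f}")
```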

Keywords: reliability, redundancy, repair time, transition probability, regenerative points, Markov renewal process

Procedia PDF Downloads 536
1632 Predicting Success and Failure in Drug Development Using Text Analysis

Authors: Zhi Hao Chow, Cian Mulligan, Jack Walsh, Antonio Garzon Vico, Dimitar Krastev

Abstract:

Drug development is resource-intensive, time-consuming, and increasingly expensive with each developmental stage. The success rates of drug development are also relatively low, and the resources committed are wasted with each failed candidate. As such, a reliable method of predicting the success of drug development is in demand. The hypothesis was that some failed drug candidates are pushed through developmental pipelines based on false confidence and may possess common linguistic features identifiable through sentiment analysis. Here, the concept of using text analysis to discover such features in research publications and investor reports as predictors of success was explored. RStudio was used to perform text mining and lexicon-based sentiment analysis to identify affective phrases and determine their frequency in each document, and SPSS was then used to determine the relationship between the defined variables and the accuracy of predicting outcomes. A total of 161 publications were collected and categorised into 4 groups: (i) cancer treatment, (ii) neurodegenerative disease treatment, (iii) vaccines, and (iv) others (all other drugs that do not fit into the first 3 categories). Text analysis was then performed on each document using 2 separate lexicons (BING and AFINN) within each category of drugs to determine the frequency of positive or negative phrases in each document. Relative positivity and negativity values were then calculated by dividing the frequency of affective phrases by the word count of each document. Regression analysis was then performed in SPSS on each dataset (values from using the BING or AFINN lexicon during text analysis) using a random selection of 61 documents to construct a model. The remaining documents were then used to determine the predictive power of the models. The model constructed from BING predicted the outcome of drug performance in clinical trials with an overall accuracy of 65.3%. The AFINN model had a lower accuracy at predicting outcomes than the BING model, at 62.5%, and was not effective at predicting the failure of drugs in clinical trials. Overall, the study did not show significant efficacy of the models at predicting the outcomes of drugs in development, and many improvements may need to be made in later iterations to increase the accuracy sufficiently.
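The lexicon-based scoring step can be sketched as follows. The study used the BING and AFINN lexicons in R; the toy Python version below substitutes a tiny hand-made lexicon to show how relative positivity and negativity values are obtained by dividing phrase frequency by word count.

```python
# Minimal lexicon-based sentiment sketch (the study used the BING and AFINN
# lexicons in R; this tiny lexicon is a stand-in for illustration only).
POSITIVE = {"significant", "promising", "robust", "improved", "efficacious"}
NEGATIVE = {"failed", "adverse", "toxicity", "inconclusive", "discontinued"}

def relative_sentiment(text: str) -> tuple[float, float]:
    """Return (relative positivity, relative negativity) for a document."""
    words = text.lower().split()
    n = len(words)
    pos = sum(w.strip(".,;:") in POSITIVE for w in words)
    neg = sum(w.strip(".,;:") in NEGATIVE for w in words)
    # Relative values: affective-phrase frequency divided by word count.
    return pos / n, neg / n

doc = "The candidate showed promising and significant efficacy despite mild toxicity."
print(relative_sentiment(doc))  # → (0.2, 0.1)
```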

Keywords: data analysis, drug development, sentiment analysis, text-mining

Procedia PDF Downloads 132
1631 Calpains; Insights Into the Pathogenesis of Heart Failure

Authors: Mohammadjavad Sotoudeheian

Abstract:

The prevalence of heart failure (HF), a global cardiovascular problem, is increasing gradually. A variety of molecular mechanisms contribute to HF: proteins involved in regulating cardiac contractility, such as ion channels and calcium handling proteins, are altered; epigenetic modifications and changes in gene expression can impair cardiac function; and inflammation and oxidative stress also contribute. The progression of HF can further be attributed to mitochondrial dysfunction, which impairs energy production and increases apoptosis. Molecular mechanisms such as these contribute to the development of cardiomyocyte defects and HF, and they can be therapeutically targeted. The heart's contractile function is controlled by cardiomyocytes, whose function is regulated by calpain and related molecules, including Bax, VEGF, and AMPK. Bax facilitates and regulates cardiomyocyte apoptosis. VEGF, produced by cardiomyocytes during inflammation and cytokine stress, regulates cardiomyocyte survival, contractility, wound healing, and proliferation. AMPK, an enzyme with an active role in energy metabolism, also influences cardiomyocyte proliferation and survival. All of these play key roles in apoptosis, angiogenesis, hypertrophy, and metabolism during myocardial inflammation. Calpains have been linked to several molecular pathways, with important roles in signal transduction and apoptosis as well as autophagy, endocytosis, and exocytosis. These calcium-dependent cysteine proteases regulate cell death and survival by cleaving proteins, and the resulting protein fragments serve various cellular functions; by cleaving adhesion and motility proteins, calpains also contribute to cell migration. Calpain-mediated pathways may thus bring about HF. 
In the presence of calcium, calmodulin activates calpain, and calcium-stimulated calpains in turn increase matrix metalloproteinases (MMPs). In order to develop novel treatments for these diseases, we must understand how this pathway works. Calpains are involved in a variety of myocardial remodeling processes, including remodeling of the extracellular matrix and hypertrophy of cardiomyocytes, and they also help maintain cardiac homeostasis through apoptosis and autophagy. The development of HF may be due in part to calpain-mediated pathways promoting cardiomyocyte death. Numerous studies have suggested the importance of the Ca²⁺-dependent protease calpain in cardiac physiology and pathology, so it is important to consider this pathway when developing and testing therapeutic options in humans that target calpain in HF. Calpain molecular pathways are involved in apoptosis, autophagy, endocytosis, exocytosis, signal transduction, and disease progression; calpain inhibitors might therefore have therapeutic potential, and they have been investigated in preclinical models of several conditions in which the enzyme has been implicated. Ca²⁺-dependent proteases, the calpains, contribute to adverse ventricular remodeling and HF in multiple experimental models. In this manuscript, we discuss the important roles of the calpain molecular pathway in HF development.

Keywords: calpain, heart failure, autophagy, apoptosis, cardiomyocyte

Procedia PDF Downloads 54
1630 Gender Differences in Objectively Assessed Physical Activity among Urban 15-Year-Olds

Authors: Marjeta Misigoj Durakovic, Maroje Soric, Lovro Stefan

Abstract:

Background and aim: Physical inactivity has been linked with increased morbidity and premature mortality, and adolescence has been recognised as a critical period for a decline in physical activity (PA) level. In order to properly direct interventions aimed at increasing PA, high-risk groups of individuals should be identified. Therefore, the aim of this study is to describe gender differences in: a) PA level; b) weekly PA patterns. Methods: This investigation is part of the CRO-PALS study, an on-going longitudinal study conducted in a representative sample of urban youth in Zagreb (Croatia). CRO-PALS involves 903 adolescents, and for the purpose of this study, data from a subgroup of 190 participants with information on objective PA level were analysed (116 girls; mean age [SD]=15.6[0.3] years). Duration of moderate and vigorous PA was measured during 5 consecutive days by a multiple-sensor physical activity monitor (SenseWear Armband, BodyMedia Inc., Pittsburgh, USA). Gender differences in PA level were evaluated using an independent samples t-test. Differences between school-week and weekend levels of activity were assessed using a mixed ANOVA with gender as the between-subjects factor. The amount of vigorous PA had to be log-transformed to achieve normality of the distribution. Results: Boys were more active than girls. Duration of moderate-to-vigorous PA averaged 111±44 min/day in boys and 80±38 min/day in girls (mean difference=31 min/day, 95%CI=20-43 min/day). Vigorous PA was 2.5 times higher in boys than in girls (95%CI=1.9-3.5). Participants were more active during school days than on weekends. The magnitude of the difference in moderate-to-vigorous PA was similar in both genders (p value for time*gender interaction = 0.79) and averaged 19 min/day (95%CI=11-27 min/day). Similarly, vigorous PA was 36% lower on weekends compared with school days (95%CI=22-46%), with no gender difference (p value for time*gender interaction = 0.52). 
Conclusion: PA level was higher in boys than in girls throughout the week. Still, in both boys and girls, the amount of PA decreased markedly on weekends compared with school days.
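The group comparison reported above can be sketched with simulated data. The sketch below draws samples matching the reported group sizes, means and standard deviations (74 boys, 116 girls) and applies an independent samples t-test; Welch's variant is assumed here, since the group variances differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated daily moderate-to-vigorous PA (min/day) matching the reported
# group means and SDs; individual-level data are not available from the abstract.
boys = rng.normal(111, 44, size=74)    # 190 participants minus 116 girls
girls = rng.normal(80, 38, size=116)

t, p = stats.ttest_ind(boys, girls, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.2g}")
```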

Keywords: adolescence, multiple-sensor physical activity monitor, physical activity level, weekly physical activity pattern

Procedia PDF Downloads 244
1629 Stochastic Repair and Replacement with a Single Repair Channel

Authors: Mohammed A. Hajeeh

Abstract:

This paper examines the behavior of a system which, upon failure, is either replaced with a certain probability p or imperfectly repaired with probability q. The system is analyzed using Kolmogorov's forward equations method; the analytical expression for the steady-state availability is derived as an indicator of the system’s performance. It is found that the analysis becomes more complex as the number of imperfect repairs increases. It is also observed that the availability increases as the number of states and the replacement probability increase. Using such an approach in more complex configurations and in dynamic systems is cumbersome; therefore, it is advisable to resort to simulation or heuristics. In this paper, an example is provided for demonstration.
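A small numerical counterpart of such a model can be sketched as a three-state chain: working, under replacement (entered with probability p), and under imperfect repair (probability q = 1 - p). All rates below are hypothetical; the steady state is obtained by solving pi Q = 0 with the normalization sum(pi) = 1, and cross-checked against the closed form.

```python
import numpy as np

# Hypothetical rates: failures at rate lam; on failure the unit is replaced
# with probability p (replacement completes at rate mu_rep) or imperfectly
# repaired with probability q = 1 - p (at rate mu_imp). Values illustrative.
lam, p, mu_rep, mu_imp = 0.1, 0.3, 1.0, 2.0
q = 1 - p

# States: 0 = working, 1 = being replaced, 2 = under imperfect repair.
Q = np.array([
    [-lam,     p * lam,  q * lam],
    [mu_rep,  -mu_rep,   0.0    ],
    [mu_imp,   0.0,     -mu_imp ],
])
A = np.vstack([Q.T, np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)

# Closed-form cross-check: A = 1 / (1 + p*lam/mu_rep + q*lam/mu_imp)
closed_form = 1 / (1 + p * lam / mu_rep + q * lam / mu_imp)
print(f"availability = {pi[0]:.6f} (closed form {closed_form:.6f})")
```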

Keywords: repairable models, imperfect repair, availability, exponential distribution

Procedia PDF Downloads 269
1628 A Case Study of Determining the Times of Overhauls and the Number of Spare Parts for Repairable Items in Rolling Stocks with Simulation

Authors: Ji Young Lee, Jong Woon Kim

Abstract:

It is essential to secure high availability of railway vehicles to realize high quality and efficiency of railway service. Once availability decreases, planned railway service cannot be provided unless additional cars are reserved or purchased, or the frequency of railway service is decreased. Such a situation would be a big loss in terms of the quality and cost of railway service, so operators make various efforts to achieve high availability of railway vehicles. To secure high availability, the idle time of each vehicle needs to be reduced, and the following methods are applied. First, through modularized design, the exchange time for line replaceable units is reduced, so that railway vehicles can be put back into service quickly. Second, to reduce periodic preventive maintenance time, short-period preventive maintenance is carried out in a test-oriented manner to minimize maintenance time, and reliability is secured through overhauls of each main component. With such design changes, modularized components are exchanged first at the time of a vehicle failure or overhaul, so that the vehicle can be put back into service quickly, and the exchanged components are then repaired or overhauled. Spare components are therefore required for future failures and overhauls, and because components are modularized and expensive, it is considerably important to determine reasonable quantities of spare components. In particular, when a number of railway vehicles are put into service simultaneously, their overhaul times come almost at the same time. Thus, for some vehicles, components need to be exchanged and overhauled before the appointed overhaul period, so that these components can be secured as spare parts for the next vehicle’s component overhaul. 
For this reason, component overhaul times and spare parts quantities should be decided at the same time. This study deals with the timing of overhauls for repairable components of railway vehicles and the calculation of spare parts quantities in consideration of future failures and overhauls. However, as railway vehicles are used according to the service schedule, maintenance work cannot proceed until the service day has closed, so it is quite difficult to resolve this situation mathematically. Simulation software is therefore used to analyze the overhaul times of repairable components of railway vehicles and the spare parts required for the railway systems.
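A toy version of such a simulation can be sketched directly. The fleet size, overhaul interval, shop turnaround and failure rate below are all hypothetical; the sketch counts how many modules are simultaneously in the shop, which is the number of spares needed to keep every vehicle in service.

```python
import random

# Toy Monte Carlo sketch: how many spare modules cover a fleet's overhaul and
# failure demand? Fleet size, overhaul interval, shop turnaround and MTBF are
# hypothetical placeholders, not values from the study.
random.seed(42)

FLEET = 20             # vehicles put into service at the same time
HORIZON = 3650.0       # simulated days (~10 years)
OVERHAUL_EVERY = 730   # days between planned component overhauls
TURNAROUND = 60        # days to overhaul/repair a removed module
MTBF = 1500.0          # mean days between unplanned module failures

def max_concurrent_demand() -> int:
    """Peak number of modules simultaneously in the shop over one run."""
    events = []  # (removal_day, return_day) for each module sent to the shop
    for _ in range(FLEET):
        t = OVERHAUL_EVERY                 # planned overhauls
        while t < HORIZON:
            events.append((t, t + TURNAROUND))
            t += OVERHAUL_EVERY
        t = random.expovariate(1 / MTBF)   # unplanned failures
        while t < HORIZON:
            events.append((t, t + TURNAROUND))
            t += random.expovariate(1 / MTBF)
    # Sweep line: +1 at each removal, -1 at each return; track the peak.
    points = sorted([(s, 1) for s, e in events] + [(e, -1) for s, e in events])
    peak = cur = 0
    for _, delta in points:
        cur += delta
        peak = max(peak, cur)
    return peak

demands = [max_concurrent_demand() for _ in range(200)]
print("spares needed to cover all simulated runs:", max(demands))
```

Because all vehicles enter service together, the planned overhauls coincide and the peak demand is at least the fleet size, which is exactly the congestion effect the study addresses by overhauling some components ahead of schedule.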

Keywords: overhaul time, rolling stocks, simulation, spare parts

Procedia PDF Downloads 322
1627 Pakistan’s Counterinsurgency Operations: A Case Study of Swat

Authors: Arshad Ali

Abstract:

The Taliban insurgency in Swat, which started apparently as a social movement in 2004, transformed into an anti-Pakistan Islamist insurgency by joining hands with the Tehrik-e-Taliban Pakistan (TTP) upon its formation in 2007. It quickly spread beyond Swat by 2009, making Swat the second stronghold of the TTP after FATA. It prompted the Pakistan military to launch a full-scale counterinsurgency operation, code-named Rah-i-Rast, to regain control of Swat. Operation Rah-i-Rast was successful not only in restoring the writ of the state but, more importantly, in creating a consensus against the spread of the Taliban insurgency in Pakistan at the political, social and military levels. This operation became a test case for the civilian government and military in seeking a sustainable solution to the TTP insurgency in the north-west of Pakistan. This study analyzes why the counterinsurgency operation Rah-i-Rast succeeded and why the previous ones failed. The study also explores the factors which created consensus against the Taliban insurgency at the political and social levels, as well as the reasons which hindered such a consensual approach in the past. The study argues that the previous initiatives failed due to various factors, including the Pakistan army’s lack of a comprehensive counterinsurgency model, weak political will and public support, and state negligence. The initial counterinsurgency policies were also ad hoc in nature, fluctuating between military operations and peace deals. After continuous failure, the military revisited its approach in operation Rah-i-Rast. The security forces learnt from their past experiences and developed a pragmatic counterinsurgency model: ‘clear, hold, build, and transfer.’ The military also adopted a population-centric approach to provide security to the local people. This case study of Swat evaluates the strengths and weaknesses of Pakistan's counterinsurgency operations as well as its peace agreements. 
It analyzes operation Rah-i-Rast in the light of David Galula’s model of counterinsurgency. Unlike the existing literature, the study underscores the bottom-up approach adopted by Pakistan’s military and government, engaging the local population to sustain post-operation stability in Swat. More specifically, the study emphasizes the hybrid counterinsurgency model ‘clear, hold, build, and transfer’ in Swat.

Keywords: insurgency, counterinsurgency, clear, hold, build, transfer

Procedia PDF Downloads 339
1626 A Social Network Analysis for Formulating Construction Defect Generation Mechanisms

Authors: Hamad Aljassmi, Sangwon Han

Abstract:

Various solutions for preventing construction defects have been suggested. However, a construction company may have difficulties adopting all these suggestions due to financial and practical constraints. Based on this recognition, this paper aims to identify the most significant defect causes and formulate their defect generation mechanism in order to help a construction company to set priorities of its defect prevention strategies. For this goal, we conducted a questionnaire survey of 106 industry professionals and identified five most significant causes including: (1) organizational culture, (2) time pressure and constraints, (3) workplace quality system, (4) financial constraints upon operational expenses and (5) inadequate employee training or learning opportunities.

Keywords: defect, quality, failure, risk

Procedia PDF Downloads 605
1625 Regulatory Frameworks and Bank Failure Prevention in South Africa: Assessing Effectiveness and Enhancing Resilience

Authors: Princess Ncube

Abstract:

In the context of South Africa's banking sector, the prevention of bank failures is of paramount importance to ensure financial stability and economic growth. This paper focuses on the role of regulatory frameworks in safeguarding the resilience of South African banks and mitigating the risks of failures. It aims to assess the effectiveness of existing regulatory measures and proposes strategies to enhance the resilience of financial institutions in the country. The paper begins by examining the specific regulatory frameworks in place in South Africa, including capital adequacy requirements, stress testing methodologies, risk management guidelines, and supervisory practices. It delves into the evolution of these measures in response to lessons learned from past financial crises and their relevance in the unique South African banking landscape. Drawing on empirical evidence and case studies specific to South Africa, this paper evaluates the effectiveness of regulatory frameworks in preventing bank failures within the country. It analyses the impact of these frameworks on crucial aspects such as early detection of distress signals, improvements in risk management practices, and advancements in corporate governance within South African financial institutions. Additionally, it explores the interplay between regulatory frameworks and the specific economic environment of South Africa, including the role of macroprudential policies in preventing systemic risks. Based on the assessment, this paper proposes recommendations to strengthen regulatory frameworks and enhance their effectiveness in bank failure prevention in South Africa. It explores avenues for refining existing regulations to align capital requirements with the risk profiles of South African banks, enhancing stress testing methodologies to capture specific vulnerabilities, and fostering better coordination among regulatory authorities within the country. 
Furthermore, it examines the potential benefits of adopting innovative approaches, such as leveraging technology and data analytics, to improve risk assessment and supervision in the South African banking sector.

Keywords: banks, resolution, liquidity, regulation

Procedia PDF Downloads 69
1624 Influence of Initial Curing Time, Water Content and Apparent Water Content on Geopolymer Modified Sludge Generated in Landslide Area

Authors: Minh Chien Vu, Tomoaki Satomi, Hiroshi Takahashi

Abstract:

Because of their clay content and clay mineralogy, soft and highly compressible soils (sludge) lack sufficient strength to support construction loads over the service life and constitute a major problem in geotechnical engineering projects. Geopolymer, a kind of inorganic polymer, is a promising material with a wide range of applications and offers a lower level of CO₂ emissions than conventional Portland cement. However, the feasibility of geopolymers for modifying soft and highly compressible soils has not received much attention, owing to the heat treatment required to activate the fly ash component and to the high content of clay-size particles in sludge, which affects the efficiency of the reaction. On the other hand, geopolymer-modified sludge can be affected by other important factors, such as initial curing time, initial water content and apparent water content. Therefore, this paper describes a different potential application of geopolymer: soil stabilization in landslide areas to adapt the technical properties of sludge so that heavy machines can move on it. A sludge conditioning process is utilized to demonstrate the possibility of stabilizing sludge using fly ash-based geopolymer under ambient curing conditions (±20 °C) in terms of failure strength, strain and bulk density. Sludge conditioning is a process whereby sludge is treated with chemicals or various other means to improve its dewatering characteristics before it is applied in the construction area. The effects of initial curing time, water content and apparent water content on the modification of sludge are the main focus of this study. Test results indicate that the initial curing time has potential for improving the failure strain and strength of modified sludge under the specific conditions of soft soil. 
The results further show that an initial water content of more than 50% of the total mass of sludge can significantly decrease the strength performance of geopolymer-modified sludge. The optimum apparent water content of geopolymer-modified sludge is strongly influenced by the geopolymer content and the initial water content of the sludge. Solutions to minimize the effect of high initial water content will be considered further in future work.

Keywords: landslide, sludge, fly ash, geopolymer, sludge conditioning

Procedia PDF Downloads 102
1623 Influence of Geometry on Performance of Type-4 Filament Wound Composite Cylinder for Compressed Gas Storage

Authors: Pranjali Sharma, Swati Neogi

Abstract:

Composite pressure vessels are low-weight structures mainly used in a variety of applications such as automobiles, aeronautics and chemical engineering. Fiber reinforced polymer (FRP) composite materials offer simplicity of design and use, high fuel storage capacity, rapid refueling capability, excellent shelf life, minimal infrastructure impact, high safety due to the inherent strength of the pressure vessel, and little to no development risk. Apart from these preliminary merits, the reduced weight of composite vessels relative to metallic cylinders is their biggest asset to the automotive industry, increasing fuel efficiency. The result is a lightweight, flexible, non-explosive, and non-fragmenting pressure vessel that can be tailor-made to suit specific applications. The winding pattern of the composite over-wrap is a primary focus while designing a pressure vessel. The critical stresses in the system depend on the thickness, angle and sequence of the composite layers. The composite over-wrap is wound over a plastic liner, whose geometry can be varied for ease of winding. In the present study, we aim to optimize the FRP vessel geometry to provide ease of winding and aid weight reduction, enhancing the vessel's performance. Finite element analysis is used to study the effect of dome geometry, yielding a design with the maximum burst pressure and the least vessel weight. The stress and strain analysis of different dome ends, along with the cylindrical portion, is carried out in ANSYS 19.2. Failure is predicted using different failure theories such as the Tsai-Wu theory, the Tsai-Hill theory and the maximum stress theory. Corresponding to a given winding sequence, the optimum dome geometry is determined for a fixed internal pressure to identify the theoretical value of the burst pressure. Finally, this geometry is used to decrease the number of layers to reach the set value of safety in accordance with the available safety standards. 
This results in a decrease in the weight of the composite over-wrap and in the manufacturing cost of the pressure vessel. The improved overall weight performance of the pressure vessel gives higher fuel efficiency for its use in automobile applications.
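As an illustration of one of the criteria named above, the Tsai-Wu failure index for a unidirectional ply under plane stress can be evaluated directly. The strength values below are illustrative numbers for a generic glass/epoxy ply, not the paper's material data, and the common approximation F12 = -0.5*sqrt(F11*F22) is assumed for the interaction term.

```python
import math

# Tsai-Wu failure index for a unidirectional lamina in plane stress.
# Strengths (MPa) are illustrative, not from the paper.
Xt, Xc = 1100.0, 600.0   # longitudinal tensile / compressive strength
Yt, Yc = 35.0, 120.0     # transverse tensile / compressive strength
S = 70.0                 # in-plane shear strength

F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
F11, F22 = 1/(Xt*Xc), 1/(Yt*Yc)
F66 = 1/S**2
F12 = -0.5 * math.sqrt(F11*F22)   # common approximation for the interaction term

def tsai_wu(s1, s2, t12):
    """Failure index; the ply is predicted to fail when the index reaches 1."""
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
            + F66*t12**2 + 2*F12*s1*s2)

print(tsai_wu(400.0, 10.0, 20.0))  # well below 1: ply is safe at this load
```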

Keywords: compressed gas storage, dome geometry, theoretical analysis, type-4 composite pressure vessel, vessel weight performance

Procedia PDF Downloads 129
1622 Performance Tests of Wood Glues on Different Wood Species Used in Wood Workshops: Morogoro Tanzania

Authors: Japhet N. Mwambusi

Abstract:

High deforestation of tropical forests for the solid wood furniture industry is among the agents contributing to climate change. This pressure is caused indirectly by furniture joint failures arising from poor gluing technology, namely the improper matching of glues to wood species, which leads to low-quality, weak wood-glue joints. This study was carried out to run performance tests of wood glues on different wood species used in wood workshops in Morogoro, Tanzania, whereby three popular wood species, C. lusitanica, T. grandis and E. maidenii, were tested against five glues found on the market: Woodfix, Bullbond, Ponal, Fevicol and Coral. The findings support the development of a guideline for proper glue selection for joining a particular wood species. Random sampling was employed to interview carpenters while surveying their background, such as their education level, and to determine the factors that influence their choice of glue. A Monsanto tensiometer was used to determine the bonding strength of the identified wood glues on the different wood species under the British Standard procedure for testing wood shear strength (BS EN 205). Data obtained from interviewing carpenters were analyzed with the Statistical Package for the Social Sciences (SPSS) to allow comparison of the different data, while laboratory data were compiled, related and compared using MS Excel as well as analysis of variance (ANOVA). Results revealed that, among all five wood glues tested in the laboratory on the three wood species, Coral performed much better, with average shear strengths of 4.18 N/mm², 3.23 N/mm² and 5.42 N/mm² for cypress, teak and eucalyptus respectively. This shows that, to form a strong joint in all three wood species, both softwood and hardwood, Coral should be the first choice. 
The guideline table developed from this research can help carpenters select the proper glue for a particular wood species so as to meet the required glue-bond strength. This will secure the furniture market and reduce pressure on the forests, because stronger joints make existing furniture last longer. Indeed, this can be a good strategy for slowing climate change in the tropics, which results in part from the high deforestation of trees for furniture production.
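The laboratory comparison lends itself to a one-way ANOVA across wood species. The sketch below simulates shear-strength samples centred on the means reported for Coral (the spread and sample size are assumptions, as the raw data are not given) and runs the test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated shear strengths (N/mm^2) for Coral glue on the three species,
# centred on the means reported in the abstract; the spread (SD 0.4) and the
# sample size (10 specimens per species) are assumptions for illustration.
cypress    = rng.normal(4.18, 0.4, size=10)
teak       = rng.normal(3.23, 0.4, size=10)
eucalyptus = rng.normal(5.42, 0.4, size=10)

f, p = stats.f_oneway(cypress, teak, eucalyptus)
print(f"one-way ANOVA: F = {f:.1f}, p = {p:.2g}")
```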

Keywords: climate change, deforestation, gluing technology, joint failure, wood-glue, wood species

Procedia PDF Downloads 223
1621 Deep Neck Infection Associated with Peritoneal Sepsis: A Rare Death Case

Authors: Sait Ozsoy, Asude Gokmen, Mehtap Yondem, Hanife A. Alkan, Gulnaz T. Javan

Abstract:

Deep neck infection often develops due to upper respiratory tract and odontogenic infections. Gastrointestinal system perforation can occur for many reasons and requires early diagnosis and prompt surgical treatment. In both conditions, late or incorrect diagnosis may lead to increased morbidity and high mortality. A patient with a diagnosis of deep neck abscess died while under treatment due to sepsis and multiple organ failure. Autopsy findings showed a duodenal ulcer, and this case is reported here in the light of the literature.

Keywords: peptic ulcer perforation, peritonitis, retropharyngeal abscess, sepsis

Procedia PDF Downloads 480
1620 Exponentiated Transmuted Weibull Distribution: A Generalization of the Weibull Probability Distribution

Authors: Abd El Hady N. Ebraheim

Abstract:

This paper introduces a new generalization of the two-parameter Weibull distribution. To this end, the quadratic rank transmutation map has been used. The new distribution is named the exponentiated transmuted Weibull (ETW) distribution. The ETW distribution has the advantage of being capable of modeling various shapes of aging and failure criteria. Furthermore, eleven lifetime distributions, such as the Weibull, exponentiated Weibull, Rayleigh and exponential distributions, among others, follow as special cases. The properties of the new model are discussed, and maximum likelihood estimation is used to estimate the parameters. Explicit expressions are derived for the quantiles. The moments of the distribution are derived, and the order statistics are examined.
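One common way to build the ETW distribution is to exponentiate the quadratic-rank-transmuted Weibull CDF; whether this matches the paper's exact parameterisation is an assumption. The sketch below implements the CDF and the quantile function by inversion (solve the QRTM quadratic for the baseline CDF, then invert the Weibull CDF).

```python
import math

# Assumed ETW construction via the quadratic rank transmutation map (QRTM):
# H(x) = [ (1+lam)*F(x) - lam*F(x)^2 ]**alpha, with F a Weibull(k, scale) CDF
# and |lam| <= 1. The paper's exact parameterisation may differ.

def weibull_cdf(x, k, scale):
    return 1.0 - math.exp(-((x / scale) ** k))

def etw_cdf(x, k, scale, lam, alpha):
    F = weibull_cdf(x, k, scale)
    return ((1 + lam) * F - lam * F * F) ** alpha

def etw_quantile(u, k, scale, lam, alpha):
    """Inversion method: undo the exponentiation, solve the QRTM quadratic
    for the baseline CDF value F, then invert the Weibull CDF."""
    v = u ** (1.0 / alpha)
    if lam == 0:
        F = v
    else:
        F = ((1 + lam) - math.sqrt((1 + lam) ** 2 - 4 * lam * v)) / (2 * lam)
    return scale * (-math.log(1.0 - F)) ** (1.0 / k)

x = etw_quantile(0.5, k=1.5, scale=2.0, lam=0.3, alpha=2.0)
print(round(etw_cdf(x, 1.5, 2.0, 0.3, 2.0), 6))  # recovers 0.5
```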

Keywords: exponentiated, inversion method, maximum likelihood estimation, transmutation map

Procedia PDF Downloads 550
1619 Impact and Implementation of Privatization of State-Owned Enterprise Sustainability in Indonesia

Authors: Afri Ananda Nugroho

Abstract:

Privatization is one of the public policies closely related to the role of government in the economy, and it gained momentum following the failure of the centralized system in the communist countries. This paper discusses the basic issues of privatization as a global trend, the purpose of privatization, and its implementation and impact on the success of State-Owned Enterprises (BUMN) in Indonesia. The analysis is done by looking at some important issues surrounding privatization and the public policies being applied, such as why and how privatization is necessary and what impact it has. This paper also discusses the implications for the top leaders of State-Owned Enterprises.

Keywords: privatization, state-owned enterprises, Indonesia, public policy

Procedia PDF Downloads 222
1618 An Extended Inverse Pareto Distribution, with Applications

Authors: Abdel Hadi Ebraheim

Abstract:

This paper introduces a new extension of the inverse Pareto distribution in the framework of the Marshall-Olkin (1997) family of distributions. This model is capable of modeling various shapes of aging and failure data. The statistical properties of the new model are discussed, and several methods are used to estimate the parameters involved. Explicit expressions are derived for different types of moments that are of value in reliability analysis. Besides, the order statistics of samples from the new proposed model have been studied. Finally, the usefulness of the new model for modeling reliability data is illustrated using two real data sets, along with a simulation study.
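The Marshall-Olkin construction replaces a baseline survival function Fbar with alpha*Fbar / (1 - (1-alpha)*Fbar). The sketch below applies it to one common inverse Pareto parameterisation, F(x) = (x/(x+beta))^theta; both the parameterisation and the parameter values are assumptions for illustration, and the hazard is evaluated numerically.

```python
import math

# Sketch of the Marshall-Olkin (1997) extension applied to an inverse Pareto
# baseline. The parameterisation F(x) = (x/(x+beta))**theta is an assumption;
# the paper's exact form may differ.

def inv_pareto_cdf(x, theta, beta):
    return (x / (x + beta)) ** theta

def mo_inv_pareto_sf(x, theta, beta, alpha):
    """Marshall-Olkin survival: alpha*Fbar / (1 - (1-alpha)*Fbar)."""
    fbar = 1.0 - inv_pareto_cdf(x, theta, beta)
    return alpha * fbar / (1.0 - (1.0 - alpha) * fbar)

def mo_hazard(x, theta, beta, alpha, h=1e-6):
    # Numerical hazard rate: -d/dx log S(x), by forward difference.
    s1 = mo_inv_pareto_sf(x, theta, beta, alpha)
    s2 = mo_inv_pareto_sf(x + h, theta, beta, alpha)
    return (math.log(s1) - math.log(s2)) / h

print(mo_inv_pareto_sf(1.0, 2.0, 1.0, 1.0))  # alpha=1 recovers the baseline: 0.75
```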

Keywords: Pareto distribution, Marshall-Olkin, reliability, hazard functions, moments, estimation

Procedia PDF Downloads 63