Search results for: failure probability
3081 Genetic Algorithm Based Node Fault Detection and Recovery in Distributed Sensor Networks
Authors: N. Nalini, Lokesh B. Bhajantri
Abstract:
In Distributed Sensor Networks (DSNs), sensor nodes are prone to failure due to energy depletion and other causes, so fault tolerance of the network is essential in a distributed sensor environment. Energy efficiency, network or topology control, and fault tolerance are the most important issues in the development of next-generation DSNs. This paper proposes node fault detection and recovery using a Genetic Algorithm (GA) in a DSN when some of the sensor nodes are faulty. The main objective of this work is to provide an energy-efficient, network-responsive fault tolerance mechanism in which the GA detects the faulty nodes in the network based on node energy depletion and link failure between nodes. The proposed fault detection model detects faults at the node level and at the network level (link failure and packet error). Finally, the performance parameters of the proposed scheme are evaluated.
Keywords: distributed sensor networks, genetic algorithm, fault detection and recovery, information technology
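As a rough illustration of the GA-based detection idea, the following sketch evolves a bit vector marking nodes as faulty, scored against observed residual energy and link status; the encoding, fitness function, and thresholds are assumptions for the sketch, not the authors' design.
```python
# Hypothetical GA fault-detection sketch: each chromosome is a bit vector
# marking nodes as faulty, scored against residual energy and link status.
import random

N_NODES, POP, GENS = 20, 30, 100
ENERGY_THRESH = 0.2

# Synthetic observations (placeholders for real sensor readings).
energy = [random.random() for _ in range(N_NODES)]
link_ok = [e > ENERGY_THRESH or random.random() > 0.5 for e in energy]

def fitness(chrom):
    # Reward marking a node faulty when its energy is depleted or a link failed.
    score = 0
    for bit, e, ok in zip(chrom, energy, link_ok):
        evidence = (e < ENERGY_THRESH) or (not ok)
        score += 1 if bit == evidence else -1
    return score

def crossover(a, b):
    cut = random.randrange(1, N_NODES)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.05):
    return [1 - g if random.random() < rate else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(N_NODES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in elite]

best = max(pop, key=fitness)
print("suspected faulty nodes:", [i for i, b in enumerate(best) if b])
```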
Procedia PDF Downloads 452
3080 Distance Education: Using a Digital Platform to Improve Struggling University Students' Mathematical Skills
Authors: Robert Vanderburg, Nicholas Gibson
Abstract:
Objectives: There has been an increased focus on education students' mathematics skills in the last two years. Universities have, specifically, had problems teaching students struggling with mathematics. This paper focuses on the ability of a digital platform to significantly improve mathematics skills for struggling students. Methods: 32 students who demonstrated low scores on a mathematics test were selected to take part in a one-month tutorial program using a digital mathematics portal. Students were provided feedback on questions posted to the portal and a fortnightly tutorial session. Results: A pre-test post-test design was analyzed using a one-way analysis of variance (ANOVA). The analysis suggested that students improved skills in algebra, geometry, statistics, probability, ratios, and fractions. Conclusion: Distance university students can improve their mathematics skills using a digital platform.
Keywords: digital education, distance education, higher education, mathematics education
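A minimal sketch of the analysis step described, assuming invented pre- and post-test scores in place of the study's data:
```python
# Hedged illustration of the one-way ANOVA the authors describe; the scores
# below are made-up stand-ins for the pre-test and post-test results.
from scipy.stats import f_oneway

pre_test  = [42, 55, 38, 60, 47, 51, 44, 39]   # hypothetical percentages
post_test = [58, 70, 52, 74, 63, 66, 59, 55]

f_stat, p_value = f_oneway(pre_test, post_test)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would indicate a significant improvement after the tutorial program.
```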
Procedia PDF Downloads 187
3079 Drug Therapy Problems and Associated Factors among Patients with Heart Failure in the Medical Ward of Arba Minch General Hospital, Ethiopia
Authors: Debalke Dale, Bezabh Geneta, Yohannes Amene, Yordanos Bergene, Mohammed Yimam
Abstract:
Background: A drug therapy problem (DTP) is an event or circumstance involving drug therapy that actually or potentially interferes with the desired outcome and requires professional judgment to resolve. Heart failure is an emerging worldwide threat whose prevalence and health-loss burden constantly increase, especially in the young and in low-to-middle-income countries. There is a lack of population-based incidence and prevalence studies of heart failure (HF) in sub-Saharan African countries, including Ethiopia. Objective: This study was designed to assess drug therapy problems and associated factors among patients with HF in the medical ward of Arba Minch General Hospital (AGH), Ethiopia, from June 5 to August 20, 2022. Methods: A retrospective cross-sectional study was conducted among 180 patients with HF who were admitted to the medical ward of AGH. Data were collected from patients' cards by using questionnaires. The data were categorized and analyzed using SPSS version 25.0 software and presented in tables and words according to the nature of the data. Result: Out of the total, 85 (57.6%) were females, and 113 (75.3%) patients were aged over fifty years. Of the 150 study participants, 86 (57.3%) patients had at least one DTP identified, and a total of 116 DTPs were identified, i.e., 0.77 DTPs per patient. The most common types of DTP were the need for additional drug therapy (36%), followed by unnecessary drug therapy (32%) and dose too low (15%). Patients who used polypharmacy were 5.86 (AOR) times more likely to develop DTPs than those who did not (95% CI = 1.625–16.536, P = 0.005), and patients with more co-morbid conditions developed 3.68 (AOR) times more DTPs than those who had fewer co-morbidities (95% CI = 1.28–10.5, P = 0.015). Conclusion: The results of this study indicated that drug therapy problems were common among medical ward patients with heart failure. These problems adversely affect the treatment outcomes of patients, so they require the special attention of healthcare professionals.
Keywords: heart failure, drug therapy problems, Arba Minch general hospital, Ethiopia
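The adjusted odds ratios (AOR) reported above come from multivariable logistic regression; as a minimal illustration of the underlying quantity, the sketch below computes a crude odds ratio with a Woolf confidence interval from a hypothetical 2x2 table (counts are invented, not the study's):
```python
# Crude odds ratio and 95% CI from a hypothetical polypharmacy-vs-DTP table.
import math

#                 DTP yes   DTP no
polypharmacy    = [40,       10]
no_polypharmacy = [46,       54]

a, b = polypharmacy
c, d = no_polypharmacy
odds_ratio = (a * d) / (b * c)

# 95% CI on the log-odds scale (Woolf method).
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```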
Procedia PDF Downloads 107
3078 Cooperative Spectrum Sensing Using Hybrid IWO/PSO Algorithm in Cognitive Radio Networks
Authors: Deepa Das, Susmita Das
Abstract:
Cognitive Radio (CR) is an emerging technology to combat spectrum scarcity. This is achieved by consistently sensing the spectrum and detecting the under-utilized frequency bands without causing undue interference to the primary user (PU). In soft decision fusion (SDF) based cooperative spectrum sensing, various evolutionary algorithms have been discussed which optimize the weight coefficient vector for maximizing the detection performance. In this paper, we propose the hybrid invasive weed optimization and particle swarm optimization (IWO/PSO) algorithm as a fast and global optimization method, which improves the detection probability with a lower sensing time. Then, the efficiency of this algorithm is compared with the standard invasive weed optimization (IWO), particle swarm optimization (PSO), genetic algorithm (GA) and other conventional SDF based methods on the basis of convergence and detection probability.
Keywords: cognitive radio, spectrum sensing, soft decision fusion, GA, PSO, IWO, hybrid IWO/PSO
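A stripped-down PSO sketch of the SDF weight-vector search; the objective below (a deflection-coefficient-like ratio) stands in for the detection probability optimized in the paper, and all constants are assumptions:
```python
# Toy PSO maximizing a detection-style objective over SDF fusion weights.
import numpy as np

rng = np.random.default_rng(0)
M = 5                                      # cooperating secondary users
g = rng.uniform(0.5, 2.0, M)               # assumed per-user SNR gains
Sigma = np.diag(rng.uniform(0.5, 1.5, M))  # assumed noise covariance

def objective(w):
    w = w / np.linalg.norm(w)              # weights constrained to unit norm
    return (w @ g) ** 2 / (w @ Sigma @ w)

n, iters = 20, 100
x = rng.random((n, M)); v = np.zeros((n, M))
pbest = x.copy(); pval = np.array([objective(p) for p in x])
gbest = pbest[pval.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, M)), rng.random((n, M))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    vals = np.array([objective(p) for p in x])
    improved = vals > pval
    pbest[improved], pval[improved] = x[improved], vals[improved]
    gbest = pbest[pval.argmax()].copy()

print("optimized weights:", gbest / np.linalg.norm(gbest))
```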
Procedia PDF Downloads 467
3077 Mecano-Reliability Coupled of Reinforced Concrete Structure and Vulnerability Analysis: Case Study
Authors: Kernou Nassim
Abstract:
The current study presents a vulnerability and reliability-mechanical approach that focuses on evaluating the seismic performance of reinforced concrete structures to determine the probability of failure. In this case, the performance function reflecting the non-linear behavior of the structure is modeled by a response surface to establish an analytical relationship between the random variables (strength of concrete and yield strength of steel) and the mechanical responses of the structure (inter-storey displacement) obtained from the pushover results of finite element simulations. The pushover analysis is executed with the software SAP2000. The results show that properly designed frames will perform well under seismic loads. It is a comparative study of the behavior of the existing structure before and after reinforcement using the pushover method. Coupling the mechanical and reliability analyses indirectly through the response surface avoids prohibitive calculation times. Finally, the results of the proposed approach are compared with Monte Carlo simulation. The comparative study shows that the structure is more reliable after the introduction of new shear walls.
Keywords: finite element method, response surface, reliability, reliability-mechanical coupling, vulnerability
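A minimal sketch of the response-surface plus Monte Carlo idea: a quadratic surrogate maps the random material strengths to a displacement response, and the failure probability is the fraction of samples exceeding a limit. All coefficients, distributions, and the limit are assumed, not taken from the paper:
```python
# Response-surface surrogate + Monte Carlo failure-probability estimate.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
fc = rng.normal(30.0, 3.0, n)    # concrete strength (MPa), assumed
fy = rng.normal(400.0, 20.0, n)  # steel yield strength (MPa), assumed

# Assumed quadratic response surface fitted to pushover (FE) results:
# drift decreases as material strengths increase.
drift = 0.9 - 0.008 * fc - 0.0006 * fy + 2e-5 * fc**2

limit = 0.5                      # assumed inter-storey drift limit
pf = np.mean(drift > limit)
print(f"estimated failure probability: {pf:.4f}")
```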
Procedia PDF Downloads 117
3076 Performance Analysis of the Time-Based and Periodogram-Based Energy Detector for Spectrum Sensing
Authors: Sadaf Nawaz, Adnan Ahmed Khan, Asad Mahmood, Chaudhary Farrukh Javed
Abstract:
Classically, an energy detector is implemented in the time domain (TD). However, the frequency domain (FD) based energy detector has demonstrated improved performance. This paper presents a comparison between the two approaches in order to analyze their pros and cons. A detailed performance analysis of the classical TD energy detector and the periodogram-based detector is performed. Exact and approximate mathematical expressions for the probability of false alarm (Pf) and the probability of detection (Pd) are derived for both approaches. The derived expressions naturally lead to analytical as well as intuitive reasoning for the improved performance of Pf and Pd in different scenarios. Our analysis suggests that the improvement depends on buffer sizes: Pf is improved in FD, whereas Pd is enhanced in TD based energy detectors. Finally, Monte Carlo simulation results corroborate the analysis reached by the derived expressions.
Keywords: cognitive radio, energy detector, periodogram, spectrum sensing
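A hedged Monte Carlo sketch of the time-domain energy detector, estimating Pf and Pd by thresholding the signal energy over N samples (the SNR, N, and the threshold are assumed values):
```python
# Monte Carlo estimate of Pf and Pd for a TD energy detector.
import numpy as np

rng = np.random.default_rng(2)
N, trials, snr = 64, 20_000, 0.5           # samples, MC runs, linear SNR

noise = rng.normal(0, 1, (trials, N))
signal = rng.normal(0, np.sqrt(snr), (trials, N))

e_h0 = np.sum(noise**2, axis=1)            # energy under H0 (noise only)
e_h1 = np.sum((noise + signal)**2, axis=1) # energy under H1 (signal + noise)

thresh = np.quantile(e_h0, 0.95)           # set for a target Pf of 0.05
pf = np.mean(e_h0 > thresh)
pd = np.mean(e_h1 > thresh)
print(f"Pf = {pf:.3f}, Pd = {pd:.3f}")
```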
Procedia PDF Downloads 378
3075 Stability Analysis of Slopes during Pile Driving
Authors: Yeganeh Attari, Gudmund Reidar Eiksund, Hans Peter Jostad
Abstract:
In geotechnical practice, there is no standard method recognized by the industry to account for the reduction of the safety factor of a slope as an effect of soil displacement and pore pressure build-up during pile installation. Pile driving causes large strains and generates excess pore pressures in a zone that can extend many diameters from the installed pile, resulting in a decrease of the shear strength of the surrounding soil. This phenomenon may cause slope failure. Moreover, dissipation of the excess pore pressure set up during installation may weaken areas outside the volume of soil remoulded during installation. Because of complex interactions between changes in mean stress and shearing, it is challenging to predict installation-induced pore pressure response. Furthermore, it is a complex task to follow the rate and path of pore pressure dissipation in order to analyze slope stability. In cohesive soils, it is necessary to implement soil models that account for strain softening in the analysis. In the literature, several cases of slope failure due to pile driving activities have been reported, for instance, a landslide in Gothenburg that destroyed more than thirty houses and the Rigaud landslide in Quebec, which resulted in loss of life. Up to now, several methods have been suggested to predict the effect of pile driving on total and effective stress, pore pressure changes, and their effect on soil strength. However, this is still not well understood or agreed upon. In Norway, the general approaches applied by geotechnical engineers to this problem are based on old empirical methods with little accurate theoretical background. While the limitations of such methods are discussed, this paper attempts to capture the reduction in the factor of safety of a slope during pile driving, using coupled finite element analysis and the cavity expansion method. This is demonstrated by analyzing a case of slope failure due to pile driving in Norway.
Keywords: cavity expansion method, excess pore pressure, pile driving, slope failure
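For orientation, a common closed-form cavity-expansion estimate of installation-induced excess pore pressure (not necessarily the exact formulation used in this paper) predicts du ≈ 2·su·ln(rp/r) within the plastic zone; a small sketch with assumed soil parameters:
```python
# Cylindrical cavity-expansion estimate of excess pore pressure near a pile.
import math

su = 30.0        # undrained shear strength (kPa), assumed
G = 3000.0       # shear modulus (kPa), assumed
r0 = 0.4         # pile radius (m), assumed
rp = r0 * math.sqrt(G / su)   # plastic-zone radius for a cylindrical cavity

for r in [r0, 2 * r0, 4 * r0, rp]:
    du = 2.0 * su * math.log(rp / r) if r < rp else 0.0
    print(f"r = {r:5.2f} m  ->  excess pore pressure ~ {du:6.1f} kPa")
```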
Procedia PDF Downloads 151
3074 Preparedness is Overrated: Community Responses to Floods in a Context of (Perceived) Low Probability
Authors: Kim Anema, Matthias Max, Chris Zevenbergen
Abstract:
For any flood risk manager, the 'safety paradox' is a familiar concept: low probability leads to a sense of safety, which leads to more investments in the area, which leads to higher potential consequences, keeping the aggregated risk (probability × consequences) at the same level. Therefore, it is important to mitigate potential consequences apart from probability. However, when the (perceived) probability is so low that there is no recognizable trend for society to adapt to, addressing the potential consequences will always be the lagging point on the agenda. Preparedness programs fail because of lack of interest and urgency, policy makers are distracted by their day-to-day business, and there is always a more urgent issue to spend the taxpayer's money on. The leading question in this study was how to address the social consequences of flooding in a context of (perceived) low probability. Disruptions of everyday urban life, large or small, can be caused by a variety of (un)expected things, of which flooding is only one possibility. Variability like this is typically addressed with resilience, and we used the concept of Community Resilience as the framework for this study. Drawing on face-to-face interviews, an extensive questionnaire, and publicly available statistical data, we explored the 'whole society response' to two recent urban flood events: the Brisbane floods (AUS) in 2011 and the Dresden floods (GE) in 2013. In Brisbane, we studied how the societal impacts of the floods were counteracted by both authorities and the public, and in Dresden we were able to validate our findings. A large part of the reactions, both public and institutional, to these two urban flood events were not fuelled by preparedness or proper planning. Instead, more important success factors in counteracting social impacts, such as demographic changes in neighborhoods and (non-)economic losses, were dynamics like community action, flexibility and creativity from authorities, leadership, informal connections, and a shared narrative. These proved to be the determining factors for the quality and speed of recovery in both cities. The resilience of the community in Brisbane was good, due to (i) the approachability of (local) authorities, (ii) a big group of 'secondary victims', and (iii) clear leadership. All three of these elements were amplified by the use of social media and/or web 2.0 by both the communities and the authorities involved. The numerous contacts and social connections made through the web were fast, need-driven and, in their own way, orderly. Similarly, in Dresden, large groups of 'unprepared', ad hoc organized citizens managed to work together with authorities in a way that was effective and sped up recovery. The concept of community resilience is better fitted than 'social adaptation' to deal with the potential consequences of an (im)probable flood. Community resilience is built on capacities and dynamics that are part of everyday life, which can be invested in pre-event to minimize the social impact of urban flooding. Investing in these might even have beneficial trade-offs in other policy fields.
Keywords: community resilience, disaster response, social consequences, preparedness
Procedia PDF Downloads 352
3073 Environmental Safety and Occupational Health Risk Assessment for Rocket Static Test
Authors: Phontip Kanlahasuth
Abstract:
This paper presents the environmental safety and occupational health risk assessment of a rocket static test, assessing the risk level from probability and severity and then applying appropriate risk control measures. Before the environmental safety and occupational health measures are applied, the serious hazard level is 31%, the medium level is 24%, and the low level is 45%. Once risk control measures are practically implemented, the serious hazard level is diminished to zero, the medium level is 38%, the low level is 45%, and 17% of hazards are eliminated. It is clearly shown that the environmental safety and occupational health measures can significantly reduce the risk level.
Keywords: rocket static test, hazard, risk, risk assessment, risk analysis, environment, safety, occupational health, acceptable risk, probability, severity, risk level
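A simple sketch of the probability × severity risk matrix implied by the abstract; the 1-5 scales, the level cut-offs, and the hazards are assumed for illustration:
```python
# Toy probability x severity risk-level classifier.
def risk_level(probability, severity):
    score = probability * severity          # both on an assumed 1-5 scale
    if score >= 15:
        return "serious"
    if score >= 6:
        return "medium"
    return "low"

hazards = {"propellant leak": (2, 5), "noise exposure": (4, 2), "trip hazard": (3, 1)}
for name, (p, s) in hazards.items():
    print(f"{name}: {risk_level(p, s)}")
```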
Procedia PDF Downloads 587
3072 Failure Simulation of Small-Scale Walls with Chases Using the Lattice Discrete Element Method
Authors: Karina C. Azzolin, Luis E. Kosteski, Alisson S. Milani, Raquel C. Zydeck
Abstract:
This work aims to numerically reproduce tests developed experimentally on reduced-scale walls with horizontal and inclined chases, using the Lattice Discrete Element Method (LDEM) implemented in the Abaqus/Explicit environment. The chases were cut with depths of 20%, 30%, and 50% on walls subjected to centered and eccentric loading. The parameters used to evaluate the numerical model are its strength, the failure mode, and the in-plane and out-of-plane displacements.
Keywords: structural masonry, wall chases, small scale, numerical model, lattice discrete element method
Procedia PDF Downloads 178
3071 The Integrated Methodological Development of Reliability, Risk and Condition-Based Maintenance in the Improvement of the Thermal Power Plant Availability
Authors: Henry Pariaman, Iwa Garniwa, Isti Surjandari, Bambang Sugiarto
Abstract:
Availability of a complex system such as a thermal power plant is strongly influenced by the reliability of spare parts and maintenance management policies. The reliability-centered maintenance (RCM) technique is an established method of analysis and the main reference for maintenance planning. This method considers the consequences of failure in its implementation but does not deal with the further risk of downtime associated with failures, loss of production, or high maintenance costs. The risk-based maintenance (RBM) technique provides support strategies to minimize the risks posed by failure and to derive maintenance tasks with cost effectiveness in mind. Meanwhile, condition-based maintenance (CBM) focuses on condition monitoring so that maintenance or other action can be planned and scheduled to avoid the risk of failure ahead of time-based maintenance. Implementation of RCM, RBM, or CBM alone, or of RCM combined with RBM or RCM combined with CBM, is a maintenance practice used in thermal power plants. Implementing these three techniques in an integrated maintenance approach will increase the availability of thermal power plants compared to using the techniques individually or in combinations of two. This study uses reliability-, risk-, and condition-based maintenance in an integrated manner to increase the availability of thermal power plants. The method generates the MPI (Priority Maintenance Index), which is the RPN (Risk Priority Number) multiplied by the RI (Risk Index), and the FDT (Failure Defense Task), which can generate condition monitoring and assessment tasks in addition to maintenance tasks. Both MPI and FDT, obtained from the development of a functional tree, failure mode and effects analysis, fault-tree analysis, and risk analysis (risk assessment and risk evaluation), were then used to develop and implement a maintenance, monitoring, and condition assessment plan and schedule, and ultimately to perform the availability analysis. The results of this study indicate that reliability-, risk-, and condition-based maintenance methods, applied in an integrated manner, can increase the availability of thermal power plants.
Keywords: integrated maintenance techniques, availability, thermal power plant, MPI, FDT
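A sketch of the MPI computation described above, MPI = RPN × RI, with RPN the usual FMEA product of severity, occurrence, and detection scores; all scores below are invented examples:
```python
# MPI = RPN x RI ranking for a few hypothetical failure modes.
def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

failure_modes = [
    # (name, severity, occurrence, detection, risk_index)
    ("boiler tube leak",     9, 4, 3, 1.5),
    ("fan bearing wear",     6, 5, 2, 1.2),
    ("valve actuator drift", 4, 3, 4, 0.8),
]

ranked = sorted(
    ((name, rpn(s, o, d) * ri) for name, s, o, d, ri in failure_modes),
    key=lambda t: t[1], reverse=True,
)
for name, mpi in ranked:
    print(f"{name}: MPI = {mpi:.1f}")
```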
Procedia PDF Downloads 795
3070 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function
Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos
Abstract:
Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e., to be able to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who have considered it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, such as the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse with simulated data the computational problems associated with the parameters, an issue of great importance in the application to real data, with the use of convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. Given the data that are available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
Keywords: diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion process, trend functions, two-parameter Weibull density function
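As a loose illustration of the model class, an Euler-Maruyama sketch of a diffusion whose drift is proportional to a two-parameter Weibull pdf; the exact SDE in the paper may differ, and all parameters here are assumed:
```python
# Euler-Maruyama simulation of a Weibull-trend diffusion sample path.
import numpy as np

rng = np.random.default_rng(3)
k, lam, sigma = 1.5, 2.0, 0.2          # Weibull shape/scale, diffusion coeff.

def weibull_pdf(t):
    return (k / lam) * (t / lam) ** (k - 1) * np.exp(-((t / lam) ** k))

T, n = 10.0, 1000
dt = T / n
x = np.empty(n + 1); x[0] = 1.0
t = np.linspace(0, T, n + 1)
for i in range(n):
    drift = weibull_pdf(t[i]) * x[i]   # trend proportional to the Weibull pdf
    x[i + 1] = x[i] + drift * dt + sigma * x[i] * np.sqrt(dt) * rng.normal()

print("sample path endpoint:", x[-1])
```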
Procedia PDF Downloads 309
3069 Wavelet-Based Classification of Myocardial Ischemia, Arrhythmia, Congestive Heart Failure and Sleep Apnea
Authors: Santanu Chattopadhyay, Gautam Sarkar, Arabinda Das
Abstract:
This paper presents wavelet-based classification of various heart diseases. Electrocardiogram signals of different heart patients have been studied, and the statistical nature of the electrocardiogram signals for different heart diseases has been compared with that of electrocardiograms for normal persons. Four heart diseases are considered: Myocardial Ischemia (MI), Congestive Heart Failure (CHF), Arrhythmia, and Sleep Apnea. The statistical nature of the electrocardiograms in each case is characterized by the kurtosis values of two types of wavelet coefficients: approximate and detail. Nine wavelet decomposition levels are considered in each case, and kurtosis corresponding to both approximate and detail coefficients is computed from decomposition level one to level nine. Based on significant differences, a few decomposition levels are chosen and then used for classification.
Keywords: arrhythmia, congestive heart failure, discrete wavelet transform, electrocardiogram, myocardial ischemia, sleep apnea
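A hedged sketch of the feature extraction described: a nine-level discrete wavelet decomposition of an ECG segment and the kurtosis of the approximate and detail coefficients at each level (the wavelet choice 'db4' and the synthetic signal are assumptions):
```python
# DWT + kurtosis feature vector for a synthetic ECG-like signal.
import numpy as np
import pywt
from scipy.stats import kurtosis

rng = np.random.default_rng(4)
ecg = np.sin(2 * np.pi * 1.2 * np.linspace(0, 10, 4096)) + 0.1 * rng.normal(size=4096)

coeffs = pywt.wavedec(ecg, "db4", level=9)   # [cA9, cD9, cD8, ..., cD1]
features = {"A9": kurtosis(coeffs[0])}
for lvl, detail in zip(range(9, 0, -1), coeffs[1:]):
    features[f"D{lvl}"] = kurtosis(detail)

print(features)  # feature vector fed to the classifier
```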
Procedia PDF Downloads 135
3068 Structural Health Monitoring and Damage Structural Identification Using Dynamic Response
Authors: Reza Behboodian
Abstract:
Monitoring structural health and diagnosing damage in its early stages has always been a topic of concern. Nowadays, research on structural damage detection methods based on vibration analysis is very extensive, and these methods can serve for permanent and timely inspection of structures, preventing further damage. Non-destructive methods are low-cost and economical means of determining damage in structures. In this research, a non-destructive method is proposed for detecting and identifying the failure location in structures based on the dynamic responses obtained from time history analysis. When the structure is damaged, its stiffness is reduced and, under the applied loads, the displacements in different parts of the structure increase. In the proposed method, the damage position is determined from the difference in strain energy in each member between the damaged structure and the healthy structure at any time. Defective members of the structure are indicated by their strain energy relative to the healthy state. The results indicate the proper accuracy and performance of the proposed method for identifying failure in structures.
Keywords: failure, time history analysis, dynamic response, strain energy
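A minimal sketch of the strain-energy comparison, treating each member as a spring with energy 0.5·k·d²; stiffnesses and displacements are invented illustration values:
```python
# Flag members whose strain energy grows most between healthy and damaged states.
healthy = {"m1": (100.0, 0.010), "m2": (100.0, 0.012), "m3": (100.0, 0.011)}
damaged = {"m1": (100.0, 0.011), "m2": (70.0, 0.019), "m3": (100.0, 0.012)}

def strain_energy(k, d):
    return 0.5 * k * d * d

for member in healthy:
    e0 = strain_energy(*healthy[member])
    e1 = strain_energy(*damaged[member])
    ratio = (e1 - e0) / e0
    flag = "  <-- suspected damage" if ratio > 0.5 else ""
    print(f"{member}: energy change = {ratio:+.0%}{flag}")
```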
Procedia PDF Downloads 133
3067 The Impact of Level and Consequence of Service Co-Recovery on Post-Recovery Satisfaction and Repurchase Intent
Authors: Chia-Ching Tsai
Abstract:
In service delivery, interpersonal interaction is the key to customer satisfaction, and the human factor is clearly critical in service delivery. Moreover, customers care a great deal about the consequences of co-recovery. Thus, this research focuses on service failure caused by other customers and uses a 2x2 factorial design to investigate the impact of the consequence and level of service co-recovery on post-recovery satisfaction and repurchase intent. 150 undergraduates were recruited as participants and randomly assigned to one of the four cells. Every participant was asked to read the scenario and then rate post-recovery satisfaction and repurchase intent. The results show that under the condition of failed co-recovery, the level of co-recovery has no effect on post-recovery satisfaction, while under the condition of successful co-recovery, high-level co-recovery causes significantly higher post-recovery satisfaction than low-level co-recovery. Moreover, post-recovery satisfaction has a significantly positive impact on repurchase intent. In the service delivery system, customers interact with other customers frequently; therefore, in contrast to most of the literature, this research focuses on service failure caused by other customers. This research also supplies a better understanding of customers' views on the consequences of different levels of co-recovery, which is helpful for practitioners making use of co-recovery.
Keywords: service failure, service co-recovery, consequence of co-recovery, level of co-recovery, post-recovery satisfaction, repurchase intent
Procedia PDF Downloads 421
3066 Cellular Automata Model for Car Accidents at a Signalized Intersection
Authors: Rachid Marzoug, Noureddine Lakouari, Beatriz Castillo Téllez, Margarita Castillo Téllez, Gerardo Alberto Mejía Pérez
Abstract:
This paper develops a two-lane cellular automata model to explain the relationship between car accidents at a signalized intersection and traffic-related parameters. It is found that increasing the lane-changing probability Pch increases the risk of accidents; besides, the inflow α and the probability of accidents Pac exhibit a nonlinear relationship. Furthermore, depending on the inflow, Pac exhibits three different phases. The transition from phase I to phase II is of first (second) order when Pch = 0 (Pch > 0). However, the system exhibits a second (first) order transition from phase II to phase III when Pch = 0 (Pch > 0). In addition, when the inflow is not very high, the green light length of one road should be increased to improve road safety. Finally, simulation results show that traffic at the intersection is safer when adopting symmetric lane-changing rules rather than asymmetric ones.
Keywords: two-lane intersection, accidents, fatality risk, lane-changing, phase transition
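A toy single-lane Nagel-Schreckenberg-style update of the kind that two-lane intersection CA models build on; the lane-changing and accident bookkeeping of the paper are omitted, and all parameters are assumed:
```python
# Minimal NaSch-style cellular automaton step on a ring road.
import random

L, VMAX, P_SLOW = 50, 3, 0.2
road = {0: 0, 7: 2, 15: 1, 30: 3}      # cell -> speed for a few cars

def step(road):
    new = {}
    cells = sorted(road)
    for i, x in enumerate(cells):
        v = min(road[x] + 1, VMAX)                      # accelerate
        nxt = cells[(i + 1) % len(cells)]
        gap = (nxt - x - 1) % L
        v = min(v, gap)                                 # avoid collision
        if v > 0 and random.random() < P_SLOW:          # random slowdown
            v -= 1
        new[(x + v) % L] = v
    return new

for _ in range(5):
    road = step(road)
print(sorted(road.items()))
```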
Procedia PDF Downloads 220
3065 Evaluating the Probability of Foreign Tourists' Return to the City of Mashhad, Iran
Authors: Mohammad Rahim Rahnama, Amir Ali Kharazmi, Safiye Rokni
Abstract:
The tourism industry will be the most important unlimited, sustainable source of income after the oil and automotive industries by 2020, and not only countries but also cities are striving to apprehend its various facets. In line with this objective, the present descriptive-analytical study, through a survey using a questionnaire, seeks to evaluate the probability of tourists' return and of their recommending to their countrymen to travel to Mashhad, Iran. The population under study is a sample of 384 foreign tourists who, in 2016, arrived at Mashhad, the second metropolis of Iran and its biggest religious city. The Kaplan-Meier estimator was used to analyze the data. Twenty-six percent of the tourists are female and 74% are male. On average, each tourist has made 3.02 trips abroad and 2.1 trips to Mashhad. Tourists from 14 different countries arrived at Mashhad; Kuwait (15.9%), Armenia (15.6%), and Iraq (10.9%) were the countries where most tourists originated. Seventy-six percent of the tourists traveled with family, and 90% of the tourists arrived at Mashhad by airplane. Major purposes of the tourists' trips include pilgrimage (27.9%) and treatment (22.1%), followed by pilgrimage and treatment combined (35.4%). Major issues for tourists, in order of priority, include the quality of goods and services (30.2%), shopping (18%), and inhabitants' treatment of foreigners (15.9%). Main tourist attractions, in addition to the Holy Shrine of Imam Reza, include Torqabeh and Shandiz (Torqabeh 40.9% and Shandiz 29.9%) and Neyshabour (18.2%), followed by Kalat (4.4%). The average willingness to return among tourists is 3.13, which is higher than the scale midpoint of 3, indicating satisfaction with the stay in Mashhad. Similarly, the average for tourists recommending to their countrymen to visit Mashhad is 3.42, which is also an indicator of tourists' satisfaction with their presence in Mashhad. According to the findings of the Kaplan-Meier estimator, an increase in the number of tourists' trips to Mashhad, and an increase in the number of tourists' foreign trips, reduce the probability of their recommending a trip to Mashhad. Similarly, willingness to return is higher among those who stayed at a relative's home compared with other patterns of residence (hotels, self-catering accommodation, and pilgrim houses). Therefore, addressing the issues raised by tourists is essential for their return and their recommendation to others to travel to Mashhad.
Keywords: international tourist, probability of return, satisfaction, Mashhad
Procedia PDF Downloads 170
3064 Max-Entropy Feed-Forward Clustering Neural Network
Authors: Xiaohan Bookman, Xiaoyan Zhu
Abstract:
The outputs of a non-linear feed-forward neural network are positive and can be treated as probabilities when normalized to sum to one. If we take the Entropy-Based Principle into consideration, the outputs for each sample can be represented as the distribution of this sample over the different clusters. The Entropy-Based Principle is the principle by which we can estimate an unknown distribution under some limited conditions. As this paper defines two processes in the feed-forward neural network, our limited condition is the abstracted features of samples, which are worked out in the abstraction process, and the final outputs are the probability distribution over the different clusters in the clustering process. As the Entropy-Based Principle is incorporated into the feed-forward neural network, a clustering method is born. We have conducted experiments on six open UCI data sets, comparing with a few baselines and applying purity as the measurement. The results illustrate that our method outperforms all the other baselines, which are the most popular clustering methods.
Keywords: feed-forward neural network, clustering, max-entropy principle, probabilistic models
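A sketch of the core idea: normalized positive network outputs act as a per-sample cluster distribution, and the entropy of that distribution is the quantity the max-entropy principle constrains; network sizes and weights are assumed:
```python
# Two-stage feed-forward pass: feature abstraction, then cluster distribution.
import numpy as np

rng = np.random.default_rng(5)
W1, W2 = rng.normal(0, 0.1, (10, 4)), rng.normal(0, 0.1, (4, 3))

def forward(x):
    h = np.tanh(x @ W1)                     # abstraction process: features
    z = np.exp(h @ W2)
    return z / z.sum()                      # clustering process: distribution

x = rng.normal(size=10)
p = forward(x)
entropy = -np.sum(p * np.log(p + 1e-12))    # the quantity being regularized
print("cluster distribution:", np.round(p, 3), " entropy:", round(entropy, 3))
```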
Procedia PDF Downloads 435
3063 Analysing Maximum Power Point Tracking in a Stand Alone Photovoltaic System
Authors: Osamede Asowata
Abstract:
Optimized gain with respect to output power of stand-alone photovoltaic (PV) systems is one of the major focuses of PV research in recent times. This is evident in its low carbon emissions and efficiency. Power failure or outages from commercial providers, in general, do not promote development in the public and private sectors; they basically limit the development of industries. The need for a well-structured PV system is of importance for an efficient and cost-effective monitoring system. The purpose of this paper is to validate the maximum power point of an off-grid PV system, taking into consideration the most effective tilt and orientation angles for PVs in the southern hemisphere. This paper is based on analyzing the system using a solar charger with maximum power point tracking (MPPT) from a pulse width modulation (PWM) perspective. The power conditioning device chosen is a solar charger with MPPT. The practical setup consists of a PV panel set to an orientation angle of 0°N, with corresponding tilt angles of 36°, 26°, and 16°. Preliminary results include regression analysis (normal probability plot) showing the maximum power point in the system as well as the best tilt angle for maximum power point tracking.
Keywords: poly-crystalline PV panels, solar chargers, tilt and orientation angles, maximum power point tracking (MPPT), pulse width modulation (PWM)
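A hedged sketch of a perturb-and-observe MPPT loop, a standard strategy of the kind an MPPT solar charger implements (the paper does not specify the controller's internal algorithm, and the P-V curve below is a toy stand-in):
```python
# Perturb-and-observe MPPT on a toy P-V curve with a peak near 10 V.
def pv_power(v):
    return max(v * (8.0 - 0.4 * v), 0.0)

def perturb_and_observe(v=5.0, dv=0.1, steps=200):
    p_prev = pv_power(v)
    for _ in range(steps):
        v += dv
        p = pv_power(v)
        if p < p_prev:     # power dropped: reverse the perturbation direction
            dv = -dv
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"tracked MPP: {v_mpp:.2f} V, {p_mpp:.2f} W")
```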
Procedia PDF Downloads 177
3062 Diagnostic Yield of CTPA and Value of Pre-Test Assessments in Predicting the Probability of Pulmonary Embolism
Authors: Shanza Akram, Sameen Toor, Heba Harb Abu Alkass, Zainab Abdulsalam Altaha, Sara Taha Abdulla, Saleem Imran
Abstract:
Acute pulmonary embolism (PE) is a common disease and can be fatal. The clinical presentation is variable and nonspecific, making accurate diagnosis difficult. Testing of patients with suspected acute PE has increased dramatically. However, the overuse of some tests, particularly CT and D-dimer measurement, may not improve care while potentially leading to patient harm and unnecessary expense. CTPA is the investigation of choice for PE. Its easy availability, accuracy, and ability to provide an alternative diagnosis have lowered the threshold for performing it, resulting in its overuse. Guidelines recommend the use of clinical pretest probability tools such as the Wells score to assess the risk of suspected PE. Unfortunately, implementation of guidelines in clinical practice is inconsistent. Aim: To study the diagnostic yield of CTPA and the clinical pretest probability of patients according to the Wells score, and to determine whether or not CTPA is overused in our service. Methods: CT scans done on patients with suspected PE in our hospital from 1st January 2014 to 31st December 2014 were retrospectively reviewed. Medical records were reviewed to study demographics, clinical presentation, and final diagnosis, and to establish whether the Wells score and D-dimer were used correctly in predicting the probability of PE and the need for subsequent CTPA. Results: 100 patients (51 male) underwent CTPA in the time period. Mean age was 57 years (24-91 years). The majority of patients presented with shortness of breath (52%). Other presenting symptoms included chest pain (34%), palpitations (6%), collapse (5%), and haemoptysis (5%). A D-dimer test was done in 69%. Overall, the Wells score was low (<2) in 28%, moderate (>2 - <6) in 47%, and high (>6) in 15% of patients. The Wells score was documented in the medical notes of only 20% of patients. PE was confirmed in 12% (8 male) of patients; 4 had bilateral PEs. In the high-risk group (Wells >6) (n=15), there were 5 diagnosed PEs. In the moderate-risk group (Wells >2 - <6) (n=47), there were 6, and in the low-risk group (Wells <2) (n=28), one case of PE was confirmed. CT scans negative for PE showed pleural effusion in 30, consolidation in 20, atelectasis in 15, and a pulmonary nodule in 4 patients; 31 scans were completely normal. Conclusion: The yield of CT for pulmonary embolism was low in our cohort at 12%. A significant number of our patients who underwent CTPA had a low Wells score. This suggests that CTPA is overutilized in our institution. The Wells score was poorly documented in medical notes. CTPA was able to detect alternative pulmonary abnormalities explaining the patient's clinical presentation. CTPA requires concomitant pretest clinical probability assessment to be an effective diagnostic tool for confirming or excluding PE. Clinicians should use validated clinical prediction rules to estimate pretest probability in patients in whom acute PE is being considered. Combining Wells scores with clinical and laboratory assessment may reduce the need for CTPA.
Keywords: CTPA, D-dimer, pulmonary embolism, Wells score
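A small calculator matching the three-tier Wells cut-offs used above (<2 low, >2 - <6 moderate, >6 high); the item weights follow the standard Wells criteria for PE, and the handling of scores exactly at 2 or 6 is an assumption:
```python
# Wells score calculator with three-tier risk stratification.
WELLS_ITEMS = {
    "clinical signs of DVT": 3.0,
    "PE is the most likely diagnosis": 3.0,
    "heart rate > 100 bpm": 1.5,
    "immobilization or surgery in the past 4 weeks": 1.5,
    "previous DVT or PE": 1.5,
    "haemoptysis": 1.0,
    "active malignancy": 1.0,
}

def wells_score(findings):
    score = sum(WELLS_ITEMS[f] for f in findings)
    if score < 2:
        risk = "low"
    elif score <= 6:
        risk = "moderate"
    else:
        risk = "high"
    return score, risk

print(wells_score({"heart rate > 100 bpm", "previous DVT or PE"}))  # (3.0, 'moderate')
```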
Procedia PDF Downloads 232
3061 Developing an Integrated Clinical Risk Management Model
Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei
Abstract:
Introduction: Improving patient safety is one of the main priorities in healthcare systems, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool that can compensate for the limitations of each risk assessment and management tool with the advantages of the others. Methods: The procedure comprised two main stages: development of an initial model during meetings with professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This is a quantitative-qualitative study. For the qualitative dimension, focus groups with an inductive approach were used. To evaluate the results of the qualitative study, a quantitative assessment of the two parts of the fourth phase and of the seven phases of the research was conducted. Purposive and stratified sampling of the various teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through the application of an activity breakdown structure, failure mode and effects analysis (FMEA), the healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT), and Eindhoven Classification Model (ECM) tools. The model was applied to patients admitted to a day-clinic ward of a public hospital for surgery from October 2012 to June. Statistical Analysis Used: Qualitative data analysis was done through content analysis, and quantitative analysis through checklists and edited RPN tables. Results: After verification of the final model in eight steps, the patient admission process for surgery was mapped out by focus discussion group (FDG) members in five main phases. Then, with the adopted FMEA methodology, 85 failure modes, along with their causes, effects, and preventive capabilities, were set out in the tables. The tables developed to calculate the RPN index contain three criteria for severity, two criteria for probability, and two criteria for preventability. Three failure modes were above the determined significant-risk limit (RPN > 250). After a 3-month period, patient misidentification incidents were the most frequently reported events. Each RPN criterion of the misidentification events was compared, and it was found that the RPN numbers for the three reported misidentification events could be determined against the scores predicted in the previous phase. The root causes identified through the fault tree were categorized with the ECM. The wrong-side surgery event was selected by the focus discussion group for proposing improvement actions. The most important causes were a lack of planning for the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system in the health information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the healthcare industry requires risk managers to have a multifaceted vision. Applying only retrospective or only prospective tools for risk management therefore does not work, and each organization must provide the conditions for the potential application of these methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance.
Keywords: failure modes and effects analysis, risk management, root cause analysis, model
Procedia PDF Downloads 249
3060 R Statistical Software Applied in Reliability Analysis: Case Study of Diesel Generator Fans
Authors: Jelena Vucicevic
Abstract:
Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety, and monetary costs. There are established ways to calculate reliability, unreliability, failure density, and failure rate. This paper introduces another way of calculating reliability, using the R statistical software. R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows, and macOS. The R programming environment is a widely used open-source system for statistical analysis and statistical programming. It includes thousands of functions for the implementation of both standard and new statistical methods, and it does not limit the user to operations related only to these functions. This program has many benefits over similar programs: it is free and, as open source, constantly updated; it has a built-in help system; and the R language is easy to extend with user-written functions. The significance of the work is the calculation of time to failure, or reliability, in a new way, using statistics. Another advantage of this calculation is that no technical details are needed; it can be applied to any part for which we need to know the time to failure, in order to schedule appropriate maintenance, but also to maximize usage and minimize costs. In this case, calculations have been made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with higher-quality fans to prevent future failures. Seventy generators were studied. For each one, the number of hours of running time from its first being put into service until fan failure, or until the end of the study (whichever came first), was recorded. The dataset consists of two variables: hours and status. Hours show the working time of each fan, and status shows the event: 1 = failed, 0 = censored. Censored data represent cases where the specific case could not be followed to its end, so it could either fail or survive. Obtaining the result using R was easy and quick, and the program takes censored data into consideration and includes them in the results; this is not so easy in hand calculation. For the purpose of the paper, results from the R program have been compared to hand calculations in two different cases: censored data treated as failures, and censored data treated as successes. In all three cases, the results are significantly different. If the user decides to use R for further calculations, its handling of censored data will give more precise results than hand calculation.
Keywords: censored data, R statistical software, reliability analysis, time to failure
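The survival analysis described is done in R; as a language-neutral illustration of the same computation, here is a hand-rolled Kaplan-Meier reliability estimate with censoring in Python, on invented (hours, status) pairs in the same format as the fan dataset (status 1 = failed, 0 = censored):
```python
# Kaplan-Meier reliability curve with right-censored observations.
data = [(4500, 1), (4600, 0), (11500, 1), (11500, 1), (15000, 0),
        (16000, 0), (28500, 1), (32000, 0), (46000, 1), (61000, 0)]

def kaplan_meier(data):
    s, at_risk = 1.0, len(data)
    curve = []
    for t, status in sorted(data):
        if status == 1:                 # an observed failure updates S(t)
            s *= (at_risk - 1) / at_risk
            curve.append((t, s))
        at_risk -= 1                    # censored units just leave the risk set
    return curve

for t, s in kaplan_meier(data):
    print(f"t = {t:6d} h   reliability S(t) = {s:.3f}")
```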
Procedia PDF Downloads 401
3059 Probability Fuzzy Aggregation Operators in Vehicle Routing Problem
Authors: Anna Sikharulidze, Gia Sirbiladze
Abstract:
For the evaluation of unreliability levels of movement on closed routes in the vehicle routing problem, a family of fuzzy operators is constructed. The interactions between routing factors under extreme conditions on the roads are considered, and a multi-criteria decision-making (MCDM) model is built. The constructed aggregations are based on the Choquet integral and the associated probability class of a fuzzy measure. Propositions on the correctness of the extension are proved. Connections between the operators and compositions of dual triangular norms are described, and the conjugate connections between the constructed operators are shown. The operators reflect interactions among all combinations of the factors in the fuzzy MCDM process. Several variants of the constructed operators are used in the decision-making problem regarding the assessment of unreliability and possibility levels of movement on closed routes.
Keywords: vehicle routing problem, associated probabilities of a fuzzy measure, Choquet integral, fuzzy aggregation operator
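A minimal discrete Choquet integral, the aggregation primitive these operators build on; the fuzzy measure here is a toy assignment over criteria subsets, not the associated-probability construction of the paper:
```python
# Discrete Choquet integral over a small fuzzy measure.
def choquet(values, mu):
    # values: {criterion: score}; mu: fuzzy measure on frozensets of criteria.
    items = sorted(values.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    remaining = set(values)
    for crit, x in items:
        total += (x - prev) * mu[frozenset(remaining)]
        prev = x
        remaining.discard(crit)
    return total

mu = {frozenset({"a", "b"}): 1.0, frozenset({"a"}): 0.7,
      frozenset({"b"}): 0.5, frozenset(): 0.0}
print(choquet({"a": 0.4, "b": 0.9}, mu))   # 0.4*1.0 + (0.9-0.4)*0.5 = 0.65
```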
Procedia PDF Downloads 326
3058 Using Finite Element to Predict Failure of Light Weight Bridges Due to Vehicles Impact: Case Study
Authors: Amin H. Almasria, Rajai Z. Alrousanb, Al-Harith Manasrah
Abstract:
The collapse of lightweight pedestrian bridges due to vehicle collision is investigated and studied in detail using dynamic nonlinear finite element analysis. A typical bridge widely used in Jordan is studied and modeled under truck collision using one-dimensional beam finite elements in order to minimize the analysis time required by the dynamic nature of the problem. Truck collision with the bridge is simulated at different speeds and collision locations using a dynamic explicit finite element scheme with material nonlinearity taken into account. Energy absorption of the bridge is investigated through the principle of energy conservation, where the truck's kinetic energy is assumed to be stored in the bridge as strain energy. Weak failure points in the bridges were identified, and modifications are proposed in order to strengthen the bridge structure and prevent total collapse. The proposed design modifications were successful in allowing the bridge to fail locally rather than globally and are expected to help save lives.
Keywords: finite element method, dynamic impact, pedestrian bridges, strain energy, collapse failure
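The energy balance can be checked on the back of an envelope: the truck's kinetic energy must be absorbed as strain energy by the bridge. The mass, speed, and capacity figures below are assumed for illustration:
```python
# Kinetic-energy vs strain-energy-capacity comparison.
def kinetic_energy(mass_kg, speed_kmh):
    v = speed_kmh / 3.6
    return 0.5 * mass_kg * v**2          # joules

truck_ke = kinetic_energy(10_000, 60)    # 10 t truck at 60 km/h
bridge_capacity = 9.0e5                  # assumed strain-energy capacity (J)

print(f"impact energy: {truck_ke/1e3:.0f} kJ, capacity: {bridge_capacity/1e3:.0f} kJ")
print("collapse expected" if truck_ke > bridge_capacity else "bridge survives")
```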
Procedia PDF Downloads 624
3057 Shear Strengthening of RC T-Beams by Means of CFRP Sheets
Authors: Omar A. Farghal
Abstract:
This research aimed to experimentally and analytically investigate the contribution of bonded web carbon fiber reinforced polymer (CFRP) sheets to the shear strength of reinforced concrete (RC) T-beams. Two strengthening techniques using CFRP strips were applied along the shear-span zone: the first is a vertical U-jacket, and the latter consists of vertical strips bonded to the beam sides only. The fibers of both the U-jacket and side sheets were vertically oriented (θ = 90°). Test results showed that the strengthening technique with U-jacket CFRP sheets improved the shear strength in particular. Three failure mechanisms were recognized for the tested beams, depending upon the end condition of the bonded CFRP sheet. Although the failure mode of the different beams was a brittle one, the strengthened beams provided with U-jacket CFRP sheets showed more or less ductile behavior at higher loading levels, up to a load level just before failure. As a consequence, these beams showed an acceptable enhancement in structural ductility. Moreover, the obtained results concerning both the strains induced in the CFRP sheets and the maximum loads are used to study the applicability of the analytical models proposed in this study (ACI code) to predicting the nominal shear strength of the strengthened beams.
Keywords: carbon fiber reinforced polymer, wrapping, ductility, shear strengthening
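For reference, a hedged sketch of the FRP shear contribution in the ACI 440.2R-style formulation (the 'ACI code' family of models referenced above), Vf = Afv·ffe·(sin α + cos α)·dfv/sf with Afv = 2·n·tf·wf; all inputs are illustrative values, and design reduction factors are omitted:
```python
# Illustrative FRP shear-contribution calculation (ACI 440.2R-style).
import math

n, tf, wf = 1, 1.0, 50.0        # plies, thickness (mm), strip width (mm)
ffe = 600.0                     # effective FRP stress at failure (MPa), assumed
alpha = math.radians(90)        # vertical fibres, as in the tested beams
dfv, sf = 400.0, 150.0          # effective depth (mm), strip spacing (mm)

Afv = 2 * n * tf * wf
Vf = Afv * ffe * (math.sin(alpha) + math.cos(alpha)) * dfv / sf
print(f"FRP shear contribution Vf ~ {Vf/1000:.1f} kN")
```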
Procedia PDF Downloads 255
3056 Downtime Modelling for the Post-Earthquake Building Assessment Phase
Authors: S. Khakurel, R. P. Dhakal, T. Z. Yeow
Abstract:
Downtime is one of the major sources (alongside damage and injury/death) of financial loss incurred by a structure in an earthquake. The length of downtime associated with a building after an earthquake varies depending on the time taken for the reaction (to the earthquake), decision (on the future course of action), and execution (of the decided course of action) phases. Post-earthquake assessment of buildings is a key step in the decision-making process, both to assign the appropriate safety placard and to decide whether a damaged building is to be repaired or demolished. The aim of the present study is to develop a model quantifying the downtime associated with the post-earthquake building-assessment phase in terms of two parameters: i) the duration of the different assessment phases; and ii) the probability of the different colour taggings. Post-earthquake assessment of buildings includes three stages: Level 1 Rapid Assessment, comprising a fast external inspection shortly after the earthquake; Level 2 Rapid Assessment, including a visit inside the building; and Detailed Engineering Evaluation (if needed). In this study, the durations of all three assessment phases are first estimated from the total number of damaged buildings, the total number of available engineers, and the average time needed for assessing each building. Then, the probability of the different tag colours is computed from the 2010-11 Canterbury Earthquake Sequence database. Finally, a downtime model for the post-earthquake building inspection phase is proposed based on the estimated phase lengths and probabilities of tag colours. This model is expected to be used for rapid estimation of seismic downtime within the Loss Optimisation Seismic Design (LOSD) framework.
Keywords: assessment, downtime, LOSD, Loss Optimisation Seismic Design, phase length, tag colour
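The phase-length estimate reduces to direct arithmetic: duration = (buildings × hours per inspection) / (engineers × working hours per day). All numbers below are invented for illustration:
```python
# Inspection-phase duration estimate from workload and workforce.
def phase_duration_days(n_buildings, hours_per_building, n_engineers, hours_per_day=8):
    total_hours = n_buildings * hours_per_building
    return total_hours / (n_engineers * hours_per_day)

for phase, hrs in [("Level 1 rapid", 0.5), ("Level 2 rapid", 2.0), ("Detailed evaluation", 16.0)]:
    d = phase_duration_days(5000, hrs, n_engineers=100)
    print(f"{phase}: ~{d:.1f} days")
```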
Procedia PDF Downloads 185
3055 Trajectories of Conduct Problems and Cumulative Risk from Early Childhood to Adolescence
Authors: Leslie M. Gutman
Abstract:
Conduct problems (CP) represent a major dilemma, with wide-ranging and long-lasting individual and societal impacts. Children experience heterogeneous patterns of conduct problems, based on the age of onset, developmental course, and related risk factors, from around age 3. Early childhood represents a potential window for intervention efforts aimed at changing the trajectory of early-starting conduct problems. Using the UK Millennium Cohort Study (n = 17,206 children), this study (a) identifies trajectories of conduct problems from ages 3 to 14 years and (b) assesses the cumulative and interactive effects of individual, family, and socioeconomic risk factors from ages 9 months to 14 years. Risk factors were assessed in three domains: child (i.e., low verbal ability, hyperactivity/inattention, peer problems, emotional problems), family (i.e., single families, parental poor physical and mental health, large family size), and socioeconomic (i.e., low family income, low parental education, unemployment, social housing). A cumulative risk score for the child, family, and socioeconomic domains at each age was calculated. It was then examined how the cumulative risk scores explain variation in the trajectories of conduct problems. Lastly, interactive effects among the different domains of cumulative risk were tested. Using group-based trajectory modeling, four distinct trajectories were found, including a 'low' problem group and three groups showing conduct problems: 'school-age onset'; 'early-onset, desisting'; and 'early-onset, persisting'. The 'low' group (57% of the sample) showed a low probability of conduct problems, close to zero, from 3 to 14 years. The 'early-onset, desisting' group (23% of the sample) demonstrated a moderate probability of CP in early childhood, with a decline from 3 to 5 years and a low probability thereafter. The 'early-onset, persisting' group (8%) followed a high probability of conduct problems, which declined from 11 years but was still close to 70% at 14 years. In the 'school-age onset' group (12% of the sample), a moderate probability of conduct problems was shown at 3 and 5 years, with a sharp increase by 7 years, rising to 50% at 14 years. In terms of individual risk, all factors increased the likelihood of being in the childhood-onset groups compared to the 'low' group. For cumulative risk, the socioeconomic domain at 9 months and 3 years, the family domain at all ages except 14 years, and the child domain at all ages differentiated the childhood-onset groups from the 'low' group. Cumulative risk at 9 months and 3 years did not differentiate between the 'school-age onset' group and the 'low' group. Significant interactions were found between the domains for the 'early-onset, desisting' group, suggesting that low levels of risk in one domain may buffer the effects of high risk in another domain. The implications of these findings for preventive interventions are highlighted.
Keywords: conduct problems, cumulative risk, developmental trajectories, early childhood, adolescence
Procedia PDF Downloads 251
3054 Genome-Wide Association Study Identifies COL2A1 as a Susceptibility Gene for the Hand Development Failure of Kashin-Beck Disease
Authors: Feng Zhang
Abstract:
Kashin-Beck disease (KBD) is a chronic osteochondropathy. The mechanism of the hand growth and development failure of KBD remains elusive. In this study, we conducted a two-stage genome-wide association study (GWAS) of the palmar length-width ratio (LWR) in KBD, involving a total of 493 Chinese Han KBD patients. The Affymetrix Genome-Wide Human SNP Array 6.0 was applied for SNP genotyping. Association analysis was conducted with the PLINK software, and imputation analysis was performed with IMPUTE against the reference panel of the 1000 Genomes Project. In the GWAS, the most significant association was observed between palmar LWR and rs2071358 of the COL2A1 gene (P value = 4.68×10⁻⁸). Imputation analysis identified 3 SNPs surrounding rs2071358 with significant or suggestive association signals. The replication study observed additional significant association signals at both rs2071358 (P value = 0.017) and rs4760608 (P value = 0.002) of the COL2A1 gene after Bonferroni correction. Our results suggest that COL2A1 is a novel susceptibility gene involved in the growth and development failure of the hand in KBD.
Keywords: Kashin-Beck disease, genome-wide association study, COL2A1, hand
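A minimal sketch of the per-SNP association test underlying a GWAS (which tools like PLINK perform at scale): a chi-square test on allele counts in cases versus controls, with invented counts:
```python
# Single-SNP allelic association test on a hypothetical 2x2 table.
from scipy.stats import chi2_contingency

#            allele A   allele a
cases    =  [180,       120]
controls =  [140,       160]

chi2, p, dof, _ = chi2_contingency([cases, controls])
print(f"chi2 = {chi2:.2f}, p = {p:.3e}")
# Genome-wide significance conventionally requires p < 5e-8.
```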
Procedia PDF Downloads 220
3053 Optimal Continuous Scheduled Time for a Cumulative Damage System with Age-Dependent Imperfect Maintenance
Authors: Chin-Chih Chang
Abstract:
Many manufacturing systems suffer failures due to complex degradation processes and various environmental conditions such as random shocks. Consider an operating system that is subject to random shocks and works at random times for successive jobs. When successive jobs often result in production losses and performance deterioration, it is better to perform maintenance or replacement at a planned time. A preventive replacement (PR) policy is presented in which the system is replaced, before a failure occurs, at a continuous time T. In this policy, the failure characteristics of the system are modeled as follows. Each job causes a random amount of additive damage to the system, and the system fails when the cumulative damage exceeds a failure threshold. Suppose that the deteriorating system suffers one of two types of shocks with age-dependent probabilities: a type-I (minor) shock is rectified by a minimal repair, while a type-II (catastrophic) shock causes the system to fail. A corrective replacement (CR) is performed immediately when the system fails. In summary, a generalized maintenance model for scheduling the replacement plan of an operating system is presented below: PR is carried out at time T, whereas CR is carried out when any type-II shock occurs or the total damage exceeds the failure level. The main objective is to determine the optimal continuous schedule time for preventive replacement by minimizing the mean cost rate function. The existence and uniqueness of the optimal replacement policy are derived analytically. It can be seen that the present model is a generalization of previous models, and that the policy with preventive replacement outperforms the one without preventive replacement.
Keywords: preventive replacement, working time, cumulative damage model, minimal repair, imperfect maintenance, optimization
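As a simpler special case of such a model, the optimization step can be sketched for a classical age-replacement policy: minimize the mean cost rate C(T) = [cp·R(T) + cf·F(T)] / E[cycle length] over a grid of T values, with an assumed Weibull lifetime and assumed costs:
```python
# Grid search for the cost-rate-minimizing preventive replacement time T.
import numpy as np

cp, cf = 1.0, 10.0               # preventive vs corrective replacement cost
k, lam = 2.0, 1.0                # Weibull shape/scale of time to failure

def reliability(t):
    return np.exp(-((t / lam) ** k))

def cost_rate(T, n=2000):
    t = np.linspace(0, T, n)
    r = reliability(t)
    expected_cycle = np.sum((r[1:] + r[:-1]) * np.diff(t)) / 2  # trapezoid rule
    F = 1.0 - r[-1]
    return (cp * (1 - F) + cf * F) / expected_cycle

Ts = np.linspace(0.05, 3.0, 300)
best = min(Ts, key=cost_rate)
print(f"optimal T ~ {best:.2f}, cost rate ~ {cost_rate(best):.3f}")
```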
Procedia PDF Downloads 363
3052 A Semi-Markov Chain-Based Model for the Prediction of Deterioration of Concrete Bridges in Quebec
Authors: Eslam Mohammed Abdelkader, Mohamed Marzouk, Tarek Zayed
Abstract:
Infrastructure systems are crucial to every aspect of life on Earth. Existing infrastructure is subject to degradation, while the demands on it are growing in response to high standards of safety and health, population growth, and environmental protection. Bridges play a crucial role in urban transportation networks, and they are subject to a high level of deterioration because of variable traffic loading, extreme weather conditions, freeze-thaw cycles, etc. The development of Bridge Management Systems (BMSs) has become a fundamental imperative, especially in large transportation networks, due to the huge variance between the need for maintenance actions and the funds available to perform them. Deterioration models are a very important ingredient of the effective use of BMSs. This paper presents a probabilistic, time-based model that is capable of predicting the condition ratings of concrete bridge decks along their service life. The deterioration process of the concrete bridge decks is modeled using a semi-Markov process. One of the main challenges of the Markov Chain Decision Process (MCDP) is the construction of the transition probability matrix; the proposed model overcomes this issue by modeling the sojourn times with probability density functions. The sojourn times of each condition state are fitted to probability density functions on the basis of goodness-of-fit tests such as the Kolmogorov-Smirnov test, the Anderson-Darling test, and the chi-squared test. The parameters of the probability density functions are obtained using maximum likelihood estimation (MLE). The condition ratings obtained from the Ministry of Transportation in Quebec (MTQ) are utilized as the database for constructing the deterioration model. Finally, a comparison is conducted between the Markov chain and the semi-Markov chain to select the most feasible prediction model.
Keywords: bridge management system, bridge decks, deterioration model, semi-Markov chain, sojourn times, maximum likelihood estimation
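A hedged sketch of the sojourn-time modelling step: fit a candidate density to the times spent in one condition state by maximum likelihood, then check it with a Kolmogorov-Smirnov test (the sojourn times here are synthetic, not MTQ data):
```python
# MLE fit of a Weibull sojourn-time density plus a KS goodness-of-fit check.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
sojourn_years = stats.weibull_min.rvs(2.2, scale=8.0, size=200, random_state=rng)

# MLE fit (location pinned at zero, a common choice for duration data).
shape, loc, scale = stats.weibull_min.fit(sojourn_years, floc=0)
ks_stat, p_value = stats.kstest(sojourn_years, "weibull_min", args=(shape, loc, scale))

print(f"shape = {shape:.2f}, scale = {scale:.2f}")
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
# A high p-value means the fitted density is not rejected for this state.
```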
Procedia PDF Downloads 213