Search results for: probability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1240


1000 The Integrated Strategy of Maintenance with a Scientific Analysis

Authors: Mahmoud Meckawey

Abstract:

This research deals with one of the most important aspects of the maintenance field, namely maintenance strategy: the branch concerned with the concepts and schematic thinking behind how to manage maintenance and how to deal with defects in engineering products (buildings, machines, etc.) in general. The paper addresses the following topics: i) The engineering product and its technical systems: when we treat the maintenance process from a strategic viewpoint, we deal with an engineering product that consists of multiple integrated systems; in practice, there is no engineering product with only one system. We discuss and explain this topic and, from it, derive a developed definition of the maintenance process. ii) The factors, or basis, of functional efficiency: the main factors that affect the functional efficiency of systems and engineering products, from which we give a technical definition of defects and of how they occur. iii) The legality of the occurrence of defects (legal defects and illegal defects): here we assume that all the factors of functional efficiency have been applied, and then we discuss the results. iv) The guarantee, the functional span age and the technical surplus concepts: complementing the topic above, and in association with reliability theorems, we deal with the probability of failure, which is mainly of interest at the design stage for checking and adapting the design of elements. In maintainability, however, we work differently, since we deal with the actual state of the systems. We therefore deal with the rest of the story, that is, with the complementary part of the probability-of-failure term, which refers to the actual surplus of functionality of the systems.

Keywords: engineering product and technical systems, functional span age, legal and illegal defects, technical and functional surplus

Procedia PDF Downloads 475
999 The Probability of Smallholder Broiler Chicken Farmers' Participation in the Mainstream Market within Maseru District in Lesotho

Authors: L. E. Mphahama, A. Mushunje, A. Taruvinga

Abstract:

Although broiler production does not generate large incomes among the smallholder community, it represents a main source of livelihood and part of households' nutritional requirements. As a result, the market for broiler meat is growing faster than that of any other meat product and is projected to continue growing in the coming decades. The implication, however, is that a multitude of factors shapes whether smallholder broiler farmers participate in mainstream markets. Socio-economic and institutional factors in broiler farming, collected from 217 smallholder broiler farmers, were incorporated into a binary choice model to estimate the probability of broiler farmers' participation in the mainstream markets within the Maseru district in Lesotho. Of the thirteen (13) predictor variables fitted into the model, six (6) variables (household size, number of years in broiler business, stock size, access to transport, access to extension services and access to market information) had significant coefficients, while seven (7) variables (level of education, marital status, price of broilers, poultry association, access to contract, access to credit and access to storage) did not have a significant impact. It is recommended that smallholder broiler farmers organize themselves into cooperatives, which will act as a vehicle through which they can access contracts and formal markets. These cooperatives will also facilitate training and workshops on broiler rearing and marketing through extension visits.
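
As an illustration of the kind of binary participation model described above, the sketch below fits a logistic (logit) specification with statsmodels on simulated data. The logit link, the variable names, the data-generating process and all numbers are assumptions for demonstration only; they are not the study's survey data or results.

    # Sketch: binary logit for market participation on synthetic data.
    # Variable names mirror the abstract; values are simulated, not the survey data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 217  # sample size reported in the abstract

    X = np.column_stack([
        rng.integers(1, 10, n),        # household size
        rng.integers(0, 20, n),        # years in broiler business
        rng.integers(50, 2000, n),     # stock size
        rng.integers(0, 2, n),         # access to transport (0/1)
        rng.integers(0, 2, n),         # access to extension services (0/1)
        rng.integers(0, 2, n),         # access to market information (0/1)
    ])
    # Hypothetical data-generating process for illustration only
    logits = -2.0 + 0.1 * X[:, 0] + 0.05 * X[:, 1] + 0.001 * X[:, 2] \
             + 0.8 * X[:, 3] + 0.6 * X[:, 4] + 0.7 * X[:, 5]
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

    model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
    print(model.summary())                        # coefficient significance
    print(model.predict(sm.add_constant(X))[:5])  # estimated participation probabilities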

Keywords: broiler chicken, mainstream market, Maseru district, participation, smallholder farmers

Procedia PDF Downloads 152
998 Electro-Fenton Degradation of Erythrosine B Using Carbon Felt as a Cathode: Doehlert Design as an Optimization Technique

Authors: Sourour Chaabane, Davide Clematis, Marco Panizza

Abstract:

This study investigates the oxidation of Erythrosine B (EB) food dye by a homogeneous electro-Fenton process using iron (II) sulfate heptahydrate as a catalyst, a carbon felt cathode, and a Ti/RuO2 anode. The treated synthetic wastewater contains 100 mg L⁻¹ of EB and has a pH of 3. The effects of three independent variables were considered for process optimization: applied current intensity (0.1 – 0.5 A), iron concentration (1 – 10 mM), and stirring rate (100 – 1000 rpm). Their interactions were investigated using response surface methodology (RSM) based on a Doehlert design as the optimization method. EB removal efficiency and energy consumption were considered as model responses after 30 minutes of electrolysis. Analysis of variance (ANOVA) revealed that the quadratic model fitted the experimental data adequately, with R² (0.9819), adj-R² (0.9276) and a low Fisher probability (< 0.0181) for the EB removal model, and R² (0.9968), adj-R² (0.9872) and a low Fisher probability (< 0.0014) for the energy consumption model, reflecting robust statistical significance. The energy consumption model depends significantly on current density, as expected. The foregoing RSM results led to the following optimal conditions for EB degradation: a current intensity of 0.2 A, an iron concentration of 9.397 mM, and a stirring rate of 500 rpm, which gave a maximum decolorization rate of 98.15% with a minimum energy consumption of 0.74 kWh m⁻³ at 30 min of electrolysis.
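
A minimal sketch of the response-surface step described above: fitting a full quadratic model of removal efficiency in current, iron concentration and stirring rate by ordinary least squares. The design points and responses below are invented placeholders, not the actual Doehlert design or measurements from the study.

    # Sketch: quadratic response-surface fit for removal efficiency vs. current,
    # iron concentration and stirring rate. Design points/responses are placeholders.
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical design matrix (current I, iron Fe, stirring S) and responses
    X_raw = np.array([
        [0.1, 1.0, 100], [0.5, 1.0, 100], [0.3, 10.0, 100], [0.3, 5.5, 1000],
        [0.1, 10.0, 1000], [0.5, 10.0, 1000], [0.3, 5.5, 550], [0.2, 9.4, 500],
        [0.4, 3.0, 300], [0.2, 7.0, 800], [0.5, 5.5, 550], [0.1, 5.5, 550],
    ])
    y = np.array([70, 80, 85, 88, 90, 92, 95, 98, 82, 91, 86, 75.0])  # % EB removal

    I, Fe, S = X_raw.T
    # Full quadratic model: linear, interaction and squared terms
    X = np.column_stack([I, Fe, S, I*Fe, I*S, Fe*S, I**2, Fe**2, S**2])
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    print(fit.rsquared, fit.rsquared_adj)   # compare with the reported R2 / adj-R2
    print(fit.f_pvalue)                     # overall Fisher probability of the model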

Keywords: electro-Fenton, erythrosine B, dye, response surface methodology, carbon felt

Procedia PDF Downloads 74
997 Learning a Bayesian Network for Situation-Aware Smart Home Service: A Case Study with a Robot Vacuum Cleaner

Authors: Eu Tteum Ha, Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

The smart home environment, backed by IoT (internet of things) technologies, enables intelligent services based on awareness of the situation a user is currently in. One convenient sensor for recognizing situations within a home is the smart meter, which can monitor the status of each electrical appliance in real time. This paper aims at learning a Bayesian network that models the causal relationship between user situations and the status of the electrical appliances. Using such a network, we can infer the current situation based on the observed status of the appliances. However, learning the conditional probability tables (CPTs) of the network requires many training examples, which cannot be obtained unless the user situations are closely monitored by some means. This paper proposes a method for learning the CPT entries of the network relying only on user feedback given occasionally. In our case study with a robot vacuum cleaner, feedback arrives whenever the user gives the robot an order that contradicts its preprogrammed setting. Given a network with randomly initialized CPT entries, the proposed method uses this feedback information to adjust the relevant CPT entries in the direction of increasing the probability of recognizing the desired situations. Simulation experiments show that the method can rapidly improve the recognition performance of the Bayesian network using a relatively small number of feedbacks.
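
The sketch below illustrates the feedback-driven CPT adjustment idea on a toy naive-Bayes-style network. The network shape, the update rule (nudging entries toward the observed appliance states), and all numbers are simplifying assumptions for illustration, not the authors' exact formulation.

    # Sketch: feedback-driven adjustment of CPT entries in a tiny naive-Bayes-style
    # network P(situation) * prod_i P(appliance_i | situation). The update rule below
    # (move the relevant entries toward the observed states) is an illustrative assumption.
    import numpy as np

    rng = np.random.default_rng(1)
    n_situations, n_appliances = 3, 4
    prior = np.full(n_situations, 1.0 / n_situations)
    # cpt[s, a] = P(appliance a is ON | situation s), randomly initialised
    cpt = rng.uniform(0.2, 0.8, size=(n_situations, n_appliances))

    def infer(appliance_on):
        """Posterior over situations given the observed on/off status vector."""
        lik = np.prod(np.where(appliance_on, cpt, 1.0 - cpt), axis=1)
        post = prior * lik
        return post / post.sum()

    def feedback_update(appliance_on, desired_situation, lr=0.1):
        """User override: pull the CPT row of the desired situation toward the
        observed appliance states, raising its posterior the next time."""
        target = appliance_on.astype(float)
        cpt[desired_situation] += lr * (target - cpt[desired_situation])

    obs = np.array([1, 0, 1, 0], dtype=bool)   # e.g. TV on, washer off, ...
    print("before:", infer(obs))
    feedback_update(obs, desired_situation=2)
    print("after :", infer(obs))               # situation 2 becomes more probable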

Keywords: Bayesian network, IoT, learning, situation-awareness, smart home

Procedia PDF Downloads 524
996 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient encoding and decoding algorithms, but they concentrate their detection and correction abilities on certain error configurations. Robust codes can protect against any configuration of errors with a predetermined probability. This is accomplished by using perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are cleaning a signal from noise, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust code. The first class of robust code is based on the multiplicative inverse in a finite field. In the second robust code construction the redundancy part is a cube of the information part. The paper also investigates the characteristics of the proposed robust and linear codes.

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 491
995 Effectiveness of Variable Speed Limit Signs in Reducing Crash Rates on Roadway Construction Work Zones in Alaska

Authors: Osama Abaza, Tanay Datta Chowdhury

Abstract:

As a driver's speed increases, so do the probability of an incident and the likelihood of injury. The presence of equipment, personnel, and a changing landscape in construction zones creates greater potential for incidents. This is especially concerning in Alaska, where summer construction activity, coinciding with the peak annual traffic volumes, cannot be avoided. In order to reduce vehicular speeding in work zones, and therefore the probability of crash and incident occurrence, variable speed limit (VSL) systems can be implemented in the form of radar speed display trailers, since such trailers have been shown to be effective at reducing vehicular speed in construction zones. Deployment of VSL not only helps reduce the 85th percentile speed but also predominantly reduces the mean speed. A total of 2,147 incidents along with 385 crashes occurred in just one month around construction zones in Alaska, which seriously requires proper attention. This research provides a thorough crash analysis to better understand the causes and propose proper countermeasures. Crashes were predominantly recorded as vehicle-object collisions and sideswipes, and thus a significant share of crashes fell into the no-injury to minor-injury severity categories. Still, 35 major crashes, including 7 fatal ones, in a one-month period require immediate action such as the implementation of the VSL system, as it proved to be a speed reducer in construction zones on Alaskan roadways.

Keywords: speed, construction zone, crash, severity

Procedia PDF Downloads 253
994 Supplier Selection and Order Allocation Using a Stochastic Multi-Objective Programming Model and Genetic Algorithm

Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh

Abstract:

In this paper, we develop a multi-objective supplier selection and order allocation model in a stochastic environment in which the purchasing cost, the percentage of items delivered with delay, and the percentage of rejected items provided by each supplier are stochastic parameters following arbitrary probability distributions. To do so, we use dependent chance programming (DCP), which maximizes the probability of the event that the total purchasing cost, the total items delivered with delay, and the total rejected items are less than or equal to pre-determined values given by the decision maker. After transforming the above stochastic multi-objective programming problem into a stochastic single-objective problem using the minimum deviation method, we apply a genetic algorithm to solve the latter single-objective problem. The employed genetic algorithm performs a simulation process in order to calculate the stochastic objective function as its fitness function. Finally, we explore the impact of the stochastic parameters on the obtained solution via a sensitivity analysis exploiting the coefficient of variation. The results show that as the stochastic parameters have greater coefficients of variation, the value of the objective function in the stochastic single-objective programming problem worsens.
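
A compact sketch of the dependent-chance idea follows: a small genetic algorithm whose fitness is a Monte-Carlo estimate of the probability that total purchasing cost stays within a budget. The single chance objective, the normal cost distributions and all parameter values are illustrative assumptions, not the paper's model or data.

    # Sketch: dependent chance programming solved with a tiny genetic algorithm.
    # Decision: order allocation to 3 suppliers summing to total demand; stochastic
    # unit costs; fitness = Monte-Carlo estimate of P(total cost <= budget).
    # All numbers are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(2)
    demand, budget, n_sims = 100.0, 520.0, 500
    cost_mean = np.array([5.0, 5.2, 4.8])
    cost_std = np.array([0.3, 0.6, 0.9])

    def fitness(weights):
        q = demand * weights / weights.sum()            # order allocation
        costs = rng.normal(cost_mean, cost_std, size=(n_sims, 3))
        total = costs @ q
        return np.mean(total <= budget)                 # chance the event holds

    def ga(pop_size=30, gens=40, mut=0.1):
        pop = rng.uniform(0.01, 1.0, size=(pop_size, 3))
        for _ in range(gens):
            fit = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(fit)[-pop_size // 2:]]   # keep the fitter half
            kids = []
            for _ in range(pop_size - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                child = np.where(rng.random(3) < 0.5, a, b)   # uniform crossover
                child = np.clip(child + mut * rng.normal(size=3), 0.01, 1.0)  # mutation
                kids.append(child)
            pop = np.vstack([parents, kids])
        best = max(pop, key=fitness)
        return best / best.sum()

    print("best allocation share:", ga())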

Keywords: dependent chance programming, genetic algorithm, minimum deviation method, order allocation, supplier selection

Procedia PDF Downloads 256
993 A Framework Based on Dempster-Shafer Theory of Evidence Algorithm for the Analysis of the TV-Viewers’ Behaviors

Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi

Abstract:

In this paper, we propose an approach for detecting the behavior of viewers of a TV program in a non-controlled environment. The proposed experiment is based on the use of three types of connected objects (a smartphone, a smart watch, and a connected remote control). 23 participants were observed while watching their TV programs during three phases: before, during and after watching a TV program. Their behaviors were detected using an approach based on the Dempster-Shafer Theory (DST) in two phases. The first phase is to approximate the mass functions dynamically using an approach based on the correlation coefficient. The second phase is to calculate the approximate mass functions. To approximate the mass functions, two approaches were tested: the first was to divide each feature's data space into cells, each with a specific probability distribution over the behaviors; the probability distributions were computed statistically (estimated by the empirical distribution). The second approach was to predict the TV-viewing behaviors through the use of classification algorithms and to add uncertainty to the prediction based on the uncertainty of the model. Results showed that combining the fusion rule with the computation of the initial approximate mass functions using a classifier led to overall success rates of 96%, 95% and 96% for the first, second and third TV-viewing phase respectively. The results were also compared to those found in the literature. This study aims to anticipate certain actions in order to maintain the attention of TV viewers towards the proposed TV programs with usual connected objects, taking into account the various uncertainties that can be generated.
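
For readers unfamiliar with DST, the sketch below shows Dempster's rule of combination for two mass functions over the three viewing phases. The focal elements and mass values are invented for illustration and are unrelated to the study's sensor data.

    # Sketch: Dempster's rule of combination for two sensors' mass functions over
    # the frame {before, during, after}. Focal elements are frozensets; the example
    # masses are invented for illustration.
    from itertools import product

    FRAME = frozenset({"before", "during", "after"})

    def combine(m1, m2):
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb                       # mass sent to the empty set
        k = 1.0 - conflict                                # Dempster normalisation
        return {s: v / k for s, v in combined.items()}

    m_watch = {frozenset({"during"}): 0.6, frozenset({"before", "during"}): 0.3, FRAME: 0.1}
    m_remote = {frozenset({"during"}): 0.5, frozenset({"after"}): 0.2, FRAME: 0.3}
    print(combine(m_watch, m_remote))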

Keywords: IoT, TV-viewing behaviors identification, automatic classification, unconstrained environment

Procedia PDF Downloads 229
992 Advanced Numerical and Analytical Methods for Assessing Concrete Sewers and Their Remaining Service Life

Authors: Amir Alani, Mojtaba Mahmoodian, Anna Romanova, Asaad Faramarzi

Abstract:

Pipelines are extensively used engineering structures which convey fluid from one place to another. Most of the time, pipelines are placed underground and are encumbered by soil weight and traffic loads. Corrosion of the pipe material is the most common form of pipeline deterioration and should be considered in both the strength and serviceability analysis of pipes. This research focuses on concrete pipes in sewage systems (concrete sewers). It first investigates how to incorporate the effect of corrosion, as a time-dependent process of deterioration, into the structural and failure analysis of this type of pipe. Then three probabilistic time-dependent reliability analysis methods, namely the first passage probability theory, the gamma distributed degradation model and the Monte Carlo simulation technique, are discussed and developed. Sensitivity analysis indexes which can be used to identify the most important parameters that affect pipe failure are also discussed. The reliability analysis methods developed in this paper serve as rational tools for decision makers with regard to the strengthening and rehabilitation of existing pipelines. The results can be used to obtain a cost-effective strategy for the management of the sewer system.
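
A minimal sketch of one of the three methods mentioned: estimating the probability of failure over time by Monte Carlo simulation of a gamma-distributed degradation process crossing a critical corrosion loss (a first-passage event). The shape/scale parameters, the critical loss and the 5% reliability target below are placeholder assumptions, not values from the paper.

    # Sketch: Monte Carlo estimate of the probability that gamma-process corrosion
    # loss exceeds a critical depth within t years (first passage). Parameters are
    # illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(1, 101)
    shape_per_year, scale = 0.8, 0.25      # mean loss = shape*scale per year (mm)
    critical_loss = 12.0                   # wall/cover loss at failure (mm)
    n_sims = 20_000

    # Stationary gamma process: independent gamma increments per year
    increments = rng.gamma(shape_per_year, scale, size=(n_sims, years.size))
    paths = np.cumsum(increments, axis=1)
    pf_t = (paths >= critical_loss).mean(axis=0)   # P(failure by year t)

    service_life = years[np.searchsorted(pf_t, 0.05)]   # first year Pf exceeds 5%
    print("P(failure) at year 50:", pf_t[49])
    print("service life at 5% target failure probability:", service_life, "years")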

Keywords: reliability analysis, service life prediction, Monte Carlo simulation method, first passage probability theory, gamma distributed degradation model

Procedia PDF Downloads 457
991 Using Cyclic Structure to Improve Inference on Network Community Structure

Authors: Behnaz Moradijamei, Michael Higgins

Abstract:

Identifying community structure is a critical task in analyzing social media data sets often modeled by networks. Statistical models such as the stochastic block model have proven to explain the structure of communities in real-world network data. In this work, we develop a goodness-of-fit test to examine the existence of community structure by using a distinguishing property of networks: cyclic structures are more prevalent within communities than across them. To better understand how communities are shaped by the cyclic structure of the network rather than just the number of edges, we introduce a novel method for deciding on the existence of communities. We utilize these structures by incorporating the renewal non-backtracking random walk (RNBRW) into the existing goodness-of-fit test. RNBRW is an important variant of the random walk in which the walk is prohibited from returning to a node in exactly two steps and terminates and restarts once it completes a cycle. We investigate the use of RNBRW to improve the performance of existing goodness-of-fit tests for community detection algorithms based on the spectral properties of the adjacency matrix. Our proposed test of community structure is based on the probability distribution of eigenvalues of the normalized retracing probability matrix derived by RNBRW. We attempt to make the best use of asymptotic results on such a distribution when there is no community structure, i.e., the asymptotic distribution under the null hypothesis. Moreover, we provide a theoretical foundation for our statistic by obtaining the true mean and a tight lower bound for the variance of RNBRW edge weights.

Keywords: hypothesis testing, RNBRW, network inference, community structure

Procedia PDF Downloads 152
988 Life Time Improvement of Clamp Structure by Using Fatigue Analysis

Authors: Pisut Boonkaew, Jatuporn Thongsri

Abstract:

In the hard disk drive manufacturing industry, reducing unnecessary parts and qualifying the quality of parts before assembly is important. A clamp was therefore designed and fabricated as a fixture for holding parts during the testing process. Testing by trial and error consumes a long time to yield improvements, so simulation was employed to improve the part and reduce the time taken. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs. Hence, simulation was used to study the behavior of stress and compressive force in order to improve the clamp's life expectancy over all candidate designs, of which there are up to 27, excluding repeated designs. The design combinations were enumerated following the full factorial rules of the six sigma methodology. Six sigma is a well-structured method for improving the quality level by detecting and reducing the variability of the process; as a result, defects decrease while process capability increases. This research focuses on reducing stress and fatigue while the compressive force remains within the acceptable range set by the company. In the simulation, ANSYS models the 3D CAD geometry under the same conditions as the experiment, and the force at each displacement from 0.01 to 0.1 mm is recorded. The ANSYS setup was verified by a mesh convergence study and by comparing the percentage error with the experimental result; the error must not exceed the acceptable range. The improvement process therefore focuses on the degree, radius, and length that reduce stress while keeping the force within the acceptable range. Fatigue analysis is then carried out, through the ANSYS simulation program, to guarantee that the lifetime is extended. The setup is not only simulated but also confirmed by comparison with the actual clamp in order to observe the difference in fatigue between the two designs. This brings a lifetime improvement of up to 57% compared with the actual clamp used in manufacturing. The study provides a setting precise and trustworthy enough to serve as a reference methodology for future designs. Because of the combination and adaptation of the six sigma method, finite element analysis, fatigue analysis and linear regression analysis, which lead to accurate calculation, this project is able to save up to 60 million dollars annually.
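
A minimal sketch of the design-enumeration and regression step: generating the 3 x 3 x 3 = 27 factor combinations of degree, radius and length and fitting a linear regression to a simulated stress response. The factor levels and the stress model below are invented placeholders, not the ANSYS results.

    # Sketch: 3-level full factorial over (degree, radius, length) -> 27 designs,
    # followed by a linear regression of peak stress on the factors. Factor levels
    # and stress values are invented placeholders.
    import itertools
    import numpy as np

    degrees = [30, 45, 60]        # deg
    radii = [0.5, 1.0, 1.5]       # mm
    lengths = [10, 12, 14]        # mm
    designs = np.array(list(itertools.product(degrees, radii, lengths)))  # 27 rows

    rng = np.random.default_rng(4)
    # Hypothetical response: stress falls with radius and angle, rises with length
    stress = 400 - 1.2 * designs[:, 0] - 80 * designs[:, 1] + 6 * designs[:, 2] \
             + rng.normal(0, 5, len(designs))

    X = np.column_stack([np.ones(len(designs)), designs])
    coef, *_ = np.linalg.lstsq(X, stress, rcond=None)    # linear regression fit
    print("intercept, degree, radius, length effects:", np.round(coef, 2))
    best = designs[np.argmin(X @ coef)]
    print("predicted lowest-stress design (deg, r, L):", best)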

Keywords: clamp, finite element analysis, structural, six sigma, linear regression analysis, fatigue analysis, probability

Procedia PDF Downloads 235
989 Selecting the Best Risk Exposure to Assess Collision Risks in Container Terminals

Authors: Mohammad Ali Hasanzadeh, Thierry Van Elslander, Eddy Van De Voorde

Abstract:

About 90 percent of world merchandise trade by volume is carried by sea. Maritime transport remains the backbone of international trade and globalization, and all seaborne goods pass through at least two ports, as origin and destination. Among seaborne cargoes, container traffic is a prosperous market with a share of about 16% in terms of volume. Although containerized cargo is smaller in terms of tonnage, containers carry the highest-value cargoes of all. That is why efficient handling of containers in ports is very important. Accidents are a foremost cause of port inefficiency and of surges in total transport cost. Despite different port safety management systems (PSMS) being in place, statistics on port accidents show that numerous accidents occur in ports. Some of them claim people's lives; others damage goods, vessels, port equipment and/or the environment. Several accident investigations illustrate that the most common accidents take place during transport operations, sometimes accounting for 68.6% of all events; therefore, providing a safer workplace depends on reducing collision risk. In order to quantify risks in the port area, different variables can be used as exposure measures. One of the main motives for defining and using exposure in studies related to infrastructure is to account for differences in intensity of use, so as to make comparisons meaningful. In various studies related to handling containers in ports and intermodal terminals, different risk exposures and likelihoods of each event have been selected; vehicle collision within the port area (10⁻⁷ per kilometer of vehicle distance travelled) and dropping containers from cranes, forklift trucks, or rail-mounted gantries (1 × 10⁻⁵ per lift) are some examples. In line with the objective of the current research, three categories of accidents were selected for collision risk assessment: fall of a container during ship-to-shore operation, dropping a container during transfer operation, and collision between vehicles and objects within the terminal area. Various consequences, exposures and probabilities were then identified for each accident. Hence, reducing collision risks relies profoundly on picking the right risk exposures and probabilities for the selected accidents; to prevent collision accidents in container terminals, and within the framework of risk calculations, such risk exposures and probabilities can be useful in assessing the effectiveness of safety programs in ports.

Keywords: container terminal, collision, seaborne trade, risk exposure, risk probability

Procedia PDF Downloads 377
988 Teleconnection between El Nino-Southern Oscillation and Seasonal Flow of the Surma River and Possibilities of Long Range Flood Forecasting

Authors: Monika Saha, A. T. M. Hasan Zobeyer, Nasreen Jahan

Abstract:

El Nino-Southern Oscillation (ENSO) is the interaction between the atmosphere and the ocean in the tropical Pacific which causes inconsistent warm/cold weather in the tropical central and eastern Pacific Ocean. Due to the impact of climate change, ENSO events are becoming stronger in recent times, and therefore it is very important to study the influence of ENSO in climate studies. Bangladesh, lying in a low deltaic floodplain, experiences the worst consequences of flooding every year. To reduce the catastrophe of severe flooding events, non-structural measures such as flood forecasting can be helpful in taking adequate precautions and steps. Forecasting the seasonal flood with a longer lead time of several months is a key component of flood damage control and water management. The objective of this research is to identify the possible strength of the teleconnection between ENSO and the river flow of the Surma and to examine the potential for long-lead flood forecasting in the wet season. The Surma is one of the major rivers of Bangladesh and is part of the Surma-Meghna river system. In this research, sea surface temperature (SST) has been considered as the ENSO index, and the lead time is at least a few months, which is greater than the basin response time. The teleconnection has been assessed by correlation analysis between the July-August-September (JAS) flow of the Surma and the SST of the Nino 4 region for the corresponding months. The cumulative frequency distribution of the standardized JAS flow of the Surma has also been determined as part of assessing the possible teleconnection. Discharge data of the Surma river from 1975 to 2015 were used in this analysis, and a remarkable increase in the correlation coefficient between flow and ENSO has been observed from 1985. From the cumulative frequency distribution of the standardized JAS flow, it has been observed that in any year the JAS flow has approximately a 50% probability of exceeding the long-term average JAS flow. During an El Nino year (the warm episode of ENSO) this probability of exceedance drops to 23%, while in a La Nina year (the cold episode of ENSO) it increases to 78%. Discriminant analysis, known as 'categoric prediction', has been performed to identify the possibilities of long-lead flood forecasting. It has helped to categorize the flow data (high, average and low) based on the classification of the predicted SST (warm, normal and cold). From the discriminant analysis, it has been found that for the Surma river the probability of a high flood in the cold period is 75% and the probability of a low flood in the warm period is 33%. A synoptic parameter, the forecasting index (FI), has also been calculated to judge the forecast skill and to compare different forecasts. This study will help the concerned authorities and stakeholders to take long-term water resources decisions and formulate policies on river basin management which will reduce possible damage to life, agriculture, and property.
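
A minimal sketch of the teleconnection check described above: correlating JAS flow with the Nino-4 SST anomaly and computing the conditional probability of exceeding the long-term mean flow in El Nino versus La Nina years. The series are simulated stand-ins for the 1975-2015 observations, and the 0.5-degree episode threshold is an assumption.

    # Sketch: correlation between JAS flow and Nino-4 SST anomaly, plus conditional
    # exceedance probabilities for El Nino / La Nina years. Series are simulated
    # stand-ins, not the Surma discharge record.
    import numpy as np

    rng = np.random.default_rng(5)
    years = np.arange(1975, 2016)
    sst_anom = rng.normal(0, 0.8, years.size)                      # Nino-4 JAS anomaly
    flow = 5000 - 900 * sst_anom + rng.normal(0, 700, years.size)  # JAS flow (m3/s)

    r = np.corrcoef(sst_anom, flow)[0, 1]
    print("correlation (SST, JAS flow):", round(r, 2))

    above_avg = flow > flow.mean()
    el_nino, la_nina = sst_anom > 0.5, sst_anom < -0.5             # simple episode rule
    print("P(exceed mean | El Nino):", round(above_avg[el_nino].mean(), 2))
    print("P(exceed mean | La Nina):", round(above_avg[la_nina].mean(), 2))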

Keywords: El Nino-Southern Oscillation, sea surface temperature, surma river, teleconnection, cumulative frequency distribution, discriminant analysis, forecasting index

Procedia PDF Downloads 156
987 Vulnerability Assessment of Reinforced Concrete Frames Based on Inelastic Spectral Displacement

Authors: Chao Xu

Abstract:

Selecting ground motion intensity measures reasonably is one of the most important issues affecting the selection of input ground motions and the reliability of vulnerability analysis results. In this paper, inelastic spectral displacement is used as an alternative intensity measure to characterize the damage potential of ground motions. The inelastic spectral displacement is calculated based on modal pushover analysis, and an incremental dynamic analysis based on inelastic spectral displacement is developed. Probabilistic seismic demand analyses of a six-story and an eleven-story RC frame are carried out through cloud analysis and advanced incremental dynamic analysis. The sufficiency and efficiency of inelastic spectral displacement are investigated by means of regression and residual analysis, and compared with elastic spectral displacement. Vulnerability curves are developed based on inelastic spectral displacement. The study shows that inelastic spectral displacement reflects the impact of frequency components with periods larger than the fundamental period on the inelastic structural response. The damage potential of ground motions for structures whose fundamental period lengthens due to structural softening can be captured by inelastic spectral displacement. Compared with elastic spectral displacement, inelastic spectral displacement is a more sufficient and efficient intensity measure, which reduces the uncertainty of vulnerability analysis and the impact of input ground motion selection on the vulnerability analysis results.

Keywords: vulnerability, probabilistic seismic demand analysis, ground motion intensity measure, sufficiency, efficiency, inelastic time history analysis

Procedia PDF Downloads 354
986 Enhancing the Pricing Expertise of an Online Distribution Channel

Authors: Luis N. Pereira, Marco P. Carrasco

Abstract:

Dynamic pricing is a revenue management strategy in which hotel suppliers define, over time, flexible and differentiated prices for their services for different potential customers, considering the profile of e-consumers and the demand and market supply. This means that the fundamentals of dynamic pricing are based on economic theory (price elasticity of demand) and market segmentation. This study aims to define a dynamic pricing strategy and an offer contextualized to the e-consumer profile in order to improve the number of reservations of an online distribution channel. Segmentation methods (hierarchical and non-hierarchical) were used to identify and validate an optimal number of market segments. A profile of the market segments was studied, considering the characteristics of the e-consumers and the probability of reserving a room. In addition, the price elasticity of demand was estimated for each segment using econometric models. Finally, predictive models were used to define rules for classifying new e-consumers into the pre-defined segments. The empirical study illustrates how it is possible to improve the intelligence of an online distribution channel system through an optimal dynamic pricing strategy and an offer contextualized to the profile of each new e-consumer. A database of 11 million e-consumers of an online distribution channel was used in this study. The results suggest that an appropriate market segmentation policy in the use of online reservation systems is beneficial for service suppliers because it brings a higher probability of reservation and generates more profit than fixed pricing.
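
A minimal sketch of two of the steps described above: non-hierarchical (k-means) segmentation of e-consumers followed by a per-segment log-log regression to estimate the price elasticity of demand. The features, segment count and data are invented assumptions, not the channel's 11-million-record database.

    # Sketch: e-consumer segmentation (k-means) and per-segment price elasticity via
    # a log-log regression of bookings on price. Features and data are invented.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(6)
    n = 5000
    features = np.column_stack([
        rng.integers(1, 15, n),          # lead time (days)
        rng.integers(1, 5, n),           # party size
        rng.uniform(0, 1, n),            # share of leisure vs business pages viewed
    ])
    segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

    price = rng.uniform(60, 220, n)
    true_elasticity = np.array([-1.8, -1.0, -0.4])[segments]    # hypothetical
    demand = np.exp(5 + true_elasticity * np.log(price) + rng.normal(0, 0.3, n))

    for s in range(3):
        m = segments == s
        slope, _ = np.polyfit(np.log(price[m]), np.log(demand[m]), 1)
        print(f"segment {s}: estimated price elasticity = {slope:.2f}")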

Keywords: dynamic pricing, e-consumers segmentation, online reservation systems, predictive analytics

Procedia PDF Downloads 235
985 Risk Analysis of Leaks from a Subsea Oil Facility Based on Fuzzy Logic Techniques

Authors: Belén Vinaixa Kinnear, Arturo Hidalgo López, Bernardo Elembo Wilasi, Pablo Fernández Pérez, Cecilia Hernández Fuentealba

Abstract:

The expanded use of risk assessment in legislative and corporate decision-making has increased the role of expert judgement in providing data for security-related decision-making. Expert judgements are required in most steps of risk assessment: hazard identification, risk estimation, risk evaluation, and the examination of alternatives. This paper presents a fault tree analysis (FTA), i.e., a probabilistic failure analysis, applied to the leakage of oil in a subsea production system. In standard FTA, the failure probabilities of the items of a system are treated as exact values when evaluating the failure probability of the top event, yet there is a persistent insufficiency of data for estimating the failure of components within the drilling industry. Therefore, fuzzy theory can be used as a solution to this issue. The aim of this paper is to examine the leaks from the Zafiro West subsea oil facility by using fuzzy fault tree analysis (FFTA). As a result, the research makes theoretical and practical contributions to maritime safety and environmental protection. The approach has also traditionally been an effective strategy for identifying hazards in nuclear installations and the power industry.
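
A minimal sketch of fuzzy fault tree arithmetic with triangular fuzzy numbers: an AND gate multiplies basic-event values and an OR gate combines them as 1 - prod(1 - p), applied vertex-wise. The gate layout and basic-event numbers are hypothetical and are not taken from the Zafiro West analysis.

    # Sketch: fuzzy fault tree arithmetic with triangular fuzzy numbers (low, mode,
    # high). AND gate multiplies basic-event probabilities, OR gate uses
    # 1 - prod(1 - p), applied vertex-wise. Basic-event values are placeholders.
    import numpy as np

    def tfn(low, mode, high):
        return np.array([low, mode, high], dtype=float)

    def and_gate(*events):
        return np.prod(events, axis=0)

    def or_gate(*events):
        return 1.0 - np.prod([1.0 - e for e in events], axis=0)

    def defuzzify(t):                       # simple centroid of a triangular number
        return t.sum() / 3.0

    # Hypothetical basic events for an oil-leak top event
    seal_failure = tfn(1e-3, 2e-3, 4e-3)
    valve_failure = tfn(5e-4, 1e-3, 2e-3)
    corrosion_hole = tfn(2e-4, 5e-4, 1e-3)
    no_leak_detection = tfn(1e-2, 2e-2, 5e-2)

    loss_of_containment = or_gate(and_gate(seal_failure, no_leak_detection),
                                  valve_failure, corrosion_hole)
    print("top event TFN:", loss_of_containment)
    print("crisp (centroid) probability:", defuzzify(loss_of_containment))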

Keywords: expert judgment, probability assessment, fault tree analysis, risk analysis, oil pipelines, subsea production system, drilling, quantitative risk analysis, leakage failure, top event, off-shore industry

Procedia PDF Downloads 191
984 Effects of Family Order and Informal Social Control on Protecting against Child Maltreatment: A Comparative Study of Seoul and Kathmandu

Authors: Thapa Sirjana, Clifton R. Emery

Abstract:

This paper examines family order and informal social control (ISC) by extended families as protective factors against child maltreatment. The findings are discussed using the main effects and the interaction effects of family order and informal social control by extended families. The findings suggest that mothers experiencing intimate partner violence (IPV) are associated with child abuse and child neglect: children are neglected at home more, and physical abuse occurs, when mothers are abused by their husbands. Mothers' difficulties arising from being abused may lead them to neglect their children. The findings suggest that 'family order' is a significant protective factor against child maltreatment; the results suggest that family order that is neither too high nor too low can play a protective role. The soft type of ISC is significantly associated with child maltreatment, and this study suggests that soft ISC by extended families is a helpful approach to developing child protection in both countries. The study analyzes data collected from the Seoul and Kathmandu Families and Neighborhoods Study (SKFNS). A random probability cluster sample of married or partnered women in 20 Kathmandu wards and 34 Seoul dongs was selected using probability proportional to size (PPS) sampling. Overall, the study compares Korea and Nepal and examines how cultural differences and similarities are associated with child maltreatment.

Keywords: child maltreatment, intimate partner violence, informal social control, family order, Seoul, Kathmandu

Procedia PDF Downloads 248
983 Sequence Polymorphism and Haplogroup Distribution of Mitochondrial DNA Control Regions HVS1 and HVS2 in a Southwestern Nigerian Population

Authors: Ogbonnaya O. Iroanya, Samson T. Fakorede, Osamudiamen J. Edosa, Hadiat A. Azeez

Abstract:

The human mitochondrial DNA (mtDNA) is a circular DNA molecule of about 17 kbp found within the mitochondria, which contains a smaller fragment of about 1,200 bp known as the control region. Knowledge of variation within populations has been employed in forensic and molecular anthropology studies. The study aimed to investigate the polymorphic nature of the two hypervariable segments (HVS) of the mtDNA control region, i.e., HVS1 and HVS2, and to determine the haplogroup distribution among individuals resident in Lagos, Southwestern Nigeria. Peripheral blood samples were obtained from sixty individuals who are not maternally related, followed by DNA extraction and amplification of the extracted DNA using primers specific for the regions under investigation. The DNA amplicons were sequenced, and the sequence data were aligned and compared to the revised Cambridge Reference Sequence (rCRS, GenBank Accession number NC_012920.1) using the BioEdit software. The results showed 61 and 52 polymorphic nucleotide positions for HVS1 and HVS2, respectively. While a total of three indel mutations were recorded for HVS1, there were seven for HVS2. Transition mutations predominated among the nucleotide changes observed in the study. Genetic diversity (GD) values for HVS1 and HVS2 were estimated to be 84.21% and 90.4%, respectively, while the random match probability was 0.17% for HVS1 and 0.89% for HVS2. The study also revealed mixed haplogroups specific to the African (L1-L3) and Eurasian (U and H) lineages. The new polymorphic sites obtained from the study are promising for human identification purposes.
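
A minimal sketch of how genetic diversity and random match probability are typically computed from haplotype counts (GD = n/(n-1) * (1 - sum p_i^2), RMP = sum p_i^2). The haplotype counts below are invented; only the sample size of 60 mirrors the abstract.

    # Sketch: genetic diversity (Nei) and random match probability from haplotype
    # counts. GD = n/(n-1) * (1 - sum p_i^2); RMP = sum p_i^2. Counts are invented.
    import numpy as np

    haplotype_counts = np.array([7, 5, 4, 3, 3, 2, 2, 2] + [1] * 32)  # 60 individuals
    n = haplotype_counts.sum()
    p = haplotype_counts / n

    rmp = np.sum(p ** 2)                      # random match probability
    gd = (n / (n - 1)) * (1 - rmp)            # genetic diversity
    print(f"n = {n}, GD = {gd:.4f}, RMP = {rmp:.4f}")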

Keywords: hypervariable region, indels, mitochondrial DNA, polymorphism, random match probability

Procedia PDF Downloads 115
982 Corrosion Risk Assessment/Risk Based Inspection (RBI)

Authors: Lutfi Abosrra, Alseddeq Alabaoub, Nuri Elhaloudi

Abstract:

Corrosion processes in the Oil & Gas industry can lead to failures that are usually costly to repair, costly in terms of loss of contaminated product, costly in terms of environmental damage, and possibly costly in terms of human safety. This article describes the results of the corrosion review and criticality assessment done at Mellitah Gas (SRU unit) for pressure equipment and the piping system. The information gathered through the review was intended for developing a qualitative RBI study. The corrosion criticality assessment was carried out by applying company procedures and industry recommended practices such as API 571, API 580/581 and ASME PCC-3, which provide guidelines for establishing corrosion integrity assessment. The corrosion review is intimately related to the probability of failure (POF). During the corrosion study, the process units were reviewed by following the applicable process flow diagrams (PFDs) in the presence of Mellitah's personnel from process engineering, inspection, and corrosion/materials and reliability engineering. The expected corrosion damage mechanisms (internal and external) were identified, and the corrosion rate was estimated for every piece of equipment and corrosion loop in the process units. A combination of both consequence and likelihood of failure was used to determine the corrosion risk. A qualitative consequence of failure (COF) for each individual item was assigned into three levels (High, Medium, and Low) based on the characteristics of the fluid, as per its flammability, toxicity, and pollution potential. A qualitative probability of failure (POF) was applied to evaluate the internal and external degradation mechanisms, using a high-level point-based scale (0 to 10) for the purpose of prioritizing risk into Low, Medium, and High.

Keywords: corrosion, criticality assessment, RBI, POF, COF

Procedia PDF Downloads 82
981 The Moment of the Optimal Average Length of the Multivariate Exponentially Weighted Moving Average Control Chart for Equally Correlated Variables

Authors: Edokpa Idemudia Waziri, Salisu S. Umar

Abstract:

Hotelling's T² is a well-known statistic for detecting a shift in the mean vector of a multivariate normal distribution, and control charts based on T² have been widely used in statistical process control for monitoring a multivariate process. Although it is a powerful tool, the T² statistic is deficient when the shift to be detected in the mean vector of a multivariate process is small and consistent. The multivariate exponentially weighted moving average (MEWMA) control chart is one of the control statistics used to overcome this drawback of Hotelling's T² statistic. In this paper, the probability distribution of the average run length (ARL) of the MEWMA control chart, when the quality characteristics exhibit substantial cross correlation and when the process is in control and out of control, was derived using the Markov chain algorithm. The probability functions and the moments of the run-length distribution were also obtained, and they were consistent with some existing results for the in-control and out-of-control situations. Through a simulation process, the procedure identified a class of ARLs for the MEWMA chart when the process is in control and out of control. From our study, it was observed that the MEWMA scheme is quite adequate for detecting a small shift and is a good way to improve the quality of goods and services in a multivariate situation. It was also observed that as the in-control average run length ARL₀ or the number of variables (p) increases, the optimal value ARLopt increases asymptotically, and as the magnitude of the shift σ increases, the optimal ARLopt decreases. Finally, we use an example from the literature to illustrate our method and demonstrate its efficiency.
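
The sketch below estimates MEWMA average run lengths by direct simulation for an equally correlated covariance structure, rather than by the Markov-chain derivation used in the paper. The smoothing constant r, the correlation, the mean shift and the control limit h are illustrative assumptions, not tabulated or derived values.

    # Sketch: simulated average run length (ARL) of a MEWMA chart for equally
    # correlated variables. Z_t = r*X_t + (1-r)*Z_{t-1}, T2_t = Z_t' S_z^{-1} Z_t
    # with S_z = r/(2-r) * Sigma (asymptotic). The control limit h is illustrative.
    import numpy as np

    rng = np.random.default_rng(7)
    p, r, rho, h = 3, 0.1, 0.5, 11.0
    sigma = np.full((p, p), rho) + (1 - rho) * np.eye(p)     # equal correlation
    sigma_z_inv = np.linalg.inv((r / (2 - r)) * sigma)
    chol = np.linalg.cholesky(sigma)

    def run_length(shift, max_n=20_000):
        z = np.zeros(p)
        for t in range(1, max_n + 1):
            x = shift + chol @ rng.standard_normal(p)
            z = r * x + (1 - r) * z
            if z @ sigma_z_inv @ z > h:
                return t
        return max_n

    shift_in = np.zeros(p)
    shift_out = np.array([0.5, 0.0, 0.0])                    # small mean shift
    arl0 = np.mean([run_length(shift_in) for _ in range(500)])
    arl1 = np.mean([run_length(shift_out) for _ in range(500)])
    print("in-control ARL0 ~", round(arl0), " out-of-control ARL1 ~", round(arl1))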

Keywords: average run length, markov chain, multivariate exponentially weighted moving average, optimal smoothing parameter

Procedia PDF Downloads 422
980 From Responses of Macroinvertebrate Metrics to the Definition of Reference Thresholds

Authors: Hounyèmè Romuald, Mama Daouda, Argillier Christine

Abstract:

The present study focused on the use of benthic macrofauna to define the reference state of an anthropized lagoon (Nokoué, Benin) from the responses of relevant metrics to pressure proxies. The approach used is a combination of a joint species distribution model and Bayesian networks. The joint species distribution model was used to select the relevant metrics and generate posterior probabilities that were then converted into posterior response probabilities for each of the quality classes (pressure levels); these constitute the conditional probability tables allowing the establishment of the probabilistic graph representing the different causal relationships between metrics and pressure proxies. For the definition of the reference thresholds, the predicted responses for low-pressure levels were read via probability density diagrams. Observations collected during high and low water periods spanning three consecutive years (2004-2006), covering 33 macroinvertebrate taxa present at all seasons and sampling points, together with measurements of 14 environmental parameters, were used as application data. The study demonstrated reliable inferences, the selection of seven relevant metrics, and the definition of quality thresholds for each environmental parameter. The relevance of the metrics, and of the reference thresholds for ecological assessment despite the small sample size, suggests the potential for wider applicability of the approach in aquatic ecosystem monitoring and assessment programs in developing countries, which are generally characterized by a lack of monitoring data.

Keywords: pressure proxies, Bayesian inference, bioindicators, acadjas, functional traits

Procedia PDF Downloads 84
979 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance relating to how PF should be calculated for homogeneous and heterogeneous rock masses, or what qualifies a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgement, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permits. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF calculated using different methods can yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
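
As a small illustration of how a Monte Carlo PF conveys different information than a single deterministic FS, the sketch below uses a dry infinite-slope model with random cohesion and friction angle. The model choice, geometry and strength statistics are assumptions for demonstration only, not a recipe for the case study discussed.

    # Sketch: deterministic factor of safety vs Monte Carlo probability of failure
    # for a dry infinite-slope model, FS = (c + gamma*z*cos^2(b)*tan(phi)) /
    # (gamma*z*sin(b)*cos(b)). Geometry and strength statistics are illustrative.
    import numpy as np

    rng = np.random.default_rng(8)
    beta = np.radians(35.0)        # slope angle
    z, gamma = 15.0, 26.0          # failure depth (m), unit weight (kN/m3)

    def factor_of_safety(c, phi_deg):
        phi = np.radians(phi_deg)
        resisting = c + gamma * z * np.cos(beta) ** 2 * np.tan(phi)
        driving = gamma * z * np.sin(beta) * np.cos(beta)
        return resisting / driving

    # Deterministic check with mean strengths
    print("FS (mean inputs):", round(factor_of_safety(60.0, 30.0), 2))

    # Monte Carlo: cohesion and friction angle treated as random variables
    n = 100_000
    c = rng.normal(60.0, 15.0, n).clip(min=1.0)       # kPa
    phi = rng.normal(30.0, 4.0, n).clip(5.0, 50.0)    # degrees
    fs = factor_of_safety(c, phi)
    print("PF = P(FS < 1) =", (fs < 1.0).mean())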

Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability

Procedia PDF Downloads 208
978 Estimation of Effective Radiation Dose Following Computed Tomography Urography at Aminu Kano Teaching Hospital, Kano Nigeria

Authors: Idris Garba, Aisha Rabiu Abdullahi, Mansur Yahuza, Akintade Dare

Abstract:

Background: CT urography (CTU) is an efficient radiological examination for the evaluation of urinary system disorders. However, patients are exposed to a significant radiation dose, which is associated with increased cancer risks. Objectives: To determine the computed tomography dose index following CTU and to evaluate organ equivalent doses. Materials and Methods: A prospective cohort study was carried out at a tertiary institution located in Kano, northwestern Nigeria. Ethical clearance was sought and obtained from the research ethics board of the institution. Demographic data, scan parameters and CT radiation dose data were obtained from patients who had the CTU procedure. Effective dose, organ equivalent doses, and cancer risks were estimated using SPSS statistical software version 16 and CT dose calculator software. Result: A total of 56 patients were included in the study, consisting of 29 males and 27 females. The common indication for the CTU examination was found to be renal cysts, seen commonly among young adults (15-44 yrs). CT radiation dose values in DLP, CTDI and effective dose for CTU were 2320 mGy cm, CTDIw 9.67 mGy and 35.04 mSv respectively. The probability of cancer risk was estimated to be 600 per million CTU examinations. Conclusion: In this study, the radiation dose for CTU is considered significantly high, with an increase in the probability of cancer risks. Wide variations between patient doses suggest that optimization has not yet been fulfilled. Patient radiation dose estimates should be taken into consideration when imaging protocols are established for CT urography.
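
A short cross-check of the reported effective dose, assuming the common DLP-based estimate E ≈ k × DLP with an abdomen-pelvis conversion coefficient of about 0.015 mSv/(mGy·cm); the coefficient and method actually used in the study are not stated, so this is only an approximate check.

    # Sketch: effective dose from DLP using E ~= k * DLP. The coefficient k = 0.015
    # mSv/(mGy*cm) is a commonly quoted abdomen-pelvis value and is an assumption
    # here; the study's own coefficient/method may differ slightly.
    dlp = 2320.0          # mGy*cm, as reported
    k = 0.015             # mSv per mGy*cm (assumed)
    print("E ~=", dlp * k, "mSv")   # ~34.8 mSv, close to the reported 35.04 mSv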

Keywords: CT urography, cancer risks, effective dose, radiation exposure

Procedia PDF Downloads 345
977 Frequency Interpretation of a Wave Function, and a Vertical Waveform Treated as A 'Quantum Leap'

Authors: Anthony Coogan

Abstract:

Born's probability interpretation of wave functions would have led to nearly identical results had he chosen a frequency interpretation instead. Logically, Born may have assumed that only one electron was under consideration, making it nonsensical to propose a frequency wave. The author's suggestion: the actual experimental results were not of a single electron; rather, they were groups of reflected x-ray photons. The vertical waveform used by Schrödinger in his particle-in-a-box theory makes sense if it was intended to represent a quantum leap. The author extended the single vertical panel to form a bar chart: separate panels would represent different energy levels. The proposed bar chart would be populated by reflected photons. Expansion of the basic ideas: part of Schrödinger's particle-in-a-box theory may be valid despite negative criticism. The waveform used in the diagram is vertical, which may seem absurd because real waves decay at a measurable rate rather than instantaneously. However, there may be one notable exception. Supposedly, the uncertainty principle was derived following from the theory – may a quantum leap not be represented as an instantaneous waveform? The great Schrödinger must have had some reason to suggest a vertical waveform if the prevalent belief was that they did not exist. Complex wave forms representing a particle are usually assumed to be continuous. The actual observations made were of x-ray photons, some of which had struck an electron, been reflected, and then moved toward a detector. From Born's perspective, doing similar work in the years in question, 1926-7, he would also have considered a single electron – leading him to choose a probability distribution. Probability distributions appear very similar to frequency distributions, but the former are considered to represent the likelihood of future events. Born's interpretation of the results of quantum experiments led (or perhaps misled) many researchers into claiming that humans can influence events just by looking at them, e.g. collapsing complex wave functions by 'looking at the electron to see which slit it emerged from', while in reality light reflected from the electron moves in the observer's direction after the electron has moved away. Astronomers may say that they 'look out into the universe', but this uses logic opposed to the views of Newton, Hooke and many observers such as Romer, in that light actually carries information from a source or reflector to an observer, rather than the reverse. Conclusion: due to the controversial nature of these ideas, especially their implications for the complex numbers used in applications in science and engineering, some time may pass before any consensus is reached.

Keywords: complex wave functions not necessary, frequency distributions instead of wave functions, information carried by light, sketch graph of uncertainty principle

Procedia PDF Downloads 200
973 Comprehensive Approach to Control Virus Infection and Energy Consumption in an Occupied Classroom

Authors: SeyedKeivan Nateghi, Jan Kaczmarczyk

Abstract:

People nowadays spend most of their time in buildings. Accordingly, maintaining a good quality of indoor air is very important. New universal concerns related to the prevalence of Covid-19 also highlight the importance of indoor air conditioning in reducing the risk of virus infection. Cooling and heating of a house provide a suitable air temperature zone for residents. One of the significant factors in energy demand is energy consumption in buildings; in general, the building sector accounts for more than 30% of the world's fundamental energy requirement. As energy demand increases, greenhouse effects emerge and cause global warming. Apart from the environmental damage to the ecosystem, this can spread infectious diseases such as malaria, cholera, or dengue to many other parts of the world. With the advent of the Covid-19 phenomenon, the previous guidance for reducing energy consumption is no longer adequate, because it increases the risk of virus infection among people in the room. The two problems of high energy consumption and coronavirus infection are in opposition. A classroom with 30 students and one teacher in Katowice, Poland, was considered in order to control the two objectives simultaneously. The probability of transmission of the disease is calculated from the carbon dioxide concentration generated by the occupants, and the energy consumption over a given period is estimated with EnergyPlus. The effect of three parameters, namely the number, angle, and time or schedule of opening windows, on the probability of infection transmission and the energy consumption of the class was investigated. The parameters were examined over wide ranges to determine the best possible conditions for simultaneous control of infection spread and energy consumption. The number of open windows is discrete (0-3), and the two other parameters are continuous (0-180 degrees) and (8 AM-2 PM). Preliminary results show that changes in the number, angle, and timing of window openings significantly impact the likelihood of virus transmission and class energy consumption: the greater the number, tilt, and duration of window openings, the less likely a student is to transmit the virus, but energy consumption increases. When all the windows were closed during all class hours, the energy consumption for the first day of January was only 0.2 megajoules, while the probability of transmitting the virus per person in the classroom was more than 45%. When all windows were open at the maximum angle during class, the chance of transmitting the infection was reduced to 0.35%, but the energy consumption was 36 megajoules. Therefore, school classrooms need an optimal schedule to control both objectives. In this article, we present a suitable plan for a classroom with natural ventilation through windows to control energy consumption and the possibility of infection transmission at the same time.

Keywords: Covid-19, energy consumption, building, carbon dioxide, EnergyPlus

Procedia PDF Downloads 102
975 Evolutionary Analysis of Green Credit Regulation on Greenwashing Behavior in Dual-Layer Network

Authors: Bo-wen Zhu, Bin Wu, Feng Chen

Abstract:

It has become a common measure among governments to support the green development of enterprises through Green Credit policies. In China, the Central Bank of China and other authorities have even put forward corresponding assessment requirements for the proportion of green credit held by commercial banks. Policy changes might raise concerns about commercial banks turning a blind eye to greenwashing behavior by enterprises: the lack of effective regulation may lead to a diffusion of such behavior and eventually result in the phenomenon of "bad money driving out good money", which could dampen the incentive effect of Green Credit policies. This paper employs a complex network model based on an evolutionary game analysis framework involving enterprises, banks, and regulatory authorities to investigate the inhibitory effect of Green Credit regulation on enterprises' greenwashing behavior and on banks' opportunistic and collusive behaviors. The findings are as follows: (1) Banking opportunism rises with the Green Credit evaluation criteria and with the requirements for the proportion of the credit balance; restrictive regulation against violating banks is necessary, as there is an increasing trend of banks adopting an opportunistic strategy. (2) Raising penalties and the probability of regulatory inspections can effectively suppress banks' opportunistic behavior; however, it cannot entirely eradicate opportunistic behavior on the bank side. (3) Although maintaining a certain inspection probability can inhibit enterprises from adopting greenwashing behavior, enterprises choose a catering production strategy instead. (4) One-time rewards from local governments have limited effects on the equilibrium state and on the diffusion trend of banks' regulatory decision-making.

Keywords: green credit, greenwashing behavior, regulation, diffusion effect

Procedia PDF Downloads 28
974 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant

Authors: Michael Smalenberger

Abstract:

Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or when used as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate parameters necessary for the implementation of the artificial intelligence component, and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and probability of mastery, none directly and reliably ask the user to self-assess these. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage while transaction level data are scant. Once a user’s transaction level data become available after sufficient ITS usage, these can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings are relevant to the number of exercises necessary to lead to mastery of a knowledge component, the associated implications on learning curves, and its relevance to instruction time.
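
A minimal sketch of the standard BKT update that such systems typically use, with guess, slip and learn parameters; the idea of seeding the prior from a self-assessment is shown in its simplest form. All parameter values below are placeholders, not estimates from the 111-student data.

    # Sketch: standard Bayesian Knowledge Tracing update. p_mastery is the prior
    # probability the knowledge component is mastered; guess/slip/learn are the BKT
    # parameters that, early on, could be seeded from a learner's self-assessment
    # (values below are placeholders).
    def bkt_update(p_mastery, correct, guess=0.2, slip=0.1, learn=0.15):
        if correct:
            # P(mastered | correct answer)
            posterior = (p_mastery * (1 - slip)) / (
                p_mastery * (1 - slip) + (1 - p_mastery) * guess)
        else:
            # P(mastered | incorrect answer)
            posterior = (p_mastery * slip) / (
                p_mastery * slip + (1 - p_mastery) * (1 - guess))
        # Learning transition after the practice opportunity
        return posterior + (1 - posterior) * learn

    p = 0.3                       # e.g. initialised from the student's self-assessment
    for outcome in [True, True, False, True]:
        p = bkt_update(p, outcome)
        print(round(p, 3))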

Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation

Procedia PDF Downloads 174
973 Influence of Travel Time Reliability on Elderly Drivers Crash Severity

Authors: Ren Moses, Emmanuel Kidando, Eren Ozguven, Yassir Abdelrazig

Abstract:

Although older drivers (defined as those aged 65 and above) are less involved in speeding, alcohol use and night driving, they are more vulnerable to severe crashes. The major contributing factors for severe crashes include frailty and medical complications. Several studies have evaluated the factors contributing to crash severity. However, few studies have established the impact of travel time reliability (TTR) on road safety. In particular, the impact of TTR on senior adults, who face several challenges including hearing difficulties, declining processing skills and cognitive problems in driving, is not well established. Therefore, this study focuses on determining the possible impacts of TTR on traffic safety with a focus on elderly drivers. Historical travel speed data from freeway links in the study area were used to calculate travel time and the associated TTR metrics, that is, the planning time index, the buffer index, the standard deviation of travel time and the probability of congestion. Four years of information on crashes occurring on these freeway links was acquired. A binary logit model estimated using the Markov chain Monte Carlo (MCMC) sampling technique was used to evaluate the variables that could be influencing elderly crash severity. Preliminary results of the analysis suggest that TTR is statistically significant in affecting the severity of a crash involving an elderly driver. The result suggests that a one-unit increase in the probability of congestion reduces the likelihood of a severe elderly crash by nearly 22%. These findings will enhance the understanding of TTR and its impact on elderly crash severity.

Keywords: highway safety, travel time reliability, elderly drivers, traffic modeling

Procedia PDF Downloads 495
972 Evaluating Probable Bending of Frames for Near-Field and Far-Field Records

Authors: Majid Saaly, Shahriar Tavousi Tafreshi, Mehdi Nazari Afshar

Abstract:

Most reinforced concrete structures that are designed only for gravity (heavy) loads have large transverse reinforcement spacing values and therefore suffer severe failure after intense ground movements. The main goal of this paper is to compare the shear and axial failure of concrete bending frames typical of Tehran using incremental dynamic analysis (IDA) under near-field and far-field records. For this purpose, IDA of 5-, 10-, and 15-story concrete structures was carried out under seven far-fault records and five near-fault records. The results show that in two-dimensional models of low-rise, mid-rise and high-rise reinforced concrete frames located on Type-3 soil, increasing the spacing of the transverse reinforcement can increase the maximum inter-story drift ratio values by up to 37%. According to the results for the 5-, 10-, and 15-story reinforced concrete models located on Type-3 soil, records with characteristics such as fling-step and directivity create larger maximum inter-story drift values than far-fault earthquakes. The results indicated that, for seismic excitations containing directivity or fling-step effects, the probability-of-failure values and the rate at which the failure probability increases are much smaller than the corresponding values for far-fault earthquakes. However, for near-fault records the probability of exceedance is reached at lower seismic intensities compared with far-fault records.
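
A minimal sketch of one common post-processing step for IDA results: fitting lognormal probability-of-exceedance (fragility) curves to the intensity measures at which each record first exceeds the drift limit, separately for far-fault and near-fault sets. The IM values below are invented, not the paper's results.

    # Sketch: lognormal fragility curves from IDA "capacity" intensities (IM values
    # at which each record first drives the frame past the drift limit). IM values
    # are invented; the method is the usual lognormal fit.
    import numpy as np
    from scipy import stats

    far = np.array([0.42, 0.55, 0.61, 0.68, 0.74, 0.80, 0.85])   # capacity IMs (g)
    near = np.array([0.31, 0.36, 0.42, 0.47, 0.52])              # capacity IMs (g)

    def fragility(ims, grid):
        theta = np.exp(np.mean(np.log(ims)))          # median capacity
        beta = np.std(np.log(ims), ddof=1)            # lognormal dispersion
        return stats.norm.cdf(np.log(grid / theta) / beta)

    grid = np.linspace(0.1, 1.0, 10)
    print("IM(g)  far-fault  near-fault")
    for im, pf, pn in zip(grid, fragility(far, grid), fragility(near, grid)):
        print(f"{im:.2f}     {pf:.2f}       {pn:.2f}")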

Keywords: IDA, failure curve, directivity, maximum floor drift, fling step, evaluating probable bending of frames, near-field and far-field earthquake records

Procedia PDF Downloads 109
971 Sustainability of Heritage Management in Aksum: Focus on Heritage Conservation and Interpretation

Authors: Gebrekiros Welegebriel Asfaw

Abstract:

The management of fragile, unique and irreplaceable cultural heritage from different perspectives is becoming a major challenge as important elements of culture vanish throughout the globe. The major purpose of this study is to assess how the cultural heritage of Aksum is managed for its future sustainability from the perspectives of heritage conservation and interpretation. A descriptive research design incorporating both quantitative and qualitative methods is employed. Primary quantitative data were collected from 189 respondents (19 professionals, 88 tourism service providers and 82 tourists), and interviews were conducted with 33 targeted informants from heritage and related professions, security employees, the local community, service providers and church representatives, applying probability and non-probability sampling methods. Findings of the study reveal that the overall sustainable management status of the cultural heritage of Aksum is below average. The sustainability of cultural heritage management in Aksum faces many unfavorable factors, such as a lack of long-term planning, an incompatible system of heritage administration, a limited capacity and number of professionals, scant attention to community-based heritage and tourism development, dirtiness and drainage problems, problems with stakeholder involvement and cooperation, and a lack of organized interpretation and presentation systems, among others. Reorganization of the management system, creating a platform for coordination among stakeholders and developing an appropriate interpretation system can therefore be good remedies. Introducing a community-based heritage and tourism development concept is also recommended for long-term, win-win success in Aksum.

Keywords: Aksum, conservation, interpretation, sustainable cultural heritage management

Procedia PDF Downloads 325