Search results for: reliability verification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2401

2011 Enhancement of Thermal Performance of Latent Heat Solar Storage System

Authors: Rishindra M. Sarviya, Ashish Agrawal

Abstract:

Solar energy is abundantly available worldwide, but it is intermittent and its intensity varies with time. For this reason, the acceptability and reliability of solar-based thermal systems are lower than those of conventional systems. A properly designed heat storage system increases the reliability of solar thermal systems by bridging the gap between energy demand and availability. In the present work, a two-dimensional numerical simulation of the melting of the heat storage material in the horizontal annulus of a double-pipe latent heat storage system is presented. Longitudinal fins were used as a thermal conductivity enhancement, and paraffin wax served as the heat-storage or phase change material (PCM). A constant wall temperature is applied to the heat transfer tube. The two-dimensional numerical analysis tracks the movement of the melting front in the finned cylindrical annulus in order to analyze the thermal behavior of the system during melting.
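As an illustration of the numerical approach described above, the following is a minimal one-dimensional enthalpy-method melting sketch in Python; the study itself is two-dimensional with fins, and all material constants below are illustrative placeholders rather than the paper's values.

```python
import numpy as np

# 1-D enthalpy-method melting sketch; constants are assumed, not the paper's.
L = 0.05            # domain length, m
N = 100             # grid cells
dx = L / N
k = 0.2             # thermal conductivity of paraffin, W/mK (assumed)
rho = 800.0         # density, kg/m3 (assumed)
cp = 2000.0         # specific heat, J/kgK (assumed)
Lh = 2e5            # latent heat, J/kg (assumed)
Tm = 330.0          # melting temperature, K (assumed)
Tw = 350.0          # constant wall temperature boundary, K
T0 = 300.0          # initial temperature, K

alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha          # explicit stability limit
H = np.full(N, cp * T0)           # specific enthalpy, J/kg

def temperature(H):
    """Invert the enthalpy-temperature relation (isothermal phase change)."""
    T = np.where(H < cp * Tm, H / cp, Tm)                 # solid
    T = np.where(H > cp * Tm + Lh, (H - Lh) / cp, T)      # liquid
    return T                                              # mushy zone stays at Tm

for step in range(20000):
    T = temperature(H)
    T = np.concatenate(([Tw], T, [T[-1]]))   # hot wall left, adiabatic right
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    H += dt * k * lap / rho                   # dH/dt = (k/rho) * d2T/dx2

melt_fraction = np.clip((H - cp * Tm) / Lh, 0.0, 1.0).mean()
print(f"average melt fraction after {20000*dt:.0f} s: {melt_fraction:.2f}")
```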

Keywords: latent heat, numerical study, phase change material, solar energy

Procedia PDF Downloads 286
2010 Relevance of Reliability Approaches to Predict Mould Growth in Biobased Building Materials

Authors: Lucile Soudani, Hervé Illy, Rémi Bouchié

Abstract:

Mould growth in living environments has been widely reported for decades throughout the world. A higher level of moisture in housing can lead to building degradation and chemical emissions from construction materials, as well as promoting mould growth within the envelope elements or on internal surfaces. Moreover, a significant number of studies have highlighted the link between mould presence and the prevalence of respiratory diseases. In recent years, the proportion of bio-based materials used in construction has been increasing, seen as an effective lever to reduce the environmental impact of the building sector. Bio-based materials are, however, also hygroscopic: when in contact with the humid air of the surrounding environment, their porous structures capture water molecules more readily, thus providing a more suitable substrate for mould growth. Many studies have been conducted to develop reliable models to predict mould appearance, growth, and decay over many building materials and external exposures. Some of them require information about temperature and/or relative humidity, exposure times, material sensitivities, etc. Nevertheless, several studies have highlighted a large disparity between predictions and actual mould growth in experimental settings as well as in occupied buildings. The difficulty of accounting for the influence of all parameters appears to be the most challenging issue. As many complex phenomena take place simultaneously, a preliminary study has been carried out to evaluate the feasibility of adopting a reliability approach rather than a deterministic approach. Both epistemic and random uncertainties were identified specifically for the prediction of mould appearance and growth. Several studies published in the literature, from the agri-food and automotive sectors, were selected and analysed, as the methodologies they deploy appeared promising.
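To illustrate what a reliability (rather than deterministic) treatment can look like, the sketch below propagates both epistemic and random uncertainties through a deliberately toy growth-rate surrogate; the model form, parameter ranges, and the 0-6 index scale are assumptions standing in for a published mould model such as the VTT index, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical inputs: epistemic uncertainty on a material-sensitivity
# coefficient, random (aleatory) uncertainty on indoor RH and temperature.
k_mat = rng.uniform(0.5, 1.5, n)          # material sensitivity (assumed range)
rh    = rng.normal(80, 5, n)              # relative humidity, %
temp  = rng.normal(20, 2, n)              # temperature, deg C

# Toy growth-rate surrogate: growth only above a critical RH of ~75%.
rate = k_mat * np.maximum(rh - 75, 0) * (0.02 + 0.001 * temp)
index_after_90d = np.clip(rate * 90 / 30, 0, 6)   # mould index on a 0-6 scale

print("P(mould index > 1 after 90 days):", (index_after_90d > 1).mean())
```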

Keywords: bio-based materials, mould growth, numerical prediction, reliability approach

Procedia PDF Downloads 11
2009 Reliability of Self-Reported Language Proficiency Measures in L1 Attrition Research: A Closer Look at the Can-Do Scales

Authors: Anastasia Sorokina

Abstract:

Self-reported language proficiency measures have been widely used by researchers and have proven to be an accurate tool for assessing actual language proficiency. L1 attrition researchers also rely on self-reported measures; more specifically, can-do scales have gained popularity in the discipline of L1 attrition research. Can-do scales usually contain statements about language (e.g., “I can write e-mails”); participants are asked to rate each statement on a scale from 1 (I cannot do it at all) to 5 (I can do it without any difficulties). Despite their popularity, no studies have examined the reliability of can-do scales in measuring the actual level of L1 attrition. Do can-do scales positively correlate with lexical diversity, syntactic complexity, and fluency? The present study analyzed speech samples of 35 Russian-English attriters to examine whether their self-reported proficiency correlates with their actual L1 proficiency. Pearson correlation analysis demonstrated that the can-do scales correlated with lexical diversity, syntactic complexity, and fluency. These findings provide a valuable contribution to L1 attrition research by demonstrating that can-do scales can be used as a reliable tool to measure L1 attrition.
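A correlation check of the kind reported here can be reproduced with a few lines of Python; the data below are synthetic stand-ins for the study's 35 speech samples.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 35                                            # the study's sample size
can_do = rng.integers(1, 6, n).astype(float)      # illustrative self-ratings (1-5)
# hypothetical objective measure, constructed to correlate with the ratings
lexical_diversity = 0.5 * can_do + rng.normal(0, 0.5, n)

r, p = pearsonr(can_do, lexical_diversity)
print(f"r = {r:.2f}, p = {p:.4f}")
```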

Keywords: L1 attrition, can-do-scales, lexical diversity, syntactic complexity

Procedia PDF Downloads 210
2008 Development and Validation of Employee Trust Scale: Factor Structure, Reliability and Validity

Authors: Chua Bee Seok, Getrude Cosmas, Jasmine Adela Mutang, Shazia Iqbal Hashmi

Abstract:

The aims of this study were to determine the factor structure and psychometric properties (i.e., reliability and convergent validity) of the Employee Trust Scale, an instrument newly created by the researchers. The Employee Trust Scale initially contained 82 items measuring employees' trust toward their supervisors. A sample of 818 employees (343 females, 449 males) was selected randomly from public and private sector organizations in Kota Kinabalu, Sabah, Malaysia. Their ages ranged from 19 to 67 years, with a mean of 34.55 years; their average tenure with their current employer was 11.2 years (s.d. = 7.5 years). The respondents were asked to complete the Employee Trust Scale as well as Mishra's managerial trust questionnaire. The exploratory factor analysis of employees' trust toward their supervisors extracted three factors, labeled 'trustworthiness' (32 items), 'position status' (11 items) and 'relationship' (6 items), which together accounted for 62.49% of the total variance. The trustworthiness factor was further divided into three sub-factors: competency (11 items), benevolence (8 items) and integrity (13 items). All factors and sub-factors of the scale demonstrated good reliability, with Cronbach's alpha values above 0.85. The convergent validity of the scale was supported by the expected pattern of positive, significant correlations between the scores on all factors and sub-factors and the score on the managerial trust questionnaire, which measures the same construct. Convergent validity was further supported by significant positive intercorrelations between the factors and sub-factors of the scale. The results suggest that the Employee Trust Scale is a reliable and valid measure; however, further studies in other samples are needed to validate the scale further.
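The internal-consistency figure quoted above (Cronbach's alpha > 0.85) can be computed from an items matrix as follows; the simulated responses are illustrative only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(818, 1))                      # one underlying trait
items = latent + rng.normal(scale=0.5, size=(818, 11))  # e.g. 11 competency items
print(f"alpha = {cronbach_alpha(items):.2f}")           # high, as in the study
```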

Keywords: employees trust scale, psychometric properties, trustworthiness, position status, relationship

Procedia PDF Downloads 437
2007 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant

Authors: John K. Avor, Choong-Koo Chang

Abstract:

The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT) and vice versa is performed through a fast bus transfer scheme. Fast bus transfer is a time-critical application in which the transfer process depends on various parameters, so transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical Class 1E electrical loads, and bus transfers must therefore be executed accurately within 4 to 10 cycles in order to meet safety system requirements. The main problem, however, is that there have been instances where transfer schemes malfunctioned due to inaccurate interpretation of key parameters and consequently failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, the combination of Artificial Neural Networks and Fuzzy Systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm, based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The motivation for adopting the Neuro-Fuzzy approach in the bus transfer scheme is to exploit the signal validation capabilities of artificial neural networks, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined use of artificial neural networks and fuzzy systems to accurately interpret key bus transfer parameters, such as the magnitude of the residual voltage, its decay time, and the associated phase angle, in order to determine the possibility of a high-speed bus transfer for a particular bus and the corresponding transfer algorithm; this demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The scheme is implemented on the APR1400 nuclear power plant auxiliary system.
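As a rough illustration of the neural half of such a scheme, the sketch below trains a tiny feedforward network by plain back-propagation to classify first-cycle voltage features; the features, the labelling rule, and the network size are invented for illustration and are not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic first-cycle features: |residual voltage| (pu), decay time (s),
# phase angle (deg). The "fast transfer OK" labelling rule is illustrative only.
X = rng.uniform([0.0, 0.0, 0.0], [1.0, 0.5, 90.0], size=(500, 3))
y = ((X[:, 0] < 0.4) & (X[:, 2] < 60.0)).astype(float)

Xs = (X - X.mean(0)) / X.std(0)          # standardise inputs
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

for epoch in range(2000):                # plain back-propagation, BCE loss
    h = np.tanh(Xs @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    grad_out = (p - y[:, None]) / len(y)           # dLoss/dlogit
    grad_h = grad_out @ W2.T * (1 - h**2)          # back-prop through tanh
    W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(0)
    W1 -= lr * Xs.T @ grad_h;  b1 -= lr * grad_h.sum(0)

print("training accuracy:", ((p > 0.5).ravel() == y).mean())
```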

Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability

Procedia PDF Downloads 151
2006 Analysis of Fault Tolerance in Grid Computing Using a Real-Time Approach

Authors: Parampal Kaur, Deepak Aggarwal

Abstract:

In a computational Grid, fault tolerance is an imperative issue to be considered during job scheduling. Due to the wide distribution of resources, such systems are highly prone to errors and failures; fault tolerance therefore plays a key role in the Grid in avoiding unreliability. Scheduling each task to an appropriate resource is a vital requirement in a computational Grid. The fittest-resource scheduling algorithm searches for the appropriate resource based on the job requirements, in contrast to general scheduling algorithms, where jobs are scheduled to the resources with the best performance factor. The proposed method improves the fault tolerance of the fittest-resource scheduling algorithm by scheduling the job in coordination with job replication when the resource has low reliability. Based on its reliability index, a resource is identified as critical, and tasks are scheduled according to the criticality of the resources. Results show that the execution time of the tasks is comparatively reduced with the proposed algorithm using a real-time approach rather than a simulator.
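The scheduling idea as described can be sketched as follows; the reliability threshold, the fitness rule, and the resource data are assumptions made for illustration.

```python
# Schedule to the fittest resource; replicate the job when that resource's
# reliability index is below a (assumed) criticality threshold.
RELIABILITY_THRESHOLD = 0.8

def schedule(job, resources):
    # fittest = resource meeting the job's requirement with the least slack
    feasible = [r for r in resources if r["capacity"] >= job["need"]]
    fittest = min(feasible, key=lambda r: r["capacity"] - job["need"])
    placements = [fittest]
    if fittest["reliability"] < RELIABILITY_THRESHOLD:
        # replicate on the most reliable remaining feasible resource
        backups = [r for r in feasible if r is not fittest]
        if backups:
            placements.append(max(backups, key=lambda r: r["reliability"]))
    return placements

resources = [
    {"name": "R1", "capacity": 4, "reliability": 0.95},
    {"name": "R2", "capacity": 8, "reliability": 0.70},
    {"name": "R3", "capacity": 6, "reliability": 0.85},
]
job = {"name": "J1", "need": 5}
print([r["name"] for r in schedule(job, resources)])
```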

Keywords: computational grid, fault tolerance, task replication, job scheduling

Procedia PDF Downloads 413
2005 A Study on the Accelerated Life Cycle Test Method of the Motor for Home Appliances by Using Acceleration Factor

Authors: Youn-Sung Kim, Mi-Sung Kim, Jae-Kun Lee

Abstract:

This paper deals with an accelerated life cycle test method for motors used in home appliances that demand high reliability. The life cycle of parts in home appliances should be 10 years, because the life cycle of appliances such as washing machines, refrigerators, and TVs is at least 10 years. In the case of the washing machine, the life cycle test of the motor currently runs for 3000 cycles (1 cycle = 2 hours). However, a 3000-cycle test is costly in both time and money. The objectives of this study are to reduce the life cycle test time and the number of test samples, which can be achieved by applying an acceleration factor to the test time and a reduction factor to the number of samples.
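The abstract does not state which stress model underlies its acceleration factor; as one common choice for temperature-accelerated motor testing, an Arrhenius-based factor would shorten the 3000-cycle test as sketched below (the temperatures and activation energy are assumed).

```python
import math

# Arrhenius acceleration factor: AF = exp(Ea/k * (1/T_use - 1/T_stress)).
Ea = 0.7          # activation energy, eV (assumed)
k  = 8.617e-5     # Boltzmann constant, eV/K
T_use    = 273.15 + 40    # field temperature, K (assumed)
T_stress = 273.15 + 85    # elevated test temperature, K (assumed)

AF = math.exp(Ea / k * (1 / T_use - 1 / T_stress))
cycles_field = 3000                     # 1 cycle = 2 hours, per the abstract
cycles_test  = cycles_field / AF
print(f"AF = {AF:.1f}, accelerated test length = {cycles_test:.0f} cycles")
```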

Keywords: accelerated life cycle test, motor reliability test, motor for washing machine, BLDC motor

Procedia PDF Downloads 606
2004 Derivation of a Risk-Based Level of Service Index for Surface Street Network Using Reliability Analysis

Authors: Chang-Jen Lan

Abstract:

The current Level of Service (LOS) index adopted in the Highway Capacity Manual (HCM) for signalized intersections on surface streets is based on the intersection average delay. The delay thresholds defining the LOS grades are subjective and unrelated to critical traffic conditions. For example, an intersection delay of 80 sec per vehicle for the failing LOS grade F does not necessarily correspond to the intersection capacity. Also, a specific value of average delay may result from delay minimization, delay equality, or other meaningful optimization criteria. To that end, a reliability version of the intersection critical degree of saturation (v/c) is introduced as the LOS index. Traditionally, the degree of saturation at a signalized intersection is defined as the ratio of the critical volume sum (per lane) to the average saturation flow (per lane) during all available effective green time within a cycle, where the critical sum is the sum of the maximal conflicting movement-pair volumes in the northbound-southbound and eastbound-westbound rights of way. In this study, both movement volume and saturation flow are assumed to follow log-normal distributions. Because products and ratios of independent, positive random variables tend toward a log-normal distribution in the limit when the conditions of the central limit theorem hold (applied to their logarithms), the critical degree of saturation is expected to be log-normally distributed as well. Derivation of the risk index predictive limits is complex due to the maximum and absolute value operators, as well as the ratio of random variables; nevertheless, a fairly accurate functional form for the predictive limit at a user-specified significance level is derived. The predictive limit is then compared with the designated LOS thresholds for the intersection critical degree of saturation (denoted as X
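The log-normal ratio argument can be checked numerically; in the Monte Carlo sketch below, the volume and saturation-flow parameters are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Volume and saturation flow both log-normal, as assumed in the abstract;
# the means and spreads are illustrative only.
volume  = rng.lognormal(mean=np.log(1400), sigma=0.15, size=n)  # veh/h/lane
satflow = rng.lognormal(mean=np.log(1800), sigma=0.10, size=n)  # veh/h/lane

x = volume / satflow                     # critical degree of saturation (v/c)
upper_95 = np.quantile(x, 0.95)          # predictive limit at the 5% level
print(f"mean v/c = {x.mean():.2f}, 95% predictive limit = {upper_95:.2f}")
```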

Keywords: reliability analysis, level of service, intersection critical degree of saturation, risk based index

Procedia PDF Downloads 115
2003 Reliability of 2D Motion Analysis System for Sagittal Plane Lower Limb Kinematics during Running

Authors: Seyed Hamed Mousavi, Juha M. Hijmans, Reza Rajabi, Ron Diercks, Johannes Zwerver, Henk van der Worp

Abstract:

Introduction: Running is one of the most popular sports activities. Improper sagittal plane ankle, knee and hip kinematics are considered to be associated with increased injury risk in runners. Motion-assessing smartphone applications are increasingly used to measure kinematics both in the field and in the laboratory, as they are cheaper, more portable, more accessible, and easier to use than a 3D motion analysis system. The aims of this study are 1) to compare the results of a 3D gait analysis system and the Coach's Eye (CE) app; and 2) to evaluate the test-retest and intra-rater reliability of the CE app for the sagittal plane hip, knee, and ankle angles at touchdown and toe-off while running. Method: Twenty subjects participated in this study. Sixteen reflective markers and cluster markers were attached to the subject's body. Subjects were asked to run at a self-selected speed on a treadmill, and twenty-five seconds of running were collected for analyzing the kinematics of interest. To measure the sagittal plane hip, knee and ankle joint angles at touchdown (TD) and toe-off (TO), the mean of the first ten acceptable consecutive strides was calculated for each angle. A smartphone (Samsung Note5, Android) was placed on the right side of the subject so that the whole body was filmed simultaneously with the 3D gait system during running. All subjects repeated the task at the same running speed after a short interval of 5 minutes. The CE app, installed on the smartphone, was used to measure the sagittal plane hip, knee and ankle joint angles at touchdown and toe-off of the stance phase. Results: The intraclass correlation coefficient (ICC) was used to assess test-retest (TRR) and intra-rater (IRR) reliability, and Bland and Altman plots were used to analyze the agreement between the 3D and 2D outcomes. The ICC values were: ankle at TD (TRR = 0.8, IRR = 0.94), ankle at TO (TRR = 0.9, IRR = 0.97), knee at TD (TRR = 0.78, IRR = 0.98), knee at TO (TRR = 0.9, IRR = 0.96), hip at TD (TRR = 0.75, IRR = 0.97), and hip at TO (TRR = 0.87, IRR = 0.98). The Bland and Altman plots, displaying the mean difference (MD) and ±2 standard deviations of the MD (2SDMD) between the 3D and 2D outcomes, gave: ankle at TD (MD = 3.71, +2SDMD = 8.19, -2SDMD = -0.77), ankle at TO (MD = -1.27, +2SDMD = 6.22, -2SDMD = -8.76), knee at TD (MD = 1.48, +2SDMD = 8.21, -2SDMD = -5.25), knee at TO (MD = -6.63, +2SDMD = 3.94, -2SDMD = -17.19), hip at TD (MD = 1.51, +2SDMD = 9.05, -2SDMD = -6.03), and hip at TO (MD = -0.18, +2SDMD = 12.22, -2SDMD = -12.59). Discussion: The ability to reproduce measurements accurately is valuable in the performance and clinical assessment of joint angle outcomes. The results of this study showed that the intra-rater and test-retest reliability of the CE app for all measured kinematics are excellent (ICC ≥ 0.75). The Bland and Altman plots, however, show large differences in values for the ankle at TD and the knee at TO. Measuring the ankle at TD by 2D gait analysis depends on the plane of movement: since ankle motion at TD mostly occurs outside the sagittal plane, the measurements can differ as the foot progression angle at TD increases during running. The difference in values for the knee at TO can depend on how the 3D system and the rater detect the TO during the stance phase of running.
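The Bland-Altman quantities reported above (MD and ±2SDMD) are straightforward to compute; the sketch below uses synthetic angle data in place of the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(5)
angles_3d = rng.normal(20, 5, 20)                 # 3-D system, hip at TD (deg)
angles_2d = angles_3d + rng.normal(1.5, 3.5, 20)  # app measurement (synthetic)

diff = angles_2d - angles_3d                      # 2-D minus 3-D, per subject
md, sd = diff.mean(), diff.std(ddof=1)
print(f"MD = {md:.2f}, +2SD = {md + 2*sd:.2f}, -2SD = {md - 2*sd:.2f}")
```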

Keywords: reliability, running, sagittal plane, two dimensional

Procedia PDF Downloads 171
2002 A Novel Meta-Heuristic Algorithm Based on Cloud Theory for Redundancy Allocation Problem under Realistic Condition

Authors: H. Mousavi, M. Sharifi, H. Pourvaziri

Abstract:

The Redundancy Allocation Problem (RAP) is a well-known mathematical problem for modeling series-parallel systems. It is a combinatorial optimization problem which focuses on determining an optimal assignment of components in a system design. In this paper, to be more practical, we consider the redundancy allocation problem for a series system with interval-valued component reliabilities: during the search process, the reliability of each component is treated as a stochastic variable with lower and upper bounds. To optimize the problem, we propose a simulated annealing algorithm based on cloud theory (CBSAA), with Monte Carlo simulation (MCS) embedded in the CBSAA to handle the random component reliabilities. This novel approach has been investigated through numerical examples, and the experimental results show that the CBSAA combined with MCS is an efficient tool for solving the RAP of systems with interval-valued component reliabilities.
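A plain simulated-annealing version of this idea, with Monte Carlo sampling of the interval-valued reliabilities, can be sketched as follows; it stands in for the paper's cloud-theory variant (CBSAA), and the intervals, costs, and budget are illustrative.

```python
import math, random

random.seed(6)

# Series system with 3 subsystems; each component's reliability is only known
# as an interval, so fitness is estimated by Monte Carlo sampling.
intervals = [(0.80, 0.90), (0.70, 0.85), (0.85, 0.95)]
costs = [3, 2, 4]
BUDGET = 30

def fitness(redundancy, samples=200):
    total = 0.0
    for _ in range(samples):
        rs = 1.0
        for (lo, hi), n in zip(intervals, redundancy):
            r = random.uniform(lo, hi)        # sampled component reliability
            rs *= 1 - (1 - r) ** n            # n parallel redundant components
        total += rs
    return total / samples

def cost(redundancy):
    return sum(c * n for c, n in zip(costs, redundancy))

state, best = [1, 1, 1], [1, 1, 1]
T = 1.0
while T > 1e-3:                               # simulated annealing loop
    cand = state[:]
    i = random.randrange(3)
    cand[i] = max(1, cand[i] + random.choice([-1, 1]))
    if cost(cand) <= BUDGET:
        delta = fitness(cand) - fitness(state)
        if delta > 0 or random.random() < math.exp(delta / T):
            state = cand
            if fitness(state) > fitness(best):
                best = state[:]
    T *= 0.99                                 # geometric cooling
print("best redundancy levels:", best, "expected reliability: %.3f" % fitness(best))
```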

Keywords: redundancy allocation problem, simulated annealing, cloud theory, monte carlo simulation

Procedia PDF Downloads 392
2001 Progress in Accuracy, Reliability and Safety in Firedamp Detection

Authors: José Luis Lorenzo Bayona, Ljiljana Medic-Pejic, Isabel Amez Arenillas, Blanca Castells Somoza

Abstract:

This communication presents the results of a study carried out by the Official Laboratory J. M. Madariaga (LOM) of the Polytechnic University of Madrid to analyze the reliability of the methane detection systems used in underground mining. Poor firedamp control in mine workings can cause anything from production stoppages to fatal accidents, and since there is currently a great variety of equipment with different functional characteristics, a study is needed to indicate which measurement principles deserve the highest degree of confidence. For the development of the project, a series of fixed, transportable and portable methane detectors with different measurement principles were selected and subjected to laboratory tests following the methods described in the applicable regulations. The test equipment was that usually used in the certification and calibration of these devices, subject to the LOM quality system, and the tests were carried out on commercially available detectors. The conclusions establish the main advantages and disadvantages of the equipment according to the measurement principle used: catalytic combustion, interferometry and infrared absorption.

Keywords: ATEX standards, gas detector, methane meter, mining safety

Procedia PDF Downloads 113
2000 The Happiness Pulse: A Measure of Individual Wellbeing at a City Scale, Development and Validation

Authors: Rosemary Hiscock, Clive Sabel, David Manley, Sam Wren-Lewis

Abstract:

As part of the Happy City Index Project, Happy City has developed a survey instrument to measure experienced wellbeing: how people are feeling and functioning in their everyday lives. The survey instrument, called the Happiness Pulse, was developed in partnership with the New Economics Foundation (NEF) with the dual aim of collecting citywide wellbeing data and engaging individuals and communities in the measurement and promotion of their own wellbeing. The survey domains and items were selected through a review of the academic literature and a stakeholder engagement process including local policymakers, community organisations and individuals. The Happiness Pulse was included in the Bristol pilot of the Happy City Index (n=722), and the experienced wellbeing items were subjected to factor analysis. A reduced set of items, intended for a revised scale for future data collection, was then entered into a second factor analysis, and the resulting factors were tested for reliability and validity. Three factors emerged: Be, Do and Connect. The Be factor had good reliability and good convergent and criterion validity; the Do factor had good discriminant validity; and the Connect factor had adequate reliability and good discriminant and criterion validity. Some differentiation by age, gender and socioeconomic status was found. The properties of a new scale to measure experienced wellbeing, intended for use by municipal authorities, are described. Happiness Pulse data can be combined with local data on wellbeing conditions to determine what matters for people's wellbeing across a city, and why.

Keywords: city wellbeing, community wellbeing, engaging individuals and communities, measuring wellbeing and happiness

Procedia PDF Downloads 232
1999 Simultaneous Optimization of Design and Maintenance through a Hybrid Process Using Genetic Algorithms

Authors: O. Adjoul, A. Feugier, K. Benfriha, A. Aoussat

Abstract:

In general, issues related to design and maintenance are considered independently, yet the decisions made in these two areas influence each other. Design for maintenance is an opportunity to optimize the life cycle cost of a product, particularly in the nuclear and aeronautical fields, where maintenance expenses represent more than 60% of life cycle costs. The design of large-scale systems starts with the product architecture: a choice of components in terms of cost, reliability, weight and other attributes corresponding to the specifications. On the other hand, the design must take maintenance into account by improving, in particular, real-time monitoring of equipment through the integration of new technologies such as connected sensors and intelligent actuators. We observed that the different approaches used in Design for Maintenance (DFM) methods are limited to the simultaneous characterization of the reliability and maintainability of a multi-component system. This article proposes a DFM method that helps designers propose dynamic maintenance for multi-component industrial systems, where "dynamic" refers to the ability to integrate available monitoring data to adapt the maintenance decision in real time. The goal is to maximize the availability of the system at a given life cycle cost. The paper presents an approach for the simultaneous optimization of the design and maintenance of multi-component systems. Here the design is characterized by four decision variables for each component (reliability level, maintainability level, redundancy level, and level of monitoring data), and the maintenance by two decision variables (the dates of the maintenance stops and the maintenance operations to be performed on the system during these stops). The DFM model helps designers choose technical solutions for large-scale industrial products, i.e., complex multi-component systems with long life cycles, such as trains and aircraft. The method is based on a two-level hybrid algorithm for the simultaneous optimization of design and maintenance, using genetic algorithms, as sketched below. The first level selects a design solution for a given system, considering the life cycle cost and the reliability. The second level determines a dynamic and optimal maintenance plan to be deployed for that design solution; it is based on the Maintenance Free Operating Period (MFOP) concept, which takes into account decision criteria such as total reliability, maintenance cost and maintenance time. Depending on the life cycle duration, the desired availability, and the desired business model (sales or rental), this tool provides visibility of overall costs and of the optimal product architecture.
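A highly simplified two-level sketch of this hybrid scheme: the outer genetic algorithm searches over per-component design levels, while the inner level picks the best maintenance interval for each candidate design. The cost figures and the availability model are invented for illustration and do not reproduce the paper's MFOP formulation.

```python
import random

random.seed(7)

# Design levels: (component reliability, acquisition cost) -- all assumed.
LEVELS = {0: (0.90, 10.0), 1: (0.95, 16.0), 2: (0.99, 30.0)}
N_COMP, HORIZON = 4, 100.0

def inner_best_maintenance(design):
    """Inner level: choose the maintenance-stop interval minimising total LCC."""
    best = (float("inf"), 0.0)
    for interval in (5.0, 10.0, 20.0, 25.0):
        stops = HORIZON / interval
        rel = 1.0
        for lvl in design:
            rel *= LEVELS[lvl][0]               # series-system reliability
        exp_failures = stops * (1 - rel)        # assumed failure model
        lcc = sum(LEVELS[l][1] for l in design) + 2.0 * stops + 50.0 * exp_failures
        best = min(best, (lcc, interval))
    return best

def ga(pop_size=30, gens=40):
    """Outer level: genetic algorithm over per-component design levels."""
    pop = [[random.randrange(3) for _ in range(N_COMP)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda d: inner_best_maintenance(d)[0])
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_COMP)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:           # mutation
                child[random.randrange(N_COMP)] = random.randrange(3)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda d: inner_best_maintenance(d)[0])

best = ga()
print("design:", best, "-> (LCC, interval):", inner_best_maintenance(best))
```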

Keywords: availability, design for maintenance (DFM), dynamic maintenance, life cycle cost (LCC), maintenance free operating period (MFOP), simultaneous optimization

Procedia PDF Downloads 89
1998 Contention Window Adjustment in IEEE 802.11-based Industrial Wireless Networks

Authors: Mohsen Maadani, Seyed Ahmad Motamedi

Abstract:

The use of wireless technology in industrial networks has gained considerable attention in recent years. In this paper, we thoroughly analyze the effect of the contention window (CW) size on the performance of IEEE 802.11-based industrial wireless networks (IWNs) from a delay and reliability perspective. Results show that the default values of CWmin, CWmax, and the retry limit (RL) are far from optimal for industrial applications, whose characteristics include short packets and noisy environments. An adaptive, payload-dependent CW algorithm is proposed to minimize the average delay. Finally, a simple but effective CW and RL setting is proposed for industrial applications which outperforms the minimum-average-delay solution from a maximum delay and jitter perspective, at the cost of a slightly higher average delay. Simulation results show improvements of up to 20%, 25%, and 30% in average delay, maximum delay and jitter, respectively.
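The abstract does not give the adaptive rule itself; the sketch below shows one plausible payload-dependent mapping combined with standard binary exponential backoff, purely as an illustration.

```python
# Payload-dependent contention-window sketch. The payload-to-window mapping
# is an invented stand-in for the paper's adaptive rule; IEEE 802.11
# constrains CW values to powers of two minus one.

def adaptive_cw(payload_bytes: int, retries: int,
                cw_min: int = 15, cw_max: int = 255) -> int:
    # shorter industrial packets -> smaller initial window, keeping delay low
    base = cw_min if payload_bytes <= 100 else 2 * cw_min + 1
    cw = (base + 1) * (2 ** retries) - 1        # binary exponential backoff
    return min(cw, cw_max)

for payload in (50, 500):
    print(payload, [adaptive_cw(payload, r) for r in range(4)])
```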

Keywords: average delay, contention window, distributed coordination function (DCF), jitter, industrial wireless network (IWN), maximum delay, reliability, retry limit

Procedia PDF Downloads 394
1997 Psychological Testing in Industrial/Organizational Psychology: Validity and Reliability of Psychological Assessments in the Workplace

Authors: Melissa C. Monney

Abstract:

Psychological testing has been of interest to researchers for many years, both as a useful tool for assessing and diagnosing various disorders and as an aid to understanding human behavior. However, for over 20 years now, researchers and laypersons alike have been interested in using such tests for other purposes, such as informing decisions on employee selection, promotion, and even termination. In recent years, psychological assessments have been useful in facilitating workplace decision processes regarding employee movement within organizations. This literature review explores four of the most commonly used psychological tests in workplace environments, namely cognitive ability, emotional intelligence, integrity, and personality tests, which organizations have used to assess different factors of human behavior as predictive measures of future employee behavior. The findings suggest that while there is much controversy and debate regarding the validity and reliability of these tests in workplace settings, as they were not originally designed for these purposes, their use in the workplace has helped decrease costs and employee turnover and increase job satisfaction by ensuring the right employees are selected for their roles.

Keywords: cognitive ability, personality testing, predictive validity, workplace behavior

Procedia PDF Downloads 218
1996 Verification of the Supercavitation Phenomena: Investigation of the Cavity Parameters and Drag Coefficients for Different Types of Cavitator

Authors: Sezer Kefeli, Sertaç Arslan

Abstract:

Supercavitation is a pressure-dependent process which makes it possible to eliminate wetted-surface effects on an underwater vehicle, owing to the differences in viscosity and velocity between the liquid (freestream) and gas phases. Cavitation occurs as a result of a rapid pressure drop or a temperature rise in the liquid phase. In this paper, pressure-based cavitation is investigated, as it is the type generally encountered in the underwater world. These vapor-filled, pressure-based cavities are fundamentally unstable and harmful for any underwater vehicle, because the cavities (bubbles or voids) generate intense shock waves when collapsing. Supercavitation, on the other hand, is a desirable and stabilized phenomenon compared with general pressure-based cavitation: it offers a way of minimizing form drag, and supercavitating vehicles have therefore attracted renewed interest. When proper circumstances are established, either by increasing the operating speed of the underwater vehicle or by decreasing the pressure difference between the free stream and the cavity, continuous supercavitation is obtainable. There are two ways to obtain stable and continuous supercavitation, called natural and artificial supercavitation. To generate natural supercavitation, various mechanical structures, called cavitators, have been devised, and many cavitator types have been studied experimentally or numerically on CFD platforms since the 1900s with the intent of observing natural supercavitation. In this paper, experimental results are first obtained, and trend lines are generated for the supercavitation parameters in terms of the cavitation number (σ), the form drag coefficient (C_D), the dimensionless cavity diameter (d_m/d_c), and the dimensionless cavity length (L_c/d_c). Natural cavitation verification studies are then carried out for disk- and cone-shaped cavitators. In addition, the supercavitation parameters are numerically analyzed at different operating conditions, and the CFD results are fitted to the trend lines of the experimental results. The aims of this paper are to generate a generally accepted drag coefficient equation for disk and cone cavitators at different cavitator half-angles and to investigate the supercavitation parameters with respect to the cavitation number. Moreover, 165 CFD analyses are performed at different cavitation numbers in FLUENT version 21R2, with five different cavitator types modeled in SCDM with respect to the cavitator half-angles. A CFD database is generated from the numerical results, and new trend lines for the supercavitation parameters are generated and compared with the experimental results. Finally, the generally accepted drag coefficient equation and the equations for the supercavitation parameters are derived.
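For disk cavitators, a linear trend of the form C_D(σ) = C_D0(1 + σ) is classically quoted in the supercavitation literature; the fit below uses synthetic data points, not the paper's measurements.

```python
import numpy as np

# Fit the classical linear trend C_D(sigma) = C_D0 * (1 + sigma) for a disk
# cavitator; the data points are synthetic, generated around C_D0 ~ 0.82.
sigma = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
cd = 0.82 * (1 + sigma) + np.random.default_rng(8).normal(0, 0.01, 5)

slope, intercept = np.polyfit(sigma, cd, 1)
cd0 = intercept                      # drag coefficient in the limit sigma -> 0
print(f"C_D0 = {cd0:.3f}, C_D(sigma) ~ {cd0:.3f} * (1 + {slope/cd0:.2f} sigma)")
```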

Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavitating flows, supercavitation parameters, drag reduction, viscous force elimination, natural cavitation verification

Procedia PDF Downloads 111
1995 The Attitude and Willingness to Use Telecare for Arthritis Patients

Authors: Jui-Chen Huang

Abstract:

Nowadays, the population is aging and the number of people who need care is increasing, but manpower and funding are insufficient. This study therefore explores the attitudes and willingness of arthritis patients to adopt telecare, taking a large medical institution in central Taiwan as the sample hospital. A structured questionnaire (using a five-point Likert scale) was used to collect data from chronic patients over 20 years old, and a total of 500 valid questionnaires were collected. SPSS 18.0 was used for reliability analysis and independent-sample t-tests to explore the differences in attitudes and willingness to use telecare between arthritis and non-arthritis patients. The Cronbach's alpha of the study questionnaire was above 0.94, indicating good reliability. Arthritis and non-arthritis patients showed a statistically significant difference in attitudes toward telecare, while the difference in willingness to use it did not reach statistical significance. In addition, the mean attitude and intention scores of arthritis patients for telecare were 3.38 and 3.41, respectively, indicating that arthritis patients have a certain degree of willingness to adopt telecare, which is worthy of follow-up research and practical promotion by the industry.
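The group comparison reported here is a standard independent-sample t-test; the sketch below uses synthetic Likert scores around the reported arthritis-group mean of 3.38 (the other parameters are invented).

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(9)
# synthetic 5-point Likert attitude scores for the two groups (illustrative)
arthritis     = np.clip(rng.normal(3.38, 0.8, 250).round(), 1, 5)
non_arthritis = np.clip(rng.normal(3.60, 0.8, 250).round(), 1, 5)

t, p = ttest_ind(arthritis, non_arthritis)
print(f"t = {t:.2f}, p = {p:.4f}")   # p < .05 -> attitudes differ
```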

Keywords: telecare, arthritis patients, attitudes, intention

Procedia PDF Downloads 120
1994 VeriFy: A Solution to Implement Autonomy Safely and According to the Rules

Authors: Michael Naderhirn, Marco Pavone

Abstract:

Problem statement, motivation, and aim of work: So far, control algorithms have been developed by control engineers such that the controller is shown to fit a specification by testing. When it comes to the certification of an autonomous car in highly complex scenarios, the challenge is much greater, since such a controller must mathematically guarantee that it implements the rules of the road while also guaranteeing properties like safety and real-time executability. What if this demanding problem could be solved by combining formal verification and system theory? The aim of this work is to present a workflow that solves the above-mentioned problem. Summary of the presented results / main outcomes: We show the use of an English-like language to transform the rules of the road into system specifications for an autonomous car. The language-based specifications are used to define system functions and interfaces, and from these a formal model is developed which correctly models the specifications. On the other side, a mathematical model describing the system's dynamics is used to calculate the system's reachability set, which is further used to determine the system input boundaries. A motion planning algorithm is then applied inside the system boundaries to find an optimized trajectory, in combination with the formal specification model, while satisfying the specifications. The result is a control strategy which can be applied in real time, independent of the scenario, with a mathematical guarantee of satisfying a predefined specification. We demonstrate the applicability of the method in simulated driving scenarios and discuss a potential certification. Originality, significance, and benefit: To the authors' best knowledge, this is the first automated workflow that combines a specification in an English-like language and a mathematical model, in a formally verified way, to synthesize a controller for potential real-time applications like autonomous driving.
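The reachability step can be illustrated in miniature: the sketch below exhaustively enumerates the states a discretised system can reach under bounded inputs and checks a "never enter the unsafe set" specification. The dynamics and the unsafe set are invented toy stand-ins for the paper's continuous reachability computation.

```python
from collections import deque

# Toy reachability check on a 1-D position/velocity grid. If any reachable
# state is unsafe, some input sequence violates the spec, and the controller
# synthesis step would have to restrict the admissible inputs.

def step(state, accel):
    pos, vel = state
    vel = max(-3, min(3, vel + accel))        # bounded input and velocity
    return pos + vel, vel

unsafe = {(pos, vel) for pos in range(8, 11) for vel in range(-3, 4)}
frontier, seen = deque([(0, 0)]), {(0, 0)}
spec_holds = True
while frontier:
    s = frontier.popleft()
    for a in (-1, 0, 1):                      # all admissible inputs
        nxt = step(s, a)
        if not (0 <= nxt[0] <= 20):
            continue
        if nxt in unsafe:
            spec_holds = False                # specification violated
            continue                          # do not explore past violations
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
print("specification 'never enter unsafe set' holds:", spec_holds)
```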

Keywords: formal system verification, reachability, real time controller, hybrid system

Procedia PDF Downloads 219
1993 Stochastic Analysis of Linux Operating System through Copula Distribution

Authors: Vijay Vir Singh

Abstract:

This work focuses on studying a Linux operating system connected in a LAN (local area network). A star topology (called subsystem-1) and a bus topology (called subsystem-2) are taken into account, placed at two different locations and connected to a server through a hub. In both topologies we assume n clients. The system has two types of failure, partial failure and complete failure, and the partial failures are further categorized as minor and major. It is assumed that a minor partial failure degrades the subsystems, while a major partial failure puts the subsystem into breakdown mode. The system may also fail completely due to server failure, hacking, blocking, etc. The system is studied using the supplementary variable technique and Laplace transforms, considering the different types of failure and two types of repair. Various reliability measures, for example system availability, system reliability, MTTF, and the profit function, are discussed for different parameter values.
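The Gumbel-Hougaard copula named in the keywords couples two marginal distributions into a joint one; the sketch below evaluates the joint probability that both repairs complete by a given time, with illustrative exponential marginals and theta.

```python
import numpy as np

# Gumbel-Hougaard copula: C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)).
def gumbel_hougaard(u, v, theta=2.0):
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1 / theta)))

# exponential repair-time marginals (rates are assumed)
t = 2.0
u = 1 - np.exp(-0.5 * t)      # P(repair_1 <= t)
v = 1 - np.exp(-0.8 * t)      # P(repair_2 <= t)
print(f"P(both repairs done by t={t}): {gumbel_hougaard(u, v):.3f}")
print(f"independent product for comparison: {u * v:.3f}")
```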

Keywords: star topology, bus topology, blocking, hacking, Linux operating system, Gumbel-Hougaard family copula, supplementary variable

Procedia PDF Downloads 338
1992 An Investigation on Organisation Cyber Resilience

Authors: Arniyati Ahmad, Christopher Johnson, Timothy Storer

Abstract:

Cyber exercises are used to assess the preparedness of a community against cyber crises, technology failures and critical information infrastructure (CII) incidents. Cyber exercises, also called cyber crisis exercises or cyber drills, involve partnerships or collaboration between public and private agencies from several sectors. This study investigates the organisational cyber resilience (OCR) of the sectors that participated in the Malaysian cyber exercise X Maya. It uses a principle-based cyber resilience survey, the C-Suite Executive Checklist, developed by the World Economic Forum in 2012; to ensure the suitability of the survey for investigating OCR, a reliability test was conducted on the checklist items. The research further investigates differences in OCR among the ten Critical National Information Infrastructure (CNII) sectors that participated in the cyber exercise. A one-way ANOVA test showed a statistically significant difference in OCR among these ten CNII sectors.
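The one-way ANOVA used here is a one-liner with SciPy; the ten groups below are synthetic stand-ins for the CNII sector scores.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(10)
# synthetic OCR scores for ten CNII sectors (illustrative group means)
groups = [rng.normal(loc=3.0 + 0.1 * i, scale=0.5, size=12) for i in range(10)]

F, p = f_oneway(*groups)
print(f"F = {F:.2f}, p = {p:.4f}")   # p < .05 -> OCR differs across sectors
```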

Keywords: critical information infrastructure, cyber resilience, organisation cyber resilience, reliability test

Procedia PDF Downloads 331
1991 Process of Analysis, Evaluation and Verification of the 'Real' Redevelopment of the Public Open Space at the Neighborhood’s Stairs: Case Study of Serres, Greece

Authors: Ioanna Skoufali

Abstract:

The present study is directed towards adaptation to climate change, closely related to the urban heat island (UHI) phenomenon. This issue is widespread and common to different urban realities, but particularly to Mediterranean cities characterized by a dense urban fabric. The attention of this open-space redevelopment work is focused on mitigation techniques aimed at solving local problems, such as the microclimatic parameters and summer thermal comfort conditions related to urban morphology. This quantitative analysis, evaluation, and verification survey applies the methodology to a real case study in Serres, with the experimental support of the ENVI-met Pro V4.1 and BioMet software, developed: i) in two phases, ante-operam (phase a1, 2013) and post-operam (phase a2, 2016); and ii) in scenario A (+25% green, 2017). The first study identifies the main intervention strategies, namely the application of cool pavements, the increase of green surfaces, the creation of water surfaces, and external fans; moreover, it attains the minimum results required by the National Program 'Bioclimatic improvement project for public open space', EPPERAA (ESPA 2007-2013), for four environmental parameters: TAir = 1.5 °C, TSurface = 6.5 °C, CDH = 30% and PET = 20%. In addition, the second study shows a greater potential for improvement than the post-operam intervention by increasing the vegetation within the district towards the SW/SE. The final objective of this in-depth design is to be transferable to comparable urban regeneration processes, with evident effects on the efficiency of microclimatic mitigation and thermal comfort.

Keywords: cool pavements, microclimate parameters (TAir, Tsurface, Tmrt, CDH), mitigation strategies, outdoor thermal comfort (PET & UTCI)

Procedia PDF Downloads 173
1990 Additive Weibull Model Using Warranty Claims and Finite Element Analysis Fatigue Data

Authors: Kanchan Mondal, Dasharath Koulage, Dattatray Manerikar, Asmita Ghate

Abstract:

This paper presents an additive reliability model that uses both warranty data and Finite Element Analysis (FEA) fatigue data. Warranty data for any product give insight into its underlying issues and are often used by reliability engineers to build prediction models that forecast the failure rate of parts. There is, however, one major limitation in using warranty data for prediction: warranty periods constitute only a small fraction of the total lifetime of a product, and most of the time they cover only the infant-mortality and useful-life zones of the bathtub curve. Predicting with warranty data alone in these cases does not generally provide results with the desired accuracy. The failure rate of a mechanical part is driven by random issues initially and by wear-out or usage-related issues at later stages of its lifetime, so for better predictability of the failure rate one needs to explore its behavior in the wear-out zone of the bathtub curve. Due to cost and time constraints, it is not always possible to test samples until failure, but FEA fatigue analysis can provide the failure rate behavior of a part well beyond the warranty period, more quickly and at lower cost. In this work, the authors propose an Additive Weibull Model which makes use of both warranty and FEA fatigue analysis data for predicting failure rates. It involves modeling two data sets for a part, one from existing warranty claims and the other from fatigue life data. Hazard-rate-based Weibull estimation is used for modeling the warranty data, whereas S-N-curve-based Weibull parameter estimation is used for the FEA data. The two separate sets of Weibull parameters are estimated and combined to form the proposed Additive Weibull Model for prediction.
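The additive structure can be made concrete as a sum of two Weibull hazard rates, one decreasing (warranty/infant-mortality zone) and one increasing (FEA/wear-out zone); all parameters below are invented for illustration.

```python
import numpy as np

# Additive Weibull sketch: one Weibull for early failures, one for wear-out.
def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1)

t = np.linspace(0.1, 15, 300)                     # years in service
h_warranty = weibull_hazard(t, beta=0.8, eta=30)  # beta < 1: decreasing hazard
h_fatigue  = weibull_hazard(t, beta=3.5, eta=12)  # beta > 1: increasing hazard
h_total = h_warranty + h_fatigue                  # additive model: bathtub-like

print("hazard at 1, 5, 12 years:",
      [round(h, 4) for h in np.interp([1, 5, 12], t, h_total)])
```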

Keywords: bathtub curve, fatigue, FEA, reliability, warranty, Weibull

Procedia PDF Downloads 46
1989 Simplifying the Migration of Architectures in Embedded Applications: Introducing a Pattern Language to Support the Workforce

Authors: Farha Lakhani, Michael J. Pont

Abstract:

There are two main architectures used to develop software for modern embedded systems: these can be labelled "event-triggered" (ET) and "time-triggered" (TT). The research presented in this paper is concerned with the issues involved in migration between these two architectures. Although TT architectures are widely used in safety-critical applications, they are less familiar to developers of mainstream embedded systems. The research began from the premise that, for a broad class of systems implemented using an ET architecture, migration to a TT architecture would improve reliability. It may be tempting to assume that conversion between ET and TT designs simply involves converting all event-handling software routines into periodic activities; however, the required changes to the software architecture are, in many cases, rather more profound. The main contribution of the work presented in this paper is to identify ways in which the significant effort involved in migrating between existing ET architectures and "equivalent" (and effective) TT architectures could be reduced. The research takes an innovative step in this regard by introducing, for the first time, the use of design patterns for this purpose.
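The flavor of the ET-to-TT conversion can be conveyed with a minimal cyclic-executive sketch, written here in Python for brevity (a real TT scheduler would run in C on bare metal or an RTOS); event handlers become periodic tasks released at fixed ticks.

```python
import time

TICK_S = 0.010                        # 10 ms scheduler tick (assumed)

def poll_sensor():   print("sensor")  # was: interrupt-driven handler
def update_output(): print("actuate") # was: event callback

tasks = [                             # (period in ticks, offset, function)
    (1, 0, poll_sensor),              # every tick
    (5, 2, update_output),            # every 50 ms, offset to reduce jitter
]

next_tick = time.monotonic()
for tick in range(20):                # run 20 ticks of the static schedule
    for period, offset, task in tasks:
        if tick >= offset and (tick - offset) % period == 0:
            task()
    next_tick += TICK_S
    time.sleep(max(0.0, next_tick - time.monotonic()))
```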

Keywords: embedded applications, software architectures, reliability, pattern

Procedia PDF Downloads 298
1988 On Reliability of a Credit Default Swap Contract during the EMU Debt Crisis

Authors: Petra Buzkova, Milos Kopa

Abstract:

The reliability of the credit default swap market was questioned repeatedly during the EMU debt crisis. This article examines whether that development influenced sovereign EMU CDS prices in general. We regress the CDS market price on a risk-neutral model CDS price obtained from an adapted reduced-form valuation model over the 2009-2013 period, and we look for a break point in single-equation and multi-equation econometric models in order to show the changes in the relationship between CDS market and model prices. Our results differ according to the risk profile of a country. We find that in the case of riskier countries, the relationship between the market and model prices changed when market participants started to question the ability of CDS contracts to protect their buyers; specifically, it weakened after the change. In the case of less risky countries, the change happened earlier, and the effect of a weakened relationship is not observed.
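The break-point search described here is commonly done with a Chow test; the sketch below simulates a weakening relationship and computes the Chow F-statistic at a candidate break (the series and the break date are invented).

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200
model_price = rng.normal(100, 10, n)
beta = np.where(np.arange(n) < 120, 1.0, 0.4)        # relationship weakens
market_price = beta * model_price + rng.normal(0, 5, n)

def rss(x, y):
    """Residual sum of squares of an OLS fit y ~ 1 + x."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ((y - X @ coef) ** 2).sum()

k = 2                                  # parameters per regression
rss_pooled = rss(model_price, market_price)
rss_1 = rss(model_price[:120], market_price[:120])
rss_2 = rss(model_price[120:], market_price[120:])
F = ((rss_pooled - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (n - 2 * k))
print(f"Chow F = {F:.1f}")             # large F -> structural break at t=120
```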

Keywords: chow stability test, credit default swap, debt crisis, reduced form valuation model, seemingly unrelated regression

Procedia PDF Downloads 235
1987 Structure Design of Vacuum Vessel with Large Openings for Spacecraft Thermal Vacuum Test

Authors: Han Xiao, Ruan Qi, Zhang Lei, Qi Yan

Abstract:

A space environment simulator is a facility used to conduct thermal tests on spacecraft, and the vacuum vessel is its main body. According to the requirements of the thermal tests on the spacecraft and its solar array panels, the primary vessel and the side vessels are designed as a combined structure connected through an aperture, with an aperture ratio reaching 0.7. Since the vacuum vessel is subjected to 0.1 MPa of external pressure during thermal testing, it is necessary to verify the vessel's strength and stability in order to ensure the simulator's reliability and safety. Considering the impact of large openings on the vacuum vessel structure, this paper explores the reinforcement design and analysis of vacuum vessels with large openings, using the vacuum vessel design of a large space environment simulator as an example. Tests showed that the reinforcement structure is effective in meeting the requirements for external pressure and gravity loads. This ensures the reliability of the space environment simulator and provides a guarantee for spacecraft development.
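As a first-order check of the stability requirement, the classical long-cylinder elastic buckling pressure under external pressure can be estimated as below; the dimensions are assumed, and the formula ignores stiffening rings and the openings, which is precisely why reinforcement matters.

```python
# Long-cylinder elastic buckling under external pressure (classical result):
# p_cr = 2E/(1 - nu^2) * (t/D)^3. Dimensions below are assumed for illustration.
E = 2.1e11        # Young's modulus of steel, Pa (assumed)
nu = 0.3          # Poisson's ratio (assumed)
t = 0.030         # shell thickness, m (assumed)
D = 5.0           # vessel diameter, m (assumed)

p_cr = 2 * E / (1 - nu**2) * (t / D) ** 3
# ~0.1 MPa here, i.e. marginal against the 0.1 MPa load -> reinforcement needed
print(f"critical external pressure: {p_cr/1e6:.2f} MPa (design load: 0.1 MPa)")
```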

Keywords: vacuum vessel, large opening, space environment simulator, structure design

Procedia PDF Downloads 495
1986 Prediction of Structural Response of Reinforced Concrete Buildings Using Artificial Intelligence

Authors: Juan Bojórquez, Henry E. Reyes, Edén Bojórquez, Alfredo Reyes-Salazar

Abstract:

This paper addresses the use of artificial intelligence to obtain the structural reliability of reinforced concrete buildings. For this purpose, artificial neural networks (ANNs) are developed to predict seismic demand hazard curves. In order to have enough input-output data to train the ANN, a set of reinforced concrete buildings (low-, mid-, and high-rise) is designed, and a probabilistic seismic hazard analysis is then performed to obtain the seismic demand hazard curves. The results are used as input-output data to train the ANN in a feedforward back-propagation model. The seismic demand hazard curves predicted by the ANN are then compared with those from the conventional analysis. It is concluded that the computation time is significantly lower and that the predictions obtained from the ANN are accurate in comparison with the values obtained by conventional methods.
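A surrogate of this kind can be prototyped quickly with an off-the-shelf multilayer perceptron; in the sketch below, the training pairs are generated from an invented functional form rather than from a real probabilistic seismic hazard analysis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(12)
# synthetic training pairs: (storeys, period, demand level) -> log annual
# exceedance rate; this functional form is invented for illustration.
X = rng.uniform([3, 0.3, 0.1], [20, 2.0, 2.0], size=(500, 3))
y = np.log(1e-2 * np.exp(-1.5 * X[:, 2]) / X[:, 1])

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(X, y)
query = [[10, 1.0, 0.8]]              # mid-rise building, demand level 0.8
print("predicted log annual exceedance rate:", ann.predict(query)[0])
```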

Keywords: structural reliability, seismic design, machine learning, artificial neural network, probabilistic seismic hazard analysis, seismic demand hazard curves

Procedia PDF Downloads 171
1985 Validating the Theme Park Service Quality Scale: A Case Study of Zhuhai Chimelong Ocean Kingdom

Authors: Kat Jingjing Luo

Abstract:

The development of theme parks in China has undergone rapid growth in the past decades. Increasing competition on service quality has forced theme park managers to consider the relationship between service quality and visitor satisfaction. Even though existing service quality measurements such as SERVQUAL and THEMEQUAL have been applied in related research, none of them is specific to Chinese theme park service quality. This study aims to investigate the service quality of the currently most popular theme park in China and to develop a unique, reliable and valid scale. The reliability and validity analysis results from a survey of over 200 tourists in Chimelong Ocean Kingdom in Zhuhai city, in the south of China, indicate that waiting time is a distinct factor in the measurement of Chinese theme park service quality, one not included in the THEMEQUAL instrument (i.e., tangibles, reliability, responsiveness and access, assurance, empathy and courtesy). The newly developed scale gives a better understanding of service quality in the Chinese theme park industry, and managerial implications of the research regarding how to improve theme park service quality are discussed.

Keywords: theme park, scale development, China, service quality

Procedia PDF Downloads 251
1984 Subsea Control Module (SCM) - A Vital Factor for Well Integrity and Production Performance in Deep Water Oil and Gas Fields

Authors: Okoro Ikechukwu Ralph, Fuat Kara

Abstract:

The discovery of hydrocarbon reserves has clearly drifted offshore, and into deeper waters: areas where the industry still has limited knowledge and that were hitherto regarded as being out of reach. This shift presents significant and increased challenges in the technology required to guarantee the safety of personnel, environment and equipment; to ensure high reliability of installed equipment; and to provide a high level of confidence in the security of investment and in company reputation. Nowhere are these challenges more apparent than in subsea well integrity and production performance. The past two decades have witnessed an enormous rise in deep- and ultra-deep-water offshore field developments for the recovery of hydrocarbons, with subsea equipment installed on the seabed being the technology of choice for these developments. This paper discusses the role of the Subsea Control Module (SCM) as a vital factor in deep-water well integrity and production performance, and a case study of deep-water well integrity and production performance is analysed.

Keywords: offshore reliability, production performance, subsea control module, well integrity

Procedia PDF Downloads 484
1983 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges

Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch

Abstract:

Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data make it possible to gain insights into the molecular differences between tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints that specifically measure a mechanism is relatively straightforward, the interpretation of big data is more complex and benefits from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions for verifying systems biology data, methods, and conclusions. Computational challenges that leverage the wisdom of the crowd allow the benchmarking of methods for specific tasks, such as signature extraction and/or sample classification. Four challenges have already been conducted successfully and confirmed that the aggregation of predictions often leads to better results than individual predictions, and that methods perform best in specific contexts. Whenever the scientific question of interest has no gold standard but could greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework for navigating through different applications, databases and services in biology and medicine. We will present the results obtained when analyzing data with our network-based method and introduce a datathon that will take place in Japan to encourage the analysis of the same datasets with other methods, allowing for the consolidation of conclusions.

Keywords: big data interpretation, datathon, systems toxicology, verification

Procedia PDF Downloads 260
1982 Comparative Isotherms Studies on Adsorptive Removal of Methyl Orange from Wastewater by Watermelon Rinds and Neem-Tree Leaves

Authors: Sadiq Sani, Muhammad B. Ibrahim

Abstract:

Watermelon rind powder (WRP) and neem-tree leaf powder (NLP) were used as adsorbents in equilibrium adsorption isotherm studies for the removal of methyl orange dye (MO) from simulated wastewater. The applicability of various isotherm models to the process was tested. All isotherms fitted the experimental data with excellent linear reliability (R²: 0.9487-0.9992), but adsorption onto WRP was more reliable (R²: 0.9724-0.9992) than onto NLP (R²: 0.9487-0.9989), except for the Temkin isotherm, where reliability was better for NLP (R²: 0.9937) than for WRP (R²: 0.9935). The Dubinin-Radushkevich monolayer adsorption capacities for both WRP and NLP (qD: 20.72 mg/g, 23.09 mg/g) were higher than the Langmuir values (qm: 18.62 mg/g, 21.23 mg/g), with both capacities higher for adsorption onto NLP (qD: 23.09 mg/g; qm: 21.23 mg/g) than onto WRP (qD: 20.72 mg/g; qm: 18.62 mg/g). While the values of the Langmuir separation factor (RL) for both adsorbents suggested unfavourable adsorption processes (RL: -0.0461, -0.0250), the Freundlich constant (nF) indicated a favourable process on both WRP (nF: 3.78) and NLP (nF: 5.47). Adsorption onto NLP had a higher Dubinin-Radushkevich mean free energy of adsorption (E: 0.13 kJ/mol) than WRP (E: 0.08 kJ/mol), and the Temkin heat of adsorption (bT) was better for NLP (bT: -0.54 kJ/mol) than for WRP (bT: -0.95 kJ/mol), all of which suggested physical adsorption.
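The Langmuir and Freundlich parameters quoted above are typically obtained from linearised fits; the sketch below shows both fits on synthetic equilibrium data.

```python
import numpy as np

# Linearised Langmuir (Ce/qe = 1/(qm*KL) + Ce/qm) and Freundlich
# (ln qe = ln KF + (1/nF) ln Ce) fits; the equilibrium data are synthetic.
Ce = np.array([5.0, 10, 20, 40, 80])          # equilibrium conc., mg/L
qe = 21.0 * 0.05 * Ce / (1 + 0.05 * Ce)       # synthetic Langmuir-like data

# Langmuir: slope = 1/qm, intercept = 1/(qm*KL)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm, KL = 1 / slope, slope / intercept
print(f"Langmuir: qm = {qm:.1f} mg/g, KL = {KL:.3f} L/mg")

# Freundlich: slope = 1/nF, intercept = ln KF
s_f, i_f = np.polyfit(np.log(Ce), np.log(qe), 1)
print(f"Freundlich: nF = {1/s_f:.2f}, KF = {np.exp(i_f):.2f}")
```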

Keywords: adsorption isotherms, methyl orange, neem leaves, watermelon rinds

Procedia PDF Downloads 245