Search results for: reliability
1790 Bayes Estimation of Parameters of Binomial Type Rayleigh Class Software Reliability Growth Model using Non-informative Priors
Authors: Rajesh Singh, Kailash Kale
Abstract:
In this paper, software failures are assumed to occur according to a binomial-type process, and the failure intensity is characterized by a one-parameter Rayleigh-class Software Reliability Growth Model (SRGM). The proposed SRGM is a mathematical function of two parameters: the total number of failures, η₀, and the scale parameter, η₁. It is assumed that very little or no information is available about these parameters, so, adopting non-informative priors for both, the Bayes estimators of η₀ and η₁ are obtained under the squared-error loss function. The proposed Bayes estimators are compared with the corresponding maximum likelihood estimators on the basis of risk efficiencies obtained by Monte Carlo simulation. It is concluded that both proposed Bayes estimators, of the total number of failures and of the scale parameter, perform well for a proper choice of execution time.
Keywords: binomial process, non-informative prior, maximum likelihood estimator (MLE), Rayleigh class, software reliability growth model (SRGM)
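The risk-efficiency comparison described above can be illustrated with a deliberately simplified analogue: for a binomial success probability under a uniform (non-informative) prior, the Bayes estimator under squared-error loss is the posterior mean, whose risk can be compared with the MLE's by Monte Carlo. This is a sketch of the comparison methodology only, not the Rayleigh-class SRGM itself; all constants below are illustrative.

```python
import random

def mle(x, n):
    # maximum likelihood estimate of a binomial success probability
    return x / n

def bayes(x, n):
    # posterior mean under a uniform (non-informative) prior and
    # squared-error loss: (x + 1) / (n + 2)
    return (x + 1) / (n + 2)

def risk(estimator, p, n, reps=20000, seed=1):
    # Monte Carlo estimate of the mean squared error (risk) at true value p
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        x = sum(rng.random() < p for _ in range(n))
        total += (estimator(x, n) - p) ** 2
    return total / reps

p_true, n = 0.5, 10
r_mle = risk(mle, p_true, n)
r_bayes = risk(bayes, p_true, n)
risk_efficiency = r_mle / r_bayes  # > 1 where the Bayes estimator wins
```

Near p = 0.5 the shrinkage of the posterior mean toward 1/2 gives the Bayes estimator the lower risk; for p near 0 or 1 the comparison can reverse, which is why risk efficiency is reported over a range of conditions.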
Procedia PDF Downloads 388

1789 Residual Lifetime Estimation for Weibull Distribution by Fusing Expert Judgements and Censored Data
Authors: Xiang Jia, Zhijun Cheng
Abstract:
The residual lifetime of a product is the operation time between the current time and the time point when failure happens, and its estimation is important in reliability analysis. To predict the residual lifetime, it is necessary to assume or verify a particular distribution that the lifetime of the product follows, and the two-parameter Weibull distribution is frequently adopted to describe lifetime in reliability engineering. Due to time constraints and cost reduction, a life testing experiment is usually terminated before all units have failed, so censored data are usually collected. Other information can also be obtained for reliability analysis; expert judgements are considered here, since it is common for experts to provide useful information concerning reliability. Therefore, in this paper the residual lifetime under the Weibull distribution is estimated by fusing censored data and expert judgements. First, closed forms for the point estimate and confidence interval of the residual lifetime under the Weibull distribution are presented. Next, the expert judgements are regarded as prior information, and a procedure for determining the prior distribution of the Weibull parameters is developed; for completeness, both the case of a single expert judgement and that of two or more judgements are covered. The posterior distribution of the Weibull parameters is then derived. Because the posterior distribution of the residual lifetime is difficult to derive directly, a sample-based method is proposed that generates posterior samples of the Weibull parameters with the Markov Chain Monte Carlo (MCMC) method; these samples are used to obtain the Bayes estimate and credible interval for the residual lifetime. Finally, an illustrative example is discussed to show the application.
It demonstrates that the proposed method is simple, satisfactory, and robust.
Keywords: expert judgements, information fusion, residual lifetime, Weibull distribution
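For the Weibull case above, the conditional survival of a unit that has already operated to time t0 has a closed form, and so does its median residual life. A minimal sketch, assuming the two Weibull parameters are known rather than estimated from censored data and expert priors as in the paper:

```python
import math

def conditional_survival(x, t0, shape, scale):
    # P(T > t0 + x | T > t0) for a two-parameter Weibull lifetime T
    return math.exp((t0 / scale) ** shape - ((t0 + x) / scale) ** shape)

def median_residual_life(t0, shape, scale):
    # closed-form solution of conditional_survival(x, t0, ...) = 0.5
    return scale * ((t0 / scale) ** shape + math.log(2)) ** (1 / shape) - t0
```

For shape = 1 the Weibull reduces to the memoryless exponential, so the median residual life is scale·ln 2 regardless of t0; for shape > 1 (wear-out) it shrinks as t0 grows.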
Procedia PDF Downloads 142

1788 Accelerated Evaluation of Structural Reliability under Tsunami Loading
Authors: Sai Hung Cheung, Zhe Shao
Abstract:
Quantifying the risk to structural dynamic systems from earthquake-induced tsunamis is of great interest in view of the recent tsunamis in Padang (2004) and Tohoku (2011), which brought huge losses of lives and property. Despite continuous advances in computational simulation of tsunamis and in wave-structure interaction modeling, it remains computationally challenging to evaluate the reliability of a structural dynamic system when uncertainties in the system and its modeling are taken into account. Failure of the structure in a tsunami-wave-structure system is defined as any response quantity of the system exceeding a specified threshold while the structure is subjected to dynamic wave impact from an earthquake-induced tsunami. In this paper, an approach is proposed based on a novel integration of a recently proposed moving least squares response surface approach for stochastic sampling with the Subset Simulation algorithm. The effectiveness of the proposed approach is discussed by comparing its results with those obtained from the Subset Simulation algorithm without the response surface approach.
Keywords: response surface, stochastic simulation, structural reliability, tsunami, risk
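The Subset Simulation algorithm referenced above expresses a small failure probability as a product of larger conditional probabilities estimated level by level with MCMC. A minimal one-dimensional sketch, assuming a standard normal input and a toy limit state (the tsunami-structure model and the moving least squares response surface are omitted):

```python
import math
import random

def subset_simulation(g, n=1000, p0=0.1, max_levels=10, step=1.0, seed=0):
    """Estimate P(g(X) > 0) for X ~ N(0, 1) by Subset Simulation.

    Failure is the rare event {g(x) > 0}; intermediate levels {g(x) >= b}
    are chosen so each conditional probability is roughly p0, and samples
    at each level are grown with a modified Metropolis chain.
    """
    rng = random.Random(seed)
    samples = [rng.gauss(0, 1) for _ in range(n)]
    p = 1.0
    for _ in range(max_levels):
        vals = sorted((g(x) for x in samples), reverse=True)
        nk = max(1, int(p0 * len(vals)))
        b = vals[nk - 1]                      # intermediate threshold
        if b > 0:                             # failure level reached
            return p * sum(v > 0 for v in vals) / len(vals)
        p *= p0
        seeds = [x for x in samples if g(x) >= b]
        samples = []
        for s in seeds:                       # one Metropolis chain per seed
            x = s
            for _ in range(max(1, n // len(seeds))):
                cand = x + rng.gauss(0, step)
                # standard-normal Metropolis ratio, restricted to {g >= b}
                if rng.random() < math.exp((x * x - cand * cand) / 2) and g(cand) >= b:
                    x = cand
                samples.append(x)
    return p

# toy limit state: failure when a standard normal exceeds 3 (true p ~ 1.35e-3)
pf = subset_simulation(lambda x: x - 3.0)
```

With only n samples per level, probabilities of order p0^k become reachable after k levels, which is why the method is far cheaper than crude Monte Carlo for rare failures.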
Procedia PDF Downloads 675

1787 Fault-Tolerant Configuration for T-Type Nested Neutral Point Clamped Converter
Authors: S. Masoud Barakati, Mohsen Rahmani Haredasht
Abstract:
Recently, the use of the T-type nested neutral point clamped (T-NNPC) converter has increased in medium-voltage applications. However, the reliability and continuous operation of the T-NNPC converter architecture are put at risk by its semiconductor switches, which are prone to open-circuit faults. As a result, fault-tolerant converters are required to improve the system's reliability and continuous functioning. The primary goal of this study is to provide a fault-tolerant T-NNPC converter configuration. The proposed design uses the cold-reserve approach: a redundant phase is provided that replaces a faulty phase once the fault is diagnosed. The suggested fault-tolerant configuration can be easily implemented in practical applications thanks to a simple PWM control mechanism. Performance evaluation of the proposed configuration under different scenarios in the MATLAB-Simulink environment demonstrates its efficiency.
Keywords: T-type nested neutral point clamped converter, reliability, continuous operation, open-circuit faults, fault-tolerant converters
Procedia PDF Downloads 120

1786 The Appropriate Number of Test Items That a Classroom-Based Reading Assessment Should Include: A Generalizability Analysis
Authors: Jui-Teng Liao
Abstract:
The selected-response (SR) format is commonly adopted to assess academic reading in both formal and informal testing (i.e., standardized assessment and classroom assessment) because of its strengths in content validity, construct validity, and scoring objectivity and efficiency. When developing a second language (L2) reading test, researchers indicate that the longer the test is (e.g., the more test items it has), the higher the reliability and validity it is likely to produce. However, previous studies have not provided specific guidelines regarding the optimal length of a test or the most suitable number of test items or reading passages. Additionally, reading tests often include different question types (e.g., factual, vocabulary, inferential) that require varying degrees of reading comprehension and cognitive processing. It is therefore important to investigate the impact of question type on the number of items in relation to the score reliability of L2 reading tests. Given the popularity of the SR question format and the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which this question format can reliably measure learners' L2 reading comprehension. The present study therefore adopted generalizability (G) theory, a comprehensive statistical framework for estimating the score reliability of tests and validating their results, to investigate the score reliability of the SR format in L2 reading tests, focusing on how many test items a reading test should include. Specifically, this study investigated the interaction between question types and the number of items, providing insights into the appropriate item count for different types of questions. Data were collected from 108 English-as-a-second-language students who completed an English reading test comprising factual, vocabulary, and inferential questions in the SR format.
The computer program mGENOVA was used to analyze the data with multivariate designs (i.e., scenarios). The G theory analyses indicated that the number of test items had a critical impact on the score reliability of an L2 reading test. Furthermore, the findings revealed that different types of reading questions required different numbers of test items for reliable assessment of learners' L2 reading proficiency. Implications for teaching practice and classroom-based assessment are discussed.
Keywords: second language reading assessment, validity and reliability, generalizability theory, academic reading, question format
Procedia PDF Downloads 87

1785 Adapting Depression and Anxiety Questionnaire for Children into Turkish: Reliability and Validity Studies
Authors: İsmail Seçer
Abstract:
Although depression and anxiety disorders are considered adult disorders, evidence obtained from several recent studies shows that their roots go back to childhood. Analyzing depressive symptoms and anxiety disorders observed in childhood is therefore an important necessity. Accordingly, the purpose of this study is to adapt the anxiety and depression questionnaire for children into Turkish culture and to analyze its psychometric characteristics on clinical and nonclinical samples separately. The study is a descriptive survey conducted on two sample groups: a clinical sample of 205 individuals and a nonclinical sample of 630 individuals. The anxiety and depression questionnaire for children, the anxiety sensitivity index, and the obsessive-compulsive disorder questionnaire for children were used. Experts' opinions were sought to establish the language validity of the scale. Confirmatory factor analysis and criterion-related validity were used to analyze construct validity, and internal consistency and split-half reliability analyses were carried out for reliability. After language validity was established in line with the experts' opinions, the construct validity of the scale was analyzed with confirmatory factor analysis, and the two-factor structure of the scale was found to show good model fit on both the clinical and nonclinical samples. For criterion-related validity, positive and significant relations were found between the anxiety and depression questionnaire for children and both anxiety sensitivity and obsessive-compulsive disorder. The internal consistency and split-half reliability analyses also show that the scale has adequate reliability.
It can be said that the depression and anxiety questionnaire for children, adapted to assess depressive symptoms and anxiety disorders observed in childhood, has adequate reliability and validity, and it can be used in future studies. It is recommended that the psychometric characteristics of the scale be analyzed and reported on new samples in future studies.
Keywords: scale adapting, construct validity, confirmatory factor analysis, childhood depression
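The internal consistency and split-half reliability statistics reported above are straightforward to compute. A plain-Python sketch (no statistics package), assuming item scores are arranged as one list per item over the same respondents:

```python
def cronbach_alpha(items):
    # items: one list of scores per item, aligned across the same respondents
    k, n = len(items), len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

def split_half(items):
    # correlate odd- and even-numbered item halves, then apply the
    # Spearman-Brown correction to estimate full-length reliability
    n = len(items[0])
    odd = [sum(items[j][i] for j in range(0, len(items), 2)) for i in range(n)]
    even = [sum(items[j][i] for j in range(1, len(items), 2)) for i in range(n)]
    mo, me = sum(odd) / n, sum(even) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(odd, even))
    so = sum((o - mo) ** 2 for o in odd) ** 0.5
    se = sum((e - me) ** 2 for e in even) ** 0.5
    r = cov / (so * se)
    return 2 * r / (1 + r)
```

With perfectly parallel items both statistics reach 1.0; real scales such as the one above land below that, and values around .8-.9 are usually read as adequate.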
Procedia PDF Downloads 334

1784 The Integrated Methodological Development of Reliability, Risk and Condition-Based Maintenance in the Improvement of the Thermal Power Plant Availability
Authors: Henry Pariaman, Iwa Garniwa, Isti Surjandari, Bambang Sugiarto
Abstract:
The availability of a complex system such as a thermal power plant is strongly influenced by the reliability of spare parts and by maintenance management policies. Reliability-centered maintenance (RCM) is an established analysis technique and the main reference for maintenance planning. This method considers the consequences of failure, but it does not address the further risk of downtime associated with failures, loss of production, or high maintenance costs. Risk-based maintenance (RBM) provides supporting strategies to minimize the risks posed by failure and to derive maintenance tasks with regard to cost effectiveness. Condition-based maintenance (CBM), meanwhile, relies on condition monitoring, which allows maintenance or other actions to be planned and scheduled so as to avert failure before time-based maintenance would intervene. Implementing RCM, RBM, or CBM alone, or RCM combined with RBM or with CBM, is common practice in thermal power plants; implementing the three techniques together, as integrated maintenance, will increase availability compared with using the techniques individually or in pairs. This study uses reliability-, risk- and condition-based maintenance in an integrated manner to increase the availability of thermal power plants. The method generates a Maintenance Priority Index (MPI), computed as the Risk Priority Number (RPN) multiplied by the Risk Index (RI), and a Failure Defense Task (FDT), which can yield condition monitoring and assessment tasks in addition to maintenance tasks. Both the MPI and the FDT, obtained from the development of a functional tree, failure mode and effects analysis, fault-tree analysis, and risk analysis (risk assessment and risk evaluation), were then used to develop and implement maintenance plans and schedules, monitor and assess conditions, and ultimately perform availability analysis.
The results of this study indicate that reliability-, risk- and condition-based maintenance methods, applied in an integrated manner, can increase the availability of thermal power plants.
Keywords: integrated maintenance techniques, availability, thermal power plant, MPI, FDT
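The MPI described above is stated as RPN multiplied by RI. A small sketch of the prioritization step, assuming the conventional FMEA factors behind the RPN (severity, occurrence, detection, each on a 1-10 scale); the component names and numbers are purely illustrative:

```python
def maintenance_priority(components):
    """Rank components by MPI = RPN * RI.

    components: name -> (severity, occurrence, detection, risk_index),
    where RPN = severity * occurrence * detection as in classical FMEA.
    Returns names from highest to lowest priority, plus the MPI values.
    """
    mpi = {name: s * o * d * ri for name, (s, o, d, ri) in components.items()}
    order = sorted(mpi, key=mpi.get, reverse=True)
    return order, mpi

# illustrative thermal power plant components (numbers made up)
order, mpi = maintenance_priority({
    "turbine bearing":  (9, 5, 5, 1.0),
    "boiler feed pump": (8, 6, 4, 0.9),
    "cooling fan":      (4, 3, 2, 0.5),
})
```

The ranked list is what feeds the maintenance plan and schedule; items below a chosen MPI cutoff would instead receive the monitoring and assessment tasks generated by the FDT.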
Procedia PDF Downloads 794

1783 The Effect Study of Meditation Music in the Elderly
Authors: Metee Pigultong
Abstract:
The research aims at 1) the composition of meditation music and 2) the study of meditation-time reliability. The population comprised older adults who practice meditation at Thepnimitra Temple, Don Mueang District, Bangkok; the sample group was five volunteers aged 60 and over. A time-series design was used to conduct the research. The research instruments were 1) meditation music and 2) a brain-wave recording form. The results found that 1) the music combines binaural beats suitable for the meditation of older persons and has the following features: a) the tempo of the meditation music is no more than 60 beats per minute; b) the arrangement uses only 4-5 musical instruments; c) the arrangement considers the nature of each instrument; d) digital instruments are suitable for the composition; e) the pure-tone sound combined in the music must generate a brain frequency at the level of 10 Hz. 2) After a three-week brain-training procedure, the researcher performed three reliability tests using Cronbach's alpha; the meditation reliability was .475, indicating a moderate level of concentration.
Keywords: binaural beats, music therapy, meditation, older person, Buddhist meditation practitioners
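The binaural-beat construction mentioned in finding 1(e) amounts to presenting each ear a pure tone offset by the target brain frequency. A sketch, assuming a 200 Hz carrier (an arbitrary choice) and the 10 Hz alpha-band offset from the abstract:

```python
import math

def binaural_pair(base_hz=200.0, beat_hz=10.0, seconds=1.0, rate=8000):
    # left and right ear receive pure tones differing by beat_hz; when the
    # tones are delivered to separate ears, the brain perceives the 10 Hz
    # difference as a "binaural beat" in the alpha band
    n = int(seconds * rate)
    left = [math.sin(2 * math.pi * base_hz * t / rate) for t in range(n)]
    right = [math.sin(2 * math.pi * (base_hz + beat_hz) * t / rate)
             for t in range(n)]
    return left, right

left, right = binaural_pair()
```

In practice the two channels would be written to a stereo file and mixed under the 60-beats-per-minute musical bed the abstract describes, rather than played as bare tones.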
Procedia PDF Downloads 191

1782 Evaluation of Reliability Flood Control System Based on Uncertainty of Flood Discharge, Case Study Wulan River, Central Java, Indonesia
Authors: Anik Sarminingsih, Krishna V. Pradana
Abstract:
The failure of a flood control system can be caused by various factors, such as not considering the uncertainty of the design flood, so that the capacity of the flood control system is exceeded. Uncertainty is recognized as a serious issue in hydrological studies; it enters hydrological analysis through many factors, from the reading of water-level and rainfall data to the selection of the method of analysis. In hydrological modeling, the selection of models and parameters appropriate to watershed conditions should be evaluated against a hydraulic model of the river as a drainage channel. River cross-section capacity is the first line of defense in assessing the reliability of a flood control system, and the reliability of river capacity indicates the potential magnitude of flood risk. The case study in this research is the Wulan River in Central Java. The river floods almost every year despite flood control efforts such as levees, a floodway, and a diversion; the flood-affected areas cover several sub-districts, mainly in Kabupaten Kudus and Kabupaten Demak. The first step is a frequency analysis of discharge observations from the Klambu weir, for which time-series data from 1951-2013 are available. The frequency analysis is performed with several frequency distributions, such as the Gumbel, Normal, Log-Normal, Pearson Type III, and Log-Pearson distributions. The standard deviations of the fitted models overlap, so the maximum flood discharge for a lower return period may exceed the average discharge for a larger return period. The next step is a hydraulic analysis to evaluate the reliability of the river capacity against the flood discharges resulting from the several methods. The design flood discharge for the flood control system is selected as the result of the method closest to the bankfull capacity of the river.
Keywords: design flood, hydrological model, reliability, uncertainty, Wulan River
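Of the candidate distributions listed above, the Gumbel (EV1) fit has a convenient closed form for return-period quantiles. A sketch using the method of moments; the study's actual fitting procedure and data are not reproduced, and the flow values below are made up:

```python
import math

def gumbel_quantiles(flows, return_periods):
    """Fit a Gumbel (EV1) distribution by the method of moments and return
    the design discharge x_T = mu - beta * ln(-ln(1 - 1/T)) for each T."""
    n = len(flows)
    mean = sum(flows) / n
    sd = (sum((q - mean) ** 2 for q in flows) / (n - 1)) ** 0.5
    beta = sd * math.sqrt(6) / math.pi           # scale parameter
    mu = mean - 0.5772 * beta                    # location (Euler's constant)
    return {T: mu - beta * math.log(-math.log(1 - 1 / T))
            for T in return_periods}

# made-up annual maximum discharges (m^3/s), not the Klambu weir record
flows = [100, 120, 150, 90, 200, 170, 130, 110, 160, 140]
design = gumbel_quantiles(flows, [2, 10, 25, 50, 100])
```

Comparing such quantile sets across distributions, together with their standard errors, is where the overlap discussed in the abstract appears.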
Procedia PDF Downloads 294

1781 Investigation into Micro-Grids with Renewable Energy Sources for Use as High Reliability Electrical Power Supply in a Nuclear Facility
Authors: Gerard R. Lekhema, Willie A. Cronje, Ian Korir
Abstract:
The objective of this research is to investigate the use of a micro-grid system to improve the reliability and availability of emergency electrical power in a nuclear facility. A nuclear facility is a safety-critical application that requires reliable electrical power for safe startup, operation, and normal or emergency shutdown. The majority of nuclear facilities around the world use diesel generators as the emergency power supply during loss-of-offsite-power events. This study proposes a micro-grid system with distributed energy sources and energy storage for use as the emergency power supply. The systems analyzed include renewable energy sources, a decay-heat recovery system, and a large-scale energy storage system. The configuration of the micro-grid system follows the guidelines of nuclear safety standards and requirements. The investigation results presented include a performance analysis of the micro-grid system in terms of reliability and availability.
Keywords: emergency power supply, micro-grid, nuclear facility, renewable energy sources
Procedia PDF Downloads 394

1780 Residual Life Estimation of K-out-of-N Cold Standby System
Authors: Qian Zhao, Shi-Qi Liu, Bo Guo, Zhi-Jun Cheng, Xiao-Yue Wu
Abstract:
Cold standby redundancy is an effective mechanism for improving system reliability and is widely used in industrial engineering. However, because of the complexity of the reliability structure, there is little literature on the residual life of cold standby systems consisting of complex components. In this paper, a simulation method is presented to predict the residual life of a k-out-of-n cold standby system. In practice, the failure information of a system is either unknown, partly known, or completely known, and the proposed method is designed to deal with each of the three scenarios; the differences between the procedures are analyzed. Finally, numerical examples are used to validate the proposed simulation method.
Keywords: cold standby system, k-out-of-n, residual life, simulation sampling
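A Monte Carlo sketch of residual-life prediction for the simplest special case: a 1-out-of-n cold standby system with exponential components and perfect switching. The paper's general k-out-of-n procedure and its three failure-information scenarios are not reproduced here.

```python
import random

def standby_lifetime(n, rate, rng):
    # 1-out-of-n cold standby: spares do not age while waiting, so the
    # system lifetime is the sum of n exponential component lifetimes
    return sum(rng.expovariate(rate) for _ in range(n))

def mean_residual_life(n, rate, t_now, reps=20000, seed=1):
    # Monte Carlo mean residual life, conditioned on survival to t_now
    rng = random.Random(seed)
    residuals = []
    while len(residuals) < reps:
        life = standby_lifetime(n, rate, rng)
        if life > t_now:
            residuals.append(life - t_now)
    return sum(residuals) / reps
```

For n = 3 and rate 1 the system lifetime is Erlang(3) with mean 3, so the residual life at time 0 should recover that value; because the Erlang has an increasing failure rate, the mean residual life shrinks as the system ages.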
Procedia PDF Downloads 401

1779 Bayesian Reliability of Weibull Regression with Type-I Censored Data
Authors: Al Omari Moahmmed Ahmed
Abstract:
In the Bayesian framework, we developed an approach that uses a non-informative prior with covariates and applies the Gauss quadrature method to estimate the covariate parameters and the reliability function of the Weibull regression distribution with Type-I censored data. The maximum likelihood estimators are not available in closed form, although they can be obtained numerically by the Newton-Raphson method. The comparison criterion is the MSE, and the performance of the estimators is assessed by simulation over various sample sizes and several specific values of the shape parameter. The results show that the Bayesian approach with a non-informative prior is better than the maximum likelihood estimator.
Keywords: non-informative prior, Bayesian method, type-I censoring, Gauss quadrature
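Although the paper's contribution is Bayesian (via Gauss quadrature), the maximum likelihood side it compares against can be sketched directly: with Type-I censoring the Weibull shape solves a single score equation and the scale then follows in closed form. Bisection is used here in place of Newton-Raphson, covariates are omitted, and the data below are illustrative:

```python
import math

def weibull_mle_censored(times, events, tol=1e-8):
    """MLE of Weibull (shape, scale) with Type-I censoring.

    times: observed times (failure time, or the censoring time tau);
    events: 1 if a failure was observed, 0 if the unit was censored.
    """
    r = sum(events)                       # number of observed failures
    mean_log_fail = sum(math.log(t) for t, e in zip(times, events) if e) / r

    def score(k):
        # derivative of the profile log-likelihood in the shape k;
        # increasing in k, so bisection brackets the root
        num = sum(t ** k * math.log(t) for t in times)
        den = sum(t ** k for t in times)
        return num / den - 1 / k - mean_log_fail

    lo, hi = 1e-3, 100.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if score(mid) > 0:
            hi = mid
        else:
            lo = mid
    shape = (lo + hi) / 2
    scale = (sum(t ** shape for t in times) / r) ** (1 / shape)
    return shape, scale

def reliability(t, shape, scale):
    # Weibull reliability (survival) function
    return math.exp(-((t / scale) ** shape))
```

At t equal to the scale parameter the reliability is exp(-1) by construction, which gives a quick sanity check on any fit.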
Procedia PDF Downloads 503

1778 Towards Reliable Mobile Cloud Computing
Authors: Khaled Darwish, Islam El Madahh, Hoda Mohamed, Hadia El Hennawy
Abstract:
Cloud computing has been one of the fastest growing parts of the IT industry, mainly in the context of the future of the web, where computing, communication, and storage are the main services provided to Internet users. Mobile Cloud Computing (MCC) is gaining momentum; it extends cloud computing functions, services, and results to the world of future mobile applications and enables the delivery of a large variety of cloud applications to billions of smartphones and wearable devices. This paper addresses reliability for MCC, that is, the ability of a system or component to function correctly under stated conditions for a specified period of time, in order to estimate and manage high levels of lifetime engineering uncertainty and risk of failure. The assessment procedure consists of determining the Mean Time Between Failures (MTBF), Mean Time To Failure (MTTF), and availability percentages for the main components of both cloud computing and MCC structures, applied to a single-node OpenStack installation, to analyze performance under different settings governing the behavior of participants. Additionally, we present several factors that have a significant impact on overall cloud system reliability and should be taken into account in order to deliver highly available cloud computing services to mobile consumers.
Keywords: cloud computing, mobile cloud computing, reliability, availability, OpenStack
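The MTBF, MTTR, and availability quantities used in the assessment procedure follow from simple definitions, and for a stack of components that must all be up (compute, network, storage), availabilities in series multiply. A sketch with illustrative, not measured, figures:

```python
def mtbf(uptimes):
    # Mean Time Between Failures from a list of observed up intervals
    return sum(uptimes) / len(uptimes)

def mttr(repair_times):
    # Mean Time To Repair from a list of observed repair durations
    return sum(repair_times) / len(repair_times)

def availability(uptimes, repair_times):
    # steady-state availability = MTBF / (MTBF + MTTR)
    m, r = mtbf(uptimes), mttr(repair_times)
    return m / (m + r)

def series_availability(component_availabilities):
    # a service is only up when every component in the chain is up,
    # so availabilities multiply for components in series
    a = 1.0
    for c in component_availabilities:
        a *= c
    return a
```

The series rule is why a chain of individually "three nines" components can fall well short of three nines end to end, which motivates the redundancy factors the paper discusses.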
Procedia PDF Downloads 397

1777 The Relationship between Conceptual Organizational Culture and the Level of Tolerance in Employees
Authors: M. Sadoughi, R. Ehsani
Abstract:
The aim of the present study is to examine the relationship between conceptual organizational culture and the level of tolerance in employees of Islamic Azad University of Shahre Ghods. This research is correlational and analytic-descriptive, with a sample of 144 individuals. The 24-item standard organizational culture questionnaire by Cameron and Quinn was used; it has six criteria, each comprising four items, with each item indicating one cultural dimension. Its reliability coefficient, estimated with Cronbach's alpha, was 0.91. The 25-item tolerance questionnaire by Connor and Davidson was also used. This questionnaire is in five-point Likert form, has seven criteria, and is designed to measure the power of coping with pressure and threat. It has the needed content validity, and its reliability coefficient, estimated with Cronbach's alpha, is 0.87. Data were analyzed using the Pearson correlation coefficient and multivariable regression. The results showed that, among the dimensions of organizational culture, three dimensions (family, adhocracy, bureaucracy) have a positive significant relationship with tolerance, the market dimension has a negative significant relationship with tolerance, and the components of organizational culture have the power to predict and explain tolerance. In this explanation, the family component is the most effective and the best predictor of tolerance.
Keywords: adhocracy, bureaucracy, organizational culture, tolerance
Procedia PDF Downloads 449

1776 Reliability Enhancement by Parameter Design in Ferrite Magnet Process
Abstract:
Ferrite magnets are widely used in automotive components such as motors and alternators. Magnets used inside these components must be of good quality to ensure a high level of performance. The purpose of this study is to design input parameters that optimize the ferrite magnet production process so as to ensure the quality and reliability of the manufactured products. Design of Experiments (DOE) and Statistical Process Control (SPC) are used as mutual supplements to optimize the process. DOE and SPC are quality tools used in industry to monitor and improve manufacturing process conditions; in practice they are used to keep the process on target and within the limits of natural variation. A mixed Taguchi method is utilized for optimization as part of the DOE analysis, and SPC with proportion data is applied to assess the output parameters and determine the optimal operating conditions. An example case involving the monitoring and optimization of a ferrite magnet process is presented to demonstrate the effectiveness of this approach. Through these tools, reliable magnets can be produced by following the step-by-step procedure of the proposed framework. One of the main contributions of this study is the production of crack-free magnets through the proposed parameter design.
Keywords: ferrite magnet, crack, reliability, process optimization, Taguchi method
Procedia PDF Downloads 517

1775 An Exploratory Study of Reliability of Ranking vs. Rating in Peer Assessment
Authors: Yang Song, Yifan Guo, Edward F. Gehringer
Abstract:
Fifty years of research has found great potential for peer assessment as a pedagogical approach. With peer assessment, not only do students receive more copious assessments; they also learn to become assessors. In recent decades, more educational peer assessments have been facilitated by online systems. Those systems are designed differently to suit different class settings and student groups, but they basically fall into two categories: rating-based and ranking-based. Rating-based systems ask assessors to rate artifacts one by one against a review rubric; ranking-based systems ask assessors to review a set of artifacts and rank each of them. Though both categories have many systems and a large number of users, there has been no comprehensive comparison of which design leads to higher reliability. In this paper, we designed algorithms to evaluate assessors' reliability based on their ratings/rankings against the global ranks of the artifacts they reviewed. These algorithms suit data from both rating-based and ranking-based peer assessment systems. The experiments were based on more than 15,000 peer assessments from multiple peer assessment systems. We found that assessors in ranking-based peer assessments are at least 10% more reliable than assessors in rating-based peer assessments. Further analysis also showed that assessors in ranking-based assessments tend to assess the more differentiable artifacts correctly, while no such pattern appears for rating-based assessors.
Keywords: peer assessment, peer rating, peer ranking, reliability
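One simple way to score an assessor against the global ranks, applicable to both rating and ranking data as the abstract requires, is a rank correlation; the paper's own algorithms are not reproduced here. A sketch using Spearman's rho with average ranks for ties:

```python
def rank(xs):
    # average ranks, 1 = largest value; ties share the mean of their ranks
    order = sorted(range(len(xs)), key=lambda i: -xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(scores, global_scores):
    # Spearman rho between an assessor's scores (ratings or rankings)
    # and the global consensus scores for the same artifacts
    ra, rb = rank(scores), rank(global_scores)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(ra, rb))
    sa = sum((a - ma) ** 2 for a in ra) ** 0.5
    sb = sum((b - mb) ** 2 for b in rb) ** 0.5
    return cov / (sa * sb)
```

Because both ratings and rankings reduce to ranks before correlating, the same reliability score can be computed for assessors from either category of system.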
Procedia PDF Downloads 437

1774 A Study of Environmental Test Sequences for Electrical Units
Authors: Jung Ho Yang, Yong Soo Kim
Abstract:
Electrical units are operated by electrical and electronic components, and an environmental test sequence is useful for testing them to reduce reliability issues. This study introduces test-sequence guidelines based on the relevant principles and considerations for electronic testing according to the international standard IEC 60068-1 and the United States military standard MIL-STD-810G. Test sequences are then proposed based on the descriptions of each test. Finally, the General Motors (GM) specification GMW3172 is interpreted and compared with IEC 60068-1 and MIL-STD-810G.
Keywords: reliability, environmental test sequence, electrical units, IEC 60068-1, MIL-STD-810G
Procedia PDF Downloads 504

1773 Modeling the Reliability of a Fuel Cell and the Influence of Mechanical Aspects on the Production of Electrical Energy
Authors: Raed Kouta
Abstract:
A fuel cell is a multi-physical system: its electrical performance depends on chemical, electrochemical, fluidic, and mechanical parameters. Many studies focus on the physical and chemical aspects; our study contributes to evaluating the influence of mechanical aspects on the performance of a fuel cell. This study is carried out as part of a reliability approach. Reliability modeling makes it possible to consider the uncertainties of the input parameters and to model the output parameters probabilistically. The fuel cell studied is the one often used in land, sea, or air transport: the low-temperature Proton Exchange Membrane Fuel Cell (PEMFC), which can provide the required power level. One of the main scientific and technical challenges in mastering the design and production of a fuel cell is knowing its behavior in its actual operating environment. The study highlights the influence on electrical energy production of mechanical design and manufacturing parameters and their uncertainties (Young's modulus, GDL porosity, permeability, etc.); the influence of the geometry of the bipolar plates is also considered. An experimental design is proposed with two types of materials and three geometric shapes at three joining pressures. Other experimental designs are proposed for studying the influence of uncertainties in the mechanical parameters on cell performance, covering mechanical (static and dynamic) and thermal loads (clamping compression; vibrations from road rolling and from tests on a vibration-climatic bench; etc.). The study is also carried out with an experimental scheme on a fuel cell system under vibration loads recorded on a vehicle test track, with three temperatures and three expected performance levels. The work will improve the coupling between mechanical, physical, and chemical phenomena.
Keywords: fuel cell, mechanics, reliability, uncertainties
Procedia PDF Downloads 188

1772 Investigating the Behavior of Water Shortage Indices for Performance Evaluation of a Water Resources System
Authors: Frederick N. F. Chou, Nguyen Thi Thuy Linh
Abstract:
The impact of water shortages has grown increasingly severe as a consequence of population growth, urbanization, economic development, and climate change, and the need for more reliable water supply systems is urgent as regional living standards rise. In this study, a shortage index capable of describing multiple aspects of shortage (frequency, magnitude, and duration) is adopted to characterize shortage situations more accurately, and the values of the index are determined to cope with the increasing need for reliability. There are four reservoirs in series on the Be River of the Dong Nai River Basin in southern Vietnam; the primary purpose of the three upstream reservoirs is hydropower generation, while that of the fourth is water supply. A compromise between hydropower generation and water supply can be negotiated for these four reservoirs to reduce the severity of water shortages. A generalized water allocation model was applied to simulate the water supply and hydropower generation of various management alternatives, and the system's reliability was evaluated using the adopted multi-aspect shortage indices. Modifying water resources management policies using data-based indices can improve the reliability of the water supply.
Keywords: cascade reservoirs, hydropower, shortage index, water supply
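A multi-aspect shortage summary of the kind described (frequency, magnitude, duration) can be sketched as follows. The quadratic shortage index used here, SI = 100/N · Σ(deficit/demand)², is a common choice that penalizes large single-period deficits more than many small ones; it is assumed for illustration rather than taken from the paper.

```python
def shortage_metrics(supply, demand):
    """Summarize shortages by frequency, magnitude, and duration.

    supply, demand: per-period series (e.g., monthly volumes).
    Returns (fraction of periods in shortage, total deficit as a fraction
    of total demand, longest consecutive shortage run, quadratic SI).
    """
    n = len(demand)
    deficits = [max(0.0, d - s) for s, d in zip(supply, demand)]
    freq = sum(x > 0 for x in deficits) / n
    magnitude = sum(deficits) / sum(demand)
    longest = run = 0
    for x in deficits:
        run = run + 1 if x > 0 else 0
        longest = max(longest, run)
    si = 100.0 / n * sum((x / d) ** 2 for x, d in zip(deficits, demand))
    return freq, magnitude, longest, si
```

Two supply series with the same total deficit can differ sharply in SI and in the longest run, which is exactly the multi-aspect distinction the abstract argues a single-number index misses.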
Procedia PDF Downloads 269
1771 Translation and Validation of the Pediatric Quality of Life Inventory for Children in Pakistani Context
Authors: Nazia Mustafa, Aneela Maqsood
Abstract:
The Pediatric Quality of Life Inventory is the most widely used instrument for assessing children's and adolescents' health-related quality of life and has shown excellent markers of reliability and validity. The current study was carried out with the objectives of translation and cross-language validation, along with determination of the factor structure and psychometric properties of the Urdu version. It was administered to 154 primary school children aged 10 to 12 years (M = 10.86, S.D. = 0.62), including boys (n = 92) and girls (n = 62). The sample was recruited from two randomly selected schools in the Rawalpindi district of Pakistan. Results of the pilot phase revealed that the instrument had good reliability (Urdu version α = 0.798; English version α = 0.795) as well as a good test-retest correlation coefficient over a period of 15 days (r = 0.85). Exploratory factor analysis (EFA) yielded a three-factor structure - Social/School Functioning (k = 8), Psychological Functioning (k = 7), and Physical Functioning (k = 6) - considered more suitable for our sample than the original four factors. Bartlett's test of sphericity showed inter-correlation between variables. However, factor loadings for items 22 and 23 of the School Functioning subscale were problematic. After their removal, the model fit the data, with a Cronbach's alpha reliability coefficient of 0.87 for the scale (k = 21) and of 0.75, 0.77, and 0.73 for the Social/School, Psychological, and Physical subscales, respectively. These results support the feasibility and reliability of the Urdu version of the Pediatric Quality of Life Inventory as an effective tool for measuring quality of life in the Pakistani pediatric population.
Keywords: primary school children, pediatric quality of life, exploratory factor analysis, Pakistan
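The internal-consistency figures quoted above (α = 0.798, 0.87, etc.) are Cronbach's alpha values, which are straightforward to compute from an item-by-respondent score matrix. A minimal sketch with toy data, not the study's responses:

```python
def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, each holding that item's
    score for every respondent. alpha = k/(k-1) * (1 - sum(var_i)/var_total)."""
    k = len(item_scores)
    n = len(item_scores[0])
    def var(xs):                           # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = sum(var(item) for item in item_scores)
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# Three items answered by four respondents (toy data):
items = [[3, 4, 3, 5], [2, 4, 3, 5], [3, 5, 4, 5]]
print(round(cronbach_alpha(items), 3))  # -> 0.957
```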
Procedia PDF Downloads 134
1770 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods
Authors: Sohyoung Won, Heebal Kim, Dajeong Lim
Abstract:
Genomic prediction is an effective way to measure the breeding abilities of livestock based on genomic estimated breeding values, statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes - clusters of linked single nucleotide polymorphisms (SNPs) - as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability that a quantitative trait locus is in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes in genomic prediction efficiently, optimal ways to define them are needed. In this study, 770K SNP chip data was collected from a Hanwoo (Korean cattle) population consisting of 2506 cattle. Haplotypes were first defined in three different ways using the 770K SNP chip data: based on 1) the length of haplotypes (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to have comparable sizes; in each method, haplotypes defined to have an average of 5, 10, 20, or 50 SNPs were tested respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented for testing the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets on genomic prediction. Carcass weight was used as the phenotype for testing. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP, with few differences in reliability between the defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal markers for genomic prediction.
When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational cost and allow efficient prediction. Finding optimal ways to define haplotypes and using the haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium
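Of the three haplotype-defining methods, the fixed-number-of-SNPs variant (method 2) is the easiest to sketch: chop the SNP sequence into windows and treat each distinct phased string within a window as a haplotype allele. The helper names and toy genotype strings below are assumptions for illustration, not the Hanwoo pipeline.

```python
from collections import defaultdict

def define_haplotype_blocks(n_snps, snps_per_block):
    """Method 2 in the abstract: consecutive blocks of a fixed number of SNPs."""
    return [range(start, min(start + snps_per_block, n_snps))
            for start in range(0, n_snps, snps_per_block)]

def haplotype_alleles(phased_haplotypes, blocks):
    """Map each block index to the distinct allele strings observed in it.
    phased_haplotypes: one 0/1 string per chromosome copy."""
    alleles = defaultdict(set)
    for hap in phased_haplotypes:
        for b, idx in enumerate(blocks):
            alleles[b].add("".join(hap[i] for i in idx))
    return alleles

# Four phased chromosome copies over 10 SNPs, 5 SNPs per block:
haps = ["0101100110", "0101101101", "1100100110", "0101101101"]
blocks = define_haplotype_blocks(10, 5)
alleles = haplotype_alleles(haps, blocks)
print({b: sorted(a) for b, a in alleles.items()})
```

The per-block allele counts are what drive the comparison in the abstract: fewer distinct alleles means fewer predictor variables in the modified GBLUP model.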
Procedia PDF Downloads 141
1769 Evaluation of Spatial Correlation Length and Karhunen-Loeve Expansion Terms for Predicting Reliability Level of Long-Term Settlement in Soft Soils
Authors: Mehrnaz Alibeikloo, Hadi Khabbaz, Behzad Fatahi
Abstract:
The spectral random field method is one of the most widely used methods for obtaining reliable and accurate results in geotechnical problems involving material variability. The Karhunen-Loeve (K-L) expansion method was applied to perform random field discretization of cross-correlated creep parameters. The K-L expansion is based on the eigenfunctions and eigenvalues of the covariance function, obtained by solving a kernel integral equation. In this paper, the accuracy of the Karhunen-Loeve expansion was investigated for predicting the long-term settlement of soft soils, adopting an elastic visco-plastic creep model. For this purpose, a parametric study was carried out to evaluate the effect of the number of K-L expansion terms and the spatial correlation length on the reliability of the results. The results indicate that small values of spatial correlation length require more K-L expansion terms. Moreover, as the spatial correlation length increases, the coefficient of variation (COV) of the creep settlement increases, giving a more conservative and safer prediction.
Keywords: Karhunen-Loeve expansion, long-term settlement, reliability analysis, spatial correlation length
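The discrete form of the K-L expansion, and the abstract's observation that a short correlation length needs more expansion terms, can be sketched on a 1-D grid with an exponential covariance kernel. The grid, correlation lengths, and term count below are illustrative choices, not the paper's settings.

```python
import numpy as np

def kl_expansion(grid, corr_length, sigma, n_terms, rng):
    """Discrete Karhunen-Loeve expansion of a 1-D Gaussian random field with
    exponential covariance C(x, x') = sigma^2 * exp(-|x - x'| / l).
    Returns one truncated realization and the variance fraction captured."""
    dist = np.abs(grid[:, None] - grid[None, :])
    cov = sigma**2 * np.exp(-dist / corr_length)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    xi = rng.standard_normal(n_terms)               # independent N(0,1) weights
    field = eigvecs[:, :n_terms] @ (np.sqrt(eigvals[:n_terms]) * xi)
    captured = eigvals[:n_terms].sum() / eigvals.sum()
    return field, captured

grid = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(0)
# Same truncation, two correlation lengths: the short one captures less variance.
_, frac_short = kl_expansion(grid, corr_length=0.5, sigma=1.0, n_terms=10, rng=rng)
_, frac_long = kl_expansion(grid, corr_length=5.0, sigma=1.0, n_terms=10, rng=rng)
print(f"variance captured: l=0.5 -> {frac_short:.2f}, l=5.0 -> {frac_long:.2f}")
```

The captured-variance fractions reproduce the paper's qualitative finding: for a fixed truncation, a shorter spatial correlation length leaves more variance unrepresented, so more terms are needed.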
Procedia PDF Downloads 159
1768 Radial Distribution Network Reliability Improvement by Using Imperialist Competitive Algorithm
Authors: Azim Khodadadi, Sahar Sadaat Vakili, Ebrahim Babaei
Abstract:
This study presents a numerical method to optimize the failure rate and repair time of a typical radial distribution system. Failure rate and repair time are effective parameters in the customer- and energy-based reliability indices: decreasing these parameters improves the indices and thus boosts system dependability. Penalty functions indirectly reflect the investment cost spent to improve these indices. Constraints on the customer- and energy-based indices, i.e. SAIFI, SAIDI, CAIDI, and AENS, have been considered using a new method which reduces the number of controlling parameters of the optimization algorithm. The Imperialist Competitive Algorithm (ICA) was used as the main optimization technique, and particle swarm optimization (PSO), simulated annealing (SA), and differential evolution (DE) were applied for further investigation. These algorithms were implemented on a test system in MATLAB, and the obtained results were compared with each other. The optimized values of repair time and failure rate are much lower than the current values, an achievement that reduces investment cost; ICA also gives a better answer than the other algorithms used.
Keywords: imperialist competitive algorithm, failure rate, repair time, radial distribution network
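The indices named above have standard definitions (per IEEE Std 1366): SAIFI and SAIDI are customer-weighted averages of failure rate and annual outage time, CAIDI is their ratio, and AENS is undelivered energy per customer served. A minimal sketch with invented load-point data, not the test system of the paper:

```python
def reliability_indices(load_points):
    """Customer- and energy-based indices from per-load-point data.
    Each entry: (customers N_i, failure rate lambda_i [1/yr],
                 annual outage time U_i [h/yr], average load L_i [kW])."""
    total_customers = sum(n for n, _, _, _ in load_points)
    saifi = sum(n * lam for n, lam, _, _ in load_points) / total_customers
    saidi = sum(n * u for n, _, u, _ in load_points) / total_customers
    caidi = saidi / saifi                  # average duration per interruption
    aens = sum(load * u for _, _, u, load in load_points) / total_customers
    return saifi, saidi, caidi, aens

# Invented three-load-point feeder:
points = [(200, 0.20, 1.5, 80.0), (150, 0.35, 3.0, 50.0), (100, 0.50, 4.0, 40.0)]
saifi, saidi, caidi, aens = reliability_indices(points)
print(f"SAIFI={saifi:.3f}/yr, SAIDI={saidi:.3f} h, CAIDI={caidi:.2f} h, AENS={aens:.3f} kWh")
```

An optimizer such as ICA would perturb the per-section failure rates and repair times and evaluate candidate solutions through exactly this kind of index computation plus penalty terms.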
Procedia PDF Downloads 668
1767 Statistical Characteristics of Code Formula for Design of Concrete Structures
Authors: Inyeol Paik, Ah-Ryang Kim
Abstract:
In this research, a statistical analysis is carried out to examine the statistical properties of the formulas given in design codes for concrete structures. The design formulas of the Korea Highway Bridge Design Code - the limit state design method (KHBDC), which is the current national bridge design code - and the design code for concrete structures by the Korea Concrete Institute (KCI) are applied for the analysis. The safety levels provided by the strength formulas of the design codes are defined based on probabilistic and statistical theory. KHBDC is a reliability-based design code whose load and resistance factors were calibrated to attain a target reliability index; defining the statistical properties of the design formulas is essential to this calibration process. In general, the statistical characteristics of a member strength arise from three differences: first, between the material strength of the actual construction and that used in the design calculation; second, between the actual dimensions of the constructed sections and those used in the design calculation; and third, between the strength of the actual member and the formula simplified for the design calculation. This paper focuses on the third difference. The formulas for calculating the shear strength of concrete members are presented in different ways in KHBDC and KCI. In this study, the statistical properties of the design formulas were obtained through comparison with a database comprising experimental results from the reference publications, with test specimens both with and without shear stirrups. For the applied database, the bias factor was about 1.12 and the coefficient of variation was about 0.18.
By applying the statistical properties of the design formula to the reliability analysis, it is shown that the resistance factors of the current design codes satisfy the target reliability indexes of both codes. The minimum resistance factors of KHBDC, which is written in the material resistance factor format, and KCI, which is written in the member resistance factor format, are also obtained, and the results are presented. Further research is underway to calibrate the resistance factors of the high-strength and high-performance concrete design guide.
Keywords: concrete design code, reliability analysis, resistance factor, shear strength, statistical property
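The bias factor and coefficient of variation of a design formula are summary statistics of the measured-to-predicted strength ratios over a test database. A minimal sketch with invented specimen values (the paper's actual database gave bias ≈ 1.12 and COV ≈ 0.18):

```python
import statistics

def bias_and_cov(tested, predicted):
    """Professional-factor statistics: the ratio of measured to code-predicted
    strength per specimen, summarised as bias (mean ratio) and CoV."""
    ratios = [t / p for t, p in zip(tested, predicted)]
    bias = statistics.mean(ratios)
    cov = statistics.stdev(ratios) / bias
    return bias, cov

# Invented shear-test results vs. code predictions (kN):
measured = [112.0, 98.0, 140.0, 121.0, 87.0, 133.0]
predicted = [100.0, 95.0, 118.0, 104.0, 90.0, 110.0]
bias, cov = bias_and_cov(measured, predicted)
print(f"bias factor = {bias:.2f}, CoV = {cov:.2f}")
```

A bias above 1 means the code formula is conservative on average; the CoV then feeds into the reliability-index calculation used to check the resistance factors.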
Procedia PDF Downloads 319
1766 The Research of Reliability of MEMS Device under Thermal Shock Test in Space Mission
Authors: Liu Ziyu, Gao Yongfeng, Li Muhua, Zhao Jiahao, Meng Song
Abstract:
The effect of thermal shock on the operation of micro-electromechanical systems (MEMS) was examined. All MEMS devices were tested before and after three different thermal shock conditions (from -55℃ to 85℃, from -65℃ to 125℃, and from -65℃ to 200℃). The micro lens and the micro mirror showed no changes after thermal shock, indicating that their designs can be well adapted to the space application environment. The micro-magnetometer, RF MEMS switch, and micro accelerometer exhibited degradation and parameter drift after thermal shock, and potential mechanisms were proposed.
Keywords: MEMS, thermal shock test, reliability, space environment
Procedia PDF Downloads 590
1765 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy
Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu
Abstract:
The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on reliability analysis of onboard sensors to evaluate their location accuracy performance over time. The analysis utilizes field failure data and employs the Weibull distribution to determine reliability and, in turn, to understand improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error - the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map - and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is used to determine whether the data exhibits an infant stage or has transitioned into the operational phase; the shape parameter beta plays a crucial role in identifying this stage. Determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, since either can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of the sensors and to accurately ascertain the duration of the different lifetime phases and the time required for stabilization. This approach also helps to establish whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data, and whether the thresholds for the infant period and wear-out phase are accurately estimated, by validating the data in individual phases with Weibull distribution curve-fitting analysis.
Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the location accuracy reliability of individual sensors over the required period of time but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor location accuracy performance, contributing to enhanced accuracy in satellite-based applications.
Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis
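For a point process observed on (0, T], the Laplace test mentioned above reduces to a single statistic built from the mean failure time: a strongly negative value signals the improving-reliability (infant mortality) phase, a strongly positive one signals wear-out, and |U| below roughly 1.96 is consistent with a trend-free, constant-rate operational phase. A minimal sketch with invented failure times:

```python
import math

def laplace_u(failure_times, observation_end):
    """Laplace trend statistic U for failure times on (0, T]:
    U = (mean(t_i) - T/2) / (T * sqrt(1 / (12 n))).
    U << 0: gaps widening (infant mortality fading); U >> 0: wear-out."""
    n = len(failure_times)
    t_bar = sum(failure_times) / n
    return (t_bar - observation_end / 2) / (
        observation_end * math.sqrt(1.0 / (12 * n)))

T = 1000.0
improving = [5, 20, 60, 150, 320, 610]    # failures concentrated early
steady = [140, 300, 450, 620, 780, 930]   # roughly uniform over (0, T]

print(f"improving: U = {laplace_u(improving, T):+.2f}")  # significantly < -1.96
print(f"steady:    U = {laplace_u(steady, T):+.2f}")     # no significant trend
```

Scanning U over a sliding window of the failure record is one simple way to locate the stabilization point where the infant stage ends.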
Procedia PDF Downloads 65
1764 Web 2.0 Enabling Knowledge-Sharing Practices among Students of IIUM: An Exploration of the Determinants
Authors: Shuaibu Hassan Usman, Ishaq Oyebisi Oyefolahan
Abstract:
This study aimed to explore the latent factors in the web 2.0 enabled knowledge sharing practices instrument. Seven latent factors were identified through a factor analysis with orthogonal rotation and interpreted based on simple structure convergence, item loadings, and analytical statistics. The number of factors retained was based on the Kaiser normalization criterion and the scree plot. The reliability tests revealed satisfactory reliability scores on each of the seven latent factors of the web 2.0 enabled knowledge sharing practices. The limitations, conclusions, and future work of this study are also discussed.
Keywords: factor analysis, latent factors, knowledge sharing practices, students, web 2.0 enabled
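The Kaiser criterion used above for factor retention reduces to counting eigenvalues of the item correlation matrix that exceed 1. A minimal sketch with an invented two-cluster correlation matrix, not the study's data:

```python
import numpy as np

def kaiser_retained(corr):
    """Kaiser criterion: retain factors whose eigenvalue of the item
    correlation matrix exceeds 1. Returns (sorted eigenvalues, count)."""
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return eigvals, int((eigvals > 1.0).sum())

# Six items forming two 3-item clusters: r = 0.6 within, r = 0.1 across.
corr = np.full((6, 6), 0.1)
for block in (slice(0, 3), slice(3, 6)):
    corr[block, block] = 0.6
np.fill_diagonal(corr, 1.0)

eigvals, n_retained = kaiser_retained(corr)
print(np.round(eigvals, 2), "->", n_retained, "factors retained")
```

The two eigenvalues above 1 (2.5 and 1.9 here) match the two built-in item clusters; a scree plot of the same eigenvalues would show its elbow at the same point.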
Procedia PDF Downloads 434
1763 The Reliability and Shape of the Force-Power-Velocity Relationship of Strength-Trained Males Using an Instrumented Leg Press Machine
Authors: Mark Ashton Newman, Richard Blagrove, Jonathan Folland
Abstract:
The force-velocity (F-V) profile of an individual has been shown to influence success in ballistic movements, independent of the individual's maximal power output; therefore, effective and accurate evaluation of an individual's F-V characteristics, and not solely maximal power output, is important. The relatively narrow range of loads typically utilised during force-velocity profiling protocols, due to the difficulty of obtaining force data at high velocities, may bring into question the accuracy of the F-V slope, along with predictions of the maximum force that the system can produce at zero velocity (F₀) and the theoretical maximum velocity against no load (V₀). As such, the reliability of the slope of the force-velocity profile, as well as of V₀, has been shown to be relatively poor in comparison to F₀ and maximal power, and it has been recommended to assess velocity at loads closer to both F₀ and V₀. The aim of the present study was to assess the relative and absolute reliability of an instrumented novel leg press machine which enables the assessment of force and velocity data at loads equivalent to ≤ 10% of one repetition maximum (1RM) through to 1RM during a ballistic leg press movement. The reliability of maximal and mean force, velocity, and power, as well as the respective force-velocity and power-velocity relationships and the linearity of the force-velocity relationship, were evaluated. Sixteen strength-trained males (23.6 ± 4.1 years; 177.1 ± 7.0 cm; 80.0 ± 10.8 kg) attended four sessions; during the initial visit, participants were familiarised with the leg press, modified to include a mounted force plate (Type SP3949, Force Logic, Berkshire, UK) and a Micro-Epsilon WDS-2500-P96 linear positional transducer (LPT) (Micro-Epsilon, Merseyside, UK). Peak isometric force (IsoMax) and a dynamic 1RM, both from a starting position of 81% leg length, were recorded for the dominant leg.
Visits two to four saw the participants carry out the leg press movement at loads equivalent to ≤ 10%, 30%, 50%, 70%, and 90% 1RM. IsoMax was recorded during each testing visit prior to the dynamic F-V profiling repetitions. The novel leg press machine used in the present study appears to be a reliable tool for measuring force- and velocity-related variables across a range of loads, including velocities closer to V₀, when compared to some of the findings within the published literature. Both linear and polynomial models demonstrated good to excellent levels of reliability for the F-V slope (SFV) and F₀, respectively, with reliability for V₀ being good using a linear model but poor using a 2nd-order polynomial model. A polynomial regression model may be most appropriate when using a similar unilateral leg press setup to predict maximal force production capabilities, given only a 5% difference between F₀ and the obtained IsoMax values, with a linear model best suited to predicting V₀.
Keywords: force-velocity, leg-press, power-velocity, profiling, reliability
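Under the linear model discussed above, F = F₀ - SFV·v, the profile quantities follow directly from an ordinary least-squares fit: V₀ = F₀/SFV and Pmax = F₀V₀/4 (the apex of the parabolic power-velocity curve). A minimal sketch with invented load-velocity data; the real protocol fits each participant's trial data:

```python
def fit_fv_profile(velocities, forces):
    """Least-squares line F = F0 - SFV * v, plus the derived quantities
    V0 = F0 / SFV and Pmax = F0 * V0 / 4."""
    n = len(velocities)
    vm = sum(velocities) / n
    fm = sum(forces) / n
    sxy = sum((v - vm) * (f - fm) for v, f in zip(velocities, forces))
    sxx = sum((v - vm) ** 2 for v in velocities)
    slope = sxy / sxx                 # negative for a descending F-v line
    f0 = fm - slope * vm              # intercept: force at zero velocity
    v0 = -f0 / slope                  # extrapolated velocity at zero force
    return f0, -slope, v0, f0 * v0 / 4

# Illustrative mean values across loads ~10-90% 1RM (m/s, N):
v = [1.60, 1.20, 0.90, 0.60, 0.30]
f = [400.0, 1000.0, 1450.0, 1900.0, 2350.0]
f0, sfv, v0, pmax = fit_fv_profile(v, f)
print(f"F0 ~= {f0:.0f} N, SFV ~= {sfv:.0f} N.s/m, "
      f"V0 ~= {v0:.2f} m/s, Pmax ~= {pmax:.0f} W")
```

Because V₀ is an extrapolation beyond the measured range, its sensitivity to the fitted slope is exactly why including near-unloaded (≤ 10% 1RM) trials, as this machine allows, tightens the estimate.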
Procedia PDF Downloads 58
1762 Design and Analysis of Adaptive Type-I Progressive Hybrid Censoring Plan under Step Stress Partially Accelerated Life Testing Using Competing Risk
Authors: Ariful Islam, Showkat Ahmad Lone
Abstract:
Statistical distributions have long been employed in the assessment of semiconductor devices and product reliability. The power function distribution is one of the most important distributions in modern reliability practice and is frequently preferred over mathematically more complex distributions, such as the Weibull and the lognormal, because of its simplicity. Moreover, it may exhibit a better fit for failure data and provide more appropriate information about reliability and hazard rates in some circumstances. This study deals with estimating information about the failure times of items under step-stress partially accelerated life tests with competing risks, based on an adaptive type-I progressive hybrid censoring scheme. The lifetimes of the units under test are assumed to follow the Mukherjee-Islam distribution. Point and interval maximum-likelihood estimates are obtained for the distribution parameters and the tampering coefficient. The performance of the resulting estimators of the developed model parameters is evaluated and investigated by using a simulation algorithm.
Keywords: adaptive progressive hybrid censoring, competing risk, Mukherjee-Islam distribution, partially accelerated life testing, simulation study
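The simulation-based evaluation of maximum-likelihood estimators described above can be illustrated on the closely related one-parameter power function distribution f(x; θ) = θx^(θ-1) on (0, 1), for which the MLE has the closed form θ̂ = -n/Σ ln xᵢ. This is a stand-in for the machinery only, not the paper's Mukherjee-Islam model with censoring and competing risks.

```python
import math
import random

def mle_theta(sample):
    """Closed-form MLE for f(x; theta) = theta * x**(theta - 1), 0 < x < 1:
    theta_hat = -n / sum(ln x_i)."""
    return -len(sample) / sum(math.log(x) for x in sample)

def simulate_mse(theta, n, reps, rng):
    """Monte Carlo mean squared error of the MLE, the same kind of yardstick
    a simulation study uses to compare estimators."""
    sq_err = 0.0
    for _ in range(reps):
        # Inverse-CDF sampling: F(x) = x**theta, so x = u**(1/theta).
        sample = [rng.random() ** (1.0 / theta) for _ in range(n)]
        sq_err += (mle_theta(sample) - theta) ** 2
    return sq_err / reps

rng = random.Random(7)
for n in (20, 80, 320):
    print(f"n={n:4d}  MSE ~= {simulate_mse(2.0, n, 2000, rng):.4f}")
```

The shrinking MSE with sample size is the expected consistency of the MLE; a full study would repeat this under the censoring plan and report risks or risk efficiencies per estimator.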
Procedia PDF Downloads 347
1761 Understanding ASPECTS of Stroke: Interrater Reliability between Emergency Medicine Physician and Radiologist in a Rural Setup
Authors: Vineel Inampudi, Arjun Prakash, Joseph Vinod
Abstract:
Aims and Objectives: To evaluate the interrater reliability in grading the ASPECTS score between the emergency medicine physician at first contact and the radiologist among patients with acute ischemic stroke. Materials and Methods: We conducted a retrospective analysis of 86 acute ischemic stroke cases referred to the Department of Radiodiagnosis from November 2014 to January 2016. The imaging (plain CT scan) was performed using a GE Bright Speed Elite 16-slice CT scanner. The ASPECTS score was calculated separately by an emergency medicine physician and a radiologist. Interrater reliability for total and dichotomized ASPECTS (≥ 6 and < 6) scores was assessed using ICC and Cohen κ coefficients in SPSS software (v17.0). Results: Interrater agreement for total and dichotomized ASPECTS was substantial (ICC 0.79 and Cohen κ 0.68) between the emergency physician and the radiologist. The mean difference in ASPECTS between the two readers was only 0.15, with a standard deviation of 1.58. No proportionality bias was detected. A Bland-Altman plot was constructed to demonstrate the distribution of the ASPECTS differences between the two readers. Conclusion: Substantial interrater agreement was noted in grading ASPECTS between the emergency medicine physician at first contact and the radiologist, confirming its robustness even in a rural setting.
Keywords: ASPECTS, computed tomography, MCA territory, stroke
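The Cohen κ quoted for the dichotomized scores is chance-corrected agreement, κ = (p₀ - pₑ)/(1 - pₑ), where p₀ is observed agreement and pₑ the agreement expected by chance from each rater's marginal frequencies. A minimal sketch with invented reader labels (the study's 86 cases gave κ = 0.68):

```python
def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical labels:
    kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in labels)                                 # chance
    return (p_o - p_e) / (1 - p_e)

# Dichotomized ASPECTS (1 if >= 6, else 0) from two readers; toy data:
physician = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
radiologist = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohen_kappa(physician, radiologist), 2))  # -> 0.52
```

Unlike raw percent agreement (80% here), κ discounts the agreement both readers would reach by guessing with their observed label frequencies, which is why it is preferred for interrater studies like this one.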
Procedia PDF Downloads 236