Search results for: solder joint reliability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2941

2371 Reliability of Self-Reported Language Proficiency Measures in L1 Attrition Research: A Closer Look at the Can-Do-Scales

Authors: Anastasia Sorokina

Abstract:

Self-reported language proficiency measures have been widely used by researchers and have proven to be an accurate tool for assessing actual language proficiency. L1 attrition researchers also rely on self-reported measures. More specifically, can-do-scales have gained popularity in the discipline of L1 attrition research. Can-do-scales usually contain statements about language (e.g., “I can write e-mails”); participants are asked to rate each statement on a scale from 1 (I cannot do it at all) to 5 (I can do it without any difficulties). Despite their popularity, no studies have examined the reliability of can-do-scales in measuring the actual level of L1 attrition. Do can-do-scales positively correlate with lexical diversity, syntactic complexity, and fluency? The present study analyzed speech samples of 35 Russian-English attriters to examine whether their self-reported proficiency correlates with their actual L1 proficiency. Pearson correlation analysis demonstrated that can-do-scale scores correlated with lexical diversity, syntactic complexity, and fluency. These findings provide a valuable contribution to L1 attrition research by demonstrating that can-do-scales can be used as a reliable tool to measure L1 attrition.
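
For readers unfamiliar with the statistic, a correlation of this kind can be computed as in the following minimal Python sketch (the values are hypothetical illustrations, not the study's data):

# Minimal sketch: correlating self-reported can-do scores with an
# objective proficiency measure (hypothetical illustrative data).
from scipy.stats import pearsonr

can_do_scores = [4.2, 3.8, 4.5, 2.9, 3.1, 4.0, 3.5]            # mean can-do ratings
lexical_diversity = [0.71, 0.64, 0.78, 0.52, 0.55, 0.69, 0.61]  # e.g. type-token ratio

r, p_value = pearsonr(can_do_scores, lexical_diversity)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")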

Keywords: L1 attrition, can-do-scales, lexical diversity, syntactic complexity

Procedia PDF Downloads 214
2370 Development and Validation of Employee Trust Scale: Factor Structure, Reliability and Validity

Authors: Chua Bee Seok, Getrude Cosmas, Jasmine Adela Mutang, Shazia Iqbal Hashmi

Abstract:

The aims of this study were to determine the factor structure and psychometric properties (i.e., reliability and convergent validity) of the Employee Trust Scale, an instrument newly created by the researchers. The Employee Trust Scale initially contained 82 items measuring employees' trust toward their supervisors. A sample of 818 employees (343 females, 449 males) was selected randomly from public and private organization sectors in Kota Kinabalu, Sabah, Malaysia. Their ages ranged from 19 to 67 years, with a mean of 34.55 years. Their average tenure with their current employer was 11.2 years (s.d. = 7.5 years). The respondents were asked to complete the Employee Trust Scale as well as Mishra's managerial trust questionnaire. The exploratory factor analysis of employees' trust toward their supervisors extracted three factors, labeled 'trustworthiness' (32 items), 'position status' (11 items) and 'relationship' (6 items), which accounted for 62.49% of the total variance. The trustworthiness factor was re-categorized into three sub-factors: competency (11 items), benevolence (8 items) and integrity (13 items). All factors and sub-factors of the scale demonstrated clear reliability, with internal consistency (Cronbach's alpha) above 0.85. The convergent validity of the scale was supported by an expected pattern of correlations (positive and significant) between the scores of all factors and sub-factors of the scale and the score on the managerial trust questionnaire, which measured the same construct. The convergent validity was further supported by the significant and positive inter-correlations between the factors and sub-factors of the scale. The results suggest that the Employee Trust Scale is a reliable and valid measure. However, further studies need to be carried out with other sample groups to further validate the scale.
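
As a point of reference, internal consistency of the kind reported here can be computed from a respondents-by-items score matrix, as in this minimal Python sketch (the ratings are hypothetical):

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return items / (items - 1) * (1 - item_vars / total_var)

# Hypothetical ratings: 6 respondents x 4 items on a 5-point scale
ratings = np.array([[4, 5, 4, 4],
                    [3, 3, 4, 3],
                    [5, 5, 5, 4],
                    [2, 3, 2, 3],
                    [4, 4, 5, 4],
                    [3, 4, 3, 3]])
print(f"alpha = {cronbach_alpha(ratings):.2f}")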

Keywords: employees trust scale, psychometric properties, trustworthiness, position status, relationship

Procedia PDF Downloads 441
2369 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant

Authors: John K. Avor, Choong-Koo Chang

Abstract:

The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT), and vice versa, is performed through a fast bus transfer scheme. Fast bus transfer is a time-critical application where the transfer process depends on various parameters, so transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical Class 1E electrical loads. Bus transfers must therefore be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, there are instances where transfer schemes have malfunctioned due to inaccurate interpretation of key parameters and have consequently failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, the combination of Artificial Neural Networks and Fuzzy Systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm, selected on the basis of the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting the Neuro-Fuzzy approach in the bus transfer scheme is to exploit the signal validation capabilities of the artificial neural network, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined effect of artificial neural networks and fuzzy systems in accurately interpreting key bus transfer parameters, such as the magnitude of the residual voltage, the decay time, and the associated phase angle of the residual voltage, in order to determine the possibility of a high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of auxiliary power distribution systems. The scheme is implemented on the APR1400 nuclear power plant auxiliary system.
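
A heavily simplified sketch of the kind of fuzzy evaluation involved is given below; it is not the authors' scheme, and the membership thresholds are illustrative assumptions, not APR1400 setpoints:

# Minimal sketch (not the authors' scheme): fuzzy grading of residual
# voltage to judge whether a fast bus transfer is admissible.
# Thresholds are illustrative assumptions.

def membership_safe_voltage(residual_pu: float) -> float:
    """Degree (0..1) to which the residual voltage permits fast transfer."""
    if residual_pu >= 0.9:
        return 1.0
    if residual_pu <= 0.7:
        return 0.0
    return (residual_pu - 0.7) / 0.2  # linear ramp between 0.7 and 0.9 pu

def transfer_decision(residual_pu: float, phase_deg: float) -> str:
    mu_v = membership_safe_voltage(residual_pu)
    mu_phi = max(0.0, 1.0 - abs(phase_deg) / 30.0)  # penalize phase mismatch
    confidence = min(mu_v, mu_phi)                  # fuzzy AND (min operator)
    return "fast transfer" if confidence > 0.5 else "residual transfer"

print(transfer_decision(0.85, 12.0))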

Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability

Procedia PDF Downloads 152
2368 An Assessment of the Performance of Local Government in Ondo State Nigeria: A Capital Budgeting Approach

Authors: Olurankinse Felix

Abstract:

Local governments in Ondo State, Nigeria, are the third tier of government, saddled with the responsibility of providing governance and economic services at the grassroots. To enable them to do this, the Constitution of the Federal Republic of Nigeria provides that a proportion of the Federation Account be allocated to them, in addition to their internally generated revenue. From this allocation and other incidental sources of revenue, local governments are expected to provide basic infrastructure and other social amenities to better the lot of rural dwellers. Nevertheless, local governments' performance in providing social amenities is questionable and far from encouraging. Assessing the performance of local governments in this period of dearth and scarcity of resources is indispensable, especially since the activities of local government staff are bedeviled by fraud, corruption and mismanagement. The direct impact of their actions on the living standard of rural dwellers therefore calls for an evaluation of their level of performance using a capital budgeting approach. The paper, being a time-series study, adopts a survey design. Data were obtained from secondary sources, mainly the annual financial statements and publications of approved budget estimates covering the period of study (2008-2012). Ratio analysis was employed to compare the levels of performance of the local governments under study. The results show that fewer than 30% of the local governments were able to harness the budgetary allocation to provide amenities to the beneficiaries, while the majority were involved in unethical conduct ranging from theft of funds and corruption to diversion of funds and extra-budgetary activities. Internally generated revenue is also too poor to complement the statutory allocation, and the monthly withholding of large portions of the local governments' share by the state in the name of the joint account was seen as a further contributory factor. The study recommends transparency and accountability in public fund management through the oversight function of the state house of assembly. Local governments should also be made autonomous and independent of the state by jettisoning the idea of the joint account.

Keywords: performance, transparency and accountability, capital budgeting, joint account, local government autonomy

Procedia PDF Downloads 311
2367 Analysis of Fault Tolerance on Grid Computing in Real Time Approach

Authors: Parampal Kaur, Deepak Aggarwal

Abstract:

In a computational Grid, fault tolerance is an imperative issue to be considered during job scheduling. Due to the widespread use of resources, systems are highly prone to errors and failures. Hence, fault tolerance plays a key role in the grid to avoid the problem of unreliability. Scheduling tasks to appropriate resources is a vital requirement in a computational Grid. The fittest-resource scheduling algorithm searches for the appropriate resource based on the job requirements, in contrast to general scheduling algorithms, where jobs are scheduled to the resources with the best performance factor. The proposed method improves the fault tolerance of the fittest-resource scheduling algorithm by scheduling the job in coordination with job replication when the resource has low reliability. A resource is identified as critical based on its reliability index, and tasks are scheduled according to the criticality of the resources. Results show that the execution time of the tasks is comparatively reduced with the proposed algorithm using a real-time approach rather than a simulator.
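
The replication rule described above can be sketched as follows; the resource names, record fields and reliability threshold are assumptions for illustration:

# Illustrative sketch of the replication rule: a job assigned to a
# resource whose reliability index falls below a threshold is also
# replicated on the next-fittest resource.

RELIABILITY_THRESHOLD = 0.8  # assumed cut-off for "critical" resources

def schedule(job, resources):
    # resources: list of dicts with "name", "fitness", "reliability"
    ranked = sorted(resources, key=lambda r: r["fitness"], reverse=True)
    primary = ranked[0]
    assignments = [(job, primary["name"])]
    if primary["reliability"] < RELIABILITY_THRESHOLD and len(ranked) > 1:
        assignments.append((job, ranked[1]["name"]))  # replicate on backup
    return assignments

resources = [
    {"name": "node-A", "fitness": 0.95, "reliability": 0.72},  # critical
    {"name": "node-B", "fitness": 0.90, "reliability": 0.93},
]
print(schedule("task-17", resources))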

Keywords: computational grid, fault tolerance, task replication, job scheduling

Procedia PDF Downloads 417
2366 A Study on the Accelerated Life Cycle Test Method of the Motor for Home Appliances by Using Acceleration Factor

Authors: Youn-Sung Kim, Mi-Sung Kim, Jae-Kun Lee

Abstract:

This paper deals with an accelerated life cycle test method for motors used in home appliances that demand high reliability. The life cycle of parts in home appliances should be 10 years, because the life cycle of the appliances themselves, such as washing machines, refrigerators and TVs, is at least 10 years. In the case of a washing machine, the motor life cycle test currently runs for 3000 cycles (1 cycle = 2 hours). However, a 3000-cycle test incurs substantial time and cost. The objectives of this study are to reduce the life cycle test time and the number of test samples, which can be achieved by using an acceleration factor for the test time and a reduction factor for the number of samples.
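
As a back-of-the-envelope illustration of the idea (the paper's actual acceleration factor is not stated in the abstract, so the value below is assumed):

# Worked example (assumed numbers): once an acceleration factor AF has
# been established, the required accelerated test time shrinks
# proportionally.
design_cycles = 3000          # baseline test, 1 cycle = 2 hours
hours_per_cycle = 2
acceleration_factor = 5.0     # illustrative AF from elevated stress

accelerated_hours = design_cycles * hours_per_cycle / acceleration_factor
print(f"Baseline: {design_cycles * hours_per_cycle} h, "
      f"accelerated: {accelerated_hours:.0f} h")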

Keywords: accelerated life cycle test, motor reliability test, motor for washing machine, BLDC motor

Procedia PDF Downloads 611
2365 Derivation of a Risk-Based Level of Service Index for Surface Street Network Using Reliability Analysis

Authors: Chang-Jen Lan

Abstract:

The current Level of Service (LOS) index adopted in the Highway Capacity Manual (HCM) for signalized intersections on surface streets is based on average intersection delay. The delay thresholds defining the LOS grades are subjective and unrelated to critical traffic conditions. For example, an intersection delay of 80 sec per vehicle for the failing LOS grade F does not necessarily correspond to the intersection capacity. Also, a specific value of average delay may result from delay minimization, delay equalization, or other meaningful optimization criteria. To that end, a reliability version of the intersection critical degree of saturation (v/c) is introduced as the LOS index. Traditionally, the degree of saturation at a signalized intersection is defined as the ratio of the critical volume sum (per lane) to the average saturation flow (per lane) during all available effective green time within a cycle. The critical sum is the sum of the maximal conflicting movement-pair volumes in the northbound/southbound and eastbound/westbound rights of way. In this study, both movement volume and saturation flow are assumed to follow log-normal distributions. Because, when the conditions of the central limit theorem hold, the product of independent, positive random variables tends to a log-normal distribution in the limit, the critical degree of saturation is expected to be log-normally distributed as well. Derivation of the risk index predictive limits is complex due to the maximum and absolute value operators, as well as the ratio of random variables. A fairly accurate functional form for the predictive limit at a user-specified significance level is derived. The predictive limit is then compared with the designated LOS thresholds for the intersection critical degree of saturation (denoted as X
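
A minimal sketch of computing such a predictive limit, assuming illustrative log-space parameters for the critical degree of saturation rather than calibrated values from the study:

# Sketch: upper predictive limit of a log-normally distributed critical
# degree of saturation X at a user-specified significance level.
from scipy.stats import lognorm
import numpy as np

mu, sigma = np.log(0.85), 0.12   # log-space mean and std of X (assumed)
alpha = 0.05                     # significance level

upper_limit = lognorm.ppf(1 - alpha, s=sigma, scale=np.exp(mu))
print(f"95% predictive upper limit of v/c: {upper_limit:.3f}")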

Keywords: reliability analysis, level of service, intersection critical degree of saturation, risk-based index

Procedia PDF Downloads 116
2364 A Novel Meta-Heuristic Algorithm Based on Cloud Theory for Redundancy Allocation Problem under Realistic Condition

Authors: H. Mousavi, M. Sharifi, H. Pourvaziri

Abstract:

The Redundancy Allocation Problem (RAP) is a well-known mathematical problem for modeling series-parallel systems. It is a combinatorial optimization problem that focuses on determining an optimal assignment of components in a system design. In this paper, to be more practical, we consider the redundancy allocation problem for a series system with interval-valued component reliabilities. Therefore, during the search process, the reliability of each component is treated as a stochastic variable with lower and upper bounds. To optimize the problem, we propose a cloud-theory-based simulated annealing algorithm (CBSAA). Monte Carlo simulation (MCS) is embedded in the CBSAA to handle the random component reliabilities. This novel approach has been investigated through numerical examples, and the experimental results show that the CBSAA combined with MCS is an efficient tool for solving the RAP of systems with interval-valued component reliabilities.
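
The Monte Carlo evaluation step can be sketched as below; a function of this kind would serve as the objective inside the annealing loop. The structure, bounds and redundancy levels are illustrative assumptions, not the paper's test instances:

import random

def mc_system_reliability(design, bounds, n_samples=1000):
    """Expected reliability of a series system of parallel groups, with
    each component reliability drawn from its [lo, hi] interval."""
    total = 0.0
    for _ in range(n_samples):
        r_sys = 1.0
        for subsystem, redundancy in design.items():
            lo, hi = bounds[subsystem]
            r = random.uniform(lo, hi)           # interval-valued reliability
            r_sys *= 1 - (1 - r) ** redundancy   # parallel redundancy
        total += r_sys
    return total / n_samples

bounds = {"pump": (0.90, 0.95), "valve": (0.85, 0.92)}
design = {"pump": 2, "valve": 3}  # candidate redundancy allocation
print(f"Estimated reliability: {mc_system_reliability(design, bounds):.4f}")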

Keywords: redundancy allocation problem, simulated annealing, cloud theory, monte carlo simulation

Procedia PDF Downloads 396
2363 Progress in Accuracy, Reliability and Safety in Firedamp Detection

Authors: José Luis Lorenzo Bayona, Ljiljana Medic-Pejic, Isabel Amez Arenillas, Blanca Castells Somoza

Abstract:

This communication presents the results of a study carried out by the Official Laboratory J. M. Madariaga (LOM) of the Polytechnic University of Madrid to analyze the reliability of methane detection systems used in underground mining. Poor firedamp control at work can cause anything from production stoppages to fatal accidents, and since there is currently a great variety of equipment with different functional characteristics, a study is needed to indicate which measurement principles offer the highest degree of confidence. For the development of the project, a series of fixed, transportable and portable methane detectors with different measurement principles were selected and subjected to laboratory tests following the methods described in the applicable regulations. The test equipment was that usually used in the certification and calibration of these devices, subject to the LOM quality system, and the tests were carried out on detectors available on the market. The conclusions establish the main advantages and disadvantages of the equipment according to the measurement principle used: catalytic combustion, interferometry and infrared absorption.

Keywords: ATEX standards, gas detector, methane meter, mining safety

Procedia PDF Downloads 118
2362 The Happiness Pulse: A Measure of Individual Wellbeing at a City Scale, Development and Validation

Authors: Rosemary Hiscock, Clive Sabel, David Manley, Sam Wren-Lewis

Abstract:

As part of the Happy City Index Project, Happy City has developed a survey instrument to measure experienced wellbeing: how people are feeling and functioning in their everyday lives. The survey instrument, called the Happiness Pulse, was developed in partnership with the New Economics Foundation (NEF) with the dual aim of collecting citywide wellbeing data and engaging individuals and communities in the measurement and promotion of their own wellbeing. The survey domains and items were selected through a review of the academic literature and a stakeholder engagement process, including local policymakers, community organisations and individuals. The Happiness Pulse was included in the Bristol pilot of the Happy City Index (n=722). The experienced wellbeing items were subjected to factor analysis, and a reduced set of items, to be included in a revised scale for future data collection, was entered into a second factor analysis. The revised factors were tested for reliability and validity. Three factors emerged from the revised items: Be, Do and Connect. The Be factor had good reliability and good convergent and criterion validity. The Do factor had good discriminant validity. The Connect factor had adequate reliability and good discriminant and criterion validity. Some age, gender and socioeconomic differentiation was found. The properties of a new scale to measure experienced wellbeing, intended for use by municipal authorities, are thus described. Happiness Pulse data can be combined with local data on wellbeing conditions to determine what matters for people's wellbeing across a city, and why.

Keywords: city wellbeing, community wellbeing, engaging individuals and communities, measuring wellbeing and happiness

Procedia PDF Downloads 235
2361 Simultaneous Optimization of Design and Maintenance through a Hybrid Process Using Genetic Algorithms

Authors: O. Adjoul, A. Feugier, K. Benfriha, A. Aoussat

Abstract:

In general, issues related to design and maintenance are considered independently. However, the decisions made in these two areas influence each other. Design for maintenance is an opportunity to optimize the life cycle cost of a product, particularly in the nuclear or aeronautical field, where maintenance expenses represent more than 60% of life cycle costs. The design of large-scale systems starts with the product architecture: a choice of components in terms of cost, reliability, weight and other attributes corresponding to the specifications. On the other hand, the design must take maintenance into account, in particular by improving real-time monitoring of equipment through the integration of new technologies such as connected sensors and intelligent actuators. We noticed that the different approaches used in Design for Maintenance (DFM) methods are limited to the simultaneous characterization of the reliability and maintainability of a multi-component system. This article proposes a DFM method that helps designers propose dynamic maintenance for multi-component industrial systems. The term "dynamic" refers to the ability to integrate available monitoring data to adapt the maintenance decision in real time. The goal is to maximize the availability of the system at a given life cycle cost. This paper presents an approach for the simultaneous optimization of the design and maintenance of multi-component systems. Here the design is characterized by four decision variables for each component (reliability level, maintainability level, redundancy level, and level of monitoring data). The maintenance is characterized by two decision variables (the dates of the maintenance stops and the maintenance operations to be performed on the system during these stops). The DFM model helps designers choose technical solutions for large-scale industrial products, i.e., complex multi-component systems with long life cycles, such as trains and aircraft. The method is based on a two-level hybrid algorithm for the simultaneous optimization of design and maintenance, using genetic algorithms. The first level selects a design solution for a given system, considering life cycle cost and reliability. The second level determines a dynamic and optimal maintenance plan to be deployed for that design solution. This level is based on the Maintenance Free Operating Period (MFOP) concept, which takes into account decision criteria such as total reliability, maintenance cost and maintenance time. Depending on the life cycle duration, the desired availability, and the desired business model (sales or rental), the tool provides visibility of overall costs and optimal product architecture.

Keywords: availability, design for maintenance (DFM), dynamic maintenance, life cycle cost (LCC), maintenance free operating period (MFOP), simultaneous optimization

Procedia PDF Downloads 96
2360 Contention Window Adjustment in IEEE 802.11-based Industrial Wireless Networks

Authors: Mohsen Maadani, Seyed Ahmad Motamedi

Abstract:

The use of wireless technology in industrial networks has gained considerable attention in recent years. In this paper, we thoroughly analyze the effect of contention window (CW) size on the performance of IEEE 802.11-based industrial wireless networks (IWN) from a delay and reliability perspective. Results show that the default values of CWmin, CWmax, and retry limit (RL) are far from optimal performance due to industrial application characteristics, including short packets and a noisy environment. An adaptive, payload-dependent CW algorithm is proposed to minimize the average delay, as sketched below. Finally, a simple but effective CW and RL setting is proposed for industrial applications, which outperforms the minimum-average-delay solution from a maximum delay and jitter perspective, at the cost of a slightly higher average delay. Simulation results show improvements of up to 20%, 25%, and 30% in average delay, maximum delay, and jitter, respectively.
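
The paper's exact adaptation rule is not reproduced in the abstract; the following sketch merely illustrates the payload-dependent idea under assumed scaling and clamping values:

# Illustrative sketch only (not the authors' rule): scale CWmin with
# payload size so that short industrial packets see less contention
# overhead. All constants are assumptions.
def adaptive_cw_min(payload_bytes, cw_floor=8, cw_ceiling=64,
                    ref_payload=256):
    """Payload-dependent CWmin (assumed linear scaling, clamped)."""
    scaled = int(cw_floor * payload_bytes / ref_payload)
    return max(cw_floor, min(cw_ceiling, scaled))

for size in (32, 128, 512):
    print(size, "bytes ->", adaptive_cw_min(size))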

Keywords: average delay, contention window, distributed coordination function (DCF), jitter, industrial wireless network (IWN), maximum delay, reliability, retry limit

Procedia PDF Downloads 397
2359 Psychological Testing in Industrial/Organizational Psychology: Validity and Reliability of Psychological Assessments in the Workplace

Authors: Melissa C. Monney

Abstract:

Psychological testing has been of interest to researchers for many years, as tests are useful tools for assessing and diagnosing various disorders and for understanding human behavior. However, for over 20 years now, researchers and laypersons alike have been interested in using them for other purposes, such as informing employee selection, promotion, and even termination. In recent years, psychological assessments have been useful in facilitating workplace decision processes regarding employee circulation within organizations. This literature review explores four of the most commonly used psychological tests in workplace environments, namely cognitive ability, emotional intelligence, integrity, and personality tests, which organizations have used to assess different factors of human behavior as predictive measures of future employee conduct. The findings suggest that, while there is much controversy and debate regarding the validity and reliability of these tests in workplace settings, since they were not originally designed for these purposes, their use in the workplace has helped decrease costs and employee turnover and increase job satisfaction by ensuring the right employees are selected for their roles.

Keywords: cognitive ability, personality testing, predictive validity, workplace behavior

Procedia PDF Downloads 221
2358 The Attitude and Willingness to Use Telecare for Arthritis Patients

Authors: Jui-Chen Huang

Abstract:

Nowadays, the population is aging and the number of people who need care is increasing, but manpower and funding are insufficient. Therefore, this study aims to explore the attitudes and willingness of arthritis patients to adopt telecare, taking a large medical institution in central Taiwan as the sample hospital. A structured questionnaire (using a five-point Likert scale) was used to collect data from chronic patients over 20 years old, and a total of 500 valid questionnaires were collected. SPSS 18.0 statistical software was used for reliability analysis and independent-sample t-tests to explore the differences in attitudes and willingness to use telecare between arthritis patients and non-arthritic patients. The Cronbach's alpha value of the study questionnaire was above 0.94, showing good reliability. Arthritis patients and non-arthritic patients showed a statistically significant difference in attitudes toward telecare, while the difference in willingness to use it was not statistically significant. In addition, the mean attitude and intention scores of arthritis patients for telecare were 3.38 and 3.41, respectively, indicating that arthritis patients have a certain degree of positive attitude and willingness to adopt telecare, which is worthy of follow-up research and promotion by the industry.

Keywords: telecare, arthritis patients, attitudes, intention

Procedia PDF Downloads 121
2357 Comparison of Isokinetic Powers (Flexion and Knee Extension) of Basketball and Football Players (Age 17–20)

Authors: Ugur Senturk, Ibrahım Erdemır, Faruk Guven, Cuma Ece

Abstract:

The objective of this study is to compare flexion and extension movements of the knee joint by measuring the isokinetic knee strength of amateur basketball and football players. For this purpose, 21 players in total were included, consisting of football players (n=12) and basketball players (n=9) within the age range of 17–20. After recording the age, height, body weight, vertical jump, and BMI measurements of all subjects, lower-extremity knee-joint movements (flexion-extension) were measured with an isokinetic dynamometer (IsoMed 2000) at angular velocities of 60°/sec and 240°/sec. After the collected information forms and knee flexion and extension parameters were arranged and grouped, all data were analyzed with SPSS for Windows. Descriptive analyses of the parameters were made. The t test and the Mann-Whitney U test were used to compare the parameters of football players and basketball players and to find inter-group differences. Comparisons and relations between the groups were examined at p<0.05 and p<0.01. In conclusion, no statistical differences were found between the isokinetic knee flexion and extension parameters of football and basketball players. However, the football players were found to be older than the basketball players. In addition, in comparisons of left knee extension, the basketball players' average values for peak torque and for the average peak torque curve were higher than those of the football players. The basketball players' fat levels were also found to be higher than those of the football players.

Keywords: isokinetic contraction, isokinetic dynamometer, peak torque, flexion, extension, football, basketball

Procedia PDF Downloads 513
2356 Stochastic Analysis of Linux Operating System through Copula Distribution

Authors: Vijay Vir Singh

Abstract:

This work focuses on studying the Linux operating system connected in a LAN (local area network). A STAR topology (called subsystem-1) and a BUS topology (called subsystem-2) are taken into account; they are placed at two different locations and connected to a server through a hub. In both topologies, n clients are assumed. The system has two types of failures, i.e., partial failure and complete failure. The partial failure is further categorized as minor or major. It is assumed that a minor partial failure degrades the subsystems, while a major partial failure puts the subsystem into breakdown mode. The system may fail completely due to server failure, hacking, blocking, etc. The system is studied using the supplementary variable technique and the Laplace transform, considering the different types of failure and two types of repair. Various measures of reliability, for example system availability, system reliability, MTTF, and the profit function, are discussed for different parametric values.
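
For reference, the Gumbel-Hougaard copula named in the keywords has the closed form C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)) for theta >= 1, and can be evaluated directly (the arguments below are illustrative):

import math

def gumbel_hougaard(u: float, v: float, theta: float) -> float:
    """C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta))."""
    return math.exp(-((-math.log(u)) ** theta
                      + (-math.log(v)) ** theta) ** (1.0 / theta))

# theta = 1 reduces to independence; larger theta couples the repairs.
print(gumbel_hougaard(0.9, 0.8, theta=2.0))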

Keywords: star topology, bus topology, blocking, hacking, Linux operating system, Gumbel-Hougaard family copula, supplementary variable

Procedia PDF Downloads 345
2355 An Investigation on Organisation Cyber Resilience

Authors: Arniyati Ahmad, Christopher Johnson, Timothy Storer

Abstract:

Cyber exercises are used to assess the preparedness of a community against cyber crises, technology failures and critical information infrastructure (CII) incidents. Cyber exercises, also called cyber crisis exercises or cyber drills, involve partnerships or collaboration between public and private agencies from several sectors. This study investigates the organisation cyber resilience (OCR) of sectors participating in a cyber exercise called X Maya in Malaysia. The study used a principle-based cyber resilience survey, the C-Suite Executive Checklist, developed by the World Economic Forum in 2012. To ensure the suitability of the survey for investigating OCR, a reliability test was conducted on the C-Suite Executive Checklist items. The research further investigates the differences in OCR among the ten Critical National Information Infrastructure (CNII) sectors that participated in the cyber exercise. A One Way ANOVA test showed a statistically significant difference in OCR among the ten CNII sectors participating in the cyber exercise.
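
The sector comparison can be reproduced in outline with a one-way ANOVA; the scores below are hypothetical stand-ins for three of the ten sectors:

from scipy.stats import f_oneway

# Hypothetical OCR scores per sector (illustrative only)
finance = [3.8, 4.1, 3.9, 4.0]
energy = [3.2, 3.0, 3.4, 3.1]
transport = [3.6, 3.5, 3.7, 3.4]

f_stat, p_value = f_oneway(finance, energy, transport)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")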

Keywords: critical information infrastructure, cyber resilience, organisation cyber resilience, reliability test

Procedia PDF Downloads 340
2354 Additive Weibull Model Using Warranty Claim and Finite Element Analysis Fatigue Analysis

Authors: Kanchan Mondal, Dasharath Koulage, Dattatray Manerikar, Asmita Ghate

Abstract:

This paper presents an additive reliability model using warranty data and Finite Element Analysis (FEA) fatigue data. Warranty data for any product give insight into its underlying issues and are often used by reliability engineers to build prediction models that forecast the failure rates of parts. But there is one major limitation in using warranty data for prediction: warranty periods constitute only a small fraction of the total lifetime of a product, and most of the time they cover only the infant mortality and useful-life zones of a bathtub curve. Predicting with warranty data alone in these cases does not generally provide results with the desired accuracy. The failure rate of a mechanical part is driven by random issues initially and by wear-out or usage-related issues at later stages of its lifetime. For better predictability of the failure rate, one needs to explore the failure-rate behavior in the wear-out zone of the bathtub curve. Due to cost and time constraints, it is not always possible to test samples to failure, but FEA fatigue analysis can provide the failure-rate behavior of a part well beyond the warranty period, more quickly and at lower cost. In this work, the authors propose an Additive Weibull Model, which makes use of both warranty and FEA fatigue analysis data for predicting failure rates. It involves modeling two data sets for a part: one with existing warranty claims and the other with fatigue life data. Hazard-rate-based Weibull estimation is used for modeling the warranty data, whereas S-N-curve-based Weibull parameter estimation is used for the FEA data. The two Weibull models' parameters are estimated separately and combined to form the proposed Additive Weibull Model for prediction.
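
The additive idea can be sketched as the sum of two Weibull hazard rates, one fitted to warranty claims and one to FEA fatigue life; all parameter values below are illustrative assumptions, not the paper's estimates:

def weibull_hazard(t, shape, scale):
    """Weibull hazard rate h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def additive_hazard(t):
    early = weibull_hazard(t, shape=0.8, scale=5.0)     # warranty fit: infant mortality
    wearout = weibull_hazard(t, shape=3.5, scale=12.0)  # FEA fatigue fit: wear-out
    return early + wearout

for years in (1, 5, 10, 15):
    print(f"t = {years:2d} y: h(t) = {additive_hazard(years):.4f}")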

Keywords: bathtub curve, fatigue, FEA, reliability, warranty, Weibull

Procedia PDF Downloads 51
2353 Analysis of a Discrete-time Geo/G/1 Queue Integrated with (s, Q) Inventory Policy at a Service Facility

Authors: Akash Verma, Sujit Kumar Samanta

Abstract:

This study examines a discrete-time Geo/G/1 queueing-inventory system operating under an (s, Q) inventory policy. Customer arrivals are assumed to follow a Bernoulli process. Each customer demands a single item, with arbitrarily distributed service time. The inventory is replenished by an outside supplier, and the lead time for replenishment follows a geometric distribution. There is a single server and infinite waiting space in this facility. Demands must wait in the designated waiting area during a stock-out period. Customers are served on a first-come-first-served basis. With the help of the embedded Markov chain technique, we determine the joint probability distribution of the number of customers in the system and the number of items in stock at the post-departure epoch using the matrix-analytic approach. We relate the system-length distributions at the post-departure and outside observer's epochs to determine the joint probability distribution at the outside observer's epoch. We use the probability distributions at random epochs to determine the waiting-time distribution. We then obtain the performance measures needed to construct the cost function. The optimal values of the order quantity and the reorder point are found numerically for a variety of model parameters.

Keywords: discrete-time queueing inventory model, matrix analytic method, waiting-time analysis, cost optimization

Procedia PDF Downloads 13
2352 Simplifying the Migration of Architectures in Embedded Applications Introducing a Pattern Language to Support the Workforce

Authors: Farha Lakhani, Michael J. Pont

Abstract:

There are two main architectures used to develop software for modern embedded systems: these can be labelled as “event-triggered” (ET) and “time-triggered” (TT). The research presented in this paper is concerned with the issues involved in migration between these two architectures. Although TT architectures are widely used in safety-critical applications, they are less familiar to developers of mainstream embedded systems. The research presented in this paper began from the premise that, for a broad class of systems that have been implemented using an ET architecture, migration to a TT architecture would improve reliability. It may be tempting to assume that conversion between ET and TT designs will simply involve converting all event-handling software routines into periodic activities. However, the required changes to the software architecture are, in many cases, rather more profound. The main contribution of the work presented in this paper is to identify ways in which the significant effort involved in migrating between existing ET architectures and “equivalent” (and effective) TT architectures can be reduced. The research described in this paper takes an innovative step in this regard by introducing, for the first time, the use of ‘design patterns’ for this purpose.

Keywords: embedded applications, software architectures, reliability, pattern

Procedia PDF Downloads 306
2351 Digital Joint Equivalent Channel Hybrid Precoding for Millimeterwave Massive Multiple Input Multiple Output Systems

Authors: Linyu Wang, Mingjun Zhu, Jianhong Xiang, Hanyu Jiang

Abstract:

Aiming at the problem that the spectral efficiency of hybrid precoding (HP) is too low in current millimeter wave (mmWave) massive multiple input multiple output (MIMO) systems, this paper proposes a digital joint equivalent channel hybrid precoding algorithm based on iteration of the digital encoding matrix. First, the objective function is expanded to obtain the relation equation, and the pseudo-inverse iterative function of the analog encoder is derived using the pseudo-inverse method; this resolves the greatly increased computation caused by the rank deficiency of the digital encoding matrix and reduces the overall complexity of hybrid precoding. Secondly, the analog encoding matrix and the millimeter-wave sparse channel matrix are combined into an equivalent channel, which is then subjected to Singular Value Decomposition (SVD) to obtain the digital encoding matrix; the derived pseudo-inverse iterative function is then used to iteratively regenerate the analog encoding matrix. The simulation results show that the proposed algorithm improves the system spectral efficiency by 10-20% compared with other algorithms, and its stability is also improved.
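
The equivalent-channel SVD step can be sketched as follows; the dimensions and the phase-only analog precoder are illustrative assumptions, and the iterative pseudo-inverse refinement is omitted:

import numpy as np

rng = np.random.default_rng(0)
Nt, Nrf, Ns = 64, 8, 4               # antennas, RF chains, streams (assumed)
H = (rng.standard_normal((16, Nt)) + 1j * rng.standard_normal((16, Nt))) / np.sqrt(2)
F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Nrf)))  # phase-only analog precoder

H_eq = H @ F_rf                       # equivalent channel
_, _, Vh = np.linalg.svd(H_eq)
F_bb = Vh.conj().T[:, :Ns]            # digital precoder from right singular vectors
F_bb *= np.sqrt(Ns) / np.linalg.norm(F_rf @ F_bb, "fro")  # power normalization
print(F_bb.shape)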

Keywords: mmWave, massive MIMO, hybrid precoding, singular value decomposition, equivalent channel

Procedia PDF Downloads 74
2350 On Reliability of a Credit Default Swap Contract during the EMU Debt Crisis

Authors: Petra Buzkova, Milos Kopa

Abstract:

The reliability of the credit default swap market was questioned repeatedly during the EMU debt crisis. This article examines whether this development influenced sovereign EMU CDS prices in general. We regress the CDS market price on a model risk-neutral CDS price obtained from an adopted reduced-form valuation model over the 2009-2013 period. We look for a break point in single-equation and multi-equation econometric models in order to show the changes in the relation between CDS market and model prices. Our results differ according to the risk profile of a country. We find that, in the case of riskier countries, the relationship between the market and model price changed when market participants started to question the ability of CDS contracts to protect their buyers; specifically, it weakened after the change. In the case of less risky countries, the change happened earlier, and the effect of a weakened relationship is not observed.
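
The break-point logic rests on a Chow-type test: fit the regression pooled and split at a candidate break date, then compare residual sums of squares. A self-contained sketch on synthetic data (not the paper's dataset):

import numpy as np

def sse(x, y):
    """Residual sum of squares of an OLS fit y ~ a + b*x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

def chow_stat(x, y, break_idx, k=2):
    """Chow F-statistic for a structural break at break_idx (k params)."""
    s_pooled = sse(x, y)
    s1 = sse(x[:break_idx], y[:break_idx])
    s2 = sse(x[break_idx:], y[break_idx:])
    n = len(x)
    return ((s_pooled - s1 - s2) / k) / ((s1 + s2) / (n - 2 * k))

rng = np.random.default_rng(1)
model_price = rng.uniform(50, 400, 120)
regime = np.r_[np.full(60, 1.0), np.full(60, 0.6)]  # weakened relation after t=60
market_price = model_price * regime + rng.normal(0, 10, 120)
print(f"Chow F-statistic at t=60: {chow_stat(model_price, market_price, 60):.2f}")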

Keywords: Chow stability test, credit default swap, debt crisis, reduced form valuation model, seemingly unrelated regression

Procedia PDF Downloads 240
2349 Joint Optimal Pricing and Lot-Sizing Decisions for an Advance Sales System under Stochastic Conditions

Authors: Maryam Ghoreishi, Christian Larsen

Abstract:

In this paper, we investigate the effect of stochastic inputs on the problem of joint optimal pricing and lot-sizing decisions, where the inventory cycle is divided into advance and spot sales periods. During the advance sales period, customers can make reservations, and customers with reservations can cancel their orders. During the spot sales period, customers receive their order as soon as it is placed, but they cannot make any reservation or cancellation in that period. We assume that the inter-arrival times during the advance and spot sales periods are exponentially distributed, with the arrival rate a decreasing function of price. Moreover, we assume that the number of cancelled reservations is binomially distributed. In addition, we assume that the deterioration process follows an exponential distribution. We investigate two cases. First, we consider a two-state case, in which we find the optimal price during the spot sales period and the optimal price during the advance sales period. Next, we develop a generalized case, extending the two-state case to allow dynamic prices during the spot sales period. We apply Markov decision theory to find the optimal solutions. In addition, for the generalized case, we apply the policy iteration algorithm to find the optimal prices, the optimal lot size and the maximum advance sales amount.

Keywords: inventory control, pricing, Markov decision theory, advance sales system

Procedia PDF Downloads 302
2348 Finite Element Simulation of RC Exterior Beam-Column Joints Using Damage Plasticity Model

Authors: A. M. Halahla, M. H. Baluch, M. K. Rahman, A. H. Al-Gadhib, M. N. Akhtar

Abstract:

In the present study, a 3D simulation of a typical exterior reinforced concrete (RC) beam-column joint (BCJ) strengthened with carbon fiber-reinforced plastic (CFRP) sheets is carried out. Numerical investigations are performed using nonlinear finite element (FE) analysis incorporating the concrete damage plasticity (CDP) model. For material behaviour, the concrete response in compression and tension softening was used, along with a linear plastic model with isotropic hardening for the reinforcing steel and a linear elastic lamina material model for the CFRP sheets, in the commercial FE software ABAQUS. The numerical models developed in the present study are validated against the results obtained from experiments under monotonic loading, using a hydraulic jack in displacement-control mode. The experimental program includes the casting of deficient BCJs loaded to failure, both un-strengthened and strengthened. The failure modes and deformation responses of the CFRP-strengthened and un-strengthened joints and the propagation of damage in the components of the BCJ are discussed. The finite element simulations are compared with the experimental results and are noted to yield reasonable agreement. The damage plasticity model was able to capture with good accuracy the ultimate load and the mode of failure of the beam-column joint.

Keywords: reinforced concrete, exterior beam-column joints, concrete damage plasticity model, computational simulation, 3-D finite element model

Procedia PDF Downloads 357
2347 An Inflatable and Foldable Knee Exosuit Based on Intelligent Management of Biomechanical Energy

Authors: Jing Fang, Yao Cui, Mingming Wang, Shengli She, Jianping Yuan

Abstract:

Wearable robotics is a potential solution for aiding gait rehabilitation in patients with lower-limb dyskinesia, such as those affected by knee osteoarthritis or stroke. Many wearable robots have been developed in the form of rigid exoskeletons, but their bulky devices, high cost and control complexity hinder their popularity in the field of gait rehabilitation. Thus, the development of a portable, compliant and low-cost wearable robot for gait rehabilitation is necessary. Inspired by Chinese traditional folding fans and balloon inflators, the authors present an inflatable, foldable and variable-stiffness knee exosuit (IFVSKE) in this paper. The pneumatic actuator of the IFVSKE was fabricated in the shape of a folding fan using thermoplastic polyurethane (TPU) fabric. The geometric and mechanical properties of the IFVSKE were characterized experimentally. To assist the knee joint intelligently, a control profile for the IFVSKE was proposed based on the concept of full-cycle management of biomechanical energy during human movement: the biomechanical energy of the knee joint over a walking gait cycle can be collected and released to assist joint motion simply by adjusting the inner pressure of the IFVSKE. Finally, a healthy subject walked with and without the IFVSKE to evaluate the assisting effects.

Keywords: biomechanical energy management, knee exosuit, gait rehabilitation, wearable robotics

Procedia PDF Downloads 140
2346 Structure Design of Vacuum Vessel with Large Openings for Spacecraft Thermal Vacuum Test

Authors: Han Xiao, Ruan Qi, Zhang Lei, Qi Yan

Abstract:

A space environment simulator is a facility used to conduct thermal tests for spacecraft, and the vacuum vessel is its main body. According to the requirements for thermal tests of the spacecraft and its solar array panels, the primary vessel and the side vessels are designed as a combined structure connected through an aperture, with an aperture ratio reaching 0.7. Since the vacuum vessel is subjected to 0.1 MPa external pressure during thermal testing, it is necessary to verify the vessel's strength and stability to ensure the simulator's reliability and safety. Considering the impact of large openings on the vacuum vessel structure, this paper explores the reinforcement design and analysis approach for vacuum vessels with large openings, using the vacuum vessel design of a large space environment simulator as an example. Tests showed that the reinforcement structure effectively fulfills the requirements for external pressure and gravity loads. This ensures the reliability of the space environment simulator, providing a guarantee for spacecraft development.

Keywords: vacuum vessel, large opening, space environment simulator, structure design

Procedia PDF Downloads 503
2345 Prediction of Structural Response of Reinforced Concrete Buildings Using Artificial Intelligence

Authors: Juan Bojórquez, Henry E. Reyes, Edén Bojórquez, Alfredo Reyes-Salazar

Abstract:

This paper addresses the use of artificial intelligence to obtain the structural reliability of reinforced concrete buildings. For this purpose, artificial neural networks (ANN) are developed to predict seismic demand hazard curves. In order to have enough input-output data to train the ANN, a set of reinforced concrete buildings (low-, mid-, and high-rise) is designed, and a probabilistic seismic hazard analysis is then performed to obtain the seismic demand hazard curves. The results are used as input-output data to train the ANN in a feedforward backpropagation model. The values of the seismic demand hazard curves predicted by the ANN are then compared against those from conventional methods. Finally, it is concluded that the computation time is significantly lower and that the predictions obtained from the ANN are accurate in comparison with the values obtained from conventional methods.
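
A minimal stand-in for such a feedforward backpropagation setup, using scikit-learn and synthetic inputs, since the paper's network architecture and feature set are not given in the abstract:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Assumed features: number of storeys, fundamental period, spectral ordinate
X = rng.uniform([3, 0.3, 0.1], [30, 3.0, 1.5], size=(500, 3))
# Target: log annual exceedance rate of a drift demand (synthetic stand-in)
y = -2.0 - 1.5 * X[:, 2] + 0.1 * X[:, 1] + rng.normal(0, 0.05, 500)

# Feedforward network trained by backpropagation
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                   random_state=0).fit(X[:400], y[:400])
print("test R^2:", round(ann.score(X[400:], y[400:]), 3))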

Keywords: structural reliability, seismic design, machine learning, artificial neural network, probabilistic seismic hazard analysis, seismic demand hazard curves

Procedia PDF Downloads 176
2344 Validating the Theme Park Service Quality Scale: A Case Study of Zhuhai Chimelong Ocean Kingdom

Authors: Kat Jingjing Luo

Abstract:

The development of theme parks in China has undergone rapid growth in the past decades. Increasing competition in service quality has forced theme park managers to consider the relationship between service quality and visitor satisfaction. Even though existing service quality measurements such as SERVQUAL and THEMEQUAL have been applied in related research, none of them is exclusive to Chinese theme park service quality. This study aims to investigate the service quality of the most popular theme park in China at present and to develop a unique, reliable and valid scale. The reliability and validity analysis results from a survey of over 200 tourists at Chimelong Ocean Kingdom in Zhuhai city, southern China, indicate that waiting time is an additional factor in the measurement of Chinese theme park service quality, not covered by the THEMEQUAL instrument (i.e., tangibles, reliability, responsiveness and access, assurance, empathy and courtesy). The newly developed scale gives a better understanding of service quality in the Chinese theme park industry, and managerial implications regarding how to improve theme park service quality are discussed.

Keywords: theme park, scale development, China, service quality

Procedia PDF Downloads 253
2343 Subsea Control Module (SCM) - A Vital Factor for Well Integrity and Production Performance in Deep Water Oil and Gas Fields

Authors: Okoro Ikechukwu Ralph, Fuat Kara

Abstract:

The discovery of hydrocarbon reserves has clearly drifted offshore and into deeper waters, areas where the industry still has limited knowledge and which were hitherto regarded as being out of reach. This shift presents significant and increased challenges in the technology required to guarantee the safety of personnel, the environment and equipment; to ensure high reliability of installed equipment; and to provide a high level of confidence in the security of investment and company reputation. Nowhere are these challenges more apparent than in subsea well integrity and production performance. The past two decades have witnessed an enormous rise in deep and ultra-deep water offshore field developments for the recovery of hydrocarbons, with subsea equipment installed at the seabed as the technology of choice for these developments. This paper discusses the role of the Subsea Control Module (SCM) as a vital factor in deep-water well integrity and production performance. A case study on deep-water well integrity and production performance is analysed.

Keywords: offshore reliability, production performance, subsea control module, well integrity

Procedia PDF Downloads 487
2342 Marginalized Two-Part Joint Models for Generalized Gamma Family of Distributions

Authors: Mohadeseh Shojaei Shahrokhabadi, Ding-Geng (Din) Chen

Abstract:

Positive continuous outcomes with a substantial number of zero values and incomplete longitudinal follow-up are quite common in medical cost data. To jointly model semi-continuous longitudinal cost data and survival data, and to provide marginalized covariate effect estimates, a marginalized two-part joint model (MTJM) has been developed for outcome variables with log-normal distributions. In this paper, we propose MTJM models for outcome variables from the generalized gamma (GG) family of distributions. The GG distribution constitutes a general family that includes nearly all of the most frequently used distributions, such as the gamma, exponential, Weibull and log-normal. In the proposed MTJM-GG model, the conditional mean from a conventional two-part model with a three-parameter GG distribution is parameterized to provide a marginal interpretation for the regression coefficients. In addition, MTJM-gamma and MTJM-Weibull are developed as special cases of MTJM-GG. To illustrate the applicability of the MTJM-GG, we applied the model to a set of real electronic health record data recently collected in Iran, and we provide SAS code for the application. The simulation results showed that when the outcome distribution is unknown or misspecified, which is usually the case in real data sets, the MTJM-GG consistently outperforms other models. The GG family of distributions facilitates estimating a model with improved fit over the MTJM-gamma, standard Weibull, or log-normal distributions.

Keywords: marginalized two-part model, zero-inflated, right-skewed, semi-continuous, generalized gamma

Procedia PDF Downloads 158