Search results for: regression models drone
7199 Comparing Business Excellence Models Using Quantitative Methods: A First Step
Authors: Mohammed Alanazi, Dimitrios Tsagdis
Abstract:
Established Business Excellence Models (BEMs), like the Malcolm Baldrige National Quality Award (MBNQA) model and the European Foundation for Quality Management (EFQM) model, have been adopted by firms all over the world. They exist alongside more recent country-specific BEMs; e.g. the Australian, Canadian, China, New Zealand, Singapore, and Taiwan quality awards that although not as widespread as MBNQA and EFQM have nonetheless strong national followings. Regardless of any differences in their following or prestige, the emergence and development of all BEMs have been shaped both by their local context (e.g. underlying socio-economic dynamics) as well as by global best practices. Besides such similarities, that render them into objects (i.e. models) of the same class (i.e. BEMs), BEMs exhibit non-trivial differences in their criteria, relations, and emphasis. Given the evolution of BEMs (e.g. the MBNQA underwent seven evolutions since its inception in 1987 while the EFQM five since 1993), it is unsurprising that comparative studies of their validity are few and far in between. This poses challenges for practitioners and policy makers alike; as it is not always clear which BEM is to be preferred or better fitting to a particular context. Especially, in contexts that differ substantially from the original context of BEM development. This paper aims to fill this gap by presenting a research design and measurement model for comparing BEMs using quantitative methods (e.g. structural equations). Three BEMs will be focused upon in particular for illustration purposes; the MBNQA, the EFQM, and the King Abdul Aziz Quality Award (KAQA) model. They have been selected so to reflect the two established and widely spread traditions as well as a more recent context-specific arrival promising a better fit.Keywords: Baldrige, business excellence, European Foundation for Quality Management, Structural Equation Model, total quality management
Procedia PDF Downloads 238
7198 Risk of Androgen Deprivation Therapy-Induced Metabolic Syndrome-Related Complications for Prostate Cancer in Taiwan
Authors: Olivia Rachel Hwang, Yu-Hsuan Joni Shao
Abstract:
Androgen Deprivation Therapy (ADT) has been a primary treatment for patients with advanced prostate cancer. It is, however, associated with numerous adverse effects related to Metabolic Syndrome (MetS), including hypertension, diabetes, hyperlipidaemia, heart disease and ischemic stroke, yet the complications associated with ADT for prostate cancer in Taiwan are not well documented. The purpose of this study is to use data from the NHIRD (National Health Insurance Research Database) to examine trajectory changes in MetS-related complications in men receiving ADT. The risks of developing complications after treatment were analyzed with a multivariate Cox regression model. Covariates included in the model were the complications present before the diagnosis of prostate cancer, age, and year of cancer diagnosis. A total of 17,268 patients from 1997-2013 were included in this study; patients with any other type of cancer or with pre-existing MetS-related complications were excluded. Changes in MetS-related complications were observed in two treatment groups: 1) ADT (n=9,042) and 2) non-ADT (n=8,226). In the multivariate Cox regression analyses, the ADT group showed an increased risk of hypertension (hazard ratio 1.08, 95% confidence interval 1.03-1.13, P = 0.001) and hyperlipidemia (hazard ratio 1.09, 95% confidence interval 1.01-1.17, P = 0.02) compared with the non-ADT group. For diabetes, heart disease, and ischemic stroke, the ADT group showed increased but non-significant hazard ratios. In conclusion, ADT was associated with an increased risk of hypertension and hyperlipidemia in prostate cancer patients in Taiwan. These risks should be considered when deciding on ADT, especially for patients with a known history of hypertension or hyperlipidemia. Keywords: androgen deprivation therapy, ADT, complications, metabolic syndrome, MetS, prostate cancer
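For readers unfamiliar with the multivariate Cox regression used above, the sketch below fits a proportional-hazards model with the lifelines library on synthetic data; the column names and simulated values are assumptions for illustration only, not the study's NHIRD records.

```python
# Minimal sketch of a multivariate Cox proportional-hazards fit (synthetic data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "adt": rng.integers(0, 2, n),          # 1 = ADT group, 0 = non-ADT
    "age_at_dx": rng.normal(70, 8, n),     # age at cancer diagnosis
    "year_of_dx": rng.integers(1997, 2014, n),
})
df["time_to_event"] = rng.exponential(8, n)   # years of follow-up (synthetic)
df["hypertension"] = rng.integers(0, 2, n)    # 1 = developed the complication

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event", event_col="hypertension")
cph.print_summary()   # exp(coef) column gives hazard ratios with 95% CIs
```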
Procedia PDF Downloads 288
7197 Revealing the Risks of Obstructive Sleep Apnea
Authors: Oyuntsetseg Sandag, Lkhagvadorj Khosbayar, Naidansuren Tsendeekhuu, Densenbal Dansran, Bandi Solongo
Abstract:
Introduction: Obstructive sleep apnea (OSA) is a common disorder affecting at least 2% to 4% of the adult population. It is estimated that nearly 80% of men and 93% of women with moderate to severe sleep apnea are undiagnosed. A number of screening questionnaires and clinical screening models have been developed to help identify patients with OSA, and these are needed in clinical practice. Purpose of study: To determine the association between OSA risk severity and its risk factors. Material and Methods: A cross-sectional study included 114 patients presenting to the Central State Third Hospital and the Central State First Hospital. Patients with obstructive sleep apnea (OSA) were selected for this study. The standard STOP-Bang questionnaire was administered to all patients, and according to their responses patients were divided into low-risk, intermediate-risk, and high-risk groups. Descriptive statistics are presented as mean ± standard deviation (SD). Each questionnaire was compared on the likelihood ratio for a positive result and the likelihood ratio for a negative result from the regression. Statistical analyses were performed using SPSS 16. Results: 114 patients were enrolled (mean age 48 ± 16, 57 male) and divided into low risk 54 (47.4%), intermediate risk 33 (28.9%), and high risk 27 (23.7%). Across increasing risk groups, mean age (38 ± 13 vs. 54 ± 14 vs. 59 ± 10, p<0.05), blood pressure (115 ± 18 vs. 133 ± 19 vs. 142 ± 21, p<0.05), BMI (24 IQR 22; 26 vs. 24 IQR 22; 29 vs. 28 IQR 25; 34, p<0.001) and neck circumference (35 ± 3.4 vs. 38 ± 4.7 vs. 41 ± 4.4, p<0.05) increased significantly. Multiple logistic regression showed that age is a significant independent factor for OSA (odds ratio 1.07, 95% CI 1.02-1.23, p<0.01). The predictive value of age for OSA was significantly high (AUC=0.833, 95% CI 0.758-0.909, p<0.001). Our study shows that the risk of OSA begins at 47 years of age (sensitivity 78.3%, specificity 74.1%). Conclusions: Most patients' responses indicated intermediate or high risk. Age, blood pressure, neck circumference and BMI increased as the risk for OSA increased. Age in particular is an independent factor of highest significance for OSA: each additional year of age increases the likelihood of OSA by approximately 1.1 times. Keywords: obstructive sleep apnea, Stop-Bang, BMI (Body Mass Index), blood pressure
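The multiple logistic regression and ROC analysis described above can be sketched as follows; the data are synthetic and the outcome definition is an assumption for demonstration, not the study's patient records.

```python
# Sketch of logistic regression odds ratios and AUC for an OSA risk outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 114
age = rng.normal(48, 16, n)
bmi = rng.normal(26, 4, n)
neck = rng.normal(38, 4, n)
# hypothetical binary outcome: 1 = intermediate/high STOP-Bang risk
y = (0.08 * age + 0.2 * (bmi - 26) + rng.normal(0, 2, n) > 4).astype(int)

X = np.column_stack([age, bmi, neck])
model = LogisticRegression(max_iter=1000).fit(X, y)
print("odds ratios:", np.exp(model.coef_))   # e.g. roughly 1.07 per year of age

probs = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, probs))       # discriminative ability of the model
```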
Procedia PDF Downloads 310
7196 Why and When to Teach Definitions: Necessary and Unnecessary Discontinuities Resulting from the Definition of Mathematical Concepts
Authors: Josephine Shamash, Stuart Smith
Abstract:
We examine reasons for introducing definitions in teaching mathematics in a number of different cases. We try to determine if, where, and when to provide a definition, and which definition to choose. We characterize different types of definitions and the different purposes we may have for formulating them, and detail examples of each type. Giving a definition at a certain stage can sometimes be detrimental to the development of the concept image. In such a case, it is advisable to delay the precise definition to a later stage. We describe two models, the 'successive approximation model', and the 'model of the extending definition' that fit such situations. Detailed examples that fit the different models are given based on material taken from a number of textbooks, and analysis of the way the concept is introduced, and where and how its definition is given. Our conclusions, based on this analysis, is that some of the definitions given may cause discontinuities in the learning sequence and constitute obstacles and unnecessary cognitive conflicts in the formation of the concept definition. However, in other cases, the discontinuity in passing from definition to definition actually serves a didactic purpose, is unavoidable for the mathematical evolution of the concept image, and is essential for students to deepen their understanding.Keywords: concept image, mathematical definitions, mathematics education, mathematics teaching
Procedia PDF Downloads 129
7195 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss better classifier algorithm options for credit risk assessment by applying different Machine Learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, where 2,600 clients are classified as non-defaulters, 1,551 are classified as defaulters and 1,281 are temporarily defaulters, meaning that the clients are overdue on their payments for up 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and finally Support Vector Machines (SVM). For each method, different parameters were analyzed in order to obtain different results when the best of each technique was compared. Initially the data were coded in thermometer code (numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated for each parameter and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, on the classification of temporarily defaulters, this technique, in terms of false positives, was surpassed by SVM, which had the lowest rate (0.07%) of false positive classifications. All these intrinsic details are discussed considering the results found, and an overview of what was presented is shown in the conclusion of this study.Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine Learning, support vector machines
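A rough sketch of the one-against-all comparison described above is given below on synthetic data. LogisticRegression, an MLP and an RBF-kernel SVM stand in for the LR, ANN-MLP and ANN-RBF/SVM models; the dataset, class labels and parameters are invented for illustration.

```python
# One-vs-rest comparison of classifiers on a synthetic 3-class credit dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# 3 classes: non-defaulter, defaulter, temporarily defaulter (synthetic stand-ins)
X, y = make_classification(n_samples=5432, n_features=15, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "ANN-MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0),
    "SVM (RBF kernel)": SVC(kernel="rbf"),
}
for name, base in models.items():
    clf = OneVsRestClassifier(base).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, "accuracy:", round(accuracy_score(y_te, pred), 3))
    # true/false positives and negatives per class can be read off the confusion matrix
    print(confusion_matrix(y_te, pred))
```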
Procedia PDF Downloads 103
7194 On-Line Data-Driven Multivariate Statistical Prediction Approach to Production Monitoring
Authors: Hyun-Woo Cho
Abstract:
Detection of incipient abnormal events in production processes is important to improve the safety and reliability of manufacturing operations and to reduce losses caused by failures. The construction of calibration models for predicting faulty conditions is essential in making decisions on when to perform preventive maintenance. This paper presents a multivariate calibration monitoring approach based on the statistical analysis of process measurement data. The calibration model is used to predict faulty conditions from historical reference data. The approach utilizes variable selection techniques, and the predictive performance of several prediction methods is evaluated using real data. The results show that the calibration model based on a supervised probabilistic model yielded the best performance in this work. By adopting a proper variable selection scheme, the prediction performance of calibration models can be improved by excluding non-informative variables from the model-building step. Keywords: calibration model, monitoring, quality improvement, feature selection
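The effect of excluding non-informative variables can be illustrated schematically as below. This is not the paper's method or data: a univariate filter and a naive Bayes classifier stand in for the variable selection scheme and the supervised probabilistic calibration model.

```python
# Schematic comparison of a calibration model with and without variable selection.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB          # stand-in for a supervised probabilistic model
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# y = normal vs. faulty condition; only 6 of 30 variables are informative
X, y = make_classification(n_samples=400, n_features=30, n_informative=6,
                           n_redundant=0, random_state=0)

full = GaussianNB()
selected = make_pipeline(SelectKBest(f_classif, k=6), GaussianNB())

print("all variables:     ", cross_val_score(full, X, y, cv=5).mean())
print("selected variables:", cross_val_score(selected, X, y, cv=5).mean())
```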
Procedia PDF Downloads 356
7193 Dispersion Rate of Spilled Oil in Water Column under Non-Breaking Water Waves
Authors: Hanifeh Imanian, Morteza Kolahdoozan
Abstract:
The purpose of this study is to present a mathematical expression for calculating the dispersion rate of spilled oil in the water column under non-breaking waves. A multiphase numerical model was applied in which the wave and oil phases were computed concurrently, and the accuracy of its hydraulic calculations has been proven. More than 200 scenarios of oil spilling in wavy waters were simulated using the multiphase numerical model, and the outcomes were collected in a database. The recorded results were investigated to identify the major parameters affecting vertical oil dispersion, and six parameters were identified as the main independent factors. Furthermore, statistical tests were conducted to identify relationships between the dependent variable (dispersed oil mass in the water column) and the independent variables (water wave specifications, comprising height, length and wave period, and spilled oil characteristics, including density, viscosity and spilled oil mass). Finally, a mathematical-statistical relationship is proposed to predict dispersed oil in marine waters. To verify the proposed relationship, a laboratory case available in the literature was selected; the oil mass rate penetrating the water body computed by the suggested regression showed good agreement with the experimental data. The validated mathematical-statistical expression is a useful tool for oil dispersion prediction in oil spill events in marine areas. Keywords: dispersion, marine environment, mathematical-statistical relationship, oil spill
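A regression of the same general form (dispersed oil mass as a function of the six wave and oil parameters) can be sketched as below; the synthetic data, units and the assumed functional relationship are placeholders, not the study's simulation database.

```python
# Illustrative multiple regression of dispersed oil mass on wave and oil parameters.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "wave_height": rng.uniform(0.1, 2.0, n),    # m
    "wave_length": rng.uniform(5, 60, n),       # m
    "wave_period": rng.uniform(2, 10, n),       # s
    "oil_density": rng.uniform(820, 950, n),    # kg/m3
    "oil_viscosity": rng.uniform(5, 500, n),    # cSt
    "spilled_mass": rng.uniform(10, 1000, n),   # kg
})
# synthetic response: dispersed mass grows with wave height and spilled mass
df["dispersed_mass"] = (0.3 * df["spilled_mass"] * df["wave_height"]
                        / np.sqrt(df["oil_viscosity"]) + rng.normal(0, 5, n))

X = sm.add_constant(df.drop(columns="dispersed_mass"))
model = sm.OLS(df["dispersed_mass"], X).fit()
print(model.summary())   # coefficients of the fitted mathematical-statistical relationship
```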
Procedia PDF Downloads 233
7192 Effect of Plasticizer Additives on the Mechanical Properties of Cement Composite: A Molecular Dynamics Analysis
Authors: R. Mohan, V. Jadhav, A. Ahmed, J. Rivas, A. Kelkar
Abstract:
Cementitious materials are an excellent example of a composite material with complex hierarchical features and random features that range from nanometer (nm) to millimeter (mm) scale. Multi-scale modeling of complex material systems requires starting from fundamental building blocks to capture the scale relevant features through associated computational models. In this paper, molecular dynamics (MD) modeling is employed to predict the effect of plasticizer additive on the mechanical properties of key hydrated cement constituent calcium-silicate-hydrate (CSH) at the molecular, nanometer scale level. Due to complexity, still unknown molecular configuration of CSH, a representative configuration widely accepted in the field of mineral Jennite is employed. The effectiveness of the Molecular Dynamics modeling to understand the predictive influence of material chemistry changes based on molecular/nanoscale models is demonstrated.Keywords: cement composite, mechanical properties, molecular dynamics, plasticizer additives
Procedia PDF Downloads 454
7191 Kou Jump Diffusion Model: An Application to the S&P 500, Nasdaq 100 and Russell 2000 Index Options
Authors: Wajih Abbassi, Zouhaier Ben Khelifa
Abstract:
The present research points towards the empirical validation of three options valuation models, the ad-hoc Black-Scholes model as proposed by Berkowitz (2001), the constant elasticity of variance model of Cox and Ross (1976) and the Kou jump-diffusion model (2002). Our empirical analysis has been conducted on a sample of 26,974 options written on three indexes, the S&P 500, Nasdaq 100 and the Russell 2000 that were negotiated during the year 2007 just before the sub-prime crisis. We start by presenting the theoretical foundations of the models of interest. Then we use the technique of trust-region-reflective algorithm to estimate the structural parameters of these models from cross-section of option prices. The empirical analysis shows the superiority of the Kou jump-diffusion model. This superiority arises from the ability of this model to portray the behavior of market participants and to be closest to the true distribution that characterizes the evolution of these indices. Indeed the double-exponential distribution covers three interesting properties that are: the leptokurtic feature, the memory less property and the psychological aspect of market participants. Numerous empirical studies have shown that markets tend to have both overreaction and under reaction over good and bad news respectively. Despite of these advantages there are not many empirical studies based on this model partly because probability distribution and option valuation formula are rather complicated. This paper is the first to have used the technique of nonlinear curve-fitting through the trust-region-reflective algorithm and cross-section options to estimate the structural parameters of the Kou jump-diffusion model.Keywords: jump-diffusion process, Kou model, Leptokurtic feature, trust-region-reflective algorithm, US index options
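The parameter estimation step can be sketched structurally as below using SciPy's trust-region-reflective solver (method="trf"). The pricing function here is a simple placeholder, not the Kou (2002) double-exponential jump-diffusion formula, and the market quotes are invented.

```python
# Structural sketch of calibrating model parameters to a cross-section of option prices.
import numpy as np
from scipy.optimize import least_squares

strikes = np.array([90.0, 95.0, 100.0, 105.0, 110.0])
market_prices = np.array([12.1, 8.4, 5.6, 3.5, 2.1])

def model_price(params, k):
    a, b = params                       # placeholder "structural parameters"
    return a * np.exp(-b * k / 100.0)   # stand-in for the jump-diffusion pricing formula

def residuals(params):
    return model_price(params, strikes) - market_prices

fit = least_squares(residuals, x0=[20.0, 1.0],
                    bounds=([0.0, 0.0], [np.inf, np.inf]),  # TRF handles bound constraints
                    method="trf")
print("estimated parameters:", fit.x, "residual cost:", fit.cost)
```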
Procedia PDF Downloads 429
7190 Metrology-Inspired Methods to Assess the Biases of Artificial Intelligence Systems
Authors: Belkacem Laimouche
Abstract:
With the field of artificial intelligence (AI) experiencing exponential growth, fueled by technological advancements that pave the way for increasingly innovative and promising applications, there is an escalating need to develop rigorous methods for assessing their performance in pursuit of transparency and equity. This article proposes a metrology-inspired statistical framework for evaluating bias and explainability in AI systems. Drawing from the principles of metrology, we propose a pioneering approach, using a concrete example, to evaluate the accuracy and precision of AI models, as well as to quantify the sources of measurement uncertainty that can lead to bias in their predictions. Furthermore, we explore a statistical approach for evaluating the explainability of AI systems based on their ability to provide interpretable and transparent explanations of their predictions.Keywords: artificial intelligence, metrology, measurement uncertainty, prediction error, bias, machine learning algorithms, probabilistic models, interlaboratory comparison, data analysis, data reliability, measurement of bias impact on predictions, improvement of model accuracy and reliability
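One simple way to read the metrology framing above is to report bias, precision and an expanded uncertainty for a model's predictions; the sketch below does this on made-up reference values and is only an illustration, not the paper's proposed framework.

```python
# Metrology-style figures for AI predictions: bias, standard and expanded uncertainty.
import numpy as np

reference = np.array([10.0, 12.5, 9.8, 11.2, 10.7, 13.1])   # reference ("true") values
predicted = np.array([10.4, 12.9, 10.1, 11.0, 11.3, 13.6])  # model outputs

errors = predicted - reference
bias = errors.mean()                            # systematic component of the error
u = errors.std(ddof=1) / np.sqrt(len(errors))   # standard uncertainty of the mean error
U = 2 * u                                       # expanded uncertainty with coverage factor k = 2

print(f"bias = {bias:.3f}, standard uncertainty = {u:.3f}, expanded (k=2) = {U:.3f}")
```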
Procedia PDF Downloads 105
7189 Estimating CO₂ Storage Capacity under Geological Uncertainty Using 3D Geological Modeling of Unconventional Reservoir Rocks in Block nv32, Shenvsi Oilfield, China
Authors: Ayman Mutahar Alrassas, Shaoran Ren, Renyuan Ren, Hung Vo Thanh, Mohammed Hail Hakimi, Zhenliang Guan
Abstract:
The significant effect of CO₂ on global climate and the environment has gained more concern worldwide. Enhance oil recovery (EOR) associated with sequestration of CO₂ particularly into the depleted oil reservoir is considered the viable approach under financial limitations since it improves the oil recovery from the existing oil reservoir and boosts the relation between global-scale of CO₂ capture and geological sequestration. Consequently, practical measurements are required to attain large-scale CO₂ emission reduction. This paper presents an integrated modeling workflow to construct an accurate 3D reservoir geological model to estimate the storage capacity of CO₂ under geological uncertainty in an unconventional oil reservoir of the Paleogene Shahejie Formation (Es1) in the block Nv32, Shenvsi oilfield, China. In this regard, geophysical data, including well logs of twenty-two well locations and seismic data, were combined with geological and engineering data and used to construct a 3D reservoir geological modeling. The geological modeling focused on four tight reservoir units of the Shahejie Formation (Es1-x1, Es1-x2, Es1-x3, and Es1-x4). The validated 3D reservoir models were subsequently used to calculate the theoretical CO₂ storage capacity in the block Nv32, Shenvsi oilfield. Well logs were utilized to predict petrophysical properties such as porosity and permeability, and lithofacies and indicate that the Es1 reservoir units are mainly sandstone, shale, and limestone with a proportion of 38.09%, 32.42%, and 29.49, respectively. Well log-based petrophysical results also show that the Es1 reservoir units generally exhibit 2–36% porosity, 0.017 mD to 974.8 mD permeability, and moderate to good net to gross ratios. These estimated values of porosity, permeability, lithofacies, and net to gross were up-scaled and distributed laterally using Sequential Gaussian Simulation (SGS) and Simulation Sequential Indicator (SIS) methods to generate 3D reservoir geological models. The reservoir geological models show there are lateral heterogeneities of the reservoir properties and lithofacies, and the best reservoir rocks exist in the Es1-x4, Es1-x3, and Es1-x2 units, respectively. In addition, the reservoir volumetric of the Es1 units in block Nv32 was also estimated based on the petrophysical property models and fund to be between 0.554368Keywords: CO₂ storage capacity, 3D geological model, geological uncertainty, unconventional oil reservoir, block Nv32
Procedia PDF Downloads 179
7188 Use of Real Time Ultrasound for the Prediction of Carcass Composition in Serrana Goats
Authors: Antonio Monteiro, Jorge Azevedo, Severiano Silva, Alfredo Teixeira
Abstract:
The objective of this study was to compare carcass and in vivo real-time ultrasound (RTU) measurements and their capacity to predict the composition of Serrana goats up to 40% of maturity. Twenty-one females (11.1 ± 3.97 kg) and twenty-one males (15.6 ± 5.38 kg) were used to make in vivo measurements with a 5 MHz probe (ALOKA 500V scanner) at the 9th-10th and 10th-11th thoracic vertebrae (uT910 and uT1011, respectively), at the 1st-2nd, 3rd-4th, and 4th-5th lumbar vertebrae (uL12, uL34 and uL45, respectively) and also at the 3rd-4th sternebrae (EEST). Images of the RTU measurements of Longissimus thoracis et lumborum muscle (LTL) depth (EM), width (LM), perimeter (PM), area (AM) and subcutaneous fat thickness (SFD) above the LTL were recorded, as well as the depth of the tissues of the sternum (EEST) between the 3rd-4th sternebrae. All RTU images were analyzed using the ImageJ software. After slaughter, the carcasses were stored at 4 ºC for 24 h. After this period the carcasses were divided and the left half was entirely dissected into muscle, dissected fat (subcutaneous fat plus intermuscular fat) and bone. Prior to dissection, measurements equivalent to those obtained in vivo with RTU were recorded. Correlation and regression analyses were performed using Statistica 5. The prediction of carcass composition was achieved by a stepwise regression procedure, using live weight and RTU measurements with and without transformation of the variables to the same dimension. The RTU and carcass measurements, except for the SFD measurements, showed high correlation (r > 0.60, P < 0.001). The RTU measurements and the live weight showed the ability to predict carcass composition for muscle (R2 = 0.99, P < 0.001), subcutaneous fat (R2 = 0.41, P < 0.001), intermuscular fat (R2 = 0.84, P < 0.001), dissected fat (R2 = 0.71, P < 0.001) and bone (R2 = 0.94, P < 0.001). The transformation of variables allowed a slight increase in precision, but at the cost of an increase in the number of variables, with the exception of subcutaneous fat prediction. In vivo RTU measurements can be applied to predict kid goat carcass composition from five RTU measurements and the live weight. Keywords: carcass, goats, real time, ultrasound
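A rough analogue of the stepwise procedure, on synthetic data, is sketched below using forward sequential feature selection; the variable names and the simulated response are invented, not the Serrana goat dataset.

```python
# Forward selection of RTU predictors plus live weight for a tissue-weight regression.
import numpy as np
import pandas as pd
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 42
df = pd.DataFrame({
    "live_weight": rng.normal(13, 4, n),
    "uT910_depth": rng.normal(20, 3, n),
    "uL12_area":   rng.normal(8, 2, n),
    "EEST_depth":  rng.normal(15, 3, n),
})
# synthetic response: carcass muscle weight driven mainly by live weight
muscle = 0.25 * df["live_weight"] + 0.05 * df["uT910_depth"] + rng.normal(0, 0.2, n)

lr = LinearRegression()
sfs = SequentialFeatureSelector(lr, n_features_to_select=2, direction="forward")
sfs.fit(df, muscle)
selected = df.columns[sfs.get_support()]
r2 = lr.fit(df[selected], muscle).score(df[selected], muscle)
print("selected predictors:", list(selected), " R2 =", round(r2, 3))
```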
Procedia PDF Downloads 261
7187 Effects of Crisis-Induced Emotions on in-Crisis Protective Behavior and Post-Crisis Perception: An Analysis of Survey Data for the 2015 Middle East Respiratory Syndrome in South Korea
Authors: Myoungsoon You, Heejung Son
Abstract:
Background: In the current study, we investigated the effects of emotions induced by an infectious disease outbreak on the various protective behaviors taken during the crisis and on the perception after the crisis. The investigation was based on two psychological theories of appraisal tendency and action tendency. Methods: A total of 900 participants in South Korea who experienced the 2015 Middle East Respiratory Syndrome outbreak were sampled by a professional survey agency. To assess the influence of the emotions fear and anger, a regression approach was used. The effect of emotions on various protective behaviors and perceptions was observed using a hierarchical regression method. Results: Fear and anger induced by the infectious disease outbreak were both associated with increased protective behaviors during the crisis. However, the differences between the emotions were observed. While protective behaviors with avoidance tendency (adherence to recommendations, self-mitigation), were raised by both fear and anger, protective behaviors with approach tendency (information-seeking) were increased by anger, but not fear. Regarding the effect of emotion on the risk perception after the crisis, only fear was associated with a higher level of risk perception. Conclusions: This study confirmed the role of emotions in crisis protective behaviors and post-crisis perceptions regarding an infectious disease outbreak. These findings could enhance understanding of the public’s protective behaviors during infectious disease outbreaks and afterward risk perception corresponding to emotions. The results also suggested strategies for communicating with the public that takes into account emotions that are prominently induced by crises associated with disease outbreaks.Keywords: crisis communication, emotion, infectious disease outbreak, protective behavior, risk perception
Procedia PDF Downloads 275
7186 Economics of Milled Rice Marketing in Gombe Metropolis, Gombe State, Nigeria
Authors: Suleh Yusufu Godi, Ado Makama Adamu
Abstract:
Marketing involves all the legal, physical, and economic services which are necessary in moving products from producer to consumers. The more efficient the marketing functions are performed the better the marketing system for the farmers, marketing agents, and the society at large. Rice marketing ensures the flow of product from producers to consumers in the form, time and place of need. Therefore, this study examined profitability of milled rice marketing in Gombe metropolis, Gombe State. Data were collected using structured questionnaires from ninety randomly selected rice marketers in Gombe metropolis. The data were analyzed using descriptive statistics, farm budget technique and regression analysis. The study revealed the total rice marketing cost incurred by rice marketers to be N6, 610,214.70. This gave an average of N73, 446.83 per marketer and N37.30 per Kilogram of rice. The Gross Income for rice marketers in Gombe metropolis was N15, 064,600.00. This value gave an average of N167, 384.44 per rice marketer or N85.00 per kilogram of rice. The study also revealed net income for all rice marketers to be N8, 454,385.30. This gave an average of N93, 937.61 per rice marketer or N47.70 per Kilogram of rice. The study further revealed a marketing margin, marketing efficiency and return per naira invested on rice marketing to be 39.30%, 150.16% and N0.56, respectively. The result of regression analysis shows that age, sex and cost of transportation are positive and significantly affect marketing margin of rice marketers in Gombe Metropolis. However, the main constraints to rice marketing in Gombe metropolis include inadequate electricity, capital, high transportation cost, instability of prices and low patronage among others. The study recommends provision of adequate electrical power supply in the State especially the State capital and also encouraging rice marketers in Gombe metropolis to form cooperative societies so as to have easy access to credit facilities especially from the formal sources.Keywords: rice marketers, milled rice, cost and return, marketing margin, efficiency, profitability
Procedia PDF Downloads 80
7185 Multiplying Vulnerability of Child Health Outcome and Food Diversity in India
Authors: Mukesh Ravi Raushan
Abstract:
Despite consideration of obesity as a deadly public health issue contributing 2.6 million deaths worldwide every year developing country like India is facing malnutrition and it is more common than in Sub-Saharan Africa. About one in every three malnourished children in the world lives in India. The paper assess the nutritional health among children using data from total number of 43737 infant and young children aged 0-59 months (µ = 29.54; SD = 17.21) of the selected households by National Family Health Survey, 2005-06. The wasting was measured by a Z-score of standardized weight-for-height according to the WHO child growth standards. The impact of education with place of residence was found to be significantly associated with the complementary food diversity score (CFDS) in India. The education of mother was positively associated with the CFDS but the degree of performance was lower in rural India than their counterpart from urban. The result of binary logistic regression on wasting with WHO seven types of recommended food for children in India suggest that child who consumed the milk product food (OR: 0.87, p<0.0001) were less likely to be malnourished than their counterparts who did not consume, whereas, in case of other food items as the child who consumed food product of seed (OR: 0.75, p<0.0001) were less likely to be malnourished than those who did not. The nutritional status among children were negatively associated with the protein containing complementary food given the child as those child who received pulse in last 24 hour were less likely to be wasted (OR: 0.87, p<0.00001) as compared to the reference categories. The frequency to feed the indexed child increases by 10 per cent the expected change in child health outcome in terms of wasting decreases by 2 per cent in India when place of residence, education, religion, and birth order were controlled. The index gets improved as the risk for malnutrition among children in India decreases.Keywords: CFDS, food diversity index, India, logistic regression
Procedia PDF Downloads 261
7184 Risk of Fractures at Different Anatomic Sites in Patients with Irritable Bowel Syndrome: A Nationwide Population-Based Cohort Study
Authors: Herng-Sheng Lee, Chi-Yi Chen, Wan-Ting Huang, Li-Jen Chang, Solomon Chih-Cheng Chen, Hsin-Yi Yang
Abstract:
A variety of gastrointestinal disorders, such as Crohn’s disease, ulcerative colitis, and coeliac disease, are recognized as risk factors for osteoporosis and osteoporotic fractures. One recent study suggests that individuals with irritable bowel syndrome (IBS) might also be at increased risk of osteoporosis and osteoporotic fractures. Up to now, the association between IBS and the risk of fractures at different anatomic sites occurrences is not completely clear. We conducted a population-based cohort analysis to investigate the fracture risk of IBS in comparison with non-IBS group. We identified 29,505 adults aged ≥ 20 years with newly diagnosed IBS using the Taiwan National Health Insurance Research Database in 2000-2012. A comparison group was constructed of patients without IBS who were matched according to gender and age. The occurrence of fracture was monitored until the end of 2013. We analyzed the risk of fracture events to occur in IBS by using Cox proportional hazards regression models. Patients with IBS had a higher incidence of osteoporotic fractures compared with non-IBS group (12.34 versus 9.45 per 1,000 person-years) and an increased risk of osteoporotic fractures (adjusted hazard ratio [aHR] = 1.27, 95 % confidence interval [CI] = 1.20 – 1.35). Site specific analysis showed that the IBS group had a higher risk of fractures for spine, forearm, hip and hand than did the non-IBS group. With further stratification for gender and age, a higher aHR value for osteoporotic fractures in IBS group was seen across all age groups in males, but seen in elderly females. In addition, female, elderly, low income, hypertension, coronary artery disease, cerebrovascular disease, and depressive disorders as independent osteoporotic fracture risk factors in IBS patients. The IBS is considered as a risk factor for osteoporotic fractures, particularly in female individuals and fracture sites located at the spine, forearm, hip and hand.Keywords: irritable bowel syndrome, fracture, gender difference, longitudinal health insurance database, public health
Procedia PDF Downloads 229
7183 The Relationships among Learning Emotion, Major Satisfaction, Learning Flow, and Academic Achievement in Medical School Students
Authors: S. J. Yune, S. Y. Lee, S. J. Im, B. S. Kam, S. Y. Baek
Abstract:
This study explored whether academic emotion, major satisfaction, and learning flow are associated with academic achievement in medical school. We know that emotion and affective factors are important factors in students' learning and performance. Emotion has taken the stage in much of contemporary educational psychology literature, no longer relegated to secondary status behind traditionally studied cognitive constructs. Medical school students (n=164) completed academic emotion, major satisfaction, and learning flow online survey. Academic performance was operationalized as students' average grade on two semester exams. For data analysis, correlation analysis, multiple regression analysis, hierarchical multiple regression analyses and ANOVA were conducted. The results largely confirmed the hypothesized relations among academic emotion, major satisfaction, learning flow and academic achievement. Positive academic emotion had a correlation with academic achievement (β=.191). Positive emotion had 8.5% explanatory power for academic achievement. Especially, sense of accomplishment had a significant impact on learning performance (β=.265). On the other hand, negative emotion, major satisfaction, and learning flow did not affect academic performance. Also, there were differences in sense of great (F=5.446, p=.001) and interest (F=2.78, p=.043) among positive emotion, boredom (F=3.55, p=.016), anger (F=4.346, p=.006), and petulance (F=3.779, p=.012) among negative emotion by grade. This study suggested that medical students' positive emotion was an important contributor to their academic achievement. At the same time, it is important to consider that some negative emotions can act to increase one’s motivation. Of particular importance is the notion that instructors can and should create learning environment that foster positive emotion for students. In doing so, instructors improve their chances of positively impacting students’ achievement emotions, as well as their subsequent motivation, learning, and performance. This result had an implication for medical educators striving to understand the personal emotional factors that influence learning and performance in medical training.Keywords: academic achievement, learning emotion, learning flow, major satisfaction
Procedia PDF Downloads 273
7182 Large Language Model Powered Chatbots Need End-to-End Benchmarks
Authors: Debarag Banerjee, Pooja Singh, Arjun Avadhanam, Saksham Srivastava
Abstract:
Autonomous conversational agents, i.e., chatbots, are becoming an increasingly common mechanism for enterprises to provide support to customers and partners. In order to rate chatbots, especially ones powered by Generative AI tools like Large Language Models (LLMs), we need to be able to accurately assess their performance. This is where chatbot benchmarking becomes important. In this paper, authors propose the use of a benchmark that they call the E2E (End to End) benchmark and show how the E2E benchmark can be used to evaluate the accuracy and usefulness of the answers provided by chatbots, especially ones powered by LLMs. The authors evaluate an example chatbot at different levels of sophistication based on both our E2E benchmark as well as other available metrics commonly used in the state of the art and observe that the proposed benchmark shows better results compared to others. In addition, while some metrics proved to be unpredictable, the metric associated with the E2E benchmark, which uses cosine similarity, performed well in evaluating chatbots. The performance of our best models shows that there are several benefits of using the cosine similarity score as a metric in the E2E benchmark.Keywords: chatbot benchmarking, end-to-end (E2E) benchmarking, large language model, user centric evaluation.
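The cosine-similarity scoring idea behind an E2E-style benchmark can be sketched as follows: embed the chatbot's answer and a "golden" reference answer and compare the vectors. The embedding library and model name below are assumptions for illustration, not the paper's setup.

```python
# Minimal sketch of scoring a chatbot answer against a reference by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # any sentence-embedding model would do

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

reference = "You can reset your password from the account settings page."
answer = "Go to account settings and choose 'reset password' to change it."

ref_vec, ans_vec = model.encode([reference, answer])
print("E2E-style score:", round(cosine(ref_vec, ans_vec), 3))  # closer to 1 = closer to the reference
```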
Procedia PDF Downloads 67
7181 The Effectiveness of Multiphase Flow in Well-Control Operations
Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia
Abstract:
Well control involves managing the circulating drilling fluid within the well and avoiding kicks and blowouts, as these can lead to losses of human life and drilling facilities. Current practices for well control incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid and formation gas that is miscible in the drilling fluid. Current approaches assume an inaccurate fluid flow model within the well, which leads to incorrect pressure loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as in managed pressure drilling. This study aims to develop and implement new fluid flow models that take into consideration the miscibility of fluids as well as their non-Newtonian properties to enable realistic kick treatment, and a corresponding numerical solution method is built with an enriched data bank. The research work considers and implements models that account for the effect of two phases in kick treatment for well control in conventional drilling. The software STAR-CCM+ was used for the computational studies of the important parameters describing wellbore multiphase flow: the mass flow rate, volumetric fraction, and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established. The focus of the investigations was put on the near-drill-bit section. This inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom hole pressure could be underestimated by 4.2%, while the bottom hole temperature is overestimated by 3.2%; without considering the heat transfer effect, the bottom hole pressure could be overestimated by 11.4% under steady flow conditions. Besides, a larger reservoir pressure leads to a larger gas fraction in the wellbore; however, reservoir pressure has a minor effect on the steady wellbore temperature. Also, as choke pressure increases, less gas will exist in the annulus in the form of free gas. Keywords: multiphase flow, well control, STAR-CCM+, petroleum engineering and gas technology, computational fluid dynamics
Procedia PDF Downloads 119
7180 Comparison of Cervical Length Using Transvaginal Ultrasonography and Bishop Score to Predict Successful Induction
Authors: Lubena Achmad, Herman Kristanto, Julian Dewantiningrum
Abstract:
Background: The Bishop score is a standard method used to predict the success of induction. This examination tends to be subjective with high inter and intraobserver variability, so it was presumed to have a low predictive value in terms of the outcome of labor induction. Cervical length measurement using transvaginal ultrasound is considered to be more objective to assess the cervical length. Meanwhile, this examination is not a complicated procedure and less invasive than vaginal touché. Objective: To compare transvaginal ultrasound and Bishop score in predicting successful induction. Methods: This study was a prospective cohort study. One hundred and twenty women with singleton pregnancies undergoing induction of labor at 37 – 42 weeks and met inclusion and exclusion criteria were enrolled in this study. Cervical assessment by both transvaginal ultrasound and Bishop score were conducted prior induction. The success of labor induction was defined as an ability to achieve active phase ≤ 12 hours after induction. To figure out the best cut-off point of cervical length and Bishop score, receiver operating characteristic (ROC) curves were plotted. Logistic regression analysis was used to determine which factors best-predicted induction success. Results: This study showed significant differences in terms of age, premature rupture of the membrane, the Bishop score, cervical length and funneling as significant predictors of successful induction. Using ROC curves found that the best cut-off point for prediction of successful induction was 25.45 mm for cervical length and 3 for Bishop score. Logistic regression was performed and showed only premature rupture of membranes and cervical length ≤ 25.45 that significantly predicted the success of labor induction. By excluding premature rupture of the membrane as the indication of induction, cervical length less than 25.3 mm was a better predictor of successful induction. Conclusion: Compared to Bishop score, cervical length using transvaginal ultrasound was a better predictor of successful induction.Keywords: Bishop Score, cervical length, induction, successful induction, transvaginal sonography
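The way a best cut-off such as 25.45 mm is read from an ROC curve can be sketched as below using Youden's J statistic; the cervical length values and outcome below are simulated, not the study's cohort.

```python
# Sketch of finding an optimal cervical-length cut-off from an ROC curve (Youden's J).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)
n = 120
cervical_length = rng.normal(28, 6, n)                       # mm, measured before induction
# hypothetical outcome: 1 = active phase reached within 12 h (more likely with a short cervix)
success = (cervical_length + rng.normal(0, 6, n) < 27).astype(int)

# shorter cervix predicts success, so use the negated length as an increasing score
fpr, tpr, thresholds = roc_curve(success, -cervical_length)
best = np.argmax(tpr - fpr)                                  # Youden's J = sensitivity + specificity - 1
print("best cut-off:", round(-thresholds[best], 2), "mm")
```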
Procedia PDF Downloads 325
7179 The Impact of Steel Connections on the Fire Resistance of Composite Buildings
Authors: Shuyuan Lin, Zhaohui Huang, Mizi Fan
Abstract:
In the majority of previous research into modelling large scale composite floor subjected to fire, the beam-to-column and beam-to-beam connections were assumed to behave either as pinned or rigid for simplicity, and the vertical shear and axial tension failures of the connection were not taken into account. We have recently developed robust two-noded connection models for modeling endplate and partial endplate steel connections under fire conditions. The main objective of this research is to systematically investigate the impact of the connections of protected beams, on the tensile membrane actions of supported floor slabs in which the failures of the connections, such as, axial tension, vertical shear and bending are accounted for. The models developed have very good numerical stability under a static solver condition, and can be used for large scale modelling of composite buildings in fire.Keywords: fire, steel structure, component-based model, beam-to-column connections
Procedia PDF Downloads 450
7178 Prevalence, Antimicrobial Susceptibility Pattern and Associated Risk Factors for Salmonella Species and Escherichia coli from Raw Meat at Butchery Houses in Mekelle, Tigray, Ethiopia
Authors: Haftay Abraha Tadesse, Atsebaha Gebrekidan Kahsay, Mahumd Abdulkader
Abstract:
Background: Salmonella species and Escherichia coli are important foodborne pathogens affecting humans and animals. They are among the most important causes of infection associated with the consumption of contaminated food. This study aimed to determine the prevalence, antimicrobial susceptibility patterns and associated risk factors for Salmonella species and E. coli in raw meat from butchery houses of Mekelle, Northern Ethiopia. Methodology: A cross-sectional study was conducted from January to September 2019. Socio-demographic data and risk factors were collected using a predesigned questionnaire. Meat samples were collected aseptically from the butchery houses and transported in an icebox to Mekelle University, College of Veterinary Sciences, for the isolation and identification of Salmonella species and E. coli. Antimicrobial susceptibility patterns were determined using the Kirby-Bauer disc diffusion method. Data obtained were cleaned and entered into Statistical Package for the Social Sciences version 22, and logistic regression models with odds ratios were calculated. A p-value < 0.05 was considered statistically significant. Results: A total of 153 out of 384 (39.8%) meat specimens were found to be contaminated. Contamination with Salmonella species and E. coli was 15.6% (n=60) and 20.8% (n=80), respectively. Mixed contamination (Salmonella species and E. coli) was observed in 13 (3.4%) of the analyzed samples. Not washing hands regularly (AOR = 8.37; 95% CI: 2.75-25.50) and not using gloves during meat handling (AOR = 11.28; 95% CI: 4.69-27.10) were associated with overall bacterial contamination. About 95.5% of the tested isolates were sensitive to chloramphenicol and norfloxacin, while resistance to amoxyclav (amoxicillin-clavulanic acid) and erythromycin was observed in both isolated species. The overall multidrug resistance rates for Salmonella and E. coli were 51.4% (n=19) and 31.8% (n=14), respectively. Conclusion: Of the 384 raw meat samples, 153 were contaminated; 60 (15.6%) and 80 (20.8%) were contaminated by Salmonella species and E. coli, respectively. Poor hand washing practice and not using gloves during meat handling showed a significant association with bacterial contamination. Multidrug resistance in Salmonella species and E. coli was 19 (51.4%) and 14 (31.8%), respectively. Keywords: antimicrobial susceptibility test, butchery houses, E. coli, Salmonella species
Procedia PDF Downloads 52
7177 The Inherent Flaw in the NBA Playoff Structure
Authors: Larry Turkish
Abstract:
Introduction: The NBA is an example of mediocrity and this will be evident in the following paper. The study examines and evaluates the characteristics of the NBA champions. As divisions and playoff teams increase, there is an increase in the probability that the champion originates from the mediocre category. Since it’s inception in 1947, the league has been mediocre and continues to this day. Why does a professional league allow any team with a less than 50% winning percentage into the playoffs? As long as the finances flow into the league, owners will not change the current algorithm. The objective of this paper is to determine if the regular season has meaning in finding an NBA champion. Statistical Analysis: The data originates from the NBA website. The following variables are part of the statistical analysis: Rank, the rank of a team relative to other teams in the league based on the regular season win-loss record; Winning Percentage of a team based on the regular season; Divisions, the number of divisions within the league and Playoff Teams, the number of playoff teams relative to a particular season. The following statistical applications are applied to the data: Pearson Product-Moment Correlation, Analysis of Variance, Factor and Regression analysis. Conclusion: The results indicate that the divisional structure and number of playoff teams results in a negative effect on the winning percentage of playoff teams. It also prevents teams with higher winning percentages from accessing the playoffs. Recommendations: 1. Teams that have a winning percentage greater than 1 standard deviation from the mean from the regular season will have access to playoffs. (Eliminates mediocre teams.) 2. Eliminate Divisions (Eliminates weaker teams from access to playoffs.) 3. Eliminate Conferences (Eliminates weaker teams from access to the playoffs.) 4. Have a balanced regular season schedule, (Reduces the number of regular season games, creates equilibrium, reduces bias) that will reduce the need for load management.Keywords: alignment, mediocrity, regression, z-score
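Recommendation 1 above amounts to a simple z-score filter on regular-season winning percentage; a minimal sketch is shown below with invented team records.

```python
# Only teams more than one standard deviation above the league-mean winning
# percentage qualify for the playoffs (team names and records are invented).
import numpy as np

win_pct = {"Team A": 0.720, "Team B": 0.650, "Team C": 0.585,
           "Team D": 0.512, "Team E": 0.488, "Team F": 0.410, "Team G": 0.350}

values = np.array(list(win_pct.values()))
mean, sd = values.mean(), values.std(ddof=1)

qualified = [team for team, pct in win_pct.items() if (pct - mean) / sd > 1.0]
print(f"mean={mean:.3f}, sd={sd:.3f}, playoff-eligible: {qualified}")
```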
Procedia PDF Downloads 130
7176 Line Heating Forming: Methodology and Application Using Kriging and Fifth Order Spline Formulations
Authors: Henri Champliaud, Zhengkun Feng, Ngan Van Lê, Javad Gholipour
Abstract:
In this article, a method is presented to effectively estimate the deformed shape of a thick plate due to line heating. The method uses a fifth order spline interpolation, with up to C3 continuity at specific points to compute the shape of the deformed geometry. First and second order derivatives over a surface are the resulting parameters of a given heating line on a plate. These parameters are determined through experiments and/or finite element simulations. Very accurate kriging models are fitted to real or virtual surfaces to build-up a database of maps. Maps of first and second order derivatives are then applied on numerical plate models to evaluate their evolving shapes through a sequence of heating lines. Adding an optimization process to this approach would allow determining the trajectories of heating lines needed to shape complex geometries, such as Francis turbine blades.Keywords: deformation, kriging, fifth order spline interpolation, first, second and third order derivatives, C3 continuity, line heating, plate forming, thermal forming
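A quintic (fifth-order) spline fit of a deformed plate profile, with first and second derivatives extracted as in the maps described above, can be sketched as follows; the sample deflection values are illustrative, not measured or simulated data from the paper.

```python
# Fifth-order spline interpolation of plate deflection along a heating line with SciPy.
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0.0, 1.0, 9)                                  # position along the heating line (m)
w = np.array([0.0, 0.4, 1.1, 1.9, 2.4, 2.6, 2.3, 1.5, 0.6])   # deflection (mm)

spline = make_interp_spline(x, w, k=5)   # k=5 -> quintic, smooth to high-order derivatives
d1 = spline.derivative(1)                # first-derivative (slope) along the line
d2 = spline.derivative(2)                # second-derivative (curvature) along the line

xs = np.linspace(0, 1, 101)
print(d1(xs[:3]), d2(xs[:3]))            # values analogous to the paper's derivative maps
```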
Procedia PDF Downloads 456
7175 The Relation between Coping Strategies with Stress and Mental Health Situation in Flying Addicted Family of Self Introducer and Private
Authors: Farnoush Haghanipour
Abstract:
This study examines the relation between coping strategies for stress and mental health status in the families of addicts at the self-introducer and private units of Guilan province. For this purpose, 251 family members (parents, spouses) of people who had referred to private and self-introducer centers to quit drugs were selected by random sampling. The research method was cross-sectional and descriptive, and the purpose was to determine the relation between different kinds of coping strategies for stress and mental health status, taking demographic variables into account. To collect information, the Coping Strategies Questionnaire (CSQ) and the General Health Questionnaire (GHQ) were used, and the data were analyzed with descriptive statistics (mean, standard deviation) and inferential statistics (correlation coefficients and regression). The correlations between mental health and the problem-focused, emotion-focused and detachment strategies were confirmed at a confidence level above 99%, whereas mental health showed no correlation with the avoidance-focused strategy. In other words, mental health was related to problem-focused strategies (r = 0.34), emotion-focused strategies (r = 0.52) and detachment (r = 0.18) at the 0.05 significance level, while the relation between avoidance-focused strategies and mental health (r = 0.034) was not significant at alpha 0.05. The relation between problem-focused coping strategies and mental health status, taking demographic variables into account, was significant and verified at a confidence level above 0.99. The regression analysis showed that the problem-focused coping strategy contributed most to predicting mental health, whereas the relations of the avoidance, emotional and detachment strategies with mental health were not significant when demographic variables were taken into account.
Procedia PDF Downloads 310
7174 Hydraulic Analysis of Irrigation Approach Channel Using HEC-RAS Model
Authors: Muluegziabher Semagne Mekonnen
Abstract:
This study was intended to determine the irrigation water requirements and to evaluate canal hydraulics under steady-state conditions in order to improve the scheme performance of the Meki-Ziway irrigation project. The CROPWAT 8.0 model was used to estimate the irrigation water requirements of the five major crops irrigated in the study area. The results showed that for the existing and potential irrigation development areas of 2,000 ha and 2,599 ha, crop water requirements were 3,339,200 and 4,339,090.4 m³, respectively. Hydraulic simulation models are fundamental tools for understanding the flow characteristics of irrigation systems. In this study, a hydraulic analysis of the irrigation canals of the Meki-Ziway Irrigation Scheme was conducted using the HEC-RAS model. The model was tested in terms of error estimation and used to determine the potential canal capacity. Keywords: HEC-RAS, irrigation, hydraulics, canal reach, capacity
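As a back-of-the-envelope illustration of the steady uniform-flow check that underlies canal-capacity estimates of this kind (HEC-RAS itself is a standalone application), a Manning-equation sketch for a trapezoidal reach is given below; the channel dimensions, slope and roughness are assumed values, not the Meki-Ziway design data.

```python
# Manning-equation capacity estimate for a trapezoidal canal reach (assumed geometry).
import math

def manning_capacity(b, y, z, S, n):
    """Discharge (m3/s) for bottom width b, flow depth y, side slope z (H:V),
    bed slope S and Manning roughness n."""
    A = (b + z * y) * y                      # flow area
    P = b + 2 * y * math.sqrt(1 + z ** 2)    # wetted perimeter
    R = A / P                                # hydraulic radius
    return (1.0 / n) * A * R ** (2.0 / 3.0) * math.sqrt(S)

Q = manning_capacity(b=1.5, y=0.8, z=1.0, S=0.0005, n=0.018)
print(f"estimated canal capacity: {Q:.2f} m3/s")
```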
Procedia PDF Downloads 60
7173 Prospects of Acellular Organ Scaffolds for Drug Discovery
Authors: Inna Kornienko, Svetlana Guryeva, Natalia Danilova, Elena Petersen
Abstract:
Drug toxicity often goes undetected until clinical trials, the most expensive and dangerous phase of drug development. Both human cell culture and animal studies have limitations that cannot be overcome by improvements in drug testing protocols. Tissue engineering is an emerging alternative approach to creating models of human malignant tumors for experimental oncology, personalized medicine, and drug discovery studies. This new generation of bioengineered tumors provides an opportunity to control and explore the role of every component of the model system including cell populations, supportive scaffolds, and signaling molecules. An area that could greatly benefit from these models is cancer research. Recent advances in tissue engineering demonstrated that decellularized tissue is an excellent scaffold for tissue engineering. Decellularization of donor organs such as heart, liver, and lung can provide an acellular, naturally occurring three-dimensional biologic scaffold material that can then be seeded with selected cell populations. Preliminary studies in animal models have provided encouraging results for the proof of concept. Decellularized Organs preserve organ microenvironment, which is critical for cancer metastasis. Utilizing 3D tumor models results greater proximity of cell culture morphological characteristics in a model to its in vivo counterpart, allows more accurate simulation of the processes within a functioning tumor and its pathogenesis. 3D models allow study of migration processes and cell proliferation with higher reliability as well. Moreover, cancer cells in a 3D model bear closer resemblance to living conditions in terms of gene expression, cell surface receptor expression, and signaling. 2D cell monolayers do not provide the geometrical and mechanical cues of tissues in vivo and are, therefore, not suitable to accurately predict the responses of living organisms. 3D models can provide several levels of complexity from simple monocultures of cancer cell lines in liquid environment comprised of oxygen and nutrient gradients and cell-cell interaction to more advanced models, which include co-culturing with other cell types, such as endothelial and immune cells. Following this reasoning, spheroids cultivated from one or multiple patient-derived cell lines can be utilized to seed the matrix rather than monolayer cells. This approach furthers the progress towards personalized medicine. As an initial step to create a new ex vivo tissue engineered model of a cancer tumor, optimized protocols have been designed to obtain organ-specific acellular matrices and evaluate their potential as tissue engineered scaffolds for cultures of normal and tumor cells. Decellularized biomatrix was prepared from animals’ kidneys, urethra, lungs, heart, and liver by two decellularization methods: perfusion in a bioreactor system and immersion-agitation on an orbital shaker with the use of various detergents (SDS, Triton X-100) in different concentrations and freezing. Acellular scaffolds and tissue engineered constructs have been characterized and compared using morphological methods. 
Models using decellularized matrix have certain advantages, such as maintaining native extracellular matrix properties and biomimetic microenvironment for cancer cells; compatibility with multiple cell types for cell culture and drug screening; utilization to culture patient-derived cells in vitro to evaluate different anticancer therapeutics for developing personalized medicines.Keywords: 3D models, decellularization, drug discovery, drug toxicity, scaffolds, spheroids, tissue engineering
Procedia PDF Downloads 301
7172 A New Approach to Interval Matrices and Applications
Authors: Obaid Algahtani
Abstract:
An interval may be defined as a convex combination as follows: I = [a, b] = {x_α = (1-α)a + αb : α ∈ [0,1]}. Consequently, we may define interval operations by applying the scalar operation point-wise to the corresponding interval points: I ∙ J = {x_α ∙ y_α : α ∈ [0,1], x_α ∈ I, y_α ∈ J}, with the usual restriction 0 ∉ J if ∙ = ÷. These operations are associative: I + (J + K) = (I + J) + K, I*(J*K) = (I*J)*K. These two properties, which are missing in the usual interval operations, will enable the extension of the usual linear system concepts to the interval setting in a seamless manner. The arithmetic introduced here avoids such vague terms as "interval extension", "inclusion function", and determinants, which we encounter in the engineering literature that deals with interval linear systems. On the other hand, these definitions were motivated by our attempt to arrive at a definition of interval random variables and investigate the corresponding statistical properties. We feel that they are the natural ones to handle interval systems and will enable the extension of many results from usual state space models to interval state space models. The interval state space model considered here is of the form X_(t+1) = A X_t + W_t, Y_t = H X_t + V_t, t ≥ 0, where A ∈ IR^(k×k), H ∈ IR^(p×k) are interval matrices and W_t ∈ IR^k, V_t ∈ IR^p are zero-mean Gaussian white-noise interval processes. This feeling is reassured by the numerical results we obtained in simulation examples. Keywords: interval analysis, interval matrices, state space model, Kalman Filter
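A toy sketch of the α-matched arithmetic described above is given below. It is one simplified reading of the definition, representing each interval only by its endpoint representatives (α = 0 and α = 1), and is intended purely to illustrate the associativity claim, not to reproduce the paper's construction.

```python
# Alpha-matched interval arithmetic: operations applied point-wise to matched points.
from dataclasses import dataclass

@dataclass
class Interval:
    a: float  # point at alpha = 0
    b: float  # point at alpha = 1

    def point(self, alpha: float) -> float:
        # the convex-combination parameterization x_alpha = (1-alpha)a + alpha*b
        return (1 - alpha) * self.a + alpha * self.b

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.a + other.a, self.b + other.b)

    def __mul__(self, other: "Interval") -> "Interval":
        return Interval(self.a * other.a, self.b * other.b)

I, J, K = Interval(1, 2), Interval(3, 4), Interval(-1, 5)
# associativity holds for this endpoint representation
print((I + (J + K)) == ((I + J) + K), (I * (J * K)) == ((I * J) * K))
```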
Procedia PDF Downloads 425
7171 Intellectual Property and SMEs in the Baltic Sea Region: A Comparative Study on the Use of the Utility Model Protection
Authors: Christina Wainikka, Besrat Tesfaye
Abstract:
Several of the countries in the Baltic Sea region are ranked high in international innovations rankings, such as the Global Innovation Index and European Innovation Scoreboard. There are however some concerns in the performance of different countries. For example, there is a widely spread notion about “The Swedish Paradox”. Sweden is ranked high due to investments in R&D and patent activity, but the outcome is not as high as could be expected. SMEs in Sweden are also below EU average when it comes to registering intellectual property rights such as patents and trademarks. This study is concentrating on the protection of utility model. This intellectual property right does not exist in Sweden, but in for example Finland and Germany. The utility model protection is sometimes referred to as a “patent light” since it is easier to obtain than the patent protection but at the same time does cover technical solutions. In examining statistics on patent activities and activities in registering utility models it is clear that utility model protection is scarcely used in the countries that have the protection. In Germany 10 577 applications were made in 2021. In Finland there were 259 applications made in 2021. This can be compared with patent applications that were 58 568 in Germany in 2021 and 1 662 in Finland in 2021. In Sweden there has never been a protection for utility models. The only protection for technical solutions is patents and business secrets. The threshold for obtaining a patent is high, due to the legal requirements and the costs. The patent protection is there for often not chosen by SMEs in Sweden. This study examines whether the protection of utility models in other countries in the Baltic region provide SMEs in these countries with better options to protect their innovations. The legal methodology is comparative law. In order to study the effects of the legal differences statistics are examined and interviews done with SMEs from different industries.Keywords: baltic sea region, comparative law, SME, utility model
Procedia PDF Downloads 114
7170 [Keynote Talk]: Software Reliability Assessment and Fault Tolerance: Issues and Challenges
Authors: T. Gayen
Abstract:
Although several software reliability models exist today, there is still no versatile model that can be used for the reliability assessment of software. Complex software has a large number of states (unlike hardware), so it is practically difficult to test the software completely. Irrespective of the amount of testing one does, it is sometimes extremely difficult to assure that the final software product is fault free. Black-box software reliability models have been found to be quite uncertain for the reliability assessment of various systems. Mission-critical applications need to be highly reliable, yet it is not always possible to ensure the development of a highly reliable system. Hence, in order to achieve fault-free operation of software, mechanisms are developed to handle the faults remaining in the system even after development. Although several such techniques are currently in use to achieve fault tolerance, these mechanisms may not always be suitable for every system. This discussion is therefore focused on analyzing the issues and challenges faced with the existing techniques for reliability assessment and fault tolerance of various software systems. Keywords: black box, fault tolerance, failure, software reliability
Procedia PDF Downloads 426