Search results for: statistical classifiers
3358 Streamflow Modeling Using the PyTOPKAPI Model with Remotely Sensed Rainfall Data: A Case Study of Gilgel Ghibe Catchment, Ethiopia
Authors: Zeinu Ahmed Rabba, Derek D Stretch
Abstract:
Remote sensing contributes valuable information to streamflow estimates. Usually, streamflow is directly measured through ground-based hydrological monitoring stations. However, in many developing countries like Ethiopia, ground-based hydrological monitoring networks are either sparse or nonexistent, which limits the ability to manage water resources and hampers early flood-warning systems. In such cases, satellite remote sensing is an alternative means of acquiring such information. This paper discusses the application of remotely sensed rainfall data for streamflow modeling in the Gilgel Ghibe basin in Ethiopia. Ten years (2001-2010) of two satellite-based precipitation products (SBPPs), TRMM and WaterBase, were used. These products were combined with the PyTOPKAPI hydrological model to generate daily streamflows. The results were compared with streamflow observations at the Gilgel Ghibe Nr. Assendabo gauging station using four statistical tools (Bias, R², NS and RMSE). The statistical analysis indicates that the bias-adjusted SBPPs agree well with gauged rainfall compared to bias-unadjusted ones. The SBPPs without bias adjustment tend to overestimate (high Bias and high RMSE) extreme precipitation events and the corresponding simulated streamflow outputs, particularly during wet months (June-September), and to underestimate the streamflow prediction over a few dry months (January and February). This shows that bias adjustment can be important for improving the performance of SBPPs in streamflow forecasting. We further conclude that the general streamflow patterns were well captured at daily time scales when using SBPPs after bias adjustment. However, the overall results demonstrate that the streamflow simulated using gauged rainfall is superior to that obtained from remotely sensed rainfall products, including bias-adjusted ones.
Keywords: Ethiopia, PyTOPKAPI model, remote sensing, streamflow, Tropical Rainfall Measuring Mission (TRMM), WaterBase
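The four statistics named in this abstract (Bias, R², NS and RMSE) follow standard definitions; the following is a minimal illustrative sketch of how they might be computed for paired simulated and observed daily flows, not the authors' code:

```python
import math

def bias(sim, obs):
    """Mean error of simulated vs. observed flows (0 means unbiased)."""
    return sum(s - o for s, o in zip(sim, obs)) / len(obs)

def rmse(sim, obs):
    """Root mean square error."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (NS); 1 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

def r_squared(sim, obs):
    """Squared Pearson correlation between simulated and observed flows."""
    n = len(obs)
    ms, mo = sum(sim) / n, sum(obs) / n
    cov = sum((s - ms) * (o - mo) for s, o in zip(sim, obs))
    vs = sum((s - ms) ** 2 for s in sim)
    vo = sum((o - mo) ** 2 for o in obs)
    return cov ** 2 / (vs * vo)
```

A positive Bias together with a high RMSE, as reported for the unadjusted SBPPs in wet months, indicates systematic overestimation.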
Procedia PDF Downloads 283
3357 Efficacy of Sparganium stoloniferum–Derived Compound in the Treatment of Acne Vulgaris: A Pilot Study
Authors: Wanvipa Thongborisute, Punyaphat Sirithanabadeekul, Pichit Suvanprakorn, Anan Jiraviroon
Abstract:
Background: Acne vulgaris is one of the most common dermatologic problems and can have significant psychological and physical effects on patients. Propionibacterium acnes' roles in acne vulgaris involve the activation of the toll-like receptor 4 (TLR4) and toll-like receptor 2 (TLR2) pathways. Activation of these pathways drives the inflammatory events of acne lesions, comedogenesis, and sebaceous lipogenesis. Currently, several topical agents commonly used in treating acne vulgaris, such as retinoic acid and adapalene, are known to act on TLRs, but these drugs still have some irritating effects. At present, there is an alarming increase in the rate of bacterial resistance due to the irrational use of antibiotics, both oral and topical. For this reason, acne treatments should contain bioactive molecules targeted at the site of action for the most effective therapeutic effect with the fewest side effects. Sparganium stoloniferum is a Chinese aquatic herb containing a compound called Sparstolonin B (SsnB), which has been reported to selectively block TLR2- and TLR4-mediated inflammatory signals. Therefore, a topical TLR2 and TLR4 antagonist, in the form of a Sparganium stoloniferum-derived compound containing SsnB, should be of benefit in reducing inflammation of acne vulgaris lesions and provide an alternative treatment for patients with this condition. Materials and Methods: The objective of this randomized, double-blinded, split-face, placebo-controlled trial is to study the safety and efficacy of the Sparganium stoloniferum-derived compound. 32 volunteer patients with mild to moderate acne vulgaris according to the global acne grading system were included in the study. After being informed and giving consent, the subjects were given 2 topical treatments for acne vulgaris: one a topical 2.40% Sparganium stoloniferum extract (containing Sparstolonin B) and the other a placebo.
The subjects were asked to apply each treatment to a randomly assigned half of the face, morning and night, for 8 weeks, and to come in for weekly follow-up. At each visit, the patients underwent lesion counting, including comedones, papules, nodules, pustules, and cystic lesions. Results: Over the 8 weeks of the study, the reduction in total lesion count between the placebo and treatment sides reached statistical significance starting at week 4, where the 95% confidence intervals no longer overlapped and continued to diverge. The decrease in total lesions between week 0 and week 8 on the placebo side was not statistically significant (P > 0.05), whereas the decrease on the treatment side was statistically significant (P < 0.001). Conclusion: The data demonstrate that the 2.40% Sparganium stoloniferum extract (containing Sparstolonin B) is more effective than topical placebo in treating acne vulgaris, showing a significant reduction in the total number of acne lesions. This topical Sparganium stoloniferum extract could therefore become a potential alternative treatment for acne vulgaris.
Keywords: acne vulgaris, Sparganium stoloniferum, sparstolonin B, toll-like receptor 2, toll-like receptor 4
Procedia PDF Downloads 186
3356 Investigation of Effective Parameters on Water Quality of Iranian Rivers Using Hydrochemical and Statistical Methods
Authors: Maryam Sayadi, Rana Sedighpour, Hossein Rezaie
Abstract:
In this study, in order to evaluate the water quality of the Gamasiab and Gharehsoo rivers located in Kermanshah province, data from a 5-year statistical period covering 2014-2018 were used. To evaluate the hydrochemistry of the water, the type and hydrogeochemical facies of the river water were first determined using Stiff and Piper diagrams. Then, based on the Gibbs diagram and combination diagrams, the factors controlling the chemical parameters of the two rivers were identified. Saturation indices were used to predict the possibility of dissolution and deposition of some minerals. Then, in order to classify the water at different sections, fourteen water quality indicators for different uses, along with the WHO standard, were used. Finally, factor analysis was used to determine the processes affecting the hydrochemistry of the two rivers. The results of this study showed that in both rivers the predominant type and facies are calcium bicarbonate. The main factor changing the chemical quality of the water in both the Gamasiab and Gharehsoo rivers is water-rock interaction. According to the results of the factor analysis, in both rivers two factors have the greatest impact on water quality in the region. Among the parameters of the Gamasiab river, HCO3-, Na+ and Cl- had the highest factor loadings in the first factor, and in the second factor SO42- and Mg2+ were selected as the main parameters. In the Gharehsoo river, Ca2+, Cl- and Na+ have the highest factor loadings in the first factor, and Mg2+ and SO42- in the second. The dissolution of carbonate formations, due to their abundance and extent in the two basins, has the more significant effect on changing the water chemistry: it has saturated the river water with respect to aragonite, calcite and dolomite.
Due to the low contribution of the second factor to changes in the chemical parameters, the water of both rivers is undersaturated with respect to evaporative minerals such as gypsum, halite and anhydrite at all stations. Based on the Schoeller and Wilcox diagrams and other quality indicators at these two sections, the values of the main physicochemical parameters are in the desirable range for drinking and agriculture. The results of the Langelier, Ryznar, Larson-Skold and Puckorius indices showed that the water is corrosive for industrial use.
Keywords: factor analysis, hydrochemical, saturation index, surface water quality
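The saturation indices used in this study follow the standard definition SI = log₁₀(IAP/Ksp); a minimal sketch (the ion activity product and solubility product are taken as inputs, not computed here, and this is an illustration rather than the authors' method):

```python
import math

def saturation_index(iap, ksp):
    """SI = log10(IAP / Ksp).

    SI > 0: oversaturated (the mineral may precipitate)
    SI < 0: undersaturated (the mineral may dissolve)
    SI ~ 0: near equilibrium
    """
    return math.log10(iap / ksp)
```

For example, a sample whose ion activity product for calcite exceeds the solubility product by two orders of magnitude gives SI = 2, consistent with the saturation reported for the carbonate minerals.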
Procedia PDF Downloads 125
3355 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge that calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay amongst accuracy, computing resources and explainability of classification results. Nonetheless, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is especially important in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
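The k-mer representation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' pipeline; the fixed-length vector over all 4^k possible k-mers also shows why k = 10 strains computing resources (4^10 ≈ 10^6 features per genome):

```python
from collections import Counter
from itertools import product

def kmer_counts(seq, k):
    """Count overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_vector(seq, k, alphabet="ACGT"):
    """Fixed-length feature vector over all 4**k possible k-mers,
    suitable as input to a classifier."""
    counts = kmer_counts(seq, k)
    return [counts.get("".join(p), 0) for p in product(alphabet, repeat=k)]
```

For k = 2 a genome maps to a 16-dimensional vector; for the study's k = 10 the same construction yields 1,048,576 dimensions per isolate.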
Procedia PDF Downloads 166
3354 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge that calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay amongst accuracy, computing resources and explainability of classification results. Nonetheless, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is especially important in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
Procedia PDF Downloads 158
3353 The Effect of Impinging WC-12Co Particles Temperature on Thickness of HVOF Thermally Sprayed Coatings
Authors: M. Jalali Azizpour
Abstract:
In this paper, the effect of WC-12Co particle temperature on coating thickness in the HVOF thermal spraying process has been studied. The statistical results show that the spray distance and oxygen-to-fuel ratio are the most influential factors on particle characteristics and on the thickness of HVOF thermally sprayed coatings. A SprayWatch diagnostic system, scanning electron microscopy (SEM), X-ray diffraction and a thickness measuring system were used for this purpose.
Keywords: HVOF, temperature, thickness, velocity, WC-12Co
Procedia PDF Downloads 239
3352 Systematic Review of Quantitative Risk Assessment Tools and Their Effect on Racial Disproportionality in Child Welfare Systems
Authors: Bronwen Wade
Abstract:
Over the last half-century, child welfare systems have increasingly relied on quantitative risk assessment tools, such as actuarial or predictive risk tools. These tools are developed by performing statistical analysis of how attributes captured in administrative data are related to future child maltreatment. Some scholars argue that attributes in administrative data can serve as proxies for race and that quantitative risk assessment tools reify racial bias in decision-making. Others argue that these tools provide more “objective” and “scientific” guides for decision-making instead of subjective social worker judgment. This study performs a systematic review of the literature on the impact of quantitative risk assessment tools on racial disproportionality; it examines methodological biases in work on this topic, summarizes key findings, and provides suggestions for further work. A search of CINAHL, PsycINFO, the ProQuest Social Science Premium Collection, and the ProQuest Dissertations and Theses Collection was performed. Academic and grey literature were included. The review includes studies that use quasi-experimental methods and development, validation, or re-validation studies of quantitative risk assessment tools. PROBAST (Prediction model Risk of Bias Assessment Tool) and CHARMS (CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies) were used to assess the risk of bias and guide data extraction for risk development, validation, or re-validation studies. ROBINS-I (Risk of Bias in Non-Randomized Studies of Interventions) was used to assess for bias and guide data extraction for the quasi-experimental studies identified. Due to heterogeneity among papers, a meta-analysis was not feasible, and a narrative synthesis was conducted. Eleven papers met the eligibility criteria, and each has an overall high risk of bias based on the PROBAST and ROBINS-I assessments.
This is deeply concerning, as major policy decisions have been made based on a limited number of studies with a high risk of bias. The findings on racial disproportionality have been mixed and depend on the tool and approach used. Authors use various definitions for racial equity, fairness, or disproportionality. These concepts of statistical fairness are connected to theories about the reason for racial disproportionality in child welfare or social definitions of fairness that are usually not stated explicitly. Most findings from these studies are unreliable, given the high degree of bias. However, some of the less biased measures within studies suggest that quantitative risk assessment tools may worsen racial disproportionality, depending on how disproportionality is mathematically defined. Authors vary widely in their approach to defining and addressing racial disproportionality within studies, making it difficult to generalize findings or approaches across studies. This review demonstrates the power of authors to shape policy or discourse around racial justice based on their choice of statistical methods; it also demonstrates the need for improved rigor and transparency in studies of quantitative risk assessment tools. Finally, this review raises concerns about the impact that these tools have on child welfare systems and racial disproportionality.
Keywords: actuarial risk, child welfare, predictive risk, racial disproportionality
Procedia PDF Downloads 51
3351 The Incidence of Concussion across Popular American Youth Sports: A Retrospective Review
Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin H. McCleery
Abstract:
Introduction: A leading cause of emergency room visits among youth in the United States is sports-related traumatic brain injury. Mild traumatic brain injuries (mTBIs), also called concussions, are caused by linear and/or angular acceleration experienced at the head and represent an increasing societal burden. Because the brain is still developing in youth, there is a great risk of long-term neuropsychological deficits following a concussion. Accordingly, the purpose of this paper is to investigate incidence rates of concussion across gender for the five most common youth sports in the United States: basketball, track and field, soccer, baseball (boys) / softball (girls), football (boys), and volleyball (girls). Methods: A PubMed search was performed for four search themes combined. The first theme identified the outcomes (concussion, brain injuries, mild traumatic brain injury, etc.). The second theme identified the sport (American football, soccer, basketball, softball, volleyball, track and field, etc.). The third theme identified the population (adolescence, children, youth, boys, girls). The last theme identified the study design (prevalence, frequency, incidence, prospective). Ultimately, 473 studies were surveyed, with 15 fulfilling the criteria: prospective studies presenting original data and the incidence of concussion in the relevant youth sport. The following data were extracted from the selected studies: population age, total study population, total athletic exposures (AE), and incidence rate per 1000 athletic exposures (IR/1000). Two one-way ANOVAs and Tukey's post hoc tests were conducted using SPSS. Results: From the 15 selected studies, statistical analysis revealed that the incidence of concussion per 1000 AEs across the considered sports ranged from 0.014 (girls' track and field) to 0.780 (boys' football).
Average IR/1000 across all sports was 0.483 and 0.268 for boys and girls, respectively; this difference in IR was found to be statistically significant (p=0.013). Tukey's post hoc test showed that football had significantly higher IR/1000 than boys' basketball (p=0.022), soccer (p=0.033) and track and field (p=0.026). No statistical difference was found for concussion incidence between girls' sports. Removal of football was found to lower the IR/1000 for boys without a statistical difference (p=0.101) compared to girls. Discussion: Football was the only sport showing a statistically significant difference in concussion incidence rate relative to other sports (within gender). Males were overall more likely to be concussed than females when football was included (1.8x), whereas concussion was more likely for females when football was excluded. While the significantly higher rate of concussion in football is not surprising because of the nature and rules of the sport, it is concerning that research has shown higher incidence of concussion in practices than games. Interestingly, findings indicate that girls' sports are more concussive overall when football is removed. This appears to counter the common notion that boys' sports are more physically taxing and dangerous. Future research should focus on understanding the concussive mechanisms of injury in each sport to enable effective rule changes.
Keywords: gender, football, soccer, traumatic brain injury
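The IR/1000 measure used throughout this review is a simple rate per athletic exposures; a minimal sketch of how it, and the boys-to-girls rate ratio quoted above, might be computed (illustrative, not the authors' analysis script):

```python
def incidence_rate(concussions, athletic_exposures, per=1000):
    """Concussion incidence per `per` athletic exposures (IR/1000 by default)."""
    return per * concussions / athletic_exposures

def rate_ratio(rate_a, rate_b):
    """How many times more likely injury is under rate_a than rate_b."""
    return rate_a / rate_b
```

Using the review's averages, rate_ratio(0.483, 0.268) reproduces the roughly 1.8x higher concussion likelihood for boys when football is included.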
Procedia PDF Downloads 140
3350 Disentangling the Sources and Context of Daily Work Stress: Study Protocol of a Comprehensive Real-Time Modelling Study Using Portable Devices
Authors: Larissa Bolliger, Junoš Lukan, Mitja Lustrek, Dirk De Bacquer, Els Clays
Abstract:
Introduction and Aim: Chronic workplace stress and its health-related consequences like mental and cardiovascular diseases have been widely investigated. This project focuses on the sources and context of psychosocial daily workplace stress in a real-world setting. The main objective is to analyze and model real-time relationships between (1) psychosocial stress experiences within the natural work environment, (2) micro-level work activities and events, and (3) physiological signals and behaviors in office workers. Methods: An Ecological Momentary Assessment (EMA) protocol has been developed, partly building on machine learning techniques. Empatica® wristbands will be used for real-life detection of stress from physiological signals; micro-level activities and events at work will be based on smartphone registrations, further processed according to an automated computer algorithm. A field study including 100 office-based workers with high-level problem-solving tasks like managers and researchers will be implemented in Slovenia and Belgium (50 in each country). Data mining and state-of-the-art statistical methods – mainly multilevel statistical modelling for repeated data – will be used. Expected Results and Impact: The project findings will provide novel contributions to the field of occupational health research. While traditional assessments provide information about the global perceived state of chronic stress exposure, the EMA approach is expected to bring new insights about daily fluctuating work stress experiences, especially micro-level events and activities at work that induce acute physiological stress responses. The project is therefore likely to generate further evidence on relevant stressors in a real-time working environment and hence make it possible to advise on workplace procedures and policies for reducing stress.
Keywords: ecological momentary assessment, real-time, stress, work
Procedia PDF Downloads 160
3349 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK
Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick
Abstract:
The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning archetype that could forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new cases encountered on a daily basis, total deaths registered, and deaths per day due to Coronavirus was collected from the World Health Organisation (WHO). Data preprocessing was carried out to identify any missing values, outliers, or anomalies in the dataset. The data were split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machine (SVM), Random Forest, and linear regression algorithms were chosen to study model performance in the prediction of new COVID-19 cases. From evaluation metrics such as the r-squared value and mean squared error, the statistical performance of the models in predicting new COVID cases was evaluated. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest was 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the Random Forest algorithm performs more effectively and efficiently in predicting new COVID cases, which could help the health sector take relevant control measures against the spread of the virus.
Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest
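The 8:2 split and the r-squared/mean-squared-error evaluation described above can be sketched as follows, with ordinary least squares standing in for the study's models (the Random Forest and SVM themselves are not reproduced here); this is an illustration under those assumptions, not the authors' code:

```python
def train_test_split(xs, ys, train_frac=0.8):
    """Chronological 8:2 split, as appropriate for a daily time series."""
    n = int(len(xs) * train_frac)
    return xs[:n], ys[:n], xs[n:], ys[n:]

def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def evaluate(a, b, xs, ys):
    """Return (r_squared, mse) of predictions a + b*x against ys."""
    preds = [a + b * x for x in xs]
    mse = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)
    my = sum(ys) / len(ys)
    ss_tot = sum((y - my) ** 2 for y in ys)
    ss_res = sum((p - y) ** 2 for p, y in zip(preds, ys))
    return 1 - ss_res / ss_tot, mse
```

Fitting on the first 80% of days and scoring on the held-out 20% mirrors how the study compares its three algorithms on unseen data.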
Procedia PDF Downloads 119
3348 The Methods of Customer Satisfaction Measurement and Its Statistical Analysis towards Sales and Logistic Activities in Food Sector
Authors: Seher Arslankaya, Bahar Uludağ
Abstract:
Meeting the needs and demands of customers and pleasing them are important requirements for companies in the food sector, where the growth of competition is significantly unpredictable. Customer satisfaction is also one of the key concepts, driven mainly by a wide range of customer preferences and expectations regarding the products and services introduced and delivered to them. In order to meet customer demands, companies in the food sector are expected to practice well-managed Total Quality Management (TQM), which sets out to improve the quality of products and services, to reduce costs, and to increase customer satisfaction by restructuring traditional management practices. Achievement would be determined with the help of customer satisfaction surveys, which are conducted to obtain immediate feedback and to provide quick responses. In addition, the surveys assist strategic planning, which helps to anticipate customers' future needs and expectations. Meanwhile, periodic measurement of customer satisfaction is a must, because with a better understanding of customers' perceptions from the surveys (administered via questionnaires), companies gain a clear picture of their own strengths and weaknesses, which helps them keep their loyal customers, stand in comparison with their competitors, and map out their future progress and improvement. In this study, we propose a survey-based customer satisfaction measurement method and its statistical analysis for the sales and logistics activities of food firms. Customer satisfaction is discussed in detail. Furthermore, after analysing the data derived from the questionnaire applied to customers using the SPSS software, the various results obtained from the application are presented.
By also applying an ANOVA test, the study analyses whether meaningful differences exist between customer demographic groups and their perceptions. The purpose of this study is also to find requirements that help remove the effects that decrease customer satisfaction and to produce loyal customers in the food industry. For this purpose, customer complaints were collected. Additionally, comments and suggestions are made based on the survey results, which would be useful for strategic planning in the food industry.
Keywords: customer satisfaction measurement and analysis, food industry, SPSS, TQM
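The ANOVA test mentioned above compares satisfaction scores across demographic groups via the standard one-way F statistic; a minimal stdlib sketch of that computation (in practice the study uses SPSS, so this is purely illustrative):

```python
def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group variance over
    within-group variance across customer segments."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (compared against the F distribution with k-1 and n-k degrees of freedom) indicates that the demographic groups' mean satisfaction scores differ meaningfully.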
Procedia PDF Downloads 248
3347 Human Identification Using Local Roughness Patterns in Heartbeat Signal
Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori
Abstract:
Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential in human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, the ECG has a real-time vitality characteristic that signifies live signs, ensuring that a legitimate individual is identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual's heartbeats at different instances in time. These variations may occur due to muscle flexure, changes in mental or emotional state, and changes in sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Then, binary weights are multiplied with the pattern to produce the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of the individual subjects in the database. One advantage of the proposed feature is that, unlike conventional methods, it does not depend on the accuracy of detecting the QRS complex.
Supervised recognition methods are then designed using minimum-distance-to-mean and Bayesian classifiers to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the National Metrology Institute of Germany (PTB) database showed that the proposed new method is promising compared to a conventional interval- and amplitude-feature-based method.
Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification
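The local-binary-pattern extraction described above (compare each neighbor in a moving window to the central sample, multiply by binary weights, histogram the codes) might be sketched as follows; the window radius and bit ordering here are assumptions for illustration, not the authors' exact parameters:

```python
def local_roughness_pattern(signal, i, radius=4):
    """Binary code comparing the neighbours of signal[i] to the centre value.

    Each neighbour contributes one bit (1 if >= centre), accumulated with
    binary weights via left shifts.
    """
    centre = signal[i]
    code = 0
    for offset in range(-radius, radius + 1):
        if offset == 0:
            continue
        code = (code << 1) | (signal[i + offset] >= centre)
    return code

def pattern_histogram(signal, radius=4):
    """Histogram of local patterns over the whole signal, used as a
    subject's heartbeat signature; no QRS detection required."""
    bins = [0] * (1 << (2 * radius))     # one bin per possible code
    for i in range(radius, len(signal) - radius):
        bins[local_roughness_pattern(signal, i, radius)] += 1
    return bins
```

Two recordings of the same subject would then be matched by comparing their histograms, e.g. with a minimum-distance-to-mean classifier as in the abstract.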
Procedia PDF Downloads 404
3346 Ranking Theory - The Paradigm Shift in Statistical Approach to the Issue of Ranking in a Sports League
Authors: E. Gouya Bozorg
Abstract:
The ranking of sports teams, in particular soccer teams, is of primary importance in professional sports. However, it is still based on classical statistics and models outside the area of mathematics. Rigorous mathematics, and then statistics, despite the expectations held of them, have not been able to engage effectively with the issue of ranking, a failure that requires serious diagnosis. The purpose of this study is to change the approach in order to get closer to mathematics proper for use in ranking. We recommend theoretical mathematics as a good option because it can hermeneutically obtain the theoretical concepts and criteria needed for ranking from the everyday language of a league. We have proposed a framework that puts the issue of ranking into a new space, which we have applied to soccer as a case study. This is an experimental and theoretical study of ranking in a professional soccer league based on theoretical mathematics, followed by theoretical statistics. First, we give the theoretical definition of the constant Є = 1.33, the 'golden number' of a soccer league. Then, we define the 'efficiency of a team' through this number and the formula μ = (Pts / (k·Є)) − 1, in which Pts is the points obtained by a team in k games played. Moreover, the index k·Є has been used to show the theoretical median line in the league table and to compare top and bottom teams. A theoretical coefficient σ = 1 / (1 + (Ptx / Ptxn)) has also been defined for every match between teams x and xn, with respect to their abilities and points Ptx and Ptxn; it gives a performance point that results in a special ranking for the league, and it has been particularly useful in evaluating the performance of weaker teams. The current theory has been examined against the statistical data of 4 major European leagues over the period 1998-2014.
Results of this study showed that the issue of ranking depends on appropriate theoretical indicators of a league. These indicators allowed us to find different forms of ranking of teams in a league, including the 'special table' of a league. Furthermore, on this basis the notion of a team's record has been revised and amended. In addition, the theory of ranking can be used to compare and classify different leagues and tournaments. Experimental results obtained from the archival statistics of major professional leagues worldwide over the past two decades have confirmed the theory. This topic introduces a new theory for the ranking of a soccer league.
Keywords: efficiency of a team, ranking, special table, theoretical mathematics
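The two formulas in the abstract can be sketched directly in code. The following is an illustrative reading of those definitions; the function names and worked numbers are ours, not the author's:

```python
# Sketch of the ranking indicators defined in the abstract. The constant,
# function names, and the worked example below are illustrative assumptions.
GOLDEN = 1.33  # the league constant Є proposed by the author

def efficiency(pts: float, k: int) -> float:
    """Efficiency of a team: mu = Pts / (k * Є) - 1."""
    return pts / (k * GOLDEN) - 1

def match_coefficient(pt_x: float, pt_xn: float) -> float:
    """Theoretical coefficient sigma = 1 / (1 + Ptx / Ptxn) for a match
    between teams x and xn with accumulated points Ptx and Ptxn."""
    return 1 / (1 + pt_x / pt_xn)

# A team with 50 points after 30 games sits above the theoretical median
# line k*Є (30 * 1.33 = 39.9 points), so its efficiency is positive.
print(round(efficiency(50, 30), 3))  # → 0.253
```

Note that two equally matched teams (Ptx = Ptxn) get a coefficient of exactly 0.5, consistent with the symmetric reading of the formula.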
Procedia PDF Downloads 417
3345 Feigenbaum Universality, Chaos and Fractal Dimensions in Discrete Dynamical Systems
Authors: T. K. Dutta, K. K. Das, N. Dutta
Abstract:
This paper is primarily concerned with Ricker's population model, f(x) = x e^(r(1-x/k)), where r is the control parameter and k is the carrying capacity, and fruitful results are obtained with the following objectives: 1) determination of the bifurcation values leading to a chaotic region, 2) development of the statistical methods and analysis required for the measurement of fractal dimensions, and 3) calculation of various fractal dimensions. The results also show that the invariant probability distribution on the attractor, when it exists, provides detailed information about the long-term behavior of the dynamical system. Finally, some open problems are posed for further research.
Keywords: Feigenbaum universality, chaos, Lyapunov exponent, fractal dimensions
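The Ricker map lends itself to a short numerical sketch. The code below is not the authors' implementation: the parameter values, orbit length, and the simple Lyapunov-exponent estimator are our own illustrative assumptions.

```python
import math

# Minimal sketch of the Ricker map from the abstract, f(x) = x*exp(r(1 - x/k)),
# with a Lyapunov-exponent estimate to separate stable from chaotic regimes.
def ricker(x, r, k=1.0):
    return x * math.exp(r * (1 - x / k))

def lyapunov(r, k=1.0, x0=0.5, n=2000, burn=500):
    """Average log|f'(x)| along an orbit; a positive value indicates chaos.
    For the Ricker map, f'(x) = (1 - r*x/k) * exp(r(1 - x/k))."""
    x, total = x0, 0.0
    for i in range(n):
        if i >= burn:
            total += math.log(abs((1 - r * x / k) * math.exp(r * (1 - x / k))))
        x = ricker(x, r, k)
    return total / (n - burn)

# The fixed point x = k is stable for small r; beyond the period-doubling
# cascade (accumulating near r ≈ 2.69) the exponent turns positive.
print(lyapunov(1.5) < 0, lyapunov(3.0) > 0)
```

The exponent changing sign as r grows is the Feigenbaum route to chaos the abstract refers to.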
Procedia PDF Downloads 300
3344 The Benefits of Regional Brand for Companies
Authors: H. Starzyczna, M. Stoklasa, K. Matusinska
Abstract:
This article deals with the benefits of regional brands for companies in the Czech Republic. The research focused on identifying the expected and actual benefits of regional brands for companies. The data were obtained by a questionnaire survey and analysed using IBM SPSS, with a representative sample of 204 companies. The analysis disclosed the benefits that companies expect a regional brand to bring them, but the actual benefits fall well short of these expectations. Statistical hypothesis testing revealed that the benefits depend on the region of origin, which surprised both us and the regional coordinators.
Keywords: brand, regional brands, product protective branding programs, brand benefits
Procedia PDF Downloads 343
3343 Quantification of the Erosion Effect on Small Caliber Guns: Experimental and Numerical Analysis
Authors: Dhouibi Mohamed, Stirbu Bogdan, Chabotier André, Pirlot Marc
Abstract:
The effects of erosion and wear on the performance of small caliber guns have been analyzed through numerical and experimental studies, but mainly qualitative observations were performed, and correlations between the volume change of the chamber and the maximum pressure are limited. This paper focuses on the development of a numerical model to predict the evolution of the maximum pressure as the interior shape of the chamber changes over the weapon's life phases. To fulfill this goal, an experimental campaign, followed by a numerical simulation study, is carried out. Two test barrels, 5.56x45mm NATO and 7.62x51mm NATO, are considered. First, a Coordinate Measuring Machine (CMM) with a contact scanning probe is used to measure the interior profile of the barrels after each 300-shot cycle until they are worn out. Simultaneously, the EPVAT (Electronic Pressure Velocity and Action Time) method with a WEIBEL radar is used to measure (i) the chamber pressure, (ii) the action time, and (iii) the bullet velocity in each barrel. Second, a numerical simulation study is carried out: a coupled interior ballistic model is developed using the dynamic finite element program LS-DYNA. Two different models are elaborated: (i) a coupled Eulerian-Lagrangian method using fluid-structure interaction (FSI) techniques and (ii) a coupled thermo-mechanical finite element model using a lumped parameter model (LPM) as a subroutine. These numerical models are validated against three experimental results: (i) the muzzle velocity, (ii) the chamber pressure, and (iii) the surface morphology of fired projectiles. Results show good agreement between experiments and numerical simulations. Next, a comparison between the two models is conducted: the projectile motions, the dynamic engraving resistances, and the maximum pressures are compared and analyzed.
Finally, using the obtained database, a statistical correlation between the muzzle velocity, the maximum pressure, and the chamber volume is established.
Keywords: engraving process, finite element analysis, gun barrel erosion, interior ballistics, statistical correlation
Procedia PDF Downloads 213
3342 Opportunities of an Industrial City in the Leisure Tourism
Authors: E. Happ, A. Albert Tóth
Abstract:
The aim of the research is to investigate the forms of leisure tourism demand in a West Hungarian industrial city, Győr. Today, Győr is still a traditional industrial city whose industry is mainly based on the vehicle industry, but the role of tourism in the life of the city is increasing as well. Because of the industrial character and strong economy of the city, the ratio of business tourists is high, and it can be stated that MICE tourism dominates in Győr. The developments of the last decade can provide the city with new tourism products to increase leisure tourism. These new types of tourism, besides business tourism, can help providers increase occupancy rates and weekend demand. The research presents the theoretical background of the topic and shows the present situation of tourism in Győr with secondary data. The secondary research draws on statistical data from the Hungarian Statistical Office and the city council and is based on the providers' data. The next part of the paper explores the potential types of leisure tourism with the help of primary research: an online questionnaire with a sample of 1000 potential customers, completed by 10 in-depth interviews with tourism experts, who explained their views on the opportunities of leisure tourism in Győr from the providers' side. The online questionnaire was filled out in spring 2017 by customers who had already stayed in Győr or planned to visit the city; at the same time, in-depth interviews were conducted with hotel managers, heads of tourist institutions, and council employees. Based on the research, it can be stated that the tourist supply of Győr allows the share of leisure tourism in the city to grow. Primarily cultural and health tourism show development potential, but the supply side of tourist services can also be developed in order to increase the number of guest nights.
Tourism marketing needs to be strengthened in the city, and a marketing activity distinctive from that of other cities is needed as well. To conclude, although Győr is an industrial city, its industrial part is being transformed, and tourism is also strongly present in its economy. Besides the leading role of business tourism, different types of leisure tourism have the opportunity to develop in the city.
Keywords: business tourism, Győr, industrial city, leisure tourism, touristic demand
Procedia PDF Downloads 278
3341 Short Life Cycle Time Series Forecasting
Authors: Shalaka Kadam, Dinesh Apte, Sagar Mainkar
Abstract:
The life cycle of products is becoming shorter and shorter due to increased competition in the market, shorter product development times, and increased product diversity. Short life cycles are normal in the retail, fashion, entertainment media, telecom, and semiconductor industries. Accurate demand forecasting for short-lifecycle products is of special interest to many researchers and organizations. Because of the short life cycle, the amount of historical data available for forecasting is minimal or even absent when new or modified products are launched. Companies dealing with such products want to increase forecasting accuracy so that they can utilize the full potential of the market without oversupplying. The challenge is therefore to develop a forecasting model that is accurate while handling large variations in the data and considering the complex relationships between its parameters. Many statistical models have been proposed in the literature for forecasting time series data, but traditional time series forecasting models do not work well for short life cycles due to the lack of historical data, and artificial neural network (ANN) models are very time consuming. We have studied the existing forecasting models and their limitations. This work proposes an effective and powerful approach for short life cycle time series forecasting. The proposed approach takes into consideration different scenarios of data availability for short-lifecycle products, and we suggest a methodology that combines statistical analysis with structured judgement and can be applied across domains. We then describe the method of creating a profile from analogous products; this profile can be used for forecasting products with the historical data of those analogous products.
We have designed an application that combines data, analytics, and domain knowledge using point-and-click technology. The forecasting results generated are compared using the MAPE, MSE, and RMSE error scores. Conclusion: based on the results, it is observed that no single approach is sufficient for short life-cycle forecasting, and two or more approaches need to be combined to achieve the desired accuracy.
Keywords: forecast, short life cycle product, structured judgement, time series
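The three error scores used for the comparison are standard and easy to state in code. This is a hedged sketch of their textbook definitions; the sample series is illustrative, not from the study:

```python
import math

# MAPE, MSE, and RMSE as commonly defined; the demand/forecast series below
# are made-up numbers, not data from the paper.
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    """Mean squared error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error."""
    return math.sqrt(mse(actual, forecast))

actual = [100, 120, 90, 110]
forecast = [95, 125, 100, 105]
print(round(mape(actual, forecast), 2),
      round(mse(actual, forecast), 2),
      round(rmse(actual, forecast), 2))  # → 6.21 43.75 6.61
```

MAPE is scale-free (useful across products of different volumes), while MSE and RMSE penalize large misses more heavily, which matters when oversupply is costly.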
Procedia PDF Downloads 358
3340 Study and Simulation of a Severe Dust Storm over West and South West of Iran
Authors: Saeed Farhadypour, Majid Azadi, Habibolla Sayyari, Mahmood Mosavi, Shahram Irani, Aliakbar Bidokhti, Omid Alizadeh Choobari, Ziba Hamidi
Abstract:
In recent decades, the frequency of dust events has increased significantly in the west and southwest of Iran. First, a survey of dust events during the period 1990-2013 is conducted using historical dust data collected at 6 weather stations scattered over the west and southwest of Iran. After statistical analysis of the observational data, one of the most severe dust storm events in the region, which occurred from 3rd to 6th July 2009, is selected and analyzed. The WRF-Chem model is used to simulate the amount of PM10 and its transport to these areas. The initial and lateral boundary conditions for the model are obtained from GFS data with 0.5°×0.5° spatial resolution. In the simulation, two aerosol schemes (GOCART and MADE/SORGAM) with 3 options (chem_opt=106, 300, and 303) were evaluated. The statistical analysis of the historical data showed that the southwest of Iran has a high frequency of dust events: Bushehr station has the highest frequency among the stations and Urmia station the lowest. Over the period 1990 to 2013, the years 2009 and 1998, with 3221 and 100 events respectively, had the highest and lowest numbers of dust events, and in the monthly variation June and July had the highest frequency of dust events while December had the lowest. The model results showed that the MADE/SORGAM scheme predicted the values and trends of PM10 better than the other schemes and showed the best performance in comparison with the observations. Finally, the PM10 distributions and surface wind maps obtained from the numerical modeling showed the formation of dust plumes in Iraq and Syria and their transport to the west and southwest of Iran. In addition, comparing the MODIS satellite image acquired on 4th July 2009 with the model output at the same time showed the good ability of WRF-Chem to simulate the spatial distribution of dust.
Keywords: dust storm, MADE/SORGAM scheme, PM10, WRF-Chem
Procedia PDF Downloads 269
3339 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators
Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros
Abstract:
Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness devotes almost as much time to data quality processes, while a data project without such awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because the expectations differ with the purpose of each data project. In particular, a big data project may involve many datasets and stakeholders and therefore take a long time to discuss and define quality expectations and measurements. This study therefore aimed at developing meaningful indicators that describe the overall data quality of each dataset for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we define standard data quality expectations. Second, we find indicators that can directly measure the data within datasets. Third, the indicators are aggregated into dimensions using factor analysis. Next, the indicators and dimensions are weighted by the effort of the data preparation process and by usability. Finally, the dimensions are aggregated into the composite indicator. The results showed that: (1) the developed indicators and measurements comprise ten indicators; (2) for the data quality dimensions based on statistical characteristics, the ten indicators can be reduced to 4 dimensions.
(3) For the developed composite indicator, we found that the SDQI can describe the overall quality of each dataset and can separate datasets into 3 levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall, meaningful description of data quality within datasets. We can use the SDQI to assess all data in a data project, for effort estimation, and for prioritization. The SDQI also works well with the Agile method, by using the SDQI for assessment in the first sprint; after passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis
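The final aggregation step (weighted dimensions rolled up into one index with three quality levels) can be sketched as follows. The indicator names, weights, and level cut-offs here are illustrative assumptions, not the paper's values:

```python
# Hedged sketch of a weighted composite quality index with three levels,
# in the spirit of the SDQI. All names, weights, and thresholds are assumed.
def composite_index(scores: dict, weights: dict) -> float:
    """Weighted average of per-dimension scores in [0, 1]."""
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_w

def quality_level(index: float) -> str:
    # Three levels as in the abstract; the cut-off values are our assumption.
    if index >= 0.8:
        return "Good Quality"
    if index >= 0.5:
        return "Acceptable Quality"
    return "Poor Quality"

scores = {"completeness": 0.9, "validity": 0.7, "uniqueness": 0.95, "consistency": 0.8}
weights = {"completeness": 2, "validity": 2, "uniqueness": 1, "consistency": 1}
idx = composite_index(scores, weights)
print(round(idx, 3), quality_level(idx))  # → 0.825 Good Quality
```

In the paper the weights come from preparation effort and usability, and the dimensions themselves from factor analysis; the roll-up above only illustrates the final step.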
Procedia PDF Downloads 138
3338 Fuzzy Gauge Capability (Cg and Cgk) through Buckley Approach
Authors: Seyed Habib A. Rahmati, Mohsen Sadegh Amalnick
Abstract:
Different aspects of statistical process control (SPC) have been sketched in the fuzzy environment. However, measurement system analysis (MSA), a main branch of SPC, has rarely been investigated in the fuzzy setting. This procedure assesses the suitability of the data to be used in later stages or decisions of SPC. This research therefore focuses on some important measures of MSA and introduces them in the fuzzy environment through a new method. In this method, which works based on the Buckley approach, the imprecision and vagueness of real-world measurement are considered simultaneously. To do so, fuzzy versions of the gauge capability indices (Cg and Cgk) are introduced. The method is also clearly explained through an example.
Keywords: measurement, SPC, MSA, gauge capability (Cg and Cgk)
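For readers unfamiliar with Cg and Cgk, a common *crisp* form of the indices is sketched below as a baseline for the fuzzy generalization. Conventions vary across MSA references; here we assume 20% of the tolerance and a 6-sigma gauge spread, which is one widely used choice, not necessarily the paper's:

```python
# Hedged baseline: one common crisp definition of the gauge capability indices.
# k_pct (share of tolerance) and spread (sigma multiple) are assumed conventions.
def cg(tolerance, s_gauge, k_pct=0.2, spread=6):
    """Gauge potential: (k_pct * T) / (spread * s_gauge)."""
    return (k_pct * tolerance) / (spread * s_gauge)

def cgk(tolerance, mean_gauge, reference, s_gauge, k_pct=0.2, spread=6):
    """Gauge capability including bias |mean_gauge - reference|."""
    return (k_pct * tolerance / 2 - abs(mean_gauge - reference)) / (spread / 2 * s_gauge)

# Unbiased gauge: Cg and Cgk coincide; bias lowers Cgk only.
print(round(cg(1.0, 0.01), 2), round(cgk(1.0, 10.002, 10.0, 0.01), 2))  # → 3.33 3.27
```

In the fuzzy (Buckley) version, tolerance, mean, and standard deviation become fuzzy numbers evaluated over nested alpha-cuts, so Cg and Cgk come out as fuzzy numbers rather than the single values above.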
Procedia PDF Downloads 648
3337 Analysis on the Feasibility of Landsat 8 Imagery for Water Quality Parameters Assessment in an Oligotrophic Mediterranean Lake
Authors: V. Markogianni, D. Kalivas, G. Petropoulos, E. Dimitriou
Abstract:
Lake water quality monitoring combined with the use of earth observation products constitutes a major component of many water quality monitoring programs. Landsat 8 images of Trichonis Lake (Greece) acquired on 30/10/2013 and 30/08/2014 were used to explore the potential of Landsat 8 to estimate water quality parameters, particularly CDOM absorption at specific wavelengths and chlorophyll-a and nutrient concentrations, in this oligotrophic freshwater body characterized by negligible quantitative, temporal, and spatial variability. Water samples were collected at 22 different stations in late August 2014, and the satellite image of the same date was used to statistically correlate the in-situ measurements with various combinations of Landsat 8 bands in order to develop algorithms that best describe those relationships and accurately calculate the aforementioned water quality components. The optimal models were applied to the image of late October 2013, and the results were validated by comparison with the respective available in-situ data of 2013. Initial results indicated the limited ability of the Landsat 8 sensor to accurately estimate water quality components in an oligotrophic waterbody. In the validation process, ammonium concentration proved to be the most accurately estimated component (R = 0.7), followed by chl-a concentration (R = 0.5) and CDOM absorption at 420 nm (R = 0.3). The in-situ nitrate, nitrite, phosphate, and total nitrogen concentrations of 2014 were below the detection limit of the instrument used, hence no statistical elaboration was conducted. On the other hand, multiple linear regression between reflectance measures and total phosphorus concentrations resulted in low and statistically insignificant correlations.
Our results are consistent with other studies in the international literature, indicating that estimates for eutrophic and mesotrophic lakes are more accurate than for oligotrophic ones, owing to the lack of suspended particles detectable by satellite sensors. Nevertheless, although the predictive models developed and applied to the oligotrophic Trichonis Lake are less accurate, they may still be useful indicators of its water quality deterioration.
Keywords: Landsat 8, oligotrophic lake, remote sensing, water quality
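The validation metric quoted above (R) is the Pearson correlation between model estimates and in-situ values. A minimal sketch, with illustrative numbers rather than the study's data:

```python
# Hedged sketch of the validation step: Pearson correlation R between
# satellite-derived estimates and in-situ measurements. Data are made up.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

in_situ = [0.10, 0.15, 0.12, 0.20, 0.18]    # e.g. ammonium, mg/L (illustrative)
estimated = [0.11, 0.14, 0.15, 0.19, 0.16]  # from a band-combination model
print(round(pearson(in_situ, estimated), 2))  # → 0.89
```

An R of 0.7, as reported for ammonium, means the model explains roughly half the variance (R² ≈ 0.49), which puts the "limited ability" conclusion in context.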
Procedia PDF Downloads 395
3336 The Effectiveness of the Family-Centered Sensory and Motor Interactive Games Program on Strengthening the Developmental and Motor Skills of Children aged 12 to 24 Months Who Have a Prior History of Low Birth Weight
Authors: Seyede Soraya Alavinezhad, Gholam Ali Afrooz, Seyedsaeid Sajjadianari
Abstract:
The purpose of this study was to assess the efficacy of a family-centered sensory and motor interactive activities program in enhancing the motor and developmental abilities of infants between the ages of 12 and 24 months with a medical history of low birth weight. The study used a combined (qualitative and quantitative) design. The statistical population comprised infants between the ages of 12 and 24 months with a documented history of low birth weight in Tehran in 2022. The study sample comprised twenty-eight infants, ranging in age from twelve to twenty-four months, whose mothers were selected using a convenience sampling method, and the participants were randomly allocated into experimental and control groups. The Children's Developmental Screening Scale, the third edition of the Ages and Stages Questionnaires (ASQ3TM), was administered in both cohorts. Two sessions of the family-centered program were delivered to the mothers and sixteen sessions to the children in the experimental group. SPSS version 26 was used to analyze the data: first, the descriptive statistics of the variables, the normality assumptions, and the equality of variances across groups were examined; then univariate analysis of covariance was employed to examine the research hypotheses. The results of the covariance analysis demonstrated that the family-centered interactive activities program for sensory and motor development was effective: a significant difference was observed between the experimental and control groups in developmental skills between the pre-test and post-test (P<0.005). According to the findings, the motor and developmental skills of children aged 12 to 24 months with a history of low birth weight can be enhanced through suitably structured play programs.
It is recommended that future research investigate the efficacy of this program for children of average birth weight and conduct longitudinal studies.
Keywords: children, developmental skills, low birth weight, sensory and motor interactive games program
Procedia PDF Downloads 17
3335 Spatiotemporal Evaluation of Climate Bulk Materials Production in Atmospheric Aerosol Loading
Authors: Mehri Sadat Alavinasab Ashgezari, Gholam Reza Nabi Bidhendi, Fatemeh Sadat Alavinasab Ashkezari
Abstract:
Atmospheric aerosol loading (AAL) from anthropogenic sources is evident in industrial development. The accelerated trends in material consumption at the global scale in recent years demonstrate consumption paradigms relevant to the planetary boundaries (PB). This paper takes a statistical approach to recognizing the path from the production of the climate-relevant bulk materials (CBMP) steel, cement, and plastics to AAL via an updated and validated spatiotemporal distribution. The statistical analysis used the most up-to-date regional and global databases and instrumental technologies, corresponding to a selection of processes and areas suitable for tracking AAL within the last decade, analyzing the most validated data, and exploring behavior functions and models. The results also represent a correlation, within the idea of socio-economic metabolism, between the materials regarded as macronutrients of society and AAL as a PB with an unknown threshold. The selected contributing countries of China, India, and the US, together with the sample country of Iran, show cumulative AAL values comparable to the domestic extraction and production rates of the bulk materials over the study period of 2012 to 2022. Generally, there is a tendency towards a gradual decline in the worldwide and regional aerosol concentration after 2015. According to our evaluation, a considerable share of the human role, equivalent to 20% and attributable to CBMP, applies to the main anthropogenic aerosol species, including sulfate, black carbon, and organic particulate matter. In an innovative approach, this study also explores the potential role of AAL control mechanisms in the economic sectors, where ordered and smoothed loading trends are identified within the disordered phenomena of CBMP and aerosol precursor emissions.
The envisioned equilibrium states accord with the well-established theory of spin glasses, applicable to physical systems like the Earth and, here, to AAL.
Keywords: atmospheric aerosol loading, material flows, climate bulk materials, industrial ecology
Procedia PDF Downloads 79
3334 3D Multimedia Model for Educational Design Engineering
Authors: Mohanaad Talal Shakir
Abstract:
This paper proposes an educational design using multimedia technology for the Engineering of Computer Technology program at Alma'ref University College in Iraq. The paper evaluates students' acceptance, cognition, and interaction with the proposed model, using statistical relationships to determine the stage of the model. The objectives of the proposed educational design are to develop user-friendly educational software using multimedia technology and to develop a 3D model animation simulating the assembly and disassembly process of a high-speed flow facility.
Keywords: CAL, multimedia, shock tunnel, interactivity, engineering education
Procedia PDF Downloads 620
3333 Study of Climate Change Process on Hyrcanian Forests Using Dendroclimatology Indicators (Case Study of Guilan Province)
Authors: Farzad Shirzad, Bohlol Alijani, Mehry Akbary, Mohammad Saligheh
Abstract:
Climate change and global warming are very important issues today. The process of climate change, especially changes in temperature and precipitation, is among the most important issues in the environmental sciences; climate change means a change in the long-run averages. Iran is located in arid and semi-arid regions due to its proximity to the equator and its location in the subtropical high pressure zone. In this respect, the Hyrcanian forest forms a green necklace between the Caspian Sea and the southern slopes of the Alborz mountain range, and at the forty-third session of UNESCO it was registered as the second natural heritage site of Iran. Beech is one of the most important tree species and the most industrially significant species of the Hyrcanian forests. In this research, dendroclimatology was applied using tree ring widths together with temperature and precipitation data from the Shanderman meteorological station located in the study area. The non-parametric Mann-Kendall statistical method was used to investigate the trend of climate change over a 202-year time series of growth rings, and the Pearson statistical method was used to correlate the growth rings of beech trees with the climatic variables of the region. The results obtained from the time series of beech growth rings showed that the changes in ring width had a downward, negative trend, significant at the 5% level, and that climate change has occurred. The average minimum, mean, and maximum temperatures and evaporation in the growing season had an increasing trend, while the annual precipitation had a decreasing trend.
Fitting the correlations of ring width with temperature using the Pearson method, the correlations with the mean temperatures of July, August, and September were negative, while the correlation with the mean maximum temperature of February was positive and significant at the 95% level; for precipitation, the correlation in June was positive and significant at the 95% level.
Keywords: climate change, dendroclimatology, Hyrcanian forest, beech
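The Mann-Kendall trend test used in the abstract is simple enough to sketch. Below is only the S statistic (the count of concordant minus discordant pairs); the ring-width series is illustrative, not the study's 202-year record:

```python
# Hedged sketch of the non-parametric Mann-Kendall S statistic; a negative S
# indicates a downward trend. The sample series below is made up.
def mann_kendall_s(series):
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)  # sign of each pairwise difference
    return s

# A declining ring-width series (mm) gives a clearly negative S.
print(mann_kendall_s([1.8, 1.7, 1.9, 1.5, 1.4, 1.2]))  # → -11
```

In the full test, S is normalized by its variance to a Z score and compared against the chosen significance level (5% in the study); that step is omitted here.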
Procedia PDF Downloads 103
3332 Websites for Hypothesis Testing
Authors: Frantisek Mosna
Abstract:
E-learning has become an efficient and widespread means of education in all branches of human activity, and statistics is no exception. Unfortunately, the main focus of statistics teaching is usually on substitution into formulas. Suitable websites can simplify and automate the calculations, leaving more attention and time for the basic principles of statistics, the mathematization of real-life situations, and the subsequent interpretation of results. We introduce our own websites for hypothesis testing. Their didactic aspects, the technical possibilities of the individual tools used to create them, experience with them, and their advantages and disadvantages are discussed in this paper. These websites do not replace common statistical software, but they significantly improve the teaching of statistics at universities.
Keywords: e-learning, hypothesis testing, PHP, web-sites
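The kind of calculation such a page automates can be illustrated with the two-sample t statistic. This is a generic textbook sketch (equal-variance form, made-up data), not code from the authors' PHP sites:

```python
import math
import statistics

# Hedged sketch: the pooled two-sample t statistic behind a typical
# hypothesis test. The two samples below are illustrative.
def t_statistic(a, b):
    na, nb = len(a), len(b)
    pooled = ((na - 1) * statistics.variance(a)
              + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

a = [5.1, 4.9, 5.3, 5.0, 5.2]
b = [4.6, 4.8, 4.5, 4.9, 4.7]
print(round(t_statistic(a, b), 2))  # → 4.0
```

A website would then compare this statistic against the t distribution with na + nb − 2 degrees of freedom; automating that lookup is exactly the drudgery the abstract argues should be taken off the student's hands.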
Procedia PDF Downloads 420
3331 Case Study: Throughput Analysis over PLC Infrastructure as Last Mile Residential Solution in Colombia
Authors: Edward P. Guillen, A. Karina Martinez Barliza
Abstract:
Powerline communications (PLC) as a last mile solution for providing communication services has the advantage of transmitting over channels already used for electrical distribution. However, these channels were not designed for this purpose, so telecommunication companies in Colombia want to know how PLC compares to cable modem or DSL in cost and network performance. This paper analyzes PLC throughput for residential complex scenarios using PLC network scenarios, and some statistical results are shown.
Keywords: home network, power line communication, throughput analysis, power factor, cost, last mile solution
Procedia PDF Downloads 265
3330 Parameter Estimation via Metamodeling
Authors: Sergio Haram Sarmiento, Arcady Ponosov
Abstract:
Based on appropriate multivariate statistical methodology, we suggest a generic framework for efficient parameter estimation for ordinary differential equations and the corresponding nonlinear models. In this framework, classical linear regression strategies are refined into a nonlinear regression by a locally linear modelling technique known as metamodelling. The approach identifies those latent variables of the given model that accumulate the most information about it among all approximations of the same dimension. The method is applied to several benchmark problems, in particular to the so-called 'power-law systems', non-linear differential equations typically used in Biochemical System Theory.
Keywords: principal component analysis, generalized law of mass action, parameter estimation, metamodels
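The core idea, a nonlinear relationship treated as linear in a neighbourhood of a working point, can be sketched in a few lines. This is our illustration of locally linear modelling in general, not the authors' framework; all data and the neighbourhood radius are assumptions:

```python
# Hedged sketch of locally linear modelling: fit a least-squares line only to
# points near a working point x0. Data, x0, and radius are illustrative.
def local_linear_fit(xs, ys, x0, radius):
    pts = [(x, y) for x, y in zip(xs, ys) if abs(x - x0) <= radius]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Power-law-like data y = x^2 looks linear near x0 = 1 with slope ~ 2.
xs = [0.8, 0.9, 1.0, 1.1, 1.2, 2.0, 3.0]
ys = [x ** 2 for x in xs]
slope, _ = local_linear_fit(xs, ys, 1.0, 0.25)
print(round(slope, 1))  # → 2.0
```

The paper's framework goes further (PCA-style latent variables chosen to retain the most information), but the local linearization above is the step that turns a nonlinear estimation problem into a sequence of linear regressions.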
Procedia PDF Downloads 516
3329 Setting Control Limits For Inaccurate Measurements
Authors: Ran Etgar
Abstract:
The process of rounding off measurements of continuous variables is commonly encountered. Although it usually has minor effects, it can sometimes lead to poor outcomes in statistical process control using the X̄-chart, and the traditional control limits can cause incorrect conclusions if applied carelessly. This study looks into the limitations of the classical control limits, particularly the impact of asymmetry. An approach to determining the distribution function of the measured parameter (Ȳ) is presented, resulting in a more precise method for establishing the upper and lower control limits. The proposed method, while slightly more complex than Shewhart's original idea, is still user-friendly and accurate and only requires the use of two straightforward tables.
Keywords: quality control, process control, round-off, measurement, rounding error
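For reference, the classical limits the paper improves upon are the Shewhart X̄-chart limits, sketched below with made-up subgroup data (the sigma value and subgroup size are assumptions):

```python
import statistics

# Hedged sketch of classical Shewhart X-bar limits, mean ± 3*sigma/sqrt(n).
# The abstract's point is that these can mislead once measurements are rounded.
def xbar_limits(subgroup_means, sigma, n):
    center = statistics.mean(subgroup_means)
    margin = 3 * sigma / n ** 0.5
    return center - margin, center + margin

means = [10.0, 10.2, 9.9, 10.1, 9.8]  # illustrative subgroup means
lcl, ucl = xbar_limits(means, sigma=0.3, n=4)
print(round(lcl, 2), round(ucl, 2))  # → 9.55 10.45
```

When the underlying readings are rounded, the distribution of Ȳ becomes discrete and possibly asymmetric, so the symmetric ±3-sigma band above no longer gives the nominal false-alarm rate; that is the gap the proposed tables address.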
Procedia PDF Downloads 97