Search results for: prediction interval
1966 Cost Overruns in Mega Projects: Project Progress Prediction with Probabilistic Methods
Authors: Yasaman Ashrafi, Stephen Kajewski, Annastiina Silvennoinen, Madhav Nepal
Abstract:
Mega projects, whether in construction, urban development, or the energy sector, are among the key drivers that build the foundation of wealth and modern civilization in regions and nations. Such projects require economic justification and substantial capital investment, often derived from individual and corporate investors as well as governments. Cost overruns and time delays in these mega projects demand a new approach to predict project costs more accurately and establish realistic financial plans. The significance of this paper is that it can improve the cost efficiency of megaprojects and reduce cost overruns. This research will assist Project Managers (PMs) in making timely and appropriate decisions about both the cost and outcomes of ongoing projects. It therefore examines the oil and gas industry, where most mega projects apply the classic methods of the Cost Performance Index (CPI) and Schedule Performance Index (SPI) and rely on project data to forecast cost and time. Because these projects frequently overrun in cost and time even in their early phases, the probabilistic methods of Monte Carlo Simulation (MCS) and Bayesian adaptive forecasting were used to predict project cost at completion. The current theoretical and mathematical models that forecast the total expected cost and project completion date during the execution phase of an ongoing project are evaluated. The Earned Value Management (EVM) method is unable to predict cost at completion accurately owing to the lack of sufficiently detailed project information, especially in the early phase of a project. During the project execution phase, the Bayesian adaptive forecasting method incorporates predictions into the actual performance data from earned value management and revises pre-project cost estimates, making full use of the available information. The outcome of this research is improved accuracy in predicting both final cost and final duration.
This research will provide a warning method to identify when current project performance deviates from planned performance and creates an unacceptable gap between preliminary planning and actual performance. This warning method will support project managers in taking corrective actions on time.
Keywords: cost forecasting, earned value management, project control, project management, risk analysis, simulation
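The index-based EVM forecast and its Monte Carlo extension described above can be sketched as follows. This is a minimal illustration, not the paper's calibration: the triangular CPI distribution and all figures are assumptions introduced for the example.

```python
import random

def eac_cpi(ac, bac, ev):
    """Index-based EVM estimate at completion: EAC = AC + (BAC - EV) / CPI,
    with CPI = EV / AC (actual cost AC, budget at completion BAC, earned
    value EV, all in the same currency unit)."""
    cpi = ev / ac
    return ac + (bac - ev) / cpi

def eac_monte_carlo(ac, bac, ev, cpi_low, cpi_mode, cpi_high, n=10_000, seed=1):
    """Monte Carlo EAC: sample the future CPI from a triangular distribution
    (an illustrative choice) to express uncertainty about cost performance;
    returns summary statistics of the simulated cost at completion."""
    rng = random.Random(seed)
    samples = sorted(ac + (bac - ev) / rng.triangular(cpi_low, cpi_high, cpi_mode)
                     for _ in range(n))
    return {"mean": sum(samples) / n,
            "p10": samples[int(0.10 * n)],
            "p90": samples[int(0.90 * n)]}

# Illustration: budget 100, earned 40 so far at an actual cost of 50.
print(round(eac_cpi(ac=50.0, bac=100.0, ev=40.0), 6))  # 125.0
```

A Bayesian adaptive forecast would go one step further, updating the CPI distribution from observed performance data as the project progresses rather than fixing it up front.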
Procedia PDF Downloads 406
1965 Predictive Factors of Exercise Behaviors of Junior High School Students in Chonburi Province
Authors: Tanida Julvanichpong
Abstract:
Exercise has been regarded as a necessary and important means of enhancing physical performance and psychological health. Body weight statistics of junior high school students in Chonburi Province indicate a risk of obesity beyond the standard. To promote exercise among junior high school students in Chonburi Province, knowledge of the factors influencing exercise is essential. Therefore, this study aims to (1) determine the levels of exercise behavior, exercise behavior in the past, perceived barriers to exercise, perceived benefits of exercise, perceived self-efficacy to exercise, feelings associated with exercise behavior, influence of the family on exercise, influence of friends on exercise, and the perceived influence of the environment on exercise; and (2) examine the predictive ability of each of the above factors, together with personal factors (sex, educational level), for exercise behavior. Pender's Health Promotion Model was used as a guide for the study. The sample included 652 students in junior high schools in Chonburi Province, selected by multi-stage random sampling. Data were collected using self-administered questionnaires and analyzed using descriptive statistics, Pearson's product-moment correlation coefficient, eta, and stepwise multiple regression analysis. The research results showed that: 1. Perceived benefits of exercise, influence of teachers, influence of the environment, and feelings associated with exercise behavior were at a high level. Influence of the family on exercise, exercise behavior, exercise behavior in the past, perceived self-efficacy to exercise, and influence of friends were at a moderate level. Perceived barriers to exercise were at a low level. 2.
Exercise behavior was significantly and positively related to perceived benefits of exercise, influence of the family on exercise, exercise behavior in the past, perceived self-efficacy to exercise, influence of friends, influence of teachers, influence of the environment, and feelings associated with exercise behavior (p < .01), and significantly and negatively related to educational level and perceived barriers to exercise (p < .01). Exercise behavior was also significantly related to sex (eta = 0.243, p = .000). 3. Exercise behavior in the past and influence of the family on exercise together explained 60.10 percent of the variance in exercise behavior in male students (p < .01). Exercise behavior in the past, perceived self-efficacy to exercise, perceived barriers to exercise, and educational level together explained 52.60 percent of the variance in exercise behavior in female students (p < .01).
Keywords: predictive factors, exercise behaviors, junior high school, Chonburi Province
Procedia PDF Downloads 618
1964 Small Scale Mobile Robot Auto-Parking Using Deep Learning, Image Processing, and Kinematics-Based Target Prediction
Authors: Mingxin Li, Liya Ni
Abstract:
Autonomous parking is a valuable feature applicable to many robotics applications, such as tour guide robots, UV sanitizing robots, food delivery robots, and warehouse robots. With auto-parking, the robot is able to park at the charging zone and charge itself without human intervention. Compared to self-driving vehicles, auto-parking is more challenging for a small-scale mobile robot equipped with only a front camera, because the camera view is limited by the robot's height and the narrow Field of View (FOV) of the inexpensive camera. In this research, auto-parking of a small-scale mobile robot with a front camera only was achieved in a four-step process: Firstly, transfer learning was performed on AlexNet, a popular pre-trained convolutional neural network (CNN). It was trained with 150 pictures of empty parking slots and 150 pictures of occupied parking slots taken from the view angle of a small-scale robot. The dataset was divided into 70% of the images for training and the remaining 30% for validation. An average success rate of 95% was achieved. Secondly, the image of the detected empty parking space was processed with edge detection, followed by the computation of parametric representations of the boundary lines using the Hough Transform algorithm. Thirdly, the positions of the entrance point and center of the available parking space were predicted based on the robot kinematic model as the robot drove closer to the parking space, because the boundary lines disappeared partially or completely from its camera view due to the height and FOV limitations. The robot used its wheel speeds to compute the position of the parking space with respect to its changing local frame as it moved along, based on its kinematic model. Lastly, the predicted entrance point of the parking space was used as the reference for the motion control of the robot until it was replaced by the actual center when it became visible again to the robot.
The linear and angular velocities of the robot chassis center were computed based on the error between the current chassis center and the reference point. The left and right wheel speeds were then obtained using inverse kinematics and sent to the motor driver. The above-mentioned four subtasks were all successfully accomplished, with the transfer learning, image processing, and target prediction performed in MATLAB, while the motion control and image capture were conducted on a self-built small-scale differential-drive mobile robot. The small-scale robot employs a Raspberry Pi board, a Pi camera, an L298N dual H-bridge motor driver, a USB power module, a power bank, four wheels, and a chassis. Future research includes three areas: the integration of all four subsystems into one hardware/software platform, with an upgrade to an Nvidia Jetson Nano board that provides superior performance for deep learning and image processing; more testing and validation of the identification of available parking spaces and their boundary lines; and improvement of performance after the hardware/software integration is completed.
Keywords: autonomous parking, convolutional neural network, image processing, kinematics-based prediction, transfer learning
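The inverse-kinematics step and the kinematics-based target prediction described above can be sketched as follows. This is a minimal sketch of a standard differential-drive model; the wheel radius and track width are hypothetical values, not the robot's actual dimensions.

```python
import math

def diff_drive_wheel_speeds(v, omega, wheel_radius, track_width):
    """Inverse kinematics of a differential-drive robot: chassis linear
    velocity v (m/s) and angular velocity omega (rad/s) -> left and right
    wheel angular speeds (rad/s)."""
    w_left = (v - omega * track_width / 2.0) / wheel_radius
    w_right = (v + omega * track_width / 2.0) / wheel_radius
    return w_left, w_right

def dead_reckon(x, y, theta, v, omega, dt):
    """Simple odometry update used to keep predicting the parking-space
    entrance in the robot's moving local frame once the boundary lines
    leave the camera's field of view."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Drive straight at 0.2 m/s with hypothetical 5 cm wheels and 15 cm track:
print(diff_drive_wheel_speeds(0.2, 0.0, 0.05, 0.15))  # (4.0, 4.0)
```

In the described system, the controller would compute (v, omega) from the error to the reference point, convert them to wheel speeds with the first function, and use the second to update the predicted target position between camera detections.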
Procedia PDF Downloads 133
1963 The Influence of Infiltration and Exfiltration Processes on Maximum Wave Run-Up: A Field Study on Trinidad Beaches
Authors: Shani Brathwaite, Deborah Villarroel-Lamb
Abstract:
Wave run-up may be defined as the time-varying position of the landward extent of the water's edge, measured vertically from the mean water level position. The hydrodynamics of the swash zone and the accurate prediction of maximum wave run-up play a critical role in the study of coastal engineering. An understanding of these processes is necessary for the modeling of sediment transport, beach recovery, and the design and maintenance of coastal engineering structures. However, due to the complex nature of the swash zone, there remains a lack of detailed knowledge in this area. In particular, bed porosity, and ultimately infiltration/exfiltration processes, have received insufficient consideration in the development of wave run-up models. Theoretically, there should be an inverse relationship between maximum wave run-up and beach porosity: the greater the rate of infiltration during an event, associated with a larger bed porosity, the lower the magnitude of the maximum wave run-up. Additionally, most models have been developed using data collected on North American or Australian beaches and may have limitations when used for operational forecasting in Trinidad. This paper aims to assess the influence and significance of infiltration and exfiltration processes on wave run-up magnitudes within the swash zone. It also pays particular attention to how well various empirical formulae can predict maximum run-up on contrasting beaches in Trinidad. Traditional surveying techniques will be used to collect wave run-up and cross-sectional data on various beaches. Wave data from wave gauges and wave models will be used, as well as porosity measurements collected using a double-ring infiltrometer. The relationship between maximum wave run-up and differing physical parameters will be investigated using correlation analyses.
These physical parameters comprise wave and beach characteristics such as wave height, wave direction, period, beach slope, the magnitude of wave setup, and beach porosity. Most parameterizations of maximum wave run-up are described using differing parameters and do not always have good predictive capability. This study seeks to improve the formulation of wave run-up by using the aforementioned parameters to generate a formulation with a special focus on the influence of infiltration/exfiltration processes. This will further contribute to improving the prediction of sediment transport, beach recovery, and the design of coastal engineering structures in Trinidad.
Keywords: beach porosity, empirical models, infiltration, swash, wave run-up
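As a concrete example of the empirical formulae the study sets out to test, a minimal sketch of one widely cited run-up parameterization, that of Stockdon et al. (2006), is shown below. It is not the formulation developed in this paper and, notably, contains no porosity or infiltration term, which is exactly the gap the study addresses.

```python
import math

def stockdon_r2(h0, t, beta, g=9.81):
    """2% exceedance run-up per Stockdon et al. (2006):
    R2 = 1.1 * (0.35 * beta * sqrt(H0 * L0)
                + sqrt(H0 * L0 * (0.563 * beta**2 + 0.004)) / 2),
    with h0 the deep-water significant wave height (m), t the peak wave
    period (s), beta the foreshore slope, and L0 = g*t^2/(2*pi) the
    deep-water wavelength."""
    l0 = g * t ** 2 / (2.0 * math.pi)
    setup = 0.35 * beta * math.sqrt(h0 * l0)          # wave setup term
    swash = math.sqrt(h0 * l0 * (0.563 * beta ** 2 + 0.004)) / 2.0
    return 1.1 * (setup + swash)
```

Correlating measured maximum run-up against predictions like this one, alongside porosity measured with the double-ring infiltrometer, is the kind of comparison the study proposes for Trinidad beaches.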
Procedia PDF Downloads 357
1962 Modeling and Prediction of Zinc Extraction Efficiency from Concentrate by Operating Condition and Using Artificial Neural Networks
Authors: S. Mousavian, D. Ashouri, F. Mousavian, V. Nikkhah Rashidabad, N. Ghazinia
Abstract:
The pH, temperature, and extraction time of each stage, the agitation speed, and the delay time between stages affect the efficiency of zinc extraction from concentrate. In this research, the efficiency of zinc extraction was predicted as a function of the mentioned variables by artificial neural networks (ANN). ANNs with different layers were employed, and the results show that the network with 8 neurons in the hidden layer is in good agreement with the experimental data.
Keywords: zinc extraction, efficiency, neural networks, operating condition
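A minimal sketch of the network architecture described above (five operating-condition inputs, one hidden layer with 8 neurons) is shown below. The tanh activation and all weights are illustrative placeholders, since the fitted parameters are not given in the abstract; in the study they would be obtained by training on the experimental data.

```python
import math

def mlp_predict(x, w1, b1, w2, b2):
    """Forward pass of a one-hidden-layer ANN mapping operating conditions
    to zinc extraction efficiency: hidden = tanh(W1 x + b1),
    output = w2 . hidden + b2."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
              for row, bi in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Inputs: [pH, temperature, extraction time, agitation speed, delay time]
x = [2.0, 60.0, 90.0, 300.0, 10.0]
w1 = [[0.01] * 5 for _ in range(8)]   # hypothetical 5-input, 8-neuron layer
b1 = [0.0] * 8
w2 = [0.1] * 8
b2 = 0.5
print(round(mlp_predict(x, w1, b1, w2, b2), 3))
```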
Procedia PDF Downloads 547
1961 Presence of Nesting Parrot (Psittacula krameri borealis), Order Psittaciformes, Family Psittacidae, in District Mirpurkhas, Sindh, Pakistan
Authors: Aisha Liaquat Ali, Ghulam Sarwar Gachal, Muhammad Yusuf Sheikh
Abstract:
The parrot (Psittacula krameri borealis), commonly known as 'Tota', belongs to the order Psittaciformes and family Psittacidae. Its range spans tropical to subtropical regions. The parrot (Psittacula krameri borealis) has been categorized as a least-concern species. The core aim of the present study is to investigate the nesting of the parrot (Psittacula krameri borealis); site observations were made at different intervals in various adjoining areas of District Mirpurkhas from June 2017 to May 2018. During the study period, 15 nests were observed in total; the number of nests was highest in June (33.3%), followed by August (20.0%), July (13.3%), March (13.3%), and April (13.3%), while the lowest number of nests was observed in September (6.6%), and no nests were present from October to February. The number of nests was highest in June, when temperatures ranged between 20 °C and 45 °C.
Keywords: District Mirpurkhas Sindh Pakistan, nesting, parrot (Psittacula krameri), presence
Procedia PDF Downloads 166
1960 Nonlinear Modelling of Sloshing Waves and Solitary Waves in Shallow Basins
Authors: Mohammad R. Jalali, Mohammad M. Jalali
Abstract:
The earliest theories of sloshing waves and solitary waves, based on potential-theory idealisations and irrotational flow, have been extended to be applicable to more realistic domains. To this end, computational fluid dynamics (CFD) methods are widely used. Three-dimensional CFD methods, such as Navier-Stokes solvers with volume-of-fluid treatment of the free surface and Navier-Stokes solvers with mappings of the free surface, inherently impose high computational expense; therefore, considerable effort has gone into developing depth-averaged approaches. Examples of such approaches include the Green–Naghdi (GN) equations. In the Cartesian system, the GN velocity profile depends on the horizontal directions, the x-direction and y-direction. The effect of the vertical direction (z-direction) is also taken into consideration by applying a weighting function in the approximation. GN theory considers the effect of vertical acceleration and the consequent non-hydrostatic pressure. Moreover, in GN theory, the flow is rotational. The present study illustrates the application of the GN equations to the propagation of sloshing waves and solitary waves. For this purpose, a GN equations solver is verified against the benchmark tests of Gaussian hump sloshing and solitary wave propagation in shallow basins. Analysis of the free-surface sloshing of even harmonic components of an initial Gaussian hump demonstrates that the GN model gives predictions in satisfactory agreement with the linear analytical solutions. Discrepancies between the GN predictions and the linear analytical solutions arise from wave nonlinearities associated with the wave amplitude itself and with wave-wave interactions. Numerically predicted solitary wave propagation indicates that the GN model produces simulations in good agreement with the analytical solution of linearised wave theory.
Comparison between the GN model's numerical prediction and the result from perturbation analysis confirms that the nonlinear interaction between a solitary wave and a solid wall is satisfactorily modelled. Moreover, solitary wave propagation at an angle to the x-axis and the interaction of solitary waves with each other are simulated to validate the developed model.
Keywords: Green–Naghdi equations, nonlinearity, numerical prediction, sloshing waves, solitary waves
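The analytical solitary wave used as a benchmark in such verifications can be written out explicitly. The expression below is the classical first-order (Boussinesq-type) solution; it is assumed here for illustration, since the abstract does not state which analytical form was used.

```python
import math

def solitary_wave_eta(x, t, a, h, g=9.81):
    """First-order solitary wave surface elevation over still-water depth h:
    eta(x, t) = a * sech^2( sqrt(3a / (4 h^3)) * (x - c t) ),
    with celerity c = sqrt(g (h + a)) and crest amplitude a."""
    c = math.sqrt(g * (h + a))
    k = math.sqrt(3.0 * a / (4.0 * h ** 3))
    return a / math.cosh(k * (x - c * t)) ** 2

# Crest (x = c t) sits exactly at elevation a; the profile decays away from it.
print(round(solitary_wave_eta(0.0, 0.0, 0.1, 1.0), 3))  # 0.1
```

A depth-averaged solver such as the GN model can be checked by initialising this profile and confirming the wave translates at speed c with its shape preserved.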
Procedia PDF Downloads 287
1959 Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms
Authors: Man-Yun Liu, Emily Chia-Yu Su
Abstract:
Alzheimer's disease (AD) is a public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, a costly disease for the healthcare system. Unfortunately, the cause of AD is poorly understood; furthermore, the treatments of AD so far can only alleviate symptoms rather than cure or stop the progress of the disease. Currently, there are several ways to diagnose AD, including medical imaging, which can be used to distinguish between AD, other dementias, and early-onset AD, and cerebrospinal fluid (CSF) analysis. Compared with other diagnostic tools, the blood (plasma) test has advantages as an approach to population-based disease screening because it is simpler, less invasive, and cost effective. In our study, we used the blood biomarker dataset of the Alzheimer's Disease Neuroimaging Initiative (ADNI), which was funded by the National Institutes of Health (NIH), for data analysis and to develop a prediction model. We used independent analysis of datasets to identify plasma protein biomarkers predicting early-onset AD. Firstly, to compare the basic demographic statistics between the cohorts, we used SAS Enterprise Guide for data preprocessing and statistical analysis. Secondly, we used logistic regression, neural networks, and decision trees to validate biomarkers with SAS Enterprise Miner. This study used data generated from ADNI, containing 146 blood biomarkers from 566 participants. Participants included cognitively normal (healthy) subjects, subjects with mild cognitive impairment (MCI), and patients suffering from Alzheimer's disease (AD). Participants' samples were separated into two groups, healthy versus MCI and healthy versus AD, respectively. We used the two groups to compare important biomarkers of AD and MCI. In preprocessing, we used a t-test to filter 41/47 features between the two groups (healthy and AD, healthy and MCI) before applying machine learning algorithms. We then built models with four machine learning methods; the best AUCs of the two groups were 0.991 and 0.709, respectively.
We want to stress that a simple, less invasive, common blood (plasma) test may also enable early diagnosis of AD. In our opinion, the results provide evidence that blood-based biomarkers might be an alternative diagnostic tool before further examination with CSF and medical imaging. A comprehensive study on the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will allow physicians the opportunity for early intervention and treatment.
Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning
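The t-test screening step described above can be sketched as follows. This is a minimal sketch: the Welch statistic and the |t| cutoff are illustrative choices, since the abstract does not state the exact test variant or significance threshold used.

```python
import math

def welch_t(xs, ys):
    """Welch two-sample t statistic, used here to screen biomarkers whose
    group means differ between, e.g., healthy and AD participants."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((v - mx) ** 2 for v in xs) / (nx - 1)
    vy = sum((v - my) ** 2 for v in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

def filter_features(group_a, group_b, threshold=2.0):
    """Keep indices of features whose |t| exceeds a cutoff (illustrative
    constant; the study filtered on a p-value instead). Each group is a
    list of samples, each sample a list of biomarker values."""
    kept = []
    for j in range(len(group_a[0])):
        t = welch_t([row[j] for row in group_a], [row[j] for row in group_b])
        if abs(t) > threshold:
            kept.append(j)
    return kept

# Feature 0 separates the toy groups; feature 1 does not.
healthy = [[1.0, 5.0], [1.1, 5.1], [0.9, 5.0], [1.0, 4.9]]
patients = [[3.0, 5.0], [3.1, 5.1], [2.9, 4.9], [3.0, 5.0]]
print(filter_features(healthy, patients))  # [0]
```

The surviving features would then feed the downstream classifiers (logistic regression, neural network, decision tree) whose AUCs the study reports.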
Procedia PDF Downloads 324
1958 Pollutant Loads of Urban Runoff from a Mixed Residential-Commercial Catchment
Authors: Carrie Ho, Tan Yee Yong
Abstract:
Urban runoff quality for a mixed residential-commercial land use catchment in Miri, Sarawak was investigated for three storm events in 2011. Samples from the three storm events were tested for five water quality parameters, namely TSS, COD, BOD5, TP, and Pb. Concentrations of the pollutants were found to vary significantly between storms but were generally influenced by the length of the antecedent dry period and the strength of the rainfall intensities. Runoff from the study site showed a significant level of pollution for all the parameters investigated. Based on the National Water Quality Standards for Malaysia (NWQS), stormwater quality from the study site was polluted and exceeded class III water for TSS and BOD5, with maximum EMCs of 177 and 24 mg/L, respectively. Design pollutant loads based on a design storm of 3-month average recurrence interval (ARI) for TSS, COD, BOD5, TP, and Pb were estimated to be 40, 9.4, 5.4, 1.7, and 0.06 kg/ha, respectively. The design pollutant loads can be used to estimate loadings from similar catchments within Miri City.
Keywords: mixed land-use, urban runoff, pollutant load, national water quality
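The EMC-to-load conversion underlying design loads of this kind can be sketched as follows. This is a standard textbook formulation, not code from the study; the unit conventions are the only assumptions.

```python
def pollutant_load_kg(emc_mg_per_l, runoff_volume_m3):
    """Event pollutant load from an event mean concentration (EMC).
    Since 1 mg/L == 1 g/m^3, load [kg] = EMC [mg/L] * volume [m^3] / 1000."""
    return emc_mg_per_l * runoff_volume_m3 / 1000.0

def load_per_hectare(emc_mg_per_l, runoff_depth_mm, area_ha=1.0):
    """Areal load [kg/ha] for a given runoff depth over a catchment:
    1 mm of runoff over 1 ha equals 10 m^3 of water."""
    volume_m3 = runoff_depth_mm * 10.0 * area_ha
    return pollutant_load_kg(emc_mg_per_l, volume_m3) / area_ha

# 100 mg/L carried in 10 m^3 of runoff is 1 kg of pollutant.
print(pollutant_load_kg(100.0, 10.0))  # 1.0
```

Applying such a conversion with the design-storm runoff depth and the measured EMCs is how per-hectare design loads like those quoted above (e.g., 40 kg/ha of TSS) are typically obtained.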
Procedia PDF Downloads 333
1957 Effect of Non-Genetic Factors and Heritability Estimate of Some Productive and Reproductive Traits of Holstein Cows in Middle of Iraq
Authors: Salim Omar Raoof
Abstract:
This study was conducted at the Al-Salam cows' station for milk production, located in Al-Latifiya district, Al-Mahmudiyah district (25 km south of Baghdad governorate), on a sample of 180 Holstein cows imported from Germany by the Taj Al-Nahrain company, in order to study the effect of parity, season, and calving year on Total Milk Production (TMP), lactation period (LP), calving interval, and services per conception, and to estimate the heritability of the studied traits. The results showed that the overall means of TMP and LP were 3172.53 kg and 237.09 days, respectively. The parity effect on TMP in Holstein cows was highly significant (P≤0.01). Total milk production increased with the advance of parity and mostly reached its maximum value in the 4th and 3rd parities, being 3305.87 kg and 3286.35 kg, respectively. Season of calving had a highly significant (P≤0.01) effect on TMP. Cows that calved in spring had higher milk production than those that calved in other seasons. Season of calving also had a highly significant (P≤0.01) effect on services per conception. The results of the study showed that the heritability values for TMP, LP, SPC, and CI were 0.21, 0.08, 0.08, and 0.07, respectively.
Keywords: cows, non-genetic, milk production, heritability
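The abstract does not state how heritability was estimated; one standard approach for herd data of this kind, a paternal half-sib analysis, is sketched below. The variance components in the example are invented purely to illustrate the arithmetic.

```python
def halfsib_heritability(sire_var, residual_var):
    """Paternal half-sib heritability estimate:
    h^2 = 4 * sigma_s^2 / (sigma_s^2 + sigma_w^2),
    where sigma_s^2 is the between-sire variance component and sigma_w^2
    the within-sire (residual) component. The factor 4 arises because the
    sire component captures roughly one quarter of the additive genetic
    variance. This is one common method, not necessarily the paper's."""
    return 4.0 * sire_var / (sire_var + residual_var)

# Illustrative: a sire component that is 5.25% of the total phenotypic
# variance yields an h^2 matching the paper's 0.21 for milk production.
print(round(halfsib_heritability(5.25, 94.75), 2))  # 0.21
```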
Procedia PDF Downloads 80
1956 Keylogger Prevention with Time-Sensitive Obfuscation
Authors: Chien-Wei Hung, Fu-Hau Hsu, Chuan-Sheng Wang, Chia-Hao Lee
Abstract:
Nowadays, the abuse of keyloggers is one of the most widespread approaches to stealing sensitive information. In this paper, we propose an On-Screen Prompts Approach to Keyloggers (OSPAK) and its analysis, which is installed on public computers. OSPAK utilizes a canvas to cue users when their keystrokes are going to be logged or ignored by OSPAK. This approach can protect computers against the recording of sensitive inputs, obfuscating keyloggers with letters inserted among users' keystrokes. It adds a canvas below each password field in a webpage and consists of three parts: two background areas, a hit area, and a moving foreground object. Letters typed in different valid time intervals are combined in accordance with their time interval order, and valid time intervals are interleaved with invalid time intervals. OSPAK uses animation to visualize valid and invalid time intervals and can be integrated into a webpage as a browser extension. We have tested it against a series of known keyloggers and also performed a study with 95 users to evaluate how easily the tool is used. Experimental results from the volunteers show that OSPAK is a simple approach.
Keywords: authentication, computer security, keylogger, privacy, information leakage
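The core idea, keeping only the keystrokes that fall in visually cued valid time intervals while decoy letters in invalid intervals are discarded, can be sketched as follows. The interval bounds and keystroke timings here are hypothetical, and the real OSPAK operates in the browser rather than as standalone code.

```python
def recover_input(keystrokes, valid_intervals):
    """Reconstruct the real input from timestamped keystrokes: a keylogger
    records every character, but only characters typed inside the cued
    'valid' time intervals belong to the actual password; the rest are
    obfuscating noise interleaved among the user's keystrokes."""
    out = []
    for t, ch in keystrokes:
        if any(start <= t < end for start, end in valid_intervals):
            out.append(ch)
    return "".join(out)

# A keylogger would capture "sxeqc"; the protected side keeps only "sec".
keys = [(0.1, "s"), (0.4, "x"), (0.6, "e"), (0.9, "q"), (1.1, "c")]
valid = [(0.0, 0.3), (0.5, 0.8), (1.0, 1.3)]
print(recover_input(keys, valid))  # sec
```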
Procedia PDF Downloads 123
1955 Prediction of Turbulent Separated Flow in a Wind Tunnel
Authors: Karima Boukhadia
Abstract:
In the present study, the subsonic flow in an asymmetric diffuser was simulated numerically using the code CFX 11.0 and its grid generator ICEM CFD. Two turbulence models were tested: k-ε and k-ω SST. The results obtained showed that the k-ε model markedly over-estimates the velocity close to the wall, and that the k-ω SST model is qualitatively in good agreement with the experimental results of Buice and Eaton (1997). They also showed that the separation and reattachment of the fluid on the tilted wall strongly depend on its angle of inclination, and that the length of the separation zone increases with the angle of inclination of the lower wall of the diffuser.
Keywords: asymmetric diffuser, separation, reattachment, tilt angle, separation zone
Procedia PDF Downloads 577
1954 Prediction of Thermodynamic Properties of N-Heptane in the Critical Region
Authors: Sabrina Ladjama, Aicha Rizi, Azzedine Abbaci
Abstract:
In this work, we use the crossover model to formulate a comprehensive fundamental equation of state for the thermodynamic properties of several n-alkanes in the critical region that extends into the classical region. This equation of state is constructed on the basis of a comparison with selected measurements of pressure-density-temperature data and isochoric and isobaric heat capacities. The model can be applied in a wide range of temperatures and densities around the critical point for n-heptane. It is found that the developed model represents most of the reliable experimental data accurately.
Keywords: crossover model, critical region, fundamental equation, n-heptane
Procedia PDF Downloads 476
1953 Atomistic Study of Structural and Phase Transitions of TmAs Semiconductor, Using the FPLMTO Method
Authors: Rekab Djabri Hamza, Daoud Salah
Abstract:
We report first-principles calculations of the structural and magnetic properties of the TmAs compound in the zinc blende (B3) and CsCl (B2) structures, employing density functional theory (DFT) within the local density approximation (LDA). We use the full-potential linear muffin-tin orbital (FP-LMTO) method as implemented in the LMTART-MINDLAB code. Results are given for the lattice parameter (a), bulk modulus (B), and its first derivative (B') in the different structures NaCl (B1) and CsCl (B2). The most important result of this work is the prediction of a possible transition from the rocksalt (NaCl, B1) structure to the CsCl (B2) structure at 32.96 GPa for TmAs. These results were obtained within the LDA approximation.
Keywords: LDA, phase transition, properties, DFT
Procedia PDF Downloads 120
1952 Mathematical Modeling of Drip Emitter Discharge of Trapezoidal Labyrinth Channel
Authors: N. Philipova
Abstract:
The influence of the geometric parameters of a trapezoidal labyrinth channel on the emitter discharge is investigated in this work. Among the geometric parameters of the labyrinth channel, the impacts of the dentate angle, the dentate spacing, and the dentate height are studied. Numerical simulations of the water flow movement were performed according to a central composite design using the commercial codes GAMBIT and FLUENT. The inlet pressure of the dripper was set to 1 bar. The objective of this paper is to derive a mathematical model of the emitter discharge depending on the dentate angle, the dentate spacing, and the dentate height of the labyrinth channel. The obtained mathematical model is a second-order polynomial including two-way interactions among the geometric parameters. The dentate spacing has the most important and positive influence on the emitter discharge, followed by the joint effect of the dentate spacing and the dentate height. The dentate angle in the observed interval has no significant effect on the emitter discharge. The obtained model can be used as a basis for future emitter design.
Keywords: drip irrigation, labyrinth channel hydrodynamics, numerical simulations, Reynolds stress model
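The form of the fitted model, a second-order polynomial with two-way interactions, can be evaluated as shown below. All coefficients are placeholders, since the fitted values are not given in the abstract; in practice they would come from regressing the simulated discharges on the design points.

```python
def emitter_discharge(x, b0, lin, quad, inter):
    """Evaluate a second-order response-surface model:
    q = b0 + sum_i b_i * x_i + sum_i b_ii * x_i^2 + sum_{i<j} b_ij * x_i * x_j,
    with x = [dentate angle, dentate spacing, dentate height] and `inter`
    listing the interaction coefficients in (0,1), (0,2), (1,2) order."""
    q = b0
    n = len(x)
    for i in range(n):
        q += lin[i] * x[i] + quad[i] * x[i] ** 2
    k = 0
    for i in range(n):
        for j in range(i + 1, n):
            q += inter[k] * x[i] * x[j]
            k += 1
    return q

# Placeholder coefficients: intercept 1, unit linear terms, no quadratics
# or interactions, evaluated at x = [1, 2, 3].
print(emitter_discharge([1.0, 2.0, 3.0], 1.0,
                        [1.0, 1.0, 1.0], [0.0] * 3, [0.0] * 3))  # 7.0
```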
Procedia PDF Downloads 185
1950 A Generalized Model for Performance Analysis of Airborne Radar in Clutter Scenario
Authors: Vinod Kumar Jaysaval, Prateek Agarwal
Abstract:
Performance prediction of airborne radar is a challenging and cumbersome task in clutter scenarios for different types of targets. A generalized model is required to predict the performance of radar for air targets as well as ground moving targets. In this paper, we propose a generalized model to bring out the performance of airborne radar for different Pulse Repetition Frequencies (PRFs) as well as different types of targets. The model provides a platform to bring out different subsystem parameters for different applications and performance requirements under different types of clutter terrain.
Keywords: airborne radar, blind zone, clutter, probability of detection
Procedia PDF Downloads 470
1949 A Gauge Repeatability and Reproducibility Study for Multivariate Measurement Systems
Authors: Jeh-Nan Pan, Chung-I Li
Abstract:
Measurement system analysis (MSA) plays an important role in helping organizations to improve their product quality. Generally speaking, the gauge repeatability and reproducibility (GRR) study is performed according to the MSA handbook stated in the QS9000 standards. Usually, a GRR study for assessing the adequacy of gauge variation needs to be conducted prior to process capability analysis. Traditional MSA considers only a single quality characteristic. With the advent of modern technology, industrial products have become very sophisticated, with more than one quality characteristic. Thus, it becomes necessary to perform multivariate GRR analysis for a measurement system when collecting data with multiple responses. In this paper, we take the correlation coefficients among tolerances into account to revise the multivariate precision-to-tolerance (P/T) ratio proposed by Majeske (2008). We then compare the performance of our revised P/T ratio with that of the existing ratios. The simulation results show that our revised P/T ratio outperforms the others in terms of robustness and proximity to the actual value. Moreover, the optimal allocation of several parameters, such as the number of quality characteristics (v), sample size of parts (p), number of operators (o), and replicate measurements (r), is discussed using the confidence interval of the revised P/T ratio. Finally, a standard operating procedure (S.O.P.) for performing the GRR study for multivariate measurement systems is proposed based on the research results. Hopefully, it can serve as a useful reference for quality practitioners when conducting such studies in industry.
Keywords: gauge repeatability and reproducibility, multivariate measurement system analysis, precision-to-tolerance ratio
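For reference, the classical univariate P/T ratio that the paper's multivariate revision generalizes can be sketched as follows; the revised multivariate form with correlated tolerances is the paper's contribution and is not reproduced here.

```python
def pt_ratio(gauge_std, lsl, usl, k=6.0):
    """Univariate precision-to-tolerance ratio:
    P/T = k * sigma_gauge / (USL - LSL),
    with k = 6 (sometimes 5.15) per MSA convention; sigma_gauge is the
    combined repeatability-and-reproducibility standard deviation."""
    return k * gauge_std / (usl - lsl)

# Common rule of thumb: P/T <= 0.1 acceptable, >= 0.3 unacceptable.
print(round(pt_ratio(0.05, 9.0, 12.0), 2))  # 0.1
```

In the multivariate setting the paper addresses, each quality characteristic has its own gauge variance and tolerance, and the characteristics' tolerances are correlated, which is why a simple ratio like this one is no longer sufficient.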
Procedia PDF Downloads 2621948 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion
Authors: Ali Kazemi
Abstract:
In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study presents a method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This period, marked by considerable volatility and transformation in financial markets, provides a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that accurately reflects market intricacies. We collect daily opening, closing, high and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into the market's buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate key macroeconomic indicators such as interest rates, inflation rates, GDP growth, and unemployment rates into our model. Our GCN algorithm is adept at learning the relational patterns among the financial instruments represented as nodes in a comprehensive market graph.
Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complex network of influences governing market movements. Complementing this, our LSTM algorithm is trained on sequences of the spatial-temporal representations learned by the GCN, enriched with historical price and volume data. This lets the LSTM capture and predict temporal market trends accurately. In a comprehensive evaluation of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting daily price movements. The Root Mean Square Error (RMSE) was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that depend on predicting the direction of price moves. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to revolutionize investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting
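The GCN propagation step described above can be sketched in a few lines. This is a generic illustration with NumPy of one symmetric-normalized graph convolution over the asset graph (the weight matrix W would be learned jointly with the LSTM in the actual model; it is not the authors' implementation):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: H = ReLU(D^-1/2 (A+I) D^-1/2 X W).

    A: adjacency among assets (e.g., co-movement links),
    X: per-asset node features for one trading day (price, volume, ...),
    W: learned weight matrix."""
    A_hat = A + np.eye(A.shape[0])       # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)      # symmetric normalization
    H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
    return np.maximum(H, 0.0)            # ReLU

# The LSTM stage would then consume the embedding sequence
# [gcn_layer(A, X_t, W) for each day t] to model temporal trends.
```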
Procedia PDF Downloads 681947 Assessment and Forecasting of the Impact of Negative Environmental Factors on Public Health
Authors: Nurlan Smagulov, Aiman Konkabayeva, Akerke Sadykova, Arailym Serik
Abstract:
Introduction. Adverse environmental factors do not immediately lead to pathological changes in the body. They can instead drive the growth of pre-pathology, characterized by shifts in physiological, biochemical, immunological and other indicators of the body's state. These disorders are unstable, reversible and indicative of the body's reactions. There is an opportunity to objectively judge the internal structure of the body's adaptive reactions at the level of individual organs and systems. In order to obtain a stable response of the body to the chronic effects of unfavorable environmental factors of low intensity (compared to factors of the production environment), a period called the «lag time» is needed. Results obtained without considering this factor distort reality and, for the most part, cannot reliably support the main conclusions of any work. A technique is needed that reduces methodological errors and combines mathematical logic, statistical methods and a medical point of view, which ultimately affects the obtained results and avoids false correlations. Objective. Development of a methodology for assessing and predicting the impact of environmental factors on population health, considering the «lag time». Methods. Research objects: environmental indicators and population morbidity indicators. The database on the environmental state was compiled from the monthly newsletters of Kazhydromet. Data on population morbidity were obtained from regional statistical yearbooks. When processing the statistical data, a time interval (lag) was determined for each «argument-function» pair, that is, the interval after which the effect of the harmful factor (argument) fully manifests itself in the indicators of the organism's state (function). The lag value was determined by cross-correlation functions of arguments (environmental indicators) with functions (morbidity).
Correlation coefficients (r) and their reliability (t), Fisher's criterion (F) and the share of influence (R2) of the main factor (argument) on the indicator (function) were calculated as percentages. Results. The ecological situation of an industrially developed region has an impact on health indicators, but with some nuances. Fundamentally opposite results were obtained in the mathematical data processing when the «lag time» was considered. Namely, a pronounced correlation was revealed after the two databases (ecology-morbidity) were shifted. For example, the lag period was 4 years for dust concentration and general morbidity, and 3 years for childhood morbidity. These periods accounted for the maximum values of the correlation coefficients and the largest share of the influencing factor. Similar results were observed for the concentrations of soot, dioxide, etc. Comprehensive statistical processing using multiple correlation-regression and variance analysis confirms the correctness of the above statement. This method provided an integrated approach to predicting the degree of pollution of the main environmental components and to identifying the most dangerous combinations of concentrations of the leading negative environmental factors. Conclusion. The method of assessing the «environment-public health» system considering the «lag time» is qualitatively different from the traditional one (without considering the «lag time»). The results differ significantly and are more amenable to a logical explanation of the obtained dependencies. The method makes it possible to present the quantitative and qualitative dependences within the «environment-public health» system in a different way.Keywords: ecology, morbidity, population, lag time
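The lag-selection step can be illustrated with a small sketch: shift the morbidity series against the environmental series and keep the lag with the strongest cross-correlation. The synthetic data and plain Pearson correlation below are illustrative assumptions; the study's actual cross-correlation functions may differ.

```python
def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_lag(environment, morbidity, max_lag=6):
    """Return the lag (in years) at which the environmental series
    correlates most strongly with the later morbidity series."""
    scores = {}
    for lag in range(max_lag + 1):
        n = len(environment) - lag
        scores[lag] = pearson(environment[:n], morbidity[lag:lag + n])
    return max(scores, key=lambda k: abs(scores[k])), scores
```

On a series where morbidity simply echoes the pollutant three years later, the function recovers lag = 3 with correlation 1.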
Procedia PDF Downloads 821946 Determination of the Effective Economic and/or Demographic Indicators in Classification of European Union Member and Candidate Countries Using Partial Least Squares Discriminant Analysis
Authors: Esra Polat
Abstract:
Partial Least Squares Discriminant Analysis (PLSDA) is a statistical method for classification that consists of a classical Partial Least Squares Regression (PLSR) in which the dependent variable is a categorical one expressing the class membership of each observation. PLSDA can be applied in many cases where classical discriminant analysis cannot: for example, when the number of observations is low and the number of independent variables is high. When there are missing values, PLSDA can be applied to the data that are available. Finally, it is well suited to situations where multicollinearity among the independent variables is high. The aim of this study is to determine the economic and/or demographic indicators that are effective in grouping the 28 European Union (EU) member countries and 7 candidate countries (including the potential candidates Bosnia and Herzegovina (BiH) and Kosova), using the data set obtained from the World Bank database for 2014. Leaving political issues aside, the analysis is concerned only with the economic and demographic variables that potentially influence a country's eligibility for EU entrance. Hence, this study analyzes both the performance of the PLSDA method in classifying the countries correctly into their pre-defined groups (candidate or member) and the differences between the EU countries and the candidate countries in terms of these indicators. As a result of the PLSDA, a correct classification rate of 100% indicates that all 35 countries are classified correctly. Moreover, the most important variables that determine the status of member and candidate countries in terms of economic indicators are identified as 'external balance on goods and services (% GDP)', 'gross domestic savings (% GDP)' and 'gross national expenditure (% GDP)', meaning that, for 2014, the economic structure of a country is the most important determinant of EU membership.
Subsequently, the model was validated to demonstrate its predictive ability using the data set for 2015. For this prediction sample, 97.14% of the countries are correctly classified. An interesting result is obtained for BiH, which is still only a potential candidate for the EU, yet is predicted as a member of the EU when the 2015 indicator data set is used as a prediction sample. Although BiH has made a significant transformation from a war-torn country to a semi-functional state, ethnic tensions, nationalistic rhetoric and political disagreements are still evident, which inhibit Bosnian progress towards the EU.Keywords: classification, demographic indicators, economic indicators, European Union, partial least squares discriminant analysis
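A one-component PLS-DA can be sketched in a few lines to show the mechanics. The ±1 class coding (member vs. candidate) and midpoint decision threshold are simplifying assumptions; the study's model likely uses more components and proper validation.

```python
import numpy as np

def plsda_fit(X, y):
    """One-component PLS-DA sketch. X: indicator matrix (countries x
    indicators), y: +1/-1 class coding. Returns (mean, weights, threshold)."""
    mu = X.mean(axis=0)
    Xc = X - mu                               # center predictors
    w = Xc.T @ y
    w = w / np.linalg.norm(w)                 # first PLS weight vector
    t = Xc @ w                                # latent scores
    thr = (t[y == 1].mean() + t[y == -1].mean()) / 2
    return mu, w, thr

def plsda_predict(model, Xnew):
    """Assign +1 to observations scoring above the midpoint threshold."""
    mu, w, thr = model
    return np.where((Xnew - mu) @ w > thr, 1, -1)
```

On two well-separated groups the single latent component already classifies every observation correctly.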
Procedia PDF Downloads 2811945 Identifying Diabetic Retinopathy Complication by Predictive Techniques in Indian Type 2 Diabetes Mellitus Patients
Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad
Abstract:
Predicting the risk of diabetic retinopathy (DR) in Indian type 2 diabetes patients is immensely necessary. India has the second largest number of diabetic patients after China, yet to the best of our knowledge not a single risk score for complications has ever been investigated there. Diabetic retinopathy is a serious complication and is the leading cause of visual impairment across countries. Any type or form of DR has been taken as the event of interest, be it mild, background, or grade I, II, III, or IV DR. A sample was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. Collected variables include patient data such as sex, age, height, weight, body mass index (BMI), fasting blood sugar (BSF), postprandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high density lipoprotein (HDL), low density lipoprotein (LDL), very low density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity and history of DR. Cox proportional hazards regression is used to design risk scores for the prediction of retinopathy. Model calibration and discrimination are assessed with the Hosmer-Lemeshow test and the area under the receiver operating characteristic curve (ROC). Overfitting and underfitting of the model are checked by applying regularization techniques, and the best method is selected from among ridge, lasso and elastic net regression. The optimal cut-off point is chosen by Youden's index. The five-year probability of DR is predicted by both the survival function and a two-state Markov chain model, and the better technique is identified. The risk scores developed can be applied by doctors and by patients themselves for self-evaluation.
Furthermore, the five-year probabilities can also be used to forecast and monitor the condition of patients. This provides immense benefit in the real-world application of DR prediction in T2DM.Keywords: Cox proportional hazard regression, diabetic retinopathy, ROC curve, type 2 diabetes mellitus
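The Youden's-index step can be sketched as follows: scan candidate cutoffs of the risk score and keep the one maximizing J = sensitivity + specificity - 1. The toy scores and labels below (1 = developed DR) are illustrative, not study data.

```python
def youden_cutoff(scores, labels):
    """Pick the risk-score cutoff maximizing Youden's J statistic."""
    best_j, best_cut = -1.0, None
    for cut in sorted(set(scores)):
        tp = sum(s >= cut and l == 1 for s, l in zip(scores, labels))
        fn = sum(s < cut and l == 1 for s, l in zip(scores, labels))
        tn = sum(s < cut and l == 0 for s, l in zip(scores, labels))
        fp = sum(s >= cut and l == 0 for s, l in zip(scores, labels))
        j = tp / (tp + fn) + tn / (tn + fp) - 1  # sensitivity + specificity - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```

For perfectly separated scores the chosen cutoff attains J = 1 (sensitivity and specificity both 100%).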
Procedia PDF Downloads 1861944 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms
Authors: Habtamu Ayenew Asegie
Abstract:
Wealth, as opposed to income or consumption, implies a more stable and permanent status. Natural and human-made difficulties can diminish household economies and push their well-being into trouble. Hence, governments and humanitarian agencies devote considerable resources to poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. As a result, this study aims to predict a household's wealth status using ensemble machine learning (ML) algorithms. In this study, design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosted Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), have been used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset is accessed for this purpose from the Central Statistical Agency (CSA)'s database. Various data pre-processing techniques were employed, and model training was conducted using scikit-learn Python library functions. Model evaluation is executed using metrics such as accuracy, precision, recall, F1-score, the area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluations of domain experts. An optimal subset of hyper-parameters for each algorithm was selected through the grid search function for the best prediction. The RF model performed better than the rest of the algorithms, achieving an accuracy of 96.06%, and is better suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms achieve accuracies of 91.53%, 88.44%, and 58.55%, respectively.
The findings suggest that features such as 'Age of household head', 'Total children ever born' in a family, 'Main roof material' of the house, the 'Region' lived in, whether a household uses 'Electricity' or not, and the 'Type of toilet facility' of a household are determinant factors that should be a focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact achieved an 82.28% score in the domain experts' evaluation. Overall, the study shows that ML techniques are effective in predicting the wealth status of households.Keywords: ensemble machine learning, households wealth status, predictive model, wealth status prediction
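The hyper-parameter selection step can be sketched generically. Here `score_fn` stands in for cross-validated accuracy of, say, a Random Forest; the parameter names and toy scoring function are illustrative assumptions, not taken from the study.

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustive hyper-parameter search over a grid, keeping the
    combination with the highest validation score."""
    names = list(param_grid)
    best_score, best_params = float("-inf"), None
    for combo in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, combo))
        s = score_fn(params)          # e.g., cross-validated accuracy
        if s > best_score:
            best_score, best_params = s, params
    return best_params, best_score
```

With scikit-learn available, the same idea is packaged as `GridSearchCV`; the sketch shows what that wrapper does internally.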
Procedia PDF Downloads 431943 Bounded Solution Method for Geometric Programming Problem with Varying Parameters
Authors: Abdullah Ali H. Ahmadini, Firoz Ahmad, Intekhab Alam
Abstract:
The geometric programming problem (GPP) is a well-known non-linear optimization problem with a wide range of applications in engineering. The structure of GPP is quite flexible and fits various decision-making processes easily. The aim of this paper is to present a bounded solution method for GPP with special reference to variation among the right-hand side parameters. The paper takes advantage of two-level mathematical programming and determines the value of the objective function within a specified interval defined by lower and upper bounds. The beauty of the proposed bounded solution method is that it does not require sensitivity analysis of the obtained optimal solution; the value of the objective function is calculated directly under the varying parameters. To show the validity and applicability of the proposed method, a numerical example is presented. A system reliability optimization problem is also illustrated, and the value of the objective function is found to lie within the range of the lower and upper bounds. Finally, conclusions and directions for future research are presented based on the discussed work.Keywords: varying parameters, geometric programming problem, bounded solution method, system reliability optimization
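The core idea can be illustrated on a toy single-variable geometric program, minimize c1*x + c2/x subject to x <= b, where the right-hand side b varies over an interval: solving at the interval's endpoints brackets the optimal value for every b in between. This is a made-up example of the bounding idea, not the paper's formulation.

```python
def gp_objective(x, c1=1.0, c2=4.0):
    """Toy posynomial objective c1*x + c2/x."""
    return c1 * x + c2 / x

def objective_bounds(b_lo, b_hi, c1=1.0, c2=4.0):
    """Solve the toy GP (min c1*x + c2/x s.t. x <= b) at the two extreme
    values of the varying right-hand side b. Because the optimal value is
    monotone in b here, the optimum for any b in [b_lo, b_hi] lies between
    the two numbers returned."""
    x_star = (c2 / c1) ** 0.5            # unconstrained minimizer of c1*x + c2/x
    def solve(b):
        return gp_objective(min(b, x_star), c1, c2)
    vals = (solve(b_lo), solve(b_hi))
    return min(vals), max(vals)
```

With c1 = 1, c2 = 4 the unconstrained optimum is x* = 2 with value 4; tightening the constraint to b = 1 raises the optimum to 5, so for b in [1, 3] the objective is bounded in [4, 5].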
Procedia PDF Downloads 1341942 Classification of Germinatable Mung Bean by Near Infrared Hyperspectral Imaging
Authors: Kaewkarn Phuangsombat, Arthit Phuangsombat, Anupun Terdwongworakul
Abstract:
Hard seeds will not grow and can cause mold during the sprouting process; thus, the hard seeds need to be separated from the normal seeds. Near infrared hyperspectral imaging in the range of 900 to 1700 nm was used to develop a partial least squares discriminant analysis model to discriminate the hard seeds from the normal seeds. The orientation of the seeds was also studied to compare the performance of the models. The model based on hilum-up orientation achieved the best result, giving a coefficient of determination of 0.98, a root mean square error of prediction of 0.07, and a classification accuracy of 100%.Keywords: mung bean, near infrared, germinatability, hard seed
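The reported figures correspond to standard chemometric metrics, which can be sketched as follows (generic formulas, not tied to the study's software):

```python
def rmsep(actual, predicted):
    """Root mean square error of prediction."""
    n = len(actual)
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot
```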
Procedia PDF Downloads 3051941 CFD Modeling of Pollutant Dispersion in a Free Surface Flow
Authors: Sonia Ben Hamza, Sabra Habli, Nejla Mahjoub Said, Hervé Bournot, Georges Le Palec
Abstract:
In this work, we determine the turbulent dynamic structure of pollutant dispersion in a two-phase free surface flow. The numerical simulation was performed using ANSYS Fluent. The flow study is three-dimensional, unsteady and isothermal. The study area was fitted with a rectangular obstacle to analyze its influence on the hydrodynamic variables and the progression of the pollutant. The numerical results show that the hydrodynamic model predicts the dispersion of a pollutant in an open channel flow and reproduces the recirculation and trapping of the pollutant downstream of the obstacle.Keywords: CFD, free surface, pollutant dispersion, turbulent flows
Procedia PDF Downloads 5471940 Frequency of Refractive Errors in Squinting Eyes of Children from 4 to 16 Years Presenting at Tertiary Care Hospital
Authors: Maryum Nawaz
Abstract:
Purpose: To determine the frequency of refractive errors in squinting eyes of children from 4 to 16 years presenting at a tertiary care hospital. Study Design: A descriptive cross-sectional study was done. Place and Duration: The study was conducted in Pediatric Ophthalmology, Hayatabad Medical Complex, Peshawar. Materials and Methods: The sample size was 146, based on a 41.45% proportion of refractive errors in children with squinting eyes, a 95% confidence interval and an 8% margin of error under WHO sample size calculations. Non-probability consecutive sampling was done. Result: Mean age was 8.57±2.66 years. Males were 89 (61.0%) and females were 57 (39.0%). Refractive error was present in 56 (38.4%) and absent in 90 (61.6%) of patients. There was no association of gender, age, parental refractive errors, or early usage of electronic equipment with the refractive errors. Conclusion: There is a high prevalence of refractive errors in patients with strabismus. There is no association of age, gender, parental refractive errors, or early usage of electronic equipment with the occurrence of refractive errors. Further studies are recommended for confirmation of these findings.Keywords: strabismus, refractive error, myopia, hypermetropia, astigmatism
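The quoted sample size follows the standard single-proportion formula n = z^2 * p(1-p) / d^2; a quick sketch reproduces the figure of 146 from the stated 41.45% proportion, 95% confidence (z ≈ 1.96) and 8% margin of error:

```python
import math

def sample_size(p, d, z=1.96):
    """Cochran/WHO single-proportion sample size: n = z^2 * p(1-p) / d^2,
    rounded up. p: anticipated proportion, d: absolute margin of error,
    z: normal quantile for the confidence level (1.96 for 95%)."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)
```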
Procedia PDF Downloads 1451939 Simultaneous Saccharification and Co-Fermentation of Paddy Straw and Fruit Wastes into Ethanol Production
Authors: Kamla Malik
Abstract:
For ethanol production from paddy straw, pretreatment was first carried out using sodium hydroxide solution (2.0%) at 15 psi for 1 hr. The maximum lignin removal was achieved with a 0.5 mm mesh size of paddy straw. After pretreatment, the straw contained 72.4% cellulose, 15.9% hemicelluloses and 2.0% lignin. Paddy straw hydrolysate (PSH) with fruit wastes (5%), such as sweet lime, apple, sapota, grapes, kinnow, banana, papaya, mango, and watermelon, was subjected to simultaneous saccharification and co-fermentation (SSCF) for 72 hrs by a co-culture of Saccharomyces cerevisiae HAU-1 and Candida sp. with 0.3% urea as a cheap nitrogen source. Fermentation was carried out at 35°C, and ethanol yield was determined at 24-hour intervals. The maximum ethanol production within 72 hrs of fermentation was in PSH + sapota peels (3.9% v/v), followed by PSH + kinnow peels (3.6%) and PSH + papaya peel extract (3.1%). In the case of PSH + banana peel and mango peel extracts, the ethanol produced was 2.8% and 2.2% (v/v), respectively. The results of this study suggest that fruit wastes containing fermentable sugars should not be discarded into the environment, but should instead be added to paddy straw and converted to useful products like bio-ethanol, which can serve as an alternative energy source.Keywords: ethanol, fermentation, fruit wastes, paddy straw
Procedia PDF Downloads 3901938 Prediction of Mental Health: Heuristic Subjective Well-Being Model on Perceived Stress Scale
Authors: Ahmet Karakuş, Akif Can Kilic, Emre Alptekin
Abstract:
A growing number of studies have been conducted to determine how well-being may be predicted using well-designed models. It is necessary to investigate the background of candidate features in order to construct a viable Subjective Well-Being (SWB) model. We selected suitable variables from the SWB literature that are applicable to real-world data. The goal of this work is to evaluate the model by feeding it with SWB characteristics and then categorizing stress levels using machine learning methods to see how well it performs on a real dataset. Although this is a multiclass classification problem, we achieved significant metric scores, which may be taken into account for this specific task.Keywords: machine learning, multiclassification problem, subjective well-being, perceived stress scale
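For a multiclass problem like this, a macro-averaged F1 is a common headline metric; the self-contained sketch below is illustrative (e.g., classes 0/1/2 for low/moderate/high perceived stress are an assumption), not the authors' evaluation code.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute per-class precision/recall/F1, then
    average the F1 scores with equal weight per class."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```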
Procedia PDF Downloads 1331937 Analysis Of Non-uniform Characteristics Of Small Underwater Targets Based On Clustering
Authors: Tianyang Xu
Abstract:
Small underwater targets generally have a non-centrosymmetric geometry, and the acoustic scattering field of the target is spatially inhomogeneous under active sonar detection conditions. In view of these problems, this paper takes the hemispherical cylindrical shell as the research object, considers the angular continuity implicit in the echo characteristics, and proposes a cluster-driven method for studying the non-uniform angular characteristics of the target echo. First, the target echo features are extracted and feature vectors are constructed. Secondly, the t-SNE algorithm is used to strengthen the internal connections of the feature vectors in a low-dimensional feature space and to construct a visualizable feature space. Finally, the implicit angular relationship between echo features is extracted under unsupervised conditions by cluster analysis. The reconstruction results of the local geometric structure of the target corresponding to different categories show that the method can effectively divide the angular intervals of the target's local structure according to the natural acoustic scattering characteristics of the target.Keywords: underwater target, non-uniform characteristics, cluster-driven method, acoustic scattering characteristics
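The unsupervised grouping stage can be illustrated with plain k-means on 2-D feature vectors: a generic stand-in for the paper's t-SNE-plus-clustering pipeline, with the points and k made up for the example.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on tuples; a stand-in for the clustering stage that
    groups echo feature vectors by implicit aspect angle."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initialize from the data
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                     # assign each point to nearest center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # recompute centers (keep the old one if a group went empty)
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups
```

On two tight, well-separated clusters the algorithm recovers the natural partition, mirroring how angular intervals with similar scattering behavior fall into the same cluster.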
Procedia PDF Downloads 134