Search results for: forecasting accuracy

2560 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
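A minimal sketch of the Random Forest side of such a pipeline is given below, assuming a tabular activity-level dataset with a numeric cost-overrun target; the file and column names are hypothetical, not the authors' data.

```python
# Sketch (not the authors' code): Random Forest regression of cost overrun on
# activity-level features. The CSV file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("activities.csv")            # hypothetical activity-level data
X = df.drop(columns=["cost_overrun"])         # e.g., scope changes, delays, ...
y = df["cost_overrun"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))

# Feature importances surface the key cost drivers the abstract mentions
for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")
```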

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 24
2559 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction

Authors: Mohammad Ghahramani, Fahimeh Saei Manesh

Abstract:

Winning a soccer game depends on thorough and deep analysis of the ongoing match. On the other hand, giant gambling companies are in vital need of such analysis to reduce their losses against their customers. In this research work, we perform deep, real-time analysis on every soccer match around the world; our work is distinguished from others by its focus on particular seasons, teams and partial analytics. Our contributions are presented in the platform called “Analyst Masters.” First, we introduce the various sources of information available for soccer analysis for teams around the world, which helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is developing new features from stable soccer matches. The statistics of soccer matches and their odds, before and in-play, are represented in image format versus time, including half-time. Local Binary Patterns (LBP) are then employed to extract features from the image. Our analyses reveal remarkably interesting features and rules once a soccer match has reached sufficient stability. For example, our “8-minute rule” states that, in a stable match, if 'Team A' scores a goal and can maintain the result for at least 8 minutes, then the match will end in their favor. We could also make accurate pre-match predictions of scoring less/more than 2.5 goals. We use Gradient Boosting Trees (GBT) to extract highly relevant features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks its properties, such as betters’ and punters’ behavior and its statistical data, before issuing the prediction. The proposed method was trained using 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can earn over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market; top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
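The LBP feature step can be illustrated as follows, under assumed inputs (synthetic stand-in images and labels); this is not the Analyst Masters code.

```python
# Illustration of the LBP + GBT feature pipeline under assumed inputs
# (synthetic stand-in images and labels); not the Analyst Masters code.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import GradientBoostingClassifier

def lbp_histogram(image, P=8, R=1.0):
    """Uniform LBP histogram of a 2-D grayscale array."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2                    # uniform patterns plus one catch-all bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

images = [np.random.rand(64, 64) for _ in range(200)]   # "statistics vs. time" images
labels = np.random.randint(0, 2, size=200)              # 1 = match judged stable

X = np.array([lbp_histogram(img) for img in images])
clf = GradientBoostingClassifier().fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```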

Keywords: soccer, analytics, machine learning, database

Procedia PDF Downloads 225
2558 Rangeland Monitoring by Computerized Technologies

Authors: H. Arzani, Z. Arzani

Abstract:

Every piece of rangeland has a different set of physical and biological characteristics. This requires the manager to synthesize various information through regular monitoring in order to identify trends of change and make the right decisions for sustainable management. Range managers therefore need computerized technologies to monitor rangeland and select the best management practices. There are four examples of computerized technologies that can benefit sustainable management: (1) Photographic method for cover measurement: The method was tested in different vegetation communities in semi-humid and arid regions. Interpretation of pictures of quadrats was done using ArcView software. Data analysis was done in SPSS using the paired t-test. Based on the results, the photographic method can generally be used to measure ground cover in most vegetation communities. (2) GPS application for matching ground samples to satellite pixels: In the two provinces of Tehran and Markazi, six reference points were selected, and at each point eight GPS models were tested. A significant relation among GPS model, time and location with the accuracy of the estimated coordinates was found. After selection of the suitable method, coordinates of plots along four transects at each of 6 rangeland sites in Markazi province were recorded. The best time for GPS application was in the morning hours, and the Etrex Vista had less error than the other models. (3) Application of satellite data for rangeland monitoring: Focusing on the long-term variation of vegetation parameters such as vegetation cover and production is essential. Our study in grass and shrub lands showed significant correlations between quantitative vegetation characteristics and satellite data, so it is possible to monitor rangeland vegetation using digital data for sustainable utilization. (4) Rangeland suitability classification with GIS: Range suitability assessment can facilitate sustainable management planning. Three sub-models (sensitivity to erosion, water suitability and forage production outputs) were entered into the final range suitability classification model. GIS facilitated the classification of range suitability and produced suitability maps for sheep grazing. Generally, digital computers assist range managers to interpret, modify, calibrate or integrate information for correct management.
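For method (1), the paired t-test comparing photographic cover estimates against a reference method can be reproduced in a few lines; the numbers below are made up for illustration.

```python
# Paired t-test for method (1): photographic vs. reference cover estimates.
# The numbers are made up for illustration.
from scipy import stats

photo_cover = [42.1, 35.6, 50.3, 28.9, 61.2, 47.5]   # % cover from photos
field_cover = [40.8, 37.0, 49.1, 30.2, 59.8, 48.3]   # % cover from quadrats

t_stat, p_value = stats.ttest_rel(photo_cover, field_cover)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")   # large p: no significant difference
```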

Keywords: computer, GPS, GIS, remote sensing, photographic method, monitoring, rangeland ecosystem, management, suitability, sheep grazing

Procedia PDF Downloads 347
2557 Development of a Model Based on Wavelets and Matrices for the Treatment of Weakly Singular Partial Integro-Differential Equations

Authors: Somveer Singh, Vineet Kumar Singh

Abstract:

We present a new model based on viscoelasticity for non-Newtonian fluids. We use a matrix-formulated algorithm to approximate solutions of a class of partial integro-differential equations with given initial and boundary conditions. Some numerical results are presented to simplify the application of the operational matrix formulation and to reduce the computational cost. Convergence analysis, error estimation and numerical stability of the method are also investigated. Finally, some test examples are given to demonstrate the accuracy and efficiency of the proposed method.
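For readers unfamiliar with the basis behind such operational matrices, the sketch below evaluates a Legendre wavelet as commonly defined in this literature; the paper's matrix construction itself is not reproduced.

```python
# Evaluation of a Legendre wavelet psi_{n,m} on [0,1) as commonly defined in
# the operational-matrix literature; the paper's matrix construction itself
# is not reproduced here.
import numpy as np
from scipy.special import eval_legendre

def legendre_wavelet(n, m, k, t):
    """Dilation level k, translation n = 1..2**(k-1), Legendre order m."""
    t = np.asarray(t, dtype=float)
    lo, hi = (n - 1) / 2 ** (k - 1), n / 2 ** (k - 1)
    support = (t >= lo) & (t < hi)
    val = np.sqrt(m + 0.5) * 2 ** (k / 2) * eval_legendre(m, 2 ** k * t - 2 * n + 1)
    return np.where(support, val, 0.0)

t = np.linspace(0, 1, 1000, endpoint=False)
psi = legendre_wavelet(1, 2, 2, t)                     # n=1, m=2, k=2
print("L2 norm ~ 1:", np.sqrt(np.sum(psi ** 2) * (t[1] - t[0])))
```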

Keywords: Legendre wavelets, operational matrices, partial integro-differential equation, viscoelasticity

Procedia PDF Downloads 317
2556 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
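As a complement to the Random Forest sketch under the companion abstract above, a small fully connected regressor for the same task might look as follows; the file and column names are again hypothetical.

```python
# Complementary sketch of the neural-network side of the same study: a small
# fully connected regressor. Column names are again hypothetical.
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

df = pd.read_csv("activities.csv")            # hypothetical activity-level data
X, y = df.drop(columns=["cost_overrun"]), df["cost_overrun"]

dnn = make_pipeline(
    StandardScaler(),                         # scaled inputs help NN training
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
scores = cross_val_score(dnn, X, y, cv=5, scoring="neg_mean_absolute_error")
print("cross-validated MAE:", -scores.mean())
```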

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 21
2555 Close Loop Controlled Current Nerve Locator

Authors: H. A. Alzomor, B. K. Ouda, A. M. Eldeib

Abstract:

Successful regional anesthesia depends upon precise location of the peripheral nerve or nerve plexus. Locating peripheral nerves is preferably done using nerve stimulation. In order to generate a nerve impulse by electrical means, a minimum threshold stimulus of current, the “rheobase,” must be applied to the nerve. The technique depends on stimulating muscular twitching at a close distance to the nerve without actually touching it. The success rate of this operation depends on the accuracy of the current intensity pulses used for stimulation. In this paper, we discuss a circuit and algorithm for closed-loop control of the current, present theoretical analysis and test results, and compare them with previous techniques.
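A toy simulation of the closed-loop idea, regulating the stimulus current toward a setpoint with a PI law, is sketched below; the first-order plant model and the gains are assumptions, not the paper's circuit or algorithm.

```python
# Toy simulation of closed-loop constant-current control. The plant model
# and PI gains are illustrative assumptions, not the paper's design.
setpoint_mA = 1.0            # target stimulus current (rheobase-level pulse)
kp, ki = 0.6, 0.8            # assumed PI gains
dt, steps = 0.01, 1000
load = 2.0                   # assumed tissue/electrode load factor

current, integral = 0.0, 0.0
for _ in range(steps):
    error = setpoint_mA - current
    integral += error * dt
    drive = kp * error + ki * integral        # controller output
    current += dt * (drive * load - current)  # first-order plant response

print(f"steady-state current ~ {current:.3f} mA")
```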

Keywords: Close Loop Control (CLC), constant current, nerve locator, rheobase

Procedia PDF Downloads 240
2554 The Staphylococcus aureus Exotoxin Recognition Using Nanobiosensor Designed by an Antibody-Attached Nanosilica Method

Authors: Hamed Ahari, Behrouz Akbari Adreghani, Vadood Razavilar, Amirali Anvar, Sima Moradi, Hourieh Shalchi

Abstract:

Considering the ever-increasing population and the industrialization of humankind's way of life, we are no longer able to detect the toxins produced in food products using traditional techniques. This is because the isolation time for food products is not cost-effective, and in most cases the precision of practical techniques such as bacterial cultivation suffers from operator errors or errors in the mixtures used. Hence, with the advent of nanotechnology, the design of selective and smart sensors is one of the great industrial advances in the quality control of food products; in a few minutes' time, and with very high precision, such sensors can identify the amount and toxicity of the bacteria. Methods and Materials: In this technique, a sensor based on attaching bacterial antibodies to nanoparticles was used. As the absorption basis for the recognition of the bacterial toxin, medium-sized silica nanoparticles of 10 nm, in the form of solid powder (Notrino brand), were utilized. The suspension produced from the agent-linked nanosilica, which was conjugated to the bacterial antibody, was then placed near samples of distilled water contaminated with Staphylococcus aureus toxin at a density of 10⁻³, so that if any toxin existed in the sample, a bond between the toxin antigen and the antibody would be formed. Finally, the light absorption associated with the binding of the antigen to the particle-attached antibody was measured using spectrophotometry. The 23S rRNA gene, which is conserved in all Staphylococcus spp., was also used as a control. The accuracy of the test was monitored using a serial dilution (10⁻⁶) of an overnight cell culture of Staphylococcus spp. bacteria (OD600: 0.02 ≈ 10⁷ cells); it showed that the sensitivity of PCR is 10 bacteria per ml within a few hours. Result: The results indicate that the sensor detects down to a density of 10⁻⁴. Additionally, the sensitivity of the sensor was examined after 60 days: it gave confirmatory results up to day 56, and its sensitivity started to decrease thereafter. Conclusions: The advantages of the practical nanobiosensor compared to conventional methods, such as culture and biotechnology methods (e.g., the polymerase chain reaction), are its accuracy, sensitivity and specificity; moreover, it reduces the detection time from hours to about 30 minutes.

Keywords: exotoxin, nanobiosensor, recognition, Staphylococcus aureus

Procedia PDF Downloads 373
2553 Next-Generation Lunar and Martian Laser Retro-Reflectors

Authors: Simone Dell'Agnello

Abstract:

There are laser retroreflectors on the Moon and no laser retroreflectors on Mars. Here we describe the design, construction, qualification and imminent deployment of next-generation, optimized laser retroreflectors on the Moon and on Mars (where they will be the first ones). These instruments are positioned by time-of-flight measurements of short laser pulses, the so-called 'laser ranging' technique. Data analysis is carried out with PEP, the Planetary Ephemeris Program of the CfA (Center for Astrophysics). Since 1969, Lunar Laser Ranging (LLR) to the Apollo/Lunokhod cube corner retroreflector (CCR) arrays has supplied accurate tests of General Relativity (GR) and new gravitational physics: possible changes of the gravitational constant (Gdot/G), the weak and strong equivalence principles, gravitational self-energy (Parametrized Post-Newtonian parameter beta), geodetic precession and the inverse-square force law; it can also constrain gravitomagnetism. Some of these measurements have also allowed for testing extensions of GR, including spacetime torsion and non-minimally coupled gravity. LLR has also provided significant information on the composition of the deep interior of the Moon; in fact, LLR first provided evidence of the existence of a fluid component of the deep lunar interior. In 1969, CCR arrays contributed a negligible fraction of the LLR error budget. Since laser station range accuracy has improved by more than a factor of 100, the arrays now dominate the error budget because of lunar librations acting on their multi-CCR geometry. We developed a next-generation, single, large CCR, MoonLIGHT (Moon Laser Instrumentation for General relativity High-accuracy Test), unaffected by librations, which supports an improvement of the space segment of LLR accuracy by up to a factor of 100. INFN also developed INRRI (INstrument for landing-Roving laser Retro-reflector Investigations), a microreflector to be laser-ranged by orbiters. Their performance is characterized at the SCF_Lab (Satellite/lunar laser ranging Characterization Facilities Lab, INFN-LNF, Frascati, Italy) for deployment on the lunar surface or in cislunar space. They will be used to accurately position landers, rovers, hoppers and orbiters of Google Lunar X Prize and space agency missions, thanks to LLR observations from stations of the International Laser Ranging Service in the USA, France and Italy. INRRI was launched in 2016 with the ESA mission ExoMars (Exobiology on Mars) EDM (Entry, descent and landing Demonstration Module), deployed on the Schiaparelli lander, and is proposed for the ExoMars 2020 Rover. Based on an agreement between NASA and ASI (Agenzia Spaziale Italiana), another microreflector, LaRRI (Laser Retro-Reflector for InSight), was delivered to JPL (Jet Propulsion Laboratory) and integrated on NASA's InSight Mars Lander in August 2017 (launch scheduled for May 2018). Another microreflector, LaRA (Laser Retro-reflector Array), will be delivered to JPL for deployment on the NASA Mars 2020 Rover. The first lunar landing opportunities will be from early 2018 (with TeamIndus) to late 2018 with commercial missions, followed by opportunities with space agency missions, including the proposed deployment of MoonLIGHT and INRRI on NASA's Resource Prospector and its evolutions. In conclusion, we will significantly extend the CCR Lunar Geophysical Network and populate the Mars Geophysical Network. These networks will enable very significantly improved tests of GR.
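The ranging principle behind these instruments reduces to one formula, distance = c × round-trip time / 2; the worked example below uses a representative Earth-Moon round-trip time.

```python
# The ranging principle in one line: one-way distance = c * round-trip time / 2.
c = 299_792_458.0            # speed of light (m/s)
round_trip_s = 2.563         # representative Earth-Moon round-trip time (s)
distance_km = c * round_trip_s / 2 / 1000
print(f"one-way distance ~ {distance_km:,.0f} km")   # ~384,000 km
```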

Keywords: general relativity, laser retroreflectors, lunar laser ranging, Mars geodesy

Procedia PDF Downloads 255
2552 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada

Authors: Bilel Chalghaf, Mathieu Varin

Abstract:

Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR) have the potential to overcome the limitations of aerial imagery. To date, few studies have used such data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 individual tall tree species (> 17 m) at the tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For the individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five different model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77). With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
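A hedged sketch of the species-classification step, an SVM over object-level variables, is given below; the file, feature and label names are hypothetical.

```python
# Sketch only: an RBF-kernel SVM over object-level crown variables.
# File, feature and label names are hypothetical.
import pandas as pd
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import cohen_kappa_score

crowns = pd.read_csv("crown_objects.csv")   # one row per segmented tree crown
X = crowns.drop(columns=["species"])        # the selected spectral/LiDAR variables
y = crowns["species"]                       # the 11 species labels

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
pred = cross_val_predict(svm, X, y, cv=5)
print("OA:", (pred == y).mean(), "Kappa:", cohen_kappa_score(y, pred))
```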

Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR

Procedia PDF Downloads 119
2551 Anisotropic Approach for Discontinuity Preserving in Optical Flow Estimation

Authors: Pushpendra Kumar, Sanjeev Kumar, R. Balasubramanian

Abstract:

Estimation of optical flow from a sequence of images using variational methods is one of the most successful approaches. Discontinuity between different motions is one of the challenging problems in flow estimation. In this paper, we design a new anisotropic diffusion operator, which is able to provide smooth flow over a region and efficiently preserve discontinuities in the optical flow. This operator is designed on the basis of the intensity differences of the pixels and an isotropic operator using an exponential function; the combination of the two is used to control the propagation of flow. Experimental results on different datasets verify the robustness and accuracy of the algorithm and also validate the effect of the anisotropic operator in preserving discontinuities.
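An exponential, intensity-driven diffusivity of the kind described can be sketched as follows; the paper's exact operator is not reproduced, and the scale parameter is an assumption.

```python
# Sketch of an exponential, intensity-driven diffusivity: weights near 1 in
# flat regions keep smoothing on; weights near 0 at strong edges preserve
# motion discontinuities. The scale lam is an assumed parameter.
import numpy as np

def diffusivity(image, lam=10.0):
    """Edge-stopping weights g = exp(-(|grad I| / lam)^2)."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    return np.exp(-(grad_mag / lam) ** 2)

image = np.random.rand(32, 32) * 255          # stand-in frame
g = diffusivity(image)
print(g.min(), g.max())   # weights in (0, 1]: propagation damped at edges
```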

Keywords: optical flow, variational methods, computer vision, anisotropic operator

Procedia PDF Downloads 860
2550 The Two-Lane Rural Analysis and Comparison of Police Statistics and Results with the Help IHSDM

Authors: S. Amanpour, F. Mohamadian, S. A. Tabatabai

Abstract:

From the number of accidents and fatalities in recent years, it can be concluded that the status of road accidents in Iran remains a crisis. Investigating the causes of such incidents is a necessity in all countries. This research makes available results on the number and type of accidents and the locations of the crashes. It then becomes possible to prioritize economical and rational solutions to fix the flaws: in the short term, stricter rules at black spots and small-scale changes are desired, while in the long term the desired results are changing the system, increasing the width of the road, or adding extra lanes. In general, the analysis of the accidents is related to the police statistics on the number of accidents in one year, which can establish the accuracy of the analysis performed.

Keywords: traffic, IHSDM, crash, modeling, Khuzestan

Procedia PDF Downloads 267
2549 Fractional Order Differentiator Using Chebyshev Polynomials

Authors: Koushlendra Kumar Singh, Manish Kumar Bajpai, Rajesh Kumar Pandey

Abstract:

A discrete-time fractional order differentiator has been modeled for estimating the fractional order derivatives of a contaminated signal. The proposed approach is based on Chebyshev polynomials. We use the Riemann-Liouville fractional order derivative definition for designing the fractional order Savitzky-Golay (S-G) differentiator. In the first step, we calculate the window weights corresponding to the required fractional order. The signal is then convolved with these calculated window weights to find the fractional order derivatives of the signal. Several signals are considered for evaluating the accuracy of the proposed method.
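For comparison, a minimal discrete fractional differentiator can be built from the Grünwald-Letnikov weights, a simpler construction than the paper's Chebyshev/S-G window; the check against a known half-derivative validates the idea.

```python
# Minimal discrete fractional differentiator using Grunwald-Letnikov weights
# (a simpler construction than the paper's Chebyshev/S-G window).
import numpy as np

def gl_fractional_derivative(signal, alpha, h=1.0):
    """Grunwald-Letnikov estimate of D^alpha f on a uniform grid with step h."""
    n = len(signal)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):                    # recursive binomial weights
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return np.convolve(signal, w)[:n] / h ** alpha

t = np.linspace(0, 4, 401)
y = gl_fractional_derivative(t, alpha=0.5, h=t[1] - t[0])
# For f(t) = t the half-derivative is 2*sqrt(t/pi); compare at t = 2:
print(y[200], 2 * np.sqrt(t[200] / np.pi))
```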

Keywords: fractional order derivative, Chebyshev polynomials, signals, S-G differentiator

Procedia PDF Downloads 636
2548 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data

Authors: Huinan Zhang, Wenjie Jiang

Abstract:

Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and research into and utilization of the inner thermal core structure characteristics of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. Firstly, the thermal core information in the pressure direction is comprehensively expressed through the maximal intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. These images provide a coarse-grained wind speed estimate in the first stage. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network, and fused with the coarse-grained features from the first stage to obtain the final two-stage network wind speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span 2012 to 2021 and are distributed in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021. Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclones into three major categories: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and, at this accuracy, an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with this data.
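The focal loss used in the first stage can be written compactly; gamma and the probabilities below are illustrative values, not the paper's settings.

```python
# Focal loss FL = -(1 - p_t)^gamma * log(p_t) in plain NumPy; gamma and the
# probabilities are illustrative values, not the paper's settings.
import numpy as np

def focal_loss(probs, targets, gamma=2.0, eps=1e-9):
    """Mean focal loss for one-hot targets; probs are softmax outputs."""
    p_t = np.clip((probs * targets).sum(axis=1), eps, 1.0)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

probs = np.array([[0.7, 0.2, 0.1],    # softmax outputs for 3 intensity classes
                  [0.1, 0.8, 0.1]])
targets = np.eye(3)[[0, 1]]           # true classes: 0 and 1
print(focal_loss(probs, targets))     # hard, rare examples are up-weighted
```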

Keywords: artificial intelligence, deep learning, data mining, remote sensing

Procedia PDF Downloads 42
2547 Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach

Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi

Abstract:

Surface elevation dynamics have always responded to disturbance regimes. Creating Digital Elevation Models (DEMs) to detect surface dynamics has led to the development of several methods, devices and data clouds. DEMs can provide accurate and quick results cost-efficiently, in comparison to traditional geomatics survey techniques. Nowadays, remote sensing datasets have become a primary source for creating DEMs, including LiDAR point clouds with GIS analytic tools. However, these data need to be tested for error detection and correction. This paper evaluates various DEMs from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Subsequently, 30 chosen locations were examined in the field to test the error of the DEM surface detection using high-resolution global positioning systems (GPS). Results show significant surface elevation changes on Apple Orchard Island. Accretion occurred on most of the island, while surface elevation loss due to erosion is limited to the northern and southern parts. Concurrently, the projected differential correction and validation method aimed to identify errors in the dataset. The resultant DEMs demonstrated a small error ratio (≤ 3%) against the gathered datasets when compared with the fieldwork survey using RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale would allow predictions within practical time and cost frames, with more comprehensive coverage and greater accuracy. With a DEM technique in the eco-geomorphic context, such insights into ecosystem dynamics at a coastal intertidal system would be valuable for assessing the accuracy of the predicted eco-geomorphic risk for sustainable conservation management. This framework for evaluating historical and current anthropogenic and environmental stressors on coastal surface elevation dynamics could be profitably applied worldwide.
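The core of a DEM-of-difference analysis fits in a few lines: subtract co-registered surfaces and flag change beyond the detection threshold. The grids and threshold below are synthetic stand-ins.

```python
# DEM-of-difference sketch: subtract co-registered surfaces and flag change
# beyond the error band. Grids and threshold here are synthetic stand-ins.
import numpy as np

dem_2006 = np.random.rand(50, 50)            # stand-in elevation grids (m)
dem_2016 = dem_2006 + np.random.normal(0.05, 0.02, (50, 50))

dod = dem_2016 - dem_2006                    # DEM of difference
lod = 0.03                                   # level of detection from GPS error (m)
accretion = (dod > lod).mean() * 100
erosion = (dod < -lod).mean() * 100
print(f"accretion: {accretion:.1f}% of cells, erosion: {erosion:.1f}%")
```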

Keywords: DEMs, eco-geomorphic-dynamic processes, geospatial information science, remote sensing, surface elevation changes

Procedia PDF Downloads 260
2546 Estimation of Service Quality and Its Impact on Market Share Using Business Analytics

Authors: Haritha Saranga

Abstract:

Service quality has become an important driver of competition in manufacturing industries of late, as many products are being sold in conjunction with service offerings. With increase in computational power and data capture capabilities, it has become possible to analyze and estimate various aspects of service quality at the granular level and determine their impact on business performance. In the current study context, dealer level, model-wise warranty data from one of the top two-wheeler manufacturers in India is used to estimate service quality of individual dealers and its impact on warranty related costs and sales performance. We collected primary data on warranty costs, number of complaints, monthly sales, type of quality upgrades, etc. from the two-wheeler automaker. In addition, we gathered secondary data on various regions in India, such as petrol and diesel prices, geographic and climatic conditions of various regions where the dealers are located, to control for customer usage patterns. We analyze this primary and secondary data with the help of a variety of analytics tools such as Auto-Regressive Integrated Moving Average (ARIMA), Seasonal ARIMA and ARIMAX. Study results, after controlling for a variety of factors, such as size, age, region of the dealership, and customer usage pattern, show that service quality does influence sales of the products in a significant manner. A more nuanced analysis reveals the dynamics between product quality and service quality, and how their interaction affects sales performance in the Indian two-wheeler industry context. We also provide various managerial insights using descriptive analytics and build a model that can provide sales projections using a variety of forecasting techniques.
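A minimal ARIMAX sketch of the kind of model described, monthly sales explained by their own history plus a service-quality regressor, is shown below on simulated data.

```python
# ARIMAX sketch on simulated data: monthly sales explained by their own
# history plus a service-quality regressor.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 60
service_quality = rng.normal(size=n)          # e.g., a warranty-based index
sales = 100 + 5 * service_quality + rng.normal(scale=2, size=n)
df = pd.DataFrame({"sales": sales, "sq": service_quality})

model = SARIMAX(df["sales"], exog=df[["sq"]], order=(1, 0, 1))
fit = model.fit(disp=False)
print(fit.params["sq"])     # recovers the service-quality effect (about 5)
```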

Keywords: service quality, product quality, automobile industry, business analytics, auto-regressive integrated moving average

Procedia PDF Downloads 109
2545 Optimisation of the Input Layer Structure for Feedforward Narx Neural Networks

Authors: Zongyan Li, Matt Best

Abstract:

This paper presents an optimization method for reducing the number of input channels and the complexity of the feed-forward NARX neural network (NN) without compromising the accuracy of the NN model. By utilizing the correlation analysis method, the most significant regressors are selected to form the input layer of the NN structure. An application to vehicle dynamic model identification is also presented to demonstrate the optimization technique, and the optimal input layer structure and optimal number of neurons for the neural network are investigated.
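The correlation-based selection of regressors can be sketched as follows on a toy system whose true structure is known, so the ranking can be verified.

```python
# Correlation-based regressor selection on a toy NARX system whose true
# structure is known, so the ranking can be verified.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
u = rng.normal(size=500)                     # input channel
y = np.zeros(500)
for k in range(1, 500):                      # true system uses y(k-1) and u(k-1)
    y[k] = 0.6 * y[k - 1] + 0.8 * u[k - 1] + 0.05 * rng.normal()

candidates = pd.DataFrame({
    "y(k-1)": np.roll(y, 1), "y(k-2)": np.roll(y, 2),
    "u(k-1)": np.roll(u, 1), "u(k-2)": np.roll(u, 2),
})[2:]                                       # drop wrapped-around rows
target = pd.Series(y[2:], index=candidates.index)

scores = candidates.corrwith(target).abs().sort_values(ascending=False)
print(scores)                                # y(k-1) and u(k-1) rank highest
```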

Keywords: correlation analysis, F-ratio, Levenberg-Marquardt, MSE, NARX, neural network, optimisation

Procedia PDF Downloads 355
2544 Relativity in Toddlers' Understanding of the Physical World as Key to Misconceptions in the Science Classroom

Authors: Michael Hast

Abstract:

Within their first year, infants can differentiate between objects based on their weight. By at least 5 years of age, children hold consistent weight-related misconceptions about the physical world, such as that heavy things fall faster than lighter ones because of their weight. Such misconceptions are seen as a challenge for science education since they are often highly resistant to change through instruction. Understanding the time point of emergence of such ideas could, therefore, be crucial for early science pedagogy. The paper thus discusses two studies that jointly address the issue by examining young children’s search behaviour in hidden displacement tasks under consideration of relative object weight. In both studies, they were tested with a heavy or a light ball, and they either had information about one of the balls only or both. In Study 1, 88 toddlers aged 2 to 3½ years watched a ball being dropped into a curved tube and were then allowed to search for the ball in three locations: one straight beneath the tube entrance, one where the curved tube led to, and one that corresponded to neither of the previous outcomes. Success and failure at the task were not impacted in any particular way by the weight of the balls alone. However, from around 3 years onwards, relative lightness, gained through having tactile experience of both balls beforehand, enhanced search success. Conversely, relative heaviness increased search errors such that children increasingly searched in the location immediately beneath the tube entry, known as the gravity bias. In Study 2, 60 toddlers aged 2, 2½ and 3 years watched a ball roll down a ramp and behind a screen with four doors, with a barrier placed along the ramp after one of the four doors. Toddlers were allowed to open the doors to find the ball. While search accuracy generally increased with age, relative weight did not play a role in 2-year-olds’ search behaviour. Relative lightness improved 2½-year-olds’ searches. At 3 years, both relative lightness and relative heaviness had a significant impact, with the former improving search accuracy and the latter reducing it. Taken together, both studies suggest that between 2 and 3 years of age, relative object weight is increasingly taken into consideration in navigating naïve physical concepts. In particular, it appears to contribute to the early emergence of misconceptions relating to object weight. This insight from developmental psychology research may have consequences for early science education and related pedagogy towards early conceptual change.

Keywords: conceptual development, early science education, intuitive physics, misconceptions, object weight

Procedia PDF Downloads 181
2542 Calculation of Physiological Lung Motion in External Lung Irradiation

Authors: Yousif Mohamed Y. Abdallah, Khalid H. Eltom

Abstract:

This experimental study deals with the measurement of periodic physiological organ motion during external lung irradiation in order to reduce the exposure of healthy tissue during radiation treatments. The results showed a displacement reading of 4.52 ± 1.99 mm for the left lung and 8.21 ± 3.77 mm for the right lung, for which the radiotherapy physician should take suitable countermeasures in case of significant errors. The motion ranged between 2.13 mm and 12.2 mm (low and high). In conclusion, the calculation of tumour mobility can improve the accuracy of target area definition in patients undergoing stereotactic RT for stage I, II and III lung cancer (NSCLC). Definition of the target volume based on a high-resolution CT scan with a margin of 3-5 mm is appropriate.

Keywords: physiological motion, lung, external irradiation, radiation medicine

Procedia PDF Downloads 399
2542 On-Road Text Detection Platform for Driver Assistance Systems

Authors: Guezouli Larbi, Belkacem Soundes

Abstract:

Automating the text detection process can assist the driver in the driving task. It can be very useful in giving drivers more information about their environment by facilitating the reading of road signs such as directional signs, events, stores, etc. In this paper, a system consisting of two stages is proposed. In the first stage, we use pseudo-Zernike moments to pinpoint areas of the image that may contain text. The architecture of this part is based on three main steps: region of interest (ROI) detection, text localization, and non-text region filtering. In the second stage, we present a convolutional neural network architecture (On-Road Text Detection Network, ORTDN) which serves as the classification phase. The results show that the proposed framework achieves ≈ 35 fps and an mAP of ≈ 90%, i.e., low computational time with competitive accuracy.

Keywords: text detection, CNN, PZM, deep learning

Procedia PDF Downloads 70
2541 Curve Fitting by Cubic Bezier Curves Using Migrating Birds Optimization Algorithm

Authors: Mitat Uysal

Abstract:

A new metaheuristic optimization algorithm called Migrating Birds Optimization (MBO) is used for curve fitting with rational cubic Bezier curves. This requires solving a complicated multivariate optimization problem. In this study, the solution of this optimization problem is achieved by the Migrating Birds Optimization algorithm, a powerful nature-inspired metaheuristic well suited to optimization. The results of this study show that the proposed method performs very well and is able to fit the data points to cubic Bezier curves with a high degree of accuracy.
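The objective such a metaheuristic minimizes can be made concrete: the squared distance between the data points and a cubic Bezier curve, as a function of the four control points. The sketch below only evaluates this objective; MBO itself is not reproduced.

```python
# Fitting objective for cubic Bezier curve fitting: squared distance between
# data points and the curve as a function of the control points. Only the
# evaluation is sketched; the MBO search itself is not reproduced.
import numpy as np

def cubic_bezier(P, t):
    """P: 4x2 control points; t: parameter values in [0, 1]."""
    t = t[:, None]
    return ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
            + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])

def fitting_error(P, data):
    t = np.linspace(0, 1, len(data))     # simple uniform parameterization
    return np.sum((cubic_bezier(P, t) - data) ** 2)

data = np.column_stack([np.linspace(0, 1, 20), np.sin(np.linspace(0, 3, 20))])
P0 = np.array([[0, 0], [0.3, 1], [0.7, 1], [1, 0.2]])  # candidate control points
print(fitting_error(P0, data))   # the optimizer iteratively reduces this value
```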

Keywords: algorithms, Bezier curves, heuristic optimization, migrating birds optimization

Procedia PDF Downloads 322
2540 Engineering Optimization of Flexible Energy Absorbers

Authors: Reza Hedayati, Meysam Jahanbakhshi

Abstract:

Elastic energy absorbers, which consist of a ring-like plate and springs, can be a good choice for increasing the impact duration during an accident. In the current project, an energy absorber system is optimized using four optimization methods: Kuhn-Tucker, Sequential Linear Programming (SLP), Concurrent Subspace Design (CSD), and Pshenichny-Lim-Belegundu-Arora (PLBA). Solution time, convergence, programming length and accuracy of the results were considered to find the best solution algorithm. The results showed the superiority of PLBA over the other algorithms.

Keywords: Concurrent Subspace Design (CSD), Kuhn-Tucker, Pshenichny-Lim-Belegundu-Arora (PLBA), Sequential Linear Programming (SLP)

Procedia PDF Downloads 385
2539 Detection of Chaos in General Parametric Model of Infectious Disease

Authors: Javad Khaligh, Aghileh Heydari, Ali Akbar Heydari

Abstract:

Mathematical epidemiological models for the spread of disease through a population are used to predict the prevalence of a disease or to study the impacts of treatment or prevention measures. Initial conditions for these models are measured from statistical data collected from a population. Since these initial conditions can never be exact, the presence of chaos in mathematical models has serious implications for the accuracy of the models as well as for how epidemiologists interpret their findings. This paper confirms the chaotic behavior of a model for dengue fever and an SI model by investigating sensitive dependence, bifurcation, and the 0-1 test under a variety of initial conditions.
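A compact version of the 0-1 test (the Gottwald-Melbourne correlation variant, here with a single fixed frequency rather than a median over many) can be sketched as follows.

```python
# Compact 0-1 test for chaos (correlation variant): K near 1 suggests chaos,
# K near 0 suggests regular dynamics. A single frequency c is used here; the
# published test medians over many random c values.
import numpy as np

def zero_one_test(phi, c=1.7):
    n = len(phi)
    j = np.arange(1, n + 1)
    p = np.cumsum(phi * np.cos(j * c))    # translation variables
    q = np.cumsum(phi * np.sin(j * c))
    ncut = n // 10
    M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2)
                  for k in range(1, ncut)])
    return np.corrcoef(np.arange(1, ncut), M)[0, 1]   # growth-rate correlation K

x = np.empty(3000)                        # chaotic logistic map, r = 4
x[0] = 0.3
for k in range(2999):
    x[k + 1] = 4.0 * x[k] * (1.0 - x[k])
print("logistic map K ~", zero_one_test(x))                               # ~1
print("sine wave    K ~", zero_one_test(np.sin(0.3 * np.arange(3000))))   # ~0
```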

Keywords: epidemiological models, SEIR disease model, bifurcation, chaotic behavior, 0-1 test

Procedia PDF Downloads 311
2538 Impact of Combined Heat and Power (CHP) Generation Technology on Distribution Network Development

Authors: Sreto Boljevic

Abstract:

In the absence of considerable investment in electricity generation, transmission and distribution network (DN) capacity, the demand for electrical energy will quickly strain the capacity of the existing electrical power network. The anticipated growth and proliferation of electric vehicles (EVs) and heat pumps (HPs) make it likely that the additional load from EV charging and HP operation will require capital investment in the DN. While an area-wide implementation of EVs and HPs will contribute to the decarbonization of the energy system, they represent new challenges for the existing low-voltage (LV) network. Distributed energy resources (DER), operating both as part of the DN and in off-network mode, have been offered as a means to meet growing electricity demand while maintaining and ever-improving DN reliability, resiliency and power quality. DN planning has traditionally been done by forecasting future growth in demand and estimating the peak load that the network should meet. However, new problems are arising. These problems are associated with a high degree of proliferation of EVs and HPs as loads imposed on the DN, in addition to the promotion of electricity generation from renewable energy sources (RES). High distributed generation (DG) penetration and a large increase in load proliferation at low-voltage DNs may have numerous impacts that create issues including energy losses, voltage control, fault levels, reliability, resiliency and power quality. To mitigate the negative impacts and at the same time enhance the positive impacts of the new operational state of the DN, CHP system integration can be seen as the best action to postpone or reduce the capital investment needed to facilitate the promotion, and maximize the benefits, of EVs, HPs and RES integration in the low-voltage DN. The aim of this paper is to generate an algorithm using an analytical approach. The algorithm implementation will provide a way for optimal placement of CHP systems in the DN in order to maximize the integration of RES and support the increasing proliferation of EVs and HPs.

Keywords: combined heat & power (CHP), distribution networks, EVs, HPs, RES

Procedia PDF Downloads 187
2537 Camera Model Identification for Mi Pad 4, Oppo A37f, Samsung M20, and Oppo f9

Authors: Ulrich Wake, Eniman Syamsuddin

Abstract:

The model for camera model identification is trained using the pretrained models ResNet34 and ResNet50. The dataset consists of 500 photos of each phone. The dataset is divided into 1280 photos for training, 320 photos for validation and 400 photos for testing. The model is trained using the One Cycle Policy method and tested using test-time augmentation. Furthermore, the model is trained for 50 epochs using regularization such as dropout and early stopping. The result is 90% accuracy on the validation set and above 85% with test-time augmentation using ResNet50. Each model is also trained by slightly updating the pretrained model's weights.

Keywords: one cycle policy, ResNet34, ResNet50, test-time augmentation

Procedia PDF Downloads 191
2536 Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, Lionel L. J. Ang, Algernon C. S. Hong, Danette S. E. Tan, Grace H. B. Foo, K. Q. Hong, L. M. Cheng, M. L. Leong

Abstract:

This paper presents a novel rapid soil classification technique that combines computer vision with the four-probe soil electrical resistivity method and the cone penetration test (CPT) to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, such that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive. Thus, a rapid classification method is needed at the SGs. Computer vision, four-probe soil electrical resistivity and CPT were combined into an innovative non-destructive and instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using an industrial grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of the soil (ρ) is measured using a set of four probes arranged in Wenner’s array. It was found from a previous study that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils: “Good Earth”, “Soft Clay” and an even mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay”. It is also found that these parameters can be integrated with the computer vision technique on-site to complete the rapid soil classification in less than three minutes.
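The GLCM texture step can be sketched with scikit-image (version 0.19+ naming); the patch below is a synthetic stand-in for a soil image.

```python
# GLCM texture step with scikit-image (0.19+ naming); the patch is a
# synthetic stand-in for a soil image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in soil patch
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # texture vector to combine with resistivity, w and CPT inputs
```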

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 198
2535 Modeling and Validation of Microspheres Generation in the Modified T-Junction Device

Authors: Lei Lei, Hongbo Zhang, Donald J. Bergstrom, Bing Zhang, K. Y. Song, W. J. Zhang

Abstract:

This paper presents a model of a modified T-junction device for microsphere generation. The numerical model is developed using a commercial software package, COMSOL Multiphysics. In order to test the accuracy of the numerical model, multiple variables, such as the flow rate of the cross-flow, fluid properties, and the structure and geometry of the microdevice, are applied. The results from the model are compared with the experimental results in terms of the diameter of the microspheres generated. The comparison shows good agreement. The model is therefore useful for further optimization of the device and, if needed, for feedback control of microsphere generation.

Keywords: CFD modeling, validation, microsphere generation, modified T-junction

Procedia PDF Downloads 686
2534 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images

Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi

Abstract:

Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, owing to people's busy lives, the use of fast food is increasing; therefore, the diagnosis of this disease and its treatment are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist needs to know the stage of the tumor. The most common method to determine the tumor stage is the TNM staging system. In this system, M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. It is clear that in order to determine all three of these parameters, an imaging method must be used, and the gold standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, due to the use of X-rays, the risk of cancer and the absorbed dose of the patient are high, while the PET/CT method suffers from a lack of access to the device due to its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal, and in the first step (pre-processing), histogram equalization to improve image quality and resizing to obtain a uniform image size were performed. Two expert radiologists, who have worked on colon cancer cases for more than 21 years, segmented the images and extracted the tumor regions. The next step is feature extraction from the segmented images and then classification of the data into three classes: T0N0, T3N1 and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolutional layers for feature extraction and three fully connected layers with the softmax activation function for classification. In order to validate the proposed method, 10-fold cross-validation was used in such a way that the data were randomly divided into three parts: training (70% of the data), validation (10% of the data), and the rest for testing. This was repeated 10 times; each time, the accuracy, sensitivity and specificity of the model were calculated, and the average of the ten repetitions is reported as the result. The accuracy, specificity and sensitivity of the proposed method on the test dataset were 89.09%, 95.8% and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the avoidance of predefined hand-crafted imaging features to determine the stage of colon cancer patients are among the advantages of this study.
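A sketch of the described VGG-16 setup, the convolutional base as a feature extractor plus a three-way softmax head, follows; the input size and head widths are assumptions, not the authors' exact configuration.

```python
# Sketch of the described VGG-16 setup: frozen convolutional base plus a
# three-way softmax head. Input size and head widths are assumptions.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False          # reuse the 13 conv layers as a feature extractor

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),   # T0N0 / T3N1 / T3N2
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```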

Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis

Procedia PDF Downloads 44
2533 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.
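A skeleton of such an optimizer comparison, the same small ReLU CNN trained on CIFAR-10 under each optimizer, is shown below; the architecture and epoch count are assumptions, not the paper's setup.

```python
# Skeleton of the optimizer comparison: one small ReLU CNN on CIFAR-10,
# retrained under each optimizer. Architecture and epoch count are assumptions.
from tensorflow.keras import layers, models, optimizers, datasets

(x_tr, y_tr), (x_te, y_te) = datasets.cifar10.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0

def make_cnn():
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])

candidates = [("adam", optimizers.Adam()), ("rmsprop", optimizers.RMSprop()),
              ("adadelta", optimizers.Adadelta()), ("adagrad", optimizers.Adagrad()),
              ("sgd", optimizers.SGD())]
for name, opt in candidates:
    model = make_cnn()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    hist = model.fit(x_tr, y_tr, epochs=3, batch_size=128,
                     validation_data=(x_te, y_te), verbose=0)
    print(name, "val accuracy:", hist.history["val_accuracy"][-1])
```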

Keywords: deep neural network, optimizers, RMSprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 105
2532 A Comparative Study of Deep Learning Methods for COVID-19 Detection

Authors: Aishrith Rao

Abstract:

COVID-19 is a pandemic that has resulted in thousands of deaths around the world and a huge impact on the global economy. Testing is a major issue, as test kits have limited availability and are expensive to manufacture. Using deep learning methods on radiology images to detect the coronavirus is extremely economical and time-saving, since these images contain information about the spread of the virus in the lungs, and such methods can be used in areas with a lack of testing facilities. This paper focuses on binary classification and multi-class classification of COVID-19 and other diseases such as pneumonia, tuberculosis, etc. Different deep learning methods, such as VGG-19, COVID-Net, ResNet + SVM, Deep CNN, DarkCovidNet, etc., have been used, and their accuracy has been compared using the Chest X-Ray dataset.

Keywords: deep learning, computer vision, radiology, COVID-19, ResNet, VGG-19, deep neural networks

Procedia PDF Downloads 143
2531 Element-Independent Implementation for Method of Lagrange Multipliers

Authors: Gil-Eon Jeong, Sung-Kie Youn, K. C. Park

Abstract:

Treatment of non-matching interfaces is an important computational issue. To handle this problem, the method of Lagrange multipliers, in its classical and localized versions, is the most popular technique. It essentially imposes the interface compatibility conditions by introducing Lagrange multipliers. However, the numerical system becomes unstable and inefficient due to the Lagrange multipliers. An interface element-independent formulation that does not include the Lagrange multipliers can be obtained by modifying the independent variables mathematically. Through this modification, a more efficient and stable system can be achieved while retaining accuracy equivalent to the conventional method. A numerical example is conducted to verify the validity of the presented method.
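For reference, the classical formulation being improved upon can be summarized as a saddle-point system; the zero diagonal block below is the source of the instability that the element-independent formulation removes by eliminating the multipliers. This is the standard textbook form, not the paper's modified variables.

```latex
% Classical Lagrange-multiplier coupling of two subdomains across an interface:
\Pi(u_1,u_2,\lambda) = \Pi_1(u_1) + \Pi_2(u_2)
    + \lambda^{\mathsf T}\left(B_1 u_1 - B_2 u_2\right)
% Stationarity yields the saddle-point system
\begin{bmatrix}
  K_1 & 0    & B_1^{\mathsf T} \\
  0   & K_2  & -B_2^{\mathsf T} \\
  B_1 & -B_2 & 0
\end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ \lambda \end{bmatrix}
=
\begin{bmatrix} f_1 \\ f_2 \\ 0 \end{bmatrix}
```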

Keywords: element-independent formulation, interface coupling, methods of Lagrange multipliers, non-matching interface

Procedia PDF Downloads 395