Search results for: error metrics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1900

790 Social Responsibility in the Theory of Organisation Management

Authors: Patricia Crentsil, Alvina Oriekhova

Abstract:

The aim of this study is to examine social responsibility in the theory of organisation management. The main objective is to examine the link between accountability, transparency, and ethics in organisation management. The study seeks to answer questions that have received inadequate attention in the social responsibility literature: specifically, how do accountability, transparency of policy, and ethical conduct enhance organisation management? The target population comprises Deans and Heads of Departments of public universities and technical universities in Ghana. The study used a purposive sampling technique to select the public and technical universities in Ghana, and a simple random technique to select 300 participants from all technical universities and 500 participants from all traditional universities in Ghana. The sample size is 260, using a confidence level of 95% and a margin of error of 5%. The study used both primary and secondary data and adopted an exploratory design to address the research questions. Results indicated that accountability, transparency, and ethics have a positive, significant link with organisation management. The study suggests that management can motivate an organisation to act in a socially responsible manner.

Keywords: corporate social responsibility, organisation management, organisation management theory, social responsibility

Procedia PDF Downloads 125
789 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work, the efficiency of fully nonparametric regression estimators such as the loess is compared to that of estimators that assume additivity, in several situations including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included for cases where the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure, and the selected variables are identified.
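
A minimal sketch of the AIC-driven backward elimination loop described above, with an ordinary least squares fit standing in as the scorer (the paper computes AIC from the additive or fully nonparametric fit, so the OLS scorer and the synthetic data are assumptions):

```python
import numpy as np
import statsmodels.api as sm

def backward_eliminate(X, y, names):
    """Greedily drop covariates while doing so lowers the AIC."""
    active = list(range(X.shape[1]))
    best_aic = sm.OLS(y, sm.add_constant(X[:, active])).fit().aic
    improved = True
    while improved and len(active) > 1:
        improved = False
        for j in list(active):
            trial = [k for k in active if k != j]
            aic = sm.OLS(y, sm.add_constant(X[:, trial])).fit().aic
            if aic < best_aic:                 # removing covariate j helps
                best_aic, active = aic, trial
                improved = True
    return [names[k] for k in active], best_aic

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # only x1 and x2 matter below
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.5, size=200)
print(backward_eliminate(X, y, ["x1", "x2", "x3", "x4", "x5"]))
```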

Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criterion

Procedia PDF Downloads 266
788 Approach to Formulate Intuitionistic Fuzzy Regression Models

Authors: Liang-Hsuan Chen, Sheng-Shing Nien

Abstract:

This study aims to develop approaches to formulating intuitionistic fuzzy regression (IFR) models for decision-making applications in fuzzy environments using intuitionistic fuzzy observations. Intuitionistic fuzzy numbers (IFNs) are used to characterize the fuzzy input and output variables in the IFR formulation process. A mathematical programming problem (MPP) is built to optimally determine the IFR parameters. Each parameter in the MPP is defined as a pair of alternative nonnegative variables with opposite signs, and an intuitionistic fuzzy error term is added to the MPP to characterize the uncertainty of the model. The IFR model is formulated based on a distance measure, minimizing the total distance error between the estimated and observed intuitionistic fuzzy responses during the MPP resolution process. The proposed approaches are simple and efficient in the formulation and resolution processes; the sign of each parameter is determined by the optimization itself, so the need to predetermine the signs of the parameters is avoided. Furthermore, the proposed approach has the advantage that the spread of the predicted IFN response is not over-increased, since the parameters in the established IFR model are crisp. The performance of the obtained models is evaluated and compared with existing approaches.
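
A minimal sketch of the sign-splitting device described above, shown as an L1-error linear program on crisp data (crisp observations and scipy's linprog stand in for the intuitionistic fuzzy numbers and the full MPP):

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression_signsplit(X, y):
    """Each parameter a = a_plus - a_minus with a_plus, a_minus >= 0;
    minimize the total absolute error sum(e_plus + e_minus)."""
    n, p = X.shape
    # variable layout: [a_plus (p), a_minus (p), e_plus (n), e_minus (n)]
    c = np.concatenate([np.zeros(2 * p), np.ones(2 * n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])   # X(a+ - a-) + e+ - e- = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * p + 2 * n))
    return res.x[:p] - res.x[p:2 * p]                  # recover signed parameters

X = np.column_stack([np.ones(6), [1, 2, 3, 4, 5, 6]])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8, 12.1])
print(l1_regression_signsplit(X, y))                   # intercept and slope
```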

Keywords: fuzzy sets, intuitionistic fuzzy number, intuitionistic fuzzy regression, mathematical programming method

Procedia PDF Downloads 140
787 Robust Fractional Order Controllers for Minimum and Non-Minimum Phase Systems – Studies on Design and Development

Authors: Anand Kishore Kola, G. Uday Bhaskar Babu, Kotturi Ajay Kumar

Abstract:

The modern dynamic systems used in industries are complex in nature, and fractional order controllers have been contemplated as a fresh approach to control system design that takes this complexity into account. Traditional integer order controllers use integer derivatives and integrals to control systems, whereas fractional order controllers use fractional derivatives and integrals to capture memory and non-local behavior. This study provides a method based on the maximum sensitivity (Ms) criterion to discover all resilient fractional filter Internal Model Control proportional-integral-derivative (IMC-PID) controllers that stabilize the closed-loop system and deliver the highest performance for a time delay system with a Smith predictor configuration. Additionally, it helps to enhance the range of PID controllers that stabilize the system. This study also evaluates the effectiveness of the suggested controller approach for minimum phase systems in comparison to those currently in use, assessed using the Integral of Absolute Error (IAE) and Total Variation (TV).
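
A minimal sketch of the maximum-sensitivity computation at the heart of the method, Ms = max over w of |1/(1 + L(jw))|, for an assumed first-order-plus-dead-time plant under PID control (plant and gains are illustrative, not the paper's):

```python
import numpy as np

K, T, theta = 1.0, 10.0, 2.0          # FOPDT gain, lag, dead time (assumed)
Kp, Ki, Kd = 2.0, 0.2, 1.0            # PID gains (assumed)

w = np.logspace(-3, 2, 2000)          # frequency grid (rad/s)
s = 1j * w
plant = K * np.exp(-theta * s) / (T * s + 1)
pid = Kp + Ki / s + Kd * s
L = plant * pid                       # open-loop transfer function
Ms = np.max(np.abs(1.0 / (1.0 + L)))  # peak of the sensitivity function
print(f"Ms = {Ms:.2f}")               # robust designs typically target 1.2-2.0
```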

Keywords: modern dynamic systems, fractional order controllers, maximum sensitivity, IMC-PID controllers, Smith predictor, IAE, TV

Procedia PDF Downloads 66
786 Modeling Studies on the Elevated Temperature Formability of Tube Ends Using RSM

Authors: M. J. Davidson, N. Selvaraj, L. Venugopal

Abstract:

The elevated temperature forming behavior in the expansion of thin-walled tubes has been studied in the present work. The influence of process parameters, namely the die angle, the die ratio, and the operating temperature, on the expansion of tube ends at elevated temperatures is investigated. The range of operating parameters has been identified by performing extensive simulation studies, with the hot forming parameters evaluated for AA2014 alloy. The experimental matrix has been developed from the feasible range obtained from the simulation results, and design of experiments is used for the optimization of process parameters. Response Surface Methodology (RSM) with a Box-Behnken design (BBD) is used for developing the mathematical model for expansion, and analysis of variance (ANOVA) is used to analyze the influence of the process parameters. The effects of various process combinations on expansion are analyzed through graphical representations. The developed model is found to be appropriate, as the coefficient of determination is very high (0.9726). The predicted values are found to agree well with the experimental results, within acceptable error limits.
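
A minimal sketch of the RSM step: fitting the second-order polynomial used with a Box-Behnken design and reading off the coefficient of determination (the coded factor data here are synthetic placeholders for die angle, die ratio, and temperature):

```python
import numpy as np
import statsmodels.api as sm

def quadratic_features(X):
    """Full second-order model: linear, interaction, and squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1*x2, x1*x3, x2*x3,
                            x1**2, x2**2, x3**2])

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(15, 3))           # coded factor levels
y = 5 + 2*X[:, 0] - X[:, 1] + 0.5*X[:, 2]**2 + rng.normal(0, 0.1, 15)

model = sm.OLS(y, sm.add_constant(quadratic_features(X))).fit()
print(f"R^2 = {model.rsquared:.4f}")           # paper reports 0.9726
print(f"ANOVA F-test p-value = {model.f_pvalue:.3g}")
```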

Keywords: expansion, optimization, Response Surface Methodology (RSM), ANOVA, Box-Behnken design (BBD), residuals, regression, tube

Procedia PDF Downloads 509
785 Artificial Intelligence in the Design of High-Strength Recycled Concrete

Authors: Hadi Rouhi Belvirdi, Davoud Beheshtizadeh

Abstract:

The increasing demand for sustainable construction materials has led to growing interest in high-strength recycled concrete (HSRC). Utilizing recycled materials not only reduces waste but also minimizes the depletion of natural resources. This study explores the application of artificial intelligence (AI) techniques to model and predict the properties of HSRC. Over the past two decades, production levels in various industries, and consequently the amount of waste, have increased significantly; continuing this trend will undoubtedly cause irreparable damage to the environment, which is why engineers have been seeking practical solutions for recycling industrial waste in recent years. This research utilized 90-day compressive strength results for high-strength recycled concrete in which sand was replaced with crushed glass and glass powder was used instead of cement. A feedforward artificial neural network was then employed to model the 90-day compressive strength results. The regression and error values obtained indicate that this network is suitable for modeling the compressive strength data.
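
A minimal sketch of such a feedforward network, using scikit-learn's MLPRegressor on synthetic mix-proportion features (the feature set, data, and architecture are assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(120, 3))   # e.g. glass-sand ratio, glass-powder ratio, w/c
y = 60 + 15 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 2, 120)   # strength (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)
pred = net.predict(X_te)
print(f"R^2 = {r2_score(y_te, pred):.3f}, "
      f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f} MPa")
```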

Keywords: high-strength recycled concrete, feedforward artificial neural network, regression, construction materials

Procedia PDF Downloads 17
784 Turbulent Forced Convection of Cu-Water Nanofluid: CFD Models Comparison

Authors: I. Behroyan, P. Ganesan, S. He, S. Sivasankaran

Abstract:

This study compares the predictions of five types of Computational Fluid Dynamics (CFD) models, including two single-phase models (Newtonian and non-Newtonian) and three two-phase models (Eulerian-Eulerian, mixture, and Eulerian-Lagrangian), to investigate turbulent forced convection of Cu-water nanofluid in a tube with a constant heat flux on the tube wall. The Reynolds (Re) number of the flow is between 10,000 and 25,000, while the volume fraction of Cu particles is in the range of 0 to 2%. The commercial CFD package ANSYS Fluent is used, and the results from the CFD models are compared with experimental results from the literature. According to this study, the non-Newtonian single-phase model, in general, does not agree well with the Xuan and Li correlation in predicting the Nusselt (Nu) number. The Eulerian-Eulerian model gives inaccurate results except for φ = 0.5%. The mixture model gives a maximum error of 15%. The Newtonian single-phase model and the Eulerian-Lagrangian model are, overall, the recommended models. This work can be used as a reference for selecting an appropriate model for future investigations. The study also gives proper insight into the important factors, such as Brownian motion, fluid behavior parameters, and effective nanoparticle conductivity, which should be considered or adjusted in each model.

Keywords: heat transfer, nanofluid, single-phase models, two-phase models

Procedia PDF Downloads 484
783 Adaptive Anchor Weighting for Improved Localization with Levenberg-Marquardt Optimization

Authors: Basak Can

Abstract:

This paper introduces an iterative, weighted localization method that utilizes a unique cost function formulation to significantly enhance the performance of positioning systems. The system employs locators, such as gateways (GWs), to estimate and track the position of an end node (EN). Performance is evaluated relative to the number of locators, whose known locations are determined through calibration. The performance evaluation uses low-cost single-antenna Bluetooth Low Energy (BLE) devices, though the proposed approach can be applied to alternative Internet of Things (IoT) modulation schemes as well as Ultra-WideBand (UWB) or millimeter-wave (mmWave) devices. In non-line-of-sight (NLOS) scenarios, using four or eight locators yields a 95th-percentile localization error of 2.2 meters and 1.5 meters, respectively, in a 4,305-square-foot indoor area with BLE 5.1 devices. This method outperforms conventional RSSI-based techniques, achieving a 51% improvement with four locators and a 52% improvement with eight locators. Future work involves modeling the impact of interference and implementing data curation across multiple channels to mitigate such effects.
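
A minimal sketch of the lateration step, solving for the end-node position with Levenberg-Marquardt over weighted range residuals (gateway layout, ranges, and the weighting rule are illustrative; the paper's cost-function weighting is its own contribution):

```python
import numpy as np
from scipy.optimize import least_squares

gw = np.array([[0, 0], [20, 0], [0, 20], [20, 20]], float)  # locator positions (m)
d_meas = np.array([14.5, 15.2, 15.0, 13.8])                 # RSSI-derived ranges (m)
w = 1.0 / np.maximum(d_meas, 1.0)        # assumed rule: trust shorter ranges more

def residuals(p):
    """Weighted difference between geometric and measured ranges."""
    return w * (np.linalg.norm(gw - p, axis=1) - d_meas)

sol = least_squares(residuals, x0=np.array([10.0, 10.0]), method="lm")
print(sol.x)                              # estimated end-node position
```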

Keywords: lateration, least squares, Levenberg-Marquardt algorithm, localization, path-loss, RMS error, RSSI, sensors, shadow fading, weighted localization

Procedia PDF Downloads 29
782 Investigation of Extreme Gradient Boosting Model Prediction of Soil Strain-Shear Modulus

Authors: Ehsan Mehryaar, Reza Bushehri

Abstract:

One of the principal parameters defining the dynamic response of clay soil is the strain-shear modulus relation. Predicting the strain and, subsequently, the shear modulus reduction of the soil is essential for performance analysis of structures exposed to earthquake and dynamic loadings. Many soil properties affect the soil's dynamic behavior. In order to capture those effects, a database containing 1,193 data points, consisting of maximum shear modulus, strain, moisture content, initial void ratio, plastic limit, liquid limit, and initial confining pressure from dynamic laboratory testing of 21 clays, is collected in this study for predicting the shear modulus vs. strain curve of soil. A model based on the extreme gradient boosting technique is proposed, and a Tree-structured Parzen Estimator hyper-parameter tuning algorithm is utilized simultaneously to find the best hyper-parameters for the model. The performance of the model is compared to existing empirical equations using the coefficient of correlation and the root mean square error.
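
A minimal sketch of the pipeline: an XGBoost regressor tuned by the Tree-structured Parzen Estimator from hyperopt and scored by RMSE (the synthetic data and search space are assumptions):

```python
import numpy as np
import xgboost as xgb
from hyperopt import fmin, tpe, hp
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 6))   # Gmax, strain, moisture, void ratio, PL, LL
y = X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.3, 500)

space = {
    "max_depth": hp.choice("max_depth", [3, 4, 5, 6]),
    "learning_rate": hp.loguniform("learning_rate", np.log(0.01), np.log(0.3)),
    "n_estimators": hp.choice("n_estimators", [100, 200, 400]),
}

def objective(params):
    model = xgb.XGBRegressor(**params)
    score = cross_val_score(model, X, y, cv=3,
                            scoring="neg_root_mean_squared_error").mean()
    return -score    # hyperopt minimizes, so return cross-validated RMSE

best = fmin(objective, space, algo=tpe.suggest, max_evals=25)
print(best)
```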

Keywords: XGBoost, hyper-parameter tuning, soil shear modulus, dynamic response

Procedia PDF Downloads 203
781 Multi-Objective Multi-Mode Resource-Constrained Project Scheduling Problem by Preemptive Fuzzy Goal Programming

Authors: Busaba Phurksaphanrat

Abstract:

This research proposes a pre-emptive fuzzy goal programming model for the multi-objective multi-mode resource-constrained project scheduling problem. The objectives of the problem are minimization of the total time and the total cost of the project. The objective in a multi-mode resource-constrained project scheduling problem is often minimization of the makespan; however, both time and cost should be considered at the same time, with different priority levels. Moreover, not all cost elements of a project are included in the conventional cost objective function, and an incomplete total project cost causes errors in finding the project schedule. In this research, pre-emptive fuzzy goal programming is presented to solve the multi-objective multi-mode resource-constrained project scheduling problem. It can find a compromise solution to the problem, and it is also flexible in adjusting to find a variety of alternative solutions.
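
A minimal sketch of the pre-emptive (lexicographic) structure on a toy linear program: the higher-priority goal (time) is optimized first and then held while the lower-priority goal (cost) is optimized; the fuzzy membership treatment and the full multi-mode scheduling model are omitted:

```python
import numpy as np
from scipy.optimize import linprog

# variables x = (time, cost); toy feasible region: time + cost >= 10, both in [0, 20]
A_ub, b_ub = np.array([[-1.0, -1.0]]), np.array([-10.0])
bounds = [(0, 20), (0, 20)]

# Priority 1: minimize project time
p1 = linprog(c=[1.0, 0.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
t_star = p1.x[0]

# Priority 2: minimize cost without degrading the achieved time
A2 = np.vstack([A_ub, [1.0, 0.0]])           # add constraint: time <= t_star
b2 = np.append(b_ub, t_star)
p2 = linprog(c=[0.0, 1.0], A_ub=A2, b_ub=b2, bounds=bounds)
print(p2.x)                                  # compromise (time, cost)
```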

Keywords: multi-mode resource constrained project scheduling problem, fuzzy set, goal programming, pre-emptive fuzzy goal programming

Procedia PDF Downloads 437
780 Maximum Deformation Estimation for Reinforced Concrete Buildings Using Equivalent Linearization Method

Authors: Chien-Kuo Chiu

Abstract:

In displacement-based seismic design and evaluation, the equivalent linearization method is one of the approximation methods for estimating the maximum inelastic displacement response of a system. In this study, the accuracy of two equivalent linearization methods is investigated. The investigation covers three soil conditions in Taiwan (Taipei Basin 1, 2, and 3) and five building heights (H_r = 10, 20, 30, 40, and 50 m). The first method is the Taiwan equivalent linearization method (TELM), which was proposed based on the Japanese equivalent linearization method with a modification factor α_T = 0.85. The second method is proposed on the basis of the Lin and Miranda study, with some modifications for Taiwan soil conditions. This study shows that the Taiwanese equivalent linearization method gives better estimates than the modified Lin and Miranda method (MLM): the error indices for the Taiwanese method are 16%, 13%, and 12% for Taipei Basin 1, 2, and 3, respectively. Furthermore, a ductility demand spectrum for single-degree-of-freedom (SDOF) systems is presented in this study as a guide for engineers to estimate the ductility demand of a structure.

Keywords: displacement-based design, ductility demand spectrum, equivalent linearization method, RC buildings, single-degree-of-freedom

Procedia PDF Downloads 162
779 Optimal Sensing Technique for Estimating Stress Distribution of 2-D Steel Frame Structure Using Genetic Algorithm

Authors: Jun Su Park, Byung Kwan Oh, Jin Woo Hwang, Yousok Kim, Hyo Seon Park

Abstract:

For structural safety, the maximum stress calculated from the stress distribution of a structure is widely used. The stress distribution can be estimated from the deformed shape of the structure obtained from measurements. Although the estimation of stress is strongly affected by the location and number of sensing points, most studies have conducted stress estimation without a reasonable basis for the sensing plan, i.e., the location and number of sensors. In this paper, an optimal sensing technique for estimating the stress distribution is proposed. The technique determines the optimal location and number of sensing points for a 2-D frame structure by minimizing, with a genetic algorithm, the error in the stress distribution between the analytical model and the estimate obtained by cubic smoothing splines. To verify the proposed method, the optimal sensor placement technique is applied in simulation tests on a 2-D steel frame structure under various loading scenarios. Through those tests, the optimal sensing plan for the structure is suggested and verified.
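
A minimal sketch of the search: a small genetic algorithm (selection and mutation only) choosing sensor locations so that a cubic spline through the sensed deflections reconstructs the full response; the synthetic deflection curve stands in for the frame model:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 200)
true = np.sin(x) * np.exp(-0.1 * x)           # stand-in deformed shape

def fitness(mask):
    idx = np.flatnonzero(mask)
    if idx.size < 4:                          # cubic spline needs >= 4 points
        return -np.inf
    spl = UnivariateSpline(x[idx], true[idx], k=3, s=0.0)
    # reconstruction error plus a small penalty on sensor count
    return -(np.mean((spl(x) - true) ** 2) + 1e-6 * idx.size)

pop = rng.random((30, 200)) < 0.05            # ~10 sensors per candidate
for gen in range(60):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]               # selection
    children = parents[rng.integers(0, 10, 30)].copy()    # reproduction
    children ^= rng.random(children.shape) < 0.01         # bit-flip mutation
    pop = children
best = pop[np.argmax([fitness(m) for m in pop])]
print("sensor count:", best.sum())
```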

Keywords: genetic algorithm, optimal sensing, optimizing sensor placements, steel frame structure

Procedia PDF Downloads 533
778 A Multigrid Approach for Three-Dimensional Inverse Heat Conduction Problems

Authors: Jianhua Zhou, Yuwen Zhang

Abstract:

A two-step multigrid approach is proposed to solve the inverse heat conduction problem in a 3-D object under laser irradiation. In the first step, the location of the laser center is estimated using a coarse, uniform grid system. In the second step, the front-surface temperature is recovered with good accuracy using a multiple-grid system in which a fine mesh is used at the laser spot center, to capture the drastic temperature rise in this region, while a coarse mesh is employed in the peripheral region to reduce the total number of sensors required. The effectiveness of the two-step approach and the multiple-grid system is demonstrated by illustrative inverse solutions. If the measurement data for the temperature and heat flux on the back surface do not contain random error, the proposed multigrid approach yields more accurate inverse solutions. When the back-surface measurement data contain random noise, accurate inverse solutions cannot be obtained if both temperature and heat flux are measured on the back surface.

Keywords: conduction, inverse problems, conjugate gradient method, laser

Procedia PDF Downloads 370
777 Implementation of Data Science in the Field of Homologation

Authors: Shubham Bhonde, Nekzad Doctor, Shashwat Gawande

Abstract:

For the use and import of keys and ID transmitters, as well as body control modules with radio transmission, homologation is required in many countries, and the final deliverables of product homologation are certificates. There are approximately 200 certificates per product, most of them in local languages. It is challenging to manually investigate each certificate and extract relevant data, such as the expiry date and approval date, and accuracy is essential: inaccurate data may lead to missed re-homologation of certificates, resulting in non-compliance. There is therefore scope for automating the reading of certificate data in the field of homologation, and we use deep learning as the tool for this automation. A model was first trained, once, on every country's basic data, fed with PDF and JPG files through an ETL process; the trained model then yields increasingly accurate results. As an outcome, the expiry date and approval date of a certificate are obtained with a single click. This will eventually help to implement automation features at a broader level in the database where certificates are stored, and will reduce human error to almost negligible levels.
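
A minimal sketch of the extraction step, using OCR plus a regular expression to pull dates from a certificate image (pytesseract and the date formats are assumptions, not the paper's stack):

```python
import re
import pytesseract
from PIL import Image

# matches e.g. 12.05.2021, 12/05/21, 12-05-2021 (assumed formats)
DATE = re.compile(r"\b(\d{1,2}[./-]\d{1,2}[./-]\d{2,4})\b")

def extract_dates(path):
    """OCR a certificate scan and return all date-like strings found."""
    text = pytesseract.image_to_string(Image.open(path))
    return DATE.findall(text)

# print(extract_dates("certificate.jpg"))  # e.g. ['12.05.2021', '12.05.2026']
```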

Keywords: homologation, re-homologation, data science, deep learning, machine learning, ETL (extract, transform, load)

Procedia PDF Downloads 163
776 The Effect of Institutions on Economic Growth: An Analysis Based on Bayesian Panel Data Estimation

Authors: Mohammad Anwar, Shah Waliullah

Abstract:

This study investigated panel data regression models, using Bayesian and classical methods to study the impact of institutions on economic growth with data from 1990-2014, especially for developing countries. Under both the classical and Bayesian methodologies, two panel data models were estimated: common effects and fixed effects. For the Bayesian approach, prior information is used, with a normal-gamma prior for the panel data models; the analysis was done with the WinBUGS14 software. The estimated results showed that panel data models are valid models in the Bayesian methodology. In the Bayesian approach, all independent variables had positive and significant effects on the dependent variable. Based on the standard errors of all models, the fixed effect model is the best model in the Bayesian estimation of panel data models: it was shown to have the lowest standard error compared to the other models.
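
A minimal sketch of a Bayesian fixed-effects panel regression with a normal-gamma-style prior, written in PyMC as a stand-in for the paper's WinBUGS14 model (priors, data, and dimensions are illustrative):

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(7)
n, t = 10, 25                            # countries, years
country = np.repeat(np.arange(n), t)
inst = rng.normal(size=n * t)            # institutions index
growth = 0.5 * inst + rng.normal(0, 0.3, n * t)

with pm.Model():
    tau = pm.Gamma("tau", alpha=2.0, beta=1.0)          # error precision
    beta = pm.Normal("beta", mu=0.0, sigma=1.0)         # slope (normal prior)
    alpha_i = pm.Normal("alpha_i", 0.0, 1.0, shape=n)   # country fixed effects
    mu = alpha_i[country] + beta * inst
    pm.Normal("y", mu=mu, sigma=1.0 / pm.math.sqrt(tau), observed=growth)
    trace = pm.sample(1000, tune=1000, progressbar=False)

print(float(trace.posterior["beta"].mean()))            # posterior mean slope
```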

Keywords: Bayesian approach, common effect, fixed effect, random effect, dynamic random effect model

Procedia PDF Downloads 68
775 A Probabilistic Theory of the Buy-Low and Sell-High for Algorithmic Trading

Authors: Peter Shi

Abstract:

Algorithmic trading is a rapidly expanding domain within quantitative finance, constituting a substantial portion of trading volumes in the US financial market. The demand for rigorous and robust mathematical theories underpinning these trading algorithms is ever-growing. In this study, the author establishes a new stock market model that integrates the Efficient Market Hypothesis and statistical arbitrage. The model, for the first time, finds probabilistic relations between the rational price and the market price in terms of conditional expectations. The theory consequently leads to a mathematical justification of the old market adage: buy low and sell high. The thresholds for "low" and "high" are precisely derived using a max-min operation on the Bayes error. This explicit connection harmonizes the Efficient Market Hypothesis and statistical arbitrage, demonstrating their compatibility in explaining market dynamics; the amalgamation represents a pioneering contribution to quantitative finance. The study culminates in comprehensive numerical tests using historical market data, affirming that the buy-low-sell-high algorithm derived from this theory significantly outperforms the general market over the long term in four out of six distinct market environments.

Keywords: efficient market hypothesis, behavioral finance, Bayes' decision, algorithmic trading, risk control, stock market

Procedia PDF Downloads 72
774 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor

Authors: Jinseon Song, Yongwan Park

Abstract:

In this paper, we propose a method that can estimate a user's position from a single camera, based on a pre-built image database. Previous positioning approaches calculate distance from the arrival time of a signal, as in GPS (Global Positioning System) or RF (Radio Frequency) systems; however, these methods suffer from large error ranges under signal interference. A camera sensor avoids this weakness, but a single camera does not directly provide relative position data, and a stereo camera struggles to provide real-time positions because of the volume of image data. In this research, we first build an image database of the space in which the positioning service is to be provided, using a single camera. Next, we judge similarity through image matching between the database images and the image transmitted by the user. Finally, we determine the user's position from the position of the most similar database image. To verify the proposed method, we experiment in real indoor and outdoor environments. The proposed method has a wide positioning range and can recover not only the user's position but also the viewing direction.
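
A minimal sketch of the retrieval step: match a query photo against a database of location-tagged images and return the pose of the best match (ORB stands in for SURF, which sits in OpenCV's non-free module; the database format is an assumption):

```python
import cv2

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def best_location(query_img, database):
    """database: list of (descriptors, (x, y, heading)) built offline."""
    _, q_desc = orb.detectAndCompute(query_img, None)
    scored = []
    for d_desc, pose in database:
        matches = bf.match(q_desc, d_desc)
        scored.append((len(matches), pose))   # more matches = more similar
    return max(scored)[1]                     # pose of the closest database image
```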

Keywords: positioning, distance, camera, features, SURF (Speeded-Up Robust Features), database, estimation

Procedia PDF Downloads 350
773 Design of a Low Cost Programmable LED Lighting System

Authors: S. Abeysekera, M. Bazghaleh, M. P. L. Ooi, Y. C. Kuang, V. Kalavally

Abstract:

Smart LED-based lighting systems have significant advantages over traditional lighting systems due to their capability of producing tunable light spectra on demand. The main challenge in the design of smart lighting systems is to produce sufficient luminous flux and a uniformly accurate output spectrum over a sufficiently broad area. This paper outlines design principles for programmable LED lighting systems that achieve these two aims, presenting a seven-channel design using low-cost discrete LEDs. Optimization algorithms are used to calculate the required number of LEDs, their arrangement, and the optimum LED separation distance. The results show the illumination uniformity for each channel, and that the maximum color error is below 0.0808 on the CIE 1976 chromaticity scale. In conclusion, the simulated seven-channel programmable lighting system built from low-cost discrete LEDs produces sufficient luminous flux and a uniformly accurate output spectrum over a sufficiently broad area.
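
A minimal sketch of the channel-mixing step: nonnegative least squares choosing drive weights for seven LED channels so their summed spectra approximate a target spectrum (the Gaussian channel spectra and peaks are placeholders for measured data):

```python
import numpy as np
from scipy.optimize import nnls

wl = np.arange(400, 701)                                   # wavelength grid (nm)
peaks = [420, 460, 500, 540, 580, 620, 660]                # channel peaks (assumed)
channels = np.column_stack(
    [np.exp(-0.5 * ((wl - p) / 15.0) ** 2) for p in peaks])
target = np.exp(-0.5 * ((wl - 555) / 80.0) ** 2)           # broad daylight-like aim

weights, resid = nnls(channels, target)                    # nonnegative drive levels
print(np.round(weights, 3), f"residual = {resid:.4f}")
```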

Keywords: light spectrum control, LEDs, smart lighting, programmable LED lighting system

Procedia PDF Downloads 187
772 Hydro-Gravimetric ANN Model for Prediction of Groundwater Level

Authors: Jayanta Kumar Ghosh, Swastik Sunil Goriwale, Himangshu Sarkar

Abstract:

Groundwater is one of the most valuable natural resources, consumed by society for domestic, industrial, and agricultural water supply. Bulk and indiscriminate consumption depletes the resource, and the groundwater recharge rate is often found to be much lower than demand. Thus, to maintain water and food security, it is necessary to monitor and manage groundwater storage. However, it is challenging to estimate groundwater storage (GWS) with existing hydrological models. To overcome these difficulties, machine learning (ML) models are being introduced for the evaluation of groundwater level (GWL). The objective of this research work is therefore to develop an ML-based model for the prediction of GWL. This objective has been realized through the development of an artificial neural network (ANN) model based on hydro-gravimetry. The model has been developed using training samples from field observations spread over 8 months and tested for the prediction of GWL in an observation well. The root mean square error (RMSE) for the test samples was found to be 0.390 meters. Thus, it can be concluded that the hydro-gravimetric ANN model can be used for the prediction of GWL; however, to improve accuracy, more hydro-gravimetric parameters may be considered and tested in the future.

Keywords: machine learning, hydro-gravimetry, ground water level, predictive model

Procedia PDF Downloads 127
771 Modelling the Long Run of Aggregate Import Demand in Libya

Authors: Said Yousif Khairi

Abstract:

For a developing economy, imports of capital goods, raw materials, and manufactured goods are vital for sustainable economic growth. In 2006, Libya imported LD 8 billion (US$6.25 billion) of goods, composed mainly of machinery and transport equipment (49.3%), raw materials (18%), and food products and live animals (13%); this represented about 10% of GDP. Thus, it is pertinent to investigate the factors affecting the volume of Libyan imports. An econometric model representing aggregate import demand for Libya was developed and estimated using the bounds test procedure, which is based on an unrestricted error correction model (UECM), with data from 1970-2010. The results of the bounds test revealed that the volume of imports and its determinants, namely real income, the consumer price index, and the exchange rate, are cointegrated. The findings indicate that in the short run the demand for imports is inelastic with respect to income and the price level, while the exchange rate variable is statistically significant. In the long run, demand is elastic with respect to income, while the price and exchange rate elasticities remain inelastic. This indicates that imports are an important element of Libyan economic growth in the long run.
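
A minimal sketch of the estimation step using the UECM class in statsmodels (the file name, lag orders, and bounds-test case are assumptions; the API shown is that of statsmodels >= 0.13):

```python
import pandas as pd
from statsmodels.tsa.ardl import UECM

# hypothetical annual data, 1970-2010: imports, income, cpi, exchange_rate
df = pd.read_csv("libya_imports.csv", index_col=0)

uecm = UECM(df["imports"], lags=1,
            exog=df[["income", "cpi", "exchange_rate"]], order=1)
res = uecm.fit()
print(res.summary())
# Pesaran-Shin-Smith bounds test; case 3 = unrestricted constant, no trend.
# Cointegration is indicated when the F-statistic exceeds the upper bound.
print(res.bounds_test(case=3))
```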

Keywords: import demand, UECM, bounds test, Libya

Procedia PDF Downloads 362
770 Experimental and Numerical Investigation on Delaminated Composite Plate

Authors: Sreekanth T. G., Kishorekumar S., Sowndhariya Kumar J., Karthick R., Shanmugasuriyan S.

Abstract:

Composites are increasingly being used in industries due to their unique properties, such as high specific stiffness and strength, higher fatigue and wear resistance, and higher damage tolerance. Composites are prone to failures or damages that are difficult to identify, locate, and characterize due to their complex design features and complicated loading conditions. A lack of understanding of the damage mechanisms of composites leads to uncertainty in structural integrity and durability. Delamination is one of the most critical failure mechanisms in laminated composites because it progressively degrades the mechanical performance of fiber-reinforced polymer composite structures over time. The identification and severity characterization of delamination in engineering fields such as the aviation industry is critical for both safety and economic reasons. The presence of delamination alters the vibration properties of composites, such as natural frequencies and mode shapes. In this study, numerical and experimental analyses were performed on delaminated and non-delaminated glass fiber reinforced polymer (GFRP) plates, the results of the two analyses were compared, and the error percentage was determined.

Keywords: composites, delamination, natural frequency, mode shapes

Procedia PDF Downloads 108
769 Survival Analysis Based Delivery Time Estimates for Display FAB

Authors: Paul Han, Jun-Geol Baek

Abstract:

In the flat panel display industry, the scheduler and dispatching system, which control each facility's production order and the distribution of WIP (work in process), are the major production management systems for meeting production target quantities and deadlines. In the dispatching system, delivery time is a key factor determining when a lot can be supplied to a facility. In this paper, we use survival analysis methods to identify the main factors in, and build a forecasting model of, delivery time. Among survival analysis techniques, the Cox proportional hazards model is used to select important explanatory variables, and the Accelerated Failure Time (AFT) model is used to build the prediction model. Performance comparisons were conducted with two other models: a statistical model based on transfer history, and a linear regression model using the same explanatory variables as the AFT model. In terms of the mean square error (MSE) criterion, the AFT model reduced error by 33.8% compared to the existing prediction model and by 5.3% compared to the linear regression model. This survival analysis approach is applicable to implementing a delivery time estimator in display manufacturing, and it can contribute to improving the productivity and reliability of the production management system.
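
A minimal sketch of the two-stage use of survival models with lifelines: Cox proportional hazards to screen explanatory variables, then a Weibull AFT model to predict delivery time (column names and the Weibull choice are stand-ins for the FAB transfer-history features):

```python
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter

# hypothetical file: delivery_hr (duration), arrived (event), numeric features
df = pd.read_csv("lot_history.csv")

cox = CoxPHFitter().fit(df, duration_col="delivery_hr", event_col="arrived")
print(cox.summary[["coef", "p"]])       # keep covariates with small p-values

aft = WeibullAFTFitter().fit(df, duration_col="delivery_hr", event_col="arrived")
print(aft.predict_median(df.head()))    # predicted delivery times for new lots
```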

Keywords: delivery time, survival analysis, Cox PH model, accelerated failure time model

Procedia PDF Downloads 544
768 M-Machine Assembly Scheduling Problem to Minimize Total Tardiness with Non-Zero Setup Times

Authors: Harun Aydilek, Asiye Aydilek, Ali Allahverdi

Abstract:

Our objective is to minimize total tardiness in an m-machine two-stage assembly flowshop scheduling problem. This objective is an important performance measure because the fulfillment of customers' due dates has to be taken into account when making scheduling decisions. In the literature, the problem is considered with zero setup times, which may not be realistic or appropriate for some scheduling environments. Considering setup times separately from processing times increases machine utilization by decreasing idle time and reduces total tardiness. We propose two new algorithms and adapt four existing algorithms from the literature, which are different versions of simulated annealing and genetic algorithms. Moreover, a dominance relation is developed based on the mathematical formulation of the problem and incorporated into our proposed algorithms. Computational experiments are conducted to investigate the performance of the newly proposed algorithms. We find that one of the proposed algorithms performs significantly better than the others: its error is at least 50% smaller than those of the other algorithms. The newly proposed algorithm is also efficient in the case of zero setup times and performs better than the best existing algorithm in the literature.
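
A minimal sketch of a simulated-annealing skeleton of the kind adapted in the paper, shown on a single machine with sequence-dependent setup times for brevity (the full two-stage assembly flowshop and the dominance relation are omitted):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 12
proc = rng.uniform(2, 10, n)          # processing times
setup = rng.uniform(0, 3, (n, n))     # setup[i, j]: setup when j follows i
due = rng.uniform(10, 60, n)

def total_tardiness(seq):
    t, tard, prev = 0.0, 0.0, None
    for j in seq:
        t += (setup[prev, j] if prev is not None else 0.0) + proc[j]
        tard += max(0.0, t - due[j])
        prev = j
    return tard

seq = list(range(n))
best = cur = total_tardiness(seq)
T = 10.0
for it in range(20000):
    i, k = rng.integers(0, n, 2)
    seq[i], seq[k] = seq[k], seq[i]                   # swap move
    cand = total_tardiness(seq)
    if cand <= cur or rng.random() < np.exp((cur - cand) / T):
        cur = cand
        best = min(best, cur)
    else:
        seq[i], seq[k] = seq[k], seq[i]               # undo swap
    T *= 0.9997                                       # geometric cooling
print(f"best total tardiness: {best:.2f}")
```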

Keywords: algorithm, assembly flowshop, scheduling, simulation, total tardiness

Procedia PDF Downloads 333
767 A Stochastic Volatility Model for Optimal Market-Making

Authors: Zubier Arfan, Paul Johnson

Abstract:

The electronification of financial markets and the rise of algorithmic trading have sparked a lot of interest from the mathematical community, in the market-making problem in particular. The research presented in this short paper solves the classic stochastic control problem to derive the strategy for a market-maker, and shows how to calibrate and simulate the strategy with real limit order book data for back-testing. The ambiguity of limit-order priority in back-testing is dealt with by considering optimistic and pessimistic priority scenarios. The basic model, although it outperforms a naive strategy, assumes constant volatility and is therefore not well suited to the LOB data, so the Heston model is introduced to describe the price and variance processes of the asset. The trader's constant absolute risk aversion utility function is optimised by numerically solving a three-dimensional Hamilton-Jacobi-Bellman partial differential equation to find the optimal limit order quotes. The results show that the stochastic volatility market-making model is more suitable for a risk-averse trader and is also less sensitive to calibration error than the constant volatility model.

Keywords: market-making, market microstructure, stochastic volatility, quantitative trading

Procedia PDF Downloads 152
766 Tracking Filtering Algorithm Based on ConvLSTM

Authors: Ailing Yang, Penghan Song, Aihua Cai

Abstract:

The nonlinear maneuvering target tracking problem is mainly a state estimation problem when the target motion model is uncertain. Traditional solutions include Kalman filtering, based on the Bayesian filtering framework, and extended Kalman filtering. However, these methods need prior knowledge, such as a kinematics model and the state-system distribution, and their performance is poor for complex dynamic systems where such priors are unavailable. Therefore, in view of the problems with traditional algorithms, a convolutional LSTM target state estimation algorithm based on self-attention memory (SAM), SAConvLSTM-SE, is proposed to learn the historical motion state of the target and the distribution of the measurement error at the current time. Measured track-point data from airborne radar are processed into data sets; after supervised training, the data-driven deep neural network based on SAConvLSTM can directly output the target state at the next moment. Through experiments on two different maneuvering targets, we find that the network has stronger robustness and better tracking accuracy than existing tracking methods.
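
A minimal sketch of a data-driven state estimator of the same flavor, using a plain LSTM for brevity (the paper's network adds convolution and self-attention memory; dimensions and data are illustrative):

```python
import torch
import torch.nn as nn

class LSTMStateEstimator(nn.Module):
    def __init__(self, state_dim=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, track):                 # track: (batch, time, state_dim)
        out, _ = self.lstm(track)
        return self.head(out[:, -1])          # estimated state at the next step

model = LSTMStateEstimator()
past = torch.randn(8, 20, 4)                  # 8 tracks, 20 steps, (x, y, vx, vy)
next_state = model(past)                      # supervised target: true next state
```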

Keywords: maneuvering target, state estimation, Kalman filter, LSTM, self-attention

Procedia PDF Downloads 180
765 Real Time Implementation of Efficient DFIG-Variable Speed Wind Turbine Control

Authors: Fayssal Amrane, Azeddine Chaiba, Bruno Francois

Abstract:

In this paper, a design and experimental study based on Direct Power Control (DPC) of a DFIG is proposed for stand-alone mode in a Variable Speed Wind Energy Conversion System (VS-WECS). The proposed IDPC method uses robust IP (Integral-Proportional) controllers to control the Rotor Side Converter (RSC) by means of the d-q axis rotor current components (Ird* and Irq*) of the Doubly Fed Induction Generator (DFIG) through an AC-DC-AC converter. The implementation is realized using a dSPACE DS1103 card under sub- and super-synchronous operation (below and above the synchronous speed of 1500 rpm). Finally, experimental results demonstrate that the proposed IP-based control provides improved dynamic responses and decoupled control of the wind-turbine-driven DFIG with high performance (good reference tracking, short response time, and low power error), despite sudden variations in wind speed and in the rotor reference currents.
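
A minimal sketch of the IP (Integral-Proportional) structure used for the rotor current loops: unlike a PI controller, the proportional term acts on the measured signal only, which softens reference-step kicks (gains and sample time are illustrative):

```python
class IPController:
    def __init__(self, ki, kp, ts):
        self.ki, self.kp, self.ts = ki, kp, ts
        self.integral = 0.0

    def update(self, reference, measurement):
        # integral term on the tracking error, proportional term on the
        # measurement only (the defining feature of an IP controller)
        self.integral += self.ki * (reference - measurement) * self.ts
        return self.integral - self.kp * measurement

# one controller per rotor-current axis (Ird*, Irq*), run at the converter rate
ird_ctrl = IPController(ki=50.0, kp=0.8, ts=1e-4)
v_rd = ird_ctrl.update(reference=1.0, measurement=0.0)
```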

Keywords: direct power control (DPC), doubly fed induction generator (DFIG), wind energy conversion system (WECS), experimental study

Procedia PDF Downloads 126
764 Enhancing a Recidivism Prediction Tool with Machine Learning: Effectiveness and Algorithmic Fairness

Authors: Marzieh Karimihaghighi, Carlos Castillo

Abstract:

This work studies how machine learning (ML) may be used to increase the effectiveness of a criminal recidivism risk assessment tool, RisCanvi. The two key dimensions of this analysis are predictive accuracy and algorithmic fairness. The ML-based prediction models obtained in this study are more accurate at predicting criminal recidivism than the manually created formula used in RisCanvi, achieving AUCs of 0.76 and 0.73 in predicting violent and general recidivism, respectively. However, the improvements are small, and algorithmic discrimination can easily be introduced between groups such as nationals vs. foreigners or young vs. old. It is described how effectiveness and algorithmic fairness objectives can be balanced by applying a method in which a single error disparity, in terms of the generalized false positive rate, is minimized while calibration is maintained across groups. The results show that this bias mitigation procedure can substantially reduce generalized false positive rate disparities across multiple groups. Based on these results, it is proposed that ML-based criminal recidivism risk prediction should not be introduced without applying algorithmic bias mitigation procedures.
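
A minimal sketch of the disparity being minimized, taking the generalized false positive rate as the mean predicted risk score among true negatives, computed per group (the synthetic scores and group labels are placeholders):

```python
import numpy as np

def generalized_fpr(scores, labels):
    """Mean risk score over individuals who did not recidivate."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    return scores[labels == 0].mean()

def gfpr_disparity(scores, labels, groups):
    rates = {g: generalized_fpr(scores[groups == g], labels[groups == g])
             for g in np.unique(groups)}
    return rates, max(rates.values()) - min(rates.values())

rng = np.random.default_rng(6)
scores = rng.random(1000)                      # predicted risk scores in [0, 1]
labels = rng.integers(0, 2, 1000)              # 1 = recidivated
groups = np.array(["young", "old"])[rng.integers(0, 2, 1000)]
print(gfpr_disparity(scores, labels, groups))
```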

Keywords: algorithmic fairness, criminal risk assessment, equalized odds, recidivism

Procedia PDF Downloads 152
763 Walmart Sales Forecasting using Machine Learning in Python

Authors: Niyati Sharma, Om Anand, Sanjeev Kumar Prasad

Abstract:

Estimating future sales is one of the essential elements of tactical development for any organization. Walmart sales forecasting is a fine problem to start with, since Walmart provides a major retail data set and also uses this sales-estimation problem for hiring purposes. We analyze how internal and external factors affecting one of the largest companies in the US can influence its weekly sales in the future. Demand forecasting is the planned estimation of demand for products or services on the basis of present and previous data and the state of the market. Since every organization faces an uncertain future and future demand cannot be observed directly, we forecast the forthcoming demand for individual goods by exploring historical statistics and recent market statistics, so that the required products can be produced in advance, in line with market demand. We test the accuracy of several machine learning models on the data set: linear regression fitted to the training data achieves an accuracy of 8.88%, while the extra trees regression model gives the best accuracy, 97.15%.
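
A minimal sketch of the model comparison described above, linear regression versus an extra-trees ensemble, on the public Walmart weekly-sales features (column names follow the Kaggle dataset; the file path is a placeholder):

```python
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("walmart.csv")     # hypothetical local copy of the dataset
X = df[["Store", "Holiday_Flag", "Temperature",
        "Fuel_Price", "CPI", "Unemployment"]]
y = df["Weekly_Sales"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (LinearRegression(), ExtraTreesRegressor(n_estimators=200)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, f"R^2 = {model.score(X_te, y_te):.4f}")
```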

Keywords: random forest algorithm, linear regression algorithm, extra trees classifier, mean absolute error

Procedia PDF Downloads 149
762 Machine Learning Approach for Mutation Testing

Authors: Michael Stewart

Abstract:

Mutation testing is a type of software testing, proposed in the 1970s, in which program statements are deliberately changed to introduce simple errors so that test cases can be validated by checking whether they detect those errors. Test cases are executed against the mutant code to determine whether one fails and detects the error, helping to ensure the program is correct. One major issue with this type of testing is that generating and testing all possible mutations for complex programs is computationally intensive. This paper used reinforcement learning and parallel processing, within the context of mutation testing, for the selection of mutation operators and test cases, reducing the computational cost of testing and improving test suite effectiveness. Experiments were conducted using sample programs to determine how well the reinforcement-learning-based algorithm performed with one live mutation, multiple live mutations, and no live mutations. The experiments, measured by mutation score, were used to update the algorithm and improve the accuracy of its predictions. Performance was then evaluated on multi-processor computers. With reinforcement learning, the number of mutation operators utilized was reduced by 50-100%.
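
A minimal sketch of the operator-selection idea, treating each mutation operator as a bandit arm reinforced when its mutants are killed (the operator list, epsilon-greedy policy, and stubbed kill outcome are assumptions):

```python
import random

operators = ["flip_comparison", "off_by_one", "negate_condition", "swap_args"]
q = {op: 0.0 for op in operators}      # estimated kill rate per operator
counts = {op: 0 for op in operators}
epsilon = 0.1

def choose_operator():
    if random.random() < epsilon:                      # explore
        return random.choice(operators)
    return max(q, key=q.get)                           # exploit the best so far

def update(op, killed):
    counts[op] += 1
    reward = 1.0 if killed else 0.0
    q[op] += (reward - q[op]) / counts[op]             # incremental mean

# training loop; in practice the kill outcome comes from running the test suite
for _ in range(1000):
    op = choose_operator()
    update(op, killed=random.random() < 0.5)           # stubbed outcome
```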

Keywords: automated-testing, machine learning, mutation testing, parallel processing, reinforcement learning, software engineering, software testing

Procedia PDF Downloads 201
761 Next-Generation Lunar and Martian Laser Retro-Reflectors

Authors: Simone Dell'Agnello

Abstract:

There are laser retroreflectors on the Moon but none on Mars. Here we describe the design, construction, qualification, and imminent deployment of next-generation, optimized laser retroreflectors on the Moon and on Mars (where they will be the first). These instruments are positioned by time-of-flight measurements of short laser pulses, the so-called 'laser ranging' technique. Data analysis is carried out with PEP, the Planetary Ephemeris Program of the CfA (Center for Astrophysics). Since 1969, Lunar Laser Ranging (LLR) to the Apollo/Lunokhod cube corner retroreflector (CCR) arrays has supplied accurate tests of General Relativity (GR) and new gravitational physics: possible changes of the gravitational constant (Gdot/G), the weak and strong equivalence principles, gravitational self-energy (the Parametrized Post-Newtonian parameter beta), geodetic precession, and the inverse-square force law; it can also constrain gravitomagnetism. Some of these measurements have also allowed tests of extensions of GR, including spacetime torsion and non-minimally coupled gravity. LLR has also provided significant information on the composition of the deep interior of the Moon; in fact, LLR first provided evidence of the existence of a fluid component of the deep lunar interior. In 1969, CCR arrays contributed a negligible fraction of the LLR error budget; since laser station range accuracy has improved by more than a factor of 100, the current arrays now dominate the error budget, because lunar librations act on their multi-CCR geometry. We developed MoonLIGHT (Moon Laser Instrumentation for General relativity High-accuracy Test), a next-generation single large CCR unaffected by librations that supports an improvement of the space segment of the LLR accuracy by up to a factor of 100. INFN also developed INRRI (INstrument for landing-Roving laser Retro-reflector Investigations), a microreflector to be laser-ranged by orbiters. The performance of these instruments is characterized at the SCF_Lab (Satellite/lunar laser ranging Characterization Facilities Lab, INFN-LNF, Frascati, Italy) for deployment on the lunar surface or in cislunar space. They will be used to accurately position landers, rovers, hoppers, and orbiters of Google Lunar X Prize and space agency missions, thanks to LLR observations from stations of the International Laser Ranging Service in the USA, France, and Italy. INRRI was launched in 2016 with the ESA ExoMars (Exobiology on Mars) mission's EDM (Entry, descent and landing Demonstration Module), deployed on the Schiaparelli lander, and is proposed for the ExoMars 2020 Rover. Based on an agreement between NASA and ASI (Agenzia Spaziale Italiana), another microreflector, LaRRI (Laser Retro-Reflector for InSight), was delivered to JPL (Jet Propulsion Laboratory) and integrated on NASA's InSight Mars lander in August 2017 (launch scheduled for May 2018). A further microreflector, LaRA (Laser Retro-reflector Array), will be delivered to JPL for deployment on the NASA Mars 2020 Rover. The first lunar landing opportunities will be from early 2018 (with TeamIndus) to late 2018 with commercial missions, followed by opportunities with space agency missions, including the proposed deployment of MoonLIGHT and INRRI on NASA's Resource Prospector and its evolutions. In conclusion, we will significantly extend the CCR Lunar Geophysical Network and populate the Mars Geophysical Network; these networks will enable very significantly improved tests of GR.

Keywords: general relativity, laser retroreflectors, lunar laser ranging, Mars geodesy

Procedia PDF Downloads 272