Search results for: vector error correction model (VECM)
17217 The Impact of Artificial Intelligence on Quality Control and Quality
Authors: Mary Moner Botros Fanawel
Abstract:
Many companies use the statistical tool known as statistical quality control, which can be costly for the companies interested in these statistical tools. Evaluating the quality of products and services is an important topic, but reducing the cost of implementing statistical quality control also has important benefits for companies. For this reason, it is important to implement an economic design for the various steps included in statistical quality control. In this paper, we describe some relevant aspects of the economic design of a quality control chart for the proportion of defective items. They are very important because the suggested issues can reduce the cost of implementing such a chart. Note that the main purpose of this chart is to evaluate and control the proportion of defective items of a production process.
Keywords: model predictive control, hierarchical control structure, genetic algorithm, water quality with DBPs objectives proportion, type I error, economic plan, distribution function bootstrap control limit, p-value method, out-of-control signals, p-value, quality characteristics
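As a point of reference for the chart discussed above, the sketch below computes conventional 3-sigma control limits for a p-chart from a set of subgroup inspections; the sample data, subgroup size, and the 3-sigma width are illustrative assumptions and do not reproduce the paper's economic design, which would additionally optimize sampling frequency and cost.

```python
import numpy as np

def p_chart_limits(defectives, n, k=3.0):
    """Return (p_bar, LCL, UCL) for a p-chart.

    defectives: count of defective items in each subgroup
    n: subgroup size (assumed constant here)
    k: control-limit width in standard deviations (3 by default)
    """
    p_hat = np.asarray(defectives, dtype=float) / n
    p_bar = p_hat.mean()                      # center line
    sigma_p = np.sqrt(p_bar * (1.0 - p_bar) / n)
    lcl = max(0.0, p_bar - k * sigma_p)       # proportions cannot be negative
    ucl = min(1.0, p_bar + k * sigma_p)
    return p_bar, lcl, ucl

# Hypothetical inspection data: 20 subgroups of 200 items each
rng = np.random.default_rng(0)
defectives = rng.binomial(n=200, p=0.04, size=20)
p_bar, lcl, ucl = p_chart_limits(defectives, n=200)
print(f"center={p_bar:.4f}  LCL={lcl:.4f}  UCL={ucl:.4f}")
```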
Procedia PDF Downloads 65
17216 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss better classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, where 2,600 clients are classified as non-defaulters, 1,551 are classified as defaulters and 1,281 are temporarily defaulters, meaning that the clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes were considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and finally Support Vector Machines (SVM). For each method, different parameters were analyzed in order to obtain different results when the best of each technique was compared. Initially, the data were coded in thermometer code (numerical attributes) or dummy coding (nominal attributes). The methods were then evaluated for each parameter, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, on the classification of temporarily defaulters, this technique, in terms of false positives, was surpassed by SVM, which had the lowest rate (0.07%) of false positive classifications. All these intrinsic details are discussed considering the results found, and an overview of what was presented is shown in the conclusion of this study.
Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines
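A minimal sketch of the one-against-all setup described above, using scikit-learn's OneVsRestClassifier with an RBF-kernel SVM; the synthetic features and the three class labels (non-defaulter, defaulter, temporarily defaulter) are placeholders for the bank data, which is not public.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score

# Placeholder for the 15-attribute client records (0 = non-defaulter,
# 1 = defaulter, 2 = temporarily defaulter).
rng = np.random.default_rng(42)
X = rng.normal(size=(5432, 15))
y = rng.choice([0, 1, 2], size=5432, p=[2600/5432, 1551/5432, 1281/5432])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# One binary classifier per class, as in the one-against-all assessment.
clf = OneVsRestClassifier(make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)))
clf.fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

print("accuracy:", accuracy_score(y_te, y_hat))
print(confusion_matrix(y_te, y_hat))  # rows: true class, columns: predicted class
```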
Procedia PDF Downloads 105
17215 Rényi Entropy Correction to Expanding Universe
Authors: Hamidreza Fazlollahi
Abstract:
The Rényi entropy comprises a family of information measures that generalizes the well-known Shannon entropy, inheriting many of its properties. It appears as unconditional and conditional entropy, relative entropy, or mutual information, and has found numerous applications in information theory. In Rényi's argument, the area law of the black hole entropy plays a significant role. However, the total entropy can be modified by some quantum effects, motivated by the randomness of a system. In this note, by employing this modified entropy relation, we have derived corrections to the Friedmann equations. Taking this entropy associated with the apparent horizon of the Friedmann-Robertson-Walker Universe and assuming that the first law of thermodynamics, dE = T_A dS_A + W dV, holds on the apparent horizon, we have reconsidered the expanding Universe. The second law of thermodynamics has also been examined.
Keywords: Friedmann equations, dark energy, first law of thermodynamics, Rényi entropy
Procedia PDF Downloads 99
17214 Early Warning for Financial Stress Events: A Credit-Regime Switching Approach
Abstract:
We propose a new early warning model for predicting financial stress events for a given future time. In this model, we examine whether credit conditions play an important role as a nonlinear propagator of shocks when predicting the likelihood of occurrence of financial stress events for a given future time. This propagation takes the form of a threshold regression in which a regime change occurs if credit conditions cross a critical threshold. Given the new early warning model for financial stress events, we evaluate the performance of this model and currently available alternatives, such as the model based on the signal extraction approach and the linear regression model. In-sample forecasting results indicate that the three types of models are useful tools for predicting financial stress events, while none of them outperforms the others across all criteria considered. The out-of-sample forecasting results suggest that the credit-regime switching model performs better than the other two across all criteria and all forecasting horizons considered.
Keywords: cut-off probability, early warning model, financial crisis, financial stress, regime-switching model, forecasting horizons
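The sketch below illustrates the regime-switching idea in a simplified form: a separate logistic early-warning regression is fitted on each side of a candidate credit-condition threshold, and the threshold is chosen by grid search over in-sample log-likelihood. The variable names, the logistic link, and the grid-search criterion are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def fit_regime_model(X, credit, y, threshold):
    """Fit one logit per credit regime; return total in-sample log-loss."""
    total, models = 0.0, {}
    for regime, mask in (("tight", credit <= threshold), ("loose", credit > threshold)):
        if mask.sum() < 20 or len(np.unique(y[mask])) < 2:
            return np.inf, None            # regime too small to estimate
        m = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
        total += log_loss(y[mask], m.predict_proba(X[mask])[:, 1]) * mask.sum()
        models[regime] = m
    return total, models

# Hypothetical data: macro-financial predictors, a credit-condition index,
# and a binary indicator of a stress event a few periods ahead.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))
credit = rng.normal(size=400)
y = (X[:, 0] + 0.8 * (credit < -0.5) * X[:, 1] + rng.normal(size=400) > 1.0).astype(int)

grid = np.quantile(credit, np.linspace(0.15, 0.85, 29))
best_thr, best_loss = None, np.inf
for thr in grid:
    loss, _ = fit_regime_model(X, credit, y, thr)
    if loss < best_loss:
        best_loss, best_thr = loss, thr
print(f"estimated credit threshold: {best_thr:.3f}")
```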
Procedia PDF Downloads 438
17213 Effect of Fiscal Policy on Growth in India
Authors: Parma Chakravartti
Abstract:
The impact of government spending and taxation on economic growth has remained a central issue of fiscal policy analysis. There is a wide range of opinions over the strength of fiscal policy's effect on macroeconomic variables. It can be argued that the impact of fiscal policy depends on the structure and economic condition of the economy. This study makes an attempt to examine the effect of fiscal policy shocks on growth in India using a structural vector autoregressive (SVAR) model, considering data from 1950 to 2019. The study finds that government spending is an important instrument of growth in India, where the share of revenue expenditure to capital expenditure plays a key role. The optimum composition of total expenditure is important for growth, and it is not necessarily true that the capital expenditure multiplier is larger than the revenue expenditure multiplier. The study also finds that government consumption expenditure and government gross capital formation crowd in private consumption expenditure and private gross capital formation, respectively, thus indicating that government expenditure complements private expenditure in India.
Keywords: government spending, fiscal policy, multiplier, growth
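For readers who want to reproduce the flavor of this exercise, the sketch below fits a reduced-form VAR with a recursive (Cholesky) identification and plots impulse responses of output to a spending shock; it is a simplified stand-in for the paper's SVAR, and the variable names and annual toy data are placeholders.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder annual series (log real spending, log real tax revenue, log real GDP).
rng = np.random.default_rng(7)
n = 70
data = pd.DataFrame(
    np.cumsum(rng.normal(0.02, 0.03, size=(n, 3)), axis=0),
    columns=["gov_spending", "net_taxes", "gdp"],
)

growth = data.diff().dropna()          # work with growth rates for stationarity
model = VAR(growth)
res = model.fit(maxlags=4, ic="aic")   # lag order chosen by AIC

# Orthogonalized IRFs use a Cholesky ordering with spending first,
# which is one common (assumed) identification scheme.
irf = res.irf(10)
print(res.summary())
irf.plot(orth=True, impulse="gov_spending", response="gdp")
```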
Procedia PDF Downloads 135
17212 Model of Increasing the Capacity of the Train and Railway Track by Using a New Type of Wagon
Authors: Martin Kendra, Jaroslav Mašek, Juraj Čamaj, Martin Búda
Abstract:
The paper deals with the possibilities of increasing train capacity by using a new type of railway wagon. In the first part, a mathematical model is created to calculate the capacity of the train. The model is based on the main limiting parameters of the train - the maximum number of axles per train, the maximum gross weight of the train, the maximum length of the train and the number of TEUs per wagon. In the second part, the model is applied to four different model trains with different compositions of the train set and three different average weights of TEU, and to a train consisting of the new type of wagons. The result is to identify where the carrying capacity of the original trains is higher or lower than the capacity of the train consisting of the new type of wagons.
Keywords: loading units, theoretical capacity model, train capacity, wagon for intermodal transport
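A minimal sketch of the kind of capacity calculation the model describes: the number of wagons, and hence TEUs, is bounded jointly by the axle, weight, and length limits of the train. All numeric limits and wagon parameters below are invented for illustration and are not the paper's values.

```python
from dataclasses import dataclass
import math

@dataclass
class Wagon:
    axles: int          # axles per wagon
    tare_t: float       # empty weight in tonnes
    length_m: float     # length over buffers in metres
    teu_slots: int      # TEUs carried per wagon

def train_capacity_teu(wagon: Wagon, teu_weight_t: float,
                       max_axles: int, max_gross_t: float, max_length_m: float,
                       loco_axles: int = 6, loco_t: float = 90.0, loco_len_m: float = 20.0):
    """Return (wagons, TEUs) under the binding of the three train-level limits."""
    by_axles = (max_axles - loco_axles) // wagon.axles
    by_length = math.floor((max_length_m - loco_len_m) / wagon.length_m)
    gross_per_wagon = wagon.tare_t + wagon.teu_slots * teu_weight_t
    by_weight = math.floor((max_gross_t - loco_t) / gross_per_wagon)
    wagons = min(by_axles, by_length, by_weight)
    return wagons, wagons * wagon.teu_slots

# Hypothetical comparison: a conventional wagon vs. a lighter new-type wagon.
conventional = Wagon(axles=4, tare_t=20.0, length_m=19.7, teu_slots=3)
new_type = Wagon(axles=4, tare_t=17.0, length_m=18.0, teu_slots=3)
for label, w in (("conventional", conventional), ("new type", new_type)):
    print(label, train_capacity_teu(w, teu_weight_t=12.0,
                                    max_axles=120, max_gross_t=1800.0, max_length_m=600.0))
```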
Procedia PDF Downloads 500
17211 Modeling and Simulation Methods Using MATLAB/Simulink
Authors: Jamuna Konda, Umamaheswara Reddy Karumuri, Sriramya Muthugi, Varun Pishati, Ravi Shakya
Abstract:
This paper investigates the challenges involved in mathematical modeling of plant simulation models, ensuring that the performance of the plant models is much closer to the real-time physical model. The paper includes the analysis performed and investigation of different methods of modeling, design and development for the plant model. Issues which impact the design time, model accuracy as a real-time model, and tool dependence are analyzed. The real-time hardware plant would be a combination of multiple physical models. It is more challenging to test the complete system with all possible test scenarios. There are possibilities of failure or damage of the system due to any unwanted test execution in real time.
Keywords: model based design (MBD), MATLAB, Simulink, stateflow, plant model, real time model, real-time workshop (RTW), target language compiler (TLC)
Procedia PDF Downloads 346
17210 Competition, Stability, and Economic Growth: A Causality Approach
Authors: Mahvish Anwaar
Abstract:
Research Question: In this paper, we explore the causal relationship between banking competition, banking stability, and economic growth. Research Findings: Unbalanced panel data covering 2000 to 2018 were collected to analyze the causality among banking competition, banking stability, and economic growth. The main focus of the study is to check the direction of causality among the selected variables. The results of the study support the demand-following, supply-leading, feedback, and neutrality hypotheses, conditional on different measures of banking competition, banking stability, and economic growth. Theoretical Implication: Jayakumar, Pradhan, Dash, Maradana, and Gaurav (2018) proposed a theoretical model of the causal relationship between banking competition, banking stability, and economic growth by using different indicators. We empirically test the proposed indicators in our study. This study contributes to the literature by showing how the relationship differs between developing and developed countries. Policy Implications: The study covers various policy implications: investors can analyze how to properly manage their finances, and government agencies can draw on the present study to find the best and most suitable policies by examining how the economy can grow with respect to its finances.
Keywords: competition, stability, economic growth, vector auto-regression, Granger causality
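A minimal sketch of the pairwise Granger-causality checks that underlie this kind of study, using statsmodels; the two toy series stand in for a banking-competition measure and GDP growth, and the lag order is an arbitrary choice for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Toy series: x "leads" y by one period, so x should Granger-cause y.
rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
y = np.r_[0.0, 0.7 * x[:-1]] + 0.3 * rng.normal(size=n)
df = pd.DataFrame({"growth": y, "competition": x})

# Null hypothesis: the second column does NOT Granger-cause the first column.
res = grangercausalitytests(df[["growth", "competition"]], maxlag=2)
for lag, (tests, _) in res.items():
    print(f"lag {lag}: ssr F-test p-value = {tests['ssr_ftest'][1]:.4f}")
```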
Procedia PDF Downloads 66
17209 Mixture Statistical Modeling for Predicting Mortality of Human Immunodeficiency Virus (HIV) and Tuberculosis (TB) Infected Patients
Authors: Mohd Asrul Affendi Bi Abdullah, Nyi Nyi Naing
Abstract:
The purpose of this study was to compare the negative binomial death rate (NBDR) and zero-inflated negative binomial death rate (ZINBDR) models for patients who died with (HIV+TB+) and (HIV+TB−). HIV and TB are serious worldwide problems in developing countries. Data were analyzed by applying the NBDR and ZINBDR models to determine which model is more favorable. The ZINBDR model is able to account for the disproportionately large number of zeros within the data and is shown to be a consistently better fit than the NBDR model. Hence, the ZINBDR model is a superior fit to the data compared to the NBDR model and provides additional information regarding the death mechanisms of HIV+TB patients. The ZINBDR model is shown to be a useful tool for analyzing death rates across age categories.
Keywords: zero inflated negative binomial death rate, HIV and TB, AIC and BIC, death rate
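The sketch below shows how a negative binomial and a zero-inflated negative binomial count model can be fitted and compared by AIC/BIC with statsmodels, in the spirit of the comparison above; the simulated zero-inflated counts and the single age covariate are placeholders for the clinical data.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

# Simulated death counts per age group with excess zeros (placeholder data).
rng = np.random.default_rng(11)
n = 500
age_group = rng.integers(0, 5, size=n)
mu = np.exp(0.3 + 0.25 * age_group)
counts = rng.negative_binomial(n=2, p=2 / (2 + mu))
counts[rng.random(n) < 0.35] = 0          # structural zeros

X = sm.add_constant(age_group.astype(float))

nb = sm.NegativeBinomial(counts, X).fit(disp=0)
zinb = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=np.ones((n, 1))).fit(disp=0, maxiter=200)

for name, res in (("NB", nb), ("ZINB", zinb)):
    print(f"{name}: AIC={res.aic:.1f}  BIC={res.bic:.1f}")
```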
Procedia PDF Downloads 435
17208 A Study of ZY3 Satellite Digital Elevation Model Verification and Refinement with Shuttle Radar Topography Mission
Authors: Bo Wang
Abstract:
As the first high-resolution civil optical satellite, the ZY-3 satellite is able to obtain high-resolution multi-view images with three linear array sensors. The images can be used to generate Digital Elevation Models (DEM) through dense matching of stereo images. However, due to the clouds, forest, water and buildings covered in the images, there are some problems in the dense matching results, such as outliers and areas that failed to be matched (matching holes). This paper introduces an algorithm to verify the accuracy of DEMs generated by the ZY-3 satellite with the Shuttle Radar Topography Mission (SRTM). Since the accuracy of SRTM (internal accuracy: 5 m; external accuracy: 15 m) is relatively uniform worldwide, it may be used to improve the accuracy of the ZY-3 DEM. Based on the analysis of large volumes of DEM and SRTM data, the processing can be divided into two aspects. The registration of the ZY-3 DEM and SRTM is first performed using the conjugate line features and area features matched between these two datasets. Then the ZY-3 DEM can be refined by eliminating the matching outliers and filling the matching holes. The matching outliers can be eliminated based on statistics from Local Vector Binning (LVB). The matching holes can be filled by the elevation interpolated from SRTM. Some work is also conducted on the accuracy statistics of the ZY-3 DEM.
Keywords: ZY-3 satellite imagery, DEM, SRTM, refinement
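A simplified numpy sketch of the two refinement steps described above: flagging outliers where the ZY-3 minus SRTM difference is extreme relative to robust local statistics, then filling flagged cells and matching holes with SRTM elevations. The window size, threshold, and synthetic rasters are illustrative assumptions and do not reproduce the paper's LVB procedure.

```python
import numpy as np
from scipy.ndimage import median_filter

def refine_dem(zy3, srtm, win=9, k=3.0):
    """Replace outlier and hole cells of a ZY-3 DEM with SRTM elevations."""
    diff = zy3 - srtm
    local_med = median_filter(np.nan_to_num(diff), size=win)
    mad = median_filter(np.abs(np.nan_to_num(diff) - local_med), size=win) + 1e-6
    outlier = np.abs(diff - local_med) > k * 1.4826 * mad   # robust z-score test
    hole = np.isnan(zy3)                                    # matching holes
    refined = np.where(outlier | hole, srtm, zy3)
    return refined, outlier, hole

# Synthetic 200x200 rasters standing in for the co-registered DEMs.
rng = np.random.default_rng(5)
srtm = rng.normal(500, 50, size=(200, 200))
zy3 = srtm + rng.normal(0, 2, size=(200, 200))
zy3[rng.random(zy3.shape) < 0.01] += 80        # injected blunders
zy3[rng.random(zy3.shape) < 0.02] = np.nan     # injected matching holes

refined, outlier, hole = refine_dem(zy3, srtm)
print("outliers flagged:", int(outlier.sum()), "holes filled:", int(hole.sum()))
```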
Procedia PDF Downloads 347
17207 A Weighted Sum Particle Swarm Approach (WPSO) Combined with a Novel Feasibility-Based Ranking Strategy for Constrained Multi-Objective Optimization of Compact Heat Exchangers
Authors: Milad Yousefi, Moslem Yousefi, Ricarpo Poley, Amer Nordin Darus
Abstract:
Design optimization of heat exchangers is a very complicated task that has traditionally been carried out based on a trial-and-error procedure. To overcome the difficulties of the conventional design approaches, especially when a large number of variables, constraints and objectives are involved, a new method based on a well-established evolutionary algorithm, particle swarm optimization (PSO), a weighted sum approach and a novel constraint handling strategy is presented in this study. Since the conventional constraint handling strategies are neither effective nor easy to implement in multi-objective algorithms, a novel feasibility-based ranking strategy is introduced which is both extremely user-friendly and effective. A case study from industry has been investigated to illustrate the performance of the presented approach. The results show that the proposed algorithm can find near Pareto-optimal solutions with higher accuracy when compared to the conventional non-dominated sorting genetic algorithm II (NSGA-II). Moreover, the difficulties of a trial-and-error process for setting penalty parameters are avoided in this algorithm.
Keywords: heat exchanger, multi-objective optimization, particle swarm optimization, NSGA-II, constraint handling
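As an illustration of what a feasibility-based ranking can look like, the sketch below implements the widely used Deb-style feasibility rules as a comparison function for candidate solutions; it is not the paper's own novel strategy, and the constraint-violation measure is an assumed sum of clipped violations.

```python
import numpy as np

def violation(g_values):
    """Total constraint violation for inequality constraints g_i(x) <= 0."""
    return float(np.sum(np.clip(g_values, 0.0, None)))

def better(candidate_a, candidate_b):
    """Deb-style feasibility rules for a (scalarized) minimization objective.

    candidate_a/b are dicts with 'f' (weighted-sum objective) and 'g' (constraint values).
    Returns True if candidate_a should be ranked ahead of candidate_b.
    """
    va, vb = violation(candidate_a["g"]), violation(candidate_b["g"])
    if va == 0.0 and vb == 0.0:          # both feasible: lower objective wins
        return candidate_a["f"] < candidate_b["f"]
    if (va == 0.0) != (vb == 0.0):       # only one feasible: it wins
        return va == 0.0
    return va < vb                       # both infeasible: smaller violation wins

a = {"f": 3.2, "g": np.array([-0.1, -0.5])}   # feasible
b = {"f": 2.9, "g": np.array([0.3, -0.5])}    # infeasible (first constraint violated)
print(better(a, b))   # True: feasibility outranks the better objective value
```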
Procedia PDF Downloads 557
17206 A Microwave Heating Model for Endothermic Reaction in the Cement Industry
Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira
Abstract:
Microwave technology has been gaining importance in contributing to decarbonization processes in high energy demand industries. Despite the several numerical models presented in the literature, a proper Verification and Validation exercise is still lacking. This is important and required to evaluate the physical process model accuracy and adequacy. Another issue addresses impedance matching, which is an important mechanism used in microwave experiments to increase electromagnetic efficiency. Such a mechanism is not available in current computational tools, thus requiring an external numerical procedure. A numerical model was implemented to study the continuous processing of limestone with microwave heating. This process requires the material to be heated until a certain temperature that will prompt a highly endothermic reaction. Both a 2D and a 3D model were built in COMSOL Multiphysics to solve the two-way coupling between the Maxwell and energy equations, along with the coupling between both heat transfer phenomena and the limestone endothermic reaction. The 2D model was used to study and evaluate the required numerical procedure, being also a benchmark test, allowing other authors to implement impedance matching procedures. To achieve this goal, a controller built in MATLAB was used to continuously match the cavity impedance and predict the required energy for the system, thus successfully avoiding energy inefficiencies. The 3D model reproduces realistic results and therefore supports the main conclusions of this work. Limestone was modeled as a continuous flow under the transport of concentrated species, whose material and kinetic properties were taken from the literature. Verification and Validation of the coupled model was performed separately from the chemical kinetic model. The chemical kinetic model was found to correctly describe the chosen kinetic equation by comparing numerical results with experimental data. A solution verification was made for the electromagnetic interface, where second-order and fourth-order accurate schemes were found for linear and quadratic elements, respectively, with numerical uncertainty lower than 0.03%. Regarding the coupled model, it was demonstrated that the numerical error would diverge for the heat transfer interface with the mapped mesh. Results showed numerical stability for the triangular mesh, and the numerical uncertainty was less than 0.1%. This study evaluated the influence of limestone velocity, heat transfer, and load on thermal decomposition and overall process efficiency. The velocity and heat transfer coefficient were studied with the 2D model, while different loads of material were studied with the 3D model. Both models proved to be highly unstable when solving non-linear temperature distributions. High velocity flows exhibited a propensity for thermal runaways, and the thermal efficiency tended to stabilize for the higher velocities and higher filling ratios. Microwave efficiency exhibited an optimal velocity for each heat transfer coefficient, pointing out that electromagnetic efficiency is a consequence of energy distribution uniformity. The 3D results indicated the inefficient development of the electric field for low filling ratios. Thermal efficiencies higher than 90% were found for the higher loads, and microwave efficiencies up to 75% were accomplished. The 80% fill ratio was demonstrated to be the optimal load, with an associated global efficiency of 70%.
Keywords: multiphysics modeling, microwave heating, verification and validation, endothermic reactions modeling, impedance matching, limestone continuous processing
Procedia PDF Downloads 143
17205 Analytical Performance of Cobas C 8000 Analyzer Based on Sigma Metrics
Authors: Sairi Satari
Abstract:
Introduction: Six-sigma is a metric that quantifies the performance of processes as a rate of Defects-Per-Million Opportunities. Sigma methodology can be applied in a chemical pathology laboratory for evaluating process performance, with evidence for process improvement in the quality assurance program. In the laboratory, these methods have been used to improve the timeliness of troubleshooting, reduce the cost and frequency of quality control and minimize pre- and post-analytical errors. Aim: The aim of this study is to evaluate the sigma values of the Cobas 8000 analyzer based on the minimum requirement of the specification. Methodology: Twenty-one analytes were chosen in this study. The analytes were alanine aminotransferase (ALT), albumin, alkaline phosphatase (ALP), amylase, aspartate transaminase (AST), total bilirubin, calcium, chloride, cholesterol, HDL-cholesterol, creatinine, creatinine kinase, glucose, lactate dehydrogenase (LDH), magnesium, potassium, protein, sodium, triglyceride, uric acid and urea. Total error was obtained from the Clinical Laboratory Improvement Amendments (CLIA). The bias was calculated from the end-cycle report of the Royal College of Pathologists of Australasia (RCPA) cycle from July to December 2016, and the coefficient of variation (CV) from six months of internal quality control (IQC). The sigma was calculated based on the formula: Sigma = (Total Error - Bias) / CV. The analytical performance was evaluated based on the sigma: sigma > 6 is world class, sigma > 5 is excellent, sigma > 4 is good, sigma < 4 is satisfactory, and sigma < 3 is poor performance. Results: Based on the calculation, we found that 76% are world class (ALT, albumin, ALP, amylase, AST, total bilirubin, cholesterol, HDL-cholesterol, creatinine, creatinine kinase, glucose, LDH, magnesium, potassium, triglyceride and uric acid), 14% are excellent (calcium, protein and urea), and 10% (chloride and sodium) require more frequent IQC performed per day. Conclusion: Based on this study, we found that IQC should be performed frequently only for chloride and sodium to ensure accurate and reliable analysis for patient management.
Keywords: sigma metrics, analytical performance, total error, bias
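The sigma calculation quoted above is simple enough to script directly; the sketch below computes sigma for each analyte from its allowable total error, bias, and CV, and assigns the performance band used in the abstract. The example numbers are invented placeholders, not the study's data.

```python
def sigma_metric(total_error_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma = (allowable total error - |bias|) / CV, all expressed in percent."""
    return (total_error_pct - abs(bias_pct)) / cv_pct

def performance_band(sigma: float) -> str:
    if sigma >= 6:
        return "world class"
    if sigma >= 5:
        return "excellent"
    if sigma >= 4:
        return "good"
    if sigma >= 3:
        return "satisfactory"
    return "poor"

# Hypothetical entries: analyte -> (allowable total error %, bias %, CV %)
analytes = {
    "glucose": (10.0, 1.2, 1.1),
    "sodium": (2.9, 0.8, 0.9),
    "ALT": (20.0, 2.5, 2.0),
}
for name, (tea, bias, cv) in analytes.items():
    s = sigma_metric(tea, bias, cv)
    print(f"{name}: sigma = {s:.1f} ({performance_band(s)})")
```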
Procedia PDF Downloads 173
17204 Model-Based Software Regression Test Suite Reduction
Authors: Shiwei Deng, Yang Bao
Abstract:
In this paper, we present a model-based regression test suite reduction approach that uses EFSM model dependence analysis and a probability-driven greedy algorithm to reduce software regression test suites. The approach automatically identifies the difference between the original model and the modified model as a set of elementary model modifications. The EFSM dependence analysis is performed for each elementary modification to reduce the regression test suite, and then the probability-driven greedy algorithm is adopted to select the minimum set of test cases from the reduced regression test suite that cover all interaction patterns. Our initial experience shows that the approach may significantly reduce the size of regression test suites.
Keywords: dependence analysis, EFSM model, greedy algorithm, regression test
Procedia PDF Downloads 430
17203 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the setup of the most important risk factors. Subsequently, risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors for a required analysis dimension. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of a digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. The sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
Keywords: linked open data, information integration, digital libraries, data mining
Procedia PDF Downloads 430
17202 Evaluation of the Effect of Milk Recording Intervals on the Accuracy of an Empirical Model Fitted to Dairy Sheep Lactations
Authors: L. Guevara, Glória L. S., Corea E. E, A. Ramírez-Zamora M., Salinas-Martinez J. A., Angeles-Hernandez J. C.
Abstract:
Mathematical models are useful for identifying the characteristics of sheep lactation curves in order to develop and implement improved strategies. However, the accuracy of these models is influenced by factors such as the recording regime, mainly the intervals between test day records (TDR). The current study aimed to evaluate the effect of different TDR intervals on the goodness of fit of the Wood model (WM) applied to dairy sheep lactations. A total of 4,494 weekly TDRs from 156 lactations of dairy crossbred sheep were analyzed. Three new databases were generated from the original weekly TDR data (7D), comprising intervals of 14 (14D), 21 (21D), and 28 (28D) days. The parameters of the WM were estimated using the "minpack.lm" package in the R software. The shape of the lactation curve (typical and atypical) was defined based on the WM parameters. The goodness of fit was evaluated using the mean square of prediction error (MSPE), the root of the MSPE (RMSPE), Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the coefficient of correlation (r) between the actual and estimated total milk yield (TMY). The WM showed an adequate estimate of TMY regardless of the TDR interval (P=0.21) and the shape of the lactation curve (P=0.42). However, we found higher values of r for typical curves compared to atypical curves (0.90 vs. 0.74), with the highest values for the 28D interval (r=0.95). In the same way, we observed an overestimated peak yield (0.92 vs. 6.6 L) and an underestimated time of peak yield (21.5 vs. 1.46) in atypical curves. The best values of RMSPE were observed for the 28D interval in both lactation curve shapes. The significantly lowest values of AIC (P=0.001) and BIC (P=0.001) were shown by the 7D interval for typical and atypical curves. These results represent a first approach to defining an adequate recording interval regime for dairy sheep in Latin America and showed a better fit of the Wood model using a 7D interval. However, it is possible to obtain good estimates of TMY using a 28D interval, which reduces the sampling frequency and would save additional costs for dairy sheep producers.
Keywords: gamma incomplete, ewes, shape curves, modeling
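Because the study fitted the Wood model in R with minpack.lm, a rough Python equivalent is sketched below using scipy's curve_fit on one simulated lactation; the Wood form y(t) = a·t^b·exp(-c·t) and the derived peak-yield expressions are standard, but the simulated test-day data and starting values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation curve: milk yield at day in milk t."""
    return a * np.power(t, b) * np.exp(-c * t)

# Simulated weekly test-day records (7D interval) for one ewe.
rng = np.random.default_rng(2)
t = np.arange(7, 150, 7, dtype=float)
y = wood(t, a=0.45, b=0.35, c=0.02) + rng.normal(0, 0.05, size=t.size)

params, _ = curve_fit(wood, t, y, p0=(0.5, 0.3, 0.02), maxfev=10000)
a, b, c = params
peak_time = b / c                                  # day of peak yield implied by the fit
peak_yield = wood(peak_time, a, b, c)
total_yield = wood(np.arange(1, 151, dtype=float), a, b, c).sum()  # rough TMY over 150 d

print(f"a={a:.3f} b={b:.3f} c={c:.4f}  peak day={peak_time:.1f} "
      f"peak yield={peak_yield:.2f}  estimated total yield={total_yield:.1f}")
```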
Procedia PDF Downloads 79
17201 Dynamics of a Susceptible-Infected-Recovered Model along with Time Delay, Modulated Incidence, and Nonlinear Treatment
Authors: Abhishek Kumar, Nilam
Abstract:
As we know, time delay exists in almost every biological phenomenon. Therefore, in the present study, we propose a susceptible-infected-recovered (SIR) epidemic model along with time delay, a modulated incidence rate of infection, and a Holling type II nonlinear treatment rate. The present model aims to provide a strategy to control the spread of epidemics. In the mathematical study of the model, it has been shown that the model has two equilibria, which are named the disease-free equilibrium (DFE) and the endemic equilibrium (EE). Further, stability analysis of the model is discussed. To prove the stability of the model at the DFE, we derived the basic reproduction number, denoted by R₀. With the help of the basic reproduction number R₀, we showed that the model is locally asymptotically stable at the DFE when R₀ is less than unity and unstable when R₀ is greater than unity. Furthermore, stability analysis of the model at the endemic equilibrium has also been discussed. Finally, numerical simulations have been done using MATLAB 2012b to exemplify the theoretical results.
Keywords: time delayed SIR epidemic model, modulated incidence rate, Holling type II nonlinear treatment rate, stability
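As a numerical companion to the description above, the sketch below integrates a plain SIR system with a saturated (Holling type II) treatment term, h(I) = aI/(1 + bI), using scipy; the time delay and the specific modulated incidence of the paper are omitted, and all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the paper's): recruitment A, natural death mu,
# transmission beta, recovery gamma, Holling type II treatment a*I/(1+b*I).
A, mu, beta, gamma, a, b = 5.0, 0.05, 0.002, 0.1, 0.3, 0.05

def sir_treatment(t, y):
    S, I, R = y
    incidence = beta * S * I                # bilinear incidence as a simplification
    treatment = a * I / (1.0 + b * I)       # saturating treatment rate
    dS = A - incidence - mu * S
    dI = incidence - (gamma + mu) * I - treatment
    dR = gamma * I + treatment - mu * R
    return [dS, dI, dR]

sol = solve_ivp(sir_treatment, (0, 400), [95.0, 5.0, 0.0], max_step=1.0)
S, I, R = sol.y[:, -1]
print(f"state at t=400: S={S:.1f}  I={I:.1f}  R={R:.1f}")

# A rough R0 for this simplified system (treatment slope a at I = 0):
R0 = beta * (A / mu) / (gamma + mu + a)
print(f"illustrative R0 = {R0:.2f}")
```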
Procedia PDF Downloads 160
17200 The Use of Medical Biotechnology to Treat Genetic Disease
Authors: Rachel Matar, Maxime Merheb
Abstract:
Chemical drugs were used for many centuries as the only way to cure diseases until novel gene therapy was created in 1960. Gene therapy is based on the insertion, correction, or inactivation of genes to treat people with genetic illness (1). Gene therapy has worked wonders in Parkinson's, Alzheimer's and multiple sclerosis, in addition to showing great promise in the healing of deadly diseases like many types of cancer and autoimmune diseases (2). This method implies the use of recombinant DNA technology with the help of different viral and non-viral vectors (3). It is nowadays used in somatic cells as well as embryos and gametes. Besides all the benefits of gene therapy, this technique is deemed by some opponents to be an ethically unacceptable treatment, as it implies playing with the genes of living organisms.
Keywords: gene therapy, genetic disease, cancer, multiple sclerosis
Procedia PDF Downloads 546
17199 Simulation of Flow through Dam Foundation by FEM and ANN Methods Case Study: Shahid Abbaspour Dam
Authors: Mehrdad Shahrbanozadeh, Gholam Abbas Barani, Saeed Shojaee
Abstract:
In this study, a finite element model (Seep3D) and an artificial neural network (ANN) model were developed to simulate flow through a dam foundation. The Seep3D model is capable of simulating three-dimensional flow through heterogeneous and anisotropic, saturated and unsaturated porous media. Flow through the Shahid Abbaspour dam foundation has been used as a case study. The FEM, with 24960 triangular elements and 28707 nodes, was applied to model flow through the foundation of this dam. The FEM mesh was made denser in the neighborhood of the curtain screen. The ANN model developed for the Shahid Abbaspour dam is a feedforward four-layer network employing the sigmoid function as an activator and the back-propagation algorithm for the network learning. The water level elevations of the upstream and downstream of the dam have been used as input variables and the piezometric heads as the target outputs in the ANN model. The two models are calibrated and verified using the Shahid Abbaspour dam's piezometric data. Results of the models were compared with the piezometer measurements and are in good agreement. The model results also revealed that the ANN model performed as well as, and in some cases better than, the FEM.
Keywords: seepage, dam foundation, finite element method, neural network, seep 3D model
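A compact sketch of the ANN part described above: a feedforward network with sigmoid (logistic) activation trained by backpropagation to map upstream and downstream water levels to piezometric heads. It uses scikit-learn's MLPRegressor on synthetic placeholder data rather than the dam's records.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

# Placeholder data: inputs are upstream/downstream water levels (m),
# target is the head at one piezometer (m).
rng = np.random.default_rng(8)
upstream = rng.uniform(520, 540, size=600)
downstream = rng.uniform(360, 380, size=600)
head = 0.6 * upstream + 0.3 * downstream + rng.normal(0, 0.5, size=600)

X = np.column_stack([upstream, downstream])
X_tr, X_te, y_tr, y_te = train_test_split(X, head, test_size=0.25, random_state=0)

# Two hidden layers plus input and output give a "four-layer" feedforward net;
# logistic activation mirrors the sigmoid activator mentioned in the abstract.
ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), activation="logistic",
                 solver="adam", max_iter=5000, random_state=0),
)
ann.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(r2_score(y_te, ann.predict(X_te)), 3))
```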
Procedia PDF Downloads 477
17198 A Mixed Integer Linear Programming Model for Flexible Job Shop Scheduling Problem
Authors: Mohsen Ziaee
Abstract:
In this paper, a mixed integer linear programming (MILP) model is presented to solve the flexible job shop scheduling problem (FJSP). This problem is one of the hardest combinatorial problems. The objective considered is the minimization of the makespan. The computational results of the proposed MILP model were compared with those of the best-known mathematical model in the literature in terms of computational time. The results show that our model has better performance with respect to all the considered performance measures, including relative percentage deviation (RPD) value, number of constraints, and total number of variables. With this improved mathematical model, larger FJSP instances can be optimally solved in reasonable time, and therefore the model would be a better tool for the performance evaluation of the approximation algorithms developed for the problem.
Keywords: scheduling, flexible job shop, makespan, mixed integer linear programming
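To make the idea of a makespan-minimizing FJSP MILP concrete, the sketch below models a tiny two-job, two-machine instance with PuLP using standard assignment, precedence, and big-M disjunctive constraints; this textbook-style formulation and the toy data are assumptions for illustration and are not the paper's improved model.

```python
import pulp

# Tiny illustrative instance: 2 jobs x 2 operations, 2 machines.
# proc[op][machine] = processing time; every operation may run on either machine.
proc = {
    "J1_O1": {"M1": 3, "M2": 5},
    "J1_O2": {"M1": 4, "M2": 2},
    "J2_O1": {"M1": 2, "M2": 3},
    "J2_O2": {"M1": 4, "M2": 4},
}
ops = list(proc)
machines = ["M1", "M2"]
precedence = [("J1_O1", "J1_O2"), ("J2_O1", "J2_O2")]
bigM = 100  # larger than any reasonable schedule length

prob = pulp.LpProblem("tiny_FJSP", pulp.LpMinimize)
x = {(o, m): pulp.LpVariable(f"x_{o}_{m}", cat="Binary") for o in ops for m in machines}
s = {o: pulp.LpVariable(f"s_{o}", lowBound=0) for o in ops}
cmax = pulp.LpVariable("makespan", lowBound=0)
prob += cmax

dur = {o: pulp.lpSum(proc[o][m] * x[o, m] for m in machines) for o in ops}
for o in ops:
    prob += pulp.lpSum(x[o, m] for m in machines) == 1   # assign exactly one machine
    prob += cmax >= s[o] + dur[o]                         # makespan covers every operation
for a, b in precedence:
    prob += s[b] >= s[a] + dur[a]                         # operation order within a job

# Disjunctive constraints: operations of different jobs sharing a machine must not overlap.
pairs = [(a, b) for i, a in enumerate(ops) for b in ops[i + 1:] if a.split("_")[0] != b.split("_")[0]]
y = {(a, b, m): pulp.LpVariable(f"y_{a}_{b}_{m}", cat="Binary") for a, b in pairs for m in machines}
for a, b in pairs:
    for m in machines:
        prob += s[a] >= s[b] + proc[b][m] - bigM * (3 - x[a, m] - x[b, m] - y[a, b, m])
        prob += s[b] >= s[a] + proc[a][m] - bigM * (2 + y[a, b, m] - x[a, m] - x[b, m])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("makespan:", pulp.value(cmax))
for o in ops:
    m = next(mm for mm in machines if pulp.value(x[o, mm]) > 0.5)
    print(f"{o}: machine {m}, start {pulp.value(s[o])}")
```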
Procedia PDF Downloads 189
17197 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability
Authors: Chin-Chia Jane
Abstract:
In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time is under the travel time limitation. This work is pioneering: whereas the existing literature evaluates travel time reliability via a single optimization path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc has a new travel time weight which takes the value 0. Each intermediate node is replaced by two nodes u and v, and an arc directed from u to v. The newly generated nodes u and v are perfect nodes. The new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network which has 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. To make a comparison, we test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than the complete enumeration method. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
Keywords: quality of service, reliability, transportation network, travel time
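The node-splitting transformation described above is easy to prototype: each intermediate node u is replaced by u_in and u_out joined by an internal arc that carries the node's travel-time weight, after which a standard min-cost flow routine gives the least total travel time for a required demand. The sketch below uses networkx on a toy five-node network; the topology, capacities, and time weights are invented, and the reliability decomposition is not shown.

```python
import networkx as nx

# Original data: arcs with capacities, and per-node travel times (source/sink have 0).
arcs = [("s", "a", 8), ("s", "b", 6), ("a", "b", 4), ("a", "t", 5), ("b", "t", 9)]
node_time = {"a": 3, "b": 2}          # travel time per unit of commodity at each node
demand = 10

G = nx.DiGraph()
for u, v, cap in arcs:
    # Route arcs between the "out" side of u and the "in" side of v.
    uu = f"{u}_out" if u in node_time else u
    vv = f"{v}_in" if v in node_time else v
    G.add_edge(uu, vv, capacity=cap, weight=0)   # original arcs carry no time weight

for nd, t in node_time.items():
    G.add_edge(f"{nd}_in", f"{nd}_out", capacity=demand, weight=t)  # split-node arc

G.nodes["s"]["demand"] = -demand      # supply at the source
G.nodes["t"]["demand"] = demand       # requirement at the destination

flow = nx.min_cost_flow(G)
total_time = nx.cost_of_flow(G, flow)
print("minimum total travel time for demand", demand, "=", total_time)
```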
Procedia PDF Downloads 222
17196 A New Prediction Model for Soil Compression Index
Authors: D. Mohammadzadeh S., J. Bolouri Bazaz
Abstract:
This paper presents a new prediction model for the compression index of fine-grained soils using the multi-gene genetic programming (MGGP) technique. The proposed model relates the soil compression index to its liquid limit, plastic limit and void ratio. Several laboratory test results for fine-grained soils were used to develop the model. Various criteria were considered to check the validity of the model. Parametric and sensitivity analyses were performed and discussed. The MGGP method was found to be very effective for predicting the soil compression index. A comparative study was further performed to prove the superiority of the MGGP model over existing soft computing and traditional empirical equations.
Keywords: new prediction model, soil compression index, multi-gene genetic programming, MGGP
Procedia PDF Downloads 377
17195 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software
Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi
Abstract:
Climate change is an increasingly central issue in the world because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy which overlooks the Adriatic Sea, the province of Macerata. The aim is to analyze spatial climate changes, for precipitation and temperature, over the last 3 climatological standard normals (1961-1990; 1971-2000; 1981-2010) through GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subject to quality controls: validation and homogenization. These data were fundamental for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. To assess the best geostatistical technique for interpolation, the results of cross-validation were used. The co-kriging method with altitude as an independent variable produced the best cross-validation results for all time periods, among the methods analysed, with 'root mean square error standardized' close to 1, 'mean standardized error' close to 0, and 'average standard error' and 'root mean square error' with similar values. The maps resulting from the analysis were compared by subtraction between rasters, producing 3 maps of annual variation and three other maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in average annual temperature of about 0.1°C between 1961-1990 and 1971-2000 and 0.6°C between 1961-1990 and 1981-2010. Annual precipitation instead shows an opposite trend, with an average difference from 1961-1990 to 1971-2000 of about 35 mm and from 1961-1990 to 1981-2010 of about 60 mm. Furthermore, the differences between the areas have been highlighted with area graphs and summarized in several tables as descriptive analysis. For temperature between 1961-1990 and 1971-2000, the most areally represented frequency is 0.08°C (77.04 km² out of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The differences for temperature from 1961-1990 to 1981-2010 show a most areally represented frequency of 0.83°C (36.9 km²), with -0.45 as kurtosis and 0.92 as skewness. Therefore, it can be said that the distribution is more peaked for 1961/1990-1971/2000 and smoother, but with more intense growth, for 1961/1990-1981/2010. In contrast, precipitation shows a very similar shape of distribution, although with different intensities, for both variation periods (1961/1990-1971/2000 and 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62) and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this methodology of analysis allows the assessment of small-scale climate change for each month of the year and could be further investigated in relation to regional atmospheric dynamics.
Keywords: climate change, GIS, interpolation, co-kriging
Procedia PDF Downloads 132
17194 BTG-BIBA: A Flexibility-Enhanced Biba Model Using BTG Strategies for Operating System
Authors: Gang Liu, Can Wang, Runnan Zhang, Quan Wang, Huimin Song, Shaomin Ji
Abstract:
The Biba model can protect information integrity but might deny various non-malicious access requests of the subjects, thereby decreasing availability in the system. Therefore, a mechanism that allows exceptional access control is needed. Break the Glass (BTG) strategies refer to an efficient means for extending the access rights of users in exceptional cases. These strategies help to prevent a system from stagnation. An approach is presented in this work for integrating Break the Glass strategies into the Biba model. This research proposes a model, BTG-Biba, which provides both the original Biba model used in normal situations and a mechanism used in emergency situations. The proposed model is context aware, can implement a fine-grained type of access control and primarily solves cross-domain access problems. Finally, the flexibility and availability improvements obtained with the proposed model are illustrated.
Keywords: Biba model, break the glass, context, cross-domain, fine-grained
Procedia PDF Downloads 544
17193 Proposing a Strategic Management Maturity Model for Continuous Innovation
Authors: Ferhat Demir
Abstract:
Even though strategic management is highly critical for all types of organizations, only a few maturity models have been proposed in the business literature for the area of strategic management activities. This paper updates previous studies and presents a new conceptual model for assessing the maturity of strategic management in any organization. The strategic management maturity model (S-3M) is basically composed of 6 maturity levels with 7 dimensions. The biggest contribution of S-3M is to put innovation on the agenda of strategic management. The main objective of this study is to propose a model to align innovation with business strategies. This paper suggests that innovation (breakthrough new products/services and business models) is the only way of creating sustainable growth, and strategy studies cannot ignore this aspect. Maturity models should embrace innovation to respond to a dynamic business environment and rapidly changing customer behaviours.
Keywords: strategic management, innovation, business model, maturity model
Procedia PDF Downloads 196
17192 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Load forecasting has become crucial in recent years and has become popular in the forecasting area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and healthy and reliable grid systems. Effective power forecasting of renewable energy load leads decision makers to minimize the costs of electric utilities and power plants. Forecasting tools are required that can be used to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. In this study, we present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows multiple layers of models to learn representations of data. LSTM algorithms are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as predicting wind and solar energy power. Historical load and weather information represent the most important variables for the inputs within power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies have been carried out with these data via a deep neural network approach, including the LSTM technique, for Turkish electricity markets. 432 different models were created by changing the number of layers, cell count and dropout. The adaptive moment estimation (Adam) algorithm is used for training as a gradient-based optimizer instead of SGD (stochastic gradient descent). Adam performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results among the 432 tested models are 0.66, 0.74, 0.85 and 1.09. The forecasting performance of the proposed LSTM models gives successful results compared to the literature.
Keywords: deep learning, long short term memory, energy, renewable energy load forecasting
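A minimal sketch of an LSTM load-forecasting setup of the kind described: hourly history is windowed into fixed-length input sequences, an LSTM layer with dropout feeds a dense output, and the network is trained with the Adam optimizer on MAE. It uses tf.keras with synthetic hourly data; the window length, layer sizes, and dropout value are arbitrary illustrations of the hyperparameters the authors searched over, not their chosen settings.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window=24):
    """Turn an hourly series into (samples, window, 1) inputs and next-hour targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., None], np.array(y)

# Synthetic hourly load with a daily cycle (placeholder for the 2016-2017 data).
t = np.arange(24 * 365, dtype=float)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + np.random.default_rng(0).normal(0, 2, t.size)
load = (load - load.mean()) / load.std()

X, y = make_windows(load, window=24)
split = int(0.8 * len(X))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(24, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mae")
model.fit(X[:split], y[:split], epochs=5, batch_size=64,
          validation_data=(X[split:], y[split:]), verbose=0)
print("validation MAE:", model.evaluate(X[split:], y[split:], verbose=0))
```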
Procedia PDF Downloads 268
17191 Comparison of Applicability of Time Series Forecasting Models VAR, ARCH and ARMA in Management Science: Study Based on Empirical Analysis of Time Series Techniques
Authors: Muhammad Tariq, Hammad Tahir, Fawwad Mahmood Butt
Abstract:
Purpose: This study attempts to examine the best forecasting methodologies in time series. The time series forecasting models VAR, ARCH and ARMA are considered for the analysis. Methodology: The benchmarks, or parameters, such as adjusted R-squared, F-statistics, Durbin-Watson, and the direction of the roots have been critically and empirically analyzed. The empirical analysis consists of time series data on the Consumer Price Index and closing stock prices. Findings: The results show that the VAR model performed better in comparison to the other models. Both the reliability and significance of the VAR model are highly appreciable. In contrast, the ARCH model showed very poor results for forecasting. However, the results of the ARMA model appeared ambiguous, i.e., the AR roots showed that the model is stationary and the MA roots showed that the model is invertible. Therefore, the forecasting would remain doubtful if made on the basis of the ARMA model. It has been concluded that the VAR model provides the best forecasting results. Practical Implications: This paper provides empirical evidence for the application of time series forecasting models and therefore provides a basis for the application of the best time series forecasting model.
Keywords: forecasting, time series, auto regression, ARCH, ARMA
Procedia PDF Downloads 350
17190 A Study on Thermal and Flow Characteristics by Solar Radiation for Single-Span Greenhouse by Computational Fluid Dynamics Simulation
Authors: Jonghyuk Yoon, Hyoungwoon Song
Abstract:
Recently, there has been increasing interest in smart farming, which represents the application of modern Information and Communication Technologies (ICT) to agriculture, since it provides a methodology to optimize production efficiency by automatically managing the growing conditions of crops. In order to obtain high performance and stability for a smart greenhouse, it is important to identify the effect of various working parameters such as the capacity of the ventilation fan, the vent opening area, etc. In the present study, a 3-dimensional CFD (Computational Fluid Dynamics) simulation for a single-span greenhouse was conducted using the commercial program Ansys CFX 18.0. The numerical simulation for the single-span greenhouse was implemented to determine the internal thermal and flow characteristics. In order to numerically model solar radiation, which spreads over a wide range of wavelengths, a multiband model that discretizes the spectrum into finite wavelength bands based on Wien's law is applied to the simulation. In addition, the absorption coefficient of the vinyl cover, which varies with the wavelength band, is also applied based on the Beer-Lambert law. To validate the numerical method applied herein, the numerical results for the temperature at specific monitoring points were compared with the experimental data. Average error rates of 12.2~14.2% between them were found, and the numerical results for the temperature distribution are in good agreement with the experimental data. The results of the present study can provide useful information for the design of various greenhouses. This work was supported by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture, Forestry and Fisheries (IPET) through the Advanced Production Technology Development Program, funded by the Ministry of Agriculture, Food and Rural Affairs (MAFRA) (315093-03).
Keywords: single-span greenhouse, CFD (computational fluid dynamics), solar radiation, multiband model, absorption coefficient
Procedia PDF Downloads 139
17189 Data Modeling and Calibration of In-Line Pultrusion and Laser Ablation Machine Processes
Authors: David F. Nettleton, Christian Wasiak, Jonas Dorissen, David Gillen, Alexandr Tretyak, Elodie Bugnicourt, Alejandro Rosales
Abstract:
In this work, preliminary results are given for the modeling and calibration of two in-line processes, pultrusion and laser ablation, using machine learning techniques. The end product of the processes is the core of a medical guidewire, manufactured to comply with a user specification of diameter and flexibility. An ensemble approach is followed, which requires training several models. Two state-of-the-art machine learning algorithms are benchmarked: Kernel Recursive Least Squares (KRLS) and Support Vector Regression (SVR). The final objective is to build a precise digital model of the pultrusion and laser ablation processes in order to calibrate the resulting diameter and flexibility of the medical guidewire, which is the end product, while taking into account the friction on the forming die. The result is an ensemble of models whose output is within a strict required tolerance and which covers the required range of diameter and flexibility of the guidewire end product. The modeling and automatic calibration of complex in-line industrial processes is a key aspect of the Industry 4.0 movement for cyber-physical systems.
Keywords: calibration, data modeling, industrial processes, machine learning
Procedia PDF Downloads 303
17188 Multiscale Simulation of Ink Seepage into Fibrous Structures through a Mesoscopic Variational Model
Authors: Athmane Bakhta, Sebastien Leclaire, David Vidal, Francois Bertrand, Mohamed Cheriet
Abstract:
This work presents a new three-dimensional variational model proposed for the simulation of ink seepage into paper sheets at the fiber level. The model, inspired by the Ising model, takes into account a finite volume of ink and describes the system state through gravity, cohesion, and adhesion force interactions. At the mesoscopic scale, the paper substrate is modeled using a discretized fiber structure generated using a numerical deposition procedure. A modified Monte Carlo method is introduced for the simulation of the ink dynamics. In addition, a multiphase lattice Boltzmann method is suggested to fine-tune the mesoscopic variational model parameters, and it is shown that the ink seepage behaviors predicted by the proposed model can resemble those predicted by a method relying on first principles.
Keywords: fibrous media, lattice Boltzmann, modelling and simulation, Monte Carlo, variational model
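To give a flavor of the Monte Carlo dynamics mentioned above, the sketch below runs a plain Metropolis update on a small 2D Ising-type lattice; the paper's actual model is three-dimensional, conserves a finite ink volume, and includes gravity, cohesion, and adhesion terms, so this is only a generic illustration of the sampling step, with an invented coupling and temperature.

```python
import numpy as np

rng = np.random.default_rng(0)
L, J, T, steps = 32, 1.0, 2.0, 200_000   # lattice size, coupling, temperature, update attempts
spins = rng.choice([-1, 1], size=(L, L))

def local_energy(s, i, j):
    """Interaction energy of site (i, j) with its four periodic neighbours."""
    nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
    return -J * s[i, j] * nb

for _ in range(steps):
    i, j = rng.integers(0, L, size=2)
    dE = -2.0 * local_energy(spins, i, j)      # energy change if the site is flipped
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] *= -1                      # Metropolis acceptance rule

print("mean site value (order parameter):", spins.mean())
```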
Procedia PDF Downloads 149