Search results for: stochastic frontier analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 448

118 Deep Reinforcement Learning Approach for Optimal Control of Industrial Smart Grids

Authors: Niklas Panten, Eberhard Abele

Abstract:

This paper presents a novel approach for real-time and near-optimal control of industrial smart grids by deep reinforcement learning (DRL). To achieve highly energy-efficient factory systems, the energetic linkage of machines, technical building equipment and the building itself is desirable. However, the increased complexity of the interacting sub-systems, multiple time-variant target values and stochastic influences from the production environment, weather and energy markets make it difficult to efficiently control energy production, storage and consumption in hybrid industrial smart grids. The studied deep reinforcement learning approach allows the solution space to be explored for control policies that minimize a cost function. The deep neural network of the DRL agent is based on a multilayer perceptron (MLP), Long Short-Term Memory (LSTM) and convolutional layers. The agent is trained within multiple Modelica-based factory simulation environments by the Advantage Actor-Critic (A2C) algorithm. The DRL controller is evaluated by means of the simulation and then compared to a conventional, rule-based approach. Finally, the results indicate that the DRL approach is able to improve control performance and significantly reduce the energy and operating costs of industrial smart grids.
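
As a rough illustration of the training signal behind the Advantage Actor-Critic algorithm mentioned in the abstract, the sketch below (PyTorch) shows how an advantage-weighted policy loss and a value loss are combined. The `ActorCritic` layout and `a2c_loss` helper are hypothetical stand-ins; the actual agent combines MLP, LSTM and convolutional layers and a Modelica-coupled energy-cost reward that are not reproduced here.

```python
import torch
import torch.nn as nn

# Hypothetical actor-critic network: a plain MLP stands in for illustration only.
class ActorCritic(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)   # action logits
        self.value = nn.Linear(hidden, 1)             # state-value estimate

    def forward(self, obs):
        h = self.body(obs)
        return self.policy(h), self.value(h)

def a2c_loss(logits, values, actions, returns, value_coef=0.5, entropy_coef=0.01):
    """One A2C update: advantage = discounted return - value estimate."""
    dist = torch.distributions.Categorical(logits=logits)
    advantage = returns - values.squeeze(-1)
    policy_loss = -(dist.log_prob(actions) * advantage.detach()).mean()
    value_loss = advantage.pow(2).mean()
    entropy_bonus = dist.entropy().mean()
    return policy_loss + value_coef * value_loss - entropy_coef * entropy_bonus
```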

Keywords: industrial smart grids, energy efficiency, deep reinforcement learning, optimal control

Procedia PDF Downloads 168
117 A Hybrid Algorithm Based on Greedy Randomized Adaptive Search Procedure and Chemical Reaction Optimization for the Vehicle Routing Problem with Hard Time Windows

Authors: Imen Boudali, Marwa Ragmoun

Abstract:

The Vehicle Routing Problem with Hard Time Windows (VRPHTW) is a basic distribution management problem that models many real-world problems. The objective of the problem is to serve a set of customers with known demands on minimum-cost vehicle routes while satisfying vehicle capacity and hard time windows for the customers. In this paper, we propose to deal with this optimization problem by using a new hybrid stochastic algorithm based on two metaheuristics: Chemical Reaction Optimization (CRO) and the Greedy Randomized Adaptive Search Procedure (GRASP). The first method is inspired by the natural process of chemical reactions enabling the transformation of unstable substances with excessive energy into stable ones. During this process, the molecules interact with each other through a series of elementary reactions to reach the minimum energy for their existence. This property is embedded in CRO to solve the VRPHTW. In order to enhance the population diversity throughout the search process, we integrated GRASP into our method. Simulation results on Solomon's benchmark instances show the very satisfactory performance of the proposed approach.

Keywords: benchmark problems, combinatorial optimization, vehicle routing problem with hard time windows, metaheuristics, hybridization, GRASP, CRO

Procedia PDF Downloads 380
116 Oil Demand Forecasting in China: A Structural Time Series Analysis

Authors: Tehreem Fatima, Enjun Xia

Abstract:

The research investigates the relationship between oil consumption (total and transport) and GDP, oil price, and oil reserves in order to forecast future oil demand in China. Annual time series data over the period 1980 to 2015 are used, and for this purpose, an oil demand function is estimated by applying a structural time series model (STSM). The technique also uncovers the underlying energy demand trend (UEDT) for Chinese oil demand; GDP, oil reserves, oil price and the UEDT are considered important drivers of Chinese oil demand. The long-run elasticities of total oil consumption with respect to GDP and price are 0.5 and -0.04 respectively, while the estimates for GDP, oil reserves, and price are 0.17, 0.23 and -0.05 respectively. Moreover, the estimated long-run elasticities of transport oil consumption with respect to GDP and price are 0.5 and -0.00 respectively, while the long-run estimates for GDP, oil reserves, and price are 0.28, 37.76 and -37.8 respectively. For both models, the estimated underlying energy demand trend (UEDT) is nonlinear, stochastic and increasing. Based on the estimated equations, China's total oil demand is predicted to reach 9.9 thousand barrels per day by 2025, compared to 9.4 thousand barrels per day in 2015, while transport oil demand is predicted to reach 9.0 thousand barrels per day by 2020, compared to 8.8 thousand barrels per day in 2015.
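
For readers unfamiliar with structural time series models, the minimal sketch below shows how a stochastic trend (the UEDT) and exogenous drivers can be estimated jointly with statsmodels. The file name, column names and the local-linear-trend specification are assumptions for illustration, not the authors' exact STSM.

```python
import pandas as pd
import statsmodels.api as sm

# Illustrative inputs: 'oil.csv' and its column names are hypothetical.
# Variables are assumed log-transformed so coefficients read as elasticities.
data = pd.read_csv("oil.csv", index_col="year")          # annual data, 1980-2015
y = data["log_total_oil_consumption"]
X = data[["log_gdp", "log_oil_price", "log_oil_reserve"]]

# Structural time series model: a stochastic level + slope plays the role of
# the underlying energy demand trend (UEDT); exogenous regressors give the
# long-run coefficients.
model = sm.tsa.UnobservedComponents(y, level="local linear trend", exog=X)
result = model.fit()
print(result.summary())
print(result.level["smoothed"])   # smoothed underlying trend component
```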

Keywords: China, forecasting, oil, structural time series model (STSM), underlying energy demand trend (UEDT)

Procedia PDF Downloads 255
115 Effect of Credit Use on Technical Efficiency of Cassava Farmers in Ondo State, Nigeria

Authors: Adewale Oladapo, Carolyn A. Afolami

Abstract:

Agricultural production should be a major financial contributor to the Nigerian economy; however, the petroleum sector has taken over the importance once attached to this sector. The situation is likely to worsen unless necessary attention is given to adequate credit supply among food crop farmers. This research analyses the effect of credit use on the technical efficiency of cassava farmers in Ondo State, Nigeria. Primary data were collected from two hundred randomly selected cassava farmers through a multistage sampling procedure in the study area. Data were analysed using descriptive statistics and stochastic frontier analysis (SFA). Findings revealed that 95.0% of the farmers were male, while 56.0% had no formal education and were married. The SFA showed that cassava farmers' efficiency increased with farm size, herbicide and planting material (significant at the 5%, 10% and 1% levels respectively) but decreased with fertilizer application (significant at the 1% level), while farmers' age, education, household size, experience and access to credit increased technical inefficiency (significant at the 10% level). The study concluded that cassava farmers are technically inefficient in the use of farm resources and recommended that adequate and workable agricultural policy measures ensuring the availability and efficient distribution of fertilizer be put in place to increase efficiency. Furthermore, the government should encourage youth participation in cassava production and improve farmers' access to credit to increase technical efficiency.
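
Since stochastic frontier analysis is the search term behind this listing, a minimal sketch of the normal/half-normal production frontier log-likelihood (Aigner-Lovell-Schmidt form) that such a study typically maximises may be useful. Variable names are illustrative and the paper's exact specification is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def sfa_negloglik(params, y, X):
    """Normal/half-normal stochastic production frontier:
    y = X beta + v - u, with v ~ N(0, sigma_v^2) noise and
    u ~ |N(0, sigma_u^2)| one-sided inefficiency."""
    k = X.shape[1]
    beta = params[:k]
    sigma_v, sigma_u = np.exp(params[k]), np.exp(params[k + 1])  # keep positive
    eps = y - X @ beta
    sigma = np.sqrt(sigma_v**2 + sigma_u**2)
    lam = sigma_u / sigma_v
    ll = (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

# Hypothetical usage: X holds log inputs (farm size, herbicide, planting
# material, ...) plus a constant, y is log output.
# res = minimize(sfa_negloglik, x0=np.zeros(X.shape[1] + 2), args=(y, X),
#                method="BFGS")
# Farm-level technical efficiencies would then follow from the Jondrow et al.
# decomposition of the estimated residuals.
```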

Keywords: agriculture, access to credit, cassava farmers, technical efficiency

Procedia PDF Downloads 154
114 Timing and Probability of Presurgical Teledermatology: Survival Analysis

Authors: Felipa de Mello-Sampayo

Abstract:

The aim of this study is to analyse, from the patient's perspective, the timing and probability of using teledermatology, comparing it with a conventional referral system. The dynamic stochastic model's main value added consists of its concrete application to patients waiting for dermatological surgical intervention. Patients with low health-level uncertainty must use teledermatology treatment as soon as possible, which is precisely when teledermatology is least valuable. The results of the model were then tested empirically with the teledermatology network covering the area served by the Hospital Garcia da Horta, Portugal, which links the primary care centers of 24 health districts with the hospital's dermatology department via the corporate intranet of the Portuguese healthcare system. Health-level volatility can be understood as the hazard of developing skin cancer, and the trend of the health level as the bias of developing skin lesions. The results of the survival analysis suggest that the theoretical model can explain the use of teledermatology. It depends negatively on the volatility of patients' health and positively on the trend of health, i.e., the lower the risk of developing skin cancer and the younger the patients, the more presurgical teledermatology one expects to occur. Presurgical teledermatology also depends positively on out-of-pocket expenses and negatively on the opportunity costs of teledermatology, i.e., the lower the benefit missed by using teledermatology, the more presurgical teledermatology one expects to occur.
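
A hedged sketch of how such a survival analysis could be set up with the lifelines library follows. The file and column names are hypothetical stand-ins for the study's covariates (health volatility, health trend, out-of-pocket expenses, opportunity cost), not the paper's actual data layout.

```python
import pandas as pd
from lifelines import CoxPHFitter

# df is assumed to hold one row per referred patient, with hypothetical columns:
#   wait_weeks             time from referral to teledermatology use (or censoring)
#   used_teledermatology   event indicator (1 = used, 0 = censored)
#   health_volatility, health_trend, age, out_of_pocket, opportunity_cost
df = pd.read_csv("referrals.csv")

cph = CoxPHFitter()
cph.fit(df, duration_col="wait_weeks", event_col="used_teledermatology")
cph.print_summary()
# The paper's findings would correspond to a negative coefficient on
# health_volatility and positive coefficients on health_trend and out_of_pocket.
```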

Keywords: teledermatology, wait time, uncertainty, opportunity cost, survival analysis

Procedia PDF Downloads 98
113 A Convergent Interacting Particle Method for Computing KPP Front Speeds in Random Flows

Authors: Tan Zhang, Zhongjian Wang, Jack Xin, Zhiwen Zhang

Abstract:

We aim to efficiently compute the spreading speeds of reaction-diffusion-advection (RDA) fronts in divergence-free random flows under the Kolmogorov-Petrovsky-Piskunov (KPP) nonlinearity. We study a stochastic interacting particle method (IPM) for the reduced principal eigenvalue (Lyapunov exponent) problem of an associated linear advection-diffusion operator with spatially random coefficients. The Fourier representation of the random advection field and the Feynman-Kac (FK) formula of the principal eigenvalue (Lyapunov exponent) form the foundation of our method implemented as a genetic evolution algorithm. The particles undergo advection-diffusion and mutation/selection through a fitness function originated in the FK semigroup. We analyze the convergence of the algorithm based on operator splitting and present numerical results on representative flows such as 2D cellular flow and 3D Arnold-Beltrami-Childress (ABC) flow under random perturbations. The 2D examples serve as a consistency check with semi-Lagrangian computation. The 3D results demonstrate that IPM, being mesh-free and self-adaptive, is simple to implement and efficient for computing front spreading speeds in the advection-dominated regime for high-dimensional random flows on unbounded domains where no truncation is needed.

Keywords: KPP front speeds, random flows, Feynman-Kac semigroups, interacting particle method, convergence analysis

Procedia PDF Downloads 16
112 Transient and Persistent Efficiency Estimation for Electric Grid Utilities Based on Meta-Frontier: Comparative Analysis of China and Japan

Authors: Bai-Chen Xie, Biao Li

Abstract:

With the deepening of international exchanges and investment, the international comparison of power grid firms has become a focus of regulatory authorities. Ignoring the differences in the economic environment, resource endowment, technology, and other aspects of different countries or regions may lead to efficiency bias. Based on the meta-frontier model and using data from 2006 to 2020, this paper divides China and Japan into two groups. While preserving the differences between the two countries, it analyzes and compares the efficiency of their transmission and distribution industries. Combined with the four-component stochastic frontier model, efficiency is divided into transient and persistent efficiency. We found that there are obvious differences between the transmission and distribution sectors in China and Japan. On the one hand, the inefficiency of the two countries is mostly caused by long-term and structural problems, so the key to improving efficiency in both countries is to focus more on solving long-term and structural problems. On the other hand, the long-term and structural problems that cause inefficiency in the two countries are not the same. Quality factors have different effects on the efficiency of the two countries, and this different effect is captured by the common frontier model but is offset in the overall model. Based on these findings, this paper proposes some targeted policy recommendations.

Keywords: transmission and distribution industries, transient efficiency, persistent efficiency, meta-frontier, international comparison

Procedia PDF Downloads 70
111 Mean Field Model Interaction for Computer and Communication Systems: Modeling and Analysis of Wireless Sensor Networks

Authors: Irina A. Gudkova, Yousra Demigha

Abstract:

Scientific research is moving more and more towards the study of complex systems in several areas of economics, biology, physics, and computer science. In this paper, we work on complex systems in communication networks, namely Wireless Sensor Networks (WSN), which are considered stochastic systems composed of interacting entities. Current advancements in sensing, computing and communication systems offer fertile ground for research along several tracks. A detailed presentation is given of WSNs, their use, their modeling, the different problems that can occur in their application, and some solutions. The main goal of this work is to reintroduce the mean field method, since it is a powerful technique for solving this type of model, especially for systems that evolve according to a Continuous Time Markov Chain (CTMC). We focus on the CTMC modeling and obtain a large system of interacting Continuous Time Markov Chains with population entities. The main idea is to work on one entity and replace the others with an average or effective interaction. In this context, to make the solution easier, we consider a wireless sensor network as a multi-body problem and reduce it to a one-body problem. The method is applied to a WSN modeled as a Markovian queue, and the results of the technique are shown.

Keywords: continuous-time Markov chain, hidden Markov chain, mean field method, wireless sensor networks

Procedia PDF Downloads 137
110 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction

Authors: Bastien Batardière, Joon Kwon

Abstract:

For finite-sum optimization, variance-reduced (VR) gradient methods compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance and have contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG, and benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
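
The following sketch (plain NumPy, illustrative function names) outlines the kind of combination the abstract describes: a SAGA variance-reduced gradient estimator plugged into a coordinate-wise AdaGrad step. It is a minimal stand-in, not the authors' AdaLVR implementation.

```python
import numpy as np

def adagrad_saga(grad_i, x0, n, T, lr=0.1, eps=1e-8, seed=None):
    """Sketch of AdaGrad combined with the SAGA variance-reduced estimator.
    grad_i(x, i) returns the gradient of the i-th component function at x."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    table = np.stack([grad_i(x, i) for i in range(n)])   # stored past gradients
    avg = table.mean(axis=0)
    accum = np.zeros_like(x)                              # AdaGrad accumulator
    for _ in range(T):
        i = rng.integers(n)
        g_new = grad_i(x, i)
        g = g_new - table[i] + avg                        # SAGA estimator (unbiased)
        avg += (g_new - table[i]) / n                     # update running mean
        table[i] = g_new                                  # refresh stored gradient
        accum += g * g
        x -= lr * g / (np.sqrt(accum) + eps)              # coordinate-wise AdaGrad step
    return x
```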

Keywords: convex optimization, variance reduction, adaptive algorithms, loopless

Procedia PDF Downloads 28
109 A Bi-Objective Model to Optimize the Total Time and Idle Probability for Facility Location Problem Behaving as M/M/1/K Queues

Authors: Amirhossein Chambari

Abstract:

This article proposes a bi-objective model for the facility location problem subject to congestion (overcrowding), motivated by applications such as locating servers for internet mirror sites, communication networks, and other single-server systems. The model considers situations in which immobile (or fixed) service facilities are congested (queued) by stochastic demand and behave as M/M/1/K queues. We consider two simultaneous perspectives for this problem: (1) customers, who desire to limit the time spent accessing and waiting for service, and (2) the service provider, who desires to limit the average facility idle time. A bi-objective model is set up for the facility location problem with two objective functions: (1) minimizing the sum of expected total traveling and waiting time (customers) and (2) minimizing the average facility idle-time percentage (service provider). The proposed model belongs to the class of mixed-integer nonlinear programming models and to the class of NP-hard problems. To solve the model, controlled elitist non-dominated sorting genetic algorithms (controlled NSGA-II) and controlled elitist non-dominated ranking genetic algorithms (NRGA-I) are proposed. Furthermore, the two proposed metaheuristic algorithms are evaluated using standard multi-objective metrics. Finally, the results are analyzed and some conclusions are given.
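
The closed-form M/M/1/K quantities underlying the two objectives can be computed directly; a small helper with illustrative parameter values is sketched below (the location and assignment decisions of the actual model are not reproduced).

```python
import numpy as np

def mm1k_metrics(lam, mu, K):
    """Steady-state metrics of an M/M/1/K queue, i.e. the building blocks of the
    two objectives: expected time in system and facility idle probability."""
    rho = lam / mu
    n = np.arange(K + 1)
    if np.isclose(rho, 1.0):
        p = np.full(K + 1, 1.0 / (K + 1))
    else:
        p = (1 - rho) * rho**n / (1 - rho**(K + 1))
    idle_prob = p[0]                      # service-provider objective component
    lam_eff = lam * (1 - p[K])            # arrivals that are not blocked
    L = (n * p).sum()                     # mean number in system
    W = L / lam_eff                       # mean time in system (customer objective)
    return {"idle_prob": idle_prob, "mean_time_in_system": W, "blocking_prob": p[K]}

# Example: arrival rate 4/hour, service rate 5/hour, capacity K = 10
print(mm1k_metrics(4.0, 5.0, 10))
```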

Keywords: bi-objective, facility location, queueing, controlled NSGA-II, NRGA-I

Procedia PDF Downloads 552
108 Determination of Tide Height Using Global Navigation Satellite Systems (GNSS)

Authors: Faisal Alsaaq

Abstract:

Hydrographic surveys have traditionally relied on the availability of tide information for the reduction of sounding observations to a common datum. In most cases, tide information is obtained from tide gauge observations and/or tide predictions over space and time using local, regional or global tide models. While the latter often provides a rather crude approximation, the former relies on tide gauge stations that are spatially restricted and often have a sparse and limited distribution. A more recent method that is increasingly being used is Global Navigation Satellite System (GNSS) positioning, which can be utilised to monitor height variations of a vessel or buoy, thus providing information on sea level variations during the time of a hydrographic survey. However, GNSS heights obtained under the dynamic environment of a survey vessel are affected by “non-tidal” processes such as wave activity and the attitude of the vessel (roll, pitch, heave and dynamic draft). This research seeks to examine techniques that separate the tide signal from other non-tidal signals that may be contained in GNSS heights. This requires an investigation of the processes involved and their temporal, spectral and stochastic properties in order to apply suitable techniques for recovering tide information. In addition, different post-mission and near real-time GNSS positioning techniques will be investigated, with a focus on height estimation at sea. Furthermore, the study will investigate the possibility of transferring chart datums at the locations of tide gauges.

Keywords: hydrography, GNSS, datum, tide gauge

Procedia PDF Downloads 243
107 Optimization of Air Pollution Control Model for Mining

Authors: Zunaira Asif, Zhi Chen

Abstract:

Sustainable air quality management is recognized as one of the most serious environmental concerns in mining regions. Mining operations emit various types of pollutants which have significant impacts on the environment. This study presents a stochastic control strategy, developing an air pollution control model to achieve a cost-effective solution. The optimization method is formulated to predict the cost of treatment using linear programming with an objective function and multiple constraints. The constraints mainly focus on two factors: metal production should not exceed the available resources, and air quality should meet the standard criteria for each pollutant. The applicability of this model is explored through a case study of an open-pit metal mine in Utah, USA. The method simultaneously uses meteorological data as a dispersion transfer function to reflect the practical local conditions. The probabilistic analysis and the uncertainties in the meteorological conditions are handled by Monte Carlo simulation. Reasonable results have been obtained for selecting the optimized treatment technology for PM2.5, PM10, NOx, and SO2. Additional comparison analysis shows that the baghouse is the least-cost option compared to the electrostatic precipitator and wet scrubbers for particulate matter, whereas non-selective catalytic reduction and dry flue-gas desulfurization are suitable for NOx and SO2 reduction, respectively. Thus, this model can aid planners in reducing these pollutants at a marginal cost by suggesting pollution control devices while accounting for dynamic meteorological conditions and mining activities.
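
A toy version of the linear-programming core might look like the sketch below (scipy.optimize.linprog). The costs, removal efficiencies and required reductions are made up for illustration; the paper's actual constraint set, production limits and meteorological dispersion terms are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical cost-minimisation sketch: decide how many tonnes of each pollutant
# each control device treats.
# x = [pm_baghouse, pm_esp, pm_scrubber, nox_nscr, so2_dfgd]   (tonnes treated)
cost = np.array([30., 45., 60., 80., 70.])          # $ per tonne treated

# Air-quality constraints: removal_efficiency * tonnes treated, summed per
# pollutant, must reach the required reduction (written as <= with a sign flip).
A_ub = np.array([
    [-0.99, -0.97, -0.90,  0.00,  0.00],   # particulate matter
    [ 0.00,  0.00,  0.00, -0.85,  0.00],   # NOx
    [ 0.00,  0.00,  0.00,  0.00, -0.95],   # SO2
])
b_ub = np.array([-120., -40., -60.])       # required reductions (tonnes)

bounds = [(0, 200)] * 5                    # illustrative device capacity limits
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)   # tonnes routed to each device, minimum total cost
```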

Keywords: air pollution, linear programming, mining, optimization, treatment technologies

Procedia PDF Downloads 171
106 Designing Ecologically and Economically Optimal Electric Vehicle Charging Stations

Authors: Y. Ghiassi-Farrokhfal

Abstract:

The number of electric vehicles (EVs) is increasing worldwide. Replacing gas-fueled cars with EVs reduces carbon emissions. However, the extensive energy consumption of EVs stresses the energy systems, requiring non-green sources of energy (such as gas turbines) to compensate for the new energy demand caused by EVs. To make EVs an even greener solution for future energy systems, new EV charging stations are equipped with solar PV panels and batteries. This helps serve the energy demand of EVs through the green energy of solar panels. To ensure energy availability, solar panels are combined with batteries. The energy surplus at any point is stored in batteries and is used when there is not enough solar energy to serve the demand. While EV charging stations equipped with solar panels and batteries are green and ecologically optimal, they might not be financially viable solutions due to battery prices. To make the system viable, we should size the battery economically and operate the system optimally. This is, in general, a challenging problem because of the stochastic nature of the EV arrivals at the charging station, the available solar energy, and the battery operating system. In this work, we provide a mathematical model for this problem and we compute the return on investment (ROI) of such a system, which is designed to be ecologically and financially optimal. We also quantify the minimum required investment in terms of battery and solar panels, along with the operating strategy, to ensure that a charging station has enough energy to serve its EV demand at any time.

Keywords: solar energy, battery storage, electric vehicle, charging stations

Procedia PDF Downloads 194
105 Simulating Elevated Rapid Transit System for Performance Analysis

Authors: Ran Etgar, Yuval Cohen, Erel Avineri

Abstract:

One of the major challenges of transportation in medium-sized inner cities (such as Tel-Aviv) is the last-mile solution. Personal rapid transit (PRT) seems like an applicable candidate for this, as it combines the benefits of personal (car) travel with the operational benefits of transit. However, the investment required for a large-area PRT grid is significant, and there is a need to economically justify such an investment by correctly evaluating the grid capacity. The main elements of PRT are small automated vehicles (sometimes referred to as podcars) operating on a network of specially built guideways. The research looks at a specific concept of an elevated PRT system. A literature review has revealed the drawbacks of existing PRT modelling and simulation approaches, mainly the lack of consideration of technical and operational features of the system (such as headways, acceleration, safety issues); the detailed design of infrastructure (guideways, stations, and docks); the stochastic and seasonal characteristics of demand; and safety regulations, all of which have a strong effect on system performance. A highly detailed model of the system, developed in this research, applies discrete event simulation combined with an agent-based approach to represent the system elements and the podcars' movement logic. Applying a case study approach, the simulation model is used to study the capacity of the system, the expected throughput of the system, the utilization, and the level of service (journey time, waiting time, etc.).

Keywords: capacity, productivity measurement, PRT, simulation, transportation

Procedia PDF Downloads 127
104 Optimized Real Ground Motion Scaling for Vulnerability Assessment of Building Considering the Spectral Uncertainty and Shape

Authors: Chen Bo, Wen Zengping

Abstract:

Based on the results of previous studies, we focus on real ground motion selection and scaling methods for structural performance-based seismic evaluation using nonlinear dynamic analysis. The input earthquake ground motions should be determined appropriately to make them compatible with the site-specific hazard level considered. Thus, an optimized selection and scaling method is established, which uses not only the Monte Carlo simulation method to create a stochastic simulation spectrum considering the multivariate lognormal distribution of the target spectrum, but also a spectral shape parameter. Its application in structural fragility analysis is demonstrated through case studies. Compared to the previous scheme with no consideration of the uncertainty of the target spectrum, the method shown here can make sure that the selected records are in good agreement with the median value, standard deviation and spectral correlation of the target spectrum, and can greatly reveal the uncertainty feature of the site-specific hazard level. Meanwhile, it can help improve computational efficiency and matching accuracy. Given the important influence of the target spectrum's uncertainty on structural seismic fragility analysis, this work can provide a reasonable and reliable basis for structural seismic evaluation under a scenario earthquake environment.
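
A minimal sketch of the Monte Carlo step the abstract refers to, sampling response spectra from a multivariate lognormal distribution, is given below. The function and argument names are illustrative, and the paper's target spectrum, correlation model and spectral shape parameter are not reproduced.

```python
import numpy as np

def simulate_target_spectra(median_sa, log_std, corr, n_sims=1000, seed=None):
    """Monte Carlo simulation of response spectra assuming spectral accelerations
    are jointly lognormal across periods.
    median_sa : median spectral acceleration at each period
    log_std   : logarithmic standard deviation at each period
    corr      : inter-period correlation matrix
    """
    rng = np.random.default_rng(seed)
    mean_ln = np.log(median_sa)
    cov = np.outer(log_std, log_std) * corr       # covariance of log spectral values
    ln_samples = rng.multivariate_normal(mean_ln, cov, size=n_sims)
    return np.exp(ln_samples)                     # each row is one simulated spectrum

# Records would then be selected and scaled so their spectra match these
# simulations in median, dispersion and spectral shape.
```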

Keywords: ground motion selection, scaling method, seismic fragility analysis, spectral shape

Procedia PDF Downloads 267
103 Identification of Factors Affecting Technical Efficiency Sugar Cane Farming in East Java

Authors: Noor Rizkiyah, Djoko Koestiono, Budi Setiawan, Nuhfil Hanani

Abstract:

This research aims to identify the factors that affect sugar cane production, the level of technical efficiency of ratooning sugar cane farming, and the factors that affect technical inefficiency. The research was carried out in Malang, East Java, with proportionally stratified non-random sampling, which yielded 172 sugar cane farming households classified by ratooning level, i.e., ratooning I (3-4 ratoon cycles), ratooning II (5-10 ratoon cycles) and ratooning III (more than 10 ratoon cycles). The method used is the stochastic production frontier approach with maximum likelihood estimation (MLE). The analysis shows that the factors affecting ratooning sugar cane production are land area, use of ZA and Petroganik fertilizer, use of fertilizer and replanting seed, and labor. The average technical efficiency of ratooning farmers is 0.78, which is categorized as not yet technically efficient. The factors that influence technical inefficiency are age, number of family dependents, and ratooning frequency. Although not yet technically efficient, sugar cane farmers continue to cultivate by ratooning; however, repeated ratooning will result in a decrease in sugar cane production. The feasibility analysis gives an R/C ratio of 1.15 for ratooning sugar cane farming, so it is worth pursuing. Thus, improved technology combined with better use of inputs is needed so that technical efficiency can be achieved and the farming becomes more worthwhile to organize.

Keywords: technical efficiency, production, sugarcane, frontier

Procedia PDF Downloads 140
102 Supplier Risk Management: A Multivariate Statistical Modelling and Portfolio Optimization Based Approach for Supplier Delivery Performance Development

Authors: Jiahui Yang, John Quigley, Lesley Walls

Abstract:

In this paper, the authors develop a stochastic model regarding the investment in supplier delivery performance development from a buyer’s perspective. The authors propose a multivariate model through a Multinomial-Dirichlet distribution within an Empirical Bayesian inference framework, representing both the epistemic and aleatory uncertainties in deliveries. A closed form solution is obtained and the lower and upper bound for both optimal investment level and expected profit under uncertainty are derived. The theoretical properties provide decision makers with useful insights regarding supplier delivery performance improvement problems where multiple delivery statuses are involved. The authors also extend the model from a single supplier investment into a supplier portfolio, using a Lagrangian method to obtain a theoretical expression for an optimal investment level and overall expected profit. The model enables a buyer to know how the marginal expected profit/investment level of each supplier changes with respect to the budget and which supplier should be invested in when additional budget is available. An application of this model is illustrated in a simulation study. Overall, the main contribution of this study is to provide an optimal investment decision making framework for supplier development, taking into account multiple delivery statuses as well as multiple projects.

Keywords: decision making, empirical bayesian, portfolio optimization, supplier development, supply chain management

Procedia PDF Downloads 268
101 Generating Spherical Surface of Wear Drain in Cutting Metal by Finite Element Method Analysis

Authors: D. Kabeya Nahum, L. Y. Kabeya Mukeba

Abstract:

In this work, we address the design of surface defects on the support of an anchor rod ball joint. The future adhesion contact arising in manufacturing machining is treated by the numerical analysis of a short, simple solution of the coupled thermo-mechanical problem in process engineering. The geometrical evaluation and the quasi-static and dynamic states are discussed in terms of kinematic dimensional tolerances on the surfaces of the part. Geometric modeling using the finite element method (FEM) on the rough part of such a phase provides an opportunity to resolve the nonlinear behavior observed in empirical data and to improve the discrete functional surfaces. The open question here is how to obtain the spherical geometry of the wear drain with a rolling operation. The formulation with (1 ± 0.01) mm thickness near the wear-drain semi-finishing tool, used to study different angles, does not help the professional factor in designing metal cutting with respect to the vibration, friction and solid-solid interface of part and tool during this complex physical process, with multiple parameters not defined in Sobolev spaces. The stochastic approach to cracking, wear and fretting due to the cutting forces facing small-thickness boundary layers of the workpiece and the tool in the machining position is predicted to be close to the ‘Yakam Matrix’.

Keywords: FEM, geometry, part, simulation, spherical surface engineering, tool, workpiece

Procedia PDF Downloads 250
100 Using Cyclic Structure to Improve Inference on Network Community Structure

Authors: Behnaz Moradijamei, Michael Higgins

Abstract:

Identifying community structure is a critical task in analyzing social media data sets often modeled by networks. Statistical models such as the stochastic block model have proven to explain the structure of communities in real-world network data. In this work, we develop a goodness-of-fit test to examine the existence of community structure by using a distinguishing property of networks: cyclic structures are more prevalent within communities than across them. To better understand how communities are shaped by the cyclic structure of the network rather than just the number of edges, we introduce a novel method for deciding on the existence of communities. We utilize these structures by incorporating the renewal non-backtracking random walk (RNBRW) into the existing goodness-of-fit test. RNBRW is an important variant of the random walk in which the walk is prohibited from returning to a node in exactly two steps and terminates and restarts once it completes a cycle. We investigate the use of RNBRW to improve the performance of existing goodness-of-fit tests for community detection algorithms based on the spectral properties of the adjacency matrix. Our proposed test of community structure is based on the probability distribution of eigenvalues of the normalized retracing probability matrix derived from RNBRW. We attempt to make the best use of asymptotic results on such a distribution when there is no community structure, i.e., the asymptotic distribution under the null hypothesis. Moreover, we provide a theoretical foundation for our statistic by obtaining the true mean and a tight lower bound for the variance of RNBRW edge weights.
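
A hedged sketch of the renewal non-backtracking random walk used to weight cycle-closing edges follows (NetworkX, illustrative function name). The normalisation into the retracing probability matrix and the spectral goodness-of-fit test itself are omitted, and details may differ from the authors' formulation.

```python
import random
import networkx as nx

def rnbrw_edge_weights(G, n_walks=10000, seed=0):
    """Renewal non-backtracking random walks: each walk never steps straight back
    to the node it just left, and restarts once it closes a cycle; the edge that
    closes the cycle receives a unit of weight."""
    rng = random.Random(seed)
    weights = {frozenset(e): 0 for e in G.edges()}
    nodes = list(G.nodes())
    for _ in range(n_walks):
        prev, cur = None, rng.choice(nodes)
        visited = {cur}
        while True:
            nbrs = [v for v in G[cur] if v != prev]    # non-backtracking step
            if not nbrs:
                break                                   # dead end: restart walk
            nxt = rng.choice(nbrs)
            if nxt in visited:                          # cycle completed
                weights[frozenset((cur, nxt))] += 1     # reward the retracing edge
                break
            visited.add(nxt)
            prev, cur = cur, nxt
    return weights

# Example: G = nx.karate_club_graph(); print(rnbrw_edge_weights(G))
```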

Keywords: hypothesis testing, RNBRW, network inference, community structure

Procedia PDF Downloads 119
99 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing

Authors: S. Bouhouche, R. Drai, J. Bast

Abstract:

This paper is concerned with a method for the uncertainty evaluation of steel sample content using the X-Ray Fluorescence method. The considered method of analysis is a comparative technique based on X-Ray Fluorescence; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov Chain Monte Carlo (MCMC) for the uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to the model identification of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states are reduced to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), according to the initial state and the target distribution, using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for steel Mn (wt%), Cr (wt%), Ni (wt%) and Mo (wt%) content respectively. A comparative study between the conventional procedure and the proposed method is given. This kind of approach is applied for constructing an accurate computing procedure for uncertainty measurement.
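
The two building blocks can be sketched generically as follows (plain NumPy): a linear Kalman predict/update step for the calibration-model parameters and a random-walk Metropolis sampler standing in for the MCMC stage. This is a schematic sketch, not the authors' exact formulation.

```python
import numpy as np

def kalman_update(x, P, z, H, R, Q):
    """One predict/update step of a linear Kalman filter where the state x holds
    the calibration-model parameters (random-walk parameter evolution assumed)."""
    P = P + Q                                   # predict (parameters drift slowly)
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ (z - H @ x)                     # update with new measurement z
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def metropolis(logpost, x0, n_samples=5000, step=0.05, seed=None):
    """Random-walk Metropolis sampler: approximates the PDF of the content
    estimate, whose spread feeds the uncertainty budget."""
    rng = np.random.default_rng(seed)
    x, samples = np.asarray(x0, float), []
    lp = logpost(x)
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)
```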

Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement

Procedia PDF Downloads 260
98 The Strategic Entering Time of a Commerce Platform

Authors: Chia-li Wang

Abstract:

The surge of service and commerce platforms, such as e-commerce and internet-of-things, have rapidly changed our lives. How to avoid the congestion and get the job done in the platform is now a common problem that many people encounter every day. This requires platform users to make decisions about when to enter the platform. To that end, we investigate the strategic entering time of a simple platform containing random numbers of buyers and sellers of some item. Upon a trade, the buyer and the seller gain respective profits, yet they pay the cost of waiting in the platform. To maximize their expected payoffs from trading, both buyers and sellers can choose their entering times. This creates an interesting and practical framework of a game that is played among buyers, among sellers, and between them. That is, a strategy employed by a player is not only against players of its type but also a response to those of the other type, and, thus, a strategy profile is composed of strategies of buyers and sellers. The players' best response, the Nash equilibrium (NE) strategy profile, is derived by a pair of differential equations, which, in turn, are used to establish its existence and uniqueness. More importantly, its structure sheds valuable insights of how the entering strategy of one side (buyers or sellers) is affected by the entering behavior of the other side. These results provide a base for the study of dynamic pricing for stochastic demand-supply imbalances. Finally, comparisons between the social welfares (the sum of the payoffs incurred by individual participants) obtained by the optimal strategy and by the NE strategy are conducted for showing the efficiency loss relative to the socially optimal solution. That should help to manage the platform better.

Keywords: double-sided queue, non-cooperative game, nash equilibrium, price of anarchy

Procedia PDF Downloads 64
97 Electricity Load Modeling: An Application to Italian Market

Authors: Giovanni Masala, Stefania Marica

Abstract:

Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Moreover, in the light of the recent privatization and deregulation of the power industry, the forecasting of future electricity load has turned out to be a very challenging problem. Empirical data about electricity load highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends or holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is then to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model parameters is performed using data from the Italian market over a 6-year period (2007-2012). Then, we perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample inspection). The reliability of the model is confirmed by standard tests, which highlight a good fit of the simulated values.
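
A minimal stand-in for the deterministic component, a trend plus Fourier terms for daily, weekly and yearly periodicity fitted by least squares, is sketched below; the residuals would then be handed to the econometric stage (e.g. an ARMA-GARCH fit). The periods and number of harmonics are illustrative choices, not the paper's calibrated specification.

```python
import numpy as np

def fourier_design_matrix(t, periods=(24.0, 24.0 * 7, 24.0 * 365.25), n_harmonics=3):
    """Design matrix for the deterministic component of an hourly load model:
    intercept, linear trend, and sine/cosine pairs for each periodicity."""
    cols = [np.ones_like(t), t]
    for P in periods:
        for k in range(1, n_harmonics + 1):
            cols.append(np.sin(2 * np.pi * k * t / P))
            cols.append(np.cos(2 * np.pi * k * t / P))
    return np.column_stack(cols)

# Hypothetical usage with an hourly load series `load`:
# t = np.arange(len(load), dtype=float)          # hours since the sample start
# X = fourier_design_matrix(t)
# coef, *_ = np.linalg.lstsq(X, load, rcond=None)
# residuals = load - X @ coef                    # stochastic component to model next
```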

Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression

Procedia PDF Downloads 376
96 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark

Authors: B. Elshafei, X. Mao

Abstract:

The demand for renewable energy is increasing significantly, and major investments are being made in the wind power generation industry as a leading source of clean energy. The wind energy sector is entirely dependent on and driven by the prediction of wind speed, which by the nature of wind is highly stochastic and random. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds for medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark and represent the wind speed across the study area for the period between December 2015 and March 2016. The study aims to investigate the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of engaging the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE), and the results demonstrated a significant increase in the accuracy of predictions: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and pre-processing the signals for wind speed forecasting models.
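
A simplified, single-fidelity stand-in for the Gaussian process regression stage is shown below (scikit-learn). The feature layout and names are assumptions, and the deep multi-fidelity structure and the EWT denoising step are omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Assumed feature rows: [hour_of_day, u_component, v_component, ...denoised
# signal features]; target y: wind speed a few hours ahead (medium-term horizon).
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

# Hypothetical usage:
# gpr.fit(X_train, y_train)
# y_pred, y_std = gpr.predict(X_test, return_std=True)
# rmse = np.sqrt(np.mean((y_pred - y_test) ** 2))   # metric used in the study
```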

Keywords: data fusion, Gaussian process regression, signal denoise, temporal extrapolation

Procedia PDF Downloads 111
95 The Analysis of Emergency Shutdown Valves Torque Data in Terms of Its Use as a Health Indicator for System Prognostics

Authors: Ewa M. Laskowska, Jorn Vatn

Abstract:

Industry 4.0 focuses on the digital optimization of industrial processes. The idea is to use extracted data to build a decision support model enabling the use of those data for real-time decision making. In terms of predictive maintenance, the desired decision support tool would be a model enabling prognostics of a system's health based on the current condition of the considered equipment. Within the area of system prognostics and health management, a commonly used health indicator is the Remaining Useful Lifetime (RUL) of a system. Because the RUL is a random variable, it has to be estimated based on available health indicators. Health indicators can be of different types and come from different sources. They can be process variables, equipment performance variables, data related to the number of experienced failures, etc. The aim of this study is the analysis of performance variables of emergency shutdown valves (ESV) used in the oil and gas industry. ESVs are inspected periodically, and at each inspection the torque and time of valve operation are registered. The data will be analyzed by means of machine learning or statistical analysis. The purpose is to investigate whether the available data could be used as a health indicator for prognostic purposes. The second objective is to examine the most efficient way to incorporate the data into a predictive model. The idea is to check whether the data can be applied in the form of explanatory variables in a Markov process or whether other stochastic processes would be more convenient for building an RUL model based on the information coming from the registered data.

Keywords: emergency shutdown valves, health indicator, prognostics, remaining useful lifetime, RUL

Procedia PDF Downloads 67
94 A TgCNN-Based Surrogate Model for Subsurface Oil-Water Phase Flow under Multi-Well Conditions

Authors: Jian Li

Abstract:

The uncertainty quantification and inversion problems of subsurface oil-water phase flow usually require extensive repeated forward calculations for new runs with changed conditions. To reduce the computational time, various forms of surrogate models have been built. Related research shows that deep learning has emerged as an effective surrogate model, while most surrogate models with deep learning are purely data-driven, which always leads to poor robustness and abnormal results. To guarantee the model more consistent with the physical laws, a coupled theory-guided convolutional neural network (TgCNN) based surrogate model is built to facilitate computation efficiency under the premise of satisfactory accuracy. The model is a convolutional neural network based on multi-well reservoir simulation. The core notion of this proposed method is to bridge two separate blocks on top of an overall network. They underlie the TgCNN model in a coupled form, which reflects the coupling nature of pressure and water saturation in the two-phase flow equation. The model is driven by not only labeled data but also scientific theories, including governing equations, stochastic parameterization, boundary, and initial conditions, well conditions, and expert knowledge. The results show that the TgCNN-based surrogate model exhibits satisfactory accuracy and efficiency in subsurface oil-water phase flow under multi-well conditions.

Keywords: coupled theory-guided convolutional neural network, multi-well conditions, surrogate model, subsurface oil-water phase

Procedia PDF Downloads 60
93 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy and restrictions including limited warehouse space, budget, number of orders, average shortage time and maximum permissible shortage. Since the “costs” cannot be predicted with certainty, it is assumed that the data behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model. The problem is then solved with three multi-objective decision-making (MODM) methods, and the three methods are compared on the optimal values of the two objective functions and the central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results of the study demonstrate that the augmented-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and the CPU time. Sensitivity analysis is done to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.

Keywords: economic production quantity, random cost, supply chain management, vendor-managed inventory

Procedia PDF Downloads 100
92 Optimizing Groundwater Pumping for a Complex Groundwater/Surface Water System

Authors: Emery A. Coppola Jr., Suna Cinar, Ferenc Szidarovszky

Abstract:

Over-pumping of groundwater resources is a serious problem worldwide. In addition to depleting this valuable resource, hydraulically connected sensitive ecological resources like wetlands and surface water bodies are often impacted and even destroyed by over-pumping. Effectively managing groundwater in a way that satisfies human demand while preserving natural resources is a daunting challenge that will only worsen with growing human populations and climate change. As presented in this paper, a numerical flow model developed for a hypothetical but realistic groundwater/surface water system was combined with formal optimization. Response coefficients were used in an optimization management model to maximize groundwater pumping in a complex, multi-layered aquifer system while protecting against groundwater overdraft, streamflow depletion, and wetland impacts. Pumping optimization was performed for different constraint sets that reflect different resource protection preferences, yielding significantly different optimal pumping solutions. A sensitivity analysis on the optimal solutions was performed on select response coefficients to identify differences between wet and dry periods. Stochastic optimization was also performed, where the uncertainty associated with changing irrigation demand due to changing weather conditions is accounted for. One of the strengths of this optimization approach is that it can efficiently and accurately identify superior management strategies that minimize risk and adverse environmental impacts associated with groundwater pumping under different hydrologic conditions.

Keywords: numerical groundwater flow modeling, water management optimization, groundwater overdraft, streamflow depletion

Procedia PDF Downloads 207
91 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation

Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski

Abstract:

In problems of portfolio selection, the reward-risk ratio criterion is optimized to search for a risky portfolio with the maximum increase of the mean return in proportion to the risk measure increase when compared to risk-free investments. In the classical model, following Markowitz, the risk is measured by the variance, thus representing the Sharpe ratio optimization and leading to quadratic optimization problems. Several Linear Programming (LP) computable risk measures have been introduced and applied in portfolio optimization. In particular, the Mean Absolute Deviation (MAD) measure has been widely recognized. The reward-risk ratio optimization with the MAD measure can be transformed into an LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total of the number of scenarios and the number of instruments. This may lead to LP models with a huge number of variables and constraints in the case of real-life financial decisions based on several thousand scenarios, thus decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can then be dramatically improved by an alternative model based on the inverse risk-reward ratio minimization and by taking advantage of LP duality. In the introduced LP model, the number of structural constraints is proportional to the number of instruments, thus not seriously affecting the simplex method efficiency by the number of scenarios and therefore guaranteeing easy solvability. Moreover, we show that under a natural restriction on the target value, the MAD risk-reward ratio optimization is consistent with second order stochastic dominance rules.
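
For reference, the basic MAD portfolio LP with auxiliary deviation variables, the structure on top of which the inverse ratio model adds its homogenising variable, can be written with scipy.optimize.linprog as in the sketch below. The function name and the risk-minimisation-for-a-target-return form are illustrative, not the paper's exact ratio model.

```python
import numpy as np
from scipy.optimize import linprog

def mad_portfolio(returns, target):
    """LP form of the MAD model: minimise mean absolute deviation subject to a
    target mean return, full investment and no short selling.
    returns : T x n matrix of scenario returns."""
    T, n = returns.shape
    mu = returns.mean(axis=0)
    dev = returns - mu                                    # scenario deviations
    # decision vector: [w_1..w_n, d_1..d_T]
    c = np.concatenate([np.zeros(n), np.full(T, 1.0 / T)])
    A_ub = np.vstack([
        np.hstack([dev, -np.eye(T)]),                     #  dev @ w - d <= 0
        np.hstack([-dev, -np.eye(T)]),                    # -dev @ w - d <= 0
        np.hstack([-mu, np.zeros(T)])[None, :],           #  mean return >= target
    ])
    b_ub = np.concatenate([np.zeros(2 * T), [-target]])
    A_eq = np.hstack([np.ones(n), np.zeros(T)])[None, :]  # fully invested
    b_eq = [1.0]
    bounds = [(0, None)] * (n + T)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.fun                             # weights, MAD value
```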

Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming

Procedia PDF Downloads 381
90 The Impact of Window Opening Occupant Behavior Models on Building Energy Performance

Authors: Habtamu Tkubet Ebuy

Abstract:

Purpose: Conventional dynamic energy simulation tools go beyond the static dimension of simplified methods by providing better and more accurate prediction of building performance. However, their ability to forecast actual performance is undermined by a low representation of human interactions. The purpose of this study is to examine the potential benefits of incorporating information on occupant diversity into occupant behavior models used to simulate building performance. The co-simulation of the stochastic behavior of the occupants substantially increases the accuracy of the simulation. Design/methodology/approach: In this article, probabilistic models of inhabitants' window opening and closing behavior have been developed in a separate multi-agent platform, SimOcc, and implemented in the building simulation tool, TRNSYS, in such a way that the window behavior and its interconnectivity can be reflected in the simulation analysis of the building. Findings: The results of the study prove that the application of complex behaviors is important for research in predicting actual building performance. The results aid in the identification of the gap between reality and existing simulation methods. We hope this study and its results will serve as a guide for researchers interested in investigating occupant behavior in the future. Research limitations/implications: Further case studies involving multi-user behavior in complex commercial buildings are needed to better understand the impact of occupant behavior on building performance. Originality/value: This study is considered a good opportunity to achieve the national strategy by showing a suitable tool to help stakeholders in the design phase of new or retrofitted buildings to improve the performance of office buildings.
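
A minimal sketch of a stochastic window-state model of the kind that would be co-simulated with the building model follows (plain NumPy). The switching probabilities and their logistic form are hypothetical illustrations, not the SimOcc model.

```python
import numpy as np

def simulate_window_state(temp_indoor, p_open_fn, p_close_fn, seed=None):
    """Markov-style window model: at every timestep the window switches state
    with probabilities that depend on the indoor temperature."""
    rng = np.random.default_rng(seed)
    state = np.zeros(len(temp_indoor), dtype=int)        # 0 = closed, 1 = open
    for t in range(1, len(temp_indoor)):
        if state[t - 1] == 0:
            state[t] = rng.random() < p_open_fn(temp_indoor[t])
        else:
            state[t] = rng.random() >= p_close_fn(temp_indoor[t])
    return state

# Example logistic probabilities (hypothetical coefficients):
# p_open = lambda T: 1.0 / (1.0 + np.exp(-0.4 * (T - 26.0)))
# p_close = lambda T: 1.0 / (1.0 + np.exp(-0.4 * (20.0 - T)))
# The resulting open/closed schedule would be exchanged with TRNSYS at each
# co-simulation step.
```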

Keywords: occupant behavior, co-simulation, energy consumption, thermal comfort

Procedia PDF Downloads 73
89 Modeling Flow and Deposition Characteristics of Solid CO2 during Choked Flow of CO2 Pipeline in CCS

Authors: Teng Lin, Li Yuxing, Han Hui, Zhao Pengfei, Zhang Datong

Abstract:

With the development of carbon capture and storage (CCS), the flow assurance of CO2 transportation becomes more important, particularly for supercritical CO2 pipelines. A relieving system using a choke valve is applied to control the pressure in the CO2 pipeline. However, the temperature of the fluid can drop rapidly because of Joule-Thomson cooling (JTC), which may cause solid CO2 to form and block the pipe. In this paper, a Computational Fluid Dynamics (CFD) model, using a modified Lagrangian method, the Reynolds Stress Transport model (RSM) for turbulence and a stochastic tracking model (STM) for particle trajectories, was developed to predict the deposition characteristics of solid carbon dioxide. The model predictions were in good agreement with experimental data published in the literature. It can be observed that the particle distribution affected the deposition behavior. In the region of the sudden expansion, the smaller particles accumulated tightly on the wall and were dominant for pipe blockage. In contrast, the solid CO2 particles deposited near the outlet were usually bigger and their stacked structure was looser. According to the calculation results, the movement of the particles can be classified into four main types: turbulent motion close to the sudden expansion structure, balanced motion in the region between the sudden expansion and the middle of the pipe, inertial motion near the outlet, and escape. Because of these four types of motion, the particle deposits accumulate primarily in the sudden expansion region, the reattachment region and the outlet region. The Stokes number also has an effect on the deposition ratio, and it is recommended that Stokes numbers in the range of 3-8 be avoided.

Keywords: carbon capture and storage, carbon dioxide pipeline, gas-particle flow, deposition

Procedia PDF Downloads 343