Search results for: stochastic noises
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 529

139 Analysis of Temporal Factors Influencing Minimum Dwell Time Distributions

Authors: T. Pedersen, A. Lindfeldt

Abstract:

The minimum dwell time is an important part of railway timetable planning. Due to its stochastic behaviour, the minimum dwell time should be considered when creating resilient timetables. While there has been significant focus on how to determine and estimate dwell times, to our knowledge, little research has been carried out regarding temporal and running-direction variations of the minimum dwell time. In this paper, we examine how the minimum dwell time varies depending on temporal factors such as the time of day, day of the week and time of the year. We also examine how it is affected by running direction and station type. The minimum dwell time is estimated by means of track occupation data. A method is proposed to ensure that only minimum dwell times, and not planned dwell times, are acquired from the track occupation data. The results show that on an aggregated level, the average minimum dwell times in both running directions at a station are similar. However, when temporal factors are considered, there are significant variations. The minimum dwell time varies throughout the day, with peak hours having the longest dwell times. The minimum dwell times are also influenced by the day of the week; in particular, weekends have lower minimum dwell times than most other days. The findings show that there is potential to significantly improve timetable planning by taking minimum dwell time variations into account.
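
As an illustration of the estimation step, the following sketch takes a low per-group quantile of observed dwell times as a crude proxy for the minimum dwell time. It assumes a hypothetical track-occupation table with arrival/departure timestamps and station/direction columns; the paper's actual filtering of planned versus minimum dwell times is more elaborate.

```python
import pandas as pd

# Hypothetical track-occupation records: one row per train call at a station.
df = pd.read_csv("occupation.csv", parse_dates=["arrival", "departure"])
df["dwell_s"] = (df["departure"] - df["arrival"]).dt.total_seconds()
df["hour"] = df["arrival"].dt.hour
df["weekday"] = df["arrival"].dt.day_name()

# Crude proxy for the *minimum* dwell time: a low percentile per group,
# so that planned (padded) dwell times do not dominate the estimate.
min_dwell = (df.groupby(["station", "direction", "weekday", "hour"])["dwell_s"]
               .quantile(0.05))
print(min_dwell.head())
```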

Keywords: minimum dwell time, operations quality, timetable planning, track occupation data

Procedia PDF Downloads 194
138 Energy Detection Based Sensing and Primary User Traffic Classification for Cognitive Radio

Authors: Urvee B. Trivedi, U. D. Dalal

Abstract:

As wireless communication services grow quickly, the pressure on spectrum utilization has risen steadily. Cognitive radio, an emerging technology, has been developed to solve today's spectrum scarcity problem. To support the spectrum reuse functionality, secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance. Once sensing is done, different prediction rules apply to classify the traffic pattern of the primary user. Primary users follow two types of traffic patterns: periodic and stochastic ON-OFF patterns. A cognitive radio can learn the patterns in different channels over time. Two types of classification methods are discussed in this paper: one based on edge detection and one using the autocorrelation function. The edge detection method has high accuracy, but it cannot tolerate sensing errors. Autocorrelation-based classification is applicable in real environments, as it can tolerate some amount of sensing errors.
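
The autocorrelation-based rule can be illustrated on a binary channel-occupancy sequence: periodic ON-OFF traffic produces strong recurring peaks in the autocorrelation at nonzero lags, while stochastic traffic yields a quickly decaying autocorrelation. A minimal sketch, with an illustrative threshold that is not from the paper:

```python
import numpy as np

def autocorr(x):
    """Normalized autocorrelation of a (sensed) binary ON/OFF sequence."""
    x = np.asarray(x, float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

def classify(occupancy, peak_thresh=0.5):
    """Periodic PU traffic shows strong recurring ACF peaks at lag > 0;
    stochastic ON/OFF traffic shows a quickly decaying ACF.
    The threshold value is illustrative, not from the paper."""
    acf = autocorr(occupancy)
    return "periodic" if np.max(acf[1:]) > peak_thresh else "stochastic"

# Example: a PU active 10 samples out of every 20 (periodic pattern).
pu = np.tile([1] * 10 + [0] * 10, 8)
noisy = np.where(np.random.rand(pu.size) < 0.05, 1 - pu, pu)  # sensing errors
print(classify(noisy))  # expected: "periodic" despite some errors
```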

Keywords: cognitive radio (CR), probability of detection (PD), probability of false alarm (PF), primary user (PU), secondary user (SU), fast Fourier transform (FFT), signal to noise ratio (SNR)

Procedia PDF Downloads 340
137 Synergy and Complementarity in Technology-Intensive Manufacturing Networks

Authors: Daidai Shen, Jean Claude Thill, Wenjia Zhang

Abstract:

This study explores the dynamics of synergy and complementarity within city networks, specifically focusing on the headquarters-subsidiary relations of firms. We begin by defining these two types of networks and establishing their pivotal roles in shaping city network structures. Utilizing the mesoscale analytic approach of weighted stochastic block modeling, we discern relational patterns between city pairs and determine connection strengths through statistical inference. Furthermore, we introduce a community detection approach to uncover the underlying structure of these networks using advanced statistical methods. Our analysis, based on comprehensive network data up to 2017, reveals the coexistence of both complementarity and synergy networks within China’s technology-intensive manufacturing cities. Notably, firms in technology hardware and office & computing machinery predominantly contribute to the complementarity city networks. In contrast, a distinct synergy city network, underpinned by the cities of Suzhou and Dongguan, emerges amidst the expansive complementarity structures in technology hardware and equipment. These findings provide new insights into the relational dynamics and structural configurations of city networks in the context of technology-intensive manufacturing, highlighting the nuanced interplay between synergy and complementarity.

Keywords: city system, complementarity, synergy network, higher-order network

Procedia PDF Downloads 37
136 Health Outcomes and Economic Growth Nexus: Testing for Long-run Relationships and Causal Links in Nigeria

Authors: Haruna Modibbo Usman, Mustapha Muktar, Nasiru Inuwa

Abstract:

This paper examined the long-run relationship between health outcomes and economic growth in Nigeria from 1961 to 2012. Using annual time series data, the Augmented Dickey-Fuller (ADF) test is conducted to check the stochastic properties of the variables. Also, the long-run relationship among the variables is confirmed based on the Johansen multivariate cointegration approach, whereas the long-run and short-run dynamics are observed using a Vector Error Correction Model (VECM). In addition, the VEC Granger causality test is employed to examine the direction of causality among the variables. On the whole, the results obtained revealed the existence of a long-run relationship between health outcomes and economic growth in Nigeria, and both life expectancy and crude death rate, as measures of health, are found to have a long-run negative and statistically significant impact on economic growth over the study period. This is further buttressed by the results of the Granger causality test, which indicated the existence of unidirectional causality running from life expectancy and crude death rate to economic growth. The study therefore calls for governments at various levels to create preconditions for health improvements in Nigeria in order to boost the level of health outcomes.
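
The testing sequence maps directly onto standard tooling. A sketch using statsmodels, with a hypothetical data file and column names:

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Hypothetical annual series, 1961-2012: GDP, life expectancy, crude death rate.
data = pd.read_csv("nigeria.csv", index_col="year")[["gdp", "life_exp", "cdr"]]

# Step 1: ADF unit-root test on each variable (levels).
for col in data:
    stat, pval = adfuller(data[col])[:2]
    print(f"ADF {col}: stat={stat:.2f}, p={pval:.3f}")

# Step 2: Johansen cointegration test (constant term, 1 lag in differences).
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace stats:", jres.lr1)
print("95% critical values:", jres.cvt[:, 1])

# Step 3: VECM for long- and short-run dynamics (rank taken from step 2).
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(vecm.summary())
```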

Keywords: cointegration, economic growth, Granger causality, health outcomes, VECM

Procedia PDF Downloads 483
135 Explicit Numerical Approximations for a Pricing Weather Derivatives Model

Authors: Clarinda V. Nhangumbe, Ercília Sousa

Abstract:

Weather derivatives are financial instruments used to cover non-catastrophic weather events and can be expressed in the form of standard or plain vanilla products, or structured or exotic products. The underlying asset, in this case, is a weather index, such as temperature, rainfall, humidity, wind, or snowfall. The complexity of the weather derivatives structure exposes the weakness of the Black-Scholes framework. Therefore, under the risk-neutral probability measure, the option price of a weather contract can be given as the unique solution of a two-dimensional partial differential equation (parabolic in one direction and hyperbolic in the other), with an initial condition and subject to adequate boundary conditions. To calculate the price of the option, one can use numerical methods such as Monte Carlo simulation and implicit finite difference schemes conjugated with semi-Lagrangian methods. This paper proposes two explicit methods, namely, first-order upwind in the hyperbolic direction combined with Lax-Wendroff in the parabolic direction, and first-order upwind in the hyperbolic direction combined with second-order upwind in the parabolic direction. One of the advantages of these methods is the fact that they take into consideration the boundary conditions obtained from the financial interpretation and deal efficiently with the different choices of the convection coefficients.
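
For intuition, a one-dimensional toy version of such an explicit scheme is sketched below: first-order upwind for the advection (hyperbolic) term and an explicit centered difference for the diffusion (parabolic) term. It is a stand-in for the paper's two-dimensional scheme, with periodic boundaries instead of the financially motivated ones.

```python
import numpy as np

# Toy explicit scheme for u_t + a u_x = d u_xx, standing in for the paper's
# 2D weather-derivative PDE: upwind for the hyperbolic (advection) part and
# an explicit centered difference for the parabolic (diffusion) part.
nx, nt = 200, 2000
L, T = 1.0, 0.5
dx, dt = L / nx, T / nt
a, d = 1.0, 0.01                     # convection and diffusion coefficients
assert a * dt / dx <= 1 and 2 * d * dt / dx**2 <= 1, "explicit stability limits"

x = np.linspace(0, L, nx)
u = np.exp(-200 * (x - 0.3) ** 2)    # initial condition (payoff stand-in)
for _ in range(nt):
    adv = -a * (u - np.roll(u, 1)) / dx             # upwind, assuming a > 0
    dif = d * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (adv + dif)         # periodic BCs for brevity; the paper uses
                                     # financially motivated boundary conditions
print(u.max())
```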

Keywords: incomplete markets, numerical methods, partial differential equations, stochastic process, weather derivatives

Procedia PDF Downloads 80
134 Carbon Based Wearable Patch Devices for Real-Time Electrocardiography Monitoring

Authors: Hachul Jung, Ahee Kim, Sanghoon Lee, Dahye Kwon, Songwoo Yoon, Jinhee Moon

Abstract:

We fabricated a wearable patch device including a novel patch-type flexible dry electrode based on carbon nanofibers (CNFs) and a silicone-based elastomer (MED 6215) for real-time ECG monitoring. There are many methods to make flexible conductive polymers by mixing in metal or carbon-based nanoparticles. In this study, CNFs were selected as the conductive nanoparticles because carbon nanotubes (CNTs) are difficult to disperse uniformly in the elastomer compared with CNFs, and silver nanowires are relatively high in cost and easily oxidized in air. The wearable patch is composed of two parts: a dry electrode part for recording biosignals and a sticky patch part for mounting on the skin. The dry electrode parts were made with a vortexer and baked in a prepared mold. To optimize the electrical performance and the degree of dispersion uniformity, we developed a unique mixing and baking process. Secondly, the sticky patch parts were made by patterning and detaching from a smooth-surface substrate after spin-coating a soft skin adhesive. In this process, the attachment and detachment strengths of the sticky patch were measured and optimized using a monitoring system. The assembled patch is flexible, stretchable, easily skin-mountable and directly connectable to the system. To evaluate the electrical characteristics and ECG (electrocardiography) recording performance, the wearable patch was tested while changing the concentration of CNFs and the thickness of the dry electrode. The results show that the CNF concentration and the thickness of the dry electrode are important variables for obtaining high-quality ECG signals without incidental distractions. A cytotoxicity test was conducted to prove biocompatibility, and a long-term wearing test showed no skin reactions such as itching or erythema. To minimize noise from motion artifacts and line noise, we built a customized wireless, lightweight data acquisition system. ECG signals measured with this system are stable and were successfully monitored in real time. To sum up, the fabricated wearable patch devices can be fully utilized for convenient real-time ECG monitoring.

Keywords: carbon nanofibers, ECG monitoring, flexible dry electrode, wearable patch

Procedia PDF Downloads 182
133 Geopotential Models Evaluation in Algeria Using Stochastic Method, GPS/Leveling and Topographic Data

Authors: M. A. Meslem

Abstract:

For precise geoid determination, a reference field is used to subtract the long and medium wavelengths of the gravity field from the observation data when the remove-compute-restore technique is applied. Therefore, a comparison study between candidate models should be made in order to select the optimal reference gravity field. In this context, two recent global geopotential models have been selected to perform this comparison study over Northern Algeria: the Earth Gravitational Model (EGM2008) and the Global Gravity Model (GECO), the latter conceived as a combination of the first model with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the area under study have been used to compute residual data using both gravity field models, and a Digital Terrain Model (DTM) was used to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions and fit them to a closed form in order to compare their statistical behaviour in both cases. Finally, height anomalies were computed from both geopotential models and compared to a set of GPS-levelled points on benchmarks using least squares adjustment. The results described in detail in this paper point out a slight overall advantage of the GECO global model through error degree variance comparison and ground-truth evaluation.

Keywords: quasigeoid, gravity anomalies, covariance, GGM

Procedia PDF Downloads 131
132 Measurements for Risk Analysis and Detecting Hazards by Active Wearables

Authors: Werner Grommes

Abstract:

Intelligent wearables (illuminated vests or hand- and foot-bands, smart watches with a laser diode, Bluetooth smart glasses) flood the market today. They contain complex electronics and are worn very close to the body. Optical measurements and limitation of the maximum luminance are needed. Smart watches are equipped with a laser diode or monitor various body currents. Special glasses generate readable text information that is received via radio transmission. Small high-performance batteries (lithium-ion/polymer) supply the electronics. All these products have been tested and evaluated for risk. They must, for example, meet the requirements for electromagnetic compatibility as well as the requirements for electromagnetic fields affecting humans or implant wearers. Extensive analyses and measurements were carried out for this purpose. Many users are not aware of these risks. The results of this study should serve as a suggestion to do better in the future, or simply to point out these risks. Commercial LED warning vests, LED hand- and foot-bands, illuminated surfaces with inverters (high voltage), flashlights, smart watches, and Bluetooth smart glasses were checked for risks. The luminance, the electromagnetic emissions in the low-frequency as well as the high-frequency range, audible noises, and disturbing flashing frequencies were checked by measurements and analyzed. Rechargeable lithium-ion or lithium-polymer batteries can burn or explode under special conditions such as overheating, overcharging, deep discharge or use outside the temperature specification. The result of this study is that many smart wearables are worn very close to the body, so an extensive risk analysis becomes necessary. Wearers of active implants such as a pacemaker or an implantable cardiac defibrillator must be considered: if the wearable electronics include switching regulators or inverter circuits, active medical implants in the near field can be disturbed.

Keywords: safety and hazards, electrical safety, EMC, EMF, active medical implants, optical radiation, illuminated warning vest, electroluminescence, hand and head lamps, LED, e-light, battery safety, luminance, optical glare effects

Procedia PDF Downloads 103
131 An Overbooking Model for Car Rental Service with Different Types of Cars

Authors: Naragain Phumchusri, Kittitach Pongpairoj

Abstract:

Overbooking is a very useful revenue management technique that can help reduce costs caused by either undersales or oversales. In this paper, we propose an overbooking model for two types of cars that minimizes the total cost for a car rental service. With two types of cars, there is the possibility of upgrading from the lower type to the upper type. This makes the model more complex than the single-car-type scenario. We have found that convexity can be proved in this case. A sensitivity analysis of the parameters is conducted to observe the effects of relevant parameters on the optimal solution. A model simplification is proposed using multiple linear regression analysis, which can help estimate the optimal overbooking level using appropriate independent variables. The results show that the overbooking level from the multiple linear regression model is relatively close to the optimal solution (with an adjusted R-squared value of at least 72.8%). To evaluate the performance of the proposed model, the total cost was compared with the case where the decision maker uses a naïve method to set the overbooking level. It was found that the total cost from the optimal solution is, on average, only 0.5 to 1 percent lower than the cost from the regression model, while it is approximately 67% lower than the cost obtained by the naïve method. This indicates that our proposed simplification method using regression analysis performs effectively in estimating the overbooking level.
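
The undersale/oversale trade-off behind overbooking can be seen in a single-car-type toy version (the paper's model adds a second car type with upgrades); all numbers below are illustrative:

```python
import numpy as np
from scipy.stats import binom

# Toy single-car-type version of the overbooking trade-off.
capacity, p_show = 50, 0.85
c_under, c_over = 100.0, 40.0   # cost per empty car vs per denied customer

def expected_cost(b):
    """Expected cost of accepting b bookings with binomial show-ups."""
    shows = np.arange(b + 1)
    probs = binom.pmf(shows, b, p_show)
    under = np.maximum(capacity - shows, 0) * c_under
    over = np.maximum(shows - capacity, 0) * c_over
    return np.sum(probs * (under + over))

levels = range(capacity, capacity + 21)
best = min(levels, key=expected_cost)
print(best, expected_cost(best))
```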

Keywords: overbooking, car rental industry, revenue management, stochastic model

Procedia PDF Downloads 163
130 Analyzing Risk and Expected Return of Lenders in the Shared Mortgage Program of Korea

Authors: Keunock Lew, Seungryul Ma

Abstract:

The paper analyzes the risk and expected return of lenders who provide mortgage loans to households in the shared mortgage program of Korea. In 2013, the Korean government introduced the mortgage program to help low-income householders convert from renting to purchasing houses. The financial source for the mortgage program is the Urban Housing Fund set up by the Korean government. Through the program, low-income households can borrow money from lenders to buy a house at a very low interest rate (e.g., 1% per year) for a long time. The motivation for adopting this mortgage program is that the cost of renting houses increased rapidly, especially in large urban areas, during the past decade, which created financial difficulties for low-income households who do not own houses. As the analysis methodology, the paper uses a spreadsheet model to project cash flows of the mortgage product over the period of the loan contract. It also employs the Monte Carlo simulation method to analyze the risk and expected yield of the lenders under the assumption that the future housing price and the market rate of interest follow a stochastic process. The study results will give valuable implications to the Korean government and to lenders who want to stabilize the mortgage program and innovate the related loan products.
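
A minimal Monte Carlo sketch in this spirit, assuming the housing price follows a geometric Brownian motion and using a heavily simplified shared-appreciation payoff (the actual contract terms are not specified here):

```python
import numpy as np

# Monte Carlo sketch of the lender's yield when the house price follows a
# geometric Brownian motion (one common choice of stochastic process).
rng = np.random.default_rng(0)
n_paths, years = 10_000, 20
mu, sigma = 0.02, 0.10            # drift/volatility of housing prices (assumed)
p0, loan, rate = 300.0, 210.0, 0.01
share = 0.4                       # assumed lender share of price appreciation

z = rng.standard_normal((n_paths, years))
paths = p0 * np.exp(np.cumsum((mu - sigma**2 / 2) + sigma * z, axis=1))
payoff = loan * (1 + rate) ** years + share * np.maximum(paths[:, -1] - p0, 0)
irr = (payoff / loan) ** (1 / years) - 1   # annualized return per path
print(f"mean return {irr.mean():.3%}, 5% quantile {np.quantile(irr, 0.05):.3%}")
```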

Keywords: expected return, Monte Carlo simulation, risk, shared mortgage program

Procedia PDF Downloads 267
129 Optimizing a Hybrid Inventory System with Random Demand and Lead Time

Authors: Benga Ebouele, Thomas Tengen

Abstract:

Implementing either a periodic or a continuous inventory review model within most manufacturing-company supply chains as a management tool may incur higher costs. These high costs affect the system flexibility, which in turn affects the level of service required to satisfy customers. However, these effects are not clearly understood because the parameters of both inventory review policies (protection demand interval, order quantity, etc.) are not designed to be fully utilized under different and uncertain conditions such as poor manufacturing, supply and delivery performance. A hybrid model that combines, in some sense, the features of both the continuous and the periodic inventory review models should therefore be useful. Hence, there is a need to build such a hybrid model and evaluate it on annual total cost, stock-out probability and system flexibility in order to find the most cost-effective inventory review model. This work also seeks the optimal sets of parameters of inventory management under stochastic conditions so as to optimize each policy independently. The results reveal that a continuous inventory system always incurs a lower cost than a periodic (R, S) inventory system, but this difference tends to decrease over time. Although the hybrid inventory system is the only one that can yield a lower cost over time, it is not always desirable; it is nevertheless natural to use it to help the system meet high performance specifications.

Keywords: demand and lead time randomness, hybrid Inventory model, optimization, supply chain

Procedia PDF Downloads 308
128 Deep Reinforcement Learning Approach for Optimal Control of Industrial Smart Grids

Authors: Niklas Panten, Eberhard Abele

Abstract:

This paper presents a novel approach for real-time and near-optimal control of industrial smart grids by deep reinforcement learning (DRL). To achieve highly energy-efficient factory systems, the energetic linkage of machines, technical building equipment and the building itself is desirable. However, the increased complexity of the interacting sub-systems, multiple time-variant target values and stochastic influences from the production environment, weather and energy markets make it difficult to efficiently control energy production, storage and consumption in hybrid industrial smart grids. The studied deep reinforcement learning approach makes it possible to explore the solution space for proper control policies that minimize a cost function. The deep neural network of the DRL agent is based on a multilayer perceptron (MLP), Long Short-Term Memory (LSTM) and convolutional layers. The agent is trained within multiple Modelica-based factory simulation environments by the Advantage Actor-Critic algorithm (A2C). The DRL controller is evaluated by means of the simulation and then compared to a conventional, rule-based approach. Finally, the results indicate that the DRL approach is able to improve the control performance and significantly reduce the energy and operating costs of industrial smart grids.
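
For reference, the core A2C update can be condensed to a few lines. A minimal sketch with a plain MLP policy and value network (the paper's agent additionally uses LSTM and convolutional layers and Modelica-based environments; dimensions and coefficients are placeholders):

```python
import torch
import torch.nn as nn

obs_dim, n_actions = 16, 4   # placeholder sizes, not the paper's
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
value = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(policy.parameters()) + list(value.parameters()), lr=3e-4)

def a2c_update(obs, actions, returns):
    """obs: [T, obs_dim]; actions: [T] (long); returns: discounted returns [T]."""
    logits, v = policy(obs), value(obs).squeeze(-1)
    dist = torch.distributions.Categorical(logits=logits)
    advantage = returns - v.detach()                      # advantage estimate
    loss = (-(dist.log_prob(actions) * advantage).mean()  # policy gradient term
            + 0.5 * (returns - v).pow(2).mean()           # critic (value) loss
            - 0.01 * dist.entropy().mean())               # exploration bonus
    opt.zero_grad()
    loss.backward()
    opt.step()
```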

Keywords: industrial smart grids, energy efficiency, deep reinforcement learning, optimal control

Procedia PDF Downloads 184
127 A Hybrid Algorithm Based on Greedy Randomized Adaptive Search Procedure and Chemical Reaction Optimization for the Vehicle Routing Problem with Hard Time Windows

Authors: Imen Boudali, Marwa Ragmoun

Abstract:

The Vehicle Routing Problem with Hard Time Windows (VRPHTW) is a basic distribution management problem that models many real-world problems. The objective of the problem is to serve a set of customers with known demands on minimum-cost vehicle routes while satisfying vehicle capacity and hard time windows for customers. In this paper, we propose to deal with this optimization problem by using a new hybrid stochastic algorithm based on two metaheuristics: Chemical Reaction Optimization (CRO) and the Greedy Randomized Adaptive Search Procedure (GRASP). The first method is inspired by the natural process of chemical reactions, enabling the transformation of unstable substances with excessive energy into stable ones. During this process, the molecules interact with each other through a series of elementary reactions to reach the minimum energy for their existence. This property is embedded in CRO to solve the VRPHTW. In order to enhance the population diversity throughout the search process, we integrate GRASP into our method, as sketched below. Simulation results on Solomon's benchmark instances show the very satisfactory performance of the proposed approach.
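
A minimal sketch of the GRASP construction phase for the VRPHTW, assuming a simple data layout (customer demand, time window, service time) and a travel-time matrix; the CRO improvement phase is omitted:

```python
import random

def grasp_construct(custs, dist, capacity, alpha=0.3, seed=0):
    """Greedy randomized construction for the VRPHTW.
    custs: {id: (demand, ready, due, service)}; dist: travel times, 0 = depot.
    A restricted candidate list (RCL) of the cheapest feasible extensions is
    built, and the next customer is drawn from it at random. Illustrative
    sketch only, not the paper's implementation."""
    rng = random.Random(seed)
    unrouted, routes = set(custs), []
    while unrouted:
        route, load, now, last = [0], 0.0, 0.0, 0
        while True:
            cand = []
            for c in unrouted:
                d, ready, due, svc = custs[c]
                arrive = max(now + dist[last][c], ready)
                if load + d <= capacity and arrive <= due:  # hard time window
                    cand.append((dist[last][c], c, arrive + svc, d))
            if not cand:
                break
            cand.sort()
            cmin, cmax = cand[0][0], cand[-1][0]
            rcl = [t for t in cand if t[0] <= cmin + alpha * (cmax - cmin)]
            _, c, now, d = rng.choice(rcl)            # randomized greedy pick
            route.append(c)
            unrouted.remove(c)
            load, last = load + d, c
        routes.append(route + [0])
    return routes
```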

Keywords: benchmark problems, combinatorial optimization, vehicle routing problem with hard time windows, metaheuristics, hybridization, GRASP, CRO

Procedia PDF Downloads 408
126 Oil Demand Forecasting in China: A Structural Time Series Analysis

Authors: Tehreem Fatima, Enjun Xia

Abstract:

The research investigates the relationship between total oil consumption and transport oil consumption, GDP, oil price, and oil reserves in order to forecast future oil demand in China. Annual time series data are used over the period 1980 to 2015, and for this purpose an oil demand function is estimated by applying a structural time series model (STSM). The technique also uncovers the underlying energy demand trend (UEDT) for Chinese oil demand, with GDP, oil reserves, oil price and the UEDT considered important drivers of Chinese oil demand. The estimated long-run elasticities of total oil consumption with respect to GDP and price are 0.5 and -0.04, respectively, while the estimates for GDP, oil reserves, and price are 0.17, 0.23, and -0.05, respectively. Moreover, the estimated long-run elasticities of transport oil consumption with respect to GDP and price are 0.5 and -0.00, respectively, while the long-run estimates for GDP, oil reserves, and price are 0.28, 37.76, and -37.8, respectively. For both models, the estimated underlying energy demand trend (UEDT) is nonlinear and stochastic with an increasing trend. Based on the estimated equations, it is predicted that China's total oil demand will be about 9.9 thousand barrels per day by 2025, compared to 9.4 thousand barrels per day in 2015, while the predicted transport oil demand is 9.0 thousand barrels per day by 2020, compared to 8.8 thousand barrels per day in 2015.
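
A structural time series model of this kind can be sketched with statsmodels' unobserved-components framework, the stochastic trend playing the role of the UEDT; the file and column names below are assumptions:

```python
import pandas as pd
import statsmodels.api as sm

# Structural time series sketch: log oil demand with a stochastic (local
# linear) trend standing in for the UEDT, plus log GDP, price and reserves
# as regressors.
df = pd.read_csv("china_oil.csv", index_col="year")
model = sm.tsa.UnobservedComponents(
    df["log_oil_demand"],
    level="local linear trend",          # stochastic level + slope = UEDT
    exog=df[["log_gdp", "log_price", "log_reserve"]],
)
res = model.fit()
print(res.summary())    # exog coefficients ~ long-run elasticities
res.plot_components()   # visualize the estimated UEDT
```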

Keywords: China, forecasting, oil, structural time series model (STSM), underlying energy demand trend (UEDT)

Procedia PDF Downloads 279
125 Effect of Credit Use on Technical Efficiency of Cassava Farmers in Ondo State, Nigeria

Authors: Adewale Oladapo, Carolyn A. Afolami

Abstract:

Agricultural production should be a major financial contributor to the Nigerian economy; however, the petroleum sector has taken over the importance once attached to this sector. The situation will keep worsening unless necessary attention is given to adequate credit supply among food crop farmers. This research analyses the effect of credit use on the technical efficiency of cassava farmers in Ondo State, Nigeria. Primary data were collected from two hundred randomly selected cassava farmers through a multistage sampling procedure in the study area. Data were analysed using descriptive statistics and stochastic frontier analysis (SFA). Findings revealed that 95.0% of the farmers were male, while 56.0% had no formal education and were married. The SFA showed that cassava farmers' efficiency increased with farm size, herbicide and planting material at the 5%, 10% and 1% levels, respectively, but decreased with fertilizer application at the 1% level, while farmers' age, education, household size, experience and access to credit increased technical inefficiency at the 10% level. The study concluded that cassava farmers are technically inefficient in the use of farm resources and recommended that adequate and workable agricultural policy measures that ensure availability and efficient fertilizer distribution be put in place to increase efficiency. Furthermore, the government should encourage youth participation in cassava production and improve farmers' access to credit to increase their technical efficiency.
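
The SFA estimation can be illustrated with the standard normal/half-normal frontier likelihood, maximized numerically; the data below are synthetic placeholders, not the survey data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def sfa_loglik(theta, y, X):
    """Normal/half-normal stochastic production frontier:
    y = X @ beta + v - u,  v ~ N(0, sv^2),  u ~ |N(0, su^2)|."""
    k = X.shape[1]
    beta, lsv, lsu = theta[:k], theta[k], theta[k + 1]
    sv, su = np.exp(lsv), np.exp(lsu)
    sigma, lam = np.hypot(sv, su), su / sv
    eps = y - X @ beta
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

# y: log output; X: constant + log inputs (farm size, herbicide, ...).
# Synthetic data for illustration only.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 3))])
y = (X @ np.array([1.0, 0.4, 0.2, 0.1])
     + rng.normal(0, 0.2, 200) - np.abs(rng.normal(0, 0.3, 200)))
res = minimize(sfa_loglik, x0=np.zeros(X.shape[1] + 2), args=(y, X))
print(res.x[:X.shape[1]])   # estimated frontier elasticities
```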

Keywords: agriculture, access to credit, cassava farmers, technical efficiency

Procedia PDF Downloads 176
124 Timing and Probability of Presurgical Teledermatology: Survival Analysis

Authors: Felipa de Mello-Sampayo

Abstract:

The aim of this study is to analyse, from the patient's perspective, the timing and probability of using teledermatology, comparing it with a conventional referral system. The dynamic stochastic model's main added value consists of its concrete application to patients waiting for dermatology surgical intervention. Patients with low health-level uncertainty should use teledermatology treatment as soon as possible, which is precisely when teledermatology is least valuable. The predictions of the model were then tested empirically with the teledermatology network covering the area served by the Hospital Garcia da Horta, Portugal, which links the primary care centers of 24 health districts with the hospital's dermatology department via the corporate intranet of the Portuguese healthcare system. Health-level volatility can be understood as the hazard of developing skin cancer, and the trend of the health level as the bias of developing skin lesions. The results of the survival analysis suggest that the theoretical model can explain the use of teledermatology: it depends negatively on the volatility of patients' health and positively on the trend of health, i.e., the lower the risk of developing skin cancer and the younger the patients, the more presurgical teledermatology one expects to occur. Presurgical teledermatology also depends positively on out-of-pocket expenses and negatively on the opportunity costs of teledermatology, i.e., the lower the benefit missed by using teledermatology, the more presurgical teledermatology one expects to occur.
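
The empirical step can be sketched as a proportional-hazards fit with lifelines, where the covariates are hypothetical stand-ins for the model's volatility, trend and cost variables:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Survival-analysis sketch of the time until a patient uses teledermatology.
# Column names are hypothetical stand-ins for the paper's covariates.
df = pd.read_csv("teledermatology.csv")  # one row per referred patient
cph = CoxPHFitter()
cph.fit(
    df[["wait_days", "used_teledermatology",   # duration and event flag
        "health_volatility", "health_trend",   # hazard/bias of skin lesions
        "out_of_pocket", "opportunity_cost"]],
    duration_col="wait_days",
    event_col="used_teledermatology",
)
cph.print_summary()   # a negative coefficient on volatility and a positive one
                      # on trend would match the model's predictions
```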

Keywords: teledermatology, wait time, uncertainty, opportunity cost, survival analysis

Procedia PDF Downloads 121
123 Mutagenesis, Oxidative Stress Induction and Blood Cytokine Profile in First Generation Male Rats Whose Parents Were Exposed to Radiation and Hexavalent Chromium

Authors: Yerbolat Iztleuov

Abstract:

Stochastic effects, which are currently largely associated with exposure to ionizing radiation or to a combination of ionizing radiation with other chemical, physical, and biological agents, are expressed in the form of various mutations. In the first stage of the study, rats of both sexes were divided into three groups: the first was the control group; animals of the second group were exposed to gamma radiation at a dose of 0.2 Gy; the third group received hexavalent chromium at a dose of 180 mg/l in drinking water for a month before irradiation and, a day after the end of chromium consumption, was subjected to total gamma irradiation at a dose of 0.2 Gy. In the second stage of the experiment, after 3 days, the males were mated with the females. The resulting offspring were studied for lipid peroxidation, cytokine profile and micronuclei. This study shows that 5-month-old offspring whose parents were exposed to the combined action of chromium and γ-irradiation exhibit hereditary genome instability, decreased activity of antioxidant enzymes and of sulfhydryl groups in the blood, and increased levels of lipid peroxidation. There is also an increase in the level of inflammatory markers (IL-6 and TNF) in the blood plasma against the background of a decrease in the anti-inflammatory cytokine (IL-10). Thus, the combined effect of hexavalent chromium and ionizing radiation can lead to the development of an oncological process.

Keywords: hexavalent chromium, ionizing radiation, first generation, oxidative stress, cytokines, mutagenesis, cancer

Procedia PDF Downloads 13
122 A Convergent Interacting Particle Method for Computing Kpp Front Speeds in Random Flows

Authors: Tan Zhang, Zhongjian Wang, Jack Xin, Zhiwen Zhang

Abstract:

We aim to efficiently compute the spreading speeds of reaction-diffusion-advection (RDA) fronts in divergence-free random flows under the Kolmogorov-Petrovsky-Piskunov (KPP) nonlinearity. We study a stochastic interacting particle method (IPM) for the reduced principal eigenvalue (Lyapunov exponent) problem of an associated linear advection-diffusion operator with spatially random coefficients. The Fourier representation of the random advection field and the Feynman-Kac (FK) formula of the principal eigenvalue (Lyapunov exponent) form the foundation of our method, implemented as a genetic evolution algorithm. The particles undergo advection-diffusion and mutation/selection through a fitness function originating in the FK semigroup. We analyze the convergence of the algorithm based on operator splitting and present numerical results on representative flows such as the 2D cellular flow and the 3D Arnold-Beltrami-Childress (ABC) flow under random perturbations. The 2D examples serve as a consistency check with semi-Lagrangian computation. The 3D results demonstrate that IPM, being mesh-free and self-adaptive, is simple to implement and efficient for computing front spreading speeds in the advection-dominated regime for high-dimensional random flows on unbounded domains where no truncation is needed.
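
The mutation/selection structure of such a method can be sketched as follows: particles advect and diffuse, are weighted by a Feynman-Kac factor, and are resampled; the flow, fitness function and parameters below are illustrative, not the paper's configuration:

```python
import numpy as np

# Particle sketch of a Feynman-Kac genetic estimate of the principal
# eigenvalue (Lyapunov exponent) of L = kappa*Lap + v.grad + c.
rng = np.random.default_rng(0)
N, dt, steps, kappa = 10_000, 1e-3, 2000, 0.1
v = lambda x: np.stack([-np.sin(np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1]),
                        np.cos(np.pi * x[:, 0]) * np.sin(np.pi * x[:, 1])], axis=1)
c = lambda x: np.cos(2 * np.pi * x[:, 0])   # illustrative fitness (potential)

X = rng.uniform(0, 2, size=(N, 2))
log_growth = 0.0
for _ in range(steps):
    # mutation step: advection-diffusion of each particle
    X += v(X) * dt + np.sqrt(2 * kappa * dt) * rng.standard_normal(X.shape)
    # selection step from the Feynman-Kac semigroup: weight and resample
    w = np.exp(c(X) * dt)
    log_growth += np.log(w.mean())
    X = X[rng.choice(N, N, p=w / w.sum())]
lyap = log_growth / (steps * dt)            # principal eigenvalue estimate
print(lyap)
```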

Keywords: KPP front speeds, random flows, Feynman-Kac semigroups, interacting particle method, convergence analysis

Procedia PDF Downloads 41
121 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana

Authors: Gautier Viaud, Paul-Henry Cournède

Abstract:

Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters, the second level describes how these individual parameters are distributed within a plant population, and the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler, illustrated below. A generic approach was devised for the implementation of both general state space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax with metaprogramming capabilities and high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale Greenlab model for the latter is thus presented, in which the surface area of each individual leaf can be simulated. It is assumed that the error made in the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used. Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, there is no need for the data of a single individual to be available at all times, nor for the times at which data are available to be the same across individuals. This makes it possible to discard data from image analysis when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
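
A minimal sketch of such a hybrid Gibbs-Metropolis sampler in Python (the authors' platform is written in Julia), with a toy growth curve standing in for the Greenlab model and multiplicative observation noise:

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda theta, t: np.exp(theta) * t   # toy stand-in for the growth model

def gibbs_metropolis(y, t, n_iter=5000, step=0.1, s_obs=0.05):
    """Two-level hierarchy: theta_i ~ N(mu, tau^2), y_ij = g(theta_i, t_j)*(1+noise)."""
    n = y.shape[0]
    theta, mu, tau = np.zeros(n), 0.0, 1.0
    for _ in range(n_iter):
        # Metropolis step per individual parameter (the model is nonlinear).
        for i in range(n):
            def logpost(th):
                resid = (y[i] - g(th, t)) / (s_obs * g(th, t))  # multiplicative noise
                return -0.5 * np.sum(resid**2) - 0.5 * ((th - mu) / tau) ** 2
            prop = theta[i] + step * rng.standard_normal()
            if np.log(rng.uniform()) < logpost(prop) - logpost(theta[i]):
                theta[i] = prop
        # Gibbs steps: conjugate updates of the population parameters.
        mu = rng.normal(theta.mean(), tau / np.sqrt(n))
        tau = np.sqrt(1.0 / rng.gamma(n / 2, 2.0 / np.sum((theta - mu) ** 2)))
    return theta, mu, tau
```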

Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models

Procedia PDF Downloads 297
120 Imaging of Underground Targets with an Improved Back-Projection Algorithm

Authors: Alireza Akbari, Gelareh Babaee Khou

Abstract:

Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of subsurface shallow small targets such as landmines and unexploded ordnance, and also for imaging behind walls in security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. Therefore, the hyperbolic curve behaviour in the space-time GPR image is usually transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display the information on the spatial location and the reflectivity of an underground object. Therefore, the main challenge of the GPR imaging technique is to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, at first, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is used for the image reconstruction. The standard BP algorithm has limited robustness against strong noise and produces many artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP algorithm based on cross-correlation between the received signals is proposed to reduce noise and suppress artifacts. To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP algorithm, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed that it achieves superior artifact suppression and produces images with high quality and resolution. In order to quantitatively describe the imaging results with respect to artifact suppression, a focusing parameter was evaluated.
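
A compact sketch of the back-projection principle, with a zero-lag cross-correlation weight between neighbouring traces standing in for the paper's weighting scheme; geometry and velocity are assumptions:

```python
import numpy as np

def back_projection(traces, ant_x, dt, v, xs, zs, corr_weight=True):
    """Standard BP for monostatic GPR: each image pixel accumulates trace
    samples at the two-way travel time from every antenna position.
    traces: (n_positions, n_samples) B-scan; v: wave speed in the medium."""
    img = np.zeros((len(zs), len(xs)))
    for k, xa in enumerate(ant_x):
        w = 1.0
        if corr_weight and 0 < k:   # neighbour cross-correlation weight (zero lag)
            w = max(float(np.dot(traces[k], traces[k - 1])), 0.0)
        for i, z in enumerate(zs):
            for j, x in enumerate(xs):
                tau = 2.0 * np.hypot(x - xa, z) / v   # two-way travel time
                n = int(round(tau / dt))
                if n < traces.shape[1]:
                    img[i, j] += w * traces[k, n]
    return img
```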

Keywords: algorithm, back-projection, GPR, remote sensing

Procedia PDF Downloads 447
119 Transient and Persistent Efficiency Estimation for Electric Grid Utilities Based on Meta-Frontier: Comparative Analysis of China and Japan

Authors: Bai-Chen Xie, Biao Li

Abstract:

With the deepening of international exchanges and investment, the international comparison of power grid firms has become a focus of regulatory authorities. Ignoring the differences in the economic environment, resource endowment, technology, and other aspects of different countries or regions may lead to efficiency bias. Based on the meta-frontier model, this paper divides the grid utilities of China and Japan into two groups using data from 2006 to 2020. While preserving the differences between the two countries, it analyzes and compares the efficiency of their transmission and distribution industries. Combined with the four-component stochastic frontier model, efficiency is divided into transient and persistent efficiency. We found that there are obvious differences between the transmission and distribution sectors in China and Japan. On the one hand, the inefficiency in both countries is mostly caused by long-term and structural problems, so the key to improving efficiency is to focus more on solving these long-term and structural problems. On the other hand, the long-term and structural problems that cause inefficiency are not the same in the two countries. Quality factors have different effects on the efficiency of the two countries, and this different effect is captured by the common-frontier model but is offset in the overall model. Based on these findings, this paper proposes some targeted policy recommendations.

Keywords: transmission and distribution industries, transient efficiency, persistent efficiency, meta-frontier, international comparison

Procedia PDF Downloads 93
118 Mean Field Model Interaction for Computer and Communication Systems: Modeling and Analysis of Wireless Sensor Networks

Authors: Irina A. Gudkova, Yousra Demigha

Abstract:

Scientific research is moving more and more towards the study of complex systems in several areas of economics, biology, physics, and computer science. In this paper, we work on complex systems in communication networks, namely Wireless Sensor Networks (WSNs), which are considered stochastic systems composed of interacting entities. The current advancement of sensing in computing and communication systems is fertile ground for research in several tracks. A detailed presentation is given of WSNs, their uses, their modeling, different problems that can occur in their application, and some solutions. The main goal of this work is to reintroduce the mean field method, since it is a powerful technique for solving this type of model, especially systems that evolve according to a continuous-time Markov chain (CTMC). Focusing on CTMC modeling, we obtain a large system of interacting continuous-time Markov chains with population entities. The main idea is to work on one entity and replace the others with an average or effective interaction. In this context, to make the solution easier, we consider a wireless sensor network as a multi-body problem and reduce it to a one-body problem, as sketched below. The method was applied to a WSN modeled as a Markovian queue, showing the results of the technique used.
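
A one-body (mean field) reduction can be sketched by integrating the drift of the occupancy measure instead of simulating the full joint CTMC; the interaction law below is illustrative:

```python
import numpy as np

# Mean field sketch: N sensors, each ON->OFF at rate mu and OFF->ON at a rate
# depending on the fraction x of ON nodes (the "average interaction").
# Instead of the full 2^N-state CTMC, integrate the one-body ODE for x(t).
mu, lam = 1.0, 2.0
rate_on = lambda x: lam * (1 - x)   # illustrative interaction law

def mean_field(x0=0.1, dt=1e-3, T=10.0):
    x, xs = x0, []
    for _ in range(int(T / dt)):
        x += dt * ((1 - x) * rate_on(x) - x * mu)  # drift of the ON fraction
        xs.append(x)
    return np.array(xs)

print(mean_field()[-1])   # fixed point approximates the stationary ON fraction
```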

Keywords: continuous-time Markov chain, hidden Markov chain, mean field method, wireless sensor networks

Procedia PDF Downloads 156
117 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction

Authors: Bastien Batardière, Joon Kwon

Abstract:

For finite-sum optimization, variance-reduced (VR) gradient methods compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance that has contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG, and benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions, we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
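
A minimal sketch of the combination, pairing the loopless SVRG (L-SVRG) gradient estimator with a coordinate-wise AdaGrad step; details such as mini-batching and averaging are omitted:

```python
import numpy as np

def adalvr(grad_i, w0, n, steps=10_000, eta=0.5, p=None, seed=0):
    """grad_i(w, i): gradient of the i-th component function at w."""
    rng = np.random.default_rng(seed)
    p = p if p is not None else 1.0 / n    # probability of refreshing the anchor
    w, anchor = w0.copy(), w0.copy()
    full = np.mean([grad_i(anchor, i) for i in range(n)], axis=0)
    G = 1e-12 * np.ones_like(w0)           # AdaGrad accumulator
    for _ in range(steps):
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(anchor, i) + full   # L-SVRG estimator
        G += g * g
        w -= eta * g / np.sqrt(G)           # coordinate-wise AdaGrad step
        if rng.random() < p:                # loopless anchor refresh
            anchor = w.copy()
            full = np.mean([grad_i(anchor, i) for i in range(n)], axis=0)
    return w
```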

Keywords: convex optimization, variance reduction, adaptive algorithms, loopless

Procedia PDF Downloads 64
116 A Bi-Objective Model to Optimize the Total Time and Idle Probability for Facility Location Problem Behaving as M/M/1/K Queues

Authors: Amirhossein Chambari

Abstract:

This article proposes a bi-objective model for the facility location problem subject to congestion (overcrowding), motivated by applications such as locating servers for internet mirror sites, communication networks, and other one-server systems. The model considers situations in which immobile (fixed) service facilities are congested (queued) by stochastic demand and behave as M/M/1/K queues. We consider two simultaneous perspectives on this problem: (1) customers, who desire to limit the time spent accessing and waiting for service, and (2) the service provider, who desires to limit the average facility idle time. A bi-objective model is set up for the facility location problem with two objective functions: (1) minimizing the sum of the expected total traveling and waiting time (customers) and (2) minimizing the average facility idle-time percentage (service provider). The proposed model belongs to the class of mixed-integer nonlinear programming models and to the class of NP-hard problems. To solve the model, a controlled elitist non-dominated sorting genetic algorithm (controlled NSGA-II) and a controlled elitist non-dominated ranking genetic algorithm (NRGA-I) are proposed. Furthermore, the two proposed metaheuristic algorithms are evaluated using standard multiobjective metrics. Finally, the results are analyzed and some conclusions are drawn.
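
The M/M/1/K quantities entering both objectives have closed forms, which the following helper illustrates (the paper embeds them in a mixed-integer nonlinear program):

```python
# Closed-form M/M/1/K quantities: with rho = lambda/mu,
# p_n = rho^n (1 - rho) / (1 - rho^(K+1)) for n = 0..K.
def mm1k_metrics(lam, mu, K):
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        probs = [1.0 / (K + 1)] * (K + 1)
    else:
        norm = (1 - rho ** (K + 1)) / (1 - rho)
        probs = [rho ** n / norm for n in range(K + 1)]
    p_block = probs[K]               # arriving customers turned away
    p_idle = probs[0]                # facility idle-time fraction
    L = sum(n * p for n, p in enumerate(probs))
    W = L / (lam * (1 - p_block))    # Little's law on admitted customers
    return p_idle, p_block, W

print(mm1k_metrics(lam=8.0, mu=10.0, K=5))
```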

Keywords: bi-objective, facility location, queueing, controlled NSGA-II, NRGA-I

Procedia PDF Downloads 575
115 Determination of Tide Height Using Global Navigation Satellite Systems (GNSS)

Authors: Faisal Alsaaq

Abstract:

Hydrographic surveys have traditionally relied on the availability of tide information for the reduction of sounding observations to a common datum. In most cases, tide information is obtained from tide gauge observations and/or tide predictions over space and time using local, regional or global tide models. While the latter often provide a rather crude approximation, the former rely on tide gauge stations that are spatially restricted and often have a sparse and limited distribution. A more recent method that is increasingly being used is Global Navigation Satellite System (GNSS) positioning, which can be utilised to monitor the height variations of a vessel or buoy, thus providing information on sea level variations during a hydrographic survey. However, GNSS heights obtained in the dynamic environment of a survey vessel are affected by "non-tidal" processes such as wave activity and the attitude of the vessel (roll, pitch, heave and dynamic draft). This research seeks to examine techniques that separate the tide signal from the other, non-tidal signals that may be contained in GNSS heights. This requires an investigation of the processes involved and of their temporal, spectral and stochastic properties in order to apply suitable techniques for recovering the tide information. In addition, different post-mission and near-real-time GNSS positioning techniques will be investigated, with a focus on height estimation at sea. Furthermore, the study will investigate the possibility of transferring chart datums to the locations of tide gauges.
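
One standard separation technique is a least-squares fit of the main tidal constituents to the GNSS height series, leaving waves and vessel motion in the residual. A sketch, using textbook constituent periods:

```python
import numpy as np

# Least-squares fit of tidal constituents to GNSS heights: the slowly varying
# harmonic tide is separated from faster, non-tidal motion (waves, heave).
# Frequencies are the standard M2/S2/K1/O1 values in cycles per hour.
freqs = np.array([1 / 12.4206, 1 / 12.0000, 1 / 23.9345, 1 / 25.8193])

def fit_tide(t_hours, h):
    """Design matrix: mean + cos/sin per constituent; returns the fitted tide."""
    cols = [np.ones_like(t_hours)]
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t_hours), np.sin(2 * np.pi * f * t_hours)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    return A @ coef    # tide signal; h - A @ coef is the non-tidal residual

# h would be GNSS-derived heights of the vessel/buoy sampled at t (hours).
```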

Keywords: hydrography, GNSS, datum, tide gauge

Procedia PDF Downloads 257
114 Optimization of Air Pollution Control Model for Mining

Authors: Zunaira Asif, Zhi Chen

Abstract:

Sustainable air quality management is recognized as one of the most serious environmental concerns in mining regions. Mining operations emit various types of pollutants that have significant impacts on the environment. This study presents a stochastic control strategy, developing an air pollution control model to achieve a cost-effective solution. The optimization method is formulated to predict the cost of treatment using linear programming with an objective function and multiple constraints. The constraints mainly focus on two factors: the production of metal should not exceed the available resources, and air quality should meet the standard criteria for each pollutant. The applicability of this model is explored through a case study of an open-pit metal mine in Utah, USA. The method uses meteorological data in a dispersion transfer function to reflect the practical local conditions. The probabilistic analysis and the uncertainties in the meteorological conditions are handled by Monte Carlo simulation. Reasonable results have been obtained for selecting the optimized treatment technology for PM2.5, PM10, NOx, and SO2. An additional comparison analysis shows that the baghouse is the least-cost option compared to the electrostatic precipitator and wet scrubbers for particulate matter, whereas non-selective catalytic reduction and dry flue gas desulfurization are suitable for NOx and SO2 reduction, respectively. Thus, this model can aid planners in reducing these pollutants at a marginal cost by suggesting pollution control devices, while accounting for dynamic meteorological conditions and mining activities.
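
The LP core of the model can be sketched with scipy; all coefficients below are illustrative, not the study's data:

```python
from scipy.optimize import linprog

# Toy version of the treatment-selection LP: choose operating levels x_j of
# three control devices to meet a required PM10 removal at minimum cost.
cost = [120.0, 180.0, 150.0]        # $ per unit: baghouse, ESP, wet scrubber
removal = [0.99, 0.97, 0.90]        # removal efficiency per treated unit
emitted, limit = 100.0, 20.0        # uncontrolled emission vs standard (t/yr)

res = linprog(
    c=cost,
    A_ub=[[-r for r in removal]],   # total removal >= emitted - limit
    b_ub=[-(emitted - limit)],
    bounds=[(0, emitted)] * 3,      # cannot treat more than is emitted
    method="highs",
)
print(res.x, res.fun)   # here the cheapest effective device dominates,
                        # mirroring the least-cost finding for the baghouse
```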

Keywords: air pollution, linear programming, mining, optimization, treatment technologies

Procedia PDF Downloads 202
113 Designing Ecologically and Economically Optimal Electric Vehicle Charging Stations

Authors: Y. Ghiassi-Farrokhfal

Abstract:

The number of electric vehicles (EVs) is increasing worldwide. Replacing gas-fueled cars with EVs reduces carbon emissions. However, the extensive energy consumption of EVs stresses the energy systems, requiring non-green sources of energy (such as gas turbines) to compensate for the new energy demand caused by EVs. To make EVs an even greener solution for future energy systems, new EV charging stations are equipped with solar PV panels and batteries. This helps serve the energy demand of EVs with the green energy of the solar panels. To ensure energy availability, the solar panels are combined with batteries: the energy surplus at any point is stored in the batteries and is used when there is not enough solar energy to serve the demand. While EV charging stations equipped with solar panels and batteries are green and ecologically optimal, they might not be financially viable solutions due to battery prices. To make the system viable, the battery should be sized economically and the system operated optimally. This is, in general, a challenging problem because of the stochastic nature of the EV arrivals at the charging station, the available solar energy, and the battery operating regime. In this work, we provide a mathematical model for this problem and compute the return on investment (ROI) of such a system, designed to be ecologically and financially optimal. We also quantify the minimum required investment in batteries and solar panels, along with the operating strategy, to ensure that a charging station has enough energy to serve its EV demand at any time.

Keywords: solar energy, battery storage, electric vehicle, charging stations

Procedia PDF Downloads 214
112 Simulating Elevated Rapid Transit System for Performance Analysis

Authors: Ran Etgar, Yuval Cohen, Erel Avineri

Abstract:

One of the major challenges of transportation in medium-sized inner cities (such as Tel-Aviv) is the last-mile solution. Personal rapid transit (PRT) seems an applicable candidate for this, as it combines the benefits of personal (car) travel with the operational benefits of transit. However, the investment required for a large-area PRT grid is significant, and there is a need to economically justify such an investment by correctly evaluating the grid capacity. The main elements of PRT are small automated vehicles (sometimes referred to as podcars) operating on a network of specially built guideways. This research looks at a specific concept of an elevated PRT system. A literature review has revealed the drawbacks of existing PRT modelling and simulation approaches, mainly the lack of consideration of the technical and operational features of the system (such as headways, acceleration, and safety issues); the detailed design of the infrastructure (guideways, stations, and docks); the stochastic and seasonal characteristics of demand; and safety regulations. All of these have a strong effect on system performance. A highly detailed model of the system, developed in this research, applies discrete event simulation combined with an agent-based approach to represent the system elements and the podcars' movement logic. Applying a case study approach, the simulation model is used to study the capacity of the system, its expected throughput and utilization, and the level of service (journey time, waiting time, etc.).

Keywords: capacity, productivity measurement, PRT, simulation, transportation

Procedia PDF Downloads 161
111 Optimized Real Ground Motion Scaling for Vulnerability Assessment of Building Considering the Spectral Uncertainty and Shape

Authors: Chen Bo, Wen Zengping

Abstract:

Based on the results of previous studies, we focus on real ground motion selection and scaling methods for structural performance-based seismic evaluation using nonlinear dynamic analysis. The input earthquake ground motions should be determined appropriately to make them compatible with the site-specific hazard level considered. Thus, an optimized selection and scaling method is established that uses not only the Monte Carlo simulation method to create stochastic simulation spectra accounting for the multivariate lognormal distribution of the target spectrum, but also a spectral shape parameter. Its application to structural fragility analysis is demonstrated through case studies. Compared to a previous scheme with no consideration of the uncertainty of the target spectrum, the method shown here ensures that the selected records are in good agreement with the median value, standard deviation and spectral correlation of the target spectrum, and it fully reflects the uncertainty of the site-specific hazard level. Meanwhile, it helps improve computational efficiency and matching accuracy. Given the important influence of the target spectrum's uncertainty on structural seismic fragility analysis, this work provides a reasonable and reliable basis for structural seismic evaluation under scenario earthquake environments.
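
The stochastic target spectra can be sketched as draws of jointly lognormal spectral ordinates; the median, dispersions and correlation model below are placeholders for a site-specific hazard result:

```python
import numpy as np

# Sampling stochastic target spectra: response spectral ordinates at periods
# T_1..T_m are treated as jointly lognormal, so log-spectra are drawn from a
# multivariate normal with the target median, dispersion and correlation.
rng = np.random.default_rng(0)
periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])
ln_median = np.log([0.8, 1.0, 0.6, 0.3, 0.12])   # placeholder median Sa (g)
sigma = np.array([0.5, 0.55, 0.6, 0.65, 0.7])    # placeholder log-std per period
corr = np.exp(-np.abs(np.subtract.outer(np.log(periods), np.log(periods))))
cov = corr * np.outer(sigma, sigma)

sim_ln = rng.multivariate_normal(ln_median, cov, size=1000)
sim_spectra = np.exp(sim_ln)   # candidate records are then scaled/selected to
                               # match these simulated spectra and their shape
```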

Keywords: ground motion selection, scaling method, seismic fragility analysis, spectral shape

Procedia PDF Downloads 288
110 Identification of Factors Affecting Technical Efficiency Sugar Cane Farming in East Java

Authors: Noor Rizkiyah, Djoko Koestiono, Budi Setiawan, Nuhfil Hanani

Abstract:

This research aims to identify the factors that affect sugar cane production, the level of technical efficiency of ratoon sugar cane farming, and the factors that affect technical inefficiency. The research was carried out in Malang, East Java, with stratified proportional non-random sampling, yielding 172 sugar cane farming households classified by ratooning level: ratooning I (3-4 ratoon cycles), ratooning II (5-10 ratoon cycles), and ratooning III (more than 10 ratoon cycles). The method used is the stochastic frontier production approach with maximum likelihood estimation (MLE). The analysis shows that the factors affecting ratoon sugar cane production are land, the use of ZA and petroganic fertilizers, the use of replacement (gap-filling) seedlings, and labor. The average technical efficiency level of ratoon sugar cane farmers is 0.78, categorized as not yet technically efficient. The factors that influence technical inefficiency are age, the number of family dependents, and the frequency of ratooning. Though not yet technically efficient, sugar cane farmers continue to cultivate by ratooning; however, repeated ratooning will result in a decrease in sugar cane production. At the same time, the feasibility analysis of ratoon sugar cane farming gives an R/C ratio of 1.15, so it is worth undertaking. Thus, improved technology and a better combination of inputs are needed so that technical efficiency can be achieved and the farming becomes more worth organising.

Keywords: technical efficiency, production, sugarcane, frontier

Procedia PDF Downloads 165