Search results for: stochastic uncertainty analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27564

27264 Application of IF Rough Data on Knowledge Towards Malaria of Rural Tribal Communities in Tripura

Authors: Chhaya Gangwal, R. N. Bhaumik, Shishir Kumar

Abstract:

Handling uncertain and imprecise knowledge is a challenging task in information systems. Intuitionistic fuzzy (IF) sets and rough set theory enhance databases by allowing them to manage uncertain and imprecise information. This paper presents a new, efficient query optimization technique for multi-valued or imprecise IF rough databases. The usefulness of the technique is illustrated on malaria knowledge from the rural tribal communities of Tripura, where most of the information is multi-valued and imprecise. Queries about malaria knowledge are then executed in SQL Server to simplify the implementation of IF rough data querying.
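A minimal sketch of the kind of selection the abstract describes, assuming a toy relation in which each attribute value carries an intuitionistic-fuzzy pair (membership, non-membership); the table contents, thresholds, and helper name are illustrative, not the authors' implementation.

```python
# Toy intuitionistic-fuzzy (IF) selection over a small relation.
# Each attribute value is a (value, mu, nu) triple: mu = membership,
# nu = non-membership, with mu + nu <= 1 (hesitation = 1 - mu - nu).
records = [
    {"village": "A", "knows_malaria_cause": ("yes", 0.7, 0.2)},
    {"village": "B", "knows_malaria_cause": ("yes", 0.4, 0.5)},
    {"village": "C", "knows_malaria_cause": ("no",  0.8, 0.1)},
]

def if_select(rows, attr, value, alpha=0.6, beta=0.3):
    """Keep rows whose IF pair for `attr` matches `value` with
    membership >= alpha and non-membership <= beta (an (alpha, beta)-cut)."""
    out = []
    for r in rows:
        v, mu, nu = r[attr]
        if v == value and mu >= alpha and nu <= beta:
            out.append(r)
    return out

print(if_select(records, "knows_malaria_cause", "yes"))  # -> village A only
```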

Keywords: intuitionistic fuzzy set, rough set, relational database, IF rough relational database

Procedia PDF Downloads 410
27263 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors

Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin

Abstract:

IoT devices are the basic building blocks of an IoT network and generate enormous volumes of real-time, high-speed data that help organizations and companies take intelligent decisions. Integrating these data from multiple sources and transferring them to the appropriate clients is fundamental to IoT development, and handling such a huge number of devices along with such a huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to communicate energy-efficiently, they sleep and wake periodically or aperiodically depending on the traffic load. Sometimes these devices disconnect entirely because their batteries are depleted, and when a node is unavailable the IoT network delivers incomplete, missing, or inaccurate data. Moreover, many IoT applications, such as vehicle tracking and patient tracking, require the devices to be mobile; if the distance between a device and the sink node becomes greater than the transmission range allows, the connection is lost, and other devices join the network to replace the broken-down or departed ones. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data, without revealing the actual cause of abnormal readings. If data are of poor quality, decisions based on them are likely to be unsound. It is therefore highly important to process data and estimate their quality before using them in IoT applications. In the past, many researchers tried to estimate data quality using Machine Learning (ML), stochastic, and statistical methods applied to stored data in the data processing layer, without addressing the challenges that arise from the dynamic nature of IoT devices and its impact on data quality. This research comprehensively reviews the impact of that dynamic nature on data quality and presents a data quality model, for sensors monitoring water quality, that can deal with this challenge and produce good-quality data. DBSCAN clustering and weather sensors are used to build the model, based on an extensive study of the relationship between the data of weather sensors and of sensors monitoring the water quality of lakes and beaches; a detailed theoretical analysis establishes the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared that encompasses five dimensions of data quality: outlier detection and removal, completeness, patterns of missing values, accuracy (checked with the help of cluster positions), and consistency, which is evaluated through the coefficient of variation (CoV) in a statistical analysis of the resulting clusters.
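A compact sketch of the two building blocks named in the abstract, DBSCAN-based outlier flagging on correlated sensor streams and per-cluster consistency via the coefficient of variation; the synthetic data, eps, and min_samples values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
# Illustrative joint stream: [air_temp, water_temp] with correlated noise
air = 20 + 5 * rng.standard_normal(500)
water = 0.8 * air + 2 + rng.standard_normal(500)
X = np.column_stack([air, water])
X[::50] += 15                       # inject a few outliers

labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(X)
outliers = X[labels == -1]          # DBSCAN marks noise points as -1
print(f"flagged {len(outliers)} outliers")

# Consistency per cluster via the coefficient of variation (CoV = std/mean)
for k in set(labels) - {-1}:
    cluster = X[labels == k]
    cov = cluster.std(axis=0) / cluster.mean(axis=0)
    print(f"cluster {k}: CoV per stream = {cov.round(3)}")
```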

Keywords: clustering, data quality, DBSCAN, internet of things (IoT)

Procedia PDF Downloads 107
27262 Long Term Examination of the Profitability Estimation Focused on Benefits

Authors: Stephan Printz, Kristina Lahl, René Vossen, Sabina Jeschke

Abstract:

Strategic investment decisions are characterized by high innovation potential and long-term effects on the competitiveness of enterprises. Due to the uncertainty and risks involved in this complex decision-making process, the need arises for well-structured support activities. A method that considers both cost and long-term added value is cost-benefit effectiveness estimation. One such method is the "profitability estimation focused on benefits" (PEFB) method developed at the Institute of Management Cybernetics at RWTH Aachen University. The method copes with the challenges of strategic investment decisions by integrating long-term non-monetary aspects while also mapping the chronological sequence of an investment within the organization's target system; it is thus a holistic approach to evaluating the costs and benefits of an investment. This participation-oriented method has been applied to business environments in many workshops. The workshops yielded a library of more than 96 cost aspects and 122 benefit aspects, which are preprocessed and comparatively analyzed with regard to their alignment to a series of risk levels. For the first time, an accumulation and a distribution of cost and benefit aspects with respect to their impact and probability of occurrence are given. The results give evidence that the PEFB method combines precise measures of financial accounting with the incorporation of benefits. Finally, the results constitute the basis for using information technology and data science for decision support within the PEFB method.
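A minimal sketch of accumulating cost and benefit aspects by impact and probability of occurrence per risk level, as the abstract describes; all aspect names, amounts, and probabilities below are hypothetical placeholders, not entries from the PEFB library.

```python
from collections import defaultdict

# Hypothetical aspect entries: (name, impact in EUR, probability, risk level)
aspects = [
    ("training effort",        -40_000, 0.9, "low"),
    ("license cost",           -25_000, 1.0, "low"),
    ("throughput gain",        120_000, 0.6, "medium"),
    ("quality improvement",     60_000, 0.4, "high"),
]

# Accumulate expected (probability-weighted) value per risk level
by_risk = defaultdict(float)
for name, impact, prob, risk in aspects:
    by_risk[risk] += impact * prob

expected_total = sum(by_risk.values())
print(dict(by_risk), "expected net benefit:", expected_total)
```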

Keywords: cost-benefit analysis, multi-criteria decision, profitability estimation focused on benefits, risk and uncertainty analysis

Procedia PDF Downloads 412
27261 Optimization of Platinum Utilization by Using Stochastic Modeling of Carbon-Supported Platinum Catalyst Layer of Proton Exchange Membrane Fuel Cells

Authors: Ali Akbar, Seungho Shin, Sukkee Um

Abstract:

The composition of the catalyst layers (CLs) plays an important role in the overall performance and cost of proton exchange membrane fuel cells (PEMFCs). Low platinum loading, high utilization, and more durable catalysts remain critical challenges for PEMFCs. In this study, a three-dimensional material network model is developed to visualize the nanostructure of carbon-supported platinum (Pt/C) and Pt/VACNT catalysts with the aim of maximizing catalyst utilization. The quadruple-phase, randomly generated CL domain is formulated using a quasi-random, stochastic Monte Carlo-based method. This statistical four-phase model (pore, ionomer, carbon, and platinum) closely mimics the manufacturing process of CLs. Various CL compositions are simulated to elucidate the effect of electron, ion, and mass transport paths on the catalyst utilization factor. Based on the simulation results, the effects of key factors such as porosity, ionomer content, and Pt weight percentage in the Pt/C catalyst are investigated at the representative elementary volume (REV) scale. The results show that the relationship between ionomer content and Pt utilization is in good agreement with existing experimental calculations. Furthermore, the model is applied to state-of-the-art Pt/VACNT CLs, and the simulations show exceptionally high catalyst utilization compared with Pt/C at different composition ratios. More importantly, this study reveals that for Pt/VACNT the maximum catalyst utilization depends on the spacing between the carbon nanotubes. The simulation results are expected to be useful in optimizing the nanostructural construction and composition of Pt/C and Pt/VACNT CLs.
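A toy sketch of the quadruple-phase random-domain idea: voxels of a cubic REV are assigned to pore, ionomer, carbon, or Pt phases at target volume fractions, and a crude utilization proxy is computed. The fractions and the neighborhood test are illustrative assumptions; the paper's utilization factor additionally requires connected electron, proton, and gas pathways.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 64                                  # voxels per side of the REV cube
# Illustrative volume fractions: pore, ionomer, carbon, platinum
fractions = np.array([0.45, 0.25, 0.27, 0.03])
PORE, IONOMER, CARBON, PT = 0, 1, 2, 3

# Quasi-random assignment: each voxel draws a phase at the target fractions
domain = rng.choice(4, size=(N, N, N), p=fractions)

# Pt is only active if supported on carbon: count Pt voxels touching carbon
pt = np.argwhere(domain == PT)
supported = 0
for x, y, z in pt:
    nb = domain[max(x-1, 0):x+2, max(y-1, 0):y+2, max(z-1, 0):z+2]
    supported += (nb == CARBON).any()
print(f"Pt utilization proxy (carbon-supported fraction): {supported/len(pt):.2f}")
```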

Keywords: catalyst layer, platinum utilization, proton exchange membrane fuel cell, stochastic modeling

Procedia PDF Downloads 91
27260 Bayesian Meta-Analysis to Account for Heterogeneity in Studies Relating Life Events to Disease

Authors: Elizabeth Stojanovski

Abstract:

Associations between life events and various forms of cancer have been identified. The purpose of a recent random-effects meta-analysis was to identify studies that examined the association between breast cancer risk and adverse events involving changes to financial status, including decreased income. The association was examined in four separate studies that were not consistent in design, location, or time frame. It was therefore of interest to pool information from the studies in a way that helps identify the characteristics that differentiate their results. Two random-effects Bayesian meta-analysis models are proposed to combine the reported estimates of the described studies. The proposed models allow major sources of variation to be taken into account, including study-level characteristics, between-study variance, and within-study variance, and they illustrate the ease with which uncertainty can be incorporated using a hierarchical Bayesian modelling approach.
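A minimal hierarchical random-effects sketch of the kind the abstract describes, assuming the PyMC library is available; the four effect sizes and standard errors are placeholders, not the studies' reported estimates.

```python
import numpy as np
import pymc as pm

# Placeholder per-study effect estimates and standard errors (4 studies)
y  = np.array([0.18, 0.35, -0.05, 0.22])
se = np.array([0.10, 0.15,  0.12, 0.20])

with pm.Model():
    mu  = pm.Normal("mu", 0.0, 1.0)          # overall (pooled) effect
    tau = pm.HalfNormal("tau", 0.5)          # between-study heterogeneity
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=4)  # study effects
    pm.Normal("y", mu=theta, sigma=se, observed=y)         # within-study error
    idata = pm.sample(2000, tune=1000, chains=2, progressbar=False)

print("pooled effect:", idata.posterior["mu"].mean().item(),
      "heterogeneity:", idata.posterior["tau"].mean().item())
```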

Keywords: random-effects, meta-analysis, Bayesian, variation

Procedia PDF Downloads 130
27259 Magneto-Rheological Damper Based Semi-Active Robust H∞ Control of Civil Structures with Parametric Uncertainties

Authors: Vedat Senol, Gursoy Turan, Anders Helmersson, Vortechz Andersson

Abstract:

In developing a mathematical model of a real structure, the simulation results of the model may not match the real structural response. This general problem arises during dynamic motion of the structure and may be modeled by means of parameter variations in the stiffness, damping, and mass matrices. These changes in parameters need to be estimated, and the mathematical model updated, to obtain higher control performance and robustness. In this study, a linear fractional transformation (LFT) is utilized for uncertainty modeling. Further, a general approach is presented to the design of an H∞ controller for a magneto-rheological damper (MRD) for vibration reduction in a building with mass, damping, and stiffness uncertainties.
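A rough sketch of an H∞ design for a single-degree-of-freedom structure, assuming python-control's mixed-sensitivity helper `mixsyn` (which needs the slycot backend); the plant and the weights are illustrative, and this is not the LFT/MRD formulation of the paper. If the `mixsyn` API differs in your version, treat this as pseudocode.

```python
import control as ct

# Nominal single-story structure, mass-normalized:
# x'' + 2*zeta*wn*x' + wn^2*x = u; wn and zeta are uncertain in practice
wn, zeta = 2.0, 0.05
G = ct.tf([1.0], [1.0, 2 * zeta * wn, wn**2])

# Mixed-sensitivity weights (illustrative): W1 penalizes tracking error,
# W2 penalizes control effort, W3 penalizes high-frequency closed-loop gain
W1 = ct.tf([0.5, 1.0], [1.0, 0.01])
W2 = ct.tf([0.1], [1.0])
W3 = ct.tf([1.0, 0.1], [0.01, 10.0])

K, CL, info = ct.mixsyn(G, w1=W1, w2=W2, w3=W3)
print("achieved H-infinity norm (gamma):", info[0])
```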

Keywords: uncertainty modeling, structural control, MR Damper, H∞, robust control

Procedia PDF Downloads 115
27258 Production and Leftovers Usage Policies to Minimize Food Waste under Uncertain and Correlated Demand

Authors: Esma Birisci, Ronald McGarvey

Abstract:

One of the common problems in the food service industry is demand uncertainty. This research presents a multi-criteria optimization approach to identify the efficient frontier of points lying between the minimum-waste and minimum-shortfall solutions in an uncertain demand environment. It also addresses correlation across demands for items (e.g., hamburgers are often demanded with french fries). Reducing overproduction waste (and its corresponding environmental impacts) and avoiding shortfalls (which leave some customers hungry) must be treated as two contradictory objectives in an all-you-care-to-eat food service operation. We identify optimal production adjustments relative to demand forecasts, demand thresholds for the utilization of leftovers, and percentages of demand to be satisfied by leftovers, considering two alternative metrics for overproduction waste: mass and greenhouse gas emissions. Demand uncertainty and demand correlations are addressed using a kernel density estimation approach. A statistical analysis of the changes in decision-variable values across each efficient frontier can then be performed to identify the key variables that could be modified to reduce the amount of wasted food with a minimal increase in shortfalls. We illustrate our approach with an application to empirical data from Campus Dining Services operations at the University of Missouri.
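A small sketch of the kernel-density idea: fit a joint KDE to correlated demand history, resample scenarios, and evaluate the waste/shortfall trade-off of one candidate production plan. The demand history and plan are synthetic assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Illustrative history: burger and fries demand are positively correlated
hist = rng.multivariate_normal([200, 180], [[400, 300], [300, 400]], size=90).T

kde = gaussian_kde(hist)                 # joint density of the two demands
sims = kde.resample(10_000, seed=1)      # correlated demand scenarios

production = np.array([[210], [190]])    # one candidate production plan
waste = np.maximum(production - sims, 0).sum(axis=0)
shortfall = np.maximum(sims - production, 0).sum(axis=0)
print(f"E[waste] = {waste.mean():.1f} servings, "
      f"E[shortfall] = {shortfall.mean():.1f} servings")
```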

Keywords: environmental studies, food waste, production planning, uncertain and correlated demand

Procedia PDF Downloads 341
27257 Uncertainty in Near-Term Global Surface Warming Linked to Pacific Trade Wind Variability

Authors: M. Hadi Bordbar, Matthew England, Alex Sen Gupta, Agus Santoso, Andrea Taschetto, Thomas Martin, Wonsun Park, Mojib Latif

Abstract:

Climate models generally simulate long-term reductions in the Pacific Walker Circulation with increasing atmospheric greenhouse gases. However, over two recent decades (1992-2011) there was a strong intensification of the Pacific trade winds that is linked with a slowdown in global surface warming. Using large ensembles of multiple climate models forced by increasing atmospheric greenhouse gas concentrations and starting from different ocean and/or atmospheric initial conditions, we reveal very diverse 20-year trends in the tropical Pacific climate, associated with considerable uncertainty in the globally averaged surface air temperature (SAT) in each model ensemble. This result suggests low confidence in our ability to accurately predict SAT trends over a 20-year timescale from external forcing alone. We show, however, that the uncertainty can be reduced when the initial oceanic state is adequately known and well represented in the model. Our analyses suggest that internal variability in the Pacific trade winds can mask the anthropogenic signal over a 20-year time frame and drive transitions between periods of accelerated global warming and temporary slowdown periods.
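A toy illustration of the ensemble diagnostic the abstract relies on: identical forcing plus member-specific internal variability yields a wide spread of 20-year SAT trends. The signal and noise magnitudes are invented for illustration, not from the models used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1992, 2012)            # a 20-year window
# Illustrative ensemble: common forced warming, member-specific variability
forced = 0.02 * (years - years[0])       # shared warming signal (K)
internal = 0.05 * rng.standard_normal((40, years.size)).cumsum(axis=1)
members = forced + internal              # 40 ensemble members

trends = np.array([np.polyfit(years, m, 1)[0] for m in members])  # K/yr
print(f"20-yr SAT trend: mean {trends.mean()*10:.2f} K/decade, "
      f"5th-95th pct {np.percentile(trends*10, [5, 95]).round(2)} K/decade")
```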

Keywords: trade winds, walker circulation, hiatus in the global surface warming, internal climate variability

Procedia PDF Downloads 232
27256 Stating Best Commercialization Method: An Unanswered Question from Scholars and Practitioners

Authors: Saheed A. Gbadegeshin

Abstract:

A commercialization method is a means of making inventions available at the market for final consumption. It is described as an important tool for keeping business enterprises sustainable and for improving national economic growth. Thus, there are several scholarly publications on it, either presenting or testing different methods of commercialization. However, young entrepreneurs, technologists, and scientists would like to know the best method for commercializing their innovations, and so this question arises: what is the best commercialization method? To answer it, a systematic literature review was conducted and practitioners were interviewed. The literature review revealed that there are many methods but that new methods are needed to improve commercialization, especially in these times of economic crisis and political uncertainty. Similarly, the empirical results showed that there are several methods, and that the best method is the one that reduces costs, reduces the risks associated with uncertainty, and improves customer participation and acceptability. It was therefore concluded that a new commercialization method is essential for today's high technologies, and such a method was presented.

Keywords: commercialization method, technology, knowledge, intellectual property, innovation, invention

Procedia PDF Downloads 302
27255 Towards an Enhanced Compartmental Model for Profiling Malware Dynamics

Authors: Jessemyn Modiini, Timothy Lynar, Elena Sitnikova

Abstract:

We present a novel enhanced compartmental model for malware spread analysis in cyber security. This paper applies cyber security data features to epidemiological compartmental models to model the infectious potential of malware; compartmental models are the most efficient way to calculate the infectious potential of a disease. We discuss and profile epidemiologically relevant data features from a Domain Name System (DNS) dataset and then apply these features, together with network traffic features, to epidemiological compartmental models. The paper demonstrates how epidemiological principles can be applied to the novel analysis of key cybersecurity behaviours and trends, and provides insight into threat modelling beyond that of kill-chain analysis. In applying deterministic compartmental models to a cyber security use case, the authors analyse their deficiencies and provide an enhanced stochastic model for cyber epidemiology. This enhanced compartmental model (the SUEICRN model) is contrasted with the traditional SEIR model to demonstrate its efficacy.
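For reference, a minimal sketch of the baseline SEIR model the paper compares against, integrated deterministically with SciPy; the rates and host counts are illustrative, and the authors' SUEICRN extension is not reproduced here.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma):
    """Classic SEIR rates: S->E (infection), E->I (incubation), I->R (removal)."""
    S, E, I, R = y
    N = S + E + I + R
    return [-beta * S * I / N,
            beta * S * I / N - sigma * E,
            sigma * E - gamma * I,
            gamma * I]

t = np.linspace(0, 60, 601)                       # days
y0 = [9990, 0, 10, 0]                             # hosts: 10 initially infected
sol = odeint(seir, y0, t, args=(0.6, 0.3, 0.2))   # illustrative rates
print("peak infected hosts:", int(sol[:, 2].max()))
```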

Keywords: cybersecurity, epidemiology, cyber epidemiology, malware

Procedia PDF Downloads 83
27254 Efficient Sampling of Probabilistic Program for Biological Systems

Authors: Keerthi S. Shetty, Annappa Basava

Abstract:

In recent years, the modelling of biological systems represented by biochemical reactions has become increasingly important in systems biology. Such systems are highly stochastic in nature, and probabilistic models are often used to describe them. One of the main challenges in systems biology is to incorporate absolute experimental data into a probabilistic model. This challenge arises because (1) some molecules may be present in relatively small quantities, (2) there is switching between individual elements present in the system, and (3) the process is inherently stochastic at the level at which observations are made. In this paper, we describe a novel approach to incorporating absolute experimental data into a probabilistic model using the tool R2. Through a case study of the transcription process in prokaryotes, we explain how biological systems can be written as probabilistic programs that combine experimental data into the model. The resulting model is then analysed in terms of intrinsic noise and exact sampling of the switching times between individual elements in the system. We concentrate mainly on inferring the number of genes in the ON and OFF states from experimental data.
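A small sketch of exact sampling of ON/OFF switching times, assuming a two-state telegraph model of a gene simulated with the Gillespie algorithm; the rate constants are illustrative, not those of the paper's prokaryotic case study.

```python
import numpy as np

def gillespie_telegraph(k_on=0.1, k_off=0.2, t_end=500.0, seed=0):
    """Exact stochastic simulation of a two-state (ON/OFF) gene switch,
    returning the times at which the state flips."""
    rng = np.random.default_rng(seed)
    t, state, times = 0.0, 0, []          # state 0 = OFF, 1 = ON
    while t < t_end:
        rate = k_on if state == 0 else k_off
        t += rng.exponential(1.0 / rate)  # waiting time to the next switch
        state = 1 - state
        times.append(t)
    return np.array(times)

switches = gillespie_telegraph()
durations = np.diff(np.concatenate([[0.0], switches]))
print("mean OFF dwell:", durations[::2].mean(),   # ~ 1/k_on
      "mean ON dwell:", durations[1::2].mean())   # ~ 1/k_off
```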

Keywords: systems biology, probabilistic model, inference, biology, model

Procedia PDF Downloads 311
27253 Next Generation UK Storm Surge Model for the Insurance Market: The London Case

Authors: Iacopo Carnacina, Mohammad Keshtpoor, Richard Yablonsky

Abstract:

Non-structural protection measures against flooding are becoming increasingly popular flood risk mitigation strategies. In particular, coastal flood insurance affects not only private citizens but also insurance and reinsurance companies, who may require it to retain solvency and to better understand the risks they face from a catastrophic coastal flood event. In this context, a framework is presented here to assess the risk of coastal flooding across the UK. The area has a long history of catastrophic flood events, including the Great Flood of 1953 and the 2013 Cyclone Xaver storm, both of which led to significant loss of life and property. The current framework leverages a technology based on a hydrodynamic model (Delft3D Flexible Mesh). This flexible mesh technology, coupled with a calibration technique, allows for better utilisation of computational resources, leading to higher resolution and more detailed results. The generation of a stochastic set of extratropical cyclone (ETC) events supports the evaluation of the financial losses for the whole area, while also accounting for correlations between different locations in different scenarios. Finally, the solution provides a detailed analysis for the Thames River, leveraging the information available on flood barriers and levees. Two realistic disaster scenarios for the Greater London area are simulated: in the first, the storm surge intensity is not high enough to fail London's flood defences; in the second, London's flood defences fail, highlighting the potential losses from a catastrophic coastal flood event.
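A toy sketch of how a stochastic event set feeds a loss exceedance curve of the kind insurers extract from such frameworks; the event rate and loss distribution are invented placeholders, not outputs of the Delft3D-based model.

```python
import numpy as np

rng = np.random.default_rng(11)
years, rate = 10_000, 0.8        # simulated years; mean ETC events per year
annual_losses = []
for _ in range(years):
    n = rng.poisson(rate)                        # storm count this year
    # Heavy-tailed per-event surge losses (lognormal), in million GBP
    annual_losses.append(rng.lognormal(2.0, 1.2, n).sum())
annual_losses = np.sort(annual_losses)[::-1]     # descending

for rp in (100, 200, 1000):                      # return periods in years
    print(f"1-in-{rp} yr annual loss: {annual_losses[years // rp]:.0f} M GBP")
```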

Keywords: storm surge, stochastic model, levee failure, Thames River

Procedia PDF Downloads 208
27252 Decision Support Tool for Green Roofs Selection: A Multicriteria Analysis

Authors: I. Teotónio, C.O. Cruz, C.M. Silva, M. Manso

Abstract:

Diverse stakeholders show different concerns when choosing green roof systems, and green roof solutions vary in their cost and performance. Decision-makers therefore continually face the difficult task of balancing benefits against the costs of green roofs. Decision analysis methods such as multicriteria analysis can be used when the decision-making process includes different perspectives, multiple objectives, and uncertainty. The present study adopts a multicriteria decision model to evaluate the installation of green roofs in buildings, determining the solution with the best trade-off between costs and benefits in agreement with the preferences of the users/investors. The methodology was applied to a real decision problem, assessing the preferences between different green roof systems for an existing building in Lisbon. This approach supports the decision-making process on green roofs and enables robust and informed decisions in urban planning while optimizing building retrofitting.
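A minimal weighted-sum illustration of a multicriteria ranking of green roof systems; the criteria, scores, and weights are hypothetical, and the study's actual model and elicitation procedure may differ.

```python
import numpy as np

# Illustrative criteria scores (0-1, higher is better) for three systems:
# columns = [installation cost, maintenance, thermal benefit, biodiversity]
systems = ["extensive", "semi-intensive", "intensive"]
scores = np.array([[0.9, 0.8, 0.5, 0.4],
                   [0.6, 0.6, 0.7, 0.7],
                   [0.3, 0.4, 0.9, 0.9]])

weights = np.array([0.35, 0.25, 0.25, 0.15])   # elicited from the investor
ranking = scores @ weights                      # simple additive aggregation
best = systems[int(np.argmax(ranking))]
print(dict(zip(systems, ranking.round(3))), "->", best)
```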

Keywords: decision making, green roofs, investors preferences, multicriteria analysis, sustainable development

Procedia PDF Downloads 153
27251 Supplier Risk Management: A Multivariate Statistical Modelling and Portfolio Optimization Based Approach for Supplier Delivery Performance Development

Authors: Jiahui Yang, John Quigley, Lesley Walls

Abstract:

In this paper, the authors develop a stochastic model of investment in supplier delivery performance development from a buyer's perspective. The authors propose a multivariate model based on a Multinomial-Dirichlet distribution within an empirical Bayesian inference framework, representing both the epistemic and aleatory uncertainties in deliveries. A closed-form solution is obtained, and the lower and upper bounds for both the optimal investment level and the expected profit under uncertainty are derived. The theoretical properties provide decision makers with useful insights into supplier delivery performance improvement problems where multiple delivery statuses are involved. The authors also extend the model from a single-supplier investment to a supplier portfolio, using a Lagrangian method to obtain a theoretical expression for the optimal investment level and overall expected profit. The model enables a buyer to know how the marginal expected profit/investment level of each supplier changes with respect to the budget, and which supplier should be invested in when additional budget is available. An application of the model is illustrated in a simulation study. Overall, the main contribution of this study is to provide an optimal investment decision-making framework for supplier development that takes into account multiple delivery statuses as well as multiple projects.
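A small sketch of the conjugate Multinomial-Dirichlet update over multiple delivery statuses, the probabilistic core the abstract describes; the statuses, pseudo-counts, and profit figures are illustrative assumptions, and the optimization layer of the paper is not reproduced.

```python
import numpy as np

# Delivery statuses: on time, slightly late, very late (illustrative)
alpha_prior = np.array([4.0, 2.0, 1.0])   # Dirichlet prior pseudo-counts
observed    = np.array([38, 8, 4])        # deliveries observed per status

alpha_post = alpha_prior + observed       # conjugate Multinomial-Dirichlet update
p_mean = alpha_post / alpha_post.sum()    # posterior mean status probabilities

profit_per_status = np.array([10.0, 4.0, -6.0])   # per-delivery profit (assumed)
expected_profit = p_mean @ profit_per_status
print("posterior status probs:", p_mean.round(3),
      "expected profit/delivery:", round(expected_profit, 2))
```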

Keywords: decision making, empirical bayesian, portfolio optimization, supplier development, supply chain management

Procedia PDF Downloads 266
27250 Measurement of Influence of the COVID-19 Pandemic on Efficiency of Japan’s Railway Companies

Authors: Hideaki Endo, Mika Goto

Abstract:

The global outbreak of the COVID-19 pandemic has seriously affected railway businesses. The number of railway passengers decreased as commuters and business travelers avoided crowded trains and the number of inbound tourists visiting Japan dropped sharply. This has affected not only railway businesses but also related businesses, including hotels, leisure businesses, and retail businesses in station buildings. In 2021, railway companies were divided into profitable and loss-making companies, which suggests that railway companies, particularly the loss-making ones, needed to reduce operational inefficiency. To measure the impact of COVID-19 and discuss sustainable management strategies for railway companies, we examine the cost inefficiency of Japanese listed railway companies by applying stochastic frontier analysis (SFA) to their operational and financial data. First, we employ the stochastic frontier cost function approach to measure inefficiency. The cost frontier function is formulated as a Cobb-Douglas type, and we estimate its parameters and the variables driving inefficiency. The study uses panel data comprising 26 Japanese listed railway companies from 2005 to 2020. This period includes several events that deteriorated the business environment, such as the financial crisis of 2007-2008 and the Great East Japan Earthquake of 2011, and we compare their impacts with those of the COVID-19 pandemic after 2020. Second, we identify the characteristics of the best-practice railway companies and examine the drivers of cost inefficiency. Third, we analyze the factors influencing cost inefficiency by comparing the profiles of the top 10 railway companies and the others before and during the pandemic. Finally, we examine the relationship between cost inefficiency and the implementation of efficiency measures at each railway company. We obtained four findings. First, most Japanese railway companies were at their most cost-efficient in 2014 and their least cost-efficient in 2020, during the COVID-19 pandemic; the second-worst year was 2009, under the influence of the financial crisis. We did not observe a significant impact of the 2011 Great East Japan Earthquake, because, with the exception of JR-EAST, no company in the sample operated in the affected area. Second, the best-practice railway companies are KEIO and TOKYU; the main reason for their good performance is that both operate in and near the densely populated Tokyo metropolitan area. Third, non-best-practice companies had a larger decrease in passenger-kilometers than best-practice companies, indicating that passengers made fewer long-distance trips because they refrained from inter-prefectural travel during the pandemic. Finally, companies that implemented more efficiency-improvement measures had higher cost efficiency, and they made effective use of their customer databases through proactive digital transformation (DX) investments in marketing and asset management.
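A compact sketch of the estimation machinery named in the abstract: maximum likelihood for a normal/half-normal stochastic cost frontier, here on synthetic data with a single log-output regressor rather than the paper's panel and full Cobb-Douglas specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=n)                     # log output (illustrative)
u = np.abs(rng.normal(0, 0.3, n))          # one-sided cost inefficiency
v = rng.normal(0, 0.2, n)                  # symmetric noise
logc = 1.0 + 0.7 * x + v + u               # Cobb-Douglas log cost frontier

def nll(theta):
    b0, b1, lns_v, lns_u = theta
    sv, su = np.exp(lns_v), np.exp(lns_u)
    eps = logc - b0 - b1 * x
    sigma, lam = np.hypot(sv, su), su / sv
    # Normal/half-normal cost-frontier density: eps = v + u, u >= 0
    return -np.sum(np.log(2 / sigma) + norm.logpdf(eps / sigma)
                   + norm.logcdf(eps * lam / sigma))

res = minimize(nll, x0=[0.0, 0.5, -1.0, -1.0], method="Nelder-Mead")
print("estimates [b0, b1, sigma_v, sigma_u]:",
      np.r_[res.x[:2], np.exp(res.x[2:])].round(3))
```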

Keywords: COVID-19 pandemic, stochastic frontier analysis, railway sector, cost efficiency

Procedia PDF Downloads 32
27249 Reducing Uncertainty in Climate Projections over Uganda by Numerical Models Using Bias Correction

Authors: Isaac Mugume

Abstract:

Since the beginning of the 21st century, climate change has been a pressing issue due to the reported rise in global temperature and changes in the frequency and severity of extreme weather and climatic events. The changing climate has been attributed to rising concentrations of greenhouse gases as well as environmental changes such as those in ecosystems and land use. Climatic projections have been carried out under the auspices of the Intergovernmental Panel on Climate Change, where a number of models have been run to inform us about the likelihood of future climates. Since one of the major forcings of the changing climate is the emission of greenhouse gases, different scenarios have been proposed and future climates for different periods presented. The global climate models project different impacts for different areas. While regional modeling is being carried out for high-impact studies, bias correction is less well documented, even though regional climate models suffer from biases that introduce uncertainty. This study addresses the problem by bias correcting the regional models. It uses the Weather Research and Forecasting model under different representative concentration pathways and corrects the products of these models using observed climatic data. The study finds that bias correction (e.g., running-mean bias correction, the best easy systematic estimator method, simple linear regression, nearest neighborhood, and weighted mean) improves the skill of the climatic projections and therefore reduces the uncertainty inherent in them.
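A minimal sketch of one of the listed techniques, simple linear-regression bias correction, fitted on a hindcast period and applied to a projection; the synthetic temperatures stand in for WRF output and station observations.

```python
import numpy as np

rng = np.random.default_rng(1)
obs = 22 + 3 * rng.standard_normal(360)          # observed monthly temperature
raw = 0.9 * obs + 4 + rng.standard_normal(360)   # biased model hindcast

# Simple linear-regression bias correction fitted on the hindcast period
a, b = np.polyfit(raw, obs, 1)                   # obs ~ a*raw + b
future_raw = 0.9 * (24 + 3 * rng.standard_normal(120)) + 4
future_corrected = a * future_raw + b

print(f"hindcast bias before: {(raw - obs).mean():+.2f} K, "
      f"after: {(a * raw + b - obs).mean():+.2f} K")
```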

Keywords: bias correction, climatic projections, numerical models, representative concentration pathways

Procedia PDF Downloads 87
27248 Vulnerability Assessment of Reinforced Concrete Frames Based on Inelastic Spectral Displacement

Authors: Chao Xu

Abstract:

The reasonable selection of ground motion intensity measures is one of the most important issues affecting the selection of input ground motions and the reliability of vulnerability analysis results. In this paper, inelastic spectral displacement is used as an alternative intensity measure to characterize the damage potential of ground motions. The inelastic spectral displacement is calculated based on modal pushover analysis, and an incremental dynamic analysis based on inelastic spectral displacement is developed. Probabilistic seismic demand analyses of a six-story and an eleven-story RC frame are carried out through cloud analysis and advanced incremental dynamic analysis. The sufficiency and efficiency of inelastic spectral displacement are investigated by means of regression and residual analysis and compared with those of elastic spectral displacement. Vulnerability curves are then developed based on inelastic spectral displacement. The study shows that inelastic spectral displacement reflects the impact on inelastic structural response of frequency components with periods larger than the fundamental period. The damage potential of ground motions on structures whose fundamental period lengthens due to structural softening can also be captured by inelastic spectral displacement. Compared with elastic spectral displacement, inelastic spectral displacement is a more sufficient and efficient intensity measure, which reduces the uncertainty of vulnerability analysis and the impact of input ground motion selection on the results.
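A short sketch of the cloud-analysis regression used to judge an intensity measure's efficiency: fit the power-law demand model ln(EDP) = ln(a) + b ln(IM) and read efficiency off the residual dispersion. The intensity and drift samples are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(8)
# Cloud analysis: one nonlinear run per record; IM = inelastic Sd (illustrative)
im = rng.lognormal(np.log(0.15), 0.6, 60)               # intensity measure (m)
drift = np.exp(1.1 * np.log(im) + 0.2 + 0.25 * rng.standard_normal(60))

# Power-law demand model: ln(EDP) = ln(a) + b ln(IM) + error
b, ln_a = np.polyfit(np.log(im), np.log(drift), 1)
resid = np.log(drift) - (ln_a + b * np.log(im))
beta = resid.std(ddof=2)        # dispersion: smaller = more "efficient" IM
print(f"b = {b:.2f}, ln a = {ln_a:.2f}, dispersion beta = {beta:.3f}")
```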

Keywords: vulnerability, probability seismic demand analysis, ground motion intensity measure, sufficiency, efficiency, inelastic time history analysis

Procedia PDF Downloads 322
27247 Location Uncertainty – A Probabilistic Solution for Automatic Train Control

Authors: Monish Sengupta, Benjamin Heydecker, Daniel Woodland

Abstract:

New train control systems rely mainly on Automatic Train Protection (ATP) and Automatic Train Operation (ATO) to control speed, and hence performance, dynamically. The ATP and the ATO form the vital elements within the CBTC (Communication Based Train Control) and ERTMS (European Rail Traffic Management System) system architectures. Reliable and accurate measurement of train location, speed, and acceleration is vital to the operation of train control systems. In the past, all CBTC and ERTMS systems have deployed a balise or equivalent to correct the uncertainty in the train location. Typically, a CBTC train is allowed to miss only one balise on the track; after that, the ATP system applies the emergency brake to halt the service, because the location uncertainty that grows within the train control system cannot tolerate missing more than one balise. Balises contribute a significant amount to wayside maintenance, and studies have shown that balises on the track also constrain future track layout changes and changes in speed profile. This paper investigates the causes of the location uncertainty that is currently experienced and considers whether it is possible to identify an effective filter that, in conjunction with appropriate sensors, would ascertain more accurate speed, distance, and location for a CBTC-driven train without the need for any external balises. An appropriate sensor fusion algorithm and an intelligent sensor selection methodology are deployed to obtain railway location and speed measurements at the highest precision. Similar techniques are already in use in aviation, satellite, submarine, and other navigation systems. Developing a model for speed control and the use of a Kalman filter are key elements of this research. The paper summarizes the research undertaken and its significant findings, highlighting the potential of alternative approaches to train positioning that would enable the removal of all trackside location-correction balises, leading to a huge reduction in maintenance and more flexibility in future track design.
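A minimal sketch of the Kalman-filter idea for train positioning: a constant-velocity model fused with noisy position measurements; the noise levels and the single odometry-like sensor are illustrative, whereas the paper's filter would fuse several sensors.

```python
import numpy as np

dt, q, r = 0.1, 0.5, 4.0          # step (s), process noise, measurement noise
F = np.array([[1, dt], [0, 1]])   # constant-velocity state: [position, speed]
H = np.array([[1.0, 0.0]])        # we only measure position
Q = q * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])
R = np.array([[r]])

x, P = np.zeros(2), np.eye(2) * 100.0
rng = np.random.default_rng(2)
true_pos = np.cumsum(np.full(200, 20.0 * dt))            # train at 20 m/s
for z in true_pos + rng.normal(0, np.sqrt(r), 200):      # noisy measurements
    x, P = F @ x, F @ P @ F.T + Q                        # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                       # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)                  # update
    P = (np.eye(2) - K @ H) @ P

print(f"final est: pos {x[0]:.1f} m, speed {x[1]:.1f} m/s; "
      f"pos std {np.sqrt(P[0, 0]):.2f} m")
```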

Keywords: ERTMS, CBTC, ATP, ATO

Procedia PDF Downloads 386
27246 The Investigation of Oil Price Shocks by Using a Dynamic Stochastic General Equilibrium: The Case of Iran

Authors: Bahram Fathi, Karim Alizadeh, Azam Mohammadbagheri

Abstract:

The aim of this paper is to investigate the role of oil price shocks in explaining business cycles in Iran using a dynamic stochastic general equilibrium (DSGE) approach. We derive a DSGE model that incorporates both world oil price (oil revenue) shocks and productivity shocks, calibrate it to the Iranian economy, and compare the moments from the theoretical model, with both single and multiple shocks, to those obtained from the actual data, to see the extent to which business cycles in Iran can be explained by total oil revenue shocks. The results indicate that productivity shocks are relatively more important to business cycles than oil shocks. The model with two shocks produces different values for volatility, but these values have the same ranking as those of the actual data for most variables, and the actual data are close to the ratios of standard deviations to output obtained from the model with two shocks; the model with only a productivity shock produces the volatility magnitudes most similar to the actual data. Next, we use impulse response functions (IRFs) to evaluate the capability of the model and the role of world oil price shocks. The IRFs show no effect of an oil shock on the capital stock or on labor hours, which is a feature of the model: when the log-linearized system of equations is solved numerically, investment and labor hours are not functions of the oil shock. This research therefore recommends using different techniques to check the model's robustness. One method is to make all decision variables functions of the oil shock by inducing stationarity in the model differently; another is to impose a bond adjustment cost. Finally, we present implications of the findings and interpretations in accordance with economic theory.

Keywords: oil price, shocks, dynamic stochastic general equilibrium, Iran

Procedia PDF Downloads 409
27245 Robust Stabilization of Rotational Motion of Underwater Robots against Parameter Uncertainties

Authors: Riku Hayashida, Tomoaki Hashimoto

Abstract:

This paper provides a robust stabilization method for the rotational motion of underwater robots in the presence of parameter uncertainties. Underwater robots are expected to be used for various work assignments, and the large variety of applications motivates researchers to develop control systems and technologies for them. Several control methods have been proposed so far for stabilizing the nominal system model of underwater robots, with no parameter uncertainty; parameter uncertainties are obstacles to implementing such nominal control methods on real underwater robots. The objective of this study is to establish a robust stabilization method for the rotational motion of underwater robots against parameter uncertainties. The effectiveness of the proposed method is verified by numerical simulations.

Keywords: robust control, stabilization method, underwater robot, parameter uncertainty

Procedia PDF Downloads 130
27244 Uncertainty Quantification of Fuel Compositions on Premixed Bio-Syngas Combustion at High-Pressure

Authors: Kai Zhang, Xi Jiang

Abstract:

The effect of fuel variability on the premixed combustion of bio-syngas mixtures is of great importance in bio-syngas utilisation. Uncertainty in the concentrations of fuel constituents such as H2, CO, and CH4 may lead to unpredictable combustion performance, combustion instabilities, and hot spots, which may deteriorate and damage the combustion hardware. Numerical modelling and simulation can assist in understanding the behaviour of bio-syngas combustion with pre-defined species concentrations, but evaluating the effect of variability in those concentrations is expensive. To be more specific, questions such as 'what is the burning velocity of bio-syngas at a specific equivalence ratio?' have been answered either experimentally or numerically, while questions such as 'what is the likely burning velocity when the precise concentrations of the bio-syngas constituents are unknown, but their ranges are prescribed?' have not. Uncertainty quantification (UQ) methods can be used to tackle such questions and assess the effects of fuel composition. An efficient probabilistic UQ method based on polynomial chaos expansion (PCE) techniques is employed in this study. The method relies on representing the random variables (combustion performance measures) with orthogonal polynomials, such as Legendre polynomials for uniform variables or Hermite polynomials for Gaussian variables. The PCE constructed via Galerkin projection provides easy access to global sensitivities such as the main, joint, and total Sobol indices. In this study, the impacts of fuel composition on the combustion of bio-syngas mixtures (adiabatic flame temperature and laminar flame speed) are presented using this PCE technique at several equivalence ratios. High-pressure effects on bio-syngas combustion instability are obtained using a detailed chemical mechanism, the San Diego mechanism. Guidance on reducing combustion instability arising from the upstream biomass gasification process is provided by quantifying the significant contributions of composition variations to the variance of the physicochemical properties of bio-syngas combustion. It was found that flame speed is very sensitive to the hydrogen variability in bio-syngas, and reducing hydrogen uncertainty from upstream biomass gasification processes can greatly reduce bio-syngas combustion instability. Variation of the methane concentration, although thought to be important, has limited impact on laminar flame instabilities, especially for lean combustion. Further studies on the UQ of the percentage concentration of hydrogen in bio-syngas could be conducted to guide the safer use of bio-syngas.
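A self-contained toy version of the PCE workflow: two uniform inputs (H2 and CO perturbations), a total-degree-2 normalized Legendre basis fitted by regression, and main Sobol indices read off the coefficients. The flame-speed surrogate is an invented polynomial, not a chemistry calculation.

```python
import numpy as np
from numpy.polynomial.legendre import legval
from itertools import product

rng = np.random.default_rng(4)

def model(h2, co):        # toy flame-speed surrogate (not real chemistry)
    return 0.3 + 0.5 * h2 + 0.1 * co + 0.2 * h2**2 + 0.05 * h2 * co

# Inputs scaled to [-1, 1]: H2 and CO mole-fraction perturbations
X = rng.uniform(-1, 1, size=(400, 2))
y = model(X[:, 0], X[:, 1])

# Total-degree-2 Legendre basis, normalized to unit variance per term
def leg(n, x):
    return legval(x, [0] * n + [1]) * np.sqrt(2 * n + 1)

orders = [(i, j) for i, j in product(range(3), range(3)) if i + j <= 2]
A = np.column_stack([leg(i, X[:, 0]) * leg(j, X[:, 1]) for i, j in orders])
c, *_ = np.linalg.lstsq(A, y, rcond=None)

# Sobol indices from the orthonormal-coefficient variance decomposition
var = sum(c[k]**2 for k, (i, j) in enumerate(orders) if (i, j) != (0, 0))
S_h2 = sum(c[k]**2 for k, (i, j) in enumerate(orders) if i > 0 and j == 0) / var
S_co = sum(c[k]**2 for k, (i, j) in enumerate(orders) if j > 0 and i == 0) / var
print(f"main Sobol indices: H2 {S_h2:.2f}, CO {S_co:.2f}")
```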

Keywords: bio-syngas combustion, clean energy utilisation, fuel variability, PCE, targeted uncertainty reduction, uncertainty quantification

Procedia PDF Downloads 250
27243 Design of Enhanced Adaptive Filter for Integrated Navigation System of FOG-SINS and Star Tracker

Authors: Nassim Bessaad, Qilian Bao, Zhao Jiangkang

Abstract:

The fiber-optic gyroscope in a strap-down inertial navigation system (FOG-SINS) suffers from precision degradation due to the influence of random errors. In this work, an enhanced Allan variance (AV) stochastic modeling method, combined with discrete wavelet transform (DWT) signal denoising, is implemented to estimate the random processes in the FOG signal. Furthermore, we devise a measurement-based, iterative, adaptive Sage-Husa nonlinear filter with augmented states to integrate a star tracker sensor with the SINS. The proposed filter adapts the measurement noise covariance matrix based on the available data, while the enhanced stochastic modeling scheme is used to tune the process noise covariance matrix and the parameters of the augmented-state Gauss-Markov process. Finally, the effectiveness of the proposed filter is demonstrated with data collected under laboratory conditions. The results show the filter's improved accuracy in comparison with the conventional Kalman filter (CKF).
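A minimal sketch of the Allan variance computation that underpins the stochastic modeling step, applied to a synthetic white-noise gyro signal; the sampling rate, noise level, and cluster times are illustrative.

```python
import numpy as np

def allan_variance(rate, fs, taus):
    """Overlapping Allan variance of a gyro rate signal sampled at fs (Hz)."""
    theta = np.cumsum(rate) / fs                 # integrate rate to angle
    out = []
    for tau in taus:
        m = int(tau * fs)                        # cluster size in samples
        d = theta[2*m:] - 2 * theta[m:-m] + theta[:-2*m]
        out.append((d**2).mean() / (2 * tau**2))
    return np.array(out)

rng = np.random.default_rng(6)
fs = 100.0
rate = 0.05 * rng.standard_normal(200_000)       # white (angle-random-walk) noise
taus = np.logspace(-1, 2, 10)
avar = allan_variance(rate, fs, taus)
# For pure white noise the Allan deviation falls with slope -1/2 in log-log
print(np.sqrt(avar).round(5))
```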

Keywords: inertial navigation, adaptive filtering, star tracker, FOG

Procedia PDF Downloads 54
27242 [Keynote Talk]: Evidence Fusion in Decision Making

Authors: Mohammad Abdullah-Al-Wadud

Abstract:

In the current era of automation and artificial intelligence, systems increasingly depend on the decision-making capabilities of machines. Such systems and applications range from simple classifiers to sophisticated surveillance systems based on traditional sensors and related equipment, which are becoming more common in the internet of things (IoT) paradigm. However, the available data for such problems are usually imprecise and incomplete, which leads to uncertainty in decisions made with traditional probability-based classifiers. This calls for a robust fusion framework that can combine the available information sources with some degree of certainty. The theory of evidence provides such a method for combining evidence from different (and possibly unreliable) sources or observers. This talk addresses the employment of the Dempster-Shafer theory of evidence in some practical applications.
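A small self-contained sketch of Dempster's rule of combination, the core operation of the theory the talk covers; the two "sensor" mass functions over {intruder, no_intruder} are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q                    # mass lost to contradiction
    k = 1.0 - conflict                           # normalization constant
    return {s: v / k for s, v in combined.items()}

# Two unreliable sensors reporting on {intruder, no_intruder} (illustrative)
I, N = frozenset({"intruder"}), frozenset({"no_intruder"})
E = I | N                                        # total ignorance
s1 = {I: 0.6, E: 0.4}
s2 = {I: 0.5, N: 0.2, E: 0.3}
print(dempster_combine(s1, s2))                  # belief concentrates on intruder
```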

Keywords: decision making, dempster-shafer theory, evidence fusion, incomplete data, uncertainty

Procedia PDF Downloads 397
27241 Accelerated Evaluation of Structural Reliability under Tsunami Loading

Authors: Sai Hung Cheung, Zhe Shao

Abstract:

It is of great interest to quantify the risk to structural dynamic systems from earthquake-induced tsunamis, in view of the recent events in Padang (2004) and Tohoku (2011), which brought huge losses of life and property. Despite continuous advancement in the computational simulation of tsunamis and in wave-structure interaction modeling, it remains computationally challenging to evaluate the reliability of a structural dynamic system when uncertainties related to the system and its modeling are taken into account. Failure of the structure in a tsunami-wave-structure system is defined as any response quantity of the system exceeding a specified threshold while the structure is subjected to dynamic wave impact from an earthquake-induced tsunami. In this paper, a method is proposed based on a novel integration of a recently proposed moving least squares response surface for stochastic sampling with the subset simulation algorithm. Its effectiveness is discussed by comparing its results with those obtained from the subset simulation algorithm without the response surface.
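A toy subset simulation sketch for a cheap analytic limit state, showing how intermediate thresholds turn a rare failure probability into a product of larger conditional probabilities; the one-step Metropolis resampling and all tuning constants are simplifying assumptions, and the paper's response-surface acceleration is not included.

```python
import numpy as np
from scipy.stats import norm

def subset_simulation(g, dim, n=2000, p0=0.1, seed=0):
    """Toy subset simulation: estimate P(g(X) > 0) for standard-normal X."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, dim))
    G = np.array([g(x) for x in X])
    p_f, n_seed = 1.0, int(p0 * n)
    for _ in range(20):                          # cap the number of levels
        idx = np.argsort(G)[::-1]                # most "failed" samples first
        b = G[idx[n_seed - 1]]                   # intermediate threshold
        if b >= 0.0:
            return p_f * np.mean(G > 0.0)        # final level reached
        p_f *= p0
        seeds, gs = X[idx[:n_seed]], G[idx[:n_seed]]
        Xl, Gl = list(seeds), list(gs)
        while len(Xl) < n:                       # one-step Metropolis from seeds
            i = rng.integers(n_seed)
            cand = seeds[i] + 0.7 * rng.standard_normal(dim)
            ratio = np.exp(0.5 * (seeds[i] @ seeds[i] - cand @ cand))
            gc = g(cand)
            if rng.random() < min(1.0, ratio) and gc > b:
                Xl.append(cand); Gl.append(gc)
            else:
                Xl.append(seeds[i]); Gl.append(gs[i])
        X, G = np.array(Xl), np.array(Gl)
    return p_f

# Limit state: failure when the sum of 10 standard normals exceeds 12
pf = subset_simulation(lambda x: x.sum() - 12.0, dim=10)
print(f"estimated pf ~ {pf:.1e}, exact {norm.sf(12 / np.sqrt(10)):.1e}")
```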

Keywords: response surface, stochastic simulation, structural reliability, tsunami, risk

Procedia PDF Downloads 646
27240 The Impact of Dispatching with Rolling Horizon Control in Sizing Thermal Storage for Solar Tower Plant Participating in Wholesale Spot Electricity Market

Authors: Navid Mohammadzadeh, Huy Truong-Ba, Michael Cholette

Abstract:

The solar tower (ST) plant is a promising technology for exploiting large-scale solar irradiation. With thermal energy storage, an ST plant has the potential to shift generation to high-electricity-price periods. However, the size of the storage limits the dispatchability of the plant, particularly when it must contend with uncertainty in forecasts of solar irradiation and electricity prices. The purpose of this study is to explore the storage size required when rolling horizon control (RHC) is employed for dispatch scheduling. To this end, RHC is benchmarked against a perfect-knowledge (PK) forecast and two day-ahead dispatching policies. By optimizing dispatch planning under the PK policy, the optimal achievable profit for a specific storage size is determined. A sensitivity analysis using Monte Carlo simulation is conducted, and the storage sizes for the RHC and day-ahead policies are determined with the objective of reaching the profit obtained under the PK policy. A case study is conducted for a hypothetical ST plant with thermal storage located in South Australia that intends to dispatch under two market scenarios: 1) fixed price and 2) wholesale spot price. The impact of each individual source of uncertainty on storage size is examined for January and August. The results show that dispatching with the RH controller reaches the optimal achievable profit with roughly 15% smaller storage than the day-ahead policies. The results of this study may be applied in the CSP plant design procedure.
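A stylized rolling-horizon dispatch sketch: at each hour a small linear program chooses discharge against forecast prices over a short window, only the first step is applied, and the horizon rolls forward. The storage limits, forecast shapes, and 6-hour window are invented placeholders for the plant model in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def rh_dispatch(s0, solar_fc, price_fc, s_max=8.0, p_max=2.0):
    """Solve one rolling-horizon window: choose discharge to maximize revenue."""
    H = len(price_fc)
    L = np.tril(np.ones((H, H)))                 # cumulative-sum operator
    inflow = L @ solar_fc
    A = np.vstack([L, -L])                       # keep 0 <= storage <= s_max
    b = np.concatenate([s0 + inflow, s_max - s0 - inflow])
    res = linprog(-price_fc, A_ub=A, b_ub=b, bounds=[(0, p_max)] * H)
    return res.x[0]                              # apply only the first step

rng = np.random.default_rng(9)
s, total = 4.0, 0.0
for hour in range(24):                           # roll forward hour by hour
    t = np.arange(hour, hour + 6)
    solar_fc = np.clip(np.sin(np.pi * (t % 24 - 6) / 12), 0, None)
    price_fc = 40 + 20 * np.sin(np.pi * (t - 18) / 12) + rng.normal(0, 2, 6)
    d = rh_dispatch(s, solar_fc, price_fc)
    s = min(max(s + solar_fc[0] - d, 0.0), 8.0)  # roll the true storage state
    total += price_fc[0] * d
print(f"revenue over the day: {total:.0f}")
```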

Keywords: solar tower plant, spot market, thermal storage system, optimized dispatch planning, sensitivity analysis, Monte Carlo simulation

Procedia PDF Downloads 97
27239 Communication of Expected Survival Time to Cancer Patients: How It Is Done and How It Should Be Done

Authors: Geir Kirkebøen

Abstract:

Most patients with serious diagnoses want to know their prognosis, in particular their expected survival time. As part of the informed consent process, physicians are legally obligated to communicate such information to patients. However, there is no established (evidence-based) 'best practice' for how to do this. The two questions explored in this study are: how do physicians communicate expected survival time to patients, and how should it be done? We explored the first, descriptive question in a study with Norwegian oncologists as participants. The study had a scenario part and a survey part. In the scenario part, the doctors were to imagine that a patient, recently diagnosed with a serious cancer diagnosis, had asked them: 'How long can I expect to live with such a diagnosis? I want an honest answer from you!' The doctors were to assume that the diagnosis was certain and that, from an extensive recent study, they had optimal statistical knowledge, described in detail as a right-skewed survival curve, about how long patients with this kind of diagnosis could be expected to live. The main finding was that very few of the oncologists would explain to the patient the variation in survival time described by the survival curve. The majority would not give the patient an answer at all. Of those who gave an answer, the typical reply was that survival time varies a lot, that it is hard to say in a specific case, that we will come back to it later, and so on. The survey part of the study clearly indicates that the main reason the oncologists would not deliver the mortality prognosis was discomfort with its uncertainty. The scenario part confirmed this finding: the majority of the oncologists explicitly used the uncertainty, the variation in survival time, as a reason not to give the patient an answer. Many studies show that patients want realistic information about their mortality prognosis and that they should be given hope. The question, then, is how to communicate the uncertainty of the prognosis in a realistic and optimistic, hopeful, way. Based on psychological research, our hypothesis is that the best way to do this is to describe explicitly the variation in survival time, the (usually) right-skewed survival curve of the prognosis, and to emphasize to the patient the (small) possibility of being a 'lucky outlier'. We tested this hypothesis in two scenario studies with lay people as participants. The data clearly show that people prefer to receive expected survival time as a median value together with explicit information about the survival curve's right-skewness (e.g., concrete examples of 'positive outliers'), and that communicating expected survival time this way not only provides people with hope but also gives them a more realistic understanding compared with the typical way expected survival time is communicated. Our data indicate that it is not the existence of uncertainty in the mortality prognosis that is the problem for patients, but how this uncertainty is, or is not, communicated and explained.

Keywords: cancer patients, decision psychology, doctor-patient communication, mortality prognosis

Procedia PDF Downloads 298
27238 Model of Optimal Centroids Approach for Multivariate Data Classification

Authors: Pham Van Nha, Le Cam Binh

Abstract:

Particle swarm optimization (PSO) is a population-based stochastic optimization algorithm inspired by the natural behavior of birds and fish in migration and foraging. PSO is considered a multidisciplinary optimization model that can be applied to various optimization problems. Its ideas are simple and easy to understand, but PSO has mostly been applied to simple model problems. We think that, to expand the applicability of PSO to complex problems, PSO should be described more explicitly in the form of a mathematical model. In this paper, we represent PSO as a mathematical model and apply it to multivariate data classification. First, PSO's general mathematical model (MPSO) is analyzed as a universal optimization model. Then, a Model of Optimal Centroids (MOC) is proposed for multivariate data classification. Experiments were conducted on several benchmark data sets to demonstrate the effectiveness of MOC compared with several previously proposed schemes.
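A compact sketch of the underlying idea, PSO searching for class centroids that minimize the total distance of points to their nearest centroid; the synthetic two-class data and the inertia/acceleration constants are illustrative, not the MOC formulation itself.

```python
import numpy as np

rng = np.random.default_rng(10)
# Two illustrative 2-D classes; centroids are encoded as one flat particle
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)), rng.normal([3, 3], 0.5, (50, 2))])

def cost(flat):
    c = flat.reshape(2, 2)                       # two candidate centroids
    d = np.linalg.norm(X[:, None, :] - c[None], axis=2)
    return d.min(axis=1).sum()                   # distance to nearest centroid

n, dim, w, c1, c2 = 30, 4, 0.7, 1.5, 1.5         # swarm size and PSO constants
pos = rng.uniform(-1, 4, (n, dim)); vel = np.zeros((n, dim))
pbest, pval = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pval.argmin()]

for _ in range(100):
    r1, r2 = rng.random((2, n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = np.array([cost(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[pval.argmin()]

print("optimal centroids:\n", gbest.reshape(2, 2).round(2))
```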

Keywords: analysis of optimization, artificial intelligence based optimization, optimization for learning and data analysis, global optimization

Procedia PDF Downloads 176
27237 Sensitivity and Uncertainty Analysis of Hydrocarbon-In-Place in Sandstone Reservoir Modeling: A Case Study

Authors: Nejoud Alostad, Anup Bora, Prashant Dhote

Abstract:

Kuwait Oil Company (KOC) has been producing from major reservoirs that are well defined, highly productive, and of superior reservoir quality. These reservoirs are maturing, and priority is shifting towards difficult reservoirs to meet future production requirements. This paper discusses the results of a detailed integrated study of one of the satellite complex fields discovered in the early 1960s. Following the acquisition of new 3D seismic data in 1998 and re-processing work in 2006, an integrated G&G study was undertaken to review the Lower Cretaceous prospectivity of this reservoir. Nine wells have been drilled in the area to date, with only three showing hydrocarbons in two formations. The average oil density is around 30° API, and the average porosity and water saturation of the reservoir are about 23% and 26%, respectively. The area is dissected by a number of NW-SE trending faults; structurally, it consists of horsts and grabens bounded by these faults and is hence compartmentalized. The Wara/Burgan formation consists of discrete, dirty sands with clean channel sand complexes. There is a dramatic change in the Upper Wara distributary channel facies, and the reservoir quality of the Wara and Burgan sections varies with the change of facies over the area, so predicting reservoir facies and quality from sparse well data is a major challenge in delineating the prospective area. To characterize the reservoir of the Wara/Burgan formation, an integrated workflow involving seismic, well, petrophysical, reservoir, and production engineering data has been used. Porosity and water saturation models are prepared and analyzed to predict the reservoir quality of the Wara and Burgan 3rd sand upper reservoirs, and boundary conditions for reservoir and non-reservoir facies are then defined by integrating facies, porosity, and water saturation. Based on detailed analyses of the volumetric parameters, the potential volumes of stock-tank oil initially in place (STOIIP) and gas initially in place (GIIP) were documented after running several probabilistic sensitivity analyses using the Monte Carlo simulation method. Sensitivity analysis of the probabilistic models of reservoir horizons, petrophysical properties, and oil-water contacts, and of their effect on reserves, clearly shows some alteration in the reservoir geometry; all these parameters have a significant effect on the oil in place. This study has helped to identify the uncertainty and risks of this particular prospect, and the company is planning to develop the area by drilling new wells.
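A minimal Monte Carlo STOIIP sketch of the kind of probabilistic volumetrics the abstract describes; the input distributions below are invented (only the ~23% porosity and ~26% water saturation averages come from the text), so the P90/P50/P10 values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)
n = 100_000
# Illustrative input distributions (not field data)
grv = rng.triangular(80e6, 120e6, 180e6, n)   # gross rock volume, m3
ntg = rng.uniform(0.4, 0.7, n)                # net-to-gross ratio
phi = rng.normal(0.23, 0.03, n)               # porosity (study average ~23%)
sw  = rng.normal(0.26, 0.05, n)               # water saturation (~26%)
bo  = rng.uniform(1.1, 1.3, n)                # formation volume factor

# STOIIP = GRV * NTG * phi * (1 - Sw) / Bo, converted from m3 to barrels
stoiip = grv * ntg * phi * (1 - sw) / bo * 6.2898 / 1e6   # millions of bbl
p90, p50, p10 = np.percentile(stoiip, [10, 50, 90])
print(f"STOIIP P90/P50/P10: {p90:.0f}/{p50:.0f}/{p10:.0f} MMbbl")
```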

Keywords: original oil-in-place, sensitivity, uncertainty, sandstone, reservoir modeling, Monte-Carlo simulation

Procedia PDF Downloads 170
27236 Subjective Probability and the Intertemporal Dimension of Probability to Correct the Misrelation Between Risk and Return of a Financial Asset as Perceived by Investors. Extension of Prospect Theory to Better Describe Risk Aversion

Authors: Roberta Martino, Viviana Ventre

Abstract:

From a theoretical point of view, the relationship between the risk associated with an investment and its expected value is directly proportional, in the sense that the market allows a greater return to those who are willing to take a greater risk. However, empirical evidence shows that this relationship is distorted in the minds of investors and is perceived as exactly the opposite. To understand the discrepancy between investors' actual actions and the theoretical predictions, this paper analyzes the essential parameters used in the valuation of financial assets, with particular attention to two elements: probability and the passage of time. Although these may seem at first glance to be two distinct elements, they are closely related. In particular, the error in the theoretical description of the relationship between risk and return lies in the failure to consider the impatience that is generated in the decision-maker when events that have not yet happened enter the decision-making context. In this context, probability loses its objective meaning and, in relation to the psychological aspects of the investor, can only be understood as the degree of confidence the investor has in the occurrence or non-occurrence of an event. Moreover, the concept of objective probability does not account for the intertemporality that characterizes financial activities, nor for the limited cognitive capacity of the decision maker. Cognitive psychology has shown that the mind makes a compromise between quality and effort when faced with very complex choices: to evaluate an event that has not yet happened, it is necessary to imagine it happening, and this projection into the future requires cognitive effort. This is what differentiates choices under risk from choices under uncertainty. Since the outcome of a choice under risk is received imminently, the mechanism of self-projection into the future is not needed to imagine the consequence of the choice, and decision makers can dwell on an objective analysis of the possibilities. Financial activities, on the other hand, develop over time, and objective probability is too static to capture the anticipatory emotions that the self-projection mechanism generates in the investor. Assuming that uncertainty is inherent in the valuation of events that have not yet occurred, the focus must shift from risk management to uncertainty management; only in this way can the intertemporal dimension of the decision-making environment and the haste generated by the financial market be accounted for. The work considers an extension of prospect theory with a temporal component, with the aim of describing the attitude towards risk with respect to the passage of time.
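For concreteness, a sketch of the standard prospect-theory building blocks the paper extends: the S-shaped value function and the inverse-S probability weighting function, using the Tversky-Kahneman (1992) parameter estimates; the example gamble is invented, and the paper's temporal extension is not reproduced here.

```python
import numpy as np

def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value: concave for gains, convex and steeper for losses."""
    return np.where(x >= 0, np.abs(x)**alpha, -lam * np.abs(x)**alpha)

def weight(p, gamma=0.61):
    """Probability weighting: overweights small p, underweights large p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# A gamble: 10% chance of +100, else 0; subjective vs expected valuation
p, x = 0.10, 100.0
print("expected value:  ", p * x)
print("prospect value:  ", weight(p) * value(x))   # small p is overweighted
```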

Keywords: impatience, risk aversion, subjective probability, uncertainty

Procedia PDF Downloads 76
27235 Mean and Volatility Spillover between US Stocks Market and Crude Oil Markets

Authors: Kamel Malik Bensafta, Gervasio Bensafta

Abstract:

The purpose of this paper is to investigate the relationship between oil prices and stock markets. The empirical analysis is conducted within the context of multivariate GARCH models, using a transformed version of the so-called BEKK parameterization. We show that the mean and the uncertainty of the US market are transmitted to the oil market and to the European market. We also identify an important transmission from WTI prices to Brent prices.
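A bare-bones sketch of the BEKK(1,1) conditional-covariance recursion that underlies such spillover analysis, where off-diagonal elements of A and B carry shock and volatility transmission; the parameter matrices and returns below are illustrative, not estimates from the paper's data.

```python
import numpy as np

rng = np.random.default_rng(13)
T, k = 1000, 2                      # observations; series: [US stocks, WTI oil]
eps = rng.standard_normal((T, k)) * 0.01          # placeholder return shocks

# Illustrative BEKK(1,1) parameters (C lower triangular)
C = np.array([[0.004, 0.0], [0.001, 0.003]])
A = np.array([[0.25, 0.05], [0.08, 0.20]])        # off-diagonals: shock spillover
B = np.array([[0.95, 0.01], [0.02, 0.94]])        # off-diagonals: volatility spillover

H = np.eye(k) * 1e-4
for t in range(1, T):
    e = eps[t - 1][:, None]
    H = C @ C.T + A.T @ (e @ e.T) @ A + B.T @ H @ B   # BEKK(1,1) recursion
print("final conditional covariance:\n", H.round(6))
```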

Keywords: oil volatility, stock markets, MGARCH, transmission, structural break

Procedia PDF Downloads 459