Search results for: stochastic uncertainty analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27671

27311 Reducing the Computational Overhead of Metaheuristics Parameterization with Exploratory Landscape Analysis

Authors: Iannick Gagnon, Alain April

Abstract:

The performance of a metaheuristic on a given problem class depends on the class itself and on the choice of parameters. Parameter tuning is the most time-consuming phase of the optimization process after the main calculations, and it often nullifies the speed advantage of metaheuristics over traditional optimization algorithms. Several off-the-shelf parameter tuning algorithms are available, but when the objective function is expensive to evaluate, these can be prohibitively expensive to use. This paper presents a surrogate-like method for finding adequate parameters using fitness landscape analysis on simple benchmark functions and real-world objective functions. The result is a simple compound similarity metric, based on the empirical correlation coefficient and a measure of convexity, which is used to find the benchmark functions best suited to serve as surrogates. A near-optimal parameter set is then found using fractional factorial design. The real-world problem of NACA airfoil lift coefficient maximization is used as a preliminary proof of concept. The overall aim of this research is to reduce the computational overhead of metaheuristics parameterization.
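
A minimal sketch of the kind of compound similarity metric described above, assuming the empirical (Pearson) correlation of sampled objective values and a crude midpoint-based convexity score as the two components; the 50/50 weighting and the benchmark functions are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def convexity_score(f, xs):
    """Crude convexity measure: fraction of random midpoint tests
    satisfying f((a+b)/2) <= (f(a)+f(b))/2."""
    rng = np.random.default_rng(0)
    idx = rng.integers(0, len(xs), size=(200, 2))
    a, b = xs[idx[:, 0]], xs[idx[:, 1]]
    return np.mean(f((a + b) / 2.0) <= (f(a) + f(b)) / 2.0)

def compound_similarity(f_real, f_bench, dim=2, n=500, w=0.5):
    """Weighted mix of the correlation of sampled objective values and
    the closeness of the two convexity scores (illustrative weighting)."""
    rng = np.random.default_rng(1)
    xs = rng.uniform(-5, 5, size=(n, dim))
    corr = np.corrcoef(f_real(xs), f_bench(xs))[0, 1]
    conv = 1.0 - abs(convexity_score(f_real, xs) - convexity_score(f_bench, xs))
    return w * corr + (1 - w) * conv

sphere = lambda x: np.sum(x**2, axis=-1)
rastrigin = lambda x: 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)
print(compound_similarity(rastrigin, sphere))  # high score -> usable surrogate
```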

Keywords: metaheuristics, stochastic optimization, particle swarm optimization, exploratory landscape analysis

Procedia PDF Downloads 124
27310 A Stochastic Vehicle Routing Problem with Ordered Customers and Collection of Two Similar Products

Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis

Abstract:

The vehicle routing problem (VRP) is a well-known problem in operations research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering or collecting products to or from customers who are scattered over a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles starts from a depot and visits the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods that are delivered or collected. In the present work, we present a specific capacitated stochastic vehicle routing problem with many realistic applications. We develop and analyze a mathematical model for a vehicle that starts its route from a depot and visits N customers according to a particular sequence in order to collect from them two similar but not identical products, which we call product 1 and product 2. Each customer possesses items of either product 1 or product 2 with known probabilities. The number of items of product 1 or product 2 that each customer possesses is a discrete random variable with known distribution. The actual quantity and the actual type of product that each customer possesses are revealed only when the vehicle arrives at the customer's site. The vehicle has two compartments, compartment 1 and compartment 2, suitable for loading product 1 and product 2, respectively. It is nevertheless permitted to load items of product 1 into compartment 2 and items of product 2 into compartment 1; these actions incur costs due to extra labor. The vehicle is allowed during its route to return to the depot to unload items of both products. The travel costs between consecutive customers and between the customers and the depot are known. The objective is to find the routing strategy that minimizes the total expected cost of servicing all customers. A suitable dynamic programming algorithm can be developed for determining the optimal routing strategy, and it can be proved that the optimal routing strategy has a threshold-type structure: for each customer, the optimal actions are characterized by some critical integers. This structural result enables us to design a special-purpose dynamic programming algorithm that operates only over strategies having this structural property. Extensive numerical results provide strong evidence that the special-purpose dynamic programming algorithm is considerably more efficient than the initial one. Furthermore, if the same problem is considered without the assumption that the customers are ordered, numerical experiments indicate that the optimal routing strategy can be computed only if N is less than or equal to eight.
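
A much-simplified sketch of the dynamic programming recursion described above, reduced to a single product and a single compartment (the paper's two-product, two-compartment model adds loading-cost terms but has the same depot-return recourse structure); customers, capacity, demand distributions, and costs below are all invented for the demo:

```python
import functools

# Customers 1..N served in a fixed order; node 0 is the depot.
N, Q = 4, 5                      # number of customers, vehicle capacity
demand = {1: {1: 0.5, 2: 0.5},   # customer -> {quantity: probability}
          2: {2: 0.6, 3: 0.4},
          3: {1: 0.7, 4: 0.3},
          4: {2: 0.5, 3: 0.5}}
c = lambda i, j: 1.0 if 0 in (i, j) else 0.5 * abs(i - j)  # travel costs

@functools.lru_cache(maxsize=None)
def f(i, q):
    """Minimum expected cost to go after serving customer i with free capacity q."""
    if i == N:                                # route ends: return to depot
        return c(i, 0)
    nxt = i + 1
    # Action 1: go directly to the next customer; if its load overflows,
    # make a forced round trip to the depot to unload (recourse cost).
    direct = c(i, nxt) + sum(
        p * (f(nxt, q - d) if d <= q else 2 * c(nxt, 0) + f(nxt, q + Q - d))
        for d, p in demand[nxt].items())
    # Action 2: unload at the depot before visiting the next customer.
    via = c(i, 0) + c(0, nxt) + sum(
        p * f(nxt, Q - d) for d, p in demand[nxt].items())
    return min(direct, via)                   # threshold policy emerges in q

start = c(0, 1) + sum(p * f(1, Q - d) for d, p in demand[1].items())
print("expected cost of optimal strategy:", round(start, 3))
```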

Keywords: dynamic programming, similar products, stochastic demands, stochastic preferences, vehicle routing problem

Procedia PDF Downloads 223
27309 Flood Prediction in the Karkheh River Basin Using a Stochastic ARIMA Model

Authors: Karim Hamidi Machekposhti, Hossein Sedghi, Abdolrasoul Telvari, Hossein Babazadeh

Abstract:

Floods have huge environmental and economic impacts, so flood prediction receives a great deal of attention. This study analysed the annual maximum streamflow (discharge) (AMS or AMD) of the Karkheh River in the Karkheh River Basin for flood prediction using an ARIMA model. For this purpose, we use the Box-Jenkins approach, a four-stage method comprising model identification, parameter estimation, diagnostic checking and forecasting (prediction). The main tools used for the ARIMA modelling were the SAS and SPSS software packages. Model identification was done by visual inspection of the ACF and PACF. SAS computed the model parameters using the ML, CLS and ULS methods. The diagnostic checks, namely the AIC criterion and the residual ACF (RACF) and residual PACF (RPACF) graphs, were used to verify the selected model. In this study, the best ARIMA model for the annual maximum discharge (AMD) time series was (4,1,1), with an AIC value of 88.87. The RACF and RPACF showed that the residuals were independent. The model was then used to forecast AMD for 10 future years, demonstrating its ability to predict floods of the river under study in the Karkheh River Basin. Model accuracy was checked by comparing the predicted and observed series using the coefficient of determination (R2).
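
A minimal sketch of the Box-Jenkins workflow in Python rather than SAS/SPSS, assuming the statsmodels package and a synthetic annual-maximum-discharge series standing in for the Karkheh data:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(42)
# Stand-in for the observed annual maximum discharge (AMD) series.
amd = pd.Series(1000 + np.cumsum(rng.normal(0, 50, 60)),
                index=pd.date_range("1958", periods=60, freq="YS"))

model = ARIMA(amd, order=(4, 1, 1))   # the (p, d, q) order selected in the study
fit = model.fit()
print(fit.aic)                        # compare candidate orders by AIC
print(fit.forecast(steps=10))         # 10-year-ahead flood forecast

# Residual diagnostics (the RACF idea): residuals should be white noise.
print(acorr_ljungbox(fit.resid, lags=[10]))
```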

Keywords: time series modelling, stochastic processes, ARIMA model, Karkheh river

Procedia PDF Downloads 270
27308 Aggregation of Electric Vehicles for Emergency Frequency Regulation of Two-Area Interconnected Grid

Authors: S. Agheb, G. Ledwich, G. Walker, Z. Tong

Abstract:

Frequency control has become more of a concern for the reliable operation of interconnected power systems due to the integration of low-inertia, volatile renewable energy sources into the grid. Moreover, in the case of a sudden fault, the system has less time to recover before widespread blackouts. Electric vehicles (EVs) have the potential to cooperate in emergency frequency regulation (EFR) through nonlinear control of the power system during large disturbances. There is not enough time to communicate with each individual EV in an emergency, so an aggregate model is necessary for a quick response that limits frequency deviation and prevents a blackout. In this work, an aggregate of EVs is modelled as a large virtual battery in each area, with various sources of uncertainty, such as the number of connected EVs and their initial state of charge (SOC), treated as stochastic variables. A control law based on a Lyapunov energy function was proposed and applied to the aggregate model to maximize the rate of reduction of total kinetic energy in a two-area network after the occurrence of a fault. The control methods are primarily based on charging/discharging control of the available EVs as shunt capacity in the distribution system. Three different cases were studied to capture the locational aspect of the model, with the virtual EV battery either in the center of the two areas or in the corners. The simulation results showed that EVs can help the generators shed kinetic energy quickly after a contingency. Early estimation of the possible contribution of EVs can help the supervisory control level transmit a prompt control signal to subsystems such as the aggregator agents and the grid. Characterizing the percentage contribution of EVs to EFR is the future goal of this study.
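
A toy illustration of the idea, assuming a linearized two-area swing model and a simple Lyapunov-motivated feedback u = -K*Δω for the aggregate EV battery in each area (with V = Σ ½M·Δω², this feedback makes dV/dt more negative); all constants are illustrative, not the paper's:

```python
import numpy as np

M = np.array([10.0, 8.0])        # inertia constants of the two areas
D = np.array([1.0, 0.8])         # damping coefficients
T12 = 2.0                        # tie-line synchronizing coefficient
K_ev = 5.0                       # aggregate EV feedback gain (u = -K * dw)
dt, steps = 0.01, 3000
dP = np.array([-1.0, 0.0])       # sudden loss of generation in area 1

for use_ev in (False, True):
    w = np.zeros(2)              # frequency deviations of areas 1 and 2
    angle, ke = 0.0, []          # tie-line angle difference, KE history
    for _ in range(steps):
        u = -K_ev * w if use_ev else np.zeros(2)   # EV charge/discharge power
        ptie = T12 * angle * np.array([1.0, -1.0])
        w = w + dt * (dP - D * w - ptie + u) / M
        angle = angle + dt * (w[0] - w[1])
        ke.append(0.5 * np.sum(M * w**2))          # kinetic-energy Lyapunov fn
    print("with EVs" if use_ev else "no EVs ", "peak KE:", round(max(ke), 4))
```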

Keywords: emergency frequency regulation, electric vehicle, EV, aggregation, Lyapunov energy function

Procedia PDF Downloads 75
27307 Uncertainty of the Brazilian Earth System Model for Solar Radiation

Authors: Elison Eduardo Jardim Bierhals, Claudineia Brazil, Deivid Pires, Rafael Haag, Elton Gimenez Rossini

Abstract:

This study evaluated the uncertainties involved in the solar radiation projections generated by the Brazilian Earth System Model (BESM) of the Weather and Climate Prediction Center (CPTEC), which participates in the Coupled Model Intercomparison Project Phase 5 (CMIP5), with the aim of assessing the quality of the model's solar radiation projections and thereby establishing the viability of its use. Two different scenarios elaborated by the Intergovernmental Panel on Climate Change (IPCC) were evaluated: RCP 4.5 (with more optimistic boundary conditions) and RCP 8.5 (with more pessimistic initial conditions). The accuracy of the model was verified with the Nash-Sutcliffe coefficient and the statistical bias, as these better represent such atmospheric patterns. BESM showed a tendency to overestimate solar radiation in most regions of the state of Rio Grande do Sul, and under the validation methods adopted in this study, it did not achieve satisfactory accuracy.
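
A short sketch of the two validation metrics used, assuming the Nash coefficient refers to the Nash-Sutcliffe efficiency; the arrays are placeholders for observed and BESM-projected radiation:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - SSE / variance of observations; 1 means a perfect fit."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def bias(obs, sim):
    """Mean error; positive values indicate overestimation by the model."""
    return np.mean(np.asarray(sim) - np.asarray(obs))

obs = np.array([18.2, 20.1, 22.4, 19.8, 17.5])  # MJ/m2/day, illustrative
sim = np.array([19.5, 21.8, 24.0, 21.2, 19.1])  # BESM-style overestimates
print(nash_sutcliffe(obs, sim), bias(obs, sim))
```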

Keywords: climate changes, projections, solar radiation, uncertainty

Procedia PDF Downloads 223
27306 Reverse Logistics: End-of-Life Product Acquisition and Sorting

Authors: Badli Shah Mohd Yusoff, Khairur Rijal Jamaludin, Rozetta Dollah

Abstract:

The emergence of reverse logistics and product recovery management is an important development in reconciling economic and environmental objectives by recapturing the value of end-of-life product returns. End-of-life products contain valuable modules, parts, residues and materials that can create value if recovered efficiently. The main objective of this study is to explore and develop a model that recovers as much of this economic value as reasonably possible by optimizing return acquisition and sorting so as to meet demand and maximize profit over time. In this setting, the remanufacturer benefits from forecasting future demand for used products under uncertainty in return volumes and quality. Formulated on a generic disassembly tree, the proposed model focuses on three reverse logistics activities, namely refurbishing, remanufacturing and disposal, incorporating all plausible quality levels of the returns. A stricter sorting policy decreases the quantity of products to be refurbished or remanufactured and increases the share of discarded products. Numerical experiments were carried out to investigate the characteristics and behaviour of the proposed model, implemented as a mathematical programming model in Lingo 16.0, for medium-term planning of return acquisition, disassembly (refurbishing or remanufacturing) and disposal activities. Moreover, the model analyses a number of trade-off decisions in order to maximize revenue from collected used products through the refurbishing and remanufacturing recovery options. The results showed that full utilization of the sorting process leads the system to acquire a smaller quantity of returns at minimal overall cost. Further, sensitivity analysis provides a range of possible scenarios to consider in optimizing the overall cost of refurbished and remanufactured products.
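
A minimal single-period sketch of the acquisition-and-sorting trade-off, assuming three quality grades routed to refurbishing, remanufacturing, or disposal and solved with SciPy's linprog; the paper's multi-period Lingo model is far richer, and all numbers here are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# x[g, r] = units of quality grade g sent to route r, flattened grade-major
# as [g0-refurb, g0-reman, g0-dispose, g1-..., g2-...].
revenue = np.array([60, 40, 0])            # per unit, by route
cost = np.array([[10, 25, 2],              # processing cost, grade 0 (good)
                 [20, 30, 2],              # grade 1 (medium)
                 [45, 50, 2]])             # grade 2 (poor)
acq = 8.0                                  # acquisition cost per return
supply = np.array([100, 150, 80])          # returns available per grade
demand_cap = np.array([120, 90])           # market caps: refurbish, remanufacture

profit = (revenue[None, :] - cost - acq).ravel()
A_ub, b_ub = [], []
for g in range(3):                         # each grade limited by supply
    row = np.zeros(9); row[3 * g:3 * g + 3] = 1
    A_ub.append(row); b_ub.append(supply[g])
for r in range(2):                         # refurbish/remanufacture demand caps
    row = np.zeros(9); row[r::3] = 1
    A_ub.append(row); b_ub.append(demand_cap[r])

res = linprog(-profit, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0, None)] * 9)
print(res.x.reshape(3, 3).round(1))        # optimal allocation by grade/route
print("max profit:", round(-res.fun, 1))
```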

Keywords: core acquisition, end of life, reverse logistics, quality uncertainty

Procedia PDF Downloads 266
27305 Understanding the Impact of Climate Change on Farmers' Technical Efficiency in Mali

Authors: Christelle Tchoupé Makougoum

Abstract:

In agriculture, differences across localities in terms of climate can create systematic variation in farmers' technical efficiency. Failure to account for climate variability could lead to wrong conclusions about farmers' technical efficiency, and it could bias the ranking of farmers according to their managerial performance. The literature on agricultural productivity has given little attention to this issue, although it is necessary for establishing to what extent climate affects farmers' efficiency. This article contributes to the previous literature in two ways. First, it proposes a new econometric model that accounts for the influence of climate change on technical efficiency in the specific setting of agriculture. Second, it estimates the inefficiency due to climate change and the true managerial performance of Malian farmers. Using Mali's agricultural census data and the CRU TS3 climatic database, we implemented an adjusted stochastic frontier methodology to account for the impact of environmental factors. The results yield three main findings. First, instability in temperature and rainfall decreases technical efficiency on average. Second, climate change modifies the ranking of farmers according to their efficiency scores. Third, although climate change is partly responsible for deviations from the frontier, farmers' limited ability to combine inputs in optimal proportions undermines efficiency even more. The study concludes that improving farmer efficiency should include fostering their resilience to climate change.
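
A compact sketch of the stochastic production frontier underlying this kind of analysis, assuming the standard half-normal specification y = Xβ + v - u with the Aigner-Lovell-Schmidt log-likelihood, estimated with SciPy on simulated data; the paper's climate-adjusted model adds climate covariates to this skeleton:

```python
import numpy as np
from scipy import optimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # log inputs
beta_true = np.array([1.0, 0.6, 0.3])
v = rng.normal(0, 0.2, n)                  # noise (e.g. weather shocks)
u = np.abs(rng.normal(0, 0.4, n))          # one-sided inefficiency term
y = X @ beta_true + v - u

def negloglik(theta):
    """Negative ALS log-likelihood: f(eps) = (2/s)*phi(eps/s)*Phi(-eps*lam/s)."""
    beta, sv, su = theta[:3], np.exp(theta[3]), np.exp(theta[4])
    sigma, lam = np.hypot(sv, su), su / sv
    eps = y - X @ beta
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

theta0 = np.r_[np.linalg.lstsq(X, y, rcond=None)[0], np.log(0.1), np.log(0.1)]
res = optimize.minimize(negloglik, theta0, method="BFGS")
print("beta:", res.x[:3].round(3), "sigma_v, sigma_u:", np.exp(res.x[3:]).round(3))
```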

Keywords: agriculture, climate change, stochastic production function, technical efficiency

Procedia PDF Downloads 486
27304 Investigation of Shear Strength and Dilative Behavior of Coarse-Grained Samples Using Laboratory Tests and Machine Learning Techniques

Authors: Ehsan Mehryaar, Seyed Armin Motahari Tabari

Abstract:

Coarse-grained soils are well known and commonly used in a wide range of geotechnical projects, including high earth dams and embankments, for their high shear strength. The most important engineering property of these soils is the friction angle, which represents the interlocking between soil particles and is applied widely in designing and constructing such earth structures. The friction angle and dilative behavior of coarse-grained soils can be estimated from empirical correlations with in-situ testing and physical properties of the soil, or measured directly in the laboratory by performing direct shear or triaxial tests. Unfortunately, large-scale testing is difficult, challenging, and expensive, and is not possible in most soil mechanics laboratories. It is therefore common to remove the large particles and test the remainder, which cannot be counted as an exact estimate of the parameters and behavior of the original soil. This paper describes a new methodology that scales the particle grading distribution of a well-graded gravel sample down to a smaller sample that can be tested in an ordinary direct shear apparatus, in order to estimate the stress-strain behavior, friction angle, and dilative behavior of the original coarse-grained soil as functions of its confining pressure and relative density, using a machine learning method. A total of 72 direct shear tests were performed at 6 different sizes, 3 different confining pressures, and 4 different relative densities. The Multivariate Adaptive Regression Splines (MARS) technique was used to develop an equation for predicting shear strength and dilative behavior from the size distribution of the coarse-grained soil particles. An uncertainty analysis was also performed to examine the reliability of the proposed equation.
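
A bare-bones illustration of the MARS idea, hand-rolling hinge (spline) basis functions with fixed knots and fitting them by ordinary least squares; real MARS (e.g. the py-earth package) searches knots and interactions adaptively, so this is only a sketch with synthetic stand-in data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-ins for predictors: mean particle size d50 (mm), confining
# pressure (kPa), relative density (%); response: friction angle (deg).
X = np.column_stack([rng.uniform(1, 40, 300),
                     rng.uniform(50, 400, 300),
                     rng.uniform(30, 90, 300)])
phi = (30 + 0.15 * X[:, 0] + 0.08 * np.maximum(X[:, 2] - 60, 0)
       - 0.01 * X[:, 1] + rng.normal(0, 0.8, 300))

def hinge_basis(X, knots):
    """MARS-style basis: max(0, x-k) and max(0, k-x) per (feature, knot)."""
    cols = [np.ones(len(X))]
    for j, ks in enumerate(knots):
        for k in ks:
            cols.append(np.maximum(X[:, j] - k, 0))
            cols.append(np.maximum(k - X[:, j], 0))
    return np.column_stack(cols)

knots = [np.quantile(X[:, j], [0.25, 0.5, 0.75]) for j in range(3)]
B = hinge_basis(X, knots)
coef, *_ = np.linalg.lstsq(B, phi, rcond=None)
pred = B @ coef
print("R^2:", 1 - np.sum((phi - pred) ** 2) / np.sum((phi - phi.mean()) ** 2))
```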

Keywords: MARS, coarse-grained soil, shear strength, uncertainty analysis

Procedia PDF Downloads 139
27303 Smart Production Planning: The Case of Aluminium Foundry

Authors: Samira Alvandi

Abstract:

In the context of the circular economy, production planning aims to eliminate waste and emissions and maximize resource efficiency. Historically, production planning has been challenged by the uncertainty and complexity arising from the interdependence and variability of products, processes, and systems. Manufacturers worldwide face new challenges in tackling environmental issues such as climate change, resource depletion, and land degradation. To manage this inherent complexity and uncertainty while maintaining profitability, the manufacturing sector needs a holistic framework that supports energy efficiency and carbon emission reduction schemes. The proposed framework addresses these challenges and integrates simulation modeling with optimization to find the machine-job allocation that maximizes throughput while minimizing total energy consumption and lead time. An aluminium refinery facility in western Sydney, Australia, is used as an exemplar to validate the proposed framework.

Keywords: smart production planning, simulation-optimisation, energy aware capacity planning, energy intensive industries

Procedia PDF Downloads 38
27302 RGB-D SLAM Algorithm Based on Pixel-Level Dense Depth Map

Authors: Hao Zhang, Hongyang Yu

Abstract:

Scale uncertainty is a well-known, challenging problem in visual SLAM. Because an RGB-D sensor provides depth information, RGB-D SLAM mitigates this scale uncertainty problem. However, due to the limitations of the physical hardware, the depth map output by an RGB-D sensor usually contains large areas of missing depth values, and this missing depth information affects the accuracy and robustness of RGB-D SLAM. To reduce these effects, this paper completes the missing areas of the depth map output by the RGB-D sensor and then fuses the completed dense depth map into ORB-SLAM2. By adding this step of obtaining pixel-level dense depth maps, a better RGB-D visual SLAM algorithm is obtained. To produce the dense depth maps, a deep learning model trained on indoor scenes is adopted. Experiments are conducted on public datasets and in real-world indoor environments. The experimental results show that the proposed SLAM algorithm is more robust than ORB-SLAM2.
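
One simple, classical way to fill holes in a sensor depth map, shown here as a nearest-valid-pixel fill with SciPy; the paper instead uses a learned indoor-scene depth-completion model, which this sketch does not reproduce:

```python
import numpy as np
from scipy import ndimage

def fill_depth_nearest(depth):
    """Replace zero (missing) depth values with the nearest valid depth."""
    missing = depth == 0
    # Indices of the nearest valid (non-missing) pixel for every location:
    _, (ii, jj) = ndimage.distance_transform_edt(missing, return_indices=True)
    return depth[ii, jj]

# Toy depth map (millimetres) with a hole typical of RGB-D sensors.
depth = np.full((6, 8), 1500, dtype=np.uint16)
depth[2:5, 3:6] = 0
print(fill_depth_nearest(depth))
```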

Keywords: RGB-D, SLAM, dense depth, depth map

Procedia PDF Downloads 109
27301 Fragility Analysis of Weir Structure Subjected to Flooding Water Damage

Authors: Oh Hyeon Jeon, WooYoung Jung

Abstract:

In this study, seepage analysis was performed for different water-level differences between the upstream and downstream sides of a weir structure in order to evaluate its safety against flooding. The Monte Carlo simulation method was employed, considering the probability distribution of the adjacent ground parameter, i.e., the permeability coefficient around the weir structure. The weir structure was modeled using a commercially available finite element program (ABAQUS). Based on this model, the water seepage characteristics during flooding were determined at each water level, taking into account the uncertainty of the corresponding permeability coefficient. Subsequently, a fragility function was constructed from the responses of the numerical analysis; this fragility function can be used to identify the weaknesses of a weir structure subjected to a flooding disaster. It can also serve as reference data for comprehensively predicting the probability of failure and the degree of damage of a weir structure.
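
A schematic of the Monte Carlo fragility computation, assuming a lognormally distributed permeability and a Darcy-type stand-in in place of the ABAQUS finite element runs; the failure criterion and all numbers are illustrative:

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
levels = np.linspace(1.0, 6.0, 11)          # upstream-downstream head (m)
k = rng.lognormal(np.log(1e-5), 0.5, 20000)  # permeability samples (m/s)

def seepage(h, k, path=12.0, area=1.0):
    """Darcy-type stand-in for the FE seepage response: q = k * i * A."""
    return k * (h / path) * area

q_crit = 6e-6                                # allowable seepage (m3/s), assumed
pf = np.array([(seepage(h, k) > q_crit).mean() for h in levels])

# Fit a lognormal-CDF fragility curve P(failure | water level):
frag = lambda h, med, beta: stats.norm.cdf(np.log(h / med) / beta)
popt, _ = curve_fit(frag, levels, pf, p0=[3.0, 0.4])
print("median capacity (m), dispersion:", popt.round(3))
```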

Keywords: weir structure, seepage, flood disaster fragility, probabilistic risk assessment, Monte-Carlo simulation, permeability coefficient

Procedia PDF Downloads 324
27300 Payment Subsidies for Environmentally-Friendly Agriculture on Rice Production in Japan

Authors: Danielle Katrina Santos, Koji Shimada

Abstract:

Environmentally-friendly agriculture has been promoted for over two decades as a response to the environmental challenges brought by climate change and biodiversity loss. Located above the equator, Japan may possibly benefit from future climate change, yet it is also one of the few developed countries located in the Asian monsoon climate region, making it vulnerable to the impacts of climate change. In this regard, the Japanese government has initiated policies to adapt to the adverse effects of climate change through the promotion and popularization of environmentally-friendly farming practices. This study aims to determine profit efficiency among environmentally-friendly rice farmers in Shiga Prefecture using the stochastic frontier approach. A cross-sectional survey was conducted among 66 farmers from the top rice-producing cities through a structured questionnaire. Results showed that the gross farm income of environmentally-friendly rice farmers was higher by JPY 316,223/ha. Production costs were also found to be higher among environmentally-friendly rice farmers, especially labor costs, which accounted for 32% of the total rice production cost. The resulting net farm income of environmentally-friendly rice farmers was higher by only JPY 18,044/ha. The stochastic frontier analysis further showed that the profit efficiency of conventional farmers was only 69%, compared with 76% for environmentally-friendly rice farmers. Furthermore, participation in environmentally-friendly agriculture, other types of subsidy, educational level, and farm size were significant factors positively influencing profit efficiency. The study concluded that substituting environmentally-friendly agriculture for conventional rice farming would increase profit efficiency thanks to the direct payment subsidy and the price premium received. Direct government policies would strengthen the popularization of environmentally-friendly agriculture, increasing the production of environmentally-friendly products and reducing the pollution load on the Lake Biwa ecosystem.

Keywords: profit efficiency, environmentally-friendly agriculture, rice farmers, direct payment subsidies

Procedia PDF Downloads 120
27299 Syntactic Ambiguity and Syntactic Analysis: Transformational Grammar Approach

Authors: Olufemi Olupe

Abstract:

Within linguistics, various approaches have been adopted to the study of language. One such approach is syntax, the aspect of the grammar of a language that deals with how words are put together to form phrases and sentences and how such structures are interpreted. Ambiguity, which is also germane to this discourse, is the uncertainty of meaning that arises when a phrase or sentence can be understood and interpreted in more than one way. In light of the above, this paper attempts a syntactic study of syntactic ambiguities in the English language, using the Transformational Generative Grammar (TGG) approach. In doing this, phrases and sentences were presented, with each description followed by relevant analysis. The findings reveal that ambiguity cannot always be disambiguated by means of syntactic analysis alone, without recourse to semantic interpretation. A further finding is that some syntactically ambiguous structures cannot be analysed into two surface structures even though they have more than one deep structure. The paper concludes that as long as ambiguity remains in language, it will continue to pose comprehension problems for second-language learners. Users of English as a second language must, however, make a conscious effort to avoid ambiguous usage in order to achieve effective communication.

Keywords: language, syntax, semantics, morphology, ambiguity

Procedia PDF Downloads 360
27298 Human Performance Evaluation of Advanced Cardiac Life Support Procedure Using Fault Tree and Bayesian Network

Authors: Shokoufeh Abrisham, Seyed Mahmoud Hossieni, Elham Pishbin

Abstract:

In this paper, a hybrid method based on fault tree analysis (FTA) and Bayesian networks (BNs) is employed to evaluate the quality of team performance in advanced cardiac life support (ACLS) procedures in the emergency department. Based on American Heart Association (AHA) guidelines, a categorization of staff actions leading to clinical incidents, and discussions with emergency medicine experts, a fault tree model of the ACLS procedure is constructed around human performance. The FTA model is then converted into a BN, and several scenarios are defined to demonstrate the efficiency and flexibility of the resulting BN model. A sensitivity analysis is also conducted to indicate the effects of team leader presence and of uncertainty in expert knowledge on the quality of ACLS. The proposed BN-based model shows how the results of risk analysis in medical procedures can be brought closer to reality compared with results obtained from FTA alone.
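
A tiny self-contained illustration of converting fault tree logic into a Bayesian-network-style probabilistic query, using brute-force enumeration over a toy ACLS-like tree (two staff-error basic events combined with a "team leader absent" factor; all gates and probabilities are invented for the sketch):

```python
import itertools

# Basic events and their (illustrative) prior probabilities.
p = {"late_defib": 0.10, "wrong_drug_dose": 0.05, "leader_absent": 0.20}

def top_event(e):
    """Toy fault tree logic: the procedure fails if a staff error occurs
    AND it goes unrecovered (leader absent, OR both errors at once)."""
    any_error = e["late_defib"] or e["wrong_drug_dose"]
    both = e["late_defib"] and e["wrong_drug_dose"]
    return any_error and (e["leader_absent"] or both)

def query(condition=None):
    """P(top event), optionally given evidence, by full enumeration."""
    num = den = 0.0
    for values in itertools.product([True, False], repeat=len(p)):
        e = dict(zip(p, values))
        w = 1.0
        for name, v in e.items():
            w *= p[name] if v else 1 - p[name]
        if condition is None or e[condition[0]] == condition[1]:
            den += w
            num += w * top_event(e)
    return num / den

print("P(failure):", round(query(), 5))
print("P(failure | leader absent): ", round(query(("leader_absent", True)), 5))
print("P(failure | leader present):", round(query(("leader_absent", False)), 5))
```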

Keywords: advanced cardiac life support, fault tree analysis, Bayesian belief networks, human performance, healthcare systems

Procedia PDF Downloads 120
27297 Copula Autoregressive Methodology for Simulation of Solar Irradiance and Air Temperature Time Series for Solar Energy Forecasting

Authors: Andres F. Ramirez, Carlos F. Valencia

Abstract:

Increasing interest in the application of renewable energy strategies and the drive to diminish the use of carbon-based energy sources have encouraged the development of novel strategies for integrating solar energy into the electricity network. Correctly including the fluctuating energy output of a photovoltaic (PV) system in an electric grid requires improvements in forecasting and simulation methodologies for solar energy potential, and an understanding not only of the mean value of the series but also of the underlying stochastic process. We present a methodology for the synthetic generation of bivariate time series of solar irradiance (shortwave flux) and air temperature, based on copula functions that represent the cross-dependence and temporal structure of the data. We explore the advantages of this nonlinear time series method over traditional approaches that transform the data to normal distributions as an intermediate step. The use of copulas gives flexibility in representing the serial variability of the real data in the simulation and allows more control over the desired properties of the data. We use distributions with a discrete mass at zero to capture the nature of solar irradiance, alongside vector generalized linear models for the time-dependent distributions of the bivariate series. We found that the copula autoregressive methodology, which includes the zero-mass characteristics of the solar irradiance series, generates a significant improvement over state-of-the-art strategies. These results will help to better understand the fluctuating nature of solar energy, the underlying stochastic process, and the potential of integrating PV generation into a country's electricity network. Experimental analysis and application to real data substantiate the usefulness and convenience of the proposed methodology for forecasting solar irradiance time series and solar energy across northern hemisphere, southern hemisphere, and equatorial zones.
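
A stripped-down sketch of the simulation idea: an AR(1) Gaussian copula drives temporal and cross-dependence, and the latent normals are pushed through the marginals, including a point mass at zero for night-time irradiance. The paper's fitted copula and vector GLM marginals are replaced here by fixed illustrative choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
T = 1000
ar = 0.8                                    # temporal (AR) dependence
rho = 0.5                                   # irradiance-temperature dependence
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

# Latent bivariate Gaussian AR(1) process with unit stationary variance.
z = np.zeros((T, 2))
for t in range(1, T):
    z[t] = ar * z[t - 1] + np.sqrt(1 - ar**2) * (L @ rng.normal(size=2))
u = stats.norm.cdf(z)                       # copula scale, Uniform(0, 1)

# Marginals: irradiance has a zero mass p0 (night/overcast hours) and a
# gamma body otherwise; temperature is normal.
p0 = 0.45
body = stats.gamma.ppf(np.clip((u[:, 0] - p0) / (1 - p0), 0, 1), a=2.0, scale=150.0)
irr = np.where(u[:, 0] < p0, 0.0, body)
temp = stats.norm.ppf(u[:, 1], loc=22.0, scale=5.0)

print("share of zero irradiance:", (irr == 0).mean())
print("corr(irr, temp) on sunny hours:",
      np.corrcoef(irr[irr > 0], temp[irr > 0])[0, 1].round(3))
```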

Keywords: copula autoregressive, solar irradiance forecasting, solar energy forecasting, time series generation

Procedia PDF Downloads 294
27296 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional neural networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizer profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work addresses the challenges surrounding the effectiveness of optimizers through a comprehensive analysis and evaluation. The primary focus of this investigation is the performance of different optimizers when employed in conjunction with the popular Rectified Linear Unit (ReLU) activation function, which is known for its favorable properties in prior research. Specifically, we evaluate these optimizers with both the original softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments is conducted on a well-established benchmark dataset for image classification, the Canadian Institute for Advanced Research dataset (CIFAR-10). The optimizers selected for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis comprises a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. This in-depth study contributes to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings of this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state of the art in image classification with convolutional neural networks.
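
A condensed sketch of the experimental loop, assuming PyTorch with a small ReLU CNN and synthetic CIFAR-10-shaped tensors so it runs anywhere; the study itself uses the real CIFAR-10 set and full training schedules:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 3, 32, 32)                  # CIFAR-10-shaped stand-in
y = torch.randint(0, 10, (512,))

def make_cnn():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(32 * 8 * 8, 10))  # logits; softmax in the loss

optimizers = {
    "Adam":     lambda p: torch.optim.Adam(p, lr=1e-3),
    "RMSprop":  lambda p: torch.optim.RMSprop(p, lr=1e-3),
    "Adadelta": lambda p: torch.optim.Adadelta(p),
    "Adagrad":  lambda p: torch.optim.Adagrad(p, lr=1e-2),
    "SGD":      lambda p: torch.optim.SGD(p, lr=1e-2, momentum=0.9),
}

for name, make_opt in optimizers.items():
    model, loss_fn = make_cnn(), nn.CrossEntropyLoss()
    opt = make_opt(model.parameters())
    for step in range(5):                        # tiny schedule for the demo
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    acc = (model(X).argmax(1) == y).float().mean().item()
    print(f"{name:9s} loss={loss.item():.3f} acc={acc:.3f}")
```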

Keywords: deep neural network, optimizers, RMSprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 74
27295 Optimizing the Insertion of Renewables in the Colombian Power Sector

Authors: Felipe Henao, Yeny Rodriguez, Juan P. Viteri, Isaac Dyner

Abstract:

Colombia is rich in natural resources and focuses heavily on the exploitation of water for hydroelectricity. Alternative cleaner energy sources, such as solar and wind power, have been largely neglected despite a) their abundance, b) the complementarities between hydro, solar and wind power, and c) the cost competitiveness of renewable technologies. The current limited mix of energy sources creates considerable weaknesses for the system, particularly when facing extreme dry weather conditions such as El Niño events. In the past, El Niño has exposed the true consequences of a system heavily dependent on hydropower: loss of power supply, high energy production costs, and loss of overall competitiveness for the country. Nonetheless, the share of hydroelectricity is expected to increase in the near future. In this context, this paper proposes a stochastic linear programming model to optimize the insertion of renewable energy systems (RES) into the Colombian electricity sector. The model considers cost-based generation competition between traditional energy technologies and alternative RES. This work evaluates the financial, environmental, and technical implications of different combinations of technologies. Various scenarios for the future evolution of technology costs are considered in a sensitivity analysis of the solutions, to assess the extent of RES participation in the Colombian power sector. The optimization results indicate that, even in the worst-case scenario, where costs remain constant, the Colombian power sector should diversify its portfolio of technologies and invest strongly in solar and wind power. Diversification through RES will make the system less vulnerable to extreme weather conditions, reduce overall system costs, cut CO2 emissions, and decrease the chances of national blackout events in the future. In contrast, the business-as-usual scenario indicates that the system will become more costly and less reliable.

Keywords: energy policy and planning, stochastic programming, sustainable development, water management

Procedia PDF Downloads 267
27294 Analysis of Network Performance Using Aspects of Quantum Cryptography

Authors: Nisarg A. Patel, Hiren B. Patel

Abstract:

Quantum cryptography is a point-to-point secure key generation technology that has emerged in recent times to provide absolute security. Researchers have started studying innovative approaches to exploit the security of quantum key distribution (QKD) for large-scale communication systems, and a number of approaches and models for the utilization of QKD for secure communication have been developed. The uncertainty principle of quantum mechanics created a new paradigm for QKD. One approach to the use of QKD involves network-fashioned security, the main goal being a point-to-point quantum network that exploits QKD technology for end-to-end network security via high-speed QKD. Other approaches and models that equip networks with QKD have also been introduced in the literature. The different approach that this paper deals with is using QKD within existing protocols that are widely used on the Internet, with the main objective of unconditional security. Our work is directed towards the analysis of QKD in mobile ad-hoc networks (MANETs).
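
A toy BB84 key-distribution simulation showing the basis-sifting step and the error rate that an intercept-resend eavesdropper induces, which is what QKD-enhanced protocols detect; this is a textbook illustration, not the MANET integration studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)           # 0 = rectilinear, 1 = diagonal

def measure(bits, prep_bases, meas_bases):
    """Matching basis returns the bit; a mismatched basis gives a coin flip."""
    flips = rng.integers(0, 2, len(bits))
    return np.where(prep_bases == meas_bases, bits, flips)

for eve_present in (False, True):
    bits, prep = alice_bits, alice_bases
    if eve_present:                           # intercept-resend attack
        eve_bases = rng.integers(0, 2, n)
        bits = measure(bits, alice_bases, eve_bases)
        prep = eve_bases                      # Eve re-sends in her own bases
    bob_bases = rng.integers(0, 2, n)
    bob_bits = measure(bits, prep, bob_bases)
    sift = alice_bases == bob_bases           # keep only matching-basis rounds
    qber = np.mean(alice_bits[sift] != bob_bits[sift])
    print("Eve present" if eve_present else "no Eve     ", "QBER =", round(qber, 3))
```

Without an eavesdropper the sifted key error rate is essentially zero; the attack pushes it toward 25%, which the legitimate parties detect by comparing a sample of the sifted key.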

Keywords: cryptography, networking, quantum, encryption and decryption

Procedia PDF Downloads 145
27293 Fire Safety Assessment of At-Risk Groups

Authors: Naser Kazemi Eilaki, Carolyn Ahmer, Ilona Heldal, Bjarne Christian Hagen

Abstract:

Older people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A disability can lengthen a person's escape time, and this becomes even more important when members of this target group live alone. This research deals with the fire safety of such people's buildings by means of probabilistic methods. For this purpose, fire safety is addressed by modeling the egress of the target group from a hazardous zone to a safe zone. A common type of detached house with a prevalent floor plan was chosen for the safety analysis, and a limit state function was developed according to a timeline evacuation model based on a two-zone smoke development model. An analytical computer model (B-Risk) is used to simulate smoke development. Since most of the parameters involved in the fire development model are uncertain, an appropriate probability distribution function was considered for each variable of indeterministic nature. To assess the safety and reliability of the at-risk groups, the fire safety index method was chosen to define the probability of failure (casualties) and the safety index (beta index). An improved harmony search meta-heuristic optimization algorithm was used to determine the beta index. A sensitivity analysis was performed to identify the most important and effective parameters for the fire safety of the at-risk group. The results showed that the area of openings and the distances to egress exits are the more important building parameters, and that occupant safety improves with increasing dimensions of the occupant space (building). Fire growth is more critical than the other parameters in a home without a detector and fire extinguishing system, but in a home equipped with these facilities it is less important. The type of disability has a great effect on the safety level of people living in the same home layout, with visually impaired occupants facing a greater risk of being trapped than those with other types of disability.
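
The probability-of-failure / safety-index relationship used above, in a minimal Monte Carlo form with an egress limit state g = ASET - RSET (available minus required safe egress time); the distributions are placeholders, and the harmony-search refinement is not reproduced:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 200_000

aset = rng.lognormal(np.log(300), 0.25, n)   # time to untenable conditions (s)
pre = rng.lognormal(np.log(60), 0.50, n)     # pre-movement time (s)
move = rng.lognormal(np.log(90), 0.40, n)    # travel time, slowed by disability
rset = pre + move

pf = np.mean(aset - rset < 0)                # P(g < 0): occupant trapped
beta = -norm.ppf(pf)                         # safety (reliability) index
print(f"pf = {pf:.4f}, beta = {beta:.2f}")
```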

Keywords: fire safety, at-risk groups, zone model, egress time, uncertainty

Procedia PDF Downloads 76
27292 Multivariate Rainfall Disaggregation Using MuDRain Model: Malaysia Experience

Authors: Ibrahim Suliman Hanaish

Abstract:

The disaggregation of daily rainfall using stochastic models formulated with a multivariate approach (MuDRain) is discussed in this paper. Seven rain gauge stations in Peninsular Malaysia are considered, at distances from the reference station ranging from 4 km to 160 km. The hourly rainfall data used cover the period from 1973 to 2008, and July and November are taken as examples of dry and wet periods, respectively. The cross-correlation among the rain gauges is estimated either from the hourly rainfall available at the neighboring stations or, where such records are lacking, from the daily rainfall. This paper discusses the applicability of the MuDRain model for disaggregating daily rainfall into hourly rainfall for both sources of cross-correlation. The goodness of fit of the model was judged by the reproduction of statistics such as the means, variances, coefficients of skewness, lag-zero cross-correlation coefficients, and lag-one autocorrelation coefficients. It was found that the correlation coefficients extracted from daily records are slightly higher than correlations based on the available hourly rainfall, especially for neighboring stations not more than 28 km apart. The results also showed that the MuDRain model did not reproduce the statistics very well, and the actual hyetographs were poorly reproduced by the synthetic hourly rainfall data. Meanwhile, a good fit was found between the distribution functions of the historical and synthetic hourly rainfall. These discrepancies are unavoidable because of the low cross-correlation of hourly rainfall. The overall performance indicated that the MuDRain model would not be an appropriate choice for disaggregating daily rainfall.

Keywords: rainfall disaggregation, multivariate disaggregation rainfall model, correlation, stochastic model

Procedia PDF Downloads 478
27291 Sensitivity Analysis of Interference of Localised Corrosion on Bending Capacity of a Corroded RC Beam

Authors: Mohammad Mahdi Kioumarsi

Abstract:

In this paper, the response surface method (RSM), the tornado diagram method, and non-linear finite element analysis were used to investigate the effect of four parameters on the residual bending capacity of a corroded RC beam. The parameters considered are the amount of localised cross-section reduction, the ratio of the pit distance on adjacent bars to the rebar distance, the concrete compressive strength, and the rebar tensile strength. The focus is on their influence at the bending ultimate limit state. Based on the results, the effects of the ratio of pit distance to rebar distance (Lp/Lr) and the ratio of the localised cross-section reduction to the original area of the rebar (Apit/A0) were found to be significant. The interference of localised corrosion on adjacent reinforcement bars reduces the bending capacity of an under-reinforced concrete beam. Such sensitivity analyses help identify the uncertain parameters that have the greatest influence on the performance of the structure.
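
A skeletal version of the tornado-diagram step: swing each parameter between low and high bounds while holding the others at base values, then rank by output swing. The bending-capacity function here is a crude stand-in for the non-linear FE model, and all bounds are illustrative:

```python
import numpy as np

base = {"Apit_A0": 0.15, "Lp_Lr": 0.5, "fc": 30.0, "fy": 500.0}
bounds = {"Apit_A0": (0.05, 0.30), "Lp_Lr": (0.1, 1.0),
          "fc": (25.0, 40.0), "fy": (450.0, 550.0)}

def capacity(p):
    """Illustrative response surface for residual bending capacity (kNm)."""
    interference = 1.0 - 0.3 * max(0.0, 1.0 - p["Lp_Lr"])   # pit proximity effect
    steel = p["fy"] * (1.0 - p["Apit_A0"]) * interference
    return 0.9 * steel * (1.0 + 0.004 * p["fc"])

swings = {}
for name, (lo, hi) in bounds.items():
    out = []
    for v in (lo, hi):                      # one-at-a-time low/high swing
        p = dict(base); p[name] = v
        out.append(capacity(p))
    swings[name] = (min(out), max(out))

# Largest swing first, as the bars would appear on a tornado diagram:
for name, (lo, hi) in sorted(swings.items(), key=lambda s: s[1][0] - s[1][1]):
    print(f"{name:8s} swing = {hi - lo:8.1f}")
```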

Keywords: localised corrosion, concrete beam, sensitivity analyses, ultimate capacity

Procedia PDF Downloads 233
27290 Resilience-Based Emergency Bridge Inspection Routing and Repair Scheduling under Uncertainty

Authors: Zhenyu Zhang, Hsi-Hsien Wei

Abstract:

Highway network systems play a vital role in disaster response for disaster-damaged areas. Damaged bridges in such network systems can impede disaster response by disrupting the transportation of rescue teams or humanitarian supplies. Therefore, emergency inspection and repair of bridges, to quickly collect bridge damage information and restore the functionality of highway networks, is of paramount importance to disaster response. A widely used measure of a network's capability to recover from disasters is resilience. To enhance highway network resilience, many studies have developed repair scheduling methods for prioritizing bridge-repair tasks. These methods assume that repair activities are performed after the damage to a highway network is fully understood via inspection, although inspecting all bridges in a regional highway network may take days, leading to significant delays in repairing bridges. In reality, emergency repair activities can commence as soon as damage data are obtained for the bridges that are crucial to emergency response. Given that emergency bridge inspection and repair (EBIR) activities are executed simultaneously in the response phase, real-time interactions between these activities can occur: the blockage of highways due to repair activities can affect inspection routes, which in turn affect emergency repair scheduling by providing real-time information on bridge damage. However, the impact of such interactions on the optimal emergency inspection routes (EIR) and emergency repair schedules (ERS) has not been discussed in prior studies. To overcome these deficiencies, this study develops a routing and scheduling model for EBIR that accounts for real-time inspection-repair interactions to maximize highway network resilience. A stochastic, time-dependent integer program is proposed for the complex, real-time interacting EBIR problem, given multiple inspection and repair teams at locations set post-disaster. A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm is developed to accelerate the evolution process. Computational tests are performed using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that the simultaneous implementation of bridge inspection and repair activities can significantly improve highway network resilience. Moreover, the deployment of inspection and repair teams should match each other, and network resilience will not improve once a unilateral increase in inspection teams or repair teams exceeds a certain level. This study contributes to both knowledge and practice. First, the developed mathematical model makes it possible to capture the impact of real-time inspection-repair interactions on inspection routing and repair scheduling and to efficiently derive optimal EIR and ERS on a large, complex highway network. Moreover, this study contributes to the organizational dimension of highway network resilience by providing optimal strategies for highway bridge management. With the decision support tool, disaster managers are able to identify the most critical bridges for disaster management and decide on proper inspection and repair strategies to improve highway network resilience.

Keywords: disaster management, emergency bridge inspection and repair, highway network, resilience, uncertainty

Procedia PDF Downloads 89
27289 Secrecy Analysis in Downlink Cellular Networks in the Presence of D2D Pairs and Hardware Impairment

Authors: Mahdi Rahimi, Mohammad Mahdi Mojahedian, Mohammad Reza Aref

Abstract:

In this paper, a cellular communication scenario with a transmitter and an authorized user is considered in order to analyze its secrecy in the face of eavesdroppers and the interference propagated unintentionally through the communication network. It is also assumed that some D2D pairs and eavesdroppers are randomly located in the cell. Assuming hardware impairment, the perfect connection probability is calculated analytically, and an upper bound is provided for the secrecy outage probability. In addition, a method based on the random activation of D2D pairs is proposed to improve network security. Finally, the analytical results are verified by simulations.

Keywords: physical layer security, stochastic geometry, device-to-device, hardware impairment

Procedia PDF Downloads 145
27288 Application of Fuzzy TOPSIS in Evaluating Green Transportation Options for Dhaka Megacity

Authors: Md. Moniruzzaman, Thirayoot Limanond

Abstract:

As its most visible indicator, a city's transport system points out how developed the city is. The Dhaka megacity has a mixed composition of motorized and non-motorized modes of transport, and the number of vehicles is escalating over time. This obviously imposes environmental costs, such as air pollution and noise, which are degrading the quality of life in the city. Sustainable transport, and more specifically green transport from an environmental point of view, has therefore become a prime choice for transport professionals in coping with the crisis. The city authority is currently planning to implement sustainable transport systems that can serve the pressing present demand and meet future needs effectively. This study focuses on the selection and evaluation of green transportation systems among potential alternatives on a priority basis. In this paper, fuzzy TOPSIS, a multi-criteria decision method, is presented to identify the highest-priority alternative. In the first step, twenty-one individual criteria for sustainability assessment are selected. In the following step, experts provide linguistic ratings of the potential alternatives with respect to the selected criteria, and the approach is used to generate aggregate scores for sustainability assessment and selection of the best alternative. In the third step, a sensitivity analysis is performed to understand the influence of the criteria weights on the decision-making process. The key strength of the fuzzy TOPSIS approach is its practical applicability, generating good-quality solutions even under uncertainty.
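
A compact fuzzy TOPSIS sketch with triangular fuzzy ratings, vertex-distance separations from the fuzzy ideal solutions, and closeness-coefficient ranking; the three transport alternatives, two benefit criteria, and linguistic scale are illustrative stand-ins for the paper's 21-criterion evaluation:

```python
import numpy as np

# Linguistic scale -> triangular fuzzy numbers (l, m, u).
scale = {"P": (1, 1, 3), "F": (3, 5, 7), "G": (5, 7, 9), "VG": (7, 9, 9)}

alts = ["BRT", "Bicycle lanes", "Electric buses"]
ratings = [["G", "F"],                       # rows: alternatives
           ["VG", "P"],                      # cols: criteria (benefit type)
           ["G", "G"]]
weights = np.array([(0.3, 0.5, 0.7), (0.5, 0.7, 0.9)])  # fuzzy criterion weights

R = np.array([[scale[r] for r in row] for row in ratings], dtype=float)
R /= R[:, :, 2].max(axis=0)[None, :, None]   # normalize benefit criteria by u*
V = R * weights[None, :, :]                  # weighted normalized fuzzy matrix

fpis = V.max(axis=0)                         # fuzzy positive ideal solution
fnis = V.min(axis=0)                         # fuzzy negative ideal solution
d = lambda a, b: np.sqrt(((a - b) ** 2).mean(axis=-1))   # vertex distance

d_pos = d(V, fpis[None]).sum(axis=1)
d_neg = d(V, fnis[None]).sum(axis=1)
cc = d_neg / (d_pos + d_neg)                 # closeness coefficient

for a, c in sorted(zip(alts, cc), key=lambda t: -t[1]):
    print(f"{a:15s} CC = {c:.3f}")
```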

Keywords: green transport, multi-criteria decision approach, urban transportation system, sustainability assessment, fuzzy theory, uncertainty

Procedia PDF Downloads 266
27287 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user, to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in hopes of abating poaching. This research develops a classification model, using machine learning algorithms, to aid in forecasting future attacks; the model is both easy to train and performs well when compared to other models. We demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (logistic regression, support vector machines, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by training on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable with a large number of missing observations. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations of non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules based on entire seasons.
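
A condensed sketch of the modelling pipeline described above, pairing an imputation step (scikit-learn's IterativeImputer, similar in spirit to the multiple-imputation methods named) with a stochastic gradient boosting classifier; the poaching features, missingness pattern, and labels are synthetic placeholders:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 3000
X = np.column_stack([rng.uniform(0, 50, n),      # animal population density
                     rng.uniform(0, 2000, n),    # elevation
                     rng.uniform(0, 30, n)])     # distance to road (km)
logit = 0.08 * X[:, 0] - 0.12 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan            # 20% missing observations

X_imp = IterativeImputer(random_state=0).fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(X_imp, y, random_state=0)

# subsample < 1 turns gradient boosting into *stochastic* gradient boosting.
gbm = GradientBoostingClassifier(subsample=0.7, n_estimators=200, random_state=0)
gbm.fit(Xtr, ytr)
print("AUC:", round(roc_auc_score(yte, gbm.predict_proba(Xte)[:, 1]), 3))
```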

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 264
27286 Further Analysis of Global Robust Stability of Neural Networks with Multiple Time Delays

Authors: Sabri Arik

Abstract:

In this paper, we study the global asymptotic robust stability of delayed neural networks with norm-bounded uncertainties. By employing Lyapunov stability theory and the homeomorphic mapping theorem, we derive new sufficient conditions ensuring the existence, uniqueness, and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. An important aspect of our results is their low computational complexity, as they can be verified by checking some properties of symmetric matrices associated with the uncertainty sets of the network parameters. The obtained results are shown to generalize some previously published results. Some comparative numerical examples are also constructed to compare our results with closely related results in the existing literature.

Keywords: neural networks, delayed systems, Lyapunov functionals, stability analysis

Procedia PDF Downloads 500
27285 Determining the Effects of Wind-Aided Midge Movement on the Probability of Coexistence of Multiple Bluetongue Virus Serotypes in Patchy Environments

Authors: Francis Mugabi, Kevin Duffy, Joseph J. Y. T. Mugisha, Obiora Collins

Abstract:

Bluetongue virus (BTV) has 27 serotypes, some of which coexist in patchy (distinct) environments, which makes its control difficult. Wind-aided midge movement is a known mechanism in the spread of BTV, but its effects on the probability of coexistence of multiple BTV serotypes are not clear. Deterministic and stochastic models for r BTV serotypes in n discrete patches connected by midge and/or cattle movement are formulated and analyzed. For the deterministic model without midge and cattle movement, it is shown using the comparison principle that if the patch reproduction numbers satisfy R0(i,j) < 1 for i = 1, 2, ..., n and j = 1, 2, ..., r, all serotypes go extinct, whereas if R0(i,j) > 1, competitive exclusion takes place. Using numerical simulations, it is shown that when the n patches are connected by midge movement, coexistence takes place. To account for demographic and movement variability, the deterministic model is transformed into a continuous-time Markov chain stochastic model. Utilizing a multitype branching process, it is shown that midge movement can have a large effect on the probability of coexistence of multiple BTV serotypes. The probability of coexistence can be brought to zero when control interventions that directly kill adult midges are applied. These results indicate the significance of wind-aided midge movement and vector control interventions for the coexistence and control of multiple BTV serotypes in patchy environments.
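
A small numerical illustration of the multitype branching-process step: the extinction probability (whose complement is the outbreak/persistence probability) is the minimal fixed point of the offspring probability-generating functions, found by iterating from zero. The two types and Poisson offspring means below are invented for the demo:

```python
import numpy as np

# Mean offspring matrix M[i, j]: a type-i individual (e.g. infected midge
# or infected host) produces Poisson(M[i, j]) type-j offspring.
# Poisson pgf: exp(m * (s - 1)); spectral radius of M here exceeds 1.
M = np.array([[0.6, 0.9],
              [0.8, 0.4]])

def pgf(s):
    """Vector of offspring pgfs f_i(s) = exp(sum_j M[i, j] * (s_j - 1))."""
    return np.exp(M @ (s - 1.0))

s = np.zeros(2)               # iterate q_{k+1} = f(q_k) from q_0 = 0
for _ in range(500):
    s = pgf(s)

print("extinction probabilities by founding type:", s.round(4))
print("P(major outbreak | one type-1 individual):", round(1 - s[0], 4))
```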

Keywords: bluetongue virus, coexistence, multiple serotypes, midge movement, branching process

Procedia PDF Downloads 121
27284 Neural Correlates of Decision-Making Under Ambiguity and Conflict

Authors: Helen Pushkarskaya, Michael Smithson, Jane E. Joseph, Christine Corbly, Ifat Levy

Abstract:

Studies of decision making under uncertainty generally focus on imprecise information about outcome probabilities (“ambiguity”). It is not clear, however, whether conflicting information about outcome probabilities affects decision making in the same manner as ambiguity does. Here we combine functional Magnetic Resonance Imaging (fMRI) and a simple gamble design to study this question. In this design, the levels of ambiguity and conflict are parametrically varied, and ambiguity and conflict gambles are matched on both expected value and variance. Behaviorally, participants avoided conflict more than ambiguity, and attitudes toward ambiguity and conflict did not correlate across subjects. Neurally, regional brain activation was differentially modulated by ambiguity level and aversion to ambiguity and by conflict level and aversion to conflict. Activation in the medial prefrontal cortex was correlated with the level of ambiguity and with ambiguity aversion, whereas activation in the ventral striatum was correlated with the level of conflict and with conflict aversion. This novel double dissociation indicates that decision makers process imprecise and conflicting information differently, a finding that has important implications for basic and clinical research.

Keywords: decision making, uncertainty, ambiguity, conflict, fMRI

Procedia PDF Downloads 531
27283 Re-Evaluation of Field X Located in Northern Lake Albert Basin to Refine the Structural Interpretation

Authors: Calorine Twebaze, Jesca Balinga

Abstract:

Field X is located on the eastern shore of Lake Albert, Uganda, on the rift flank where the gross sedimentary fill is typically less than 2,000 m. The field was discovered in 2006 and encountered about 20.4 m of net pay across three (3) stratigraphic intervals within the discovery well. The field covers an area of 3 km2, and its structural configuration comprises a 3-way dip-closed hanging-wall anticline that seals against the basement to the southeast along the bounding fault. Field X had been mapped on reprocessed 3D seismic data, originally acquired in 2007 and reprocessed in 2013. The seismic data quality is good across the field, and the reprocessing reduced the uncertainty in the location of the bounding fault and enhanced the lateral continuity of the reservoir reflectors. The current study re-evaluated Field X to refine the fault interpretation and understand the structural uncertainties associated with the field. The seismic data and three (3) well datasets were used in the study. The evaluation followed standard workflows using Petrel software and structural attribute analysis, spanning seismic-to-well ties, structural interpretation, and structural uncertainty analysis. Analysis of the well ties generated for the three wells gave a geophysical interpretation consistent with the geological picks. The resulting time-depth curves showed a general increase in velocity with burial depth; the separation in curve trends observed below 1,100 m was attributed mainly to minimal lateral variation in velocity between the wells. In addition to attribute analysis, three velocity modeling approaches were evaluated: the time-depth curve, Vo + kZ, and average velocity methods. The generated models were calibrated at the well locations using well tops to obtain the best velocity model for Field X. The time-depth method produced the most reliable depth surfaces, with good structural coherence between the TWT and depth maps and a minimal error of 2 to 5 m at the well locations. Both the NNE-SSW-trending rift border fault and the minor faults in the existing interpretation were re-evaluated. The new interpretation, however, delineated an E-W-trending fault in the northern part of the field that had not been interpreted before. This fault was interpreted at all stratigraphic levels; it therefore propagates from the basement to the surface and is an active fault today. It was also noted that the field as a whole is only lightly faulted, with more faults in its deeper parts. The major structural uncertainties identified included: 1) the time horizons, owing to reduced data quality, especially in the deeper parts of the structure, for which an error equal to one-third of the reflection time thickness was assumed; 2) the checkshot analysis, which showed varying velocities within the wells and thus varying depth values for each well; and 3) the very few average velocity points available from the limited wells, which produced a pessimistic average velocity model.
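
For reference, a short sketch of the Vo + kZ depth-conversion step mentioned above: with velocity increasing linearly with depth, v(z) = v0 + k*z, a reflector at one-way time t lies at z = (v0/k)*(exp(k*t) - 1). The constants and the well calibration values are illustrative, not Field X data:

```python
import numpy as np

v0, k = 1800.0, 0.45          # velocity at datum (m/s) and gradient (1/s)

def depth_from_twt(twt_ms):
    """Convert two-way time (ms) to depth (m) under v(z) = v0 + k*z."""
    t_oneway = (twt_ms / 1000.0) / 2.0
    return (v0 / k) * (np.exp(k * t_oneway) - 1.0)

for twt in (500.0, 1000.0, 1500.0):        # horizons picked in TWT
    print(f"TWT {twt:6.0f} ms  ->  depth {depth_from_twt(twt):7.1f} m")

# Calibration check against a (hypothetical) well top:
well_top_depth, twt_at_well = 920.0, 900.0
print("misfit at well (m):", round(depth_from_twt(twt_at_well) - well_top_depth, 1))
```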

Keywords: 3D seismic data interpretation, structural uncertainties, attribute analysis, velocity modelling approaches

Procedia PDF Downloads 27
27282 Economics of Precision Mechanization in Wine and Table Grape Production

Authors: Dean A. McCorkle, Ed W. Hellman, Rebekka M. Dudensing, Dan D. Hanselka

Abstract:

The motivation for this study centers on the labor- and cost-intensive nature of wine and table grape production in the U.S., and the potential opportunities for precision mechanization using robotics to augment those production tasks that are labor-intensive. The objectives of this study are to evaluate the economic viability of grape production in five U.S. states under current operating conditions, identify common production challenges and tasks that could be augmented with new technology, and quantify the maximum price for new technology that growers would be able to pay. Wine and table grape production is primed for precision mechanization technology, as it faces a variety of production and labor issues. Methodology: Using a grower panel process, this project includes the development of a representative wine grape vineyard in each of five states and a representative table grape vineyard in California. The panels provided production, budget, and financial information typical for vineyards in their area; labor costs for various production tasks are of particular interest. Using the data from the representative budgets, 10-year projected financial statements have been developed for each representative vineyard and evaluated using a stochastic simulation model. Labor costs for selected vineyard production tasks were evaluated for the potential of the new precision mechanization technology being developed. These tasks were selected based on a variety of factors, including input from the panel members and the extent to which the development of new technology was deemed feasible. The net present value (NPV) of the labor cost over seven years for each production task was derived. This allowed the calculation of a maximum price for new technology at which the NPV of labor costs would equal the NPV of purchasing, owning, and operating the new technology. Expected Results: The results from the stochastic model will show the projected financial health of each representative vineyard over the 2015-2024 timeframe. The investigators have developed a preliminary list of production tasks with potential for precision mechanization. For each task, the labor requirements, labor costs, and maximum price for new technology will be presented and discussed. Together, these results will allow technology developers to focus and prioritize their research and development efforts for wine and table grape vineyards, and suggest opportunities to strengthen vineyard profitability and long-term viability through precision mechanization.
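
The break-even logic in that last step can be written directly: discount the task's annual labor cost over seven years, net out the annual cost of operating the machine, and the residual NPV is the maximum purchase price a grower could justify. All figures below are illustrative:

```python
rate = 0.06                    # discount rate
years = 7                      # technology evaluation horizon
labor_cost = 1200.0            # $/acre/yr for the task (e.g. shoot thinning)
acres = 40
operating = 3000.0             # $/yr to run and maintain the new technology

def npv(cashflow, rate, years):
    """NPV of a constant annual cash flow over the horizon."""
    return sum(cashflow / (1 + rate) ** t for t in range(1, years + 1))

npv_labor = npv(labor_cost * acres, rate, years)
max_price = npv_labor - npv(operating, rate, years)
print(f"NPV of labor saved: ${npv_labor:,.0f}")
print(f"Maximum justifiable technology price: ${max_price:,.0f}")
```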

Keywords: net present value, robotic technology, stochastic simulation, wine and table grapes

Procedia PDF Downloads 235