Search results for: generalized likelihood uncertainty estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3912

3402 A Probability Analysis of Construction Project Schedule Using Risk Management Tool

Authors: A. L. Agarwal, D. A. Mahajan

Abstract:

The construction industry tumbled along with other sectors during the recent economic crash. Construction business has not regained momentum since and is still passing through a slowdown phase, leaving many real estate as well as infrastructure projects uncompleted on schedule and within budget. There are many theories, tools, and techniques, with software packages available in the market, to analyze construction schedules. This study focuses on the construction project schedule and the uncertainties associated with construction activities. An infrastructure construction project is considered for the analysis of how uncertainty in project activities affects project duration, and the analysis is done using @RISK software. Simulation results arising from three probability distribution functions are compiled to help construction project managers plan a more realistic schedule of construction activities and of project completion, document it in the contract, and avoid compensations or claims arising from missing the planned schedule.
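
For readers unfamiliar with this type of schedule risk analysis, the following is a minimal sketch (in Python rather than @RISK, and not the paper's actual model): a serial chain of activities with assumed triangular duration distributions is sampled by Monte Carlo simulation, and duration percentiles are read off to set a completion date with a known risk level. The activity names and parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(42)
# (min, mode, max) durations in days -- made-up values for illustration
activities = {
    "excavation":     (10, 14, 22),
    "foundations":    (20, 28, 45),
    "superstructure": (60, 75, 110),
    "finishing":      (30, 40, 65),
}

n_sim = 10_000
durations = np.zeros(n_sim)
for low, mode, high in activities.values():
    durations += rng.triangular(low, mode, high, size=n_sim)

# Percentiles help set a contractual completion date with a known risk level.
for p in (50, 80, 95):
    print(f"P{p} project duration: {np.percentile(durations, p):.1f} days")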

Keywords: construction project, distributions, project schedule, uncertainty

Procedia PDF Downloads 339
3401 Predicting Indonesia External Debt Crisis: An Artificial Neural Network Approach

Authors: Riznaldi Akbar

Abstract:

In this study, we compared the in-sample and out-of-sample performance of an Artificial Neural Network (ANN) model with a back-propagation algorithm in correctly predicting external debt crises in Indonesia. We found that the exchange rate, foreign reserves, and exports are the major determinants of experiencing an external debt crisis. The ANN in-sample performance provides relatively superior results: the model correctly classifies 89.12 per cent of crises with reasonably low false alarms of 7.01 per cent. Out of sample, the prediction performance deteriorates considerably compared with the in-sample performance. This can be explained by the ANN model tending to over-fit the in-sample data while fitting the out-of-sample data poorly. Ten-fold cross-validation has been used to improve the out-of-sample prediction accuracy. The results also offer policy implications. The out-of-sample performance can be very sensitive to the size of the sample, as a small sample can yield a higher total misclassification error and lower prediction accuracy. The ANN model can be used to identify past crisis episodes with some accuracy, but predicting crises outside the estimation sample is much more challenging because of the presence of uncertainty.
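
As a rough illustration of the kind of workflow the abstract describes (not the authors' model or data), the sketch below trains a back-propagation neural network classifier on simulated crisis indicators and scores it with 10-fold cross-validation; the three feature names and the synthetic data-generating process are assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))            # exchange rate, foreign reserves, exports (simulated)
y = (X[:, 0] - 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 0.5, 300) > 1).astype(int)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")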

Keywords: debt crisis, external debt, artificial neural network, ANN

Procedia PDF Downloads 433
3400 Bivariate Generalization of q-α-Bernstein Polynomials

Authors: Tarul Garg, P. N. Agrawal

Abstract:

We propose to define the q-analogue of the α-Bernstein Kantorovich operators and then introduce the q-bivariate generalization of these operators to study the approximation of functions of two variables. We obtain the rate of convergence of these bivariate operators by means of the total modulus of continuity, the partial modulus of continuity and the Peetre K-functional for continuous functions. Further, in order to study the approximation of functions of two variables in a space bigger than the space of continuous functions, i.e., the Bögel space, the GBS (Generalized Boolean Sum) of the q-bivariate operators is considered and the degree of approximation is discussed for Bögel continuous and Bögel differentiable functions with the aid of the Lipschitz class and the mixed modulus of smoothness.

Keywords: Bögel continuous, Bögel differentiable, generalized Boolean sum, K-functional, mixed modulus of smoothness

Procedia PDF Downloads 375
3399 Methods of Variance Estimation in Two-Phase Sampling

Authors: Raghunath Arnab

Abstract:

Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design and only information on the auxiliary variable is collected. In the second phase, a smaller sample is selected, either from the first-phase sample or from the entire population, by a suitable sampling design, and information on both the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is easier and cheaper to collect than the study variable and if the relationship between the study and auxiliary variables is strong. If the sample is selected in more than two phases, the resulting sampling design is called multi-phase sampling. In this article, we consider how data collected in the first phase can be used at the stages of parameter estimation, stratification, sample selection and their combinations in the second phase, in a unified setup applicable to any sampling design and to wider classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for estimating the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes and testing hypotheses, among other uses. Although the variance is a non-negative quantity, its estimators may not be non-negative. If an estimator of variance is negative, it cannot be used for estimating confidence intervals, testing hypotheses or measuring sampling error. The non-negativity properties of the variance estimators are also studied in detail.
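
A minimal numeric sketch of one common two-phase setup may help fix ideas: a ratio estimator that uses the cheap auxiliary variable from the large first-phase sample and the study variable from the second-phase subsample, together with a textbook large-sample variance approximation (finite-population corrections ignored). The simulated data and this particular estimator are illustrative assumptions, not the unified estimators developed in the paper.

import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 1000, 200                       # first- and second-phase sample sizes
x1 = rng.gamma(4.0, 2.0, size=n1)        # cheap auxiliary variable, phase 1
idx = rng.choice(n1, size=n2, replace=False)
x2 = x1[idx]
y2 = 3.0 * x2 + rng.normal(0, 2.0, n2)   # expensive study variable, phase 2 only

r_hat = y2.mean() / x2.mean()
y_ratio = r_hat * x1.mean()              # two-phase ratio estimate of the population mean

d = y2 - r_hat * x2                      # residuals used in the variance approximation
var_hat = y2.var(ddof=1) / n1 + d.var(ddof=1) * (1 / n2 - 1 / n1)
print(f"estimate: {y_ratio:.2f}, approx. variance: {var_hat:.4f}")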

Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators

Procedia PDF Downloads 579
3398 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model

Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis

Abstract:

In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.

Keywords: cause of failure, linear degradation path, reliability function, expectation-maximization algorithm, intensity, masked data

Procedia PDF Downloads 324
3397 Magneto-Rheological Damper Based Semi-Active Robust H∞ Control of Civil Structures with Parametric Uncertainties

Authors: Vedat Senol, Gursoy Turan, Anders Helmersson, Vortechz Andersson

Abstract:

In developing a mathematical model of a real structure, the simulation results of the model may not match the real structural response. This is a general problem that arises during dynamic motion of the structure, and it may be modeled by means of parameter variations in the stiffness, damping, and mass matrices. These changes in parameters need to be estimated, and the mathematical model updated, to obtain higher control performance and robustness. In this study, a linear fractional transformation (LFT) is utilized for uncertainty modeling. Further, a general approach to the design of an H∞ control of a magneto-rheological damper (MRD) for vibration reduction in a building with mass, damping, and stiffness uncertainties is presented.

Keywords: uncertainty modeling, structural control, MR Damper, H∞, robust control

Procedia PDF Downloads 133
3396 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery

Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas

Abstract:

The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model can capture the dynamics, which helps in energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications, the model parameters are time-varying; they change with current, temperature, state of charge (SOC), and aging of the battery, and this has a great impact on the performance of the model. Therefore, to increase the performance of the equivalent circuit model, the parameter estimation has been carried out in the frequency domain. The battery is a very complex system, which is associated with various chemical reactions and heat generation. Therefore, it is very difficult to select the optimal model structure. Increasing the model order generally improves model accuracy; however, a higher-order model tends towards over-parameterization and unfavorable prediction capability, while the model complexity increases enormously. In the time domain, it becomes difficult to solve higher-order differential equations as the model order increases. This problem can be resolved by frequency domain analysis, where the overall computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. The selective frequency domain estimation has been carried out, first by estimating the frequencies of the input and output by subspace decomposition, and then by choosing the specific bands from the most dominating to the least, while carrying out least-squares, recursive least-squares and Kalman filter based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen. For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.

Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition

Procedia PDF Downloads 137
3395 Lead-Time Estimation Approach Using the Process Capability Index

Authors: Abdel-Aziz M. Mohamed

Abstract:

This research proposes a methodology to estimate the customer order lead time in the supply chain based on the process capability index. The cases where the process output is normally distributed and where it is not are both considered. The relationships between the system capability indices in both service and manufacturing applications, delivery system reliability, and the percentage of orders delivered after their promised due dates are presented. The proposed method can be used to examine the current process capability to deliver orders within the promised lead time. If the system is found to be incapable, the method can be used to help revise the current lead time to a proper value according to the service reliability level selected by management. Numerical examples and a case study illustrating the lead-time estimation methodology and testing the system's capability of delivering orders before their promised due dates are presented.
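
A hedged sketch of the normally distributed case described above: the one-sided capability index of the delivery process is linked to the on-time probability, and a revised lead-time quote is backed out for a management-selected reliability level. The numbers are illustrative, not from the paper's case study.

from scipy.stats import norm

mu, sigma = 12.0, 2.5          # mean and std. dev. of order fulfilment time (days), assumed
promised = 15.0                # current promised lead time

cpu = (promised - mu) / (3 * sigma)          # one-sided capability index
on_time_prob = norm.cdf(3 * cpu)             # P(delivery <= promised) under normality
print(f"Cpu = {cpu:.2f}, on-time probability = {on_time_prob:.3f}")

target = 0.99                                # management-selected reliability level
revised = mu + norm.ppf(target) * sigma      # lead-time quote achieving the target
print(f"revised lead time for {target:.0%} reliability: {revised:.1f} days")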

Keywords: lead-time estimation, process capability index, delivery system reliability, statistical analysis, service achievement index, service quality

Procedia PDF Downloads 553
3394 Uncertainty in Near-Term Global Surface Warming Linked to Pacific Trade Wind Variability

Authors: M. Hadi Bordbar, Matthew England, Alex Sen Gupta, Agus Santoso, Andrea Taschetto, Thomas Martin, Wonsun Park, Mojib Latif

Abstract:

Climate models generally simulate long-term reductions in the Pacific Walker Circulation with increasing atmospheric greenhouse gases. However, over the two recent decades 1992-2011 there was a strong intensification of the Pacific trade winds that was linked with a slowdown in global surface warming. Using large ensembles of multiple climate models forced by increasing atmospheric greenhouse gas concentrations and starting from different ocean and/or atmospheric initial conditions, we reveal very diverse 20-year trends in the tropical Pacific climate associated with considerable uncertainty in the globally averaged surface air temperature (SAT) in each model ensemble. This result suggests low confidence in our ability to accurately predict SAT trends over a 20-year timescale from external forcing alone. We show, however, that the uncertainty can be reduced when the initial oceanic state is adequately known and well represented in the model. Our analyses suggest that internal variability in the Pacific trade winds can mask the anthropogenic signal over a 20-year time frame and drive transitions between periods of accelerated global warming and temporary slowdown periods.

Keywords: trade winds, walker circulation, hiatus in the global surface warming, internal climate variability

Procedia PDF Downloads 257
3393 Digital Material Characterization Using the Quantum Fourier Transform

Authors: Felix Givois, Nicolas R. Gauger, Matthias Kabel

Abstract:

Efficient digital material characterization is of great interest to many fields of application. It consists of the following three steps. First, a 3D reconstruction of 2D scans must be performed. Then, the resulting gray-value image of the material sample is enhanced by image processing methods. Finally, partial differential equations (PDE) are solved on the segmented image, and by averaging the resulting solution fields, effective properties like stiffness or conductivity can be computed. Due to the high resolution of current CT images, the latter step is typically performed with matrix-free solvers. Among them, a solver that uses the explicit formula of the Green-Eshelby operator in Fourier space has been proposed by Moulinec and Suquet. Its algorithmically most complex part is the fast Fourier transform (FFT). In our talk, we will discuss the potential quantum advantage that can be obtained by replacing the FFT with the Quantum Fourier Transformation (QFT). We will especially show that the data transfer for noisy intermediate-scale quantum (NISQ) devices can be improved by using appropriate boundary conditions for the PDE, which also allows using semi-classical versions of the QFT. In the end, we will compare the results of the QFT-based algorithm for simple geometries with the results of the FFT-based homogenization method.
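
The classical correspondence the comparison relies on can be checked in a few lines: the n-qubit QFT acts on a state vector as the unitary (normalized) DFT matrix of size 2^n, up to qubit ordering, so its output can be verified against an ordinary FFT. The sketch below is purely illustrative and says nothing about the quantum circuit implementation or the homogenization solver itself.

import numpy as np

n_qubits = 3
N = 2 ** n_qubits
omega = np.exp(2j * np.pi / N)
# QFT as the normalized DFT matrix with positive-sign exponent
qft = np.array([[omega ** (j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)

state = np.random.default_rng(0).normal(size=N) + 0j
state /= np.linalg.norm(state)

via_qft = qft @ state
via_fft = np.fft.ifft(state) * np.sqrt(N)    # ifft uses exp(+2*pi*i*jk/N)/N, so rescale
print(np.allclose(via_qft, via_fft))         # True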

Keywords: maximum likelihood quantum amplitude estimation (MLQAE), numerical homogenization, quantum Fourier transformation (QFT), NISQ devices

Procedia PDF Downloads 68
3392 A Machine Learning-Based Approach to Capture Extreme Rainfall Events

Authors: Willy Mbenza, Sho Kenjiro

Abstract:

Increasing efforts are directed towards a better understanding and foreknowledge of extreme precipitation likelihood, given the adverse effects associated with its occurrence. This knowledge plays a crucial role in long-term planning and the formulation of effective emergency responses. However, predicting extreme events reliably presents a challenge to conventional empirical/statistical approaches due to the involvement of numerous variables spanning different time and space scales. Recently, machine learning has emerged as a promising tool for predicting the dynamics of extreme precipitation. ML techniques enable the consideration of both local and regional physical variables that have a strong influence on the likelihood of extreme precipitation. These variables encompass factors such as air temperature, soil moisture, specific humidity and aerosol concentration, among others. In this study, we develop an ML model that incorporates both local and regional variables while establishing a robust relationship between physical variables and precipitation during the downscaling process. Furthermore, the model provides valuable information on the frequency and duration of a given intensity of precipitation.

Keywords: machine learning (ML), predictions, rainfall events, regional variables

Procedia PDF Downloads 77
3391 The Comparison of Joint Simulation and Estimation Methods for the Geometallurgical Modeling

Authors: Farzaneh Khorram

Abstract:

This paper endeavors to construct a block model to assess grinding energy consumption (CCE) and pinpoint blocks with the highest potential for energy usage during the grinding process within a specified region. Leveraging geostatistical techniques, particularly joint estimation or simulation based on geometallurgical data from various mineral processing stages, our objective is to forecast CCE across the study area. The dataset encompasses variables obtained from 2754 drill samples and a block model comprising 4680 blocks. The initial analysis encompassed exploratory data examination, variography, multivariate analysis, and the delineation of geological and structural units. Subsequent analysis involved the assessment of contacts between these units and the estimation of CCE via cokriging, considering its correlation with SPI. The selection of blocks exhibiting maximum CCE holds paramount importance for cost estimation, production planning, and risk mitigation. The study conducted exploratory data analysis on lithology, rock type, and failure variables, revealing seamless boundaries between geometallurgical units. Simulation methods, such as plurigaussian and turning bands, demonstrated more realistic outcomes than cokriging, owing to the inherent characteristics of geometallurgical data and the limitations of kriging methods.

Keywords: geometallurgy, multivariate analysis, plurigaussian, turning band method, cokriging

Procedia PDF Downloads 48
3390 Stating Best Commercialization Method: An Unanswered Question from Scholars and Practitioners

Authors: Saheed A. Gbadegeshin

Abstract:

A commercialization method is a means to make inventions available on the market for final consumption. It is described as an important tool for keeping business enterprises sustainable and improving national economic growth. Thus, there are several scholarly publications on it, either presenting or testing different methods of commercialization. However, young entrepreneurs, technologists and scientists would like to know the best method to commercialize their innovations. Then, this question arises: What is the best commercialization method? To answer the question, a systematic literature review was conducted, and practitioners were interviewed. The literature results revealed that there are many methods, but new methods are needed to improve commercialization, especially during these times of economic crisis and political uncertainty. Similarly, the empirical results showed that there are several methods, but the best method is the one that reduces costs, reduces the risks associated with uncertainty, and improves customer participation and acceptability. Therefore, it was concluded that a new commercialization method is essential for today's high technologies, and such a method was presented.

Keywords: commercialization method, technology, knowledge, intellectual property, innovation, invention

Procedia PDF Downloads 333
3389 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossroads of many issues, such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, where they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. There is therefore an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but that they depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called "ancillary covariates" that come from other available spatial products. The model is then generalized on grids where soil parameters are unknown in order to predict them, and the prediction performance is validated using various methods. With the growing demand for soil information at national and global scales and the increase in available spatial covariates, national and continental DSM initiatives are continuously increasing. This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and education, especially on the use of uncertainty. Overall, the progress is very important, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues still remain, mainly due to differences in classifications or in laboratory standards between countries; however, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and promising for providing tools to improve and monitor soil quality in countries, in the EU and at the global level.
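
For orientation, the generic DSM calibration-and-prediction step described above might look like the following sketch: a machine-learning model is fitted to point soil observations with spatial covariates and then applied to a covariate grid. The covariate names, the target soil property and the synthetic data are assumptions for illustration only, not any of the national products reviewed here.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_points = 400
covariates = rng.normal(size=(n_points, 4))   # e.g. elevation, slope, NDVI, rainfall (assumed)
soc = 2 + covariates @ np.array([0.8, -0.4, 0.6, 0.3]) + rng.normal(0, 0.3, n_points)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(covariates, soc)

grid = rng.normal(size=(10_000, 4))           # covariate stack for unsampled grid cells
prediction = model.predict(grid)              # mapped soil property (e.g. SOC)
print(f"predicted range: {prediction.min():.2f} to {prediction.max():.2f}")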

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 178
3388 Formulating a Flexible-Spread Fuzzy Regression Model Based on Dissemblance Index

Authors: Shih-Pin Chen, Shih-Syuan You

Abstract:

This study proposes a regression model with flexible spreads for fuzzy input-output data to cope with the situation where existing measures cannot reflect the actual estimation error. The main idea is that a dissemblance index (DI) is carefully identified and defined for precisely measuring the actual estimation error. Moreover, the graded mean integration (GMI) representation is adopted for determining more representative numeric regression coefficients. Notably, to comprehensively compare the performance of the proposed model with others, three different criteria are adopted. The results from commonly used numerical test examples and an application to Taiwan's business monitoring indicator illustrate that the proposed dissemblance index method not only produces valid fuzzy regression models for fuzzy input-output data, but also has satisfactory and stable performance in terms of the total estimation error based on these three criteria.

Keywords: dissemblance index, forecasting, fuzzy sets, linear regression

Procedia PDF Downloads 348
3387 Reducing Uncertainty of Monte Carlo Estimated Fatigue Damage in Offshore Wind Turbines Using FORM

Authors: Jan-Tore H. Horn, Jørgen Juncher Jensen

Abstract:

Uncertainties related to fatigue damage estimation of non-linear systems are highly dependent on the tail behaviour and extreme values of the stress range distribution. By using a combination of the First Order Reliability Method (FORM) and Monte Carlo simulations (MCS), the accuracy of the fatigue estimations may be improved for the same computational efforts. The method is applied to a bottom-fixed, monopile-supported large offshore wind turbine, which is a non-linear and dynamically sensitive system. Different curve fitting techniques to the fatigue damage distribution have been used depending on the sea-state dependent response characteristics, and the effect of a bi-linear S-N curve is discussed. Finally, analyses are performed on several environmental conditions to investigate the long-term applicability of this multistep method. Wave loads are calculated using state-of-the-art theory, while wind loads are applied with a simplified model based on rotor thrust coefficients.

Keywords: fatigue damage, FORM, monopile, Monte Carlo, simulation, wind turbine

Procedia PDF Downloads 249
3386 A Periodogram-Based Spectral Method Approach: The Relationship between Tourism and Economic Growth in Turkey

Authors: Mesut BALIBEY, Serpil TÜRKYILMAZ

Abstract:

A popular topic in econometrics and time series analysis is the cointegrating relationships among the components of a nonstationary time series. Engle and Granger's least squares method and Johansen's conditional maximum likelihood method are the most widely used methods to determine the relationships among variables. Furthermore, a method proposed to test for a unit root based on the periodogram ordinates has certain advantages over conventional tests. Periodograms can be calculated without any model specification, and the exact distribution under the assumption of a unit root is obtained. For higher-order processes the distribution remains the same asymptotically. In this study, in order to illustrate these advantages over conventional tests, we examine a possible relationship between tourism and economic growth during the period 1999:01-2010:12 for Turkey by using the periodogram method, Johansen's conditional maximum likelihood method, and Engle and Granger's ordinary least squares method.
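
For context, the Engle-Granger step that the periodogram approach is compared against can be run in a few lines; the sketch below uses simulated monthly series of the same length as 1999:01-2010:12 in place of the Turkish tourism and growth data, so the numbers carry no empirical meaning.

import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
T = 144                                   # 1999:01-2010:12 spans 144 months
growth = np.cumsum(rng.normal(size=T))    # random-walk "economic growth" proxy
tourism = 0.8 * growth + rng.normal(scale=0.5, size=T)  # cointegrated by construction

t_stat, p_value, _ = coint(tourism, growth)   # Engle-Granger residual-based test
print(f"Engle-Granger t-statistic: {t_stat:.2f}, p-value: {p_value:.3f}")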

Keywords: cointegration, economic growth, periodogram ordinate, tourism

Procedia PDF Downloads 260
3385 Phillips Curve Estimation in an Emerging Economy: Evidence from Sub-National Data of Indonesia

Authors: Harry Aginta

Abstract:

Using the Phillips curve framework, this paper seeks new empirical evidence on the relationship between inflation and output in a major emerging economy. By exploiting sub-national data, the contribution of this paper is threefold. First, it resolves the issue of using on-target national inflation rates that potentially weakens the inflation-output nexus. This is very relevant for Indonesia, as its central bank has been adopting an inflation targeting framework based on national consumer price index (CPI) inflation. Second, the study tests the relevance of the mining sector in output gap estimation. The test for the mining sector is important to control for the effects of mining regulation and the nominal effects of coal prices on real economic activities. Third, the paper applies panel econometric methods, incorporating regional variation that helps to improve model estimation. The results confirm the strong presence of a Phillips curve in Indonesia. A positive output gap, which reflects excess demand conditions, gives rise to inflation. In addition, the elasticity of the output gap is higher if the mining sector is excluded from output gap estimation. In addition to inflation adaptation, the dynamics of the exchange rate and international commodity prices are also found to affect inflation significantly. The results are robust to alternative measurements of the output gap.
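
A stylized sketch of a regional Phillips-curve panel regression of the kind described above: inflation regressed on the output gap with region fixed effects. The simulated data, coefficient values and the omission of exchange-rate and commodity-price terms are simplifying assumptions, not the paper's specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
regions, periods = 30, 40
df = pd.DataFrame({
    "region": np.repeat(np.arange(regions), periods),
    "output_gap": rng.normal(0, 1, regions * periods),
})
region_effect = rng.normal(0, 0.5, regions)[df["region"]]
df["inflation"] = 4 + 0.6 * df["output_gap"] + region_effect + rng.normal(0, 0.8, len(df))

# Fixed effects via region dummies; the output-gap slope is the Phillips-curve elasticity
fe_model = smf.ols("inflation ~ output_gap + C(region)", data=df).fit()
print(f"output-gap coefficient: {fe_model.params['output_gap']:.3f}")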

Keywords: Phillips curve, inflation, Indonesia, panel data

Procedia PDF Downloads 114
3384 Progressive Type-I Interval Censoring with Binomial Removal-Estimation and Its Properties

Authors: Sonal Budhiraja, Biswabrata Pradhan

Abstract:

This work considers statistical inference based on progressive Type-I interval censored data with random removal. The scheme of progressive Type-I interval censoring with random removal can be described as follows. Suppose n identical items are placed on a test at time T0 = 0 under k pre-fixed inspection times T1 < T2 < . . . < Tk, where Tk is the scheduled termination time of the experiment. At inspection time Ti, Ri of the remaining Si surviving units are randomly removed from the experiment. The removal follows a binomial distribution with parameters Si and pi for i = 1, . . . , k, with pk = 1. In this censoring scheme, the number of failures in different inspection intervals and the number of randomly removed items at the pre-specified inspection times are observed. Asymptotic properties of the maximum likelihood estimators (MLEs) are established under some regularity conditions. A β-content γ-level tolerance interval (TI) is determined for the two-parameter Weibull lifetime model using the asymptotic properties of the MLEs. The minimum sample size required to achieve the desired β-content γ-level TI is determined. The performance of the MLEs and the TI is studied via simulation.
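
To make the censoring scheme concrete, the sketch below writes down the interval-censored Weibull likelihood it implies, with d_i failures observed in (T_{i-1}, T_i] and R_i withdrawals at T_i, and maximizes it numerically. The inspection times and counts are invented for illustration; only the two-parameter Weibull lifetime model is taken from the abstract.

import numpy as np
from scipy.optimize import minimize

T = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # inspection times, T_0 = 0 (assumed)
d = np.array([5, 9, 12, 8])               # failures per interval (assumed)
R = np.array([3, 4, 5, 14])               # removals at each inspection; all remaining at T_k

def neg_loglik(theta):
    shape, scale = np.exp(theta)                          # keep parameters positive
    S = np.exp(-(T / scale) ** shape)                     # Weibull survival at each T_i
    # interval-censored failures contribute S(T_{i-1}) - S(T_i); removals contribute S(T_i)
    return -(np.sum(d * np.log(S[:-1] - S[1:])) + np.sum(R * np.log(S[1:])))

res = minimize(neg_loglik, x0=np.log([1.0, 2.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
print(f"Weibull MLE: shape = {shape_hat:.2f}, scale = {scale_hat:.2f}")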

Keywords: asymptotic normality, consistency, regularity conditions, simulation study, tolerance interval

Procedia PDF Downloads 235
3383 Bayesian Network and Feature Selection for Rank Deficient Inverse Problem

Authors: Kyugneun Lee, Ikjin Lee

Abstract:

Parameter estimation with inverse problems often suffers from unfavorable conditions in the real world. Useless data and many input parameters make the problem complicated or insoluble. Data refinement and reformulation of the problem can resolve such difficulties. In this research, a method to solve the rank deficient inverse problem is suggested. A multi-physics system which has rank deficiency caused by response correlation is treated. Impeditive information is removed and the problem is reformulated into sequential estimations using a Bayesian network (BN) and subset groups. First, subset grouping of the responses is performed. Feature selection with singular value decomposition (SVD) is used for the grouping. Next, BN inference is used for sequential conditional estimation according to the group hierarchy. A directed acyclic graph (DAG) structure is organized to maximize the estimation ability. The variance ratio of response to noise is used to pair the estimable parameters with each response.

Keywords: Bayesian network, feature selection, rank deficiency, statistical inverse analysis

Procedia PDF Downloads 307
3382 Closed-Form Sharma-Mittal Entropy Rate for Gaussian Processes

Authors: Septimia Sarbu

Abstract:

The entropy rate of a stochastic process is a fundamental concept in information theory. It provides a limit to the amount of information that can be transmitted reliably over a communication channel, as stated by Shannon's coding theorems. Recently, researchers have focused on developing new measures of information that generalize Shannon's classical theory. The aim is to design more efficient information encoding and transmission schemes. This paper continues the study of generalized entropy rates, by deriving a closed-form solution to the Sharma-Mittal entropy rate for Gaussian processes. Using the squeeze theorem, we solve the limit in the definition of the entropy rate, for different values of alpha and beta, which are the parameters of the Sharma-Mittal entropy. In the end, we compare it with Shannon and Rényi's entropy rates for Gaussian processes.
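
For reference, the definition underlying the derivation is the Sharma-Mittal entropy and the rate it induces; the LaTeX below is the standard textbook form written for a density f, not the paper's closed-form Gaussian result.

% Sharma-Mittal entropy of a density f, with parameters \alpha > 0, \alpha \neq 1, \beta \neq 1
H_{\alpha,\beta}(f) = \frac{1}{1-\beta}\left[\left(\int f(x)^{\alpha}\,dx\right)^{\frac{1-\beta}{1-\alpha}} - 1\right]
% Entropy rate of the process X_1, X_2, \dots
\bar{H}_{\alpha,\beta} = \lim_{n\to\infty}\frac{1}{n}\, H_{\alpha,\beta}(X_1,\dots,X_n)
% Limiting cases: \beta \to 1 recovers the Rényi entropy; \alpha, \beta \to 1 recovers the Shannon entropy.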

Keywords: generalized entropies, Sharma-Mittal entropy rate, Gaussian processes, eigenvalues of the covariance matrix, squeeze theorem

Procedia PDF Downloads 504
3381 Vehicular Emission Estimation of Islamabad by Using Copert-5 Model

Authors: Muhammad Jahanzaib, Muhammad Z. A. Khan, Junaid Khayyam

Abstract:

Islamabad is the capital of Pakistan, with a population of 1.365 million people and a vehicular fleet size of 0.75 million. The vehicular fleet is growing annually at a rate of 11%. Vehicular emissions are a major source of black carbon (BC). In developing countries like Pakistan, most vehicles consume conventional fuels like petrol, diesel, and CNG. These fuels are major emitters of pollutants like CO, CO2, NOx, CH4, VOCs, and particulate matter (PM10). Carbon dioxide and methane are leading contributors to global warming, with global shares of 9-26% and 4-9%, respectively. NOx is a precursor of nitrates, which ultimately form aerosols that are noxious to human health. In this study, COPERT (Computer Program to Calculate Emissions from Road Transport) was used for vehicular emission estimation in Islamabad. COPERT is a Windows-based program developed for the calculation of emissions from the road transport sector. The emissions and energy consumption were calculated for the year 2016 for pollutants including CO, NOx, VOC, and PM. Different variables were input to the model for emission estimation, including meteorological parameters, average vehicular trip length and respective time duration, fleet configuration, activity data, degradation factor, and fuel effect. The estimated emissions for CO, CH4, CO2, NOx, and PM10 were found to be 9814.2, 44.9, 279196.7, 3744.2 and 304.5 tons, respectively.
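
The core bookkeeping behind such an inventory is simple: emissions equal fleet size times annual mileage times an emission factor, summed over vehicle categories. The toy sketch below illustrates this for CO only; the fleet split, mileages and emission factors are made-up placeholders, not COPERT factors or Islamabad data.

# vehicles, km per year, and an assumed CO emission factor in g/km per category
fleet = {
    "petrol_car":  (400_000, 12_000, 2.5),
    "diesel_car":  (150_000, 15_000, 0.8),
    "cng_vehicle": (120_000, 18_000, 1.6),
    "motorcycle":  ( 80_000,  8_000, 6.0),
}

co_tons = sum(n * km * ef for n, km, ef in fleet.values()) / 1e6  # grams -> tonnes
print(f"estimated annual CO emissions: {co_tons:,.0f} tonnes")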

Keywords: COPERT Model, emission estimation, PM10, vehicular emission

Procedia PDF Downloads 252
3380 The Association between Masculinity and Anxiety in Canadian Men

Authors: Nikk Leavitt, Peter Kellett, Cheryl Currie, Richard Larouche

Abstract:

Background: Masculinity has been associated with poor mental health outcomes in adult men and is colloquially referred to as toxic. Masculinity is traditionally measured using the Male Role Norms Inventory, which examines behaviors that may be common in men but that are themselves associated with poor mental health regardless of gender (e.g., aggressiveness). The purpose of this study was to examine whether masculinity is associated with generalized anxiety among men when measured using this inventory vs. a man's personal definition of it. Method: An online survey collected data from 1,200 men aged 18-65 across Canada in July 2022. Masculinity was measured in two ways: 1) using the Male Role Norms Inventory Short Form, and 2) by asking men to self-define what being masculine means. Men were then asked to rate the extent to which they perceived themselves to be masculine on a scale of 1 to 10 based on their definition of the construct. Generalized anxiety disorder was measured using the GAD-7. Multiple linear regression was used to examine associations between each masculinity score and the anxiety score, adjusting for confounders. Results: The masculinity score measured using the inventory was positively associated with increased anxiety scores among men (β = 0.02, p < 0.01). The masculinity subscales most strongly correlated with higher anxiety were restrictive emotionality (β = 0.29, p < 0.01) and dominance (β = 0.30, p < 0.01). When traditional masculinity was replaced by a man's self-rated masculinity score in the model, the reverse association was found, with increasing masculinity resulting in a significantly reduced anxiety score (β = -0.13, p = 0.04). Discussion: These findings highlight the need to revisit the ways in which masculinity is defined and operationalized in research to better understand its impacts on men's mental health. The findings also highlight the importance of allowing participants to self-define gender-based constructs, given that they are fluid and socially constructed.

Keywords: masculinity, generalized anxiety disorder, race, intersectionality

Procedia PDF Downloads 59
3379 Objective Assessment of the Evolution of Microplastic Contamination in Sediments from a Vast Coastal Area

Authors: Vanessa Morgado, Ricardo Bettencourt da Silva, Carla Palma

Abstract:

Environmental pollution by microplastics is well recognized. Microplastics have already been detected in various matrices from distinct environmental compartments worldwide, some from remote areas. Various methodologies and techniques have been used to determine microplastics in such matrices, for instance, sediment samples from the ocean bottom. In order to determine microplastics in a sediment matrix, the sample is typically sieved through a 5 mm mesh, digested to remove the organic matter, and density separated to isolate microplastics from the denser part of the sediment. The physical analysis of microplastics consists of visual analysis under a stereomicroscope to determine particle size, colour, and shape. The chemical analysis is performed with an infrared spectrometer coupled to a microscope (micro-FTIR), allowing the identification of the chemical composition of a microplastic, i.e., the type of polymer. Creating legislation and policies to control and manage (micro)plastic pollution is essential to protect the environment, namely coastal areas. Regulation is defined from the known relevance and trends of the pollution type. This work discusses the assessment of contamination trends of a 700 km² oceanic area affected by contamination heterogeneity, sampling representativeness, and the uncertainty of the analysis of collected samples. The methodology developed consists of objectively identifying meaningful variations of microplastic contamination by Monte Carlo simulation of all uncertainty sources. This work allowed us to conclude unequivocally that the contamination level of the studied area did not vary significantly between two consecutive years (2018 and 2019) and that PET microplastics are the major polymer type. The comparison of contamination levels was performed at a 99% confidence level. The know-how developed is crucial for the objective and binding determination of microplastic contamination in relevant environmental compartments.
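
A simplified sketch of a Monte Carlo comparison in this spirit: propagate an assumed combined (sampling plus analytical) uncertainty around each year's mean contamination and test whether the 99% interval of the difference excludes zero. The mean counts, uncertainty magnitude and lognormal assumption are illustrative only, not the paper's data or its full uncertainty budget.

import numpy as np

rng = np.random.default_rng(0)
n_mc = 100_000
# assumed mean microplastic counts (items per kg dry sediment) with 25% relative uncertainty
year_2018 = rng.lognormal(mean=np.log(120), sigma=0.25, size=n_mc)
year_2019 = rng.lognormal(mean=np.log(135), sigma=0.25, size=n_mc)

diff = year_2019 - year_2018
lo, hi = np.percentile(diff, [0.5, 99.5])      # 99% interval for the difference
significant = not (lo <= 0 <= hi)
print(f"99% interval for the difference: [{lo:.1f}, {hi:.1f}] -> significant: {significant}")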

Keywords: measurement uncertainty, micro-ATR-FTIR, microplastics, ocean contamination, sampling uncertainty

Procedia PDF Downloads 83
3378 Multi-Subpopulation Genetic Algorithm with Estimation of Distribution Algorithm for Textile Batch Dyeing Scheduling Problem

Authors: Nhat-To Huynh, Chen-Fu Chien

Abstract:

The textile batch dyeing scheduling problem is complicated; it includes batch formation, batch assignment to machines, and batch sequencing with sequence-dependent setup times. Most manufacturers schedule their orders manually, which is time-consuming and inefficient. More powerful methods are needed to improve the solution. Motivated by real needs, this study proposes approaches in which a genetic algorithm is developed with multiple subpopulations and hybridised with an estimation of distribution algorithm to solve the constructed problem for minimising the makespan. A heuristic algorithm is designed and embedded into the proposed algorithms to improve their ability to escape local optima. In addition, an empirical study was conducted in a textile company in Taiwan to validate the proposed approaches. The results show that the proposed approaches are more efficient than a simulated annealing algorithm.

Keywords: estimation of distribution algorithm, genetic algorithm, multi-subpopulation, scheduling, textile dyeing

Procedia PDF Downloads 292
3377 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a specific feature vector of different dimensions from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, such algorithms generally reduce image detail by pooling. This operation overlooks the details that forensic experts pay close attention to. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with known frontal ID photos of the same persons. Downscaling and manual handling were performed on the testing images. The results supported that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled in distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of the objective method of facial comparison and provides a novel method for human-machine collaboration in this field.
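
A schematic of score-level likelihood-ratio fusion of the kind referred to above: genuine and impostor score densities are modeled for the automated system and for the examiner, and the two log-likelihood ratios are added for a given comparison. The simulated score distributions, the kernel-density model and the example scores are assumptions for illustration, not the study's fusion rule or data.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
def simulate(mu_gen, mu_imp, n=500):
    return rng.normal(mu_gen, 1.0, n), rng.normal(mu_imp, 1.0, n)

alg_gen, alg_imp = simulate(3.0, 0.0)      # automated system scores (simulated)
exp_gen, exp_imp = simulate(2.0, 0.0)      # examiner ratings (simulated)

def log_lr(score, gen, imp):
    # log of genuine-vs-impostor density ratio, via kernel density estimates
    return np.log(gaussian_kde(gen)(score)) - np.log(gaussian_kde(imp)(score))

# Fused evidence for one comparison scored 2.4 by the system and 1.8 by the examiner
fused = log_lr(2.4, alg_gen, alg_imp) + log_lr(1.8, exp_gen, exp_imp)
print(f"fused log-likelihood ratio: {float(fused[0]):.2f}")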

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 119
3375 Location Uncertainty – A Probabilistic Solution for Automatic Train Control

Authors: Monish Sengupta, Benjamin Heydecker, Daniel Woodland

Abstract:

New train control systems rely mainly on Automatic Train Protection (ATP) and Automatic Train Operation (ATO) to dynamically control the speed and hence performance. The ATP and the ATO form the vital elements within the CBTC (Communication Based Train Control) and ERTMS (European Rail Traffic Management System) system architectures. Reliable and accurate measurement of train location, speed and acceleration is vital to the operation of train control systems. In the past, all CBTC and ERTMS systems have deployed balises or equivalent devices to correct the uncertainty in the train location. Typically, a CBTC train is allowed to miss only one balise on the track, after which the Automatic Train Protection (ATP) system applies the emergency brake to halt the service. This is because the location uncertainty, which grows within the train control system, cannot tolerate missing more than one balise. Balises contribute a significant amount towards wayside maintenance, and studies have shown that balises on the track also form a constraint on future track layout changes and changes in speed profile. This paper investigates the causes of the location uncertainty that is currently experienced and considers whether it is possible to identify an effective filter to ascertain, in conjunction with appropriate sensors, more accurate speed, distance and location for a CBTC-driven train without the need for any external balises. An appropriate sensor fusion algorithm and an intelligent sensor selection methodology will be deployed to ascertain the railway location and speed measurements at the highest precision. Similar techniques are already in use in aviation, satellite, submarine and other navigation systems. Developing a model for speed control and the use of a Kalman filter is a key element in this research. This paper summarizes the research undertaken and its significant findings, highlighting the potential for introducing alternative approaches to train positioning that would enable the removal of all trackside location correction balises, leading to a huge reduction in maintenance and more flexibility in future track design.
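
As background on the filtering element mentioned above, the following is a bare-bones constant-acceleration Kalman filter tracking position and speed from noisy position fixes. The state model, noise levels and measurement source are illustrative assumptions, not the sensor-fusion design investigated in the paper.

import numpy as np

dt = 0.1
F = np.array([[1, dt, 0.5 * dt**2],
              [0,  1,        dt ],
              [0,  0,         1 ]])          # state: position, speed, acceleration
H = np.array([[1.0, 0.0, 0.0]])              # only position is measured (assumed)
Q = 0.05 * np.eye(3)                         # process noise covariance (assumed)
R = np.array([[4.0]])                        # measurement noise variance (assumed)

x = np.zeros(3)
P = np.eye(3) * 10.0
rng = np.random.default_rng(0)
true_pos, speed = 0.0, 20.0                  # a train moving at 20 m/s

for _ in range(200):
    true_pos += speed * dt
    z = true_pos + rng.normal(0, 2.0)        # noisy position fix
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([z]) - H @ x)).ravel()
    P = (np.eye(3) - K @ H) @ P

print(f"estimated position: {x[0]:.1f} m, speed: {x[1]:.1f} m/s")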

Keywords: ERTMS, CBTC, ATP, ATO

Procedia PDF Downloads 406
3375 A Multi-Stage Learning Framework for Reliable and Cost-Effective Estimation of Vehicle Yaw Angle

Authors: Zhiyong Zheng, Xu Li, Liang Huang, Zhengliang Sun, Jianhua Xu

Abstract:

The yaw angle plays a significant role in many vehicle safety applications, such as collision avoidance and lane-keeping systems. Although the estimation of the yaw angle has been extensively studied in the existing literature, it remains a major challenge to simultaneously achieve a reliable and cost-effective solution in complex urban environments. This paper proposes a multi-stage learning framework to estimate the yaw angle with a monocular camera, which can deal with this challenge in a more reliable manner. In the first stage, an efficient road detection network is designed to extract the road region, providing a highly reliable reference for the estimation. In the second stage, a variational auto-encoder (VAE) is proposed to learn the distribution patterns of road regions, which is particularly suitable for modeling the changing patterns of the yaw angle under different driving maneuvers and can inherently enhance the generalization ability. In the last stage, a gated recurrent unit (GRU) network is used to capture the temporal correlations of the learned patterns, which can further improve the estimation accuracy because changes in the deflection angle are relatively easier to recognize across continuous frames. Afterward, the yaw angle can be obtained by combining the estimated deflection angle and the road direction stored in a roadway map. Through effective multi-stage learning, the proposed framework achieves high reliability while maintaining better accuracy. Road-test experiments with different driving maneuvers were performed in complex urban environments, and the results validate the effectiveness of the proposed framework.

Keywords: gated recurrent unit, multi-stage learning, reliable estimation, variational auto-encoder, yaw angle

Procedia PDF Downloads 129
3374 An Indoor Positioning System in Wireless Sensor Networks with Measurement Delay

Authors: Pyung Soo Kim, Eung Hyuk Lee, Mun Suck Jang

Abstract:

In this paper, an indoor positioning system is proposed with consideration of measurement delay. Firstly, an estimation filter with a measurement delay is designed for the indoor positioning mechanism under a weighted least squares criterion, which utilizes only finite measurements on the most recent window. The proposed estimation-filtering-based scheme gives the filtered estimates of the position, velocity and acceleration of the moving target in real time, while removing undesired noise effects and preserving the desired moving positions. Secondly, the proposed scheme is shown to have good inherent properties such as unbiasedness, efficiency, time-invariance, deadbeat, and robustness due to the finite memory structure. Finally, computer simulations show that the proposed estimation-filtering-based scheme can outperform the existing infinite-memory-filtering-based mechanism.

Keywords: indoor positioning system, wireless sensor networks, measurement delay

Procedia PDF Downloads 475
3373 An Algorithm to Compute the State Estimation of a Bilinear Dynamical Systems

Authors: Abdullah Eqal Al Mazrooei

Abstract:

In this paper, we introduce a mathematical algorithm for estimating the states of bilinear systems. This algorithm uses a special linearization of the second-order term based on the best available information about the state of the system. This technique makes our algorithm a generalization of the well-known Kalman estimators. The system considered here is of the bilinear class; the evolution of this model is linear-bilinear in the state of the system. Our algorithm can be used with linear and bilinear systems. We also introduce a real application of the new algorithm to demonstrate its feasibility and efficiency.

Keywords: estimation algorithm, bilinear systems, Kalman filter, second order linearization

Procedia PDF Downloads 479