Search results for: Hidden Markov model
17079 Horizontal Development of Built-up Area and Its Impacts on the Agricultural Land of Peshawar City District (1991-2014)
Authors: Pukhtoon Yar
Abstract:
Peshawar City is experiencing rapid spatial urban growth, primarily as a result of a high rate of urbanization along with economic development. This paper was designed to understand the impacts of urbanization on agricultural land use change, focusing in particular on land use change trajectories from 1991 to 2014. We used Landsat imagery (30 meters) for 1991 along with SPOT images (2.5 meters) for 2014. Ground truthing of the satellite data was performed by collecting information from the Peshawar Development Authority, the revenue department, and real estate agents, and through interviews with officials of the city administration. The temporal satellite images were processed by applying the supervised maximum likelihood classification technique in ArcGIS 9.3. The procedure resulted in five main classes of land use, i.e., built-up area, farmland, barren land, cultivable wasteland, and water bodies. The analysis revealed that the built-up environment of Peshawar City has more than doubled, from 8.1 percent in 1991 to over 18.2 percent in 2014, predominantly by encroaching on land producing food. Furthermore, the CA-Markov model predicted that the area under impervious surfaces would continue to expand during the next three decades. This rapid increase in built-up area is attributed to the lack of proper land use planning and management, which has caused chaotic urban sprawl with detrimental social and environmental consequences.
Keywords: urban expansion, land use, GIS, remote sensing, Markov model, Peshawar City
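The Markov component of a CA-Markov projection can be illustrated with a short sketch: a transition matrix estimated from two classified maps is applied repeatedly to the land-use class shares. The class shares and transition probabilities below are hypothetical placeholders, not the study's figures.

```python
import numpy as np

# Minimal sketch of the Markov component of a CA-Markov projection:
# land-use shares are propagated forward with a transition matrix
# estimated from two classified maps. All numbers are hypothetical.
classes = ["built-up", "farmland", "barren", "cultivable waste", "water"]

# Hypothetical 1991 shares (fractions of the study area).
shares_1991 = np.array([0.081, 0.52, 0.18, 0.19, 0.029])

# Hypothetical 1991->2014 transition probabilities (rows sum to 1).
P = np.array([
    [0.97, 0.01, 0.01, 0.01, 0.00],
    [0.15, 0.78, 0.03, 0.03, 0.01],
    [0.10, 0.05, 0.80, 0.04, 0.01],
    [0.12, 0.06, 0.05, 0.76, 0.01],
    [0.02, 0.02, 0.02, 0.02, 0.92],
])

shares = shares_1991
for step in range(3):          # three further multi-decade steps
    shares = shares @ P        # Markov projection of class shares
    print(f"step {step + 1}:", dict(zip(classes, shares.round(3))))
```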
Procedia PDF Downloads 186
17078 Stability Analysis of Green Coffee Export Markets of Ethiopia: Markov-Chain Analysis
Authors: Gabriel Woldu, Maria Sassi
Abstract:
Coffee plays a pivotal role in Ethiopia's GDP, revenue, employment, domestic demand, and export earnings. Ethiopia's coffee production and exports show high variability in volume and export earnings. Despite being the continent's fifth-largest coffee producer, Ethiopia has not established itself as a major exporter in global green coffee trade. Ethiopian coffee exports have not been stable and have shown high fluctuations in volume and earnings. The main aim of this study was to analyze the dynamics of variation in coffee exports to different importing nations using a first-order Markov chain model. Fourteen years of time-series data were used to examine the direction of and structural change in coffee exports. A compound annual growth rate (CAGR) was used to determine the annual growth rate in coffee export quantity, value, and per-unit price over the study period. The major export markets for Ethiopian coffee were Germany, Japan, and the USA, which were more stable, while countries such as France, Italy, Belgium, and Saudi Arabia were less stable and had low retention rates for Ethiopian coffee. The study, therefore, recommends that Ethiopia revitalize its markets in France, Italy, Belgium, and Saudi Arabia, as these are among the major coffee-consuming countries in the world, to boost its stake in global coffee markets in the future. To further enhance export stability, the Ethiopian Government and other stakeholders in the coffee sector should work to reduce the volatility of coffee output and exports, improve production and quality efficiency, stabilize markets, and make the product attractive and price-competitive in the importing countries.
Keywords: coffee, CAGR, Markov chain, direction of trade, Ethiopia
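As a rough sketch of the quantitative tools named above, the snippet below computes a CAGR and estimates a first-order Markov transition (retention) matrix from yearly destination shares by least squares, a common shortcut when only aggregate shares rather than trade flows are observed. All figures are hypothetical.

```python
import numpy as np

# Sketch: first-order Markov chain estimation from yearly export shares and a
# CAGR calculation, as used in direction-of-trade studies. Data are hypothetical.
def cagr(first, last, years):
    """Compound annual growth rate between the first and last observation."""
    return (last / first) ** (1.0 / years) - 1.0

# Hypothetical export shares (rows = years, columns = destination markets).
destinations = ["Germany", "Japan", "USA", "Others"]
shares = np.array([
    [0.30, 0.20, 0.15, 0.35],
    [0.32, 0.19, 0.16, 0.33],
    [0.31, 0.21, 0.14, 0.34],
    [0.33, 0.20, 0.15, 0.32],
])

# Least-squares estimate of P with shares[t+1] ~ shares[t] @ P; the result may
# need clipping/normalization since aggregate shares, not flows, are observed.
X, Y = shares[:-1], shares[1:]
P, *_ = np.linalg.lstsq(X, Y, rcond=None)
retention = np.clip(np.diag(P), 0, 1)   # diagonal = retention probabilities
print(dict(zip(destinations, retention.round(2))))

print("CAGR of export volume:", round(cagr(180e3, 240e3, 13), 4))  # tonnes, hypothetical
```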
Procedia PDF Downloads 139
17077 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing
Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan
Abstract:
This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in terms of capturing the stylized facts known for stock returns, namely, volatility clustering, leverage effect, skewness, kurtosis, and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis (PCA) of various world indices, and an application to option pricing is presented. The factors of the GARCHX model are extracted from a matrix of world indices by applying PCA. The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets, in our paper a pool of international stock indices, and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model following the same PCA methodology and against the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model by capturing the stylized facts known for index returns, namely, volatility clustering, leverage effect, skewness, kurtosis, and regime dependence.
Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium
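The PCA factor-extraction step described above can be sketched in a few lines: compute the cross-asset covariance matrix of index returns, take its leading eigenvectors, and project the returns onto them to obtain uncorrelated factors. The return matrix here is simulated rather than real index data.

```python
import numpy as np

# Sketch of the PCA step: extract uncorrelated common factors from a matrix of
# world-index returns to feed a GARCHX-type model. Returns are simulated.
rng = np.random.default_rng(0)
returns = rng.standard_normal((1000, 8)) @ rng.standard_normal((8, 8)) * 0.01

centered = returns - returns.mean(axis=0)
cov = np.cov(centered, rowvar=False)             # cross-asset covariance matrix
eigval, eigvec = np.linalg.eigh(cov)             # principal components
order = np.argsort(eigval)[::-1]                 # sort by explained variance
eigval, eigvec = eigval[order], eigvec[:, order]

k = 3                                            # keep three factors for GARCHX
factors = centered @ eigvec[:, :k]               # factor time series (uncorrelated)
explained = eigval[:k] / eigval.sum()
print("share of variance explained by each factor:", explained.round(3))
```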
Procedia PDF Downloads 297
17076 Cooperative Communication of Energy Harvesting Synchronized-OOK IR-UWB Based Tags
Authors: M. A. Mulatu, L. C. Chang, Y. S. Han
Abstract:
Energy harvesting tags with cooperative communication capabilities are emerging as a possible infrastructure for internet of things (IoT) applications. This paper studies the cooperative transmission strategy for a network of energy harvesting active networked tags (EnHANTs), adapted to the available energy resource and identification requests. We consider a network of EnHANT-equipped objects that communicate with the destination either directly or by cooperating with neighboring objects. We formulate the problem as a Markov decision process (MDP) under the synchronized On/Off keying (S-OOK) pulse modulation format. Simulation results are provided to show the performance of the cooperative transmission policy, compared against the greedy and conservative policies of single-link transmission.
Keywords: cooperative communication, transmission strategy, energy harvesting, Markov decision process, value iteration
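The value iteration named in the keywords can be sketched for a small generic MDP as below; the states, actions, transition probabilities, and rewards are random stand-ins (e.g., for battery levels and transmit/cooperate decisions), not the paper's formulation.

```python
import numpy as np

# Minimal value-iteration sketch for a small MDP. All quantities are hypothetical.
n_states, n_actions, gamma = 4, 2, 0.95

rng = np.random.default_rng(1)
# P[a, s, s'] = transition probability; R[s, a] = expected reward.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.uniform(0, 1, size=(n_states, n_actions))

V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * np.einsum("asn,n->sa", P, V)  # one-step lookahead
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)
print("optimal action per state:", policy, "values:", V.round(3))
```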
Procedia PDF Downloads 492
17075 A Condition-Based Maintenance Policy for Multi-Unit Systems Subject to Deterioration
Authors: Nooshin Salari, Viliam Makis
Abstract:
In this paper, we propose a condition-based maintenance policy for multi-unit systems, considering the existence of economic dependency among units. We consider a system composed of N identical units, where each unit deteriorates independently. The deterioration process of each unit is modeled as a three-state continuous-time homogeneous Markov chain with two working states and a failure state. The average production rate of the units varies across the working states, and the demand rate of the system is constant. Units are inspected at equidistant time epochs, and the decision regarding performing maintenance is determined by the number of units in the failure state. If the total number of units in the failure state exceeds a critical level, maintenance is initiated, where units in the failed state are replaced correctively and units in deteriorated states are maintained preventively. Our objective is to determine the optimal number of failed units to initiate maintenance, minimizing the long-run expected average cost per unit time. The problem is formulated and solved in the semi-Markov decision process (SMDP) framework. A numerical example is developed to demonstrate the proposed policy, and a comparison with the corrective maintenance policy is presented.
Keywords: reliability, maintenance optimization, semi-Markov decision process, production
Procedia PDF Downloads 165
17074 Quantifying Spatiotemporal Patterns of Past and Future Urbanization Trends in El Paso, Texas and Their Impact on Electricity Consumption
Authors: Joanne Moyer
Abstract:
El Paso, Texas is a southwest border city that has experienced continuous growth within the last 15 years. Understanding the urban growth trends and patterns using data from the National Land Cover Database (NLCD) and landscape metrics provides a quantitative description of growth. Past urban growth provided a basis for predicting 2031 land use for El Paso using the CA-Markov model. As a consequence of growth, an increase in demand for resources follows. Using panel data analysis, the relation between landscape metrics and electricity consumption is further analyzed. The study's findings indicate that past growth was focused within three districts of the City of El Paso. The landscape metrics suggest that, as the city has grown, fragmentation has decreased. Alternatively, the landscape metrics for the projected 2031 land use indicate possible fragmentation within one of these districts. The panel data suggest that electricity consumption and the mean patch area landscape metric are positively correlated. The study provides local decision makers with a basis for making informed policy and urban planning decisions to ensure a sustainable future community.
Keywords: landscape metrics, CA-Markov, El Paso, Texas, panel data
Procedia PDF Downloads 144
17073 Computerized Scoring System: A Stethoscope to Understand Consumer's Emotion through His or Her Feedback
Authors: Chen Yang, Jun Hu, Ping Li, Lili Xue
Abstract:
Most companies pay careful attention to collecting consumer feedback, so the 'feedback' button is common in all kinds of mobile apps. Yet it is much more challenging to analyze these feedback texts and to catch the true feelings of the consumer who hands out the feedback, whether it concerns a problem or a compliment. This is especially true for Chinese content: in one context a piece of feedback may be positive, while in another context the same feedback may be negative. For example, in Chinese, the feedback 'operating with loudness' applies to both a refrigerator and a stereo system. Toward a refrigerator this feedback is clearly negative; however, the same feedback is positive toward a stereo system. By introducing Bradley, M. and Lang, P.'s Affective Norms for English Text (ANET) theory and Bucci, W.'s Referential Activity (RA) theory, we, usability researchers at Pingan, are able to decipher the feedback and to find the hidden feelings behind the content. We take two dimensions, 'valence' and 'dominance', out of the three in ANET and two dimensions, 'concreteness' and 'specificity', out of the four in RA to organize our own rating system on a scale of 1 to 5 points. This rating system enables us to judge the feelings/emotions behind each piece of feedback, and it works well with both a single word/phrase and a whole paragraph. The result of the rating reflects the strength of the feeling/emotion of the consumer when he/she is typing the feedback. In our daily work, we first require a consumer to answer the net promoter score (NPS) question before writing the feedback, so we can determine whether the feedback is positive or negative. Secondly, we code the feedback content according to the company's problem list, which contains 200 problem items. In this way, we are able to collect data on how many feedback entries left by consumers belong to one typical problem. Thirdly, we rate each piece of feedback based on the rating system mentioned above to illustrate the strength of the feeling/emotion when the consumer writes the feedback. In this way, we actually obtain two kinds of data: 1) the portion, which means how many feedback entries are ascribed to one problem item, and 2) the severity, i.e., how strong the negative feeling/emotion is when the consumer writes this feedback. By crossing these two, plotting the portion on the X-axis and the severity on the Y-axis, we are able to find which typical problems score high on both portion and severity. The higher a problem scores, the more urgently it should be solved, as it means more people express stronger negative feelings in feedback regarding this problem. Moreover, by introducing a hidden Markov model to program our rating system, we are able to computerize the scoring system and to process thousands of feedback entries in a short period of time, which is efficient and accurate enough for industrial purposes.
Keywords: computerized scoring system, feeling/emotion of consumer feedback, referential activity, text mining
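The portion-versus-severity prioritization can be sketched as follows; the problem items and severity scores are invented examples on the 1-to-5 scale described above, not company data.

```python
from collections import defaultdict

# Sketch of the portion-versus-severity prioritization.
# Each record is (problem_item, severity score on the 1-5 rating scale);
# the records and problem names are hypothetical.
feedback = [
    ("slow loading", 4), ("slow loading", 5), ("slow loading", 4),
    ("login failure", 5), ("login failure", 5),
    ("confusing layout", 2), ("confusing layout", 3),
]

counts, severities = defaultdict(int), defaultdict(list)
for problem, score in feedback:
    counts[problem] += 1
    severities[problem].append(score)

total = len(feedback)
report = []
for problem in counts:
    portion = counts[problem] / total                       # X-axis: share of all feedback
    severity = sum(severities[problem]) / counts[problem]   # Y-axis: mean strength
    report.append((problem, portion, severity, portion * severity))

# Problems high on both axes come first and are the most urgent to fix.
for problem, portion, severity, urgency in sorted(report, key=lambda r: -r[3]):
    print(f"{problem:18s} portion={portion:.2f} severity={severity:.2f} urgency={urgency:.2f}")
```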
Procedia PDF Downloads 177
17072 What Lies Beneath: Kanti Shah's Children of Midnight
Authors: Vibhushan Subba
Abstract:
B-movies are almost always 'glanced over', 'swept beneath', 'hidden from' and 'locked away' to live a secret life; a life that exists but enjoys only a mummified existence behind layers of protective covering. They are more often than not discarded as 'trash', 'sleaze', or 'porn' and put down for their 'bad taste', or at least that has been the case in India. With the art film entering the realm of high art, the popular and the mainstream have been increasingly equated with the A-grade Bollywood film. This leaves the B-movie to survive as a degraded cultural artifact on the fringes of the mainstream. Kanti Shah's films are part of a secret, traversing the libidinal circuits of the B and C grade through history. His films still circulate like a corporeal reminder of the forbidden and the taboo, like a hidden fracture that threatens to split open bourgeois respectability. Seeking answers to an aesthetic that has been rejected and hidden, this paper looks at three films of Kanti Shah to see how the notions of taboo, censorship and the unseen coincide, how they operate in the domain of his cinema, and to understand a form that draws our attention to the subterranean forces at work.
Keywords: B-movies, trash, taboo, censorship
Procedia PDF Downloads 462
17071 The First Language of Humanity is Body Language Neither Mother or Native Language
Authors: Badriah Khaleel
Abstract:
Language acquisition is one of the most striking aspects of human development. It is a startling feat, which has engrossed the attention of linguists for generations. The present study explores the hidden identities and attributes of nonverbal gestures. The current research reflects the significant role of body language, not as mere body gestures or facial expressions, but as the first language of humanity.
Keywords: a startling feat, a new horizon for linguists to rethink, explore the hidden identities and attributes of non-verbal gestures, English as a third language, the first language of humanity
Procedia PDF Downloads 505
17070 Fuzzy Time Series- Markov Chain Method for Corn and Soybean Price Forecasting in North Carolina Markets
Authors: Selin Guney, Andres Riquelme
Abstract:
Among the main purposes of optimal and efficient forecasts of agricultural commodity prices is to guide firms in advancing their economic decision-making, such as planning business operations and marketing decisions. Governments are also beneficiaries and suppliers of agricultural price forecasts. They use this information to establish a proper agricultural policy; hence, the forecasts affect social welfare, and systematic errors in forecasts could lead to a misallocation of scarce resources. Various empirical approaches with different methodologies have been applied to forecast commodity prices. The most commonly used approaches in commodity sectors depend on classical time series models that assume the values of the response variables are precise, which is quite often not true in reality. Recently, this literature has largely evolved toward fuzzy time series models, which provide more flexibility with respect to classical time series model assumptions such as stationarity and large sample size requirements. In addition, the fuzzy modeling approach allows decision making with estimated values under incomplete information or uncertainty. A number of fuzzy time series models have been developed and implemented over the last decades; however, most of them are not appropriate for forecasting repeated and nonconsecutive transitions in the data. The modeling scheme used in this paper eliminates this problem by introducing a Markov modeling approach that takes both repeated and nonconsecutive transitions into account. Also, the determination of the length of interval is crucial for the accuracy of forecasts. The problem of determining the length of interval arbitrarily is overcome, and a methodology to determine the proper length of interval based on the distribution or mean of the first differences of the series is proposed to improve forecast accuracy. The specific purpose of this paper is to propose and investigate the potential of a new forecasting model that integrates a methodology for determining the proper length of interval based on the distribution or mean of the first differences of the series with the fuzzy time series-Markov chain model. Moreover, the forecasting accuracy of the proposed integrated model is compared with that of different univariate time series models, and the superiority of the proposed method over competing methods with respect to modelling and forecasting is demonstrated on the basis of forecast evaluation criteria. The application is to daily corn and soybean prices observed at three commercially important North Carolina markets: Candor, Cofield and Roaring River for corn, and Fayetteville, Cofield and Greenville City for soybeans, respectively. One main conclusion from this paper is that using fuzzy logic improves forecast performance and accuracy; the effectiveness and potential benefits of the proposed model are confirmed by small values of selection criteria such as MAPE. The paper concludes with a discussion of the implications of integrating fuzzy logic and non-arbitrary determination of the length of interval for the reliability and accuracy of price forecasts. The empirical results represent a significant contribution to our understanding of the applicability of fuzzy modeling in commodity price forecasts.
Keywords: commodity, forecast, fuzzy, Markov
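A compact sketch of the fuzzy time series-Markov chain idea: prices are fuzzified into equal-width intervals (with the width taken as the mean of the absolute first differences, as suggested above), a transition matrix between fuzzy states is counted, and the one-step forecast is the probability-weighted mean of interval midpoints. The price series is simulated, not the North Carolina data.

```python
import numpy as np

# Compact fuzzy time series-Markov chain forecast sketch on simulated prices.
rng = np.random.default_rng(2)
prices = 4.0 + np.cumsum(rng.normal(0, 0.03, size=300))   # hypothetical daily corn prices

width = np.abs(np.diff(prices)).mean()                    # interval-length rule
edges = np.arange(prices.min(), prices.max() + width, width)
states = np.clip(np.digitize(prices, edges) - 1, 0, len(edges) - 2)

k = len(edges) - 1
P = np.zeros((k, k))
for s, s_next in zip(states[:-1], states[1:]):            # count fuzzy-state transitions
    P[s, s_next] += 1
P = P / np.maximum(P.sum(axis=1, keepdims=True), 1)       # row-normalize

mids = (edges[:-1] + edges[1:]) / 2
current = states[-1]
forecast = P[current] @ mids if P[current].sum() > 0 else mids[current]
print("last price:", round(float(prices[-1]), 3), "one-step forecast:", round(float(forecast), 3))
```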
Procedia PDF Downloads 218
17069 Maintenance Optimization for a Multi-Component System Using Factored Partially Observable Markov Decision Processes
Authors: Ipek Kivanc, Demet Ozgur-Unluakin
Abstract:
Over the past years, technological innovations and advancements have played an important role in the industrial world. Due to technological improvements, the degree of complexity of systems has increased. Hence, systems are becoming more uncertain as a result of this increased complexity, which in turn results in more cost, and it is challenging to cope with this situation. Implementing efficient planning of maintenance activities in such systems is therefore becoming more essential. Partially Observable Markov Decision Processes (POMDPs) are powerful tools for stochastic sequential decision problems under uncertainty. Although maintenance optimization in a dynamic environment can be modeled as such a sequential decision problem, POMDPs are not widely used for tackling maintenance problems; however, they can be well-suited frameworks for obtaining optimal maintenance policies. In the classical representation of the POMDP framework, the system is denoted by a single node which has multiple states. The main drawback of this classical approach is that the state space grows exponentially with the number of state variables. On the other hand, the factored representation of POMDPs simplifies the complexity of the states by taking advantage of the factored structure already available in the nature of the problem. The main idea of factored POMDPs is that they can be compactly modeled through dynamic Bayesian networks (DBNs), which are graphical representations of stochastic processes, by exploiting the structure of this representation. This study aims to demonstrate how maintenance planning of dynamic systems can be modeled with factored POMDPs. An empirical maintenance planning problem of a dynamic system consisting of four partially observable components deteriorating in time is designed. To solve the empirical model, we resort to the Symbolic Perseus solver, one of the state-of-the-art factored POMDP solvers that enables approximate solutions. We also generate predefined policies based on corrective or proactive maintenance strategies. We execute the policies on the empirical problem for many replications and compare their performances under various scenarios. The results show that the policies computed from the POMDP model are superior to the others. Acknowledgment: This work is supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) under grant no: 117M587.
Keywords: factored representation, maintenance, multi-component system, partially observable Markov decision processes
Procedia PDF Downloads 136
17068 Statistical Design of Synthetic VP X-bar Control Chart Using Markov Chain Approach
Authors: Ali Akbar Heydari
Abstract:
Control charts are an important tool of statistical quality control. These charts are used to detect and eliminate unwanted special causes of variation that occur during a period of time. The design and operation of control charts require the determination of three design parameters: the sample size (n), the sampling interval (h), and the width coefficient of the control limits (k). The variable parameters (VP) x-bar control chart is the x-bar chart in which all the design parameters vary between two values. These values are a function of the most recent process information. In fact, in the VP x-bar chart, the position of each sample point on the chart establishes the size of the next sample and the time of its sampling. The synthetic x-bar control chart, which integrates the x-bar chart and the conforming run length (CRL) chart, provides significant improvement in terms of detection power over the basic x-bar chart for all levels of mean shift. In this paper, we introduce the synthetic VP x-bar control chart for monitoring changes in the process mean. To determine the design parameters, we used a statistical design based on the minimum out-of-control average run length (ARL) criterion. The optimal chart parameters of the proposed chart are obtained using the Markov chain approach. A numerical example is also given to show the performance of the proposed chart and to compare it with other control charts. The results show that our proposed synthetic VP x-bar control chart performs better than the synthetic x-bar control chart for all shift parameter values. Also, the synthetic VP x-bar control chart performs better than the VP x-bar control chart for moderate or large shift parameter values.
Keywords: control chart, Markov chain approach, statistical design, synthetic, variable parameter
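The Markov chain approach to the average run length can be sketched as follows: the chart's transient (in-control) states form a sub-matrix Q of the transition matrix, the out-of-control signal is the absorbing state, and the vector of ARLs is (I − Q)⁻¹·1. The transition probabilities below are hypothetical, not those of the designed chart.

```python
import numpy as np

# Markov chain ARL sketch: transient chart states in Q, signal = absorption,
# ARL = first component of (I - Q)^{-1} * 1. Probabilities are hypothetical.
Q = np.array([
    [0.90, 0.05, 0.02],   # e.g., sample falls in the central region
    [0.30, 0.55, 0.10],   # warning region -> tightened parameters at next sample
    [0.20, 0.20, 0.50],   # CRL-related transient state
])
# Row sums are < 1; the remainder is the probability of signalling (absorption).

n = Q.shape[0]
arl_vector = np.linalg.solve(np.eye(n) - Q, np.ones(n))
print("ARL from each starting state:", arl_vector.round(2))
print("zero-state ARL (start in state 0):", round(float(arl_vector[0]), 2))
```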
Procedia PDF Downloads 155
17067 Orthogonal Basis Extreme Learning Algorithm and Function Approximation
Abstract:
A new algorithm for single hidden layer feedforward neural networks (SLFN), the Orthogonal Basis Extreme Learning (OBEL) algorithm, is proposed, and its derivation is given in the paper. The algorithm can decide both the network parameters and the number of hidden-layer neurons during training while providing extremely fast learning speed. It provides a practical way to develop neural networks. The simulation results for function approximation show that the algorithm is effective and feasible, with good accuracy and adaptability.
Keywords: neural network, orthogonal basis extreme learning, function approximation
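A sketch of an extreme-learning-style network with an orthogonalized hidden basis is given below: random hidden weights, QR orthogonalization of the hidden-layer output matrix, and a closed-form least-squares fit of the output weights. It is an illustrative reconstruction of the general idea under these assumptions, not the OBEL algorithm itself.

```python
import numpy as np

# Extreme-learning-style function approximation with an orthogonal hidden basis.
# Illustrative reconstruction only; not the OBEL algorithm itself.
rng = np.random.default_rng(3)

# Function approximation target.
x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
y = np.sin(x) + 0.05 * rng.standard_normal(x.shape)

n_hidden = 20
W = rng.standard_normal((1, n_hidden))           # random input weights (fixed)
b = rng.standard_normal(n_hidden)                # random biases (fixed)
H = np.tanh(x @ W + b)                           # hidden-layer output matrix

Q_mat, R_mat = np.linalg.qr(H)                   # orthogonal basis of the hidden outputs
beta_q = Q_mat.T @ y                             # output weights w.r.t. the orthogonal basis
y_hat = Q_mat @ beta_q

rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print("training RMSE:", round(float(rmse), 4))
```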
Procedia PDF Downloads 536
17066 Pricing and Economic Benefits of Commercial Insurance Incorporated into Home-based Hospice Care
Authors: Lie-Fen Lin, Tzu-Hsuan Lin, Ching-Heng Lin
Abstract:
Hospice care for terminally ill patients provides not only a better quality of life but also cost-saving benefits. However, the utilization of home-based hospice care (HBH care) remains low, even under the National Health Insurance (NHI) program in Taiwan. Current commercial insurance policies cover only hospital-based hospice benefits, which may influence insureds to choose to receive end-of-life care in a hospital setting. Thus, proposing a feasible way to promote the HBH care utilization rate is an important public health policy issue. A total of 130,219 cancer decedents from the years 2011-2013 in the National Health Insurance Research Database (NHIRD) in Taiwan were included in this study. Adding a day-volume-pays benefit for HBH care as a commercial insurance rider would provide alternative benefits for the insureds. A multiple-state Markov chain model was incorporated to estimate the transition intensities of patients in different states at the end of their lives (non-hospice, HBH, hospital-based hospice), and the premiums were estimated. HBH care insurance benefits provide financial support and reduce the burden of care for patients. The rate-making of this product is very sensitive to a rising utilization rate, especially at high ages. The proposed HBH care insurance is a feasible way to reduce the financial burden and enhance the care quality and family satisfaction of insureds. Meanwhile, insurance companies can participate in advocating a good medical policy and thereby enhance their social image. In addition, the medical costs of the NHI can be reduced effectively.
Keywords: home-based hospice care, commercial insurance, Markov chain model, the day volume pays
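Pricing such a daily-benefit rider can be illustrated with a discrete-time multi-state Markov chain: propagate the state distribution day by day, accumulate the expected number of days spent in the HBH state, and multiply by the daily benefit. The transition probabilities, benefit amount, and horizon below are hypothetical, and interest and expense loadings are ignored.

```python
import numpy as np

# Discrete-time multi-state sketch for pricing a daily HBH benefit.
# All figures are hypothetical illustrations only.
states = ["non-hospice", "HBH", "hospital-hospice", "dead"]
P = np.array([
    [0.96, 0.02, 0.01, 0.01],   # non-hospice
    [0.01, 0.93, 0.02, 0.04],   # home-based hospice (benefit paid per day)
    [0.01, 0.02, 0.92, 0.05],   # hospital-based hospice
    [0.00, 0.00, 0.00, 1.00],   # absorbing state
])

daily_benefit = 1500.0          # currency units per day spent in the HBH state
horizon = 365                   # days considered in the pricing window

dist = np.array([1.0, 0.0, 0.0, 0.0])   # everyone starts in non-hospice
expected_hbh_days = 0.0
for _ in range(horizon):
    dist = dist @ P
    expected_hbh_days += dist[1]         # probability mass in the HBH state each day

net_premium = daily_benefit * expected_hbh_days   # ignoring interest and loadings
print("expected HBH days:", round(expected_hbh_days, 2))
print("net single premium:", round(net_premium, 2))
```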
Procedia PDF Downloads 216
17065 On the Convergence of the Mixed Integer Randomized Pattern Search Algorithm
Authors: Ebert Brea
Abstract:
We propose a novel direct search algorithm for identifying at least a local minimum of mixed integer nonlinear unconstrained optimization problems. The Mixed Integer Randomized Pattern Search Algorithm (MIRPSA), so called by the author, is based on a randomized pattern search, which is modified by the MIRPSA for finding at least a local minimum of our problem. The MIRPSA has two main operations over the randomized pattern search: a moving operation and a shrinking operation. Each operation is carried out by the algorithm when a set of conditions holds. The convergence properties of the MIRPSA are analyzed using a Markov chain approach, represented by a countably infinite state space λ, where each state d(q) is defined by a measure of the qth randomized pattern search Hq, for all q in N. According to the algorithm, when a moving operation is carried out on the qth randomized pattern search Hq, the MIRPSA holds its state. Meanwhile, if the MIRPSA carries out a shrinking operation over the qth randomized pattern search Hq, the algorithm will visit the next state; that is, a shrinking operation at the qth state changes the qth state into the (q+1)th state. It is worth pointing out that the MIRPSA never goes back to any visited state because it only reaches a new state through shrinking operations. In this article, we describe the MIRPSA for mixed integer nonlinear unconstrained optimization problems and carry out a deep study of its convergence properties from a Markov chain viewpoint. We include a low-dimensional case to show more details of the MIRPSA when the algorithm is used to identify the minimum of a mixed integer quadratic function. In addition, numerical examples are shown in order to measure the performance of the MIRPSA.
Keywords: direct search, mixed integer optimization, random search, convergence, Markov chain
Procedia PDF Downloads 472
17064 Optimizing the Probabilistic Neural Network Training Algorithm for Multi-Class Identification
Authors: Abdelhadi Lotfi, Abdelkader Benyettou
Abstract:
In this work, a training algorithm for probabilistic neural networks (PNN) is presented. The algorithm addresses one of the major drawbacks of PNNs, which is the size of the hidden layer in the network. By using a cross-validation training algorithm, the number of hidden neurons is shrunk to a smaller number consisting of the most representative samples of the training set. This is done without affecting the overall architecture of the network. The performance of the network is compared against that of a standard PNN for different databases from the UCI database repository. The results show an important gain in network size and performance.
Keywords: classification, probabilistic neural networks, network optimization, pattern recognition
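A minimal PNN sketch is given below: each training sample acts as a Gaussian kernel in the pattern (hidden) layer, and the class with the largest summed kernel response wins. The "pruning" shown is only an illustrative stand-in for the cross-validation selection of representative samples; the data are synthetic.

```python
import numpy as np

# Minimal probabilistic neural network (PNN) sketch on synthetic 2-D data.
rng = np.random.default_rng(4)
X0 = rng.normal(0.0, 1.0, size=(50, 2))      # class 0 samples
X1 = rng.normal(3.0, 1.0, size=(50, 2))      # class 1 samples
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 50 + [1] * 50)

def pnn_predict(x, X, y, sigma=0.5):
    """Classify one point with Parzen-window class-conditional density estimates."""
    classes = np.unique(y)
    scores = []
    for c in classes:
        Xc = X[y == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    return int(classes[np.argmax(scores)])

# Illustrative pruning: keep every other pattern neuron (a stand-in for the
# cross-validation selection of the most representative samples).
keep = np.arange(len(y_train)) % 2 == 0
X_small, y_small = X_train[keep], y_train[keep]

test = np.array([2.5, 2.5])
print("full PNN:", pnn_predict(test, X_train, y_train))
print("pruned PNN:", pnn_predict(test, X_small, y_small))
```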
Procedia PDF Downloads 265
17063 Modeling of Global Solar Radiation on a Horizontal Surface Using Artificial Neural Network: A Case Study
Authors: Laidi Maamar, Hanini Salah
Abstract:
The present work investigates the potential of an artificial neural network (ANN) model to predict the horizontal global solar radiation (HGSR). The ANN is developed and optimized using a three-year meteorological database, from 2011 to 2013, available at the meteorological station of Blida (Blida 1 University, Algeria; latitude 36.5°, longitude 2.81°, 163 m above mean sea level). The optimal configuration of the ANN model has been determined by minimizing the root mean square error (RMSE) and maximizing the correlation coefficient (R²) between observed data and data predicted by the ANN model. To select the best ANN architecture, we conducted several tests using different combinations of parameters. A two-layer ANN model with six hidden neurons has been found to be the optimal topology, with RMSE = 4.036 W/m² and R² = 0.999. A graphical user interface (GUI) was designed, based on the best network structure and training algorithm, to make the model more user-friendly.
Keywords: artificial neural network, global solar radiation, solar energy, prediction, Algeria
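A sketch of a single-hidden-layer network with six hidden neurons for this kind of regression task is shown below using scikit-learn. The inputs (sunshine duration, temperature, humidity) and the synthetic data are assumptions for illustration only, not the Blida station records.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

# Single-hidden-layer ANN (six hidden neurons) for solar radiation regression.
# Inputs and data are synthetic placeholders, not the Blida station records.
rng = np.random.default_rng(5)
n = 1000
X = np.column_stack([
    rng.uniform(0, 1, n),    # normalized sunshine duration (assumed input)
    rng.uniform(0, 1, n),    # normalized ambient temperature (assumed input)
    rng.uniform(0, 1, n),    # normalized relative humidity (assumed input)
])
y = 800 * X[:, 0] + 150 * X[:, 1] - 100 * X[:, 2] + rng.normal(0, 10, n)  # W/m², synthetic

model = MLPRegressor(hidden_layer_sizes=(6,), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X[:800], y[:800])
pred = model.predict(X[800:])

print("RMSE (W/m²):", round(float(np.sqrt(mean_squared_error(y[800:], pred))), 3))
print("R²:", round(float(r2_score(y[800:], pred)), 4))
```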
Procedia PDF Downloads 499
17062 A Generative Adversarial Framework for Bounding Confounded Causal Effects
Authors: Yaowei Hu, Yongkai Wu, Lu Zhang, Xintao Wu
Abstract:
Causal inference from observational data is finding wide application in many fields. However, unidentifiable situations, where causal effects cannot be uniquely computed from observational data, pose critical barriers to applying causal inference to complicated real applications. In this paper, we develop a bounding method for estimating the average causal effect (ACE) under unidentifiable situations due to hidden confounders. We propose to parameterize the unknown exogenous random variables and structural equations of a causal model using neural networks and implicit generative models. Then, with an adversarial learning framework, we search the parameter space to explicitly traverse causal models that agree with the given observational distribution and find those that minimize or maximize the ACE to obtain its lower and upper bounds. The proposed method does not make any assumption about the data generating process or the types of the variables. Experiments using both synthetic and real-world datasets show the effectiveness of the method.
Keywords: average causal effect, hidden confounding, bound estimation, generative adversarial learning
Procedia PDF Downloads 193
17061 Performance Evaluation of the Classic seq2seq Model versus a Proposed Semi-supervised Long Short-Term Memory Autoencoder for Time Series Data Forecasting
Authors: Aswathi Thrivikraman, S. Advaith
Abstract:
The study is aimed at designing encoders for deciphering intricacies in time series data by redescribing the dynamics operating on a lower-dimensional manifold. A semi-supervised LSTM autoencoder is devised and investigated to see if the latent representation of the time series data can better forecast the data. End-to-end training of the LSTM autoencoder, together with another LSTM network that is connected to the latent space, forces the hidden states of the encoder to represent the most meaningful latent variables relevant for forecasting. Furthermore, the study compares the predictions with those of a traditional seq2seq model.
Keywords: LSTM, autoencoder, forecasting, seq2seq model
Procedia PDF Downloads 156
17060 Understanding Children's Visual Attention to Personal Protective Equipment Using Eye-Tracking
Authors: Vanessa Cho, Janet Hsiao, Nigel King, Robert Anthonappa
Abstract:
Background: The personal protective equipment (PPE) requirements for health care workers (HCWs) have changed significantly during the COVID-19 pandemic. Aim: To ascertain, using eye-tracking technology, what children notice the most when seeing HCWs in various PPE. Design: A Tobii nano pro eye-tracking camera tracked the visual attention of 156 children while they viewed photographs of HCWs in various PPE. Eye Movement analysis with Hidden Markov Models (EMHMM) was employed to analyse 624 recordings using two approaches, namely (i) data-driven, where children's fixations determined the regions of interest (ROIs), and (ii) fixed ROIs, where the investigators predefined the ROIs. Results: Two significant eye movement patterns, namely distributed (85.2%) and selective (14.7%), were identified (P<0.05). Most children fixated primarily on the face regardless of the different PPE. Children fixated equally on all PPE images in the distributed pattern, while a strong preference for unmasked faces was evident in the selective pattern (P<0.01). Conclusion: Children as young as 2.5 years used top-down visual search behaviour and demonstrated their face processing ability. Most children did not show a strong visual preference for a specific PPE, while a minority preferred PPE with distinct facial features, namely without masks and loupes.
Keywords: COVID-19, PPE, dentistry, pediatric
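A rough analogue of the hidden-Markov-model step in EMHMM can be sketched with the hmmlearn package (an assumption here; the study itself used the EMHMM toolbox): each hidden state corresponds to a spatial region of interest with a Gaussian emission over (x, y) fixation coordinates. The fixation sequences below are simulated.

```python
import numpy as np
from hmmlearn import hmm  # assumed available; the study itself used the EMHMM toolbox

# Fit an HMM to simulated fixation data: hidden states ~ spatial ROIs with
# Gaussian emissions over (x, y) coordinates.
rng = np.random.default_rng(6)

def simulate_sequence(n_fix):
    """Alternate between a 'face' region and a 'mask' region with sticky dynamics."""
    centers = np.array([[0.5, 0.7], [0.5, 0.4]])   # hypothetical ROI centers
    state, points = 0, []
    for _ in range(n_fix):
        if rng.random() < 0.2:                     # occasional switch of ROI
            state = 1 - state
        points.append(centers[state] + rng.normal(0, 0.05, 2))
    return np.array(points)

sequences = [simulate_sequence(int(rng.integers(15, 30))) for _ in range(50)]
X = np.vstack(sequences)
lengths = [len(s) for s in sequences]

model = hmm.GaussianHMM(n_components=2, covariance_type="full",
                        n_iter=100, random_state=0)
model.fit(X, lengths)

print("estimated ROI centers:\n", model.means_.round(2))
print("transition matrix:\n", model.transmat_.round(2))
```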
Procedia PDF Downloads 91
17059 An Analytical Approach to Assess and Compare the Vulnerability Risk of Operating Systems
Authors: Pubudu K. Hitigala Kaluarachchilage, Champike Attanayake, Sasith Rajasooriya, Chris P. Tsokos
Abstract:
Operating system (OS) security is a key component of computer security. Assessing and improving an OS's strength to resist vulnerabilities and attacks is a mandatory requirement, given the rate at which new vulnerabilities are discovered and attacks occur. The frequency and number of different kinds of vulnerabilities found in an OS can be considered an index of its information security level. In the present study, five widely used OSs, Microsoft Windows (Windows 7, Windows 8 and Windows 10), Apple's Mac and Linux, are assessed for their discovered vulnerabilities and the risk associated with each. Each discovered and reported vulnerability has an exploitability score assigned in the CVSS score of the National Vulnerability Database. In this study, the risk from vulnerabilities in each of the five operating systems is compared. The risk indexes used are developed based on the Markov model to evaluate the risk of each vulnerability. The statistical methodology and underlying mathematical approach are described. Initially, parametric procedures were conducted; however, violations of some statistical assumptions were observed, and therefore the need for non-parametric approaches was recognized. A total of 6,838 recorded vulnerabilities were considered in the analysis. Based on the risk associated with all the vulnerabilities considered, it was found that there is a statistically significant difference among the average risk levels of some operating systems, indicating that, according to our method and given the assumptions and limitations, some operating systems have been more risk-vulnerable than others. Relevant test results revealing a statistically significant difference in the risk levels of different OSs are presented.
Keywords: cybersecurity, Markov chain, non-parametric analysis, vulnerability, operating system
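The non-parametric comparison step can be sketched with a Kruskal-Wallis test on per-vulnerability risk scores grouped by operating system, which avoids the normality assumptions the parametric procedures violated. The risk scores below are simulated, not NVD data.

```python
import numpy as np
from scipy.stats import kruskal

# Non-parametric comparison of per-vulnerability risk scores across OSs.
# The scores below are simulated, not NVD/CVSS data.
rng = np.random.default_rng(7)
risk_by_os = {
    "Windows 10": rng.gamma(2.0, 2.0, size=300),
    "macOS":      rng.gamma(2.2, 2.0, size=250),
    "Linux":      rng.gamma(1.8, 2.0, size=280),
}

stat, p_value = kruskal(*risk_by_os.values())
print("Kruskal-Wallis H =", round(float(stat), 3), "p =", round(float(p_value), 4))
if p_value < 0.05:
    print("At least one OS differs significantly in its average risk level.")
else:
    print("No statistically significant difference detected.")
```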
Procedia PDF Downloads 183
17058 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images
Authors: Mehrnoosh Omati, Mahmod Reza Sahebi
Abstract:
Information on land use/land cover change plays an essential role in environmental assessment, planning and management in regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, first, two PolSAR images are segmented using an integration of the marker-controlled watershed algorithm and a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object-based level. The experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show a 3% and 6% improvement in overall accuracy and kappa coefficient, respectively. Also, the proposed method can correctly distinguish homogeneous image parcels.
Keywords: coupled Markov random field (MRF), environment, object-based analysis, polarimetric SAR (PolSAR) images
Procedia PDF Downloads 219
17057 Reliability Analysis for the Functioning of Complete and Low Capacity MLDB Systems in Piston Plants
Authors: Ramanpreet Kaur, Upasana Sharma
Abstract:
The purpose of this paper is to address the challenges facing the water supply for the Machine Learning Database (MLDB) system at the piston foundry plant. In the MLDB system, one main unit, i.e., the robotic unit, is connected to two sub-units. The functioning of the system depends on the robotic unit and the water supply. Lack of water supply causes system failure. The system operates at full capacity with the help of the two sub-units. If one sub-unit fails, the system runs at low capacity. Reliability modeling is performed using semi-Markov processes and the regenerative point technique. Several system measures, such as mean time to system failure, availability at full capacity, availability at reduced capacity, busy period for repair, and expected number of visits, have been obtained. Benefits have been analyzed. A graphical study is designed for a specific case using programming in C++ and MS Excel.
Keywords: MLDB system, robotic, semi-Markov process, regenerative point technique
Procedia PDF Downloads 103
17056 New Segmentation of Piecewise Linear Regression Models Using Reversible Jump MCMC Algorithm
Authors: Suparman
Abstract:
Piecewise linear regression models are very flexible models for modeling data. When piecewise linear regression models are fitted to data, their parameters are generally not known. This paper studies the problem of parameter estimation for piecewise linear regression models. The method used to estimate the parameters of piecewise linear regression models is the Bayesian method, but the Bayes estimator cannot be found analytically. To overcome this problem, the reversible jump MCMC algorithm is proposed. The reversible jump MCMC algorithm generates a Markov chain that converges to the limiting distribution of the posterior distribution of the parameters of the piecewise linear regression models. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise linear regression models.
Keywords: regression, piecewise, Bayesian, reversible jump MCMC
Procedia PDF Downloads 521
17055 Implementation of an Associative Memory Using a Restricted Hopfield Network
Authors: Tet H. Yeap
Abstract:
An analog restricted Hopfield network is presented in this paper. It consists of two layers of nodes, visible and hidden nodes, connected by directional weighted paths forming a bipartite graph with no intralayer connections. An energy or Lyapunov function was derived to show that the proposed network will converge to stable states. By introducing hidden nodes, the proposed network can be trained to store patterns and has increased memory capacity. Trained as an associative memory, simulation results show that it performs better than a classical Hopfield network, being able to achieve better memory recall when the input is noisy.
Keywords: restricted Hopfield network, Lyapunov function, simultaneous perturbation stochastic approximation
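The classical Hopfield baseline against which the noisy-recall performance is compared can be sketched in a few lines: Hebbian storage of bipolar patterns and asynchronous threshold updates. This is only the baseline under stated assumptions; the restricted bipartite variant with hidden nodes is not reproduced here.

```python
import numpy as np

# Classical Hopfield associative-memory baseline: Hebbian storage, asynchronous updates.
rng = np.random.default_rng(8)
patterns = rng.choice([-1, 1], size=(3, 64))          # three stored bipolar patterns

W = sum(np.outer(p, p) for p in patterns) / patterns.shape[1]
np.fill_diagonal(W, 0)                                # no self-connections

def recall(state, steps=5):
    state = state.copy()
    for _ in range(steps):
        for i in rng.permutation(len(state)):         # asynchronous updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt a stored pattern by flipping about 15% of its bits, then recall it.
probe = patterns[0].copy()
flip = rng.choice(64, size=10, replace=False)
probe[flip] *= -1

recovered = recall(probe)
print("bits matching the stored pattern:", int((recovered == patterns[0]).sum()), "/ 64")
```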
Procedia PDF Downloads 134
17054 Analysis of Detection Concealed Objects Based on Multispectral and Hyperspectral Signatures
Authors: M. Kastek, M. Kowalski, M. Szustakowski, H. Polakowski, T. Sosnowski
Abstract:
Development of highly efficient security systems is one of the most urgent topics for science and engineering. There are many kinds of threats and many methods of prevention. It is very important to detect a threat as early as possible in order to neutralize it. One of the most challenging problems is the detection of dangerous objects hidden under a person's clothing. This problem is particularly important for the safety of airport passengers. In order to develop methods and algorithms to detect hidden objects, it is necessary to determine the thermal signatures of such objects of interest. Laboratory measurements were conducted to determine the thermal signatures of dangerous tools hidden under various clothes in different ambient conditions. The cameras used for the measurements operated in the spectral range 0.6-12.5 μm. An infrared imaging Fourier transform spectroradiometer was also used, working in the spectral range 7.7-11.7 μm. Analysis of the registered thermograms and hyperspectral datacubes has yielded the thermal signatures for two types of guns, two types of knives and home-made explosive bombs. The determined thermal signatures will be used in the development of image analysis methods and algorithms implemented in the proposed monitoring systems.
Keywords: hyperspectral detection, multispectral detection, image processing, monitoring systems
Procedia PDF Downloads 349
17053 Availability Analysis of Milling System in a Rice Milling Plant
Authors: P. C. Tewari, Parveen Kumar
Abstract:
The paper describes the availability analysis of the milling system of a rice milling plant using a probabilistic approach. The subsystems under study are special purpose machines. The availability analysis of the system is carried out to determine the effect of the failure and repair rates of each subsystem on the overall performance (i.e., steady-state availability) of the system concerned. Further, on the basis of the effect of repair rates on system availability, maintenance repair priorities have been suggested. The problem is formulated using a Markov birth-death process, taking exponential distributions for failure and repair rates. The first-order differential equations associated with the transition diagram are developed using the mnemonic rule. These equations are solved using normalizing conditions and a recursive method to derive the steady-state availability expression of the system. The findings of the paper are presented and discussed with the plant personnel to adopt a suitable maintenance policy to increase the productivity of the rice milling plant.
Keywords: availability modeling, Markov process, milling system, rice milling plant
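Steady-state availability for this kind of Markov model can be sketched by solving the balance equations πQ = 0 with the normalizing condition Σπ = 1 and summing the probabilities of the operative states. The three-state generator below, with hypothetical failure and repair rates, is only an illustration, not the plant's transition diagram.

```python
import numpy as np

# Steady-state availability from a small continuous-time Markov model:
# solve pi @ Q = 0 with sum(pi) = 1. Rates are hypothetical.
lam1, lam2 = 0.02, 0.03     # failure rates of two subsystems (per hour)
mu1, mu2 = 0.50, 0.40       # corresponding repair rates (per hour)

# States: 0 = system working, 1 = subsystem 1 down, 2 = subsystem 2 down.
# Either failure stops the milling system in this simplified sketch.
Q = np.array([
    [-(lam1 + lam2), lam1,  lam2],
    [ mu1,          -mu1,   0.0 ],
    [ mu2,           0.0,  -mu2 ],
])

# Replace one balance equation with the normalization constraint.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

availability = pi[0]                      # only state 0 is operative here
print("steady-state probabilities:", pi.round(4))
print("steady-state availability:", round(float(availability), 4))
```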
Procedia PDF Downloads 235
17052 Quantum Statistical Machine Learning and Quantum Time Series
Authors: Omar Alzeley, Sergey Utev
Abstract:
Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization. This optimization is central to learning theory. Time series analysis is one approach to complex systems in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. First, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool to detect deterministic chaos, other approaches are emerging. The quantum probabilistic technique is used to motivate the construction of our QTS model. The QTS model resembles the quantum dynamic model, which has been applied to financial data. Second, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyze the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo have been used to support our investigations. The proposed model has been examined using real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of the application of QTS in the field of quantum chaos has been to find a model that explains chaotic behaviour. This model may reveal further insights into quantum chaos.
Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series
Procedia PDF Downloads 469
17051 A Bayesian Multivariate Microeconometric Model for Estimation of Price Elasticity of Demand
Authors: Jefferson Hernandez, Juan Padilla
Abstract:
Estimation of the price elasticity of demand is a valuable tool for the task of price setting. Given its relevance, it is an active field of microeconomic and statistical research. Price elasticity in the oil and gas industry, in particular for fuels sold at gas stations, has proven to be a challenging topic, given market and state restrictions and the underlying correlation structures between the types of fuels sold by the same gas station. This paper explores the Lotka-Volterra model for the problem of price elasticity estimation in the context of fuels; in addition, multivariate random effects are introduced to deal with errors, e.g., measurement or missing-data errors. In order to model the underlying correlation structures, the Inverse-Wishart, Hierarchical Half-t and LKJ distributions are studied. Here, the Bayesian paradigm, through Markov chain Monte Carlo (MCMC) algorithms for model estimation, is considered. Simulation studies covering a wide range of situations were performed in order to evaluate parameter recovery for the proposed models and algorithms. The results revealed that the proposed algorithms recovered all model parameters quite well. A real data set analysis was also performed in order to illustrate the proposed approach.
Keywords: price elasticity, volume, correlation structures, Bayesian models
Procedia PDF Downloads 166
17050 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable
Authors: Xinyuan Y. Song, Kai Kang
Abstract:
Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold because unobservable traits, namely latent variables, which are indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach to deal with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of the latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is adopted to perform statistical inference. An application of the proposed joint model to a study from the Alzheimer's Disease Neuroimaging Initiative is presented.
Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data
Procedia PDF Downloads 145