Search results for: Bayesian hierarchical models

7077 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses

Authors: André Jesus, Yanjie Zhu, Irwanda Laory

Abstract:

Structural health monitoring (SHM) is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. In particular, the widespread application of numerical models (model-based approaches) is accompanied by widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and the discrepancy function are approximated by Gaussian processes (surrogate model). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (modular Bayesian approach). The methodology has been successfully applied in fields such as geoscience, biomedicine, and particle physics, but never in the SHM context. This approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, the formulation was extended to multiple responses. The efficiency of the algorithm was tested on a small-scale aluminium bridge structure subjected to thermal expansion induced by infrared heaters. Its performance is compared across responses measured at different points of the structure, together with the associated degrees of identifiability. A numerical FEM model of the structure was developed, and the stiffness of its supports is treated as the parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also mitigate the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability, and systematic uncertainty were all recovered. For this example, the algorithm was stable and considerably quicker than Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
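
A minimal sketch of the surrogate-plus-discrepancy idea described above (Kennedy-O'Hagan style), assuming synthetic data and scikit-learn; the one-parameter simulator, stage layout, and noise levels are illustrative stand-ins, not the authors' SHM implementation:

```python
# Sketch: GP surrogate for the numerical model + GP discrepancy function,
# with a modular (stage-wise) point estimate of the calibration parameter.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def simulator(x, theta):                       # stand-in for the FEM model
    return np.sin(3 * x) * theta

# Stage 1: fit a GP surrogate to simulator runs over (x, theta).
X_sim = rng.uniform(0, 1, size=(40, 2))        # columns: input x, parameter theta
y_sim = simulator(X_sim[:, 0], X_sim[:, 1])
surrogate = GaussianProcessRegressor(RBF([0.2, 0.2]),
                                     normalize_y=True).fit(X_sim, y_sim)

# Field data: true theta = 0.6 plus a systematic (model-form) offset.
x_obs = np.linspace(0, 1, 15)
y_obs = simulator(x_obs, 0.6) + 0.3 * x_obs + rng.normal(0, 0.02, 15)

# Stage 2: point-estimate theta on a grid with surrogate hyperparameters
# frozen (the modular shortcut; the misfit omits log-determinant terms).
def misfit(theta):
    mu, sd = surrogate.predict(
        np.column_stack([x_obs, np.full(x_obs.size, theta)]), return_std=True)
    return np.sum((y_obs - mu) ** 2 / (sd ** 2 + 0.02 ** 2))

grid = np.linspace(0, 1, 101)
theta_hat = grid[np.argmin([misfit(t) for t in grid])]

# Stage 3: fit a GP to the residuals as the discrepancy function.
mu_hat = surrogate.predict(np.column_stack([x_obs, np.full(x_obs.size, theta_hat)]))
discrepancy = GaussianProcessRegressor(RBF(0.3) + WhiteKernel(1e-4),
                                       normalize_y=True).fit(x_obs[:, None],
                                                             y_obs - mu_hat)
print(f"theta_hat = {theta_hat:.2f}")          # near 0.6, up to discrepancy bias
```

Freezing the surrogate's hyperparameters after stage 1 is what keeps the cost low relative to a fully Bayesian treatment, at the price of neglecting second-order uncertainty.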

Keywords: Bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process

Procedia PDF Downloads 307
7076 Bayesian Optimization for Reaction Parameter Tuning: An Exploratory Study of Parameter Optimization in Oxidative Desulfurization of Thiophene

Authors: Aman Sharma, Sonali Sengupta

Abstract:

The study explores the utility of Bayesian optimization in tuning the physical and chemical parameters of reactions in an offline experimental setup. A comparative analysis of the influence of the acquisition function on optimization performance is also presented. For proxy first- and second-order reactions, the results are indifferent to the acquisition function used, whereas, when studying the parameters for oxidative desulfurization of thiophene in an offline setup, the upper confidence bound (UCB) provides faster convergence along with a marginal trade-off in the maximum conversion achieved. The work also demarcates the critical number of independent parameters and input observations required for both sequential and offline reaction setups to yield tangible results.
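
A toy Bayesian-optimization loop with a UCB acquisition function, assuming a synthetic one-dimensional "conversion" response and scikit-learn; the kappa value and the reaction function are invented for illustration:

```python
# UCB-driven Bayesian optimization sketch over one normalized parameter.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
def conversion(temp):             # hypothetical noisy reaction response
    return np.exp(-(temp - 0.7) ** 2 / 0.02) + rng.normal(0, 0.01)

X = rng.uniform(0, 1, (3, 1))     # three initial experiments
y = np.array([conversion(x[0]) for x in X])
grid = np.linspace(0, 1, 200)[:, None]

for _ in range(10):
    gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    ucb = mu + 2.0 * sd           # kappa = 2 balances exploration/exploitation
    x_next = grid[np.argmax(ucb)]
    X = np.vstack([X, [x_next]])
    y = np.append(y, conversion(x_next[0]))

print(f"best setting ~ {X[np.argmax(y)][0]:.3f}, best conversion ~ {y.max():.3f}")
```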

Keywords: acquisition function, Bayesian optimization, desulfurization, kinetics, thiophene

Procedia PDF Downloads 159
7075 Bayesian Estimation Using Markov Chain Monte Carlo and Lindley's Approximation Based on Type-I Censored Data

Authors: Al Omari Moahmmed Ahmed

Abstract:

This paper describes the Bayesian estimator using Markov chain Monte Carlo and Lindley's approximation, together with maximum likelihood estimation, for the Weibull distribution with Type-I censored data. The maximum likelihood method cannot estimate the shape parameter in closed form, although it can be obtained by numerical methods. Moreover, the Bayesian estimates of the parameters and of the survival and hazard functions cannot be derived analytically. Hence, the Markov chain Monte Carlo method and Lindley's approximation are used: the full conditional distributions for the parameters of the Weibull distribution are sampled via Gibbs sampling and the Metropolis-Hastings algorithm (MH), followed by estimation of the survival and hazard functions. The methods are compared to their maximum likelihood counterparts with respect to mean square error (MSE) and absolute bias, to determine which performs better for the scale and shape parameters and for the survival and hazard functions.
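
A random-walk Metropolis sketch for the Weibull model with Type-I censoring, assuming flat priors on the log-parameters and simulated lifetimes; it illustrates the censored likelihood rather than the paper's Gibbs/Lindley implementation:

```python
# Metropolis sampler for Weibull (shape k, scale lam) under Type-I censoring.
import numpy as np

rng = np.random.default_rng(2)
c = 2.0                                        # fixed censoring time (Type-I)
t_full = rng.weibull(1.5, 200) * 1.8           # latent lifetimes, k=1.5, lam=1.8
obs = np.minimum(t_full, c)
event = t_full <= c                            # True if failure observed

def log_lik(log_k, log_lam):
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = obs / lam
    ll_obs = np.log(k / lam) + (k - 1) * np.log(z) - z**k   # density for failures
    ll_cen = -z**k                                          # log S(c) for censored
    return np.sum(np.where(event, ll_obs, ll_cen))

theta, chain = np.zeros(2), []                 # theta = (log k, log lam)
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)
    if np.log(rng.uniform()) < log_lik(*prop) - log_lik(*theta):
        theta = prop
    chain.append(theta)
post = np.exp(np.array(chain[5000:]))          # discard burn-in
print("posterior means (k, lam):", post.mean(axis=0).round(3))
# Posterior draws of S(t) = exp(-(t/lam)^k) and the hazard follow directly.
```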

Keywords: Weibull distribution, Bayesian method, Markov chain Monte Carlo, survival and hazard functions

Procedia PDF Downloads 457
7074 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold, because unobservable traits, namely latent variables, which are indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach that accommodates this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is developed to perform statistical inference. An application of the proposed joint model to a study from the Alzheimer's Disease Neuroimaging Initiative is presented.

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 121
7073 A Model for Solid Transportation Problem with Three Hierarchical Objectives under Uncertain Environment

Authors: Wajahat Ali, Shakeel Javaid

Abstract:

In this study, we develop a mathematical programming model for a solid transportation problem with three objective functions arranged in hierarchical order. A mathematical programming model with more than one objective function to be solved in hierarchical order is termed a multi-level programming model. Our study explores a Multi-Level Solid Transportation Problem with Uncertain Parameters (MLSTPWU). The proposed MLSTPWU model consists of three objective functions, viz. minimization of transportation cost, minimization of total transportation time, and minimization of deterioration during transportation. These three objective functions are to be optimized by decision-makers at three consecutive levels. Three constraint functions are added to the model, restricting the total availability, the total demand, and the capacity of the modes of transportation. All parameters involved in the model are assumed to be uncertain in nature. A solution method based on fuzzy logic is also discussed to obtain a compromise solution for the proposed model. Further, a simulated numerical example is discussed to establish the efficiency and applicability of the proposed model.
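
A sketch of the level-by-level (lexicographic) scheme on a tiny crisp instance using scipy, where each level's optimum is frozen as a constraint before the next level is solved; the fuzzy/uncertain treatment in the paper is not reproduced here:

```python
# Lexicographic multi-level solution of a 2x2 transportation instance.
import numpy as np
from scipy.optimize import linprog

cost = np.array([4, 6, 5, 3], float)     # level 1: transportation cost
time = np.array([2, 1, 3, 2], float)     # level 2: transportation time
det  = np.array([1, 2, 1, 4], float)     # level 3: deterioration

# Two sources and two destinations; x = flows source -> destination.
A_eq = np.array([[1, 1, 0, 0],           # supply of source 1
                 [0, 0, 1, 1],           # supply of source 2
                 [1, 0, 1, 0],           # demand of destination 1
                 [0, 1, 0, 1]], float)   # demand of destination 2
b_eq = np.array([30, 40, 35, 35], float)

A_ub, b_ub = None, None
for level, obj in enumerate([cost, time, det], start=1):
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    print(f"level {level}: objective = {res.fun:.1f}")
    # Freeze this level's optimum (small tolerance for solver round-off)
    # as a constraint before handing the problem to the next level.
    row = obj[None, :]
    A_ub = row if A_ub is None else np.vstack([A_ub, row])
    b_ub = [res.fun + 1e-6] if b_ub is None else b_ub + [res.fun + 1e-6]
```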

Keywords: solid transportation problem, multi-level programming, uncertain variable, uncertain environment

Procedia PDF Downloads 62
7072 Facile Hydrothermal Synthesis of Hierarchical NiO/ZnCo₂O₄ Nanocomposite for High-Energy Supercapacitor Applications

Authors: Fayssal Ynineb, Toufik Hadjersi, Fatsah Moulai, Wafa Achour

Abstract:

Currently, tremendous attention is being paid to the rational design and synthesis of core/shell heterostructures for high-performance supercapacitors. In this study, hierarchical NiO/ZnCo₂O₄ core-shell nanorod arrays were successfully deposited onto an ITO substrate via a two-step hydrothermal and electrodeposition method. The effect of a thin carbon layer between NiO and ZnCo₂O₄ in this multi-scale hierarchical structure was investigated. The structure was selected based on: (i) the high specific area of pseudo-capacitive NiO, to maximize specific capacitance; (ii) an effective NiO-electrolyte interface, to facilitate fast charging/discharging; and (iii) a conducting carbon layer between ZnCo₂O₄ and NiO, which enhances the electrical conductivity, reducing energy loss, and protects the ZnCo₂O₄ against corrosion in the alkaline electrolyte. The results indicate that hierarchical NiO/ZnCo₂O₄ presents a high specific capacitance of 63 mF·cm⁻² at a current density of 0.05 mA·cm⁻², higher than that of pristine NiO and ZnCo₂O₄ (6 and 3 mF·cm⁻², respectively). The carbon layer improves the electrical conductivity between NiO and ZnCo₂O₄ in the hierarchical NiO/C/ZnCo₂O₄ electrode, and the specific capacitance increases drastically to reach 125 mF·cm⁻². Moreover, this multi-scale hierarchical structure exhibits superior cycling stability, with ~95.7% capacitance retention after 65k cycles. These results indicate that the NiO/C/ZnCo₂O₄ nanocomposite is an outstanding electrode material for supercapacitors.

Keywords: NiO/C/ZnCo₂O₄, specific capacitance, hydrothermal, supercapacitors

Procedia PDF Downloads 79
7071 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that does not mask as an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and make much fuller use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in clustering and classification and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is reflected by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in a finite mixture model, however, is practically difficult, even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the choice of the number of components. The Chinese restaurant process (CRP), the stick-breaking process, and the Pólya urn scheme are frequently used constructions of the Dirichlet process prior in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
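
A minimal Chinese restaurant process sketch showing how the partition (and hence the number of mixture components) grows without being fixed in advance; in the paper's setting each "table" would carry its own regression line for the stutter ratio:

```python
# CRP seating sketch: the prior over partitions induced by a Dirichlet process.
import numpy as np

rng = np.random.default_rng(3)
alpha = 1.0                     # concentration parameter
tables = []                     # table occupancy counts

for n in range(200):            # seat 200 customers sequentially
    probs = np.array(tables + [alpha], float)
    probs /= n + alpha          # P(existing table k) = n_k / (n + alpha)
    k = rng.choice(len(probs), p=probs)
    if k == len(tables):
        tables.append(1)        # new table opened with P = alpha / (n + alpha)
    else:
        tables[k] += 1

print("number of clusters:", len(tables))
print("cluster sizes:", sorted(tables, reverse=True))
```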

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 307
7070 Diagonal Vector Autoregressive Models and Their Properties

Authors: Usoro Anthony E., Udoh Emediong

Abstract:

Diagonal Vector Autoregressive Models are special classes of the general vector autoregressive (VAR) models, identified under certain conditions, in which parameters are restricted to the diagonal elements of the coefficient matrices. The variance, autocovariance, and autocorrelation properties of the upper and lower diagonal VAR models are derived. The new set of VAR models is verified with empirical data and is found to perform favourably compared with the general VAR models. The advantage of the diagonal models over the existing models is that the new models are parsimonious, given the reduction in the interactive coefficients of the general VAR models.
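
A quick simulation sketch of a bivariate diagonal VAR(1), showing that each series reduces to its own AR(1) so the diagonal coefficients can be recovered equation by equation; the data are synthetic:

```python
# Simulate y_t = A y_{t-1} + e_t with A restricted to its diagonal.
import numpy as np

rng = np.random.default_rng(4)
A = np.diag([0.6, -0.3])                 # diagonal coefficient matrix
y = np.zeros((500, 2))
for t in range(1, 500):
    y[t] = A @ y[t - 1] + rng.normal(0, 1, 2)

# With a diagonal A, each series depends only on its own lag, so the model
# collapses to independent AR(1) fits (the parsimony advantage).
for i in range(2):
    x, z = y[:-1, i], y[1:, i]
    print(f"a_{i}{i} estimate: {np.dot(x, z) / np.dot(x, x):+.3f}")
```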

Keywords: VAR models, diagonal VAR models, variance, autocovariance, autocorrelations

Procedia PDF Downloads 90
7069 Probabilistic Approach to Contrast Theoretical Predictions from a Public Corruption Game Using Bayesian Networks

Authors: Jaime E. Fernandez, Pablo J. Valverde

Abstract:

This paper presents a methodological approach that aims to contrast and validate theoretical results from a corruption network game through probabilistic analysis of simulated microdata using Bayesian networks (BNs). The research develops a public corruption model in a game-theory framework. Theoretical results suggest a series of 'optimal settings' of the model's exogenous parameters that boost the emergence of corruption. The paper contrasts these outcomes with probabilistic inference results based on BNs adjusted over simulated microdata. The principal finding is that probabilistic reasoning based on BNs significantly improves parameter specification and causal analysis in a public corruption game.

Keywords: Bayesian networks, probabilistic reasoning, public corruption, theoretical games

Procedia PDF Downloads 185
7068 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data

Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates

Abstract:

Several spatial variables collected at the same locations and sharing a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account both the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a geostatistical multivariate formulation that relies on shared common spatial random effect terms. In particular, the first response variable can be modeled by a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term, in addition to specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function; however, to improve computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically by the block nearest neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross blocks capture the spatial dependence at a large scale, while each individual block captures the spatial dependence at a smaller scale. The multivariate geostatistical model belongs to the class of latent Gaussian models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.

Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models

Procedia PDF Downloads 73
7067 A Safety Analysis Method for Multi-Agent Systems

Authors: Ching Louis Liu, Edmund Kazmierczak, Tim Miller

Abstract:

Safety analysis for multi-agent systems is complicated by the potentially nonlinear interactions between agents. This paper proposes a method for analyzing the safety of multi-agent systems by explicitly focusing on interactions and on the accident data of systems that are similar in structure and function to the system being analyzed. The method creates a Bayesian network using the accident data from similar systems. A feature of our method is that the events in the accident data are labeled with HAZOP guide words. Our method uses an ontology to abstract away from the details of a multi-agent implementation. Using the ontology, our method then constructs an 'interaction map', a graphical representation of the patterns of interactions between agents and other artifacts. Interaction maps, combined with statistical data from accidents and the HAZOP classifications of events, can be converted into a Bayesian network. Bayesian networks allow designers to explore 'what if' scenarios and make design trade-offs that maintain safety. We show how to use the Bayesian networks and the interaction maps to improve multi-agent system designs.

Keywords: multi-agent system, safety analysis, safety model, interaction map

Procedia PDF Downloads 395
7066 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal water level caused by a storm, and its accurate prediction is a challenging problem. Researchers have developed various ensemble modeling techniques that combine several individual forecasts to produce an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature. For instance, Model Output Statistics (MOS) and running mean-bias removal are widely used techniques in the storm surge prediction domain. However, these methods have drawbacks; for instance, MOS is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that creates a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether ensemble models perform better than any single forecast, which requires identifying the single best forecast; we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights in combining different forecast models. Third, we use these ensembles to forecast storm surge levels and compare them with several existing models in the literature. We then investigate whether developing a complex ensemble model is indeed needed; to this end, we use a simple average (one of the simplest and most widely used ensemble models) as the benchmark. Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial, so we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its timing. In this work, we analyze four hurricanes: Hurricanes Irene and Lee in 2011, Hurricane Sandy in 2012, and Hurricane Joaquin in 2015. Since Hurricane Irene developed at the end of August 2011 and Hurricane Lee started just after Irene at the beginning of September 2011, we treat them as a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
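
A small sketch of the correlation-weighted ensemble idea with synthetic stand-ins for the member forecasts (not the NYHOPS data), compared against the simple-average benchmark by RMSE:

```python
# Correlation-weighted ensemble vs. simple average on synthetic surge series.
import numpy as np

rng = np.random.default_rng(5)
truth = np.sin(np.linspace(0, 12, 240)) + 0.1 * rng.normal(size=240)
members = np.stack([truth + rng.normal(0, s, 240) + b        # 3 member models
                    for s, b in [(0.2, 0.05), (0.4, -0.1), (0.3, 0.2)]])

train, test = slice(0, 160), slice(160, 240)
rmse = lambda f: np.sqrt(np.mean((f - truth[test]) ** 2))

# Weight each member by its correlation with observations over training.
w = np.array([np.corrcoef(m[train], truth[train])[0, 1] for m in members])
w /= w.sum()
print("simple average RMSE:", rmse(members[:, test].mean(axis=0)).round(3))
print("corr-weighted RMSE :", rmse(w @ members[:, test]).round(3))
```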

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 290
7065 The Use of Layered Neural Networks for Classifying Hierarchical Scientific Fields of Study

Authors: Colin Smith, Linsey S Passarella

Abstract:

Due to the proliferation and decentralized nature of academic publication, no widely accepted scheme exists, to the authors' best knowledge, for organizing papers by their scientific field of study (FoS). While many academic journals require author-provided keywords for papers, these keywords vary wildly in scope and are not consistent across papers, journals, or field domains, necessitating alternative approaches to paper classification. Past attempts at FoS classification of scientific texts have largely used non-hierarchical FoS schemas or ignored the schema's inherently hierarchical structure, e.g., by compressing the structure into a single layer for multi-label classification. In this paper, we introduce an application of a layered neural network (LNN) to the problem of supervised hierarchical classification of research papers by scientific field of study. In this approach, paper embeddings from a pretrained language model are fed into a top-down LNN. Beginning with a single neural network (NN) for the highest layer of the class hierarchy, each node uses a separate local NN to classify the subsequent subfield child node(s) for an input embedding of concatenated paper titles and abstracts. We compare our LNN-FoS method to other recent machine learning methods using the Microsoft Academic Graph (MAG) FoS hierarchy and find that LNN-FoS offers increased classification accuracy at each FoS hierarchical level.
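
A top-down per-node classification sketch, assuming toy features in place of the pretrained-language-model embeddings and logistic regressions in place of the local neural networks:

```python
# Root classifier picks the field; a per-field local classifier picks the subfield.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
top = rng.integers(0, 2, 400)                       # level 1: e.g. CS vs. Physics
sub = top * 2 + rng.integers(0, 2, 400)             # level 2: two subfields each
X = rng.normal(size=(400, 16)) + top[:, None] + 0.5 * sub[:, None]  # fake embeddings

root_clf = LogisticRegression(max_iter=1000).fit(X, top)
local_clf = {t: LogisticRegression(max_iter=1000).fit(X[top == t], sub[top == t])
             for t in (0, 1)}

def predict_path(x):
    t = root_clf.predict(x[None])[0]                # level-1 field
    return t, local_clf[t].predict(x[None])[0]      # level-2 subfield from local model

print(predict_path(X[0]))
```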

Keywords: hierarchical classification, layered neural network, scientific field of study, scientific taxonomy

Procedia PDF Downloads 110
7064 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models

Authors: Suriya

Abstract:

Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia, and PM₂.₅ pollution has become its most pressing aspect. Therefore, monitoring and predicting PM₂.₅ concentration in UB is of great significance for the health of the local people and for environmental management. As yet, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we propose two deep learning models, based on a Bayesian-optimized LSTM (Bayes-LSTM) and a CNN-LSTM. We utilized hourly observed data, including Himawari-8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as inputs for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analyzed using the gray correlation analysis method; the performance improvement obtained by including AOD as an input was tested; and the performance of the models was evaluated using mean absolute error (MAE) and root mean square error (RMSE). The prediction accuracies of both the Bayes-LSTM and CNN-LSTM deep learning models improved when AOD was included as an input parameter. The improvement in the prediction accuracy of the CNN-LSTM model was particularly pronounced in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model improved slightly, while that of the CNN-LSTM model decreased slightly. This work proposes two novel deep learning models for PM₂.₅ concentration prediction in UB, pioneers the use of H8 AOD data for this purpose, and demonstrates that including AOD input data improves the performance of both proposed models.
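
A minimal CNN-LSTM sketch in Keras for one-step PM₂.₅ prediction from a 24-hour window of meteorology-plus-AOD features; the shapes, layer sizes, and dummy data are assumptions, not the paper's architecture:

```python
# Conv1D extracts local temporal patterns; the LSTM carries longer memory.
import numpy as np
import tensorflow as tf

n_steps, n_features = 24, 6            # 24 hourly steps, 6 inputs incl. AOD (assumed)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_steps, n_features)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),          # PM2.5 concentration at the next hour
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

X = np.random.rand(128, n_steps, n_features).astype("float32")   # dummy batch
y = np.random.rand(128, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```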

Keywords: deep learning, AOD, PM₂.₅, prediction, Ulaanbaatar

Procedia PDF Downloads 25
7063 Networked Implementation of Milling Stability Optimization with Bayesian Learning

Authors: Christoph Ramsauer, Jaydeep Karandikar, Tony Schmitz, Friedrich Bleicher

Abstract:

Machining stability is an important limitation in discrete part machining. In this work, a networked implementation of milling stability optimization with Bayesian learning is presented. The milling process was monitored with a wireless sensory tool holder instrumented with an accelerometer at the Vienna University of Technology, Vienna, Austria. The recorded data from a milling test cut are used to classify the cut as stable or unstable based on frequency analysis. The test cut result is fed to a Bayesian stability learning algorithm at the University of Tennessee, Knoxville, Tennessee, USA. The algorithm calculates the probability of stability as a function of axial depth of cut and spindle speed and recommends the parameters for the next test cut. The iterative process between the two transatlantic locations repeats until convergence to a stable optimal process parameter set is achieved.
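
A grid-based sketch of the stability-learning loop, assuming a Beta-Bernoulli belief per (speed, depth) cell and the usual lobe-diagram monotonicity in depth; the parameter ranges, the 0.9 threshold, and the test-cut oracle are invented, not the authors' algorithm:

```python
# Iterative test-cut loop: update stability belief, recommend the next cut.
import numpy as np

speeds = np.linspace(6000, 12000, 13)          # spindle speed grid, rpm (assumed)
depths = np.linspace(0.5, 5.0, 10)             # axial depth-of-cut grid, mm (assumed)
a = np.ones((13, 10)); b = np.ones((13, 10))   # Beta(1,1) stability belief per cell

def cut_is_stable(i, j):                       # stand-in for the monitored test cut
    return depths[j] < 2.0 + 1.5 * np.sin(speeds[i] / 900)

for _ in range(25):                            # 25 iterations of the networked loop
    p = a / (a + b)                            # current P(stable) per cell
    score = p * depths[None, :]                # favor deep cuts likely to be stable
    i, j = np.unravel_index(np.argmax(score), score.shape)
    if cut_is_stable(i, j):
        a[i, :j + 1] += 1                      # stable: also credits shallower depths
    else:
        b[i, j:] += 1                          # unstable: also discredits deeper depths

p = a / (a + b)
i, j = np.unravel_index(np.argmax(np.where(p > 0.9, depths[None, :], 0.0)), p.shape)
print(f"recommended: {speeds[i]:.0f} rpm at {depths[j]:.1f} mm depth")
```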

Keywords: machining stability, machine learning, sensor, optimization

Procedia PDF Downloads 186
7062 Development of Computational Approach for Calculation of Hydrogen Solubility in Hydrocarbons for Treatment of Petroleum

Authors: Abdulrahman Sumayli, Saad M. AlShahrani

Abstract:

For the hydrogenation process, knowing the solubility of hydrogen (H₂) in hydrocarbons is critical to improving the efficiency of the process. We investigated the computation of H₂ solubility in four heavy crude oil feedstocks using machine learning techniques. Temperature, pressure, and feedstock type were considered as the inputs to the models, while the hydrogen solubility was the sole response. Specifically, we employed three different models: support vector regression (SVR), Gaussian process regression (GPR), and Bayesian ridge regression (BRR). To achieve the best performance, the hyper-parameters of these models were optimized using the whale optimization algorithm (WOA). We evaluated the models on a dataset of solubility measurements in various feedstocks and compared their performance based on several metrics. Our results show that the SVR model tuned with WOA achieves the best overall performance, with an RMSE of 1.38 × 10⁻² and an R-squared of 0.991. These findings suggest that machine learning techniques can provide accurate predictions of hydrogen solubility in different feedstocks, which could be useful in the development of hydrogen-related technologies. In addition, the solubility of hydrogen in the four heavy oil fractions is estimated over temperature and pressure ranges of 150 °C–350 °C and 1.2 MPa–10.8 MPa, respectively.

Keywords: temperature, pressure variations, machine learning, oil treatment

Procedia PDF Downloads 47
7061 Cluster Analysis of Retailers’ Benefits from Their Cooperation with Manufacturers: Business Models Perspective

Authors: M. K. Witek-Hajduk, T. M. Napiórkowski

Abstract:

A number of studies have discussed the benefits of retailer-manufacturer cooperation and coopetition. However, there are only a few publications focused on the benefits of cooperation and coopetition between retailers and their suppliers of durable consumer goods, especially in the context of the business models of the cooperating partners. This paper provides a clustering approach that segments retailers selling consumer durables according to the benefits they obtain from their cooperation with key manufacturers and differentiates these retailers in terms of the business models of the cooperating partners. For the purpose of the study, a survey (using the CATI method) collected data on 603 consumer durables retailers present on the Polish market. Retailers are clustered with both hierarchical and non-hierarchical methods. Five distinct groups of consumer durables retailers are identified, based on the studied benefits, using a two-stage clustering approach. The clusters are then characterized with a set of exogenous variables, key among which are the business models employed by the retailer and its partnering key manufacturer. The paper finds that the combination of a medium-sized retailer classified as an Integrator, with chiefly domestic capital, and a manufacturer categorized as a Market Player yields the highest benefits. On the other side of the spectrum is the medium-sized Distributor retailer with solely domestic capital; in this case, the business model of the cooperating manufacturer appears to be irrelevant. This paper is one of the first empirical studies using cluster analysis on primary data to define the types of cooperation between consumer durables retailers and manufacturers, their key suppliers. The analysis integrates the perspectives of both retailers' and manufacturers' business models and matches them with individual and joint benefits.
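
A two-stage clustering sketch (Ward hierarchical clustering to fix the number of segments, then k-means for the final assignment), with synthetic benefit scores standing in for the survey data:

```python
# Stage 1: hierarchical clustering suggests k; stage 2: k-means assigns segments.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
benefits = np.vstack([rng.normal(m, 0.5, (120, 4))      # ~600 retailers x 4 benefit
                      for m in (0, 2, 4, 6, 8)])        # scores, 5 latent groups

Z = linkage(benefits, method="ward")                    # stage 1: dendrogram
labels_hier = fcluster(Z, t=5, criterion="maxclust")    # cut into 5 clusters
k = len(np.unique(labels_hier))

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(benefits)  # stage 2
print("cluster sizes:", np.bincount(km.labels_))
```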

Keywords: benefits of cooperation, business model, cluster analysis, retailer-manufacturer cooperation

Procedia PDF Downloads 238
7060 Effects of Hierarchy on Poisson’s Ratio and Phononic Bandgaps of Two-Dimensional Honeycomb Structures

Authors: Davood Mousanezhad, Ashkan Vaziri

Abstract:

As a traditional cellular structure, hexagonal honeycombs are known for their high strength-to-weight ratio. Here, we introduce a class of fractal-appearing hierarchical metamaterials by replacing the vertices of the original non-hierarchical hexagonal grid with smaller hexagons and iterating this process to achieve higher levels of hierarchy. It has recently been shown that the isotropic in-plane Young's modulus of this hierarchical structure at small deformations becomes 25 times greater than that of its regular counterpart with the same mass. At large deformations, we find that hierarchy-dependent elastic buckling introduced at relatively early stages of deformation decreases the value of Poisson's ratio as the structure is compressed uniaxially, leading to auxeticity (i.e., negative Poisson's ratio) in subsequent stages of deformation. We also show that the topological hierarchical architecture and the instability-induced pattern transformations of the structure under compression can be effectively used to tune the propagation of elastic waves within the structure. We find that hierarchy tends to shift the existing phononic bandgaps (defined as frequency ranges of strong wave attenuation) to lower frequencies while opening up new bandgaps. Deformation is also demonstrated as another mechanism for opening more bandgaps in hierarchical structures. The results provide new insights into the role of structural organization and hierarchy in regulating the mechanical properties of materials in both the static and dynamic regimes.

Keywords: cellular structures, honeycombs, hierarchical structures, metamaterials, multifunctional structures, phononic crystals, auxetic structures

Procedia PDF Downloads 328
7059 A Probabilistic View of the Spatial Pooler in Hierarchical Temporal Memory

Authors: Mackenzie Leake, Liyu Xia, Kamil Rocki, Wayne Imaino

Abstract:

In the Hierarchical Temporal Memory (HTM) paradigm, the effect of overlap between inputs on the activation of columns in the spatial pooler is studied. Numerical results suggest that similar inputs are represented by similar sets of columns and dissimilar inputs by dissimilar sets of columns. It is shown that the spatial pooler produces these results under certain conditions on the connectivity and proximal thresholds. Following a discussion of the initialization of the threshold parameters, corresponding qualitative arguments about the learning dynamics of the spatial pooler are presented.
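
An overlap-and-top-k activation sketch in the spirit of the spatial pooler, with assumed sizes and thresholds, checking that similar inputs map to similar column sets:

```python
# Columns with random binary proximal connections; the k highest-overlap win.
import numpy as np

rng = np.random.default_rng(8)
n_in, n_cols, k = 256, 512, 20
connected = rng.random((n_cols, n_in)) < 0.3       # binary proximal synapses

def active_columns(x):
    overlap = connected @ x                        # overlap score per column
    return set(np.argsort(overlap)[-k:])           # k winning columns

x1 = (rng.random(n_in) < 0.1).astype(int)
x2 = x1.copy()
flip = rng.choice(n_in, 10, replace=False)         # small perturbation of x1
x2[flip] = 1 - x2[flip]
x3 = (rng.random(n_in) < 0.1).astype(int)          # unrelated input

a1, a2, a3 = map(active_columns, (x1, x2, x3))
print("similar inputs share   :", len(a1 & a2), "of", k, "columns")
print("dissimilar inputs share:", len(a1 & a3), "of", k, "columns")
```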

Keywords: hierarchical temporal memory, HTM, learning algorithms, machine learning, spatial pooler

Procedia PDF Downloads 319
7058 Assessing the Survival Time of Hospitalized Patients in Eastern Ethiopia During 2019–2020 Using the Bayesian Approach: A Retrospective Cohort Study

Authors: Chalachew Gashu, Yoseph Kassa, Habtamu Geremew, Mengestie Mulugeta

Abstract:

Background and Aims: Severe acute malnutrition remains a significant health challenge, particularly in low- and middle-income countries. The aim of this study was to determine the survival time of under-five children with severe acute malnutrition. Methods: A retrospective cohort study was conducted at a hospital, focusing on under-five children with severe acute malnutrition. The study included 322 inpatients admitted to the Chiro hospital in Chiro, Ethiopia, between September 2019 and August 2020, whose data were obtained from medical records. Survival functions were analyzed using Kaplan-Meier plots and log-rank tests. The survival time was further analyzed using the Cox proportional hazards model and Bayesian parametric survival models, employing integrated nested Laplace approximation methods. Results: Among the 322 patients, 118 (36.6%) died as a result of severe acute malnutrition. The estimated median survival time for inpatients was 2 weeks. Model selection criteria favored the Bayesian Weibull accelerated failure time model, which showed that age, body temperature, pulse rate, nasogastric (NG) tube usage, hypoglycemia, anemia, diarrhea, dehydration, malaria, and pneumonia significantly influenced survival time. Conclusions: This study revealed that children below 24 months and those with altered body temperature or pulse rate, NG tube usage, hypoglycemia, or comorbidities such as anemia, diarrhea, dehydration, malaria, and pneumonia had shorter survival times with severe acute malnutrition. To reduce the death rate of children under 5 years of age, it is necessary to design community management of acute malnutrition to ensure early detection and to improve access and coverage for children who are malnourished.
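
A plain-numpy Kaplan-Meier sketch computing the product-limit survival curve and the median survival time from made-up follow-up data (not the Chiro hospital records):

```python
# Product-limit estimator: S(t) = prod over event times of (n_i - d_i) / n_i.
import numpy as np

t = np.array([1, 1, 2, 2, 2, 3, 4, 4, 5, 6, 6, 8])   # follow-up time, weeks
d = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0])   # 1 = death, 0 = censored

S, surv = 1.0, {}
for tt in np.unique(t[d == 1]):                      # distinct event times
    n_at_risk = np.sum(t >= tt)
    n_events = np.sum((t == tt) & (d == 1))
    S *= (n_at_risk - n_events) / n_at_risk          # product-limit step
    surv[tt] = S

median = next((tt for tt, s in surv.items() if s <= 0.5), None)
for tt, s in surv.items():
    print(f"S({tt}) = {s:.3f}")
print("median survival (weeks):", median)
```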

Keywords: Bayesian analysis, severe acute malnutrition, survival data analysis, survival time

Procedia PDF Downloads 9
7057 Hierarchical Tree Long Short-Term Memory for Sentence Representations

Authors: Xiuying Wang, Changliang Li, Bo Xu

Abstract:

A fixed-length feature vector is required by many machine learning algorithms in the NLP field. Word embeddings have been very successful at learning lexical information; however, they cannot capture the compositional meaning of sentences, which prevents a deeper understanding of language. In this paper, we introduce a novel hierarchical tree long short-term memory (HTLSTM) model that learns vector representations for sentences of arbitrary syntactic type and length. We propose to split a sentence into three hierarchical levels: short phrase, long phrase, and full sentence. The HTLSTM model gives our algorithm the potential to fully exploit the hierarchical information and long-term dependencies of language. We design experiments on both English and Chinese corpora to evaluate our model on a sentiment analysis task, and the results show that our model significantly outperforms several existing state-of-the-art approaches.

Keywords: deep learning, hierarchical tree long short-term memory, sentence representation, sentiment analysis

Procedia PDF Downloads 334
7056 Evaluation of QSRR Models by Sum of Ranking Differences Approach: A Case Study of Prediction of Chromatographic Behavior of Pesticides

Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević

Abstract:

The present study deals with the selection of the most suitable quantitative structure-retention relationship (QSRR) models for predicting the retention behavior of basic, neutral, acidic, and phenolic pesticides belonging to different classes: fungicides, herbicides, metabolites, insecticides, and plant growth regulators. The sum of ranking differences (SRD) approach can give a different point of view on the selection of the most consistent QSRR model. The SRD approach can be applied not only for ranking QSRR models but also for detecting similarity or dissimilarity among them; applying SRD analysis, the most similar models can be found easily. In this study, selection of the best model was carried out on the basis of a reference ranking ('golden standard') defined as the row-average values of the logarithm of retention time (log tᵣ) determined by high performance liquid chromatography (HPLC). In addition, SRD analysis based on experimental log tᵣ values as the reference ranking revealed a grouping of the established QSRR models similar to that already obtained by hierarchical cluster analysis (HCA).
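
A short SRD computation sketch: objects are ranked by each model and by the row-average reference, and each model is scored by the sum of absolute rank differences; the values are invented, not the HPLC log tᵣ data:

```python
# Sum of ranking differences against a row-average reference ranking.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(9)
true_vals = rng.normal(size=12)                       # 12 hypothetical pesticides
models = {f"QSRR{i}": true_vals + rng.normal(0, s, 12)
          for i, s in enumerate([0.1, 0.5, 1.0], start=1)}

reference = rankdata(np.mean(np.column_stack(list(models.values())), axis=1))
for name, pred in models.items():
    srd = np.sum(np.abs(rankdata(pred) - reference))
    print(f"{name}: SRD = {srd:.0f}")                 # smaller = closer to reference
```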

Keywords: chemometrics, chromatography, pesticides, sum of ranking differences

Procedia PDF Downloads 356
7055 Analysis of Risks of Adopting Integrated Project Delivery: Application of Bayesian Theory

Authors: Shan Li, Qiuwen Ma

Abstract:

Integrated project delivery (IPD) is a project delivery method distinguished by a shared risk/reward mechanism and a multiparty agreement. IPD has drawn increasing attention from the construction industry due to its reliability in delivering high-performing buildings. However, the unavailability of IPD-specific insurance concerns industry participants interested in IPD implementation. Even though risk management capability can be enhanced through the shared-risk mechanism, some risks may occur when the partners do not commit themselves to the integrated practices in the desired manner. This is because intense collaboration and close integration can not only create added value but also bring new opportunistic behaviors and disputes. This study investigates the risks of implementing IPD using Bayesian theory. An IPD risk taxonomy is presented to identify all potential risks of implementing IPD, and a risk network map is developed to capture the interdependencies between IPD risks. The conditional relations between risk occurrences and the impacts of IPD risks on project performance are evaluated and simulated based on Bayesian theory, and the probability of project outcomes is predicted by simulation. In addition, it is found that some risks caused by integration are among the most likely to occur. This study can help IPD project participants identify the critical risks of adopting IPD in order to improve project performance. It is also helpful for developing IPD-specific insurance once the pertinent risks have been identified.

Keywords: Bayesian theory, integrated project delivery, project risks, project performances

Procedia PDF Downloads 278
7054 Cooperative CDD Scheme Based on Hierarchical Modulation in OFDM System

Authors: Seung-Jun Yu, Yeong-Seop Ahn, Young-Min Ko, Hyoung-Kyu Song

Abstract:

In order to achieve a high data rate and increase spectral efficiency, multiple-input multiple-output (MIMO) systems have been proposed. However, multiple antennas are limited by size and cost. Therefore, the recently developed cooperative diversity scheme, which obtains transmit diversity with existing hardware alone by constituting a virtual antenna array, can be a solution. However, most of the cooperative techniques introduced so far share a common fault of decreased transmission rate, because the destination must receive decodable compositions of symbols from both the source and the relay. In this paper, we propose a cooperative cyclic delay diversity (CDD) scheme that uses hierarchical modulation. This scheme is free from the rate loss and allows seamless cooperative communication.
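
A hierarchical 16-QAM superposition sketch: a high-priority QPSK base layer plus a lower-power QPSK refinement layer, with an assumed power split; this illustrates hierarchical modulation in general, not the paper's CDD scheme:

```python
# Superpose a protected base stream and a refinement stream, then detect the base.
import numpy as np

rng = np.random.default_rng(10)
bits = rng.integers(0, 2, (4, 100))                   # 2 base + 2 refinement bits/symbol

qpsk = lambda b0, b1: ((2 * b0 - 1) + 1j * (2 * b1 - 1)) / np.sqrt(2)
base = qpsk(bits[0], bits[1])                         # high-priority stream
refine = qpsk(bits[2], bits[3])                       # low-priority stream

alpha = 0.8                                           # fraction of power on base layer
tx = np.sqrt(alpha) * base + np.sqrt(1 - alpha) * refine

rx = tx + 0.1 * (rng.normal(size=100) + 1j * rng.normal(size=100)) / np.sqrt(2)
base_hat = (rx.real > 0).astype(int)                  # base layer decodes first
print("base-layer BER:", np.mean(base_hat != bits[0]))
```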

Keywords: MIMO, cooperative communication, CDD, hierarchical modulation

Procedia PDF Downloads 525
7053 The Problems of Current Earth Coordinate System for Earthquake Forecasting Using Single Layer Hierarchical Graph Neuron

Authors: Benny Benyamin Nasution, Rahmat Widia Sembiring, Abdul Rahman Dalimunthe, Nursiah Mustari, Nisfan Bahri, Berta br Ginting, Riadil Akhir Lubis, Rita Tavip Megawati, Indri Dithisari

Abstract:

The earth coordinate system is an important part of any attempt at earthquake forecasting, such as the one using a Single Layer Hierarchical Graph Neuron (SLHGN). However, a number of problems need to be worked out before the coordinate system can be utilized by the forecaster. One example is that SLHGN requires the focus area of an earthquake to be represented in a grid-like form. In fact, within the current earth coordinate system, the same longitude difference produces different distances: this can be observed by comparing distances along the Equator with those near the poles. To deal with this problem, a coordinate system has been developed to support the ongoing earthquake forecasting using SLHGN. Two important features have been developed in this system: 1) each location is represented not by two values (longitude and latitude) but by a single value, and 2) the conversion from the earth coordinate system to the x-y Cartesian system requires no angular formulas and is therefore fast. The accuracy and performance have not yet been measured, since earthquake data are difficult to obtain. However, the characteristics of the SLHGN results are very promising.
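
A short illustration of the grid problem: the ground span of one degree of longitude shrinks with the cosine of latitude, so a uniform longitude-latitude grid is not a uniform distance grid:

```python
# Along a parallel, 1 degree of longitude spans R * cos(lat) * (pi / 180) km.
import numpy as np

R = 6371.0                                   # mean Earth radius, km
for lat in (0, 30, 60, 89):
    span = np.radians(1.0) * R * np.cos(np.radians(lat))
    print(f"1 deg of longitude at {lat:2d} deg N ~ {span:7.2f} km")
```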

Keywords: hierarchical graph neuron, multidimensional hierarchical graph neuron, single layer hierarchical graph neuron, natural disaster forecasting, earthquake forecasting, earth coordinate system

Procedia PDF Downloads 197
7052 An Improved Approach to Solve Two-Level Hierarchical Time Minimization Transportation Problem

Authors: Kalpana Dahiya

Abstract:

This paper discusses a two-level hierarchical time minimization transportation problem, which is an important class of transportation problems arising in industries. This problem has been studied by various researchers, and a number of polynomial time iterative algorithms are available to find its solution. All the existing algorithms, though efficient, have some shortcomings. The current study proposes an alternate solution algorithm for the problem that is more efficient in terms of computational time than the existing algorithms. The results justifying the underlying theory of the proposed algorithm are given. Further, a detailed comparison of the computational behaviour of all the algorithms for randomly generated instances of this problem of different sizes validates the efficiency of the proposed algorithm.

Keywords: global optimization, hierarchical optimization, transportation problem, concave minimization

Procedia PDF Downloads 139
7051 Bayesian Network and Feature Selection for Rank Deficient Inverse Problem

Authors: Kyugneun Lee, Ikjin Lee

Abstract:

Parameter estimation in inverse problems often suffers from unfavorable conditions in the real world: useless data and many input parameters make the problem complicated or insoluble. Data refinement and reformulation of the problem can resolve such difficulties. In this research, a method to solve the rank-deficient inverse problem is suggested, treating a multi-physics system whose rank deficiency is caused by response correlation. Impeding information is removed, and the problem is reformulated into sequential estimations using a Bayesian network (BN) and subset groups. First, subset grouping of the responses is performed, using feature selection with singular value decomposition (SVD). Next, BN inference is used for sequential conditional estimation according to the group hierarchy. A directed acyclic graph (DAG) structure is organized to maximize the estimation ability. The response-to-noise variance ratio is used to pair the estimable parameters with each response.
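
An SVD sketch for spotting rank deficiency in a response sensitivity matrix; near-zero singular values flag correlated responses that would be grouped or removed before the sequential BN estimation (the matrix is invented):

```python
# Numerical rank of a sensitivity matrix via its singular-value spectrum.
import numpy as np

J = np.array([[1.0, 0.2, 0.0],
              [2.0, 0.4, 0.0],      # exactly 2x row 1 -> response correlation
              [0.1, 1.0, 0.5]])     # rows: responses, columns: parameters

U, s, Vt = np.linalg.svd(J)
tol = s.max() * max(J.shape) * np.finfo(float).eps
print("singular values:", s.round(4))
print("numerical rank :", int(np.sum(s > tol)))      # 2 of 3: rank deficient
# Responses loading on the near-zero singular directions would be grouped
# or dropped before the sequential conditional estimation.
```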

Keywords: Bayesian network, feature selection, rank deficiency, statistical inverse analysis

Procedia PDF Downloads 289
7050 Multiple Linear Regression for Rapid Estimation of Subsurface Resistivity from Apparent Resistivity Measurements

Authors: Sabiu Bala Muhammad, Rosli Saad

Abstract:

Multiple linear regression (MLR) models for fast estimation of true subsurface resistivity from apparent resistivity field measurements are developed and assessed in this study. The parameters investigated were apparent resistivity (ρₐ), horizontal location (X), and depth (Z) of measurement as the independent variables, and true resistivity (ρₜ) as the dependent variable. To achieve linearity in both resistivity variables, the datasets were first transformed into the logarithmic domain, following diagnostic checks of the normality of the dependent variable and of heteroscedasticity, to ensure accurate models. Four MLR models were developed based on hierarchical combinations of the independent variables. The generated MLR coefficients were applied to another data set to estimate ρₜ values for validation. Contours of the estimated ρₜ values were plotted and compared to plots of the observed data, at the same colour scale and blanking, for visual assessment. The accuracy of the models was assessed using the coefficient of determination (R²), the standard error (SE), and the weighted mean absolute percentage error (wMAPE). It is concluded that the MLR models can estimate ρₜ with a high level of accuracy.
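
A log-domain MLR sketch mirroring the described workflow on synthetic data: log-transform both resistivities, regress log ρₜ on log ρₐ, X, and Z, and report R² and RMSE:

```python
# Fit log10(rho_t) ~ log10(rho_a) + X + Z on an invented dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(11)
n = 300
rho_a = 10 ** rng.uniform(0.5, 3, n)              # apparent resistivity, ohm-m
X, Z = rng.uniform(0, 100, n), rng.uniform(1, 30, n)
rho_t = rho_a ** 1.1 * 10 ** (0.002 * Z + rng.normal(0, 0.05, n))

features = np.column_stack([np.log10(rho_a), X, Z])
target = np.log10(rho_t)
model = LinearRegression().fit(features, target)

pred = model.predict(features)
print(f"R^2  = {r2_score(target, pred):.3f}")
print(f"RMSE = {np.sqrt(mean_squared_error(target, pred)):.3f} (log10 ohm-m)")
```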

Keywords: apparent resistivity, depth, horizontal location, multiple linear regression, true resistivity

Procedia PDF Downloads 252
7049 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution

Authors: Najrullah Khan, Athar Ali Khan

Abstract:

The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone generalized exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made; R and JAGS code is provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data, with emphasis on the modelling of data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete or 'censored' data encountered. Analytic approximation and simulation tools are both covered, but most of the emphasis is on Markov chain Monte Carlo methods, including the independent Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among the various optimization algorithms, the trust region method is found to be the best. The TPGE model is also used to analyze lifetime data in the Bayesian paradigm, with results evaluated on the above-mentioned real survival data set. The analytic approximation and simulation methods are implemented using several software packages. It is clear from our findings that simulation tools provide better results than those obtained by asymptotic approximation.

Keywords: Bayesian inference, JAGS, Laplace approximation, LaplacesDemon, posterior, R software, simulation

Procedia PDF Downloads 505
7048 A Bayesian Approach for Analyzing Academic Article Structure

Authors: Jia-Lien Hsu, Chiung-Wen Chang

Abstract:

Research articles may follow a simple and succinct structure of organizational patterns, called moves. For example, considering extended abstracts, we observe that an extended abstract usually consists of five moves: Background, Aim, Method, Results, and Conclusion. As another example, when publishing articles in PubMed, authors are encouraged to provide a structured abstract, i.e., an abstract with distinct and labeled sections (e.g., Introduction, Methods, Results, Discussion) for rapid comprehension. This paper introduces a method for computational analysis of move structures (i.e., Background-Purpose-Method-Result-Conclusion) in the abstracts and introductions of research documents, replacing a manual, time-consuming, and labor-intensive analysis process. In our approach, sentences in a given abstract and introduction are automatically analyzed and labeled with a specific move (B-P-M-R-C in this paper) to reveal their rhetorical status. As a result, it is expected that this automatic analytical tool for move structures will help non-native speakers or novice writers become aware of appropriate move structures and internalize relevant knowledge to improve their writing. We propose a Bayesian approach to determine move tags for research articles. The approach consists of two phases, a training phase and a testing phase. In the training phase, we build a Bayesian model based on a couple of given initial patterns and a corpus drawn from a subset of CiteSeerX. In the beginning, the prior probability of the Bayesian model relies solely on the initial patterns; subsequently, we process each document of the corpus one by one: extract features, determine tags, and update the Bayesian model iteratively. In the testing phase, we compare our results with tags manually assigned by experts. In our experiments, the accuracy of the proposed approach reaches a promising 56%.
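
A naive-Bayes move-tagging sketch with bag-of-words features; the training sentences are invented, and the paper's CiteSeerX corpus and iterative model updating are not reproduced:

```python
# Tag sentences with B-P-M-R-C moves using a simple Bayesian text classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    ("Prior work has largely ignored this phenomenon.", "B"),
    ("Little is known about the underlying mechanism.", "B"),
    ("This paper aims to quantify the effect.", "P"),
    ("We propose a new framework for the task.", "P"),
    ("Participants were assigned randomly to two groups.", "M"),
    ("Data were collected via structured interviews.", "M"),
    ("Accuracy improved by twelve percent overall.", "R"),
    ("The treatment group outperformed the control.", "R"),
    ("These findings suggest broader applicability.", "C"),
    ("We conclude that the method is effective.", "C"),
]
texts, moves = zip(*sentences)
tagger = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, moves)
print(tagger.predict(["We recruited thirty volunteers for the experiment."]))
```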

Keywords: academic English writing, assisted writing, move tag analysis, Bayesian approach

Procedia PDF Downloads 307