Search results for: gaussian mixture model (GMM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 315

225 Powder Flow with Normalized Powder Particles Size Distribution and Temperature Analyses in Laser Melting Deposition: Analytical Modelling and Experimental Validation

Authors: Muhammad Arif Mahmood, Andrei C. Popescu, Mihai Oane, Diana Chioibascu, Carmen Ristoscu, Ion N. Mihailescu

Abstract:

Powder flow and temperature distributions are recognized as influencing factors during the laser melting deposition (LMD) process that affect not only the consolidation rate but also the characteristics of the deposited layers. Here, two simplified analytical models are presented: one simulating the powder flow under three powder jet nozzles, with the powder particle size distribution included in Gaussian form, and one covering the temperature analyses during the LMD process. The output of the first model serves as the input to the second. The models are validated with experimental data, i.e., the weight measurement method for the powder particle distribution and infrared imaging for the temperature analyses. This study will increase the cost-efficiency of the LMD process by adjusting the operating parameters to reach the optimal powder debit and energy. This research has received funds under the Marie Sklodowska-Curie grant agreement No. 764935, from the European Union’s Horizon 2020 research and innovation programme.
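
As a rough illustration of the Gaussian particle-size ingredient of the first model, the sketch below samples a truncated Gaussian diameter distribution and compares number- and mass-weighted means; all numerical values (mean diameter, spread, material density) are assumed for illustration rather than taken from the paper.

```python
import numpy as np

# Illustrative sketch only: sample a Gaussian particle-size distribution.
# Mean/std diameters and material density below are assumed values.
rng = np.random.default_rng(0)
mean_d, std_d = 75e-6, 15e-6      # particle diameter [m], assumed
d = rng.normal(mean_d, std_d, 100_000)
d = d[d > 0]                      # truncate unphysical negative diameters

rho = 7800.0                      # [kg/m^3], assumed steel-like powder
mass = rho * np.pi * d**3 / 6     # mass of each spherical particle

print(f"number-mean diameter : {d.mean()*1e6:.1f} um")
print(f"mass-weighted mean   : {np.average(d, weights=mass)*1e6:.1f} um")
```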

Keywords: laser additive manufacturing, powder particles size distribution in Gaussian form, powder stream distribution, temperature analyses

Procedia PDF Downloads 132
224 Hybrid Algorithm for Non-Negative Matrix Factorization Based on Symmetric Kullback-Leibler Divergence for Signal Dependent Noise: A Case Study

Authors: Ana Serafimovic, Karthik Devarajan

Abstract:

Non-negative matrix factorization approximates a high-dimensional non-negative matrix V as the product of two non-negative matrices, W and H, and allows only additive linear combinations of data, enabling it to learn parts-based representations. It has been successfully applied in the analysis and interpretation of high-dimensional data arising in neuroscience, computational biology, and natural language processing, to name a few. The objective of this paper is to assess a hybrid algorithm for non-negative matrix factorization with multiplicative updates. The method aims to minimize the symmetric version of the Kullback-Leibler divergence known as intrinsic information and assumes that the noise is signal-dependent and that it originates from an arbitrary distribution from the exponential family. It is a generalization of currently available algorithms for Gaussian, Poisson, gamma and inverse Gaussian noise. We demonstrate the potential usefulness of the new generalized algorithm by comparing its performance to the baseline methods which also aim to minimize symmetric divergence measures.
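
For reference, a minimal sketch of the standard multiplicative updates for the one-sided generalized Kullback-Leibler divergence D(V || WH) — the baseline family that the paper's symmetric-divergence algorithm generalizes — is given below; the random test matrix is a stand-in, and the symmetric update itself is not implemented here.

```python
import numpy as np

def nmf_gkl(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Lee-Seung multiplicative updates for the one-sided generalized KL
    divergence D(V || WH); the paper's hybrid algorithm targets the
    symmetric version, which this baseline sketch does not implement."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((50, 40)))  # stand-in data matrix
W, H = nmf_gkl(V, rank=5)
```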

Keywords: non-negative matrix factorization, dimension reduction, clustering, intrinsic information, symmetric information divergence, signal-dependent noise, exponential family, generalized Kullback-Leibler divergence, dual divergence

Procedia PDF Downloads 244
223 A Comparative Analysis of Machine Learning Techniques for PM10 Forecasting in Vilnius

Authors: Mina Adel Shokry Fahim, Jūratė Sužiedelytė Visockienė

Abstract:

With the growing concern over air pollution (AP), understanding how poor air quality, as summarized by air quality indices (AQI), damages human health has gained more prominence than ever before. The study focuses on assessing air pollution prediction models specifically for Lithuania, addressing a substantial need for empirical research within the region. Concentrating on Vilnius, it specifically examines particulate matter concentrations 10 micrometers or less in diameter (PM10). Utilizing Gaussian Process Regression (GPR), Regression Tree Ensemble, and Regression Tree methodologies, predictive forecasting models are validated and tested using hourly data from January 2020 to December 2022. The study explores the classification of AP data into anthropogenic and natural sources, the impact of AP on human health, and its connection to cardiovascular diseases. The study revealed varying levels of accuracy among the models, with GPR achieving the highest accuracy, indicated by an RMSE of 4.14 in validation and 3.89 in testing.
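
A minimal GPR forecasting sketch in the spirit of the study is shown below; the feature matrix and targets are synthetic stand-ins for the hourly Vilnius PM10 data, and the kernel choice is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error

# Hypothetical stand-in data: in the study, X would hold hourly
# meteorological/temporal features and y the measured PM10 concentrations.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (300, 2))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0, 0.2, 300)

split = 240  # chronological train/test split
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:split], y[:split])
pred = gpr.predict(X[split:])
rmse = np.sqrt(mean_squared_error(y[split:], pred))
print(f"test RMSE: {rmse:.2f}")
```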

Keywords: air pollution, anthropogenic and natural sources, machine learning, Gaussian process regression, tree ensemble, forecasting models, particulate matter

Procedia PDF Downloads 51
222 The Optimum Mel-Frequency Cepstral Coefficients (MFCCs) Contribution to Iranian Traditional Music Genre Classification by Instrumental Features

Authors: M. Abbasi Layegh, S. Haghipour, K. Athari, R. Khosravi, M. Tafkikialamdari

Abstract:

An approach to find the optimum mel-frequency cepstral coefficients (MFCCs) for the Radif of Mirzâ Ábdollâh, which is the principal emblem and the heart of Persian music, performed by the most famous Iranian masters on two Iranian stringed instruments, ‘Tar’ and ‘Setar’, is proposed. While investigating the variance of the MFCCs for each record in the music database of 1500 gushe of the repertoire belonging to 12 modal systems (dastgâh and âvâz), we applied the Fuzzy C-Means clustering algorithm on each of the 12 coefficients and on different combinations of those coefficients. We repeated the experiment while increasing the number of coefficients, but the clustering accuracy remained the same. Therefore, we can conclude that the first 7 MFCCs (V-7MFCC) are enough for the classification of the Radif of Mirzâ Ábdollâh. Classical machine learning algorithms such as MLP neural networks, K-Nearest Neighbors (KNN), Gaussian Mixture Model (GMM), Hidden Markov Model (HMM) and Support Vector Machine (SVM) have been employed. Finally, it can be seen that SVM shows a better performance in this study.
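
A sketch of such a feature pipeline is given below, assuming librosa for MFCC extraction; the file list, labels, sampling rate and per-coefficient summary statistics are hypothetical placeholders, not the paper's exact setup.

```python
import librosa
import numpy as np
from sklearn.svm import SVC

def v7mfcc(path, n_mfcc=7):
    """Extract the first 7 MFCCs (the V-7MFCC finding) and summarize each
    recording by per-coefficient mean and variance. sr=22050 is assumed."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (7, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])

# 'files' and 'dastgah_labels' are hypothetical placeholders:
# X = np.vstack([v7mfcc(f) for f in files])
# clf = SVC(kernel="rbf").fit(X, dastgah_labels)
```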

Keywords: radif of Mirzâ Ábdollâh, gushe, mel frequency cepstral coefficients, fuzzy c-mean clustering algorithm, k-nearest neighbors (KNN), Gaussian mixture model (GMM), hidden Markov model (HMM), support vector machine (SVM)

Procedia PDF Downloads 445
221 Infilling Strategies for Surrogate Model Based Multi-disciplinary Analysis and Applications to Velocity Prediction Programs

Authors: Malo Pocheau-Lesteven, Olivier Le Maître

Abstract:

Engineering and optimisation of complex systems is often achieved through multi-disciplinary analysis of the system, where each subsystem is modeled and interacts with other subsystems to model the complete system. The coherence of the outputs of the different subsystems is achieved through the use of compatibility constraints, which enforce the coupling between the subsystems. Due to the complexity of some subsystems and the computational cost of evaluating their respective models, it is often necessary to build surrogate models of these subsystems to allow repeated evaluation of these subsystems at a relatively low computational cost. In this paper, Gaussian processes are used, as their probabilistic nature is leveraged to evaluate the likelihood of satisfying the compatibility constraints. This paper presents infilling strategies to build accurate surrogate models of the subsystems in areas where they are likely to meet the compatibility constraint. It is shown that these infilling strategies can reduce the computational cost of building surrogate models for a given level of accuracy. An application of these methods to velocity prediction programs used in offshore racing naval architecture further demonstrates the methods' applicability in a real engineering context. Some examples of the application of uncertainty quantification to the field of naval architecture are also presented.
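
A minimal sketch of one such infilling step is shown below, under assumed toy functions: the Gaussian process's predictive mean and standard deviation give the probability that a coupling residual falls within a compatibility tolerance, and the next exact evaluation is placed where that probability is highest.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * x        # hypothetical coupling residual
X = rng.uniform(0, 3, (8, 1))
gp = GaussianProcessRegressor(normalize_y=True).fit(X, f(X).ravel())

cand = np.linspace(0, 3, 200).reshape(-1, 1)
mu, sd = gp.predict(cand, return_std=True)
sd = np.maximum(sd, 1e-9)                    # guard against zero variance
tol = 0.05                                   # compatibility tolerance, assumed
# Probability that the residual lies within +/- tol of zero
p_feas = norm.cdf((tol - mu) / sd) - norm.cdf((-tol - mu) / sd)
x_new = cand[np.argmax(p_feas)]              # next point to evaluate exactly
print(f"infill point: {x_new[0]:.3f}")
```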

Keywords: infilling strategy, Gaussian process, multi-disciplinary analysis, velocity prediction program

Procedia PDF Downloads 156
220 Study of Proton-9,11Li Elastic Scattering at 60~75 MeV/Nucleon

Authors: Arafa A. Alholaisi, Jamal H. Madani, M. A. Alvi

Abstract:

The radial form of the nuclear matter distribution, the charge and the shape of nuclei are essential properties of nuclei and hence of great interest for several areas of research in nuclear physics. The last three decades have witnessed a range of experimental means employing leptonic probes (such as muons, electrons, etc.) for exploring nuclear charge distributions, whereas hadronic probes (for example, alpha particles, protons, etc.) have been used to investigate nuclear matter distributions. In this paper, p-9,11Li elastic scattering differential cross sections in the energy range 60 to 75 MeV/nucleon have been studied by means of the Coulomb modified Glauber scattering formalism. By applying the semi-phenomenological Bhagwat-Gambhir-Patil (BGP) nuclear density for the loosely bound neutron-rich 11Li nucleus, the estimated matter radius is found to be 3.446 fm, which is quite large as compared to the known experimental value of 3.12 fm. The results of a microscopic optical model based calculation applying the Bethe-Brueckner-Hartree-Fock formalism (BHF) have also been compared. It should be noted that in most of the phenomenological density models used to reproduce the p-11Li differential elastic scattering cross section data, the calculated matter radius lies between 2.964 and 3.55 fm. The calculated results with the phenomenological BGP model density and with the nucleon density calculated in the relativistic mean-field (RMF) approach reproduce the p-9Li and p-11Li experimental data quite nicely as compared to Gaussian-Gaussian or Gaussian-Oscillator densities at all energies under consideration. In the approach described here, no free/adjustable parameter has been employed to reproduce the elastic scattering data, as against the well-known optical model based studies that involve at least four to six adjustable parameters to match the experimental data. The calculated reaction cross sections σR for p-11Li at these energies are quite large as compared to the estimated values reported by earlier works, though so far no experimental studies have been performed to measure them.
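
As a small numerical aside, an rms matter radius follows from any model density by direct integration; the sketch below uses a generic core-plus-tail form with assumed parameters as a stand-in for the actual BGP density, which it does not reproduce.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative only: rms matter radius from a model density rho(r).
# A Gaussian core plus a slow exponential tail mimics a halo nucleus;
# parameters a1, a2, w are assumed, not the BGP values.
def rho(r, a1=1.8, a2=3.5, w=0.38):
    core = np.exp(-r**2 / (2 * a1**2))
    halo = np.exp(-r / a2)
    return (1 - w) * core + w * halo

norm_ = quad(lambda r: 4 * np.pi * r**2 * rho(r), 0, 30)[0]
r2 = quad(lambda r: 4 * np.pi * r**4 * rho(r), 0, 30)[0]
print(f"rms matter radius: {np.sqrt(r2 / norm_):.3f} fm")
```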

Keywords: Bhagwat-Gambhir-Patil density, Coulomb modified Glauber model, halo nucleus, optical limit approximation

Procedia PDF Downloads 160
219 A Comparative Assessment of Some Algorithms for Modeling and Forecasting Horizontal Displacement of Ialy Dam, Vietnam

Authors: Kien-Trinh Thi Bui, Cuong Manh Nguyen

Abstract:

In order to simulate and reproduce the operational characteristics of a dam visually, it is necessary to capture the displacement at different measurement points and to analyze the observed movement data promptly to forecast dam safety. The accuracy of forecasts is further improved by applying machine learning methods to the data analysis process. In this study, the horizontal displacement monitoring data of the Ialy hydroelectric dam (Vietnam) were analyzed with three machine learning algorithms: Gaussian processes (GP), multi-layer perceptron (MLP) neural networks, and the M5-Rules algorithm. The database used in this research was built by collecting time series of data from 2006 to 2021 and divided into two parts: a training dataset and a validating dataset. The final results show that all three algorithms perform well in both training and model validation, with the MLP being the best model. Their usability is further investigated by comparison with a benchmark model created by multi-linear regression. The results show that the performance obtained from the GP, MLP, and M5-Rules models is much better; therefore, these three models should be used to analyze and predict the horizontal displacement of the dam.

Keywords: Gaussian processes, horizontal displacement, hydropower dam, Ialy dam, M5-Rules, multi-layer perceptron neural networks

Procedia PDF Downloads 208
218 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed for customers to find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based and analytical approach to stock proper movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers’ demographic, behavioral and social information to predict their movie genre preferences. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from the sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik-Chervonenkis (VC) dimensions in the machine learning algorithm to find and prevent overfitting. The Gaussian kernel SVM prediction model can correctly predict movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use machine learning approaches to predict customers’ preferences with a small data set and to design prediction tools for these enterprises.
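
A sketch of a Gaussian kernel SVM with complexity control is shown below; the synthetic data stands in for the customers' demographic, behavioral and social features, and the hyperparameter grid is an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for customer features and genre-preference labels
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Shrinking gamma constrains the Gaussian kernel's effective complexity,
# loosely analogous to lowering the VC dimension to curb overfitting.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}, cv=5)
grid.fit(X_tr, y_tr)
print(f"in-sample accuracy : {grid.score(X_tr, y_tr):.2f}")
print(f"out-of-sample      : {grid.score(X_te, y_te):.2f}")
```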

Keywords: computational social science, movie preference, machine learning, SVM

Procedia PDF Downloads 257
217 Generating 3D Battery Cathode Microstructures using Gaussian Mixture Models and Pix2Pix

Authors: Wesley Teskey, Vedran Glavas, Julian Wegener

Abstract:

Generating battery cathode microstructures is an important area of research, given the proliferation of the use of automotive batteries. Currently, finite element analysis (FEA) is often used for simulations of battery cathode microstructures before physical batteries can be manufactured and tested to verify the simulation results. Unfortunately, a key drawback of using FEA is that this method of simulation is very slow in terms of computational runtime. Generative AI offers the key advantage of speed when compared to FEA, and because of this, generative AI is capable of evaluating very large numbers of candidate microstructures. Given AI-generated candidate microstructures, a subset of the promising microstructures can be selected for further validation using FEA. Leveraging the speed advantage of AI allows for a better final microstructural selection because high speed allows for the evaluation of many more candidate microstructures. For the approach presented, battery cathode 3D candidate microstructures are generated using Gaussian Mixture Models (GMMs) and pix2pix. This approach first uses GMMs to generate a population of spheres (representing the “active material” of the cathode). Once spheres have been sampled from the GMM, they are placed within a microstructure. Subsequently, pix2pix sweeps iteratively over the 3D microstructure slice by slice and adds details to the microstructure to determine what portions of the microstructure will become electrolyte and what portions will become binder. In this manner, each subsequent slice of the microstructure is evaluated using pix2pix, where the inputs into pix2pix are the previously processed layers of the microstructure. By feeding pix2pix the previously fully processed layers of the microstructure, candidate microstructures can be kept consistent with physical reality: the locations of electrolyte and binder in each layer must reasonably match those in previous layers to ensure geometric continuity. Using the above outlined approach, a 10x to 100x speed increase was possible when generating candidate microstructures using AI when compared to an FEA-only approach for this task. A key metric for evaluating microstructures was the battery specific power value that the microstructures would be able to produce. The best generative AI result obtained was a 12% increase in specific power for a candidate microstructure when compared to what an FEA-only approach was capable of producing. This 12% increase in specific power was verified by FEA simulation.
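
A sketch of the first (GMM) stage only is given below: fit a GMM to observed sphere parameters and sample new "active material" spheres. The training array is random stand-in data, not real cathode data, and the pix2pix refinement stage is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in observations: sphere centre (x, y, z) and radius, all in microns
observed = np.column_stack([
    rng.uniform(0, 50, 500),   # x
    rng.uniform(0, 50, 500),   # y
    rng.uniform(0, 50, 500),   # z
    rng.gamma(5, 1, 500),      # radius
])

gmm = GaussianMixture(n_components=4, random_state=0).fit(observed)
spheres, _ = gmm.sample(n_samples=200)
spheres[:, 3] = np.clip(spheres[:, 3], 0.5, None)  # keep radii physical
# 'spheres' would then be voxelized and refined slice-by-slice with pix2pix.
```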

Keywords: finite element analysis, Gaussian mixture models, generative design, Pix2Pix, structural design

Procedia PDF Downloads 105
216 Analysis of Nonlinear Dynamic Systems Excited by Combined Colored and White Noise Excitations

Authors: Siu-Siu Guo, Qingxuan Shi

Abstract:

In this paper, single-degree-of-freedom (SDOF) systems subjected to white noise and colored noise excitations are investigated. By expressing the colored noise excitation as a second-order filtered white noise process and introducing the colored noise as an additional state variable, the equation of motion for the SDOF system under colored noise is artificially transformed into that of a multi-degree-of-freedom (MDOF) system under white noise excitations. As a consequence, the corresponding Fokker-Planck-Kolmogorov (FPK) equation governing the joint probability density function (PDF) of the state variables increases to four dimensions (4-D), and the solution procedure and computer program become much more sophisticated. The exponential-polynomial closure (EPC) method, widely applied to SDOF systems under white noise excitations, is developed and improved for systems under colored noise excitations and for solving the complex 4-D FPK equation. On the other hand, the Monte Carlo simulation (MCS) method is performed to test the approximate EPC solutions. Two examples associated with Gaussian and non-Gaussian colored noise excitations are considered, and the corresponding band-limited power spectral densities (PSDs) for the colored noise excitations are given separately. Numerical studies show that the developed EPC method provides relatively accurate estimates of the stationary probabilistic solutions. Moreover, the statistical parameter of the mean up-crossing rate (MCR) is taken into account, which is important for reliability and failure analysis.
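
A minimal sketch of the filtering idea is given below: colored noise is generated as a second-order filtered white noise process via an Euler-Maruyama recursion, and its state [x, v] is what gets appended to the system's state vector in the augmentation. Filter parameters, noise intensity and step size are assumed for illustration.

```python
import numpy as np

# Colored noise as filtered white noise:
#   x'' + 2*zeta*w0*x' + w0^2*x = white noise
rng = np.random.default_rng(0)
dt, n = 1e-3, 50_000
w0, zeta, S0 = 10.0, 0.2, 1.0      # natural freq, damping, noise intensity

x = np.zeros(n)
v = np.zeros(n)
for k in range(n - 1):
    dW = rng.normal(0.0, np.sqrt(dt))
    x[k + 1] = x[k] + v[k] * dt
    v[k + 1] = v[k] + (-2 * zeta * w0 * v[k] - w0**2 * x[k]) * dt \
               + np.sqrt(2 * np.pi * S0) * dW
# x is now a band-limited (colored) excitation that can drive the SDOF system.
```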

Keywords: filtered noise, narrow-banded noise, nonlinear dynamics, random vibration

Procedia PDF Downloads 224
215 Hybrid Temporal Correlation Based on Gaussian Mixture Model Framework for View Synthesis

Authors: Deng Zengming, Wang Mingjiang

Abstract:

As 3D video has been explored as a hot research topic in the last few decades, free-viewpoint TV (FTV) is no doubt a promising field for its better visual experience and incomparable interactivity. View synthesis is obviously a crucial technology for FTV; it enables rendering images at unlimited numbers of virtual viewpoints from the information of a limited number of reference views. In this paper, a novel hybrid synthesis framework is proposed and blending priority is explored. In contrast to the commonly used View Synthesis Reference Software (VSRS), the presented synthesis process is driven in consideration of the temporal correlation of image sequences. The temporal correlations are exploited to produce fine synthesis results even near the foreground boundaries. As for the blending priority, the scheme proposes that one of the two reference views is selected to be the main reference view based on the distance between the reference views and the virtual view, while the other is chosen as the auxiliary viewpoint, assisting only in filling hole pixels with the help of background information. A significant improvement of the proposed approach over the state-of-the-art pixel-based virtual view synthesis method is presented: the results of the experiments show that subjective gains can be observed, and objective PSNR average gains range from 0.5 to 1.3 dB, while SSIM average gains range from 0.01 to 0.05.

Keywords: fusion method, Gaussian mixture model, hybrid framework, view synthesis

Procedia PDF Downloads 248
214 Quasistationary States and Mean Field Model

Authors: Sergio Curilef, Boris Atenas

Abstract:

Systems with long-range interactions are very common in nature. They are observed from the atomic scale to the astronomical scale and exhibit anomalies, such as inequivalence of ensembles, negative heat capacity, ergodicity breaking, nonequilibrium phase transitions, quasistationary states, and anomalous diffusion. These anomalies are exacerbated when special initial conditions are imposed; in particular, we use the so-called water-bag initial conditions that stand for a uniform distribution. Several theoretical and practical implications are discussed here. A potential energy inspired by dipole-dipole interactions is proposed to build the dipole-type Hamiltonian mean-field model. As expected, the dynamics, obtained through the molecular dynamics technique, is novel and general to the behavior of systems with long-range interactions. Two plateaus sequentially emerge before arriving at equilibrium, corresponding to two different quasistationary states. The first plateau is a type of quasistationary state whose lifetime depends on a power law of N, and the second plateau seems to be a true quasistationary state as reported in the literature. The general behavior of the model according to its dynamics and thermodynamics is described. Using numerical simulation, we characterize the mean kinetic energy, the caloric curve, and the diffusion law through the mean square displacement. The present challenge is to characterize the distributions in phase space. Certainly, the equilibrium state is well characterized by the Gaussian distribution, but quasistationary states in general depart from any Gaussian function.

Keywords: dipole-type interactions, dynamics and thermodynamics, mean field model, quasistationary states

Procedia PDF Downloads 209
213 Reliability Modeling on Drivers’ Decision during Yellow Phase

Authors: Sabyasachi Biswas, Indrajit Ghosh

Abstract:

The random and heterogeneous behavior of vehicles in India puts up a great challenge for researchers. Stop-and-go modeling at signalized intersections under heterogeneous traffic conditions has remained one of the most sought-after fields. Vehicles are often caught up in the dilemma zone and are unable to decide quickly whether to stop or cross the intersection. This hampers the traffic movement and may lead to accidents. The purpose of this work is to develop a stop-and-go prediction model that depicts the drivers’ decision during the yellow time at signalized intersections. To accomplish this, certain traffic parameters were taken into account to develop a surrogate model. This research investigated the stop-and-go behavior of drivers by collecting data from four signalized intersections located in two major Indian cities. A model was developed to predict the drivers’ decision making during the yellow phase of the traffic signal. The parameters used for modeling included distance to stop line, time to stop line, speed, and length of the vehicle. A Kriging-based surrogate model has been developed to investigate the drivers’ decision-making behavior in the amber phase. It is observed that the proposed approach yields a highly accurate result (97.4 percent) with the Gaussian function. The accuracy for the crossing probability was 95.45, 90.9 and 86.36 percent, respectively, as predicted by the Kriging models with Gaussian, Exponential and Linear functions.

Keywords: decision-making, dilemma zone, surrogate model, Kriging

Procedia PDF Downloads 308
212 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparison performance metrics hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal a similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to be able to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is being attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
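
A sketch of the key computation is shown below: an upper quantile for the minimum of a Gaussian vector (standing in for the joint asymptotic law of the candidate models' GICs), obtained from multivariate Gaussian integrals as the paper does with the R package "mvtnorm". The mean and covariance here are assumed toy values.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal

mu = np.array([0.0, 0.3, 0.5])          # assumed GIC means
Sigma = np.array([[1.0, 0.6, 0.4],
                  [0.6, 1.0, 0.5],
                  [0.4, 0.5, 1.0]])      # assumed GIC covariance

def cdf_min(t):
    # P(min_i X_i <= t) = 1 - P(all X_i > t); the survival probability is a
    # multivariate Gaussian integral, here via the CDF of -X ~ N(-mu, Sigma)
    surv = multivariate_normal(mean=-mu, cov=Sigma).cdf(-np.full(3, t))
    return 1.0 - surv

alpha = 0.95
q = brentq(lambda t: cdf_min(t) - alpha, -10, 10)
print(f"{alpha:.0%} upper quantile of the minimum GIC: {q:.3f}")
```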

Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory

Procedia PDF Downloads 86
211 Assessment of Air Pollutant Dispersion and Soil Contamination: The Critical Role of MATLAB Modeling in Evaluating Emissions from the Covanta Municipal Solid Waste Incineration Facility

Authors: Jadon Matthias, Cindy Dong, Ali Al Jibouri, Hsin Kuo

Abstract:

The environmental impact of emissions from the Covanta Waste-to-Energy facility in Burnaby, BC, was comprehensively evaluated, focusing on the dispersion of air pollutants and the subsequent assessment of heavy metal contamination in surrounding soils. A Gaussian Plume Model, implemented in MATLAB, was utilized to simulate the dispersion of key pollutants to understand their atmospheric behaviour and potential deposition patterns. The MATLAB code developed for this study enhanced the accuracy of pollutant concentration predictions and provided capabilities for visualizing pollutant dispersion in 3D plots. Furthermore, the code could predict the maximum concentration of pollutants at ground level, eliminating the need to use the Ranchoux model for predictions. Complementing the modelling approach, empirical soil sampling and analysis were conducted to evaluate heavy metal concentrations in the vicinity of the facility. This integrated methodology underscored the importance of computational modelling in air pollution assessment and highlighted the necessity of soil analysis to obtain a holistic understanding of environmental impacts. The findings emphasized the effectiveness of current emissions controls while advocating for ongoing monitoring to safeguard public health and environmental integrity.
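
A minimal Python sketch of the standard Gaussian plume relation used by such models is shown below (the study's implementation is in MATLAB); all emission and dispersion parameters are illustrative values, not the Covanta facility's actual data.

```python
import numpy as np

def gaussian_plume(y, z, Q, u, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration [g/m^3] with ground
    reflection. Q: emission rate [g/s], u: wind speed [m/s], H: effective
    stack height [m]; sigma_y, sigma_z: dispersion parameters [m] evaluated
    at the downwind distance of interest."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # image-source term
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative values only
c = gaussian_plume(y=0, z=0, Q=10.0, u=4.0, H=60.0, sigma_y=80.0, sigma_z=40.0)
print(f"ground-level centreline concentration: {c:.2e} g/m^3")
```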

Keywords: air emissions, Gaussian Plume Model, MATLAB, soil contamination, air pollution monitoring, waste-to-energy, pollutant dispersion visualization, heavy metal analysis, environmental impact assessment, emission control effectiveness

Procedia PDF Downloads 16
210 A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System

Authors: Mulugeta K. Tefera, Xiaolong Yang, Jian Liu

Abstract:

Detection and tracking of moving objects are very important in many application contexts, such as the detection and recognition of people, visual surveillance, automatic generation of video effects, and so on. However, the task of detecting the real shape of an object in motion becomes tricky due to various challenges like dynamic scene changes, the presence of shadow, and illumination variations due to light switches. Once the moving object is detected, tracking is also a crucial step for applications used in military defense, video surveillance, human-computer interaction, and medical diagnostics, as well as in commercial fields such as video games. In this paper, an object present in a dynamic background is detected using an adaptive mixture-of-Gaussians analysis of the video sequences. Then the detected moving object is tracked using region-based moving object tracking and inter-frame differential mechanisms to address the partial overlapping and occlusion problems. Firstly, the detection algorithm effectively detects and extracts the moving object target by enhancement and post-processing morphological operations. Secondly, the extracted object uses region-based moving object tracking and inter-frame difference to improve the tracking speed of real-time moving objects in different video frames. Finally, a plotting method is applied to detect the moving objects effectively and describe the object’s motion being tracked. The experiments have been performed on image sequences acquired in both indoor and outdoor environments, using one stationary camera and one web camera.
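
A minimal sketch of adaptive mixture-of-Gaussians background subtraction with morphological clean-up and contour-based blob extraction, using OpenCV, is shown below; "video.mp4", the history length and area threshold are placeholders, and this is not the authors' exact pipeline.

```python
import cv2

cap = cv2.VideoCapture("video.mp4")           # placeholder path
mog = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = mog.apply(frame)                   # adaptive MoG foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # denoise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 200:          # ignore small speckle
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```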

Keywords: background modeling, Gaussian mixture model, inter-frame difference, object detection and tracking, video surveillance

Procedia PDF Downloads 475
209 Confidence Intervals for Process Capability Indices for Autocorrelated Data

Authors: Jane A. Luke

Abstract:

Persistent pressure passed on to manufacturers from escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models. Academic and industrial circles are taking a keen interest in the field of manufacturing strategy. Many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability and innovation. The study of process capability indices (PCIs) is usually conducted assuming that the process under study is in statistical control and that independent observations are generated over time. However, in practice, it is very common to come across processes which, due to their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts: even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts, which then exhibit nonrandom patterns and lack of control. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. In this paper, the effect of autocorrelation on confidence intervals for different PCIs is included. Stationary Gaussian processes are explained, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed and computed for data that are both independent and autocorrelated, and approximate lower confidence limits for various Cpk are computed assuming an AR(1) model for the data. Simulation studies and industrial examples are considered to demonstrate the results.
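
A sketch of the point estimates and Bissell's approximate lower confidence limit, applied to simulated AR(1) data, is shown below; the specification limits and AR parameters are assumed, and the limit is derived for independent data, which is exactly the optimism the paper quantifies.

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi = 200, 0.5
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):                     # AR(1): x_t = phi*x_{t-1} + e_t
    x[t] = phi * x[t - 1] + rng.normal()
x = 10 + x                                # shift to a nominal process mean

LSL, USL = 4.0, 16.0                      # assumed specification limits
mu, s = x.mean(), x.std(ddof=1)
Cp = (USL - LSL) / (6 * s)
Cpk = min(USL - mu, mu - LSL) / (3 * s)

# Bissell's approximation assumes independence; positive autocorrelation
# inflates the true variability of Cpk-hat, making this limit optimistic.
z = 1.645                                 # 95% one-sided normal quantile
lcl = Cpk * (1 - z * np.sqrt(1 / (9 * n * Cpk**2) + 1 / (2 * (n - 1))))
print(f"Cp={Cp:.2f}  Cpk={Cpk:.2f}  Bissell 95% LCL={lcl:.2f}")
```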

Keywords: autocorrelation, AR(1) model, Bissell’s approximation, confidence intervals, statistical process control, specification limits, stationary Gaussian processes

Procedia PDF Downloads 387
208 Double Gaussian Distribution of Nonhomogeneous Barrier Height in Metal/n-type GaN Schottky Contacts

Authors: M. Mamor

Abstract:

GaN-based compounds have attracted much interest for the fabrication of high-power, high-speed and high-frequency electronic devices. Other examples of GaN-based applications are blue and ultraviolet (UV) light-emitting diodes (LEDs). All these devices require high-quality ohmic and Schottky contacts. Gaining an understanding of the electrical characteristics of metal/GaN contacts is of fundamental and technological importance for developing GaN-based devices. In this work, the barrier characteristics of Pt and Pd Schottky contacts on n-type GaN were studied using temperature-dependent forward current-voltage (I-V) measurements over the wide temperature range of 80-400 K. Our results show that the barrier height and ideality factor, extracted from the forward I-V characteristics based on the thermionic emission (TE) model, exhibit an abnormal dependence on temperature; i.e., with increasing temperature, the barrier height increases whereas the ideality factor decreases. This abnormal behavior has been explained within the TE model by considering the presence of a double Gaussian distribution (GD) of nonhomogeneous barrier heights at the metal/GaN interface. In the high-temperature range (160-400 K), the extracted value of the effective Richardson constant A* based on the barrier inhomogeneity (BHi) model is found to be in fair agreement with the theoretically predicted value of about 26.9 A.cm-2 K-2 for n-type GaN. This result indicates that in this temperature range, the conduction current transport is dominated by the thermionic emission mode. On the other hand, in the lower temperature range (80-160 K), the corresponding effective Richardson constant value according to the BHi model is lower than the theoretical value, suggesting the presence of other current transport modes, such as tunneling-assisted modes, at lower temperatures.
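
A small numerical sketch of the underlying barrier-inhomogeneity relation is given below: for a Gaussian distribution of barrier heights with mean phi_bar and standard deviation sigma, the apparent barrier extracted from I-V data is phi_ap(T) = phi_bar - sigma^2/(2 kT/q). The two (phi_bar, sigma) pairs mimicking the "double Gaussian" regimes are assumed values, not the paper's fitted parameters.

```python
import numpy as np

k_over_q = 8.617e-5                     # Boltzmann constant [eV/K]

def phi_apparent(T, phi_bar, sigma):
    """Apparent barrier height [eV] for a Gaussian barrier distribution."""
    return phi_bar - sigma**2 / (2 * k_over_q * T)

T_low = np.arange(80, 161, 20)          # low-temperature regime [K]
T_high = np.arange(160, 401, 40)        # high-temperature regime [K]
print(phi_apparent(T_low, phi_bar=0.80, sigma=0.06))   # assumed distribution 1
print(phi_apparent(T_high, phi_bar=1.10, sigma=0.12))  # assumed distribution 2
```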

Keywords: Schottky diodes, inhomogeneous barrier height, GaN semiconductors, Schottky barrier heights

Procedia PDF Downloads 54
207 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given other predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks when external x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y that are unrelated or only weakly related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations due to factors such as lighting condition and subject id from the other random variations. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.

Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction

Procedia PDF Downloads 96
206 Design Components and Reliability Aspects of Municipal Waste Water and SEIG Based Micro Hydro Power Plant

Authors: R. K. Saket

Abstract:

This paper presents design aspects and a probabilistic approach for generation reliability evaluation of an alternative resource: a municipal waste water (MWW) based micro hydro power generation system (MHPGS). Annual and daily flow duration curves have been obtained for the design, installation, development, scientific analysis and reliability evaluation of the micro hydro power plant (MHPP). The hydro potential of the waste water flowing through the sewage system of the BHU campus has been determined to produce annual and daily flow duration curves by ordering the recorded water flows from maximum to minimum values. The design pressure, the roughness of the pipe’s interior surface, the method of joining, weight, ease of installation, accessibility to the sewage system, design life, maintenance, weather conditions, availability of material, related cost and likelihood of structural damage have been considered in the design of a particular penstock for reliable operation of the MHPP. An MHPGS based on MWW and a self-excited induction generator (SEIG) is designed, developed, and practically implemented to provide reliable electric energy to a suitable load in the campus of the Banaras Hindu University, Varanasi (UP), India. Generation reliability evaluation of the developed MHPP using a Gaussian distribution approach, the safety factor concept, peak load consideration and Simpson's 1/3rd rule is presented in this paper.
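
The flow duration curve construction lends itself to a short sketch: rank recorded flows from maximum to minimum against exceedance probability, then read off a design flow. The synthetic flow record, the 40% exceedance design choice and the site head/efficiency values below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
flows = rng.gamma(shape=3.0, scale=0.02, size=365)   # stand-in daily flows [m^3/s]

sorted_q = np.sort(flows)[::-1]                      # max -> min
exceed = 100 * np.arange(1, len(sorted_q) + 1) / (len(sorted_q) + 1)

# Pick the flow exceeded ~40% of the time as an assumed design flow
q_design = np.interp(40.0, exceed, sorted_q)
rho, g, head, eff = 1000.0, 9.81, 12.0, 0.65         # assumed site values
power_kw = eff * rho * g * q_design * head / 1000
print(f"design flow {q_design:.3f} m^3/s -> ~{power_kw:.1f} kW")
```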

Keywords: self excited induction generator, annual and daily flow duration curve, sewage system, municipal waste water, reliability evaluation, Gaussian distribution, Simpson 1/3rd rule

Procedia PDF Downloads 557
205 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses

Authors: André Jesus, Yanjie Zhu, Irwanda Laory

Abstract:

Structural health monitoring is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. Namely, the widespread application of numerical models (model-based approaches) is accompanied by a widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and the discrepancy function are approximated by Gaussian processes (surrogate models). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (modular Bayesian approach). The proposed methodology has been successfully applied in fields such as geoscience, biomedicine and particle physics, but never in the SHM context. This approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, this formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters. A comparison of its performance with responses measured at different points of the structure and the associated degrees of identifiability is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered as a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability and systematic uncertainty were all recovered. For this example, the algorithm's performance was stable and considerably quicker than Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
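
A heavily simplified sketch of the model-plus-discrepancy idea is given below, under assumed toy functions and with the calibration parameter taken as known (the modular Bayesian estimation of GP hyperparameters and parameters is not reproduced): one GP surrogate absorbs the systematic model/experiment mismatch so that predictions can be bias-corrected.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
model = lambda x, theta: theta * np.sin(x)     # cheap stand-in numerical model
truth = lambda x: 1.3 * np.sin(x) + 0.2 * x    # "experiment", with a bias term

x_obs = np.linspace(0, 6, 15)
y_obs = truth(x_obs) + rng.normal(0, 0.02, x_obs.size)  # observation error

theta_hat = 1.3                                # calibrated parameter, assumed known here
resid = y_obs - model(x_obs, theta_hat)        # data for the discrepancy function

gp_delta = GaussianProcessRegressor(normalize_y=True)
gp_delta.fit(x_obs.reshape(-1, 1), resid)      # GP surrogate of the discrepancy
x_new = np.array([[3.5]])
pred = model(x_new.ravel(), theta_hat) + gp_delta.predict(x_new)
print(f"bias-corrected prediction at x=3.5: {pred[0]:.3f}")
```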

Keywords: Bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process

Procedia PDF Downloads 324
204 Flat-Top Apodization of Laser Beams by Means of Acousto-Optics

Authors: Sergey I. Chizhikov, Vladimir Y. Molchanov, Konstantin B. Yushkov

Abstract:

We demonstrate a method for adaptive spatial shaping of laser beams by means of acousto-optic Bragg diffraction. Transformation of the angular spectrum during Bragg diffraction is used to convert Gaussian intensity distribution into a flat-top one. Theoretical model is supported by the experiment.

Keywords: acousto-optics, flat top, beam shaping, Bragg diffraction

Procedia PDF Downloads 624
203 Mixed Sub-Fractional Brownian Motion

Authors: Mounir Zili

Abstract:

We will introduce a new extension of the Brownian motion that could serve as a good model of many natural phenomena. It is a linear combination of a finite number of sub-fractional Brownian motions; that is why we will call it the mixed sub-fractional Brownian motion. We will present some basic properties of this process. Among others, we will check that our process is non-Markovian and that it has non-stationary increments. We will also give the conditions under which it is a semimartingale. Finally, the main features of its sample paths will be specified.
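
A simulation sketch of such a mixture is shown below, using the sub-fractional Brownian motion covariance C(s,t) = s^2H + t^2H - ((s+t)^2H + |t-s|^2H)/2 and a Cholesky factorization; the mixing coefficients, Hurst index and two-component form (one Brownian, one sub-fractional term) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, H, a, b = 200, 1.0, 0.7, 1.0, 0.5     # grid size and assumed parameters
t = np.linspace(T / n, T, n)

s, u = np.meshgrid(t, t)
cov_sfbm = (s**(2 * H) + u**(2 * H)
            - ((s + u)**(2 * H) + np.abs(s - u)**(2 * H)) / 2)
cov_bm = np.minimum(s, u)
# Independent components: covariances add with squared coefficients
cov = a**2 * cov_bm + b**2 * cov_sfbm

L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for numerical safety
path = L @ rng.standard_normal(n)                # one sample path of a*B + b*S_H
```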

Keywords: mixed Gaussian processes, sub-fractional Brownian motion, sample paths

Procedia PDF Downloads 485
202 Mixed-Sub Fractional Brownian Motion

Authors: Mounir Zili

Abstract:

We will introduce a new extension of the Brownian motion that could serve as a good model of many natural phenomena. It is a linear combination of a finite number of sub-fractional Brownian motions; that is why we will call it the mixed sub-fractional Brownian motion. We will present some basic properties of this process. Among others, we will check that our process is non-Markovian and that it has non-stationary increments. We will also give the conditions under which it is a semimartingale. Finally, the main features of its sample paths will be specified.

Keywords: fractal dimensions, mixed Gaussian processes, sample paths, sub-fractional Brownian motion

Procedia PDF Downloads 418
201 Tracking the Effect of Ibutilide on Amplitude and Frequency of Fibrillatory Intracardiac Electrograms Using the Regression Analysis

Authors: H. Hajimolahoseini, J. Hashemi, D. Redfearn

Abstract:

Background: Catheter ablation is an effective therapy for symptomatic atrial fibrillation (AF). The intracardiac electrogram (IEGM) collected during this procedure contains precious information that has not been explored to its full capacity. Novel processing techniques allow looking at these recordings from different perspectives, which can lead to improved therapeutic approaches. In our previous study, we showed that variation in amplitude, measured through Shannon entropy, could be used as an AF recurrence risk stratification factor in patients who received Ibutilide before the electrograms were recorded. The aim of this study is to further investigate the effect of Ibutilide on the characteristics of the signals recorded from the left atrium (LA) of patients with persistent AF before and after administration of the drug. Methods: The IEGMs collected from different intra-atrial sites of 12 patients were studied and compared before and after Ibutilide administration. First, the before and after Ibutilide IEGMs that were recorded within a Euclidean distance of 3 mm in the LA were selected as pairs for comparison. For every selected pair of IEGMs, the probability distribution function (PDF) of the amplitude in the time domain and of the magnitude in the frequency domain was estimated using regression analysis. The PDF represents the relative likelihood of a variable falling within a specific range of values. Results: Our observations showed that in the time domain, the PDF of amplitudes was fitted to a Gaussian distribution, while in the frequency domain, it was fitted to a Rayleigh distribution. Our observations also revealed that after Ibutilide administration, the IEGMs have significantly narrower short-tailed PDFs, both in the time and frequency domains. Conclusion: This study shows that the PDFs of the IEGMs before and after administration of Ibutilide have significantly different properties, both in the time and frequency domains. Hence, by fitting the PDF of IEGMs in the time domain to a Gaussian distribution or in the frequency domain to a Rayleigh distribution, the effect of Ibutilide can easily be tracked using the statistics of their PDFs (e.g., standard deviation), while this is difficult from the waveform of the IEGMs itself.
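
A minimal sketch of the two fits is shown below: a Gaussian PDF fitted to amplitudes in the time domain and a Rayleigh PDF fitted to FFT magnitudes in the frequency domain. The signal is synthetic stand-in data, not a real IEGM, and the sampling rate is assumed.

```python
import numpy as np
from scipy.stats import norm, rayleigh

rng = np.random.default_rng(0)
fs = 1000                                   # sampling rate [Hz], assumed
iegm = rng.normal(0, 0.4, 4096)             # placeholder for a recorded segment

mu, sd = norm.fit(iegm)                     # time-domain Gaussian fit
mag = np.abs(np.fft.rfft(iegm))
loc, scale = rayleigh.fit(mag, floc=0)      # frequency-domain Rayleigh fit

# Narrower post-Ibutilide PDFs would appear as smaller sd / scale values
print(f"Gaussian sd: {sd:.3f}, Rayleigh scale: {scale:.1f}")
```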

Keywords: atrial fibrillation, catheter ablation, probability distribution function, time-frequency characteristics

Procedia PDF Downloads 158
200 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation

Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro

Abstract:

This study aimed to evaluate the implications of the block size and testing order on the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of treatment means (or effects) estimates. The experiment was originally designed to test 16 treatments as a series of eight 16x16 Sudoku designs, four randomized independently and four in the reverse order, to yield balance in testing order. Linear mixed models were assigned to the whole experiment with 112 testers and all their grades, as well as to their partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a mixed linear model with random tester and treatment effects and a fixed testing-order effect was assumed. Analysis of a cumulative random effects probit link model was very similar, with essentially no different conclusions, and for simplicity, we present the results using the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating the acceptance. However, providing a large number of samples can help to improve the discrimination of the samples.
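
A Python analogue of the mixed-model step is sketched below with statsmodels (the paper itself uses lmer in R); the tiny long-format table is hypothetical, and treatment is fitted as a fixed effect here for simplicity, whereas the paper treats it as random.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one hedonic grade per tester x serving
df = pd.DataFrame({
    "grade":     [7, 6, 8, 5, 7, 6, 9, 4],
    "treatment": ["A", "B", "A", "B", "A", "B", "A", "B"],
    "order":     [1, 2, 3, 4, 1, 2, 3, 4],   # serving position in the session
    "tester":    [1, 1, 1, 1, 2, 2, 2, 2],
})

# Random tester intercept, fixed testing-order effect
m = smf.mixedlm("grade ~ C(treatment) + order", df, groups=df["tester"]).fit()
print(m.summary())
```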

Keywords: acceptance, block size, mixed linear model, testing order

Procedia PDF Downloads 320
199 Performance Comparison of Non-Binary RA and QC-LDPC Codes

Authors: Ni Wenli, He Jing

Abstract:

Repeat-Accumulate (RA) codes are a subclass of LDPC codes with fast encoder structures. In this paper, we consider a non-binary extension of binary LDPC codes over GF(q) and construct a non-binary RA code and a non-binary QC-LDPC code over GF(2^4): the non-binary RA codes are constructed with a linear encoding method, and the non-binary QC-LDPC codes with algebraic construction methods. The BER performance of the RA and QC-LDPC codes over GF(q) is compared under BP decoding, by simulation over Additive White Gaussian Noise (AWGN) channels.

Keywords: non-binary RA codes, QC-LDPC codes, performance comparison, BP algorithm

Procedia PDF Downloads 374
198 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death in the world. Therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pains together with changes in the ST segment and T wave of the ECG occur shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings is constituted, containing a set of records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs, the pre-inflation ECG acquired before any catheter insertion and the occlusion ECG acquired during balloon inflation, are analyzed for each patient. By using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features for the detection of acute myocardial ischemia are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events by using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by using the grid-search method and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain the optimal classification performance. Implementing the developed classification technique on real ECG recordings shows that the proposed technique provides highly reliable detections of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based only on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint PDF of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to provide detection of outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of different threshold values. For different discrimination threshold values and numbers of ECG segments, probability of detection and probability of false alarm values are computed, and the corresponding ROC curves are obtained. The results indicate that increasing the number of ECG segments provides higher performance for GMM based classification. Moreover, the comparison between the performances of SVM and GMM based classification showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
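
A sketch of the GMM / Neyman-Pearson stage, with synthetic stand-in features, is shown below: a GMM is fitted to one state's ST-T features, segments are scored by average log-likelihood, and sweeping the threshold traces out an ROC curve. The feature dimensions and component count are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder ST-T feature vectors for the modelled (reference) state
ref_train = rng.normal(0, 1, (400, 4))
gmm = GaussianMixture(n_components=3, random_state=0).fit(ref_train)

ref_test = rng.normal(0, 1, (100, 4))     # segments from the modelled state
outlier_test = rng.normal(2, 1, (100, 4)) # segments from the other state
scores = np.concatenate([gmm.score_samples(ref_test),
                         gmm.score_samples(outlier_test)])
labels = np.concatenate([np.ones(100), np.zeros(100)])  # 1 = modelled state

# Each threshold on the log-likelihood gives one (false alarm, detection) pair
fpr, tpr, thr = roc_curve(labels, scores)
```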

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 160
197 Design of an Automated Deep Learning Recurrent Neural Networks System Integrated with IoT for Anomaly Detection in Residential Electric Vehicle Charging in Smart Cities

Authors: Wanchalerm Patanacharoenwong, Panaya Sudta, Prachya Bumrungkun

Abstract:

The paper focuses on the development of a system that combines Internet of Things (IoT) technologies and deep learning algorithms for anomaly detection in residential electric vehicle (EV) charging in smart cities. With the increasing number of EVs, ensuring efficient and reliable charging systems has become crucial. The aim of this research is to develop an integrated IoT and deep learning system for detecting anomalies in residential EV charging and enhancing EV load profiling and event detection in smart cities. This approach utilizes IoT devices equipped with infrared cameras to collect thermal images and household EV charging profiles from the database of the Thailand utility, subsequently transmitting these data to a cloud database for comprehensive analysis. The methodology includes the use of advanced deep learning techniques such as Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) algorithms, together with feature-based Gaussian mixture models for EV load profiling and event detection. The feature-based Gaussian mixture model aids in identifying unique power consumption patterns among EV owners and outperforms existing models in event detection accuracy. The research findings demonstrate the effectiveness of the developed system in detecting anomalies and critical profiles in EV charging behavior. The system provides timely alarms to users regarding potential issues and categorizes the severity of detected problems based on a health index for each charging device. This research contributes to the field by showcasing the potential of integrating IoT and deep learning techniques in managing residential EV charging in smart cities, ensuring operational safety and efficiency while promoting sustainable energy management.
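
A minimal LSTM sketch for the time-series side is given below (an assumed architecture, not the authors' exact network): the model predicts the next sample of a charging profile, and windows with high prediction error would be flagged as potential anomalies.

```python
import torch
import torch.nn as nn

class ChargeLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict the next sample

model = ChargeLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-in for household EV charging windows
x = torch.randn(64, 48, 1)                # 48 past samples per window
y = torch.randn(64, 1)                    # next sample to predict
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
# At inference, windows whose prediction error exceeds a threshold are flagged.
```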

Keywords: cloud computing framework, recurrent neural networks, long short-term memory, IoT, EV charging, smart grids

Procedia PDF Downloads 63
196 Bias in the Estimation of Covariance Matrices and Optimality Criteria

Authors: Juan M. Rodriguez-Diaz

Abstract:

The precision of parameter estimators in the Gaussian linear model is traditionally accounted for by the variance-covariance matrix of the asymptotic distribution. However, this measure can underestimate the true variance, especially for small samples. Traditionally, optimal design theory pays attention to this variance through its relationship with the model's information matrix. For this reason it seems convenient, at least in some cases, to adapt the optimality criteria in order to get the best designs for the actual variance structure; otherwise, the loss in efficiency of the designs obtained with the traditional approach may be considerable.

Keywords: correlated observations, information matrix, optimality criteria, variance-covariance matrix

Procedia PDF Downloads 441