Search results for: Gaussian process priors


5538 Applications of Stable Distributions in Time Series Analysis, Computer Sciences and Financial Markets

Authors: Mohammad Ali Baradaran Ghahfarokhi, Parvin Baradaran Ghahfarokhi

Abstract:

In this paper, we first introduce the stable distribution, the stable process and their characteristics. The α-stable distribution family has received great interest in the last decade due to its success in modeling data that are too impulsive to be accommodated by the Gaussian distribution. In the second part, we present major applications of the α-stable distribution in telecommunications, computer science (such as network delays and signal processing) and financial markets. Finally, we focus on using the stable distribution to estimate measures of risk in stock markets and illustrate the results on simulated data with statistical software.
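
A minimal sketch of the risk-estimation idea, assuming illustrative parameter values (alpha = 1.7, scale = 0.01) rather than anything fitted in the paper: simulate alpha-stable returns with SciPy and read off an empirical Value-at-Risk.

```python
# Sketch only: simulate heavy-tailed returns and estimate VaR empirically.
# alpha, beta, loc and scale below are assumed values for illustration.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, beta = 1.7, 0.0          # heavy tails; beta = 0 gives the symmetric SaS case
returns = levy_stable.rvs(alpha, beta, loc=0.0, scale=0.01,
                          size=100_000, random_state=rng)

# 1-day 99% VaR: the loss magnitude exceeded on only 1% of days.
var_99 = -np.percentile(returns, 1)
print(f"99% VaR: {var_99:.4f}")
# With alpha < 2 the variance is infinite, so this empirical VaR is
# noticeably larger than a Gaussian of similar scale would suggest.
```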

Keywords: stable distribution, SαS, infinite variance, heavy-tail networks, VaR.

5537 The Extension of Monomeric Computational Results to Polymeric Measurable Properties: An Introductory Computational Chemistry Experiment

Authors: Zhao Jing, Bai Yongqing, Shi Qiaofang, Zang Yang, Zhang Huaihao

Abstract:

Advances in software technology have enabled computational chemistry to be widely applied in various research fields, especially in pedagogy. To expand and improve experimental instruction in computational chemistry for undergraduates, we designed an introductory experiment: research on the molecular structure and physicochemical properties of acrylamide. Initially, students construct molecular models of acrylamide and polyacrylamide in the Gaussian and Materials Studio software packages, respectively. Then, the infrared spectral data, atomic charges and molecular orbitals of acrylamide, as well as the solvation effect of polyacrylamide, are calculated to predict their physicochemical performance. Finally, rheological experiments are used to validate these predictions. Through the combination of molecular simulation (performed in Gaussian and Materials Studio) with experimental verification (rheology experiments), learners gain a deep understanding of the chemical nature of acrylamide and polyacrylamide, achieving good learning outcomes.

Keywords: Upper-division undergraduate, computer-based learning, laboratory instruction, amides, molecular modeling, spectroscopy.

5536 Contrast Enhancement in Digital Images Using an Adaptive Unsharp Masking Method

Authors: Z. Mortezaie, H. Hassanpour, S. Asadi Amiri

Abstract:

Captured images may suffer from Gaussian blur due to poor lens focus or camera motion. Unsharp masking is a simple and effective technique for boosting image contrast and improving digital images that suffer from Gaussian blur. The technique is based on sharpening object edges by appending the scaled high-frequency components of the image to the original. The quality of the enhanced image depends strongly on the characteristics of both the high-frequency components and the scaling/gain factor. Since the quality of an image may not be uniform throughout, we propose an adaptive unsharp masking method in this paper. In this method, the gain factor is computed from the gradient variations at individual pixels of the image. Subjective and objective image quality assessments are used to compare the proposed method with both classic and recently developed unsharp masking methods. The experimental results show that the proposed method performs better than the other existing methods.
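
The adaptive-gain idea can be sketched as below; the specific gain rule (linear in the normalised gradient magnitude, between assumed bounds k_min and k_max) is an illustration, not the paper's formula.

```python
# Sketch of gradient-adaptive unsharp masking under assumed gain bounds.
import numpy as np
from scipy import ndimage

def adaptive_unsharp(img, sigma=2.0, k_min=0.5, k_max=2.0):
    img = img.astype(float)
    blurred = ndimage.gaussian_filter(img, sigma)
    high_freq = img - blurred                    # scaled high-frequency part
    # Local gradient magnitude, normalised to [0, 1].
    gx, gy = ndimage.sobel(img, 0), ndimage.sobel(img, 1)
    grad = np.hypot(gx, gy)
    grad /= grad.max() + 1e-12
    # Per-pixel gain: boost flat regions more, strong edges less (assumed rule).
    gain = k_max - (k_max - k_min) * grad
    return np.clip(img + gain * high_freq, 0, 255)   # assumes 8-bit range
```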

Keywords: Unsharp masking, blurred image, sub-region gradient, image enhancement.

5535 3D Liver Segmentation from CT Images Using a Level Set Method Based on a Shape and Intensity Distribution Prior

Authors: Nuseiba M. Altarawneh, Suhuai Luo, Brian Regan, Guijin Tang

Abstract:

Liver segmentation from medical images poses more challenges than analogous segmentations of other organs. This contribution introduces a liver segmentation method for a series of computed tomography images. Overall, we present a novel method for segmenting the liver by coupling density matching with shape priors. Density matching is a tracking method that operates by maximizing the Bhattacharyya similarity measure between the photometric distribution of an estimated image region and a model photometric distribution. Density matching controls the direction of the evolution process and slows down the evolving contour in regions with weak edges. The shape prior improves the robustness of density matching and discourages the evolving contour from exceeding the liver's boundaries in regions with weak edges. The model is implemented using a modified distance regularized level set (DRLS) model. The experimental results show that the method achieves satisfactory results. Compared with the original DRLS model, the proposed model is more effective in addressing the over-segmentation problem. Finally, we gauge the performance of our model using metrics comprising accuracy, sensitivity and specificity.
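
The Bhattacharyya similarity that drives the density matching can be computed as follows; the histogram binning is an assumed implementation detail, and the level-set evolution itself is not shown.

```python
# Sketch: Bhattacharyya similarity between a region's intensity histogram
# and a model histogram (1 = identical distributions, 0 = disjoint).
import numpy as np

def bhattacharyya(region_pixels, model_pixels, bins=64, lo=0, hi=255):
    p, _ = np.histogram(region_pixels, bins=bins, range=(lo, hi))
    q, _ = np.histogram(model_pixels, bins=bins, range=(lo, hi))
    p = p / (p.sum() + 1e-12)      # normalise to probability distributions
    q = q / (q.sum() + 1e-12)
    return np.sum(np.sqrt(p * q))  # the quantity the contour evolution maximises
```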

Keywords: Bhattacharyya distance, distance regularized level set (DRLS) model, liver segmentation, level set method.

5534 Maximizer of the Posterior Marginal Estimate for Noise Reduction of JPEG-compressed Image

Authors: Yohei Saika, Yuji Haraguchi

Abstract:

We construct a noise reduction method for JPEG-compressed images based on Bayesian inference using the maximizer of the posterior marginal (MPM) estimate. In this method, we apply the MPM estimate with two kinds of likelihood, both of which model grayscale images degraded by lossy JPEG compression. One is a deterministic model of the likelihood and the other a probabilistic one expressed by the Gaussian distribution. Then, using Monte Carlo simulation for grayscale images, such as the 256-grayscale standard image "Lena" with 256 × 256 pixels, we examine the performance of the MPM estimate with the mean square error as the performance measure. We find that the MPM estimate with the Gaussian probabilistic likelihood is effective for reducing noise, such as blocking artifacts and mosquito noise, if the parameters are set appropriately. On the other hand, the MPM estimate with the deterministic likelihood is not effective for noise reduction due to the low acceptance ratio of the Metropolis algorithm.

Keywords: Noise reduction, JPEG-compressed image, Bayesian inference, the maximizer of the posterior marginal estimate

5533 Using Linear Quadratic Gaussian Optimal Control for Lateral Motion of Aircraft

Authors: A. Maddi, A. Guessoum, D. Berkani

Abstract:

The purpose of this paper is to provide a practical example of the Linear Quadratic Gaussian (LQG) controller. This includes a description and some discussion of the discrete Kalman state estimator. One aspect of its optimality is that the estimator incorporates all the information that can be provided to it. It processes all available measurements, regardless of their precision, to estimate the current values of the variables of interest, using knowledge of the system and measurement device dynamics, the statistical description of the system noises and measurement errors, and the uncertainty in the dynamics models. Since its introduction, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. For example, to determine the velocity of an aircraft or its sideslip angle, one could use a Doppler radar, the velocity indications of an inertial navigation system, or the relative wind information in the air data system. Rather than ignore any of these outputs, a Kalman filter can be built to combine all of this data with knowledge of the various systems' dynamics to generate an overall best estimate of velocity and sideslip angle.
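
A minimal sketch of the sensor-fusion idea with a discrete Kalman filter; the state model, measurement matrix and noise covariances below are illustrative assumptions, not aircraft data.

```python
# Sketch: one Kalman step fusing three noisy measurements, two velocity
# sensors of different precision plus one sideslip sensor (assumed values).
import numpy as np

F = np.eye(2)                      # state transition: [velocity, sideslip]
Q = np.diag([0.1, 0.01])           # process noise covariance (assumed)
H = np.array([[1.0, 0.0],          # e.g. Doppler radar: velocity
              [1.0, 0.0],          # e.g. inertial system: velocity again
              [0.0, 1.0]])         # e.g. air data system: sideslip
R = np.diag([4.0, 1.0, 0.05])      # per-sensor noise variances (assumed)

def kalman_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q              # predict
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # gain weights sensors by precision
    return x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

x, P = kalman_step(np.zeros(2), np.eye(2) * 10.0, np.array([51.0, 49.5, 0.02]))
print(x)   # combined best estimate of velocity and sideslip
```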

Keywords: Aircraft motion, Kalman filter, LQG control, Lateral stability, State estimator.

5532 Optimal Control Strategies for Speed Control of Permanent-Magnet Synchronous Motor Drives

Authors: Roozbeh Molavi, Davood A. Khaburi

Abstract:

The permanent magnet synchronous motor (PMSM) is useful in many applications, and vector control is a popular way to control it. In this paper, an optimal vector control for the PMSM is first designed and its results are compared with conventional vector control. Then, assuming the measurements are noisy, linear quadratic Gaussian (LQG) methodology is used to filter the noise. The results of the noisy optimal vector control and the filtered optimal vector control are compared with each other. The nonlinearity of the PMSM and the presence of an inverter in its control circuit make the system nonlinear and time-variant. By deriving an average model, the system becomes nonlinear but time-invariant, and the nonlinear system is then converted to a linear one by linearizing the model around average values. This model is used to optimize the vector control, and the two optimal vector controls are compared with each other. Simulation results show that the performance and the noise robustness of the control system are highly improved.
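
The LQR half of an LQG design can be sketched as below on an assumed two-state linearised model (not the paper's PMSM parameters); pairing this gain with a Kalman state estimate completes the LQG loop.

```python
# Sketch: LQR state-feedback gain from the algebraic Riccati equation.
# A, B, Q, R are illustrative assumptions, not PMSM parameters.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-1.0, 0.5],
              [-0.5, -2.0]])          # assumed linearised dynamics
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])              # state-error weighting
R = np.array([[0.1]])                 # control-effort weighting

P = solve_continuous_are(A, B, Q, R)  # solve the Riccati equation
K = np.linalg.solve(R, B.T @ P)       # optimal state feedback u = -K x
print(K)
```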

Keywords: Kalman filter, Linear quadratic Gaussian (LQG), Linear quadratic regulator (LQR), Permanent-Magnet synchronous motor (PMSM).

5531 Cash Flow Optimization on Synthetic CDOs

Authors: Timothée Bligny, Clément Codron, Antoine Estruch, Nicolas Girodet, Clément Ginet

Abstract:

Collateralized Debt Obligations are not as widely used nowadays as they were before the 2007 subprime crisis. Nonetheless, there remains an enthralling challenge in optimizing the cash flows associated with synthetic CDOs. A Gaussian-based model is used here, in which default correlation and unconditional probabilities of default are highlighted. Numerous simulations are then performed with this model for different scenarios in order to evaluate the associated cash flows given a specific number of defaults at different periods of time. Cash flows are not calculated on a single bought or sold tranche but rather on a combination of bought and sold tranches. Under some assumptions, the simplex algorithm gives a way to find the maximum cash flow according to default correlation and maturities. The Gaussian model used is not realistic in crisis situations, and the present system does not handle buying or selling a portion of a tranche, only whole tranches. Nevertheless, the work provides the investor with relevant elements on what to buy and sell, and when.
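
A minimal sketch of the simplex step with scipy.optimize.linprog; the expected cash flows and exposure limit are invented numbers, and the continuous bounds are a relaxation of the paper's whole-tranche restriction.

```python
# Sketch: choose tranche positions (bought positive, sold negative) to
# maximise expected cash flow under an assumed total-exposure limit.
import numpy as np
from scipy.optimize import linprog

exp_cash = np.array([0.8, 1.1, 0.6])    # per-tranche expected cash flow (assumed)
c = -exp_cash                            # linprog minimises, so negate

# Assumed constraint: |sum of positions| bounded by 2 notional units.
A_ub = [np.ones(3), -np.ones(3)]
b_ub = [2.0, 2.0]
bounds = [(-1, 1)] * 3                   # one unit long or short per tranche

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)                   # optimal positions, maximum cash flow
```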

Keywords: Synthetic Collateralized Debt Obligation (CDO), Credit Default Swap (CDS), Cash Flow Optimization, Probability of Default, Default Correlation, Strategies, Simulation, Simplex.

5530 Effect Comparison of Speckle Noise Reduction Filters on 2D-Echocardiographic Images

Authors: Faten A. Dawood, Rahmita W. Rahmat, Suhaini B. Kadiman, Lili N. Abdullah, Mohd D. Zamrin

Abstract:

Echocardiography is one of the most common diagnostic tests widely used for assessing abnormalities of regional heart ventricle function. The main goal of the image enhancement task in 2D-echocardiography (2DE) is to address two major problems: speckle noise and low quality. Speckle noise reduction is therefore an important pre-processing step for reducing distortion effects in 2DE image segmentation. In this paper, we present the common filters based on some form of low-pass spatial smoothing, such as the mean, Gaussian and median filters. The Laplacian filter is used as a high-pass sharpening filter. A comparative analysis is presented to test the effectiveness of these filters after applying them to original 2DE images of 4-chamber and 2-chamber views. Three statistical quality measures, the root mean square error (RMSE), the peak signal-to-noise ratio (PSNR) and the signal-to-noise ratio (SNR), are used to evaluate the filter performance quantitatively on the enhanced output image.
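
A minimal sketch of the filter comparison, assuming a simple multiplicative noise model in place of clinical 2DE data.

```python
# Sketch: apply the three smoothing filters to a speckle-like noisy image
# and score them with RMSE and PSNR. The noise model is an assumption.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
clean = rng.uniform(50, 200, size=(128, 128))
noisy = clean * (1 + 0.2 * rng.standard_normal(clean.shape))  # speckle-like

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def psnr(a, b, peak=255.0):
    return 20 * np.log10(peak / rmse(a, b))

filters = {
    "mean":     ndimage.uniform_filter(noisy, size=3),
    "gaussian": ndimage.gaussian_filter(noisy, sigma=1.0),
    "median":   ndimage.median_filter(noisy, size=3),
}
for name, out in filters.items():
    print(f"{name:8s} RMSE={rmse(clean, out):6.2f}  PSNR={psnr(clean, out):5.2f} dB")
```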

Keywords: Gaussian operator, median filter, speckle texture, peak signal-to-noise ratio

5529 Blind Identification of MA Models Using Cumulants

Authors: Mohamed Boulouird, Moha M'Rabet Hassani

Abstract:

In this paper, several techniques for the blind identification of moving average (MA) processes are presented. These methods utilize third- and fourth-order cumulants of the noisy observations of the system output. The system is driven by an independent and identically distributed (i.i.d.) non-Gaussian sequence that is not observed. Two nonlinear optimization algorithms, namely gradient descent and Gauss-Newton, are presented. An algorithm based on the joint diagonalization of the fourth-order cumulant matrices (FOSI) is also considered, as well as an improved version of the classical C(q, 0, k) algorithm based on the choice of the best 1-D slice of fourth-order cumulants. Various simulation examples illustrate the effectiveness of our methods.
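
The sample cumulant estimates these methods start from can be sketched as follows, assuming a zero-mean series and non-negative lags.

```python
# Sketch: sample third- and fourth-order cumulants of a zero-mean series.
import numpy as np

def cum3(x, t1, t2):
    # E[x(t) x(t+t1) x(t+t2)] estimated over the valid overlap (t1, t2 >= 0).
    n = len(x) - max(0, t1, t2)
    return np.mean(x[:n] * x[t1:t1 + n] * x[t2:t2 + n])

def cum4(x, t1, t2, t3):
    # Fourth-order cumulant: the fourth moment minus its Gaussian (pairwise)
    # part; it vanishes for Gaussian data, which is why a non-Gaussian
    # driving sequence is required for identification.
    n = len(x) - max(0, t1, t2, t3)
    m4 = np.mean(x[:n] * x[t1:t1 + n] * x[t2:t2 + n] * x[t3:t3 + n])
    r = lambda t: np.mean(x[:len(x) - abs(t)] * x[abs(t):])  # autocorrelation
    return m4 - r(t1) * r(t3 - t2) - r(t2) * r(t3 - t1) - r(t3) * r(t2 - t1)
```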

Keywords: Cumulants, Identification, MA models, Parameter estimation

5528 Object-Centric Process Mining Using Process Cubes

Authors: Anahita Farhang Ghahfarokhi, Alessandro Berti, Wil M.P. van der Aalst

Abstract:

Process mining provides ways to analyze business processes. Common process mining techniques consider the process as a whole. However, in real-life business processes, different behaviors exist that make the overall process too complex to interpret. Process comparison is a branch of process mining that isolates different behaviors of the process from each other by using process cubes. Process cubes organize event data along different dimensions. Each cell contains a set of events that can be used as input to process mining techniques. Existing work on process cubes assumes a single case notion. However, in real processes, several case notions (e.g., order, item, package) are intertwined. Object-centric process mining is a new branch of process mining that addresses multiple case notions in a process. To build a bridge between object-centric process mining and process comparison, we propose a process cube framework that supports process cube operations such as slice and dice on object-centric event logs. To facilitate comparison, the framework is integrated with several object-centric process discovery approaches.
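
A minimal sketch of slice and dice on a flattened event table with pandas; the column names and events are illustrative, not the framework's actual API.

```python
# Sketch: process-cube slice and dice as filters over an event table.
import pandas as pd

events = pd.DataFrame({
    "activity":    ["create order", "pick item", "pack", "ship"],
    "object_type": ["order", "item", "package", "package"],
    "month":       ["2021-01", "2021-01", "2021-02", "2021-02"],
})

# Slice: fix one dimension to a single value (keep only 'package' events).
cell_slice = events[events["object_type"] == "package"]

# Dice: restrict several dimensions to subsets at once.
cell_dice = events[events["object_type"].isin(["order", "item"])
                   & (events["month"] == "2021-01")]

# Each resulting cell is a sub-log that a discovery algorithm can consume.
print(cell_slice, cell_dice, sep="\n")
```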

Keywords: Process mining, multidimensional process mining, multi-perspective business processes, OLAP, process cubes, process discovery.

5527 An Approach for Reducing the Computational Complexity of LAMSTAR Intrusion Detection System using Principal Component Analysis

Authors: V. Venkatachalam, S. Selvan

Abstract:

The security of computer networks plays a strategic role in modern computer systems. Intrusion Detection Systems (IDS) act as a 'second line of defense' placed inside a protected network, looking for known or potential threats in network traffic and/or audit data recorded by hosts. We developed an Intrusion Detection System using a LAMSTAR neural network to learn patterns of normal and intrusive activities and to classify observed system activities, and we compared the performance of the LAMSTAR IDS with other classification techniques using five classes of KDDCup99 data. The LAMSTAR IDS gives better performance, at the cost of high computational complexity and long training and testing times, compared to other classification techniques (binary tree, RBF and Gaussian mixture classifiers). We further reduced the computational complexity of the LAMSTAR IDS by reducing the dimension of the data using principal component analysis, which in turn reduces the training and testing times with almost the same performance.
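
A minimal sketch of the dimensionality-reduction step, with synthetic data standing in for KDDCup99 and an off-the-shelf MLP standing in for the LAMSTAR network (which common libraries do not provide).

```python
# Sketch: project features onto principal components before classification.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in: 41 features, 5 classes, echoing the KDDCup99 shape.
X, y = make_classification(n_samples=2000, n_features=41, n_informative=12,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pca = PCA(n_components=12).fit(X_tr)                 # 41 -> 12 dimensions
clf = MLPClassifier(max_iter=500, random_state=0)    # stand-in for LAMSTAR
clf.fit(pca.transform(X_tr), y_tr)
print("accuracy:", clf.score(pca.transform(X_te), y_te))
```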

Keywords: Binary Tree Classifier, Gaussian Mixture, Intrusion Detection System, LAMSTAR, Radial Basis Function.

5526 Generating Normally Distributed Clusters by Means of a Self-organizing Growing Neural Network – An Application to Market Segmentation

Authors: Reinhold Decker, Christian Holsing, Sascha Lerke

Abstract:

This paper presents a new growing neural network for cluster analysis and market segmentation, which optimizes the size and structure of clusters by iteratively checking them for multivariate normality. We combine the recently published SGNN approach [8] with the basic principle underlying the Gaussian-means algorithm [13] and the Mardia test for multivariate normality [18, 19]. The new approach is distinguished from existing ones by its holistic design and its great autonomy regarding the clustering process as a whole. Its performance is demonstrated by means of synthetic 2D data and by real lifestyle survey data usable for market segmentation.
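
Mardia's two test statistics can be sketched as below; this follows the standard definitions of multivariate skewness and kurtosis rather than any code from the paper.

```python
# Sketch: Mardia's multivariate-normality statistics for a cluster X (n x p).
import numpy as np
from scipy import stats

def mardia(X):
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(Xc, rowvar=False, bias=True))  # MLE covariance
    G = Xc @ S_inv @ Xc.T                       # Mahalanobis inner products
    b1 = (G ** 3).sum() / n**2                  # multivariate skewness
    b2 = (np.diag(G) ** 2).mean()               # multivariate kurtosis
    skew_stat = n * b1 / 6                      # ~ chi2 with p(p+1)(p+2)/6 df
    kurt_stat = (b2 - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)  # ~ N(0,1)
    df = p * (p + 1) * (p + 2) / 6
    return stats.chi2.sf(skew_stat, df), 2 * stats.norm.sf(abs(kurt_stat))

p_skew, p_kurt = mardia(np.random.default_rng(0).normal(size=(500, 2)))
print(p_skew, p_kurt)   # large p-values: consistent with normality
```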

Keywords: Artificial neural network, clustering, multivariate normality, market segmentation, self-organization

5525 Modelling of Electron States in Quantum-Wire Systems - Influence of Stochastic Effects on the Confining Potential

Authors: Mikhail Vladimirovich Deryabin, Morten Willatzen

Abstract:

In this work, we address theoretically the influence of red and white Gaussian noise on the electronic energies and eigenstates of cylindrically shaped quantum dots. The stochastic effect can be imagined as resulting from statistical fluctuations in the quantum-dot material composition during crystal growth. In particular, we obtain analytical expressions for the eigenvalue shifts and electronic envelope functions in the k·p formalism due to stochastic variations in the confining band-edge potential. It is shown that white noise in the band-edge potential leaves electronic properties almost unaffected, while red noise may lead to changes in state energies and envelope-function amplitudes of several percent. In the latter case, the ensemble-averaged envelope function decays as a function of distance. It is also shown that, in a stochastic system, constant ensemble-averaged envelope functions are the only bounded solutions for the infinite quantum-wire problem and that the energy spectrum is completely discrete. In other words, the infinite stochastic quantum wire behaves, ensemble-averaged, as an atom.

Keywords: cylindrical quantum dots, electronic eigenenergies, red and white Gaussian noise, ensemble averaging effects.

5524 Simulation of PM10 Source Apportionment at an Urban Site in Southern Taiwan by a Gaussian Trajectory Model

Authors: Chien-Lung Chen, Jeng-Lin Tsai, Feng-Chao Chung, Su-Ching Kuo, Kuo-Hsin Tseng, Pei-Hsuan Kuo, Li-Ying Hsieh, Ying I. Tsai

Abstract:

This study applied the Gaussian trajectory transfer-coefficient model (GTx) to simulate particulate matter concentrations and source apportionments at the Nanzih Air Quality Monitoring Station in southern Taiwan from November 2007 to February 2008. The correlation coefficient between the observed and calculated daily PM10 concentrations is 0.5 and the absolute bias of the PM10 concentrations is 24%; the simulated PM10 concentrations matched the observed data well. Although the PM10 emission rate was dominated by area sources (58%), the source apportionment results indicated that the primary sources of PM10 at Nanzih Station were point sources (42%), area sources (20%) and the upwind boundary concentration (14%). The notable difference in PM10 source apportionment between episode and non-episode days lay in the upwind boundary concentration, which contributed 20% and 11% of PM10, respectively. Gas-to-particle conversion of secondary aerosol and long-range transport played crucial roles in the PM10 contribution at the receptor.

Keywords: back trajectory model, particulate matter, source apportionment

5523 Applying a Noise Reduction Method to Reveal Chaos in the River Flow Time Series

Authors: Mohammad H. Fattahi

Abstract:

Chaotic analysis was performed on river flow time series before and after applying wavelet-based de-noising techniques in order to investigate the effect of noise content on the chaotic nature of the flow series. In this study, 38 years of monthly runoff data from three gauging stations were used. The gauging stations are located in the Ghar-e-Aghaj river basin, Fars province, Iran. The noise level of the time series was estimated with the aid of the Gaussian kernel algorithm. This step was found to be crucial in preventing the removal of vital features such as memory, correlation and trend from the time series, in addition to the noise, during the de-noising process.
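
A minimal sketch of wavelet de-noising with PyWavelets, assuming a common wavelet family (db4), decomposition level and the universal soft threshold; the study's exact choices may differ.

```python
# Sketch: wavelet de-noising of a 1-D series with a universal threshold.
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise scale from the finest detail coefficients (robust MAD estimate).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))          # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                              for c in coeffs[1:]]
    # Soft-thresholding the details keeps trend and long memory in the
    # approximation coefficients while suppressing the noise floor.
    return pywt.waverec(denoised, wavelet)[:len(x)]
```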

Keywords: Chaotic behavior, wavelet, noise reduction, river flow.

5522 Adaptive Square-Rooting Companding Technique for PAPR Reduction in OFDM Systems

Authors: Wisam F. Al-Azzo, Borhanuddin Mohd. Ali

Abstract:

This paper addresses the problem of the peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing (OFDM) systems. It also introduces a new PAPR reduction technique based on an adaptive square-rooting (SQRT) companding process. The SQRT process of the proposed technique changes the statistical characteristics of the OFDM output signals from a Rayleigh distribution to a Gaussian-like distribution. This change in statistical distribution alters both the peak and average power values of OFDM signals and consequently reduces the PAPR significantly. For a 64-QAM OFDM system using 512 subcarriers, up to 6 dB reduction in PAPR was achieved by the square-rooting technique with a fixed degradation in bit error rate (BER) equal to 3 dB. The PAPR is reduced at the expense of only -15 dB out-of-band spectral shoulder re-growth below the in-band signal level. The proposed adaptive SQRT technique is superior in terms of BER performance to the original, non-adaptive, square-rooting technique when the required reduction in PAPR is no more than 5 dB. It also provides a fixed amount of PAPR reduction, which is not available in the original SQRT technique.
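
A minimal sketch of PAPR measurement and plain (non-adaptive) square-root companding on one OFDM symbol; the adaptive gain scheduling of the paper is not shown.

```python
# Sketch: PAPR of a 512-subcarrier OFDM symbol before and after SQRT companding.
import numpy as np

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
N = 512                                          # subcarriers
qam = rng.choice([-3, -1, 1, 3], N) + 1j * rng.choice([-3, -1, 1, 3], N)
x = np.fft.ifft(qam) * np.sqrt(N)                # OFDM time-domain symbol

# SQRT companding: compress the envelope, keep the phase.
y = np.sqrt(np.abs(x)) * np.exp(1j * np.angle(x))
print(f"PAPR before: {papr_db(x):.2f} dB, after: {papr_db(y):.2f} dB")
```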

Keywords: complementary cumulative distribution function (CCDF), OFDM, peak-to-average power ratio (PAPR), adaptive square-rooting PAPR reduction technique.

5521 On Bayesian Analysis of Failure Rate under Topp Leone Distribution using Complete and Censored Samples

Authors: N. Feroze, M. Aslam

Abstract:

The article is concerned with the analysis of the failure rate (shape parameter) under the Topp Leone distribution in a Bayesian framework. Different loss functions and a couple of noninformative priors have been assumed for posterior estimation. The posterior predictive distributions have also been derived. A simulation study has been carried out to compare the performance of different estimators. A real-life example has been used to illustrate the applicability of the results obtained. The findings of the study suggest that the precautionary loss function based on the Jeffreys prior and singly type II censored samples can effectively be employed to obtain the Bayes estimate of the failure rate under the Topp Leone distribution.
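
For the complete-sample case the estimator has a closed form, derived here rather than taken from the paper: the Topp Leone likelihood is proportional to θ^n exp(-θT) with T = -Σ log(x_i(2-x_i)), so the Jeffreys prior π(θ) ∝ 1/θ gives a Gamma(n, T) posterior, and the precautionary-loss Bayes estimate is sqrt(E[θ²]) = sqrt(n(n+1))/T. A sketch under these assumptions:

```python
# Sketch: Bayes estimate of the Topp Leone shape parameter (complete sample,
# Jeffreys prior, precautionary loss); derivation as stated above.
import numpy as np

def bayes_failure_rate(x):
    # x: complete sample on (0, 1); note x(2-x) is in (0, 1), so T > 0.
    n = len(x)
    T = -np.sum(np.log(x * (2 - x)))
    # Precautionary loss: estimator sqrt(E[theta^2]) = sqrt(n(n+1)) / T.
    return np.sqrt(n * (n + 1)) / T

x = np.random.default_rng(0).beta(2, 3, size=50)   # illustrative data only
print(bayes_failure_rate(x))
```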

Keywords: loss functions, type II censoring, posterior distribution, Bayes estimators.

5520 A Robust Wavelet-Based Watermarking Algorithm Using Edge Detection

Authors: John N. Ellinas

Abstract:

In this paper, a robust watermarking algorithm using the wavelet transform and edge detection is presented. The efficiency of an image watermarking technique depends on the preservation of visually significant information. This is attained by embedding the watermark transparently with the maximum possible strength. The watermark embedding process is carried out over the subband coefficients that lie on edges, where distortions are less noticeable, with a subband-level-dependent strength. In addition, the watermark is embedded into selected coefficients around edges, captured by a morphological dilation operation, using a different scale factor for the watermark strength. The experimental evaluation of the proposed method shows very good results in terms of robustness and transparency against various attacks such as median filtering, Gaussian noise, JPEG compression and geometrical transformations.

Keywords: Watermarking, wavelet transform, edge detection.

5519 A Pairwise-Gaussian-Merging Approach: Towards Genome Segmentation for Copy Number Analysis

Authors: Chih-Hao Chen, Hsing-Chung Lee, Qingdong Ling, Hsiao-Jung Chen, Sun-Chong Wang, Li-Ching Wu, H.C. Lee

Abstract:

Segmentation, filtering out of measurement errors and identification of breakpoints are integral parts of any analysis of microarray data for the detection of copy number variation (CNV). Existing algorithms designed for these tasks have had some success in the past, but they tend to be O(N^2) in computation time or memory requirement, or both, and the rapid advance of microarray resolution has practically rendered such algorithms useless. Here we propose an algorithm, SAD, that is much faster and much less memory-hungry, O(N) in both computation time and memory requirement, and offers higher accuracy. The two key ingredients of SAD are the fundamental assumption in statistics that measurement errors are normally distributed and the mathematical fact that the product of two Gaussians is another Gaussian (function). We have produced a computer program for analyzing CNV based on SAD. In addition to being fast and small, it offers two important features: quantitative statistics for predictions and, with only two user-decided parameters, ease of use. Its speed shows little dependence on the genomic profile. Running on an average modern computer, it completes CNV analyses for a 262 thousand-probe array in about 1 second and for a 1.8 million-probe array in 9 seconds.
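
The product-of-Gaussians identity at the heart of the merging step can be written in a few lines; the wrapper below is an illustration, not the SAD program itself.

```python
# Sketch: the product of two Gaussian densities is, up to normalisation,
# another Gaussian, which lets adjacent probe segments be merged in O(1).
def merge_gaussians(m1, v1, m2, v2):
    """Product N(m1, v1) * N(m2, v2) -> N(m, v), up to a scale factor."""
    v = 1.0 / (1.0 / v1 + 1.0 / v2)        # precisions add
    m = v * (m1 / v1 + m2 / v2)            # precision-weighted mean
    return m, v

print(merge_gaussians(0.1, 0.04, 0.12, 0.02))
```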

Keywords: Cancer, pathogenesis, chromosomal aberration, copy number variation, segmentation analysis.

5518 Application of the Computational Methods MM2 and Gaussian for Studying the Unimolecular Decomposition of Vinyl Ethers Based on the Mechanism of Hydrogen Bonding

Authors: Behnaz Shahrokh, Garnik N. Sargsyan, Arkadi B. Harutyunyan

Abstract:

Investigations of the unimolecular decomposition of vinyl ethyl ether (VEE), vinyl propyl ether (VPE) and vinyl butyl ether (VBE) have shown that activation of an ether molecule results in the formation of a cyclic construction, the transition state (TS), which may lead to a displacement of the thermodynamic equilibrium towards the reaction products. The TS is obtained by applying energy minimization relative to the ground state of an ether in the program MM2, taking into account hydrogen bond formation between a hydrogen atom of the alkyl residue and the terminal carbon atom of the vinyl group. The dissociation of the TS into the products is studied by an energy minimization procedure using the program Gaussian. The calculation data obtained for VEE indicate that the decomposition of this ether may be conditioned by hydrogen bond formation in two possible ways: with either the α- or the β-hydrogen atoms of the ethyl group bound to the terminal carbon atom of the vinyl group. Applying the same calculation methods to the other ethers (VPE and VBE) shows that only hydrogen bonding between the α-hydrogen atom of the alkyl residue and the terminal carbon atom of the vinyl group (αH---C) results in decay of these ethers.

Keywords: Gaussian, MM2, ethers, TS, decomposition

5517 Moving Area Filter to Detect Object in Video Sequence from Moving Platform

Authors: Sallama Athab, Hala Bahjat

Abstract:

Detecting objects in a video sequence is a challenging task in identifying and tracking moving objects, and background removal is considered a basic step in such detection tasks. Dual static cameras placed at the front and rear of a moving platform gather the information used to detect objects. The background changes with the speed and direction of the moving platform, which complicates distinguishing moving objects. In this paper, we propose a framework that allows detection of moving objects over a variety of speeds and directions dynamically. The object detection technique is built on two levels: the first level applies background removal and edge detection to generate moving areas; the second level applies a Moving Area Filter (MAF) and then calculates a Correlation Score (CS) for each adjusted moving area. Moving areas with close CS values are merged and marked as a moving object. Experiments were carried out on real scenes acquired by the dual static cameras without overlap in the scene. The results show improved accuracy in detecting objects compared with optical flow and the Mixture Module Gaussian (MMG), with an accuracy ratio produced to measure the detection of moving objects.

Keywords: Background Removal, Correlation, Mixture Module Gaussian, Moving Platform, Object Detection.

5516 Study of Proton-9,11Li Elastic Scattering at 60~75 MeV/Nucleon

Authors: Arafa A. Alholaisi, Jamal H. Madani, M. A. Alvi

Abstract:

The radial form of the nuclear matter distribution, the charge and the shape of nuclei are essential properties of nuclei and hence of great interest for several areas of research in nuclear physics. More than three decades have witnessed a range of experimental means employing leptonic probes (such as muons and electrons) for exploring nuclear charge distributions, whereas hadronic probes (for example alpha particles and protons) have been used to investigate nuclear matter distributions. In this paper, p-9,11Li elastic scattering differential cross sections in the energy range 60 to 75 MeV/nucleon have been studied by means of the Coulomb modified Glauber scattering formalism. By applying the semi-phenomenological Bhagwat-Gambhir-Patil (BGP) nuclear density for the loosely bound neutron-rich 11Li nucleus, the estimated matter radius is found to be 3.446 fm, which is quite large compared to the known experimental value of 3.12 fm. The results of a microscopic optical model based calculation applying the Bethe-Brueckner-Hartree-Fock (BHF) formalism have also been compared. It should be noted that in most phenomenological density models used to reproduce the p-11Li differential elastic scattering cross section data, the calculated matter radius lies between 2.964 and 3.55 fm. The calculated results with the phenomenological BGP model density and with the nucleon density calculated in the relativistic mean-field (RMF) framework reproduce the p-9Li and p-11Li experimental data quite nicely compared to Gaussian-Gaussian or Gaussian-Oscillator densities at all energies under consideration. In the approach described here, no free or adjustable parameter has been employed to reproduce the elastic scattering data, in contrast to the well-known optical model based studies that involve at least four to six adjustable parameters to match the experimental data. The calculated reaction cross sections σR for p-11Li at these energies are quite large compared to the estimated values reported in earlier works, though so far no experimental studies have been performed to measure them.

Keywords: Bhagwat-Gambhir-Patil density, coulomb modified Glauber model, halo nucleus, optical limit approximation.

5515 Comparison of Detrending Methods in Spectral Analysis of Heart Rate Variability

Authors: Liping Li, Changchun Liu, Ke Li, Chengyu Liu

Abstract:

A non-stationary trend in an R-R interval series is considered a main factor that can strongly influence the evaluation of spectral analysis, and it is suggested that trends be removed in order to obtain reliable results. In this study, three detrending methods, the smoothness priors approach, the wavelet method and empirical mode decomposition, were compared on artificial R-R interval series with four types of simulated trends. The Lomb-Scargle periodogram was used for spectral analysis of the R-R interval series. Results indicated that the wavelet method showed a better overall performance than the other two methods, and was more time-saving, too. It was therefore selected for spectral analysis of real R-R interval series of thirty-seven healthy subjects. Significant decreases (19.94±5.87% in the low frequency band and 18.97±5.78% in the ratio, p<0.001) were found. Thus the wavelet method is recommended as an optimal choice for use.
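
A minimal sketch of the spectral-analysis stage: a Lomb-Scargle periodogram of a detrended, unevenly sampled R-R series, with the usual HRV band edges assumed (LF 0.04-0.15 Hz, HF 0.15-0.4 Hz); a simple mean subtraction stands in for the wavelet detrending.

```python
# Sketch: Lomb-Scargle spectrum of a synthetic R-R series (uneven sampling).
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.cumsum(rng.uniform(0.7, 1.1, 300))          # beat times in s, uneven
rr = 0.9 + 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.01 * rng.standard_normal(300)

rr_detrended = rr - rr.mean()                      # stand-in for wavelet detrend
freqs = np.linspace(0.01, 0.5, 500)                # Hz
pgram = lombscargle(t, rr_detrended, 2 * np.pi * freqs)  # angular frequencies

lf = pgram[(freqs >= 0.04) & (freqs < 0.15)].sum()  # low-frequency power
hf = pgram[(freqs >= 0.15) & (freqs < 0.40)].sum()  # high-frequency power
print("LF/HF ratio:", lf / hf)
```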

Keywords: empirical mode decomposition, heart rate variability, signal detrending, smoothness priors, wavelet

5514 Simulation of CO2 Capture Process

Authors: K. Movagharnejad, M. Akbari

Abstract:

A carbon dioxide capture process has been simulated and studied under different process conditions. It has been shown that several process parameters, such as the lean amine temperature, the number of absorber stages, the number of stripper stages and the stripper pressure, affect process outputs such as carbon dioxide removal and reboiler duty. It may be concluded that simulation of the carbon dioxide capture process can help to estimate the best process conditions.

Keywords: Absorption, carbon dioxide capture, desorption, process simulation.

5513 Animal-Assisted Therapy for Persons with Disabilities Based on Canine Tail Language Interpretation via Gaussian-Trapezoidal Fuzzy Emotional Behavior Model

Authors: W. Phanwanich, O. Kumdee, P. Ritthipravat, Y. Wongsawat

Abstract:

In order to alleviate the mental and physical problems of persons with disabilities, animal-assisted therapy (AAT) is one possible modality that employs the merits of human-animal interaction. Nevertheless, to achieve the purpose of AAT for persons with severe disabilities (e.g. spinal cord injury, stroke, and amyotrophic lateral sclerosis), real-time animal language interpretation is desirable. Since canine behaviors can be visually observed from the tail, this paper proposes automatic real-time interpretation of canine tail language for human-canine interaction in the case of persons with severe disabilities. Canine tail language is captured via two 3-axis accelerometers, and directions and frequencies are selected as the features of interest. Novel fuzzy rules based on a Gaussian-trapezoidal model and a center of gravity (COG)-based defuzzification method are proposed in order to interpret the features as four canine emotional behaviors, i.e., agitated, happy, scared and neutral, as well as blended emotional behaviors. The emotional behavior model is demonstrated on a simulated dog and has also been evaluated on a real dog, achieving a perfect recognition rate.
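
A minimal sketch of Gaussian and trapezoidal memberships with COG defuzzification; the rule base, firing strengths and set parameters below are invented for illustration, not the paper's emotion model.

```python
# Sketch: Gaussian/trapezoidal memberships and centre-of-gravity output.
import numpy as np

def gauss_mf(x, c, sigma):
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def trap_mf(x, a, b, c, d):
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0, 1)

x = np.linspace(0, 10, 1001)              # universe of the output variable
# Two illustrative output sets, clipped by assumed rule firing strengths.
agg = np.maximum(np.minimum(gauss_mf(x, 3, 1.0), 0.7),
                 np.minimum(trap_mf(x, 5, 6, 8, 9), 0.4))
cog = np.sum(x * agg) / np.sum(agg)       # centre-of-gravity defuzzification
print(cog)
```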

Keywords: Animal-assisted therapy (AAT), Persons with disabilities, Canine tail language, Fuzzy emotional behavior model

5512 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stock movies suited to local audiences and retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioral and social information to predict their movie genre preferences. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were built to extract features from the sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik-Chervonenkis (VC) dimensions of the learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases, and the accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use machine learning to predict customers' preferences with a small data set and to design prediction tools for these enterprises.
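
A minimal sketch of the model comparison, with synthetic stand-in data since the customers' demographic and behavioral features are not public.

```python
# Sketch: Gaussian-kernel SVM versus logistic regression on stand-in data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)   # Gaussian kernel
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("SVM  accuracy:", svm.score(X_te, y_te))
print("LogR accuracy:", logit.score(X_te, y_te))
```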

Keywords: Computational social science, movie preference, machine learning, SVM.

5511 Least Square-SVM Detector for Wireless BPSK in Multi-Environmental Noise

Authors: J. P. Dubois, Omar M. Abdul-Latif

Abstract:

The Support Vector Machine (SVM) is a statistical learning tool developed around the concept of structural risk minimization (SRM). In this paper, the SVM is applied to signal detection in communication systems in the presence of channel noise in various environments: Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive colored Gaussian noise (ACGN). The structure and performance of the SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to a conventional binary-signaling optimal model-based detector driven by binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched filter-, innovation filter-, and Wiener filter-driven detectors, even in the presence of random Doppler carrier deviation, especially for low SNR (signal-to-noise ratio) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors; however, the convergence between SVM and maximum likelihood detection occurs at a higher SNR as the noise environment becomes more hostile.
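
A minimal sketch of SVM detection of BPSK in AWGN, compared with the ideal sign (matched-filter) decision; fading, colored noise and Doppler are omitted for brevity, and all signal parameters are assumed.

```python
# Sketch: train an SVM detector on noisy BPSK samples and measure BER.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def bpsk_samples(n, snr_db):
    bits = rng.integers(0, 2, n)
    s = 2.0 * bits - 1.0                         # BPSK symbols +/-1
    sigma = 10 ** (-snr_db / 20)                 # noise std for unit signal power
    return (s + sigma * rng.standard_normal(n)).reshape(-1, 1), bits

X_tr, y_tr = bpsk_samples(2000, snr_db=3)
X_te, y_te = bpsk_samples(20000, snr_db=3)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)
ber_svm = np.mean(svm.predict(X_te) != y_te)
ber_sign = np.mean((X_te.ravel() > 0).astype(int) != y_te)   # ideal threshold
print(f"SVM BER: {ber_svm:.4f}, sign-detector BER: {ber_sign:.4f}")
```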

Keywords: Colour noise, Doppler shift, innovation filter, least square-support vector machine, matched filter, Rayleigh fading, Wiener filter.

5510 Vessel Inscribed Trigonometry to Measure the Vessel Progressive Orientations in the Digital Fundus Image

Authors: Pil Un Kim, Yunjung Lee, Gihyoun Lee, Jin Ho Cho, Myoung Nam Kim

Abstract:

In this paper, vessel inscribed trigonometry (VITM) is proposed for measuring the vessel progression orientation (VPO) in two-dimensional fundus images. The VPO is a major factor in optic disc (OD) detection, which is a basic step in retinal analysis. To measure the VPO, vessel skeletons (VS) are used. First, the vessels are classified into three classes: vessel end, vessel branch and vessel stem, and chain code maps of the VS are generated. Next, the two farthest neighborhoods of each point on the VS are searched using the proposed angle restriction. Lastly, the gradient of the straight line between the two farthest neighborhoods is estimated to measure the VPO. VITM is validated by comparison with manual results and 2D Gaussian templates. Experiments applying VITM to detect the OD in fundus images confirm that the VPO from the proposed measurement is accurate enough for OD detection.

Keywords: Angle measurement, Optic disc, Retina vessel, Vessel progression orientation.

5509 Frame and Burst Acquisition in TDMA Satellite Communication Networks with Transponder Hopping

Authors: Vitalice K. Oduol, C. Ardil

Abstract:

The paper presents frame and burst acquisition in a satellite communication network based on time division multiple access (TDMA), in which transmissions may be carried on different transponders. A unique word pattern is used for the acquisition process. The search for the frame is aided by soft decisions on QPSK-modulated signals in an additive white Gaussian noise channel. Results show that when the false alarm rate is low, the probability of detection is also low and the acquisition time is long. Conversely, when the false alarm rate is high, the probability of detection is also high and the acquisition time is short. Thus system operators can trade high false alarm rates for high detection probabilities and shorter acquisition times.
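
A minimal sketch of unique-word acquisition by soft correlation, with invented frame length, UW length and noise level; the detection threshold sets the false-alarm/detection trade-off described above.

```python
# Sketch: detect a known unique word (UW) in soft received symbols by
# correlating against the pattern and thresholding the peak.
import numpy as np

rng = np.random.default_rng(0)
uw = rng.choice([-1.0, 1.0], 32)                 # known unique word
frame = rng.choice([-1.0, 1.0], 512)             # random data symbols
frame[100:132] = uw                              # embed UW at offset 100
rx = frame + 0.5 * rng.standard_normal(512)      # AWGN channel, soft values

corr = np.correlate(rx, uw, mode="valid") / len(uw)
threshold = 0.7          # higher threshold: fewer false alarms, more misses
hits = np.flatnonzero(corr > threshold)
print("detected offsets:", hits)                 # typically [100]
```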

Keywords: burst acquisition, burst time plan, frame acquisition, satellite access, satellite TDMA, unique word detection
