Search results for: Laplacian of Gaussian
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 337

97 Bayesian Variable Selection in Quantile Regression with Application to the Health and Retirement Study

Authors: Priya Kedia, Kiranmoy Das

Abstract:

There is a rich literature on variable selection in the regression setting. However, most of these methods assume normality of the response variable, both for implementing the methodology and for establishing the statistical properties of the estimates. In many real applications, the distribution of the response variable may be non-Gaussian, and one might be interested in finding the best subset of covariates at some predetermined quantile level. We develop a dynamic Bayesian approach for variable selection in the quantile regression framework. We use a zero-inflated mixture prior for the regression coefficients and adopt the asymmetric Laplace distribution for the response variable to model different quantiles of its distribution. An efficient Gibbs sampler is developed for the computation. The proposed approach is assessed through extensive simulation studies, and a real application is also illustrated. We consider data from the Health and Retirement Study conducted by the University of Michigan and select the important predictors when the outcome of interest is out-of-pocket medical cost, an important measure of financial risk. Our analysis finds important predictors at different quantiles of the outcome and thus enhances our understanding of the effects of different predictors on out-of-pocket medical cost.
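
A minimal sketch of the check-loss / asymmetric-Laplace connection that underlies this type of quantile regression: minimizing the check loss is equivalent to maximizing an asymmetric Laplace likelihood, so a point estimate at quantile level tau can be obtained by direct optimization. The data, quantile level, and optimizer below are illustrative, not taken from the paper.

```python
# Sketch: quantile regression via the check loss, whose minimizer coincides
# with the MLE under an asymmetric Laplace likelihood (data are synthetic).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p, tau = 500, 3, 0.75                          # tau: target quantile level
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([1.0, 2.0, 0.0, -1.5])
y = X @ beta_true + rng.standard_t(df=3, size=n)  # heavy-tailed, non-Gaussian errors

def check_loss(beta):
    u = y - X @ beta
    return np.sum(u * (tau - (u < 0)))            # rho_tau(u) = u * (tau - 1{u<0})

beta_hat = minimize(check_loss, np.zeros(p + 1), method="Nelder-Mead").x
print("fit at tau=%.2f:" % tau, np.round(beta_hat, 2))
```

A full Bayesian treatment would place the zero-inflated mixture prior on beta and sample with a Gibbs sampler; the optimization above only recovers the point estimate implied by the asymmetric Laplace likelihood.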

Keywords: variable selection, quantile regression, Gibbs sampler, asymmetric Laplace distribution

Procedia PDF Downloads 121
96 Thermal Radiation Effect on Mixed Convection Boundary Layer Flow over a Vertical Plate with Varying Density and Volumetric Expansion Coefficient

Authors: Sadia Siddiqa, Z. Khan, M. A. Hossain

Abstract:

In this article, the effect of thermal radiation on mixed convection boundary layer flow of a viscous fluid along a highly heated vertical flat plate is considered with varying density and volumetric expansion coefficient. The density of the fluid is assumed to vary exponentially with temperature, whereas the volumetric expansion coefficient depends linearly on temperature. The boundary layer equations are transformed into a convenient form by introducing primitive variable formulations. Solutions of the transformed system of equations are obtained numerically through an implicit finite difference method along with the Gaussian elimination technique. Results are discussed for various parameters, such as the thermal radiation parameter, the volumetric expansion parameter, and the density variation parameter, in terms of the wall shear stress and heat transfer rate. It is concluded from the present investigation that an increase in the volumetric expansion parameter decreases the wall shear stress and enhances the heat transfer rate.
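
Implicit finite-difference discretizations like the one above reduce, sweep by sweep, to tridiagonal linear systems, for which Gaussian elimination specializes to the O(n) Thomas algorithm. A self-contained sketch with illustrative demo values:

```python
# Sketch: Thomas algorithm, the O(n) Gaussian elimination for the tridiagonal
# systems produced by an implicit finite-difference step (demo values only).
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a: sub-, b: main, c: super-diagonal, d: rhs."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]               # forward elimination
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):                # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# one implicit diffusion step on a 1D profile (illustrative coefficients)
n, r = 50, 0.5
u = np.exp(-np.linspace(-3, 3, n) ** 2)
u_new = thomas(np.full(n, -r), np.full(n, 1 + 2 * r), np.full(n, -r), u)
```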

Keywords: thermal radiation, mixed convection, variable density, variable volumetric expansion coefficient

Procedia PDF Downloads 343
95 Distribution of Maximum Loss of Fractional Brownian Motion with Drift

Authors: Ceren Vardar Acar, Mine Caglar

Abstract:

In finance, the price of a volatile asset can be modeled using fractional Brownian motion (fBm) with Hurst parameter H > 1/2. The Black-Scholes model for the returns of an asset using fBm is given as Y_t = Y_0 exp((r + μ)t + σB_t^H), 0 ≤ t ≤ T, where Y_0 is the initial value, r is the constant interest rate, μ is the constant drift, and σ is the constant diffusion coefficient; the fBm is denoted by B_t^H, t ≥ 0. The Black-Scholes model can also be constructed with Markov processes such as Brownian motion. The advantage of modeling with fBm over Markov processes is its capability of capturing the dependence between returns: real-life data for volatile assets display long-range dependence, so fBm is a more realistic model. Investors are interested in any kind of information on risk in order to manage or hedge it. The maximum possible loss is one way to measure the highest possible risk, and it is therefore an important variable for investors. In our study, we give theoretical bounds on the distribution of the maximum possible loss of fBm. We provide both asymptotic and strong estimates for the tail probability of the maximum loss of standard fBm and of fBm with drift and diffusion coefficients. From the investment point of view, these results describe how large values of the possible loss behave and provide bounds for them.
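
For intuition, the maximum loss studied here can be estimated by Monte Carlo: sample exact fBm paths from the fBm covariance via a Cholesky factor, add the drift, and take the largest drawdown. A sketch with illustrative parameters (not the paper's bounds):

```python
# Sketch: Monte Carlo estimate of the maximum loss (largest drawdown) of fBm
# with drift, sampling exact fBm paths via a Cholesky factor of its covariance.
import numpy as np

rng = np.random.default_rng(1)
H, mu, sigma, T, n = 0.7, 0.05, 0.3, 1.0, 200
t = np.linspace(T / n, T, n)
cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))   # fBm covariance
L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))

losses = []
for _ in range(2000):
    B = L @ rng.standard_normal(n)                   # standard fBm path B_t^H
    X = np.concatenate([[0.0], mu * t + sigma * B])  # fBm with drift, X_0 = 0
    losses.append(np.max(np.maximum.accumulate(X) - X))  # sup over s<=t of X_s - X_t
print("estimated mean maximum loss:", np.mean(losses))
```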

Keywords: maximum drawdown, maximum loss, fractional Brownian motion, large deviation, Gaussian process

Procedia PDF Downloads 458
94 Spatiotemporal Analysis of Land Surface Temperature and Urban Heat Island Evaluation of Four Metropolitan Areas of Texas, USA

Authors: Chunhong Zhao

Abstract:

Remotely sensed land surface temperature (LST) is vital for understanding the land-atmosphere energy balance and the hydrological cycle, and is thus widely used to describe the urban heat island (UHI) phenomenon. However, due to technical constraints, satellite thermal sensors are unable to provide LST measurements with both high spatial and high temporal resolution, although different downscaling techniques and algorithms have been developed to generate high spatiotemporal resolution LST. Four major metropolitan areas in Texas, USA (Dallas-Fort Worth, Houston, San Antonio, and Austin) all demonstrate UHI effects, and different cities are expected to show varying surface UHI effects along their urban development trajectories. With the help of the Landsat, ASTER, and MODIS archives, this study focuses on the spatial patterns of UHIs and their seasonal and annual variation in these metropolitan areas. Using a Gaussian model, Local Indicators of Spatial Association (LISA), and data fusion methods, this study identifies the hotspots and the trajectory of the UHI phenomenon in the four cities. Through comparative analysis, the results can help to alleviate the adverse effects of UHI and to formulate rational urban planning in the long run.

Keywords: spatiotemporal analysis, land surface temperature, urban heat island evaluation, metropolitan areas of Texas, USA

Procedia PDF Downloads 375
93 Superficial Metrology of Organometallic Chemical Vapour Deposited Undoped ZnO Thin Films on Stainless Steel and Soda-Lime Glass Substrates

Authors: Uchenna Sydney Mbamara, Bolu Olofinjana, Ezekiel Oladele B. Ajayi

Abstract:

Elaborate surface metrology of undoped ZnO thin films, deposited by the organometallic chemical vapour deposition (OMCVD) technique at different precursor flow rates, was carried out. A dicarbomethyl-zinc precursor was used. The films were deposited on AISI 304L steel and soda-lime glass substrates. Ultraviolet-visible-near-infrared (UV-Vis-NIR) spectroscopy showed that all the thin films were over 80% transparent, with an average bandgap of 3.39 eV. X-ray diffraction (XRD) results showed that the thin films were crystalline with a hexagonal structure, while Rutherford backscattering spectroscopy (RBS) identified the elements present in each thin film as zinc and oxygen in a 1:1 ratio. Microscope and contactless profilometer results gave images with characteristic colours. The profilometer also gave surface roughness data in both 2D and 3D. The asperity distribution of the thin film surfaces was Gaussian, while the average fractal dimension Da satisfied Da ≥ 2.5. The metrology showed the surfaces to be suitable for ‘touch electronics’ and for coating mechanical parts for low friction.

Keywords: undoped ZnO, precursor flow rate, OMCVD, thin films, surface texture, tribology

Procedia PDF Downloads 37
92 An Improved Data Aided Channel Estimation Technique Using Genetic Algorithm for Massive Multiple-Input Multiple-Output

Authors: M. Kislu Noman, Syed Mohammed Shamsul Islam, Shahriar Hassan, Raihana Pervin

Abstract:

With the increasing number of wireless devices and high-bandwidth operations, wireless networks are becoming overcrowded. To cope with this congestion, massive MIMO is designed to work with hundreds of low-cost serving antennas at a time while also improving spectral efficiency. TDD is used to support beamforming, a major part of massive MIMO, by transmitting and receiving pilot sequences. All these benefits are only possible if the channel state information, i.e., the channel estimate, is obtained properly. The common methods used so far to estimate the channel matrix are LS, MMSE, and a linear version of MMSE (LMMSE) proposed in many research works. We optimized these methods using a genetic algorithm (GA) to minimize the mean squared error and to find the best channel matrix among the existing algorithms with less computational complexity. Our simulation results show that the GA works well with the existing algorithms in a Rayleigh slow-fading channel with additive white Gaussian noise. We found that GA-optimized LS is better than the existing algorithms, as the GA provides a near-optimal result within a few iterations in terms of MSE versus SNR and computational complexity.
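
For reference, a sketch of the LS and LMMSE pilot-based estimates that a GA of this kind would refine, using the channel-estimate MSE as its fitness; the antenna counts, pilot length, and SNR below are illustrative assumptions.

```python
# Sketch: LS and LMMSE pilot-based channel estimates, the baselines a GA would
# refine with channel-estimate MSE as its fitness (dimensions/SNR illustrative).
import numpy as np

rng = np.random.default_rng(2)
Nt, Nr, Np, snr_db = 8, 64, 16, 10                 # users, BS antennas, pilots
sigma2 = 10 ** (-snr_db / 10)

cplx = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
H = cplx(Nt, Nr)                                   # Rayleigh channel
X = cplx(Np, Nt)                                   # pilot matrix
Y = X @ H + np.sqrt(sigma2) * cplx(Np, Nr)         # received pilots + AWGN

H_ls = np.linalg.pinv(X) @ Y                               # least squares
A = X.conj().T @ X + sigma2 * np.eye(Nt)                   # LMMSE, unit channel variance
H_lmmse = np.linalg.solve(A, X.conj().T @ Y)

for name, Hh in [("LS", H_ls), ("LMMSE", H_lmmse)]:
    print(name, "MSE:", np.mean(np.abs(Hh - H) ** 2))
```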

Keywords: channel estimation, LMMSE, LS, MIMO, MMSE

Procedia PDF Downloads 161
91 Predicting Relative Performance of Sector Exchange Traded Funds Using Machine Learning

Authors: Jun Wang, Ge Zhang

Abstract:

Machine learning has been used in many areas today. It thrives at reviewing large volumes of data and identifying patterns and trends that might not be apparent to a human. Given the huge potential benefit and the amount of data available in the financial market, it is not surprising to see machine learning applied to various financial products. While future prices of financial securities are extremely difficult to forecast, we study them from a different angle. Instead of trying to forecast future prices, we apply machine learning algorithms to predict the direction of future price movement, in particular, whether a sector Exchange Traded Fund (ETF) would outperform or underperform the market in the next week or in the next month. We apply several machine learning algorithms for this prediction. The algorithms are Linear Discriminant Analysis (LDA), k-Nearest Neighbors (KNN), Decision Tree (DT), Gaussian Naive Bayes (GNB), and Neural Networks (NN). We show that these machine learning algorithms, most notably GNB and NN, have some predictive power in forecasting out-performance and under-performance out of sample. We also try to explore whether it is possible to utilize the predictions from these algorithms to outperform the buy-and-hold strategy of the S&P 500 index. The trading strategy to explore out-performance predictions does not perform very well, but the trading strategy to explore under-performance predictions can earn higher returns than simply holding the S&P 500 index out of sample.
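
A minimal sketch of the five-classifier comparison described above, using scikit-learn on placeholder data; in the paper's setting, the features would be derived from past returns and the labels from next-period out/under-performance.

```python
# Sketch: the five classifiers named above on placeholder data; a chronological
# (unshuffled) split stands in for out-of-sample evaluation.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, shuffle=False)

models = {"LDA": LinearDiscriminantAnalysis(),
          "KNN": KNeighborsClassifier(),
          "DT": DecisionTreeClassifier(random_state=0),
          "GNB": GaussianNB(),
          "NN": MLPClassifier(max_iter=1000, random_state=0)}
for name, m in models.items():
    print(name, "out-of-sample accuracy:", m.fit(X_tr, y_tr).score(X_te, y_te))
```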

Keywords: machine learning, ETF prediction, dynamic trading, asset allocation

Procedia PDF Downloads 55
90 Image Segmentation Using Active Contours Based on Anisotropic Diffusion

Authors: Shafiullah Soomro

Abstract:

Active contours are among the most widely used image segmentation techniques; their goal is to capture the required object boundaries within an image. In this paper, we propose a novel image segmentation method using an active contour based on an anisotropic diffusion feature enhancement technique. Traditional active contour methods use only pixel information to perform segmentation, which produces inaccurate results when an image has noise or a complex background. We use the Perona-Malik diffusion scheme for feature enhancement, which sharpens the object boundaries and blurs the background variations. Our main contribution is the formulation of a new SPF (signed pressure force) function, which uses global intensity information across the regions. By minimizing an energy functional within a partial differential equation framework, the proposed method captures semantically meaningful boundaries instead of uninteresting regions. Finally, we use a Gaussian kernel, which eliminates the problem of reinitialization of the level set function. We use several synthetic and real images from different modalities to validate the performance of the proposed method. In the experimental section, the proposed method performs better both qualitatively and quantitatively and yields results with higher accuracy compared to other state-of-the-art methods.
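
A compact sketch of the Perona-Malik feature-enhancement step described above; the conductance constant kappa, time step, and iteration count are illustrative choices.

```python
# Sketch: Perona-Malik anisotropic diffusion - smooths homogeneous regions
# while preserving (and relatively sharpening) edges.
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, dt=0.2):
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)       # edge-stopping conductance
    for _ in range(n_iter):
        dN = np.roll(u, -1, 0) - u                # neighbour differences
        dS = np.roll(u, 1, 0) - u
        dE = np.roll(u, -1, 1) - u
        dW = np.roll(u, 1, 1) - u
        u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```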

Keywords: active contours, anisotropic diffusion, level-set, partial differential equations

Procedia PDF Downloads 136
89 Recognizing an Individual, Their Topic of Conversation and Cultural Background from 3D Body Movement

Authors: Gheida J. Shahrour, Martin J. Russell

Abstract:

The 3D body movement signals captured during human-human conversation include clues not only to the content of people’s communication but also to their culture and personality. This paper is concerned with the automatic extraction of this information from body movement signals. For the purpose of this research, we collected a novel corpus from 27 subjects and arranged them into groups according to their culture. Each group was arranged into pairs, and each pair communicated about different topics. A state-of-the-art recognition system was applied to the problems of person, culture, and topic recognition, borrowing modeling, classification, and normalization techniques from speech recognition. We used Gaussian Mixture Modeling (GMM) as the main technique for building our three systems, obtaining 77.78%, 55.47%, and 39.06% accuracy for person, culture, and topic recognition, respectively. In addition, we combined the above GMM systems with Support Vector Machines (SVM) to obtain 85.42%, 62.50%, and 40.63% accuracy for person, culture, and topic recognition, respectively. Although direct comparison among these three recognition systems is difficult, our person recognition system performs best for both GMM and GMM-SVM, suggesting that inter-subject differences (i.e., subjects’ personality traits) are a major source of variation. When removing these traits from the culture and topic recognition systems using the Nuisance Attribute Projection (NAP) and Intersession Variability Compensation (ISVC) techniques, we obtained 73.44% and 46.09% accuracy for culture and topic recognition, respectively.
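
A minimal sketch of GMM-based recognition in this spirit: fit one mixture per class and decide by maximum log-likelihood. The features, class structure, and mixture size below are synthetic placeholders, not the paper's corpus.

```python
# Sketch: per-class Gaussian mixtures, classification by max log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
train = {c: rng.normal(loc=3 * c, size=(200, 10)) for c in range(3)}
test_x = rng.normal(loc=3, size=(50, 10))          # drawn to match class 1

gmms = {c: GaussianMixture(n_components=4, random_state=0).fit(x)
        for c, x in train.items()}
scores = np.column_stack([gmms[c].score_samples(test_x) for c in sorted(gmms)])
pred = scores.argmax(axis=1)                        # most likely class per sample
print("predicted class counts:", np.bincount(pred, minlength=3))
```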

Keywords: person recognition, topic recognition, culture recognition, 3D body movement signals, variability compensation

Procedia PDF Downloads 509
88 Design and Implementation of a Counting and Differentiation System for Vehicles through Video Processing

Authors: Derlis Gregor, Kevin Cikel, Mario Arzamendia, Raúl Gregor

Abstract:

This paper presents a self-sustaining mobile system for counting and classifying vehicles through video processing. It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on an SBC (Single Board Computer), such as the Raspberry Pi 2, so that it can run in real time. The first step of the proposed algorithm limits the zone of the image that will be processed. The second step performs the detection of moving objects using a BGS (Background Subtraction) algorithm based on a GMM (Gaussian Mixture Model), together with a shadow removal algorithm using physics-based features, followed by morphological operations. In the third step, vehicle detection is performed using edge detection algorithms, and vehicle tracking using Kalman filters. The last step of the proposed algorithm registers the passing vehicles and classifies them according to their areas. A self-sustaining system is proposed, powered by batteries and photovoltaic solar panels, with data transmission done through GPRS (General Packet Radio Service), eliminating the need for external cables and facilitating deployment and relocation to any site where the system could operate. The self-sustaining trailer will allow the counting and classification of vehicles in specific zones with difficult access.
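
A sketch of the GMM-based background subtraction step (step two) using OpenCV's MOG2 implementation, which also marks shadow pixels; the video path, history length, and area threshold are placeholders.

```python
# Sketch: GMM background subtraction with shadow removal and morphological
# clean-up; "traffic.mp4" and all thresholds are placeholders.
import cv2

cap = cv2.VideoCapture("traffic.mp4")
bgs = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bgs.apply(frame)
    mask[mask == 127] = 0                          # 127 marks shadow pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > 400]
    count += len(blobs)    # naive per-frame blob count; tracking would refine it
cap.release()
```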

Keywords: intelligent transportation system, object detection, vehicle counting, vehicle classification, video processing

Procedia PDF Downloads 293
87 Conventional and Computational Investigation of the Synthesized Organotin(IV) Complexes Derived from o-Vanillin and 3-Nitro-o-Phenylenediamine

Authors: Harminder Kaur, Manpreet Kaur, Akanksha Kapila, Reenu

Abstract:

A Schiff base with the general formula H₂L was derived from the condensation of o-vanillin and 3-nitro-o-phenylenediamine. This Schiff base was used for the synthesis of organotin(IV) complexes with the general formula R₂SnL [R = phenyl or n-octyl] using equimolar quantities. Elemental analysis and UV-Vis, FTIR, and multinuclear NMR (¹H, ¹³C, and ¹¹⁹Sn) spectroscopic techniques were used for the characterization of the synthesized complexes. These complexes were coloured and soluble in polar solvents. Computational studies were performed to obtain details of the geometry and electronic structures of the ligand as well as the complexes. The geometries of the ligand and complexes were optimized at the density functional theory level with B3LYP/6-311G(d,p) and B3LYP/MPW1PW91, respectively, followed by vibrational frequency analysis using Gaussian 09. The observed ¹¹⁹Sn NMR chemical shifts of one of the synthesized complexes indicated tetrahedral geometry around the tin atom, which is also confirmed by DFT. The HOMO-LUMO energy distribution was calculated, and FTIR, ¹H NMR, and ¹³C NMR spectra were also obtained theoretically using DFT. Further, IRC calculations were employed to determine the transition state of the reaction and to obtain theoretical information about the reaction pathway. Moreover, molecular docking studies can be explored to assess the anticancer activity of the newly synthesized organotin(IV) complexes.

Keywords: DFT, molecular docking, organotin(IV) complexes, o-vanillin, 3-nitro-o-phenylenediamine

Procedia PDF Downloads 126
86 Modeling and Simulation of Organic Solar Cells Based on P3HT:PCBM using SCAPS 1-D (Influence of Defects and Temperature on the Performance of the Solar Cell)

Authors: Souhila Boukli Hacene, Djamila Kherbouche, Abdelhak Chikhaoui

Abstract:

In this work, we theoretically elucidate the effect of defects and temperature on the performance of the organic bulk heterojunction (BHJ) solar cell P3HT:PCBM and study the influence of these parameters on the cell characteristics. For this purpose, we used the effective medium model and the solar cell simulator SCAPS to model the characteristics of the solar cell, and we also explore the transport of charge carriers in the device. It was assumed that the blend is lightly p-type doped and that the band gap contains acceptor defects near the HOMO level with a Gaussian distribution of energy states of width 100 and 50 meV. We varied the defect density between 10¹² and 10¹⁷ cm⁻³; from 10¹⁶ cm⁻³ onwards, an overall decrease of the photovoltaic characteristics can be noticed, due to the increase of non-radiative recombination. We then studied the effect of varying the electron and hole capture cross-sections on the cell’s performance and found that the cell reaches a better efficiency of about 3.6% for an electron capture cross-section ≤ 10⁻¹⁵ cm² and a hole capture cross-section ≤ 10⁻¹⁹ cm². We also varied the temperature between 120 K and 400 K and observed that the temperature induces a noticeable effect on the cell voltage, while its effect on the cell current is negligible.
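
A hedged reconstruction of the Gaussian distribution of acceptor defect states assumed in such simulations, with total trap density N_t, characteristic energy E_t near the HOMO level, and width sigma set to 100 or 50 meV (SCAPS's exact normalization may differ):

```latex
% Gaussian density of acceptor defect states near the HOMO level
g(E) = \frac{N_t}{\sigma\sqrt{2\pi}}\,
       \exp\!\left[-\frac{(E - E_t)^2}{2\sigma^2}\right],
\qquad \sigma \in \{50, 100\}\ \text{meV}
```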

Keywords: organic solar cell, P3HT:PCBM, defects, temperature, SCAPS

Procedia PDF Downloads 50
85 Dynamic Distribution Calibration for Improved Few-Shot Image Classification

Authors: Majid Habib Khan, Jinwei Zhao, Xinhong Hei, Liu Jiedong, Rana Shahzad Noor, Muhammad Imran

Abstract:

Deep learning is increasingly employed in image classification, yet the scarcity and high cost of labeled training data remain a challenge. Limited samples often lead to overfitting due to biased sample distributions. This paper introduces a dynamic distribution calibration method for few-shot learning. Initially, base and new class samples undergo normalization to mitigate disparate feature magnitudes. A pre-trained model then extracts feature vectors from both classes. The method dynamically selects distribution characteristics from base classes (both adjacent and remote) in the embedding space, using a threshold-value approach for new class samples. Given the propensity of similar classes to share feature distributions such as mean and variance, this research assumes a Gaussian distribution for the feature vectors. Subsequently, the distributional features of new class samples are calibrated using a corrected hyperparameter derived from the distribution features of both adjacent and distant base classes. This calibration augments the new class sample set. The technique demonstrates significant improvements, with up to 4% accuracy gains in few-shot classification challenges, as evidenced by tests on the miniImageNet and CUB datasets.
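
A minimal sketch of Gaussian distribution calibration in this spirit: borrow mean and covariance statistics from the base classes nearest to the support set, then sample synthetic features from the calibrated Gaussian. The neighbour count k, correction weight alpha, and sample count are illustrative assumptions, not the paper's values.

```python
# Sketch: calibrate a new class's Gaussian from nearby base-class statistics,
# then sample synthetic features to augment the few-shot support set.
import numpy as np

def calibrate_and_sample(support, base_means, base_covs, k=2, alpha=0.2, n_gen=100):
    """support: (n_shot, d); base_means: (n_base, d); base_covs: (n_base, d, d)."""
    mu_s = support.mean(axis=0)
    nearest = np.argsort(np.linalg.norm(base_means - mu_s, axis=1))[:k]
    mu_c = (base_means[nearest].sum(0) + mu_s) / (k + 1)      # calibrated mean
    cov_c = base_covs[nearest].mean(0) + alpha * np.eye(support.shape[1])
    rng = np.random.default_rng(0)
    return rng.multivariate_normal(mu_c, cov_c, size=n_gen)   # augmented features
```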

Keywords: deep learning, computer vision, image classification, few-shot learning, threshold

Procedia PDF Downloads 26
84 Theoretical Study of Structural Parameters, Chemical Reactivity and Spectral and Thermodynamic Properties of Organometallic Complexes Containing Zinc, Nickel and Cadmium with Nitrilotriacetic Acid and TEA Ligands: Density Functional Theory Investigation

Authors: Nour El Houda Bensiradj, Nafila Zouaghi, Taha Bensiradj

Abstract:

The pollution of water resources is characterized by the presence of microorganisms, chemicals, or industrial waste. Generally, this waste generates effluents containing large quantities of heavy metals, making the water unsuitable for consumption and causing the death of aquatic life and associated biodiversity. Currently, it is very important to assess the impact of heavy metals in water pollution as well as the processes for treating and reducing them. Among the methods of water treatment and disinfection, we mention the complexation of metal ions using ligands which serve to precipitate and subsequently eliminate these ions. In this context, we are interested in the study of complexes containing heavy metals such as zinc, nickel, and cadmium, which are present in several industrial discharges and are discharged into water sources. We will use the ligands of triethanolamine (TEA) and nitrilotriacetic acid (NTA). The theoretical study is based on molecular modeling, using the density functional theory (DFT) implemented in the Gaussian 09 program. The geometric and energetic properties of the above complexes will be calculated. Spectral properties such as infrared, as well as reactivity descriptors, and thermodynamic properties such as enthalpy and free enthalpy will also be determined.

Keywords: heavy metals, NTA, TEA, DFT, IR, reactivity descriptors

Procedia PDF Downloads 67
83 Diagnosis and Analysis of Automated Liver and Tumor Segmentation on CT

Authors: R. R. Ramsheeja, R. Sreeraj

Abstract:

A wide range of medical imaging modalities is available nowadays for viewing the internal structures of the human body, such as the liver, brain, and kidney. Computed Tomography (CT) is one of the most significant of these modalities. In this paper, CT liver images are used to study automatic computer-aided techniques for calculating the volume of a liver tumor. A segmentation method for detecting the tumor from the CT scan is proposed: a Gaussian filter is used for denoising the liver image, and an adaptive thresholding algorithm is used for segmentation. A multiple Region Of Interest (ROI) based method helps to characterize the different features and has a significant impact on classification performance. Due to the characteristics of liver tumor lesions, inherent difficulties appear in feature selection. For better performance, a novel system is introduced in which multiple ROI based feature selection and classification are performed. Obtaining relevant features is important for the Support Vector Machine (SVM) classifier to achieve better generalization performance. The proposed system improves classification performance while using a significantly reduced set of features. The diagnosis of liver cancer from computed tomography images is inherently difficult, and early detection of liver tumors is very helpful in saving human life.
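
A minimal sketch of the denoising and segmentation front-end named above, i.e., Gaussian filtering followed by adaptive thresholding with OpenCV; the file name, kernel size, and threshold window are placeholders.

```python
# Sketch: Gaussian denoising followed by adaptive (local) thresholding.
import cv2

img = cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE)   # placeholder CT slice
den = cv2.GaussianBlur(img, (5, 5), 1.0)                  # Gaussian denoising
seg = cv2.adaptiveThreshold(den, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                            cv2.THRESH_BINARY, 31, 2)     # local thresholding
cv2.imwrite("liver_mask.png", seg)
```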

Keywords: computed tomography (CT), multiple region of interest (ROI), feature values, segmentation, SVM classification

Procedia PDF Downloads 482
82 Brain Tumor Segmentation Based on Minimum Spanning Tree

Authors: Simeon Mayala, Ida Herdlevær, Jonas Bull Haugsøen, Shamundeeswari Anandan, Sonia Gavasso, Morten Brun

Abstract:

In this paper, we propose a minimum spanning tree-based method for segmenting brain tumors. The proposed method performs interactive segmentation based on the minimum spanning tree without tuning parameters. The steps involve preprocessing, making a graph, constructing a minimum spanning tree, and a newly implemented way of interactively segmenting the region of interest. In the preprocessing step, a Gaussian filter is applied to the 2D images to remove noise. Then, the pixel neighbor graph is weighted by intensity differences, and the corresponding minimum spanning tree is constructed. The image is loaded in an interactive window for segmenting the tumor. The region of interest and the background are selected by clicking to split the minimum spanning tree into two trees, one representing the region of interest and the other representing the background. Finally, the segmentation given by the two trees is visualized. The proposed method was tested by segmenting two different 2D brain T1-weighted magnetic resonance image data sets. The comparison between our results and the gold standard segmentation confirmed the validity of the minimum spanning tree approach. The proposed method is simple to implement, and the results indicate that it is accurate and efficient.
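
A compact sketch of the core splitting idea, with the two interactive clicks reduced to two flat pixel indices; the Gaussian sigma and the tiny edge-weight offset are illustrative, and the split is made at the heaviest MST edge on the path between the seeds.

```python
# Sketch: cut the MST of a 4-neighbour pixel graph (weighted by intensity
# differences) at the heaviest edge on the tree path between two seeds.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_split(img, seed_roi, seed_bg):
    f = gaussian_filter(img.astype(float), sigma=1.0)        # preprocessing
    h, w = f.shape
    idx = np.arange(h * w).reshape(h, w)
    r = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    c = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    wt = np.abs(f.ravel()[r] - f.ravel()[c]) + 1e-9          # intensity differences
    mst = minimum_spanning_tree(coo_matrix((wt, (r, c)),
                                           shape=(h * w, h * w))).tocoo()
    adj = {}                                                 # tree adjacency
    for i, j, v in zip(mst.row, mst.col, mst.data):
        adj.setdefault(i, []).append((j, v))
        adj.setdefault(j, []).append((i, v))
    parent, stack = {seed_roi: (None, 0.0)}, [seed_roi]      # DFS from one seed
    while stack:
        u = stack.pop()
        for v, val in adj.get(u, []):
            if v not in parent:
                parent[v] = (u, val)
                stack.append(v)
    u, heavy = seed_bg, (seed_bg, seed_bg, -1.0)             # heaviest path edge
    while parent[u][0] is not None:
        p, val = parent[u]
        if val > heavy[2]:
            heavy = (p, u, val)
        u = p
    keep = ~(((mst.row == heavy[0]) & (mst.col == heavy[1])) |
             ((mst.row == heavy[1]) & (mst.col == heavy[0])))
    two_trees = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                           shape=mst.shape)
    _, labels = connected_components(two_trees, directed=False)
    return labels.reshape(h, w) == labels[seed_roi]          # ROI mask

img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0            # synthetic bright region
mask = mst_split(img, seed_roi=30 * 64 + 30, seed_bg=5 * 64 + 5)
```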

Keywords: brain tumor, brain tumor segmentation, minimum spanning tree, segmentation, image processing

Procedia PDF Downloads 94
81 Estimation of Thermal Conductivity of Nanofluids Using MD-Stochastic Simulation-Based Approach

Authors: Sujoy Das, M. M. Ghosh

Abstract:

The thermal conductivity of a fluid can be significantly enhanced by dispersing nano-sized particles in it; the resultant fluid is termed a 'nanofluid'. A theoretical model for estimating the thermal conductivity of a nanofluid is proposed here. It is based on the mechanism that evenly dispersed nanoparticles within a nanofluid undergo Brownian motion, in the course of which they repeatedly collide with the heat source. During each collision, rapid heat transfer occurs owing to the solid-solid contact. Molecular dynamics (MD) simulation of the collision of nanoparticles with the heat source has shown that there is a pulse-like pick-up of heat by the nanoparticles within 20-100 ps, the extent of which depends not only on the thermal conductivity of the nanoparticles but also on their elastic and other physical properties. After the collision, the nanoparticles undergo Brownian motion in the base fluid and release the excess heat to the surrounding base fluid within 2-10 ms. The Brownian motion and associated temperature variation of the nanoparticles have been modeled by stochastic analysis. Repeated occurrence of these events by the suspended nanoparticles significantly contributes to the characteristic thermal conductivity of the nanofluid, which has been estimated by the present model for an ethylene glycol-based nanofluid containing Cu nanoparticles of size ranging from 8 to 20 nm, with a Gaussian size distribution. The prediction of the present model shows reasonable agreement with the experimental data available in the literature.

Keywords: Brownian dynamics, molecular dynamics, nanofluid, thermal conductivity

Procedia PDF Downloads 351
80 Electronic Structure Calculation of AsSiTeB/SiAsBTe Nanostructures Using Density Functional Theory

Authors: Ankit Kargeti, Ravikant Shrivastav, Tabish Rasheed

Abstract:

Electronic structure calculations were performed for nanoclusters of the AsSiTeB/SiAsBTe quaternary semiconductor alloy belonging to the III-V group of elements. The motivation for this work was to obtain accurate electronic and geometric data for small nanoclusters of AsSiTeB/SiAsBTe in the gaseous form. Two clusters, one in a linear form and the other in a bent form, were studied within the framework of Density Functional Theory (DFT) using the B3LYP functional and the LANL2DZ basis set with the software package Gaussian 16. We discuss the optimized energy, frontier orbital energy gap in terms of HOMO-LUMO, dipole moment, ionization potential, electron affinity, binding energy, embedding energy, and density of states (DoS) spectrum for both structures. The important finding is that both predicted nanostructures have wide band gaps: the band gap energy (Eg) is 2.375 eV for the linear structure and 2.778 eV for the bent structure. These structures can therefore be utilized as wide-band-gap semiconductors. The electron affinity is high, 4.259 eV for the linear structure and 3.387 eV for the bent structure, showing that the electron-acceptor capability is high for both forms. A widely known application of such compounds is in light-emitting diodes, owing to their wide band gaps.

Keywords: density functional theory (DFT), nanostructures, HOMO-LUMO, density of states

Procedia PDF Downloads 85
79 A Posteriori Trading-Inspired Model-Free Time Series Segmentation

Authors: Plessen Mogens Graf

Abstract:

Within the context of multivariate time series segmentation, this paper proposes a method inspired by a posteriori optimal trading. After a normalization step, time series are treated channelwise as surrogate stock prices that can be traded optimally a posteriori in a virtual portfolio holding either stock or cash. Linear transaction costs are interpreted as hyperparameters for noise filtering. Trading signals, as well as trading signals obtained on the reversed time series, are used for unsupervised channelwise labeling before a consensus over all channels is reached that determines the final segmentation time instants. The method is model-free in that no model prescriptions for segments are made. Benefits of the proposed approach include simplicity, computational efficiency, and adaptability to a wide range of different shapes of time series. Performance is demonstrated on synthetic and real-world data, including a large-scale dataset comprising a multivariate time series of dimension 1000 and length 2709. The proposed method is compared to a popular model-based bottom-up approach fitting piecewise affine models and to a recent model-based top-down approach fitting Gaussian models, and it is found to be consistently faster while producing more intuitive results in the sense of segmenting time series at peaks and valleys.
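
A sketch of the a posteriori optimal trading core on a single normalized channel: dynamic programming over the two portfolio states {cash, stock}, with a linear (log-domain) transaction cost eps acting as the noise filter and the switching instants serving as candidate segmentation points. The cost value and demo series are illustrative, not the paper's.

```python
# Sketch: a posteriori optimal trading with linear transaction costs via DP;
# switch instants approximate segmentation points at peaks and valleys.
import numpy as np

def trading_segmentation(prices, eps=0.01):
    r = np.diff(np.log(prices))                  # per-step log returns
    n = len(prices)
    val = np.zeros((n, 2))                       # best log-wealth in {cash, stock}
    switch = np.zeros((n, 2), dtype=bool)        # state reached by switching?
    val[0] = [0.0, -eps]
    switch[0, 1] = True
    for t in range(1, n):
        c, s = val[t - 1]
        val[t, 0] = max(c, s - eps)              # hold cash vs. sell
        switch[t, 0] = s - eps > c
        val[t, 1] = max(s, c - eps) + r[t - 1]   # hold stock vs. buy, earn return
        switch[t, 1] = c - eps > s
    state = int(val[-1, 1] > val[-1, 0])         # backtrack optimal positions
    pos = np.zeros(n, dtype=int)
    for t in range(n - 1, -1, -1):
        pos[t] = state
        if switch[t, state]:
            state = 1 - state
    return list(np.flatnonzero(np.diff(pos)) + 1)

prices = np.concatenate([np.linspace(1, 2, 50), np.linspace(2, 1.2, 50)])
print(trading_segmentation(prices))              # expect a point near the peak, t = 50
```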

Keywords: time series segmentation, model-free, trading-inspired, multivariate data

Procedia PDF Downloads 104
78 Patented Free-Space Optical System for Auto Aligned Optical Beam Allowing to Compensate Mechanical Misalignments

Authors: Aurelien Boutin

Abstract:

In optical systems such as variable optical delay lines, where a collimated beam has to go back and forth, corner cubes are used in order to keep the reflected beam parallel to the incoming beam. However, the reflected beam can be laterally shifted, which leads to losses. In this paper, we report on a patented optical design that keeps the reflected beam in exactly the same position and direction whatever the displacement of the corner cube, leading to zero losses. After explaining how the optical design works and how it theoretically compensates for any defects in the translation of the corner cube, we present the results of experimental comparisons between a standard layout (i.e., corner cubes only) and our optical layout. To compare both optical layouts, we used a fiber-to-fiber coupling setup, which consists of coupling light from one fiber to the other through two lenses. The ensemble [fiber + lens] is fixed and called a collimator, so that light is coupled from one collimator to another. Each collimator was precisely made in order to have a precise working distance. In the experiment, we measured and compared the insertion loss (IL) variations between both collimators as a function of the distance between them (i.e., the natural Gaussian beam coupling losses) and between both collimators in the different optical layouts tested, with the same optical propagation length. We show that the IL variations of our setup are less than 0.05 dB relative to the IL variations of the collimators alone.

Keywords: free-space optics, variable optical delay lines, optical cavity, auto-alignment

Procedia PDF Downloads 63
77 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions

Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal

Abstract:

We present in this work our model of road traffic emissions (line sources) and of the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows generating emission inventories from a reduced set of input parameters adapted to the existing conditions in Morocco and in other developing countries. While several simplifications are made, the performance of the model is preserved. A further important advantage of the model is that it allows calculating the uncertainty of the emission rate with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented, and tested against a reference solution. It improves the accuracy of previous line-source Gaussian plume formulas without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of the line source sections; these errors will be canceled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, the combination of discretized point-source and analytical line-source formulas remarkably minimizes the error. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
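
For orientation, the Gaussian plume building block for a single point source, with a line source approximated by summing discretized point sources along the road; the dispersion-coefficient power laws and geometry below are illustrative placeholders, not DISPOLSPEM's parametrization.

```python
# Sketch: Gaussian plume for one point source, plus a naive line source built
# by summing discretized point sources along the road (all values illustrative).
import numpy as np

def point_plume(x, y, z, q=1.0, u=3.0, h_s=0.5):
    """x downwind, y crosswind, z height [m]; q emission [g/s], u wind [m/s]."""
    sy = 0.08 * x ** 0.9                        # illustrative sigma_y power law
    sz = 0.06 * x ** 0.85                       # illustrative sigma_z power law
    return (q / (2 * np.pi * u * sy * sz)
            * np.exp(-y ** 2 / (2 * sy ** 2))
            * (np.exp(-(z - h_s) ** 2 / (2 * sz ** 2))
               + np.exp(-(z + h_s) ** 2 / (2 * sz ** 2))))  # ground reflection

x_rec = np.linspace(5.0, 50.0, 10)              # receptors downwind of the road
segments = np.linspace(-100.0, 100.0, 201)      # 1 m road segments along y
conc = sum(point_plume(x_rec, -y0, 1.5, q=0.01) for y0 in segments)
print(conc)                                     # concentrations at z = 1.5 m
```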

Keywords: air pollution, dispersion, emissions, line sources, road traffic, urban transport

Procedia PDF Downloads 409
76 Simulation of Laser Structuring by Three Dimensional Heat Transfer Model

Authors: Bassim Shaheen Bachy, Jörg Franke

Abstract:

In this study, a three-dimensional numerical heat transfer model has been used to simulate the laser structuring of a polymer substrate material for Three-Dimensional Molded Interconnect Devices (3D MID), which are used in advanced multi-functional applications. A finite element method (FEM) transient thermal analysis is performed using APDL (ANSYS Parametric Design Language) provided by ANSYS. In this model, the surface heat source is modeled with a Gaussian distribution, and mixed boundary conditions consisting of convective and radiative heat transfer are also considered in the analysis. The model provides a full description of the temperature distribution and calculates the depth and width of the groove upon material removal at different sets of laser parameters, such as laser power and laser speed. This study also includes an experimental procedure to study the effect of the laser parameters on the depth and width of the removed groove, as verification of the modeled results. Good agreement between the experimental and model results is achieved for a wide range of laser powers. It is found that the quality of the laser structuring process is affected by the laser scan speed and laser power. For high structuring quality, it is suggested to use a laser with high scan speed and moderate to high laser power.
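
A common way to write such a Gaussian surface heat source for a moving laser spot, assuming absorptivity A, laser power P, 1/e² beam radius w, and scan speed v along x (the exact form used in the study may differ):

```latex
% Gaussian surface heat flux of a moving laser spot (assumed standard form)
q(x, y, t) = \frac{2AP}{\pi w^{2}}
             \exp\!\left[-\frac{2\big((x - v t)^{2} + y^{2}\big)}{w^{2}}\right]
```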

Keywords: laser structuring, simulation, finite element analysis, thermal modeling

Procedia PDF Downloads 311
75 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor

Authors: Feng Tao, Han Ye, Shaoyi Liao

Abstract:

Buildings collapse after an earthquake, and people are buried under the ruins; search and rescue should be conducted as soon as possible to save them. Given the complicated environment, irregular aftershocks, and the fact that rescue allows no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. RSSI-based target localization, with its low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI transmission and the environmental impact on RSSI, experiments are conducted in five scenes, and multiple filtering algorithms are applied to the raw RSSI values in order to establish, for each scene, the signal propagation model with minimum test error. The target location is then calculated through an improved centroid algorithm from distances estimated with the signal propagation model. Results show that RSSI-based localization is suitable for large-scale node localization. Among the filtering algorithms, the mixed filtering algorithm (the average of mean, median, and Gaussian filtering) performs better than any single filtering algorithm, and by using the signal propagation model, the minimum distance error between the known nodes and the target node over the five scenes is about 3.06 m.
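
A minimal sketch of two ingredients named above, mixed filtering of raw RSSI samples and a weighted centroid, under an assumed log-distance propagation model; the reference RSSI a, path-loss exponent n, and anchor layout are placeholders that would come from the per-scene fitted model.

```python
# Sketch: mixed RSSI filtering, log-distance inversion, weighted centroid.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def mixed_filter(samples):
    """Average of mean, median, and Gaussian-filtered RSSI samples."""
    s = np.asarray(samples, dtype=float)
    return (s.mean() + np.median(s) + gaussian_filter1d(s, sigma=2.0).mean()) / 3.0

def rssi_to_distance(rssi, a=-40.0, n=2.5):
    """Invert the log-distance model RSSI = a - 10 n log10(d)."""
    return 10 ** ((a - rssi) / (10 * n))

def weighted_centroid(anchors, rssi):
    d = rssi_to_distance(np.asarray(rssi, dtype=float))
    w = 1.0 / d                                  # nearer anchors weigh more
    return (w[:, None] * np.asarray(anchors, dtype=float)).sum(0) / w.sum()

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]  # known nodes [m]
rssi = [mixed_filter([-58, -61, -60]), -72.0, -66.0, -75.0]
print(weighted_centroid(anchors, rssi))          # estimated target position
```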

Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI

Procedia PDF Downloads 261
74 Reducing Hazardous Materials Releases from Railroad Freights through Dynamic Trip Plan Policy

Authors: Omar A. Abuobidalla, Mingyuan Chen, Satyaveer S. Chauhan

Abstract:

Railroad transportation of hazardous materials freight is important to the North American economy, supporting the national supply chain. This paper introduces various extensions of the dynamic hazardous materials trip plan problem. The problem captures most of the operational features of real-world railroad transportation systems, which dynamically initiate a set of blocks and assign each shipment to a single block path or multiple block paths. The dynamic hazardous materials trip plan policies have the distinguishing feature of integrating the blocking plan and the block activation decisions. We also present a non-linear mixed integer programming formulation for each variant and present managerial insights based on a hypothetical railroad network. The computational results reveal that the dynamic car scheduling policies are not only able to take advantage of the capacity of the network but also capable of diminishing the population and environmental risks by rerouting the active blocks along the least risky train services, without sacrificing the cost advantage of the railroad. The empirical results of this research illustrate that the issue of integrating the blocking plan and the train makeup of hazardous materials freight must receive closer attention.

Keywords: dynamic car scheduling, planning and scheduling hazardous materials freights, airborne hazardous materials, Gaussian plume model, integrated blocking and routing plans, box model

Procedia PDF Downloads 185
73 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
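
A hedged sketch of the regularized cost functional and update described above, in our notation (lambda: regularization weight, k: order of the high-order Laplacian penalty, alpha: step size; the paper's exact norms may differ):

```latex
% Sketch: forward solution u(., 1; u_0) vs. target v_1, plus a high-order
% smoothness penalty on u_0; u_0 follows the adjoint-computed gradient
J(u_0) = \tfrac{1}{2}\int_\Omega \lvert u(x, 1; u_0) - v_1(x)\rvert^2 \, dx
       + \lambda \int_\Omega \lvert \Delta^{k} u_0(x) \rvert^2 \, dx,
\qquad u_0 \leftarrow u_0 - \alpha \, \nabla_{u_0} J
```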

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 198
72 Urban Park Green Space Planning and Construction under the Theory of Environmental Justice

Authors: Ma Chaoyang

Abstract:

This article starts from the perspective of environmental justice theory and analyzes the accessibility and regional equity of park green spaces in the central urban area of Chengdu in 2022, based on an improved Gaussian two-step floating catchment area (2SFCA) method and the Gini coefficient method. Then, based on the relevant analysis model, it further explores the correlation between the spatial distribution of park green spaces and the socio-economic conditions of residents, in order to provide a reference for the construction and study of Chengdu's park city under the guidance of fairness and justice. The results show that: (1) Overall, the spatial distribution of parks and green spaces in Chengdu is significantly uneven, with an extreme core-edge pattern and a certain degree of unfairness; that is, an environmental injustice pattern exists. (2) The spatial layout of urban parks and green spaces is strongly guided by socio-economic conditions; in particular, there is a high correlation between housing prices and the provision of parks. (3) Gini coefficient analysis of green space resources shows that residents using the three modes of transportation in the study area have unequal opportunities to enjoy park and green space services, and the degree of unfairness for walking is much greater than for the other two modes.
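
A hedged reconstruction of the Gaussian 2SFCA in its commonly published form: step one computes a supply-demand ratio R_j for each park j, and step two sums these ratios over the parks reachable from location i within the catchment threshold d_0 (the paper's exact decay function may differ).

```latex
% Gaussian-weighted 2SFCA (assumed form): park j with supply S_j, population
% P_k at travel cost d_kj, Gaussian decay G, catchment threshold d_0
G(d) = \frac{e^{-\frac{1}{2}(d/d_0)^2} - e^{-\frac{1}{2}}}{1 - e^{-\frac{1}{2}}},
\quad d \le d_0; \qquad
R_j = \frac{S_j}{\sum_{k:\, d_{kj} \le d_0} P_k \, G(d_{kj})}, \qquad
A_i = \sum_{j:\, d_{ij} \le d_0} R_j \, G(d_{ij})
```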

Keywords: parks and green spaces, environmental justice, two-step floating catchment area (2SFCA), Gini coefficient, spatial distribution

Procedia PDF Downloads 6
71 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis

Authors: Petr Gurný

Abstract:

One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the possibility of determining a financial institution's PD with credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression, and probit regression) from a sample of almost three hundred US commercial banks. Afterwards, these models are compared and verified on a control sample with a view to choosing the best one. The second part of the paper applies the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. To this end, the values of particular indicators are sampled randomly and the PD distribution is estimated, under the assumption that the indicators are distributed according to a multidimensional subordinated Lévy model (specifically, the Variance Gamma model and the Normal Inverse Gaussian model). Although the obtained results show that all the banks are relatively healthy, there is still a considerable chance that a 'financial crisis' will occur, at least in terms of probability, as indicated by the various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.

Keywords: credit-scoring models, multidimensional subordinated Lévy model, probability of default

Procedia PDF Downloads 423
70 Synthesis, Structural, Spectroscopic and Nonlinear Optical Properties of New Picolinate Complex of Manganese (II) Ion

Authors: Ömer Tamer, Davut Avcı, Yusuf Atalay

Abstract:

A novel picolinate complex of the manganese(II) ion, [Mn(pic)₂] [pic: picolinate or 2-pyridinecarboxylate], was prepared and fully characterized by single-crystal X-ray structure determination. The manganese(II) complex was characterized by FT-IR, FT-Raman, and UV-Vis spectroscopic techniques. The C=O, C=N, and C=C stretching vibrations were found to be strong and simultaneously active in both IR and Raman spectra. To support these experimental techniques, density functional theory (DFT) calculations were performed with Gaussian 09W. Although the supramolecular interactions have some influence on the molecular geometry in the solid-state phase, the calculated data show that the predicted geometries can reproduce the structural parameters. The molecular modeling and the calculations of IR, Raman, and UV-Vis spectra were performed at DFT levels. The nonlinear optical (NLO) properties of the synthesized complex were evaluated by determining the dipole moment (µ), polarizability (α), and hyperpolarizability (β). The obtained results demonstrate that the manganese(II) complex is a good candidate NLO material. The stability of the molecule, arising from hyperconjugative interactions and charge delocalization, was analyzed using natural bond orbital (NBO) analysis. The highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO), also known as the frontier molecular orbitals, were simulated, and the obtained energy gap confirmed that charge transfer occurs within the manganese(II) complex. The molecular electrostatic potential (MEP) of the synthesized manganese(II) complex displays the electrophilic and nucleophilic regions: the most negative region is located over the carboxyl O atoms, while the positive region is located over the H atoms.

Keywords: DFT, picolinate, IR, Raman, nonlinear optics

Procedia PDF Downloads 464
69 Influence of Vibration Amplitude on Reaction Time and Drowsiness Level

Authors: Mohd A. Azizan, Mohd Z. Zali

Abstract:

It is well established that exposure to vibration has an adverse effect on human health, comfort, and performance. However, there is little quantitative knowledge on performance combined with drowsiness level during vibration exposure. This paper reports a study investigating the influence of vibration amplitude on seated-occupant reaction time and drowsiness level. Eighteen male volunteers were recruited for this experiment. Before commencing the experiment, the total transmitted acceleration, measured at the interfaces between the seat pan/seatback and the human body, was adjusted to 0.2 m/s² r.m.s. and 0.4 m/s² r.m.s. for each volunteer. Seated volunteers were exposed to Gaussian random vibration in the 1-15 Hz frequency band at the two amplitude levels (low and medium) for 20 minutes on separate days. For the purpose of drowsiness measurement, volunteers were asked to complete a 10-minute PVT test before and after vibration exposure and to rate their subjective drowsiness on the Karolinska Sleepiness Scale (KSS) before vibration, at 5-minute intervals, and following the 20 minutes of vibration exposure. Strong evidence of drowsiness was found, as there was a significant increase in reaction time and number of lapses following exposure to vibration in both conditions; however, the effect is more apparent at the medium vibration amplitude. A steady increase in drowsiness level can also be observed in the KSS for all volunteers, although no significant KSS differences were found between the low and medium vibration amplitudes. It is concluded that exposure to vibration has an adverse effect on human alertness, and this effect is more pronounced at higher vibration amplitude. Taken together, these findings suggest a role of vibration in promoting drowsiness, especially at higher vibration amplitudes.

Keywords: drowsiness, human vibration, Karolinska Sleepiness Scale, psychomotor vigilance test

Procedia PDF Downloads 248
68 Development of Star Image Simulator for Star Tracker Algorithm Validation

Authors: Zoubida Mahi

Abstract:

A successful satellite mission in space requires a reliable attitude and orbit control system to command, control, and position the satellite in appropriate orbits. Several sensors are used for attitude control, such as magnetic sensors, earth sensors, horizon sensors, gyroscopes, and solar sensors. The star tracker is the most accurate of these sensors and is able to offer high-accuracy attitude control without the need for prior attitude information. There are mainly three approaches in star sensor research: digital simulation, hardware-in-the-loop simulation, and field tests of star observation. In the digital simulation approach, all of the processes are done in software, including star image simulation. Hence, it is necessary to develop star image simulation software that can simulate real space environments and various star sensor configurations. In this paper, we present a new stellar image simulation tool that is used to test and validate star sensor algorithms; the developed tool allows simulating stellar images with several types of noise, such as background noise, Gaussian noise, Poisson noise, and multiplicative noise, and with several scenarios that exist in space, such as the presence of the moon, optical system problems, illumination, and false objects. In addition, we present a new star extraction algorithm based on a new centroid calculation method. We compared our algorithm with other star extraction algorithms from the literature, and the results obtained show the star extraction capability of the proposed algorithm.
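
A minimal sketch of the classic intensity-weighted (center-of-mass) centroid that a new centroiding method would be compared against; the window, background threshold, and demo spot are illustrative.

```python
# Sketch: sub-pixel star centroid via the intensity-weighted center of mass.
import numpy as np

def centroid(window, threshold=0.0):
    """window: 2D array of pixel intensities around one detected star."""
    w = np.clip(window.astype(float) - threshold, 0, None)   # remove background
    ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    m = w.sum()
    return (xs * w).sum() / m, (ys * w).sum() / m             # sub-pixel (x, y)

spot = np.array([[0, 1, 2, 1, 0],
                 [1, 4, 9, 4, 1],
                 [2, 9, 20, 9, 2],
                 [1, 4, 9, 4, 1],
                 [0, 1, 2, 1, 0]])
print(centroid(spot))          # -> (2.0, 2.0) for this symmetric spot
```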

Keywords: star tracker, star simulation, star detection, centroid, noise, scenario

Procedia PDF Downloads 55