Search results for: stochastic signals
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1437


1257 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among market prices, or between prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in the statistical literature suggests that, similarly to volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. To train the network, we had to develop a fast algorithm for generating a valid and suitably large sample from the appropriate process. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
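
The paper's fast sample-generation algorithm is not described in this abstract; the sketch below only illustrates the overall idea with an exact but slow Cholesky-based fractional Gaussian noise generator and a small feed-forward network that learns to recover the Hurst exponent from simulated fOU paths (all parameter ranges are illustrative assumptions, not the authors' settings).

# Minimal sketch, not the authors' implementation: simulate fOU paths and train a network
# to recover the Hurst exponent H from them.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fgn_cholesky(n, hurst, dt):
    """Fractional Gaussian noise increments via the exact (but slow) Cholesky method."""
    k = np.arange(n)
    g = np.abs(k[:, None] - k[None, :]).astype(float)
    cov = 0.5 * ((g + 1) ** (2 * hurst) + np.abs(g - 1) ** (2 * hurst) - 2 * g ** (2 * hurst))
    return (np.linalg.cholesky(cov) @ np.random.randn(n)) * dt ** hurst

def fou_path(n, hurst, kappa=1.0, theta=0.0, sigma=1.0, dt=0.01, x0=0.0):
    """Euler scheme for dX = kappa*(theta - X) dt + sigma * dB^H."""
    db = fgn_cholesky(n, hurst, dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + kappa * (theta - x[i]) * dt + sigma * db[i]
    return x

rng = np.random.default_rng(0)
H_train = rng.uniform(0.05, 0.45, 400)                      # "rough" regime, H < 1/2
X_train = np.array([fou_path(200, h) for h in H_train])
net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000).fit(X_train, H_train)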

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 118
1256 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is one of the important fields in biometric technology. More and more researchers have used EEG signals as a data source for biometrics. However, there are some disadvantages to biometrics based on EEG signals. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE) and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distances from each data point in a cluster to all other points within the same cluster and to all data points in the closest cluster are determined. Thus, silhouettes provide a measure of how well a data point was classified when it was assigned to a cluster and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance of each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were significant differences between electrodes for personal authentication (p<0.01); (3) there is no significant difference in authentication performance among feature sets (except PE). Conclusion: the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
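
As a rough illustration of the clustering and quality-scoring step (not the authors' code), the sketch below computes one of the four entropy measures (spectral entropy) per EEG epoch, clusters the feature vectors with k-means, and evaluates the result with the silhouette score; the epoch data are synthetic placeholders.

# Sketch of the k-means + silhouette evaluation step on entropy features.
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def spectral_entropy(epoch, fs=256):
    """Normalized Shannon entropy of the Welch power spectral density."""
    _, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 256))
    p = psd / psd.sum()
    return -np.sum(p * np.log2(p + 1e-12)) / np.log2(len(p))

rng = np.random.default_rng(1)
epochs = rng.standard_normal((220, 512))                  # placeholder EEG epochs
features = np.array([[spectral_entropy(e)] for e in epochs])

km = KMeans(n_clusters=22, n_init=10, random_state=0).fit(features)   # 22 subjects
print("silhouette score:", silhouette_score(features, km.labels_))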

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 285
1255 Evaluation of the Electric Vehicle Impact in Distribution System

Authors: Sania Maghsodloo, Sirus Mohammadi

Abstract:

Electric Vehicle (EV) technology is expected to take a major share in the light-vehicle market in the coming decades. Transportation electrification has become an important issue in recent decades, and the large-scale deployment of EVs has yet to be achieved. The smart coordination of EV demand improves the flexibility of power systems and reduces the costs of power system investment. The uncertainty in EV drivers' behaviour is one of the main problems to solve in order to obtain an optimal integration of EVs into power systems. Charging of EVs will put an extra burden on the distribution grid, and in some cases adjustments will need to be made. Driving patterns are modeled as a stochastic process to make the outcome of the study more realistic. Based on the stochastic data, charging plans are optimized.
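
A toy sketch of this workflow (sampled arrival times and energy needs, followed by a simple valley-filling charging schedule against a base load) is given below; all numbers and distributions are illustrative assumptions, not the study's data.

# Toy sketch: stochastic driving-pattern sampling followed by valley-filling charging.
import numpy as np

rng = np.random.default_rng(42)
hours = np.arange(24)
base_load = 60 + 25 * np.sin((hours - 18) * np.pi / 12)    # MW, hypothetical feeder profile

n_ev = 200
arrival = np.clip(rng.normal(18, 2, n_ev).round().astype(int), 0, 23)   # evening arrivals
energy = np.clip(rng.normal(10, 3, n_ev), 2, 20)                        # kWh per vehicle
p_max = 7.0                                                             # kW charger limit

load = base_load.copy()
for a, e in zip(arrival, energy):
    window = (a + np.arange(10)) % 24                # assume a 10 h charging window
    order = window[np.argsort(load[window])]         # lowest-load ("valley") hours first
    need = e
    for h in order:
        p = min(p_max, need)                         # kW drawn during that hour
        load[h] += p / 1000.0                        # kW -> MW
        need -= p
        if need <= 0:
            break
print("peak load without EVs: %.1f MW, with EVs: %.1f MW" % (base_load.max(), load.max()))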

Keywords: electric vehicles (PEVs), smart grid, Monticello, distribution system

Procedia PDF Downloads 552
1254 Updating Stochastic Hosting Capacity Algorithm for Voltage Optimization Programs and Interconnect Standards

Authors: Nicholas Burica, Nina Selak

Abstract:

The ADHCAT (Automated Distribution Hosting Capacity Assessment Tool) was designed to run hosting capacity analysis on the ComEd system via stochastic DER (Distributed Energy Resource) placement across multiple power flow simulations, evaluated against a set of violation criteria. The violation criteria in the initial version of the tool captured a limited set of issues that individual departments design against for DER interconnections. Enhancements were made to the tool to further align with individual department violation and operation criteria, and new modules were added for future load profile analysis. A reporting engine was created for future analytical use based on the simulations and observations in the tool.

Keywords: distributed energy resources, hosting capacity, interconnect, voltage optimization

Procedia PDF Downloads 190
1253 Roasting Degree of Cocoa Beans by Artificial Neural Network (ANN) Based Electronic Nose System and Gas Chromatography (GC)

Authors: Juzhong Tan, William Kerr

Abstract:

Roasting is a critical procedure in chocolate processing, where special flavors are developed, moisture content is decreased, and better processing properties are obtained. Therefore, determination of the roasting degree of cocoa beans is important for chocolate manufacturers to ensure the quality of chocolate products, and it also determines the commercial value of cocoa beans collected from cocoa farmers. Assessment of roasting degree currently relies on human specialists, who can be biased, and on chemical analyses, which take a long time and are inaccessible to many manufacturers and farmers. In this study, a self-made electronic nose system consisting of gas sensors (TGS 800 and 2000 series) was used to detect the gases generated by cocoa beans with different roasting degrees (0 min, 20 min, 30 min, and 40 min), and the signals collected by the gas sensors were used to train a three-layer ANN. Chemical analysis of the graded beans was performed with a traditional GC-MS system, and the contents of volatile chemical compounds were used to train another ANN as a reference to the ANN trained on electronic nose signals. Both trained ANNs were used to predict the roasting degree of cocoa beans for validation. The best grading accuracy achieved by the ANN trained on electronic nose signals (using signals from TGS 813, 826, 820, 880, 830, 2620, 2602, and 2610) was 96.7%, whereas the GC-trained ANN achieved an accuracy of 83.8%.

Keywords: artificial neural network, cocoa bean, electronic nose, roasting

Procedia PDF Downloads 234
1252 Multi-Layer Perceptron and Radial Basis Function Neural Network Models for Classification of Diabetic Retinopathy Disease Using Video-Oculography Signals

Authors: Ceren Kaya, Okan Erkaymaz, Orhan Ayar, Mahmut Özer

Abstract:

Diabetes Mellitus (diabetes) is a disease based on insulin hormone disorders and causes high blood glucose. Clinical findings indicate that diabetes can be diagnosed by electrophysiological signals obtained from the vital organs. Diabetic retinopathy is one of the most common eye diseases resulting from diabetes, and it is the leading cause of vision loss due to structural alteration of the retinal layer vessels. In this study, features of horizontal and vertical Video-Oculography (VOG) signals have been used to classify non-proliferative and proliferative diabetic retinopathy. Twenty-five features are extracted by applying the discrete wavelet transform to VOG signals taken from 21 subjects. Two models, based on the multi-layer perceptron and the radial basis function, are recommended for the diagnosis of diabetic retinopathy. The proposed models can also detect the level of the disease. We show the comparative classification performance of the proposed models. Our results show that the proposed RBF model (100%) achieves better classification performance than the MLP model (94%).
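
The feature-extraction step can be pictured with the short sketch below (not the authors' code): discrete-wavelet-transform statistics from a VOG trace feeding a multi-layer perceptron; the signal values, wavelet choice, and labels are synthetic placeholders.

# Sketch: DWT sub-band statistics from a VOG signal, classified with an MLP.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def dwt_features(signal, wavelet="db4", level=4):
    """Mean, standard deviation, and energy of each DWT sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [np.mean(c), np.std(c), np.sum(c ** 2)]
    return np.array(feats)

rng = np.random.default_rng(0)
X = np.array([dwt_features(rng.standard_normal(1024)) for _ in range(42)])   # placeholder traces
y = rng.integers(0, 2, 42)       # 0 = non-proliferative, 1 = proliferative (dummy labels)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=3000).fit(X, y)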

Keywords: diabetic retinopathy, discrete wavelet transform, multi-layer perceptron, radial basis function, video-oculography (VOG)

Procedia PDF Downloads 259
1251 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation

Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell

Abstract:

Composite modeling of consolidation processes plays an important role in process and part design by indicating the formation of possible unwanted defects prior to expensive iterative experimental trial and development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest, and different models have been proposed. Errors from modeling and statistical fitting will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation was proposed, which can be readily implemented in various nonlinear finite element packages. In our case, FEniCS was chosen. The coefficients are assumed uncertain, and therefore the distribution of parameters is learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments. There are good statistical reasons why this is not a rigorous approach to take. To overcome these challenges, a hierarchical Bayesian framework was proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using Maximum Entropy methods so that the distribution can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. With this, the paper, as far as the authors are aware, represents the first stochastic finite element implementation in composite process modelling.

Keywords: data-driven models, material consolidation, stochastic finite elements, surrogate models

Procedia PDF Downloads 145
1250 Field-Programmable Gate Array Based Tester for Protective Relay

Authors: H. Bentarzi, A. Zitouni

Abstract:

The reliability of the power grid depends on the successful operation of thousands of protective relays. The failure of one relay to operate as intended may lead the entire power grid to blackout. In fact, major power system failures during transient disturbances may be caused by unnecessary protective relay tripping rather than by the failure of a relay to operate. Adequate relay testing provides a first defense against false trips of the relay and hence improves power grid stability and prevents catastrophic bulk power system failures. The goal of this research project is to design and enhance a relay tester using Field-Programmable Gate Array (FPGA) technology, specifically the NI 7851 card. A PC-based tester framework has been developed using a Simulink power system model for generating signals under different conditions (faults or transient disturbances) and LabVIEW for developing the graphical user interface and configuring the FPGA. In addition, an interface system has been developed for outputting and amplifying the signals without distortion. These signals should resemble those generated by the real power system and be large enough for testing the relay's functionality. The generated signals displayed on the scope are satisfactory. Furthermore, the proposed testing system can be used to improve the performance of protective relays.

Keywords: amplifier class D, field-programmable gate array (FPGA), protective relay, tester

Procedia PDF Downloads 216
1249 The Non-Uniqueness of Partial Differential Equations Options Price Valuation Formula for Heston Stochastic Volatility Model

Authors: H. D. Ibrahim, H. C. Chinwenyi, T. Danjuma

Abstract:

An option is defined as a financial contract that provides the holder the right, but not the obligation, to buy or sell a specified quantity of an underlying asset in the future at a fixed price (called the strike price) on or before the expiration date of the option. This paper examined two approaches for the derivation of the Partial Differential Equation (PDE) option price valuation formula for the Heston stochastic volatility model. We obtained PDE option price valuation formulas using the riskless portfolio method and the application of the Feynman-Kac theorem, respectively. From the results obtained, we see that the two derived PDEs for the Heston model are distinct and non-unique. This establishes the fact of incompleteness in the model for option price valuation.
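
For reference, the riskless-portfolio argument for the Heston model leads to the standard pricing PDE below, written in the usual textbook notation with variance v, vol-of-vol σ, correlation ρ, and market price of volatility risk λ; this is the conventional form, not necessarily the exact notation of the paper:

\frac{\partial V}{\partial t}
+ \tfrac{1}{2} v S^{2} \frac{\partial^{2} V}{\partial S^{2}}
+ \rho \sigma v S \frac{\partial^{2} V}{\partial S \, \partial v}
+ \tfrac{1}{2} \sigma^{2} v \frac{\partial^{2} V}{\partial v^{2}}
+ r S \frac{\partial V}{\partial S}
+ \bigl[ \kappa(\theta - v) - \lambda(S, v, t) \bigr] \frac{\partial V}{\partial v}
- r V = 0 .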

Keywords: Black-Scholes partial differential equations, Ito process, option price valuation, partial differential equations

Procedia PDF Downloads 145
1248 Sleep Apnea Hypopnea Syndrome Diagnosis Using Advanced ANN Techniques

Authors: Sachin Singh, Thomas Penzel, Dinesh Nandan

Abstract:

Accurate diagnosis of Sleep Apnea Hypopnea Syndrome (SAHS) is a difficult problem for human experts because of variability among persons and unwanted noise. This paper proposes the diagnosis of SAHS using airflow, ECG, pulse, and SaO2 signals. The features of each type of these signals are extracted using statistical methods and ANN learning methods. These extracted features are used to approximate the patient's Apnea Hypopnea Index (AHI) using sample signals in the model. Advanced signal processing is also applied to the snore sound signal to locate snore events, and the SaO2 signal is used to determine whether a detected snore event is true or noise. Finally, the Apnea Hypopnea Index (AHI) is calculated from the true snore events detected. Experimental results show that the sensitivity can reach up to 96% and the specificity up to 96% for AHI greater than or equal to 5.

Keywords: neural network, AHI, statistical methods, autoregressive models

Procedia PDF Downloads 119
1247 Quantitative Assessment of Soft Tissues by Statistical Analysis of Ultrasound Backscattered Signals

Authors: Da-Ming Huang, Ya-Ting Tsai, Shyh-Hau Wang

Abstract:

Ultrasound signals backscattered from soft tissues depend mainly on the size, density, distribution, and other elastic properties of the scatterers in the interrogated sample volume. The quantitative analysis of ultrasonic backscattering is frequently implemented using statistical approaches because backscattered signals tend to behave as random variables. Thus, statistical analyses, such as Nakagami statistics, have been applied to characterize the density and distribution of the scatterers of a sample. Yet, the accuracy of statistical analysis can be readily affected by the received signals, which depend on the nature of the incident ultrasound wave and the acoustic properties of the samples. Thus, in the present study, efforts were made to explore the effects of the ultrasound operational mode and the attenuation of biological tissue on the estimation of the corresponding Nakagami statistical parameter (m parameter). In vitro measurements were performed on healthy and pathological fibrotic porcine livers using different single-element ultrasound transducers and duty cycles of the incident tone burst, ranging respectively from 3.5 to 7.5 MHz and from 10 to 50%. Results demonstrated that the estimated m parameter tends to be sensitively affected by the ultrasound operational mode as well as by tissue attenuation. The healthy and pathological tissues may be characterized quantitatively by the m parameter under fixed measurement conditions and proper calibration.
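
A common moment-based estimate of the Nakagami m parameter, sketched below on a simulated RF line (not the study's data), uses the envelope R obtained from the analytic signal: m = (E[R²])² / Var(R²).

# Sketch: moment-based Nakagami m-parameter estimation from a backscattered envelope.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
rf = rng.standard_normal(5000)           # stand-in for a backscattered RF A-line
envelope = np.abs(hilbert(rf))           # envelope via the analytic signal

r2 = envelope ** 2
m_hat = np.mean(r2) ** 2 / np.var(r2)    # inverse normalized variance estimator
omega_hat = np.mean(r2)                  # scaling parameter (mean power)
print(f"estimated m = {m_hat:.3f}, omega = {omega_hat:.3f}")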

Keywords: ultrasound backscattering, statistical analysis, operational mode, attenuation

Procedia PDF Downloads 323
1246 Stochastic Response of an Airfoil and Its Effects on Limit Cycle Oscillations’ Behavior under Stall Flutter Regime

Authors: Ketseas Dimitris

Abstract:

In this work, we investigate the effect of noise on a classical two-degree-of-freedom pitch-plunge aeroelastic system. The inlet velocity of the flow is modelled as a stochastically varying parameter by the Ornstein-Uhlenbeck (OU) stochastic process. The system is a 2D airfoil, and the elastic problem is simulated using linear springs. We study the manifestation of Limit Cycle Oscillations (LCO) that correspond to the varying fluid velocity under the dynamic stall regime. We aim to delve into the unexplored facets of the classical pitch-plunge aeroelastic system, seeking a comprehensive understanding of how parametric noise influences the occurrence of LCO and expands the boundaries of its known behavior.
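
The stochastically varying inlet velocity can be generated with a standard Euler-Maruyama discretization of the OU process, as in the short sketch below (parameter values are illustrative, not those of the study).

# Sketch: Ornstein-Uhlenbeck inlet velocity via Euler-Maruyama.
import numpy as np

def ou_velocity(u_mean=15.0, theta=0.5, sigma=1.0, dt=1e-3, n=20000, seed=0):
    """dU = theta*(u_mean - U) dt + sigma dW, discretized with Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    u = np.empty(n)
    u[0] = u_mean
    for i in range(1, n):
        u[i] = u[i - 1] + theta * (u_mean - u[i - 1]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return u

inflow = ou_velocity()    # time series to feed the pitch-plunge aeroelastic solver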

Keywords: aerodynamics, aeroelasticity, computational fluid mechanics, stall flutter, stochastic processes, limit cycle oscillation

Procedia PDF Downloads 62
1245 Construction of Graph Signal Modulations via Graph Fourier Transform and Its Applications

Authors: Xianwei Zheng, Yuan Yan Tang

Abstract:

Classical windowed Fourier transform has been widely used in signal processing, image processing, machine learning and pattern recognition. The related Gabor transform is powerful enough to capture the texture information of any given dataset. Recently, in the emerging field of graph signal processing, researchers have been developing a theory to handle so-called graph signals. Within this developing theory, the windowed graph Fourier transform has been constructed to establish a time-frequency analysis framework for graph signals. The windowed graph Fourier transform is defined by using the translation and modulation operators of graph signals, following calculations similar to those of the classical windowed Fourier transform. The translation and modulation operators of graph signals are defined by using the Laplacian eigenvectors as follows. For a given graph signal, its translation is defined in a manner similar to its definition in classical signal processing: where the classical translation operator can be defined by using the Fourier atoms, the graph signal translation is defined analogously by using the Laplacian eigenvectors. The modulation on the graph can also be established by using the Laplacian eigenvectors. The windowed graph Fourier transform based on these two operators has been applied to obtain time-frequency representations of graph signals. Fundamentally, the existing modulation operator is defined, similarly to classical modulation, by multiplying a graph signal with the entries of a Laplacian eigenvector. However, a single Laplacian eigenvector entry cannot play the same role as a Fourier atom, and this definition ignores the relationship between the translation and modulation operators. In this paper, a new definition of the modulation operator is proposed, and thus another time-frequency framework for graph signals is constructed. The relationship between translation and modulation can be established through the Fourier transform: for any signal, the Fourier transform of its translation is the modulation of its Fourier transform. Thus, the modulation of any signal can be defined as the inverse Fourier transform of the translation of its Fourier transform. Analogously, the graph modulation of any graph signal can be defined as the inverse graph Fourier transform of the translation of its graph Fourier transform. This novel definition of the graph modulation operator establishes a relationship between the translation and modulation operations. The new modulation operation and the original translation operation are used to construct a new framework for graph signal time-frequency analysis. Furthermore, a windowed graph Fourier frame theory is developed. Necessary and sufficient conditions for constructing windowed graph Fourier frames, tight frames and dual frames are presented in this paper. The novel graph signal time-frequency analysis framework is applied to signals defined on well-known graphs, e.g., the Minnesota road graph and random graphs. Experimental results show that the novel framework captures new features of graph signals.
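
A minimal numerical sketch of these definitions on a small ring graph is given below (our own illustration, not the paper's code): the graph Fourier transform from the Laplacian eigenvectors, the standard graph translation, and the proposed modulation taken as the inverse graph Fourier transform of the translated spectrum.

# Sketch: GFT, graph translation, and the proposed graph modulation on a ring graph.
import numpy as np

N = 8
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
L = np.diag(A.sum(1)) - A
lam, U = np.linalg.eigh(L)                # columns of U are the Laplacian eigenvectors

gft = lambda f: U.T @ f                   # graph Fourier transform
igft = lambda fh: U @ fh                  # inverse graph Fourier transform

def translate(f, i):
    """Standard graph translation of f to vertex i."""
    return np.sqrt(N) * igft(gft(f) * U[i, :])

def modulate(f, k):
    """Proposed modulation: inverse GFT of the translation of the graph Fourier transform."""
    return igft(translate(gft(f), k))

f = np.random.default_rng(0).standard_normal(N)
atom = modulate(translate(f, 2), 3)       # windowed-GFT style atom built from f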

Keywords: graph signals, windowed graph Fourier transform, windowed graph Fourier frames, vertex frequency analysis

Procedia PDF Downloads 341
1244 Stochastic Edge Based Anomaly Detection for Supervisory Control and Data Acquisitions Systems: Considering the Zambian Power Grid

Authors: Lukumba Phiri, Simon Tembo, Kumbuso Joshua Nyoni

Abstract:

In Zambia, recent initiatives by various power operators, such as ZESCO and CEC, and by consumers such as the mines, to upgrade power systems into smart grids target an even tighter integration with information technologies to enable the integration of renewable energy sources, local and bulk generation, and demand response. Thus, for the reliable operation of smart grids, their information infrastructure must be secure and reliable in the face of both failures and cyberattacks. Due to the nature of the systems, ICS/SCADA cybersecurity and governance face additional challenges compared to corporate networks, and critical systems may be left exposed. Control frameworks exist internationally, such as the NIST framework; however, they are generic and do not meet the domain-specific needs of SCADA systems. Zambia is also lagging in cybersecurity awareness and adoption; therefore, there is a concern about securing the ICS controlling key infrastructure critical to the Zambian economy, as there are few known facts about the true posture. In this paper, we introduce a stochastic Edge-based Anomaly Detection for SCADA systems (SEADS) framework for threat modeling and risk assessment. SEADS enables the calculation of steady-state probabilities that are further applied to establish metrics like system availability, maintainability, and reliability.
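
Steady-state probabilities of this kind can be obtained from a Markov model of the monitored device, as in the toy sketch below (states and transition rates are hypothetical, not from the paper).

# Sketch: steady-state probabilities of a small Markov model and an availability metric.
import numpy as np

# States: 0 = healthy, 1 = anomalous/under attack, 2 = failed. Q is the generator matrix.
Q = np.array([[-0.02,  0.015,  0.005],
              [ 0.30, -0.35,   0.05 ],
              [ 0.10,  0.00,  -0.10 ]])

# Solve pi Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]     # system still delivering service (healthy or degraded)
print("steady-state probabilities:", np.round(pi, 4), "availability:", round(availability, 4))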

Keywords: anomaly, availability, detection, edge, maintainability, reliability, stochastic

Procedia PDF Downloads 110
1243 Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals

Authors: Zhongmin Wang, Wudong Fan, Hengshan Zhang, Yimin Zhou

Abstract:

In data-driven prognostic methods, the prediction accuracy of remaining useful life estimation for bearings mainly depends on the performance of health indicators, which are usually fused from statistical features extracted from vibration signals. However, the existing health indicators have the following two drawbacks: (1) the different ranges of the statistical features contribute differently to the construction of the health indicators, and expert knowledge is required to extract the features; (2) when convolutional neural networks are utilized to tackle time-frequency features of signals, the time-series nature of the signals is not considered. To overcome these drawbacks, in this study, a method combining a convolutional neural network with a gated recurrent unit is proposed to extract time-frequency image features. The extracted features are utilized to construct a health indicator and predict the remaining useful life of bearings. First, original signals are converted into time-frequency images by using the continuous wavelet transform so as to form the original feature sets. Second, with the convolutional and pooling layers of convolutional neural networks, the most sensitive features of the time-frequency images are selected from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method outperforms related studies that used the same bearing dataset provided by PRONOSTIA.

Keywords: continuous wavelet transform, convolutional neural network, gated recurrent unit, health indicators, remaining useful life

Procedia PDF Downloads 133
1242 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling

Authors: Ali Ben Abbes, Imed Riadh Farah, Vincent Barra

Abstract:

Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas has been a subject of intensive research. Timely and accurate data on the spatio-temporal changes of urban areas are therefore required. The data extracted from multi-temporal satellite images are usually non-stationary. In fact, the changes evolve in time and space. This paper proposes a methodology for change detection in urban areas by combining a non-stationary decomposition method and stochastic modeling. We consider as input of our methodology a sequence of satellite images I1, I2, … In at different periods (t = 1, 2, ..., n). Firstly, preprocessing of the multi-temporal satellite images (e.g., radiometric, atmospheric and geometric corrections) is applied. The systematic study of global urban expansion in our methodology can be approached in two ways: the first considers the urban area as one single object as opposed to non-urban areas (e.g., vegetation, bare soil and water), with the objective of extracting the urban mask; the second aims to obtain more knowledge of the urban area by distinguishing different types of tissue within it. In order to validate our approach, we used a database of Tres Cantos-Madrid in Spain, derived from Landsat over the period from January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of our method.

Keywords: multi-temporal satellite image, urban growth, non-stationary, stochastic model

Procedia PDF Downloads 428
1241 Forecasting Silver Commodity Prices Using Geometric Brownian Motion: A Stochastic Approach

Authors: Sina Dehghani, Zhikang Rong

Abstract:

Historically, a variety of approaches have been taken to forecast commodity prices due to the significant implications of these values for the global economy. An accurate forecasting tool for a valuable commodity would significantly benefit investors and governmental agencies. Silver, in particular, has grown significantly as a commodity in recent years due to its use in healthcare and technology. This manuscript aims to utilize the Geometric Brownian Motion predictive model to forecast silver commodity prices over multiple 3-year periods. The results of the study indicate that the model has several limitations, particularly its inability to work effectively over longer periods of time, but it was still highly effective over shorter time frames. This study sets a baseline for silver commodity forecasting with GBM, and the model could be further strengthened with refinement.
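
A minimal GBM forecasting sketch (not the authors' code) is shown below: drift and volatility are calibrated from historical log-returns and forward paths are simulated; the price series used here is synthetic.

# Sketch: calibrate GBM from log-returns and simulate 3-year forward paths.
import numpy as np

rng = np.random.default_rng(0)
prices = 20 * np.exp(np.cumsum(rng.normal(0.0002, 0.015, 750)))   # placeholder daily silver prices

log_ret = np.diff(np.log(prices))
dt = 1 / 252                                      # one trading day in years
sigma = log_ret.std(ddof=1) / np.sqrt(dt)
mu = log_ret.mean() / dt + 0.5 * sigma ** 2       # GBM drift recovered from the mean log-return

horizon, n_paths = 252 * 3, 1000                  # three years of daily steps
z = rng.standard_normal((n_paths, horizon))
steps = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
paths = prices[-1] * np.exp(np.cumsum(steps, axis=1))
print("median 3-year forecast:", round(float(np.median(paths[:, -1])), 2))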

Keywords: geometric Brownian motion, commodity, risk management, volatility, stochastic behavior, price forecasting

Procedia PDF Downloads 23
1240 Comparison of Linear Discriminant Analysis and Support Vector Machine Classifications for Electromyography Signals Acquired at Five Positions of Elbow Joint

Authors: Amna Khan, Zareena Kausar, Saad Malik

Abstract:

Bio-mechatronics has extended applications in the field of rehabilitation. It has been contributing since World War II to improving the applicability of prostheses and assistive devices in real-life scenarios. In this paper, classification accuracies have been compared for two classifiers against five positions of the elbow. Electromyography (EMG) signals were acquired directly from the skeletal muscles of the human forearm for each of the three defined positions and at modified extreme positions of elbow flexion and extension using the 8-electrode Myo armband sensor. Features were extracted from the filtered EMG signals for each position. The performance of two classifiers, the support vector machine (SVM) and linear discriminant analysis (LDA), has been compared by analyzing the classification accuracies. SVM achieved classification accuracies between 90-96%, in contrast to the 84-87% achieved by LDA for the five defined positions of the elbow, keeping the number of samples and the selected features the same for both SVM and LDA.

Keywords: classification accuracies, electromyography, linear discriminant analysis (LDA), Myo armband sensor, support vector machine (SVM)

Procedia PDF Downloads 368
1239 Designing Emergency Response Network for Rail Hazmat Shipments

Authors: Ali Vaezi, Jyotirmoy Dalal, Manish Verma

Abstract:

The railroad is one of the primary transportation modes for hazardous materials (hazmat) shipments in North America. Installing an emergency response network capable of providing a commensurate response is one of the primary levers to contain (or mitigate) the adverse consequences from rail hazmat incidents. To this end, we propose a two-stage stochastic program to determine the location of and equipment packages to be stockpiled at each response facility. The raw input data collected from publicly available reports were processed, fed into the proposed optimization program, and then tested on a realistic railroad network in Ontario (Canada). From the resulting analyses, we conclude that the decisions based only on empirical datasets would undermine the effectiveness of the resulting network; coverage can be improved by redistributing equipment in the network, purchasing equipment with higher containment capacity, and making use of a disutility multiplier factor.

Keywords: hazmat, rail network, stochastic programming, emergency response

Procedia PDF Downloads 182
1238 Testing and Validation Stochastic Models in Epidemiology

Authors: Snigdha Sahai, Devaki Chikkavenkatappa Yellappa

Abstract:

This study outlines approaches for testing and validating stochastic models used in epidemiology, focusing on the integration and functional testing of simulation code. It details methods for combining simple functions into comprehensive simulations, distinguishing between deterministic and stochastic components, and applying tests to ensure robustness. Techniques include isolating stochastic elements, utilizing large sample sizes for validation, and handling special cases. Practical examples are provided using R code to demonstrate integration testing, handling of incorrect inputs, and special cases. The study emphasizes the importance of both functional and defensive programming to enhance code reliability and user-friendliness.
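
The testing strategy described above can be illustrated with a short Python analogue (the paper's examples use R): the deterministic component of a toy transmission step is tested exactly, the stochastic component is tested statistically with a large sample, and invalid inputs are rejected defensively. The model and tolerances below are illustrative assumptions.

# Sketch: integration/functional testing of a toy stochastic transmission step.
import numpy as np

def infection_prob(beta, n_infected, n_total):
    """Deterministic component: per-susceptible infection probability."""
    if beta < 0 or not (0 <= n_infected <= n_total):
        raise ValueError("invalid inputs")
    return 1 - np.exp(-beta * n_infected / n_total)

def new_infections(beta, n_susceptible, n_infected, n_total, rng):
    """Stochastic component: binomial draw of new infections."""
    return rng.binomial(n_susceptible, infection_prob(beta, n_infected, n_total))

# Exact test of the deterministic part.
assert infection_prob(0.0, 10, 100) == 0.0

# Statistical test of the stochastic part: large sample, tolerance from the standard error.
rng = np.random.default_rng(123)
draws = np.array([new_infections(0.3, 900, 100, 1000, rng) for _ in range(20000)])
expected = 900 * infection_prob(0.3, 100, 1000)
assert abs(draws.mean() - expected) < 3 * draws.std(ddof=1) / np.sqrt(draws.size)

# Defensive test: an invalid input must raise an error.
try:
    infection_prob(-1.0, 10, 100)
except ValueError:
    pass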

Keywords: computational epidemiology, epidemiology, public health, infectious disease modeling, statistical analysis, health data analysis, disease transmission dynamics, predictive modeling in health, population health modeling, quantitative public health, random sampling simulations, randomized numerical analysis, simulation-based analysis, variance-based simulations, algorithmic disease simulation, computational public health strategies, epidemiological surveillance, disease pattern analysis, epidemic risk assessment, population-based health strategies, preventive healthcare models, infection dynamics in populations, contagion spread prediction models, survival analysis techniques, epidemiological data mining, host-pathogen interaction models, risk assessment algorithms for disease spread, decision-support systems in epidemiology, macro-level health impact simulations, socioeconomic determinants in disease spread, data-driven decision making in public health, quantitative impact assessment of health policies, biostatistical methods in population health, probability-driven health outcome predictions

Procedia PDF Downloads 6
1237 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery

Authors: Forouzan Salehi Fergeni

Abstract:

A brain-computer interface (BCI) system converts the movement intents of a person into commands for action using brain signals such as electroencephalogram (EEG) signals. When left- or right-hand motions are imagined, different patterns of brain activity appear, which can be employed as BCI signals for control. To improve BCI structures, effective and accurate techniques for increasing the classification precision of motor imagery (MI) based on EEG are greatly needed. Subject dependency and non-stationarity are two features of EEG signals, so EEG signals must be effectively processed before being used in BCI applications. In the present study, after applying an 8 to 30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for the purpose of denoising, and then a method based on analysis of variance is used to select more appropriate and informative channels from a large number of different channels. After ordering channels based on their efficiencies, sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with different machine learning and neural network classifiers, namely k-nearest neighbor, probabilistic neural network, support vector machine, extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, with the purpose of comparing their performance in this application. Utilizing a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI competition III. The outcomes demonstrated that the SVM classifier achieved the greatest classification precision of 97% when compared to the other available approaches. The overall findings confirm that the suggested framework is reliable and computationally effective for the construction of BCI systems and surpasses the existing methods.
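
The channel-ranking and sequential-forward-selection steps might look like the sketch below (our illustration, not the authors' code): ANOVA F-scores rank channels, channels are added one at a time, and each candidate subset is scored by cross-validated SVM accuracy on synthetic placeholder features.

# Sketch: ANOVA-based ranking plus sequential forward channel selection with an SVM.
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 22
X = rng.standard_normal((n_trials, n_channels))   # one feature (e.g., band power) per channel
y = rng.integers(0, 2, n_trials)                  # left- vs right-hand motor imagery labels

f_scores, _ = f_classif(X, y)
ranking = np.argsort(f_scores)[::-1]              # most informative channels first

selected, best_acc = [], 0.0
for ch in ranking:
    candidate = selected + [int(ch)]
    acc = cross_val_score(SVC(kernel="rbf"), X[:, candidate], y, cv=10).mean()
    if acc > best_acc:                            # keep the channel only if accuracy improves
        selected, best_acc = candidate, acc
print("selected channels:", selected, "cross-validated accuracy: %.3f" % best_acc)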

Keywords: brain-computer interface, channel selection, motor imagery, support-vector-machine

Procedia PDF Downloads 50
1236 Global Direct Search Optimization of a Tuned Liquid Column Damper Subject to Stochastic Load

Authors: Mansour H. Alkmim, Adriano T. Fabro, Marcus V. G. De Morais

Abstract:

In this paper, a global direct search optimization algorithm is presented for tuning a tuned liquid column damper (TLCD), a class of passive structural control device, to reduce vibration. The objective is to find optimized parameters for the TLCD under stochastic loads from different wind power spectral densities. A verification is made considering the analytical solution of an undamped primary system under white noise excitation. Finally, a numerical example considering a simplified wind turbine model is given to illustrate the efficacy of the TLCD. Results from the random vibration analysis are shown for four types of random wind excitation models, where the obtained response PSDs showed good vibration attenuation.
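
A generalized pattern (compass) search of the kind referenced in the keywords can be sketched as below; the objective function is a toy stand-in for the stochastic response variance of the TLCD, not the paper's model.

# Sketch: compass-style generalized pattern search on a toy TLCD objective.
import numpy as np

def response_variance(x):
    """Toy surrogate for response variance as a function of (tuning ratio, head-loss coefficient)."""
    nu, eta = x
    return (nu - 0.98) ** 2 + 0.3 * (eta - 1.5) ** 2 + 0.05 * nu * eta

def pattern_search(f, x0, step=0.5, tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    directions = np.vstack([np.eye(len(x)), -np.eye(len(x))])   # compass poll set
    for _ in range(max_iter):
        improved = False
        for d in directions:
            trial = x + step * d
            f_trial = f(trial)
            if f_trial < fx:
                x, fx, improved = trial, f_trial, True
                break
        if not improved:
            step *= 0.5                # contract the mesh after an unsuccessful poll
            if step < tol:
                break
    return x, fx

x_opt, f_opt = pattern_search(response_variance, [0.5, 0.5])
print("optimized (tuning ratio, head loss):", np.round(x_opt, 4), "objective:", round(f_opt, 6))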

Keywords: generalized pattern search, parameter optimization, random vibration analysis, vibration suppression

Procedia PDF Downloads 275
1235 Impact Evaluation and Technical Efficiency in Ethiopia: Correcting for Selectivity Bias in Stochastic Frontier Analysis

Authors: Tefera Kebede Leyu

Abstract:

The purpose of this study was to estimate the impact of LIVES project participation on the level of technical efficiency of farm households in three regions of Ethiopia. We used household-level data gathered by IRLI between February and April 2014 for the year 2013 (retroactive). Data on 1,905 sample households (754 intervention and 1,151 control) were analyzed using the STATA software package, version 14. Efforts were made to combine stochastic frontier modeling with impact evaluation methodology using the Heckman (1979) two-stage model to deal with possible selectivity bias arising from unobservable characteristics in the stochastic frontier model. Results indicate that farmers in the two groups are not efficient and operate below their potential frontiers, i.e., there is a potential to increase crop productivity through efficiency improvements in both groups. In addition, the empirical results revealed selection bias in both groups of farmers, confirming the justification for the use of the selection-bias-corrected stochastic frontier model. It was also found that intervention farmers achieved higher technical efficiency scores than the control group of farmers. Furthermore, the selectivity-bias-corrected model showed a different technical efficiency score for the intervention farmers, while it more or less remained the same for the control group farmers. However, the control group of farmers shows a higher dispersion, as measured by the coefficient of variation, compared to the intervention counterparts. Among the explanatory variables, the study found that farmer's age (a proxy for farm experience), land certification, frequency of visits to the improved seed center, farmer's education and row planting are important contributing factors for participation decisions and hence for the technical efficiency of farmers in the study areas. We recommend that policies targeting the design of development intervention programs in the agricultural sector focus more on providing farmers with on-farm visits by extension workers, provision of credit services, establishment of farmers' training centers and adoption of modern farm technologies. Finally, we recommend further research to deal with this kind of methodological framework using a panel data set to test whether technical efficiency starts to increase or decrease with the length of time that farmers participate in development programs.
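
The Heckman (1979) two-step correction used here can be illustrated with the short sketch below (synthetic data and placeholder variable names, not the survey's fields): a probit participation equation yields the inverse Mills ratio, which is then added to the second-stage outcome regression.

# Sketch: Heckman two-step selection correction on synthetic data.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1000
z = rng.standard_normal((n, 2))                      # selection covariates (placeholders)
x = rng.standard_normal((n, 2))                      # outcome covariates (placeholders)
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n)   # correlated errors cause selection bias
participate = (z @ np.array([0.8, -0.5]) + u[:, 0] > 0).astype(int)
outcome = 2.0 + x @ np.array([1.0, 0.5]) + u[:, 1]   # observed for participants only

# Step 1: probit participation equation and the inverse Mills ratio.
Z = sm.add_constant(z)
probit = sm.Probit(participate, Z).fit(disp=0)
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome equation on the selected sample, augmented with the inverse Mills ratio.
sel = participate == 1
X2 = sm.add_constant(np.column_stack([x[sel], imr[sel]]))
ols = sm.OLS(outcome[sel], X2).fit()
print(ols.params)    # the coefficient on the Mills ratio indicates selection bias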

Keywords: impact evaluation, efficiency analysis and selection bias, stochastic frontier model, Heckman-two step

Procedia PDF Downloads 75
1234 Classification of Echo Signals Based on Deep Learning

Authors: Aisulu Tileukulova, Zhexebay Dauren

Abstract:

Radar plays an important role because it is widely used in civil and military fields. Target detection is one of the most important radar applications. The accuracy of detecting inconspicuous aerial objects in radar facilities is lower against a background of noise. Convolutional neural networks can be used to improve the recognition of this type of aerial object. The purpose of this work is to develop an algorithm for recognizing aerial objects using convolutional neural networks, as well as to train the neural network. In this paper, the structure of the convolutional neural network (CNN) consists of different types of layers: 8 convolutional layers and 3 layers of a fully connected perceptron. ReLU is used as the activation function in the convolutional layers, while the last layer uses softmax. It is necessary to form a data set for training the neural network in order to detect a target. We built a confusion matrix of the CNN model to measure its effectiveness. The results showed that the accuracy when testing the model was 95.7%. Classification of echo signals using a CNN shows high accuracy and significantly speeds up the process of predicting the target.

Keywords: radar, neural network, convolutional neural network, echo signals

Procedia PDF Downloads 353
1233 Efficiency of Secondary Schools by ICT Intervention in Sylhet Division of Bangladesh

Authors: Azizul Baten, Kamrul Hossain, Abdullah-Al-Zabir

Abstract:

The objective of this study is to develop an appropriate stochastic frontier model of secondary school efficiency with ICT intervention and to examine the impact of ICT challenges on secondary school efficiency in the Sylhet division of Bangladesh using stochastic frontier analysis. The Translog stochastic frontier model was found to be more appropriate than the Cobb-Douglas model for secondary school efficiency with ICT intervention. Based on the results of the Cobb-Douglas model, it is found that the coefficients of the number of teachers, the number of students, and teaching ability had a positive effect on increasing the level of efficiency, indicating that these are related to technical efficiency. In the case of the inefficiency effects for both the Cobb-Douglas and Translog models, the coefficient of the ICT lab decreased secondary school inefficiency, but the online class in school was found to increase the level of inefficiency. The coefficient of teachers' preference for ICT tools like multimedia projectors played a contributing role in decreasing secondary school inefficiency in the Sylhet division of Bangladesh. The interaction effects of the number of teachers and the number of classrooms, the number of students and the number of classrooms, the number of students and teaching ability, and the number of classrooms and the teaching ability of the teachers were recorded with positive values, and these have a positive impact on increasing secondary school efficiency. The overall mean efficiency of urban secondary schools was found to be 84.66% for the Translog model and 83.63% for the Cobb-Douglas model. The overall mean efficiency of rural secondary schools was found to be 80.98% for the Translog model and 81.24% for the Cobb-Douglas model. So, the urban secondary schools performed better than the rural secondary schools in the Sylhet division. It is observed from the results of the Tobit model that the teacher-student ratio had a positive influence on secondary school efficiency. Teaching experience of 1 to 5 years and above 10 years, MPO-type schools, and the conventional teaching method had negative and significant influences on secondary school efficiency. The estimated value of σ-squared (0.0625) was different from zero, indicating a good fit. The value of γ (0.9872) was positive and can be interpreted as follows: 98.72 percent of the random variation in secondary school outcomes is due to inefficiency.

Keywords: efficiency, secondary schools, ICT, stochastic frontier analysis

Procedia PDF Downloads 151
1232 An Experimental Investigation of the Cognitive Noise Influence on the Bistable Visual Perception

Authors: Alexander E. Hramov, Vadim V. Grubov, Alexey A. Koronovskii, Maria K. Kurovskaya, Anastasija E. Runnova

Abstract:

The perception of visual signals in the brain was among the first issues discussed in terms of multistability, which has been introduced to provide mechanisms for information processing in biological neural systems. In this work, the influence of cognitive noise on the visual perception of multistable pictures has been investigated. The study includes an experiment with the bistable Necker cube illusion and the theoretical background explaining the obtained experimental results. In our experiments, Necker cubes with different wireframe contrasts were demonstrated repeatedly to different people, and the probability of choosing one of the cube's projections was calculated for each picture. The Necker cube was placed in the middle of a computer screen as black lines on a white background. The contrast of the three middle lines centered in the left middle corner was used as one of the control parameters. Between two successive demonstrations of Necker cubes, another picture was shown to distract attention and to make the perception of the next Necker cube more independent from the previous one. Eleven subjects, male and female, aged 20 through 45, were studied. The choice of the Necker cube projection was detected with the electroencephalograph recorder Encephalan-EEGR-19/26 (Medicom MTD). To treat the experimental results, we carried out a theoretical analysis using the simplest double-well potential model in the presence of noise, which leads to the Fokker-Planck equation for the probability density of the stochastic process. For the first time, an analytical solution for the probability of selecting one of the Necker cube projections for different values of wireframe contrast has been obtained. Furthermore, using the results of the experimental measurements and the method of least squares, we calculated the value of the parameter corresponding to the cognitive noise of the person being studied. The range of cognitive noise parameter values for the studied subjects turned out to be [0.08; 0.55]. It should be noted that the experimental results have good reproducibility: the same person studied repeatedly on another day produces very similar data with very close levels of cognitive noise. We found an excellent agreement between the analytically deduced probability and the results obtained in the experiment. The good qualitative agreement between theoretical and experimental results indicates that even such a simple model allows simulating brain cognitive dynamics and estimating an important cognitive characteristic of the brain, such as brain noise.
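
For reference, a common formalization of this setting (our notation, since the abstract does not give the exact potential) models the decision variable x with overdamped double-well dynamics and the associated Fokker-Planck equation:

\dot{x} = -\frac{dU(x)}{dx} + \xi(t), \qquad
U(x) = \frac{x^{4}}{4} - \frac{x^{2}}{2} - a x ,

\frac{\partial P(x,t)}{\partial t}
= \frac{\partial}{\partial x}\!\left[\frac{dU(x)}{dx}\,P(x,t)\right]
+ D\,\frac{\partial^{2} P(x,t)}{\partial x^{2}}, \qquad
P_{\mathrm{st}}(x) \propto \exp\!\left(-\frac{U(x)}{D}\right),

where a encodes the contrast-dependent asymmetry between the two interpretations, D is the cognitive-noise intensity, and the probability of choosing one projection follows by integrating the stationary density over the corresponding well.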

Keywords: bistability, brain, noise, perception, stochastic processes

Procedia PDF Downloads 445
1231 Novel Phenolic Biopolyether with Potential Therapeutic Effect

Authors: V. Barbakadze, L. Gogilashvili, L. Amiranashvili, M. Merlani, K. Mulkijanyan

Abstract:

High-molecular fractions from several species of two genera (Symphytum and Anchusa) of the Boraginaceae family, namely Symphytum asperum, S. caucasicum, S. officinale, and Anchusa italica, were isolated. According to IR, 13C and 1H NMR, 2D heteronuclear 1H/13C HSQC spectral data and a 1D NOE experiment, the main structural element of these preparations was found to be a regularly substituted polyoxyethylene, namely poly[3-(3,4-dihydroxyphenyl)glyceric acid] (PDPGA) or poly[oxy-1-carboxy-2-(3,4-dihydroxyphenyl)ethylene]. To our knowledge, such a caffeic acid-derived biopolymer has not been reported before and has been identified here for the first time. This compound represents a new class of natural polyethers with a residue of 3-(3,4-dihydroxyphenyl)glyceric acid as the repeating unit. Most of the carboxylic groups of PDPGA from A. italica, unlike the polymer from S. asperum, S. caucasicum, and S. officinale, are methylated. The 2D DOSY experiment gave a similar diffusion coefficient for the methylated and non-methylated signals of A. italica PDPGA. Both sets of signals fell on the same horizontal line. This would imply a similar molecular weight for the methylated and non-methylated polymers. This was further evidenced by graphic representations of the intensity decay of the 1H signals of aromatic H-2″ and H-1 at δ 7.16 and 5.24 and that of the methoxy group at δ 3.85. These three signals essentially showed the same curve shape. According to the results of in vitro and in vivo experiments, PDPGA from S. asperum and S. caucasicum could be considered a potential anti-inflammatory, wound-healing and anti-cancer therapeutic agent.

Keywords: caffeic acid-derived polyether, poly[3-(3,4-dihydroxyphenyl)glyceric acid], poly[oxy-1-carboxy-2-(3,4-dihydroxyphenyl)ethylene], symphytum, anchusa

Procedia PDF Downloads 403
1230 Epileptic Seizure Prediction by Exploiting Signal Transitions Phenomena

Authors: Mohammad Zavid Parvez, Manoranjan Paul

Abstract:

A seizure prediction method is proposed that extracts global features using phase correlation between adjacent epochs to detect relative changes, and local features using fluctuation/deviation within an epoch to determine fine changes of different EEG signals. A classifier and a regularization technique are applied for the reduction of false alarms and improvement of the overall prediction accuracy. The experiments show that the proposed method outperforms state-of-the-art methods and provides high prediction accuracy (i.e., 97.70%) with a low false alarm rate using EEG signals from different brain locations in a benchmark data set.

Keywords: epilepsy, seizure, phase correlation, fluctuation, deviation

Procedia PDF Downloads 467
1229 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles

Authors: Nozar Kishi, Babak Kamrani, Filmon Habte

Abstract:

Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Japan experiences, every year on average, more than 10 tropical cyclones that come within damaging reach, and earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs and governmental institutions. KCC's (Karen Clark and Company) catastrophe models are procedures constituted of four modular segments: 1) stochastic event sets that represent the statistics of past events, 2) hazard attenuation functions that model the local intensity, 3) vulnerability functions that address the repair need for local buildings exposed to the hazard, and 4) a financial module addressing policy conditions that estimates the losses incurred as a result. The events module comprises events (faults or tracks) with different intensities and corresponding probabilities. They are based on the same statistics as observed through the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, then, a set of stochastic events is developed that results in events with intensities corresponding to annual occurrence probabilities that are of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called Characteristic Events, CEs) are selected through a superstratified sampling approach that is based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as concrete confined in steel, SRC (steel-reinforced concrete), and high-rise buildings.

Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM

Procedia PDF Downloads 269
1228 On Virtual Coordination Protocol towards 5G Interference Mitigation: Modelling and Performance Analysis

Authors: Bohli Afef

Abstract:

Fifth-generation (5G) wireless systems are characterized by extreme densities of cell stations to overcome the higher future demand. Hence, interference management is a crucial challenge in 5G ultra-dense cellular networks. In contrast to the classical inter-cell interference coordination approach, which is no longer fit for the high density of cell tiers, this paper proposes a novel virtual coordination based on the dynamic common cognitive monitor channel protocol to deal with the inter-cell interference issue. A tractable and flexible model for the coverage probability of a typical user is developed through the use of stochastic geometry. The performance of the suggested protocol is analyzed both analytically and numerically in terms of coverage probability.
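
A coverage-probability evaluation of this kind can be approximated by Monte Carlo simulation of a Poisson point process of base stations, as in the toy sketch below (densities, path-loss exponent, and threshold are illustrative assumptions, not the paper's values).

# Sketch: Monte Carlo coverage probability for a typical user in a PPP network.
import numpy as np

rng = np.random.default_rng(0)
lam = 5e-5          # base stations per square metre (hypothetical ultra-dense value)
alpha = 4.0         # path-loss exponent
sinr_th = 1.0       # SINR threshold (0 dB)
radius = 3000.0     # simulation disc radius in metres
noise = 1e-12       # thermal noise power (arbitrary units)

covered, trials = 0, 2000
for _ in range(trials):
    n_bs = rng.poisson(lam * np.pi * radius ** 2)
    if n_bs == 0:
        continue
    d = radius * np.sqrt(rng.random(n_bs))               # distances of uniform points in a disc
    power = rng.exponential(1.0, n_bs) * d ** (-alpha)    # Rayleigh fading times path loss
    s = power.max()                                       # user is served by the strongest station
    sinr = s / (power.sum() - s + noise)
    covered += sinr > sinr_th
print("coverage probability:", covered / trials)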

Keywords: ultra dense heterogeneous networks, dynamic common channel protocol, cognitive radio, stochastic geometry, coverage probability

Procedia PDF Downloads 325