Search results for: stochastic decomposition.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 588


318 DC Bus Voltage Regulator for Renewable Energy Based Microgrid Application

Authors: Bakari M. M. Mwinyiwiwa

Abstract:

Renewable energy based microgrids are being considered to provide electricity for the expanding energy demand in the grid distribution network and in grid-isolated areas. The technical challenges associated with their operation and control are immense. Electricity generation from renewable energy sources is stochastic in nature, so the output voltage must be regulated to satisfy standard load requirements. In a renewable energy based microgrid, the energy sources deliver AC or DC voltages of stochastically varying magnitude. AC voltage regulation of micro and mini sources poses practical challenges as well as prohibitive costs. It is therefore practically and economically viable to convert the outputs of stochastic AC and DC voltage sources to a constant DC voltage to supply various DC loads, including inverters that ultimately feed AC loads. This paper presents results obtained from a SEPIC converter based DC bus voltage regulator as a case study for renewable energy microgrid applications. Real-time simulation results show that, with an appropriate choice of controller parameters for the SEPIC converter, the output DC bus voltage can be kept constant regardless of wide voltage variations at the source. This feature is particularly important when multiple renewable sources are to be integrated to supply a microgrid in grid-connected or isolated modes of operation.
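The paper reports real-time simulations of a SEPIC converter; as a rough illustration of the regulation idea only, below is a minimal sketch of a discrete PI voltage loop acting on an idealized averaged converter model. The plant model (ideal SEPIC DC gain with a first-order lag), the gains, the set point and the source statistics are assumptions for illustration, not the authors' design.

```python
import numpy as np

# Minimal sketch, not the authors' SEPIC design: a discrete PI loop regulating a DC bus
# fed by a stochastically varying source.  The averaged converter model uses the ideal
# SEPIC DC gain Vout = D/(1-D)*Vin with a first-order lag; gains, set point and source
# statistics are illustrative assumptions.
dt, T = 1e-4, 0.5                  # time step [s], simulated horizon [s]
v_ref = 48.0                       # desired DC bus voltage [V]
kp, ki, tau = 0.05, 20.0, 0.01     # PI gains and plant time constant [s]

rng = np.random.default_rng(0)
v_src, v_bus, integ = 30.0, 30.0, 0.0
for _ in range(int(T / dt)):
    v_src = np.clip(v_src + rng.normal(0.0, 0.05), 15.0, 60.0)   # drifting renewable source
    err = v_ref - v_bus
    integ += err * dt
    duty = np.clip(kp * err + ki * integ, 0.05, 0.95)            # PI controller output
    v_target = duty / (1.0 - duty) * v_src                       # ideal SEPIC DC gain
    v_bus += dt / tau * (v_target - v_bus)                       # first-order bus dynamics

print(f"source ended at {v_src:.1f} V, bus regulated at {v_bus:.2f} V (set point {v_ref} V)")
```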

Keywords: DC Voltage Regulator, microgrid, multisource, Renewable Energy, SEPIC Converter.

317 River Stage-Discharge Forecasting Based on Multiple-Gauge Strategy Using EEMD-DWT-LSSVM Approach

Authors: Farhad Alizadeh, Alireza Faregh Gharamaleki, Mojtaba Jalilzadeh, Houshang Gholami, Ali Akhoundzadeh

Abstract:

This study presents a hybrid pre-processing approach combined with a conceptual model to enhance the accuracy of river discharge prediction. To achieve this goal, the Ensemble Empirical Mode Decomposition algorithm (EEMD), the Discrete Wavelet Transform (DWT) and Mutual Information (MI) were employed as a hybrid pre-processing approach coupled with a Least Square Support Vector Machine (LSSVM). A conceptual strategy, namely a multi-station model, was developed to forecast the Souris River discharge more accurately. The strategy used herein was capable of covering the uncertainties and complexities of river discharge modeling. DWT and EEMD were coupled, and feature selection was performed on the decomposed sub-series using MI before they were employed in the multi-station model. In the proposed feature selection method, some useless sub-series were omitted to achieve better performance. Results confirmed the efficiency of the proposed DWT-EEMD-MI approach in improving the accuracy of multi-station modeling strategies.
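As a rough sketch of the pre-processing pipeline described above (DWT sub-series, MI-based selection of the useful sub-series, kernel regression), the following uses PyWavelets and scikit-learn on a synthetic series. The EEMD step and the multi-station setup are omitted, and an RBF SVR stands in for the LSSVM, so this is an assumption-laden illustration rather than the authors' implementation.

```python
import numpy as np
import pywt
from sklearn.feature_selection import mutual_info_regression
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n = 600
q = 10 + 5 * np.sin(np.arange(n) / 20) + rng.normal(0, 0.5, n)   # synthetic daily discharge

def dwt_subseries(x, wavelet="db4", level=3):
    """Decompose x with a DWT and reconstruct one sub-series per band (same length as x)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    subs = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        subs.append(pywt.waverec(keep, wavelet)[: len(x)])
    return np.column_stack(subs)

lag = 1
X_all = dwt_subseries(q)[:-lag]          # candidate features: sub-series values at time t
y = q[lag:]                              # target: discharge at time t+1

mi = mutual_info_regression(X_all, y, random_state=0)
selected = mi > 0.1 * mi.max()           # drop nearly useless sub-series
X = X_all[:, selected]

split = 450
model = SVR(kernel="rbf", C=10.0).fit(X[:split], y[:split])
print("kept sub-series:", np.where(selected)[0], "| test R^2:", round(model.score(X[split:], y[split:]), 3))
```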

Keywords: River stage-discharge process, LSSVM, discrete wavelet transform (DWT), ensemble empirical mode decomposition (EEMD), multi-station modeling.

316 Teager-Huang Analysis Applied to Sonar Target Recognition

Authors: J.-C. Cexus, A.O. Boudraa

Abstract:

In this paper, a new approach to target recognition based on the Empirical Mode Decomposition (EMD) algorithm of Huang et al. [11] and the energy tracking operator of Teager [13]-[14] is introduced. The conjunction of these two methods is called Teager-Huang analysis. This approach is well suited to the analysis of nonstationary signals. The impulse response (IR) of the target is first band-pass filtered into subsignals (components) called Intrinsic Mode Functions (IMFs) with well-defined Instantaneous Frequency (IF) and Instantaneous Amplitude (IA). Each IMF is a zero-mean AM-FM component. In the second step, the energy of each IMF is tracked using the Teager Energy Operator (TEO). The IF and IA, useful for describing the time-varying characteristics of the signal, are estimated using the Energy Separation Algorithm (ESA) of Maragos et al. [16]-[17]. In the third step, a set of features such as skewness and kurtosis is extracted from the IF, IA and IMF energy functions. The Teager-Huang analysis is tested on a set of synthetic IRs of sonar targets with different physical characteristics (density, velocity, shape, ...). PCA is first applied to the features to discriminate between manufactured and natural targets. The manufactured patterns are then classified into spheres and cylinders. One hundred percent correct recognition is achieved with twenty-three echoes, where sixteen IRs used for training are noise-free and seven IRs used for the testing phase are corrupted with white Gaussian noise.
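A minimal numerical sketch of the two core operators named above, the discrete Teager-Kaiser energy operator and a simplified energy-separation (ESA-style) estimate of IA and IF, applied to a synthetic AM-FM component, is given below. The EMD step and the feature/classification stages are omitted, and the exact estimator of [16]-[17] is not reproduced; this is only an assumed simplification.

```python
import numpy as np

def teo(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def esa(x, fs):
    """Simplified discrete energy separation: IA and IF from the TEO of the signal
    and of its first difference (cos(Omega) = 1 - psi[dx]/(2*psi[x]))."""
    psi_x, psi_dx = teo(x), teo(np.diff(x))
    m = min(len(psi_x), len(psi_dx))
    psi_x, psi_dx = psi_x[:m], psi_dx[:m]
    omega = np.arccos(np.clip(1.0 - psi_dx / (2.0 * psi_x + 1e-12), -1.0, 1.0))  # rad/sample
    ia = np.sqrt(np.abs(psi_x) / (np.sin(omega) ** 2 + 1e-12))
    return ia, omega * fs / (2.0 * np.pi)

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
# Synthetic zero-mean AM-FM component (an IMF-like signal): IF sweeps 80 -> 100 Hz.
x = (1.0 + 0.3 * np.cos(2 * np.pi * 3 * t)) * np.cos(2 * np.pi * (80 * t + 10 * t ** 2))

ia, inst_f = esa(x, fs)
print("median IF estimate:", round(float(np.median(inst_f)), 1), "Hz  (true IF 80-100 Hz)")
```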

Keywords: Target recognition, Empirical mode decomposition, Teager-Kaiser energy operator, Features extraction.

315 The Wavelet-Based DFT: A New Interpretation, Extensions and Applications

Authors: Abdulnasir Hossen, Ulrich Heute

Abstract:

In 1990 [1] the subband-DFT (SB-DFT) technique was proposed. This technique used the Hadamard filters in the decomposition step to split the input sequence into low- and highpass sequences. In the next step, either two DFTs are needed on both bands to compute the full-band DFT or one DFT on one of the two bands to compute an approximate DFT. A combination network with correction factors was to be applied after the DFTs. Another approach was proposed in 1997 [2] for using a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of the algorithm, the input sequence is decomposed in a similar manner to the SB-DFT into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT or to obtain a fast approximate DFT by implementing pruning at both input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as SB-DFT with Hadamard filters. The only difference is in a constant factor in the combination network. This result is very important to complete the analysis of the W-DFT, since all the results concerning the accuracy and approximation errors in the SB-DFT are applicable. An application example in spectral analysis is given for both SB-DFT and W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case. An application in image transformation is given using two different types of wavelet filters.
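The split-plus-combination idea can be shown in a few lines: sum/difference (Haar/Hadamard) half-band sequences, two half-length DFTs, and a combination network with correction factors; keeping only the band of most energy gives the approximate DFT. The sketch below follows the standard radix-2 derivation and is not taken verbatim from [1] or [2].

```python
import numpy as np

def subband_dft(x):
    """Full-band DFT from Haar/Hadamard sum and difference half-bands plus a combination
    network with correction factors (a rearranged radix-2 decimation-in-time step)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    s = x[0::2] + x[1::2]                      # lowpass  (sum) subsequence
    d = x[0::2] - x[1::2]                      # highpass (difference) subsequence
    S, D = np.fft.fft(s), np.fft.fft(d)        # two N/2-point DFTs
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # correction factors
    X_low = 0.5 * ((1 + W) * S + (1 - W) * D)         # bins 0 .. N/2-1
    X_high = 0.5 * ((1 - W) * S + (1 + W) * D)        # bins N/2 .. N-1
    return np.concatenate([X_low, X_high])

rng = np.random.default_rng(0)
x = rng.normal(size=64)
print("max error vs. direct FFT:", np.max(np.abs(subband_dft(x) - np.fft.fft(x))))

# Approximate (half-cost) DFT: compute only the band with most energy, here the low band.
S = np.fft.fft(x[0::2] + x[1::2])
X_low_approx = 0.5 * (1 + np.exp(-2j * np.pi * np.arange(32) / 64)) * S
```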

Keywords: Image Transform, Spectral Analysis, Sub-Band DFT, Wavelet DFT.

314 Automatic Sleep Stage Scoring with Wavelet Packets Based on Single EEG Recording

Authors: Luay A. Fraiwan, Natheer Y. Khaswaneh, Khaldon Y. Lweesy

Abstract:

Sleep stage scoring is the process of classifying the stage of sleep in which the subject is. Sleep is classified into two states based on a constellation of physiological parameters: non-rapid eye movement (NREM) and rapid eye movement (REM). NREM sleep is further classified into four stages (1-4). These states and the state of wakefulness are distinguished from each other based on brain activity. In this work, a classification method for automated sleep stage scoring based on a single EEG recording using wavelet packet decomposition was implemented. Thirty-two polysomnographic recordings from the MIT-BIH database were used for training and validation of the proposed method. A single EEG recording was extracted and smoothed using a Savitzky-Golay filter. Wavelet packet decomposition up to the fourth level based on a 20th-order Daubechies filter was used to extract features from the EEG signal. A vector of 54 features was formed; it was reduced to 25 features using the gain ratio method and fed into a regression tree classifier. The regression trees were trained using 67% of the available records, selected based on cross validation of the records. The remaining records were used for testing the classifier. The overall correct rate of the proposed method was found to be around 75%, which is acceptable compared to the techniques in the literature.
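A toy version of the processing chain (Savitzky-Golay smoothing, level-4 db20 wavelet packet energies, tree classifier) on synthetic epochs is sketched below. scikit-learn's ordinary decision tree stands in for the paper's gain-ratio feature selection and regression trees, and the data are not MIT-BIH recordings; everything here is an illustrative assumption.

```python
import numpy as np
import pywt
from scipy.signal import savgol_filter
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

fs, n_epochs, epoch_len = 100, 120, 3000      # 30 s epochs at 100 Hz (synthetic)
rng = np.random.default_rng(0)

def epoch_features(x):
    """Smooth one epoch and return log-energies of the 16 level-4 db20 wavelet packet nodes."""
    x = savgol_filter(x, window_length=11, polyorder=3)
    wp = pywt.WaveletPacket(data=x, wavelet="db20", mode="symmetric", maxlevel=4)
    return np.log([np.sum(node.data ** 2) + 1e-12 for node in wp.get_level(4, order="freq")])

# Toy "stages": class k carries extra power around (2 + 4k) Hz.
X, y = [], []
t = np.arange(epoch_len) / fs
for i in range(n_epochs):
    stage = i % 3
    sig = rng.normal(size=epoch_len) + 3.0 * np.sin(2 * np.pi * (2 + 4 * stage) * t)
    X.append(epoch_features(sig))
    y.append(stage)

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y), test_size=0.33, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(Xtr, ytr)
print("correct rate on held-out epochs:", round(clf.score(Xte, yte), 2))
```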

Keywords: Features selection, regression trees, sleep stage scoring, wavelet packets.

313 Waste Management in a Hot Laboratory of Japan Atomic Energy Agency – 1: Overview and Activities in Chemical Processing Facility

Authors: Kazunori Nomura, Hiromichi Ogi, Masaumi Nakahara, Sou Watanabe, Atsuhiro Shibata

Abstract:

The Chemical Processing Facility of the Japan Atomic Energy Agency is a basic research facility for advanced back-end technology development using actual high-level radioactive materials, such as irradiated fuels from the fast reactor and high-level liquid waste from the reprocessing plant. As is the nature of a research facility, various kinds of chemical reagents have been used for fundamental tests. Most of them were treated properly and stored in the liquid waste vessel of the facility, but some were not treated and remained in the experimental space as a kind of legacy waste, which must be treated safely. On the other hand, we formulated the Medium- and Long-Term Management Plan of Japan Atomic Energy Agency Facilities. This comprehensive plan designates the Chemical Processing Facility as one of the facilities to be decommissioned. When the plan is executed, treatment of the “legacy” waste beforehand is a necessary step for the decommissioning operation. Under these circumstances, we launched a collaborative research project called the STRAD project, which stands for Systematic Treatment of Radioactive liquid waste for Decommissioning, in order to develop treatment processes for the wastes of nuclear research facilities. In this project, decomposition methods for chemicals that cause troublesome phenomena such as corrosion and explosion have been developed, and there is a prospect of decomposing them in the facility by simple methods. Solidification of aqueous or organic liquid wastes after decomposition has also been studied by adding cement or coagulants. Furthermore, experimental tools made of various materials were treated, with efforts to stabilize and compact them before packing them into waste containers. This is expected to decrease the number of solid waste shipments and widen the operating space. Some achievements of these studies are presented in this paper. The project is expected to contribute beneficial waste management outcomes that can be shared worldwide.

Keywords: Chemical Processing Facility, medium- and long-term management plan of JAEA Facilities, STRAD project, treatment of radioactive waste.

312 Poultry Manure and Its Derived Biochar as a Soil Amendment for Newly Reclaimed Sandy Soils under Arid and Semi-Arid Conditions

Authors: W. S. Mohamed, A. A. Hammam

Abstract:

Sandy soils under arid and semi-arid conditions are characterized by poor physical and biochemical properties such as low water retention, rapid organic matter decomposition, low nutrient use efficiency, and limited crop productivity. The addition of organic amendments is crucial to improve soil properties and consequently enhance nutrient use efficiency and lessen organic carbon decomposition. Two-year field experiments were carried out to investigate the feasibility of using poultry manure and its derived biochar, integrated with different levels of N fertilizer, as a soil amendment for newly reclaimed sandy soils in the Western Desert of El-Minia Governorate, Egypt. The results revealed that the addition of poultry manure and its derived biochar induced pronounced effects on soil moisture content at saturation point, field capacity (FC) and consequently available water. Application of poultry manure (PM) or PM-derived biochar (PMB) in combination with inorganic N levels caused significant changes in a range of the investigated sandy soil biochemical properties, including pH, EC, mineral N, dissolved organic carbon (DOC), dissolved organic N (DON) and the DOC/DON quotient. Overall, the impact of PMB on soil physical properties was found to be superior to that of PM, regardless of the inorganic N level. In addition, PM and PMB application had the capacity to stimulate vigorous growth, nutritional status and production levels of wheat and sorghum, and to increase soil organic matter content and N uptake and recovery compared to the control. By contrast, comparing PM and PMB at different levels of inorganic N, higher relative increases in both grain and straw yields of wheat were obtained in plots treated with PM than in those treated with PMB. The interesting feature of this research is that the biochar derived from PM increased the treated sandy soil organic carbon (SOC) 1.75 times more than the soil treated with PM itself at the end of the cropping seasons, despite a doubled application amount of PM. This was attributed to the higher carbon stability of biochar-treated sandy soils, which increases the soil's resistance to carbon decomposition, in comparison with the labile carbon of PM. It could be concluded that organic manures applied to sandy soils under arid and semi-arid conditions are subject to high decomposition and mineralization rates through the crop seasons. Biochar derived from organic wastes is a source of stable carbon and could be a very promising choice for substituting easily decomposable organic manures under arid conditions. Therefore, sustainable agriculture and productivity in newly reclaimed sandy soils call for a single high-rate addition of biochar derived from organic manures instead of frequent additions of such organic amendments.

Keywords: Biochar, dissolved organic carbon, N-uptake, poultry, sandy soil.

311 Comparison between Higher-Order SVD and Third-order Orthogonal Tensor Product Expansion

Authors: Chiharu Okuma, Jun Murakami, Naoki Yamamoto

Abstract:

In digital signal processing it is important to approximate multi-dimensional data by rank reduction, in which the rank of the data is reduced from higher to lower. For 2-dimensional data, singular value decomposition (SVD) is one of the best-known rank reduction techniques. In addition, an outer product expansion extended from SVD was proposed and implemented for multi-dimensional data, and it has been widely applied to image processing and pattern recognition. However, the multi-dimensional outer product expansion is computationally complex and lacks orthogonality between the expansion terms. We therefore proposed an alternative method, the Third-order Orthogonal Tensor Product Expansion (3-OTPE), which uses the power method instead of a nonlinear optimization method to decrease computing time. At the same time, the group of De Lathauwer proposed the Higher-Order SVD (HOSVD), which is also developed as an SVD extension for multi-dimensional data. 3-OTPE and HOSVD are similar with respect to the rank reduction of multi-dimensional data. Using these two methods we can obtain computation results that are sometimes the same and sometimes slightly different. In this paper, we compare 3-OTPE with HOSVD in terms of calculation accuracy and computing time, and clarify the difference between the two methods.
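For readers unfamiliar with HOSVD, a minimal truncated HOSVD via mode-n unfoldings is sketched below on a synthetic nearly rank-3 tensor; the 3-OTPE power-method algorithm compared in the paper is not reproduced, and the example tensor and ranks are assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the mode-n index becomes the rows, everything else the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product of tensor T with matrix M (M acts on the mode-n index)."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from the left singular vectors of each unfolding,
    core tensor obtained by projecting T onto them."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r] for n, r in enumerate(ranks)]
    core = T
    for n, Un in enumerate(U):
        core = mode_mult(core, Un.T, n)
    return core, U

rng = np.random.default_rng(0)
A, B, C = rng.normal(size=(8, 3)), rng.normal(size=(9, 3)), rng.normal(size=(10, 3))
T = np.einsum("ir,jr,kr->ijk", A, B, C) + 0.01 * rng.normal(size=(8, 9, 10))  # nearly rank-3

core, U = hosvd(T, ranks=(3, 3, 3))
T_hat = core
for n, Un in enumerate(U):
    T_hat = mode_mult(T_hat, Un, n)
print("relative reconstruction error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```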

Keywords: Singular value decomposition (SVD), higher-order SVD (HOSVD), higher-order tensor, outer product expansion, power method.

310 Fast Painting with Different Colors Using Cross Correlation in the Frequency Domain

Authors: Hazem M. El-Bakry

Abstract:

In this paper, a new technique for fast painting with different colors is presented. The idea of painting relies on applying masks with different colors to the background. Fast painting is achieved by applying these masks in the frequency domain instead of the spatial (time) domain. New colors can be generated automatically as a result of the cross correlation operation. This idea was applied successfully to faster detection of specific data (face, object, pattern, and code) using neural algorithms. Here, instead of performing cross correlation between the input data (e.g., an image or a stream of sequential data) and the weights of neural networks, the cross correlation is performed between the colored masks and the background. Furthermore, this approach is developed to reduce the computation steps required by the painting operation. The principle of the divide and conquer strategy is applied through background decomposition: each background is divided into small sub-backgrounds, and each sub-background is then processed separately using a single fast painting algorithm. Moreover, the fastest painting is achieved by using parallel processing techniques to paint the resulting sub-backgrounds with the same number of fast painting algorithms. In contrast to using the fast painting algorithm alone, the speed-up ratio increases with the size of the background when the fast painting algorithm is combined with background decomposition. Simulation results show that painting in the frequency domain is faster than painting in the spatial domain.
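The core claim, that cross-correlating a mask with the background in the frequency domain gives the same result as the spatial-domain computation and that the background can be decomposed into sub-backgrounds, can be checked with a short sketch; the mask, the sizes and the single-channel setting are illustrative assumptions, not the paper's experiments.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
background = rng.random((256, 256))
mask = rng.random((15, 15))                 # one colour channel of a small mask

def xcorr_spatial(bg, m):
    """Cross-correlation in the spatial domain, straight from its definition (valid region)."""
    H, W = m.shape
    out = np.empty((bg.shape[0] - H + 1, bg.shape[1] - W + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(bg[i:i + H, j:j + W] * m)
    return out

# Frequency-domain cross-correlation = FFT convolution with the flipped mask.
xc_freq = fftconvolve(background, mask[::-1, ::-1], mode="valid")
print("max difference vs. spatial domain:", np.max(np.abs(xc_freq - xcorr_spatial(background, mask))))

# Background decomposition (divide and conquer): process sub-backgrounds separately,
# overlapping them by mask width - 1 so the stitched result is identical.
halves = [background[:, :132], background[:, 118:]]
parts = [fftconvolve(h, mask[::-1, ::-1], mode="valid") for h in halves]
assert np.allclose(np.hstack(parts), xc_freq)
```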

Keywords: Fast Painting, Cross Correlation, Frequency Domain, Parallel Processing

309 Fault Detection and Diagnosis of Broken Bar Problem in Induction Motors Based on Wavelet Analysis and EMD Method: Case Study of Mobarakeh Steel Company in Iran

Authors: M. Ahmadi, M. Kafil, H. Ebrahimi

Abstract:

Nowadays, induction motors play a significant role in industry. Condition monitoring (CM) of this equipment has gained remarkable importance in recent years due to huge production losses, substantial imposed costs and increases in vulnerability, risk and uncertainty levels. Motor current signature analysis (MCSA) is one of the most important techniques in CM and can be used for the detection of broken rotor bars. Signal processing methods such as the Fast Fourier Transform (FFT), the wavelet transform and Empirical Mode Decomposition (EMD) are used for analyzing the MCSA output data. In this study, these signal processing methods are used for broken bar detection in induction motors of the Mobarakeh Steel Company. Based on the wavelet transform, an index for fault detection, CF, is introduced, which relates the maximum to the mean of the wavelet transform coefficients. We find that, in the broken bar condition, the CF factor is greater than in the healthy condition. Based on the EMD method, the energy of the intrinsic mode functions (IMFs) is calculated, and it is found that when motor bars become broken, the energy of the IMFs increases.
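A sketch of the wavelet-based CF index on synthetic healthy and faulty currents is given below; the max-to-mean reading of CF, the (1 ± 2s)f0 sideband model and all parameters are assumptions for illustration, and the EMD/IMF-energy part (which would need an EMD implementation such as PyEMD) is omitted.

```python
import numpy as np
import pywt

fs, f0, s = 1000.0, 50.0, 0.03               # sampling rate [Hz], supply frequency [Hz], slip
t = np.arange(0.0, 2.0, 1.0 / fs)

def stator_current(broken_bar):
    i = np.cos(2 * np.pi * f0 * t) + 0.01 * np.random.default_rng(0).normal(size=t.size)
    if broken_bar:
        # Broken rotor bars introduce (1 +/- 2s)*f0 sidebands in the stator current.
        i += 0.08 * np.cos(2 * np.pi * (1 - 2 * s) * f0 * t)
        i += 0.05 * np.cos(2 * np.pi * (1 + 2 * s) * f0 * t)
    return i

def cf_index(x, wavelet="db8", level=4):
    """CF index: maximum-to-mean ratio of the absolute detail coefficients of the band
    containing the supply frequency (one plausible reading of the paper's definition)."""
    d = np.abs(pywt.wavedec(x, wavelet, level=level)[1])   # cD4: about 31-62 Hz here
    return np.max(d) / np.mean(d)

print("CF healthy:", round(cf_index(stator_current(False)), 2))
print("CF broken :", round(cf_index(stator_current(True)), 2))   # expected to be larger, as reported
```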

Keywords: Broken bar, condition monitoring, diagnostics, empirical mode decomposition, Fourier transform, wavelet transform.

308 Two Different Computing Methods of the Smith Arithmetic Determinant

Authors: Xing-Jian Li, Shen Qu

Abstract:

The Smith arithmetic determinant is investigated in this paper. By using two different methods, we derive the explicit formula for the Smith arithmetic determinant.
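The explicit formula for the Smith arithmetic determinant is the classical result det[gcd(i, j)] (i, j = 1..n) = φ(1)φ(2)···φ(n), where φ is Euler's totient function. A quick numerical cross-check of the two computations (direct determinant versus the product of totients) is sketched below; it is an illustration of the identity, not the paper's derivations.

```python
from math import gcd
import numpy as np

def phi(m):
    """Euler's totient function via trial-division factorization."""
    result, x, p = m, m, 2
    while p * p <= x:
        if x % p == 0:
            while x % p == 0:
                x //= p
            result -= result // p
        p += 1
    if x > 1:
        result -= result // x
    return result

n = 8
A = np.array([[gcd(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)], dtype=float)

direct = np.linalg.det(A)                                   # numerical determinant
explicit = np.prod([phi(k) for k in range(1, n + 1)])       # Smith's explicit formula
print(direct, explicit)   # both give phi(1)*...*phi(8) = 768 (up to floating-point error)
```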

Keywords: Elementary row transformation, Euler function, Matrix decomposition, Smith arithmetic determinant.

307 Mean-Variance Optimization of Portfolios with Return of Premium Clauses in a DC Pension Plan with Multiple Contributors under Constant Elasticity of Variance Model

Authors: Bright O. Osu, Edikan E. Akpanibah, Chidinma Olunkwa

Abstract:

In this paper, mean-variance optimization of portfolios with return of premium clauses in a defined contribution (DC) pension plan with multiple contributors under the constant elasticity of variance (CEV) model is studied. The return clauses, which permit deceased members' beneficiaries to claim their accumulated wealth, are considered; unlike in the literature, the remaining wealth is not equally distributed among the remaining members. We assume that, before investment, the surplus, which includes the funds of members who died after retirement, is added to the total wealth. Next, we consider investments in a risk-free asset and a risky asset to meet the expected returns of the remaining members and obtain an optimization problem with the help of the extended Hamilton-Jacobi-Bellman equation. We obtain the optimal investment strategies for the two assets and the efficient frontier of the members by using a stochastic optimal control technique. Furthermore, we study the effect of the various parameters on the optimal investment strategies and the effect of the risk-averse level on the efficient frontier. We observe that the optimal investment strategy is the same as in the literature and that the surplus decreases the proportion of wealth invested in the risky asset.
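The HJB derivation is not reproduced here. As a purely illustrative sketch of how such strategies can be examined numerically, the following simulates fund wealth under CEV risky-asset dynamics with a fixed (not optimal) investment proportion; all parameters are assumptions and the strategy is not the paper's optimal feedback policy.

```python
import numpy as np

# Euler-Maruyama simulation of fund wealth when a fixed proportion pi of wealth is held in
# a CEV risky asset, dS = mu*S*dt + sigma*S**beta*dW, and the rest in a risk-free asset with
# rate r.  All parameters are illustrative and pi is NOT the paper's optimal feedback strategy.
mu, sigma, beta, r = 0.08, 0.25, 0.6, 0.03
T, n_steps, n_paths, pi = 10.0, 1000, 10000, 0.4
dt = T / n_steps
rng = np.random.default_rng(0)

S = np.full(n_paths, 1.0)        # risky asset price
X = np.full(n_paths, 1.0)        # fund wealth (self-financing)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    dS_over_S = mu * dt + sigma * S ** (beta - 1.0) * dW     # relative price change
    X *= 1.0 + r * dt + pi * (dS_over_S - r * dt)
    S = np.maximum(S * (1.0 + dS_over_S), 1e-8)              # keep the CEV price positive

print("terminal wealth: mean %.3f, variance %.4f" % (X.mean(), X.var()))
```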

Keywords: DC pension fund, Hamilton Jacobi Bellman equation, optimal investment strategies, stochastic optimal control technique, return of premiums clauses, mean-variance utility.

306 Effect of Supplementary Premium on the Optimal Portfolio Policy in a Defined Contribution Pension Scheme with Refund of Premium Clauses

Authors: Edikan E. Akpanibah, Obinichi C. Mandah, Imoleayo S. Asiwaju

Abstract:

In this paper, we study the effect of a supplementary premium on the optimal portfolio policy in a defined contribution (DC) pension scheme with refund of premium clauses. The refund clause allows deceased members' next of kin to withdraw their relatives' accumulated wealth during the accumulation period. The supplementary premium, which is assumed to be stochastic, is meant to help sustain the scheme. We consider the cases when the remaining wealth is equally distributed and when it is not equally distributed among the remaining members. Next, we consider investments in cash and equity to help increase the remaining accumulated funds to meet the retirement needs of the remaining members, formulate the problem as a continuous-time mean-variance stochastic optimal control problem using actuarial notation, and establish an optimization problem from the extended Hamilton-Jacobi-Bellman equations. The optimal portfolio policy, the corresponding optimal fund sizes for the two assets, and the efficient frontier of the pension members are obtained for the two cases. Furthermore, numerical simulations of the optimal portfolio policies over time are presented, and the effect of the supplementary premium on the optimal portfolio policy is discussed; we observe that the supplementary premium decreases the optimal portfolio policy of the risky asset (equity). Secondly, we observe a disparity between the optimal policies for the two cases.

Keywords: Defined contribution pension scheme, extended Hamilton Jacobi Bellman equations, optimal portfolio policies, refund of premium clauses, supplementary premium.

305 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect a change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracies and sensitivity ranges, and they degrade over time. As a result, the big time-series data collected by the sensors will contain uncertainties and will sometimes be conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. the Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors whose readings need to be combined, so that computational efficiency can be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
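The evidence-theory fusion (mass functions, combination rules, pignistic transform) is the paper's contribution and is not reproduced here. The sketch below only illustrates the two generic ingredients named in the abstract, the Gaussian Kullback-Leibler divergence used as a distance and a CUSUM test that declares the change, on synthetic data with assumed pre-/post-change models.

```python
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    """KL divergence D( N(mu0, var0) || N(mu1, var1) ) for univariate Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

rng = np.random.default_rng(0)
mu0, mu1, var = 0.0, 0.8, 1.0                                    # pre-/post-change models
x = np.r_[rng.normal(mu0, 1.0, 500), rng.normal(mu1, 1.0, 300)]  # change occurs at t = 500

# Distance of the windowed "current" distribution to the pre- and post-change distributions.
win = 50
for t in (400, 600):
    m, v = x[t - win:t].mean(), x[t - win:t].var()
    print(f"t={t}: KL to pre-change {kl_gauss(m, v, mu0, var):.3f}, to post-change {kl_gauss(m, v, mu1, var):.3f}")

# CUSUM on the log-likelihood ratio (post vs. pre model); declare a change when g exceeds h.
g, h = 0.0, 8.0
for t, xt in enumerate(x):
    llr = (xt - mu0) ** 2 / (2 * var) - (xt - mu1) ** 2 / (2 * var)
    g = max(0.0, g + llr)
    if g > h:
        print("change declared at sample", t)   # expected shortly after t = 500
        break
```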

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.

304 Improved Segmentation of Speckled Images Using an Arithmetic-to-Geometric Mean Ratio Kernel

Authors: J. Daba, J. Dubois

Abstract:

In this work, we improve a previously developed segmentation scheme aimed at extracting edge information from speckled images using a maximum likelihood edge detector. The scheme is based on finding a threshold for the probability density function of a new kernel, defined as the arithmetic mean-to-geometric mean ratio field over a circular neighborhood set, and, in a general context, is founded on a likelihood random field model (LRFM). The segmentation algorithm was applied to discriminated speckle areas obtained using simple elliptic discriminant functions based on measures of the signal-to-noise ratio with fractional order moments. A rigorous stochastic analysis was used to derive an exact expression for the cumulative distribution function corresponding to the probability density function of the random field. Based on this, an accurate probability of error was derived and the performance of the scheme was analysed. The improved segmentation scheme performed well for both simulated and real images and showed results superior to those previously obtained using the original LRFM scheme and standard edge detection methods. In particular, the false alarm probability was markedly lower than that of the original LRFM method, with oversegmentation artifacts virtually eliminated. The importance of this work lies in the development of a stochastic-based segmentation allowing an accurate quantification of the probability of false detection. Non-visual quantification and misclassification in medical ultrasound speckled images are relatively new and of interest to clinicians.
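A toy version of the kernel is easy to write down: the arithmetic-to-geometric mean ratio over a sliding window (square here instead of the paper's circular neighborhood set), thresholded empirically rather than via the ML-derived threshold. The data, window size and threshold below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
clean = np.ones((128, 128)); clean[:, 64:] = 4.0              # two regions separated by an edge
speckled = clean * rng.exponential(1.0, clean.shape)          # multiplicative speckle

def am_gm_ratio(img, size=9):
    """Arithmetic-to-geometric mean ratio over a sliding window (square window here,
    standing in for the paper's circular neighborhood set)."""
    am = uniform_filter(img, size)
    gm = np.exp(uniform_filter(np.log(img + 1e-12), size))
    return am / (gm + 1e-12)

ratio = am_gm_ratio(speckled)
edges = ratio > np.quantile(ratio, 0.95)       # empirical threshold, in lieu of the ML-derived one
print("flagged pixels in the 8-column band around the true edge:",
      int(edges[:, 60:68].sum()), "of", int(edges.sum()), "flagged in total")
```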

Keywords: Discriminant function, false alarm, segmentation, signal-to-noise ratio, skewness, speckle.

303 Catalytic Decomposition of Potassium Monopersulfate. The Kinetics

Authors: Olga Gimeno, Javier Rivas, Maria Carbajo, Teresa Borralho

Abstract:

Potassium monopersulfate has been decomposed in aqueous solution in the presence of Co(II). The process has been simulated by means of a mechanism based on elementary reactions. Rate constants have been taken from literature reports or, alternatively, assimilated to analogous reactions occurring in Fenton's chemistry. Several operating conditions have been successfully applied.

Keywords: Monopersulfate, Oxone®, Sulfate radicals, Water treatment

302 Application of an Analytical Model to Obtain Daily Flow Duration Curves for Different Hydrological Regimes in Switzerland

Authors: Ana Clara Santos, Maria Manuela Portela, Bettina Schaefli

Abstract:

This work assesses the performance of an analytical model framework to generate daily flow duration curves (FDCs) based on the climatic characteristics of the catchments and on their streamflow recession coefficients. According to the analytical model framework, precipitation is considered to be a stochastic process, modeled as a marked Poisson process, and recession is considered to be deterministic, with parameters that can be computed based on different models. The analytical model framework was tested on three case studies with different hydrological regimes located in Switzerland: pluvial, snow-dominated and glacier. For that purpose, five time intervals were analyzed (the four meteorological seasons and the civil year) and two developments of the model were tested: one considering a linear recession model and the other adopting a nonlinear recession model. Those developments were combined with recession coefficients obtained from two different approaches: forward and inverse estimation. The performance of the analytical framework when considering forward parameter estimation is poor in comparison with the inverse estimation for both linear and nonlinear models. For the pluvial catchment, the inverse estimation shows exceptionally good results, especially for the nonlinear model, clearly suggesting that the model has the ability to describe FDCs. For the snow-dominated and glacier catchments the seasonal results are better than the annual ones, suggesting that the model can describe streamflows in those conditions and that future efforts should focus on improving and combining seasonal curves instead of considering single annual ones.
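A toy sketch of the model ingredients, rainfall as a marked Poisson process, a linear-reservoir recession and the empirical daily FDC of the simulated discharge, is given below. It does not reproduce the analytical FDC expressions or the calibrated Swiss case studies, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 3650
lam, mean_depth = 0.3, 8.0        # event frequency [1/day] and mean event depth [mm]
k = 0.05                          # linear-reservoir recession coefficient [1/day]

# Marked Poisson rainfall: Bernoulli occurrence per day with exponentially distributed marks.
rain = rng.exponential(mean_depth, n_days) * (rng.random(n_days) < lam)

# Linear reservoir: storage decays exponentially between events, discharge Q = k * storage.
storage, q = 0.0, np.empty(n_days)
for t in range(n_days):
    storage = storage * np.exp(-k) + rain[t]
    q[t] = k * storage

# Empirical daily flow duration curve: discharge versus exceedance probability.
q_sorted = np.sort(q)[::-1]
exceedance = (np.arange(1, n_days + 1) - 0.5) / n_days
for p in (0.05, 0.5, 0.95):
    print(f"discharge exceeded {int(p * 100)}% of the days: {np.interp(p, exceedance, q_sorted):.2f} mm/day")
```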

Keywords: Analytical streamflow distribution, stochastic process, linear and non-linear recession, hydrological modelling, daily discharges.

301 Image Segmentation Using Suprathreshold Stochastic Resonance

Authors: Rajib Kumar Jha, P.K.Biswas, B.N.Chatterji

Abstract:

In this paper a new concept of the partial complement of a graph G is introduced, and using it a new graph parameter, called the completion number of a graph G and denoted by c(G), is defined. Some basic properties of the completion number are studied and upper bounds for the completion number of classes of graphs are obtained; the paper also includes a characterization.

Keywords: Completion Number, Maximum Independent subset, Partial complements, Partial self complementary.

300 Analysis of Filtering in Stochastic Systems on Continuous-Time Memory Observations in the Presence of Anomalous Noises

Authors: S. Rozhkova, O. Rozhkova, A. Harlova, V. Lasukov

Abstract:

For the optimal unbiased mean-square filter, in the case of anomalous noises acting in the observation memory channel, we have proved the insensitivity of the filter to inaccurate knowledge of the anomalous noise intensity matrix and its equivalence to a truncated filter constructed using only the non-anomalous components of the observation vector.

Keywords: Mathematical expectation, filtration, anomalous noise, memory.

299 Revealing Nonlinear Couplings between Oscillators from Time Series

Authors: B.P. Bezruchko, D.A. Smirnov

Abstract:

Quantitative characterization of nonlinear directional couplings between stochastic oscillators from data is considered. We suggest coupling characteristics readily interpreted from a physical viewpoint and their estimators. An expression for a statistical significance level is derived analytically that allows reliable coupling detection from a relatively short time series. Performance of the technique is demonstrated in numerical experiments.

Keywords: Nonlinear time series analysis, directional couplings, coupled oscillators.

298 Simulation of Sample Paths of Non Gaussian Stationary Random Fields

Authors: Fabrice Poirion, Benedicte Puig

Abstract:

Mathematical justifications are given for a simulation technique of multivariate non-Gaussian random processes and fields based on Rosenblatt's transformation of Gaussian processes. Different types of convergence are given for the approximating sequence. Moreover, an original numerical method is proposed in order to solve the functional equation yielding the underlying Gaussian process autocorrelation function.

Keywords: Simulation, non-Gaussian, random field, multivariate, stochastic process.

297 CT-Based Monte Carlo Dose Calculations for Proton Therapy Using a New Interface Program

Authors: A. Esmaili Torshabi, A. Terakawa, K. Ishii, H. Yamazaki, S. Matsuyama, Y. Kikuchi, M. Nakhostin, H. Sabet, A. Ishizaki, W. Yamashita, T. Togashi, J. Arikawa, H. Akiyama, K. Koyata

Abstract:

The purpose of this study is to introduce a new interface program for calculating dose distributions with the Monte Carlo method in complex heterogeneous systems, such as organs or tissues, in proton therapy. The interface program was developed in MATLAB and includes a friendly graphical user interface with several tools, such as image property adjustment and result display. A quadtree decomposition technique was used as the image segmentation algorithm to create optimal geometries from Computed Tomography (CT) images for proton beam dose calculations. The result of this technique is a number of non-overlapping squares of different sizes in every image. In this way, the resolution of the image segmentation is high enough in and near heterogeneous areas to preserve the precision of the dose calculations, and low enough in homogeneous areas to directly reduce the number of cells. Furthermore, a cell reduction algorithm can be used to combine neighboring cells of the same material. The validation of this method was done in two ways: first, by comparison with experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) of Tohoku University, and second, by comparison with data based on the polybinary tissue calibration method, performed at CYRIC. These results are presented in this paper. The program can read the output file of the Monte Carlo code while the region of interest is selected manually, and gives a plot of the proton beam dose distribution superimposed onto the CT images.
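Quadtree decomposition itself can be sketched compactly: split a block into four quadrants whenever it is inhomogeneous, and stop when it is homogeneous or minimal. The Python/numpy sketch below (the paper's tool is in MATLAB) uses an intensity-range homogeneity criterion and a synthetic phantom; the threshold, block sizes and phantom are illustrative assumptions.

```python
import numpy as np

def quadtree(img, threshold=10.0, min_size=2, origin=(0, 0)):
    """Recursively split a square block into four quadrants until the intensity range inside
    a block falls below `threshold` (homogeneous block) or the block reaches `min_size`.
    Returns a list of (row, col, size) leaf blocks."""
    size = img.shape[0]
    if size <= min_size or img.max() - img.min() <= threshold:
        return [(origin[0], origin[1], size)]
    h = size // 2
    leaves = []
    for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
        leaves += quadtree(img[dr:dr + h, dc:dc + h], threshold, min_size,
                           (origin[0] + dr, origin[1] + dc))
    return leaves

# Synthetic "CT slice": homogeneous background with a denser circular insert.
n = 64
rows, cols = np.mgrid[:n, :n]
phantom = np.where((rows - 40) ** 2 + (cols - 24) ** 2 < 100, 1200.0, 50.0)

leaves = quadtree(phantom)
print(f"{len(leaves)} cells instead of {n * n} pixels; "
      f"block sizes from {min(b[2] for b in leaves)} to {max(b[2] for b in leaves)}")
```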

Keywords: Monte Carlo, CT images, Quadtree decomposition, Interface program, Proton beam

296 Compression and Filtering of Random Signals under Constraint of Variable Memory

Authors: Anatoli Torokhti, Stan Miklavcic

Abstract:

We study a new technique for optimal data compression subject to conditions of causality and different types of memory. The technique is based on the assumption that some information about the compressed data can be obtained from a solution of the associated problem without the constraints of causality and memory. This allows us to consider two separate problems related to compression and decompression subject to those constraints. Their solutions are given, and an analysis of the associated errors is provided.

Keywords: stochastic signals, optimization problems in signal processing.

295 Efficient Study of Substrate Integrated Waveguide Devices

Authors: J. Hajri, H. Hrizi, N. Sboui, H. Baudrand

Abstract:

This paper presents a study of SIW (Substrate Integrated Waveguide) circuits using a rigorous and fast original approach based on an iterative process (WCIP). The suggested theoretical study is validated by the simulation of two different examples of SIW circuits. The obtained results are in good agreement with measurements and with the HFSS software.

Keywords: Convergence study, HFSS, Modal decomposition, SIW Circuits, WCIP Method.

294 A New Definition of the Intrinsic Mode Function

Authors: Zhihua Yang, Lihua Yang

Abstract:

This paper makes a detailed analysis regarding the definition of the intrinsic mode function and proves that Condition 1 of the intrinsic mode function can really be deduced from Condition 2. Finally, an improved definition of the intrinsic mode function is given.

Keywords: Empirical Mode Decomposition (EMD), Hilbert-Huang transform(HHT), Intrinsic Mode Function(IMF).

293 Javanese Character Recognition Using Hidden Markov Model

Authors: Anastasia Rita Widiarti, Phalita Nari Wastu

Abstract:

The Hidden Markov Model (HMM) is a stochastic method that has been used in various signal processing and character recognition applications. This study proposes the use of HMMs to recognize Javanese characters from a number of different handwritings, whereby the HMM is used to optimize the number of states and the feature extraction. An accuracy of 85.7% is obtained as the best result with a 16-state vertical model using pure HMM. This initial result is satisfactory enough to prompt further research.

Keywords: Character recognition, off-line handwriting recognition, Hidden Markov Model.

292 Analytical and Numerical Approaches in Coagulation of Particles

Authors: Bilal Barakeh

Abstract:

In this paper we discuss the effect of an unbounded particle interaction operator on particle growth and study how this can inform the choice of appropriate time steps for the numerical simulation. We also provide rigorous mathematical proofs showing that large particles become dominant with increasing time while small particles contribute negligibly. Second, we discuss the efficiency of the algorithm by performing numerical simulation tests and by comparing the simulated solutions with some known analytic solutions of the Smoluchowski equation.

Keywords: Stochastic processes, coagulation of particles, numerical scheme.

291 Statistical Characteristics of Distribution of Radiation-Induced Defects under Random Generation

Authors: Pavlo Selyshchev

Abstract:

We consider fluctuations of defect density, taking into account defect interactions. A stochastic field of displacement generation rate gives a random defect distribution. We determine the statistical characteristics (mean and dispersion) of the random field of the point defect distribution as functions of the defect generation parameters, the temperature and the properties of the irradiated crystal.


Keywords: Irradiation, Primary Defects, Interaction, Fluctuations.

290 Light Harvesting Titanium Nanocatalyst for Remediation of Methyl Orange

Authors: Brajesh Kumar, Luis Cumbal

Abstract:

An eco-friendly Citrus paradisi peel extract-mediated synthesis of TiO2 nanoparticles under sonication is reported. UV-vis, transmission electron microscopy, dynamic light scattering, and X-ray analyses are performed to characterize the formation of the TiO2 nanoparticles. The particles are almost spherical in shape, with sizes of 60–140 nm, and the XRD peak at 2θ = 25.363° confirms the characteristic facets of the anatase form. The synthesized nanocatalyst is highly active in the decomposition of methyl orange (64 mg/L) under sunlight (~73%) within 2.5 h.

Keywords: Ecofriendly, TiO2 nanoparticles, Citrus paradisi, TEM.

289 Effect of Urea Deep Placement Technology Adoption on the Production Frontier: Evidence from Irrigation Rice Farmers in the Northern Region of Ghana

Authors: Shaibu Baanni Azumah, William Adzawla

Abstract:

Rice is an important staple crop, with current demand higher than domestic supply in Ghana. This has led to a high and unfavourable import bill. Therefore, recent policies and interventions in the agricultural sub-sector aim at promoting various improved agricultural technologies in order to improve domestic production and reduce the importation of rice. In this study, we examined the effect of the adoption of Urea Deep Placement (UDP) technology by rice farmers on the position of the production frontier. The study involved 200 farmers selected through a multi-stage sampling technique in the Northern region of Ghana. A Cobb-Douglas stochastic frontier model was fitted. The results showed that the adoption of UDP technology shifts the output frontier outward and also moves the farmers closer to the frontier. Farmers were also operating under diminishing returns to scale, which calls for redress. Other factors that significantly influenced rice production were farm size, labour, use of certified seeds and NPK fertilizer. Although there was an opportunity for improvement, the farmers were highly efficient (92%) compared to previous studies. Farmers’ efficiency was improved through increased education, household size, experience, access to credit, and lack of extension service provision by MoFA. The study recommends the revision of Ghana’s agricultural policy to include the UDP technology. Agricultural extension officers of the Ministry of Food and Agriculture (MoFA) should be trained on the UDP technology to support IFDC’s drive to improve adoption by rice farmers. Rice farmers are also encouraged to expand their farm lands, improve plant population, and increase the usage of fertilizer to improve yields. Mechanisms through which credit can be made easily accessible and effectively utilised should be identified and promoted.

Keywords: Efficiency, rice farmers, stochastic frontier, UDP technology.
