Search results for: Least square estimation
492 Restarted Generalized Second-Order Krylov Subspace Methods for Solving Quadratic Eigenvalue Problems
Authors: Liping Zhou, Liang Bao, Yiqin Lin, Yimin Wei, Qinghua Wu
Abstract:
This article is devoted to the numerical solution of large-scale quadratic eigenvalue problems. Such problems arise in a wide variety of applications, such as the dynamic analysis of structural mechanical systems, acoustic systems, fluid mechanics, and signal processing. We first introduce a generalized second-order Krylov subspace based on a pair of square matrices and two initial vectors and present a generalized second-order Arnoldi process for constructing an orthonormal basis of the generalized second-order Krylov subspace. Then, by using the projection technique and the refined projection technique, we propose a restarted generalized second-order Arnoldi method and a restarted refined generalized second-order Arnoldi method for computing some eigenpairs of large-scale quadratic eigenvalue problems. Some theoretical results are also presented, and numerical examples illustrate the effectiveness of the proposed methods.
Keywords: Quadratic eigenvalue problem, generalized second-order Krylov subspace, generalized second-order Arnoldi process, projection technique, refined technique, restarting.
491 Hybrid Anomaly Detection Using Decision Tree and Support Vector Machine
Authors: Elham Serkani, Hossein Gharaee Garakani, Naser Mohammadzadeh, Elaheh Vaezpour
Abstract:
Intrusion detection systems (IDS) are the main components of network security. These systems analyze network events to detect intrusions. An IDS is designed by training it on normal traffic or attack data, and machine learning methods are among the best ways to design IDSs. In the method presented in this article, the pruning algorithm of the C5.0 decision tree is used to reduce the features of the traffic data, and the IDS is trained with the least squares support vector machine (LS-SVM) algorithm. The remaining features are then ranked according to the predictor importance criterion, and the least important features are eliminated one by one. The feature subset that yields the highest accuracy in the LS-SVM is selected as the final feature set. Compared with other similar articles that examine feature selection for the least squares support vector machine model, the obtained features give better accuracy, true positive rate, and false positive rate. The method is evaluated on the UNSW-NB15 dataset.
Keywords: Intrusion detection system, decision tree, support vector machine, feature selection.
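To make the selection loop above concrete, here is a minimal sketch of importance-ranked backward elimination using scikit-learn stand-ins (a DecisionTreeClassifier in place of C5.0 pruning and an RBF SVC in place of LS-SVM) on synthetic data rather than UNSW-NB15; the data, models and parameters are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for the intrusion-detection feature matrix.
    X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                               random_state=0)

    # Rank features by decision-tree importance (the predictor importance criterion).
    tree = DecisionTreeClassifier(random_state=0).fit(X, y)
    order = np.argsort(tree.feature_importances_)          # least important first

    features = list(range(X.shape[1]))
    best_score, best_features = -np.inf, list(features)
    for least in order[:-1]:                               # drop one feature per round
        score = cross_val_score(SVC(kernel="rbf"), X[:, features], y, cv=5).mean()
        if score > best_score:
            best_score, best_features = score, list(features)
        features.remove(least)

    print("selected features:", best_features, "cv accuracy: %.3f" % best_score)

An LS-SVM would simply replace the SVC call here; the elimination logic itself is unchanged.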
490 Investigation on Mesh Sensitivity of a Transient Model for Nozzle Clogging
Authors: H. Barati, M. Wu, A. Kharicha, A. Ludwig
Abstract:
A transient model for nozzle clogging has been developed and successfully validated against a laboratory experiment. Key steps of clogging are considered: transport of particles by turbulent flow towards the nozzle wall; interactions between the fluid flow and the nozzle wall, and the adhesion of the particle on the wall; the growth of the clog layer and its interaction with the flow. The current paper investigates the mesh (size and type) sensitivity of the model in both two and three dimensions. It is found that the algorithm for clog growth alone, excluding the flow effect, is insensitive to the mesh type and size, but the calculation including flow becomes sensitive to the mesh quality. The use of 2D meshes leads to overestimation of the clog growth because the 3D nature of the flow in the boundary layer cannot be properly resolved by a 2D calculation. 3D simulation with a tetrahedral mesh can also lead to an erroneous estimate of the clog growth. A mesh-independent result can be achieved with a hexahedral mesh, or at least with triangular prisms (inflation layers) in the near-wall regions.
Keywords: Clogging, nozzle, numerical model, simulation.
489 Modified Fuzzy ARTMAP and Supervised Fuzzy ART: Comparative Study with Multispectral Classification
Authors: F. Alilat, S. Loumi, H. Merrad, B. Sansal
Abstract:
In this article, a modification of the fuzzy ART network algorithm aimed at making it supervised is carried out. It consists of searching for the comparison, training and vigilance parameters that give the minimum quadratic distance between the output of the training base and the output obtained by the network. The same process is applied to determine the parameters of the fuzzy ARTMAP giving the most powerful network. The modification consists in having the fuzzy ARTMAP learn a base of examples not just once, as is usual, but as many times as its architecture keeps evolving or the objective error has not been reached. In this way, we do not have to worry about the values to impose on the eight parameters of the network. To evaluate each of these three modified networks, a comparison of their performances is carried out. As an application, we classified an image of the bay of Algiers taken by SPOT XS. The evaluation criteria are the training duration, the mean square error (MSE) at the control step, and the rate of correct classification per class. The results of this study, presented as curves, tables and images, show that the modified fuzzy ARTMAP offers the best quality/computing-time compromise.
Keywords: Neural networks, fuzzy ART, fuzzy ARTMAP, remote sensing, multispectral classification.
488 Predicting Global Solar Radiation Using Recurrent Neural Networks and Climatological Parameters
Authors: Rami El-Hajj Mohamad, Mahmoud Skafi, Ali Massoud Haidar
Abstract:
Several meteorological parameters were used for the prediction of monthly average daily global solar radiation on a horizontal surface using recurrent neural networks (RNNs). Climatological data and measurements, mainly air temperature, humidity, sunshine duration, and wind speed between 1995 and 2007, were used to design and validate feed-forward and recurrent neural network based prediction systems. In this paper we present our reference system based on a feed-forward multilayer perceptron (MLP) as well as the proposed approach based on an RNN model. The obtained results were promising and comparable to those obtained by other existing empirical and neural models. The experimental results showed the advantage of RNNs over simple MLPs when dealing with time-series solar radiation predictions based on daily climatological data.
Keywords: Recurrent neural networks, global solar radiation, multi-layer perceptron, gradient, root mean square error.
487 Long-Range Dependence of Financial Time Series Data
Authors: Chatchai Pesee
Abstract:
This paper examines long-range dependence, or long memory, of financial time series, namely exchange rate data, using the fractional Brownian motion (fBm). The principle of the spectral density function, reviewed in Section II, is used to find the range of the Hurst parameter (H) of the fBm. If 0 < H < 1/2, the process exhibits short-range dependence (SRD); it exhibits long memory, or long-range dependence (LRD), if 1/2 < H < 1. The curve of the exchange rate data is fBm because of the specific value taken by the Hurst parameter (H). Some definitions of the fBm, long-range dependence and self-similarity are also reviewed in Section II. Our results in Section III indicate that there exists long memory, or long-range dependence (LRD), in the exchange rate data. Long-range dependence of the exchange rate data and estimation of the Hurst parameter (H) are discussed in Section IV, while conclusions are given in Section V.
Keywords: Fractional Brownian motion, long-range dependence, memory, short-range dependence.
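As a rough illustration of the spectral-density route to the Hurst parameter, the sketch below fits the low-frequency slope of the periodogram of the increments, assuming they behave as fractional Gaussian noise with S(f) ~ |f|^(1-2H); the synthetic random walk merely stands in for the exchange rate series, and this is not the paper's exact estimator.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.cumsum(rng.standard_normal(4096))    # stand-in "exchange rate" path (H ~ 0.5)
    incr = np.diff(x)                           # increments (fGn if x is fBm)

    freqs = np.fft.rfftfreq(incr.size, d=1.0)[1:]
    power = np.abs(np.fft.rfft(incr))[1:] ** 2 / incr.size

    low = freqs < 0.1                           # fit the low-frequency part only
    slope, _ = np.polyfit(np.log(freqs[low]), np.log(power[low]), 1)
    H = (1.0 - slope) / 2.0                     # S(f) ~ |f|**(1 - 2H) for fGn
    print("estimated Hurst parameter H = %.3f" % H)

Estimates of H above 1/2 indicate long-range dependence and below 1/2 short-range dependence, matching the ranges quoted above.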
486 The Use of Performance Indicators for Evaluating Models of Drying Jackfruit (Artocarpus heterophyllus L.): Page, Midilli, and Lewis
Authors: D. S. C. Soares, D. G. Costa, J. T. S., A. K. S. Abud, T. P. Nunes, A. M. Oliveira Júnior
Abstract:
Mathematical models of drying are used to understand the drying process and to determine parameters that are important for the design and operation of the dryer. Jackfruit is a highly consumed but highly perishable fruit in the Northeast, so techniques are needed to preserve it for longer and allow its distribution to regions with low consumption. This study analyzed several mathematical drying models (Page, Lewis, and Midilli) to indicate the one that best fits the conditions of the convective drying process, using the performance indicators associated with each model: accuracy factor (Af), bias factor (Bf), root mean square error (RMSE) and standard error of prediction (%SEP). Jackfruit drying was carried out in a convective tray dryer at a temperature of 50°C for 9 hours. The Midilli model was the most accurate, with Af: 1.39, Bf: 1.33, RMSE: 0.01%, and SEP: 5.34. However, the Midilli model is not appropriate for process control purposes because it needs four tuning parameters. With the performance indicators used in this paper, the Page model showed similar results with only two parameters. It is concluded that the best correlation between the experimental and estimated data is given by the Page model.
Keywords: Drying, models, jackfruit.
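A small illustration of how such a model fit and its indicators can be computed: the sketch fits the two-parameter Page model MR(t) = exp(-k t^n) with scipy and reports RMSE and %SEP on made-up moisture-ratio data; the data, and the exact definition of %SEP used here, are assumptions for illustration, not the paper's measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def page(t, k, n):
        # Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)
        return np.exp(-k * t ** n)

    t = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)   # drying time, h (made up)
    mr = np.array([1.0, 0.78, 0.60, 0.47, 0.36, 0.28, 0.22, 0.17, 0.13, 0.10])

    (k, n), _ = curve_fit(page, t, mr, p0=(0.1, 1.0))
    pred = page(t, k, n)

    rmse = np.sqrt(np.mean((mr - pred) ** 2))
    sep = 100.0 * rmse / mr.mean()              # standard error of prediction, %
    print("k = %.3f, n = %.3f, RMSE = %.4f, SEP = %.2f%%" % (k, n, rmse, sep))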
485 An Attempt to Predict the Performances of a Rocket Thrust Chamber
Authors: A. Benarous, D. Karmed, R. Haoui, A. Liazid
Abstract:
The process for predicting the ballistic properties of a liquid rocket engine is based on the quantitative estimation of deviations from idealized performance. To this end, an equilibrium chemistry procedure is first developed and implemented in a Fortran routine. The thermodynamic formulation allows for the calculation of the theoretical performances of a rocket thrust chamber. In a second step, a computational fluid dynamic analysis of the turbulent reactive flow within the chamber is performed using a finite volume approach. The obtained values for the "quasi-real" performances account for both turbulent mixing and chemistry-turbulence coupling. In the present work, emphasis is placed on the combustion efficiency, whose deviation is mainly due to radial gradients of static temperature and mixture ratio. Numerical values of the characteristic velocity are successfully compared with results from an industry-used code. The results are also confronted with the experimental data of a laboratory-scale rocket engine.
Keywords: JANAF methodology, Liquid rocket engine, Mascotte test-rig, Theoretical performances.
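For the characteristic-velocity comparison mentioned above, a back-of-the-envelope sketch of the ideal c* and of a combustion-efficiency ratio is given below; the gas properties (gamma, molar mass, chamber temperature) and the "delivered" value are placeholder numbers, not the paper's operating point.

    import math

    gamma = 1.2            # ratio of specific heats of combustion products (assumed)
    M = 0.0135             # mean molar mass, kg/mol (assumed)
    Tc = 3500.0            # chamber temperature, K (assumed)
    R = 8.314 / M          # specific gas constant, J/(kg K)

    # Ideal characteristic velocity: c* = sqrt(R*Tc) / Gamma(gamma)
    Gamma = math.sqrt(gamma) * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    c_star_th = math.sqrt(R * Tc) / Gamma

    c_star_delivered = 0.97 * c_star_th        # placeholder "quasi-real" value
    eta_c_star = c_star_delivered / c_star_th  # combustion efficiency
    print("c*_theoretical = %.0f m/s, combustion efficiency = %.3f" % (c_star_th, eta_c_star))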
484 Contribution of On-Site and Off-Site Processes to Greenhouse Gas (GHG) Emissions by Wastewater Treatment Plants
Authors: Laleh Yerushalmi, Fariborz Haghighat, Maziar Bani Shahabadi
Abstract:
The estimation of the overall on-site and off-site greenhouse gas (GHG) emissions of wastewater treatment plants revealed that in anaerobic and hybrid treatment systems greater emissions result from off-site processes than from on-site processes. However, in aerobic treatment systems, on-site processes make the higher contribution to the overall GHG emissions. The total GHG emissions were estimated to be 1.6, 3.3 and 3.8 kg CO2-e/kg BOD in the aerobic, anaerobic and hybrid treatment systems, respectively. In the aerobic treatment system without the recovery and use of the generated biogas, the off-site GHG emissions were 0.65 kg CO2-e/kg BOD, accounting for 40.2% of the overall GHG emissions. This value changed to 2.3 and 2.6 kg CO2-e/kg BOD, accounting for 69.9% and 68.1% of the overall GHG emissions in the anaerobic and hybrid treatment systems, respectively. The increased off-site GHG emissions in the anaerobic and hybrid treatment systems are mainly due to material usage and energy demand in these systems. The anaerobic digester can contribute up to 100%, 55% and 60% of the overall energy needs of plants in the aerobic, anaerobic and hybrid treatment systems, respectively.
Keywords: On-site and off-site greenhouse gas (GHG) emissions, wastewater treatment plants, biogas recovery.
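The split between on-site and off-site contributions can be checked directly from the rounded emission factors quoted above; small differences from the quoted percentages (40.2%, 69.9%, 68.1%) come only from rounding of the inputs.

    # Off-site share implied by the rounded emission factors (kg CO2-e/kg BOD) above.
    totals = {"aerobic": 1.6, "anaerobic": 3.3, "hybrid": 3.8}
    off_site = {"aerobic": 0.65, "anaerobic": 2.3, "hybrid": 2.6}

    for system, total in totals.items():
        share = 100.0 * off_site[system] / total
        print("%-9s off-site %.2f of %.1f kg CO2-e/kg BOD -> %.1f%%"
              % (system, off_site[system], total, share))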
483 Evaluating Hourly Sulphur Dioxide and Ground Ozone Simulated with the Air Quality Model in Lima, Peru
Authors: Odón R. Sánchez-Ccoyllo, Elizabeth Ayma-Choque, Alan Llacza
Abstract:
Sulphur dioxide (SO₂) and surface ozone (O₃) concentrations are associated with diseases. The objective of this research is to evaluate the effectiveness of the air-quality Weather Research and Forecasting model coupled to Chemistry (WRF-Chem) with a horizontal resolution of 5 km x 5 km. For this purpose, the hourly SO₂ and O₃ measurements available at three air quality monitoring stations in Lima, Peru were used to validate the SO₂ and O₃ concentrations simulated with the WRF-Chem model in February 2018. For the quantitative evaluation of the simulations of these gases, statistical metrics were computed: the average of the simulations; the average of the measurements; the Mean Bias (MeB); the Mean Error (MeE); and the Root Mean Square Error (RMSE). These statistical metrics indicated that the simulated SO₂ and O₃ values over-predicted the SO₂ and O₃ measurements. For the SO₂ concentration, the MeB values varied from 0.58 to 26.35 µg/m³, the MeE values from 8.75 to 26.5 µg/m³, and the RMSE values from 13.3 to 31.79 µg/m³; for the O₃ concentration, the MeB values varied from 37.52 to 56.29 µg/m³, the MeE values from 37.54 to 56.70 µg/m³, and the RMSE values from 43.05 to 69.56 µg/m³.
Keywords: Ground-ozone, Lima, Sulphur dioxide, WRF-Chem.
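The evaluation statistics named above have simple definitions; a minimal sketch is given below, with MeE taken as the mean absolute difference (an assumption) and placeholder arrays standing in for the Lima station data.

    import numpy as np

    def mean_bias(sim, obs):     # MeB: mean of (simulated - observed)
        return np.mean(sim - obs)

    def mean_error(sim, obs):    # MeE: mean absolute difference (assumed definition)
        return np.mean(np.abs(sim - obs))

    def rmse(sim, obs):          # root mean square error
        return np.sqrt(np.mean((sim - obs) ** 2))

    obs = np.array([12.0, 15.0, 9.0, 20.0, 18.0])    # e.g. hourly SO2 in ug/m3 (made up)
    sim = np.array([18.0, 21.0, 14.0, 25.0, 30.0])   # corresponding model values (made up)
    print(mean_bias(sim, obs), mean_error(sim, obs), rmse(sim, obs))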
482 Optimal Convolutive Filters for Real-Time Detection and Arrival Time Estimation of Transient Signals
Authors: Michal Natora, Felix Franke, Klaus Obermayer
Abstract:
Linear convolutive filters are fast in calculation and in application, and thus, often used for real-time processing of continuous data streams. In the case of transient signals, a filter has not only to detect the presence of a specific waveform, but to estimate its arrival time as well. In this study, a measure is presented which indicates the performance of detectors in achieving both of these tasks simultaneously. Furthermore, a new sub-class of linear filters within the class of filters which minimize the quadratic response is proposed. The proposed filters are more flexible than the existing ones, like the adaptive matched filter or the minimum power distortionless response beamformer, and prove to be superior with respect to that measure in certain settings. Simulations of a real-time scenario confirm the advantage of these filters as well as the usefulness of the performance measure.
Keywords: Adaptive matched filter, minimum variance distortionless response, beamforming, Capon beamformer, linear filters, performance measure.
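For readers unfamiliar with the baseline, the sketch below shows the simplest relative of the filters discussed above: a plain matched filter applied by convolution, with detection by thresholding the response and the arrival time taken at the response peak; the waveform, noise level and threshold are arbitrary choices, not the proposed optimal filters.

    import numpy as np

    rng = np.random.default_rng(1)
    template = np.hanning(32) * np.sin(np.linspace(0, 4 * np.pi, 32))   # transient waveform

    signal = 0.3 * rng.standard_normal(2000)          # continuous noisy data stream
    true_arrival = 700
    signal[true_arrival:true_arrival + 32] += template

    # Matched filtering = correlation with the time-reversed template.
    response = np.convolve(signal, template[::-1], mode="valid")
    est_arrival = int(np.argmax(np.abs(response)))    # peak position = arrival estimate

    detected = np.max(np.abs(response)) > 5.0 * np.std(response)   # crude threshold
    print("detected:", detected, "estimated arrival:", est_arrival, "true:", true_arrival)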
481 Semi-automatic Background Detection in Microscopic Images
Authors: Alessandro Bevilacqua, Alessandro Gherardi, Ludovico Carozza, Filippo Piccinini
Abstract:
Recent years have seen an increasing use of image analysis techniques in the field of biomedical imaging, in particular in microscopic imaging. The basic step for most image analysis techniques relies on a background image free of objects of interest, whether they are cells or histological samples, in order to perform further analysis such as segmentation or mosaicing. Commonly, this image is an empty field acquired in advance. However, acquiring an empty field is often not feasible; moreover, it may differ from the background region of the sample actually being studied because of the interaction with the organic matter; and it can be expensive, for instance in the case of live cell analyses. We propose a non-parametric and general-purpose approach in which the background is built automatically from a sequence of images that may contain objects of interest. The amount of object-free area in each image only affects the overall speed with which the background is obtained. Experiments with different kinds of microscopic images prove the effectiveness of our approach.
Keywords: Microscopy, flat field correction, background estimation, image segmentation.
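The authors' algorithm is not reproduced here, but a common baseline in the same spirit is a pixel-wise temporal median over a stack of frames that still contain objects of interest; the sketch below demonstrates it on synthetic frames standing in for real microscope images.

    import numpy as np

    rng = np.random.default_rng(2)
    h, w, n_frames = 64, 64, 15

    true_background = 100.0 + 10.0 * rng.standard_normal((h, w))
    frames = np.repeat(true_background[None], n_frames, axis=0)

    for k in range(n_frames):                     # add a bright "cell" at a random spot
        r, c = rng.integers(0, h - 8), rng.integers(0, w - 8)
        frames[k, r:r + 8, c:c + 8] += 80.0
    frames = frames + 2.0 * rng.standard_normal(frames.shape)   # sensor noise

    background = np.median(frames, axis=0)        # objects are per-pixel outliers
    print("max abs error vs. true background: %.2f"
          % np.abs(background - true_background).max())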
480 Dynamic Measurement System Modeling with Machine Learning Algorithms
Authors: Changqiao Wu, Guoqing Ding, Xin Chen
Abstract:
In this paper, ways of modeling dynamic measurement systems are discussed. Specifically, a linear system with a single input and a single output can be modeled with a shallow neural network. Gradient-based optimization algorithms are then used to search for the proper coefficients. In addition, methods based on the normal equation and on second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient with momentum contribute to faster convergence and enhance model capability. Lastly, experimental results prove the effectiveness of the second-order gradient descent algorithm and indicate that optimization with the normal equation is the most suitable for linear dynamic models.
Keywords: Dynamic system modeling, neural network, normal equation, second-order gradient descent.
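As a concrete illustration of the normal-equation route, the sketch below fits a linear single-input single-output model, assumed here (for illustration only) to be a short FIR structure, by solving (X^T X) h = X^T y; this is also the maximum-likelihood estimate under Gaussian noise mentioned in the abstract.

    import numpy as np

    rng = np.random.default_rng(3)
    u = rng.standard_normal(500)                    # input signal
    h_true = np.array([0.8, -0.4, 0.2])             # "true" impulse response (assumed)
    y = np.convolve(u, h_true)[:u.size] + 0.05 * rng.standard_normal(u.size)

    # Regression matrix of lagged inputs so that y ~ X @ h.
    X = np.column_stack([np.concatenate([np.zeros(k), u[:u.size - k]]) for k in range(3)])

    # Normal equation h = (X^T X)^(-1) X^T y, solved without forming the inverse.
    h_hat = np.linalg.solve(X.T @ X, X.T @ y)
    print("estimated impulse response:", np.round(h_hat, 3))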
479 Combining Color and Layout Features for the Identification of Low-resolution Documents
Authors: Ardhendu Behera, Denis Lalanne, Rolf Ingold
Abstract:
This paper proposes a method, combining color and layout features, for identifying documents captured with low-resolution handheld devices. On the one hand, the document image color density surface is estimated and represented with an equivalent ellipse; on the other hand, the document's shallow layout structure is computed and hierarchically represented. The combined color and layout features are arranged in a symbolic file, which is unique for each document and is called the document's visual signature. Our identification method first uses the color information in the signatures in order to focus the search space on documents having a similar color distribution, and then selects the document having the most similar layout structure in the remaining search space. Our experiments consider slide documents, which are often captured using handheld devices.
Keywords: Document color modeling, document visual signature, kernel density estimation, document identification.
478 Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency
Authors: Fanqiang Kong, Chending Bian
Abstract:
In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. Joint-sparsity is the first property of the abundances, which assumes that adjacent pixels can be expressed as different linear combinations of the same materials. The second property is rank-deficiency: the number of endmembers participating in the hyperspectral data is very small compared with the dimensionality of the spectral library, which means that the abundance matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined l2,p-norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve the optimization problem. Experimental evaluation carried out on synthetic and real hyperspectral data shows that the proposed method outperforms the state-of-the-art algorithms with a better spectral unmixing accuracy.
Keywords: Hyperspectral unmixing, joint-sparse, low-rank representation, abundance estimation.
477 Kinetic Parameter Estimation from Thermogravimetry and Microscale Combustion Calorimetry
Authors: Rhoda Afriyie Mensah, Lin Jiang, Solomon Asante-Okyere, Xu Qiang, Cong Jin
Abstract:
Flammability analysis of extruded polystyrene (XPS) has become crucial due to its use as an insulation material for energy-efficient buildings. Using the Kissinger-Akahira-Sunose and Flynn-Wall-Ozawa methods, the degradation kinetics of two pure XPS materials from the local market, a red one and a grey one, were obtained from the results of thermogravimetric analysis (TG) and microscale combustion calorimetry (MCC) experiments performed at the same heating rates. From the experiments, it was found that the red XPS released more heat than the grey XPS and that both materials showed two mass loss stages. Consequently, the kinetic parameters for the red XPS were higher than those for the grey XPS. A comparative evaluation of the activation energies from MCC and TG showed an insignificant degree of deviation, signifying an equivalent apparent activation energy from both methods. However, different activation energy profiles, resulting from the different chemical pathways, appeared when the dependencies of the activation energies on the extent of conversion for TG and MCC were compared.
Keywords: Flammability, microscale combustion calorimetry, thermogravimetric analysis, thermal degradation, kinetic analysis.
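As an illustration of the isoconversional fits mentioned above, the Kissinger-Akahira-Sunose method reduces, at a fixed conversion, to a linear fit of ln(beta/T^2) against 1/T whose slope equals -Ea/R; the heating rates and temperatures below are invented numbers, not the XPS data.

    import numpy as np

    R = 8.314                                    # gas constant, J/(mol K)
    beta = np.array([5.0, 10.0, 20.0, 40.0])     # heating rates, K/min (assumed)
    T = np.array([640.0, 652.0, 665.0, 679.0])   # temperature at a fixed conversion, K (assumed)

    y = np.log(beta / T ** 2)                    # KAS: ln(beta/T^2) = const - Ea/(R*T)
    x = 1.0 / T
    slope, intercept = np.polyfit(x, y, 1)
    Ea = -slope * R / 1000.0                     # apparent activation energy, kJ/mol
    print("apparent activation energy: %.1f kJ/mol" % Ea)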
476 Spatio-Temporal Analysis and Mapping of Malaria in Thailand
Authors: Krisada Lekdee, Sunee Sammatat, Nittaya Boonsit
Abstract:
This paper proposes a GLMM with spatial and temporal effects for malaria data in Thailand. A Bayesian method is used for parameter estimation via Gibbs sampling (MCMC). A conditional autoregressive (CAR) model is assumed to represent the spatial effects, and the temporal correlation is represented through the covariance matrix of the random effects. The malaria quarterly data were extracted from the Bureau of Epidemiology, Ministry of Public Health of Thailand. The factors considered are rainfall and temperature. The results show that rainfall and temperature are positively related to the malaria morbidity rate. The posterior means of the estimated morbidity rates are used to construct the malaria maps. The top 5 highest morbidity rates (per 100,000 population) are in Trat (Q3, 111.70), Chiang Mai (Q3, 104.70), Narathiwat (Q4, 97.69), Chiang Mai (Q2, 88.51), and Chanthaburi (Q3, 86.82). According to the DIC criterion, the proposed model performs better than the GLMM with spatial effects but without temporal terms.
Keywords: Bayesian method, generalized linear mixed model (GLMM), malaria, spatial effects, temporal correlation.
475 Maximizer of the Posterior Marginal Estimate for Noise Reduction of JPEG-compressed Image
Authors: Yohei Saika, Yuji Haraguchi
Abstract:
We constructed a method of noise reduction for JPEG-compressed images based on Bayesian inference using the maximizer of the posterior marginal (MPM) estimate. In this method, we tried the MPM estimate using two kinds of likelihood, both of which relate grayscale images to their lossy JPEG-compressed versions. One is a deterministic model of the likelihood and the other is a probabilistic one expressed by the Gaussian distribution. Then, using Monte Carlo simulation for grayscale images, such as the 256-grayscale standard image "Lena" with 256 × 256 pixels, we examined the performance of the MPM estimate with the mean square error as the performance measure. We clarified that the MPM estimate via the Gaussian probabilistic model of the likelihood is effective for reducing noise, such as blocking artifacts and mosquito noise, if the parameters are set appropriately. On the other hand, we found that the MPM estimate via the deterministic model of the likelihood is not effective for noise reduction due to the low acceptance ratio of the Metropolis algorithm.
Keywords: Noise reduction, JPEG-compressed image, Bayesian inference, maximizer of the posterior marginal estimate.
474 Deployment of a Biocompatible International Space Station into Geostationary Orbit
Authors: Tim Falk, Chris Chatwin
Abstract:
This study explores the possibility of a space station that occupies a geostationary equatorial orbit (GEO) and creates artificial gravity using centripetal acceleration. The concept of the station is to create a habitable, safe environment that can increase the possibility of space tourism by reducing the wide variety of hazards associated with space exploration. The ability to control the intensity of artificial gravity through Hall-effect thrusters will allow experiments to be carried out at different levels of artificial gravity. A feasible prototype model was built to convey the concept and to enable cost estimation. The SpaceX Falcon Heavy rocket, with a 26,700 kg payload to GEO, was selected to take the 675 tonne spacecraft into orbit; space station construction will require up to 30 launches, which would be reduced to 5 launches when the SpaceX BFR becomes available. The estimated total cost of implementing the Sussex Biocompatible International Space Station (BISS) is approximately $47.039 billion, which is very attractive when compared to the cost of the International Space Station, which cost $150 billion.
Keywords: Artificial gravity, biocompatible, geostationary orbit, space station.
473 Application of Generalized Autoregressive Score Model to Stock Returns
Authors: Katleho Daniel Makatjane, Diteboho Lawrence Xaba, Ntebogang Dinah Moroke
Abstract:
The current study investigates the behaviour of time-varying parameters that are based on the score function of the predictive model density at time t. The mechanism used to update the parameters over time is the scaled score of the likelihood function. The results revealed high persistence of the time-varying parameters, as the location parameter is higher and the skewness parameter implies a departure of the scale parameter from normality, with the unconditional parameter equal to 1.5. The results also revealed persistent leptokurtic behaviour in stock returns, which implies that the returns are heavily tailed. Prior to model estimation, the White Neural Network test showed that the stock price can be modelled by a GAS model. Finally, we propose further research, specifically to model the existence of time-varying parameters with a more detailed model that accounts for the heavy-tailed distribution of the series and computes the risk measure associated with the returns.
Keywords: Generalized autoregressive score model, stock returns, time-varying.
472 The Estimation of Human Vital Signs Complexity
Authors: L. Bikulciene, E. Venskaityte, G. Jarusevicius
Abstract:
Nonstationary and nonlinear signals generated by living complex systems defy traditional mechanistic approaches, which are based on homeostasis. Our previous studies have shown that the evaluation of the interactions of physiological signals using special analysis methods is suitable for the observation of physiological processes. We demonstrate the possibility of using a deep physiological model, based on the interpretation of the changes of the human body's functional states, combined with an analytical method based on matrix theory for physiological signal analysis, applied to high-risk cardiac patients. It is shown that the evaluation of cardiac signal interactions reveals functional changes peculiar to each individual at the onset of the hemodynamic restoration procedure. Therefore, we suggest that the assessment of alterations in the functional state of the body after patients undergo surgery can be complemented by the data received from the suggested approach of evaluating the interactions of functional variables.
Keywords: Cardiac diseases, complex systems theory, ECG analysis, matrix analysis.
471 Nodal Load Profiles Estimation for Time Series Load Flow Using Independent Component Analysis
Authors: Mashitah Mohd Hussain, Salleh Serwan, Zuhaina Hj Zakaria
Abstract:
This paper presents a method to estimate load profiles from multiple power flow solutions for every minute over 24 hours of a day. A method to calculate multiple solutions of the nonlinear load profile is introduced. The Power System Simulation for Engineering (PSS®E) software and Python have been used to solve the load power flow. The results of these power flow solutions have been used to estimate the load profile of each load bus using Independent Component Analysis (ICA), without any knowledge of the parameters and network topology of the system. The proposed algorithm is tested with the IEEE 69-bus test system, which represents the distribution part, and the ICA method has been programmed in MATLAB R2012b. Simulation results and estimation errors are discussed in this paper.
Keywords: Electrical distribution system, power flow solution, distribution network, independent component analysis, Newton-Raphson, Power System Simulation for Engineering.
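A minimal sketch of the ICA step on synthetic signals (not the IEEE 69-bus case): three assumed minute-resolution load profiles are linearly mixed by an unknown matrix and then recovered, up to scaling and ordering, with scikit-learn's FastICA.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(4)
    t = np.linspace(0.0, 24.0, 1440)                    # one sample per minute over 24 h

    sources = np.column_stack([
        1.0 + 0.5 * np.sin(2 * np.pi * t / 24.0),       # slowly varying profile
        np.where((t > 8.0) & (t < 18.0), 1.5, 0.3),     # daytime block profile
        0.8 + 0.2 * np.sign(np.sin(2 * np.pi * t / 6.0))  # cyclic switching profile
    ])
    mixing = rng.uniform(0.2, 1.0, size=(3, 3))         # unknown "network" mixing
    observed = sources @ mixing.T + 0.01 * rng.standard_normal((t.size, 3))

    ica = FastICA(n_components=3, random_state=0)
    estimated = ica.fit_transform(observed)             # recovered profiles (scale/order arbitrary)
    print("estimated profile matrix shape:", estimated.shape)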
470 Image Dehazing Using Dark Channel Prior and Fast Guided Filter in Daubechies Lifting Wavelet Transform Domain
Authors: Harpreet Kaur, Sudipta Majumdar
Abstract:
In this paper, a method for image dehazing in the lifting wavelet transform domain is proposed. The lifting Daubechies (D4) wavelet is used to obtain the approximation image and the detail images. As the haze is contained in the low-frequency part, only the approximation image is used for further processing. This region is processed by a dehazing algorithm based on the dark channel prior (DCP). The dehazed approximation image is then recombined with the detail images using the inverse lifting wavelet transform. Implementation of the lifting wavelet transform has the advantages of auxiliary memory saving, fast implementation and simplicity. Also, the proposed method deals with the near-white scene problem, the blue horizon issue and localized light sources in a way that enhances image quality and makes the algorithm robust. Simulation results show improvement in terms of visual quality and of parameters such as root mean square (RMS) contrast, structural similarity index (SSIM), entropy and execution time.
Keywords: Dark channel prior, image dehazing, lifting wavelet transform.
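For reference, the dark channel prior step alone (the lifting-wavelet decomposition and guided filtering stages are omitted) can be sketched as below: channel-wise minimum, local minimum filter, atmospheric light from the brightest dark-channel pixels, and the usual transmission estimate t = 1 - omega * dark(I/A); the patch size and omega are typical choices, not necessarily those of the paper.

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        # Per-pixel minimum over colour channels, then a local minimum filter.
        return minimum_filter(img.min(axis=2), size=patch)

    def estimate_transmission(img, omega=0.95, patch=15):
        dark = dark_channel(img, patch)
        # Atmospheric light A: mean colour of the brightest 0.1% dark-channel pixels.
        n_top = max(1, dark.size // 1000)
        idx = np.argsort(dark.ravel())[-n_top:]
        A = img.reshape(-1, 3)[idx].mean(axis=0)
        t = 1.0 - omega * dark_channel(img / A, patch)
        return np.clip(t, 0.1, 1.0), A

    hazy = np.random.default_rng(5).uniform(0.4, 1.0, size=(120, 160, 3))  # stand-in image
    t, A = estimate_transmission(hazy)
    print("transmission range: %.2f - %.2f" % (t.min(), t.max()), "A =", np.round(A, 2))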
469 Comparison of Hough Transform and Mean Shift Algorithm for Estimation of the Orientation Angle of Industrial Data Matrix Codes
Authors: Ion-Cosmin Dita, Vasile Gui, Franz Quint, Marius Otesteanu
Abstract:
In the automatic manufacturing and assembly of mechanical, electrical and electronic parts, one needs to reliably identify the position of components and to extract the information carried by these components. Data Matrix Codes (DMC) are nowadays well established in many areas of industrial manufacturing thanks to the amount of information they concentrate in small spaces. In today's typically order-driven industry, where increased tracing requirements prevail, they offer further advantages over other identification systems. This underlines in an impressive way the necessity of a robust code reading system for detecting DMC on components in factories. This paper compares two methods for estimating the orientation angle of Data Matrix Codes: one based on the Hough Transform and the other based on the Mean Shift Algorithm. We concentrate on Data Matrix Codes in an industrial environment, punched, milled, lasered or etched on different materials in arbitrary orientation.
Keywords: Industrial data matrix code, Hough transform, mean shift.
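A sketch of the Hough-transform branch only (not the authors' full pipeline and not the mean-shift variant): edges via Canny, the standard Hough line transform, and the most frequent line angle taken as the code orientation; a synthetic rotated square stands in for a real Data Matrix Code image, and the OpenCV parameters are arbitrary.

    import cv2
    import numpy as np

    # Synthetic test image: a filled square rotated by 25 degrees.
    img = np.zeros((200, 200), np.uint8)
    cv2.rectangle(img, (60, 60), (140, 140), 255, thickness=-1)
    rot = cv2.getRotationMatrix2D((100, 100), 25, 1.0)
    img = cv2.warpAffine(img, rot, (200, 200))

    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 40)    # (rho, theta) pairs, or None

    # Fold the line-normal angles into [0, 90) degrees and take the most frequent one.
    angles = (np.degrees(lines[:, 0, 1]) % 90).round().astype(int)
    dominant = np.bincount(angles).argmax()
    print("estimated orientation: about %d degrees" % dominant)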
468 Fiber Optic Sensors for Hydrogen Peroxide Vapor Measurement
Authors: H. Akbari Khorami, P. Wild, N. Djilali
Abstract:
This paper reports on the response of a fiber-optic sensing probe to small concentrations of hydrogen peroxide (H2O2) vapor at room temperature. H2O2 has extensive applications in industrial and medical environments; at the same time, H2O2 can be a health hazard by itself. For example, H2O2 induces cellular damage in human cells and its presence can be used to diagnose illnesses such as asthma and human breast cancer. Hence, the development of a reliable H2O2 sensor is of vital importance to detect and measure this species. Ferric ferrocyanide, referred to as Prussian Blue (PB), was deposited on the tip of a multimode optical fiber through the single source precursor technique and served as a spectroscopic indicator of H2O2. Sensing tests were performed in H2O2-H2O vapor mixtures with different concentrations of H2O2. The results of the sensing tests show that the sensor is able to detect H2O2 concentrations in the range of 50.6 ppm to 229.5 ppm. Furthermore, the sensor response to H2O2 concentration is linear on a log-log scale with an adjusted R-squared of 0.93. This sensing behavior allows us to detect and quantify the concentration of H2O2 in the vapor phase.
Keywords: Chemical deposition, fiber-optic sensors, hydrogen peroxide vapor, Prussian Blue.
467 Unscented Transformation for Estimating the Lyapunov Exponents of Chaotic Time Series Corrupted by Random Noise
Authors: K. Kamalanand, P. Mannar Jawahar
Abstract:
Many systems in the natural world exhibit chaos or non-linear behavior, the complexity of which is so great that they appear to be random. Identification of chaos in experimental data is essential for characterizing the system and for analyzing the predictability of the data under analysis. The Lyapunov exponents provide a quantitative measure of the sensitivity to initial conditions and are the most useful dynamical diagnostic for chaotic systems. However, it is difficult to accurately estimate the Lyapunov exponents of chaotic signals that are corrupted by random noise. In this work, a method for estimating the Lyapunov exponents from noisy time series using the unscented transformation is proposed. The proposed methodology was validated using time series obtained from known chaotic maps. In this paper, the objective of the work, the proposed methodology and the validation results are discussed in detail.
Keywords: Lyapunov exponents, unscented transformation, chaos theory, neural networks.
466 Estimating Shortest Circuit Path Length Complexity
Authors: Azam Beg, P. W. Chandana Prasad, S.M.N.A Senenayake
Abstract:
When binary decision diagrams are formed from uniformly distributed Monte Carlo data for a large number of variables, the complexity of the decision diagrams exhibits a predictable relationship to the number of variables and minterms. In the present work, a neural network model has been used to analyze the pattern of shortest path length for a larger number of Monte Carlo data points. The neural model shows strong descriptive power for the ISCAS benchmark data, with an RMS error of 0.102 for the shortest path length complexity. Therefore, the model can be considered a method of predicting path length complexities; this is expected to lead to minimum time complexity of very large-scale integrated circuits and of the related computer-aided design tools that use binary decision diagrams.
Keywords: Monte Carlo circuit simulation data, binary decision diagrams, neural network modeling, shortest path length estimation.
465 An Efficient Collocation Method for Solving the Variable-Order Time-Fractional Partial Differential Equations Arising from the Physical Phenomenon
Authors: Haniye Dehestani, Yadollah Ordokhani
Abstract:
In this work, we present an efficient approach for solving variable-order time-fractional partial differential equations, based on Legendre and Laguerre polynomials. First, we introduce the pseudo-operational matrices of integer-order and variable fractional-order integration by using some properties of the Riemann-Liouville fractional integral. These matrices are then applied, together with the collocation method and Legendre-Laguerre functions, to solve variable-order time-fractional partial differential equations. An estimation of the error is also presented. Finally, we investigate numerical examples arising in physics to demonstrate the accuracy of the present method. A comparison of the results obtained by the present method with the exact solution and with other methods reveals that the method is very effective.
Keywords: Collocation method, fractional partial differential equations, Legendre-Laguerre functions, pseudo-operational matrix of integration.
464 Effect of the Cross-Sectional Geometry on Heat Transfer and Particle Motion of Circulating Fluidized Bed Riser for CO2 Capture
Authors: Seungyeong Choi, Namkyu Lee, Dong Il Shim, Young Mun Lee, Yong-Ki Park, Hyung Hee Cho
Abstract:
The effect of the cross-sectional geometry on heat transfer and particle motion in a circulating fluidized bed riser for CO2 capture was investigated. Numerical simulation using the Eulerian-Eulerian method with the kinetic theory of granular flow was adopted to analyze the gas-solid flow in the circulating fluidized bed riser. Circular, square, and rectangular cross-sectional geometries of the same area were considered. Rectangular cross-sectional geometries were analyzed with aspect ratios of 1:2, 1:4, 1:8, and 1:16. The cross-sectional geometry significantly influenced the particle motion and heat transfer. The downward flow pattern of solid particles near the wall was changed. The gas-solid mixing degree of the riser with the rectangular cross section of high aspect ratio was the lowest. There were also differences in the bed-to-wall heat transfer coefficient among the rectangular geometries with different aspect ratios.
Keywords: Bed geometry, computational fluid dynamics, circulating fluidized bed riser, heat transfer.
463 Comparison of Three Turbulence Models in Wear Prediction of Multi-Size Particulate Flow through Rotating Channel
Authors: Pankaj K. Gupta, Krishnan V. Pagalthivarthi
Abstract:
The present work compares the performance of three turbulence modeling approaches (based on the two-equation k-ε model) in predicting erosive wear in multi-size dense slurry flow through a rotating channel. All three turbulence models include a rotation modification to the production term in the turbulent kinetic energy equation. The two-phase flow field, obtained numerically using the Galerkin finite element methodology, relates the local flow velocity and concentration to the wear rate via a suitable wear model. The wear models for both sliding wear and impact wear mechanisms account for the particle size dependence. Predicted wear rates using the three turbulence models are compared for a large number of cases spanning operating parameters such as rotation rate, solids concentration, flow rate, and particle size distribution. The root-mean-square error between the FE-generated data and the correlation between maximum wear rate and the operating parameters is found to be less than 2.5% for all three models.
Keywords: Rotating channel, maximum wear rate, multi-size particulate flow, k-ε turbulence models.