Search results for: Statistical method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8871


8661 A Family of Improved Secant-Like Method with Super-Linear Convergence

Authors: Liang Chen

Abstract:

A family of improved secant-like methods is proposed in this paper. The convergence analysis shows that the methods have super-linear convergence. Their efficiency is demonstrated by numerical experiments when the parameter α is chosen correctly.
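
A minimal sketch of the classical secant iteration that this family builds on is given below for orientation; the paper's specific correction term and the role of the parameter α are not stated in the abstract, so only the baseline update is shown.

# Baseline secant iteration (super-linear, order ~1.618); the improved family adds a
# correction controlled by a parameter alpha that is not reproduced here.
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1 - f0) < 1e-30:          # guard against a vanishing denominator
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example: root of x^3 - 2x - 5 = 0 near x = 2
print(secant(lambda x: x**3 - 2*x - 5, 2.0, 2.1))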

Keywords: Nonlinear equations, Secant method, Convergence order, Secant-like method.

8660 Statistical Assessment of Models for Determination of Soil – Water Characteristic Curves of Sand Soils

Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha

Abstract:

Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have focused on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models for each particular soil type for better SWCC estimation. Better estimation of the SWCC is expected to be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted to evaluate the reliability of the SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters in four forms of SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC models were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type. The Brooks and Corey model predictions also agree well with the measured data across the range from low to high soil water content for the samples evaluated in this study.
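
A sketch of the curve-fitting step, assuming the standard Brooks and Corey form θ(ψ) = θr + (θs − θr)(ψb/ψ)^λ for ψ > ψb; the suction/water-content pairs, initial guesses, and bounds below are hypothetical placeholders, not data from the study.

import numpy as np
from scipy.optimize import curve_fit

def brooks_corey(psi, theta_r, theta_s, psi_b, lam):
    # Volumetric water content as a function of matric suction psi
    se = np.where(psi > psi_b, (psi_b / psi) ** lam, 1.0)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

psi = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])       # suction (kPa), hypothetical
theta = np.array([0.38, 0.37, 0.30, 0.22, 0.15, 0.09, 0.06])   # measured water content, hypothetical

popt, _ = curve_fit(brooks_corey, psi, theta, p0=[0.03, 0.38, 2.0, 0.5],
                    bounds=([0, 0.2, 0.1, 0.01], [0.2, 0.6, 20.0, 5.0]))
print(dict(zip(["theta_r", "theta_s", "psi_b", "lambda"], popt)))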

Keywords: Soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil.

8659 New Newton's Method with Third-order Convergence for Solving Nonlinear Equations

Authors: Osama Yusuf Ababneh

Abstract:

In recent years, variants of Newton's method with cubic convergence have become popular iterative methods for finding approximate solutions to the roots of non-linear equations. These methods enjoy cubic convergence at simple roots and do not require the evaluation of second-order derivatives. In this paper, we present a new Newton-type method based on the contraharmonic mean that is cubically convergent. Numerical examples show that the new method can compete with the classical Newton's method.
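
Mean-based third-order variants of this kind are usually written by replacing f'(x) in Newton's step with a mean of f'(x_n) and f'(y_n), where y_n is the ordinary Newton predictor; the sketch below uses the contraharmonic mean (a² + b²)/(a + b) as an illustration and should not be read as the paper's exact scheme.

import math

def newton_contraharmonic(f, df, x0, tol=1e-12, max_iter=50):
    # Newton-type step with f'(x) replaced by the contraharmonic mean of the
    # slopes at x_n and at the Newton predictor y_n (illustrative form only).
    x = x0
    for _ in range(max_iter):
        d1 = df(x)
        y = x - f(x) / d1                        # ordinary Newton predictor
        d2 = df(y)
        mean = (d1 * d1 + d2 * d2) / (d1 + d2)   # contraharmonic mean of the two slopes
        x_new = x - f(x) / mean
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: root of cos(x) - x = 0
print(newton_contraharmonic(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1, 1.0))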

Keywords: Third-order convergence, non-linear equations, root finding, iterative method.

8658 Improved IDR(s) Method for Gaining Very Accurate Solutions

Authors: Yusuke Onoue, Seiji Fujino, Norimasa Nakashima

Abstract:

The IDR(s) method, based on an extended IDR theorem, was proposed by Sonneveld and van Gijzen. The original IDR(s) method has excellent properties compared with conventional iterative methods in terms of efficiency and small memory requirements. The IDR(s) method, however, has the unexpected property that the relative residual 2-norm stagnates at a level of less than 10^-12. In this paper, an effective strategy for stagnation detection, stagnation avoidance that adaptively uses information about the parameter s, and an improvement of the convergence rate of the IDR(s) method itself are proposed in order to obtain a highly accurate approximate solution from the IDR(s) method. Through numerical experiments, the effectiveness of the adaptively tuned IDR(s) method is verified and demonstrated.
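
The stagnation-detection idea can be illustrated with a simple monitor on the relative-residual history; the window length and drop factor below are illustrative tuning knobs, not the criteria proposed in the paper.

import numpy as np

def is_stagnating(res_history, window=20, drop_factor=0.9):
    # Flag stagnation when the relative residual has not dropped by at least
    # (1 - drop_factor) over the last `window` iterations.
    if len(res_history) < window + 1:
        return False
    return res_history[-1] > drop_factor * res_history[-(window + 1)]

# Example: a residual curve that plateaus near 1e-13
hist = list(np.logspace(0, -13, 60)) + [1.2e-13] * 25
print(is_stagnating(hist))   # True once the plateau is long enough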

Keywords: Krylov subspace methods, IDR(s), adaptive tuning, stagnation of relative residual.

8657 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical and statistical applications are developed with greater complexity and accuracy. However, this precision and complexity means that applications need more computational power in order to execute faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, as they allow the inclusion of more parallelism inside a node. Taking advantage of this parallelism is not an easy task, however, because we have to deal with issues such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool developed by the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are made in an automatic and transparent manner with the aim of improving the performance metrics of our tool. Finally, experimental evaluations show the effectiveness of our new optimized version, which achieves a considerable improvement in execution time: the time was reduced by around 96% for the best case tested, comparing the original serial version and the automatic parallel version.

Keywords: Algorithm optimization, Bank Failures, OpenMP, Parallel Techniques, Statistical tool.

8656 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — In the Case of Critical Dataset Size —

Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno

Abstract:

STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is regarded as a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance, and by comparison with conventional methods. However, scope for further development remains before STRIM can be applied to the analysis of real-world data sets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity of rule induction from datasets with attribute values contaminated by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, using the critical dataset size derived from the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.
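
The core statistical idea, testing whether a candidate condition shifts the decision-class frequency away from its marginal rate, can be sketched as follows; STRIM's actual test statistic is not given in the abstract, so a one-proportion z-test on hypothetical counts is used purely as an illustration.

from math import sqrt
from scipy.stats import norm

def rule_z_test(n_cond, n_cond_and_dec, p_marginal):
    # z statistic and two-sided p-value for H0: P(decision | condition) == marginal rate
    p_hat = n_cond_and_dec / n_cond
    se = sqrt(p_marginal * (1 - p_marginal) / n_cond)
    z = (p_hat - p_marginal) / se
    return z, 2 * (1 - norm.cdf(abs(z)))

# Hypothetical counts: the condition covers 120 rows, 90 of them share decision class d,
# while class d accounts for 40% of the whole table.
print(rule_z_test(120, 90, 0.40))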

Keywords: Rule induction, decision table, missing data, noise.

8655 Wavelet-Based Classification of Myocardial Ischemia, Arrhythmia, Congestive Heart Failure and Sleep Apnea

Authors: Santanu Chattopadhyay, Gautam Sarkar, Arabinda Das

Abstract:

This paper presents wavelet-based classification of various heart diseases. Electrocardiogram signals of different heart patients have been studied. The statistical nature of electrocardiogram signals for different heart diseases has been compared with the statistical nature of electrocardiograms for normal persons. Four heart diseases are considered in this study: Myocardial Ischemia (MI), Congestive Heart Failure (CHF), Arrhythmia, and Sleep Apnea. The statistical nature of the electrocardiograms for each case is characterized in terms of the kurtosis values of two types of wavelet coefficients: approximation and detail. Nine wavelet decomposition levels have been considered in each case, and kurtosis values corresponding to both approximation and detail coefficients have been computed for decomposition levels one to nine. Based on significant differences, a few decomposition levels have been chosen and then used for classification.
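
A minimal sketch of the feature-extraction step, assuming PyWavelets and a Daubechies-4 mother wavelet (the abstract does not name the wavelet used): a 9-level decomposition of a synthetic ECG-like signal followed by the kurtosis of the approximation and detail coefficients.

import numpy as np
import pywt
from scipy.stats import kurtosis

t = np.arange(4096) / 360.0                                            # ~11 s at 360 Hz
signal = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)   # stand-in for an ECG record

coeffs = pywt.wavedec(signal, 'db4', level=9)        # [cA9, cD9, cD8, ..., cD1]
approx_kurt = kurtosis(coeffs[0])
detail_kurt = [kurtosis(d) for d in coeffs[1:]]
print(approx_kurt, detail_kurt)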

Keywords: Arrhythmia, congestive heart failure, discrete wavelet transform, electrocardiogram, myocardial ischemia, sleep apnea.

8654 Denoising ECG using Translation Invariant Multiwavelet

Authors: Jeong Yup Han, Su Kyung Lee, Hong Bae Park

Abstract:

In this paper, we propose a method to reduce the various kinds of noise introduced while gathering and recording the electrocardiogram (ECG) signal. Because of the shortcomings of earlier methods for eliminating noise from the ECG signal, we use a translation-invariant (TI) multiwavelet denoising method. The advantage of the proposed method is that it not only retains the geometrical characteristics of the original ECG signal and preserves the amplitudes of the various ECG waveforms efficiently, but also suppresses impulsive noise to some extent. The simulation results indicate that the proposed method is better than earlier denoising methods in terms of retaining the geometrical characteristics of the ECG signal and the signal-to-noise ratio (SNR).
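
The translation-invariant averaging step can be sketched with cycle spinning: threshold the wavelet coefficients of every circular shift of the signal and average the unshifted reconstructions. A scalar wavelet ('sym8') and a universal soft threshold are used below for brevity; the paper uses multiwavelets, so this only illustrates the TI mechanism.

import numpy as np
import pywt

def ti_denoise(x, wavelet='sym8', level=4, n_shifts=8):
    # Noise level estimated from the finest detail band, then soft-threshold each shifted copy
    sigma = np.median(np.abs(pywt.wavedec(x, wavelet, level=level)[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(x.size))            # universal threshold
    out = np.zeros_like(x, dtype=float)
    for s in range(n_shifts):
        coeffs = pywt.wavedec(np.roll(x, s), wavelet, level=level)
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
        rec = pywt.waverec(coeffs, wavelet)[:x.size]
        out += np.roll(rec, -s)                          # undo the shift before averaging
    return out / n_shifts

t = np.linspace(0, 1, 1024)
noisy = np.sin(2 * np.pi * 5 * t) + 0.2 * np.random.randn(t.size)
print(np.std(noisy - ti_denoise(noisy)))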

Keywords: ECG, TI multiwavelet, denoise.

8653 Direct Method for Converting FIR Filter with Low Nonzero Tap into IIR Filter

Authors: Jeong Hye Moon, Byung Hoon Kang, PooGyeon Park

Abstract:

In this paper, we propose a direct method for converting a Finite Impulse Response (FIR) filter with few nonzero taps into an Infinite Impulse Response (IIR) filter using a pre-determined table. The Prony method is used by the ghost canceller to obtain an IIR approximation to the FIR filter, giving better performance than a plain IIR design at the cost of a much larger amount of computation. The direct method is described for many ghost combinations with few nonzero taps for the NTSC (National Television System Committee) TV signal used in Korea. The proposed method is illustrated with an example.

Keywords: NTSC, Ghost cancellation, FIR, IIR, Prony method.

8652 Wavelet Based Identification of Second Order Linear System

Authors: Sudipta Majumdar, Harish Parthasarathy

Abstract:

In this paper, a wavelet-based method is proposed to identify the constant coefficients of a second-order linear system, and it is compared with the least squares method. The proposed method shows improved accuracy of parameter estimation compared to the least squares method. Additionally, it has the advantage of smaller data and storage requirements compared to the least squares method.

Keywords: Least squares method, linear system, system identification, wavelet transform.

8651 The Statistical Properties of Filtered Signals

Authors: Ephraim Gower, Thato Tsalaile, Monageng Kgwadi, Malcolm Hawksford

Abstract:

In this paper, the statistical properties of filtered or convolved signals are considered by deriving the resulting density functions as well as the exact mean and variance expressions, given prior knowledge about the statistics of the individual signals in the filtering or convolution process. It is shown that the density function after linear convolution is a mixture density, where the number of density components is equal to the number of observations of the shortest signal. For circular convolution, the observed samples are characterized by a single density function, which is a sum of products.

Keywords: Circular Convolution, linear Convolution, mixture density function.

8650 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank

Authors: Jalal Haghighat Monfared, Zahra Akbari

Abstract:

Today, business executives need useful information to make better decisions. Banks have also been using information tools to direct the decision-making process toward their desired goals by rapidly extracting information from sources with the help of business intelligence. This research investigates whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these components and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's General Departments (190 people) who use business intelligence reports. The sample size of 123 was determined randomly by statistical methods. In this research, statistical inference was used for data analysis and hypothesis testing. In the first stage, the normality of the data was investigated using the Kolmogorov-Smirnov test, and in the next stage, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis. Finally, the research hypotheses were tested using structural equation modeling and Pearson's correlation coefficient. The results confirmed the existence of a positive relationship between decision quality and business intelligence capabilities in Mellat Bank. Among the various capabilities, including data quality, integration with other systems, user access, flexibility, and risk management support, the flexibility of the business intelligence system was the most strongly correlated with the dependent variable of the present research. This shows that Mellat Bank needs to pay more attention to choosing business intelligence systems with high flexibility in terms of the ability to produce custom-formatted reports. The quality of data in the business intelligence systems showed the next strongest relationship with the quality of decision making. Therefore, improving the quality of data, including the source of the data (internal or external), the type of data (quantitative or qualitative), the credibility of the data, and the perceptions of those who use the business intelligence system, improves the quality of decision making in Mellat Bank.
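
Two of the statistical steps named above, the Kolmogorov-Smirnov normality check and the Pearson correlation, can be sketched as follows; the score vectors are random placeholders, not survey data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bi_capability = rng.normal(3.5, 0.6, 123)                        # questionnaire scores, 123 respondents
decision_quality = 0.7 * bi_capability + rng.normal(0, 0.4, 123)

# Normality check against a normal with the sample's own mean and standard deviation
print(stats.kstest(bi_capability, 'norm', args=(bi_capability.mean(), bi_capability.std())))
# Strength of the relationship between the two constructs
print(stats.pearsonr(bi_capability, decision_quality))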

Keywords: Business intelligence, business intelligence capability, decision making, decision quality.

8649 Note to the Global GMRES for Solving the Matrix Equation AXB = F

Authors: Fatemeh Panjeh Ali Beik

Abstract:

In the present work, we propose a new projection method for solving the matrix equation AXB = F. For implementing the new method, generalized forms of the block Krylov subspace and the global Arnoldi process are presented. The new method can be considered an extended form of the well-known global generalized minimum residual (Gl-GMRES) method for solving multiple linear systems, and it will be called the extended Gl-GMRES (EGl-GMRES) method. Some new theoretical results are established for the proposed method by employing the Schur complement. Finally, some numerical results are given to illustrate the efficiency of the new method.
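
To see the linear-system form being targeted, note that AXB = F is equivalent to (Bᵀ ⊗ A) vec(X) = vec(F). The sketch below solves a small dense Kronecker system with SciPy's standard GMRES; the proposed EGl-GMRES operates on the matrix equation directly and is not reproduced here.

import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(1)
n, m = 6, 4
A = rng.standard_normal((n, n)) + n * np.eye(n)      # comfortably nonsingular test matrices
B = rng.standard_normal((m, m)) + m * np.eye(m)
X_true = rng.standard_normal((n, m))
F = A @ X_true @ B

K = np.kron(B.T, A)                                  # vec(AXB) = (B^T kron A) vec(X), column-major vec
x_vec, info = gmres(K, F.flatten(order='F'))
X = x_vec.reshape((n, m), order='F')
print(info, np.linalg.norm(X - X_true))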

Keywords: Matrix equation, Iterative method, linear systems, block Krylov subspace method, global generalized minimum residual (Gl-GMRES).

8648 Irrigation Water Quality Evaluation Based on Multivariate Statistical Analysis: A Case Study of Jiaokou Irrigation District

Authors: Panpan Xu, Qiying Zhang, Hui Qian

Abstract:

Groundwater is the main source of water supply in the Guanzhong Basin, China. To investigate the quality of groundwater for agricultural purposes in the Jiaokou Irrigation District, located in the east of the Guanzhong Basin, 141 groundwater samples were collected and analyzed for major ions (K⁺, Na⁺, Mg²⁺, Ca²⁺, SO₄²⁻, Cl⁻, HCO₃⁻, and CO₃²⁻), pH, and total dissolved solids (TDS). Sodium percentage (Na%), residual sodium carbonate (RSC), magnesium hazard (MH), and potential salinity (PS) were applied for irrigation water quality assessment. In addition, multivariate statistical techniques were used to identify the underlying hydrogeochemical processes. Results show that the TDS content mainly depends on Cl⁻, Na⁺, Mg²⁺, and SO₄²⁻, and the HCO₃⁻ content is generally high except in the eastern sand area. These reflect complex hydrogeochemical processes, such as the dissolution of carbonate minerals (dolomite and calcite), gypsum, halite, and silicate minerals, cation exchange, and evaporation and concentration. The average evaluation levels of Na%, RSC, MH, and PS for irrigation water quality are doubtful, good, unsuitable, and injurious to unsatisfactory, respectively. Therefore, it is necessary for decision makers to consider the indicators comprehensively and thus reasonably evaluate the irrigation water quality.
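
The four indices can be computed from major-ion concentrations in meq/L using their commonly cited definitions, Na% = 100·(Na⁺+K⁺)/(Ca²⁺+Mg²⁺+Na⁺+K⁺), RSC = (HCO₃⁻+CO₃²⁻) − (Ca²⁺+Mg²⁺), MH = 100·Mg²⁺/(Ca²⁺+Mg²⁺), and PS = Cl⁻ + SO₄²⁻/2; the sample values below are placeholders, not data from the study.

def irrigation_indices(na, k, ca, mg, cl, so4, hco3, co3):
    # All ion concentrations in meq/L
    na_pct = 100.0 * (na + k) / (ca + mg + na + k)   # sodium percentage
    rsc = (hco3 + co3) - (ca + mg)                   # residual sodium carbonate
    mh = 100.0 * mg / (ca + mg)                      # magnesium hazard
    ps = cl + so4 / 2.0                              # potential salinity
    return {"Na%": na_pct, "RSC": rsc, "MH": mh, "PS": ps}

print(irrigation_indices(na=8.0, k=0.2, ca=4.5, mg=5.0, cl=6.0, so4=7.0, hco3=5.5, co3=0.1))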

Keywords: Irrigation water quality, multivariate statistical analysis, groundwater, hydrogeochemical process.

8647 Arabic Character Recognition using Artificial Neural Networks and Statistical Analysis

Authors: Ahmad M. Sarhan, Omar I. Al Helalat

Abstract:

In this paper, an Arabic letter recognition system based on Artificial Neural Networks (ANNs) and statistical analysis for feature extraction is presented. The ANN is trained using the Least Mean Squares (LMS) algorithm. In the proposed system, each typed Arabic letter is represented by a matrix of binary numbers that is used as input to a simple feature extraction system, whose output, together with the input matrix, is fed to an ANN. Simulation results are provided and show that the proposed system consistently produces a lower Mean Squared Error (MSE) and higher success rates than current ANN solutions.
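
The LMS (Widrow-Hoff) update used for training can be sketched on a toy problem: for each input vector x with target t, the weights move along the instantaneous error gradient, w ← w + μ(t − wᵀx)x. The letter bitmaps and targets below are toy placeholders, not the actual Arabic letter matrices.

import numpy as np

def lms_train(X, T, mu=0.05, epochs=500):
    # X: (n_samples, n_features) binary inputs; T: (n_samples, n_outputs) targets
    rng = np.random.default_rng(0)
    W = 0.01 * rng.standard_normal((X.shape[1], T.shape[1]))
    for _ in range(epochs):
        for x, t in zip(X, T):
            e = t - x @ W                  # output error for this sample
            W += mu * np.outer(x, e)       # Widrow-Hoff rule
    return W

X = np.array([[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]], dtype=float)   # toy "letter" bitmaps
T = np.eye(3)                                                            # one-hot class targets
W = lms_train(X, T)
print(np.argmax(X @ W, axis=1))    # recovers classes 0, 1, 2 after training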

Keywords: ANN, Backpropagation, Gaussian, LMS, MSE, Neuron, standard deviation, Widrow-Hoff rule.

8646 Detection of Ultrasonic Images in the Presence of a Random Number of Scatterers: A Statistical Learning Approach

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

The Support Vector Machine (SVM) is a statistical learning tool initially developed by Vapnik in 1979 and later extended within the more general framework of structural risk minimization (SRM). SVMs play an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, the SVM was applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was carried out for single-look and multi-look speckle models to give a complete overview of, and insight into, the newly proposed SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance in terms of the mean square error (MSE) metric was calculated. We show that the SVM-detected ultrasound images have a very low MSE and are of good quality. The quality of the processed speckled images improved for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original non-noisy images, indicating that the SVM approach increased the distance between the pixel reflectivity levels (detection hypotheses) in the original images.

Keywords: LS-SVM, medical ultrasound imaging, partially developed speckle, multi-look model.

8645 Direct Transient Stability Assessment of Stressed Power Systems

Authors: E. Popov, N. Yorino, Y. Zoka, Y. Sasaki, H. Sugihara

Abstract:

This paper discusses the performance of the critical trajectory method (CTrj) for power system transient stability analysis under various loading settings and heavy fault conditions. The method obtains the Controlling Unstable Equilibrium Point (CUEP), which is essential for the estimation of power system stability margins. The CUEP is computed by applying the CTrj to the boundary controlling unstable equilibrium point (BCU) method. The proposed method computes a trajectory on the stability boundary that starts from the exit point and reaches the CUEP under certain assumptions. The robustness and effectiveness of the method are demonstrated on six power system models under five loading conditions. The conventional simulation method is used as a benchmark, and the performance is compared with the BCU Shadowing method.

Keywords: Power system, Transient stability, Critical trajectory method, Energy function method.

8644 Strategic Investment in Infrastructure Development to Facilitate Economic Growth in the United States

Authors: Arkaprabha Bhattacharyya, Makarand Hastak

Abstract:

The COVID-19 pandemic is unprecedented in terms of its global reach and economic impacts. Historically, investment in infrastructure development projects has been touted as boosting the economic growth of a nation. The State and Local governments responsible for delivering infrastructure assets work under tight budgets. Therefore, it is important to understand which infrastructure projects have the highest potential for boosting economic growth in the post-pandemic era. This paper presents relationships between infrastructure projects and economic growth. Statistical relationships between investment in different types of infrastructure projects (transit, water and wastewater, highways, power, manufacturing, etc.) and indicators of economic growth are presented using historical data between 2002 and 2020 from the U.S. Census Bureau and the U.S. Bureau of Economic Analysis (BEA). The outcome of the paper is a comparison of the statistical correlations between investment in different types of infrastructure projects and indicators of economic growth. This comparison is useful for ranking the types of infrastructure projects based on their ability to influence economic prosperity; investment in the higher-ranked infrastructure types will therefore have a better chance of boosting economic growth. Once the ranks are derived, they can be used by decision-makers in the infrastructure investment decision-making process.

Keywords: Economic growth, infrastructure development, infrastructure projects, strategic investment.

8643 A Descent-projection Method for Solving Monotone Structured Variational Inequalities

Authors: Min Sun, Zhenyu Liu

Abstract:

In this paper, a new descent-projection method with a new search direction for monotone structured variational inequalities is proposed. The method is simple, requiring only projections and some function evaluations, so its computational load is very small. Under mild conditions on the problem's data, the method is proved to converge globally. Some preliminary computational results are also reported to illustrate the efficiency of the method.
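
The projection machinery such methods build on can be sketched with the plain iteration x ← P_C(x − t·F(x)) for a monotone mapping F over a convex set C, here the nonnegative orthant; the paper's new search direction and step rule are not reproduced.

import numpy as np

def projection_method(F, x0, step=0.1, tol=1e-8, max_iter=5000):
    x = x0.astype(float)
    for _ in range(max_iter):
        x_new = np.maximum(x - step * F(x), 0.0)   # projection onto the nonnegative orthant
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: affine monotone mapping F(x) = Mx + q with M positive definite
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -3.0])
print(projection_method(lambda x: M @ x + q, np.zeros(2)))   # converges to (0, 1.5)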

Keywords: variational inequalities, monotone function, global convergence.

8642 Statistical Properties and Performance of Ecological Indices Based On Relative Abundances

Authors: Gebriel M. Shamia

Abstract:

The Improved Generalized Diversity Index (IGDI) has been proposed as a tool that can be used to identify areas of high conservation value and to measure the ecological condition of an area. The IGDI is based on the species' relative abundances. This paper is concerned with the statistical properties and performance of such indices, with particular attention given to comparisons involving the MacArthur model of species abundances. The properties and performance of various species indices were assessed. Both the IGDI and species richness increased with sampling area according to a power function. The IGDI was also found to be an acceptable ecological indicator of condition and consistently outperformed coefficient-of-conservatism indices.

Keywords: Statistical ecology, MacArthur model, Functional Diversity.

8641 Statistical Analysis and Impact Forecasting of Connected and Autonomous Vehicles on the Environment: Case Study in the State of Maryland

Authors: Alireza Ansariyar, Safieh Laaly

Abstract:

Over the last decades, the vehicle industry has shown increased interest in integrating autonomous, connected, and electrical technologies into vehicle design, with the primary hope of improving mobility and road safety while reducing transportation's environmental impact. Using the State of Maryland (MD) in the United States as a pilot study, this research investigates the fuel consumption and air pollutants of Connected and Autonomous Vehicles (CAVs), including Carbon Monoxide (CO), Particulate Matter (PM), and Nitrogen Oxides (NOx), and utilizes linear regression models to predict CAVs' environmental effects. The Maryland transportation network was simulated in VISUM software, and data on a set of variables were collected through a comprehensive survey. The amounts of the pollutants and fuel consumption were obtained for the time interval 2010 to 2021 from the macro simulation. Eventually, four linear regression models were proposed to predict the amounts of CO, NOx, and PM pollutants and fuel consumption in the future. The results highlighted that CAVs' pollutants and fuel consumption have a significant correlation with the income, age, and race of CAV customers. Furthermore, the reliability of the four statistical models was compared with the reliability of the macro simulation model outputs for the year 2030. The error for the three pollutants and fuel consumption obtained by the statistical models in SPSS was less than 9%. This study is expected to assist researchers and policymakers with planning decisions to reduce CAV environmental impacts in MD.

Keywords: Connected and autonomous vehicles, statistical model, environmental effects, pollutants and fuel consumption, VISUM, linear regression models.

8640 Prediction Modeling of Compression Properties of a Knitted Sportswear Fabric Using Response Surface Method

Authors: Jawairia Umar, Tanveer Hussain, Zulfiqar Ali, Muhammad Maqsood

Abstract:

Different knitted structures and knitting parameters play a vital role in the stretch and recovery management of compression sportswear, in addition to the materials used to generate the stretch and recovery behavior of the fabric. The present work was planned to predict different performance indicators of a compression sportswear fabric from two ground parameters: the base yarn stitch length (polyester as the base yarn and spandex as the plating yarn are used to make a compression fabric) and the linear density of the spandex, which is a key material in any sportswear fabric. The prediction models were generated by the response surface method for performance indicators such as stretch and recovery percentage, compression exerted by the garment on the body, total elongation on application of a high force, and load generated at a certain percentage extension of the fabric. Certain physical properties of the fabric were also modeled using these two parameters.
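
A two-factor response-surface fit of the kind described can be sketched as a full quadratic model estimated by least squares; the stitch-length and spandex linear-density levels and the response values below are hypothetical placeholders.

import numpy as np

x1 = np.array([2.6, 2.6, 3.0, 3.0, 2.8, 2.8, 2.8, 2.6, 3.0])     # stitch length (mm), hypothetical
x2 = np.array([20., 40., 20., 40., 30., 20., 40., 30., 30.])     # spandex linear density (denier), hypothetical
y  = np.array([55., 62., 48., 57., 54., 50., 60., 58., 52.])     # e.g., stretch %, hypothetical

# Full quadratic response surface: y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)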

Keywords: Compression, sportswear, stretch and recovery, statistical model, kikuhime.

8639 On the EM Algorithm and Bootstrap Approach Combination for Improving Satellite Image Fusion

Authors: Tijani Delleji, Mourad Zribi, Ahmed Ben Hamida

Abstract:

This paper discusses a combination of the EM algorithm and the Bootstrap approach applied to improve the satellite image fusion process. This novel satellite image fusion method, based on the estimation-theoretic EM algorithm and reinforced by the Bootstrap approach, was successfully implemented and tested. The sensor images are first split by a Bayesian segmentation method to determine a joint region map for the fused image. Then, we use the EM algorithm in conjunction with the Bootstrap approach to develop the bootstrap EM fusion algorithm, hence producing the fused target image. In this research, we propose to estimate the statistical parameters from some iterative equations of the EM algorithm relying on a reference set of representative Bootstrap samples of images. The sizes of those samples are determined from a new criterion called the 'hybrid criterion'. The obtained results show that using the Bootstrap EM (BEM) in image fusion improves the performance of the estimated parameters, which leads to an improvement in fused image quality, and reduces the computing time during the fusion process.

Keywords: Satellite image fusion, Bayesian segmentation, Bootstrap approach, EM algorithm.

8638 Statistical Distributions of the Lapped Transform Coefficients for Images

Authors: Vijay Kumar Nath, Deepika Hazarika, Anil Mahanta

Abstract:

Discrete Cosine Transform (DCT) based transform coding is very popular in image, video and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, the reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. The lapped transform was proposed as an alternative to the DCT with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing and availability of fast implementation algorithms. However, there is no proper study reported in the literature regarding the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the χ² test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of a majority of the significant AC coefficients can be modeled by the Generalized Gaussian distribution. The knowledge of the statistical distribution of transform coefficients greatly helps in the design of optimal quantizers that may lead to minimum distortion and hence achieve optimal coding efficiency.
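
The goodness-of-fit step can be sketched by fitting a generalized Gaussian (SciPy's gennorm) to a set of coefficients and running the KS test; the coefficients below are synthetic stand-ins rather than actual LOT/LBT subband data.

from scipy import stats

# Stand-in AC coefficients drawn from a heavy-tailed generalized Gaussian
coeffs = stats.gennorm.rvs(beta=0.8, scale=5.0, size=5000, random_state=0)

beta, loc, scale = stats.gennorm.fit(coeffs, floc=0)     # fix the location at zero for AC coefficients
ks_stat, p_value = stats.kstest(coeffs, 'gennorm', args=(beta, loc, scale))
print(beta, scale, ks_stat, p_value)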

Keywords: Lapped orthogonal transform, Lapped biorthogonal transform, Image compression, KS test.

8637 A Heuristic Statistical Model for Lifetime Distribution Analysis of Complicated Systems in the Reliability Centered Maintenance

Authors: Mojtaba Mahdavi, Mohamad Mahdavi, Maryam Yazdani

Abstract:

A heuristic conceptual model for developing Reliability Centered Maintenance (RCM), especially for a preventive strategy, is explored in this paper. In most real cases, where system complexity demands a high degree of reliability, the model proposes choosing between two reliability functions: one based on the lifetime distribution and another based on the relevant Extreme Value (EV) distribution. A statistical and mathematical approach is used to estimate and verify these two distribution functions, and the more reliable of the two is then chosen. A numerical industrial case study is reviewed to illustrate the concepts of this paper more clearly.
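
The model-selection idea, fitting a lifetime-based distribution and an extreme-value distribution to the same failure data and keeping the better one, can be sketched as follows; the failure times are synthetic placeholders and the Weibull/Gumbel pairing is an assumption for illustration.

import numpy as np
from scipy import stats

times = stats.weibull_min.rvs(c=1.8, scale=1200.0, size=60, random_state=0)   # hypothetical failure times (h)

candidates = {
    "weibull": (stats.weibull_min, stats.weibull_min.fit(times, floc=0)),
    "gumbel":  (stats.gumbel_r,    stats.gumbel_r.fit(times)),
}
# Keep whichever family gives the larger log-likelihood on the observed lifetimes
loglik = {name: np.sum(dist.logpdf(times, *params)) for name, (dist, params) in candidates.items()}
print(loglik, "->", max(loglik, key=loglik.get))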

Keywords: Lifetime distribution, Reliability, Estimation, Extreme value, Improving model, Series, Parallel.

8636 Error Propagation in the RK5GL3 Method

Authors: J.S.C. Prentice

Abstract:

The RK5GL3 method is a numerical method for solving initial value problems in ordinary differential equations, and is based on a combination of a fifth-order Runge-Kutta method and 3-point Gauss-Legendre quadrature. In this paper we describe the propagation of local errors in this method, and show that the global order of RK5GL3 is expected to be six, one better than the underlying Runge-Kutta method.

Keywords: RK5GL3, RKrGLm, Runge-Kutta, Gauss-Legendre, initial value problem, order, local error, global error.

8635 Analysis of Web User Identification Methods

Authors: Renáta Iváncsy, Sándor Juhász

Abstract:

Web usage mining has become a popular research area, as a huge amount of data is available online. These data can be used for several purposes, such as web personalization, web structure enhancement, web navigation prediction, etc. However, the raw log files are not directly usable; they have to be preprocessed in order to transform them into a suitable format for different data mining tasks. One of the key issues in the preprocessing phase is to identify web users. Identifying users based on web log files is not a straightforward problem, thus various methods have been developed. There are several difficulties that have to be overcome, such as client-side caching, changing and shared IP addresses, and so on. This paper presents three different methods for identifying web users. Two of them are the most commonly used methods in web log mining systems, whereas the third is our novel approach that uses a complex cookie-based method to identify web users. Furthermore, we also take steps towards identifying the individuals behind the impersonal web users. To demonstrate the efficiency of the new method we developed an implementation called the Web Activity Tracking (WAT) system that aims at a more precise distinction of web users based on log data. We present some statistical analysis produced by the WAT on real data about the behavior of Hungarian web users, and a comprehensive analysis and comparison of the three methods.

Keywords: Data preparation, Tracking individuals, Web user identification, Web usage mining.

8634 Approximate Method of Calculation of Inviscid Hypersonic Flow

Authors: F. Sokhanvar, A. B. Khoshnevis

Abstract:

In the present work, steady inviscid hypersonic flows are calculated by an approximate method, namely Maslen's inverse method. For the inverse problem, a parabolic shock shape is chosen for the two-dimensional flow, and the body shape and flow field are calculated using Maslen's method. For the axisymmetric inverse problem, a paraboloidal shock is chosen and the surface pressure distribution is obtained.

Keywords: Hypersonic flow, Inverse problem method

8633 Evaluation of the Burden of Taxation Received by Households in Lithuania

Authors: V. Boguslauskas, G. Jakstonyte, L. Giriunas

Abstract:

Many foreign and Lithuanian scientists analyzing the evaluation of the tax system with respect to the burden of taxation agree that the latter, in principle, depends on how many individuals, and which of them, constitute a household. Therefore, the aim of this research is to substantiate or deny the significance of the household, rather than the individual resident, as the statistical unit in the evaluation of the tax system, specifically in determining the value of the burden of taxation. The research revealed that evaluating the tax system with the household, rather than the resident, as the statistical unit not only allows the efficiency of the tax system to be evaluated more objectively, but also makes it possible to forecast the actual poverty line and burden of taxation, and supports efficient decision making in the social and tax fields.

Keywords: burden of taxation, household, tax system evaluation.

8632 Convergence Analysis of the Generalized Alternating Two-Stage Method

Authors: Guangbin Wang, Liangliang Li, Fuping Tan

Abstract:

In this paper, we give the generalized alternating two-stage method, in which the inner iterations are accomplished by a generalized alternating method. We present convergence results of the method for solving nonsingular linear systems when the coefficient matrix of the linear system is a monotone matrix or an H-matrix.

Keywords: Generalized alternating two-stage method, linear system, convergence.
