Search results for: Statistical method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8887

8407 Local Error Control in the RK5GL3 Method

Authors: J.S.C. Prentice

Abstract:

The RK5GL3 method is a numerical method for solving initial value problems in ordinary differential equations, and is based on a combination of a fifth-order Runge-Kutta method and 3-point Gauss-Legendre quadrature. In this paper we describe an effective local error control algorithm for RK5GL3, which uses local extrapolation with an eighth-order Runge-Kutta method in tandem with RK5GL3, and a Hermite interpolating polynomial for solution estimation at the Gauss-Legendre quadrature nodes.
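
For illustration, the core of such a local error control loop can be sketched as follows. This is a minimal sketch and not the RK5GL3/eighth-order Runge-Kutta pair of the paper: it uses an explicit Euler step paired with a Heun (order-2) step purely to show how comparing two orders yields a local error estimate, a step-size update, and local extrapolation; the tolerance and safety factor are hypothetical.

```python
import numpy as np

def euler_step(f, t, y, h):              # lower-order step (order 1)
    return y + h * f(t, y)

def heun_step(f, t, y, h):               # higher-order step (order 2)
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def integrate(f, t0, y0, t_end, h=0.1, tol=1e-6):
    """March from t0 to t_end, controlling the local error at every step."""
    t, y = t0, np.atleast_1d(np.asarray(y0, dtype=float))
    while t < t_end:
        h = min(h, t_end - t)
        y_low, y_high = euler_step(f, t, y, h), heun_step(f, t, y, h)
        err = float(np.max(np.abs(y_high - y_low)))   # local error estimate
        if err <= tol:                                # accept the step and advance
            t, y = t + h, y_high                      # with the extrapolated value
        # new step size from the error model of the lower-order method
        h *= 0.9 * min(2.0, max(0.2, (tol / max(err, 1e-16)) ** 0.5))
    return t, y

# usage: y' = -y, y(0) = 1; the result should be close to exp(-1)
print(integrate(lambda t, y: -y, 0.0, 1.0, 1.0))
```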

Keywords: RK5GL3, RKrGLm, Runge-Kutta, Gauss-Legendre, Hermite interpolating polynomial, initial value problem, local error.

8406 Restarted GMRES Method Augmented with the Combination of Harmonic Ritz Vectors and Error Approximations

Authors: Qiang Niu, Linzhang Lu

Abstract:

Restarted GMRES methods augmented with approximate eigenvectors are widely used for solving large sparse linear systems. Recently, a new scheme that augments with error approximations has been proposed. The main aim of this paper is to develop a restarted GMRES method augmented with a combination of harmonic Ritz vectors and error approximations. We demonstrate that the resulting combined method gains the advantages of both approaches: it (i) effectively deflates the eigenvalues of small magnitude that may hamper the convergence of the method and (ii) partially recovers the global optimality lost due to restarting. The effectiveness and efficiency of the new method are demonstrated through various numerical examples.
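
The augmentation itself is not available in standard libraries, but plain restarted GMRES, on which the augmented variants build, is; a minimal sketch with SciPy follows, where the test matrix and restart length are arbitrary choices for illustration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

n = 1000
# simple nonsymmetric sparse test matrix and right-hand side
A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# restarted GMRES: the Krylov basis is rebuilt every `restart` iterations,
# which is where the loss of global optimality mentioned in the abstract arises
x, info = gmres(A, b, restart=30, maxiter=500)
print("converged" if info == 0 else f"info = {info}",
      "| residual =", np.linalg.norm(b - A @ x))
```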

Keywords: Arnoldi process, GMRES, Krylov subspace, systems of linear equations.

8405 Extended Cubic B-spline Interpolation Method Applied to Linear Two-Point Boundary Value Problems

Authors: Nur Nadiah Abd Hamid, Ahmad Abd. Majid, Ahmad Izani Md. Ismail

Abstract:

A linear two-point boundary value problem of order two is solved using the extended cubic B-spline interpolation method. The method has one free parameter, λ, which controls the tension of the solution curve. For some values of λ, this method produces better results than the cubic B-spline interpolation method.

Keywords: two-point boundary value problem, B-spline, extended cubic B-spline.

8404 Adomian’s Decomposition Method to Functionally Graded Thermoelastic Materials with Power Law

Authors: Hamdy M. Youssef, Eman A. Al-Lehaibi

Abstract:

This paper presents an iterative method for the numerical solution of a one-dimensional problem of generalized thermoelasticity with one relaxation time under given initial and boundary conditions. A thermoelastic material whose properties vary according to a power law, i.e. a functionally graded material, is considered. Adomian’s decomposition technique is applied to the governing equations, and the numerical results are calculated by iterating with a certain algorithm. The numerical results are presented in figures, which affirm that Adomian’s decomposition method is a successful method for modeling thermoelastic problems. Moreover, the empirical parameter of the functionally graded material and the lattice design parameter have significant effects on the temperature increment, strain, stress, and displacement.
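
As a hedged illustration of the decomposition idea only (not the paper’s thermoelastic system), Adomian’s method applied to the simple linear test problem u'(t) = u(t), u(0) = 1 reduces to repeated integration of the previous component, and the partial sums reproduce the Taylor series of e^t:

```python
import sympy as sp

t = sp.symbols('t')

def adomian_ode(u0, n_terms=6):
    """Adomian components for u' = u, u(0) = u0: u_{k+1} = integral of u_k from 0 to t."""
    comps = [sp.Integer(u0)]
    for _ in range(n_terms - 1):
        comps.append(sp.integrate(comps[-1], (t, 0, t)))
    return comps

components = adomian_ode(1, n_terms=6)
approx = sum(components)                           # partial sum of the decomposition series
print(sp.expand(approx))                           # 1 + t + t**2/2 + t**3/6 + ...
print(float(approx.subs(t, 1)), float(sp.exp(1)))  # compare with exp(1)
```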

Keywords: Adomian, Decomposition Method, Generalized Thermoelasticity, algorithm, empirical parameter, lattice design.

8403 Aircraft Gas Turbine Engines Technical Condition Identification System

Authors: A. M. Pashayev, C. Ardil, D. D. Askerov, R. A. Sadiqov, P. S. Abdullayev

Abstract:

This paper shows that the application of probability-statistical methods is unfounded at the early stages of diagnosing the technical condition of aviation gas turbine engines (GTE), when the flight information is fuzzy, limited, and uncertain. The efficiency of applying the new Soft Computing technology, specifically Fuzzy Logic and Neural Network methods, at these diagnosing stages is therefore considered. Fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained from statistical fuzzy data, are trained with high accuracy. To build a more adequate model of the GTE technical condition, the dynamics of the skewness and kurtosis coefficients are analysed. The changes in the skewness and kurtosis coefficients show that the distributions of the GTE operating parameters have a fuzzy character, so the use of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of the basic characteristics of the GTE operating parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for the preliminary identification of the engines' technical condition. The changes in the correlation coefficients also show a fuzzy character, so the results of Fuzzy Correlation Analysis are proposed for model selection. The Fuzzy Multiple Correlation Coefficient of Fuzzy Multiple Regression is considered for checking model adequacy. When sufficient information is available, a recurrent algorithm for identifying the aviation GTE technical condition (using Hard Computing technology) is proposed, based on measurements of the input and output parameters of the multiple linear and non-linear generalised models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical conditions. As an application of the technique, the temperature condition of a new operating aviation engine was estimated.

Keywords: Gas turbine engines, neural networks, fuzzy logic, fuzzy statistics.

8402 Assessing Basic Computer Applications’ Skills of College-Level Students in Saudi Arabia

Authors: Mohammed A. Gharawi, Majed M. Khoja

Abstract:

This paper is a report on the findings of a study conducted at the Institute of Public Administration (IPA) in Saudi Arabia. The study applied both qualitative and quantitative approaches to assess the levels of basic computer application skills among students enrolled in the preparatory programs of the institution. Qualitative data were collected from semi-structured interviews with instructors who had previously been assigned to teach introduction to information technology courses. Quantitative data were collected by administering a self-report questionnaire and a written statistical test. Three hundred eighty enrolled students responded to the questionnaire and one hundred forty-two completed the statistical test. The results indicate a lack of the skills necessary to deal with computer applications among most of the students enrolled in the IPA’s preparatory programs.

Keywords: Assessment, Computer Applications, Computer Literacy, Institute of Public Administration, Saudi Arabia.

8401 Heuristic Set-Covering-Based Postprocessing for Improving the Quine-McCluskey Method

Authors: Miloš Šeda

Abstract:

Finding minimal forms of logical functions has important applications in the design of logical circuits. This task can be solved by many different methods, but frequently they are not suitable for computer implementation. We briefly summarise the well-known Quine-McCluskey method, which follows a fixed computational procedure and can therefore be easily implemented, but which, even for simple examples, does not guarantee an optimal solution. Since the Petrick extension of the Quine-McCluskey method does not give a generally usable way of finding an optimum for logical functions with a large number of variables, we focus on the interpretation of the result of the Quine-McCluskey method and show that it represents a set covering problem, which, unfortunately, is an NP-hard combinatorial problem. Therefore it must be solved by heuristic or approximation methods. We propose an approach based on genetic algorithms and show suitable parameter settings.
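
For illustration, a simple greedy heuristic for the resulting set covering problem is sketched below; it is a common baseline rather than the genetic algorithm proposed in the paper, and the prime implicant chart is hypothetical.

```python
def greedy_set_cover(universe, subsets):
    """Pick subsets until every element of `universe` is covered.

    `subsets` maps a subset name (e.g. a prime implicant) to the set of
    minterms it covers. Greedy choice: the subset covering the most
    still-uncovered minterms. Fast, but not guaranteed optimal.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(subsets[s] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("remaining minterms cannot be covered")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# hypothetical prime implicant chart: implicant -> minterms it covers
chart = {"A": {0, 1, 4}, "B": {1, 3}, "C": {2, 3, 6}, "D": {4, 5, 6}, "E": {0, 5}}
print(greedy_set_cover({0, 1, 2, 3, 4, 5, 6}, chart))   # e.g. ['A', 'C', 'D']
```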

Keywords: Boolean algebra, Karnaugh map, Quine-McCluskey method, set covering problem, genetic algorithm.

8400 Enhanced Parallel-Connected Comb Filter Method for Multiple Pitch Estimation

Authors: Taro Matsuno, Yuta Otani, Ryo Tanaka, Kaori Ikezaki, Hitoshi Yamamoto, Masaru Fujieda, Yoshihisa Ishida

Abstract:

This paper presents an improved method for multiple pitch estimation using comb filters. Conventionally, pitch has been estimated with the parallel-connected comb filter method (PCF). However, PCF often fails when the fundamental frequency of a higher tone lies near the harmonics of a lower tone, so the estimate is assigned to a wrong note when frequencies are shared. This issue often occurs from octave 3 upwards. The proposed method solves the problem by estimating the pitch from every harmonic instead of every octave. As a result, our method reaches an accuracy of more than 80%.

Keywords: music transcription, pitch estimation, comb filter, fractional delay

8399 Analysis of Temperature Change under Global Warming Impact using Empirical Mode Decomposition

Authors: Md. Khademul Islam Molla, Akimasa Sumi, M. Sayedur Rahman

Abstract:

The empirical mode decomposition (EMD) represents any time series as a finite set of basis functions. The bases, termed intrinsic mode functions (IMFs), are mutually orthogonal and contain a minimum amount of cross-information. The EMD successively extracts the IMFs with the highest local frequencies in a recursive way, which effectively yields a set of low-pass filters based entirely on the properties exhibited by the data. In this paper, EMD is applied to explore the properties of multi-year air temperature records and to observe their effects on climate change under global warming. The method decomposes the original time series into its intrinsic time scales and is capable of analyzing nonlinear, non-stationary climatic time series that cause problems for many linear statistical methods and their users. The analysis shows that the EMD modes present seasonal variability, that most of the IMFs have a normal distribution, and that the energy density distribution of the IMFs satisfies a Chi-square distribution. The IMFs are effective in isolating physical processes of various time scales and are also statistically significant. The analysis also shows that the EMD method does a good job of revealing many characteristics of interannual climate. The results suggest that climate fluctuations of every single element, such as temperature, are the result of variations in the global atmospheric circulation.
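
A minimal sketch of such a decomposition, assuming the third-party PyEMD package (not the authors’ own implementation), applied to a synthetic temperature-like record:

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

# synthetic record: slow trend + seasonal cycle + noise
t = np.linspace(0, 10, 1000)
signal = 0.3 * t + np.sin(2 * np.pi * t) + 0.2 * np.random.randn(t.size)

imfs = EMD().emd(signal, t)          # intrinsic mode functions, highest frequency first
print("number of IMFs:", imfs.shape[0])
for i, imf in enumerate(imfs):
    print(f"IMF {i}: variance = {imf.var():.4f}")
```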

Keywords: Empirical mode decomposition, instantaneous frequency, Hilbert spectrum, Chi-square distribution, anthropogenic impact.

8398 Identifying an Unknown Source in the Poisson Equation by a Modified Tikhonov Regularization Method

Authors: Ou Xie, Zhenyu Zhao

Abstract:

In this paper, we consider the problem of identifying the unknown source in the Poisson equation. A modified Tikhonov regularization method is presented to deal with the ill-posedness of the problem, and error estimates are obtained with an a priori strategy and an a posteriori choice rule for finding the regularization parameter. Numerical examples show that the proposed method is effective and stable.
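
As a generic illustration of Tikhonov regularization (not the paper’s modified variant or its a posteriori parameter choice), the regularized least-squares problem min ||Ax − b||² + α²||x||² can be solved by stacking the system; the forward operator and data below are hypothetical.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min ||A x - b||^2 + alpha^2 ||x||^2 via an augmented least-squares system."""
    n = A.shape[1]
    A_aug = np.vstack([A, alpha * np.eye(n)])      # stack A with alpha * I
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# hypothetical ill-conditioned forward operator and noisy data
rng = np.random.default_rng(0)
n = 50
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert-like matrix
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-4 * rng.standard_normal(n)

x_reg = tikhonov_solve(A, b, alpha=1e-3)
print("relative error:", np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```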

Keywords: Ill-posed problem, Unknown source, Poisson equation, Tikhonov regularization method, Discrepancy principle

8397 Evaluation of Total Cross Section of Photo-Ionization of Helium in Weak Field on Base of Trajectory Method

Authors: Alexander B. Bichkov, Valery V. Smirnov

Abstract:

The total cross section of helium atom photo-ionization by a weak short pulse is calculated using a variant of the trajectory method developed in our earlier work. The method enables simple estimation of the total ionization probability (or cross section) without integration of the differential one.

Keywords: Evaluation of Photo-Ionization, Helium, Trajectory Method

8396 Fuzzy based Security Threshold Determining for the Statistical En-Route Filtering in Sensor Networks

Authors: Hae Young Lee, Tae Ho Cho

Abstract:

In many sensor network applications, sensor nodes are deployed in open environments and hence are vulnerable to physical attacks that can potentially compromise a node's cryptographic keys. False sensing reports can be injected through compromised nodes, which can lead not only to false alarms but also to the depletion of the limited energy resources in battery-powered networks. Ye et al. proposed a statistical en-route filtering scheme (SEF) to detect such false reports during the forwarding process. In this scheme, the choice of a security threshold value is important since it trades off detection power and overhead. In this paper, we propose a fuzzy logic approach for determining the security threshold value in SEF-based sensor networks. The fuzzy logic determines a security threshold by considering the number of partitions in the global key pool, the number of compromised partitions, and the energy level of the nodes. The fuzzy-based threshold value conserves energy while providing sufficient detection power.

Keywords: Fuzzy logic, security, sensor network.

8395 Solution of Density Dependent Nonlinear Reaction-Diffusion Equation Using Differential Quadrature Method

Authors: Gülnihal Meral

Abstract:

In this study, the density dependent nonlinear reaction-diffusion equation, which arises in insect dispersal models, is solved using the combined application of the differential quadrature method (DQM) and the implicit Euler method. The polynomial-based DQM is used to discretize the spatial derivatives of the problem. The resulting time-dependent nonlinear system of ordinary differential equations (ODEs) is solved by using the implicit Euler method. The computations are carried out for a Cauchy problem defined by a one-dimensional density dependent nonlinear reaction-diffusion equation which has an exact solution. The DQM solution is found to be in very good agreement with the exact solution in terms of maximum absolute error. The DQM solution exhibits superior accuracy at large time levels tending to steady state. Furthermore, using an implicit method in the solution procedure leads to stable solutions, and larger time steps can be used.
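
The time-integration half of this scheme can be sketched generically: an implicit (backward) Euler step for a nonlinear ODE system requires solving a nonlinear algebraic system at each step, done below with scipy.optimize.fsolve. The spatial DQM discretization is replaced by a hypothetical right-hand side for brevity, so this is a sketch of the time stepping only.

```python
import numpy as np
from scipy.optimize import fsolve

def rhs(u):
    """Hypothetical semi-discrete right-hand side du/dt = F(u).

    Stands in for the DQM-discretized density-dependent reaction-diffusion
    operator; here a simple logistic reaction with nearest-neighbour coupling.
    """
    coupling = np.roll(u, 1) - 2 * u + np.roll(u, -1)
    return coupling + u * (1.0 - u)

def implicit_euler(u0, dt, n_steps):
    u = np.asarray(u0, dtype=float)
    for _ in range(n_steps):
        # backward Euler: solve u_new - u - dt * F(u_new) = 0 for u_new
        u = fsolve(lambda v: v - u - dt * rhs(v), u)
    return u

u0 = 0.5 + 0.1 * np.sin(np.linspace(0, 2 * np.pi, 40, endpoint=False))
print(implicit_euler(u0, dt=0.1, n_steps=50).round(3))
```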

Keywords: Density Dependent Nonlinear Reaction-Diffusion Equation, Differential Quadrature Method, Implicit Euler Method.

8394 Brain MRI Segmentation and Lesions Detection by EM Algorithm

Authors: Mounira Rouaïnia, Mohamed Salah Medjram, Noureddine Doghmane

Abstract:

In Multiple Sclerosis, pathological changes in the brain result in deviations in signal intensity on Magnetic Resonance Images (MRI). Quantitative analysis of these changes and their correlation with clinical findings provides important information for diagnosis, which constitutes the objective of our work. A new approach is developed. After enhancing image contrast and extracting the brain with a mathematical morphology algorithm, we proceed to brain segmentation. Our approach is based on building a statistical model from the data itself for normal brain MRI, including clustering of tissue types. We then detect signal abnormalities (MS lesions) as a rejection class containing voxels that are not explained by the built model. We validate the method on MR images of Multiple Sclerosis patients by comparing its results with those of human expert segmentation.
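
As a hedged sketch of the clustering step only (not the full pipeline with morphology-based brain extraction), EM fitting of a Gaussian mixture to voxel intensities can be done with scikit-learn; the three-class intensity model and its parameters below are assumptions standing in for the tissue types.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic voxel intensities for three hypothetical tissue classes
intensities = np.concatenate([
    rng.normal(30, 5, 2000),
    rng.normal(80, 8, 3000),
    rng.normal(130, 6, 2500),
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(intensities)          # EM fit, then hard assignment

# voxels with very low likelihood under the fitted model could be flagged,
# analogously to the rejection class described in the abstract
log_lik = gmm.score_samples(intensities)
outliers = log_lik < np.percentile(log_lik, 1)
print("class means:", gmm.means_.ravel().round(1), "| flagged voxels:", outliers.sum())
```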

Keywords: EM algorithm, Magnetic Resonance Imaging, Mathematical morphology, Markov random model.

8393 Development of Rock Engineering System-Based Models for Tunneling Progress Analysis and Evaluation: Case Study of Tailrace Tunnel of Azad Power Plant Project

Authors: S. Golmohammadi, M. Noorian Bidgoli

Abstract:

Tunneling progress is a key parameter in the blasting method of tunneling. Taking measures to enhance tunneling advance can limit the progress distance without a supporting system, subsequently reducing or eliminating the risk of damage. This paper focuses on modeling tunneling progress using three main groups of parameters (tunneling geometry, blasting pattern, and rock mass specifications) based on the Rock Engineering Systems (RES) methodology. In the proposed models, four main effective parameters on tunneling progress are considered as inputs (RMR, Q-system, Specific charge of blasting, Area), with progress as the output. Data from 86 blasts conducted at the tailrace tunnel in the Azad Dam, western Iran, were used to evaluate the progress value for each blast. The results indicated that, for the 86 blasts, the progress of the estimated model aligns mostly with the measured progress. This paper presents a method for building the interaction matrix (statistical base) of the RES model. Additionally, a comparison was made between the results of the new RES-based model and a Multi-Linear Regression (MLR) analysis model. In the RES-based model, the effective parameters are RMR (35.62%), Q (28.6%), q (specific charge of blasting) (20.35%), and A (15.42%), respectively, whereas for MLR analysis, the main parameters are RMR, Q (system), q, and A. These findings confirm the superior performance of the RES-based model over the other proposed models.

Keywords: Rock Engineering Systems, tunneling progress, Multi Linear Regression, Specific charge of blasting.

8392 Impact of Fixation Time on Subjective Video Quality Metric: a New Proposal for Lossy Compression Impairment Assessment

Authors: M. G. Albanesi, R. Amadeo

Abstract:

In this paper, a new approach to quality assessment tasks in lossy compressed digital video is proposed. The research activity is based on visual fixation data recorded by an eye tracker. The method involves both a new paradigm for subjective quality evaluation and the subsequent statistical analysis matching the subjective scores provided by the observers to the data obtained from the eye tracker experiments. The study brings improvements to the state of the art, as it solves some problems highlighted in the literature. The experiments prove that data obtained from an eye tracker can be used to classify videos according to the level of impairment due to compression. The paper presents the methodology, the experimental results, and their interpretation. Conclusions suggest that the eye tracker can be useful in quality assessment if the data are collected and analyzed in a proper way.

Keywords: eye tracker, video compression, video quality assessment, visual attention

8391 An Adaptive Fuzzy Clustering Approach for the Network Management

Authors: Amal Elmzabi, Mostafa Bellafkih, Mohammed Ramdani

Abstract:

Chiu's method, which generates a Takagi-Sugeno Fuzzy Inference System (FIS), is a method of fuzzy rule extraction in which the rule outputs are linear functions of the inputs; in addition, these rules are not explicit for the expert. In this paper, we develop a method which generates a Mamdani FIS, where the rule outputs are fuzzy. The method proceeds in two steps: first, it uses the subtractive clustering principle to estimate both the number of clusters and the initial locations of the cluster centers, each obtained cluster corresponding to a Mamdani fuzzy rule; then, it optimizes the fuzzy model parameters by applying a genetic algorithm. The method is illustrated on a traffic network management application. We also suggest a Mamdani fuzzy rule generation method for the case where the expert wants to classify the output variables into predefined fuzzy classes.

Keywords: Fuzzy entropy, fuzzy inference systems, genetic algorithms, network management, subtractive clustering.

8390 Average Switching Thresholds and Average Throughput for Adaptive Modulation using Markov Model

Authors: Essam S. Altubaishi

Abstract:

The motivation for adaptive modulation and coding is to adjust the method of transmission to ensure that the maximum efficiency is achieved over the link at all times. The receiver estimates the channel quality and reports it back to the transmitter, which then maps the reported quality into a link mode. This mapping, however, is not one-to-one. In this paper we investigate a method for selecting the proper modulation scheme that can dynamically adapt the mapping of the Signal-to-Noise Ratio (SNR) into a link mode. It enables the use of the right modulation scheme irrespective of changes in the channel conditions by incorporating errors in the received data. We propose a Markov model for this method and use it to derive the average switching thresholds and the average throughput. We show that the average throughput of this method outperforms that of the conventional threshold method.

Keywords: Adaptive modulation and coding, CDMA, Markov model.

8389 Numerical Treatment of Block Method for the Solution of Ordinary Differential Equations

Authors: A. M. Sagir

Abstract:

A discrete linear multistep block method of uniform order for the solution of first order initial value problems (IVPs) in ordinary differential equations (ODEs) is presented in this paper. The approach of interpolation and collocation approximation is adopted in the derivation of the method, which is then applied to first order ordinary differential equations with associated initial conditions. The continuous hybrid formulations enable us to differentiate and evaluate at some grid and off-grid points to obtain four discrete schemes, which are used in block form for parallel or sequential solution of the problems. Furthermore, the stability and efficiency of the block method are tested on ordinary differential equations, and the results obtained compare favorably with the exact solutions.

Keywords: Block Method, First Order Ordinary Differential Equations, Hybrid, Self starting.

8388 A Ground Observation Based Climatology of Winter Fog: Study over the Indo-Gangetic Plains, India

Authors: Sanjay Kumar Srivastava, Anu Rani Sharma, Kamna Sachdeva

Abstract:

Every year, fog formation over the Indo-Gangetic Plains (IGP) of India during the winter months of December and January is believed to create numerous hazards, inconvenience, and economic loss for the inhabitants of this densely populated region of the Indian subcontinent. The aim of this paper is to analyze the spatial and temporal variability of winter fog over the IGP. Long-term ground observations of visibility and other meteorological parameters (1971-2010) have been analyzed to understand the formation of the fog phenomenon and its relevance during the peak winter months of January and December over the IGP of India. In order to examine the temporal variability, time series and trend analyses were carried out using the Mann-Kendall statistical test. The trend analysis accepts the alternative hypothesis at the 95% confidence level, indicating that a trend exists, and the Kendall tau statistic shows a positive correlation between time and fog frequency. Further, the Theil-Sen median slope estimate shows that the magnitude of the trend is positive; the magnitude is higher in January than in December over the entire IGP, except over the western IGP, where it is high in December. Decade-wise time series analysis reveals a continuous increase in fog days, with a net overall increase of 99% over the IGP in the last four decades. Diurnal variability and average daily persistence were computed using descriptive statistical techniques, and a geo-statistical analysis was carried out to understand the spatial variability of fog. The geo-statistical analysis reveals that the IGP is a highly fog-prone zone, with fog occurring on more than 66% of days during the study period. The diurnal variability indicates that the peak occurrence of fog is between 06:00 and 10:00 local time, and the average daily fog persistence extends to 5 to 7 hours during the peak winter season. The results offer a new perspective for taking proactive measures to reduce the irreparable damage that could be caused by the changing trends of fog.
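
For reference, the Mann-Kendall S statistic and Kendall's tau used in this trend analysis are simple to compute directly; a minimal sketch on a hypothetical series of annual fog-day counts (the variance formula below ignores ties):

```python
import numpy as np
from math import sqrt
from scipy.stats import norm

def mann_kendall(x):
    """Return the Mann-Kendall S statistic, Kendall's tau, and a two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = int(sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n)))
    tau = s / (0.5 * n * (n - 1))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0       # variance of S, no tie correction
    z = (s - np.sign(s)) / sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, tau, p

# hypothetical annual fog-day counts with an upward trend
fog_days = [38, 41, 40, 45, 47, 44, 52, 55, 53, 60, 63, 61]
s, tau, p = mann_kendall(fog_days)
print(f"S = {s}, tau = {tau:.2f}, p = {p:.4f}")    # a small p rejects 'no trend'
```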

Keywords: Fog, climatology, Mann-Kendall test, trend analysis, spatial variability, temporal variability, visibility.

8387 Optimizing Performance of Tablet's Direct Compression Process Using Fuzzy Goal Programming

Authors: Abbas Al-Refaie

Abstract:

This paper aims at improving the performance of the tableting process using statistical quality control and fuzzy goal programming. The tableting process was studied, and statistical control tools were used to characterize the existing process for three critical responses: the averages of a tablet’s weight, hardness, and thickness. At the initial process factor settings, the estimated process capability index values for the tablet’s average weight, hardness, and thickness were 0.58, 3.36, and 0.88, respectively. The L9 orthogonal array was utilized for the experimental design. Fuzzy goal programming was then employed to find the combination of optimal factor settings. Optimization results showed that the process capability index values for the tablet’s average weight, hardness, and thickness were improved to 1.03, 4.42, and 1.42, respectively. Such improvements resulted in significant savings in quality and production costs.
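
For reference, a capability index of the Cpk type (the abstract does not state which index was used, so this is an assumption) is computed from the specification limits and the sample mean and standard deviation; the tablet-weight data and limits below are hypothetical.

```python
import numpy as np

def cpk(samples, lsl, usl):
    """Process capability index Cpk = min(USL - mean, mean - LSL) / (3 * sigma)."""
    samples = np.asarray(samples, dtype=float)
    mu, sigma = samples.mean(), samples.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

# hypothetical tablet weights (mg) and specification limits
rng = np.random.default_rng(1)
weights = rng.normal(250.0, 2.0, size=200)
print(f"Cpk = {cpk(weights, lsl=240.0, usl=260.0):.2f}")
```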

Keywords: Fuzzy goal programming, control charts, process capability, tablet optimization.

8386 Multivariate Statistical Analysis of Decathlon Performance Results in Olympic Athletes (1988-2008)

Authors: Jaebum Park, Vladimir M. Zatsiorsky

Abstract:

The performance results of the athletes who competed in the 1988-2008 Olympic Games were analyzed (n = 166). The data were obtained from the official IAAF protocols. In the principal component analysis, the first three principal components explained 70% of the total variance. In the 1st principal component (43.1% of the total variance explained) the largest factor loadings were for 100 m (0.89), 400 m (0.81), the 110 m hurdle run (0.76), and long jump (–0.72); this factor can be interpreted as 'sprinting performance'. The loadings on the 2nd factor (15.3% of the total variance) presented a counter-intuitive throwing-jumping combination: the highest loadings were for throwing events (javelin throwing 0.76; shot put 0.74; discus throwing 0.73) and also for jumping events (high jump 0.62; pole vaulting 0.58). On the 3rd factor (11.6% of the total variance), the largest loading was for 1500 m running (0.88); all other loadings were below 0.4.
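
A sketch of the same kind of analysis with scikit-learn, run here on randomly generated stand-in data since the IAAF protocol results are not reproduced in the abstract:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

events = ["100m", "long jump", "shot put", "high jump", "400m",
          "110m hurdles", "discus", "pole vault", "javelin", "1500m"]

# stand-in data: 166 athletes x 10 events (the real study used IAAF protocols)
rng = np.random.default_rng(0)
scores = rng.normal(size=(166, len(events)))

X = StandardScaler().fit_transform(scores)      # standardize before PCA
pca = PCA(n_components=3).fit(X)

print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
for k, comp in enumerate(pca.components_, start=1):
    top = sorted(zip(events, comp), key=lambda ec: abs(ec[1]), reverse=True)[:3]
    print(f"PC{k} largest loadings:", [(e, round(c, 2)) for e, c in top])
```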

Keywords: Decathlon, principal component analysis, Olympic Games, multivariate statistical analysis.

8385 EEG Spikes Detection, Sorting, and Localization

Authors: Mazin Z. Othman, Maan M. Shaker, Mohammed F. Abdullah

Abstract:

This study introduces a new method for detecting, sorting, and localizing spikes from multiunit EEG recordings. The method combines the wavelet transform, which localizes distinctive spike features, with the Super-Paramagnetic Clustering (SPC) algorithm, which allows automatic classification of the data without assumptions such as low variance or Gaussian distributions. Moreover, the method is capable of setting amplitude thresholds for spike detection. The method was applied to several real EEG data sets; the spikes were detected and clustered, and their occurrence times were determined.

Keywords: EEG time localizations, EEG spike detection, superparamagnetic algorithm, wavelet transform.

8384 CART Method for Modeling the Output Power of Copper Bromide Laser

Authors: Iliycho P. Iliev, Desislava S. Voynikova, Snezhana G. Gocheva-Ilieva

Abstract:

This paper examines the available experimental data for a copper bromide vapor laser (CuBr laser) emitting at two wavelengths, 510.6 and 578.2 nm. The laser output power is estimated based on 10 independent input physical parameters. A classification and regression tree (CART) model is obtained which describes 97% of the data. The resulting binary CART tree specifies which input parameters considerably influence each of the classification groups. This allows for a technical assessment indicating which of these are the most significant for the manufacture and operation of the type of laser under consideration. The predicted values of the laser output power are also obtained depending on the classification. This aids the design and development processes considerably.
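
A hedged sketch of fitting such a regression tree with scikit-learn; the input parameters and data here are synthetic placeholders, since the experimental CuBr laser data are not reproduced in the abstract.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# synthetic stand-in for the 10 input physical parameters and the output power
rng = np.random.default_rng(0)
n_samples, n_params = 300, 10
X = rng.uniform(size=(n_samples, n_params))
power = 5.0 * X[:, 0] + 3.0 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(n_samples)

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, power)
print("R^2 on training data:", round(tree.score(X, power), 3))
print(export_text(tree, feature_names=[f"p{i}" for i in range(n_params)]))
```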

Keywords: Classification and regression trees (CART), Copper Bromide laser (CuBr laser), laser generation, nonparametric statistical model.

8383 A New Method for Rapid DNA Extraction from Artemia (Branchiopoda, Crustacea)

Authors: R. Manaffar, R. Maleki, S. Zare, N. Agh, S. Soltanian, B. Sehatnia, P. Sorgeloos, P. Bossier, G. Van Stappen

Abstract:

Artemia is one of the most conspicuous invertebrates associated with aquaculture. It can be considered a model organism, offering numerous advantages for comprehensive and multidisciplinary studies using morphological or molecular methods. Since DNA extraction is an important step of any molecular experiment, a new and rapid method of DNA extraction from adult Artemia is described in this study. In addition, the efficiency of this technique was compared with two widely used alternative techniques, namely the Chelex® 100 resin and SDS-chloroform methods. Data analysis revealed that the new method is the easiest and most cost-effective of the methods compared and allows quick and efficient extraction of DNA from the adult animal.

Keywords: APD, Artemia, DNA extraction, Molecular experiments

8382 Using Data Mining for Learning and Clustering FCM

Authors: Somayeh Alizadeh, Mehdi Ghazanfari, Mohammad Fathian

Abstract:

Fuzzy Cognitive Maps (FCMs) have successfully been applied in numerous domains to show the relations between essential components. Some FCMs contain many nodes related to each other, and more nodes mean more complexity in system behavior and analysis. In this paper, a novel learning method is used to construct FCMs from historical data, and a new method based on data mining and the DEMATEL method is defined to reduce the number of nodes. This method clusters the nodes of an FCM based on their cause-and-effect behavior.

Keywords: Clustering, Data Mining, Fuzzy Cognitive Map (FCM), Learning.

8381 A Numerical Method for Diffusion and Cahn-Hilliard Equations on Evolving Spherical Surfaces

Authors: Jyh-Yang Wu, Sheng-Gwo Chen

Abstract:

In this paper, we present a simple, effective numerical geometric method to estimate the divergence of a vector field over a curved surface. The conservation law is an important principle in physics and mathematics; however, many well-known numerical methods for solving diffusion equations do not obey conservation laws. The method presented in this paper combines the divergence theorem with a generalized finite difference method and obeys the conservation law on discrete closed surfaces. We use a similar method to solve the Cahn-Hilliard equations on evolving spherical surfaces and observe stability in our numerical simulations.

Keywords: Conservation laws, diffusion equations, Cahn-Hilliard Equations, evolving surfaces.

8380 A Comprehensive Method of Fault Detection and Isolation Based On Testability Modeling Data

Authors: Junyou Shi, Weiwei Cui

Abstract:

Testability modeling is a commonly used method in the testability design and analysis of systems. A dependency matrix is obtained from testability modeling, from which a quantitative evaluation of fault detection and isolation can be given. Based on the dependency matrix, a diagnosis tree can be obtained, which provides the procedures for fault detection and isolation. In practice, however, the dependency matrix usually includes both built-in test (BIT) and manual test. BIT runs the tests automatically and is not limited by the procedures, so the method above cannot give a more efficient diagnosis that uses the advantages of BIT. A comprehensive method of fault detection and isolation is therefore proposed, which combines the advantages of BIT and manual test by splitting the matrix. The result of the case study shows that the method is effective.

Keywords: BIT, fault detection, fault isolation, testability modeling.

8379 Estimation of Groundwater Recovery by Recharge in the Agricultural Area

Authors: Tsutomu Ichikawa

Abstract:

The Kumamoto area in Kyushu, Japan, covers 1,041 km² and has a population of about one million. It is the largest area in Japan that depends on groundwater for all of its drinking water; local groundwater use is about 200 MCM per year. The main groundwater recharge area is understood to be the rice field zone in the middle part of the Shira River Basin, where infiltration of the irrigated water exceeds 100 mm/day. However, owing to the decrease of the paddy-rice planting area caused by urbanization and an acreage reduction policy, the groundwater balance has worsened. Since 2004, Kumamoto City and four companies have therefore provided financial support to increase recharge by ponding water in the fields. In this paper, the author reports on the recovery of groundwater by recharge and estimates the efficiency of recharge by a statistical method.

Keywords: Groundwater recharge, groundwater level, spring water, paddy field.

8378 Forecasting Exchange Rate between Thai Baht and the US Dollar Using Time Series Analysis

Authors: Kunya Bowornchockchai

Abstract:

The objective of this research is to forecast the monthly exchange rate between the Thai baht and the US dollar and to compare two forecasting methods: the Box-Jenkins method and Holt's method. Results show that the Box-Jenkins method is the more suitable for this monthly exchange rate series. The selected forecasting model is ARIMA(1,1,0) without a constant, with forecasting equation Yt = Yt-1 + 0.3691(Yt-1 - Yt-2), where Yt is the value of the time series at time t.
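
The reported ARIMA(1,1,0) model without a constant gives one-step-ahead forecasts directly from the recursion above; a minimal sketch applying it to a hypothetical recent stretch of the exchange-rate series:

```python
import numpy as np

def arima_110_forecast(y, phi=0.3691):
    """One-step-ahead forecast for ARIMA(1,1,0) without constant:
    Y_t = Y_{t-1} + phi * (Y_{t-1} - Y_{t-2})."""
    y = np.asarray(y, dtype=float)
    return y[-1] + phi * (y[-1] - y[-2])

# hypothetical recent monthly THB/USD observations
rates = [36.10, 36.25, 36.40, 36.32, 36.50]
print(f"next-month forecast: {arima_110_forecast(rates):.4f}")
```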

Keywords: Box–Jenkins Method, Holt’s Method, Mean Absolute Percentage Error (MAPE).
