Search results for: Generalized singular-value decomposition.

537 Confidence Interval for the Inverse of a Normal Mean with a Known Coefficient of Variation

Authors: Arunee Wongkha, Suparat Niwitpong, Sa-aat Niwitpong

Abstract:

In this paper, we propose two new confidence intervals for the inverse of a normal mean with a known coefficient of variation. One is constructed from the pivotal statistic Z, where Z follows a standard normal distribution, and the other is based on the generalized confidence interval presented by Weerahandi. We examine the performance of these confidence intervals in terms of coverage probability and average length via Monte Carlo simulation.

Keywords: The inverse of a normal mean, confidence interval, generalized confidence intervals, known coefficient of variation.
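
As an illustration only (not necessarily the authors' exact construction), the sketch below assumes the pivotal quantity Z = sqrt(n)(X̄ − μ)/(τμ), with τ the known coefficient of variation, and checks the coverage of the resulting interval for 1/μ by Monte Carlo simulation.

  import numpy as np
  from scipy.stats import norm

  def ci_inverse_mean(xbar, n, tau, alpha=0.05):
      # Invert Z = sqrt(n)*(xbar - mu)/(tau*mu) ~ N(0, 1) to bound 1/mu (mu, xbar > 0).
      z = norm.ppf(1 - alpha / 2)
      half = z * tau / np.sqrt(n)
      return (1 - half) / xbar, (1 + half) / xbar

  rng = np.random.default_rng(0)
  mu, tau, n, reps = 5.0, 0.1, 30, 20000
  hits = 0
  for _ in range(reps):
      x = rng.normal(mu, tau * mu, size=n)
      lo, hi = ci_inverse_mean(x.mean(), n, tau)
      hits += lo <= 1 / mu <= hi
  print("estimated coverage:", hits / reps)   # should be close to the nominal 0.95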

536 Using Heuristic Rules from Sentence Decomposition of Experts' Summaries to Detect Students' Summarizing Strategies

Authors: Norisma Idris, Sapiyan Baba, Rukaini Abdullah

Abstract:

Summarizing skills have been introduced into the English syllabus in secondary schools in Malaysia to evaluate students' comprehension of a given text; producing a summary requires students to employ several strategies. This paper reports on our effort to develop a computer-based summarization assessment system that detects the strategies used by students in producing their summaries. Sentence decomposition of expert-written summaries is used to analyze how experts produce their summary sentences. From the analysis, we identified seven summarizing strategies and their rules, which were then transformed into a set of heuristic rules for determining the summarizing strategies. We developed an algorithm based on the heuristic rules and performed experiments to evaluate and support the proposed technique.

Keywords: Summarizing strategies, heuristic rules, sentence decomposition.

535 Nonlinear Static Analysis of Laminated Composite Hollow Beams with Super-Elliptic Cross-Sections

Authors: G. Akgun, I. Algul, H. Kurtaran

Abstract:

In this paper, the geometrically nonlinear static behavior of laminated composite hollow super-elliptic beams is investigated using the generalized differential quadrature method. A super-elliptic beam can have both oval and elliptic cross-sections by adjusting the parameters in the super-ellipse formulation (also known as Lamé curves). Equilibrium equations of the super-elliptic beam are obtained using the virtual work principle. Geometric nonlinearity is taken into account using von Kármán nonlinear strain-displacement relations. Spatial derivatives in the strains are expressed with the generalized differential quadrature method. The transverse shear effect is considered through first-order shear deformation theory. Static equilibrium equations are solved using the Newton-Raphson method. Several composite super-elliptic beam problems are solved with the proposed method. Effects of layer orientations of the composite material, boundary conditions, ovality and ellipticity on the bending behavior are investigated.

Keywords: Generalized differential quadrature, geometric nonlinearity, laminated composite, super-elliptic cross-section.
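
For reference, a super-ellipse (Lamé curve) with semi-axes a and b and exponent n is commonly written as below; n = 2 recovers the ordinary ellipse, while larger n produces increasingly oval, flat-sided sections. The notation is generic, not necessarily that of the paper.

  \left|\frac{x}{a}\right|^{n} + \left|\frac{y}{b}\right|^{n} = 1, \qquad n \ge 2.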

534 Fuzzy Join Dependency in Fuzzy Relational Databases

Authors: P. C. Saxena, D. K. Tayal

Abstract:

The join dependency provides the basis for obtaining a lossless-join decomposition in a classical relational schema. The existence of a join dependency shows that the tables always represent the correct data after being joined. Since classical relational databases cannot handle imprecise data, they were extended to fuzzy relational databases so that uncertain, ambiguous, imprecise and partially known information can also be stored in a formal way. However, like classical databases, fuzzy relational databases also undergo decomposition during normalization, and the issue of joining the decomposed fuzzy relations remains open. Our effort in the present paper is to address this issue. We define fuzzy join dependency in the framework of type-1 and type-2 fuzzy relational databases using the concept of fuzzy equality, which is defined using fuzzy functions. We use the fuzzy equi-join operator for computing the fuzzy equality of two attribute values. We also discuss the dependency preservation property on execution of this fuzzy equi-join and derive the necessary condition for fuzzy functional dependencies to be preserved on joining the decomposed fuzzy relations. We further derive the conditions for fuzzy join dependency to exist in the context of both type-1 and type-2 fuzzy relational databases. We find that, unlike classical relational databases, even the existence of a trivial join dependency does not ensure lossless-join decomposition in type-2 fuzzy relational databases. Finally, we derive the conditions for the fuzzy equality to be nonzero and for an attribute to qualify as a fuzzy key.

Keywords: Fuzzy equi-join, fuzzy functions, fuzzy join dependency, type-1 fuzzy relational database, type-2 fuzzy relational database.
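
As a toy illustration only (the paper defines fuzzy equality via fuzzy functions, not the simple similarity used here), the sketch below joins two relations by keeping tuple pairs whose attribute values are "fuzzily equal" above a threshold; fuzzy_eq and its spread parameter are hypothetical placeholders.

  # Illustrative fuzzy equi-join; fuzzy_eq is a hypothetical similarity, not the paper's definition.
  def fuzzy_eq(a, b, spread=2.0):
      return max(0.0, 1.0 - abs(a - b) / spread)     # degree of equality in [0, 1]

  def fuzzy_equi_join(r, s, key, threshold=0.5):
      joined = []
      for t1 in r:
          for t2 in s:
              mu = fuzzy_eq(t1[key], t2[key])
              if mu >= threshold:
                  joined.append({**t1, **t2, "membership": mu})
      return joined

  R = [{"emp": "A", "age": 30}, {"emp": "B", "age": 41}]
  S = [{"dept": "X", "age": 31}, {"dept": "Y", "age": 50}]
  print(fuzzy_equi_join(R, S, "age"))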

533 A New Proof on the Growth Factor in Gaussian Elimination for Generalized Higham Matrices

Authors: Qian-Ping Guo, Hou-Biao Li

Abstract:

The generalized Higham matrix is a complex symmetric matrix A = B + iC, where both B ∈ C^{n×n} and C ∈ C^{n×n} are Hermitian positive definite and i = √−1 is the imaginary unit. The growth factor in Gaussian elimination is less than 3√2 for this class of matrices. In this paper, we give a new, brief proof of this result by different techniques, which can be understood very easily, and we obtain some new findings.

Keywords: CSPD matrix, positive definite, Schur complement, Higham matrix, Gaussian elimination, Growth factor.
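
As a quick numerical illustration of the setting (not of the proof), the sketch below builds a random generalized Higham matrix, runs Gaussian elimination without pivoting, and compares the observed growth factor with the 3√2 bound; the matrix size and construction are arbitrary.

  import numpy as np

  def growth_factor_no_pivoting(A):
      # rho = max over elimination steps of |a_ij^(k)| divided by max |a_ij| of the original matrix.
      U = A.astype(complex).copy()
      n = U.shape[0]
      max0 = np.max(np.abs(U))
      peak = max0
      for k in range(n - 1):
          factors = U[k + 1:, k] / U[k, k]
          U[k + 1:, k:] -= np.outer(factors, U[k, k:])
          peak = max(peak, np.max(np.abs(U)))
      return peak / max0

  rng = np.random.default_rng(1)
  n = 50
  X = rng.standard_normal((n, n)); Y = rng.standard_normal((n, n))
  B = X @ X.T + n * np.eye(n)       # real symmetric positive definite, hence Hermitian PD
  C = Y @ Y.T + n * np.eye(n)
  A = B + 1j * C                    # generalized Higham matrix A = B + iC
  print(growth_factor_no_pivoting(A), "<", 3 * np.sqrt(2))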

532 Enhancement of Pulsed Eddy Current Response Based on Power Spectral Density after Continuous Wavelet Transform Decomposition

Authors: A. Benyahia, M. Zergoug, M. Amir, M. Fodil

Abstract:

The main objective of this work is to enhance the pulsed eddy current (PEC) response from an aluminum structure using signal processing. Cracks and metal loss in different structures cause changes in PEC response measurements. In this paper, time-frequency analysis is used to represent the PEC response, which generates a large quantity of data, and to reduce measurement noise. Power spectral density after wavelet decomposition (PSD-WD) is proposed for defect detection. The experimental results demonstrate that surface cracks can be extracted satisfactorily by the proposed method. The validity of the proposed method is discussed.

Keywords: NDT, pulsed eddy current, continuous wavelet transform, Mexican hat mother wavelet, defect detection, power spectral density.
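
A minimal sketch of the general idea, assuming the PyWavelets (pywt) package for the Mexican hat continuous wavelet transform and SciPy's Welch estimator for the PSD; the toy signal, scales and the selected decomposition level are placeholders, not the authors' experimental settings.

  import numpy as np
  import pywt
  from scipy.signal import welch

  fs = 1000.0
  t = np.arange(0, 1, 1 / fs)
  rng = np.random.default_rng(0)
  pec = np.exp(-5 * t) + 0.05 * rng.standard_normal(t.size)              # toy PEC decay plus noise

  scales = np.arange(1, 64)
  coeffs, freqs = pywt.cwt(pec, scales, "mexh", sampling_period=1 / fs)  # Mexican hat CWT
  level = coeffs[10]                                                     # one scale (placeholder choice)

  f, psd = welch(level, fs=fs, nperseg=256)                              # PSD of the selected scale
  print("dominant frequency:", f[np.argmax(psd)], "Hz")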

531 A New Approach to ECG Biometric Systems: A Comparative Study between LPC and WPD Systems

Authors: Justin Leo Cheang Loong, Khazaimatol S Subari, Rosli Besar, Muhammad Kamil Abdullah

Abstract:

In this paper, a novel method for a biometric system based on the ECG signal is proposed, using spectral coefficients computed through linear predictive coding (LPC). ECG biometric systems have traditionally used characteristics of fiducial points of the ECG signal as the feature set. Such systems have been shown to contain loopholes, so a non-fiducial system allows for tighter security. In the proposed system, incorporating non-fiducial features from the LPC spectrum produced segment and subject recognition rates of 99.52% and 100%, respectively. The system outperformed a biometric system based on the wavelet packet decomposition (WPD) algorithm in terms of both recognition rate and computation time. This allows LPC to be used in a practical ECG biometric system that requires fast, stringent and accurate recognition.

Keywords: Biometric, ECG, linear predictive coding, wavelet packet decomposition.
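
A minimal sketch of extracting LPC coefficients from a signal segment with the autocorrelation method (solving the Yule-Walker equations with a Toeplitz solver); the synthetic segment and the model order are placeholders, not the authors' ECG data or settings.

  import numpy as np
  from scipy.linalg import solve_toeplitz

  def lpc_coefficients(x, order):
      # Autocorrelation method: solve the Yule-Walker system R a = r for the predictor coefficients.
      x = x - np.mean(x)
      r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]   # lags 0..order
      a = solve_toeplitz((r[:-1], r[:-1]), r[1:])                      # symmetric Toeplitz solve
      return a                                                         # x[n] ~ sum_k a[k] * x[n-1-k]

  rng = np.random.default_rng(0)
  segment = np.sin(2 * np.pi * 1.2 * np.arange(500) / 360.0) + 0.01 * rng.standard_normal(500)
  features = lpc_coefficients(segment, order=12)    # 12 spectral coefficients as the feature vector
  print(features)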

530 New Exact Three-Wave Solutions for the (2+1)-Dimensional Asymmetric Nizhnik-Novikov-Veselov System

Authors: Fadi Awawdeh, O. Alsayyed

Abstract:

New exact three-wave solutions, including periodic two-solitary solutions and doubly periodic solitary solutions, for the (2+1)-dimensional asymmetric Nizhnik-Novikov-Veselov (ANNV) system are obtained using Hirota's bilinear form and a generalized three-wave ansatz approach. It is shown that the generalized three-wave method, with the help of symbolic computation, provides an effective and powerful mathematical tool for solving high-dimensional nonlinear evolution equations in mathematical physics.

Keywords: Soliton Solution, Hirota Bilinear Method, ANNV System.

529 Application of Multi-Dimensional Principal Component Analysis to Medical Data

Authors: Naoki Yamamoto, Jun Murakami, Chiharu Okuma, Yutaro Shigeto, Satoko Saito, Takashi Izumi, Nozomi Hayashida

Abstract:

Multi-dimensional principal component analysis (PCA) is the extension of PCA, which is widely used as a dimensionality-reduction technique in multivariate data analysis, to multi-dimensional data. To compute the PCA, the singular value decomposition (SVD) is commonly employed because of its numerical stability. The multi-dimensional PCA can be calculated using the higher-order SVD (HOSVD) proposed by Lathauwer et al., analogously to the ordinary PCA case. In this paper, we apply multi-dimensional PCA to multi-dimensional medical data including the functional independence measure (FIM) score, and describe the results of our experimental analysis.

Keywords: Multi-dimensional principal component analysis, higher-order SVD (HOSVD), functional independence measure (FIM), medical data, tensor decomposition.
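
A minimal NumPy sketch of the HOSVD (mode-wise SVDs of the tensor unfoldings, then projection onto the leading singular vectors); the tensor shape and retained ranks are arbitrary placeholders, not the medical data analyzed in the paper.

  import numpy as np

  def unfold(T, mode):
      return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

  def mode_dot(T, M, mode):
      # Multiply tensor T by matrix M along the given mode.
      return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

  def hosvd(T, ranks):
      factors = []
      for mode, r in enumerate(ranks):
          U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
          factors.append(U[:, :r])                   # leading left singular vectors of each unfolding
      core = T
      for mode, U in enumerate(factors):
          core = mode_dot(core, U.T, mode)           # project onto the retained subspaces
      return core, factors

  X = np.random.default_rng(0).standard_normal((20, 15, 10))    # e.g. subjects x items x time
  core, factors = hosvd(X, ranks=(5, 4, 3))
  print(core.shape)                                  # (5, 4, 3)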

528 Efficiency of Different GLR Test-statistics for Spatial Signal Detection

Authors: Olesya Bolkhovskaya, Alexander Maltsev

Abstract:

In this work, the characteristics of spatial signal detection from an antenna array in various sample cases are investigated. Cases with different amounts of prior information about the received signal and the background noise are considered. Only the spatial difference between signal and noise is used. The performance characteristics and detection curves are presented. All test statistics are obtained on the basis of the generalized likelihood ratio (GLR). The results obtained are valid for both short and long samples.

Keywords: GLR test-statistic, detection task, generalized likelihood ratio, antenna array, detection curves, performance characteristics.

527 Preparation and Characterization of Organic Silver Precursors for Conductive Ink

Authors: Wendong Yang, Changhai Wang, Valeria Arrighi

Abstract:

A low ink sintering temperature is desired for flexible electronics, as it would widen the application of the ink to temperature-sensitive substrates, and here the selection of the silver precursor is critical. In this paper, four types of organic silver precursors, silver carbonate, silver oxalate, silver tartrate and silver itaconate, were first synthesized using an ion-exchange method. Various characterization methods were employed to investigate their physical phase, chemical composition, morphology and thermal decomposition behavior. It was found that silver oxalate had the most suitable thermal properties and showed the lowest decomposition temperature. An ink was then formulated by complexing the as-prepared silver oxalate with ethylenediamine in organic solvents. Results show that a favorable conductive film with a uniform surface structure, consisting of silver nanoparticles and few voids, could be produced from the ink at a sintering temperature of 150 °C.

Keywords: Conductive ink, electrical property, film, organic silver.

526 A Generalized Coordination Setting Method for Distribution Systems with Closed-loop

Authors: Kang-Le Guan, Seung-Jae Lee, Myeon-Song Choi

Abstract:

The protection issues in distribution systems with open and closed loops are studied, and a generalized protection setting scheme based on traditional overcurrent protection theory is proposed to meet the new requirements. The setting method is intended to be easy to implement in a computer program, so that on-line adaptive coordination setting in distribution systems can be realized. An automatic setting program is created and applied to several cases. The setting results are verified by the coordination curves of the protective devices, which are plotted using MATLAB.

Keywords: Protection setting, on-line system analysis, overcurrent protection, closed-loop distribution system.
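
For context, a common inverse-time overcurrent characteristic used in coordination studies is the IEC standard-inverse curve; the sketch below checks the coordination time interval between a downstream relay and its upstream backup. All relay settings are illustrative placeholders, not values from the paper.

  # IEC standard-inverse characteristic: t = TMS * 0.14 / ((I / Is)**0.02 - 1)
  def operating_time(i_fault, pickup, tms, k=0.14, alpha=0.02):
      return tms * k / ((i_fault / pickup) ** alpha - 1.0)

  i_fault = 2000.0                                              # fault current seen by both relays (A)
  t_down = operating_time(i_fault, pickup=400.0, tms=0.10)      # downstream (primary) relay
  t_up = operating_time(i_fault, pickup=600.0, tms=0.25)        # upstream (backup) relay
  margin = t_up - t_down
  print(f"downstream {t_down:.3f} s, upstream {t_up:.3f} s, margin {margin:.3f} s")
  assert margin >= 0.3, "coordination time interval too small"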

525 A New Heuristic Approach for the Large-Scale Generalized Assignment Problem

Authors: S. Raja Balachandar, K. Kannan

Abstract:

This paper presents a heuristic approach to solve the generalized assignment problem (GAP), which is NP-hard. Many researchers have developed algorithms for identifying redundant constraints and variables in linear programming models; some of these algorithms use the intercept matrix of the constraints to identify redundant constraints and variables prior to the start of the solution process. Here, a new heuristic approach based on the dominance property of the intercept matrix is proposed to find optimal or near-optimal solutions of the GAP. In this heuristic, redundant variables of the GAP are identified by applying the dominance property of the intercept matrix repeatedly. The heuristic is tested on 90 benchmark problems with sizes up to 4000, taken from the OR-Library, and the results are compared with the optimum solutions. The computational complexity of solving the GAP with this approach is shown to be O(mn²). The performance of our heuristic is also compared with the best state-of-the-art heuristic algorithms with respect to solution quality. The encouraging results, especially for relatively large test problems, indicate that this heuristic approach can successfully be used for finding good solutions to highly constrained NP-hard problems.

Keywords: Combinatorial Optimization Problem, Generalized Assignment Problem, Intercept Matrix, Heuristic, Computational Complexity, NP-Hard Problems.
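
For reference, the standard GAP formulation (assign each job j to exactly one agent i at cost c_ij, consuming resource a_ij from the agent's capacity b_i) can be written as:

  \min \sum_{i=1}^{m}\sum_{j=1}^{n} c_{ij}\, x_{ij}
  \quad \text{s.t.} \quad
  \sum_{i=1}^{m} x_{ij} = 1 \;\; (j = 1,\dots,n), \qquad
  \sum_{j=1}^{n} a_{ij}\, x_{ij} \le b_i \;\; (i = 1,\dots,m), \qquad
  x_{ij} \in \{0,1\}.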

524 EMD-Based Signal Noise Reduction

Authors: A.O. Boudraa, J.C. Cexus, Z. Saidi

Abstract:

This paper introduces a new signal denoising method based on the empirical mode decomposition (EMD) framework. The method is a fully data-driven approach. The noisy signal is decomposed adaptively into oscillatory components called intrinsic mode functions (IMFs) by means of a process called sifting. EMD denoising involves filtering or thresholding each IMF and reconstructing the estimated signal from the processed IMFs. The EMD can be combined with a filtering approach or with a nonlinear transformation. In this work, the Savitzky-Golay filter and soft thresholding are investigated. For thresholding, IMF samples are shrunk or scaled below a threshold value. The standard deviation of the noise is estimated for every IMF, and the threshold is derived for Gaussian white noise. The method is tested on simulated and real data and compared with averaging, median and wavelet approaches.

Keywords: Empirical mode decomposition, signal denoising, nonstationary process.
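
A minimal sketch of EMD denoising with soft thresholding of the noise-dominated IMFs, assuming the third-party PyEMD package ("EMD-signal") for the sifting step; the universal threshold from a MAD noise estimate and the choice of which IMFs to threshold are common conventions, not necessarily those derived in the paper.

  import numpy as np
  from PyEMD import EMD                              # third-party package "EMD-signal"

  def soft(x, t):
      return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

  rng = np.random.default_rng(0)
  t = np.linspace(0, 1, 1000)
  clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
  noisy = clean + 0.3 * rng.standard_normal(t.size)

  imfs = EMD().emd(noisy)                            # adaptive decomposition into IMFs
  processed = []
  for k, imf in enumerate(imfs):
      if k < 3:                                      # high-frequency, noise-dominated IMFs
          sigma = np.median(np.abs(imf)) / 0.6745    # MAD-based noise scale for this IMF
          thr = sigma * np.sqrt(2 * np.log(imf.size))
          processed.append(soft(imf, thr))
      else:
          processed.append(imf)                      # keep the lower-frequency IMFs unchanged
  denoised = np.sum(processed, axis=0)               # reconstruct from the processed IMFs
  print(len(imfs), "IMFs; residual error std:", np.std(denoised - clean))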

523 Analyzing the Factors Influencing Exclusive Breastfeeding Using the Generalized Poisson Regression Model

Authors: Cheika Jahangeer, Naushad Mamode Khan, Maleika Heenaye-Mamode Khan

Abstract:

Exclusive breastfeeding is the feeding of a baby on no other milk apart from breast milk. Exclusive breastfeeding during the first six months of life is of fundamental importance because it supports optimal growth and development during infancy and reduces the risk of debilitating diseases and health problems. Moreover, in developed countries, exclusive breastfeeding has decreased the incidence and/or severity of diarrhea, lower respiratory infection and urinary tract infection. In this paper, we study the factors that influence exclusive breastfeeding and use the generalized Poisson regression model to analyze the practice of exclusive breastfeeding in Mauritius. We develop two sets of quasi-likelihood equations (QLE) to estimate the parameters.

Keywords: Exclusive breastfeeding, regression model, quasi-likelihood.
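
A minimal sketch of fitting a generalized Poisson regression, assuming the GeneralizedPoisson model shipped with the statsmodels package; the covariates and counts below are synthetic placeholders, not the Mauritian survey data, and the paper's own quasi-likelihood estimating equations are not reproduced here.

  import numpy as np
  import statsmodels.api as sm
  from statsmodels.discrete.discrete_model import GeneralizedPoisson

  rng = np.random.default_rng(0)
  n = 500
  X = sm.add_constant(np.column_stack([rng.integers(18, 45, n),      # e.g. maternal age (placeholder)
                                       rng.integers(0, 2, n)]))      # e.g. employment indicator
  y = rng.poisson(lam=np.exp(0.5 + 0.01 * X[:, 1] + 0.3 * X[:, 2]))  # synthetic count response

  model = GeneralizedPoisson(y, X)     # extra dispersion parameter handles over-/under-dispersion
  result = model.fit(disp=False)
  print(result.params)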

522 The First Integral Approach in Stability Problem of Large Scale Nonlinear Dynamical Systems

Authors: M. Kidouche, H. Habbi, M. Zelmat, S. Grouni

Abstract:

In analyzing large-scale nonlinear dynamical systems, it is often desirable to treat the overall system as a collection of interconnected subsystems. Solution properties of the large-scale system are then deduced from the solution properties of the individual subsystems and the nature of the interconnections. In this paper, a new approach is proposed for the stability analysis of large-scale systems, based upon the concept of vector Lyapunov functions and decomposition methods. The present results make use of graph-theoretic decomposition techniques in which the overall system is partitioned into a hierarchy of strongly connected components. We then show that, under very reasonable assumptions, the overall system is stable once the strongly connected subsystems are stable. Finally, an example is given to illustrate the proposed constructive methodology.

Keywords: Comparison principle, First integral, Large scale system, Lyapunov stability.
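
A minimal sketch of the graph-theoretic step (partitioning the interconnection structure into strongly connected components and ordering them hierarchically), assuming the networkx package; the adjacency structure below is an arbitrary placeholder, not the paper's example.

  import networkx as nx

  # Directed interconnection graph: an edge i -> j means subsystem i influences subsystem j.
  G = nx.DiGraph([(1, 2), (2, 1), (2, 3), (3, 4), (4, 3), (4, 5)])

  sccs = list(nx.strongly_connected_components(G))     # the strongly connected subsystems
  condensed = nx.condensation(G, scc=sccs)             # acyclic hierarchy of the components
  order = list(nx.topological_sort(condensed))         # analyze the components in this order
  print(sccs)
  print([condensed.nodes[c]["members"] for c in order])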

521 Analyzing Data on Breastfeeding Using Dispersed Statistical Models

Authors: Naushad Mamode Khan, Cheika Jahangeer, Maleika Heenaye-Mamode Khan

Abstract:

Exclusive breastfeeding is the feeding of a baby on no other milk apart from breast milk. Exclusive breastfeeding during the first six months of life is very important, as it supports optimal growth and development during infancy and reduces the risk of debilitating diseases and health problems. Moreover, it helps to reduce the incidence and/or severity of diarrhea, lower respiratory infection and urinary tract infection. In this paper, we survey the factors that influence exclusive breastfeeding and use two dispersed statistical models to analyze the data: the generalized Poisson regression model and the Com-Poisson regression model.

Keywords: Exclusive breastfeeding, regression model, generalized Poisson, Com-Poisson.
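
For reference, the Com-Poisson (Conway-Maxwell-Poisson) distribution handles both over- and under-dispersion through the parameter ν, with probability mass function

  P(Y = y) = \frac{\lambda^{y}}{(y!)^{\nu}\, Z(\lambda, \nu)}, \qquad
  Z(\lambda, \nu) = \sum_{j=0}^{\infty} \frac{\lambda^{j}}{(j!)^{\nu}}, \qquad y = 0, 1, 2, \dots

where ν = 1 recovers the ordinary Poisson distribution.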

520 Transverse Vibration of Non-Homogeneous Rectangular Plates of Variable Thickness Using GDQ

Authors: R. Saini, R. Lal

Abstract:

The effect of non-homogeneity on the free transverse vibration of thin rectangular plates of bilinearly varying thickness has been analyzed using the generalized differential quadrature (GDQ) method. The non-homogeneity of the plate material is assumed to arise due to linear variations in Young's modulus and density of the plate material with the in-plane coordinates x and y. Numerical results have been computed for fully clamped and fully simply supported boundary conditions. The solution procedure by means of the GDQ method has been implemented in a MATLAB code. The effect of various plate parameters has been investigated for the first three modes of vibration. A comparison of results with those available in the literature has been presented.

Keywords: Bilinear thickness, generalized differential quadrature (GDQ), non-homogeneous, rectangular plate.
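
A minimal sketch of the GDQ ingredient: first-order weighting coefficients on an arbitrary grid from the Lagrange-polynomial formulas, verified on a simple test function. The Chebyshev-Gauss-Lobatto grid below is a common choice, not necessarily the one used in the paper.

  import numpy as np

  def gdq_first_derivative_weights(x):
      # a[i, j] = M(x_i) / ((x_i - x_j) * M(x_j)) for i != j, with M(x_i) = prod_{k != i} (x_i - x_k);
      # the diagonal follows from the row-sum property a[i, i] = -sum_{j != i} a[i, j].
      n = x.size
      diff = x[:, None] - x[None, :] + np.eye(n)               # guard the diagonal against division by zero
      M = np.prod(diff, axis=1)
      a = M[:, None] / (diff * M[None, :])
      np.fill_diagonal(a, 0.0)
      np.fill_diagonal(a, -a.sum(axis=1))
      return a

  n = 15
  x = 0.5 * (1 - np.cos(np.pi * np.arange(n) / (n - 1)))       # Chebyshev-Gauss-Lobatto points on [0, 1]
  A = gdq_first_derivative_weights(x)
  f = np.sin(2 * np.pi * x)
  print(np.max(np.abs(A @ f - 2 * np.pi * np.cos(2 * np.pi * x))))   # small discretization error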

519 Effective Charge Coupling in Low Dimensional Doped Quantum Antiferromagnets

Authors: Suraka Bhattacharjee, Ranjan Chaudhury

Abstract:

The interaction between the charge degrees of freedom of itinerant antiferromagnets is investigated in terms of the generalized charge stiffness constant corresponding to the nearest-neighbour t-J model and the t1-t2-t3-J model. Low-dimensional hole-doped antiferromagnets are well-known systems that can be described by t-J-like models. Accordingly, we have used these models to investigate the fermionic pairing possibilities and the coupling between the itinerant charge degrees of freedom. A detailed comparison between spin and charge couplings highlights that the charge and spin couplings show very similar behaviour in the over-doped region, whereas they show completely different trends in the lower doping regimes. Moreover, a qualitative equivalence between the generalized charge stiffness and the effective Coulomb interaction is also established based on comparisons with other theoretical and experimental results. Thus, the enhanced possibility of fermionic pairing is inherent in the reduction of Coulomb repulsion with increasing doping concentration. However, the increased possibility cannot give rise to pairing without the presence of some other pair-producing mechanism outside the t-J model. Therefore, one can conclude that t-J-like models on their own are not capable of producing conventional momentum-based superconducting pairing.

Keywords: Generalized charge stiffness constant, charge coupling, effective Coulomb interaction, t-J-like models, momentum-space pairing.

518 Empirical Mode Decomposition Based Denoising by Customized Thresholding

Authors: Wahiba Mohguen, Raïs El’hadi Bekka

Abstract:

This paper presents a denoising method called EMD-Custom, based on the empirical mode decomposition (EMD) and a modified customized thresholding function (Custom) algorithm. EMD is applied to decompose a noisy signal adaptively into intrinsic mode functions (IMFs). Then, all the noisy IMFs are thresholded by applying the presented thresholding function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared to EMD-based signal denoising using soft and hard thresholding. The results showed the superior performance of the proposed EMD-Custom denoising over the traditional approaches. Performance was evaluated in terms of SNR (in dB) and mean square error (MSE).

Keywords: Customized thresholding, ECG signal, EMD, hard thresholding, soft thresholding.
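
For reference, the standard hard and soft thresholding rules against which the customized function is compared are (with threshold T):

  \eta_{\mathrm{hard}}(x) = x\,\mathbf{1}\{|x| > T\}, \qquad
  \eta_{\mathrm{soft}}(x) = \operatorname{sign}(x)\,\max(|x| - T,\, 0).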

517 Existence and Uniqueness of Positive Solution for Nonlinear Fractional Differential Equation with Integral Boundary Conditions

Authors: Chuanyun Gu

Abstract:

By using fixed point theorems for a class of generalized concave and convex operators, the positive solution of a nonlinear fractional differential equation with integral boundary conditions is studied, where n ≥ 3 is an integer, μ is a parameter and 0 ≤ μ < α. Its existence and uniqueness are proved, and an iterative scheme is constructed to approximate it. Finally, two examples are given to illustrate our results.

Keywords: Fractional differential equation, positive solution, existence and uniqueness, fixed point theorem, generalized concave and convex operator, integral boundary conditions.

516 Three-Dimensional Generalized Thermoelasticity with Variable Thermal Conductivity

Authors: Hamdy M. Youssef, Mowffaq Oreijah, Hunaydi S. Alsharif

Abstract:

In this paper, a three-dimensional model of generalized thermoelasticity with one relaxation time and variable thermal conductivity has been constructed. The resulting non-dimensional governing equations, together with the Laplace and double Fourier transform techniques, have been applied to a three-dimensional half-space subjected to thermal loading with a rectangular pulse and traction-free in the directions of the principal coordinates. The inverses of the double Fourier transforms and Laplace transforms have been obtained numerically. Numerical results for the temperature increment, the invariant stress, the invariant strain, and the displacement are represented graphically. The variability of the thermal conductivity has significant effects on the thermal and mechanical waves.

Keywords: Thermoelasticity, three-dimensional, Laplace transforms, Fourier transforms, thermal conductivity.

515 Modeling Bessel Beams and Their Discrete Superpositions from the Generalized Lorenz-Mie Theory to Calculate Optical Forces over Spherical Dielectric Particles

Authors: Leonardo A. Ambrosio, Carlos H. Silva Santos, Ivan E. L. Rodrigues, Ayumi K. de Campos, Leandro A. Machado

Abstract:

In this work, we propose an algorithm developed in the Python language for the modeling of ordinary scalar Bessel beams and their discrete superpositions, and the subsequent calculation of optical forces exerted on dielectric spherical particles. The mathematical formalism, based on the generalized Lorenz-Mie theory, is implemented in Python because of its large number of free mathematical (such as SciPy and NumPy), data visualization (Matplotlib and PyJamas) and multiprocessing libraries. We also propose an approach, provided by synchronized Software as a Service (SaaS) in cloud computing, to develop a user interface embedded in a mobile application, thus providing users with the means to easily introduce the desired unknowns and parameters and see the graphical outcomes of the simulations right on their mobile devices. Initially proposed as a free Android-based application, such an app enables data post-processing in cloud-based architectures and visualization of results, figures and numerical tables.

Keywords: Bessel Beams and Frozen Waves, Generalized Lorenz-Mie Theory, Numerical Methods, Optical Forces.
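
A minimal sketch of an ideal scalar Bessel beam and a discrete superposition of such beams along the axis, using SciPy's Bessel function; the wavelength, axicon angles and coefficients are placeholder values, not those used in the paper.

  import numpy as np
  from scipy.special import jv

  k = 2 * np.pi / 632.8e-9                     # free-space wavenumber (HeNe wavelength, placeholder)

  def bessel_beam(rho, z, axicon_angle, order=0):
      krho = k * np.sin(axicon_angle)          # transverse wavenumber
      kz = k * np.cos(axicon_angle)            # longitudinal wavenumber
      return jv(order, krho * rho) * np.exp(1j * kz * z)

  rho = np.linspace(0.0, 50e-6, 400)
  angles = np.deg2rad([0.5, 0.6, 0.7])         # axicon angles of the superposed beams
  coeffs = [1.0, 0.5, 0.25]                    # discrete superposition coefficients
  psi = sum(c * bessel_beam(rho, z=0.0, axicon_angle=a) for c, a in zip(coeffs, angles))
  print("on-axis intensity:", abs(psi[0]) ** 2)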

514 Analysis of Temperature Change under Global Warming Impact using Empirical Mode Decomposition

Authors: Md. Khademul Islam Molla, Akimasa Sumi, M. Sayedur Rahman

Abstract:

The empirical mode decomposition (EMD) represents any time series as a finite set of basis functions. The bases are termed intrinsic mode functions (IMFs), which are mutually orthogonal and contain a minimum amount of cross-information. The EMD successively extracts the IMFs with the highest local frequencies in a recursive way, which effectively yields a set of low-pass filters based entirely on the properties exhibited by the data. In this paper, EMD is applied to explore the properties of multi-year air temperature records and to observe their behavior under global warming. This method decomposes the original time series into intrinsic time scales. It is capable of analyzing nonlinear, non-stationary climatic time series that cause problems for many linear statistical methods and their users. The analysis results show that the modes of the EMD present seasonal variability. Most of the IMFs have a normal distribution, and the energy density distribution of the IMFs satisfies a Chi-square distribution. The IMFs are effective in isolating physical processes of various time scales and are also statistically significant. The analysis results also show that the EMD method does a good job of identifying many characteristics of interannual climate variability. The results suggest that climate fluctuations of every single element, such as temperature, are the result of variations in the global atmospheric circulation.

Keywords: Empirical mode decomposition, instantaneous frequency, Hilbert spectrum, Chi-square distribution, anthropogenic impact.
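
A minimal sketch of the Hilbert-spectrum step that typically follows EMD (instantaneous amplitude and frequency of one IMF via the analytic signal); the synthetic mode below stands in for a temperature-derived IMF.

  import numpy as np
  from scipy.signal import hilbert

  fs = 12.0                                    # e.g. monthly samples per year
  t = np.arange(0, 40, 1 / fs)                 # 40 "years" of monthly data (placeholder)
  imf = np.cos(2 * np.pi * 1.0 * t) * (1 + 0.2 * np.sin(2 * np.pi * 0.05 * t))   # annual-cycle-like mode

  analytic = hilbert(imf)
  amplitude = np.abs(analytic)                              # instantaneous amplitude
  phase = np.unwrap(np.angle(analytic))
  inst_freq = np.gradient(phase) * fs / (2 * np.pi)         # instantaneous frequency (cycles per year)
  print("mean instantaneous frequency:", inst_freq.mean())  # close to 1 cycle per year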

513 Application of a Similarity Measure for Graphs to Web-based Document Structures

Authors: Matthias Dehmer, Frank Emmert Streib, Alexander Mehler, Jürgen Kilian, Max Mühlhauser

Abstract:

Due to the tremendous amount of information provided by the World Wide Web (WWW), developing methods for mining the structure of web-based documents is of considerable interest. In this paper, we present a similarity measure for graphs representing web-based hypertext structures. Our similarity measure is mainly based on a novel representation of a graph as linear integer strings, whose components represent structural properties of the graph. The similarity of two graphs is then defined as the optimal alignment of the underlying property strings. In this paper, we apply the well-known technique of sequence alignment to solve a novel and challenging problem: measuring the structural similarity of generalized trees. In other words, we first transform our graphs, considered as high-dimensional objects, into linear structures. Then we derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees. Hence, we transform a graph similarity problem into a string similarity problem in order to develop an efficient graph similarity measure. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs representing web-based document structures.

Keywords: Graph similarity, hierarchical and directed graphs, hypertext, generalized trees, web structure mining.
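
As a loose illustration only (not the optimal-alignment scoring developed in the paper), the sketch below derives a simple degree-based property string for each tree and compares the strings with Python's difflib sequence matcher; the tree encoding and the descriptor are hypothetical.

  import difflib

  def degree_string(tree):
      # Property string: node out-degrees collected in preorder (one simple structural descriptor).
      children = tree.get("children", [])
      s = [len(children)]
      for c in children:
          s.extend(degree_string(c))
      return s

  t1 = {"children": [{"children": [{}, {}]}, {"children": []}]}
  t2 = {"children": [{"children": [{}]}, {}]}
  s1, s2 = degree_string(t1), degree_string(t2)
  print(difflib.SequenceMatcher(None, s1, s2).ratio())      # structural similarity in [0, 1]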

512 A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition

Authors: Hazem M. El-Bakry

Abstract:

Neural processors have shown good results for detecting a certain character in a given input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these faster neural networks for the searching process. The principle of the divide-and-conquer strategy is applied through image decomposition: each image is divided into small sub-images, and then each one is tested separately by using a single faster neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of faster neural networks. In contrast to using only faster neural processors, the speed-up ratio increases with the size of the input image when faster neural processors are combined with image decomposition. Moreover, the problem of local sub-image normalization in the frequency domain is solved, and the effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain. The overall speed-up ratio of the detection process is increased, as the normalization of weights is done off-line.

Keywords: Fast Character Detection, Neural Processors, Cross Correlation, Image Normalization, Parallel Processing.
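
A minimal sketch of the core operation: cross-correlating an input image with a weight kernel in the frequency domain (pointwise multiplication by the conjugate spectrum), which is equivalent to sliding the kernel over the image; sizes and values are placeholders.

  import numpy as np

  def cross_correlate_freq(image, kernel):
      # corr = IFFT( FFT(image) * conj(FFT(zero-padded kernel)) )
      padded = np.zeros_like(image, dtype=float)
      padded[:kernel.shape[0], :kernel.shape[1]] = kernel
      spec = np.fft.fft2(image) * np.conj(np.fft.fft2(padded))
      return np.real(np.fft.ifft2(spec))

  rng = np.random.default_rng(0)
  image = rng.random((64, 64))
  image[20:28, 30:38] = 1.0                          # embed a bright "character"
  kernel = np.ones((8, 8)) / 64.0                    # stand-in for learned neural weights
  corr = cross_correlate_freq(image, kernel)
  print(np.unravel_index(np.argmax(corr), corr.shape))   # peak at (20, 30)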

511 Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)

Authors: Mingren Shi, Michael Renton

Abstract:

There is a world-wide need for the development of sustainable management strategies to control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe and inexpensive way to weigh the merits of various management options. However, the usefulness of simulation models relies on the accurate estimation of important model parameters, such as mortality. Concentration and time of exposure are both important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes in R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, thus motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of our two-locus model, were also available. Hence it became possible to explicitly include the mortalities of the different genotypes in the model. In this paper, we describe how we used two generalized linear models (GLMs), probit and logistic, to fit the available experimental data sets. We used a direct algebraic approach, the generalized inverse matrix technique, rather than traditional maximum likelihood estimation, to estimate the model parameters. The results show that both probit and logistic models fit the data sets well, but the former is much better in terms of small least-squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to that of maximum likelihood estimation, but is less time-consuming and computationally demanding.

Keywords: Mortality estimation, probit models, logistic model, generalized inverse matrix approach, pest control simulation.
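
A minimal sketch of the flavor of the approach: transform the observed mortalities with the probit link and solve the resulting linear system with a generalized (Moore-Penrose) inverse instead of iterative maximum likelihood; the concentration-mortality data below are invented placeholders, not the experimental data sets.

  import numpy as np
  from scipy.stats import norm

  conc = np.array([0.01, 0.02, 0.05, 0.1, 0.2])           # phosphine concentration (placeholder units)
  mortality = np.array([0.08, 0.20, 0.55, 0.80, 0.97])    # observed proportion killed (invented)

  y = norm.ppf(mortality)                                  # probit transform of the mortalities
  X = np.column_stack([np.ones_like(conc), np.log10(conc)])
  beta = np.linalg.pinv(X) @ y                             # generalized-inverse (least-squares) solution
  lc50 = 10 ** (-beta[0] / beta[1])                        # concentration giving probit 0, i.e. 50% mortality
  print("intercept, slope:", beta, "LC50:", lc50)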

510 FPGA Implementation of Generalized Maximal Ratio Combining Receiver Diversity

Authors: Rafic Ayoubi, Jean-Pierre Dubois, Rania Minkara

Abstract:

In this paper, we study the FPGA implementation of a novel supra-optimal receiver diversity combining technique, generalized maximal ratio combining (GMRC), for wireless transmission over fading channels in SIMO systems. Previously published results using an ML-detected GMRC diversity signal driven by BPSK showed superior bit error rate performance compared to the widely used MRC combining scheme in an imperfect channel estimation (ICE) environment; under perfect channel estimation, the performance of GMRC and MRC was identical. The main drawback of the earlier GMRC study was that it was theoretical, so a successful FPGA implementation using pipeline techniques is needed as a wireless communication test-bed for practical real-life situations. Simulation results showed that the hardware implementation was efficient in terms of both speed and area. Since diversity combining is especially effective in small femto- and picocells, internet-associated wireless peripheral systems stand to benefit most from GMRC. As a result, many spin-off applications can be made to the hardware of IP-based 4th-generation networks.

Keywords: Femto-internet cells, field-programmable gate array, generalized maximal-ratio combining, Lyapunov fractal dimension, pipelining technique, wireless SIMO channels.
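
For context, the classical MRC baseline (not the GMRC generalization studied in the paper) weights each branch by its conjugate channel estimate before summing; a minimal BPSK-over-Rayleigh sketch with perfect channel knowledge:

  import numpy as np

  rng = np.random.default_rng(0)
  n_branches, n_bits, snr_db = 4, 10000, 5
  bits = rng.integers(0, 2, n_bits)
  s = 2.0 * bits - 1.0                                      # BPSK symbols
  h = (rng.standard_normal((n_branches, n_bits))
       + 1j * rng.standard_normal((n_branches, n_bits))) / np.sqrt(2)   # Rayleigh branches
  noise_std = 10 ** (-snr_db / 20)
  noise = noise_std * (rng.standard_normal(h.shape)
                       + 1j * rng.standard_normal(h.shape)) / np.sqrt(2)
  r = h * s + noise

  combined = np.sum(np.conj(h) * r, axis=0)                 # MRC: conjugate-channel weighting
  decisions = (combined.real > 0).astype(int)
  print("BER:", np.mean(decisions != bits))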

509 Generalized Vortex Lattice Method for Predicting Characteristics of Wings with Flap and Aileron Deflection

Authors: Mondher Yahyaoui

Abstract:

A generalized vortex lattice method for complex lifting surfaces with flap and aileron deflection is formulated. The method is not restricted by the linearized-theory assumption and accounts for all standard geometric lifting-surface parameters: camber, taper, sweep, washout and dihedral, in addition to flap and aileron deflection. Thickness is not accounted for, since the physical lifting body is replaced by a lattice of panels located on the mean camber surface. This panel lattice setup and the treatment of different wake geometries are what distinguish the present work from the overwhelming majority of previous solutions based on the vortex lattice method. A MATLAB code implementing the proposed formulation is developed and validated by comparing our results to existing experimental and numerical ones, and good agreement is demonstrated. It is then used to study the accuracy of the widely used classical vortex lattice method. It is shown that the classical approach gives good agreement in the clean configuration but is off by as much as 30% when a flap or aileron deflection of 30° is imposed. This discrepancy is mainly due to the linearized-theory assumption associated with the conventional method. A comparison of the effect of four different wake geometries on the values of the aerodynamic coefficients was also carried out, and it was found that the choice of wake shape had very little effect on the results.

Keywords: Aileron deflection, camber-surface-bound vortices, classical VLM, Generalized VLM, flap deflection.
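
A minimal sketch of the basic building block of any vortex lattice method: the velocity induced at a point by a straight vortex segment of circulation Γ (the Biot-Savart law in the standard Katz-and-Plotkin form); the geometry and circulation below are placeholders.

  import numpy as np

  def induced_velocity(p, p1, p2, gamma):
      # Velocity induced at point p by a straight vortex segment running from p1 to p2.
      r1, r2 = p - p1, p - p2
      r1xr2 = np.cross(r1, r2)
      norm_sq = np.dot(r1xr2, r1xr2)
      if norm_sq < 1e-12:                      # evaluation point lies on the segment's axis
          return np.zeros(3)
      r0 = p2 - p1
      k = gamma / (4.0 * np.pi * norm_sq)
      return k * r1xr2 * np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))

  # Example: unit-strength segment along x, evaluated one unit above its midpoint.
  v = induced_velocity(np.array([0.5, 0.0, 1.0]),
                       np.array([0.0, 0.0, 0.0]),
                       np.array([1.0, 0.0, 0.0]), gamma=1.0)
  print(v)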

508 The Analysis of Different Classes of Weighted Fuzzy Petri Nets and Their Features

Authors: Yurii Bloshko, Oksana Olar

Abstract:

This paper presents an analysis of six different classes of Petri nets: fuzzy Petri nets (FPN), generalized fuzzy Petri nets (GFPN), parameterized fuzzy Petri nets (PFPN), T2GFPN, flexible generalized fuzzy Petri nets (FGFPN) and binary Petri nets (BPN). These classes were simulated in the dedicated software PNeS® to analyze their pros and cons on example models devoted to the decision-making process of passenger transport logistics. The paper analyzes two approaches: one in which the input values are filled with experts' knowledge, and one in which fuzzy expectations, represented by output values, are added as well. These approaches exercise the possible triples of functions, which are replaced with different combinations of t-norms and s-norms.

Keywords: Fuzzy Petri net, intelligent computational techniques, knowledge representation, triangular norms.
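
For reference, the t-norm/s-norm pairs typically swapped into such triples of functions include the minimum/maximum and product/probabilistic-sum pairs; a tiny sketch of how they combine fuzzy truth degrees:

  # Common t-norm / s-norm pairs acting on truth degrees in [0, 1].
  t_min = lambda a, b: min(a, b)               # Gödel (minimum) t-norm
  s_max = lambda a, b: max(a, b)               # maximum s-norm
  t_prod = lambda a, b: a * b                  # product t-norm
  s_prob = lambda a, b: a + b - a * b          # probabilistic sum s-norm

  a, b = 0.7, 0.4
  print(t_min(a, b), s_max(a, b), t_prod(a, b), s_prob(a, b))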
