Search results for: frequency-variation based method.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15951

15561 Recovering the Boundary Data in the Two Dimensional Inverse Heat Conduction Problem Using the Ritz-Galerkin Method

Authors: Saeed Sarabadan, Kamal Rashedi

Abstract:

This article presents a numerical method for finding the heat flux in an inhomogeneous inverse heat conduction problem with linear boundary conditions and an extra specification at the terminal. The method is based upon applying a satisfier function along with the Ritz-Galerkin technique to reduce the approximate solution of the inverse problem to the solution of a system of algebraic equations. The instability of the problem is resolved by taking advantage of Landweber's iterations as an admissible regularization strategy. In computations, we obtain stable and low-cost results which demonstrate the efficiency of the technique.
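
As a hedged illustration of the regularization step mentioned above, the sketch below applies Landweber iterations to a generic discretized linear inverse problem Ax = b; the operator, data, and step size are placeholder assumptions, not the paper's Ritz-Galerkin system.

```python
import numpy as np

def landweber(A, b, omega=None, iterations=200):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k).

    Converges for 0 < omega < 2 / ||A||^2 and acts as a regularizer
    when stopped early (e.g. by a discrepancy principle).
    """
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step size
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        x = x + omega * A.T @ (b - A @ x)
    return x

# Illustrative ill-conditioned system standing in for the discretized inverse problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40)) @ np.diag(np.logspace(0, -6, 40))
x_true = rng.standard_normal(40)
b = A @ x_true + 1e-4 * rng.standard_normal(60)
x_est = landweber(A, b, iterations=500)
print(np.linalg.norm(A @ x_est - b))
```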

Keywords: Inverse problem, parabolic equations, heat equation, Ritz-Galerkin method, Landweber iterations.

15560 Extending Global Full Orthogonalization method for Solving the Matrix Equation AXB=F

Authors: Fatemeh Panjeh Ali Beik

Abstract:

In the present work, we propose a new method for solving the matrix equation AXB=F. The new method can be considered as a generalized form of the well-known global full orthogonalization method (Gl-FOM) for solving multiple linear systems. Hence, the method will be called extended Gl-FOM (EGl-FOM). For implementing EGl-FOM, generalized forms of the block Krylov subspace and the global Arnoldi process are presented. Finally, some numerical experiments are given to illustrate the efficiency of our new method.
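
The EGl-FOM algorithm itself is not reproduced here; as a small reference point only, the sketch below solves AXB = F directly through the Kronecker identity vec(AXB) = (Bᵀ ⊗ A) vec(X). This dense baseline is practical only for tiny matrices and is exactly the kind of computation that Krylov methods such as Gl-FOM/EGl-FOM are designed to avoid.

```python
import numpy as np

def solve_axb_direct(A, B, F):
    """Direct reference solution of AXB = F via vec(AXB) = (B^T kron A) vec(X)."""
    n, m = A.shape[1], B.shape[0]
    K = np.kron(B.T, A)                      # dense Kronecker coefficient matrix
    x = np.linalg.solve(K, F.flatten(order="F"))
    return x.reshape((n, m), order="F")

A = np.array([[4.0, 1.0], [2.0, 3.0]])
B = np.array([[1.0, 2.0], [0.0, 1.0]])
X_true = np.array([[1.0, -1.0], [2.0, 0.5]])
F = A @ X_true @ B
print(np.allclose(solve_axb_direct(A, B, F), X_true))  # True
```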

Keywords: Matrix equations, Iterative methods, Block Krylov subspace methods.

15559 Design and Analysis of Gauge R&R Studies: Making Decisions Based on ANOVA Method

Authors: Afrooz Moatari Kazerouni

Abstract:

In a competitive production environment, critical decisions are based on data obtained by random sampling of product units. The efficiency of these decisions depends on data quality and on the scale of their reliability. This leads to the necessity of a reliable measurement system. Therefore, examining the measurement process and analysing its errors constitutes what is known as Measurement System Analysis (MSA). The aim of this research is to establish the necessity of, and confidence in, a broader approach to analysing measurement systems, particularly the use of Repeatability and Reproducibility Gauges (GR&R) to improve physical measurements. Although repeatability and reproducibility gauges are now widely available in production industries, they are not applied as effectively as other measurement system analysis methods. To introduce this method and provide feedback for improving measurement systems, this survey focuses on the ANOVA method as the most widespread way of calculating Repeatability and Reproducibility (R&R).
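
As a hedged sketch of the ANOVA route described above, the code below fits a crossed parts × operators model with replication to made-up measurement data and converts the mean squares into repeatability and reproducibility variance components, using one common set of GR&R conversion formulas.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical crossed GR&R study: p parts, o operators, n repeat measurements each.
rng = np.random.default_rng(1)
p, o, n = 10, 3, 3
rows = [{"part": i, "operator": j,
         "y": 10 + 0.5 * i + 0.2 * j + rng.normal(0, 0.1)}
        for i in range(p) for j in range(o) for _ in range(n)]
df = pd.DataFrame(rows)

model = ols("y ~ C(part) * C(operator)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)
ms = anova["sum_sq"] / anova["df"]          # mean squares

ms_p, ms_o = ms["C(part)"], ms["C(operator)"]
ms_po, ms_e = ms["C(part):C(operator)"], ms["Residual"]

repeatability = ms_e                                      # equipment variation
operator_var = max((ms_o - ms_po) / (p * n), 0.0)
interaction_var = max((ms_po - ms_e) / n, 0.0)
reproducibility = operator_var + interaction_var          # appraiser variation
part_to_part = max((ms_p - ms_po) / (o * n), 0.0)
grr = repeatability + reproducibility

print(f"%GRR = {100 * np.sqrt(grr / (grr + part_to_part)):.1f}%")
```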

Keywords: Analysis of Variance (ANOVA), Measurement System Analysis (MSA), Part-Operator interaction effect, Repeatability and Reproducibility.

15558 Wasting Human and Computer Resources

Authors: Mária Csernoch, Piroska Biró

Abstract:

The legends about “user-friendly” and “easy-to-use” birotical tools (computer-related office tools) have been spreading and misleading end-users. This approach has led to an extremely high number of incorrect documents, causing serious financial losses in the creating, modifying, and retrieving processes. Our research proved that there are at least two sources of this underachievement: (1) The lack of a definition of correctly edited and formatted documents. Consequently, end-users do not know whether their methods and results are correct or not; they are unaware of their own ignorance, which prevents them from recognizing their lack of knowledge. (2) The end-users’ problem-solving methods. We have found that in non-traditional programming environments end-users apply, almost exclusively, surface-approach metacognitive methods to carry out their computer-related activities, which have proved less effective than deep-approach methods. Based on these findings, we have developed deep-approach methods which are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical documents, text-based documents. We provide a definition of correctly edited text and, based on this definition, adapt the debugging method known from programming. According to the method, before real text editing takes place, a thorough debugging of already existing texts and a categorization of errors are carried out. With this method, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text. The method has proved much more effective than the previously applied surface-approach methods. Its advantages are that real text handling requires far fewer human and computer resources than clicking aimlessly in the GUI (Graphical User Interface), and that data retrieval is much more effective than from error-prone documents.

Keywords: Deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources.

15557 An Optimization of the New Die Design of Sheet Hydroforming by Taguchi Method

Authors: M. Hosseinzadeh, S. A. Zamani, A. Taheri

Abstract:

During the last few years, several sheet hydroforming processes have been introduced. Despite the advantages of these methods, they have some limitations. The two main processes are standard hydroforming and hydromechanical deep drawing. A new sheet hydroforming die set was proposed that combines the advantages of both processes and eliminates their limitations. In this method, a polyurethane plate was used as a part of the die set to control the blank holder force. This paper outlines the Taguchi optimization methodology applied to optimize the effective parameters in forming cylindrical cups with the new sheet hydroforming die set. The process parameters evaluated in this research are polyurethane hardness, polyurethane thickness, forming pressure path and polyurethane hole diameter. A design of experiments based upon Taguchi's L9 orthogonal array was used, and analysis of variance (ANOVA) was employed to analyze the effect of these parameters on the forming pressure. The analysis of the results showed that the optimal combination for low forming pressure is harder polyurethane, a bigger polyurethane hole diameter and thinner polyurethane. Finally, a confirmation test was conducted based on the optimal combination of parameters, and it was shown that the Taguchi method is suitable for examining the optimization process.
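
A minimal sketch of the Taguchi analysis step: a standard L9(3^4) array, a smaller-the-better signal-to-noise ratio (low forming pressure is desired), and per-level main effects. The responses and factor level averages below are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels (coded 0, 1, 2).
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

factors = ["hardness", "thickness", "pressure path", "hole diameter"]
# Hypothetical forming-pressure responses (MPa), one per run.
y = np.array([42.0, 39.5, 37.8, 40.2, 36.9, 38.4, 35.7, 37.1, 36.2])

# Smaller-the-better S/N ratio: SN = -10 log10(mean(y^2)) per run.
sn = -10.0 * np.log10(y ** 2)

for j, name in enumerate(factors):
    level_means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
    best = int(np.argmax(level_means))       # highest S/N = lowest forming pressure
    print(f"{name}: level means {np.round(level_means, 2)}, best level {best}")
```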

Keywords: Sheet Hydroforming, Optimization, Taguchi Method

15556 Design of Compliant Mechanism Based Microgripper with Three Fingers Using Topology Optimization

Authors: R. Bharanidaran, B. T. Ramesh

Abstract:

High precision in motion is required to manipulate micro objects in precision industries for micro assembly, cell manipulation, etc. Precision manipulation is achieved through appropriate mechanism design of micro devices such as microgrippers. A compliant mechanism design is the better option for achieving highly precise and controlled motion. This research article highlights the method of designing a compliant three-fingered microgripper suitable for holding asymmetric objects. Topology optimization, a systematic technique, is implemented in this work to arrive at a topologically optimized design of the mechanism needed to perform the required micro motion of the gripper. The optimization technique has the drawback of generating senseless regions such as node-to-node connectivity and a staircase effect at the boundaries. Hence, post-processing of the design is required to make it manufacturable. To reduce the effort of the post-processing stage and to preserve the edges of the image, a cubic spline interpolation technique is introduced in the MATLAB program. The structural performance of the topologically developed mechanism design is tested using finite element method (FEM) software. Further, the microgripper structure is examined to find its fatigue life and vibration characteristics.

Keywords: Compliant mechanism, Cubic spline interpolation, FEM, Topology optimization.

15555 Prioritization Method in the Fuzzy Analytic Network Process by Fuzzy Preferences Programming Method

Authors: Tarifa S. Almulhim, Ludmil Mikhailov, Dong-Ling Xu

Abstract:

In this paper, a method for deriving a group priority vector in the Fuzzy Analytic Network Process (FANP) is proposed. By introducing importance weights of multiple decision makers (DMs) based on their experience, the Fuzzy Preferences Programming Method (FPP) is extended to a fuzzy group prioritization problem in the FANP. Additionally, fuzzy pair-wise comparison judgments are used rather than exact numerical assessments in order to model the uncertainty and imprecision in the DMs' judgments, and the fuzzy group prioritization problem is then transformed into a fuzzy non-linear programming optimization problem which maximizes group satisfaction. Unlike the known fuzzy prioritization techniques, the new method proposed in this paper can easily derive crisp weights from an incomplete and inconsistent fuzzy set of comparison judgments and does not require additional aggregation procedures. Detailed numerical examples are used to illustrate the implementation of our approach and compare it with the latest fuzzy prioritization methods.

Keywords: Fuzzy Analytic Network Process (FANP), Fuzzy Non-linear Programming, Fuzzy Preferences Programming Method (FPP), Multiple Criteria Decision-Making (MCDM), Triangular Fuzzy Number.

15554 A Rule-based Approach for Anomaly Detection in Subscriber Usage Pattern

Authors: Rupesh K. Gopal, Saroj K. Meher

Abstract:

In this report we present a rule-based approach to detect anomalous telephone calls. The method described here uses subscriber usage CDR (call detail record) data sampled over two observation periods: a study period and a test period. The study period contains call records of customers' non-anomalous behaviour. Customers are first grouped according to their similar usage behaviour (e.g., average number of local calls per week). For customers in each group, we develop a probabilistic model to describe their usage. Next, we use maximum likelihood estimation (MLE) to estimate the parameters of the calling behaviour. Then we determine thresholds by calculating the acceptable change within a group. MLE is applied to the data in the test period to estimate the parameters of the calling behaviour, and these parameters are compared against the thresholds. Any deviation beyond a threshold is used to raise an alarm. This method has the advantage of identifying local anomalies, as compared to techniques which identify global anomalies. The method is tested on 90 days of study data and 10 days of test data from telecom customers. For medium to large deviations in the data in the test window, the method is able to identify 90% of anomalous usage with less than a 1% false alarm rate.
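
The core of the rule, fit per-group usage statistics in a study window and flag test-window usage that deviates beyond a group threshold, can be sketched as follows; the Gaussian usage model, weekly call counts and 3-sigma threshold are illustrative assumptions rather than the paper's exact model.

```python
import numpy as np

def fit_group_model(study_calls):
    """MLE of a Gaussian model for a usage group's weekly call counts."""
    return np.mean(study_calls), np.std(study_calls)

def detect_anomaly(test_calls, mu, sigma, k=3.0):
    """Raise an alarm if the test-period mean deviates beyond k sigmas
    from the group's study-period behaviour."""
    return abs(np.mean(test_calls) - mu) > k * sigma

# Hypothetical data: ~13 weeks of study usage, ~2 weeks of test usage.
rng = np.random.default_rng(7)
study = rng.poisson(20, size=13)          # normal weekly local-call counts
test_normal = rng.poisson(21, size=2)
test_fraud = rng.poisson(80, size=2)      # sudden jump in usage

mu, sigma = fit_group_model(study)
print(detect_anomaly(test_normal, mu, sigma))  # expected False
print(detect_anomaly(test_fraud, mu, sigma))   # expected True
```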

Keywords: Subscription fraud, fraud detection, anomaly detection, maximum likelihood estimation, rule based systems.

15553 Frequency Modulation in Vibro-Acoustic Modulation Method

Authors: D. Liu, D. M. Donskoy

Abstract:

The vibro-acoustic modulation method is based on the modulation of a high-frequency ultrasonic wave (carrier) by low-frequency vibration in the presence of various defects, primarily contact-type defects such as cracks, delamination, etc. The presence and severity of the defect are measured by the ratio of the spectral sidebands to the carrier in the spectrum of the modulated signal. This approach, however, does not differentiate between amplitude and frequency modulation, AM and FM, respectively. This paper is an attempt to explain the generation mechanisms of FM and its correlation with the flaw properties. Here we propose two possible mechanisms leading to FM, based on nonlinear local defect resonance and dynamic acoustoelastic models.
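
The severity metric referred to above, the ratio of the spectral sidebands to the carrier, can be estimated directly from an FFT of the modulated signal; the carrier and vibration frequencies and the synthetic AM test signal below are placeholders, not measured data.

```python
import numpy as np

fs = 100_000                       # sampling rate, Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
fc, fv = 20_000, 200               # carrier (ultrasound) and vibration frequencies
# Synthetic amplitude-modulated probe signal standing in for a measured one.
signal = (1 + 0.05 * np.cos(2 * np.pi * fv * t)) * np.cos(2 * np.pi * fc * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

def amplitude_at(f):
    """Spectral amplitude at the bin nearest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

carrier = amplitude_at(fc)
sidebands = amplitude_at(fc - fv) + amplitude_at(fc + fv)
print(f"sideband-to-carrier ratio: {20 * np.log10(sidebands / carrier):.1f} dB")
```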

Keywords: Non-destructive testing, nonlinear acoustics, structural health monitoring, acoustoelasticity, local defect resonance.

15552 A Minimum Spanning Tree-Based Method for Initializing the K-Means Clustering Algorithm

Authors: J. Yang, Y. Ma, X. Zhang, S. Li, Y. Zhang

Abstract:

The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the algorithm often converges to local minima because it is sensitive to the initial cluster centers. In this paper, an algorithm for selecting initial cluster centers on the basis of the minimum spanning tree (MST) is presented. Sets of vertices in the MST with the same degree are regarded as a whole and used to find the skeleton data points. Furthermore, a distance measure between the skeleton data points that takes both degree and Euclidean distance into account is presented. Finally, the MST-based initialization method for the k-means algorithm is presented, and the corresponding time complexity is analyzed as well. The presented algorithm is tested on five data sets from the UCI Machine Learning Repository. The experimental results illustrate the effectiveness of the presented algorithm compared to three existing initialization methods.
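
A simplified MST-based initialization, not the authors' exact skeleton-point procedure, can be sketched as follows: build the MST of the data, cut its k-1 longest edges, and use the component means as the initial centers.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import squareform, pdist
from sklearn.cluster import KMeans

def mst_initial_centers(X, k):
    """Initial k-means centers from the MST: cut the k-1 longest edges
    and take the mean of each resulting connected component."""
    mst = minimum_spanning_tree(squareform(pdist(X))).toarray()
    edges = np.argwhere(mst > 0)
    weights = mst[mst > 0]
    for idx in np.argsort(weights)[-(k - 1):]:   # remove the k-1 heaviest edges
        i, j = edges[idx]
        mst[i, j] = 0.0
    n_comp, labels = connected_components(mst, directed=False)
    return np.array([X[labels == c].mean(axis=0) for c in range(n_comp)])

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in [(0, 0), (3, 0), (0, 3)]])
centers = mst_initial_centers(X, k=3)
km = KMeans(n_clusters=3, init=centers, n_init=1).fit(X)
print(km.inertia_)
```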

Keywords: Degree, initial cluster center, k-means, minimum spanning tree.

15551 Power System Security Assessment using Binary SVM Based Pattern Recognition

Authors: S. Kalyani, K. Shanti Swarup

Abstract:

Power system security is a major concern in real-time operation. The conventional method of security evaluation consists of performing continuous load flow and transient stability studies with a simulation program. This is highly time-consuming and infeasible for on-line application. Pattern Recognition (PR) is a promising tool for on-line security evaluation. This paper proposes a Support Vector Machine (SVM) based binary classification for static and transient security evaluation. The proposed SVM-based PR approach is implemented on the New England 39-bus and IEEE 57-bus systems. The simulation results of the SVM classifier are compared with those of other classifier algorithms such as the Method of Least Squares (MLS), Multi-Layer Perceptron (MLP) and Linear Discriminant Analysis (LDA) classifiers.
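
The classification stage itself, mapping pre-computed operating-point features to a secure/insecure label, is a standard binary SVM; in the sketch below the features are random placeholders standing in for load-flow or transient-stability indices.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: each row is an operating point described by, e.g.,
# bus voltages and line flows; the label marks it secure (1) or insecure (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```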

Keywords: Static Security, Transient Security, Pattern Recognition, Classifier, Support Vector Machine.

15550 Application of He’s Parameter-Expansion Method to Coupled Van der Pol Oscillators with Two Kinds of Time-Delay Coupling

Authors: Mohammad Taghi Darvishi, Samad Kheybari

Abstract:

In this paper, the dynamics of a system of two van der Pol oscillators with delayed position and velocity coupling is studied. We provide an approximate solution for this system using the parameter-expansion method. We also obtain approximate values for the frequencies of the system. The parameter-expansion method is more efficient than the perturbation method for this system because it is independent of the perturbation-parameter assumption.

Keywords: Parameter-expansion method, coupled van der Pol oscillators, time-delay system.

15549 A Relationship Extraction Method from Literary Fiction Considering Korean Linguistic Features

Authors: Hee-Jeong Ahn, Kee-Won Kim, Seung-Hoon Kim

Abstract:

Knowledge of the relationships between characters can help readers to understand the overall story or plot of a literary fiction. In this paper, we present a method for extracting the specific relationships between characters from Korean literary fiction. Generally, methods for extracting relationships between characters in text are statistical or computational methods based on the sentence distance between characters, without considering Korean linguistic features. Furthermore, it is difficult to extract a relationship with direction from text, such as one-sided love, because such methods consider only the weight of a relationship, not its direction. Therefore, in order to identify specific relationships between characters, we propose a statistical method that considers linguistic features such as syntactic patterns and speech verbs in Korean. The result of our method is represented as a weighted directed graph of the relationships between the characters. Furthermore, we expect that the proposed method could be applied to relationship analysis between characters in other content such as movies or TV dramas.
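
The output representation described above, a weighted directed graph of character relationships, is straightforward to build once (source, target, relation, weight) tuples have been extracted; the tuples below are invented examples, not output of the authors' Korean-specific extractor.

```python
import networkx as nx

# Hypothetical extracted tuples: (source character, target character, relation, weight).
relations = [
    ("Younghee", "Chulsoo", "loves", 5),
    ("Chulsoo", "Minji", "loves", 3),      # one-sided: no reciprocal edge
    ("Minji", "Younghee", "resents", 2),
]

G = nx.DiGraph()
for src, dst, rel, w in relations:
    G.add_edge(src, dst, relation=rel, weight=w)

# Direction matters: an edge A->B without B->A captures a one-sided relationship.
for src, dst, data in G.edges(data=True):
    print(f"{src} -[{data['relation']}, w={data['weight']}]-> {dst}")
```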

Keywords: Data mining, Korean linguistic feature, literary fiction, relationship extraction.

15548 Spike Sorting Method Using Exponential Autoregressive Modeling of Action Potentials

Authors: Sajjad Farashi

Abstract:

Neurons in the nervous system communicate with each other by producing electrical signals called spikes. To investigate the physiological function of the nervous system it is essential to study the activity of neurons by detecting and sorting spikes in the recorded signal. In this paper a method is proposed for the spike sorting problem based on nonlinear modeling of spikes using an exponential autoregressive model. A genetic algorithm is utilized for model parameter estimation, and selected model coefficients are used as features for sorting purposes. For optimal selection of model coefficients, a self-organizing feature map is used. The results show that modeling spikes with a nonlinear autoregressive model outperforms its linear counterpart. Also, the features extracted from the coefficients of the exponential autoregressive model are better than wavelet-based features and yield more compact and well-separated clusters. In the case of spikes that differ only in small-scale structure, where principal component analysis fails to produce separated clouds in the feature space, the proposed method obtains well-separated clusters, which removes the necessity of applying complex classifiers.

Keywords: Exponential autoregressive model, Neural data, spike sorting, time series modeling.

15547 Mathematical Reconstruction of an Object Image Using X-Ray Interferometric Fourier Holography Method

Authors: M. K. Balyan

Abstract:

The main principles of the X-ray Fourier interferometric holography method are discussed. The object image is reconstructed by the mathematical method of Fourier transformation. Three methods are presented: the approximation method, the iteration method and the step-by-step method. As an example, the reconstruction of the complex amplitude transmission coefficient of a beryllium wire is considered. The results reconstructed by the three presented methods are compared. The best results are obtained by means of the step-by-step method.

Keywords: Dynamical diffraction, hologram, object image, X-ray holography.

15546 DIFFER: A Propositionalization Approach for Learning from Structured Data

Authors: Thashmee Karunaratne, Henrik Böstrom

Abstract:

Logic-based methods for learning from structured data are limited with respect to handling large search spaces, preventing large-sized substructures from being considered by the resulting classifiers. A novel approach to learning from structured data is introduced that employs a structure transformation method, called finger printing, to address these limitations. The method, which generates features corresponding to arbitrarily complex substructures, is implemented in a system called DIFFER. The method is demonstrated to perform comparably to an existing state-of-the-art method on some benchmark data sets without requiring restrictions on the search space. Furthermore, learning from the union of features generated by finger printing and the previous method outperforms learning from each individual set of features on all benchmark data sets, demonstrating the benefit of developing complementary, rather than competing, methods for structure classification.

Keywords: Machine learning, Structure classification, Propositionalization.

15545 Extraction of Data from Web Pages: A Vision Based Approach

Authors: P. S. Hiremath, Siddu P. Algur

Abstract:

With the explosive growth of information sources available on the World Wide Web, it has become increasingly difficult to identify the relevant pieces of information, since web pages are often cluttered with irrelevant content like advertisements, navigation panels and copyright notices surrounding the main content of the page. Hence, tools for mining data regions, data records and data items need to be developed in order to provide value-added services. Currently available automatic techniques to mine data regions from web pages are still unsatisfactory because of their poor performance and tag-dependence. In this paper a novel method to extract data items from web pages automatically is proposed. It comprises two steps: (1) identification and extraction of the data regions based on visual clues, and (2) identification of data records and extraction of data items from a data region. For step 1, a novel and more effective method is proposed which finds the data regions formed by all types of tags using visual clues. For step 2, a more effective method, namely Extraction of Data Items from web Pages (EDIP), is adopted to mine data items. The EDIP technique is a list-based approach in which the list is a linear data structure. The proposed technique is able to mine non-contiguous data records and can correctly identify data regions, irrespective of the type of tag in which they are bound. Our experimental results show that the proposed technique performs better than the existing techniques.

Keywords: Web data records, web data regions, web mining.

15544 Trajectory Tracking of a Redundant Hybrid Manipulator Using a Switching Control Method

Authors: Atilla Bayram

Abstract:

This paper presents the trajectory tracking control of a spatial redundant hybrid manipulator. The manipulator consists of two parallel manipulators, each of which is a variable geometry truss (VGT) module. Each VGT module, with 3 degrees of freedom (DOF), is a planar parallel manipulator, and the operational planes of these VGT modules are arranged to be orthogonal to each other. The manipulator also contains a twist motion part attached to the top of the second VGT module to supply the missing orientation of the end-effector. These three modules constitute a 7-DOF hybrid (parallel-parallel) redundant spatial manipulator. The forward kinematics equations of this manipulator are obtained; then, according to these equations, the inverse kinematics is solved based on an optimization with joint limit avoidance. The dynamic equations are formed using the virtual work method. In order to test the performance of the redundant manipulator and the controllers presented, two different desired trajectories are followed using the computed force control method and a switching control method. The switching control method combines the computed force control method with a genetic algorithm, where the genetic algorithm is used only for fine tuning in the compensation of the trajectory tracking errors.

Keywords: Computed force control method, genetic algorithm, hybrid manipulator, inverse kinematics of redundant manipulators, variable geometry truss.

15543 A Family Cars' Life Cycle Cost (LCC)-Oriented Hybrid Modelling Approach Combining ANN and CBR

Authors: Xiaochuan Chen, Jianguo Yang, Beizhi Li

Abstract:

Design for cost (DFC) is a method that reduces life cycle cost (LCC) from the designer's perspective. The multiple domain features mapping (MDFM) methodology was given in DFC. Using MDFM, design features can be used to estimate the LCC. From the angle of DFC, the design features of family cars were obtained, such as all dimensions, engine power and emission volume. At the conceptual design stage, cars' LCC was estimated using the back propagation (BP) artificial neural network (ANN) method and case-based reasoning (CBR). Hamming space was used to measure the similarity among cases in the CBR method. The Levenberg-Marquardt (LM) algorithm and a genetic algorithm (GA) were used in the ANN. The differences between the CBR and ANN LCC estimation models were examined; used separately, each method has its shortcomings. By combining ANN and CBR, improved accuracy was obtained. First, the ANN was used to select design features that affect LCC. Second, the LCC estimation results of the ANN were used to raise the accuracy of LCC estimation in the CBR method. Third, the ANN was used to estimate LCC errors and correct the errors in the CBR estimation results when the accuracy was insufficient. Finally, economy family cars and a sport utility vehicle (SUV) were given as LCC estimation cases using this hybrid approach combining ANN and CBR.

Keywords: Case-based reasoning, life cycle cost (LCC), artificial neural networks (ANN), family cars.

15542 Scale-Space Volume Descriptors for Automatic 3D Facial Feature Extraction

Authors: Daniel Chen, George Mamic, Clinton Fookes, Sridha Sridharan

Abstract:

An automatic method for the extraction of feature points for face based applications is proposed. The system is based upon volumetric feature descriptors, which in this paper has been extended to incorporate scale space. The method is robust to noise and has the ability to extract local and holistic features simultaneously from faces stored in a database. Extracted features are stable over a range of faces, with results indicating that in terms of intra-ID variability, the technique has the ability to outperform manual landmarking.

Keywords: Scale space volume descriptor, feature extraction, 3D facial landmarking

15541 Steepest Descent Method with New Step Sizes

Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman

Abstract:

The steepest descent method is a simple gradient method for optimization. The method converges slowly to the optimal solution because of the zigzag form of its steps. Barzilai and Borwein modified this algorithm so that it performs well for problems with large dimensions. The Barzilai and Borwein results have sparked a lot of research on the steepest descent method, including the alternate minimization gradient method and the Yuan method. Inspired by previous works, we modified the step size of the steepest descent method. We then compare the modified method against the Barzilai and Borwein method, the alternate minimization gradient method and the Yuan method for quadratic function cases in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the Barzilai and Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes give faster convergence compared to the other methods, especially for cases with large dimensions.
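
For context, the Barzilai-Borwein step size that the comparison builds on is alpha_k = s^T s / s^T y with s = x_k - x_{k-1} and y = g_k - g_{k-1}; the sketch below applies it to an illustrative quadratic and does not reproduce the authors' own modified step sizes.

```python
import numpy as np

def bb_gradient_descent(grad, x0, iterations=100, alpha0=1e-3):
    """Gradient descent with the (first) Barzilai-Borwein step size."""
    x_prev, g_prev = x0, grad(x0)
    x = x_prev - alpha0 * g_prev          # one plain steepest-descent step to start
    for _ in range(iterations):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        alpha = (s @ s) / (s @ y)         # BB1 step size
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Quadratic test problem f(x) = 0.5 x^T Q x - b^T x with an ill-conditioned Q.
n = 100
Q = np.diag(np.linspace(1, 1000, n))
b = np.ones(n)
grad = lambda x: Q @ x - b
x_star = np.linalg.solve(Q, b)
x_bb = bb_gradient_descent(grad, np.zeros(n), iterations=200)
print(np.linalg.norm(x_bb - x_star))
```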

Keywords: Convergence, iteration, line search, running time, steepest descent, unconstrained optimization.

15540 An Iterative Method for the Least-squares Symmetric Solution of AXB+CYD=F and its Application

Authors: Minghui Wang

Abstract:

Based on the classical LSQR algorithm for solving the (unconstrained) LS problem, an iterative method is proposed for the least-squares like-minimum-norm symmetric solution of AXB+CYD=F. As an application of this algorithm, an iterative method for the least-squares like-minimum-norm bisymmetric solution of AXB=F is also obtained. Numerical results are reported that show the efficiency of the proposed methods.

Keywords: Matrix equation, bisymmetric matrix, least squares problem, like-minimum norm, iterative algorithm.

15539 3D Object Indexing with a Direct and Analytical Method for Calculating the Spherical Harmonics Coefficients

Authors: S. Hellam, Y. Oulahrir, F. El Mounchid, A. Sadiq, S. Mbarki

Abstract:

In this paper, we propose a new method for three-dimensional object indexing based on the D.A.M.C-S.H.C descriptor (Direct and Analytical Method for Calculating the Spherical Harmonics Coefficients). To this end, we propose a direct calculation of the coefficients of spherical harmonics with perfect precision. The aims of the method are to minimize the processing time on the 3D object database and the time needed to search for objects similar to a query object. First, we define the new descriptor using a new division of the 3D object in a sphere. Then we define a new distance, which is tested and proves its efficiency in the search for similar objects in a database containing objects of widely varying sizes.

Keywords: 3D Object indexing, 3D shape descriptor, spherical harmonic, 3D Object similarity.

15538 Performance Analysis of Genetic Algorithm with kNN and SVM for Feature Selection in Tumor Classification

Authors: C. Gunavathi, K. Premalatha

Abstract:

Tumor classification is a key area of research in the field of bioinformatics. Microarray technology is commonly used in the study of disease diagnosis using gene expression levels. The main drawback of gene expression data is that it contains thousands of genes and very few samples. Feature selection methods are used to select the informative genes from the microarray, and they considerably improve the classification accuracy. In the proposed method, a Genetic Algorithm (GA) is used for effective feature selection. Informative genes are identified based on T-statistics, Signal-to-Noise Ratio (SNR) and F-test values. The initial candidate solutions of the GA are obtained from the top-m informative genes. The classification accuracy of the k-Nearest Neighbor (kNN) method is used as the fitness function for the GA. In this work, kNN and Support Vector Machine (SVM) are used as the classifiers. The experimental results show that the proposed work is suitable for effective feature selection. With the help of the selected genes, the GA-kNN method achieves 100% accuracy in 4 datasets and the GA-SVM method achieves 100% accuracy in 5 out of 10 datasets. The GA with kNN and SVM methods is demonstrated to be an accurate method for microarray-based tumor classification.
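
The filtering and fitness-evaluation steps can be sketched without the full GA loop: rank genes by signal-to-noise ratio, keep the top-m as the candidate pool, and score a gene subset by cross-validated kNN accuracy, the quantity used as the GA fitness. The synthetic expression matrix below is a placeholder, not microarray data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def snr_scores(X, y):
    """Golub-style signal-to-noise ratio per gene for a two-class problem."""
    a, b = X[y == 0], X[y == 1]
    return np.abs(a.mean(0) - b.mean(0)) / (a.std(0) + b.std(0) + 1e-12)

def knn_fitness(X, y, gene_idx, k=3):
    """Fitness of a gene subset: cross-validated kNN accuracy, as in GA-kNN."""
    clf = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(clf, X[:, gene_idx], y, cv=5).mean()

# Placeholder microarray: 60 samples, 2000 genes, 30 of them informative.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 30)
X = rng.normal(size=(60, 2000))
X[y == 1, :30] += 1.5

top_m = np.argsort(snr_scores(X, y))[::-1][:50]   # candidate pool for the GA
print(f"fitness of top-50 SNR genes: {knn_fitness(X, y, top_m):.3f}")
```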

Keywords: F-Test, Gene Expression, Genetic Algorithm, k-Nearest-Neighbor, Microarray, Signal-to-Noise Ratio, Support Vector Machine, T-statistics, Tumor Classification.

15537 Novel Approach to Design of a Class-EJ Power Amplifier Using High Power Technology

Authors: F. Rahmani, F. Razaghian, A. R. Kashaninia

Abstract:

This article proposes a new method for application in communication circuit systems that increases efficiency, PAE, output power and gain in the circuit. The proposed method is based on a combination of switching class-E and class-J operation and has been termed class-EJ. This method was investigated using both theory and simulation to confirm ∼72% PAE and output power of >39 dBm. The proposed power amplifier achieves a gain of over 15 dB in the 2.9 to 3.5 GHz frequency band. The circuit was designed using MOSFET and high power transistors. The load- and source-pull method was used to obtain the best input and output networks using lumped elements. The proposed technique was investigated for fundamental and second harmonics having desirable amplitudes for the output signal.

Keywords: Power Amplifier (PA), GaN HEMT, Class-J and Class-E, High Efficiency.

15536 Source Direction Detection based on Stationary Electronic Nose System

Authors: Jie Cai, David C. Levy

Abstract:

Electronic noses (arrays of chemical sensors) are widely used in the food industry and pollution control. They can also be used to locate an odor emission source or detect its direction. Usually this task is performed by an electronic nose (ENose) mounted on a mobile vehicle, but problems occur when a source is instantaneous or the surroundings are hard for vehicles to reach. Thus a method for a stationary ENose to detect the direction of the source and locate it is required. A novel method which uses the ratio between the responses of different sensors as a discriminant to determine the direction of the source in natural wind surroundings is presented in this paper. The results show that the method is accurate and easy to implement. The method could also be used in mobile settings, as an optimized algorithm for robots tracking a source location.
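
A minimal sketch of the ratio discriminant: compute the ratios between sensor responses and match them to pre-recorded reference ratios for each direction; the sensor count, reference signatures and nearest-neighbour matching rule are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def response_ratios(responses):
    """Ratios of each sensor's response to the first sensor's response."""
    responses = np.asarray(responses, dtype=float)
    return responses[1:] / responses[0]

def estimate_direction(responses, reference):
    """Pick the direction whose stored reference ratios best match the
    measured ratios (smallest Euclidean distance)."""
    r = response_ratios(responses)
    return min(reference, key=lambda d: np.linalg.norm(r - reference[d]))

# Hypothetical calibration: ratio signatures for four wind-relative directions
# of a 4-sensor stationary ENose array.
reference = {
    "north": np.array([0.9, 0.6, 0.4]),
    "east":  np.array([1.4, 1.1, 0.7]),
    "south": np.array([0.5, 0.8, 1.2]),
    "west":  np.array([1.1, 0.5, 0.9]),
}

measurement = [2.0, 2.6, 2.1, 1.5]     # raw sensor responses for one sample
print(estimate_direction(measurement, reference))
```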

Keywords: Electronic nose, Nature wind situation, Source direction detection.

15535 Sequence-based Prediction of Gamma-turn Types using a Physicochemical Property-based Decision Tree Method

Authors: Chyn Liaw, Chun-Wei Tung, Shinn-Jang Ho, Shinn-Ying Ho

Abstract:

The γ-turns play important roles in protein folding and molecular recognition. The prediction and analysis of γ-turn types are important for both protein structure predictions and better understanding the characteristics of different γ-turn types. This study proposed a physicochemical property-based decision tree (PPDT) method to interpretably predict γ-turn types. In addition to the good prediction performance of PPDT, three simple and human interpretable IF-THEN rules are extracted from the decision tree constructed by PPDT. The identified informative physicochemical properties and concise rules provide a simple way for discriminating and understanding γ-turn types.

Keywords: Classification and regression tree (CART), γ-turn, Physicochemical properties, Protein secondary structure.

15534 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics

Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo

Abstract:

Communication signal modulation recognition technology is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods for communication signals fall into two major categories: the maximum likelihood hypothesis testing method based on decision theory, and statistical pattern recognition methods based on feature extraction. The most commonly used is the statistical pattern recognition approach, which comprises feature extraction and classifier design. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature. An extreme learning machine (ELM), which addresses the real-time requirements of modern warfare, is used to classify the extracted features. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment, and uses the improved cloud model to obtain more stable Holder cloud features, so the performance of the algorithm is improved. The algorithm addresses the problem that a simple feature extraction algorithm based on the Holder coefficient is difficult to use for recognition at low SNR, and it also has better recognition accuracy. The simulation results show that the approach in this paper still gives a good classification result at low SNR; even when the SNR is -15 dB, the recognition accuracy still reaches 76%.

Keywords: Communication signal, feature extraction, holder coefficient, improved cloud model.

15533 Application of Smooth Ergodic Hidden Markov Model in Text to Speech Systems

Authors: Armin Ghayoori, Faramarz Hendessi, Asrar Sheikh

Abstract:

In developing a text-to-speech system, it is well known that the accuracy of the information extracted from a text is crucial to produce high quality synthesized speech. In this paper, a new scheme for converting text into its equivalent phonetic spelling is introduced and developed. This method is applicable to many applications in text-to-speech conversion systems and has many advantages over other methods. The proposed method can also complement other methods with the purpose of improving their performance. The proposed method is a probabilistic model based on the Smooth Ergodic Hidden Markov Model, which can be considered an extension of the HMM. The proposed method is applied to the Persian language, and its accuracy in converting text to speech phonetics is evaluated using simulations.

Keywords: Hidden Markov Models, text, synthesis.

15532 Generalized Method for Estimating Best-Fit Vertical Alignments for Profile Data

Authors: Said M. Easa, Shinya Kikuchi

Abstract:

When the profile information of an existing road is missing or not up-to-date and the parameters of the vertical alignment are needed for engineering analysis, the engineer has to recreate the geometric design features of the road alignment using collected profile data. The profile data may be collected using traditional surveying methods, global positioning systems, or digital imagery. This paper develops a method that estimates the parameters of the geometric features that best characterize existing vertical alignments in terms of tangents and curve expressions, where the curves may be symmetrical, asymmetrical, reverse, or complex vertical curves. The method is implemented using an Excel-based optimization that minimizes the differences between the observed profile and the profiles estimated from the equations of the vertical curves. The method uses a 'wireframe' representation of the profile that makes it applicable to all types of vertical curves. A secondary contribution of this paper is to introduce the properties of the equal-arc asymmetrical curve that has been recently developed in the highway geometric design field.
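
A minimal sketch of the underlying least-squares idea for the simplest case, a single symmetrical parabolic curve fitted to noisy profile points; the data and curve length are illustrative, and the paper's spreadsheet formulation also handles asymmetrical, reverse, and complex curves.

```python
import numpy as np

# Hypothetical profile data along one vertical curve: station (m), elevation (m).
station = np.linspace(0.0, 200.0, 21)
g1_true, g2_true, y0_true, L = 0.03, -0.02, 100.0, 200.0
elev = y0_true + g1_true * station + (g2_true - g1_true) * station**2 / (2 * L)
elev += np.random.default_rng(0).normal(0, 0.02, station.size)  # survey noise

# Least-squares fit of the parabolic model y = c0 + c1*x + c2*x^2.
c2, c1, c0 = np.polyfit(station, elev, deg=2)

# Recover the geometric design parameters from the fitted coefficients:
# entry grade g1 = dy/dx at x = 0, exit grade g2 = dy/dx at x = L.
g1_fit = c1
g2_fit = c1 + 2 * c2 * L
print(f"entry grade {g1_fit:.4f}, exit grade {g2_fit:.4f}, start elevation {c0:.2f}")
```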

Keywords: Optimization, parameters, data, reverse, spreadsheet, vertical curves
