Search results for: information/technologies.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4689

1869 Extending the Quantum Entropy to Multidimensional Signal Processing

Authors: Youssef Khmou, Said Safi, Miloud Frikel

Abstract:

This paper treats different aspects of the entropy measure in classical information theory and statistical quantum mechanics, and presents the possibility of extending the definition of Von Neumann entropy to image and array processing. In the first part, we generalize the quantum entropy using the singular values of arbitrary rectangular matrices to measure randomness and the quality of a denoising operation; this new definition of entropy can be used to compare the performance of filtering methods. In the second part, we apply the concept of the pure state in the quantum formalism to generalize the maximum entropy method for the narrowband, far-field source localization problem. Several computer simulation results are presented to demonstrate the effectiveness of the proposed techniques.
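A minimal sketch of the singular-value entropy described above, assuming the probabilities are the normalized squared singular values (a common convention; the authors' exact normalization is not stated in the abstract):

import numpy as np

def singular_value_entropy(A, eps=1e-12):
    # Von Neumann-style entropy computed from the singular values of a
    # rectangular matrix A (assumed normalization: p_i = s_i^2 / sum s_j^2).
    s = np.linalg.svd(A, compute_uv=False)
    p = s**2 / np.sum(s**2)          # normalized spectrum
    p = p[p > eps]                   # drop numerically zero terms
    return -np.sum(p * np.log(p))

# Example: a noisy image should show higher spectral entropy than a clean one
rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 2, 48)))
noisy = clean + 0.5 * rng.standard_normal(clean.shape)
print(singular_value_entropy(clean), singular_value_entropy(noisy))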

Keywords: Von Neumann entropy, Filtering, array, DoA, Maximum Entropy Method.

1868 An Automatic Gridding and Contour Based Segmentation Approach Applied to DNA Microarray Image Analysis

Authors: Alexandra Oliveros, Miguel Sotaquirá

Abstract:

DNA microarray technology is widely used by geneticists to diagnose or treat diseases through gene expression. This technology is based on the hybridization of a tissue's DNA sequence onto a substrate and the subsequent analysis of the image formed by the thousands of genes in the DNA as green, red or yellow spots. DNA microarray image analysis involves finding the location of the spots and quantifying their expression levels. In this paper, a tool to perform DNA microarray image analysis is presented, including a spot addressing method based on image projections, spot segmentation through contour-based segmentation, and the extraction of relevant gene expression information.
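A minimal sketch of projection-based spot addressing, assuming grid lines fall at the valleys of the row and column intensity sums (the paper's exact gridding criterion is not detailed in the abstract):

import numpy as np
from scipy.signal import find_peaks

def grid_lines(image, min_spacing=8):
    # Locate horizontal and vertical grid lines of a microarray image (2D array)
    # by finding valleys in its row/column projections; a sketch, not the
    # authors' exact procedure.
    row_proj = image.sum(axis=1)
    col_proj = image.sum(axis=0)
    # valleys of the projection correspond to gaps between spot rows/columns
    rows, _ = find_peaks(-row_proj, distance=min_spacing)
    cols, _ = find_peaks(-col_proj, distance=min_spacing)
    return rows, cols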

Keywords: Contour segmentation, DNA microarrays, edge detection, image processing, segmentation, spot addressing.

1867 Software Improvements of the Accuracy in the Air-Electronic Measurement Systems for Geometrical Dimensions

Authors: Miroslav H. Hristov, Velizar A. Vassilev, Georgi K. Dukendjiev

Abstract:

Due to the constant development of measurement systems and the drive toward computerization, improvements addressing the main disadvantages of air gauges have become unavoidable. With the appearance of air-electronic measuring devices, some of these disadvantages have been overcome. The electrical output signal allows such devices to be included in modern systems for processing measurement information and managing processes. Producers' efforts are aimed at reducing the influence of supply pressure and of measurement system setup errors. The increased accuracy requirements and preventive error measures stem from the main application of air-electronic systems: the measurement of geometric dimensions in the automotive industry, where they are applied as modules in measuring systems to measure the geometric parameters, form, orientation and location of elements.

Keywords: Air-electronic, geometrical parameters, improvement, measurement systems.

1866 GRCNN: Graph Recognition Convolutional Neural Network for Synthesizing Programs from Flow Charts

Authors: Lin Cheng, Zijiang Yang

Abstract:

Program synthesis is the task of automatically generating programs based on a user specification. In this paper, we present a framework that synthesizes programs from flow charts, which serve as an accurate and intuitive specification. To do so, we propose a deep neural network called GRCNN that recognizes graph structure from its image. GRCNN is trained end-to-end and predicts the edge and node information of the flow chart simultaneously. Experiments show that the accuracy rate of synthesizing a program is 66.4%, and the accuracy rates of recognizing edges and nodes are 94.1% and 67.9%, respectively. On average, it takes about 60 milliseconds to synthesize a program.

Keywords: program synthesis, flow chart, specification, graph recognition, CNN.

1865 Nonverbal Expression of Emotions in Conflict Escalation

Authors: Arshaluys Mushkambaryan

Abstract:

The purpose of this study is to explore how emotions at the moment of conflict escalation are expressed nonverbally and how they can be detected by the parties involved in the conflict. The study consists of two parts. The first part starts with the definitions of "conflict" and "nonverbal communication" and then analyzes the emotions and types of emotions that may lead to conflict escalation. Four types of emotions and emotion constructs are analyzed, in particular fear, anger, guilt and frustration. The second part of the study analyzes the general role of nonverbal behavior in interaction and communication, and the information it may give to the person who sends or receives those signals. The study finishes with an analysis of the nonverbal expression of the analyzed emotions and of how it can be used during interaction.

Keywords: Conflict Escalation, Emotions, Nonverbal communication,

1864 An Analysis of Activity-Based Costing in a Manufacturing System

Authors: Derya Eren Akyol, Gonca Tuncel, G. Mirac Bayhan

Abstract:

Activity-Based Costing (ABC) represents an alternative paradigm to the traditional cost accounting system and often provides more accurate cost information for decisions such as product pricing, product mix, and make-or-buy decisions. ABC models the causal relationships between products and the resources used in their production, and traces the cost of products to activities through the use of appropriate cost drivers. In this paper, the implementation of ABC in a manufacturing system is analyzed, and a comparison with the traditional cost-based system in terms of the effects on product costs is carried out to highlight the difference between the two costing methodologies. This methodology provides valuable insight into the factors that drive cost, helping to better manage the activities of the company.
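A toy illustration of the cost-driver tracing described above; the activity rates, cost drivers and driver volumes are invented for illustration and are not taken from the paper:

# Hypothetical activity cost pools, cost drivers, and two products
activity_rates = {              # cost per unit of driver (assumed figures)
    "machine setup": 120.0,     # per setup
    "quality inspection": 15.0, # per inspection
    "machining": 40.0,          # per machine hour
}
driver_usage = {                # driver units consumed by each product
    "Product A": {"machine setup": 4, "quality inspection": 20, "machining": 35},
    "Product B": {"machine setup": 1, "quality inspection": 5,  "machining": 60},
}

for product, usage in driver_usage.items():
    cost = sum(activity_rates[a] * q for a, q in usage.items())
    print(f"{product}: activity-based cost = {cost:.2f}")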

Keywords: Activity-based costing, manufacturing systems, product costs, traditional costing.

1863 Towards a Competence Management Approach Based on Continuous Improvement

Authors: N. Sefiani, C. Fikri Benbrahim, A. Boumane, K. Reklaoui

Abstract:

Nowadays, reflection on competence management is the basis for new competitive strategies. It is considered the core of the problems of the global supply chain and involves a variety of actors and flows: information, physical and activity flows, etc. Even though competence management is seen as the key factor for any business success, the existing approaches demonstrate the deficiencies and limitations of the competence concept. This research has two objectives. The first is to contribute to the development of a competence approach based on continuous improvement, which allows the enterprise to spot key competencies, mobilize them to serve its strategic objectives, and develop future competencies. The second is to propose a method for evaluating the Level of Collective Competence. The approach was confirmed through an application carried out at an automotive company.

Keywords: Competencies, approach, continuous improvement, collective competence level, performance indicator.

1862 Data Analysis Techniques for Predictive Maintenance on Fleet of Heavy-Duty Vehicles

Authors: Antonis Sideris, Elias Chlis Kalogeropoulos, Konstantia Moirogiorgou

Abstract:

The present study proposes a methodology for the efficient daily management of fleet vehicles and construction machinery. The application covers the area of remote monitoring of heavy-duty vehicles' operation parameters, where specific sensor data are stored and examined in order to provide information about the vehicle's health. The vehicle diagnostics allow the user to inspect whether maintenance tasks need to be performed before a fault occurs. A properly designed machine learning model is proposed for the detection of two different types of faults through classification. Cross-validation is used, and the accuracy of the trained model is checked with the confusion matrix.
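A minimal sketch of the classification workflow the abstract outlines (cross-validation plus a confusion matrix); the estimator, feature dimensions and data are placeholders, not the authors' model or sensors:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# Placeholder sensor features and fault labels (0 = healthy, 1/2 = fault types)
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))              # e.g. temperature, pressure, rpm, ...
y = rng.integers(0, 3, size=300)

clf = RandomForestClassifier(n_estimators=200, random_state=0)  # stand-in model
y_pred = cross_val_predict(clf, X, y, cv=5)   # 5-fold cross-validated predictions
print(confusion_matrix(y, y_pred))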

Keywords: Fault detection, feature selection, machine learning, predictive maintenance.

1861 Analysis of DNA Microarray Data using Association Rules: A Selective Study

Authors: M. Anandhavalli Gauthaman

Abstract:

DNA microarrays allow the measurement of expression levels for a large number of genes, perhaps all genes of an organism, across a number of different experimental samples. It is important to extract biologically meaningful information from this huge amount of expression data in order to know the current state of the cell, because most cellular processes are regulated by changes in gene expression. Association rule mining techniques are helpful for finding association relationships between genes, and numerous association rule mining algorithms have been developed to analyze this expression data. This paper focuses on some of the popular association rule mining algorithms developed to analyze gene expression data.
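A minimal sketch of the basic quantities association rule miners compute over binarized expression data (a gene is either up-regulated in a sample or not); the binarization and gene names are illustrative assumptions:

# Each sample lists the genes considered "up-regulated" in it (toy data)
samples = [
    {"geneA", "geneB", "geneC"},
    {"geneA", "geneB"},
    {"geneB", "geneC"},
    {"geneA", "geneB", "geneC"},
]

def support(itemset):
    # fraction of samples containing every gene in the itemset
    return sum(itemset <= s for s in samples) / len(samples)

def confidence(antecedent, consequent):
    return support(antecedent | consequent) / support(antecedent)

# Rule {geneA} -> {geneB}
print(support({"geneA", "geneB"}))          # 0.75
print(confidence({"geneA"}, {"geneB"}))     # 1.0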

Keywords: DNA microarray, gene expression, association rule mining.

1860 A Fiber Optic Interferometric Sensor for Dynamic Measurement

Authors: N. Sathitanon, S. Pullteap

Abstract:

An optical fiber Fabry-Perot interferometer (FFPI) is proposed and demonstrated for dynamic measurements of a vibrating mechanical target. A polished metal surface with a low reflectance value, adhered to a mechanical vibrator, was excited via a function generator at various excitation frequencies. Output interference fringes were generated by modulating the reference and sensing signals at the output arm. A fringe-counting technique was used to interpret the displacement information on a dedicated computer. The fiber interferometer was found to be capable of displacement measurements from 1.28 μm to 96.01 μm. A commercial displacement sensor was employed as a reference for investigating the measurement errors of the fiber sensor, and a maximum percentage measurement error of approximately 1.59% was obtained.
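A minimal sketch of the fringe-counting relation commonly used with Fabry-Perot sensors, where each full fringe corresponds to a half-wavelength change of the cavity length; the wavelength below is an assumed value, and the paper's exact interrogation scheme is not reproduced:

# Displacement estimate from a fringe count: d ~ N * lambda / 2
wavelength_nm = 1550.0        # assumed laser wavelength, not stated in the abstract
fringe_count = 50             # fringes counted over one vibration half-cycle

displacement_um = fringe_count * (wavelength_nm / 2) / 1000.0
print(f"displacement ~ {displacement_um:.2f} um")   # 38.75 um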

Keywords: Optical fiber sensors, dynamic displacement, fringe counting, reference displacement sensor.

1859 The Benefits of IFRS Adoption – A Survey of Chief Financial Officers of Romanian Listed Companies

Authors: Lucian Munteanu

Abstract:

The move towards the internationalization of accounting received a great boost when, in 2002, the EU delegated the IASB to provide the accounting standards to be applied inside its frontiers. Among the incentives for standardizing accounting at the international level is the reduction of the cost of capital. Romania moved towards IFRS before joining the EU, when the country was not yet a member. Even though this made Romania a special case, it has scarcely been studied; the lack of real data is usually the reason. The novelty of this paper is that it offers insight into the reality of Romanian companies and their view of IFRS. The paper is based on a survey conducted among the companies listed on the first two tiers of the Bucharest Stock Exchange (BSE), which are, basically, the most important companies in the country.

Keywords: Cost of capital, IFRS, information asymmetry, transparency.

1858 Methodology for Obtaining Static Alignment Model

Authors: Lely A. Luengas, Pedro R. Vizcaya, Giovanni Sánchez

Abstract:

In this paper, a methodology is presented to obtain the Static Alignment Model for any transtibial amputee. The proposed methodology starts from experimental data collected at the Hospital Militar Central, Bogotá, Colombia. The effects of transtibial prosthesis malalignment on amputees were measured in terms of joint angles, center of pressure (COP) and weight distribution. Statistical tools are used to obtain the model parameters, and mathematical predictive models of prosthetic alignment were created. The proposed models were validated in amputees, with promising results for prosthesis static alignment. The static alignment process is unique to each subject; nevertheless, the proposed methodology can be used with any transtibial amputee.

Keywords: Information theory, prediction model, prosthetic alignment, transtibial prosthesis.

1857 An Improved QRS Complex Detection for Online Medical Diagnosis

Authors: I. L. Ahmad, M. Mohamed, N. A. Ab. Ghani

Abstract:

This paper presents work on signal discrimination, specifically for the Electrocardiogram (ECG) waveform. The ECG signal comprises P, QRS, and T waves in each normal heartbeat, describing a pattern of heart rhythms that corresponds to a specific individual. Further medical diagnosis can then be carried out to determine any heart-related disease from the ECG information. The emphasis on QRS complex classification is discussed to illustrate its importance. The Pan-Tompkins algorithm, a widely known technique, has been adapted to realize the QRS complex classification process. Eight steps are involved, namely sampling, normalization, a low-pass filter, a high-pass filter (together forming a band-pass filter), derivation, squaring, averaging, and lastly QRS detection. The simulation results obtained are presented in a Graphical User Interface (GUI) developed using MATLAB.
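A minimal sketch of the Pan-Tompkins processing chain listed above (band-pass filter, derivative, squaring, moving-window integration, thresholding); the filter order, cutoffs and threshold rule are common textbook choices and not necessarily those used by the authors:

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def pan_tompkins_qrs(ecg, fs):
    # Return sample indices of detected QRS complexes (sketch).
    # 1-2. Band-pass filter (~5-15 Hz keeps most of the QRS energy)
    b, a = butter(2, [5, 15], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, ecg)
    # 3. Derivative emphasizes the steep QRS slopes
    diff = np.diff(filtered)
    # 4. Squaring makes all samples positive and accentuates large slopes
    squared = diff ** 2
    # 5. Moving-window integration (~150 ms window)
    win = int(0.150 * fs)
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")
    # 6. Simple threshold plus a ~200 ms refractory period
    threshold = 0.5 * integrated.max()
    peaks, _ = find_peaks(integrated, height=threshold, distance=int(0.2 * fs))
    return peaks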

Keywords: ECG, Pan-Tompkins Algorithm, QRS Complex, Simulation.

1856 Transonic Flutter Analysis Using Euler Equation and Reduced Order Modeling Technique

Authors: D. H. Kim, Y. H. Kim, T. Kim

Abstract:

A new method that identifies a coupled fluid-structure system with a reduced set of state variables is presented. Assuming that the structural model is known a priori, either from an analysis or a test, and using linear transformations between structural and aeroelastic states, it is possible to deduce aerodynamic information from sampled time histories of the aeroelastic system. More specifically, given a finite set of structural modes, the method extracts the generalized aerodynamic force matrix corresponding to these mode shapes. Once the aerodynamic forces are known, an aeroelastic reduced-order model can be constructed in discrete-time, state-space form by coupling the structural model and the aerodynamic system. The resulting reduced-order model is suitable for constant-Mach, varying-density analysis.

Keywords: ROM (Reduced-Order Model), aeroelasticity, AGARD 445.6 wing.

1855 Peer Corrective Feedback on Written Errors in Computer-Mediated Communication

Authors: S. H. J. Liu

Abstract:

This paper aims to explore the role of peer Corrective Feedback (CF) in improving written productions by English-as-a-foreign-language (EFL) learners who work together via Wikispaces. It attempted to determine the effect of peer CF on form accuracy in English, such as grammar and lexis. Thirty-four EFL learners at the tertiary level were randomly assigned into the experimental (with peer feedback) or the control (without peer feedback) group; each group was subdivided into small groups of two or three. This resulted in six and seven small groups in the experimental and control groups, respectively. In the experimental group, each learner played a role as an assessor (providing feedback to others), as well as an assessee (receiving feedback from others). Each participant was asked to compose his/her written work and revise it based on the feedback. In the control group, on the other hand, learners neither provided nor received feedback but composed and revised their written work on their own. Data collected from learners' compositions and post-task interviews were analyzed and reported in this study. Following the completion of three writing tasks, 10 participants were selected and interviewed individually regarding their perception of collaborative learning in the Computer-Mediated Communication (CMC) environment. Language aspects to be analyzed included lexis (e.g., appropriate use of words), verb tenses (e.g., present and past simple), prepositions (e.g., in, on, and between), nouns, and articles (e.g., a/an). Feedback types consisted of CF, affective, suggestive, and didactic. Frequencies of feedback types and the accuracy of the language aspects were calculated. The results first suggested that accurate items were found more in the experimental group than in the control group. Such results indicate that those who worked collaboratively outperformed those who worked non-collaboratively on the accuracy of linguistic aspects. Furthermore, the first type of CF (e.g., corrections directly related to linguistic errors) was found to be the most frequently employed type, whereas affective and didactic feedback were the least used by the experimental group. The results further indicated that most participants perceived peer CF as helpful in improving language accuracy, and they demonstrated a favorable attitude toward working with others in the CMC environment. Moreover, some participants stated that when they provided feedback to their peers, they tended to pay attention to linguistic errors in their peers' work but overlook their own errors (e.g., past simple tense) when writing. Finally, L2 or FL teachers and practitioners are encouraged to employ CMC technologies to train their students to give each other feedback in writing, in order to improve the accuracy of the language and to motivate them to attend to the language system.

Keywords: Peer corrective feedback, computer-mediated communication, second or foreign language learning, Wikispaces.

1854 Principal Component Analysis using Singular Value Decomposition of Microarray Data

Authors: Dong Hoon Lim

Abstract:

A series of microarray experiments produces observations of differential expression for thousands of genes across multiple conditions. Principal component analysis (PCA) has been widely used in multivariate data analysis to reduce the dimensionality of the data in order to simplify subsequent analysis and allow for summarization of the data in a parsimonious manner. PCA, which can be implemented via a singular value decomposition (SVD), is useful for the analysis of microarray data. For the application of PCA using SVD, we use the DNA microarray data for the small round blue cell tumors (SRBCT) of childhood by Khan et al. (2001). To decide the number of components that account for a sufficient amount of information, we draw a scree plot. The biplot, a graphic display associated with PCA, reveals important features that exhibit the relationships between variables and also the relationships of variables with observations.
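A minimal sketch of PCA via SVD on a gene-expression matrix (samples by genes); the data below are random placeholders, not the SRBCT set:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(63, 2308))          # e.g. 63 samples x 2308 genes (placeholder)

Xc = X - X.mean(axis=0)                  # center each gene
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = s**2 / np.sum(s**2)          # variance explained, used for the scree plot
scores = U * s                           # principal component scores of the samples
loadings = Vt.T                          # gene loadings (directions)

print("first 5 components explain", explained[:5].sum())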

Keywords: Principal component analysis, singular value decomposition, microarray data, SRBCT

1853 Predicting and Mitigating Dredging Dispersion Impact: A Case of Phuket Port, Thailand

Authors: Cherdvong Saengsupavanich

Abstract:

Dredging activities inevitably cause sediment dispersion. In certain locations, where there are important ecological areas such as mangroves or coral reefs, careful planning of the dredging can significantly reduce negative impacts. This article uses the dredging at Phuket port, Thailand, as a case study to demonstrate how computer simulations can help protect existing coral reefs. The software package MIKE21 was applied, and the information required by the simulations was gathered. After calibrating and verifying the model, various dredging scenarios were simulated to predict spoil movement. The simulation results were used as guidance for setting up an environmental protection measure. Finally, the recommendation to dredge during flood tide with silt curtains installed was made.

Keywords: Coastal simulation, dredging, environmental protection, port, coastal engineering, Thailand.

1852 Online Metacognitive Reading Strategies Use by Postgraduate Libyan EFL Students

Authors: Najwa Alsayed Omar

Abstract:

With the increasing popularity of the Internet, online reading has become an essential source for EFL readers. Using strategies to comprehend information in online reading texts plays a crucial role in students' academic success. Metacognitive reading strategies are effective factors that enhance EFL learners' reading comprehension. This study aimed at exploring the use of online metacognitive reading strategies by postgraduate Libyan EFL students. Quantitative data were collected using the Survey of Online Reading Strategies (OSORS). The findings revealed that the participants were moderate users of online metacognitive reading strategies. Problem-solving strategies were the most frequently reported strategies, while support reading strategies were the least frequently reported. The five most and least frequently reported strategies were identified. Based on the findings, some recommendations for future research are presented.

Keywords: Metacognitive strategies, Online reading, Online reading strategies, Postgraduate students.

1851 Evolutionary Feature Selection for Text Documents using the SVM

Authors: Daniel I. Morariu, Lucian N. Vintan, Volker Tresp

Abstract:

Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. In this paper, we present three feature selection methods: Information Gain, Support Vector Machine feature selection (called SVM_FS) and a Genetic Algorithm with SVM (called GA_SVM). We show that the best results were obtained with the GA_SVM method for a relatively small dimension of the feature vector.
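A minimal sketch of Information Gain scoring for a binary term feature, the simplest of the three methods named above; the class labels and term occurrences are toy values:

import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(term_present, labels):
    # IG(term) = H(class) - H(class | term present/absent)
    labels = np.asarray(labels)
    term_present = np.asarray(term_present, dtype=bool)
    classes = np.unique(labels)
    h_class = entropy([np.mean(labels == c) for c in classes])
    h_cond = 0.0
    for mask in (term_present, ~term_present):
        if mask.any():
            h_cond += mask.mean() * entropy([np.mean(labels[mask] == c) for c in classes])
    return h_class - h_cond

# Toy data: does the term occur in each document, and the document's class
occurs = [1, 1, 0, 0, 1, 0]
y      = ["spam", "spam", "ham", "ham", "spam", "ham"]
print(information_gain(occurs, y))   # 1.0 bit: the term perfectly separates the classes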

Keywords: Feature Selection, Learning with Kernels, Support Vector Machine, Genetic Algorithm, and Classification.

1850 Development of Subjective Measures of Interestingness: From Unexpectedness to Shocking

Authors: Eiad Yafi, M. A. Alam, Ranjit Biswas

Abstract:

Knowledge Discovery in Databases (KDD) is the process of extracting previously unknown but useful and significant information from massive volumes of data. Data mining is a stage in the overall KDD process that applies an algorithm to extract interesting patterns. Usually, such algorithms generate a huge volume of patterns, which have to be evaluated using interestingness measures that reflect the user's requirements. Interestingness is defined in different ways: (i) objective measures and (ii) subjective measures. Objective measures such as support and confidence extract meaningful patterns based on the structure of the patterns, while subjective measures such as unexpectedness and novelty reflect the user's perspective. In this report, we review the more widespread and successful subjective measures and propose a new subjective measure of interestingness, i.e., shocking.
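A minimal sketch contrasting the objective measures named above (support, confidence) with a simple subjective flag for unexpectedness relative to a user's prior beliefs; the transactions and belief set are toy examples, and the paper's proposed "shocking" measure is not reproduced here:

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]
# The user's prior beliefs: rules they already expect to hold
user_beliefs = {(frozenset({"bread"}), frozenset({"milk"}))}

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    return support(lhs | rhs) / support(lhs)

def unexpected(lhs, rhs):
    # Subjective: a rule is unexpected if it is not among the user's beliefs
    return (frozenset(lhs), frozenset(rhs)) not in user_beliefs

rule = ({"bread"}, {"butter"})
print(support(rule[0] | rule[1]), confidence(*rule), unexpected(*rule))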

Keywords: Shocking rules (SHR).

1849 Morpho-Anatomical and Ecological Studies on Endemic of Fritillaria oranensis Pomel. from the Mounts of Tessala (Western Algeria)

Authors: A. Bouzid, R. Chadli

Abstract:

Fritillaria oranensis (Liliaceae) was described in 1874 by Pomel from Algeria. Plant samples were collected from the Mounts of Tessala (Sidi-Bel-Abbès), and the morphological features of various organs of the plant, such as the stem and leaf, were determined and illustrated in detail. Ecological studies provide information about the physical and chemical structure of the soil types of Tessala Mountain. The aim of this investigation is to present the ecological and anatomical features of this species for the first time, while at the same time giving a detailed account of the morphological characteristics of the stem and leaf of Fritillaria oranensis.

Keywords: Anatomy, ecology, Liliaceae, morphology, Fritillaria oranensis Pomel.

1848 Feature Selection Methods for an Improved SVM Classifier

Authors: Daniel Morariu, Lucian N. Vintan, Volker Tresp

Abstract:

Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. In this paper, three feature selection methods are evaluated: Random Selection, Information Gain (IG) and Support Vector Machine feature selection (called SVM_FS). We show that the best results were obtained with the SVM_FS method for a relatively small dimension of the feature vector. We also present a novel method to better correlate the SVM kernel's parameters (Polynomial or Gaussian kernel).

Keywords: Feature Selection, Learning with Kernels, Support Vector Machine, and Classification.

1847 Mind Your Product-Market Strategy on Selecting Marketing Inputs: An Uncertainty Approach in Indian Context

Authors: Susmita Ghosh, Bhaskar Bhowmick

Abstract:

The market is an important factor for start-ups to consider during decision-making in product development and related areas. Emerging-country markets are more uncertain in terms of information availability and institutional support. The literature on market uncertainty reveals the need to identify factors representing it. This paper identifies factors of market uncertainty using Exploratory Factor Analysis (EFA) and confirms the number of factors to retain using an alternative factor retention criterion, Parallel Analysis. A total of 500 entrepreneurs engaged in start-ups from all over India participated in the study. The paper concludes with a factor structure of 'market uncertainty' having dimensions of uncertainty in industry orientation, uncertainty in customer orientation and uncertainty in marketing orientation.

Keywords: Uncertainty, market, orientation, competitor, demand.

1846 Detecting Abnormal ECG Signals Utilising Wavelet Transform and Standard Deviation

Authors: Dejan Stantic, Jun Jo

Abstract:

The ECG contains very important clinical information about the cardiac activity of the heart. Often the ECG signal needs to be captured for a long period of time in order to identify abnormalities in certain situations. Such a signal, apart from its large volume, is often characterised by low quality due to noise and other influences. To extract features from an ECG signal with time-varying characteristics, it first needs to be preprocessed with the best parameters. It is also useful to identify the specific parts of a long-lasting signal that contain abnormalities and to direct the practitioner to those parts of the signal. In this work, we present a method based on the wavelet transform, standard deviation and a variable threshold, which achieves 100% accuracy in identifying ECG signal peaks and heartbeats, as well as identifying the standard deviation, providing a quick reference to abnormalities.
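A minimal sketch combining wavelet denoising with a standard-deviation-based threshold for peak detection; it assumes the PyWavelets package, and the wavelet choice, decomposition level and threshold rule are common defaults, not the authors' exact settings:

import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_peaks(ecg, fs):
    # Wavelet denoising: soft-threshold the detail coefficients (db4, 4 levels)
    coeffs = pywt.wavedec(ecg, "db4", level=4)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(ecg)))              # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, "db4")[: len(ecg)]
    # Variable threshold from the standard deviation of the denoised signal
    height = denoised.mean() + 2.0 * denoised.std()
    peaks, _ = find_peaks(denoised, height=height, distance=int(0.25 * fs))
    return peaks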

Keywords: Electrocardiogram-ECG, Arrhythmia, Signal Processing, Wavelet Transform, Standard Deviation

1845 Color and Layout-based Identification of Documents Captured from Handheld Devices

Authors: Ardhendu Behera, Denis Lalanne, Rolf Ingold

Abstract:

This paper proposes a method, combining color and layout features, for identifying documents captured from low-resolution handheld devices. On the one hand, the document image color density surface is estimated and represented with an equivalent ellipse; on the other hand, the document's shallow layout structure is computed and hierarchically represented. Our identification method first uses the color information in the documents to focus the search space on documents having a similar color distribution, and then selects the document having the most similar layout structure in the remainder of the search space.

Keywords: Document color modeling, document visual signature, kernel density estimation, document identification.

1844 Analysis of Sonogram Images of Thyroid Gland Based on Wavelet Transform

Authors: M. Bastanfard, B. Jalaeian, S. Jafari

Abstract:

Sonogram images of normal and lymphocytic thyroid tissues have considerable overlap, which makes them difficult to interpret and distinguish. Classification from sonogram images of the thyroid gland is tackled in a semiautomatic way. When making a manual diagnosis from images, some relevant information may not be recognized by the human visual system, so quantitative image analysis can be helpful to the manual diagnostic process so far performed by the physician. Two classes are considered: normal tissue and chronic lymphocytic thyroiditis (Hashimoto's thyroiditis). The data structure is analyzed using K-nearest-neighbors classification. This paper shows that, unlike the wavelet sub-bands' energy, histograms and Haralick features are not appropriate for distinguishing between normal tissue and Hashimoto's thyroiditis.

Keywords: Sonogram, thyroid, Haralick feature, wavelet.

1843 Automatic Fingerprint Classification Using Graph Theory

Authors: Mana Tarjoman, Shaghayegh Zarei

Abstract:

Using efficient classification methods is necessary for an automatic fingerprint recognition system. This paper introduces a new structural approach to fingerprint classification that uses the directional image of fingerprints to increase the number of subclasses. In this method, the directional image of a fingerprint is segmented into regions consisting of pixels with the same direction. Afterwards, the relational graph of the segmented image is constructed and, from it, a super graph containing the prominent information of this graph is formed. Ultimately, we apply a matching technique with a cost function to compare the obtained graph with the model graphs in order to classify fingerprints. Increasing the number of subclasses with acceptable classification accuracy, together with faster processing, makes this system superior for fingerprint recognition.

Keywords: Classification, Directional image, Fingerprint, Graph, Super graph.

1842 An Implicit Region-Based Deformable Model with Local Segmentation Applied to Weld Defects Extraction

Authors: Y. Boutiche, N. Ramou, M. Ben Gharsallah

Abstract:

This paper presents and discusses a model that allows local segmentation by using the statistical information of a given image. It is based on the Chan-Vese model, curve evolution, partial differential equations and the binary level set method. The proposed model uses the piecewise constant approximation of the Chan-Vese model to compute a Signed Pressure Force (SPF) function, which attracts the curve to the true object boundaries. The implemented model is used to extract weld defects from weld radiographic images, with the aim of calculating the perimeters and areas of those defects; encouraging results are obtained on synthetic and real radiographic images.
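A minimal sketch of an SPF function built from the Chan-Vese piecewise-constant means, following the common formulation spf(x) = (I(x) - (c1 + c2)/2) / max|I - (c1 + c2)/2|; the full curve-evolution scheme of the paper is not reproduced:

import numpy as np

def spf(image, phi):
    # Signed Pressure Force from the Chan-Vese piecewise-constant means.
    # phi is the (binary) level set: > 0 inside the contour, <= 0 outside.
    inside = phi > 0
    c1 = image[inside].mean()        # mean intensity inside the contour
    c2 = image[~inside].mean()       # mean intensity outside the contour
    num = image - (c1 + c2) / 2.0
    return num / (np.abs(num).max() + 1e-12)   # normalized to [-1, 1]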

Keywords: Active contour, Chan-Vese Model, local segmentation, weld radiographic images.

1841 Instance-Based Ontology Matching Using Different Kinds of Formalism

Authors: Katrin Zaiß, Tim Schlüter, Stefan Conrad

Abstract:

Ontology matching is a task needed in various applications, for example for comparison or merging purposes. In the literature, many algorithms solving the matching problem can be found, but most of them do not consider instances at all. Mappings are determined by calculating the string similarity of labels, by recognizing linguistic word relations (synonyms, subsumptions, etc.) or by analyzing the (graph) structure. Due to the facts that instances are often modeled within the ontology and that the set of instances describes the meaning of the concepts better than their meta information, instances should definitely be incorporated into the matching process. In this paper, several novel instance-based matching algorithms are presented which enhance the quality of matching results obtained with common concept-based methods. Different kinds of formalisms are used to classify concepts on account of their instances and finally to compare the concepts directly.
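A minimal sketch of one simple instance-based comparison: matching concepts by the Jaccard overlap of their instance sets. The concept names and instances are illustrative, and the paper's actual formalisms are not reproduced:

def jaccard(a, b):
    # Instance overlap between two concepts, each given as a set of instances
    return len(a & b) / len(a | b) if (a | b) else 0.0

onto1 = {"Car":   {"vw golf", "audi a4", "bmw 320"},
         "Plane": {"a320", "b737"}}
onto2 = {"Automobile": {"vw golf", "bmw 320", "fiat 500"},
         "Aircraft":   {"a320", "b737", "b747"}}

# Propose a mapping: for each concept in onto1, the most instance-similar concept in onto2
for c1, inst1 in onto1.items():
    best = max(onto2, key=lambda c2: jaccard(inst1, onto2[c2]))
    print(c1, "->", best, round(jaccard(inst1, onto2[best]), 2))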

Keywords: Instances, Ontology Matching, Semantic Web

1840 A New Measure of Herding Behavior: Derivation and Implications

Authors: Amina Amirat, Abdelfettah Bouri

Abstract:

If price and quantity are the fundamental building blocks of any theory of market interactions, the importance of trading volume in understanding the behavior of financial markets is clear. However, while many economic models of financial markets have been developed to explain the behavior of prices (predictability, variability, and information content), far less attention has been devoted to explaining the behavior of trading volume. In this article, we hope to expand our understanding of trading volume by developing a new measure of herding behavior based on the cross-sectional dispersion of volume betas. We apply our measure to the Toronto Stock Exchange using monthly data from January 2000 to December 2002. Our findings show that the herd phenomenon consists of three essential components: stationary herding, intentional herding and feedback herding.
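A minimal sketch of the kind of quantity described above: estimate a volume beta for each stock by regressing its trading volume on market-wide volume, then take the cross-sectional dispersion of those betas. The data are random placeholders and the paper's exact estimator is not reproduced:

import numpy as np

rng = np.random.default_rng(0)
T, N = 36, 25                                 # 36 months, 25 stocks (toy sizes)
market_vol = rng.normal(size=T)               # market-wide (log) volume factor
betas_true = rng.normal(1.0, 0.3, size=N)
stock_vol = np.outer(market_vol, betas_true) + 0.5 * rng.normal(size=(T, N))

# Volume beta of each stock: OLS slope of stock volume on market volume
X = np.column_stack([np.ones(T), market_vol])
coefs, *_ = np.linalg.lstsq(X, stock_vol, rcond=None)
volume_betas = coefs[1]                       # one beta per stock

herding_measure = volume_betas.std(ddof=1)    # cross-sectional dispersion of betas
print(herding_measure)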

Keywords: Herding behavior, market return, trading volume.
