Search results for: extracting numerals
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 204


114 Tool for Metadata Extraction and Content Packaging as Endorsed in OAIS Framework

Authors: Payal Abichandani, Rishi Prakash, Paras Nath Barwal, B. K. Murthy

Abstract:

Information generated by various computerization processes is a potentially rich source of knowledge for its designated community. Passing this information from generation to generation without altering its meaning is a challenging activity. To preserve and archive data for future generations, it is essential to prove the authenticity of the data. This can be achieved by extracting metadata that establishes authenticity and creates trust in the archived data. A subsequent challenge is technology obsolescence, which metadata extraction and standardization can be used to resolve. Metadata can be broadly categorized at two levels, technical and domain. Technical metadata provides the information needed to understand and interpret a data record, but this level of metadata alone is not sufficient to establish trustworthiness. We have developed a tool that extracts and standardizes both technical and domain-level metadata. This paper describes the features of the tool and how it was developed.

Keywords: Digital Preservation, Metadata, OAIS, PDI, XML.

113 Recovering Artifacts from Legacy Systems Using Pattern Matching

Authors: Ghulam Rasool, Ilka Philippow

Abstract:

Modernizing legacy applications is a key issue facing IT managers today because organizations are under enormous pressure to change the way they run their business to meet new requirements. The importance of software maintenance and reengineering is ever increasing, and understanding the architecture of existing legacy applications is the most critical issue for maintenance and reengineering. Artifact recovery can be facilitated with different recovery approaches, methods and tools. Existing methods provide static and dynamic techniques for extracting architectural information, but they are not suitable for all users in different domains. This paper presents a simple and lightweight pattern extraction technique that extracts different artifacts from legacy systems using regular expression pattern specifications with multiple language support. We used our custom-built tool DRT to recover artifacts from an existing system at different levels of abstraction. A case study is conducted to evaluate our approach.
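
The regular-expression extraction idea can be illustrated with a minimal sketch. The pattern, helper function and sample input below are illustrative assumptions for C-style sources, not DRT's actual pattern specifications.

```python
import re

# Illustrative pattern for C-style function definitions; DRT's real
# multi-language pattern specifications are not reproduced here.
FUNC_DEF = re.compile(
    r'^\s*(?:[A-Za-z_][\w\s\*]*?)\s+'   # rough return type / qualifiers
    r'([A-Za-z_]\w*)\s*'                # function name (captured)
    r'\([^;{]*\)\s*\{',                 # parameter list followed by a body
    re.MULTILINE)

def recover_functions(source_text):
    """Return the function names found in one legacy source file."""
    return FUNC_DEF.findall(source_text)

sample = "static int parse_record(char *buf, int len) {\n    return 0;\n}"
print(recover_functions(sample))   # ['parse_record']
```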

Keywords: Artifacts recovery, Pattern matching, Reverse engineering, Program understanding, Regular expressions, Source code analysis.

112 Video Shot Detection and Key Frame Extraction Using Faber Shauder DWT and SVD

Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi

Abstract:

Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by dominant blocks located on the contours and the textures near them. When a video frame changes noticeably, its dominant blocks change and a key frame can be extracted. The dominant blocks of every frame are computed, then feature vectors are extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate the ranks of those matrices over sliding windows. Finally, the computed ranks are traced to extract the key frames of the video. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
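
A minimal sketch of the rank-tracing step, assuming frames have already been reduced to feature vectors; the window size, tolerance and synthetic features below are illustrative choices, not values from the paper.

```python
import numpy as np

def sliding_window_ranks(features, window=5, tol=1e-3):
    """features: (n_frames, n_dims) per-frame feature vectors.
    Returns the effective rank of each sliding-window feature matrix."""
    ranks = []
    for start in range(len(features) - window + 1):
        block = features[start:start + window]        # (window, n_dims)
        s = np.linalg.svd(block, compute_uv=False)    # singular values
        ranks.append(int(np.sum(s > tol * s[0])))     # effective rank
    return np.array(ranks)

# A jump in the traced rank suggests a shot change, and the frame where it
# occurs is a candidate key frame (synthetic two-shot sequence below).
rng = np.random.default_rng(0)
feats = np.vstack([np.tile(rng.normal(size=16), (20, 1)),
                   np.tile(rng.normal(size=16), (20, 1))])
print(sliding_window_ranks(feats, window=5))
```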

Keywords: Key Frame Extraction, Shot detection, FSDWT, Singular Value Decomposition.

111 A Tree Based Association Rule Approach for XML Data with Semantic Integration

Authors: D. Sasikala, K. Premalatha

Abstract:

The use of eXtensible Markup Language (XML) in web, business and scientific databases has led to the development of methods, techniques and systems to manage and analyze XML data. Semi-structured documents suffer from heterogeneity and high dimensionality. XML structure and content mining represent a convergence of research in semi-structured data and text mining. As the information available on the internet grows drastically, extracting knowledge from XML documents becomes a harder task. Documents are often so large that the data set returned as the answer to a query may be too big to convey the required information. To improve query answering, a Semantic Tree Based Association Rule (STAR) mining method is proposed. This method provides intensional information by considering the structure, the content and the semantics of the content. The method is applied to the Reuters dataset and the results show that the proposed method performs well.

Keywords: Semi-structured Document, Tree based Association Rule (TAR), Semantic Association Rule Mining.

110 Pulse Oximeter Concept for Vascular Occlusion Test

Authors: Fatanah M. Suhaimi, J. Geoffrey Chase, Christopher G. Pretty, Rodney Elliott, Geoffrey M. Shaw

Abstract:

Microcirculatory dysfunction is very common in sepsis and may result in organ failure and increased risk of death. Analyzing oxygen utilization can potentially assess the microcirculation function of an individual. In this study, a modified pulse oximeter is used to extract the information signals due to absorption of red (R) and infrared (IR) light. The IR and R signals are related to overall blood volume and reduced hemoglobin, respectively. The difference between these two signals thus represents the amount of oxygenated hemoglobin. A vascular occlusion test was conducted on healthy individuals to validate the pulse oximeter concept. In this test, both R and IR signals changed rapidly according to the occlusion process. The pulse oximeter concept presented is capable of extracting valuable information to assess microcirculation condition. Implementing this concept on ICU patients has the potential to aid sepsis diagnosis and provide more accurate tracking of patient state and sepsis status.
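
The signal arithmetic described above amounts to a per-sample subtraction; the short synthetic traces below are only an illustrative stand-in for real R and IR recordings.

```python
import numpy as np

# Synthetic absorbance-style traces (illustrative values only): IR tracks
# overall blood volume, R tracks reduced hemoglobin.
ir_signal = np.array([1.00, 1.02, 1.05, 1.10, 1.08, 1.03])
r_signal = np.array([0.60, 0.63, 0.70, 0.82, 0.78, 0.66])

# The IR - R difference serves as a proxy for oxygenated hemoglobin.
oxy_hb_proxy = ir_signal - r_signal
print(oxy_hb_proxy)
```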

Keywords: Microcirculation, sepsis, sepsis diagnosis, oxygen extraction.

109 Face Texture Reconstruction for Illumination Variant Face Recognition

Authors: Pengfei Xiong, Lei Huang, Changping Liu

Abstract:

In illumination-variant face recognition, existing methods that extract the face albedo as a light-normalized image may lose extensive facial details once the lighting template is discarded. To improve this, a novel approach for realistic facial texture reconstruction that combines the original image and the albedo image is proposed. First, light subspaces of different identities are established from the given reference face images; then, by projecting the original and albedo images into each light subspace respectively, texture reference images with the corresponding lighting are reconstructed and two texture subspaces are formed. According to the projections in the texture subspaces, facial texture under normal lighting can be synthesized. Because the original image is combined with the face albedo, facial details can be preserved. In addition, image partitioning is applied to improve the synthesis performance. Experiments on the Yale B and CMU PIE databases demonstrate that this algorithm outperforms the others both in image representation and in face recognition.

Keywords: texture reconstruction, illumination, face recognition, subspaces

108 Manufacturing Dispersions Based Simulation and Synthesis of Design Tolerances

Authors: Nassima Cheikh, Abdelmadjid Cheikh, Said Hamou

Abstract:

The objective of this work, which is based on the simultaneous engineering approach, is to contribute to the development of a CIM tool for the synthesis of functional design dimensions expressed as average values and tolerance intervals. In this paper, the dispersions method, known as the Δl method, which has proved reliable in the simulation of manufacturing dimensions, is used to develop a methodology for automating the simulation. This methodology is built around three procedures. The first procedure verifies the functional requirements by automatically extracting the functional dimension chains in the mechanical sub-assembly. A second procedure then optimizes the dispersions on the basis of the unknown variables. The third procedure uses the optimized values of the dispersions to compute the optimized average values and tolerances of the functional dimensions in the chains. A statistical and cost-based approach is integrated into the methodology in order to take account of the capabilities of the manufacturing processes and to distribute optimal values among the individual components of the chains.

Keywords: functional tolerances, manufacturing dispersions, simulation, CIM.

107 A Novel Algorithm for Parsing IFC Models

Authors: Raninder Kaur Dhillon, Mayur Jethwa, Hardeep Singh Rai

Abstract:

Information technology has made pivotal progress across disparate disciplines, one of which is the AEC (Architecture, Engineering and Construction) industry. CAD is a form of computer-aided building modelling that architects, engineers and contractors use to create and view two- and three-dimensional models. The AEC industry also uses building information modeling (BIM), a newer computerized modeling system that can create four-dimensional models; this software can greatly increase productivity in the AEC industry. BIM models generate open IFC (Industry Foundation Classes) files, which aim for interoperability in exchanging information throughout the project lifecycle among the various disciplines. The methods developed in previous studies require either an IFC schema or an MVD and software applications, such as an IFC model server or a BIM authoring tool, to extract a partial or complete IFC instance model. This paper proposes an efficient algorithm for extracting a partial or complete model from an Industry Foundation Classes (IFC) instance model without an IFC schema or a complete IFC model view definition (MVD).
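
IFC instance files follow the STEP physical file format, in which each data line looks like `#12=IFCWALL(...);`. A hedged sketch of schema-free entity extraction is given below; the regular expression, entity filter and sample lines are illustrative and do not reproduce the paper's algorithm.

```python
import re
from collections import defaultdict

STEP_LINE = re.compile(r'#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\);\s*$')

def extract_entities(ifc_text, wanted=None):
    """Group IFC instance lines by entity type without using a schema."""
    entities = defaultdict(list)
    for line in ifc_text.splitlines():
        m = STEP_LINE.match(line.strip())
        if not m:
            continue
        instance_id, entity_type, attributes = m.groups()
        if wanted is None or entity_type in wanted:
            entities[entity_type].append((int(instance_id), attributes))
    return entities

sample = "#1=IFCPROJECT('x',$,'Demo',$,$,$,$,$,$);\n#2=IFCWALL('w1',$,$,$,$,$,$,$);"
print(extract_entities(sample, wanted={'IFCWALL'}))
```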

Keywords: BIM, CAD, IFC, MVD.

106 Learning Spatio-Temporal Topology of a Multi-Camera Network by Tracking Multiple People

Authors: Yunyoung Nam, Junghun Ryu, Yoo-Joo Choi, We-Duke Cho

Abstract:

This paper presents a novel approach for representing the spatio-temporal topology of a camera network with overlapping and non-overlapping fields of view (FOVs). The topology is determined by tracking moving objects and establishing object correspondence across multiple cameras. To track people successfully across multiple camera views, we used the Merge-Split (MS) approach for object occlusion in a single camera and a grid-based approach for extracting accurate object features. In addition, we considered the appearance of people and the transition time between entry and exit zones for tracking objects across the blind regions of multiple cameras with non-overlapping FOVs. The main contribution of this paper is to estimate the transition times between the various entry and exit zones, and to represent the camera topology graphically as an undirected weighted graph using the transition probabilities.
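
A hedged sketch of the final step, building an undirected weighted graph from observed zone-to-zone transitions; the zone names and counts are made up for illustration and are not the paper's data.

```python
from collections import Counter

# Observed zone-to-zone transitions across cameras (illustrative data).
transitions = [("cam1_exitA", "cam2_entryB"), ("cam1_exitA", "cam2_entryB"),
               ("cam1_exitA", "cam3_entryC"), ("cam2_exitB", "cam3_entryC")]

# Undirected edges: the order of the two zones does not matter.
edge_counts = Counter(tuple(sorted(t)) for t in transitions)
total = sum(edge_counts.values())

# Weighted graph: edge weight = estimated transition probability.
graph = {edge: count / total for edge, count in edge_counts.items()}
for edge, weight in sorted(graph.items()):
    print(edge, round(weight, 2))
```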

Keywords: Surveillance, multiple camera, people tracking, topology.

105 Glass Bottle Inspector Based on Machine Vision

Authors: Huanjun Liu, Yaonan Wang, Feng Duan

Abstract:

This paper studies an intelligent glass bottle inspector based on machine vision instead of manual inspection. The system structure is illustrated in detail. A method based on the watershed transform is presented to segment possible defective regions and extract features of the bottle wall by rules, and the wavelet transform is then used to extract features of the bottle finish from images. After feature extraction, a fuzzy support vector machine ensemble is put forward as the classifier. To ensure that the fuzzy support vector machines have good classification ability, a GA-based ensemble method is used to combine the several fuzzy support vector machines. The experiments demonstrate that, using this inspector to inspect glass bottles, the accuracy rate may reach above 97.5%.

Keywords: Intelligent Inspection, Support Vector Machines, Ensemble Methods, watershed transform, Wavelet Transform

104 A Prediction-Based Reversible Watermarking for MRI Images

Authors: Nuha Omran Abokhdair, Azizah Bt Abdul Manaf

Abstract:

Reversible watermarking is a special branch of image watermarking that is able to recover the original image after extracting the watermark from it. In this paper, an adaptive prediction-based reversible watermarking scheme is presented in order to increase the payload capacity of MRI medical images. The scheme divides the image into two parts, a Region of Interest (ROI) and a Region of Non-Interest (RONI). Two bits are embedded in each embeddable pixel of the RONI and one bit is embedded in each embeddable pixel of the ROI. The experimental results demonstrate that the proposed scheme is able to achieve high embedding capacity, mainly for two reasons. First, the pixels that would otherwise be excluded from data embedding due to overflow/underflow are used for data embedding. Second, the large location map that would need to be added to the watermark data as overhead is eliminated, so a loss of embedding capacity is avoided. Moreover, the scheme provides good visual quality in the watermarked image.
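
The Prediction-Error Expansion idea listed in the keywords can be sketched in a few lines; this is a generic, hedged illustration on a single pixel row with a left-neighbour predictor, not the paper's adaptive ROI/RONI scheme, and overflow/underflow handling is omitted.

```python
import numpy as np

def pee_embed_row(pixels, bits):
    """Generic prediction-error expansion: predict each pixel from its left
    neighbour, expand the prediction error and hide one bit per pixel."""
    out = pixels.astype(np.int32).copy()
    for i, b in zip(range(1, len(out)), bits):
        pred = int(pixels[i - 1])          # simple left-neighbour predictor
        err = int(pixels[i]) - pred        # prediction error
        out[i] = pred + 2 * err + b        # expanded error carries bit b
    return out

row = np.array([120, 121, 119, 122, 120], dtype=np.uint8)
print(pee_embed_row(row, bits=[1, 0, 1, 1]))
```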

Keywords: Medical image watermarking, reversible watermarking, Difference Expansion, Prediction-Error Expansion.

103 Fracture Toughness Characterization of Carbon-Epoxy Composite using Arcan Specimen

Authors: M. Nikbakht, N. Choupani

Abstract:

In this study the interlaminar fracture behavior of a carbon-epoxy thermoplastic laminated composite is investigated numerically and experimentally. Tests are performed with Arcan specimens. Testing with the Arcan specimen gives the opportunity to use just one kind of specimen to extract the fracture properties for mode I, mode II and different mixed-mode ratios by applying the load at different loading angles. Varying the loading angle in the range 0-90° makes it possible to achieve different mixed-mode ratios. Correction factors for the various conditions are obtained from ABAQUS 2D finite element models, which account for the finite shape of the Arcan specimens used in this study. Finally, by applying the correction factors to the critical loads obtained experimentally, the critical interlaminar fracture toughness of this type of carbon-epoxy composite is obtained.

Keywords: Fracture Mechanics, Mixed Mode, Arcan Specimen, Finite Element.

102 One-Class Support Vector Machines for Aerial Images Segmentation

Authors: Chih-Hung Wu, Chih-Chin Lai, Chun-Yen Chen, Yan-He Chen

Abstract:

Interpretation of aerial images is an important task in various applications. Image segmentation can be viewed as the essential step for extracting information from aerial images. Among the many segmentation methods developed, clustering techniques have been extensively investigated and used. However, determining the number of clusters in an image is inherently a difficult problem, especially when a priori information on the aerial image is unavailable. This study proposes a support vector machine approach for clustering aerial images. Three cluster validity indices, a distance-based index, the Davies-Bouldin index, and the Xie-Beni index, are utilized as quantitative measures of the quality of the clustering results. Comparisons of the effectiveness of these indices and of various parameter settings of the proposed method are conducted. Experimental results are provided to illustrate the feasibility of the proposed approach.
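
For reference, one of the validity measures named above, the Davies-Bouldin index, can be computed directly; the sketch below is a plain numpy version on toy data (lower values indicate better-separated clusters).

```python
import numpy as np

def davies_bouldin(points, labels):
    """Average, over clusters, of the worst ratio of within-cluster scatter
    to between-centroid distance (lower is better)."""
    ids = np.unique(labels)
    centroids = np.array([points[labels == k].mean(axis=0) for k in ids])
    scatter = np.array([np.mean(np.linalg.norm(points[labels == k] - c, axis=1))
                        for k, c in zip(ids, centroids)])
    index = 0.0
    for i in range(len(ids)):
        ratios = [(scatter[i] + scatter[j]) /
                  np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(len(ids)) if j != i]
        index += max(ratios)
    return index / len(ids)

pts = np.array([[0, 0], [0, 1], [5, 5], [5, 6]], dtype=float)
print(round(davies_bouldin(pts, np.array([0, 0, 1, 1])), 3))   # 0.141
```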

Keywords: Aerial imaging, image segmentation, machine learning, support vector machine, cluster validity index

101 PIELG: A Protein Interaction Extraction System Using a Link Grammar Parser from Biomedical Abstracts

Authors: Rania A. Abul Seoud, Nahed H. Solouma, Abou-Baker M. Youssef, Yasser M. Kadah

Abstract:

Due to the ever-growing amount of publications about protein-protein interactions, information extraction from text is increasingly recognized as one of the crucial technologies in bioinformatics. This paper presents a Protein Interaction Extraction system using a Link Grammar parser on biomedical abstracts (PIELG). PIELG uses the linkages produced by the Link Grammar parser to start a case-based analysis of the contents of the various syntactic roles, as well as their linguistically significant and meaningful combinations. The system uses phrasal-prepositional verb patterns to overcome problems with preposition combinations. The recall and precision are 74.4% and 62.65%, respectively. Experimental comparisons with two other state-of-the-art extraction systems indicate that PIELG achieves better performance. For further evaluation, the system is augmented with a graphical package (Cytoscape) for extracting protein interaction information from sequence databases. The results show that its performance is remarkably promising.

Keywords: Link Grammar Parser, Interaction extraction, protein-protein interaction, Natural language processing.

100 Gain Tuning Fuzzy Controller for an Optical Disk Drive

Authors: Shiuh-Jer Huang, Ming-Tien Su

Abstract:

Since the driving speed and control accuracy of commercial optical disk drives are increasing significantly, an efficient controller is needed to monitor the track-seeking and track-following operations of the servo system in order to achieve the desired data-extraction response. The nonlinear behaviors of the actuator and servo system of the optical disk drive influence the laser spot positioning. Here, a model-free fuzzy control scheme is employed to design the track-seeking servo controller for a d.c. motor driven optical disk drive system. In addition, a sliding-mode control strategy is introduced into the fuzzy control structure to construct a 1-D adaptive fuzzy-rule intelligent controller, simplifying the implementation and improving the control performance. The experimental results show that the steady-state error of track seeking with this fuzzy controller can be kept within the track width (1.6 μm). It can be used in the track-seeking and track-following servo control operations.

Keywords: Fuzzy control, gain tuning and optical disk drive.

99 Flood Hazard Mapping in Dikrong Basin of Arunachal Pradesh (India)

Authors: Aditi Bhadra, Sutapa Choudhury, Daita Kar

Abstract:

Flood zoning studies have become more efficient in recent years because of the availability of advanced computational facilities and the use of Geographic Information Systems (GIS). In the present study, flood-inundated areas were mapped using GIS for the Dikrong river basin of Arunachal Pradesh, India, corresponding to different return periods (2, 5, 25, 50, and 100 years). Further, the developed inundation maps corresponding to the 25-, 50-, and 100-year return period floods were compared to the corresponding maps developed by conventional methods as reported in the Brahmaputra Board Master Plan for the Dikrong basin. It was found that the average deviation of the modelled flood inundation areas from the reported map inundation areas is below 5% (4.52%). The modelled flood inundation areas therefore matched the reported map inundation areas satisfactorily. Hence, GIS techniques proved successful in extracting the flood inundation extent in a time- and cost-effective manner for the remotely located hilly basin of the Dikrong, where conducting conventional surveys is very difficult.

Keywords: Flood hazard mapping, GIS, inundation area, return period.

98 Sparse Coding Based Classification of Electrocardiography Signals Using Data-Driven Complete Dictionary Learning

Authors: Fuad Noman, Sh-Hussain Salleh, Chee-Ming Ting, Hadri Hussain, Syed Rasul

Abstract:

In this paper, a data-driven dictionary approach is proposed for the automatic detection and classification of cardiovascular abnormalities. The electrocardiography (ECG) signal is represented by trained complete dictionaries that contain prototypes, or atoms, to avoid the limitations of pre-defined dictionaries. The data-driven trained dictionaries simply take the ECG signal as input, rather than extracted features, and the set of parameters that yields the most descriptive dictionary is studied. The approach inherently learns the complicated morphological changes in the ECG waveform, which is then used to improve the classification. The classification performance was evaluated on ECG data under two different preprocessing environments. In the first category, the QT database is baseline-drift corrected and a notch filter removes the 60 Hz power-line noise. In the second category, the data are further filtered using a fast moving-average smoother. The experimental results on the QT database confirm that the proposed algorithm achieves a classification accuracy of 92%.
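
A hedged sketch of the dictionary-based classification idea, assuming scikit-learn is available: one dictionary is learned per class from raw fixed-length segments, and a test segment is assigned to the class whose dictionary reconstructs it with the smallest error. The synthetic signals, dictionary size and sparsity level are illustrative choices, not the paper's settings (the paper trains complete dictionaries on the QT database).

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
# Synthetic stand-ins for fixed-length ECG segments of two classes.
normal = rng.normal(0.0, 1.0, size=(40, 64))
abnormal = rng.normal(0.5, 1.5, size=(40, 64))

# One dictionary per class, learned directly from the raw segments
# (kept small here for speed).
dicts = {}
for label, X in {"normal": normal, "abnormal": abnormal}.items():
    dicts[label] = DictionaryLearning(n_components=16, max_iter=50,
                                      random_state=0).fit(X).components_

def classify(segment):
    """Assign the class whose dictionary reconstructs the segment best."""
    errors = {}
    for label, D in dicts.items():
        code = SparseCoder(dictionary=D, transform_algorithm="omp",
                           transform_n_nonzero_coefs=5).transform(segment[None, :])
        errors[label] = float(np.linalg.norm(segment - code @ D))
    return min(errors, key=errors.get)

print(classify(abnormal[0]))
```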

Keywords: Electrocardiogram, dictionary learning, sparse coding, classification.

97 Automatic Vehicle Identification by Plate Recognition

Authors: Serkan Ozbay, Ergun Ercelebi

Abstract:

Automatic Vehicle Identification (AVI) has many applications in traffic systems (highway electronic toll collection, red-light violation enforcement, border and customs checkpoints, etc.). License plate recognition is an effective form of AVI system. In this study, a smart and simple algorithm is presented for a vehicle's license plate recognition system. The proposed algorithm consists of three major parts: extraction of the plate region, segmentation of the characters, and recognition of the plate characters. For extracting the plate region, edge detection and smearing algorithms are used. In the segmentation part, smearing algorithms, filtering and some morphological algorithms are used. Finally, statistics-based template matching is used for recognition of the plate characters. The performance of the proposed algorithm has been tested on real images. Based on the experimental results, we note that our algorithm shows superior performance in car license plate recognition.
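
A minimal sketch of the final template-matching step on already segmented, binarized character images of equal size; the tiny 3x3 glyphs are illustrative placeholders, not real character templates.

```python
import numpy as np

# Tiny binary "templates" standing in for normalized character glyphs.
templates = {
    "1": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=float),
    "7": np.array([[1, 1, 1], [0, 0, 1], [0, 1, 0]], dtype=float),
}

def match_character(segment):
    """Return the template label with the highest normalized correlation."""
    scores = {}
    for label, tpl in templates.items():
        a, b = segment - segment.mean(), tpl - tpl.mean()
        scores[label] = float((a * b).sum() /
                              (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(scores, key=scores.get)

noisy_seven = np.array([[1, 1, 1], [0, 0, 1], [1, 1, 0]], dtype=float)
print(match_character(noisy_seven))   # '7'
```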

Keywords: Character recognizer, license plate recognition, plate region extraction, segmentation, smearing, template matching.

96 Mining Image Features in an Automatic Two-Dimensional Shape Recognition System

Authors: R. A. Salam, M.A. Rodrigues

Abstract:

The number of features required to represent an image can be very large, and using all available features to recognize objects can suffer from the curse of dimensionality. Feature selection and extraction is the pre-processing step of image mining. The main issues in analyzing images are the effective identification of features and their extraction. The mining problem addressed here is the grouping of features for different shapes. Experiments have been conducted using the shape outline as the feature. The shape outline readings are put through a normalization and dimensionality reduction process using an eigenvector-based method to produce a new set of readings. After this pre-processing step, the data are grouped by their shapes. Through statistical analysis of these readings together with peak measures, a robust classification and recognition process is achieved. Tests showed that the suggested methods are able to automatically recognize objects through their shapes. Finally, experiments also demonstrate that the system is invariant to rotation, translation, scale, reflection and a small degree of distortion.

Keywords: Image mining, feature selection, shape recognition, peak measures.

95 An Optical Flow Based Segmentation Method for Objects Extraction

Authors: C. Lodato, S. Lopes

Abstract:

This paper describes a segmentation algorithm based on the cooperation of an optical flow estimation method with edge detection and region growing procedures. The proposed method has been developed as a pre-processing stage for methodologies and tools for video/image indexing and retrieval by content. The problem addressed consists of extracting whole objects from the background, producing images of single complete objects from videos or photos. The extracted images are used for calculating the object visual features necessary for both the indexing and the retrieval processes. The first task of the algorithm exploits cues from motion analysis to detect moving areas. Objects and background are then refined using edge detection and region growing procedures, respectively. These tasks are performed iteratively until objects and background are completely resolved. The developed method has been applied to a variety of indoor and outdoor scenes where objects of different types and shapes are represented on variously textured backgrounds.
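
A hedged sketch of the motion-analysis stage, assuming OpenCV (cv2) is available: dense optical flow is estimated between two frames and thresholded into a moving-area mask. The synthetic frames, Farneback parameters and threshold are illustrative, not the paper's estimation method.

```python
import numpy as np
import cv2

# Two synthetic grayscale frames with a bright square shifted to the right.
prev_frame = np.zeros((64, 64), dtype=np.uint8)
next_frame = np.zeros((64, 64), dtype=np.uint8)
prev_frame[20:30, 20:30] = 255
next_frame[20:30, 24:34] = 255

# Dense optical flow (Farneback) with commonly used parameter values.
flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
magnitude = np.linalg.norm(flow, axis=2)

# Moving-area mask: pixels whose flow magnitude exceeds a simple threshold.
moving_mask = magnitude > 1.0
print("moving pixels:", int(moving_mask.sum()))
```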

Keywords: Motion Detection, Object Extraction, Optical Flow, Segmentation.

94 Reducing Test Vectors Count Using Fault Based Optimization Schemes in VLSI Testing

Authors: Vinod Kumar Khera, R. K. Sharma, A. K. Gupta

Abstract:

Power dissipation increases exponentially during test mode as compared to normal operation of the circuit; in extreme cases, test power is more than twice the power consumed during normal operation. The test vector generation scheme is a key component in deciding how power hungry a circuit is during testing, and the test vector count and the consequent leakage current are functions of that scheme. A fault-based test vector count optimization is presented in this work. It helps in reducing the test vector count and the leakage current. In the presented scheme, the test vectors are reduced by extracting essential child vectors. The scheme has been tested experimentally using stuck-at fault models, and the results confirm the reduction in test vector count.

Keywords: Low power VLSI testing, independent fault, essential faults, test vector reduction.

93 A Multilanguage Source Code Retrieval System Using Structural-Semantic Fingerprints

Authors: Mohamed Amine Ouddan, Hassane Essafi

Abstract:

Source code retrieval is of immense importance in the software engineering field. The complex tasks of retrieving and extracting information from source code documents are vital in the development cycle of large software systems. The two main subtasks which result from these activities are code duplication prevention and plagiarism detection. In this paper, we propose a source code retrieval system based on a two-level fingerprint representation capturing, respectively, the structural and the semantic information within a source code. A sequence alignment technique is applied to these fingerprints in order to quantify the similarity between source code portions. The specific purpose of the system is to detect plagiarism and duplicated code between programs written in different programming languages belonging to the same class, such as C, C++, Java and C#. These four languages are supported by the current version of the system, which is designed so that it may easily be adapted to any programming language.
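
A hedged sketch of the alignment step using a generic similarity measure over token-level fingerprints; difflib's ratio stands in for the system's own two-level fingerprint alignment and scoring.

```python
from difflib import SequenceMatcher

# Toy structural fingerprints: language-neutral token sequences
# (illustrative; the real system derives structural and semantic levels).
fingerprint_a = ["func", "decl", "loop", "call", "assign", "return"]
fingerprint_b = ["func", "decl", "loop", "assign", "call", "return"]

def fingerprint_similarity(fp1, fp2):
    """Quantify similarity between two fingerprint sequences in [0, 1]."""
    return SequenceMatcher(None, fp1, fp2).ratio()

score = fingerprint_similarity(fingerprint_a, fingerprint_b)
print(f"similarity: {score:.2f}")   # a high score suggests duplicated code
```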

Keywords: Source code retrieval, plagiarism detection, clone detection, sequence alignment.

92 OCR for Script Identification of Hindi (Devnagari) Numerals using Feature Sub Selection by Means of End-Point with Neuro-Memetic Model

Authors: Banashree N. P., R. Vasanta

Abstract:

Recognition of Indian language scripts is a challenging problem. In Optical Character Recognition (OCR), a character or symbol to be recognized can be machine-printed or handwritten. There are several approaches to the recognition of numerals and characters, depending on the type of features extracted and the different ways of extracting them. This paper proposes a recognition scheme for handwritten Hindi (Devanagari) numerals, among the most widely used in the Indian subcontinent. Our work focuses on a global feature extraction technique using end-point information, which is extracted from images of isolated numerals. These feature vectors are fed to a neuro-memetic model [18] that has been trained to recognize a Hindi numeral. The prototype of the system has been tested on a variety of numeral images. In the proposed scheme, data sets are fed to a neuro-memetic algorithm, which identifies the rule with the highest fitness value (nearly 100%); the template associated with this rule gives the identified numeral. Experimental results show that the recognition rate is 92-97%, compared to other models.
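
A hedged sketch of the end-point feature idea on a binarized, thinned numeral image: a skeleton pixel with exactly one 8-connected neighbour is treated as a stroke end-point. The tiny array is an illustrative stand-in for a real numeral skeleton.

```python
import numpy as np

def end_points(skeleton):
    """Return (row, col) positions of skeleton pixels with exactly one
    8-connected neighbour, i.e. stroke end-points."""
    sk = (skeleton > 0).astype(int)
    padded = np.pad(sk, 1)
    points = []
    for r in range(sk.shape[0]):
        for c in range(sk.shape[1]):
            if sk[r, c]:
                neighbours = padded[r:r + 3, c:c + 3].sum() - 1
                if neighbours == 1:
                    points.append((r, c))
    return points

# A thinned vertical stroke: the two extremities should be reported.
stroke = np.zeros((5, 3), dtype=int)
stroke[:, 1] = 1
print(end_points(stroke))   # [(0, 1), (4, 1)]
```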

Keywords: OCR, Global Feature, End-Points, Neuro-Memetic model.

91 Identifying Key Success Factor For Supply Chain Management System in the Semiconductor Industry - A Focus Group Approach

Authors: T. P. Lu, B. N. Hwang, T. Z. Liou, Y. L. Lin

Abstract:

Developing a supply chain management (SCM) system is costly but important. However, because of its complicated nature, not many such projects are considered successful, and few research publications directly address the key success factors (KSFs) for implementing an SCM system. Motivated by the above, this research proposes a hierarchy of KSFs for SCM system implementation in the semiconductor industry using a two-step approach. First, a literature review establishes the initial hierarchy. The second step uses a focus group approach to finalize the proposed KSF hierarchy by extracting valuable experience from executives and managers who actively participated in a project that successfully established a seamless SCM integration between the world's largest semiconductor foundry and the world's largest assembly and testing company. Future project executives may refer to the resulting KSF hierarchy as a checklist for SCM system implementation in the semiconductor and related industries.

Keywords: Focus group, key success factors, supply chain management, semiconductor industry.

90 Automated Service Scene Detection for Badminton Game Analysis Using CHLAC and MRA

Authors: Fumito Yoshikawa, Takumi Kobayashi, Kenji Watanabe, Nobuyuki Otsu

Abstract:

Extracting in-play scenes from sport videos is essential for quantitative analysis and effective video browsing of sport activities. Game analysis of badminton, as with other racket sports, requires detecting the start and end of each rally period in an automated manner. This paper describes an automatic serve scene detection method employing cubic higher-order local auto-correlation (CHLAC) and multiple regression analysis (MRA). CHLAC can extract features of the postures and motions of multiple persons without segmenting and tracking each person, by virtue of shift-invariance and additivity, and requires no prior knowledge. The specific scenes, such as serves, are then detected by linear regression (MRA) on the CHLAC features. To demonstrate the effectiveness of our method, an experiment was conducted on video sequences of five badminton matches captured by a single ceiling camera. The averaged precision and recall rates for serve scene detection were 95.1% and 96.3%, respectively.

Keywords: Badminton, CHLAC, MRA, Video-based motion detection.

89 Robust Camera Calibration using Discrete Optimization

Authors: Stephan Rupp, Matthias Elter, Michael Breitung, Walter Zink, Christian Küblbeck

Abstract:

Camera calibration is an indispensable step for augmented reality or image-guided applications where quantitative information is to be derived from the images. Usually, a camera calibration is obtained by taking images of a special calibration object and extracting the image coordinates of the projected calibration marks, enabling the calculation of the projection from 3D world coordinates to 2D image coordinates. Such a procedure involves typical steps, including feature point localization in the acquired images, camera model fitting, correction of the distortion introduced by the optics and, finally, an optimization of the model's parameters. In this paper we propose to extend this list by a further step concerning the identification of the optimal subset of images yielding the smallest overall calibration error. For this, we present a Monte Carlo based algorithm along with a deterministic extension that automatically determines the images yielding an optimal calibration. Finally, we present results proving that the calibration can be significantly improved by automated image selection.
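
A hedged sketch of the Monte Carlo selection idea; `calibration_error` is a hypothetical callback standing in for a full calibration run on the chosen images, and the subset size, trial count and toy error are illustrative.

```python
import random

def monte_carlo_select(images, calibration_error, subset_size=10,
                       trials=200, seed=0):
    """Randomly sample image subsets and keep the one whose calibration
    error (as reported by the supplied callback) is smallest."""
    rng = random.Random(seed)
    best_subset, best_error = None, float("inf")
    for _ in range(trials):
        subset = rng.sample(images, subset_size)
        error = calibration_error(subset)   # e.g. mean reprojection error
        if error < best_error:
            best_subset, best_error = subset, error
    return best_subset, best_error

# Toy usage: images are indices and the "error" is a stand-in function.
images = list(range(30))
toy_error = lambda subset: sum(subset) / len(subset)
print(monte_carlo_select(images, toy_error, subset_size=5, trials=50))
```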

Keywords: Camera Calibration, Discrete Optimization, Monte Carlo Method.

88 Network Application Identification Based on Communication Characteristics of Application Messages

Authors: Yuji Waizumi, Yuya Tsukabe, Hiroshi Tsunoda, Yoshiaki Nemoto

Abstract:

Person-to-person information sharing is easily realized by P2P networks, in which servers are not essential. Leakage of information caused by malicious access to P2P networks has become a new social issue. To prevent information leakage, it is necessary to detect and block the traffic of P2P software. Since some P2P software can spoof port numbers, it is difficult to detect its traffic by using port numbers, and it is even more difficult to devise effective countermeasures because the protocols are not public. In this paper, a method for discriminating network applications based on the communication characteristics of application messages, without using port numbers, is proposed. The proposed method is based on the assumption that there are rules about the time intervals at which messages are transmitted in the application layer and about the number of packets needed to send one message. By extracting these rules from network traffic, the proposed method can discriminate applications without port numbers.
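
A hedged sketch of the feature extraction step on a toy packet trace; the timestamps, payload sizes and the fixed idle gap used to delimit messages are illustrative assumptions, not the paper's rule-learning procedure.

```python
import numpy as np

# Toy packet trace for one flow: (timestamp in seconds, payload bytes).
packets = [(0.00, 1200), (0.01, 1200), (0.02, 400),    # message 1
           (1.50, 1200), (1.51, 800),                  # message 2
           (3.02, 1200), (3.03, 1200), (3.04, 1200)]   # message 3

IDLE_GAP = 0.5   # seconds of silence assumed to separate two messages

def message_features(trace, idle_gap=IDLE_GAP):
    """Return (message start times, packets per message) for one flow."""
    starts, counts = [trace[0][0]], [1]
    for (prev_t, _), (t, _) in zip(trace, trace[1:]):
        if t - prev_t > idle_gap:
            starts.append(t)
            counts.append(1)
        else:
            counts[-1] += 1
    return np.array(starts), np.array(counts)

starts, counts = message_features(packets)
print("message intervals:", np.diff(starts))   # timing rule candidates
print("packets per message:", counts)          # packet-count rule candidates
```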

Keywords: Network Application Identification, Message Transition Pattern

87 Prototype of a Federative Factory Data Management for the Support of Factory Planning Processes

Authors: Christian Mosch, Reiner Anderl, Antonio Álvaro de Assis Moura, Klaus Schützer

Abstract:

Due to short product life cycles, an increasing variety of products and short cycles of leap innovations, manufacturing companies have to increase the flexibility of their factory structures. Flexibility of factory structures is based on defined factory planning processes in which product, process and resource data of various partial domains have to be considered. Factory planning processes can thus be characterized as iterative, interdisciplinary and participative processes [1]. To support the interdisciplinary and participative character of these planning processes, a federative factory data management (FFDM) is described as a holistic solution. FFDM has already been implemented in the form of a prototype, and the interim results of its development are shown in this paper. The principle is to extract product, process and resource data from documents of the various partial domains and provide them as web services on a server. The described data can be requested by the factory planner using an FFDM browser.

Keywords: BRAGECRIM, Factory Planning Process, Factory Data Management, Web Services.

86 High Secure Data Hiding Using Cropping Image and Least Significant Bit Steganography

Authors: Khalid A. Al-Afandy, El-Sayyed El-Rabaie, Osama Salah, Ahmed El-Mhalaway

Abstract:

This paper presents a highly secure data hiding technique using image cropping and Least Significant Bit (LSB) steganography. Crops at predefined secret coordinates are extracted from the cover image. The secret text message is divided into sections, the number of sections being equal to the number of image crops. Each section of the secret text message is embedded into an image crop in a secret sequence using the LSB technique. The embedding is done using the cover image color channels. The stego image is produced by reassembling the image and the stego crops. The results of the technique are compared to other state-of-the-art techniques. Evaluation is based on visual inspection to detect any degradation of the stego image, the difficulty of extracting the embedded data for any unauthorized viewer, the Peak Signal-to-Noise Ratio (PSNR) of the stego image, and the CPU time of the embedding algorithm. Experimental results show that the proposed technique is more secure than the other traditional techniques.
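
A hedged sketch of the basic LSB embedding step on a single crop of one colour channel; the crop coordinates and message bytes are illustrative, the extraction step is omitted, and the paper's secret coordinates and crop sequencing are not reproduced.

```python
import numpy as np

def embed_lsb(crop, message_bytes):
    """Hide message_bytes, one bit per pixel, in the least significant
    bits of a flattened image crop."""
    bits = np.unpackbits(np.frombuffer(message_bytes, dtype=np.uint8))
    flat = crop.flatten()
    if bits.size > flat.size:
        raise ValueError("crop too small for this message section")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(crop.shape)

# Illustrative cover channel and hypothetical secret crop coordinates.
channel = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
crop = channel[10:26, 20:36].copy()
channel[10:26, 20:36] = embed_lsb(crop, b"section-1")   # reassemble stego crop
print(int(np.abs(channel[10:26, 20:36].astype(int) - crop.astype(int)).max()))
```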

Keywords: Steganography, stego, LSB, crop.

85 Bridging Quantitative and Qualitative of Glaucoma Detection

Authors: Noor Elaiza Abdul Khalid, Noorhayati Mohamed Noor, Zamalia Mahmud, Saadiah Yahya, and Norharyati Md Ariff

Abstract:

Glaucoma diagnosis involves extracting three features from the fundus image: the optic cup, the optic disc and the vasculature. Present manual diagnosis is expensive, tedious and time consuming. A number of studies have been conducted to automate this process; however, the variability between the diagnostic capability of an automated system and that of an ophthalmologist has yet to be established. This paper discusses the efficiency and variability between ophthalmologists' opinions and a digital thresholding technique. The efficiency and variability measures are based on image quality grading: poor, satisfactory or good. The images are separated into four channels: gray, red, green and blue. A scientific investigation was conducted with three ophthalmologists who graded the images based on image quality. The images are thresholded using multi-thresholding and graded in the same way as done by the ophthalmologists. A comparison of the grades from the ophthalmologists and from thresholding is made. The results show that there is a small variability between the results of the ophthalmologists and those of digital thresholding.

Keywords: Digital Fundus Image, Glaucoma Detection, Multithresholding, Segmentation.
