Search results for: Ultrasound prostate segmentation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 435

75 Comparative Analysis of Different Page Ranking Algorithms

Authors: S. Prabha, K. Duraiswamy, J. Indhumathi

Abstract:

Search engines play an important role on the internet, retrieving the relevant documents from among a huge number of web pages. However, they return a large number of documents that are all related to the search topic. To retrieve the most meaningful documents for a search topic, ranking algorithms are used in information retrieval. Ranking the retrieved documents is one of the practical problems in information retrieval and data mining. This paper surveys various page ranking algorithms and page segmentation algorithms and compares those used for information retrieval. Diverse PageRank-based algorithms such as PageRank (PR), Weighted Page Rank (WPR), Weighted Page Content Rank (WPCR), Hyperlink-Induced Topic Search (HITS), Distance Rank, EigenRumor, Time Rank, Tag Rank, Relational-Based Page Rank and Query-Dependent Ranking algorithms are discussed and compared.
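
Several of the surveyed methods build on the classic PageRank iteration. The sketch below is a generic, hedged illustration of that update; the toy link graph, damping factor and convergence tolerance are arbitrary choices, not values from the paper.

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-9, max_iter=100):
    """Iterative PageRank; adj[i, j] = 1 if page i links to page j."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Pages without out-links spread their rank uniformly over all pages.
    trans = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - d) / n + d * (trans.T @ rank)
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank
    return rank

# Toy 4-page link graph, purely illustrative.
links = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
print(pagerank(links))
```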

Keywords: Information Retrieval, Web Page Ranking, search engine, web mining, page segmentation.

PDF Downloads: 4259
74 Quadratic Pulse Inversion Ultrasonic Imaging (QPI): A Two-Step Procedure for Optimization of Contrast Sensitivity and Specificity

Authors: Mamoun F. Al-Mistarihi

Abstract:

We have previously introduced an ultrasonic imaging approach that combines harmonic-sensitive pulse sequences with a post-beamforming quadratic kernel derived from a second-order Volterra filter (SOVF). This approach is designed to produce images with high sensitivity to nonlinear oscillations from microbubble ultrasound contrast agents (UCA) while maintaining high levels of noise rejection. In this paper, a two-step algorithm for computing the coefficients of the quadratic kernel is presented; it reduces the tissue component introduced by motion, maximizes noise rejection and increases specificity while optimizing sensitivity to the UCA. In the first step, quadratic kernels from individual singular modes of the PI data matrix are compared in terms of their ability to maximize the contrast-to-tissue ratio (CTR). In the second step, the quadratic kernels resulting in the highest CTR values are convolved. The imaging results indicate that a signal processing approach to this clinical challenge is feasible.

Keywords: Volterra Filter, Pulse Inversion, Ultrasonic Imaging, Contrast Agent.

PDF Downloads: 1560
73 Belief Theory-Based Classifiers Comparison for Static Human Body Postures Recognition in Video

Authors: V. Girondel, L. Bonnaud, A. Caplier, M. Rombaut

Abstract:

This paper presents results from various classifiers in a system that can automatically recognize four different static human body postures in video sequences. The considered postures are standing, sitting, squatting and lying. The three classifiers considered are a naïve one and two based on belief theory. The belief theory-based classifiers use either a classic or a restricted plausibility criterion to make a decision after data fusion. The data come from 2D segmentation of the people and from their face localization. Measurements consist of distances relative to a reference posture. The efficiency and the limits of the different classifiers in the recognition system are highlighted through the analysis of a large number of results. The system allows real-time processing.
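
For readers unfamiliar with belief-theory fusion, the sketch below shows Dempster's rule of combination on two hypothetical mass functions, one standing in for the 2D-segmentation cue and one for the face-localization cue; the posture labels and mass values are invented for illustration and are not the authors' measurements.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions defined over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}   # renormalize

# Hypothetical masses from two cues (body segmentation, face position) over four postures.
postures = frozenset({"standing", "sitting", "squatting", "lying"})
m_body = {frozenset({"standing"}): 0.6, frozenset({"sitting", "squatting"}): 0.3, postures: 0.1}
m_face = {frozenset({"standing"}): 0.5, frozenset({"lying"}): 0.2, postures: 0.3}
fused = dempster_combine(m_body, m_face)
best = max(fused, key=fused.get)      # a plausibility or belief criterion would refine this choice
print(fused, set(best))
```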

Keywords: Belief theory, classifiers comparison, data fusion, human motion analysis, real-time processing, static posture recognition.

PDF Downloads: 1487
72 Cardiac Disorder Classification Based On Extreme Learning Machine

Authors: Chul Kwak, Oh-Wook Kwon

Abstract:

In this paper, an extreme learning machine with an automatic segmentation algorithm is applied to heart disorder classification from heart sound signals. From continuous heart sound signals, the starting points of the first (S1) and second (S2) heart pulses are extracted and corrected using an inter-pulse histogram. From the corrected pulse positions, a single period of the heart sound signal is extracted and converted into a feature vector including the mel-scaled filter bank energy coefficients and the envelope coefficients of uniform-sized sub-segments. An extreme learning machine is used to classify the feature vector. In our cardiac disorder classification and detection experiments with 9 cardiac disorder categories, the proposed method shows significantly better performance than the multi-layer perceptron, support vector machine and hidden Markov model; it achieves a classification accuracy of 81.6% and a detection accuracy of 96.9%.
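
The extreme learning machine itself is simple to state: random hidden-layer weights and a closed-form least-squares solution for the output weights. A minimal sketch follows, using random dummy vectors in place of the paper's mel filter bank and envelope features.

```python
import numpy as np

class ELMClassifier:
    """Single-hidden-layer extreme learning machine: random input weights, least-squares output weights."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # random nonlinear projection
        T = np.eye(n_classes)[y]                # one-hot targets
        self.beta = np.linalg.pinv(H) @ T       # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Random dummy vectors standing in for mel filter-bank plus envelope features of one heart cycle.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 40))
y = rng.integers(0, 9, size=120)                # 9 cardiac disorder categories
model = ELMClassifier().fit(X, y)
print("training accuracy:", (model.predict(X) == y).mean())
```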

Keywords: Heart sound classification, extreme learning machine

PDF Downloads: 1903
71 Player Number Localization and Recognition in Soccer Video using HSV Color Space and Internal Contours

Authors: Matko Šaric, Hrvoje Dujmic, Vladan Papic, Nikola Rožic

Abstract:

Detecting player identity is a challenging task in sports video content analysis. In the case of soccer video, jersey number recognition is an effective and precise solution. Jersey numbers can be considered scene text, and difficulties in localization and recognition arise due to variations in orientation, size, illumination, motion, etc. This paper proposes a new method for player number localization and recognition. By observing hue, saturation and value for 50 different jersey examples, we noticed that a combination of low- and high-saturation pixels is most often used to separate the number from the jersey region. An image segmentation method based on this observation is introduced. Then, a novel method for player number localization based on internal contours is proposed. False number candidates are filtered using area and aspect ratio. Before OCR processing, the extracted numbers are enhanced using image smoothing and rotation normalization.
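
As an illustration of the saturation-based idea, the sketch below masks the highly saturated jersey region and keeps its internal contours as number candidates, filtered by area and aspect ratio. It assumes the OpenCV 4 findContours signature, and the threshold values and toy patch are invented, not the authors' settings.

```python
import cv2
import numpy as np

def candidate_number_regions(bgr, sat_thresh=80):
    """Mask the saturated jersey, then keep its internal contours (holes) as number candidates."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    jersey = (hsv[:, :, 1] >= sat_thresh).astype(np.uint8) * 255
    # With RETR_CCOMP, contours that have a parent are holes inside the jersey blob.
    contours, hierarchy = cv2.findContours(jersey, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for i, cnt in enumerate(contours):
        if hierarchy[0][i][3] == -1:          # skip external contours
            continue
        x, y, w, h = cv2.boundingRect(cnt)
        area, aspect = w * h, w / float(h)
        if area > 40 and 0.2 < aspect < 1.2:  # filter false candidates by area and aspect ratio
            candidates.append((x, y, w, h))
    return candidates

# Toy jersey patch: saturated red background with a low-saturation white number.
patch = np.zeros((64, 48, 3), np.uint8)
patch[:] = (0, 0, 220)                        # red jersey (high saturation)
cv2.putText(patch, "9", (10, 48), cv2.FONT_HERSHEY_SIMPLEX, 1.6, (255, 255, 255), 5)
print(candidate_number_regions(patch))
```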

Keywords: player number, soccer video, HSV color space

PDF Downloads: 1958
70 A New Approach for Counting Passersby Utilizing Space-Time Images

Authors: A. Elmarhomy, S. Karungaru, K. Terada

Abstract:

Understanding the number of people and the flow of persons is useful for efficient promotion of institution management and improvement of a company's sales. This paper introduces an automated method for counting passersby using virtual vertical measurement lines. The process of recognizing a passerby is carried out using an image sequence obtained from a USB camera. The space-time image represents the human regions, which are extracted by a segmentation process. To handle the problem of mismatching, different color spaces are used to perform template matching, which automatically chooses the best match to determine the passerby's direction and speed. A relation between passerby speed and the human-pixel area is used to distinguish one passerby from two. In the experiment, the camera is fixed at the entrance door of a hall in a side-viewing position. Finally, experimental results verify the effectiveness of the presented method, which correctly detects and counts passersby together with their direction with an accuracy of 97%.
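
A space-time image for a virtual vertical measurement line is simply the stack of that pixel column over time; the slope of the resulting streak encodes passerby speed and direction. Below is a minimal sketch on a synthetic sequence; the image sizes and the moving-blob model are arbitrary illustrations, not the paper's data.

```python
import numpy as np

def space_time_image(frames, line_x):
    """Stack the pixel column at x = line_x from every frame into a (time x height) image."""
    return np.stack([frame[:, line_x] for frame in frames], axis=0)

# Synthetic grayscale sequence: a bright vertical blob drifting across the virtual line.
H, W, T = 120, 160, 60
frames = np.zeros((T, H, W), dtype=np.uint8)
for t in range(T):
    x = 2 * t                                   # passerby position (2 px per frame)
    frames[t, 40:90, max(0, x - 3):x + 3] = 255
sti = space_time_image(frames, line_x=80)
# Frames where the line is crossed correspond to a passerby; the streak slope encodes speed.
crossing_frames = np.where(sti.sum(axis=1) > 0)[0]
print(sti.shape, crossing_frames.min(), crossing_frames.max())
```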

Keywords: counting passersby, virtual-vertical measurement line, passerby speed, space-time image

PDF Downloads: 1385
69 Posture Recognition using Combined Statistical and Geometrical Feature Vectors based on SVM

Authors: Omer Rashid, Ayoub Al-Hamadi, Axel Panning, Bernd Michaelis

Abstract:

It is hard to perceive the interaction process with machines when visual information is not available. In this paper, we address this issue by providing interaction through visual techniques. Posture recognition is performed for American Sign Language to recognize static alphabets and numbers. 3D information is exploited to segment the hands and face using a normal Gaussian distribution and depth information. Features for posture recognition are computed using statistical and geometrical properties that are translation, rotation and scale invariant. Hu moments as statistical features, and circularity and rectangularity as geometrical features, are incorporated to build the feature vectors. These feature vectors are used to train an SVM classifier that recognizes static alphabets and numbers. For the alphabets, curvature analysis is carried out to reduce misclassifications. The experimental results show that the proposed system recognizes posture symbols with recognition rates of 98.65% and 98.6% for ASL alphabets and numbers, respectively.
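
A hedged sketch of the feature-construction and classification stage follows: Hu moments as the statistical part, circularity and rectangularity as the geometrical part, fed to an SVM. It uses OpenCV 4 and scikit-learn, with toy masks standing in for segmented hand regions; it is not the authors' exact pipeline (the depth-based segmentation and curvature analysis are omitted).

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def posture_features(mask):
    """Hu moments (statistical) plus circularity and rectangularity (geometrical)."""
    hu = cv2.HuMoments(cv2.moments(mask)).ravel()
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnt = max(cnts, key=cv2.contourArea)
    area, perim = cv2.contourArea(cnt), cv2.arcLength(cnt, True)
    x, y, w, h = cv2.boundingRect(cnt)
    circularity = 4 * np.pi * area / (perim ** 2 + 1e-9)
    rectangularity = area / float(w * h)
    return np.concatenate([hu, [circularity, rectangularity]])

# Two toy masks standing in for segmented hand postures.
disk, box = np.zeros((64, 64), np.uint8), np.zeros((64, 64), np.uint8)
cv2.circle(disk, (32, 32), 20, 255, -1)
cv2.rectangle(box, (12, 12), (52, 52), 255, -1)
X = np.array([posture_features(disk), posture_features(box)])
y = np.array([0, 1])
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict(X))
```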

Keywords: Feature Extraction, Posture Recognition, Pattern Recognition, Application.

PDF Downloads: 1494
68 Fingerprint Verification System Using Minutiae Extraction Technique

Authors: Manvjeet Kaur, Mukhwinder Singh, Akshay Girdhar, Parvinder S. Sandhu

Abstract:

Most fingerprint recognition techniques are based on minutiae matching and have been well studied. However, this technology still suffers from problems associated with handling poor-quality impressions. One problem besetting fingerprint matching is distortion. Distortion changes both geometric position and orientation, and leads to difficulties in establishing a match among multiple impressions acquired from the same fingertip. Marking all the minutiae accurately while rejecting false minutiae is another issue still under research. Our work combines many methods to build a minutia extractor and a minutia matcher. The combination of multiple methods comes from a wide investigation of research papers. Some novel changes are also used in this work, such as segmentation using morphological operations, improved thinning, false minutiae removal methods, minutia marking with special consideration of triple branch counting, minutia unification by decomposing a branch into three terminations, and matching in a unified x-y coordinate system after a two-step transformation.
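
Minutia marking on a thinned ridge map is usually done with the crossing number, which the keywords also mention: a crossing number of 1 marks a ridge ending and 3 marks a bifurcation. The sketch below runs on a small synthetic skeleton and is an illustration of that rule, not the authors' full extractor.

```python
import numpy as np

def crossing_number_minutiae(skel):
    """Classify pixels of a thinned (one-pixel-wide) binary ridge map by crossing number:
    CN == 1 -> ridge ending, CN == 3 -> bifurcation."""
    endings, bifurcations = [], []
    rows, cols = skel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if not skel[r, c]:
                continue
            # 8-neighbours listed in circular order
            p = [skel[r-1, c], skel[r-1, c+1], skel[r, c+1], skel[r+1, c+1],
                 skel[r+1, c], skel[r+1, c-1], skel[r, c-1], skel[r-1, c-1]]
            cn = sum(abs(int(p[i]) - int(p[(i + 1) % 8])) for i in range(8)) // 2
            if cn == 1:
                endings.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return endings, bifurcations

# Tiny synthetic skeleton: a horizontal ridge that forks into two branches.
skel = np.zeros((7, 9), dtype=np.uint8)
skel[3, 1:5] = 1
skel[2, 5], skel[1, 6] = 1, 1
skel[4, 5], skel[5, 6] = 1, 1
print(crossing_number_minutiae(skel))
```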

Keywords: Biometrics, Minutiae, Crossing number, False Accept Rate (FAR), False Reject Rate (FRR).

PDF Downloads: 6534
67 OCIRS: An Ontology-based Chinese Idioms Retrieval System

Authors: Hu Haibo, Tu Chunmei, Fu Chunlei, Fu Li, Mao Fan, Ma Yuan

Abstract:

Chinese idioms are a type of traditional Chinese idiomatic expression with specific meanings and stereotyped structures, which are widely used in classical Chinese and are still common in vernacular written and spoken Chinese today. Currently, Chinese idioms are retrieved from glossaries using a key character or key word via a morphology or pronunciation index, which cannot meet the need for semantic search. OCIRS is proposed to find the desired idiom when users know only its meaning, without any key character or key word. The user's request, given as a sentence or phrase, is first analyzed grammatically by word segmentation, key word extraction and semantic similarity computation, and can thus be mapped to the idiom domain ontology, which is constructed to provide ample semantic relations and to facilitate description logics-based reasoning for idiom retrieval. The experimental evaluation shows that OCIRS realizes the function of searching idioms via semantics, obtaining preliminary results as requested by the users.

Keywords: Chinese idiom, idiom retrieval, semantic searching, ontology, semantic similarity.

PDF Downloads: 1684
66 Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification

Authors: Samiah Alammari, Nassim Ammour

Abstract:

When a massive number of tasks is provided successively to a deep learning process, good model performance requires preserving the data of previous tasks in order to retrain the model for each upcoming classification; otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a successful continual learning deep model for remote sensing hyperspectral image region classification. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes during the new task learning, and the second module tries to learn how to replicate the data of the previous tasks by discovering the latent data structure of the new task dataset. We conduct experiments on the Indian Pines hyperspectral image (HSI) dataset. The results confirm the capability of the proposed method.

Keywords: Continual learning, data reconstruction, remote sensing, hyperspectral image segmentation.

PDF Downloads: 179
65 Data Gathering and Analysis for Arabic Historical Documents

Authors: Ali Dulla

Abstract:

This paper introduces a new dataset (and the methodology used to generate it) based on a wide range of historical Arabic documents containing clean data and simple, homogeneous page layouts. The experiments are implemented on printed and handwritten documents obtained from important libraries such as the Qatar Digital Library, the British Library and the Library of Congress. We have gathered and annotated 150 archival document images from different locations and time periods, based on documents from the 17th to the 19th century. The dataset comprises differing page layouts and degradations that challenge text line segmentation methods. Ground truth is produced using the Aletheia tool by PRImA and stored in an XML representation, in the PAGE (Page Analysis and Ground truth Elements) format. The dataset will be easily available to researchers worldwide for research into the problems affecting historical Arabic documents, such as geometric correction.

Keywords: Dataset production, ground truth production, historical documents, arbitrary warping, geometric correction.

PDF Downloads: 833
64 Smart Cane Assisted Mobility for the Visually Impaired

Authors: Jayant Sakhardande, Pratik Pattanayak, Mita Bhowmick

Abstract:

Efficient reintegration of disabled people into the family and society requires assisting their diminished functions or replacing functions that are totally lost. Assistive technology helps in neutralizing the impairment. Recent advancements in embedded systems have opened up a vast area of research and development for affordable and portable assistive devices for the visually impaired. Although there are many assistive devices on the market that are able to detect obstacles, and much research and development is in progress, the cost, size, intrusiveness and steep learning curve of these devices prevent the visually impaired from taking advantage of them. This project aims at the design and implementation of a detachable unit that is robust, low cost and user friendly, thus augmenting the functionality of the existing white cane to provide above-knee obstacle detection. The designed obstruction detector uses ultrasonic sensors to detect obstructions before direct contact, and gives haptic feedback to the user in accordance with the position of the obstacle.
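
The core distance computation of an ultrasonic ranger is a one-liner: half the round-trip echo time multiplied by the speed of sound, which can then be mapped to a vibration level. The sketch below is illustrative only; the alert range and the linear intensity mapping are assumptions, not the project's actual firmware.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_to_distance(echo_time_s):
    """Round-trip echo time from an ultrasonic ranger to one-way distance in metres."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

def haptic_level(distance_m, alert_range_m=1.5):
    """Map distance to a vibration intensity in [0, 1]; closer obstacles vibrate harder."""
    if distance_m >= alert_range_m:
        return 0.0
    return 1.0 - distance_m / alert_range_m

for t in (0.002, 0.006, 0.012):        # example echo times in seconds
    d = echo_to_distance(t)
    print(f"{d:.2f} m -> vibration {haptic_level(d):.2f}")
```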

Keywords: Visually impaired, Ultrasonic sensors, Obstruction detector, Mobility aid

PDF Downloads: 6030
63 Segmentation and Recognition of Handwritten Numeric Chains

Authors: Salim Ouchtati, Bedda Mouldi, Abderrazak Lachouri

Abstract:

In this paper, we present an offline system for the recognition of handwritten numeric chains. Our work is divided into two main parts. The first part is the realization of a recognition system for isolated handwritten digits. In this case, the study is based mainly on evaluating the performance of a neural network trained with the gradient back-propagation algorithm. The parameters used to form the input vector of the neural network are extracted from the binary images of the digits by several methods: the distribution sequence, the Barr features and the centred moments of the different projections and profiles. The second part is the extension of our system to the reading of handwritten numeric chains consisting of a variable number of digits. The vertical projection is used to segment the numeric chain into isolated digits, and every digit (or segment) is presented separately to the input of the system developed in the first part (the recognition system for isolated handwritten digits). The result of the recognition of the numeric chain is displayed at the output of the overall system.
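
The chain-to-digit segmentation step can be sketched directly from the description: compute the vertical (column-wise) ink projection and cut wherever it drops to zero. A minimal example on a toy binary image follows; the minimum-width parameter is an assumption, not a value from the paper.

```python
import numpy as np

def segment_digits(binary_chain, min_width=2):
    """Split a binary image of a numeric chain into isolated digits using the vertical projection."""
    projection = binary_chain.sum(axis=0)          # ink count per column
    cols_with_ink = projection > 0
    segments, start = [], None
    for x, has_ink in enumerate(np.append(cols_with_ink, False)):
        if has_ink and start is None:
            start = x
        elif not has_ink and start is not None:
            if x - start >= min_width:
                segments.append(binary_chain[:, start:x])
            start = None
    return segments

# Toy chain: three "digits" separated by blank columns.
chain = np.zeros((10, 30), dtype=np.uint8)
chain[2:8, 2:6] = 1
chain[2:8, 11:16] = 1
chain[2:8, 22:27] = 1
print([d.shape for d in segment_digits(chain)])
```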

Keywords: Optical Character Recognition, Neural networks, Barr features, Image processing, Pattern Recognition, Features extraction.

PDF Downloads: 1409
62 Automatic Detection of Proliferative Cells in Immunohistochemically Images of Meningioma Using Fuzzy C-Means Clustering and HSV Color Space

Authors: Vahid Anari, Mina Bakhshi

Abstract:

Visual search and identification of immunohistochemically stained meningioma tissue is performed manually in pathology laboratories to detect and diagnose the type of meningioma cancer. This task is very tedious and time-consuming. Moreover, because of the cells' complex nature, it remains a challenging task to segment cells from the background and analyze them automatically. In this paper, we develop and test a computerized scheme that can automatically identify cells in microscopic images of meningioma and classify them into positive (proliferative) and negative (normal) cells. A dataset of 150 images is used to test the scheme. The scheme uses the fuzzy c-means algorithm as a color clustering method based on the perceptually uniform hue, saturation, value (HSV) color space. Since the cells are distinguishable by the human eye, the accuracy and stability of the algorithm are quantitatively compared through application to a wide variety of real images.
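
A plain fuzzy c-means implementation over HSV feature vectors is short enough to sketch; the cluster count, fuzzifier and the "stained" versus "counterstained" HSV values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: returns cluster centres and the fuzzy membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)) * np.sum(dist ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centres, U

# HSV feature vectors standing in for stained-cell pixels (illustrative values only).
rng = np.random.default_rng(1)
positive = rng.normal([0.05, 0.7, 0.4], 0.02, size=(100, 3))   # brown-ish stained nuclei
negative = rng.normal([0.60, 0.5, 0.7], 0.02, size=(100, 3))   # blue-ish counterstained nuclei
pixels = np.vstack([positive, negative])
centres, memberships = fuzzy_cmeans(pixels, n_clusters=2)
labels = memberships.argmax(axis=1)
print(centres.round(2), labels[:5], labels[-5:])
```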

Keywords: Positive cell, color segmentation, HSV color space, immunohistochemistry, meningioma, thresholding, fuzzy c-means.

PDF Downloads: 646
61 Talent in Autism: Cognitive Style based on Weak Central Coherence and Special Sensory Characteristics in State of Kuwait: Case Study

Authors: Mariam Abdulaziz Y.Esmaeel

Abstract:

The study aimed to identify the nature of autistic talent, the manifestations of weak central coherence, and the sensory characteristics of talented autistic individuals. The case study consisted of four talented autistic males: two in drawing, one in clay formation and one in jigsaw puzzles. The data collection tools were the Group Embedded Figures Test, the Block Design Test, the Sensory Profile Checklist Revised, interview forms and direct observation. Results indicated that talent among autistic individuals emerges in a limited domain, is extraordinary in each case, and shows overlapping construction properties. The cases show three perceptual aspects of weak central coherence: weakness in visual-spatial constructional coherence, weakness in perceptual coherence and weakness in verbal-semantic coherence. Moreover, the majority of the study cases used the three strategies of weak central coherence (segmentation, obliqueness and rotation). As for the sensory characteristics, all study cases have a number of these characteristics, which emerge especially in the visual system.

Keywords: Autism, Central Coherence, Savant, Sensory characteristics, Talent.

PDF Downloads: 2651
60 Bridging Quantitative and Qualitative of Glaucoma Detection

Authors: Noor Elaiza Abdul Khalid, Noorhayati Mohamed Noor, Zamalia Mahmud, Saadiah Yahya, and Norharyati Md Ariff

Abstract:

Glaucoma diagnosis involves extracting three features of the fundus image: the optic cup, the optic disc and the vasculature. Present manual diagnosis is expensive, tedious and time-consuming. A number of studies have been conducted to automate this process. However, the variability between the diagnostic capability of an automated system and an ophthalmologist has yet to be established. This paper discusses the efficiency and variability between ophthalmologist opinion and a digital technique, thresholding. The efficiency and variability measures are based on image quality grading: poor, satisfactory or good. The images are separated into four channels: gray, red, green and blue. An investigation was conducted with three ophthalmologists who graded the images based on image quality. The images were thresholded using multithresholding and graded in the same way as by the ophthalmologists. A comparison of the grades from the ophthalmologists and from thresholding is made. The results show a small variability between the results of the ophthalmologists and digital thresholding.
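
As a stand-in for the multithresholding step, the sketch below applies scikit-image's multi-Otsu thresholding to one synthetic fundus channel and returns a region label map; the class count and the synthetic image are assumptions, not the authors' exact procedure.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def label_regions(channel, classes=3):
    """Multithreshold one fundus-image channel and return a region label map (0..classes-1)."""
    thresholds = threshold_multiotsu(channel, classes=classes)
    return np.digitize(channel, bins=thresholds)

# Synthetic green channel: dark background, brighter optic disc, brightest optic cup.
rng = np.random.default_rng(0)
green = rng.normal(60, 5, size=(128, 128))
green[40:90, 40:90] += 80     # disc
green[55:75, 55:75] += 60     # cup
labels = label_regions(green.astype(np.uint8))
print(np.bincount(labels.ravel()))
```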

Keywords: Digital Fundus Image, Glaucoma Detection, Multithresholding, Segmentation.

PDF Downloads: 2016
59 Providing Medical Information in Braille: Research and Development of Automatic Braille Translation Program for Japanese “eBraille”

Authors: Aki Sugano, Mika Ohta, Mineko Ikegami, Kenji Miura, Sayo Tsukamoto, Akihiro Ichinose, Toshiko Ohshima, Eiichi Maeda, Masako Matsuura, Yutaka Takao

Abstract:

Along with advances in medicine, providing medical information to individual patients is becoming more important. In Japan, such information is hardly ever provided in Braille to blind and partially sighted people. We are therefore researching and developing a Web-based automatic translation program, “eBraille”, to translate Japanese text into Japanese Braille. We first analyzed the Japanese transcription rules in order to implement them in our program. We then added medical words to the program's dictionary to improve its translation accuracy for medical text. Finally, we examined the efficacy of statistical learning models (SLMs) for further increasing word segmentation accuracy in Braille translation. As a result, eBraille achieved the highest translation accuracy in a comparison with other translation programs, improved its accuracy for medical text, and is used to produce hospital brochures in Braille for outpatients and inpatients.

Keywords: Automatic Braille translation, Medical text, Partially sighted people.

PDF Downloads: 1574
58 Achieving Shear Wave Elastography by a Three-element Probe for Wearable Human-machine Interface

Authors: Jipeng Yan, Xingchen Yang, Xiaowei Zhou, Mengxing Tang, Honghai Liu

Abstract:

The shear elastic modulus of skeletal muscles can be obtained by shear wave elastography (SWE) and has been linearly related to muscle force. However, SWE is currently implemented using array probes. The price and volume of these probes and their driving equipment prevent SWE from being used in wearable human-machine interfaces (HMI). Moreover, the beamforming processing required for array probes reduces real-time performance. To achieve SWE with wearable HMIs, a customized three-element probe is adopted in this work, with one element for acoustic radiation force generation and the other two for shear wave tracking. In-phase quadrature demodulation and 2D autocorrelation are adopted to estimate the velocities of tissue along the sound beams of the latter two elements. The shear wave speed is calculated from the phase shift between the tissue velocities. Three agar phantoms with different elasticities were made by changing the weight of agar. The shear elastic moduli of the phantoms were measured as 8.98, 23.06 and 36.74 kPa at a depth of 7.5 mm, respectively. This work verifies the feasibility of measuring the shear elastic modulus with wearable devices.
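
The conversion from tracked phase shift to shear wave speed and then to shear modulus follows standard relations: the time delay comes from the phase shift and the modulation frequency, mu = rho * c^2, and Young's modulus is commonly approximated as E = 3 * mu for soft tissue. The numbers below are illustrative, not the paper's measurements.

```python
import math

RHO = 1000.0  # kg/m^3, assumed density of soft tissue or agar phantom

def shear_wave_speed(delta_phase_rad, vib_freq_hz, track_spacing_m):
    """Shear wave speed from the phase shift of the tissue-velocity signal between two tracking beams."""
    delta_t = delta_phase_rad / (2.0 * math.pi * vib_freq_hz)
    return track_spacing_m / delta_t

def shear_modulus(speed_m_s, rho=RHO):
    """mu = rho * c^2; Young's modulus for soft tissue is roughly E = 3 * mu."""
    return rho * speed_m_s ** 2

# Illustrative numbers only, not measurements from the paper.
c = shear_wave_speed(delta_phase_rad=1.0, vib_freq_hz=100.0, track_spacing_m=0.005)
print(f"c = {c:.2f} m/s, mu = {shear_modulus(c) / 1e3:.1f} kPa, E ~ {3 * shear_modulus(c) / 1e3:.1f} kPa")
```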

Keywords: Shear elastic modulus, skeletal muscle, ultrasound, wearable human-machine interface.

PDF Downloads: 733
57 Image Contrast Enhancement based Sub-histogram Equalization Technique without Over-equalization Noise

Authors: Hyunsup Yoon, Youngjoon Han, Hernsoo Hahn

Abstract:

In order to enhance the contrast in regions where the pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize these regions, producing pixels that are too bright or too dark, while local equalization schemes produce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram within limited extents, considering its mean and variance. The final image is determined as the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also does not lose feature information in low-density histogram regions, since these regions are equalized separately. The paper also describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested with more than 100 images having various contrasts, and the results are compared to conventional approaches to show its superiority.
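
A simplified sketch of the band-wise idea follows: split the intensity range at given segmentation points and equalize each band only within its own range, so one band cannot spill into another. The paper's mean/variance-based limits and weighted recombination are omitted here, and the split points are arbitrary.

```python
import numpy as np

def sub_histogram_equalize(img, split_points=(85, 170)):
    """Equalize each brightness band separately so one band cannot spill into another."""
    out = np.zeros_like(img)
    edges = [0, *split_points, 256]
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        hist, _ = np.histogram(img[mask], bins=hi - lo, range=(lo, hi))
        cdf = hist.cumsum() / hist.sum()
        out[mask] = (lo + np.round(cdf[img[mask] - lo] * (hi - 1 - lo))).astype(img.dtype)
    return out

# Low-contrast synthetic image: values bunched in a narrow mid-grey band.
rng = np.random.default_rng(0)
img = np.clip(rng.normal(120, 10, size=(64, 64)), 0, 255).astype(np.uint8)
eq = sub_histogram_equalize(img)
print(img.min(), img.max(), "->", eq.min(), eq.max())
```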

Keywords: Contrast Enhancement, Histogram Equalization, Histogram Region Equalization, Equalization Noise

PDF Downloads: 3385
56 Weld Defect Detection in Industrial Radiography Based Digital Image Processing

Authors: N. Nacereddine, M. Zelmat, S. S. Belaïfa, M. Tridi

Abstract:

Industrial radiography is a well-established technique for the identification and evaluation of discontinuities, or defects, such as cracks, porosity and foreign inclusions found in welded joints. Although this technique has been well developed, improving both the inspection process and operating time, it suffers from several drawbacks. The poor quality of radiographic images is due to the physical nature of radiography as well as the small size of the defects and their poor orientation relative to the size and thickness of the evaluated parts. Digital image processing techniques allow the interpretation of the image to be automated, reducing reliance on human operators and making the inspection system more reliable, reproducible and faster. This paper describes our attempt to develop and implement digital image processing algorithms for the purpose of automatic defect detection in radiographic images. Because of the complex nature of the considered images, and so that the detected defect region represents the real defect as accurately as possible, the global and local preprocessing and segmentation methods must be chosen appropriately.

Keywords: Digital image processing, global and local approaches, radiographic film, weld defect.

PDF Downloads: 4031
55 Novel Security Strategy for Real Time Digital Videos

Authors: Prakash Devale, R. S. Prasad, Amol Dhumane, Pritesh Patil

Abstract:

Nowadays, video data embedding is a very challenging and interesting approach to keeping real-time video data secure, and the technique can be implemented and used in high-level applications. The rate-distortion behaviour of an image is not fixed, because the gain provided by accurate image frame segmentation is balanced by the inefficiency of coding objects of arbitrary shape, along with factors such as losses that depend on both the coding scheme and the object structure. By using a rate controller in association with the encoder, one can dynamically adjust the target bitrate. This paper discusses keeping videos secure by mixing signature data into the original video with negligible distortion, keeping the steganographic video as close as possible to the quality of the original video. We propose a method for embedding the signature data into separate video frames by means of the block Discrete Cosine Transform. These frames are then encoded using real-time H.264 encoding. After processing, recovery of the original video and the signature data at the receiver end is proposed.
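
To make the block-DCT embedding idea concrete, the sketch below hides one signature bit per 8x8 block by forcing the parity of a quantised mid-frequency DCT coefficient (a simple QIM-style rule chosen for illustration). The step size, coefficient position and the absence of the H.264 stage are assumptions, not the authors' scheme.

```python
import cv2
import numpy as np

def embed_bits(frame_gray, bits, step=24.0, coeff=(4, 3)):
    """Embed one bit per 8x8 block by forcing the parity of a quantised mid-frequency DCT coefficient."""
    out = frame_gray.astype(np.float32).copy()
    idx = 0
    for r in range(0, out.shape[0] - 7, 8):
        for c in range(0, out.shape[1] - 7, 8):
            if idx >= len(bits):
                break
            block = cv2.dct(out[r:r + 8, c:c + 8])
            q = int(np.round(block[coeff] / step))
            if q % 2 != bits[idx]:
                q += 1                          # adjust parity to encode the bit
            block[coeff] = q * step
            out[r:r + 8, c:c + 8] = cv2.idct(block)
            idx += 1
    return np.clip(out, 0, 255).astype(np.uint8)

def extract_bits(frame_gray, n_bits, step=24.0, coeff=(4, 3)):
    """Recover the embedded bits from the parity of the same quantised coefficient."""
    f = frame_gray.astype(np.float32)
    bits = []
    for r in range(0, f.shape[0] - 7, 8):
        for c in range(0, f.shape[1] - 7, 8):
            if len(bits) == n_bits:
                return bits
            q = int(np.round(cv2.dct(f[r:r + 8, c:c + 8])[coeff] / step))
            bits.append(q % 2)
    return bits

rng = np.random.default_rng(0)
frame = rng.integers(60, 200, size=(64, 64), dtype=np.uint8)   # stand-in for one video frame
signature = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_bits(frame, signature)
print(extract_bits(marked, len(signature)) == signature)
```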

Keywords: Data Hiding, Digital Watermarking, video coding H.264, Rate Control, Block DCT.

PDF Downloads: 1542
54 On-line Lao Handwritten Recognition with Proportional Invariant Feature

Authors: Khampheth Bounnady, Boontee Kruatrachue, Somkiat Wangsiripitak

Abstract:

This paper proposes a high-level feature for online Lao handwriting recognition. The feature must be high level enough that it does not change when characters are written by different persons at different speeds and proportions (shorter or longer strokes, heads, tails, loops, curves). In this high-level feature, a character is divided into a sequence of curve segments, where a segment starts where the curve reverses rotation (between counterclockwise and clockwise). In each segment, the following features are gathered: the cumulative change in curve direction (negative for clockwise), the cumulative curve length, and the cumulative lengths from left to right, right to left, top to bottom and bottom to top (the cumulative change along the X and Y axes of the segment). This feature is simple yet robust enough for high-accuracy recognition. It can be gathered by parsing the original time-sampled sequence of X, Y pen locations without re-sampling. We also experimented with other segmentation points, such as the maximum curvature point widely used by other researchers. Experimental results show a recognition rate of 94.62%, compared to 75.07% when using maximum curvature points. This is due to the large variation of turning points in handwriting.
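
The segment-and-accumulate feature can be sketched from the description: cut the pen trajectory where the turn direction (the sign of the cross product of successive direction vectors) flips, then accumulate direction change, arc length and X/Y extents per segment. The tolerance and the synthetic stroke below are assumptions, not the authors' data.

```python
import numpy as np

def stroke_segments(points):
    """Split an online stroke where the curve reverses rotation (sign of the turn flips)."""
    pts = np.asarray(points, dtype=float)
    d = np.diff(pts, axis=0)                              # direction vectors between samples
    cross = d[:-1, 0] * d[1:, 1] - d[:-1, 1] * d[1:, 0]   # z of the cross product: turn direction
    cuts, last_sign = [], 0
    for i, c in enumerate(cross):
        s = int(np.sign(c)) if abs(c) > 1e-9 else 0
        if s != 0 and last_sign != 0 and s != last_sign:
            cuts.append(i + 1)                            # rotation reversed: start a new segment
        if s != 0:
            last_sign = s
    return np.split(pts, cuts)

def segment_features(seg):
    """Cumulative direction change, curve length and signed X/Y extents for one segment."""
    d = np.diff(seg, axis=0)
    angles = np.arctan2(d[:, 1], d[:, 0])
    turn = np.sum(np.diff(np.unwrap(angles)))            # cumulative change in direction
    length = np.sum(np.linalg.norm(d, axis=1))
    return turn, length, np.sum(d[:, 0]), np.sum(d[:, 1])

# Toy pen trajectory: a half-circle followed by a reversed half-circle (an "S"-like stroke).
t = np.linspace(0, np.pi, 20)
stroke = np.concatenate([np.c_[np.cos(t), np.sin(t)],
                         np.c_[np.cos(t) - 2, -np.sin(t)][1:]])
for seg in stroke_segments(stroke):
    print([round(v, 2) for v in segment_features(seg)])
```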

Keywords: Handwritten feature, chain code, Lao handwritten recognition.

PDF Downloads: 2006
53 A Structural Support Vector Machine Approach for Biometric Recognition

Authors: Vishal Awasthi, Atul Kumar Agnihotri

Abstract:

The face is a non-intrusive, strong biometric for distinguishing an original face from a dummy produced by different artificial means. Face recognition is extremely important in the contexts of computer vision, psychology, surveillance, pattern recognition, neural networks and content-based video processing. The availability of a widespread face database is crucial to test the performance of face recognition algorithms. The openly available face databases include face images with a wide range of poses, illumination, gestures and face occlusions, but there is no dummy face database accessible in the public domain. This paper presents a face detection algorithm based on image segmentation in terms of distance from a fixed point and on template matching methods. The proposed work uses an appropriate number of nodal points, resulting in improved outcomes in terms of face recognition and detection. The time taken to identify and extract distinctive facial features is improved to the range of 90 to 110 s, with an efficiency increment of 3%.

Keywords: Face recognition, Principal Component Analysis, PCA, Linear Discriminant Analysis, LDA, Improved Support Vector Machine, iSVM, elastic bunch mapping technique.

PDF Downloads: 432
52 Effect Comparison of Speckle Noise Reduction Filters on 2D-Echocardiographic Images

Authors: Faten A. Dawood, Rahmita W. Rahmat, Suhaini B. Kadiman, Lili N. Abdullah, Mohd D. Zamrin

Abstract:

Echocardiography is one of the most common diagnostic tests, widely used for assessing abnormalities of regional heart ventricle function. The main goal of the image enhancement task in 2D echocardiography (2DE) is to address two major problems: speckle noise and low image quality. Speckle noise reduction is therefore an important pre-processing step used to reduce distortion effects in 2DE image segmentation. In this paper, we present common filters based on low-pass spatial smoothing, such as the mean, Gaussian and median filters; the Laplacian filter is used as a high-pass sharpening filter. A comparative analysis is presented to test the effectiveness of these filters applied to original 2DE images of 4-chamber and 2-chamber views. Three statistical quantity measures, root mean square error (RMSE), peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR), are used to evaluate the filter performance quantitatively on the enhanced output image.
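
The three quality measures are standard and easy to reproduce; the sketch below computes RMSE, PSNR and SNR for a synthetic speckled image before and after a 3x3 median filter. The noise model and image sizes are illustrative, not the paper's echocardiographic data.

```python
import numpy as np
from scipy.ndimage import median_filter

def rmse(ref, img):
    return float(np.sqrt(np.mean((ref.astype(float) - img.astype(float)) ** 2)))

def psnr(ref, img, peak=255.0):
    return float(20 * np.log10(peak / rmse(ref, img)))

def snr(ref, img):
    noise = ref.astype(float) - img.astype(float)
    return float(10 * np.log10(np.sum(ref.astype(float) ** 2) / np.sum(noise ** 2)))

# Synthetic "clean" image plus multiplicative speckle-like noise, then a 3x3 median filter.
rng = np.random.default_rng(0)
clean = np.clip(rng.normal(128, 20, size=(128, 128)), 0, 255)
speckled = np.clip(clean * rng.gamma(shape=10, scale=0.1, size=clean.shape), 0, 255)
filtered = median_filter(speckled, size=3)
for name, img in [("speckled", speckled), ("median 3x3", filtered)]:
    print(name, round(rmse(clean, img), 2), round(psnr(clean, img), 2), round(snr(clean, img), 2))
```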

Keywords: Gaussian operator, median filter, speckle texture, peak signal-to-noise ratio

PDF Downloads: 1964
51 Automated Heart Sound Classification from Unsegmented Phonocardiogram Signals Using Time Frequency Features

Authors: Nadia Masood Khan, Muhammad Salman Khan, Gul Muhammad Khan

Abstract:

Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart diseases, there is a need to develop computer-aided detection/diagnosis (CAD) systems to assist cardiologists in interpreting heart sounds and provide second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. A support vector machine (SVM), an artificial neural network (ANN) and a cartesian genetic programming evolved artificial neural network (CGPANN) are explored in this study without the application of any segmentation algorithm. The signals are first pre-processed to remove unwanted frequencies. Both time and frequency domain features are then extracted for training the different models. The algorithms are tested in multiple scenarios and their strengths and weaknesses are discussed. Results indicate that the SVM outperforms the rest with an accuracy of 73.64%.

Keywords: Pattern recognition, machine learning, computer aided diagnosis, heart sound classification, and feature extraction.

PDF Downloads: 1242
50 Influence of Optical Fluence Distribution on Photoacoustic Imaging

Authors: Mohamed K. Metwally, Sherif H. El-Gohary, Kyung Min Byun, Seung Moo Han, Soo Yeol Lee, Min Hyoung Cho, Gon Khang, Jinsung Cho, Tae-Seong Kim

Abstract:

Photoacoustic imaging (PAI) is a non-invasive and non-ionizing imaging modality that combines the absorption contrast of light with ultrasound resolution. A laser is used to deposit optical energy into a target (i.e., optical fluence). Consequently, the target temperature rises and thermal expansion occurs, which generates a PA signal. In general, most image reconstruction algorithms for PAI assume uniform fluence within the imaged object. However, it is known that the optical fluence distribution within the object is non-uniform, and this can affect the reconstruction of PA images. In this study, we have investigated the influence of the optical fluence distribution on PA back-propagation imaging using the finite element method. The uniform fluence was simulated as a triangular waveform within the object of interest. The non-uniform fluence distribution was estimated by solving light propagation within a tissue model via the Monte Carlo method. The results show that the PA signal in the non-uniform fluence case is 23% wider than in the uniform case. The frequency spectrum of the PA signal due to the non-uniform fluence lacks some high-frequency components compared to the uniform case. Consequently, the reconstructed image with the non-uniform fluence exhibits a strong smoothing effect.

Keywords: Finite Element Method, Fluence Distribution, Monte Carlo Method, Photoacoustic Imaging.

PDF Downloads: 2643
49 Age, Body Composition, Body Mass Index and Chronic Venous Diseases in Postmenopausal Women

Authors: Grygorii Kostromin, Vladyslav Povoroznyuk

Abstract:

Chronic venous diseases (CVD) are one of the common, though controversial problems in medicine. It is generally accepted that this pathology predominantly occurs in women. The issue of excessive weight as a risk factor for CVD is still considered debatable. To the author's best knowledge, today in Ukraine, there are barely any studies that describe the relationship between CVD and obesity. Our study aims to determine the association between age, body composition, obesity and CVD in postmenopausal women. The study was conducted in D. F. Chebotarev Institute of Gerontology, National Academy of Medical Sciences of Ukraine. We have examined 96 postmenopausal women aged 46-85 years (mean age – 66.19 ± 0.96 years), who were divided into two groups depending on the presence of CVD. The women were examined by vascular surgeons. For the diagnosis of CVD, we used clinical, anatomic and pathophysiologic classifications. We also performed clinical, ultrasound and densitometry examinations. We found that the CVD frequency in postmenopausal women increased with age (from 72% in those aged 45-59 years to 84% in those aged 75-89 years). A significant correlation between the total fat mass and age was determined in postmenopausal women with CVD. We also observed a significant correlation between the lower extremities’ fat mass and age in both examined groups. A significant correlation between body mass index and age was determined only in postmenopausal women without CVD.

Keywords: Chronic venous disease, risk factors, age, obesity, postmenopausal women.

PDF Downloads: 696
48 Semi-Automatic Analyzer to Detect Authorial Intentions in Scientific Documents

Authors: Kanso Hassan, Elhore Ali, Soule-dupuy Chantal, Tazi Said

Abstract:

Information retrieval has the objective of studying models, and building systems, that allow a user to find the documents relevant to his or her information need. Information search remains a difficult problem because of the difficulty of representing and processing natural language, including phenomena such as polysemy. Intentional structures promise to be a new paradigm to extend existing document structures and to enhance the different phases of document processing such as creation, editing, search and retrieval. Recognizing the intentions of the authors of texts can reduce the scale of this problem. In this article, we present an intention recognition system based on a semi-automatic method for extracting intentional information from a text corpus. The system is also able to update the ontology of intentions in order to enrich the knowledge base containing all possible intentions of a domain. This approach is built on a semi-formal ontology, considered as the conceptualization of the intentional information contained in a text. Experiments on scientific publications in the field of computer science were conducted to validate this approach.

Keywords: Information retrieval, text analysis, intentional structure, segmentation, ontology, natural language processing.

PDF Downloads: 1614
47 The Role of Velocity Map Quality in Estimation of Intravascular Pressure Distribution

Authors: Ali Pashaee, Parisa Shooshtari, Gholamreza Atae, Nasser Fatouraee

Abstract:

Phase-contrast MR imaging methods are widely used for the measurement of blood flow velocity components, and other modalities such as CT and ultrasound can also provide velocity maps in intravascular studies. These data are used to derive flow characteristics, and some clinical applications use the pressure distribution in the diagnosis of intravascular disorders such as vascular stenosis. In this paper, an approach to measuring the intravascular pressure field from the velocity field obtained from flow images is proposed. The method uses an algorithm to solve the nonlinear Navier-Stokes equations, assuming blood to be an incompressible Newtonian fluid. Flow images usually suffer from a lack of spatial resolution, so our aim is to consider the effect of spatial resolution on the pressure distribution estimated with this method. To achieve this, the velocity map of a numerical phantom is derived at six different spatial resolutions. To determine the effects of vascular stenoses on the pressure distribution, a stenotic phantom geometry is considered. A comparison between the pressure distribution obtained from the phantom and the pressure resulting from the algorithm is presented. We also compared the effects of collocated and staggered computational grids on the pressure distribution resulting from this algorithm.

Keywords: Flow imaging, pressure distribution estimation, phantom, resolution.

PDF Downloads: 1652
46 Binarization of Text Region based on Fuzzy Clustering and Histogram Distribution in Signboards

Authors: Jonghyun Park, Toan Nguyen Dinh, Gueesang Lee

Abstract:

In this paper, we present a novel approach to accurately detect text regions, including shop names, in signboard images with complex backgrounds for mobile system applications. The proposed method is based on the combination of text detection using edge profiles and region segmentation using the fuzzy c-means method. In the first step, we apply the Canny edge operator to extract all possible object edges. Then, edge profile analysis in the vertical and horizontal directions is performed on these edge pixels to detect potential text regions containing the shop name in a signboard. The edge profile and the geometrical characteristics of each object contour are carefully examined to construct candidate text regions and to separate the main text region from the background. Finally, the fuzzy c-means algorithm is applied to segment and binarize the detected text region. Experimental results show that our proposed method is robust in text detection with respect to different character sizes and colors and can provide reliable text binarization results.
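
The edge-profile step can be illustrated with a horizontal projection of Canny edge density: rows whose edge density exceeds a threshold form candidate text bands. The thresholds and the toy signboard below are assumptions, and the vertical profile, contour analysis and fuzzy c-means stages are omitted.

```python
import cv2
import numpy as np

def edge_profile_text_rows(gray, low=50, high=150, min_density=0.05):
    """Locate candidate text bands from the horizontal profile of Canny edge pixels."""
    edges = cv2.Canny(gray, low, high)
    row_profile = (edges > 0).mean(axis=1)        # edge density per row
    rows = row_profile > min_density
    bands, start = [], None
    for y, flag in enumerate(np.append(rows, False)):
        if flag and start is None:
            start = y
        elif not flag and start is not None:
            bands.append((start, y))
            start = None
    return bands

# Toy signboard: plain background with one line of text.
board = np.full((120, 320), 200, np.uint8)
cv2.putText(board, "COFFEE SHOP", (20, 70), cv2.FONT_HERSHEY_SIMPLEX, 1.2, 30, 3)
print(edge_profile_text_rows(board))
```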

Keywords: Text detection, edge profile, signboard image, fuzzy clustering.

PDF Downloads: 2202