Search results for: Image interpolation
795 Contrast Enhancement of Masses in Mammograms Using Multiscale Morphology
Authors: Amit Kamra, V. K. Jain, Pragya
Abstract:
Mammography is a widely used technique for breast cancer screening. Although other screening techniques exist, mammography remains the most reliable and effective. The images obtained through mammography are of low contrast, which makes them difficult for radiologists to interpret; a high-quality image is therefore essential before any further processing or information extraction. Many contrast enhancement algorithms have been developed over the years. In the present work, an efficient morphology-based technique is proposed for contrast enhancement of masses in mammographic images. The proposed method is based on multiscale morphology and takes into consideration the scale of the structuring element. The proposed method is compared with other state-of-the-art techniques. The experimental results show that the proposed method outperforms the other standard contrast enhancement techniques both qualitatively and quantitatively.
Keywords: Enhancement, mammography, multi-scale, mathematical morphology.
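As a rough illustration of the multiscale morphological idea described above, the sketch below accumulates white and black top-hats computed with structuring elements of increasing size and adds them back to the image (Python with NumPy/SciPy). It is a generic top-hat enhancement, not the authors' exact formulation; the scale list, square structuring elements, and 8-bit clipping are assumptions.

```python
import numpy as np
from scipy import ndimage

def multiscale_tophat_enhance(image, scales=(3, 5, 9, 15)):
    """Generic multiscale top-hat contrast enhancement sketch:
    accumulate bright (white top-hat) and dark (black top-hat)
    residues over several structuring-element sizes, then add and
    subtract them from the input image."""
    img = image.astype(float)
    bright = np.zeros_like(img)
    dark = np.zeros_like(img)
    for s in scales:                      # assumed square structuring elements
        selem = np.ones((s, s))
        opened = ndimage.grey_opening(img, footprint=selem)
        closed = ndimage.grey_closing(img, footprint=selem)
        bright += img - opened            # white top-hat at this scale
        dark += closed - img              # black top-hat at this scale
    return np.clip(img + bright - dark, 0, 255)   # assumes an 8-bit image
```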
794 Fast Facial Feature Extraction and Matching with Artificial Face Models
Authors: Y. H. Tsai, Y. W. Chen
Abstract:
Facial features are frequently used to represent local properties of a human face image in computer vision applications. In this paper, we present a fast algorithm that extracts facial features online such that they give a satisfying representation of a face image. It includes one step for a coarse detection of each facial feature by AdaBoost and another that increases the accuracy of the found points by Active Shape Models (ASM) in the regions of interest. The resulting facial features are evaluated by matching them with artificial face models in physiognomy applications. The distance between the features and those in the face models from the database is measured by means of the Hausdorff distance. In the experiments, the proposed method shows efficient performance in facial feature extraction and in an online physiognomy system.
Keywords: Facial feature extraction, AdaBoost, Active shape model, Hausdorff distance
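The Hausdorff matching step mentioned above can be reproduced with SciPy; the sketch below computes the symmetric Hausdorff distance between two hypothetical landmark sets (the coordinates are invented for illustration only).

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two (N, 2) landmark sets."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

# Hypothetical detected facial landmarks vs. a face-model template.
detected = np.array([[30.0, 40.0], [50.0, 42.0], [40.0, 60.0]])
template = np.array([[31.0, 39.0], [49.0, 43.0], [41.0, 62.0]])
print(hausdorff(detected, template))
```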
793 Vessel Inscribed Trigonometry to Measure the Vessel Progressive Orientations in the Digital Fundus Image
Authors: Pil Un Kim, Yunjung Lee, Gihyoun Lee, Jin Ho Cho, Myoung Nam Kim
Abstract:
In this paper, vessel inscribed trigonometry (VITM) for measuring the vessel progression orientation (VPO) in two-dimensional fundus images is proposed. The VPO is a major factor in optic disc (OD) detection, which is a basic step in retina analysis. To measure the VPO, vessel skeletons are used. First, the vessels are classified into three classes: vessel end, vessel branch, and vessel stem (VS), and chain code maps of the VS are generated. Next, the two farthest neighborhoods of each point on the VS are searched under the proposed angle restriction. Lastly, the gradient of the straight line between the two farthest neighborhoods is estimated to measure the VPO. VITM is validated by comparison with manual results and 2D Gaussian templates. Experiments applying VITM to detect the OD in fundus images confirm that the measured VPO is accurate enough for OD detection.
Keywords: Angle measurement, Optic disc, Retina vessel, Vessel progression orientation.
792 Template-Based Object Detection through Partial Shape Matching and Boundary Verification
Authors: Feng Ge, Tiecheng Liu, Song Wang, Joachim Stahl
Abstract:
This paper presents a novel template-based method to detect objects of interest in real images by shape matching. To locate a target object that has a similar shape to a given template boundary, the proposed method integrates three components: contour grouping, partial shape matching, and boundary verification. In the first component, low-level image features, including edges and corners, are grouped into a set of perceptually salient closed contours using an extended ratio-contour algorithm. In the second component, we develop a partial shape matching algorithm to identify the fractions of detected contours that partly match given template boundaries. Specifically, we represent template boundaries and detected contours using landmarks, and apply a greedy algorithm to search for matched landmark subsequences. For each matched fraction between a template and a detected contour, we estimate an affine transform that maps the whole template onto a hypothetic boundary. In the third component, we provide an efficient algorithm based on oriented edge lists to determine the target boundary from the hypothetic boundaries by checking each of them against the image edges. We evaluate the proposed method on recognizing and localizing 12 leaf templates in a data set of real images with cluttered backgrounds, illumination variations, occlusions, and image noise. The experiments demonstrate the high performance of the proposed method.
Keywords: Object detection, shape matching, contour grouping.
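The affine estimation step, mapping a matched template fraction onto a hypothetic boundary, is a standard least-squares fit over landmark correspondences; the NumPy sketch below shows one way to recover and apply such a transform (the matched points are hypothetical).

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst for matched
    (N, 2) landmark arrays, N >= 3; returns a 3x2 parameter matrix."""
    A = np.hstack([src, np.ones((src.shape[0], 1))])   # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    """Transform (N, 2) points with the fitted affine parameters."""
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M

# Hypothetical matched landmarks: template fraction -> detected contour.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = np.array([[2.0, 1.0], [4.0, 1.2], [2.1, 3.0], [4.2, 3.1]])
print(apply_affine(fit_affine(src, dst), src))
```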
791 A Modified Decoupled Semi-Analytical Approach Based On SBFEM for Solving 2D Elastodynamic Problems
Authors: M. Fakharian, M. I. Khodakarami
Abstract:
In this paper, an improved semi-analytical method based on scaled boundaries for solving 2D elastodynamic problems is provided. In this approach, only the boundaries of the problem domain are discretized, using specific subparametric elements. By employing mapping functions drawn from a class of higher-order Lagrange polynomials, special shape functions, Gauss-Lobatto-Legendre numerical integration, and the integral form of the weighted residual method, the coefficient matrices in the elastodynamic equations become diagonal. The present study differs from prior research in the procedure used to generate the geometry, in the interpolation functions, and in the integration scheme selected. The validity and accuracy of the present method are fully demonstrated through two benchmark problems which are successfully modeled using a small number of DOFs. The numerical results agree very well with the analytical solutions and with the results from other numerical methods.
Keywords: 2D Elastodynamic Problems, Lagrange Polynomials, G-L-L quadrature, Decoupled SBFEM.
790 Statistical Feature Extraction Method for Wood Species Recognition System
Authors: Mohd Iz'aan Paiz Bin Zamri, Anis Salwa Mohd Khairuddin, Norrima Mokhtar, Rubiyah Yusof
Abstract:
Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid mislabeling of timber, which results in loss of income to the timber industry. The system focuses on analyzing the statistical pore properties of wood images. This paper proposes a fuzzy-based feature extractor which mimics the experts' knowledge of wood texture to extract the properties of pore distribution from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management. The total number of statistical features extracted from each wood image is 38. A backpropagation neural network is then used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database composed of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts' interpretation of wood texture, which allows human involvement when analyzing the wood texture. Experimental results show the efficiency of the proposed method.
Keywords: Classification, fuzzy, inspection system, image analysis.
789 Examining Herzberg's Two Factor Theory in a Large Chinese Chemical Fiber Company
Authors: Ju-Chun Chien
Abstract:
The validity of Herzberg's Two-Factor Theory of Motivation was tested empirically by surveying 2372 chemical fiber employees in 2012. In the valid sample of 1875 respondents, the degree of overall job satisfaction was more than moderate. The most highly valued components of job satisfaction were "corporate image," "collaborative working atmosphere," and "supervisor's expertise," whereas the lowest mean score, 34.65, was for "job rotation and promotion." The top three job retention options rated by the participants were "good image of the enterprise," "good compensation," and "workplace is close to my residence." The overall evaluation of the level of workplace thriving facilitation reached almost "mostly agree." Participants who chose at least one motivator among their job retention options had significantly greater job satisfaction than those who chose only hygiene factors as their retention options. Therefore, Herzberg's Two-Factor Theory of Motivation was proven valid in this study.
Keywords: Employee job satisfaction, Job retention, Traditional business, Two-factor theory of motivation.
788 Liver Lesion Extraction with Fuzzy Thresholding in Contrast Enhanced Ultrasound Images
Authors: Abder-Rahman Ali, Adélaïde Albouy-Kissi, Manuel Grand-Brochier, Viviane Ladan-Marcus, Christine Hoeffl, Claude Marcus, Antoine Vacavant, Jean-Yves Boire
Abstract:
In this paper, we present a new segmentation approach for focal liver lesions in contrast enhanced ultrasound imaging. This approach, based on a two-cluster Fuzzy C-Means methodology, considers type-II fuzzy sets to handle uncertainty due to the image modality (presence of speckle noise, low contrast, etc.) and to calculate the optimum inter-cluster threshold. Fine boundaries are detected by a local recursive merging of ambiguous pixels. The method has been tested on a representative database. Compared to both Otsu and type-I Fuzzy C-Means techniques, the proposed method significantly reduces the segmentation errors.
Keywords: Defuzzification, fuzzy clustering, image segmentation, type-II fuzzy sets.
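A plain (type-I) two-cluster Fuzzy C-Means threshold on gray levels, which the paper extends with type-II fuzzy sets, can be sketched as follows; the fuzzifier value, iteration count, and centre initialisation are assumptions, and the type-II handling and recursive boundary merging are not reproduced.

```python
import numpy as np

def fcm_threshold(image, m=2.0, iters=50):
    """Two-cluster fuzzy C-means on pixel intensities; the threshold is
    taken midway between the two converged cluster centres."""
    x = image.ravel().astype(float)
    centers = np.array([x.min(), x.max()])          # assumed initialisation
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))                 # unnormalised memberships
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return centers.mean()
```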
787 A Technique for Improving the Performance of Median Smoothers at the Corners Characterized by Low Order Polynomials
Authors: E. Srinivasan, D. Ebenezer
Abstract:
Median filters with larger windows offer greater smoothing and are more robust than median filters with smaller windows. However, the larger median smoothers (the median filters with the larger windows) fail to track low order polynomial trends in the signals. Due to this, constant regions are produced at the signal corners, leading to the loss of fine details. In this paper, an algorithm is introduced that combines the ability of the 3-point median smoother to preserve low order polynomial trends with the superior noise filtering characteristics of the larger median smoother. The proposed algorithm (called the combiner algorithm in this paper) is evaluated for its performance on a test image corrupted with different types of noise, and the results obtained are included.
Keywords: Image filtering, detail preservation, median filters, nonlinear filters, order statistics filtering, Rank order filtering.
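As a rough 1D illustration of combining a 3-point median with a larger median smoother, the sketch below runs both filters and falls back to the 3-point output where the two disagree strongly. The selection rule is an assumption for demonstration only and is not the combiner algorithm proposed in the paper.

```python
import numpy as np
from scipy.signal import medfilt

def combined_median(signal, large_window=9):
    """Blend a detail-preserving 3-point median with a smoother
    large-window median using a simple, assumed switching rule."""
    s3 = medfilt(signal, kernel_size=3)
    sl = medfilt(signal, kernel_size=large_window)
    noise = np.median(np.abs(signal - s3)) + 1e-12   # crude noise estimate
    # Keep the large-window output unless it departs from the 3-point
    # smoother by much more than the noise level (e.g. near corners).
    return np.where(np.abs(sl - s3) > 3.0 * noise, s3, sl)
```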
786 Use of Detectors Technology for Gamma Ray Issued from Radioactive Isotopes and its Impact on Knowledge of Behavior of the Stationary Case of Solid Phase Holdup
Authors: Abbas Ali Mahmood Karwi
Abstract:
For gamma radiation detection, assemblies of scintillation crystals and a photomultiplier tube are used; a preamplifier is connected to the detector because the signals from the photomultiplier tube are of small amplitude. After pre-amplification, the signals are sent to the amplifier and then to the multichannel analyser. The multichannel analyser sorts all incoming electrical signals according to their amplitudes, placing the detected photons in channels that cover small energy intervals. The energy range of each channel depends on the gain settings of the multichannel analyser and on the high voltage across the photomultiplier tube. The output spectrum data of the two main isotopes studied were put into the biomass program and processed with a Matlab program to obtain the solid holdup image (solid spherical nuclear fuel).
Keywords: Multichannel analyzer, Spectrum, Energies, Fluids holdup, Image
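The channel-sorting behaviour of the multichannel analyser described above amounts to histogramming pulse amplitudes; a minimal NumPy sketch is given below, where the channel count and full-scale voltage are assumed values.

```python
import numpy as np

def mca_spectrum(pulse_amplitudes, n_channels=1024, full_scale=3.0):
    """Sort amplified pulse amplitudes (in volts) into equal-width
    channels, as a multichannel analyser does; the energy span per
    channel depends on the amplifier gain and the PMT high voltage."""
    edges = np.linspace(0.0, full_scale, n_channels + 1)
    counts, _ = np.histogram(pulse_amplitudes, bins=edges)
    return counts, edges
```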
785 A Self Supervised Bi-directional Neural Network (BDSONN) Architecture for Object Extraction Guided by Beta Activation Function and Adaptive Fuzzy Context Sensitive Thresholding
Authors: Siddhartha Bhattacharyya, Paramartha Dutta, Ujjwal Maulik, Prashanta Kumar Nandi
Abstract:
A multilayer self organizing neural network (MLSONN) architecture for binary object extraction, guided by a beta activation function and characterized by backpropagation of errors estimated from the linear indices of fuzziness of the network output states, is discussed. Since the MLSONN architecture is designed to operate in a single-point fixed/uniform thresholding scenario, it does not take into account the heterogeneity of image information in the extraction process. The performance of the MLSONN architecture with representative values of the threshold parameters of the beta activation function employed is also studied. A three-layer bidirectional self organizing neural network (BDSONN) architecture comprising fully connected neurons, for the extraction of objects from a noisy background and capable of incorporating the underlying image context heterogeneity through variable and adaptive thresholding, is proposed in this article. The input layer of the network architecture represents the fuzzy membership information of the image scene to be extracted. The second layer (the intermediate layer) and the final layer (the output layer) of the network architecture deal with the self-supervised object extraction task by bi-directional propagation of the network states. Each layer except the output layer is connected to the next layer following a neighborhood-based topology. The output layer neurons are, in turn, connected to the intermediate layer following a similar topology, thus forming a counter-propagating architecture with the intermediate layer. The novelty of the proposed architecture is that the assignment/updating of the inter-layer connection weights is done using the relative fuzzy membership values at the constituent neurons in the different network layers. Another interesting feature of the network lies in the fact that the processing capabilities of the intermediate and the output layer neurons are guided by a beta activation function, which uses image context-sensitive adaptive thresholding arising out of the fuzzy cardinality estimates of the different network neighborhood fuzzy subsets, rather than resorting to fixed and single-point thresholding. An application of the proposed architecture for object extraction is demonstrated using a synthetic and a real-life image. The extraction efficiency of the proposed network architecture is evaluated by a proposed system transfer index characteristic of the network.
Keywords: Beta activation function, fuzzy cardinality, multilayer self organizing neural network, object extraction,
784 Introducing an Image Processing Base Idea for Outdoor Children Caring
Authors: Hooman Jafarabadi
Abstract:
In this paper, the application of artificial intelligence to baby and child care is studied, and a new idea for injury prevention and safety announcement using digital image processing is presented. The paper describes the structure of the proposed system, which determines the possibility of danger to children and babies in areas such as yards, gardens, and swimming pools. In the presented idea, a multi-camera system is used, and the received videos are processed to find the hazardous areas; the entrance of children and babies into the determined hazardous areas is then analyzed. When this occurs, the system performs the programmed action: capturing images, producing an alarm or tone, or sending a message.
Keywords: Baby and children Care and Nursing, Intelligent Control Systems for Nursing, Electronic Care and Nursing, Dangers and safety for children and babies, Motion detection, Expert danger alarm systems.
783 Intelligent Assistive Methods for Diagnosis of Rheumatoid Arthritis Using Histogram Smoothing and Feature Extraction of Bone Images
Authors: SP. Chokkalingam, K. Komathy
Abstract:
Advances in the field of image processing envision a new era of evaluation techniques and application of procedures in various fields, one of which is the biomedical field, for both prognosis and diagnosis of diseases. Although this plethora of methods provides a wide range of options to select from, it also creates confusion in selecting the apt process and in finding which one is more suitable. Our objective is to use a series of techniques on bone scans so as to detect the occurrence of rheumatoid arthritis (RA) as accurately as possible. Amongst the techniques existing in the field, our proposed system tends to be more effective as it depends on new methodologies that have been proved to be better and more consistent than others. Computer aided diagnosis will provide a more accurate and infallible rate of consistency that will help to improve the efficiency of the system. The image first undergoes histogram smoothing and specification, a morphing operation, boundary detection by an edge following algorithm, and finally image subtraction to determine the presence of rheumatoid arthritis in a more efficient and effective way. Using preprocessing, noise is removed from the images; using segmentation, the region of interest is found; and histogram smoothing is applied to a specific portion of the images. Gray level co-occurrence matrix (GLCM) features such as mean, median, energy, correlation, and bone mineral density (BMD) are then extracted and stored in the database. This dataset is trained with inflamed and non-inflamed values; with the help of a neural network, all new images are checked for their status, and rough sets are implemented for further reduction.
Keywords: Computer Aided Diagnosis, Edge Detection, Histogram Smoothing, Rheumatoid Arthritis.
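The GLCM feature step can be reproduced with scikit-image; the sketch below extracts GLCM energy and correlation plus simple intensity statistics from an 8-bit grayscale bone-region image. In scikit-image releases before 0.19 the functions are named greycomatrix/greycoprops, and BMD itself requires calibrated densitometry, so it is not computed here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image):
    """Texture features from an 8-bit grayscale region of interest."""
    glcm = graycomatrix(gray_image, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return {
        "energy": graycoprops(glcm, "energy").mean(),
        "correlation": graycoprops(glcm, "correlation").mean(),
        "mean": float(gray_image.mean()),
        "median": float(np.median(gray_image)),
    }
```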
782 The Design of Imaginable Urban Road Landscape
Authors: Wang Zhenzhen, Wang Xu, Hong Liangping
Abstract:
With the rapid development of cities, the way that people commute has changed greatly; meanwhile, people in the contemporary world place greater demands on both the physical and psychological aspects of their surroundings. However, the current urban road landscape ignores these changes: its elements are boring, confusing, fragmented, and lacking in integrity and hierarchy. Under such circumstances, in order to shape beautiful, identifiable, and unique road landscapes, this article concentrates on the target of imaginability. This paper analyzes the main elements of the urban road landscape and the concept of the image and its generation mechanism, and then discusses the necessity and connotation of building an imaginable urban road landscape as well as the main problems existing in the current urban road landscape in terms of imaginability. Finally, this paper proposes in detail how to design an imaginable urban road landscape based on a specific case.
Keywords: Identifiability, imaginability, road landscape, the image of the city.
781 Distortion Estimation in Digital Image Watermarking using Genetic Programming
Authors: Labiba Gilani, Asifullah Khan, Anwar M. Mirza
Abstract:
This paper introduces a technique for distortion estimation in image watermarking using Genetic Programming (GP). The distortion is estimated by treating the problem of obtaining a distorted watermarked signal from the original watermarked signal as a function regression problem. This function regression problem is solved using GP, where the original watermarked signal is considered as the independent variable. The GP-based distortion estimation scheme is checked for Gaussian attacks and JPEG compression attacks. We have used Gaussian attacks of different strengths by changing the standard deviation, and the JPEG compression attack is also varied by adding various distortions. Experimental results demonstrate that the proposed technique is able to detect the watermark even in the case of strong distortions and is more robust against attacks.
Keywords: Blind Watermarking, Genetic Programming (GP), Fitness Function, Discrete Cosine Transform (DCT).
780 Digital Watermarking Based on Visual Cryptography and Histogram
Authors: R. Rama Kishore, Sunesh
Abstract:
Nowadays, robust and secure watermarking algorithms and their optimization have become the need of the hour. A watermarking algorithm is presented to achieve copyright protection for the owner based on visual cryptography, the histogram shape property, and entropy. Both the host image and the watermark are preprocessed: the host image with a Butterworth filter, and the watermark with visual cryptography. Applying visual cryptography to the watermark generates two shares. One share is used for embedding the watermark, and the other is used for resolving any dispute with the aid of a trusted authority. Use of the histogram shape makes the process more robust against geometric and signal processing attacks. The combination of visual cryptography, the Butterworth filter, histogram, and entropy makes the algorithm more robust and imperceptible while protecting the copyright of the owner.
Keywords: Butterworth filter, digital watermarking, histogram, visual cryptography.
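Share generation for the watermark can be illustrated with a basic (2, 2) visual cryptography scheme, where each watermark pixel expands into a 2x2 sub-pixel block in each share; the patterns below are the textbook choice and not necessarily the exact scheme used by the authors.

```python
import numpy as np

# Complementary 2x2 sub-pixel patterns for a (2, 2) scheme.
PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]

def make_shares(watermark_bits, rng=None):
    """Split a binary watermark (2D array of 0/1) into two shares.
    White pixels (0) get the same pattern in both shares; black pixels
    (1) get complementary patterns, so stacking reveals the watermark
    while either share alone looks random."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = watermark_bits.shape
    share1 = np.zeros((2 * h, 2 * w), dtype=int)
    share2 = np.zeros_like(share1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(2)]
            share1[2*i:2*i+2, 2*j:2*j+2] = p
            share2[2*i:2*i+2, 2*j:2*j+2] = p if watermark_bits[i, j] == 0 else 1 - p
    return share1, share2
```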
779 Automatic Classification of Lung Diseases from CT Images
Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari
Abstract:
Pneumonia is a kind of lung disease that creates congestion in the chest. Such pneumonic conditions lead to loss of life due to the severity of high congestion. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19 induced pneumonia. The early prediction and classification of such lung diseases help reduce the mortality rate. We propose an automatic Computer-Aided Diagnosis (CAD) system in this paper using a deep learning approach. The proposed CAD system takes as input raw computerized tomography (CT) scans of the patient's chest and automatically predicts the disease class. We designed the Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are pre-processed first to enhance their quality for further analysis. We then applied a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract automatic features from the pre-processed CT image. This CNN model ensures feature learning with highly effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. The simulation outcomes using the publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
Keywords: CT scans, COVID-19, deep learning, image processing, pneumonia, lung disease.
778 High Performance VLSI Architecture of 2D Discrete Wavelet Transform with Scalable Lattice Structure
Authors: Juyoung Kim, Taegeun Park
Abstract:
In this paper, we propose a fully-utilized, block-based 2D DWT (discrete wavelet transform) architecture, which consists of four 1D DWT filters with a two-channel QMF lattice structure. The proposed architecture requires about 2MN-3N registers to save the intermediate results for higher level decomposition, where M and N stand for the filter length and the row width of the image respectively. Furthermore, the proposed 2D DWT processes in the horizontal and vertical directions simultaneously without an idle period, so that it computes the DWT for an N×N image in a period of N^2(1-2^(-2J))/3, where J is the number of decomposition levels. Compared to the existing approaches, the proposed architecture shows 100% hardware utilization and high throughput rates. To mitigate the long critical path delay due to the cascaded lattices, we can apply a four-stage pipeline technique while retaining 100% hardware utilization. The proposed architecture can be applied in real-time video signal processing.
Keywords: discrete wavelet transform, VLSI architecture, QMF lattice filter, pipelining.
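As a software reference for the transform that the proposed hardware computes (not for the lattice architecture itself), the sketch below runs a J-level 2D DWT with PyWavelets and evaluates the period expression quoted in the abstract; the db4 wavelet, image size, and level count are arbitrary choices.

```python
import numpy as np
import pywt

N, J = 256, 3
image = np.random.rand(N, N)
coeffs = pywt.wavedec2(image, wavelet="db4", level=J)   # [cA_J, details_J, ..., details_1]

# Period quoted in the abstract for the proposed architecture:
period = N**2 * (1 - 2 ** (-2 * J)) / 3
print(len(coeffs) - 1, "detail levels; quoted period ~", int(period))
```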
777 Color View Synthesis for Animated Depth Security X-ray Imaging
Authors: O. Abusaeeda, J. P. O Evans, D. Downes
Abstract:
We demonstrate the synthesis of intermediary views within a sequence of color encoded, materials discriminating X-ray images that exhibit animated depth in a visual display. During the image acquisition process, the requirement for a linear X-ray detector array is replaced by synthetic imagery. The Scale Invariant Feature Transform (SIFT), in combination with material-segmented morphing, is employed to produce the synthetic imagery. A quantitative analysis of the feature matching performance of SIFT is presented along with a comparative study of the synthetic imagery. We show that the total number of matches produced by SIFT reduces as the angular separation between the generating views increases. This effect is accompanied by an increase in the total number of synthetic pixel errors. The trends observed are obtained from 15 different luggage items. This programme of research is in collaboration with the UK Home Office and the US Dept. of Homeland Security.
Keywords: X-ray, kinetic depth, view synthesis, KDE
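The SIFT matching measurement can be approximated with OpenCV as below: ratio-test match counts between two views are the quantity that, per the abstract, falls as the angular separation between the generating views grows. This assumes an OpenCV build (4.4 or later) in which cv2.SIFT_create is available.

```python
import cv2

def sift_match_count(img_a, img_b, ratio=0.75):
    """Count SIFT matches between two grayscale views using Lowe's
    ratio test with a brute-force L2 matcher."""
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)
```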
776 Robot Vision Application based on Complex 3D Pose Computation
Authors: F. Rotaru, S. Bejinariu, C. D. Niţâ, R. Luca, I. Pâvâloi, C. Lazâr
Abstract:
The paper presents a technique suitable for robot vision applications where it is not possible to establish the object position from one view. Usually, one-view pose calculation methods are based on the correspondence between image features established at a training step and exactly the same image features extracted at the execution step, for a different object pose. When such a correspondence is not feasible because of the lack of specific features, a new method is proposed. In the first step, the method computes the 3D pose of feature points from two views. Subsequently, using a registration algorithm, the set of 3D feature points extracted at the execution phase is aligned with the set of 3D feature points extracted at the training phase. The result is a Euclidean transform which has to be used by the robot head for reorientation at the execution step.
Keywords: features correspondence, registration algorithm, robot vision, triangulation method.
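The registration step that aligns the two 3D feature-point sets and yields a Euclidean transform is commonly solved with the Kabsch (SVD-based) method; the paper does not name its registration algorithm, so the NumPy sketch below is only one standard choice.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rotation R and translation t aligning matched
    (N, 3) point sets so that dst ~ src @ R.T + t (Kabsch algorithm)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T            # proper rotation (no reflection)
    t = c_dst - R @ c_src
    return R, t
```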
775 Semi-Lagrangian Method for Advection Equation on GPU in Unstructured R3 Mesh for Fluid Dynamics Application
Authors: Irakli V. Gugushvili, Nickolay M. Evstigneev
Abstract:
Numerical integration of the initial boundary value problem for the advection equation in R^3 is considered. The method used is a conditionally stable semi-Lagrangian advection scheme with high order interpolation on an unstructured mesh. In order to increase the integration time step, the BFECC method with a TVD limiter correction is used. The method is adapted to a parallel graphics processing unit environment using NVIDIA CUDA and applied in a Navier-Stokes solver. It is shown that the calculation on an NVIDIA GeForce 8800 GPU is 184 times faster than on one AMD X2 4800+ CPU. The method is extended to an incompressible fluid dynamics solver. Flow over a cylinder in the 3D case is compared to experimental data.
Keywords: Advection equations, CUDA technology, Flow over the 3D Cylinder, Incompressible Pressure Projection Solver, Parallel computation.
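A minimal 1D analogue of the semi-Lagrangian scheme, with linear interpolation at the departure points on a periodic grid, is sketched below; the paper's method uses higher-order interpolation on an unstructured 3D mesh with BFECC/TVD correction and runs on CUDA, none of which is reproduced here.

```python
import numpy as np

def semi_lagrangian_step(u, velocity, dx, dt):
    """One semi-Lagrangian step for u_t + a*u_x = 0 on a periodic 1D
    grid: trace each node back along the characteristic and linearly
    interpolate the previous field at the departure point."""
    n = u.size
    x = np.arange(n) * dx
    depart = (x - velocity * dt) % (n * dx)         # departure points
    idx = np.floor(depart / dx).astype(int) % n
    frac = depart / dx - np.floor(depart / dx)
    return (1 - frac) * u[idx] + frac * u[(idx + 1) % n]

# Advect a Gaussian bump once around a unit periodic domain.
n = 200
x0 = np.arange(n) / n
u = np.exp(-0.5 * ((x0 - 0.3) / 0.05) ** 2)
for _ in range(100):
    u = semi_lagrangian_step(u, velocity=1.0, dx=1.0 / n, dt=0.01)
```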
774 Recognition and Reconstruction of Partially Occluded Objects
Authors: Michela Lecca, Stefano Messelodi
Abstract:
A new automatic system for the recognition and reconstruction of rescaled and/or rotated partially occluded objects is presented. The objects to be recognized are described by 2D views, and each view is occluded by several half-planes. The whole object views and their visible parts (linear cuts) are then stored in a database. To establish whether a region R of an input image represents a possibly occluded object, the system generates a set of linear cuts of R and compares them with the elements in the database. Each linear cut of R is associated with the most similar database linear cut. R is recognized as an instance of the object O if the majority of the linear cuts of R are associated with a linear cut of views of O. In the case of recognition, the system reconstructs the occluded part of R and determines the scale factor and the orientation in the image plane of the recognized object view. The system has been tested on two different datasets of objects, showing good performance both in terms of recognition and reconstruction accuracy.
Keywords: Occluded Object Recognition, Shape Reconstruction, Automatic Self-Adaptive Systems, Linear Cut.
773 Investigation of New Gait Representations for Improving Gait Recognition
Authors: Chirawat Wattanapanich, Hong Wei
Abstract:
This study presents new gait representations for improving gait recognition accuracy across gait appearances such as normal walking, wearing a coat, and carrying a bag. Based on the Gait Energy Image (GEI), two ideas are implemented to generate new gait representations. One is to append lower knee regions to the original GEI, and the other is to apply convolutional operations to the GEI and its variants. A set of new gait representations is created and used for training multi-class Support Vector Machines (SVMs). Tests are conducted on the CASIA dataset B. Various combinations of the gait representations with different convolutional kernel sizes and different numbers of kernels used in the convolutional processes are examined. Both entire images used as features and dimensionality-reduced features obtained by Principal Component Analysis (PCA) are tested in gait recognition. Interestingly, both new techniques, appending the lower knee regions to the original GEI and convolutional GEI, significantly contribute to the performance improvement in gait recognition. The experimental results have shown that the average recognition rate can be improved from 75.65% to 87.50%.
Keywords: Convolutional image, lower knee, gait.
772 Skyline Extraction using a Multistage Edge Filtering
Authors: Byung-Ju Kim, Jong-Jin Shin, Hwa-Jin Nam, Jin-Soo Kim
Abstract:
Skyline extraction in mountainous images can be used for navigation of vehicles or UAVs (unmanned air vehicles), but it is very hard to extract the skyline shape because of clutter such as clouds, sea lines, and field borders in images. We developed an edge-based skyline extraction algorithm using a proposed multistage edge filtering (MEF) technique. In this method, the characteristics of clutter in the image are first defined, and then the lines classified as clutter are eliminated in stages using the proposed MEF technique. After this processing, we select the final skyline among the remaining lines using skyline measures. The proposed algorithm is robust under severe environments with clutter and performs well even for infrared sensor images with a low resolution. We tested the proposed algorithm on images obtained in the field by an infrared camera and confirmed that it produced better performance and faster processing time than conventional algorithms.
Keywords: MEF, mountainous image, navigation, skyline
771 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet
Authors: Amir Moslemi, Amir Movafeghi, Shahab Moradi
Abstract:
Noise is one of the most important and challenging factors in medical images. Image denoising refers to the improvement of a digital medical image that has been corrupted by Additive White Gaussian Noise (AWGN). A digital medical image or video can be affected by different types of noise: impulse noise, Poisson noise, and AWGN. Computed tomography (CT) images are subject to low quality due to noise. The quality of CT images depends directly on the dose absorbed by the patient: increasing the absorbed radiation, and consequently the absorbed dose to the patient (ADP), enhances CT image quality. Hence, noise reduction techniques that enhance image quality without exposing patients to excess radiation are one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two different directional two-dimensional (2D) transforms, Curvelet and Contourlet, and Discrete Wavelet Transform (DWT) thresholding methods, BayesShrink and AdaptShrink, compared with each other. We also propose a new threshold in the wavelet domain that not only reduces noise but also retains edges; consequently, the proposed method keeps the significant modified coefficients and yields good visual quality. Data evaluations were carried out using two criteria, namely peak signal to noise ratio (PSNR) and structure similarity (SSIM).
Keywords: Computed Tomography (CT), noise reduction, curvelet, contourlet, Peak Signal to Noise Ratio (PSNR), Structure Similarity (SSIM), Absorbed Dose to Patient (ADP).
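The BayesShrink baseline used in the comparison can be sketched with PyWavelets as follows: soft thresholding per detail subband with the noise sigma estimated from the finest diagonal band. The wavelet and decomposition level are assumed values, and this is not the new threshold proposed in the paper.

```python
import numpy as np
import pywt

def bayes_shrink_denoise(image, wavelet="db8", level=3):
    """Baseline BayesShrink soft thresholding in the 2D wavelet domain."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745    # noise estimate (finest HH)
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        bands = []
        for band in detail:
            sigma_x = np.sqrt(max(np.mean(band ** 2) - sigma_n ** 2, 1e-12))
            thr = sigma_n ** 2 / sigma_x                    # BayesShrink threshold
            bands.append(pywt.threshold(band, thr, mode="soft"))
        out.append(tuple(bands))
    return pywt.waverec2(out, wavelet)
```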
770 Boundary Segmentation of Microcalcification using Parametric Active Contours
Authors: Abdul Kadir Jumaat, Siti Salmah Yasiran, Wan Eny Zarina Wan Abd Rahman, Aminah Abdul Malek
Abstract:
A mammography image is composed of low-contrast areas where the breast tissues and breast abnormalities such as microcalcifications can hardly be differentiated by the medical practitioner. This paper presents the application of active contour models (Snakes) for the segmentation of microcalcifications in mammography images. The microcalcification areas segmented by the Balloon Snake, Gradient Vector Flow (GVF) Snake, and Distance Snake are compared against the true value of the microcalcification area. The true area value is the average microcalcification area in the original mammography image traced by expert radiologists. From the fifty images tested, the results show that the accuracies of the Balloon Snake, GVF Snake, and Distance Snake in segmenting microcalcification boundaries are 96.01%, 95.74%, and 95.70%, respectively. This implies that the Balloon Snake is a better segmentation method for locating the exact boundary of a microcalcification region.
Keywords: Balloon Snake, GVF Snake, Distance Snake, Mammogram, Microcalcifications, Segmentation
769 Impulse Noise Reduction in Brain Magnetic Resonance Imaging Using Fuzzy Filters
Authors: Benjamin Y. M. Kwan, Hon Keung Kwan
Abstract:
Noise contamination in a magnetic resonance (MR) image can occur during acquisition, storage, and transmission, in which case effective filtering is required to avoid repeating the MR procedure. In this paper, an iterative asymmetrical triangle fuzzy filter with moving average center (ATMAVi filter) is used to reduce different levels of salt and pepper noise in a brain MR image. Besides visual inspection of filtered images, the mean squared error (MSE) is used as an objective measurement. When compared with the median filter, simulation results indicate that the ATMAVi filter is effective, especially for filtering a higher noise level (such as noise density = 0.45) using a smaller window size (such as 3x3) when operated iteratively, or using a larger window size (such as 5x5) when operated non-iteratively.
Keywords: Brain images, Fuzzy filters, Magnetic resonance imaging, Salt and pepper noise reduction.
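The median-filter reference and the MSE measurement described above can be set up as in the sketch below; the ATMAVi fuzzy filter itself is not implemented here, and the test image and noise density are arbitrary.

```python
import numpy as np
from scipy.ndimage import median_filter

def add_salt_pepper(img, density, rng=None):
    """Corrupt an 8-bit grayscale image with salt and pepper noise."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.copy()
    mask = rng.random(img.shape) < density
    noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return noisy

def mse(a, b):
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

clean = np.full((64, 64), 128, dtype=np.uint8)   # placeholder test slice
noisy = add_salt_pepper(clean, density=0.45)
print(mse(clean, median_filter(noisy, size=5)))  # median-filter baseline MSE
```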
768 An Experiment on Personal Archiving and Retrieving Image System (PARIS)
Authors: Pei-Jeng Kuo, Terumasa Aoki, Hiroshi Yasuda
Abstract:
PARIS (Personal Archiving and Retrieving Image System) is an experimental personal photograph library, which includes more than 80,000 consumer photographs accumulated over a duration of approximately five years, metadata based on our proposed MPEG-7 annotation architecture, Dozen Dimensional Digital Content (DDDC), and a relational database structure. The DDDC architecture is specially designed to facilitate the managing, browsing, and retrieving of personal digital photograph collections. In the annotation process, we also utilize a proposed Spatial and Temporal Ontology (STO) designed based on the general characteristics of personal photograph collections. This paper explains the PARIS system.
Keywords: Ontology, Databases and Information Retrieval, MPEG-7, Spatial-Temporal, Digital Library Designs, metadata, Semantic Web, semi-automatic annotation
767 Lithofacies Classification from Well Log Data Using Neural Networks, Interval Neutrosophic Sets and Quantification of Uncertainty
Authors: Pawalai Kraipeerapun, Chun Che Fung, Kok Wai Wong
Abstract:
This paper proposes a novel approach to the question of lithofacies classification based on an assessment of the uncertainty in the classification results. The proposed approach uses multiple neural networks (NN), and interval neutrosophic sets (INS) are used to classify the input well log data into outputs of multiple classes of lithofacies. A pair of n-class neural networks is used to predict n degrees of truth membership and n degrees of false membership. Indeterminacy memberships, or uncertainties in the predictions, are estimated using a multidimensional interpolation method. These three memberships form the INS used to support the confidence in the results of multiclass classification. Based on the experimental data, our approach improves the classification performance as compared to an existing technique applied only to the truth membership. In addition, our approach has the capability to provide a measure of uncertainty in the problem of multiclass classification.
Keywords: Multiclass classification, feed-forward backpropagation neural network, interval neutrosophic sets, uncertainty.
766 A Probability based Pair Extension Method in Protein 2-DE Gel Image Analysis
Authors: Yanhua Jin, Won Suk Lee
Abstract:
The two-dimensional gel electrophoresis (2-DE) method is widely used in proteomics to separate thousands of proteins in a sample. By comparing the protein expression levels in a normal sample with those in a diseased one, it is possible to identify a meaningful set of marker proteins for the targeted disease. The major shortcomings of this approach involve inherent noise and irregular geometric distortions of the spots observed in 2-DE images. Various experimental conditions can be the major causes of these problems, which eventually lead to incorrect conclusions in the protein analysis of samples. In order to minimize the influence of these problems, this paper proposes a partition-based pair extension method that performs spot-matching on a set of gel images multiple times and segregates the more reliable mapping results, which can improve the accuracy of gel image analysis. The improved accuracy of the proposed method is analyzed through various experiments on real 2-DE images of human liver tissues.
Keywords: Proteomics, spot-matching, two-dimensional electrophoresis.