Search results for: Digital Fundus Image
1471 Tracking Objects in Color Image Sequences: Application to Football Images
Authors: Mourad Moussa, Ali Douik, Hassani Messaoud
Abstract:
In this paper, we present a comparative study between two computer vision systems for object recognition and tracking. The two algorithms follow different region-based approaches, in which regions are sets of pixels that parameterize the objects in the shot sequences. For image segmentation and object detection, the FCM technique is used; the overlap between cluster distributions is minimized by the use of a suitable color space (other than the RGB one). The first technique takes into account the a priori probabilities governing the computation of the various clusters used to track objects. A Parzen kernel method is described that allows the players to be identified in each frame, and we also show the importance of choosing the standard deviation of the Gaussian probability density function. Region matching is carried out by an algorithm that operates on the Mahalanobis distance between region descriptors in two subsequent frames and uses singular value decomposition to compute a set of correspondences satisfying both the principle of proximity and the principle of exclusion.
Keywords: Image segmentation, objects tracking, Parzen window, singular value decomposition, target recognition.
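As a rough illustration of the region-matching step described in the abstract above, the following Python sketch pairs region descriptors from two frames by building a Gaussian proximity matrix from Mahalanobis distances, taking its SVD, and keeping pairs that dominate both their row and their column (proximity plus exclusion). The descriptor layout, the pooled covariance estimate, and the kernel width are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def match_regions(desc_a, desc_b, sigma=1.0):
    """Pair region descriptors of two frames (rows = regions).

    Builds a proximity matrix from Mahalanobis distances, applies the
    SVD-based correspondence idea (replace singular values by ones),
    and keeps pairs that are maxima of both their row and their column.
    """
    # Pooled covariance of the descriptors (assumption: shared statistics).
    cov = np.cov(np.vstack([desc_a, desc_b]).T)
    cov_inv = np.linalg.pinv(cov)

    # Squared Mahalanobis distance between every region pair (A x B).
    diff = desc_a[:, None, :] - desc_b[None, :, :]
    d2 = np.einsum('ijk,kl,ijl->ij', diff, cov_inv, diff)

    # Gaussian proximity matrix and its "orthogonalized" version.
    G = np.exp(-d2 / (2.0 * sigma ** 2))
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    P = U @ Vt  # singular values replaced by ones

    # Principle of proximity + principle of exclusion: keep row/column maxima.
    matches = []
    for i in range(P.shape[0]):
        j = int(np.argmax(P[i]))
        if i == int(np.argmax(P[:, j])):
            matches.append((i, j))
    return matches

# Tiny usage example with random descriptors (e.g. position + mean color).
rng = np.random.default_rng(0)
frame_a = rng.normal(size=(5, 4))
frame_b = frame_a + 0.05 * rng.normal(size=(5, 4))  # slightly moved regions
print(match_regions(frame_a, frame_b))
```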
1470 Segmentation of Gray Scale Images of Dropwise Condensation on Textured Surfaces
Authors: Helene Martin, Solmaz Boroomandi Barati, Jean-Charles Pinoli, Stephane Valette, Yann Gavet
Abstract:
In the present work we developed an image processing algorithm to measure water droplet characteristics during dropwise condensation on pillared surfaces. The main difficulty in this process is the similarity in shape and size between the water droplets and the pillars. The developed method divides droplets into four main groups based on their size and applies a corresponding algorithm to segment each group. These algorithms generate binary images of droplets based on both their geometrical and intensity properties. Information on droplet evolution over time, including the mean radius and the number of drops per unit area, is then extracted from the binary images. The developed image processing algorithm is verified against manual detection and applied to two different sets of images corresponding to two kinds of pillared surfaces.
Keywords: Dropwise condensation, textured surface, image processing, watershed.
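A minimal sketch of one ingredient of such a pipeline: thresholding a grayscale condensation image and separating touching droplets with a distance-transform watershed, then extracting the mean radius and droplet count. The group-specific handling, the pillar rejection, and the four size classes from the abstract are not reproduced; thresholding with Otsu and the seed spacing are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_droplets(gray):
    """Binary droplet mask + labels via Otsu threshold and watershed splitting."""
    binary = gray > threshold_otsu(gray)           # rough droplet mask
    distance = ndi.distance_transform_edt(binary)  # distance to background
    # Local maxima of the distance map seed one marker per droplet.
    coords = peak_local_max(distance, min_distance=5, labels=binary)
    markers = np.zeros(gray.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=binary)
    return binary, labels

def droplet_statistics(labels, pixel_size_um=1.0):
    """Mean equivalent radius (um) and droplet count for one labelled image."""
    sizes = np.bincount(labels.ravel())[1:]            # pixel area per droplet
    radii = np.sqrt(sizes / np.pi) * pixel_size_um     # equivalent radii
    return (radii.mean() if len(radii) else 0.0), len(radii)
```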
1469 Image Processing on Geosynthetic Reinforced Layers to Evaluate Shear Strength and Variations of the Strain Profiles
Authors: S. K. Khosrowshahi, E. Güler
Abstract:
This study investigates the reinforcement function of geosynthetics on the shear strength and strain profile of sand. Conducting a series of simple shear tests, the shearing behavior of the samples under static and cyclic loads was evaluated. Three different types of geosynthetics, including geotextile and geonets, were used as the reinforcement materials. An image processing analysis based on the optical flow method was performed to measure the lateral displacements and estimate the shear strains. It is shown that, besides improving the shear strength, the geosynthetic reinforcement leads to a remarkable reduction in the shear strains. The improved layer reduces the thickness of the soil layer required to resist shear stresses. Consequently, geosynthetic reinforcement can be considered a proper approach for sustainable designs, especially in projects with a large amount of geotechnical work such as subgrades of pavements, roadways, and railways.
Keywords: Image processing, soil reinforcement, geosynthetics, simple shear test, shear strain profile.
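A small sketch of the image-processing idea mentioned above: dense optical flow between two frames of the sheared sample gives per-pixel lateral displacements, and a shear strain profile can be approximated as the vertical gradient of the horizontal displacement, averaged per row. The Farneback parameters and this simplified strain definition are assumptions, not the study's exact procedure.

```python
import cv2
import numpy as np

def shear_strain_profile(img_before, img_after):
    """Estimate a shear strain profile from two uint8 grayscale frames.

    Dense optical flow gives per-pixel lateral displacements; the shear
    strain is approximated as the vertical gradient of the horizontal
    displacement, averaged over each image row (depth level).
    """
    flow = cv2.calcOpticalFlowFarneback(
        img_before, img_after, None,
        pyr_scale=0.5, levels=3, winsize=21,
        iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
    u = flow[..., 0]                  # horizontal displacement field (pixels)
    du_dy = np.gradient(u, axis=0)    # change of u with depth (rows)
    return du_dy.mean(axis=1)         # one strain value per depth level

# Usage sketch (hypothetical frame files, single-channel, equal size):
# before = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
# after  = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)
# profile = shear_strain_profile(before, after)
```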
1468 Enhancing Pedagogical Practices in Online Arabic Language Instruction: Challenges, Opportunities, and Strategies
Authors: Salah Algabli
Abstract:
As online learning takes center stage, Arabic language instructors face the imperative to adapt their practices for the digital realm. This study investigates the experiences of online Arabic instructors to unveil the pedagogical opportunities and challenges this format presents. Utilizing a transcendental phenomenological approach with 15 diverse participants, the research shines a light on the unique realities of online language teaching at the university level, specifically in the United States. The study proposes theoretical and practical solutions to maximize the benefits of online language learning while mitigating its challenges. Recommendations cater to instructors, researchers, and program coordinators, paving the way for enhancing the quality of online Arabic language education. The findings highlight the need for pedagogical approaches tailored to the online environment, ultimately shaping a future where both instructors and learners thrive in this digital landscape.
Keywords: Online Arabic language learning, pedagogical opportunities and challenges, online Arabic teachers, online language instruction, digital pedagogy.
1467 A Context-Sensitive Algorithm for Media Similarity Search
Authors: Guang-Ho Cha
Abstract:
This paper presents a context-sensitive media similarity search algorithm. One of the central problems in media search is the semantic gap between the low-level features computed automatically from media data and the human interpretation of them. This is because the notion of similarity is usually based on high-level abstraction, while the low-level features sometimes do not reflect human perception. Many media search algorithms have used the Minkowski metric to measure similarity between image pairs. However, those functions cannot adequately capture the characteristics of the human visual system or the nonlinear relationships in the contextual information given by the images in a collection. Our search algorithm tackles this problem by employing a similarity measure and a ranking strategy that reflect the nonlinearity of human perception and the contextual information in a dataset. Similarity search in an image database based on this contextual information shows encouraging experimental results.
Keywords: Context-sensitive search, image search, media search, similarity ranking, similarity search.
1466 Development of a Mobile Image-Based Reminder Application to Support Tuberculosis Treatment in Africa
Authors: Haji Ali Haji, Hussein Suleman, Ulrike Rivett
Abstract:
This paper presents the design, development and evaluation of an application prototype developed to support tuberculosis (TB) patients' treatment adherence. The system makes use of graphics and voice reminders, as opposed to text messaging, to encourage patients to follow their medication routine. To evaluate the effect of the prototype application, participants were given mobile phones on which the reminder system was installed. Thirty-eight people, including TB health workers and patients from Zanzibar, Tanzania, participated in the evaluation exercises. The results indicate that the participants found the mobile image-based application useful for supporting TB treatment. All participants understood and interpreted the intended meaning of every image correctly. The study findings revealed that the use of a mobile visual-based application may have potential benefits in supporting TB patients (both literate and illiterate) in their treatment processes.
Keywords: ICT4D, mobile technology, tuberculosis, visual-based reminder.
1465 Integration of Image and Patient Data, Software and International Coding Systems for Use in a Mammography Research Project
Authors: V. Balanica, W. I. D. Rae, M. Caramihai, S. Acho, C. P. Herbst
Abstract:
Mammographic image and data analysis to facilitate modelling or computer aided diagnostic (CAD) software development is best done using a common database that can handle various mammographic image file formats and relate these to other patient information. This would optimize the use of the data, as both primary reporting and enhanced information extraction of research data could be performed from the single dataset. One desired improvement is the integration of DICOM file header information into the database, as an efficient and reliable source of supplementary patient information intrinsically available in the images. The purpose of this paper was to design a suitable database to link and integrate different types of image files and gather common information that can be further used for research purposes. An interface was developed for accessing, adding, updating, modifying and extracting data from the common database, enhancing the possible future application of the data in CAD processing. Technically, future developments envisaged include the creation of an advanced search function to select image files based on descriptor combinations. Results can be further used for specific CAD processing and other research. A user-friendly configuration utility for importing the required fields from the DICOM files must also be designed.
Keywords: Database Integration, Mammogram Classification, Tumour Classification, Computer Aided Diagnosis.
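To make the header-integration idea concrete, the sketch below reads a few DICOM header fields with pydicom and stores them alongside the file path in a small SQLite table. The chosen tags, the table layout, and the file names are assumptions for illustration, not the project's actual schema.

```python
import sqlite3
import pydicom

FIELDS = ("PatientID", "StudyDate", "Modality", "BodyPartExamined")  # assumed tags

def import_dicom_headers(paths, db_file="mammo_research.db"):
    """Read selected DICOM header fields and store them with the file path."""
    con = sqlite3.connect(db_file)
    con.execute(
        "CREATE TABLE IF NOT EXISTS images "
        "(path TEXT PRIMARY KEY, patient_id TEXT, study_date TEXT, "
        " modality TEXT, body_part TEXT)")
    for path in paths:
        ds = pydicom.dcmread(path, stop_before_pixels=True)  # header only
        values = [str(getattr(ds, field, "")) for field in FIELDS]
        con.execute("INSERT OR REPLACE INTO images VALUES (?, ?, ?, ?, ?)",
                    [path] + values)
    con.commit()
    con.close()

# import_dicom_headers(["case001.dcm", "case002.dcm"])  # hypothetical files
```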
1464 New Efficient Method for Coding Color Images
Authors: Walaa M. Abd-Elhafiez, Wajeb Gharibi
Abstract:
In this paper a novel color image compression technique for efficient storage and delivery of data is proposed. The proposed compression technique starts with an RGB to YCbCr color transformation. Secondly, the Canny edge detection method is used to classify the blocks into edge and non-edge blocks. Each color component (Y, Cb, and Cr) is compressed by a discrete cosine transform (DCT), followed by quantization and coding using adaptive arithmetic coding. Our technique is concerned with the compression ratio, bits per pixel and peak signal-to-noise ratio, and produces better results than JPEG and more recently published schemes (such as CBDCT-CABS and MHC). The provided experimental results illustrate that the proposed technique is efficient and feasible in terms of compression ratio, bits per pixel and peak signal-to-noise ratio.
Keywords: Image compression, color image, Q-coder, quantization, edge-detection.
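The forward part of such a pipeline can be sketched in a few lines: color transform, Canny-based edge/non-edge block classification, and 8x8 block DCT with a coarser quantizer on non-edge blocks. Entropy coding, the exact quantization tables, and the block size are omitted or assumed here; this is an illustrative toy, not the authors' codec.

```python
import cv2
import numpy as np

def compress_blocks(bgr_img, q_edge=10.0, q_flat=40.0):
    """Toy forward pipeline for a uint8 BGR image: YCbCr-style transform,
    edge/non-edge block split via Canny, 8x8 DCT and uniform quantization
    (coarser for non-edge blocks). Adaptive arithmetic coding is omitted."""
    ycc = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    edges = cv2.Canny(cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY), 100, 200)
    h, w = edges.shape
    h8, w8 = h - h % 8, w - w % 8                  # crop to multiples of 8
    out = np.zeros((h8, w8, 3), dtype=np.int16)
    for y in range(0, h8, 8):
        for x in range(0, w8, 8):
            # A block counts as an "edge block" if any Canny pixel falls inside it.
            q = q_edge if edges[y:y+8, x:x+8].any() else q_flat
            for c in range(3):
                coeffs = cv2.dct(ycc[y:y+8, x:x+8, c])
                out[y:y+8, x:x+8, c] = np.round(coeffs / q)
    return out
```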
1463 Annotations of Gene Pathways Images in Biomedical Publications Using Siamese Network
Authors: Micheal Olaolu Arowolo, Muhammad Azam, Fei He, Mihail Popescu, Dong Xu
Abstract:
As the quantity of biological articles rises, so does the number of biological pathway figures. Each pathway figure shows gene names and relationships. Manually annotating pathway diagrams is time-consuming. Advanced image understanding models could speed up curation, but they must be more precise. Biological pathway figures contain rich information, and the first step in performing image understanding of these figures is to recognize gene names automatically. Classical optical character recognition methods have been employed for gene name recognition, but they are not optimized for literature mining data. This study devised a method to recognize the image bounding box of a gene name as a photo using deep Siamese neural network models built on ResNet, DenseNet and Inception architectures, outperforming the existing methods; the results obtained about 84% accuracy.
Keywords: Biological pathway, gene identification, object detection, Siamese network, ResNet.
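A minimal PyTorch sketch of the Siamese idea: two branches with shared ResNet-18 weights embed cropped bounding-box images, and the embedding distance indicates whether two crops show the same gene name. The training loop, the detection/OCR stages, the input crop size, and the contrastive margin are assumptions; this is not the authors' trained model.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SiameseGeneMatcher(nn.Module):
    """Two-branch network with a shared ResNet-18 backbone; the distance
    between embeddings indicates whether two crops show the same gene name."""
    def __init__(self, embed_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # shared feature extractor
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.backbone = backbone

    def forward(self, x1, x2):
        z1 = nn.functional.normalize(self.backbone(x1), dim=1)
        z2 = nn.functional.normalize(self.backbone(x2), dim=1)
        return torch.norm(z1 - z2, dim=1)          # small = same gene name

def contrastive_loss(distance, same_label, margin=1.0):
    """same_label is 1 when both crops show the same gene name, else 0."""
    return torch.mean(same_label * distance ** 2 +
                      (1 - same_label) * torch.clamp(margin - distance, min=0) ** 2)

# Shape check with dummy crops (batch of 4, 3 x 64 x 224 is an assumption).
model = SiameseGeneMatcher()
a, b = torch.randn(4, 3, 64, 224), torch.randn(4, 3, 64, 224)
print(contrastive_loss(model(a, b), torch.tensor([1., 0., 1., 0.])))
```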
1462 The Effect of Closed Circuit Television Image Patch Layout on Performance of a Simulated Train-Platform Departure Task
Authors: Aaron J. Small, Craig A. Fletcher
Abstract:
This study investigates the effect of closed circuit television (CCTV) image patch layout on performance of a simulated train-platform departure task. The within-subjects experimental design measures target detection rate and response latency during a CCTV visual search task conducted as part of the procedure for safe train dispatch. Three interface designs were developed by manipulating CCTV image patch layout. Eye movements, perceived workload and system usability were measured across experimental conditions. Task performance was compared to identify significant differences between conditions. The results of this study have not been determined.
Keywords: Rail human factors, workload, closed circuit television, platform departure, attention, information processing, interface design.
1461 Improved Zero Text Watermarking Algorithm against Meaning Preserving Attacks
Authors: Jalil Z., Farooq M., Zafar H., Sabir M., Ashraf E.
Abstract:
The Internet is largely composed of textual content, and a huge volume of digital content is circulated over the Internet daily. The ease of information sharing and reproduction has made it difficult to preserve authors' copyright. Digital watermarking emerged as a solution to the problem of copyright protection of plain text after 1993. In this paper, we propose a zero text watermarking algorithm based on the occurrence frequency of non-vowel ASCII characters and words for copyright protection of plain text. The embedding algorithm makes use of the frequency of non-vowel ASCII characters and words to generate a specialized author key. The extraction algorithm uses this key to extract the watermark and hence identify the original copyright owner. Experimental results illustrate the effectiveness of the proposed algorithm on text subjected to meaning preserving attacks performed by five independent attackers.
Keywords: Copyright protection, Digital watermarking, Document authentication, Information security, Watermark.
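The general idea of a frequency-based zero watermark can be sketched briefly: no character of the text is modified; instead, a key is derived from the occurrence frequencies of non-vowel ASCII letters and registered for later verification. The hashing step and the omission of word-level frequencies are simplifications for illustration, not the paper's exact embedding and extraction algorithms.

```python
import hashlib
from collections import Counter

VOWELS = set("aeiouAEIOU")

def generate_author_key(text, author_id):
    """Zero-watermark key: the text is left untouched; the key is derived from
    the occurrence frequencies of non-vowel ASCII letters plus the author ID."""
    counts = Counter(ch.lower() for ch in text
                     if ch.isascii() and ch.isalpha() and ch not in VOWELS)
    profile = ",".join(f"{c}:{counts[c]}" for c in sorted(counts))
    return hashlib.sha256((author_id + "|" + profile).encode()).hexdigest()

def verify_author(text, author_id, registered_key):
    """Extraction side: recompute the key and compare with the registered one."""
    return generate_author_key(text, author_id) == registered_key

original = "Digital watermarking protects the copyright of plain text."
key = generate_author_key(original, "author-42")        # hypothetical author ID
print(verify_author(original, "author-42", key))                         # True
print(verify_author(original.replace("text", "txt"), "author-42", key))  # False
```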
1460 Estimation of Attenuation and Phase Delay in Driving Voltage Waveform of an Ultra-High-Speed Image Sensor by Dimensional Analysis
Authors: V. T. S. Dao, T. G. Etoh, C. Vo Le, H. D. Nguyen, K. Takehara, T. Akino, K. Nishi
Abstract:
We present an explicit expression to estimate driving voltage attenuation through the RC network representation of an ultra-high-speed image sensor. The Elmore delay metric for a fundamental RC chain is employed as the first-order approximation. By applying dimensional analysis to SPICE simulation data, we found a simple expression that significantly improves the accuracy of the approximation. The estimation error of the resultant expression for uniform RC networks is less than 2%. Similarly, another simple closed-form model to estimate the 50% delay through fundamental RC networks is also derived with sufficient accuracy. The framework of this analysis can be extended to address delay or attenuation issues of other VLSI structures.
Keywords: Dimensional Analysis, Elmore model, RC network, Signal Attenuation, Ultra-High-Speed Image Sensor.
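As a reference point for the first-order approximation mentioned above, the sketch below computes the Elmore delay of a uniform N-stage RC ladder, where each node i sees the total upstream resistance i*R, so the delay at the far end is R*C*N(N+1)/2. The dimensional-analysis correction found by the paper is not reproduced here.

```python
def elmore_delay_uniform(n_stages, r_per_stage, c_per_stage):
    """Elmore delay at the end of a uniform RC ladder.

    Node i sees upstream resistance i*R, so
    T_elmore = sum_i (i * R) * C = R * C * n * (n + 1) / 2.
    """
    return sum(i * r_per_stage * c_per_stage for i in range(1, n_stages + 1))

def elmore_delay_general(resistances, capacitances):
    """General RC chain: delay = sum over nodes of (upstream R) * C_node."""
    delay, r_upstream = 0.0, 0.0
    for r, c in zip(resistances, capacitances):
        r_upstream += r
        delay += r_upstream * c
    return delay

# 100 stages of 10 ohm / 1 pF each: closed form vs. the general routine.
print(elmore_delay_uniform(100, 10.0, 1e-12))         # 5.05e-08 s
print(elmore_delay_general([10.0] * 100, [1e-12] * 100))
```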
1459 Feasibility of the Evolutionary Algorithm using Different Behaviours of the Mutation Rate to Design Simple Digital Logic Circuits
Authors: Konstantin Movsovic, Emanuele Stomeo, Tatiana Kalganova
Abstract:
The evolutionary design of electronic circuits, or evolvable hardware, is a discipline that allows the user to obtain the desired circuit design automatically. The circuit configuration is under the control of evolutionary algorithms. Several researchers have used evolvable hardware to design electrical circuits. Every time a particular algorithm is selected to carry out the evolution, it is necessary that all its parameters, such as mutation rate, population size, selection mechanisms, etc., are tuned in order to achieve the best results during the evolution process. This paper investigates the ability of an evolution strategy to evolve digital logic circuits based on programmable logic array structures when different mutation rates are used. Several mutation rates (fixed and variable) are analyzed and compared with each other to identify the most appropriate choice to be used during the evolution of combinational logic circuits. The experimental results outlined in this paper are important as they could be used by any researcher who needs to use evolutionary algorithms to design digital logic circuits.
Keywords: Evolvable hardware, evolutionary algorithm, digital logic circuit, mutation rate.
1458 ROI Based Embedded Watermarking of Medical Images for Secured Communication in Telemedicine
Authors: Baisa L. Gunjal, Suresh N. Mali
Abstract:
Medical images require special safety and confidentiality because critical judgments are made on the information they provide. Transmission of medical images via the internet or mobile phones demands strong security and copyright protection in telemedicine applications. Here, a highly secure and robust watermarking technique is proposed for the transmission of image data via the internet and mobile phones. The Region of Interest (ROI) and Non Region of Interest (RONI) of the medical image are separated, and only the RONI is used for watermark embedding. This technique results in exact recovery of the watermark with standard medical database images of size 512x512, giving a correlation factor equal to 1. The correlation factor for different attacks such as noise addition, filtering, rotation and compression ranges from 0.90 to 0.95. The PSNR with a weighting factor of 0.02 is up to 48.53 dB. The presented scheme is non-blind and embeds a hospital logo of size 64x64.
Keywords: Compression, DWT, ROI, Scrambling, Vertices
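The following sketch shows one simplified form of ROI-aware DWT watermarking with the quoted weighting factor of 0.02: a binary logo is added to level-1 Haar detail coefficients that lie outside the ROI, extraction is non-blind (it needs the original image), and a normalized correlation factor compares embedded and extracted bits. The Haar wavelet, the HH-subband choice, and all function names are assumptions; the authors' scheme, which also involves scrambling, is not reproduced.

```python
import numpy as np
import pywt

ALPHA = 0.02  # weighting factor, as quoted in the abstract

def embed_watermark(image, logo_bits, roi_mask):
    """Add a flattened binary logo to level-1 HH coefficients of a Haar DWT,
    touching only coefficient positions whose 2x2 block lies outside the ROI."""
    ll, (lh, hl, hh) = pywt.dwt2(image.astype(float), "haar")
    roni = ~pywt.dwt2(roi_mask.astype(float), "haar")[0].astype(bool)
    idx = np.flatnonzero(roni.ravel())[: logo_bits.size]
    hh_flat = hh.ravel()
    hh_flat[idx] += ALPHA * np.max(image) * (2 * logo_bits - 1)  # +/- embedding
    return pywt.idwt2((ll, (lh, hl, hh_flat.reshape(hh.shape))), "haar")

def extract_watermark(watermarked, original, roi_mask, n_bits):
    """Non-blind extraction: compare HH coefficients of watermarked vs. original."""
    hh_w = pywt.dwt2(watermarked.astype(float), "haar")[1][2]
    hh_o = pywt.dwt2(original.astype(float), "haar")[1][2]
    roni = ~pywt.dwt2(roi_mask.astype(float), "haar")[0].astype(bool)
    idx = np.flatnonzero(roni.ravel())[:n_bits]
    return ((hh_w.ravel() - hh_o.ravel())[idx] > 0).astype(int)

def correlation_factor(original_bits, extracted_bits):
    """Normalized correlation between embedded and extracted logo bits."""
    a, b = 2 * original_bits - 1, 2 * extracted_bits - 1
    return float(np.sum(a * b)) / original_bits.size
```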
1457 A Novel Multiresolution based Optimization Scheme for Robust Affine Parameter Estimation
Authors: J.Dinesh Peter
Abstract:
This paper describes a new method for affine parameter estimation between image sequences. Usually, parameter estimation is done by least squares with a quadratic error function. However, this technique can be sensitive to the presence of outliers. Therefore, parameter estimation techniques for various image processing applications must be robust enough to withstand the influence of outliers. Progressively, robust estimation functions demanding non-quadratic and perhaps non-convex potentials, adopted from the statistics literature, have been used to solve this. To optimize the error function in a framework that seeks a globally optimal solution, the minimization can begin with a convex estimator at the coarser level and gradually introduce non-convexity, i.e., move from soft to hard redescending non-convex estimators as the iteration reaches finer levels of the multiresolution pyramid. A comparison has been made between the performance of the proposed method and the results found individually using two different estimators.
Keywords: Image Processing, Affine parameter estimation, Outliers, Robust Statistics, Robust M-estimators.
1456 Empirical Survey of the Solar System Based on the Fusion of GPS and Image Processing
Authors: S. Divya Gnanarathinam, S. Sundaramurthy
Abstract:
The tremendous increase in the world's population creates an immediate need for energy resources. People everywhere need sustainable, low-cost energy resources. Solar energy is appraised as one of the main energy resources in warm countries. Areas in the west of India, such as Rajasthan and Gujarat, are immensely rich in solar energy resources. This paper deals with the development of a dual axis solar tracker using an Arduino board. Depending on the astronomical estimates of the sun from the GPS and the sensor image processing outcomes, a methodology is proposed to locate the position of the sun to obtain the maximum solar energy. Based on these outcomes, the solar tracking system decides whether to use the image processing outcomes or the astronomical estimates to attain the maximum efficiency of the solar panel. Finally, the experimental values obtained from the solar tracker for both sunny and rainy days are tabulated.
Keywords: Dual axis solar tracker, Arduino board, LDR sensors, global positioning system.
1455 High Dynamic Range Resampling for Software Radio
Authors: Arthur David Snider, Laiq Azam
Abstract:
The classic problem of recovering arbitrary values of a band-limited signal from its samples has an added complication in software radio applications: the resampling calculations inevitably fold aliases of the analog signal back into the original bandwidth. The phenomenon is quantified by the spurious-free dynamic range (SFDR). We demonstrate how a novel application of the Remez (Parks-McClellan) algorithm permits optimal signal recovery and SFDR, far surpassing state-of-the-art resamplers.
Keywords: Sampling methods, Signal sampling, Digital radio, Digital-analog conversion.
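A small sketch of the general Remez-based approach: design an equiripple lowpass with scipy.signal.remez at the upsampled rate so that aliases beyond the narrower Nyquist band are suppressed, then use it as the FIR prototype of a polyphase resampler. The sample rates, tap count, and band edges below are illustrative assumptions, not the optimized design from the paper.

```python
import numpy as np
from scipy import signal

def remez_resampling_filter(up, down, fs_in, numtaps=257, trans_frac=0.1):
    """Equiripple (Parks-McClellan / Remez) lowpass for a polyphase resampler.

    The filter operates at the upsampled rate fs_in*up and must cut off at
    the narrower of the input and output Nyquist frequencies; aliases that
    leak past this edge are what limit the spurious-free dynamic range.
    """
    fs_up = fs_in * up
    cutoff = 0.5 * min(fs_in, fs_in * up / down)   # protect both Nyquist bands
    edge = cutoff * (1.0 - trans_frac)             # passband edge
    return signal.remez(numtaps, [0, edge, cutoff, fs_up / 2],
                        desired=[1, 0], fs=fs_up)

# Resample a 100 kHz tone from 2.4 Msps down to 1.8 Msps (up=3, down=4).
fs_in, up, down = 2_400_000, 3, 4
t = np.arange(0, 0.002, 1 / fs_in)
x = np.sin(2 * np.pi * 100_000 * t)
h = remez_resampling_filter(up, down, fs_in)
y = signal.resample_poly(x, up, down, window=h)    # h used as the FIR taps
```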
1454 Lunar Rover Virtual Simulation System with Autonomous Navigation
Authors: Bao Jinsong, Hu Xiaofeng, Wang Wei, Yu Dili, Jin Ye
Abstract:
The paper presents a virtual simulation system based on a full-digital lunar terrain, integrated with a kinematics and dynamics module as well as an autonomous navigation simulation module. The system simulation models are established. Enabling technologies such as the digital lunar surface module, kinematics and dynamics simulation, and autonomous navigation are investigated. A prototype system for lunar rover locomotion simulation is developed based on these technologies. Autonomous navigation is a key technology in a lunar rover system, but it is rarely included in virtual simulation systems. An autonomous navigation simulation module has been integrated in this prototype system. The simulation results show that a synthetic simulation and visual analysis system is established and that it can provide efficient support for research on the autonomous navigation of lunar rovers.
Keywords: Lunar rover, virtual simulation, autonomous navigation, full-digital lunar terrain
1453 Validation on 3D Surface Roughness Algorithm for Measuring Roughness of Psoriasis Lesion
Authors: M.H. Ahmad Fadzil, Esa Prakasa, Hurriyatul Fitriyah, Hermawan Nugroho, Azura Mohd Affandi, S.H. Hussein
Abstract:
Psoriasis is a widespread skin disease affecting up to 2% of the population, with plaque psoriasis accounting for about 80% of cases. It can be identified as a red lesion, and at higher severity the lesion is usually covered with rough scale. Psoriasis Area Severity Index (PASI) scoring is the gold standard method for measuring psoriasis severity. Scaliness is one of the PASI parameters that needs to be quantified in PASI scoring. The surface roughness of a lesion can be used as a scaliness feature, since the scale present on the lesion surface makes the lesion rougher. Dermatologists usually assess the severity through their tactile sense, so direct contact between doctor and patient is required; the problem is that the doctor may not assess the lesion objectively. In this paper, a digital image analysis technique is developed to objectively determine the scaliness of the psoriasis lesion and provide the PASI scaliness score. The psoriasis lesion is modelled as a rough surface. The rough surface is created by superimposing a smooth average (curved) surface with a triangular waveform. For roughness determination, a polynomial surface fitting is used to estimate the average surface, followed by a subtraction between the rough and the average surface to give the elevation surface (surface deviations). The roughness index is calculated by applying the average roughness equation to the height map matrix. The roughness algorithm has been tested on 444 lesion models. In the roughness validation, only 6 models could not be accepted (percentage error greater than 10%). These errors occur due to the quality of the scanned images. The roughness algorithm is also validated by roughness measurement on abrasive papers on a flat surface. The Pearson's correlation coefficient between the grade value (G) of abrasive paper and Ra is -0.9488, which shows a strong relation between G and Ra. The algorithm needs to be improved by surface filtering, especially to overcome problems with noisy data.
Keywords: psoriasis, roughness algorithm, polynomial surface fitting.
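The fit-subtract-average pipeline described above can be sketched directly: fit a low-order 2-D polynomial to the height map by least squares, subtract it to obtain the elevation (deviation) surface, and take Ra as the mean absolute deviation. The polynomial order, the coordinate normalization, and the synthetic test surface are illustrative assumptions.

```python
import numpy as np

def average_roughness(height_map, order=3):
    """Arithmetic average roughness Ra of a height map.

    A 2-D polynomial 'average surface' of the given order is fitted by least
    squares, subtracted from the measured surface, and Ra is taken as the
    mean absolute deviation of the residual (elevation) surface.
    """
    h, w = height_map.shape
    y, x = np.mgrid[0:h, 0:w]
    x, y, z = x.ravel() / w, y.ravel() / h, height_map.ravel()
    # Design matrix with all monomials x^i * y^j, i + j <= order.
    terms = [x**i * y**j for i in range(order + 1)
                         for j in range(order + 1 - i)]
    A = np.column_stack(terms)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    deviation = z - A @ coeffs
    return np.mean(np.abs(deviation))

# Synthetic check: smooth bowl plus a triangular ripple of amplitude 0.5.
yy, xx = np.mgrid[0:200, 0:200] / 200.0
surface = (xx - 0.5) ** 2 + (yy - 0.5) ** 2 + 0.5 * np.abs((xx * 40) % 2 - 1)
print(average_roughness(surface))   # Ra dominated by the ripple, not the bowl
```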
1452 Beta-spline Surface Fitting to Multi-slice Images
Authors: Normi Abdul Hadi, Arsmah Ibrahim, Fatimah Yahya, Jamaludin Md. Ali
Abstract:
The Beta-spline is built on G2 continuity, which guarantees the smoothness of curves and surfaces generated with it. This curve is usually preferred for object design rather than reconstruction. This study, however, employs the Beta-spline in reconstructing a 3-dimensional G2 image of the Stanford Rabbit. The original data consist of multi-slice binary images of the rabbit. The result is then compared with related works using other techniques.
Keywords: Beta-spline, multi-slice image, rectangular surface, 3D reconstruction.
1451 Inefficiency of Data Storing in Physical Memory
Authors: Kamaruddin Malik Mohamad, Sapiee Haji Jamel, Mustafa Mat Deris
Abstract:
Memory forensics is important in digital investigation. The forensics is based on the data stored in physical memory, which involves memory management and processing time. However, current forensic tools do not consider efficiency in terms of storage management and processing time. This paper shows the high redundancy of data found in physical memory, which causes inefficiency in processing time and memory management. The experiment is done using the Borland C compiler on Windows XP with 512 MB of physical memory.
Keywords: Digital Evidence, Memory Forensics.
1450 An Automated Method to Segment and Classify Masses in Mammograms
Authors: Viet Dzung Nguyen, Duc Thuan Nguyen, Tien Dzung Nguyen, Van Thanh Pham
Abstract:
Mammography is the most effective procedure for an early diagnosis of breast cancer. Nowadays, much effort is being made to support radiologists as much as possible in the diagnosis process. The most popular approach currently being developed is the use of Computer-Aided Detection (CAD) systems to process digital mammograms and prompt suspicious regions to the radiologist. In this paper, an automated CAD system for the detection and classification of massive lesions in mammographic images is presented. The system consists of three processing steps: Regions-Of-Interest detection, feature extraction and classification. Our CAD system was evaluated on the Mini-MIAS database consisting of 322 digitized mammograms. The CAD system's performance is evaluated using Receiver Operating Characteristic (ROC) and Free-response ROC (FROC) curves. The achieved results are 3.47 false positives per image (FPpI) and a sensitivity of 85%.
Keywords: classification, computer-aided detection, feature extraction, mass detection.
1449 Error Rate Probability for Coded MQAM with MRC Diversity in the Presence of Cochannel Interferers over Nakagami-Fading Channels
Authors: J.S. Ubhi, M.S. Patterh, T.S. Kamal
Abstract:
Exact expressions for the bit-error probability (BEP) of coherent square detection of uncoded and coded M-ary quadrature amplitude modulation (MQAM), using an array of antennas with maximal ratio combining (MRC) in a flat-fading, interference-limited system in a Nakagami-m fading environment, are derived. The analysis assumes an arbitrary number of independent and identically distributed Nakagami interferers. The results for coded MQAM are computed numerically for the case of the (24,12) extended Golay code and compared with uncoded MQAM by plotting error probabilities versus average signal-to-interference ratio (SIR) for various values of the order of diversity N and the number of distinct symbols M, in order to examine the effect of cochannel interferers on the performance of the digital communication system. The diversity gains and net gains are also presented in tabular form in order to examine the performance of the digital communication system in the presence of interferers as the order of diversity increases. The analytical results presented in this paper are expected to provide useful information needed for the design and analysis of digital communication systems with space diversity in wireless fading channels.
Keywords: Cochannel interference, maximal ratio combining, Nakagami-m fading, wireless digital communications.
1448 Medical Image Segmentation Using Deformable Model and Local Fitting Binary: Thoracic Aorta
Authors: B. Bagheri Nakhjavanlo, T. S. Ellis, P. Raoofi, Sh. Ziari
Abstract:
This paper presents an application of level sets for the segmentation of abdominal and thoracic aortic aneurysms in CTA datasets. An important challenge in reliably detecting the aorta is the need to overcome problems associated with intensity inhomogeneities. Level sets are part of an important class of methods that utilize partial differential equations (PDEs) and have been extensively applied in image segmentation. A kernel function in the level set formulation aids the suppression of noise in the extracted regions of interest and then guides the motion of the evolving contour for the detection of weak boundaries. The speed of curve evolution has been significantly improved, with a resulting decrease in segmentation time compared with previous implementations of level sets, and the method is shown to be more effective than other approaches in coping with intensity inhomogeneities. We have applied the Courant-Friedrichs-Lewy (CFL) condition as the stability criterion for our algorithm.
Keywords: Image segmentation, Level-sets, Local fitting binary, Thoracic aorta.
1447 Kalman's Shrinkage for Wavelet-Based Despeckling of SAR Images
Authors: Mario Mastriani, Alberto E. Giraldez
Abstract:
In this paper, a new probability density function (pdf) is proposed to model the statistics of wavelet coefficients, and a simple Kalman's filter is derived from the new pdf using Bayesian estimation theory. Specifically, we decompose the speckled image into wavelet subbands, apply the Kalman's filter to the high-frequency subbands, and reconstruct a despeckled image from the modified detail coefficients. Experimental results demonstrate that our method compares favorably to several other despeckling methods on test synthetic aperture radar (SAR) images.
Keywords: Kalman's filter, shrinkage, speckle, wavelets.
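The subband pipeline underlying such despeckling methods can be sketched generically: decompose the image, shrink the detail subbands, and reconstruct. The sketch below uses plain soft thresholding with a universal threshold as a stand-in for the Kalman-based shrinkage derived in the paper, and omits the log transform that is often applied first to turn multiplicative speckle into additive noise; the wavelet and level count are assumptions.

```python
import numpy as np
import pywt

def despeckle_wavelet(image, wavelet="db4", levels=3):
    """Decompose, shrink the detail (high-frequency) subbands, reconstruct.

    Soft thresholding with a universal threshold stands in here for the
    Kalman-based shrinkage of the detail coefficients described in the paper.
    """
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]
    # Noise scale from the finest diagonal subband (robust MAD estimate).
    sigma = np.median(np.abs(details[-1][2])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(image.size))
    shrunk = [tuple(pywt.threshold(band, thresh, mode="soft") for band in level)
              for level in details]
    return pywt.waverec2([approx] + shrunk, wavelet)
```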
1446 A New Automatic System of Cell Colony Counting
Authors: U. Bottigli, M. Carpinelli, P. L. Fiori, B. Golosio, A. Marras, G. L. Masala, P. Oliva
Abstract:
The counting of cell colonies is always a long and laborious process that depends on the judgment and ability of the operator, and the operator's judgment can vary with fatigue. Moreover, since this activity is time-consuming, it can limit the usable number of dishes for each experiment. For these reasons, an automatic system of cell colony counting is needed. This article introduces a new automatic counting system based on the processing of digital images of cell colonies grown on Petri dishes. The system is mainly based on region-growing algorithms for the recognition of the regions of interest (ROI) in the image and a Sanger neural net for the characterization of such regions. The best final classification is supplied by a Feed-Forward Neural Net (FF-NN) and compared with the K-Nearest Neighbour (K-NN) classifier and a Linear Discriminant Function (LDF). Preliminary results are shown.
Keywords: Automatic cell counting, neural network, region growing, Sanger net.
1445 Support Vector Machine Prediction Model of Early-stage Lung Cancer Based on Curvelet Transform to Extract Texture Features of CT Image
Authors: Guo Xiuhua, Sun Tao, Wu Haifeng, He Wen, Liang Zhigang, Zhang Mengxia, Guo Aimin, Wang Wei
Abstract:
Purpose: To explore the use of the Curvelet transform to extract texture features of pulmonary nodules in CT images, and of support vector machines to establish a prediction model for small solitary pulmonary nodules, in order to improve the detection and diagnosis rate of early-stage lung cancer. Methods: 2461 benign or malignant small solitary pulmonary nodules in CT images from 129 patients were collected. Fourteen Curvelet transform textural features were used as parameters to establish the support vector machine prediction model. Results: Compared with other methods, using 252 texture features as parameters to establish the prediction model is more appropriate. The classification consistency, sensitivity and specificity of the model are 81.5%, 93.8% and 38.0%, respectively. Conclusion: Based on texture features extracted from the Curvelet transform, the support vector machine prediction model is sensitive to lung cancer, which can improve the rate of diagnosis of early-stage lung cancer to some extent.
Keywords: CT image, Curvelet transform, Small pulmonary nodules, Support vector machines, Texture extraction.
1444 On Combining Support Vector Machines and Fuzzy K-Means in Vision-based Precision Agriculture
Authors: A. Tellaeche, X. P. Burgos-Artizzu, G. Pajares, A. Ribeiro
Abstract:
One important objective in Precision Agriculture is to minimize the volume of herbicides that are applied to the fields through the use of site-specific weed management systems. In order to reach this goal, two major factors need to be considered: 1) the similar spectral signature, shape and texture of weeds and crops; 2) the irregular distribution of the weeds within the crop field. This paper outlines an automatic computer vision system for the detection and differential spraying of Avena sterilis, a noxious weed growing in cereal crops. The proposed system involves two processes: image segmentation and decision making. Image segmentation combines basic suitable image processing techniques in order to extract cells from the image as the low-level units. Each cell is described by two area-based attributes measuring the relations between the crops and the weeds. From these attributes, a hybrid decision-making approach determines whether or not a cell must be sprayed. The hybrid approach uses the Support Vector Machines and Fuzzy k-Means methods, combined through fuzzy aggregation theory. This constitutes the main contribution of this paper. The method's performance is compared against other available strategies.
Keywords: Fuzzy k-Means, Precision agriculture, Support Vector Machines, Weed detection.
1443 Evaluation of Robust Feature Descriptors for Texture Classification
Authors: Jia-Hong Lee, Mei-Yi Wu, Hsien-Tsung Kuo
Abstract:
Texture is an important characteristic of real and synthetic scenes. Texture analysis plays a critical role in inspecting surfaces and provides important techniques for a variety of applications. Although several descriptors have been presented to extract texture features, the development of object recognition is still a difficult task due to the complex aspects of texture. Recently, many robust and scaling-invariant image features such as SIFT, SURF and ORB have been successfully used in image retrieval and object recognition. In this paper, we compare the performance of these feature descriptors for texture classification using k-means clustering. Different classifiers, including K-NN, Naive Bayes, Back Propagation Neural Network, Decision Tree and KStar, were applied to three texture image sets: UIUCTex, KTH-TIPS and Brodatz, respectively. Experimental results reveal SIFT as the holder of the best average accuracy rate on UIUCTex and KTH-TIPS, while SURF has the advantage on the Brodatz texture set. The BP neural network works best in the test set classification among all the classifiers used.
Keywords: Texture classification, texture descriptor, SIFT, SURF, ORB.
1442 Multi-Scale Gabor Feature Based Eye Localization
Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Dusik Oh, Jaemin Kim, Seongwon Cho
Abstract:
Eye localization is necessary for face recognition and related application areas. Most eye localization algorithms reported so far still need to be improved in terms of precision and computational time for successful applications. In this paper, we propose an eye localization method based on multi-scale Gabor feature vectors, which is more robust with respect to initial points. Eye localization based on Gabor feature vectors first constructs an Eye Model Bunch for each eye (left or right), which consists of n Gabor jets and the average eye coordinates obtained from n model face images, and then localizes the eyes in an incoming face image by utilizing the fact that the true eye coordinates are most likely very close to the position whose Gabor jet has the best similarity with a Gabor jet in the Eye Model Bunch. Similar ideas have already been proposed, for example in EBGM (Elastic Bunch Graph Matching). However, the method used in EBGM is known not to be robust with respect to initial values and may need an extensive search range to achieve the required performance, and extensive search ranges cause a much greater computational burden. In this paper, we propose a multi-scale approach with only a slightly increased computational burden: one first localizes the eyes based on Gabor feature vectors in a coarse face image obtained by downsampling the original face image, and then localizes them based on Gabor feature vectors in the original-resolution face image, using the eye coordinates found in the coarse-scale image as initial points. Several experiments and comparisons with other eye localization methods reported in other papers show the efficiency of our proposed method.
Keywords: Eye Localization, Gabor features, Multi-scale, Gabor wavelets.
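The core primitives of such a method are the Gabor jet and a jet-similarity score, which can be sketched as follows: a jet collects the magnitudes of a bank of Gabor filter responses at one pixel, and candidate eye positions are ranked by the normalized dot product between their jets and a model jet. The filter-bank wavelengths, the sigma-to-wavelength ratio, and the magnitude-only similarity are assumptions for illustration, not the authors' exact parameterization.

```python
import numpy as np
import cv2

def gabor_jet(gray, point, ksize=31, scales=(4, 8, 16), n_orient=8):
    """Gabor jet: response magnitudes of a Gabor filter bank at one pixel."""
    x, y = point
    jet = []
    for lam in scales:                        # wavelengths, coarse to fine
        for k in range(n_orient):             # orientations
            theta = np.pi * k / n_orient
            kern = cv2.getGaborKernel((ksize, ksize), sigma=0.56 * lam,
                                      theta=theta, lambd=lam, gamma=1.0,
                                      ktype=cv2.CV_32F)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            jet.append(abs(resp[y, x]))
    return np.array(jet)

def jet_similarity(jet_a, jet_b):
    """Normalized dot product of jet magnitudes (1.0 = identical jets)."""
    return float(jet_a @ jet_b /
                 (np.linalg.norm(jet_a) * np.linalg.norm(jet_b) + 1e-12))

# Localization idea: scan candidate points near an initial guess and keep the
# one whose jet is most similar to a jet from the Eye Model Bunch, e.g.
# best = max(candidates, key=lambda p: jet_similarity(gabor_jet(img, p), model_jet))
```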