Search results for: color segmentation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 725


485 Visual Object Tracking in 3D with Color Based Particle Filter

Authors: Pablo Barrera, Jose M. Canas, Vicente Matellan

Abstract:

This paper addresses the problem of determining the current 3D location of a moving object and robustly tracking it from a sequence of camera images. The approach presented here uses a particle filter and does not perform any explicit triangulation. Only the color of the object to be tracked is required, but no precise motion model. The observation model we have developed avoids color filtering of the entire image. That, together with the Monte Carlo techniques inside the particle filter, provides real-time performance. Experiments with two real cameras are presented and lessons learned are discussed. The approach scales easily to more than two cameras and to new sensor cues.
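
A minimal sketch of a color-driven particle filter of this general kind is given below. It is illustrative only, not the authors' implementation: the random-walk prediction, the Gaussian color likelihood, and all names and parameters are assumptions.

import numpy as np

def color_likelihood(pixels, target_rgb, sigma=30.0):
    # Weight a particle by how close the sampled pixel color is to the target color.
    d = np.linalg.norm(pixels.astype(float) - target_rgb, axis=-1)
    return np.exp(-0.5 * (d / sigma) ** 2)

def particle_filter_step(particles, image, target_rgb, spread=5.0):
    # Predict: random-walk diffusion, since no precise motion model is assumed.
    particles = particles + np.random.normal(0.0, spread, particles.shape)
    h, w = image.shape[:2]
    xs = np.clip(particles[:, 0].astype(int), 0, w - 1)
    ys = np.clip(particles[:, 1].astype(int), 0, h - 1)
    # Update: weight each particle by the color observation at its image location only.
    weights = color_likelihood(image[ys, xs], np.asarray(target_rgb, float))
    weights /= weights.sum() + 1e-12
    # Resample: draw particles proportionally to their weights.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], weights[idx]

# Toy usage: 500 particles tracking a red patch in a synthetic 480x640 RGB frame.
frame = np.zeros((480, 640, 3), np.uint8); frame[200:240, 300:340] = (255, 0, 0)
parts = np.random.uniform([0, 0], [640, 480], size=(500, 2))
for _ in range(20):
    parts, w = particle_filter_step(parts, frame, (255, 0, 0))
print("estimated position:", parts.mean(axis=0))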

Keywords: Monte Carlo sampling, multiple view, particle filters, visual tracking.

Downloads: 1879
484 Improved Processing Speed for Text Watermarking Algorithm in Color Images

Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari

Abstract:

Copyright protection and ownership proof of digital multimedia are achieved nowadays by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and supporting ownership judgment of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing methods and hence is adopted for the embedding process. An optional choice of encrypting the text watermark before embedding is also suggested (in case required by some applications), where the text is encrypted using any enciphering technique, adding more difficulty for attackers. Experiments show an embedding speed of more than double that of the other systems considered (such as the least significant bit method and separate color code methods), and a fairly acceptable peak signal to noise ratio (PSNR) with low mean square error values for watermarking purposes.
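
As a rough illustration of the idea, the toy sketch below converts RGB to YIQ with the standard linear transform and scatters watermark text bits into the least significant bit of the luminance channel at key-driven pseudo-random positions. It is an assumption-laden stand-in, not the proposed algorithm; all function names and parameters are invented for the example.

import numpy as np

RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])

def embed_text(rgb, text, key=42):
    yiq = rgb.astype(float) @ RGB2YIQ.T            # per-pixel linear color transform
    bits = np.unpackbits(np.frombuffer(text.encode(), np.uint8))
    rng = np.random.default_rng(key)               # key-driven pseudo-random positions
    pos = rng.choice(yiq.shape[0] * yiq.shape[1], size=bits.size, replace=False)
    y = yiq[..., 0].ravel()
    y[pos] = (np.floor(y[pos]) // 2) * 2 + bits    # overwrite LSB of luminance values
    yiq[..., 0] = y.reshape(yiq.shape[:2])
    return yiq @ np.linalg.inv(RGB2YIQ).T          # back to RGB (float)

marked = embed_text(np.random.randint(0, 256, (64, 64, 3), np.uint8), "owner: Alice")
print(marked.shape)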

Keywords: Steganography, watermarking, private keys, time complexity measurements.

Downloads: 772
483 Automatic Vehicle Identification by Plate Recognition

Authors: Serkan Ozbay, Ergun Ercelebi

Abstract:

Automatic Vehicle Identification (AVI) has many applications in traffic systems (highway electronic toll collection, red light violation enforcement, border and customs checkpoints, etc.). License plate recognition is an effective form of AVI system. In this study, a smart and simple algorithm is presented for a vehicle license plate recognition system. The proposed algorithm consists of three major parts: extraction of the plate region, segmentation of characters, and recognition of plate characters. For extracting the plate region, edge detection and smearing algorithms are used. In the segmentation part, smearing algorithms, filtering, and some morphological operations are used. Finally, statistics-based template matching is used for recognition of the plate characters. The performance of the proposed algorithm has been tested on real images. Based on the experimental results, we note that our algorithm shows superior performance in car license plate recognition.
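
For the final recognition stage, statistics-based template matching can be sketched as below. This is a generic illustration under the assumption of binarized, size-normalized character images, not the authors' exact implementation; the normalized cross-correlation score and the toy templates are assumptions.

import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equally sized grayscale patches.
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def recognize(char_img, templates):
    # templates: dict mapping a character label to its template image.
    scores = {label: ncc(char_img, tpl) for label, tpl in templates.items()}
    return max(scores, key=scores.get)

# Toy example with two 8x8 "templates".
t0 = np.zeros((8, 8)); t0[:, 3:5] = 1          # a vertical bar, say "1"
t1 = np.zeros((8, 8)); t1[3:5, :] = 1          # a horizontal bar, say "-"
print(recognize(t0 + 0.1 * np.random.rand(8, 8), {"1": t0, "-": t1}))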

Keywords: Character recognizer, license plate recognition, plate region extraction, segmentation, smearing, template matching.

Downloads: 7539
482 An Optical Flow Based Segmentation Method for Objects Extraction

Authors: C. Lodato, S. Lopes

Abstract:

This paper describes a segmentation algorithm based on the cooperation of an optical flow estimation method with edge detection and region growing procedures. The proposed method has been developed as a pre-processing stage for use in methodologies and tools for video/image indexing and retrieval by content. The problem addressed consists of extracting whole objects from the background in order to produce images of single complete objects from videos or photos. The extracted images are used for calculating the object visual features necessary for both indexing and retrieval. The first task of the algorithm exploits cues from motion analysis for moving-area detection. Objects and background are then refined using edge detection and region growing procedures, respectively. These tasks are performed iteratively until objects and background are completely resolved. The developed method has been applied to a variety of indoor and outdoor scenes in which objects of different types and shapes appear against variously textured backgrounds.
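
The moving-area detection step could look roughly like the sketch below, which uses OpenCV's dense Farnebäck flow as a stand-in for the unspecified flow estimator; the magnitude threshold and the morphological clean-up are illustrative assumptions, not the paper's parameters.

import cv2
import numpy as np

def moving_mask(prev_gray, curr_gray, mag_thresh=1.0):
    # Dense optical flow between two consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    # Pixels with sufficient motion form the initial object mask.
    mask = (mag > mag_thresh).astype(np.uint8) * 255
    # Clean the mask a little before edge-detection / region-growing refinement.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

prev_f = np.zeros((120, 160), np.uint8); prev_f[40:80, 40:80] = 255
curr_f = np.zeros((120, 160), np.uint8); curr_f[40:80, 50:90] = 255
print(moving_mask(prev_f, curr_f).max())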

Keywords: Motion Detection, Object Extraction, Optical Flow, Segmentation.

Downloads: 1850
481 Ultra High Speed Approach for Document Skew Detection and Correction Based On Centre of Gravity

Authors: Seyyed Yasser Hashemi

Abstract:

Skew detection and correction (SDC) has a direct effect on the efficiency and accuracy of document segmentation and analysis and is thus considered a very important step in the document analysis field. Skew is a major problem in document analysis for every language; for Arabic/Persian scripts the problem is more severe because of special features of these languages. In this paper, an efficient and fast algorithm for Document Skew Detection (DSD) based on the concept of segmentation and the Center of Gravity (COG) is proposed. The algorithm is evaluated on 150 Arabic/Persian and English documents, and the SDC process succeeds for 93 percent of the documents with an error of less than 1°. The algorithm shows better results for English documents than for Arabic/Persian documents. The proposed method also yields favorable results for handwritten, printed, and complex documents such as newspapers and journals, even at very low quality and resolution.
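
One simple way to realize the centre-of-gravity idea is sketched below: split the binarized page into vertical strips, compute the centre of gravity of the ink pixels in each strip, and fit a line through those centres, whose slope gives the skew angle. This is a hedged illustration of the general principle, not the published algorithm; the strip count and the line fit are assumptions.

import numpy as np

def estimate_skew_deg(binary_img, n_strips=8):
    # binary_img: 2D array, nonzero where ink is present.
    h, w = binary_img.shape
    xs, ys = [], []
    for i in range(n_strips):
        strip = binary_img[:, i * w // n_strips:(i + 1) * w // n_strips]
        r, c = np.nonzero(strip)
        if r.size == 0:
            continue
        ys.append(r.mean())                        # centre of gravity (row)
        xs.append(c.mean() + i * w // n_strips)    # centre of gravity (column, global)
    slope = np.polyfit(xs, ys, 1)[0]               # least-squares line through the COGs
    return np.degrees(np.arctan(slope))

page = np.zeros((200, 400), np.uint8)
for x in range(50, 350):                           # a synthetic text line skewed by ~3 degrees
    page[100 + int(0.05 * (x - 50)), x] = 1
print(round(estimate_skew_deg(page), 1))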

Keywords: Arabic/Persian document, Baseline, Centre of gravity, Document segmentation, Skew detection and correction.

Downloads: 1857
480 A Design for Supply Chain Model by Integrated Evaluation of Design Value and Supply Chain Cost

Authors: Yuan-Jye Tseng, Jia-Shu Li

Abstract:

To design a product for a given product requirement and design objective, there can be alternative ways to propose the detailed design specifications of the product. In the design modeling stage, alternative design cases with detailed specifications can be modeled to fulfill the product requirement and design objective. In the design evaluation stage, it is therefore necessary to evaluate the alternative design cases in order to decide the final design. The purpose of this research is to develop a product evaluation model for evaluating the alternative design cases by integrating the evaluation of criteria for functional design, Kansei design, and design for supply chain. The criteria in the functional design group include primary function, expansion function, improved function, and new function. The criteria in the Kansei design group include geometric shape, dimension, surface finish, and layout. The criteria in the design for supply chain group include material, manufacturing process, assembly, and supply chain operation. From the point of view of value and cost, the criteria in the functional design and Kansei design groups represent the design value of the product, while the criteria in the design for supply chain group represent the supply chain and manufacturing cost of the product. Both the design value and the supply chain cost must be evaluated to determine the final design. For evaluating the criteria in the three groups, a fuzzy analytic network process (FANP) method is presented to compute a weighted index by calculating the total relational values among the three groups. The technique for order preference by similarity to ideal solution (TOPSIS) is then used to compare and rank the alternative design cases according to the weighted index. The final design case can be determined from the resulting ranking; for example, the top-ranked design case can be selected as the final design. Based on the criteria in the evaluation, the design objective can be achieved with a combined and weighted effect of the design value and manufacturing cost. An example product is demonstrated and illustrated. The results show that the design evaluation model is useful for the integrated evaluation of functional design, Kansei design, and design for supply chain to determine the best design case and achieve the design objective.
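
The TOPSIS ranking step can be sketched as follows. This is a generic TOPSIS implementation with made-up weights and scores; the FANP-derived weights and the actual criteria values of the study are not reproduced here.

import numpy as np

def topsis_rank(matrix, weights, benefit):
    # matrix: alternatives x criteria scores; benefit[j] is True if larger is better.
    m = np.asarray(matrix, float)
    norm = m / np.linalg.norm(m, axis=0)                # vector normalization
    v = norm * weights                                  # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)           # distance to ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)            # distance to anti-ideal solution
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness), closeness            # best-first ranking

# Three hypothetical design cases scored on design value (benefit) and supply chain cost.
scores = [[8.0, 120.0], [7.0, 90.0], [9.0, 150.0]]
order, cc = topsis_rank(scores, weights=[0.6, 0.4], benefit=[True, False])
print(order, np.round(cc, 3))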

Keywords: Design evaluation, functional design, Kansei design, supply chain, design value, manufacturing cost, fuzzy analytic network process, technique for order preference by similarity to ideal solution.

Downloads: 747
479 A New Hybrid RMN Image Segmentation Algorithm

Authors: Abdelouahab Moussaoui, Nabila Ferahta, Victor Chen

Abstract:

The development of systems to aid medical diagnosis is not easy because of inhomogeneities in MRI data, the variability of the data from one sequence to another, and other sources of distortion that accentuate the difficulty. A new automatic, contextual, adaptive, and robust segmentation procedure for MRI brain tissue classification is described in this article. A first phase estimates the probability density of the data by the Parzen-Rosenblatt method. The classification procedure is completely automatic and makes no assumptions about either the number of clusters or their prototypes, since these are detected automatically by a mathematical morphology operator, the skeleton by influence zones (SKIZ). The problem of initializing the prototypes and their number is thus transformed into an optimization problem. Moreover, the procedure is adaptive, since it takes into consideration the contextual information present in every voxel through an adaptive and robust nonparametric Markov field (MF) model. The number of misclassifications is reduced by minimizing the Maximum Posterior Marginal (MPM) criterion.

Keywords: Clustering, Automatic Classification, SKIZ, Markov Fields, Image segmentation, Maximum Posterior Marginal (MPM).

Downloads: 1366
478 Fragile Watermarking for Color Images Using Thresholding Technique

Authors: Kuo-Cheng Liu

Abstract:

In this paper, we propose a block-wise watermarking scheme for color image authentication to resist malicious tampering of digital media. A thresholding technique is incorporated into the scheme so that the tampered region of the color image can be recovered with high quality while the proofing result is obtained. The watermark for each block consists of its dual authentication data and the corresponding feature information. The feature information used for recovery is computed by the thresholding technique. In the proofing process, we propose a dual-option parity check method to verify the validity of image blocks. In the recovery process, the feature information of each block embedded in the color image is rebuilt for high-quality recovery. The simulation results show that the proposed watermarking scheme can effectively detect the tampered region with a high detection rate and can recover the tampered region with high quality.

Keywords: thresholding technique, tamper proofing, tamper recovery

Downloads: 1602
477 The Truth about Good and Evil: A Mixed-Methods Approach to Color Theory

Authors: Raniya Alsharif

Abstract:

The color theory of good and evil concerns the association of colors with the omnipresent concepts of good and evil: human behavior and perception can be strongly influenced by seeing black and white, making these connotations distinctive to the point of being hard to untangle. The theory is a human construct that dates back to ancient Egypt and has since been used in almost all forms of communication and expression, such as art, fashion, literature, and religious manuscripts, helping to implant preconceived ideas that influence behavior and society. This is a mixed-methods study that uses surveys to collect quantitative data related to the theory and a vignette to collect qualitative data: in a scenario, participants aged between 18 and 25 style two characters with good and bad characteristics in color-contrasting clothes. Both strands yield results about the nature of the preconceived perceptions associated with 'black and white' and 'good and evil', illustrating the important role of media and communications in shaping human behavior and the subconscious, and also uncovering how far this theory persists in the age of social media enlightenment.

Keywords: Color perception, interpretivism, thematic analysis, vignettes.

Downloads: 939
476 Watermarking Scheme for Color Images using Wavelet Transform based Texture Properties and Secret Sharing

Authors: Nagaraj V. Dharwadkar, B. B. Amberker

Abstract:

In this paper, a new secure watermarking scheme for color images is proposed. It splits the watermark into two shares using a (2, 2)-threshold Visual Cryptography Scheme (VCS) with an adaptive order dithering technique and embeds one share into a highly textured subband of the luminance channel of the color image. The other share is used as the key and is available only to the super-user or the author of the image. In this scheme, only the super-user can reveal the original watermark. The proposed scheme is dynamic in the sense that, to maintain the perceptual similarity between the original and the watermarked image, the selected subband coefficients are modified by varying the watermark scaling factor. The experimental results demonstrate the effectiveness of the proposed scheme. Furthermore, the proposed scheme is able to resist all common attacks, even at strong attack amplitudes.
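
The (2, 2)-threshold visual cryptography step can be illustrated with the classic construction below; this is a textbook sketch of (2, 2)-VCS on a binary watermark, and the dithering, DWT subband selection, and scaling-factor logic of the actual scheme are omitted.

import numpy as np

# The two complementary 2x2 patterns used by the classic (2,2)-VCS construction.
PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]

def make_shares(watermark_bits, rng=np.random.default_rng(0)):
    # watermark_bits: 2D binary array (1 = black). Returns two shares, each twice as large.
    h, w = watermark_bits.shape
    s1 = np.zeros((2 * h, 2 * w), np.uint8)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(2)]
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            # White pixel: identical patterns (stacking stays half-black).
            # Black pixel: complementary patterns (stacking becomes fully black).
            s2[2*i:2*i+2, 2*j:2*j+2] = p if watermark_bits[i, j] == 0 else 1 - p
    return s1, s2

wm = (np.random.rand(16, 16) > 0.5).astype(np.uint8)
a, b = make_shares(wm)
stacked = np.maximum(a, b)          # OR-stacking of the two shares reveals the watermark
print(stacked.shape)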

Keywords: VCS, Dithering, HVS, DWT.

Downloads: 1997
475 Word Base Line Detection in Handwritten Text Recognition Systems

Authors: Kamil R. Aida-zade, Jamaladdin Z. Hasanov

Abstract:

An approach is offered for more precise detection of baseline borders in handwritten cursive text, and general problems of handwritten text segmentation are also analyzed. The proposed method addresses problems that arise in handwriting recognition with specific slant, in other words, where the letters of a word do not lie on the same line. As informative features, some recognition systems use the ascending and descending parts of letters, found after detection of the word's baseline. In such systems, problems in baseline detection affect the quality of recognition and decrease the recognition rate. Unlike other methods, here the borders are found from small pieces containing segmentation elements and are defined as a set of linear functions. In this method, separate borders are found for the top and bottom baselines. Finally, Azerbaijani cursive handwritten texts written in the Latin alphabet by different authors are analyzed.

Keywords: Azeri, Azerbaijani, Latin, segmentation, cursive, HWR, handwritten, recognition, baseline, ascender, descender, symbols.

Downloads: 2431
474 Contribution of Vitaton (β-Carotene) to the Rearing Factors, Survival Rate and Visual Flesh Color of Rainbow Trout in Comparison with Astaxanthin

Authors: M. Ghotbi, M. Ghotbi, Gh. Azari Takami

Abstract:

In this study, Vitaton (an organic supplement that contains fermentative β-carotene) and synthetic astaxanthin (CAROPHYLL® Pink) were evaluated as pro-growth factors in the rainbow trout diet. An 8-week feeding trial was conducted to determine the effects of Vitaton versus astaxanthin on rearing factors, survival rate, and visual flesh color of rainbow trout (Oncorhynchus mykiss) with an initial weight of 196±5. Four practical diets were formulated to contain 50 and 80 ppm of β-carotene and astaxanthin, and a control diet was prepared without any pigment. Each diet was fed to triplicate groups of fish reared in fresh water. Fish were fed twice daily. The water temperature fluctuated from 12 to 15 °C, and the dissolved oxygen content was between 7 and 7.5 mg/l during the experimental period. At the end of the experiment, growth and food utilization parameters and survival rate were unaffected by the dietary treatments (p>0.05). There was also no significant difference in carcass yield among treatments (p>0.05). No significant difference was observed in the visual flesh color (SalmoFan score) of fish fed the Vitaton-containing diets. In contrast, feeding on diets containing 50 and 80 ppm of astaxanthin increased the SalmoFan score (flesh astaxanthin concentration) from <20 (<1 mg/kg) to 23.33 (2.03 mg/kg) and 27.67 (5.74 mg/kg), respectively. Ultimately, a significant difference was seen between the flesh carotenoid concentrations of fish fed the astaxanthin-containing treatments and the control treatment (p<0.05). It should be noted that only the raw fillet color of fish in the 80 ppm astaxanthin treatment was close to the color targets (SalmoFan scores) adopted for harvest-size fish.

Keywords: Astaxanthin, Flesh color, Rainbow trout, Vitaton, β-carotene.

Downloads: 3377
473 An Approach to Image Extraction and Accurate Skin Detection from Web Pages

Authors: Moheb R. Girgis, Tarek M. Mahmoud, Tarek Abd-El-Hafeez

Abstract:

This paper proposes a system to extract images from web pages and then detect the skin color regions of these images. As part of the proposed system, using the BandObject control, we built a toolbar named 'Filter Tool Bar (FTB)' by modifying the Pavel Zolnikov implementation. The Yahoo! SDK API, which supports image search, is also used. In the proposed system, we introduce three new methods for extracting images from web pages (after loading the web page using the proposed FTB, before loading the web page physically from the localhost, and before loading the web page from any server). These methods overcome the drawback of the regular-expression method for extracting images suggested by Ilan Assayag. The second part of the proposed system is concerned with detecting the skin color regions of the extracted images. We studied two well-known skin color detection techniques: the first is based on the RGB color space and the second on the YUV and YIQ color spaces. We modified the second technique to overcome its failure on complex image backgrounds by using the saturation parameter, obtaining more accurate skin detection results. The performance of the proposed system in extracting images before and after loading the web page from the localhost or any server is evaluated in terms of the number of extracted images. Finally, the results of comparing the two skin detection techniques in terms of the number of pixels detected are presented.
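
The first family of skin-colour techniques, RGB-space rules, is commonly written as explicit threshold conditions like those sketched below. This is a widely cited rule of that family given purely as an illustration; it is not claimed to be the exact thresholds used in the paper.

import numpy as np

def rgb_skin_mask(img):
    # img: HxWx3 uint8 RGB image. Returns a boolean skin mask.
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    mx = np.max(img, axis=-1).astype(int)
    mn = np.min(img, axis=-1).astype(int)
    # A classic uniform-daylight RGB skin rule (thresholds are illustrative).
    return ((r > 95) & (g > 40) & (b > 20) &
            (mx - mn > 15) & (np.abs(r - g) > 15) &
            (r > g) & (r > b))

demo = np.zeros((2, 2, 3), np.uint8)
demo[0, 0] = (200, 120, 90)      # a typical skin tone
demo[1, 1] = (30, 140, 200)      # clearly non-skin
print(rgb_skin_mask(demo))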

Keywords: Browser Helper Object, Color spaces, Image and URL extraction, Skin detection, Web Browser events.

Downloads: 1832
472 Graph-based High Level Motion Segmentation using Normalized Cuts

Authors: Sungju Yun, Anjin Park, Keechul Jung

Abstract:

Motion capture devices have been utilized in producing several kinds of content, such as movies and video games. However, since motion capture devices are expensive and inconvenient to use, motions segmented from captured data are recycled and synthesized for use in other content, and the motions have generally been segmented manually by content producers. Automatic motion segmentation has therefore recently attracted a lot of attention. Previous approaches are divided into on-line and off-line: on-line approaches segment motions based on similarities between neighboring frames, while off-line approaches segment motions by capturing global characteristics in feature space. In this paper, we propose a graph-based high-level motion segmentation method. Since high-level motions consist of several repeated frames within a temporal distance, we consider all similarities among all frames within that temporal distance. This is achieved by constructing a graph in which each vertex represents a frame and the edges between frames are weighted by their similarity. The normalized cuts algorithm is then used to partition the constructed graph into several sub-graphs by globally finding minimum cuts. In the experiments, the proposed method showed better performance than a PCA-based method on-line and a GMM-based method off-line, since the proposed method globally segments motions from a graph constructed from similarities between neighboring frames as well as similarities among all frames within the temporal distance.
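
The graph-partitioning step can be approximated with off-the-shelf spectral clustering on a precomputed frame-similarity matrix, roughly as below. scikit-learn's normalized-cuts-style spectral clustering is used here as a stand-in, and the frame features, Gaussian similarity kernel, and temporal window are invented for the example.

import numpy as np
from sklearn.cluster import SpectralClustering

def segment_motion(frames, n_segments, temporal_window=30, sigma=1.0):
    # frames: (T, D) array of per-frame pose features. Returns a label per frame.
    t = frames.shape[0]
    d = np.linalg.norm(frames[:, None, :] - frames[None, :, :], axis=-1)
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))       # similarity between every pair of frames
    idx = np.arange(t)
    w[np.abs(idx[:, None] - idx[None, :]) > temporal_window] = 0.0   # restrict to the temporal distance
    sc = SpectralClustering(n_clusters=n_segments, affinity="precomputed", random_state=0)
    return sc.fit_predict(w)

# Two synthetic "motions": low-valued features followed by high-valued features.
frames = np.vstack([np.random.rand(40, 4), np.random.rand(40, 4) + 5.0])
print(segment_motion(frames, n_segments=2)[::20])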

Keywords: Capture Devices, High-Level Motion, Motion Segmentation, Normalized Cuts

Downloads: 1269
471 A Similarity Function for Global Quality Assessment of Retinal Vessel Segmentations

Authors: Arturo Aquino, Manuel Emilio Gegundez, Jose Manuel Bravo, Diego Marin

Abstract:

Retinal vascularity assessment plays an important role in the diagnosis of ophthalmic pathologies. The use of digital images for this purpose makes a computerized approach possible and has motivated the development of many methods for automated vascular tree segmentation. Metrics based on contingency tables for binary classification have been widely used for evaluating the performance of these algorithms, and, concretely, accuracy has mostly been used as the measure of global performance in this field. However, this metric matches human perception very poorly and has other notable deficiencies. Here, a new similarity function for measuring the quality of retinal vessel segmentations is proposed. This similarity function is based on characterizing the vascular tree as a connected structure with a measurable area and length. Tests indicate that the new approach behaves better than the current one. More generally, this concept of measuring descriptive properties may be used to design functions for more successfully measuring the segmentation quality of other complex structures.

Keywords: Retinal vessel segmentation, quality assessment, performance evaluation, similarity function.

Downloads: 1456
470 A General Segmentation Scheme for Contouring Kidney Region in Ultrasound Kidney Images using Improved Higher Order Spline Interpolation

Authors: K. Bommanna Raja, M. Madheswaran, K. Thyagarajah

Abstract:

A higher order spline interpolated contour, obtained by up-sampling homogeneously distributed coordinates, for segmentation of the kidney region in different classes of ultrasound kidney images is developed and presented in this paper. The performance of the proposed method is measured and compared with a modified snake model contour, a Markov random field contour, and an expert-outlined contour. The method is validated against the expert-outlined contour using the maximum coordinate distance, Hausdorff distance, and mean radial distance metrics. The results reveal that the proposed scheme provides an optimal contour that agrees well with the expert-outlined contour. Moreover, this technique helps to preserve the pixels of interest, which specifically define the functional characteristics of the kidney. This opens up various possibilities for implementing a computer-aided diagnosis system exclusively for ultrasound kidney images.
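
The core idea of up-sampling a sparse set of boundary coordinates with a higher-order spline can be sketched with SciPy as below. This is a generic periodic cubic-spline fit through hand-picked contour points; the improved interpolation scheme and the kidney-specific processing of the paper are not reproduced.

import numpy as np
from scipy.interpolate import splprep, splev

def upsample_contour(points, n_out=200, order=3):
    # points: (N, 2) sparse (x, y) contour coordinates, closed (last point equals the first).
    x, y = points[:, 0], points[:, 1]
    # Fit a closed (periodic) spline of the requested order through the points.
    tck, _ = splprep([x, y], s=0.0, k=order, per=True)
    u = np.linspace(0.0, 1.0, n_out)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])

# Sparse points on an ellipse; the spline densifies them into a smooth closed contour.
theta = np.linspace(0, 2 * np.pi, 13)      # first and last points coincide
sparse = np.column_stack([50 + 20 * np.cos(theta), 60 + 15 * np.sin(theta)])
dense = upsample_contour(sparse)
print(dense.shape)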

Keywords: Ultrasound kidney image, kidney segmentation, active contour, Markov random field, higher order spline interpolation.

Downloads: 1700
469 Embedded Semi-Fragile Signature Based Scheme for Ownership Identification and Color Image Authentication with Recovery

Authors: M. Hamad Hassan, S.A.M. Gilani

Abstract:

In this paper, a novel scheme is proposed for ownership identification and color image authentication by deploying cryptography and digital watermarking. The color image is first transformed from RGB to the YST color space, designed exclusively for watermarking. Following the color space transformation, each channel is divided into 4×4 non-overlapping blocks, with selection of the central 2×2 sub-blocks. Depending on the channel selected, two to three LSBs of each central 2×2 sub-block are set to zero to hold the ownership, authentication, and recovery information. The size and position of the sub-block are important for correct localization, enhanced security, and fast computation. As YS ⊥ T, the T channel is suitable for embedding the recovery information in addition to the ownership and authentication information; therefore, each 4×4 block of the T channel, along with the ownership information, is passed to SHA160 to compute a content-based hash that is unique and resistant to birthday attacks and hash collisions, unlike MD5, which may produce the condition H(m)=H(m'). For recovery, the intensity mean of the 4×4 block of each channel is computed and encoded in up to eight bits. For watermark embedding, key-based mapping of blocks is performed using a 2D torus automorphism. Our scheme is oblivious, generates highly imperceptible images with correct localization of tampering within reasonable time, and is able to recover the original work with probability near one.

Keywords: Hash Collision, LSB, MD5, PSNR, SHA160

Downloads: 1476
468 Segmentation Problems and Solutions in Printed Degraded Gurmukhi Script

Authors: M. K. Jindal, G. S. Lehal, R. K. Sharma

Abstract:

Character segmentation is an important preprocessing step for text recognition. In degraded documents, the presence of touching characters drastically decreases the recognition rate of any optical character recognition (OCR) system. In this paper, we propose a complete solution for segmenting touching characters in all three zones of printed Gurmukhi script. A study of touching Gurmukhi characters is carried out, and these characters are divided into various categories after careful analysis. Structural properties of the Gurmukhi characters are used for defining the categories. New algorithms are proposed to segment the touching characters in the middle, upper, and lower zones. These algorithms show a reasonable improvement in segmenting touching characters in degraded printed Gurmukhi script. The algorithms proposed in this paper are applicable only to machine-printed text. We also discuss a new and useful technique for segmenting horizontally overlapping lines.

Keywords: Character Segmentation, Middle Zone, Upper Zone, Lower Zone, Touching Characters, Horizontally Overlapping Lines.

Downloads: 1650
467 Region Segmentation based on Gaussian Dirichlet Process Mixture Model and its Application to 3D Geometric Structure Detection

Authors: Jonghyun Park, Soonyoung Park, Sanggyun Kim, Wanhyun Cho, Sunworl Kim

Abstract:

Image-based 3D scenes can now be found in many popular vision systems, computer games, and virtual reality tours, so it is important to segment the ROI (region of interest) from input scenes as a preprocessing step for geometric structure detection in a 3D scene. In this paper, we propose a method for segmenting the ROI based on tensor voting and a Dirichlet process mixture model. In particular, to estimate geometric structure information for a 3D scene from a single outdoor image, we apply tensor voting and the Dirichlet process mixture model to image segmentation. Tensor voting is used based on the fact that pixels of a homogeneous region in an image usually lie close together on a smooth region, and therefore the tokens corresponding to the centers of these regions have high saliency values. The proposed approach is a novel nonparametric Bayesian segmentation method using a Gaussian Dirichlet process mixture model to automatically segment various natural scenes. Finally, our method can label regions of the input image into coarse categories: "ground", "sky", and "vertical" for 3D applications. The experimental results show that our method successfully segments coarse regions in many complex natural scene images for 3D.
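
The nonparametric Bayesian clustering component can be approximated with a truncated Dirichlet process Gaussian mixture, for instance via scikit-learn as sketched below. This is an illustrative stand-in for the paper's model; the tensor-voting features and the ground/sky/vertical labelling are not included, and the synthetic features are invented.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def dp_segment(features, max_components=10):
    # features: (N, D) per-pixel or per-region feature vectors (e.g. colour + position).
    dpgmm = BayesianGaussianMixture(
        n_components=max_components,                        # truncation level
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        random_state=0,
    )
    labels = dpgmm.fit_predict(features)
    # Components with negligible posterior weight are effectively pruned by the DP prior.
    return labels, dpgmm.weights_

# Synthetic scene features drawn from three well-separated Gaussians.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(m, 0.3, size=(200, 3)) for m in (0.0, 3.0, 6.0)])
labels, weights = dp_segment(feats)
print(len(np.unique(labels)), np.round(np.sort(weights)[::-1][:4], 2))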

Keywords: Region segmentation, tensor voting, image-based 3D, geometric structure, Gaussian Dirichlet process mixture model

Downloads: 1845
466 A Semi-Fragile Signature based Scheme for Ownership Identification and Color Image Authentication

Authors: M. Hamad Hassan, S.A.M. Gilani

Abstract:

In this paper, a novel scheme is proposed for ownership identification and authentication of color images by deploying cryptography and digital watermarking as the underlying technologies. The former is used to compute the content-based hash and the latter to embed the watermark. The host image whose ownership is to be claimed is first transformed from RGB to the YST color space, designed exclusively for watermarking-based applications. Geometrically, YS ⊥ T, and the T channel corresponds to the chrominance component of the color image, making it suitable for embedding the watermark. The T channel is divided into 4×4 non-overlapping blocks. The block size is important for enhanced localization, security, and low computation. Each block, along with the ownership information, is then passed to SHA160, a one-way hash function, to compute the content-based hash, which is always unique and resistant to birthday attacks, unlike MD5, which may produce the condition H(m)=H(m'). The watermark payload varies from block to block and is computed by the variance factor α. The quality of the watermarked images is quite high, both subjectively and objectively. Our scheme is blind, computationally fast, and exactly locates the tampered region.

Keywords: Hash Collision, LSB, MD5, PSNR, SHA160.

Downloads: 1520
465 A Fuzzy Approach to Liver Tumor Segmentation with Zernike Moments

Authors: Abder-Rahman Ali, Antoine Vacavant, Manuel Grand-Brochier, Adélaïde Albouy-Kissi, Jean-Yves Boire

Abstract:

In this paper, we present a new segmentation approach for liver lesions in regions of interest within MRI (Magnetic Resonance Imaging). This approach, based on a two-cluster Fuzzy C-Means methodology, considers the compactness parameter as a variable to handle uncertainty. Fine boundaries are detected by a local recursive merging of ambiguous pixels using sequential forward floating selection with Zernike moments. The method has been tested on both synthetic and real images. When applied to synthetic images, the proposed approach provides good performance: the segmentations obtained are accurate, their shape is consistent with the ground truth, and the extracted information is reliable. The results obtained on MR images confirm these observations. Our approach allows, even for difficult MR images, extraction of a segmentation with good accuracy and shape, which implies that the geometry of the tumor is preserved for further clinical activities (such as automatic extraction of pharmacokinetic properties, lesion characterization, etc.).
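
A plain two-cluster fuzzy C-means core, the starting point of such an approach, can be written in a few lines as below. This is a standard FCM sketch only; the compactness parameter, the Zernike-moment refinement, and the sequential floating selection described above are not shown, and the toy data are invented.

import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, n_iter=100, eps=1e-6):
    # x: (N, D) data. Returns the membership matrix (N, c) and cluster centres (c, D).
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(c), size=len(x))              # random initial fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ x) / um.sum(axis=0)[:, None]      # membership-weighted cluster centres
        d = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=-1) + 1e-12
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        u_new = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        if np.abs(u_new - u).max() < eps:
            u = u_new
            break
        u = u_new
    return u, centres

data = np.vstack([np.random.rand(100, 2), np.random.rand(100, 2) + 3.0])
u, centres = fuzzy_cmeans(data)
print(np.round(centres, 1))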

Keywords: Defuzzification, floating search, fuzzy clustering, Zernike moments.

Downloads: 2007
464 Aircraft Selection Using Multiple Criteria Decision Making Analysis Method with Different Data Normalization Techniques

Authors: C. Ardil

Abstract:

This paper presents an original application of multiple criteria decision making analysis theory to the evaluation of aircraft selection problem. The selection of an optimal, efficient and reliable fleet, network and operations planning policy is one of the most important factors in aircraft selection problem. Given that decision making in aircraft selection involves the consideration of a number of opposite criteria and possible solutions, such a selection can be considered as a multiple criteria decision making analysis problem. This study presents a new integrated approach to decision making by considering the multiple criteria utility theory and the maximal regret minimization theory methods as well as aircraft technical, economical, and environmental aspects. Multiple criteria decision making analysis method uses different normalization techniques to allow criteria to be aggregated with qualitative and quantitative data of the decision problem. Therefore, selecting a suitable normalization technique for the model is also a challenge to provide data aggregation for the aircraft selection problem. To compare the impact of different normalization techniques on the decision problem, the vector, linear (sum), linear (max), and linear (max-min) data normalization techniques were identified to evaluate aircraft selection problem. As a logical implication of the proposed approach, it enhances the decision making process through enabling the decision maker to: (i) use higher level knowledge regarding the selection of criteria weights and the proposed technique, (ii) estimate the ranking of an alternative, under different data normalization techniques and integrated criteria weights after a posteriori analysis of the final rankings of alternatives. A set of commercial passenger aircraft were considered in order to illustrate the proposed approach. The obtained results of the proposed approach were compared using Spearman's rho tests. An analysis of the final rank stability with respect to the changes in criteria weights was also performed so as to assess the sensitivity of the alternative rankings obtained by the application of different data normalization techniques and the proposed approach.
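
The four data normalization techniques compared in the study are commonly defined as in the compact sketch below. The formulas are the standard ones; the decision matrix values are arbitrary and the criteria are assumed, for illustration, to be benefit criteria.

import numpy as np

def normalize(matrix, method):
    # matrix: alternatives x criteria (benefit criteria assumed for illustration).
    x = np.asarray(matrix, float)
    if method == "vector":
        return x / np.linalg.norm(x, axis=0)               # x_ij / sqrt(sum_i x_ij^2)
    if method == "linear_sum":
        return x / x.sum(axis=0)                           # x_ij / sum_i x_ij
    if method == "linear_max":
        return x / x.max(axis=0)                           # x_ij / max_i x_ij
    if method == "linear_max_min":
        rng = x.max(axis=0) - x.min(axis=0)
        return (x - x.min(axis=0)) / np.where(rng == 0, 1, rng)
    raise ValueError(method)

decision = [[3.0, 200.0], [4.5, 180.0], [4.0, 220.0]]      # hypothetical aircraft scores
for m in ("vector", "linear_sum", "linear_max", "linear_max_min"):
    print(m, np.round(normalize(decision, m), 2))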

Keywords: Normalization Techniques, Aircraft Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, MCDMA

Downloads: 490
463 A More Organized Proof for Acyclic Coloring of Graphs with Δ = 5 with 8 Colors

Authors: Ahmad Salehi

Abstract:

An acyclic coloring of a graph G is a coloring of its vertices such that: (i) no two neighbors in G are assigned the same color, and (ii) no bicolored cycle exists in G. The acyclic chromatic number of G is the least number of colors necessary to acyclically color G. Recently it has been proved that any graph of maximum degree 5 has an acyclic chromatic number of at most 8. In this paper we present another proof of this result.

Keywords: Acyclic Coloring, Vertex coloring.

Downloads: 1408
462 Total Chromatic Number of Δ-Claw-Free 3-Degenerated Graphs

Authors: Wongsakorn Charoenpanitseri

Abstract:

The total chromatic number χ"(G) of a graph G is the minimum number of colors needed to color the elements (vertices and edges) of G such that no incident or adjacent pair of elements receives the same color. Let G be a graph with maximum degree Δ(G), and consider a total coloring of G, focusing on a vertex of maximum degree: this vertex needs one color, and the Δ(G) edges incident to it need Δ(G) further distinct colors. Coloring all vertices and all edges of G therefore requires at least Δ(G) + 1 colors; that is, χ"(G) is at least Δ(G) + 1. However, no graph G is known whose total chromatic number is greater than Δ(G) + 2. The Total Coloring Conjecture states that for every graph G, χ"(G) is at most Δ(G) + 2. In this paper, we prove the Total Coloring Conjecture for Δ-claw-free 3-degenerated graphs; that is, we prove that the total chromatic number of every Δ-claw-free 3-degenerated graph is at most Δ(G) + 2.

Keywords: Total colorings, the total chromatic number, 3-degenerated.

Downloads: 661
461 Augmented Reality Interaction System in 3D Environment

Authors: Sunhyoung Lee, Askar Akshabayev, Beisenbek Baisakov, Youngjoon Han, Hernsoo Hahn

Abstract:

It is important to provide input information to an AR system without additional devices. One solution is to use the hand as an interface for augmented reality applications. Many researchers have proposed solutions for hand interfaces in augmented reality; histogram analysis and connectivity analysis are examples. Multi-direction searching is a robust way to recognize the hand, but it requires too much computation time, and the background must be distinguishable from skin color. This paper proposes a hand tracking method to control a 3D object in augmented reality using a depth device and skin color. This work also discusses the relationship between several markers, which is based on the relationship between the camera and each marker. One marker is used for displaying the virtual object and three markers for detecting hand gestures and manipulating the virtual object.

Keywords: Augmented Reality, depth map, hand recognition, kinect, marker, YCbCr color model.

Downloads: 1819
460 Computer Aided Diagnostic System for Detection and Classification of a Brain Tumor through MRI Using Level Set Based Segmentation Technique and ANN Classifier

Authors: Atanu K Samanta, Asim Ali Khan

Abstract:

Due to the acquisition of huge numbers of brain tumor magnetic resonance images (MRI) in clinics, it is very difficult for radiologists to manually interpret and segment these images within a reasonable span of time. Computer-aided diagnosis (CAD) systems can enhance the diagnostic capabilities of radiologists and reduce the time required for accurate diagnosis. An intelligent computer-aided technique for automatic detection of brain tumors in MRI is presented in this paper. The technique uses the following computational methods: the level set method for segmentation of the brain tumor from other brain parts, extraction of features from the segmented tumor region using the gray level co-occurrence matrix (GLCM), and an artificial neural network (ANN) to classify brain tumor images according to their respective types. The entire work is carried out on 50 images covering five types of brain tumor. The overall classification accuracy using this method is found to be 98%, which is significantly good.

Keywords: Artificial neural network, ANN, brain tumor, computer-aided diagnostic, CAD system, gray-level co-occurrence matrix, GLCM, level set method, tumor segmentation.

Downloads: 1305
459 Recursive Algorithms for Image Segmentation Based on a Discriminant Criterion

Authors: Bing-Fei Wu, Yen-Lin Chen, Chung-Cheng Chiu

Abstract:

In this study, a new criterion for determining the number of classes into which an image should be segmented is proposed. This criterion is based on discriminant analysis, which measures the separability among the segmented classes of pixels. Based on the new discriminant criterion, two algorithms for recursively segmenting the image into the determined number of classes are proposed. The proposed methods can automatically and correctly segment objects under various illuminations into separate images for further processing. Experiments on the extraction of text strings from complex document images demonstrate the effectiveness of the proposed methods.
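
Discriminant criteria of this kind are closely related to Otsu's between-class variance, sketched below for a single split. This is a simplified illustration under stated assumptions; the recursive multi-class procedure and the stopping rule used to decide the number of classes in the paper are not reproduced.

import numpy as np

def best_split(gray_values):
    # Return (threshold, separability) maximizing between-class variance of a 1D sample.
    hist, _ = np.histogram(gray_values, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_sb = 0, -1.0
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        sb = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance at this threshold
        if sb > best_sb:
            best_t, best_sb = t, sb
    return best_t, best_sb

pixels = np.concatenate([np.random.normal(60, 10, 5000), np.random.normal(180, 15, 5000)])
print(best_split(np.clip(pixels, 0, 255)))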

Keywords: image segmentation, multilevel thresholding, clustering, discriminant analysis

Downloads: 1975
458 Character Segmentation Method for a License Plate with Topological Transform

Authors: Jaedo Kim, Youngjoon Han, Hernsoo Hahn

Abstract:

This paper proposes a robust character segmentation method for license plates with topological transforms such as twist and rotation. The first step of the proposed method is to find candidate regions for characters and the license plate; a character or license plate must appear as a closed loop in the edge image. When detecting candidate character regions, the detected regions are evaluated using the topological relationship between the characters. When the method decides on a license plate candidate region, character features in the region, obtained by binarization, are used. After binarization of the detected candidate region, each character region is decided again; in this step, each character region is fitted more tightly than in the previous step. The method then checks for other character regions at different scales near the detected character regions, because most license plates carry license numbers with some meaningful characters around them. The method uses perspective projection for geometric normalization: if there is topological distortion in the character region, the method projects the region onto a template defined as a standard license plate. In this step, the method is able to separate each number region and the small meaningful characters. The evaluation results are tested on a number of test images.

Keywords: License Plate Detection, Character Segmentation, Perspective Projection, Topological Transform.

Downloads: 1885
457 Traditional Dyeing of Silk with Natural Dyes by Eco-Friendly Method

Authors: Samera Salimpour Abkenar

Abstract:

In traditional dyeing of natural fibers with natural dyes, metal salts are commonly used to increase color stability. This method always carries the risk of environmental pollution (contamination of arable soils and fresh groundwater) due to the release of dyeing effluents containing large amounts of metal. Therefore, researchers are always looking for new methods to obtain a green dyeing system. In this research, an enzymatic dyeing method is proposed to prevent environmental pollution with metals and to reduce production costs. After degumming and bleaching, raw silk fabrics were dyed with natural dyes (madder and sumac) by three methods (pre-mordanting with a metal salt, one-step enzymatic dyeing, and two-step enzymatic dyeing). Results show that silk dyed with natural dyes by the enzymatic method has higher color strength and colorfastness than silk pretreated with a metal salt. Also, the amount of dye remaining in the dyeing wastewater is significantly reduced by the enzymatic method. The enzymatic dyeing method is found to lead to improved dye absorption and color strength, a soft hand, no change in color shade, low production costs (due to the low dyeing temperature), and a significant reduction in environmental pollution.

Keywords: Eco-friendly, natural dyes, silk, traditional dyeing.

Downloads: 510
456 Developing Marketing Strategy in Nonmetallic Mineral Industry at the Business Level

Authors: Nader Gharibnavaz, Naser Gharibnavaz

Abstract:

This study extends research on the relationship between marketing strategy and market segmentation by investigating market segments in the cement industry. Competitive strength and rivals' distance from the factory were used to characterize the business environment. Three segments (positive, neutral or indifferent, and zero zones) were identified as strategic segments, and for each segment a marketing strategy (aggressive, defensive, or decline) was developed. This study employs data from the cement industry to fulfill two objectives: the first is to provide a framework for the segmentation of the cement industry, and the second is to develop marketing strategies under varying competitive strength. Fifty-six questionnaires containing closed- and open-ended questions were collected and analyzed. The results support the theory that segments tend to be more aggressive than defensive as competitive strength increases. It is concluded that high-strength segments follow total market coverage, concentric diversification, and frontal attacks on their competitors. With decreased competitive strength, businesses tend to follow a multi-market strategy, product modification/improvement, and flank attacks on direct competitors. Segments with weak competitive strength followed a focus strategy and a decline strategy.

Keywords: Marketing strategy, Competitive strength, Market Segmentation

Downloads: 1714