Search results for: Digital Fundus Image
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2281

2131 Local Image Descriptor using VQ-SIFT for Image Retrieval

Authors: Qiu Chen, Feifei Lee, Koji Kotani, Tadahiro Ohmi

Abstract:

In this paper, we present a local image descriptor using VQ-SIFT for more effective and efficient image retrieval. Instead of SIFT's weighted orientation histograms, we apply a vector quantization (VQ) histogram as an alternative representation for SIFT features. Experimental results show that SIFT features using VQ-based local descriptors can achieve better image retrieval accuracy than the conventional algorithm while significantly reducing the computational cost.
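
As an illustration of the VQ-histogram idea described above, the following minimal sketch quantizes SIFT descriptors against a learned codebook and represents each image by the resulting histogram. The use of OpenCV's SIFT, a 256-word k-means codebook, and L1 normalization are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: represent an image by a vector-quantization (VQ) histogram
# of its SIFT descriptors. Codebook size and normalization are illustrative.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_sift_descriptors(image_path):
    """Return the SIFT descriptors of a grayscale image as an N x 128 array."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray, None)
    return descriptors if descriptors is not None else np.empty((0, 128))

def build_codebook(training_descriptors, codebook_size=256):
    """Learn a VQ codebook (visual vocabulary) with k-means."""
    kmeans = KMeans(n_clusters=codebook_size, n_init=10, random_state=0)
    kmeans.fit(training_descriptors)
    return kmeans

def vq_histogram(descriptors, codebook):
    """Quantize each descriptor to its nearest codeword and histogram the labels."""
    labels = codebook.predict(descriptors)
    hist, _ = np.histogram(labels, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)   # L1-normalize so images are comparable
```

Images can then be compared by a histogram distance (e.g. cosine or chi-square) instead of matching raw SIFT descriptors.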

Keywords: SIFT feature, Vector quantization histogram, Local descriptor, Image retrieval.

2130 Labeling Method in Steganography

Authors: H. Motameni, M. Norouzi, M. Jahandar, A. Hatami

Abstract:

In this paper, a method for hiding a text message (steganography) in a grayscale image is presented. The method first determines the binary value of each character of the text message. It then locates the dark (black) regions of the grayscale image by converting the original image to a binary image and labeling each object using 8-connectivity. These images are subsequently converted to RGB images so that the dark regions can be identified, since each gray level maps to an RGB color and the dark level of the grayscale image is found this way; if the grayscale image is very light, the histogram must be adjusted manually so that only dark regions are selected. In the final stage, each group of 8 dark-region pixels is treated as a byte, and the binary value of each character is placed in the low bit of each such byte, increasing the security of the basic least-significant-bit (LSB) steganography approach.
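
The following minimal sketch shows the dark-region LSB idea in isolation, assuming an 8-bit grayscale image and a fixed darkness threshold; the connected-component labeling and manual histogram adjustment described above are omitted.

```python
# Minimal sketch: hide the bits of a text message in the least significant bit
# of pixels lying in dark regions of an 8-bit grayscale image. The labeling and
# manual histogram adjustment steps from the abstract are omitted here.
import numpy as np

def embed_in_dark_pixels(gray, message, dark_threshold=30):
    """Embed `message` into the LSBs of pixels darker than `dark_threshold`."""
    bits = [int(b) for ch in message for b in format(ord(ch), "08b")]
    dark_coords = np.argwhere(gray < dark_threshold)       # candidate dark pixels
    if len(bits) > len(dark_coords):
        raise ValueError("message too long for the available dark pixels")
    stego = gray.copy()
    for bit, (r, c) in zip(bits, dark_coords):
        stego[r, c] = (stego[r, c] & 0xFE) | bit            # overwrite the low bit
    return stego

def extract_from_dark_pixels(stego, n_chars, dark_threshold=30):
    """Recover `n_chars` characters; an even threshold keeps the dark set stable."""
    dark_coords = np.argwhere(stego < dark_threshold)
    bits = [str(stego[r, c] & 1) for r, c in dark_coords[:n_chars * 8]]
    return "".join(chr(int("".join(bits[i:i + 8]), 2)) for i in range(0, len(bits), 8))
```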

Keywords: Binary image, labeling, low bit, neighborhood, RGB image, steganography, threshold.

2129 Histogram Slicing to Better Reveal Special Thermal Objects

Authors: S. Ratna Sulistiyanti, Adhi Susanto, Thomas Sri Widodo, Gede Bayu Suparta

Abstract:

In this paper, an experiment to enhance the visibility of hot objects in a thermal image acquired with an ordinary digital camera is reported, after applying lowpass and median filters to suppress the distracting granular noise. Common thresholding and slicing techniques were applied to the histogram at different gray levels, followed by a subjective comparative evaluation. The best result was obtained with a threshold level of 115 and 3 slices.
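
A minimal sketch of the threshold-and-slice step is given below. The abstract does not specify the slicing rule, so equal-width slices above the threshold and the output display levels are assumptions made for illustration.

```python
# Minimal sketch: threshold a thermal image at gray level 115 and slice the
# remaining (hot) range into 3 bands for display. The equal-width rule and the
# output display levels are illustrative choices, not the authors' exact ones.
import numpy as np

def threshold_and_slice(gray, threshold=115, n_slices=3):
    """Map pixels below `threshold` to 0 and slice the rest into n bands."""
    sliced = np.zeros_like(gray)
    hot = gray >= threshold
    edges = np.linspace(threshold, 255, n_slices + 1)            # band boundaries
    levels = np.linspace(96, 255, n_slices).astype(gray.dtype)   # display levels
    for i in range(n_slices):
        band = hot & (gray >= edges[i]) & (gray <= edges[i + 1])
        sliced[band] = levels[i]
    return sliced
```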

Keywords: enhance, thermal image, thresholding and slicing techniques, granular noise, hot objects.

2128 A Process of Forming a Single Competitive Factor in the Digital Camera Industry

Authors: Kiyohiro Yamazaki

Abstract:

This paper examines the process by which a single competitive factor forms in the digital camera industry, from the viewpoint of the product platform. To make product development easier and to increase the rate of product introductions, companies concentrate their development efforts on improving and strengthening certain product attributes, and in this process a product platform is formed continuously. The formation of such a product platform raises the product development efficiency of individual companies, but it also involves a trade-off: it unifies the competitive factors across the whole industry. This research analyzes product specification data collected from the web pages of digital camera companies. Specifically, all product specifications released in Japan from 1995 to 2003 were collected, the composition of image sensors and optical lenses was analyzed, product platforms shared by multiple products were identified, and their application was discussed. The analysis found that product platforms emerged in the development of standard products for the major market segments. Every major company built product platforms for image sensors and optical lenses, and as a result the competitive factors became unified across the entire industry through this platform formation. In other words, platform formation improved the product development efficiency of individual firms, but it also caused the industry's competitive factors to become unified.

Keywords: Digital camera industry, product evolution trajectory, product platform, unification of competitive factors.

2127 Image Segmentation and Contour Recognition Based on Mathematical Morphology

Authors: Pinaki Pratim Acharjya, Esha Dutta

Abstract:

Contour detection is one of the important pre-processing steps in image segmentation. Contours characterize boundaries, and contour detection is one of the most difficult tasks in image processing; it is therefore a problem of fundamental importance. Contour detection considerably reduces the volume of data and removes useless information while preserving the structural properties of the image. In this research, a robust and effective contour detection technique based on mathematical morphology is proposed. Three different contour detection results are obtained using morphological dilation and erosion, and a comparative analysis of the three results is presented.
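
Since the abstract describes contours obtained from dilation and erosion, the sketch below shows the three standard morphology-based contour maps (external, internal, and full morphological gradient). The 3x3 structuring element is an illustrative choice.

```python
# Minimal sketch: three morphology-based contour maps obtained from dilation
# and erosion of an image. The 3x3 structuring element is an assumption.
import cv2
import numpy as np

def morphological_contours(image, ksize=3):
    kernel = np.ones((ksize, ksize), np.uint8)
    dilated = cv2.dilate(image, kernel)
    eroded = cv2.erode(image, kernel)
    external = cv2.subtract(dilated, image)   # external gradient
    internal = cv2.subtract(image, eroded)    # internal gradient
    full = cv2.subtract(dilated, eroded)      # morphological gradient
    return external, internal, full
```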

Keywords: Image segmentation, contour detection, mathematical morphology.

2126 Object Identification with Color, Texture, and Object-Correlation in CBIR System

Authors: Awais Adnan, Muhammad Nawaz, Sajid Anwar, Tamleek Ali, Muhammad Ali

Abstract:

The need for efficient information retrieval has increased more than ever in recent years because of the frequent use of digital information in our lives. A great deal of work exists on textual information, but much less progress can be seen for multimedia information. For text-based information, data mining and data mart technologies, which grew out of basic database concepts introduced around the 1960s, are now in routine use. In image search, and especially in image identification, computerized systems are still at a very early stage, and even here we cannot see as much progress as in text-based search techniques. One main reason is the widespread roots of image search, where many areas such as artificial intelligence, statistics, image processing, and pattern recognition play a role; human psychology, perception, and cultural diversity also contribute to the design of a good and efficient image recognition and retrieval system. A new object-based search technique is presented in this paper, in which objects in the image are identified on the basis of their geometrical shapes and other features such as color and texture, and object correlation augments the search process. To keep the focus on object identification, simple images are selected for this work in order to reduce the role of segmentation in the overall process; however, the same technique can also be applied to other images.

Keywords: Object correlation, Geometrical shape, Color, texture, features, contents.

2125 Parallel Image Compression and Analysis with Wavelets

Authors: M. Kutila, J. Viitanen

Abstract:

This paper presents image compression with a wavelet-based method. The wavelet transform divides an image into low-pass and high-pass filtered parts. The traditional JPEG compression technique requires less computational power with acceptable losses when only compression is needed. However, there is a clear need for wavelet-based methods in certain circumstances. These methods are intended for applications in which image analysis is done in parallel with compression. Furthermore, the high-frequency bands can be used to detect changes or edges. Wavelets enable hierarchical analysis of the low-pass filtered sub-images: a first analysis can be performed on a small image, and only if anything interesting is found is the whole image processed or reconstructed.
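
A minimal sketch of the hierarchical decomposition described above is shown below using PyWavelets: the low-pass (LL) sub-image can be screened at small size first, while the detail bands highlight edges and changes. The wavelet family and decomposition depth are illustrative assumptions.

```python
# Minimal sketch: hierarchical 2-D wavelet decomposition. The LL sub-image can
# be analysed at small size first; the detail bands (LH, HL, HH) highlight
# edges/changes. Wavelet family and depth are illustrative choices.
import numpy as np
import pywt

def hierarchical_decompose(image, wavelet="haar", levels=3):
    """Return the LL sub-image at each level plus the finest detail bands."""
    ll = np.asarray(image, dtype=float)
    lowpass_pyramid, finest_details = [], None
    for level in range(levels):
        ll, (lh, hl, hh) = pywt.dwt2(ll, wavelet)
        lowpass_pyramid.append(ll)
        if level == 0:
            finest_details = (lh, hl, hh)   # useful for edge/change detection
    return lowpass_pyramid, finest_details
```

A coarse screening pass can then run on the smallest low-pass image, and only flagged images need to be reconstructed from the full set of coefficients.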

Keywords: Image compression, JPEG, wavelet, VLC.

2124 Transform to Succeed: An Empirical Analysis of Digital Transformation in Firms

Authors: Sarah E. Stief, Anne Theresa Eidhoff, Markus Voeth

Abstract:

Despite all progress, firms are facing an increasing need to adapt and assimilate digital technologies in order to transform their business activities and pursue business development. By using new digital technologies, firms can implement major business improvements in order to stay competitive and foster new growth potential. The corresponding phenomenon of digital transformation has received some attention in previous literature with respect to industries such as media and publishing. Nevertheless, there is a lack of understanding of the concept and its organization within firms. With the help of twenty-three in-depth field interviews with German experts responsible for their company’s digital transformation, we examined what digital transformation encompasses, how it is organized, and which opportunities and challenges arise within firms. Our results indicate that digital transformation is an inevitable task for all firms, as it bears the potential to comprehensively optimize and reshape established business activities and can thus be seen as a strategy of business development.

Keywords: Business development, digitalization, digital strategies, digital transformation.

2123 Image Magnification Using Adaptive Interpolation by Pixel Level Data-Dependent Geometrical Shapes

Authors: Muhammad Sajjad, Naveed Khattak, Noman Jafri

Abstract:

The world has entered the 21st century. The technology of computer graphics and digital cameras is prevalent, and high-resolution displays and printers are available; high-resolution images are therefore needed to produce high-quality displayed images and high-quality prints. However, since high-resolution images are not usually provided, there is a need to magnify the original images. One common difficulty in previous magnification techniques is preserving details, i.e. edges, while at the same time smoothing the data so as not to introduce spurious artefacts; a definitive solution is still an open issue. In this paper, image magnification using adaptive interpolation by pixel-level data-dependent geometrical shapes is proposed, which takes into account information about the edges (sharp luminance variations) and the smoothness of the image. It calculates a threshold, classifies the interpolation region in the form of geometrical shapes, and then assigns suitable values to the undefined pixels inside the interpolation region while preserving the sharp luminance variations and the smoothness at the same time. The results of the proposed technique have been compared qualitatively and quantitatively with five other techniques. The qualitative results show that the proposed method clearly outperforms nearest neighbour (NN), bilinear (BL) and bicubic (BC) interpolation, while the quantitative results are competitive and consistent with NN, BL, BC and the others.

Keywords: Adaptive, digital image processing, image magnification, interpolation, geometrical shapes, qualitative & quantitative analysis.

2122 Integral Image-Based Differential Filters

Authors: Kohei Inoue, Kenji Hara, Kiichi Urahama

Abstract:

We describe a relationship between integral images and differential images. First, we derive a simple difference filter from the conventional integral image. In the derivation, we show that an integral image and the corresponding differential image are related to each other by simultaneous linear equations, where the numbers of unknowns and equations are the same; therefore, we can perform the integration and differentiation by solving the simultaneous equations. We applied the relationship to an image fusion problem and experimentally verified the effectiveness of the proposed method.
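
The sketch below shows the standard integral-image construction and a box sum evaluated from four corner look-ups, which is the building block behind integral-image-based difference filters; it is a generic illustration, not the authors' specific filter derivation.

```python
# Minimal sketch: build an integral image and evaluate a box sum with four
# corner look-ups; a crude difference filter is then two adjacent box sums.
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns, padded with a zero border."""
    ii = np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))        # so box_sum works at the edges

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from the padded integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def horizontal_difference(ii, r0, c0, r1, c1):
    """Difference of the right and left halves of a box (a crude x-derivative)."""
    cm = (c0 + c1) // 2
    return box_sum(ii, r0, cm, r1, c1) - box_sum(ii, r0, c0, r1, cm)
```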

Keywords: Integral images, differential images, differential filters, image fusion.

2121 Timescape-Based Panoramic View for Historic Landmarks

Authors: H. Ali, A. Whitehead

Abstract:

Providing a panoramic view of famous landmarks around the world offers artistic and historic value for historians, tourists, and researchers. Exploring the history of famous landmarks by presenting a comprehensive view of a temporal panorama merged with geographical and historical information presents a unique challenge: dealing with images that span a long period, from the 1800s up to the present. This work presents the concept of a temporal panorama through a timeline display of aligned historic and modern images for many famous landmarks. Building such a panorama requires a collection of hundreds of thousands of landmark images from the Internet, comprising both historic images and modern images of the digital age. These images have to be classified for subset selection, keeping the images that best document a landmark’s history chronologically. Processing historic images, captured with older analog technology under widely varying conditions, is a major challenge when they have to be used alongside modern digital images. Successful processing of historic images to prepare them for the subsequent steps of temporal panorama creation is an active contribution to cultural heritage preservation, fulfilling one of UNESCO's goals of preserving and displaying famous worldwide landmarks.
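
As an illustration of the image-registration step mentioned above, the sketch below aligns a historic photo to a modern one using ORB features and a RANSAC-estimated homography with OpenCV. This is only one common registration approach offered for illustration; it is not the pipeline used in the paper.

```python
# Minimal sketch: align a historic photo to a modern one with ORB features and
# a RANSAC-estimated homography. Illustrates the registration step only.
import cv2
import numpy as np

def register(historic_gray, modern_gray, max_features=2000):
    orb = cv2.ORB_create(max_features)
    kp1, des1 = orb.detectAndCompute(historic_gray, None)
    kp2, des2 = orb.detectAndCompute(modern_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = modern_gray.shape
    return cv2.warpPerspective(historic_gray, H, (w, h))  # warped onto the modern frame
```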

Keywords: Cultural heritage, image registration, image subset selection, registered image similarity, temporal panorama, timescapes.

2120 Evaluating Content Based Image Retrieval Techniques with the One Million Images CLIC Test Bed

Authors: Pierre-Alain Moëllic, Patrick Hède, Gregory Grefenstette, Christophe Millet

Abstract:

Pattern recognition and image recognition methods are commonly developed and tested using testbeds, which contain known responses to a query set. Until now, testbeds available for image analysis and content-based image retrieval (CBIR) have been scarce and small-scale. Here we present the one million images CEA-List Image Collection (CLIC) testbed that we have produced, and report on our use of this testbed to evaluate image analysis merging techniques. This testbed will soon be made publicly available through the EU MUSCLE Network of Excellence.

Keywords: CBIR, CLIC, evaluation, image indexing and retrieval, testbed.

2119 Applying Systematic Literature Review and Delphi Methods to Explore Digital Transformation Key Success Factors

Authors: Ayman El-Garem, Riham Adel

Abstract:

Digital transformation is about identifying the necessary changes to the entire business model and rethinking how to transform traditional operations into digital ones that create better value for customers. Such efforts are common across industries, but they often fail owing to a lack of understanding of the factors that must be addressed in order to embark on digital transformation successfully. Further research is required to bridge the knowledge gap between academia and industry and to support companies starting their digital transformation journey. To date, there is no consensus on the key success factors of digital transformation. Therefore, the aim of this paper is to identify the success factors of the digital transformation journey through a systematic literature review of 134 peer-reviewed articles, providing better insight into the research progress in this field. The systematic literature review will be followed by a Delphi study to obtain expert consensus on the most significant factors affecting digital transformation implementation. The findings indicate that organizations undergoing digital transformation should focus mainly on (1) well-managed digital transformation activities; (2) a digital business strategy; (3) a supportive culture; (4) top management support; and (5) organizational change capabilities.

Keywords: Digital transformation, key success factors, literature review, Delphi study.

2118 Efficient CT Image Volume Rendering for Diagnosis

Authors: HaeNa Lee, Sun K. Yoo

Abstract:

Volume rendering is widely used in medical CT image visualization. Applying 3D image visualization to diagnostic applications can require accurate volume rendering with high resolution. Interpolation is important in medical image processing applications such as image compression or volume resampling. However, it can distort the original image data through edge blurring or blocking effects when image enhancement procedures are applied. In this paper, we propose an adaptive tension control method exploiting gradient information to achieve high-resolution medical image enhancement in volume visualization, where the restored images are as similar to the original images as possible. The experimental results show that the proposed adaptive tension control method can effectively improve image quality.

Keywords: Tension control, Interpolation, Ray-casting, Medical imaging analysis.

2117 Digital Marketing Maturity Models: Overview and Comparison

Authors: Elina Bakhtieva

Abstract:

The variety of available digital tools, strategies and activities can confuse and disorient even an experienced marketer. This applies in particular to B2B companies, which are usually less flexible in taking up digital technology than B2C companies. B2B companies lack a framework that corresponds to the specifics of B2B business and that helps them evaluate their capabilities and choose an appropriate path. A B2B digital marketing maturity model helps to fill this gap. However, modern marketing offers no widely accepted digital marketing maturity model, and thus some marketing institutions provide their own tools. The purpose of this paper is to build an optimized B2B digital marketing maturity model based on a SWOT (strengths, weaknesses, opportunities, and threats) analysis of existing models. The study provides an analytical review of the existing digital marketing maturity models with open access. The results of the research are twofold. First, the SWOT analysis outlines the main advantages and disadvantages of the existing models. Second, the strengths of the existing models help to identify the main characteristics and the structure of an optimized B2B digital marketing maturity model. The findings indicate that only one out of the three analyzed models could be used as a stand-alone tool. This study is among the first to examine the use of maturity models in digital marketing. It helps businesses to choose the most effective of the existing digital marketing maturity models, and it creates a base for future research on digital marketing maturity models. The study contributes to the emerging B2B digital marketing literature by providing a SWOT analysis of the existing digital marketing maturity models and by suggesting the structure and main characteristics of an optimized B2B digital marketing maturity model.

Keywords: B2B digital marketing strategy, digital marketing, digital marketing maturity model, SWOT analysis.

2116 Dispersed Error Control based on Error Filter Design for Improving Halftone Image Quality

Authors: Sang-Chul Kim, Sung-Il Chien

Abstract:

The error diffusion method generates worm artifacts and weakens the edges of the halftone image when a continuous grayscale image is reproduced as a binary image. First, to enhance the edges, we propose an edge-enhancing filter that considers the quantization error information and the gradient of the neighboring pixels. Furthermore, to remove the worm artifacts that often appear in a halftone image, we adaptively add random noise to the weights of the error filter.
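
To illustrate the second idea, the sketch below runs Floyd-Steinberg error diffusion with small random noise added to the filter weights, which is known to break up worm artifacts. The Floyd-Steinberg kernel and noise amplitude are illustrative stand-ins; the authors' edge-enhancing filter is not shown.

```python
# Minimal sketch: Floyd-Steinberg error diffusion with small random noise added
# to the error-filter weights to break up worm artifacts. Kernel and noise
# amplitude are illustrative; the paper's edge-enhancing filter is not modelled.
import numpy as np

def error_diffusion_noisy(gray, noise_amp=0.05, seed=0):
    rng = np.random.default_rng(seed)
    img = gray.astype(float) / 255.0
    h, w = img.shape
    out = np.zeros((h, w))
    offsets = [(0, 1), (1, -1), (1, 0), (1, 1)]          # Floyd-Steinberg taps
    base = np.array([7, 3, 5, 1]) / 16.0
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            weights = np.clip(base + rng.uniform(-noise_amp, noise_amp, 4), 0, None)
            weights /= weights.sum()                      # keep the error conserved
            for (dy, dx), wgt in zip(offsets, weights):
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    img[yy, xx] += err * wgt
    return (out * 255).astype(np.uint8)
```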

Keywords: Artifact suppression, Edge enhancement, Error diffusion method, Halftone image

2115 An Efficient Implementation of High Speed Vedic Multiplier Using Compressors for Image Processing Applications

Authors: Shobha Sharma, Amita Dev, Akanksha Kant

Abstract:

Digital signal processors, image signal processors and FIR filters have multipliers as an important part of their design. Vedic multipliers, based on Vedic mathematics, have turned out to be very fast multipliers. One image processing application is edge detection. This research presents a small-area, high-speed 8-bit Vedic multiplier system comprising compressor-based adders, which results in faster edge detection. The architecture is tested on a Xilinx Virtex-4 FPGA board, and simulations were carried out using the Xilinx synthesis tool. Comparisons show that this system is smaller in area and has higher speed (lower propagation delay). The compressor-based Vedic multiplier is 1.1 times faster than a typical Vedic multiplier and 2 times faster than a 'simple' multiplier.
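
The multiplier is built on the Urdhva Tiryakbhyam (vertical-and-crosswise) sutra; the sketch below illustrates that crosswise partial-product scheme in software for two 8-bit operands treated as bit lists. It shows the arithmetic only and does not model the compressor-based hardware adders.

```python
# Minimal sketch of the Urdhva Tiryakbhyam (vertical-and-crosswise) scheme for
# two 8-bit operands, treated here as bit lists with column-wise carries.
def urdhva_multiply(a, b, width=8):
    a_bits = [(a >> i) & 1 for i in range(width)]          # LSB first
    b_bits = [(b >> i) & 1 for i in range(width)]
    result, carry = 0, 0
    for k in range(2 * width - 1):
        # Crosswise sum of all partial products whose bit positions add to k.
        column = carry + sum(a_bits[i] * b_bits[k - i]
                             for i in range(width) if 0 <= k - i < width)
        result |= (column & 1) << k                        # keep one bit per column
        carry = column >> 1                                # propagate the rest upward
    return result | (carry << (2 * width - 1))

assert urdhva_multiply(0xAB, 0xCD) == 0xAB * 0xCD          # quick sanity check
```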

Keywords: Detection of edges, Vedic multiplier, image processing, Urdhva Tiryakbhyam sutra.

2114 Factors Underlying the Digital Divide for Disabled People: Focus on a Korean Case Study

Authors: Soungwan Kim

Abstract:

This study identifies factors underlying the digital divide that is faced by the disabled. The results of its analysis showed that the digital divide in PC use is affected by age, number of years of education, employment status, and household income of more than KRW 3 million. The digital divide in smart device use is affected by sex, age, number of years of education, time when disability struck, and household income of more than KRW 3 million. Based on these results, this study proposes methods for bridging the digital divide faced by the disabled.

Keywords: Digital divide, digital divide for the disabled, information accessibility for PCs and smart devices.

2113 An Improved Method to Watermark Images Sensitive to Blocking Artifacts

Authors: Afzel Noore

Abstract:

A new digital watermarking technique for images that are sensitive to blocking artifacts is presented. Experimental results show that the proposed MDCT based approach produces highly imperceptible watermarked images and is robust to attacks such as compression, noise, filtering and geometric transformations. The proposed MDCT watermarking technique is applied to fingerprints for ensuring security. The face image and demographic text data of an individual are used as multiple watermarks. An AFIS system was used to quantitatively evaluate the matching performance of the MDCT-based watermarked fingerprint. The high fingerprint matching scores show that the MDCT approach is resilient to blocking artifacts. The quality of the extracted face and extracted text images was computed using two human visual system metrics and the results show that the image quality was high.

Keywords: Digital watermarking, data hiding, modified discrete cosine transformation (MDCT).

2112 A Novel Approach to Image Compression of Colour Images by Plane Reduction Technique

Authors: K. Sowmyan, A. Siddarth, D. Menaka

Abstract:

Several methods have been proposed for color image compression, but the reconstructed images had a very low signal-to-noise ratio, which made them inefficient. This paper describes a lossy compression technique for color images which overcomes these drawbacks. The technique works in the spatial domain, where the pixel values of the RGB planes of the input color image are mapped onto two-dimensional planes. The proposed technique produced better results than JPEG2000 and 2DPCA, and a comparative study is reported based on image quality measures such as PSNR and MSE. Experiments on real images compare this methodology with previous ones and demonstrate its advantages.

Keywords: Color image compression, spatial domain, plane reduction, root mean square, image restoration.

2111 A Trends Analysis of Image Processing in Unmanned Aerial Vehicle

Authors: Jae-Neung Lee, Keun-Chang Kwak

Abstract:

This paper analyzes domestic and international trends in image processing for data from UAVs (unmanned aerial vehicles) and also introduces UAVs and quadcopters. Overseas examples of image processing using UAVs include counting the total number of vehicles, edge/target detection, detection and evasion algorithms, image processing using SIFT (scale-invariant feature transform) matching, and the application of median filtering and thresholding. In Korea, many studies are underway, including the visualization of new urban buildings.

Keywords: Image Processing, UAV, Quadcopter, Target detection.

2110 Fuzzy Hyperbolization Image Enhancement and Artificial Neural Network for Anomaly Detection

Authors: Sri Hartati, Agus Harjoko, Brad G. Nickerson

Abstract:

A prototype anomaly detection system was developed to automate the process of recognizing anomalies in roentgen images by utilizing fuzzy histogram hyperbolization image enhancement and a back-propagation artificial neural network. The system consists of image acquisition, a pre-processor, a feature extractor, a response selector and an output. Fuzzy histogram hyperbolization is chosen to improve the quality of the roentgen image; its steps consist of fuzzification, modification of the membership function values, and defuzzification. Image features are extracted after the quality of the image is improved, and the extracted features are input to the artificial neural network for anomaly detection. The number of nodes in the proposed ANN layers was kept small. Experimental results indicate that the fuzzy histogram hyperbolization method can be used to improve the quality of the image and that the system is capable of detecting anomalies in roentgen images.
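
A minimal sketch of the fuzzification, membership-modification and defuzzification steps is given below, using a commonly cited hyperbolization mapping. The linear membership function, the value of beta and the exact defuzzification formula are assumptions for illustration and are not necessarily the authors' settings.

```python
# Minimal sketch of fuzzy histogram hyperbolization: fuzzify gray levels to
# memberships, modify them with an exponent beta, then defuzzify with a
# hyperbolic mapping. Membership function and beta are illustrative assumptions.
import numpy as np

def fuzzy_histogram_hyperbolization(gray, beta=0.9, levels=256):
    g = gray.astype(float)
    gmin, gmax = g.min(), g.max()
    mu = (g - gmin) / max(gmax - gmin, 1e-9)             # fuzzification
    mu_mod = mu ** beta                                  # membership modification
    out = (levels - 1) / (np.exp(-1.0) - 1.0) * (np.exp(-mu_mod) - 1.0)  # defuzzification
    return np.clip(out, 0, levels - 1).astype(np.uint8)
```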

Keywords: Image processing, artificial neural network, anomaly detection.

2109 Biometric Steganography Using Variable Length Embedding

Authors: Souvik Bhattacharyya, Indradip Banerjee, Anumoy Chakraborty, Gautam Sanyal

Abstract:

Recent growth in digital multimedia technologies has greatly facilitated information transmission, reproduction and manipulation. Therefore, information security has become one of the most important topics at present. Biometric information security is one such security mechanism; it has advantages as well as disadvantages, and a biometric system is at risk from a range of attacks. These attacks are intended to bypass the security system or to suspend its normal functioning. Various hazards have been identified in the use of biometric systems, and the proper use of steganography greatly reduces the risk posed by attackers. Steganography is a popular information hiding technique whose goal is to hide information inside a cover medium, such as text, image, audio or video, so that the existence of the secret information cannot be detected. In this paper, a new security concept is established that makes the system more secure by combining steganography with biometric security: the biometric information is embedded in the skin-tone portion of an image with the help of the proposed steganographic technique.

Keywords: Biometrics, Skin tone detection, Series, Polynomial, Cover Image, Stego Image.

2108 Towards a Proof Acceptance by Overcoming Challenges in Collecting Digital Evidence

Authors: Lilian Noronha Nassif

Abstract:

Cybercrime investigation demands an appropriate evidence collection mechanism. If the investigator does not acquire digital proofs in a forensically sound manner, important information can be lost, and judges can discard case evidence because the acquisition was inadequate. Correct digital forensic seizure involves the preparation of professionals from the fields of law, police work, and computer science. This paper presents important challenges faced during evidence collection from the perspective of different locations. The crime scene can be virtual or real, and technical obstacles and privacy concerns must be considered. All the challenges pointed out here highlight the precautions to be taken in digital evidence collection, and the suggested procedures contribute to best practices in the digital forensics field.

Keywords: Digital evidence, digital forensic processes and procedures, mobile forensics, cloud forensics.

2107 Evaluation of Manual and Automatic Calibration Methods for Digital Tachographs

Authors: Sarp Erturk, Levent Eyigel, Cihat Celik, Muhammet Sahinoglu, Serdar Ay, Yasin Kaya, Hasan Kaya

Abstract:

This paper presents a quantitative analysis of the need for automatic calibration methods for digital tachographs. Digital tachographs are mandatory for vehicles used in passenger and goods transport, and they are an important aspect of road safety and inspection. Digital tachographs need to be calibrated in workshops in order to display and record speed and odometer values correctly. Calibration of digital tachographs can be performed either manually or automatically. It is shown in this paper that manual calibration of digital tachographs is prone to errors and that there can be differences between manual and automatic calibration parameters; automatic calibration methods are therefore imperative for digital tachograph calibration. The presented experimental results and error analysis clearly support these claims by evaluating and statistically comparing manual and automatic calibration methods.

Keywords: Digital tachograph, road safety, tachograph calibration, tachograph workshops.

2106 On the Reduction of Side Effects in Tomography

Authors: V. Masilamani, C. Vanniarajan, Kamala Krithivasan

Abstract:

As computed tomography (CT) normally requires hundreds of projections to reconstruct an image, patients are exposed to more X-ray energy, which may cause side effects such as cancer. Even when the variability of the particles in the object is very low, CT requires many projections for good-quality reconstruction. In this paper, the low variability of the particles in an object is exploited to obtain good-quality reconstruction. Although the reconstructed image and the original image have the same projections, in general they need not be the same. If a priori information about the image is known in addition to the projections, it is possible to obtain a good-quality reconstructed image. This paper shows, through experimental results, why conventional algorithms fail to reconstruct from a few projections, and gives an efficient polynomial-time algorithm to reconstruct a bi-level image from its row and column projections, a known sub-image of the unknown image, and smoothness constraints, by reducing the reconstruction problem to an integral max-flow problem. The paper also discusses the necessary and sufficient conditions for uniqueness, and the extension of 2D bi-level image reconstruction to 3D.
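
As a simpler illustration of reconstructing a binary image from row and column projections, the sketch below uses the classical Ryser-style greedy construction. This is not the paper's integral max-flow formulation and it ignores the known sub-image and smoothness constraints the paper handles.

```python
# Minimal sketch: reconstruct a binary matrix from row and column sums with the
# classical Ryser-style greedy rule (fill each row into the columns that still
# need the most ones). Not the paper's max-flow formulation.
import numpy as np

def reconstruct_binary(row_sums, col_sums):
    rows, cols = len(row_sums), len(col_sums)
    image = np.zeros((rows, cols), dtype=int)
    remaining = np.array(col_sums, dtype=int)
    # Process rows with larger sums first.
    for r in sorted(range(rows), key=lambda i: -row_sums[i]):
        # Put this row's ones into the columns with the largest remaining demand.
        chosen = np.argsort(-remaining, kind="stable")[:row_sums[r]]
        image[r, chosen] = 1
        remaining[chosen] -= 1
    if remaining.any():
        return None          # projections are not consistent with any binary image
    return image
```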

Keywords: Discrete Tomography, Image Reconstruction, Projection, Computed Tomography, Integral Max Flow Problem, Smooth Binary Image.

2105 Salient Points Reduction for Content-Based Image Retrieval

Authors: Yao-Hong Tsai

Abstract:

Salient points are frequently used to represent local properties of an image in content-based image retrieval. In this paper, we present a reduction algorithm that extracts the locally most salient points such that they not only give a satisfying representation of an image but also make the image retrieval process efficient. The algorithm recursively reduces the continuous point set according to the corresponding saliency values in a top-down approach. The resulting salient points are evaluated with an image retrieval system using the Hausdorff distance. The experiments show that our method is robust and that the extracted salient points provide better retrieval performance compared with other point detectors.
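
The sketch below shows how two images can be compared by the symmetric Hausdorff distance between their reduced salient-point sets, using SciPy's directed Hausdorff routine; the salient-point detection and reduction step itself is not shown, and the ranking helper is an illustrative assumption.

```python
# Minimal sketch: compare two images by the (symmetric) Hausdorff distance
# between their salient-point sets, given as (N, 2) arrays of (x, y) points.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    d_ab = directed_hausdorff(points_a, points_b)[0]
    d_ba = directed_hausdorff(points_b, points_a)[0]
    return max(d_ab, d_ba)

def rank_database(query_points, database):
    """Rank database images (name -> point set) by distance to the query."""
    scores = {name: hausdorff(query_points, pts) for name, pts in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])
```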

Keywords: Barnard detector, Content-based image retrieval, Points reduction, Salient point.

2104 A Data Hiding Model with High Security Features Combining Finite State Machines and PMM method

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

Recent years have witnessed the rapid development of the Internet and telecommunication techniques, and information security is becoming more and more important. Applications such as covert communication and copyright protection stimulate research on information hiding techniques. Traditionally, encryption is used to realize communication security; however, important information is no longer protected once it is decoded. Steganography is the art and science of communicating in a way that hides the existence of the communication: important information is first hidden in a host data item, such as a digital image, video or audio file, and then transmitted secretly to the receiver. In this paper, a data hiding model with high security features is proposed that combines cryptography using a finite-state sequential machine with image-based steganography for communicating information more securely between two locations. The authors incorporate the idea of a secret key for authentication at both ends in order to achieve a high level of security. Before the embedding operation, the secret information is encrypted with the help of the finite-state sequential machine and segmented into different parts. The cover image is also segmented into different objects through normalized cuts. Each part of the encoded secret information is embedded with the help of a novel image steganographic method (PMM) in a different cut of the cover image to form a stego object. Finally, the stego image is formed by combining the different stego objects and is transmitted to the receiver. At the receiving end, the corresponding inverse processes are run to recover the original secret message.

Keywords: Cover Image, Finite state sequential machine, Mealy machine, Pixel Mapping Method (PMM), Stego Image, NCUT.

2103 Blur and Ringing Artifact Measurement in Image Compression using Wavelet Transform

Authors: Madhuri Khambete, Madhuri Joshi

Abstract:

Quality evaluation of an image is an important task in image processing applications. In the case of image compression, the quality of the decompressed image is also the criterion for evaluating a given coding scheme. In the compression-decompression process, various artifacts such as blocking, blur, and ringing or edge artifacts are observed; however, quantifying these artifacts is a difficult task. We propose here a novel method to quantify blur and ringing artifacts in an image.

Keywords: Blur, Compression, Objective Quality assessment, Ringing artifact.

2102 Fragile Watermarking for Color Images Using Thresholding Technique

Authors: Kuo-Cheng Liu

Abstract:

In this paper, we propose a block-wise watermarking scheme for color image authentication to resist malicious tampering of digital media. A thresholding technique is incorporated into the scheme such that the tampered region of the color image can be recovered with high quality while the proofing result is obtained. The watermark for each block consists of its dual authentication data and the corresponding feature information, where the feature information for recovery is computed by the thresholding technique. In the proofing process, we propose a dual-option parity check method to verify the validity of image blocks. In the recovery process, the feature information of each block embedded in the color image is rebuilt for high-quality recovery. The simulation results show that the proposed watermarking scheme can effectively identify the tampered region with a high detection rate and can recover it with high quality.

Keywords: thresholding technique, tamper proofing, tamper recovery
