Search results for: Image complexity
2181 Digital Image Encryption Scheme using Chaotic Sequences with a Nonlinear Function
Abstract:
In this study, an encryption system based on chaotic sequences is described. The system is used for encrypting digital image data for secure image transmission. A secure image communication scheme based on Logistic map chaotic sequences with a nonlinear function is proposed. Encryption and decryption keys are obtained from a one-dimensional Logistic map that generates the secret key used as the input of the nonlinear function. The receiver can recover the information from the received signal and identical key sequences through an inverse system technique. Computer simulations indicate that the transmitted source image can be correctly and reliably recovered by the proposed scheme even over a noisy channel. The performance of the system is discussed by evaluating the quality of the recovered image with and without channel noise.
Keywords: Digital image, image encryption, secure communication.
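The abstract does not give implementation details, but the core idea of a Logistic-map keystream can be sketched as follows; the parameters x0 and r and the byte quantization below are illustrative assumptions, not the paper's nonlinear function (assuming NumPy):

```python
import numpy as np

def logistic_keystream(length, x0=0.3456, r=3.99):
    """Generate a byte keystream from the 1-D logistic map x <- r*x*(1-x)."""
    x = x0
    ks = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 256) % 256   # quantize the chaotic value to a byte
    return ks

def encrypt(image, x0=0.3456, r=3.99):
    """XOR every pixel with the keystream; decryption is the same operation."""
    flat = image.reshape(-1)
    ks = logistic_keystream(flat.size, x0, r)
    return (flat ^ ks).reshape(image.shape)

# Round trip on a random 8-bit test image
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cipher = encrypt(img)
assert np.array_equal(encrypt(cipher), img)
```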
2180 A Survey on Principal Aspects of Secure Image Transmission
Authors: Ali Soleymani, Zulkarnain Md Ali, Md Jan Nordin
Abstract:
This paper reviews the aspects and approaches of designing an image cryptosystem. A general introduction to cryptography and image encryption is given first, followed by a survey of the different techniques in image encryption and the related work for each technique. Finally, general security analysis methods for encrypted images are discussed.
Keywords: Image, cryptography, encryption, security, analysis.
2179 Image Restoration in Non-Linear Filtering Domain using MDB approach
Authors: S. K. Satpathy, S. Panda, K. K. Nagwanshi, C. Ardil
Abstract:
This paper proposes a new technique based on a nonlinear Minmax Detector Based (MDB) filter for image restoration. The aim of image enhancement is to reconstruct the true image from the corrupted image. The process of image acquisition frequently leads to degradation, and the quality of the digitized image becomes inferior to the original. Image degradation can be due to the addition of different types of noise to the original image. Image noise can be modeled in many ways, and impulse noise is one of them. Impulse noise generates pixels whose gray values are not consistent with their local neighborhood. It appears as a sprinkle of both light and dark spots, or of light spots only, in the image. Filtering is a technique for enhancing the image. In linear filtering, the value of an output pixel is a linear combination of neighborhood values, which can blur the image. Thus, a variety of nonlinear smoothing techniques have been developed. The median filter is one of the most popular nonlinear filters. It is highly efficient for small neighborhoods, but for large windows and in cases of high noise it introduces more blurring into the image. The Centre Weighted Mean (CWM) filter has a better average performance than the median filter. However, original pixels are still corrupted when noise reduction is substantial under high-noise conditions, so this technique also has a blurring effect on the image. To illustrate the superiority of the proposed approach, the new scheme has been simulated along with the standard ones and various restoration performance measures have been compared.
Keywords: Filtering, Minmax Detector Based (MDB), noise, centre weighted mean filter, PSNR, restoration.
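As a point of reference for the nonlinear baselines discussed above, a center-weighted median-style filter can be sketched as follows; the MDB filter itself is not specified in the abstract, and the window size, weight, and median (rather than mean) interpretation of the center-weighted filter are assumptions (assuming NumPy):

```python
import numpy as np

def cwm_filter(image, window=3, center_weight=3):
    """Center-weighted median: the central pixel is repeated `center_weight`
    times inside the sorting window, which preserves detail better than a
    plain median (center_weight=1) while still suppressing impulse noise."""
    pad = window // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            block = padded[i:i + window, j:j + window].ravel().tolist()
            block += [image[i, j]] * (center_weight - 1)  # extra copies of the centre
            out[i, j] = np.median(block)
    return out
```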
2178 Efficient and Effective Gabor Feature Representation for Face Detection
Authors: Yasuomi D. Sato, Yasutaka Kuriya
Abstract:
We here propose an improved version of elastic graph matching (EGM) as a face detector, called multi-scale EGM (MS-EGM). In this improvement, a Gabor wavelet-based pyramid reduces the computational complexity of the feature representation used in conventional EGM while preserving a critical amount of information about an image. The MS-EGM gives higher detection performance than the Viola-Jones object detection algorithm with its AdaBoost cascade of Haar-like features. We also show that the detection speed of the MS-EGM is comparable to that of the Viola-Jones method. We find fruitful benefits in the MS-EGM in terms of topological feature representation of a face.
Keywords: Face detection, Gabor wavelet based pyramid, elastic graph matching, topological preservation, redundancy of computational complexity.
2177 Digital Image Forensics: Discovering the History of Digital Images
Authors: Gurinder Singh, Kulbir Singh
Abstract:
Digital multimedia content such as images, video, and audio can be tampered with easily due to the availability of powerful editing software. Multimedia forensics is devoted to analyzing such content using various digital forensic techniques in order to validate its authenticity. Digital image forensics investigates the reliability of digital images by analyzing the integrity of the data and by reconstructing the history of an image related to its acquisition phase. In this paper, a survey of forgery detection is carried out, considering the most recent and promising digital image forensic techniques.
Keywords: Computer forensics, multimedia forensics, image ballistics, camera source identification, forgery detection.
2176 Gray Level Image Encryption
Authors: Roza Afarin, Saeed Mozaffari
Abstract:
The aim of this paper is image encryption using a Genetic Algorithm (GA). The proposed encryption method consists of two phases. In the modification phase, pixel locations are altered to reduce the correlation among adjacent pixels. Pixel values are then changed in the diffusion phase to encrypt the input image. Both phases are performed by a GA with binary chromosomes. For the modification phase, these binary patterns are generated by the Local Binary Pattern (LBP) operator, while for the diffusion phase the binary chromosomes are obtained by Bit Plane Slicing (BPS). The initial population of the GA consists of the rows and columns of the input image. Instead of a subjective selection of parents from this initial population, a random generator with a predefined key is utilized; this key is also needed to decrypt the coded image and reconstruct the initial input image. The fitness function is defined as the average number of 0-to-1 transitions in the LBP image for the modification phase and as histogram uniformity for the diffusion phase. Randomness of the encrypted image is measured by entropy, correlation coefficients, and histogram analysis. Experimental results show that the proposed method is fast enough and can be used effectively for image encryption.
Keywords: Correlation coefficients, Genetic algorithm, Image encryption, Image entropy.
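A minimal illustration of the Bit Plane Slicing step mentioned above, assuming NumPy; the GA, LBP operator, and key handling are omitted:

```python
import numpy as np

def bit_planes(image):
    """Return the 8 bit planes of an 8-bit image, LSB first.
    Each plane is a binary array that can serve as a chromosome in the
    diffusion phase described above."""
    return [(image >> b) & 1 for b in range(8)]

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
planes = bit_planes(img)
# Reassembling the planes recovers the original image
recon = sum(p.astype(np.uint8) << b for b, p in enumerate(planes))
assert np.array_equal(recon, img)
```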
2175 FPGA Hardware Implementation and Evaluation of a Micro-Network Architecture for Multi-Core Systems
Authors: Yahia Salah, Med Lassaad Kaddachi, Rached Tourki
Abstract:
This paper presents the design, implementation, and evaluation of a micro-network, or Network-on-Chip (NoC), based on a generic pipelined router architecture. The router is designed to efficiently support the traffic generated by multimedia applications on embedded multi-core systems. It employs a simple routing mechanism and implements a round-robin scheduling strategy to resolve output port contention and minimize latency. Virtual channel flow control is applied to avoid the head-of-line blocking problem and enhance NoC performance. The hardware design of the router architecture has been implemented at the register transfer level; its functionality is evaluated for two-dimensional Mesh/Torus topologies, and performance results are derived from the ModelSim simulator and the Xilinx ISE 9.2i synthesis tool. An example of a multi-core image processing system utilizing the NoC structure has been implemented and validated to demonstrate the capability of the proposed micro-network architecture. To reduce the complexity of the image compression and decompression architecture, the system uses an image processing algorithm based on the classical discrete cosine transform with an efficient zonal processing approach. The experimental results confirm that both the proposed image compression scheme and the NoC architecture achieve reasonable image quality with lower processing time.
Keywords: Generic Pipeline Network-on-Chip Router Architecture, JPEG Image Compression, FPGA Hardware Implementation, Performance Evaluation.
2174 Improving Image Quality in Remote Sensing Satellites using Channel Coding
Authors: H. M. Behairy, M. S. Khorsheed
Abstract:
Satellite communication channels are characterized, among other factors, by a high bit error rate. We present a system for still image transmission over noisy satellite channels. The system couples image compression with error control codes to improve the received image quality while maintaining its bandwidth requirements. The proposed system is tested using high resolution satellite imagery simulated over a Rician fading channel. Evaluation results show an improvement in the overall system, including image quality and bandwidth requirements, compared to similar systems with different coding schemes.
Keywords: Image transmission, image compression, channel coding, error-control coding, DCT, convolutional codes, Viterbi algorithm, PCGC.
2173 Improved Processing Speed for Text Watermarking Algorithm in Color Images
Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari
Abstract:
Copyright protection and ownership proof for digital multimedia are nowadays achieved by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and judging the ownership of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing methods and is therefore adopted for the embedding process. An optional choice of encrypting the text watermark before embedding is also suggested (in case it is required by some applications), where the text can be encrypted using any enciphering technique, adding more difficulty for attackers. Experiments resulted in an embedding speed of more than double that of other considered systems (such as the least significant bit method and separate color code methods), and a fairly acceptable level of peak signal to noise ratio (PSNR) with low mean square error values for watermarking purposes.
Keywords: Steganography, watermarking, private keys, time complexity measurements.
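The YIQ model referred to above is the standard NTSC luma/chrominance transform; a minimal conversion sketch follows (assuming NumPy, with illustrative helper names):

```python
import numpy as np

# Standard NTSC RGB -> YIQ conversion matrix; a text watermark would be
# inserted into the planes obtained here.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    """rgb: H x W x 3 float array in [0, 1]; returns the YIQ planes."""
    return rgb @ RGB2YIQ.T

def yiq_to_rgb(yiq):
    return yiq @ np.linalg.inv(RGB2YIQ).T
```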
2172 Better Perception of Low Resolution Images Using Wavelet Interpolation Techniques
Authors: Tarun Gulati, Kapil Gupta, Dushyant Gupta
Abstract:
High resolution images are always desired as they contain more information and can better represent the original data. Interpolation is therefore used to convert a low resolution image into a high resolution one. The quality of such a high resolution image depends on the interpolation function and is assessed in terms of the sharpness of the image. This paper focuses on wavelet based interpolation techniques, in which an input image is divided into subbands. Each subband is processed separately, and the processed subbands are finally combined to obtain the super-resolution image.
Keywords: SWT, DWTSR, DWTSWT, DWCWT.
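A rough sketch of the wavelet-interpolation idea described above, assuming PyWavelets and SciPy; treating the low-resolution image as the approximation subband and interpolating the detail subbands is one common variant, not necessarily the exact combination of techniques evaluated in the paper:

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

def dwt_super_resolve(lr, wavelet='db1'):
    """Rough DWT-based 2x upscaling: the detail subbands of the low-resolution
    image are interpolated and the LR image itself is used as the approximation
    subband of the high-resolution wavelet decomposition."""
    cA, (cH, cV, cD) = pywt.dwt2(lr.astype(float), wavelet)
    # Interpolate each detail subband to the size of the LR image
    scale = (lr.shape[0] / cH.shape[0], lr.shape[1] / cH.shape[1])
    cH2, cV2, cD2 = (zoom(c, scale, order=1) for c in (cH, cV, cD))
    return pywt.idwt2((lr.astype(float), (cH2, cV2, cD2)), wavelet)

hr = dwt_super_resolve(np.random.rand(64, 64))
print(hr.shape)   # roughly twice the input size
```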
2171 Block-Based 2D to 3D Image Conversion Method
Authors: S. Sowmyayani, V. Murugan
Abstract:
With the advent of three-dimensional (3D) technology, there is considerable research on converting 2D images to 3D images. The main difference between 2D and 3D is the visual illusion of depth in 3D images. Numerous depth estimation techniques have been proposed in recent years. The objective of this paper is to convert 2D images to 3D images with less computation time. For this, the input image is divided into blocks from which the depth information is obtained. From this depth information, a depth map is generated. Then the 3D image is warped using the original image and the depth map. The proposed method is tested on the Make3D and NYU-V2 datasets. The experimental results are compared with other recent methods. The proposed method proves to work with less computation time and good accuracy.
Keywords: Depth map, 3D image warping, image rendering, bilateral filter, minimum spanning tree.
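The depth-map-based warping step can be illustrated with a naive horizontal pixel shift (assuming NumPy; the block-based depth estimation, bilateral filtering, and hole filling of the actual method are omitted):

```python
import numpy as np

def warp_right_view(image, depth, max_disparity=16):
    """Generate a second (right-eye) view by shifting each pixel horizontally
    in proportion to its depth; holes keep the original intensity.
    depth is assumed normalized to [0, 1], with 1 = nearest."""
    h, w = image.shape[:2]
    right = image.copy()
    disparity = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                right[y, nx] = image[y, x]
    return right
```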
2170 High Secure Data Hiding Using Cropping Image and Least Significant Bit Steganography
Authors: Khalid A. Al-Afandy, El-Sayyed El-Rabaie, Osama Salah, Ahmed El-Mhalaway
Abstract:
This paper presents a highly secure data hiding technique using image cropping and Least Significant Bit (LSB) steganography. Crops at certain predefined secret coordinates are extracted from the cover image. The secret text message is divided into sections whose number equals the number of image crops. Each section of the secret text message is embedded into an image crop in a secret sequence using the LSB technique. The embedding is done using the cover image color channels. The stego image is obtained by reassembling the image and the stego crops. The results of the technique are compared to other state-of-the-art techniques. Evaluation is based on visual inspection to detect any degradation of the stego image, the difficulty of extracting the embedded data by any unauthorized viewer, the Peak Signal-to-Noise Ratio (PSNR) of the stego image, and the CPU time of the embedding algorithm. Experimental results show that the proposed technique is more secure compared with the traditional techniques.
Keywords: Steganography, stego, LSB, crop.
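A minimal sketch of LSB embedding into a single crop and channel, assuming NumPy; the secret crop coordinates, section splitting, and embedding sequence of the actual scheme are not reproduced here:

```python
import numpy as np

def embed_in_crop(crop, text, channel=0):
    """Embed the bits of `text` into the LSBs of one colour channel of a crop.
    The crop coordinates, embedding order and channel are the shared secret."""
    bits = np.unpackbits(np.frombuffer(text.encode(), dtype=np.uint8))
    ch = crop[..., channel]                       # view into the crop
    if bits.size > ch.size:
        raise ValueError("crop too small for this message section")
    idx = np.unravel_index(np.arange(bits.size), ch.shape)
    ch[idx] = (ch[idx] & 0xFE) | bits             # clear the LSB, then set it
    return crop

def extract_from_crop(crop, n_chars, channel=0):
    ch = crop[..., channel]
    idx = np.unravel_index(np.arange(n_chars * 8), ch.shape)
    return np.packbits(ch[idx] & 1).tobytes().decode()
```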
2169 A Dual Digital-Image Watermarking Technique
Authors: Maha Sharkas, Dahlia ElShafie, Nadder Hamdy
Abstract:
Image watermarking has become an important tool for intellectual property protection and authentication. In this paper, a watermarking technique is suggested that incorporates two watermarks in a host image for improved protection and robustness. A watermark in the form of a PN sequence (called the secondary watermark) is embedded in the wavelet domain of a primary watermark before being embedded in the host image. The technique has been tested using the Lena image as host and the Cameraman image as the primary watermark. The embedded PN sequence was detectable through correlation among five other sequences, and a PSNR of 44.1065 dB was measured. Furthermore, to test the robustness of the technique, the watermarked image was exposed to four types of attacks, namely compression, low pass filtering, salt and pepper noise, and luminance change. In all cases the secondary watermark was easy to detect even when the primary one was severely distorted.
Keywords: DWT, image watermarking, watermarking techniques, wavelets.
2168 Evaluation of Wavelet Filters for Image Compression
Authors: G. Sadashivappa, K. V. S. AnandaBabu
Abstract:
The aim of this paper is to characterize a larger set of wavelet functions for implementation in a still image compression system using the SPIHT algorithm. The paper discusses important features of the wavelet functions and filters used in subband coding to convert an image into wavelet coefficients in MATLAB. Image quality is measured objectively using the peak signal to noise ratio (PSNR) and its variation with bit rate (bpp). The effect of different parameters is studied for the different wavelet functions. Our results provide a good reference for designers of wavelet based coders.
Keywords: Wavelet, image compression, subband, SPIHT, PSNR.
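The objective quality measure used throughout, PSNR, is computed as shown below (a generic NumPy sketch, not code from the paper):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, the objective quality measure used
    to compare wavelet filters at a given bit rate."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)
```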
2167 Feature Level Fusion of Multimodal Images Using Haar Lifting Wavelet Transform
Authors: Sudipta Majumdar, Jayant Bharadwaj
Abstract:
This paper presents feature level image fusion using the Haar lifting wavelet transform. The fused features are edge and boundary information, obtained using the wavelet transform modulus maxima criterion. Simulation results show the superiority of the result, as the entropy, gradient, and standard deviation are increased for the fused image compared to the input images. The proposed method has the advantages of simplicity of implementation, a fast algorithm, perfect reconstruction, and reduced computational complexity (the computational cost of the Haar wavelet is very small compared to other lifting wavelets).
Keywords: Lifting wavelet transform, wavelet transform modulus maxima.
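One lifting step of the Haar wavelet, which underlies the transform used above, can be sketched with integer predict/update operations (assuming NumPy; the modulus-maxima fusion rule itself is not shown):

```python
import numpy as np

def haar_lifting_1d(signal):
    """One level of the Haar wavelet via lifting: split into even/odd samples,
    predict the odd samples from the even ones, then update the even samples.
    Integer arithmetic only, so the transform is cheap and exactly invertible."""
    even, odd = signal[::2].copy(), signal[1::2].copy()
    detail = odd - even            # predict step
    approx = even + detail // 2    # update step
    return approx, detail

def inverse_haar_lifting_1d(approx, detail):
    even = approx - detail // 2
    odd = detail + even
    out = np.empty(even.size + odd.size, dtype=even.dtype)
    out[::2], out[1::2] = even, odd
    return out

x = np.arange(8)
a, d = haar_lifting_1d(x)
assert np.array_equal(inverse_haar_lifting_1d(a, d), x)
```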
2166 Data Embedding Based on Better Use of Bits in Image Pixels
Authors: Rehab H. Alwan, Fadhil J. Kadhim, Ahmad T. Al-Taani
Abstract:
In this study, a novel approach to image embedding is introduced. The proposed method consists of three main steps. First, the edges of the image are detected using Sobel mask filters. Second, the least significant bit (LSB) of each pixel is used. Finally, gray level connectivity is applied using a fuzzy approach, and the ASCII code is used for information hiding. The bit adjacent to the LSB represents the edge image after gray level connectivity, and the remaining six bits represent the original image with very little difference in contrast. The proposed method embeds three images in one image and includes, as a special case of data embedding, hiding, identifying, and authenticating text embedded within the digital images. The image embedding method is also considered a good compression method in terms of conserving memory space. Moreover, information hiding within a digital image can be used for secure information transfer. The creation and extraction of the three embedded images and the hiding of text information are discussed and illustrated in the following sections.
Keywords: Image embedding, Edge detection, gray level connectivity, information hiding, digital image compression.
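A rough sketch of the Sobel edge step and the bit layout described above (assuming NumPy/SciPy; the fuzzy gray-level connectivity stage is omitted and the threshold is an arbitrary choice):

```python
import numpy as np
from scipy import ndimage

def edge_bit_plane(image, threshold=100):
    """Binary edge map from Sobel gradients; this is the plane that the
    scheme above stores next to the LSB of each pixel."""
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    return (np.hypot(gx, gy) > threshold).astype(np.uint8)

def embed_edge_plane(image, edge_plane):
    """Keep the six most significant bits of the host, place the edge bit in
    bit 1, and reserve bit 0 (the LSB) for additional hidden data."""
    return (image & 0xFC) | (edge_plane << 1)
```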
2165 Improvement of Blood Detection Accuracy using Image Processing Techniques suitable for Capsule Endoscopy
Authors: Yong-Gyu Lee, Gilwon Yoon
Abstract:
Bleeding in the digestive tract is an important diagnostic parameter for patients. Blood in an endoscopic image can be determined by investigating its color tone, which varies with the degree of oxygenation, under- or over-illumination, food debris, secretions, etc. However, we found that how the raw images obtained from the capsule detectors are pre-processed is very important. We applied various image processing methods suitable for capsule endoscopic images in order to remove noise and unbalanced sensitivities of the image pixels. The results showed that the additional pre-processing techniques brought much improvement to the algorithm for determining bleeding areas.
Keywords: blood detection, capsule endoscopy, image processing.
2164 Blind Image Deconvolution by Neural Recursive Function Approximation
Authors: Jiann-Ming Wu, Hsiao-Chang Chen, Chun-Chang Wu, Pei-Hsun Hsu
Abstract:
This work explores blind image deconvolution by recursive function approximation based on supervised learning of neural networks, under the assumption that a degraded image is the linear convolution of an original source image with a linear shift-invariant (LSI) blurring matrix. Supervised learning of radial basis function (RBF) neural networks is employed to construct a recursive function embedded within a blurred image, to extract the non-deterministic component of the original source image, and to use it to estimate the hyperparameters of a linear image degradation model. Based on the estimated blurring matrix, reconstruction of the original source image from the blurred image is then resolved by an annealed Hopfield neural network. Numerical simulations show that the proposed novel method is effective for faithful estimation of an unknown blurring matrix and restoration of the original source image.
Keywords: Blind image deconvolution, linear shift-invariant (LSI), linear image degradation model, radial basis functions (RBF), recursive function, annealed Hopfield neural networks.
2163 Lifting Wavelet Transform and Singular Values Decomposition for Secure Image Watermarking
Authors: Siraa Ben Ftima, Mourad Talbi, Tahar Ezzedine
Abstract:
In this paper, we present a technique for secure watermarking of grayscale and color images. The technique consists of applying the Singular Value Decomposition (SVD) in the Lifting Wavelet Transform (LWT) domain in order to insert a grayscale watermark image into the host image (grayscale or color). It also uses a signature in the embedding and extraction steps. The technique is applied to a number of grayscale and color images. Its performance is demonstrated by Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), and Structural Similarity (SSIM) computations.
Keywords: Color image, grayscale image, singular value decomposition, lifting wavelet transform, image watermarking, watermark, secure.
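The SVD-in-transform-domain embedding can be illustrated with the classical singular-value addition scheme below, applied to one transform subband of the same size as the watermark; this is a hedged NumPy sketch, and the paper's signature mechanism may differ:

```python
import numpy as np

def svd_embed(host_band, watermark, alpha=0.05):
    """Additively embed the watermark's singular values into those of a host
    subband (e.g. the LWT approximation band); host and watermark are assumed
    to have the same shape."""
    U, S, Vt = np.linalg.svd(host_band, full_matrices=False)
    Uw, Sw, Vwt = np.linalg.svd(watermark, full_matrices=False)
    watermarked = U @ np.diag(S + alpha * Sw) @ Vt
    # Uw, Vwt and the original S act as the "signature" needed at extraction
    return watermarked, (Uw, Vwt, S)

def svd_extract(watermarked_band, keys, alpha=0.05):
    Uw, Vwt, S = keys
    Sw_est = (np.linalg.svd(watermarked_band, compute_uv=False) - S) / alpha
    return Uw @ np.diag(Sw_est) @ Vwt
```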
2162 Secure E-Pay System Using Steganography and Visual Cryptography
Authors: K. Suganya Devi, P. Srinivasan, M. P. Vaishnave, G. Arutperumjothi
Abstract:
Today’s internet world is highly prone to various online attacks, of which the most harmful is phishing. Attackers host fake websites that are very similar to, and look like, legitimate ones. We propose an image based authentication using steganography and visual cryptography to prevent phishing. This paper presents a secure steganographic technique for true color (RGB) images and uses the Discrete Cosine Transform to compress the images. The proposed method hides the secret data inside the cover image. Visual cryptography is used to preserve the privacy of an image by decomposing the original image into two shares. The original image can be identified only when both qualified shares are simultaneously available; an individual share does not reveal the identity of the original image. Thus, the existence of the secret message is hard to detect by RS steganalysis.
Keywords: Image security, random LSB, steganography, visual cryptography.
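The two-share decomposition mentioned above can be illustrated with basic (2, 2) visual cryptography for a binary secret (assuming NumPy; pixel expansion into 2x2 blocks is one standard construction, not necessarily the one used in the paper):

```python
import numpy as np

# 2-out-of-2 visual cryptography: every secret pixel is expanded into a
# 2x2 block in each share; stacking (OR-ing) the shares reveals the secret.
PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]

def make_shares(secret, rng=np.random.default_rng()):
    """secret: binary array (1 = black). Returns two noise-like shares."""
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    s2 = np.zeros_like(s1)
    for y in range(h):
        for x in range(w):
            p = PATTERNS[rng.integers(2)]
            s1[2*y:2*y+2, 2*x:2*x+2] = p
            # white pixel: identical blocks; black pixel: complementary blocks
            s2[2*y:2*y+2, 2*x:2*x+2] = p if secret[y, x] == 0 else 1 - p
    return s1, s2

def stack(s1, s2):
    return np.maximum(s1, s2)   # physical stacking = OR of the black pixels
```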
2161 Definition, Structure and Core Functions of the State Image
Authors: Rosa Nurtazina, Yerkebulan Zhumashov, Maral Tomanova
Abstract:
Humanity is entering an era when the "virtual reality" created by the media with the help of the Internet does not match reality in many respects, and when new communication technologies create a fundamentally different and previously unknown "global space". With these technologies, the basic modes of political communication between the state and society, and between states, begin to change. Nowadays, the image of the state is becoming a most important tool and technology.
An image is a purposefully created representation that endows a political object (a person, organization, country, etc.) with certain social and political values and promotes its more emotional perception.
The political image of the state plays an important role in international relations. The success of the country's foreign policy and the development of trade and economic relations with other countries depend on whether it is positive or negative. The foreign policy image also has an impact on political processes taking place within the state: a negative image of the country can be used by opposition forces as an argument to criticize the government and its policies.
Keywords: Image of the country, country's image classification, function of the country image, country's image components.
2160 Rough Neural Networks in Adapting Cellular Automata Rule for Reducing Image Noise
Authors: Yasser F. Hassan
Abstract:
The reduction or removal of noise in a color image is an essential part of image processing, whether the final information is used for human perception or for automatic inspection and analysis. This paper describes a modeling system based on the rough neural network model to adapt cellular automata for various image processing tasks and noise removal. We consider the problem of object processing in colored images, using rough neural networks to help derive the rules that the cellular automata will use on noisy images. The proposed method is compared with some classical and recent methods. The results demonstrate that the new model is capable of being trained to perform many different tasks, and that the quality of these results is comparable to or better than that of established specialized algorithms.
Keywords: Rough Sets, Rough Neural Networks, Cellular Automata, Image Processing.
2159 Detecting Circles in Image Using Statistical Image Analysis
Authors: Fathi M. O. Hamed, Salma F. Elkofhaifee
Abstract:
The aim of this work is to detect geometrically shaped objects in an image. In this paper, the object is considered to be a circle. The identification requires finding three characteristics: the number, size, and location of the objects. To achieve this goal, the paper presents an algorithm that combines several statistical approaches and image analysis techniques. The algorithm has been implemented to meet the major objectives of this paper. It has been evaluated using simulated data, where it yields good results, and has then been applied to real data.
Keywords: Image processing, median filter, projection, scale space, segmentation, threshold.
2158 FPGA Implement of a Vision Based Lane Departure Warning System
Authors: Yu Ren Lin, Yi Feng Su
Abstract:
Using a vision based solution in intelligent vehicle applications often requires large memory to handle the video stream and image processing, which increases the complexity of the hardware and software. In this paper, we present an FPGA implementation of a vision based lane departure warning system. For each video frame, the line gradient is estimated and the lane marks are found. By analyzing the positions of the lane marks, departure of the vehicle is detected in time. This idea has been implemented in a Xilinx Spartan6 FPGA. The lane departure warning system uses 39% of the logic resources and none of the memory of the device. The average availability is 92.5%. The frame rate is more than 30 frames per second (fps).
Keywords: Lane departure warning system, image, FPGA.
2157 A Proposed Hybrid Color Image Compression Based on Fractal Coding with Quadtree and Discrete Cosine Transform
Authors: Shimal Das, Dibyendu Ghoshal
Abstract:
Fractal based digital image compression is a specific technique in the field of color image compression. The method is best suited for images with irregular shapes, such as snow, clouds, flames of fire, and tree leaves, and relies on the fact that parts of an image often resemble other parts of the same image. This technique has drawn much attention in recent years because of the very high compression ratio that can be achieved. Hybrid schemes incorporating fractal compression and speedup techniques have achieved higher compression ratios than pure fractal compression. Fractal image compression is a lossy compression method in which the self-similar nature of an image is used. This technique provides a high compression ratio, less encoding time, and a fast decoding process. In this paper, fractal compression with quadtree partitioning and the DCT is proposed to compress color images. The proposed hybrid scheme requires four phases to compress a color image. First, the image is segmented and the Discrete Cosine Transform is applied to each block of the segmented image. Second, the block values are scanned in a zigzag manner to group the zero coefficients. Third, the resulting image is partitioned into fractals by the quadtree approach. Fourth, the image is compressed using the run length encoding technique.
Keywords: Fractal coding, Discrete Cosine Transform, Iterated Function System (IFS), Affine Transformation, Run length encoding.
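The zigzag scan used in the second phase can be sketched as follows (assuming NumPy; the block size and the rest of the pipeline are omitted):

```python
import numpy as np

def zigzag(block):
    """Scan an N x N block of DCT coefficients in zig-zag order so that the
    high-frequency (mostly zero) coefficients are grouped at the end, which
    makes the subsequent run-length encoding effective."""
    n = block.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[1] if (p[0] + p[1]) % 2 == 0 else p[0]))
    return np.array([block[i, j] for i, j in order])
```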
2156 Extended Constraint Mask Based One-Bit Transform for Low-Complexity Fast Motion Estimation
Authors: Oğuzhan Urhan
Abstract:
In this paper, an improved motion estimation (ME) approach based on the weighted constrained one-bit transform is proposed for block-based ME employed in video encoders. Binary ME approaches utilize a low bit-depth representation of the original image frames with a Boolean exclusive-OR based, hardware-efficient matching criterion to decrease the computational burden of the ME stage. The weighted constrained one-bit transform (WC-1BT) based approach improves the performance of conventional C-1BT based ME by employing a 2-bit depth constraint mask instead of a 1-bit depth mask. In this work, the range of the constraint mask is further extended to increase the ME performance of the WC-1BT approach. Experiments reveal that the proposed method provides better ME accuracy compared with similar existing ME methods in the literature.
Keywords: Fast motion estimation, low-complexity motion estimation, video coding.
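The binary matching idea behind 1BT-family motion estimation can be sketched as below (assuming NumPy/SciPy; a box filter stands in for the multi-band-pass kernel of the original 1BT, and the constraint mask of C-1BT/WC-1BT is not shown):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def one_bit_transform(frame):
    """Binarize a frame by comparing each pixel with a local mean
    (a multi-band-pass filter is used in the 1BT literature; a box
    filter is a common low-cost stand-in)."""
    return (frame > uniform_filter(frame.astype(float), size=17)).astype(np.uint8)

def matching_error(block, candidate):
    """Number of non-matching bits, computed with XOR, which is what makes
    binary motion estimation hardware friendly."""
    return int(np.count_nonzero(block ^ candidate))
```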
2155 Application of Digital Image Correlation Technique on Vacuum Assisted Resin Transfer Molding Process and Performance Evaluation of the Produced Materials
Authors: Dingding Chen, Kazuo Arakawa, Masakazu Uchino, Changheng Xu
Abstract:
Vacuum assisted resin transfer moulding (VARTM) is a promising manufacturing process for making large and complex fiber reinforced composite structures. However, the complexity of the resin flow in the infusion stage usually leads to a nonuniform property distribution in the produced composite part. In order to control the flow of the resin, the flow situation must be understood, and for the safe use of the produced composite in practice, an understanding of the property distribution is essential. In this paper, we report trials on monitoring the resin infusion stage and on evaluating the fiber volume fraction distribution of the VARTM-produced composite using digital image correlation methods. The results show that 3D-DIC is valid for monitoring the resin infusion stage and that it is possible to use 2D-DIC to estimate the distribution of the fiber volume fraction of an FRP plate.
Keywords: Digital image correlation, VARTM, FRP, fiber volume fraction.
2154 Selective Intra Prediction Mode Decision for H.264/AVC Encoders
Authors: Jun Sung Park, Hyo Jung Song
Abstract:
H.264/AVC offers a considerably higher improvement in coding efficiency compared to other compression standards such as MPEG-2, but its computational complexity is increased significantly. In this paper, we propose selective mode decision schemes for fast intra prediction mode selection. The objective is to reduce the computational complexity of the H.264/AVC encoder without significant rate-distortion performance degradation. In our proposed schemes, the intra prediction complexity is reduced by limiting the luma and chroma prediction modes using the directional information of the 16×16 prediction mode. Experimental results show that the proposed schemes reduce the complexity by up to 78% while maintaining similar PSNR quality with about a 1.46% bit rate increase on average.
Keywords: Video encoding, H.264, intra prediction.
2153 Use of Fuzzy Edge Image in Block Truncation Coding for Image Compression
Authors: Amarunnishad T.M., Govindan V.K., Abraham T. Mathew
Abstract:
An image compression method has been developed using a fuzzy edge image together with the basic Block Truncation Coding (BTC) algorithm. The fuzzy edge image has been validated against classical edge detectors on the basis of the results of the well-known Canny edge detector, prior to its application in the proposed method. The bit plane generated by the conventional BTC method is replaced with a fuzzy bit plane generated by a logical OR operation between the fuzzy edge image and the corresponding conventional BTC bit plane. The input image is encoded with the block mean, the standard deviation, and the fuzzy bit plane. The proposed method has been tested with 8 bits/pixel test images of size 512×512 and found to be superior, with better Peak Signal to Noise Ratio (PSNR), when compared to the conventional BTC and adaptive bit plane selection BTC (ABTC) methods. The raggedness, jagged appearance, and ringing artifacts at sharp edges are greatly reduced in images reconstructed by the proposed method with the fuzzy bit plane.
Keywords: Image compression, edge detection, ground truth image, peak signal to noise ratio.
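For reference, classical BTC block encoding and reconstruction look like the sketch below (assuming NumPy); the proposed method replaces the mean-thresholded bit plane with the fuzzy edge-based one:

```python
import numpy as np

def btc_encode_block(block):
    """Classical BTC: a block is represented by its mean, standard deviation
    and a bit plane obtained by thresholding against the mean."""
    mean, std = block.mean(), block.std()
    bit_plane = (block >= mean).astype(np.uint8)
    return mean, std, bit_plane

def btc_decode_block(mean, std, bit_plane):
    """Reconstruct two quantization levels that preserve the block mean and
    standard deviation (Delp-Mitchell BTC)."""
    q = int(bit_plane.sum())
    m = bit_plane.size
    if q in (0, m):
        return np.full(bit_plane.shape, mean)
    low = mean - std * np.sqrt(q / (m - q))
    high = mean + std * np.sqrt((m - q) / q)
    return np.where(bit_plane == 1, high, low)
```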
2152 A Normalization-based Robust Watermarking Scheme Using Zernike Moments
Authors: Say Wei Foo, Qi Dong
Abstract:
Digital watermarking has become an important technique for copyright protection, but its robustness against attacks remains a major problem. In this paper, we propose a normalization-based robust image watermarking scheme. In the proposed scheme, the original host image is first normalized to a standard form. The Zernike transform is then applied to the normalized image to calculate the Zernike moments. Dither modulation is adopted to quantize the magnitudes of the Zernike moments according to the watermark bit stream. The watermark extraction method is blind. Security analysis and false alarm analysis are then performed. The quality degradation of the watermarked image caused by the embedded watermark is visually transparent. Experimental results show that the proposed scheme has very high robustness against various image processing operations and geometric attacks.
Keywords: Image watermarking, Image normalization, Zernike moments, Robustness.
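The dither-modulation quantization of moment magnitudes can be sketched as binary QIM (assuming NumPy; the quantization step and the Zernike moment computation itself are outside this illustration):

```python
import numpy as np

def dither_quantize(value, bit, step=2.0):
    """Quantization index modulation: a magnitude (here a Zernike moment
    magnitude) is pushed onto one of two interleaved quantizer lattices
    depending on the watermark bit."""
    dither = 0.0 if bit == 0 else step / 2.0
    return step * np.round((value - dither) / step) + dither

def dither_detect(value, step=2.0):
    """Return the bit whose lattice is closer to the received value."""
    d0 = abs(value - dither_quantize(value, 0, step))
    d1 = abs(value - dither_quantize(value, 1, step))
    return 0 if d0 <= d1 else 1
```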