Search results for: Color Image compression
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2162


1952 Unequal Error Protection of Facial Features for Personal ID Images Coding

Authors: T. Hirner, J. Polec

Abstract:

This paper presents an approach for unequal error protection (UEP) of facial features in the coding of personal ID images. We consider UEP strategies for the efficient progressive transmission of embedded image codes over noisy channels. The method is based on the progressive embedded zerotree wavelet (EZW) image compression algorithm combined with a UEP technique and a defined region of interest (ROI); here, the ROI corresponds to the facial features within the personal ID image. ROI techniques are important in applications where different parts of the image have different importance: in ROI coding, the chosen ROI is encoded with higher quality than the background (BG). Unequal error protection of the image is provided by different coding techniques and by encoding the LL band separately. In the proposed method, the image is divided into two parts (ROI and BG) consisting of more important bytes (MIB) and less important bytes (LIB). The proposed unequal error protection of image transmission is shown to be well suited to low-bit-rate applications, producing better output quality for the ROI of the compressed image. The experimental results verify the effectiveness of the design and compare UEP transmission with a facial-feature ROI against equal error protection (EEP) over an additive white Gaussian noise (AWGN) channel.

Keywords: Embedded zerotree wavelet (EZW), equal error protection (EEP), facial features, personal ID images, region of interest (ROI), unequal error protection (UEP)

1951 Large Strain Compression-Tension Behavior of AZ31B Rolled Sheet in the Rolling Direction

Authors: A. Yazdanmehr, H. Jahed

Abstract:

Being alloys of the lightest commercially available industrial metal, magnesium (Mg) alloys are of interest for light-weighting. Expanding their application to different material processing methods requires Mg properties at large strains. Several room-temperature processes such as shot and laser peening and hole cold expansion need compressive large-strain data. Two methods have been proposed in the literature to obtain the stress-strain curve at high strains: 1) anti-buckling guides and 2) small cubic samples. In this paper, an anti-buckling fixture is used with the help of digital image correlation (DIC) to obtain the compression-tension (C-T) response of AZ31B-H24 rolled sheet at large strain values of up to 10.5%. The effect of the anti-buckling fixture on the stress-strain curves is evaluated experimentally by comparing the results with those of compression tests of cubic samples. For testing cubic samples, a new fixture has been designed to increase the accuracy of testing with DIC strain measurements. Results show a negligible effect of the anti-buckling fixture on the stress-strain curves, specifically at high strain values.

Keywords: Large strain, compression-tension, loading-unloading, Mg alloys.

1950 A New Approach to Steganography using Sinc-Convolution Method

Authors: Ahmad R. Naghsh-Nilchi, Latifeh Pourmohammadbagher

Abstract:

Both image steganography and image encryption have advantages and disadvantages. Steganography allows us to hide a desired image containing confidential information inside a cover or host image, while image encryption transforms the desired image into a non-readable, incomprehensible form. Encryption methods are usually much more robust than steganographic ones. However, they have high visibility and can easily provoke attackers, since it is usually obvious from an encrypted image that something is hidden. Combining steganography and encryption covers the weaknesses of both and therefore increases security. In this paper, an image encryption method based on sinc-convolution together with an encryption key of 128-bit length is introduced. The encrypted image is then hidden in a host image using a modified version of the JSteg steganography algorithm. This method can be applied to almost all image formats, including TIF, BMP, GIF and JPEG. The experimental results show that our method is able to hide a desired image with high security and low visibility.

Keywords: Sinc Approximation, Image Encryption, Sinc-convolution, Image Steganography, JSteg.

1949 Multichannel Image Mosaicing of Stem Cells

Authors: Alessandro Bevilacqua, Alessandro Gherardi, Filippo Piccinini

Abstract:

Image mosaicing techniques are usually employed to offer researchers a wider field of view of a microscopic image of biological samples. A mosaic is commonly achieved using automated microscopes and often with one "color" channel, whether it refers to natural or fluorescent analysis. In this work we present a method to achieve three subsequent mosaics of the same part of a stem cell culture analyzed in phase contrast and in fluorescence, with a common non-automated inverted microscope. The mosaics obtained are then merged together to mark, in the original phase contrast images, the nuclei and cytoplasm of the cells with reference to a mosaic of the culture, rather than to single images. The experiments carried out prove the effectiveness of our approach with cultures of cells stained with Calcein (green: cytoplasm and nuclei) and Hoechst (blue: nuclei) probes.
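
For illustration, a minimal NumPy sketch of estimating the translation between two overlapping grayscale tiles with phase correlation. This is a generic registration technique, not necessarily the authors' method; tile names and sizes are hypothetical.

```python
import numpy as np

def phase_correlation_shift(tile_a, tile_b):
    """Estimate the integer (row, col) translation that aligns tile_b to tile_a.

    Minimal phase-correlation sketch; assumes same-sized, overlapping
    grayscale tiles given as float arrays.
    """
    fa = np.fft.fft2(tile_a)
    fb = np.fft.fft2(tile_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12     # keep phase information only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap shifts larger than half the image size to negative offsets
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))

# Usage sketch: dy, dx = phase_correlation_shift(phase_tile_1, phase_tile_2)
# The same offset could then be reused to place the corresponding fluorescence
# tiles, so the phase-contrast and fluorescence mosaics stay co-registered.
```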

Keywords: Microscopy, image mosaicing, fluorescence, stem cells.

1948 Performance Assessment of Wet-Compression Gas Turbine Cycle with Turbine Blade Cooling

Authors: Kyoung Hoon Kim

Abstract:

Turbine blade cooling is considered the most effective way of maintaining high operating temperatures with the available materials, and turbine systems with wet compression have potential for future power generation because of their high efficiency and high specific power at a relatively low cost. In this paper, a performance analysis of a wet-compression gas turbine cycle with turbine blade cooling is carried out. The wet compression process is analytically modeled based on non-equilibrium droplet evaporation. Special attention is paid to the effects of pressure ratio and water injection ratio on important system variables such as the ratio of coolant fluid flow, fuel consumption, thermal efficiency and specific power. Parametric studies show that wet compression leads to an insignificant improvement in thermal efficiency but a significant enhancement of specific power in gas turbine systems with turbine blade cooling.

Keywords: Water injection, wet compression, gas turbine, turbine blade cooling.

1947 Destination Port Detection for Vessels: An Analytic Tool for Optimizing Port Authorities Resources

Authors: Lubna Eljabu, Mohammad Etemad, Stan Matwin

Abstract:

Port authorities face many challenges in congested ports when allocating their resources to provide a safe and secure loading/unloading procedure for cargo vessels. Selecting a destination port is the decision of a vessel master, based on many factors such as weather, wavelength and changes of priorities. Having access to a tool which leverages Automatic Identification System (AIS) messages to monitor vessels' movements and accurately predict their next destination port promotes an effective resource allocation process for port authorities. In this research, we propose a method, namely Reference Route of Trajectory (RRoT), to assist port authorities in predicting inflow and outflow traffic in their local environment by monitoring AIS messages. Our RRoT method creates a reference route based on historical AIS messages and utilizes trajectory similarity measures to identify the destination of a vessel from its recent movement. We evaluated five different similarity measures: Discrete Fréchet Distance (DFD), Dynamic Time Warping (DTW), Partial Curve Mapping (PCM), Area between two curves (Area) and Curve Length (CL). Our experiments show that our method identifies the destination port with an accuracy of 98.97% and an f-measure of 99.08% using the Dynamic Time Warping (DTW) similarity measure.
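
For illustration, a minimal Python sketch of the Dynamic Time Warping (DTW) distance between two trajectories, one of the similarity measures evaluated above. This is a generic DTW implementation, not the authors' RRoT code; the trajectory format (arrays of latitude/longitude points) is an assumption.

```python
import numpy as np

def dtw_distance(traj_a, traj_b):
    """Dynamic Time Warping distance between two trajectories.

    traj_a, traj_b: arrays of shape (n, 2) and (m, 2) holding
    (latitude, longitude) points.
    """
    a, b = np.asarray(traj_a, float), np.asarray(traj_b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])    # pointwise Euclidean cost
            cost[i, j] = d + min(cost[i - 1, j],        # insertion
                                 cost[i, j - 1],        # deletion
                                 cost[i - 1, j - 1])    # match
    return cost[n, m]

# A recent vessel track would be compared against each candidate port's
# reference route; the port with the smallest DTW distance is predicted.
```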

Keywords: Spatial temporal data mining, trajectory mining, trajectory similarity, resource optimization.

1946 Probabilistic Bhattacharya Based Active Contour Model in Structure Tensor Space

Authors: Hiren Mewada, Suprava Patnaik

Abstract:

Object identification and segmentation applications require extracting the foreground object from the background. In this paper, a Bhattacharya distance based probabilistic approach is combined with an active contour model (ACM) to segment an object from the background. In the proposed approach, the Bhattacharya histogram is calculated in a non-linear structure tensor space. Based on this histogram, a new formulation of the active contour model is proposed to segment images. The results are tested on both color and gray images from the Berkeley image database. The experimental results show that the proposed model is applicable to color and gray images as well as to texture images and natural images. Moreover, compared with the Bhattacharya-based ACM in ICA space, the proposed model is able to segment multiple objects as well.

Keywords: Active Contour, Bhattacharya Histogram, Structure tensor, Image segmentation.

1945 Image Segmentation Using the K-means Algorithm for Texture Features

Authors: Wan-Ting Lin, Chuen-Horng Lin, Tsung-Ho Wu, Yung-Kuan Chan

Abstract:

This study aims to segment objects using the K-means algorithm on texture features. First, the algorithm transforms color images into gray images. The paper describes a novel technique for the extraction of texture features from an image. Then, within groups of similar features, objects and backgrounds are differentiated using the K-means algorithm. Finally, the paper proposes a new object segmentation algorithm using morphological techniques. The experiments described include the segmentation of both single and multiple objects. The region of an object can be accurately segmented out, and the results can help to perform image retrieval and to analyze the features of an object, as shown in this paper.
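
For illustration, a minimal sketch of the general pipeline (gray conversion, texture features, K-means, morphological cleanup), assuming SciPy and scikit-learn are available. The specific texture features (local mean and local standard deviation) and window size are illustrative assumptions, not the paper's exact features.

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.cluster import KMeans

def segment_by_texture(gray, window=7, n_clusters=2):
    """Cluster pixels into object/background from simple texture features."""
    gray = gray.astype(float)
    local_mean = ndi.uniform_filter(gray, window)
    local_sqmean = ndi.uniform_filter(gray ** 2, window)
    local_std = np.sqrt(np.maximum(local_sqmean - local_mean ** 2, 0.0))

    features = np.stack([local_mean.ravel(), local_std.ravel()], axis=1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(features).reshape(gray.shape)

    # Assume the object is the cluster with the higher mean texture response,
    # then clean the mask with binary opening/closing (morphological step).
    obj_label = np.argmax([local_std[labels == k].mean() for k in range(n_clusters)])
    mask = labels == obj_label
    mask = ndi.binary_closing(ndi.binary_opening(mask, np.ones((3, 3))), np.ones((3, 3)))
    return mask
```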

Keywords: K-means, multiple objects, segmentation, texture features.

1944 A New Approach to Face Recognition Using Dual Dimension Reduction

Authors: M. Almas Anjum, M. Younus Javed, A. Basit

Abstract:

In this paper, a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient, giving better recognition results, and outperforming the common DCT technique for face recognition. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results change with the face image resolution and are optimal at a certain resolution level. In the proposed model of face recognition, an image decimation algorithm is first applied to the face image for dimension reduction to the resolution level that provides the best recognition results. Owing to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image. A subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A tradeoff between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. The new model has been tested on different databases, which include the ORL, Yale and EME color databases.
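
For illustration, a minimal sketch of the two reduction steps (spatial decimation followed by keeping a small block of low-frequency 2-D DCT coefficients), assuming SciPy. The decimation factor, coefficient count and the square (rather than zig-zag) coefficient subset are illustrative assumptions, not the paper's tuned choices.

```python
import numpy as np
from scipy.fft import dctn

def dct_face_features(face_gray, decimation=2, n_coeffs=16):
    """Dual dimension reduction: decimate the image, then keep an
    n_coeffs x n_coeffs block of low-frequency 2-D DCT coefficients."""
    small = face_gray[::decimation, ::decimation].astype(float)   # simple decimation
    coeffs = dctn(small, norm='ortho')                            # 2-D DCT
    return coeffs[:n_coeffs, :n_coeffs].ravel()                   # low-frequency block

# Features of a probe face could then be matched to gallery features,
# e.g. with a nearest-neighbour rule on Euclidean distance.
```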

Keywords: Biometrics, DCT, Face Recognition, Illumination, Computation, Feature extraction.

1943 A New Approach to Image Segmentation via Fuzzification of Rényi Entropy of Generalized Distributions

Authors: Samy Sadek, Ayoub Al-Hamadi, Axel Panning, Bernd Michaelis, Usama Sayed

Abstract:

In this paper, we propose a novel approach to image segmentation via fuzzification of the Rényi Entropy of Generalized Distributions (REGD). The fuzzy REGD is used to precisely measure the structural information of the image and to locate the optimal threshold desired for segmentation. The proposed approach draws upon the postulate that the optimal threshold coincides with the maximum information content of the distribution. The contributions of the paper are as follows. First, the fuzzy REGD as a measure of the spatial structure of an image is introduced. Then, we propose an efficient entropic segmentation approach using the fuzzy REGD. Although the proposed approach belongs to the family of entropic segmentation approaches, which are commonly applied to grayscale images, it is adapted so that it can also segment color images. Lastly, diverse experiments on real images that show the superior performance of the proposed method are carried out.
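
For illustration, a minimal NumPy sketch of crisp entropic thresholding with the Rényi entropy of the background/foreground gray-level distributions. The paper's fuzzification and generalized distributions are not reproduced here; the alpha value is an illustrative assumption.

```python
import numpy as np

def renyi_threshold(gray, alpha=0.5):
    """Pick the gray level maximizing the sum of Renyi entropies of the
    background and foreground distributions (alpha != 1)."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        p_bg, p_fg = p[:t], p[t:]
        w_bg, w_fg = p_bg.sum(), p_fg.sum()
        if w_bg == 0 or w_fg == 0:
            continue
        # Renyi entropy: H_alpha = log(sum(q**alpha)) / (1 - alpha)
        h_bg = np.log(np.sum((p_bg / w_bg) ** alpha)) / (1.0 - alpha)
        h_fg = np.log(np.sum((p_fg / w_fg) ** alpha)) / (1.0 - alpha)
        if h_bg + h_fg > best_score:
            best_score, best_t = h_bg + h_fg, t
    return best_t
```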

Keywords: Entropy of generalized distributions, entropy fuzzification, entropic image segmentation.

1942 Tomato Fruit Color Changes During Ripening On Vine

Authors: A. Radzevičius, P. Viškelis, J. Viškelis, R. Karklelienė, D. Juškevičienė

Abstract:

Tomato (Lycopersicon esculentum Mill.) hybrid 'Brooklyn' was investigated at the LRCAF Institute of Horticulture. For the investigation, five green tomatoes growing on the vine were selected. Color measurements were made in the greenhouse on the same selected fruits (the fruits were not harvested and kept growing and ripening on the vine throughout the experiment) every two days until the fruits were fully ripe. The study showed that the color index L tended to decline, with a coefficient of determination (R²) of 0.9504. The hue angle also tended to decline during fruit ripening on the vine, with a coefficient of determination (R²) of 0.9739. The opposite tendency was found for color index a*, which increased during ripening; this was expressed by a polynomial trendline with a coefficient of determination (R²) of 0.9592.
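
For illustration, a worked sketch of fitting a polynomial trendline to a color index over ripening days and computing R², using NumPy. The day and a* values below are placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical measurements: day of ripening vs. color index a*.
days = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)
a_star = np.array([-8.0, -6.5, -3.0, 4.0, 12.0, 18.0, 21.0])

coeffs = np.polyfit(days, a_star, deg=2)          # 2nd-order polynomial trendline
fitted = np.polyval(coeffs, days)

ss_res = np.sum((a_star - fitted) ** 2)           # residual sum of squares
ss_tot = np.sum((a_star - a_star.mean()) ** 2)    # total sum of squares
r_squared = 1.0 - ss_res / ss_tot                 # coefficient of determination

print(f"trendline coefficients: {coeffs}, R^2 = {r_squared:.4f}")
```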

Keywords: Color, color index, ripening, tomato.

1941 Effective Traffic Lights Recognition Method for Real Time Driving Assistance System in the Daytime

Authors: Hyun-Koo Kim, Ju H. Park, Ho-Youl Jung

Abstract:

This paper presents an effective traffic lights recognition method for the daytime. First, the Potential Traffic Lights Detector (PTLD) uses the full color information of the YCbCr channel image and produces separate binary images for green and red traffic lights. After the PTLD step, a Shape Filter (SF) is used to remove noise such as traffic signs, street trees, vehicles, and buildings. The noise-removal criteria are based on blob properties of the binary image, such as length, area, and bounding-box area. Finally, after an intermediate association step whose goal is to define relevant candidate regions from the previously detected traffic lights, an Adaptive Multi-class Classifier (AMC) is executed. The classification method uses Haar-like features and the AdaBoost algorithm. The method was implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested on urban and rural roads. In the tests, our method was compared with standard object-recognition learning processes and reached a detection rate of up to 94%, better than the results achieved with cascade classifiers. The computation time of our proposed method is 15 ms.
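
For illustration, a minimal NumPy sketch of the color-segmentation stage only: converting an RGB frame to YCbCr (BT.601) and thresholding the chroma channels into candidate red/green masks. The threshold values are illustrative assumptions, and the shape filter and Haar/AdaBoost classifier are not shown.

```python
import numpy as np

def traffic_light_candidate_masks(rgb):
    """Return rough (red_mask, green_mask) binary images from an RGB frame."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b

    bright = y > 60                                  # lit lamps stand out in luma
    red_mask   = bright & (cr > 150) & (cb < 120)    # high Cr, low Cb
    green_mask = bright & (cr < 110) & (cb < 120)    # low Cr and low Cb (greenish)
    return red_mask, green_mask

# Blob properties (area, bounding-box size, aspect ratio) of these masks would
# then feed the shape filter, and surviving candidates go to the classifier.
```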

Keywords: Traffic Light Detection, Multi-class Classification, Driving Assistance System, Haar-like Feature, Color Segmentation Method, Shape Filter

1940 Blind Low Frequency Watermarking Method

Authors: Dimitar Taskovski, Sofija Bogdanova, Momcilo Bogdanov

Abstract:

We present a low frequency watermarking method adaptive to image content. The image content is analyzed and properties of HVS are exploited to generate a visual mask of the same size as the approximation image. Using this mask we embed the watermark in the approximation image without degrading the image quality. Watermark detection is performed without using the original image. Experimental results show that the proposed watermarking method is robust against most common image processing operations, which can be easily implemented and usually do not degrade the image quality.

Keywords: Blind, digital watermarking, low frequency, visual mask.

1939 Test Data Compression Using a Hybrid of Bitmask Dictionary and 2^n Pattern Run-Length Coding Methods

Authors: C. Kalamani, K. Paramasivam

Abstract:

In VLSI, testing plays an important role. The major problems in testing are test data volume and test power. An important solution for reducing test data volume and test time is test data compression. The proposed technique combines the bitmask dictionary and 2^n pattern run-length coding methods and provides a substantial improvement in compression efficiency without introducing any additional decompression penalty. The method has been implemented using MATLAB and an HDL to reduce test data volume and memory requirements. It is applied to various benchmark test sets and the results are compared with other existing methods. The proposed technique can achieve a compression ratio of up to 86%.
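
For illustration, a minimal sketch of the run-length half of the hybrid: encoding runs of repeated bits in a test vector with run lengths capped so they fit in n bits. This is a simplified run-length coder, not the exact 2^n scheme of the paper, and the bitmask-dictionary stage is not shown; the example vector is hypothetical.

```python
def run_length_encode(bits, n=4):
    """Encode a test-data bit string as (symbol, run_length) pairs.

    Runs longer than 2**n - 1 are split so every length fits in n bits.
    """
    max_run = 2 ** n - 1
    encoded, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i] and j - i < max_run:
            j += 1
        encoded.append((bits[i], j - i))   # (symbol, run length)
        i = j
    return encoded

def run_length_decode(encoded):
    return ''.join(symbol * length for symbol, length in encoded)

test_vector = "0000000011110000000000111"
codes = run_length_encode(test_vector)
assert run_length_decode(codes) == test_vector
compressed_bits = len(codes) * (1 + 4)            # 1-bit symbol + 4-bit length
ratio = 100 * (1 - compressed_bits / len(test_vector))
print(codes, f"compression ratio = {ratio:.0f}%")
```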

Keywords: Bitmask dictionary, 2^n pattern run-length code, system-on-chip, SOC, test data compression.

1938 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kr. Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visible image (VI) fusion for various applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, some image fusion algorithms based upon the Multi-Scale Transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes an implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While developing our image fusion approach, we observed several challenges with popular image fusion methods: their high computational cost and complex processing steps yield accurate fused results, but they also make such methods hard to deploy in systems and applications that require real-time operation, high flexibility and low computational capability. The methods presented in this paper therefore offer good results with minimum time complexity.

Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.

1937 A Comparative Study of Image Segmentation using Edge-Based Approach

Authors: Rajiv Kumar, Arthanariee A. M.

Abstract:

Image segmentation is the process of dividing a given image into several parts so that each part of the image can be further analyzed. There are numerous techniques of image segmentation available in the literature. In this paper, the authors analyze the edge-based approach to image segmentation. They implement different edge operators, namely Prewitt, Sobel, LoG, and Canny, on the basis of their threshold parameters. The results of these operators are shown for various images.
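
For illustration, a minimal Python sketch of applying the four operators compared in the paper to a grayscale image, assuming SciPy and scikit-image are available. The threshold and sigma values are illustrative, and the LoG edges are approximated by zero crossings of the response.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import feature

def edge_maps(gray, threshold=0.2, sigma=2.0):
    """Binary edge maps from the Prewitt, Sobel, LoG and Canny operators."""
    gray = gray.astype(float)

    def gradient_edges(op):
        # Threshold the normalized gradient magnitude of a first-order operator.
        mag = np.hypot(op(gray, axis=0), op(gray, axis=1))
        return mag > threshold * mag.max()

    # LoG edges: zero crossings of the Laplacian-of-Gaussian response.
    log_resp = ndi.gaussian_laplace(gray, sigma)
    zc = np.zeros(gray.shape, dtype=bool)
    zc[:-1, :] |= np.signbit(log_resp[:-1, :]) != np.signbit(log_resp[1:, :])
    zc[:, :-1] |= np.signbit(log_resp[:, :-1]) != np.signbit(log_resp[:, 1:])

    return {
        "prewitt": gradient_edges(ndi.prewitt),
        "sobel": gradient_edges(ndi.sobel),
        "log": zc,
        "canny": feature.canny(gray, sigma=sigma),
    }
```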

Keywords: Edge Operator, Edge-based Segmentation, Image Segmentation, Matlab 10.4.

1936 6D Posture Estimation of Road Vehicles from Color Images

Authors: Yoshimoto Kurihara, Tad Gonsalves

Abstract:

Currently, in the field of object posture estimation, there is research on estimating the position and angle of an object by storing a 3D model of the object to be estimated in advance in a computer and matching the observation against that model. In this research, however, we have succeeded in creating a module that is much simpler, smaller in scale, and faster in operation. Our 6D pose estimation model consists of two different networks: a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy for each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded when the camera position was 75°: the classification accuracy was about 87.3% and the regression accuracy about 98.9%.
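
For illustration, a minimal PyTorch sketch of the two-head idea: a shared convolutional backbone feeding a classification head (object class) and a regression head (3-D coordinates plus rotation angles). Layer sizes are hypothetical; the paper's model (AlexNet-based, per its keywords) is larger, but the structure is analogous.

```python
import torch
import torch.nn as nn

class PoseNet6D(nn.Module):
    """Shared CNN backbone with a classification head and a regression head."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        self.classifier = nn.Linear(64 * 4 * 4, n_classes)   # object class logits
        self.regressor = nn.Linear(64 * 4 * 4, 6)            # x, y, z + 3 rotation angles

    def forward(self, x):
        h = self.backbone(x)
        return self.classifier(h), self.regressor(h)

model = PoseNet6D()
logits, pose = model(torch.randn(1, 3, 128, 128))
# Training would combine a cross-entropy loss on `logits` with an
# L2 (or angular) loss on `pose`.
```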

Keywords: AlexNet, Deep learning, image recognition, 6D posture estimation.

1935 In-situ Quasistatic Compression and Microstructural Characterization of Aluminum Foams of Different Cell Topology

Authors: M. A. Islam, P. J. Hazell, J. P. Escobedo, M. Saadatfar

Abstract:

Metallic foams have good potential for lightweight structures for impact and blast mitigation. Therefore it is important to find out the optimized foam structure (i.e. cell size, shape, relative density, and distribution) to maximise energy absorption. In this paper, quasistatic compression and microstructural characterization of closed-cell aluminium foams of different pore size and cell distributions have been carried out. We present results for two different aluminium metal foams of density 0.49-0.51 g/cc and 0.31- 0.34 g/cc respectively that have been tested in quasi-static compression. The influence of cell geometry and cell topology on quasistatic compression behaviour has been investigated using optical microscope and computed tomography (micro-CT) analysis. It is shown that the deformation is not uniform in the structure and collapse begins at the weakest point.

Keywords: Metal foams, micro-CT, cell topology, quasistatic compression.

1934 An Image Processing Based Approach for Assessing Wheelchair Cushions

Authors: B. Farahani, R. Fadil, A. Aboonabi, B. Hoffmann, J. Loscheider, K. Tavakolian, S. Arzanpour

Abstract:

Wheelchair users spend long hours in a sitting position, and selecting the right cushion is highly critical in preventing pressure ulcers in this demographic. Pressure Mapping Systems (PMS) are typically used in clinical settings by therapists to identify the sitting profile and pressure points in the sitting area and to select the cushion that fits the user best. A PMS is a flexible mat composed of distributed arrays of pressure sensors. The output of a PMS is a color-coded image that shows the intensity of the pressure concentration. Therapists use the PMS images to compare how different cushions fit each user. This process is highly subjective and requires good visual memory for the best outcome. This paper aims to develop an image processing technique to analyze PMS images and provide an objective measure to assess cushions based on their pressure distribution mappings. In this paper, we first reviewed the skeletal anatomy of the human sitting area and its relation to the PMS image. This knowledge was then used to identify the important features that must be considered in the image processing. We then developed an algorithm based on those features to analyze the images and rank them according to their fit to the user's needs.

Keywords: cushion, image processing, pressure mapping system, wheelchair

1933 3D Oil Reservoir Visualisation Using Octree Compression Techniques Utilising Logical Grid Co-Ordinates

Authors: S. Mulholland

Abstract:

Octree compression techniques have been used for several years to compress large three-dimensional data sets into homogeneous regions. This compression technique is ideally suited to datasets whose similar values occur in clusters. Oil engineers represent reservoirs as a three-dimensional grid in which hydrocarbons occur naturally in clusters. This research looks at the efficiency of storing these grids using octree compression techniques, where grid cells are divided into active and inactive regions. Initial experiments yielded high compression ratios, as only active leaf nodes and their ancestor (header) nodes are stored as a bitstream to file on disk. Savings in computational time and memory were possible at decompression, as only active leaf nodes are sent to the graphics card, eliminating the need to reconstruct the original matrix. This results in a more compact vertex table, which can be loaded into the graphics card more quickly, generating shorter refresh delay times.
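
For illustration, a minimal Python sketch of recursively subdividing a 3-D active/inactive cell grid into an octree, stopping wherever a block is homogeneous, and streaming out only the active leaf blocks. It assumes a cubic grid with a power-of-two edge length, which is a simplification, and does not reproduce the paper's bitstream format or logical grid co-ordinates.

```python
import numpy as np

def build_octree(active, origin=(0, 0, 0)):
    """Recursively compress a cubic boolean grid of active/inactive cells.

    Homogeneous blocks become leaves; mixed blocks split into eight children.
    """
    size = active.shape[0]
    if size == 1 or active.all() or not active.any():
        return {"origin": origin, "size": size, "value": bool(active.flat[0])}
    half = size // 2
    children = []
    for dz in (0, half):
        for dy in (0, half):
            for dx in (0, half):
                sub = active[dz:dz + half, dy:dy + half, dx:dx + half]
                child_origin = (origin[0] + dz, origin[1] + dy, origin[2] + dx)
                children.append(build_octree(sub, child_origin))
    return {"origin": origin, "size": size, "children": children}

def active_leaves(node):
    """Yield only active leaf blocks -- the ones that would be sent to the
    graphics card as a compact vertex table."""
    if "children" in node:
        for child in node["children"]:
            yield from active_leaves(child)
    elif node["value"]:
        yield node

grid = np.random.rand(16, 16, 16) > 0.7      # toy grid of active cells
tree = build_octree(grid)
print(sum(1 for _ in active_leaves(tree)), "active leaf blocks")
```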

Keywords: 3D visualisation, compressed vertex tables, octree compression techniques, oil reservoir grids.

1932 Scholar Index for Research Performance Evaluation Using Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

This paper presents an objective quantitative methodology for evaluating an individual's scholarly research output using multiple criteria decision analysis. A multiple criteria decision making analysis (MCDMA) methodological process is adopted to build a multiple criteria evaluation model. The scholar index summarizes a researcher's productivity and the scholarly impact of his or her publications in a single number (s is the number of publications with at least s citations); together with the cumulative research citation index, it is included in the citation databases to cover the multidimensional complexity of scholarly research performance and to support objective evaluations. The scholar index, one of the publication activity indexes, is considered the most appropriate scientometric indicator because it smooths over many drawbacks of assessing scholarly output by merely counting publications (quantity) and citations (quality). Hence, this study uses a set of indicators based on the scholar index for evaluating scholarly researchers. The Google Scholar open science database was used to assess and discuss the scholarly productivity and impact of researchers. Based on an experiment computing the scholar index and its derivative indexes for a set of researchers on an open research database platform, quantitative methods of assessing scholarly research output were successfully applied to rank researchers. The proposed methodology covers the ranking, the selection of the data on which the scholarly research performance evaluation is based, the analysis of the data, and the presentation of the multiple criteria analysis results.
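
For illustration, a worked sketch of computing the scholar index as defined above (the largest s such that s publications have at least s citations each) together with the cumulative citation count, from a list of per-publication citation counts. The citation counts are hypothetical.

```python
def scholar_index(citations):
    """Largest s such that the researcher has s publications with
    at least s citations each."""
    counts = sorted(citations, reverse=True)
    s = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            s = rank
        else:
            break
    return s

# Hypothetical citation counts for one researcher's publications.
citations = [45, 22, 18, 9, 7, 7, 4, 2, 1, 0]
print("scholar index:", scholar_index(citations))        # -> 6
print("cumulative citations:", sum(citations))           # -> 115
```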

Keywords: Multiple Criteria Decision Making Analysis, MCDMA, Research Performance Evaluation, Scholar Index, h index, Science Citation Index, Science Efficiency, Cumulative Citation Index, Sciencemetrics

1931 Object-Based Image Indexing and Retrieval in DCT Domain using Clustering Techniques

Authors: Hossein Nezamabadi-pour, Saeid Saryazdi

Abstract:

In this paper, we present a new and effective image indexing technique that extracts features directly in the DCT domain. Our proposed approach is object-based image indexing. For each 8x8 block in the DCT domain, a feature vector is extracted. The feature vectors of all blocks of the image are then clustered into groups using a k-means algorithm; each cluster represents a particular object of the image. We then select the clusters with the largest membership after clustering. The centroids of the selected clusters are taken as the image feature vectors and indexed into the database. We also propose an approach for using the proposed image indexing method in automatic image classification. Experimental results on a database of 800 images from 8 semantic groups in automatic image classification are reported.

Keywords: Object-based image retrieval, DCT domain, Image indexing, Image classification.

1930 Outdoor Anomaly Detection with a Spectroscopic Line Detector

Authors: O. J. G. Somsen

Abstract:

One of the tasks of optical surveillance is to detect anomalies in large amounts of image data. However, if the size of the anomaly is very small, limited information is available to distinguish it from the surrounding environment. Spectral detection provides a useful source of additional information and may help to detect anomalies with a size of a few pixels or less. Unfortunately, spectral cameras are expensive because of the difficulty of separating two spatial dimensions in addition to one spectral dimension. We investigate the possibility of modifying a simple spectral line detector for outdoor detection. This may be especially useful if the area of interest forms a line, such as the horizon. We use a monochrome CCD that also enables detection into the near infrared. A simple camera is attached to the setup to determine which part of the environment is spectrally imaged. Our preliminary results indicate that sensitive detection of very small targets is indeed possible. Spectra could be taken from the various targets by averaging columns in the line image. By imaging a set of lines of various widths, we found narrow lines that could not be seen in the color image but remained visible in the spectral line image. A simultaneous analysis of the entire spectrum can produce better results than visual inspection of the spectral line image. We are presently developing calibration targets for spatial and spectral focusing and alignment with the spatial camera. This should yield improved results and wider use in outdoor applications.

Keywords: Anomaly detection, spectroscopic line imaging, image analysis.

1929 Design of a DCT-based Image Compression with Efficient Enhancement Filter

Authors: Yen-Yu Chen, Pao-Ching Chu, Ya-Ling Tsai

Abstract:

The algorithm represents the DCT coefficients so as to concentrate signal energy and proposes combination and dictator operations to eliminate the correlation within same-level subbands when encoding DCT-based images. This work adopts the DCT and modifies the SPIHT algorithm to encode the DCT coefficients. The proposed algorithm also provides an enhancement function at low bit rates in order to improve the perceptual quality. Experimental results indicate that the proposed technique improves the quality of the reconstructed image in terms of both PSNR and perceptual quality, approaching JPEG 2000 at the same bit rate.

Keywords: JPEG 2000, enhancement filter

1928 Quality-Controlled Compression Method using Wavelet Transform for Electrocardiogram Signals

Authors: Redha Benzid, Farid Marir, Nour-Eddine Bouguechal

Abstract:

This paper presents a new quality-controlled, wavelet-based compression method for electrocardiogram (ECG) signals. Initially, an ECG signal is decomposed using the wavelet transform. Then, the resulting coefficients are iteratively thresholded to guarantee that a predefined goal percent root-mean-square difference (GPRD) is matched within tolerable boundaries. The quantization strategy for the extracted non-zero wavelet coefficients (NZWC), combined with RLE, Huffman and arithmetic encoding of the NZWC and a resulting look-up table, allows high compression ratios to be achieved with good-quality reconstructed signals.
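
For illustration, a minimal sketch of the quality-control loop only: decompose the signal with a wavelet, threshold the coefficients, reconstruct, and adjust the threshold until the PRD approaches a goal value. It assumes PyWavelets (pywt) is available; the wavelet, level and goal PRD are illustrative, and the paper's quantizer and RLE/Huffman/arithmetic coding stages are not shown.

```python
import numpy as np
import pywt

def prd(original, reconstructed):
    """Percent root-mean-square difference between two signals."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def compress_to_goal_prd(ecg, goal_prd=5.0, wavelet="db4", level=5, iters=30):
    """Bisect on a coefficient threshold so the reconstruction PRD
    approaches the goal (GPRD)."""
    ecg = np.asarray(ecg, dtype=float)
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    lo, hi = 0.0, max(np.abs(c).max() for c in coeffs)
    for _ in range(iters):
        thr = 0.5 * (lo + hi)
        thresh_coeffs = [np.where(np.abs(c) >= thr, c, 0.0) for c in coeffs]
        rec = pywt.waverec(thresh_coeffs, wavelet)[: len(ecg)]
        if prd(ecg, rec) > goal_prd:
            hi = thr          # too much distortion: lower the threshold
        else:
            lo = thr          # room to zero out more coefficients
    nonzero = sum(int(np.count_nonzero(c)) for c in thresh_coeffs)
    return thresh_coeffs, rec, nonzero
```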

Keywords: ECG compression, Non-uniform Max-Lloyd quantizer, PRD, Quality-Controlled, Wavelet transform

1927 Prediction of a Human Facial Image by ANN using Image Data and its Content on Web Pages

Authors: Chutimon Thitipornvanid, Siripun Sanguansintukul

Abstract:

Choosing the right metadata is critical, as good information (metadata) attached to an image facilitates its visibility among a pile of other images. The image's value is enhanced not only by the quality of the attached metadata but also by the search technique. This study proposes a technique that is simple but efficient for predicting a single human image from a website using the basic image data and the embedded metadata of the image's content appearing on web pages. The result is very encouraging, with a prediction accuracy of 95%. This technique may become a great assist to librarians, researchers and many others in automatically and efficiently identifying a set of human images out of a greater set of images.

Keywords: Metadata, Prediction, Multi-layer perceptron, Human facial image, Image mining.

1926 Binary Phase-Only Filter Watermarking with Quantized Embedding

Authors: Hu Haibo, Liu Yi, He Ming

Abstract:

The binary phase-only filter digital watermarking embeds the phase information of the discrete Fourier transform of the image into the corresponding magnitudes for better image authentication. The paper proposes an approach to implementing watermark embedding by quantizing the magnitude, discussing how to regulate the quantization steps based on the frequencies of the magnitude coefficients of the embedded watermark and how to embed the watermark at low-frequency quantization. The theoretical analysis and simulation results show that the algorithm flexibility, security, watermark imperceptibility and detection performance of the binary phase-only filter digital watermarking can be effectively improved with quantization-based watermark embedding, and the robustness against JPEG compression is also increased to some extent.
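
For illustration, a minimal NumPy sketch of the quantized-embedding step in isolation: quantization index modulation (QIM) of watermark bits into a vector of magnitude coefficients, with the matching extractor. This is a generic QIM sketch, not the paper's binary phase-only filter construction; the quantization step and the stand-in magnitude values are illustrative assumptions.

```python
import numpy as np

def qim_embed(magnitudes, bits, step):
    """Embed one bit per coefficient by quantizing the magnitude onto one of
    two interleaved lattices (offset 0 for bit 0, step/2 for bit 1)."""
    magnitudes = np.asarray(magnitudes, dtype=float)
    offsets = np.asarray(bits) * (step / 2.0)
    return step * np.round((magnitudes - offsets) / step) + offsets

def qim_extract(received, step):
    """Recover bits by checking which lattice each coefficient lies closer to."""
    err0 = np.abs(received - step * np.round(received / step))
    shifted = received - step / 2.0
    err1 = np.abs(shifted - step * np.round(shifted / step))
    return (err1 < err0).astype(int)

rng = np.random.default_rng(0)
mags = rng.uniform(10, 200, size=64)       # stand-in for DFT magnitude coefficients
bits = rng.integers(0, 2, size=64)
marked = qim_embed(mags, bits, step=4.0)
assert np.array_equal(qim_extract(marked, step=4.0), bits)
```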

Keywords: binary phase-only filter, discrete Fourier transform, digital watermarking, image authentication, quantization.

1925 Numerical Prediction of NOX in the Exhaust of a Compression Ignition Engine

Authors: A. A. Pawar, R. R. Kulkarni

Abstract:

For the numerical prediction of NOX in the exhaust of a compression ignition engine, a model was developed considering the equivalence ratio parameter. This model was validated by comparing the predicted NOX results with experimental ones. The ultimate aim of the work was to assess the applicability, robustness and performance of the improved NOX model against other NOX models.

Keywords: Biodiesel fueled engine, equivalence ratio, Compression ignition engine, exhaust gas temperature, NOX formation.

1924 Military Fighter Aircraft Selection Using Multiplicative Multiple Criteria Decision Making Analysis Method

Authors: C. Ardil

Abstract:

The multiplicative multiple criteria decision making analysis (MCDMA) method is a systematic decision support approach that helps decision makers reach appropriate decisions. The application of multiplicative MCDMA to the military aircraft selection problem is significant for a proper decision making process, which is the decisive factor in minimizing expenditures and increasing defense capability and capacity. Nine military fighter aircraft alternatives were evaluated against ten decision criteria to solve the decision making problem. In this study, the multiplicative MCDMA model aims to evaluate and select an appropriate military fighter aircraft for Air Force fleet planning. The ranking results of the multiplicative MCDMA model were compared with the ranking results of the additive MCDMA, logarithmic MCDMA, and regrettive MCDMA models under the L2 norm data normalization technique to substantiate the robustness of the proposed method. The final ranking results indicate the military fighter aircraft Su-57 as the best available solution.
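
For illustration, a minimal NumPy sketch of the multiplicative aggregation step: normalize the decision matrix with the L2 norm (as mentioned above) and rank alternatives by the weighted product of their normalized criterion scores. The decision matrix, weights and alternative labels are hypothetical, all criteria are treated as benefit criteria, and this does not reproduce the paper's full model.

```python
import numpy as np

# Hypothetical decision matrix: rows = aircraft alternatives,
# columns = decision criteria (all treated as benefit criteria here).
X = np.array([
    [8.5, 7.0, 9.0, 6.5],
    [7.5, 8.0, 8.5, 7.0],
    [9.0, 6.5, 7.5, 8.0],
])
w = np.array([0.4, 0.2, 0.25, 0.15])          # criterion weights, summing to 1
alternatives = ["A1", "A2", "A3"]

# L2 (vector) normalization of each criterion column.
Xn = X / np.linalg.norm(X, axis=0)

# Multiplicative aggregation: weighted product of normalized scores.
scores = np.prod(Xn ** w, axis=1)

ranking = sorted(zip(alternatives, scores), key=lambda p: p[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.4f}")
```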

Keywords: Aircraft Selection, Military Fighter Aircraft Selection, Air Force Fleet Planning, Multiplicative MCDMA, Additive MCDMA, Logarithmic MCDMA, Regrettive MCDMA, Mean Weight, Multiple Criteria Decision Making Analysis, Sensitivity Analysis

1923 Prediction Modeling of Compression Properties of a Knitted Sportswear Fabric Using Response Surface Method

Authors: Jawairia Umar, Tanveer Hussain, Zulfiqar Ali, Muhammad Maqsood

Abstract:

Different knitted structures and knitting parameters play a vital role in the stretch and recovery management of compression sportswear, in addition to the materials used to generate this stretch and recovery behavior of the fabric. The present work was planned to predict different performance indicators of a compression sportswear fabric from some ground parameters, i.e., base yarn stitch length (polyester as the base yarn and spandex as the plating yarn are used to make the compression fabric) and the linear density of the spandex, which is a key material of any sportswear fabric. The prediction models were generated by the response surface method for performance indicators such as stretch and recovery percentage, compression generated by the garment on the body, total elongation on application of a high force, and load generated at a certain percentage extension of the fabric. Certain physical properties of the fabric were also modeled using these two parameters.
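
For illustration, a minimal NumPy sketch of fitting a second-order response surface for one performance indicator (here labelled stretch percentage) from the two ground parameters by least squares. The design points, responses and factor names are hypothetical placeholders, not the study's data, and the paper's actual model terms may differ.

```python
import numpy as np

def fit_response_surface(x1, x2, y):
    """Least-squares fit of a full quadratic response surface
    y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    predict = lambda a, b: np.column_stack(
        [np.ones_like(a), a, b, a**2, b**2, a*b]) @ beta
    return beta, predict

# Hypothetical design points: stitch length (mm) and spandex linear density (denier).
stitch = np.array([2.6, 2.6, 3.0, 3.0, 2.8, 2.8, 2.6, 3.0, 2.8])
denier = np.array([20.0, 40.0, 20.0, 40.0, 30.0, 30.0, 30.0, 30.0, 20.0])
stretch_pct = np.array([148.0, 162.0, 135.0, 150.0, 146.0, 147.0, 155.0, 142.0, 140.0])

beta, predict = fit_response_surface(stitch, denier, stretch_pct)
print("coefficients:", np.round(beta, 3))
print("predicted stretch at (2.7 mm, 35 den):",
      predict(np.array([2.7]), np.array([35.0]))[0])
```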

Keywords: Compression, sportswear, stretch and recovery, statistical model, kikuhime.
