Search results for: Image Texture
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1676

926 New Features for Specific JPEG Steganalysis

Authors: Johann Barbier, Eric Filiol, Kichenakoumar Mayoura

Abstract:

We present in this paper a new approach for specific JPEG steganalysis and propose studying statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve the statistics of the DCT and of the spatial domain, but they cannot preserve both while also controlling the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding. This deviation is greater when the image is a cover medium than when the image is a stego image. To observe this deviation, we pointed out new statistical features and combined them with the Multiple Embedding Method. This approach is motivated by the avalanche criterion of the JPEG lossless compression step. This criterion makes possible the design of detectors whose detection rates are independent of the payload. Finally, we designed a Fisher discriminant based classifier for the well-known steganographic algorithms Outguess, F5 and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work with low embedding rates (< 10^-5) and, according to the avalanche criterion of the RLE and Huffman compression steps, its efficiency is independent of the quantity of hidden information.
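As an illustration of the classification step only: the sketch below trains a Fisher linear discriminant (via scikit-learn) on per-image scalar statistics such as compressed-stream entropies. The entropy helper and the placeholder features and labels are hypothetical stand-ins, not the authors' exact feature set.

```python
# Minimal sketch of a Fisher-discriminant classifier over scalar stream statistics.
# The features and labels below are placeholders, not the paper's exact statistics.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def stream_entropy(byte_stream: bytes) -> float:
    """Shannon entropy of a compressed byte stream (one candidate feature)."""
    counts = np.bincount(np.frombuffer(byte_stream, dtype=np.uint8), minlength=256)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# X: one feature vector per image (e.g., entropy before/after a test re-embedding);
# y: 1 for stego, 0 for cover. Both are assumed to be prepared beforehand.
X = np.random.rand(200, 2)          # placeholder feature vectors
y = np.random.randint(0, 2, 200)    # placeholder labels

clf = LinearDiscriminantAnalysis()  # Fisher linear discriminant
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```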

Keywords: Compressed frequency domain, Fisher discriminant, specific JPEG steganalysis.

925 Determinants of Brand Equity: Offering a Model to Chocolate Industry

Authors: Emari Hossien

Abstract:

This study examined the underlying dimensions of brand equity in the chocolate industry. For this purpose, researchers developed a model to identify which factors are influential in building brand equity. The second purpose was to assess the mediating effect of brand loyalty and brand image between brand attitude, brand personality and brand association on the one hand and brand equity on the other. The study employed structural equation modeling to investigate the causal relationships between the dimensions of brand equity and brand equity itself. It specifically measured the way in which consumers’ perceptions of the dimensions of brand equity affected the overall brand equity evaluations. Data were collected from a sample of consumers of the chocolate industry in Iran. The results of this empirical study indicate that brand loyalty and brand image are important components of brand equity in this industry. Moreover, the role of brand loyalty and brand image as mediating factors in the formation of brand equity is supported. The principal contribution of the present research is that it provides empirical evidence of the multidimensionality of consumer-based brand equity, supporting Aaker's and Keller's conceptualization of brand equity. The present research also enriched brand equity building by incorporating brand personality and brand image, as recommended by previous researchers. Moreover, creating a brand equity index specifically for the chocolate industry of Iran is novel.

Keywords: brand equity, brand personality, structural equation modeling, Iran.

924 Automatic Product Identification Based on Deep-Learning Theory in an Assembly Line

Authors: Fidel Lòpez Saca, Carlos Avilés-Cruz, Miguel Magos-Rivera, José Antonio Lara-Chávez

Abstract:

Automated object recognition and identification systems are widely used throughout the world, particularly in assembly lines, where they perform quality control and automatic part selection tasks. This article presents the design and implementation of an object recognition system in an assembly line. The proposed shape-and-color recognition system is based on deep learning theory in a specially designed convolutional network architecture. The methodology involves stages such as image capturing, color filtering, location of object mass centers, horizontal and vertical object boundaries, and object clipping. Once the objects are cut out, they are sent to a convolutional neural network, which automatically identifies the type of figure. The identification system works in real time. The implementation was done on a Raspberry Pi 3 system and on a Jetson Nano device. The system is used in an assembly course of the bachelor’s degree in industrial engineering. The results presented include a study of the recognition efficiency and the processing time.
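As a rough sketch of the pre-processing chain described above (color filtering, mass-center location, boundary extraction and clipping) using OpenCV; the HSV range and the final CNN call are hypothetical placeholders, not the authors' trained configuration.

```python
# Minimal sketch: HSV color filtering, mass centers via image moments,
# bounding-box boundaries and object clipping, before CNN classification.
import cv2
import numpy as np

def extract_objects(bgr_image, hsv_low=(0, 80, 80), hsv_high=(10, 255, 255)):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low, np.uint8), np.array(hsv_high, np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] == 0:
            continue
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # mass center
        x, y, w, h = cv2.boundingRect(c)          # horizontal/vertical boundaries
        crops.append(((cx, cy), bgr_image[y:y + h, x:x + w]))        # object clipping
    return crops

# Each clipped object would then be resized and passed to a trained CNN, e.g.:
# label = cnn_model.predict(cv2.resize(crop, (64, 64))[None, ...])  # hypothetical model
```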

Keywords: Deep-learning, image classification, image identification, industrial engineering.

923 Depth Estimation in DNN Using Stereo Thermal Image Pairs

Authors: Ahmet Faruk Akyuz, Hasan Sakir Bilge

Abstract:

Depth estimation using stereo images is a challenging problem in computer vision. Many different studies have been carried out to solve this problem. With advances in machine learning, this problem is often tackled with neural-network-based solutions. The images used in these studies are mostly in the visible spectrum. However, the need to use the infrared (IR) spectrum for depth estimation has emerged because it gives better results than the visible spectrum under some conditions. At this point, we recommend using thermal-thermal (IR) image pairs for depth estimation. In this study, we used two well-known networks (PSMNet, FADNet) with minor modifications to demonstrate the viability of this idea.

Keywords: thermal stereo matching, depth estimation, deep neural networks, CNN

922 Performance of Compound Enhancement Algorithms on Dental Radiograph Images

Authors: S. A. Ahmad, M. N. Taib, N. E. A. Khalid, R. Ahmad, H. Taib

Abstract:

The purpose of this research is to compare the original intra-oral digital dental radiograph images with images that are enhanced using a combination of image processing algorithms. Intra-oral digital dental radiograph images are often noisy, with blurred edges and low contrast. A combination of sharpening and enhancement methods is used to overcome these problems. The three proposed compound algorithms are Sharp Adaptive Histogram Equalization (SAHE), Sharp Median Adaptive Histogram Equalization (SMAHE) and Sharp Contrast Limited Adaptive Histogram Equalization (SCLAHE). This paper presents an initial study of the perception of six dentists on the details of abnormal pathologies and the improvement of image quality in ten intra-oral radiographs. The research focuses on the detection of only three types of pathology: periapical radiolucency, widened periodontal ligament space and loss of lamina dura. The overall result shows that SCLAHE slightly improves the appearance of dental abnormalities over the original image and also outperforms the other two proposed compound algorithms.
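As an illustration of the compound idea (sharpening followed by contrast-limited adaptive histogram equalization, in the spirit of SCLAHE), the sketch below uses OpenCV; the kernel and CLAHE parameters are assumptions, not the values used in the study.

```python
# Minimal sketch of a sharpen + CLAHE compound enhancement for a grayscale radiograph.
import cv2

def sclahe_like(gray, clip_limit=2.0, tile_grid=(8, 8)):
    blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=2.0)
    sharpened = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)   # unsharp-mask sharpening
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(sharpened)                               # contrast-limited AHE

radiograph = cv2.imread("intraoral.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
if radiograph is not None:
    cv2.imwrite("intraoral_sclahe.png", sclahe_like(radiograph))
```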

Keywords: intra-oral dental radiograph, histogram equalization, sharpening, CLAHE.

921 Visual Search Based Indoor Localization in Low Light via RGB-D Camera

Authors: Yali Zheng, Peipei Luo, Shinan Chen, Jiasheng Hao, Hong Cheng

Abstract:

Most traditional visual indoor navigation algorithms and methods only consider localization in ordinary daytime conditions, while in this paper we focus on indoor re-localization in low light. Since RGB images are degraded in low light, the less discriminative infrared and depth image pairs captured by RGB-D cameras are taken as the input, and the most similar candidates are retrieved as the output from a database built in the bag-of-words framework. Epipolar constraints can then be used to re-localize the query infrared and depth image sequence. We evaluate our method on two datasets captured by a Kinect2. The results demonstrate very promising re-localization performance for indoor navigation systems in low-light environments.

Keywords: Indoor navigation, low light, RGB-D camera, vision based.

920 Fully Automated Methods for the Detection and Segmentation of Mitochondria in Microscopy Images

Authors: Blessing Ojeme, Frederick Quinn, Russell Karls, Shannon Quinn

Abstract:

The detection and segmentation of mitochondria from fluorescence microscopy are crucial for understanding the complex structure of the nervous system. However, the constant fission and fusion of mitochondria and image distortion in the background make the task of detection and segmentation challenging. Although a number of open-source software tools and artificial intelligence (AI) methods exist for analyzing mitochondrial images, the scarcity of the combined expertise in the medical field and AI required to utilize these tools poses a challenge to their full adoption and use in clinical settings. Motivated by the advantages of automated methods in terms of good performance, minimum detection time, ease of implementation, and cross-platform compatibility, this study proposes a fully automated framework for the detection and segmentation of mitochondria using both image shape information and descriptive statistics. Using the low-cost, open-source Python and OpenCV library, the algorithms are implemented in three stages: pre-processing, image binarization, and coarse-to-fine segmentation. The proposed model is validated using a fluorescence mitochondrial dataset. Ground truth labels generated using Labkit were also used to evaluate the performance of our detection and segmentation model using precision, recall and Rand index. The study produces good detection and segmentation results and reports the challenges encountered during the image analysis of mitochondrial morphology from the fluorescence mitochondrial dataset. A discussion on the methods and future perspectives of fully automated frameworks concludes the paper.
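As a sketch of the three stages named above (pre-processing, binarization, coarse-to-fine segmentation) with OpenCV; the parameters are illustrative assumptions rather than the authors' tuned values.

```python
# Minimal sketch: CLAHE + blur pre-processing, Otsu binarization, then contour-based
# coarse-to-fine filtering using shape statistics.
import cv2
import numpy as np

def segment_mitochondria(gray):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    pre = cv2.GaussianBlur(clahe.apply(gray), (5, 5), 0)                 # pre-processing
    _, binary = cv2.threshold(pre, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    detections = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 20:                                                    # coarse size filter
            continue
        perimeter = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-9)         # shape statistic
        detections.append({"contour": c, "area": area, "circularity": circularity})
    return binary, detections
```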

Keywords: 2D, Binarization, CLAHE, detection, fluorescence microscopy, mitochondria, segmentation.

919 A Comparative Study of Particle Image Velocimetry (PIV) and Particle Tracking Velocimetry (PTV) for Airflow Measurement

Authors: Sijie Fu, Pascal-Henry Biwolé, Christian Mathis

Abstract:

Among modern airflow measurement methods, Particle Image Velocimetry (PIV) and Particle Tracking Velocimetry (PTV), as visual and non-intrusive measurement techniques, are playing an increasingly important role. This paper conducts a comparative experimental study of airflow measurement employing both techniques under the same conditions. Velocity vector fields, velocity contour fields, vorticity profiles and turbulence profiles are selected as the comparison indexes. The results show that both PIV and PTV perform satisfactorily for airflow measurement, but some differences between the two techniques exist, which suggests that the choice of measurement technique should be based on a comprehensive consideration of the application.

Keywords: PIV, PTV, airflow measurement.

918 Robust Face Recognition Using Eigen Faces and Karhunen-Loeve Algorithm

Authors: Parvinder S. Sandhu, Iqbaldeep Kaur, Amit Verma, Prateek Gupta

Abstract:

The current research paper is an implementation of Eigenfaces and the Karhunen-Loeve algorithm for face recognition. The designed program works in a manner where a unique identification number is given to each face under trial. These faces are kept in a database from where any particular face can be matched and found among the available test faces. The Karhunen-Loeve algorithm has been implemented to find the appropriate face (with the same features) with respect to a given input image used as the test data image, along with its unique identification number. The procedure involves the usage of Eigenfaces for the recognition of faces.
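As a compact sketch of the Eigenfaces idea (a Karhunen-Loeve/PCA basis of the gallery, projection, and nearest-neighbour lookup of the stored identification number); data loading and image alignment are assumed to be done beforehand.

```python
# Minimal sketch: PCA (Karhunen-Loeve) subspace of flattened faces + nearest-neighbour ID.
import numpy as np

def train_eigenfaces(faces, n_components=20):
    """faces: (n_samples, n_pixels) array of flattened, equal-sized face images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)   # Karhunen-Loeve basis
    basis = vt[:n_components]
    return mean, basis, centered @ basis.T                     # projected gallery

def identify(test_face, mean, basis, gallery_proj, ids):
    proj = (test_face - mean) @ basis.T
    distances = np.linalg.norm(gallery_proj - proj, axis=1)
    return ids[int(np.argmin(distances))]                      # ID of the closest face
```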

Keywords: Eigenfaces, Karhunen-Loeve algorithm, face recognition.

917 A Novel Deinterlacing Algorithm Based on Adaptive Polynomial Interpolation

Authors: Seung-Won Jung, Hye-Soo Kim, Le Thanh Ha, Seung-Jin Baek, Sung-Jea Ko

Abstract:

In this paper, a novel deinterlacing algorithm is proposed. The proposed algorithm approximates the distribution of the luminance with a polynomial function. Instead of using one polynomial function for all pixels, different polynomial functions are used for the uniform, texture, and directional edge regions. The function coefficients for each region are computed by matrix multiplications. Experimental results demonstrate that the proposed method performs better than the conventional algorithms.
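As a much-simplified sketch of the core operation (fitting a polynomial to the known field lines and evaluating it at the missing lines); the per-region model selection (uniform/texture/edge) described above is omitted, and the column-wise global fit is an assumption for brevity.

```python
# Minimal sketch: estimate missing lines of a field by evaluating a polynomial fitted
# to the known lines of each column.
import numpy as np

def deinterlace_field(field, degree=2):
    """field: 2-D array holding only the even lines of a frame."""
    n_known, width = field.shape
    frame = np.zeros((2 * n_known, width), dtype=float)
    frame[0::2] = field
    known_y = np.arange(0, 2 * n_known, 2)
    missing_y = np.arange(1, 2 * n_known, 2)
    for x in range(width):
        coeffs = np.polyfit(known_y, field[:, x], deg=degree)  # polynomial luminance model
        frame[1::2, x] = np.polyval(coeffs, missing_y)          # interpolate missing lines
    return frame
```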

Keywords: Deinterlacing, polynomial interpolation.

916 Non-destructive Watermelon Ripeness Determination Using Image Processing and Artificial Neural Network (ANN)

Authors: Shah Rizam M. S. B., Farah Yasmin A.R., Ahmad Ihsan M. Y., Shazana K.

Abstract:

Agricultural products are in greater demand in the market today. To increase productivity, automation in producing these products will be very helpful. The purpose of this work is to measure and determine the ripeness and quality of watermelon. The textures on the watermelon skin are captured using a digital camera. These images are filtered using image processing techniques. All the information gathered is used to train an ANN to determine the watermelon ripeness accurately. Initial results showed that the best model produced a percentage accuracy of 86.51%, measured at 32 hidden units with a balanced percentage rate of the training dataset.

Keywords: Artificial Neural Network (ANN), Digital Image Processing, YCbCr Colour Space, Watermelon Ripeness.

915 Fast Algorithm of Infrared Point Target Detection in Fluctuant Background

Authors: Yang Weiping, Zhang Zhilong, Li Jicheng, Chen Zengping, He Jun

Abstract:

A background estimation approach using a small-window median filter is presented on the basis of analyzing the IR point target, noise and clutter models. After simplifying the two-dimensional filter, a simple method adopting a one-dimensional median filter is illustrated to estimate the background according to the characteristics of the IR scanning system. An adaptive threshold is used to segment the background-cancelled image. Experimental results show that the algorithm achieves good performance and satisfies the requirement of real-time processing of large images.
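As a sketch of the pipeline (row-wise one-dimensional median filtering for background estimation, subtraction, and an adaptive threshold); the window size and threshold factor are illustrative assumptions.

```python
# Minimal sketch: 1-D median-filter background estimation along scan lines,
# background cancellation, and mean + k*sigma adaptive thresholding.
import numpy as np
from scipy.ndimage import median_filter

def detect_point_targets(ir_image, window=9, k=5.0):
    background = median_filter(ir_image.astype(float), size=(1, window))  # per scan line
    residual = ir_image - background                    # background-cancelled image
    threshold = residual.mean() + k * residual.std()    # adaptive threshold
    return residual > threshold                         # candidate point targets
```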

Keywords: Point target, background estimation, median filter, adaptive threshold, target detection.

914 Performance Analysis of Search Medical Imaging Service on Cloud Storage Using Decision Trees

Authors: González A. Julio, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

Telemedicine services use a large amount of data, most of which are diagnostic images in Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7) formats. Metadata are generated from each related image to support its identification. This study presents the use of decision trees for the optimization of information search processes for diagnostic images hosted on a cloud server. To analyze the performance of the server, the following quality of service (QoS) metrics are evaluated: delay, bandwidth, jitter, latency and throughput, in five test scenarios for a total of 26 experiments during the uploading and downloading of DICOM images hosted by the telemedicine group server of the Universidad Militar Nueva Granada, Bogotá, Colombia. By applying decision trees as a data mining technique and comparing them with sequential search, it was possible to evaluate the search times of diagnostic images on the server. The results show that by using the metadata in decision trees, the search times are substantially improved, the computational resources are optimized and the request management of the telemedicine image service is improved. Based on the experiments carried out, search efficiency increased by 45% in relation to sequential search, given that, when downloading a diagnostic image, false positives are avoided in the management and acquisition processes of said information. It is concluded that, for diagnostic image services in telemedicine, the decision tree technique guarantees accessibility and robustness in the acquisition and manipulation of medical images, improving diagnoses and medical procedures for patients.
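As an illustration of the idea only: a decision tree trained on encoded metadata attributes can prune the candidate set before retrieval, against a sequential scan baseline. The metadata fields, encoding and labels below are hypothetical, not the study's schema.

```python
# Minimal sketch: decision tree over DICOM-style metadata vs. a sequential scan.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical encoded metadata rows: [modality_code, body_part_code, study_year]
X = np.array([[0, 3, 2018], [1, 3, 2019], [0, 1, 2020], [2, 2, 2018]])
y = np.array(["shard_A", "shard_A", "shard_B", "shard_C"])   # hypothetical storage label

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
query = np.array([[0, 1, 2020]])
print("search only:", tree.predict(query)[0])                # narrowed search space

def sequential_search(records, predicate):
    """Baseline the tree is compared against: scan every record."""
    return [r for r in records if predicate(r)]
```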

Keywords: Cloud storage, decision trees, diagnostic image, search, telemedicine.

913 Detection of Leaks in Water Mains Using Ground Penetrating Radar

Authors: Alaa Al Hawari, Mohammad Khader, Tarek Zayed, Osama Moselhi

Abstract:

Ground Penetrating Radar (GPR) is one of the most effective electromagnetic techniques for non-destructive, non-invasive investigation of subsurface features. Water leakage from pipelines is the most common undesirable cause of potable water losses. Rapid detection of such losses will enhance the use of Water Distribution Networks (WDN) and decrease the threats associated with water main leaks. In this study, a GPR approach was developed to detect leaks by implementing an appropriate image analysis strategy based on image refinement, reflection polarity and reflection amplitude, which eases the process of interpreting the collected raw radargram images.

Keywords: Water Networks, Leakage, Water pipelines, Ground Penetrating Radar.

912 A Self Configuring System for Object Recognition in Color Images

Authors: Michela Lecca

Abstract:

System MEMORI automatically detects and recognizes rotated and/or rescaled versions of the objects of a database within digital color images with cluttered background. This task is accomplished by means of a region grouping algorithm guided by heuristic rules, whose parameters concern some geometrical properties and the recognition score of the database objects. This paper focuses on the strategies implemented in MEMORI for the estimation of the heuristic rule parameters. This estimation, being automatic, makes the system a highly user-friendly tool.

Keywords: Automatic object recognition, clustering, content based image retrieval system, image segmentation, region adjacency graph, region grouping.

911 Random Subspace Neural Classifier for Meteor Recognition in the Night Sky

Authors: Carlos Vera, Tetyana Baydyk, Ernst Kussul, Graciela Velasco, Miguel Aparicio

Abstract:

This article describes the Random Subspace Neural Classifier (RSC) for the recognition of meteors in the night sky. We used images of meteors entering the atmosphere at night between 8:00 p.m. and 5:00 a.m. The objective of this project is to classify meteor and star images (with stars as the image background). The monitoring of the sky and the classification of meteors are made for future applications by scientists. The image database was collected from different websites. We worked with RGB-type images with dimensions of 220x220 pixels stored in the bitmap (BMP) format. Subsequent window scanning and processing were carried out for each image. The scanning window from which the features were extracted had a size of 20x20 pixels with a scanning step of 10 pixels. Brightness, contrast and contour orientation histograms were used as inputs for the RSC. The RSC worked with two classes: 1) with meteors and 2) without meteors. Different tests were carried out by varying the number of training cycles and the number of images for training and recognition. The percentage error of the neural classifier was calculated. The results show a good RSC classifier response, with 89% correct recognition. The results of these experiments are presented and discussed.
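As a sketch of the window-scanning and feature-extraction step (20x20 windows, 10-pixel step, brightness, contrast and a contour-orientation histogram); the RSC classifier itself is not reproduced here.

```python
# Minimal sketch: sliding-window brightness/contrast/orientation-histogram features.
import cv2
import numpy as np

def window_features(gray, win=20, step=10, n_orient_bins=8):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    angles = np.arctan2(gy, gx)                 # contour orientation per pixel
    magnitudes = np.hypot(gx, gy)
    features = []
    for y in range(0, gray.shape[0] - win + 1, step):
        for x in range(0, gray.shape[1] - win + 1, step):
            patch = gray[y:y + win, x:x + win].astype(float)
            hist, _ = np.histogram(angles[y:y + win, x:x + win],
                                   bins=n_orient_bins, range=(-np.pi, np.pi),
                                   weights=magnitudes[y:y + win, x:x + win])
            features.append(np.concatenate(([patch.mean(),    # brightness
                                             patch.std()],    # contrast
                                            hist)))
    return np.array(features)
```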

Keywords: Contour orientation histogram, meteors, night sky, RSC neural classifier, stars.

910 Robust Digital Cinema Watermarking

Authors: Sadi Vural, Hiromi Tomii, Hironori Yamauchi

Abstract:

With the advent of digital cinema and digital broadcasting, copyright protection of video data has been one of the most important issues. We present a novel method of watermarking for video image data based on hardware and digital wavelet transform techniques and name it "traceable watermarking" because the watermarked data are constructed before the transmission process and traced after they have been received by an authorized user. In our method, we embed the watermark into the lowest part of each image frame in the decoded video by using a hardware LSI. Digital cinema is an important application for traceable watermarking, since the digital cinema system makes use of watermarking technology during content encoding, encryption, transmission, decoding and all the intermediate processes in digital cinema systems. The watermark is embedded into randomly selected movie frames using hash functions. The embedded watermark information can be extracted from the decoded video data; for that, there is no need to access the original movie data. Our experimental results show that the proposed traceable watermarking method for digital cinema systems is much better than conventional watermarking techniques in terms of robustness, image quality, speed, simplicity and robust structure.

Keywords: Decoder, Digital content, JPEG2000 Frame, System-On-Chip, traceable watermark, Hash Function, CRC-32.

909 Edge Detection Algorithm Based on Wavelet De-noising Applied to the X-ray Image Enhancement of Electric Equipment

Authors: Fei Xue, Hong Yu, Da-da Wang, Wei Zhang, Rong-min Zou, Xiao-lan Cai

Abstract:

X-ray technology has been used for non-destructive evaluation in the power system, providing a visual non-destructive inspection method for electrical equipment. However, a lot of noise exists in the images obtained from X-ray digital imaging equipment. Therefore, automatic defect detection based on these images is very difficult to carry out. An X-ray image de-noising algorithm based on the wavelet transform is proposed in this paper. Then an edge detection algorithm is used so that the defect can be picked out. The experimental results show that the method utilized in this paper is very useful for de-noising X-ray images.
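As a sketch of the two steps (wavelet-domain de-noising by soft thresholding, then edge detection on the cleaned image) using PyWavelets and OpenCV; the wavelet, decomposition level and Canny thresholds are assumptions, not the paper's settings.

```python
# Minimal sketch: universal-threshold wavelet de-noising followed by Canny edges.
import numpy as np
import pywt
import cv2

def wavelet_denoise(gray, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745          # noise estimate
    threshold = sigma * np.sqrt(2 * np.log(gray.size))          # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, threshold, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

def detect_defect_edges(gray):
    smooth = np.clip(wavelet_denoise(gray), 0, 255).astype(np.uint8)
    return cv2.Canny(smooth, 50, 150)                           # edge map of the defect
```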

Keywords: de-noising, edge detection, wavelet transform, X-ray

908 Using Secure-Image Mechanism to Protect Mobile Agent Against Malicious Hosts

Authors: Tarig Mohamed Ahmed

Abstract:

The usage of the internet is rapidly increasing, and mobile agent technology is in great demand in the internet environment. The security issue is one of the main obstacles that restrict the spread of mobile agent technology. This paper proposes the Secure-Image Mechanism (SIM) as a new mechanism to protect mobile agents against malicious hosts. SIM aims to protect the mobile agent by using symmetric encryption and hash functions from cryptography. This mechanism can prevent eavesdropping and alteration attacks. It assists the mobile agents in continuing their journey normally in case attacks occur.

Keywords: Agent protection, cryptography, mobile agent security.

907 Urban Land Cover Change of Olomouc City Using LANDSAT Images

Authors: Miloš Marjanović, Jaroslav Burian, Jakub Miřijovský, Jan Harbula

Abstract:

This paper regards the phenomena of intensive suburbanization and urbanization in Olomouc city, and in the Olomouc region in general, for the period 1986–2009. A Remote Sensing approach that involves tracking changes in Land Cover units is proposed to quantify the urbanization state and trends in their temporal and spatial aspects. It consisted of two approaches, Experiment 1 and Experiment 2, which applied two different image classification solutions in order to provide Land Cover maps for each 1986–2009 time split available in the Landsat image set. Experiment 1 dealt with unsupervised classification, while Experiment 2 involved semi-supervised classification, using a combination of object-based and pixel-based classifiers. The resulting Land Cover maps were subsequently quantified for the proportion of the urban area unit and its trend through time, and also for the urban area unit stability, yielding the relation between the spatial and temporal development of the urban area unit. Some outcomes seem promising, but there is indisputably room for improvement of the source data and also of the processing and filtering.
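As a sketch of the unsupervised route (Experiment 1) only: k-means clustering of stacked Landsat band values into Land Cover classes, from which the urban proportion can be quantified. The number of classes and the manual labeling of the urban cluster are assumptions.

```python
# Minimal sketch: unsupervised per-pixel classification of a Landsat band stack.
import numpy as np
from sklearn.cluster import KMeans

def unsupervised_land_cover(band_stack, n_classes=5, seed=0):
    """band_stack: (rows, cols, n_bands) array of co-registered Landsat bands."""
    rows, cols, n_bands = band_stack.shape
    pixels = band_stack.reshape(-1, n_bands).astype(float)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(pixels)
    return labels.reshape(rows, cols)                # one land-cover label per pixel

def urban_proportion(label_map, urban_label):
    """Share of the scene assigned to the (manually identified) urban cluster."""
    return float((label_map == urban_label).mean())
```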

Keywords: Change detection, image classification, land cover, Landsat images, Olomouc city, urbanization.

906 Efficient Lossless Compression of Weather Radar Data

Authors: Wei-hua Ai, Wei Yan, Xiang Li

Abstract:

Data compression is used operationally to reduce bandwidth and storage requirements. An efficient method for achieving lossless weather radar data compression is presented. The characteristics of the data are taken into account, and optimal linear prediction is used for the PPI images in the weather radar data in the proposed method. The next PPI image is nearly identical to the current one, and a dramatic reduction in source entropy is achieved by using the prediction algorithm. Some lossless compression methods are then used to compress the predicted data. Experimental results show that, for weather radar data, the method proposed in this paper outperforms the other methods.
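As an illustration of the prediction-then-lossless-coding idea only: each PPI image is predicted from the previous one with a least-squares linear predictor and the integer residual is compressed losslessly. Here zlib stands in for the paper's entropy coder, and the affine predictor form is an assumption.

```python
# Minimal sketch: linear prediction between consecutive PPI images + lossless coding
# of the residual; reconstruction is exact.
import numpy as np
import zlib

def compress_ppi_pair(prev_ppi, curr_ppi):
    x = prev_ppi.astype(float).ravel()
    y = curr_ppi.astype(float).ravel()
    a, b = np.polyfit(x, y, 1)                       # linear prediction coefficients
    prediction = np.rint(a * x + b).astype(np.int16)
    residual = y.astype(np.int16) - prediction       # low-entropy residual
    return (a, b), zlib.compress(residual.tobytes())

def decompress_ppi(prev_ppi, coeffs, payload, shape):
    a, b = coeffs
    prediction = np.rint(a * prev_ppi.astype(float).ravel() + b).astype(np.int16)
    residual = np.frombuffer(zlib.decompress(payload), dtype=np.int16)
    return (prediction + residual).reshape(shape)    # lossless reconstruction
```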

Keywords: Lossless compression, weather radar data, optimal linear prediction, PPI image.

905 A CT-based Monte Carlo Dose Calculations for Proton Therapy Using a New Interface Program

Authors: A. Esmaili Torshabi, A. Terakawa, K. Ishii, H. Yamazaki, S. Matsuyama, Y. Kikuchi, M. Nakhostin, H. Sabet, A. Ishizaki, W. Yamashita, T. Togashi, J. Arikawa, H. Akiyama, K. Koyata

Abstract:

The purpose of this study is to introduce a new interface program to calculate dose distributions with the Monte Carlo method in complex heterogeneous systems such as organs or tissues in proton therapy. This interface program was developed under MATLAB and includes a friendly graphical user interface with several tools, such as image property adjustment and result display. The quadtree decomposition technique was used as an image segmentation algorithm to create optimum geometries from Computed Tomography (CT) images for dose calculations of the proton beam. The result of this technique is a number of non-overlapping squares of different sizes in every image. In this way, the resolution of the image segmentation is high enough in and near heterogeneous areas to preserve the precision of the dose calculations, and low enough in homogeneous areas to directly reduce the number of cells. Furthermore, a cell reduction algorithm can be used to combine neighboring cells of the same material. The validation of this method has been done in two ways: first, in comparison with experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) at Tohoku University, and second, in comparison with data based on the polybinary tissue calibration method, performed at CYRIC. These results are presented in this paper. This program can read the output file of the Monte Carlo code while the region of interest is selected manually, and it gives a plot of the proton beam dose distribution superimposed onto the CT images.
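As a sketch of the quadtree decomposition step on a CT slice: a block is split into four until its intensity range falls below a homogeneity tolerance, giving large cells in homogeneous regions and small cells near heterogeneities. The tolerance and minimum block size are illustrative assumptions, and a square power-of-two image is assumed.

```python
# Minimal sketch: quadtree decomposition into non-overlapping homogeneous cells.
import numpy as np

def quadtree(image, x=0, y=0, size=None, tol=10, min_size=2, leaves=None):
    if leaves is None:
        leaves = []
        size = image.shape[0]                         # square, power-of-two image assumed
    block = image[y:y + size, x:x + size]
    if size <= min_size or block.max() - block.min() <= tol:
        leaves.append((x, y, size, float(block.mean())))        # homogeneous cell
    else:
        half = size // 2
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            quadtree(image, x + dx, y + dy, half, tol, min_size, leaves)
    return leaves

ct_slice = np.random.randint(0, 255, (256, 256))      # placeholder for a CT slice
print(len(quadtree(ct_slice)), "non-overlapping cells")
```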

Keywords: Monte Carlo, CT images, Quadtree decomposition, Interface program, Proton beam

904 Optimized Vector Quantization for Bayer Color Filter Array

Authors: M. Lakshmi, J. Senthil Kumar

Abstract:

To reduce cost, digital cameras use a single image sensor to capture color images. The Color Filter Array (CFA) in digital cameras permits only one of the three primary (red-green-blue) colors to be sensed at each pixel, and the two missing components are interpolated through a method named demosaicking. The captured data are interpolated into a full color image and compressed in applications. Color interpolation before compression leads to data redundancy. This paper proposes a new Vector Quantization (VQ) technique to construct a VQ codebook with the Differential Evolution (DE) algorithm. The new technique is compared to the conventional Linde-Buzo-Gray (LBG) method.
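As a sketch of the conventional LBG baseline the proposed DE-optimized codebook is compared against: codevector splitting followed by Lloyd (nearest-neighbour/centroid) iterations over CFA training vectors. The block size and training data below are placeholders.

```python
# Minimal sketch: LBG codebook training by splitting + Lloyd refinement.
import numpy as np

def lbg_codebook(vectors, codebook_size=16, eps=1e-3, n_iter=20):
    codebook = vectors.mean(axis=0, keepdims=True)          # start from one codevector
    while codebook.shape[0] < codebook_size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])   # split
        for _ in range(n_iter):                              # Lloyd refinement
            d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
            nearest = d.argmin(axis=1)
            for k in range(codebook.shape[0]):
                members = vectors[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

blocks = np.random.rand(500, 16)        # placeholder 4x4 blocks from one CFA channel
print(lbg_codebook(blocks).shape)       # (16, 16) codebook
```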

Keywords: Color Filter Array (CFA), Biorthogonal Wavelet, Vector Quantization (VQ), Differential Evolution (DE).

903 The Effect of Forest Fires on Physical Properties and Magnetic Susceptibility of Semi-Arid Soils in North-Eastern, Libya

Authors: G. S. Eldiabani, W. H. G. Hale, C. P. Heron

Abstract:

Forest areas are particularly susceptible to fires, which are often man-made. One of the most fire-affected forest regions in the world is the Mediterranean. Libya, in the Mediterranean region, has soils that are considered to be arid except in a small area called Aljabal Alakhdar (Green Mountain), which is the geographic area covered by this study. Like other forests in the Mediterranean, it has suffered extreme degradation. This is mainly due to people removing firewood, or sometimes converting forested areas to agricultural use, as well as fires, which may alter several soil chemical and physical properties. The purpose of this study was to evaluate the effects of fires on the physical properties of the soil of the Aljabal Alakhdar forest in the north-east of Libya. The physical properties of soil following fire in two geographic areas have been determined, with soils subjected to fire compared to those in adjacent unburned areas at one coastal and one mountain site. The physical properties studied were: soil particle size (soil texture), soil water content, soil porosity and soil particle density. For the first time in Libyan soils, the effect of burning on the magnetic susceptibility properties of the soils was also tested. The results showed that the soils at both study sites, irrespective of burning or depth, fell into the category of a silt loam texture, with low water content, homogeneous porosity through the soil profiles, relatively high soil particle density values, and a much greater soil magnetic susceptibility in the top layer at both sites. Except for the soil water content and magnetic susceptibility, fire has not had a clear effect on the soils’ physical properties.

Keywords: Aljabal Alakhdar, the coastal site, the mountain site, fire effect, soil particle size, soil water content, soil porosity, soil particle density, soil magnetic susceptibility.

902 Face Authentication for Access Control based on SVM using Class Characteristics

Authors: SeHun Lim, Sanghoon Kim, Sun-Tae Chung, Seongwon Cho

Abstract:

Face authentication for access control is a face membership authentication that passes the person of the incoming face if he or she turns out to be one of the enrolled persons, based on face recognition, and rejects them if not. Face membership authentication belongs to the two-class classification problem, where the SVM (Support Vector Machine) has been successfully applied and shows better performance compared to conventional threshold-based classification. However, most previous SVMs have been trained using image feature vectors extracted from face images of each class member (enrolled class/unenrolled class), so that they are not robust to variations in illumination, pose, and facial expression and are much affected by changes in the member configuration of the enrolled class. In this paper, we propose an effective face membership authentication method based on SVM using class discriminating features, which represent an incoming face image's associability with each class distinctively. These class discriminating features are weakly related to image features, so that they are less affected by variations in illumination, pose and facial expression. Through experiments, it is shown that the proposed face membership authentication method performs better than the threshold rule-based or the conventional SVM-based authentication methods and is relatively less affected by changes in member size and membership.

Keywords: Face Authentication, Access control, member ship authentication, SVM.

901 Efficient Method for ECG Compression Using Two Dimensional Multiwavelet Transform

Authors: Morteza Moazami-Goudarzi, Mohammad H. Moradi, Ali Taheri

Abstract:

In this paper we introduce an effective ECG compression algorithm based on the two-dimensional multiwavelet transform. Multiwavelets offer simultaneous orthogonality, symmetry and short support, which is not possible with scalar two-channel wavelet systems. These features are known to be important in signal processing. Thus multiwavelets offer the possibility of superior performance for image processing applications. The SPIHT algorithm has achieved notable success in still image coding. We suggest applying the SPIHT algorithm to the 2-D multiwavelet transform of 2-D arranged ECG signals. Experiments on selected records of ECG from the MIT-BIH arrhythmia database revealed that the proposed algorithm is significantly more efficient in comparison with previously proposed ECG compression schemes.

Keywords: ECG signal compression, multi-rate processing, 2-D multiwavelet, prefiltering.

900 Extended Set of DCT-TPLBP and DCT-FPLBP for Face Recognition

Authors: El Mahdi Barrah, Said Safi, Abdessamad Malaoui

Abstract:

In this paper, we describe an application for face recognition. Many studies have used local descriptors to characterize a face, but the performance of these local descriptors remains limited compared with global descriptors (which work on the entire image). The application of local descriptors (cutting the image into blocks) must be able to retain the advantages of both global and local methods in the Discrete Cosine Transform (DCT) domain. This system uses neural network techniques. The latter method provides a good compromise between the two approaches in terms of simplicity of calculation and classification performance. Finally, we compare our results with those obtained from other local and global conventional approaches.

Keywords: Face detection, face recognition, discrete cosine transform (DCT), FPLBP, TPLBP, NN.

899 Comparative Analysis of Classical and Parallel Inpainting Algorithms Based on Affine Combinations of Projections on Convex Sets

Authors: Irina Maria Artinescu, Costin Radu Boldea, Eduard-Ionut Matei

Abstract:

The paper is a comparative study of two classical variants of parallel projection methods for solving the convex feasibility problem with their equivalents that involve variable weights in the construction of the solutions. We used a graphical representation of these methods for inpainting a convex area of an image in order to investigate their effectiveness in image reconstruction applications. We also present a numerical analysis of the convergence of these four algorithms in terms of the average number of steps and execution time, in a classical CPU and, alternatively, in a parallel GPU implementation.
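As a generic sketch of a parallel (weighted-average) projection iteration applied to inpainting, posed as a convex feasibility problem with two sets: images that agree with the known pixels, and band-limited images. The weights mirror the affine-combination idea, with equal weights giving the classical parallel variant; this is not the paper's exact pair of convex sets.

```python
# Minimal sketch: weighted parallel projections onto two convex sets for inpainting.
import numpy as np

def project_data(x, known_mask, observed):
    y = x.copy()
    y[known_mask] = observed[known_mask]            # projection onto data consistency
    return y

def project_bandlimited(x, keep=0.15):
    f = np.fft.fft2(x)
    rows, cols = x.shape
    mask = np.zeros_like(x, dtype=bool)
    r, c = int(rows * keep), int(cols * keep)
    mask[:r, :c] = mask[:r, -c:] = mask[-r:, :c] = mask[-r:, -c:] = True
    return np.real(np.fft.ifft2(f * mask))          # projection onto a low-pass subspace

def inpaint(observed, known_mask, w1=0.5, w2=0.5, n_iter=200):
    x = observed * known_mask                       # zeros inside the damaged area
    for _ in range(n_iter):
        x = w1 * project_data(x, known_mask, observed) + w2 * project_bandlimited(x)
    return x
```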

Keywords: convex feasibility problem, convergence analysis, inpainting, parallel projection methods.

898 Apply Super-SVA to SAR Imaging with Both Aperture Gaps and Bandwidth Gaps

Authors: Wenshuai Zhai, Yunhua Zhang

Abstract:

Synthetic aperture radar (SAR) imaging usually requires echo data collected continuously, pulse by pulse, over a certain bandwidth. However, in real situations, data collection or part of the signal spectrum can be interrupted for various reasons, i.e., there will be gaps in the spatial spectrum. In this case we need to find ways to fill the resulting gaps and obtain an image with the defined resolution. In this paper we introduce our work on applying the iterative spatially variant apodization (Super-SVA) technique to extrapolate the spatial spectrum in both the azimuth and range directions so as to fill the gaps and obtain a correct radar image.

Keywords: SAR imaging, sparse aperture, stepped frequency chirp signal, high resolution, Super-SVA.

897 Low Light Image Enhancement with Multi-Stage Interconnected Autoencoders Integration in Pix-to-Pix GAN

Authors: Muhammad Atif, Cang Yan

Abstract:

The enhancement of low-light images is a significant area of study aimed at improving the quality of images captured in challenging lighting environments. Recently, methods based on Convolutional Neural Networks (CNN) have gained prominence, as they offer state-of-the-art performance. However, many CNN-based approaches rely on increasing the size and complexity of the neural network. In this study, we propose an alternative method for improving low-light images using an autoencoder-based multiscale knowledge transfer model. Our method leverages the power of three autoencoders, where the encoders of the first two autoencoders are directly connected to the decoder of the third autoencoder. Additionally, the decoders of the first two autoencoders are connected to the encoder of the third autoencoder. This architecture enables effective knowledge transfer, allowing the third autoencoder to learn and benefit from the enhanced knowledge extracted by the first two autoencoders. We further integrate the proposed model into the Pix-to-Pix GAN framework. By integrating our proposed model as the generator in the GAN framework, we aim to produce enhanced images that not only exhibit improved visual quality but also possess a more authentic and realistic appearance. The experimental results, both qualitative and quantitative, show that our method outperforms state-of-the-art methodologies.

Keywords: Low light image enhancement, deep learning, convolutional neural network, image processing.
