Search results for: increasing contrast
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2390

2360 On-line Image Mosaicing of Live Stem Cells

Authors: Alessandro Bevilacqua, Alessandro Gherardi, Filippo Piccinini

Abstract:

Image mosaicing is a technique that enlarges the effective field of view of a camera. It is employed, for instance, to build panoramas with consumer cameras and, in scientific applications, to image an entire culture in microscopy. A mosaic of a cell culture is usually acquired with an automated microscope, but the stitching is often performed in batch mode through CPU-intensive minimization algorithms. In addition, live stem cells are studied in phase contrast, which yields a low contrast that cannot be improved further. We present a method to estimate the flat field from images of live stem cells, even at 100% confluence, which permits accurate mosaics to be built on-line using high-performance algorithms.
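
As a rough illustration of the pipeline described above (not the authors' code; the median-plus-Gaussian flat-field estimate and the function names are assumptions), the sketch below corrects each frame by an estimated flat field and registers consecutive frames by phase correlation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.registration import phase_cross_correlation

def estimate_flat_field(frames, sigma=50):
    """Crude flat-field estimate: median over many frames, then heavy smoothing.
    (Illustrative only; the paper derives the flat field even at 100% confluence.)"""
    median_frame = np.median(np.stack(frames), axis=0)
    flat = gaussian_filter(median_frame, sigma)
    return flat / flat.mean()

def register_pair(prev_frame, frame, flat):
    """Flat-field correct two consecutive frames and estimate their offset."""
    prev_corr, corr = prev_frame / flat, frame / flat
    shift, _, _ = phase_cross_correlation(prev_corr, corr)   # (dy, dx) in pixels
    return corr, shift
```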

Keywords: Microscopy, image mosaicing, stem cells.

2359 On Face Recognition using Gabor Filters

Authors: Al-Amin Bhuiyan, Chang Hong Liu

Abstract:

Gabor-based face representation has achieved enormous success in face recognition. This paper presents a novel algorithm for face recognition using neural networks trained on Gabor features. The system begins by convolving a face image with a bank of Gabor filters at different scales and orientations. The two novel contributions of this paper are the scaling of rms contrast and the introduction of a fuzzily skewed filter. The neural network employed for face recognition is a multilayer perceptron (MLP) trained with the backpropagation algorithm, and it incorporates the convolution responses of the Gabor jet. The effectiveness of the algorithm has been demonstrated on a face database with images captured under different illumination conditions.
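
The following sketch shows the kind of Gabor filter bank convolution the abstract starts from, using OpenCV; the scales, orientations, and filter parameters are illustrative assumptions, and the paper's rms-contrast scaling and fuzzily skewed filter are not reproduced here.

```python
import cv2
import numpy as np

def gabor_bank(scales=(7, 11, 15, 19, 23), orientations=8):
    """Bank of Gabor kernels at several scales and orientations.
    Positional arguments to getGaborKernel: (ksize, sigma, theta, lambda, gamma, psi)."""
    kernels = []
    for ksize in scales:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kernels.append(cv2.getGaborKernel((ksize, ksize), ksize / 4.0,
                                              theta, ksize / 2.0, 0.5, 0))
    return kernels

def gabor_features(face_gray):
    """Convolve a face image with every kernel and stack the responses;
    in practice the responses are downsampled before being fed to the MLP."""
    face = face_gray.astype(np.float32)
    return np.stack([cv2.filter2D(face, -1, k) for k in gabor_bank()])
```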

Keywords: Fuzzily skewed filter, Gabor filter, rms contrast, neural network.

2358 Hand Vein Image Enhancement With Radon Like Features Descriptor

Authors: Randa Boukhris Trabelsi, Alima Damak Masmoudi, Dorra Sellami Masmoudi

Abstract:

Nowadays, hand vein recognition has attracted increasing attention in biometric identification systems. Hand vein images are generally acquired with low contrast and irregular illumination. Accordingly, with good preprocessing of the hand vein image, features can be extracted easily, even with simple binarization. In this paper, an approach is proposed to improve the quality of hand vein images. First, a brief survey of existing enhancement methods is given. Then a Radon-Like Features (RLF) method is applied to preprocess the hand vein image. Finally, experimental results show that the proposed method is more effective and reliable in improving hand vein images.

Keywords: Hand Vein, Enhancement, Contrast, RLF, SDME

2357 Detecting Rat’s Kidney Inflammation Using Real Time Photoacoustic Tomography

Authors: M. Y. Lee, D. H. Shin, S. H. Park, W.C. Ham, S.K. Ko, C. G. Song

Abstract:

Photoacoustic tomography (PAT) is a promising medical imaging modality that combines optical imaging contrast with the spatial resolution of ultrasound imaging, and it can distinguish changes in biological features. However, a real-time PAT system must be verified against the photoacoustic effect in tissue. We have therefore developed a real-time PAT system using a custom-developed data acquisition board and an ultrasound linear probe. To evaluate the performance of the system, a phantom test was performed. The system showed satisfactory performance in those experiments, and its usefulness has been confirmed. Using the real-time PAT system, we monitored the degradation of the inflammation induced in a rat's kidney.

Keywords: Photoacoustic tomography, inflammation detection, rat, kidney, contrast agent, ultrasound.

2356 Detecting Subsurface Circular Objects from Low Contrast Noisy Images: Applications in Microscope Image Enhancement

Authors: Soham De, Nupur Biswas, Abhijit Sanyal, Pulak Ray, Alokmay Datta

Abstract:

Particle detection in very noisy, low-contrast images is an active field of research in image processing. In this article, a method is proposed for the efficient detection and sizing of subsurface spherical particles, and it is applied to the processing of softly fused Au nanoparticles. Transmission Electron Microscopy (TEM) is used to image the nanoparticles, and the proposed algorithm has been tested on the two-dimensional projected TEM images obtained. The results are compared with data obtained by transmission optical spectroscopy, as well as with conventional circular object detection algorithms.
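
A minimal sketch of such a detection chain, assuming the pipeline suggested by the keywords (median filter, Laplacian sharpening, circular Hough transform) with placeholder thresholds and radii rather than the paper's values:

```python
import cv2
import numpy as np

def detect_particles(tem_image_gray):
    """Detect and size roughly circular particles in a noisy, low-contrast image.
    Illustrative pipeline only; parameter values are placeholders."""
    denoised = cv2.medianBlur(tem_image_gray, 5)

    # Laplacian sharpening: subtract a scaled Laplacian from the smoothed image.
    lap = cv2.Laplacian(denoised, cv2.CV_32F)
    sharp = np.clip(denoised.astype(np.float32) - 0.7 * lap, 0, 255).astype(np.uint8)

    circles = cv2.HoughCircles(sharp, cv2.HOUGH_GRADIENT, dp=1.2, minDist=10,
                               param1=100, param2=30, minRadius=3, maxRadius=40)
    return circles[0] if circles is not None else np.empty((0, 3))   # rows of (x, y, r)
```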

Keywords: Transmission Electron Microscopy, Circular Hough Transform, Au Nanoparticles, Median Filter, Laplacian Sharpening Filter, Canny Edge Detection

2355 A Perceptual Image Coding method of High Compression Rate

Authors: Fahmi Kammoun, Mohamed Salim Bouhlel

Abstract:

In the framework of image compression by wavelet transforms, we propose a perceptual method that incorporates characteristics of the Human Visual System (HVS) in the quantization stage. Indeed, human eyes do not have equal sensitivity across the frequency bandwidth. Therefore, the clarity of the reconstructed images can be improved by weighting the quantization according to the Contrast Sensitivity Function (CSF), and the visual artifacts at low bit rates are minimized. To evaluate our method, we use the Peak Signal-to-Noise Ratio (PSNR) and a new evaluation criterion that takes visual criteria into account. The experimental results show that our technique improves image quality at the same compression ratio.
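
A minimal sketch of CSF-style quantization weighting on wavelet subbands, assuming placeholder per-level weights rather than the paper's actual CSF values:

```python
import numpy as np
import pywt

def csf_weighted_quantize(image, base_step=8.0, wavelet="bior4.4", levels=3):
    """Quantize wavelet detail subbands with level-dependent steps, mimicking a
    CSF weighting (placeholder weights, not the paper's CSF values)."""
    # wavedec2 returns [cA, (cH, cV, cD) coarsest, ..., (cH, cV, cD) finest].
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    steps = [base_step * w for w in (0.5, 0.75, 1.0)]   # coarsest -> finest
    quantized = [coeffs[0]]                             # keep the approximation band
    for details, step in zip(coeffs[1:], steps):
        quantized.append(tuple(np.round(d / step) * step for d in details))
    return pywt.waverec2(quantized, wavelet)
```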

Keywords: Contrast Sensitivity Function, Human Visual System, Image compression, Wavelet transforms.

2354 Blind Source Separation Using Modified Gaussian FastICA

Authors: V. K. Ananthashayana, Jyothirmayi M.

Abstract:

This paper addresses the problem of source separation in images. We propose a FastICA algorithm employing a modified Gaussian contrast function for blind source separation. Experimental results show that the proposed modified Gaussian FastICA performs blind source separation effectively and yields better-quality images. A comparative study has been made with other popular existing algorithms. The peak signal-to-noise ratio (PSNR) and improved signal-to-noise ratio (ISNR) are used as metrics for evaluating the quality of the images, and the ICA-specific Amari error is used to measure the quality of the separation.
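
For illustration, the sketch below performs blind source separation of linearly mixed images with scikit-learn's FastICA; the built-in Gaussian contrast (`fun='exp'`) is used as a stand-in, since the paper's modified Gaussian contrast would be supplied as a custom callable.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_images(mixed_images):
    """Blind source separation of linearly mixed images with FastICA.
    fun='exp' selects the Gaussian contrast g(u) = u * exp(-u**2 / 2); a modified
    Gaussian contrast could be passed as a callable instead."""
    h, w = mixed_images[0].shape
    X = np.stack([m.ravel() for m in mixed_images])        # one mixture per row
    ica = FastICA(n_components=len(mixed_images), fun="exp", random_state=0)
    sources = ica.fit_transform(X.T).T                     # one source per row
    return [s.reshape(h, w) for s in sources]
```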

Keywords: Amari error, Blind Source Separation, Contrast function, Gaussian function, Independent Component Analysis.

2353 Approximation of PE-MOCVD to ALD for TiN Concerning Resistivity and Chemical Composition

Authors: D. Geringswald, B. Hintze

Abstract:

The miniaturization of circuits is advancing. During chip manufacturing, structures are filled, for example, by metal organic chemical vapor deposition (MOCVD). Since this process reaches its limits at very high aspect ratios, alternatives such as atomic layer deposition (ALD) can be used, which requires extending existing coating systems. However, it is an open question to what extent MOCVD can achieve results similar to those of an ALD process. In this context, this work addresses the characterization of a metal organic vapor deposition of titanium nitride. Based on the current state of the art, the film properties considered are coating thickness, sheet resistance, resistivity, stress, and chemical composition. The setting parameters used are temperature, plasma gas ratio, plasma power, plasma treatment time, deposition time, deposition pressure, number of cycles, and TDMAT flow. The derived process instructions for unstructured wafers and for the inside of a structure with a high aspect ratio include lowering the process temperature and increasing the number of cycles, the deposition and plasma treatment times, and the plasma gas ratio of hydrogen to nitrogen (H2:N2). In contrast to the current process configuration, the deposited titanium nitride (TiN) layer is more uniform throughout the entire test structure. Consequently, this paper provides approaches for employing MOCVD for structures with increasing aspect ratios.

Keywords: ALD, high aspect ratio, PE-MOCVD, TiN.

2352 Sperm Identification Using Elliptic Model and Tail Detection

Authors: Vahid Reza Nafisi, Mohammad Hasan Moradi, Mohammad Hosain Nasr-Esfahani

Abstract:

The conventional assessment of human semen is highly subjective, with considerable intra- and inter-laboratory variability. Computer-Assisted Sperm Analysis (CASA) systems provide a rapid and automated assessment of sperm characteristics, together with improved standardization and quality control. However, the outcome of CASA systems is sensitive to the experimental setup. While conventional CASA systems use digital microscopes with phase-contrast accessories to produce higher-contrast images, we have used raw semen samples (no staining materials) and a regular light microscope, with a digital camera attached directly to its eyepiece, to ensure cost benefits and simple assembly of the system. Since accurately finding the sperm cells in the semen image is the first step of the examination and analysis, any error in this step can affect the outcome of the analysis. This article introduces an algorithm for finding sperm cells in low-contrast images: first, an image enhancement algorithm is applied to remove extraneous particles from the image; then, the foreground particles (including sperm cells and round cells) are segmented from the background; finally, based on certain features and criteria, sperm cells are separated from the other cells.
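
A highly simplified sketch of the three stages (enhance, segment the foreground, separate by shape features), with placeholder thresholds that are assumptions rather than the paper's criteria:

```python
import numpy as np
from skimage import filters, measure, morphology

def find_sperm_candidates(gray, min_area=40, max_area=400, min_eccentricity=0.5):
    """Very rough version of the three stages described above; thresholds are
    placeholders, not the paper's criteria."""
    smoothed = filters.gaussian(gray, sigma=1.0)               # suppress small debris
    mask = smoothed < filters.threshold_otsu(smoothed)         # assume dark cells on a bright field
    mask = morphology.remove_small_objects(mask, min_size=min_area)

    candidates = []
    for region in measure.regionprops(measure.label(mask)):
        # Elliptic-shape criterion: keep elongated, head-sized blobs.
        if region.area <= max_area and region.eccentricity >= min_eccentricity:
            candidates.append(region.centroid)                 # (row, col) of each head
    return candidates
```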

Keywords: Computer-Assisted Sperm Analysis (CASA), Sperm identification, Tail detection, Elliptic shape model.

2351 A Low Complexity Frequency Offset Estimation for MB-OFDM based UWB Systems

Authors: Wang Xue, Liu Dan, Liu Ying, Wang Molin, Qian Zhihong

Abstract:

A low-complexity, highly accurate frequency offset estimator for multi-band orthogonal frequency division multiplexing (MB-OFDM) based ultra-wideband (UWB) systems is presented, covering different carrier frequency offsets, channel frequency responses, and preamble patterns in different bands. Using a half-cycle Constant Amplitude Zero Autocorrelation (CAZAC) sequence as the preamble, an estimator with a semi-cross contrast scheme between two successive OFDM symbols is proposed. The CRLB and the complexity of the proposed algorithm are derived. Compared to the reference estimators, the proposed method requires significantly less complexity (about 50%) for all preamble patterns of the MB-OFDM systems, and the CRLB results indicate good performance.
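
A toy sketch of CAZAC-based carrier frequency offset estimation from the phase rotation between two successive preamble symbols; the Zadoff-Chu sequence and the plain correlation used here are generic stand-ins for the paper's half-cycle CAZAC preamble and semi-cross contrast scheme.

```python
import numpy as np

def zadoff_chu(n_zc=128, root=1):
    """Zadoff-Chu sequence (even-length form), a standard CAZAC sequence; the
    paper uses a half-cycle CAZAC preamble of its own design."""
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * root * n ** 2 / n_zc)

def estimate_cfo(rx_sym1, rx_sym2, symbol_spacing):
    """CFO (cycles/sample) from the phase rotation between two successive
    received preamble symbols that are `symbol_spacing` samples apart."""
    corr = np.vdot(rx_sym1, rx_sym2)              # sum of conj(sym1) * sym2
    return np.angle(corr) / (2 * np.pi * symbol_spacing)

# Toy usage: a CFO of 1e-3 cycles/sample applied to a repeated preamble.
preamble = zadoff_chu(128)
rx = np.tile(preamble, 2) * np.exp(2j * np.pi * 1e-3 * np.arange(256))
print(estimate_cfo(rx[:128], rx[128:], symbol_spacing=128))   # ~1e-3
```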

Keywords: CAZAC, Frequency Offset, Semi-cross Contrast, MB-OFDM, UWB

2350 Ultrasonic Echo Image Adaptive Watermarking Using the Just-Noticeable Difference Estimation

Authors: Amnach Khawne, Kazuhiko Hamamoto, Orachat Chitsobhuk

Abstract:

Many image watermarking methods exploiting the properties of the human visual system (HVS) have been proposed in the literature. The visual threshold component is usually related either to the spatial contrast sensitivity function (CSF) or to visual masking. With contrast masking in particular, most methods do not consider the effect near edge regions, even though the HVS is especially sensitive to what happens in edge areas. This paper proposes an ultrasound image watermarking scheme using a visual threshold corresponding to the HVS, in which the coefficients of each DCT block are classified into texture, edge, and plain areas. This classification not only supports imperceptibility when the watermark is inserted into an image but also achieves robust watermark detection. A comparison of the proposed method with other methods shows that it is robust to blockwise memoryless manipulations as well as to noise addition.

Keywords: Medical image watermarking, Human Visual System, Image Adaptive Watermark

2349 A Perceptually Optimized Wavelet Embedded Zero Tree Image Coder

Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf

Abstract:

In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that applies perceptual weighting to the wavelet transform coefficients before they drive the SPIHT encoding algorithm, in order to reach a targeted bit rate with a perceptual quality improvement over the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the HVS, which plays an important role in our POEZIC quality assessment. The POEZIC coder is based on a vision model that incorporates various masking effects of human visual system (HVS) perception: the coder weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed from 1) luminance and contrast masking, 2) the contrast sensitivity function (CSF), which provides the perceptual decomposition weighting, and 3) the Wavelet Error Sensitivity (WES), used to reduce perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique, yet the experimental results show that our coder achieves very good performance in terms of quality measurement.

Keywords: DWT, linear-phase 9/7 filter, 9/7 Wavelets Error Sensitivity WES, CSF implementation approaches, JND Just Noticeable Difference, Luminance masking, Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.

2348 Multichannel Image Mosaicing of Stem Cells

Authors: Alessandro Bevilacqua, Alessandro Gherardi, Filippo Piccinini

Abstract:

Image mosaicing techniques are usually employed to offer researchers a wider field of view of microscopic images of biological samples. A mosaic is commonly achieved using automated microscopes and often with a single "color" channel, whether for natural or fluorescent analysis. In this work we present a method to build three subsequent mosaics of the same part of a stem cell culture, analyzed in phase contrast and in fluorescence, with a common non-automated inverted microscope. The mosaics obtained are then merged to mark, in the original phase contrast images, the nuclei and cytoplasm of the cells with reference to a mosaic of the whole culture rather than to single images. The experiments carried out prove the effectiveness of our approach on cultures of cells stained with calcein (green, cytoplasm and nuclei) and Hoechst (blue, nuclei) probes.

Keywords: Microscopy, image mosaicing, fluorescence, stem cells.

2347 Conditions of the Anaerobic Digestion of Biomass

Authors: N. Boontian

Abstract:

Biological conversion of biomass to methane has received increasing attention in recent years, and grasses have been explored for their potential anaerobic digestion to methane. In this review, extensive literature data have been tabulated and classified, and the influences of several parameters on the potential of these feedstocks to produce methane are presented. Lignocellulosic biomass represents a mostly unused source for biogas and ethanol production. Many factors, including lignin content, crystallinity of cellulose, and particle size, limit the digestibility of the hemicellulose and cellulose present in lignocellulosic biomass. Pretreatments have been used to improve this digestibility; each pretreatment has its own effects on cellulose, hemicellulose, and lignin, the three main components of lignocellulosic biomass. Solid-state anaerobic digestion (SS-AD) generally occurs at solid concentrations higher than 15%. In contrast, liquid anaerobic digestion (AD) handles feedstocks with solid concentrations between 0.5% and 15%. Animal manure, sewage sludge, and food waste are generally treated by liquid AD, while organic fractions of municipal solid waste (OFMSW) and lignocellulosic biomass such as crop residues and energy crops can be processed through SS-AD. An increase in operating temperature can improve both the biogas yield and the production efficiency; other practices, such as using AD digestate or leachate as an inoculant or decreasing the solid content, may increase biogas yield but have a negative impact on production efficiency. Focus is placed on substrate pretreatment in anaerobic digestion (AD) as a means of increasing biogas yields using today's diversified substrate sources.

Keywords: Anaerobic digestion, Lignocellulosic biomass, Methane production, Optimization, Pretreatment.

2346 A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding

Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf

Abstract:

In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to the wavelet coefficients before they drive the SPIHT encoding algorithm, in order to reach a targeted bit rate with a perceptual quality improvement, given a bit rate and a fixation point that determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the HVS, which plays an important role in our POEFIC quality assessment. The POEFIC coder is based on a vision model that incorporates various masking effects of human visual system (HVS) perception: the coder weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed from 1) foveation masking, to remove or reduce considerable high frequencies in peripheral regions, 2) luminance and contrast masking, and 3) the contrast sensitivity function (CSF), which provides the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique, yet the experimental results show that our coder achieves very good performance in terms of quality measurement.

Keywords: DWT, linear-phase 9/7 filter, Foveation Filtering, CSF implementation approaches, 9/7 Wavelet JND Thresholds and Wavelet Error Sensitivity WES, Luminance and Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.

2345 Contrast-Enhanced Multispectal Upconversion Fluorescence Analysis for High-Resolution in-vivo Deep Tissue Imaging

Authors: Lijiang Wang, Wei Wang, Yuhong Xu

Abstract:

Lanthanide-doped upconversion nanoparticles, which convert near-infrared light to visible light, have attracted growing interest because of their great potential in fluorescence imaging. Upconversion fluorescence imaging with excitation in the near-infrared (NIR) region has been used for imaging biological cells and tissues; however, improving the detection sensitivity and reducing absorption and scattering in biological tissues are as yet unresolved problems. In the present study, a novel NIR-reflected multispectral imaging system was developed for upconversion fluorescence imaging in small animals. With this system, we obtained high-contrast images free of autofluorescence when biocompatible UCPs were injected near the body surface or deep into the tissue. Furthermore, we extracted the respective spectra of the upconversion fluorescence and relatively quantified the fluorescence intensity with multispectral analysis. To our knowledge, this is the first time the upconversion fluorescence has been analyzed and quantified in small animal imaging.

Keywords: Multispectral imaging, near-infrared, upconversion fluorescence imaging, upconversion nanoparticles.

2344 An Improved Illumination Normalization based on Anisotropic Smoothing for Face Recognition

Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Seongwon Cho

Abstract:

Robust face recognition under varying illumination environments is very difficult and must be achieved for successful commercialization. In this paper, we propose an improved illumination normalization method for face recognition. Illumination normalization based on anisotropic smoothing is known to be effective, but it deteriorates the intensity contrast of the original image and blurs edges. The proposed method improves the previous anisotropic smoothing-based illumination normalization so that it increases the intensity contrast and enhances the edges while diminishing the effect of illumination variations. As a result of these improvements, face images preprocessed by the proposed illumination normalization method have more distinctive feature vectors (Gabor feature vectors) for face recognition. Through face recognition experiments based on Gabor feature vector similarity, the effectiveness of the proposed illumination normalization method is verified.
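
A minimal sketch of the reflectance-style normalization underlying such methods, with Gaussian smoothing standing in for the edge-preserving anisotropic smoothing of the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_illumination(face_gray, sigma=15, eps=1e-6):
    """Basic illumination normalization: estimate the slowly varying luminance
    field L and keep the reflectance R = I / L. Gaussian smoothing is used here
    only as a stand-in for anisotropic (edge-preserving) smoothing."""
    img = face_gray.astype(np.float64) + eps
    luminance = gaussian_filter(img, sigma) + eps
    reflectance = img / luminance
    # Rescale to [0, 255] for subsequent Gabor feature extraction.
    reflectance -= reflectance.min()
    return (255 * reflectance / (reflectance.max() + eps)).astype(np.uint8)
```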

Keywords: Illumination Normalization, Face Recognition, Anisotropic smoothing, Gabor feature vector.

2343 Preparation of CuAlO2 Thin Films on Si or Sapphire Substrate by Sol-Gel Method Using Metal Acetate or Nitrate

Authors: Takashi Ehara, Takayoshi Nakanishi, Kohei Sasaki, Marina Abe, Hiroshi Abe, Kiyoaki Abe, Ryo Iizaka, Takuya Sato

Abstract:

CuAlO2 thin films are prepared on Si or sapphire substrates by the sol-gel method using two kinds of sols: one is a combination of Cu acetate and basic Al acetate, and the other is Cu nitrate and Al nitrate. In the case of the acetate sol, XRD peaks of CuAlO2 were observed at annealing temperatures of 800-950 ºC on both Si and sapphire substrates. In contrast, for the films prepared using the nitrate sol on Si substrates, XRD peaks of CuAlO2 were observed only at annealing temperatures of 800-850 ºC. At an annealing temperature of 850 ºC, peaks of other species were observed besides the CuAlO2 peaks; the CuAlO2 peaks then disappeared at an annealing temperature of 900 °C while the intensity of the other peaks increased. The intensity of the other peaks decreased at an annealing temperature of 950 ºC with the appearance of a broad SiO2 peak. At present, we ascribe these peaks to metal silicides.

Keywords: CuAlO2, silicide, thin films, transparent conducting oxide, sol-gel.

2342 Contrast Enhancement in Digital Images Using an Adaptive Unsharp Masking Method

Authors: Z. Mortezaie, H. Hassanpour, S. Asadi Amiri

Abstract:

Captured images may suffer from Gaussian blur due to poor lens focus or camera motion. Unsharp masking is a simple and effective technique to boost image contrast and to improve digital images suffering from Gaussian blur. The technique sharpens object edges by adding the scaled high-frequency components of the image back to the original, so the quality of the enhanced image depends strongly on the characteristics of both the high-frequency components and the scaling/gain factor. Since the quality of an image may not be the same throughout, we propose an adaptive unsharp masking method in this paper. In this method, the gain factor is computed for individual pixels of the image by considering the gradient variations. Subjective and objective image quality assessments are used to compare the performance of the proposed method with both the classic and the recently developed unsharp masking methods. The experimental results show that the proposed method performs better than the other existing methods.
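
A minimal sketch of adaptive unsharp masking with a gradient-driven per-pixel gain; the gain rule is an illustrative assumption, not the authors' exact formula.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def adaptive_unsharp_mask(image, sigma=2.0, base_gain=1.0):
    """Unsharp masking with a per-pixel gain driven by local gradient strength,
    in the spirit of the method described above (illustrative gain rule)."""
    img = image.astype(np.float64)
    blurred = gaussian_filter(img, sigma)
    high_freq = img - blurred                      # high-frequency detail

    # Local gradient magnitude, normalized to [0, 1], modulates the gain so
    # that smooth regions are amplified less than detailed ones.
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    gain = base_gain * grad / (grad.max() + 1e-12)

    return np.clip(img + gain * high_freq, 0, 255).astype(np.uint8)
```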

Keywords: Unsharp masking, blur image, sub-region gradient, image enhancement.

2341 Enhancement of m-FISH Images using Spectral Unmixing

Authors: Martin De Biasio, Raimund Leitner, Franz G. Wuertz, Sergey Verzakov, Pierre J. Elbischger

Abstract:

Breast carcinoma is the most common form of cancer in women, and multicolour fluorescence in-situ hybridisation (m-FISH) is a common method for staging it. The interpretation of m-FISH images is complicated by two effects: (i) spectral overlap in the emission spectra of fluorochrome-marked DNA probes and (ii) tissue autofluorescence. In this paper, hyperspectral images of m-FISH samples are used and spectral unmixing is applied to produce false-colour images with higher contrast and better information content than standard RGB images. The spectral unmixing is realised by combinations of Orthogonal Projection Analysis (OPA), Alternating Least Squares (ALS), Simple-to-use Interactive Self-Modeling Mixture Analysis (SIMPLISMA), and VARIMAX, which are applied to the data to reduce tissue autofluorescence and resolve the spectral overlap in the emission spectra. The results show that spectral unmixing reduces the intensity caused by tissue autofluorescence by up to 78% and enhances image contrast by algorithmically reducing the overlap of the emission spectra.
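
As a generic illustration of linear spectral unmixing (a stand-in for the OPA/ALS/SIMPLISMA/VARIMAX combinations used in the paper), the sketch below estimates per-pixel fluorochrome abundances by non-negative least squares, assuming reference emission spectra are available:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_hyperspectral(cube, endmembers):
    """Per-pixel linear spectral unmixing by non-negative least squares.
    `cube` has shape (rows, cols, bands); `endmembers` has shape
    (bands, n_fluorochromes) and would come from reference emission spectra.
    Generic sketch, not the paper's unmixing pipeline."""
    rows, cols, bands = cube.shape
    n_sources = endmembers.shape[1]
    abundances = np.zeros((rows, cols, n_sources))
    for r in range(rows):
        for c in range(cols):
            abundances[r, c], _ = nnls(endmembers, cube[r, c])
    return abundances   # false-colour channels with autofluorescence separated out
```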

Keywords: breast carcinoma, hyperspectral imaging, m-FISH, spectral unmixing

2340 The Investigation of Precipitation Conditions of Chevreul’s Salt

Authors: Turan Çalban, Fatih Sevim, Oral Laçin

Abstract:

In this study, the precipitation conditions of Chevreul's salt were evaluated, and its structure was examined in the light of previous studies. Thermodynamically, the most important precipitation parameters are pH, temperature, and the sulphite-to-copper(II) ratio. Within the ranges studied, the amount of Chevreul's salt increased with increasing temperature and sulphite-to-copper(II) ratio, and with decreasing pH. The best solution medium for recovery of Chevreul's salt is the sulphur dioxide gas-water system; soluble sulphite salts are also used as efficient precipitating reagents. Chevreul's salt is generally used to produce highly pure copper powders from synthetic copper sulphate solutions and impure leach solutions. When the pH of the initial ammoniacal solution is greater than 8.5, the ammonia in the medium is not free and Chevreul's salt does not precipitate from solution; copper ammonium sulphide precipitates instead. The pH of the initial ammonia-containing solution must therefore be less than 8.5 for Chevreul's salt to precipitate.

Keywords: Chevreul’s salt, copper sulphites, mixed-valence sulphite compounds, precipitating.

2339 Incremental Learning of Independent Topic Analysis

Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda

Abstract:

In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of documents. The number of documents has been increasing since the spread of the Internet, and ITA is one method for analyzing such data: it extracts independent topics from the documents using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing collection, because ITA must process all of the documents at once, so its temporal and spatial costs are very high. We therefore present Incremental ITA, which extracts independent topics from an increasing number of documents by updating the topics extracted from the previous data whenever new documents are added. We show the results of applying Incremental ITA to benchmark datasets.

Keywords: Text mining, topic extraction, independent, incremental, independent component analysis.

2338 Optimization of Fin Type and Fin per Inch on Heat Transfer and Pressure Drop of an Air Cooler

Authors: A. Falavand Jozaei, A. Ghafouri

Abstract:

Operation enhancement in an air cooler depends on the rate of heat transfer and the pressure drop. In this paper, for a given heat duty, we study the effects of FPI (fins per inch) and fin type (circular and hexagonal fins) on heat transfer and pressure drop in an air cooler at the Arvand petrochemical plant in Iran. A program written in EES (Engineering Equation Solver), together with the Aspen B-JAC and HTFS+ packages, is used to solve the governing equations. First, the simulated results obtained from this program are compared to the experimental data for two FPI cases. The effect of FPI from 3 to 15 on the ratio of heat transfer (Q) to pressure drop (Q/Δp) is then evaluated; this ratio is one of the main parameters in the design and simulation of heat exchangers. The results show that heat transfer (Q) and pressure drop both increase steadily with increasing FPI, while the Q/Δp ratio increases up to FPI = 12 and then decreases gradually to FPI = 15, so the Q/Δp ratio is maximum at FPI = 12. Selecting an FPI between 8 and 12 therefore gives the optimum heat-transfer-to-pressure-drop ratio. Comparing circular and hexagonal fins, the Q/Δp ratio of hexagonal fins is higher than that of circular fins for FPI between 8 and 12 (the optimum range).

Keywords: Air cooler, circular and hexagonal fins, fin per inch, heat transfer and pressure drop.

2337 Performance of Compound Enhancement Algorithms on Dental Radiograph Images

Authors: S.A.Ahmad, M.N.Taib, N.E.A.Khalid, R.Ahmad, H.Taib

Abstract:

The purpose of this research is to compare original intra-oral digital dental radiograph images with images enhanced using a combination of image processing algorithms. Intra-oral digital dental radiographs are often noisy, have blurred edges, and are low in contrast, so a combination of sharpening and enhancement methods is used to overcome these problems. The three proposed compound algorithms are Sharp Adaptive Histogram Equalization (SAHE), Sharp Median Adaptive Histogram Equalization (SMAHE), and Sharp Contrast Limited Adaptive Histogram Equalization (SCLAHE). This paper presents an initial study of the perception of six dentists regarding the details of abnormal pathologies and the improvement of image quality in ten intra-oral radiographs. The research focuses on the detection of three types of pathology: periapical radiolucency, widened periodontal ligament space, and loss of lamina dura. The overall result shows that SCLAHE slightly improves the appearance of dental abnormalities over the original images and also outperforms the other two proposed compound algorithms.
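
One plausible reading of the SCLAHE combination, sketched with OpenCV (sharpening followed by contrast-limited adaptive histogram equalization); the parameter values are illustrative, not those of the study:

```python
import cv2

def sclahe(radiograph_gray, clip_limit=2.0, tile=(8, 8), amount=1.0):
    """Sharpen an 8-bit radiograph, then apply CLAHE.
    Parameter values are placeholders, not those reported by the authors."""
    blurred = cv2.GaussianBlur(radiograph_gray, (0, 0), 3)            # Gaussian with sigma = 3
    sharpened = cv2.addWeighted(radiograph_gray, 1 + amount, blurred, -amount, 0)

    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    return clahe.apply(sharpened)
```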

Keywords: intra-oral dental radiograph, histogram equalization, sharpening, CLAHE.

2336 High Gain Broadband Plasmonic Slot Nano-Antenna

Authors: H. S. Haroyan, V. R. Tadevosyan

Abstract:

A high-gain broadband plasmonic slot nano-antenna (PSNA) is considered. The theory of the PSNA has been developed; the analytical model also takes into account the electric field inside the metal, owing to the imperfectness of metals in the optical range, and a numerical investigation based on the finite element method (FEM) has been carried out. It should be mentioned that a Yagi-Uda configuration improves the directivity in the plane of the structure. In contrast, this paper demonstrates that the directivity of the proposed PSNA can be improved in the plane perpendicular to the structure by placing a reflecting metallic surface at a fixed distance under the slot. A directivity improvement leads to an increase in antenna gain, and this method of pattern improvement is well known from RF antenna design theory. Moreover, improving the directivity in the perpendicular plane gives more flexibility in applications such as enhancing the interaction of light with atoms, ions, and molecules using this type of plasmonic slot antenna. By analogy with dipole-type optical antennas, the working wavelength range has been widened by using a bowtie slot geometry, which makes the antenna broadband.

Keywords: Broadband antenna, high gain, slot nano-antenna, plasmonics.

2335 Enhanced Multi-Intensity Analysis in Multi-Scenery Classification-Based Macro and Micro Elements

Authors: R. Bremananth

Abstract:

Several computationally challenging issues are encountered while classifying complex natural scenes. In this paper, we address the problems encountered in rotation-invariant, multi-intensity analysis of overlapping multi-scene imagery. The existing literature proposes various techniques for multi-intensity analysis, but these algorithms have several restrictions when deployed for multi-scene overlapping classification. In order to resolve this problem, we present a framework based on macro and micro basis functions. The algorithm achieves a minimal classification false alarm rate while classifying overlapping multi-scene imagery. Furthermore, a quadrangle multi-intensity decay is invoked. Several parameters, such as rotation, classification, correlation, contrast, homogeneity, and energy, are utilized to analyze invariance for multi-scene classification. Benchmark datasets of complex natural scenes were collected, and the framework was evaluated on them. The results show that the framework achieves a significant improvement on gray-level co-occurrence matrix features for overlapping scenes across diverse orientations.
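
For reference, the contrast, homogeneity, energy, and correlation parameters mentioned above are commonly computed from the gray-level co-occurrence matrix, e.g. as in the scikit-image sketch below (orientation-averaged to tolerate rotation; this is generic GLCM code, not the paper's macro/micro framework):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # 'greycomatrix' in older scikit-image

def glcm_features(patch_gray_uint8, distances=(1,),
                  angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Gray-level co-occurrence features averaged over orientations, giving
    rotation-tolerant texture descriptors for an 8-bit gray patch."""
    glcm = graycomatrix(patch_gray_uint8, distances=list(distances),
                        angles=list(angles), levels=256,
                        symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()         # mean over distances/angles
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```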

Keywords: Automatic classification, contrast, homogeneity, invariant analysis, multi-scene analysis, overlapping.

2334 An Image Enhancement Method Based on Curvelet Transform for CBCT-Images

Authors: Shahriar Farzam, Maryam Rastgarpour

Abstract:

Image denoising plays an extremely important role in digital image processing, and Curvelet-based enhancement of clinical images has developed rapidly in recent years. In this paper, we present a contrast enhancement method for cone beam CT (CBCT) images based on the fast discrete curvelet transform (FDCT) implemented via the Unequally Spaced Fast Fourier Transform (USFFT). The transform returns a table of curvelet coefficients indexed by a scale parameter, an orientation, and a spatial location; accordingly, the coefficients obtained from FDCT-USFFT can be modified to enhance the contrast of an image. Our proposed method first applies this two-dimensional transform to the input image and then thresholds the curvelet coefficients to enhance the CBCT images. The use of the unequally spaced fast Fourier transform leads to an accurate, high-resolution reconstruction of the image. The experimental results indicate that the performance of the proposed method is superior to existing methods in terms of Peak Signal-to-Noise Ratio (PSNR) and the Effective Measure of Enhancement (EME).

Keywords: Curvelet transform, image enhancement, CBCT, image denoising.

2333 Detection of Ultrasonic Images in the Presence of a Random Number of Scatterers: A Statistical Learning Approach

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

The Support Vector Machine (SVM) is a statistical learning tool initially developed by Vapnik in 1979 and later extended into the more general concept of structural risk minimization (SRM). SVMs play an increasing role in detection problems across engineering, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, an SVM was applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was done for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance in terms of the mean square error (MSE) metric was calculated. We showed that the SVM-detected ultrasound images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original noise-free images, indicating that the SVM approach increased the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
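
A generic sketch of patch-wise SVM detection on speckled images with scikit-learn; the paper uses an LS-SVM formulation and its own partially developed speckle model, so the RBF-kernel `SVC` and the sliding-window loop here are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def train_speckle_detector(noisy_patches, labels, C=1.0, gamma="scale"):
    """Train an RBF-kernel SVM to decide between the two reflectivity-level
    hypotheses from small speckled patches (all patches the same size)."""
    X = np.stack([p.ravel() for p in noisy_patches]).astype(np.float64)
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    clf.fit(X, labels)                       # labels: hypothesis 0 or 1 per patch
    return clf

def detect(clf, image, patch=5):
    """Slide a patch over the image and classify each position."""
    h, w = image.shape
    out = np.zeros((h - patch + 1, w - patch + 1), dtype=np.uint8)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = image[i:i + patch, j:j + patch].reshape(1, -1)
            out[i, j] = clf.predict(window)[0]
    return out
```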

Keywords: LS-SVM, medical ultrasound imaging, partially developed speckle, multi-look model.

2332 SVM-Based Detection of SAR Images in Partially Developed Speckle Noise

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

The Support Vector Machine (SVM) is a statistical learning tool initially developed by Vapnik in 1979 and later extended into the more general concept of structural risk minimization (SRM). SVMs play an increasing role in detection problems across engineering, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, an SVM was applied to the detection of SAR (synthetic aperture radar) images in the presence of partially developed speckle noise. The simulation was done for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to real SAR images, and its performance in terms of the mean square error (MSE) metric was calculated. We showed that the SVM-detected SAR images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original noise-free images, indicating that the SVM approach increased the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.

Keywords: Least Square-Support Vector Machine, Synthetic Aperture Radar, Partially Developed Speckle, Multi-Look Model.

2331 A New Image Psychovisual Coding Quality Measurement based Region of Interest

Authors: M. Nahid, A. Bajit, A. Tamtaoui, E. H. Bouyakhf

Abstract:

To model the human visual system (HVS) within a region of interest, we propose a new objective metric adapted to quality measurement for wavelet foveation-based image compression. It exploits a foveation filter implemented in the DWT domain, based on the point and region of fixation of the human eye. This model is used to predict the visible differences between an original and a compressed image with respect to the fixation region, and it yields an adapted, local error measure by discarding all peripheral errors. The technique, which we call the foveation wavelet visible difference predictor (FWVDP), is demonstrated on a number of noisy images that all have the same local peak signal-to-noise ratio (PSNR) but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where the error is masked by high image contrast, and the areas where the error is visible owing to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.

Keywords: Human Visual System, Image Quality, Image Compression, Foveation Wavelet, Region of Interest (ROI).
