Search results for: Low light image enhancement
2321 A New Approach to Image Segmentation via Fuzzification of Rényi Entropy of Generalized Distributions
Authors: Samy Sadek, Ayoub Al-Hamadi, Axel Panning, Bernd Michaelis, Usama Sayed
Abstract:
In this paper, we propose a novel approach to image segmentation via fuzzification of the Rényi Entropy of Generalized Distributions (REGD). The fuzzy REGD is used to precisely measure the structural information of an image and to locate the optimal threshold for segmentation. The proposed approach draws upon the postulate that the optimal threshold coincides with the maximum information content of the distribution. The contributions of the paper are as follows. First, the fuzzy REGD is introduced as a measure of the spatial structure of an image. Then, an efficient entropic segmentation approach using the fuzzy REGD is proposed. Although the proposed approach belongs to the family of entropic segmentation approaches, which are commonly applied to grayscale images, it is adapted to be viable for segmenting color images. Lastly, diverse experiments on real images are carried out, showing the superior performance of the proposed method.
Keywords: Entropy of generalized distributions, entropy fuzzification, entropic image segmentation.
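As an illustration of the general entropic-thresholding idea behind this abstract (not the authors' fuzzy REGD formulation), a minimal Python sketch can select the threshold that maximizes the summed Rényi entropies of the below- and above-threshold distributions; the entropy order `alpha` and the 8-bit grayscale range are assumptions.

```python
import numpy as np

def renyi_entropy(p, alpha=0.7):
    """Renyi entropy H_alpha = ln(sum(p_i^alpha)) / (1 - alpha) of a normalized distribution."""
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def renyi_threshold(image, alpha=0.7):
    """Pick the gray level that maximizes the summed Renyi entropy
    of the below-threshold and above-threshold distributions."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        score = renyi_entropy(p[:t] / w0, alpha) + renyi_entropy(p[t:] / w1, alpha)
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```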
2320 An Optimal Unsupervised Satellite image Segmentation Approach Based on Pearson System and k-Means Clustering Algorithm Initialization
Authors: Ahmed Rekik, Mourad Zribi, Ahmed Ben Hamida, Mohamed Benjelloun
Abstract:
This paper presents an optimal and unsupervised satellite image segmentation approach based on the Pearson system and k-means clustering algorithm initialization. The method can be considered original in that it utilises the k-means clustering algorithm for an optimal initialisation of the image class number on the one hand, and exploits the Pearson system for an optimal assignment of statistical distributions to each considered class on the other hand. Satellite image exploitation requires the use of different approaches, especially those founded on the unsupervised statistical segmentation principle. Such approaches necessitate the definition of several parameters, such as the image class number, the estimation of class variables and the generalised mixture distributions. The use of statistical image attributes gives convincing and promising results, provided that the initialisation step is optimal and the statistical distributions are appropriately assigned. The Pearson system, associated with a k-means clustering algorithm and the Stochastic Expectation-Maximization (SEM) algorithm, can be adapted to this problem. For each image class, the Pearson system assigns one distribution type according to different parameters, especially the skewness (β1) and the kurtosis (β2). The adapted algorithms, the k-means clustering algorithm, the SEM algorithm and the Pearson system algorithm, are then applied to the satellite image segmentation problem. The efficiency of these combined algorithms was validated firstly with the Mean Quadratic Error (MQE) and secondly by visual inspection across several comparisons of the unsupervised image segmentations.
Keywords: Unsupervised classification, Pearson system, Satellite image, Segmentation.
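A minimal sketch of the initialization idea described above, assuming grayscale pixel values and scikit-learn's KMeans: the classes are seeded by k-means, and the Pearson shape parameters β1 (squared skewness) and β2 (kurtosis) of each cluster are computed as the quantities the Pearson system would use to assign a distribution type. The Pearson-type selection rules themselves are not reproduced here.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.cluster import KMeans

def pearson_parameters_per_class(pixels, n_classes=4):
    """Initialize classes with k-means, then compute the Pearson-system
    shape parameters beta1 (squared skewness) and beta2 (kurtosis) per class."""
    x = pixels.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(x)
    params = {}
    for c in range(n_classes):
        values = x[labels == c].ravel()
        beta1 = skew(values) ** 2               # Pearson beta1
        beta2 = kurtosis(values, fisher=False)  # Pearson beta2 (non-excess kurtosis)
        params[c] = (beta1, beta2)
    return params
```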
2319 Effectiveness of Business Software Systems Development and Enhancement Projects versus Work Effort Estimation Methods
Authors: Beata Czarnacka-Chrobot
Abstract:
Execution of Business Software Systems (BSS) Development and Enhancement Projects (D&EP) is characterized by exceptionally low effectiveness, leading to considerable financial losses. The general reason for the low effectiveness of such projects is that they are inappropriately managed. One of the factors of proper BSS D&EP management is a suitable (reliable and objective) method of project work effort estimation, since this is what determines correct estimation of its major attributes: project cost and duration. A BSS D&EP is usually considered to be accomplished effectively if a product of the planned functionality is delivered without cost and time overrun. The goal of this paper is to prove that the choice of approach to BSS D&EP work effort estimation has a considerable influence on the effectiveness of such projects' execution.
Keywords: Business software systems, development and enhancement projects, effectiveness, work effort estimation methods, software product size, software product functionality, project duration, project cost.
2318 Enhancement of Raman Scattering using Photonic Nanojet and Whispering Gallery Mode of a Dielectric Microstructure
Authors: A. Arya, R. Laha, V. R. Dantham
Abstract:
We report the enhancement of the Raman scattering signal by one order of magnitude using the photonic nanojet (PNJ) of a lollipop-shaped dielectric microstructure (LSDM) fabricated by a pulsed CO₂ laser. Here, the PNJ is generated by illuminating the sphere portion of the LSDM with a non-resonant laser. Unlike the surface-enhanced Raman scattering (SERS) technique, this technique is simple, and the obtained results are highly reproducible. In addition, an efficient technique is proposed to enhance the SERS signal with the help of a high-quality-factor optical resonance (whispering gallery mode) of an LSDM. From the theoretical simulations, it has been found that at least an order of magnitude enhancement in the SERS signal can be achieved easily using the proposed technique. We strongly believe that this report will enable the research community to improve Raman scattering signals.
Keywords: Localized surface plasmons, photonic nanojet, SERS, whispering gallery mode.
2317 Outdoor Anomaly Detection with a Spectroscopic Line Detector
Authors: O. J. G. Somsen
Abstract:
One of the tasks of optical surveillance is to detect anomalies in large amounts of image data. However, if the size of the anomaly is very small, limited information is available to distinguish it from the surrounding environment. Spectral detection provides a useful source of additional information and may help to detect anomalies with a size of a few pixels or less. Unfortunately, spectral cameras are expensive because of the difficulty of separating two spatial dimensions in addition to one spectral dimension. We investigate the possibility of modifying a simple spectral line detector for outdoor detection. This may be especially useful if the area of interest forms a line, such as the horizon. We use a monochrome CCD that also enables detection in the near infrared. A simple camera is attached to the setup to determine which part of the environment is spectrally imaged. Our preliminary results indicate that sensitive detection of very small targets is indeed possible. Spectra could be taken from the various targets by averaging columns in the line image. By imaging a set of lines of various widths, we found narrow lines that could not be seen in the color image but remained visible in the spectral line image. A simultaneous analysis of the entire spectrum can produce better results than visual inspection of the spectral line image. We are presently developing calibration targets for spatial and spectral focusing and alignment with the spatial camera. This should yield improved results and broader use in outdoor applications.
Keywords: Anomaly detection, spectroscopic line imaging, image analysis.
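The column-averaging step mentioned in the abstract is straightforward; a minimal sketch follows, assuming the line image is a 2D array with the spectral axis along the rows and the spatial axis along the columns (an assumption about the setup, not stated in the abstract).

```python
import numpy as np

def target_spectrum(line_image, col_start, col_stop):
    """Average the columns belonging to one target in the spectral line image,
    yielding a single spectrum (one value per spectral row)."""
    region = line_image[:, col_start:col_stop]
    return region.mean(axis=1)
```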
2316 Visualization of Latent Sweat Fingerprints Deposit on Paper by Infrared Radiation and Blue Light
Authors: Xiaochun Huang, Xuejun Zhao, Yun Zou, Feiyu Yang, Wenbin Liu, Nan Deng, Ming Zhang, Nengbin Cai
Abstract:
A simple device based on infrared radiation (IR) was developed for rapid visualization of sweat fingerprints deposited on paper with blue light (450 nm, 11 W). In this approach, IR serves as the pretreatment device before the sweat fingerprints are illuminated by blue light. An annular blue light source was adopted for visualizing latent sweat fingerprints. Sample fingerprints were examined under various conditions after deposition, and the experimental results indicate that the recovery rate of the latent sweat fingerprints is in the range of 50%-100% without chemical treatments. A mechanism for the observed visibility is proposed, based on the transport and re-impregnation of fluorescer in the paper at the regions containing water, and further exploratory experiments gave full support to this mechanism. Therefore, such an IR-pretreatment method for detecting latent fingerprints may be preferable when biological information from the samples is needed for subsequent testing.
Keywords: Forensic science, visualization, infrared radiation, blue light, latent sweat fingerprints, detection.
2315 Selection of Appropriate Classification Technique for Lithological Mapping of Gali Jagir Area, Pakistan
Authors: Khunsa Fatima, Umar K. Khattak, Allah Bakhsh Kausar
Abstract:
Satellite image interpretation and analysis assist geologists by providing valuable information about the geology and minerals of an area to be surveyed. A test site in Fatejang of district Attock has been studied using Landsat ETM+ and ASTER satellite images for lithological mapping. Five different supervised image classification techniques, namely maximum likelihood, parallelepiped, minimum distance to mean, Mahalanobis distance and spectral angle mapper, were applied to both satellite images to identify the most suitable classification technique for lithological mapping in the study area. The results of these five image classification techniques were compared with the geological map produced by the Geological Survey of Pakistan. The maximum likelihood classification applied to the ASTER satellite image has the highest correlation (0.66) with the geological map. Field observations and XRD spectra of field samples also verified the results. A lithological map was then prepared based on the maximum likelihood classification of the ASTER satellite image.
Keywords: ASTER, Landsat-ETM+, Satellite, Image classification.
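Of the five classifiers compared above, the spectral angle mapper has a particularly compact form; a minimal sketch (pixel and reference spectra as vectors of band values) assigns each pixel to the reference spectrum with the smallest spectral angle. This is an illustration of the standard SAM rule, not the authors' full workflow.

```python
import numpy as np

def spectral_angle(x, r):
    """Angle (radians) between a pixel spectrum x and a reference spectrum r."""
    cos = np.dot(x, r) / (np.linalg.norm(x) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixel, references):
    """Return the index of the reference spectrum with the smallest angle."""
    angles = [spectral_angle(pixel, r) for r in references]
    return int(np.argmin(angles))
```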
2314 A Modified Speech Enhancement Using Adaptive Gain Equalizer with Non linear Spectral Subtraction for Robust Speech Recognition
Authors: C. Ganesh Babu, P. T. Vanathi
Abstract:
In this paper we present an enhanced noise reduction method for robust speech recognition using an Adaptive Gain Equalizer with Nonlinear Spectral Subtraction. In the Adaptive Gain Equalizer (AGE) method, the input signal is divided into a number of subbands that are individually weighted in the time domain according to the short-time Signal-to-Noise Ratio (SNR) estimated in each subband at every time instant. The focus is on speech enhancement rather than on noise suppression. When the method was analyzed under various noise conditions for speech recognition, it was found that the Adaptive Gain Equalizer algorithm has an obvious failing point at an SNR of -5 dB, with inadequate levels of noise suppression for SNRs below this point. This work proposes the implementation of AGE coupled with Nonlinear Spectral Subtraction (AGE-NSS) for robust speech recognition. The experimental results show that our AGE-NSS outperforms AGE when the SNR drops below the -5 dB level.
Keywords: Adaptive Gain Equalizer, Non Linear Spectral Subtraction, Speech Enhancement, and Speech Recognition.
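A minimal sketch of the spectral-subtraction half of the AGE-NSS combination, assuming the first few frames contain only noise and using an over-subtraction factor with a spectral floor; the frame length, `alpha` and `floor` values are assumptions, and the subband AGE weighting itself is not shown.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_frames=10, alpha=2.0, floor=0.02):
    """Subtract an over-estimated noise magnitude spectrum and clamp to a spectral floor."""
    f, t, X = stft(x, fs=fs, nperseg=256)
    mag, phase = np.abs(X), np.angle(X)
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)   # noise estimate from leading frames
    cleaned = np.maximum(mag - alpha * noise_mag, floor * noise_mag)
    _, y = istft(cleaned * np.exp(1j * phase), fs=fs, nperseg=256)
    return y
```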
2313 Image Indexing Using a Color Similarity Metric based on the Human Visual System
Authors: Angelo Nodari, Ignazio Gallo
Abstract:
The novelty proposed in this study is twofold and consists in the development of a new color similarity metric based on the human visual system and a new color indexing scheme based on a textual approach. The new color similarity metric is based on the color perception of the human visual system; consequently, the results returned by the indexing system can fulfill the user's expectations as much as possible. We developed a web application to collect users' judgments about the similarities between colors, and the results are used to estimate the metric proposed in this study. In order to index the image's colors, we used a text indexing engine to facilitate the integration of visual features in a database of text documents. The textual signature is built by weighting the image's colors according to their occurrence in the image. The use of a textual indexing engine provides a simple, fast and robust solution to index images. A typical usage of the proposed system is the development of applications whose data are both visual and textual. In order to evaluate the proposed method we chose a price comparison engine as a case study, collecting a series of commercial offers containing the textual description and the image representing each specific commercial offer.
Keywords: Color Extraction, Content-Based Image Retrieval, Indexing
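A minimal sketch of the kind of textual color signature the abstract describes, assuming a coarse quantization of RGB space and term repetition as the weighting mechanism; the bin count, term budget and term names are assumptions, and the perceptual similarity metric of the paper is not reproduced.

```python
import numpy as np

def color_signature(rgb_image, bins=4, max_terms=50):
    """Quantize colors into bins and emit a space-separated 'document' where
    each color term is repeated in proportion to its occurrence in the image."""
    pixels = rgb_image.reshape(-1, 3)
    quantized = (pixels // (256 // bins)).astype(int)                       # per-channel bin index
    codes = quantized[:, 0] * bins * bins + quantized[:, 1] * bins + quantized[:, 2]
    labels, counts = np.unique(codes, return_counts=True)
    weights = counts / counts.sum()
    terms = []
    for label, w in zip(labels, weights):
        terms += [f"color{label}"] * max(1, int(round(w * max_terms)))      # weight by occurrence
    return " ".join(terms)
```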
2312 VDGMSISS: A Verifiable and Detectable Multi-Secret Images Sharing Scheme with General Access Structure
Authors: Justie Su-Tzu Juan, Ming-Jheng Li, Ching-Fen Lee, Ruei-Yu Wu
Abstract:
A secret image sharing scheme is a way to protect images. The main idea is to disperse the secret image into numerous shadow images. A secret image sharing scheme that can withstand impersonation attacks and achieve the highly practical property of multi-use is more practical. Therefore, this paper proposes a verifiable and detectable secret image sharing scheme called VDGMSISS to resist impersonation attacks and to achieve properties such as encrypting multiple secret images at one time and multi-use. Moreover, our scheme can also be used with any general access structure.
Keywords: Multi-secret images sharing scheme, verifiable, detectable, general access structure.
2311 Color Image Segmentation Using Competitive and Cooperative Learning Approach
Authors: Yinggan Tang, Xinping Guan
Abstract:
Color image segmentation can be considered a clustering procedure in feature space. The k-means algorithm and its adaptive version, the competitive learning approach, are powerful tools for data clustering. However, k-means and competitive learning suffer from several drawbacks, such as the dead-unit problem and the need to pre-specify the number of clusters. In this paper, we explore the use of a competitive and cooperative learning approach to perform color image segmentation. In this approach, seed points not only compete with each other, but the winner also dynamically selects several of its nearest competitors to form a cooperative team that adapts to the input together; as a result, the method can automatically select the correct number of clusters and avoid the dead-unit problem. Experimental results show that CCL can obtain better segmentation results.
Keywords: Color image segmentation, competitive learning, cluster, k-means algorithm, competitive and cooperative learning.
2310 An Experimental Study on Holdup Measurement in Fluidized Bed by Light Transmission
Authors: E. Shahbazali, N. Afrasiabi, A. A. Safekordi
Abstract:
Nowadays, the fluidized bed plays an important part in industry. The design of this kind of reactor requires knowing the interfacial area between the two phases, and this interfacial area leads to the calculation of the solid holdup in the bed. Consequently, determining the interfacial area between gas and solid in the bed experimentally is significant. Interfacial area measurement in gas fluidized beds has been studied, but the light transmission technique has been used less often. Therefore, in the current research the possibility of using this technique and its accuracy are investigated. For the measurements, a fluidized bed was designed and potential problems were averted as far as possible. By using fine solids of equal shape and diameter and installing an optical system, the absorption of light during fluidization has been measured. The results indicate that this method, whose validity has been proven in gas-liquid systems, has, for several reasons, less applicability in gas-solid systems. One important reason could be non-uniformity in such systems.
Keywords: Fluidization, Holdup, Light Transmission, Two phase system.
2309 Complex Wavelet Transform Based Image Denoising and Zooming Under the LMMSE Framework
Authors: T. P. Athira, Gibin Chacko George
Abstract:
This paper proposes a dual-tree complex wavelet transform (DT-CWT) based directional interpolation scheme for noisy images. The problems of denoising and interpolation are modelled as estimating the noiseless and missing samples under the same framework of optimal estimation. Initially, the DT-CWT is used to decompose an input low-resolution noisy image into low- and high-frequency subbands. The high-frequency subband images are interpolated by linear minimum mean square error (LMMSE) based interpolation, which preserves the edges of the interpolated images. For each noisy LR image sample, we compute multiple estimates of it along different directions and then fuse those directional estimates for a more accurate denoised LR image. The estimation parameters calculated in the denoising process can be readily used to interpolate the missing samples. The inverse DT-CWT is applied to the denoised input and interpolated high-frequency subband images to obtain the high-resolution image. Compared with conventional schemes that perform denoising and interpolation in tandem, the proposed DT-CWT based noisy image interpolation method can reduce many noise-caused interpolation artifacts and preserve the image edge structures well. The visual and quantitative results show that the proposed technique outperforms many of the existing denoising and interpolation methods.
Keywords: Dual-tree complex wavelet transform (DT-CWT), denoising, interpolation, optimal estimation, super resolution.
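The LMMSE estimate used for the subband coefficients has a simple closed form, x_hat = sigma_x^2 / (sigma_x^2 + sigma_n^2) * y; a minimal sketch follows, assuming the noise variance is known and the signal variance is estimated from a local window of the noisy coefficients (the directional fusion step of the paper is not shown).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lmmse_shrink(noisy_coeffs, noise_var, window=7):
    """Per-coefficient LMMSE estimate with a locally estimated signal variance."""
    local_power = uniform_filter(noisy_coeffs ** 2, size=window)   # local E[y^2]
    signal_var = np.maximum(local_power - noise_var, 0.0)          # sigma_x^2 estimate
    return signal_var / (signal_var + noise_var) * noisy_coeffs
```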
2308 Enhancing Children’s English Vocabulary Acquisition through Digital Storytelling at Happy Kids Kindergarten, Palembang, Indonesia
Authors: Gaya Tridinanti
Abstract:
Enhancing English vocabulary in early childhood is a main problem often faced by teachers. Thus, the purpose of this study was to determine the enhancement of children’s English vocabulary acquisition by using digital storytelling. This research was an action research study. It consisted of a series of four activities done in repeated cycles: planning, implementation, observation, and reflection. The subjects of the study consisted of 30 students of group B (5-6 years old) attending Happy Kids Kindergarten Palembang, Indonesia. The research was conducted in three cycles. The methods used for data collection were observation and documentation. Descriptive qualitative and quantitative methods were also used to analyse the data. The research showed that the digital storytelling learning activities could enhance the children’s English vocabulary acquisition. This is based on the data, in which the enhancement in the pre-cycle was 37% and 51% in Cycle I. In Cycle II it was 71% and in Cycle III it was 89.3%. The results showed an enhancement of about 14% from the pre-cycle to Cycle I, 20% from Cycle I to Cycle II, and about 18.3% from Cycle II to Cycle III. The conclusion of this study suggests that the digital storytelling learning method could enhance the English vocabulary acquisition of group B children at the Happy Kids Kindergarten Palembang. Therefore, digital storytelling can be considered as an alternative to improve English language learning in the classroom.
Keywords: Acquisition, enhancing, digital storytelling, English vocabulary.
2307 A Data Hiding Model with High Security Features Combining Finite State Machines and PMM method
Authors: Souvik Bhattacharyya, Gautam Sanyal
Abstract:
Recent years have witnessed the rapid development of the Internet and telecommunication techniques. Information security is becoming more and more important. Applications such as covert communication, copyright protection, etc., stimulate the research of information hiding techniques. Traditionally, encryption is used to realize communication security. However, important information is not protected once it is decoded. Steganography is the art and science of communicating in a way which hides the existence of the communication: important information is first hidden in a host data, such as a digital image, video or audio, and then transmitted secretly to the receiver. In this paper a data hiding model with high security features, combining cryptography using a finite-state sequential machine and image-based steganography, is proposed for communicating information more securely between two locations. The authors incorporate the idea of a secret key for authentication at both ends in order to achieve a high level of security. Before the embedding operation, the secret information is encrypted with the help of a finite-state sequential machine and segmented into different parts. The cover image is also segmented into different objects through normalized cut. Each part of the encoded secret information is embedded with the help of a novel image steganographic method (PMM) in different cuts of the cover image to form different stego objects. Finally, the stego image is formed by combining the different stego objects and transmitted to the receiver side. At the receiving end, the opposite processes are run to recover the original secret message.
Keywords: Cover Image, Finite state sequential machine, Mealy machine, Pixel Mapping Method (PMM), Stego Image, NCUT.
2306 A CFD Study of Heat Transfer Enhancement in Pipe Flow with Al2O3 Nanofluid
Authors: P.Kumar
Abstract:
Fluids are used for heat transfer in many engineering devices. Water, ethylene glycol and propylene glycol are some of the common heat transfer fluids. Over the years, in an attempt to reduce the size of the equipment and/or to increase the efficiency of the process, various techniques have been employed to improve the heat transfer rate of these fluids. Surface modification, the use of inserts and increased fluid velocity are some examples of heat transfer enhancement techniques. Addition of milli- or micro-sized particles to the heat transfer fluid is another way of improving the heat transfer rate. Though this looks simple, the method has practical problems such as high pressure loss, clogging and erosion of the material of construction. These problems can be overcome by using nanofluids, which are dispersions of nanosized particles in a base fluid. Nanoparticles increase the thermal conductivity of the base fluid manifold, which in turn increases the heat transfer rate. In this work, the heat transfer enhancement using an aluminium oxide nanofluid has been studied by computational fluid dynamic modeling of the nanofluid flow adopting the single-phase approach.
Keywords: Heat transfer intensification, nanofluid, CFD, friction factor.
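In the single-phase approach mentioned above, the nanofluid is treated as a homogeneous fluid with effective properties. A minimal sketch follows, using the classical Maxwell model for thermal conductivity, the Brinkman model for viscosity, and a volume-weighted mixture rule for density; these are common baseline models and not necessarily the correlations used in the paper, and the example property values are approximate.

```python
def nanofluid_properties(phi, k_f, k_p, mu_f, rho_f, rho_p):
    """Effective properties of a nanofluid at particle volume fraction phi."""
    # Maxwell model for effective thermal conductivity of a dilute suspension of spheres
    k_eff = k_f * (k_p + 2 * k_f + 2 * phi * (k_p - k_f)) / (k_p + 2 * k_f - phi * (k_p - k_f))
    # Brinkman model for effective dynamic viscosity
    mu_eff = mu_f / (1 - phi) ** 2.5
    # Volume-weighted mixture rule for density
    rho_eff = (1 - phi) * rho_f + phi * rho_p
    return k_eff, mu_eff, rho_eff

# Example: ~1 vol.% Al2O3 (k ~ 40 W/m.K, rho ~ 3970 kg/m3) in water at room temperature
print(nanofluid_properties(0.01, k_f=0.6, k_p=40.0, mu_f=1.0e-3, rho_f=998.0, rho_p=3970.0))
```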
2305 Using Self Organizing Feature Maps for Classification in RGB Images
Authors: Hassan Masoumi, Ahad Salimi, Nazanin Barhemmat, Babak Gholami
Abstract:
Artificial neural networks have gained a lot of interest as empirical models for their powerful representational capacity and multi-input/output mapping characteristics. In fact, most feedforward networks with nonlinear nodal functions have been proved to be universal approximators. In this paper, we propose a new supervised method for color image classification based on self-organizing feature maps (SOFM). The algorithm is based on competitive learning. The method partitions the input space using self-organizing feature maps to introduce the concept of local neighborhoods. Our image classification system takes RGB images as input. Experiments with simulated data showed that the separability of classes increased with increasing training time. In addition, the results show that the proposed algorithm is effective for color image classification.
Keywords: Classification, SOFM, neural network, RGB images.
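A minimal numpy sketch of the competitive SOFM training step underlying such a method (a 1-D map with a Gaussian neighborhood); the map size, learning rate, neighborhood width and decay schedule are assumptions, and the supervised labelling stage of the paper is not shown.

```python
import numpy as np

def train_sofm(data, n_units=16, epochs=20, lr=0.3, sigma=2.0, seed=0):
    """Train a 1-D self-organizing feature map on RGB samples (shape: n x 3)."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
    positions = np.arange(n_units)
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))      # competition step
            h = np.exp(-((positions - winner) ** 2) / (2 * (sigma * decay + 1e-9) ** 2))
            weights += (lr * decay) * h[:, None] * (x - weights)         # neighborhood update
    return weights
```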
2304 Performance Improvement in the Bivariate Models by using Modified Marginal Variance of Noisy Observations for Image-Denoising Applications
Authors: R. Senthilkumar
Abstract:
Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, the wavelet coefficients of natural images have significant dependencies. This paper attempts to give a recipe for selecting one of the popular image-denoising algorithms based on VisuShrink, SureShrink, OracleShrink, BayesShrink and BiShrink, and it also compares different bivariate models used for image-denoising applications. The first part of the paper compares different shrinkage functions used for image denoising. The second part compares different bivariate models, and the third part uses the bivariate model with modified marginal variance, which is based on a Laplacian assumption. The paper gives an experimental comparison on six commonly used 512x512 images: Lenna, Barbara, Goldhill, Clown, Boat and Stonehenge. Noise at powers of 25 dB, 26 dB, 27 dB, 28 dB and 29 dB is added to the six standard images, and the corresponding Peak Signal to Noise Ratio (PSNR) values are calculated for each noise level.
Keywords: BiShrink, Image-Denoising, PSNR, Shrinkage function.
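For reference, a minimal sketch of one of the compared baselines, BayesShrink soft thresholding, using PyWavelets; the wavelet and decomposition level are assumptions, and the bivariate/BiShrink models of the paper are not reproduced here.

```python
import numpy as np
import pywt

def bayesshrink_denoise(image, wavelet="db4", level=2):
    """Soft-threshold each detail subband with the BayesShrink threshold sigma_n^2 / sigma_x."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    # Noise std estimated from the finest diagonal subband (robust median estimator)
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    new_coeffs = [coeffs[0]]
    for cH, cV, cD in coeffs[1:]:
        bands = []
        for band in (cH, cV, cD):
            sigma_y2 = np.mean(band ** 2)
            sigma_x = np.sqrt(max(sigma_y2 - sigma_n ** 2, 1e-12))
            t = sigma_n ** 2 / sigma_x
            bands.append(pywt.threshold(band, t, mode="soft"))
        new_coeffs.append(tuple(bands))
    return pywt.waverec2(new_coeffs, wavelet)
```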
2303 A Genetic Algorithm for Clustering on Image Data
Authors: Qin Ding, Jim Gasvoda
Abstract:
Clustering is the process of subdividing an input data set into a desired number of subgroups so that members of the same subgroup are similar and members of different subgroups have diverse properties. Many heuristic algorithms have been applied to the clustering problem, which is known to be NP-hard. Genetic algorithms have been used in a wide variety of fields to perform clustering; however, the technique normally has a long running time in terms of input set size. This paper proposes an efficient genetic algorithm for clustering on very large data sets, especially on image data sets. The genetic algorithm uses the most time-efficient techniques along with preprocessing of the input data set. We test our algorithm on both artificial and real image data sets, both of which are of large size. The experimental results show that our algorithm outperforms the k-means algorithm in terms of running time as well as the quality of the clustering.
Keywords: Clustering, data mining, genetic algorithm, image data.
2302 Factors of Effective Business Software Systems Development and Enhancement Projects Work Effort Estimation
Authors: Beata Czarnacka-Chrobot
Abstract:
The majority of Business Software Systems (BSS) Development and Enhancement Projects (D&EP) fail to meet the criteria of effectiveness, which leads to considerable financial losses. One of the fundamental reasons for such projects' exceptionally low success rate is improperly derived estimates of their costs and time. In the case of BSS D&EP these attributes are determined by the work effort, while reliable and objective effort estimation still appears to be a great challenge for software engineering. Thus this paper is aimed at presenting the most important synthetic conclusions coming from the author's own studies concerning the main factors of effective BSS D&EP work effort estimation. Thanks to rational investment decisions made on the basis of reliable and objective criteria, it is possible to reduce losses caused not only by abandoned projects but also by large-scale overruns of the time and costs of BSS D&EP execution.
Keywords: Benchmarking data, business software systems development and enhancement projects, effort estimation, software engineering economics, software functional size measurement.
2301 Weld Defect Detection in Industrial Radiography Based Digital Image Processing
Authors: N. Nacereddine, M. Zelmat, S. S. Belaïfa, M. Tridi
Abstract:
Industrial radiography is a well-known technique for the identification and evaluation of discontinuities, or defects, such as cracks, porosity and foreign inclusions found in welded joints. Although this technique has been well developed, improving both the inspection process and the operating time, it suffers from several drawbacks. The poor quality of radiographic images is due to the physical nature of radiography as well as the small size of the defects and their poor orientation relative to the size and thickness of the evaluated parts. Digital image processing techniques allow the interpretation of the image to be automated, avoiding the presence of human operators and making the inspection system more reliable, reproducible and faster. This paper describes our attempt to develop and implement digital image processing algorithms for the purpose of automatic defect detection in radiographic images. Because of the complex nature of the considered images, and in order that the detected defect region represents the real defect as accurately as possible, the choice of global and local preprocessing and segmentation methods must be appropriate.
Keywords: Digital image processing, global and local approaches, radiographic film, weld defect.
2300 Image Compression Using Multiwavelet and Multi-Stage Vector Quantization
Authors: S. Esakkirajan, T. Veerakumar, V. Senthil Murugan, P. Navaneethan
Abstract:
The existing image coding standards generally degrade at low bit-rates because of the underlying block-based Discrete Cosine Transform scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, scalar wavelets do not simultaneously possess all the properties that are essential for signal processing, such as orthogonality, short support, linear phase symmetry, and a high order of approximation through vanishing moments. A new class of wavelets called 'multiwavelets', which possess more than one scaling function, overcomes this problem. This paper presents a new image coding scheme based on nonlinear approximation of multiwavelet coefficients along with multistage vector quantization. The performance of the proposed scheme is compared with the results obtained from scalar wavelets.
Keywords: Image compression, Multiwavelets, Multi-stage vector quantization.
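A minimal sketch of the multistage (residual) vector quantization component, using scikit-learn's KMeans to train one codebook per stage; the codebook sizes are assumptions, and the multiwavelet front end is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_msvq(vectors, stage_sizes=(64, 32, 16)):
    """Train one codebook per stage; each stage quantizes the residual of the previous one."""
    residual = vectors.astype(float).copy()
    codebooks = []
    for size in stage_sizes:
        km = KMeans(n_clusters=size, n_init=5).fit(residual)
        codebooks.append(km.cluster_centers_)
        residual = residual - km.cluster_centers_[km.labels_]   # pass the residual to the next stage
    return codebooks

def encode_msvq(vectors, codebooks):
    """Return per-stage codeword indices for each input vector."""
    residual = vectors.astype(float).copy()
    indices = []
    for cb in codebooks:
        dists = np.linalg.norm(residual[:, None, :] - cb[None, :, :], axis=2)
        idx = dists.argmin(axis=1)
        indices.append(idx)
        residual = residual - cb[idx]
    return indices
```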
2299 Analysis of Reflectance Photoplethysmograph Sensors
Authors: Fu-Hsuan Huang, Po-Jung Yuan, Kang-Ping Lin, Hen-Hong Chang, Cheng-Lun Tsai
Abstract:
Photoplethysmography is a simple measurement of the variation in blood volume in tissue. It detects the pulse signal of the heart beat as well as the low-frequency signal of vasoconstriction and vasodilation. The transmission-type measurement is limited to only a few specific positions, for example the index finger, that have a short path length for light. The reflectance-type measurement can be conveniently applied on most parts of the body surface. This study analyzed the factors that determine the quality of the reflectance photoplethysmograph signal, including the emitter-detector distance, wavelength, light intensity, and optical properties of skin tissue. Light-emitting diodes (LEDs) with four different visible wavelengths were used as the light emitters. A phototransistor was used as the light detector. A micro translation stage adjusts the emitter-detector distance from 2 mm to 15 mm. The reflective photoplethysmograph signals were measured at different sites. The optimal emitter-detector distance was chosen to give a large dynamic range for low-frequency drifting without signal saturation and a high perfusion index. Among the four wavelengths, yellowish-green (571 nm) light with a proper emitter-detector distance of 2 mm is the most suitable for obtaining a steady and reliable reflectance photoplethysmograph signal.
Keywords: Reflectance photoplethysmograph, Perfusion index, Signal-to-noise ratio.
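The perfusion index referred to above is commonly computed as the ratio of the pulsatile (AC) to the non-pulsatile (DC) component of the PPG signal; a minimal sketch under that common definition follows (the paper's exact computation is not specified in the abstract).

```python
import numpy as np

def perfusion_index(ppg):
    """Perfusion index (%) = AC amplitude / DC level * 100 for one PPG segment."""
    dc = np.mean(ppg)                 # non-pulsatile baseline
    ac = np.max(ppg) - np.min(ppg)    # peak-to-peak pulsatile amplitude
    return 100.0 * ac / dc
```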
2298 Data Hiding by Vector Quantization in Color Image
Authors: Yung-Gi Wu
Abstract:
With the growth of computers and networks, digital data can be spread anywhere in the world quickly. In addition, digital data can be copied or tampered with easily, so the security issue becomes an important topic in the protection of digital data. Digital watermarking is a method to protect the ownership of digital data. Embedding the watermark will certainly influence the quality. In this paper, Vector Quantization (VQ) is used to embed the watermark into the image to fulfill the goal of data hiding. This kind of watermarking is invisible, which means that users will not be conscious of the existence of the embedded watermark even though the embedded image has a tiny difference compared to the original image. Meanwhile, VQ requires a lot of computation, so we adopt a fast VQ encoding scheme using partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks we hide in the image can be gray, bi-level or color images; text can also be regarded as a watermark to embed. In order to test the robustness of the system, we used Photoshop to perform sharpening, cropping and altering to check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist the above three kinds of tampering in general cases.
Keywords: Data hiding, vector quantization, watermark.
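A minimal sketch of the partial distortion search (PDS) idea used to speed up VQ encoding: the distance accumulation for a candidate codeword is abandoned as soon as it exceeds the best distortion found so far. The mean approximation pre-screening mentioned in the abstract is not shown.

```python
import numpy as np

def pds_nearest_codeword(vector, codebook):
    """Find the nearest codeword, stopping a partial distortion sum early
    once it exceeds the current best distortion."""
    best_idx, best_dist = 0, np.inf
    for i, codeword in enumerate(codebook):
        dist = 0.0
        for v, c in zip(vector, codeword):
            dist += (v - c) ** 2
            if dist >= best_dist:        # early rejection of this codeword
                break
        else:
            best_idx, best_dist = i, dist
    return best_idx, best_dist
```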
2297 Basic Study of Mammographic Image Magnification System with Eye-Detector and Simple EEG Scanner
Authors: A. Umemuro, M. Sato, M. Narita, S. Hori, S. Sakurai, T. Nakayama, A. Nakazawa, T. Ogura
Abstract:
Mammography requires the detection of very small calcifications, and physicians search for microcalcifications by magnifying the images as they read them. A mouse is necessary to zoom in on the images, but this can be tiring and distracting when many images are read in a single day. Therefore, an image magnification system combining an eye-detector and a simple electroencephalograph (EEG) scanner was devised, and its operability was evaluated. Two experiments were conducted in this study: the measurement of eye-detection error using the eye-detector and the measurement of the time required for image magnification using the simple EEG scanner. Eye-detector validation showed that the mean distance of eye-detection error ranged from 0.64 cm to 2.17 cm, with an overall mean of 1.24 ± 0.81 cm across the observers. The results showed that the eye-detection error was small enough for the magnified area of the mammographic image. The average time required for point magnification in the verification of the simple EEG scanner ranged from 5.85 to 16.73 seconds, and individual differences were observed. The reason for this may be that the size of the simple EEG scanner used was not adjustable, so it did not fit well for some subjects. The use of a simple EEG scanner with size adjustment would solve this problem. Therefore, the image magnification system using the eye-detector and the simple EEG scanner is useful.
Keywords: EEG scanner, eye-detector, mammography, observers.
2296 A New Approach for Image Segmentation using Pillar-Kmeans Algorithm
Authors: Ali Ridho Barakbah, Yasushi Kiyoki
Abstract:
This paper presents a new approach for image segmentation by applying the Pillar-Kmeans algorithm. The segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to the image segmentation after optimization by the Pillar algorithm. The Pillar algorithm considers the placement of pillars, which should be located as far as possible from each other to withstand the pressure distribution of a roof, as analogous to the placement of centroids within the data distribution. This algorithm is able to optimize K-means clustering for image segmentation in terms of precision and computation time. It designates the initial centroids' positions by calculating an accumulated distance metric between each data point and all previous centroids, and then selects the data points which have the maximum distance as new initial centroids. In this way, all initial centroids are distributed according to the maximum accumulated distance metric. This paper evaluates the proposed approach for image segmentation by comparing it with the K-means and Gaussian Mixture Model algorithms, involving the RGB, HSV, HSL and CIELAB color spaces. The experimental results clarify the effectiveness of our approach in improving segmentation quality in terms of precision and computational time.
Keywords: Image segmentation, K-means clustering, Pillar algorithm, color spaces.
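A minimal sketch of the accumulated-distance initialization described above, selecting each new initial centroid as the point with the maximum accumulated distance to all previously chosen centroids; the choice of the first centroid (the point farthest from the grand mean) is an assumption, and the outlier handling and grouping refinements of the full Pillar algorithm are omitted.

```python
import numpy as np

def pillar_init(data, k):
    """Pick k initial centroids by maximizing the accumulated distance to previous centroids."""
    # First centroid: the point farthest from the grand mean (assumed starting rule)
    dist_to_mean = np.linalg.norm(data - data.mean(axis=0), axis=1)
    centroids = [data[np.argmax(dist_to_mean)]]
    dist_acc = np.zeros(len(data))
    for _ in range(1, k):
        dist_acc += np.linalg.norm(data - centroids[-1], axis=1)   # accumulate the distance metric
        centroids.append(data[np.argmax(dist_acc)])
    return np.array(centroids)
```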
2295 A Study of Color Transformation on Website Images for the Color Blind
Authors: Siew-Li Ching, Maziani Sabudin
Abstract:
In this paper, we study color transformation methods for website images for the color blind. The most common category of color blindness is red-green color blindness, where these colors are viewed as beige. By transforming the colors of the images, the color blind can improve their color visibility and have a better view when browsing websites. To transform the colors of website images, we study two algorithms: a conversion technique from the RGB color space to the HSV color space and a self-organizing color transformation. The comparative study focuses on criteria of ease of use, quality, accuracy and efficiency. The outcome of the study leads to the enhancement of website images to meet color blind users' vision requirements in perceiving image details.
Keywords: Color blind, color transformation, HSV (Hue, Saturation, Value), RGB (Red, Green, Blue).
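The RGB-to-HSV conversion evaluated as the first algorithm is available in Python's standard library; a minimal sketch converts an RGB pixel to HSV and, purely as an illustrative transformation step, rotates its hue before converting back. The actual re-mapping rules for color-blind viewers are design decisions not given in the abstract, and the hue shift value is an assumption.

```python
import colorsys

def shift_hue(r, g, b, hue_shift=0.15):
    """Convert an 8-bit RGB pixel to HSV, rotate the hue, and convert back to RGB."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h = (h + hue_shift) % 1.0
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)

print(shift_hue(200, 60, 60))   # a reddish pixel moved toward a more distinguishable hue
```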
2294 Extent of Highway Capacity Loss Due to Rainfall
Authors: Hashim Mohammed Alhassan, Johnnie Ben-Edigbe
Abstract:
Traffic flow in adverse weather conditions has been investigated in this study for general traffic, weekday traffic and weekend traffic. The empirical evidence strongly supports the view that rainfall affects macroscopic traffic flow parameters. Data generated from a basic highway section along J5 in Johor Bahru, Malaysia was synchronized with 161 rain events over a period of three months. This revealed a 4.90%, 6.60% and 11.32% reduction in speed for light rain, moderate rain and heavy rain conditions respectively. The corresponding capacity reductions in the three rainfall regimes are 1.08% for light rain, 6.27% for moderate rain and 29.25% for heavy rain. In the weekday traffic, speed drops of 8.1% and 16.05% were observed for light and heavy conditions, while the speed in the moderate rain condition increased by 12.6%. The capacity drops for weekday traffic are 4.40% for light rain, 9.77% for moderate rain and 45.90% for heavy rain. The weekend traffic showed speed differences between the dry condition and the three rainy conditions of 6.70% for light rain, 8.90% for moderate rain and 13.10% for heavy rain. The capacity changes computed for the weekend traffic were 0.20% in light rain, 13.90% in moderate rain and 16.70% in heavy rain. No traffic instabilities were observed throughout the observation period, and the capacities reported for each rain condition were below the no-rain condition capacity. Rainfall has a tremendous impact on traffic flow, and this may have implications for shock wave propagation.
Keywords: Highway Capacity, Dry condition, Rainfall Intensity, Rainy condition, Traffic Flow Rate.
2293 Hit-or-Miss Transform as a Tool for Similar Shape Detection
Authors: Osama Mohamed Elrajubi, Idris El-Feghi, Mohamed Abu Baker Saghayer
Abstract:
This paper describes the identification of specific shapes within binary images using the morphological Hit-or-Miss Transform (HMT). The Hit-or-Miss transform is a general binary morphological operation that can be used to search for particular patterns of foreground and background pixels in an image. It is in fact a basic operation of binary morphology, since almost all other binary morphological operators are derived from it. The input of this method is a binary image and a structuring element (a template which will be searched for in the binary image), while the output is another binary image. In this paper a modification of the Hit-or-Miss transform is proposed in which the accuracy of the algorithm is adjusted according to the similarity between the template and the sought shape. The method has been implemented in the C language. The algorithm has been tested on several images, and the results have shown that the new method can be used for similar shape detection.
Keywords: Hit-or/and-Miss Operator/Transform, HMT, binary morphological operation, shape detection, binary images processing.
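SciPy provides a direct implementation of the binary Hit-or-Miss transform; a minimal sketch locates a small cross-shaped pattern in a binary image (the similarity-tolerant modification proposed in the paper is not reproduced, and the example image and structuring element are illustrative).

```python
import numpy as np
from scipy.ndimage import binary_hit_or_miss

# Binary test image containing one cross, and a cross-shaped structuring element (the "hit" pattern)
image = np.zeros((9, 9), dtype=bool)
image[3:6, 4] = True
image[4, 3:6] = True

cross = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)

# True exactly at pixels where the foreground pattern matches the structuring element
matches = binary_hit_or_miss(image, structure1=cross)
print(np.argwhere(matches))   # expected match at the cross center
```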
2292 Thermal Performance Analysis of Nanofluids in a Concentric Heat Exchanger Equipped with Turbulators
Authors: Feyza Eda Akyurek, Bayram Sahin, Kadir Gelis, Eyuphan Manay, Murat Ceylan
Abstract:
Turbulent forced convection heat transfer and pressure drop characteristics of an Al2O3-water nanofluid flowing through a concentric tube heat exchanger with and without coiled wire turbulators were studied experimentally. The experiments were conducted in the Reynolds number range from 4000 to 20000, at particle volume concentrations of 0.8 vol.% and 1.6 vol.%. Two turbulators with pitches of 25 mm and 39 mm were used. The results for the nanofluids indicated that the average Nusselt number increased much more with increasing Reynolds number compared to that of pure water. Thermal conductivity enhancement by the nanofluids resulted in heat transfer enhancement. When the pressure drop of the alumina/water nanofluid was analyzed, it was nearly equal to that of pure water over the same Reynolds number range. It was concluded that nanofluids with volume fractions of 0.8% and 1.6% did not have a significant effect on the pressure drop. However, the use of wire coils in the heat exchanger enhanced heat transfer as well as the pressure drop.
Keywords: Turbulators, heat exchanger, nanofluids, heat transfer enhancement.