Search results for: Satellite image fusion
1502 An Additive Watermarking Technique in Gray Scale Images Using Discrete Wavelet Transformation and Its Analysis on Watermark Strength
Authors: Kamaldeep Joshi, Rajkumar Yadav, Ashok Kumar Yadav
Abstract:
Digital watermarking is a procedure to prevent unauthorized access to and modification of personal data. It ensures that communication between two parties remains secure and undetected. This paper investigates the effect of watermark strength on grayscale images using an additive Discrete Wavelet Transform (DWT) technique. In this method, the grayscale host image is divided into four sub-bands: LL (Low-Low), HL (High-Low), LH (Low-High) and HH (High-High), and the watermark is inserted into the LL sub-band using the DWT. Since the image is divided into four sub-bands, a watermark equal in size to the LL sub-band is inserted and the results are discussed. LL represents the average component of the host image and contains most of the image information. Two kinds of experiments are performed: in the first, the same watermark is embedded in different images; in the second, the watermark strength is varied by a scaling factor s (s = 10, 20, 30, 40, 50) and the watermark is inserted into the same image.
Keywords: Watermarking, discrete wavelet transform, scaling factor, steganography.
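To make the embedding step concrete, here is a minimal sketch of additive LL-band watermarking, assuming the PyWavelets package and a Haar wavelet (the abstract does not name the mother wavelet); the function names and the extraction-by-differencing step are illustrative, not the paper's implementation.

```python
# Hypothetical sketch of additive LL-band watermark embedding with PyWavelets;
# assumes a watermark already resized to the LL sub-band shape.
import numpy as np
import pywt

def embed_dwt_additive(host: np.ndarray, watermark: np.ndarray, s: float) -> np.ndarray:
    """Embed `watermark` into the LL sub-band of `host` with strength s."""
    LL, (LH, HL, HH) = pywt.dwt2(host.astype(float), "haar")
    assert watermark.shape == LL.shape, "watermark must match the LL sub-band size"
    LL_marked = LL + s * watermark           # additive embedding in the LL band
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

def extract_dwt_additive(marked: np.ndarray, host: np.ndarray, s: float) -> np.ndarray:
    """Recover the watermark by differencing the LL bands of both images."""
    LL_m, _ = pywt.dwt2(marked.astype(float), "haar")
    LL_h, _ = pywt.dwt2(host.astype(float), "haar")
    return (LL_m - LL_h) / s
```

Larger s makes the watermark more robust but more visible, which is the trade-off the two experiments above probe.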
1501 Application of l1-Norm Minimization Technique to Image Retrieval
Authors: C. S. Sastry, Saurabh Jain, Ashish Mishra
Abstract:
Image retrieval is a topic of high current scientific interest. The important steps in an image retrieval system are the extraction of discriminative features and a feasible similarity metric for retrieving the database images whose content is similar to that of the search image. Gabor filtering is a widely adopted technique for feature extraction from texture images. The recently proposed sparsity-promoting l1-norm minimization technique finds the sparsest solution of an under-determined system of linear equations. In the present paper, the l1-norm minimization technique is used as a similarity metric for image retrieval. Simulation results demonstrate that it provides a promising alternative to existing similarity metrics. In particular, the cases where the l1-norm minimization technique works better than the Euclidean distance metric are singled out.
Keywords: l1-norm minimization, content-based retrieval, modified Gabor function.
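As a hedged illustration of the underlying optimization, the sketch below solves min ||x||_1 subject to Ax = b as a linear program with SciPy; the dictionary A and vector b are random stand-ins for the Gabor feature data, not quantities defined in the paper.

```python
# Illustrative l1-norm minimization posed as a linear program: write x = u - v
# with u, v >= 0, then minimize sum(u) + sum(v) subject to A(u - v) = b.
import numpy as np
from scipy.optimize import linprog

def l1_min(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    m, n = A.shape
    c = np.ones(2 * n)                       # objective equals ||x||_1
    A_eq = np.hstack([A, -A])                # encodes A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))            # under-determined system (m < n)
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]       # sparse ground truth
x_hat = l1_min(A, A @ x_true)                # typically recovers the sparse x
```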
1500 Enhance Image Transmission Based on DWT with Pixel Interleaver
Authors: Muhanned Alfarras
Abstract:
The recent growth of multimedia transmission over wireless communication systems poses the challenge of protecting data from loss due to wireless channel effects. Images are corrupted by noise and fading when transmitted over a wireless channel. Because an image is transmitted block by block, severe fading can damage entire image blocks. The aim of this paper arises from the need to enhance digital images at the wireless receiver side. A Boundary Interpolation (BI) algorithm using wavelets is adapted here to reconstruct lost blocks in the image at the receiver, based on the correlation between a lost block and its neighbors. A new technique combining the wavelet-based BI algorithm with a pixel interleaver is then implemented. The pixel interleaver redistributes the pixels of the original image to new positions before transmission, so that a block lost in the wireless channel affects only individual, scattered pixels; these lost pixels can then be recovered at the receiver using the wavelet-based BI algorithm. The results show that the proposed combination of wavelet-based boundary interpolation with a pixel interleaver performs better in terms of MSE and PSNR.
Keywords: Image transmission, wavelet, pixel interleaver, boundary interpolation algorithm.
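A minimal sketch of the pixel-interleaving idea follows: a fixed pseudo-random permutation scatters pixels before transmission, so a block lost in the channel de-interleaves to isolated single-pixel errors that boundary interpolation can repair from neighbors. The permutation seed and toy image are assumptions.

```python
# Scatter pixels with a shared pseudo-random permutation before transmission;
# invert it at the receiver so burst losses become isolated pixel errors.
import numpy as np

def interleave(img: np.ndarray, seed: int = 42) -> np.ndarray:
    perm = np.random.default_rng(seed).permutation(img.size)
    return img.ravel()[perm].reshape(img.shape)

def deinterleave(img: np.ndarray, seed: int = 42) -> np.ndarray:
    perm = np.random.default_rng(seed).permutation(img.size)
    out = np.empty(img.size, dtype=img.dtype)
    out[perm] = img.ravel()                  # invert the permutation
    return out.reshape(img.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
tx = interleave(img)
tx[0:2, 0:2] = 0                             # simulate a lost 2x2 block in the channel
rx = deinterleave(tx)                        # errors are now scattered single pixels
```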
1499 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code
Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader
Abstract:
In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes capable of protecting the data onboard satellites. This paper is aimed at detecting and correcting such errors using the Hamming code, which uses parity bits to protect against single-bit errors onboard a satellite in Low Earth Orbit. The paper focuses on the study of Low Earth Orbit satellites and on generating the Hamming code matrix for EDAC using computer programs. The most effective version generated was the Hamming (16, 11, 4) code, implemented in MATLAB, and the paper compares this scheme with other EDAC mechanisms, including other versions of the Hamming code and the Cyclic Redundancy Check (CRC), and discusses its limitations. This version of the Hamming code guarantees single-bit error correction as well as double-bit error detection. Furthermore, it proved fast, with a checking time of 5.669 nanoseconds, has a relatively higher code rate and lower bit overhead than the other versions, and can detect a greater percentage of errors per code length than other EDAC schemes with similar capabilities. In conclusion, with proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
Keywords: Bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset.
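To illustrate the parity-bit principle behind the Hamming (16, 11, 4) scheme, here is a sketch of the smaller Hamming(7,4) code, which corrects any single-bit error in a 7-bit codeword. The matrices follow the standard textbook construction (parity bits at positions 1, 2 and 4), not the paper's MATLAB implementation.

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits; a non-zero syndrome
# gives the 1-based position of the flipped bit directly.
import numpy as np

G = np.array([[1, 1, 0, 1],                  # generator matrix; rows are
              [1, 0, 1, 1],                  # codeword positions 1..7
              [1, 0, 0, 0],
              [0, 1, 1, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 1, 0, 1, 0, 1],         # parity-check matrix; column j is
              [0, 1, 1, 0, 0, 1, 1],         # the binary representation of j
              [0, 0, 0, 1, 1, 1, 1]])

def encode(data4):
    return (G @ np.asarray(data4)) % 2

def decode(code7):
    code7 = np.asarray(code7).copy()
    s = (H @ code7) % 2
    pos = int(s[0] + 2 * s[1] + 4 * s[2])    # non-zero syndrome = flipped position
    if pos:
        code7[pos - 1] ^= 1                  # correct the single-bit error
    return code7[[2, 4, 5, 6]]               # data bits sit at positions 3, 5, 6, 7

cw = encode([1, 0, 1, 1])
cw[4] ^= 1                                   # inject a single-bit error
assert (decode(cw) == [1, 0, 1, 1]).all()
```

Extended codes such as (16, 11, 4) add one overall parity bit, which is what upgrades single-error correction to simultaneous double-error detection.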
1498 Exploration of Least Significant Bit Based Watermarking and Its Robustness against Salt and Pepper Noise
Authors: Kamaldeep Joshi, Rajkumar Yadav, Sachin Allwadhi
Abstract:
Image steganography is a leading approach to information hiding: the information is hidden within an image, and the image travels openly on the Internet. The Least Significant Bit (LSB) method is one of the most popular methods of image steganography; the information bit is hidden in the LSB of an image pixel. In one-bit LSB steganography, the total number of pixels equals the total number of message bits. In this paper, the LSB method of image steganography is used for watermarking, watermarking being an application of steganography. The watermark contains 80*88 pixels and each pixel requires 8 bits for its binary form, so the total number of bits required to hide the watermark is 80*88*8 = 56,320. The experiment was performed on standard 256*256 and 512*512 images. After watermark insertion, histogram analysis was performed. Salt and pepper noise with a noise factor of 0.02 was then added to the stego image in order to evaluate the robustness of the method, and the watermark was successfully retrieved after the noise was added. A further experiment assessed the imperceptibility of the stego image and the fidelity of the retrieved watermark. The results show that the LSB watermarking scheme is robust to salt and pepper noise.
Keywords: LSB, watermarking, salt and pepper, PSNR.
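A short sketch of the one-bit LSB embedding and extraction described above; the cover size, random watermark bits and helper names are placeholders.

```python
# One-bit LSB embedding: clear each carrier pixel's LSB and write one
# watermark bit into it; extraction just reads the LSBs back.
import numpy as np

def lsb_embed(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    stego = cover.copy().ravel()
    stego[: bits.size] = (stego[: bits.size] & 0xFE) | bits   # overwrite LSBs
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    return stego.ravel()[:n_bits] & 1

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (256, 256), dtype=np.uint8)
bits = rng.integers(0, 2, 80 * 88 * 8, dtype=np.uint8)        # 56,320 watermark bits
stego = lsb_embed(cover, bits)
assert (lsb_extract(stego, bits.size) == bits).all()
```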
1497 A Robust Image Watermarking Scheme Using Image Moment Normalization
Authors: Latha Parameswaran, K. Anbumani
Abstract:
Multimedia security is a highly significant area of concern. A number of papers on robust digital watermarking have been presented, but no standards have been defined so far, so multimedia security remains a pressing problem. The aim of this paper is to design a robust image-watermarking scheme that can withstand a diverse set of attacks. The proposed scheme provides a robust solution integrating image moment normalization, a content-dependent watermark and the discrete wavelet transform. Moment normalization makes it possible to recover the watermark even under geometric attacks. Content-dependent watermarks are a powerful means of authentication, as the data is watermarked with its own features. Discrete wavelet transforms are used because they describe image features well. The proposed scheme finds its place in validating identification cards and financial instruments.
Keywords: Watermarking, moments, wavelets, content-based, benchmarking.
1496 Recursive Algorithms for Image Segmentation Based on a Discriminant Criterion
Authors: Bing-Fei Wu, Yen-Lin Chen, Chung-Cheng Chiu
Abstract:
In this study, a new criterion for determining the number of classes into which an image should be segmented is proposed. This criterion is based on discriminant analysis, which measures the separability among the segmented classes of pixels. Based on the new discriminant criterion, two algorithms are proposed for recursively segmenting the image into the determined number of classes. The proposed methods can automatically and correctly segment objects under various illuminations into separate images for further processing. Experiments on the extraction of text strings from complex document images demonstrate the effectiveness of the proposed methods.
Keywords: Image segmentation, multilevel thresholding, clustering, discriminant analysis.
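For a concrete instance of such a discriminant criterion, the sketch below computes the classic two-class between-class variance (Otsu's criterion) on an 8-bit grayscale image; the paper's recursive multilevel procedure would reapply a search of this kind to each resulting class. This is an illustrative baseline, not the authors' algorithm.

```python
# Pick the threshold maximizing the between-class variance w0*w1*(mu0 - mu1)^2,
# the standard separability measure from discriminant analysis.
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                    # gray-level probabilities
    best_t, best_sep = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()    # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        sep = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if sep > best_sep:
            best_t, best_sep = t, sep
    return best_t
```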
1495 A Robust Method for Encrypted Data Hiding Technique Based on Neighborhood Pixels Information
Authors: Ali Shariq Imran, M. Younus Javed, Naveed Sarfraz Khattak
Abstract:
This paper presents a novel method for data hiding based on neighborhood pixel information, used to calculate the number of bits that can be substituted, together with a modified Least Significant Bit technique for data embedding. The modified solution is independent of the nature of the data to be hidden and gives correct results along with unnoticeable image degradation. To find the number of bits available for data hiding, the technique uses the green component of the image, as it is less sensitive to the human eye, making it very hard for an observer to tell whether the image carries hidden data. The application further encrypts the data using a custom-designed algorithm before embedding the bits into the image, for additional security. The overall process consists of three main modules, namely embedding, encryption and extraction.
Keywords: Data hiding, image processing, information security, steganography.
1494 Detection and Pose Estimation of People in Images
Authors: Mousa Mojarrad, Amir Masoud Rahmani, Mehrab Mohebi
Abstract:
Detection, feature extraction and pose estimation of people in images and video are made challenging by the variability of human appearance, the complexity of natural scenes and the high dimensionality of articulated body models; they have also been important topics in image, signal and vision computing in recent years. In this paper, a system is proposed and tested that classifies people in 2D images into four body types. The system extracts a person's dimensions from the image and assigns one of four categories: tall fat, short fat, tall thin or short thin. The fat/thin and tall/short judgments are derived from the human body measurements extracted from the image. The system also extracts body dimensions such as height and width and presents them in its output.
Keywords: Analysis of image processing, Canny edge detection, human body recognition, measurement, pose estimation, 2D human dimension.
1493 Ice Load Measurements on Known Structures Using Image Processing Methods
Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka
Abstract:
This study employs a method based on image analysis and structure information to detect ice accumulated on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions. Image processing methods are used here to measure ice loads automatically. Most image processing methods are developed based on analysis of captured images; in this method, ice loads on structures are calculated by defining structure coordinates and processing captured images. A pyramidal structure with nine cylindrical bars is designed as the known structure in the experimental setup, and unsymmetrical ice accumulated on the structure in a cold room represents the actual case for the experiments. Camera intrinsic and extrinsic parameters are used to express the structure coordinates in the image coordinate system, according to the camera location and angle. Thresholding is applied to the captured images to detect the iced structure in a binary image. The ice thickness of each element is calculated by combining information from the binary image with the structure coordinates, and the thickness of each structural element is obtained by averaging ice diameters from different camera views. Comparison between ice load measurements obtained with this method and the actual ice loads shows positive correlation within an acceptable range of error. The method can be applied to complex structures by defining the structure and camera coordinates.
Keywords: Camera calibration, ice detection, ice load measurements, image processing.
1492 A Framework for the Analysis of the Stereotypes in Accounting
Authors: Nadia Albu, Cătălin Nicolae Albu, Mădălina Maria Gîrbină, Maria Iuliana Sandu
Abstract:
Professions are concerned about their public image, and this public image is represented by stereotypes. Research is needed to understand how accountants are perceived by different actors in society in different contexts, which would allow universities, professional bodies and employers to adjust their strategies to attract the right people to the profession and to their organizations. In this paper we develop a framework to be used in empirical testing in different environments to determine and analyze the accountant's stereotype. This framework will be useful in analyzing the nuances associated with the accountant's image and in understanding the factors that lead to uniformity in the profession and those leading to diversity from one context (country, type of country, region) to another.
Keywords: Accounting profession, accounting stereotype, framework, public image.
1491 Molecular Dynamics Simulation of Thermal Properties of Au3Ni Nanowire
Authors: J. Davoodi, F. Katouzi
Abstract:
The aim of this research was to calculate the thermal properties of an Au3Ni nanowire. The molecular dynamics (MD) simulation technique was used to obtain the effect of radius on the energy, the melting temperature and the latent heat of fusion in the isobaric-isothermal (NPT) ensemble. The Quantum Sutton-Chen (Q-SC) many-body interatomic potentials were used for the gold (Au) and nickel (Ni) elements, and a mixing rule was devised to obtain the parameters of these potentials for the nanowire states. Our MD simulation results show that the melting temperature and latent heat of fusion increase with increasing nanowire diameter, while the cohesive energy decreases with increasing diameter.
Keywords: Au3Ni nanowire, thermal properties, molecular dynamics simulation.
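For reference, the Sutton-Chen family of potentials has the general form below, as given in the standard literature (the paper's quantum-corrected parameters and Au-Ni mixing rule are not reproduced here):

```latex
E = \varepsilon \sum_i \left[ \frac{1}{2} \sum_{j \neq i} \left( \frac{a}{r_{ij}} \right)^{n} - c \sqrt{\rho_i} \right],
\qquad
\rho_i = \sum_{j \neq i} \left( \frac{a}{r_{ij}} \right)^{m}
```

Here $r_{ij}$ is the interatomic distance, $a$ a lattice parameter, $\varepsilon$ an energy scale, and $n$, $m$, $c$ element-specific constants; the square-root density term gives the potential its many-body character.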
1490 HSV Image Watermarking Scheme Based on Visual Cryptography
Authors: Rawan I. Zaghloul, Enas F. Al-Rawashdeh
Abstract:
In this paper a simple watermarking method for color images is proposed. The proposed method is based on watermark embedding in the histograms of the HSV planes using visual cryptography watermarking. The method has proved to be robust against various image processing operations such as filtering, compression and additive noise, and against various geometrical attacks such as rotation, scaling, cropping, flipping and shearing.
Keywords: Histogram, HSV image, visual cryptography, watermark.
1489 Grouping and Indexing Color Features for Efficient Image Retrieval
Authors: M. V. Sudhamani, C. R. Venugopal
Abstract:
Content-Based Image Retrieval (CBIR) aims at searching image databases for specific images that are similar to a given query image, based on matching features derived from the image content. This paper focuses on a low-dimensional color-based indexing technique for achieving efficient and effective retrieval performance. In our approach, the color features are extracted using the mean shift algorithm, a robust clustering technique. The cluster (region) mode is then used as the representative of the image in 3-D color space. The feature descriptor consists of the representative color of a region and is indexed using a spatial indexing method based on the R*-tree, thus avoiding the high-dimensional indexing problems associated with the traditional color histogram. Alternatively, the images in the database are clustered based on region-feature similarity using Euclidean distance, and only the representative (centroid) features of these clusters are indexed using the R*-tree, thus improving efficiency. For similarity retrieval, each representative color in the query image or region is used independently to find regions containing that color, and the results of these methods are compared. A Java-based query engine supporting query-by-example is built to retrieve images by color.
Keywords: Content-based, indexing, cluster, region.
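A small sketch of the color-feature extraction step: cluster an image's pixels in 3-D color space with mean shift and keep the cluster modes as the image's representative colors. The bandwidth value and the pixel subsampling are assumptions for tractability, not parameters from the paper.

```python
# Mean shift over pixel colors; the returned cluster centers are the region
# modes used as low-dimensional color descriptors.
import numpy as np
from sklearn.cluster import MeanShift

def representative_colors(img: np.ndarray) -> np.ndarray:
    """img: H x W x 3 array; returns one representative color per region."""
    pixels = img.reshape(-1, 3).astype(float)[::16]    # subsample for speed
    ms = MeanShift(bandwidth=30.0, bin_seeding=True)   # bandwidth in color units
    ms.fit(pixels)
    return ms.cluster_centers_                         # modes in 3-D color space
```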
1488 New Graph Similarity Measurements Based on Isomorphic and Nonisomorphic Data Fusion and Their Use in the Prediction of the Pharmacological Behavior of Drugs
Authors: Irene Luque Ruiz, Manuel Urbano Cuadrado, Miguel Ángel Gómez-Nieto
Abstract:
New graph similarity methods are proposed in this work with the aim of refining the chemical information extracted from the matching of molecules. For this purpose, the isomorphic and nonisomorphic subgraphs were fused into a new similarity measure, the Approximate Similarity, using several approaches. The application of the proposed method to the development of quantitative structure-activity relationships (QSAR) has provided reliable tools for predicting several pharmacological parameters: the binding of steroids to the corticosteroid-binding globulin receptor, the activity of benzodiazepine receptor compounds, and blood-brain barrier permeability. Acceptable results were obtained for the models presented here.
Keywords: Graph similarity, Nonisomorphic dissimilarity, Approximate similarity, Drug activity prediction.
1487 Active Contours with Prior Corner Detection
Authors: U.A.A. Niroshika, Ravinda G.N. Meegama
Abstract:
Deformable active contours are widely used in computer vision and image processing for image segmentation, especially in biomedical image analysis. The active contour, or "snake", deforms towards a target object under the control of internal, image and constraint forces. However, if the contour is initialized with too few control points, there is a high probability of missing the sharp corners of the object during deformation. In this paper, a new technique is proposed to construct the initial contour by incorporating prior knowledge of significant corners of the object detected using the Harris operator. This reconstructed contour then deforms, attracting the snake towards the target object without missing the corners. Experimental results on several synthetic images show that the new technique handles sharp corners with higher accuracy than traditional methods.
Keywords: Active contours, image segmentation, Harris operator, snakes.
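A brief sketch of the corner-seeding step: detect strong Harris corners and use them as initial control points for the snake. The parameter values (block size, k, the 1% response threshold) and the input file name are illustrative assumptions, not the paper's settings.

```python
# Harris corner detection with OpenCV; the strongest responses seed the
# initial snake so sharp corners are not surpassed during deformation.
import cv2
import numpy as np

def initial_control_points(gray: np.ndarray, max_points: int = 50) -> np.ndarray:
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > 0.01 * response.max())   # keep strong corners
    pts = np.column_stack([xs, ys])                       # (x, y) control points
    return pts[:max_points]

gray = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)     # hypothetical input
snake_init = initial_control_points(gray)                 # seeds for the snake
```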
1486 Pulsed Multi-Layered Image Filtering: A VLSI Implementation
Authors: Christian Mayr, Holger Eisenreich, Stephan Henker, René Schüffny
Abstract:
Image convolution similar to the receptive fields found in mammalian visual pathways has long been used in conventional image processing in the form of Gabor masks. However, no VLSI implementation of parallel, multi-layered pulsed processing had so far been brought forward to emulate this property. We present a technical realization of such a pulsed image processing scheme. The discussed IC also serves as a general testbed for VLSI-based pulsed information processing, which is of interest especially with regard to the robustness of representing an analog signal in the phase or duration of a pulsed, quasi-digital signal, as well as the possibility of direct digital manipulation of such an analog signal. The network connectivity and processing properties are reconfigurable so as to allow adaptation to various processing tasks.
Keywords: Neural image processing, pulse computation application, pulsed Gabor convolution, VLSI pulse routing.
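For reference, here is a sketch of the conventional Gabor-mask convolution that the pulsed VLSI scheme above emulates; all kernel parameters (size, sigma, wavelength, aspect ratio) are illustrative choices.

```python
# A small Gabor filter bank: one oriented kernel per direction, applied by
# ordinary 2-D convolution, mimicking oriented receptive fields.
import cv2
import numpy as np

def gabor_filter_bank(gray: np.ndarray, n_orientations: int = 4) -> list:
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations             # orientation of the mask
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0.0)
        responses.append(cv2.filter2D(np.float32(gray), cv2.CV_32F, kernel))
    return responses
```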
1485 A Sub Pixel Resolution Method
Authors: S. Khademi, A. Darudi, Z. Abbasi
Abstract:
One of the main limitations on the resolution of optical instruments is the size of the sensor's pixels. In this paper we introduce a new sub-pixel resolution algorithm to enhance the resolution of images. The method is based on the analysis of multiple images recorded in rapid succession during fine relative motion between the image and the pixel arrays of the CCD. It is shown that applying this method to a sample noise-free image enhances the resolution with an error of order 10^-14.
Keywords: Sub-pixel resolution, moving pixels, CCD, image, optical instrument.
1484 A Hybrid Approach for Color Image Quantization Using K-means and Firefly Algorithms
Authors: Parisut Jitpakdee, Pakinee Aimmanee, Bunyarit Uyyanonvara
Abstract:
Color image quantization (CQ) is an important problem in computer graphics and image processing. The aim of quantization is to reduce the number of colors in an image with minimum distortion. Clustering is a widely used technique for color quantization, where all colors in an image are grouped into small clusters. In this paper, we propose a new hybrid approach to color quantization using the firefly algorithm (FA) and the K-means algorithm. The firefly algorithm is a swarm-based algorithm that can be used for solving optimization problems. The proposed method can overcome the drawbacks of both algorithms, such as the local-optima convergence problem of K-means and the early convergence of the firefly algorithm. Experiments on three commonly used images and the comparison of results show that the proposed algorithm surpasses both the baseline K-means clustering and the original firefly algorithm.
Keywords: Clustering, color quantization, firefly algorithm, K-means.
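A baseline sketch of the K-means half of the hybrid: quantize an image to k colors by clustering its pixels; in the paper's scheme, the firefly step (not shown) would supply better-placed initial cluster centers. The value of k is a placeholder.

```python
# K-means color quantization: cluster pixel colors, then replace every pixel
# with its cluster center to get a k-color image.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_quantize(img: np.ndarray, k: int = 16) -> np.ndarray:
    pixels = img.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    palette = km.cluster_centers_                      # k representative colors
    quantized = palette[km.labels_]                    # map each pixel to its center
    return quantized.reshape(img.shape).astype(np.uint8)
```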
1483 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor
Authors: Jinseon Song, Yongwan Park
Abstract:
In this paper, we propose a method that estimates a user's position from a single camera, based on a pre-built image database. Previous positioning approaches calculate distance from signal arrival times, as in GPS (Global Positioning System) or RF (Radio Frequency) systems; however, these methods have a large error range owing to signal interference. Our solution estimates position with a camera sensor instead. A single camera cannot easily provide relative position data directly, and a stereo camera struggles to provide real-time position data because of the volume of image data involved. First, we build an image database of the space in which the positioning service is to be provided, using a single camera. Next, we judge similarity by matching the database images against the image transmitted by the user. Finally, we determine the user's position from the position of the most similar database image. To verify the proposed method, we experimented in real indoor and outdoor environments. The proposed method has a wide positioning range and can determine not only the user's position but also the user's orientation.
Keywords: Positioning, distance, camera, features, SURF (Speeded-Up Robust Features), database, estimation.
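A condensed sketch of the image-matching step. The paper's keywords name SURF; this sketch substitutes ORB, which ships with stock OpenCV builds (SURF lives in the non-free opencv-contrib module). All file names and the database layout are placeholders.

```python
# Match the user's photo against a database of location-tagged views and pick
# the view with the most cross-checked descriptor matches.
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_score(query_img, db_img) -> int:
    """Number of cross-checked descriptor matches between two images."""
    _, d1 = orb.detectAndCompute(query_img, None)
    _, d2 = orb.detectAndCompute(db_img, None)
    if d1 is None or d2 is None:
        return 0
    return len(matcher.match(d1, d2))

query = cv2.imread("user_photo.png", cv2.IMREAD_GRAYSCALE)
db = {"hall_x10_y4.png": cv2.imread("hall_x10_y4.png", cv2.IMREAD_GRAYSCALE)}
best = max(db, key=lambda name: match_score(query, db[name]))  # most similar view
```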
1482 A Review of In-Orbit Observations of Radiation-Induced Effects in Commercial Memories Onboard Alsat-1
Authors: Y. Bentoutou, A.M. Si Mohammed
Abstract:
This paper presents a review of an 8-year study of radiation effects in commercial memory devices operating within the main on-board computer system OBC386 of the Algerian microsatellite Alsat-1. A statistical analysis of single-event upset (SEU) and multiple-bit upset (MBU) activity in these commercial memories shows that the typical SEU rate in Alsat-1's orbit is 4.04 × 10⁻⁷ SEU/bit/day, where 98.6% of these SEUs cause single-bit errors, 1.22% cause double-byte errors, and the remaining SEUs result in multiple-bit and severe errors.
Keywords: Radiation effects, error detection and correction, satellite computer, small satellite mission.
1481 Performance Evaluation of Compression Algorithms for Developing and Testing Industrial Imaging Systems
Authors: Daniel F. Garcia, Julio Molleda, Francisco Gonzalez, Ruben Usamentiaga
Abstract:
The development of many measurement and inspection systems based on real-time image processing cannot be carried out entirely in a laboratory, due to the size or the temperature of the manufactured products; such systems must be developed in successive phases. First, the system is installed in the production line with only an operational service to acquire images of the products and other complementary signals. Next, a service for recording the images and signals must be developed and integrated into the system. Only once a large set of product images is available can the real-time image processing algorithms for measurement or inspection be developed under realistic conditions. Finally, the recording service is turned off or removed, and the system operates only with the real-time services for acquiring and processing images. This article presents a systematic performance evaluation of the image compression algorithms currently available for implementing a real-time recording service. The results allow a trade-off to be established between the compression of the image size and the CPU time required to reach that compression level.
Keywords: Lossless image compression, codec performance evaluation, grayscale codec comparison, real-time image recording.
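A tiny benchmarking sketch in the spirit of the evaluation above: measure compression ratio versus CPU time for lossless PNG at several effort levels. It uses Pillow; the image path is a placeholder, and a real evaluation would average over a large image set and several codecs.

```python
# Compression ratio vs. CPU time for PNG at three zlib effort levels.
import io
import time
from PIL import Image

img = Image.open("sample_grayscale.png").convert("L")
raw_size = img.width * img.height                      # 1 byte per pixel

for level in (1, 6, 9):                                # low/default/max effort
    buf = io.BytesIO()
    t0 = time.perf_counter()
    img.save(buf, format="PNG", compress_level=level)
    dt = time.perf_counter() - t0
    ratio = raw_size / buf.tell()
    print(f"level={level}  ratio={ratio:.2f}  cpu={dt * 1e3:.1f} ms")
```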
1480 Face Recognition Using Double Dimension Reduction
Authors: M. A Anjum, M. Y. Javed, A. Basit
Abstract:
In this paper a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient while improving recognition results. In pattern recognition techniques, the discriminative information in an image increases with resolution only up to a certain extent; consequently, face recognition results improve as face image resolution increases and level off at a certain resolution. In the proposed model, an image decimation algorithm is first applied to the face image, reducing its dimension to the resolution level that gives the best recognition results. The Discrete Cosine Transform (DCT), chosen for its computational speed and feature extraction potential, is then applied to the face image, and a subset of low- to mid-frequency DCT coefficients that represent the face adequately and provide the best recognition results is retained. A trade-off between the decimation factor, the number of retained DCT coefficients, and the recognition rate at minimum computation is obtained. Preprocessing of the image increases its robustness against variations in pose and illumination level. The new model has been tested on several databases, including the ORL database, the Yale database and a color database, and has performed much better than other techniques. The significance of the model is twofold: (1) dimension reduction to an effective and suitable face image resolution, and (2) retention of the appropriate DCT coefficients to achieve the best recognition results under varying pose, intensity and illumination.
Keywords: Biometrics, DCT, Face Recognition, Feature extraction.
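A small sketch of the second reduction step: take the 2-D DCT of a decimated face image and keep only a low-frequency block of coefficients as the feature vector. The 8x8 block size is an illustrative choice, not the paper's tuned value.

```python
# 2-D DCT feature extraction: the top-left coefficients carry the low- to
# mid-frequency content that represents the face compactly.
import numpy as np
from scipy.fft import dctn

def dct_features(face: np.ndarray, keep: int = 8) -> np.ndarray:
    """Return the top-left `keep` x `keep` low-frequency DCT coefficients."""
    coeffs = dctn(face.astype(float), norm="ortho")    # 2-D DCT-II
    return coeffs[:keep, :keep].ravel()                # compact feature vector
```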
1479 Analysis of Patterns in TV Commercials that Recognize NGO Image
Authors: J. Areerut, F. Samuel
Abstract:
The purpose of this research is to analyze the patterns of television commercials and how they help non-governmental organizations build their image in Thailand. It examines how public relations can affect an organization's image: bad public relations management can hurt a reputation, while even a small amount of public relations work can help an organization become more widely recognized and, eventually, more widely accepted. The main idea of this paper is to study and analyze the patterns of television commercials that have the greatest impact on non-governmental organizations' images. The research uses questionnaires and content analysis to summarize its results. The findings show which patterns of television commercials suit non-governmental organizations' work in Thailand, and will be useful both for any non-governmental organization that wishes to build its image through television commercials and for further work based on this research.
Keywords: Television commercial (TVC), organization image, non-governmental organization (NGO), public relations.
1478 Program Memories Error Detection and Correction On-Board Earth Observation Satellites
Authors: Y. Bentoutou
Abstract:
Memory error detection and correction (EDAC) aims to secure the transfer of data between the central processing unit of a satellite onboard computer and its local memory. In this paper, the application of a double-bit error detection and correction method is described and implemented in Field Programmable Gate Array (FPGA) technology. The performance of the proposed EDAC method is measured and compared with that of two different EDAC devices using the same FPGA technology. A statistical analysis of single-event upset (SEU) and multiple-bit upset (MBU) activity in commercial memories onboard the first Algerian microsatellite, Alsat-1, is given.
Keywords: Error Detection and Correction, On-board computer, small satellite missions.
1477 A Comparison of Image Data Representations for Local Stereo Matching
Authors: André Smith, Amr Abdel-Dayem
Abstract:
The stereo matching problem, although it has been studied for several decades, continues to be an active area of research. Its goal is to find correspondences between elements in a pair of stereoscopic images; with these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advances in this field have led to experiments with various techniques, from graph-cut energy minimization to artificial neural networks. At the base of these techniques is a cost function, used to evaluate the likelihood of a particular match between points in each image. While the cost is, at its core, based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how effectively each data representation reduces the cost of the correct correspondence relative to other possible matches.
Keywords: Colour data, local stereo matching, stereo correspondence, disparity map.
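A minimal sketch of a window-based matching cost of the kind discussed above: sum of absolute differences (SAD) over a patch, computed identically for any pixel representation (grayscale, RGB, or another color space). The window size is illustrative, and the window is assumed to lie inside both images.

```python
# SAD matching cost between a window in the left image and the window shifted
# by disparity d in the right image; lower cost = more likely correspondence.
import numpy as np

def sad_cost(left: np.ndarray, right: np.ndarray,
             y: int, x: int, d: int, w: int = 4) -> float:
    """Compare (2w+1)-wide windows at (y, x) and (y, x - d)."""
    a = left[y - w:y + w + 1, x - w:x + w + 1].astype(float)
    b = right[y - w:y + w + 1, x - d - w:x - d + w + 1].astype(float)
    return float(np.abs(a - b).sum())    # sums over color channels too, if any
```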
1476 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG
Authors: Mamta Garg
Abstract:
While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message; without compression, images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the bandwidth commonly available to transmit them over the Internet. Image compression addresses the problem of reducing the amount of data required to represent a digital image, and the performance of any image compression method can be evaluated by measuring the root-mean-square error and the peak signal-to-noise ratio. The method analyzed in this paper is based on the lossy JPEG technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In standard JPEG, both chroma components are downsampled simultaneously; in this paper we compare the results when compression is done by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when the chrominance blue is downsampled rather than the chrominance red, but that the peak signal-to-noise ratio is higher when the chrominance red is downsampled rather than the chrominance blue. In particular, we use hats.jpg as a demonstration of JPEG compression using a low-pass filter, and show that the image is compressed with barely any visual difference under either method.
Keywords: JPEG, discrete cosine transform, quantization, color space conversion, image compression, peak signal-to-noise ratio, compression ratio.
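An illustrative sketch of single-chroma downsampling: convert to YCrCb, halve one chroma plane, restore it, and measure PSNR against the original. OpenCV's plane ordering (Y, Cr, Cb) is used, and the image path follows the paper's hats.jpg example; this is a simplified proxy, not the full JPEG pipeline.

```python
# Downsample only Cr or only Cb, then compare the reconstruction's PSNR.
import cv2
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def downsample_chroma(img_bgr: np.ndarray, plane: int) -> np.ndarray:
    """plane=1 downsamples Cr (chrominance red), plane=2 downsamples Cb."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    c = ycrcb[:, :, plane]
    small = cv2.resize(c, (c.shape[1] // 2, c.shape[0] // 2))  # 2x chroma decimation
    ycrcb[:, :, plane] = cv2.resize(small, (c.shape[1], c.shape[0]))
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

img = cv2.imread("hats.jpg")
for name, p in (("Cr", 1), ("Cb", 2)):
    print(name, "PSNR:", round(psnr(img, downsample_chroma(img, p)), 2), "dB")
```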
1475 Fuzzy Based Visual Texture Feature for Psoriasis Image Analysis
Authors: G. Murugeswari, A. Suruliandi
Abstract:
This paper proposes a rotationally invariant texture feature, based on the roughness property of the image, for psoriasis image analysis. In this work, we apply the feature to image classification and segmentation. The fuzzy concept is employed to overcome the imprecision of roughness. Since a psoriasis lesion is modeled by a rough surface, the feature is extended to calculate the Psoriasis Area Severity Index value. For classification and segmentation, the Nearest Neighbor algorithm is applied. We obtained promising results in identifying affected lesions using the roughness index and in estimating the severity level.
Keywords: Fuzzy texture feature, psoriasis, roughness feature, skin disease.
1474 National Image in the Age of Mass Self-Communication: An Analysis of Internet Users' Perception of Portugal
Authors: L. Godinho, N. Teixeira
Abstract:
Nowadays, massification of Internet access represents one of the major challenges to the traditional powers of the State, among them the power to control its external image. The virtual world has also sparked the interest of the social sciences, which consider it a new field of study, an immense open text where meaning is expressed. In this paper, that immense text has been accessed in order to understand the perception that Internet users from all over the world have of Portugal. Ours is a quantitative and qualitative approach, resorting to buzz, thematic and category analysis. The results confirm the predominance of the sea stereotype in others' vision of the Portuguese people, and show that the national image has adapted to network communication through processes of individuation and paganization.
1473 Low Resolution Single Neural Network Based Face Recognition
Authors: Jahan Zeb, Muhammad Younus Javed, Usman Qayyum
Abstract:
This research paper deals with the implementation of face recognition using a neural network (as the recognition classifier) on low-resolution images. The proposed system contains two parts: preprocessing and face classification. The preprocessing part converts the original image into a blurred image using an average filter and equalizes its histogram (lighting normalization). A bi-cubic interpolation function is then applied to the equalized image to resize it; the resized image is a low-resolution image, allowing faster processing during training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The crux of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid and linear transfer functions, respectively. The training function used in this work is gradient descent with momentum (adaptive learning rate) back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images per subject. The empirical results give accuracies of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects respectively, with a time delay of 0.0934 s per image.
Keywords: Average filtering, bicubic interpolation, neurons, vectorization.
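A short sketch of the preprocessing pipeline described above: average-filter blur, histogram equalization, then bicubic resizing to a low resolution. The kernel size, target size and file name are illustrative assumptions, not the paper's exact settings.

```python
# Preprocessing for the low-resolution classifier: blur, lighting
# normalization, bicubic downscale, then flatten for the network input.
import cv2

def preprocess(path: str, size=(32, 32)):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.blur(gray, (3, 3))                   # average filter
    equalized = cv2.equalizeHist(blurred)              # lighting normalization
    return cv2.resize(equalized, size, interpolation=cv2.INTER_CUBIC)

x = preprocess("orl_subject01_1.pgm").ravel() / 255.0  # flattened network input
```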