Search results for: Super Resolution Image Reconstruction
318 Implementation of Sprite Animation for Multimedia Application
Authors: Ms. Yi Mon Thant
Abstract:
Animation is simply defined as the sequencing of a series of static images to generate the illusion of movement. Most people believe that the actual drawing or creation of the individual images is the animation, when in actuality it is the arrangement of those static images that conveys the motion. To become an animator, it is often assumed that one needs the ability to quickly design masterpiece after masterpiece. Although some semblance of artistic skill is a necessity for the job, the real key to becoming a great animator is the comprehension of timing. This paper uses a combination of sprite animation, frame animation, and other techniques to cause a group of multi-colored static images to slither around a bounded area. In addition to slithering, the images also change the color of different parts of their bodies, much like the real-world creatures that have the ability to change the colors on their bodies. The work was implemented using Java 2 Standard Edition (J2SE). Creating animations is both time-consuming and expensive, regardless of whether they are created by hand or with motion-capture equipment. If animators could reuse old animations and even blend different animations together, a lot of work would be saved in the process. The main objective of this paper is to examine a method for blending several animations together in real time. The paper presents and analyzes a solution using Weighted Skeleton Animation (WSA), which limits CPU time and memory waste while saving time for the animators. The idea presented is described in detail and implemented. Text animation, vertex animation, sprite part animation and whole sprite animation were tested. The resolution, smoothness and movement of the animated images are evaluated using parameters obtained from the experimental implementation of this work.
Keywords: Weighted Skeleton Animation
317 A Comparative Analysis of the Performance of COSMO and WRF Models in Quantitative Rainfall Prediction
Authors: Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Mary Nsabagwa, Triphonia Jacob Ngailo, Joachim Reuder, Schättler Ulrich, Musa Semujju
Abstract:
Numerical weather prediction (NWP) models are considered powerful tools for guiding quantitative rainfall prediction. A number of NWP models exist and are used at many operational weather prediction centers. This study considers two models, namely the Consortium for Small-scale Modeling (COSMO) model and the Weather Research and Forecasting (WRF) model. It compares the models' ability to predict rainfall over Uganda for the period 21st April 2013 to 10th May 2013 using the root mean square error (RMSE) and the mean error (ME). In comparing the performance of the models, this study assesses their ability to predict light rainfall events and extreme rainfall events. All the experiments used the default parameterization configurations and the same horizontal resolution (7 km). The results show that the COSMO model had a tendency of largely predicting no rain, which explains its under-prediction. The COSMO model (RMSE: 14.16; ME: -5.91) presented a significantly (p = 0.014) higher magnitude of error compared to the WRF model (RMSE: 11.86; ME: -1.09). However, the COSMO model (RMSE: 3.85; ME: 1.39) performed significantly (p = 0.003) better than the WRF model (RMSE: 8.14; ME: 5.30) in simulating light rainfall events. Both models under-predicted extreme rainfall events, with the COSMO model (RMSE: 43.63; ME: -39.58) presenting significantly higher error magnitudes than the WRF model (RMSE: 35.14; ME: -26.95). This study recommends additional diagnosis of the models' treatment of deep convection over the tropics.
Keywords: Comparative performance, the COSMO model, the WRF model, light rainfall events, extreme rainfall events.
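To make the two verification scores concrete, here is a minimal sketch of RMSE and ME as used above; the rainfall values are invented for illustration and are not the study's data:

```python
import numpy as np

def rmse(forecast, observed):
    """Root mean square error of a forecast series."""
    return float(np.sqrt(np.mean((forecast - observed) ** 2)))

def mean_error(forecast, observed):
    """Mean error (bias); negative values indicate under-prediction."""
    return float(np.mean(forecast - observed))

# Hypothetical 24 h rainfall totals in mm (not the study's data).
obs = np.array([0.0, 12.5, 3.2, 40.1, 7.8])
fcst = np.array([0.0, 8.0, 2.5, 21.0, 6.0])

print(rmse(fcst, obs), mean_error(fcst, obs))  # ME < 0 -> under-prediction
```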
316 Comparing Hilditch, Rosenfeld, Zhang-Suen, and Nagendraprasad-Wang-Gupta Thinning
Authors: Anastasia Rita Widiarti
Abstract:
This paper compares the Hilditch, Rosenfeld, Zhang-Suen, and Nagendraprasad-Wang-Gupta (NWG) thinning algorithms for Javanese character image recognition. Thinning is an effective process when the focus is not on the size of the pattern, but rather on the relative position of the strokes in the pattern. The research analyzes the thinning of 60 Javanese characters. Time-wise, the Zhang-Suen algorithm gives the best results, with an average process time of 0.00455188 seconds. But if we look at the percentage of pixels that meet one-pixel thickness, the Rosenfeld algorithm gives the best results, with a 99.98% success rate. In terms of the number of pixels erased, the NWG algorithm gives the best results, with an average of 84.12% of pixels erased. It can be concluded that the Hilditch algorithm performs least successfully compared to the other three algorithms.
Keywords: Hilditch algorithm, Nagendraprasad-Wang-Gupta algorithm, Rosenfeld algorithm, Thinning, Zhang-Suen algorithm
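Among the four algorithms compared, Zhang-Suen is the most compact to illustrate. A plain, unoptimized sketch of the classic two-subiteration procedure (our own rendering, not the authors' code):

```python
import numpy as np

def zhang_suen_thinning(img):
    """Zhang-Suen thinning on a binary image (1 = foreground).
    Produces an approximately one-pixel-wide skeleton."""
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    # 8 neighbors P2..P9, clockwise from the north pixel
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                                    # B(P1): neighbor count
                    a = sum((p[i] == 0) and (p[(i+1) % 8] == 1)   # A(P1): 0->1 transitions
                            for i in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:          # delete simultaneously after the scan
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img

img = np.zeros((7, 9), dtype=np.uint8); img[2:5, 1:8] = 1   # a thick bar
print(zhang_suen_thinning(img))                              # thinned to a line
```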
315 Computer Aided Detection on Mammography
Authors: Giovanni Luca Masala
Abstract:
A typical definition of Computer Aided Diagnosis (CAD), found in the literature, is: a diagnosis made by a radiologist using the output of a computerized scheme for automated image analysis as a diagnostic aid. Often the expression Computer Aided Detection (CAD or CADe) is found instead; this definition emphasizes the intent of CAD to support rather than substitute the human observer in the analysis of radiographic images. In this article we illustrate the application of CAD systems and the aim of these definitions. Commercially available CAD systems use computerized algorithms for identifying suspicious regions of interest. This paper describes general CAD systems as expert systems constituted of the following components: segmentation/detection, feature extraction, and classification/decision making. As an example, the work shows the realization of a Computer-Aided Detection system that is able to assist the radiologist in identifying types of mammary tumor lesions. Furthermore, this prototype station uses a GRID configuration to work on a large distributed database of digitized mammographic images.
Keywords: Computer Aided Detection, Computer Aided Diagnosis, mammography, GRID.
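The three-stage architecture (segmentation/detection, feature extraction, classification/decision making) maps naturally onto a small pipeline. A toy sketch with placeholder stages; the threshold, feature set and classifier are illustrative assumptions, not the paper's actual components:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment(image, thresh=0.6):
    """Detection stage: crude intensity threshold yields a candidate ROI mask."""
    return image > thresh

def extract_features(image, mask):
    """Feature stage: simple statistics of the candidate region."""
    roi = image[mask]
    if roi.size == 0:
        return np.zeros(3)
    return np.array([roi.mean(), roi.std(), mask.mean()])

# Classification stage: any supervised model; a random forest as placeholder.
rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))           # synthetic stand-ins for mammogram ROIs
labels = rng.integers(0, 2, 40)             # 1 = suspicious (synthetic labels)
X = np.array([extract_features(im, segment(im)) for im in images])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict(X[:3]))
```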
314 PM10 Chemical Characteristics in a Background Site at the Universidad Libre Bogotá
Authors: Laura X. Martinez, Andrés F. Rodríguez, Ruth A. Catacoli
Abstract:
One of the most important concerns in air pollution is that PM10 concentrations maintain a constant trend, except in some places where they frequently surpass the allowed ranges established by Colombian legislation. The community that surrounds the Universidad Libre Bogotá is inhabited by a considerable number of students and workers, all of whom are possibly being exposed to PM10 for long periods of time while on campus. Thus, the chemical characterization of PM10 found in the ambient air at the Universidad Libre Bogotá was identified as a problem. A Hi-Vol sampler and EPA Test Method 5 were used to determine whether the quality of the air is adequate for the human respiratory system. Additionally, quartz fiber filters were utilized during sampling. Samples were taken three days a week during a dry period throughout the months of November and December 2015. The gravimetric analysis method was used to determine PM10 concentrations. The chemical characterization includes non-conventional carcinogenic pollutants. Atomic absorption spectrophotometry (AAS) was used for the determination of metals, and VOCs were analyzed using the FTIR (Fourier transform infrared spectroscopy) method. In this way, PM10 concentrations ranging from 13 µg/m3 to 66 µg/m3 were obtained; these values were below the standard limits. This evidence concludes that the PM10 concentrations during an exposure period of 24 hours are lower than the values established by Colombian law, Resolution 610 of 2010; however, when compared with the limits set by the World Health Organization (WHO), these concentrations could possibly exceed permissible levels.
Keywords: Air quality, atomic absorption spectrophotometry, Fourier transform infrared spectroscopy, particulate matter.
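Gravimetric PM10 concentration is simply the filter mass gain divided by the sampled air volume. A small worked sketch; the masses and flow rate below are invented, chosen only to land inside the reported 13–66 µg/m3 range:

```python
def pm10_concentration(mass_final_mg, mass_initial_mg, flow_m3_per_min, minutes):
    """PM10 concentration in ug/m3 from a Hi-Vol gravimetric sample."""
    mass_gain_ug = (mass_final_mg - mass_initial_mg) * 1000.0  # mg -> ug
    volume_m3 = flow_m3_per_min * minutes
    return mass_gain_ug / volume_m3

# Hypothetical 24 h sample: 1.133 m3/min for 1440 min, 75 mg filter mass gain.
c = pm10_concentration(3480.0, 3405.0, 1.133, 1440)
print(round(c, 1), "ug/m3")  # ~46 ug/m3, inside the reported 13-66 range
```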
313 Degradation of Heating, Ventilation, and Air Conditioning Components across Locations
Authors: Timothy E. Frank, Josh R. Aldred, Sophie B. Boulware, Michelle K. Cabonce, Justin H. White
Abstract:
Materials degrade at different rates in different environments depending on factors such as temperature, aridity, salinity, and solar radiation. Therefore, predicting asset longevity depends, in part, on the environmental conditions to which the asset is exposed. Heating, ventilation, and air conditioning (HVAC) systems are critical to building operations yet are responsible for a significant proportion of their energy consumption, and HVAC energy use increases substantially with slight operational inefficiencies. Understanding the environmental influences on HVAC degradation in detail will inform maintenance schedules and capital investment, reduce energy use, and increase lifecycle management efficiency. HVAC inspection records spanning 14 years from 21 locations across the United States were compiled and associated with the climate conditions to which they were exposed. Three environmental features (average high temperature, average low temperature, and annual precipitation) and four non-environmental features were explored in this study. Initial insights showed no correlations between individual features and the rate of HVAC component degradation. Using neighborhood component analysis, however, the most critical features related to degradation were identified. Two models were considered, and results varied between them; however, longitude and latitude emerged as potentially the best predictors of average HVAC component degradation. Further research is needed to evaluate additional environmental features, increase the resolution of the environmental data, and develop more robust models to achieve more conclusive results.
Keywords: Climate, infrastructure degradation, HVAC, neighborhood component analysis.
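Neighborhood component analysis learns a feature transform that best preserves class neighborhoods, which is one way to rank features as done above. A minimal sketch on synthetic data; the feature layout and labels are assumptions:

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis

rng = np.random.default_rng(0)
# Columns: avg high temp, avg low temp, precipitation, longitude, latitude (synthetic)
X = rng.normal(size=(200, 5))
y = (X[:, 3] + X[:, 4] > 0).astype(int)   # degradation class driven by lon/lat here

nca = NeighborhoodComponentsAnalysis(n_components=2, random_state=0).fit(X, y)
# Column norms of the learned transform indicate each feature's influence.
influence = np.linalg.norm(nca.components_, axis=0)
print(influence.round(2))                 # largest weights on columns 3 and 4
```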
312 Connect among Green, Sustainability and Hotel Industry: A Prospective Simulation Study
Authors: Leena N. Fukey, Surya S. Issac
Abstract:
This review paper aims at understanding the importance of implementing sustainable green practices in the current hotel industry, and the perception of the same from the point of view of customers as well as industry experts. Many hotels have benefited from green management, through an enhanced reputation and more valuable customers. From a business standpoint, green management reduces advertising costs, and a clear green orientation projects a positive image that may strengthen employees' identification with the business. Sustainability in business means developing processes that enable people to understand and protect the Earth's existing life-support systems. Looking to the future, today's green concerns will become a facet of a more coordinated business environment; the concerns discussed in this study offer points that hotels may consider in the near future to widen awareness and improve their business models.
Keywords: Environmental Protection, Green Hotel Concept, Hotel Industry, Sustainability.
311 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas
Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards
Abstract:
Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. The laser scanning system generates an irregularly spaced three-dimensional point cloud. Raw ALS data consist mainly of ground points (representing the bare earth) and non-ground points (representing buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter utilizes a weight function to allocate weights to each point of the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
Keywords: Airborne laser scanning, digital terrain models, filtering, forested areas.
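The weighting idea can be sketched in one dimension: fit a smoothing spline, then down-weight points lying above the fit, since canopy returns sit above the terrain. The weight function and iteration count below are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def filter_ground_1d(x, z, n_iter=5, scale=1.0):
    """Iteratively re-weighted smoothing spline over one scan profile.
    Points far above the fitted surface get small weights (likely vegetation)."""
    order = np.argsort(x)
    x, z = x[order], z[order]
    w = np.ones_like(z)
    for _ in range(n_iter):
        spline = UnivariateSpline(x, z, w=w, s=float(len(z)))
        residual = z - spline(x)
        # Asymmetric weighting: penalize only points above the surface.
        w = np.where(residual > 0, np.exp(-residual / scale), 1.0)
    return spline, w

# Synthetic profile: gentle terrain plus positive canopy spikes.
rng = np.random.default_rng(1)
x = np.linspace(0, 100, 400)
z = 0.05 * x + rng.normal(0, 0.1, x.size)
canopy = rng.random(x.size) < 0.3
z[canopy] += rng.uniform(2, 15, canopy.sum())
spline, w = filter_ground_1d(x, z)
print((w > 0.5).sum(), "of", x.size, "points kept as ground")
```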
310 Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Electricity prices have sophisticated features such as high volatility, nonlinearity and high frequency that make forecasting quite difficult. The electricity price has a volatile but non-random character, so it is possible to identify patterns based on historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers, and generation companies. So far, many shallow artificial neural network (shallow-ANN) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as deep neural networks (DNN), have been used in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models has been evaluated with publicly available data from the Turkish day-ahead electricity market. Both the shallow-ANN and DNN approaches give successful results in forecasting problems. Historical load, price and weather temperature data are used as the input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. In this regard, forecasting studies have been carried out comparatively with shallow-ANN and DNN models for the Turkish electricity market in the related time period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared regarding their MAE (Mean Absolute Error) and MSE (Mean Squared Error) results. The DNN models give better forecasting performance compared to the shallow-ANN models. The best five MAE results for the DNN models are 0.346, 0.372, 0.392, 0.402 and 0.409.
Keywords: Deep learning, artificial neural networks, energy price forecasting, Turkey.
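A minimal sketch of the shallow-versus-deep comparison using scikit-learn regressors on synthetic stand-in data; the layer sizes and input features are assumptions, not the study's configurations:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
# Synthetic stand-ins for (load, lagged price, temperature) -> next-hour price
X = rng.normal(size=(2000, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2] + rng.normal(0, 0.1, 2000)
X_tr, X_te, y_tr, y_te = X[:1600], X[1600:], y[:1600], y[1600:]

models = {
    "shallow-ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "DNN": MLPRegressor(hidden_layer_sizes=(64, 64, 64), max_iter=2000, random_state=0),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    print(name, mean_absolute_error(y_te, pred), mean_squared_error(y_te, pred))
```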
309 ACTN3 Genotype Association with Motoric Performance of Roma Children
Authors: J. Bernasovska, I. Boronova, J. Poracova, M. Mydlarova Blascakova, V. Szabadosova, P. Ruzbarsky, E. Petrejcikova, I. Bernasovsky
Abstract:
The paper presents the results of molecular genetic analysis in sports research, with special emphasis on using genetic information in diagnosing motoric predispositions in Roma boys from East Slovakia. The ability to move is a basic characteristic of all living organisms. Phenotypes are influenced by a combination of genetic and environmental factors. Genetic tests differ in principle from traditional motoric tests, because the DNA of an individual does not change during life. The aim of the presented study was to examine motion abilities and to determine the frequency of the ACTN3 (R577X) gene in Roma children. Genotype data were obtained from 138 Roma and 155 Slovak boys from 7 to 15 years old. Children were investigated on physical performance level in association with their genotype. Biological material for genetic analyses comprised samples of buccal swabs. Genotypes were determined using the real-time high-resolution melting PCR method (Rotor-Gene 6000 Corbett and Light Cycler 480 Roche). The software allows creating reports of any analysis, showing information on the specific analysis, normalized and differential graphs, and sample details. The Roma children of the analyzed group lagged behind non-Roma children of the same age in all the compared tests. The distribution of R and X alleles in Roma children was different from controls. The frequency of the XX genotype was 9.26%, RX 46.33% and RR 44.41%. The XX genotype frequency of 9.26% is comparable to that of an Indian population. Data were analyzed with the ANOVA test.
Keywords: ACTN3 gene, R577X polymorphism, Roma children, Slovakia, sports performance.
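Genotype and allele frequencies of a biallelic marker reduce to simple counting. A small sketch with a hypothetical sample sized to roughly reproduce the reported proportions:

```python
from collections import Counter

def allele_frequencies(genotypes):
    """Genotype and allele frequencies for a biallelic marker like ACTN3 R577X."""
    n = len(genotypes)
    geno = {g: c / n for g, c in Counter(genotypes).items()}
    alleles = Counter("".join(genotypes))      # each genotype contributes 2 alleles
    total = sum(alleles.values())
    return geno, {a: c / total for a, c in alleles.items()}

# Hypothetical sample of n=138 matching the reported genotype proportions.
sample = ["XX"] * 13 + ["RX"] * 64 + ["RR"] * 61
geno, allele = allele_frequencies(sample)
print(geno)    # ~{'XX': 0.094, 'RX': 0.464, 'RR': 0.442}
print(allele)  # R and X allele frequencies
```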
308 Supervisory Fuzzy Learning Control for Underwater Target Tracking
Authors: C. Kia, M. R. Arshad, A. H. Adom, P. A. Wilson
Abstract:
This paper presents recent work on the improvement of a robotic vision-based control strategy for an underwater pipeline tracking system. The study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The main goal is to implement the supervisory fuzzy learning control technique to reduce errors in navigation decisions due to the pipeline occlusion problem. The system developed is capable of interpreting underwater images containing occluded pipeline, seabed and other unwanted noise. The algorithm proposed in previous work does not exploit the cooperation between fuzzy controllers, knowledge and learnt data to improve the outputs for underwater pipeline tracking. Computer simulations and prototype simulations demonstrate the effectiveness of this approach. The system accuracy level is also discussed.
Keywords: Fuzzy logic, Underwater target tracking, Autonomous underwater vehicles, Artificial intelligence, Simulations, Robot navigation, Vision system.
307 Edge Detection in Digital Images Using Fuzzy Logic Technique
Authors: Abdallah A. Alshennawy, Ayman A. Aly
Abstract:
The fuzzy technique is an operator introduced in order to simulate, at a mathematical level, the compensatory behavior in processes of decision making or subjective evaluation. The following paper introduces such operators through a computer vision application. In this paper a novel method based on a fuzzy logic reasoning strategy is proposed for edge detection in digital images without determining a threshold value. The proposed approach begins by segmenting the images into regions using a floating 3x3 binary matrix. The edge pixels are mapped to a range of values distinct from each other. The results of the proposed method for different captured images are compared to those obtained with the linear Sobel operator, demonstrating its robustness. The method gives a consistent improvement in the smoothness and straightness of straight lines and good roundness for curved lines. At the same time, the corners get sharper and can be defined easily.
Keywords: Fuzzy logic, Edge detection, Image processing, computer vision, Mechanical parts, Measurement.
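A minimal sketch of the idea of threshold-free fuzzy edge detection over 3x3 neighborhoods; the Gaussian membership function and the single rule used here are illustrative assumptions, not the paper's rule base:

```python
import numpy as np

def fuzzy_edges(img, sigma=0.15):
    """Edge map in [0, 1] from fuzzy reasoning over 3x3 neighborhoods.
    img: grayscale array scaled to [0, 1]."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nb = img[y-1:y+2, x-1:x+2]
            diff = np.abs(nb - img[y, x])
            # Membership "similar to center" for each neighbor (Gaussian).
            similar = np.exp(-(diff ** 2) / (2 * sigma ** 2))
            # Rule: a pixel is an edge to the degree its neighborhood is NOT uniform.
            out[y, x] = 1.0 - similar.min()
    return out

rng = np.random.default_rng(0)
img = np.zeros((32, 32)); img[:, 16:] = 1.0        # vertical step edge
img += rng.normal(0, 0.02, img.shape)
edges = fuzzy_edges(np.clip(img, 0, 1))
print(edges[16, 14:19].round(2))                    # high response near column 16
```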
306 An Improved Algorithm for Calculation of the Third-order Orthogonal Tensor Product Expansion by Using Singular Value Decomposition
Authors: Chiharu Okuma, Naoki Yamamoto, Jun Murakami
Abstract:
As a method of expanding higher-order tensor data into tensor products of vectors, we have proposed the Third-order Orthogonal Tensor Product Expansion (3OTPE), which performs an expansion similar to Higher-Order Singular Value Decomposition (HOSVD). In this paper we provide a computation algorithm that improves our previous method, in which SVD is applied to the matrix constituted by the contraction of the original tensor data with one of the expansion vectors obtained. The residual of the improved method is smaller than that of the previous method when the expansion is truncated to the same number of terms. Moreover, the residual is smaller than that of HOSVD when applied to color image data. The computing time of the improved method is confirmed to be the same as that of the previous method and considerably better than that of HOSVD.
Keywords: Singular value decomposition (SVD), higher-order SVD (HOSVD), outer product expansion, power method.
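The building block of such expansions is the dominant rank-1 term of a third-order tensor, computed by alternating contractions (the power method named in the keywords). A numpy sketch of that building block, not the authors' improved algorithm:

```python
import numpy as np

def rank1_term(T, n_iter=50):
    """Dominant rank-1 term (s, a, b, c) of a 3rd-order tensor T via the
    higher-order power method: alternating contractions with normalization."""
    a = np.random.default_rng(0).normal(size=T.shape[0]); a /= np.linalg.norm(a)
    b = np.ones(T.shape[1]) / np.sqrt(T.shape[1])
    c = np.ones(T.shape[2]) / np.sqrt(T.shape[2])
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    s = np.einsum('ijk,i,j,k->', T, a, b, c)   # weight of the rank-1 term
    return s, a, b, c

T = np.random.default_rng(1).normal(size=(4, 5, 6))
s, a, b, c = rank1_term(T)
residual = T - s * np.einsum('i,j,k->ijk', a, b, c)
print(np.linalg.norm(residual) / np.linalg.norm(T))  # < 1: term captures energy
```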
305 Localisation of Anatomical Soft Tissue Landmarks of the Head in CT Images
Authors: M. Ovinis, D. Kerr, K. Bouazza-Marouf, M. Vloeberghs
Abstract:
In this paper, algorithms for the automatic localisation of two anatomical soft tissue landmarks of the head, the medial canthus (inner corner of the eye) and the tragus (a small, pointed, cartilaginous flap of the ear), in CT images are described. These landmarks are to be used as a basis for an automated image-to-patient registration system we are developing. The landmarks are localised on a surface model extracted from CT images, based on surface curvature and a rule-based system that incorporates prior knowledge of the landmark characteristics. The approach was tested on a dataset of near-isotropic CT images of 95 patients. The position of the automatically localised landmarks was compared to the position of the manually localised landmarks. The average difference was 1.5 mm and 0.8 mm for the medial canthus and tragus, with a maximum difference of 4.5 mm and 2.6 mm respectively. The medial canthus and tragus can thus be automatically localised in CT images, with performance comparable to manual localisation.
Keywords: Anatomical soft tissue landmarks, automatic localisation, Computed Tomography (CT)
304 Human Facial Expression Recognition using MANFIS Model
Authors: V. Gomathi, Dr. K. Ramar, A. Santhiyaku Jeevakumar
Abstract:
Facial expression analysis plays a significant role in human computer interaction. Automatic analysis of human facial expressions is still a challenging problem with many applications. In this paper, we propose a neuro-fuzzy based automatic facial expression recognition system to recognize human facial expressions such as happy, fear, sad, angry, disgust and surprise. Initially the facial image is segmented into three regions, from which uniform Local Binary Pattern (LBP) texture feature distributions are extracted and represented as a histogram descriptor. The facial expressions are recognized using a Multiple Adaptive Neuro Fuzzy Inference System (MANFIS). The proposed system was designed and tested with the JAFFE face database. The proposed model reports 94.29% classification accuracy.
Keywords: Adaptive neuro-fuzzy inference system, Facial expression, Local binary pattern, Uniform histogram
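A sketch of the descriptor stage, assuming the three regions are horizontal bands (the paper does not specify the split used here) and using scikit-image's uniform LBP:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_region_histograms(image, n_regions=3, P=8, R=1):
    """Uniform LBP histogram per horizontal region, concatenated as a descriptor."""
    lbp = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2                      # uniform patterns + one "non-uniform" bin
    bands = np.array_split(lbp, n_regions, axis=0)
    hists = [np.histogram(b, bins=n_bins, range=(0, n_bins), density=True)[0]
             for b in bands]
    return np.concatenate(hists)        # length n_regions * (P + 2)

face = np.random.default_rng(0).integers(0, 256, (96, 96)).astype(np.uint8)
descriptor = lbp_region_histograms(face)
print(descriptor.shape)   # (30,) for 3 regions x 10 bins; fed to the classifier
```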
303 Identification of Healthy and BSR-Infected Oil Palm Trees Using Color Indices
Authors: Siti Khairunniza-Bejo, Yusnida Yusoff, Nik Salwani Nik Yusoff, Idris Abu Seman, Mohamad Izzuddin Anuar
Abstract:
Most oil palm plantations have been threatened by Basal Stem Rot (BSR) disease, which causes serious economic impact. This study was conducted to identify healthy and BSR-infected oil palm trees using thirteen color indices. Multispectral and thermal cameras were used to capture 216 images of the leaves taken from fronds number 1, 9 and 17. Indices of normalized difference vegetation index (NDVI), red (R), green (G), blue (B), near infrared (NIR), green – blue (GB), green/blue (G/B), green – red (GR), green/red (G/R), hue (H), saturation (S), intensity (I) and thermal index (T) were used. From this study, it can be concluded that the G index taken from frond number 9 is the best index to differentiate between healthy and BSR-infected oil palm trees. It not only gave a high correlation coefficient (R=-0.962), but also a high separation between healthy and BSR-infected oil palm trees. Furthermore, the power and S models developed using the G index gave the highest R² value, which is 0.985.
Keywords: Oil palm, image processing, disease, leaves.
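The indices themselves are simple per-pixel band arithmetic. A sketch of a few of the thirteen, computed on random stand-in bands:

```python
import numpy as np

def color_indices(red, green, blue, nir):
    """A few of the thirteen indices, computed per pixel from band arrays."""
    eps = 1e-9                                    # avoid division by zero
    return {
        "NDVI": (nir - red) / (nir + red + eps),
        "G":    green,
        "GB":   green - blue,
        "G/B":  green / (blue + eps),
        "GR":   green - red,
        "G/R":  green / (red + eps),
    }

rng = np.random.default_rng(0)
bands = {b: rng.random((64, 64)) for b in ("red", "green", "blue", "nir")}
idx = color_indices(**bands)
print({k: float(v.mean()) for k, v in idx.items()})  # per-image mean index values
```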
302 Feature Extraction for Surface Classification – An Approach with Wavelets
Authors: Smriti H. Bhandari, S. M. Deshpande
Abstract:
Surface metrology with image processing is a challenging task with wide applications in industry. Surface roughness can be evaluated using a texture classification approach. An important aspect here is the appropriate selection of features that characterize the surface. We propose an effective combination of features for multi-scale and multi-directional analysis of engineering surfaces. The features include the standard deviation, kurtosis and the Canny edge detector. We apply the method by analyzing the surfaces with the Discrete Wavelet Transform (DWT) and the Dual-Tree Complex Wavelet Transform (DT-CWT). We used the Canberra distance metric for similarity comparison between the surface classes. Our database includes surface textures manufactured by three machining processes, namely Milling, Casting and Shaping. The comparative study shows that DT-CWT outperforms DWT, giving a correct classification performance of 91.27% with the Canberra distance metric.
Keywords: Dual-tree complex wavelet transform, surface metrology, surface roughness, texture classification.
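A sketch of the feature pipeline: multi-level 2D DWT, standard deviation and kurtosis per detail subband, and Canberra distance between descriptors. The wavelet choice, level count and stand-in images are assumptions:

```python
import numpy as np
import pywt
from scipy.spatial.distance import canberra
from scipy.stats import kurtosis

def dwt_features(image, wavelet="db2", levels=2):
    """Std-dev and kurtosis of each detail subband over a multi-level 2D DWT."""
    feats = []
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    for detail in coeffs[1:]:                    # (cH, cV, cD) per level
        for sub in detail:
            feats += [sub.std(), kurtosis(sub, axis=None)]
    return np.array(feats)

rng = np.random.default_rng(0)
milled = rng.normal(0, 1.0, (64, 64))            # stand-ins for surface images
cast = rng.normal(0, 2.0, (64, 64))
print(canberra(dwt_features(milled), dwt_features(cast)))  # dissimilarity score
```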
301 Evolutionary Eigenspace Learning using CCIPCA and IPCA for Face Recognition
Authors: Ghazy M.R. Assassa, Mona F. M. Mursi, Hatim A. Aboalsamh
Abstract:
Traditional principal components analysis (PCA) techniques for face recognition are based on batch-mode training using a pre-available image set. Real world applications require that the training set be dynamic and of an evolving nature, where, within the framework of continuous learning, new training images are continuously added to the original set; this would trigger a costly continuous re-computation of the eigenspace representation by repeating an entire batch-based training that includes the old and new images. Incremental PCA methods allow adding new images and updating the PCA representation. In this paper, two incremental PCA approaches, CCIPCA and IPCA, are examined and compared. Besides, different learning and testing strategies are proposed and applied to the two algorithms. The results suggest that batch PCA is inferior to both incremental approaches, and that all CCIPCA variants are practically equivalent.
Keywords: Candid covariance-free incremental principal components analysis (CCIPCA), face recognition, incremental principal components analysis (IPCA).
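At CCIPCA's core is a covariance-free stochastic update of each eigenvector estimate, with lower-order components deflated from the sample before the next component is updated. A simplified sketch; the amnesic schedule here is an assumption and the demo data are synthetic:

```python
import numpy as np

def ccipca_update(V, counts, u, amnesic=2.0):
    """One CCIPCA step: update k eigenvector estimates V (k x d) with a new
    centered sample u, without ever forming a covariance matrix."""
    u = u.copy()
    for i in range(len(V)):
        counts[i] += 1
        n = counts[i]
        if n == 1:
            V[i] = u                                  # first sample initializes v_i
        else:
            l = min(amnesic, n - 1)                   # simplified amnesic schedule
            vn = V[i] / (np.linalg.norm(V[i]) + 1e-12)
            V[i] = ((n - 1 - l) / n) * V[i] + ((1 + l) / n) * np.dot(u, vn) * u
        vn = V[i] / (np.linalg.norm(V[i]) + 1e-12)
        u = u - np.dot(u, vn) * vn                    # deflate for next component
    return V, counts

rng = np.random.default_rng(0)
d, k = 20, 3
V, counts = np.zeros((k, d)), np.zeros(k, dtype=int)
for _ in range(2000):                                 # stream of "image" vectors
    x = rng.normal(size=d) * np.linspace(3.0, 0.5, d) # axis 0 dominates variance
    V, counts = ccipca_update(V, counts, x)
print(np.abs(V[0] / np.linalg.norm(V[0])).argmax())   # 0: dominant axis recovered
```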
300 Image Analysis of Fine Structures of Supercavitation in the Symmetric Wake of a Cylinder
Authors: Y. Obikane, M. Kaneko, K. Kakioka, K. Ogura
Abstract:
The fine structure of supercavitation in the wake of a symmetric cylinder is studied with high-speed video cameras. The flow is observed in a cavitation tunnel at a speed of 8 m/s when the sidewall and the wake are partially filled with massive cavitation bubbles. The present experiment observed that a two-dimensional ripple wave with a wavelength of 0.3 mm is propagated in the downstream direction, and then abruptly grows into a thicker three-dimensional layer. IR photography recorded that the wakes originated from the horseshoe vortices alongside the cylinder. The wake developed into the dead water zone, which absorbed the bubbly wake propelled from the separated vortices at the center of the cylinder. A remote sensing classification technique (maximum likelihood) determined that the surface porosity was 0.2, and the mean speed in the mixed wake was 7 m/s. To confirm the existence of two-dimensional wave motions in the interface, the experiments were conducted at a very low frequency, and showed similar gravity waves in both the upper and lower interfaces.
Keywords: Supercavitation, density gradient correlation
299 Automatic Facial Skin Segmentation Using Possibilistic C-Means Algorithm for Evaluation of Facial Surgeries
Authors: Elham Alaee, Mousa Shamsi, Hossein Ahmadi, Soroosh Nazem, Mohammadhossein Sedaaghi
Abstract:
The human face has a fundamental role in the appearance of individuals, so the importance of facial surgeries is undeniable. Thus, there is a need for appropriate and accurate facial skin segmentation in order to extract different features. Since the Fuzzy C-Means (FCM) clustering algorithm does not work appropriately for noisy images and outliers, in this paper we exploit the Possibilistic C-Means (PCM) algorithm in order to segment the facial skin. For this purpose, we first convert facial images from RGB to YCbCr color space. To evaluate the performance of the proposed algorithm, the database of Sahand University of Technology, Tabriz, Iran was used. To give a better understanding of the proposed algorithm, the FCM and Expectation-Maximization (EM) algorithms are also used for facial skin segmentation. The proposed method shows better results than the other segmentation methods, with a misclassification error of 0.032 and a region's area error of 0.045.
Keywords: Facial image, segmentation, PCM, FCM, skin error, facial surgery.
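PCM replaces FCM's relative memberships with per-cluster typicalities, which is why gross outliers stop dragging the centers. A compact sketch; the bandwidth heuristic (mean squared distance) and the synthetic YCbCr-like data are assumptions:

```python
import numpy as np

def pcm(X, centers, m=2.0, n_iter=20):
    """Possibilistic C-Means: typicality-based clustering, robust to outliers.
    X: (n, d) data; centers: (c, d) initial centers (e.g. from FCM/k-means)."""
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)      # (n, c) sq. distances
    eta = d2.mean(axis=0)                                    # bandwidth per cluster
    for _ in range(n_iter):
        t = 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))    # typicalities in (0, 1]
        w = t ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    return centers, t

rng = np.random.default_rng(0)
skin = rng.normal([140, 120], 6, (200, 2))     # fake (Cb, Cr) skin cluster
bg = rng.normal([110, 150], 6, (200, 2))       # fake background cluster
X = np.vstack([skin, bg, [[0, 0]]])            # plus one gross outlier
centers, t = pcm(X, np.array([[130., 125.], [115., 145.]]))
print(centers.round(1), t[-1].round(3))        # outlier gets tiny typicality
```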
298 A Weighted Approach to Unconstrained Iris Recognition
Authors: Yao-Hong Tsai
Abstract:
This paper presents a weighted approach to unconstrained iris recognition. Nowadays, commercial systems are usually characterized by strong acquisition constraints based on the subject's cooperation. However, this is not always achievable in real scenarios in our daily life. Researchers have focused on reducing these constraints while maintaining the performance of the system through new techniques. To handle large variation in the environment, there are two main improvements in the proposed iris recognition system. First, to cope with extremely uneven lighting conditions, statistics-based illumination normalization is applied to the eye region to increase the accuracy of the iris features. The detection of the iris image is based on the AdaBoost algorithm. Second, the weighted approach is designed using Gaussian functions of the distance to the center of the iris. Furthermore, a local binary pattern (LBP) histogram is applied to texture classification with the weights. Experiments showed that the proposed system provides users a more flexible and feasible way to interact with the verification system through iris recognition.
Keywords: Authentication, iris recognition, Adaboost, local binary pattern.
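The Gaussian weighting can be sketched as a per-pixel weight map feeding a weighted LBP histogram; the code values, center and sigma below are placeholders, not the paper's settings:

```python
import numpy as np

def gaussian_weight_map(shape, center, sigma):
    """Per-pixel weights decaying with distance to the iris center."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def weighted_histogram(codes, weights, n_bins):
    """LBP histogram where each pixel contributes its Gaussian weight."""
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), weights=weights)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codes = rng.integers(0, 10, (60, 60))          # stand-in for uniform LBP codes
w = gaussian_weight_map(codes.shape, center=(30, 30), sigma=15.0)
print(weighted_histogram(codes, w, 10).round(3))
```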
297 Analysis of Feature Space for a 2d/3d Vision based Emotion Recognition Method
Authors: Robert Niese, Ayoub Al-Hamadi, Bernd Michaelis
Abstract:
In modern human computer interaction systems (HCI), emotion recognition is becoming an imperative characteristic. The quest for effective and reliable emotion recognition in HCI has resulted in a need for better face detection, feature extraction and classification. In this paper we present results of feature space analysis after briefly explaining our fully automatic vision based emotion recognition method. We demonstrate the compactness of the feature space and show how the 2d/3d based method achieves superior features for the purpose of emotion classification. It is also shown that through feature normalization a widely person-independent feature space is created. As a consequence, the classifier architecture has only a minor influence on the classification result. This is particularly elucidated with the help of confusion matrices. For this purpose advanced classification algorithms, such as Support Vector Machines and Artificial Neural Networks, are employed, as well as the simple k-Nearest Neighbor classifier.
Keywords: Facial expression analysis, Feature extraction, Image processing, Pattern Recognition, Application.
296 Fingerprint Image Encryption Using a 2D Chaotic Map and Elliptic Curve Cryptography
Authors: D. M. S. Bandara, Yunqi Lei, Ye Luo
Abstract:
Fingerprints are suitable as long-term markers of human identity since they provide detailed and unique individual features which are difficult to alter and durable over a lifetime. In this paper, we propose an algorithm to encrypt and decrypt fingerprint images by using a specially designed Elliptic Curve Cryptography (ECC) procedure based on block ciphers. In addition, to increase the confusion effect of fingerprint encryption, we also utilize a chaotic method called the Arnold Cat Map (ACM) for 2D scrambling of pixel locations in our method. Experimental results are presented with various types of efficiency and security analyses. As a result, we demonstrate that the proposed fingerprint encryption/decryption algorithm is advantageous in several different aspects including efficiency, security and flexibility. In particular, using this algorithm, we achieve a margin of about 0.1% in the test of Number of Pixel Changing Rate (NPCR) values compared to state-of-the-art performances.
Keywords: Arnold cat map, biometric encryption, block cipher, elliptic curve cryptography, fingerprint encryption, Koblitz's encoding.
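The ACM scrambling stage is compact enough to sketch in full; the map below uses the standard matrix [[1,1],[1,2]] modulo the image size, which is bijective and hence exactly invertible:

```python
import numpy as np

def arnold_cat_map(img, iterations=1):
    """Arnold cat map pixel scrambling for a square N x N image:
    (r, c) -> (r + c mod N, r + 2c mod N)."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for r in range(n):
            for c in range(n):
                scrambled[(r + c) % n, (r + 2 * c) % n] = out[r, c]
        out = scrambled
    return out

img = np.arange(25).reshape(5, 5)
scrambled = arnold_cat_map(img, iterations=3)
# The map is periodic; iterating long enough restores the original image.
cur, period = arnold_cat_map(img), 1
while not np.array_equal(cur, img):
    cur, period = arnold_cat_map(cur), period + 1
print(period)   # recurrence period for N = 5
```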
295 Video Shot Detection and Key Frame Extraction Using Faber Shauder DWT and SVD
Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi
Abstract:
Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by the dominant blocks. These dominant blocks are located on the contours and their nearby textures. When the video frames change noticeably, their dominant blocks change, and we can then extract a key frame. The dominant blocks of every frame are computed, and then feature vectors are extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate sliding-window ranks of those matrices. Finally, the computed ranks are traced and the key frames of the video are extracted. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
Keywords: Key Frame Extraction, Shot detection, FSDWT, Singular Value Decomposition.
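The rank-tracing step can be sketched directly: stack per-frame feature vectors into sliding windows and watch the numerical rank from the SVD. The feature vectors here are synthetic placeholders for the dominant-block features:

```python
import numpy as np

def sliding_window_ranks(features, window=5, tol=1e-3):
    """Numerical rank of each sliding window of per-frame feature vectors;
    a rank jump marks new visual content, i.e. a candidate key frame."""
    ranks = []
    for i in range(len(features) - window + 1):
        M = np.stack(features[i:i + window])          # (window, feature_dim)
        s = np.linalg.svd(M, compute_uv=False)
        ranks.append(int((s > tol * s[0]).sum()))     # relative singular-value cutoff
    return ranks

rng = np.random.default_rng(0)
base1, base2 = rng.normal(size=16), rng.normal(size=16)
frames = [base1 + rng.normal(0, 1e-4, 16) for _ in range(10)] + \
         [base2 + rng.normal(0, 1e-4, 16) for _ in range(10)]
print(sliding_window_ranks(frames))   # rank 1 inside shots, >1 across the cut
```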
294 Frame Texture Classification Method (FTCM) Applied on Mammograms for Detection of Abnormalities
Authors: Kjersti Engan, Karl Skretting, Jostein Herredsvela, Thor Ole Gulsrud
Abstract:
Texture classification is an important image processing task with a broad application range. Many different techniques for texture classification have been explored. Using sparse approximation as a feature extraction method for texture classification is a relatively new approach, and Skretting et al. recently presented the Frame Texture Classification Method (FTCM), showing very good results on classical texture images. As an extension of that work, the FTCM is here tested on a real-world application: detection of abnormalities in mammograms. Some extensions to the original FTCM that are useful in some applications are implemented: two different smoothing techniques and a vector augmentation technique. Both detection of microcalcifications (as a primary detection technique and as the last stage of a detection scheme) and detection of soft tissue lesions in mammograms are explored. All the results are interesting, and especially the results using FTCM on regions of interest as the last stage in a detection scheme for microcalcifications are promising.
Keywords: detection, mammogram, texture classification, dictionary learning, FTCM
293 Effect of Cladding Direction on Residual Stress Distribution in Laser Cladded Rails
Authors: Taposh Roy, Anna Paradowska, Ralph Abrahams, Quan Lai, Michael Law, Peter Mutton, Mehdi Soodi, Wenyi Yan
Abstract:
In this investigation, a laser cladding process with powder feeding was used to deposit stainless steel 410L (high strength, excellent resistance to abrasion and corrosion, and good laser compatibility) onto a railhead (a higher-strength, heat-treated hypereutectoid rail grade manufactured in accordance with the requirements of European standard EN 13674 Part 1 for grade R400HT), to investigate the development and controllability of process-induced residual stress in the cladding, heat-affected zone (HAZ) and substrate, and to analyse their correlation with the hardness profile for two different laser cladding directions (across and along the track). Residual stresses were analysed by neutron diffraction at the OPAL reactor, ANSTO. Neutron diffraction was carried out on the samples in the longitudinal (parallel to the rail), transverse (perpendicular to the rail) and normal (through-thickness) directions with high spatial resolution through the thickness. Due to the thick rail and thin cladding, 4 mm thick reference samples were prepared from every specimen by Electric Discharge Machining (EDM). Metallography across the laser-cladded sample revealed four distinct zones: the clad zone, the dilution zone, the HAZ and the substrate. Compressive residual stresses were found in the clad zone, and tensile residual stresses in the dilution zone and HAZ. Cladding in the longitudinal direction induced higher tensile stress in the HAZ, whereas cladding in the transverse direction showed lower tensile stresses.
Keywords: Laser cladding, residual stress, neutron diffraction, HAZ.
292 Real Time Speed Estimation of Vehicles
Authors: Azhar Hussain, Kashif Shahzad, Chunming Tang
Abstract:
This paper gives a novel approach towards real-time speed estimation of multiple traffic vehicles using fuzzy logic and image processing techniques with a proper arrangement of camera parameters. The described algorithm consists of several important steps. First, the background is estimated by computing the median over a time window of specific frames. Second, the foreground is extracted using a fuzzy similarity approach (FSA) between the estimated background pixels and the pixels of the current frame, which contains both foreground and background. Third, the traffic lanes are divided into two parts, one for each direction of traffic, for parallel processing. Finally, the speeds of the vehicles are estimated by a Maximum a Posteriori Probability (MAP) estimator. The true ground speed was determined using infrared sensors for three different vehicles, and the results are compared to those of the proposed algorithm, which achieves an accuracy of ±0.74 km/h.
Keywords: Defuzzification, Fuzzy similarity approach, lane cropping, Maximum a Posteriori Probability (MAP) estimator, Speed estimation
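The first two steps are easy to sketch: a per-pixel temporal median gives the background, and a simple difference mask stands in for the fuzzy similarity approach (the real FSA, and the threshold used here, differ):

```python
import numpy as np

def median_background(frames):
    """Background model: per-pixel median over a time window of frames."""
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, thresh=25):
    """Crude stand-in for the paper's fuzzy similarity: absolute difference."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

rng = np.random.default_rng(0)
road = rng.integers(80, 100, (48, 64))                    # static scene
frames = [road + rng.integers(-2, 3, road.shape) for _ in range(15)]
frames[7][20:28, 10:22] = 255                             # a passing "vehicle"
bg = median_background(frames)
print(foreground_mask(frames[7], bg).sum())               # ~96 vehicle pixels flagged
```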
291 The Temperature Effects on the Microstructure and Profile in Laser Cladding
Authors: P. C. Chiu, Jehnming Lin
Abstract:
In this study, a 50 W CO2 laser was used for cladding 304L powder on a stainless steel substrate with a temperature sensor and an image monitoring system. The laser power, cladding speed and focal position were modified to achieve the requirements for workpiece flatness and mechanical properties. A numerical calculation based on ANSYS was used to analyze the temperature change of the moving heat source at different surface positions when coating the workpiece, and the effect of the process parameters on the molten bath size was discussed. The temperature of the stainless steel powder reacting with the laser at the nozzle outlet was simulated as a process parameter. In the experiment, the difference in thermal conductivity in three-dimensional space was compared between single-layer and multi-layer cladding: the heat dissipation path for single-layer cladding is the steel plate, while for the multi-layer coating it is the workpiece itself. The relationship between the multi-clad temperature and the profile was analyzed using the temperature signal from an IR pyrometer.
Keywords: Laser cladding, temperature, profile, microstructure.
290 Evaluation of Heat Transfer and Entropy Generation by Al2O3-Water Nanofluid
Authors: Houda Jalali, Hassan Abbassi
Abstract:
In this numerical work, natural convection and entropy generation of an Al2O3–water nanofluid in a square cavity have been studied. A two-dimensional, steady, laminar natural convection in a differentially heated square cavity of length L, filled with a nanofluid, is investigated numerically. The horizontal walls are considered adiabatic. The vertical walls corresponding to x=0 and x=L are maintained at the hot temperature Th and the cold temperature Tc, respectively. The solution is performed with the CFD code FLUENT in combination with GAMBIT as mesh generator. The simulations are performed for Rayleigh numbers in the range 10^3 ≤ Ra ≤ 10^6, while the solid volume fraction is varied from 1% to 5%, the particle size is fixed at dp = 33 nm, and the temperature ranges from 20 to 70 °C. We used models of the thermophysical properties of nanofluids based on experimental measurements, such as models of thermal conductivity and dynamic viscosity that depend on solid volume fraction, particle size and temperature, to study the effect of adding solid particles into water on natural convection heat transfer and entropy generation. The average Nusselt number is calculated at the hot wall of the cavity for different solid volume fractions. The most important result is that at low temperatures (less than 40 °C), the addition of Al2O3 nanosolids into water leads to a decrease in heat transfer and entropy generation instead of the expected increase, whereas at high temperature, heat transfer and entropy generation increase with the addition of nanosolids. This behavior is due to the contradictory effects of the viscosity and thermal conductivity of the nanofluid. These effects are discussed in this work.
Keywords: Entropy generation, heat transfer, nanofluid, natural convection.
289 A Cheating Model for Cellular Automata-Based Secret Sharing Schemes
Authors: Borna Jafarpour, Azadeh Nematzadeh, Vahid Kazempour, Babak Sadeghian
Abstract:
Cellular automata have been used for the design of cryptosystems. Recently, some secret sharing schemes based on linear memory cellular automata have been introduced, which are used for both text and images. In this paper, we illustrate that these secret sharing schemes are vulnerable to dishonest participants' collusion. We propose a cheating model for secret sharing schemes based on linear memory cellular automata. For this purpose we present a novel uniform model for the representation of all secret sharing schemes based on cellular automata. Participants can cheat by means of sending bogus shares or bogus transition rules. Cheaters can cooperate to corrupt a shared secret and compute a cheating value added to it. Honest participants are not aware of the cheating and accept the incorrect secret as the valid one. We prove that cheaters can recover the valid secret by removing the cheating value from the corrupted secret. We provide methods for calculating the cheating value.
Keywords: Cellular automata, cheating model, secret sharing, threshold scheme.