Search results for: pixel normalization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 392

182 Water Detection in Aerial Images Using Fuzzy Sets

Authors: Caio Marcelo Nunes, Anderson da Silva Soares, Gustavo Teodoro Laureano, Clarimar Jose Coelho

Abstract:

This paper presents a methodology for pixel recognition in aerial images using the fuzzy $c$-means algorithm. This algorithm is an alternative for recognizing areas while taking uncertainties and inaccuracies into account. Traditional clustering techniques are used to recognize multispectral images of the earth's surface; these techniques handle well-defined borders that can be easily discretized. In the real world, however, there are many areas with uncertainties and inaccuracies that can be mapped by clustering algorithms based on fuzzy sets. The methodology presented in this work is applied to multispectral images obtained from the Landsat-5/TM satellite. The pixels are grouped using the fuzzy $c$-means algorithm. A classification process then identifies the type of surface according to the patterns obtained from the spectral response of the image surface. The classes considered are exposed soil, moist soil, vegetation, turbid water, and clean water. The results obtained show that fuzzy clustering identifies the real type of the earth's surface.
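
As context for the clustering step, here is a minimal sketch of the standard fuzzy $c$-means update equations applied to pixel spectral vectors; the fuzzifier value, iteration budget, random initialization, and synthetic band data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def fuzzy_c_means(X, c=5, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Fuzzy c-means on X of shape (n_pixels, n_bands); returns memberships, centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                    # random fuzzy memberships
    centers = None
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, centers

# c=5 matches the five surface classes (exposed soil, moist soil, vegetation,
# turbid water, clean water); hard labels come from U.argmax(axis=1).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.05, (200, 4)) for mu in (0.2, 0.4, 0.5, 0.7, 0.9)])
U, centers = fuzzy_c_means(X)
labels = U.argmax(axis=1)
```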

Keywords: aerial images, fuzzy clustering, image processing, pattern recognition

Procedia PDF Downloads 433
181 Extraction of Compound Words in Malay Sentences Using Linguistic and Statistical Approaches

Authors: Zamri Abu Bakar Zamri, Normaly Kamal Ismail Normaly, Mohd Izani Mohamed Rawi Izani

Abstract:

Malay noun compounds are phrases that consist of two or more nouns. A key characteristic of noun compounds is their frequent occurrence within text. Extracting these noun compounds is therefore essential for several research domains such as Information Retrieval, Sentiment Analysis, and Question Answering. Many research efforts have addressed the extraction of Malay noun compounds using linguistic and statistical approaches. Most existing methods have concentrated on the extraction of bi-gram noun+noun compounds. However, extracting noun+verb, noun+adjective, and noun+preposition compounds is challenging due to the difficulty of selecting an appropriate method that yields effective results. Thus, there is still room for improving the effectiveness of compound word extraction. This study therefore proposes a combination of a linguistic approach and statistical measures in order to enhance the extraction of compound words. Several preprocessing steps are involved, including normalization, tokenization, and stemming. The linguistic approach used in this study is Part-of-Speech (POS) tagging. In addition, a new linguistic pattern for named entities is utilized, based on a list of Malay named entities, to improve noun compound recognition. The proposed statistical measures consist of the NC-value, NTC-value, and NLC-value.

Keywords: Compound Word, Noun Compound, Linguistic Approach, Statistical Approach

Procedia PDF Downloads 312
180 Land Cover Classification Using Sentinel-2 Image Data and Random Forest Algorithm

Authors: Thanh Noi Phan, Martin Kappas, Jan Degener

Abstract:

The recently launched Sentinel-2 (S2) satellite (June 2015) brings great potential and opportunities for land use/cover mapping applications, due to its fine spatial resolution, multispectral coverage, and high temporal resolution. So far, only a handful of studies have used real S2 data for land cover classification, and in northern Vietnam, to the best of our knowledge, no studies have used S2 data for land cover mapping. The aim of this study is to provide a preliminary result of land cover classification using Sentinel-2 data with a rising state-of-the-art classifier, Random Forest. A case study with heterogeneous land use/cover east of Hanoi Capital, Vietnam, was chosen. All ten S2 spectral bands with 10 m and 20 m pixel size were used, with the 10 m bands resampled to 20 m. Among several classification algorithms, the supervised Random Forest (RF) classifier was applied because it is reported to be one of the most accurate methods for satellite image classification. The results showed that the red-edge and shortwave infrared (SWIR) bands play an important role in the classification results, and a very high overall accuracy above 90% was achieved.
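
As a sketch of the classification step, the snippet below trains a Random Forest on per-pixel band values with scikit-learn; the synthetic band samples, class labels, and tree count are stand-ins for the study's actual Sentinel-2 training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for per-pixel S2 samples: (n_pixels, 10 bands) plus reference labels.
rng = np.random.default_rng(0)
X = rng.random((5000, 10))                     # hypothetical band reflectances
y = (X[:, 3] + X[:, 8] > 1.0).astype(int)      # hypothetical 2-class reference

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
rf.fit(X_tr, y_tr)

print("overall accuracy:", accuracy_score(y_te, rf.predict(X_te)))
# feature_importances_ shows which bands (e.g., red-edge, SWIR) drive the result
print("band importances:", rf.feature_importances_.round(3))
```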

Keywords: classification algorithm, classification, land cover, random forest, Sentinel-2, Vietnam

Procedia PDF Downloads 342
179 Iris Feature Extraction and Recognition Based on Two-Dimensional Gabor Wavelet Transform

Authors: Bamidele Samson Alobalorun, Ifedotun Roseline Idowu

Abstract:

Biometric technologies use human body parts for unique and reliable identification based on physiological traits. The iris recognition system is a biometric-based method of identification. The human iris has discriminating characteristics that make the method efficient. To achieve this efficiency, the distinct features of the human iris must be extracted in order to generate accurate authentication of persons. In this study, an approach to iris recognition using a 2D Gabor filter for feature extraction is applied to iris templates. The 2D Gabor filter formulated the patterns that were used for training and were equally sent to the Hamming distance matching technique for recognition. A comparison of results is presented using two iris image subjects with filter matching indices of 1, 2, 3, 4, and 5, based on the CASIA iris image database. By comparing the results for the two subjects, the actual computational time of the developed models, measured in terms of training and average testing time of the Hamming distance classifier, is found, with a best recognition accuracy of 96.11%. Iris localization and segmentation are performed using Daugman's integro-differential operator, and normalization is confined to Daugman's rubber sheet model.
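
For context on the matching step, iris codes are typically compared with a normalized Hamming distance computed over valid (unmasked) bits; the sketch below illustrates that comparison, with the code dimensions and random bits as assumptions.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Normalized Hamming distance between two binary iris codes.
    code_*: boolean arrays of iris-code bits; mask_*: True where bits are valid."""
    valid = mask_a & mask_b
    disagree = np.logical_xor(code_a, code_b) & valid
    return disagree.sum() / max(valid.sum(), 1)

# Hypothetical 20x480 iris codes from 2D Gabor phase quantization
rng = np.random.default_rng(0)
a, b = rng.random((20, 480)) > 0.5, rng.random((20, 480)) > 0.5
m = np.ones((20, 480), dtype=bool)
print(hamming_distance(a, b, m, m))   # ~0.5 for unrelated codes, near 0 for a match
```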

Keywords: Daugman rubber sheet, feature extraction, Hamming distance, iris recognition system, 2D Gabor wavelet transform

Procedia PDF Downloads 33
178 Hyperspectral Image Classification Using Tree Search Algorithm

Authors: Shreya Pare, Parvin Akhter

Abstract:

Remotely sensed image classification becomes a very challenging task owing to the high dimensionality of hyperspectral images. Pixel-wise classification methods fail to take into account the spatial structure information of an image. Therefore, to improve classification performance, spatial information can be integrated into the classification process. In this paper, a multilevel thresholding algorithm based on a modified fuzzy entropy (MFE) function is used to perform the segmentation of hyperspectral images. The fuzzy parameters of the MFE function have been optimized using a new meta-heuristic based on the Tree-Search algorithm. The segmented image is classified by a large distribution machine (LDM) classifier. Experimental results are shown on a hyperspectral image dataset. The experimental outputs indicate that the proposed technique (MFE-TSA-LDM) achieves much higher classification accuracy for hyperspectral images when compared to state-of-the-art classification techniques. The proposed algorithm provides accurate segmentation and classification maps, thus becoming more suitable for image classification with large spatial structures.

Keywords: classification, hyperspectral images, large distribution margin, modified fuzzy entropy function, multilevel thresholding, tree search algorithm

Procedia PDF Downloads 132
177 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan

Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad

Abstract:

Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50 – 100 nm. This is a critical advantage, as growing larger crystals, as required by X-ray crystallography methods, is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron beam sensitive, which means there is a limit on the maximum electron dose one can use to collect the data required for a high-resolution structure determination. Therefore, collecting data using a conventional scintillator-based fiber-coupled camera brings additional challenges. This is because of the inherent noise introduced during the electron-to-photon conversion in the scintillator and the transfer of light via the fibers to the sensor, which results in a poor signal-to-noise ratio and requires relatively high, commonly specimen-damaging electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem. The K3 is an electron counting detector optimized for low-dose applications (like structural biology cryo-EM), and Stela is also a counting electron detector but optimized for diffraction applications with high speed and high dynamic range. Lastly, data collection workflows, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, can be another challenge of the 3DED technique. Traditionally this has all been done manually, or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher quality 3DED data enables structure determination with higher confidence, while automated workflows allow these experiments to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct detection electron counting cameras enhance 3DED results (3 to better than 1 Angstrom) for protein and small molecule structure determination. We will also show how the Latitude D software facilitates collecting such data in an integrated and fully automated user interface.

Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, Digitalmicrograph, proteins, small molecules

Procedia PDF Downloads 51
176 A Posteriori Trading-Inspired Model-Free Time Series Segmentation

Authors: Plessen Mogens Graf

Abstract:

Within the context of multivariate time series segmentation, this paper proposes a method inspired by a posteriori optimal trading. After a normalization step, time series are treated channelwise as surrogate stock prices that can be traded optimally a posteriori in a virtual portfolio holding either stock or cash. Linear transaction costs are interpreted as hyperparameters for noise filtering. Trading signals, as well as trading signals obtained on the reversed time series, are used for unsupervised channelwise labeling before a consensus over all channels is reached that determines the final segmentation time instants. The method is model-free in that no model prescriptions for segments are made. Benefits of the proposed approach include simplicity, computational efficiency, and adaptability to a wide range of different shapes of time series. Performance is demonstrated on synthetic and real-world data, including a large-scale dataset comprising a multivariate time series of dimension 1000 and length 2709. The proposed method is compared to a popular model-based bottom-up approach fitting piecewise affine models and to a recent model-based top-down approach fitting Gaussian models, and is found to be consistently faster while producing more intuitive results in the sense of segmenting time series at peaks and valleys.
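
The a posteriori optimal trading step can be read as a two-state dynamic program over each normalized channel; the sketch below is one plausible formulation under linear transaction costs, not necessarily the paper's exact algorithm.

```python
import numpy as np

def optimal_trades(price, cost=0.01):
    """A posteriori optimal long/flat trading with linear transaction cost.
    Returns the best achievable value in each state (cash/stock) at every step."""
    n = len(price)
    cash = np.zeros(n)             # best value if holding cash at time t
    stock = np.full(n, -np.inf)    # best value if holding stock at time t
    stock[0] = -price[0] * (1 + cost)
    for t in range(1, n):
        cash[t] = max(cash[t - 1], stock[t - 1] + price[t] * (1 - cost))
        stock[t] = max(stock[t - 1], cash[t - 1] - price[t] * (1 + cost))
    return cash, stock

# The buy/sell switching points of the optimal policy act as candidate segment
# boundaries; a larger `cost` filters out small price wiggles (noise).
cash, stock = optimal_trades(np.array([1.0, 1.2, 0.9, 1.5, 1.4, 2.0]))
print(cash[-1])
```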

Keywords: time series segmentation, model-free, trading-inspired, multivariate data

Procedia PDF Downloads 100
175 Synthetic Aperture Radar Remote Sensing Classification Using the Bag of Visual Words Model to Land Cover Studies

Authors: Reza Mohammadi, Mahmod R. Sahebi, Mehrnoosh Omati, Milad Vahidi

Abstract:

Classification of high-resolution polarimetric Synthetic Aperture Radar (PolSAR) images plays an important role in land cover and land use management. Recently, classification algorithms based on the Bag of Visual Words (BOVW) model have attracted significant interest among scholars and researchers in and out of the field of remote sensing. In this paper, a BOVW model with pixel-based low-level features has been implemented to classify a subset of a San Francisco Bay PolSAR image acquired by RADARSAT-2 in C-band. We used a segment-based decision-making strategy and compared the result with that of a traditional Support Vector Machine (SVM) classifier. An overall classification accuracy of 90.95% showed that the proposed algorithm is comparable with state-of-the-art methods. In addition to increasing the classification accuracy, the proposed method decreased the undesirable speckle effect of SAR images.
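
As a sketch of the Bag of Visual Words pipeline described here, the snippet below clusters low-level pixel features into a visual vocabulary and represents each segment as a normalized histogram of word occurrences; the feature dimensionality and vocabulary size are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def bovw_histograms(features_per_segment, n_words=64, seed=0):
    """features_per_segment: list of (n_i, d) arrays of low-level pixel features."""
    all_feats = np.vstack(features_per_segment)
    vocab = KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(all_feats)
    hists = []
    for feats in features_per_segment:
        words = vocab.predict(feats)
        h, _ = np.histogram(words, bins=np.arange(n_words + 1))
        hists.append(h / max(h.sum(), 1))   # normalized word histogram per segment
    return np.array(hists)

# Each histogram would then feed a classifier (e.g., an SVM), implementing the
# segment-based decision-making strategy.
rng = np.random.default_rng(0)
segments = [rng.random((200, 6)) for _ in range(10)]   # hypothetical pixel features
print(bovw_histograms(segments).shape)                 # (10, 64)
```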

Keywords: Bag of Visual Words (BOVW), classification, feature extraction, land cover management, Polarimetric Synthetic Aperture Radar (PolSAR)

Procedia PDF Downloads 172
174 Comparative Analysis of Dissimilarity Detection between Binary Images Based on Equivalency and Non-Equivalency of Image Inversion

Authors: Adnan A. Y. Mustafa

Abstract:

Image matching is a fundamental problem that arises frequently in many aspects of robot and computer vision. It can become a time-consuming process when matching images against a database consisting of hundreds of images, especially if the images are big. One approach to reducing the time complexity of the matching process is to reduce the search space in a pre-matching stage by quickly removing dissimilar images. The Probabilistic Matching Model for Binary Images (PMMBI) showed that dissimilarity detection between binary images can be accomplished quickly by random pixel mapping and is size invariant. The model is based on the gamma binary similarity distance, which recognizes an image and its inverse as containing the same scene and hence considers them to be the same image. However, in many applications an image and its inverse are treated not as the same but as dissimilar. In this paper, we present a comparative analysis of dissimilarity detection between PMMBI, based on the gamma binary similarity distance, and a modified PMMBI model based on a similarity distance that does treat an image and its inverse as dissimilar.
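
A minimal sketch of the random pixel mapping idea follows. The abstract does not give the exact form of the gamma binary similarity distance, so the inversion-invariant score below (taking the better of the match score and its complement) is an assumption for illustration only.

```python
import numpy as np

def sampled_match_score(img_a, img_b, n_samples=500, seed=0):
    """Fraction of agreeing pixels over a random sample of positions."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, img_a.size, n_samples)
    return np.mean(img_a.ravel()[idx] == img_b.ravel()[idx])

def is_dissimilar(img_a, img_b, threshold=0.8, invariant_to_inversion=True):
    s = sampled_match_score(img_a, img_b)
    if invariant_to_inversion:
        s = max(s, 1.0 - s)    # an image and its inverse count as the same scene
    return s < threshold       # flag quickly, without a full pixel-by-pixel match

a = np.zeros((64, 64), dtype=np.uint8)
print(is_dissimilar(a, 1 - a))   # False: inversion-invariant distance matches them
```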

Keywords: binary image, dissimilarity detection, probabilistic matching model for binary images, image mapping

Procedia PDF Downloads 113
173 The Use of the Flat Field Panel for the On-Ground Calibration of Metis Coronagraph on Board of Solar Orbiter

Authors: C. Casini, V. Da Deppo, P. Zuppella, P. Chioetto, A. Slemer, F. Frassetto, M. Romoli, F. Landini, M. Pancrazzi, V. Andretta, E. Antonucci, A. Bemporad, M. Casti, Y. De Leo, M. Fabi, S. Fineschi, F. Frassati, C. Grimani, G. Jerse, P. Heinzel, K. Heerlein, A. Liberatore, E. Magli, G. Naletto, G. Nicolini, M.G. Pelizzo, P. Romano, C. Sasso, D. Spadaro, M. Stangalini, T. Straus, R. Susino, L. Teriaca, M. Uslenghi, A. Volpicelli

Abstract:

Solar Orbiter, launched on February 9th, 2020, is an ESA/NASA mission conceived to study the Sun. The payload is composed of 10 instruments, among which is the Metis coronagraph. A coronagraph takes images of the solar corona: its occulter element simulates a total solar eclipse. This work presents some of the results obtained in the visible light band (580-640 nm) using a flat field panel source. The flat field panel gives uniform illumination; consequently, it was used during the on-ground calibration for several purposes: evaluating the response of each pixel of the detector (linearity) and characterizing the Field of View (FoV) of the coronagraph. A major result is the verification that the requirement for the Metis FoV is fulfilled. Investigations are in progress to verify that the performance measured on-ground did not change after launch.

Keywords: solar orbiter, Metis, coronagraph, flat field panel, calibration, on-ground, performance

Procedia PDF Downloads 72
172 Investigation of the Speckle Pattern Effect for Displacement Assessments by Digital Image Correlation

Authors: Salim Çalışkan, Hakan Akyüz

Abstract:

Digital image correlation (DIC) has become established as a versatile and efficient method for measuring displacements on specimen surfaces by comparing reference subsets in undeformed images with defined target subsets in the deformed image. Theoretical models show that the accuracy of DIC displacement data can be predicted from the variance of the image noise and the sum of the squares of the subset intensity gradients. The DIC procedure locates each subset of the original image in the deformed image; the software then determines the displacement values of the centers of the subsets, providing the complete displacement measures. In this paper, the effect of the speckle distribution on measured out-of-plane displacement data, as a function of subset size, was investigated. Nine groups of speckle patterns were used in this study: samples were sprayed through pre-manufactured patterns of three different hole diameters, each with three coverage ratios, produced on a computer numerical control punch press. The resulting displacement values, referenced at the center of the subset, are evaluated based on the average of the displacements of the pixels interior to the subset.

Keywords: digital image correlation, speckle pattern, experimental mechanics, tensile test, aluminum alloy

Procedia PDF Downloads 36
171 The Employees' Classification Method in the Space of Their Job Satisfaction, Loyalty and Involvement

Authors: Svetlana Ignatjeva, Jelena Slesareva

Abstract:

The aim of the study is the development and adaptation of a method to analyze and quantify indicators characterizing the relationship between a company and its employees. Diagnostics of such indicators is one of the most complex and topical issues in the psychology of labour. The offered method is based on a questionnaire whose indicators reflect the cognitive, affective, and conative components of the socio-psychological attitude of employees toward being as efficient as possible in their professional activities. This approach allows measuring not only the selected factors but also such parameters as cognitive and behavioural dissonances. Adaptation of the questionnaire includes factor structure analysis and suitability analysis of the phenomena indicators, measured in terms of the internal consistency of individual factors. Structural validity of the questionnaire was tested by exploratory factor analysis (extraction method: Principal Component Analysis; rotation method: Varimax with Kaiser normalization). Factor analysis allows reducing the dimensionality of the phenomena by moving from indicators to aggregative indexes and latent variables. Aggregative indexes are obtained as the sum of the relevant indicators, followed by standardization. Cronbach's alpha coefficient was used to assess the reliability-consistency of the questionnaire items. A two-step cluster analysis in the space of the allocated factors allows classifying employees according to their attitude toward working in the company. The results of psychometric testing indicate the possibility of using the developed technique for the analysis of employees' attitudes towards their work in companies and for the development of recommendations on their optimization.
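
For reference, the reliability check mentioned here can be computed directly; below is a standard Cronbach's alpha implementation, with the questionnaire scores as hypothetical data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of questionnaire scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Example: a 5-item factor answered by 4 respondents (hypothetical data)
scores = np.array([[4, 5, 4, 4, 5],
                   [2, 3, 2, 3, 2],
                   [5, 5, 4, 5, 5],
                   [3, 3, 3, 2, 3]])
print(round(cronbach_alpha(scores), 3))
```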

Keywords: involved in the organization, loyalty, organizations, method

Procedia PDF Downloads 321
170 A General Framework for Knowledge Discovery from Echocardiographic and Natural Images

Authors: S. Nandagopalan, N. Pradeep

Abstract:

The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as an active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks such as clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.

Keywords: active contour, Bayesian, echocardiographic image, feature vector

Procedia PDF Downloads 411
169 Vector Quantization Based on Vector Difference Scheme for Image Enhancement

Authors: Biji Jacob

Abstract:

The vector quantization algorithm uses a minimum-distance calculation for codebook generation; this time-consuming calculation, performed on each pixel value, leads to computational complexity. The codebook is updated by comparing the distance of each vector to its centroid vector as a measure of closeness. In this paper, vector quantization is modified based on a vector difference algorithm for image enhancement purposes. In the proposed scheme, vector differences between the vectors are considered as the new generation vectors, or new codebook vectors. The codebook is updated by comparing each new generation vector with a threshold value, keeping those with minimum error with respect to the parent vector. The minimum error decides the fitness of each newly generated vector. Thus the codebook is generated in an adaptive manner, and the fitness value is determined for the suppression of the degraded portion of the image, thereby leading to the enhancement of the image through the adaptive searching capability of vector quantization via the vector difference algorithm. Experimental results show that the vector difference scheme efficiently modifies the vector quantization algorithm for enhancing the image, with peak signal-to-noise ratio (PSNR), mean square error (MSE), and Euclidean distance (E_dist) as the performance parameters.
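
As background for the proposed modification, here is a minimal sketch of baseline vector quantization: assign each image vector to its nearest codebook entry by minimum distance and update entries as centroids. The block size and codebook size are assumptions.

```python
import numpy as np

def vq_codebook(vectors, n_codes=16, n_iter=20, seed=0):
    """Baseline VQ (Lloyd iteration): vectors is (n, d), e.g. flattened 4x4 blocks."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), n_codes, replace=False)].astype(float)
    for _ in range(n_iter):
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        nearest = d.argmin(axis=1)                     # minimum-distance assignment
        for k in range(n_codes):
            members = vectors[nearest == k]
            if len(members):
                codebook[k] = members.mean(axis=0)     # centroid update
    return codebook, nearest

rng = np.random.default_rng(0)
blocks = rng.integers(0, 256, (1000, 16)).astype(float)   # 4x4 blocks, flattened
cb, idx = vq_codebook(blocks)
print(cb.shape, idx[:8])
```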

Keywords: codebook, image enhancement, vector difference, vector quantization

Procedia PDF Downloads 230
168 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms

Authors: S. Nandagopalan, N. Pradeep

Abstract:

The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as an active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks such as clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.

Keywords: active contour, Bayesian, echocardiographic image, feature vector

Procedia PDF Downloads 386
167 Deep Neural Networks for Restoration of Sky Images Affected by Static and Anisotropic Aberrations

Authors: Constanza A. Barriga, Rafael Bernardi, Amokrane Berdja, Christian D. Guzman

Abstract:

Most image restoration methods in astronomy rely upon probabilistic tools that infer the best solution for a deconvolution problem. They achieve good performance when the point spread function (PSF) is spatially invariant in the image plane. However, this condition is not always satisfied in real optical systems. PSF angular variations can neither be evaluated directly from the observations nor corrected at pixel resolution. We have developed a method for the restoration of images affected by static and anisotropic aberrations using deep neural networks that can be applied directly to sky images. The network is trained using simulated sky images corresponding to the T-80 telescope optical system, an 80 cm survey imager at Cerro Tololo (Chile), which are synthesized using a Zernike polynomial representation of the optical system. Once trained, the network can be used directly on sky images, outputting a corrected version of the image with a constant and known PSF across its field of view. The method was tested with the T-80 telescope, achieving better results than PSF deconvolution techniques. We present the method and results on this telescope.

Keywords: aberrations, deep neural networks, image restoration, variable point spread function, wide field images

Procedia PDF Downloads 100
166 Administration of Lactobacillus plantarum PS128 Improves Animal Behavior and Monoamine Neurotransmission in Germ-Free Mice

Authors: Liu Wei-Hsien, Chuang Hsiao-Li, Huang Yen-Te, Wu Chien-Chen, Chou Geng-Ting, Tsai Ying-Chieh

Abstract:

Intestinal microflora play an important role in communication along the gut-brain axis. Probiotics, defined as live bacteria or bacterial products, confer a significant health benefit to the host. Here we administered Lactobacillus plantarum PS128 (PS128) to the germ-free (GF) mouse to investigate the impact of the gut-brain axis on emotional behavior. Administration of live PS128 significantly increased the total distance traveled in the open field test; it decreased the time spent in the closed arm and increased the time spent and total entries into the open arm in the elevated plus maze. In contrast, heat-killed PS128 caused no significant changes in the GF mice. Treatment with live PS128 significantly increased levels of both serotonin and dopamine in the striatum, but not in the prefrontal cortex or hippocampus. However, live PS128 did not alter pro- or anti-inflammatory cytokine production by mitogen-stimulated splenocytes. The above data indicate that the normalization of emotional behavior correlated with monoamine neurotransmission, but not with immune activity. Our findings suggest that daily intake of the probiotic PS128 could ameliorate neuropsychiatric disorders such as anxiety and excessive psychological stress.

Keywords: dopamine, hypothalamic-pituitary-adrenal axis, intestinal microflora, serotonin

Procedia PDF Downloads 376
165 Change Detection of Vegetative Areas Using Land Use Land Cover Derived from NDVI of Desert Encroached Areas

Authors: T. Garba, T. O. Quddus, Y. Y. Babanyara, M. A. Modibbo

Abstract:

Desertification is defined as the transformation of productive land into desert as a result of land ruination by man-induced soil erosion, which forces farmers in the affected areas to migrate into reserved areas in search of fertile land for their farming activities. This study therefore used remote sensing imagery to determine the level of change in the vegetative areas. To achieve this, the Normalized Difference Vegetation Index (NDVI), classified imagery, and image slicing derived from Landsat TM 1986, Landsat ETM 1999, and NigeriaSat-1 2007 were used to determine changes in vegetation. The classified imagery shows more natural vegetation in the 1986 image than in those of 1999 and 2007. This finding also appears in the three NDVI images: the high positive pixel values increased from 0.04 in 1986 to 0.22 in 1999 and 0.32 in 2007. The three histograms likewise indicate an increase in vegetative areas from 29.15 km² in 1986 to 60.58 km² in 1999 and 109 km² in 2007. The study recommends, among other things, restoring natural vegetation by discouraging farming activities in and around the natural vegetation of the study area.
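
For reference, NDVI and a simple density-slice area estimate can be computed as below; the band inputs, pixel size, and slicing threshold are illustrative assumptions, not the study's exact values.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

def vegetated_area_km2(ndvi_img, pixel_size_m=30.0, threshold=0.2):
    """Slice the NDVI image and convert the vegetated pixel count to km^2."""
    n_veg = np.count_nonzero(ndvi_img > threshold)
    return n_veg * (pixel_size_m ** 2) / 1e6

# Change detection: compare vegetated areas across the 1986, 1999, and 2007 scenes
rng = np.random.default_rng(0)
red, nir = rng.random((100, 100)), rng.random((100, 100))
print(round(vegetated_area_km2(ndvi(red, nir)), 3), "km^2 vegetated")
```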

Keywords: vegetative index, classified imageries, change detection, landsat, vegetation

Procedia PDF Downloads 321
164 External Retinal Prosthesis Image Processing System Used One-Cue Saliency Map Based on DSP

Authors: Yili Chen, Jixiang Fu, Zhihua Liu, Zhicheng Zhang, Rongmao Li, Nan Fu, Yaoqin Xie

Abstract:

A retinal prosthesis is designed to help the blind regain some sight. It is made up of an internal part and an external part. The external part consists of a camera, image processing, and an RF transmitter. The internal part contains an RF receiver, an implant chip, and micro-electrodes. The image captured by the camera must be processed with suitable strategies so that it corresponds to the stimulus delivered to the electrodes. Nowadays, the number of micro-electrodes is in the hundreds, and since the mechanism by which the electrodes stimulate the optic nerve is not fully understood, a simple working hypothesis is that each pixel in the image corresponds to one electrode. The question is therefore how to extract the important information from the image captured by the camera. Many strategies have been experimented with to obtain the most important information as quickly as possible, because the system must run in real time. ROI is a useful algorithm for extracting the region of interest. This paper explains the principles and functions of the ROI in detail. On this basis, we simplified the ROI algorithm and used it in the external image processing DSP system of the retinal prosthesis. Results show that our image processing strategy is suitable for a real-time retinal prosthesis: it removes redundant information and allows the useful information to be expressed in the low-size image.

Keywords: image processing, region of interest, saliency map, low-size image, useful information expression, redundant information reduction

Procedia PDF Downloads 242
163 Characteristic Sentence Stems in Academic English Texts: Definition, Identification, and Extraction

Authors: Jingjie Li, Wenjie Hu

Abstract:

Phraseological units in academic English texts have been a central focus in recent corpus linguistic research. A wide variety of phraseological units have been explored, including collocations, chunks, lexical bundles, patterns, semantic sequences, etc. This paper describes a special category of clause-level phraseological units, namely, Characteristic Sentence Stems (CSSs), with a view to describing their defining criteria and extraction method. CSSs are contiguous lexico-grammatical sequences which contain a subject-predicate structure and which are frame expressions characteristic of academic writing. The extraction of CSSs consists of six steps: Part-of-speech tagging, n-gram segmentation, structure identification, significance of occurrence calculation, text range calculation, and overlapping sequence reduction. Significance of occurrence calculation is the crux of this study. It includes the computing of both the internal association and the boundary independence of a CSS and tests the occurring significance of the CSS from both inside and outside perspectives. A new normalization algorithm is also introduced into the calculation of LocalMaxs for reducing overlapping sequences. It is argued that many sentence stems are so recurrent in academic texts that the most typical of them have become the habitual ways of making meaning in academic writing. Therefore, studies of CSSs could have potential implications and reference value for academic discourse analysis, English for Academic Purposes (EAP) teaching and writing.

Keywords: characteristic sentence stem, extraction method, phraseological unit, the statistical measure

Procedia PDF Downloads 133
162 Wind Speed Forecasting Based on Historical Data Using Modern Prediction Methods in Selected Sites of Geba Catchment, Ethiopia

Authors: Halefom Kidane

Abstract:

This study aims to assess the wind resource potential and characterize the urban wind patterns of Hawassa City, Ethiopia. The estimation and characterization of wind resources are crucial for sustainable urban planning, renewable energy development, and climate change mitigation strategies. A secondary data collection method was used to carry out the study. The data collected at 2 meters were analyzed statistically and extrapolated to the standard heights of 10 meters and 30 meters using the power law equation. The standard deviation method was used to calculate the scale and shape factors. From the analysis presented, the maximum and minimum mean daily wind speeds at 2 meters were 1.33 m/s and 0.05 m/s in 2016, 1.67 m/s and 0.14 m/s in 2017, and 1.61 m/s and 0.07 m/s in 2018, respectively. The maximum monthly average wind speed of Hawassa City at 2 meters was recorded in December 2016, at around 0.78 m/s; in 2017 the maximum was recorded in January, at 0.80 m/s; and in 2018 June had the maximum, at 0.76 m/s. On the other hand, October was the month with the minimum mean wind speed in all years, with values of 0.47 m/s in 2016, 0.47 m/s in 2017, and 0.34 m/s in 2018. The annual mean wind speed at a height of 2 meters was 0.61 m/s in 2016, 0.64 m/s in 2017, and 0.57 m/s in 2018. From extrapolation, the annual mean wind speeds for 2016, 2017, and 2018 were 1.17 m/s, 1.22 m/s, and 1.11 m/s at a height of 10 meters, and 3.34 m/s, 3.78 m/s, and 3.01 m/s at a height of 30 meters, respectively. Thus, the site consists mainly of class-I wind speeds, even at the extrapolated heights.
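
The height extrapolation mentioned follows directly from the power law; in the sketch below, the shear exponent is a typical open-terrain assumption rather than the study's fitted value (the abstract's 10 m results imply a considerably larger, site-specific exponent, roughly 0.4).

```python
def extrapolate_wind_speed(v_ref, h_ref=2.0, h_target=10.0, alpha=0.143):
    """Power law: v(h) = v_ref * (h / h_ref) ** alpha.
    alpha ~ 1/7 is a common open-terrain assumption; rough/urban sites run higher."""
    return v_ref * (h_target / h_ref) ** alpha

annual_mean_2m = {2016: 0.61, 2017: 0.64, 2018: 0.57}   # m/s, from the abstract
for year, v in annual_mean_2m.items():
    print(year, round(extrapolate_wind_speed(v, h_target=10.0), 2), "m/s at 10 m")
```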

Keywords: artificial neural networks, forecasting, min-max normalization, wind speed

Procedia PDF Downloads 29
161 Refined Edge Detection Network

Authors: Omar Elharrouss, Youssef Hmamouche, Assia Kamal Idrissi, Btissam El Khamlichi, Amal El Fallah-Seghrouchni

Abstract:

Edge detection is one of the most challenging tasks in computer vision, due to the complexity of detecting edges or boundaries in real-world images that contain objects of different types and scales, such as trees and buildings, against various backgrounds. Edge detection is also a key task for many computer vision applications. Using a set of backbones as well as attention modules, deep-learning-based methods have improved edge detection compared with traditional methods like Sobel and Canny. However, images of complex scenes still represent a challenge for these methods. Moreover, the edges detected by existing approaches suffer from unrefined results, with output images containing many erroneous edges. To overcome this, in this paper, a refined edge detection network (RED-Net) is proposed using the mechanism of residual learning. By maintaining the high resolution of edges during the training process, and conserving the resolution of the edge image at each network stage, we connect the pooling outputs at each stage with the output of the previous layer. Also, after each layer, we use an affine batch normalization layer as an erosion operation for the homogeneous regions in the image. The proposed method is evaluated using the most challenging datasets, including BSDS500, NYUD, and Multicue. The obtained results outperform existing edge detection networks in terms of performance metrics and quality of output images.
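
To make the residual-learning idea concrete, here is a minimal PyTorch sketch of a residual block with an affine batch normalization layer; the channel count and layer arrangement are illustrative assumptions, not the actual RED-Net architecture.

```python
import torch
import torch.nn as nn

class ResidualEdgeBlock(nn.Module):
    """Residual block with affine batch normalization, in the spirit of the
    residual-learning mechanism described (layer sizes are assumptions)."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels, affine=True)   # learnable scale/shift
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels, affine=True)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity skip preserves edge resolution

x = torch.randn(1, 64, 128, 128)
print(ResidualEdgeBlock()(x).shape)   # spatial resolution is conserved
```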

Keywords: edge detection, convolutional neural networks, deep learning, scale-representation, backbone

Procedia PDF Downloads 60
160 High-Accuracy Satellite Image Analysis and Rapid DSM Extraction for Urban Environment Evaluations (Tripoli-Libya)

Authors: Abdunaser Abduelmula, Maria Luisa M. Bastos, José A. Gonçalves

Abstract:

The modeling of the earth's surface and the evaluation of urban environments with 3D models is an important research topic. New stereo capabilities of high-resolution optical satellite imagery, such as the tri-stereo mode of Pleiades, combined with new image matching algorithms, are now available and can be applied to urban area analysis. In addition, photogrammetry software packages have gained new, more efficient matching algorithms, such as SGM, as well as improved filters to deal with shadow areas, and can achieve denser and more precise results. This paper describes a comparison between 3D data extracted from tri-stereo and dual-stereo satellite images, combined with pixel-based matching and the Wallis filter. The aim was to improve the accuracy of 3D models, especially in urban areas, in order to assess whether satellite images are appropriate for a rapid evaluation of urban environments. The results showed that 3D models achieved by Pleiades tri-stereo outperformed, both in terms of accuracy and detail, the result obtained from a GeoEye pair. The assessment was made with reference digital surface models derived from high-resolution aerial photography. This suggests that tri-stereo images can be successfully used for the proposed urban change analyses.

Keywords: 3D models, environment, matching, pleiades

Procedia PDF Downloads 282
159 Understanding Gender-Based Violence through an Adolescent Lens: Qualitative Findings from Delhi, India

Authors: Pratishtha Singh

Abstract:

Gender-based violence (GBV), or gendered violence, refers to violence inflicted on a person because of their gender. The majority of men who perpetrate gender-based violence first do so during their teenage years. Further, the first sexual experience of most girls is coerced. In order to reduce the widespread occurrence of GBV, it is vital to intervene and reach people, especially boys, when their attitudes and beliefs about sexuality and gender are developing. This study aims to understand GBV through an adolescent lens, focusing on adolescents' knowledge, attitudes, and experiences regarding gendered abuse. This is a cross-sectional, qualitative study. The respondents are Delhi-based students in grades 11 and 12, recruited via snowball sampling. Sixteen in-depth telephonic interviews were carried out in April 2020. The data were transcribed verbatim into MS Word, and qualitative coding was undertaken in Atlas.ti 8. Twelve of the sixteen respondents admitted experiencing sexual GBV; of these, a little more than half reported it to somebody. Thematic analysis revealed key themes of (i) introduction and reinforcement of a patriarchal structure, (ii) violence in teen dating, (iii) acceptability and normalization of violence, and (iv) the justice system. The findings reflect a process wherein GBV becomes an intricate part of adolescents' lives. Participants showed a moderately well-informed understanding of gendered abuse, whereas attitudes reflected a complex combination of internalized patriarchy and a desire for positive societal reform. The results of this study highlight the need for health-promoting, gender-equitable interventions.

Keywords: adolescents, gender, health, violence

Procedia PDF Downloads 98
158 Image Segmentation Using Active Contours Based on Anisotropic Diffusion

Authors: Shafiullah Soomro

Abstract:

Active contours are one class of image segmentation techniques, and their goal is to capture the required object boundaries within an image. In this paper, we propose a novel image segmentation method using an active contour based on an anisotropic diffusion feature enhancement technique. Traditional active contour methods use only pixel information to perform segmentation, which produces inaccurate results when an image has noise or a complex background. We use the Perona-Malik diffusion scheme for feature enhancement, which sharpens the object boundaries and blurs the background variations. Our main contribution is the formulation of a new SPF (signed pressure force) function, which uses global intensity information across the regions. By minimizing an energy function within a partial differential equation framework, the proposed method captures semantically meaningful boundaries instead of catching uninteresting regions. Finally, we use a Gaussian kernel, which eliminates the problem of reinitialization of the level set function. We use several synthetic and real images from different modalities to validate the performance of the proposed method. In the experimental section, we found that the proposed method performs better both qualitatively and quantitatively and yields results with higher accuracy compared to other state-of-the-art methods.
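
For context, the Perona-Malik feature enhancement step can be sketched as follows; the conductance function, parameter values, and periodic boundary handling are common textbook choices rather than the authors' exact settings.

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=20.0, lam=0.2):
    """Perona-Malik anisotropic diffusion: smooths flat regions, preserves edges."""
    u = img.astype(float)
    for _ in range(n_iter):
        # finite differences toward the four neighbors (periodic borders via roll)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance g(|grad|) = exp(-(|grad|/kappa)^2)
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

rng = np.random.default_rng(0)
noisy = rng.normal(0, 10, (64, 64)) + 100 * (np.arange(64)[:, None] > 32)
smooth = perona_malik(noisy)
print(noisy.std().round(1), smooth.std().round(1))   # noise reduced, edge kept
```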

Keywords: active contours, anisotropic diffusion, level-set, partial differential equations

Procedia PDF Downloads 135
157 Fused Structure and Texture (FST) Features for Improved Pedestrian Detection

Authors: Hussin K. Ragb, Vijayan K. Asari

Abstract:

In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of a signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless quantity of phase congruency and the robustness of the CSLBP operator on flat image regions, as well as under blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The proposed descriptor is formed by extracting the phase congruency and CSLBP values of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for the local regions in the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and low-resolution DaimlerChrysler datasets to evaluate the detection performance of the pedestrian detection system based on the FST descriptor. A linear Support Vector Machine (SVM) is used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor has better detection performance than a set of state-of-the-art feature extraction methodologies.
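
As a sketch of the texture half of the descriptor, the snippet below computes a 4-bit Center-Symmetric LBP code per pixel by comparing the four center-symmetric neighbor pairs; the threshold value is an assumption.

```python
import numpy as np

def cslbp(img, threshold=0.01):
    """Center-Symmetric LBP: compare the 4 center-symmetric neighbor pairs of each
    pixel, yielding a 4-bit code (0..15). Expects intensities scaled to [0, 1]."""
    u = img.astype(float)
    # neighbor offsets (dy, dx) paired with their center-symmetric opposites
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, 1), (0, -1))]
    code = np.zeros(u.shape, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        na = np.roll(u, a, axis=(0, 1))
        nb = np.roll(u, b, axis=(0, 1))
        code |= ((na - nb > threshold).astype(np.uint8) << bit)
    return code

img = np.random.default_rng(0).random((32, 32))
hist = np.bincount(cslbp(img).ravel(), minlength=16)   # 16-bin texture histogram
print(hist)   # per-region histograms are concatenated into the FST descriptor
```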

Keywords: pedestrian detection, phase congruency, local phase, LBP features, CSLBP features, FST descriptor

Procedia PDF Downloads 449
156 Medical Image Watermark and Tamper Detection Using Constant Correlation Spread Spectrum Watermarking

Authors: Peter U. Eze, P. Udaya, Robin J. Evans

Abstract:

Data hiding can be achieved by steganography or invisible digital watermarking. For digital watermarking, both accurate retrieval of the embedded watermark and the integrity of the cover image are important. Medical image security in teleradiology is one application where the embedded patient record needs to be extracted accurately and the integrity of the medical image verified. In this research paper, Constant Correlation Spread Spectrum digital watermarking for medical image tamper detection and accurate embedded watermark retrieval is introduced. In the proposed method, a watermark bit from a patient record is spread in a medical image sub-block such that the correlation of every watermarked sub-block with a spreading code, W, has a constant value, p. The constant correlation p, the spreading code W, and the size of the sub-blocks constitute the secret key. Tamper detection is achieved by flagging any sub-block whose correlation value deviates from p by more than a small value, ℇ. The major features of our new scheme include: (1) improving watermark detection accuracy for high pixel-depth medical images by reducing the Bit Error Rate (BER) to zero, and (2) block-level tamper detection in a single computational process with simultaneous watermark detection, thereby increasing utility at the same computational cost.
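
A minimal sketch of the constant-correlation idea follows: each sub-block is shifted along the spreading code W so its correlation equals ±p, and detection flags blocks whose correlation drifts from ±p by more than ℇ. The block size and the values of p and ℇ are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def embed_bit(block, W, bit, p=2.0):
    """Shift a sub-block along spreading code W (entries +/-1) so that its
    correlation c = mean(block * W) equals +p for bit 1 and -p for bit 0."""
    b = block.astype(float)
    target = p if bit else -p
    c = np.mean(b * W)
    return b + (target - c) * W          # exact: the new correlation equals target

def detect_bit(block, W, p=2.0, eps=0.25):
    """Recover the bit; flag tampering if |c| strays from p by more than eps."""
    c = np.mean(block * W)
    tampered = abs(abs(c) - p) > eps
    return int(c > 0), tampered

rng = np.random.default_rng(0)
W = rng.choice([-1.0, 1.0], size=(8, 8))        # secret spreading code
blk = rng.integers(0, 256, (8, 8)).astype(float)
wm = embed_bit(blk, W, bit=1)
print(detect_bit(wm, W))                         # (1, False) on an intact block
```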

Keywords: Constant Correlation, Medical Image, Spread Spectrum, Tamper Detection, Watermarking

Procedia PDF Downloads 159
155 Life Cycle Assessment Comparison between Methanol and Ethanol Feedstock for the Biodiesel from Soybean Oil

Authors: Pawit Tangviroon, Apichit Svang-Ariyaskul

Abstract:

As the limited availability of petroleum-based fuel has been a major concern, biodiesel is one of the most attractive alternative fuels, because it is renewable and has advantages over conventional petroleum-based diesel. At present, biodiesel is generally produced by transesterification of vegetable oils with a low-molecular-weight alcohol, mainly methanol, using chemical catalysts. Methanol is a petrochemical product, which makes biodiesel produced from methanol not a purely renewable energy source. Ethanol, in contrast, is produced by fermentation processes and is therefore a potential feedstock that makes biodiesel a purely renewable alternative fuel. The research is conducted on two biodiesel production processes, reacting soybean oil with methanol and with ethanol. Life cycle assessment was carried out to evaluate the environmental impacts and to identify the preferable process alternative. Nine mid-point impact categories are investigated. The results indicate better performance in Abiotic Depletion Potential (ADP) and Acidification Potential (AP) for biodiesel production from methanol compared with biodiesel production from ethanol, due to lower energy consumption during the production processes. Apart from ADP and AP, using methanol as feedstock shows no advantage over biodiesel from ethanol. The single-score method is also included in this study in order to identify the better of the two production processes. Global normalization and weighting factors based on eco-taxes are used, showing that producing biodiesel from ethanol imposes a lower environmental load than biodiesel from methanol.

Keywords: biodiesel, ethanol, life cycle assessment, methanol, soybean oil

Procedia PDF Downloads 179
154 High Resolution Image Generation Algorithm for Archaeology Drawings

Authors: Xiaolin Zeng, Lei Cheng

Abstract:

Aiming at the problems of low accuracy and susceptibility to cultural relic diseases in the generation of high-resolution archaeology drawings by current image generation algorithms, an archaeology drawings generation algorithm based on a conditional generative adversarial network is proposed. An attention mechanism is added to the high-resolution image generation network serving as the backbone, which enhances the line feature extraction capability and improves the accuracy of line drawing generation. A dual-branch parallel architecture consisting of two backbone networks is implemented, where the semantic translation branch extracts semantic features from orthophotographs of cultural relics and the gradient screening branch extracts effective gradient features. Finally, a fusion fine-tuning module combines these two types of features to achieve the generation of high-quality, high-resolution archaeology drawings. Experimental results on a self-constructed archaeology drawings dataset of grotto temple statues show that the proposed algorithm outperforms current mainstream image generation algorithms in terms of pixel accuracy (PA), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR), and can be used to assist in drawing archaeology drawings.

Keywords: archaeology drawings, digital heritage, image generation, deep learning

Procedia PDF Downloads 10
153 Combining ASTER Thermal Data and Spatial-Based Insolation Model for Identification of Geothermal Active Areas

Authors: Khalid Hussein, Waleed Abdalati, Pakorn Petchprayoon, Khaula Alkaabi

Abstract:

In this study, we integrated ASTER thermal data with an area-based spatial insolation model to identify and delineate geothermally active areas in Yellowstone National Park (YNP). Two pairs of L1B ASTER day- and nighttime scenes were used to calculate land surface temperature. We employed the Emissivity Normalization Algorithm, which separates temperature from emissivity, to calculate surface temperature. We calculated the incoming solar radiation for the area covered by each of the four ASTER scenes using an insolation model and used this information to compute the temperature due to solar radiation. We then identified statistical thermal anomalies using the land surface temperature and the residuals calculated from the modeled temperatures and the ASTER-derived surface temperatures. Areas with temperatures or temperature residuals greater than 2σ, or between 1σ and 2σ, were considered ASTER-modeled thermal anomalies. The areas identified as thermal anomalies were in strong agreement with the thermal areas obtained from the YNP GIS database, and the YNP hot springs and geysers were located within areas identified as anomalous. The consistency between our results and known geothermally active areas indicates that thermal remote sensing data, integrated with a spatial-based insolation model, provide an effective means of identifying and locating areas of geothermal activity over large areas and rough terrain.
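
The anomaly rule described here reduces to simple thresholding of residual statistics; the sketch below illustrates it on synthetic temperatures, using the 1σ/2σ classes from the abstract.

```python
import numpy as np

def thermal_anomalies(lst, modeled):
    """Flag statistical thermal anomalies from ASTER land surface temperature (lst)
    and insolation-modeled temperature, both 2D arrays in the same units."""
    residual = lst - modeled                      # heat not explained by the sun
    z = (residual - residual.mean()) / residual.std()
    strong = z > 2.0                              # > 2 sigma: strong anomaly
    weak = (z > 1.0) & ~strong                    # 1-2 sigma: weak anomaly
    return strong, weak

rng = np.random.default_rng(0)
lst = rng.normal(280.0, 2.0, (50, 50))
lst[10:12, 10:12] += 15.0                         # hypothetical hot-spring pixels
modeled = np.full((50, 50), 280.0)
strong, weak = thermal_anomalies(lst, modeled)
print(strong.sum(), weak.sum())
```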

Keywords: thermal remote sensing, insolation model, land surface temperature, geothermal anomalies

Procedia PDF Downloads 331