Search results for: the Gradient projection for sparse reconstruction.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 655


325 The Nature of the Complicated Fabric Textures: How to Represent in Primary Visual Cortex

Authors: J. L. Liu, L. Wang, B. Zhu, J. Zhou, W. D. Gao

Abstract:

Fabric textures are very common in our daily life. However, the representation of fabric textures has never been explored from a neuroscience perspective. Theoretical studies suggest that the primary visual cortex (V1) uses a sparse code to efficiently represent natural images, but how the simple cells in V1 encode artificial textures remains a mystery. Here we therefore take fabric textures as stimuli and study the responses of an independent component analysis model that is established to describe the receptive fields of simple cells in V1. We choose 140 types of fabric to obtain classical fabric textures as materials. Experimental results indicate that the receptive fields of simple cells show clear selectivity in orientation, frequency and phase when drifting gratings are used to determine their tuning properties. Additionally, the distribution of optimal orientations and frequencies shows that the patch size selected from each original fabric image has a significant effect on the frequency selectivity.
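
For readers who wish to experiment with this kind of model, the following is a minimal sketch, assuming a stand-in woven-looking image, an illustrative patch size and component count, of how ICA basis functions (simple-cell-like receptive fields) are typically learned from image patches; it is not the authors' code or data.

```python
# Minimal sketch: learning ICA basis functions ("receptive fields") from image patches.
# The image, patch size, and component count are illustrative assumptions, not the
# parameters used in the paper.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Stand-in for a grey-level fabric image (a woven-looking pattern plus noise);
# replace with real fabric scans.
yy, xx = np.mgrid[0:256, 0:256]
image = np.sin(0.4 * xx) + 0.5 * np.sin(0.25 * (xx + yy)) + 0.1 * rng.standard_normal((256, 256))

patch = 16          # assumed patch size (pixels)
n_patches = 5000    # number of randomly sampled patches
n_components = 64   # number of ICA basis functions to learn

# Sample random patches and flatten them into rows of a data matrix.
rows = rng.integers(0, image.shape[0] - patch, n_patches)
cols = rng.integers(0, image.shape[1] - patch, n_patches)
X = np.stack([image[r:r + patch, c:c + patch].ravel() for r, c in zip(rows, cols)])
X -= X.mean(axis=1, keepdims=True)  # remove the DC component of each patch

ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
ica.fit(X)

# Each column of mixing_ is a learned basis function; reshaped to patch x patch it
# plays the role of a simple-cell receptive field.
receptive_fields = ica.mixing_.T.reshape(n_components, patch, patch)
print(receptive_fields.shape)
```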

Keywords: Fabric Texture, Receptive Field, Simple Cell, Sparse Coding.

324 Variable Regularization Parameter Normalized Least Mean Square Adaptive Filter

Authors: Young-Seok Choi

Abstract:

We present a normalized LMS (NLMS) algorithm with robust regularization. Unlike conventional NLMS with a fixed regularization parameter, the proposed approach dynamically updates the regularization parameter. By exploiting a gradient descent direction, we derive a computationally efficient and robust update scheme for the regularization parameter. In simulations, we demonstrate that the proposed algorithm outperforms conventional NLMS algorithms in terms of convergence rate and misadjustment error.
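
A minimal sketch of the idea, assuming an illustrative epsilon-update rule and step sizes rather than the exact scheme derived in the paper:

```python
# Sketch of NLMS with a dynamically updated regularization parameter. The
# epsilon-update below is an illustrative gradient-style adaptation, not
# necessarily the exact scheme derived in the paper.
import numpy as np

rng = np.random.default_rng(1)

# Unknown system to identify and its noisy output.
h_true = rng.standard_normal(16)
x = rng.standard_normal(5000)
d = np.convolve(x, h_true, mode="full")[: len(x)] + 0.01 * rng.standard_normal(len(x))

M = 16                      # filter length
w = np.zeros(M)             # adaptive filter weights
eps = 1e-2                  # regularization parameter (adapted below)
mu = 0.5                    # NLMS step size (assumption)
rho = 1e-4                  # learning rate for the regularization parameter (assumption)

for n in range(M, len(x)):
    u = x[n - M + 1:n + 1][::-1]          # current input regressor
    e = d[n] - w @ u                      # a priori error
    norm = u @ u + eps
    w += mu * e * u / norm                # NLMS weight update

    # Gradient-style adaptation of eps on the a posteriori squared error,
    # clipped so that eps stays positive (illustrative rule).
    e_post = d[n] - w @ u
    eps = max(1e-8, eps - rho * e_post * mu * e * (u @ u) / norm ** 2)

print("weight error:", np.linalg.norm(w - h_true))
```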

Keywords: Regularization, normalized LMS, system identification, robustness.

323 A Nonlinear ODE System for the Unsteady Hydrodynamic Force – A New Approach

Authors: Osama A. Marzouk

Abstract:

We propose a reduced-order model for the instantaneous hydrodynamic force on a cylinder. The model consists of a system of two ordinary differential equations (ODEs), which can be integrated in time to yield very accurate histories of the resultant force and its direction. In contrast to several existing models, the proposed model considers the actual (total) hydrodynamic force rather than its perpendicular or parallel projection (the lift and drag), and captures the complete force rather than the oscillatory part only. We study and describe the relationship between the model parameters, evaluated using results from numerical simulations, and the Reynolds number, so that the model can be used at any arbitrary value within the considered range of 100 to 500 to provide an accurate representation of the force without the need to perform time-consuming simulations and solve the partial differential equations (PDEs) governing the flow field.
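
The two-ODE structure can be illustrated with a generic van der Pol-type wake oscillator of the kind commonly used in this literature; this is an illustrative stand-in, not the authors' system, and all coefficients are assumptions.

```python
# Generic van der Pol-type wake-oscillator sketch: a two-equation ODE system whose
# time integration yields an oscillatory force history. Illustrative stand-in only;
# all coefficients are assumptions, not the model identified in the paper.
import numpy as np
from scipy.integrate import solve_ivp

eps, omega, A = 0.3, 2 * np.pi, 0.1   # damping parameter, shedding frequency, forcing amplitude

def rhs(t, y):
    q, qdot = y
    # Van der Pol oscillator: self-limited oscillation near the shedding frequency.
    qddot = -eps * omega * (q ** 2 - 1) * qdot - omega ** 2 * q + A * np.sin(omega * t)
    return [qdot, qddot]

sol = solve_ivp(rhs, (0.0, 20.0), [0.1, 0.0], max_step=0.01)

# The fluctuating force coefficient is taken proportional to the wake variable q.
c_force = 0.5 * sol.y[0]
print("peak force coefficient:", np.abs(c_force).max())
```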

Keywords: reduced-order model, wake oscillator, nonlinear, ODE system

322 Current Starved Ring Oscillator Image Sensor

Authors: Devin Atkin, Orly Yadid-Pecht

Abstract:

The continual demands for increasing resolution and dynamic range in complementary metal-oxide-semiconductor (CMOS) image sensors have resulted in exponential increases in the amount of data that needs to be read out of an image sensor, and existing readouts cannot keep up with this demand. Interesting approaches such as sparse and burst readouts have been proposed and show promise, but at considerable trade-offs in other specifications. To this end, we have begun designing and evaluating various readout topologies centered around an attempt to parallelize the sensor readout. In this paper, we have designed, simulated, and started testing a light-controlled oscillator topology with dual column and row readouts. We expect the parallel readout structure to offer greater speed and alleviate the trade-off typical in this topology, where slow pixels present a major framerate bottleneck.

Keywords: CMOS image sensors, high-speed capture, wide dynamic range, light controlled oscillator.

321 Levenberg-Marquardt Algorithm for Karachi Stock Exchange Share Rates Forecasting

Authors: Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani, C. Ardil

Abstract:

Financial forecasting is an example of a signal processing problem. A number of ways to train the network are available. We have used the Levenberg-Marquardt algorithm for error back-propagation for weight adjustment. Pre-processing of the data has reduced much of the large-scale variation to small scale, reducing the variation of the training data.
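
As a rough illustration of the training scheme mentioned, the sketch below applies Levenberg-Marquardt steps, dw = -(J^T J + mu I)^(-1) J^T e, to a tiny one-hidden-layer network; the network size, synthetic data and damping schedule are assumptions, not the authors' setup.

```python
# Sketch of Levenberg-Marquardt weight updates for a tiny feed-forward network.
# Network size, synthetic data, and the damping schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))                 # stand-in for pre-processed inputs
y = np.tanh(X @ np.array([0.5, -1.0, 2.0])) + 0.05 * rng.standard_normal(200)

n_hidden = 5
n_w = 3 * n_hidden + n_hidden                     # hidden weights + output weights

def predict(w, X):
    W1 = w[:3 * n_hidden].reshape(3, n_hidden)    # input -> hidden weights
    w2 = w[3 * n_hidden:]                         # hidden -> output weights
    return np.tanh(X @ W1) @ w2

def jacobian(w, X, step=1e-6):
    """Numerical Jacobian of the residuals with respect to the weights."""
    r0 = predict(w, X) - y
    J = np.empty((len(y), len(w)))
    for k in range(len(w)):
        wp = w.copy()
        wp[k] += step
        J[:, k] = (predict(wp, X) - y - r0) / step
    return J, r0

w = 0.1 * rng.standard_normal(n_w)
mu = 1e-2
for it in range(50):
    J, r = jacobian(w, X)
    # Levenberg-Marquardt step: (J^T J + mu I) dw = -J^T r
    dw = np.linalg.solve(J.T @ J + mu * np.eye(n_w), -J.T @ r)
    if np.sum((predict(w + dw, X) - y) ** 2) < np.sum(r ** 2):
        w, mu = w + dw, mu * 0.7                  # accept step, relax damping
    else:
        mu *= 2.0                                 # reject step, increase damping

print("final SSE:", np.sum((predict(w, X) - y) ** 2))
```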

Keywords: Gradient descent method, Jacobian matrix, Levenberg-Marquardt algorithm, quadratic error surfaces.

320 Eye Location Based on Structure Feature for Driver Fatigue Monitoring

Authors: Qiong Wang

Abstract:

Eye location is one of the most important problems to solve in a driver fatigue monitoring system. This paper presents an efficient method to achieve fast and accurate eye location in grey-level images obtained under real-world driving conditions. The structure of the eye region is used as a robust cue to find possible eye pairs. Eye pair candidates at different scales are selected by finding regions that roughly match a binary eye-pair template. To obtain the real one, all the eye pair candidates are then verified using support vector machines. Finally, the eyes are precisely located using binary vertical projection and an eye classifier in the eye pair images. The proposed method is robust to illumination changes, moderate rotations, the wearing of glasses and different eye states. Experimental results demonstrate its effectiveness.
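
The final localization step can be illustrated with a small sketch; the synthetic patch and threshold below are assumptions, not values from the paper.

```python
# Sketch of the binary vertical-projection step used to pin down eye positions
# inside a verified eye-pair region. Threshold and image are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a grey-level eye-pair patch: dark blobs (eyes) on a brighter background.
patch = np.full((40, 120), 200.0) + 10 * rng.standard_normal((40, 120))
patch[15:25, 20:35] = 50.0    # left "eye"
patch[15:25, 85:100] = 50.0   # right "eye"

# Binarize: dark pixels (eyes, eyebrows) become 1, background becomes 0.
binary = (patch < 120).astype(int)

# Vertical projection: count of dark pixels in each column.
v_proj = binary.sum(axis=0)

# Columns whose projection exceeds half of the maximum are treated as eye columns;
# splitting them at the patch centre gives one estimate per eye.
cols = np.flatnonzero(v_proj > 0.5 * v_proj.max())
left_x = int(cols[cols < patch.shape[1] // 2].mean())
right_x = int(cols[cols >= patch.shape[1] // 2].mean())
print("estimated eye columns:", left_x, right_x)
```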

Keywords: eye location, structure feature, driver fatigue monitoring

319 Signal Reconstruction Using Cepstrum of Higher Order Statistics

Authors: Adnan Al-Smadi, Mahmoud Smadi

Abstract:

This paper presents an algorithm for reconstructing the phase and magnitude responses of the impulse response when only the output data are available. The system is driven by a zero-mean independent identically distributed (i.i.d.) non-Gaussian sequence that is not observed. The additive noise is assumed to be Gaussian. This is an essential problem in many practical applications in various science and engineering areas such as biomedical, seismic, and speech signal processing. The method is based on evaluating the bicepstrum of the third-order statistics of the observed output data. Simulation results are presented that demonstrate the performance of this method.

Keywords: Cepstrum, bicepstrum, third order statistics

318 Effect of a Magnetic Field on the Onset of Marangoni Convection in a Micropolar Fluid

Authors: Mohd Nasir Mahmud, Ruwaidiah Idris, Ishak Hashim

Abstract:

In the presence of a uniform vertical magnetic field and suspended particles, thermocapillary instability in a horizontal liquid layer is investigated. The resulting eigenvalue problem is solved by the Galerkin technique for various basic temperature gradients. It is found that the presence of the magnetic field always has a stabilizing effect, increasing the critical Marangoni number.

Keywords: Marangoni convection, Magnetic field, Micropolar fluid, Non-uniform thermal gradient, Thermocapillary.

317 An Iterative Algorithm for KLDA Classifier

Authors: D.N. Zheng, J.X. Wang, Y.N. Zhao, Z.H. Yang

Abstract:

Linear discriminant analysis (LDA) can be conveniently generalized into a nonlinear form, kernel LDA (KLDA), by using kernel functions. However, KLDA usually leads to a generalized eigenvalue problem that is singular in practice. To avoid this complication, this paper proposes an iterative algorithm for two-class KLDA. The proposed KLDA is used as a nonlinear discriminant classifier, and the experiments show that its performance is comparable with that of SVM.
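
The paper's exact iteration is not reproduced here, but the following minimal sketch shows a two-class kernel Fisher discriminant whose regularized linear system is solved iteratively with conjugate gradients; the kernel, regularization value and data are assumptions.

```python
# Sketch of a two-class kernel Fisher discriminant (KLDA) whose linear system is
# solved iteratively with conjugate gradients. Kernel width, regularization and
# data are illustrative assumptions; this is not the paper's exact algorithm.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(4)
X1 = rng.standard_normal((50, 2)) + [2, 0]      # class +1 samples
X2 = rng.standard_normal((50, 2)) - [2, 0]      # class -1 samples
X = np.vstack([X1, X2])
n1, n = len(X1), len(X)

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)
K1, K2 = K[:, :n1], K[:, n1:]

m1, m2 = K1.mean(axis=1), K2.mean(axis=1)       # class means in feature space
def scatter(Kj):
    lj = Kj.shape[1]
    return Kj @ (np.eye(lj) - np.ones((lj, lj)) / lj) @ Kj.T
N = scatter(K1) + scatter(K2) + 1e-3 * np.eye(n)  # regularized within-class scatter

# Solve N * alpha = (m1 - m2) with conjugate gradients instead of a direct inverse.
alpha, info = cg(N, m1 - m2)

# Classify by projecting onto alpha and thresholding at the midpoint of the
# projected class means.
proj = K @ alpha
threshold = 0.5 * (proj[:n1].mean() + proj[n1:].mean())
labels = np.where(proj > threshold, 1, -1)
print("training accuracy:", (labels == np.r_[np.ones(n1), -np.ones(n - n1)]).mean())
```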

Keywords: Linear discriminant analysis (LDA), kernel LDA (KLDA), conjugate gradient algorithm, nonlinear discriminant classifier.

316 Linguistic Competence Analysis and the Development of Speaking Instructional Material

Authors: Felipa M. Rico

Abstract:

Linguistic oral competence plays a vital role in attaining effective communication. Since English is a universally used language and a high-demand skill in the workplace, mastery is the expected output from learners. To achieve this, learners should be given integrated, differentiated tasks which help them develop and strengthen the expected skills. This study aimed to develop supplementary speaking instructional material to enhance the English linguistic competence of Grade 9 students in the areas of pronunciation, intonation and stress, voice projection, diction and fluency. A descriptive analysis was utilized to analyze the students' level of speaking performance in order to employ appropriate strategies. There were two sets of respondents: 178 Grade 9 students selected through stratified sampling and chosen at random, and a group of English teachers who evaluated the usefulness of the devised teaching materials. A teacher conducted a speaking test, and activities were employed to analyze the speaking needs of the students. Observation and recordings were also used to evaluate the students' performance. The findings revealed that the English pronunciation of the students was slightly unclear at times, but generally fair. There were lapses, but the students generally rated moderate in intonation and stress because of interference from other languages. In terms of voice projection, the students have an erratically high volume and pitch. For diction, the students' ability to produce comprehensible language is limited, and as to fluency, the choice of vocabulary and use of structure were severely limited. Based on the students' speaking needs analyses, the supplementary material devised was based on Nunan's IM model, incorporating contexts of daily life and global work settings, following the principle that language is best learned in actual meaningful situations. To widen the mastery of skills, a rich learning environment filled with a variety of instructional materials tends to foster faster acquisition of the requisite skills for sustained learning and development. The role of the IM is to encourage information to stick in the learners' minds, as what is seen is understood better than what is heard. The teachers found the IM "very useful," which implies that English teachers could adopt the materials to improve the speaking skills of students. Further, teachers should provide varied opportunities for students to get involved in real-life situations where they can take turns asking and answering questions and share information related to the activities. This would minimize anxiety among students in the use of the English language.

Keywords: Fluency, intonation, instructional materials, linguistic competence, pronunciation.

315 The Riemann Barycenter Computation and Means of Several Matrices

Authors: Miklos Palfia

Abstract:

An iterative definition of any n-variable mean function is given in this article, which iteratively uses the two-variable form of the corresponding mean function. This extension method avoids recursion, which is an important improvement compared with certain recursive formulas given before by Ando-Li-Mathias and Petz-Temesi. Furthermore, it is conjectured here that this iterative algorithm coincides with the solution of the Riemannian centroid minimization problem. Simulations are given to compare the convergence rates of the different algorithms in the literature, namely the gradient and Newton methods for the Riemannian centroid computation.
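
To make the two-variable building block concrete, here is a small sketch, assuming an illustrative cyclic pairing that is not necessarily the article's exact iteration, which repeatedly applies the two-variable geometric mean A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2) until the matrices agree.

```python
# Sketch: extending the two-variable geometric mean A # B to several matrices by
# cyclically averaging neighbours until the list collapses to a common limit.
# This illustrative iteration is not necessarily the article's exact scheme.
import numpy as np
from scipy.linalg import sqrtm, inv, norm

def geo_mean(A, B):
    """Two-variable geometric mean A # B of symmetric positive definite matrices."""
    As = np.real(sqrtm(A))
    Ais = inv(As)
    return np.real(As @ sqrtm(Ais @ B @ Ais) @ As)

rng = np.random.default_rng(5)
def random_spd(d=3):
    M = rng.standard_normal((d, d))
    return M @ M.T + d * np.eye(d)

mats = [random_spd() for _ in range(4)]          # four positive definite matrices

for _ in range(60):
    mats = [geo_mean(mats[i], mats[(i + 1) % len(mats)]) for i in range(len(mats))]
    spread = max(norm(M - mats[0]) for M in mats)
    if spread < 1e-8:
        break

print("common limit (candidate mean):")
print(mats[0].round(4))
```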

Keywords: Means, matrix means, operator means, geometric mean, Riemannian center of mass.

314 Numerical Studies of Galerkin-type Time-discretizations Applied to Transient Convection-diffusion-reaction Equations

Authors: Naveed Ahmed, Gunar Matthies

Abstract:

We deal with the numerical solution of time-dependent convection-diffusion-reaction equations. We combine the local projection stabilization method for the space discretization with two different time discretization schemes: the continuous Galerkin-Petrov (cGP) method and the discontinuous Galerkin (dG) method with polynomials of degree k. We establish optimal error estimates and present numerical results which show that the cGP(k) and dG(k) methods are accurate of order k+1 in the whole time interval. Moreover, the cGP(k) method is superconvergent of order 2k and the dG(k) method of order 2k+1 at the discrete time points. Furthermore, the dependence of the results on the choice of the stabilization parameter is discussed and compared.

Keywords: Convection-diffusion-reaction equations, stabilized finite elements, discontinuous Galerkin, continuous Galerkin-Petrov.

313 Iterative solutions to the linear matrix equation AXB + CXᵀD = E

Authors: Yongxin Yuan, Jiashang Jiang

Abstract:

In this paper, a gradient-based iterative algorithm is presented to solve the linear matrix equation AXB + CXᵀD = E, where X is the unknown matrix and A, B, C, D, E are given constant matrices. It is proved that if the equation has a solution, then the unique minimum-norm solution can be obtained by choosing a special kind of initial matrix. Two numerical examples show that the introduced iterative algorithm is quite efficient.
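
The flavour of such an iteration can be sketched as follows; this is the plain gradient of the residual norm with a conservative step size, not necessarily the exact scheme analysed in the paper. With R = E - AXB - CXᵀD, the update is X ← X + μ(AᵀRBᵀ + DRᵀC).

```python
# Sketch of a gradient-based iteration for AXB + CX^T D = E. The step size and
# stopping rule are illustrative; this is the plain gradient of the residual norm,
# not necessarily the exact scheme analysed in the paper.
import numpy as np

rng = np.random.default_rng(6)
n = 4
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
X_true = rng.standard_normal((n, n))
E = A @ X_true @ B + C @ X_true.T @ D        # build a consistent right-hand side

X = np.zeros((n, n))
# Conservative step size bounded by the squared operator norm of X -> AXB + CX^T D.
mu = 1.0 / (np.linalg.norm(A, 2) * np.linalg.norm(B, 2)
            + np.linalg.norm(C, 2) * np.linalg.norm(D, 2)) ** 2

for k in range(50000):
    R = E - A @ X @ B - C @ X.T @ D          # current residual
    if np.linalg.norm(R) < 1e-8:
        break
    # Negative gradient of ||R||_F^2 with respect to X (up to a factor of 2).
    X += mu * (A.T @ R @ B.T + D @ R.T @ C)

print("residual norm:", np.linalg.norm(E - A @ X @ B - C @ X.T @ D))
```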

Keywords: matrix equation, iterative algorithm, parameter estimation, minimum norm solution.

312 Evaluating some Feature Selection Methods for an Improved SVM Classifier

Authors: Daniel Morariu, Lucian N. Vintan, Volker Tresp

Abstract:

Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. Four feature selection methods are evaluated: Random Selection, Information Gain (IG), Support Vector Machine-based selection (SVM_FS) and a Genetic Algorithm with SVM (GA_FS). We show that the best results were obtained with the SVM_FS and GA_FS methods for a relatively small dimension of the feature vector, compared with the IG method, which requires longer vectors for quite similar classification accuracies. We also present a novel method to better correlate the SVM kernel parameters (polynomial or Gaussian kernel).
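
One of the four selectors, Information Gain, is simple enough to sketch compactly; binary term presence and two classes are simplifying assumptions for illustration.

```python
# Sketch of Information Gain (IG) feature ranking for a binary term-document matrix.
# Binary features and two classes are simplifying assumptions.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def information_gain(X, y):
    """IG of each column of the binary matrix X with respect to labels y."""
    n = len(y)
    _, counts = np.unique(y, return_counts=True)
    h_y = entropy(counts / n)
    gains = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        gain = h_y
        for v in (0, 1):
            mask = X[:, j] == v
            if mask.any():
                _, c = np.unique(y[mask], return_counts=True)
                gain -= mask.mean() * entropy(c / mask.sum())
        gains[j] = gain
    return gains

rng = np.random.default_rng(7)
y = rng.integers(0, 2, 300)
X = rng.integers(0, 2, (300, 20))
X[:, 0] = y                    # feature 0 is perfectly informative by construction
print(np.argsort(information_gain(X, y))[::-1][:5])  # top-5 ranked features
```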

Keywords: Feature selection, learning with kernels, support vector machine, genetic algorithms, classification.

311 Coastal Vulnerability Index and Its Projection for Odisha Coast, East Coast of India

Authors: Bishnupriya Sahoo, Prasad K. Bhaskaran

Abstract:

Tropical cyclones are among the worst natural hazards, leaving a trail of destruction and causing enormous damage to life, property, and coastal infrastructure. In a global perspective, the Indian Ocean is considered one of the cyclone-prone basins in the world. Specifically, the frequency of cyclogenesis in the Bay of Bengal is higher compared to the Arabian Sea. Of the four maritime states on the East coast of India, Odisha is highly susceptible to tropical cyclone landfall. Historical records clearly show that the frequency of cyclones has decreased in this basin. However, in recent decades, the intensity and size of tropical cyclones have increased. This is a matter of concern, as the risk and vulnerability of the Odisha coast exposed to high wind speeds and gusts during cyclone landfall have increased. In this context, there is a need to assess and evaluate the severity of coastal risk, the area of exposure under risk, and the associated vulnerability in a multi-risk perspective. A changing climate can result in the emergence of new hazards and vulnerabilities over a region with differential spatial and socio-economic impact. Hence, there is a need for coastal vulnerability projections under a changing climate scenario. With this motivation, the present study attempts to estimate the destructiveness of tropical cyclones based on the Power Dissipation Index (PDI) for those cyclones that made landfall along the Odisha coast, which exhibits an increasing trend based on historical data. The study also covers futuristic scenarios of integral coastal vulnerability based on the trends in PDI for the Odisha coast. The study considers 11 essential parameters: cyclone intensity, storm surge, onshore inundation, mean tidal range, continental shelf slope, topographic elevation onshore, rate of shoreline change, maximum wave height, relative sea level rise, rainfall distribution, and coastal geomorphology. The study shows that over a decadal scale, the coastal vulnerability index (CVI) depends largely on the incremental change in variables such as cyclone intensity, storm surge, and the associated inundation. In addition, the study performs a critical analysis of the modulation of PDI on storm surge and inundation characteristics for the entire coastal belt of Odisha State. Interestingly, the study brings to light that a linear correlation exists between the storm tide and the PDI. The trend analysis of PDI and its projection for coastal Odisha have direct practical applications in effective coastal zone management and vulnerability assessment.
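
For reference, the Power Dissipation Index is conventionally accumulated from the cube of the maximum sustained wind speed over a cyclone's lifetime; a minimal sketch with made-up track values:

```python
# Sketch: accumulating the Power Dissipation Index (PDI) of a tropical cyclone from
# its maximum sustained wind speed record. The 6-hourly track values are made up.
import numpy as np

# Maximum sustained wind speed (m/s) at 6-hourly best-track intervals (illustrative).
v_max = np.array([18, 22, 28, 33, 41, 46, 43, 36, 27, 20], dtype=float)
dt = 6 * 3600.0                                  # record interval in seconds

# PDI = integral of V_max^3 over the storm lifetime (units: m^3 s^-2).
pdi = np.sum(v_max ** 3) * dt
print(f"PDI = {pdi:.3e} m^3 s^-2")
```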

Keywords: Bay of Bengal, coastal vulnerability index, power dissipation index, tropical cyclone.

310 Margin-Based Feed-Forward Neural Network Classifiers

Authors: Han Xiao, Xiaoyan Zhu

Abstract:

The margin-based principle was proposed a long time ago, and it has been proved that this principle can reduce structural risk and improve performance in both theoretical and practical respects. Meanwhile, the feed-forward neural network is a traditional classifier that is currently very popular with deeper architectures. However, the training algorithms for feed-forward neural networks are developed from the Widrow-Hoff principle, which minimizes the squared error. In this paper, we propose a new training algorithm for feed-forward neural networks based on the margin-based principle, which can effectively improve the accuracy and generalization ability of neural network classifiers with fewer labelled samples and flexible networks. We have conducted experiments on four UCI open datasets and achieved good results as expected. In conclusion, our model can handle sparsely labelled, higher-dimensional datasets with high accuracy, while the modification from the old ANN method to our method is easy and involves almost no extra work.
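
A minimal sketch of the margin-based idea, assuming a small one-hidden-layer network, synthetic data and illustrative hyper-parameters rather than the authors' exact algorithm: the squared-error objective is replaced by a hinge loss on the output score.

```python
# Sketch of training a one-hidden-layer network with a margin-based (hinge) loss
# instead of the squared error. Architecture, data and hyper-parameters are
# illustrative assumptions; this is not the authors' exact algorithm.
import numpy as np

rng = np.random.default_rng(8)
n, d, h = 400, 2, 16
X = rng.standard_normal((n, d))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)    # XOR-like labels in {-1, +1}

W1 = 0.5 * rng.standard_normal((d, h)); b1 = np.zeros(h)
w2 = 0.5 * rng.standard_normal(h);      b2 = 0.0
lr, lam = 0.1, 1e-4

for epoch in range(2000):
    z = np.tanh(X @ W1 + b1)                      # hidden activations
    s = z @ w2 + b2                               # real-valued scores
    active = (y * s < 1).astype(float)            # samples violating the margin
    ds = -(active * y) / n                        # d(hinge)/d(score)

    grad_w2 = z.T @ ds + lam * w2
    grad_b2 = ds.sum()
    dz = np.outer(ds, w2) * (1 - z ** 2)          # back-propagate through tanh
    grad_W1 = X.T @ dz + lam * W1
    grad_b1 = dz.sum(axis=0)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    w2 -= lr * grad_w2; b2 -= lr * grad_b2

print("training accuracy:", (np.sign(np.tanh(X @ W1 + b1) @ w2 + b2) == y).mean())
```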

Keywords: Max-Margin Principle, Feed-Forward Neural Network, Classifier.

309 Frame Texture Classification Method (FTCM) Applied on Mammograms for Detection of Abnormalities

Authors: Kjersti Engan, Karl Skretting, Jostein Herredsvela, Thor Ole Gulsrud

Abstract:

Texture classification is an important image processing task with a broad range of applications. Many different techniques for texture classification have been explored. Using sparse approximation as a feature extraction method for texture classification is a relatively new approach, and Skretting et al. recently presented the Frame Texture Classification Method (FTCM), showing very good results on classical texture images. As an extension of that work, the FTCM is here tested on a real-world application: the detection of abnormalities in mammograms. Some extensions to the original FTCM that are useful in some applications are implemented: two different smoothing techniques and a vector augmentation technique. Both the detection of microcalcifications (as a primary detection technique and as the last stage of a detection scheme) and of soft tissue lesions in mammograms are explored. All the results are interesting, and especially the results using FTCM on regions of interest as the last stage in a detection scheme for microcalcifications are promising.

Keywords: detection, mammogram, texture classification, dictionary learning, FTCM

308 Rigid Registration of Reduced Dimension Images using 1D Binary Projections

Authors: Panos D. Kotsas, Tony Dodd

Abstract:

The purpose of this work is to present a method for rigid registration of medical images using 1D binary projections when a part of one of the two images is missing. We use 1D binary projections and adjust the projection limits according to the reduced image in order to perform accurate registration. We use the variance of the weighted ratio as a registration function, which we have shown is able to register 2D and 3D images more accurately and robustly than mutual information methods. The function is computed explicitly for n = 5 Chebyshev points in a [-9, +9] interval and is approximated using Chebyshev polynomials for all other points. The images used are MR scans of the head. We find that the method is able to register the two images with an average accuracy of 0.3 degrees for rotations and 0.2 pixels for translations for a y dimension of 156 with an initial dimension of 256. For a y dimension of 128/256, the accuracy decreases to 0.7 degrees for rotations and 0.6 pixels for translations.

Keywords: binary projections, image registration, reduced-dimension images.

307 Analysis of the Black Sea Gas Hydrates

Authors: Sukru Merey, Caglar Sinayuc

Abstract:

Gas hydrate deposits, which are found in deep ocean sediments and in permafrost regions, are considered a fossil fuel reserve for the future. The Black Sea is also considered rich in gas hydrates. It abundantly contains gas hydrates with methane (CH4, ~80 to 99.9%) as the main gas. In this study, using literature, seismic and other data for the Black Sea, such as salinity, porosity of the sediments, common gas type, temperature distribution and pressure gradient, the optimum gas production method for the Black Sea gas hydrates was selected, namely the depressurization method. Numerical simulations were run to analyze gas production by depressurization from gas hydrate deposited in turbidites in the Black Sea.

Keywords: Black Sea hydrates, depressurization, turbidites, HydrateResSim.

306 Characterization of Solutions of Nonsmooth Variational Problems and Duality

Authors: Juan Zhang, Changzhao Li

Abstract:

In this paper, we introduce new classes of nonsmooth pseudo-invex and nonsmooth quasi-invex functions for nonsmooth variational problems. Using these concepts, a number of necessary and sufficient conditions are established for a nonsmooth variational problem in which Clarke's generalized gradient is used. Weak, strong and converse duality results are also established.
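
For readers unfamiliar with the objects involved, one common form of the definitions (generic notation, not necessarily the paper's exact classes) is:

```latex
% One common form of the definitions (generic notation; not necessarily the paper's).
% Clarke's generalized directional derivative and generalized gradient of a locally
% Lipschitz function f at x:
\[
  f^{\circ}(x; d) = \limsup_{\substack{y \to x \\ t \downarrow 0}}
    \frac{f(y + t d) - f(y)}{t},
  \qquad
  \partial^{\circ} f(x) = \{\, \xi \in \mathbb{R}^n : f^{\circ}(x; d) \ge \langle \xi, d \rangle
    \ \text{for all } d \in \mathbb{R}^n \,\}.
\]
% f is called nonsmooth pseudo-invex with respect to a kernel \eta if, for all x, u,
\[
  f(x) < f(u) \;\Longrightarrow\; \langle \xi, \eta(x, u) \rangle < 0
  \quad \text{for all } \xi \in \partial^{\circ} f(u),
\]
% and nonsmooth quasi-invex if
\[
  f(x) \le f(u) \;\Longrightarrow\; \langle \xi, \eta(x, u) \rangle \le 0
  \quad \text{for all } \xi \in \partial^{\circ} f(u).
\]
```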

Keywords: Variational problem, Nonsmooth pseudo-invex, Nonsmooth quasi-invex, Critical point, Duality

305 On Reversal and Transposition Medians

Authors: Martin Bader

Abstract:

During the last years, the genomes of more and more species have been sequenced, providing data for phylogenetic reconstruction based on genome rearrangement measures. A main task in all phylogenetic reconstruction algorithms is to solve the median of three problem. Although this problem is NP-hard even for the simplest distance measures, there are exact algorithms for the breakpoint median and the reversal median that are fast enough for practical use. In this paper, this approach is extended to the transposition median as well as to the weighted reversal and transposition median. Although there is no exact polynomial algorithm known even for the pairwise distances, we will show that it is in most cases possible to solve these problems exactly within reasonable time by using a branch and bound algorithm.

Keywords: Comparative genomics, genome rearrangements, median, reversals, transpositions.

304 Reconsidering the Palaeo-Environmental Reconstruction of the Wet Zone of Sri Lanka: A Zooarchaeological Perspective

Authors: Kalangi Rodrigo, Kelum Manamendra-Arachchi

Abstract:

Bones, teeth, and shells have been acknowledged over the last two centuries as evidence of chronology, palaeo-environment, and human activity. Faunal traces are valid evidence of past situations because they have properties that have not changed over long periods. Sri Lanka is an island with a diverse range of prehistoric occupation across its ecological zones. Defining the palaeoecology of past societies is an archaeological approach developed in the 1960s. It is mainly concerned with the reconstruction, from the available geological and biological evidence, of past biota, populations, communities, landscapes, environments, and ecosystems. This early and persistent human fossil, technical, and cultural florescence, as well as a collection of well-preserved tropical-forest rock shelters with associated 'on-site' palaeoenvironmental records, makes Sri Lanka a central and unusual case study for determining the extent and strength of early human tropical forest encounters. Excavations carried out in prehistoric caves in the low-country wet zone have shown that over the last 50,000 years the temperature in the lowland rainforests has not changed by more than 5 °C. Based on Semnopithecus priam (grey langur) remains unearthed from wet zone prehistoric caves, periods of momentous climate change during the Last Glacial Maximum (LGM) and at the Terminal Pleistocene/Early Holocene boundary have been argued for, with a recognizable preference for semi-open 'intermediate' rainforest or forest edges. The continuous occupation of the genera Acavus and Oligospira, along with the uninterrupted and pervasive presence of Canarium sp. ('kekuna' nut), also indicates that temperatures in the lowland rain forests have not changed by more than 5 °C over the last 50,000 years. Site catchment or territorial analysis is no longer defensible because of time-distance factors, and optimal foraging theory fails because prehistoric people were aware of decreases in the cost-benefit ratio, located their sites accordingly, and generally played out a settlement strategy that minimized the ratio of energy expended to energy produced.

Keywords: Palaeo-environment, palaeo-ecology, palaeo-climate, prehistory, zooarchaeology.

303 Dispersed Error Control based on Error Filter Design for Improving Halftone Image Quality

Authors: Sang-Chul Kim, Sung-Il Chien

Abstract:

The error diffusion method generates worm artifacts and weakens the edges of the halftone image when a continuous grey-scale image is reproduced as a binary image. First, to enhance the edges, we propose an edge-enhancing filter that considers the quantization error information and the gradient of the neighboring pixels. Furthermore, to remove the worm artifacts that often appear in a halftone image, we adaptively add random noise to the weights of the error filter.
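
A minimal sketch of the artifact-suppression idea, assuming a Floyd-Steinberg base filter and an illustrative noise amplitude; the paper's edge-enhancing filter is not reproduced here.

```python
# Sketch of error diffusion with adaptively perturbed filter weights to break up
# worm artifacts. The base filter is Floyd-Steinberg and the noise amplitude is an
# illustrative assumption.
import numpy as np

rng = np.random.default_rng(9)

def halftone(gray, noise=0.15):
    """Binarize a [0, 1] grey image by error diffusion with jittered weights."""
    img = gray.astype(float).copy()
    out = np.zeros_like(img)
    h, w = img.shape
    # Floyd-Steinberg neighbour offsets and base weights.
    taps = [((0, 1), 7 / 16), ((1, -1), 3 / 16), ((1, 0), 5 / 16), ((1, 1), 1 / 16)]
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            # Perturb the weights with zero-mean noise, then renormalize so the
            # quantization error is still fully diffused.
            wts = np.array([wt for _, wt in taps]) * (1 + noise * rng.uniform(-1, 1, 4))
            wts /= wts.sum()
            for ((dy, dx), _), wt in zip(taps, wts):
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    img[yy, xx] += err * wt
    return out

ramp = np.tile(np.linspace(0, 1, 128), (64, 1))   # smooth grey ramp test image
print(halftone(ramp).mean())                      # average intensity is roughly preserved
```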

Keywords: Artifact suppression, Edge enhancement, Error diffusion method, Halftone image

302 Land Use Change Detection Using Remote Sensing and GIS

Authors: Naser Ahmadi Sani, Karim Solaimani, Lida Razaghnia, Jalal Zandi

Abstract:

In recent decades, rapid and incorrect changes in land use have been associated with consequences such as natural resource degradation and environmental pollution. Detecting changes in land use is one of the tools for natural resource management and for assessing changes in ecosystems. The target of this research is to study the land-use changes in the Haraz basin, with an area of 677,000 hectares, over a 15-year period (1996 to 2011) using LANDSAT data. Therefore, the quality of the images was first evaluated. Various enhancement methods for creating synthetic bands were used in the analysis. Separate training sites were selected for each image. The images of each period were then classified into 9 classes using the supervised classification method and the maximum likelihood algorithm. Finally, the changes were extracted in a GIS environment. The results showed that these changes are an alarming sign for the future status of the Haraz basin, since 27% of the area has changed, involving the conversion of rangelands to bare land and dry farming, and of dense forest to sparse forest, horticulture, farmland and residential areas.

Keywords: HARAZ Basin, Change Detection, Land-use, Satellite Data.

301 Level Set and Morphological Operation Techniques in Application of Dental Image Segmentation

Authors: Abdolvahab Ehsani Rad, Mohd Shafry Mohd Rahim, Alireza Norouzi

Abstract:

Medical image analysis is one of the great applications of computer image processing. There are several steps in analyzing medical images, of which segmentation is one of the most challenging and important. In this paper, a segmentation method is proposed for dental radiograph images. A thresholding method is applied to simplify the images, and the morphological binary opening technique is performed to eliminate unnecessary regions in the images. Furthermore, horizontal and vertical integral projection techniques are used to extract each individual tooth from the radiograph images. Segmentation is then performed by applying the level set method to each extracted image. The experimental results, with 90% accuracy, demonstrate that the proposed method achieves high accuracy and promising results.
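
The integral-projection step can be sketched as follows, assuming a synthetic binary image and an illustrative valley threshold:

```python
# Sketch of the horizontal/vertical integral-projection step used to split a
# binarized dental radiograph into individual teeth. The synthetic image and the
# valley threshold are illustrative assumptions.
import numpy as np

# Synthetic binary "radiograph": four bright tooth-like blocks separated by gaps.
binary = np.zeros((60, 200), dtype=int)
for start in (10, 60, 110, 160):
    binary[10:50, start:start + 30] = 1

# Vertical integral projection: sum of foreground pixels per column.
v_proj = binary.sum(axis=0)

# Valleys (near-zero columns) mark the boundaries between neighbouring teeth.
gap_cols = v_proj < 0.1 * v_proj.max()
boundaries = np.flatnonzero(np.diff(gap_cols.astype(int)) != 0) + 1
print("tooth boundaries at columns:", boundaries)

# A horizontal projection on each extracted tooth strip would separate upper and
# lower rows of teeth in the same way (sum along axis=1 instead of axis=0).
```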

Keywords: Integral production, level set method, morphological operation, segmentation.

300 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing the topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics may be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged in a way that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control ensemble Kalman filter is implemented to assimilate the observation data in order to obtain an accurate estimate of the topography. The main features of this method are, on one hand, the ability to solve for different complex geometries with no need for any rearrangement of the original model to rewrite it in an explicit form, and on the other hand, its strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed techniques.
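
The core of the second stage, an ensemble Kalman analysis step, can be sketched as follows; the linear forward model below is a trivial stand-in for a shallow water solver, and all numbers are assumptions.

```python
# Sketch of an ensemble Kalman filter (EnKF) analysis step for bed-topography
# estimation. The "forward model" is a trivial stand-in for a shallow water solver,
# used only to illustrate the update; all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(11)
n_cells, n_obs, n_ens = 50, 10, 40

x = np.linspace(0, 1, n_cells)
bed_true = 0.1 * np.sin(2 * np.pi * x)            # "true" topography to recover

# Observation operator: sample the free surface at a few cell locations.
H = np.zeros((n_obs, n_cells))
H[np.arange(n_obs), np.linspace(0, n_cells - 1, n_obs, dtype=int)] = 1.0

def forward(bed):
    """Stand-in forward model mapping topography to free-surface observations."""
    return H @ (1.0 - bed)                         # flat free surface minus the bed

obs_err = 0.005
y_obs = forward(bed_true) + obs_err * rng.standard_normal(n_obs)

# Initial ensemble of bed profiles (random perturbations around zero).
ens = 0.05 * rng.standard_normal((n_ens, n_cells))

Y = np.array([forward(b) for b in ens])            # predicted observations per member
A = ens - ens.mean(axis=0)                         # state anomalies
Yp = Y - Y.mean(axis=0)                            # predicted-observation anomalies
Pxy = A.T @ Yp / (n_ens - 1)                       # state-observation covariance
Pyy = Yp.T @ Yp / (n_ens - 1) + obs_err ** 2 * np.eye(n_obs)
K = Pxy @ np.linalg.inv(Pyy)                       # Kalman gain estimated from the ensemble

for i in range(n_ens):                             # stochastic EnKF update with perturbed obs
    d = y_obs + obs_err * rng.standard_normal(n_obs) - Y[i]
    ens[i] = ens[i] + K @ d

print("RMSE of ensemble-mean bed:", np.sqrt(np.mean((ens.mean(axis=0) - bed_true) ** 2)))
```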

Keywords: Optimal control, ensemble Kalman Filter, topography reconstruction, data assimilation, shallow water equations.

299 An Accurate Method for Phylogeny Tree Reconstruction Based on a Modified Wild Dog Algorithm

Authors: Essam Al Daoud

Abstract:

This study solves the phylogeny problem using a modified wild dog pack optimization. The least squares error is used as the cost function to be minimized. Therefore, in each iteration, new distance matrices based on the constructed trees are calculated and used to select the alpha dog. To test the suggested algorithm, ten homologous genes were selected and collected from the National Center for Biotechnology Information (NCBI) databanks (i.e., 16S, 18S, 28S, Cox 1, ITS1, ITS2, ETS, ATPB, Hsp90, and STN). The data are divided into three categories: 50 taxa, 100 taxa and 500 taxa. The empirical results show that the proposed algorithm is more reliable and accurate than other implemented methods.

Keywords: Least squares, neighbor joining, phylogenetic tree, wild dog pack.

298 Feature Selection with Kohonen Self Organizing Classification Algorithm

Authors: Francesco Maiorana

Abstract:

In this paper, a one-dimensional Self-Organizing Map (SOM) algorithm to perform feature selection is presented. The algorithm is based on a first classification of the input dataset in a similarity space. From this classification, a set of positive and negative features is computed for each class. This set of features is selected as the result of the procedure. The procedure is evaluated on an in-house dataset from a Knowledge Discovery from Text (KDT) application and on a set of publicly available datasets used in international feature selection competitions. These datasets come from KDT applications, drug discovery, and other applications. The knowledge of the correct classification available for the training and validation datasets is used to optimize the parameters for positive and negative feature extraction. The process becomes feasible for large and sparse datasets, such as those obtained in KDT applications, by using both compression techniques to store the similarity matrix and speed-up techniques for the Kohonen algorithm that take advantage of the sparsity of the input matrix. These improvements, together with the use of a computing grid, make it feasible to apply the methodology to massive datasets.
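
A minimal sketch of the one-dimensional Kohonen map itself, assuming illustrative map size, learning schedule and data; the feature-scoring stage of the paper is not reproduced.

```python
# Sketch of a one-dimensional Kohonen self-organizing map (SOM). Map size, learning
# schedule, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(12)

# Toy dataset: three clusters in 5-D feature space.
X = np.vstack([rng.standard_normal((100, 5)) + c for c in (-4, 0, 4)])

n_units = 10                                    # units along the 1-D map
W = rng.standard_normal((n_units, X.shape[1]))  # codebook vectors

n_epochs = 30
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs)                # decaying learning rate
    sigma = max(1.0, 3.0 * (1 - epoch / n_epochs))   # decaying neighbourhood width
    for x in rng.permutation(X):
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
        dist = np.abs(np.arange(n_units) - bmu)      # distance along the 1-D map
        h = np.exp(-(dist ** 2) / (2 * sigma ** 2))  # neighbourhood function
        W += lr * h[:, None] * (x - W)               # pull units toward the sample

# Each input is assigned to the map unit (class) it matches best.
assign = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])
print("samples per map unit:", np.bincount(assign, minlength=n_units))
```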

Keywords: Clustering algorithm, Data mining, Feature selection, Grid, Kohonen Self Organizing Map.

297 Recognition of Tifinagh Characters with Missing Parts Using Neural Network

Authors: El Mahdi Barrah, Said Safi, Abdessamad Malaoui

Abstract:

In this paper, we present an algorithm for the reconstruction of Tifinagh characters from incomplete 2D scans. The algorithm is based on the correlation between the lost block and its neighbors. The proposed system contains three main parts: pre-processing, feature extraction and recognition. In the first step, we construct a database of Tifinagh characters. In the second step, we apply a shape analysis algorithm. In the classification part, we use a neural network. The simulation results demonstrate that the proposed method gives good results.

Keywords: Tifinagh character recognition, Neural networks, Local cost computation.

296 Sparsity-Based Unsupervised Unmixing of Hyperspectral Imaging Data Using Basis Pursuit

Authors: Ahmed Elrewainy

Abstract:

Mixing in hyperspectral imaging occurs due to the low spatial resolution of the cameras used. The pure materials ("endmembers") present in the scene share the spectra of the pixels in different amounts called "abundances". Unmixing of the data cube is an important task for identifying the endmembers present in the cube for the analysis of these images. Unsupervised unmixing is performed with no prior information about the given data cube. Sparsity is one of the recent approaches used in source recovery and unmixing techniques. The l1-norm optimization problem "basis pursuit" can be used as a sparsity-based approach to solve this unmixing problem, where the endmembers are assumed to be sparse in an appropriate domain known as the dictionary. The optimization problem is solved using the proximal method of iterative thresholding. The l1-norm basis pursuit optimization problem was used as a sparsity-based unmixing technique to unmix real and synthetic hyperspectral data cubes.
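
A minimal sketch of the iterative soft-thresholding solver for the l1-penalized ("basis pursuit denoising") form of the problem, assuming an illustrative random dictionary, sparsity level and penalty weight:

```python
# Sketch of proximal iterative soft-thresholding (ISTA) for an l1-penalized
# recovery problem. Dictionary size, sparsity level, and the penalty weight are
# illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(13)
m, n, k = 60, 200, 8                      # measurements, dictionary atoms, sparsity

D = rng.standard_normal((m, n)) / np.sqrt(m)   # random dictionary
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], k) * rng.uniform(0.5, 1.5, k)
y = D @ x_true + 0.001 * rng.standard_normal(m)

lam = 0.01                                 # l1 penalty weight (assumption)
L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
x = np.zeros(n)

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for it in range(2000):
    grad = D.T @ (D @ x - y)               # gradient of the data-fidelity term
    x = soft(x - grad / L, lam / L)        # proximal (shrinkage) step

print("support recovered:",
      set(np.flatnonzero(np.abs(x) > 0.05)) == set(support))
```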

Keywords: Basis pursuit, blind source separation, hyperspectral imaging, spectral unmixing, wavelets.
