Search results for: weighted dissimilarity measure
1180 Pattern Recognition Techniques Applied to Biomedical Patterns
Authors: Giovanni Luca Masala
Abstract:
Pattern recognition is the research area of Artificial Intelligence that studies the operation and design of systems that recognize patterns in data. Important application areas are image analysis, character recognition, fingerprint classification, speech analysis, DNA sequence identification, man and machine diagnostics, person identification and industrial inspection. The interest in improving classification systems for data analysis is independent of the application context: many studies require recognizing and distinguishing groups of various objects, which calls for valid instruments capable of performing this task. The objective of this article is to show several Artificial Intelligence methodologies for data classification applied to biomedical patterns. In particular, this work deals with the realization of a Computer-Aided Detection system (CADe) that is able to assist the radiologist in identifying types of mammary tumor lesions. As an additional biomedical application of the classification systems, we present a study conducted on blood samples which shows how these methods may help to distinguish between carriers of Thalassemia (or Mediterranean Anaemia) and healthy subjects.
Keywords: Computer Aided Detection, mammary tumor, pattern recognition, dissimilarity
1179 Scatterer Density in Edge and Coherence Enhancing Nonlinear Anisotropic Diffusion for Medical Ultrasound Speckle Reduction
Authors: Ahmed Badawi, J. Michael Johnson, Mohamed Mahfouz
Abstract:
This paper proposes new enhancements to nonlinear anisotropic diffusion methods that greatly reduce speckle while preserving image features in medical ultrasound images. By incorporating a local physical characteristic of the image, in this case scatterer density, in addition to the gradient, into existing tensor-based image diffusion methods, we were able to greatly improve the performance of the existing filtering methods, namely edge enhancing (EE) and coherence enhancing (CE) diffusion. The new enhancement methods were tested using various ultrasound images, including phantom and some clinical images, to determine the amount of speckle reduction, edge enhancement, and coherence enhancement. Scatterer density weighted nonlinear anisotropic diffusion (SDWNAD) for ultrasound images consistently outperformed its traditional tensor-based counterparts that use the gradient alone to weight the diffusivity function. SDWNAD is shown to greatly reduce speckle noise while preserving image features such as edges, orientation coherence, and scatterer density. SDWNAD's superior performance over nonlinear coherent diffusion (NCD), speckle reducing anisotropic diffusion (SRAD), adaptive weighted median filter (AWMF), wavelet shrinkage (WS), and wavelet shrinkage with contrast enhancement (WSCE) makes it an ideal preprocessing step for automatic segmentation in ultrasound imaging.
Keywords: Nonlinear anisotropic diffusion, ultrasound imaging, speckle reduction, scatterer density estimation, edge based enhancement, coherence enhancement.
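To make the weighting idea concrete, here is a minimal sketch of a scalar Perona-Malik diffusion loop whose diffusivity is attenuated by a hypothetical scatterer-density map; it illustrates density-weighted diffusion only in spirit, not the authors' tensor-based SDWNAD.

```python
import numpy as np

def density_weighted_diffusion(img, density, n_iter=30, kappa=0.1, dt=0.2):
    """Scalar anisotropic diffusion (Perona-Malik) with the diffusivity
    damped where the assumed scatterer-density map is high, so speckle is
    smoothed in homogeneous regions while dense structures are preserved.
    `img` is expected normalized to [0, 1]; `density` has the same shape."""
    u = img.astype(float).copy()
    # gradient-driven diffusivity, attenuated by the local density
    g = lambda d: np.exp(-(d / kappa) ** 2) * np.exp(-density)
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u  # differences to the 4 neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```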
1178 Using Suffix Tree Document Representation in Hierarchical Agglomerative Clustering
Authors: Daniel I. Morariu, Radu G. Cretulescu, Lucian N. Vintan
Abstract:
In text categorization, the most widely used method of document representation is based on word frequency vectors, called the Vector Space Model (VSM). This representation relies only on the words in a document and thus loses any "word context" information found in it. In this article we compare the classical method of document representation with the Suffix Tree Document Model (STDM), which represents documents in suffix tree form. For the STDM we propose a new approach to document representation and a new formula for computing the similarity between two documents: we build the suffix tree only for two documents at a time. This approach is faster, consumes less memory, and uses the entire document representation without requiring methods for disposing of nodes. We also propose a formula for computing the similarity between documents, which substantially improves clustering quality. The representation method was validated using HAC - Hierarchical Agglomerative Clustering. In this context we also examine the influence of stemming in the document preprocessing step and highlight the difference between similarity and dissimilarity measures in finding "closer" documents.
Keywords: Text Clustering, Suffix tree document representation, Hierarchical Agglomerative Clustering
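The abstract does not give the similarity formula itself; the sketch below is a stand-in that scores two documents by their shared word n-grams, the phrases a pairwise suffix tree would expose as common nodes. The function names and the length weighting are illustrative assumptions.

```python
from collections import Counter

def phrases(tokens, max_len=4):
    """All word n-grams up to max_len, approximating the phrases a suffix
    tree built over the two documents would share as internal nodes."""
    return Counter(tuple(tokens[i:i + n])
                   for n in range(1, max_len + 1)
                   for i in range(len(tokens) - n + 1))

def pairwise_similarity(doc_a, doc_b, max_len=4):
    pa, pb = phrases(doc_a.split(), max_len), phrases(doc_b.split(), max_len)
    # weight each shared phrase by its length so longer matches count more
    shared = sum(min(pa[p], pb[p]) * len(p) for p in pa if p in pb)
    total = (sum(c * len(p) for p, c in pa.items())
             + sum(c * len(p) for p, c in pb.items()))
    return 2.0 * shared / total if total else 0.0
```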
1177 Accuracy of Divergence Measures for Detection of Abrupt Changes
Authors: P. Bergl
Abstract:
Numerous divergence measures (spectral distance, cepstral distance, difference of the cepstral coefficients, Kullback-Leibler divergence, distance given by the General Likelihood Ratio, distance defined by the Recursive Bayesian Changepoint Detector, and the Mahalanobis measure) are compared in this study. The measures are used for detection of abrupt spectral changes in synthetic AR signals via the sliding window algorithm. Two experiments are performed; the first focuses on detection of a single boundary while the second concentrates on detection of two boundaries. Accuracy of detection is judged for each method, and the measures are compared according to the results of both experiments.
Keywords: Abrupt changes detection, autoregressive model, divergence measure.
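As one concrete instance of the compared setup, a minimal sliding-window detector using the log-spectral distance between two adjacent windows (window length, taper, and FFT size are arbitrary choices, not the paper's):

```python
import numpy as np

def log_spectral_distance(x, y, nfft=256):
    """RMS difference of the two windows' log power spectra, in dB."""
    px = np.abs(np.fft.rfft(x * np.hanning(len(x)), nfft)) ** 2 + 1e-12
    py = np.abs(np.fft.rfft(y * np.hanning(len(y)), nfft)) ** 2 + 1e-12
    return np.sqrt(np.mean((10.0 * np.log10(px / py)) ** 2))

def detect_change(signal, win=128):
    """Slide two adjacent windows over the signal and return the lag where
    their spectral divergence peaks -- the estimated boundary position."""
    scores = [log_spectral_distance(signal[t - win:t], signal[t:t + win])
              for t in range(win, len(signal) - win)]
    return win + int(np.argmax(scores)), scores
```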
1176 Similarity Measures and Weighted Fuzzy C-Mean Clustering Algorithm
Authors: Bainian Li, Kongsheng Zhang, Jian Xu
Abstract:
In this paper we study the fuzzy c-means clustering algorithm combined with the principal components method. A demonstrative analysis indicates that the new clustering method performs better than several existing clustering algorithms. We also consider the validity of the clustering method.
Keywords: FCM algorithm, Principal Components Analysis, Cluster validity
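For orientation, a minimal sketch of the plain fuzzy c-means loop the paper builds on; the PCA preprocessing would simply replace the input with its leading principal components, and the paper's weighting scheme is not reproduced here.

```python
import numpy as np

def fuzzy_c_means(x, c, m=2.0, n_iter=100, seed=0):
    """Baseline FCM: alternate weighted-centroid and membership updates.
    `x` is an (n_samples, n_features) array; returns centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik proportional to d_ik^(-2/(m-1)), normalized over clusters
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```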
1175 Analysis of Six Sigma in the Aerospace Industry
Authors: Masimuddin Mohd Khaled
Abstract:
This paper contributes to the discussion of Six Sigma in the aerospace industry. Its main aim is a literature review of Six Sigma with emphasis on aerospace. The stages of a Six Sigma implementation are studied, along with how the improvement cycle 'Define, Measure, Analyze, Improve and Control' (DMAIC) and the design process 'Define, Measure, Analyze, Design and Verify' (DMADV) are used. We also study how implementing Six Sigma has had a positive effect on an aerospace company.
Keywords: Six Sigma, DMAIC, DMADV, aerospace.
1174 Wear Measuring and Wear Modelling Based on Archard, ASTM, and Neural Network Models
Authors: A. Shebani, C. Pislaru
Abstract:
Wear measurement and wear modelling are fundamental issues in industry, closely tied to economy and safety, so there is a need to study wear measurement and wear estimation. The pin-on-disc test is the most common test used to study wear behaviour. In this paper, a pin-on-disc rig (AEROTECH UNIDEX 11) is used to investigate the effects of normal load and material hardness on wear under dry sliding conditions. In the rig, two specimens were used: a steel pin with a tip, positioned perpendicular to a disc made of aluminium. The pin wear and disc wear were measured using the following instruments: a Talysurf profilometer, a digital microscope, and an Alicona instrument. The Talysurf profilometer was used to measure the depth of the pin/disc wear scar, the digital microscope was used to measure the diameter and width of the wear scar, and the Alicona was used to measure pin wear and disc wear. The Archard model, the American Society for Testing and Materials (ASTM) model, and a neural network model were then used for pin/disc wear modelling, with simulations implemented in Matlab. This paper focuses on how the Alicona can be used for wear measurements and how a neural network can be used for wear estimation.
Keywords: Wear measuring, Wear modelling, Neural Network, Alicona.
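The Archard model named above is a one-line relation, V = K·W·L/H, with V the wear volume, K a dimensionless wear coefficient, W the normal load, L the sliding distance, and H the hardness of the softer surface. A worked example with made-up values:

```python
def archard_wear_volume(k, load_n, sliding_distance_m, hardness_pa):
    """Archard's law: wear volume V = K * W * L / H (result in m^3)."""
    return k * load_n * sliding_distance_m / hardness_pa

# illustrative numbers only: steel pin on an aluminium disc
v = archard_wear_volume(k=1e-4, load_n=20.0,
                        sliding_distance_m=100.0, hardness_pa=2.45e8)
print(f"predicted wear volume: {v * 1e9:.2f} mm^3")  # ~0.82 mm^3
```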
1173 Theoretical Exploration for the Impact of Accounting for Special Methods in Connectivity-Based Cohesion Measurement
Authors: Jehad Al Dallal
Abstract:
Class cohesion is a key object-oriented software quality attribute that is used to evaluate the degree of relatedness of class attributes and methods. Researchers have proposed several class cohesion measures. However, the effect of considering the special methods (i.e., constructors, destructors, and access and delegation methods) in cohesion calculation is not thoroughly theoretically studied for most of them. In this paper, we address this issue for three popular connectivity-based class cohesion measures. For each of the considered measures we theoretically study the impact of including or excluding special methods on the values that are obtained by applying the measure. This study is based on analyzing the definitions and formulas that are proposed for the measures. The results show that including/excluding special methods has a considerable effect on the obtained cohesion values and that this effect varies from one measure to another. For each of the three connectivity-based measures, the proposed theoretical study recommended excluding the special methods in cohesion measurement.
Keywords: Object-oriented class, software quality, class cohesion measure, class cohesion, special methods.
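To make the include/exclude question tangible, here is a sketch of one simple connectivity-based cohesion measure (tight class cohesion: the fraction of method pairs sharing an attribute) with a switch for dropping special methods; the measure is chosen for illustration and is not necessarily one of the three the paper analyzes.

```python
from itertools import combinations

def tight_class_cohesion(method_attrs, special=frozenset()):
    """`method_attrs` maps each method name to the set of attributes it
    uses; methods listed in `special` (constructors, accessors, ...) are
    excluded before counting directly connected method pairs."""
    methods = [m for m in method_attrs if m not in special]
    pairs = list(combinations(methods, 2))
    if not pairs:
        return 0.0
    connected = sum(bool(method_attrs[a] & method_attrs[b]) for a, b in pairs)
    return connected / len(pairs)

cls = {"__init__": {"x", "y"}, "area": {"x", "y"}, "get_x": {"x"}}
print(tight_class_cohesion(cls))                                # 1.0
print(tight_class_cohesion(cls, special={"__init__", "get_x"}))  # 0.0
```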
1172 Unsupervised Image Segmentation Based on Fuzzy Connectedness with Scale Space Theory
Authors: Yuanjie Zheng, Jie Yang, Yue Zhou
Abstract:
In this paper, we propose an approach to unsupervised segmentation with fuzzy connectedness. Valid seeds are first specified by an unsupervised method based on scale space theory. A region is then extracted for each seed with a relative object extraction method of fuzzy connectedness. Afterwards, regions are merged according to the values of an introduced measure between them. Some theorems and propositions are also provided to show that the measure is a reasonable basis for merging. Experimental results of our method on a synthetic image, a color image, and a large number of MR images are reported.
Keywords: Image segmentation, unsupervised image segmentation, fuzzy connectedness, scale space.
1171 Undecimated Wavelet Transform Based Contrast Enhancement
Authors: Numan Unaldi, Samil Temel, Süleyman Demirci
Abstract:
A novel undecimated wavelet transform based contrast enhancement algorithm is proposed for both grayscale and color images. Contrast enhancement is realized by tuning the magnitude of approximation coefficients at each level with respect to the approximation coefficients of one higher level during the inverse transform phase, in a center/surround enhancement sense. The performance of the proposed algorithm is evaluated using a statistical visual contrast measure (VCM). Experimental results on the proposed algorithm show improvement in terms of the VCM.
Keywords: Image enhancement, local contrast enhancement, visual contrast measure.
1170 Measurements of MRI R2* Relaxation Rate in Liver and Muscle: Animal Model
Authors: Chiung-Yun Chang, Po-Chou Chen, Jiun-Shiang Tzeng, Ka-Wai Mac, Chia-Chi Hsiao, Jo-Chi Jao
Abstract:
This study aimed to measure effective transverse relaxation rates (R2*) in the liver and muscle of normal New Zealand White (NZW) rabbits. The R2* relaxation rate has been widely used to assess iron overload in various hepatic diseases by quantifying the iron content of the liver. R2* is defined as the reciprocal of the T2* relaxation time and mainly depends on the constituents of the tissue, so different tissues have different R2* relaxation rates, and the signal intensity decay in magnetic resonance imaging (MRI) may be characterized by them. In this study, a 1.5T GE Signa HDxt whole body MR scanner equipped with an 8-channel high resolution knee coil was used to observe R2* values in NZW rabbit liver and muscle. Eight healthy NZW rabbits weighing 2-2.5 kg were recruited. After anesthesia using a mixture of Zoletil 50 and Rompun 2%, the abdomen of each rabbit was landmarked at the center of the knee coil to perform a 3-plane localizer scan using a fast spoiled gradient echo (FSPGR) pulse sequence. Afterwards, multi-planar fast gradient echo (MFGR) scans were performed with 8 different echo times (TEs) to acquire images for R2* measurements. Regions of interest (ROIs) in liver and muscle were measured using an Advantage workstation. Finally, R2* was obtained by a linear regression of ln(SI) on TE. The results showed that the longer the echo time, the smaller the signal intensity. The R2* values of liver and muscle were 44.8 ± 10.9 s^-1 and 37.4 ± 9.5 s^-1, respectively, implying that the iron concentration of the liver is higher than that of muscle. In conclusion, the more iron a tissue contains, the higher its R2*. The correlations between R2* and iron content in NZW rabbits might be valuable for further exploration.
Keywords: Liver, MRI, multi-planar fast gradient echo, muscle, R2* relaxation rate.
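The fitting step is an ordinary linear regression: since SI = S0·exp(−TE·R2*), ln(SI) is linear in TE with slope −R2*. A sketch with hypothetical ROI readings:

```python
import numpy as np

# hypothetical echo times (ms) and mean ROI signal intensities
te_ms = np.array([2.1, 4.2, 6.3, 8.4, 10.5, 12.6, 14.7, 16.8])
si = np.array([410.0, 372.0, 335.0, 305.0, 276.0, 250.0, 228.0, 205.0])

# ln(SI) = ln(S0) - TE * R2*, so R2* is minus the fitted slope
slope, intercept = np.polyfit(te_ms / 1000.0, np.log(si), 1)
print(f"R2* = {-slope:.1f} s^-1")  # ~47 s^-1 for these made-up values
```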
1169 Measurement of Intellectual Capital in an Algerian Company
Authors: S. Brahmi, S. Aitouche, M. D. Mouss
Abstract:
Every modern company should measure the value of its intellectual capital and report it to complement the traditional annual balance sheet. The purpose of this work is to measure the intellectual capital in an Algerian company (or production system) using the Weightless Wealth Tool Kit (WWTK). The results of the measurement of intellectual capital are supplemented by traditional financial ratios. The measurement was applied to the National Company of Wells Services (ENSP) in Hassi Messaoud city, in the south of Algeria. We calculated the intellectual capital (intangible resources) of the ENSP to help the organization better capitalize on the potential of its workers and their know-how. The intangible value of the ENSP is evaluated at 16,936,173,345 DA in 2015.
Keywords: Financial valuation, intangible capital, intellectual capital, intellectual capital measurement.
1168 A Review on Important Aspects of Information Retrieval
Authors: Yogesh Gupta, Ashish Saini, A.K. Saxena
Abstract:
Information retrieval has become an important field of study and research in computer science due to the explosive growth of information available in the form of full text, hypertext, administrative text, directories, and numeric or bibliographic text. Research is ongoing on various aspects of information retrieval systems to improve their efficiency and reliability. This paper presents a comprehensive study which discusses not only the emergence and evolution of information retrieval but also different information retrieval models and some important aspects such as document representation, similarity measures and query expansion.
Keywords: Information Retrieval, query expansion, similarity measure, vector space model.
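Of the aspects reviewed, the vector space model with cosine similarity is the simplest to state; a minimal sketch (raw term frequencies only, no tf-idf weighting or stemming):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine of the angle between two raw term-frequency vectors (VSM)."""
    va, vb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(cosine_similarity("information retrieval models", "retrieval models survey"))
```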
1167 Efficient Frontier - Comparing Different Volatility Estimators
Authors: Tea Poklepović, Zdravka Aljinović, Mario Matković
Abstract:
Modern Portfolio Theory (MPT), according to Markowitz, states that investors form mean-variance efficient portfolios which maximize their utility. Markowitz proposed the standard deviation as a simple measure of portfolio risk and the lower semi-variance as the only risk measure of interest to rational investors. This paper uses a third volatility estimator, based on intraday data, and compares three efficient frontiers on the Croatian Stock Market. The results show that the range-based volatility estimator outperforms both the mean-variance and lower semi-variance models.
Keywords: Variance, lower semi-variance, range-based volatility.
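The abstract does not name its range-based estimator; the classic choice is Parkinson's high-low estimator, sketched here as one plausible instance (the paper's estimator may differ):

```python
import numpy as np

def parkinson_volatility(high, low):
    """Parkinson (1980) range-based estimator of daily return volatility:
    sigma^2 = mean(ln(H/L)^2) / (4 ln 2), using intraday highs and lows."""
    hl = np.log(np.asarray(high, float) / np.asarray(low, float))
    return np.sqrt(np.mean(hl ** 2) / (4.0 * np.log(2.0)))

daily_sigma = parkinson_volatility([101.2, 99.8, 102.5], [99.5, 98.1, 100.9])
print(daily_sigma * np.sqrt(252))  # crude annualization
```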
1166 Clustering for Detection of Population Groups at Risk from Anticholinergic Medication
Authors: Amirali Shirazibeheshti, Tarik Radwan, Alireza Ettefaghian, Farbod Khanizadeh, George Wilson, Cristina Luca
Abstract:
Anticholinergic medication has been associated with events such as falls, delirium, and cognitive impairment in older patients. To further assess this, anticholinergic burden scores have been developed to quantify risk. A risk model based on clustering was deployed in a healthcare management system to group patients into multiple risk groups according to the anticholinergic burden scores of the medicines prescribed to them, in order to facilitate clinical decision-making. To do so, anticholinergic burden scores of drugs were extracted from the literature, which categorizes the risk on a scale of 1 to 3. Given the patients' prescription data on the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. This study was conducted on 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters representing different levels of anticholinergic risk. This work evaluates the association between the average risk score and measures of socioeconomic status (index of multiple deprivation) and health (index of health and disability). The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medication. Our findings show that this group of patients is located within more deprived areas of London compared to the population of other risk groups. Furthermore, the prescription of anticholinergic medicines is more skewed towards female than male patients, suggesting that females are more at risk from this kind of multiple medication. The risk may be monitored and controlled in a healthcare management system that is well-equipped with tools implementing appropriate techniques of artificial intelligence.
Keywords: Anticholinergic medication, socioeconomic status, deprivation, clustering, risk analysis.
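A toy sketch of the pipeline: sum literature burden scores per patient, then let mean-shift find the risk groups. The drug names and scores below are hypothetical placeholders, and the simple sum is only one possible weighting.

```python
import numpy as np
from sklearn.cluster import MeanShift

# hypothetical per-drug burden scores on the literature's 1-3 scale
BURDEN = {"amitriptyline": 3, "paroxetine": 3, "ranitidine": 1}

def patient_score(prescriptions):
    """Aggregate anticholinergic risk as the sum of per-drug burdens."""
    return sum(BURDEN.get(drug, 0) for drug in prescriptions)

patients = [["amitriptyline", "paroxetine"], ["ranitidine"],
            ["paroxetine", "ranitidine"], [], ["amitriptyline"]]
scores = np.array([[patient_score(p)] for p in patients], dtype=float)
labels = MeanShift().fit_predict(scores)  # one label per risk cluster
print(list(zip(scores.ravel(), labels)))
```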
1165 Robust Adaptive ELS-QR Algorithm for Linear Discrete Time Stochastic Systems Identification
Authors: Ginalber L. O. Serra
Abstract:
This work proposes a recursive weighted ELS algorithm for system identification that applies numerically robust orthogonal Householder transformations. The properties of the proposed algorithm show that it obtains acceptable results in a noisy environment: fast convergence and asymptotically unbiased estimates. A comparative analysis with other robust methods well known from the literature is also presented.
Keywords: Stochastic Systems, Robust Identification, Parameter Estimation, Systems Identification.
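A sketch of the numerically robust core step: solving one weighted least-squares problem through a Householder QR factorization (what np.linalg.qr performs) instead of forming the ill-conditioned normal equations. The recursive ELS bookkeeping (extending the regressor with past residuals and updating per sample) is not reproduced.

```python
import numpy as np

def weighted_ls_via_qr(phi, y, weights):
    """Solve min ||W^(1/2) (y - phi @ theta)||_2 with a QR factorization.
    `phi` is the (n, p) regressor matrix, `y` the (n,) output vector."""
    w = np.sqrt(np.asarray(weights, float))
    q, r = np.linalg.qr(w[:, None] * phi)   # Householder QR under the hood
    return np.linalg.solve(r, q.T @ (w * y))

phi = np.array([[1.0, 0.5], [1.0, 1.0], [1.0, 1.5], [1.0, 2.0]])
y = np.array([1.9, 3.1, 3.9, 5.2])
print(weighted_ls_via_qr(phi, y, weights=[1, 1, 1, 2]))
```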
1164 Compromise Ratio Method for Decision Making under Fuzzy Environment using Fuzzy Distance Measure
Authors: Debashree Guha, Debjani Chakraborty
Abstract:
The aim of this paper is to adopt a compromise ratio (CR) methodology for the fuzzy multi-attribute single-expert decision making problem. The rating of each alternative is described by linguistic terms, which can be expressed as triangular fuzzy numbers. The compromise ratio method for fuzzy multi-attribute single-expert decision making is considered here by taking a ranking index based on the concept that the chosen alternative should be as close as possible to the ideal solution and as far away as possible from the negative-ideal solution, simultaneously. From a logical point of view, the distance between two triangular fuzzy numbers is itself a fuzzy number, not a crisp value. Therefore a fuzzy distance measure, which is itself a fuzzy number, is used here to calculate the difference between two triangular fuzzy numbers. With the help of this fuzzy distance measure, it is shown that the compromise ratio is a fuzzy number, which makes it easier for the decision maker to reach a decision. The computation principle and the procedure of the compromise ratio method are described in detail in this paper. A comparative analysis of the previously proposed compromise ratio method [1] and the newly adopted method is illustrated with two numerical examples.
Keywords: Compromise ratio method, Fuzzy multi-attribute single-expert decision making, Fuzzy number, Linguistic variable
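For contrast with the paper's fuzzy-valued distance, the common crisp alternative is the vertex distance between triangular fuzzy numbers A = (a1, a2, a3) and B = (b1, b2, b3); the sketch below shows only this crisp baseline, not the fuzzy distance measure the paper adopts.

```python
import math

def vertex_distance(a, b):
    """Crisp vertex distance between two triangular fuzzy numbers:
    d(A, B) = sqrt(((a1-b1)^2 + (a2-b2)^2 + (a3-b3)^2) / 3)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3.0)

print(vertex_distance((1, 2, 3), (2, 3, 4)))  # 1.0
```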
1163 Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping
Authors: Adnan A. Y. Mustafa
Abstract:
In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented.
Keywords: Big images, binary images, similarity, matching.
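A toy sketch of the sampling idea: compare a small random subset of pixel positions, mapped proportionally between the two images so that sizes may differ. This conveys the probabilistic-mapping spirit only; the PMMBI estimate itself is not reproduced.

```python
import numpy as np

def quick_binary_similarity(a, b, n_samples=500, seed=0):
    """Estimate the similarity of two binary images from a random sample
    of pixel pairs instead of a full pixel-by-pixel comparison."""
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, a.shape[0], n_samples)
    cols = rng.integers(0, a.shape[1], n_samples)
    # map sampled coordinates proportionally into b (size-invariant)
    rb = rows * b.shape[0] // a.shape[0]
    cb = cols * b.shape[1] // a.shape[1]
    return float(np.mean(a[rows, cols] == b[rb, cb]))
```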
1162 Fuzzy Relatives of the CLARANS Algorithm With Application to Text Clustering
Authors: Mohamed A. Mahfouz, M. A. Ismail
Abstract:
This paper introduces new algorithms (a fuzzy relative of the CLARANS algorithm, FCLARANS, and fuzzy c-medoids based on randomized search, FCMRANS) for fuzzy clustering of relational data. Unlike the existing fuzzy c-medoids algorithm (FCMdd), in which the within-cluster dissimilarity of each cluster is minimized in each iteration by recomputing new medoids given current memberships, FCLARANS minimizes the same objective function by changing the current medoids in such a way that the sum of the within-cluster dissimilarities is minimized. Computing new medoids may be affected by noise because outliers may join the computation of medoids, while the choice of medoids in FCLARANS is dictated by the location of a predominant fraction of points inside a cluster and is therefore less sensitive to the presence of outliers. In FCMRANS, the step of computing new medoids in FCMdd is modified to be based on randomized search. Furthermore, a new initialization procedure is developed that adds randomness to the initialization procedure used with FCMdd. Both FCLARANS and FCMRANS are compared with the robust and linearized version of fuzzy c-medoids (RFCMdd). Experimental results with different samples of the Reuters-21578 and Newsgroups (20NG) corpora and generated datasets with noise show that FCLARANS is more robust than both RFCMdd and FCMRANS. Finally, both FCMRANS and FCLARANS are more efficient, and their outputs are almost the same as that of RFCMdd in terms of classification rate.
Keywords: Data Mining, Fuzzy Clustering, Relational Clustering, Medoid-Based Clustering, Cluster Analysis, Unsupervised Learning.
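A compact sketch of the randomized-search ingredient both new algorithms share: propose random medoid swaps on a relational (dissimilarity-matrix) dataset and keep a swap when it lowers the fuzzy medoid objective. The membership and objective forms follow the usual FCMdd pattern, but all details here are illustrative, not the paper's exact procedures.

```python
import numpy as np

def randomized_medoid_search(d, c, m=2.0, n_swaps=200, seed=0):
    """`d` is an (n, n) dissimilarity matrix; returns c medoid indices."""
    rng = np.random.default_rng(seed)
    n = d.shape[0]

    def objective(meds):
        dm = d[:, meds] + 1e-12
        u = dm ** (-1.0 / (m - 1.0))          # FCMdd-style memberships
        u /= u.sum(axis=1, keepdims=True)
        return float(np.sum(u ** m * dm))

    medoids = list(rng.choice(n, size=c, replace=False))
    best = objective(medoids)
    for _ in range(n_swaps):
        trial = medoids.copy()
        trial[rng.integers(c)] = int(rng.integers(n))  # one random swap
        if len(set(trial)) == c:
            cost = objective(trial)
            if cost < best:
                medoids, best = trial, cost
    return medoids
```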
1161 A New Similarity Measure on Intuitionistic Fuzzy Sets
Authors: Binyamin Yusoff, Imran Taib, Lazim Abdullah, Abd Fatah Wahab
Abstract:
Intuitionistic fuzzy sets, as proposed by Atanassov, have gained much attention from past and recent researchers for applications in various fields. Similarity measures between intuitionistic fuzzy sets were developed afterwards. However, these do not cater for the conflicting behavior of each element evaluated. We therefore make some modifications to the similarity measure of IFSs by bringing the concept of conflict into the model. In this paper, we concentrate on Zhang and Fu's similarity measures for IFSs, and some examples are given to validate these similarity measures. A simple modification to Zhang and Fu's similarity measures of IFSs is proposed to find the best result according to the use of the degree of indeterminacy. Finally, we conclude with an application to real decision making problems.
Keywords: Intuitionistic fuzzy sets, similarity measures, multicriteria decision making.
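Zhang and Fu's measures and the authors' modification are not spelled out in the abstract; for orientation, here is the standard Hamming-distance-based similarity over membership (mu) and non-membership (nu) degrees, which such modifications typically start from.

```python
def ifs_similarity(mu_a, nu_a, mu_b, nu_b):
    """Similarity of two IFSs A and B on n elements:
    S(A, B) = 1 - (1 / 2n) * sum(|mu_A - mu_B| + |nu_A - nu_B|)."""
    n = len(mu_a)
    dist = sum(abs(ma - mb) + abs(na - nb)
               for ma, na, mb, nb in zip(mu_a, nu_a, mu_b, nu_b))
    return 1.0 - dist / (2.0 * n)

print(ifs_similarity([0.7, 0.4], [0.2, 0.5], [0.6, 0.5], [0.3, 0.4]))  # 0.9
```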
1160 Some Solid Transportation Models with Crisp and Rough Costs
Authors: Pradip Kundu, Samarjit Kar, Manoranjan Maiti
Abstract:
In this paper, some practical solid transportation models are formulated considering the per-trip capacity of each type of conveyance, with crisp and rough unit transportation costs. This is applicable to systems in which full vehicles, e.g. trucks or rail coaches, are booked for the transportation of products, so that the transportation cost is determined per full conveyance. The models with unit transportation costs as rough variables are transformed into deterministic forms using rough chance-constrained programming with the help of the trust measure. Numerical examples are provided to illustrate the proposed models in a crisp environment as well as with unit transportation costs as rough variables.
Keywords: Solid transportation problem, Rough set, Rough variable, Trust measure.
1159 A Note on Penalized Power-Divergence Test Statistics
Authors: Aylin Alin
Abstract:
In this paper, penalized power-divergence test statistics are defined, and their exact size properties for testing a nested sequence of log-linear models are compared with those of ordinary power-divergence test statistics for various values of the penalization, λ, and the main effects. Since the ordinary and penalized power-divergence test statistics have the same asymptotic distribution, comparisons are only made for small and moderate samples. Three-way contingency tables distributed according to a multinomial distribution are considered. Simulation results reveal that penalized power-divergence test statistics perform much better than their ordinary counterparts.
Keywords: Contingency table, Log-linear models, Penalization, Power-divergence measure, Penalized power-divergence measure.
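The ordinary statistic being penalized is the Cressie-Read power divergence; a sketch with its two classical limits (the penalization term itself is not reproduced here):

```python
import numpy as np

def power_divergence(observed, expected, lam):
    """Cressie-Read statistic:
    (2 / (lam * (lam + 1))) * sum(O * ((O / E)**lam - 1)).
    lam = 1 gives Pearson's X^2; the limits at 0 and -1 are handled below."""
    o, e = np.asarray(observed, float), np.asarray(expected, float)
    if abs(lam) < 1e-9:          # limit lam -> 0: likelihood ratio G^2
        return 2.0 * np.sum(o * np.log(o / e))
    if abs(lam + 1.0) < 1e-9:    # limit lam -> -1: modified G^2
        return 2.0 * np.sum(e * np.log(e / o))
    return 2.0 / (lam * (lam + 1.0)) * np.sum(o * ((o / e) ** lam - 1.0))

print(power_divergence([10, 20, 30], [20, 20, 20], lam=1.0))  # Pearson: 10.0
```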
1158 Assessment of Multiscale Information for Short Physiological Time Series
Authors: Young-Seok Choi
Abstract:
This paper presents a multiscale information measure of the electroencephalogram (EEG) for analysis with short data lengths. A multiscale extension of permutation entropy (MPE) is capable of fully reflecting the dynamical characteristics of EEG across different temporal scales. However, MPE yields imprecise estimates due to the coarse-graining procedure at large scales. We present an improved MPE measure to estimate entropy more accurately with a short time series. Computing the entropies of all coarse-grained time series and averaging them at each scale leads to the modified MPE (MMPE), which provides enhanced accuracy compared to MPE. Simulation and experimental studies confirmed that MMPE outperforms MPE in terms of accuracy.
Keywords: Multiscale entropy, permutation entropy, EEG, seizure.
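A minimal sketch of the modification as the abstract describes it: at each scale there are `scale` possible coarse-grained series (one per starting offset); plain MPE uses only the first, while MMPE averages the permutation entropy over all of them. Parameter choices are illustrative.

```python
import numpy as np
from itertools import permutations
from math import log

def permutation_entropy(x, order=3):
    """Shannon entropy of ordinal patterns of length `order`."""
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - order + 1):
        counts[tuple(np.argsort(x[i:i + order]))] += 1
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values() if c)

def mmpe(x, scale, order=3):
    """Average PE over all `scale` offset coarse-grainings of x."""
    x = np.asarray(x, float)
    entropies = []
    for k in range(scale):
        m = (len(x) - k) // scale
        coarse = x[k:k + m * scale].reshape(m, scale).mean(axis=1)
        entropies.append(permutation_entropy(coarse, order))
    return float(np.mean(entropies))
```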
1157 Protein-Protein Interaction Detection Based on Substring Sensitivity Measure
Authors: Nazar Zaki, Safaai Deris, Hany Alashwal
Abstract:
Detecting protein-protein interactions is a central problem in computational biology, and aberrant interactions have been implicated in a number of neurological disorders. As a result, the prediction of protein-protein interactions has recently received considerable attention from biologists around the globe. Computational tools that are capable of effectively identifying protein-protein interactions are much needed. In this paper, we propose a method to detect protein-protein interactions based on a substring similarity measure: two protein sequences may interact by means of the similarities of the substrings they contain. When applied to the currently available protein-protein interaction data for the yeast Saccharomyces cerevisiae, the proposed method delivered a reasonable improvement over existing ones.
Keywords: Protein-Protein Interaction, support vector machine, feature extraction, pairwise alignment, Smith-Waterman score.
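A toy stand-in for the substring idea: score a pair of sequences by their shared k-mers (fixed-length substrings). The paper instead uses pairwise alignment and Smith-Waterman scores inside an SVM, which this sketch does not reproduce.

```python
def kmer_similarity(seq_a, seq_b, k=3):
    """Jaccard overlap of the k-length substrings of two protein sequences."""
    ka = {seq_a[i:i + k] for i in range(len(seq_a) - k + 1)}
    kb = {seq_b[i:i + k] for i in range(len(seq_b) - k + 1)}
    union = ka | kb
    return len(ka & kb) / len(union) if union else 0.0

print(kmer_similarity("MKTAYIAKQR", "MKTAYLAKQR"))  # ~0.45
```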
1156 Comparison of Composite Programming and Compromise Programming for Aircraft Selection Problem Using Multiple Criteria Decision Making Analysis Method
Authors: C. Ardil
Abstract:
In this paper, the comparison of composite programming and compromise programming for the aircraft selection problem is discussed using the multiple criteria decision analysis method. The decision making process requires the prior definition and fulfillment of certain factors, especially when it comes to complex areas such as aircraft selection problems. The proposed technique gives more efficient results by extending composite programming and compromise programming, which are widely used in modeling multiple criteria decisions. The proposed model is applied to a practical decision problem for evaluating and selecting aircraft. A selection of aircraft was made based on the proposed approach developed in the field of multiple criteria decision making. The model presented is solved using composite programming and compromise programming. The importance values of the weight coefficients of the criteria are calculated using the mean weight method, and the evaluation and ranking of aircraft are carried out with the two methods. In order to determine the stability of the model and the applicability of the developed approach, the paper analyzes its sensitivity: the first part involves changing the values of the coefficients λ and q, the second part applies the different multiple criteria decision making methods, composite programming and compromise programming, and the third part calculates the Spearman correlation coefficient of the obtained ranks, which confirms the applicability of all the proposed approaches.
Keywords: composite programming, compromise programming, additive weighted model, multiplicative weighted model, multiple criteria decision making analysis, MCDMA, aircraft selection
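A minimal sketch of the compromise-programming core: rank alternatives by their weighted L_q distance to the ideal point. Equal (mean) weights mirror the mean weight method; the exact normalization and the composite-programming variant used in the paper may differ.

```python
import numpy as np

def compromise_ranking(matrix, weights=None, q=2.0):
    """`matrix` is alternatives x criteria (benefit criteria assumed).
    Returns alternative indices ordered from best to worst."""
    m = np.asarray(matrix, float)
    n_alt, n_crit = m.shape
    w = np.full(n_crit, 1.0 / n_crit) if weights is None else np.asarray(weights)
    best, worst = m.max(axis=0), m.min(axis=0)
    span = np.where(best > worst, best - worst, 1.0)
    gap = (best - m) / span                    # 0 at the ideal value
    dist = (w * gap ** q).sum(axis=1) ** (1.0 / q)
    return np.argsort(dist)                    # closest to ideal first

print(compromise_ranking([[300, 7.5, 2], [280, 8.0, 3], [320, 7.0, 1]]))
```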
1155 Exponentially Weighted Simultaneous Estimation of Several Quantiles
Authors: Valeriy Naumov, Olli Martikainen
Abstract:
In this paper we propose a new method for simultaneously estimating multiple quantiles, corresponding to given probability levels, from data streams and massive data sets. This method provides a basis for the development of single-pass, low-storage quantile estimation algorithms, which differ in complexity, storage requirement and accuracy. We demonstrate that such algorithms may perform well even for heavy-tailed data.
Keywords: Quantile estimation, data stream, heavy-tailed distribution, tail index.
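One classical single-pass, low-storage scheme in this family is stochastic approximation with a fixed step, which tracks each quantile with O(1) memory and weights recent observations exponentially; a sketch (not the authors' exact algorithm):

```python
import random

def update_quantiles(estimates, x, probs, step=0.01):
    """One exponentially weighted update per tracked quantile: each
    estimate settles where P(x <= q) equals its target probability p."""
    return [q + step * (p - (x <= q)) for q, p in zip(estimates, probs)]

qs, probs = [0.0, 0.0], [0.5, 0.95]       # track the median and the P95
for _ in range(200_000):
    qs = update_quantiles(qs, random.gauss(0.0, 1.0), probs)
print(qs)  # approaches [0.0, 1.645] for N(0, 1) data
```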
1154 A Comparative Analysis of E-Government Quality Models
Authors: Abdoullah Fath-Allah, Laila Cheikhi, Rafa E. Al-Qutaish, Ali Idri
Abstract:
Many quality models have been used to measure e-government portal quality. However, the absence of an international consensus on e-government portal quality models results in many differences in terms of quality attributes and measures. The aim of this paper is to compare and analyze the existing e-government quality models proposed in the literature (those that are based on ISO standards and those that are not) in order to propose guidelines for building a good and useful e-government portal quality model. Our findings show that there is no e-government portal quality model based on the new international standard ISO 25010. Besides that, the quality models are not based on a best practice model that would allow agencies both to measure e-government portal quality and to identify missing best practices for those portals.
Keywords: E-government, portal, best practices, quality model, ISO, standard, ISO 25010, ISO 9126.
1153 Assessing Complexity of Neuronal Multiunit Activity by Information Theoretic Measure
Authors: Young-Seok Choi
Abstract:
This paper provides a quantitative measure of time-varying multiunit neuronal spiking activity (MUA) using an entropy-based approach. To verify the status embedded in the activity of a population of neurons, the discrete wavelet transform (DWT) is used to isolate the inherent spiking activity of the MUA. Due to the de-correlating property of the DWT, the spiking activity is preserved while the non-spiking component is reduced. By evaluating the entropy of the wavelet coefficients of the de-noised MUA, a multiresolution Shannon entropy (MRSE) of the MUA signal is developed. The proposed entropy was tested in the analysis of both simulated noisy MUA and actual MUA recorded from the cortex in a rodent model. Simulation and experimental results demonstrate that the dynamics of a population can be quantified by using the proposed entropy.
Keywords: Discrete wavelet transform, Entropy, Multiresolution, Multiunit activity.
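A sketch of one plausible reading of the pipeline: decompose the signal with the DWT and take the Shannon entropy of the energy distribution across subbands. The paper's de-noising step and its exact MRSE normalization are not reproduced, and the wavelet and level are arbitrary choices.

```python
import numpy as np
import pywt

def multiresolution_entropy(signal, wavelet="db4", level=5):
    """Shannon entropy of the relative energies of DWT subbands."""
    coeffs = pywt.wavedec(np.asarray(signal, float), wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

t = np.linspace(0, 1, 2048)
noisy = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.default_rng(0).standard_normal(2048)
print(multiresolution_entropy(noisy))
```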
1152 Extending the Quantum Entropy to Multidimensional Signal Processing
Authors: Youssef Khmou, Said Safi, Miloud Frikel
Abstract:
This paper treats different aspects of the entropy measure in classical information theory and statistical quantum mechanics, and presents the possibility of extending the definition of Von Neumann entropy to image and array processing. In the first part, we generalize the quantum entropy using the singular values of arbitrary rectangular matrices to measure randomness and the quality of a denoising operation; this new definition of entropy can be used to compare the performance of filtering methods. In the second part, we apply the concept of a pure state in the quantum formalism to generalize the maximum entropy method for the narrowband far-field source localization problem. Several computer simulation results are illustrated to demonstrate the effectiveness of the proposed techniques.
Keywords: Von Neumann entropy, Filtering, array, DoA, Maximum Entropy Method.
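The first generalization admits a compact sketch: normalize the singular values of a rectangular matrix into a probability vector and take its Shannon entropy, the direct analogue of Von Neumann entropy on density-matrix eigenvalues. The normalization convention here (dividing by the sum of singular values) is one common choice and may differ from the paper's.

```python
import numpy as np

def singular_value_entropy(a):
    """Entropy of the normalized singular-value spectrum of any matrix:
    low for near-rank-one (well-denoised) data, high for noise-like data."""
    s = np.linalg.svd(np.asarray(a, float), compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
structured = np.outer(rng.standard_normal(64), rng.standard_normal(32))
noise = rng.standard_normal((64, 32))
print(singular_value_entropy(structured), singular_value_entropy(structured + noise))
```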
1151 Efficient Program Slicing Algorithms for Measuring Functional Cohesion and Parallelism
Authors: Jehad Al Dallal
Abstract:
Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. In this paper, algorithms are introduced to compute all backward and forward static slices of a computer program by traversing its program representation graph once. The program representation graph used in this paper is the Program Dependence Graph (PDG). We conducted an experimental comparison study using 25 software modules to show the effectiveness of the introduced algorithm for computing all backward static slices over single-point slicing approaches in computing the parallelism and functional cohesion of program modules. The effectiveness of the algorithm is measured in terms of execution time and the number of traversed PDG edges. The comparison study results indicate that using the introduced algorithm considerably reduces the slicing time and effort required to measure module parallelism and functional cohesion.
Keywords: Backward slicing, cohesion measure, forward slicing, parallelism measure, program dependence graph, program slicing, static slicing.
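For orientation, a naive baseline that the paper's single-traversal algorithm improves on: compute every node's backward slice by fixpoint propagation over the reversed dependence edges of a toy PDG. This sketch shows what "all backward slices" means, not the authors' algorithm.

```python
from collections import defaultdict

def all_backward_slices(nodes, edges):
    """`edges` holds (src, dst) dependence pairs meaning dst depends on src.
    Each node's backward slice is itself plus the slices of the nodes it
    depends on; iterate until nothing changes (handles cyclic PDGs)."""
    preds = defaultdict(set)
    for src, dst in edges:
        preds[dst].add(src)
    slices = {n: {n} for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            new = set(slices[n])
            for p in preds[n]:
                new |= slices[p]
            if new != slices[n]:
                slices[n], changed = new, True
    return slices

pdg = all_backward_slices(["s1", "s2", "s3"], [("s1", "s2"), ("s2", "s3")])
print(pdg["s3"])  # {'s1', 's2', 's3'}
```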