Search results for: brick-infill partition
42 Improved C-Fuzzy Decision Tree for Intrusion Detection
Authors: Krishnamoorthi Makkithaya, N. V. Subba Reddy, U. Dinesh Acharya
Abstract:
As the number of networked computers grows, intrusion detection is an essential component in keeping networks secure. Various approaches to intrusion detection are currently in use, each with its own merits and demerits. This paper presents our work to test and improve the performance of a new class of decision tree, the C-fuzzy decision tree, for detecting intrusions. The work also includes identifying the best candidate feature subset to build an efficient C-fuzzy decision tree based Intrusion Detection System (IDS). We investigated the usefulness of the C-fuzzy decision tree for developing an IDS with a data partition based on horizontal fragmentation. Empirical results indicate the usefulness of our approach in developing an efficient IDS.
Keywords: Data mining, decision tree, feature selection, fuzzy c-means clustering, intrusion detection.
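The keywords point to fuzzy C-means clustering as the partitioning primitive behind the C-fuzzy decision tree. As a rough illustration of that primitive only (not the authors' tree-building or feature-selection procedure), a minimal fuzzy C-means sketch might look as follows; the synthetic data, the number of clusters c, and the fuzzifier m are assumptions for illustration.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, eps=1e-5, seed=0):
    """Minimal fuzzy C-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)              # memberships of each record sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]            # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < eps:          # converged
            return centers, U_new
        U = U_new
    return centers, U

# e.g. partition 200 synthetic 5-feature records into 4 fuzzy clusters
centers, U = fuzzy_c_means(np.random.rand(200, 5), c=4)
```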
41 System Survivability in Networks in the Context of Defense/Attack Strategies: The Large Scale
Authors: A. Ben Yaghlane, M. N. Azaiez, M. Mrad
Abstract:
We investigate the large scale of networks in the context of network survivability under attack. We use appropriate techniques to evaluate both the attacker-based and the defender-based network survivability. The attacker is unaware of the links operated by the defender. Each attacked link has some pre-specified probability of being disconnected. The defender's choice is made so as to maximize the chance of successfully sending the flow to the destination node. The attacker, however, will select the cut set with the highest chance of being disabled in order to partition the network. Moreover, we extend the problem to the case of selecting the best p paths to operate by the defender and the best k cut sets to target by the attacker, for arbitrary integers p, k > 1. We investigate some variations of the problem and suggest polynomial-time solutions.
Keywords: Defense/attack strategies, large scale, networks, partitioning a network.
40 Accelerating Sparse Matrix Vector Multiplication on Many-Core GPUs
Authors: Weizhi Xu, Zhiyong Liu, Dongrui Fan, Shuai Jiao, Xiaochun Ye, Fenglong Song, Chenggang Yan
Abstract:
Many-core GPUs provide high computing ability and substantial bandwidth; however, optimizing irregular applications like SpMV on GPUs remains a difficult but meaningful task. In this paper, we propose a novel method to improve the performance of SpMV on GPUs. A new storage format called HYB-R is proposed to exploit the GPU architecture more efficiently. In the process of creating the HYB-R format, the COO portion of the matrix is partitioned recursively into an ELL portion and a COO portion to ensure that as many non-zeros as possible are stored in ELL format. How to partition the matrix is an important problem for the HYB-R kernel, so we also tune the partitioning parameters for higher performance. Experimental results show that our method achieves better performance than the fastest kernel (HYB) in NVIDIA's SpMV library, with up to 17% speedup.
Keywords: GPU, HYB-R, many-core, performance tuning, SpMV.
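The HYB idea that HYB-R refines can be pictured as splitting rows by non-zero count: entries up to a row width K go into a regular ELL slab, and the overflow falls back to COO. The sketch below is only a CPU-side illustration of that split under assumed inputs (a SciPy CSR matrix and a threshold K); it is not the authors' recursive HYB-R construction or their GPU kernel.

```python
import numpy as np
from scipy.sparse import csr_matrix

def split_hyb(A: csr_matrix, K: int):
    """Split a CSR matrix into an ELL part (rows padded to width K) and a COO overflow."""
    n_rows = A.shape[0]
    ell_val = np.zeros((n_rows, K))
    ell_col = np.full((n_rows, K), -1, dtype=int)      # -1 marks padding slots
    coo = []                                           # (row, col, val) overflow triples
    for i in range(n_rows):
        cols = A.indices[A.indptr[i]:A.indptr[i + 1]]
        vals = A.data[A.indptr[i]:A.indptr[i + 1]]
        k = min(K, len(cols))
        ell_col[i, :k], ell_val[i, :k] = cols[:k], vals[:k]
        coo.extend(zip([i] * (len(cols) - k), cols[k:], vals[k:]))
    return ell_val, ell_col, coo
```

An SpMV kernel would then read the regular ELL slab with coalesced accesses and handle the few COO triples separately, which is the regularity HYB-R tries to maximize.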
39 Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective
Authors: Jung-Hong Hong, Jing-Cen Yang, Cai-Yu Ou
Abstract:
The analysis of geographic inequality relies heavily on the use of location-enabled statistical data and quantitative measures to present the spatial patterns of the selected phenomena and analyze their differences. To protect the privacy of individual instances and link to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status for the selected levels of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the analyzed results, a phenomenon well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan based on spatial partition principles that seek homogeneity in the number of population and households. Compared to the traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select the appropriate dissemination level for publishing statistical data. This paper compares the results of respectively using TGSC and township units on mortality data and examines the spatial characteristics of their outcomes. For the mortality data of Taitung County between January 1st, 2008 and December 31st, 2010, the all-cause age-standardized death rate (ASDR) at the township level ranges from 571 to 1757 per 100,000 persons, whereas the 2nd dissemination area (TGSC) shows greater variation, ranging from 0 to 2222 per 100,000. The finer granularity of the TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality and can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment). The management and analysis of the statistical data referring to the TGSC in this research is strongly supported by the use of Geographic Information System (GIS) technology. An integrated workflow is developed that consists of the processing of death certificates, the geocoding of street addresses, the quality assurance of geocoded results, the automatic calculation of statistical measures, the standardized encoding of measures and the geo-visualization of statistical outcomes. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of the mortality data and justify the analyzed results. With a common statistical area framework such as TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways to avoid wrong decision making.
Keywords: Mortality map, spatial patterns, statistical area, variation.
38 On the Noise Distance in Robust Fuzzy C-Means
Authors: M. G. C. A. Cimino, G. Frosini, B. Lazzerini, F. Marcelloni
Abstract:
In the last decades, a number of robust fuzzy clustering algorithms have been proposed to partition data sets affected by noise and outliers. Robust fuzzy C-means (robust-FCM) is certainly one of the best known among these algorithms. In robust-FCM, noise is modeled as a separate cluster characterized by a prototype that has a constant distance δ from all data points. The distance δ determines the boundary of the noise cluster and is therefore a critical parameter of the algorithm. Though some approaches have been proposed to automatically determine the most suitable δ for a specific application, to date an efficient and fully satisfactory solution does not exist. The aim of this paper is to propose a novel method to compute the optimal δ based on the analysis of the distribution of the percentage of objects assigned to the noise cluster in repeated executions of robust-FCM with decreasing values of δ. The extremely encouraging results obtained on some data sets from the literature are shown and discussed.
Keywords: Noise prototype, robust fuzzy clustering, robust fuzzy C-means.
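The selection idea described above can be sketched as a sweep: run robust-FCM repeatedly with decreasing δ, record the percentage of objects captured by the noise cluster, and analyze how that percentage evolves. The sketch below only illustrates the sweep; `robust_fcm` is a hypothetical stand-in for any robust-FCM implementation, and the simple rule shown (pick the δ just before the noise percentage jumps) is an assumption, not the authors' exact criterion.

```python
import numpy as np

def sweep_noise_distance(X, robust_fcm, deltas):
    """Run robust-FCM once per delta (given in decreasing order) and record the
    percentage of objects assigned to the noise cluster."""
    noise_pct = []
    for d in deltas:
        U, noise_members = robust_fcm(X, delta=d)      # hypothetical robust-FCM call
        noise_pct.append(100.0 * len(noise_members) / len(X))
    jumps = np.diff(noise_pct)                         # growth of the noise cluster
    best = int(np.argmax(jumps))                       # illustrative rule: delta before the sharpest jump
    return deltas[best], noise_pct
```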
37 Fuzzy Clustering Analysis in Real Estate Companies in China
Authors: Jianfeng Li, Feng Jin, Xiaoyu Yang
Abstract:
This paper applies a fuzzy clustering algorithm to classify real estate companies in China according to some general financial indexes, such as income per share, share accumulation fund, net profit margin, weighted net assets yield and shareholders' equity. By constructing and normalizing the initial partition matrix, obtaining the fuzzy similarity matrix with the Minkowski metric and computing its transitive closure, the dynamic fuzzy clustering analysis of real estate companies clearly shows that the clustering results change gradually as the threshold decreases, and that they bear a similar relationship to the prices of those companies on the stock market. In this way, the approach is of great value for comparing the financial condition of real estate companies in order to identify good investment opportunities.
Keywords: Fuzzy clustering algorithm, data mining, real estate company, financial analysis.
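The pipeline sketched in the abstract (normalize the data, build a fuzzy similarity matrix with a Minkowski metric, take its transitive closure, then cut it at decreasing thresholds) can be illustrated roughly as follows. The Minkowski order p and the similarity scaling used here are assumptions for illustration, not the authors' exact choices.

```python
import numpy as np

def fuzzy_transitive_clustering(X, lam, p=2):
    """Normalize the indexes, build a fuzzy similarity matrix with a Minkowski metric,
    take its max-min transitive closure and cut it at threshold lam."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    D = (np.abs(X[:, None, :] - X[None, :, :]) ** p).sum(axis=2) ** (1.0 / p)
    R = 1.0 - D / D.max()                                       # similarity in [0, 1]
    while True:                                                 # max-min transitive closure
        R2 = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        if np.allclose(R2, R):
            break
        R = R2
    adj = R >= lam                                              # lambda-cut equivalence relation
    labels, next_label = -np.ones(len(X), dtype=int), 0
    for i in range(len(X)):
        if labels[i] < 0:
            labels[adj[i]] = next_label
            next_label += 1
    return labels
```

Lowering lam merges classes step by step, which is the gradual change of clustering results the abstract describes.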
36 Clustering of Variables Based on a Probabilistic Approach Defined on the Hypersphere
Authors: Paulo Gomes, Adelaide Figueiredo
Abstract:
We consider n individuals described by p standardized variables, represented by points on the surface of the unit hypersphere S_{n-1}. For a given choice of n individuals, we suppose that the set of observed variables comes from a mixture of bipolar Watson distributions defined on the hypersphere. The EM and Dynamic Clusters algorithms are used for the identification of such a mixture. We obtain estimates of the parameters for each Watson component and then a partition of the set of variables into homogeneous groups of variables. Additionally, we present a factor analysis model where the unobservable factors are just the maximum likelihood estimators of the Watson directional parameters, namely the first principal component of the data matrix associated with each group previously identified. Such an alternative model yields directly interpretable solutions (simple structure), avoiding factor rotations.
Keywords: Dynamic Clusters algorithm, EM algorithm, Factor analysis model, Hierarchical Clustering, Watson distribution.
35 Applying Fuzzy FP-Growth to Mine Fuzzy Association Rules
Authors: Chien-Hua Wang, Wei-Hsuan Lee, Chin-Tzong Pang
Abstract:
In data mining, association rules are used to find associations between the different items of a transaction database. As data are collected and stored, valuable rules can be found through association rules, which can be applied to help managers execute marketing strategies and establish sound market frameworks. This paper aims to use Fuzzy Frequent Pattern growth (FFP-growth) to derive fuzzy association rules. First, we apply fuzzy partition methods and determine a membership function of the quantitative values for each transaction item. Next, we implement FFP-growth to carry out the data mining process. In addition, in order to understand the impact of the Apriori algorithm and the FFP-growth algorithm on the execution time and the number of generated association rules, experiments are performed using databases of different sizes and different thresholds. Lastly, the experimental results show that the FFP-growth algorithm is more efficient than other existing methods.
Keywords: Data mining, association rule, fuzzy frequent pattern growth.
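The fuzzy partition step for quantitative transaction values is commonly realized with overlapping membership functions (for example low/medium/high) so that each purchased quantity contributes fractional support to several fuzzy items. The triangular functions and breakpoints in the sketch below are illustrative assumptions, not the exact membership functions used in the paper.

```python
def triangular_memberships(x, lo, mid, hi):
    """Memberships of a quantitative value x in overlapping fuzzy sets Low/Medium/High."""
    def tri(v, a, b, c):
        if v <= a or v >= c:
            return 0.0
        return (v - a) / (b - a) if v <= b else (c - v) / (c - b)
    low = 1.0 if x <= lo else tri(x, lo - (mid - lo), lo, mid)
    medium = tri(x, lo, mid, hi)
    high = 1.0 if x >= hi else tri(x, mid, hi, hi + (hi - mid))
    return {"low": low, "medium": medium, "high": high}

# e.g. a purchased quantity of 4 with assumed breakpoints (2, 5, 8)
print(triangular_memberships(4, 2, 5, 8))   # {'low': 0.33.., 'medium': 0.66.., 'high': 0.0}
```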
34 Numerical Investigation on the Progressive Collapse Resistance of an RC Building with Brick Infills under Column Loss
Authors: Meng-Hao Tsai, Tsuei-Chiang Huang
Abstract:
Interior brick-infill partitions are usually considered as non-structural components and only their weight is accounted for in practical structural design. In this study, their effect on the progressive collapse resistance of an RC building subjected to sudden column loss is investigated. Three notional column loss conditions with four different brick-infill locations are considered. Column-loss response analyses of the RC building with and without brick infills are carried out. Analysis results indicate that the collapse resistance is only slightly influenced by the brick infills due to their brittle failure characteristic. Even so, they may help to reduce the inelastic displacement response under column loss. For practical engineering, it is reasonably conservative to only consider the weight of brick-infill partitions in the structural analysis.
Keywords: Progressive collapse, column loss, brick-infill partition, compression strut.
33 Forecasting US Dollar/Euro Exchange Rate with Genetic Fuzzy Predictor
Authors: R. Mechgoug, A. Titaouine
Abstract:
Fuzzy systems have been successfully used for exchange rate forecasting. However, a fuzzy system is complex and confusing for an expert to design: there is a large set of parameters (the fuzzy knowledge base) that must be selected, and it is not a simple task to select the appropriate fuzzy knowledge base for exchange rate forecasting. Researchers often examine the effect of the fuzzy knowledge base on the forecasting performance of the fuzzy system. This paper proposes a genetic fuzzy predictor to forecast the future value of the daily US Dollar/Euro exchange rate time series. The approach relies on a set of fuzzy predictors that forecast the same time series, each with a different fuzzy partition. Each fuzzy predictor is built in two stages, where each stage is performed by a real genetic algorithm.
Keywords: Foreign exchange rate, time series forecasting, Fuzzy System, and Genetic Algorithm.
32 Modeling of Cross Flow Classifier with Water Injection
Authors: E. Pikushchak, J. Dueck, L. Minkov
Abstract:
In hydrocyclones, the particle separation efficiency is limited by the suspended fine particles, which are discharged with the coarse product in the underflow. It is well known that injecting water in the conical part of the cyclone reduces the fine particle fraction in the underflow. This paper presents a mathematical model that simulates the water injection in the conical component. The model accounts for the fluid flow and the particle motion. Particle interaction, due to hindered settling caused by increased density and viscosity of the suspension, and fine particle entrainment by settling coarse particles are included in the model. Water injection in the conical part of the hydrocyclone is performed to reduce fine particle discharge in the underflow. The model demonstrates the impact of the injection rate, injection velocity, and injection location on the shape of the partition curve. The simulations are compared with experimental data of a 50-mm cyclone.
Keywords: Classification, fine particle processing, hydrocyclone, water injection.
31 A Novel Microarray Biclustering Algorithm
Authors: Chieh-Yuan Tsai, Chuang-Cheng Chiu
Abstract:
Biclustering aims at identifying several biclusters that reveal potential local patterns in a microarray matrix. A bicluster is a sub-matrix of the microarray consisting of a subset of genes that are co-regulated in a subset of conditions. In this study, we extend the motif of subspace clustering to present a K-biclusters clustering (KBC) algorithm for the microarray biclustering problem. Besides minimizing the dissimilarities between genes and bicluster centers within all biclusters, the objective function of the KBC algorithm additionally takes into account how to minimize the residues within all biclusters based on the mean square residue model. In addition, the objective function maximizes the entropy of conditions to stimulate more conditions to contribute to the identification of biclusters. The KBC algorithm adopts a K-means-type clustering process to efficiently optimize the partition into K biclusters. A set of experiments on a practical microarray dataset is presented to show the performance of the proposed KBC algorithm.
Keywords: Microarray, biclustering, subspace clustering, mean square residue model.
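The mean square residue that the KBC objective minimizes inside each bicluster is, in the usual formulation, the average squared deviation of the sub-matrix from an additive row/column/overall mean model. The sketch below computes that quantity on a candidate sub-matrix; it illustrates the residue model only, not the full KBC objective with its entropy term.

```python
import numpy as np

def mean_square_residue(B):
    """Cheng-Church style mean square residue of a bicluster sub-matrix B (genes x conditions):
    average squared deviation from the additive row/column/overall mean model."""
    residue = B - B.mean(axis=1, keepdims=True) - B.mean(axis=0, keepdims=True) + B.mean()
    return float((residue ** 2).mean())

# a perfectly additive (coherent) bicluster has residue 0
B = np.array([[1.0, 2.0, 4.0], [3.0, 4.0, 6.0], [0.0, 1.0, 3.0]])
print(mean_square_residue(B))   # -> 0.0
```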
30 Applying Clustering of Hierarchical K-means-like Algorithm on Arabic Language
Authors: Sameh H. Ghwanmeh
Abstract:
In this study, a clustering technique has been implemented which is K-means-like with a hierarchical initial set (HKM). The goal of this study is to show that clustering document sets enhances precision in information retrieval systems, as was proved by Bellot & El-Beze for the French language. A comparison is made between the traditional information retrieval system and the clustered one. The effect of increasing the number of clusters on precision is also studied. The indexing technique is Term Frequency * Inverse Document Frequency (TF * IDF). It has been found that Hierarchical K-Means-Like clustering (HKM) with 3 clusters over 242 Arabic abstract documents from the Saudi Arabian National Computer Conference yields significant results compared with the traditional information retrieval system without clustering. Additionally, it has been found that increasing the number of clusters does not further improve precision.
Keywords: Hierarchical K-means-like clustering (HKM), K-means, cluster centroids, initial partition, and document distances.
29 Generation of Photo-Mosaic Images through Block Matching and Color Adjustment
Authors: Hae-Yeoun Lee
Abstract:
Mosaic refers to a technique that creates an image by assembling many small pieces of material in various colors. This paper presents an automatic algorithm that creates a photo-mosaic image from photos. The algorithm is composed of 4 steps: partition and feature extraction, block matching, redundancy removal and color adjustment. The input image is partitioned into small blocks to extract features. Each block is matched to a similar photo in the database by comparing the Euclidean difference between blocks. The intensity of each block is adjusted to enhance the similarity of the image by replacing its light and darkness values with those of the relevant block. Further, the image quality is improved by minimizing the redundancy of tiles in adjacent blocks. Experimental results show that the proposed algorithm performs well in both quantitative and qualitative analysis.
Keywords: Photo-mosaic, Euclidean distance, Block matching, Intensity adjustment.
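The block matching and intensity adjustment steps can be pictured as follows: each input block is compared with pre-sized database tiles using the Euclidean distance, and the chosen tile is shifted toward the block's mean brightness. The sketch below is only an illustration of those two steps under simplifying assumptions (grayscale arrays, a fixed block size, tiles already resized to the block); it omits the feature extraction and redundancy removal steps.

```python
import numpy as np

def build_mosaic(image, tiles, block=16):
    """Replace each block of a grayscale image with the nearest database tile (Euclidean
    distance) after shifting the tile toward the block's mean intensity.
    image: 2-D array; tiles: array of shape (n_tiles, block, block)."""
    h, w = (image.shape[0] // block) * block, (image.shape[1] // block) * block
    out = np.zeros((h, w))
    flat_tiles = tiles.reshape(len(tiles), -1)
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = image[y:y + block, x:x + block]
            d = np.linalg.norm(flat_tiles - blk.ravel(), axis=1)    # block matching
            t = tiles[int(np.argmin(d))]
            t = np.clip(t + (blk.mean() - t.mean()), 0, 255)        # intensity adjustment
            out[y:y + block, x:x + block] = t
    return out
```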
28 Subcritical Water Extraction of Mannitol from Olive Leaves
Authors: S. M. Ghoreishi, R. Gholami Shahrestani, S. H. Ghaziaskar
Abstract:
Subcritical water extraction was investigated as a novel and alternative technology in the food and pharmaceutical industry for the separation of mannitol from olive leaves, and its results were compared with those of Soxhlet extraction. The effects of temperature, pressure, and flow rate of water, as well as of momentum and mass transfer dimensionless variables such as the Reynolds and Peclet numbers, on the extraction yield and equilibrium partition coefficient were investigated. The water operating conditions were 30-110 bar, 60-150 °C, and flow rates of 0.2-2 mL/min. The results revealed that the highest mannitol yield was obtained at 100 °C and 50 bar. However, the extraction of mannitol was not influenced by variations of the flow rate. The mathematical modeling of the experimental measurements was also investigated, and the model is capable of predicting the experimental measurements very well. In addition, the results indicated a higher extraction yield for subcritical water extraction in contrast to the Soxhlet method.
Keywords: Extraction, mannitol, modeling, olive leaves, Soxhlet extraction, subcritical water.
27 A Decision Boundary based Discretization Technique using Resampling
Authors: Taimur Qureshi, Djamel A Zighed
Abstract:
Many supervised induction algorithms require discrete data, even though real data often comes in both discrete and continuous formats. Quality discretization of continuous attributes is an important problem that affects the speed, accuracy and understandability of the induction models. Usually, discretization and other types of statistical processes are applied to subsets of the population, as the entire population is practically inaccessible. For this reason we argue that the discretization performed on a sample of the population is only an estimate of the entire population. Most of the existing discretization methods partition the attribute range into two or several intervals using a single cut point or a set of cut points. In this paper, we introduce a technique that uses resampling (such as the bootstrap) to generate a set of candidate discretization points, thus improving the discretization quality by providing a better estimate with respect to the entire population. The goal of this paper is to observe whether the resampling technique can lead to better discretization points, which opens up a new paradigm for the construction of soft decision trees.
Keywords: Bootstrap, discretization, resampling, soft decision trees.
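The resampling idea can be sketched as: draw bootstrap samples, compute candidate cut points on each sample, and pool the candidates so that the final discretization reflects the population better than a single-sample estimate. In the sketch below, the candidate generator is simply the midpoint between consecutive sorted attribute values where the class label changes; this simple rule is a stand-in assumption for the paper's decision-boundary criterion.

```python
import numpy as np

def boundary_cut_points(x, y):
    """Midpoints between consecutive sorted attribute values where the class label changes."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    change = ys[1:] != ys[:-1]
    return (xs[1:][change] + xs[:-1][change]) / 2.0

def bootstrap_cut_points(x, y, n_boot=100, seed=0):
    """Pool candidate cut points over bootstrap resamples of (x, y)."""
    rng = np.random.default_rng(seed)
    pooled = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))          # resample with replacement
        pooled.append(boundary_cut_points(x[idx], y[idx]))
    return np.concatenate(pooled)
```

The pooled candidates can then be summarized, for instance by their density peaks, to choose the final cut points.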
26 Analysis of Diverse Cluster Ensemble Techniques
Authors: S. Sarumathi, N. Shanthi, P. Ranjetha
Abstract:
Data mining is the procedure of determining interesting patterns from huge amounts of data. Clustering is one of the most important supporting processes for accessing data faster. Clustering is the process of identifying similarity between data according to the characteristics present in the data and grouping associated data objects into clusters. A cluster ensemble is a technique that combines various runs of different clustering algorithms to obtain a general partition of the original dataset, aiming at consolidating the outcomes of a collection of individual clusterings. The performance of clustering ensembles is mainly affected by two principal factors: diversity and quality. This paper presents an overview of different cluster ensemble algorithms, along with the methods used to improve diversity and quality in several cluster ensemble related papers, shows a comparative analysis of different cluster ensembles, and summarizes various cluster ensemble methods. This analysis will be very useful for the community of clustering experts and also helps in deciding the most appropriate method for the problem at hand.
Keywords: Cluster ensemble, consensus function, CSPA, diversity, HGPA, MCLA.
25 N-Sun Decomposition of Complete Graphs and Complete Bipartite Graphs
Authors: R. Anitha, R. S. Lekshmi
Abstract:
Graph decompositions are vital in the study of combinatorial design theory. Given two graphs G and H, an H-decomposition of G is a partition of the edge set of G into disjoint isomorphic copies of H. An n-sun is a cycle Cn with an edge terminating in a vertex of degree one attached to each vertex. In this paper we prove that the complete graph of order 2n, K2n, can be decomposed into n-2 n-suns, a Hamilton cycle and a perfect matching when n is even; for the odd case, the decomposition is n-1 n-suns and a perfect matching. For an odd-order complete graph K2n+1, delete the star subgraph K1,2n and the resultant graph K2n is decomposed as in the even-order case. The method of building n-suns uses Walecki's construction for the Hamilton decomposition of complete graphs. A spanning tree decomposition of even-order complete graphs is also discussed using the labeling scheme of the n-sun decomposition. A complete bipartite graph Kn,n can be decomposed into n/2 n-suns when n/2 is even. When n/2 is odd, Kn,n can be decomposed into (n-2)/2 n-suns and a Hamilton cycle.
Keywords: Hamilton cycle, n-sun decomposition, perfect matching, spanning tree.
24 A Probability based Pair Extension Method in Protein 2-DE Gel Image Analysis
Authors: Yanhua Jin, Won Suk Lee
Abstract:
The two-dimensional gel electrophoresis method (2-DE) is widely used in proteomics to separate thousands of proteins in a sample. By comparing the protein expression levels in a normal sample with those in a diseased one, it is possible to identify a meaningful set of marker proteins for the targeted disease. The major shortcomings of this approach involve inherent noise and irregular geometric distortions of the spots observed in 2-DE images. Various experimental conditions can be the major causes of these problems. In the protein analysis of samples, these problems eventually lead to incorrect conclusions. In order to minimize the influence of these problems, this paper proposes a partition based pair extension method that performs spot-matching on a set of gel images multiple times and segregates the more reliable mapping results, which can improve the accuracy of gel image analysis. The improved accuracy of the proposed method is analyzed through various experiments on real 2-DE images of human liver tissues.
Keywords: Proteomics, spot-matching, two-dimensional electrophoresis.
23 Research on the Transformation of Bottom Space in the Teaching Area of Zijingang Campus, Zhejiang University
Authors: Jia Xu
Abstract:
There is a lot of bottom space in the teaching area of Zijingang Campus of Zhejiang University, which benefits ventilation, heat dissipation, circulation, the partition of quiet and noisy areas, and the diversification of spaces. Hangzhou is hot in summer but cold in winter, so teachers and students spend much less time in the bottom space of buildings in winter than in summer. Recently, based on the proposals of teachers and students, the school transformed the bottom space in the teaching area to provide space for relaxing, chatting and staying in winter. By surveying and analyzing the existing transformation approaches, the paper examines in depth the transformation projects of bottom space in the teaching buildings. It is believed that this paper can serve as a useful reference for making the bottom space in the teaching areas of universities richer and for bringing more diverse activities to teachers and students.
Keywords: Bottom space, teaching area, transformation, Zijingang Campus of Zhejiang University.
22 Quantum Statistical Mechanical Formulations of Three-Body Problems via Non-Local Potentials
Authors: A. Maghari, V. H. Maleki
Abstract:
In this paper, we present a quantum statistical mechanical formulation based on our recent analytical expressions for the partial-wave transition matrix of a three-particle system. We report the quantum reactive cross sections for three-body scattering processes 1+(2,3)→1+(2,3) as well as recombination 1+(2,3)→1+(3,1) between one atom and a weakly-bound dimer. The analytical expressions of the three-particle transition matrices and their corresponding cross sections were obtained from the three-dimensional Faddeev equations subjected to rank-two non-local separable potentials of the generalized Yamaguchi form. The equilibrium quantum statistical mechanical properties, such as the partition function and the equation of state, as well as non-equilibrium quantum statistical properties, such as transport cross sections and their corresponding transport collision integrals, were formulated analytically. This leads to the transport properties, such as the viscosity and diffusion coefficient, of a moderately dense gas.
Keywords: Statistical mechanics, non-local separable potential, three-body interaction, Faddeev equations.
21 Novel Anti-leukemia Calanone Compounds by Quantitative Structure-Activity Relationship AM1 Semiempirical Method
Authors: Ponco Iswanto, Mochammad Chasani, Muhammad Hanafi, Iqmal Tahir, Eva Vaulina YD, Harjono, Lestari Solikhati, Winkanda S. Putra, Yayuk Yuliantini
Abstract:
A Quantitative Structure-Activity Relationship (QSAR) approach for discovering novel, more active Calanone derivatives as anti-leukemia compounds has been conducted. Six experimental activities of Calanone compounds against the leukemia cell line L1210 are used as the material of the research. Calculation of the theoretical predictors (independent variables) was performed by the AM1 semiempirical method. The QSAR equation is determined by Principal Component Regression (PCR) analysis, with log IC50 as the dependent variable and atomic net charges, dipole moment (μ), and the n-octanol/water partition coefficient (log P) as independent variables. Three novel Calanone derivatives obtained in this research have higher activity against the leukemia cell line L1210 than pure Calanone.
Keywords: AM1 semiempirical calculation, Calanone, Principal Component Regression, QSAR approach.
20 Minimizing Mutant Sets by Equivalence and Subsumption
Authors: Samia Alblwi, Amani Ayad
Abstract:
Mutation testing is the art of generating syntactic variations of a base program and checking whether a candidate test suite can identify all the mutants that are not semantically equivalent to the base; this technique can be used to assess the quality of a test suite. One of the main obstacles to the widespread use of mutation testing is cost, as even small programs (a few dozen lines of code) can give rise to a large number of mutants (up to hundreds); this has created an incentive to reduce the number of mutants while preserving their collective effectiveness. Two criteria have been used to reduce the size of mutant sets: equivalence, which aims to partition the set of mutants into equivalence classes modulo semantic equivalence and to select one representative per class; and subsumption, which aims to define a partial ordering among mutants that ranks mutants by effectiveness and seeks to select the maximal elements in this ordering. In this paper, we analyze these two policies using analytical and empirical criteria.
Keywords: Mutation testing, mutant sets, mutant equivalence, mutant subsumption, mutant set minimization.
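Both policies can be pictured on the usual kill-matrix abstraction, where each mutant is represented by the set of tests that kill it: mutants with identical kill sets are grouped as indistinguishable on the suite, and a mutant whose kill set is a subset of another's subsumes it (killing the former guarantees killing the latter). The sketch below illustrates that abstraction only; it is not the authors' analysis, and the kill-set view is itself only an approximation of semantic equivalence.

```python
def minimize_mutants(kill_sets):
    """kill_sets: dict mutant_id -> frozenset of tests that kill it (empty = never killed).
    Keeps one representative per equivalence class (identical kill sets) and then only
    the mutants whose kill sets are subset-minimal, i.e. the subsuming ones."""
    killed = {m: s for m, s in kill_sets.items() if s}        # drop never-killed mutants
    reps = {}
    for m, s in killed.items():
        reps.setdefault(s, m)                                 # one representative per kill set
    return [m for s, m in reps.items()
            if not any(other < s for other in reps)]          # no strictly smaller kill set exists

kill = {"m1": frozenset({"t1", "t2"}), "m2": frozenset({"t1"}),
        "m3": frozenset({"t1", "t2"}), "m4": frozenset()}
print(minimize_mutants(kill))   # -> ['m2'] (m2 subsumes m1/m3; m4 is never killed)
```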
19 Face Texture Reconstruction for Illumination Variant Face Recognition
Authors: Pengfei Xiong, Lei Huang, Changping Liu
Abstract:
In illumination-variant face recognition, existing methods that extract the face albedo as a light-normalized image may lead to the loss of extensive facial details, since the lighting template is discarded. To improve on this, a novel approach for realistic facial texture reconstruction that combines the original image and the albedo image is proposed. First, light subspaces of different identities are established from the given reference face images; then, by projecting the original and albedo images into each light subspace respectively, texture reference images with the corresponding lighting are reconstructed and two texture subspaces are formed. According to the projections in the texture subspaces, facial texture with normal light can be synthesized. Due to the combination with the original image, facial details can be preserved along with the face albedo. In addition, image partition is applied to improve the synthesis performance. Experiments on the Yale B and CMU-PIE databases demonstrate that this algorithm outperforms the others both in image representation and in face recognition.
Keywords: Texture reconstruction, illumination, face recognition, subspaces.
18 An Analysis of the Social Network Structure of Knowledge Management Students at NTU
Authors: Guo Yanru, Zhu Xiaobo, Lee Chu Keong
Abstract:
This paper maps the structure of the social network of the 2011 class of sixty graduate students of the Master of Science (Knowledge Management) programme at the Nanyang Technological University, based on their friending relationships on Facebook. To ensure anonymity, actual names were not used. Instead, they were replaced with codes constructed from their gender, nationality, mode of study, year of enrollment and a unique number. The relationships between friends within the class, and among the seniors and alumni of the programme, were plotted. UCINet and Pajek were used to plot the sociogram, to compute the density, inclusivity, and the degree, global, betweenness, and Bonacich centralities, to partition the students into two groups, namely active and peripheral, and to identify the cut-points. Homophily was investigated, and it was observed for nationality and study mode. The groups the students formed on Facebook were also studied; of fifteen groups, eight were classified as dead, which we defined as those that had been inactive for over two months.
Keywords: Facebook, friending relationships, social network analysis, social network sites, structural position.
17 A Reconfigurable Distributed Multiagent System Optimized for Scalability
Authors: Summiya Moheuddin, Afzel Noore, Muhammad Choudhry
Abstract:
This paper proposes a novel solution for optimizing the size and communication overhead of a distributed multiagent system without compromising performance. The proposed approach addresses the challenges of scalability, especially when the multiagent system is large. A modified spectral clustering technique is used to partition a large network into logically related clusters. Agents are assigned to monitor dedicated clusters rather than monitoring each device or node. The proposed scalable multiagent system is implemented using JADE (Java Agent Development Environment) for a large power system. The performance of the proposed topology-independent decentralized multiagent system and the scalable multiagent system is compared by comprehensively simulating different fault scenarios. The time taken for reconfiguration, the overall computational complexity, and the communication overhead incurred are computed. The results of these simulations show that the proposed scalable multiagent system uses fewer agents efficiently, makes faster decisions to reconfigure when a fault occurs, and incurs significantly less communication overhead.
Keywords: Multiagent system, scalable design, spectral clustering, reconfiguration.
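The partitioning step can be pictured as standard spectral clustering on the network's adjacency matrix: form the normalized graph Laplacian, take its bottom eigenvectors, and run k-means on the embedded nodes so that each agent can be assigned one resulting cluster of devices. The sketch below shows that generic procedure, not the paper's modified technique or its JADE integration; the adjacency matrix and the number of clusters k are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_partition(A, k, seed=0):
    """Partition a network with symmetric adjacency matrix A (n x n) into k clusters
    using the normalized graph Laplacian and k-means on its bottom eigenvectors."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt            # normalized Laplacian
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]                                             # k smallest eigenvectors
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12       # row-normalize the embedding
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(U)
```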
16 Covering-based Rough Sets Based on the Refinement of Covering-element
Authors: Jianguo Tang, Kun She, William Zhu
Abstract:
Covering-based rough sets are an extension of rough sets based on a covering instead of a partition of the universe, and are therefore more powerful than rough sets in describing some practical problems. However, by extending rough sets, covering-based rough sets can increase the roughness of each model in recognizing objects. How to obtain better approximations from the models of covering-based rough sets is an important issue. In this paper, two concepts, determinate elements and indeterminate elements in a universe, are proposed and given precise definitions. This research makes a reasonable refinement of the covering-element from a new viewpoint, and the refinement may generate better approximations of covering-based rough set models. To verify the theory, it is applied to eight major covering-based rough set models adapted from the literature. The result is that, in all these models, the lower approximation increases effectively. Correspondingly, in all models the upper approximation decreases, with the exception of two models in some special situations. Therefore, the roughness in recognizing objects is reduced. This research provides a new approach to the study and application of covering-based rough sets.
Keywords: Determinate element, indeterminate element, refinement of covering-element, refinement of covering, covering-based rough sets.
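For the simplest covering-based model, the lower approximation of a target set X collects the covering elements entirely contained in X, while the upper approximation collects those that intersect X; the refinement studied in the paper aims to enlarge the former and shrink the latter. The sketch below implements only these two baseline operators for one such model, not any of the eight refined models analysed in the paper.

```python
def covering_approximations(covering, X):
    """Lower/upper approximations of a set X with respect to a covering (a list of sets
    whose union is the universe), for the simplest covering-based model:
    lower = union of covering elements inside X, upper = union of those meeting X."""
    X = set(X)
    lower = set().union(*(K for K in covering if K <= X))
    upper = set().union(*(K for K in covering if K & X))
    return lower, upper

C = [{1, 2}, {2, 3}, {4}, {4, 5}]
print(covering_approximations(C, {1, 2, 4}))   # lower = {1, 2, 4}, upper = {1, 2, 3, 4, 5}
```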
15 Effect of Interior Brick-infill Partitions on the Progressive Collapse Potential of an RC Building: Linear Static Analysis Results
Authors: Meng-Hao Tsai, Tsuei-Chiang Huang
Abstract:
Interior brick-infill partitions are usually considered as non-structural components, and only their weight is accounted for in practical structural design. In this study, the brick-infill panels are simulated by compression struts to clarify their effect on the progressive collapse potential of an earthquake-resistant RC building. Three-dimensional finite element models are constructed for the RC building subjected to sudden column loss. Linear static analyses are conducted to investigate the variation of the demand-to-capacity ratio (DCR) of the beam-end moment and the axial force variation of the beams adjacent to the removed column. Study results indicate that the brick-infill effect depends on the infill location with respect to the removed column. When the infills occupy a structural bay with a shorter span adjacent to the column-removal line, a more significant reduction of the DCR may be achieved. However, under certain conditions, the brick infill may increase the axial tension of the two-span beam bridging the removed column.
Keywords: Progressive collapse, brick-infill partition, compression strut.
14 A Comprehensive Review on Different Mixed Data Clustering Ensemble Methods
Authors: S. Sarumathi, N. Shanthi, S. Vidhya, M. Sharmila
Abstract:
An extensive amount of work has been done in data clustering research under the unsupervised learning technique in data mining during the past two decades. Moreover, several approaches and methods have emerged focusing on clustering diverse data types, features of cluster models and similarity rates of clusters. However, no single clustering algorithm performs best in extracting efficient clusters. Consequently, in order to rectify this issue, a new and challenging technique called the cluster ensemble method has emerged. This new approach serves as an alternative method for the cluster analysis problem. The main objective of a cluster ensemble is to aggregate diverse clustering solutions in such a way as to attain accuracy and to improve on the quality of the individual clustering algorithms. Due to the massive and rapid development of new methods in the field of data mining, a critical analysis of existing techniques and future novelty is highly necessary. This paper shows a comparative analysis of different cluster ensemble methods along with their methodologies and salient features. This analysis will be very useful for the community of clustering experts and also helps in deciding the most appropriate method to resolve the problem at hand.
Keywords: Clustering, cluster ensemble methods, co-association matrix, consensus function, median partition.
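Among the consensus functions used by such ensembles, the co-association (evidence accumulation) matrix is the easiest to picture: its entry (i, j) is the fraction of base clusterings that place objects i and j in the same cluster, and a consensus partition is then obtained by clustering this matrix. A minimal sketch of building the matrix is shown below; the final consensus step (for example, hierarchical clustering on one minus the matrix) is only indicated in a comment and is an assumption.

```python
import numpy as np

def co_association(labelings):
    """Co-association matrix of a cluster ensemble: entry (i, j) is the fraction of base
    clusterings in which objects i and j share a cluster. labelings: list of 1-D label arrays."""
    labelings = [np.asarray(l) for l in labelings]
    n = len(labelings[0])
    M = np.zeros((n, n))
    for labels in labelings:
        M += (labels[:, None] == labels[None, :]).astype(float)
    return M / len(labelings)

# a consensus partition could then be derived, e.g. by hierarchical clustering
# on the distance matrix 1 - co_association(labelings)
```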
13 Double Diffusive Convection in a Partially Porous Cavity under Suction/Injection Effects
Authors: Y. Outaleb, K. Bouhadef, O. Rahli
Abstract:
Double-diffusive steady convection in a partially porous cavity with partially permeable walls, under the combined buoyancy effects of thermal and mass diffusion, was analysed numerically using the finite volume method. The top wall is well insulated and impermeable, while the bottom surface is partially well insulated and impermeable and partially subjected to a constant temperature T1 and concentration C1. A constant equal temperature T2 and concentration C2 are imposed along the vertical surfaces of the enclosure. Mass suction/injection and injection/suction are considered at the bottom of the porous centred partition and at one of the vertical walls, respectively. Heat and mass transfer characteristics, such as streamlines and average Nusselt and Sherwood numbers, were discussed for different values of the buoyancy ratio, Rayleigh number, and injection/suction coefficient. It is especially noted that increasing the injection factor reduces the exchanges in the case of injection, while the transfer is augmented in the case of suction. On the other hand, a critical value of the buoyancy ratio was highlighted for which heat and mass transfers are minimized.
Keywords: Double diffusive convection, injection/extraction, partially porous cavity.