Search results for: vague sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1303

1303 Some New Hesitant Fuzzy Sets Operator

Authors: G. S. Thakur

Abstract:

In this paper, four new operators (O1, O2, O3, O4) on hesitant fuzzy sets are proposed and defined, and new properties and identities involving them are studied. These operators are useful for different operations on hesitant fuzzy sets, and various theorems are proved using them. The study of the proposed operators opens a new area of research and applications.
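
As background, here is a minimal Python sketch of the standard hesitant fuzzy set operations (union, intersection, complement) in the commonly used Xia-Xu convention, where a hesitant fuzzy element is a finite set of membership degrees in [0, 1]; the four operators O1-O4 are not defined in this abstract, so they are not reproduced here.

```python
def hfs_union(h1, h2):
    # pairwise maxima of membership degrees (Xia-Xu convention)
    return {max(a, b) for a in h1 for b in h2}

def hfs_intersection(h1, h2):
    # pairwise minima of membership degrees
    return {min(a, b) for a in h1 for b in h2}

def hfs_complement(h):
    return {1 - a for a in h}

h1, h2 = {0.2, 0.5}, {0.4, 0.7}
print(hfs_union(h1, h2))         # {0.4, 0.5, 0.7}
print(hfs_intersection(h1, h2))  # {0.2, 0.4, 0.5}
print(hfs_complement(h1))        # {0.8, 0.5}
```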

Keywords: vague sets, hesitant fuzzy sets, intuitionistic fuzzy set, fuzzy sets, fuzzy multisets

Procedia PDF Downloads 251
1302 General Network with Four Nodes and Four Activities with Triangular Fuzzy Number as Activity Times

Authors: Rashmi Tamhankar, Madhav Bapat

Abstract:

In many projects, we have to use human judgment to determine the durations of activities, and these judgments may vary from person to person. Hence, there is vagueness about activity durations in network planning. Fuzzy sets can handle such vague or imprecise concepts and have a natural application to such networks. The vague activity times can be represented by triangular fuzzy numbers. In this paper, a general network with fuzzy activity times is considered, conditions for the critical path are obtained, and the total float of each activity is computed. Several numerical examples are discussed.
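
To make the idea concrete, here is a minimal sketch (not the paper's model) of triangular-fuzzy-number arithmetic and a forward pass over a tiny activity network; the fuzzy maximum is approximated by centroid ranking, one common simplification.

```python
# A triangular fuzzy number (TFN) is a triple (a, b, c) with a <= b <= c.
def tfn_add(x, y):
    return (x[0] + y[0], x[1] + y[1], x[2] + y[2])

def tfn_rank(x):
    return sum(x) / 3.0   # centroid ranking, one common way to compare TFNs

activities = {                     # activity -> (predecessors, fuzzy duration)
    "A": ([], (2, 3, 4)),
    "B": ([], (1, 2, 5)),
    "C": (["A", "B"], (3, 4, 6)),
}
earliest = {}
for act in ["A", "B", "C"]:        # topological order assumed
    preds, dur = activities[act]
    start = max((earliest[p] for p in preds), key=tfn_rank, default=(0, 0, 0))
    earliest[act] = tfn_add(start, dur)
print(earliest["C"])               # fuzzy earliest finish of C: (5, 7, 10)
```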

Keywords: PERT, CPM, triangular fuzzy numbers, fuzzy activity times

Procedia PDF Downloads 427
1301 Optimizing and Evaluating Performance Quality Control of the Production Process of Disposable Essentials Using a Vague Goal Programming Approach

Authors: Hadi Gholizadeh, Ali Tajdin

Abstract:

To have effective production planning, it is necessary to control the quality of processes. This paper aims at improving the performance of the disposable-essentials process using statistical quality control and goal programming in a vague environment; the uncertainty arises because there is always measurement error in the real world. Therefore, in this study, the conditions are examined in a vague, distance-based environment. The disposable-essentials process at Kach Company was studied. Statistical control tools were used to characterize the existing process for four factor responses: the average weights, heights, crater diameters, and volumes of disposable glasses. Goal programming was then utilized to find the combination of optimal factor settings in a vague environment, which captures the uncertainty of the initial information when some of the model parameters are vague; a fuzzy regression model is also used to predict the responses of the four described factors. Optimization results show that the process capability index values for the average weights, heights, crater diameters, and volumes of disposable glasses were improved. This raises the quality of the products and reduces waste, which lowers the cost of the finished product and ultimately brings customer satisfaction, which in turn means increased sales.
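
As an illustration of the goal-programming machinery (with made-up coefficients, not the Kach Company data), the sketch below minimizes the total deviation from two response targets using scipy:

```python
from scipy.optimize import linprog

# variables: [x1, x2, d1m, d1p, d2m, d2p]; d*m/d*p = under-/over-achievement
c = [0, 0, 1, 1, 1, 1]              # minimize the sum of goal deviations
A_eq = [[2, 1, 1, -1, 0, 0],        # goal 1: 2*x1 +   x2 + d1m - d1p = 10
        [1, 3, 0, 0, 1, -1]]        # goal 2:   x1 + 3*x2 + d2m - d2p = 15
b_eq = [10, 15]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(res.x[:2])                    # factor settings; here both goals are met
```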

Keywords: goal programming, quality control, vague environment, disposable glasses’ optimization, fuzzy regression

Procedia PDF Downloads 197
1300 A Study of Closed Sets and Maps with Ideals

Authors: Asha Gupta, Ramandeep Kaur

Abstract:

The purpose of this paper is to study a class of closed sets, called generalized pre-closed sets with respect to an ideal (briefly, Igp-closed sets), which is an extension of generalized pre-closed sets in general topology. Then, by using these sets, the concepts of Igp-compact spaces, along with some classes of maps such as continuous and closed maps via ideals, are introduced, and analogues of some known results for compact spaces, continuous maps, and closed maps in general topology are obtained.
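
The abstract does not state the defining condition of an Igp-closed set; one common formulation in the ideal-topology literature, offered here as an assumption rather than the authors' exact definition, is:

```latex
% (X, \tau, I) an ideal topological space, pcl the pre-closure operator
A \subseteq X \text{ is } I_{gp}\text{-closed} \iff
  \big(\, A \subseteq U,\ U \text{ open} \ \Longrightarrow\ \mathrm{pcl}(A) \setminus U \in I \,\big).
```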

Keywords: ideal, gp-closed sets, gp-closed maps, gp-continuous maps

Procedia PDF Downloads 179
1299 Application of Soft Sets to Non-Associative Rings

Authors: Inayatur Rehman

Abstract:

Molodtsov developed the theory of soft sets, which can be seen as an effective tool to deal with uncertainties. Since the introduction of this concept, the application of soft sets has been restricted to associative algebraic structures (groups, semigroups, associative rings, semi-rings, etc.). Admittedly, the study of soft sets where the base set of parameters is a commutative structure has attracted the attention of many researchers for more than a decade. On the other hand, there are many sets naturally endowed with two compatible binary operations forming a non-associative ring, and one may find examples that investigate a non-associative structure in the context of soft sets. Thus it seems natural to apply the concept of soft sets to non-commutative and non-associative structures. In the present paper, we take a new approach and apply Molodtsov's notion of soft sets to LA-rings (a class of non-associative rings). We extend the study of soft commutative rings from the theoretical aspect.

Keywords: soft sets, LA-rings, soft LA-rings, soft ideals, soft prime ideals, idealistic soft LA-rings, LA-ring homomorphism

Procedia PDF Downloads 419
1298 A Fuzzy Nonlinear Regression Model for Interval Type-2 Fuzzy Sets

Authors: O. Poleshchuk, E. Komarov

Abstract:

This paper presents a regression model for interval type-2 fuzzy sets based on the least squares estimation technique. Unknown coefficients are assumed to be triangular fuzzy numbers. The basic idea is to determine aggregation intervals for type-1 fuzzy sets whose membership functions are the lower and upper membership functions of an interval type-2 fuzzy set. These aggregation intervals are called weighted intervals. The lower and upper membership functions of the input and output interval type-2 fuzzy sets in the developed regression models are taken to be piecewise linear functions.
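
The weighted-interval construction is not fully specified in this abstract; as a simplified stand-in, the sketch below fits separate least-squares lines to the lower and upper endpoints of interval-valued outputs, a naive interval regression rather than the authors' method.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y_low = np.array([1.8, 3.6, 5.9, 7.7])   # lower endpoints of output intervals
y_up = np.array([2.4, 4.5, 6.4, 8.6])    # upper endpoints

low_coef = np.polyfit(x, y_low, 1)       # slope and intercept, lower bound
up_coef = np.polyfit(x, y_up, 1)         # slope and intercept, upper bound

def predict_interval(x_new):
    # note: with real data the two lines may cross; a full method constrains them
    return (np.polyval(low_coef, x_new), np.polyval(up_coef, x_new))

print(predict_interval(2.5))
```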

Keywords: interval type-2 fuzzy sets, fuzzy regression, weighted interval

Procedia PDF Downloads 332
1297 Evaluation of Environmental and Social Management System of Green Climate Fund's Accredited Entities: A Qualitative Approach Applied to Environmental and Social System

Authors: Sima Majnooni

Abstract:

This paper discusses the environmental and social management framework of the Green Climate Fund (GCF). The environmental and social management framework ensures that an accredited entity considers the GCF's accreditation standards and effectively implements each of the GCF-funded projects. The GCF requires all accredited entities (AEs) to meet basic transparency and accountability standards as well as environmental and social safeguards (ESMS). In doing so, the accredited entity sets up several independent units, one of which is the grievance mechanism. When allegations of environmental and social harms are raised in association with GCF-funded activities, affected parties can contact the entity's grievance unit. One of the most challenging things about the accredited entities' grievance units is the lack of available information and resources on the entities' websites. Many AEs have an anti-corruption or anti-money-laundering unit, but they do not have an environmental and social unit for affected people. This paper assesses the effectiveness of the environmental and social grievance mechanisms of AEs using a qualitative approach to indicate how many AEs have a poor or an effective mechanism. Some ESMSs seem highly effective; others lack basic requirements such as a clear, transparent, uniform procedure and a definitive timetable. We looked at each AE's mechanism not only in light of how much detail its website gives about the grievance process but also in light of its risk category. Many mechanisms appear inadequate for lower-risk-category entities (C) and, surprisingly, even for many higher-risk-category entities (A). We found that, in most cases, the grievance mechanisms of AEs are vague.

Keywords: grievance mechanism, vague environmental and social policies, green climate fund, international climate finance, lower and higher risk category

Procedia PDF Downloads 97
1296 A Note on the Fractal Dimension of Mandelbrot Set and Julia Sets in Misiurewicz Points

Authors: O. Boussoufi, K. Lamrini Uahabi, M. Atounti

Abstract:

The main purpose of this paper is to calculate the fractal dimension of some Julia sets and of the Mandelbrot set at Misiurewicz points. Using Matlab to generate the Julia set images that correspond to the Misiurewicz points, and using fractal analysis software, we were able to find different measures that characterize those fractals in texture and other features. We focus on the fractal dimension and the error calculated by the software. The dimension is obtained from the log-log regression slope: a box-counting method is applied to the entire image, with the chosen settings available in the FracLac program. Finally, a comparison is made for each image corresponding to the area (boundary) where the Misiurewicz point is located.
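
A minimal box-counting sketch in Python (numpy), assuming a square binary image, illustrates the dimension estimate D as the slope of log N(s) versus log(1/s); the Sierpinski carpet is used as a test pattern, not one of the paper's Julia sets.

```python
import numpy as np

def box_counting_dimension(img):
    """Estimate the fractal dimension of a square 2-D binary array."""
    n = img.shape[0]
    sizes = [2 ** k for k in range(1, int(np.log2(n)))]
    counts = []
    for s in sizes:
        m = n - n % s                        # trim so s divides the side
        boxes = img[:m, :m].reshape(m // s, s, -1, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def sierpinski(k):
    img = np.ones((1, 1), dtype=bool)
    for _ in range(k):
        z = np.zeros_like(img)
        img = np.block([[img, img, img], [img, z, img], [img, img, img]])
    return img

print(box_counting_dimension(sierpinski(5)))  # roughly log 8 / log 3 ~ 1.89
```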

Keywords: box counting, FracLac, fractal dimension, Julia Sets, Mandelbrot Set, Misiurewicz Points

Procedia PDF Downloads 178
1295 Efficient Recommendation System for Frequent and High Utility Itemsets over Incremental Datasets

Authors: J. K. Kavitha, D. Manjula, U. Kanimozhi

Abstract:

Mining frequent and high-utility item sets has gained much significance in recent years. When data arrive sporadically, incremental and interactive rule-mining and utility-mining approaches can be adopted to handle users' dynamic environmental needs and avoid redundancies by reusing previous data structures and mining results. Dependence on recommendation systems has risen exponentially since the advent of search engines. This paper proposes a model for building a recommendation system that suggests frequent and high-utility item sets over dynamic datasets for a cluster-based location prediction strategy to predict users' trajectories, using the Efficient Incremental Rule Mining (EIRM) algorithm and the Fast Update Utility Pattern Tree (FUUP) algorithm. Through comprehensive evaluation by experiments, this scheme has been shown to deliver excellent performance.

Keywords: data sets, recommendation system, utility item sets, frequent item sets mining

Procedia PDF Downloads 266
1294 Building 1-Well-Covered Graphs by Corona, Join, and Rooted Product of Graphs

Authors: Vadim E. Levit, Eugen Mandrescu

Abstract:

A graph is well-covered if all its maximal independent sets are of the same size. A well-covered graph is 1-well-covered if the deletion of every vertex of the graph leaves it well-covered. It is known that a graph without isolated vertices is 1-well-covered if and only if every two disjoint independent sets are included in two disjoint maximum independent sets. Well-covered graphs are related to combinatorial commutative algebra (e.g., every Cohen-Macaulay graph is well-covered, while every Gorenstein graph without isolated vertices is 1-well-covered). Our intent is to construct several infinite families of 1-well-covered graphs using the following known graph operations: corona, join, and rooted product of graphs. Adapting some known techniques used to advantage for well-covered graphs, one can prove that: if the graph G has no isolated vertices, then the corona of G and H is 1-well-covered if and only if H is a complete graph of order at least two; the join of the graphs G and H is 1-well-covered if and only if G and H have the same independence number and both are 1-well-covered; if H satisfies the property that every three pairwise disjoint independent sets are included in three pairwise disjoint maximum independent sets, then the rooted product of G and H is 1-well-covered, for every graph G. These findings show not only how to generate further families of 1-well-covered graphs, but also that, to this end, one may sometimes use graphs that are not necessarily 1-well-covered.
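
For small graphs, the definitions can be checked by brute force; a sketch using networkx, exploiting the fact that the maximal independent sets of G are exactly the maximal cliques of its complement:

```python
import networkx as nx

def is_well_covered(G):
    # all maximal independent sets (= maximal cliques of the complement) same size
    sizes = {len(c) for c in nx.find_cliques(nx.complement(G))}
    return len(sizes) == 1

def is_1_well_covered(G):
    # deletion of every vertex must leave the graph well-covered
    return is_well_covered(G) and all(
        is_well_covered(G.subgraph(set(G) - {v})) for v in G
    )

print(is_1_well_covered(nx.cycle_graph(5)))  # True: C5 is 1-well-covered
print(is_1_well_covered(nx.path_graph(4)))   # False: P4 is well-covered but not 1-well-covered
```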

Keywords: maximum independent set, corona, concatenation, join, well-covered graph

Procedia PDF Downloads 171
1293 LEGO Bricks and Creativity: A Comparison between Classic and Single Sets

Authors: Maheen Zia

Abstract:

Around the turn of the twenty-first century, LEGO decided to diversify its product range, which resulted in more specific, single-outcome sets occupying store shelves than classic kits with fairly all-purpose bricks. Earlier, LEGO sets came with more bricks and fewer instructions. Today, more single kits are produced and sold, and they come with a strictly defined set of guidelines: if one set is used to make a car, the same bricks cannot be put together to produce any other article. Earlier, multi-purpose bricks gave children a chance to be imaginative, to think of new items and construct them by just putting the same pieces together differently. The new products are less open-ended and offer players limited possibilities both in designing and in realizing those designs. The article reviews (in the light of existing research) how classic LEGO sets can enhance a child's creativity in comparison with single sets, which allow a player to interact with the bricks, not experiment with them.

Keywords: constructive play, creativity, LEGO, play-based learning

Procedia PDF Downloads 162
1292 Imputation Technique for Feature Selection in Microarray Data Set

Authors: Younies Saeed Hassan Mahmoud, Mai Mabrouk, Elsayed Sallam

Abstract:

Analysing DNA microarray data sets is a great challenge for bioinformaticians due to the complexity of the statistical and machine-learning techniques involved. The challenge is doubled if the microarray data sets contain missing data, which happens regularly, because these techniques cannot deal with missing data. One of the most important analysis processes on microarray data sets is feature selection, which finds the most important genes affecting a certain disease. In this paper, we introduce a technique for imputing the missing data in microarray data sets while performing feature selection.
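
The specific imputation technique is not spelled out in this abstract; as a generic illustration of the impute-then-select pipeline, a scikit-learn sketch on toy data:

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.feature_selection import SelectKBest, f_classif

# toy expression matrix: 6 samples x 5 genes, with missing entries (np.nan)
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 5))
X[0, 1] = X[3, 4] = np.nan
y = np.array([0, 0, 0, 1, 1, 1])                       # disease labels

X_full = KNNImputer(n_neighbors=2).fit_transform(X)    # impute first
selector = SelectKBest(f_classif, k=2).fit(X_full, y)  # then rank genes
print(selector.get_support(indices=True))              # indices of top genes
```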

Keywords: DNA microarray, feature selection, missing data, bioinformatics

Procedia PDF Downloads 530
1291 Multi-Objective Production Planning Problem: A Case Study of Certain and Uncertain Environment

Authors: Ahteshamul Haq, Srikant Gupta, Murshid Kamal, Irfan Ali

Abstract:

This case study designs and builds a multi-objective production planning model for a hardware firm with both certain and uncertain data. During interactions with the manager of the firm, it emerged that some of the parameters may be vague. This vagueness in the formulated model is handled with fuzzy set theory. Triangular and trapezoidal fuzzy numbers are used to represent the uncertainty in the collected data. The fuzzy values are defuzzified into crisp form using the well-known graded mean integration representation method. The proposed model attempts to maximize the firm's production and the profit from the manufactured items, and to minimize inventory carrying costs, in both certain and uncertain environments. The recommended optimal plan is determined via a fuzzy programming approach, and the formulated models are solved with the optimization software LINGO 16.0 to obtain the optimal production plan. The proposed model yields an efficient compromise solution with the overall satisfaction of the decision maker.
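
For reference, the standard graded mean integration representation formulas used for this kind of defuzzification, as a sketch with illustrative numbers:

```python
def gmir_triangular(a, b, c):
    # graded mean integration representation of a triangular fuzzy number
    return (a + 4 * b + c) / 6.0

def gmir_trapezoidal(a, b, c, d):
    # graded mean integration representation of a trapezoidal fuzzy number
    return (a + 2 * b + 2 * c + d) / 6.0

print(gmir_triangular(2, 3, 5))      # 3.1666...
print(gmir_trapezoidal(1, 2, 4, 7))  # 3.3333...
```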

Keywords: production planning problem, multi-objective optimization, fuzzy programming, fuzzy sets

Procedia PDF Downloads 174
1290 The Analysis of Split Graphs in Social Networks Based on the k-Cardinality Assignment Problem

Authors: Ivan Belik

Abstract:

In terms of social networks, split graphs correspond to a variety of interpersonal and intergroup relations. In this paper, we analyse the interaction between cliques (socially strong and trusted groups) and independent sets (fragmented and non-connected groups of people) as the basic components of any split graph. Based on the semi-Lagrangean relaxation of the k-cardinality assignment problem, we show how to minimize the socially risky interactions between the cliques and the independent sets within the social network.
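
As a side note on recognizing the clique/independent-set structure, a split graph can be detected from its degree sequence alone; the sketch below assumes the Hammer-Simeone characterization stated in the docstring.

```python
def is_split(degrees):
    """Hammer-Simeone test: with degrees sorted d1 >= ... >= dn and
    m = max{i : d_i >= i - 1}, the graph is split iff
    sum_{i<=m} d_i == m*(m-1) + sum_{i>m} d_i."""
    d = sorted(degrees, reverse=True)
    m = max(i for i in range(1, len(d) + 1) if d[i - 1] >= i - 1)
    return sum(d[:m]) == m * (m - 1) + sum(d[m:])

print(is_split([3, 2, 2, 1]))     # True: a triangle plus a pendant vertex
print(is_split([2, 2, 2, 2, 2]))  # False: the 5-cycle is not split
```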

Keywords: cliques, independent sets, k-cardinality assignment, social networks, split graphs

Procedia PDF Downloads 283
1289 A Method for Quantitative Assessment of the Dependencies between Input Signals and Output Indicators in Production Systems

Authors: Maciej Zaręba, Sławomir Lasota

Abstract:

Knowing the degree of dependency between sets of input signals and selected sets of indicators that measure a production system's effectiveness is of great importance in industry. This paper introduces the SELM method, which enables the selection of the sets of input signals that most affect a selected subset of indicators measuring the effectiveness of a production system. For a defined set of output indicators, the method quantifies the impact of the input signals gathered by the continuous-monitoring production system.
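
The abstract does not detail how SELM quantifies dependence; as a generic stand-in, mutual information between input signals and an output indicator can be estimated as follows (illustrative data, not the paper's method):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
signals = rng.normal(size=(500, 4))                # four input signals
kpi = 2 * signals[:, 0] - signals[:, 2] + rng.normal(scale=0.1, size=500)

mi = mutual_info_regression(signals, kpi)          # dependence per signal
print(np.argsort(mi)[::-1])                        # signals 0 and 2 rank highest
```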

Keywords: manufacturing operation management, signal relationship, continuous monitoring, production systems

Procedia PDF Downloads 83
1288 Corpus-Based Analysis on the Translatability of Conceptual Vagueness in Traditional Chinese Medicine Classics Huang Di Nei Jing

Authors: Yan Yue

Abstract:

Huang Di Nei Jing (HDNJ) is one of the most significant traditional Chinese medicine (TCM) classics, laying the foundation of TCM theory and practice. It is an important work for studying the ancient civilization and medical history of China. The language of HDNJ is highly concise and vague, and notably challenging to translate. This paper investigates the translatability of one particular kind of vagueness in HDNJ: conceptual vagueness that carries Chinese philosophical and cultural connotations. The corpus tool Sketch Engine is used to provide potential online contexts and word behaviors. Two selected English translations of HDNJ, one by a TCM practitioner and one by a non-practitioner, are used to examine the frequency and distribution of linguistic features of the translations. It was found that the hypothesis about the universals of translated language (explicitation, normalisation) holds in one translation, but at the sacrifice of some original contextual connotations. Transliteration is purposefully used in the second translation to retain the original flavor, which is argued to be a violation of the principle of relevance in communication because it yields few contextual effects and demands more processing effort from the reader. The translatability of conceptual vagueness in HDNJ is constrained by the source-language context and the reader's cognitive environment.

Keywords: corpus-based translation, translatability, TCM classics, vague language

Procedia PDF Downloads 340
1287 Genodata: The Human Genome Variation Using BigData

Authors: Surabhi Maiti, Prajakta Tamhankar, Prachi Uttam Mehta

Abstract:

Since the completion of the Human Genome Project, there has been an unparalleled escalation in the sequencing of genomic data. This project was the first major leap in the field of medical research, especially in genomics, and it won accolades by using a concept called Big Data, which was earlier used extensively to generate business value. Big Data makes use of data sets that are generally files of terabytes, petabytes, or exabytes in size; such data sets were traditionally used and managed with spreadsheets and RDBMSs. The voluminous data made the process tedious and time consuming, and hence a stronger framework called Hadoop was introduced in the field of genetic sciences to make data processing faster and more efficient. This paper focuses on using Spark, which is gaining momentum with the advancement of Big Data technologies. Cloud storage is an effective medium for storing the large data sets generated by genetic research and the result sets produced by Spark analysis.
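
A minimal PySpark sketch of the kind of aggregation described, counting variants per chromosome in a VCF-style text file; the path and file layout are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("genodata-sketch").getOrCreate()

lines = spark.sparkContext.textFile("s3://my-bucket/variants.vcf")  # hypothetical path
counts = (lines.filter(lambda l: not l.startswith("#"))   # drop VCF header lines
               .map(lambda l: (l.split("\t")[0], 1))      # key by chromosome column
               .reduceByKey(lambda a, b: a + b))
print(counts.collect())
spark.stop()
```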

Keywords: human genome project, Bigdata, genomic data, SPARK, cloud storage, Hadoop

Procedia PDF Downloads 223
1286 The Future of Reduced Instruction Set Computing and Complex Instruction Set Computing and Suggestions for Reduced Instruction Set Computing-V Development

Authors: Can Xiao, Ouanhong Jiang

Abstract:

Based on the two instruction set philosophies, complex instruction set computing (CISC) and reduced instruction set computing (RISC), processors have developed in their respective fields of expertise. This paper summarizes research on the differences in performance and energy efficiency between CISC and RISC, striving to eliminate the influence of peripheral configuration factors. We discuss whether processor performance is determined by the instruction set or by the implementation. In addition, the rapidly developing RISC-V poses a challenge to existing models. We analyze research results, analyze the impact of the instruction sets themselves, and finally make suggestions for the development of RISC-V.

Keywords: ISA, RISC-V, ARM, X86, power, energy efficiency

Procedia PDF Downloads 52
1285 REDUCER: An Architectural Design Pattern for Reducing Large and Noisy Data Sets

Authors: Apkar Salatian

Abstract:

To relieve the burden of reasoning on a point-to-point basis, in many domains there is a need to reduce large and noisy data sets into trends for qualitative reasoning. In this paper we propose and describe a new architectural design pattern called REDUCER for reducing large and noisy data sets, which can be tailored to particular situations. REDUCER consists of two consecutive processes: Filter, which takes the original data and removes outliers, inconsistencies, or noise; and Compression, which takes the filtered data and derives trends in it. We also show how REDUCER has been successfully applied to three different case studies.
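
The pattern itself does not prescribe algorithms for the two stages; one possible instantiation, sketched below, uses a rolling-median filter followed by change-point compression:

```python
from statistics import median

def filter_stage(data, window=3):
    """Rolling-median filter: suppress outliers and spikes."""
    half = window // 2
    return [median(data[max(0, i - half): i + half + 1])
            for i in range(len(data))]

def compression_stage(data, tolerance=0.5):
    """Keep only points where the value changes enough: a crude trend."""
    trend = [(0, data[0])]
    for i, v in enumerate(data[1:], 1):
        if abs(v - trend[-1][1]) >= tolerance:
            trend.append((i, v))
    return trend

def reducer(data):
    return compression_stage(filter_stage(data))

noisy = [1.0, 1.1, 9.0, 1.2, 1.3, 2.5, 2.6, 2.4, 4.0, 4.1]
print(reducer(noisy))   # the 9.0 spike is filtered out before compression
```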

Keywords: design pattern, filtering, compression, architectural design

Procedia PDF Downloads 175
1284 Cognitive Science Based Scheduling in Grid Environment

Authors: N. D. Iswarya, M. A. Maluk Mohamed, N. Vijaya

Abstract:

A grid is an infrastructure that allows the deployment of large distributed data sets from multiple locations to reach a common goal. Scheduling data-intensive applications becomes challenging as the data sets grow very large. Only two solutions exist to tackle this issue: either the computation that requires huge data sets is transferred to the data site, or the required data sets are transferred to the computation site. In the former scenario, the computation cannot be transferred when the servers are storage/data servers with little or no computational capability; hence, the second scenario is considered for further exploration. During scheduling, transferring huge data sets from one site to another requires substantial network bandwidth. To mitigate this issue, this work focuses on incorporating cognitive science into scheduling. Cognitive science is the study of the human brain and its related activities, and current research mainly focuses on incorporating it into various computational modeling techniques. In this work, the problem-solving approach of the human brain is studied and incorporated into data-intensive scheduling in grid environments. A cognitive engine (CE) is designed and deployed at various grid sites. The intelligent agents in the CE help analyze requests and create the knowledge base. Depending on the link capacity, a decision is taken on whether to transfer the data sets or to partition them. The agents predict the next request in order to serve the requesting site with data sets in advance, which reduces data availability time and data transfer time. A replica catalog and a metadata catalog created by the agents assist in the decision-making process.

Keywords: data grid, grid workflow scheduling, cognitive artificial intelligence

Procedia PDF Downloads 364
1283 Frequent Item Set Mining for Big Data Using MapReduce Framework

Authors: Tamanna Jethava, Rahul Joshi

Abstract:

Frequent item sets play an essential role in many data mining tasks that try to find interesting patterns in databases. Typically, a frequent item set refers to a set of items that frequently appear together in a transaction dataset. Several mining algorithms are used for frequent item set mining, yet most do not scale to the type of data we are presented with today, so-called "Big Data", that is, collections of very large data sets. Our approach is to perform frequent item set mining over large datasets in a scalable and speedy way, using MapReduce along with HDFS to find frequent item sets from Big Data on a large cluster. This paper focuses on using a pre-processing and mining algorithm as a hybrid approach for Big Data over the Hadoop platform.
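
A toy, in-process simulation of the two MapReduce phases for counting k-item sets (on Hadoop these functions would be distributed across the cluster; this sketch only illustrates the shape of the computation):

```python
from collections import Counter
from itertools import combinations

def map_phase(transaction, k):
    """Mapper: emit (itemset, 1) for every k-item subset of a transaction."""
    for itemset in combinations(sorted(set(transaction)), k):
        yield itemset, 1

def reduce_phase(pairs, min_support):
    """Reducer: sum counts per itemset and keep the frequent ones."""
    counts = Counter()
    for itemset, one in pairs:
        counts[itemset] += one
    return {s: c for s, c in counts.items() if c >= min_support}

transactions = [["milk", "bread"], ["milk", "eggs"], ["milk", "bread", "eggs"]]
pairs = (kv for t in transactions for kv in map_phase(t, 2))
print(reduce_phase(pairs, min_support=2))
# {('bread', 'milk'): 2, ('eggs', 'milk'): 2}
```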

Keywords: frequent item set mining, big data, Hadoop, MapReduce

Procedia PDF Downloads 384
1282 Frequent Itemset Mining Using Rough-Sets

Authors: Usman Qamar, Younus Javed

Abstract:

Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data: what products were often purchased together? Its applications include basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that as the data grow, the time and resources required to mine them increase at an exponential rate. In this investigation, a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. For frequent itemset mining, FASTER can produce a speed-up of 3.1 times compared to the original algorithm while maintaining an accuracy of 71%.

Keywords: rough-sets, classification, feature selection, entropy, outliers, frequent itemset mining

Procedia PDF Downloads 403
1281 Combining Real Actors with Virtual Sets: The Future of Immersive Virtual Reality Fiction Cinema

Authors: Nefeli Dimitriadi

Abstract:

This paper aims to present immersive cinema in which real actors are filmed and integrated into Virtual Reality environments and 360 cinematic narrative, in comparison with 360 filming of real actors and sets and with fully computer-generated animation movies with 3D avatars. Objectives: This research aims to present immersive cinema where real actors are integrated into Virtual Reality environments and 360 cinematic narrative as the future of immersive cinema. Methodology: A comparative analysis is conducted between real-actor filming combined with Virtual Reality sets, 360 filming of real actors and sets, and fully computer-generated animation movies with 3D avatars, using the Virtual Reality movie Neurosynapses and others as case studies. Contribution: This research contributes to defining the best practices leading to impactful immersive cinematic narratives.

Keywords: virtual reality, 360 movies, immersive cinema, directing for virtual reality

Procedia PDF Downloads 85
1280 Holomorphic Prioritization of Sets within Decagram of Strategic Decision Making of POSM Using Operational Research (OR): Analytic Hierarchy Process (AHP) Analysis

Authors: Elias Ogutu Azariah Tembe, Hussain Abdullah Habib Al-Salamin

Abstract:

There is a decagram of strategic decisions of operations and production/service management (POSM) within operational research (OR) which must collate, namely: design, inventory, quality, location, process and capacity, layout, scheduling, maintenance, and supply chain. This paper presents an architectural-configuration conceptual framework for this decagram of decision sets in the form of a mathematical complete graph and an abelian graph. Mathematically, a complete graph, whether undirected (UDG) or directed (DG), is one in which every pair of vertices is connected, so the decision sets are collated, confluent, and holomorphic. No study, however, has yet prioritized the holomorphic sets of POSM within the OR field. This study utilizes a structured OR technique known as the Analytic Hierarchy Process (AHP) for organizing, sorting, and prioritizing (ranking) the sets within the decagram of POSM according to their attribution (propensity), and provides an analysis of how the prioritization has real-world application in the 21st century.
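
For reference, a compact AHP sketch in numpy: priority weights from the principal eigenvector of a pairwise comparison matrix, plus Saaty's consistency ratio; the 3x3 example matrix is illustrative, not the paper's decagram data.

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32,
      8: 1.41, 9: 1.45, 10: 1.49}          # Saaty's random indices (n >= 3)

def ahp_weights(A):
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)
    w = np.abs(vecs[:, i].real)
    w /= w.sum()                           # priority weights
    n = A.shape[0]
    ci = (vals[i].real - n) / (n - 1)      # consistency index
    return w, ci / RI[n]                   # weights, consistency ratio

A = np.array([[1, 3, 5], [1 / 3, 1, 3], [1 / 5, 1 / 3, 1]])
w, cr = ahp_weights(A)
print(w, cr)                               # CR < 0.1 is conventionally acceptable
```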

Keywords: holomorphic, decagram, decagon, confluent, complete graph, AHP analysis, SCM, HRM, OR, OM, abelian graph

Procedia PDF Downloads 374
1279 The Various Forms of a Soft Set and Its Extension in Medical Diagnosis

Authors: Biplab Singha, Mausumi Sen, Nidul Sinha

Abstract:

In order to deal with the impreciseness and uncertainty of a system, D. Molodtsov introduced the concept of the 'soft set' in 1999. Since then, a number of related definitions have been conceptualized. This paper includes a study of various forms of soft sets with examples. It covers the concepts of the domain and co-domain of a soft set, conversion to one-one and onto functions, the matrix representation of a soft set and its relation with one-one functions, upper and lower triangular matrices, and the transpose and kernel of a soft set. The paper also gives an idea of the extension of soft sets to medical diagnosis. Here, two soft sets, relating to diseases and symptoms, are considered, and using the AND and OR operations, a diagnosis of the disease is calculated through appropriate examples.
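
A small sketch of the standard AND and OR operations on soft sets used in this kind of diagnosis step; the universe, parameters, and data below are hypothetical:

```python
U = {"p1", "p2", "p3"}                       # universe of patients

# soft set of symptoms: parameter -> patients exhibiting it
F = {"fever": {"p1", "p2"}, "cough": {"p2", "p3"}}
# soft set of test results: parameter -> patients with that result
G = {"high_wbc": {"p2"}, "normal_wbc": {"p1", "p3"}}

def soft_and(F, G):
    # (F, A) AND (G, B): H(a, b) = F(a) intersect G(b)
    return {(a, b): F[a] & G[b] for a in F for b in G}

def soft_or(F, G):
    # (F, A) OR (G, B): H(a, b) = F(a) union G(b)
    return {(a, b): F[a] | G[b] for a in F for b in G}

# e.g., patients with both fever AND a high white blood cell count
print(soft_and(F, G)[("fever", "high_wbc")])   # {'p2'}
```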

Keywords: kernel of a soft set, soft set, transpose of a soft set, upper and lower triangular matrix of a soft set

Procedia PDF Downloads 305
1278 Heuristic Search Algorithm (HSA) for Enhancing the Lifetime of Wireless Sensor Networks

Authors: Tripatjot S. Panag, J. S. Dhillon

Abstract:

The lifetime of a wireless sensor network can be effectively increased by scheduling operations. Once the sensors are randomly deployed, the task at hand is to find the largest number of disjoint sets of sensors such that every set provides complete coverage of the target area. At any instant, only one of these disjoint sets is switched on, while all others are switched off. This paper proposes a heuristic search method to find the maximum number of disjoint sets that completely cover the region. A population of randomly initialized members is made to explore the solution space. A set of heuristics is applied to guide the members to possible solutions in their neighborhoods; the heuristics accelerate the convergence of the algorithm. The best solution explored by the population is recorded and continuously updated. The proposed algorithm has been tested on applications that require sensing of multiple target points, referred to as point-coverage applications. Results show that the proposed algorithm outperforms existing algorithms: it always finds the optimum solution, and does so with fewer fitness-function evaluations than existing approaches.
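
For contrast with the proposed heuristic search algorithm, a simple greedy baseline for extracting disjoint covers in a point-coverage instance (illustrative data; this is not the paper's algorithm):

```python
def greedy_disjoint_covers(coverage, targets):
    """Greedy baseline: repeatedly extract a set of sensors covering all
    targets, using each sensor in at most one set."""
    remaining = dict(coverage)               # sensor -> set of covered targets
    covers = []
    while True:
        uncovered, chosen = set(targets), []
        for s in sorted(remaining, key=lambda s: -len(remaining[s])):
            if uncovered & remaining[s]:
                chosen.append(s)
                uncovered -= remaining[s]
            if not uncovered:
                break
        if uncovered:                        # leftovers cannot form another cover
            return covers
        covers.append(chosen)
        for s in chosen:
            del remaining[s]

coverage = {1: {"t1", "t2"}, 2: {"t2", "t3"}, 3: {"t1", "t3"},
            4: {"t1"}, 5: {"t2"}, 6: {"t3"}}
print(greedy_disjoint_covers(coverage, {"t1", "t2", "t3"}))  # [[1, 2], [3, 5]]
```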

Keywords: coverage, disjoint sets, heuristic, lifetime, scheduling, Wireless sensor networks, WSN

Procedia PDF Downloads 420
1277 Multimodal Optimization of Density-Based Clustering Using Collective Animal Behavior Algorithm

Authors: Kristian Bautista, Ruben A. Idoy

Abstract:

A bio-inspired metaheuristic algorithm based on the theory of collective animal behavior (CAB) was integrated with density-based clustering, modeled as a multimodal optimization problem. The algorithm was tested on synthetic, Iris, Glass, Pima, and Thyroid data sets in order to measure its effectiveness relative to a CDE-based clustering algorithm. Preliminary testing revealed that one of the parameter settings used was ineffective for clustering, which prompted an investigation. It was revealed that fine-tuning the distance δ3, which determines the extent to which a given data point will be clustered, helped improve the quality of the cluster output. Even though the modification of δ3 significantly improved the solution quality and cluster output of the algorithm, results suggest that there is no difference between the population means of the solutions obtained using the original and the modified parameter settings for all data sets. This implies that using either setting will not have any effect on obtaining the best global and local animal positions. Results also suggest that the CDE-based clustering algorithm is better than the CAB-density clustering algorithm for all data sets. Nevertheless, the CAB-density clustering algorithm is still a good clustering algorithm: it correctly identified the number of classes of some data sets more frequently over thirty trial runs, with a much smaller standard deviation, showing potential for clustering high-dimensional data sets. Thus, further investigation of the post-processing stage of the algorithm is recommended.

Keywords: clustering, metaheuristics, collective animal behavior algorithm, density-based clustering, multimodal optimization

Procedia PDF Downloads 196
1276 Nano Generalized Topology

Authors: M. Y. Bakeir

Abstract:

Rough set theory is a recent approach to reasoning about data that has achieved a large number of applications in various real-life fields. The main idea of rough sets corresponds to the lower and upper set approximations. These two approximations are exactly the interior and the closure of the set with respect to a certain topology on a collection U of imprecise data acquired from a real-life field. The base of the topology is formed by the equivalence classes of an equivalence relation E defined on U using the available information about the data. The theory of generalized topology was studied by Császár, and it is well known that generalized topology in the sense of Császár is a generalization of the topology on a set. On the other hand, many important collections of sets related to the topology on a set form a generalized topology. The notion of nano topology was introduced by Lellis Thivagar; it is defined in terms of the approximations and boundary region of a subset of a universe using an equivalence relation on it. The purpose of this paper is to introduce a new generalized topology in terms of rough sets, called nano generalized topology.
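
A short sketch of the underlying construction: given an equivalence partition of U and a subset X, the lower and upper approximations and the boundary generate the nano topology {U, ∅, L(X), U(X), B(X)}:

```python
def nano_topology(U, partition, X):
    """Nano topology on U induced by an equivalence partition and X ⊆ U:
    [U, ∅, lower approximation, upper approximation, boundary]."""
    lower = {x for block in partition if set(block) <= X for x in block}
    upper = {x for block in partition if set(block) & X for x in block}
    return [set(U), set(), lower, upper, upper - lower]

U = {1, 2, 3, 4, 5}
partition = [{1, 2}, {3}, {4, 5}]            # equivalence classes of E
X = {1, 2, 3, 4}
for member in nano_topology(U, partition, X):
    print(member)
# lower = {1, 2, 3}, upper = {1, 2, 3, 4, 5}, boundary = {4, 5}
```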

Keywords: rough sets, topological space, generalized topology, nano topology

Procedia PDF Downloads 402
1275 Rank of Semigroup: Generating Sets and Cases Revealing Limitations of the Concept of Independence

Authors: Zsolt Lipcsey, Sampson Marshal Imeh

Abstract:

We investigate a characterisation of the rank of a semigroup given by Howie and Ribeiro (1999), in order to ascertain the relevance of the concept of independence. There are cases where the concept of independence fails to be useful for this purpose. One would expect a basis to be a maximal independent subset of the given semigroup. However, we construct examples of semigroups where a finite basis exists and the basis is larger than the number of independent elements.

Keywords: generating sets, independent set, rank, cyclic semigroup, basis, commutative

Procedia PDF Downloads 157
1274 A Deterministic Large Deviation Model Based on Complex N-Body Systems

Authors: David C. Ni

Abstract:

In previous efforts, we constructed N-body systems by an extended Blaschke product (EBP), which represents a non-temporal, nonlinear extension of the Lorentz transformation. In this construction, we rely on only two parameters, the nonlinear degree and the relative momentum, to characterize the systems. We further explored root computation via iteration with an algorithm extended from the Jenkins-Traub method. The solution sets take the form σ + i[-t, t], where σ and t are real numbers and the intervals [-t, t] exhibit various canonical distributions. In this paper, we correlate the convergent sets in the original domain with the solution sets, which demonstrate large-deviation distributions in the codomain. We proceed to compare our approach with established principles such as the Donsker-Varadhan and Wentzell-Freidlin theories. The deterministic model based on this construction allows us to explore applications in the areas of finance and statistical mechanics.

Keywords: nonlinear Lorentz transformation, Blaschke equation, iteration solutions, root computation, large deviation distribution, deterministic model

Procedia PDF Downloads 363