Search results for: extraction of logical rules
1351 Generating Speq Rules based on Automatic Proof of Logical Equivalence
Authors: Katsunori Miura, Kiyoshi Akama, Hiroshi Mabuchi
Abstract:
In the Equivalent Transformation (ET) computation model, a program is constructed by the successive accumulation of ET rules. A meta-computation method by which correct ET rules are generated has been proposed. Although the method covers a broad range in the generation of ET rules, not all important ET rules are necessarily generated. More ET rules can be generated by supplementing the method with generation procedures specialized for important ET rules. A Specialization-by-Equation (Speq) rule is one of those important rules. A Speq rule describes a procedure in which two variables included in an atom conjunction are equalized due to predicate constraints. In this paper, we propose an algorithm that systematically and recursively generates Speq rules and discuss its effectiveness in the synthesis of ET programs. A Speq rule is generated based on a proof of a logical formula consisting of a given atom set and a dis-equality. The proof is carried out by utilizing some ET rules, and the rules ultimately obtained are used in generating Speq rules.
Keywords: Equivalent transformation, ET rule, equation of two variables, rule generation, Specialization-by-Equation rule.
1350 Extraction of Symbolic Rules from Artificial Neural Networks
Authors: S. M. Kamruzzaman, Md. Monirul Islam
Abstract:
Although backpropagation ANNs generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions cannot be explained in the way those of decision trees can. In many applications, it is desirable to extract knowledge from trained ANNs so that users can gain a better understanding of how the networks solve the problems. A new rule extraction algorithm, called rule extraction from artificial neural networks (REANN), is proposed and implemented to extract symbolic rules from ANNs. A standard three-layer feedforward ANN is the basis of the algorithm. A four-phase training algorithm is proposed for backpropagation learning. The explicitness of the extracted rules is supported by comparing them to the symbolic rules generated by other methods. Extracted rules are comparable with other methods in terms of the number of rules, the average number of conditions per rule, and predictive accuracy. Extensive experimental studies on several benchmark classification problems, such as breast cancer, iris, diabetes, and season classification, demonstrate the effectiveness of the proposed approach with good generalization ability.
Keywords: Backpropagation, clustering algorithm, constructive algorithm, continuous activation function, pruning algorithm, rule extraction algorithm, symbolic rules.
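As a rough illustration of the general rule-extraction idea behind abstracts like this one (this is not the REANN algorithm itself), the sketch below treats a tiny feedforward network as an oracle, queries it over a discretized input space, and reports every positive input combination as an IF-THEN rule. The weights are hand-picked so the network behaves like XOR; everything in the example is invented for illustration.

```python
# Minimal pedagogical rule-extraction sketch (not REANN): query a fixed
# 2-2-1 network on a Boolean grid and keep the positive cases as rules.
import itertools
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

W_hidden = [[20.0, 20.0], [20.0, 20.0]]   # hidden unit 0 ~ OR, hidden unit 1 ~ AND
b_hidden = [-10.0, -30.0]
W_out = [20.0, -20.0]                     # output ~ OR AND (NOT AND), i.e. XOR
b_out = -10.0

def predict(x1, x2):
    h = [sigmoid(W_hidden[j][0] * x1 + W_hidden[j][1] * x2 + b_hidden[j]) for j in range(2)]
    return sigmoid(W_out[0] * h[0] + W_out[1] * h[1] + b_out) > 0.5

rules = [f"IF x1 = {x1} AND x2 = {x2} THEN class = 1"
         for x1, x2 in itertools.product([0, 1], repeat=2) if predict(x1, x2)]
print("\n".join(rules))   # two rules, matching the XOR truth table
```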
1349 The Usefulness of Logical Structure in Flexible Document Categorization
Authors: Jebari Chaker, Ounalli Habib
Abstract:
This paper presents a new approach for automatic document categorization. Exploiting the logical structure of the document, our approach assigns an HTML document to one or more categories (thesis, paper, call for papers, email, ...). Using a set of training documents, our approach generates a set of rules used to categorize new documents. The flexibility of the approach is achieved by associating a weight with each rule, representing its importance in the discrimination between possible categories. This weight is dynamically modified at each new document categorization. Experiments with the proposed approach provide satisfactory results.
Keywords: Categorization rule, document categorization, flexible categorization, logical structure.
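A minimal sketch, under invented rules and documents, of the mechanism the abstract describes: categorization rules carry weights, the weights vote for categories, and the weights of the rules that supported the winning category are nudged after each categorization. The predicates, categories and update factor are assumptions, not the authors' system.

```python
# Weighted categorization rules with a dynamic weight update (illustrative only).
def has_abstract(doc):
    return "abstract" in doc["sections"]

def has_deadline(doc):
    return "deadline" in doc["text"].lower()

rules = [
    {"test": has_abstract, "category": "paper", "weight": 1.0},
    {"test": has_deadline, "category": "call_for_papers", "weight": 1.0},
]

def categorize(doc, learning_rate=0.1):
    scores, fired = {}, []
    for rule in rules:
        if rule["test"](doc):
            scores[rule["category"]] = scores.get(rule["category"], 0.0) + rule["weight"]
            fired.append(rule)
    if not scores:
        return None
    best = max(scores, key=scores.get)
    # Reinforce the rules that voted for the winning category.
    for rule in fired:
        if rule["category"] == best:
            rule["weight"] += learning_rate
    return best

doc = {"sections": ["abstract", "introduction"], "text": "We present ..."}
print(categorize(doc))   # -> 'paper'; the first rule's weight grows to 1.1
```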
1348 Fuzzy Rules Generation and Extraction from Support Vector Machine Based on Kernel Function Firing Signals
Authors: Prasan Pitiranggon, Nunthika Benjathepanun, Somsri Banditvilai, Veera Boonjing
Abstract:
Our study proposes an alternative method for building a Fuzzy Rule-Based System (FRB) from a Support Vector Machine (SVM). The first set of fuzzy IF-THEN rules is obtained through an equivalence between the SVM decision network and the zero-order Sugeno FRB type of the Adaptive Network Fuzzy Inference System (ANFIS). The second set of rules is generated by combining the first set based on the strength of the firing signals of the support vectors under a Gaussian kernel. The final set of rules is then obtained from the second set through input scatter partitioning. A distinctive advantage of our method is the guarantee that the number of final fuzzy IF-THEN rules is not more than the number of support vectors in the trained SVM. The final FRB system obtained is capable of performing classification with results comparable to its SVM counterpart, but it has an advantage over the black-box SVM in that it may reveal human-comprehensible patterns.
Keywords: Fuzzy rule base, rule extraction, rule generation, support vector machine.
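The sketch below illustrates the equivalence this line of work relies on: an SVM with a Gaussian kernel can be read as a zero-order Sugeno rule base in which each support vector contributes one fuzzy rule whose firing strength is the kernel value. The support vectors, multipliers and bias are invented; a real system would take them from a trained SVM.

```python
# Reading an RBF-kernel SVM decision function as fuzzy rules (illustrative values).
import math

support_vectors = [([1.0, 1.0], +1, 0.8),   # (vector, label, alpha)
                   ([3.0, 3.0], -1, 0.8)]
bias = 0.0
gamma = 0.5   # Gaussian kernel width

def firing_strength(x, sv):
    # Product of Gaussian memberships == Gaussian kernel value.
    return math.exp(-gamma * sum((xi - si) ** 2 for xi, si in zip(x, sv)))

def classify(x):
    # Each rule: IF x is near sv THEN contribute alpha * label (Sugeno constant).
    score = sum(alpha * label * firing_strength(x, sv)
                for sv, label, alpha in support_vectors) + bias
    return +1 if score >= 0 else -1

print(classify([1.2, 0.9]))   # -> +1, dominated by the first rule
print(classify([2.9, 3.2]))   # -> -1
```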
1347 Comparative Study of Decision Trees and Rough Sets Theory as Knowledge Extraction Tools for Design and Control of Industrial Processes
Authors: Marcin Perzyk, Artur Soroczynski
Abstract:
General requirements for knowledge representation in the form of logic rules, applicable to the design and control of industrial processes, are formulated. The characteristic behavior of decision trees (DTs) and rough sets theory (RST) in extracting rules from recorded data is discussed and illustrated with simple examples. The significance of the models' drawbacks was evaluated using simulated and industrial data sets. It is concluded that the performance of DTs may be considerably poorer than that of RST in several important aspects, particularly when not only a characterization of a problem is required, but also detailed and precise rules are needed, tailored to the actual, specific problems to be solved.
Keywords: Knowledge extraction, decision trees, rough sets theory, industrial processes.
1346 Mining Association Rules from Unstructured Documents
Authors: Hany Mahgoub
Abstract:
This paper presents a system for discovering association rules from collections of unstructured documents called EART (Extract Association Rules from Text). The EART system treats text only, not images or figures. EART discovers association rules amongst keywords labeling the collection of textual documents. The main characteristic of EART is that the system integrates XML technology (to transform unstructured documents into structured documents) with an Information Retrieval scheme (TF-IDF) and a Data Mining technique for association rule extraction. EART depends on word features to extract association rules. It consists of four phases: a structure phase, an index phase, a text mining phase and a visualization phase. Our work depends on the analysis of the keywords in the extracted association rules, both through the co-occurrence of the keywords in one sentence in the original text and through the presence of the keywords in one sentence without co-occurrence. Experiments were applied to a collection of scientific documents selected from MEDLINE that are related to the outbreak of the H5N1 avian influenza virus.
Keywords: Association rules, information retrieval, knowledge discovery in text, text mining.
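A minimal sketch, on invented toy documents, of the two ingredients this kind of system combines: TF-IDF keyword selection per document, followed by a simple frequent-pair search that yields keyword association rules with support and confidence. This is not the authors' EART implementation.

```python
# TF-IDF keyword selection + tiny association-rule pass over keyword sets.
import math
from itertools import combinations
from collections import Counter

docs = [
    ["h5n1", "avian", "influenza", "outbreak"],
    ["h5n1", "influenza", "vaccine"],
    ["avian", "influenza", "outbreak", "poultry"],
]

def tfidf(term, doc, corpus):
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    return tf * math.log(len(corpus) / df)

def top_keywords(doc, k=3):
    # Keep the k most discriminative keywords of a document as its "transaction".
    return set(sorted(set(doc), key=lambda t: tfidf(t, doc, docs), reverse=True)[:k])

transactions = [top_keywords(d) for d in docs]

min_support, min_confidence = 2 / len(docs), 0.6
pair_counts = Counter(frozenset(p) for t in transactions for p in combinations(sorted(t), 2))
item_counts = Counter(i for t in transactions for i in t)

for pair, cnt in pair_counts.items():
    if cnt / len(transactions) < min_support:
        continue
    a, b = tuple(pair)
    for x, y in ((a, b), (b, a)):
        conf = cnt / item_counts[x]
        if conf >= min_confidence:
            print(f"{x} -> {y}  support={cnt/len(transactions):.2f} confidence={conf:.2f}")
```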
1345 A New Model for Discovering XML Association Rules from XML Documents
Authors: R. AliMohammadzadeh, M. Rahgozar, A. Zarnani
Abstract:
The inherent flexibility of XML in both structure and semantics makes mining XML data a complex task, with more challenges compared to traditional association rule mining in relational databases. In this paper, we propose a new model for the effective extraction of generalized association rules from an XML document collection. We use frequent subtree mining techniques directly in the discovery process and do not ignore the tree structure of the data in the final rules. The frequent subtrees, based on the user-provided support, are split into complement subtrees to form the rules. We explain our model in multiple steps, from data preparation to rule generation.
Keywords: XML, data mining, association rule mining.
1344 Approximate Bounded Knowledge Extraction Using Type-I Fuzzy Logic
Authors: Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani, C. Ardil
Abstract:
Using a neural network, we try to model the unknown function f for given input-output data pairs. The connection strength of each neuron is updated through learning. Repeated simulations of the crisp neural network produce different values of the weight factors, which are directly affected by the change of different parameters. We propose the idea that, for each neuron in the network, we can obtain quasi-fuzzy weight sets (QFWS) using repeated simulation of the crisp neural network. Such fuzzy weight functions may be applied where we have multivariate crisp input that needs to be adjusted after iterative learning, as in claim amount distribution analysis. As real data is subject to noise and uncertainty, QFWS may be helpful in the simplification of such complex problems. Secondly, these QFWS provide a good initial solution for the training of fuzzy neural networks with reduced computational complexity.
Keywords: Crisp neural networks, fuzzy systems, extraction of logical rules, quasi-fuzzy numbers.
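A minimal sketch of the idea described above: repeat training of a crisp network several times, collect the values each weight settles on, and summarize every weight as a triangular quasi-fuzzy number (min, mode, max). The "training runs" below are simulated with random perturbations around invented weights, since an actual network and data set are outside the scope of this illustration.

```python
# Building quasi-fuzzy weight sets from repeated (here, simulated) training runs.
import random
import statistics

random.seed(0)
TRUE_WEIGHTS = [0.8, -1.2, 0.3]   # hypothetical converged weights
N_RUNS = 20

# Each run ends near, but not exactly at, the same weights.
runs = [[w + random.gauss(0.0, 0.05) for w in TRUE_WEIGHTS] for _ in range(N_RUNS)]

def triangular_fuzzy(samples):
    # (left endpoint, most plausible value, right endpoint)
    return (min(samples), statistics.median(samples), max(samples))

quasi_fuzzy_weight_sets = [
    triangular_fuzzy([run[i] for run in runs]) for i in range(len(TRUE_WEIGHTS))
]

for i, (lo, mode, hi) in enumerate(quasi_fuzzy_weight_sets):
    print(f"weight {i}: ({lo:.3f}, {mode:.3f}, {hi:.3f})")
```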
1343 Extended Deductive Databases with Uncertain Information
Authors: Daniel Stamate
Abstract:
The paper presents an approach for handling uncertain information in deductive databases using multivalued logics. Uncertainty means that database facts may be assigned logical values other than the conventional ones, true and false. The logical values represent various degrees of truth, which may be combined and propagated by applying the database rules. A corresponding multivalued database semantics is defined. We show that it extends successful conventional semantics such as the well-founded semantics, and has a polynomial time data complexity.
Keywords: Reasoning under uncertainty, multivalued logics, deductive databases, logic programs, multivalued semantics.
1342 Post Mining - Discovering Valid Rules from Different Sized Data Sources
Authors: R. Nedunchezhian, K. Anbumani
Abstract:
A big organization may have multiple branches spread across different locations. Processing data from these branches becomes a huge task when innumerable transactions take place. Also, branches may be reluctant to forward their data for centralized processing but are ready to pass on their association rules. Local mining may also generate a large number of rules. Further, it is not practically possible for all local data sources to be of the same size. A model is proposed for discovering valid rules from different sized data sources, where the valid rules are high-weighted rules. These rules can be obtained from the high-frequency rules generated at each of the data sources. A data source selection procedure is considered in order to synthesize rules efficiently. Support equalization is another proposed method, which focuses on eliminating low-frequency rules at the local sites themselves, thus reducing the number of rules by a significant amount.
Keywords: Association rules, multiple data stores, synthesizing, valid rules.
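The sketch below illustrates one simple way to weight locally mined rules by the size of the reporting source and keep only the rules whose weighted support clears a global threshold. The sources, rules and threshold are invented; the paper's actual weighting and selection scheme may differ.

```python
# Synthesizing rules from different sized data sources by weighted support.
sources = [
    {"size": 10000, "rules": {("bread", "butter"): 0.40, ("milk", "bread"): 0.25}},
    {"size": 2000,  "rules": {("bread", "butter"): 0.10, ("tea", "sugar"): 0.60}},
    {"size": 500,   "rules": {("tea", "sugar"): 0.70}},
]

total = sum(s["size"] for s in sources)
global_threshold = 0.30

synthesized = {}
for s in sources:
    weight = s["size"] / total            # larger sources count for more
    for rule, support in s["rules"].items():
        synthesized[rule] = synthesized.get(rule, 0.0) + weight * support

valid_rules = {r: w for r, w in synthesized.items() if w >= global_threshold}
print(valid_rules)   # only the rules with high weighted support survive
```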
1341 Dynamic Attribute Dependencies in Relational Attribute Grammars
Authors: K. Barbar, M. Dehayni, A. Awada, M. Smaili
Abstract:
Considering the theory of attribute grammars, we use logical formulas instead of traditional functional semantic rules. Following the decoration of a derivation tree, a suitable algorithm should maintain the consistency of the formulas together with the evaluation of the attributes. This may be a Prolog-like resolution, but this paper examines a somewhat different strategy, based on production specialization, local consistency and propagation: given a derivation tree, it is interactively decorated, i.e. incrementally checked and evaluated. The non-directed dependencies are dynamically directed during attribute evaluation.
Keywords: Input/output attribute grammars, local consistency, logical programming, propagation, relational attribute grammars.
1340 An Adaptive Fuzzy Clustering Approach for the Network Management
Authors: Amal Elmzabi, Mostafa Bellafkih, Mohammed Ramdani
Abstract:
Chiu's method, which generates a Takagi-Sugeno Fuzzy Inference System (FIS), is a method of fuzzy rule extraction. The rule outputs are linear functions of the inputs. In addition, these rules are not explicit for the expert. In this paper, we develop a method which generates a Mamdani FIS, where the rule outputs are fuzzy. The method proceeds in two steps: first, it uses the subtractive clustering principle to estimate both the number of clusters and the initial locations of the cluster centers. Each obtained cluster corresponds to a Mamdani fuzzy rule. Then, it optimizes the fuzzy model parameters by applying a genetic algorithm. This method is illustrated on a traffic network management application. We also suggest a Mamdani fuzzy rule generation method for the case where the expert wants to classify the output variables into some predefined fuzzy classes.
Keywords: Fuzzy entropy, fuzzy inference systems, genetic algorithms, network management, subtractive clustering.
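A minimal sketch of the subtractive clustering step the abstract starts from: every data point receives a potential based on the density of its neighbours, the point with the highest potential becomes a cluster centre (one Mamdani rule), and the potentials near it are suppressed before the next centre is chosen. The data, radii and stopping ratio are invented, and the accept/reject criterion is simplified relative to Chiu's original formulation.

```python
# Simplified subtractive clustering: each returned centre seeds one fuzzy rule.
import math

def subtractive_clustering(points, ra=1.0, stop_ratio=0.15):
    rb = 1.5 * ra
    alpha, beta = 4.0 / ra ** 2, 4.0 / rb ** 2

    def sqdist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    potentials = [sum(math.exp(-alpha * sqdist(p, q)) for q in points) for p in points]
    first_peak = max(potentials)
    centers = []
    while True:
        i = max(range(len(points)), key=lambda k: potentials[k])
        if potentials[i] < stop_ratio * first_peak:
            break
        centers.append(points[i])
        # Suppress the potential around the newly accepted centre.
        potentials = [p - potentials[i] * math.exp(-beta * sqdist(points[i], q))
                      for p, q in zip(potentials, points)]
    return centers

data = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),   # one dense group
        (2.0, 2.1), (2.1, 1.9), (1.95, 2.05)]   # another dense group
print(subtractive_clustering(data))   # roughly one centre per group, i.e. two rules
```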
1339 A New Approach for Flexible Document Categorization
Authors: Jebari Chaker, Ounelli Habib
Abstract:
In this paper we propose a new approach for flexible document categorization according to the document type or genre instead of the topic. Our approach implements two homogeneous classifiers: a contextual classifier and a logical classifier. The contextual classifier is based on the document URL, whereas the logical classifier uses the logical structure of the document to perform the categorization. The final categorization is obtained by combining the contextual and logical categorizations. In our approach, each document is assigned to all predefined categories with different membership degrees. Our experiments demonstrate that our approach performs better than other genre categorization approaches.
Keywords: Categorization, combination, flexible, logical structure, genre, category, URL.
1338 Eclectic Rule-Extraction from Support Vector Machines
Authors: Nahla Barakat, Joachim Diederich
Abstract:
Support vector machines (SVMs) have shown superior performance compared to other machine learning techniques, especially in classification problems. Yet one limitation of SVMs is the lack of an explanation capability, which is crucial in some applications, e.g. in the medical and security domains. In this paper, a novel approach for eclectic rule-extraction from support vector machines is presented. This approach utilizes the knowledge acquired by the SVM and represented in its support vectors as well as the parameters associated with them. The approach includes three stages: training, propositional rule-extraction and rule quality evaluation. Results from four different experiments have demonstrated the value of the approach for extracting comprehensible rules of high accuracy and fidelity.
Keywords: Data mining, hybrid rule-extraction algorithms, medical diagnosis, SVMs.
1337 A Text Mining Technique Using Association Rules Extraction
Authors: Hany Mahgoub, Dietmar Rösner, Nabil Ismail, Fawzy Torkey
Abstract:
This paper describes a text mining technique for automatically extracting association rules from collections of textual documents. The technique, called Extracting Association Rules from Text (EART), depends on keyword features to discover association rules amongst the keywords labeling the documents. In this work, the EART system ignores the order in which the words occur and instead focuses on the words and their statistical distributions in the documents. The main contributions of the technique are that it integrates XML technology with an Information Retrieval scheme (TF-IDF) for keyword/feature selection, which automatically selects the most discriminative keywords for use in association rule generation, and that it uses a Data Mining technique for association rule discovery. It consists of three phases: a text preprocessing phase (transformation, filtration, stemming and indexing of the documents), an Association Rule Mining (ARM) phase (applying our designed algorithm for Generating Association Rules based on a Weighting scheme, GARW) and a visualization phase (visualization of results). Experiments were applied to web-page news documents related to the outbreak of the bird flu disease. The extracted association rules contain important features and describe the informative news included in the document collection. The performance of the EART system is compared with that of another system that uses the Apriori algorithm, in terms of execution time and the evaluation of the extracted association rules.
Keywords: Text mining, data mining, association rule mining.
Keywords: Text mining, data mining, association rule mining
1336 An Automatic Feature Extraction Technique for 2D Punch Shapes
Authors: Awais Ahmad Khan, Emad Abouel Nasr, H. M. A. Hussein, Abdulrahman Al-Ahmari
Abstract:
Sheet-metal parts have been widely applied in the electronics, communication and mechanical industries in recent decades, but the advancement in sheet-metal part design and manufacturing is still behind in comparison with the increasing importance of sheet-metal parts in modern industry. This paper presents a methodology for automatic extraction of some common 2D internal sheet metal features. The features used in this study are taken from the Unipunch™ catalogue. The extraction process starts with data extraction from a STEP file using an object-oriented approach, and with the application of suitable algorithms and rules, all features contained in the catalogue are automatically extracted. Since the extracted features include geometry and engineering information, they will be effective for downstream applications such as feature rebuilding and process planning.
Keywords: Feature Extraction, Internal Features, Punch Shapes, Sheet metal, STEP.
1335 Mechanisms of Ginger Bioactive Compounds Extract Using Soxhlet and Accelerated Water Extraction
Authors: M. N. Azian, A. N. Ilia Anisa, Y. Iwai
Abstract:
The mechanism of extracting bioactive compounds from a plant matrix is essential for optimizing the extraction process. As a benchmark technique, Soxhlet extraction has been utilized for discussing the mechanism and compared with accelerated water extraction. The trends of both techniques show that the process involves extraction and degradation. The highest yields of 6-, 8-, 10-gingerols and 6-shogaol in Soxhlet extraction were 13.948, 7.12, 10.312 and 2.306 mg/g, respectively. The optimum yields of 6-, 8-, 10-gingerols and 6-shogaol extracted by accelerated water extraction at 140°C were 68.97±3.95 mg/g at 3 min, 18.98±3.04 mg/g at 5 min, 5.167±2.35 mg/g at 3 min and 14.57±6.27 mg/g at 3 min, respectively. The effect of temperature at 3 min shows that the concentration of 6-shogaol increased rapidly as the recovery of 6-gingerol decreased.
Keywords: Mechanism, bioactive compounds, Soxhlet extraction, accelerated water extraction.
Keywords: Mechanism, bioactive compounds, soxhlet extraction, accelerated water extraction.
1334 Optimising Business Rules in the Services Sector
Authors: Alan Dormer
Abstract:
Business rules are widely used within the services sector. They provide consistency and allow relatively unskilled staff to process complex transactions correctly. But there are many examples where the rules themselves have an impact on the costs and profits of an organisation. Financial services, transport and human services are areas where the rules themselves can impact the bottom line in a predictable way. If this is the case, how can we find the set of rules that maximises profit, performance or customer service, or any other key performance indicator? The manufacturing, energy and process industries have embraced mathematical optimisation techniques to improve efficiency, increase production and so on. This paper explores several real-world (but simplified) problems in the services sector and shows how business rules can be optimised. It also examines the similarities and differences between the service and other sectors, and how optimisation techniques could be used to deliver similar benefits.
Keywords: Business rules, services, optimisation.
1333 Semi-Automatic Method to Assist Expert for Association Rules Validation
Authors: Amdouni Hamida, Gammoudi Mohamed Mohsen
Abstract:
In order to help the expert validate association rules extracted from data, some quality measures are proposed in the literature. We distinguish two categories: objective and subjective measures. The first depends on a fixed threshold and on the quality of the data from which the rules are extracted. The second consists in providing the expert with tools to explore and visualize rules during the evaluation step. However, the number of extracted rules to validate remains high, so the manual rule mining task is very hard. To solve this problem, we propose, in this paper, a semi-automatic method to assist the expert during association rule validation. Our method uses rule-based classification as follows: (i) we transform association rules into classification rules (classifiers); (ii) we use the generated classifiers for data classification; (iii) we visualize association rules with their classification quality to give an idea to the expert and to assist him during the validation process.
Keywords: Association rules, rule-based classification, classification quality, validation.
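A minimal sketch of the rule-based classification step the abstract proposes: an association rule whose consequent is a class label is reused as a classifier, applied to labelled transactions, and scored so the expert can focus on the rules that classify well. The data, rules and measures below are invented for illustration.

```python
# Scoring association rules reused as classifiers (coverage and accuracy).
transactions = [
    ({"fever", "cough"}, "flu"),
    ({"fever", "rash"}, "measles"),
    ({"cough", "fatigue"}, "flu"),
    ({"rash", "fatigue"}, "measles"),
]

# Association rules rewritten as classification rules: antecedent -> class.
rules = [
    ({"fever", "cough"}, "flu"),
    ({"rash"}, "measles"),
    ({"fatigue"}, "flu"),
]

def evaluate(rule, transactions):
    antecedent, predicted = rule
    matched = [(items, label) for items, label in transactions if antecedent <= items]
    if not matched:
        return 0.0, 0.0
    correct = sum(1 for _, label in matched if label == predicted)
    return len(matched) / len(transactions), correct / len(matched)

for rule in rules:
    cov, acc = evaluate(rule, transactions)
    print(f"{sorted(rule[0])} -> {rule[1]}: coverage={cov:.2f} accuracy={acc:.2f}")
```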
1332 Investigations on Some Operations of Soft Sets
Authors: Xun Ge, Songlin Yang
Abstract:
Soft set theory was initiated by Molodtsov in 1999. In the past years, this theory has been applied to many branches of mathematics, information science and computer science. In 2003, Maji et al. introduced some operations of soft sets and gave some operational rules. Recently, some of these operational rules have been pointed out to be untrue. Furthermore, Ali et al., in their paper, introduced and discussed some new operations of soft sets. In this paper, we further investigate the operational rules given by Maji et al. and Ali et al. We obtain some sufficient and necessary conditions under which the corresponding operational rules hold and give correct forms for some operational rules. These results will help us use the operational rules of soft sets correctly in the research and application of soft set theory.
Keywords: Soft sets, union, intersection, complement.
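A minimal sketch of soft sets over a universe U, represented as mappings from parameters to subsets of U, together with three operations discussed in this line of work: extended union, restricted intersection and relative complement. The definitions follow commonly used corrected forms, not necessarily the exact formulations of Maji et al. or Ali et al.; the universe and parameter sets are invented.

```python
# Soft sets as parameter -> subset mappings, with common operations.
U = {"h1", "h2", "h3", "h4"}

F = {"cheap": {"h1", "h2"}, "modern": {"h2", "h3"}}   # soft set (F, A)
G = {"cheap": {"h2", "h3"}, "wooden": {"h4"}}         # soft set (G, B)

def extended_union(F, G):
    # Parameters from either set; values united where parameters coincide.
    return {e: F.get(e, set()) | G.get(e, set()) for e in F.keys() | G.keys()}

def restricted_intersection(F, G):
    # Only shared parameters; values intersected.
    return {e: F[e] & G[e] for e in F.keys() & G.keys()}

def relative_complement(F, universe=U):
    return {e: universe - objs for e, objs in F.items()}

print(extended_union(F, G))
print(restricted_intersection(F, G))
print(relative_complement(F))
```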
1331 Data Acquisition from Cell Phone using Logical Approach
Authors: Keonwoo Kim, Dowon Hong, Kyoil Chung, Jae-Cheol Ryou
Abstract:
Cell phone forensics, which acquires and analyzes data in a cellular phone, is nowadays used by national investigation organizations and private companies. There are two methods for collecting cellular phone flash memory data. The first is a logical method, which acquires files and directories from the file system of the cell phone flash memory. The second is a low-level access method, which obtains all data from a bit-by-bit copy of the entire physical memory. In this paper, we describe a forensic tool that acquires cell phone flash memory data using a logical-level approach. With our tool, we can acquire the EFS file system and peek at memory data in an arbitrary region of a Korean CDMA cell phone.
Keywords: Forensics, logical method, acquisition, cell phone, flash memory.
1330 Extraction Condition of Echinocactus grusonii
Authors: R. Oonsivilai, N. Chaijareonudomroung, Y. Huantanom, A. Oonsivilai
Abstract:
The optimal extraction condition of dried Echinocactus grusonii powder was studied. The three independent variables are raw material drying temperature, extraction temperature, and extraction time. The dependent variables are the yield percentage of crude extract and the total phenolic content, quantified as gallic acid equivalent, in the crude extract. The experimental design was based on a central composite design. The highest yield percentage of crude extract was obtained from the extraction condition with a raw material drying temperature of 60°C, an extraction temperature of 15°C, and an extraction time of 25 min. Moreover, the crude extract with the highest phenolic content was obtained with a raw material drying temperature of 60°C, an extraction temperature of 35°C, and an extraction time of 25 min.
Keywords: Drying temperature, extraction temperature, optimal condition, total phenolic.
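For readers unfamiliar with the design mentioned above, the sketch below generates the kind of three-factor central composite design the abstract refers to: 2^3 factorial points, six axial points at ±alpha and a few centre points, in coded units. The actual factor levels (drying temperature, extraction temperature, extraction time) would be mapped onto these codes; the number of centre points here is an assumption.

```python
# Coded runs of a rotatable three-factor central composite design.
from itertools import product

k = 3                      # three independent variables
alpha = (2 ** k) ** 0.25   # rotatable design, alpha ~ 1.682
n_center = 3

factorial = list(product((-1.0, 1.0), repeat=k))
axial = [tuple(a if i == j else 0.0 for j in range(k))
         for i in range(k) for a in (-alpha, alpha)]
center = [(0.0,) * k] * n_center

design = factorial + axial + center
for run, point in enumerate(design, 1):
    print(run, tuple(round(c, 3) for c in point))
# 8 factorial + 6 axial + 3 centre runs = 17 experiments
```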
1329 Influence of Paralleled Capacitance Effect in Well-defined Multiple Value Logical Level System with Active Load
Authors: Chih Chin Yang, Yen Chun Lin, Hsiao Hsuan Cheng
Abstract:
Three similar negative differential resistance (NDR) profiles with both a high peak-to-valley current density ratio (PVCDR) value and a high peak current density (PCD) value in a unity resonant tunneling electronic circuit (RTEC) element are developed in this paper. The PCD values and valley current density (VCD) values of the three NDR curves are all about 3.5 A and 0.8 A, respectively. The PV values of the NDR curves are 0.40 V, 0.82 V, and 1.35 V, respectively. The VV values are 0.61 V, 1.07 V, and 1.69 V, respectively. All PVCDR values reach about 4.4 in the three NDR curves. The PCD value of 3.5 A in the triple PVCDR RTEC element is better than that of other resonant tunneling device (RTD) elements. The high PVCDR value follows from the low VCD value of about 0.8 A. The low VCD value is achieved by a suitable selection of resistors in the triple PVCDR RTEC element. The low PV values, all below 1.35 V, result in low power dissipation in the triple PVCDR RTEC element. The multiple value logical level (MVLL) system designed using the triple PVCDR RTEC element provides equidistant logical levels. The logical levels of the MVLL system are about 0.2 V, 0.8 V, 1.5 V, and 2.2 V from low voltage to high voltage, and then 2.2 V, 1.3 V, 0.8 V, and 0.2 V from high voltage back to low voltage in a half cycle of the sinusoid wave. The output levels of the four-level MVLL system are 0.3 V, 1.1 V, 1.7 V, and 2.6 V, which satisfies the NMP condition of a traditional two-bit system. The remarkable logical characteristic of the improved MVLL system with a paralleled capacitor is four significant stable logical levels of about 220 mV, 223 mV, 228 mV, and 230 mV. The stability and articulation of the logical levels of the improved MVLL system are outstanding. The average holding time of the improved MVLL system is approximately 0.14 μs. The holding time of the improved MVLL system is fourfold that of the basic MVLL system. The function of the additional capacitor in the improved MVLL system is thus successfully demonstrated.
Keywords: Capacitance, logical level, constant current source.
1328 Rule Insertion Technique for Dynamic Cell Structure Neural Network
Authors: Osama Elsarrar, Marjorie Darrah, Richard Devin
Abstract:
This paper discusses the idea of capturing an expert’s knowledge in the form of human understandable rules and then inserting these rules into a dynamic cell structure (DCS) neural network. The DCS is a form of self-organizing map that can be used for many purposes, including classification and prediction. This particular neural network is considered to be a topology preserving network that starts with no pre-structure, but assumes a structure once trained. The DCS has been used in mission and safety-critical applications, including adaptive flight control and health-monitoring in aerial vehicles. The approach is to insert expert knowledge into the DCS before training. Rules are translated into a pre-structure and then training data are presented. This idea has been demonstrated using the well-known Iris data set and it has been shown that inserting the pre-structure results in better accuracy with the same training.
Keywords: Neural network, rule extraction, rule insertion, self-organizing map.
1327 Proposition for a New Approach of Version Control System Based On ECA Active Rules
Authors: S. Benhamed, S. Hocine, D. Benhamamouch
Abstract:
We aim to provide a solution for version control of documents in web services; to that end, we propose a new approach intended especially for XML documents. The new approach is applied in a centralized repository, and this repository coexists with other repositories in a decentralized system. To achieve the activities of this approach in a standard model, we use ECA active rules. We also show how Event-Condition-Action (ECA) rules have been incorporated as a mechanism for the version control of documents. The motivation for integrating ECA rules is that they provide clear declarative semantics and induce an immediate operational realization in the system without the need for human intervention.
Keywords: ECA rule, web service, version control system, propagation.
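A minimal, hypothetical sketch of how Event-Condition-Action rules can drive version creation for documents in a repository, in the spirit of the abstract. The event names, condition and action bodies are invented for illustration and are not the authors' rule set.

```python
# ECA rules dispatching document-update events into new versions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Repository:
    versions: Dict[str, List[str]] = field(default_factory=dict)

    def latest(self, doc_id):
        return self.versions.get(doc_id, [None])[-1]

@dataclass
class ECARule:
    event: str
    condition: Callable[[Repository, dict], bool]
    action: Callable[[Repository, dict], None]

def store_new_version(repo, payload):
    repo.versions.setdefault(payload["doc_id"], []).append(payload["content"])

rules = [
    ECARule(
        event="document_updated",
        condition=lambda repo, p: p["content"] != repo.latest(p["doc_id"]),
        action=store_new_version,
    ),
]

def dispatch(repo, event, payload):
    # Declarative semantics: every matching rule whose condition holds fires.
    for rule in rules:
        if rule.event == event and rule.condition(repo, payload):
            rule.action(repo, payload)

repo = Repository()
dispatch(repo, "document_updated", {"doc_id": "d1", "content": "<a/>"})
dispatch(repo, "document_updated", {"doc_id": "d1", "content": "<a/>"})   # unchanged, no-op
dispatch(repo, "document_updated", {"doc_id": "d1", "content": "<a>x</a>"})
print(repo.versions)   # {'d1': ['<a/>', '<a>x</a>']}
```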
1326 Remarks on Some Properties of Decision Rules
Authors: Songlin Yang, Ying Ge
Abstract:
This paper shows, by presenting a counterexample, that some properties of decision rules in the literature do not hold. We give sufficient and necessary conditions under which these properties are valid. These results will be helpful when one tries to choose the right decision rules in research on rough set theory.
Keywords: Rough set, decision table, decision rule, coverage factor.
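For reference, the quality measures usually attached to a decision rule in the rough set literature are given below; these are the standard textbook definitions and not necessarily the exact formulations examined in the paper.

```latex
% Certainty and coverage factors of a decision rule C -> D over a decision table,
% where C(x) and D(x) denote the sets of objects matching the condition and the
% decision, respectively (standard rough-set definitions; notation may differ).
\[
  \operatorname{cer}(C \rightarrow D) = \frac{|C(x) \cap D(x)|}{|C(x)|},
  \qquad
  \operatorname{cov}(C \rightarrow D) = \frac{|C(x) \cap D(x)|}{|D(x)|}
\]
```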
1325 Calculation of Inflation from Salaries Instead of Consumer Products: A Logical Exercise
Authors: E. Dahlen
Abstract:
Inflation can be calculated from either the prices of consumer products or from salaries. This paper presents a logical exercise showing that it is easier to calculate inflation from salaries than from consumer products. While the prices of consumer products may change due to technological advancement, such as automation, which must be corrected for, salaries do not. If technological advancements are not accounted for in calculations based on consumer product prices, inflation can be confused with real wage changes, since both inflation and real wage changes affect the prices of consumer products. The method employed in this paper is a logical exercise. Logical arguments are presented that suggest the existence of many different feasible ways by which inflation can be determined. Then a short mathematical exercise is presented which shows that one of these methods, using salaries, contains the fewest unknown parameters and hence is the preferred method, since the risk of mistakes is lower. From the results, it can be concluded that salaries, rather than consumer products, should be used to calculate inflation.
Keywords: Inflation, logic, math, real wages.
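As one hedged illustration of why product prices drag in an extra unknown (this markup-pricing model is an assumption made here, not the paper's own derivation): if prices are set as a fixed markup over unit labour cost, then price inflation mixes wage inflation with productivity change, whereas the wage series alone does not involve the productivity term.

```latex
% Assume p_t = (1 + \mu)\, w_t / a_t, with wage w_t, labour productivity a_t and
% a constant markup \mu (an illustrative assumption). Log-differencing gives:
\[
  \underbrace{\Delta \ln p_t}_{\text{product-price inflation}}
  \;=\;
  \underbrace{\Delta \ln w_t}_{\text{wage inflation}}
  \;-\;
  \underbrace{\Delta \ln a_t}_{\text{productivity (technology) change}}
\]
% Recovering inflation from consumer prices thus requires estimating the
% productivity term as well, i.e. one more unknown parameter than the wage series.
```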
1324 Discovery of Fuzzy Censored Production Rules from Large Set of Discovered Fuzzy if then Rules
Authors: Tamanna Siddiqui, M. Afshar Alam
Abstract:
A Censored Production Rule (CPR) is an extension of the standard production rule that is concerned with the problems of reasoning with incomplete information subject to resource constraints and of reasoning efficiently with exceptions. A CPR has the form: IF A (Condition) THEN B (Action) UNLESS C (Censor), where C is the exception condition. Fuzzy CPRs are obtained by augmenting an ordinary fuzzy production rule "If X is A then Y is B" with an exception condition and are written in the form "If X is A then Y is B Unless Z is C". Such rules are employed in situations in which the fuzzy conditional statement "If X is A then Y is B" holds frequently and the exception condition "Z is C" holds rarely. Thus the "If X is A then Y is B" part of the fuzzy CPR expresses important information, while the unless part acts only as a switch that changes the polarity of "Y is B" to "Y is not B" when the assertion "Z is C" holds. The proposed approach is an attempt to discover fuzzy censored production rules from a set of discovered fuzzy if-then rules of the form: A(X) ⇒ B(Y) || C(Z).
Keywords: Uncertainty quantification, fuzzy if-then rules, fuzzy censored production rules, learning algorithm.
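A minimal sketch of evaluating one fuzzy censored production rule of the kind described above: the consequent degree follows the antecedent unless the censor fires strongly enough, in which case the polarity of the conclusion is switched. The membership functions and the censor threshold are invented for illustration.

```python
# Evaluating "IF X is A THEN Y is B UNLESS Z is C" with toy membership functions.
def mu_A(x):        # antecedent: "x is high"
    return max(0.0, min(1.0, (x - 2.0) / 3.0))

def mu_C(z):        # censor: "z is critical"
    return max(0.0, min(1.0, (z - 8.0) / 2.0))

def fuzzy_cpr(x, z, censor_threshold=0.5):
    antecedent = mu_A(x)
    censor = mu_C(z)
    if censor < censor_threshold:
        return {"Y is B": antecedent, "Y is not B": 0.0}
    # Censor holds: the polarity of the conclusion flips.
    return {"Y is B": 0.0, "Y is not B": antecedent}

print(fuzzy_cpr(x=4.5, z=3.0))   # censor does not hold -> "Y is B" with degree ~0.83
print(fuzzy_cpr(x=4.5, z=9.5))   # censor holds         -> "Y is not B"
```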
1323 Sensitizing Rules for Fuzzy Control Charts
Authors: N. Pekin Alakoç, A. Apaydın
Abstract:
Quality control charts indicate out-of-control conditions if any nonrandom pattern of the points is observed or if any point is plotted beyond the control limits. Nonrandom patterns on Shewhart control charts are tested with sensitizing rules. When the processes are defined with fuzzy set theory, traditional sensitizing rules are insufficient for defining all out-of-control conditions. This is due to the fact that fuzzy numbers increase the number of out-of-control conditions. The purpose of the study is to develop a set of fuzzy sensitizing rules, which increase the flexibility and sensitivity of fuzzy control charts. Fuzzy sensitizing rules simplify the identification of out-of-control situations, which results in a decrease in the calculation time and the number of evaluations in the fuzzy control chart approach.
Keywords: Fuzzy set theory, quality control charts, run rules, unnatural patterns.
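For context, the sketch below implements two classical crisp sensitizing rules of the kind the fuzzy rules generalize: a point beyond the three-sigma limits, and a run of eight consecutive points on one side of the centre line. The data and limits are invented; the paper's fuzzy counterparts operate on fuzzy observations instead.

```python
# Two classical sensitizing (run) rules on a crisp control chart.
def beyond_limits(points, center, sigma):
    return [i for i, x in enumerate(points) if abs(x - center) > 3 * sigma]

def run_of_eight(points, center, run_length=8):
    signals, streak, side = [], 0, 0
    for i, x in enumerate(points):
        current = 1 if x > center else -1 if x < center else 0
        streak = streak + 1 if current == side and current != 0 else (1 if current else 0)
        side = current
        if streak >= run_length:
            signals.append(i)
    return signals

data = [10.1, 9.8, 10.3, 13.9, 10.2, 10.4, 10.1, 10.6, 10.2, 10.5, 10.3, 10.7]
print(beyond_limits(data, center=10.0, sigma=1.0))   # [3]: the 13.9 observation
print(run_of_eight(data, center=10.0))               # [9, 10, 11]: eight or more in a row above the centre line
```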
1322 CFD Simulation of Dense Gas Extraction through Polymeric Membranes
Authors: Azam Marjani, Saeed Shirazian
Abstract:
In this study, a general methodology is presented to predict the performance of a continuous near-critical fluid extraction process for removing compounds from aqueous solutions using hollow fiber membrane contactors. A comprehensive 2D mathematical model was developed to study the Porocritical extraction process. The system studied in this work is a membrane-based extractor of ethanol and acetone from aqueous solutions using near-critical CO2. Predictions of extraction percentages obtained by simulation have been compared to the experimental values reported by Bothun et al. [5]. Simulations of the extraction percentage of ethanol and acetone show an average difference of 9.3% and 6.5% with the experimental data, respectively. The more accurate predictions of the extraction of acetone can be explained by a better estimation of the transport properties in the aqueous phase, which control the extraction of this solute.
Keywords: Solvent extraction, membrane, mass transfer, dense gas, modeling.