Search results for: GIS knowledge discovery
7938 Knowledge Discovery from Production Databases for Hierarchical Process Control
Authors: Pavol Tanuska, Pavel Vazan, Michal Kebisek, Dominika Jurovata
Abstract:
The paper presents the results of a project oriented toward the use of knowledge discovered from production systems for the needs of hierarchical process control. One of the main project goals was the proposal of a knowledge discovery model for process control. Specific data mining methods and techniques were used for defined process control problems. The gained knowledge was applied to a real production system; thus, the proposed solution has been verified. The paper documents how newly discovered knowledge can be applied in real hierarchical process control. The opportunities for applying the proposed knowledge discovery model to hierarchical process control are specified. Keywords: hierarchical process control, knowledge discovery from databases, neural network, process control
Procedia PDF Downloads 481
7937 Data Mining As A Tool For Knowledge Management: A Review
Authors: Maram Saleh
Abstract:
Knowledge has become an essential resource in today's economy and the most important asset for maintaining competitive advantage in organizations. The importance of knowledge has led organizations to manage their knowledge assets and resources through all the knowledge management stages: knowledge creation, knowledge storage, knowledge sharing, and knowledge use. Research on data mining has continued to grow in recent years in both the business and educational fields. Data mining is one of the most important steps of the knowledge discovery in databases process, aiming to extract implicit, previously unknown, but useful knowledge, and it is considered a significant subfield of knowledge management. Data mining has great potential to help organizations focus on extracting the most important information from their data warehouses. Data mining tools and techniques can predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. This review paper explores the applications of data mining techniques in supporting the knowledge management process as an effective knowledge discovery technique. In this paper, we identify the relationship between data mining and knowledge management and then focus on introducing some applications of data mining techniques in knowledge management in some real-life domains. Keywords: data mining, knowledge management, knowledge discovery, knowledge creation
Procedia PDF Downloads 208
7936 Knowledge-Driven Decision Support System Based on Knowledge Warehouse and Data Mining by Improving Apriori Algorithm with Fuzzy Logic
Authors: Pejman Hosseinioun, Hasan Shakeri, Ghasem Ghorbanirostam
Abstract:
In recent years, research on knowledge sources, decision support systems, data mining, and the procedure of knowledge discovery in databases has grown in importance, and each of these aspects is considered to affect the others. In this article, we merge information sources and knowledge sources to propose a knowledge-based system, within the limits of management, based on the storing and retrieval of knowledge to manage information and improve decision-making and resources. We use data mining methods and the Apriori algorithm in the knowledge discovery procedure. One problem with the Apriori algorithm is that the user must specify the minimum support threshold for the regularities. Imagine that a user wants to apply the Apriori algorithm to a database with millions of transactions. The user certainly does not have knowledge of all the transactions in that database and therefore cannot specify a suitable threshold. Our purpose in this article is to improve the Apriori algorithm. To achieve this goal, we use fuzzy logic to place the data into different clusters before applying the Apriori algorithm to the data in the database, and we also suggest the most suitable threshold to the user automatically. Keywords: decision support system, data mining, knowledge discovery, data discovery, fuzzy logic
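The cluster-then-mine idea can be illustrated with a short sketch. The snippet below is a simplified, hypothetical illustration only: it uses crisp k-means from scikit-learn as a stand-in for the paper's fuzzy clustering, a toy one-hot transaction table, and a median-item-frequency heuristic (our own assumption, not the authors' rule) to suggest a per-cluster minimum support before running Apriori from mlxtend.

```python
# Hypothetical sketch: cluster transactions first, then suggest a per-cluster
# minimum support before running Apriori (the paper uses fuzzy clustering;
# crisp k-means is used here as a simplified stand-in).
import pandas as pd
from sklearn.cluster import KMeans
from mlxtend.frequent_patterns import apriori

# Toy one-hot transaction table (rows = transactions, columns = items).
data = pd.DataFrame(
    [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1],
     [0, 1, 1, 1], [1, 0, 0, 1], [0, 0, 1, 1]],
    columns=["bread", "milk", "beer", "diapers"],
).astype(bool)

# Step 1: group similar transactions so each cluster gets its own threshold.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)

for cluster_id in sorted(set(labels)):
    part = data[labels == cluster_id]
    # Step 2: suggest a threshold automatically, e.g. the median item frequency
    # inside the cluster (one possible heuristic, not the paper's exact rule),
    # floored so the support stays positive.
    suggested_support = max(float(part.mean().median()), 0.1)
    # Step 3: mine frequent itemsets with the suggested threshold.
    itemsets = apriori(part, min_support=suggested_support, use_colnames=True)
    print(f"cluster {cluster_id}: min_support={suggested_support:.2f}")
    print(itemsets)
```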
Procedia PDF Downloads 336
7935 Application of Data Mining Techniques for Tourism Knowledge Discovery
Authors: Teklu Urgessa, Wookjae Maeng, Joong Seek Lee
Abstract:
Five implementations of three data mining classification techniques were applied in order to extract important insights from tourism data. The aim was to find the best-performing algorithm among the compared ones for tourism knowledge discovery. The knowledge discovery from data process was used as the process model. The 10-fold cross-validation method was used for testing. Various data preprocessing activities were performed to obtain the final dataset for model building. Classification models of the selected algorithms were built with different scenarios on the preprocessed dataset. The best-performing algorithm on the tourism dataset was Random Forest (76%) before applying information-gain-based attribute selection, and J48 (C4.5) (75%) after selecting the attributes most relevant to the class (target) attribute. In terms of model-building time, attribute selection improves the efficiency of all algorithms; the Artificial Neural Network (multilayer perceptron) showed the highest improvement (90%). The rules extracted from the decision tree model are presented; they show intricate, non-trivial knowledge and insights that would otherwise not be discovered by simple statistical analysis, despite the moderate accuracy achieved by the classification algorithms. Keywords: classification algorithms, data mining, knowledge discovery, tourism
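A minimal sketch of this kind of evaluation, under stated assumptions, is shown below: it uses a synthetic dataset in place of the tourism data, scikit-learn's Random Forest and an entropy-based decision tree in place of J48, and mutual-information feature selection as a proxy for information-gain attribute selection. All figures it prints are illustrative, not the paper's results.

```python
# Hypothetical sketch of the evaluation described above: 10-fold cross-validation
# of Random Forest and a C4.5-style decision tree, with and without
# information-gain-style attribute selection (dataset and column count invented).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                           random_state=0)  # stand-in for the tourism dataset

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    # scikit-learn's tree is CART rather than C4.5; entropy is the closest analogue.
    "J48-like tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
}

for name, clf in models.items():
    full = cross_val_score(clf, X, y, cv=10).mean()
    selected = make_pipeline(SelectKBest(mutual_info_classif, k=8), clf)
    reduced = cross_val_score(selected, X, y, cv=10).mean()
    print(f"{name}: all attributes {full:.2f}, top-8 attributes {reduced:.2f}")
```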
Procedia PDF Downloads 295
7934 Research on Construction of Subject Knowledge Base Based on Literature Knowledge Extraction
Authors: Yumeng Ma, Fang Wang, Jinxia Huang
Abstract:
In the big data era, researchers have higher requirements for the efficient acquisition and utilization of domain knowledge. As literature is an effective way for researchers to quickly and accurately understand the research situation in their field, knowledge discovery based on literature has become a new research method. As a tool to organize and manage knowledge in a specific domain, a subject knowledge base can be used to mine and present the knowledge behind the literature to meet users' personalized needs. This study designs the construction route of a subject knowledge base for specific research problems. An information extraction method based on knowledge engineering is adopted. Firstly, the subject knowledge model is built through the abstraction of the research elements. Then, under the guidance of the knowledge model, extraction rules for knowledge points are compiled to analyze, extract, and correlate entities, relations, and attributes in the literature. Finally, a database platform based on this structured knowledge is developed that can provide a variety of services such as knowledge retrieval, knowledge browsing, knowledge Q&A, and visual correlation. Taking the construction practice in the field of activating blood circulation and removing stasis as an example, this study analyzes how to construct a subject knowledge base based on literature knowledge extraction. As the system functional test shows, this subject knowledge base can realize the expected service scenarios, such as quick knowledge queries, related discovery of knowledge and literature, and knowledge organization. As this study enables the subject knowledge base to help researchers locate and acquire deep domain knowledge quickly and accurately, it provides a transformation model for knowledge resource construction and personalized precision knowledge services in the data-intensive research environment. Keywords: knowledge model, literature knowledge extraction, precision knowledge services, subject knowledge base
Procedia PDF Downloads 163
7933 Intuitional Insight in Islamic Mysticism
Authors: Maryam Bakhtyar, Pegah Akrami
Abstract:
Intuitional insight or mystical cognition is an insight different from the common, concrete, and intellectual insights. This kind of insight is achieved not by visionary contemplation but by the recitation of God, self-purification, and the mystical life. In this insight, there is no distance or medium between the subject of cognition and its object; they have a sort of unification, unison, and incorporation. As a result, the knowledgeable consider this insight direct, immediate, and personal. The object of this insight is God, the cosmos' creatures, and the general inner and hidden aspect of the world, which in the view of mystics is nothing but God's manifestations. As our common cognitions have diversity and stages, intuitional insight also has diversity and levels. As our senses are divided into the concrete and the rational, mystical discovery is divided into superficial discovery and spiritual discovery. According to Islamic mystics, the preferable way to know God and believe in Him is intuitional insight. There are two important criteria for evaluating mystical intuition, especially for beginner mystics: intellect and revelation. Indeed, a conclusion and brief evaluation of the Islamic mystics' viewpoint is the main subject of this paper. Keywords: intuition, discovery, mystical insight, personal knowledge, superficial discovery, spiritual discovery
Procedia PDF Downloads 94
7932 Ontology-Driven Knowledge Discovery and Validation from Admission Databases: A Structural Causal Model Approach for Polytechnic Education in Nigeria
Authors: Bernard Igoche Igoche, Olumuyiwa Matthew, Peter Bednar, Alexander Gegov
Abstract:
This study presents an ontology-driven approach for knowledge discovery and validation from admission databases in Nigerian polytechnic institutions. The research aims to address the challenges of extracting meaningful insights from vast amounts of admission data and utilizing them for decision-making and process improvement. The proposed methodology combines the knowledge discovery in databases (KDD) process with a structural causal model (SCM) ontological framework. The admission database of Benue State Polytechnic Ugbokolo (Benpoly) is used as a case study. The KDD process is employed to mine and distill knowledge from the database, while the SCM ontology is designed to identify and validate the important features of the admission process. The SCM validation is performed using the conditional independence test (CIT) criteria, and an algorithm is developed to implement the validation process. The identified features are then used for machine learning (ML) modeling and prediction of admission status. The results demonstrate the adequacy of the SCM ontological framework in representing the admission process and the high predictive accuracies achieved by the ML models, with k-nearest neighbors (KNN) and support vector machine (SVM) achieving 92% accuracy. The study concludes that the proposed ontology-driven approach contributes to the advancement of educational data mining and provides a foundation for future research in this domain. Keywords: admission databases, educational data mining, machine learning, ontology-driven knowledge discovery, polytechnic education, structural causal model
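A hedged sketch of what a conditional independence test can look like is given below. It uses a partial-correlation-style test (regress X and Y on Z and correlate the residuals), which is only one common choice and is not necessarily the CIT criterion the authors implemented; the variable names and data are synthetic illustrations rather than fields from the Benpoly admission database.

```python
# Hypothetical sketch of one possible conditional independence test (CIT) for
# validating edges of a structural causal model: test whether X and Y are
# independent given Z by correlating the residuals of regressing each on Z
# (a linear, Gaussian approximation; not necessarily the authors' criterion).
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

def conditional_independence(x, y, z, alpha=0.05):
    z = z.reshape(-1, 1)
    rx = x - LinearRegression().fit(z, x).predict(z)   # residual of X given Z
    ry = y - LinearRegression().fit(z, y).predict(z)   # residual of Y given Z
    r, p_value = stats.pearsonr(rx, ry)
    return p_value > alpha, r, p_value

rng = np.random.default_rng(42)
z = rng.normal(size=500)                 # e.g. an entry qualification score (invented)
x = 2 * z + rng.normal(size=500)         # e.g. a screening result driven by z
y = -z + rng.normal(size=500)            # e.g. an admission indicator driven by z
independent, r, p = conditional_independence(x, y, z)
print(f"X independent of Y given Z: {independent} (partial r={r:.2f}, p={p:.3f})")
```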
Procedia PDF Downloads 64
7931 Algorithms used in Spatial Data Mining GIS
Authors: Vahid Bairami Rad
Abstract:
Extracting knowledge from spatial data such as GIS data is important for reducing the data and extracting information. Therefore, the development of new techniques and tools that support the human in transforming data into useful knowledge has been the focus of the relatively new and interdisciplinary research area of knowledge discovery in databases. Thus, we introduce a set of database primitives, or basic operations, for spatial data mining which are sufficient to express most of the spatial data mining algorithms from the literature. This approach has several advantages. Similar to the relational standard language SQL, the use of standard primitives will speed up the development of new data mining algorithms and will also make them more portable. We introduce a database-oriented framework for spatial data mining which is based on the concepts of neighborhood graphs and paths. A small set of basic operations on these graphs and paths is defined as database primitives for spatial data mining. Furthermore, techniques to efficiently support the database primitives in a commercial DBMS are presented. Keywords: spatial database, knowledge discovery in databases, data mining, spatial relationship, predictive data mining
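As a rough, hypothetical illustration of the neighborhood-graph primitive, the sketch below builds a graph over a handful of invented spatial objects whose pairwise distance falls below a threshold and then enumerates short neighborhood paths with networkx. A real system would, as the abstract notes, evaluate such primitives against geometries stored in a spatial DBMS.

```python
# Hypothetical sketch of the neighborhood-graph primitive: connect spatial
# objects whose pairwise distance is below a threshold, then enumerate short
# neighborhood paths starting from one object (coordinates are invented).
import math
import networkx as nx

objects = {                      # object id -> (x, y) coordinate
    "school": (0.0, 0.0),
    "park": (1.0, 0.5),
    "river": (1.5, 1.8),
    "factory": (5.0, 5.0),
}

def neighborhood_graph(objs, max_dist=2.0):
    g = nx.Graph()
    g.add_nodes_from(objs)
    for a, pa in objs.items():
        for b, pb in objs.items():
            if a < b and math.dist(pa, pb) <= max_dist:
                g.add_edge(a, b)     # "neighbor-of" relation
    return g

g = neighborhood_graph(objects)
# Neighborhood paths of length <= 2 starting from "school".
paths = [p for target in g.nodes if target != "school"
         for p in nx.all_simple_paths(g, "school", target, cutoff=2)]
print(g.edges())
print(paths)
```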
Procedia PDF Downloads 462
7930 Review and Comparison of Associative Classification Data Mining Approaches
Authors: Suzan Wedyan
Abstract:
Data mining is one of the main phases in knowledge discovery in databases (KDD), responsible for finding hidden and useful knowledge in databases. There are many different tasks in data mining, including regression, pattern recognition, clustering, classification, and association rules. In recent years, a promising data mining approach called associative classification (AC) has been proposed; AC integrates classification and association rule discovery to build classification models (classifiers). This paper surveys and critically compares several AC algorithms with reference to the different procedures used in each algorithm, such as rule learning, rule sorting, rule pruning, classifier building, and class allocation for test cases. Keywords: associative classification, classification, data mining, learning, rule ranking, rule pruning, prediction
Procedia PDF Downloads 537
7929 Medical Knowledge Management since the Integration of Heterogeneous Data until the Knowledge Exploitation in a Decision-Making System
Authors: Nadjat Zerf Boudjettou, Fahima Nader, Rachid Chalal
Abstract:
Knowledge management is about acquiring and representing knowledge relevant to a domain, a task, or a specific organization in order to facilitate access, reuse, and evolution. This usually means building, maintaining, and evolving an explicit representation of knowledge. The next step is to provide access to that knowledge, that is to say, its dissemination, in order to enable effective use. Knowledge management in the medical field aims to improve the performance of the medical organization by allowing individuals in the care facility (doctors, nurses, paramedics, etc.) to capture, share, and apply collective knowledge in order to make optimal decisions in real time. In this paper, we propose a knowledge management approach based on a technique for integrating heterogeneous data in the medical field by creating a data warehouse, a technique for extracting knowledge from medical data by choosing a data mining technique, and finally a technique for exploiting that knowledge in a case-based reasoning system. Keywords: data warehouse, data mining, knowledge discovery in databases, KDD, medical knowledge management, Bayesian networks
Procedia PDF Downloads 395
7928 Application of Knowledge Discovery in Database Techniques in Cost Overruns of Construction Projects
Authors: Mai Ghazal, Ahmed Hammad
Abstract:
Cost overruns in construction projects are considered a worldwide challenge, since cost performance is one of the main measures of success along with schedule performance. To overcome this problem, studies have been conducted to investigate the factors behind cost overruns, and projects' historical data have been analyzed to extract new and useful knowledge from them. This research studies and analyzes the effect of some factors causing cost overruns using historical data from completed construction projects. These factors are then used to estimate the probability of cost overrun occurrence and to predict its percentage for future projects. First, an intensive literature review was conducted to study all the factors that cause cost overruns in construction projects; then another review covered previous research papers on mining processes for dealing with cost overruns. Second, a data warehouse was structured which organizations can use to store their future data in a well-organized way so that it can be easily analyzed later. Third, twelve quantitative factors whose data are frequently available in construction projects were selected as the analyzed factors and suggested predictors for the proposed model. Keywords: construction management, construction projects, cost overrun, cost performance, data mining, data warehousing, knowledge discovery, knowledge management
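A minimal, hypothetical sketch of the kind of two-part prediction described above (probability of overrun occurrence, then overrun percentage) is shown below. The factor names, the synthetic data, and the choice of logistic plus linear regression are all illustrative assumptions; the paper's twelve factors and its actual model are not reproduced here.

```python
# Hypothetical sketch: estimate the probability that a project overruns its
# budget, then predict the overrun percentage for projects that do overrun
# (synthetic records; factor names are illustrative, not the paper's twelve).
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(7)
n = 300
# Example factors: planned duration (months), contract value (M$), design changes.
X = np.column_stack([rng.uniform(6, 48, n),
                     rng.uniform(1, 100, n),
                     rng.integers(0, 20, n)])
overrun_pct = 0.4 * X[:, 2] + 0.05 * X[:, 0] + rng.normal(0, 2, n)  # synthetic truth
overrun_occurred = (overrun_pct > 5).astype(int)

occurrence_model = LogisticRegression(max_iter=1000).fit(X, overrun_occurred)
percentage_model = LinearRegression().fit(X[overrun_occurred == 1],
                                          overrun_pct[overrun_occurred == 1])

new_project = np.array([[24, 35.0, 12]])   # duration, value, design changes
prob = occurrence_model.predict_proba(new_project)[0, 1]
pct = percentage_model.predict(new_project)[0]
print(f"P(overrun) = {prob:.2f}, predicted overrun = {pct:.1f}%")
```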
Procedia PDF Downloads 371
7927 Machine Learning Methods for Network Intrusion Detection
Authors: Mouhammad Alkasassbeh, Mohammad Almseidin
Abstract:
Network security engineers work to keep services available at all times by handling intruder attacks. An Intrusion Detection System (IDS) is one of the available mechanisms used to sense and classify abnormal actions. Therefore, the IDS must always be up to date with the latest intruder attack signatures to preserve the confidentiality, integrity, and availability of the services. The speed of the IDS is a very important issue, as is learning new attacks. This research work illustrates how the Knowledge Discovery and Data Mining (or Knowledge Discovery in Databases, KDD) dataset is very handy for testing and evaluating different machine learning techniques. It mainly focuses on the KDD preprocessing part in order to prepare a decent and fair experimental dataset. The J48, MLP, and Bayes Network classifiers have been chosen for this study. It has been shown that the J48 classifier achieved the highest accuracy rate for detecting and classifying all KDD dataset attacks, which are of type DOS, R2L, U2R, and PROBE.
Procedia PDF Downloads 235
7926 Network Word Discovery Framework Based on Sentence Semantic Vector Similarity
Authors: Ganfeng Yu, Yuefeng Ma, Shanliang Yang
Abstract:
Word discovery is a key problem in text information retrieval technology. New word discovery methods tend to be closely tied to individual words because they generally obtain new-word results by analyzing words. With the popularity of social networks, individual netizens and online self-media have generated various network texts for the convenience of online life, including network words that are far from standard Chinese expression. How to detect network words is one of the important goals in the field of text information retrieval today. In this paper, we integrate a word embedding model and clustering methods to propose a network word discovery framework based on sentence semantic similarity (S³-NWD) to detect network words effectively from a corpus. The framework constructs sentence semantic vectors through a distributed representation model, uses the similarity of sentence semantic vectors to determine the semantic relationship between sentences, and finally realizes network word discovery through semantic replacement between sentences. The experiments verify that the framework not only discovers network words rapidly but also recovers the standard-word meaning of the discovered network words, which reflects the effectiveness of our work. Keywords: text information retrieval, natural language processing, new word discovery, information extraction
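The semantic-replacement idea can be sketched in a few lines. The example below is a deliberately simplified stand-in: it uses character n-gram TF-IDF vectors instead of the paper's distributed representation model, a three-sentence toy corpus, and an invented candidate word, and it merely checks whether substituting a standard word for the candidate leaves the sentence vector nearly unchanged.

```python
# Simplified, hypothetical sketch of semantic replacement: represent sentences
# as vectors and check whether swapping a candidate network word for a standard
# word leaves the sentence's vector nearly unchanged (character n-gram TF-IDF
# is a crude stand-in for the distributed sentence representation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "this movie is awesome",
    "this movie is awsm",              # "awsm" is an invented candidate network word
    "the weather today is awesome",
]
candidate, standard = "awsm", "awesome"

vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit(corpus)

def similarity(a: str, b: str) -> float:
    va, vb = vectorizer.transform([a]), vectorizer.transform([b])
    return float(cosine_similarity(va, vb)[0, 0])

for sentence in corpus:
    if candidate in sentence.split():
        replaced = sentence.replace(candidate, standard)
        # A high score suggests the candidate can stand in for the standard word.
        print(f"{sentence!r} vs {replaced!r}: similarity {similarity(sentence, replaced):.2f}")
```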
Procedia PDF Downloads 95
7925 CERD: Cost Effective Route Discovery in Mobile Ad Hoc Networks
Authors: Anuradha Banerjee
Abstract:
A mobile ad hoc network is an infrastructure-less network in which nodes are free to move independently in any direction. The nodes have limited battery power; hence, we require an energy-efficient route discovery technique to enhance their lifetime and the network performance. In this paper, we propose an energy-efficient route discovery technique, CERD, that greatly reduces the number of route requests flooded into the network and also gives priority to route request packets sent from routers that have communicated with the destination very recently, over single- or multi-hop paths. This not only enhances the lifetime of nodes but also decreases the delay in tracking the destination. Keywords: ad hoc network, energy efficiency, flooding, node lifetime, route discovery
Procedia PDF Downloads 347
7924 A General Framework for Knowledge Discovery from Echocardiographic and Natural Images
Authors: S. Nandagopalan, N. Pradeep
Abstract:
The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as the active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks such as clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Coral image database, and the results show that their performance is better than previously reported results. Keywords: active contour, Bayesian, echocardiographic image, feature vector
Procedia PDF Downloads 445
7923 Evaluation of Classification Algorithms for Diagnosis of Asthma in Iranian Patients
Authors: Taha SamadSoltani, Peyman Rezaei Hachesu, Marjan GhaziSaeedi, Maryam Zolnoori
Abstract:
Introduction: Data mining is defined as a process of finding patterns and relationships in the data in a database in order to build predictive models. The application of data mining has extended into vast sectors such as healthcare services. Medical data mining aims to solve real-world problems in the diagnosis and treatment of diseases. This method applies various techniques and algorithms which have different accuracy and precision. The purpose of this study was to apply knowledge discovery and data mining techniques to the diagnosis of asthma based on patient symptoms and history. Method: Data mining includes several steps and decisions to be made by the user, starting with creating an understanding of the scope and application of previous knowledge in this area and identifying the knowledge discovery process from the stakeholders' point of view, and finishing with acting on the discovered knowledge by applying it, integrating it with other systems, and documenting and reporting it. In this study, a stepwise methodology was followed to achieve a logical outcome. Results: The sensitivity, specificity, and accuracy of the KNN, SVM, Naïve Bayes, NN, classification tree, and CN2 algorithms, and of related similar studies, were evaluated, and ROC curves were plotted to show the performance of the system. Conclusion: The results show that we can diagnose asthma with approximately ninety percent accuracy based on the demographic and clinical data. The study also showed that methods based on pattern discovery and data mining have a higher sensitivity compared to expert and knowledge-based systems. On the other hand, medical guidelines and evidence-based medicine should remain the basis of diagnostic methods; it is therefore recommended that machine learning algorithms be used in combination with knowledge-based algorithms. Keywords: asthma, data mining, classification, machine learning
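The sensitivity/specificity/ROC comparison described in the results can be sketched as follows. This is an illustrative example only: it trains a few of the named classifier families from scikit-learn on a synthetic stand-in for the demographic and clinical data, so the numbers it prints have no relation to the study's roughly ninety percent accuracy.

```python
# Hypothetical sketch of the evaluation above: compare several classifiers by
# sensitivity, specificity, and ROC AUC on a synthetic stand-in for the
# demographic and clinical asthma data (no real patient data is used).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

X, y = make_classification(n_samples=600, n_features=15, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=1)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True),
    "Naive Bayes": GaussianNB(),
    "Classification tree": DecisionTreeClassifier(random_state=1),
}

for name, clf in models.items():
    clf.fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"{name}: sens={sensitivity:.2f} spec={specificity:.2f} AUC={auc:.2f}")
```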
Procedia PDF Downloads 447
7922 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms
Authors: S. Nandagopalan, N. Pradeep
Abstract:
The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as the active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks such as clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Coral image database, and the results show that their performance is better than previously reported results. Keywords: active contour, Bayesian, echocardiographic image, feature vector
Procedia PDF Downloads 420
7921 Signal Strength Based Multipath Routing for Mobile Ad Hoc Networks
Authors: Chothmal
Abstract:
In this paper, we present a route discovery process that uses the signal strength on a link as a parameter for its inclusion in the discovered route. The proposed signal-to-interference-and-noise ratio (SINR) based multipath reactive routing protocol is named the SINR-MP protocol. The proposed SINR-MP routing protocol has the following two features: a) the SINR-MP protocol selects routes based on the SINR of the links during the route discovery process, and it therefore selects routes that have a long lifetime and a low frame error rate for data transmission; and b) the SINR-MP protocol's route discovery process is multipath, discovering more than one SINR-based route between a given source-destination pair. The multiple routes selected by our SINR-MP protocol are node-disjoint in nature, which increases their robustness against link failures, as the failure of one route will not affect the other route. The secondary route is very useful in situations where the primary route is broken, because we can then use the secondary route without triggering a new route discovery process. Due to this, the network overhead caused by a route discovery process is avoided, which greatly increases the network performance. The proposed SINR-MP routing protocol is implemented in the trial version of the network simulator QualNet. Keywords: ad hoc networks, quality of service, video streaming, H.264/SVC, multiple routes, video traces
Procedia PDF Downloads 249
7920 Data Mining and Knowledge Management Application to Enhance Business Operations: An Exploratory Study
Authors: Zeba Mahmood
Abstract:
Modern business organizations are adopting technological advancements to achieve a competitive edge and satisfy their consumers. Developments in the field of information technology systems have changed the way business is conducted today. Business operations today rely more on the data they obtain, and this data is continuously increasing in volume. Data stored in different locations is difficult to find and use without the effective implementation of data mining and knowledge management techniques. Organizations that smartly identify, obtain, and then convert data into useful formats for their decision-making and operational improvements create additional value for their customers and enhance their operational capabilities. Marketers and customer relationship departments of firms use data mining techniques to make relevant decisions; this paper emphasizes the identification of the different data mining and knowledge management techniques that are applied in different business industries. The challenges and issues in executing these techniques are also discussed and critically analyzed in this paper. Keywords: knowledge, knowledge management, knowledge discovery in databases, business, operational, information, data mining
Procedia PDF Downloads 538
7919 Curriculum Check in Industrial Design, Based on Knowledge Management in Iran Universities
Authors: Maryam Mostafaee, Hassan Sadeghi Naeini, Sara Mostowfi
Abstract:
Today, knowledge management (KM) plays an important role in organizations. Basically, knowledge management is concerned with making the most of an organization's workforce in order to advance the goals and demands of that organization. The purpose of knowledge management is not only to manage existing documentation, information, and data throughout an organization; the most important part of KM is to control the most important and key elements of that information and data. It is certainly about delivering the information employees need, at the time it is needed, from genuine sources, in order to bring out the best performance and results; in this way, the performance of the organization will be maximized. Many definitions of the objectives of management have been published. Management is the science of instilling accurate knowledge, through repetition, into the organization in order to shape it and take full advantage of it for reaching the organization's goals and targets, to be used by employees and users. According to the Kalinz dictionary, knowledge is 'facts, emotions or experiences known by a person or group of people'; according to the Merriam-Webster dictionary, management is 'the act or skill of controlling and making decisions about a business, department, sports team, etc.'; according to the Oxford dictionary, it is the 'efficient handling of information and resources within a commercial organization'; and, also from the Oxford dictionary, industrial design is 'the art or process of designing manufactured products: the scale is a beautiful work of industrial design.' When knowledge management is performed effectively in universities, the discovery and creation of new knowledge are facilitated and procedures for knowledge exchange between different units are established. College officials and employees understand the importance of knowledge for the university's success and make greater efforts to prevent errors. In this strategy, the relevant factors and trends, and how they are managed in the university, are explored. In this research, Iranian universities are analyzed with respect to their use of knowledge management and how they behave regarding: 1. the discovery of knowledge management in Iranian universities, 2. the transfer of existing knowledge between faculties and units, 3. the participation of employees in acquiring, using, and transferring knowledge, 4. the accessibility of valid sources, 5. research on the factors and correct processes in the university. We point to some examples that we have already analyzed: enabling better and faster decision-making, making it easy to find relevant information and resources, reusing ideas, documents, and expertise, and avoiding redundant effort. Conclusion: It is found that the effectiveness of knowledge management in the industrial design field is low. Based on checklists filled in by education officials and professors in universities and on the calculated coefficient of effectiveness, knowledge management has not yet attained its proper place. Keywords: knowledge management, industrial design, educational curriculum, learning performance
Procedia PDF Downloads 370
7918 Lightweight Cryptographically Generated Address for IPv6 Neighbor Discovery
Authors: Amjed Sid Ahmed, Rosilah Hassan, Nor Effendy Othman
Abstract:
The limited functioning of Internet Protocol version 4 (IPv4) has necessitated the development of the Internetworking Protocol next generation (IPng) to curb these challenges. The IPng is also referred to as Internet Protocol version 6 (IPv6) and includes the Neighbor Discovery Protocol (NDP). The latter performs the roles of address auto-configuration, Router Discovery (RD), and Neighbor Discovery (ND). Furthermore, the role of the NDP entails redirecting services, detecting duplicate addresses, and detecting unreachable services. Although NDP assumes that the link's nodes trust each other, several crucial attacks may affect the protocol. The Internet Engineering Task Force (IETF) has therefore recommended the implementation of the Secure Neighbor Discovery (SEND) protocol to tackle the security issues in NDP. The SEND protocol is mainly used for the validation of address ownership, techniques for inhibiting malicious responses, and router certification procedures. For the routine running of these tasks, SEND relies on the following options: Cryptographically Generated Address (CGA), RSA Signature, Nonce, and Timestamp. CGA is produced at a very high cost, making this the most notable disadvantage of SEND. In this paper, a clear description of the constituents of CGA and its operation is given, together with recommendations for improvements in its generation. Keywords: CGA, IPv6, NDP, SEND
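To make the cost of CGA generation concrete, here is a much-simplified sketch in the spirit of RFC 3972 (it is not a conformant implementation and omits DER encoding, extension fields, and collision handling). The expensive part is the brute-force search for a modifier whose Hash2 value has 16*sec leading zero bits, which is exactly the cost that grows steeply with the sec parameter.

```python
# Simplified, non-conformant sketch of CGA generation in the spirit of RFC 3972:
# brute-force a modifier until Hash2 has 16*sec leading zero bits, then derive
# the interface identifier from Hash1 over the full parameter structure.
import hashlib
import os

def generate_cga_interface_id(public_key: bytes, subnet_prefix: bytes,
                              sec: int = 1, collision_count: int = 0) -> bytes:
    # Step 1: find a modifier satisfying the Hash2 condition (this is the
    # expensive part, growing roughly exponentially with the sec parameter).
    modifier = os.urandom(16)
    while True:
        hash2 = hashlib.sha1(modifier + b"\x00" * 9 + public_key).digest()
        if hash2.startswith(b"\x00" * (2 * sec)):   # 16*sec leading zero bits
            break
        modifier = os.urandom(16)

    # Step 2: Hash1 over the parameter structure yields the interface identifier.
    params = modifier + subnet_prefix + bytes([collision_count]) + public_key
    hash1 = hashlib.sha1(params).digest()
    iid = bytearray(hash1[:8])
    iid[0] = (iid[0] & 0x1F) | (sec << 5)   # encode sec in the top 3 bits
    iid[0] &= 0b11111100                    # clear the "u" and "g" bits
    return bytes(iid)

# Example usage with a placeholder public key and an all-zero /64 prefix.
iid = generate_cga_interface_id(public_key=b"example-der-encoded-key",
                                subnet_prefix=bytes(8), sec=1)
print(iid.hex())
```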
Procedia PDF Downloads 385
7917 Education and Development: An Overview of Islam
Authors: Rasheed Sanusi Adeleke
Abstract:
Several attempts have been made by scholars, both medieval and contemporary, to assess the impact of Islam on scientific discovery. Less attention, however, is usually accorded to the historical antecedents of the earlier Muslim scholars who made strenuous efforts toward those discoveries. Islam, as a divine religion, places a high premium on the acquisition of knowledge, especially that of the sciences. It considers knowledge a comprehensive whole, covering both the spiritual and material aspects of human life. Islam touches every aspect of human life for the growth, development, and advancement of society. The acquisition of knowledge of the humanities and social sciences, as well as the pure and applied sciences, is comprehensively expressed in Islamic education. Moreover, history portrays the leading, indelible roles played by the early Muslims in these various fields of knowledge. That is why Islam has declared the acquisition of knowledge compulsory for all Muslims. This paper therefore analyses the contributions of Islam to civilization, with particular reference to the sciences. It also affirms that Islam is more than a religion of prayers and rituals. The work is historical, analytical, and explorative in nature. Recommendations are also put forward as suggestions for the present generation and posterity in general, and Muslims in particular. Keywords: education, development, Islam, development and Islam
Procedia PDF Downloads 436
7916 A Review on Existing Challenges of Data Mining and Future Research Perspectives
Authors: Hema Bhardwaj, D. Srinivasa Rao
Abstract:
Technology for analysing, processing, and extracting meaningful data from enormous and complicated datasets can be termed 'big data.' The techniques of big data mining and big data analysis are extremely helpful for business activities such as making decisions, building organisational plans, researching the market efficiently, and improving sales, because typical management tools cannot handle such complicated datasets. Big data brings special computational and statistical issues, such as measurement errors, noise accumulation, spurious correlation, and storage and scalability limitations. These unique problems call for new computational and statistical paradigms. This research paper offers an overview of the literature on big data mining and its process, along with its problems and difficulties, with a focus on the unique characteristics of big data. Organizations face several difficulties when undertaking data mining, which has an impact on their decision-making. Every day, terabytes of data are produced, yet only around 1% of that data is actually analyzed. The ideas of data mining and analysis and the knowledge discovery techniques that have recently been developed, together with practical application systems, are presented in this study. The article's conclusion also includes a list of issues and difficulties for further research in the area. The report discusses management's main big data and data mining challenges. Keywords: big data, data mining, data analysis, knowledge discovery techniques, data mining challenges
Procedia PDF Downloads 110
7915 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection
Authors: Yulan Wu
Abstract:
With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised training of models within specific domains, and their effectiveness diminishes when they are applied to identify fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed: by assigning news to its specific area in advance, judgments of fake news in the corresponding field may be more accurate. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. Firstly, to effectively retain the multi-domain knowledge of the text, a low-dimensional vector capturing domain embeddings is generated for each news text. Subsequently, a feature extraction module utilizing the unsupervisedly discovered domain embeddings is used to extract the comprehensive features of the news. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, tests are conducted on existing widely used datasets, and the experimental results demonstrate that this method is able to improve detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, this method can still effectively transfer domain knowledge, which can reduce the time consumed by tagging without sacrificing detection accuracy. Keywords: fake news, deep learning, natural language processing, multiple domains
Procedia PDF Downloads 98
7914 Ontology as Knowledge Capture Tool in Organizations: A Literature Review
Authors: Maria Margaretha, Dana Indra Sensuse, Lukman
Abstract:
Knowledge capture is a step in the knowledge life cycle for obtaining knowledge in the organization. Tacit and explicit knowledge need to be organized along a path so that the organization can easily choose which knowledge to use. There are many challenges in capturing knowledge in the organization: the researcher must know which knowledge has been validated by an expert, how to obtain tacit knowledge from experts and make it explicit, and so on. Besides that, technology is a reliable tool to help the researcher capture knowledge. Some papers describe how ontology in knowledge management can be used in a proposed framework to capture and reuse knowledge. Organizations have to manage their knowledge; the capture and sharing processes will decide their position in the business arena. This paper describes, on the basis of a literature review, how ontology as a tool can help the organization capture its knowledge. Keywords: knowledge capture, ontology, technology, organization
Procedia PDF Downloads 606
7913 Classification Rule Discovery by Using Parallel Ant Colony Optimization
Authors: Waseem Shahzad, Ayesha Tahir Khan, Hamid Hussain Awan
Abstract:
The Ant-Miner algorithm, which belongs to the family of ACO algorithms, is used to extract knowledge from data in the form of rules. A variant of the Ant-Miner algorithm named cAnt-MinerPB is used to generate a list of rules using the Pittsburgh approach, in order to maintain the interaction among the rules that are generated. In this paper, we propose a parallel Ant-MinerPB in which the ant colony optimization algorithm runs in parallel. In this technique, a dataset is divided vertically (i.e., by attributes) into different subsets. These subsets are created based on the correlation among attributes using Mutual Information (MI). Rules are generated in a parallel manner and then merged to form a final list of rules. The results show that the proposed technique achieves higher accuracy compared with the original cAnt-MinerPB, and the execution time is also reduced. Keywords: ant colony optimization, parallel Ant-MinerPB, vertical partitioning, classification rule discovery
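The vertical-partitioning step can be sketched roughly as follows. The example below computes pairwise mutual information between discretized attributes of a synthetic dataset and groups correlated attributes with agglomerative clustering; the choice of clustering method, the number of partitions, and the data are all illustrative assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch of the vertical-partitioning step: compute pairwise mutual
# information between (discretized) attributes and group correlated attributes
# into subsets that could be mined in parallel.
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 6))        # stand-in for a discretized dataset
X[:, 1] = X[:, 0]                            # attribute 1 depends on attribute 0
X[:, 4] = (X[:, 3] + 1) % 3                  # attribute 4 depends on attribute 3

n = X.shape[1]
mi = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        mi[i, j] = mutual_info_score(X[:, i], X[:, j])

# Convert the MI matrix to a distance matrix and cluster attributes; each
# cluster becomes one vertical partition (subset of columns).
distance = mi.max() - mi
labels = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                                 linkage="average").fit_predict(distance)
for k in range(3):
    print(f"partition {k}: attributes {np.where(labels == k)[0].tolist()}")
```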
Procedia PDF Downloads 295
7912 A Study of Various Ontology Learning Systems from Text and a Look into Future
Authors: Fatima Al-Aswadi, Chan Yong
Abstract:
With the large volume of unstructured data on the web increasing day by day, the motivation to represent the knowledge in this data in machine-processable form has grown. Ontology is one of the major cornerstones of representing information in a more meaningful way on the Semantic Web. The goal of ontology learning from text is to elicit and represent domain knowledge in machine-readable form. This paper aims to give a follow-up review of ontology learning systems from text and some of their shortcomings. Furthermore, it discusses how far the ontology learning process may advance in the future. Keywords: concept discovery, deep learning, ontology learning, semantic relation, semantic web
Procedia PDF Downloads 523
7911 Causal Relation Identification Using Convolutional Neural Networks and Knowledge Based Features
Authors: Tharini N. de Silva, Xiao Zhibo, Zhao Rui, Mao Kezhi
Abstract:
Causal relation identification is a crucial task in information extraction and knowledge discovery. In this work, we present two approaches to causal relation identification. The first is a classification model trained on a set of knowledge-based features. The second is a deep learning-based approach that trains a model using convolutional neural networks to classify causal relations. We experiment with several different convolutional neural network (CNN) models based on previous work on relation extraction as well as our own research. Our models are able to identify both explicit and implicit causal relations as well as the direction of the causal relation. The results of our experiments show a higher accuracy than previously achieved for causal relation identification tasks. Keywords: causal relation extraction, relation extraction, convolutional neural network, text representation
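As a rough illustration of the CNN-based approach, the sketch below builds a small one-dimensional convolutional classifier over token-id sequences that decides whether a sentence expresses a causal relation. The layer sizes, the Keras framework, and the random toy data are assumptions made here for illustration; the authors' actual architectures and dataset are not reproduced.

```python
# Hypothetical sketch of a CNN text classifier for causal vs. non-causal
# sentences: embed token ids, apply a 1D convolution, pool, and classify
# (toy random data; not the authors' architecture or dataset).
import numpy as np
import tensorflow as tf

vocab_size, max_len = 5000, 40

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),
    tf.keras.layers.Conv1D(filters=128, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # causal vs. non-causal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy stand-in for tokenized sentences and their causal/non-causal labels.
X = np.random.randint(1, vocab_size, size=(256, max_len))
y = np.random.randint(0, 2, size=(256,))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0).round(2))
```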
Procedia PDF Downloads 733
7910 Research on Fuzzy Test Framework Based on Concolic Execution
Authors: Xiong Xie, Yuhang Chen
Abstract:
Vulnerability discovery technology is a significant field at present. In this paper, a fuzzy testing framework based on concolic execution is proposed. Fuzzy testing and symbolic execution are widely used in the field of vulnerability discovery, but each has its own advantages and disadvantages. During the path generation stage, a generation-based path traversal algorithm is used to obtain more accurate paths. During the constraint solving stage, dynamic concolic execution is used to avoid path explosion. If there is an external call, concolic execution based on function summaries is used. Experiments show that the framework can effectively improve the ability to trigger vulnerabilities and the code coverage. Keywords: concolic execution, constraint solving, fuzzy test, vulnerability discovery
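The constraint-solving step at the heart of concolic execution can be illustrated with a tiny sketch: record the branch conditions along one concrete run of a toy program, then ask a solver to negate the last branch and produce an input that explores a new path. The toy program, the use of z3, and the single symbolic variable are assumptions for illustration only; they are not the framework described in the paper.

```python
# Hypothetical sketch of concolic-style constraint solving: replay the branch
# conditions of one concrete run, negate the last one, and ask z3 for an input
# that drives execution down a new path.
from z3 import Int, Solver, Not, sat

x = Int("x")   # symbolic counterpart of the program's input

def branch_conditions(x_val: int):
    """Branch conditions of a toy program: `if x > 10: ... if x % 7 == 3: bug()`."""
    return [(x > 10, x_val > 10), (x % 7 == 3, x_val % 7 == 3)]

def next_input(seed: int):
    branches = branch_conditions(seed)        # "concrete" run with the seed input
    solver = Solver()
    for cond, taken in branches[:-1]:         # keep the path prefix identical
        solver.add(cond if taken else Not(cond))
    last_cond, last_taken = branches[-1]      # ...and negate the last branch
    solver.add(Not(last_cond) if last_taken else last_cond)
    if solver.check() == sat:
        return solver.model()[x].as_long()
    return None

print(next_input(seed=12))   # suggests some x > 10 with x % 7 == 3, e.g. 17
```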
Procedia PDF Downloads 228
7909 Tuberculosis Massive Active Case Discovery in East Jakarta 2016-2017: The Role of Ketuk Pintu Layani Dengan Hati and Juru Pemantau Batuk (Jumantuk) Cadre Programs
Authors: Ngabilas Salama
Abstract:
Background: Indonesia has the second-highest number of tuberculosis (TB) cases. It accounts for 1,020,000 new cases per year, only 30% of which have been reported. To find the missing 70%, massive active case discovery was conducted through two programs: Ketuk Pintu Layani Dengan Hati (KPLDH) and Kader Juru Pemantau Batuk (Jumantuk cadres), who also play a role in child TB screening. Methods: Data were collected and analyzed through the Tuberculosis Integrated Online System from 2014 to 2017, involving 129 DOTS facilities, including 86 primary health centers, in East Jakarta. Results: East Jakarta has a population of 2,900,722 people. The KPLDH program started in February 2016, consisting of 84 teams (310 people). The Jumantuk cadres were formed 4 months later (218 people). The numbers of new TB cases in East Jakarta (and in primary health centers) from 2014 to June 2017 were as follows: 6,499 (2,637), 7,438 (2,651), 8,948 (3,211), and 5,701 (1,830). Meanwhile, the percentage of child TB case discovery in primary health centers was 8.5%, 9.8%, and 12.1% from 2014 to 2016, respectively. In 2017, child TB case discovery was 13.1% for the first 3 months and 16.5% for the next 3 months. Discussion: The increase in new TB cases from 2014 to 2017 was 14.4%, 20.3%, and 27.4%, respectively, in East Jakarta, and 0.5%, 21.1%, and 14% in primary health centers. This reveals the positive role of KPLDH and Jumantuk in TB detection and reporting. Likewise, these programs were responsible for the increase in child TB case discovery, especially in the first 3 months of 2017 (Ketuk Pintu TB Day program) and the next 3 months (active TB screening). Conclusion: KPLDH and Jumantuk are actively involved in increasing TB case discovery in both adults and children. Keywords: tuberculosis, case discovery program, primary health center, cadre
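For readers who want to verify the year-on-year figures quoted in the discussion, the short snippet below recomputes the increases from the new-case counts given in the results (the 2017 count covers only the first half of the year, so its quoted percentage is presumably an annualized estimate and is not recomputed here).

```python
# Quick check of the year-over-year increases in new TB cases, using the counts
# reported in the abstract for 2014-2016.
east_jakarta = [6499, 7438, 8948]          # new TB cases, East Jakarta, 2014-2016
health_centers = [2637, 2651, 3211]        # new TB cases at primary health centers

def yearly_increase(counts):
    return [(b - a) / a * 100 for a, b in zip(counts, counts[1:])]

print([f"{v:.1f}%" for v in yearly_increase(east_jakarta)])    # ~14.4%, ~20.3%
print([f"{v:.1f}%" for v in yearly_increase(health_centers)])  # ~0.5%, ~21.1%
```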
Procedia PDF Downloads 331