Search results for: Object-relational databases
217 An Efficient Data Mining Approach on Compressed Transactions
Authors: Jia-Yu Dai, Don-Lin Yang, Jungpin Wu, Ming-Chuan Hung
Abstract:
In an era of knowledge explosion, data grows rapidly day by day. Since data storage is a limited resource, reducing the space that data occupies during processing becomes a challenging issue. Data compression provides a good solution, as it lowers the required space. Data mining has found many useful applications in recent years because it helps users discover interesting knowledge in large databases. However, existing compression algorithms are not appropriate for data mining. In [1, 2], two different approaches were proposed to compress databases and then perform the data mining process; however, both lack the ability to decompress the data to their original state and to improve data mining performance. In this research, a new approach called Mining Merged Transactions with the Quantification Table (M2TQT) is proposed to solve these problems. M2TQT uses the relationships among transactions to merge related transactions and builds a quantification table to prune the candidate itemsets that cannot become frequent, in order to improve the performance of mining association rules. The experiments show that M2TQT performs better than existing approaches.
Keywords: Association rule, data mining, merged transaction, quantification table.
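For illustration only, the Python sketch below shows the general idea of pruning candidate itemsets with a table of item counts before support counting, in the spirit of the quantification table described above; it is not the authors' M2TQT implementation, the transaction-merging step is omitted, and the transactions and threshold are invented.

    from itertools import combinations

    transactions = [
        {"bread", "milk"},
        {"bread", "butter", "milk"},
        {"butter", "milk"},
        {"bread", "butter"},
    ]
    min_support = 2

    # "Quantification" table: per-item counts over the transactions.
    counts = {}
    for t in transactions:
        for item in t:
            counts[item] = counts.get(item, 0) + 1

    # Prune: a k-itemset cannot be frequent if any member item is infrequent,
    # so candidates containing a rare item are discarded before counting.
    frequent_items = [i for i, c in counts.items() if c >= min_support]
    candidates = [set(c) for c in combinations(sorted(frequent_items), 2)]

    for cand in candidates:
        support = sum(1 for t in transactions if cand <= t)
        if support >= min_support:
            print(sorted(cand), support)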
216 Anomaly Detection with ANN and SVM for Telemedicine Networks
Authors: Edward Guillén, Jeisson Sánchez, Carlos Omar Ramos
Abstract:
In recent years, a wide variety of applications have been developed with Support Vector Machine (SVM) methods and Artificial Neural Networks (ANN). In general, these methods depend on intrusion-knowledge databases such as KDD99, ISCX, and CAIDA, among others. New classes of detectors are generated by machine learning techniques, trained and tested over network databases. Thereafter, the detectors are employed to detect anomalies in network communication scenarios according to users' connection behavior. The first detector, based on the training dataset, is deployed in different real-world networks with mobile and non-mobile devices to analyze its performance and accuracy over static detection. The vulnerabilities are based on previous work on telemedicine apps developed by the research group. This paper presents the differences in detection results between several network scenarios when traditional detectors built with artificial neural networks and support vector machines are applied.
Keywords: Anomaly detection, back-propagation neural networks, network intrusion detection systems, support vector machines.
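As a rough illustration of the experimental setup described above (not the authors' code), the scikit-learn sketch below trains an SVM and a back-propagation neural network on labeled connection records; the random feature matrix X and labels y merely stand in for a KDD99-style dataset.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    # Placeholder for KDD99-style connection features and normal/attack labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))
    y = rng.integers(0, 2, size=1000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    svm = SVC(kernel="rbf").fit(X_tr, y_tr)              # SVM detector
    ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=0).fit(X_tr, y_tr)  # back-propagation ANN

    print("SVM accuracy:", svm.score(X_te, y_te))
    print("ANN accuracy:", ann.score(X_te, y_te))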
215 Application of Exact String Matching Algorithms towards SMILES Representation of Chemical Structure
Authors: Ahmad Fadel Klaib, Zurinahni Zainol, Nurul Hashimah Ahamed, Rosma Ahmad, Wahidah Hussin
Abstract:
Bioinformatics and cheminformatics are computer-based disciplines that provide tools for the acquisition, storage, processing, analysis, and integration of biological and chemical data, and for the development of potential applications of these data. A chemical database is a database designed exclusively to store chemical information. NMRShiftDB is one of the main databases used to represent chemical structures in 2D or 3D. The SMILES format is one of many ways to write a chemical structure in a linear format. In this study, we extracted antimicrobial structures in SMILES format from NMRShiftDB and stored them, with their corresponding information, in our Local Data Warehouse. Additionally, we developed a search tool that responds to a user's query using the JME Editor, a tool that allows the user to draw or edit molecules and converts the drawn structure into SMILES format. We applied the Quick Search algorithm to search for antimicrobial structures in our Local Data Warehouse.
Keywords: Exact String-matching Algorithms, NMRShiftDB, SMILES Format, Antimicrobial Structures.
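Since the entry above names the Quick Search exact string-matching algorithm, here is a minimal Python sketch of it (ours, not the authors' tool); the SMILES strings used are illustrative, not taken from NMRShiftDB.

    def quick_search(text, pattern):
        """Quick Search matching: shift by the character just past the window."""
        m, n = len(pattern), len(text)
        shift = {c: m - i for i, c in enumerate(pattern)}  # rightmost occurrence
        hits, i = [], 0
        while i <= n - m:
            if text[i:i + m] == pattern:
                hits.append(i)
            if i + m >= n:
                break
            # Default m + 1: the character after the window is not in the pattern.
            i += shift.get(text[i + m], m + 1)
        return hits

    # Searching a SMILES string for a substructure pattern (illustrative strings).
    print(quick_search("CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1"))   # -> [7]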
214 Application of Biometrics to Obtain High Entropy Cryptographic Keys
Authors: Sanjay Kanade, Danielle Camara, Dijana Petrovska-Delacretaz, Bernadette Dorizzi
Abstract:
In this paper, a two-factor scheme is proposed to generate cryptographic keys directly from biometric data, which, unlike passwords, are strongly bound to the user. The hash value of the reference iris code is used as a cryptographic key, and its length depends only on the hash function, being independent of any other parameter. The entropy of such keys is 94 bits, which is much higher than in any other comparable system. The most important and distinctive feature of this scheme is that it regenerates the reference iris code when provided with a genuine iris sample and the correct user password. Since iris codes obtained from two images of the same eye are not exactly the same, error-correcting codes (Hadamard and Reed-Solomon codes) are used to deal with the variability. The scheme proposed here can be used to provide keys for a cryptographic system and/or for user authentication. The performance of this system is evaluated on two publicly available iris biometrics databases, namely the CBS and ICE databases. The operating point of the system (the values of the False Acceptance Rate (FAR) and False Rejection Rate (FRR)) can be set by properly selecting the error correction capacity (ts) of the Reed-Solomon codes; e.g., on the ICE database, at ts = 15, FAR is 0.096% and FRR is 0.76%.
213 A Materialized View Approach to Support Aggregation Operations over Long Periods in Sensor Networks
Authors: Minsoo Lee, Julee Choi, Sookyung Song
Abstract:
The increasing interest in processing data created by sensor networks has evolved into approaches that implement sensor networks as databases. The aggregation operator, which calculates a value from a large group of data (such as averages or sums), is an essential function that needs to be provided when implementing such sensor network databases. This work proposes adding a DURING clause to TinySQL to calculate values over a specific long period, and suggests a way to implement the aggregation service in sensor networks by applying the materialized view and incremental view maintenance techniques used in data warehouses. In sensor networks, data values are passed from child nodes to parent nodes, and an aggregation value is computed at the root node. Since such root nodes need to be memory-efficient and low-powered, recomputing aggregate values from all past and current data is problematic. Applying incremental view maintenance techniques therefore reduces memory consumption and supports fast computation of aggregate values.
Keywords: Aggregation, Incremental View Maintenance, Materialized view, Sensor Network.
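A minimal sketch (ours, not the paper's TinySQL extension) of the incremental view maintenance idea applied to an average over a DURING window: the root node keeps only a running sum and count, so each new reading updates the materialized view in constant time instead of forcing a rescan of all past data.

    class AvgView:
        """Materialized SUM/COUNT view maintained incrementally at the root node."""
        def __init__(self):
            self.total = 0.0
            self.count = 0

        def insert(self, value):        # delta applied on each reading received
            self.total += value
            self.count += 1

        def average(self):
            return self.total / self.count if self.count else None

    view = AvgView()
    for reading in [21.5, 22.0, 21.8]:  # values passed up from child nodes
        view.insert(reading)
    print(view.average())               # 21.76..., with no rescan of past data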
212 Controlled Vocabularies and Information Retrieval: 1918 Pandemic’s Scientific Literature as an Example
Authors: M. Garcia-Alsina, J. Cobarsí
Abstract:
The role of controlled vocabularies in information retrieval is broadly recognized as relevant. Besides, there is a standing demand that editors and databases consider the effective introduction of controlled vocabularies into their procedures for indexing scientific literature. This is especially important because information retrieval is a key step in driving systematic literature reviews. Hence, a first question emerges: are controlled vocabularies currently taken into account? On the other hand, subject searching in catalogs is complex, mainly due to the dichotomy between authors' keywords and keywords based on controlled vocabularies. Finally, there is some demand to unify health-related terminology to ease the exploitation of medical histories and research. Considering these features, this paper focuses on controlled vocabularies related to the health field and their role in storing, classifying, and retrieving relevant literature. The objective is to know what role controlled vocabularies related to the health field play in indexing and retrieving research literature in databases such as Web of Science (WoS) and Scopus. This exploratory research is thus grounded on two research questions: 1) which terms are considered in specific controlled vocabularies of the health field; and 2) how are papers indexed in relevant databases so that they can be easily retrieved, considering authors' keywords versus specific health controlled vocabularies? This research takes as fieldwork the controlled vocabularies related to health and the scientific interest in the 1918 flu pandemic, also known, equivocally, as the ‘Spanish flu’. This interest has been fostered by the emergence in the early 21st century of epidemics of pneumonic diseases caused by viruses. Searches about and with controlled vocabularies on the WoS and Scopus databases were conducted. The first results of this work in progress are surprising. There are different controlled vocabularies for the health field, in which the collected and preferred terms related to the ‘1918 pandemic’ are identified. To summarize, ‘Spanish influenza epidemic’ and ‘Spanish flu’ are collected as non-preferred terms; the preferred terms are ‘influenza’ or ‘influenza pandemic, 1918-1919’. Although the controlled vocabularies are clear in their choice, most of the literature about the ‘1918 pandemic’ is retrievable by either ‘Spanish’ or ‘1918’, and the dominant word for retrieving literature is ‘Spanish’ rather than ‘1918’. This is surprising considering the existence of suitable controlled vocabularies related to health topics, and the modern World Health Organization guidelines on the naming of diseases, which point to other preferred terms. A first conclusion is the failure to use controlled vocabularies in a field such as health and, in consequence, in WoS and Scopus. This research opens further questions about the role controlled vocabularies play in the instructions that journals deliver to authors.
Keywords: Controlled vocabularies, indexing, 1918 influenza, information retrieval, keywords, 1918 pandemic, scientific databases.
211 Improving Spatiotemporal Change Detection: A High Level Fusion Approach for Discovering Uncertain Knowledge from Satellite Image Database
Authors: Wadii Boulila, Imed Riadh Farah, Karim Saheb Ettabaa, Basel Solaiman, Henda Ben Ghezala
Abstract:
This paper investigates the problem of tracking spatiotemporal changes of a satellite image through the use of Knowledge Discovery in Database (KDD). The purpose of this study is to help a given user effectively discover interesting knowledge and then build prediction and decision models. Unfortunately, the KDD process for spatiotemporal data is always marked by several types of imperfections. In our paper, we take these imperfections into consideration in order to provide more accurate decisions. To achieve this objective, different KDD methods are used to discover knowledge in satellite image databases. Each method presents a different point of view of the spatiotemporal evolution of a query model (which represents an extracted object from a satellite image). In order to combine these methods, we use evidence fusion theory, which considerably improves the spatiotemporal knowledge discovery process and increases our belief in the spatiotemporal model change. Experimental results on satellite images representing the region of Auckland in New Zealand depict the improvement in the overall change detection as compared to using classical methods.
Keywords: Knowledge discovery in satellite databases, knowledge fusion, data imperfection, data mining, spatiotemporal change detection.
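The evidence fusion step can be illustrated with Dempster's rule of combination from Dempster-Shafer theory; the sketch below (with invented masses, not the paper's data) combines two methods' basic belief assignments over the hypotheses {change, no_change}.

    from itertools import product

    FRAME = frozenset({"change", "no_change"})

    def dempster(m1, m2):
        """Dempster's rule: combine two mass functions over subsets of the frame."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass assigned to disjoint subsets
        return {s: w / (1.0 - conflict) for s, w in combined.items()}

    # Two KDD methods' opinions on the evolution of the query model (invented).
    m1 = {frozenset({"change"}): 0.6, FRAME: 0.4}
    m2 = {frozenset({"change"}): 0.7, frozenset({"no_change"}): 0.2, FRAME: 0.1}
    print(dempster(m1, m2))   # belief in "change" rises to about 0.86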
210 Topographic Arrangement of 3D Design Components on 2D Maps by Unsupervised Feature Extraction
Authors: Stefan Menzel
Abstract:
As a result of the daily workflow in the design development departments of companies, databases containing huge numbers of 3D geometric models are generated. For a given problem, engineers create CAD drawings based on their design ideas and evaluate the performance of the resulting design, e.g. by computational simulations. Usually, new geometries are built either by utilizing and modifying sets of existing components or by adding single newly designed parts to a more complex design. The present paper addresses the two facets of automatically acquiring components from large design databases and providing a reasonable overview of the parts to the engineer. A unified framework based on topographic non-negative matrix factorization (TNMF) is proposed which solves both aspects simultaneously. First, meaningful components are extracted from a given database into a parts-based representation in an unsupervised manner. Second, the extracted components are organized and visualized on square-lattice 2D maps. It is shown, on the example of turbine-like geometries, that these maps efficiently provide a well-structured overview of the database content and, at the same time, define a measure of spatial similarity, allowing easy access to and reuse of components in the design development process.
Keywords: Design decomposition, topographic non-negative matrix factorization, parts-based representation, self-organization, unsupervised feature extraction.
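As a hedged pointer to the kind of decomposition described above, the scikit-learn sketch below runs plain NMF (not the topographic TNMF variant used in the paper) on placeholder data; rows of X stand in for flattened, non-negative representations of 3D components.

    import numpy as np
    from sklearn.decomposition import NMF

    # Each row: a flattened, non-negative shape representation (placeholder data).
    rng = np.random.default_rng(0)
    X = rng.random((100, 64))

    model = NMF(n_components=9, init="nndsvd", max_iter=500, random_state=0)
    W = model.fit_transform(X)   # per-design activations of the 9 components
    H = model.components_        # the 9 extracted "parts" (components)
    # TNMF would additionally arrange these 9 components on a 3x3 2D lattice
    # so that neighboring map cells hold spatially similar parts.
    print(W.shape, H.shape)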
209 A Generic Middleware to Instantly Sync Intensive Writes of Heterogeneous Massive Data via Internet
Authors: Haitao Yang, Zhenjiang Ruan, Fei Xu, Lanting Xia
Abstract:
Industry data centers often need to sync data changes reliably and instantly from a large number of heterogeneous, autonomous relational databases accessed via the not-so-reliable Internet, for which a practical generic sync middleware with low maintenance and operation costs is most wanted. To meet this demand, this paper presents a generic sync middleware system (GSMS) that has been developed, applied, and optimized since 2006. Its guiding principles, and advantages, are that it is SyncML-compliant, transparent to the data application layer logic without referring to the implementation details of the databases synced, independent of the host operating systems on which it is deployed, and lightweight in construction and hence of low cost. Given these hard commitments in developing GSMS, this paper stresses the significant optimization breakthrough of bringing the GSMS sync delay well below a fraction of a millisecond per record synced. A series of extreme tests of GSMS sync performance was conducted as a persuasive example, in which the source relational database underwent a broad range of write loads (from one thousand to one million intensive writes within a few minutes). All these tests showed that the performance of GSMS is competent and smooth even under extreme write loads.
Keywords: Heterogeneous massive data, instantly sync intensive writes, Internet generic middleware design, optimization.
208 A Modification of Wireless and Internet Technologies for Logistics Analysis
Authors: Apiwat Sangnoree
Abstract:
This research is designed to help a WAP-based mobile phone user analyze logistics in a traffic area by designing the processes that connect the mobile user to server databases. The design comprises a MySQL 4.1.8-nt database system as the server, with three sub-databases: traffic-light times at intersections during periods of the day; road distances of the area blocks into which the main sample area is divided; and speeds of sample vehicles (motorcycle, personal car, and truck) during periods of the day. For interconnection between the server and the user, PHP is used to calculate distances and travelling times from the starting point to the destination, while XHTML is applied for receiving, sending, and displaying data from PHP on the user's mobile. The main sample area is the Huakwang-Ratchada area of Bangkok, Thailand, a usually congested point, together with the surrounding 6.25 km² area, which is split into 25 blocks of 0.25 km² each. To simulate the results, the designed server database and all communication models of this research were uploaded to www.utccengineering.com/m4tg, and a mobile phone supporting a WAP 2.0 XHTML/HTML multimode browser was used to observe values and displayed pictures. According to the simulated results, the user can check pictures of the route from the requested starting point to the destination, along with the analyzed travel times when sample vehicles travel in various periods of the day.
Keywords: WAP, logistics, XHTML, internet.
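The server-side travel-time logic described above reduces to simple arithmetic; the sketch below (ours, with invented numbers) sums block distance over period-specific speed and adds traffic-light waiting times, mirroring what the PHP layer computes from the three sub-databases.

    # Route as (block_distance_km, speed_km_per_h_in_this_period) pairs,
    # plus the red-light waiting times (s) at crossed intersections.
    route_blocks = [(0.5, 20.0), (0.5, 15.0), (0.5, 25.0)]
    light_waits_s = [45, 60]

    travel_s = sum(d / v * 3600 for d, v in route_blocks) + sum(light_waits_s)
    print(f"estimated travel time: {travel_s / 60:.1f} min")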
207 Non-Overlapping Hierarchical Index Structure for Similarity Search
Authors: Mounira Taileb, Sid Lamrous, Sami Touati
Abstract:
In order to accelerate similarity search in high-dimensional databases, we propose a new hierarchical indexing method. It is composed of offline and online phases, and our contribution concerns both. In the offline phase, after gathering the whole of the data into clusters and constructing a hierarchical index, the main originality of our contribution is a method for constructing bounding forms of clusters that avoid overlapping. For the online phase, our idea considerably improves the performance of similarity search; for this second phase, we have also developed an adapted search algorithm. Our method, named NOHIS (Non-Overlapping Hierarchical Index Structure), uses Principal Direction Divisive Partitioning (PDDP) as its clustering algorithm. The principle of PDDP is to divide data recursively into two sub-clusters; the division is done using the hyperplane orthogonal to the principal direction derived from the covariance matrix and passing through the centroid of the cluster to be divided. The data of the two resulting sub-clusters are each enclosed by a minimum bounding rectangle (MBR). The two MBRs are oriented along the principal direction; consequently, non-overlap between the two forms is ensured. Experiments use databases containing image descriptors. Results show that the proposed method outperforms sequential scan and the SR-tree in processing k-nearest neighbors.
Keywords: K-nearest neighbour search, multi-dimensional indexing, multimedia databases, similarity search.
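The PDDP split described above can be sketched in a few lines of NumPy (our illustration, with random placeholder descriptors): the principal direction is the first right singular vector of the centered data, and points are assigned to sub-clusters by the sign of their projection.

    import numpy as np

    def pddp_split(X):
        """One PDDP step: split X by the hyperplane through the centroid,
        orthogonal to the principal direction of the covariance."""
        centroid = X.mean(axis=0)
        centered = X - centroid
        # First right singular vector = leading eigenvector of the covariance.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        proj = centered @ vt[0]
        return X[proj <= 0], X[proj > 0]  # two sub-clusters (then boxed by MBRs)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))         # e.g., image descriptors
    left, right = pddp_split(X)
    print(left.shape, right.shape)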
206 Post Pandemic Mobility Analysis through Indexing and Sharding in MongoDB: Performance Optimization and Insights
Authors: Karan Vishavjit, Aakash Lakra, Shafaq Khan
Abstract:
The COVID-19 pandemic has pushed healthcare professionals to use big data analytics as a vital tool for tracking and evaluating the effects of contagious viruses. To effectively analyse huge datasets, efficient NoSQL databases are needed. This research integrates several datasets, making it possible to analyse post-COVID-19 health and well-being outcomes and to evaluate the effectiveness of government efforts during the pandemic, while cutting down query processing time and creating predictive visual artifacts. We recommend applying sharding and indexing technologies to improve query effectiveness and scalability as the dataset expands. Spreading the datasets into a sharded database and indexing the individual shards makes effective data retrieval and analysis possible. The key goal is the analysis of connections between governmental activities, poverty levels, and post-pandemic well-being. We evaluate the effectiveness of governmental initiatives to improve health and lower poverty levels by utilising advanced data analysis and visualisations. The findings provide relevant data that support the advancement of the UN sustainable development goals, future pandemic preparation, and evidence-based decision-making. This study shows how big data and NoSQL databases may be used to address problems in global health.
Keywords: COVID-19, big data, data analysis, indexing, NoSQL, sharding, scalability, poverty.
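A minimal PyMongo sketch of the indexing and sharding setup recommended above; the database, collection, and field names (health_db, outcomes, country, date) are invented, and the sharding commands assume a sharded cluster reached through a mongos router.

    from pymongo import ASCENDING, MongoClient

    client = MongoClient("mongodb://localhost:27017")  # mongos router in a cluster
    coll = client.health_db.outcomes

    # Secondary index to speed up country/date range queries in the analysis.
    coll.create_index([("country", ASCENDING), ("date", ASCENDING)])

    # Spread the growing dataset across shards with a hashed shard key.
    client.admin.command("enableSharding", "health_db")
    client.admin.command("shardCollection", "health_db.outcomes",
                         key={"country": "hashed"})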
205 Spatial-Temporal Clustering Characteristics of Dengue in the Northern Region of Sri Lanka, 2010-2013
Authors: Sumiko Anno, Keiji Imaoka, Takeo Tadono, Tamotsu Igarashi, Subramaniam Sivaganesh, Selvam Kannathasan, Vaithehi Kumaran, Sinnathamby Noble Surendran
Abstract:
Dengue outbreaks are affected by biological, ecological, socio-economic, and demographic factors that vary over time and space. These factors have been examined separately and still require systematic clarification. The present study aimed to investigate the spatial-temporal clustering relationships between these factors and dengue outbreaks in the northern region of Sri Lanka. Remote sensing (RS) data gathered from several satellites were used to develop an index comprising rainfall, humidity, and temperature data. RS data gathered by ALOS/AVNIR-2 were used to detect urbanization, and a digital land cover map was used to extract land cover information. Other data on relevant factors and dengue outbreaks were collected through institutions and extant databases. The analyzed RS data and databases were integrated into geographic information systems, enabling temporal analysis, spatial statistical analysis, and space-time clustering analysis. Our results showed that increases in the number of ecological, socio-economic, and demographic factors that are above average, or present, contribute to significantly high rates of space-time dengue clusters.
Keywords: ALOS/AVNIR-2, Dengue, Space-time clustering analysis, Sri Lanka.
204 Scientific Production on Lean Supply Chains Published in Journals Indexed by SCOPUS and Web of Science Databases: A Bibliometric Study
Authors: T. Botelho de Sousa, F. Raphael Cabral Furtado, O. Eduardo da Silva Ferri, A. Batista, W. Augusto Varella, C. Eduardo Pinto, J. Mimar Santa Cruz Yabarrena, S. Gibran Ruwer, F. Müller Guerrini, L. Adalberto Philippsen Júnior
Abstract:
Lean Supply Chain Management (LSCM) is an emerging research field in Operations Management (OM). As a strategic model that focuses on reducing cost and waste while fulfilling the needs of customers, LSCM attracts great interest among researchers and practitioners. The purpose of this paper is to present an overview of the Lean Supply Chain literature, based on a bibliometric analysis of 57 papers published in journals indexed by the SCOPUS and/or Web of Science databases. The results indicate that the last three years (2015, 2016, and 2017) were the most productive for the LSCM discussion, especially in the journals Supply Chain Management and International Journal of Lean Six Sigma. India, the USA, and the UK are the most productive countries; nevertheless, cross-country studies through collaboration among researchers were detected by social network analysis as a research practice, appearing to play an increasingly important role in LSCM studies. Despite existing limitations, such as the restricted set of indexed journal databases, bibliometric analysis helps to illuminate ongoing efforts in LSCM research, including the most used technical procedures and collaboration networks, and shows important research gaps, especially for researchers from developing countries.
Keywords: Lean supply chains, bibliometric study, SCOPUS, Web of Science.
203 Privacy Concerns and Law Enforcement Data Collection to Tackle Domestic and Sexual Violence
Authors: Francesca Radice
Abstract:
It has been observed that violent or coercive behaviour can be apparent from the initial conversations on dating apps like Tinder. Child pornography, stalking, and coercive control are among the criminal offences arising from dating apps, and women have been murdered after finding partners through Tinder. Police databases and predictive policing are novel approaches taken to prevent crime before harm is done. This research will investigate how police databases can be used, in a privacy-preserving way, to characterise users in terms of their potential for violent crime. Using the COPS database of the NSW Police, we will explore how a past criminal record can be interpreted to yield a category of potential danger for each dating app user. It is then up to each subscriber to judge what degree of potential danger they are prepared to accept. Sentiment analysis is an area where research into natural language processing has made great progress over the last decade. This research will investigate how sentiment analysis can be used to interpret exchanges between dating app users and detect manipulative or coercive sentiments; these can be used to alert law enforcement if they continue for a defined number of communications. One potential problem of this approach is the prejudice a categorisation can cause. Another drawback is the possibility of misinterpreting communications and involving law enforcement without reason. The approach will be thoroughly tested with cross-checks by human readers, who verify both the level of danger predicted from the criminal record and the sentiment detected in personal messages. Even if only a few violent crimes can be prevented, the approach will have tangible value for real people.
Keywords: Sentiment Analysis, data mining, predictive policing, virtual manipulation.
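As an illustration only (NLTK's off-the-shelf VADER analyzer rather than the project's model), the sketch below scores a message stream and flags a user once a defined number of consecutive messages fall below a negativity threshold; the threshold and window values are invented.

    # Requires: import nltk; nltk.download("vader_lexicon")
    from nltk.sentiment import SentimentIntensityAnalyzer

    analyzer = SentimentIntensityAnalyzer()
    THRESHOLD, WINDOW = -0.5, 3   # invented values for the sketch

    def flag_user(messages):
        """Flag when WINDOW consecutive messages score as strongly negative."""
        streak = 0
        for msg in messages:
            score = analyzer.polarity_scores(msg)["compound"]
            streak = streak + 1 if score <= THRESHOLD else 0
            if streak >= WINDOW:
                return True   # alert: pattern consistent with coercive exchanges
        return False

    print(flag_user(["You never listen.", "Do what I say or else.",
                     "You are worthless."]))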
202 Architecture of Large-Scale Systems
Authors: Arne Koschel, Irina Astrova, Elena Deutschkämer, Jacob Ester, Johannes Feldmann
Abstract:
In this paper, various techniques relating to large-scale systems are presented. First, an explanation of large-scale systems and their differences from traditional systems is given. Next, possible specifications and requirements for hardware and software are listed. Finally, examples of large-scale systems are presented.
Keywords: Distributed file systems, caching, large-scale systems, MapReduce algorithm, NoSQL databases.
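Since the keywords name the MapReduce algorithm, the canonical word-count example below sketches its map, shuffle, and reduce phases as a single-process Python program; a real deployment distributes these phases across many nodes.

    from collections import defaultdict

    documents = ["large scale systems", "nosql systems scale"]

    # Map: emit (key, 1) pairs.
    pairs = [(word, 1) for doc in documents for word in doc.split()]

    # Shuffle: group values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)

    # Reduce: aggregate each key's values.
    counts = {key: sum(values) for key, values in groups.items()}
    print(counts)   # {'large': 1, 'scale': 2, 'systems': 2, 'nosql': 1}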
201 Management of Cultural Heritage: Bologna Gates
Authors: A. Ippolito, C. Bartolomei
Abstract:
A growing demand is felt today for realistic 3D models enabling the cognition and popularization of historical-artistic heritage. The evaluation and preservation of cultural heritage are inextricably connected with the innovative processes of gaining, managing, and using knowledge. The development and perfecting of techniques for acquiring and elaborating photorealistic 3D models have made them pivotal elements for popularizing information about objects on the scale of architectonic structures.
Keywords: Cultural heritage, databases, non-contact survey, 2D-3D models.
200 Software Architecture and Support for Patient Tracking Systems in Critical Scenarios
Authors: Gianluca Cornetta, Abdellah Touhafi, David J. Santos, Jose Manuel Vazquez
Abstract:
In this work, a new platform for mobile-health systems is presented. The system's target application is providing decision support to rescue corps or military medical personnel in combat areas. The software architecture relies on a distributed client-server system that manages a hierarchy of wireless ad-hoc networks in which several different types of client operate, each characterized by different hardware and software requirements. The lower hierarchy levels rely on a network of completely custom devices that store clinical information and patient status; these are designed to form an ad-hoc network operating in the 2.4 GHz ISM band and complying with the IEEE 802.15.4 standard (ZigBee). Medical personnel may interact with such devices, called MICs (Medical Information Carriers), by means of a PDA (Personal Digital Assistant) or an MDA (Medical Digital Assistant), transmit the information stored in their local databases, and issue service requests to the upper hierarchy levels using the IEEE 802.11 a/b/g standard (WiFi). The server acts as a repository that stores both medical evacuation forms and associated events (e.g., a teleconsulting request). All the actors participating in the diagnostic or evacuation process may access this repository asynchronously and update its content or generate new events. The designed system aims to optimise and improve information spreading and flow among all the system components, with the goal of improving both diagnostic quality and the evacuation process.
Keywords: IEEE 802.15.4 (ZigBee), IEEE 802.11 a/b/g (WiFi), distributed client-server systems, embedded databases, issue trackers, ad-hoc networks.
199 Life Cycle Datasets for the Ornamental Stone Sector
Authors: Isabella Bianco, Gian Andrea Blengini
Abstract:
The environmental impact related to ornamental stones (such as marbles and granites) is largely debated. Starting from the industrial revolution, continuous improvements in machinery have led to higher exploitation of this natural resource and to more international interaction between markets. As a consequence, the environmental impact of the extraction and processing of stones has increased. Nevertheless, compared with other building materials, ornamental stones are generally more durable, natural, and recyclable. From the scientific point of view, studies on stone life cycle sustainability have been carried out, but these are often partial or not very significant because of the high percentage of approximations and assumptions in the calculations. This is due to the lack, in life cycle databases (e.g. Ecoinvent, Thinkstep, and ELCD), of datasets about the specific technologies employed in the stone production chain. For example, databases do not contain information about diamond wires, chains, or explosives, materials commonly used in quarries and transformation plants. The project presented in this paper aims to populate the life cycle databases with data on specific stone processes. To this end, the methodology follows the standardized approach of Life Cycle Assessment (LCA), according to the requirements of UNI 14040-14044 and the International Reference Life Cycle Data System (ILCD) Handbook guidelines of the European Commission. The study analyses the processes of the entire production chain (from-cradle-to-gate system boundaries), including the extraction of benches, the cutting of blocks into slabs/tiles, and surface finishing. Primary data have been collected in Italian quarries and transformation plants that use technologies representative of the current state of the art. Since the technologies vary according to the hardness of the stone, the case studies comprise both soft stones (marbles) and hard stones (gneiss). In particular, data about energy, materials, and emissions were collected in the marble basins of Carrara and in the Beola and Serizzo basins located in the province of Verbano Cusio Ossola. Data were then elaborated through appropriate software to build a life cycle model. The model was realized by setting free parameters that allow easy adaptation to specific productions. Through this model, the study aims to boost the direct participation of stone companies and to encourage the use of the LCA tool to assess and improve the environmental sustainability of the stone sector. At the same time, the realization of accurate Life Cycle Inventory data aims at making available, to researchers and stone experts, ILCD-compliant datasets of the most significant processes and technologies related to the ornamental stone sector.
Keywords: LCA datasets, life cycle assessment, ornamental stone, stone environmental impact.
198 Using Data Clustering in Oral Medicine
Authors: Fahad Shahbaz Khan, Rao Muhammad Anwer, Olof Torgersson
Abstract:
The vast amount of information hidden in huge databases has created tremendous interest in the field of data mining. This paper examines the possibility of using data clustering techniques in oral medicine to identify functional relationships between different attributes and to classify similar patient examinations. Commonly used data clustering algorithms have been reviewed, and several interesting findings have been gathered.
Keywords: Oral Medicine, Cluto, Data Clustering, Data Mining.
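A hedged sketch of the kind of clustering examined above, using scikit-learn's k-means on placeholder examination feature vectors; the paper itself reviews several algorithms (its keywords name the Cluto toolkit), so this stands for the technique, not the authors' code.

    import numpy as np
    from sklearn.cluster import KMeans

    # Placeholder numeric features extracted from patient examinations.
    rng = np.random.default_rng(0)
    examinations = rng.random((60, 5))

    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(examinations)
    print(kmeans.labels_[:10])   # similar examinations share a cluster label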
197 A Logic Approach to Database Dynamic Updating
Authors: Daniel Stamate
Abstract:
We introduce a logic-based framework for database updating under constraints. In our framework, the constraints are represented as an instantiated extended logic program. When performing an update, database consistency may be violated. We provide an approach to maintaining database consistency and study the conditions under which the maintenance process is deterministic. We show that the complexity of the computations and decision problems presented in our framework is in each case polynomial time.
Keywords: Databases, knowledge bases, constraints, updates, minimal change, consistency.
196 Enhanced Disk-Based Databases Towards Improved Hybrid In-Memory Systems
Authors: Samuel Kaspi, Sitalakshmi Venkatraman
Abstract:
In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks, and distributed transaction costs, disk-based data stores still serve as the primary persistence. In addition, with the recent growth in multi-tenancy cloud applications and associated security concerns, many organisations weigh the trade-offs and continue to require the fast and reliable transaction processing of disk-based database systems as an available choice. For these organizations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. The promising results of this work show that enhanced disk-based systems facilitate improving hybrid data management within the broader context of in-memory systems.
Keywords: Concurrency control, disk-based databases, in-memory systems, enhanced memory access (EMA).
195 New Approach for Constructing a Secure Biometric Database
Authors: A. Kebbeb, M. Mostefai, F. Benmerzoug, Y. Chahir
Abstract:
Multimodal biometric identification is the combination of several biometric systems; the challenge of this combination is to reduce some limitations of systems based on a single modality while significantly improving performance. In this paper, we propose a new approach to the construction and protection of a multimodal biometric database dedicated to an identification system. We use topological watermarking to hide the relation between the face image and the registered descriptors extracted from the other modalities of the same person, for more secure user identification.
Keywords: Biometric databases, Multimodal biometrics, security authentication, Digital watermarking.
194 Face Recognition: A Literature Review
Authors: A. S. Tolba, A.H. El-Baz, A.A. El-Harby
Abstract:
The task of face recognition has been actively researched in recent years. This paper provides an up-to-date review of major human face recognition research. We first present an overview of face recognition and its applications. Then, a literature review of the most recent face recognition techniques is presented. Descriptions and limitations of the face databases used to test the performance of these face recognition algorithms are given. A brief summary of the face recognition vendor test (FRVT) 2002, a large-scale evaluation of automatic face recognition technology, and its conclusions are also given. Finally, we give a summary of the research results.
Keywords: Combined classifiers, face recognition, graph matching, neural networks.
193 Power Forecasting of Photovoltaic Generation
Authors: S. H. Oudjana, A. Hellal, I. Hadj Mahammed
Abstract:
Photovoltaic power generation forecasting is an important task in renewable energy power system planning and operation. This paper explores the application of neural networks (NN) to the design of photovoltaic power generation forecasting systems for one week ahead, using weather databases that include the global irradiance and temperature of Ghardaia city (south of Algeria), collected by a data acquisition system. Simulations were run and the results discussed, showing that the neural network technique is capable of decreasing the photovoltaic power generation forecasting error.
Keywords: Photovoltaic Power Forecasting, Regression, Neural Networks.
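An indicative scikit-learn sketch of such a neural-network forecaster (not the authors' network): inputs are irradiance and temperature readings, the target is PV power output, and both the data and the crude power proxy are invented placeholders.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Placeholder weather database: [global irradiance (W/m2), temperature (C)].
    rng = np.random.default_rng(0)
    X = rng.random((500, 2)) * [1000.0, 45.0]
    y = 0.15 * X[:, 0] * (1 - 0.004 * (X[:, 1] - 25.0))  # crude PV power proxy

    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                         random_state=0).fit(X[:400], y[:400])
    print("held-out R^2:", model.score(X[400:], y[400:]))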
192 Data Mining in Oral Medicine Using Decision Trees
Authors: Fahad Shahbaz Khan, Rao Muhammad Anwer, Olof Torgersson, Göran Falkman
Abstract:
Data mining has been used very frequently to extract hidden information from large databases. This paper suggests the use of decision trees for continuously extracting the clinical reasoning, in the form of medical experts' actions, that is inherent in a large number of EMRs (Electronic Medical Records). In this way, the extracted data could be used to teach students of oral medicine a number of orderly processes for dealing with patients who present with different problems within the practice context over time.
Keywords: Data mining, Oral Medicine, Decision Trees, WEKA.
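A small scikit-learn sketch of the idea (the paper's keywords name WEKA, so this is a substitute toolkit): fit a decision tree on EMR-style attributes and print the learned decision paths, which read as the orderly, teachable rules the abstract mentions; the features and records are invented.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Invented EMR-style attributes: [lesion_size_mm, pain_score, smoker (0/1)].
    X = np.array([[2, 1, 0], [8, 6, 1], [3, 2, 0],
                  [9, 7, 1], [7, 5, 1], [1, 1, 0]])
    y = np.array([0, 1, 0, 1, 1, 0])   # 0 = routine follow-up, 1 = refer

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    # The printed paths read as if-then rules a student can follow.
    print(export_text(tree, feature_names=["lesion_size_mm",
                                           "pain_score", "smoker"]))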
191 Comparing Arabic and Latin Handwritten Digits Recognition Problems
Authors: Sherif Abdelazeem
Abstract:
A comparison between the performance of Latin and Arabic handwritten digit recognition problems is presented. The performance of ten different classifiers is tested on two similar Arabic and Latin handwritten digit databases. The analysis shows that the Arabic handwritten digit recognition problem is easier than that of Latin digits. This is because the interclass difference in the case of Latin digits is smaller than in Arabic digits, and the variances in writing Latin digits are larger. Consequently, weaker yet fast classifiers are expected to play a more prominent role in Arabic handwritten digit recognition.
Keywords: Handwritten recognition, Arabic recognition, Digits recognition, Document recognition.
190 Discovery of Production Rules with Fuzzy Hierarchy
Authors: Fadl M. Ba-Alwi, Kamal K. Bharadwaj
Abstract:
In this paper, a novel algorithm is proposed that integrates the processes of fuzzy hierarchy generation and rule discovery for the automated discovery of Production Rules with Fuzzy Hierarchy (PRFH) in large databases. A concept of frequency matrix (Freq) is introduced to summarize the large database, which helps in minimizing the number of database accesses and in the identification and removal of irrelevant attribute values and weak classes during fuzzy hierarchy generation. Experimental results have established the effectiveness of the proposed algorithm.
Keywords: Data Mining, Degree of subsumption, Freq matrix, Fuzzy hierarchy.
189 A New Approach for the Fingerprint Classification Based On Gray-Level Co-Occurrence Matrix
Authors: Mehran Yazdi, Kazem Gheysari
Abstract:
In this paper, we propose an approach for the classification of fingerprint databases. It is based on the fact that a fingerprint image is composed of regular texture regions that can be successfully represented by co-occurrence matrices. We first extract features based on certain characteristics of the co-occurrence matrix and then use these features to train a neural network for classifying fingerprints into four common classes. The obtained results, compared with existing approaches, demonstrate the superior performance of our proposed approach.
Keywords: Biometrics, fingerprint classification, gray-level co-occurrence matrix, regular texture representation.
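An indicative sketch with scikit-image (our choice of library, not the authors' code): compute a gray-level co-occurrence matrix for a fingerprint-like patch and derive texture features such as contrast and homogeneity, which would then feed the neural-network classifier; the image data are random placeholders.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    # Placeholder 8-bit image patch standing in for a fingerprint region.
    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    features = [graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy", "correlation")]
    print(features)   # a 4-value texture descriptor for the classifier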
188 Optimizing the Probabilistic Neural Network Training Algorithm for Multi-Class Identification
Authors: Abdelhadi Lotfi, Abdelkader Benyettou
Abstract:
In this work, a training algorithm for probabilistic neural networks (PNN) is presented. The algorithm addresses one of the major drawbacks of PNN, which is the size of the hidden layer in the network. By using a cross-validation training algorithm, the number of hidden neurons is shrunk to a smaller set consisting of the most representative samples of the training set. This is done without affecting the overall architecture of the network. The performance of the network is compared against that of the standard PNN for different databases from the UCI repository. Results show an important gain in network size and performance.
Keywords: Classification, probabilistic neural networks, network optimization, pattern recognition.
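For orientation, the sketch below implements a plain PNN classifier (without the paper's cross-validation shrinking of the hidden layer): each stored training pattern acts as a hidden neuron, and a sample is assigned to the class whose Gaussian kernels respond most strongly; shrinking the set of stored patterns is exactly what reduces the hidden layer. The data and sigma value are invented.

    import numpy as np

    def pnn_predict(x, patterns, labels, sigma=0.5):
        """PNN: one hidden neuron per stored pattern; class score = mean kernel."""
        scores = {}
        for c in np.unique(labels):
            diff = patterns[labels == c] - x
            kernels = np.exp(-np.sum(diff ** 2, axis=1) / (2 * sigma ** 2))
            scores[c] = kernels.mean()
        return max(scores, key=scores.get)

    rng = np.random.default_rng(0)
    patterns = np.vstack([rng.normal(0, 0.3, (20, 2)),
                          rng.normal(2, 0.3, (20, 2))])
    labels = np.array([0] * 20 + [1] * 20)
    print(pnn_predict(np.array([1.9, 2.1]), patterns, labels))   # -> 1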