Search results for: Data Base
7679 Tipover Stability Enhancement of Wheeled Mobile Manipulators Using an Adaptive Neuro-Fuzzy Inference Controller System
Authors: A. Ghaffari, A. Meghdari, D. Naderi, S. Eslami
Abstract:
In this paper, an algorithm based on an adaptive neuro-fuzzy controller is provided to enhance the tipover stability of mobile manipulators when they are subjected to predefined trajectories for the end-effector and the vehicle. The controller creates proper configurations for the manipulator to prevent the robot from overturning. The optimal configuration, and thus the most favorable control, is obtained through soft computing approaches combining a genetic algorithm, neural networks, and fuzzy logic. In the proposed algorithm, a look-up table is designed from the values obtained by the genetic algorithm so as to minimize the performance index; using this database, rule bases are designed for the ANFIS controller and exerted on the actuators to enhance the tipover stability of the mobile manipulator. A numerical example is presented to demonstrate the effectiveness of the proposed algorithm.
Keywords: Mobile Manipulator, Tipover Stability Enhancement, Adaptive Neuro-Fuzzy Inference Controller System, Soft Computing.
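The abstract describes a GA-built look-up table feeding the ANFIS rule base; the sketch below is only a highly simplified stand-in for that pipeline, using an assumed table of favorable joint angles (in place of genetic-algorithm output) and plain grid interpolation (in place of the fuzzy inference) to show how such a table could be queried along a trajectory.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical look-up table: "optimal" shoulder angles indexed by end-effector position.
# Real values would come from the genetic algorithm minimizing the performance index.
x_grid = np.linspace(0.0, 1.0, 5)                                  # end-effector x positions (m)
z_grid = np.linspace(0.2, 0.8, 4)                                  # end-effector heights (m)
best_angle = np.radians(20 + 30 * np.add.outer(x_grid, z_grid))    # assumed angles (rad)

lookup = RegularGridInterpolator((x_grid, z_grid), best_angle)     # stand-in for the rule base
trajectory = np.array([[0.10, 0.30], [0.45, 0.50], [0.90, 0.70]])  # predefined end-effector path
print("commanded shoulder angles (rad):", np.round(lookup(trajectory), 3))
```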
7678 Energy Supply, Demand and Environmental Analysis – A Case Study of Indian Energy Scenario
Authors: I.V. Saradhi, G.G. Pandit, V.D. Puranik
Abstract:
Increasing concerns over climate change have limited the liberal use of available energy technology options. India faces a formidable challenge to meet its energy needs and to provide adequate energy of the desired quality, in various forms, to users in a sustainable manner and at reasonable cost. The work reported in this paper studies the role of various energy technology options under different scenarios, namely a baseline scenario, a high nuclear scenario, a high renewable scenario, and low and high growth rate scenarios. The study has been carried out using the Model for Energy Supply Strategy Alternatives and their General Environmental Impacts (MESSAGE), which evaluates alternative energy supply strategies under user-defined constraints on fuel availability, environmental regulations, etc. The projected electricity demand at the end of the study period, i.e. 2035, is 500,490 MWYr. In the baseline scenario the model predicts the demand to be met by thermal: 428,170 MWYr, hydro: 40,320 MWYr, nuclear: 14,000 MWYr, and wind: 18,000 MWYr. Coal remains the dominant fuel for electricity production during the study period; however, the import dependency on coal increases over the period. In the baseline scenario the cumulative carbon dioxide emissions up to 2035 are about 11,000 million tonnes of CO2. In the high nuclear capacity scenario, carbon dioxide emissions are reduced by 10% when the nuclear energy share increases to 9%, compared to 3% in the baseline scenario. Similarly, aggressive use of renewables reduces carbon dioxide emissions by 4%.
Keywords: Carbon dioxide, energy, electricity, MESSAGE.
7677 Optimization of Design Parameters for Wire Mesh Fin Arrays as a Heat Sink Using Taguchi Method
Authors: Kavita H. Dhanawade, Hanamant S. Dhanawade
Abstract:
Heat transfer enhancement devices such as extended surfaces and fins are chosen for their thermal performance as well as for other design parameters, depending on the application. The present paper reports an experimental study investigating heat transfer enhancement through wire mesh fin arrays mounted on a horizontal base plate. The data used in the performance analysis were obtained experimentally for mild steel at heat inputs of 40, 60, 80, 100 and 120 W, varying the wire mesh diameter, the fin height and the spacing between two fin arrays. Using the Taguchi experimental design method, the optimum design parameters and their levels were investigated, with the average heat transfer coefficient taken as the performance characteristic. An L9 (3^3) orthogonal array was selected as the experimental plan, and the optimum settings were found experimentally. It is observed that the wire mesh diameter and fin height have a higher impact on the heat transfer coefficient than the spacing between two fin arrays.
Keywords: Heat transfer enhancement, finned surface, wire mesh diameter, natural convection.
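As a rough illustration of the Taguchi analysis described above, the sketch below fills an L9 (3^3) plan with hypothetical heat transfer coefficients (one replicate per run) and computes the larger-the-better signal-to-noise ratio and the factor main effects; the measured values and level codes are assumptions, not the paper's data.

```python
import numpy as np
import pandas as pd

# Hypothetical L9(3^3) plan: three factors (wire mesh diameter, fin height, fin spacing)
# at three levels each, with a measured average heat transfer coefficient h per run.
L9 = pd.DataFrame({
    "diameter_level": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "height_level":   [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "spacing_level":  [1, 2, 3, 2, 3, 1, 3, 1, 2],
    "h":              [11.2, 12.5, 13.1, 12.9, 14.0, 13.4, 13.8, 14.6, 14.1],  # W/m^2K, assumed
})

# Larger-the-better S/N ratio (single replicate): -10 log10(1/h^2) = 20 log10(h).
L9["SN"] = -10 * np.log10(1.0 / L9["h"] ** 2)

# Main effects: mean S/N per level; the level with the highest mean S/N is the optimum,
# and the range (max - min) ranks how strongly each factor influences the response.
for factor in ["diameter_level", "height_level", "spacing_level"]:
    effect = L9.groupby(factor)["SN"].mean()
    print(factor, effect.round(3).to_dict(), "range:", round(effect.max() - effect.min(), 3))
```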
7676 The Impact of System and Data Quality on Organizational Success in the Kingdom of Bahrain
Authors: Amal M. Alrayes
Abstract:
Data and system quality play a central role in organizational success, and the quality of any existing information system has a major influence on the effectiveness of overall system performance. Given the importance of system and data quality to an organization, it is relevant to highlight their effect on organizational performance in the Kingdom of Bahrain. This research aims to discover whether system quality and data quality are related, and to study the impact of system and data quality on organizational success. A theoretical model based on previous research is used to show the relationship between data quality, system quality and organizational impact. We hypothesize, first, that system quality is positively associated with organizational impact; second, that system quality is positively associated with data quality; and finally, that data quality is positively associated with organizational impact. A questionnaire survey was conducted among public and private organizations in the Kingdom of Bahrain. The results show a strong association between data and system quality, which affects organizational success.
Keywords: Data quality, performance, system quality.
7675 Integration of Multi-Source Data to Monitor Coral Biodiversity
Authors: K. Jitkue, W. Srisang, C. Yaiprasert, K. Jaroensutasinee, M. Jaroensutasinee
Abstract:
This study aims at using multi-source data to monitor coral biodiversity and coral bleaching. The coral reef at Racha Islands, Phuket, was used as the study area. Three sources of data were used: coral diversity, sensor-based data and satellite data.
Keywords: Coral reefs, remote sensing, sea surface temperature, satellite imagery.
7674 Decision Support System Based on Data Warehouse
Authors: Yang Bao, LuJing Zhang
Abstract:
A typical Intelligent Decision Support System is built on four components: its design comprises a Data Warehouse, Online Analytical Processing, Data Mining and model-based decision support, an architecture referred to as a Decision Support System Based on Data Warehouse (DSSBDW). This approach takes ETL, OLAP and DM as its implementation means and integrates traditional model-driven DSS and data-driven DSS into a whole. For this kind of problem, this paper analyzes the DSSBDW architecture and the DW model, and discusses the following key issues: ETL design and realization; metadata management using XML; and SQL implementation, performance optimization and data mapping in OLAP. Finally, it illustrates the design principles and methods of the DW in the DSSBDW.
Keywords: Decision Support System, Data Warehouse, Data Mining.
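One of the key issues listed above is metadata management using XML. The snippet below is a minimal sketch of that idea under assumed table, column and transformation names: source-to-warehouse mappings are described in XML and read back by an ETL job; the actual metadata schema used in the DSSBDW is not given in the abstract.

```python
import xml.etree.ElementTree as ET

# Hypothetical warehouse metadata: which source table feeds which fact table,
# and how each source column maps onto a warehouse column.
metadata_xml = """
<warehouse-metadata>
  <mapping target="fact_sales">
    <source system="erp" table="orders"/>
    <column target="amount"   source="order_total" transform="cast_decimal"/>
    <column target="date_key" source="order_date"  transform="to_date_key"/>
  </mapping>
</warehouse-metadata>
"""

root = ET.fromstring(metadata_xml)
for mapping in root.findall("mapping"):
    src = mapping.find("source")
    print(f"load {mapping.get('target')} from {src.get('system')}.{src.get('table')}")
    for col in mapping.findall("column"):
        print(f"  {col.get('source')} -> {col.get('target')} via {col.get('transform')}")
```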
7673 Analysis of Non-Conventional Roundabout Performance in Mixed Traffic Conditions
Authors: Guneet Saini, Shahrukh, Sunil Sharma
Abstract:
Traffic congestion is the most critical issue faced by the transportation profession today. Over the past few years, roundabouts have been recognized globally as a measure to promote efficiency at intersections. In developing countries like India, this type of intersection still faces many issues, such as bottleneck situations, long queues and increased waiting times due to increasing traffic, which in turn affect the performance of the entire urban network. This research is a case study of a roundabout that is non-conventional in terms of geometric design, located in a small town in India. Such roundabouts should be analyzed for their functionality under the mixed traffic conditions prevalent in many developing countries. Microscopic traffic simulation is an effective tool to analyze traffic conditions and estimate measures of operational performance of intersections such as capacity, vehicle delay, queue length and Level of Service (LOS) of an urban roadway network. This study analyzes an unsymmetrical, non-circular, six-legged roundabout known as "Kala Aam Chauraha" in the small town of Bulandshahr in Uttar Pradesh, India, using the VISSIM simulation package, the most widely used software for microscopic traffic simulation. For coding in VISSIM, data were collected at the site during the morning and evening peak hours of a weekday and then analyzed for base model building. The model is calibrated on driving behavior and vehicle parameters, an optimal set of calibrated parameters is obtained, and the model is then validated to obtain a base model that replicates real field conditions. This calibrated and validated model is used to analyze the prevailing operational traffic performance of the roundabout, which is then compared with a proposed alternative intended to improve the efficiency of the roundabout network and to accommodate pedestrians in the geometry. The study results show that the proposed alternative is an improvement over the present roundabout, as it considerably reduces congestion, vehicle delay and queue length and hence improves roundabout performance without compromising pedestrian safety. The study proposes similar designs for the modification of existing non-conventional roundabouts experiencing excessive delays and queues in order to improve their efficiency, especially in developing countries. From this study, it can be concluded that the current geometry of such roundabouts needs to be improved to ensure better traffic performance and the safety of drivers and pedestrians negotiating the intersection, and this proposal may therefore be considered a good fit.
Keywords: Operational performance, roundabout, simulation, VISSIM, traffic.
7672 Studies on Lucrative Process Layout for Medium Scale Industries
Authors: Balamurugan Baladhandapani, Ganesh Renganathan, V. R. Sanal Kumar
Abstract:
In this paper, a comprehensive review of various factory layouts has been carried out with the aim of designing a lucrative process layout for medium-scale industries. Industry databases reveal that end-product rejection rates are on the order of 10%, amounting to a large profit loss. In order to avoid these rejection rates and to increase quality production, an intermediate non-destructive testing facility (INDTF) is recommended for increasing overall profit. We observed through detailed case studies that, when INDTF is introduced into medium-scale industries, expensive downstream processing of defective products can be avoided well before they reach their final shape. Additionally, the defective products identified during the intermediate stage can be effectively utilized for other applications or recycled; thereby the overall wastage of raw materials can be reduced and profit increased. We conclude that the prudent design of a factory layout using the critical path method, facilitated by INDTF, will warrant a profitable outcome.
Keywords: Intermediate Non-destructive testing, Medium scale industries, Process layout design.
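The conclusion above relies on the critical path method for the layout design. The sketch below, using a hypothetical sequence of production stages and assumed processing times (with INDTF as one stage), shows how the critical path and its duration can be computed by treating the layout as a weighted directed acyclic graph.

```python
import networkx as nx

# Hypothetical process layout as a weighted DAG; edge weights are assumed processing times (h).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("raw_material", "machining", 4),
    ("machining", "INDTF", 1),           # intermediate non-destructive testing stage
    ("INDTF", "assembly", 3),
    ("INDTF", "rework", 2),              # defective parts leave the main line early
    ("assembly", "final_inspection", 2),
    ("rework", "final_inspection", 1),
])

critical_path = nx.dag_longest_path(G, weight="weight")
duration = nx.dag_longest_path_length(G, weight="weight")
print("critical path:", " -> ".join(critical_path), "| duration:", duration, "h")
```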
7671 Numerical Modeling of Artisanal and Small-Scale Mining of Coltan in the African Great Lakes Region
Authors: Sergio Perez Rodriguez
Abstract:
Findings from a production model of Artisanal and Small-Scale Mining (ASM) of coltan ore by an average Democratic Republic of Congo (DRC) mineworker are presented in this paper. These can be used as a reference for a similar characterization of the daily labor of counterparts in other countries of Africa's Great Lakes region. To that end, the Fundamental Equation of Mineral Production has been applied, considering a miner's average daily output of coltan estimated on the basis of gross statistical data gathered from reputable sources. Results indicate daily yields of individual miners on the order of 300 g of coltan ore, with hourly production peaks in the range of 30 to 40 g of the mineral. Yields are expected to be on the order of 5 g or less during the least productive hours. These outputs are expected to be achieved within the eight- to ten-hour daily working sessions that these artisanal laborers can attend during the mining season.
Keywords: Coltan, mineral production, Production to Reserve ratio, artisanal mining, small-scale mining, ASM, human work, Great Lakes region, Democratic Republic of Congo.
7670 A New History Based Method to Handle the Recurring Concept Shifts in Data Streams
Authors: Hossein Morshedlou, Ahmad Abdollahzade Barforoush
Abstract:
Recent developments in storage technology and networking architectures have made it possible for broad areas of applications to rely on data streams for quick response and accurate decision making. Data streams are generated from real-world events, so it is logical that associations among the occurrences of these events also exist among the concepts of the data streams. Extracting these hidden associations can be useful for predicting subsequent concepts in concept-shifting data streams. In this paper we present a new method for learning associations among the concepts of a data stream and predicting what the next concept will be. Knowing the next concept, an informed update of the data model becomes possible. The results of the conducted experiments show that the proposed method is suitable for the classification of concept-shifting data streams.
Keywords: Data Stream, Classification, Concept Shift, History.
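The paper's actual learning method is not detailed in the abstract; the sketch below only illustrates the general idea of a history-based predictor, keeping counts of which concept followed which and predicting the most frequent successor of the current concept.

```python
from collections import defaultdict

# A minimal, illustrative history of concept transitions in a stream.
class ConceptHistory:
    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.previous = None

    def observe(self, concept):
        """Record that `concept` followed the previously observed concept."""
        if self.previous is not None:
            self.transitions[self.previous][concept] += 1
        self.previous = concept

    def predict_next(self):
        """Return the concept that most often followed the current one, if any history exists."""
        followers = self.transitions.get(self.previous)
        if not followers:
            return None
        return max(followers, key=followers.get)

history = ConceptHistory()
for concept in ["weekday", "weekend", "weekday", "weekend", "weekday"]:  # hypothetical concepts
    history.observe(concept)
print("predicted next concept:", history.predict_next())  # -> "weekend"
```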
7669 Incremental Learning of Independent Topic Analysis
Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda
Abstract:
In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of documents. The amount of document data has been increasing since the spread of the Internet, and ITA was proposed as one method to analyze such data. ITA extracts independent topics from document data using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing number of documents, because ITA must use all of the document data at once, so its temporal and spatial costs are very high. Therefore, we present Incremental ITA, which extracts independent topics from a growing document collection by updating the topics extracted from the previous data whenever new documents are added. We also show the results of applying Incremental ITA to benchmark datasets.
Keywords: Text mining, topic extraction, independent, incremental, independent component analysis.
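A minimal, non-incremental sketch of ICA-based topic extraction is shown below: documents are vectorized with TF-IDF and FastICA separates independent components whose strongest terms indicate topics. The toy corpus is invented, and the incremental update scheme that distinguishes Incremental ITA is not reproduced here.

```python
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "stock market prices rise on strong earnings",
    "central bank raises interest rates to curb inflation",
    "team wins championship after dramatic final match",
    "player scores twice as club tops the league",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs).toarray()   # document-term matrix
ica = FastICA(n_components=2, random_state=0)
doc_topics = ica.fit_transform(X)              # document loadings on the independent topics

# Show the most strongly weighted terms of each independent topic.
terms = vectorizer.get_feature_names_out()
for k, component in enumerate(ica.components_):
    top = component.argsort()[-4:][::-1]
    print(f"topic {k}:", [terms[i] for i in top])
```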
7668 A Framework for Data Mining Based Multi-Agent: An Application to Spatial Data
Authors: H. Baazaoui Zghal, S. Faiz, H. Ben Ghezala
Abstract:
Data mining is an extraordinarily demanding field concerned with the extraction of implicit knowledge and relationships that are not explicitly stored in databases. A wide variety of data mining methods have been introduced (classification, characterization, generalization, etc.), and each of these methods includes more than one algorithm. A data mining system involves different user categories, which means that user behavior must be a component of the system. The problem at this level is to know which algorithm of which method to employ for an exploratory purpose, which one for a decisional purpose, and how they can collaborate and communicate. The agent paradigm presents a new way of conceiving and realizing data mining systems. The purpose is to combine different data mining algorithms to prepare elements for decision-makers, benefiting from the possibilities offered by multi-agent systems. In this paper the agent framework for data mining is introduced, and its overall architecture and functionality are presented. The validation is made on spatial data, and the principal results are presented.
Keywords: Databases, data mining, multi-agent, spatial data mart.
7667 Latent Topic Based Medical Data Classification
Authors: Jian-hua Yeh, Shi-yi Kuo
Abstract:
This paper discusses the classification process for medical data, using the data from ACM KDD Cup 2008 to demonstrate a classification process based on latent topic discovery. In this data set, the target set and the outliers are quite different in nature: the target set makes up only 0.6% of the data, while the outliers constitute the remaining 99.4%. We use this data set as an example to show how we dealt with an extremely biased data set using latent topic discovery and noise reduction techniques. Our experiment faces two major challenges: (1) extremely distributed outliers, and (2) positive samples that are far fewer than negative ones. We propose a suitable process flow to deal with these issues and obtain a best AUC result of 0.98.
Keywords: classification, latent topics, outlier adjustment, feature scaling
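As a simplified stand-in for the process flow described above, the sketch below handles a comparably imbalanced synthetic data set (about 0.6% positives) with feature scaling and a class-weighted classifier, and evaluates it with AUC; the latent topic discovery and noise reduction steps of the paper are omitted.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic data set mimicking the KDD Cup 2008 imbalance: ~0.6% positive samples.
X, y = make_classification(n_samples=20000, n_features=30, weights=[0.994, 0.006], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Feature scaling plus a class-weighted classifier to compensate for the skewed classes.
scaler = StandardScaler().fit(X_train)
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(scaler.transform(X_train), y_train)

scores = clf.predict_proba(scaler.transform(X_test))[:, 1]
print("AUC:", round(roc_auc_score(y_test, scores), 3))
```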
7666 Data Collection in Hospital Emergencies: A Questionnaire Survey
Authors: Nouha Mhimdi, Wahiba Ben Abdessalem Karaa, Henda Ben Ghezala
Abstract:
Many methods are used to collect data, such as questionnaires, surveys and focus group interviews. However, the collection of poor-quality data, resulting for example from poorly designed questionnaires, the absence of good translators or interpreters, or the incorrect recording of data, can lead to conclusions that are not supported by the data, or to a focus only on the average effect of a program or policy. There are several ways to avoid or minimize the most frequent errors, including obtaining expert advice on the design or adaptation of data collection instruments, or using technologies that allow better anonymity in the responses. In this context, and to overcome the aforementioned problems, we suggest in this paper an approach for collecting relevant data by carrying out a large-scale questionnaire-based survey. We were able to collect good-quality, consistent and practical data on hospital emergencies in order to improve emergency services in hospitals, especially in the case of epidemics or pandemics.
Keywords: Data collection, survey, database, data analysis, hospital emergencies.
7665 Data Transformation Services (DTS): Creating Data Mart by Consolidating Multi-Source Enterprise Operational Data
Authors: J. D. D. Daniel, K. N. Goh, S. M. Yusop
Abstract:
Trends in business intelligence, e-commerce and remote access make it necessary and practical to store data in different ways on multiple systems with different operating systems. As businesses evolve and grow, they require efficient computerized solutions to perform data updates and to access data from diverse enterprise business applications. The objective of this paper is to demonstrate the capability of DTS [1] as a database solution for automatic data transfer and update in solving a business problem. The DTS package was developed for a business selling a variety of plants that eventually expanded into commercial supply and landscaping. Dimensional data modeling is used in the DTS package to extract, transform and load data from heterogeneous database systems such as MySQL, Microsoft Access and Oracle, consolidating it into a Data Mart residing in SQL Server. The data transfer from the various databases is scheduled to run automatically every quarter of the year to support efficient sales analysis. DTS is therefore an attractive solution for automatic data transfer and update, meeting today's business needs.
Keywords: Data Transformation Services (DTS), Object Linking and Embedding Database (OLE DB), Data Mart, Online Analytical Processing (OLAP), Online Transactional Processing (OLTP).
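DTS itself is a legacy SQL Server 2000 tool, so the sketch below shows the same extract-transform-load idea in generic Python instead; the connection strings, table names and columns are hypothetical placeholders for the plant-sales sources and the SQL Server data mart, and the corresponding database drivers are assumed to be installed.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical heterogeneous sources and the target data mart (placeholder connection strings).
mysql_src = create_engine("mysql+pymysql://user:pwd@host/sales")
oracle_src = create_engine("oracle+cx_oracle://user:pwd@host/?service_name=supply")
mart_dst = create_engine("mssql+pyodbc://user:pwd@dsn_datamart")

def quarterly_load():
    # Extract: pull operational rows from the heterogeneous sources.
    retail = pd.read_sql("SELECT order_date, plant_id, amount FROM orders", mysql_src)
    contract = pd.read_sql("SELECT order_date, plant_id, amount FROM landscape_jobs", oracle_src)

    # Transform: consolidate the sources and derive the date key of a simple star schema.
    sales = pd.concat([retail, contract], ignore_index=True)
    sales["date_key"] = pd.to_datetime(sales["order_date"]).dt.strftime("%Y%m%d").astype(int)

    # Load: append the consolidated rows into the fact table of the data mart.
    sales[["date_key", "plant_id", "amount"]].to_sql("fact_sales", mart_dst,
                                                     if_exists="append", index=False)

# In production this function would be run by a quarterly scheduler (e.g. cron or SQL Agent).
```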
7664 Extraction of Data from Web Pages: A Vision Based Approach
Authors: P. S. Hiremath, Siddu P. Algur
Abstract:
With the explosive growth of information sources available on the World Wide Web, it has become increasingly difficult to identify the relevant pieces of information, since web pages are often cluttered with irrelevant content such as advertisements, navigation panels and copyright notices surrounding the main content of the page. Hence, tools for mining data regions, data records and data items need to be developed in order to provide value-added services. Currently available automatic techniques for mining data regions from web pages are still unsatisfactory because of their poor performance and tag dependence. In this paper, a novel method to extract data items from web pages automatically is proposed. It comprises two steps: (1) identification and extraction of data regions based on visual clues, and (2) identification of data records and extraction of data items from a data region. For step 1, a novel and more effective method is proposed that finds the data regions formed by all types of tags using visual clues. For step 2, a more effective method, namely Extraction of Data Items from web Pages (EDIP), is adopted to mine data items. EDIP is a list-based approach in which the list is a linear data structure. The proposed technique is able to mine non-contiguous data records and can correctly identify data regions, irrespective of the type of tag in which they are bound. Our experimental results show that the proposed technique performs better than existing techniques.
Keywords: Web data records, web data regions, web mining.
7663 Visual-Graphical Methods for Exploring Longitudinal Data
Authors: H. W. Ker
Abstract:
Longitudinal data typically exhibit changes over time, nonlinear growth patterns, between-subject variability, and within-subject errors showing heteroscedasticity and dependence, so exploring such data is more complicated than exploring cross-sectional data. The purpose of this paper is to organize and integrate various visual-graphical techniques for exploring longitudinal data. By applying the proposed methods, investigators can answer research questions that include characterizing or describing growth patterns at both the group and individual level, identifying the time points where important changes occur as well as unusual subjects, selecting suitable statistical models, and suggesting possible within-error variance structures.
Keywords: Data exploration, exploratory analysis, HLMs/LMEs, longitudinal data, visual-graphical methods.
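A minimal sketch of two of the visual-graphical tools discussed, individual growth curves (a "spaghetti" plot) overlaid with the group mean profile, is given below using simulated longitudinal data with a per-subject random intercept.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Simulated longitudinal data: 20 subjects measured at 6 time points.
rng = np.random.default_rng(1)
rows = []
for s in range(20):
    intercept = 2 + rng.normal(0, 1)            # between-subject variability
    for t in range(6):
        rows.append({"id": s, "time": t, "y": intercept + 0.8 * t + rng.normal(0, 0.5)})
data = pd.DataFrame(rows)

fig, ax = plt.subplots()
for s, traj in data.groupby("id"):
    ax.plot(traj["time"], traj["y"], color="grey", alpha=0.4)      # individual growth curves
data.groupby("time")["y"].mean().plot(ax=ax, color="black", lw=2)  # group-level mean profile
ax.set_xlabel("time")
ax.set_ylabel("response")
plt.show()
```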
7662 A Materialized Approach to the Integration of XML Documents: the OSIX System
Authors: H. Ahmad, S. Kermanshahani, A. Simonet, M. Simonet
Abstract:
The data exchanged on the Web are of a different nature from those handled by classical database management systems; they are called semi-structured data since they do not have the regular and static structure of data found in a relational database: their schema is dynamic and may contain missing data or types. Therefore, the need has arisen to develop further techniques and algorithms to exploit and integrate such data and to extract relevant information for the user. In this paper we present the OSIX system (Osiris-based System for Integration of XML Sources). This system has a Data Warehouse model designed for the integration of semi-structured data, and more precisely for the integration of XML documents. The architecture of OSIX relies on the Osiris system, a DL-based model designed for the representation and management of databases and knowledge bases. Osiris is a view-based data model whose indexing system supports semantic query optimization. We show that query processing on an XML source is optimized by the indexing approach proposed by Osiris.
Keywords: Data integration, semi-structured data, views, XML.
7661 Solar Energy Generation Based Urban Development: A Case of Jodhpur City
Authors: A. Kumar, V. Devadas
Abstract:
India has the most favorable year-round sunny conditions along with the second-highest solar irradiation in the world, so the country holds the potential to become the global solar hub. Solar- and wind-based generation capacity has skyrocketed in India through the successful efforts of the Ministry of Renewable Energy, whereas the potential of rooftop-based solar power generation has yet to be explored for the proposed solar cities in India. This research aims to analyze the gap in the energy scenario of Jodhpur City and proposes interventions of solar energy generation systems as a catalyst for urban development. The research is based on the system concept, which deals with simulating the city system as a whole and the interactions among its subsystems. A system-dynamics-based mathematical model is developed by identifying the control parameters using regression and correlation analysis in order to assess the gap in the energy sector. The base model is validated using the past 10 years of data collected from secondary sources. Further, projections of energy consumption and solar energy generation are made to test different scenarios and to assess the feasibility of maintaining city-level energy independence until 2031.
Keywords: City, consumption, energy, generation.
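The projection step can be illustrated with a much simpler stand-in than the paper's system-dynamics model: the sketch below fits a linear trend to hypothetical historical electricity consumption figures and extrapolates it to the 2031 horizon. All numbers are assumed, not Jodhpur data.

```python
import numpy as np

# Hypothetical historical electricity consumption (GWh per year).
years = np.arange(2011, 2021)
consumption_gwh = np.array([820, 870, 905, 960, 1010, 1075, 1130, 1200, 1265, 1340])

# Linear trend as a simple baseline projection; the paper's model is far richer than this.
coeffs = np.polyfit(years, consumption_gwh, deg=1)
trend = np.poly1d(coeffs)
for year in (2025, 2031):
    print(year, "projected demand:", round(trend(year)), "GWh")
```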
7660 Data-Driven Decision-Making in Digital Entrepreneurship
Authors: Abeba Nigussie Turi, Xiangming Samuel Li
Abstract:
Data-driven business models are more typical of established businesses than of early-stage startups striving to penetrate a market. This paper provides an extensive discussion of the principles of data analytics for early-stage digital entrepreneurial businesses. We develop a data-driven decision-making (DDDM) framework that applies to startups prone to multifaceted barriers such as poor data access and technical and financial constraints, to name a few. The startup DDDM framework proposed in this paper is novel in that it encompasses startup data analytics enablers and metrics aligned with startups' business models, ranging from customer-centric product development to servitization, which is the future of modern digital entrepreneurship.
Keywords: Startup data analytics, data-driven decision-making, data acquisition, data generation, digital entrepreneurship.
7659 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm
Authors: Frodouard Minani
Abstract:
Over the last decade, wireless sensor networks (WSNs) have been used in many areas such as health care, agriculture, defense, the military and disaster-hit areas. A wireless sensor network consists of a Base Station (BS) and a number of wireless sensors that monitor temperature, pressure and motion under different environmental conditions. The key parameter in designing a protocol for WSNs is energy efficiency, since energy is the scarcest resource of sensor nodes and determines their lifetime. Maximizing sensor node lifetime is therefore an important issue in the design of applications and protocols for WSNs, and clustering sensor nodes is an effective topology control approach for helping to achieve this goal. In this paper, the researcher presents an energy-efficient protocol to prolong the network lifetime based on an energy-efficient clustering algorithm. The Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol used to lower energy consumption and improve the lifetime of WSNs. The proposed system maximizes the lifetime of the WSN by choosing the farthest cluster head (CH) instead of the closest CH and by forming clusters considering parameter metrics such as node density, residual energy and the distance between clusters (inter-cluster distance). Comparisons between the proposed protocol and comparative protocols in different scenarios have been carried out, and the simulation results show that the proposed protocol performs well against the comparative protocols in various scenarios.
Keywords: Base station, clustering algorithm, energy efficient, wireless sensor networks.
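The cluster-head selection criterion described above (residual energy, node density and distance) can be sketched as a weighted score; in the snippet below the field size, node positions, energies and the weights themselves are assumed values, and the LEACH round structure is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes = 50
pos = rng.uniform(0, 100, size=(n_nodes, 2))     # node coordinates in a 100 m x 100 m field
energy = rng.uniform(0.2, 1.0, size=n_nodes)     # residual energy per node (J)
base_station = np.array([50.0, 150.0])           # BS placed outside the field

dist_bs = np.linalg.norm(pos - base_station, axis=1)
# Local density: neighbours within a 20 m radius of each node (excluding the node itself).
pairwise = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
density = (pairwise < 20).sum(axis=1) - 1

# Weighted score over normalized metrics; the 0.5/0.3/0.2 weights are assumptions.
score = 0.5 * energy / energy.max() + 0.3 * density / density.max() + 0.2 * dist_bs / dist_bs.max()
cluster_head = int(np.argmax(score))
print("selected cluster head:", cluster_head,
      "| energy:", round(float(energy[cluster_head]), 2),
      "| distance to BS:", round(float(dist_bs[cluster_head]), 1), "m")
```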
7658 Classifying Bio-Chip Data using an Ant Colony System Algorithm
Authors: Minsoo Lee, Yearn Jeong Kim, Yun-mi Kim, Sujeung Cheong, Sookyung Song
Abstract:
Bio-chips are used for experiments on genes and contain various kinds of information such as genes, samples and so on. Two-dimensional bio-chips, in which one axis represents genes and the other represents samples, are widely used these days. Instead of experimenting with real genes, which costs a great deal of money and time, bio-chips are used for biological experiments, and extracting data from them with high accuracy and finding patterns or useful information in such data is very important. Bio-chip analysis systems extract data from various kinds of bio-chips and mine the data in order to obtain useful information. One of the commonly used mining methods is classification. The algorithm used to classify the data can vary depending on the data types, numerical characteristics and so on. Considering that bio-chip data are extremely large, an algorithm that imitates an ecosystem, such as the ant algorithm, is suitable for classification. This paper focuses on finding classification rules from bio-chip data using the Ant Colony System algorithm, which imitates an ecosystem. The developed system takes into consideration the accuracy of the discovered rules when applying them to the bio-chip data in order to predict the classes.
Keywords: Ant Colony System, DNA chip data, Classification.
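The sketch below is a toy version of Ant-Miner-style rule discovery rather than the authors' exact Ant Colony System: ants assemble candidate rules from pheromone-weighted terms over binary gene markers, rule quality (sensitivity times specificity) reinforces the pheromone, and the best rule found is reported. The data, rule length and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretized bio-chip data: rows = samples, columns = binary gene markers.
X = rng.integers(0, 2, size=(200, 6))
y = (X[:, 0] & X[:, 3]).astype(int)                # hidden rule the ants should rediscover

terms = [(f, v) for f in range(X.shape[1]) for v in (0, 1)]   # candidate rule terms
pheromone = np.ones(len(terms))

def quality(rule):
    """Ant-Miner style rule quality: sensitivity * specificity of the covered samples."""
    mask = np.ones(len(X), dtype=bool)
    for f, v in rule:
        mask &= X[:, f] == v
    if mask.sum() == 0:
        return 0.0, 0
    cls = int(round(y[mask].mean()))               # majority class among covered samples
    tp = ((y == cls) & mask).sum()
    fn = ((y == cls) & ~mask).sum()
    fp = ((y != cls) & mask).sum()
    tn = ((y != cls) & ~mask).sum()
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens * spec, cls

best_rule, best_q, best_cls = None, -1.0, None
for ant in range(300):
    # Each ant samples two terms with pheromone-proportional probability.
    probs = pheromone / pheromone.sum()
    chosen = rng.choice(len(terms), size=2, replace=False, p=probs)
    rule = [terms[i] for i in chosen]
    if len({f for f, _ in rule}) < len(rule):      # skip contradictory terms on the same gene
        continue
    q, cls = quality(rule)
    pheromone[chosen] += q                         # reinforce terms used in good rules
    pheromone *= 0.99                              # evaporation
    if q > best_q:
        best_rule, best_q, best_cls = rule, q, cls

print("best rule:", best_rule, "-> class", best_cls, "quality", round(best_q, 3))
```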
7657 Trust and Reliability for Public Sector Data
Authors: Klaus Stranacher, Vesna Krnjic, Thomas Zefferer
Abstract:
The public sector holds large amounts of data in various areas such as social affairs, the economy and tourism. Various initiatives, such as Open Government Data or the EU Directive on public sector information, aim to make these data available to public and private service providers. Requirements for the provision of public sector data are defined by legal and organizational frameworks. Surprisingly, the defined requirements hardly cover security aspects such as integrity or authenticity. In this paper we discuss the importance of these missing requirements and present a concept for assuring the integrity and authenticity of provided data based on electronic signatures. We show that our concept is perfectly suitable for the provisioning of unaltered data, and that it can be extended to data that need to be anonymized before provisioning by incorporating redactable signatures. Our proposed concept enhances the trust and reliability of provided public sector data.
Keywords: Trusted Public Sector Data, Integrity, Authenticity, Reliability, Redactable Signatures.
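A minimal sketch of the plain electronic-signature part of the concept is shown below with the Python cryptography library: the publishing agency signs a record and any consumer can verify integrity and authenticity with the public key. The record content is invented, and the redactable-signature extension for anonymized data is not shown.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical open-data record published by a public sector agency.
record = b'{"dataset": "tourism-2013", "visitors": 1284755}'

agency_key = Ed25519PrivateKey.generate()      # key pair held by the publishing agency
signature = agency_key.sign(record)            # published alongside the record
public_key = agency_key.public_key()           # distributed to data consumers

try:
    public_key.verify(signature, record)       # raises InvalidSignature if the data were altered
    print("record is authentic and unaltered")
except InvalidSignature:
    print("record was modified after publication")
```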
7656 Interoperability in Component Based Software Development
Authors: M. Madiajagan, B. Vijayakumar
Abstract:
Interoperability is the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application and data compatibility layers. There has been considerable work in industry on the development of component interoperability models such as CORBA, (D)COM and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate the reuse of off-the-shelf components. Their focus is syntactic interface specification, component packaging, inter-component communication and bindings to a runtime environment. What these models lack is a consideration of architectural concerns: specifying systems of communicating components, explicitly representing loci of component interaction, and exploiting architectural styles that provide well-understood global design solutions. The development of complex business applications is now focused on the assembly of components available on a local area network or on the Internet. These components must be located and identified in terms of available services and communication protocols before any request is made. The first part of the article introduces the basic concepts of components and middleware, the following sections describe the different up-to-date models of communication and interaction, and the last section shows how the different models can communicate among themselves.
Keywords: Interoperability, component packaging, communication technology, heterogeneous platform, component interface, middleware.
7655 Analysis of Relation between Unlabeled and Labeled Data to Self-Taught Learning Performance
Authors: Ekachai Phaisangittisagul, Rapeepol Chongprachawat
Abstract:
Obtaining labeled data for supervised learning is often difficult and expensive, and the trained learning algorithm therefore tends to overfit due to the small amount of training data. As a result, some researchers have focused on using unlabeled data, which need not follow the same generative distribution as the labeled data, to construct high-level features for improving performance on supervised learning tasks. In this paper, we investigate the impact of the relationship between unlabeled and labeled data on classification performance. Specifically, we apply different unlabeled data sets, which have different degrees of relation to the labeled data, to a handwritten digit classification task based on the MNIST dataset. Our experimental results show that the higher the degree of relation between unlabeled and labeled data, the better the classification performance. Although unlabeled data drawn from a completely different generative distribution than the labeled data provide the lowest classification performance, we still achieve high classification performance. This expands the applicability of supervised learning algorithms using unsupervised learning.
Keywords: Autoencoder, high-level feature, MNIST dataset, self-taught learning, supervised learning.
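The self-taught learning setup can be sketched as below: a single-hidden-layer autoencoder is trained on unlabeled examples, its hidden activations serve as the high-level feature, and a supervised classifier is trained on the encoded labeled data. The small scikit-learn digits set stands in for MNIST, and the unlabeled split here happens to share the labeled data's distribution, i.e. the degree-of-relation experiment itself is not reproduced.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = load_digits(return_X_y=True)
X = X / 16.0
# Treat 70% of the data as unlabeled and keep labels only for the remaining 30%.
X_unlabeled, X_labeled, _, y_labeled = train_test_split(X, y, test_size=0.3, random_state=0)

# Single-hidden-layer autoencoder: reconstruct the input from a 32-unit code.
autoencoder = MLPRegressor(hidden_layer_sizes=(32,), activation="relu",
                           max_iter=2000, random_state=0)
autoencoder.fit(X_unlabeled, X_unlabeled)

def encode(X):
    """Hidden-layer activations of the trained autoencoder (the learned high-level feature)."""
    return np.maximum(0, X @ autoencoder.coefs_[0] + autoencoder.intercepts_[0])

X_train, X_test, y_train, y_test = train_test_split(encode(X_labeled), y_labeled, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on encoded labeled data:", round(clf.score(X_test, y_test), 3))
```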
7654 Soil/Phytofisionomy Relationship in Southeast of Chapada Diamantina, Bahia, Brazil
Authors: Marcelo Araujo da Nóbrega, Ariel Moura Vilas Boas
Abstract:
This study characterizes the physicochemical aspects of the soils of southeastern Chapada Diamantina, Bahia, in relation to the phytophysiognomies of the area: rupestrian field, small savanna (savanna fields), small dense savanna (savanna fields), savanna (Cerrado), dry thorny forest (Caatinga), dry thorny forest/savanna, scrub (Carrasco, an ecotone), forest island (seasonal semi-deciduous forest, Capão) and seasonal semi-deciduous forest. To achieve this objective, soil samples were collected in each plant formation and analyzed in the soil laboratory of ESALQ - USP in order to characterize soil fertility through the determination of pH, organic matter, phosphorus, potassium, calcium, magnesium, potential acidity, sum of bases, cation exchange capacity and base saturation. The composition of soil particles, that is, the texture, was also determined, a step carried out in the terrestrial ecosystems laboratory of the Department of Ecology of USP and in the soil laboratory of ESALQ. Another important factor studied was the variation in vegetation cover across the region as a function of soil moisture in the different physiographic environments. A further comparison was made between average soil moisture data and precipitation data from three locations with very different phytophysiognomies. The soils found in this part of Bahia can be classified into five classes, with a predominance of oxisols. All of these classes show a great diversity of physical and chemical properties, as can be seen in photographs and in the particle size and fertility analyses. The deepest soils are located in the Central Pediplano of Chapada Diamantina, where the dirty field, the clean field, the Carrasco scrub and the semi-deciduous seasonal forest (Capão) are located, and the shallowest soils were found in the rupestrian field, the dry thorny forest and the savanna fields, the latter located on a hillside. As for the variation of soil water in the region, the data indicate large spatial variations in moisture in both the rainy and dry periods.
Keywords: Bahia, Chapada diamantina, phytophysiognomies, soils.
7653 An Empirical Analysis of the Impact of Selected Macroeconomic Variables on Capital Formation in Libya (1970–2010)
Authors: Khaled Ramadan Elbeydi
Abstract:
This study provides an insight into the impact of selected macroeconomic variables on gross fixed capital formation in Libya, using annual data over the period 1970-2010. Its importance lies in its ability to show the relative importance of the factors that affect Libyan gross fixed capital formation; this understanding gives decision makers an indication of which policies to focus on in order to stimulate the economy. An Autoregressive Distributed Lag (ARDL) modeling process is employed to investigate the impact of Gross Domestic Product, the Monetary Base and Trade Openness on Gross Fixed Capital Formation in Libya. The results reveal an equilibrium relationship between capital formation and its determinants, and indicate that GDP and trade openness largely explain the pattern of capital formation in Libya. The findings and recommendations provide vital information relevant to policy formulation and implementation aimed at improving capital formation in Libya.
Keywords: ARDL, Bounds test, capital formation, Cointegration, Libya.
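The ARDL structure can be illustrated with ordinary least squares on lagged regressors, as in the sketch below; the series are simulated stand-ins for GDP, the monetary base, trade openness and gross fixed capital formation, and the bounds-testing step used in the paper is not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated annual series standing in for the 1970-2010 Libyan data.
rng = np.random.default_rng(3)
n = 41
df = pd.DataFrame({
    "gdp":      np.cumsum(rng.normal(2, 1, n)) + 100,
    "mbase":    np.cumsum(rng.normal(1, 0.5, n)) + 50,
    "openness": rng.normal(60, 5, n),
})
df["gfcf"] = 0.3 * df["gdp"] + 0.1 * df["mbase"] + 0.2 * df["openness"] + rng.normal(0, 1, n)

# Build the ARDL(1,1,1,1) design matrix with pandas shifts: each regressor enters
# with its current value and its first lag, plus the lagged dependent variable.
lagged = pd.DataFrame({"gfcf_lag1": df["gfcf"].shift(1)})
for col in ["gdp", "mbase", "openness"]:
    lagged[col] = df[col]
    lagged[f"{col}_lag1"] = df[col].shift(1)

data = pd.concat([df["gfcf"], lagged], axis=1).dropna()
model = sm.OLS(data["gfcf"], sm.add_constant(data.drop(columns="gfcf"))).fit()
print(model.params.round(3))
```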
7652 Towards Development of Solution for Business Process-Oriented Data Analysis
Authors: M. Klimavicius
Abstract:
This paper proposes a modeling methodology for the development of data analysis solutions. The author introduces an approach to address data warehousing issues at the enterprise level. The methodology covers the requirements elicitation and analysis stage as well as the initial design of the data warehouse. The paper reviews an extended business process model that satisfies the needs of data warehouse development. The author considers the use of business process models necessary, as they reflect both enterprise information systems and business functions, which are important for data analysis. The described approach divides development into three steps with different levels of model elaboration, and makes it possible to gather requirements and present them to business users in an easy manner.
Keywords: Data warehouse, data analysis, business process management.
7651 Preliminary Overview of Data Mining Technology for Knowledge Management System in Institutions of Higher Learning
Authors: Muslihah Wook, Zawiyah M. Yusof, Mohd Zakree Ahmad Nazri
Abstract:
Data mining has been integrated into application systems to enhance the quality of the decision-making process. This study focuses on the integration of data mining technology and Knowledge Management Systems (KMS), given the ability of data mining technology to create useful knowledge from large volumes of data, while KMS vitally support the creation and use of knowledge. The integration of data mining technology and KMS is widely used in business to enhance and sustain organizational performance. However, there is a lack of studies applying data mining technology and KMS in the education sector, particularly to students' academic performance, even though this could reflect the performance of Institutions of Higher Learning (IHLs). Recognizing its importance, this study seeks to integrate data mining technology and KMS to promote effective management of knowledge within IHLs. Several concepts from the literature are adapted to propose a new integrative data mining and KMS framework for an IHL.
Keywords: Data mining, Institutions of Higher Learning, Knowledge Management System, Students' academic performance.
7650 Towards a Secure Storage in Cloud Computing
Authors: Mohamed Elkholy, Ahmed Elfatatry
Abstract:
Cloud computing has emerged as a flexible computing paradigm that has reshaped the Information Technology map. However, cloud computing brings a number of security challenges as a result of the physical distribution of computational resources and the limited control that users have over the physical storage. This situation raises many security challenges for data integrity and confidentiality as well as authentication and access control. This work proposes a security mechanism for data integrity that allows a data owner to be aware of any modification that takes place to their data. The data integrity mechanism is integrated with an extended Kerberos authentication that ensures authorized access control. The proposed mechanism protects data confidentiality even if the data are stored on untrusted storage. The proposed mechanism has been evaluated against different types of attacks and proved effective in protecting cloud data storage from malicious attacks.
Keywords: Access control, data integrity, data confidentiality, Kerberos authentication, cloud security.
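A minimal keyed-hash sketch of the integrity idea is given below: the data owner computes an HMAC tag before uploading, keeps the key, and re-verifies the tag on retrieval to detect any modification on the untrusted storage. The extended Kerberos authentication and the confidentiality mechanism of the paper are outside this snippet.

```python
import hashlib
import hmac

# Secret key held by the data owner, never by the cloud provider (hypothetical value).
owner_key = b"owner-secret-key"

def make_tag(data: bytes) -> str:
    """Keyed hash over the data; only the key holder can produce a valid tag."""
    return hmac.new(owner_key, data, hashlib.sha256).hexdigest()

original = b"quarterly financial report v1"
stored_tag = make_tag(original)                 # computed before uploading to the cloud

retrieved = b"quarterly financial report v1"    # what the cloud later returns
if hmac.compare_digest(stored_tag, make_tag(retrieved)):
    print("data unchanged since upload")
else:
    print("data was modified on the untrusted storage")
```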