Search results for: Support vector data description
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9250

8140 Evaluation of Hydrogen Particle Volume on Surfaces of Selected Nanocarbons

Authors: M. Ziółkowska, J. T. Duda, J. Milewska-Duda

Abstract:

This paper describes an approach to modeling adsorption phenomena aimed at specifying the adsorption mechanisms on localized or non-localized adsorbent sites when applied to nanocarbons. The concept comes from the fundamental thermodynamic description of adsorption equilibrium and is based on numerical calculations of the volume of adsorbed hydrogen particles on the surface of selected nanocarbons: a single-walled nanotube and a nanocone. This approach makes it possible to obtain information on the adsorption mechanism and, consequently, to select an appropriate mathematical adsorption model, thus allowing a more reliable identification of the material's porous structure. The theoretical basis of the approach is discussed, and newly derived results of the numerical calculations are presented for the selected nanocarbons.

Keywords: Adsorption, mathematical modeling, nanocarbons, numerical analysis.

8139 Latent Topic Based Medical Data Classification

Authors: Jian-hua Yeh, Shi-yi Kuo

Abstract:

This paper discusses the classification process for medical data. We use the data from ACM KDD Cup 2008 to demonstrate our classification process based on latent topic discovery. In this data set, the target set and the outliers are quite different in nature: the target set accounts for only 0.6% of the data, while the outliers make up the remaining 99.4%. We use this data set as an example to show how we dealt with this extremely biased data set through latent topic discovery and noise reduction techniques. Our experiment faces two major challenges: (1) widely distributed outliers, and (2) far fewer positive samples than negative ones. We propose a suitable process flow to deal with these issues and obtain a best AUC result of 0.98.
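
As a minimal illustration of the pipeline the abstract describes, the following Python sketch combines latent topic discovery, feature scaling and a class-weighted classifier on a synthetic, heavily imbalanced data set and scores it by AUC; the data and model choices are assumptions, not the authors' exact process flow.

```python
# Minimal sketch (not the authors' exact pipeline): latent topic features,
# feature scaling and a classifier evaluated by AUC on highly imbalanced data.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder data: 5000 samples, 50 count-valued features, ~0.6% positives.
X = rng.poisson(1.0, size=(5000, 50))
y = (rng.random(5000) < 0.006).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Latent topic discovery on the count features.
lda = LatentDirichletAllocation(n_components=10, random_state=0)
T_tr = lda.fit_transform(X_tr)
T_te = lda.transform(X_te)

# Feature scaling, then a classifier weighted against the class imbalance.
scaler = StandardScaler()
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(scaler.fit_transform(T_tr), y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(scaler.transform(T_te))[:, 1]))
```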

Keywords: classification, latent topics, outlier adjustment, feature scaling

8138 Information Dissemination System (IDS) Based E-Learning in Agricultural of Iran (Perception of Iranian Extension Agents)

Authors: A. R. Ommani, M. Chizari

Abstract:

The purpose of the study reported here was to design Information Dissemination System (IDS) based e-learning for Iranian agriculture. A questionnaire was developed for this purpose and distributed to 96 extension agents who work for the Management of Extension and Farming System of Khuzestan province of Iran. The data collected were analyzed using the Statistical Package for the Social Sciences (SPSS), and appropriate descriptive statistics (frequencies, percentages, means, and standard deviations) were used. There was a significant relationship between age, IT skill and knowledge, years of extension work, the extent of information-seeking motivation, level of job satisfaction, and level of education on the one hand and the use of information technology by extension agents on the other. According to the extension agents, five factors were ranked as the top essentials for designing the Information Dissemination System (IDS) based e-learning in Iranian agriculture: 1) establishing communication between farmers, coordinators (extension agents), agricultural experts, research centers, and the community through information technology; 2) making this communication mutual; 3) basing the information on farmers' needs; 4) using the Internet as a facility to transfer advanced agricultural information to the farming community; 5) recognizing that farmers may be illiterate or speak only a local language and are not expected to use the system directly. Knowledge produced by agricultural scientists must therefore be transformed into a computer-understandable presentation. To design the Information Dissemination System, electronic communication in the agricultural society and rural areas must be developed, and this communication must be mutual between all actors.

Keywords: E-learning, information dissemination system, information technology.

8137 Small Entrepreneurship Supporting Economic Policy in Georgia

Authors: G. Erkomaishvili

Abstract:

This paper discusses small entrepreneurship development strategy in Georgia and the tools and regulations that will encourage development of small entrepreneurship. The current situation in the small entrepreneurship sector, as well as factors affecting growth and decline in the sector and the priorities of state support, are studied and analyzed. The objective of this research is to assess the current situation of the sector to highlight opportunities and reveal the gaps. State support of small entrepreneurship should become a key priority in the country’s economic policy, as development of the sector will ensure social, economic and political stability. Based on the research, a small entrepreneurship development strategy is presented; corresponding conclusions are made and recommendations are developed.

Keywords: Economic policy for small entrepreneurship development, small entrepreneurship, regulations, small entrepreneurship development strategy.

8136 Data Collection in Hospital Emergencies: A Questionnaire Survey

Authors: Nouha Mhimdi, Wahiba Ben Abdessalem Karaa, Henda Ben Ghezala

Abstract:

Many methods are used to collect data, such as questionnaires, surveys, and focus group interviews. However, poor-quality data resulting, for example, from poorly designed questionnaires, the absence of good translators or interpreters, or the incorrect recording of data can lead to conclusions that are not supported by the data, or to a focus only on the average effect of a program or policy. Several solutions exist to avoid or minimize the most frequent errors, including obtaining expert advice on the design or adaptation of data collection instruments, or using technologies that allow better anonymity in the responses. In this context, and to overcome the aforementioned problems, we suggest in this paper an approach for collecting relevant data through a large-scale questionnaire-based survey. We have been able to collect good-quality, consistent and practical data on hospital emergencies in order to improve emergency services in hospitals, especially in the case of epidemics or pandemics.

Keywords: Data collection, survey, database, data analysis, hospital emergencies.

8135 Design of MBMS Client Functions in the Mobile

Authors: Jaewook Shin, Aesoon Park

Abstract:

MBMS is a unidirectional point-to-multipoint bearer service in which data are transmitted from a single source entity to multiple recipients. For a mobile to support MBMS, MBMS client functions as well as MBMS radio protocols should be designed and implemented. In this paper, we analyze the MBMS client functions and describe their implementation in our mobile test-bed. User operations and signaling flows between protocol entities to control the MBMS functions are designed in detail. Service announcement utilizing the file download MBMS service and four MBMS user services are demonstrated in the test-bed to verify the MBMS client functions.

Keywords: BM-SC, Broadcast, MBMS, Mobile, Multicast.

8134 Mineralogical Characterization and Petrographic Classification of the Soil of Casablanca City

Authors: I. Fahi, T. Remmal, F. El Kamel, B. Ayoub

Abstract:

The treatment of the geotechnical database of the region of Casablanca was difficult to achieve due to the heterogeneity of the nomenclature of the lithological formations composing its soil. It appears necessary to harmonize the nomenclature of the facies and to produce cartographic documents useful for construction projects and studies before any investment program. To achieve this, more than 600 surveys made by the Public Laboratory for Testing and Studies (LPEE) in the agglomeration of Casablanca, were studied. Moreover, some local observations were made in different places of the metropolis. Each survey was the subject of a sheet containing lithological succession, macro and microscopic description of petrographic facies with photographic illustration, as well as measurements of geomechanical tests. In addition, an X-ray diffraction analysis was made in order to characterize the surficial formations of the region.

Keywords: Casablanca, guidebook, petrography, soil.

8133 Application of Building Information Modeling in Energy Management of Individual Departments Occupying University Facilities

Authors: Kung-Jen Tu, Danny Vernatha

Abstract:

To assist individual departments within universities in their energy management tasks, this study explores the application of Building Information Modeling in establishing the ‘BIM-based Energy Management Support System’ (BIM-EMSS). The BIM-EMSS consists of six components: (1) sensors installed for each occupant and each piece of equipment, (2) electricity sub-meters (constantly logging the lighting, HVAC, and socket electricity consumption of each room), (3) BIM models of all rooms within individual departments’ facilities, (4) a data warehouse (for storing occupancy status and logged electricity consumption data), (5) a building energy management system that provides energy managers with various energy management functions, and (6) an energy simulation tool (such as eQuest) that generates real-time 'standard energy consumption' data against which 'actual energy consumption' data are compared and energy efficiency evaluated. Through the building energy management system, the energy manager is able to (a) have a 3D visualization (BIM model) of each room, in which the occupancy and equipment status detected by the sensors and the logged electricity consumption data are displayed constantly; (b) perform real-time energy consumption analysis to compare the actual and standard energy consumption profiles of a space; (c) obtain energy consumption anomaly detection warnings for certain rooms so that corrective energy management actions can be taken (a data mining technique is employed to analyze the relation between the space occupancy pattern and the current equipment settings to indicate an anomaly, such as when appliances are turned on without occupancy); and (d) perform historical energy consumption analysis to review monthly and annual energy consumption profiles and compare them against historical energy profiles. The BIM-EMSS was further implemented in a research lab in the Department of Architecture of NTUST in Taiwan, and implementation results are presented to illustrate how it can be used to assist individual departments within universities in their energy management tasks.
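
The anomaly rule described above (appliances drawing power without occupancy) can be illustrated with a minimal sketch; the column names and threshold below are hypothetical, not the BIM-EMSS implementation.

```python
# Minimal sketch (hypothetical column names, not the BIM-EMSS implementation):
# flag intervals in which a room draws noticeable power while unoccupied.
import pandas as pd

log = pd.DataFrame({
    "room":      ["A101"] * 4,
    "timestamp": pd.to_datetime(["2024-05-01 18:00", "2024-05-01 19:00",
                                 "2024-05-01 20:00", "2024-05-01 21:00"]),
    "occupied":  [True, False, False, False],   # from occupancy sensors
    "socket_kwh": [0.9, 0.8, 0.7, 0.1],         # from electricity sub-meters
})

IDLE_THRESHOLD_KWH = 0.2   # assumed baseline draw for an empty room

anomalies = log[(~log["occupied"]) & (log["socket_kwh"] > IDLE_THRESHOLD_KWH)]
print(anomalies[["room", "timestamp", "socket_kwh"]])
```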

Keywords: Sensor, electricity sub-meters, database, energy anomaly detection.

8132 Data Transformation Services (DTS): Creating Data Mart by Consolidating Multi-Source Enterprise Operational Data

Authors: J. D. D. Daniel, K. N. Goh, S. M. Yusop

Abstract:

Trends in business intelligence, e-commerce and remote access make it necessary and practical to store data in different ways on multiple systems with different operating systems. As businesses evolve and grow, they require efficient computerized solutions to perform data updates and to access data from diverse enterprise business applications. The objective of this paper is to demonstrate the capability of DTS [1] as a database solution for automatic data transfer and update in solving a business problem. The DTS package is developed for a business selling a variety of plants that eventually expanded into commercial supply and landscaping. Dimensional data modeling is used in the DTS package to extract, transform and load data from heterogeneous database systems such as MySQL, Microsoft Access and Oracle, consolidating it into a data mart residing in SQL Server. The data transfer from the various databases is scheduled to run automatically every quarter of the year to support efficient sales analysis. DTS is therefore an attractive solution for automatic data transfer and update that meets today's business needs.
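
The consolidation flow can be illustrated with a conceptual Python/pandas sketch rather than DTS itself; the connection strings, table and column names below are illustrative assumptions.

```python
# Conceptual ETL sketch in Python (not DTS itself): extract from heterogeneous
# sources, conform the records, and load them into a data-mart fact table.
# Connection strings and table/column names are illustrative assumptions.
import pandas as pd
from sqlalchemy import create_engine

sources = {
    "mysql":  create_engine("mysql+pymysql://user:pwd@host/sales"),
    "oracle": create_engine("oracle+cx_oracle://user:pwd@host/?service_name=ORCL"),
}
mart = create_engine("mssql+pyodbc://user:pwd@host/DataMart?driver=ODBC+Driver+17")

frames = []
for name, engine in sources.items():
    df = pd.read_sql("SELECT order_id, plant_code, order_date, amount FROM orders", engine)
    df["source_system"] = name                     # keep data lineage
    frames.append(df)

fact_sales = pd.concat(frames, ignore_index=True)
fact_sales["order_date"] = pd.to_datetime(fact_sales["order_date"])

# Load the conformed fact table; a scheduler (e.g. cron) can run this quarterly.
fact_sales.to_sql("fact_sales", mart, if_exists="append", index=False)
```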

Keywords: Data Transformation Services (DTS), Object Linking and Embedding Database (OLE DB), Data Mart, Online Analytical Processing (OLAP), Online Transactional Processing (OLTP).

8131 Extraction of Data from Web Pages: A Vision Based Approach

Authors: P. S. Hiremath, Siddu P. Algur

Abstract:

With the explosive growth of information sources available on the World Wide Web, it has become increasingly difficult to identify the relevant pieces of information, since web pages are often cluttered with irrelevant content such as advertisements, navigation panels, and copyright notices surrounding the main content of the page. Hence, tools for mining data regions, data records and data items need to be developed in order to provide value-added services. Currently available automatic techniques to mine data regions from web pages are still unsatisfactory because of their poor performance and tag dependence. In this paper a novel method to automatically extract data items from web pages is proposed. It comprises two steps: (1) identification and extraction of the data regions based on visual clues, and (2) identification of data records and extraction of data items from a data region. For step 1, a novel and more effective method based on visual clues is proposed, which finds the data regions formed by all types of tags. For step 2, a more effective method, namely Extraction of Data Items from web Pages (EDIP), is adopted to mine data items. The EDIP technique is a list-based approach in which the list is a linear data structure. The proposed technique is able to mine non-contiguous data records and can correctly identify data regions irrespective of the type of tag in which they are bound. Our experimental results show that the proposed technique performs better than existing techniques.

Keywords: Web data records, web data regions, web mining.

8130 Visual-Graphical Methods for Exploring Longitudinal Data

Authors: H. W. Ker

Abstract:

Longitudinal data typically have the characteristics of changes over time, nonlinear growth patterns, between-subjects variability, and within errors exhibiting heteroscedasticity and dependence. Their exploration is more complicated than that of cross-sectional data. The purpose of this paper is to organize and integrate various visual-graphical techniques for exploring longitudinal data. By applying the proposed methods, investigators can answer research questions that include characterizing or describing growth patterns at both the group and the individual level, identifying the time points where important changes occur as well as unusual subjects, selecting suitable statistical models, and suggesting possible within-error variance structures.
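
A minimal example of one such visual-graphical technique, a spaghetti plot of individual growth trajectories with the group mean overlaid, is sketched below on synthetic data.

```python
# A minimal sketch (synthetic data) of a common visual-graphical display for
# longitudinal data: individual growth trajectories with the group mean overlaid.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
time = np.arange(0, 6)                              # measurement occasions
n_subjects = 20

fig, ax = plt.subplots()
curves = []
for _ in range(n_subjects):
    intercept = rng.normal(10, 2)                   # between-subjects variability
    slope = rng.normal(1.5, 0.5)
    y = intercept + slope * time + rng.normal(0, 1, size=time.size)
    curves.append(y)
    ax.plot(time, y, color="grey", alpha=0.4)       # spaghetti plot of individuals

ax.plot(time, np.mean(curves, axis=0), color="black", linewidth=2, label="group mean")
ax.set_xlabel("Time")
ax.set_ylabel("Response")
ax.legend()
plt.show()
```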

Keywords: Data exploration, exploratory analysis, HLMs/LMEs, longitudinal data, visual-graphical methods.

8129 Estimation of Relative Self-Localization Based On Natural Landmark and an Improved SURF

Authors: Xing Xiong, Byung-Jae Choi

Abstract:

It is important for an autonomous mobile robot to know where it is at any time in an indoor environment. In this paper, we design a relative self-localization algorithm. The algorithm compares the interest points in two images and computes the relative displacement and orientation to determine the posture. First, we use the SURF algorithm to extract the interest points of the ceiling. Second, in order to reduce the amount of calculation, an improved SURF is used to extract the orientation and description of the interest points. Finally, according to the transformation of the interest points between the two images, the relative self-localization of the mobile robot is estimated.
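
A minimal sketch of the interest-point matching step is shown below; it uses ORB as a freely available stand-in for SURF (which sits in opencv-contrib) and recovers the relative rotation and translation from a robustly fitted similarity transform. The image file names are placeholders.

```python
# Minimal sketch (ORB used as a stand-in for SURF): match ceiling interest
# points between two frames and recover the relative rotation and translation.
import cv2
import numpy as np

img_prev = cv2.imread("ceiling_prev.png", cv2.IMREAD_GRAYSCALE)
img_curr = cv2.imread("ceiling_curr.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img_prev, None)
kp2, des2 = orb.detectAndCompute(img_curr, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# A similarity transform (rotation + translation + scale) fitted robustly.
M, _ = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC)
theta = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
print(f"relative rotation: {theta:.1f} deg, translation: ({M[0, 2]:.1f}, {M[1, 2]:.1f}) px")
```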

Keywords: Relative Self-Localization, Posture, SURF, Natural Landmark, Interest Point.

8128 Synchronization of Non-Identical Chaotic Systems with Different Orders Based On Vector Norms Approach

Authors: Rihab Gam, Anis Sakly, Faouzi M'sahli

Abstract:

A new control strategy is formulated for the chaos synchronization of non-identical chaotic systems of different orders, using the Borne and Gentina practical criterion associated with the Benrejeb canonical arrow form matrix to derive the stability property of complex dynamic systems. The designed controller ensures that the state variables of the controlled chaotic slave systems globally synchronize with the state variables of the master systems, respectively. Numerical simulations are performed to illustrate the efficiency of the proposed method.

Keywords: Synchronization, Non-identical chaotic systems, Different orders, Arrow form matrix.

8127 A Materialized Approach to the Integration of XML Documents: the OSIX System

Authors: H. Ahmad, S. Kermanshahani, A. Simonet, M. Simonet

Abstract:

The data exchanged on the Web are of a different nature from those treated by classical database management systems; these data are called semi-structured data since they do not have a regular and static structure like data found in a relational database; their schema is dynamic and may contain missing data or types. Therefore, the need has arisen to develop further techniques and algorithms to exploit and integrate such data and extract relevant information for the user. In this paper we present the OSIX system (Osiris based System for Integration of XML Sources). This system has a Data Warehouse model designed for the integration of semi-structured data and, more precisely, for the integration of XML documents. The architecture of OSIX relies on the Osiris system, a DL-based model designed for the representation and management of databases and knowledge bases. Osiris is a view-based data model whose indexing system supports semantic query optimization. We show that the problem of query processing on an XML source is optimized by the indexing approach proposed by Osiris.

Keywords: Data integration, semi-structured data, views, XML.

8126 Dengue Transmission Model between Infant and Pregnant Woman with Antibody

Authors: R. Kongnuy, P. Pongsumpun

Abstract:

Dengue is a disease found in most tropical and subtropical areas of the world and has become the most common arboviral disease of humans. It is caused by any of four serotypes of dengue virus (DEN1-DEN4). In many endemic countries, the average age at dengue infection is shifting upwards, so dengue in pregnancy and infancy is likely to be encountered more frequently. The dynamics of the disease are studied by a compartmental model involving ordinary differential equations for the pregnant and infant human populations and the vector population. The stability of each equilibrium point is given, and the epidemic dynamics are discussed. Moreover, numerical results are shown for different values of dengue antibody.
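
A minimal sketch of a compartmental host-vector dengue model is given below; the compartments and parameter values are generic assumptions for illustration, not the authors' pregnant/infant model.

```python
# Minimal sketch of a compartmental host-vector dengue model (a basic SIR host /
# SI vector system with assumed parameter values, not the authors' exact model).
from scipy.integrate import solve_ivp

b, beta_h, beta_v = 0.5, 0.4, 0.4    # biting rate and transmission probabilities
gamma, mu_v = 1 / 7, 1 / 14          # human recovery rate, mosquito death rate

def dengue(t, y):
    S_h, I_h, R_h, S_v, I_v = y
    N_h = S_h + I_h + R_h
    new_h = b * beta_h * S_h * I_v / N_h     # new infections in humans
    new_v = b * beta_v * S_v * I_h / N_h     # new infections in vectors
    return [-new_h,
            new_h - gamma * I_h,
            gamma * I_h,
            mu_v * (S_v + I_v) - new_v - mu_v * S_v,   # vector births keep N_v constant
            new_v - mu_v * I_v]

y0 = [10000, 10, 0, 20000, 100]
sol = solve_ivp(dengue, (0, 120), y0, dense_output=True)
print("peak infected humans:", sol.y[1].max())
```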

Keywords: Dengue antibody, infant, pregnant human, mathematical model.

8125 An Effective Algorithm for Minimum Weighted Vertex Cover Problem

Authors: S. Balaji, V. Swaminathan, K. Kannan

Abstract:

The Minimum Weighted Vertex Cover (MWVC) problem is a classic NP-complete graph optimization problem. Given an undirected graph G = (V, E) and a weighting function defined on the vertex set, the minimum weighted vertex cover problem is to find a vertex set S ⊆ V whose total weight is minimum, subject to the condition that every edge of G has at least one endpoint in S. In this paper an effective algorithm, called the Support Ratio Algorithm (SRA), is designed to find a minimum weighted vertex cover of a graph. Computational experiments are designed and conducted to study the performance of the proposed algorithm. Extensive simulation results show that the SRA can yield better solutions than other existing algorithms in the literature for solving the minimum vertex cover problem.
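
A minimal greedy sketch for weighted vertex cover is shown below; it uses a simple weight-to-uncovered-degree ratio as the selection rule and is only an illustration, not the SRA's exact support-ratio rule.

```python
# A minimal greedy sketch for weighted vertex cover (not the SRA itself):
# repeatedly pick the vertex with the smallest weight-to-uncovered-degree ratio
# until every edge has at least one endpoint in the cover.
def greedy_weighted_vertex_cover(edges, weights):
    uncovered = set(map(frozenset, edges))
    cover = set()
    while uncovered:
        # Count how many still-uncovered edges each vertex would cover.
        degree = {}
        for e in uncovered:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        best = min(degree, key=lambda v: weights[v] / degree[v])
        cover.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover

edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
weights = {1: 2.0, 2: 1.0, 3: 3.0, 4: 1.5, 5: 2.5}
print(greedy_weighted_vertex_cover(edges, weights))
```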

Keywords: Weighted vertex cover, vertex support, approximation algorithms, NP-complete problem.

8124 Exterior Calculus: Economic Profit Dynamics

Authors: Troy L. Story

Abstract:

A mathematical model for the Dynamics of Economic Profit is constructed by proposing a characteristic differential one-form for this dynamics (analogous to the action in Hamiltonian dynamics). After processing this form with exterior calculus, a pair of characteristic differential equations is generated and solved for the rate of change of profit P as a function of revenue R(t) and cost C(t). By contracting the characteristic differential one-form with a vortex vector, the Lagrangian is obtained for the Dynamics of Economic Profit.

Keywords: Differential geometry, exterior calculus, Hamiltonian geometry, mathematical economics, economic functions, and dynamics

8123 Hydrochemical Contamination Profiling and Spatial-Temporal Mapping with the Support of Multivariate and Cluster Statistical Analysis

Authors: S. Barbosa, M. Pinto, J. A. Almeida, E. Carvalho, C. Diamantino

Abstract:

The aim of this work was to test a methodology able to generate spatial-temporal maps that synthesize simultaneously the trends of distinct hydrochemical indicators in an old radium-uranium tailings dam deposit. Dimensionality reduction derived from principal component analysis and subsequent data aggregation derived from clustering analysis allow the identification of distinct hydrochemical behavioral profiles and the generation of synthetic evolutionary hydrochemical maps.
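
A minimal sketch of the "K-means of PCA scores" idea on synthetic data is given below; the indicator set and data shapes are assumptions.

```python
# Minimal sketch of "K-means of PCA scores" (synthetic data, assumed shapes):
# reduce multivariate hydrochemical indicators with PCA, then cluster the
# scores to obtain behavioral profiles per monitoring sample.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Rows: monitoring-point/date samples; columns: hydrochemical indicators
# (e.g. pH, sulphate, uranium, radium concentrations).
X = rng.normal(size=(200, 8))

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
profiles = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

print("samples per profile:", np.bincount(profiles))
```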

Keywords: Contamination plume migration, K-means of PCA scores, groundwater and mine water monitoring, spatial-temporal hydrochemical trends.

8122 Structural Engineering Forensic Evaluation of Misdiagnosed Concrete Masonry Wall Cracking

Authors: W. C. Bracken

Abstract:

Given that concrete masonry walls are expected to experience shrinkage combined with thermal expansion and contraction, and in some cases even carbonation, throughout their service life, cracking is to be expected. However, after concrete masonry walls have been placed into service, originally anticipated and accounted for cracking is often misdiagnosed as a structural defect. Such misdiagnoses often result in or are used to support litigation. This paper begins by discussing the causes and types of anticipated cracking within concrete masonry walls followed by a discussion on the processes and analyses that exists for properly evaluating them and their significance. From here, the paper then presents a case of misdiagnosed concrete masonry cracking and the flawed logic employed to support litigation.

Keywords: Concrete masonry, masonry wall cracking, structural defect, structural damage, construction defect, forensic investigation.

8121 Enhancing Performance of Bluetooth Piconets Using Priority Scheduling and Exponential Back-Off Mechanism

Authors: Dharmendra Chourishi “Maitraya”, Sridevi Seshadri

Abstract:

Bluetooth is a personal wireless communication technology that is being applied in many scenarios. It is an emerging standard for short-range, low-cost, low-power wireless access technology. Existing MAC (Medium Access Control) scheduling schemes only provide best-effort service for all master-slave connections, and it is very challenging to provide QoS (Quality of Service) support for different connections due to the Master-Driven TDD (Time Division Duplex) feature. Moreover, no solution is available that supports both the delay and bandwidth guarantees required by real-time applications. This paper addresses the issue of how to enhance QoS support in a Bluetooth piconet. The Bluetooth specification proposes a Round Robin scheduler as a possible solution for scheduling the transmissions in a Bluetooth piconet. We propose an algorithm that reduces bandwidth waste and enhances the efficiency of the network. We define token counters to estimate the traffic of real-time slaves. To increase bandwidth utilization, a back-off mechanism is then presented for best-effort slaves to decrease the frequency of polling idle slaves. Simulation results demonstrate that our scheme achieves better performance than Round Robin scheduling.
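
A minimal polling sketch is given below to illustrate the combination of priority scheduling with token counters for real-time slaves and exponential back-off for idle best-effort slaves; the structure and constants are assumptions, not the paper's exact algorithm.

```python
# A minimal polling sketch (hypothetical structure, not the paper's algorithm):
# real-time slaves are polled every cycle and a token counter tracks their
# recent traffic, while idle best-effort slaves are skipped with exponential
# back-off to reduce wasted polls.
class Slave:
    def __init__(self, name, real_time=False):
        self.name = name
        self.real_time = real_time
        self.tokens = 0      # rough estimate of recent real-time traffic
        self.backoff = 1     # current back-off window (in cycles)
        self.skip = 0        # cycles left to skip

def poll_cycle(slaves, has_data):
    """Poll slaves for one TDD cycle; has_data(slave) models the slave's queue."""
    polled = []
    # Priority scheduling: real-time slaves are served first.
    for s in sorted(slaves, key=lambda s: not s.real_time):
        if s.real_time:
            polled.append(s.name)
            s.tokens = min(s.tokens + 1, 8) if has_data(s) else max(s.tokens - 1, 0)
        else:
            if s.skip > 0:
                s.skip -= 1              # still backing off, do not poll
                continue
            polled.append(s.name)
            if has_data(s):
                s.backoff = 1            # active again: resume normal polling
            else:
                s.backoff = min(s.backoff * 2, 16)
                s.skip = s.backoff       # exponential back-off for an idle slave
    return polled

slaves = [Slave("rt1", real_time=True), Slave("be1"), Slave("be2")]
for cycle in range(6):
    print(cycle, poll_cycle(slaves, has_data=lambda s: s.name in ("rt1", "be1")))
```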

Keywords: Piconet, Medium Access Control, Polling algorithm, Scheduling, QoS, Time Division Duplex (TDD).

8120 Agreement Options on Multi Criteria Group Decision and Negotiation

Authors: Christiono Utomo, Arazi Idrus, Madzlan Napiah, Mohd. Faris Khamidi

Abstract:

This paper presents a conceptual model of agreement options in negotiation support for civil engineering decisions. The negotiation support facilitates the solving of group-choice decision-making problems in civil engineering in order to reduce the impact of the mud volcano disaster in Sidoarjo, Indonesia. The approach is based on the application of the analytic hierarchy process (AHP) method for multi-criteria decisions on a three-level decision hierarchy. Decisions for reducing the impact are very complicated since many parties are involved in a critical time. Where a number of stakeholders are involved in choosing a single alternative from a set of solution alternatives, there are different concerns caused by differing stakeholder preferences, experiences, and backgrounds. Therefore, group-choice decision support is required to enable each stakeholder to evaluate and rank the solution alternatives before engaging in negotiation with the other stakeholders. Such civil engineering solutions, as alternatives, are referred to as agreement options; they are determined by identifying the possible stakeholder choices, followed by determining the optimal solution for each group of stakeholders. The determination of the optimal solution is based on a game-theoretic model of an n-person general-sum game with complete information that involves forming coalitions among stakeholders.
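
A minimal AHP sketch is given below: criterion weights are derived from a pairwise comparison matrix via the principal eigenvector and the consistency ratio is checked; the judgments are illustrative, not the paper's data.

```python
# Minimal AHP sketch (illustrative pairwise judgments, not the paper's data):
# derive criterion weights from a pairwise comparison matrix via the principal
# eigenvector and check the consistency ratio.
import numpy as np

# Saaty-scale pairwise comparisons of three criteria (e.g. cost, safety, time).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
cr = ci / 0.58                            # Saaty's random index for n = 3 is 0.58
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```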

Keywords: Agreement options, AHP, agent, negotiation, multicriteria, game theory, and coalition.

8119 From Industry 4.0 to Agriculture 4.0: A Framework to Manage Product Data in Agri-Food Supply Chain for Voluntary Traceability

Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli

Abstract:

The agri-food value chain involves various stakeholders with different roles. All of them abide by national and international rules and leverage marketing strategies to advance their products. Food products and the related processing phases carry with them a large amount of data that are often not used to inform the final customer. Some of these data, if properly identified and used, can enhance the single company and/or the whole supply chain, creating a match between marketing techniques and voluntary traceability strategies. Moreover, buying models have recently changed: customers are attentive to wellbeing and food quality. Food citizenship and food democracy were born, leveraging on transparency, sustainability and food information needs. The Internet of Things (IoT) and Analytics, some of the innovative technologies of Industry 4.0, have a significant impact on the market and will act as a main thrust towards a genuine ‘4.0 change’ for agriculture. However, realizing a traceability system is not simple because of the complexity of the agri-food supply chain, the many actors involved, the different business models, the environmental variations impacting products and/or processes, and extraordinary climate changes. In order to support companies engaged in a traceability path, a Framework to Manage Product Data in the Agri-Food Supply Chain for Voluntary Traceability was conceived, starting from business model analysis and the related business processes. Studying each process task and leveraging modeling techniques leads to identifying the information held by the different actors along the agri-food supply chain. IoT technologies for data collection and Analytics techniques for data processing supply information useful for increasing intra-company efficiency and competitiveness in the market. All the information recovered can be presented through IT solutions and mobile applications, making it accessible to the company, the entire supply chain and the consumer, with a view to guaranteeing transparency and quality.

Keywords: Agriculture 4.0, agri-food supply chain, Industry 4.0, voluntary traceability.

8118 Data-Driven Decision-Making in Digital Entrepreneurship

Authors: Abeba Nigussie Turi, Xiangming Samuel Li

Abstract:

Data-driven business models are more typical of established businesses than of early-stage startups that strive to penetrate a market. This paper provides an extensive discussion of the principles of data analytics for early-stage digital entrepreneurial businesses. We develop a data-driven decision-making (DDDM) framework that applies to startups prone to multifaceted barriers in the form of poor data access and technical and financial constraints, to name a few. The startup DDDM framework proposed in this paper is novel in that it encompasses startup data analytics enablers and metrics aligned with startups' business models, ranging from customer-centric product development to servitization, which is the future of modern digital entrepreneurship.

Keywords: Startup data analytics, data-driven decision-making, data acquisition, data generation, digital entrepreneurship.

8117 Classifying Bio-Chip Data using an Ant Colony System Algorithm

Authors: Minsoo Lee, Yearn Jeong Kim, Yun-mi Kim, Sujeung Cheong, Sookyung Song

Abstract:

Bio-chips are used for experiments on genes and contain various information such as genes, samples and so on. Two-dimensional bio-chips, in which one axis represents genes and the other represents samples, are widely used these days. Instead of experimenting with real genes, which costs a great deal of money and time, bio-chips are used for biological experiments, and extracting data from them with high accuracy and finding patterns or useful information in such data is very important. Bio-chip analysis systems extract data from various kinds of bio-chips and mine the data in order to obtain useful information. One of the commonly used mining methods is classification. The algorithm used to classify the data can vary depending on the data types, numerical characteristics and so on. Considering that bio-chip data are extremely large, an algorithm that imitates the ecosystem, such as the ant algorithm, is suitable for classification. This paper focuses on finding classification rules from bio-chip data using the Ant Colony algorithm, which imitates the ecosystem. The developed system takes into consideration the accuracy of the discovered rules when applying them to the bio-chip data in order to predict the classes.

Keywords: Ant Colony System, DNA chip data, Classification.

8116 CLASS, A New Tool for Nuclear Scenarios: Description and First Application

Authors: B. Mouginot, J. B. Clavel, N. Thiolliere

Abstract:

The presented work is motivated by a French law regarding nuclear waste management. In order to avoid the limitations that come with the usage of existing scenario codes such as COSI, VISION or FAMILY, the Core Library for Advanced Scenario Simulation (CLASS) is being developed. CLASS is an open source tool which allows any user to simulate an electronuclear scenario. The main CLASS asset is the possibility to include any type of reactor, even a completely new concept, through the generation of its ASCII evolution database. In the present article, the CLASS working basis is presented as well as a simple example in order to show its potential. In the considered example, the effect of transmutation on the minor actinide inventory produced by PWR reactors is assessed.

Keywords: Electronuclear scenario, reactor, simulation, nuclear waste.

8115 SDVAR Algorithm for Detecting Fraud in Telecommunications

Authors: Fatimah Almah Saaid, Darfiana Nur, Robert King

Abstract:

This paper presents a procedure for estimating a VAR model using the Sequential Discounting VAR (SDVAR) algorithm for online model learning, in order to detect fraudulent acts using telecommunications call detail records (CDR). The volatility of the VAR is observed, allowing for non-linearity, outliers and change points, based on the work of [1]. This paper extends that procedure from univariate to multivariate time series. A simulation and a case study on detecting telecommunications fraud using CDR illustrate the use of the algorithm in the bivariate setting.
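
A minimal sketch of sequentially discounted online VAR estimation is given below, implemented as recursive least squares with a forgetting factor; it illustrates the idea rather than the exact SDVAR algorithm, and the data are placeholders.

```python
# Minimal sketch of sequentially discounted online estimation of a VAR(1) model
# (recursive least squares with a forgetting factor; illustrative, not the exact
# SDVAR algorithm of the paper). Large one-step-ahead errors flag candidate fraud.
import numpy as np

def sdvar_online(Y, lam=0.97):
    """Y: (T, k) multivariate series, e.g. per-hour call counts and durations."""
    T, k = Y.shape
    A = np.zeros((k, k))            # VAR(1) coefficient matrix, updated online
    P = np.eye(k) * 1000.0          # inverse information matrix
    scores = np.zeros(T)
    for t in range(1, T):
        x, y = Y[t - 1], Y[t]
        err = y - A @ x                         # one-step prediction error
        scores[t] = float(err @ err)            # anomaly score at time t
        K = P @ x / (lam + x @ P @ x)           # gain vector
        A += np.outer(err, K)                   # discounted coefficient update
        P = (P - np.outer(K, x @ P)) / lam
    return A, scores

rng = np.random.default_rng(0)
Y = rng.normal(size=(200, 2)).cumsum(axis=0)    # placeholder bivariate CDR features
A_hat, scores = sdvar_online(Y)
print("most anomalous time index:", int(scores.argmax()))
```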

Keywords: Telecommunications Fraud, SDVAR Algorithm, Multivariate time series, Vector Autoregressive, Change points.

8114 Trust and Reliability for Public Sector Data

Authors: Klaus Stranacher, Vesna Krnjic, Thomas Zefferer

Abstract:

The public sector holds large amounts of data in various areas such as social affairs, economy, or tourism. Various initiatives such as Open Government Data or the EU Directive on public sector information aim to make these data available to public and private service providers. Requirements for the provision of public sector data are defined by legal and organizational frameworks. Surprisingly, the defined requirements hardly cover security aspects such as integrity or authenticity. In this paper we discuss the importance of these missing requirements and present a concept to assure the integrity and authenticity of provided data based on electronic signatures. We show that our concept is perfectly suitable for the provisioning of unaltered data and that it can also be extended, by incorporating redactable signatures, to data that need to be anonymized before provisioning. Our proposed concept enhances the trust and reliability of provided public sector data.

Keywords: Trusted Public Sector Data, Integrity, Authenticity, Reliability, Redactable Signatures.

8113 The CommonSense Platform for Conducting Multiple Participant Field-Experiments Using Mobile-Phones

Authors: Y. Hoffner, Y. Rusho, S. Rubach, S. Abargil

Abstract:

This paper presents CommonSense, a platform that provides researchers with the infrastructure and tools enabling the efficient and smooth creation, execution and processing of multiple-participant experiments taking place outside the laboratory environment. The platform accompanies researchers throughout the life cycle of an experiment, from its inception, through its execution, to its processing and termination. Our approach is based on providing a comprehensive solution that puts emphasis on supporting the entire life cycle of an experiment, starting from its definition, through the setting up and configuration of the platform, to the management of the experiment itself and its post-processing. Some of the components that support those processes are constructed and configured automatically from the experiment definition.

Keywords: Mobile applications, mobile experiments, web experiments, software system architecture.

8112 Bitrate Reduction Using FMO for Video Streaming over Packet Networks

Authors: Le Thanh Ha, Hye-Soo Kim, Chun-Su Park, Seung-Won Jung, Sung-Jea Ko

Abstract:

Flexible macroblock ordering (FMO), adopted in the H.264 standard, allows all macroblocks (MBs) in a frame to be partitioned into separate groups of MBs called Slice Groups (SGs). FMO can not only support error resilience but also control the size of video packets for different network types. However, it is well known that the number of bits required for encoding a frame increases when FMO is adopted. In this paper, we propose a novel algorithm that can reduce the bitrate overhead caused by utilizing FMO. In the proposed algorithm, all MBs are grouped into SGs based on the similarity of their transform coefficients. Experimental results show that our algorithm reduces the bitrate compared with conventional FMO.
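
The grouping idea can be illustrated with a minimal sketch that clusters macroblocks by their low-frequency DCT coefficients; the block size, feature choice and cluster count are assumptions, not the paper's algorithm details.

```python
# Minimal sketch (illustrative, not the paper's algorithm): group 16x16
# macroblocks into slice groups by clustering their DCT coefficient vectors, so
# that MBs with similar transform content end up in the same slice group.
import numpy as np
from scipy.fft import dctn
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(144, 176)).astype(np.float64)   # QCIF-sized frame

MB = 16
n_rows, n_cols = frame.shape[0] // MB, frame.shape[1] // MB
features = []
for r in range(n_rows):
    for c in range(n_cols):
        block = frame[r * MB:(r + 1) * MB, c * MB:(c + 1) * MB]
        coeffs = dctn(block, norm="ortho")
        features.append(np.abs(coeffs[:4, :4]).ravel())   # low-frequency coefficients

n_slice_groups = 3
labels = KMeans(n_clusters=n_slice_groups, n_init=10, random_state=0).fit_predict(features)
slice_group_map = np.array(labels).reshape(n_rows, n_cols)
print(slice_group_map)
```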

Keywords: Data Partition, Entropy Coding, Greedy Algorithm, H.264/AVC, Slice Group.

8111 EnArgus: A Knowledge-Based Search Application for Energy Research Projects

Authors: Frederike Ohrem, Lukas Sikorski, Bastian Haarmann

Abstract:

Users of a semantic search application often face the problem that they do not find appropriate terms for their search. This holds especially if the data to be searched come from a technical field in which the user does not have expertise. In order to support the user in finding the results he seeks, we developed a domain-specific ontology and implemented it in a search application. The ontology serves as a knowledge base, suggesting technical terms to the user which he can add to his query. In this paper, we present the search application and the underlying ontology as well as the EnArgus project in which the application was developed.
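
A minimal sketch of ontology-backed term suggestion is given below; the toy ontology and energy-research terms are invented for illustration and are not the EnArgus ontology.

```python
# A minimal sketch of ontology-backed term suggestion (toy ontology with
# invented energy-research terms, not the EnArgus ontology): for a query,
# suggest related narrower and synonymous technical terms the user can add.
ontology = {
    "solar energy": {
        "narrower": ["photovoltaics", "solar thermal collector"],
        "synonyms": ["solar power"],
    },
    "photovoltaics": {
        "narrower": ["thin-film solar cell", "perovskite solar cell"],
        "synonyms": ["PV"],
    },
}

def suggest_terms(query, depth=1):
    """Collect related terms for every ontology concept mentioned in the query."""
    suggestions = set()
    for concept, rels in ontology.items():
        if concept in query.lower():
            suggestions.update(rels["narrower"] + rels["synonyms"])
            if depth > 1:                      # optionally follow narrower terms
                for term in rels["narrower"]:
                    child = ontology.get(term, {})
                    suggestions.update(child.get("narrower", []) + child.get("synonyms", []))
    return sorted(suggestions)

print(suggest_terms("efficiency of solar energy systems", depth=2))
```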

Keywords: Information system, knowledge representation, ontology, semantic search.
