Search results for: Data specification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7499

7439 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups

Authors: Naushad Mamode Khan

Abstract:

The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs), as it handles equi-, over- and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify, which restricts likelihood-based estimation. The joint generalized quasi-likelihood approach (GQL-I) was considered instead, but it is computationally intensive and may even fail to estimate the regression effects because of a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments; it is shown to yield estimates as efficient as those of GQL-I while being far more computationally stable.
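
As a point of reference, the generic GQL estimating equation used in the longitudinal count-data literature is sketched below; the single-score construction of GQL-III is given in the paper itself, so this display is the standard form assumed for orientation, not the paper's formula.

\[
\sum_{i=1}^{K} D_i^{\top}\, \Sigma_i^{-1}\,\big(y_i - \mu_i(\beta)\big) = 0,
\qquad
D_i = \frac{\partial \mu_i(\beta)}{\partial \beta^{\top}},
\]

where \(y_i\) is the vector of repeated CMP counts for subject \(i\), \(\mu_i(\beta)\) its mean vector under the covariate specification, and \(\Sigma_i\) the working covariance whose frequent ill-conditioning motivates GQL-III. The root \(\hat{\beta}\) is typically found by Newton-Raphson iteration.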

Keywords: Longitudinal, Com-Poisson, Ill-conditioned, INAR(1), GLMs, GQL.

7438 A Discrete-Event-Simulation Approach for Logistic Systems with Real Time Resource Routing and VR Integration

Authors: Gerrit Alves, Jürgen Roßmann, Roland Wischnewski

Abstract:

Today, transport and logistic systems are often tightly integrated into production. Lean production and just-in-time delivery impose multiple constraints that have to be fulfilled. As transport networks have often evolved over time, they are very expensive to change. This paper describes a discrete-event-simulation system that simulates transportation models using real-time resource routing and collision avoidance. It allows users to specify their own control algorithms and to validate new strategies. The simulation is integrated into a virtual reality (VR) environment and can be displayed in 3-D to show its progress; simulation elements can be selected through VR metaphors. All data gathered during the simulation can be presented afterwards as a detailed summary, and the included cost-benefit calculation can help to optimize the financial outcome. The operation of this approach is demonstrated by the example of a timber harvest simulation.
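
For readers unfamiliar with the discrete-event paradigm, the sketch below shows the time-ordered event-queue loop at the heart of any such simulator. The real system adds resource routing, collision avoidance and the VR front end; all event names here are hypothetical.

```python
import heapq

# Minimal discrete-event-simulation core: a time-ordered event queue.
events = []   # heap of (time, sequence_no, description)
seq = 0

def schedule(t, what):
    global seq
    heapq.heappush(events, (t, seq, what))
    seq += 1

schedule(0.0, "truck 1 departs depot")
schedule(4.5, "truck 1 reaches junction")   # routing decision point
schedule(4.5, "truck 2 reaches junction")   # potential conflict -> avoidance

while events:
    now, _, what = heapq.heappop(events)    # advance the simulation clock
    print(f"t={now:4.1f}  {what}")          # here the 3-D VR view would update
```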

Keywords: Discrete-Event-Simulation, Logistic, Simulation, Virtual Reality.

7437 Adaptation of State/Transition-Based Methods for Embedded System Testing

Authors: Abdelaziz Guerrouat, Harald Richter

Abstract:

In this paper, test generation methods and appropriate fault models for the testing and analysis of embedded systems described as (extended) finite state machines ((E)FSMs) are presented. Compared to simple FSMs, EFSMs specify not only the control flow but also the data flow; we therefore define a two-level fault model to cover both aspects. The goal of this paper is to reuse well-known FSM-based test generation methods to automate embedded system testing. These methods have been widely used in the testing and validation of protocols and communicating systems. (E)FSM-based specification and testing is particularly advantageous because (E)FSMs underlie the formal semantics of already standardised formal description techniques (FDTs), which are popular in the design of hardware and software systems.
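
As an illustration of the kind of FSM-based test generation being reused, here is a minimal transition-cover construction: reach each state by a shortest input sequence, then exercise each outgoing transition. The toy machine and input names are hypothetical; the paper's (E)FSM methods and two-level fault model go well beyond this sketch.

```python
from collections import deque

# Toy FSM as {state: {input: next_state}}.
fsm = {"idle": {"start": "run"},
       "run": {"pause": "idle", "stop": "done"},
       "done": {}}

def transition_cover(fsm, init):
    # BFS shortest access sequence to each state, then append each input,
    # so every transition is exercised by at least one test sequence.
    paths, queue = {init: []}, deque([init])
    while queue:
        s = queue.popleft()
        for inp, nxt in fsm[s].items():
            if nxt not in paths:
                paths[nxt] = paths[s] + [inp]
                queue.append(nxt)
    return [paths[s] + [inp] for s in fsm for inp in fsm[s]]

for test in transition_cover(fsm, "idle"):
    print(test)   # each sequence drives the machine through one transition
```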

Keywords: Formal methods, testing and validation, finite state machines, formal description techniques.

7436 Food Quality Labels and their Perception by Consumers in the Czech Republic

Authors: Sarka Velcovska

Abstract:

The paper deals with quality labels used in the food products market, especially labels of quality, labels of origin, and labels of organic farming. The aim of the paper is to identify how these labels are perceived by consumers in the Czech Republic. The first part covers the definition and specification of the food quality labels that are relevant in the Czech Republic. The second part discusses the results of marketing research. Data were collected through personal questioning. Empirical findings from 150 respondents relate to consumer awareness and perception of the national and European food quality labels used in the Czech Republic, attitudes towards purchases of labelled products, and interest in information about the labels. Statistical methods, specifically Pearson's chi-square test of independence, the coefficient of contingency, and the coefficient of association, are used to determine whether significant differences exist among selected demographic categories of Czech consumers.
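
As a sketch of the statistical step, the fragment below runs Pearson's chi-square test of independence and derives the contingency coefficient for a hypothetical awareness-by-age table; the counts are invented for illustration and are not the survey's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = age group, columns = label awareness.
observed = np.array([
    [34, 16],   # 18-35: aware / not aware
    [41, 19],   # 36-55
    [22, 18],   # 56+
])

chi2, p, dof, expected = chi2_contingency(observed)
n = observed.sum()

# Pearson's contingency coefficient: C = sqrt(chi2 / (chi2 + n))
contingency_coef = np.sqrt(chi2 / (chi2 + n))

print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
print(f"contingency coefficient = {contingency_coef:.3f}")
```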

Keywords: Food quality labels, quality labels awareness, quality labels perception, marketing research.

7435 A Temporal QoS Ontology for ERTMS/ETCS

Authors: Marc Sango, Olimpia Hoinaru, Christophe Gransart, Laurence Duchien

Abstract:

Ontologies offer a means for representing and sharing information in many domains, particularly complex ones. For example, they can be used to represent and share the information of a System Requirement Specification (SRS) of a complex system, such as the SRS of ERTMS/ETCS, which is written in natural language. Since this system is a real-time, critical system, generic ontologies such as OWL, as well as generic ERTMS ontologies, provide minimal support for modeling the temporal information omnipresent in these SRS documents. One of the challenges is therefore to enable the representation of dynamic features evolving in time within a generic ontology with minimal redesign of it. Separating temporal information from other information can help to predict system runtime operation and to properly design and implement it. In addition, it is helpful to provide reasoning and querying techniques over the temporal information represented in the ontology, in order to detect potential temporal inconsistencies. To address this challenge, we propose a lightweight 3-layer temporal Quality of Service (QoS) ontology for representing, reasoning and querying over temporal and non-temporal information in a complex domain ontology. Representing QoS entities in separate layers clarifies the distinction between the non-QoS entities and the QoS entities of an ontology. The upper, generic layer of the proposed ontology provides an intuitive knowledge of domain components, especially ERTMS/ETCS components. The separation of the intermediate QoS layer from the lower QoS layer allows us to focus on specific QoS characteristics, such as temporal or integrity characteristics. In this paper, we focus on temporal information that can be used to predict system runtime operation. To evaluate our approach, an example of the proposed domain ontology for the handover operation, as well as a reasoning rule over temporal relations in this domain-specific ontology, is presented.

Keywords: System Requirement Specification, ERTMS/ETCS, Temporal Ontologies, Domain Ontologies.

7434 Evolutionary Decision Trees and Software Metrics for Module Defects Identification

Authors: Monica Chiş

Abstract:

A software metric is a measure of some property of a piece of software or its specification. The aim of this paper is to present an application of evolutionary decision trees in software engineering, to classify software modules according to whether or not they have one or more reported defects. To this end, several metrics are used to detect the classes of modules with and without defects.
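
The underlying classification task can be illustrated with an ordinary greedy decision tree standing in for the evolutionary induction studied in the paper; the metric values and defect labels below are hypothetical.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Hypothetical module metrics: [lines_of_code, cyclomatic_complexity, num_operands]
X = [[120, 4, 80], [450, 19, 300], [60, 2, 35], [800, 31, 520],
     [200, 7, 140], [950, 40, 610], [90, 3, 50], [400, 15, 260]]
y = [0, 1, 0, 1, 0, 1, 0, 1]   # 1 = module has one or more reported defects

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Greedy CART stands in here for the evolutionary tree induction of the paper.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```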

Keywords: Evolutionary decision trees, decision trees, software metrics.

7433 Big Data: Big Challenges to Privacy and Data Protection

Authors: Abu Bakar Munir, Siti Hajar Mohd Yasin, Firdaus Muhammad-Sukki

Abstract:

This paper seeks to analyse the benefits of big data and, more importantly, the challenges it poses to privacy and data protection. First, the nature of big data is briefly described, followed by a presentation of its present-day potential. Afterwards, the issues of privacy and data protection are highlighted, and the challenges big data raises for them are discussed. In conclusion, the paper puts forward the debate on the adequacy of the existing legal framework for protecting personal data in the era of big data.

Keywords: Big data, data protection, information, privacy.

7432 Result Validation Analysis of Steel Testing Machines

Authors: Wasiu O. Ajagbe, Habeeb O. Hamzat, Waris A. Adebisi

Abstract:

Structural failures occur for a number of reasons, including under-design, poor workmanship, substandard materials, and misleading laboratory tests. Reinforcing steel bar is an important construction material, so its properties must be accurately known before it is used in construction. This involves carrying out mechanical tests prior to design and during construction to ascertain correlation, using a steel testing machine, which is often not readily available near the project location. This study was conducted to determine the reliability of reinforcing steel testing machines. A reconnaissance survey was conducted to identify laboratories where yield and ultimate tensile strength tests can be carried out. Six laboratories were identified within Ibadan and its environs, but only four were functional at the time of the study. Three steel samples were tested for yield and tensile strength, using a steel testing machine, at each of the four laboratories (LM, LO, LP and LS). The yield and tensile strength results obtained from the laboratories were compared with the manufacturer's specification using a reliability analysis programme. A structured questionnaire was administered to the operators in each laboratory to assess their impact on the test results. The average values of the manufacturer's tensile and yield strengths are 673.7 N/mm2 and 559.7 N/mm2 respectively. The tensile strengths obtained from the four laboratories LM, LO, LP and LS are 579.4, 652.7, 646.0 and 649.9 N/mm2 respectively, while their yield strengths are 453.3, 597.0, 550.7 and 564.7 N/mm2 respectively. The minimum tensile-to-yield strength ratio is 1.08 in BS 4449:2005 and 1.15 in ASTM A615. The tensile-to-yield strength ratios from the four laboratories are 1.28, 1.09, 1.17 and 1.15 for LM, LO, LP and LS respectively, so the results obtained from all the laboratories meet the requirements of the codes used for the test. The reliability test shows varying levels of agreement between the manufacturer's specification and the results obtained from the laboratories: three of the laboratories, LO, LS and LP, have high reliability values of 0.798, 0.866 and 0.712 respectively, while the fourth laboratory, LM, has a reliability value of 0.100. Steel tests should be carried out in a laboratory using the same code under which the structural design was carried out, and more emphasis should be laid on the importance of code provisions.
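
As a worked instance of the code check reported above, laboratory LM's ratio follows directly from its measured strengths:

\[
\frac{f_u}{f_y}\bigg|_{LM} = \frac{579.4}{453.3} \approx 1.28 \;>\; 1.15\ (\text{ASTM A615}) \;>\; 1.08\ (\text{BS 4449:2005}).
\]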

Keywords: Reinforcing steel bars, reliability analysis, tensile strength, universal testing machine, yield strength.

7431 Functionality of Negotiation Agent on Value-based Design Decision

Authors: Arazi Idrus, Christiono Utomo

Abstract:

This paper presents the functionality of a negotiation agent for value-based design decisions. The functionality is based on the characteristics of the system and its goal specification. A Prometheus Design Tool model was used to develop the system. Group functionality is the attribute of the negotiation agents, which comprise a coordinator agent and a decision-maker agent. Results of testing the system on a building system selection in a value-based decision environment are also presented.

Keywords: Functionality, negotiation agent, value-based decision.

7430 An Automatic Bayesian Classification System for File Format Selection

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach to classifying unstructured format descriptions for the identification of file formats. The main contribution of this work is the employment of data mining techniques to support file format selection using just the unstructured text description that comprises the most important format features for a particular organisation. The file format identification method employs a file format classifier and associated configurations to provide digital preservation experts with an estimation of the required file format. Our goal is to make use of a format specification knowledge base, aggregated from different Web sources, in order to select a file format for a particular institution. Using the naive Bayes method, the decision support system recommends a file format for the expert's institution. The proposed methods facilitate the selection of file formats and improve the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives, using domain expert knowledge and specifications of file formats. To facilitate decision making, the aggregated information about the file formats is presented as a file format vocabulary that comprises the most common terms characteristic of all the researched formats. The goal is to suggest a particular file format based on this vocabulary for analysis by an expert. A sample file format calculation and the calculation results, including probabilities, are presented in the evaluation section.
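
A minimal sketch of the naive Bayes recommendation step, assuming a bag-of-words representation of the unstructured descriptions; the descriptions, format names and query below are hypothetical stand-ins for the aggregated knowledge base.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical unstructured format descriptions aggregated from Web sources.
descriptions = [
    "lossless raster image wide support open specification",
    "lossy raster image photographic compression ubiquitous",
    "page layout fixed document print archival iso standard",
    "plain text line based universal simple",
]
formats = ["PNG", "JPEG", "PDF/A", "TXT"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(descriptions, formats)

# Recommend a format for a new institutional requirement description.
query = ["archival document fixed layout iso standard"]
print(model.predict(query)[0])               # most probable format
print(model.predict_proba(query).round(3))   # class probabilities
```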

Keywords: Data mining, digital libraries, digital preservation, file format.

7429 Complexity of Component-based Development of Embedded Systems

Authors: M. Zheng, V. S. Alagar

Abstract:

The paper discusses the complexity of component-based development (CBD) of embedded systems. Although CBD has its merits, it must be augmented with methods to control the complexities that arise from resource constraints, timeliness requirements, and the run-time deployment of components in embedded system development. Software component specification, system-level testing, and run-time reliability measurement are some ways to control this complexity.

Keywords: Components, embedded systems, complexity, software development, traffic controller system.

7428 Robust Fuzzy Observer Design for Nonlinear Systems

Authors: Michal Polanský, C. Ardil

Abstract:

This paper presents a new method for designing fuzzy observers for Takagi-Sugeno systems. The method is based on linear matrix inequalities (LMIs) and allows an H∞ constraint to be inserted into the design procedure. The speed of estimation can be tuned by specifying a decay rate for the observer closed-loop system. We also discuss the influence of parametric uncertainties on the stability of the output control system.
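
For context, a Takagi-Sugeno fuzzy observer typically takes the following parallel-distributed form, with the gains L_i obtained from the LMI conditions; this is the standard structure (with a common output matrix C assumed), given here for orientation rather than quoted from the paper:

\[
\dot{\hat{x}}(t) = \sum_{i=1}^{r} h_i\big(z(t)\big)\Big(A_i \hat{x}(t) + B_i u(t) + L_i\big(y(t) - C\hat{x}(t)\big)\Big),
\]

where the \(h_i\) are the normalized rule activations. A prescribed decay rate \(\alpha\) typically enters the LMIs through a condition of the form \(\dot{V} \le -2\alpha V\) on the Lyapunov function of the estimation error, which is how the speed of estimation is tuned.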

Keywords: H∞ norm, Linear Matrix Inequalities, Observers, Takagi-Sugeno Systems, Parallel Distributed Compensation

7427 Calculus-based Runtime Verification

Authors: Xuan Qi, Changzhi Zhao

Abstract:

In this paper, a uniform calculus-based approach is provided for synthesizing monitors that check, at runtime, correctness properties specified in a large variety of logics, including future- and past-time logics, interval logics, state machines and parameterized temporal logics. We present a calculus mechanism to synthesize monitors from the logical specification for the incremental analysis of execution traces during testing and real runs. The monitor detects both good and bad prefixes of a particular kind, namely those that are informative for the property under investigation. We elaborate on the calculus procedure that yields the monitors.
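
To make the monitoring idea concrete, here is a hand-written incremental monitor for the finite-trace property "every req is eventually followed by an ack", reporting informative prefixes as the trace unfolds. The paper's contribution is to synthesize such monitors from the logic specification via the calculus, so this hand-coded version is purely illustrative.

```python
class ReqAckMonitor:
    def __init__(self):
        self.pending = 0      # requests not yet acknowledged
        self.verdict = None   # None = inconclusive so far

    def step(self, event):
        # Incremental analysis: one event of the execution trace at a time.
        if event == "req":
            self.pending += 1
        elif event == "ack":
            if self.pending == 0:
                self.verdict = "bad prefix: ack without request"
            else:
                self.pending -= 1
        return self.verdict

    def end_of_trace(self):
        if self.verdict is None:
            self.verdict = ("satisfied" if self.pending == 0
                            else "violated: unacknowledged request at trace end")
        return self.verdict

m = ReqAckMonitor()
for e in ["req", "ack", "req"]:
    m.step(e)
print(m.end_of_trace())   # -> violated: unacknowledged request at trace end
```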

Keywords: calculus, Eagle logic, monitor synthesis, runtime verification

7426 Interoperability in Component Based Software Development

Authors: M. Madiajagan, B. Vijayakumar

Abstract:

Interoperability is the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application, and data compatibility layers. There has been considerable work in industry on the development of component interoperability models, such as CORBA, (D)COM and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate the reuse of off-the-shelf components. The focus of these models is syntactic interface specification, component packaging, inter-component communication, and bindings to a runtime environment. What these models lack is a consideration of architectural concerns: specifying systems of communicating components, explicitly representing the loci of component interaction, and exploiting architectural styles that provide well-understood global design solutions. The development of complex business applications is now focused on assembling components available on a local area network or on the Internet. These components must be localized and identified in terms of available services and communication protocols before any request is made. The first part of the article introduces the base concepts of components and middleware, while the following sections describe the different up-to-date models of communication and interaction, and the last section shows how the different models can communicate among themselves.

Keywords: Interoperability, component packaging, communication technology, heterogeneous platform, component interface, middleware.

7425 Specifying a Timestamp-based Protocol For Multi-step Transactions Using LTL

Authors: Rafat Alshorman, Walter Hussak

Abstract:

Most concurrent transactional protocols consider serializability as the correctness criterion for transaction execution. Usually, the proof of serializability relies on mathematical proofs for a fixed, finite number of transactions. In this paper, we introduce a protocol that deals with an infinite number of transactions iterated infinitely often. We specify the serializability of the transactions, and the protocol itself, using a specification language based on temporal logics. It is worthwhile to use temporal logics such as LTL (Linear-time Temporal Logic) to specify transactions, as this enables fully automatic verification with model checkers.
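
One common way to phrase a non-interleaving (serial execution) requirement for two transactions T_i and T_j in LTL is sketched below, with b_i and e_i marking the begin and end of T_i; the paper's actual specification of serializability for infinitely iterated multi-step transactions is more elaborate, so this is an assumed illustration only.

\[
G\big(b_i \rightarrow (\neg b_j \; U \; e_i)\big) \;\wedge\; G\,F\, e_i
\]

The first conjunct forbids T_j from starting while T_i is active; the second expresses that each iteration of T_i eventually completes, which matters precisely because the transactions are iterated infinitely often.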

Keywords: Multi-step transactions, LTL specifications, Model Checking.

7424 Data Preprocessing for Supervised Learning

Authors: S. B. Kotsiantis, D. Kanellopoulos, P. E. Pintelas

Abstract:

Many factors affect the success of Machine Learning (ML) on a given task. The representation and quality of the instance data come first and foremost: if there is much irrelevant and redundant information, or noisy and unreliable data, knowledge discovery during the training phase becomes more difficult. It is well known that data preparation and filtering steps take a considerable amount of processing time in ML problems. Data pre-processing includes data cleaning, normalization, transformation, feature extraction and selection, etc., and its product is the final training set. It would be convenient if a single sequence of pre-processing algorithms had the best performance on every data set, but this is not the case. We therefore present the best-known algorithms for each step of data pre-processing, so that the best performance can be achieved for a given data set.
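
A typical single sequence of pre-processing steps, wired into one pipeline; as the paper stresses, no fixed sequence is best for every data set, so the particular steps and parameters here (median imputation, standardization, top-5 ANOVA features) are merely one plausible choice on synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=12, n_informative=5,
                           random_state=0)

pipe = Pipeline([
    ("clean", SimpleImputer(strategy="median")),    # data cleaning
    ("scale", StandardScaler()),                    # normalization
    ("select", SelectKBest(f_classif, k=5)),        # feature selection
    ("model", LogisticRegression(max_iter=1000)),   # downstream learner
])

pipe.fit(X, y)
print("training accuracy:", round(pipe.score(X, y), 3))
```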

Keywords: Data mining, feature selection, data cleaning.

7423 Applications of Big Data in Education

Authors: Faisal Kalota

Abstract:

Big Data and analytics have gained huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA), which may allow academic institutions to better understand learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in education. Additionally, it discusses some of the concerns related to Big Data and current research trends. While Big Data can provide big benefits, it is important that institutions understand their own needs, infrastructure, resources, and limitations before jumping on the Big Data bandwagon.

Keywords: Analytics, Big Data in Education, Hadoop, Learning Analytics.

7422 Improving the Reusability and Interoperability of E-Learning Material

Authors: D. Del Corso, A. Tartaglia, E. Tresso, M. Cambiolo, L. Forno, G. Morrone

Abstract:

A key requirement for e-learning materials is reusability and interoperability, that is, the possibility to use at least part of the contents in different courses and to deliver them through different platforms. These features make it possible to limit the cost of new packages, but they require material to be developed according to proper specifications. SCORM (Sharable Content Object Reference Model) is a set of guidelines suitable for this purpose. A specific adaptation project has been started to make it possible to reuse existing materials. The paper describes the main characteristics of the SCORM specification and the procedure used to modify the existing material.

Keywords: SCORM, e-learning, standard, educational effectiveness, assessment, methodology, open access.

7421 Research of Data Cleaning Methods Based on Dependency Rules

Authors: Yang Bao, Shi Wei Deng, Wang Qun Lin

Abstract:

This paper introduces the concept and principles of data cleaning, analyzes the types and causes of dirty data, and proposes the key steps of a typical cleaning process, leading to a data cleaning framework with good scalability and versatility. For data with attribute dependency relations, it designs several violation-data discovery algorithms, expressed as formal formulas, which can find data inconsistent with a conditional attribute dependency across all target columns, whether the data is structured (SQL) or unstructured (NoSQL), and it presents six data cleaning methods based on these algorithms.
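
The core of dependency-based violation discovery can be sketched in a few lines: for a functional dependency X -> Y, any group of records agreeing on X but disagreeing on Y is inconsistent. The attribute names and records below are hypothetical, and the paper's algorithms additionally handle conditional dependencies and NoSQL sources.

```python
from collections import defaultdict

# Hypothetical records; the second one violates the dependency zip -> city.
records = [
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "NYC"},
    {"zip": "60601", "city": "Chicago"},
]

def fd_violations(rows, lhs, rhs):
    # Group dependent values by determinant value; any determinant that
    # maps to more than one dependent value marks a violation group.
    groups = defaultdict(set)
    for row in rows:
        groups[row[lhs]].add(row[rhs])
    return {k: v for k, v in groups.items() if len(v) > 1}

print(fd_violations(records, "zip", "city"))   # {'10001': {'New York', 'NYC'}}
```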

Keywords: Data cleaning, dependency rules, violation data discovery, data repair.

7420 Coalescing Data Marts

Authors: N. Parimala, P. Pahwa

Abstract:

OLAP uses multidimensional structures to provide access to data for analysis. Traditionally, OLAP operations focus on retrieving data from a single data mart; an exception is the drill-across operator, which, however, is restricted to retrieving facts on the common dimensions of multiple data marts. Our concern is to define further operations for retrieving data from multiple data marts. Towards this, we have defined six operations that coalesce data marts, considering the common as well as the non-common dimensions of the data marts.
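
The flavor of these operations can be sketched with two toy fact tables that share only the month dimension; the classic drill-across joins on the common dimension, while one coalescing-style combination shown here (an assumption for illustration, not the paper's exact operator definitions) retains the non-common dimensions as well.

```python
import pandas as pd

# Hypothetical star-schema fact tables from two data marts.
sales = pd.DataFrame({"month": ["Jan", "Feb"], "region": ["N", "S"],
                      "revenue": [100, 150]})
shipping = pd.DataFrame({"month": ["Jan", "Feb"], "carrier": ["A", "B"],
                         "cost": [20, 35]})

# Classic drill-across: facts joined on the common dimension 'month'.
drill_across = sales.merge(shipping, on="month")

# A coalescing-style union that keeps the non-common dimensions
# ('region', 'carrier'), filling the gaps where a mart lacks them.
coalesced = pd.concat([sales, shipping], axis=0, ignore_index=True)

print(drill_across)
print(coalesced)
```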

Keywords: Data warehouse, Dimension, OLAP, Star Schema.

7419 Development of Maximum Entropy Method for Prediction of Droplet-size Distribution in Primary Breakup Region of Spray

Authors: E. Movahednejad, F. Ommi

Abstract:

Droplet size distributions in the cold spray of a fuel are important to the observed combustion behavior. Specification of droplet size and velocity distributions immediately downstream of injectors is also essential as boundary conditions for advanced computational fluid dynamics (CFD) and two-phase spray transport calculations. This paper describes the development of a new model to be incorporated into the maximum entropy principle (MEP) formalism for predicting the droplet size distribution in the droplet formation region. The MEP approach can predict the most likely droplet size and velocity distributions under a set of constraints expressing the available information related to the distribution. By considering the mechanisms of turbulence generation inside the nozzle and wave growth on the jet surface, we attempt to provide a logical framework coupling the flow inside the nozzle to the resulting atomization process. The purpose of this paper is to describe the formulation of this new model and to incorporate it into the MEP by coupling sub-models together using source terms of momentum and energy. Comparison between the model predictions and experimental data for a gas turbine swirling nozzle and an annular spray indicates good agreement between model and experiment.
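
The standard MEP machinery referred to here maximizes the Shannon entropy of the size distribution subject to integral constraints (normalization plus the mass, momentum and energy source terms supplied by the sub-models), which yields the familiar exponential-family form; the specific constraint set of the new model is detailed in the paper, so the following is the generic formulation:

\[
\max_f \; S = -\int f(D)\,\ln f(D)\, dD
\quad \text{subject to} \quad \int f(D)\, g_j(D)\, dD = \bar{g}_j, \;\; j = 0, \dots, m,
\]

\[
\Rightarrow \quad f(D) = \exp\!\Big(-\lambda_0 - \sum_{j=1}^{m} \lambda_j\, g_j(D)\Big),
\]

with the Lagrange multipliers \(\lambda_j\) fixed by the constraints.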

Keywords: Droplet, Instability, Size Distribution, Turbulence, Maximum Entropy.

7418 Mining Big Data in Telecommunications Industry: Challenges, Techniques, and Revenue Opportunity

Authors: Hoda A. Abdel Hafez

Abstract:

Mining big data represents a big challenge nowadays, and much research is concerned with mining massive amounts of data and big data streams. Mining big data faces many challenges, including scalability, speed, heterogeneity, accuracy, provenance and privacy. In the telecommunications industry, mining big data is like mining for gold: it represents a big opportunity for maximizing revenue streams. This paper discusses the characteristics of big data (volume, variety, velocity and veracity), data mining techniques and tools for handling very large data sets, mining big data in telecommunications, and the benefits and opportunities gained from them.

Keywords: Mining Big Data, Big Data, Machine learning, Data Streams, Telecommunication.

7417 Inverse Heat Conduction Analysis of Cooling on Run Out Tables

Authors: M. S. Gadala, Khaled Ahmed, Elasadig Mahdi

Abstract:

In this paper, we introduce a gradient-based inverse solver to obtain missing boundary conditions from the readings of internal thermocouples. The results show that the method is very sensitive to measurement errors and becomes unstable when small time steps are used. Artificial neural networks are shown to be capable of capturing the whole thermal history on the run-out table, but they are not very effective in restoring the detailed behavior of the boundary conditions; they also behave poorly in nonlinear cases and when the boundary condition profile is different. GA and PSO are more effective in finding a detailed representation of the time-varying boundary conditions, as well as in nonlinear cases, although their convergence takes longer. A variation of the basic PSO, called CRPSO, showed the best performance among the three versions. PSO also proved effective in handling noisy data, especially when its performance parameters were tuned; an increase in the self-confidence parameter was found to be effective, as it increased the global search capabilities of the algorithm. RPSO was the most effective variation in dealing with noise, closely followed by CRPSO. The latter variation is recommended for inverse heat conduction problems, as it combines the efficiency and effectiveness required by these problems.
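
A bare-bones PSO of the kind compared in the study is sketched below on a stand-in least-squares objective; c1 is the self-confidence (cognitive) parameter whose increase was found to improve global search. The objective, bounds and parameter values are assumptions for illustration, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def mismatch(q):
    # Stand-in objective: squared error between simulated and "measured"
    # thermocouple responses for a candidate boundary-condition vector q.
    target = np.array([2.0, -1.0, 0.5])   # hypothetical values
    return float(np.sum((q - target) ** 2))

# Basic PSO with inertia weight w, self-confidence c1, social weight c2.
n, dim, w, c1, c2 = 20, 3, 0.7, 2.0, 1.5
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), np.array([mismatch(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([mismatch(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("recovered boundary parameters:", gbest.round(3))
```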

Keywords: Inverse Analysis, Function Specification, Neural Net Works, Particle Swarm, Run Out Table.

7416 Modelling Multiagent Systems

Authors: Gilbert Ndjatou

Abstract:

We propose a formal framework for specifying the behavior of a system of agents as well as that of the constituting agents. This framework allows us to model each agent's effectoric capability, including its interactions with the other agents. We also provide an algorithm, based on Milner's "observation equivalence", to derive an agent's perception of its task domain situations from its effectoric capability, and we use "system computations" to model the coordinated efforts of the agents in the system. Formal definitions of the concept of "behavior equivalence" of two agents, and of the equivalence of system computations for an agent, are also provided.

Keywords: Multiagent system, object system, observation equivalence, reactive systems.

7415 PEIBM: Perceiving Emotions using an Intelligent Behavioral Model

Authors: Maryam Humayun, Zafar I. Malik, Shaukat Ali

Abstract:

Computer animation is a widely adopted technique used to specify the movement of objects on screen, and its key issue is the specification of motion. Motion control methods are used to specify the actions of objects. This paper discusses the various types of motion control methods, with special focus on behavioral animation. A behavioral model is proposed that takes into account the emotions and perceptions of an actor, which in turn generate its behavior. This model makes use of an expert system to generate tasks for the actors, which specify the actions to be performed in the virtual environment.

Keywords: Behavioral animation, emotion, expert system, perception.

7414 Comparative Analysis of Diverse Collection of Big Data Analytics Tools

Authors: S. Vidhya, S. Sarumathi, N. Shanthi

Abstract:

Over the past years, many efforts and studies have been devoted to developing proficient tools for performing various tasks on big data, and big data has recently received a lot of publicity, for good reason. Large and complex collections of datasets are difficult to process with traditional data processing applications, which makes the development of dedicated big data tools all the more necessary. The main aim of big data analytics is to apply advanced analytic techniques to very large and diverse datasets, ranging in size from terabytes to zettabytes and in type from structured to unstructured and from batch to streaming. Big data approaches are useful for data sets whose size or type is beyond the capability of traditional relational databases to capture, manage and process with low latency. These challenges have led to the emergence of powerful big data tools. In this survey, a varied collection of big data tools is described and compared in terms of their salient features.

Keywords: Big data, Big data analytics, Business analytics, Data analysis, Data visualization, Data discovery.

7413 Investigation and Calculation of Seismic Reliability of Structures

Authors: Panam Zarfam, Mohsen Javan Pour

Abstract:

Recently, the analysis and design of structures based on reliability theory have been the center of attention. The reason for this attention is the naturally random character of structural parameters such as material specifications, external loads, and geometric dimensions. By means of reliability theory, uncertainties resulting from the statistical nature of the structural parameters can be cast into mathematical equations, and safety and operational considerations can be taken into account in the design process. According to this theory, it is possible to study the failure probability not only of a specific element but also of the entire system. Therefore, after the safety of every element has been assured, their reciprocal effects on the safety of the entire system can be investigated.
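
In standard structural reliability notation (assumed here, as the abstract gives no formulas), the failure probability of a limit-state function g of the random parameters X (material properties, loads, dimensions) and the associated reliability index are:

\[
P_f = P\big(g(\mathbf{X}) \le 0\big) = \int_{g(\mathbf{x}) \le 0} f_{\mathbf{X}}(\mathbf{x})\, d\mathbf{x},
\qquad
\beta = \Phi^{-1}(1 - P_f),
\]

where \(f_{\mathbf{X}}\) is the joint density of the random parameters and \(\Phi\) is the standard normal CDF.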

Keywords: Probability, Reliability, Statistics, Uncertainty

7412 Design and Layout of an X-Band MMIC Power Amplifier in a pHEMT Technology

Authors: Renbin Dai, Rana Arslan Ali Khan

Abstract:

The design of a two-stage Class A/Class AB X-band power amplifier is described in this paper. This power amplifier is part of a transceiver used in radar for monitoring iron characteristics in a blast furnace. The circuit was designed using the WIN Semiconductors foundry process. The specification requires 15 dB gain in the linear region, a VSWR of nearly 1 at both the input and the output, an output power of 10 dBm, and stable performance in the 10.9-12.2 GHz band. The design was implemented in an inter-stage configuration: a Class A amplifier was chosen for the driver (first) stage, focusing on gain, while the output stage operates in Class AB, with more emphasis on output power.

Keywords: Power amplifier, Class AB, Class A, MMIC, 2-stage, X band.

7411 Multi-labeled Data Expressed by a Set of Labels

Authors: Tetsuya Furukawa, Masahiro Kuzunishi

Abstract:

Collected data must be organized to be utilized efficiently, and hierarchical classification is an efficient approach to organizing data. When data is classified into multiple categories, or annotated with a set of labels, users request multi-labeled data by giving a set of labels. There are several interpretations of which data a set of labels expresses. This paper addresses this question by introducing orders on sets of labels and shows that there are four types of orders, characterized by whether the labels of the expressed data include every label of the given set within the range of the set. Desirable properties of the orders, namely that data is also expressed by a higher set of labels and that different sets of labels express different data, are discussed for each order.
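
One plausible interpretation can be sketched directly: treat data as expressed by a query set of labels when every query label covers (is an ancestor-or-self of) some label attached to the data. The hierarchy and the specific order chosen below are illustrative assumptions; the paper defines and compares all four orders formally.

```python
# Toy label hierarchy as child -> parent; None marks a root.
parent = {"science": None, "physics": "science", "optics": "physics",
          "art": None, "painting": "art"}

def ancestors_or_self(label):
    out = set()
    while label is not None:
        out.add(label)
        label = parent[label]
    return out

def expressed_by(data_labels, query_labels):
    # One of several possible orders: every query label must cover
    # (be an ancestor-or-self of) some label of the data item.
    covered = set().union(*(ancestors_or_self(l) for l in data_labels))
    return all(q in covered for q in query_labels)

doc = {"optics", "painting"}                    # labels of one data item
print(expressed_by(doc, {"science"}))           # True: science covers optics
print(expressed_by(doc, {"science", "art"}))    # True
print(expressed_by({"painting"}, {"physics"}))  # False: physics covers nothing here
```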

Keywords: Classification Hierarchies, Multi-labeled Data, Multiple Classification, Orders of Sets of Labels

7410 Towards an Enhanced Stochastic Simulation Model for Risk Analysis in Highway Construction

Authors: Anshu Manik, William G. Buttlar, Kasthurirangan Gopalakrishnan

Abstract:

Over the years, there has been a growing trend towards quality-based specifications in highway construction. In many Quality Control/Quality Assurance (QC/QA) specifications, the contractor is primarily responsible for quality control of the process, whereas the highway agency is responsible for acceptance testing of the product. A cooperative investigation was conducted in Illinois over several years to develop a prototype End-Result Specification (ERS) for asphalt pavement construction. The final characteristics of the product are stipulated in the ERS, and the contractor is given considerable freedom in achieving them. The risk to the contractor or agency depends on how the acceptance limits and processes are specified. Stochastic simulation models are very useful for estimating and analyzing payment risk in ERS systems, and they form an integral part of Illinois's prototype ERS. This paper describes the development of an innovative methodology to estimate the variability components in in-situ density, air voids and asphalt content data from ERS projects. The information gained from this is crucial for simulating ERS projects to estimate and analyze the payment risks associated with asphalt pavement construction. These methods, however, require at least two parties to conduct tests on all the split samples obtained according to the sampling scheme prescribed in the present ERS implemented in Illinois.
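
A stochastic payment-risk simulation of the kind described can be sketched with Monte Carlo draws over assumed variability components; the acceptance limits, standard deviations and sample sizes below are illustrative assumptions, not values estimated from the Illinois projects.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ERS acceptance: air voids target 4.0 percent, limits 2.5-5.5.
# Variability split into a process component and a testing component.
target, lo, hi = 4.0, 2.5, 5.5
sigma_process, sigma_test = 0.6, 0.3
n_lots, n_samples = 100_000, 5

# Each lot has its own true mean; each acceptance sample adds test error.
process_mean = rng.normal(target, sigma_process, n_lots)
measurements = rng.normal(process_mean[:, None], sigma_test,
                          (n_lots, n_samples))
lot_means = measurements.mean(axis=1)

prob_rejection = np.mean((lot_means < lo) | (lot_means > hi))
print(f"simulated rejection (payment) risk: {prob_rejection:.3%}")
```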

Keywords: Asphalt Pavement, Risk Analysis, Stochastic Simulation, QC/QA.
