Search results for: Data Mining.
6861 Using Automated Database Reverse Engineering for Database Integration
Authors: M. R. Abbasifard, M. Rahgozar, A. Bayati, P. Pournemati
Abstract:
An important problem in today's organizations is the existence of non-integrated information systems, inconsistency, and a lack of suitable correlation between legacy and modern systems. One main solution is to transfer the local databases into a global one. In this regard, we need to extract the data structures from the legacy systems and integrate them with the new technology systems. In legacy systems, huge amounts of data are stored in legacy databases. These require particular attention, since considerable effort is needed to normalize and reformat them and move them to modern database environments. Designing the new integrated (global) database architecture and applying reverse engineering require data normalization. This paper proposes the use of database reverse engineering in order to integrate legacy and modern databases in organizations. The suggested approach consists of methods and techniques for generating the data transformation rules needed for data structure normalization.
Keywords: Reverse Engineering, Database Integration, System Integration, Data Structure Normalization
6860 Analysis of Cooperative Learning Behavior Based on the Data of Students' Movement
Authors: Wang Lin, Li Zhiqiang
Abstract:
The purpose of this paper is to analyze cooperative learning behavior patterns based on data of students' movement. The study first reviews cooperative learning theory and its research status and briefly introduces the k-means clustering algorithm. It then uses the clustering algorithm and mathematical statistics to analyze the activity rhythms of individual students and groups in different functional areas, according to movement data provided by 10 first-year graduate students. It also focuses on the analysis of students' behavior in the learning area and explores the laws of cooperative learning behavior. The results show that the cooperative learning behavior analysis method based on movement data proposed in this paper is feasible: from the data analysis, the characteristics of students' behavior and their cooperative learning behavior patterns can be found.
Keywords: Behavior pattern, cooperative learning, data analysis, k-means clustering algorithm.
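For illustration, the clustering step described above can be sketched with scikit-learn's k-means. This is a minimal sketch on hypothetical 2-D movement points, not the authors' code or dataset; the functional-area labels are assumptions.

```python
# Minimal sketch: k-means over 2-D movement points (hypothetical data,
# not the authors' dataset or code).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Fake (x, y) positions logged for students moving between areas.
points = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),   # e.g. learning area
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),   # e.g. rest area
    rng.normal(loc=(0, 5), scale=0.5, size=(50, 2)),   # e.g. discussion area
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)       # centers of the inferred functional areas
print(np.bincount(kmeans.labels_))   # how many points fall in each area
```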
6859 Improvement of a Label Extraction Method for a Risk Search System
Authors: Shigeaki Sakurai, Ryohei Orihara
Abstract:
This paper proposes a method for improving classification efficiency in a classification model. The model is used in a risk search system to extract specific labels from articles posted at bulletin board sites, so that the system can analyze the important discussions composed of those articles. The improvement introduces ensemble learning methods that use multiple classification models, and it incorporates expressions related to the specific labels into the generation of word vectors. The paper applies the improved method to articles collected from three bulletin board sites selected by users and verifies its effectiveness.
Keywords: Text mining, Risk search system, Corporate reputation, Bulletin board site, Ensemble learning
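A minimal sketch of the ensemble idea follows: several classifiers voting over word vectors. The articles, labels, and classifier choices are hypothetical stand-ins, not the authors' system.

```python
# Sketch: ensemble of classifiers over TF-IDF word vectors
# (hypothetical articles/labels, not the authors' risk-search data).
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

articles = ["product recall rumour spreading", "great service, no issues",
            "possible lawsuit mentioned", "routine announcement"]
labels = ["risk", "no-risk", "risk", "no-risk"]

model = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier([("lr", LogisticRegression(max_iter=1000)),
                      ("nb", MultinomialNB())], voting="soft"),
)
model.fit(articles, labels)
print(model.predict(["rumour of a recall"]))   # -> likely "risk"
```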
6858 A Security Cloud Storage Scheme Based on Accountable Key-Policy Attribute-Based Encryption without Key Escrow
Authors: Ming Lun Wang, Yan Wang, Ning Ruo Sun
Abstract:
With the development of cloud computing, more and more users are utilizing cloud storage services. However, several issues exist: 1) the cloud server steals the shared data; 2) sharers collude with the cloud server to steal the shared data; 3) the cloud server tampers with the shared data; 4) sharers and the key generation center (KGC) conspire to steal the shared data. In this paper, we use the advanced encryption standard (AES), hash algorithms, and accountable key-policy attribute-based encryption without key escrow (WOKE-AKP-ABE) to build a secure cloud storage scheme. The data are encrypted to protect privacy, and hash algorithms prevent the cloud server from tampering with the data uploaded to the cloud. Analysis results show that this scheme can resist collusion attacks.
Keywords: Cloud storage security, sharing storage, attributes, Hash algorithm.
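The tamper-detection role of the hash component can be shown in a few lines. This is an illustration of the hash check only, under assumed data; the paper's full scheme additionally uses AES and WOKE-AKP-ABE.

```python
# Sketch: detecting tampering of stored data with a SHA-256 digest
# (illustrates the hash component only, not the full scheme).
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"shared file contents"
stored_digest = digest(original)              # kept by the data owner

served_by_cloud = b"shared file contents (tampered)"
if digest(served_by_cloud) != stored_digest:
    print("integrity check failed: cloud data was modified")
```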
6857 Multimethod Approach to Research in Interlanguage Pragmatics
Authors: Saad Al-Gahtani, Ghassan H Al Shatter
Abstract:
Debate over the use of particular methods in interlanguage pragmatics has increased recently, with researchers arguing the advantages and disadvantages of each method, whether natural or elicited. Findings of different studies indicate that the use of one method may not provide enough data to answer all of a study's questions. The current study investigated the validity of using a multimethod approach in interlanguage pragmatics to understand the development of requests in Arabic as a second language (Arabic L2). To this end, the study adopted two methods belonging to two types of data sources: institutional discourse (natural data) and role play (elicited data). Participants were 117 learners of Arabic L2 at the university level, representing four levels (beginner, low-intermediate, high-intermediate, and advanced). Results showed that using two or more methods in interlanguage pragmatics affects the size and nature of the data.
Keywords: Arabic L2, Development of requests, Interlanguage Pragmatics, Multimethod approach.
6856 Design of Integration Security System Using XML Security
Authors: Juhan Kim, Soohyung Kim, Kiyoung Moon
Abstract:
In this paper, we design an integration security system that provides an authentication service, an authorization service, and a management service for security data, together with a unified interface for the management service. The interface originates from the XKMS protocol and is used to manage security data such as XACML policies, SAML assertions, and other authentication security data, including public keys. The system provides security services such as authentication, authorization, and delegation of authentication by employing SAML and XACML, based on security data such as authentication data, attribute information, assertions, and policies managed through the interface. It also has a SAML producer that issues assertions related to the results of the authentication and authorization services.
Keywords: XML, XML Security, XACML.
6855 An Evaluation Model for Semantic Enablement of Virtual Research Environments
Authors: Tristan O'Neill, Trina Myers, Jarrod Trevathan
Abstract:
The Tropical Data Hub (TDH) is a virtual research environment that provides researchers with an e-research infrastructure to congregate significant tropical data sets for data reuse, integration, searching, and correlation. However, researchers often require data and metadata synthesis across disciplines for cross-domain analyses and knowledge discovery. A triplestore offers a semantic layer to achieve a more intelligent method of search to support the synthesis requirements by automating latent linkages in the data and metadata. Presently, the benchmarks to aid the decision of which triplestore is best suited for use in an application environment like the TDH are limited to performance. This paper describes a new evaluation tool developed to analyze both features and performance. The tool comprises a weighted decision matrix to evaluate the interoperability, functionality, performance, and support availability of a range of integrated and native triplestores to rank them according to requirements of the TDH.
Keywords: Virtual research environment, Semantic Web, performance analysis, tropical data hub.
6854 Dimension Reduction of Microarray Data Based on Local Principal Component
Authors: Ali Anaissi, Paul J. Kennedy, Madhu Goyal
Abstract:
Analysis and visualization of microarray data are of great assistance to biologists and clinicians in the diagnosis and treatment of patients, allowing clinicians to better understand the structure of microarrays and facilitating the understanding of gene expression in cells. However, a microarray dataset is complex, with thousands of features and a very small number of observations; such very high-dimensional data often contain noise, non-useful information, and only a small number of features relevant to disease or genotype. This paper proposes a non-linear dimensionality reduction algorithm, Local Principal Component (LPC), which maps high-dimensional data to a lower-dimensional space. The reduced data represent the most important variables underlying the original data. Experimental results and comparisons are presented to show the quality of the proposed algorithm. Moreover, experiments show how the algorithm reduces high-dimensional data whilst preserving the neighbourhoods of the points in the low-dimensional space as in the high-dimensional space.
Keywords: Linear Dimension Reduction, Non-Linear Dimension Reduction, Principal Component Analysis, Biologists.
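The paper's LPC algorithm is not spelled out here, but the setting it addresses, many features and few samples, can be illustrated with a standard linear PCA baseline. This is a sketch on random stand-in data; LPC itself is non-linear and differs.

```python
# Sketch: linear PCA baseline on a microarray-like matrix
# (random stand-in data; the authors' LPC method is non-linear and differs).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5000))    # 60 samples, 5000 gene-expression features

pca = PCA(n_components=10).fit(X)
X_low = pca.transform(X)           # 60 x 10 reduced representation
print(X_low.shape, pca.explained_variance_ratio_.sum())
```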
6853 Heterogeneous Attribute Reduction in Noisy Systems Based on a Generalized Neighborhood Rough Sets Model
Authors: Siyuan Jing, Kun She
Abstract:
Neighborhood Rough Sets (NRS) has been proven to be an efficient tool for heterogeneous attribute reduction. However, most research has focused on complete and noiseless data, whereas in fact most information systems are noisy, that is, filled with incomplete and inconsistent data. In this paper, we introduce a generalized neighborhood rough sets model, called VPTNRS, to deal with the problem of heterogeneous attribute reduction in noisy systems. We generalize the classical NRS model with a tolerance neighborhood relation and probabilistic theory. Furthermore, we use neighborhood dependency to evaluate the significance of a subset of heterogeneous attributes and construct a forward greedy algorithm for attribute reduction based on it. Experimental results show that the model deals with noisy data efficiently.
Keywords: attribute reduction, incomplete data, inconsistent data, tolerance neighborhood relation, rough sets
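The forward greedy selection loop driven by a significance measure can be sketched generically as below. The `dependency` function is a placeholder; the paper's neighborhood dependency for VPTNRS is defined over tolerance neighborhoods and is not reproduced here.

```python
# Sketch: forward greedy attribute reduction driven by a dependency score.
# `dependency` is a placeholder; the paper uses neighborhood dependency.
from typing import Callable, List, Set

def greedy_reduct(attrs: List[str],
                  dependency: Callable[[Set[str]], float],
                  eps: float = 1e-6) -> Set[str]:
    chosen: Set[str] = set()
    best = dependency(chosen)
    while True:
        gains = {a: dependency(chosen | {a}) for a in attrs if a not in chosen}
        if not gains:
            break
        a, score = max(gains.items(), key=lambda kv: kv[1])
        if score - best <= eps:     # stop when no attribute adds significance
            break
        chosen.add(a)
        best = score
    return chosen
```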
6852 A Mobile Agent-Based Clustering Data Fusion Algorithm in WSN
Authors: Xiangbin Zhu, Wenjuan Zhang
Abstract:
In wireless sensor networks, mobile agent technology is used for data fusion. We design a node clustering algorithm based on the nodes' residual energy and the results of partial integration, and we optimize the mobile agent's routing strategy within each cluster to further reduce the amount of data transferred. Experiments show that using mobile agents in the intra-cluster integration process can reduce path loss to some extent.
Keywords: wireless sensor networks, data fusion, mobile agent
6851 Collision Detection Algorithm Based on Data Parallelism
Authors: Zhen Peng, Baifeng Wu
Abstract:
Modern computing technology has entered the era of parallel computing, with a trend toward sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to follow this trend: it gathers more and more computing ability by increasing the number of processor cores, without the need to modify the program. Meanwhile, in the fields of scientific computing and engineering design, many computation-intensive applications face the challenge of increasingly large amounts of data, and data-parallel computing is an important way to further improve their performance. In this paper, we take accurate collision detection in building information modeling as an example and demonstrate a model for constructing a data-parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into collision detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability over traditional algorithms are significant.
Keywords: Data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability.
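The data-parallel flavor of the decomposition can be illustrated with NumPy's vectorized, SIMD-style array operations on axis-aligned bounding boxes. This is a simplified sketch with random boxes; the paper targets accurate detection in building information models.

```python
# Sketch: vectorized AABB overlap test, SIMD-style over whole arrays
# (simplified stand-in for the paper's accurate BIM collision detection).
import numpy as np

rng = np.random.default_rng(2)
n = 10000
mins = rng.uniform(0, 100, size=(n, 3))            # box lower corners
maxs = mins + rng.uniform(0.1, 5.0, size=(n, 3))   # box upper corners

query_min = np.array([10, 10, 10])
query_max = np.array([20, 20, 20])

# One vectorized expression tests all n boxes against the query box at once.
overlap = np.all((mins <= query_max) & (maxs >= query_min), axis=1)
print(int(overlap.sum()), "boxes collide with the query box")
```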
6850 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences
Authors: C. Xavier Mendieta, J. J McArthur
Abstract:
Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks, which can then be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark using public reporting data. Data from Ontario's Ministry of Energy for post-secondary educational institutions are used to develop a series of building-archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) careful data screening and outlier identification are essential to develop a valid dataset; (2) the key features used to model the data are building age, size, and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluating the validity of the reported data.
Keywords: Building archetypes, data analysis, energy benchmarks, GHG emissions.
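The outlier screening mentioned in finding (1) is commonly done with a simple interquartile-range filter; a sketch follows. The column name and values are hypothetical, not the ministry dataset's actual schema.

```python
# Sketch: IQR-based outlier screening of an energy-use-intensity column
# (hypothetical column name; the ministry dataset's schema may differ).
import pandas as pd

df = pd.DataFrame({"eui_kwh_per_m2": [110, 95, 130, 2400, 105, 88, 120]})

q1, q3 = df["eui_kwh_per_m2"].quantile([0.25, 0.75])
iqr = q3 - q1
mask = df["eui_kwh_per_m2"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
clean = df[mask]        # 2400 is dropped as an implausible record
print(clean)
```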
6849 Making Data Structures and Algorithms More Understandable by Programming Sudoku the Human Way
Authors: Roelien Goede
Abstract:
Data Structures and Algorithms is a module in most Computer Science or Information Technology curricula, and one of the modules most students identify as difficult. This paper demonstrates how programming a solution for Sudoku can make abstract concepts more concrete. The paper relates concepts of a typical Data Structures and Algorithms module to a step-by-step solution for Sudoku, solved the human way as opposed to a computer-oriented solution.
Keywords: Data Structures, Algorithms, Sudoku, Object-Oriented Programming, Programming Teaching, Education.
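A typical first step of a "human way" solver is the naked-single rule: fill any cell whose candidate set reduces to one value. The sketch below illustrates that rule on a grid of ints (0 = empty); it is illustrative only, not the author's classroom code.

```python
# Sketch: the "naked single" human strategy for Sudoku
# (illustrative only, not the author's teaching code; 0 marks an empty cell).
def candidates(grid, r, c):
    if grid[r][c]:
        return set()
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return set(range(1, 10)) - used

def fill_naked_singles(grid):
    progress = True
    while progress:                 # repeat until no cell is forced
        progress = False
        for r in range(9):
            for c in range(9):
                cands = candidates(grid, r, c)
                if len(cands) == 1:
                    grid[r][c] = cands.pop()
                    progress = True
    return grid
```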
6848 Advanced Information Extraction with n-gram Based LSI
Authors: Ahmet Güven, Ö. Özgür Bozkurt, Oya Kalıpsız
Abstract:
The number of documents being created increases at an accelerating pace, while most of them cover already-known topics and few introduce new concepts. This fact has started a new era in the information retrieval discipline, with its own special requirements: digging into topics and concepts and finding subtopics or relations between topics. Until now, IR research has been interested in retrieving documents about a general topic or clustering documents under generic subjects. These conventional approaches, however, cannot go deep into the content of documents, which makes it difficult for people to reach the right documents they are searching for. We therefore need new ways of mining document sets in which the critical point is knowing much about the contents of the documents. As a solution, we propose to enhance LSI, one of the proven IR techniques, by supporting its vector space with n-gram forms of words. The positive results we have obtained are shown in two application areas of the IR domain: querying a document database and clustering the documents in the database.
Keywords: Document clustering, Information Extraction, Information Retrieval, LSI, n-gram.
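A generic approximation of the proposed enhancement is LSI (truncated SVD) over a TF-IDF vector space extended with word n-grams, as in the sketch below; the documents are hypothetical and the exact n-gram scheme of the paper may differ.

```python
# Sketch: LSI over a word n-gram vector space
# (generic approximation of the paper's n-gram enhanced LSI).
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

docs = ["data mining of text corpora", "mining text for concepts",
        "image compression methods", "lossless image coding"]

lsi = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigrams + bigrams as terms
    TruncatedSVD(n_components=2),          # latent semantic dimensions
)
topic_space = lsi.fit_transform(docs)      # 4 docs x 2 latent topics
print(topic_space.round(2))
```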
6847 Construction of a Decentralized Lifetime Maximizing Tree for Data Aggregation in Wireless Sensor Networks
Authors: Deepali Virmani, Satbir Jain
Abstract:
To meet the demands of wireless sensor networks (WSNs), where data are usually aggregated at a single source prior to transmission to any distant user, there is a need to establish a tree structure inside any given event region. In this paper, a novel technique to create one such tree is proposed. The tree preserves energy and maximizes the lifetime of event sources while they are constantly transmitting for data aggregation. The term Decentralized Lifetime Maximizing Tree (DLMT) is used to denote this tree. In DLMT, nodes with higher energy tend to be chosen as data-aggregating parents, so that the time to detect the first broken tree link can be extended and less energy is involved in tree maintenance. By constructing the tree in this way, the protocol is able to reduce the frequency of tree reconstruction, minimize data loss, minimize delay during data collection, and preserve energy.
Keywords: branch energy, decentralized, energy level, lifetime, tree energy, wireless sensor networks.
6846 Effects of Data Correlation in a Sparse-View Compressive Sensing Based Image Reconstruction
Authors: Sajid Abbas, Joon Pyo Hong, Jung-Ryun Lee, Seungryong Cho
Abstract:
Computed tomography and laminography are heavily investigated in a compressive sensing based image reconstruction framework to reduce the dose to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Researchers are actively working on optimizing compressive sensing based iterative image reconstruction algorithms to obtain better quality images. However, the effects of the sampled data's properties on the reconstructed image's quality, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigated the effects of two data properties, sampling density and data incoherence, on the reconstructed image obtained by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We found that, in a compressive sensing based image reconstruction framework, the image quality mainly depends upon data incoherence when the data are uniformly sampled.
Keywords: Computed tomography, computed laminography, compressive sensing, low-dose.
6845 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data
Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz
Abstract:
In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day, and the growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period regardless of the load on the system; but with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data because they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning: we first find the optimal attribute sequence using the Matching algorithm, and we then propose a new cost model for database partitioning that keeps the data amount of each partition balanced and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution, which improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable and that it outperforms comparable algorithms.
Keywords: Real-Time Spatial Big Data, Quality of Service, Vertical partitioning, Horizontal partitioning, Matching algorithm, Hamming distance, Stream query.
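The role of Hamming distance in vertical partitioning, grouping attributes that frequent queries access together, reduces to comparing attribute usage vectors, as in the sketch below. The usage matrix is hypothetical and the full VPA-RTSBD cost model is richer than this.

```python
# Sketch: Hamming distance between attribute usage vectors, the measure
# behind co-locating attributes that queries access together
# (hypothetical usage matrix; the full VPA-RTSBD cost model is richer).
import numpy as np

# Rows = queries, columns = attributes; 1 means the query uses the attribute.
usage = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 1]])

def hamming(a, b):
    return int(np.sum(a != b))

n_attrs = usage.shape[1]
for i in range(n_attrs):
    for j in range(i + 1, n_attrs):
        d = hamming(usage[:, i], usage[:, j])
        print(f"attributes {i} and {j}: Hamming distance {d}")
# Small distances (e.g. attributes 0 and 1) suggest placing those
# attributes in the same vertical partition.
```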
6844 The Impact of the General Data Protection Regulation on Human Resources Management in Schools
Authors: Alexandra Aslanidou
Abstract:
The General Data Protection Regulation (GDPR), concerning the protection of natural persons within the European Union with regard to the processing of personal data and the free movement of such data, became applicable in the European Union (EU) on 25 May 2018. It transformed the way personal data were treated under the Data Protection Directive (DPD) regime, generating sweeping organizational changes in both the public sector and business. A practice whose day-to-day operations are considerably influenced is Human Resource (HR) management, for which the importance of the GDPR cannot be underestimated, because HR processes personal data coming in all shapes and sizes from many different systems and sources. The proper functioning of an HR department is decisive in human-centered, service-oriented environments such as the education field: HR operations in schools, conducted effectively, determine the quality of the provided services and consequently have a considerable impact on the success of the educational system. The purpose of this paper is to analyze the decisive role that the GDPR plays in HR departments that operate in schools; in order to practically evaluate the aftermath of the Regulation during the first months of its applicability, a comparative use-case analysis in five highly dynamic schools across three EU Member States was attempted.
Keywords: General data protection regulation, human resource management, educational system.
6843 A Data Warehouse System to Help Assist Breast Cancer Screening in Diagnosis, Education and Research
Authors: Souâd Demigha
Abstract:
Early detection of breast cancer is considered a major public health issue, yet breast cancer screening is not generalized to the entire population due to a lack of resources, staff, and appropriate tools. Systematic screening can result in a volume of data that cannot be managed by present computer architectures, either in terms of storage capability or in terms of exploitation tools. We propose in this paper to design and develop a data warehouse system in radiology-senology (DWRS). The aim of such a system is, on the one hand, to support this large volume of information coming from multiple sources of data and images and, on the other hand, to help assist breast cancer screening in diagnosis, education, and research.
Keywords: Breast cancer screening, data warehouse, diagnosis, education, research.
6842 Data Security in a DApp Twitter Alike on Web 3.0 With Blockchain Based Technology
Authors: Vishal Awasthi, Tanya Soni, Vigya Awasthi, Swati Singh, Shivali Verma
Abstract:
There is a growing demand for a network that grants a high level of data security and confidentiality. For this reason, the semantic web was introduced, which allows data to be shared and reused across applications while safeguarding users' privacy and letting users regain control of their data. The earlier Web 1.0 and Web 2.0 versions were built on a client-server architecture, which carried the risk of data theft and the unconsented sale of user data. A decentralized version, known as Web 3.0 and mostly built on blockchain technology, was introduced to resolve these issues. Recent research on blockchain technology deals with the privacy, security, transparency, and innovation of decentralized applications (DApps), e.g., a Twitter clone or a WhatsApp clone. In this paper, a Twitter alike built on the Ethereum blockchain replaces traditional techniques with improved latency, throughput, and data ownership. The central principle of this DApp is a smart contract implemented using Solidity, an object-oriented, high-level language. Consequently, this provides better quality of service, high data security, and integrity for both present and future internet technologies.
Keywords: Blockchain, DApps, Ethereum, Semantic Web, Smart Contract, Solidity.
6841 Improvement of Data Transfer over Simple Object Access Protocol (SOAP)
Authors: Khaled Ahmed Kadouh, Kamal Ali Albashiri
Abstract:
This paper presents an algorithm designed to improve the transfer of data over the Simple Object Access Protocol (SOAP). The aim of this work is to establish whether using SOAP to exchange XML messages has any added advantages. The results showed that XML messages without SOAP take longer and consume more memory, especially with binary data.
Keywords: JAX-WS, SMTP, SOAP, Web service, XML.
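One well-known source of the binary-data cost in XML transports (with or without SOAP) is base64 encoding, which inflates binary payloads by roughly a third; a quick measurement sketch, offered as background rather than the paper's experiment:

```python
# Sketch: measuring the size overhead of embedding binary data in XML,
# a known factor behind binary-payload costs in XML/SOAP transports.
import base64
import os

payload = os.urandom(100_000)                 # 100 kB of binary data
encoded = base64.b64encode(payload)           # what an XML message must carry
xml_message = b"<data>" + encoded + b"</data>"

print(len(payload), "raw bytes")
print(len(xml_message), "bytes inside XML")   # roughly 4/3 larger
```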
6840 Numerical Simulations of Flood and Inundation in Jobaru River Basin Using Laser Profiler Data
Authors: Hiroto Nakashima, Toshihiro Morita, Koichiro Ohgushi
Abstract:
Laser Profiler (LP) data from aerial laser surveys have been increasingly used as topographical inputs to numerical simulations of flooding and inundation in river basins. LP data have great potential for reproducing topography, but their effective usage has not yet been fully established. In this study, flooding and inundation are simulated numerically using LP data for the Jobaru River basin of Japan's Saga Plain. The analysis shows that the topography is reproduced satisfactorily in the computational domain, with urban and agricultural areas requiring different grid sizes. A 2-D numerical simulation shows that flood flow behavior changes as the grid size is varied.
Keywords: LP data, numerical simulation, topological analysis, mesh size.
6839 Channels Splitting Strategy for Optical Local Area Networks of Passive Star Topology
Authors: Peristera Baziana
Abstract:
In this paper, we present a network configuration for WDM LANs of passive star topology in which the set of data WDM channels is split into two separate sets with different access rights. In particular, a synchronous-transmission WDMA access algorithm is adopted in order to increase the probability of successful transmission over the data channels and, consequently, to reduce the probability of data packet transmission cancellation and avoid data channel collisions. To this end, a pre-transmission control access scheme is followed over a separate control channel. An analytical Markovian model is studied, and the average throughput is mathematically derived. The performance is studied for several numbers of data channels and various values of the control phase duration.
Keywords: Access algorithm, channels division, collisions avoidance, wavelength division multiplexing.
6838 Data-Reusing Adaptive Filtering Algorithms with Adaptive Error Constraint
Authors: Young-Seok Choi
Abstract:
We present a family of data-reusing and affine projection algorithms. For the identification of a noisy linear finite impulse response channel, partial knowledge of the channel, especially of the noise, can be used to improve the performance of the adaptive filter. Motivated by this fact, the proposed scheme incorporates an estimate of the noise: a constraint, called the adaptive noise constraint, estimates the unknown noise information. By imposing this constraint on the cost function of data-reusing and affine projection algorithms, a cost function based on the adaptive noise constraint and a Lagrange multiplier is defined. Minimizing the new cost function leads to the adaptive noise constrained (ANC) data-reusing and affine projection algorithms. Experimental results comparing the proposed schemes to standard data-reusing and affine projection algorithms clearly indicate their superior performance.
Keywords: Data-reusing, affine projection algorithm, error constraint, system identification.
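For reference, a basic, unconstrained affine projection algorithm (APA) update, the baseline that the ANC variants extend with the noise constraint, can be sketched as follows; the tap count, projection order, and step size below are illustrative choices.

```python
# Sketch: a basic affine projection algorithm (APA) in NumPy.
# The paper's ANC variants add an adaptive noise constraint on top of this.
import numpy as np

def apa_identify(x, d, M=8, K=4, mu=0.5, delta=1e-3):
    """Identify an M-tap FIR channel from input x and noisy output d."""
    w = np.zeros(M)
    for n in range(M + K - 1, len(x)):
        # U: M x K matrix whose columns are the K most recent regressors.
        U = np.column_stack(
            [x[n - k - M + 1:n - k + 1][::-1] for k in range(K)])
        e = d[n - K + 1:n + 1][::-1] - U.T @ w          # a-priori errors
        w += mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(K), e)
    return w

rng = np.random.default_rng(3)
h = rng.normal(size=8)                            # unknown channel
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
print(np.round(apa_identify(x, d) - h, 3))        # small residual
```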
6837 Balancing Strategies for Parallel Content-Based Data Retrieval Algorithms in a k-tree Structured Database
Authors: Radu Dobrescu, Matei Dobrescu, Daniela Hossu
Abstract:
The paper proposes a unified model for multimedia data retrieval that includes data representatives, content representatives, an index structure, and search algorithms. The multimedia data are defined as k-dimensional signals indexed in a multidimensional k-tree structure. The benefits of using the unified k-tree model were demonstrated by running the data retrieval application on a test bed cluster of six networked nodes. The tests were performed with two retrieval algorithms: one that allows parallel searching on a single feature, and a second that performs a weighted cascade search for multiple-feature querying. The experiments show a significant reduction in retrieval time while maintaining the quality of the results.
Keywords: balancing strategies, multimedia databases, parallel processing, retrieval algorithms
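For the indexing side, similarity retrieval over k-dimensional feature vectors can be sketched with SciPy's k-d tree; this is a generic stand-in, not the paper's k-tree structure or its parallel cascade search.

```python
# Sketch: nearest-neighbour retrieval of k-dimensional feature vectors
# with a k-d tree (generic stand-in for the paper's k-tree and search).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
features = rng.random((1000, 16))      # 1000 items, 16-dim content features

tree = cKDTree(features)
query = rng.random(16)
dists, idx = tree.query(query, k=5)    # 5 most similar items
print(idx, np.round(dists, 3))
```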
6836 Handling Mobility Using Virtual Grid in Static Wireless Sensor Networks
Authors: T.P. Sharma
Abstract:
Querying a data source and routing data towards the sink become a serious challenge in static wireless sensor networks if the sink and/or the data source are mobile. Often the event to be observed either moves or spreads across a wide area, making the maintenance of a continuous path between source and sink a challenge. The sink can also move while a query is being issued or while data are on their way towards it. In this paper, we extend our previously proposed Grid Based Data Dissemination (GBDD) scheme, a virtual grid based topology management scheme that restricts the impact of the movement of sink(s) and event(s) to specific cells of the grid. This obviates the need for frequent path modifications and hence maintains a continuous flow of data while minimizing network energy consumption. Simulation experiments show significant improvements in network energy savings and in the average delay for a packet to reach the sink.
Keywords: Mobility in WSNs, virtual grid, GBDD, clustering.
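The virtual-grid idea, mapping a node's coordinates to a cell so that mobility only affects specific cells, reduces to a simple computation, sketched below; the cell size and the example trajectory are hypothetical, not GBDD's actual parameters.

```python
# Sketch: mapping coordinates to virtual grid cells, the core of a
# GBDD-style scheme (cell size and trajectory are hypothetical).
def cell_of(x: float, y: float, cell_size: float = 50.0) -> tuple:
    return (int(x // cell_size), int(y // cell_size))

sink_path = [(12.0, 40.0), (48.0, 55.0), (130.0, 60.0)]
for pos in sink_path:
    print(pos, "-> cell", cell_of(*pos))
# Only crossings into a new cell, here (0, 0) -> (0, 1) -> (2, 1),
# trigger any path update, not every small movement of the sink.
```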
6835 Experimental Modal Analysis and Model Validation of Antenna Structures
Authors: B.R. Potgieter, G. Venter
Abstract:
Numerical design optimization is a powerful tool that engineers can use during any stage of the design process, and it has many different applications. The specific application discussed in this paper is experimental data matching: data obtained through tests on a physical structure are matched with data from a numerical model of that same structure. The data of interest are the dynamic characteristics of an antenna structure, focusing on the mode shapes and modal frequencies. The structure used was a scaled and simplified model of the Karoo Array Telescope-7 (KAT-7) antenna structure. This kind of data matching is a complex and difficult task. This paper discusses how optimization can assist an engineer during the process of correlating a finite element model with vibration test data.
Keywords: Finite Element Model (FEM), Karoo Array Telescope (KAT-7), modal frequencies, mode shapes, optimization, shape optimization, size optimization, vibration tests
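The modal quantities being matched come from the generalized eigenproblem K·φ = ω²·M·φ; a small sketch with SciPy follows, using a toy two-degree-of-freedom system rather than the KAT-7 finite element model.

```python
# Sketch: modal frequencies and mode shapes from mass/stiffness matrices
# via the generalized eigenproblem K*phi = omega^2 * M * phi
# (toy 2-DOF system, not the KAT-7 finite element model).
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 1.0])              # mass matrix
K = np.array([[600.0, -200.0],
              [-200.0, 200.0]])      # stiffness matrix

w2, modes = eigh(K, M)               # eigenvalues are omega^2
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print("modal frequencies [Hz]:", np.round(freqs_hz, 2))
print("mode shapes (columns):\n", np.round(modes, 3))
```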
6834 Compressed Suffix Arrays to Self-Indexes Based on Partitioned Elias-Fano
Abstract:
A practical and simple self-indexing data structure, the Partitioned Elias-Fano (PEF) Compressed Suffix Array (CSA), is built in linear time for the CSA based on PEF indexes. The PEF-CSA is compared with two classical compressed indexing methods, the Ferragina and Manzini implementation (FMI) and Sad-CSA, on files of different types and sizes from the Pizza & Chili corpus. The PEF-CSA performs better on the existing data in terms of compression ratio and count and locate times, except for evenly distributed data such as protein data. The experiments indicate that the distribution of φ matters more to the compression ratio than the alphabet size: unevenly distributed φ data compress better, and the larger the number of hits, the longer the count and locate times.
Keywords: Compressed suffix array, self-indexing, partitioned Elias-Fano, PEF-CSA.
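The Elias-Fano encoding underlying PEF stores a monotone integer sequence in roughly log2(u/n) + 2 bits per element by splitting each value into explicit low bits and unary-coded high bits. A compact encoding sketch follows; decoding and the rank/select structures that make queries fast are omitted.

```python
# Sketch: plain Elias-Fano encoding of a monotone sequence (the building
# block that PEF partitions; decoding and rank/select structures omitted).
import math

def elias_fano_encode(seq, u):
    """Encode a sorted list of ints in [0, u)."""
    n = len(seq)
    l = max(0, int(math.floor(math.log2(u / n))))   # low-bit width
    low = [x & ((1 << l) - 1) for x in seq]         # l explicit low bits each
    high = [0] * (n + (u >> l) + 1)                 # unary-coded high parts
    for i, x in enumerate(seq):
        high[(x >> l) + i] = 1                      # bit at bucket + rank
    return l, low, high

l, low, high = elias_fano_encode([3, 4, 7, 13, 14, 15, 21, 43], u=64)
print("low-bit width:", l)
print("low parts:", low)
print("high bitvector:", "".join(map(str, high)))
```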
6833 A Decision Matrix for the Evaluation of Triplestores for Use in a Virtual Research Environment
Authors: Tristan O’Neill, Trina Myers, Jarrod Trevathan
Abstract:
The Tropical Data Hub (TDH) is a virtual research environment that provides researchers with an e-research infrastructure to congregate significant tropical data sets for data reuse, integration, searching, and correlation. However, researchers often require data and metadata synthesis across disciplines for cross-domain analyses and knowledge discovery. A triplestore offers a semantic layer to achieve a more intelligent method of search to support the synthesis requirements by automating latent linkages in the data and metadata. Presently, the benchmarks to aid the decision of which triplestore is best suited for use in an application environment like the TDH are limited to performance. This paper describes a new evaluation tool developed to analyze both features and performance. The tool comprises a weighted decision matrix to evaluate the interoperability, functionality, performance, and support availability of a range of integrated and native triplestores to rank them according to requirements of the TDH.
Keywords: Virtual research environment, Semantic Web, performance analysis, tropical data hub.
6832 Real-Time Performance Study of EPA Periodic Data Transmission
Authors: Liu Ning, Zhong Chongquan, Teng Hongfei
Abstract:
EPA (Ethernet for Plant Automation) resolves the non-determinism of standard Ethernet and accomplishes real-time communication by means of a micro-segment topology and a deterministic scheduling mechanism. This paper studies the real-time performance of EPA periodic data transmission from theoretical and experimental perspectives. By analyzing information transmission characteristics and the EPA deterministic scheduling mechanism, five indicators that specify the real-time performance of EPA periodic data transmission are presented and investigated: delivery time, time synchronization accuracy, data-sending time offset accuracy, utilization percentage of the configured timeslice, and non-RTE bandwidth. On this basis, the test principles and test methods for the indicators are studied, and formulas for the real-time performance of an EPA system are derived. Furthermore, an experiment platform is developed to test the indicators of EPA periodic data transmission in a micro-segment. Based on the analysis and the experiment, methods to improve the real-time performance of EPA periodic data transmission are proposed, including optimizing the network structure, studying a self-adaptive timeslice adjustment method, and providing data-sending time offset accuracy for configuration.
Keywords: EPA system, Industrial Ethernet, Periodic data, Real-time performance