Search results for: relational database
574 A Detailed Timber Harvest Simulator Coupled with 3-D Visualization
Authors: Jürgen Roßmann, Gerrit Alves
Abstract:
In today's world, the efficient utilization of wood resources is increasingly on the minds of forest owners. Ensuring an efficient harvest of these resources is a very complex challenge, and it is one of the challenges the project “Virtual Forest II” addresses. Its core is a database of forest data covering approximately 260 million trees located in North Rhine-Westphalia (NRW). Based on this data, tree growth simulations and wood mobilization simulations can be conducted. This paper focuses on the latter. It describes a discrete-event simulation with an attached 3-D real-time visualization which simulates timber harvest using trees from the database with different crop resources. The simulation can be displayed in 3-D to show the progress of the harvest. All the data gathered during the simulation is presented as a detailed summary afterwards. This summary includes cost-benefit calculations and can be compared to those of previous runs to optimize the financial outcome of the timber harvest by exchanging crop resources or modifying their parameters.
Keywords: Timber harvest, simulation, 3-D, optimization.
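The following is a minimal sketch of the discrete-event core such a harvest simulator can be built on: a priority queue of timestamped events processed in time order. The event types, durations and tree identifiers are illustrative assumptions, not details taken from the Virtual Forest II project.

```python
import heapq

def simulate(trees, fell_minutes=4.0, forward_minutes=2.5):
    """Run a toy harvester/forwarder discrete-event simulation."""
    clock, events, log = 0.0, [], []
    for i, tree in enumerate(trees):
        # the harvester fells trees one after another
        heapq.heappush(events, (i * fell_minutes, "fell", tree))
    while events:
        clock, kind, tree = heapq.heappop(events)  # next event in time order
        if kind == "fell":
            # once felled, a forwarder collects the log a little later
            heapq.heappush(events, (clock + forward_minutes, "forward", tree))
        log.append((round(clock, 2), kind, tree))
    return clock, log

total_time, log = simulate(["oak_01", "spruce_02", "pine_03"])
print(f"harvest finished after {total_time} min with {len(log)} events")
```

A real run would replace the fixed durations with machine-specific time and cost models per tree, which is what makes the summary's cost-benefit comparison between runs possible.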
573 An Efficient Graph Query Algorithm Based on Important Vertices and Decision Features
Authors: Xiantong Li, Jianzhong Li
Abstract:
Graphs have become increasingly important in modeling complicated structures and schemaless data such as proteins, chemical compounds, and XML documents. Given a graph query, it is desirable to retrieve matching graphs quickly from a large database via graph-based indices. Different from existing methods, our approach, called VFM (Vertex to Frequent Feature Mapping), uses vertices and decision features as the basic indexing features. VFM constructs two mappings between vertices and frequent features to answer graph queries. The VFM approach not only provides an elegant solution to the graph indexing problem, but also demonstrates how database indexing and query processing can benefit from data mining, especially frequent pattern mining. The results show that the proposed method not only avoids enumerating the subgraphs of the query graph, but also effectively reduces the number of subgraph isomorphism tests between the query graph and the graphs in the candidate answer set during the verification stage.
Keywords: Decision Feature, Frequent Feature, Graph Dataset, Graph Query
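As a rough illustration of the filter-and-verify idea behind such indices, the sketch below prunes the candidate set with a cheap inverted index before any subgraph isomorphism test. Plain vertex labels stand in for the paper's vertex-to-frequent-feature mappings, which are not reproduced here.

```python
from collections import defaultdict

def build_index(graphs):
    """Inverted index: vertex label -> ids of graphs containing it."""
    index = defaultdict(set)
    for gid, labels in graphs.items():
        for label in labels:
            index[label].add(gid)
    return index

def candidates(index, query_labels, universe):
    """Graphs containing every query label; only these need verification."""
    cand = set(universe)
    for label in query_labels:
        cand &= index.get(label, set())
    return cand

# toy dataset: graph id -> set of vertex labels
graphs = {"g1": {"C", "N", "O"}, "g2": {"C", "O"}, "g3": {"C", "N"}}
index = build_index(graphs)
print(candidates(index, {"C", "N"}, graphs))  # {'g1', 'g3'}
```

Only the surviving candidates would be handed to the expensive subgraph isomorphism test, which is exactly the cost the verification stage tries to reduce.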
572 Specification of Attributes of a Multimedia Presentation for Presentation Manager
Authors: Veli Hakkoymaz, Alpaslan Altunköprü
Abstract:
A multimedia presentation system refers to the integration of a multimedia database with a presentation manager that provides content selection, organization and playout of multimedia presentations. It requires high performance from the system components involved. From multimedia information capture to presentation delivery, high-performance tools are required for accessing, manipulating, storing and retrieving the media segments, and for transferring and delivering them to a presentation terminal according to a playout order. The organization of presentations is a complex task in that the display order of presentation contents (in time and space) must be specified. A multimedia presentation contains audio, video, image and text media types. The critical decisions in presentation construction are what the contents are and how they are organized; once the organization of the contents is decided, it must be conveyed to the end user in the correct organizational order and in a timely fashion. This paper introduces a framework for the specification of multimedia presentations and describes the design of sample presentations using this framework from a multimedia database.
Keywords: Multimedia presentation, temporal specification, SMIL, spatial specification.
571 Fuzzy Set Approach to Study Appositives and Its Impact Due to Positional Alterations
Authors: E. Mike Dison, T. Pathinathan
Abstract:
Computing with Words (CWW) and Possibilistic Relational Universal Fuzzy (PRUF) are two concepts widely used to represent and measure vaguely defined natural phenomena. In this paper, we study how positional alterations of phrases affect and/or modify the impact of a natural language proposition. We observe the gradations in the sensitivity/feeling of a statement caused by positional alterations. We derive the classification and modification of word meanings due to positional alteration. We present the results with reference to set-theoretic interpretations.
Keywords: Appositive, computing with words, PRUF, semantic sentiment analysis, set theoretic interpretations.
570 Face Image Coding Using Face Prototyping
Authors: Jaroslav Polec, Lenka Krulikovská, Natália Helešová, Tomáš Hirner
Abstract:
In this paper we present a novel approach to face image coding. The proposed method makes use of video encoder features such as motion prediction. First, the encoder selects an appropriate prototype from the database and warps it according to the features of the face being encoded. The warped prototype is placed as the first frame (an I frame); the face being encoded is placed as the second frame (a P frame). Information about feature positions, color change, the selected prototype and the data flow of the P frame is sent to the decoder. The condition is that both encoder and decoder hold the same database of prototypes. We ran an experiment with the H.264 video encoder and compared the results to those achieved by JPEG and JPEG2000. The results show that our approach achieves a three times lower bitrate and a two times higher PSNR than JPEG. Compared with JPEG2000, the bitrate was very similar, but the subjective quality achieved by the proposed method is better.
Keywords: Triangulation, H.264, Model-based coding, Average face
569 Methods for Data Selection in Medical Databases: The Binary Logistic Regression - Relations with the Calculated Risks
Authors: Cristina G. Dascalu, Elena Mihaela Carausu, Daniela Manuc
Abstract:
Medical studies often require different methods for parameter selection as a second step of processing, after the database's design and filling with information. One common task is the selection of fields that act as risk factors using well-known methods, in order to find the most relevant risk factors and to establish a possible hierarchy between them. Different methods are available for this purpose, one of the best known being binary logistic regression. We present the mathematical principles of this method and a practical example of using it to analyze the influence of 10 different psychiatric diagnoses on 4 different types of offences (in a database of 289 psychiatric patients involved in different types of offences). Finally, we make some observations about the relation between the risk factor hierarchy established through binary logistic regression and the individual risks, as well as the results of the Chi-squared test. We show that the hierarchy built using binary logistic regression does not agree with the direct order of the individual risks, even though it would be natural to assume this hypothesis is always true.
Keywords: Databases, risk factors, binary logistic regression, hierarchy.
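A sketch of the described selection step on synthetic data is shown below, assuming binary diagnostic indicators and a binary offence outcome; statsmodels and scipy are stand-ins for whatever statistics package the authors used, and the factor count is reduced from 10 to 3 for brevity.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(289, 3))            # three diagnostic indicators
logit = 1.5 * X[:, 0] + 0.3 * X[:, 1] - 1.0      # factor 0 dominates by design
y = (rng.random(289) < 1 / (1 + np.exp(-logit))).astype(int)

fit = sm.Logit(y, sm.add_constant(X.astype(float))).fit(disp=0)
odds_ratios = np.exp(fit.params[1:])             # skip the intercept

for j, odds in enumerate(odds_ratios):
    # 2x2 contingency table of factor j against the outcome
    table = [[np.sum((X[:, j] == a) & (y == b)) for b in (0, 1)] for a in (0, 1)]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"factor {j}: odds ratio {odds:.2f}, chi-squared p {p:.3f}")
```

Ranking the factors by odds ratio and comparing with the per-factor chi-squared results reproduces, in miniature, the comparison the abstract draws between the regression hierarchy and the individual risks.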
568 Computational Analysis of Potential Inhibitors Selected Based On Structural Similarity for the Src SH2 Domain
Authors: W. P. Hu, J. V. Kumar, Jeffrey J. P. Tsai
Abstract:
The inhibition of SH2 domain regulated protein-protein interactions is an attractive target for developing an effective chemotherapeutic approach to the treatment of disease. Molecular simulation is a useful tool for developing new drugs and for studying molecular recognition. In this study, we searched for potential drug compounds to inhibit the SH2 domain by performing a structural similarity search in the PubChem Compound Database. A total of 37 compounds were screened from the database, and we then used the LibDock docking program to evaluate their inhibition effect. The best three compounds (AP22408, CID 71463546 and CID 9917321) were chosen for MD simulations after the LibDock docking. Our results show that the compound CID 9917321 produces a more stable protein-ligand complex than the two currently known inhibitors of the Src SH2 domain. The compound CID 9917321 may therefore be useful for inhibiting the SH2 domain, based on these computational results. Subsequent experiments are needed to verify the effect of compound CID 9917321 on the SH2 domain in future studies.
Keywords: Nonpeptide inhibitor, Src SH2 domain, LibDock, molecular dynamics simulation.
567 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model
Authors: Yepeng Cheng, Yasuhiko Morimoto
Abstract:
Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers as an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an indicator based on the ID-POS database for detecting the loyalty of a store's customers. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a given supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of a single supermarket chain. Three primary problems arose during modeling: the incomparability of customer values, multicollinearity between the customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are used to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model that combines all three methods is the best one for evaluating the influence of other nearby supermarkets on the customers of a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
Keywords: Customer value, Huff's Gravity Model, POS, retailer.
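For reference, a minimal sketch of Huff's gravity model itself: the probability that a customer visits store j grows with the store's attractiveness and decays with distance. Floor area as the attractiveness measure and the parameter values are illustrative assumptions.

```python
import numpy as np

def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    """P(visit store j) = (S_j^alpha / d_j^beta) / sum over all stores."""
    utility = attractiveness ** alpha / distances ** beta
    return utility / utility.sum()

area_m2 = np.array([2500.0, 1800.0, 3200.0])  # three competing supermarkets
dist_km = np.array([0.8, 1.5, 2.4])           # from one customer's home
for j, p in enumerate(huff_probabilities(area_m2, dist_km)):
    print(f"store {j}: visit probability {p:.2f}")
```

Feeding such probabilities into the regression models is one way competitor influence can enter without any competitor ID-POS data, since the model needs only store locations and sizes.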
566 Sparse Coding Based Classification of Electrocardiography Signals Using Data-Driven Complete Dictionary Learning
Authors: Fuad Noman, Sh-Hussain Salleh, Chee-Ming Ting, Hadri Hussain, Syed Rasul
Abstract:
In this paper, a data-driven dictionary approach is proposed for the automatic detection and classification of cardiovascular abnormalities. The electrocardiography (ECG) signal is represented by trained complete dictionaries that contain prototypes, or atoms, avoiding the limitations of pre-defined dictionaries. The data-driven trained dictionaries simply take the ECG signal as input, rather than extracted features, in order to study the set of parameters that yield the most descriptive dictionary. The approach inherently learns the complicated morphological changes in the ECG waveform, which are then used to improve the classification. The classification performance was evaluated with ECG data under two different preprocessing environments. In the first, the QT database is baseline-drift corrected and a notch filter removes the 60 Hz power-line noise. In the second, the data are further filtered using a fast moving-average smoother. The experimental results on the QT database confirm that the proposed algorithm achieves a classification accuracy of 92%.
Keywords: Electrocardiogram, dictionary learning, sparse coding, classification.
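A sketch of one common sparse-coding classification scheme of this kind is shown below: learn a dictionary per class from raw signal windows, then assign a test beat to the class whose dictionary reconstructs it with the smallest residual. Synthetic waveforms stand in for the QT database, and scikit-learn's DictionaryLearning for the authors' training procedure.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 64)
# toy stand-ins for two beat morphologies, 40 noisy examples each
normal = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal((40, 64))
abnormal = np.sign(np.sin(2 * np.pi * 3 * t)) + 0.1 * rng.standard_normal((40, 64))

dicts = {}
for label, X in {"normal": normal, "abnormal": abnormal}.items():
    dl = DictionaryLearning(n_components=16, transform_algorithm="omp",
                            transform_n_nonzero_coefs=5, random_state=0)
    dicts[label] = dl.fit(X)

test = np.sign(np.sin(2 * np.pi * 3 * t))[None, :]   # an "abnormal" beat
errors = {label: np.linalg.norm(test - dl.transform(test) @ dl.components_)
          for label, dl in dicts.items()}
print(min(errors, key=errors.get))                   # expected: abnormal
```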
565 Computer Proven Correctness of the Rabin Public-Key Scheme
Authors: Johannes Buchmann, Markus Kaiser
Abstract:
We describe a formal specification and verification of the Rabin public-key scheme in the formal proof system Isabelle/HOL. The idea is to combine the two views of cryptographic verification: the computational approach, relying on the vocabulary of probability theory and complexity theory, and the formal approach, based on ideas and techniques from logic and programming languages. The analysis presented uses a given database to prove formal properties of our implemented functions with computer support. The main task in designing a practical formalization of correctness as well as security properties is to cope with the complexity of cryptographic proving. We reduce this complexity by exploring a light-weight formalization that enables both appropriate formal definitions and efficient formal proofs. This yields the first computer-proved implementation of the Rabin public-key scheme in Isabelle/HOL. Consequently, we get reliable proofs with a minimal error rate, augmenting the used database. This provides a formal basis for more computer proof constructions in this area.
Keywords: public-key encryption, Rabin public-key scheme, formal proof system, higher-order logic, formal verification.
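For orientation, this is what the scheme being verified computes; the sketch below is executable Python for the Rabin operations themselves, not for the Isabelle/HOL formalization. Toy primes are used; real keys are of course far larger.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

p, q = 1019, 1031        # toy primes, both congruent to 3 mod 4
n = p * q

def encrypt(m):
    return pow(m, 2, n)  # ciphertext is the square of the message mod n

def decrypt(c):
    mp = pow(c, (p + 1) // 4, p)   # square root of c mod p
    mq = pow(c, (q + 1) // 4, q)   # square root of c mod q
    _, yp, yq = extended_gcd(p, q)
    # CRT-combine every sign choice: decryption yields four candidate roots
    return {(sp * yq * q + sq * yp * p) % n
            for sp in (mp, p - mp) for sq in (mq, q - mq)}

message = 424242
assert message in decrypt(encrypt(message))
print(sorted(decrypt(encrypt(message))))
```

The four-root ambiguity of decryption is one of the properties a correctness formalization has to state precisely, which hints at why machine-checked proofs of even a simple scheme take care.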
564 Relational Framework and its Applications
Authors: Lidia Obojska
Abstract:
This paper has, as its point of departure, the foundational axiomatic theory of E. De Giorgi (1996, Scuola Normale Superiore di Pisa, Preprints di Matematica 26, 1), based on the two primitive notions of quality and relation. With the introduction of a unary relation, we develop a system based entirely on the sole primitive notion of relation. This modification enables a definition of the concept of a dynamic unary relation. In this way we construct a simple language capable of expressing other well-known theories, such as Robinson's arithmetic or a fragment of a theory of concatenation. A key role in this system is played by an abstract relation designated by “( )”, which can be interpreted in different ways; in this paper we focus on the case in which we can perform computations and obtain results.
Keywords: language, unary relations, arithmetic, computability
563 Community Participation for Sustainable Development Tourism in Bang Noi Floating Market, Bangkonti District, Samutsongkhram Province
Authors: Bua Srikos, Phusit Phukamchanoad, Suwaree Yordchim
Abstract:
The purpose of this study is to examine suitable models and characteristics of community participation for the sustainable development of the Bang Noi Floating Market, Bangkonti District, Samutsongkhram Province. A total of 342 survey questionnaires were administered to potential respondents, the researchers interviewed the leader of the community, and Appreciation Influence Control (AIC) sessions were used to talk with 20 villagers. The findings revealed that, overall, most people participated at a middle level in developing a sustainable Bang Noi Floating Market, particularly in gaining benefits from developing its atmosphere and scenery for tourism; for example, the landscape is beautiful and served by public utilities. Participation in preserving and developing the market remains rooted in the traditional way of life. Basic personal factors such as age, level of education, career, and monthly income affect participation. Most participants are original residents who own houses and shops in and around the market; they share in the benefits and have the power to shape the floating market strategy, playing the major role in setting up the information database. It was also found that the leader and the villagers play an important role in establishing five layers of physical data: village position, village territory, roads, rivers, and premises. Cultural information consists of two layers: points of interest and itineraries. The information is produced through presentation and practice by the leader and villagers in the community, and all phases are reviewed and verified jointly by the leader and villagers as part of the participation process.
Keywords: Community Participation, Sustainable Development, Encouragement Tourism.
562 Semantic Spatial Objects Data Structure for Spatial Access Method
Authors: Kalum Priyanath Udagepola, Zuo Decheng, Wu Zhibo, Yang Xiaozong
Abstract:
Modern spatial database management systems require a unique Spatial Access Method (SAM) in order to solve complex spatial queries efficiently, and the spatial data structure takes a prominent place in the SAM. An inadequate data structure leads to poor algorithmic choices and a deficient understanding of algorithm behavior on the spatial database. A key step in developing a better semantic spatial object data structure is to quantify the performance effects of the semantic and outlier detections that are not reflected in previous tree structures (the R-Tree and its variants). This paper explores a novel SSRO-Tree, a SAM following the topo-semantic approach. The paper shows how to identify and handle semantic spatial objects together with outlier objects during page overflow/underflow, using gain/loss metrics. We introduce a new SSRO-Tree algorithm which achieves better performance in practice than the R*-Tree and RO-Tree algorithms on selection queries.
Keywords: Outlier, semantic spatial object, spatial objects, SSRO-Tree, topo-semantic.
561 A New Model for Discovering XML Association Rules from XML Documents
Authors: R. AliMohammadzadeh, M. Rahgozar, A. Zarnani
Abstract:
The inherent flexibility of XML in both structure and semantics makes mining XML data a complex task, with more challenges than traditional association rule mining in relational databases. In this paper, we propose a new model for the effective extraction of generalized association rules from an XML document collection. We use frequent subtree mining techniques directly in the discovery process and do not ignore the tree structure of the data in the final rules. The frequent subtrees, found with respect to the user-provided support, are split into complement subtrees to form the rules. We explain our model in multiple steps, from data preparation to rule generation.
Keywords: XML, Data Mining, Association Rule Mining.
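The rule-generation step the abstract describes, splitting a frequent structure into an antecedent and its complement, can be sketched as follows. Frozen sets of root-to-node paths stand in for real subtrees, which is a deliberate simplification: the paper keeps full tree structure in the rules.

```python
from itertools import combinations

# frequent "subtrees" (as path sets) with their supports, all above threshold
support = {
    frozenset({"book/title"}): 0.9,
    frozenset({"book/author"}): 0.8,
    frozenset({"book/title", "book/author"}): 0.7,
}

def rules(support, min_conf=0.6):
    out = []
    for itemset, sup in support.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(itemset, r)):
                conf = sup / support[antecedent]   # antecedent is frequent too
                if conf >= min_conf:
                    out.append((set(antecedent), set(itemset - antecedent), conf))
    return out

for lhs, rhs, conf in rules(support):
    print(lhs, "=>", rhs, f"confidence={conf:.2f}")
```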
560 Fingerprint Compression Using Multiwavelets
Authors: R. Sudhakar, S. Jayaraman
Abstract:
Large volumes of fingerprints are collected and stored every day in a wide range of applications, including forensics and access control. This is evident from the database of the Federal Bureau of Investigation (FBI), which contains more than 70 million fingerprints. Compression of such databases is very important because of this high volume. The performance of existing image coding standards generally degrades at low bit-rates because of the underlying block-based Discrete Cosine Transform (DCT) scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, scalar wavelets do not possess all the properties needed for better compression performance. A new class of wavelets called multiwavelets, which possess more than one scaling filter, overcomes this problem. The objective of this paper is to develop an efficient compression scheme that obtains better quality and a higher compression ratio through the multiwavelet transform and embedded coding of multiwavelet coefficients with the Set Partitioning In Hierarchical Trees (SPIHT) algorithm. A comparison of the best known multiwavelets is made to the best known scalar wavelets. Both quantitative and qualitative measures of performance are examined for fingerprints.
Keywords: Multiwavelet, Modified SPIHT Algorithm, SPIHT, Wavelet.
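As a rough sketch of transform-based fingerprint compression, the code below uses a scalar wavelet from PyWavelets and plain coefficient thresholding. This is explicitly a stand-in: the paper's point is that multiwavelets (more than one scaling filter) and SPIHT embedded coding outperform this scalar baseline, and neither is reproduced here.

```python
import numpy as np
import pywt

image = np.random.default_rng(2).random((128, 128))  # stand-in fingerprint

coeffs = pywt.wavedec2(image, "db4", level=3)        # scalar wavelet transform
arr, slices = pywt.coeffs_to_array(coeffs)
threshold = np.quantile(np.abs(arr), 0.95)
arr[np.abs(arr) < threshold] = 0.0                   # keep largest 5% of coeffs

restored = pywt.waverec2(
    pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "db4")
mse = np.mean((image - restored[:128, :128]) ** 2)
print(f"kept 5% of coefficients, MSE = {mse:.5f}")
```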
559 WebAppShield: An Approach Exploiting Machine Learning to Detect SQLi Attacks in an Application Layer in Run-Time
Authors: Ahmed Abdulla Ashlam, Atta Badii, Frederic Stahl
Abstract:
In recent years, SQL injection attacks have been identified as being prevalent against web applications. They affect network security and user data, which leads to a considerable loss of money and data every year. This paper presents the use of machine learning classification algorithms to classify login inputs as "SQLi" or "Non-SQLi", thus increasing the reliability and accuracy of deciding whether an operation is an attack or a valid operation. A web application method is developed for auto-generated data replication to provide a twin of the targeted data structure. A shield against SQLi attacks (WebAppShield) has been developed that verifies all users and lets only operations the machine learning module predicts as "Non-SQLi" enter or access the database, keeping attackers out. A special login form has been developed with a special instance of data validation; this verification process secures the web application from its early stages. The system has been tested and validated, and up to 99% of SQLi attacks have been prevented.
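A sketch of the classification step is given below: label login inputs as "SQLi" or "Non-SQLi" using character n-gram features, which catch the quotes, comments and operators typical of injections. The tiny inline training set is invented; WebAppShield trains on auto-replicated twins of the target data structure.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = ["alice", "bob42", "' OR '1'='1", "admin'--",
         "carol_smith", "1; DROP TABLE users", "dave", "\" OR 1=1 --"]
labels = ["Non-SQLi", "Non-SQLi", "SQLi", "SQLi",
          "Non-SQLi", "SQLi", "Non-SQLi", "SQLi"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),  # chars catch ' = --
    LogisticRegression(max_iter=1000),
)
clf.fit(train, labels)
print(clf.predict(["eve", "' UNION SELECT password FROM users --"]))
```

Only inputs predicted "Non-SQLi" would be allowed through to the database, mirroring the shielding behavior the abstract describes.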
Keywords: SQL injection, attacks, web application, accuracy, database, WebAppShield.
558 Enhance Security in XML Databases: XLog File for Severity-Aware Trust-Based Access Control
Authors: Asmawi A., Affendey L. S., Udzir N. I., Mahmod R.
Abstract:
The topic of enhancing security in XML databases is important, as it includes protecting sensitive data and providing a secure environment for users. In order to improve security and provide dynamic access control for XML databases, we present the XLog file, which is used to calculate user trust values by recording users' bad transactions, errors and query severities. Severity-aware trust-based access control for XML databases manages the access policy depending on users' trust values and prevents unauthorized processes, malicious transactions and insider threats. Privileges are automatically modified and adjusted over time depending on user behaviour and query severity. Logging in databases is an important process used for recovery and security purposes. In this paper, the XLog file is presented as a dynamic and temporary log file for XML databases to enhance the level of security.
Keywords: XML database, trust-based access control, severity-aware, trust values, log file.
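A hedged sketch of the severity-aware bookkeeping could look like the following: each logged violation lowers a user's trust value in proportion to its severity, and privileges follow trust thresholds. The penalty weights and thresholds are invented for illustration; the paper derives trust values from the XLog file's actual contents.

```python
from dataclasses import dataclass, field

# illustrative severity weights, not taken from the paper
SEVERITY_PENALTY = {"error": 0.02, "bad_transaction": 0.10, "malicious_query": 0.30}

@dataclass
class UserTrust:
    trust: float = 1.0
    log: list = field(default_factory=list)   # this user's slice of the XLog

    def record(self, event: str):
        self.log.append(event)
        self.trust = max(0.0, self.trust - SEVERITY_PENALTY[event])

    def privilege(self) -> str:
        if self.trust >= 0.8:
            return "read-write"
        return "read-only" if self.trust >= 0.4 else "blocked"

user = UserTrust()
for event in ["error", "bad_transaction", "malicious_query", "malicious_query"]:
    user.record(event)
    print(f"{event:16s} -> trust {user.trust:.2f}, privilege {user.privilege()}")
```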
557 2D Spherical Spaces for Face Relighting under Harsh Illumination
Authors: Amr Almaddah, Sadi Vural, Yasushi Mae, Kenichi Ohara, Tatsuo Arai
Abstract:
In this paper, we propose a robust face relighting technique using spherical space properties. The proposed method aims to reduce illumination effects in face recognition. Given a single 2D face image, we relight the face object by extracting the nine spherical harmonic bases and the face's spherical illumination coefficients. First, an internal training illumination database is generated by computing face albedos and face normals from 2D images under different lighting conditions. Based on the generated database, we analyze the target face pixels and compare them with the training bootstrap using pre-generated tiles. In this work, practical real-time processing speed and small image size were considered when designing the framework. In contrast to other works, our technique requires no 3D face models for the training process and takes a single 2D image as input. Experimental results on publicly available databases show that the proposed technique works well under severe lighting conditions, with significant improvements in face recognition rates.
Keywords: Face synthesis and recognition, Face illumination recovery, 2D spherical spaces, Vision for graphics.
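The nine-dimensional spherical harmonic lighting model the abstract refers to can be sketched as follows: build the first nine real SH basis functions from per-pixel surface normals and solve a least-squares system for the illumination coefficients. Random normals and albedos are placeholders for the trained database.

```python
import numpy as np

def sh_basis(normals):
    """First nine real spherical harmonic bases at unit normals (N, 3)."""
    x, y, z = normals.T
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z ** 2 - 1),
        1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2),
    ], axis=1)

rng = np.random.default_rng(3)
normals = rng.standard_normal((5000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = rng.random(5000)
true_coeffs = rng.standard_normal(9)
pixels = albedo * (sh_basis(normals) @ true_coeffs)   # rendered intensities

B = albedo[:, None] * sh_basis(normals)
est, *_ = np.linalg.lstsq(B, pixels, rcond=None)      # recover the lighting
print(np.allclose(est, true_coeffs))                  # True in this noise-free toy
```

Relighting then amounts to swapping the recovered coefficients for those of a target illumination, which is why no 3D model is needed at run time.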
556 Application of Geo-Informatic Technology in Studying of Land Tenure and Land Use for Cultivation of Cash Crops by Local Communities in the Local Administration Organizations of Phailuang and Maepoon in Lublae District, Uttaradit Province
Authors: Kunchit Pirapake
Abstract:
Geo-informatic technology was applied to land tenure and land use in the cash crop area in order to create sustainable land use, access to the area, and a sustainable food supply for the people of the community. The research objectives are to 1) apply geo-informatic technology to land ownership and agricultural land use (cash crops) in the research area, 2) create a GIS database on land ownership and land use, and 3) create a database for an online geo-information system on land tenure and land use. The results of this study reveal, first, that the study area consists of steep slopes, mountains and valleys. The land lies mainly in the forest zone covered by the Forest Act 1941 and the National Conserved Forest 1964. Residents gained the rights to exploit the land passed down from their ancestors, a practice recognized by the communities. The land is suitable for cultivating a wide variety of economic crops that provide the main income of the families, and at present the local residents keep expanding the land used to grow cash crops. Second, the geographic information system database created consists of the area boundaries, announcements from the Interior Ministry, interpretations of satellite images, transportation routes, waterways, and plots of land with title deeds available at the provincial land office; most pieces of land without a title deed are located in the forest and national reserve areas. Data were gathered from a field study, with land zones determined by GPS. Last, an online geo-informatic system can show the land tenure and land use information for each economic crop, with high-resolution satellite data that can be updated and checked on the system simultaneously.
Keywords: Geo-Informatic Technology, Land Tenure, Online Geo-Informatic System, Land Use of cash crops.
555 Correlational Analysis between Brain Dominances and Multiple Intelligences
Authors: Lakshmi Dhandabani, Rajeev Sukumaran
Abstract:
The aim of this research study is to investigate and establish the characteristics of brain dominances (BD) and multiple intelligences (MI). The experiment was conducted on a sample of 552 undergraduate computer-engineering students. In addition, a mathematical formulation was established to exhibit the relation between thinking and intelligence, and its correlation was analyzed. Correlation was measured statistically using Pearson's coefficient. Analysis of the results shows that there is a strong relation between thinking and intelligence. This research is carried out to improve didactic methods in engineering education and to improve e-learning strategies.
Keywords: Thinking style assessment, correlational analysis, mathematical model, data analysis, dynamic equilibrium.
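A minimal sketch of the correlational step, with synthetic stand-ins for the 552-student sample and a single dominance/intelligence score pair:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
dominance = rng.normal(50, 10, 552)                     # a brain dominance score
intelligence = 0.6 * dominance + rng.normal(0, 8, 552)  # correlated by construction
r, p = pearsonr(dominance, intelligence)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")              # strong positive relation
```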
554 A Structural Support Vector Machine Approach for Biometric Recognition
Authors: Vishal Awasthi, Atul Kumar Agnihotri
Abstract:
The face is a strong, non-intrusive biometric for distinguishing original faces from dummy faces produced by artificial means. Face recognition is extremely important in the contexts of computer vision, psychology, surveillance, pattern recognition, neural networks and content-based video processing. The availability of a widespread face database is crucial to testing the performance of face recognition algorithms. The openly available face databases include face images with a wide range of poses, illuminations, gestures and face occlusions, but there is no dummy face database accessible in the public domain. This paper presents a face detection algorithm based on image segmentation in terms of distance from a fixed point and on template matching methods. The proposed work uses an appropriate number of nodal points, yielding good outcomes in face recognition and detection. The time taken to identify and extract distinctive facial features is improved to the range of 90 to 110 s, with a 3% increase in efficiency.
Keywords: Face recognition, Principal Component Analysis, PCA, Linear Discriminant Analysis, LDA, Improved Support Vector Machine, iSVM, elastic bunch mapping technique.
553 Physical Evaluation of Selected Malaysian National Rugby Players
Authors: LC Chong, A Yaacob, MH Rosli, Y Adam, A Yusuf, MS Omar-Fauzee, N Sutresna, Berliana, HH Pramono, M Nazrul-Hakim
Abstract:
Currently, there are no database or local norms for the physical performance of Malaysian rugby players. Such norms are vital for Malaysia's sports development, as programs can be set up to improve the current status. This pilot study was conducted to evaluate the status of our semi-professional rugby players. The rugby players were randomly selected from the Malaysian national team and several clubs in the Klang Valley, Kuala Lumpur, Malaysia. 54 male rugby players (age: 24.41 ± 4.06 years) were selected for this pilot study. Height, body weight, percentage body fat and body mass index (BMI) were measured, and several other physical tests were performed. Results from the BLEEP test revealed an average of level 9, shuttle 2 for the players. Interestingly, forwards were taller, heavier, and had lower maximal aerobic power than backs in the same team. In conclusion, the physical characteristics of the rugby players were much lower when compared to international players from other countries. From this pilot study, the physical performance of the Malaysian team must be improved in order to further develop the sport.
Keywords: Rugby, Malaysia, Fitness, Collision sports
552 Implementing an Intuitive Reasoner with a Large Weather Database
Authors: Yung-Chien Sun, O. Grant Clark
Abstract:
In this paper, the implementation of a rule-based intuitive reasoner is presented. The implementation included two parts: the rule induction module and the intuitive reasoner. A large weather database was acquired as the data source, and twelve weather variables from those data were chosen as the “target variables” whose values were predicted by the intuitive reasoner. A “complex” situation was simulated by making only subsets of the data available to the rule induction module. As a result, the rules induced were based on incomplete information with variable levels of certainty. The certainty level was modeled by a metric called “Strength of Belief”, which was assigned to each rule or datum as ancillary information about the confidence in its accuracy. Two techniques were employed to induce rules from the data subsets: decision trees and multi-polynomial regression, for the discrete and the continuous target variables respectively. The intuitive reasoner was tested for its ability to use the induced rules to predict the classes of the discrete target variables and the values of the continuous target variables. The intuitive reasoner implemented two types of reasoning: fast and broad, where, by analogy to human thought, the former corresponds to fast decision making and the latter to deeper contemplation. For reference, a weather data analysis approach which had been applied to similar tasks was adopted to analyze the complete database and create predictive models for the same 12 target variables. The values predicted by the intuitive reasoner and the reference approach were compared with actual data. The intuitive reasoner reached near-100% accuracy for two continuous target variables, and for the discrete target variables it predicted at least 70% as accurately as the reference reasoner. Since the intuitive reasoner operated on rules derived from only about 10% of the total data, it demonstrates potential advantages in dealing with sparse data sets as compared with conventional methods.
Keywords: Artificial intelligence, intuition, knowledge acquisition, limited certainty.
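A hedged sketch of the rule-induction side for a discrete target is shown below: induce a decision tree from only about 10% of the data and attach a confidence to each prediction. Using the leaf's class purity as the Strength of Belief is our assumption; the paper treats Strength of Belief as ancillary certainty information without prescribing this formula.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
X_full = rng.random((5000, 4))                        # stand-in weather variables
y_full = (X_full[:, 0] + 0.2 * rng.random(5000) > 0.6).astype(int)

subset = rng.choice(5000, size=500, replace=False)    # only ~10% is visible
tree = DecisionTreeClassifier(max_depth=3).fit(X_full[subset], y_full[subset])

for p in tree.predict_proba(X_full[:5]):              # leaf class frequencies
    cls, sob = int(np.argmax(p)), float(np.max(p))
    print(f"predicted class {cls} with strength of belief {sob:.2f}")
```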
551 Centralized Resource Management for Network Infrastructure Including IP Telephony by Integrating a Mediator Between the Heterogeneous Data Sources
Authors: Mohammed Fethi Khalfi, Malika Kandouci
Abstract:
Over the past decade, mobile technology has experienced a revolution that will ultimately change the way we communicate. All these technologies have a common denominator, the exploitation of computer information systems, but their operation can be tedious because of problems with heterogeneous data sources. To overcome these problems, we propose adding an extra layer that interfaces management or supervision applications with the different data sources. This layer is materialized by the implementation of a mediator between the different host applications and the information systems, which are frequently organized in hierarchical and relational manners, such that the heterogeneity is completely transparent to the VoIP platform.
Keywords: TOIP, Data Integration, Mediation, information computer system, heterogeneous data sources
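The mediator idea can be sketched as one interface that answers a common query by delegating to each heterogeneous source, so callers never see the heterogeneity. The sources here, one relational table and one hierarchical dictionary, and all names are invented for illustration.

```python
import sqlite3

class RelationalSource:
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE phones (ext TEXT, user TEXT)")
        self.db.execute("INSERT INTO phones VALUES ('1001', 'alice')")

    def find_user(self, ext):
        row = self.db.execute(
            "SELECT user FROM phones WHERE ext = ?", (ext,)).fetchone()
        return row[0] if row else None

class HierarchicalSource:
    tree = {"site-a": {"1002": "bob"}, "site-b": {"1003": "carol"}}

    def find_user(self, ext):
        for site in self.tree.values():
            if ext in site:
                return site[ext]
        return None

class Mediator:
    """One call, many back-ends: heterogeneity is hidden from the caller."""
    def __init__(self, *sources):
        self.sources = sources

    def find_user(self, ext):
        for source in self.sources:
            user = source.find_user(ext)
            if user is not None:
                return user
        return None

mediator = Mediator(RelationalSource(), HierarchicalSource())
print(mediator.find_user("1001"), mediator.find_user("1003"))  # alice carol
```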
550 The Implementation of the Multi-Agent Classification System (MACS) in Compliance with FIPA Specifications
Authors: Mohamed R. Mhereeg
Abstract:
The paper discusses the implementation of the Multi-Agent Classification System (MACS) and its use to provide an automated and accurate classification of end users developing applications in the spreadsheet domain. Different technologies have been brought together to build MACS. The strength of the system is the integration of agent technology and the FIPA specifications with other technologies: .NET Windows-service-based agents, Windows Communication Foundation (WCF) services, the Service Oriented Architecture (SOA), and Oracle Data Mining (ODM). Microsoft's .NET Windows-service-based agents were used to develop the monitoring agents of MACS, while the .NET WCF services together with the SOA approach allowed the distribution of, and communication between, agents over the WWW. The Monitoring Agents (MAs) were configured to execute automatically to monitor Excel spreadsheet development activities by content. Data gathered by the Monitoring Agents from various resources over a period of time were collected and filtered by a Database Updater Agent (DUA) residing in the .NET client application of the system. This agent then transfers and stores the data in an Oracle server database via Oracle stored procedures for further processing, leading to the classification of the end-user developers.
Keywords: MACS, Implementation, Multi-Agent, SOA, Autonomous, WCF.
549 Dimensional Modeling of HIV Data Using Open Source
Authors: Charles D. Otine, Samuel B. Kucel, Lena Trojer
Abstract:
The choice of data modeling technique for an information system is determined by the objective of the resultant data model. Dimensional modeling is the preferred technique for data destined for data warehouses and data mining, presenting data models that ease analysis and queries, in contrast with entity relationship modeling. The establishment of data warehouses as components of information system landscapes in many organizations has subsequently led to the development of dimensional modeling. This has been significantly more developed and reported for commercial database management systems than for open-source ones, thereby making it less affordable for those in resource-constrained settings. This paper presents dimensional modeling of HIV patient information using open-source modeling tools. It takes advantage of the fact that the regions most affected by HIV (sub-Saharan Africa) are also heavily resource-constrained, while holding large quantities of HIV data. Two HIV data source systems were studied to identify appropriate dimensions and facts; these were then modeled using two open-source dimensional modeling tools. Use of open source would reduce the software costs of dimensional modeling and in turn make data warehousing and data mining more feasible, even for those in resource-constrained settings with data available.
Keywords: Database, Data Mining, Data Warehouse, Dimensional Modeling, Open Source.
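A minimal star schema in an open-source engine might look like the sketch below: one fact table of clinical visits surrounded by dimension tables, queried by slicing the fact along dimensions. Table and column names are invented for illustration, not taken from the two HIV source systems the paper studied.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_patient  (patient_key INTEGER PRIMARY KEY, sex TEXT, birth_year INTEGER);
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_facility (facility_key INTEGER PRIMARY KEY, district TEXT);
CREATE TABLE fact_visit (                      -- grain: one clinic visit
    patient_key  INTEGER REFERENCES dim_patient,
    date_key     INTEGER REFERENCES dim_date,
    facility_key INTEGER REFERENCES dim_facility,
    cd4_count    INTEGER                       -- the measured fact
);
INSERT INTO dim_patient  VALUES (1, 'F', 1985);
INSERT INTO dim_date     VALUES (1, 2010, 3);
INSERT INTO dim_facility VALUES (1, 'District A');
INSERT INTO fact_visit   VALUES (1, 1, 1, 420);
""")

# typical analytical query: slice the fact by two dimensions at once
row = db.execute("""
    SELECT d.year, f.district, AVG(v.cd4_count)
    FROM fact_visit v
    JOIN dim_date d USING (date_key)
    JOIN dim_facility f USING (facility_key)
    GROUP BY d.year, f.district
""").fetchone()
print(row)  # (2010, 'District A', 420.0)
```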
548 Ergonomics and Its Applicability in the Design Process in Egypt: Challenges and Prospects
Authors: Mohamed Moheyeldin Mahmoud
Abstract:
Egypt suffers from a severe shortage of data and charts concerning physical dimensions, measurements, qualities and consumer behavior. The shortage of needed information and appropriate methods has forced Egyptian designers to fall back on foreign standards when designing products for the Egyptian consumer, which has led to many problems. The research problem stated here is the urgent need for a database of the physical specifications and measurements of Egyptian consumers, as well as the need to support the ergonomics courses given in many colleges and institutes with the latest technologies. The methodology used by the researcher is a descriptive analytical method relying on compiling, comparing and analyzing information and facts in order to arrive at acceptable perceptions, ideas and considerations. The research concludes that: 1. A good interaction relationship between users and products shows the success of that product. 2. Integration between the most prominent fields of science, especially ergonomics, interaction design and ethnography, should be encouraged to provide an up-to-date database on the nature, specifications and environment of the Egyptian consumer, in order to achieve a higher benefit for both user and product. 3. The Chinese economic policy of studying market requirements long before any market activities should be emulated. 4. Using ethnography supports design activities when creating new products or updating existing ones, by measuring the compatibility of products with their environment and user expectations. The researcher also recommends joint cooperation between military colleges and sports education institutes on one side and design institutes on the other, to provide an annually updated database of specifications of the students of both sexes applying to those institutes (height, weight, etc.), giving the industrial designer the information needed when creating a new product or updating an existing one for that category.
Keywords: Adapt ergonomics, ethnography, interaction design.
547 Wireless Sensor Network to Help Low-Income Farmers to Face Drought Impacts
Authors: Fantazi Walid, Ezzedine Tahar, Bargaoui Zoubeida
Abstract:
This research presents the main ideas for implementing an intelligent system composed of communicating wireless sensors that measure environmental data linked to drought indicators (such as air temperature and soil moisture). In addition, a spatio-temporal database communicating with a web mapping application is proposed for real-time monitoring 24 hours a day, 7 days a week, to allow the screening of the time evolution of drought parameters and their extraction. The system thus helps detect areas affected by drought. Spatio-temporal conceptual models serve users who need to manage soil water content for irrigation, fertilization or other activities aimed at increasing crop yield; effectively, such models enable users to obtain a readable diagram of data that is easy to apprehend. Combined with socio-economic information, the system helps identify the people impacted by the phenomenon and the corresponding severity, especially since this information is accessible to farmers and stakeholders themselves. The study will be applied in the Siliana watershed, northern Tunisia.
Keywords: WSN, spatio-temporal database, GIS, web mapping, drought indicator.
546 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network
Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
We use a database that records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. While models learned from these data to predict future traffic speed would benefit applications such as car navigation systems, building predictive models for every link becomes a nontrivial job if the number of links in the network is huge. An advantage of adopting the k-nearest neighbor (k-NN) method as the predictive model is that it does not require any explicit model building; instead, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN traffic speed prediction by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale is to look mainly at the recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighboring links up- and downstream. The performances of these models are compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
Keywords: Big data, k-NN, machine learning, traffic speed prediction.
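The central trade-off can be sketched as below: a k-NN regressor predicts the next five-minute speed from the last few readings, searching only a recent window of the database instead of all of it. The data are synthetic, and the window size, lag count and k are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(5)
# synthetic speeds with a daily cycle: 288 five-minute slots per day
speeds = 50 + 10 * np.sin(np.arange(20000) * 2 * np.pi / 288) \
            + rng.normal(0, 2, 20000)

def make_xy(series, lags=6):
    """Feature rows are the previous `lags` readings; target is the next one."""
    X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
    return X, series[lags:]

X, y = make_xy(speeds)
recent = slice(-2000, None)                    # search only the recent ~10%
knn = KNeighborsRegressor(n_neighbors=5).fit(X[recent], y[recent])
print(f"predicted next speed: {knn.predict(X[-1:])[0]:.1f} km/h")
```

Comparing the prediction error and query time of this model against one fitted on the full history is exactly the experiment the abstract describes.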
545 Fast Search Method for Large Video Database Using Histogram Features and Temporal Division
Authors: Feifei Lee, Qiu Chen, Koji Kotani, Tadahiro Ohmi
Abstract:
In this paper, we propose an improved fast search algorithm that uses combined histogram features and a temporal division method to find short MPEG video clips in a large video database. Two types of histogram features are used to generate more robust features. The first is based on the adjacent pixel intensity difference quantization (APIDQ) algorithm, which had previously been reliably applied to human face recognition; an APIDQ histogram is utilized as the feature vector of the frame image. The second is an ordinal feature, which is robust to color distortion. Combined with active search [4], a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on 6 hours of video by searching for 200 given MPEG video clips, each 30 seconds long. Experimental results show the proposed algorithm can detect a similar video clip in merely 120 ms and achieves an Equal Error Rate (EER) of 1%, which is more accurate and robust than conventional fast video search algorithms.
Keywords: Fast search, Adjacent Pixel Intensity Difference Quantization (APIDQ), DC image, Histogram feature.
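An APIDQ-style histogram feature can be sketched as follows: quantize the intensity differences between horizontally adjacent pixels into a few bins and use the normalized bin counts as the frame's feature vector. The bin edges are illustrative, and the ordinal features and temporal division steps are not reproduced.

```python
import numpy as np

def apidq_histogram(frame, edges=(-64, -16, -4, 4, 16, 64)):
    """Normalized histogram of quantized adjacent-pixel intensity differences."""
    diff = np.diff(frame.astype(np.int16), axis=1)  # horizontal neighbors
    bins = np.digitize(diff.ravel(), edges)         # quantize into 7 bins
    hist = np.bincount(bins, minlength=len(edges) + 1)
    return hist / hist.sum()

frame_a = np.random.default_rng(6).integers(0, 256, (120, 160))
frame_b = np.clip(frame_a + 5, 0, 255)              # brightness-shifted copy
d = np.linalg.norm(apidq_histogram(frame_a) - apidq_histogram(frame_b))
print(f"feature distance between near-duplicate frames: {d:.4f}")  # small
```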