Search results for: spatial data analysis tools.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14333


14003 Preparation of Computer Model of the Aircraft for Numerical Aeroelasticity Tests – Flutter

Authors: M. Rychlik, R. Roszak, M. Morzynski, M. Nowak, H. Hausa, K. Kotecki

Abstract:

This article presents the geometry and structure reconstruction procedure for an aircraft model intended for flutter research (based on the I22-IRYDA aircraft). Reverse Engineering techniques and advanced surface modeling CAD tools are used for the reconstruction. The authors discuss all stages of the data acquisition process and the computation and analysis of the measured data. A three-dimensional structured light scanner was used for acquisition. In the further sections, details of the reconstruction process are presented. The geometry reconstruction procedure transforms the measured input data (a point cloud) into a three-dimensional parametric computer model (a NURBS solid model) that is compatible with CAD systems. In parallel with the geometry of the aircraft, the internal structure (structural model) is extracted and modeled. In the last chapter, the evaluation of the obtained models is discussed.

Keywords: computer modeling, numerical simulation, Reverse Engineering, structural model

14002 Hybrid Approach for Memory Analysis in Windows System

Authors: Khairul Akram Zainol Ariffin, Ahmad Kamil Mahmood, Jafreezal Jaafar, Solahuddin Shamsuddin

Abstract:

Random Access Memory (RAM) is an important device in a computer system. It can provide a snapshot of how the computer has been used. With its growing importance, computer memory has become a central topic in digital forensics, and a number of tools have been developed to retrieve information from it. However, most of these tools are limited in their ability to retrieve the important information from computer memory. Hence, this paper discusses the limitations and setbacks of two main techniques, process signature search and process enumeration. A new hybrid approach is then presented to minimize the setbacks of both individual techniques. This approach combines the two techniques in order to retrieve the information from the process block and other objects in the computer memory. In addition, the basic theory of address translation for x86 platforms is demonstrated in this paper.
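
As an illustration of the address translation step mentioned in the abstract, the following is a minimal sketch, assuming a classic non-PAE 32-bit x86 layout with 4 KiB pages, of a page-table walk over a raw memory dump; it is not the authors' tool, and read_u32 is a hypothetical helper that reads a little-endian 32-bit word at a physical offset in the dump.

# Minimal sketch: non-PAE 32-bit x86 virtual-to-physical translation
# (4 KiB pages only; 4 MiB large pages and PAE are omitted for brevity).
PAGE_PRESENT = 0x1

def translate(virt_addr, cr3, read_u32):
    """Walk the page directory/table rooted at CR3 for one virtual address."""
    pde_index = (virt_addr >> 22) & 0x3FF   # bits 31-22 select the page directory entry
    pte_index = (virt_addr >> 12) & 0x3FF   # bits 21-12 select the page table entry
    offset = virt_addr & 0xFFF              # bits 11-0 are the offset within the page

    pde = read_u32((cr3 & 0xFFFFF000) + pde_index * 4)
    if not pde & PAGE_PRESENT:
        return None                         # directory entry not present in the dump

    pte = read_u32((pde & 0xFFFFF000) + pte_index * 4)
    if not pte & PAGE_PRESENT:
        return None                         # page swapped out or unmapped

    return (pte & 0xFFFFF000) | offset      # physical frame base plus offset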

Keywords: Algorithms, Digital Forensics, Memory Analysis, Signature Search.

14001 Integrating Technology into Mathematics Education: A Case Study from Primary Mathematics Student Teachers

Authors: Berna Cantürk-Günhan, Esra Bukova-Güzel

Abstract:

The purpose of the study is to determine primary mathematics student teachers' views on the use of instructional technology tools during the learning process and to reveal how sample presentations of different mathematical concepts affect those views. This is a qualitative study involving twelve mathematics students from a public university. The data were gathered from two semi-structured interviews. The first was conducted at the beginning of the study. After that, presentations prepared by the researchers were shown to the participants. These presentations contain animations, Geometer's Sketchpad activities, video clips, spreadsheets, and PowerPoint presentations. The last interview was conducted after these presentations. The data from the interviews and content analyses were transcribed, then read and reread to explore the major themes. Findings revealed that the students' views changed during this process and that they came to believe instructional technology tools should be used in their classrooms.

Keywords: Integrating Technology, Mathematics Education, Primary Education, Teacher Education.

14000 Development of Single Layer of WO3 on Large Spatial Resolution by Atomic Layer Deposition Technique

Authors: S. Zhuiykov, Zh. Hai, H. Xu, C. Xue

Abstract:

Unique and distinctive properties can be obtained in a two-dimensional (2D) semiconductor such as tungsten trioxide (WO3) when it is reduced from a multi-layer to one fundamental layer in thickness. Achieving this transition over a large spatial resolution without damaging the single layer remained elusive until the atomic layer deposition (ALD) technique was utilized. Here we report the ALD-enabled, atomic-layer-precision development of a single layer of WO3 with a thickness of 0.77±0.07 nm over a large spatial resolution, using (tBuN)2W(NMe2)2 as the tungsten precursor and H2O as the oxygen precursor, without affecting the underlying SiO2/Si substrate. The versatility of ALD lies in tuning the recipe to obtain WO3 with the desired number of layers, including a monolayer. Governed by self-limiting surface reactions, the ALD-enabled approach is versatile, scalable and applicable to a broader range of 2D semiconductors and various device applications.

Keywords: Atomic layer deposition, tungsten oxide, WO3, two-dimensional semiconductors, single fundamental layer.

13999 The Use of Classifiers in Image Analysis of Oil Wells Profiling Process and the Automatic Identification of Events

Authors: Jaqueline M. R. Vieira

Abstract:

Different strategies and tools are available in the oil and gas industry for detecting and analyzing tension and possible fractures in borehole walls. Most of these techniques are based on manual observation of the captured borehole images. While this strategy may be feasible and convenient with small images and little data, it becomes difficult and prone to errors when large databases of images must be processed. Moreover, the patterns may differ across the image area, depending on many characteristics (drilling strategy, rock components, rock strength, etc.). In this work we propose the inclusion of data-mining classification strategies in order to create a knowledge database of the segmented curves. With these classifiers, after some time of use and of manually marking the parts of borehole images that correspond to tension regions and breakout areas, the system automatically indicates and suggests new candidate regions with higher accuracy. We suggest the use of different classification methods in order to obtain different knowledge dataset configurations.
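
A minimal sketch of the kind of classification step described above, assuming scikit-learn and hypothetical per-region feature vectors (for example, texture and intensity statistics of each segmented curve); it is not the authors' implementation, and the toy arrays stand in for features extracted from real borehole images.

import numpy as np
from sklearn.linear_model import LogisticRegression

# X_labeled: feature vectors of regions the analyst has already marked;
# y_labeled: 1 for tension/breakout regions, 0 for normal borehole wall.
X_labeled = np.array([[0.82, 0.11], [0.74, 0.19], [0.12, 0.91],
                      [0.21, 0.83], [0.69, 0.25], [0.18, 0.88]])
y_labeled = np.array([1, 1, 0, 0, 1, 0])

clf = LogisticRegression().fit(X_labeled, y_labeled)

# X_candidates: features of newly segmented regions from fresh images.
X_candidates = np.array([[0.77, 0.14], [0.15, 0.86]])
scores = clf.predict_proba(X_candidates)[:, 1]   # probability of being a tension/breakout region
review_order = np.argsort(scores)[::-1]          # suggest the highest-scoring regions first
print(scores, review_order)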

Keywords: Brazil, classifiers, data-mining, Image Segmentation, oil well visualization.

13998 Virtual Speaking Head for Hearing Impaired Students

Authors: Eva Pajorová, Ladislav Hluchý

Abstract:

The developed tool is one of a set of system tools for easier access to various scientific areas and for real-time interactive learning between the lecturer and hearing impaired students. There is no demand for the lecturer to know Sign Language (SL). Instead, the new software tools will perform the translation of regular speech into SL, after which it will be transferred to the student. On the other side, the questions of the student (in SL) will be translated and transferred to the lecturer as text or speech. The presented tool is one of those tools: it is a tool for developing the correct speech visemes as the root of a total communication method for hearing impaired students.

Keywords: Impaired people, sign language, communication methods.

13997 The Effects of the Spatial Characteristics of the Streetscape on People's Preference for the Cityscape - Centered on 'Design Seoul Street'

Authors: Eun-Jung Ko, Bur-Deul Yoon, Sung-Won Choi, Hong-Kyu Kim

Abstract:

Jacobs, A.B. (1993) stated that "When I think of a city, the first thing that comes to mind is the street. If the street is interesting, the rest of the city is interesting. If the street is mundane, the city is also mundane." In this statement, he expresses the importance of the streetscape and the street environment. The objective of this paper is to analyze the spatial relationships of the streetscape that affect the general public's preference for the cityscape. Furthermore, this research focuses on the important role that the streetscape plays in the perception of the city by the pedestrians who experience it daily. The subjects of this paper are eight of the "Design Seoul Streets." The analysis and survey results show the preference criteria that affect the streetscape and, ultimately, the cityscape. This research shows that differences in physical form, shape, size, color, location, and context are important.

Keywords: Cityscape, Design Seoul Street, street, streetscape.

13996 Land Use/Land Cover Mapping Using Landsat 8 and Sentinel-2 in a Mediterranean Landscape

Authors: M. Vogiatzis, K. Perakis

Abstract:

Spatially explicit and up-to-date land use/land cover information is fundamental for spatial planning, land management, sustainable development, and sound decision-making. In the last decade, many satellite-derived land cover products at different spatial, spectral, and temporal resolutions have been developed, such as the European Copernicus Land Cover product. However, more efficient and detailed land use/land cover information is required at the regional or local scale. A typical Mediterranean basin with a complex landscape comprising various forest types, crops, artificial surfaces, and wetlands was selected to test and develop our approach. In this study, we investigate the improvement of the Copernicus Land Cover product (CLC2018) using Landsat 8 and Sentinel-2 pixel-based classification based on all available existing geospatial data (forest maps, LPIS, Natura 2000 habitats, cadastral parcels, etc.). We examined and compared the performance of the Random Forest classifier for land use/land cover mapping. In total, 10 land use/land cover categories were recognized in Landsat 8 and 11 in Sentinel-2A. A comparison of the overall classification accuracies for 2018 shows that the Landsat 8 classification accuracy was slightly higher than that of Sentinel-2A (82.99% vs. 80.30%). We concluded that the main land use/land cover types of CLC2018, even within a heterogeneous area, can be successfully mapped and updated according to the CLC nomenclature. Future research should be oriented toward integrating spatiotemporal information from seasonal bands and spectral indices in the classification process.
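
A minimal sketch of pixel-based Random Forest classification with scikit-learn, assuming the spectral bands have already been stacked into a pixel-by-band array and reference labels have been sampled from the existing geospatial layers; the synthetic arrays and parameters are illustrative, not the authors' exact workflow.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: one row per reference pixel, one column per spectral band;
# y: land use/land cover class codes taken from the reference geospatial data.
# Synthetic placeholders stand in for band values sampled at reference locations.
rng = np.random.default_rng(42)
X = rng.random((1000, 10))              # e.g., 10 spectral bands
y = rng.integers(0, 10, size=1000)      # e.g., 10 land use/land cover classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=42)
rf.fit(X_train, y_train)
print("Overall accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# To classify the whole scene, reshape the image to (rows * cols, n_bands),
# call rf.predict on it, then reshape the result back to (rows, cols).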

Keywords: Land use/land cover, random forest, Landsat-8 OLI, Sentinel-2A MSI, CORINE land cover.

13995 A Monte Carlo Method for Data Stream Analysis

Authors: Kittisak Kerdprasop, Nittaya Kerdprasop, Pairote Sattayatham

Abstract:

Data stream analysis is the process of computing various summaries and derived values from large amounts of data that are continuously generated at a rapid rate. The nature of a stream does not allow revisiting each data element. Furthermore, data processing must be fast in order to produce timely analysis results. These requirements impose constraints on the design of the algorithms to balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. These techniques can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem. The data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation step. The data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is analyzed by a series of experiments. We apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
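
For illustration, the following is a minimal sketch of one-pass reservoir sampling, a standard Monte Carlo style reduction for streams that cannot be revisited; it is a generic textbook example, not the authors' EMR sampling method.

import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k elements from a stream seen only once."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir with the first k elements
        else:
            j = rng.randint(0, i)       # keep the new element with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

# Example: summarize a long numeric stream by 100 representative points,
# which can then feed clustering, classification or density estimation.
sample = reservoir_sample(range(1_000_000), k=100)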

Keywords: Data Stream, Monte Carlo, Sampling, Density Estimation.

13994 Pattern Recognition Using Feature Based Die-Map Clustering in the Semiconductor Manufacturing Process

Authors: Seung Hwan Park, Cheng-Sool Park, Jun Seok Kim, Youngji Yoo, Daewoong An, Jun-Geol Baek

Abstract:

As big data analysis becomes increasingly important, yield prediction using data from the semiconductor process is essential. In general, yield prediction and analysis of the causes of failure are closely related. The purpose of this study is to analyze the patterns that affect the final test results using die-map-based clustering. Many studies have been conducted using die data from the semiconductor test process. However, such analysis has limitations, as the test data are less directly related to the final test results. Therefore, this study proposes a framework for analysis through clustering that uses more detailed data than the existing die data. The study consists of three phases. In the first phase, a die map is created from the fail-bit data in each sub-area of the die. In the second phase, clustering using the map data is performed. The third phase is to find patterns that affect the final test result. Finally, the proposed three steps are applied to actual industrial data, and the experimental results show the potential for field application.
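
A minimal sketch of the clustering phase, assuming each die has already been summarized as a feature vector of fail-bit counts per sub-area; k-means is used here as a generic illustration and the numbers are toy placeholders, not the paper's data or necessarily its clustering algorithm.

import numpy as np
from sklearn.cluster import KMeans

# die_features: one row per die, one column per sub-area (hypothetical fail-bit counts).
die_features = np.array([
    [120, 3, 5, 2],
    [115, 4, 6, 1],
    [2, 3, 98, 110],
    [1, 5, 102, 95],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(die_features)
labels = kmeans.labels_   # cluster id per die
print(labels)

# Cross-tabulating these cluster labels with the final test results then
# reveals which die-map patterns are associated with failures.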

Keywords: Die-Map Clustering, Feature Extraction, Pattern Recognition, Semiconductor Manufacturing Process.

13993 Adopting Collaborative Business Processes to Prevent the Loss of Information in Public Administration Organisations

Authors: A. Capodieci, G. Del Fiore, L. Mainetti

Abstract:

Recently, the use of web 2.0 tools has increased in companies and public administration organisations. This phenomenon, known as "Enterprise 2.0", has, de facto, modified common organisational and operative practices. This has led “knowledge workers” to change their working practices through the use of Web 2.0 communication tools. Unfortunately, these tools have not been integrated with existing enterprise information systems, a situation that could potentially lead to a loss of information. This is an important problem in an organisational context, because knowledge of information exchanged within the organisation is needed to increase the efficiency and competitiveness of the organisation. In this article we demonstrate that it is possible to capture this knowledge using collaboration processes, which are processes of abstraction created in accordance with design patterns and applied to new organisational operative practices.

Keywords: Business Practices, Business Process Patterns, Collaboration Tools, Enterprise 2.0, Knowledge Workers.

13992 Extraction of Data from Web Pages: A Vision Based Approach

Authors: P. S. Hiremath, Siddu P. Algur

Abstract:

With the explosive growth of information sources available on the World Wide Web, it has become increasingly difficult to identify the relevant pieces of information, since web pages are often cluttered with irrelevant content such as advertisements, navigation panels and copyright notices surrounding the main content of the page. Hence, tools for mining data regions, data records and data items need to be developed in order to provide value-added services. Currently available automatic techniques to mine data regions from web pages are still unsatisfactory because of their poor performance and tag dependence. In this paper a novel method to extract data items from web pages automatically is proposed. It comprises two steps: (1) identification and extraction of the data regions based on visual clues, and (2) identification of data records and extraction of data items from a data region. For step 1, a novel and more effective method is proposed that finds the data regions formed by all types of tags using visual clues. For step 2, a more effective method, namely Extraction of Data Items from web Pages (EDIP), is adopted to mine data items. The EDIP technique is a list-based approach in which the list is a linear data structure. The proposed technique is able to mine non-contiguous data records and can correctly identify data regions irrespective of the type of tag in which they are bound. Our experimental results show that the proposed technique performs better than the existing techniques.

Keywords: Web data records, web data regions, web mining.

13991 Big Bang – Big Crunch Learning Method for Fuzzy Cognitive Maps

Authors: Engin Yesil, Leon Urbas

Abstract:

Modeling complex dynamic systems, for which it is very complicated to establish mathematical models, requires new and modern methodologies that exploit existing expert knowledge, human experience and historical data. Fuzzy cognitive maps are very suitable, simple, and powerful tools for the simulation and analysis of these kinds of dynamic systems. However, human experts are subjective and can handle only relatively simple fuzzy cognitive maps; therefore, there is a need to develop new approaches for the automated generation of fuzzy cognitive maps from historical data. In this study, a new learning algorithm, called Big Bang-Big Crunch, is proposed for the first time in the literature for the automated generation of fuzzy cognitive maps from data. Two real-world examples, namely a process control system and a radiation therapy process, and one synthetic model are used to emphasize the effectiveness and usefulness of the proposed methodology.
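
A minimal sketch of a generic Big Bang-Big Crunch loop for minimizing an error function over a real-valued weight vector, such as the flattened weight matrix of a fuzzy cognitive map; it assumes a non-negative error to be minimized and illustrates only the optimizer, not the exact learning formulation of the paper.

import numpy as np

def big_bang_big_crunch(error, dim, bounds=(-1.0, 1.0), pop_size=50, iterations=100, seed=0):
    """Generic Big Bang-Big Crunch minimization of a non-negative error function."""
    rng = np.random.default_rng(seed)
    low, high = bounds
    pop = rng.uniform(low, high, size=(pop_size, dim))   # Big Bang: random initial population
    best, best_err = None, np.inf
    for k in range(1, iterations + 1):
        errs = np.array([error(x) for x in pop])
        if errs.min() < best_err:
            best_err, best = errs.min(), pop[errs.argmin()].copy()
        # Big Crunch: contract the population to its fitness-weighted centre of mass.
        weights = 1.0 / (errs + 1e-12)
        centre = (weights[:, None] * pop).sum(axis=0) / weights.sum()
        # Next Big Bang: scatter new points around the centre, shrinking as k grows.
        spread = rng.standard_normal((pop_size, dim)) * (high - low) / (2 * k)
        pop = np.clip(centre + spread, low, high)
    return best, best_err

# Toy use: fit a 3x3 map of weights (flattened to 9 values) against a placeholder objective.
w_opt, e = big_bang_big_crunch(lambda w: float(np.sum((w - 0.3) ** 2)), dim=9)
print(e)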

Keywords: Big Bang-Big Crunch optimization, Dynamic Systems, Fuzzy Cognitive Maps, Learning.

13990 A Methodology for Quality Problems Diagnosis in SMEs

Authors: Humberto N. Teixeira, Isabel S. Lopes, Sérgio D. Sousa

Abstract:

This article proposes a new methodology to be used by SMEs (small and medium-sized enterprises) to characterize their performance in quality, highlighting weaknesses and areas for improvement. The methodology aims to identify the principal causes of quality problems and to help prioritize improvement initiatives. It is a self-assessment methodology intended to be easy to implement by companies with a low maturity level in quality. The methodology is organized into six steps, which include gathering information about predetermined processes and sub-processes of quality management, defined on the basis of the well-known Juran trilogy for quality management (quality planning, quality control and quality improvement), and predetermined results categories, defined on the basis of the quality concept. A set of tools for data collection and analysis, such as interviews, flowcharts, process analysis diagrams and Failure Mode and Effects Analysis (FMEA), is used. The article also presents the conclusions obtained from the application of the methodology in two case studies.

Keywords: Continuous improvement, Diagnosis, Quality Management, Self-assessment, SMEs

13989 Data-organization Before Learning Multi-Entity Bayesian Networks Structure

Authors: H. Bouhamed, A. Rebai, T. Lecroq, M. Jaoua

Abstract:

The objective of our work is to develop a new approach for discovering knowledge from a large mass of data; the result of applying this approach will be an expert system that serves as a diagnostic tool for a phenomenon related to a huge information system. We first recall the general problem of learning a Bayesian network structure from data and suggest a solution for optimizing the complexity by using organizational and optimization methods for the data. Afterward, we propose a new heuristic for learning Multi-Entity Bayesian Network structures. We have applied our approach to biological facts concerning hereditary complex illnesses for which the literature in biology identifies the variables responsible for those diseases. Finally, we conclude on the limits reached by this work.

Keywords: Data-organization, data-optimization, automatic knowledge discovery, Multi-Entities Bayesian networks, score merging.

13988 Dependability Tools in Multi-Agent Support for Failures Analysis of Computer Networks

Authors: Myriam Noureddine

Abstract:

During their activity, all systems must be operational without failures, and in this context the dependability concept is essential to avoid disruption of their function. As computer networks are systems with the same dependability requirements, this article deals with an analysis of failures for a computer network. The proposed approach integrates specific tools of the KB3 platform, usually applied in dependability studies of industrial systems. The methodology is supported by a multi-agent system formed by six agents grouped into three meta-agents and dealing with two levels. The first level concerns a modeling step through a conceptual agent and a generating agent. The conceptual agent is dedicated to building the knowledge base from the system specifications written in the FIGARO language. The generating agent automatically produces both the structural model and a dependability model of the system. The second level, the simulation, shows the effects of the failures of the system through a simulation agent. The approach is validated by its application to a specific computer network, giving an analysis of failures through their effects on the considered network.

Keywords: Computer network, dependability, KB3 platform, multi-agent system, failure.

13987 Applying Spanning Tree Graph Theory for Automatic Database Normalization

Authors: Chetneti Srisa-an

Abstract:

In the knowledge and data engineering field, the relational database is the best repository for storing real-world data, and it has been used around the world for more than eight decades. Normalization is the most important process for the analysis and design of relational databases. It aims at creating a set of relational tables with minimum data redundancy that preserve consistency and facilitate correct insertion, deletion, and modification. Normalization is a major task in the design of relational databases. Despite its importance, very few algorithms have been developed for use in commercial automatic normalization tools, and it is rare for normalization to be done automatically rather than manually. Moreover, for the large and complex databases of today, it is even harder to do manually. This paper presents a new, fully automated relational database normalization method. It first produces the directed graph and spanning tree, and then proceeds to generate the 2NF, 3NF and BCNF normal forms. The benefit of this new algorithm is that it can cope with a large set of complex functional dependencies.
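
As background for this kind of tool, here is a minimal sketch of the standard attribute-closure computation used when checking functional dependencies and candidate keys during normalization; it is a common textbook building block, not the paper's spanning-tree algorithm.

def attribute_closure(attrs, fds):
    """Closure of a set of attributes under a list of functional dependencies.

    fds is a list of (lhs, rhs) pairs of attribute sets, e.g. ({'A'}, {'B'}).
    """
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closure and not rhs <= closure:
                closure |= rhs      # every attribute determined by lhs is reachable
                changed = True
    return closure

# Example: with A -> B and B -> C, the closure of {A} is {A, B, C},
# so A is a candidate key of a relation R(A, B, C).
fds = [({'A'}, {'B'}), ({'B'}, {'C'})]
print(attribute_closure({'A'}, fds))    # {'A', 'B', 'C'}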

Keywords: Relational Database, Functional Dependency, Automatic Normalization, Primary Key, Spanning tree.

13986 Non-Standard Monetary Policy Measures and Their Consequences

Authors: Aleksandra Nocoń (Szunke)

Abstract:

The study is a review of the literature concerning the consequences of the non-standard monetary policy measures that central banks use during unconventional periods that threaten banking sector stability. In particular, attention is paid to the effects of non-standard monetary policy tools on financial markets. However, the empirical evidence about their effects and real consequences for financial markets is still not conclusive. The main aim of the study is to survey the consequences of standard and non-standard monetary policy instruments implemented during the global financial crisis in the United States, the United Kingdom and the euro area, with particular attention to the results for the stabilization of global financial markets. The study consists mainly of an empirical review indicating the impact of the implementation of these tools on financial markets. The following research methods were used in the study: literature studies, including domestic and foreign literature, cause and effect analysis, and statistical analysis.

Keywords: Asset purchase facility, consequences of monetary policy instruments, non-standard monetary policy, Quantitative Easing.

13985 Automating Test Activities: Test Cases Creation, Test Execution, and Test Reporting with Multiple Test Automation Tools

Authors: Loke Mun Sei

Abstract:

Software testing has become a mandatory process in assuring software product quality. Hence, test management is needed in order to manage the test activities conducted in the software test life cycle. This paper discusses the challenges faced in the software test life cycle and how the test processes and test activities, mainly test case creation, test execution, and test reporting, are managed and automated using several test automation tools, i.e. Jira, Robot Framework, and Jenkins.

Keywords: Test automation tools, test case, test execution, test reporting.

13984 Application of Exact String Matching Algorithms towards SMILES Representation of Chemical Structure

Authors: Ahmad Fadel Klaib, Zurinahni Zainol, Nurul Hashimah Ahamed, Rosma Ahmad, Wahidah Hussin

Abstract:

Bioinformatics and cheminformatics are disciplines that use computers to provide tools for the acquisition, storage, processing, analysis and integration of biological and chemical data, and for the development of potential applications of these data. A chemical database is a database designed exclusively to store chemical information. NMRShiftDB is one of the main databases used to represent chemical structures as 2D or 3D structures. The SMILES format is one of many ways to write a chemical structure in a linear format. In this study we extracted antimicrobial structures in SMILES format from NMRShiftDB and stored them, with their corresponding information, in our local data warehouse. Additionally, we developed a searching tool that responds to a user's query using the JME Editor tool, which allows the user to draw or edit molecules and converts the drawn structure into SMILES format. We applied the Quick Search algorithm to search for antimicrobial structures in our local data warehouse.
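
Since the abstract names the Quick Search algorithm, here is a minimal sketch of that exact string-matching algorithm applied to SMILES text; the example pattern and string are made up, and this is not the authors' searching tool.

def quick_search(pattern, text):
    """Quick Search (Sunday) exact string matching; returns all match positions."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Shift table: how far to slide so that the character just after the current
    # window aligns with its rightmost occurrence in the pattern (m + 1 otherwise).
    shift = {c: m - i for i, c in enumerate(pattern)}
    positions = []
    s = 0
    while s <= n - m:
        if text[s:s + m] == pattern:
            positions.append(s)
        nxt = s + m
        if nxt >= n:
            break
        s += shift.get(text[nxt], m + 1)
    return positions

# Example: locate a substring pattern inside a stored SMILES string.
print(quick_search("C(=O)O", "CC(=O)Oc1ccccc1C(=O)O"))   # [1, 15]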

Keywords: Exact String-matching Algorithms, NMRShiftDB, SMILES Format, Antimicrobial Structures.

13983 Behavioral Response of Bee Farmers to Climate Change in South East, Nigeria

Authors: Jude A. Mbanasor, Chigozirim N. Onwusiribe

Abstract:

The enigma of climate change is no longer an illusion but a reality. In recent years, the Nigerian climate has changed, and the changes are shown by the changing patterns of rainfall and sunshine, increasing levels of carbon and nitrous oxide emissions, as well as deforestation. This study analyzed the behavioural response of beekeepers to variations in the climate and the adaptation techniques developed in response to the climate variation. Beekeeping is a viable economic activity for the alleviation of poverty, as its products, which include honey, wax, pollen, propolis, royal jelly, venom, queens, bees and their larvae, are all marketable. The study adopted the multistage sampling technique to select 120 beekeepers from the five states of Southeast Nigeria. Well-structured questionnaires and focus group discussions were used to collect the required data. Statistical tools such as principal component analysis, data envelopment models, graphs, and charts were used for the data analysis. Changing patterns of rainfall and sunshine, together with the increasing rate of deforestation, had a negative effect on the habitat of the bees. The beekeepers have adopted the Kenya top-bar and Langstroth hives, and they establish the beehives on fallow farmland close to the cultivated communal farms with more flowering crops.

Keywords: Climate, smart, smallholder, farmer, socioeconomic, response.

13982 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of the extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by time-domain filtering (tracking). The fundamental frequencies extracted are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., entropy, variance, kurtosis), and feature extraction (auto-associative neural network, ANN), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classification (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN) are presented and their performances are compared. It is also shown that, by evaluating the correct features, the anomaly can be detected with an accuracy and an F1 score greater than 95%.
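
A minimal sketch of a PCA-based one-class detector of the kind described above, assuming a matrix of damage-sensitive features (for example, tracked fundamental frequencies) collected in the healthy condition; the synthetic data and the threshold choice are illustrative, not the paper's exact procedure.

import numpy as np
from sklearn.decomposition import PCA

# X_train: feature vectors (e.g., four tracked frequencies in Hz) from the standard condition only.
X_train = np.random.default_rng(0).normal([3.9, 5.0, 9.8, 10.3], 0.02, size=(500, 4))

pca = PCA(n_components=2).fit(X_train)

def reconstruction_error(X):
    """Distance between each sample and its projection onto the principal subspace."""
    X_hat = pca.inverse_transform(pca.transform(X))
    return np.linalg.norm(X - X_hat, axis=1)

threshold = np.percentile(reconstruction_error(X_train), 99)   # tolerate ~1% false alarms

X_new = np.array([[3.90, 5.00, 9.80, 10.30],    # healthy-like observation
                  [3.60, 4.70, 9.30, 9.90]])    # shifted frequencies, possible damage
print(reconstruction_error(X_new) > threshold)  # True flags an anomaly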

Keywords: Anomaly detection, dimensionality reduction, frequencies selection, modal analysis, neural network, structural health monitoring, vibration measurement.

13981 Landscape Data Transformation: Categorical Descriptions to Numerical Descriptors

Authors: Dennis A. Apuan

Abstract:

Categorical data based on descriptions of the agricultural landscape impose some mathematical and analytical limitations. This problem, however, can be overcome by data transformation through a coding scheme and the use of a non-parametric multivariate approach. The present study describes data transformation from qualitative to numerical descriptors. In a collection of 103 random soil samples over a 60-hectare field, categorical data were obtained for the following variables: levels of nitrogen, phosphorus, potassium, pH, hue, chroma, and value, as well as data on topography, vegetation type, and the presence of rocks. The categorical data were coded, and Spearman's rho correlation was then calculated using PAST software ver. 1.78, on which the Principal Component Analysis was based. Results revealed a successful data transformation, generating 1030 quantitative descriptors. Visualization based on the new set of descriptors showed clear differences among sites, and the amount of variation was successfully measured. Possible applications of data transformation are discussed.
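
A minimal sketch of the coding-and-correlation step, using hypothetical ordinal category labels for two soil variables; it uses scipy rather than the PAST software named in the abstract, and the coding scheme shown is only an example.

from scipy.stats import spearmanr

# Hypothetical ordinal coding scheme: categorical description -> numerical rank.
nitrogen_code = {"low": 1, "medium": 2, "high": 3}
slope_code = {"flat": 1, "gentle": 2, "steep": 3}

# Categorical field descriptions for a few sample points.
nitrogen = ["low", "medium", "high", "high", "medium", "low"]
slope = ["steep", "gentle", "flat", "gentle", "flat", "steep"]

# Transform the categories into numerical descriptors.
n_num = [nitrogen_code[v] for v in nitrogen]
s_num = [slope_code[v] for v in slope]

rho, p_value = spearmanr(n_num, s_num)   # rank-based, non-parametric correlation
print(rho, p_value)

A correlation (or distance) matrix built this way over all coded variables can then be fed to Principal Component Analysis for ordination of the sampling sites.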

Keywords: Data transformation, numerical descriptors, principal component analysis.

13980 Entropy Based Spatial Design: A Genetic Algorithm Approach (Case Study)

Authors: Abbas Siefi, Mohammad Javad Karimifar

Abstract:

We study the spatial design of experiments, in which we want to select a most informative subset, of prespecified size, from a set of correlated random variables. The problem arises in many applied domains, such as meteorology, environmental statistics, and statistical geology. In these applications, observations can be collected at different locations and possibly at different times. In spatial design, when the design region and the set of interest are discrete, the covariance matrix completely describes the objective function, and our goal is to choose a feasible design that minimizes the resulting uncertainty. The problem is recast as that of maximizing the determinant of the covariance matrix of the chosen subset. This problem is NP-hard. When using these designs in computer experiments, the design space is in many cases very large and it is not possible to calculate the exact optimal solution. Heuristic optimization methods can discover efficient experiment designs in situations where traditional designs cannot be applied, exchange methods are ineffective, and an exact solution is not possible. We developed a genetic algorithm (GA) to take advantage of its exploratory power. The successful application of this method is demonstrated in a large design space. We consider a real case of design of experiments; in our problem the design space is very large, and we used the proposed GA to solve it.
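
A minimal sketch of a GA for maximum entropy sampling, assuming a known covariance matrix over candidate sites and using the log-determinant of the selected submatrix as fitness; the operators and parameters are generic illustrations, not the authors' exact algorithm.

import numpy as np

def log_det_subset(cov, idx):
    """Objective: log-determinant of the covariance restricted to the chosen sites."""
    sign, logdet = np.linalg.slogdet(cov[np.ix_(idx, idx)])
    return logdet if sign > 0 else -np.inf

def ga_max_entropy_design(cov, k, pop_size=40, generations=200, seed=0):
    """Toy GA: each individual is a k-subset of site indices, fitness is log det."""
    rng = np.random.default_rng(seed)
    n = cov.shape[0]
    pop = [rng.choice(n, size=k, replace=False) for _ in range(pop_size)]
    for _ in range(generations):
        fitness = np.array([log_det_subset(cov, ind) for ind in pop])
        order = np.argsort(fitness)[::-1]
        parents = [pop[i] for i in order[:pop_size // 2]]       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), size=2, replace=False)
            pool = np.union1d(parents[a], parents[b])           # crossover: merge two parents
            child = rng.choice(pool, size=k, replace=False)     # keep k sites from the pool
            if rng.random() < 0.2:                              # mutation: swap one site
                child[rng.integers(k)] = rng.choice(np.setdiff1d(np.arange(n), child))
            children.append(child)
        pop = parents + children
    fitness = np.array([log_det_subset(cov, ind) for ind in pop])
    return np.sort(pop[int(np.argmax(fitness))]), float(fitness.max())

# Example with a small synthetic covariance over 30 candidate monitoring sites.
A = np.random.default_rng(1).normal(size=(30, 30))
cov = A @ A.T + 30 * np.eye(30)          # symmetric positive definite
print(ga_max_entropy_design(cov, k=5))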

Keywords: Spatial design of experiments, maximum entropy sampling, computer experiments, genetic algorithm.

13979 Identifying E-Learning Components at North-West University, Mafikeng Campus

Authors: Sylvia Tumelo Nthutang, Nehemiah Mavetera

Abstract:

Educational institutions are under pressure from their competitors. Regulators and community groups need educational institutions to adopt appropriate business and organizational practices. Globally, educational institutions are now using e-learning as the best teaching and learning approach. E-learning is becoming the center of attention for learning institutions, educational systems and software developers. North-West University (NWU) is currently using eFundi, a Learning Management System (LMS). An LMS comprises the information systems and procedures that add value to students' learning and support the learning material in text or any multimedia files. With various e-learning tools, students are able to access all the materials related to the course as electronic copies. The study was tasked with identifying the e-learning components at the NWU Mafikeng campus. A quantitative research methodology was used for data collection and descriptive statistics for data analysis. Activity Theory (AT) was used as a theory to guide the study. AT outlines the limitations of e-learning at the macro-organizational level (plans, guiding principles, campus-wide solutions) and the micro-organizational level (daily functioning practice, collaborative transformation, specific adaptation). In a technological environment, AT gives people an opportunity to move from concentrating on computers as the main area of concern to understanding that technology is part of human activities. The findings identified the university's current IT tools and knowledge of e-learning elements. It was recommended that the university consider buying computer resources that consume less power and practice e-learning effectively.

Keywords: E-learning, information and communication technology, teaching, and virtual learning environment.

13978 Spatial Structure and Process of Arctic Warming and Land Cover Change in the Feedback Systems Framework

Authors: Eric Kojo Wu Aikins

Abstract:

This paper examines the relationships among the various drivers of climate change that have both climatic and ecological consequences for vegetation and land cover change in arctic areas, particularly in arctic Alaska. It discusses the various processes that have created spatial and climatic structures that have facilitated observable vegetation and land cover changes in the Arctic. It also indicates that the drivers of both climatic and ecological changes in the Arctic are multi-faceted and operate in a system with both positive and negative feedbacks, which largely result in further increases or decreases of the initial drivers of climatic and vegetation change, mainly at the local and regional scales. It demonstrates that the impact of arctic warming on land cover change and the Arctic ecosystems is not unidirectional and one-dimensional in nature but represents multi-directional and multi-dimensional forces operating in a feedback system.

Keywords: Arctic Vegetation Change, Climate Change, Feedback System, Spatial Process and Structure.

13977 Simulation of Online Communities Using MAS Social and Spatial Organisations

Authors: Maya Rupert, Salima Hassas, Carlos Li, John Sherwood

Abstract:

Online Communities are an example of socially-aware, self-organising, complex adaptive computing systems. The multi-agent systems (MAS) paradigm coordinated by self-organisation mechanisms has been used as an effective way for the simulation and modeling of such systems. In this paper, we propose a model for simulating an online health community using a situated multi-agent system approach, governed by the co-evolution of the social and spatial organisations of the agents.

Keywords: multi-agent systems, organizations, online communities.

13976 Classification and Analysis of Risks in Software Engineering

Authors: Hooman Hoodat, Hassan Rashidi

Abstract:

Despite the various methods that exist for software risk management, software projects have a high rate of failure. When the complexity and size of projects increase, managing software development becomes more difficult. In these projects the need for more analysis and risk assessment is vital. In this paper, a classification for software risks is specified. Then the relations between these risks are presented using a risk tree structure. Analysis and assessment of these risks are done using probabilistic calculations. This analysis helps with the qualitative and quantitative assessment of the risk of failure. Moreover, it can support the software risk management process. This classification and risk tree structure can be applied in some software tools.
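
A minimal sketch of a probabilistic calculation over a risk tree, assuming independent leaf risks and standard fault-tree style gates (an OR node occurs if any child risk occurs, an AND node only if all children occur); the tree shown is a made-up example, not the paper's classification.

from math import prod

def risk_probability(node):
    """Recursively evaluate the occurrence probability of a risk tree node."""
    if "prob" in node:                   # leaf risk with an estimated probability
        return node["prob"]
    child_probs = [risk_probability(c) for c in node["children"]]
    if node["gate"] == "AND":            # all child risks must occur together
        return prod(child_probs)
    return 1 - prod(1 - p for p in child_probs)   # OR gate: at least one child occurs

project_risk = {
    "gate": "OR",
    "children": [
        {"prob": 0.10},                                   # e.g. requirements risk
        {"gate": "AND", "children": [{"prob": 0.30},      # schedule pressure
                                     {"prob": 0.20}]},    # and staff turnover
    ],
}
print(risk_probability(project_risk))   # 1 - 0.9 * (1 - 0.06) = 0.154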

Keywords: Risk analysis, risk assessment, risk classification, risk tree.

13975 A Novel Web Metric for the Evaluation of Internet Trends

Authors: Radek Malinský, Ivan Jelínek

Abstract:

Web 2.0 (social networking, blogging and online forums) can serve as a data source for social science research because it contains a vast amount of information from many different users. The volume of that information has been growing at a very high rate, becoming a network of heterogeneous data; this makes relevant information difficult to find and therefore of little use. We have proposed a novel theoretical model for gathering and processing data from Web 2.0 which reflects the semantic content of web pages in a better way. This article deals with the analysis part of the model and its usage for the content analysis of blogs. The introductory part of the article describes the methodology for gathering and processing data from blogs. The next part of the article is focused on the evaluation and content analysis of blogs that write about a specific trend.
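
For illustration only, a minimal lexicon-based sentiment scoring sketch of the kind often used in blog content analysis; the word lists and posts are toy placeholders, and this is not the web metric proposed by the authors.

import re

POSITIVE = {"good", "great", "useful", "excellent", "love"}
NEGATIVE = {"bad", "poor", "useless", "terrible", "hate"}

def sentiment_score(text):
    """(positive - negative) word count, normalized by the total word count."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

posts = [
    "This new framework is great and really useful for our project.",
    "The upgrade was terrible and the documentation is poor.",
]
for p in posts:
    print(round(sentiment_score(p), 3), p)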

Keywords: Blog, Sentiment Analysis, Web 2.0, Webometrics

13974 Accurate HLA Typing at High-Digit Resolution from NGS Data

Authors: Yazhi Huang, Jing Yang, Dingge Ying, Yan Zhang, Vorasuk Shotelersuk, Nattiya Hirankarn, Pak Chung Sham, Yu Lung Lau, Wanling Yang

Abstract:

Human leukocyte antigen (HLA) typing from next generation sequencing (NGS) data has the potential for applications in clinical laboratories and population genetic studies. Here we introduce a novel technique for HLA typing from NGS data based on read-mapping using a comprehensive reference panel containing all known HLA alleles and de novo assembly of the gene-specific short reads. An accurate HLA typing at high-digit resolution was achieved when it was tested on publicly available NGS data, outperforming other newly-developed tools such as HLAminer and PHLAT.

Keywords: Human leukocyte antigens, next generation sequencing, whole exome sequencing, HLA typing.
