Search results for: Data Reduction
7731 An Automatic Tool for Checking Consistency between Data Flow Diagrams (DFDs)
Authors: Rosziati Ibrahim, Siow Yen Yen
Abstract:
The system development life cycle (SDLC) is a process used during the development of any system. The SDLC consists of four main phases: analysis, design, implementation and testing. During the analysis phase, a context diagram and data flow diagrams are used to produce the process model of a system. Consistency between the context diagram and the lower-level data flow diagrams is very important in smoothing the development process of a system. However, manually checking consistency from the context diagram to the lower-level data flow diagrams using a checklist is a time-consuming process. At the same time, the limited human ability to validate the errors is one of the factors that influence the correctness and balancing of the diagrams. This paper presents a tool that automates the consistency check between Data Flow Diagrams (DFDs) based on the rules of DFDs. The tool serves two purposes: as an editor to draw the diagrams and as a checker to check the correctness of the diagrams drawn. The consistency check from the context diagram to the lower-level data flow diagrams is embedded inside the tool to overcome the manual checking problem.
Keywords: Data Flow Diagram, Context Diagram, Consistency Check, Syntax and Semantic Rules.
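To illustrate the kind of rule such a checker automates, the minimal Python sketch below tests one balancing rule: the boundary data flows of a parent process in the context diagram must reappear in its decomposed child diagram. The diagram contents, flow names and dictionary structure are illustrative assumptions, not the tool's actual data model.

    # A minimal sketch of one DFD balancing rule, with made-up diagrams and flow names.
    context = {
        "P0": {"in": {"order", "payment"}, "out": {"invoice", "shipment_notice"}},
    }
    level1 = {
        "P1": {"in": {"order"}, "out": {"picking_list"}},
        "P2": {"in": {"payment", "picking_list"}, "out": {"invoice"}},
    }

    def check_balancing(parent, children):
        child_in = set().union(*(p["in"] for p in children.values()))
        child_out = set().union(*(p["out"] for p in children.values()))
        # flows produced and consumed inside the child diagram are internal, not boundary flows
        internal = child_in & child_out
        missing_in = parent["in"] - (child_in - internal)
        missing_out = parent["out"] - (child_out - internal)
        return missing_in, missing_out

    missing_in, missing_out = check_balancing(context["P0"], level1)
    print("unbalanced inputs:", missing_in)     # -> set()
    print("unbalanced outputs:", missing_out)   # -> {'shipment_notice'}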
7730 Real-Time Implementation of STANAG 4539 High-Speed HF Modem
Authors: S. Saraç, F. Kara, C. Vural
Abstract:
High-frequency (HF) communications have been used by military organizations for more than 90 years. The possibility of very long range communications without the need for advanced equipment makes HF a convenient and inexpensive alternative to satellite communications. Besides the advantages, voice and data transmission over HF is a challenging task, because the HF channel generally suffers from Doppler shift and spread, multipath, co-channel interference, and many other sources of noise. In constructing an HF data modem, all these effects must be taken into account. STANAG 4539 is a NATO standard for high-speed data transmission over HF. It allows data rates up to 12800 bps over an HF channel of 3 kHz. In this work, an efficient implementation of STANAG 4539 on a single Texas Instruments TMS320C6747 DSP chip is described. The state-of-the-art algorithms used in the receiver and the efficiency of the implementation enable real-time high-speed data/digitized voice transmission over poor HF channels.
Keywords: High frequency, modem, STANAG 4539.
7729 Detection Efficient Enterprises via Data Envelopment Analysis
Authors: S. Turkan
Abstract:
In this paper, Turkey's Top 500 Industrial Enterprises data for 2014 were analyzed by data envelopment analysis. Data envelopment analysis is used to detect efficient decision-making units, such as universities, hospitals and schools, by using inputs and outputs. The decision-making units in this study are enterprises. To detect efficient enterprises, some financial ratios are determined as inputs and outputs. For this reason, financial indicators related to the productivity of enterprises are considered. The efficient foreign-weighted owned capital enterprises are detected via the super efficiency model. According to the results, Mercedes-Benz is found to be the most efficient foreign-weighted owned capital enterprise in Turkey.
Keywords: Data envelopment analysis, super efficiency, financial ratios, BCC model.
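As a rough illustration of the underlying model (not the authors' actual computation), the Python sketch below solves the input-oriented BCC linear program with scipy, with an option to exclude the evaluated enterprise from the reference set as in super-efficiency scoring. The firms, inputs and outputs are made-up toy values.

    import numpy as np
    from scipy.optimize import linprog

    def bcc_efficiency(X, Y, o, super_eff=False):
        """Input-oriented BCC efficiency score of enterprise o.
        X: (n, n_inputs), Y: (n, n_outputs); illustrative data only."""
        n, m = X.shape
        s = Y.shape[1]
        ref = [j for j in range(n) if not (super_eff and j == o)]  # drop o itself for super efficiency
        k = len(ref)
        c = np.r_[1.0, np.zeros(k)]                      # variables: [theta, lambda_1..lambda_k]
        A_in = np.c_[-X[o].reshape(m, 1), X[ref].T]       # sum_j lam_j*x_ij - theta*x_io <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y[ref].T]        # -sum_j lam_j*y_rj <= -y_ro
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        A_eq = np.r_[0.0, np.ones(k)].reshape(1, -1)      # BCC convexity: sum_j lam_j = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (k + 1))
        return res.fun if res.success else float("nan")   # theta = 1 means efficient

    # made-up example: 2 inputs (assets, staff cost) and 1 output (profit) for four firms
    X = np.array([[20.0, 5.0], [30.0, 8.0], [25.0, 6.0], [40.0, 12.0]])
    Y = np.array([[10.0], [12.0], [14.0], [13.0]])
    print([round(bcc_efficiency(X, Y, o, super_eff=True), 3) for o in range(len(X))])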
7728 Fusion of ETM+ Multispectral and Panchromatic Texture for Remote Sensing Classification
Authors: Mahesh Pal
Abstract:
This paper proposes to use ETM+ multispectral data and the panchromatic band, as well as texture features derived from the panchromatic band, for land cover classification. Four texture features, including one 'internal texture' and three GLCM-based textures, namely correlation, entropy, and inverse difference moment, were used in combination with the ETM+ multispectral data. Two data sets involving combinations of the multispectral data, the panchromatic band and its texture were used, and the results were compared with those obtained by using the multispectral data alone. A decision tree classifier, with and without boosting, was used to classify the different datasets. Results from this study suggest that the dataset consisting of the panchromatic band, four of its texture features and the multispectral data was able to increase the classification accuracy by about 2%. In comparison, a boosted decision tree was able to increase the classification accuracy by about 3% with the same dataset.
Keywords: Internal texture, GLCM, decision tree, boosting, classification accuracy.
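A minimal sketch of how such GLCM textures can be derived from a panchromatic band is given below, assuming scikit-image's GLCM utilities and a synthetic array in place of the actual ETM+ panchromatic data; scikit-image's 'homogeneity' stands in for the inverse difference moment, and entropy is computed directly from the matrix because graycoprops does not provide it.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    # stand-in for a small window of the panchromatic band, quantised to 64 grey levels
    rng = np.random.default_rng(0)
    pan = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)

    glcm = graycomatrix(pan, distances=[1], angles=[0], levels=64,
                        symmetric=True, normed=True)
    correlation = graycoprops(glcm, "correlation")[0, 0]
    idm = graycoprops(glcm, "homogeneity")[0, 0]      # homogeneity ~ inverse difference moment
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # entropy computed from the normalised GLCM

    print(correlation, entropy, idm)
    # in practice these features are computed per moving window and stacked with the
    # ETM+ multispectral bands before training a (boosted) decision tree classifier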
7727 Research on Landscape Pattern Revolution of Land Use in Fuxian Lake Basin Based on RS and GIS
Abstract:
Based on remote sensing image data of land use for the four periods of 1980, 1995, 2005 and 2015, this study quantitatively analyzed the dynamic variation of landscape transfer and landscape pattern in the Fuxian Lake basin by constructing a land use dynamic variation model and using ArcGIS 10.5 and Fragstats 4.2. The results indicate that: (1) From the perspective of land use landscape transfer, the intensity of land use rose slowly from 1980 to 2015; the main reduced landscape type is farmland, whose net transfer-out of 788.85 hm2 is the largest among all landscape types, and the main added landscape type is construction land, whose net transfer-in of 475.23 hm2 is the largest. Meanwhile, the land use landscape variation in the 2005-2015 stage was the most severe of the three periods. (2) From the perspective of land use landscape variation, significant spatial differences are shown: the changes in the north of the basin are significantly higher than those in the south, and the changes on the west coast are apparently higher than those on the east. (3) From the perspective of landscape pattern indices, the number of patches increased over the 35-year period in the basin, and there is little mutual interference between landscape patterns because the patches are relatively discrete. Cultivated land showed a trend of fragmentation, but construction land showed a trend of relative concentration. The sustainable development and biodiversity of this basin are under threat from the fragmented landscape pattern and the poorer connectivity.
Keywords: Land use, landscape pattern evolution, landscape pattern index, Fuxian Lake basin.
7726 A Formal Approach for Instructional Design Integrated with Data Visualization for Learning Analytics
Authors: Douglas A. Menezes, Isabel D. Nunes, Ulrich Schiel
Abstract:
Most Virtual Learning Environments do not provide support mechanisms for the integrated planning, construction and follow-up of Instructional Design supported by Learning Analytics results. The present work presents an authoring tool that is responsible for constructing the structure of an Instructional Design (ID), without the data being altered during the execution of the course. The visual interface aims to present the critical situations present in this ID, serving as a support tool for course follow-up and possible improvements, which can be made during its execution or in the planning of a new edition of the course. The model for the ID is based on High-Level Petri Nets, and the visualization forms are determined by the specific kind of data generated by an e-course, a population of students generating sequentially dependent data.
Keywords: Educational data visualization, high-level Petri nets, instructional design, learning analytics.
7725 Visual Text Analytics Technologies for Real-Time Big Data: Chronological Evolution and Issues
Authors: Siti Azrina B. A. Aziz, Siti Hafizah A. Hamid
Abstract:
New approaches to analyzing and visualizing data streams on a real-time basis are important for prompt decision making by the decision maker. Financial market trading and surveillance, large-scale emergency response and crowd control are some example scenarios that require real-time analytics and data visualization. This situation has led to the development of techniques and tools that support humans in analyzing the source data. With the emergence of Big Data and social media, new techniques and tools are required in order to process the streaming data. Today, a range of tools which implement some of these functionalities are available. In this paper, we present a chronological evaluation of the evolution of technologies supporting real-time analytics and visualization of data streams. Based on research papers published from 2002 to 2014, we gathered the general information, main techniques, challenges and open issues. The techniques for streaming text visualization are identified based on the Text Visualization Browser in chronological order. This paper aims to review the evolution of streaming text visualization techniques and tools, as well as to discuss the problems and challenges for each of the identified tools.
Keywords: Information visualization, visual analytics, text mining, visual text analytics tools, big data visualization.
7724 Churn Prediction for Telecommunication Industry Using Artificial Neural Networks
Authors: Ulas Vural, M. Ergun Okay, E. Mesut Yildiz
Abstract:
Telecommunication service providers demand accurate and precise prediction of customer churn probabilities to increase the effectiveness of their customer relation services. The large amount of customer data owned by the service providers is suitable for analysis by machine learning methods. In this study, expenditure data of customers are analyzed by using an artificial neural network (ANN). The ANN model is applied to the data of customers with different billing durations. The proposed model successfully predicts the churn probabilities at 83% accuracy using only three months' expenditure data, and the prediction accuracy increases up to 89% when nine months' data are used. The experiments also show that the accuracy of the ANN model increases on an extended feature set that includes information on the changes in the bill amounts.
Keywords: Customer relationship management, churn prediction, telecom industry, deep learning, Artificial Neural Networks, ANN.
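A minimal sketch of this kind of model is shown below, assuming synthetic monthly expenditure data in place of the operators' billing records and scikit-learn's MLPClassifier as the ANN; it only illustrates the idea of the extended feature set (bill amounts plus their month-to-month changes), not the authors' trained network.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # synthetic stand-in for 9 months of per-customer expenditure; churn labels are also synthetic
    rng = np.random.default_rng(42)
    n = 2000
    bills = rng.gamma(shape=2.0, scale=30.0, size=(n, 9))   # 9 monthly bill amounts
    deltas = np.diff(bills, axis=1)                          # changes between consecutive months
    X = np.hstack([bills, deltas])                           # extended feature set
    churn = (bills[:, -3:].mean(axis=1) < 0.6 * bills[:, :3].mean(axis=1)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, churn, test_size=0.25, random_state=0)
    ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    ann.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, ann.predict(X_te)))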
7723 Design Approach to Incorporate Unique Performance Characteristics of Special Concrete
Authors: Devendra Kumar Pandey, Debabrata Chakraborty
Abstract:
The advancement in various concrete ingredients, such as plasticizers, additives and fibers, has enabled concrete technologists to develop many viable varieties of special concrete in recent decades. These varieties offer significant enhancements in both the green and hardened properties of concrete. A prudent selection of the appropriate type of concrete can resolve many design and application issues in construction projects. This paper focuses on the use of self-compacting concrete, high early strength concrete, structural lightweight concrete, fiber reinforced concrete, high performance concrete and ultra-high strength concrete in structures. The modified properties of strength at various ages, flowability, porosity, equilibrium density, flexural strength, elasticity, permeability, etc. need to be carefully studied and incorporated into the design of the structures. The paper demonstrates various mixture combinations and the concrete properties that can be leveraged. The selection of such products based on the end use of structures is proposed in order to efficiently utilize the modified characteristics of these concrete varieties. The study involves mapping the characteristics to benefits and savings for the structure from a design perspective. Self-compacting concrete in a structure is characterized by high shuttering loads, a better finish, and the feasibility of closer reinforcement spacing. The structural design procedures can be modified to specify higher formwork strength, greater height of vertical members, reduced cover and increased ductility. The transverse reinforcement can be spaced at closer intervals than in regular structural concrete. Structural lightweight concrete allows structures to be designed for reduced dead load and increased insulation. Member dimensions and steel requirements can be reduced in proportion to the roughly 25 to 35 percent reduction in dead load due to the self-weight of concrete. Steel fiber reinforced concrete can be used to design grade slabs without primary reinforcement because of its 70 to 100 percent higher tensile strength; the design procedures incorporate reductions in thickness and joint spacing. High performance concrete extends the life of structures by improving paste characteristics and durability through supplementary cementitious materials. Often, these mixes are also designed for slower heat generation in the initial phase of hydration. The structural designer can incorporate the slow development of strength in the design and specify a 56- or 90-day strength requirement. For designing high-rise building structures, the creep and elasticity properties of such concrete also need to be considered. Lastly, certain structures require performance under loading conditions much earlier than the final maturity of concrete; high early strength concrete has been designed to cater to a variety of usages at ages as early as 8 to 12 hours. Therefore, an understanding of concrete performance specifications for special concrete is a definite gateway to a superior structural design approach.
Keywords: High performance concrete, special concrete, structural design, structural lightweight concrete.
7722 A Technical Perspective on Roadway Safety in Eastern Province: Data Evaluation and Spatial Analysis
Authors: Muhammad Farhan, Sayed Faruque, Amr Mohammed, Sami Osman, Omar Al-Jabari, Abdul Almojil
Abstract:
Saudi Arabia has seen a drastic increase in traffic-related crashes in recent years. With a population of over 29 million, Saudi Arabia is considered a fast growing and emerging economy. The rapid population increase and economic growth have resulted in rapid expansion of transportation infrastructure, which has led to an increase in road crashes. The Saudi Ministry of Interior reported more than 7,000 people killed and 68,000 injured in 2011, ranking Saudi Arabia among the worst worldwide in traffic safety. The traffic safety issues in the country also cause distress to road users and an economic loss exceeding 3.7 billion Euros annually. Keeping this in view, researchers in Saudi Arabia are investigating ways to improve traffic safety conditions in the country. This paper presents a multilevel approach to collecting the traffic safety related data required for traffic safety studies in the region. Two highway corridors, the 39-kilometre King Fahd Highway and the 42-kilometre Gulf Cooperation Council Highway, connecting the cities of Dammam and Khobar, were selected as the study area. The traffic data collected included traffic counts, crash data, travel time data, and speed data. The collected data were analysed using a geographic information system to evaluate any correlation. Further research is needed to investigate the effectiveness of traffic safety related data when collected in a concerted effort.
Keywords: Crash Data, Data Collection, Traffic Safety.
7721 Machine Scoring Model Using Data Mining Techniques
Authors: Wimalin S. Laosiritaworn, Pongsak Holimchayachotikul
Abstract:
This article proposes a methodology for computer numerical control (CNC) machine scoring. The case study company is a manufacturer of hard disk drive parts in Thailand. In this company, samples of parts manufactured on CNC machines are usually taken randomly for quality inspection. These inspection data were used to make a decision to shut down a machine if it showed a tendency to produce parts that are out of specification. A large amount of data is produced in this process, and data mining can be a very useful technique for analyzing it. In this research, data mining techniques were used to construct a machine scoring model called the 'machine priority assessment model (MPAM)'. This model helps to ensure that machines with a higher risk of producing defective parts are inspected before those with lower risk. If a defect-prone machine is identified sooner, defective parts and rework can be reduced, hence improving overall productivity. The results showed that the proposed method can be successfully implemented and approximately 351,000 baht of opportunity cost could have been saved in the case study company.
Keywords: Computer Numerical Control, Data Mining, Hard Disk Drive.
7720 The Impact of Seasonality on Rainfall Patterns: A Case Study
Authors: Priti Kaushik, Randhir Singh Baghel, Somil Khandelwal
Abstract:
This study uses whole-year data from Rajasthan, India, at the meteorological divisional level to analyze and evaluate long-term spatiotemporal trends in rainfall, and looks at the data from each of the thirteen tehsils in the Jaipur district to see how the rainfall pattern has altered over the last 10 years. Data on daily rainfall from the Indian Meteorological Department (IMD) in Jaipur are available for the years 2012 through 2021. We mainly focus on comparing tehsil-wise data in the Jaipur district, Rajasthan, India. The analysis also shows that July and August always see higher rainfall than any other month. Rainfall usually starts to rise around week 25 and peaks in week 32 or 33. On several occasions, 2017 saw the least rainfall over the 10-year span, while the greatest rainfall between 2012 and 2021 fell in 2013, 2019, and 2020.
Keywords: Data analysis, extreme events, rainfall, descriptive case studies, precipitation temperature.
7719 Enhance the Power of Sentiment Analysis
Authors: Yu Zhang, Pedro Desouza
Abstract:
Since big data has become substantially more accessible and manageable due to the development of powerful tools for dealing with unstructured data, people are eager to mine information from social media resources that could not be handled in the past. Sentiment analysis, as a novel branch of text mining, has in the last decade become increasingly important in marketing analysis, customer risk prediction and other fields. Scientists and researchers have undertaken significant work in creating and improving their sentiment models. In this paper, we present a concept of selecting appropriate classifiers based on the features and qualities of data sources by comparing the performances of five classifiers with three popular social media data sources: Twitter, Amazon Customer Reviews, and Movie Reviews. We introduce a couple of innovative models that outperform traditional sentiment classifiers for these data sources, and provide insights on how to further improve the predictive power of sentiment analysis. The modeling and testing work was done in R and Greenplum in-database analytic tools.
Keywords: Sentiment Analysis, Social Media, Twitter, Amazon, Data Mining, Machine Learning, Text Mining.
7718 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks
Authors: Wang Yichen, Haruka Yamashita
Abstract:
In recent years, in the field of sports, decision making, such as which members play in a game and the strategy of the game, based on the analysis of accumulated sports data has been widely attempted. In fact, in the NBA basketball league, where the world's highest level players gather, teams analyze the data using various statistical techniques in order to win games. However, it is difficult to analyze the game data for each play, such as the ball tracking or the motion of the players, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method for real-time game play data is needed. In this research, we propose an analytical model for "determining the optimal lineup composition" using real-time play data, a task that is considered difficult for all coaches. Because replacing the entire lineup is too complicated, the practical questions for player replacement are "whether or not the lineup should be changed" and "whether or not a Small Ball lineup should be adopted". Therefore, we propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, we can accumulate scoring data for each play, which indicates a player's contribution to the game, and the scoring data can be considered as time series data. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, and an NN (Neural Network) model, which can analyze the situation on the court, to build a prediction model of the score. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected the accumulated NBA data from the 2019-2020 season and applied the method to the actual basketball play data to verify the reliability of the proposed model.
Keywords: Recurrent Neural Network, players lineup, basketball data, decision making model.
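A minimal sketch of such a combined architecture is given below, assuming Keras and made-up tensor shapes for the play-by-play scoring sequence and the game-situation vector; it only shows how a recurrent branch and a feed-forward branch can be joined into one score predictor, not the authors' trained model.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    # hypothetical shapes: 10 past plays with 4 per-play scoring stats (time series)
    # and a 6-dimensional vector describing the current situation/lineup
    seq_in = layers.Input(shape=(10, 4), name="play_sequence")
    ctx_in = layers.Input(shape=(6,), name="situation")

    h = layers.SimpleRNN(32)(seq_in)                    # recurrent branch for the time series
    c = layers.Dense(16, activation="relu")(ctx_in)     # feed-forward branch for the situation
    merged = layers.Concatenate()([h, c])
    out = layers.Dense(1)(merged)                       # predicted score contribution

    model = Model([seq_in, ctx_in], out)
    model.compile(optimizer="adam", loss="mse")

    # synthetic placeholder data just to show the call pattern
    X_seq = np.random.rand(256, 10, 4).astype("float32")
    X_ctx = np.random.rand(256, 6).astype("float32")
    y = np.random.rand(256, 1).astype("float32")
    model.fit([X_seq, X_ctx], y, epochs=2, batch_size=32, verbose=0)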
7717 New Multisensor Data Fusion Method Based on Probabilistic Grids Representation
Authors: Zhichao Zhao, Yi Liu, Shunping Xiao
Abstract:
A new data fusion method called the joint probability density matrix (JPDM) is proposed, which can associate and fuse measurements from spatially distributed heterogeneous sensors to identify the real target in a surveillance region. Using the probabilistic grids representation, we numerically combine the uncertainty regions of all the measurements in a general framework. The NP-hard multisensor data fusion problem is thereby converted to a peak picking problem in the grid map. Unlike most of the existing data fusion methods, the JPDM method does not need association processing and will not lead to combinatorial explosion. Its convergence to the CRLB with a diminishing grid size has been proved. Simulation results are presented to illustrate the effectiveness of the proposed technique.
Keywords: Cramer-Rao lower bound (CRLB), data fusion, probabilistic grids, joint probability density matrix, localization, sensor network.
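A toy sketch of the grid-based fusion idea is given below, assuming three hypothetical range-only sensors with Gaussian noise; each sensor's likelihood is accumulated on a common grid and the fused position is read off by peak picking, in the spirit of the JPDM (the actual method and its CRLB analysis are in the paper).

    import numpy as np

    rng = np.random.default_rng(7)
    # three hypothetical range-only sensors, a target and the range noise level
    sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    target = np.array([6.0, 4.0])
    sigma = 0.5
    ranges = np.linalg.norm(sensors - target, axis=1) + rng.normal(0, sigma, 3)

    xs = np.linspace(-2.0, 12.0, 281)
    ys = np.linspace(-2.0, 12.0, 281)
    gx, gy = np.meshgrid(xs, ys)

    log_jpdm = np.zeros_like(gx)
    for (sx, sy), r in zip(sensors, ranges):
        d = np.hypot(gx - sx, gy - sy)                  # distance of each grid cell to the sensor
        log_jpdm += -(d - r) ** 2 / (2.0 * sigma ** 2)  # accumulate the Gaussian log-likelihood

    iy, ix = np.unravel_index(np.argmax(log_jpdm), log_jpdm.shape)  # peak picking
    print("fused position estimate:", xs[ix], ys[iy])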
7716 Sampled-Data Model Predictive Tracking Control for Mobile Robot
Authors: Wookyong Kwon, Sangmoon Lee
Abstract:
In this paper, a sampled-data model predictive tracking control method is presented for mobile robots, which are modeled as constrained continuous-time linear parameter varying (LPV) systems. The presented sampled-data predictive controller is designed by a linear matrix inequality approach. Based on the input delay approach, a controller design condition is derived by constructing a new Lyapunov function. Finally, a numerical example is given to demonstrate the effectiveness of the presented method.
Keywords: Model predictive control, sampled-data control, linear parameter varying systems, LPV.
7715 Talent Management through Integration of Talent Value Chain and Human Capital Analytics Approaches
Authors: Wuttigrai Ngamsirijit
Abstract:
Talent management in today's modern organizations has become data-driven due to the demand for objective human resource decision making and the development of analytics technologies. HR managers have faced obstacles in exploiting data and information to make effective talent management decisions. These include process-based data and records; insufficient human capital-related measures and metrics; a lack of capability for strategic data modeling; and the time it takes to add up numbers and make decisions. This paper proposes a framework for talent management through the integration of talent value chain and human capital analytics approaches. It encompasses key data, measures, and metrics regarding strategic talent management decisions along the organizational and talent value chain. Moreover, specific predictive and prescriptive models incorporating these data and information are recommended to help managers understand the state of talent, the gaps in managing talent and the organization, and the ways to develop optimized talent strategies.
Keywords: Decision making, human capital analytics, talent management, talent value chain.
7714 Integrating Fast Karnough Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions
Authors: Hazem M. El-Bakry
Abstract:
In this paper, a new fast simplification method is presented. This method realizes a Karnough map with a large number of variables. In order to accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain. The search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented by using a new design of neural networks. Neural networks are used because they are fault tolerant and, as a result, they can recognize signals even with noise or distortion. This is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum amount of components. This is done by using modular neural nets (MNNs) that divide the input space into several homogenous regions. This approach is applied to implement the XOR function, 16 logic functions on the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.
Keywords: Boolean Functions, Simplification, Karnough Map, Implementation of Logic Functions, Modular Neural Networks.
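The frequency-domain search can be pictured with the small NumPy sketch below, which locates 2x2 groups of ones in a binary map by circular cross correlation computed through the FFT; the map contents and the fixed group size are illustrative assumptions, not the paper's general procedure or its complexity analysis.

    import numpy as np

    # illustrative binary map; circular wraparound matches Karnaugh-map adjacency
    kmap = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 1],
                     [0, 0, 0, 1],
                     [0, 0, 0, 0]])
    template = np.ones((2, 2))

    # cross correlation via FFT: IFFT( FFT(map) * conj(FFT(zero-padded template)) )
    F_map = np.fft.fft2(kmap)
    F_tpl = np.fft.fft2(template, s=kmap.shape)
    corr = np.real(np.fft.ifft2(F_map * np.conj(F_tpl)))

    # a correlation value of 4 means all four cells under the template are ones
    rows, cols = np.where(np.isclose(corr, template.size))
    print(list(zip(rows.tolist(), cols.tolist())))   # -> [(0, 0)]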
7713 Enhancing K-Means Algorithm with Initial Cluster Centers Derived from Data Partitioning along the Data Axis with the Highest Variance
Authors: S. Deelers, S. Auwatanamongkol
Abstract:
In this paper, we propose an algorithm to compute initial cluster centers for K-means clustering. Data in a cell are partitioned using a cutting plane that divides the cell into two smaller cells. The plane is perpendicular to the data axis with the highest variance and is designed to reduce the sum of squared errors of the two cells as much as possible, while at the same time keeping the two cells as far apart as possible. Cells are partitioned one at a time until the number of cells equals the predefined number of clusters, K. The centers of the K cells become the initial cluster centers for K-means. The experimental results suggest that the proposed algorithm is effective, converging to better clustering results than those of the random initialization method. The research also indicated that the proposed algorithm would greatly improve the likelihood of every cluster containing some data in it.
Keywords: Clustering algorithm, K-means algorithm, Data partitioning, Initial cluster centers.
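A simplified Python sketch of the partitioning idea follows; for brevity it cuts the selected cell at the mean of its highest-variance axis rather than at the error-minimising position described in the paper, and it uses synthetic 2-D data.

    import numpy as np

    def partition_init(X, k):
        """Initial K-means centres by repeatedly splitting the highest-variance cell
        along its highest-variance axis (simplified: cut at the axis mean)."""
        cells = [X]
        while len(cells) < k:
            # split the cell with the largest total within-cell variance
            i = int(np.argmax([c.var(axis=0).sum() * len(c) for c in cells]))
            cell = cells.pop(i)
            axis = int(np.argmax(cell.var(axis=0)))        # axis with the highest variance
            cut = cell[:, axis].mean()                      # cutting-plane position (simplified)
            left, right = cell[cell[:, axis] <= cut], cell[cell[:, axis] > cut]
            cells.extend([left, right])
        return np.array([c.mean(axis=0) for c in cells])    # cell centres -> initial centres

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in ((0, 0), (3, 0), (0, 3))])
    print(partition_init(X, 3))   # feed these as init= centres to a K-means implementation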
7712 Integrating Fast Karnough Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions
Authors: Hazem M. El-Bakry
Abstract:
In this paper, a new fast simplification method is presented. This method realizes a Karnough map with a large number of variables. In order to accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain. The search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented by using a new design of neural networks. Neural networks are used because they are fault tolerant and, as a result, they can recognize signals even with noise or distortion. This is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum amount of components. This is done by using modular neural nets (MNNs) that divide the input space into several homogenous regions. This approach is applied to implement the XOR function, 16 logic functions on the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.
Keywords: Boolean functions, simplification, Karnough map, implementation of logic functions, modular neural networks.
7711 Semi-Supervised Outlier Detection Using a Generative and Adversary Framework
Authors: Jindong Gu, Matthias Schubert, Volker Tresp
Abstract:
In many outlier detection tasks, only training data belonging to one class, i.e., the positive class, is available. The task is then to predict a new data point as belonging either to the positive class or to the negative class, in which case the data point is considered an outlier. For this task, we propose a novel corrupted Generative Adversarial Network (CorGAN). In the adversarial process of training CorGAN, the Generator generates outlier samples for the negative class, and the Discriminator is trained to distinguish the positive training data from the generated negative data. The proposed framework is evaluated using an image dataset and a real-world network intrusion dataset. Our outlier-detection method achieves state-of-the-art performance on both tasks.
Keywords: Outlier detection, generative adversary networks, semi-supervised learning.
7710 Methodology of the Turkey’s National Geographic Information System Integration Project
Authors: Buse A. Ataç, Doğan K. Cenan, Arda Çetinkaya, Naz D. Şahin, Köksal Sanlı, Zeynep Koç, Akın Kısa
Abstract:
With their spatial data reliability, interpretation and querying capabilities, Geographical Information Systems make significant contributions to scientists, planners and practitioners. Geographic information systems have received great attention in today's digital world, growing rapidly and increasing their efficiency of use. Access to and use of current and accurate geographical data, which are the most important components of a Geographical Information System, has become a necessity rather than a need for sustainable and economic development. This project aims to enable the sharing of data collected by public institutions and organizations on a web-based platform. Within the scope of the project, the INSPIRE (Infrastructure for Spatial Information in the European Community) data specifications are considered as a road map. In this context, Turkey's National Geographic Information System (TUCBS) Integration Project supports sharing spatial data among 61 pilot public institutions in compliance with defined national standards. In this paper, prepared by the project team members of the TUCBS Integration Project, the technical process is explained with a detailed methodology. The main technical processes of the project consist of Geographic Data Analysis, Geographic Data Harmonization (Standardization), Web Service Creation (WMS, WFS) and Metadata Creation-Publication. The integration process carried out so that the data produced by the 61 institutions can be shared from the National Geographic Data Portal (GEOPORTAL) is conveyed with a detailed methodology.
Keywords: Data specification, geoportal, GIS, INSPIRE, TUCBS, Turkey’s National Geographic Information System.
7709 Exploring SSD Suitable Allocation Schemes Incompliance with Workload Patterns
Authors: Jae Young Park, Hwansu Jung, Jong Tae Kim
Abstract:
In Solid-State-Drive (SSD) performance, whether the data have been well parallelized is an important factor. SSD parallelization is affected by the allocation scheme, and it is directly connected to SSD performance. Representative allocation schemes include dynamic allocation and static allocation. Dynamic allocation is more adaptive in exploiting write operation parallelism, while static allocation is better for read operation parallelism. Therefore, it is hard to select the appropriate allocation scheme when the workload mixes read and write operations. We simulated conditions on a few mixed data patterns and analyzed the results to help make the right choice for better performance. The results show that if the data arrival interval is long enough for prior operations to finish and the environment is continuously read-intensive, static allocation is more suitable. Dynamic allocation performs best on write performance and random data patterns.
Keywords: Dynamic allocation, NAND Flash based SSD, SSD parallelism, static allocation.
7708 WebAppShield: An Approach Exploiting Machine Learning to Detect SQLi Attacks in an Application Layer in Run-Time
Authors: Ahmed Abdulla Ashlam, Atta Badii, Frederic Stahl
Abstract:
In recent years, SQL injection attacks have been identified as being prevalent against web applications. They affect network security and user data, which leads to a considerable loss of money and data every year. This paper presents the use of machine learning classification algorithms to classify login input data as "SQLi" or "Non-SQLi", thus increasing the reliability and accuracy of results in terms of deciding whether an operation is an attack or a valid operation. A method, implemented as a web app, is developed for auto-generated data replication to provide a twin of the targeted data structure. A shield against SQLi attacks (WebAppShield) has been developed that verifies all users and prevents attackers from entering or accessing the database, admitting only the requests that the machine learning module predicts as "Non-SQLi". A special login form has been developed with a special instance of data validation; this verification process secures the web application from its early stages. The system has been tested and validated, and up to 99% of SQLi attacks have been prevented.
Keywords: SQL injection, attacks, web application, accuracy, database, WebAppShield.
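A minimal sketch of the classification step is given below, assuming a tiny hand-made set of login inputs and a scikit-learn character n-gram pipeline in place of the system's actual training corpus and model; it only shows how inputs can be labelled "SQLi" or "Non-SQLi" before a request is allowed to reach the database.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # hypothetical training samples; a real deployment would use a large labelled corpus
    inputs = ["alice", "bob123", "' OR '1'='1", "admin'--", "carol@example.com",
              "1; DROP TABLE users", "UNION SELECT password FROM users", "dave_smith"]
    labels = ["Non-SQLi", "Non-SQLi", "SQLi", "SQLi", "Non-SQLi", "SQLi", "SQLi", "Non-SQLi"]

    # character n-grams capture tokens such as quotes, comment markers (--) and SQL keywords
    clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                        LogisticRegression(max_iter=1000))
    clf.fit(inputs, labels)

    for field in ["eve", "' OR 1=1 --"]:
        print(field, "->", clf.predict([field])[0])   # gate the login request on this label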
7707 Adaptive Kernel Principal Analysis for Online Feature Extraction
Authors: Mingtao Ding, Zheng Tian, Haixia Xu
Abstract:
The batch nature of standard kernel principal component analysis (KPCA) methods limits them in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for the online extraction of the kernel principal components (KPC). The contribution of this paper may be divided into two parts. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and indicates the KPC variation caused by the new data. The proposed method not only alleviates the sub-optimality of the KPCA method for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in terms of both computational speed and approximation accuracy.
Keywords: Adaptive method, kernel principal component analysis, online extraction, recursive algorithm.
7706 Power Ultrasound Application on Convective Drying of Banana (Musa paradisiaca), Mango (Mangifera indica L.) and Guava (Psidium guajava L.)
Authors: Erika K. Méndez, Carlos E. Orrego, Diana L. Manrique, Juan D. Gonzalez, Doménica Vallejo
Abstract:
High moisture content in fruits generates post-harvest problems such as mechanical, biochemical, microbial and physical losses. Dehydration, which is based on the reduction of the water activity of the fruit, is a common option for overcoming such losses. However, regular hot air drying can negatively affect the quality properties of the fruit due to the long residence time at high temperature. Power ultrasound (US) application during convective drying has been used as a novel method able to enhance the drying rate and, consequently, to decrease the drying time. In the present study, a new approach was tested to evaluate the effect of US on the drying time, the final antioxidant activity (AA) and the total polyphenol content (TPC) of banana slices (BS), mango slices (MS) and guava slices (GS). The drying kinetics were also studied with nine different models, from which effective water diffusivities (Deff), with or without shrinkage corrections, were calculated. Compared with the corresponding control tests, US-assisted drying of fruit slices showed reductions in drying time between 16.23 and 30.19%, 11.34 and 32.73%, and 19.25 and 47.51% for the MS, BS and GS, respectively. Considering shrinkage effects, the calculated Deff values ranged from 1.67×10^-10 to 3.18×10^-10 m2/s, 3.96×10^-10 to 5.57×10^-10 m2/s and 4.61×10^-10 to 8.16×10^-10 m2/s for the BS, MS and GS samples, respectively. Reductions of TPC and AA (as DPPH) were observed in all kinds of drying assays compared with the original content in fresh fruit.
Keywords: Banana, drying, effective diffusivity, guava, mango, ultrasound.
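For orientation, effective diffusivity is commonly estimated from the slope of ln(moisture ratio) versus time using the first term of Fick's solution for an infinite slab, MR = (8/pi^2)*exp(-pi^2*Deff*t/(4*L^2)); the short sketch below does this on made-up drying data and an assumed slice half-thickness (the paper itself fits nine different kinetic models, with and without shrinkage correction).

    import numpy as np

    # made-up drying curve and slice half-thickness, for illustration only
    t = np.array([0, 600, 1200, 1800, 2400, 3000, 3600])        # time, s
    MR = np.array([1.00, 0.88, 0.77, 0.67, 0.59, 0.51, 0.45])   # moisture ratio
    L = 2.5e-3                                                   # half-thickness, m

    slope, _ = np.polyfit(t, np.log(MR), 1)     # slope of ln(MR) vs t = -pi^2*Deff/(4*L^2)
    Deff = -slope * 4 * L**2 / np.pi**2
    print(f"Deff = {Deff:.2e} m^2/s")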
7705 Proposing an Efficient Method for Frequent Pattern Mining
Authors: Vaibhav Kant Singh, Vijay Shah, Yogendra Kumar Jain, Anupam Shukla, A.S. Thoke, Vinay Kumar Singh, Chhaya Dule, Vivek Parganiha
Abstract:
Data mining is the exploration of knowledge from large sets of data generated as a result of various data processing activities. Frequent pattern mining is a very important task in data mining. Previous approaches applied to generate frequent sets generally adopt candidate generation and pruning techniques to satisfy the desired objective. This paper shows how the different approaches achieve the objective of frequent mining, along with the complexities required to perform the job. This paper also looks at a hardware approach based on cache coherence to improve the efficiency of the above process. The process of data mining is helpful in generating support systems that can help in Management, Bioinformatics, Biotechnology, Medical Science, Statistics, Mathematics, Banking, Networking and other computer-related applications. This paper proposes the use of both the upward and the downward closure property for the extraction of frequent item sets, which reduces the total number of scans required for the generation of candidate sets.
Keywords: Data Mining, Candidate Sets, Frequent Item Set, Pruning.
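The downward closure (anti-monotonicity) pruning that such methods rely on can be sketched in a few lines of Python, as below; this is a plain Apriori-style illustration on toy baskets, not the paper's proposed method, which additionally exploits an upward closure property and cache coherence.

    from itertools import combinations

    def apriori(transactions, min_support):
        """Frequent itemsets with downward-closure pruning (minimal Apriori sketch)."""
        transactions = [frozenset(t) for t in transactions]
        n = len(transactions)
        support = lambda s: sum(s <= t for t in transactions) / n
        items = {i for t in transactions for i in t}
        level = [frozenset([i]) for i in sorted(items) if support(frozenset([i])) >= min_support]
        frequent = {s: support(s) for s in level}
        k = 2
        while level:
            # join (k-1)-itemsets, then keep only candidates whose every (k-1)-subset
            # is frequent: the downward closure / anti-monotonicity pruning step
            candidates = {a | b for a in level for b in level if len(a | b) == k}
            candidates = [c for c in candidates
                          if all(frozenset(sub) in frequent for sub in combinations(c, k - 1))]
            level = [c for c in candidates if support(c) >= min_support]
            frequent.update({c: support(c) for c in level})
            k += 1
        return frequent

    baskets = [["milk", "bread"], ["milk", "bread", "eggs"], ["bread", "eggs"], ["milk", "eggs"]]
    print(apriori(baskets, min_support=0.5))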
7704 Design Transformation to Reduce Cost in Irrigation Using Value Engineering
Authors: F. S. Al-Anzi, M. Sarfraz, A. Elmi, A. R. Khan
Abstract:
Researchers are responding to the environmental challenges of Kuwait in localized, innovative, effective and economic ways. One of the vital and significant examples of these natural challenges is the lack of water and desertification. In this research, the project team focuses on redesigning a prototype, using the Value Engineering methodology, that provides functionality similar to the well-known Waterboxx kit technology while reducing the capital and operational costs and simplifying the process of manufacturing and use by regular farmers. The design employs used tires and recycled plastic sheets as raw materials. Hence, this approach will help not only in fighting desertification but also in getting rid of the ever-growing tire dumps in Kuwait, as well as in avoiding the hazards of tire fires, yielding a safer and friendlier environment. Several alternatives for implementing the prototype have been considered. The best alternative in terms of value was selected after a thorough Function Analysis System Technique (FAST) exercise was developed. A prototype has been fabricated and tested in a controlled, simulated lab environment, followed by field testing in a real environment. Water and soil analyses were conducted on the site of the experiment to compare the composition of the soil before and after the experiment and to ensure that the tested prototype is environmentally safe. Experimentation shows that the design was as effective as, and may exceed, the original design, with significant savings in cost: an estimated total cost reduction of 43.84% over the original design using the VE approach. This cost reduction does not consider the intangible environmental benefits of waste recycling, which may further intensify the total savings of using the alternative VE design. This case study shows that the Value Engineering methodology can be an important tool in innovating new designs for reducing costs.
Keywords: Desertification, functional analysis, scrap tires, value engineering, waste recycling, water irrigation rationing.
7703 Danger Theory and Intelligent Data Processing
Authors: Anjum Iqbal, Mohd Aizaini Maarof
Abstract:
The Artificial Immune System (AIS) is a relatively naive paradigm for intelligent computation. The inspiration for AIS is derived from the natural Immune System (IS). Classically, it is believed that the IS strives to discriminate between self and non-self. Most of the existing AIS research is based on this approach. Danger Theory (DT) argues against this approach and proposes that the IS fights against danger-producing elements and tolerates others. We, the computational researchers, are not concerned with the arguments among immunologists but try to extract from them novel abstractions for intelligent computation. This paper aims to follow the DT inspiration for intelligent data processing. The approach may introduce a new avenue in intelligent processing. The data used are system call data, which are potentially significant in intrusion detection applications.
Keywords: Artificial immune system, danger theory, intelligent processing, system calls.
7702 Using Artificial Neural Network to Forecast Groundwater Depth in Union County Well
Authors: Zahra Ghadampour, Gholamreza Rakhshandehroo
Abstract:
A concern that researchers usually face in different applications of Artificial Neural Networks (ANN) is the determination of the size of the effective domain in time series. In this paper, a trial and error method was used on the groundwater depth time series of an observation well in Union County, New Jersey, U.S., to determine the size of the effective domain in the series. Different domains of 20, 40, 60, 80, 100, and 120 preceding days were examined, and 80 days was considered the effective length of the domain. Data sets in different domains were fed to a Feed Forward Back Propagation ANN with one hidden layer, and the groundwater depths were forecasted. The Root Mean Square Error (RMSE) and the correlation factor (R2) of estimated and observed groundwater depths for all domains were determined. In general, the groundwater depth forecast improved, as evidenced by lower RMSEs and higher R2s, when the domain length increased from 20 to 120. However, 80 days was selected as the effective domain because the improvement was less than 1% beyond that. Forecasted groundwater depths utilizing measured daily data (set #1) and data averaged over the effective domain (set #2) were compared. It was postulated that the more accurate nature of the measured daily data was the reason for the better forecast, with a lower RMSE (0.1027 m compared to 0.255 m), in set #1. However, the size of the input data in this set was 80 times the size of the input data in set #2, a factor that may increase the computational effort unpredictably. It was concluded that data averaged over 80 days may be successfully utilized to lower the size of the input data sets considerably, while maintaining the effective information in the data set.
Keywords: Neural networks, groundwater depth, forecast.
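The trial-and-error search over domain lengths can be sketched as below, assuming a synthetic depth series and scikit-learn's MLPRegressor in place of the study's Feed Forward Back Propagation network; RMSE and R2 are reported for each candidate domain so the point of diminishing improvement can be located.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_squared_error, r2_score

    # synthetic daily groundwater depth series standing in for the Union County well data
    rng = np.random.default_rng(1)
    t = np.arange(3000)
    depth = 5 + 0.8 * np.sin(2 * np.pi * t / 365) + np.cumsum(rng.normal(0, 0.01, t.size))

    def windows(series, d):
        # each sample: the preceding d daily depths; target: the next day's depth
        X = np.array([series[i:i + d] for i in range(series.size - d)])
        return X, series[d:]

    for d in (20, 40, 60, 80, 100, 120):
        X, y = windows(depth, d)
        split = int(0.8 * len(y))
        model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
        model.fit(X[:split], y[:split])
        pred = model.predict(X[split:])
        rmse = mean_squared_error(y[split:], pred) ** 0.5
        print(f"domain {d:3d} days  RMSE={rmse:.4f}  R2={r2_score(y[split:], pred):.3f}")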