Search results for: Distributed Data Mining
7703 Modified PSO Based Optimal Control for Maximizing Benefits of Distributed Generation System
Authors: Priyanka Sen, Kaibalya Prasad Panda, Soumyakanta Samantaray, Sreyasee Rout, Bishnupriya Biswal
Abstract:
Deregulation in the power system industry and the invention of new technologies for producing electrical energy have led to innovations in power system planning. Distributed generation (DG) is one of the most attractive of these technologies, bringing various advantages to the many entities engaged in power systems. In this paper, a model for considering DGs in the power system planning problem is presented, together with dynamic power system planning for the reduction of maintenance and operational costs. In addition, a modified particle swarm optimization (PSO) is used to find the optimal topology solution. The Voltage Profile Improvement Index (VPII) and the Line Loss Reduction Index (LLRI) are taken as benefit indices of employing DG. The effectiveness of the method is demonstrated on the IEEE 30-bus test system.
Keywords: Distributed generation, line loss reduction index, particle swarm optimization, power system, voltage profile improvement index.
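As a rough illustration of the optimization step, the Python sketch below runs a standard particle swarm optimization loop (not the authors' specific modification): particles encode candidate DG sizes, and the fitness is a toy surrogate for a weighted combination of VPII and LLRI. The bounds, weights and the surrogate function are assumptions for illustration only.

```python
# Minimal PSO sketch; the fitness is a stand-in for VPII/LLRI from a power-flow study.
import numpy as np

def pso(fitness, dim, n_particles=30, iters=100, bounds=(0.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Maximize `fitness` over a box-constrained search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions (e.g. DG sizes in MW)
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([fitness(p) for p in x])
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

def toy_fitness(dg_sizes):
    # Hypothetical surrogate for VPII/LLRI returned by a power-flow routine (not shown).
    vpii = 1.0 + 0.05 * dg_sizes.sum() - 0.010 * (dg_sizes ** 2).sum()
    llri = 1.0 + 0.03 * dg_sizes.sum() - 0.008 * (dg_sizes ** 2).sum()
    return 0.6 * vpii + 0.4 * llri

best_sizes, best_score = pso(toy_fitness, dim=3)
print(best_sizes, best_score)
```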
7702 Operational Risks Classification for Information Systems with Service-Oriented Architecture (Including Loss Calculation Example)
Authors: Irina Pyrlina
Abstract:
This article presents the results of a study conducted to identify operational risks for information systems (IS) with service-oriented architecture (SOA). Analysis of current approaches to risk and system error classifications revealed that system error classes have never been used for SOA risk estimation. Additionally, system error classes are not normally supported experimentally with real enterprise error data. Through the study, several categories from various existing error classification systems are applied, and three new error categories with sub-categories are identified. As part of operational risk analysis, a new error classification scheme is proposed for SOA applications. It is based on errors of real information systems that act as service providers for applications with service-oriented architecture. The proposed classification approach has been used to classify SOA system errors for two different enterprises (oil and gas industry, metal and mining industry). In addition, we have conducted research to identify possible losses from operational risks.
Keywords: Enterprise architecture, error classification, Oil&Gas and Metal&Mining industries, operational risks, service-oriented architecture.
7701 A Research on DC Voltage Offsets Generated by PWM-Controlled Inverters
Authors: Marios N. Moschakis
Abstract:
The increasing penetration of distributed generation and storage connected to the distribution network via PWM converters increases the possibility of a DC component (offset) in the voltage or current flowing into the grid. This occurs when even harmonics are present in the network voltage. DC components can affect the operation and safety of several grid components. Therefore, an investigation of the way they are produced is important in order to take appropriate measures for their elimination. Further research on the DC components that appear at the output voltage of converters is performed for different parameters of the PWM technique and different characteristics of the even harmonics.
Keywords: Asymmetric even harmonics, DC-offsets, distributed generation, electric machine drive systems, power quality.
7700 A High Performance MPI for Parallel and Distributed Computing
Authors: Prabu D., Vanamala V., Sanjeeb Kumar Deka, Sridharan R., Prahlada Rao B. B., Mohanram N.
Abstract:
Message Passing Interface (MPI) is widely used for parallel and distributed computing. MPICH and LAM are popular open source MPI implementations available to the parallel computing community; there are also commercial MPIs that perform better than MPICH. In this paper, we discuss a commercial Message Passing Interface, C-MPI (C-DAC Message Passing Interface). C-MPI is an MPI optimized for CLUMPS. It is found to be faster and more robust than MPICH. We have compared the performance of C-MPI and MPICH on a Gigabit Ethernet network.
Keywords: C-MPI, C-VIA, HPC, MPICH, P-COMS, PMB.
7699 Join and Meet Block Based Default Definite Decision Rule Mining from IDT and an Incremental Algorithm
Authors: Chen Wu, Jingyu Yang
Abstract:
Using maximal consistent blocks of the tolerance relation on the universe of an incomplete decision table, the concepts of join block and meet block are introduced and studied. Other blocks, such as the tolerance class and the tolerant and compatible kernels of an object, are also discussed. Upper and lower approximations based on these blocks are defined. Default definite decision rules acquired from an incomplete decision table are proposed. An incremental algorithm to update default definite decision rules is suggested for effective mining from an incomplete decision table into which data is appended. Through an example, we demonstrate how default definite decision rules based on maximal consistent blocks, join blocks and meet blocks are acquired, and how optimization is performed with the support of the discernibility matrix and discernibility function of the incomplete decision table.
Keywords: Rough set, incomplete decision table, maximal consistent block, default definite decision rule, join and meet block.
7698 An Analysis of Real-Time Distributed System under Different Priority Policies
Authors: Y. Jayanta Singh, Suresh C. Mehrotra
Abstract:
A real-time distributed computing system uses heterogeneously networked computers to solve a single problem, so coordination of activities among the computers is a complex task, and deadlines make it more complex still. The performance depends on many factors such as traffic workloads, database system architecture, underlying processors, disk speeds, etc. A simulation study has been performed to analyze the performance under different transaction scheduling settings: different workloads, arrival rates, priority policies, altered slack factors and a preemptive policy. The performance metric of the experiments is the missed percent, i.e., the percentage of transactions that the system is unable to complete. The throughput of the system depends on the arrival rate of transactions. The performance can be enhanced by altering the slack factor value; working with the slack value of a transaction can help prevent some transactions from being killed or aborted. Under the preemptive policy, many extra executions of new transactions can be carried out.
Keywords: Real-time distributed systems, slack factors, transaction scheduling, priority policies.
7697 A Scalable Media Job Framework for an Open Source Search Engine
Authors: Pooja Mishra, Chris Pollett
Abstract:
This paper explores efficient ways to implement various media-updating features such as news aggregation, video conversion, and bulk email handling. All of these jobs share the property that they are periodic in nature, and they all benefit from being handled in a distributed fashion. The data for these jobs also often comes from a social or collaborative source. We isolate the class of periodic, one-round map reduce jobs as a useful setting in which to describe and handle media-updating tasks. As such tasks are simpler than general map reduce jobs, programming them in a general map reduce platform could easily become tedious. This paper presents a MediaUpdater module of the Yioop Open Source Search Engine Web Portal designed to handle such jobs via an extension of a PHP class. We describe how to implement various media-updating tasks in our system, as well as experiments carried out using these implementations on an Amazon Web Services cluster.
Keywords: Distributed jobs framework, news aggregation, video conversion, email.
7696 Plant Varieties Selection System
Authors: Kitti Koonsanit, Chuleerat Jaruskulchai, Poonsak Miphokasap, Apisit Eiumnoh
Abstract:
Meteorological and environmental data are becoming widely used in applications such as plant variety selection. Selecting the right variety for a planted area is of utmost importance for all crops, including sugarcane, which has many varieties. A variety not selected appropriately for planting may not be adapted to the climate or soil conditions of the planted area; poor growth, bloom drop, poor fruit, and low prices tend to result from varieties that were not recommended for those areas. This paper presents a plant variety selection system for planted areas in Thailand, built from meteorological and environmental data using decision tree techniques. Developed as an environmental data analysis tool, the software makes analysis easier and faster. It is a front end to WEKA that provides fundamental data mining functions such as classification, clustering, and analysis, and it supports pre-processing, analysis, and decision tree output with result export. The software can also export and display results to the Google Maps API in order to plot plant icons effectively.
Keywords: Plant varieties selection system, decision tree, expert recommendation.
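As a rough illustration of the decision-tree recommendation idea, the sketch below uses scikit-learn rather than the WEKA front end described above; the features (rainfall, temperature, soil pH), their values and the variety labels are invented for the example.

```python
# Hypothetical sketch of a variety-recommendation decision tree with scikit-learn.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[1200, 27, 5.5], [900, 30, 6.0], [1500, 26, 5.0], [800, 31, 6.5]]  # one row per planted area
y = ["variety_A", "variety_B", "variety_A", "variety_C"]                # hypothetical sugarcane varieties

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["rainfall_mm", "temp_c", "soil_ph"]))  # learned rules
print(clf.predict([[1100, 28, 5.8]]))   # recommend a variety for a new planted area
```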
7695 On Speeding Up Support Vector Machines: Proximity Graphs Versus Random Sampling for Pre-Selection Condensation
Authors: Xiaohua Liu, Juan F. Beltran, Nishant Mohanchandra, Godfried T. Toussaint
Abstract:
Support vector machines (SVMs) are considered to be the best machine learning algorithms for minimizing the predictive probability of misclassification. However, their drawback is that for large data sets the computation of the optimal decision boundary is a time-consuming function of the size of the training set. Hence, several methods have been proposed to speed up the SVM algorithm. Here, three methods used to speed up the computation of SVM classifiers are compared experimentally on a musical genre classification problem. The simplest method pre-selects a random sample of the data before the application of the SVM algorithm. Two additional methods use proximity graphs to pre-select data that are near the decision boundary: one uses k-nearest neighbor graphs and the other relative neighborhood graphs to accomplish the task.
Keywords: Machine learning, data mining, support vector machines, proximity graphs, relative neighborhood graphs, k-nearest neighbor graphs, random sampling, training data condensation.
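A minimal sketch of the simplest speed-up compared above, training an SVM on a randomly pre-selected sample of the training data; the synthetic dataset and the 10% sampling rate are assumptions, not the paper's music-genre setup.

```python
# Random-sampling condensation before SVM training (illustrative data and sizes).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

rng = np.random.default_rng(0)
idx = rng.choice(len(X_tr), size=len(X_tr) // 10, replace=False)   # keep 10% at random

full = SVC(kernel="rbf").fit(X_tr, y_tr)                   # baseline: all training data
condensed = SVC(kernel="rbf").fit(X_tr[idx], y_tr[idx])    # condensed training set
print(full.score(X_te, y_te), condensed.score(X_te, y_te))
```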
7694 Finding an Optimized Discriminate Function for Internet Application Recognition
Authors: E. Khorram, S.M. Mirzababaei
Abstract:
Every day the usage of the Internet increases, and a world of data simply becomes accessible. Network providers do not want the provided services to be used in harmful or terrorist affairs, so they use a variety of methods to protect special regions from harmful data. One of the most important methods is the firewall. A firewall stops the transfer of such packets in several ways, but in some cases firewalls are not used because of their blind packet blocking, the high processing power needed, and their expensive prices. Here we propose a method to find a discriminant function that distinguishes between usual packets and harmful ones by statistical processing of the network router logs, so that an administrator can alert the user. This method is very fast and can be used simply alongside Internet routers.
Keywords: Data mining, firewall, optimization, packet classification, statistical pattern recognition.
7693 Transient Voltage Distribution on the Single Phase Transmission Line under Short Circuit Fault Effect
Authors: A. Kojah, A. Nacaroğlu
Abstract:
Single phase transmission lines are used to transfer data or energy between two users. Transient conditions such as switching operations and short circuit faults cause fluctuations on the waveform to be transmitted. The spatial voltage distribution on a single phase transmission line may change depending on the position and duration of the short circuit fault in the system. In this paper, the state-space representation of the single phase transmission line for a short circuit fault and for various types of terminations is given. Since the transmission line is modeled in the time domain using distributed parametric elements, the mathematical representation of the event is given in state-space (time domain) differential equation form, which also makes the problem easier to solve because of the time- and space-dependent characteristics of the voltage variations on the distributed parametrically modeled transmission line.
Keywords: Energy transmission, transient effects, transmission line, transient voltage, RLC short circuit, single phase.
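For context, the distributed-parameter model mentioned above is commonly written with the telegrapher's equations, whose spatial discretization (e.g., into cascaded RLC cells) yields a state-space form; this is a standard formulation and not necessarily the authors' exact derivation:

```latex
\frac{\partial v(x,t)}{\partial x} = -R\, i(x,t) - L\, \frac{\partial i(x,t)}{\partial t},
\qquad
\frac{\partial i(x,t)}{\partial x} = -G\, v(x,t) - C\, \frac{\partial v(x,t)}{\partial t},
```

where R, L, G and C are the per-unit-length line parameters.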
7692 An Automatic Bayesian Classification System for File Format Selection
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach to the classification of unstructured format descriptions for the identification of file formats. The main contribution of this work is the employment of data mining techniques to support file format selection using just the unstructured text description that comprises the most important format features for a particular organisation. Subsequently, the file format identification method employs a file format classifier and associated configurations to support digital preservation experts with an estimation of the required file format. Our goal is to make use of a format specification knowledge base aggregated from different Web sources in order to select a file format for a particular institution. Using the naive Bayes method, the decision support system recommends a file format to an expert for their institution. The proposed methods facilitate the selection of file formats and improve the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and specifications of file formats. To facilitate decision-making, the aggregated information about the file formats is presented as a file format vocabulary that comprises the most common terms that are characteristic of all researched formats. The goal is to suggest a particular file format based on this vocabulary for analysis by an expert. A sample file format calculation and the calculation results, including probabilities, are presented in the evaluation section.
Keywords: Data mining, digital libraries, digital preservation, file format.
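A minimal sketch of the naive Bayes step on unstructured format descriptions, using scikit-learn; the descriptions, vocabulary terms and target formats below are hypothetical stand-ins for the aggregated knowledge base described above.

```python
# Hypothetical text-based file format recommendation with a naive Bayes classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

descriptions = [
    "lossless raster image, open specification, widely supported",
    "compressed audio, proprietary codec, streaming oriented",
    "page layout document, embedded fonts, archival profile available",
]
formats = ["PNG", "MP3", "PDF/A"]                     # example target formats

model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(descriptions, formats)
query = "open lossless image format suitable for long term archival"
print(model.predict([query]), model.predict_proba([query]))   # recommended format + probabilities
```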
7691 Comparison of Adsorbents for Ammonia Removal from Mining Wastewater
Authors: Farooq A. Al-Sheikh, Carol Moralejo, Mark Pritzker, William A. Anderson, Ali Elkamel
Abstract:
Ammonia in mining wastewater is a significant problem, and treatment can be especially difficult in cold climates where biological treatment is not feasible. An adsorption process is one of the alternative processes that can be used to reduce ammonia concentrations to acceptable limits, and therefore a LEWATIT strongly acidic H+-form ion exchange resin and a Bowie Chabazite Na-form AZLB-Na zeolite were tested to assess their effectiveness. For these adsorption tests, two packed bed columns (a mini-column constructed from a 32 cm long x 1 cm diameter piece of glass tubing, and a 60 cm long x 2.5 cm diameter Ace Glass chromatography column) were used, containing varying quantities of the adsorbents. A mining wastewater with an ammonia concentration of 22.7 mg/L was fed through the columns at controlled flow rates. In the experimental work, the maximum capacities of the LEWATIT ion exchange resin were 0.438, 0.448, and 1.472 mg/g for 3, 6, and 9 g respectively in the mini-column, and 1.739 mg/g for 141.5 g in the larger Ace column, while the capacities for the AZLB-Na zeolite were 0.424 and 0.784 mg/g for 3 and 6 g respectively in the mini-column, and 1.1636 mg/g for 38.5 g in the Ace column. In the theoretical work, the Thomas, Adams-Bohart, and Yoon-Nelson models were constructed to describe the breakthrough curves of the adsorption process and to find the constants of the above-mentioned models. In the regeneration tests, 5% hydrochloric acid, HCl (v/v), and 10% sodium hydroxide, NaOH (w/v), were used to regenerate the LEWATIT resin and AZLB-Na zeolite with 44% and 63.8% recovery, respectively. In conclusion, continuous flow adsorption using a LEWATIT ion exchange resin and an AZLB-Na zeolite is efficient when using a co-flow technique for removal of ammonia from wastewater. The Thomas, Adams-Bohart, and Yoon-Nelson models fit the data satisfactorily, with R2 close to 1 in all cases.
Keywords: AZLB-Na zeolite, continuous adsorption, LEWATIT resin, models, regeneration.
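As a sketch of how constants such as those of the Thomas model can be obtained from breakthrough data, the snippet below fits C/C0 = 1 / (1 + exp(kTh·q0·m/Q − kTh·C0·t)) with SciPy; the time series, adsorbent mass, and flow rate are synthetic placeholders, not the paper's measurements.

```python
# Rough sketch: fit the Thomas model to synthetic breakthrough data to recover kTh and q0.
import numpy as np
from scipy.optimize import curve_fit

C0, m, Q = 22.7, 6.0, 0.002          # feed NH3 (mg/L), adsorbent mass (g), flow (L/min) - assumed

def thomas(t, kTh, q0):
    """Thomas model: C/C0 = 1 / (1 + exp(kTh*q0*m/Q - kTh*C0*t))."""
    return 1.0 / (1.0 + np.exp(kTh * q0 * m / Q - kTh * C0 * t))

t = np.linspace(0.0, 400.0, 40)                                   # time (min)
rng = np.random.default_rng(0)
ct_c0 = thomas(t, 0.003, 1.5) + 0.01 * rng.normal(size=t.size)    # synthetic measurements

(kTh, q0), _ = curve_fit(thomas, t, ct_c0, p0=[0.002, 1.0])
print(f"kTh = {kTh:.4f} L/(mg*min), q0 = {q0:.3f} mg/g")
```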
7690 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures
Authors: Silvina Caíno-Lores, Jesús Carretero
Abstract:
Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. There are several techniques that attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. This survey intends to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, grouped into four main categories: application development, task scheduling, in-memory computing and storage platforms. Finally, the authors include a discussion of future research lines and synergies among these techniques.
Keywords: Co-scheduling, data-centric, data-intensive, data locality, in-memory storage, large scale.
7689 Application of Computer Aided Engineering Tools in Performance Prediction and Fault Detection of Mechanical Equipment of Mining Process Line
Abstract:
Nowadays, to decrease the number of downtimes in industries such as metal mining and the petroleum and chemical industries, predictive maintenance is crucial. In order to have efficient predictive maintenance, it is very important to know the performance of critical production line equipment, such as pumps and hydrocyclones, under variable operating parameters, to select the best indicators of this equipment's health, the best locations for instrumentation, and also how to measure these indicators. In this paper, computer aided engineering (CAE) tools are implemented to study some important elements of a copper process line, namely slurry pumps and a cyclone, to predict the performance of these components under different working conditions. This modeling and simulation can be used to predict, for example, the damage tolerance of the main shaft of the slurry pump, or the wear rate and location on the cyclone wall or the pump case and impeller. The simulations can also suggest the best measuring parameters, measuring intervals, and their locations.
Keywords: Computer aided engineering, predictive maintenance, fault detection, mining process line, slurry pump, hydrocyclone.
7688 Customer Churn Prediction: A Cognitive Approach
Authors: Damith Senanayake, Lakmal Muthugama, Laksheen Mendis, Tiroshan Madushanka
Abstract:
Customer churn prediction is one of the most useful areas of study in customer analytics. Due to the enormous amount of data available for such predictions, machine learning and data mining have been heavily used in this domain. Many machine learning algorithms are directly applicable to the problem of customer churn prediction; here, we experiment with a novel approach based on a cognitive learning technique, in an attempt to improve the results obtained by combining supervised learning methods with cognitive unsupervised learning methods.
Keywords: Growing Self Organizing Maps, Kernel Methods, Churn Prediction.
7687 Influence of Distributed Generation on Congestion and LMP in Competitive Electricity Market
Authors: Durga Gautam, Mithulananthan Nadarajah
Abstract:
This paper presents the influence of distributed generation (DG) on congestion and locational marginal price (LMP) in an optimal power flow (OPF) based wholesale electricity market. The problem of optimal placement to manage congestion and reduce LMP is formulated with the objective of social welfare maximization. From a competitive electricity market standpoint, DGs have great value when they reduce load at particular locations and at particular times when feeders are heavily loaded. The paper rests on the premise that a solution to the optimal mix of generation and transmission resources can be achieved by addressing congestion and the corresponding LMP. Obtained as the Lagrangian multiplier associated with the active power flow equation for each node, the LMP gives the short run marginal cost (SRMC) of electricity. Specific grid locations are examined to study the influence of DG penetration on congestion and the corresponding shadow prices. The influence of DG on congestion and locational marginal prices is demonstrated on a modified IEEE 14-bus test system.
Keywords: Congestion management, distributed generation, electricity market, locational marginal price, optimal power flow, social welfare.
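Schematically, the LMP referred to above is the Lagrange multiplier of the nodal active power balance constraint in the OPF; a textbook formulation consistent with the abstract (not the authors' full market model) is:

```latex
\mathcal{L} \;=\; \sum_{i} C_i\!\left(P_{G_i}\right)
\;+\; \sum_{k} \lambda_k \Big( P_{D_k} - P_{G_k} + \textstyle\sum_{j} P_{kj}(V,\theta) \Big)
\;+\; \cdots ,
\qquad
\mathrm{LMP}_k \;=\; \frac{\partial \mathcal{L}}{\partial P_{D_k}} \;=\; \lambda_k ,
```

where C_i is the offer cost of generator i, P_kj the active power flowing from bus k to bus j, and the omitted terms collect line-limit and generation-limit constraints whose multipliers contribute the congestion component of the LMP.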
7686 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Subsequently, risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors along a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
Keywords: Linked open data, information integration, digital libraries, data mining.
7685 A General Variable Neighborhood Search Algorithm to Minimize Makespan of the Distributed Permutation Flowshop Scheduling Problem
Authors: G. M. Komaki, S. Mobin, E. Teymourian, S. Sheikh
Abstract:
This paper addresses minimizing the makespan of the distributed permutation flowshop scheduling problem. In this problem, there are several parallel identical factories, or flowshops, each with a series of similar machines. Each job must be allocated to one of the factories, and all of the operations of a job are performed in the allocated factory. This problem has recently gained attention, and due to its NP-hard nature, metaheuristic algorithms have been proposed to tackle it. The majority of the proposed algorithms require large computational time, which is their main drawback. In this study, a general variable neighborhood search algorithm (GVNS) is proposed into which several time-saving schemes have been incorporated. Also, the GVNS uses a sophisticated method to change the shaking procedure, or perturbation, depending on the progress of the incumbent solution, in order to prevent stagnation of the search. The performance of the proposed algorithm is compared to state-of-the-art algorithms on standard benchmark instances.
Keywords: Distributed permutation flowshop, scheduling, makespan, general variable neighborhood search algorithm.
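As a rough illustration of the objective such a GVNS evaluates, the sketch below computes each factory's permutation flowshop makespan for a given job assignment and ordering, and takes the maximum over factories; the processing times and the job split are invented for the example.

```python
# Distributed permutation flowshop makespan (illustrative data).
def flowshop_makespan(order, p):
    """Makespan of a permutation flowshop; p[j][m] = processing time of job j on machine m."""
    n_machines = len(p[0])
    c = [0.0] * n_machines                       # completion time on each machine
    for j in order:
        for m in range(n_machines):
            start = max(c[m], c[m - 1]) if m > 0 else c[m]
            c[m] = start + p[j][m]
    return c[-1]

def distributed_makespan(factory_orders, p):
    # Objective: the largest makespan among the identical factories.
    return max(flowshop_makespan(order, p) for order in factory_orders)

p = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [3, 3, 2]]         # 4 jobs x 3 machines (assumed)
print(distributed_makespan([[0, 2], [1, 3]], p))          # jobs split over 2 factories
```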
7684 Single Phase 13-Level D-STATCOM Inverter with Distributed System
Authors: R. Kamalakannan, N. Ravi Kumar
Abstract:
Global energy consumption is increasing persistently, and distributed power generation through renewable energy is essential. To meet the power requirements of consumers without voltage fluctuations and losses, the modeling and design of a multilevel inverter with Flexible AC Transmission System (FACTS) capability are presented. The presented inverter uses a 13-level cascaded H-bridge topology of Insulated Gate Bipolar Transistors (IGBTs) connected with an inbuilt Distributed Static Synchronous Compensator (DSTATCOM). The DSTATCOM device provides power factor stability at local feeder lines, and the inverter eliminates Total Harmonic Distortion (THD). The 13-level inverter utilizes 52 switches; each H-bridge is fed with a separate single DC source, and the Pulse Width Modulation (PWM) technique is used for switching the IGBTs. The control strategy implemented for the inverter transmits active power to the grid and maintains a stable power factor while achieving steady state power transmission. The significant outcome of this work is an improvement in output voltage quality with steady state power transmission and low THD. Simulation of the inverter with DSTATCOM is performed in the MATLAB/Simulink environment. A scaled prototype model of the proposed inverter was built, and its results were validated against the simulated results.
Keywords: FACTS devices, distributed static synchronous compensators, DSTATCOM, total harmonics elimination, modular multilevel converter.
7683 Network Anomaly Detection using Soft Computing
Authors: Surat Srinoy, Werasak Kurutach, Witcha Chimphlee, Siriporn Chimphlee
Abstract:
One main drawback of intrusion detection systems is the inability to detect new attacks that do not have known signatures. In this paper we discuss an intrusion detection method that proposes independent component analysis (ICA) based feature selection heuristics and uses rough fuzzy clustering of the data. ICA is used to separate independent components (ICs) from the monitored variables. Rough sets decrease the amount of data and get rid of redundancy, while fuzzy methods allow objects to belong to several clusters simultaneously, with different degrees of membership. Our approach allows us to recognize not only known attacks but also to detect activity that may be the result of a new, unknown attack. Experimental results are reported on the Knowledge Discovery and Data Mining (KDD Cup 1999) dataset.
Keywords: Network security, intrusion detection, rough set, ICA, anomaly detection, independent component analysis, rough fuzzy.
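A minimal sketch of the ICA feature-extraction step using scikit-learn's FastICA; synthetic mixed signals stand in for the KDD Cup 1999 traffic features, and the extracted components would then feed the rough fuzzy clustering stage described above.

```python
# Hypothetical ICA preprocessing for anomaly detection features.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
sources = rng.laplace(size=(1000, 8))         # non-Gaussian hidden sources (assumed)
mixing = rng.normal(size=(8, 20))
X = sources @ mixing                          # observed, mixed "traffic" features

ica = FastICA(n_components=8, random_state=0)
components = ica.fit_transform(X)             # estimated independent components
print(components.shape)                       # (1000, 8) -> input to the clustering stage
```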
7682 A Comparative Study of GTC and PSP Algorithms for Mining Sequential Patterns Embedded in Database with Time Constraints
Authors: Safa Adi
Abstract:
This paper considers the problem of mining sequential patterns embedded in a database while handling the time constraints defined in the GSP algorithm (a level-wise algorithm). We compare two previous approaches, GTC and PSP, that take up the general principles of GSP. Furthermore, this paper discusses the PG-hybrid algorithm, which uses both PSP and GTC. The results show that PSP and GTC are more efficient than GSP; on the other hand, the GTC algorithm performs better than PSP. The PG-hybrid algorithm uses the PSP algorithm for the first two passes on the database and the GTC approach for the following scans. Experiments show that the hybrid approach is very efficient for short, frequent sequences.
Keywords: Database, GTC algorithm, PSP algorithm, sequential patterns, time constraints.
7681 Response of the Residential Building Structure on Load Technical Seismicity due to Mining Activities
Authors: V. Salajka, Z. Kaláb, J. Kala, P. Hradil
Abstract:
In territories where high-intensity earthquakes are frequent, attention is paid to solving seismic problems. The paper describes two computational model variants, based on the finite element method, of the construction with different subsoil simulations (rigid or elastic subsoil). The program system ANSYS, based on the finite element method, was used for simulation and calculations. Seismic response calculations of the residential building structure were carried out for loading characterized by an accelerogram, for comparison with the response spectra method.
Keywords: Accelerogram, ANSYS, mining-induced seismicity, residential building structure, spectra, subsoil.
7680 Wind Power Mapping and NPV of Embedded Generation Systems in Nigeria
Authors: Oluseyi O. Ajayi, Ohiose D. Ohijeagbon, Mercy Ogbonnaya, Ameh Attabo
Abstract:
The study assessed the potential and economic viability of stand-alone wind systems for embedded generation, taking into account their benefits to small off-grid rural communities, at 40 meteorological sites in Nigeria. A specific electric load profile was developed to accommodate communities consisting of 200 homes, a school and a community health centre. This load profile was incorporated within the distributed generation analysis, producing energy in the MW range while optimally meeting the daily load demand of the rural communities. Twenty-four years (1987 to 2010) of wind speed data at a height of 10 m, utilized for the study, were sourced from the Nigeria Meteorological Department, Oshodi. The HOMER® software optimization tool was used for the feasibility study and design. Each site was suited to 3 MW wind turbines in sets of five; thus, 15 MW was designed for each site. This design configuration was adopted in order to easily compare the distributed generation systems among the sites and to determine their relative economic viability in terms of life cycle cost as well as levelised cost of producing energy. A net present value was estimated in terms of life cycle cost for 25 of the 40 meteorological sites. On the other hand, the remaining sites yielded a net present cost, meaning the installations at these locations were not economically viable under the present tariff regime for embedded generation in Nigeria.
Keywords: Wind speed, wind power, distributed generation, cost per kilowatt-hour, clean energy, Nigeria.
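A minimal sketch of the net-present-value style comparison underlying the study: discount each year's net cash flow (energy revenue minus O&M) over the project life. The capital cost, revenue, O&M figures and discount rate below are placeholders, not the HOMER results.

```python
# Hypothetical NPV calculation for a 15 MW embedded wind plant.
def npv(rate, cash_flows):
    """cash_flows[0] is the year-0 outlay (negative); later entries are yearly net inflows."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

capital = -45e6                         # assumed capital cost of a 15 MW wind plant (USD)
yearly_net = [6.5e6 - 1.2e6] * 25       # assumed yearly revenue minus O&M over 25 years
print(f"NPV = {npv(0.10, [capital] + yearly_net):,.0f} USD")   # positive NPV -> viable site
```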
7679 Scalable Cloud-Based LEO Satellite Constellation Simulator
Authors: Karim Sobh, Khaled El-Ayat, Fady Morcos, Amr El-Kadi
Abstract:
Distributed applications deployed on LEO satellites and ground stations require substantial communication between different members of a constellation to overcome the earth coverage barriers imposed by GEOs. Applications running on LEO constellations suffer from the earth line-of-sight blockage effect and need adequate lab testing before launching into space. We propose a scalable cloud-based network simulation framework to simulate problems created by earth line-of-sight blockage. The framework utilizes cloud IaaS virtual machines to simulate LEO satellites and the distributed software of ground stations. A factorial ANOVA statistical analysis is conducted to measure the simulator's overhead on overall communication performance. The results show a very low simulator communication overhead. Consequently, the simulation framework is proposed as a candidate for lab testing of LEO constellations with distributed software before space launch.
Keywords: LEO, cloud computing, constellation, satellite, network simulation, Netfilter.
7678 A Local Decisional Algorithm Using Agent-Based Management in Constrained Energy Environment
Authors: C. Adam, G. Henri, T. Levent, J-B Mauro, A-L Mayet
Abstract:
Energy efficiency management is at the heart of a worldwide problem. The capability of a multi-agent system as a technology to manage micro-grid operation has already been proved. This paper deals with the implementation of a decisional pattern applied to a multi-agent system which provides intelligence to a distributed local energy network considered at the local consumer level. Development of a multi-agent application involves agent specification, analysis, design, and realization; furthermore, it can be implemented by following several decisional patterns. The purpose of the present article is to suggest a new approach to a decisional pattern involving a multi-agent system to control a distributed local energy network in a decentralized competitive system. The proposed solution is the result of a dichotomous approach based on environment observation. It uses an iterative process to solve automatic learning problems and converges monotonically and very fast to the system's attracting operation point.
Keywords: Energy efficiency management, distributed smart grid, multi-agent system, decisional decentralized competitive system.
7677 Improving Classification in Bayesian Networks using Structural Learning
Authors: Hong Choon Ong
Abstract:
Naïve Bayes classifiers are simple probabilistic classifiers. Classification extracts patterns using a data file with a set of labeled training examples and is currently one of the most significant areas in data mining. However, Naïve Bayes assumes independence among the features, so structural learning among the features helps in the classification problem. In this study, the use of structural learning in a Bayesian network is proposed for cases where there are relationships between the features when using Naïve Bayes. The improvement in classification using structural learning is shown when relationships exist between the features, i.e., when they are not independent.
Keywords: Bayesian network, classification, Naïve Bayes, structural learning.
7676 Topic Modeling Using Latent Dirichlet Allocation and Latent Semantic Indexing on South African Telco Twitter Data
Authors: Phumelele P. Kubheka, Pius A. Owolawi, Gbolahan Aiyetoro
Abstract:
Twitter is one of the most popular social media platforms, where users share their opinions on different subjects. Twitter can be considered a great source for text mining due to the high volumes of data generated through the platform daily. Many industries, such as telecommunication companies, can leverage the availability of Twitter data to better understand their markets and make appropriate business decisions. This study performs topic modeling on Twitter data using Latent Dirichlet Allocation (LDA). The obtained results are benchmarked against another topic modeling technique, Latent Semantic Indexing (LSI). The study aims to retrieve topics from a Twitter dataset containing user tweets on South African Telcos. Results from this study show that LSI is much faster than LDA; however, LDA yields better results, with topic coherence higher by 8% for the best-performing model in this experiment. A higher topic coherence score indicates better performance of the model.
Keywords: Big data, latent Dirichlet allocation, latent semantic indexing, Telco, topic modeling, Twitter.
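A rough sketch of the LDA-versus-LSI comparison using gensim on a few toy "tweets"; the corpus, topic count and the u_mass coherence measure are illustrative choices, not the study's actual Twitter data or configuration.

```python
# Hypothetical gensim comparison of LDA and LSI topic models.
from gensim import corpora
from gensim.models import CoherenceModel, LdaModel, LsiModel

texts = [
    ["network", "down", "again", "no", "signal"],
    ["great", "data", "bundle", "price", "promo"],
    ["slow", "internet", "network", "complaint"],
    ["customer", "service", "helpful", "promo"],
]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]          # bag-of-words representation

lda = LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0, passes=10)
lsi = LsiModel(corpus, num_topics=2, id2word=dictionary)

coherence = CoherenceModel(model=lda, corpus=corpus, dictionary=dictionary,
                           coherence="u_mass").get_coherence()
print(lda.print_topics(), lsi.print_topics(), coherence)
```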
7675 Signal Reconstruction Using Cepstrum of Higher Order Statistics
Authors: Adnan Al-Smadi, Mahmoud Smadi
Abstract:
This paper presents an algorithm for reconstructing the phase and magnitude responses of an impulse response when only the output data are available. The system is driven by a zero-mean, independent identically distributed (i.i.d.) non-Gaussian sequence that is not observed, and the additive noise is assumed to be Gaussian. This is an important and essential problem in many practical applications in various science and engineering areas such as biomedical, seismic, and speech signal processing. The method is based on evaluating the bicepstrum of the third-order statistics of the observed output data. Simulation results are presented that demonstrate the performance of this method.
Keywords: Cepstrum, bicepstrum, third order statistics
7674 Cumulative Learning based on Dynamic Clustering of Hierarchical Production Rules (HPRs)
Authors: Kamal K. Bharadwaj, Rekha Kandwal
Abstract:
An important structuring mechanism for knowledge bases is building clusters based on the content of their knowledge objects. The objects are clustered based on the principle of maximizing intraclass similarity and minimizing interclass similarity. Clustering can also facilitate taxonomy formation, that is, the organization of observations into a hierarchy of classes that group similar events together. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy to understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality
Keywords: Cumulative learning, clustering, data mining, hierarchical production rules.