Search results for: HarmonySearch Algorithm

1652 Nine-Level Shunt Active Power Filter Associated with a Photovoltaic Array Coupled to the Electrical Distribution Network

Authors: Zahzouh Zoubir, Bouzaouit Azzeddine, Gahgah Mounir

Abstract:

The growing use of electronic power switches with nonlinear behavior generates non-sinusoidal currents in distribution networks, which damages domestic and industrial equipment. The multi-level shunt active power filter is shown to be an adequate solution to this problem. Nevertheless, the difficulty of regulating the active filter's DC supply voltage requires another technology to ensure it. In this article, a photovoltaic generator is connected to the DC bus terminals of the active filter. The proposed system consists of a field of solar panels, three multi-level voltage inverters connected to the power grid, and a non-linear load consisting of a six-diode rectifier bridge supplying a resistive-inductive load. Active and reactive power current control techniques are used to compensate for both harmonic currents and reactive power, as well as to inject active solar power into the distribution network. A Perturb and Observe maximum power point tracking algorithm is applied. Simulation results obtained under the MATLAB/Simulink environment show the performance of the control commands, which ensure solar power injection into the network, harmonic current compensation and power factor correction.
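As an illustration of the Perturb and Observe search mentioned above, the minimal Python sketch below adjusts a voltage reference according to the sign of the last power change; the step size, starting reference and measurement interface are hypothetical placeholders, not values from the paper.

```python
class PerturbAndObserve:
    """Minimal Perturb and Observe (P&O) MPPT loop: perturb the voltage
    reference and keep the direction that increases the measured power."""

    def __init__(self, v_ref=30.0, step=0.5):
        self.v_ref = v_ref      # current voltage reference (V), hypothetical start
        self.step = step        # perturbation size (V), hypothetical
        self.prev_v = None
        self.prev_p = None

    def update(self, v_meas, i_meas):
        p = v_meas * i_meas
        if self.prev_v is not None:
            dp = p - self.prev_p
            dv = v_meas - self.prev_v
            # Same sign of dP and dV -> keep the perturbation direction, else reverse it.
            self.v_ref += self.step if (dp >= 0) == (dv >= 0) else -self.step
        self.prev_v, self.prev_p = v_meas, p
        return self.v_ref
```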

Keywords: MPPT, active power filter, PV array, perturb and observe algorithm, PWM-control.

1651 Increasing the Resilience of Cyber Physical Systems in Smart Grid Environments using Dynamic Cells

Authors: Andrea Tundis, Carlos García Cordero, Rolf Egert, Alfredo Garro, Max Mühlhäuser

Abstract:

Resilience is an important system property that relies on the ability of a system to automatically recover from a degraded state so as to continue providing its services. Resilient systems have the means of detecting faults and failures, with the added capability of automatically restoring their normal operations. Mastering resilience in the domain of Cyber-Physical Systems is challenging due to the interdependence of hybrid hardware and software components, along with physical limitations, laws, regulations and standards, among others. In order to overcome these challenges, this paper presents a modeling approach, based on the concept of Dynamic Cells, tailored to the management of Smart Grids. Additionally, a heuristic algorithm that works on top of the proposed modeling approach to find resilient configurations has been defined and implemented. More specifically, the model supports a flexible representation of Smart Grids, and the algorithm is able to manage, at different abstraction levels, the resource consumption of individual grid elements in the presence of failures and faults. Finally, the proposal is evaluated in a test scenario that shows the effectiveness of the approach when dealing with complex scenarios in which adequate solutions are difficult to find.

Keywords: Cyber-physical systems, energy management, optimization, smart grids, self-healing, resilience, security.

1650 Using Data Mining Techniques for Finding Cardiac Outlier Patients

Authors: Farhan Ismaeel Dakheel, Raoof Smko, K. Negrat, Abdelsalam Almarimi

Abstract:

In this paper, we use data mining techniques to identify outlier patients who use large amounts of drugs over a long period of time. Any healthcare or health insurance system should deal with the quantities of drugs utilized by chronic disease patients. In the Kingdom of Bahrain, about 20% of the health budget is spent on medications. The managers of healthcare systems do not have enough information about how chronic disease patients utilize drugs, whether there is any misuse, or whether there are outlier patients. In this work, carried out in cooperation with the information department of the Bahrain Defence Force hospital, we selected the data for cardiac patients in the period from 1 January 2008 to 31 December 2008 as the data for the model in this paper. We used three techniques to analyze drug utilization for cardiac patients: first we applied a clustering technique, followed by a measure of clustering validity, and finally we applied a decision tree as a classification algorithm. The clustering results divide the 1,603 patients, who received 15,806 prescriptions during this period, into three groups according to drug utilization; 23 patients (2.59%), who received 1,316 prescriptions (8.32%), are classified as outliers. The classification algorithm shows that average drug utilization, together with the age and gender of the patient, can be considered the main predictive factors in the induced model.
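A minimal sketch of the clustering-then-classification pipeline described above, using scikit-learn; the feature layout (average drug utilization, age, gender), the number of clusters and the outlier-labelling rule are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.tree import DecisionTreeClassifier

# X: one row per patient with [avg_drug_utilization, age, gender] (hypothetical layout)
X = np.random.default_rng(0).random((1603, 3))  # placeholder data

# Step 1: cluster patients by drug utilization profile.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Step 2: check clustering validity (here: silhouette coefficient).
validity = silhouette_score(X, kmeans.labels_)

# Step 3: treat the smallest cluster as "outliers" (assumption) and learn a
# decision tree that predicts that label from the same features.
sizes = np.bincount(kmeans.labels_)
outlier_cluster = np.argmin(sizes)
y = (kmeans.labels_ == outlier_cluster).astype(int)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(validity, tree.feature_importances_)
```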

Keywords: Data Mining, Clustering, Classification, Drug Utilization.

1649 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed for customers to find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based and analytical approach to stock proper movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioral and social information to predict their movie genre preference. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from the sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik-Chervonenkis (VC) dimensions of the machine learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use a machine learning approach to predict customers' preferences with a small data set and to design prediction tools for such enterprises.
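The comparison of a Gaussian (RBF) kernel SVM against logistic regression described above can be sketched as follows in scikit-learn; the feature matrix, labels and hyperparameter values are placeholders, not the authors' data or settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((1000, 12))                  # demographic/behavioral/social features (placeholder)
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)   # preferred-genre indicator (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Gaussian kernel SVM; C and gamma control model capacity.
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
logreg = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("SVM  in-sample / out-of-sample:", svm.score(X_tr, y_tr), svm.score(X_te, y_te))
print("LogR in-sample / out-of-sample:", logreg.score(X_tr, y_tr), logreg.score(X_te, y_te))
```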

Keywords: Computational social science, movie preference, machine learning, SVM.

1648 An ACO Based Algorithm for Distribution Networks Including Dispersed Generations

Authors: B. Bahmani Firouzi, T. Niknam, M. Nayeripour

Abstract:

With the movement of power systems toward restructuring, along with factors such as environmental pollution, the problems of transmission expansion and the advancement of construction technology for small generation units, it is expected that small units like wind turbines, fuel cells, photovoltaics, ..., which most of the time connect to the distribution networks, will play an essential role in the electric power industry. With the increasing use of small generation units, the management of distribution networks should be reviewed. The target of this paper is to present a new method for the optimal management of active and reactive power in distribution networks with regard to the costs of various types of dispersed generation, capacitors and electric energy purchased from the network. In other words, the method endeavors to select the optimal sources of active and reactive power generation and controlling equipment, such as dispersed generations, capacitors, under-load tap-changer transformers and substations, in a way that first minimizes the related costs and second respects technical and physical constraints. Because the optimal management of distribution networks is an optimization problem with continuous and discrete variables, a new evolutionary method based on the Ant Colony Algorithm has been applied. The method is tested on two cases containing 23 and 34 buses, and the simulation results are shown in later sections.

Keywords: Distributed Generation, Optimal Operation Management of distribution networks, Ant Colony Optimization (ACO).

1647 A Text Mining Technique Using Association Rules Extraction

Authors: Hany Mahgoub, Dietmar Rösner, Nabil Ismail, Fawzy Torkey

Abstract:

This paper describes a text mining technique for automatically extracting association rules from collections of textual documents. The technique, called Extracting Association Rules from Text (EART), relies on keyword features to discover association rules among the keywords labeling the documents. The EART system ignores the order in which words occur, focusing instead on the words and their statistical distributions in the documents. The main contributions of the technique are that it integrates XML technology with an Information Retrieval scheme (TF-IDF) for keyword/feature selection, which automatically selects the most discriminative keywords for use in association rule generation, and that it uses a data mining technique for association rule discovery. It consists of three phases: a text preprocessing phase (transformation, filtration, stemming and indexing of the documents), an Association Rule Mining (ARM) phase (applying our designed algorithm for Generating Association Rules based on a Weighting scheme, GARW) and a visualization phase (visualization of results). Experiments were applied to web-page news documents related to the outbreak of the bird flu disease. The extracted association rules contain important features and describe the informative news included in the document collection. The performance of the EART system was compared with a system that uses the Apriori algorithm, in terms of execution time and the quality of the extracted association rules.
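The keyword-based rule extraction just described could be prototyped roughly as below: TF-IDF selects discriminative keywords, and support/confidence over keyword co-occurrence yields simple pairwise rules. The corpus, thresholds and rule form are illustrative assumptions, not the GARW algorithm itself.

```python
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "bird flu outbreak reported in poultry farms",
    "poultry farms culled after bird flu cases",
    "vaccine research for flu virus continues",
]  # placeholder documents

# Keyword/feature selection: keep the top TF-IDF terms in the corpus.
vec = TfidfVectorizer(max_features=20, stop_words="english")
X = vec.fit_transform(docs)
keywords = vec.get_feature_names_out()

# Represent each document as the set of selected keywords it contains.
doc_sets = [{kw for j, kw in enumerate(keywords) if X[i, j] > 0} for i in range(len(docs))]

# Mine simple pairwise rules kw_a -> kw_b by support and confidence.
min_support, min_conf = 0.5, 0.7
n = len(doc_sets)
for a, b in combinations(keywords, 2):
    both = sum(1 for s in doc_sets if a in s and b in s) / n
    only_a = sum(1 for s in doc_sets if a in s) / n
    if only_a and both >= min_support and both / only_a >= min_conf:
        print(f"{a} -> {b}  (support={both:.2f}, confidence={both / only_a:.2f})")
```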

Keywords: Text mining, data mining, association rule mining.

1646 Very-High-Precision Normalized Eigenfunctions for a Class of Schrödinger Type Equations

Authors: Amna Noreen, Kare Olaussen

Abstract:

We demonstrate that it is possible to compute wave function normalization constants for a class of Schrödinger type equations by an algorithm which scales linearly (in the number of eigenfunction evaluations) with the desired precision P in decimals.
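As a rough illustration of the normalization task (not the authors' high-precision algorithm), the sketch below normalizes a sampled wave function with the trapezoidal rule mentioned in the keywords; the example wave function and grid are hypothetical.

```python
import numpy as np

# Sampled (unnormalized) bound-state wave function on a grid; here the
# harmonic-oscillator ground state serves as a stand-in example.
x = np.linspace(-10.0, 10.0, 2001)
psi = np.exp(-x**2 / 2.0)

# Normalization constant via the trapezoidal rule: N^2 * integral(|psi|^2) = 1.
norm_sq = np.trapz(np.abs(psi) ** 2, x)
psi_normalized = psi / np.sqrt(norm_sq)

print(np.trapz(np.abs(psi_normalized) ** 2, x))  # ~1.0
```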

Keywords: Eigenvalue problems, bound states, trapezoidal rule, Poisson resummation.

1645 Persian/Arabic Document Segmentation Based On Pyramidal Image Structure

Authors: Seyyed Yasser Hashemi, Khalil Monfaredi

Abstract:

Automatic transformation of paper documents into electronic documents requires document segmentation as the first stage. However, restrictions such as variations in character font sizes, different text line spacings, and non-uniform document layout structures have made it difficult to design a general-purpose document layout analysis algorithm for many years; thus most previously reported methods inevitably depend on these parameters. This problem becomes especially acute and severe in Persian/Arabic documents. Since the Persian/Arabic scripts differ considerably from English scripts, most of the methods proposed for English scripts do not render good results for Persian scripts. In this paper, we present a novel parameter-free method for segmenting Persian/Arabic document images which also works well for English scripts. The method segments the document image into maximal homogeneous regions and identifies them as text and non-text based on a pyramidal image structure. In other words, the proposed method is capable of document segmentation without considering character font sizes, text line spacing, or document layout structure. The algorithm was examined on 150 Arabic/Persian and English documents, and the document segmentation process was completed successfully for 96 percent of them.

Keywords: Persian/Arabic document, document segmentation, Pyramidal Image Structure, skew detection and correction.

1644 A Software Framework for Predicting Oil-Palm Yield from Climate Data

Authors: Mohd. Noor Md. Sap, A. Majid Awan

Abstract:

Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for unsupervised partitioning of data to find spatio-temporal patterns in climate data using kernel methods, which offer strength in dealing with complex data. This work draws on the notion that a non-linear transformation of the data into some high-dimensional feature space increases the possibility of linear separability of the patterns in the transformed space, and therefore simplifies the exploration of the associated structure in the data. Kernel methods implicitly perform a non-linear mapping of the input data into a high-dimensional feature space by replacing inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in spatial data, supporting effective and efficient data analysis by exploring patterns and structures in the data, and thus can be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
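A bare-bones kernel k-means iteration is sketched below to illustrate the core idea of clustering in an implicit feature space; it omits the paper's weighting and spatial constraints, and the RBF kernel, parameters and random data are purely assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_kmeans(X, k=3, iters=20, gamma=0.5, seed=0):
    """Unweighted kernel k-means: assign points to the cluster whose
    feature-space centroid is closest, using only the kernel matrix."""
    K = rbf_kernel(X, gamma=gamma)
    n = X.shape[0]
    labels = np.random.default_rng(seed).integers(0, k, size=n)
    for _ in range(iters):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.where(labels == c)[0]
            if len(idx) == 0:
                dist[:, c] = np.inf
                continue
            # ||phi(x_i) - m_c||^2 = K_ii - 2/|c| sum_j K_ij + 1/|c|^2 sum_{j,l} K_jl
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        labels = dist.argmin(axis=1)
    return labels

labels = kernel_kmeans(np.random.default_rng(1).random((200, 4)))
```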

Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield.

1643 Vision-Based Daily Routine Recognition for Healthcare with Transfer Learning

Authors: Bruce X. B. Yu, Yan Liu, Keith C. C. Chan

Abstract:

We propose to record the Activities of Daily Living (ADLs) of elderly people using a vision-based system, so as to provide better assistive and personalization technologies. Current ADL-related research is based on data collected with help from non-elderly subjects in laboratory environments, where the activities performed are predetermined for the sole purpose of data collection. To obtain more realistic datasets for the application, we recorded ADLs for the elderly with data collected from a real-world environment involving real elderly subjects. Motivated by the need to collect data for more effective research related to elderly care, we chose to collect data in the room of an elderly person. Specifically, we installed a Kinect, a vision-based sensor, on the ceiling to capture the activities that the elderly subject performs every morning. Based on the data, we identified 12 morning activities that the elderly person performs daily. To recognize these activities, we created the HARELCARE framework to investigate the effectiveness of existing Human Activity Recognition (HAR) algorithms and propose the use of a transfer learning algorithm for HAR. We compared their performance in terms of accuracy and training progress. Although the collected dataset is relatively small, the proposed algorithm has good potential to be applied to all daily routine activities for healthcare purposes, such as evidence-based diagnosis and treatment.

Keywords: Daily activity recognition, healthcare, IoT sensors, transfer learning.

1642 A New Variant of RC4 Stream Cipher

Authors: Lae Lae Khine

Abstract:

RC4 was used as the encryption algorithm in the WEP (Wired Equivalent Privacy) protocol, a standard for 802.11 wireless networks. A number of attacks followed, indicating certain weaknesses in the design. In this paper, we propose a new variant of the RC4 stream cipher. The new version of the cipher not only appears to be more secure, but its keystream also has a large period, large complexity and good statistical properties.
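For reference, a compact implementation of the original RC4 key scheduling and keystream generation is given below; it illustrates the cipher being modified, not the proposed variant, whose changes are not specified in the abstract. RC4 is shown here for historical context only and should not be used in new designs.

```python
def rc4_keystream(key: bytes, length: int) -> bytes:
    """Original RC4: key-scheduling algorithm (KSA) followed by the
    pseudo-random generation algorithm (PRGA)."""
    # KSA: initialize and scramble the 256-byte state with the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: produce `length` keystream bytes.
    out = bytearray()
    i = j = 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

ciphertext = bytes(p ^ k for p, k in zip(b"plaintext", rc4_keystream(b"Key", 9)))
```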

Keywords: Cryptography, New variant, RC4, Stream Cipher.

1641 A Modular On-line Profit Sharing Approach in Multiagent Domains

Authors: Pucheng Zhou, Bingrong Hong

Abstract:

Coordinating the behaviors of agents through learning is a challenging problem within multi-agent domains. Because of its complexity, recent work has focused on how coordinated strategies can be learned. Here we are interested in using reinforcement learning techniques to learn the coordinated actions of a group of agents without requiring explicit communication among them. However, traditional reinforcement learning methods are based on the assumption that the environment can be modeled as a Markov Decision Process, which usually cannot be satisfied when multiple agents coexist in the same environment. Moreover, to effectively coordinate each agent's behavior so as to achieve the goal, it is necessary to augment the state of each agent with information about the other existing agents. However, as the number of agents in a multi-agent environment increases, the state space of each agent grows exponentially, which causes a combinatorial explosion problem. Profit sharing is a reinforcement learning method that allows agents to learn effective behaviors from their experiences even in non-Markovian environments. In this paper, to remedy the drawback of the original profit sharing approach, which needs a large amount of memory to store each state-action pair during the learning process, we first present an on-line rational profit sharing algorithm. Then, we integrate the advantages of a modular learning architecture with the on-line rational profit sharing algorithm and propose a new modular reinforcement learning model. The effectiveness of the technique is demonstrated using the pursuit problem.
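A minimal sketch of the basic profit sharing idea (not the paper's on-line rational or modular variant): when a reward arrives at the end of an episode, every state-action pair visited in that episode is reinforced with a geometrically decaying credit. The decay factor, reward value and trajectory are illustrative assumptions.

```python
from collections import defaultdict

def profit_sharing_update(weights, episode, reward, decay=0.5):
    """Distribute the episode reward backwards over the visited (state, action)
    pairs with a geometrically decreasing credit function."""
    credit = reward
    for state, action in reversed(episode):
        weights[(state, action)] += credit
        credit *= decay  # earlier steps receive exponentially less credit
    return weights

weights = defaultdict(float)
episode = [("s0", "right"), ("s1", "right"), ("s2", "up")]  # placeholder trajectory
profit_sharing_update(weights, episode, reward=1.0)
```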

Keywords: Multi-agent learning, reinforcement learning, rational profit sharing, modular architecture.

1640 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils

Authors: Muqdad Al-Juboori, Bithin Datta

Abstract:

Hydraulic structures such as gravity dams are classified as essential structures and play a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) the building cost, 2) safety, and 3) accurate analysis of seepage characteristics. Due to the complex and non-linear relationships of the seepage process, many approximation theories have been developed; however, applying these theories results in noticeable errors. The analytical solution, which involves a difficult conformal mapping procedure, can be applied only to simple and symmetrical problems. Therefore, the objectives of this paper are to: 1) develop a surrogate model, based on data simulated numerically with SEEPW software, to approximately simulate the seepage process related to a hydraulic structure, and 2) develop and solve a linked simulation-optimization model, based on the developed surrogate model, to describe the seepage occurring under a concrete gravity dam, in order to obtain an optimum and safe design at minimum cost. The results show that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.
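The linked simulation-optimization idea can be sketched as a genetic algorithm that queries a trained surrogate instead of the numerical seepage solver; the design variables, bounds, cost model, penalty and GA settings below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Surrogate: maps dam design variables (e.g. base width, cutoff depth) to a
# seepage-related quantity; trained here on synthetic placeholder data.
X_sim = rng.random((200, 2)) * [40.0, 20.0]
y_sim = 0.1 * X_sim[:, 0] - 0.3 * X_sim[:, 1] + rng.normal(0, 0.05, 200)
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                         random_state=0).fit(X_sim, y_sim)

def fitness(design):
    cost = design[0] + 2.0 * design[1]                  # hypothetical cost model
    seepage = surrogate.predict(design.reshape(1, -1))[0]
    penalty = 100.0 * max(0.0, seepage - 1.5)           # hypothetical safety limit
    return cost + penalty                               # minimize

# Simple GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.random((30, 2)) * [40.0, 20.0]
for _ in range(50):
    scores = np.array([fitness(d) for d in pop])
    parents = pop[[min(rng.integers(0, 30, 2), key=lambda i: scores[i]) for _ in range(30)]]
    children = 0.5 * (parents + parents[rng.permutation(30)])
    children += rng.normal(0, 0.5, children.shape)
    pop = np.clip(children, [5.0, 1.0], [40.0, 20.0])
best = pop[np.array([fitness(d) for d in pop]).argmin()]
```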

Keywords: Artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis.

1639 Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function

Authors: Anupama Pande, Vishik Goel

Abstract:

A complex-valued neural network is a neural network with complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function which has built into it a distance criterion with respect to a centre. Radial basis functions have often been applied in the area of neural networks, where they may be used as a replacement for the sigmoidal hidden layer transfer characteristic in multi-layer perceptrons. This paper aims to present exhaustive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a radial basis function in a complex-valued neural network for image recognition over a real-valued neural network. We study and report various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for the convergence of error in the neural network model with RBF units. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.

Keywords: Complex valued neural network, Radial Basis Function, Image recognition.

1638 Video Matting based on Background Estimation

Authors: J.-H. Moon, D.-O. Kim, R.-H. Park

Abstract:

This paper presents a video matting method which extracts the foreground and alpha matte from a video sequence. The objective of video matting is to find the foreground and composite it with a background that is different from the one in the original image. By finding the motion vectors (MVs) using a sliced block matching algorithm (SBMA), we can extract moving regions from the video sequence under the assumption that the foreground is moving and the background is stationary. In practice, foreground areas do not move through all frames of an image sequence, so we accumulate moving regions through the sequence. The boundaries of moving regions are found by the Canny edge detector, and the foreground region is separated in each frame of the sequence; the remaining regions are defined as background regions. The backgrounds extracted in each frame are combined and reframed as an integrated single background. Based on the estimated background, we compute the frame difference (FD) of each frame. Regions with an FD larger than the threshold are defined as foreground regions, the boundaries of foreground regions are defined as unknown regions, and the rest are defined as background. Segmentation information that classifies an image into foreground, background and unknown regions is called a trimap. The matting process can extract an alpha matte in the unknown region using pixel information from the foreground and background regions, and estimate the values of foreground and background pixels in the unknown regions. The proposed video matting approach is adaptive and convenient for extracting a foreground automatically and compositing it with a background different from the original one.
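The trimap construction step described above can be sketched with NumPy as follows: the frame difference against the estimated background is thresholded into foreground, and a dilated band around the foreground is marked unknown. The threshold value and band width are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def build_trimap(frame, background, fd_threshold=25, band=5):
    """Classify pixels into foreground (255), unknown (128) and background (0)
    from the frame difference against the estimated background."""
    fd = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    foreground = fd > fd_threshold
    # The unknown region is a thin band around the foreground boundaries.
    unknown = binary_dilation(foreground, iterations=band) & ~foreground
    trimap = np.zeros(frame.shape, dtype=np.uint8)
    trimap[foreground] = 255
    trimap[unknown] = 128
    return trimap

frame = np.random.default_rng(0).integers(0, 256, (120, 160), dtype=np.uint8)
background = np.full((120, 160), 100, dtype=np.uint8)
trimap = build_trimap(frame, background)
```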

Keywords: Background estimation, Object segmentation, Block matching algorithm, Video matting.

1637 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique

Authors: C. Manjula, Lilly Florence

Abstract:

Software technology is developing rapidly, which leads to the growth of various industries. Nowadays, software-based applications are widely adopted for business purposes. For any software industry, developing reliable software is becoming a challenging task, because a faulty software module may be harmful to the growth of the industry and its business. Hence there is a need for techniques that can be used for the early prediction of software defects. Due to the complexity of manual prediction, automated software defect prediction techniques have been introduced. These techniques learn patterns from previous software versions and find the defects in the current version. They have attracted researchers due to their significant impact on industrial growth through identifying bugs in software. Several studies have been carried out on this basis, but achieving the desired defect prediction performance is still a challenging task. To address this issue, we present a machine learning based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) is presented, in which an improved fitness function is used for better optimization of the features in the data sets. These features are then processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented in which results from the proposed GA-DT based hybrid approach are compared with those from the DT classification technique alone. The results show that the proposed hybrid approach achieves better classification accuracy.
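A minimal sketch of GA-driven feature selection feeding a decision tree, in the spirit of the hybrid approach above; the bit-mask encoding, the fitness definition (cross-validated accuracy) and the GA settings are assumptions rather than the paper's improved fitness function.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a decision tree on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# GA over feature subsets encoded as bit masks.
pop = rng.integers(0, 2, (20, X.shape[1]))
for _ in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[scores.argsort()[::-1][:10]]        # truncation selection
    cut = rng.integers(1, X.shape[1], 10)
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cut)])  # one-point crossover
    flip = rng.random(children.shape) < 0.05           # bit-flip mutation
    children[flip] ^= 1
    pop = np.vstack([parents, children])
best_mask = pop[np.array([fitness(ind) for ind in pop]).argmax()]
```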

Keywords: Decision tree, genetic algorithm, machine learning, software defect prediction.

1636 Automatic Road Network Recognition and Extraction for Urban Planning

Authors: D. B. L. Bong, K.C. Lai, A. Joseph

Abstract:

Road maps are used in numerous daily activities, but it is a hassle to construct and update a road map whenever there are changes. At Universiti Malaysia Sarawak, research on Automatic Road Extraction (ARE) was carried out to address the difficulty of updating road maps. The research started with satellite images (SI), in short the ARE-SI project. A Hybrid Simple Colour Space Segmentation and Edge Detection (Hybrid SCSS-EDGE) algorithm was developed to extract roads automatically from satellite images. In order to extract the road network accurately, the satellite image must be analyzed prior to the extraction process: the characteristics of its elements are analyzed and the relationships among them determined. In this study, the road regions are extracted based on colour space elements and the edge details of roads, and an edge detection method is applied to further filter out non-road regions. The extracted road regions are validated using a segmentation method. These results are valuable for building road maps and detecting changes in the existing road database. The proposed Hybrid SCSS-EDGE algorithm performs the task fully automatically: the user only needs to input a high-resolution satellite image and wait for the result. Moreover, the system can work on complex road networks and generate the extraction result in seconds.

Keywords: Road Network Recognition, Colour Space, Edge Detection, Urban Planning.

1635 Probabilistic Approach of Dealing with Uncertainties in Distributed Constraint Optimization Problems and Situation Awareness for Multi-agent Systems

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how Bayesian inferential reasoning contributes to obtaining well-satisfied predictions for Distributed Constraint Optimization Problems (DCOPs) with uncertainties. We also demonstrate how DCOPs can be merged with multi-agent knowledge understanding and prediction (i.e., Situation Awareness). The DCOP functions were merged with a Bayesian Belief Network (BBN) in the form of situation, awareness and utility nodes. We describe how the uncertainties can be represented in the BBN and how an effective prediction can be made using the expectation-maximization algorithm or the conjugate gradient descent algorithm. The idea of variable prediction using Bayesian inference may reduce the number of variables in the agents' sampling domain and also allows missing variables to be estimated. Experimental results show that the BBN produces more compelling predictions on samples containing uncertainties than on perfect samples. That is, Bayesian inference can help in handling the uncertainties and dynamism of DCOPs, which is a current issue in the DCOP community. We show how Bayesian inference can be formalized with Distributed Situation Awareness (DSA) using uncertain and missing agents' data. The whole framework was tested on a multi-UAV mission for forest fire searching. Future work focuses on augmenting the existing architecture to deal with dynamic DCOP algorithms and multi-agent information merging.

Keywords: DCOP, multi-agent reasoning, Bayesian reasoning, swarm intelligence.

1634 Assessment Power and Frequency Oscillation Damping Using POD Controller and Proposed FOD Controller

Authors: Yahya Naderi, Tohid Rahimi, Babak Yousefi, Seyed Hossein Hosseini

Abstract:

Today's modern interconnected power system is highly complex in nature, and one of the most important requirements during its operation is reliability and security. Power and frequency oscillation damping mechanisms improve reliability. Because of the slow response of the power system stabilizer (PSS) to major faults such as a three-phase short circuit, FACTS devices, which can control the network condition very quickly, are becoming popular. However, the capability of FACTS devices in the presence of a major fault can only be seen when nonlinear models of the FACTS devices and the power system equipment are applied. To realize this aim, a model of a multi-machine power system with a FACTS controller is developed in MATLAB/Simulink using the Sim Power System (SPS) blockset. Among FACTS devices, the static synchronous series compensator (SSSC), which can rapidly change its reactance characteristic from inductive to capacitive, is an effective power flow controller. The controller parameters can be tuned using different methods, but the ability of the Genetic Algorithm (GA) makes it well suited to the parameter tuning process. In this paper, a POD controller is first used for power oscillation damping; in this situation, however, the frequency oscillation is not properly damped. Therefore, an FOD controller tuned using the GA is employed, which damps out the frequency oscillation properly while keeping the power oscillation damping in a suitable condition.

Keywords: Power oscillation damping (POD), frequency oscillation damping (FOD), Static synchronous series compensator (SSSC), Genetic Algorithm (GA).

1633 Game Theory Based Diligent Energy Utilization Algorithm for Routing in Wireless Sensor Network

Authors: X. Mercilin Raajini, R. Raja Kumar, P. Indumathi, V. Praveen

Abstract:

Many cluster-based routing protocols have been proposed in the field of wireless sensor networks, in which groups of nodes are formed into clusters. A cluster head is selected from among those nodes based on residual energy, coverage area and number of hops; that cluster head performs data gathering from the various sensor nodes and forwards the aggregated data to the base station or to a relay node (another cluster head), which forwards the packet, along with its own data packet, to the base station. Here, a Game Theory based Diligent Energy Utilization Algorithm (GTDEA) for routing is proposed. In GTDEA, cluster head selection is done with the help of game theory, a decision-making process that selects a cluster head based on three parameters: residual energy (RE), Received Signal Strength Index (RSSI) and Packet Reception Rate (PRR). Finding a feasible path to the destination with minimum utilization of the available energy improves the network lifetime, and this is achieved by the proposed approach. In GTDEA, packets are forwarded using an inter-cluster routing technique, with relay cluster heads further forwarding them to the base station. Simulation results reveal that GTDEA improves the network performance in terms of throughput, lifetime and power consumption.
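The cluster head selection step might be prototyped as a simple payoff score over the three parameters named above (RE, RSSI, PRR); the normalization and equal weighting used here are illustrative assumptions, not the paper's game-theoretic formulation.

```python
def select_cluster_head(nodes, w_re=1.0, w_rssi=1.0, w_prr=1.0):
    """Pick the node with the highest payoff computed from residual energy (RE),
    received signal strength (RSSI) and packet reception rate (PRR)."""
    def payoff(n):
        # RSSI is in negative dBm; closer to zero is better, so normalize roughly.
        rssi_score = 1.0 - min(abs(n["rssi"]) / 100.0, 1.0)
        return w_re * n["re"] + w_rssi * rssi_score + w_prr * n["prr"]
    return max(nodes, key=payoff)

nodes = [  # hypothetical candidate nodes
    {"id": 1, "re": 0.8, "rssi": -60, "prr": 0.95},
    {"id": 2, "re": 0.6, "rssi": -45, "prr": 0.90},
    {"id": 3, "re": 0.9, "rssi": -75, "prr": 0.85},
]
head = select_cluster_head(nodes)
```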

Keywords: Cluster head, Energy utilization, Game Theory, LEACH, Sensor network.

1632 Sperm Identification Using Elliptic Model and Tail Detection

Authors: Vahid Reza Nafisi, Mohammad Hasan Moradi, Mohammad Hosain Nasr-Esfahani

Abstract:

The conventional assessment of human semen is highly subjective, with considerable intra- and inter-laboratory variability. Computer-Assisted Sperm Analysis (CASA) systems provide a rapid and automated assessment of sperm characteristics, together with improved standardization and quality control. However, the outcome of CASA systems is sensitive to the experimental method. While conventional CASA systems use digital microscopes with phase-contrast accessories, producing higher-contrast images, we have used raw semen samples (no staining materials) and a regular light microscope, with a digital camera attached directly to its eyepiece, to ensure cost benefits and simple assembly of the system. Since accurately finding sperms in the semen image is the first step in the examination and analysis of the semen, any error in this step can affect the outcome of the analysis. This article introduces and explains an algorithm for finding sperms in low-contrast images: first, an image enhancement algorithm is applied to remove extra particles from the image; then, the foreground particles (including sperms and round cells) are segmented from the background; finally, based on certain features and criteria, sperms are separated from other cells.

Keywords: Computer-Assisted Sperm Analysis (CASA), Sperm identification, Tail detection, Elliptic shape model.

1631 Integrating Computational Intelligence Techniques and Assessment Agents in E-Learning Environments

Authors: Konstantinos C. Giotopoulos, Christos E. Alexakos, Grigorios N. Beligiannis, Spiridon D. Likothanassis

Abstract:

In this contribution, an innovative platform is presented that integrates intelligent agents and evolutionary computation techniques into legacy e-learning environments. It introduces the design and development of a scalable and interoperable integration platform supporting: I) various assessment agents for e-learning environments, II) a specific resource retrieval agent for the provision of additional information from Internet sources matching the needs and profile of the specific user, and III) a genetic algorithm designed to extract efficient information (classifying rules) based on the students' answering input data. The agents are implemented to provide intelligent assessment services based on computational intelligence techniques such as Bayesian Networks and Genetic Algorithms. The proposed Genetic Algorithm (GA) is used to extract efficient information (classifying rules) from the students' answering input data. The idea of using a GA to fulfil this difficult task comes from the fact that GAs have been widely used in applications involving the classification of unknown data. The utilization of new and emerging technologies like web services allows the provided services to be integrated into any web-based legacy e-learning environment.

Keywords: Bayesian Networks, Computational Intelligence techniques, E-learning legacy systems, Service Oriented Integration, Intelligent Agents, Genetic Algorithms.

1630 An Experimental Procedure for Design and Construction of Monocopter and Its Control Using Optical and GPS-Aided AHRS Sensors

Authors: A. Safaee, M. S. Mehrabani, M. B. Menhaj, V. Mousavi, S. Z. Moussavi

Abstract:

The monocopter is a single-wing rotary flying vehicle with hovering capability. It comprises two dynamic parts, and greater efficiency can be expected than from other micro UAVs due to its extended wing area compared to its fuselage. Low cost and a simple mechanism, in comparison to vehicles such as the helicopter, are its most important characteristics. In the previous paper we introduced the final system; in this paper, the experimental design process of the monocopter and its control algorithm are investigated in general, the editorial errors in the previous article are corrected, and some translational ambiguities are resolved. Initially, by constructing several prototypes and carrying out many flight tests, the main design parameters of the air vehicle were obtained from experimental measurements, and the monocopter required for this project was eventually constructed. After construction of the monocopter, in order to design, implement and test the control algorithms, a simple optical system was first used for determining the heading angle. After numerous tests on the test stand, the control algorithm was designed and the timing of the control inputs adjusted. Other control parameters of the system were then tuned in flight tests. Eventually, the final control system was designed and implemented using the AHRS sensor, and the final operational tests were performed successfully.

Keywords: Monocopter, Flap, Heading Angle, AHRS, Cyclic, Photo Diode.

1629 Laser Ultrasonic Imaging Based on Synthetic Aperture Focusing Technique Algorithm

Authors: Sundara Subramanian Karuppasamy, Che Hua Yang

Abstract:

In this work, the laser ultrasound technique has been used for analyzing and imaging inner defects in metal blocks. To detect defects in blocks, researchers have traditionally used piezoelectric transducers for the generation and reception of ultrasonic signals. These transducers can be configured into sparse and phased arrays, but both configurations have drawbacks, including the requirement for many transducers, time-consuming calculations, limited bandwidth and confined image resolution. Here, we focus on a non-contact method for generating and receiving the ultrasound to examine inner defects in aluminum blocks. A Q-switched pulsed laser is used for generation, and reception is performed with a Laser Doppler Vibrometer (LDV). Based on the Doppler effect, the LDV provides a rapid, high-spatial-resolution way of sensing ultrasonic waves. From the LDV, a series of scanning points is selected to serve as the phased array elements. A side-drilled hole of 10 mm diameter at a depth of 25 mm is introduced, and the defect is interrogated by the linear array of scanning points obtained from the LDV. With the aid of the Synthetic Aperture Focusing Technique (SAFT) algorithm, based on the time-shifting principle, the inspection images are generated from the A-scan data acquired from the 1-D linear phased array elements. Thus, the defect can be precisely detected with good resolution.
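The time-shifting principle behind SAFT can be illustrated with a small delay-and-sum reconstruction: for every image pixel, each element's A-scan is sampled at the round-trip travel time from that element to the pixel, and the contributions are summed. The geometry, sampling rate and wave speed below are placeholders, not the experimental values, and the single-speed round-trip model is a simplification.

```python
import numpy as np

def saft_image(ascans, elem_x, fs, c, xs, zs):
    """Delay-and-sum SAFT: ascans has shape (n_elements, n_samples),
    elem_x are element x-positions (m), fs is the sampling rate (Hz),
    c the wave speed (m/s), xs/zs the image grid coordinates (m)."""
    img = np.zeros((len(zs), len(xs)))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            # Round-trip time from each element to the pixel and back.
            t = 2.0 * np.sqrt((elem_x - x) ** 2 + z ** 2) / c
            idx = np.round(t * fs).astype(int)
            valid = idx < ascans.shape[1]
            img[iz, ix] = ascans[np.arange(len(elem_x))[valid], idx[valid]].sum()
    return np.abs(img)

# Placeholder data: 32 scanning points, 2000 samples each.
ascans = np.random.default_rng(0).normal(size=(32, 2000))
elem_x = np.linspace(0.0, 0.031, 32)
image = saft_image(ascans, elem_x, fs=50e6, c=6300.0,
                   xs=np.linspace(0.0, 0.031, 64), zs=np.linspace(0.005, 0.04, 64))
```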

Keywords: Laser ultrasonics, linear phased array, nondestructive testing, synthetic aperture focusing technique, ultrasonic imaging.

1628 Modeling and Analysis of Adaptive Buffer Sharing Scheme for Consecutive Packet Loss Reduction in Broadband Networks

Authors: Sakshi Kausha, R. K. Sharma

Abstract:

High-speed networks provide real-time variable bit rate services with diversified traffic flow characteristics and quality requirements. Variable bit rate traffic has stringent delay and packet loss requirements, and the burstiness of correlated traffic makes dynamic buffer management highly desirable for satisfying Quality of Service (QoS) requirements. This paper presents an algorithm for optimizing an adaptive buffer allocation scheme based on the loss of consecutive packets in the data stream and the buffer occupancy level. The buffer is designed to allow the input traffic to be partitioned into different priority classes, and it controls the threshold dynamically based on the behavior of the input traffic. The algorithm allows an input packet to enter the buffer if the occupancy level is less than the threshold value for the priority of that packet. The threshold is varied dynamically at runtime based on the packet loss behavior. The simulation is run for two priority classes of input traffic: real-time and non-real-time. The simulation results show that Adaptive Partial Buffer Sharing (ADPBS) performs better than Static Partial Buffer Sharing (SPBS) and a First In First Out (FIFO) queue under the same traffic conditions.
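A toy sketch of the admission rule described above: a packet of a given priority is accepted only if the buffer occupancy is below that priority's threshold, and the lower-priority threshold is tightened when consecutive high-priority losses are observed. The threshold values, adaptation step and loss limit are illustrative assumptions, not the paper's control law.

```python
class AdaptivePartialBufferSharing:
    """Admit packets by per-priority occupancy thresholds; adapt the
    non-real-time threshold when consecutive real-time losses occur."""

    def __init__(self, capacity=100, thresholds=(100, 70), step=5, loss_limit=3):
        self.capacity = capacity
        self.thresholds = list(thresholds)   # index 0: real-time, 1: non-real-time
        self.step = step
        self.loss_limit = loss_limit
        self.occupancy = 0
        self.consecutive_losses = 0

    def offer(self, priority):
        """Return True if a packet of the given priority class is admitted."""
        if self.occupancy < self.thresholds[priority]:
            self.occupancy += 1
            if priority == 0:
                self.consecutive_losses = 0
            return True
        if priority == 0:
            self.consecutive_losses += 1
            if self.consecutive_losses >= self.loss_limit:
                # Tighten the non-real-time threshold to protect real-time traffic.
                self.thresholds[1] = max(0, self.thresholds[1] - self.step)
                self.consecutive_losses = 0
        return False

    def depart(self):
        self.occupancy = max(0, self.occupancy - 1)
```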

Keywords: Buffer Management, Consecutive packet loss, Quality-of-Service, Priority based packet discarding, Partial buffer sharing.

1627 A Neurofuzzy Learning and its Application to Control System

Authors: Seema Chopra, R. Mitra, Vijay Kumar

Abstract:

A neurofuzzy approach for a given set of input-output training data is proposed in two phases. First, the data set is partitioned automatically into a set of clusters, and a fuzzy if-then rule is extracted from each cluster to form a fuzzy rule base. Second, a fuzzy neural network is constructed accordingly and its parameters are tuned to increase the precision of the fuzzy rule base. This network is able to learn and optimize the rule base of a Sugeno-like fuzzy inference system using a hybrid learning algorithm that combines gradient descent and the least mean square algorithm. The proposed neurofuzzy system has the advantages of determining the number of rules automatically, reducing the number of rules, decreasing computational time, learning faster and consuming less memory. The authors also investigate how neurofuzzy techniques can be applied in the area of control theory to design a fuzzy controller for linear and nonlinear dynamic systems modelled from a set of input/output data. A simulation analysis is carried out on a wide range of processes, to identify nonlinear components on-line in a control system, together with a benchmark problem involving the prediction of a chaotic time series. Furthermore, well-known examples of linear and nonlinear systems are simulated under the Matlab/Simulink environment. The above combination is also illustrated in modeling the relationship between automobile trips and demographic factors.

Keywords: Fuzzy control, neuro-fuzzy techniques, fuzzy subtractive clustering, extraction of rules, and optimization of membership functions.

1626 Classification of Business Models of Italian Bancassurance by Balance Sheet Indicators

Authors: Andrea Bellucci, Martina Tofi

Abstract:

The aim of this paper is to analyze the business models of bancassurance in Italy for the life business. The life insurance business is very developed in the Italian market, and bank branches hold 80% of the market share. Given its maturity, the life insurance market needs to consolidate its organizational form to allow the development of the non-life business, which nowadays collects few premiums but represents a great opportunity to enlarge the market share of bancassurance, using its strength in the distribution channel while the market share of independent agents is decreasing. Starting from the main business models of bancassurance for the life business, this paper analyzes the performance of life companies in the Italian market using balance sheet indicators and the main discriminant variables of business models. The study observes trends from 2013 to 2015 for the Italian market, exploiting a database managed by the Associazione Nazionale delle Imprese di Assicurazione (ANIA). The applied approach is based on a bottom-up analysis, starting with variables and indicators to define the classification of business models. The statistical classification algorithm proposed by Ward is employed to design business model profiles. The result of the analysis is a representation of the main business models, each built from its profile of indicators. In this way, an unsupervised analysis is developed; its limitation is the judgmental dimension based on the researchers' opinion, but it makes it possible to obtain a design of effective business models.
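Ward's hierarchical clustering of balance-sheet indicators, as used above, can be sketched with SciPy as follows; the indicator matrix, the standardization choice and the number of profiles are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import zscore

# One row per insurance company, one column per balance sheet indicator (placeholder data).
indicators = np.random.default_rng(0).random((40, 6))

# Standardize indicators so that no single scale dominates the Euclidean distances.
Z = zscore(indicators, axis=0)

# Ward's minimum-variance agglomerative clustering.
tree = linkage(Z, method="ward")
profiles = fcluster(tree, t=3, criterion="maxclust")  # cut into 3 business model profiles

for c in np.unique(profiles):
    print(f"profile {c}: mean indicator values", Z[profiles == c].mean(axis=0).round(2))
```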

Keywords: Balance sheet indicators, bancassurance, business models, Ward algorithm.

1625 Feature Based Unsupervised Intrusion Detection

Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein

Abstract:

The goal of a network-based intrusion detection system is to classify the activities of network traffic into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps in improving the efficiency, performance and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, which is a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) has been implemented. The Weka framework, a Java-based open-source software package consisting of a collection of machine learning algorithms for data mining tasks, has been used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time than using the full feature set of the dataset with the same algorithm.
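The feature selection and clustering pipeline described above might look roughly as follows in scikit-learn (used here in place of Weka purely for illustration); the synthetic data, the number of retained features and the cluster-to-class mapping are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split

# Placeholder stand-in for NSL-KDD: 41 features, binary label (normal/attack).
X, y = make_classification(n_samples=2000, n_features=41, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)

# Feature reduction by information gain (mutual information with the class label).
ig = mutual_info_classif(X_tr, y_tr, random_state=0)
top = np.argsort(ig)[::-1][:10]          # keep the 10 most informative features (assumption)

# Unsupervised 2-cluster K-means on the reduced feature set.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_tr[:, top])
pred = km.predict(X_te[:, top])

# Map each cluster to the majority class seen in training to score accuracy.
mapping = {c: np.bincount(y_tr[km.labels_ == c]).argmax() for c in (0, 1)}
accuracy = np.mean(np.array([mapping[c] for c in pred]) == y_te)
print(accuracy)
```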

Keywords: Information Gain (IG), Intrusion Detection System (IDS), K-means Clustering, Weka.

1624 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, ensembles have been employed as a common methodology to improve performance over multiple base classifiers. However, the true predictions are often canceled out by the false ones during consensus due to a phenomenon called the "curse of correlation", which manifests as strong interference among the predictions produced by the base classifiers. In addition, existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that both problems are caused by inherent deficiencies in the consensus approach. Therefore, we create an enhanced ensemble algorithm which adopts a designed rank-based chain-mode consensus to overcome the two problems. In order to evaluate the proposed ensemble algorithm, we employ the well-known benchmark data set NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to make comparisons between the proposed algorithm and 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus is shown to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score and area under the receiver operating characteristic curve.

Keywords: Consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble.

1623 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)

Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton

Abstract:

Cold-start is a notoriously difficult problem which can occur in recommendation systems, and arises when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm, the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST), is proposed, which is designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases, and Assumed Density Filtering (ADF), which has slower growth of computational cost with data size but requires more data to obtain an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, it is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented, and it is systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.

Keywords: Cold-start, expectation propagation, multi-armed bandits, Thompson sampling, variational inference.
