Search results for: Clustering Algorithms.
371 Development of NOx Emission Model for a Tangentially Fired Acid Incinerator
Authors: Elangeshwaran Pathmanathan, Rosdiazli Ibrahim, Vijanth Sagayan Asirvadam
Abstract:
This paper aims to develop a NOx emission model of an acid gas incinerator using Nelder-Mead least squares support vector regression (LS-SVR). The Malaysian Department of Environment (DOE) is actively enforcing the Clean Air Regulation, which mandates the installation of analytical instrumentation known as a Continuous Emission Monitoring System (CEMS) to report emission levels online to the DOE. As a hardware-based analyzer, a CEMS is expensive, maintenance intensive and often unreliable. Therefore, a software-based predictive technique is often preferred and considered a feasible alternative to the CEMS for regulatory compliance. The LS-SVR model is built from the emissions of an acid gas incinerator operating in an LNG complex. Simulated Annealing (SA) is first used to determine the initial hyperparameters, which are then further optimized, based on the performance of the model, using the Nelder-Mead simplex algorithm. The LS-SVR model is shown to outperform a benchmark model based on backpropagation neural networks (BPNN) on both training and testing data.
Keywords: artificial neural networks, industrial pollution, predictive algorithms, support vector machines
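A minimal sketch of the hyperparameter-tuning idea in Python, assuming scikit-learn's SVR as a stand-in for LS-SVR, synthetic data in place of the incinerator measurements, and a cheap random search in place of Simulated Annealing:

import numpy as np
from scipy.optimize import minimize
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Hypothetical process variables and a stand-in NOx signal (not the paper's data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([0.5, -1.2, 0.3, 2.0]) + 0.1 * rng.normal(size=200)

def neg_cv_score(log_params):
    gamma, C = np.exp(log_params)          # optimize in log space to keep both positive
    model = SVR(kernel="rbf", gamma=gamma, C=C)
    return -cross_val_score(model, X, y, cv=5, scoring="r2").mean()

# A crude random restart plays the role of the SA warm start described above.
start = min((rng.normal(size=2) for _ in range(20)), key=neg_cv_score)
result = minimize(neg_cv_score, start, method="Nelder-Mead")
print("best (gamma, C):", np.exp(result.x), "CV R^2:", -result.fun)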
370 A new Heuristic Algorithm for the Dynamic Facility Layout Problem with Budget Constraint
Authors: Parham Azimi, Hamid Reza Charmchi
Abstract:
In this research, we have developed a new efficient heuristic algorithm for the dynamic facility layout problem with budget constraint (DFLPB). This heuristic combines two methods, discrete event simulation and integer linear programming (IP), to obtain a near-optimum solution. In the proposed algorithm, the non-linear model of the DFLP is first transformed into a pure integer programming (PIP) model. Then, the optimal solution of the PIP model is used in a simulation model, designed in a similar manner to the DFLP, for determining the probability of assigning a facility to a location. After a sufficient number of runs, the simulation model obtains near-optimum solutions. Finally, to verify the performance of the algorithm, several test problems have been solved. The results show that the proposed algorithm is more efficient in terms of speed and accuracy than other heuristic algorithms reported in the literature.
Keywords: Budget constraint, Dynamic facility layout problem, Integer programming, Simulation
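A toy sketch of the integer-programming step, assuming the PuLP package and a made-up 3x3 cost matrix rather than the paper's DFLP formulation or its simulation component:

from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

facilities, locations = range(3), range(3)
cost = [[4, 8, 6],      # cost[i][j] = illustrative cost of placing facility i at location j
        [7, 3, 5],
        [9, 6, 2]]

prob = LpProblem("toy_facility_assignment", LpMinimize)
x = {(i, j): LpVariable(f"x_{i}_{j}", cat=LpBinary) for i in facilities for j in locations}

prob += lpSum(cost[i][j] * x[i, j] for i in facilities for j in locations)
for i in facilities:
    prob += lpSum(x[i, j] for j in locations) == 1   # each facility gets exactly one location
for j in locations:
    prob += lpSum(x[i, j] for i in facilities) == 1  # each location hosts exactly one facility

prob.solve()
assignment = {i: j for (i, j), var in x.items() if var.value() == 1}
print("assignment:", assignment, "total cost:", value(prob.objective))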
369 The Hyperbolic Smoothing Approach for Automatic Calibration of Rainfall-Runoff Models
Authors: Adilson Elias Xavier, Otto Corrêa Rotunno Filho, Paulo Canedo de Magalhães
Abstract:
This paper addresses the issue of automatic parameter estimation in conceptual rainfall-runoff (CRR) models. Due to threshold structures commonly occurring in CRR models, the associated mathematical optimization problems have the significant characteristic of being strongly non-differentiable. To face this difficulty, the proposed resolution method adopts a smoothing strategy using a special C∞ differentiable class of functions. The final estimate is obtained by solving a sequence of differentiable subproblems which gradually approach the original conceptual problem. The use of this technique, called the Hyperbolic Smoothing Method (HSM), makes it possible to apply the most powerful minimization algorithms and allows the main difficulties presented by the original CRR problem to be overcome. A set of computational experiments is presented to illustrate both the reliability and the efficiency of the proposed approach.
Keywords: Rainfall-runoff models, optimization procedure, automatic parameter calibration, hyperbolic smoothing method.
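A minimal sketch of the hyperbolic smoothing idea on a toy threshold model, assuming the surrogate phi_tau(x) = (x + sqrt(x^2 + tau^2))/2 for max(x, 0) and made-up synthetic rainfall data (not the paper's CRR model):

import numpy as np
from scipy.optimize import minimize

def phi(x, tau):
    return 0.5 * (x + np.sqrt(x * x + tau * tau))    # smooth surrogate for max(x, 0)

def smoothed_objective(theta, tau):
    # toy "threshold" model: only rainfall above theta[0] produces runoff
    rain = np.linspace(0.0, 10.0, 50)
    runoff_obs = np.maximum(rain - 3.0, 0.0) * 0.7
    runoff_sim = phi(rain - theta[0], tau) * theta[1]
    return np.sum((runoff_sim - runoff_obs) ** 2)

theta = np.array([1.0, 1.0])
for tau in [1.0, 0.1, 0.01, 0.001]:                  # gradually sharpen the approximation
    theta = minimize(lambda t: smoothed_objective(t, tau), theta, method="BFGS").x
print("estimated threshold and coefficient:", theta)  # expected near (3.0, 0.7)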
368 Spatial Data Mining by Decision Trees
Authors: S. Oujdi, H. Belbachir
Abstract:
Existing data mining methods cannot be applied directly to spatial data, because spatial data require the consideration of spatial specificities, such as spatial relationships. This paper focuses on classification with decision trees, one of the data mining techniques. We propose an extension of the C4.5 algorithm for spatial data, based on two different approaches: join materialization and querying the different tables on the fly. Similar works have been done on these two main approaches; the first, join materialization, favors processing time at the expense of memory space, whereas the second, querying on the fly, favors memory space at the expense of processing time. The modified C4.5 algorithm requires three input tables: a target table, a neighbor table, and a spatial join index that contains the possible spatial relationships among the objects in the target table and those in the neighbor table. The proposed algorithms are applied to spatial data from the accidentology domain. A comparative study of our approach with other works on classification by spatial decision trees is also detailed.
Keywords: C4.5 Algorithm, Decision trees, S-CART, Spatial data mining.
367 An Improved Dynamic Window Approach with Environment Awareness for Local Obstacle Avoidance of Mobile Robots
Authors: Baoshan Wei, Shuai Han, Xing Zhang
Abstract:
Local obstacle avoidance is critical for mobile robot navigation. It is a challenging task to ensure path optimality and safety in cluttered environments. In this paper, we propose an Environment-Aware Dynamic Window Approach to cope with this issue. The method integrates environment characterization into the Dynamic Window Approach (DWA). Two strategies are proposed in order to achieve the integration. The local goal strategy guides the robot to move through openings before approaching the final goal, which solves the local minima problem in DWA. The adaptive control strategy enables the robot to adjust its state according to the environment, which improves path safety compared with DWA. In addition, the evaluation shows that the path generated by the proposed algorithm is safer and smoother compared with state-of-the-art algorithms.
Keywords: Adaptive control, dynamic window approach, environment aware, local obstacle avoidance, mobile robots.
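A minimal sketch of the classic DWA core loop (not the environment-aware extension proposed above), assuming made-up robot limits, scoring weights and obstacle positions:

import numpy as np

def simulate(state, v, w, dt=0.1, steps=10):
    x, y, theta = state
    traj = []
    for _ in range(steps):
        theta += w * dt
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        traj.append((x, y))
    return np.array(traj)

def dwa_step(state, goal, obstacles, v_max=1.0, w_max=1.5):
    best, best_score = (0.0, 0.0), -np.inf
    for v in np.linspace(0.0, v_max, 11):            # sample admissible velocities
        for w in np.linspace(-w_max, w_max, 21):
            traj = simulate(state, v, w)
            clearance = min(np.hypot(*(p - o)) for p in traj for o in obstacles)
            if clearance < 0.2:                      # discard unsafe trajectories
                continue
            heading = -np.hypot(*(traj[-1] - goal))  # closer to the goal is better
            score = 1.0 * heading + 0.5 * clearance + 0.2 * v
            if score > best_score:
                best, best_score = (v, w), score
    return best

state, goal = (0.0, 0.0, 0.0), np.array([5.0, 5.0])
obstacles = [np.array([2.0, 2.0]), np.array([3.0, 4.0])]
print("chosen (v, w):", dwa_step(state, goal, obstacles))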
366 Inferential Reasoning for Heterogeneous Multi-Agent Mission
Authors: Sagir M. Yusuf, Chris Baber
Abstract:
We describe issues that bedevil the coordination of heterogeneous multi-agent missions (agents carrying different sensors), such as belief conflict and situation reasoning. We apply Bayesian inference together with agents' presumptions to address the outlined issues of belief variation and situation-based reasoning among heterogeneous agents. A Bayesian Belief Network (BBN) was used to model the agents' belief conflict due to sensor variations. Simulation experiments were designed, and cases from agents' missions were used to train the BBN using the gradient descent and expectation-maximization algorithms. The output network is a well-trained BBN for making inferences for both agents and human experts. We claim that the prediction capacity of the Bayesian learning algorithm improves with the amount of training data, and argue that it enhances multi-agent robustness and resolves agents' sensor conflicts.
Keywords: Distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence.
365 Improving the Security of Internet of Things Using Encryption Algorithms
Authors: Amirhossein Safi
Abstract:
The Internet of Things (IoT) is a kind of advanced information technology which has drawn society's attention. Sensors and actuators are usually recognized as the smart devices of our environment. At the same time, IoT security brings up new issues. Internet connectivity and the possibility of interacting with smart devices cause those devices to become more involved in human life. Therefore, safety is a fundamental requirement in designing the IoT. The IoT has three remarkable features: overall perception, reliable transmission, and intelligent processing. Because of the span of the IoT, the security of data in transit is an essential factor for system security. Hybrid encryption is a model that can be used in the IoT; it offers strong security at a low computational cost. In this paper, we propose a hybrid encryption algorithm designed to reduce security risks, increase encryption speed, and lower computational complexity. The purpose of this hybrid algorithm is to provide information integrity, confidentiality, and non-repudiation in data exchange for the IoT. Finally, the suggested encryption algorithm was simulated in MATLAB, and its speed and security were evaluated in comparison with a conventional encryption algorithm.
Keywords: Internet of things, security, hybrid algorithm, privacy.
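A sketch of a generic hybrid scheme (RSA key transport plus AES-GCM bulk encryption), assuming the Python cryptography package; this illustrates the hybrid idea only and is not the specific algorithm proposed in the paper:

import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver key pair (in practice generated once and distributed beforehand).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender side: encrypt the payload with a fresh symmetric key,
# then wrap that key with the receiver's public key.
payload = b"sensor reading: 23.5 C"
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, payload, None)
wrapped_key = public_key.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Receiver side: unwrap the symmetric key, then decrypt the payload.
recovered_key = private_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == payload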
364 Automatic Method for Exudates and Hemorrhages Detection from Fundus Retinal Images
Authors: A. Biran, P. Sobhe Bidari, K. Raahemifar
Abstract:
Diabetic Retinopathy (DR) is an eye disease that can lead to blindness. The earliest signs of DR are the appearance of red and yellow lesions on the retina called hemorrhages and exudates. Early diagnosis of DR can prevent blindness; hence, many automated algorithms have been proposed to extract hemorrhages and exudates. In this paper, an automated algorithm is presented to extract hemorrhages and exudates separately from retinal fundus images using different image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering and thresholding. Since the optic disc has the same color as the exudates, it is first localized and detected. The presented method has been tested on fundus images from the Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) databases using MATLAB code. The results show that this method is capable of detecting hard exudates and highly probable soft exudates. It is also capable of detecting hemorrhages and distinguishing them from blood vessels.
Keywords: Diabetic retinopathy, fundus, CHT, exudates, hemorrhages.
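An illustrative preprocessing sketch along these lines, assuming OpenCV, a placeholder file name fundus.png, and made-up CLAHE/Hough parameters; it shows CLAHE enhancement, Otsu thresholding for bright exudate candidates, and a circular Hough transform to mask out the optic disc, not the authors' full pipeline:

import cv2
import numpy as np

image = cv2.imread("fundus.png")               # placeholder input image
green = image[:, :, 1]                         # green channel shows lesions best

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)

# Bright lesions (exudates) appear as high-intensity outliers; dark lesions
# (hemorrhages) could be found analogously with an inverted threshold.
_, bright_mask = cv2.threshold(enhanced, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# The optic disc is also bright, so a circular Hough transform locates it
# and its region is excluded from the exudate candidates.
circles = cv2.HoughCircles(enhanced, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                           param1=100, param2=30, minRadius=30, maxRadius=90)
if circles is not None:
    x, y, r = (int(v) for v in np.round(circles[0, 0]))
    cv2.circle(bright_mask, (x, y), r + 10, 0, thickness=-1)  # mask out the disc

cv2.imwrite("exudate_candidates.png", bright_mask)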
363 Improved Weighted Matching for Speaker Recognition
Authors: Ozan Mut, Mehmet Göktürk
Abstract:
Matching algorithms have significant importance in speaker recognition. Feature vectors of the unknown utterance are compared to feature vectors of the modeled speakers as a last step in speaker recognition. A similarity score is found for every model in the speaker database. Depending on the type of speaker recognition, these scores are used to determine the speaker of the unknown speech samples. For speaker verification, the similarity score is tested against a predefined threshold and an acceptance or rejection decision is made. In the case of speaker identification, the result depends on whether the identification is open set or closed set. In closed set identification, the model that yields the best similarity score is accepted. In open set identification, the best score is tested against a threshold, so there is one additional possible output: that the speaker is not one of the registered speakers in the existing database. This paper focuses on closed set speaker identification using a modified version of a well-known matching algorithm. The new matching algorithm showed better performance on the YOHO international speaker recognition database.
Keywords: Automatic Speaker Recognition, Voice Recognition, Pattern Recognition, Digital Audio Signal Processing.
362 Evolution of Fuzzy Neural Networks Using an Evolution Strategy with Fuzzy Genotype Values
Authors: Hidehiko Okada
Abstract:
Evolution strategy (ES) is a well-known instance of evolutionary algorithms, and there have been many studies on ES. In this paper, the author proposes an extended ES for solving fuzzy-valued optimization problems. In the proposed ES, genotype values are not real numbers but fuzzy numbers. Evolutionary processes in the ES are extended so that it can handle genotype instances with fuzzy numbers. In this study, the proposed method is experimentally applied to the evolution of neural networks with fuzzy weights and biases. Results reveal that fuzzy neural networks evolved using the proposed ES with fuzzy genotype values can model hidden target fuzzy functions even though no training data are explicitly provided. Next, the proposed method is evaluated in terms of variations in specifying fuzzy numbers as genotype values. One of the most widely adopted fuzzy numbers is the symmetric triangular one, which can be specified by its lower and upper bounds (LU) or its center and width (CW). Experimental results revealed that the LU model contributed better to the fuzzy ES than the CW model, which indicates that the LU model should be adopted in future applications of the proposed method.
Keywords: Evolutionary algorithm, evolution strategy, fuzzy number, feedforward neural network, neuroevolution.
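A minimal sketch of a (mu+lambda) evolution strategy whose genotype values are symmetric triangular fuzzy numbers in the LU encoding, with a toy fitness function and made-up constants (not the author's neuroevolution setup):

import numpy as np

rng = np.random.default_rng(1)
DIM, MU, LAMBDA, SIGMA = 3, 10, 40, 0.3
TARGET = np.array([0.5, -1.0, 2.0])            # hidden target centers of the toy problem

def fitness(lu):                               # lu has shape (DIM, 2): [lower, upper] per gene
    center = lu.mean(axis=1)
    width = lu[:, 1] - lu[:, 0]
    return -np.sum((center - TARGET) ** 2) - 0.1 * np.sum(width)

def repair(lu):                                # keep lower <= upper after mutation
    return np.sort(lu, axis=1)

population = [repair(rng.normal(size=(DIM, 2))) for _ in range(MU)]
for generation in range(200):
    offspring = []
    for _ in range(LAMBDA):
        parent = population[rng.integers(MU)]
        offspring.append(repair(parent + rng.normal(scale=SIGMA, size=(DIM, 2))))
    population = sorted(population + offspring, key=fitness, reverse=True)[:MU]   # (mu+lambda) selection

best = population[0]
print("best centers:", best.mean(axis=1), "widths:", best[:, 1] - best[:, 0])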
361 Developing New Algorithm and Its Application on Optimal Control of Pumps in Water Distribution Network
Authors: R. Rajabpour, N. Talebbeydokhti, M. H. Ahmadi
Abstract:
In recent years, new techniques for solving complex engineering problems have been proposed. One of these techniques is the JPSO algorithm. With innovative changes to the jump mechanism of JPSO, it is possible to construct a graph-based variant called G-JPSO. In this paper, the new algorithm is evaluated on the Fletcher-Powell optimal control problem and on the optimal control of pumps in a water distribution network. Optimal control of pumps consists of an optimal operating timetable (on/off status) for each pump over the desired time interval. The maximum number of on/off switches for each pump is imposed on the objective function as an additional constraint. To determine the optimal operation of the pumps, a model-based optimization-simulation algorithm was developed based on the G-JPSO and JPSO algorithms. The results of the proposed algorithm compared well with those of the ant colony, genetic and JPSO algorithms. This shows the robustness of the proposed algorithm in finding near-optimum solutions at reasonable computational cost.
Keywords: G-JPSO, operation, optimization, pumping station, water distribution networks.
360 Investigating Simple Multipath Compensation for Frequency Modulated Signals at Lower Frequencies
Authors: Lusungu Ndovi
Abstract:
Radio propagation from point to point is affected by the physical channel in many ways. A signal arriving at a destination travels through a number of different paths, which are referred to as multipaths. Research in this area of wireless communications has progressed well over the years, taking different angles of focus: some researchers focus on ways of reducing or avoiding multipath effects, whilst others focus on mitigating those effects through compensation schemes. Baseband processing is seen as one field of signal processing that is cardinal to the advancement of software-defined radio technology, which has led to wide research into carrying out certain algorithms at baseband. This paper considers multipath compensation for frequency-modulated signals. The compensation process is carried out at radio frequency (RF) and at quadrature baseband (QBB), and the results are compared. Simulations are carried out using MATLAB to show the benefits of working at the lower QBB frequencies rather than at RF.
Keywords: Quadrature baseband, Radio frequency, Multipath compensation, Frequency modulation, Signal processing.
359 Agent-based Simulation for Blood Glucose Control in Diabetic Patients
Authors: Sh. Yasini, M. B. Naghibi-Sistani, A. Karimpour
Abstract:
This paper employs a new approach to regulate the blood glucose level of a type I diabetic patient under intensive insulin treatment. The closed-loop control scheme incorporates expert knowledge about treatment by using reinforcement learning theory to maintain the normoglycemic average of 80 mg/dl and a normal free plasma insulin concentration from a severe initial state. The insulin delivery rate is obtained off-line using the Q-learning algorithm, without requiring an explicit model of the environment dynamics. The implementation of the insulin delivery rate therefore requires simple function evaluation and minimal online computation. Controller performance is assessed in terms of its ability to reject the effect of meal disturbance and to overcome the variability in the glucose-insulin dynamics from patient to patient. Computer simulations are used to evaluate the effectiveness of the proposed technique and to show its superiority in controlling hyperglycemia over other existing algorithms.
Keywords: Insulin delivery rate, Q-learning algorithm, Reinforcement learning, Type I diabetes.
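A minimal tabular Q-learning sketch in the same spirit, assuming a made-up toy glucose model, coarse glucose bands as states and discrete insulin rates as actions (not the paper's patient model or reward design):

import numpy as np

rng = np.random.default_rng(0)
glucose_bands = np.arange(40, 301, 20)            # state: discretized glucose level
actions = np.array([0.0, 0.5, 1.0, 2.0])          # action: insulin delivery rate (toy units)
Q = np.zeros((len(glucose_bands), len(actions)))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(state_idx, action_idx):
    glucose = glucose_bands[state_idx]
    # toy dynamics: insulin lowers glucose, meals add a random disturbance
    glucose = glucose - 15.0 * actions[action_idx] + rng.normal(5.0, 10.0)
    next_idx = int(np.clip((glucose - 40) // 20, 0, len(glucose_bands) - 1))
    reward = -abs(glucose_bands[next_idx] - 80.0)  # closer to 80 mg/dl is better
    return next_idx, reward

for episode in range(2000):
    s = rng.integers(len(glucose_bands))
    for t in range(50):
        a = rng.integers(len(actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])   # Q-learning update
        s = s2

print("greedy insulin rate per glucose band:", actions[np.argmax(Q, axis=1)])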
358 CPU Architecture Based on Static Hardware Scheduler Engine and Multiple Pipeline Registers
Authors: Ionel Zagan, Vasile Gheorghita Gaitan
Abstract:
The development of CPUs, and of real-time systems based on them, has made it possible to use time at increasingly fine resolutions. Together with scheduling methods and algorithms, time organization has been improved so as to respond to the need for optimization and to the way in which the CPU is used. This paper contains both a detailed theoretical description and the results obtained from research on improving the performance of the nMPRA (Multi Pipeline Register Architecture) processor by implementing specific functions in hardware. The proposed CPU architecture has been developed, simulated and validated using a Virtex-7 FPGA, via an SoC project. Although the hardware structure of the nMPRA processor with five pipeline stages is very complex, the present paper presents and analyzes the tests dedicated to the implementation of the CPU and of the on-chip instruction and data memory. In order to practically implement and test the entire SoC project, various tests have been performed to verify the peripheral drivers and the boot module (bootloader).
Keywords: Hardware scheduler, nMPRA processor, real-time systems, scheduling methods.
357 A Comparative Study of Malware Detection Techniques Using Machine Learning Methods
Authors: Cristina Vatamanu, Doina Cosovan, Dragoş Gavriluţ, Henri Luchian
Abstract:
In the past few years, the amount of malicious software has increased exponentially and, therefore, machine learning algorithms have become instrumental in identifying clean and malware files through (semi-)automated classification. When working with very large datasets, the major challenge is to reach both a very high malware detection rate and a very low false positive rate. Another challenge is to minimize the time needed for the machine learning algorithm to do so. This paper presents a comparative study between different machine learning techniques such as linear classifiers, ensembles, decision trees, and various hybrids thereof. The training dataset consists of approximately 2 million clean files and 200,000 infected files, which is a realistic quantitative mixture. The paper investigates the above-mentioned methods with respect to both their performance (detection rate and false positive rate) and their practicability.
Keywords: Detection Rate, False Positives, Perceptron, One Side Class, Ensembles, Decision Tree, Hybrid methods, Feature Selection.
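A small sketch of such a comparison on synthetic data, assuming scikit-learn and a made-up class imbalance; it reports the detection rate and false positive rate discussed above, not the paper's dataset, features or exact methods:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Perceptron
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=20000, n_features=50, weights=[0.9, 0.1],
                           random_state=0)      # label 1 = "malware" (minority class)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "perceptron": Perceptron(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=10),
    "random forest": RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(f"{name:13s} detection rate = {tp / (tp + fn):.3f}  "
          f"false positive rate = {fp / (fp + tn):.3f}")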
356 Dissecting Big Trajectory Data to Analyse Road Network Travel Efficiency
Authors: Rania Alshikhe, Vinita Jindal
Abstract:
Digital innovation has played a crucial role in managing smart transportation. For this, big trajectory data collected from traveling vehicles such as taxis, through installed global positioning system (GPS)-enabled devices, can be utilized. It offers an unprecedented opportunity to trace the movements of vehicles at fine spatiotemporal granularity. This paper aims to explore big trajectory data to measure the travel efficiency of road networks using the proposed statistical travel efficiency measure (STEM) across an entire city. Further, it identifies the causes of low travel efficiency with the proposed least squares approximation network-based causality exploration (LANCE). Finally, the resulting data analysis reveals the causes of low travel efficiency, along with the road segments that need to be optimized to improve the traffic conditions and thus minimize the average travel time from a given point A to a point B in the road network. The obtained results show that our proposed approach outperforms the baseline algorithms for measuring the travel efficiency of the road network.
Keywords: GPS trajectory, road network, taxi trips, digital map, big data, STEM, LANCE
355 A Fast Silhouette Detection Algorithm for Shadow Volumes in Augmented Reality
Authors: Hoshang Kolivand, Mahyar Kolivand, Mohd Shahrizal Sunar, Mohd Azhar M. Arsad
Abstract:
Real-time shadow generation in virtual environments and Augmented Reality (AR) has always been a hot topic over the last three decades. Shadow generation in AR involves a large amount of calculation, so a fast algorithm is needed that can be implemented in any real-time renderer. In this paper, a silhouette detection algorithm is presented to generate shadows for AR systems. The Δ+ algorithm is presented, based on extending the edges of occluders to recognize which edges are silhouettes in the case of real-time rendering. An accurate comparison between the proposed algorithm and current silhouette detection algorithms is carried out to show the reduction in calculation achieved by the presented algorithm. The algorithm is tested in both virtual environments and AR systems. We think that this algorithm has the potential to be a fundamental algorithm for shadow generation in all complex environments.
Keywords: Silhouette detection, shadow volumes, real-time shadows, rendering, augmented reality.
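A sketch of classic silhouette-edge detection for shadow volumes (an edge is a silhouette when its two adjacent faces have opposite orientation with respect to the light), assuming a made-up tetrahedron occluder and light position; this illustrates the baseline idea, not the Δ+ algorithm:

import numpy as np
from collections import defaultdict

# Made-up toy occluder: a tetrahedron with outward-wound triangular faces.
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
light = np.array([2.0, 2.0, 2.0])

def faces_light(face):
    a, b, c = (vertices[i] for i in face)
    normal = np.cross(b - a, c - a)
    return float(np.dot(normal, light - a)) > 0.0     # is the face oriented toward the light?

# Map each edge to the orientations of its two adjacent faces,
# then keep edges whose adjacent faces disagree: those form the silhouette.
edge_faces = defaultdict(list)
for face in faces:
    for i in range(3):
        edge = tuple(sorted((face[i], face[(i + 1) % 3])))
        edge_faces[edge].append(faces_light(face))

silhouette = [edge for edge, lit in edge_faces.items() if len(lit) == 2 and lit[0] != lit[1]]
print("silhouette edges (vertex index pairs):", silhouette)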
354 On Mobile Checkpointing using Index and Time Together
Authors: Awadhesh Kumar Singh
Abstract:
Checkpointing is one of the techniques commonly used to provide fault tolerance in distributed systems, so that the system can operate even if one or more components have failed. However, mobile computing systems are constrained by low bandwidth, mobility, lack of stable storage, frequent disconnections and limited battery life. Hence, checkpointing protocols with fewer synchronization messages and fewer checkpoints are preferred in mobile environments. There are two different, though not orthogonal, approaches to checkpointing mobile computing systems, namely time-based and index-based. Our protocol is a fusion of these two approaches, though not the first of its kind. In the present exposition, an index-based checkpointing protocol has been developed which uses time to indirectly coordinate the creation of consistent global checkpoints for mobile computing systems. The proposed algorithm is non-blocking, adaptive, and does not use any control messages. Compared to other contemporary checkpointing algorithms, it is computationally more efficient because it takes a smaller number of checkpoints and does not need to compute dependency relationships. A brief account of important and relevant works in both fields, time-based and index-based, has also been included in the presentation.
Keywords: Checkpointing, forced checkpoint, mobile computing, recovery, time-coordinated.
353 When Explanations “Cause” Error: A Look at Representations and Compressions
Authors: Michael Lissack
Abstract:
We depend upon explanation in order to “make sense” of our world. And making sense is all the more important when dealing with change. But what happens if our explanations are wrong? This question is examined with respect to two types of explanatory model. Models based on labels and categories we shall refer to as “representations.” More complex models involving stories, multiple algorithms, rules of thumb, questions, and ambiguity we shall refer to as “compressions.” Both compressions and representations are reductions, but representations are far more reductive than compressions. Representations can be treated as a set of defined meanings: coherence with regard to a representation is the degree of fidelity between the item in question and the definition of the representation, of the label. By contrast, compressions contain enough degrees of freedom and ambiguity to allow us to make internal predictions so that we may determine our potential actions in the possibility space. Compressions are explanatory via mechanism; representations are explanatory via category. Managers often confuse their evocation of a representation (category inclusion) with the creation of a context of compression (description of mechanism). When this type of explanatory error occurs, more errors follow. In the drive for efficiency, such substitutions are all too often proclaimed, at the manager's peril.
Keywords: Coherence, Emergence, Reduction, Model
352 Benchmarking of Pentesting Tools
Authors: Esteban Alejandro Armas Vega, Ana Lucila Sandoval Orozco, Luis Javier García Villalba
Abstract:
The benchmarking of tools for the dynamic analysis of vulnerabilities in web applications is done periodically, because these tools update their knowledge bases and search algorithms from time to time in order to improve their accuracy. Unfortunately, the vast majority of these evaluations are made by software enthusiasts who publish their results on blogs or non-academic websites, always with the same evaluation methodology. Similarly, most academics who have carried out this type of analysis from a scientific approach use much the same methodology as the empirical authors. This paper is motivated by the interest in finding answers to questions that many users of this type of tool have been asking over the years, such as whether a tool truly tests and evaluates every vulnerability it claims to, or whether it really delivers a complete report of all the vulnerabilities tested and exploited. These kinds of questions have also motivated previous work, but without real answers. The aim of this paper is to show results that truly answer, at least for the tested tools, those open questions. All the results have been obtained by changing the common benchmarking model used in the previous works.
Keywords: Cybersecurity, IDS, security, web scanners, web vulnerabilities.
351 Modified Naïve Bayes Based Prediction Modeling for Crop Yield Prediction
Authors: Kefaya Qaddoum
Abstract:
Most greenhouse growers desire a predetermined amount of yield in order to accurately meet market requirements. The purpose of this paper is to model a simple but often satisfactory supervised classification method. The original naive Bayes has a serious weakness: it retains redundant predictors. In this paper, a regularization technique is used to obtain a computationally efficient classifier based on naive Bayes. The suggested construction, using an L1 penalty, is capable of clearing out redundant predictors; a modification of the LARS algorithm is devised to solve this problem, making the method applicable to a wide range of data. In the experimental section, a study is conducted to examine the effect of redundant and irrelevant predictors and to test the method on the WSG tomato-yield data set, where there are many more predictors than observations and the pressing need to predict weekly yield is the goal of this approach. Finally, the modified approach is compared with several naive Bayes variants and other classification algorithms (SVM and kNN), and is shown to perform fairly well.
Keywords: Tomato yields prediction, naive Bayes, redundancy
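A sketch of the general idea, assuming scikit-learn: an L1-penalized model fitted by LARS (LassoLars) discards redundant predictors before a Gaussian naive Bayes classifier is fitted; the synthetic data and the alpha value are illustration choices, not the paper's modified classifier or the WSG data:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LassoLars
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=120, n_features=300, n_informative=10,
                           random_state=0)       # many more predictors than samples

# L1/LARS path: coefficients that are exactly zero mark predictors to drop.
selector = LassoLars(alpha=0.01).fit(X, y)
kept = np.flatnonzero(selector.coef_)
print("predictors kept:", len(kept), "of", X.shape[1])

full_score = cross_val_score(GaussianNB(), X, y, cv=5).mean()
reduced_score = cross_val_score(GaussianNB(), X[:, kept], y, cv=5).mean()
print(f"naive Bayes accuracy: all predictors {full_score:.3f}, "
      f"selected predictors {reduced_score:.3f}")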
350 Multichannel Scheme under Max-Min Fairness Environment for Cognitive Radio Networks
Authors: Hans R. Márquez, Cesar Hernández, Ingrid Páez
Abstract:
This paper develops a multiple channel assignment model which allows spectrum opportunities in cognitive radio networks to be exploited in the most efficient way. The developed scheme allows several available, frequency-adjacent channels, which together provide a larger bandwidth, to be assigned under a fairness environment. The hybrid assignment model is made up of two algorithms: one ranks and selects the available frequency channels, and the other establishes max-min fairness so as not to restrict the spectrum opportunities of all the other secondary users who also wish to transmit. Measurements were made of average bandwidth and average delay, as well as a fairness computation, for several channel assignments. The results were evaluated with experimental spectrum occupancy data captured from the GSM frequency band. The developed model shows evidence of improvement in the use of spectrum opportunities and a wider average transmission bandwidth for each secondary user, while maintaining equality criteria in channel assignment.
Keywords: Bandwidth, fairness, multichannel, secondary users.
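A small sketch of max-min fair allocation by progressive filling, assuming unit-bandwidth channels and made-up user demands; it illustrates the fairness criterion only, not the paper's two-algorithm ranking-plus-assignment scheme:

def max_min_fair_channels(free_channels, demands):
    """Return {user: [channel, ...]} under max-min fairness (progressive filling)."""
    allocation = {user: [] for user in demands}
    for channel in free_channels:
        # candidates are secondary users whose demand is not yet satisfied
        candidates = [u for u in demands if len(allocation[u]) < demands[u]]
        if not candidates:
            break
        neediest = min(candidates, key=lambda u: len(allocation[u]))
        allocation[neediest].append(channel)
    return allocation

free = ["ch1", "ch2", "ch3", "ch5", "ch6", "ch8"]      # sensed-free channels (illustration)
demands = {"SU-A": 4, "SU-B": 2, "SU-C": 1}            # channels requested by each user
print(max_min_fair_channels(free, demands))
# {'SU-A': ['ch1', 'ch5', 'ch8'], 'SU-B': ['ch2', 'ch6'], 'SU-C': ['ch3']}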
349 Evaluation of Model Evaluation Criterion for Software Development Effort Estimation
Authors: S. K. Pillai, M. K. Jeyakumar
Abstract:
Estimation of model parameters is necessary to predict the behavior of a system. Model parameters are estimated using optimization criteria. Most algorithms use historical data to estimate model parameters. The known target (actual) values and the outputs produced by the model are compared, and the differences between the two form the basis for estimating the parameters. In order to compare different models developed using the same data, different criteria are used. Data obtained for small-scale projects are used here. We consider the software effort estimation problem using a radial basis function network. The accuracy comparison is made using various existing criteria for one and two predictors. Then, we propose a new evaluation criterion based on linear least squares and compare the results for one and two predictors. We have considered another data set and evaluated prediction accuracy using the new criterion. The new criterion is easier to comprehend than a single statistic. Although software effort estimation is considered here, this method is applicable to any modeling and prediction task.
Keywords: Software effort estimation, accuracy, Radial Basis Function, linear least squares.
348 Optimal Design of Multimachine Power System Stabilizers Using Improved Multi-Objective Particle Swarm Optimization Algorithm
Authors: Badr M. Alshammari, T. Guesmi
Abstract:
In this paper, the concept of a non-dominated sorting multi-objective particle swarm optimization with local search (NSPSO-LS) is presented for the optimal design of multimachine power system stabilizers (PSSs). The controller design is formulated as an optimization problem in order to shift the system electromechanical modes into a pre-specified region of the s-plane. A composite set of objective functions comprising the damping factor and the damping ratio of the undamped and lightly damped electromechanical modes is considered. The performance of the proposed optimization algorithm is verified on the 3-machine 9-bus system. Simulation results based on eigenvalue analysis and nonlinear time-domain simulation show the potential and superiority of the NSPSO-LS algorithm in tuning PSSs over a wide range of loading conditions and large disturbances, compared to the classic PSO technique and genetic algorithms.
Keywords: Multi-objective optimization, particle swarm optimization, power system stabilizer, low frequency oscillations.
347 Geo-Spatial Methods to Better Understand Urban Food Deserts
Authors: Brian Ceh, Alison Jackson-Holland
Abstract:
Food deserts are a reality in some cities. These deserts can be described as a shortage of healthy food options within close proximity of consumers. The shortage in this case is typically caused by a lack of stores in an urban area that provide adequate fruit and vegetable choices. This study explores new avenues to better understand food deserts by examining the modes of transportation that are available to shoppers or consumers, e.g. walking, automobile, or public transit. Further, this study is unique in that it explores the location not only of large grocery stores but also of small grocery and convenience stores. The relationship between socio-economic indicators, such as personal income, is also explored to determine any possible association with food deserts. In addition, to help facilitate our understanding of food deserts, complex spatial network models built on adequate algorithms are used to investigate the possibility of food deserts in the city of Hamilton, Canada. It is found that Hamilton, Canada is adequately served by retailers who provide healthy food choices and that the food desert phenomenon is almost absent.
Keywords: Canada, desert, food, Hamilton, stores.
346 Fault Localization and Alarm Correlation in Optical WDM Networks
Authors: G. Ramesh, S. Sundara Vadivelu
Abstract:
For high-speed networks, providing resilience against failures is an essential requirement. A main concern in designing next-generation optical networks is protecting and restoring high-capacity WDM networks from failures. Quick detection, identification and restoration make networks stronger and more reliable, even though failures cannot be avoided. Hence, it is necessary to develop fast, efficient and dependable fault localization and detection mechanisms. In this paper we propose a new fault localization algorithm for WDM networks which can identify the location of a failure on a failed lightpath. Our algorithm detects the failed connection and then attempts to reroute the data stream through an alternate path. In addition, we develop an algorithm to analyze the alarms generated by the components of an optical network in the presence of a fault. It uses alarm correlation to reduce the list of suspected components shown to the network operators. Our simulation results show that the proposed algorithms achieve lower blocking probability and delay while obtaining higher throughput.
Keywords: Alarm correlation, blocking probability, delay, fault localization, WDM networks.
345 Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network
Authors: Souhir Ben Amor, Heni Boubaker, Lotfi Belkacem
Abstract:
The aim of this paper is to forecast electricity spot prices. First, we focus on modeling the conditional mean of the series, so we adopt a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Secondly, the residuals from the k-factor GARMA model are used as a proxy for the conditional variance; these residuals are predicted using two different approaches. In the first approach, a local linear wavelet neural network (LLWNN) model is developed to predict the conditional variance using the backpropagation learning algorithm. In the second approach, the Gegenbauer generalized autoregressive conditional heteroscedasticity (G-GARCH) process is adopted, and the parameters of the k-factor GARMA-G-GARCH model are estimated using a wavelet methodology based on the discrete wavelet packet transform (DWPT). The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is more appropriate for forecasting.
Keywords: k-factor GARMA, LLWNN, G-GARCH, electricity price, forecasting.
344 Robot Movement Using the Trust Region Policy Optimization
Authors: Romisaa Ali
Abstract:
The policy gradient approach is a subset of deep reinforcement learning (DRL), which combines deep neural networks (DNN) with reinforcement learning (RL). This approach finds the optimal policy for robot movement based on the experience the robot gains from interaction with its environment. Unlike previous policy gradient algorithms, which were unable to handle the two types of error, variance and bias, introduced by the DNN model through over- or underestimation, this algorithm is capable of handling both. This article discusses the state-of-the-art (SOTA) policy gradient technique, trust region policy optimization (TRPO), by applying the method in various environments and comparing it with another policy gradient method, proximal policy optimization (PPO), to explain their robust optimization; TRPO is used to gather experience data during the various training phases after observing the impact of hyperparameters on neural network performance.
Keywords: Deep neural networks, deep reinforcement learning, Proximal Policy Optimization, state-of-the-art, trust region policy optimization.
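A compact sketch of such a TRPO-versus-PPO comparison, assuming the stable-baselines3 and sb3-contrib packages, a standard Gymnasium control task (Pendulum-v1) and an arbitrary step budget rather than the article's robot environments or hyperparameter study:

import gymnasium as gym
from stable_baselines3 import PPO
from sb3_contrib import TRPO
from stable_baselines3.common.evaluation import evaluate_policy

for algo in (TRPO, PPO):
    env = gym.make("Pendulum-v1")
    model = algo("MlpPolicy", env, verbose=0)          # same policy network for both methods
    model.learn(total_timesteps=50_000)
    mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=20)
    print(f"{algo.__name__}: mean episode reward {mean_reward:.1f} +/- {std_reward:.1f}")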
343 A Distance Function for Data with Missing Values and Its Application
Authors: Loai AbdAllah, Ilan Shimshoni
Abstract:
Missing values in data are common in real-world applications. Since the performance of many data mining algorithms depends critically on being given a good metric over the input space, in this paper we define a distance function for unlabeled datasets with missing values. We use the Bhattacharyya distance, which measures the similarity of two probability distributions, to define our new distance function. According to this distance, the distance between two points with no missing attribute values is simply the Mahalanobis distance. When, on the other hand, one of the coordinates has a missing value, the distance is computed according to the distribution of the missing coordinate. Our distance is general and can be used as part of any algorithm that computes the distance between data points. Because the performance of the k nearest neighbor (kNN) classifier depends strongly on the chosen distance measure, we opted for it to evaluate our distance's ability to accurately reflect object similarity. We experimented on standard numerical datasets from the UCI repository, drawn from different fields. On these datasets we simulated missing values and compared the performance of the kNN classifier using our distance to three other basic methods. Our experiments show that kNN using our distance function outperforms kNN using the other methods. Moreover, the runtime of our method is only slightly higher than that of the other methods.
Keywords: Missing values, Distance metric, Bhattacharyya distance.
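A simplified sketch of the idea, using a diagonal (per-feature) variance instead of the full Mahalanobis form and the expected squared difference under a coordinate's empirical distribution when a value is missing; this illustrates the mechanism only, not the paper's exact Bhattacharyya-based formula or its evaluation protocol:

import numpy as np
from collections import Counter

def fit_stats(X):
    return np.nanmean(X, axis=0), np.nanvar(X, axis=0)

def distance(a, b, mu, var):
    d = 0.0
    for j in range(len(mu)):
        a_miss, b_miss = np.isnan(a[j]), np.isnan(b[j])
        if not a_miss and not b_miss:
            d += (a[j] - b[j]) ** 2 / var[j]           # both values present
        elif a_miss and b_miss:
            d += 2.0                                    # E[(X-Y)^2]/var for two independent draws
        else:
            x = b[j] if a_miss else a[j]
            d += ((x - mu[j]) ** 2 + var[j]) / var[j]   # expected squared difference to the distribution
    return np.sqrt(d)

def knn_predict(X_train, y_train, x, k, mu, var):
    dists = [distance(x, row, mu, var) for row in X_train]
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# toy data with simulated missing values (NaN)
X = np.array([[1.0, 2.0], [1.2, np.nan], [8.0, 9.0], [np.nan, 8.5]])
y = np.array([0, 0, 1, 1])
mu, var = fit_stats(X)
print(knn_predict(X, y, np.array([1.1, np.nan]), k=3, mu=mu, var=var))  # expected 0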
342 Centre Of Mass Selection Operator Based Meta-Heuristic For Unbounded Knapsack Problem
Authors: D.Venkatesan, K.Kannan, S. Raja Balachandar
Abstract:
In this paper a new genetic algorithm based on a heuristic operator and a centre of mass selection operator (CMGA) is designed for the unbounded knapsack problem (UKP), which is an NP-hard combinatorial optimization problem. The proposed genetic algorithm is based on a heuristic operator which utilizes problem-specific knowledge. This centre of mass operator, when combined with other genetic operators, forms an algorithm competitive with existing ones. Computational results show that the proposed algorithm is capable of obtaining high-quality solutions for standard randomly generated knapsack instances. A comparative study of CMGA with a simple GA on unbounded knapsack instances of size up to 200 shows the superiority of CMGA. Thus, CMGA is an efficient tool for solving the UKP, and the algorithm is also competitive with other genetic algorithms.
Keywords: Genetic Algorithm, Unbounded Knapsack Problem, Combinatorial Optimization, Meta-Heuristic, Center of Mass
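A minimal genetic algorithm for the UKP with a plain repair operator and truncation selection, assuming made-up item data; it illustrates the simple-GA baseline that CMGA is compared against, not the centre of mass selection operator itself:

import random
random.seed(0)

values, weights, capacity = [60, 100, 120], [10, 20, 30], 120

def repair(counts):                      # drop copies until the total weight fits
    counts = counts[:]
    while sum(c * w for c, w in zip(counts, weights)) > capacity:
        i = max(range(len(counts)), key=lambda j: counts[j] * weights[j])
        counts[i] -= 1
    return counts

def fitness(counts):
    return sum(c * v for c, v in zip(counts, values))

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(counts):
    i = random.randrange(len(counts))
    counts[i] = max(0, counts[i] + random.choice([-1, 1]))
    return counts

population = [repair([random.randint(0, capacity // w) for w in weights])
              for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]            # simple truncation selection
    children = [repair(mutate(crossover(random.choice(parents), random.choice(parents))))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)
print("best item counts:", best, "value:", fitness(best))   # optimum for this toy instance is 720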