Search results for: Hash algorithm.
564 An Energy Efficient Cluster Formation Protocol with Low Latency in Wireless Sensor Networks
Authors: A. Allirani, M. Suganthi
Abstract:
Data gathering is an essential operation in wireless sensor network applications, so energy-efficient techniques are required to increase the lifetime of the network. Clustering is likewise an effective technique for improving the energy efficiency and network lifetime of wireless sensor networks. In this paper, an energy-efficient cluster formation protocol is proposed with the objective of achieving low energy dissipation and latency without sacrificing application-specific quality. The objective is achieved by applying randomized, adaptive, self-configuring cluster formation and localized control for data transfers. It involves application-specific data processing, such as data aggregation or compression. The cluster formation algorithm allows each node to make independent decisions, so as to generate good clusters in the end. Simulation results show that the proposed protocol utilizes minimum energy and latency for cluster formation, thereby reducing the overhead of the protocol.
Keywords: Sensor networks, low latency, energy sorting protocol, data processing, cluster formation.
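By way of illustration, a minimal sketch of the kind of randomized, self-configuring cluster-head election described above, modeled on the well-known LEACH threshold rule; the paper's own protocol is not reproduced here, and the parameter values are illustrative assumptions.

    import random

    def elect_cluster_heads(node_ids, p=0.05, round_no=0, recent_heads=frozenset()):
        """LEACH-style randomized election: each node independently becomes a
        cluster head with probability governed by the threshold T(n), so the
        head role rotates and energy use is spread across the network."""
        period = int(1 / p)                      # every node serves once per 1/p rounds
        threshold = p / (1 - p * (round_no % period))
        heads = [n for n in node_ids
                 if n not in recent_heads        # nodes that served recently abstain
                 and random.random() < threshold]
        return heads

    # Example: 100 nodes, desired cluster-head fraction of 5%
    heads = elect_cluster_heads(list(range(100)), p=0.05, round_no=3)
    print(len(heads), "cluster heads elected")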
563 Depth Controls of an Autonomous Underwater Vehicle by Neurocontrollers for Enhanced Situational Awareness
Authors: Igor Astrov, Andrus Pedai
Abstract:
This paper focuses on a critical component of situational awareness (SA): the neural control of autonomous constant-depth flight of an autonomous underwater vehicle (AUV). Autonomous constant-depth flight is a challenging but important task for AUVs to achieve a high level of autonomy under adverse conditions. The fundamental requirements for constant-depth flight are knowledge of the depth and a properly designed controller to govern the process. The AUV, named VORAM, is used as a model for the verification of the proposed hybrid control algorithm. Three neural network controllers, namely NARMA-L2 controllers, are designed for fast and stable diving maneuvers of the chosen AUV model. This hybrid control strategy for the chosen AUV model has been verified by simulation of diving maneuvers using the software package Simulink, and it demonstrated good performance for fast SA in real-time search-and-rescue operations.
Keywords: Autonomous underwater vehicles, depth control, neurocontrollers, situational awareness.
562 An Improved Illumination Normalization based on Anisotropic Smoothing for Face Recognition
Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Seongwon Cho
Abstract:
Robust face recognition under various illumination environments is very difficult, and it needs to be accomplished for successful commercialization. In this paper, we propose an improved illumination normalization method for face recognition. Illumination normalization based on anisotropic smoothing is known to be effective among illumination normalization methods, but it deteriorates the intensity contrast of the original image and blurs sharp edges. The method proposed in this paper improves the previous anisotropic smoothing-based illumination normalization so that it increases the intensity contrast and enhances the edges while diminishing the effect of illumination variations. As a result of these improvements, face images preprocessed by the proposed illumination normalization method come to have more distinctive feature vectors (Gabor feature vectors) for face recognition. Through face recognition experiments based on Gabor feature vector similarity, the effectiveness of the proposed illumination normalization method is verified.
Keywords: Illumination normalization, face recognition, anisotropic smoothing, Gabor feature vector.
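For orientation, a minimal sketch of classic Perona-Malik anisotropic diffusion together with a simple ratio-based normalization (dividing the image by its smoothed version); this shows the baseline idea the abstract builds on, not the authors' improved method, and all parameter values are assumptions.

    import numpy as np

    def perona_malik(img, n_iter=20, kappa=15.0, lam=0.2):
        """Classic Perona-Malik anisotropic diffusion: smooths within regions
        while the conductance g suppresses diffusion across strong edges.
        Borders are treated as periodic (np.roll) for brevity."""
        u = img.astype(float).copy()
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
        for _ in range(n_iter):
            dN = np.roll(u, -1, 0) - u            # gradients to the four neighbors
            dS = np.roll(u, 1, 0) - u
            dE = np.roll(u, -1, 1) - u
            dW = np.roll(u, 1, 1) - u
            u += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
        return u

    def normalize_illumination(img, eps=1e-6):
        """Ratio-based normalization: the smoothed image approximates the
        slowly varying illumination field, so dividing it out leaves mostly
        reflectance (facial structure)."""
        L = perona_malik(img)
        return img.astype(float) / (L + eps)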
561 Modified Naïve Bayes Based Prediction Modeling for Crop Yield Prediction
Authors: Kefaya Qaddoum
Abstract:
Most greenhouse growers desire a predictable amount of yield in order to accurately meet market requirements. The purpose of this paper is to model a simple but often satisfactory supervised classification method. The original naive Bayes has a serious weakness: it retains redundant predictors. In this paper, a regularization technique is used to obtain a computationally efficient classifier based on naive Bayes. The suggested construction, which utilizes an L1 penalty, is capable of clearing out redundant predictors; a modification of the LARS algorithm is devised to solve this problem, making the method applicable to a wide range of data. In the experimental section, a study is conducted to examine the effect of redundant and irrelevant predictors, and to test the method on a WSG data set of tomato yields, where there are many more predictors than data points and the urgent need to predict weekly yield is the goal of this approach. Finally, the modified approach is compared with several naive Bayes variants and other classification algorithms (SVM and kNN), and is shown to be fairly good.
Keywords: Tomato yield prediction, naive Bayes, redundancy.
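A hedged sketch of the two-stage idea, using scikit-learn as a stand-in: an L1 penalty (solved by LARS) clears out redundant predictors, and plain naive Bayes is trained on the survivors. The authors' modified-LARS regularized naive Bayes itself is not reproduced, and the data here are synthetic.

    import numpy as np
    from sklearn.linear_model import LassoLarsCV
    from sklearn.naive_bayes import GaussianNB

    # Synthetic stand-in for the tomato-yield setting: many more predictors
    # than samples, most of them irrelevant (hypothetical data).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 200))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=60) > 0).astype(int)

    # Stage 1: the L1 penalty (fitted by LARS) zeroes out redundant predictors.
    lasso = LassoLarsCV(cv=5).fit(X, y)
    kept = np.flatnonzero(lasso.coef_)
    print("predictors kept:", kept)

    # Stage 2: plain naive Bayes on the surviving predictors only.
    clf = GaussianNB().fit(X[:, kept], y)
    print("training accuracy:", clf.score(X[:, kept], y))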
560 Automatic Lip Contour Tracking and Visual Character Recognition for Computerized Lip Reading
Authors: Harshit Mehrotra, Gaurav Agrawal, M.C. Srivastava
Abstract:
Computerized lip reading has been one of the most actively researched areas of computer vision in the recent past because of its crime-fighting potential and invariance to the acoustic environment. However, several factors such as fast speech, bad pronunciation, poor illumination, movement of the face, moustaches, and beards make lip reading difficult. In the present work, we propose a solution for automatic lip contour tracking and recognition of letters of the English language spoken by speakers, using only the information available from lip movements. The level set method is used for tracking the lip contour using a contour velocity model, and a feature vector of lip movements is then obtained. Character recognition is performed using a modified k-nearest-neighbor algorithm which assigns more weight to nearer neighbors. The proposed system has been found to have an accuracy of 73.3% for character recognition with speaker lip movements as the only input and without using any speech recognition system in parallel. The approach used in this work is found to serve the purpose of lip reading well when the size of the database is small.
Keywords: Contour velocity model, lip contour tracking, lip reading, visual character recognition.
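A minimal sketch of the modified k-nearest-neighbor rule named above, with votes weighted by inverse distance so that nearer neighbors count more; the feature vectors and the exact weighting scheme are illustrative assumptions.

    import numpy as np

    def weighted_knn_predict(X_train, y_train, x, k=5, eps=1e-9):
        """k-NN where each neighbor votes with weight 1/distance, so nearer
        neighbors dominate the decision (one simple 'modified kNN')."""
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(d)[:k]
        votes = {}
        for i in nearest:
            votes[y_train[i]] = votes.get(y_train[i], 0.0) + 1.0 / (d[i] + eps)
        return max(votes, key=votes.get)

    # Toy usage with 2-D lip-movement feature vectors (hypothetical data)
    X = np.array([[0.1, 0.2], [0.15, 0.22], [0.9, 0.8], [0.85, 0.75]])
    y = np.array(["A", "A", "O", "O"])
    print(weighted_knn_predict(X, y, np.array([0.2, 0.25]), k=3))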
559 A New Hybrid RMN Image Segmentation Algorithm
Authors: Abdelouahab Moussaoui, Nabila Ferahta, Victor Chen
Abstract:
The development of computer-aided systems for medical diagnosis is not easy because of inhomogeneities in MRI data, the variability of the data from one sequence to another, as well as other source distortions that accentuate this difficulty. A new automatic, contextual, adaptive, and robust segmentation procedure by MRI brain tissue classification is described in this article. A first phase consists in estimating the probability density of the data by the Parzen-Rosenblatt method. The classification procedure is completely automatic and makes no assumptions about either the number of clusters or their prototypes, since the latter are detected automatically by a mathematical morphology operator called skeleton by influence zones (SKIZ) detection. The problem of the initialization of the prototypes, as well as of their number, is transformed into an optimization problem; moreover, the procedure is adaptive, since it takes into consideration the contextual information present in every voxel through an adaptive and robust nonparametric Markov field (MF) model. The number of misclassifications is reduced by the use of the Maximum Posterior Marginal (MPM) minimization criterion.
Keywords: Clustering, automatic classification, SKIZ, Markov fields, image segmentation, Maximum Posterior Marginal (MPM).
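A minimal sketch of the first phase, Parzen-Rosenblatt density estimation with a Gaussian kernel, in one dimension; the bandwidth and intensity data are illustrative assumptions, and the SKIZ and Markov-field stages are not reproduced.

    import numpy as np

    def parzen_density(samples, x_grid, h=0.3):
        """Parzen-Rosenblatt estimate: average of Gaussian kernels of
        bandwidth h centered on each sample."""
        diffs = (x_grid[:, None] - samples[None, :]) / h
        kernels = np.exp(-0.5 * diffs ** 2) / np.sqrt(2 * np.pi)
        return kernels.mean(axis=1) / h

    # Toy intensity samples from two tissue classes (hypothetical values)
    intensities = np.concatenate([np.random.normal(60, 5, 300),
                                  np.random.normal(110, 8, 300)])
    grid = np.linspace(0, 160, 161)
    p = parzen_density(intensities, grid, h=3.0)
    print("densest intensity:", grid[np.argmax(p)])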
558 Effect of Non Uniformity Factors and Assignment Factors on Errors in Charge Simulation Method with Point Charge Model
Authors: Gururaj S. Punekar, N. K. Kishore, H. S. Y. Shastry
Abstract:
The Charge Simulation Method (CSM) is one of the most widely used numerical field computation techniques in high-voltage (HV) engineering. High-voltage fields of varying non-uniformities are encountered in practice. CSM programs being case-specific, the simulation accuracy depends heavily on the user's (programmer's) experience. This work is an effort to understand CSM errors and to evolve some guidelines for setting up accurate CSM models, relating non-uniformities with assignment factors. The results are for the six-point-charge model of the sphere-plane gap geometry. Using a genetic algorithm (GA) as a tool, optimum assignment factors at different non-uniformity factors for this model have been evaluated and analyzed. It is shown that symmetrically placed six-point-charge models can be good enough to set up CSM programs with potential errors less than 0.1% when the field non-uniformity factor is greater than 2.64 (field utilization factor less than 52.76%).
Keywords: Assignment factor, Charge Simulation Method, High Voltage, Numerical field computation, Non uniformity factor, Simulation errors.
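For background, a minimal least-squares flavor of the Charge Simulation Method for a single electrode: fictitious point charges are placed inside the contour (at a radius set by an assignment factor) and solved so that their superposed potentials match the boundary condition at contour points, with the error then checked between those points. This is a generic sketch, not the paper's six-point-charge sphere-plane model or its GA-tuned assignment factors.

    import numpy as np

    EPS0 = 8.854e-12

    def csm_sketch(R=0.1, V=1.0, n_charges=6, assignment=0.5, n_contour=24):
        """CSM flavor: charges on an inner ring (radius assignment*R), contour
        points on the electrode surface; solve P q = V in the least-squares
        sense, then evaluate the potential error between contour points."""
        tq = np.linspace(0, 2 * np.pi, n_charges, endpoint=False)
        tc = np.linspace(0, 2 * np.pi, n_contour, endpoint=False)
        qpos = assignment * R * np.c_[np.cos(tq), np.sin(tq)]
        cpos = R * np.c_[np.cos(tc), np.sin(tc)]
        # Potential coefficient matrix P[i, j] = 1 / (4*pi*eps0*r_ij)
        r = np.linalg.norm(cpos[:, None, :] - qpos[None, :, :], axis=2)
        P = 1.0 / (4 * np.pi * EPS0 * r)
        q, *_ = np.linalg.lstsq(P, np.full(n_contour, V), rcond=None)
        # Check accuracy at points halfway between the contour points
        tt = tc + np.pi / n_contour
        test = R * np.c_[np.cos(tt), np.sin(tt)]
        rt = np.linalg.norm(test[:, None, :] - qpos[None, :, :], axis=2)
        err = (1.0 / (4 * np.pi * EPS0 * rt)) @ q - V
        return q, np.max(np.abs(err)) / V * 100   # percent potential error

    q, err = csm_sketch(assignment=0.5)
    print(f"max potential error: {err:.4f} %")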
557 SNR Classification Using Multiple CNNs
Authors: Thinh Ngo, Paul Rad, Brian Kelley
Abstract:
Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression, and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. An unacceptably low accuracy of less than 50% is found to undermine the traditional application of DL classification to SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and thereby enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification for a single SNR value with two labels: less than, or greater than or equal to, that value. Together, the multiple CNNs are combined to effectively classify over a range of SNR values from −20 to 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves an individual SNR prediction accuracy of 92%, a composite accuracy of 70%, and prediction convergence one order of magnitude faster than that of traditional estimation.
Keywords: Classification, classifier fusion, CNN, deep learning, prediction, SNR.
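A hedged sketch of the fusion step: each binary network answers "SNR >= threshold?", and the bank of monotone answers is decoded like a thermometer code. The trained CNNs are replaced by a stub below; the thresholds, step size, and decoding rule are assumptions.

    import numpy as np

    # SNR thresholds covered by the bank of binary classifiers
    THRESHOLDS = np.arange(-20, 34, 2)   # -20 ... 32 dB in 2 dB steps

    def fuse_binary_decisions(p_ge):
        """Classifier fusion for a bank of binary CNNs, where p_ge[k] is the
        k-th network's probability that SNR >= THRESHOLDS[k]. The prediction
        is the highest threshold still judged exceeded, so the monotone run
        of decisions encodes the SNR like a thermometer."""
        decisions = p_ge >= 0.5
        if not decisions.any():
            return THRESHOLDS[0] - 2     # below the covered range
        return THRESHOLDS[np.max(np.flatnonzero(decisions))]

    # Stub in place of the trained CNNs: an idealized bank at true SNR = 7 dB
    true_snr = 7.0
    p = 1.0 / (1.0 + np.exp(-(true_snr - THRESHOLDS)))   # soft ">=" outputs
    print("fused SNR estimate:", fuse_binary_decisions(p), "dB")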
556 OCR for Script Identification of Hindi (Devnagari) Numerals using Feature Sub Selection by Means of End-Point with Neuro-Memetic Model
Authors: Banashree N. P., R. Vasanta
Abstract:
Recognition of Indian language scripts is a challenging problem. In Optical Character Recognition (OCR), a character or symbol to be recognized can be a machine-printed or handwritten character/numeral. There are several approaches to the recognition of numerals/characters, depending on the type of features extracted and the different ways of extracting them. This paper proposes a recognition scheme for handwritten Hindi (Devnagari) numerals, among the most widely used scripts in the Indian subcontinent. Our work focuses on a global feature extraction technique using end-point information, which is extracted from images of isolated numerals. These feature vectors are fed to a neuro-memetic model [18] that has been trained to recognize Hindi numerals. The prototype of the system has been tested on a variety of numeral images. In the proposed scheme, data sets are fed to the neuro-memetic algorithm, which identifies the rule with the highest fitness value (nearly 100%); the template associated with this rule is the identified numeral. Experimental results show a recognition rate of 92-97%, compared to other models.
Keywords: OCR, global feature, end-points, neuro-memetic model.
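A minimal sketch of end-point extraction from a skeletonized numeral: an on-pixel with exactly one 8-connected on-neighbor is an end-point. The neuro-memetic classifier itself is not reproduced; the toy image is illustrative.

    import numpy as np
    from scipy.ndimage import convolve

    def end_points(skeleton):
        """End-points of a binary skeleton: foreground pixels with exactly
        one 8-connected foreground neighbor."""
        sk = (skeleton > 0).astype(int)
        kernel = np.array([[1, 1, 1],
                           [1, 0, 1],
                           [1, 1, 1]])
        neighbors = convolve(sk, kernel, mode="constant")
        return np.argwhere((sk == 1) & (neighbors == 1))

    # Tiny skeleton of a stroke: both stroke tips are reported as end-points
    sk = np.zeros((5, 5), int)
    sk[1, 1] = sk[2, 2] = sk[3, 3] = 1
    print(end_points(sk))   # -> [[1 1], [3 3]]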
555 Voice Disorders Identification Using Hybrid Approach: Wavelet Analysis and Multilayer Neural Networks
Authors: L. Salhi, M. Talbi, A. Cherif
Abstract:
This paper presents a new strategy for the identification and classification of pathological voices using a hybrid method based on the wavelet transform and neural networks. After speech acquisition from a patient, the speech signal is analysed in order to extract acoustic parameters such as pitch, formants, jitter, and shimmer. The obtained results are compared with normal standard values stored in a programmable database. Sounds are collected from normal people and patients, and then classified into two different categories. The speech database consists of several pathological and normal voices collected from the national hospital "Rabta-Tunis". The speech processing algorithm is conducted in a supervised mode for discriminating normal from pathological voices and then for classifying between neural and vocal pathologies (Parkinson's, Alzheimer's, laryngeal, dyslexia, etc.). Several simulation results are presented as a function of the disease and compared with the clinical diagnosis in order to obtain an objective evaluation of the developed tool.
Keywords: Formants, neural networks, pathological voices, pitch, wavelet transform.
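For reference, the standard local jitter and shimmer measures computed from extracted pitch periods and peak amplitudes; the cycle-to-cycle values below are hypothetical, and the paper's wavelet and neural-network stages are not reproduced.

    import numpy as np

    def local_jitter(periods):
        """Jitter (local): mean absolute difference between consecutive
        pitch periods, relative to the mean period."""
        periods = np.asarray(periods, float)
        return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

    def local_shimmer(amplitudes):
        """Shimmer (local): the same measure applied to the peak amplitudes
        of successive glottal cycles."""
        amplitudes = np.asarray(amplitudes, float)
        return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

    # Hypothetical cycle-to-cycle measurements (seconds, arbitrary units)
    periods = [0.0100, 0.0102, 0.0099, 0.0101, 0.0103]
    amps = [0.81, 0.79, 0.84, 0.80, 0.78]
    print(f"jitter  = {local_jitter(periods):.2%}")
    print(f"shimmer = {local_shimmer(amps):.2%}")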
554 Complexity Analysis of Some Known Graph Coloring Instances
Authors: Jeffrey L. Duffany
Abstract:
Graph coloring is an important problem in computer science, and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs where the optimal solution is already known. This investigation analyzes a set of 50 well-known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (Mycielski and Leighton graphs) theoretically designed to be difficult to solve. The size of the graphs ranged from a low of 11 variables to a high of 864 variables. The method used to solve the coloring problem was the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the Leighton and the queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size, and number of block diagonal zeros are calculated for each graph. The results showed that the most difficult problems have low mobility (in the range of 0.2-0.5) and relatively little deviation from uniform color class size.
Keywords: Graph coloring, complexity, algorithm.
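One possible reading of the adjacency-matrix-squared method, offered as an assumption-laden illustration rather than the author's algorithm: entries of A^2 count common neighbors, so a greedy coloring can use them as an affinity when deciding which non-adjacent vertices to place in the same color class.

    import numpy as np

    def greedy_a2_coloring(A):
        """Greedy coloring guided by A^2: vertices may share a color only if
        non-adjacent; among the feasible color classes, prefer the one whose
        members share the most common neighbors with the vertex (high A^2),
        since such vertices tend to be interchangeable in a coloring."""
        A2 = A @ A
        order = np.argsort(-A.sum(axis=1))        # high-degree vertices first
        classes = []                              # list of lists of vertices
        for v in order:
            feasible = [c for c in classes if all(A[v, u] == 0 for u in c)]
            if feasible:
                best = max(feasible, key=lambda c: sum(A2[v, u] for u in c))
                best.append(v)
            else:
                classes.append([v])
        return classes

    # 5-cycle: chromatic number 3
    A = np.zeros((5, 5), int)
    for i in range(5):
        A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1
    print(len(greedy_a2_coloring(A)), "colors used")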
553 Optimization of Machining Parametric Study on Electrical Discharge Machining
Authors: Rakesh Prajapati, Purvik Patel, Hardik Patel
Abstract:
Productivity and quality are two important aspects that have become great concerns in today's competitive global market. Every production/manufacturing unit mainly focuses on these areas in relation to the process, as well as the product developed. The electrical discharge machining (EDM) process is even now an experience-driven process, wherein the selected parameters are still often far from optimal, while selecting optimal parameters is costly and time consuming. The material removal rate (MRR) during the process has been considered a productivity estimate with the aim of maximizing it, with the intention of minimizing surface roughness, taken as the most important output parameter. These two requirements, opposite in nature, have been simultaneously satisfied by selecting an optimal process environment (optimal parameter setting). The objective function is obtained by regression analysis and analysis of variance, and is then optimized using the genetic algorithm technique. The model is shown to be effective; MRR and surface roughness are improved using the optimized machining parameters.
Keywords: Material removal rate, TWR, OC, DOE, ANOVA, MINITAB.
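A hedged sketch of the optimization stage: second-order regression models for MRR and surface roughness (the coefficients below are hypothetical, not the paper's fitted models) are combined into a weighted objective and searched with a minimal genetic algorithm.

    import random

    # Hypothetical fitted regression models in coded units x in [-1, 1],
    # where x = (discharge current, pulse-on time).
    def mrr(x):   # material removal rate, to be maximized
        return 10 + 4 * x[0] + 2 * x[1] - 1.5 * x[0] * x[1] - x[0] ** 2

    def ra(x):    # surface roughness, to be minimized
        return 3 + 1.2 * x[0] + 0.8 * x[1] + 0.5 * x[0] ** 2

    def fitness(x, w=0.6):
        return w * mrr(x) - (1 - w) * ra(x)   # weighted compromise

    def genetic_optimize(pop_size=40, gens=60, pm=0.2):
        pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(pop_size)]
        for _ in range(gens):
            nxt = []
            for _ in range(pop_size):
                p1 = max(random.sample(pop, 2), key=fitness)   # tournament
                p2 = max(random.sample(pop, 2), key=fitness)
                child = [(u + v) / 2 for u, v in zip(p1, p2)]  # blend crossover
                if random.random() < pm:                       # mutation
                    i = random.randrange(2)
                    child[i] = min(1, max(-1, child[i] + random.gauss(0, 0.3)))
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    best = genetic_optimize()
    print("best setting:", best, "MRR:", mrr(best), "Ra:", ra(best))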
552 Semi-automatic Construction of Ontology-based CBR System for Knowledge Integration
Authors: Junjie Gao, Guishi Deng
Abstract:
In order to integrate knowledge in heterogeneous case-based reasoning (CBR) systems, ontology-based CBR systems have become a hot topic. To solve the problems facing ontology-based CBR systems (for example, nonstandard architecture, deficient reuse of knowledge in legacy CBR systems, and difficult ontology construction), we propose a novel approach for semi-automatically constructing an ontology-based CBR system whose architecture is based on a two-layer ontology. Domain knowledge implied in legacy case bases can be mapped from relational database schemas and knowledge items to a relevant OWL local ontology automatically, by a mapping algorithm with low time complexity. By concept clustering based on formal concept analysis, and by computing the concept equation measure and the concept inclusion measure, suggestions for enriching or amending the concept hierarchy of the OWL local ontologies are made automatically, which can aid designers in achieving semi-automatic construction of an OWL domain ontology. Validation of the approach is done through an application example.
Keywords: OWL ontology, case-based reasoning, formal concept analysis, knowledge integration.
551 A New Reliability Based Channel Allocation Model in Mobile Networks
Authors: Anujendra, Parag Kumar Guha Thakurta
Abstract:
Data transmission between mobile hosts and base stations (BSs) in mobile networks is often vulnerable to failure. So, efficient link connectivity, in terms of the services of both the base stations and the communication channels of the network, is required in wireless mobile networks to achieve highly reliable data transmission. In addition, it is observed that the number of blocked hosts increases due to an insufficient number of channels during heavy load in the network. Under such a scenario, the channels are allocated accordingly to offer reliable communication at any given time. Therefore, a reliability-based channel allocation model with acceptable system performance is proposed as a multi-objective optimization (MOO) problem in this paper. Two conflicting parameters, known as the resource reuse factor (RRF) and the number of blocked calls, are optimized under a reliability constraint in this problem. The solution to this MOO problem is obtained through NSGA-II (Non-dominated Sorting Genetic Algorithm II). The effectiveness of the proposed model is shown with a set of experimental results.
Keywords: Base station, channel, GA, Pareto-optimal, reliability.
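The core of NSGA-II is the fast non-dominated sort, which ranks candidate allocations into Pareto fronts over the two objectives; a minimal sketch follows, with both objectives recast as minimization (RRF negated) and the objective values hypothetical.

    def dominates(a, b):
        """a dominates b if it is no worse in every objective (minimization)
        and strictly better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def fast_non_dominated_sort(points):
        """Returns Pareto fronts as lists of indices, front 0 first
        (the fast non-dominated sort at the heart of NSGA-II)."""
        n = len(points)
        dominated_by = [[] for _ in range(n)]   # solutions each i dominates
        counts = [0] * n                        # how many dominate i
        for i in range(n):
            for j in range(n):
                if i != j and dominates(points[i], points[j]):
                    dominated_by[i].append(j)
                elif i != j and dominates(points[j], points[i]):
                    counts[i] += 1
        fronts = [[i for i in range(n) if counts[i] == 0]]
        while fronts[-1]:
            nxt = []
            for i in fronts[-1]:
                for j in dominated_by[i]:
                    counts[j] -= 1
                    if counts[j] == 0:
                        nxt.append(j)
            fronts.append(nxt)
        return fronts[:-1]

    # (negated RRF, blocked calls) for five hypothetical channel allocations
    objs = [(-0.8, 12), (-0.7, 9), (-0.9, 15), (-0.6, 9), (-0.8, 10)]
    print(fast_non_dominated_sort(objs))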
550 Robot Movement Using the Trust Region Policy Optimization
Authors: Romisaa Ali
Abstract:
The policy gradient approach is a subset of deep reinforcement learning (DRL), which combines deep neural networks (DNNs) with reinforcement learning (RL). This approach finds the optimal policy for robot movement based on the experience the robot gains from interaction with its environment. Unlike previous policy gradient algorithms, which were unable to handle the two types of error, variance and bias, introduced by the DNN model due to over- or underestimation, this algorithm is capable of handling both. This article discusses the state-of-the-art (SOTA) policy gradient technique, trust region policy optimization (TRPO), applying it in various environments and comparing it with another policy gradient method, proximal policy optimization (PPO), to explain their robust optimization, using this SOTA method to gather experience data during various training phases after observing the impact of hyper-parameters on neural network performance.
Keywords: Deep neural networks, deep reinforcement learning, Proximal Policy Optimization, state-of-the-art, trust region policy optimization.
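For concreteness, the clipped surrogate objective that defines PPO, the comparison method, as a standalone function; TRPO instead enforces an explicit KL-divergence trust region, which is not sketched here. The batch values are hypothetical.

    import numpy as np

    def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
        """PPO's clipped surrogate loss. The probability ratio r is clipped
        to [1-eps, 1+eps], so a single update cannot move the policy too far
        from the one that collected the data; TRPO achieves the same end by
        constraining the KL divergence between old and new policies."""
        r = np.exp(logp_new - logp_old)                  # pi_new(a|s) / pi_old(a|s)
        unclipped = r * advantages
        clipped = np.clip(r, 1 - eps, 1 + eps) * advantages
        return -np.mean(np.minimum(unclipped, clipped))  # maximize the surrogate

    # Toy batch (hypothetical log-probabilities and advantage estimates)
    logp_old = np.array([-1.0, -0.5, -2.0])
    logp_new = np.array([-0.8, -0.6, -1.5])
    adv = np.array([1.0, -0.5, 2.0])
    print("loss:", ppo_clip_loss(logp_new, logp_old, adv))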
549 Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition
Authors: L. Hamsaveni, Navya Prakash, Suresha
Abstract:
Document image analysis recognizes text and graphics in documents acquired as images. An approach without Optical Character Recognition (OCR) for degraded document image analysis has been adopted in this paper. The technique involves document imaging methods such as image fusing and Speeded Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images, so as to obtain an original document with complete information. In case the captured degraded document image is skewed, it has to be straightened (deskewed) before further processing. A special image storage format known as YCbCr is used as a tool in converting the grayscale image to the RGB image format. The presented algorithm is tested on various types of degraded documents, such as printed documents, handwritten documents, old script documents, and handwritten image sketches in documents. The purpose of this research is to obtain an original document for a given set of degraded documents from the same source.
Keywords: Grayscale image format, image fusing, SURF detection, YCbCr image format.
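A hedged OpenCV sketch of two steps named above: converting a grayscale page through BGR into the YCbCr family (which OpenCV labels YCrCb) and detecting keypoints for matching regions across the document set. SURF ships only in the patented contrib build (cv2.xfeatures2d.SURF_create), so patent-free ORB stands in below; the file name is hypothetical.

    import cv2

    # Hypothetical input: one page of the degraded document set
    gray = cv2.imread("degraded_page.png", cv2.IMREAD_GRAYSCALE)

    # Grayscale -> BGR -> YCrCb (OpenCV's name for the YCbCr family)
    bgr = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)

    # Keypoints for matching corresponding regions between document copies;
    # ORB offers the same detect-and-describe interface as SURF.
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    print(len(keypoints), "keypoints found")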
548 Comparison of Different Neural Network Approaches for the Prediction of Kidney Dysfunction
Authors: Ali Hussian Ali AlTimemy, Fawzi M. Al Naima
Abstract:
This paper presents the prediction of kidney dysfunction using different neural network (NN) approaches. Self-Organizing Maps (SOM), the Probabilistic Neural Network (PNN), and the Multi-Layer Perceptron Neural Network (MLPNN) trained with the Back Propagation Algorithm (BPA) are used in this study. Six hundred sixty-three sets of analytical laboratory tests were collected from one of the private clinical laboratories in Baghdad. For each subject, serum urea and serum creatinine levels were analyzed and tested using clinical laboratory measurements. The collected urea and creatinine levels are then used as inputs to the three NN models, in which the training is done by the different neural approaches. SOM is a class of unsupervised network, whereas PNN and BPNN are considered supervised networks. These networks are used as classifiers to predict whether the kidney is normal or dysfunctional. The accuracy of prediction, sensitivity, and specificity were found for each type of the proposed networks. We conclude that the PNN gives a faster and more accurate prediction of kidney dysfunction, and it works as a promising tool for predicting routine kidney dysfunction from clinical laboratory data.
Keywords: Kidney dysfunction, prediction, SOM, PNN, BPNN, urea and creatinine levels.
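A minimal probabilistic neural network, i.e. a per-class Gaussian Parzen density whose largest value decides the class, on the two inputs the study uses; the (urea, creatinine) values and the smoothing parameter are hypothetical.

    import numpy as np

    def pnn_predict(X_train, y_train, x, sigma=5.0):
        """Probabilistic Neural Network: for each class, average Gaussian
        kernels centered on that class's training points (a Parzen density),
        then pick the class with the larger density at x."""
        scores = {}
        for label in np.unique(y_train):
            pts = X_train[y_train == label]
            d2 = np.sum((pts - x) ** 2, axis=1)
            scores[label] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
        return max(scores, key=scores.get)

    # Hypothetical (urea, creatinine) values in mg/dl
    X = np.array([[30, 0.9], [35, 1.0], [28, 0.8],     # normal
                  [90, 2.5], [110, 3.1], [85, 2.2]])   # dysfunction
    y = np.array([0, 0, 0, 1, 1, 1])
    print("prediction:", pnn_predict(X, y, np.array([95, 2.8])))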
547 Robust Camera Calibration using Discrete Optimization
Authors: Stephan Rupp, Matthias Elter, Michael Breitung, Walter Zink, Christian Küblbeck
Abstract:
Camera calibration is an indispensable step for augmented reality or image-guided applications where quantitative information should be derived from the images. Usually, a camera calibration is obtained by taking images of a special calibration object and extracting the image coordinates of the projected calibration marks, enabling the calculation of the projection from the 3D world coordinates to the 2D image coordinates. Thus, such a procedure exhibits typical steps, including feature point localization in the acquired images, camera model fitting, correction of the distortion introduced by the optics, and finally an optimization of the model's parameters. In this paper we propose to extend this list by a further step concerning the identification of the optimal subset of images yielding the smallest overall calibration error. For this, we present a Monte Carlo based algorithm, along with a deterministic extension, that automatically determines the images yielding an optimal calibration. Finally, we present results proving that the calibration can be significantly improved by automated image selection.
Keywords: Camera calibration, discrete optimization, Monte Carlo method.
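A hedged sketch of the Monte Carlo part: draw random subsets of calibration views, calibrate on each with OpenCV (whose calibrateCamera returns the RMS reprojection error as its first value), and keep the best subset. The deterministic extension is omitted, and obj_points/img_points are assumed to be precomputed by the feature-localization step.

    import random
    import cv2

    def best_calibration_subset(obj_points, img_points, image_size,
                                subset_size=10, n_trials=200, seed=0):
        """Monte Carlo search over image subsets, scored by the RMS
        reprojection error reported by cv2.calibrateCamera."""
        rng = random.Random(seed)
        n = len(obj_points)
        best = (float("inf"), None)
        for _ in range(n_trials):
            idx = rng.sample(range(n), subset_size)
            rms, *_ = cv2.calibrateCamera([obj_points[i] for i in idx],
                                          [img_points[i] for i in idx],
                                          image_size, None, None)
            if rms < best[0]:
                best = (rms, idx)
        return best   # (error, indices of the winning image subset)

    # obj_points / img_points: per-view 3D mark coordinates and their
    # detected 2D image coordinates, one array pair per calibration image.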
546 A Distance Function for Data with Missing Values and Its Application
Authors: Loai AbdAllah, Ilan Shimshoni
Abstract:
Missing values in data are common in real-world applications. Since the performance of many data mining algorithms depends critically on being given a good metric over the input space, we decided in this paper to define a distance function for unlabeled datasets with missing values. We use the Bhattacharyya distance, which measures the similarity of two probability distributions, to define our new distance function. According to this distance, the distance between two points without missing attribute values is simply the Mahalanobis distance. When, on the other hand, the value of one of the coordinates is missing, the distance is computed according to the distribution of the missing coordinate. Our distance is general and can be used as part of any algorithm that computes the distance between data points. Because performance depends strongly on the chosen distance measure, we opted for the k-nearest-neighbor classifier to evaluate the distance function's ability to accurately reflect object similarity. We experimented on standard numerical datasets from the UCI repository from different fields. On these datasets we simulated missing values and compared the performance of the kNN classifier using our distance to three other basic methods. Our experiments show that kNN using our distance function outperforms kNN using the other methods. Moreover, the runtime of our method is only slightly higher than that of the other methods.
Keywords: Missing values, Distance metric, Bhattacharyya distance.
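A deliberately simplified, diagonal version of the idea, not the authors' full Bhattacharyya/Mahalanobis construction: when a coordinate is missing, its squared difference is replaced by the expectation under that coordinate's marginal, E[(x − Y)^2] = (x − mu_j)^2 + var_j.

    import numpy as np

    def missing_aware_sq_distance(a, b, mu, var):
        """Squared distance between points that may contain NaNs.
        Both present : ordinary squared difference (Mahalanobis-like if the
                       coordinates are standardized).
        One missing  : expected squared difference under the missing
                       coordinate's marginal: (x - mu_j)**2 + var_j.
        Both missing : expectation over two independent draws: 2 * var_j."""
        total = 0.0
        for j in range(len(a)):
            ma, mb = np.isnan(a[j]), np.isnan(b[j])
            if not ma and not mb:
                total += (a[j] - b[j]) ** 2
            elif ma and mb:
                total += 2 * var[j]
            else:
                x = b[j] if ma else a[j]
                total += (x - mu[j]) ** 2 + var[j]
        return total

    # Column statistics estimated from the complete entries (hypothetical)
    mu, var = np.array([0.0, 5.0]), np.array([1.0, 4.0])
    a = np.array([0.3, np.nan])
    b = np.array([-0.1, 6.0])
    print(missing_aware_sq_distance(a, b, mu, var))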
545 Design and Implementation of Reed Solomon Encoder on FPGA
Authors: Amandeep Singh, Mandeep Kaur
Abstract:
Error-correcting codes are used for the detection and correction of errors in digital communication systems. Error-correcting coding is based on appending redundancy to the information message according to a prescribed algorithm. Reed Solomon codes are part of channel coding and withstand the effects of noise, interference, and fading. Galois field arithmetic is used for encoding and decoding Reed Solomon codes. Galois field multipliers and linear feedback shift registers (LFSRs) are used for encoding the information data block. The design of a Reed Solomon encoder is complex because of the use of LFSRs and Galois field arithmetic. The purpose of this paper is to design and implement a Reed Solomon (255, 239) encoder with an optimized, smaller number of Galois field multipliers. A symmetric generator polynomial is used to reduce the number of GF multipliers. To increase the error-correction capability, convolutional interleaving will be used with the RS encoder. The design will be implemented on a Xilinx Spartan-II FPGA.
Keywords: Galois Field, Generator polynomial, LFSR, Reed Solomon.
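A software model of an RS(255, 239) encoder for orientation: log/antilog tables implement the GF(2^8) multipliers, and the polynomial division below computes exactly the remainder a hardware LFSR would. The primitive polynomial 0x11d and first root alpha^0 are common conventions assumed here; the paper's symmetric-generator-polynomial optimization is not reproduced.

    # GF(2^8) log/antilog tables, primitive polynomial 0x11d (assumed)
    EXP, LOG = [0] * 512, [0] * 256
    x = 1
    for i in range(255):
        EXP[i], LOG[x] = x, i
        x <<= 1
        if x & 0x100:
            x ^= 0x11d
    for i in range(255, 512):
        EXP[i] = EXP[i - 255]           # doubled table avoids a modulo

    def gf_mul(a, b):
        return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

    def poly_mul(p, q):
        """Polynomial product over GF(2^8), coefficients highest degree first."""
        r = [0] * (len(p) + len(q) - 1)
        for i, pi in enumerate(p):
            for j, qj in enumerate(q):
                r[i + j] ^= gf_mul(pi, qj)
        return r

    def generator_poly(nsym):
        """g(x) = (x - a^0)(x - a^1) ... (x - a^(nsym-1))."""
        g = [1]
        for i in range(nsym):
            g = poly_mul(g, [1, EXP[i]])
        return g

    def rs_encode(msg, nsym=16):
        """Systematic encoding: divide msg(x)*x^nsym by g(x); the hardware
        LFSR computes exactly this remainder, appended as parity."""
        gen = generator_poly(nsym)
        buf = list(msg) + [0] * nsym
        for i in range(len(msg)):
            c = buf[i]
            if c:
                for j in range(1, len(gen)):
                    buf[i + j] ^= gf_mul(gen[j], c)
        return list(msg) + buf[len(msg):]

    codeword = rs_encode(list(range(239)), nsym=16)   # RS(255, 239)
    print(len(codeword), "symbols, 16 of them parity")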
544 Application of Novel Conserving Immersed Boundary Method to Moving Boundary Problem
Authors: S. N. Hosseini, S. M. H. Karimian
Abstract:
A new conserving approach in the context of the Immersed Boundary Method (IBM) is presented to simulate one-dimensional, incompressible flow in a moving boundary problem. The method employs a control volume scheme to simulate the flow field. The concept of the ghost node is used at the boundaries to conserve the mass and momentum equations. The present method implements the conservation laws in all cells, including the boundary control volumes. Application of the method is studied in a test case with a moving boundary. Comparison between the results of this new method and a sharp-interface (Image Point Method) IBM algorithm shows a well-distinguished improvement in both the pressure and velocity fields of the present method. Fluctuations in the pressure field are fully resolved in this proposed method. This approach expands the IBM capability to simulate flow fields for a variety of problems by implementing the conservation laws on a fully Cartesian grid, compared to other conserving methods.
Keywords: Immersed Boundary Method, conservation of mass and momentum laws, moving boundary, boundary condition.
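A minimal one-dimensional illustration of the ghost-node idea, hedged as a flavor sketch rather than the paper's conserving scheme: a Dirichlet value at an immersed boundary lying between two nodes is enforced by mirroring, so the update at the last interior cell sees a flux consistent with the boundary value.

    import numpy as np

    def step_heat_with_immersed_wall(u, k, u_wall, alpha=0.25):
        """One explicit step of the 1-D heat equation on nodes 0..k, with an
        immersed wall at temperature u_wall located halfway between nodes k
        and k+1. The ghost value u_g is chosen so the linear profile between
        node k and the ghost node passes through u_wall at the interface:
        (u[k] + u_g)/2 = u_wall  =>  u_g = 2*u_wall - u[k]."""
        ghost = 2 * u_wall - u[k]
        new = u.copy()
        new[1:k] = u[1:k] + alpha * (u[2:k + 1] - 2 * u[1:k] + u[0:k - 1])
        new[k] = u[k] + alpha * (ghost - 2 * u[k] + u[k - 1])
        return new

    u = np.zeros(11)        # initial field, fixed u[0] = 0 at the left end
    for _ in range(500):
        u = step_heat_with_immersed_wall(u, k=10, u_wall=1.0)
    print(u.round(3))       # approaches a linear profile toward the wall value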
543 The Influence of Beta Shape Parameters in Project Planning
Authors: Alexios Kotsakis, Stefanos Katsavounis, Dimitra Alexiou
Abstract:
Networks can be utilized to represent project planning problems, using nodes for activities and arcs to indicate precedence relationships between them. For fixed activity durations, a simple algorithm calculates the amount of time required to complete a project, along with the activities that comprise the critical path. The Program Evaluation and Review Technique (PERT) generalizes the above model by incorporating uncertainty, allowing activity durations to be random variables, producing nevertheless a relatively crude solution to planning problems. In this paper, based on the findings of the relevant literature, which strongly suggest that a Beta distribution can be employed to model earthmoving activities, we utilize Monte Carlo simulation to estimate the project completion time distribution and measure the influence of skewness, an element inherent in the activities of modern technical projects. We also extract the activity criticality index, with the ultimate goal of producing more accurate planning estimations.
Keywords: Beta distribution, PERT, Monte Carlo Simulation, skewness, project completion time distribution.
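A minimal Monte Carlo sketch of the approach: sample each activity's duration from a Beta distribution scaled to [a, b] (unequal shape parameters produce the skewness of interest), take the longest path through a small precedence network, and read off the completion-time distribution. The network and parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    # Activities: (optimistic a, pessimistic b, alpha, beta). Unequal shape
    # parameters give the skewed durations typical of earthmoving work.
    acts = {"A": (2, 6, 2.0, 4.0), "B": (3, 9, 2.0, 5.0),
            "C": (1, 4, 3.0, 3.0), "D": (2, 5, 4.0, 2.0)}
    # Precedences: A -> C, B -> C, B -> D; project ends after C and D.

    def sample_completion(n=20000):
        out = np.empty(n)
        for i in range(n):
            d = {k: a + (b - a) * rng.beta(al, be)
                 for k, (a, b, al, be) in acts.items()}
            finish_C = max(d["A"], d["B"]) + d["C"]
            finish_D = d["B"] + d["D"]
            out[i] = max(finish_C, finish_D)
        return out

    t = sample_completion()
    print(f"mean {t.mean():.2f}, 95th percentile {np.percentile(t, 95):.2f}")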
542 Evaluation of the ANN Based Nonlinear System Models in the MSE and CRLB Senses
Authors: M.V Rajesh, Archana R, A Unnikrishnan, R Gopikakumari, Jeevamma Jacob
Abstract:
The system identification problem looks for a suitably parameterized model representing a given process. The parameters of the model are adjusted to optimize a performance function based on the error between the given process output and the identified process output. The linear system identification field is well established, with many classical approaches, whereas most of those methods cannot be applied to nonlinear systems. The problem becomes tougher if the system is completely unknown and only the output time series is available. It has been reported that the capability of artificial neural networks to approximate all linear and nonlinear input-output maps makes them predominantly suitable for the identification of nonlinear systems where only the output time series is available [1][2][4][5]. The work reported here is an attempt to implement a few of the well-known algorithms in the context of modeling nonlinear systems, and to make a performance comparison to establish their relative merits and demerits.
Keywords: Multilayer neural networks, radial basis functions, clustering algorithm, back propagation training, extended Kalman filtering, mean square error, nonlinear modeling, Cramer-Rao lower bound.
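A hedged sketch of one setting named above, identification from the output time series alone: fit a neural model y_t = f(y_{t-1}, ..., y_{t-p}) on lagged outputs and score it by MSE. scikit-learn's MLP stands in for the network families compared in the paper; the series is synthetic.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Output-only time series from an unknown nonlinear process (synthetic)
    rng = np.random.default_rng(0)
    y = np.zeros(600)
    for t in range(2, 600):
        y[t] = 0.6 * y[t-1] - 0.3 * y[t-2] + 0.4 * np.tanh(y[t-1]) \
               + 0.05 * rng.normal()

    # Lagged regression matrix: predict y[t] from the previous p outputs
    p = 4
    X = np.column_stack([y[p - k - 1: -k - 1] for k in range(p)])
    target = y[p:]

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                         random_state=0).fit(X[:400], target[:400])
    pred = model.predict(X[400:])
    print("one-step MSE on held-out data:", np.mean((pred - target[400:]) ** 2))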
541 Ensuring Data Security and Consistency in FTIMA - A Fault Tolerant Infrastructure for Mobile Agents
Authors: Umar Manzoor, Kiran Ijaz, Wajiha Shamim, Arshad Ali Shahid
Abstract:
Transaction management is one of the most crucial requirements for enterprise application development, which often requires concurrent access to distributed data shared amongst multiple applications/nodes. Transactions guarantee the consistency of data records when multiple users or processes perform concurrent operations. The existing Fault Tolerance Infrastructure for Mobile Agents (FTIMA) provides fault-tolerant behavior in distributed transactions and uses a multi-agent system for distributed transaction processing. In the existing FTIMA architecture, data flows through the network and contains personal, private, or confidential information. In banking transactions, a minor change in the transaction can cause a great loss to the user. In this paper we have modified the FTIMA architecture to ensure that the user request reaches the destination server securely and without any change. We have used triple DES for encryption/decryption and the MD5 algorithm for checking the validity of the message.
Keywords: Distributed transaction, security, mobile agents, FTIMA architecture.
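A hedged sketch with PyCryptodome and hashlib of the stated scheme: append an MD5 digest for change detection, then encrypt with triple DES. Key handling, CBC mode, padding, and framing are illustrative assumptions (MD5 and 3DES are dated primitives, kept here only to mirror the paper).

    import hashlib
    from Crypto.Cipher import DES3
    from Crypto.Random import get_random_bytes
    from Crypto.Util.Padding import pad, unpad

    key = DES3.adjust_key_parity(get_random_bytes(24))   # 3-key triple DES
    iv = get_random_bytes(8)          # fixed here for the demo; real use
                                      # derives a fresh IV per message

    def protect(message: bytes) -> bytes:
        """Append an MD5 digest (change detection), then 3DES-CBC encrypt."""
        digest = hashlib.md5(message).digest()
        cipher = DES3.new(key, DES3.MODE_CBC, iv)
        return cipher.encrypt(pad(message + digest, DES3.block_size))

    def unprotect(blob: bytes) -> bytes:
        """Decrypt and verify: any modification in transit breaks the digest."""
        cipher = DES3.new(key, DES3.MODE_CBC, iv)
        data = unpad(cipher.decrypt(blob), DES3.block_size)
        message, digest = data[:-16], data[-16:]
        if hashlib.md5(message).digest() != digest:
            raise ValueError("message was modified in transit")
        return message

    blob = protect(b"transfer 100.00 to account 42")
    print(unprotect(blob))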
540 Information Gain Ratio Based Clustering for Investigation of Environmental Parameters Effects on Human Mental Performance
Authors: H. Mehdi, Kh. S. Karimov, A. A. Kavokin
Abstract:
Clustering methods developed in data mining theory can be successfully applied to the investigation of different kinds of dependencies between environmental conditions and human activities. It is known that environmental parameters such as temperature, relative humidity, atmospheric pressure, and illumination have significant effects on human mental performance. To investigate the effects of these parameters, the data mining technique of clustering using entropy and the Information Gain Ratio (IGR), K(Y/X) = (H(X) − H(Y/X))/H(Y), where H(Y) = −Σ pi ln(pi), is used. This technique allows adjusting the boundaries of clusters. It is shown that the information gain ratio (IGR) grows monotonically with the degree of connectivity between two variables. This approach has some advantages over, for example, correlation analysis, due to its relatively lower sensitivity to the shape of functional dependencies. A variant of an algorithm implementing the proposed method, with some analysis of the above problem of environmental effects, is also presented. It is shown that the proposed method converges in a finite number of steps.
Keywords: Clustering, correlation analysis, environmental parameters, information gain ratio, mental performance.
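A minimal computation of an entropy-based gain ratio between two discretized variables, normalizing the information gain (H(Y) − H(Y|X)) by H(Y); this is one consistent reading of the K(Y/X) written above, and the paper's exact normalization may differ.

    import numpy as np

    def entropy(labels):
        """H = -sum p_i ln p_i over the empirical distribution."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log(p))

    def gain_ratio(x, y):
        """Normalized information gain between two discretized variables:
        (H(Y) - H(Y|X)) / H(Y). It is 0 for independent variables and grows
        toward 1 as x determines y, the monotone behavior the text relies on."""
        h_y = entropy(y)
        h_y_given_x = 0.0
        for v in np.unique(x):
            mask = x == v
            h_y_given_x += mask.mean() * entropy(y[mask])
        return (h_y - h_y_given_x) / h_y

    # Discretized temperature bins vs. performance bins (hypothetical data)
    temp = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
    perf = np.array([2, 2, 1, 1, 1, 0, 0, 0, 0])
    print("IGR:", gain_ratio(temp, perf))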
539 Experimental Results about the Dynamics of the Generalized Belief Propagation Used on LDPC Codes
Authors: Jean-Christophe Sibel, Sylvain Reynal, David Declercq
Abstract:
In the context of channel coding, Generalized Belief Propagation (GBP) is an iterative algorithm used to recover the transmitted bits sent through a noisy channel. To ensure a reliable transmission, we apply a map to the bits, called a code. This code induces artificial correlations between the bits to send, and it can be modeled by a graph whose nodes are the bits and whose edges are the correlations. This graph, called the Tanner graph, is used for most decoding algorithms, such as Belief Propagation or Gallager-B. GBP is based on a non-unique transformation of the Tanner graph into a so-called region graph. A clear advantage of GBP over the other algorithms is the freedom in the construction of this graph. In this article, we explain a particular construction for specific graph topologies that yields relevant GBP performance. Moreover, we investigate the behavior of GBP considered as a dynamic system in order to understand the way it evolves in terms of time and of the noise power of the channel. To this end we make use of classical measures, and we introduce a new measure, called the hyperspheres method, that makes it possible to determine the size of the attractors.
Keywords: iterative decoder, LDPC, region-graph, chaos.
538 BasWilCalc – Basket Willow (Salix viminalis) Biomass Yield Calculator
Authors: Wiesław Szulczewski, Wojciech Jakubowski, Andrzej Żyromski, Małgorzata Biniak-Pieróg
Abstract:
The aim of the paper is to develop a novel calculator, BasWilCalc, that allows estimation of the actual amount of biomass on basket willow plantations. The proposed method is based on the results of a field experiment conducted during the years 2011-2013 on a basket willow plantation in the south-western part of Poland. As input data, the results of destructive measurements of the diameter, length, and weight of willow stems, and of non-destructive biometric measurements of the diameter in the middle of stems and of their length, performed at weekly intervals during the growing season, were used. The analysis performed enabled the development of an algorithm which, owing to the fact that energy plantations are of known and constant planting structure, allows the actual amount of basket willow biomass on the plantation to be estimated with a given probability and an accuracy specified by the model, based on the number of stems measured and the age of the plantation.
Keywords: Basket willow (Salix viminalis) biomass, biometric measurements, yield, biomass calculator.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1666537 Adaptive Network Intrusion Detection Learning: Attribute Selection and Classification
Authors: Dewan Md. Farid, Jerome Darmont, Nouria Harbi, Nguyen Huu Hoa, Mohammad Zahidur Rahman
Abstract:
In this paper, a new learning approach for network intrusion detection using a naïve Bayesian classifier and the ID3 algorithm is presented, which identifies effective attributes from the training dataset, calculates the conditional probabilities for the best attribute values, and then correctly classifies all the examples of the training and testing datasets. Most of the current intrusion detection datasets are dynamic and complex and contain a large number of attributes. Some of the attributes may be redundant or contribute little to detection. It has been successfully tested that significant attribute selection is important in designing real-world intrusion detection systems (IDS). The purpose of this study is to identify effective attributes from the training dataset to build a classifier for network intrusion detection using data mining algorithms. The experimental results on the KDD99 benchmark intrusion detection dataset demonstrate that this new approach achieves high classification rates and reduces false positives using limited computational resources.
Keywords: Attribute selection, conditional probabilities, information gain, network intrusion detection.
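A hedged sketch of the two named ingredients: rank attributes by ID3-style information gain, then train naive Bayes on the top-ranked ones. The data below are toy categorical records, not KDD99, and the cutoff is an assumption.

    import numpy as np
    from sklearn.naive_bayes import CategoricalNB

    def entropy(y):
        _, c = np.unique(y, return_counts=True)
        p = c / c.sum()
        return -np.sum(p * np.log2(p))

    def information_gain(x, y):
        """ID3's selection measure: reduction of class entropy after
        splitting on attribute x."""
        return entropy(y) - sum((x == v).mean() * entropy(y[x == v])
                                for v in np.unique(x))

    # Toy categorical connection records (hypothetical attributes/labels)
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 300)                        # 0 normal, 1 attack
    X = np.column_stack([y ^ (rng.random(300) < 0.1),  # informative attribute
                         rng.integers(0, 3, 300),      # noise attribute
                         y ^ (rng.random(300) < 0.3)]) # weakly informative

    gains = [information_gain(X[:, j], y) for j in range(X.shape[1])]
    keep = np.argsort(gains)[::-1][:2]                 # top-2 attributes
    print("gains:", np.round(gains, 3), "keeping:", keep)

    clf = CategoricalNB().fit(X[:, keep], y)
    print("training accuracy:", clf.score(X[:, keep], y))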
536 Renovation Planning Model for a Shopping Mall
Authors: Hsin-Yun Lee
Abstract:
In this study, the pedestrian simulation software VISWALK is integrated with a program implementing an ant algorithm to construct a renovation engineering schedule planning model. The simulation platform models users walking through the construction site; after calculating the delays users experience in the case of construction, the ant algorithm finds the schedule plan with the minimum delay time, and the loss of business from the deactivated floor area is computed. Finally, weighing the two different positions of the owners and the users, the best schedule plan is selected. To assess and validate its effectiveness, this study applies the model to the case of a floor renovation project in a shopping mall. The results verify that the proposed schedule planning program can effectively reduce the delay time and the users' walking-related loss of business, minimizing the impact of the renovation work on the operation of the facilities in the building.
Keywords: Pedestrian, renovation, schedule, simulation.
535 Spatial-Temporal Awareness Approach for Extensive Re-Identification
Authors: Tyng-Rong Roan, Fuji Foo, Wenwey Hseush
Abstract:
Recent developments in AI and edge computing play a critical role in capturing meaningful events such as the detection of an unattended bag. One of the core problems is re-identification across multiple CCTVs. Immediately following the detection of a meaningful event, the objects related to the event must be tracked and traced. In an extensive environment, the challenge becomes severe as the number of CCTVs increases substantially, imposing difficulties in achieving high accuracy while maintaining real-time performance. The algorithm that re-identifies cross-boundary objects for extensive tracking is referred to as Extensive Re-Identification, which emphasizes the issues related to the complexity behind a great number of CCTVs. The Spatial-Temporal Awareness approach challenges the conventional thinking and concept of operations, which is labor intensive and time consuming. The ability to perform Extensive Re-Identification through a multi-sensory network provides next-level insights, creating value beyond traditional risk management.
Keywords: Long-short-term memory, re-identification, security critical application, spatial-temporal awareness.