Search results for: Expanded Invasive Weed Optimization algorithm (exIWO)
616 Semi-automatic Construction of Ontology-based CBR System for Knowledge Integration
Authors: Junjie Gao, Guishi Deng
Abstract:
In order to integrate knowledge in heterogeneous case-based reasoning (CBR) systems, ontology-based CBR systems have become a hot topic. To address the problems facing ontology-based CBR systems, such as nonstandard architecture, insufficient reuse of knowledge in legacy CBR systems, and difficult ontology construction, we propose a novel approach for semi-automatically constructing an ontology-based CBR system whose architecture is based on a two-layer ontology. Domain knowledge implied in legacy case bases can be mapped automatically from relational database schemas and knowledge items to the relevant OWL local ontology by a mapping algorithm with low time complexity. By concept clustering based on formal concept analysis and by computing concept equivalence and concept inclusion measures, suggestions for enriching or amending the concept hierarchy of the OWL local ontologies are generated automatically, which can help designers achieve semi-automatic construction of the OWL domain ontology. The approach is validated with an application example.
Keywords: OWL ontology, Case-based Reasoning, Formal Concept Analysis, Knowledge Integration.
615 Model Predictive Control with Unscented Kalman Filter for Nonlinear Implicit Systems
Authors: Takashi Shimizu, Tomoaki Hashimoto
Abstract:
Implicit systems are known as a more general class of systems than explicit systems. To establish a control method for such a generalized class of systems, we adopt model predictive control, a kind of optimal feedback control with a performance index whose initial and terminal times move forward. However, model predictive control is inapplicable to systems whose state variables are not all exactly known; in other words, it is inapplicable to systems with limited measurable states. In practice, the state variables of a system are usually measured through its outputs, so only limited parts of them can be used directly, and the output signals are disturbed by process and sensor noise. Hence, it is important to establish a state estimation method for nonlinear implicit systems that takes process and sensor noise into consideration. For this purpose, we apply the model predictive control method and the unscented Kalman filter to the optimization and estimation problems of nonlinear implicit systems, respectively. The objective of this study is to establish a model predictive control scheme with an unscented Kalman filter for nonlinear implicit systems.
Keywords: Model predictive control, unscented Kalman filter, nonlinear systems, implicit systems.
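The unscented Kalman filter at the heart of this estimation scheme relies on the unscented transform, which propagates a set of sigma points through the nonlinear model instead of linearizing it. The following Python/NumPy sketch shows only that transform; the scaling parameters and the example map f are illustrative assumptions, not the paper's implicit-system model.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through a nonlinear map f using sigma points."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)              # matrix square root
    sigmas = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points

    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))       # mean weights
    wc = wm.copy()                                        # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)

    ys = np.array([f(s) for s in sigmas])                # propagate each sigma point
    y_mean = wm @ ys
    diff = ys - y_mean
    y_cov = (wc[:, None] * diff).T @ diff                # weighted outer products
    return y_mean, y_cov

# Illustrative nonlinear state map (an assumption, not the paper's model).
f = lambda x: np.array([x[0] + 0.1 * x[1], x[1] - 0.1 * np.sin(x[0])])
m, P = unscented_transform(np.array([0.5, -0.2]), 0.01 * np.eye(2), f)
```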
614 SFE as a Superior Technique for Extraction of Eugenol-Rich Fraction from Cinnamomum tamala Nees (Bay Leaf) - Process Analysis and Phytochemical Characterization
Authors: Sudip Ghosh, Dipanwita Roy, Dipan Chatterjee, Paramita Bhattacharjee, Satadal Das
Abstract:
The highest yield of eugenol-rich fractions from Cinnamomum tamala (bay leaf) leaves was obtained by supercritical carbon dioxide (SC-CO2) extraction, compared to hydro-distillation, organic solvents, liquid CO2 and subcritical CO2 extractions. Optimization of the SC-CO2 extraction parameters was carried out to obtain an extract with maximum eugenol content. This was achieved using a sample size of 10 g at 55°C and 512 bar after 60 min at a flow rate of 25.0 cm3/s of gaseous CO2. This extract has the best combination of phytochemical properties, such as phenolic content (1.77 mg gallic acid/g dry bay leaf), reducing power (0.80 mg BHT/g dry bay leaf), antioxidant activity (IC50 of 0.20 mg/ml) and anti-inflammatory potency (IC50 of 1.89 mg/ml). The compounds in this extract were identified by GC-MS analysis, and its antimicrobial potency was also evaluated. The MIC values against E. coli, P. aeruginosa and S. aureus were 0.5, 0.25 and 0.5 mg/ml, respectively.
Keywords: Antimicrobial potency, Cinnamomum tamala, eugenol, supercritical carbon dioxide extraction.
613 A New Reliability Based Channel Allocation Model in Mobile Networks
Authors: Anujendra, Parag Kumar Guha Thakurta
Abstract:
Data transmission between mobile hosts and base stations (BSs) in mobile networks is often vulnerable to failure. Efficient link connectivity, in terms of the services of both the base stations and the communication channels of the network, is therefore required in wireless mobile networks to achieve highly reliable data transmission. In addition, it is observed that the number of blocked hosts increases due to an insufficient number of channels during heavy load in the network. Under such a scenario, the channels have to be allocated accordingly to offer reliable communication at any given time. Therefore, a reliability-based channel allocation model with acceptable system performance is proposed as a multi-objective optimization (MOO) problem in this paper. Two conflicting parameters, the Resource Reuse Factor (RRF) and the number of blocked calls, are optimized under a reliability constraint. The solution to this MOO problem is obtained through NSGA-II (Non-dominated Sorting Genetic Algorithm II). The effectiveness of the proposed model is shown with a set of experimental results.
Keywords: Base station, channel, GA, Pareto-optimal, reliability.
612 Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition
Authors: L. Hamsaveni, Navya Prakash, Suresha
Abstract:
Document image analysis recognizes text and graphics in documents acquired as images. An approach without Optical Character Recognition (OCR) for degraded document image analysis is adopted in this paper. The technique involves document imaging methods such as image fusing and Speeded Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images, in order to obtain an original document with complete information. If the captured degraded document image is skewed, it has to be straightened (deskewed) before further processing. The YCbCr image format is used as a tool to convert the grayscale image to the RGB image format. The presented algorithm is tested on various types of degraded documents, such as printed documents, handwritten documents, old script documents and handwritten image sketches in documents. The purpose of this research is to obtain an original document from a given set of degraded documents of the same source.
Keywords: Grayscale image format, image fusing, SURF detection, YCbCr image format.
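As a rough illustration of the keypoint-based imaging steps described above, the sketch below aligns and fuses two captures of a degraded document with OpenCV. SURF is patented and only ships with opencv-contrib, so ORB is used here as a freely available substitute; the YCrCb conversion, file names and fusion weights are illustrative assumptions, not the paper's exact pipeline.

```python
import cv2
import numpy as np

# Two captures of the same degraded document (paths are illustrative).
img_a = cv2.imread("capture_a.png")
img_b = cv2.imread("capture_b.png")

# Work in the YCrCb colour space; the Y (luma) channel carries most text detail.
y_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2YCrCb)[:, :, 0]
y_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2YCrCb)[:, :, 0]

# Detect and describe keypoints (ORB used here in place of the patented SURF).
orb = cv2.ORB_create(nfeatures=2000)
kp_a, des_a = orb.detectAndCompute(y_a, None)
kp_b, des_b = orb.detectAndCompute(y_b, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

# Estimate a homography so the two captures can be aligned and fused.
src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
warped = cv2.warpPerspective(img_a, H, (img_b.shape[1], img_b.shape[0]))
fused = cv2.addWeighted(warped, 0.5, img_b, 0.5, 0)   # simple average fusion
```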
611 LFC Design of a Deregulated Power System with TCPS Using PSO
Authors: H. Shayeghi, H.A. Shayanfar, A. Jalili
Abstract:
In the load frequency control (LFC) problem, the interconnections among areas are the input of disturbances, and it is therefore important to suppress the disturbances through the coordination of governor systems. In addition, tie-line power flow control by a thyristor-controlled phase shifter (TCPS) located between two areas makes it possible to stabilize the system frequency oscillations through the interconnection, which is also expected to provide a new ancillary service for future power systems. Thus, a control strategy based on controlling the phase angle of the TCPS is proposed in this paper to provide an active means of controlling the system frequency. Also, the optimal, robust adjustment of the PID controller parameters under a bilateral contracted scenario, following large step load demands and disturbances with and without the TCPS, is investigated by Particle Swarm Optimization (PSO), which has a strong ability to find promising results. This newly developed control strategy combines the advantages of PSO and the TCPS and has a simple structure that is easy to implement and tune. To demonstrate the effectiveness of the proposed control strategy, a three-area restructured power system is considered as a test system under different operating conditions and system nonlinearities. The analysis reveals that the TCPS is quite capable of suppressing the frequency and tie-line power oscillations effectively, compared to the case without TCPS, for a wide range of plant parameter changes, area load demands and disturbances, even in the presence of system nonlinearities.
Keywords: LFC, TCPS, Deregulated Power System, Power System Control, PSO.
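A minimal PSO sketch for tuning three PID gains against a user-supplied cost function is shown below; the quadratic placeholder cost, the bounds and the swarm settings are illustrative assumptions, since evaluating a real candidate would require simulating the three-area LFC model.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise cost(x) over a box defined by bounds[:, 0] <= x <= bounds[:, 1]."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))        # positions
    v = np.zeros_like(x)                                        # velocities
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                      # global best

    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Illustrative stand-in for an LFC performance index (e.g. ITAE of the frequency
# deviation); a real study would simulate the three-area model here instead.
def pid_cost(gains):
    kp, ki, kd = gains
    return (kp - 2.0) ** 2 + (ki - 0.5) ** 2 + (kd - 0.1) ** 2

best_gains, best_cost = pso(pid_cost, np.array([[0, 10], [0, 5], [0, 1]], float))
```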
610 Comparison of Different Neural Network Approaches for the Prediction of Kidney Dysfunction
Authors: Ali Hussian Ali AlTimemy, Fawzi M. Al Naima
Abstract:
This paper presents the prediction of kidney dysfunction using different neural network (NN) approaches. Self-Organizing Maps (SOM), Probabilistic Neural Networks (PNN) and Multilayer Perceptron Neural Networks (MLPNN) trained with the Back Propagation Algorithm (BPA) are used in this study. Six hundred and sixty-three sets of analytical laboratory tests were collected from one of the private clinical laboratories in Baghdad. For each subject, serum urea and serum creatinine levels were analyzed and tested using clinical laboratory measurements. The collected urea and creatinine levels are then used as inputs to the three NN models, in which the training is done by the different neural approaches. SOM is a class of unsupervised networks, whereas PNN and BPNN are supervised networks. These networks are used as classifiers to predict whether a kidney is normal or dysfunctional. The prediction accuracy, sensitivity and specificity were found for each of the proposed networks. We conclude that PNN gives faster and more accurate prediction of kidney dysfunction and that it is a promising tool for predicting routine kidney dysfunction from clinical laboratory data.
Keywords: Kidney Dysfunction, Prediction, SOM, PNN, BPNN, Urea and Creatinine levels.
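A minimal scikit-learn sketch of the supervised branch of such a comparison is given below: a small multilayer perceptron trained on two features (urea and creatinine) and scored by accuracy, sensitivity and specificity. The synthetic data and network size are illustrative assumptions, not the clinical dataset used in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the clinical data: columns are [urea, creatinine].
rng = np.random.default_rng(0)
normal = rng.normal([30.0, 0.9], [8.0, 0.2], size=(300, 2))
dysfunction = rng.normal([90.0, 3.0], [25.0, 1.0], size=(300, 2))
X = np.vstack([normal, dysfunction])
y = np.array([0] * 300 + [1] * 300)          # 0 = normal, 1 = dysfunction

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                  random_state=0))
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"accuracy={accuracy:.3f} sensitivity={sensitivity:.3f} specificity={specificity:.3f}")
```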
609 A Distance Function for Data with Missing Values and Its Application
Authors: Loai AbdAllah, Ilan Shimshoni
Abstract:
Missing values in data are common in real-world applications. Since the performance of many data mining algorithms depends critically on being given a good metric over the input space, we define in this paper a distance function for unlabeled datasets with missing values. We use the Bhattacharyya distance, which measures the similarity of two probability distributions, to define our new distance function. According to this distance, the distance between two points without missing attribute values is simply the Mahalanobis distance. When, on the other hand, one of the coordinates has a missing value, the distance is computed according to the distribution of the missing coordinate. Our distance is general and can be used as part of any algorithm that computes the distance between data points. Because its performance depends strongly on the chosen distance measure, we opted for the k nearest neighbor (kNN) classifier to evaluate the distance's ability to accurately reflect object similarity. We experimented on standard numerical datasets from the UCI repository from different fields. On these datasets we simulated missing values and compared the performance of the kNN classifier using our distance to three other basic methods. Our experiments show that kNN using our distance function outperforms kNN using the other methods. Moreover, the runtime of our method is only slightly higher than that of the other methods.
Keywords: Missing values, Distance metric, Bhattacharyya distance.
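The sketch below illustrates the general idea of a missing-value-aware distance plugged into kNN. It is a simplified stand-in, not the paper's Bhattacharyya/Mahalanobis construction: a missing coordinate contributes its expected squared difference under that coordinate's empirical distribution.

```python
import numpy as np

def fit_stats(X):
    """Per-coordinate mean and variance, ignoring NaNs."""
    return np.nanmean(X, axis=0), np.nanvar(X, axis=0)

def distance(a, b, mean, var):
    """Squared Euclidean-style distance that tolerates missing (NaN) coordinates.

    If one value is missing, use the expected squared difference to the observed
    value under that coordinate's distribution: (x - mean)^2 + var.
    If both are missing, use 2*var (the expectation for two independent draws)."""
    d = 0.0
    for x, y, m, v in zip(a, b, mean, var):
        if np.isnan(x) and np.isnan(y):
            d += 2 * v
        elif np.isnan(x):
            d += (y - m) ** 2 + v
        elif np.isnan(y):
            d += (x - m) ** 2 + v
        else:
            d += (x - y) ** 2
    return d

def knn_predict(X_train, y_train, x, k, mean, var):
    dists = np.array([distance(x, t, mean, var) for t in X_train])
    nearest = y_train[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()          # majority vote

# Tiny illustrative dataset with simulated missing values.
X = np.array([[1.0, 2.0], [1.2, np.nan], [8.0, 9.0], [np.nan, 8.5]])
y = np.array([0, 0, 1, 1])
mean, var = fit_stats(X)
print(knn_predict(X, y, np.array([1.1, np.nan]), k=3, mean=mean, var=var))
```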
608 Design and Implementation of Reed Solomon Encoder on FPGA
Authors: Amandeep Singh, Mandeep Kaur
Abstract:
Error correcting codes are used for the detection and correction of errors in digital communication systems. Error correcting coding is based on appending redundancy to the information message according to a prescribed algorithm. Reed Solomon codes are a form of channel coding and withstand the effects of noise, interference and fading. Galois field arithmetic is used for encoding and decoding Reed Solomon codes. Galois field multipliers and linear feedback shift registers (LFSRs) are used for encoding the information data block. The design of a Reed Solomon encoder is complex because of the use of the LFSR and Galois field arithmetic. The purpose of this paper is to design and implement a Reed Solomon (255, 239) encoder with an optimized, smaller number of Galois field multipliers. A symmetric generator polynomial is used to reduce the number of GF multipliers. To increase the error correction capability, convolutional interleaving will be used with the RS encoder. The design will be implemented on a Xilinx Spartan-II FPGA.
Keywords: Galois Field, Generator polynomial, LFSR, Reed Solomon.
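A software reference model of the systematic RS(255, 239) encoding that the LFSR with Galois-field multiplier taps computes in hardware is sketched below. The primitive polynomial 0x11D and the root convention are common choices assumed here, not necessarily those of the FPGA design.

```python
PRIM_POLY = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1, a common GF(2^8) choice (assumed)

def gf_mul(a, b):
    """Multiply two GF(2^8) elements, reducing modulo the primitive polynomial."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM_POLY
        b >>= 1
    return p

def poly_mul(p, q):
    """Multiply two polynomials with GF(2^8) coefficients."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def generator_poly(nsym):
    """g(x) = (x - a^0)(x - a^1)...(x - a^(nsym-1)) with a = 2 (one common convention)."""
    g, root = [1], 1
    for _ in range(nsym):
        g = poly_mul(g, [1, root])
        root = gf_mul(root, 2)
    return g

def rs_encode(msg, nsym=16):
    """Systematic RS encoding: append the remainder of msg(x)*x^nsym divided by g(x).

    This mirrors what the LFSR with Galois-field multiplier taps computes in hardware."""
    g = generator_poly(nsym)
    rem = list(msg) + [0] * nsym
    for i in range(len(msg)):
        coef = rem[i]
        if coef:
            for j in range(1, len(g)):
                rem[i + j] ^= gf_mul(g[j], coef)
    return list(msg) + rem[len(msg):]

codeword = rs_encode(list(range(239)), nsym=16)   # RS(255, 239): 239 data + 16 parity
assert len(codeword) == 255
```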
607 Application of Novel Conserving Immersed Boundary Method to Moving Boundary Problem
Authors: S. N. Hosseini, S. M. H. Karimian
Abstract:
A new conserving approach in the context of the Immersed Boundary Method (IBM) is presented to simulate one-dimensional, incompressible flow in a moving boundary problem. The method employs a control volume scheme to simulate the flow field. The concept of a ghost node is used at the boundaries to conserve the mass and momentum equations. The present method implements the conservation laws in all cells, including boundary control volumes. Application of the method is studied in a test case with a moving boundary. A comparison between the results of this new method and a sharp-interface (Image Point Method) IBM algorithm shows a well-distinguished improvement in both the pressure and velocity fields of the present method. Fluctuations in the pressure field are fully resolved in this proposed method. This approach expands the IBM capability to simulate flow fields for a variety of problems by implementing the conservation laws on a fully Cartesian grid, compared to other conserving methods.
Keywords: Immersed Boundary Method, conservation of mass and momentum laws, moving boundary, boundary condition.
606 The Influence of Beta Shape Parameters in Project Planning
Authors: Alexios Kotsakis, Stefanos Katsavounis, Dimitra Alexiou
Abstract:
Networks can be utilized to represent project planning problems, using nodes for activities and arcs to indicate precedence relationships between them. For fixed activity durations, a simple algorithm calculates the amount of time required to complete a project, along with the activities that comprise the critical path. The Program Evaluation and Review Technique (PERT) generalizes the above model by incorporating uncertainty, allowing activity durations to be random variables, but nevertheless produces a relatively crude solution in planning problems. In this paper, based on the findings of the relevant literature, which strongly suggests that a Beta distribution can be employed to model earthmoving activities, we utilize Monte Carlo simulation to estimate the project completion time distribution and measure the influence of skewness, an element inherent in activities of modern technical projects. We also extract the activity criticality index, with the ultimate goal of producing more accurate planning estimates.
Keywords: Beta distribution, PERT, Monte Carlo Simulation, skewness, project completion time distribution.
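A minimal Monte Carlo sketch of this approach: activity durations are drawn from scaled Beta distributions with different shape (skewness) parameters, the project completion time is collected over many replicates, and a criticality index is accumulated by tracing the critical path in each replicate. The activity network and Beta parameters are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Illustrative activity network: name -> (predecessors, (a, b, alpha, beta)),
# where duration = a + (b - a) * Beta(alpha, beta); all numbers are assumptions.
activities = {
    "A": ([],         (2, 6, 2.0, 5.0)),   # right-skewed duration
    "B": (["A"],      (3, 9, 5.0, 2.0)),   # left-skewed duration
    "C": (["A"],      (1, 4, 2.0, 2.0)),   # symmetric duration
    "D": (["B", "C"], (2, 5, 2.0, 4.0)),
}

def simulate(n_runs=20000, seed=0):
    rng = np.random.default_rng(seed)
    completion = np.empty(n_runs)
    criticality = {name: 0 for name in activities}
    for r in range(n_runs):
        finish = {}
        for name, (preds, (a, b, al, be)) in activities.items():  # insertion order is topological
            dur = a + (b - a) * rng.beta(al, be)
            finish[name] = max((finish[p] for p in preds), default=0.0) + dur
        completion[r] = max(finish.values())
        # Trace the critical path backwards: at each step follow the predecessor
        # that determined the start time (the one finishing last).
        node = max(finish, key=finish.get)
        while node is not None:
            criticality[node] += 1
            preds = activities[node][0]
            node = max(preds, key=lambda p: finish[p]) if preds else None
    return completion, {k: v / n_runs for k, v in criticality.items()}

comp, crit = simulate()
print(f"mean completion={comp.mean():.2f}, 90th percentile={np.percentile(comp, 90):.2f}")
print("activity criticality index:", crit)
```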
605 Numerical Investigation of the Evaporation and Mixing of UWS in a Diesel Exhaust Pipe
Authors: Tae Hyun Ahn, Gyo Woo Lee, Man Young Kim
Abstract:
Because of their high thermal efficiency and low CO2 emissions, diesel engines are widely used in many industrial fields, although they produce large amounts of PM and NOx, which negatively affect both human health and the environment. NOx regulations for diesel engines, however, are being strengthened, and it is impossible to meet the emission standards without NOx reduction devices such as SCR (Selective Catalytic Reduction), LNC (Lean NOx Catalyst), and LNT (Lean NOx Trap). Among these NOx reduction devices, the urea-SCR system is known as the most stable and efficient way to solve the problem of NOx emission. However, this device has some issues associated with the ammonia slip phenomenon, which is caused by a shortage of evaporation and thermolysis time and makes it difficult to achieve a uniform distribution of the injected urea in front of the monolith. Therefore, this study focuses on enhancing the mixing between the urea and the exhaust gases in order to improve the efficiency of the SCR catalyst equipped in a catalytic muffler, by changing the inlet gas temperature and the spray conditions to improve the spray uniformity of the urea water solution. It is found that various parameters, such as the inlet gas temperature and the injector and injection angles, significantly affect the evaporation and mixing of the urea water solution with the exhaust gases, and therefore optimization of these parameters is required.
Keywords: Evaporation, Injection, Selective Catalytic Reduction (SCR), Thermolysis, UWS (Urea-Water-Solution).
604 Evaluation of the ANN Based Nonlinear System Models in the MSE and CRLB Senses
Authors: M.V Rajesh, Archana R, A Unnikrishnan, R Gopikakumari, Jeevamma Jacob
Abstract:
The system identification problem looks for a suitably parameterized model representing a given process. The parameters of the model are adjusted to optimize a performance function based on the error between the given process output and the identified process output. The linear system identification field is well established with many classical approaches, whereas most of those methods cannot be applied to nonlinear systems. The problem becomes tougher if the system is completely unknown and only the output time series is available. It has been reported that the capability of artificial neural networks to approximate all linear and nonlinear input-output maps makes them predominantly suitable for the identification of nonlinear systems where only the output time series is available [1], [2], [4], [5]. The work reported here is an attempt to implement a few of the well-known algorithms in the context of modeling nonlinear systems and to make a performance comparison to establish their relative merits and demerits.
Keywords: Multilayer neural networks, Radial Basis Functions, Clustering algorithm, Back Propagation training, Extended Kalman filtering, Mean Square Error, Nonlinear Modeling, Cramer Rao Lower Bound.
603 Ensuring Data Security and Consistency in FTIMA - A Fault Tolerant Infrastructure for Mobile Agents
Authors: Umar Manzoor, Kiran Ijaz, Wajiha Shamim, Arshad Ali Shahid
Abstract:
Transaction management is one of the most crucial requirements for enterprise application development, which often requires concurrent access to distributed data shared amongst multiple applications/nodes. Transactions guarantee the consistency of data records when multiple users or processes perform concurrent operations. The existing Fault Tolerance Infrastructure for Mobile Agents (FTIMA) provides fault-tolerant behavior in distributed transactions and uses a multi-agent system for distributed transaction processing. In the existing FTIMA architecture, data flows through the network and contains personal, private or confidential information. In banking transactions, a minor change in the transaction can cause a great loss to the user. In this paper we have modified the FTIMA architecture to ensure that the user request reaches the destination server securely and without any change. We have used triple DES for encryption/decryption and the MD5 algorithm for checking the validity of the message.
Keywords: Distributed Transaction, Security, Mobile Agents, FTIMA Architecture.
602 Information Gain Ratio Based Clustering for Investigation of Environmental Parameters Effects on Human Mental Performance
Authors: H. Mehdi, Kh. S. Karimov, A. A. Kavokin
Abstract:
Methods of clustering developed in data mining theory can be successfully applied to the investigation of different kinds of dependencies between environmental conditions and human activities. It is known that environmental parameters such as temperature, relative humidity, atmospheric pressure and illumination have significant effects on human mental performance. To investigate the effects of these parameters, the data mining technique of clustering using entropy and the Information Gain Ratio (IGR), K(Y/X) = (H(X) − H(Y/X)) / H(Y), where H(Y) = −Σ pi ln(pi), is used. This technique allows the boundaries of clusters to be adjusted. It is shown that the information gain ratio grows monotonically with the degree of connectivity between two variables. This approach has some advantages over, for example, correlation analysis, due to its relatively lower sensitivity to the shape of functional dependencies. A variant of an algorithm implementing the proposed method, together with some analysis of the above problem of environmental effects, is also presented. It is shown that the proposed method converges in a finite number of steps.
Keywords: Clustering, Correlation analysis, Environmental Parameters, Information Gain Ratio, Mental Performance.
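A minimal sketch of the gain-ratio computation on binned environmental data, using the ratio exactly as written above; the binning and the synthetic temperature/performance data are illustrative assumptions, not the study's measurements.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """H = -sum p_i ln(p_i) over the empirical distribution of `labels`."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def conditional_entropy(y, x):
    """H(Y|X) = sum_x p(x) H(Y | X = x) for a discrete (binned) x."""
    y, x = np.asarray(y), np.asarray(x)
    h = 0.0
    for value in np.unique(x):
        mask = x == value
        h += mask.mean() * entropy(y[mask])
    return h

def information_gain_ratio(y, x):
    """Gain ratio as written in the abstract: (H(X) - H(Y|X)) / H(Y)."""
    return (entropy(x) - conditional_entropy(y, x)) / entropy(y)

# Illustrative example: binned temperature vs. binned mental-performance score.
rng = np.random.default_rng(1)
temperature = rng.integers(0, 4, size=500)                        # 4 temperature bins
performance = (temperature + rng.integers(0, 2, size=500)) // 2   # loosely dependent
print(information_gain_ratio(performance, temperature))
```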
601 Experimental Results about the Dynamics of the Generalized Belief Propagation Used on LDPC Codes
Authors: Jean-Christophe Sibel, Sylvain Reynal, David Declercq
Abstract:
In the context of channel coding, Generalized Belief Propagation (GBP) is an iterative algorithm used to recover the transmitted bits sent through a noisy channel. To ensure a reliable transmission, we apply a mapping to the bits, called a code. This code induces artificial correlations between the bits to send, and it can be modeled by a graph whose nodes are the bits and whose edges are the correlations. This graph, called the Tanner graph, is used by most decoding algorithms, such as Belief Propagation or Gallager-B. The GBP is based on a non-unique transformation of the Tanner graph into a so-called region graph. A clear advantage of the GBP over the other algorithms is the freedom in the construction of this graph. In this article, we explain a particular construction for specific graph topologies that yields relevant performance of the GBP. Moreover, we investigate the behavior of the GBP considered as a dynamic system in order to understand how it evolves in terms of time and of the noise power of the channel. To this end we make use of classical measures and we introduce a new measure, called the hyperspheres method, that makes it possible to determine the size of the attractors.
Keywords: iterative decoder, LDPC, region-graph, chaos.
600 BasWilCalc – Basket Willow (Salix viminalis) Biomass Yield Calculator
Authors: Wiesław Szulczewski, Wojciech Jakubowski, Andrzej Żyromski, Małgorzata Biniak-Pieróg
Abstract:
The aim of the paper was to develop a novel calculator, BasWilCalc, that allows the actual amount of biomass on basket willow plantations to be estimated. The proposed method is based on the results of a field experiment conducted during the years 2011-2013 on a basket willow plantation in the south-western part of Poland. As input data, the results of destructive measurements of the diameter, length and weight of willow stems, together with non-destructive biometric measurements of the diameter in the middle of the stems and of their length, performed at weekly intervals during the growing season, were used. The analysis performed enabled the development of an algorithm which, owing to the fact that energy plantations have a known and constant planting structure, allows the actual amount of basket willow biomass on the plantation to be estimated, with a probability and accuracy specified by the model, based on the number of stems measured and the age of the plantation.
Keywords: Basket willow (Salix viminalis) biomass, biometric measurements, yield, biomass calculator.
599 Adaptive Network Intrusion Detection Learning: Attribute Selection and Classification
Authors: Dewan Md. Farid, Jerome Darmont, Nouria Harbi, Nguyen Huu Hoa, Mohammad Zahidur Rahman
Abstract:
In this paper, a new learning approach for network intrusion detection using a naïve Bayesian classifier and the ID3 algorithm is presented, which identifies effective attributes from the training dataset, calculates the conditional probabilities for the best attribute values, and then correctly classifies all the examples of the training and testing datasets. Most of the current intrusion detection datasets are dynamic and complex and contain a large number of attributes. Some of the attributes may be redundant or contribute little to detection. It has been successfully tested that significant attribute selection is important for designing a real-world intrusion detection system (IDS). The purpose of this study is to identify effective attributes from the training dataset to build a classifier for network intrusion detection using data mining algorithms. The experimental results on the KDD99 benchmark intrusion detection dataset demonstrate that this new approach achieves high classification rates and reduces false positives using limited computational resources.
Keywords: Attribute selection, Conditional probabilities, Information gain, Network intrusion detection.
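The sketch below shows the general pattern of attribute selection followed by Bayesian classification; it is a stand-in, not the paper's ID3-based procedure: scikit-learn's mutual-information scorer plays the role of an information-gain filter, and the synthetic table stands in for KDD99.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for a KDD99-like table: 20 numeric attributes, of which
# only a few actually carry information about the attack/normal label.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 3] + 0.8 * X[:, 7] - 0.5 * X[:, 11] + 0.3 * rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Information-gain-style filter (mutual information) followed by naive Bayes.
model = make_pipeline(
    SelectKBest(score_func=mutual_info_classif, k=5),   # keep the 5 most informative attributes
    GaussianNB(),
)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te), digits=3))
```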
598 Renovation Planning Model for a Shopping Mall
Authors: Hsin-Yun Lee
Abstract:
In this study, the pedestrian simulation software VISWALK is integrated with a program implementing an ant algorithm to construct a schedule planning model for renovation engineering. The simulation platform models the construction site and the users walking through it; after computing the delays that users experience due to the construction work, the ant algorithm finds the schedule plan with the minimum delay time. The loss of business caused by the deactivated floor area is also computed, and the best schedule plan is finally selected by weighing the two different positions of the owners and the users. To assess and validate its effectiveness, the model is applied to a floor renovation case of a shopping mall. The case shows that the schedule plans proposed by the model can effectively reduce both the delay time and the loss of business for users walking through the mall, minimizing the impact of the renovation work on the operation of the building's facilities.
Keywords: Pedestrian, renovation, schedule, simulation.
597 Spatial-Temporal Awareness Approach for Extensive Re-Identification
Authors: Tyng-Rong Roan, Fuji Foo, Wenwey Hseush
Abstract:
Recent developments in AI and edge computing play a critical role in capturing meaningful events, such as the detection of an unattended bag. One of the core problems is re-identification across multiple CCTVs. Immediately after a meaningful event is detected, the objects related to the event must be tracked and traced. In an extensive environment, the challenge becomes severe when the number of CCTVs increases substantially, imposing difficulties in achieving high accuracy while maintaining real-time performance. The algorithm that re-identifies cross-boundary objects for extensive tracking is referred to as Extensive Re-Identification, which emphasizes the issues arising from the complexity behind a great number of CCTVs. The Spatial-Temporal Awareness approach challenges the conventional thinking and concept of operations, which is labor-intensive and time-consuming. The ability to perform Extensive Re-Identification through a multi-sensory network provides next-level insights, creating value beyond traditional risk management.
Keywords: Long-short-term memory, re-identification, security critical application, spatial-temporal awareness.
596 Security Architecture for At-Home Medical Care Using Sensor Network
Authors: S.S.Mohanavalli, Sheila Anand
Abstract:
This paper proposes a novel architecture for at-home medical care which enables senior citizens, patients with chronic ailments and patients requiring post-operative care to be remotely monitored in the comfort of their homes. The architecture is implemented using sensors and wireless networking for transmitting patient data to hospitals and health-care centers for monitoring by medical professionals. Patients are equipped with sensors that measure their physiological parameters, such as blood pressure and pulse rate, and a Wearable Data Acquisition Unit is used to transmit the patient sensor data. Medical professionals can be alerted to any abnormal variations in these values for diagnosis and suitable treatment. Security threats and challenges inherent to wireless communication and sensor networks are discussed, and a security mechanism to ensure data confidentiality and source authentication is proposed. The symmetric key algorithm AES is used for encrypting the data, and the patent-free, two-pass block cipher mode CCFB is used to implement semantic security.
Keywords: Data confidentiality, integrity, remote monitoring, source authentication.
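The CCFB mode named above is not available in mainstream cryptographic libraries, so the sketch below uses AES-GCM from the Python cryptography package as a stand-in that likewise provides confidentiality together with authentication of the sensor payload; the key handling and payload format are illustrative assumptions.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The key would normally be provisioned securely to the Wearable Data Acquisition
# Unit and the hospital server; generating it in place here is purely illustrative.
key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

def send_reading(patient_id: str, reading: str) -> tuple[bytes, bytes]:
    """Encrypt a sensor reading; the patient id is bound as authenticated data."""
    nonce = os.urandom(12)                      # 96-bit nonce, never reused per key
    ciphertext = aesgcm.encrypt(nonce, reading.encode(), patient_id.encode())
    return nonce, ciphertext

def receive_reading(patient_id: str, nonce: bytes, ciphertext: bytes) -> str:
    """Decrypt and verify; raises InvalidTag if the data or patient id was altered."""
    return aesgcm.decrypt(nonce, ciphertext, patient_id.encode()).decode()

nonce, ct = send_reading("patient-042", "BP=120/80;HR=72;SpO2=98")
print(receive_reading("patient-042", nonce, ct))
```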
595 Low-Latency and Low-Overhead Path Planning for In-band Network-Wide Telemetry
Authors: Penghui Zhang, Hua Zhang, Jun-Bo Wang, Cheng Zeng, Zijian Cao
Abstract:
With the development of software-defined networks and programmable data planes, in-band network telemetry (INT) has become an emerging technology in communications because it can obtain accurate and real-time network information. However, due to the expansion of network scale, existing telemetry systems, to the best of the authors' knowledge, have difficulty in meeting the common requirements of low overhead, low latency and full coverage for traffic measurement. This paper proposes a network-wide telemetry system with low-latency, low-overhead path planning (INT-LLPP). The paper builds a mathematical model to analyze the telemetry overhead and latency of INT systems. Then, a greedy-based path planning algorithm is adopted to reduce the overhead and latency of the network telemetry while retaining full network coverage. The simulation results show that network-wide telemetry is achieved and that the telemetry overhead is reduced significantly compared with existing INT systems, while INT-LLPP keeps the system latency low enough to obtain real-time network information.
Keywords: Network telemetry, network monitoring, path planning, low latency.
594 Genetic Folding: Analyzing the Mercer's Kernels Effect in Support Vector Machine using Genetic Folding
Authors: Mohd A. Mezher, Maysam F. Abbod
Abstract:
Genetic Folding (GF), a new class of evolutionary algorithm (EA), is introduced for the first time. It is based on chromosomes composed of floating genes structurally organized in a parent form and separated by dots. The genotype/phenotype system of GF generates a kernel expression, which is the objective function of a superior classifier. In this work, the question of satisfying the mapping rules in evolving populations is addressed by analyzing populations that either obey or violate Mercer's rule. The results presented here show that populations obeying Mercer's rule practically improve model selection for the Support Vector Machine (SVM). The experiment is trained on a multi-classification problem and tested on the nonlinear Ionosphere dataset. The aim of this paper is to answer the question of whether evolving kernels that satisfy Mercer's rule through genetic folding benefits SVMs applied to complicated domains and problems.
Keywords: Genetic Folding, GF, Evolutionary Algorithms, Support Vector Machine, Genetic Algorithm, Genetic Programming, Multi-Classification, Mercer's Rules.
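One practical way to examine the effect of Mercer's condition is sketched below, under stated assumptions: build the Gram matrix of a candidate kernel, check empirically that it is symmetric and positive semidefinite, and train an SVM on the precomputed Gram matrix. The candidate kernel here is a plain RBF placeholder, not a GF-evolved expression.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Illustrative candidate kernel (a GF-evolved expression would go here instead).
def candidate_kernel(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))      # RBF, known to satisfy Mercer's condition

def gram_matrix(X, Y, kernel):
    return np.array([[kernel(x, y) for y in Y] for x in X])

def looks_mercer(K, tol=1e-8):
    """Empirical Mercer check: symmetric Gram matrix with no significantly negative eigenvalues."""
    return np.allclose(K, K.T) and np.linalg.eigvalsh(K).min() > -tol

X, y = make_classification(n_samples=300, n_features=10, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

K_tr = gram_matrix(X_tr, X_tr, candidate_kernel)
print("passes empirical Mercer check:", looks_mercer(K_tr))

# Train an SVM directly on the precomputed Gram matrix of the candidate kernel.
clf = SVC(kernel="precomputed").fit(K_tr, y_tr)
K_te = gram_matrix(X_te, X_tr, candidate_kernel)      # rows: test points, cols: training points
print("test accuracy:", clf.score(K_te, y_te))
```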
593 Accuracy of Displacement Estimation and Selection of Capacitors for a Four Degrees of Freedom Capacitive Force Sensor
Authors: Chisato Murakami, Makoto Takahashi
Abstract:
Force sensors are required for obtaining information on the magnitude and direction of forces on the skin surface. We have developed a four-degrees-of-freedom capacitive force sensor (approximately 20×20×5 mm3) that has a flexible structure and sixteen parallel plate capacitors. An iterative algorithm was developed for estimating the four displacements from the sixteen capacitances using a fourth-order polynomial approximation of the capacitance-displacement characteristics. The estimation results from measured capacitances had large errors caused by deterioration of the characteristics. In this study, the effective capacitors, which carry the major part of the information, were selected on the basis of the capacitance change range and the characteristic shape. The maximum errors at the calibration and non-calibration points were 25% and 6.8%, respectively. Although the maximum error at the calibration points was larger than the desired value, the small averaged error indicated that only a few points had large errors. The error at the non-calibration points, on the other hand, was within the desired value.
Keywords: Force sensors, capacitive sensors, estimation, iterative algorithms.
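A minimal sketch of the per-capacitor calibration step described above: fit a fourth-order polynomial to capacitance-displacement data and recover displacement from a measured capacitance by Newton iteration. The synthetic characteristic and starting point are illustrative assumptions, not the sensor's measured curves.

```python
import numpy as np

# Synthetic capacitance-displacement characteristic for one capacitor (assumed shape).
d_cal = np.linspace(0.0, 1.0, 25)                         # displacement, mm
c_cal = 4.0 / (1.0 + d_cal) + 0.05 * d_cal**2             # capacitance, pF (illustrative)

# Fourth-order polynomial approximation C = f(d) and its derivative.
coeffs = np.polyfit(d_cal, c_cal, deg=4)
dcoeffs = np.polyder(coeffs)

def estimate_displacement(c_meas, d0=0.5, n_iter=20):
    """Invert the fitted quartic by Newton iteration: find d such that f(d) = c_meas."""
    d = d0
    for _ in range(n_iter):
        f = np.polyval(coeffs, d) - c_meas
        df = np.polyval(dcoeffs, d)
        d -= f / df
    return d

c_meas = 4.0 / (1.0 + 0.37) + 0.05 * 0.37**2              # simulate a measurement at d = 0.37 mm
print(f"estimated displacement: {estimate_displacement(c_meas):.4f} mm")
```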
592 High Secure Data Hiding Using Cropping Image and Least Significant Bit Steganography
Authors: Khalid A. Al-Afandy, El-Sayyed El-Rabaie, Osama Salah, Ahmed El-Mhalaway
Abstract:
This paper presents a highly secure data hiding technique using image cropping and Least Significant Bit (LSB) steganography. Crops at predefined secret coordinates are extracted from the cover image. The secret text message is divided into sections, and the number of sections equals the number of image crops. Each section of the secret text message is embedded into an image crop, in a secret sequence, using the LSB technique. The embedding is done using the color channels of the cover image. The stego image is obtained by reassembling the image from the stego crops. The results of the technique are compared to other state-of-the-art techniques. Evaluation is based on visual inspection to detect any degradation of the stego image, the difficulty of extracting the embedded data by any unauthorized viewer, the Peak Signal-to-Noise Ratio (PSNR) of the stego image, and the CPU time of the embedding algorithm. Experimental results show that the proposed technique is more secure than the other, traditional techniques.
Keywords: Steganography, stego, LSB, crop.
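A minimal sketch of the LSB step applied to a single crop, assuming a NumPy image array; the crop coordinates, channel ordering and message splitting are illustrative stand-ins for the paper's secret scheme.

```python
import numpy as np

def embed_lsb(crop, message: bytes):
    """Write the message bits into the least significant bits of the crop, value by value."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = crop.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("crop too small for this message section")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits      # clear the LSB, then set it
    return flat.reshape(crop.shape)

def extract_lsb(crop, n_bytes: int) -> bytes:
    """Read n_bytes back out of the crop's least significant bits."""
    bits = crop.reshape(-1)[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Illustrative cover image and secret crop coordinates (not the paper's scheme).
cover = np.random.default_rng(0).integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
r0, r1, c0, c1 = 32, 96, 40, 104                       # one secret crop
section = b"part 1 of the secret message"

stego = cover.copy()
stego[r0:r1, c0:c1] = embed_lsb(cover[r0:r1, c0:c1], section)
assert extract_lsb(stego[r0:r1, c0:c1], len(section)) == section

# PSNR of the stego image relative to the cover (changes are at most 1 per channel value).
mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
print("PSNR (dB):", 10 * np.log10(255**2 / mse))
```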
591 A Bi-Objective Model for Location-Allocation Problem within Queuing Framework
Authors: Amirhossein Chambari, Seyed Habib Rahmaty, Vahid Hajipour, Aida Karimi
Abstract:
This paper proposes a bi-objective model for the facility location problem under a congestion system. The idea of the model is motivated by applications such as locating servers for bank automated teller machines (ATMs), communication networks, and so on. This model is specifically suited to situations in which fixed service facilities are congested by stochastic demand within a queueing framework. We formulate the model from two perspectives simultaneously: (i) the customers and (ii) the service provider. The objectives of the model are to minimize (i) the total expected travelling and waiting time and (ii) the average facility idle time. This model represents a mixed-integer nonlinear programming problem which belongs to the class of NP-hard problems. To solve the model, two metaheuristic algorithms, the non-dominated sorting genetic algorithm (NSGA-II) and the non-dominated ranking genetic algorithm (NRGA), are proposed. In addition, to evaluate the performance of the two algorithms, some numerical examples are produced and analyzed with several metrics to determine which algorithm works better.
Keywords: Queuing, Location, Bi-objective, NSGA-II, NRGA.
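Both NSGA-II and NRGA rely on ranking candidate solutions into Pareto fronts. The sketch below shows that non-dominated sorting step for the two minimization objectives named above; the candidate objective values are random placeholders, and the rest of the algorithms (crowding distance, ranking-based selection, variation operators) is not shown.

```python
import numpy as np

def non_dominated_sort(F):
    """Rank the rows of F (objective vectors, all minimized) into Pareto fronts.

    Returns a list of fronts, each a list of row indices; front 0 is the Pareto set."""
    n = len(F)
    dominated_by = [set() for _ in range(n)]     # solutions that i dominates
    domination_count = np.zeros(n, dtype=int)    # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].add(j)
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                domination_count[i] += 1
    fronts, current = [], [i for i in range(n) if domination_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

# Placeholder objective values: column 0 ~ expected travel + waiting time,
# column 1 ~ average facility idle time (both to be minimized).
F = np.random.default_rng(0).random((50, 2))
fronts = non_dominated_sort(F)
print("Pareto-optimal solutions (front 0):", fronts[0])
```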
590 Journey on Image Clustering Based on Color Composition
Authors: Achmad Nizar Hidayanto, Elisabeth Martha Koeanan
Abstract:
Image clustering is the process of grouping images based on their similarity. Image clustering usually uses color, texture, edge or shape components, or a mixture of two components, etc. This research aims to explore image clustering using color composition. In order to perform this image clustering, three main components should be considered: the color space, the image representation (feature extraction), and the clustering method itself. We aim to explore which combination of these factors produces the best clustering results by combining various techniques from the three components. The color spaces are RGB, HSV, and L*a*b*. The image representations are the Histogram and the Gaussian Mixture Model (GMM), whereas the clustering methods are K-Means and the Agglomerative Hierarchical Clustering algorithm. The results of the experiment show that the GMM representation is better combined with the RGB and L*a*b* color spaces, whereas the Histogram is better combined with HSV. The experiments also show that K-Means is better than Agglomerative Hierarchical Clustering for image clustering.
Keywords: Image clustering, feature extraction, RGB, HSV, L*a*b*, Gaussian Mixture Model (GMM), histogram, Agglomerative Hierarchical Clustering (AHC), K-Means, Expectation-Maximization (EM).
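One cell of the comparison grid described above (HSV color space, histogram features, K-Means) can be sketched as follows; the file list, bin counts and number of clusters are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def hsv_histogram(path, bins=(8, 8, 8)):
    """Normalized 3-D HSV color histogram flattened into a feature vector."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

# Illustrative file list; any collection of images works here.
paths = ["img_%03d.jpg" % i for i in range(40)]
features = np.array([hsv_histogram(p) for p in paths])

# Cluster the color-composition features with K-Means.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
for path, label in zip(paths, kmeans.labels_):
    print(label, path)
```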
589 DFIG-Based Wind Turbine with Shunt Active Power Filter Controlled by Double Nonlinear Predictive Controller
Authors: Abderrahmane El Kachani, El Mahjoub Chakir, Anass Ait Laachir, Abdelhamid Niaaniaa, Jamal Zerouaoui, Tarik Jarou
Abstract:
This paper presents a wind turbine based on the doubly fed induction generator (DFIG) connected to the utility grid through a shunt active power filter (SAPF). The whole system is controlled by a double nonlinear predictive controller (DNPC). A Taylor series expansion is used to predict the outputs of the system, and the control law is calculated by optimization of the cost function. The first nonlinear predictive controller (NPC) is designed to ensure high-performance tracking of the rotor speed and to regulate the rotor current of the DFIG, while the second one is designed to control the SAPF in order to compensate for the harmonics produced by the three-phase diode bridge supplied by a passive circuit (rd, Ld). As a result, we obtain sinusoidal waveforms of the stator voltage and stator current. The proposed nonlinear predictive controllers (NPCs) are validated via simulation on a 1.5 MW DFIG-based wind turbine connected to an SAPF. The results obtained appear to be satisfactory and promising.
Keywords: Wind power, doubly fed induction generator, shunt active power filter, double nonlinear predictive controller.
588 Model of Transhipment and Routing Applied to the Cargo Sector in Small and Medium Enterprises of Bogotá, Colombia
Authors: Oscar Javier Herrera Ochoa, Ivan Dario Romero Fonseca
Abstract:
This paper presents the design of a model for planning distribution logistics operations. The significance of this work lies in its applicability to the analysis of small and medium enterprises (SMEs) handling dry freight in Bogotá. The implementation consists of two stages: in the first, optimal planning is achieved through a hybrid model developed with mixed integer programming, which treats the transshipment operation as a combined load allocation model and a classic transshipment model; in the second, the specific routing of that operation is obtained through the Clarke and Wright savings heuristic. As a result, an integral model is obtained to carry out the step-by-step planning of the distribution of dry freight for SMEs in Bogotá. In this manner, optimal assignments to transshipment centers are established, and the specific routing is then determined based on the shortest distance traveled.
Keywords: Transshipment model, mixed integer programming, saving algorithm, dry freight transportation.
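The routing stage names the Clarke and Wright savings heuristic; below is a minimal parallel-savings sketch in Python, assuming a symmetric distance matrix with the depot at index 0. The distances, demands and vehicle capacity are illustrative placeholders, not data from the Bogotá case.

```python
def clarke_wright(dist, demand, capacity):
    """Basic parallel Clarke-Wright savings; customers are 1..n, depot is index 0."""
    n = len(demand)
    routes = {i: [i] for i in range(1, n + 1)}       # every customer starts on its own route
    route_of = {i: i for i in range(1, n + 1)}
    load = {i: demand[i - 1] for i in range(1, n + 1)}
    savings = sorted(((dist[0][i] + dist[0][j] - dist[i][j], i, j)
                      for i in range(1, n + 1) for j in range(i + 1, n + 1)),
                     reverse=True)
    for s, i, j in savings:
        ri, rj = route_of[i], route_of[j]
        if ri == rj or s <= 0 or load[ri] + load[rj] > capacity:
            continue
        a, b = routes[ri], routes[rj]
        # Merge only if i and j sit at the ends of their routes, joining i to j.
        if a[-1] == i and b[0] == j:
            merged = a + b
        elif b[-1] == j and a[0] == i:
            merged = b + a
        elif a[0] == i and b[0] == j:
            merged = a[::-1] + b
        elif a[-1] == i and b[-1] == j:
            merged = a + b[::-1]
        else:
            continue
        routes[ri], load[ri] = merged, load[ri] + load[rj]
        del routes[rj], load[rj]
        for c in merged:
            route_of[c] = ri
    return list(routes.values())

# Illustrative instance: depot at index 0, four customers.
dist = [[0, 4, 5, 7, 6],
        [4, 0, 3, 8, 9],
        [5, 3, 0, 4, 8],
        [7, 8, 4, 0, 3],
        [6, 9, 8, 3, 0]]
demand = [3, 4, 2, 5]
print(clarke_wright(dist, demand, capacity=8))
```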
587 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment
Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang
Abstract:
2016 became the year of the Artificial Intelligence explosion. AI technologies are maturing to the point that most well-known tech giants are making large investments to increase their AI capabilities. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to learn features directly from data. Deep learning enables many machine learning applications that expand the field of AI. At present, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks there are many standard processes and algorithms, but the performance of different frameworks may differ. In this paper we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network and analyze the factors that determine the performance of the two distributed frameworks. Through the experimental analysis, we identify the overheads which could be further optimized. The main contribution is that the evaluation results provide further optimization directions for both performance tuning and algorithmic design.
Keywords: Artificial Intelligence, machine learning, deep learning, convolutional neural networks.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1257