Search results for: Gustafson-Kessel algorithm

1230 Intelligent Software Architecture and Automatic Re-Architecting Based on Machine Learning

Authors: Gebremeskel Hagos Gebremedhin, Feng Chong, Heyan Huang

Abstract:

A software system is the combination of an architecture and organized components that accomplish a specific function or set of functions. A good software architecture facilitates application system development, promotes achievement of functional requirements, and supports system reconfiguration. We describe three studies demonstrating the utility of our architecture in the subdomain of mobile office robots and identify the software engineering principles embodied in the architecture. The main aim of this paper is to analyze architecture design and automatic re-architecting using machine learning. The intelligent software architecture and automatic re-architecting process reorganizes the software's organizational structure into a more suitable one, using a user-access dataset to create relationships among the components of the system. A three-step data mining approach of recovery, transformation, and implantation is applied with a clustering algorithm. Automatic re-architecting without changing the source code is therefore possible, addressing the software complexity problem and supporting software reuse.

Keywords: intelligence, software architecture, re-architecting, software reuse, high-level design

Procedia PDF Downloads 120
1229 Using the Timepix Detector at CERN Accelerator Facilities

Authors: Andrii Natochii

Abstract:

Over the last two years, the UA9 collaboration has installed two different types of detectors at the CERN accelerator facilities to investigate the channeling effect of high-energy particle beams in bent silicon crystals: the Cherenkov detector CpFM and the silicon pixel detector Timepix. In the current work, we describe the main performance of the Timepix detector at the SPS and the H8 extracted beamline at CERN. We present detector calibration and tuning results. Our research also covers a cluster analysis algorithm for particle-hit reconstruction. We describe the optimal acquisition setup for the Timepix device and the limits of its functionality for high-energy, high-flux beam monitoring. Measurements of the crystal parameters are very important for future bent-crystal applications and require a track reconstruction apparatus. Thus, it was decided to construct a short-range (1.2 m long) particle telescope based on Timepix sensors and test it at the H8 SPS extraction beamline. The obtained results are shown as well.
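
The cluster analysis step can be illustrated with a minimal sketch: the abstract does not specify the algorithm, so the example below simply groups neighbouring above-threshold pixels of a synthetic Timepix-like frame with SciPy's connected-component labelling and reports a charge-weighted centroid per cluster; the frame contents and threshold are hypothetical.

```python
import numpy as np
from scipy import ndimage

def cluster_hits(frame, threshold=1):
    """Group neighbouring above-threshold pixels of a Timepix-like frame into clusters
    and return the charge-weighted centroid, size, and total counts of each cluster."""
    mask = frame >= threshold
    labels, n_clusters = ndimage.label(mask)          # 4-connected components by default
    idx = range(1, n_clusters + 1)
    centroids = ndimage.center_of_mass(frame, labels, idx)
    sizes = ndimage.sum(mask, labels, idx)
    charges = ndimage.sum(frame, labels, idx)
    return [{"centroid": c, "size": int(s), "charge": float(q)}
            for c, s, q in zip(centroids, sizes, charges)]

# toy 256x256 frame with two synthetic hits
frame = np.zeros((256, 256))
frame[10:12, 10:12] = 5      # small 2x2 cluster
frame[100, 200] = 7          # single-pixel hit
print(cluster_hits(frame))
```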

Keywords: beam monitoring, channeling, particle tracking, Timepix detector

Procedia PDF Downloads 180
1228 Literature Review: Adversarial Machine Learning Defense in Malware Detection

Authors: Leidy M. Aldana, Jorge E. Camargo

Abstract:

Adversarial machine learning has gained importance in recent years alongside cybersecurity, as attacks, especially malware, have affected many entities and people. This paper presents a literature review of defense methods created to prevent adversarial machine learning attacks. It first introduces the context and defines key terms; the results section then describes some of the attacks, focusing on detecting adversarial examples before they reach the machine learning algorithm, and outlines the other categories of defense. A five-step method is proposed in the method section to structure the literature review. In addition, the paper summarizes the contributions to this research field over the last seven years in order to identify research directions in the area. Regarding the findings, the defense category with the fewest open challenges is the detection of adversarial examples, which makes it a viable research route under an adaptive approach to attack and defense.

Keywords: malware, adversarial, machine learning, defense, attack

Procedia PDF Downloads 69
1227 Personalized Email Marketing Strategy: A Reinforcement Learning Approach

Authors: Lei Zhang, Tingting Xu, Jun He, Zhenyu Yan

Abstract:

Email marketing is one of the most important segments of online marketing. It has been proved to be the most effective way to acquire and retain customers. The email content is vital to customers. Different customers may have different familiarity with a product, so a successful marketing strategy must personalize email content based on individual customers’ product affinity. In this study, we build our personalized email marketing strategy with three types of emails: nurture, promotion, and conversion. Each type of email has a different influence on customers. We investigate this difference by analyzing customers’ open rates, click rates and opt-out rates. Feature importance from response models is also analyzed. The goal of the marketing strategy is to improve the click rate on conversion-type emails. To build the personalized strategy, we formulate the problem as a reinforcement learning problem and adopt a Q-learning algorithm with variations. The simulation results show that our model-based strategy outperforms the current marketer’s strategy.
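
As a rough illustration of the formulation, the sketch below implements plain tabular Q-learning on a toy environment in which states are discretized product-affinity levels, actions are the three email types, and the reward is a simulated click on a conversion email; the response probabilities and the variations of the paper's algorithm are not reproduced, and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, actions = 5, ["nurture", "promotion", "conversion"]   # affinity buckets x email types
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.1

def simulate(state, action):
    """Toy environment: conversion emails click better at high affinity,
    nurture emails slowly raise affinity. Placeholder for real response models."""
    click = rng.random() < (0.05 + 0.08 * state if action == 2 else 0.02)
    next_state = min(state + (1 if action == 0 and rng.random() < 0.3 else 0), n_states - 1)
    return (1.0 if click and action == 2 else 0.0), next_state

for episode in range(5000):
    s = rng.integers(n_states)
    for _ in range(10):                       # ten sends per simulated customer
        a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
        r, s_next = simulate(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])   # Q-learning update
        s = s_next

print("greedy email type per affinity bucket:", [actions[i] for i in Q.argmax(axis=1)])
```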

Keywords: email marketing, email content, reinforcement learning, machine learning, Q-learning

Procedia PDF Downloads 195
1226 Harmonic Data Preparation for Clustering and Classification

Authors: Ali Asheibi

Abstract:

The rapid increase in the size of the databases required to store power quality monitoring data has demanded new techniques for analysing and understanding the data. One technique suggested to assist in analysis is data mining. Preparing raw data for data mining exploration takes up most of the effort and time spent in the whole data mining process. Clustering is an important technique in data mining and machine learning in which underlying, meaningful groups of data are discovered. Large amounts of harmonic data have been collected over three years from an actual harmonic monitoring system in a distribution network in Australia. This amount of acquired data makes it difficult to identify operational events that significantly impact the harmonics generated on the system. In this paper, harmonic data preparation processes that enable a better understanding of the data are presented. Underlying classes in the data have then been identified using a clustering technique based on the Minimum Message Length (MML) method. The underlying operational information contained within the clusters can be rapidly visualised by engineers. The C5.0 algorithm was used for classification and interpretation of the generated clusters.
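
A minimal sketch of the cluster-then-interpret workflow is given below. Neither MML-based clustering nor C5.0 is available in scikit-learn, so a Gaussian mixture selected by BIC stands in for the MML step and a CART decision tree stands in for C5.0; the harmonic features are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier, export_text

# synthetic "prepared" harmonic features, e.g. THD and 5th/7th harmonic magnitudes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, size=(200, 3))
               for loc in ([1, 2, 1], [3, 1, 2], [5, 4, 3])])

# clustering step (Gaussian mixture with BIC model selection as a rough proxy for MML)
best = min((GaussianMixture(k, random_state=1).fit(X) for k in range(2, 7)),
           key=lambda m: m.bic(X))
labels = best.predict(X)

# interpretation step: a decision tree (stand-in for C5.0) describes the clusters
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, labels)
print(export_text(tree, feature_names=["THD", "H5", "H7"]))
```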

Keywords: data mining, harmonic data, clustering, classification

Procedia PDF Downloads 249
1225 Implementation of Elliptic Curve Cryptography Encryption Engine on a FPGA

Authors: Mohamad Khairi Ishak

Abstract:

Conventional public key cryptosystems such as RSA (Ron Rivest, Adi Shamir and Leonard Adleman), DSA (Digital Signature Algorithm), and ElGamal are no longer efficient to implement in small, memory-constrained devices. Elliptic Curve Cryptography (ECC), which allows smaller key lengths compared to conventional public key cryptosystems, has thus become a very attractive choice for many applications. This paper describes the implementation of an ECC encryption engine on an FPGA. The system has been implemented for two key sizes, 131 bits and 163 bits, and area and timing analyses are provided for both for comparison. The cryptosystem, implemented on Altera’s EPF10K200SBC600-1, occupies 5945/9984 and 6913/9984 logic cells for the 131-bit and 163-bit implementations, respectively. It operates at up to 43 MHz and performs a point multiplication in 11.3 ms for the 131-bit implementation and 14.9 ms for the 163-bit implementation. In terms of speed, our cryptosystem is about 8 times faster than a software implementation of the same system.
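
The core operation timed above, point multiplication, can be sketched in a few lines with the classic double-and-add method. The paper's engine works in hardware over binary fields (131/163 bits); the toy example below uses a small prime-field short-Weierstrass curve purely to show the algorithmic structure.

```python
# Double-and-add scalar multiplication on a toy curve y^2 = x^3 + ax + b (mod p).
# Illustrative only: the paper's engine operates over binary fields GF(2^131)/GF(2^163) in hardware.
p, a, b = 97, 2, 3
O = None  # point at infinity

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (3, 6)          # a point on the toy curve: 6^2 = 36 = 27 + 6 + 3 (mod 97)
print(scalar_mult(20, G))
```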

Keywords: elliptic curve cryptography, FPGA, key sizes, memory

Procedia PDF Downloads 325
1224 Parameters Optimization of the Laminated Composite Plate for Sound Transmission Problem

Authors: Yu T. Tsai, Jin H. Huang

Abstract:

In this paper, the sound transmission loss (TL) of a laminated composite plate (LCP) with different material properties in each layer is investigated. A numerical method based on elastic plate theory is proposed to obtain the TL of the LCP. A transfer matrix approach is newly presented for computational efficiency in handling the dynamic stiffness matrices (D-matrices) of the numerous layers of the LCP. Besides the numerical simulations for calculating the TL of the LCP, a material-properties inverse method is presented for designing a laminated composite plate analogous to a metallic plate with a specified TL. The results demonstrate that the proposed computational algorithm is highly efficient, requiring only a small number of iterations to achieve the goal. This method can be effectively employed to design and develop tailor-made materials for various applications.
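
To give a flavour of the transfer-matrix chaining idea (not the paper's elastic-plate D-matrix formulation), the sketch below multiplies per-layer 2x2 matrices for a normal-incidence fluid-layer model and evaluates the standard four-pole transmission-loss formula; the layer properties are hypothetical.

```python
import numpy as np

def layer_matrix(rho, c, d, f):
    """2x2 transfer matrix of a homogeneous fluid-like layer at normal incidence
    (density rho, wave speed c, thickness d, frequency f)."""
    k, Z = 2 * np.pi * f / c, rho * c
    return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

def transmission_loss(layers, f, rho0=1.21, c0=343.0):
    """Chain the per-layer matrices and evaluate the classic four-pole TL formula
    for identical fluid (air) on both sides of the partition."""
    T = np.eye(2, dtype=complex)
    for rho, c, d in layers:
        T = T @ layer_matrix(rho, c, d, f)
    Z0 = rho0 * c0
    A, B, C, D = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return 20 * np.log10(abs(A + B / Z0 + C * Z0 + D) / 2)

# hypothetical three-layer laminate: (density kg/m^3, wave speed m/s, thickness m)
laminate = [(1600, 2500, 0.002), (80, 900, 0.010), (1600, 2500, 0.002)]
print([round(transmission_loss(laminate, f), 1) for f in (125, 250, 500, 1000, 2000)])
```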

Keywords: sound transmission loss, laminated composite plate, transfer matrix approach, inverse problem, elastic plate theory, material properties

Procedia PDF Downloads 388
1223 Application of Artificial Neural Network for Prediction of High Tensile Steel Strands in Post-Tensioned Slabs

Authors: Gaurav Sancheti

Abstract:

This study presents an approach based on Artificial Neural Networks (ANNs) for determining the quantity of High Tensile Steel (HTS) strands required in post-tensioned (PT) slabs. Various PT slab configurations were generated by varying the span and depth of the slab, and for each configuration the quantity of required HTS strands was recorded. ANNs with the backpropagation algorithm and varying architectures were developed, and their performance was evaluated in terms of Mean Square Error (MSE). The recorded data on the quantity of HTS strands was used as a feeder database for training the developed ANNs. The networks were validated using various validation techniques. The results show that the proposed ANNs have great potential, with good prediction and generalization capability.
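
A minimal sketch of the workflow, with a made-up span/depth-to-strand relationship standing in for the recorded feeder database, could look like the following scikit-learn backpropagation regressor; the architecture and data are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# hypothetical feeder database: span (m), depth (mm) -> number of HTS strands
rng = np.random.default_rng(0)
span = rng.uniform(6, 12, 500)
depth = rng.uniform(150, 300, 500)
strands = 4 * span - 0.05 * depth + rng.normal(0, 1, 500)   # made-up relationship

X = np.column_stack([span, depth])
X_train, X_test, y_train, y_test = train_test_split(X, strands, random_state=0)

# backpropagation-trained ANN; the hidden-layer architecture would be varied in practice
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0))
model.fit(X_train, y_train)
print("validation MSE:", mean_squared_error(y_test, model.predict(X_test)))
```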

Keywords: artificial neural networks, back propagation, conceptual design, high tensile steel strands, post tensioned slabs, validation techniques

Procedia PDF Downloads 222
1222 Framework for Socio-Technical Issues in Requirements Engineering for Developing Resilient Machine Vision Systems Using Levels of Automation through the Lifecycle

Authors: Ryan Messina, Mehedi Hasan

Abstract:

This research examines the impact of using data to generate performance requirements for automated visual inspection with machine vision. These situations concern design and how projects can smooth the transfer of tacit knowledge into an algorithm. We propose a framework for specifying machine vision systems. The framework utilizes varying levels of automation as contingency planning to reduce data processing complexity. Using data assists in extracting tacit knowledge from those who can perform the manual tasks, helping to design the system; this means that real data from the system is always referenced, which minimizes errors between participating parties. We propose three indicators to gauge whether a project has a high risk of failing to meet requirements related to accuracy and reliability. All systems tested achieved better integration into operations after applying the framework.

Keywords: automation, contingency planning, continuous engineering, control theory, machine vision, system requirements, system thinking

Procedia PDF Downloads 208
1221 Alternator Fault Detection Using Wigner-Ville Distribution

Authors: Amin Ranjbar, Amir Arsalan Jalili Zolfaghari, Amir Abolfazl Suratgar, Mehrdad Khajavi

Abstract:

This paper describes a two-stage learning-based fault detection procedure for alternators. The procedure considers three machine conditions: a shorted brush, a high-impedance relay, and a healthy alternator. The fault detection algorithm uses the Wigner-Ville distribution as a feature extractor together with an appropriate feature classifier. In this work, an ANN (Artificial Neural Network) and an SVM (support vector machine) were compared to determine the more suitable classifier, evaluated by the mean squared error criterion. The modules work together to detect possible faulty operating conditions of the machine. To test the performance of the method, a signal database was prepared by creating different conditions on a laboratory setup. The experimental results indicate that implementing this method yields satisfactory results.
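
A minimal discrete Wigner-Ville sketch is given below for a synthetic frequency sweep standing in for an alternator signal; the resulting time-frequency map would then be reduced to band-energy features for the ANN/SVM comparison. The exact feature set and signals of the paper are not reproduced.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Minimal discrete Wigner-Ville distribution of a real signal (via its analytic signal).
    Returns an (n_freq x n_time) energy map."""
    z = hilbert(np.asarray(x, dtype=float))
    n = len(z)
    wvd = np.zeros((n, n))
    for t in range(n):
        taumax = min(t, n - 1 - t)                   # largest admissible lag at this instant
        tau = np.arange(-taumax, taumax + 1)
        r = z[t + tau] * np.conj(z[t - tau])         # instantaneous autocorrelation
        kernel = np.zeros(n, dtype=complex)
        kernel[tau % n] = r
        wvd[:, t] = np.real(np.fft.fft(kernel))
    return wvd

# synthetic test signal standing in for an alternator output: a frequency sweep
fs, dur = 1000, 0.25
t = np.arange(0, dur, 1 / fs)
x = np.sin(2 * np.pi * (50 + 200 * t) * t)
tfr = wigner_ville(x)

# simple features for a downstream SVM/ANN: energy in a few frequency bands
bands = np.array_split(np.abs(tfr), 4, axis=0)
print("band energies:", [round(float(b.sum()), 1) for b in bands])
```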

Keywords: alternator, artificial neural network, support vector machine, time-frequency analysis, Wigner-Ville distribution

Procedia PDF Downloads 374
1220 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success is predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time from surface drilling data with minimum computational power. The method combines two types of analysis: (1) real-time prediction, and (2) cause analysis. Real-time prediction aggregates the input data, including historical surface drilling data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter noise in the signature and create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency to the pre-determined signature. The matrix is then correlated, in real time, with the pre-determined stuck-pipe signature for the field. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck pipe incidents in real time. Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user to the expected incident based on the set of pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This detection accuracy could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. Predicting the stuck pipe problem requires a method to capture geological, geophysical, and drilling data and to recognize the indicators of this issue at the field and geological-formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
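
The CFS step can be sketched as a greedy search over the merit function that rewards feature-class correlation and penalizes feature-feature redundancy. The drilling channels and the stuck/not-stuck labels below are entirely hypothetical; only the selection logic is illustrated.

```python
import numpy as np
import pandas as pd

def cfs_merit(df, features, target):
    """Correlation-based Feature Selection merit: high feature-class correlation,
    low feature-feature redundancy."""
    k = len(features)
    r_cf = np.mean([abs(df[f].corr(df[target])) for f in features])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(df[f1].corr(df[f2]))
                    for i, f1 in enumerate(features) for f2 in features[i + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def greedy_cfs(df, candidates, target):
    selected = []
    while candidates:
        merit, feat = max((cfs_merit(df, selected + [c], target), c) for c in candidates)
        if selected and merit <= cfs_merit(df, selected, target):
            break                      # no remaining candidate improves the merit
        selected.append(feat)
        candidates = [c for c in candidates if c != feat]
    return selected

# hypothetical surface-drilling channels aggregated per depth step
rng = np.random.default_rng(0)
df = pd.DataFrame({"hookload": rng.normal(size=300), "torque": rng.normal(size=300),
                   "rpm": rng.normal(size=300), "spp": rng.normal(size=300)})
df["stuck"] = (0.8 * df["torque"] - 0.5 * df["hookload"] + rng.normal(0, 0.5, 300) > 0).astype(int)
print(greedy_cfs(df, ["hookload", "torque", "rpm", "spp"], "stuck"))
```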

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 232
1219 Design and Motion Control of a Two-Wheel Inverted Pendulum Robot

Authors: Shiuh-Jer Huang, Su-Shean Chen, Sheam-Chyun Lin

Abstract:

A two-wheel inverted pendulum robot (TWIPR) is designed with two hub DC motors for human riding and motion control evaluation. Accelerometer and gyroscope sensors are chosen to measure the tilt angle and angular velocity of the inverted pendulum robot. The mobile robot's position and velocity are estimated from the DC motors' built-in Hall sensors. The control kernel of this electric mobile robot is designed around an embedded Arduino Nano microprocessor, and a handlebar was designed to serve as the steering mechanism. Intelligent model-free fuzzy sliding mode control (FSMC) was employed as the main control algorithm, with adjustments for different control purposes. Intelligent controllers were designed for balance control and moving speed control of this robot under different operating conditions, and the control performance was evaluated based on experimental results.
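
To illustrate only the sliding-mode part of the balance loop (the fuzzy adaptation and the wheel/speed dynamics are omitted), the sketch below applies a smoothed sliding-mode law to a simplified linearized tilt model; the model and all gains are assumptions.

```python
import numpy as np

# simplified tilt dynamics near upright: theta_ddot = (g/L) * theta + u   (u: normalised torque)
g, L = 9.81, 0.5
lam, K, phi = 4.0, 6.0, 0.05          # sliding-surface slope, switching gain, boundary layer
dt, steps = 0.002, 3000

theta, omega = 0.25, 0.0              # initial tilt (rad) and tilt rate
for _ in range(steps):
    s = omega + lam * theta                                       # sliding surface s = theta_dot + lam*theta
    u = -(g / L) * theta - lam * omega - K * np.tanh(s / phi)     # equivalent control + smoothed switching
    alpha = (g / L) * theta + u                                   # angular acceleration
    omega += alpha * dt
    theta += omega * dt

print(f"tilt after {steps * dt:.1f} s: {theta:.4f} rad")          # ~0 once the sliding surface is reached
```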

Keywords: balance control, speed control, intelligent controller, two wheel inverted pendulum

Procedia PDF Downloads 224
1218 Research on Development and Accuracy Improvement of an Explosion Proof Combustible Gas Leak Detector Using an IR Sensor

Authors: Gyoutae Park, Seungho Han, Byungduk Kim, Youngdo Jo, Yongsop Shim, Yeonjae Lee, Sangguk Ahn, Hiesik Kim, Jungil Park

Abstract:

In this paper, we present not only the development of an explosion-proof, portable combustible gas leak detector but also an algorithm to improve the accuracy of gas concentration measurements. The presented techniques apply a flame-proof enclosure and intrinsically safe explosion protection to an infrared gas leak detector, for the first time in Korea, and improve accuracy using a linearization recursion equation and a Lagrange interpolation polynomial. We also tested the sensor characteristics and calibrated suitable input gases and output voltages. We then advanced the performance of combustible gas detectors by reflecting the demands of gas safety management in the field. To check the performance against two companies' detectors, we carried out measurement tests with eight standard gases prepared by the Korea Gas Safety Corporation. Experimental results demonstrate that our instrument achieves better detection accuracy than the other detectors.
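
The Lagrange-interpolation part of the accuracy improvement can be sketched directly with SciPy; the calibration points below (sensor voltage versus %LEL) are hypothetical.

```python
import numpy as np
from scipy.interpolate import lagrange

# hypothetical calibration points: IR sensor output voltage (V) vs. %LEL of methane
voltage = np.array([0.40, 0.85, 1.30, 1.70, 2.05])
concentration = np.array([0.0, 25.0, 50.0, 75.0, 100.0])

poly = lagrange(voltage, concentration)       # Lagrange interpolation polynomial over the 5 points

measured = 1.10                               # a raw sensor reading to convert
print(f"{measured:.2f} V  ->  {poly(measured):.1f} %LEL")
```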

Keywords: accuracy improvement, IR gas sensor, gas leak, detector

Procedia PDF Downloads 391
1217 A Review on Water Models of Surface Water Environment

Authors: Shahbaz G. Hassan

Abstract:

Water quality models are very important for predicting changes in surface water quality for environmental management. The aim of this paper is to give an overview of water quality models and to provide directions for selecting a model in a specific situation. Water quality models include those based on a mechanistic approach and those that simulate water quality without considering a mechanism. Mechanistic models can be widely applied and have capabilities for long-term simulation, but with high complexity. Therefore, more space is devoted to explaining the principles and application experience of mechanistic models. Mechanistic models rest on certain assumptions about rivers, lakes, and estuaries, which limits their range of application; this paper introduces the principles and applications of water quality models for these three scenarios. Empirical models, on the other hand, are easier to compute and are not limited by geographical conditions, but they cannot be used with confidence to simulate long-term changes. This paper divides the empirical models into two broad categories according to the underlying mathematical algorithm: models based on artificial intelligence and models based on statistical methods.

Keywords: empirical models, mathematical, statistical, water quality

Procedia PDF Downloads 265
1216 Identifying Risk Factors for Readmission Using Decision Tree Analysis

Authors: Sıdıka Kaya, Gülay Sain Güven, Seda Karsavuran, Onur Toka

Abstract:

This study is part of an ongoing research project supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 114K404, and participation in this conference was supported by the Hacettepe University Scientific Research Coordination Unit under Project Number 10243. Evaluation of hospital readmissions is gaining importance in terms of quality and cost and is becoming the target of national policies. In Turkey, the topic of hospital readmission is relatively new on the agenda, and very few studies have been conducted on it. The aim of this study was to determine 30-day readmission rates and risk factors for readmission, and to assess whether readmissions were planned, related to the prior admission, and avoidable. The study was designed as a prospective cohort study. 472 patients hospitalized in the internal medicine departments of a university hospital in Turkey between February 1, 2015 and April 30, 2015 were followed up. Analyses were conducted using IBM SPSS Statistics version 22.0 and SPSS Modeler 16.0. The average age of the patients was 56, and 56% of the patients were female. Among these patients, 95 were readmitted, giving an overall readmission rate of 20% (95/472). However, only 31 readmissions were unplanned, for an unplanned readmission rate of 6.5% (31/472). Of the 31 unplanned readmissions, 24 were related to the prior admission, and only 6 of these related readmissions were avoidable. To determine risk factors for readmission, we constructed a Chi-square Automatic Interaction Detector (CHAID) decision tree. CHAID decision trees are nonparametric procedures that make no assumptions about the underlying data. The algorithm determines how independent variables best combine to predict a binary outcome based on 'if-then' logic, partitioning each independent variable into mutually exclusive subsets based on the homogeneity of the data. The independent variables included in the analysis were: clinic of the department, occupied beds/total number of beds in the clinic at the time of discharge, age, gender, marital status, educational level, distance to residence (km), number of people living with the patient, whether anyone could help with his/her care at home after discharge (yes/no), regular source (physician) of care (yes/no), day of discharge, length of stay, ICU utilization (yes/no), total comorbidity score, means for each of the 3 dimensions of the Readiness for Hospital Discharge Scale (patient's personal status, patient's knowledge, and patient's coping ability), and number of daycare admissions within 30 days of discharge. In the analysis, we included all 95 readmitted patients (46.12%) but, to balance the data, only 111 (53.88%) of the 377 non-readmitted patients. The risk factors for readmission were found to be total comorbidity score, gender, patient's coping ability, and patient's knowledge. The strongest identifying factor for readmission was the comorbidity score: if a patient's comorbidity score was higher than 1, the risk of readmission increased. The results of this study need to be validated on other datasets with more patients. Nevertheless, we believe that this study will guide further studies of readmission and that CHAID is a useful tool for identifying risk factors for readmission.
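
For illustration, the sketch below grows a small decision tree on synthetic data mirroring the strongest variables reported (comorbidity score, gender, coping ability, knowledge). CHAID itself is not in scikit-learn, so a CART tree is used as a stand-in; the data and split behaviour are not the study's.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# hypothetical balanced sample mirroring the study's variables
rng = np.random.default_rng(0)
n = 206
df = pd.DataFrame({
    "comorbidity_score": rng.integers(0, 6, n),
    "gender_female":     rng.integers(0, 2, n),
    "coping_ability":    rng.uniform(0, 10, n),
    "knowledge":         rng.uniform(0, 10, n),
})
risk = 0.15 * df["comorbidity_score"] - 0.03 * df["coping_ability"] + rng.normal(0, 0.2, n)
df["readmitted"] = (risk > risk.median()).astype(int)

# shallow tree so the 'if-then' rules for the risk factors stay readable
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=0)
tree.fit(df.drop(columns="readmitted"), df["readmitted"])
print(export_text(tree, feature_names=list(df.columns[:-1])))
```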

Keywords: decision tree, hospital, internal medicine, readmission

Procedia PDF Downloads 258
1215 A Dynamic Software Product Line Approach to Self-Adaptive Genetic Algorithms

Authors: Abdelghani Alidra, Mohamed Tahar Kimour

Abstract:

Genetic algorithms must adapt themselves at design time to cope with the specific requirements of the search problem, and at runtime to balance exploration and convergence objectives. In a previous article, we showed that modeling and implementing Genetic Algorithms (GAs) using the software product line (SPL) paradigm is very valuable because they constitute a product family sharing a common code base. In the present article, we propose to extend the use of the feature model of the genetic algorithm family to model the potential states of the GA in what is called a Dynamic Software Product Line. The objective of this paper is the systematic generation of a reconfigurable architecture that supports the dynamics of the GA and is easily deduced from the feature model. The resulting GA is able to perform dynamic reconfiguration autonomously to speed up the convergence process while producing better solutions. Another important advantage of our approach is the exploitation of recent advances in the domain of dynamic SPLs to enhance the performance of GAs.

Keywords: self-adaptive genetic algorithms, software engineering, dynamic software product lines, reconfigurable architecture

Procedia PDF Downloads 285
1214 Numerical Model for Investigation of Recombination Mechanisms in Graphene-Bonded Perovskite Solar Cells

Authors: Amir Sharifi Miavaghi

Abstract:

Recombination mechanisms in graphene-bonded perovskite solar cells are investigated using a numerical model in which doped-graphene structures are employed as the anode/cathode bonding semiconductors. The dark and illuminated current density-voltage (J-V) curves are investigated by regression analysis. Loss mechanisms such as the back contact barrier and deep surface defects in the absorber layer are determined by fitting the simulated cell performance to the measurements using the differential evolution global optimization algorithm. The performance of the bonded cell is characterized by J-V curves examined at different temperatures and by the open-circuit voltage under different light intensities as a function of temperature. Based on the proposed numerical model and the extracted loss mechanisms, our approach can be used to further improve the efficiency of the solar cell. Due to the high demand for alternative energy sources, solar cells based on the photovoltaic phenomenon are a good alternative energy supply.

Keywords: numerical model, recombination mechanism, graphene, perovskite solar cell

Procedia PDF Downloads 71
1213 Using of Particle Swarm Optimization for Loss Minimization of Vector-Controlled Induction Motors

Authors: V. Rashtchi, H. Bizhani, F. R. Tatari

Abstract:

This paper presents a new online loss minimization method for an induction motor drive. Among the many loss minimization algorithms (LMAs) for induction motors, particle swarm optimization (PSO) has the advantages of fast response and high accuracy. However, the performance of PSO and other optimization algorithms depends on the accuracy of the modeling of the motor drive and its losses. In the development of the loss model, there is always a trade-off between accuracy and complexity. This paper presents a new online optimization to determine the optimum flux level for efficiency optimization of the vector-controlled induction motor drive. An induction motor (IM) model in d-q coordinates is referenced to the rotor magnetizing current. This transformation results in no leakage inductance on the rotor side, so the decomposition into d-q components in the steady-state motor model can be utilized in deriving the motor loss model. The suggested algorithm is simple to implement.
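
A minimal PSO sketch for the flux-level search is shown below; the loss function is a made-up copper-plus-iron loss curve, not the referenced d-q loss model, and the swarm parameters are assumptions.

```python
import numpy as np

def losses(flux):
    """Hypothetical steady-state loss model: copper losses fall and iron losses
    rise with magnetizing flux, giving a single optimum level."""
    return 30.0 / flux**2 + 60.0 * flux**2

rng = np.random.default_rng(0)
n_particles, iters = 20, 60
lo, hi = 0.3, 1.2                                     # per-unit flux search range
x = rng.uniform(lo, hi, n_particles)
v = np.zeros(n_particles)
pbest, pbest_f = x.copy(), np.array([losses(xi) for xi in x])
gbest = pbest[pbest_f.argmin()]

w, c1, c2 = 0.7, 1.5, 1.5                             # inertia and acceleration constants
for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([losses(xi) for xi in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print(f"optimum flux ~ {gbest:.3f} p.u., losses ~ {losses(gbest):.1f} W")
```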

Keywords: induction machine, loss minimization, magnetizing current, particle swarm optimization

Procedia PDF Downloads 634
1212 Spectral Anomaly Detection and Clustering in Radiological Search

Authors: Thomas L. McCullough, John D. Hague, Marylesa M. Howard, Matthew K. Kiser, Michael A. Mazur, Lance K. McLean, Johanna L. Turk

Abstract:

Radiological search and mapping depends on the successful recognition of anomalies in large data sets which contain varied and dynamic backgrounds. We present a new algorithmic approach for real-time anomaly detection which is resistant to common detector imperfections, avoids the limitations of a source template library and provides immediate, and easily interpretable, user feedback. This algorithm is based on a continuous wavelet transform for variance reduction and evaluates the deviation between a foreground measurement and a local background expectation using methods from linear algebra. We also present a technique for recognizing and visualizing spectrally similar clusters of data. This technique uses Laplacian Eigenmap Manifold Learning to perform dimensional reduction which preserves the geometric "closeness" of the data while maintaining sensitivity to outlying data. We illustrate the utility of both techniques on real-world data sets.
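
Both ideas can be sketched compactly: a single-scale Ricker-wavelet transform for variance reduction with a standardized deviation from the background expectation (the paper's exact linear-algebra metric is not given here), followed by scikit-learn's SpectralEmbedding, which implements Laplacian Eigenmaps, for the clustering view. The spectra are synthetic.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

def ricker(points, width):
    """Ricker (Mexican-hat) wavelet kernel."""
    t = np.arange(points) - (points - 1) / 2.0
    a = 1.0 - (t / width) ** 2
    return (2 / (np.sqrt(3 * width) * np.pi**0.25)) * a * np.exp(-t**2 / (2 * width**2))

def cwt_row(spectrum, width=8):
    """Single-scale continuous wavelet transform used here purely for variance reduction."""
    return np.convolve(spectrum, ricker(10 * width, width), mode="same")

rng = np.random.default_rng(0)
background = rng.poisson(50, size=(40, 512)).astype(float)   # background spectra
foreground = rng.poisson(50, size=512).astype(float)
foreground[200:210] += rng.poisson(40, 10)                   # injected photopeak (anomaly)

bkg_cwt = np.array([cwt_row(s) for s in background])
mu, sigma = bkg_cwt.mean(axis=0), bkg_cwt.std(axis=0) + 1e-9
deviation = (cwt_row(foreground) - mu) / sigma               # standardized deviation from background
print("max deviation (sigma units):", deviation.max().round(1))

# Laplacian Eigenmaps (SpectralEmbedding) to visualise spectrally similar clusters
coords = SpectralEmbedding(n_components=2, random_state=0).fit_transform(
    np.vstack([bkg_cwt, cwt_row(foreground)]))
print("2-D embedding of the anomalous spectrum:", coords[-1].round(3))
```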

Keywords: radiological search, radiological mapping, radioactivity, radiation protection

Procedia PDF Downloads 696
1211 Uncertainty Estimation in Neural Networks through Transfer Learning

Authors: Ashish James, Anusha James

Abstract:

The impressive predictive performance of deep learning techniques on a wide range of tasks has led to their widespread use. Estimating the confidence of these predictions is paramount for improving the safety and reliability of such systems. However, the uncertainty estimates provided by neural networks (NNs) tend to be overconfident and unreasonable. Ensembles of NNs typically produce good predictions, but their uncertainty estimates tend to be inconsistent. Inspired by this, this paper presents a framework that can quantitatively estimate uncertainties by leveraging advances in transfer learning through a slight modification to existing training pipelines. The algorithm is developed with the intention of deployment in real-world problems that already enjoy good predictive performance, by reusing those pretrained models. The idea is to capture the behavior of the NNs trained for the base task by augmenting them with uncertainty estimates from a supplementary network. A series of experiments with known and unknown distributions shows that the proposed approach produces well-calibrated uncertainty estimates with high-quality predictions.
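
One way to picture the described augmentation, under the assumption that the supplementary network is trained on the frozen base model's residuals, is the following scikit-learn sketch with a synthetic heteroscedastic regression task; it is not the paper's training pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# base task with input-dependent noise (heteroscedastic)
X = rng.uniform(-3, 3, size=(2000, 1))
noise_std = 0.1 + 0.3 * np.abs(X[:, 0])
y = np.sin(X[:, 0]) + rng.normal(0, noise_std)

# "pretrained" base network providing the predictions (kept frozen afterwards)
base = make_pipeline(StandardScaler(), MLPRegressor((64, 64), max_iter=3000, random_state=0))
base.fit(X, y)

# supplementary network: trained on the base model's log squared residuals,
# so exp(0.5 * output) approximates the predictive standard deviation
residual_sq = (y - base.predict(X)) ** 2
aux = make_pipeline(StandardScaler(), MLPRegressor((32, 32), max_iter=3000, random_state=1))
aux.fit(X, np.log(residual_sq + 1e-6))

X_test = np.array([[0.0], [2.5]])
pred = base.predict(X_test)
sigma = np.exp(0.5 * aux.predict(X_test))
for xv, m, s in zip(X_test[:, 0], pred, sigma):
    print(f"x={xv:+.1f}: prediction {m:+.3f} +/- {s:.3f}")
```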

Keywords: uncertainty estimation, neural networks, transfer learning, regression

Procedia PDF Downloads 137
1210 The Acceptable Roles of Artificial Intelligence in the Judicial Reasoning Process

Authors: Sonia Anand Knowlton

Abstract:

There are some cases where we as a society feel deeply uncomfortable with the use of Artificial Intelligence (AI) tools in the judicial decision-making process, and justifiably so. A perfect example is COMPAS, an algorithmic model that predicts recidivism rates of offenders to assist in the determination of their bail conditions. COMPAS turned out to be extremely racist: it massively overpredicted recidivism rates of Black offenders and underpredicted recidivism rates of white offenders. At the same time, there are certain uses of AI in the judicial decision-making process that many would feel more comfortable with and even support. Take, for example, a “super-breathalyzer,” an (albeit imaginary) tool that uses AI to deliver highly detailed information about the subject of the breathalyzer test to the legal decision-makers analyzing their drunk-driving case. This article evaluates the point at which a judge’s use of AI tools begins to undermine the public’s trust in the administration of justice. It argues that the answer to this question depends on whether the AI tool is in a role in which it must perform a moral evaluation of a human being.

Keywords: artificial intelligence, judicial reasoning, morality, technology, algorithm

Procedia PDF Downloads 86
1209 Spherical Harmonic Based Monostatic Anisotropic Point Scatterer Model for RADAR Applications

Authors: Eric Huang, Coleman DeLude, Justin Romberg, Saibal Mukhopadhyay, Madhavan Swaminathan

Abstract:

High performance computing (HPC) based emulators can be used to model the scattering from multiple stationary and moving targets for RADAR applications. These emulators rely on the RADAR Cross Section (RCS) of the targets being available in complex scenarios. Representing the RCS using tables generated from electromagnetic (EM) simulations is oftentimes cumbersome, leading to large storage requirements. This paper proposes a spherical harmonic based anisotropic scatterer model to represent the RCS of complex targets. The problem of finding the locations and reflection profiles of all scatterers can be formulated as a linear least squares problem with a special sparsity constraint. This paper solves the problem using a modified Orthogonal Matching Pursuit algorithm. The results show that the spherical harmonic based scatterer model can effectively represent the RCS data of complex targets.
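
The sparse recovery step can be sketched with scikit-learn's OrthogonalMatchingPursuit on a random dictionary; the spherical-harmonic dictionary and the paper's modification of OMP are not reproduced.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_measurements, n_atoms, n_scatterers = 120, 400, 5

# dictionary columns stand in for candidate scatterer responses
# (the paper builds these from spherical harmonics; random atoms are used here for illustration)
A = rng.standard_normal((n_measurements, n_atoms))
x_true = np.zeros(n_atoms)
support = rng.choice(n_atoms, n_scatterers, replace=False)
x_true[support] = rng.standard_normal(n_scatterers)
y = A @ x_true + 0.01 * rng.standard_normal(n_measurements)   # noisy RCS-like measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_scatterers).fit(A, y)
print("true support:      ", sorted(support))
print("recovered support: ", sorted(np.flatnonzero(omp.coef_)))
```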

Keywords: RADAR, RCS, high performance computing, point scatterer model

Procedia PDF Downloads 192
1208 An Improved Mesh Deformation Method Based on Radial Basis Function

Authors: Xuan Zhou, Litian Zhang, Shuixiang Li

Abstract:

Mesh deformation using the radial basis function (RBF) interpolation method has been demonstrated to produce quality meshes with relatively little computational cost using a concise algorithm. However, it still suffers from limited deformation capability, especially for large deformations. In this paper, a pre-displacement improvement is proposed to address the problem that invalid meshes often appear near moving inner boundaries owing to the large relative displacement of the nodes there. In this improvement, nodes near the inner boundaries are first associated with nearby boundary nodes, and a pre-displacement based on the displacements of the associated boundary nodes is added to the nodes near the boundaries, in order to bring their displacement closer to the boundary deformation and improve the deformation capability. Several 2D and 3D numerical simulation cases show that the pre-displacement improvement for the RBF method significantly improves the mesh quality near inner boundaries and the deformation capability, with little increase in computational burden.
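
A minimal sketch of RBF-based mesh deformation, interpolating prescribed boundary-node displacements to interior nodes with SciPy's RBFInterpolator, is given below; the pre-displacement improvement itself is only indicated in a comment, and the geometry is a toy example.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# moving inner boundary: a small circle inside a fixed unit square
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
inner = 0.2 * np.column_stack([np.cos(t), np.sin(t)]) + 0.5

s = np.linspace(0, 1, 10, endpoint=False)
outer = np.concatenate([
    np.column_stack([s, np.zeros_like(s)]),          # bottom edge
    np.column_stack([np.ones_like(s), s]),           # right edge
    np.column_stack([1 - s, np.ones_like(s)]),       # top edge
    np.column_stack([np.zeros_like(s), 1 - s]),      # left edge
])

boundary = np.vstack([inner, outer])
displacement = np.vstack([np.full((len(inner), 2), [0.05, 0.02]),   # inner boundary translates
                          np.zeros((len(outer), 2))])               # outer boundary fixed

# RBF interpolation of the boundary displacement field onto interior nodes;
# the paper's improvement would add a pre-displacement to nodes near the inner boundary first
rbf = RBFInterpolator(boundary, displacement, kernel="thin_plate_spline")
interior = rng.uniform(0.05, 0.95, size=(200, 2))
deformed = interior + rbf(interior)
print("max interior node displacement:", np.abs(rbf(interior)).max().round(4))
```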

Keywords: mesh deformation, mesh quality, background mesh, radial basis function

Procedia PDF Downloads 366
1207 A Reliable Multi-Type Vehicle Classification System

Authors: Ghada S. Moussa

Abstract:

Vehicle classification is an important task in traffic surveillance and intelligent transportation systems. Classification of vehicle images faces several problems, such as high intra-class vehicle variation, occlusion, shadow, and illumination. These problems and others must be considered to develop a reliable vehicle classification system. In this study, a reliable multi-type vehicle classification system based on the Bag-of-Words (BoW) paradigm is developed. Our proposed system used and compared four well-known classifiers, Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), k-Nearest Neighbour (KNN), and Decision Tree, to classify vehicles into four categories: motorcycles, small, medium, and large. Experiments on a large dataset show that our approach is efficient and reliable in classifying vehicles, with an accuracy of 95.7%. The SVM outperforms the other classification algorithms in terms of both accuracy and robustness, alongside a considerable reduction in execution time. The innovation of the developed system is that it can serve as a framework for many vehicle classification systems.
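
The BoW pipeline can be sketched end to end with synthetic local descriptors standing in for SIFT output (feature extraction with OpenCV is omitted): cluster descriptors into a visual vocabulary, encode each image as a word histogram, and train the SVM; all data below are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_images, n_classes, vocab_size = 200, 4, 50

# synthetic stand-in for SIFT descriptors: each image contributes ~80 local 128-D descriptors
# whose distribution depends weakly on the vehicle class
def fake_descriptors(label):
    return rng.normal(loc=label, scale=3.0, size=(80, 128))

labels = rng.integers(n_classes, size=n_images)
descriptor_sets = [fake_descriptors(y) for y in labels]

# 1) build the visual vocabulary by clustering all descriptors
kmeans = KMeans(n_clusters=vocab_size, n_init=4, random_state=0).fit(np.vstack(descriptor_sets))

# 2) encode each image as a normalised histogram of visual words
def bow_histogram(desc):
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(d) for d in descriptor_sets])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0, stratify=labels)

# 3) SVM classifier (the best performer reported in the paper)
svm = SVC(kernel="rbf", C=10).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```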

Keywords: vehicle classification, bag-of-words technique, SVM classifier, LDA classifier, KNN classifier, decision tree classifier, SIFT algorithm

Procedia PDF Downloads 359
1206 Optimal Load Control Strategy in the Presence of Stochastically Dependent Renewable Energy Sources

Authors: Mahmoud M. Othman, Almoataz Y. Abdelaziz, Yasser G. Hegazy

Abstract:

This paper presents a load control strategy based on a modification of the Big Bang Big Crunch optimization method. The proposed strategy aims to determine the optimal load to be controlled and the corresponding time of control in order to minimize the energy purchased from the substation. The presented strategy helps the distribution network operator to rely on renewable energy sources in supplying the system demand. The renewable energy sources used in the presented study are modeled using the diagonal band copula method and the sequential Monte Carlo method in order to accurately consider the multivariate stochastic dependence between wind power, photovoltaic power, and the system demand. The proposed algorithms are implemented in the MATLAB environment and tested on the IEEE 37-node feeder. Several case studies are carried out, and the subsequent discussions show the effectiveness of the proposed algorithm.
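
The core Big Bang Big Crunch loop can be sketched as below, minimizing a hypothetical "energy purchased from the substation" cost over the controlled load share and control hour; the copula-based renewable modelling and the IEEE 37-node feeder are not reproduced.

```python
import numpy as np

def purchased_energy(x):
    """Hypothetical daily cost of energy bought from the substation as a function of
    the controlled load share x[0] (0-1) and the control hour x[1] (0-23)."""
    return 100 - 60 * x[0] * np.exp(-((x[1] - 18) / 3.0) ** 2) + 25 * x[0] ** 2

def bbbc(obj, lo, hi, pop=40, iters=60, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    center = rng.uniform(lo, hi)
    best_x, best_f = None, np.inf
    for k in range(1, iters + 1):
        # Big Bang: scatter candidates around the centre of mass, radius shrinks as 1/k
        X = np.clip(center + (hi - lo) / k * rng.standard_normal((pop, len(lo))), lo, hi)
        f = np.array([obj(x) for x in X])
        if f.min() < best_f:
            best_f, best_x = f.min(), X[f.argmin()].copy()
        # Big Crunch: cost-weighted centre of mass (lower cost -> larger weight)
        w = 1.0 / (f - f.min() + 1e-9)
        center = (w[:, None] * X).sum(axis=0) / w.sum()
    return best_x, best_f

x_opt, f_opt = bbbc(purchased_energy, lo=[0, 0], hi=[1, 23])
print(f"control {x_opt[0]:.2f} of the load at hour {x_opt[1]:.1f}, cost {f_opt:.1f}")
```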

Keywords: big bang big crunch, distributed generation, load control, optimization, planning

Procedia PDF Downloads 347
1205 PEA Design of the Direct Control for Training Motor Drives

Authors: Abdulatif Abdulsalam Mohamed Shaban

Abstract:

This paper presents the state of the art of Procedure Entry Array (PEA) design with a focus on control system applications. It begins with an overview of PEA technology development, followed by an arrangement of design technologies and the use of programmable description languages and system-level design tools. These allow a practical approach based on a unique model for complete engineering electronics systems. Three main design rules are implemented in the system: algorithm-based fine-tuning, modularity, and the control act together with the architectural constraints. An overview of the contributions and limits of PEAs is also given, followed by a short survey of PEA-based intelligent controllers for recent engineering systems. Finally, two complete and timely case studies are presented to illustrate the benefits of a PEA implementation when using the proposed system modelling and design approach. These consist of the direct control for training motor drives and the control of a diesel-driven stand-alone generator with the help of logical design.

Keywords: direct control (DC), engineering electronics systems, training motor drives, procedure entry array

Procedia PDF Downloads 515
1204 Sensor Fault-Tolerant Model Predictive Control for Linear Parameter Varying Systems

Authors: Yushuai Wang, Feng Xu, Junbo Tan, Xueqian Wang, Bin Liang

Abstract:

In this paper, a sensor fault-tolerant control (FTC) scheme using robust model predictive control (RMPC) and set-theoretic fault detection and isolation (FDI) is extended to linear parameter varying (LPV) systems. First, a group of set-valued observers is designed for passive fault detection (FD), and the observer gains are obtained by minimizing the size of the invariant set of the state estimation-error dynamics. Second, an input set for fault isolation (FI) is designed offline through set theory for actively isolating faults after FD. Third, an RMPC controller based on state estimation is designed for LPV systems to control the system in the presence of disturbance and measurement noise and to tolerate faults. In addition, an FTC algorithm is proposed to keep the plant operating in the corresponding mode when a fault occurs. Finally, a numerical example is used to show the effectiveness of the proposed results.

Keywords: fault detection, linear parameter varying, model predictive control, set theory

Procedia PDF Downloads 254
1203 Performance of Neural Networks vs. Radial Basis Functions When Forming a Metamodel for Residential Buildings

Authors: Philip Symonds, Jon Taylor, Zaid Chalabi, Michael Davies

Abstract:

With the world climate projected to warm and major cities in developing countries becoming increasingly populated and polluted, governments are tasked with the problem of overheating and air quality in residential buildings. This paper presents the development of an adaptable model of these risks. Simulations are performed using the EnergyPlus building physics software. An accurate metamodel is formed by randomly sampling building input parameters and training on the outputs of EnergyPlus simulations. Metamodels are used to vastly reduce the amount of computation time required when performing optimisation and sensitivity analyses. Neural Networks (NNs) are compared to a Radial Basis Function (RBF) algorithm when forming a metamodel. These techniques were implemented using the PyBrain and scikit-learn python libraries, respectively. NNs are shown to perform around 15% better than RBFs when estimating overheating and air pollution metrics modelled by EnergyPlus.
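
A scaled-down version of the comparison might look like the sketch below, with a made-up function standing in for the EnergyPlus outputs and SciPy's RBF interpolant in place of the original RBF implementation; the 15% figure from the study is not expected to reproduce on this toy problem.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def overheating_metric(X):
    """Hypothetical stand-in for an EnergyPlus output (e.g. hours above 26 C) as a function
    of window-to-wall ratio, infiltration rate and insulation thickness (all normalised)."""
    return 40 * X[:, 0] - 15 * X[:, 1] + 8 * np.sin(3 * X[:, 2]) + rng.normal(0, 0.5, len(X))

X_train, X_test = rng.uniform(0, 1, (400, 3)), rng.uniform(0, 1, (200, 3))
y_train, y_test = overheating_metric(X_train), overheating_metric(X_test)

# neural network metamodel
nn = make_pipeline(StandardScaler(), MLPRegressor((32, 32), max_iter=4000, random_state=0))
nn.fit(X_train, y_train)

# radial basis function metamodel
rbf = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline", smoothing=1.0)

for name, pred in [("NN", nn.predict(X_test)), ("RBF", rbf(X_test))]:
    print(name, "test MSE:", round(mean_squared_error(y_test, pred), 3))
```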

Keywords: neural networks, radial basis functions, metamodelling, python machine learning libraries

Procedia PDF Downloads 448
1202 Sparse-View CT Reconstruction Based on Nonconvex L1 − L2 Regularizations

Authors: Ali Pour Yazdanpanah, Farideh Foroozandeh Shahraki, Emma Regentova

Abstract:

The reconstruction from sparse-view projections is one of the important problems in computed tomography (CT), limited by the availability or feasibility of obtaining a large number of projections. Traditionally, convex regularizers have been exploited to improve the reconstruction quality in sparse-view CT, and the convex constraint in those problems leads to an easy optimization process. However, convex regularizers often result in a biased approximation and inaccurate reconstruction in CT problems. Here, we present a nonconvex, Lipschitz continuous, and non-smooth regularization model. The CT reconstruction is formulated as a nonconvex constrained L1 − L2 minimization problem and solved through a difference-of-convex algorithm and the alternating direction method of multipliers, which generates better results than L0 or L1 regularizers in CT reconstruction. We compare our method with previously reported high-performance methods that use convex regularizers such as TV, wavelet, curvelet, and curvelet+TV (CTV) on test phantom images. The results show that there are benefits to using the nonconvex regularizer in sparse-view CT reconstruction.
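
A compact sketch of the difference-of-convex treatment of the L1 − L2 penalty is shown below on a generic sparse recovery problem (a random matrix stands in for the CT system matrix, and ISTA replaces the ADMM inner solver); the regularization weight and problem sizes are arbitrary.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dca_l1_minus_l2(A, b, lam=0.05, outer=20, inner=200):
    """DCA for  min 0.5*||Ax-b||^2 + lam*(||x||_1 - ||x||_2).
    Each outer step linearises -lam*||x||_2 and solves the resulting convex
    L1 subproblem with ISTA (proximal gradient)."""
    m, n = A.shape
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L with L = sigma_max(A)^2
    for _ in range(outer):
        g = lam * x / np.linalg.norm(x) if np.linalg.norm(x) > 0 else np.zeros(n)
        for _ in range(inner):                      # ISTA on the convex subproblem
            grad = A.T @ (A @ x - b) - g
            x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
m, n, k = 60, 200, 8                                # under-determined system, k-sparse ground truth
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat = dca_l1_minus_l2(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```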

Keywords: computed tomography, non-convex, sparse-view reconstruction, L1-L2 minimization, difference of convex functions

Procedia PDF Downloads 316
1201 Biometric Identification with Latitude and Longitude Fingerprint Verification for Attendance

Authors: Muhammad Fezan Afzal, Imran Khan, Salma Imtiaz

Abstract:

The need for human verification and identification for authentication has existed for centuries. Since biometric authentication is used in major institutions such as financial, government, and crime departments, continued effort is needed to make these systems more efficient and to prevent security breaches. Currently, multiple devices are used to authenticate the biometrics of each individual, and a large number of devices are required to cover a large number of users. As the number of devices increases, the cost automatically increases. Furthermore, biometric authentication is time-consuming because devices are insufficient in number and are not available at every door. In this paper, we propose a framework and algorithm in which each individual's mobile phone can also perform biometric authentication for attendance and security. Every mobile phone has a biometric authentication system that is used in different mobile applications for security purposes, so each individual can use the mobile biometric system without moving from one place to another. Moreover, by using mobile biometrics, the cost of the biometric systems that are currently deployed in organizations for the attendance of students and employees and for other security purposes can be eliminated.

Keywords: fingerprint, fingerprint authentication, mobile verification, mobile biometric verification, mobile fingerprint sensor

Procedia PDF Downloads 70