Search results for: fast-regional convolutional neural networks

2786 Design of an Automated Deep Learning Recurrent Neural Networks System Integrated with IoT for Anomaly Detection in Residential Electric Vehicle Charging in Smart Cities

Authors: Wanchalerm Patanacharoenwong, Panaya Sudta, Prachya Bumrungkun

Abstract:

The paper focuses on the development of a system that combines Internet of Things (IoT) technologies and deep learning algorithms for anomaly detection in residential Electric Vehicle (EV) charging in smart cities. With the increasing number of EVs, ensuring efficient and reliable charging systems has become crucial. The aim of this research is to develop an integrated IoT and deep learning system for detecting anomalies in residential EV charging and enhancing EV load profiling and event detection in smart cities. IoT devices equipped with infrared cameras collect thermal images and household EV charging profiles from the database of the Thailand utility and transmit this data to a cloud database for comprehensive analysis. The collected data is then analyzed using advanced deep learning techniques, namely Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) algorithms, together with a feature-based Gaussian mixture model for EV load profiling and event detection. This comprehensive method helps identify unique power consumption patterns among EV owners. The research findings demonstrate the effectiveness of the developed system in detecting anomalies and critical profiles in EV charging behavior. The system provides timely alarms to users regarding potential issues, categorizes the severity of detected problems based on a health index for each charging device, and outperforms existing models in event detection accuracy. This research contributes to the field by showcasing the potential of integrating IoT and deep learning techniques in managing residential EV charging in smart cities. In summary, the research concludes that integrating IoT and deep learning techniques can effectively detect anomalies in residential EV charging and enhance EV load profiling and event detection accuracy. The developed system ensures operational safety and efficiency, contributing to sustainable energy management in smart cities.
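As a rough illustration of the anomaly-detection idea described above, the sketch below trains a small LSTM to forecast a one-dimensional charging-power series and flags windows whose forecast error exceeds a three-sigma threshold. The synthetic data, window length, network size, and threshold rule are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: LSTM forecaster on a synthetic charging-power series;
# samples with large forecast errors are flagged as anomalies.
import math
import torch
import torch.nn as nn

class ChargingLSTM(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                       # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])            # forecast the next sample

def make_windows(series, window=24):
    xs = torch.stack([series[i:i + window] for i in range(len(series) - window)])
    return xs.unsqueeze(-1), series[window:].unsqueeze(-1)

# Synthetic hourly charging-power profile (kW) with an injected abnormal spike.
t = torch.arange(0, 24 * 30, dtype=torch.float32)
series = 3.0 + 2.0 * torch.sin(2 * math.pi * t / 24) + 0.1 * torch.randn_like(t)
series[500:505] += 6.0

x, y = make_windows(series)
model = ChargingLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(200):                        # short full-batch training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = (model(x) - y).abs().squeeze()
threshold = err.mean() + 3 * err.std()          # illustrative 3-sigma anomaly rule
print("anomalous windows:", torch.nonzero(err > threshold).flatten().tolist())
```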

Keywords: cloud computing framework, recurrent neural networks, long short-term memory, IoT, EV charging, smart grids

Procedia PDF Downloads 39
2785 Suitable Models and Methods for the Steady-State Analysis of Multi-Energy Networks

Authors: Juan José Mesas, Luis Sainz

Abstract:

The motivation for this paper lies in the need for energy networks to reduce losses, improve performance, optimize their operation, and benefit from the interconnection capacity with other networks enabled for other energy carriers. These interconnections generate interdependencies between some energy networks and others, which requires suitable models and methods for their analysis. Traditionally, the modeling and study of energy networks have been carried out independently for each energy carrier. Thus, there are well-established models and methods for the steady-state analysis of electrical networks, gas networks, and thermal networks separately. The intention here is to extend and combine them adequately so that the steady-state analysis of networks with multiple energy carriers can be addressed in an integrated way. Firstly, the added value of multi-energy networks, their operation, and the basic principles that characterize them are explained. In addition, two current aspects of great relevance are presented: the storage technologies and the coupling elements used to interconnect one energy network with another. Secondly, the characteristic equations of the different energy networks necessary to carry out the steady-state analysis are detailed. The electrical network, the natural gas network, and the thermal network of heat and cold are considered in this paper. After the presentation of the equations, a particular case of the steady-state analysis of a specific multi-energy network is studied. This network is represented graphically, the interconnections between the different energy carriers are described, their technical data are given, and the equations previously presented theoretically are formulated and developed. Finally, the two iterative numerical resolution methods considered in this paper are presented, as well as the resolution procedure and the results obtained. The pros and cons of the application of both methods are explained. It is verified that the results obtained for the electrical network (voltages in modulus and angle), the natural gas network (pressures), and the thermal network (mass flows and temperatures) are correct, since they comply with the distribution, operation, consumption, and technical characteristics of the multi-energy network under study.
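To illustrate how such an iterative steady-state solution can be organized, the sketch below runs a Newton-Raphson loop over a toy system in which a gas-fired generator couples a two-bus electrical network to a two-node gas network. The equations, parameter values, and the simplified quadratic pressure-drop relation are illustrative assumptions and not the networks, data, or methods of the paper.

```python
# Minimal sketch: Newton-Raphson solution of a toy coupled electricity-gas system.
import numpy as np

def residuals(x):
    theta, p2 = x                        # bus-2 voltage angle (rad), node-2 gas pressure (bar)
    # Electrical side: DC power-flow balance at bus 2 (per unit).
    b12, p_load, p_gen = 10.0, 0.8, 0.5  # line susceptance, electrical load, gas-fired generation
    f_el = p_gen - p_load - b12 * theta
    # Gas side: simplified quadratic pressure-drop balance at node 2; the
    # gas-fired unit couples the two carriers through its fuel consumption.
    c12, p1, gas_load = 0.002, 50.0, 1.5
    gas_for_gen = 0.2 * p_gen
    f_gas = c12 * (p1**2 - p2**2) - gas_load - gas_for_gen
    return np.array([f_el, f_gas])

def newton(x0, tol=1e-8, max_iter=20, h=1e-6):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = residuals(x)
        if np.linalg.norm(f) < tol:
            break
        # Finite-difference Jacobian of the 2x2 toy system.
        J = np.column_stack([(residuals(x + h * e) - f) / h for e in np.eye(len(x))])
        x = x - np.linalg.solve(J, f)
    return x

theta, p2 = newton([0.0, 45.0])
print(f"bus-2 angle = {theta:.4f} rad, node-2 pressure = {p2:.2f} bar")
```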

Keywords: coupling elements, energy carriers, multi-energy networks, steady-state analysis

Procedia PDF Downloads 59
2784 An Energy Efficient Clustering Approach for Underwater Wireless Sensor Networks

Authors: Mohammad Reza Taherkhani

Abstract:

Wireless sensor networks that are used to monitor a particular environment are formed from a large number of sensor nodes. The role of these sensors is to sense particular parameters from the environment and to communicate them over the network. In these networks, the most important challenge is the management of energy usage. Clustering is one of the methods broadly used to address this challenge. In this paper, a distributed clustering protocol based on learning automata is proposed for underwater wireless sensor networks. The proposed algorithm, called LA-Clustering, forms clusters of nodes at the same energy level, based on the energy level of nodes and the connection radius, regardless of the size and structure of the sensor network. The proposed approach is simulated and compared with some other protocols considering metrics such as network lifetime, number of alive nodes, and number of transmitted data. The simulation results demonstrate the efficiency of the proposed approach.
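For readers unfamiliar with learning automata, the sketch below shows the kind of linear reward-inaction (L_RI) update such a clustering protocol can build on: each node keeps a probability of declaring itself a cluster head and reinforces that choice when the round is judged successful. The reward rule, network size, and node energies are invented for illustration and are not the LA-Clustering algorithm itself.

```python
# Minimal sketch: linear reward-inaction (L_RI) learning automata choosing cluster heads.
import random

class LearningAutomaton:
    def __init__(self, n_actions=2, reward_step=0.1):
        self.p = [1.0 / n_actions] * n_actions   # action probabilities
        self.a = reward_step

    def choose(self):
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return len(self.p) - 1

    def reward(self, action):
        # L_RI update: move probability mass toward the rewarded action;
        # on penalty the probabilities are simply left unchanged.
        for i in range(len(self.p)):
            if i == action:
                self.p[i] += self.a * (1.0 - self.p[i])
            else:
                self.p[i] *= (1.0 - self.a)

# Toy network: nodes with random residual energy; action 1 = "become cluster head".
random.seed(0)
energies = [random.uniform(0.2, 1.0) for _ in range(10)]
automata = [LearningAutomaton() for _ in energies]
avg_energy = sum(energies) / len(energies)

for round_ in range(200):
    actions = [la.choose() for la in automata]
    for la, act, e in zip(automata, actions, energies):
        good_choice = (act == 1 and e >= avg_energy) or (act == 0 and e < avg_energy)
        if good_choice:
            la.reward(act)

# P(head) converges toward 1 for above-average-energy nodes, toward 0 otherwise.
print([round(la.p[1], 2) for la in automata])
```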

Keywords: underwater sensor networks, clustering, learning automata, energy consumption

Procedia PDF Downloads 341
2783 Flow Conservation Framework for Monitoring Software Defined Networks

Authors: Jesús Antonio Puente Fernández, Luis Javier Garcia Villalba

Abstract:

New trends in video streaming, such as series and films, place high demands on network resources. This creates a serious problem in traditional IP networks due to the rigidity of their architecture. Software Defined Networking (SDN) is a new network architecture concept that is intended to be more flexible and that simplifies network management with respect to existing architectures. These aspects are possible due to the separation of the control plane (controller) and the data plane (switches). Taking advantage of this separated control, it is easy to deploy a monitoring tool that is independent of device vendors, whereas existing tools depend on the installation of specialized and expensive hardware. In this paper, we propose a framework that optimizes traffic monitoring in SDN networks by decreasing the number of monitoring queries, improving network traffic and reducing the overload. The performed experiments (with and without the optimization), using a video streaming delivery between two hosts, demonstrate the feasibility of our monitoring proposal.

Keywords: optimization, monitoring, software defined networking, statistics, query

Procedia PDF Downloads 307
2782 Using Dynamic Bayesian Networks to Characterize and Predict Job Placement

Authors: Xupin Zhang, Maria Caterina Bramati, Ernest Fokoue

Abstract:

Understanding the career placement of graduates from the university is crucial for both the quality of education and the ultimate satisfaction of students. In this research, we adapt the capabilities of dynamic Bayesian networks to characterize and predict students’ job placement using data from various universities. We also provide elements of the estimation of the indicator (score) of the strength of the network. The research focuses on overall findings as well as on specific student groups, including international and STEM students, and offers insight into their career paths and what changes need to be made. The derived Bayesian network has the potential to be used as a tool for simulating the career path of students and ultimately helps universities in both academic advising and career counseling.

Keywords: dynamic bayesian networks, indicator estimation, job placement, social networks

Procedia PDF Downloads 350
2781 Neural Network Approaches for Sea Surface Height Predictability Using Sea Surface Temperature

Authors: Luther Ollier, Sylvie Thiria, Anastase Charantonis, Carlos E. Mejia, Michel Crépon

Abstract:

Sea Surface Height Anomaly (SLA) is a signature of the sub-mesoscale dynamics of the upper ocean. Sea Surface Temperature (SST) is driven by these dynamics and can be used to improve the spatial interpolation of SLA fields. In this study, we focused on the temporal evolution of SLA fields. We explored the capacity of deep learning (DL) methods to predict short-term SLA fields using SST fields. We used simulated daily SLA and SST data from the Mercator Global Analysis and Forecasting System, with a resolution of (1/12)° in the North Atlantic Ocean (26.5-44.42°N, -64.25–41.83°E), covering the period from 1993 to 2019. Using a slightly modified image-to-image convolutional DL architecture, we demonstrated that SST is a relevant variable for controlling the SLA prediction. With a learning process inspired by the teacher-forcing method, we managed to improve the SLA forecast at five days by using the SST fields as additional information. We obtained prediction errors of 12 cm (20 cm) for the SLA evolution at scales smaller than mesoscales and at time scales of 5 days (20 days), respectively. Moreover, the information provided by the SST allows us to limit the SLA error to 16 cm at 20 days when learning the trajectory.
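A minimal sketch of an image-to-image convolutional network of the kind described, mapping stacked (SLA, SST) fields to a future SLA field, is given below. The channel counts, 64x64 patch size, random stand-in data, and plain encoder-decoder layout are assumptions for illustration, not the architecture or data actually used in the study.

```python
# Minimal sketch: convolutional encoder-decoder mapping (SLA_t, SST_t) to SLA at t + 5 days.
import torch
import torch.nn as nn

class SLAForecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),          # 64x64 -> 32x32
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(), # back to 64x64
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):              # x: (batch, 2, H, W) = stacked (SLA, SST)
        return self.decoder(self.encoder(x))

# Random stand-in data: 8 samples of 64x64 SLA/SST patches.
x = torch.randn(8, 2, 64, 64)
target = torch.randn(8, 1, 64, 64)     # SLA field a few days ahead

model = SLAForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(5):                  # a few steps just to show the training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    opt.step()
print("MSE on the synthetic batch:", float(loss))
```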

Keywords: deep-learning, altimetry, sea surface temperature, forecast

Procedia PDF Downloads 71
2780 A Tutorial on Network Security: Attacks and Controls

Authors: Belbahi Ahlam

Abstract:

With the phenomenal growth in the Internet, network security has become an integral part of computer and information security. In order to come up with measures that make networks more secure, it is important to learn about the vulnerabilities that could exist in a computer network and then have an understanding of the typical attacks that have been carried out in such networks. The first half of this paper will expose the readers to the classical network attacks that have exploited the typical vulnerabilities of computer networks in the past and solutions that have been adopted since then to prevent or reduce the chances of some of these attacks. The second half of the paper will expose the readers to the different network security controls, including the network architecture, protocols, standards, and software/hardware tools that have been adopted in modern day computer networks.

Keywords: network security, attacks and controls, computer and information, solutions

Procedia PDF Downloads 430
2779 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on a strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics would be useful in studying various optical phenomena.
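The sketch below illustrates the weighted composite PINN loss the abstract refers to, with separate, adjustable weights on the initial-condition, boundary-condition, and residual terms. For brevity, the residual is that of the linear advection equation u_t + c u_x = 0, a stand-in for the Fokas-Lenells equation, and the weights and network size are illustrative.

```python
# Minimal sketch: PINN with a weighted composite loss (initial + boundary + PDE residual).
import math
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
c = 1.0
w_ic, w_bc, w_pde = 10.0, 10.0, 1.0                      # adjustable loss weights

def pde_residual(xt):
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    return u_t + c * u_x                                 # residual of u_t + c*u_x = 0

# Collocation, initial, and boundary points on (x, t) in [0, 1] x [0, 1].
xt_col = torch.rand(1000, 2)
xt_ic = torch.cat([torch.rand(100, 1), torch.zeros(100, 1)], dim=1)
u_ic = torch.sin(2 * math.pi * xt_ic[:, 0:1])            # u(x, 0) = sin(2*pi*x)
xt_bc = torch.cat([torch.zeros(100, 1), torch.rand(100, 1)], dim=1)
u_bc = torch.sin(-2 * math.pi * c * xt_bc[:, 1:2])       # exact inflow value at x = 0

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):                                 # ADAM phase; an L-BFGS phase could follow
    opt.zero_grad()
    loss = (w_pde * pde_residual(xt_col).pow(2).mean()
            + w_ic * (net(xt_ic) - u_ic).pow(2).mean()
            + w_bc * (net(xt_bc) - u_bc).pow(2).mean())
    loss.backward()
    opt.step()
print("weighted composite loss after training:", float(loss))
```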

Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation

Procedia PDF Downloads 59
2778 Small Scale Mobile Robot Auto-Parking Using Deep Learning, Image Processing, and Kinematics-Based Target Prediction

Authors: Mingxin Li, Liya Ni

Abstract:

Autonomous parking is a valuable feature applicable to many robotics applications such as tour guide robots, UV sanitizing robots, food delivery robots, and warehouse robots. With auto-parking, the robot will be able to park at the charging zone and charge itself without human intervention. As compared to self-driving vehicles, auto-parking is more challenging for a small-scale mobile robot only equipped with a front camera due to the camera view limited by the robot’s height and the narrow Field of View (FOV) of the inexpensive camera. In this research, auto-parking of a small-scale mobile robot with a front camera only was achieved in a four-step process: Firstly, transfer learning was performed on the AlexNet, a popular pre-trained convolutional neural network (CNN). It was trained with 150 pictures of empty parking slots and 150 pictures of occupied parking slots from the view angle of a small-scale robot. The dataset of images was divided into a group of 70% images for training and the remaining 30% images for validation. An average success rate of 95% was achieved. Secondly, the image of the detected empty parking space was processed with edge detection followed by the computation of parametric representations of the boundary lines using the Hough Transform algorithm. Thirdly, the positions of the entrance point and center of the available parking space were predicted based on the robot kinematic model as the robot was driving closer to the parking space, because the boundary lines disappeared partially or completely from its camera view due to the height and FOV limitations. The robot used its wheel speeds to compute the positions of the parking space with respect to its changing local frame as it moved along, based on its kinematic model. Lastly, the predicted entrance point of the parking space was used as the reference for the motion control of the robot until it was replaced by the actual center when it became visible to the robot again. The linear and angular velocities of the robot chassis center were computed based on the error between the current chassis center and the reference point. Then the left and right wheel speeds were obtained using inverse kinematics and sent to the motor driver. The above-mentioned four subtasks were all successfully accomplished, with the transfer learning, image processing, and target prediction performed in MATLAB, while the motion control and image capture were conducted on a self-built small-scale differential drive mobile robot. The small-scale robot employs a Raspberry Pi board, a Pi camera, an L298N dual H-bridge motor driver, a USB power module, a power bank, four wheels, and a chassis. Future research includes three areas: the integration of all four subsystems into one hardware/software platform with the upgrade to an Nvidia Jetson Nano board that provides superior performance for deep learning and image processing; more testing and validation on the identification of available parking space and its boundary lines; and improvement of performance after the hardware/software integration is completed.
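A minimal sketch of the transfer-learning step, written here in PyTorch although the paper's training was done in MATLAB: the pretrained AlexNet classifier head is replaced by a two-class (empty/occupied) layer and only that layer is trained. The random stand-in batch and hyperparameters are assumptions for illustration.

```python
# Minimal sketch: transfer learning on AlexNet for empty/occupied parking-slot classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)   # pretrained features
for param in model.parameters():
    param.requires_grad = False                                  # freeze everything

model.classifier[6] = nn.Linear(4096, 2)                         # new head: empty vs. occupied slot

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Stand-in batch: 8 RGB images of 224x224 pixels with binary labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
for epoch in range(3):                       # toy loop; the paper used 150 + 150 labelled images
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
print("training loss on the stand-in batch:", float(loss))
```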

Keywords: autonomous parking, convolutional neural network, image processing, kinematics-based prediction, transfer learning

Procedia PDF Downloads 121
2777 Dimensioning of Circuit Switched Networks by Using Simulation Code Based On Erlang (B) Formula

Authors: Ali Mustafa Elshawesh, Mohamed Abdulali

Abstract:

The paper presents an approach to dimensioning circuit switched networks and finding the relationship between the parameters of such networks under a specified probability of call blocking. Our work creates a simulation code based on the Erlang B formula to draw graphs, each showing two curves, one simulated and one calculated. These curves represent the relationships of the average number of calls and the average call duration with the probability of call blocking. This simulation code makes it easier to select the appropriate parameters for circuit switched networks.
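A minimal sketch of the Erlang B calculation such a simulation code is built around is shown below: the blocking probability for N circuits offered A erlangs via the standard recursion, plus a small helper that finds the number of circuits needed for a target blocking probability. The traffic figures are illustrative.

```python
# Minimal sketch: Erlang B blocking probability and a simple dimensioning helper.
def erlang_b(traffic_erlangs, channels):
    """Blocking probability B(A, N) via the numerically stable recursion
    B(A, 0) = 1, B(A, n) = A*B(A, n-1) / (n + A*B(A, n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

def channels_required(traffic_erlangs, target_blocking=0.01):
    n = 1
    while erlang_b(traffic_erlangs, n) > target_blocking:
        n += 1
    return n

# Offered traffic = average call arrival rate x average call duration.
calls_per_hour, mean_holding_time_h = 120, 0.05       # 3-minute average calls
a = calls_per_hour * mean_holding_time_h              # 6 erlangs
print("blocking with 10 circuits:", round(erlang_b(a, 10), 4))
print("circuits for 1% blocking:", channels_required(a, 0.01))
```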

Keywords: Erlang B formula, call blocking, telephone system dimension, Markov model, link capacity

Procedia PDF Downloads 587
2776 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing light, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding the exact contours with a greedy snake algorithm. In the proposed method, we use both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors during updating, the target region is given to a perceptron neural network to separate the target from the background. Its output is then used for the exact calculation of the size and center of the target, and also as the initial contour for the greedy snake algorithm to find the exact target edge. The proposed algorithm has been tested on a database which contains many challenges such as the high speed and agility of aircraft, background clutter, occlusions, camera movement, and so on. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
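The sketch below illustrates the particle-filter stage of such a tracker: particles carrying a 2-D position hypothesis are propagated with a random-walk motion model, weighted by a measurement likelihood, and resampled. The Gaussian distance likelihood used here is a stand-in for the paper's kernel and contour model, and the simulated trajectory is invented for illustration.

```python
# Minimal sketch: particle filter tracking a 2-D position from noisy measurements.
import numpy as np

rng = np.random.default_rng(1)
n_particles = 500
particles = rng.uniform(0, 100, size=(n_particles, 2))      # (x, y) position hypotheses
weights = np.full(n_particles, 1.0 / n_particles)

def pf_step(particles, weights, measurement, motion_std=2.0, meas_std=5.0):
    # 1. Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # 2. Update: weight particles by a Gaussian likelihood of the measurement.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    # 3. Systematic resampling when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        positions = (np.arange(n_particles) + rng.random()) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n_particles - 1)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)
    return particles, weights

# Track a target moving on a straight line from noisy position measurements.
for t in range(30):
    true_pos = np.array([10.0 + 2.5 * t, 20.0 + 1.5 * t])
    z = true_pos + rng.normal(0.0, 3.0, 2)
    particles, weights = pf_step(particles, weights, z)
    estimate = np.average(particles, axis=0, weights=weights)

print("true position:", true_pos.round(1), " estimate:", estimate.round(1))
```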

Keywords: video tracking, particle filter, greedy snake, neural network

Procedia PDF Downloads 323
2775 The Role of Online Social Networks in Social Movements: Social Polarization and Violations against Social Unity and Privacy of Individuals in Turkey

Authors: Tolga Yazıcı

Abstract:

Online social networks like Twitter, Facebook, and MySpace have experienced extensive growth in recent years. Social media offers individuals a tool for communicating and interacting with one another. These social networks enable people to stay in touch with other people and express themselves. This process makes the users of online social networks active creators of content rather than mere consumers of traditional media. That is why millions of people show a strong desire to learn the methods and tools of digital content production and the necessary communication skills. However, the booming interest in communication and interaction through online social networks and the high level of eagerness to invent and implement ways to participate in content production raise some privacy and security concerns. This presentation aims to open the assumed revolutionary, democratic, and liberating nature of online social media up for discussion by reviewing some recent political developments in Turkey. Firstly, the role of the Internet and online social networks in mobilizing collective movements through social interactions and communications will be questioned. Secondly, some cases from the Gezi and Okmeydanı protests and the December 17-25 period will be presented in order to illustrate misinformation and manipulation in social media and the violation of individual privacy through online social networks aimed at damaging social unity and stability, in contradiction to the democratic nature of online social networking.

Keywords: online social media networks, democratic participation, social movements, social polarization, privacy of individuals, Turkey

Procedia PDF Downloads 322
2774 Unsupervised Neural Architecture for Saliency Detection

Authors: Natalia Efremova, Sergey Tarasenko

Abstract:

We propose a novel neural network architecture for visual saliency detection, which utilizes neurophysiologically plausible mechanisms for the extraction of salient regions. The model has been significantly inspired by recent findings from neurophysiology and is aimed at simulating the bottom-up processes of human selective attention. Two types of features were analyzed: color and direction of maximum variance. The mechanism we employ for processing those features is PCA, implemented by means of normalized Hebbian learning and waves of spikes. To evaluate the performance of our model, we conducted a psychological experiment. Comparison of the simulation results with those of the experiment indicates good performance of our model.
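As an illustration of the normalized Hebbian mechanism mentioned above, the sketch below applies Oja's rule to extract the first principal component of a synthetic two-dimensional feature set; the data and learning rate are assumptions, not the image features used in the model.

```python
# Minimal sketch: PCA via Oja's rule (normalized Hebbian learning) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D data standing in for image feature samples.
samples = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.0], [0.0, 0.5]])
samples -= samples.mean(axis=0)

w = rng.normal(size=2)
eta = 0.001
for _ in range(3):                     # a few passes over the data
    for x in samples:
        y = w @ x                      # linear neuron output
        w += eta * y * (x - y * w)     # Oja's rule: Hebbian term with built-in normalization

w /= np.linalg.norm(w)
# Compare against the leading eigenvector of the sample covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(samples.T))
pc1 = eigvecs[:, np.argmax(eigvals)]
print("Oja weight vector :", w.round(3))
print("true first PC     :", (pc1 * np.sign(pc1 @ w)).round(3))
```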

Keywords: neural network models, visual saliency detection, normalized Hebbian learning, Oja's rule, psychological experiment

Procedia PDF Downloads 327
2773 Neuron-Based Control Mechanisms for a Robotic Arm and Hand

Authors: Nishant Singh, Christian Huyck, Vaibhav Gandhi, Alexander Jones

Abstract:

A robotic arm and hand controlled by simulated neurons is presented. The robot makes use of a biological neuron simulator with a point neuron model. The neurons and synapses are organised to create a finite state automaton, including neural inputs from sensors and outputs to effectors. The robot performs a simple pick-and-place task. This work is a proof of concept study for a longer term approach. It is hoped that further work will lead to more effective and flexible robots. As another benefit, it is hoped that further work will also lead to a better understanding of human and other animal neural processing, particularly for physical motion. This is a multidisciplinary approach combining cognitive neuroscience, robotics, and psychology.
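The sketch below shows a leaky integrate-and-fire point neuron of the general kind such a controller is assembled from, driven by a step input current and emitting spikes that could be routed to an effector. The parameters are illustrative and are not those of the simulator used in the paper.

```python
# Minimal sketch: leaky integrate-and-fire point neuron driven by a step current.
import numpy as np

dt, t_end = 0.1, 100.0               # ms
tau_m, v_rest, v_thresh, v_reset = 10.0, -65.0, -50.0, -65.0   # mV
r_m = 10.0                           # membrane resistance (MOhm)

time = np.arange(0.0, t_end, dt)
i_input = np.where((time > 20) & (time < 80), 2.0, 0.0)        # 2 nA step current
v = np.full_like(time, v_rest)
spike_times = []

for k in range(1, len(time)):
    dv = (-(v[k - 1] - v_rest) + r_m * i_input[k - 1]) / tau_m
    v[k] = v[k - 1] + dv * dt
    if v[k] >= v_thresh:             # threshold crossing: emit a spike and reset
        spike_times.append(time[k])
        v[k] = v_reset

print(f"{len(spike_times)} spikes; first at {spike_times[0]:.1f} ms")
```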

Keywords: cell assembly, force sensitive resistor, robot, spiking neuron

Procedia PDF Downloads 333
2772 Application of Artificial Neural Network for Single Horizontal Bare Tube and Bare Tube Bundles (Staggered) of Large Particles: Heat Transfer Prediction

Authors: G. Ravindranath, S. Savitha

Abstract:

This paper presents heat transfer analysis of a single horizontal bare tube and of a staggered arrangement of bare tube bundles in a gas-solid (air-solid) fluidized bed, with predictions made using an Artificial Neural Network (ANN) based on experimental data. A fluidized bed provides a nearly isothermal environment with a high heat transfer rate to submerged objects: due to thorough mixing and the large contact area between the gas and the particles, a fully fluidized bed has little temperature variation, and the gas leaves at a temperature close to that of the bed. Measurement of the average heat transfer coefficient was made by a local thermal simulation technique in a cold bubbling air-fluidized bed of size 0.305 m x 0.305 m. Studies were conducted for a single horizontal bare tube of 305 mm length and 28.6 mm outer diameter and for bare tube bundles in a staggered arrangement, using beds of large particles (average particle diameter greater than 1 mm), namely raagi and mustard. Within the range of experimental conditions, the influence of bed particle diameter (Dp) and fluidizing velocity (U), which are significant parameters affecting heat transfer, was studied. Artificial Neural Networks (ANNs) have been receiving increasing attention for simulating engineering systems due to some interesting characteristics such as learning capability, fault tolerance, and non-linearity. Here, a feed-forward architecture trained by the back-propagation technique is adopted to predict the heat transfer behaviour found in the experimental results. The ANN is designed to suit the present system, which has 3 inputs and 2 outputs. The network predictions are found to be in very good agreement with the experimentally observed values of the bare tube heat transfer coefficient (hb) and the bare tube Nusselt number (Nub).
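A minimal sketch of a feed-forward network with 3 inputs and 2 outputs, as in the paper, is given below; it is trained on synthetic stand-in data (particle diameter, fluidizing velocity, and a tube-row index mapped to a made-up heat transfer coefficient and Nusselt number) purely to illustrate the workflow, not to reproduce the experimental correlations.

```python
# Minimal sketch: 3-input, 2-output feed-forward network trained by back-propagation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 300
dp = rng.uniform(1.0, 3.0, n)            # particle diameter (mm)
u = rng.uniform(0.5, 2.5, n)             # fluidizing velocity (m/s)
row = rng.integers(1, 4, n)              # tube-row index in the staggered bundle
X = np.column_stack([dp, u, row])

# Made-up targets standing in for the measured h_b (W/m^2.K) and Nu_b.
hb = 250.0 + 80.0 * u - 40.0 * dp + 5.0 * row + rng.normal(0.0, 10.0, n)
nub = 8.0 + 2.5 * u - 1.2 * dp + rng.normal(0.0, 0.5, n)
Y = np.column_stack([hb, nub])

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
x_scaler, y_scaler = StandardScaler(), StandardScaler()
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
net.fit(x_scaler.fit_transform(X_train), y_scaler.fit_transform(Y_train))   # back-propagation training

Y_pred = y_scaler.inverse_transform(net.predict(x_scaler.transform(X_test)))
print("R^2 on held-out data:", round(r2_score(Y_test, Y_pred), 3))
```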

Keywords: fluidized bed, large particles, particle diameter, ANN

Procedia PDF Downloads 348
2771 Protein Tertiary Structure Prediction by a Multiobjective Optimization and Neural Network Approach

Authors: Alexandre Barbosa de Almeida, Telma Woerle de Lima Soares

Abstract:

Protein structure prediction is a challenging task in the bioinformatics field. The biological function of all proteins relies largely on the shape of their three-dimensional conformational structure, yet less than 1% of all known proteins in the world have their structure solved. This work proposes a deep learning model to address this problem, attempting to predict some aspects of the protein conformations. Through a process of multiobjective dominance, a recurrent neural network was trained to abstract the particular bias of each individual multiobjective algorithm, generating a heuristic that could be useful for predicting some of the relevant aspects of the three-dimensional conformation formation process, known as protein folding.

Keywords: Ab initio heuristic modeling, multiobjective optimization, protein structure prediction, recurrent neural network

Procedia PDF Downloads 187
2770 JaCoText: A Pretrained Model for Java Code-Text Generation

Authors: Jessica Lopez Espejel, Mahaman Sanoussi Yahaya Alassan, Walid Dahhane, El Hassane Ettifouri

Abstract:

Pretrained transformer-based models have shown high performance in natural language generation tasks. However, a new wave of interest has surged: automatic programming language code generation. This task consists of translating natural language instructions into source code. Despite the fact that well-known pre-trained models for language generation have achieved good performance in learning programming languages, effort is still needed in automatic code generation. In this paper, we introduce JaCoText, a model based on the Transformer neural network. It aims to generate Java source code from natural language text. JaCoText leverages the advantages of both natural language and code generation models. More specifically, we study some findings from the state of the art and use them to (1) initialize our model from powerful pre-trained models, (2) explore additional pretraining on our Java dataset, (3) lead experiments combining unimodal and bimodal data in training, and (4) scale the input and output length during the fine-tuning of the model. Conducted experiments on the CONCODE dataset show that JaCoText achieves new state-of-the-art results.

Keywords: java code generation, natural language processing, sequence-to-sequence models, transformer neural networks

Procedia PDF Downloads 248
2769 Life Prediction of Condenser Tubes Applying Fuzzy Logic and Neural Network Algorithms

Authors: A. Majidian

Abstract:

The life prediction of thermal power plant components is necessary to prevent unexpected outages, optimize maintenance tasks in periodic overhauls, and plan inspection tasks and their schedules. One of the main critical components in a power plant is the condenser, because its failure can affect many other components positioned downstream of it. This paper deals with the factors affecting the life of the condenser. The dependency of failure rates on these factors has been investigated using Artificial Neural Network (ANN) and fuzzy logic algorithms. These algorithms have shown their capabilities as dynamic tools for evaluating the life prediction of power plant equipment.

Keywords: life prediction, condenser tube, neural network, fuzzy logic

Procedia PDF Downloads 333
2768 3D Plant Growth Measurement System Using Deep Learning Technology

Authors: Kazuaki Shiraishi, Narumitsu Asai, Tsukasa Kitahara, Sosuke Mieno, Takaharu Kameoka

Abstract:

The purpose of this research is to facilitate productivity advances in agriculture. To accomplish this, we developed an automatic three-dimensional (3D) recording system for the growth of field crops that consists of a number of inexpensive modules: a very low-cost stereo camera, a couple of ZigBee wireless modules, a Raspberry Pi single-board computer, and a third generation (3G) wireless communication module. Our system uses an inexpensive Web stereo camera in order to keep total costs low. However, inexpensive video cameras record low-resolution images that are very noisy. Accordingly, in order to resolve these problems, we adopted a deep learning method. Based on the results of an extended operation test conducted without the use of an external power supply, we found that by using the Super-Resolution Convolutional Neural Network (SRCNN) method, our system could achieve a balance between the competing goals of low cost and superior performance. Our experimental results showed the effectiveness of our system.
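The sketch below shows an SRCNN-style network of the type named in the abstract: three convolutional layers (patch extraction, non-linear mapping, reconstruction) applied to a bicubically upscaled low-resolution frame. The channel sizes follow the original SRCNN paper; the training data here is a random stand-in, not the crop images from this system.

```python
# Minimal sketch: SRCNN-style super-resolution network on stand-in low-resolution frames.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 64, kernel_size=9, padding=4)   # patch extraction
        self.conv2 = nn.Conv2d(64, 32, kernel_size=1)             # non-linear mapping
        self.conv3 = nn.Conv2d(32, 1, kernel_size=5, padding=2)   # reconstruction

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.conv3(x)

model = SRCNN()
low_res = torch.rand(4, 1, 32, 32)                                 # noisy low-resolution frames
upscaled = F.interpolate(low_res, scale_factor=2, mode="bicubic", align_corners=False)
target_hr = torch.rand(4, 1, 64, 64)                               # stand-in high-resolution ground truth

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(10):                                             # toy training loop
    opt.zero_grad()
    loss = F.mse_loss(model(upscaled), target_hr)
    loss.backward()
    opt.step()
print("reconstruction MSE:", float(loss))
```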

Keywords: 3D plant data, automatic recording, stereo camera, deep learning, image processing

Procedia PDF Downloads 257
2767 Neural Network Monitoring Strategy of Cutting Tool Wear of Horizontal High Speed Milling

Authors: Kious Mecheri, Hadjadj Abdechafik, Ameur Aissa

Abstract:

The wear of the cutting tool degrades the quality of the product in manufacturing processes. Online monitoring of the cutting tool wear level is therefore necessary to prevent deterioration of the machining quality. Unfortunately, there is no direct way to measure cutting tool wear online. Consequently, we must adopt an indirect method in which wear is estimated from the measurement of one or more physical parameters appearing during the machining process, such as the cutting force, vibrations, or acoustic emission. In this work, a neural network system is developed in order to estimate the flank wear from the cutting force measurement and the cutting conditions.

Keywords: flank wear, cutting forces, high speed milling, signal processing, neural network

Procedia PDF Downloads 375
2766 Smart Technology for Hygrothermal Performance of Low Carbon Material Using an Artificial Neural Network Model

Authors: Manal Bouasria, Mohammed-Hichem Benzaama, Valérie Pralong, Yassine El Mendili

Abstract:

Reducing the quantity of cement in cementitious composites can help to reduce the environmental effect of construction materials. By-products such as ferronickel slags (FNS), fly ash (FA), and Crepidula fornicata (CR) are promising options for cement replacement. In this work, we investigated the relevance of substituting cement with FNS-CR and FA-CR on the mechanical properties of mortar and on the thermal properties of concrete. For aging intervals ranging from 2 to 28 days, the mechanical properties are obtained by 3-point bending and compression tests. The chosen mix is used to construct a prototype in order to study the material’s hygrothermal performance. The data collected by the sensors placed on the prototype was utilized to build an artificial neural network.

Keywords: artificial neural network, cement, circular economy, concrete, by products

Procedia PDF Downloads 95
2765 ANN Based Simulation of PWM Scheme for Seven Phase Voltage Source Inverter Using MATLAB/Simulink

Authors: Mohammad Arif Khan

Abstract:

This paper analyzes and presents the development of an Artificial Neural Network based space vector modulation controller (ANN-SVPWM) for a seven-phase voltage source inverter. First, the conventional method of producing a sinusoidal output voltage, in which six active and one zero space vector are used to synthesize the input reference, is elaborated, and then a new PWM scheme called Artificial Neural Network Based PWM is presented. The ANN-based controller has the advantages of very fast implementation and analysis of the algorithms and avoids the direct computation of trigonometric and non-linear functions. The ANN controller uses an individual training strategy with fixed-weight and supervised models. A computer simulation program has been developed using MATLAB/Simulink together with the Neural Network toolbox for training the ANN controller. A comparison of the proposed scheme with the conventional scheme is presented based on various performance indices. Extensive simulation results are provided to validate the findings.

Keywords: space vector PWM, total harmonic distortion, seven-phase, voltage source inverter, multi-phase, artificial neural network

Procedia PDF Downloads 436
2764 Experimental Networks Synchronization of Chua’s Circuit in Different Topologies

Authors: Manuel Meranza-Castillon, Rolando Diaz-Castillo, Adrian Arellano-Delgado, Cesar Cruz-Hernandez, Rosa Martha Lopez-Gutierrez

Abstract:

In this work, we deal with the experimental network synchronization of chaotic nodes with different topologies. Our approach is based on complex systems theory, and we use a master-slave configuration to couple the nodes in the networks. In particular, we design and implement electronically complex dynamical networks composed of nine coupled chaotic Chua’s circuits with the following topologies: nearest-neighbor, small-world, open ring, star, and global. Also, network synchronization is evaluated according to a particular coupling strength for each topology. This study is important because of the possible applications to the private transmission of information in a chaotic communication network with multiple users.
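As a numerical illustration of the master-slave coupling described above, the sketch below integrates two dimensionless Chua circuits with a simple Euler scheme, the slave being driven through its x variable. The coupling gain and initial conditions are illustrative; the paper itself couples nine circuits electronically in several topologies.

```python
# Minimal sketch: master-slave synchronization of two dimensionless Chua circuits.
import numpy as np

alpha, beta = 9.0, 100.0 / 7.0
m0, m1 = -8.0 / 7.0, -5.0 / 7.0

def f(x):                                # Chua diode piecewise-linear characteristic
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))

def chua(state, drive_x=None, k=0.0):
    x, y, z = state
    coupling = k * (drive_x - x) if drive_x is not None else 0.0
    return np.array([alpha * (y - x - f(x)) + coupling,
                     x - y + z,
                     -beta * y])

dt, steps = 1e-3, 60000
master = np.array([0.1, 0.0, 0.0])
slave = np.array([-0.3, 0.2, 0.1])       # different initial condition
k = 10.0                                 # coupling strength (illustrative)

for i in range(steps):
    master = master + dt * chua(master)
    slave = slave + dt * chua(slave, drive_x=master[0], k=k)
    if i == steps // 2:
        halfway_error = np.linalg.norm(master - slave)

print("sync error halfway:", round(float(halfway_error), 6))
print("sync error at end: ", round(float(np.linalg.norm(master - slave)), 6))
```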

Keywords: complex networks, Chua's circuit, experimental synchronization, multiple users

Procedia PDF Downloads 324
2763 Design and Control of a Knee Rehabilitation Device Using an MR-Fluid Brake

Authors: Mina Beheshti, Vida Shams, Mojtaba Esfandiari, Farzaneh Abdollahi, Abdolreza Ohadi

Abstract:

Most of the people who survive a stroke need rehabilitation tools to regain their mobility. The core function of these devices is a brake actuator. The goal of this study is to design and control a magnetorheological brake which can be used as a rehabilitation tool. The fluid used in this brake is called magnetorheological (MR) fluid, whose properties can be changed by varying the magnetic field. The braking properties can therefore be controlled by exploiting this feature of the fluid. In this research, different MR brake designs are first introduced, and in each design the dimensions of the brake are determined based on the torque required for foot movement. To calculate the brake dimensions, it is assumed that the shear stress distribution in the fluid is uniform and that the fluid is in its saturated state. After designing the rehabilitation brake, the mathematical model of the movement of a healthy person is extracted. Due to the nonlinear nature of the system and its variability, various adaptive, neural network, and robust controllers have been implemented to estimate the parameters and control the system. After calculating the torque and control current, the best type of controller in terms of error and control current has been selected. Finally, this controller is applied to the experimental data of the patient's movements, and the control current is calculated to achieve the desired torque and motion.

Keywords: rehabilitation, magnetorheological fluid, knee, brake, adaptive control, robust control, neural network control, torque control

Procedia PDF Downloads 132
2762 Artificial Neural Network in Predicting the Soil Response in the Discrete Element Method Simulation

Authors: Zhaofeng Li, Jun Kang Chow, Yu-Hsing Wang

Abstract:

This paper attempts to bridge the soil properties and the mechanical response of soil in the discrete element method (DEM) simulation. The artificial neural network (ANN) was therefore adopted, aiming to reproduce the stress-strain-volumetric response when soil properties are given. 31 biaxial shearing tests with varying soil parameters (e.g., initial void ratio and interparticle friction coefficient) were generated using the DEM simulations. Based on these 45 sets of training data, a three-layer neural network was established which can output the entire stress-strain-volumetric curve during the shearing process from the input soil parameters. Beyond the training data, 2 additional sets of data were generated to examine the validity of the network, and the stress-strain-volumetric curves for both cases were well reproduced using this network. Overall, the ANN was found promising in predicting the soil behavior and reducing repetitive simulation work.

Keywords: artificial neural network, discrete element method, soil properties, stress-strain-volumetric response

Procedia PDF Downloads 377
2761 An Automated Procedure for Estimating the Glomerular Filtration Rate and Determining the Normality or Abnormality of the Kidney Stages Using an Artificial Neural Network

Authors: Hossain A., Chowdhury S. I.

Abstract:

Introduction: The use of a gamma camera is a standard procedure in nuclear medicine facilities or hospitals to diagnose chronic kidney disease (CKD), but the gamma camera does not precisely stage the disease. The authors sought to determine whether they could use an artificial neural network (ANN) to determine whether CKD was in a normal or abnormal stage based on GFR values. Method: The 250 kidney patients (training 188, testing 62) who underwent an ultrasonography test for renal diagnosis in our nuclear medicine center were scanned using a gamma camera. Before the scanning procedure, the patients received an injection of ⁹⁹ᵐTc-DTPA. The gamma camera computes the pre- and post-syringe radioactive counts after the injection has been pushed into the patient's vein. The artificial neural network uses the softmax function with cross-entropy loss to determine whether CKD is normal or abnormal based on the GFR value in the output layer. Results: The proposed ANN model had a 99.20% accuracy according to K-fold cross-validation. The sensitivity and specificity were 99.10% and 99.20%, respectively. The AUC was 0.994. Conclusion: The proposed model can distinguish between normal and abnormal stages of CKD by using an artificial neural network. The gamma camera could be upgraded to diagnose normal or abnormal stages of CKD with an appropriate GFR value following the clinical application of the proposed model.

Keywords: artificial neural network, glomerular filtration rate, stages of the kidney, gamma camera

Procedia PDF Downloads 83
2760 Multiple Query Optimization in Wireless Sensor Networks Using Data Correlation

Authors: Elaheh Vaezpour

Abstract:

Data sensing in wireless sensor networks is done by users declaring queries to the network. In many applications of wireless sensor networks, many users send queries to the network simultaneously. If the queries are processed separately, the network’s energy consumption will increase significantly. Therefore, it is very important to aggregate the queries before sending them to the network. In this paper, we propose a multiple query optimization framework based on the sensors' physical and temporal correlation. In the proposed method, queries are merged and sent to the network by considering the correlation among the sensors in order to reduce the communication cost between the sensors and the base station.

Keywords: wireless sensor networks, multiple query optimization, data correlation, reducing energy consumption

Procedia PDF Downloads 315
2759 A Deep Learning Based Method for Faster 3D Structural Topology Optimization

Authors: Arya Prakash Padhi, Anupam Chakrabarti, Rajib Chowdhury

Abstract:

Topology or layout optimization often gives better-performing, economical structures and is very helpful in the conceptual design phase. Traditionally, it is done with finite element-based optimization schemes which, although they give a good result, are very time-consuming, especially for 3D structures. Among other alternatives, machine learning, especially deep learning-based methods, has very good potential for resolving this computational issue. Here, a 3D convolutional neural network (3D-CNN) based variational auto encoder (VAE) is trained using a dataset generated from the commercially available topology optimization code ABAQUS Tosca using the solid isotropic material with penalization (SIMP) method for compliance minimization. The encoded data in the latent space is then fed to a 3D generative adversarial network (3D-GAN) to generate the outcome at a 64x64x64 size. Here, the network consists of 3D volumetric CNNs with rectified linear unit (ReLU) activation in between and sigmoid activation at the end. The proposed network is seen to provide almost optimal results with significantly reduced computational time, as there is no iteration involved.

Keywords: 3D generative adversarial network, deep learning, structural topology optimization, variational auto encoder

Procedia PDF Downloads 155
2758 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant

Authors: John K. Avor, Choong-Koo Chang

Abstract:

The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT) and vice versa is performed through a fast bus transfer scheme. Fast bus transfer is a time-critical application where the transfer process depends on various parameters; thus, transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical class 1E electrical loads. Bus transfers must, therefore, be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, the main problem is that there are instances where transfer schemes have scrambled due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, a combination of Artificial Neural Networks and Fuzzy Systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm to be selected based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting the Neuro-Fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of the artificial neural network, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined effect of artificial neural networks and fuzzy systems in accurately interpreting key bus transfer parameters, such as the magnitude of the residual voltage, the decay time, and the associated phase angle of the residual voltage, in order to determine the possibility of a high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The performance of the scheme is implemented on the APR1400 nuclear power plant auxiliary system.

Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability

Procedia PDF Downloads 153
2757 Uplink Throughput Prediction in Cellular Mobile Networks

Authors: Engin Eyceyurt, Josko Zec

Abstract:

Current and future cellular mobile communication networks generate enormous amounts of data. Networks have become extremely complex, with an extensive space of parameters, features, and counters. These networks are unmanageable with legacy methods, and an enhanced design and optimization approach that is increasingly reliant on machine learning is necessary. This paper proposes machine learning as a viable approach for uplink throughput prediction. LTE radio metrics, such as Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), and Signal to Noise Ratio (SNR), are used to train models to estimate the expected uplink throughput. A prediction accuracy with a high coefficient of determination of 91.2% is obtained from measurements collected with a simple smartphone application.
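A minimal sketch of the prediction task is shown below: a regressor is trained on the three LTE radio metrics named in the abstract (RSRP, RSRQ, SNR) and scored by the coefficient of determination. The synthetic drive-test samples and the random-forest learner are assumptions for illustration; the abstract does not state which model produced the 91.2% figure.

```python
# Minimal sketch: uplink throughput regression from LTE radio metrics.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n = 2000
rsrp = rng.uniform(-120, -70, n)       # dBm
rsrq = rng.uniform(-20, -5, n)         # dB
snr = rng.uniform(-5, 25, n)           # dB
X = np.column_stack([rsrp, rsrq, snr])

# Made-up ground truth: throughput (Mbps) rising with signal quality, plus noise.
throughput = 0.8 * (snr + 5) + 0.15 * (rsrp + 120) + 0.5 * (rsrq + 20) + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, throughput, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("coefficient of determination R^2:", round(r2_score(y_test, model.predict(X_test)), 3))
```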

Keywords: drive test, LTE, machine learning, uplink throughput prediction

Procedia PDF Downloads 136