Search results for: Bayesian neural network
4037 Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video
Authors: Nidhal K. Azawi, John M. Gauch
Abstract:
Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and deep neural networks to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92% and 98%. We also show how the removal of noninformative images, together with image alignment, can aid in the creation of image panoramas and other visualizations of colonoscopy images.
Keywords: colonoscopy classification, feature extraction, image alignment, machine learning
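As a rough illustration of the informative/noninformative pipeline described above, the sketch below computes simple hand-crafted frame features and trains a scikit-learn classifier. The feature choices (gradient variance as a sharpness proxy, fraction of saturated pixels as a specular-highlight proxy, contrast) and the random stand-in data are assumptions for illustration only, not the authors' features or network.

```python
# Minimal sketch (not the authors' code): hand-crafted frame features plus a
# scikit-learn classifier labelling frames as informative (1) or not (0).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def frame_features(gray):
    """gray: 2-D float array in [0, 1] representing one video frame."""
    gy, gx = np.gradient(gray)
    sharpness = np.var(gx) + np.var(gy)      # blurry frames score low
    specular = np.mean(gray > 0.95)          # fraction of saturated pixels
    contrast = gray.std()
    return np.array([sharpness, specular, contrast])

rng = np.random.default_rng(0)
frames = rng.random((200, 64, 64))           # stand-in "frames"; use real video
labels = rng.integers(0, 2, size=200)        # 1 = informative

X = np.stack([frame_features(f) for f in frames])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```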
4036 A POX Controller Module to Prepare a List of Flow Header Information Extracted from SDN Traffic
Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin
Abstract:
Software Defined Networking (SDN) is a paradigm designed to facilitate controlling the network dynamically and with more agility. Network traffic is a set of flows, each of which contains a set of packets. In SDN, a matching process is performed on every packet coming to the network in the SDN switch. Only the headers of new packets are forwarded to the SDN controller. In SDN terminology, the flow header fields are called tuples. Basically, these form a 5-tuple: the source and destination IP addresses, the source and destination ports, and the protocol number. This flow information is used to provide an overview of the network traffic. Our module is meant to extract this 5-tuple, together with the packet and flow counts, and show them as a list. This list can therefore be used as a first step toward detecting a DDoS attack, and the module can be considered the beginning stage of any flow-based DDoS detection method.
Keywords: matching, OpenFlow tables, POX controller, SDN, table-miss
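The sketch below is a plain-Python stand-in for the flow-listing idea, not the actual POX module (the OpenFlow packet-in handling is omitted): packets are aggregated into flows keyed by the 5-tuple, and the packet and flow counts are printed as a list. The sample packets are hypothetical.

```python
# Minimal stand-in: count packets per 5-tuple flow and list them.
from collections import Counter

# (src_ip, dst_ip, src_port, dst_port, protocol)
packets = [
    ("10.0.0.1", "10.0.0.2", 5050, 80, "TCP"),
    ("10.0.0.1", "10.0.0.2", 5050, 80, "TCP"),
    ("10.0.0.3", "10.0.0.2", 6060, 53, "UDP"),
]

flow_table = Counter(packets)                # one entry per distinct 5-tuple

print(f"{len(packets)} packets, {len(flow_table)} flows")
for five_tuple, n_packets in flow_table.items():
    print(five_tuple, "->", n_packets, "packets")
```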
4035 Novel Recommender Systems Using Hybrid CF and Social Network Information
Authors: Kyoung-Jae Kim
Abstract:
Collaborative Filtering (CF) is a popular technique for personalization in the e-commerce domain to reduce information overload. In general, CF provides a list of recommended items based on other similar users’ preferences in the user-item matrix and uses them to predict the focal user’s preference for particular items. Many real-world recommender systems use CF techniques because of their excellent accuracy and robustness. However, CF has some limitations, including sparsity problems and the high dimensionality of the user-item matrix. In addition, traditional CF does not consider the emotional interaction between users. In this study, we propose recommender systems using social network information and singular value decomposition (SVD) to alleviate these limitations. The purpose of this study is to reduce the dimensionality of the data set using SVD and to improve the performance of CF by using emotional information from the focal user’s social network data. We test the usability of the hybrid CF, SVD, and social network information model using real-world data. The experimental results show that the proposed model outperforms conventional CF models.
Keywords: recommender systems, collaborative filtering, social network information, singular value decomposition
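A minimal sketch of the SVD step, assuming a toy rating matrix: a truncated SVD gives a low-rank approximation of the user-item matrix from which a missing rating can be predicted. The social-network/emotional weighting described above is omitted here, and the matrix values are illustrative.

```python
# Truncated-SVD rating prediction on a toy user-item matrix.
import numpy as np

R = np.array([                 # rows = users, cols = items, 0 = unrated
    [5, 4, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

k = 2                          # reduced dimensionality
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # low-rank approximation

user, item = 1, 2              # predict user 1's preference for item 2
print("predicted rating:", round(R_hat[user, item], 2))
```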
4034 Neural Machine Translation for Low-Resource African Languages: Benchmarking State-of-the-Art Transformer for Wolof
Authors: Cheikh Bamba Dione, Alla Lo, Elhadji Mamadou Nguer, Siley O. Ba
Abstract:
In this paper, we propose two neural machine translation (NMT) systems (French-to-Wolof and Wolof-to-French) based on sequence-to-sequence with attention and transformer architectures. We trained our models on a parallel French-Wolof corpus of about 83k sentence pairs. Because of the low-resource setting, we experimented with advanced methods for handling data sparsity, including subword segmentation, back-translation, and the copied-corpus method. We evaluate the models using the BLEU score and find that the transformer outperforms the classic seq2seq model in all settings, in addition to being less sensitive to noise. In general, the best scores are achieved when training the models on word-level units. For subword-level models, using back-translation proves to be slightly beneficial in low-resource (WO) to high-resource (FR) language translation for the transformer (but not for the seq2seq) models. A slight improvement can also be observed when injecting copied monolingual text in the target language. Moreover, combining the copied-corpus data with back-translation leads to a substantial improvement in translation quality.
Keywords: back-translation, low-resource language, neural machine translation, sequence-to-sequence, transformer, Wolof
4033 Minimization of Propagation Delay in Multi Unmanned Aerial Vehicle Network
Authors: Purva Joshi, Rohit Thanki, Omar Hanif
Abstract:
Unmanned aerial vehicles (UAVs) are becoming increasingly important in various industrial applications and sectors. Nowadays, a multi-UAV network is used for specific types of communication (e.g., military) and for monitoring purposes. It is therefore critical to reduce the propagation delay during communication between UAVs, which is essential in a multi-UAV network. This paper presents how the propagation delay between the base station (BS) and the UAVs is reduced using a searching algorithm. Furthermore, the iterative k-nearest neighbor (k-NN) algorithm and the Travelling Salesman Problem (TSP) algorithm were utilized to optimize the distance between the BS and individual UAVs to overcome the problem of propagation delay in multi-UAV networks. The simulation results show that the proposed method reduces complexity, improves reliability, and reduces propagation delay in multi-UAV networks.
Keywords: multi-UAV network, optimal distance, propagation delay, k-nearest neighbor, travelling salesman problem
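A hedged sketch of the distance-optimization idea: a nearest-neighbour tour (a greedy TSP heuristic) orders UAV visits from the base station so that the total path length, and hence the propagation delay, stays small. The coordinates and the exact selection rule are illustrative assumptions, not the paper's algorithm.

```python
# Greedy nearest-neighbour tour from the base station over UAV positions.
import numpy as np

bs = np.array([0.0, 0.0])                               # base station
uavs = np.array([[2, 3], [5, 1], [1, 7], [6, 6]], dtype=float)

unvisited = list(range(len(uavs)))
tour, pos, total = [], bs, 0.0
while unvisited:
    nxt = min(unvisited, key=lambda i: np.linalg.norm(uavs[i] - pos))
    total += np.linalg.norm(uavs[nxt] - pos)            # accumulate distance
    pos = uavs[nxt]
    tour.append(nxt)
    unvisited.remove(nxt)

print("visit order:", tour, "total distance:", round(total, 2))
```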
4032 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning
Authors: Hossein Havaeji, Tony Wong, Thien-My Dao
Abstract:
1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) is a system that uses BT to drive SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. These costs must be estimated because they can impact existing cost control strategies. To account for system and deployment costs, the following hurdle must be overcome: the costs of developing and running BT in an SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to the BT installation cost, which has a direct impact on the total costs of the SCS. Predicting the BT installation cost in an SCS may help managers decide whether BT will be an economic advantage. The purpose of the research is to identify the main BT installation cost components in an SCS needed for deeper cost analysis. We then identify and categorize the main groups of cost components in more detail in order to use them in the prediction process. The second objective is to determine a suitable supervised learning technique to predict the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the running cost of BT is involved in the total cost of the SCS.
2. Work Performed: Applied successfully in various fields, supervised learning is a method of framing the data, treating it, and training the chosen model. It is a learning model directed at predicting an outcome measurement based on a set of unseen input data. The following steps are conducted to pursue the objectives of this work. The first step is a literature review to identify the different cost components of BT installation in an SCS. Based on the literature review, we choose supervised learning methods that are suitable for BT installation cost prediction in an SCS. According to the literature review, supervised learning algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost are the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models is the third step. Finally, we will propose the best predictive performance to find the minimum BT installation costs in an SCS.
3. Expected Results and Conclusion: This study aims to propose a cost prediction of BT installation in an SCS with the help of supervised learning algorithms. We will first select a case study in the field of BT-enabled SCS and then use supervised learning algorithms to predict the BT installation cost in the SCS. We continue to find the best predictive performance for developing and running BT in an SCS. Finally, the paper will be presented at the conference.
Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning
4031 Artificial Neurons Based on Memristors for Spiking Neural Networks
Authors: Yan Yu, Wang Yu, Chen Xintong, Liu Yi, Zhang Yanzhong, Wang Yanji, Chen Xingyu, Zhang Miaocheng, Tong Yi
Abstract:
Neuromorphic computing based on spiking neural networks (SNNs) has emerged as a promising avenue for building the next generation of intelligent computing systems. Owing to their high-density integration, low power consumption, and outstanding nonlinearity, memristors have attracted growing attention for realizing SNNs. However, fabricating a low-power and robust memristor-based spiking neuron without extra electrical components is still a challenge for brain-inspired systems. In this work, we demonstrate a TiO₂-based threshold switching (TS) memristor that emulates a leaky integrate-and-fire (LIF) neuron without auxiliary circuits and use it to realize single-layer fully connected (FC) SNNs. Moreover, our TiO₂-based resistive switching (RS) memristors realize spike-timing-dependent plasticity (STDP), originating from the Ag diffusion-based filamentary mechanism. This work demonstrates that TiO₂-based memristors may provide an efficient method to construct hardware neuromorphic computing systems.
Keywords: leaky integrate-and-fire, memristor, spiking neural networks, spike-timing-dependent plasticity
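For readers unfamiliar with the neuron model mentioned above, the sketch below simulates a leaky integrate-and-fire (LIF) neuron in plain Python. It is a behavioural illustration only: the device physics of the TiO₂ memristor is not modelled, and all parameter values are assumptions.

```python
# Behavioural LIF neuron: leaky integration of an input current with a
# threshold-and-reset spiking rule.
dt, tau, v_th, v_reset = 1e-3, 20e-3, 1.0, 0.0   # time step, time constant,
steps, I = 200, 1.2                              # threshold, reset, input
v, spikes = 0.0, []

for t in range(steps):
    v += dt / tau * (-v + I)          # leaky integration
    if v >= v_th:                     # threshold crossing -> fire
        spikes.append(t * dt)
        v = v_reset                   # reset after the spike

print(f"{len(spikes)} spikes in {steps * dt:.3f} s")
```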
4030 Dynamical Relation of Poisson Spike Trains in Hodgkin-Huxley Neural Ion Current Model and Formation of Non-Canonical Bases, Islands, and Analog Bases in DNA, mRNA, and RNA at or near the Transcription
Authors: Michael Fundator
Abstract:
A groundbreaking application of biomathematical and biochemical research on neural network processes to the formation of non-canonical bases, islands, and analog bases in DNA and mRNA at or near the transcription, one that contradicts the long-anticipated statistical assumptions for the distribution of bases and analog base compounds, is implemented through statistical and stochastic methods with the addition of quantum principles, where the usual transience of the Poisson spike train becomes a very instrumental tool for finding even almost-periodic solutions to the Fokker-Planck stochastic differential equation. The present article develops new multidimensional methods for finding solutions to stochastic differential equations based on a more rigorous mathematical approach through the Kolmogorov-Chentsov continuity theorem, which allows stochastic processes with jumps, under certain conditions, to have a γ-Hölder continuous modification that is used as a basis for finding analogous parallels in the dynamics of neural networks and the formation of analog bases and transcription in DNA.
Keywords: Fokker-Planck stochastic differential equation, Kolmogorov-Chentsov continuity theorem, neural networks, translation and transcription
4029 Monitoring Memories by Using Brain Imaging
Authors: Deniz Erçelen, Özlem Selcuk Bozkurt
Abstract:
The course of daily human life calls for memories and for remembering the time and place of certain events. Recalling memories takes up a substantial amount of an individual's time. Unfortunately, scientists lack the technology to fully understand and observe the different brain regions that interact to form or retrieve memories. The hippocampus, a complex brain structure located in the temporal lobe, plays a crucial role in memory. The hippocampus forms memories and allows the brain to retrieve them by ensuring that neurons fire together. This process is called “neural synchronization.” Sadly, the hippocampus is known to deteriorate with age. Proteins and hormones, which repair and protect cells in the brain, typically decline as an individual ages. With the deterioration of the hippocampus, an individual becomes more prone to memory loss. Memory loss often starts off as mild but may evolve into serious medical conditions such as dementia and Alzheimer’s disease. In their quest to fully comprehend how memories work, scientists have created many different kinds of technology to examine the brain and its neural pathways. For instance, Magnetic Resonance Imaging (MRI) is used to collect detailed images of an individual's brain anatomy. In order to monitor and analyze brain functions, a different version of this machine, called Functional Magnetic Resonance Imaging (fMRI), is used. The fMRI is a neuroimaging procedure that is conducted while the target brain regions are active. It measures brain activity by detecting changes in blood flow associated with neural activity. Neurons need more oxygen when they are active. The fMRI measures the change in magnetization between oxygen-rich and oxygen-poor blood. This way, there is a detectable difference across brain regions, and scientists can monitor them. Electroencephalography (EEG) is also a significant way to monitor the human brain. The EEG is more versatile and cost-efficient than an fMRI. An EEG measures electrical activity generated by the numerous cortical layers of the brain. EEG allows scientists to record brain processes that occur after external stimuli. EEGs have a very high temporal resolution, which makes it possible to measure synchronized neural activity and almost precisely track the contents of short-term memory. Science has come a long way in monitoring memories using these kinds of devices, which have made the inspection of neurons and neural pathways more intense and detailed.
Keywords: brain, EEG, fMRI, hippocampus, memories, neural pathways, neurons
4028 Evaluating the Perception of Roma in Europe through Social Network Analysis
Authors: Giulia I. Pintea
Abstract:
The Roma people are a nomadic ethnic group native to India, and they are one of the most prevalent minorities in Europe. In the past, Roma were enslaved, and they were imprisoned in concentration camps during the Holocaust; today, Roma are subject to hate crimes and are denied access to healthcare, education, and proper housing. The aim of this project is to analyze how the public perception of the Roma people may be influenced by antiziganist and pro-Roma institutions in Europe. In order to carry out this project, we used social network analysis to build two large social networks: the antiziganist network, which is composed of institutions that oppress and racialize Roma, and the pro-Roma network, which is composed of institutions that advocate for and protect Roma rights. Measures of centrality, density, and modularity were obtained to determine which of the two social networks exerts the greater influence on the public’s perception of Roma in European societies. Furthermore, data on hate crimes against Roma were gathered from the Organization for Security and Cooperation in Europe (OSCE). We analyzed the trends in hate crimes against Roma for several European countries for 2009-2015 in order to see whether or not there have been changes in the public’s perception of Roma, thus helping us evaluate which of the two social networks has been more influential. Overall, the results suggest that there is a greater and faster exchange of information in the pro-Roma network. However, when taking the hate crimes into account, the impact of the pro-Roma institutions is ambiguous, due to differing patterns among European countries, suggesting that the impact of the pro-Roma network is inconsistent. Despite antiziganist institutions having a slower flow of information, the hate crime patterns also suggest that the antiziganist network has a higher impact on certain countries, which may be due to institutions outside the political sphere boosting the spread of antiziganist ideas and information to the European public.
Keywords: applied mathematics, oppression, Roma people, social network analysis
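The network measures named above (density, centrality, modularity) can be computed as in the small sketch below. The graph is a toy stand-in, not the project's institutional networks, and the node names are hypothetical.

```python
# Density, degree centrality, and modularity on a toy institution graph.
import networkx as nx
from networkx.algorithms import community

G = nx.Graph()                                   # e.g., a pro-Roma network
G.add_edges_from([
    ("NGO_A", "NGO_B"), ("NGO_A", "NGO_C"),
    ("NGO_B", "NGO_C"), ("NGO_C", "NGO_D"),
])

print("density:", nx.density(G))
print("degree centrality:", nx.degree_centrality(G))
communities = community.greedy_modularity_communities(G)
print("modularity:", community.modularity(G, communities))
```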
4027 Lightweight Hybrid Convolutional and Recurrent Neural Networks for Wearable Sensor Based Human Activity Recognition
Authors: Sonia Perez-Gamboa, Qingquan Sun, Yan Zhang
Abstract:
Non-intrusive sensor-based human activity recognition (HAR) is utilized in a spectrum of applications, including fitness tracking devices, gaming, health care monitoring, and smartphone applications. Deep learning models such as convolutional neural networks (CNNs) and long short-term memory (LSTM) recurrent neural networks (RNNs) provide a way to achieve HAR accurately and effectively. In this paper, we design a multi-layer hybrid architecture with CNN and LSTM and explore a variety of multi-layer combinations. Based on this exploration, we present a lightweight, hybrid, multi-layer model that improves recognition performance by integrating local, scale-invariant features with the dependencies between activities. The experimental results demonstrate the efficacy of the proposed model, which achieves a 94.7% activity recognition rate on a benchmark human activity dataset. This model outperforms traditional machine learning and other deep learning methods. Additionally, our implementation achieves a balance between recognition rate and training time consumption.
Keywords: deep learning, LSTM, CNN, human activity recognition, inertial sensor
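A minimal Keras sketch of the CNN + LSTM hybrid idea follows: Conv1D layers extract local features from inertial-sensor windows, and an LSTM models the temporal dependencies before a softmax classifier. The layer sizes, window length, and random stand-in data are assumptions, not the authors' configuration.

```python
# Hybrid CNN + LSTM classifier for windows of inertial sensor data.
import numpy as np
import tensorflow as tf

timesteps, channels, n_classes = 128, 6, 6       # e.g., accel + gyro axes
model = tf.keras.Sequential([
    tf.keras.Input(shape=(timesteps, channels)),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in data; replace with windows from a real HAR dataset.
X = np.random.rand(512, timesteps, channels).astype("float32")
y = np.random.randint(0, n_classes, size=512)
model.fit(X, y, epochs=1, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))
```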
4026 The Nature and the Structure of Scientific and Innovative Collaboration Networks
Authors: Afshin Moazami, Andrea Schiffauerova
Abstract:
The objective of this work is to investigate the development and the role of collaboration networks in the creation of knowledge and innovations in the US and Canada, with a special focus on Quebec. In order to create the scientific networks, data on journal articles were extracted from SCOPUS, and the networks were built based on the co-authorship of the journal papers. For the innovation networks, the USPTO database was used, and the networks were built on patent co-inventorship. Various indicators characterizing the evolution of the network structure and the positions of the researchers and inventors in the networks were calculated. A comparison between the United States, Canada, and Quebec was then carried out. The preliminary results show that the nature of scientific collaboration networks differs from that seen in innovation networks. Scientists work in bigger teams and are mostly interconnected within one giant network component, whereas the innovation network is much more clustered and fragmented; the inventors work more repetitively with the same partners, often in smaller isolated groups. In both Canada and the US, an increasing tendency towards collaboration was observed, and it was found that networks are getting bigger and more centralized with time. Moreover, a declining share of knowledge transfers per scientist was detected, suggesting an increasing specialization of science. The US collaboration networks tend to be more centralized than the Canadian ones. Quebec shares many features with the Canadian network, but some differences were observed; for example, Quebec inventors rely more on knowledge transmission through intermediaries.
Keywords: Canada, collaboration, innovation network, scientific network, Quebec, United States
4025 Energy Balance Routing to Enhance Network Performance in Wireless Sensor Network
Authors: G. Baraneedaran, Deepak Singh, Kollipara Tejesh
Abstract:
Wireless sensor networks have been an active research area over the years. Due to the limited energy and communication ability of sensor nodes, it is especially important to design a routing protocol for WSNs so that sensing data can be transmitted to the receiver effectively. An energy-balanced routing method based on a forward-aware factor (FAF-EBRM) is proposed in this paper. In FAF-EBRM, the next-hop node is selected according to the awareness of link weight and forward energy density. A spontaneous reconstruction mechanism for the local topology is designed additionally. In our experiments, FAF-EBRM is compared with LEACH and EECU; the experimental results show that FAF-EBRM outperforms LEACH and EECU, balances the energy consumption, prolongs the function lifetime, and guarantees high QoS of the WSN.
Keywords: energy balance, forward-aware factor (FAF), forward energy density, link weight, network performance
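The sketch below illustrates the forward-aware selection idea in a hedged form: each candidate neighbour is scored by a factor combining its link weight and forward energy density, and the highest-scoring neighbour becomes the next hop. The exact formulas in FAF-EBRM differ; the numbers and the simple product used here are illustrative assumptions.

```python
# Simplified forward-aware next-hop selection.
neighbours = {
    # node: (link_weight, forward_energy_density)
    "n1": (0.8, 4.2),
    "n2": (0.6, 6.1),
    "n3": (0.9, 2.5),
}

def faf(link_weight, forward_energy_density):
    # Simplified combination of the two awareness terms.
    return link_weight * forward_energy_density

next_hop = max(neighbours, key=lambda n: faf(*neighbours[n]))
print("selected next hop:", next_hop)
```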
4024 A Taxonomy of Routing Protocols in Wireless Sensor Networks
Authors: A. Kardi, R. Zagrouba, M. Alqahtani
Abstract:
The Internet of Everything (IoE) presents a very attractive and motivating field of research today. It is basically based on Wireless Sensor Networks (WSNs), in which routing is the major analysis topic; in fact, routing directly affects the effectiveness and the lifetime of the network. This paper, developed from recent works and based on extensive research, proposes a taxonomy of routing protocols in WSNs. Our main contribution is a classification model based on nine classes, namely application type, delivery mode, initiator of communication, network architecture, path establishment (route discovery), network topology (structure), protocol operation, next-hop selection, and latency-aware and energy-efficient routing protocols. In order to provide a complete classification pattern to serve as a reference for network designers, each class is subdivided into possible subclasses, presented, and discussed using different parameters such as purposes and characteristics.
Keywords: routing, sensor, survey, wireless sensor networks, WSNs
4023 Cyber Security Enhancement via Software Defined Pseudo-Random Private IP Address Hopping
Authors: Andre Slonopas, Zona Kostic, Warren Thompson
Abstract:
Obfuscation is one of the most useful tools to prevent network compromise. Previous research focused on the obfuscation of the network communications between external-facing edge devices. This work proposes the use of two edge devices, external and internal facing, which communicate via private IPv4 addresses in a software-defined pseudo-random IP hopping scheme. This methodology does not require additional IP addresses and/or resources to implement. Statistical analyses demonstrate that the hopping surface must be at least 1e3 IP addresses in size, with a broad standard deviation, to minimize the possibility of coincidence of monitored and communication IPs. Breaking the hopping algorithm requires the collection of at least 1e6 samples, which for large hopping surfaces will take years to collect. The probability of dropped packets is controlled via memory buffers and the frequency of hops and can be reduced to levels acceptable for video streaming. This methodology provides an impenetrable layer of security ideal for information and supervisory control and data acquisition systems.
Keywords: moving target defense, cybersecurity, network security, hopping randomization, software defined network, network security theory
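A conceptual sketch of the hopping idea follows (no real networking; the seed, pool size, and hop count are assumptions): two edge devices share a secret seed and derive the same pseudo-random sequence of private IPv4 addresses, so the address in use hops in lockstep over a hopping surface of roughly 1e3 addresses.

```python
# Deterministic pseudo-random IP hopping from a shared seed.
import ipaddress
import random

SEED = "shared-secret"                        # distributed out of band
POOL = list(ipaddress.ip_network("10.10.0.0/22").hosts())   # ~1e3 addresses

def hop_sequence(seed, pool, n_hops):
    rng = random.Random(seed)                 # same seed -> same sequence
    return [rng.choice(pool) for _ in range(n_hops)]

external = hop_sequence(SEED, POOL, 5)
internal = hop_sequence(SEED, POOL, 5)
assert external == internal                   # both ends stay synchronized
print([str(ip) for ip in external])
```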
4022 Evaluation of Security and Performance of Master Node Protocol in the Bitcoin Peer-To-Peer Network
Authors: Muntadher Sallal, Gareth Owenson, Mo Adda, Safa Shubbar
Abstract:
Bitcoin is a digital currency based on a peer-to-peer network to propagate and verify transactions. Bitcoin is gaining wider adoption than any previous cryptocurrency. However, the mechanism of peers randomly choosing logical neighbors without any knowledge of the underlying physical topology can cause a delay overhead in information propagation, which makes the system vulnerable to double-spend attacks. Aiming at alleviating the propagation delay problem, this paper introduces proximity-aware extensions to the current Bitcoin protocol, named Master Node Based Clustering (MNBC). The ultimate purpose of the proposed protocol, which is based on how clusters are formulated and how nodes define their membership, is to improve the information propagation delay in the Bitcoin network. In the MNBC protocol, physical internet connectivity increases and the number of hops between nodes decreases by assigning nodes to be responsible for maintaining clusters based on physical internet proximity. We show, through simulations, that the proposed protocol defines better clustering structures that optimize the performance of transaction propagation over the Bitcoin protocol. Partition attacks on the MNBC protocol, as well as on the Bitcoin network, were also evaluated in this paper. The evaluation results prove that even though the Bitcoin network is more resistant to the partitioning attack than the MNBC protocol, more resources need to be spent to split the network in the MNBC protocol, especially with a higher number of nodes.
Keywords: Bitcoin network, propagation delay, clustering, scalability
4021 Stock Price Prediction Using Time Series Algorithms
Authors: Sumit Sen, Sohan Khedekar, Umang Shinde, Shivam Bhargava
Abstract:
This study was undertaken to investigate whether deep learning models are able to predict future stock prices by training the model on historical stock price data. Since this work requires time series analysis, various models that perform time series analysis are available today, such as the Recurrent Neural Network (LSTM), ARIMA, and Facebook Prophet. Applying these models, the movement of the stock price of individual stocks is predicted, and a forecast of the future stock price is also provided. The final product will be a stock price prediction web application developed to provide the user with easy analysis of stocks, along with the predicted stock price for the next seven days.
Keywords: autoregressive integrated moving average, deep learning, long short-term memory, time series
4020 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling
Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé
Abstract:
Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of the large blocks starts with ingot casting, followed by open-die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging in the sense that the temperature inside the material before loading is not uniform, yet a constant temperature is commonly used in the simulation because it is assumed that the temperature is homogenized after some holding time. Therefore, to be close to the experiment, the real distribution of the temperature through the specimen is needed before the mechanical loading. We thus present a robust algorithm that allows the calculation of the temperature gradient within the specimen, representing a real temperature distribution within the specimen before deformation. Indeed, most numerical simulations consider a uniform temperature, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and on the type of deformation, such as upsetting or cogging. Indeed, upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena, such as recrystallization, can be observed and require in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product. Thus, the identification of the conditions for the initiation of dynamic recrystallization is still relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so the development of a technique to predict the initiation of this recrystallization remains challenging. In this perspective, we propose, in addition to the algorithm that provides the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines for comparison with a simulation where an isothermal temperature is imposed. The Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared to models from the literature.
Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation
4019 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models
Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg
Abstract:
Storm surge is an abnormal rise in water level caused by a storm. Accurate prediction of a storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques to combine several individual forecasts to produce an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature; for instance, Model Output Statistics (MOS) and running mean-bias removal are widely used techniques in the storm surge prediction domain. However, these methods have some drawbacks. For instance, MOS is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced methods. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting. This application creates a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether the ensemble models perform better than any single forecast. To do so, we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models; we use correlation and standard deviation as weights in combining different forecast models. Third, we use these ensembles and compare them with several existing models in the literature to forecast the storm surge level. We then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark. Predicting the peak level of the surge during a storm, as well as the precise time at which this peak level takes place, is crucial; thus, we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its time. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them as a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally have better performance compared to the simple average ensemble technique.
Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction
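The small sketch below illustrates the weighting idea described above with synthetic numbers (not NYHOPS data): individual surge forecasts are combined using correlation-based weights, and the ensemble RMSE is compared against the simple-average benchmark.

```python
# Correlation-weighted forecast ensemble vs. a plain average.
import numpy as np

obs = np.array([0.5, 0.8, 1.6, 2.4, 1.9, 1.0])            # observed surge (m)
forecasts = np.array([                                      # three models
    [0.4, 0.9, 1.4, 2.6, 2.0, 1.1],
    [0.7, 0.7, 1.8, 2.0, 1.5, 0.8],
    [0.5, 1.0, 1.5, 2.3, 2.1, 1.2],
])

def rmse(y):
    return float(np.sqrt(np.mean((y - obs) ** 2)))

corr = np.array([np.corrcoef(f, obs)[0, 1] for f in forecasts])
weights = corr / corr.sum()                                 # correlation weights
weighted = weights @ forecasts
simple = forecasts.mean(axis=0)                             # benchmark ensemble

print("simple average RMSE  :", round(rmse(simple), 3))
print("weighted ensemble RMSE:", round(rmse(weighted), 3))
```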
4018 Proposal of Data Collection from Probes
Authors: M. Kebisek, L. Spendla, M. Kopcek, T. Skulavik
Abstract:
In our paper, we describe the security capabilities of data collection. Data are collected with probes located in the near and distant surroundings of the company. Considering the numerous obstacles, e.g., forests, hills, and urban areas, the data collection is realized in several ways. The collection of data uses connections via wireless communication, the LAN network, and the GSM network, and in certain areas data are collected by using vehicles. In order to ensure the connection to the server, most of the probes have the ability to communicate in several ways. Collected data are archived and subsequently used in supervisory applications. To ensure the collection of the required data, it is necessary to propose algorithms that will allow the probes to select a suitable communication channel.
Keywords: communication, computer network, data collection, probe
4017 Terrain Classification for Ground Robots Based on Acoustic Features
Authors: Bernd Kiefer, Abraham Gebru Tesfay, Dietrich Klakow
Abstract:
The motivation of our work is to detect different terrain types traversed by a robot based on acoustic data from the robot-terrain interaction. Different acoustic features and classifiers were investigated, such as Mel-frequency cepstral coefficients and Gammatone frequency cepstral coefficients for the feature extraction, and the Gaussian mixture model and a feed-forward neural network for the classification. We analyze the system’s performance by comparing our proposed techniques with features surveyed from related works. We achieve precision and recall values between 87% and 100% per class, and an average accuracy of 95.2%. We also study the effect of varying the audio chunk size in the application phase of the models and find only a mild impact on performance.
Keywords: acoustic features, autonomous robots, feature extraction, terrain classification
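A rough sketch of one variant of the acoustic pipeline follows, using synthetic audio and assumed parameters: MFCC features from an audio chunk are scored against one Gaussian mixture model per terrain class, and the class with the highest log-likelihood is chosen. Terrain names, sampling rate, and model sizes are illustrative.

```python
# MFCC features + one GMM per terrain class, maximum-likelihood decision.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

sr = 16000
rng = np.random.default_rng(0)

def mfcc_frames(audio):
    # 13 MFCCs per frame, transposed to (frames, coefficients)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).T

# Stand-in training audio per terrain; use recorded robot-terrain audio in practice.
train = {"gravel": rng.normal(size=sr), "grass": rng.normal(scale=0.3, size=sr)}
models = {}
for terrain, audio in train.items():
    gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
    models[terrain] = gmm.fit(mfcc_frames(audio.astype(np.float32)))

chunk = rng.normal(size=sr // 2).astype(np.float32)        # 0.5 s audio chunk
scores = {t: m.score(mfcc_frames(chunk)) for t, m in models.items()}
print("predicted terrain:", max(scores, key=scores.get))
```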
4016 A Novel Solution Methodology for Transit Route Network Design Problem
Authors: Ghada Moussa, Mamoud Owais
Abstract:
The Transit Route Network Design Problem (TrNDP) is the most important component of transit planning, on which the overall cost of the public transportation system highly depends. The main purpose of this study is to develop a novel solution methodology for the TrNDP that goes beyond previous traditional sophisticated approaches. The novelty of the solution methodology adopted in this paper lies in the deterministic operators used to construct bus routes. The deterministic manner of the TrNDP solution relies on using linear and integer mathematical formulations that can be solved exactly with their standard solvers. The solution methodology has been tested on Mandl’s benchmark network problem. The test results showed that the methodology developed in this research is able to improve the given network solution in terms of number of constructed routes, direct transit service coverage, transfer directness, and solution reliability. Although the set of routes resulting from the methodology would stand alone as an efficient final solution for the TrNDP, it could also be used as an initial solution for meta-heuristic procedures to approach the global optimum. Based on the presented methodology, a more robust network optimization tool can be produced for public transportation planning purposes.
Keywords: integer programming, transit route design, transportation, urban planning
4015 A Robust Visual Simultaneous Localization and Mapping for Indoor Dynamic Environment
Authors: Xiang Zhang, Daohong Yang, Ziyuan Wu, Lei Li, Wanting Zhou
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to collect information in unknown environments in order to realize simultaneous localization and environment map construction, and it has a wide range of applications in autonomous driving, virtual reality, and other related fields. At present, related research on VSLAM can maintain high accuracy in static environments. In dynamic environments, however, the movement of objects in the scene reduces the stability of the VSLAM system, resulting in inaccurate localization and mapping, or even failure. In this paper, a robust VSLAM method is proposed to deal effectively with this problem in dynamic environments. We propose a dynamic region removal scheme based on semantic segmentation neural networks and geometric constraints. First, a semantic segmentation neural network is used to extract the prior active motion region, the prior static region, and the prior passive motion region in the environment. Then, a lightweight frame-tracking module initializes the transform pose between the previous frame and the current frame on the prior static region. A motion consistency detection module based on multi-view geometry and scene flow is used to divide the environment into a static region and a dynamic region; thus, the dynamic object region is successfully eliminated. Finally, only the static region is used by the tracking thread. Our research is based on the ORBSLAM3 system, which is one of the most effective VSLAM systems available. We evaluated our method on the TUM RGB-D benchmark, and the results demonstrate that the proposed VSLAM method improves the accuracy of the original ORBSLAM3 by 70% to 98.5% in highly dynamic environments.
Keywords: dynamic scene, dynamic visual SLAM, semantic segmentation, scene flow, VSLAM
4014 Genetic Algorithm Based Node Fault Detection and Recovery in Distributed Sensor Networks
Authors: N. Nalini, Lokesh B. Bhajantri
Abstract:
In Distributed Sensor Networks, the sensor nodes are prone to failure due to energy depletion and other reasons. In this regard, fault tolerance of the network is essential in a distributed sensor environment. Energy efficiency, network (or topology) control, and fault tolerance are the most important issues in the development of next-generation Distributed Sensor Networks (DSNs). This paper proposes node fault detection and recovery using a Genetic Algorithm (GA) in a DSN when some of the sensor nodes are faulty. The main objective of this work is to provide a fault tolerance mechanism that is energy efficient and responsive to the network, using a GA to detect the faulty nodes based on the energy depletion of nodes and link failures between nodes. The proposed fault detection model is used to detect faults at the node level and network-level faults (link failure and packet error). Finally, the performance parameters of the proposed scheme are evaluated.
Keywords: distributed sensor networks, genetic algorithm, fault detection and recovery, information technology
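A toy GA sketch of the detection idea is shown below; the fitness function, thresholds, and all numbers are assumptions for illustration, not the paper's formulation. A chromosome is a binary vector marking each node as faulty or healthy, and fitness rewards agreement with low residual energy and observed link failures.

```python
# Toy genetic algorithm marking sensor nodes as faulty (1) or healthy (0).
import random

random.seed(1)
residual_energy = [0.9, 0.1, 0.8, 0.05, 0.7, 0.6]   # per node, 0..1
link_failed     = [0,   1,   0,   1,    0,   0  ]   # 1 = link loss observed
N = len(residual_energy)

def fitness(chrom):
    score = 0
    for bit, e, lf in zip(chrom, residual_energy, link_failed):
        evidence = (e < 0.2) or bool(lf)             # evidence node is faulty
        score += 1 if bit == int(evidence) else 0
    return score

def mutate(chrom, p=0.1):
    return [b ^ 1 if random.random() < p else b for b in chrom]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]
for _ in range(50):                                  # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    for _ in range(10):                              # single-point crossover
        a, b = random.sample(parents, 2)
        cut = random.randint(1, N - 1)
        children.append(mutate(a[:cut] + b[cut:]))
    pop = parents + children

best = max(pop, key=fitness)
print("detected faulty nodes:", [i for i, b in enumerate(best) if b])
```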
4013 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost
Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku
Abstract:
Vermicompost is the product of a decomposition process using various species of worms to create a mixture of decomposing vegetable or food waste, bedding materials, and vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals using vermicompost, but its application for the retention of organic compounds is still scarce. This research demonstrates the effectiveness of earthworm waste (vermicompost) for the remediation of crude oil-contaminated soils. The remediation methods adopted in this study were two soil washing methods, namely the batch and column processes, which represent laboratory and in-situ remediation, respectively. Characterization of the vermicompost and the crude oil-contaminated soil was performed before and after the soil washing using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD), and atomic absorption spectrometry (AAS). The optimization of the washing parameters, using response surface methodology (RSM) based on a Box-Behnken design, was performed on the responses from the laboratory experimental results. This study also investigated the application of machine learning models, namely the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS); ANN and ANFIS were evaluated using the coefficient of determination (R²) and the mean square error (MSE). The removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for the batch process remediation. Optimization of the experimental factors, carried out using numerical optimization techniques applying the desirability function method of RSM, produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 grams, an adsorbate concentration of 69.11 g/ml, a contact time of 25.96 min, and a pH value of 7.71. The removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for the column process remediation. The coefficient of determination (R²) for the ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing the agreement between experimental and predicted results. For the batch and column processes, respectively, the coefficient of determination (R²) for RSM was 0.9712 and 0.9614, which also demonstrates agreement between experimental and predicted findings. For the batch and column processes, the ANFIS coefficient of determination was 0.7115 and 0.9978, respectively. It can be concluded that machine learning models can predict the removal of crude oil from polluted soil using vermicompost. Therefore, it is recommended to use machine learning models to predict the removal of crude oil from contaminated soil using vermicompost.
Keywords: ANFIS, ANN, crude oil, contaminated soil, remediation, vermicompost
4012 Advancing Power Network Maintenance: The Development and Implementation of a Robotic Cable Splicing Machine
Authors: Ali Asmari, Alex Symington, Htaik Than, Austin Caradonna, John Senft
Abstract:
This paper presents the collaborative effort between ULC Technologies and Con Edison in developing a groundbreaking robotic cable splicing machine. The focus is on the machine's design, which integrates advanced robotics and automation to enhance safety and efficiency in power network maintenance. The paper details the operational steps of the machine, including cable grounding, cutting, and removal of different insulation layers, and discusses its novel technological approach. The significant benefits over traditional methods, such as improved worker safety and reduced outage times, are highlighted based on the field data collected during the validation phase of the project. The paper also explores the future potential and scalability of this technology, emphasizing its role in transforming the landscape of power network maintenance.
Keywords: cable splicing machine, power network maintenance, electric distribution, electric transmission, medium voltage cable
4011 Simulation of Human Heart Activation Based on Diffusion Tensor Imaging
Authors: Ihab Elaff
Abstract:
Simulating the heart’s electrical stimulation is essential in modeling and evaluating the electrophysiological behavior of the heart. To achieve that, there are two structures of concern: the ventricular myocardium and the ventricular conduction network. The ventricular myocardium has been modeled as an anisotropic material from a Diffusion Tensor Imaging (DTI) scan, and the conduction network has been extracted from DTI as a case-based structure based on the biological properties of the heart tissues and the working methodology of the Magnetic Resonance Imaging (MRI) scanner. The results of the produced activation were very similar to real measurements of the reference model presented in the literature.
Keywords: diffusion tensor, DTI, heart, conduction network, excitation propagation
4010 Voltage Sag Characteristics during Symmetrical and Asymmetrical Faults
Authors: Ioannis Binas, Marios Moschakis
Abstract:
Electrical faults in transmission and distribution networks can have a great impact on the electrical equipment used. Fault effects depend on the characteristics of the fault as well as on the network itself. It is important to anticipate the network’s behavior during faults when planning a new equipment installation, as well as when troubleshooting. Moreover, working backwards, we could estimate the characteristics of the fault from the perceived effects. Different transformer winding connections dominantly used in the Greek power transmission and distribution networks, and the effects of single-phase-to-neutral, phase-to-phase, two-phase-to-neutral, and three-phase faults at different locations of the network, were simulated in order to present voltage sag characteristics. The study was performed on a generic network with three step-down transformers on two voltage-level buses (one 150 kV/20 kV transformer and two 20 kV/0.4 kV transformers). We found that during faults there are significant changes both in voltage magnitudes and in phase angles. The simulations and short-circuit analysis were performed using the PSCAD simulation package. This paper presents voltage characteristics calculated for the simulated network with different approaches to the transformer winding connections during symmetrical and asymmetrical faults at various locations.
Keywords: phase angle shift, power quality, transformer winding connections, voltage sag propagation
4009 New Approach for Minimizing Wavelength Fragmentation in Wavelength-Routed WDM Networks
Authors: Sami Baraketi, Jean Marie Garcia, Olivier Brun
Abstract:
Wavelength Division Multiplexing (WDM) is the dominant transport technology used in numerous high-capacity backbone networks based on optical infrastructures. Given the importance of the costs (CapEx and OpEx) associated with these networks, resource management is becoming increasingly important, especially how the optical circuits, called “lightpaths”, are routed throughout the network. This requires the use of efficient algorithms that provide routing strategies with the lowest cost. We focus on the lightpath routing and wavelength assignment problem, known as the RWA problem, while optimizing wavelength fragmentation over the network. Wavelength fragmentation poses a serious challenge for network operators since it leads to misuse of the wavelength spectrum and, in turn, to the refusal of new lightpath requests. In this paper, we first establish a new Integer Linear Program (ILP) for the problem based on a node-link formulation. This formulation relies on a multilayer approach in which the original network is decomposed into several network layers, each corresponding to a wavelength. Furthermore, we propose an efficient heuristic for the problem based on a greedy algorithm followed by a post-treatment procedure. The obtained results show that the optimal solution is often reached. We also compare our results with those of other RWA heuristic methods.
Keywords: WDM, lightpath, RWA, wavelength fragmentation, optimization, linear programming, heuristic
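As a hedged sketch of what a greedy RWA heuristic can look like (this is not the paper's algorithm, and the topology, wavelength count, and requests are assumptions): each lightpath request is routed on a shortest path, then assigned the lowest-indexed wavelength free on every link of that path (first-fit), which tends to limit wavelength fragmentation.

```python
# Greedy RWA: shortest-path routing plus first-fit wavelength assignment.
import networkx as nx

G = nx.Graph([("A", "B"), ("B", "C"), ("A", "D"), ("D", "C")])
n_wavelengths = 2
used = {tuple(sorted(e)): set() for e in G.edges}      # wavelengths per link

def route_and_assign(src, dst):
    path = nx.shortest_path(G, src, dst)
    links = [tuple(sorted(pair)) for pair in zip(path, path[1:])]
    for w in range(n_wavelengths):                      # first-fit search
        if all(w not in used[l] for l in links):
            for l in links:
                used[l].add(w)                          # reserve on every hop
            return path, w
    return path, None                                   # request blocked

for src, dst in [("A", "C"), ("A", "C"), ("B", "D")]:
    print((src, dst), "->", route_and_assign(src, dst))
```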
4008 Social Economical Aspect of the City of Kigali Road Network Functionality
Authors: David Nkurunziza, Rahman Tafahomi
Abstract:
The population of the city of Kigali is growing annually. In 1960 the population was six thousand; in 1990 it reached two hundred thousand; and it is expected to be 4 to 5 million in the coming twenty years. With the increase in residents living in the city of Kigali, there is also a need for an increase in social and economic infrastructures connected by road networks to serve the residents effectively. A road network is a route that connects people to their needs and has to help people reach social and economic facilities easily. This research analyzed the social and economic aspects of three selected road networks passing through all three districts of the city of Kigali, whose center is the city center roundabout, through evaluation of the proximity of the social and economic facilities to the road network. These road networks are the city center to nyabugogo to karuruma, the city center to kanogo to Rwanda to kicukiro center to Nyanza taxi park, and the city center to Yamaha to kinamba to gakinjiro to kagugu health center road network. This research used a methodology of identifying and quantifying the social and economic facilities within a limited distance of 300 meters along each side of the road networks. The social facilities evaluated are health facilities, education facilities, institutional facilities, and worship facilities, while the economic facilities assessed are commercial zones, industries, banks, and hotels. These facilities were evaluated and graded based on their distance from the road and their value. The total scores of each road network per kilometer were calculated, and finally the road networks were ranked based on their percentage score per kilometer. This research was based on field surveys and interviews, collecting data with forms and questionnaires. The analysis of the collected data showed that the road network from the city center to Yamaha to kinamba to gakinjiro to the kagugu health center is the best performer, the second is the road network from the city center to nyabugogo to karuruma, while the third is the road network from the city center to kanogo to rwandex to kicukiro center to nyaza taxi park.
Keywords: social economical aspect, road network functionality, urban road network, economic and social facilities