Search results for: deep Q networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4416

3546 Hysteresis Modeling in Iron-Dominated Magnets Based on a Deep Neural Network Approach

Authors: Maria Amodeo, Pasquale Arpaia, Marco Buzio, Vincenzo Di Capua, Francesco Donnarumma

Abstract:

Different deep neural network architectures have been compared and tested to predict magnetic hysteresis in the context of pulsed electromagnets for experimental physics applications. Modelling quasi-static or dynamic major and especially minor hysteresis loops is one of the most challenging topics for computational magnetism. Recent attempts at mathematical prediction in this context using Preisach models could not attain better than percent-level accuracy. Hence, this work explores neural network approaches and shows that the architecture that best fits the measured magnetic field behaviour, including the effects of hysteresis and eddy currents, is the nonlinear autoregressive exogenous neural network (NARX) model. This architecture aims to achieve a relative RMSE of the order of a few 100 ppm for complex magnetic field cycling, including arbitrary sequences of pseudo-random high field and low field cycles. The NARX-based architecture is compared with the state-of-the-art, showing better performance than the classical operator-based and differential models, and is tested on a reference quadrupole magnetic lens used for CERN particle beams, chosen as a case study. The training and test datasets are a representative example of real-world magnet operation; this makes the good result obtained very promising for future applications in this context.
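For illustration, a NARX-style predictor can be sketched by feeding lagged excitation currents and lagged field values into a small feed-forward regressor, as below; the data, lag orders, and layer sizes are hypothetical stand-ins, not the architecture or measurements used by the authors.

```python
# Minimal NARX-style sketch: predict B(t) from lagged currents I and lagged fields B.
# Hypothetical data and lag orders; not the authors' implementation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_narx_features(i_seq, b_seq, n_lags=4):
    X, y = [], []
    for t in range(n_lags, len(i_seq)):
        # exogenous input lags (current) plus autoregressive lags (field)
        X.append(np.concatenate([i_seq[t - n_lags:t + 1], b_seq[t - n_lags:t]]))
        y.append(b_seq[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
current = np.sin(np.linspace(0, 20 * np.pi, 5000))                       # toy excitation cycles
field = np.tanh(current) + 0.01 * rng.standard_normal(current.size)      # toy field response

X, y = make_narx_features(current, field)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0).fit(X, y)
print("relative RMSE:", np.sqrt(np.mean((model.predict(X) - y) ** 2)) / np.ptp(y))
```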

Keywords: deep neural network, magnetic modelling, measurement and empirical software engineering, NARX

Procedia PDF Downloads 122
3545 The Making of a Community: Perception versus Reality of Neighborhood Resources

Authors: Kirstie Smith

Abstract:

This paper elucidates the value of neighborhood perception as it contributes to the advancement of well-being for individuals and families within a neighborhood. Through in-depth interviews with city residents, this paper examines how key stakeholders (residents) evaluate their neighborhood, perceive its resources, and identify, access, and utilize local assets existing in the community. Additionally, the research objective included conducting a community inventory that quantified the community assets and resources of lower-income neighborhoods of a medium-sized industrial city. The analysis of the community’s assets was compared with the interview results to allow for a better understanding of the community’s condition. Community mapping revealed that the key informants’ reflections on assets were somewhat validated: in each neighborhood, more assets were mapped than were reported in the interviews. Another key finding of this study was the identification of development partners and social networks that offer the potential to facilitate locally driven community development. Overall, the participants provided invaluable local knowledge of the perception of neighborhood assets, the well-being of residents, the condition of the community, and suggestions for responding to the challenges of the entire community in order to mobilize the present assets and networks.

Keywords: community mapping, family, resource allocation, social networks

Procedia PDF Downloads 332
3544 ACBM: Attention-Based CNN and Bi-LSTM Model for Continuous Identity Authentication

Authors: Rui Mao, Heming Ji, Xiaoyu Wang

Abstract:

Keystroke dynamics are widely used in identity recognition. They have the advantage that an individual's typing rhythm is difficult to imitate, and they support continuous authentication through the keyboard without extra devices. Existing keystroke dynamics authentication methods based on machine learning struggle with relatively complex scenarios involving massive data, with drawbacks in both feature extraction and model optimization. To overcome these weaknesses, an authentication model of keystroke dynamics based on deep learning is proposed. The model uses feature vectors formed by keystroke content and keystroke timing, and it ensures efficient continuous authentication by combining attention mechanisms with a CNN and a Bi-LSTM. The model has been tested on the Buffalo open dataset, and the results show an FRR of 3.09%, an FAR of 3.03%, and an EER of 4.23%, which indicates that the model is efficient and accurate for continuous authentication.
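As a rough illustration of this kind of architecture, the sketch below stacks a 1D convolution, a bidirectional LSTM, and a simple additive attention layer over keystroke feature sequences in Keras; the input shape, layer sizes, and binary genuine/impostor output are assumptions rather than the ACBM configuration.

```python
# Sketch of an attention-based CNN + Bi-LSTM classifier for keystroke sequences.
# Input shape and layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

seq_len, n_features = 50, 4          # e.g., key code, hold time, down-down time, up-down time
inputs = layers.Input(shape=(seq_len, n_features))
x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)

# Simple additive attention: score each time step, then take the weighted sum of LSTM outputs.
scores = layers.Dense(1, activation="tanh")(x)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])

outputs = layers.Dense(1, activation="sigmoid")(context)  # genuine user vs. impostor
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```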

Keywords: keystroke dynamics, identity authentication, deep learning, CNN, LSTM

Procedia PDF Downloads 140
3543 The Face Sync-Smart Attendance

Authors: Bekkem Chakradhar Reddy, Y. Soni Priya, Mathivanan G., L. K. Joshila Grace, N. Srinivasan, Asha P.

Abstract:

Currently, there are many problems related to marking attendance in schools, offices, and other places, and organizations tasked with collecting daily attendance data have numerous concerns. There are different ways to mark attendance; the most commonly used method is collecting the data manually by calling each student, which is a slow and error-prone process. Many new technologies now help to mark attendance automatically, reducing the workload and recording the data. We propose to implement attendance marking using these technologies. We have implemented a system based on face identification and face analysis. The project is developed by gathering face images and analyzing the data, using deep learning algorithms to recognize faces effectively. The attendance record is then forwarded to the host by e-mail. The project was implemented in Python, using the CV2, face_recognition, and smtplib libraries.
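Since the abstract names the libraries, a minimal sketch of such a pipeline using cv2, face_recognition, and smtplib might look like the following; the file paths, e-mail addresses, and SMTP server are hypothetical placeholders, and the real system's training and record-keeping steps are omitted.

```python
# Minimal face-recognition attendance sketch using cv2, face_recognition, and smtplib.
# File paths, e-mail addresses, and the SMTP server below are hypothetical placeholders.
import cv2
import face_recognition
import smtplib
from email.message import EmailMessage

known = face_recognition.load_image_file("students/alice.jpg")
known_encoding = face_recognition.face_encodings(known)[0]

# Load a classroom photo and convert from OpenCV's BGR order to RGB.
frame = cv2.cvtColor(cv2.imread("classroom_photo.jpg"), cv2.COLOR_BGR2RGB)
present = []
for encoding in face_recognition.face_encodings(frame):
    if face_recognition.compare_faces([known_encoding], encoding)[0]:
        present.append("alice")

msg = EmailMessage()
msg["Subject"], msg["From"], msg["To"] = "Attendance", "system@example.com", "host@example.com"
msg.set_content("Present today: " + ", ".join(present))
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()
    # server.login(user, password) would go here if the server requires it
    server.send_message(msg)
```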

Keywords: Python, deep learning, face recognition, CV2, smtplib, dlib

Procedia PDF Downloads 38
3542 Automatic Classification of Lung Diseases from CT Images

Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari

Abstract:

Pneumonia is a lung disease that creates congestion in the chest, and severe congestion in such pneumonic conditions can lead to loss of life. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. The early prediction and classification of such lung diseases help to reduce the mortality rate. In this paper, we propose an automatic Computer-Aided Diagnosis (CAD) system using a deep learning approach. The proposed CAD system takes raw computerized tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are first pre-processed to enhance their quality for further analysis. We then apply a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract automatic features from the pre-processed CT image; this CNN model ensures feature learning with highly effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. Simulation outcomes on a publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
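A minimal sketch of the hybrid idea, with a small Keras CNN as the feature extractor, Min-Max scaling, and an SVM standing in for the second-stage classifier, is shown below; the input shape, layer sizes, classifier choice, and placeholder data are assumptions, not the HDLA configuration.

```python
# Sketch of the hybrid idea: a 2D CNN extracts a 1D feature vector per CT slice,
# the features are Min-Max normalized, and a separate classifier is trained on them.
# Shapes, layer sizes, the SVM choice, and the random data are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

extractor = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),          # 1D feature vector per CT image
])

X = np.random.rand(40, 128, 128, 1).astype("float32")   # placeholder pre-processed CT slices
y = np.random.randint(0, 3, 40)                          # toy labels: viral / bacterial / COVID-19

features = MinMaxScaler().fit_transform(extractor.predict(X, verbose=0))  # Min-Max normalization
clf = SVC().fit(features, y)                                               # second-stage classifier
print("training accuracy:", clf.score(features, y))
```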

Keywords: CT scan, Covid-19, deep learning, image processing, lung disease classification

Procedia PDF Downloads 139
3541 Minimizing Fresh and Wastewater Using Water Pinch Technique in Petrochemical Industries

Authors: Wasif Mughees, Malik Al-Ahmad, Muhammad Naeem

Abstract:

This research involves the design and analysis of pinch-based water/wastewater networks to minimize water utility in the petrochemical and petroleum industries. A study has been carried out on the Tehran Oil Refinery to analyze the feasibility of regeneration, reuse, and recycling within the water network. COD is considered as the single key contaminant. The amount of freshwater was reduced by about 149 m³/h (43.8%) with respect to COD. A re-design (retrofit) of the water allocation in the networks was undertaken. The results were analyzed through the graphical method and a mathematical programming technique, which clearly demonstrated that the amount of required water is determined by the mass transfer of COD.
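The single-contaminant arithmetic behind such an analysis is a mass balance: an operation fed with pure freshwater needs a flow equal to its COD mass load divided by its maximum allowable outlet concentration. The toy calculation below illustrates only this step with hypothetical numbers, not the refinery data or the full pinch targeting procedure.

```python
# Toy single-contaminant water pinch arithmetic: the freshwater demand of each operation
# fed with pure water is mass_load / max_outlet_concentration. All numbers are hypothetical.
operations = [
    # (name, COD mass load [kg/h], max outlet COD [ppm])
    ("desalter", 2.0, 500),
    ("stripper", 5.0, 800),
    ("cooling", 3.0, 1200),
]
total = 0.0
for name, load_kg_h, c_out_ppm in operations:
    # 1 ppm is approximately 1 g of contaminant per m^3 of water,
    # so flow [m^3/h] = load [g/h] / C [g/m^3]
    flow = load_kg_h * 1000.0 / c_out_ppm
    total += flow
    print(f"{name}: {flow:.1f} m^3/h of freshwater")
print(f"total freshwater without reuse: {total:.1f} m^3/h")
```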

Keywords: minimization, water pinch, water management, pollution prevention

Procedia PDF Downloads 435
3540 Automating 2D CAD to 3D Model Generation Process: Wall pop-ups

Authors: Mohit Gupta, Chialing Wei, Thomas Czerniawski

Abstract:

In this paper, we have built a neural network that can detect walls on 2D sheets and subsequently create a 3D model in Revit using Dynamo. The training set includes 3500 labeled images, and the detection algorithm used is YOLO. Typically, engineers and designers spend considerable time and effort converting 2D CAD drawings to 3D models. This paper contributes to automating the task of 3D wall modeling by (1) detecting walls in 2D CAD and generating 3D pop-ups in Revit, and (2) saving designers the modeling time of drafting elements such as walls from 2D CAD into a 3D representation. The YOLO object detection algorithm is used for wall detection and localization. The neural network is trained on 3500 labeled images of size 256x256x3. Dynamo is then interfaced with the output of the neural network to pop up 3D walls in Revit. The research uses modern technological tools such as deep learning and artificial intelligence to automate the process of generating 3D walls without requiring humans to model them manually, thus contributing to saving time, human effort, and money.
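A sketch of the detection half of this pipeline, running a trained YOLO network over a rasterized sheet with OpenCV's DNN module, is given below; the config and weight files, thresholds, and 256x256 input size are assumptions, and the Dynamo/Revit step is omitted.

```python
# Sketch of the wall-detection half of the pipeline: run a trained YOLO network over a
# rasterized 2D CAD sheet and collect wall bounding boxes. The file names and thresholds
# are hypothetical; the boxes would then be passed to Dynamo to pop up 3D walls in Revit.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("walls_yolo.cfg", "walls_yolo.weights")  # hypothetical files
image = cv2.imread("floorplan_sheet.png")
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (256, 256), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

h, w = image.shape[:2]
walls = []
for out in outputs:
    for det in out:                      # det = [cx, cy, bw, bh, objectness, class scores...]
        if det[4] > 0.5:                 # objectness threshold
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            walls.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
print(f"{len(walls)} candidate wall boxes")
```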

Keywords: neural networks, Yolo, 2D to 3D transformation, CAD object detection

Procedia PDF Downloads 132
3539 DeepOmics: Deep Learning for Understanding Genome Functioning and the Underlying Genetic Causes of Disease

Authors: Vishnu Pratap Singh Kirar, Madhuri Saxena

Abstract:

Advancement in sequence data generation technologies is churning out voluminous omics data and posing a massive challenge to annotating the biological functional features. With so much data available, the use of machine learning methods and tools to make novel inferences has become obvious. Machine learning methods have been successfully applied to many disciplines, including computational biology and bioinformatics, and researchers in computational biology are interested in developing novel machine learning frameworks to classify the huge amounts of biological data. In this proposal, we plan to employ novel machine learning approaches to aid the understanding of how apparently innocuous mutations (in intergenic DNA and at synonymous sites) cause diseases. We are also interested in discovering novel functional sites in the genome, mutations in which can affect a phenotype of interest.

Keywords: genome wide association studies (GWAS), next generation sequencing (NGS), deep learning, omics

Procedia PDF Downloads 81
3538 Optimizing the Probabilistic Neural Network Training Algorithm for Multi-Class Identification

Authors: Abdelhadi Lotfi, Abdelkader Benyettou

Abstract:

In this work, a training algorithm for probabilistic neural networks (PNN) is presented. The algorithm addresses one of the major drawbacks of PNN, which is the size of the hidden layer in the network. By using a cross-validation training algorithm, the number of hidden neurons is shrunk to a smaller number consisting of the most representative samples of the training set. This is done without affecting the overall architecture of the network. Performance of the network is compared against performance of standard PNN for different databases from the UCI database repository. Results show an important gain in network size and performance.
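For reference, the core of a probabilistic neural network is a Parzen-window classifier in which every training pattern acts as a hidden neuron; the sketch below shows that baseline (without the paper's cross-validation pruning of the hidden layer), using the Iris dataset and an arbitrary smoothing factor as stand-ins.

```python
# Minimal probabilistic neural network (Parzen-window classifier) sketch.
# Each hidden neuron is one training pattern; the summation layer averages the Gaussian
# kernel responses per class. Dataset and smoothing factor are illustrative choices only.
import numpy as np
from sklearn.datasets import load_iris

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)          # squared distance to every pattern
        k = np.exp(-d2 / (2 * sigma ** 2))                # pattern-layer activations
        scores = [k[y_train == c].mean() for c in classes]  # summation layer
        preds.append(classes[np.argmax(scores)])          # decision layer
    return np.array(preds)

X, y = load_iris(return_X_y=True)
print("training accuracy:", (pnn_predict(X, y, X) == y).mean())
```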

Keywords: classification, probabilistic neural networks, network optimization, pattern recognition

Procedia PDF Downloads 248
3537 The Influence of Strategic Networks and Logistics Integration on Company Performance among Small and Medium Enterprises

Authors: Jeremiah Madzimure

Abstract:

In order to stay competitive in business and improve performance, Small and Medium Enterprises (SMEs) need to make use of business networking and logistics integration. Strategic networking and logistics integration have become critical for business companies because they allow supplier partnering, the exchange of vital information, and access to valuable resources enabling innovation, as well as the sharing of risks and costs, all of which are required for enhancing company performance. The purpose of this study was to examine the influence of strategic networks and logistics integration on company performance in the case of small and medium enterprises in South Africa. A quantitative research design was adopted, and 137 SME owners and managers completed and returned the survey questionnaire. Confirmatory Factor Analysis (CFA) was conducted using the Analysis of Moment Structures (AMOS) software, version 24.0, to assess the psychometric properties of the measurement scales, and path modelling techniques were used to test the three postulated research hypotheses. The results indicate that strategic networks had a positive and significant influence on logistics integration and company performance; likewise, logistics integration had a strong positive and significant influence on company performance. This study provides a useful model for analysing the relationship between strategic networks, logistics integration, and company performance. Moreover, the findings provide useful insights into how SMEs can benefit from business networking and logistics integration so as to improve their performance. The implications of the study are discussed, and finally, limitations and recommendations are indicated.

Keywords: strategic networking, logistics integration, company performance, SMEs

Procedia PDF Downloads 282
3536 Slurry Erosion Behaviour of Cryotreated SS316L Impeller Steel Used for Irrigation Pumps

Authors: Jagtar Singh, Kulwinder Singh

Abstract:

Slurry erosion is a type of erosion wherein material is removed from the target surface due to the impingement of solid particles entrained in a liquid medium. The slurry erosion performance of deep cryogenic treatment on impeller steel SS 316L has been investigated. Slurry collected from an actual irrigation pump was used as the abrasive medium in an erosion test rig. An attempt has been made to study the effect of fluid velocity and impingement angle, at constant concentration (ppm), on the slurry erosion behavior of these cryotreated steels under different experimental conditions. The slurry erosion wear of cryotreated and untreated steels was analysed, and the slurry erosion performance of cryotreated SS 316L impeller steel was found to be superior to that of untreated steel. Metallurgical investigation and hardness, as well as the percentage of carbide in both types of steel, were also examined.

Keywords: deep cryogenic treatment, impeller, irrigation pumps, SS316L, slurry erosion

Procedia PDF Downloads 384
3535 ISMARA: Completely Automated Inference of Gene Regulatory Networks from High-Throughput Data

Authors: Piotr J. Balwierz, Mikhail Pachkov, Phil Arnold, Andreas J. Gruber, Mihaela Zavolan, Erik van Nimwegen

Abstract:

Understanding the key players and interactions in the regulatory networks that control gene expression and chromatin state across different cell types and tissues in metazoans remains one of the central challenges in systems biology. Our laboratory has pioneered a number of methods for automatically inferring core gene regulatory networks directly from high-throughput data by modeling gene expression (RNA-seq) and chromatin state (ChIP-seq) measurements in terms of genome-wide computational predictions of regulatory sites for hundreds of transcription factors and micro-RNAs. These methods have now been completely automated in an integrated webserver called ISMARA that allows researchers to analyze their own data by simply uploading RNA-seq or ChIP-seq data sets and provides results in an integrated web interface as well as in downloadable flat form. For any data set, ISMARA infers the key regulators in the system, their activities across the input samples, the genes and pathways they target, and the core interactions between the regulators. We believe that by empowering experimental researchers to apply cutting-edge computational systems biology tools to their data in a completely automated manner, ISMARA can play an important role in developing our understanding of regulatory networks across metazoans.
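Broadly, motif activity inference of this kind models expression as a linear combination of predicted binding-site counts weighted by unknown motif activities, which are estimated by regularized least squares. The sketch below illustrates that regression step on random placeholder matrices; it is not the ISMARA implementation.

```python
# MARA-style activity inference sketch: expression E (promoters x samples) is modeled as
# E ~ N @ A, where N holds predicted binding-site counts (promoters x motifs) and A holds
# the unknown motif activities (motifs x samples), estimated here by ridge regression.
# All matrices are random placeholders; this is not the ISMARA implementation.
import numpy as np

rng = np.random.default_rng(1)
n_promoters, n_motifs, n_samples = 2000, 50, 6
N = rng.poisson(1.0, size=(n_promoters, n_motifs)).astype(float)    # site-count predictions
A_true = rng.normal(0, 1, size=(n_motifs, n_samples))
E = N @ A_true + rng.normal(0, 0.5, size=(n_promoters, n_samples))  # noisy "expression"

lam = 10.0                                                          # ridge penalty
A_hat = np.linalg.solve(N.T @ N + lam * np.eye(n_motifs), N.T @ E)  # regularized least squares
print("recovered activities correlate with truth:",
      round(np.corrcoef(A_hat.ravel(), A_true.ravel())[0, 1], 3))
```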

Keywords: gene expression analysis, high-throughput sequencing analysis, transcription factor activity, transcription regulation

Procedia PDF Downloads 51
3534 Structure of Consciousness According to Deep Systemic Constellations

Authors: Dmitry Ustinov, Olga Lobareva

Abstract:

The method of Deep Systemic Constellations is based on a phenomenological approach. Using the phenomenon of substitutive perception, it was established that human consciousness has a hierarchical structure, where deeper levels govern more superficial ones (reactive level, energy or ancestral level, spiritual level, magical level, and deeper levels of consciousness). Every human possesses a depth of consciousness to the spiritual level; however, deeper levels of consciousness are not found in every person. It was found that the spiritual level of consciousness is not homogeneous and has its own internal hierarchy of sublevels (the level of formation of spiritual values, the level of the 'inner observer', the level of the 'path', the level of 'God', etc.). The depth of the spiritual level of a person defines the paradigm of all his internal processes and the main motives of the movement through life. At any level of consciousness, disturbances can occur. Disturbances at a deeper level cause disturbances at more superficial levels and are manifested in the daily life of a person in feelings, behavioral patterns, psychosomatics, etc. Without removing the deepest source of a disturbance, it is impossible to completely correct its manifestation in the actual moment. Thus, a destructive pattern of feeling and behavior in the actual moment can exist because of a disturbance, for example, at the spiritual level of a person (although in most cases the source is at the energy level). Psychological work with superficial levels without removing the source of a disturbance cannot fully solve the problem. The method of Deep Systemic Constellations allows one to work effectively with the source of the problem located at any depth. The methodology has confirmed its effectiveness in working with more than a thousand people.

Keywords: constellations, spiritual psychology, structure of consciousness, transpersonal psychology

Procedia PDF Downloads 236
3533 Ripple Effect Analysis of Government Investment for Research and Development by the Artificial Neural Networks

Authors: Hwayeon Song

Abstract:

The long-term purpose of research and development (R&D) programs is to strengthen national competitiveness by developing new knowledge and technologies. Thus, it is important to determine a proper budget for government programs to maintain the vigor of R&D when the total funding is tight due to the national deficit. In this regard, a ripple effect analysis for the budgetary changes in R&D programs is necessary as well as an investigation of the current status. This study proposes a new approach using Artificial Neural Networks (ANN) for both tasks. It particularly focuses on R&D programs related to Construction and Transportation (C&T) technology in Korea. First, key factors in C&T technology are explored to draw impact indicators in three areas: economy, society, and science and technology (S&T). Simultaneously, ANN is employed to evaluate the relationship between data variables. From this process, four major components in R&D including research personnel, expenses, management, and equipment are assessed. Then the ripple effect analysis is performed to see the changes in the hypothetical future by modifying current data. Any research findings can offer an alternative strategy about R&D programs as well as a new analysis tool.
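The ripple-effect step can be pictured as fitting an ANN from the four R&D components to an impact indicator and then perturbing one input to compare predictions; the sketch below uses synthetic data and an arbitrary 10% budget cut purely as an illustration, not the Korean C&T figures.

```python
# Sketch of the ripple-effect idea: fit an ANN mapping R&D inputs (personnel, expenses,
# management, equipment) to an impact indicator, then perturb the expenses input and
# compare predictions. Data and the perturbation size are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))              # personnel, expenses, management, equipment
y = 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + 0.1 * X[:, 3] + rng.normal(0, 0.02, 200)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

current = X.mean(axis=0, keepdims=True)           # "current status" scenario
scenario = current.copy()
scenario[0, 1] *= 0.9                             # hypothetical 10% cut in R&D expenses
print("ripple effect on the impact indicator:",
      float(model.predict(scenario)[0] - model.predict(current)[0]))
```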

Keywords: artificial neural networks, construction and transportation technology, government research and development, ripple effect

Procedia PDF Downloads 231
3532 Kinetic Study on Extracting Lignin from Black Liquor Using Deep Eutectic Solvents

Authors: Fatemeh Saadat Ghareh Bagh, Srimanta Ray, Jerald Lalman

Abstract:

Lignin, the largest inventory of organic carbon with a high caloric energy value, is a major component of woody and non-woody biomass. In pulping mills, a large amount of the lignin is burned for energy. At the same time, the phenolic structure of lignin enables it to be converted to value-added compounds. This study has focused on extracting lignin from black liquor using deep eutectic solvents (DESs). Three choline chloride (ChCl) DESs, paired with lactic acid (LA) (1:11), oxalic acid dihydrate (OX) (1:4), and malic acid (MA) (1:3), were synthesized at 90 °C and atmospheric pressure. The kinetics of lignin recovery from black liquor using DESs was investigated at three moderate temperatures (338, 353, and 368 K) at time intervals from 30 to 210 min. The extracted lignin (acid-soluble lignin plus Klason lignin) was characterized by Fourier transform infrared spectroscopy (FTIR), which included comparing the extracted lignin with a model Kraft lignin. The extracted lignin was characterized spectrophotometrically to determine the acid-soluble lignin (ASL) fraction [TAPPI UM 250], and Klason lignin was determined gravimetrically using TAPPI T 222 om-02. The lignin extraction reaction using DESs was modeled by first-order reaction kinetics, and the activation energy of the process was determined. The lignin recovered with the ChCl:LA DES was 79.7±2.1% at 368 K and a DES:BL ratio of 4:1 (v/v), while the quantity of lignin extracted with the control solvent, [emim][OAc], was 77.5±2.2%. The activation energy measured for the LA-DES system was 22.7 kJ·mol⁻¹, while the activation energies for the OX-DES and MA-DES systems were 7.16 kJ·mol⁻¹ and 8.66 kJ·mol⁻¹, with total lignin recoveries of 75.4±0.9% and 62.4±1.4%, respectively.
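The kinetic treatment amounts to fitting a first-order rate constant at each temperature from ln(1 − x) = −kt and then taking the activation energy from the slope of the Arrhenius plot of ln k versus 1/T. The sketch below walks through that arithmetic with hypothetical conversion values, not the measured data.

```python
# Sketch of the kinetic analysis: fit a first-order rate constant k at each temperature
# from ln(1 - x) = -k t (x = fraction of lignin extracted), then obtain the activation
# energy from the Arrhenius plot ln k vs 1/T. The extraction fractions are illustrative.
import numpy as np

R = 8.314                                                     # J mol^-1 K^-1
t = np.array([30, 60, 90, 120, 150, 180, 210], dtype=float)   # min

# hypothetical extraction fractions at the three temperatures (not the measured data)
x_by_T = {338: np.array([.10, .19, .27, .34, .40, .46, .51]),
          353: np.array([.14, .26, .36, .45, .52, .59, .64]),
          368: np.array([.19, .34, .46, .56, .64, .71, .76])}

ks = {}
for T, x in x_by_T.items():
    slope = np.polyfit(t, np.log(1 - x), 1)[0]                # ln(1 - x) = -k t
    ks[T] = -slope

T_arr = np.array(sorted(ks))
arrhenius_slope = np.polyfit(1 / T_arr, np.log([ks[T] for T in T_arr]), 1)[0]
Ea = -arrhenius_slope * R                                     # slope = -Ea / R
print(f"activation energy ~ {Ea / 1000:.1f} kJ/mol")
```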

Keywords: black liquor, deep eutectic solvents, kinetics, lignin

Procedia PDF Downloads 132
3531 Effect of Different Parameters on the Swelling Behaviour of Thermo-Responsive Elastomers in a Nematogenic Solvent

Authors: Nouria Bouchikhi, Soufiane Bedjaoui, C. Tewfik Bouchaour, Lamia Alachaher Bedjaoui, Ulrich Maschke

Abstract:

The swelling properties and phase diagrams of binary systems composed of liquid crystalline networks and a low molecular mass liquid crystal (LMWLC) have been investigated. The networks were prepared by ultraviolet (UV) irradiation of reactive mixtures including a monomer, a cross-linking agent, and a photo-initiator, using two cross-linking agents: 1,6-hexanedioldiacrylate (HDDA) and the mesogenic acrylic acid 6-(4’-(6-acryloyloxy-hexyloxy) biphenyl-4-yl oxy) hexyl ester (AHBH). The dry networks obtained were characterized by differential scanning calorimetry and immersed in an excess of the LMWLC solvent 4-cyano-4’-pentylbiphenyl (5CB), forming polymer gels. A detailed study by polarized optical microscopy allowed the swelling degree of the gels to be determined and the phase behavior of the solvent inside the polymer matrix to be followed over a wide range of temperatures. It has been found that the gels undergo a sharp decrease in their swelling degree in response to an infinitesimal change of temperature, a finding that adds new and interesting aspects to actuator applications. We have subsequently explored the effect of different parameters on the volume phase transition of these liquid crystalline materials, such as the cross-linking density (CD), the nature of the cross-linking agent, and the photo-initiator concentration.

Keywords: cross-linking density, liquid crystalline elastomers, phase diagrams, swelling

Procedia PDF Downloads 315
3530 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
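The iterative process can be summarized as: score the unlabeled pool for novelty, send the top-ranked batch for manual labeling, retrain, and repeat. The sketch below outlines that loop; score_novelty() is a hypothetical placeholder standing in for the paper's U-shaped scoring network.

```python
# Sketch of the iterative labeling loop: a novelty-scoring model ranks the unlabeled
# images, the top-ranked subset is sent for manual labeling, and the process repeats.
# score_novelty() is a hypothetical stand-in for the U-shaped scoring network.
import numpy as np

def score_novelty(model, images):
    """Hypothetical stand-in: return one 'unseen data' score per image."""
    return np.random.default_rng(0).random(len(images))

def active_labeling(images, label_fn, rounds=3, batch=20, model=None):
    labeled, unlabeled = [], list(range(len(images)))
    for _ in range(rounds):
        scores = score_novelty(model, [images[i] for i in unlabeled])
        ranked = [unlabeled[i] for i in np.argsort(scores)[::-1]]    # most novel first
        batch_ids, unlabeled = ranked[:batch], ranked[batch:]
        labeled += [(i, label_fn(images[i])) for i in batch_ids]     # manual annotation step
        # the detection model would be retrained on `labeled` here before the next round
    return labeled

labels = active_labeling(images=[None] * 100, label_fn=lambda img: 0)
print(len(labels), "images labeled across all rounds")
```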

Keywords: computer vision, deep learning, object detection, semiconductor

Procedia PDF Downloads 125
3529 Short Term Distribution Load Forecasting Using Wavelet Transform and Artificial Neural Networks

Authors: S. Neelima, P. S. Subramanyam

Abstract:

The major tool for distribution planning is load forecasting, which is the anticipation of the load in advance. Artificial neural networks have found wide application in load forecasting as an efficient strategy for planning and management. In this paper, the application of neural networks to the design of short term load forecasting (STLF) systems is explored. Our work presents a pragmatic STLF methodology using a proposed two-stage model combining the wavelet transform (WT) and an artificial neural network (ANN). In the first stage, the input data are decomposed by the wavelet transform; in the second stage, the decomposed data, together with additional inputs, are used to train a separate neural network to forecast the load, which is obtained by reconstruction of the decomposed data. The hybrid model has been trained and validated using load data from the Telangana State Electricity Board.
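A compact version of the two-stage idea, using PyWavelets for the decomposition and a small scikit-learn network for the forecasting stage, is sketched below; the synthetic load series, db4 wavelet, and 24-hour lag window are assumptions rather than the Telangana setup.

```python
# Two-stage STLF sketch: stage one decomposes the load series with a discrete wavelet
# transform; stage two trains an ANN on lagged decomposed data to predict the next load
# value. The synthetic load series, wavelet choice, and lag length are assumptions.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

hours = np.arange(24 * 120)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + np.random.default_rng(0).normal(0, 2, hours.size)

coeffs = pywt.wavedec(load, "db4", level=3)                                  # stage 1: decomposition
approx = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")[:load.size]

lags = 24
X = np.array([approx[t - lags:t] for t in range(lags, load.size)])           # lagged approximation
y = load[lags:]
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)  # stage 2
print("in-sample MAPE: %.2f%%" % (100 * np.mean(np.abs(model.predict(X) - y) / y)))
```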

Keywords: electrical distribution systems, wavelet transform (WT), short term load forecasting (STLF), artificial neural network (ANN)

Procedia PDF Downloads 420
3528 Improved Dynamic Bayesian Networks Applied to Arabic On Line Characters Recognition

Authors: Redouane Tlemsani, Abdelkader Benyettou

Abstract:

This work concerns online Arabic character recognition, and the principal motivation is to study Arabic manuscripts with online technology. The system is Markovian and can be viewed as a Dynamic Bayesian Network (DBN). One of the major interests of these systems lies in training the complete model (topology and parameters) from training data. Our approach is based on the dynamic Bayesian network formalism; DBN theory is a generalization of Bayesian networks to dynamic processes. Among our objectives is finding better parameters, which represent the links (dependences) between the dynamic network variables. In pattern recognition applications, the structure is usually fixed, which obliges us to adopt some strong assumptions (for example, independence between some variables). Our application concerns the online recognition of isolated Arabic characters using our laboratory database, NOUN. A neural tester is proposed for external optimization of the DBN. The DBN score-based and mixed models achieve 70.24% and 62.50%, respectively, which suggests room for further development; other approaches taking time into account were considered and implemented until a significant recognition rate of 94.79% was obtained.

Keywords: Arabic on line character recognition, dynamic Bayesian network, pattern recognition, computer vision

Procedia PDF Downloads 416
3527 Intelligent Computing with Bayesian Regularization Artificial Neural Networks for a Nonlinear System of COVID-19 Epidemic Model for Future Generation Disease Control

Authors: Tahir Nawaz Cheema, Dumitru Baleanu, Ali Raza

Abstract:

In this research work, we design an intelligent computing approach using Bayesian Regularization artificial neural networks (BRANNs) to solve a mathematical model of infectious disease (COVID-19). The dynamical transmission is due to the interaction of people, and its mathematical representation is based on the system's nonlinear differential equations. The dataset for the COVID-19 model is generated using the explicit Runge-Kutta method for different countries of the world, such as India, Pakistan, Italy, and many more. The generated dataset is used for the training, testing, and validation processes of Bayesian Regularization backpropagation to capture the numerical behavior of the dynamics of the COVID-19 model. The performance and effectiveness of the designed BRANN methodology are checked through the mean squared error, error histograms, numerical solutions, absolute error, and regression analysis.
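The pipeline can be illustrated as: integrate a nonlinear SIR-type system with an explicit Runge-Kutta scheme to generate the dataset, then fit a neural network to the trajectories. In the sketch below, an L2-regularized MLP is used as a simple stand-in for Bayesian Regularization backpropagation, and the epidemic parameters are toy values, not those of the paper's model.

```python
# Sketch of the data-generation and surrogate-training pipeline: an SIR-type nonlinear
# ODE system is integrated with an explicit Runge-Kutta scheme to build the dataset, and
# a neural network is fit to map time to the state trajectory. An L2-regularized MLP is
# used here as a simple stand-in for Bayesian Regularization backpropagation.
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor

def sir(t, y, beta=0.3, gamma=0.1):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

t_eval = np.linspace(0, 160, 400)
sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0], method="RK45", t_eval=t_eval)  # RK dataset

X = t_eval.reshape(-1, 1)
Y = sol.y.T                                                  # columns: S, I, R trajectories
net = MLPRegressor(hidden_layer_sizes=(30, 30), alpha=1e-3, max_iter=5000,
                   random_state=0).fit(X, Y)
print("mean squared error:", np.mean((net.predict(X) - Y) ** 2))
```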

Keywords: mathematical models, Bayesian regularization, Bayesian regularization backpropagation networks, regression analysis, numerical computing

Procedia PDF Downloads 128
3526 Using Deep Learning for the Detection of Faulty RJ45 Connectors on a Radio Base Station

Authors: Djamel Fawzi Hadj Sadok, Marrone Silvério Melo Dantas, Pedro Henrique Dreyer, Gabriel Fonseca Reis de Souza, Daniel Bezerra, Ricardo Souza, Silvia Lins, Judith Kelner

Abstract:

A radio base station (RBS), part of the radio access network, is a particular type of equipment that supports the connection between a wide range of cellular user devices and an operator's network access infrastructure. Nowadays, most RBS maintenance is carried out manually, resulting in a time-consuming and costly task. A suitable candidate for RBS maintenance automation is repairing faulty links between devices caused by missing or unplugged connectors. This paper proposes and compares two deep learning solutions to identify attached RJ45 connectors on network ports: connector detection, the solution based on object detection, and connector classification, the one based on object classification. With connector detection, we obtained an accuracy of 0.934 and a mean average precision of 0.903; connector classification achieved a maximum accuracy of 0.981 and an AUC of 0.989. Although connector detection was outperformed in this study, this should not be viewed as an overall result, as connector detection is more flexible in scenarios where there is no precise information about the environment and the possible devices, whereas connector classification requires that information to be well defined.

Keywords: radio base station, maintenance, classification, detection, deep learning, automation

Procedia PDF Downloads 183
3525 5G Future Hyper-Dense Networks: An Empirical Study and Standardization Challenges

Authors: W. Hashim, H. Burok, N. Ghazaly, H. Ahmad Nasir, N. Mohamad Anas, A. F. Ismail, K. L. Yau

Abstract:

Future communication networks require devices that are able to work on a single platform but support heterogeneous operations, which leads to service diversity and functional flexibility. This paper proposes two cognitive mechanisms, termed the cognitive hybrid function, which are applied in multiple broadband user terminals in order to maintain reliable connectivity and prevent unnecessary interference. By employing such mechanisms, especially in a future hyper-dense network, we can observe their performance in terms of optimized speed and power-saving efficiency. Results were obtained from several empirical laboratory studies. It was found that reliable network selection improved optimized speed by up to 37% compared with operation without such a function. In terms of power adjustment, this mechanism can reduce the power by 5 dB while maintaining the same level of throughput as at higher power. We also discuss the issues impacting future telecommunication standards when such devices are deployed.

Keywords: dense network, intelligent network selection, multiple networks, transmit power adjustment

Procedia PDF Downloads 366
3524 Quality of Service Based Routing Algorithm for Real Time Applications in MANETs Using Ant Colony and Fuzzy Logic

Authors: Farahnaz Karami

Abstract:

Routing is an important and challenging task in mobile ad hoc networks due to node mobility, lack of central control, unstable links, and limited resources. Ant colony optimization has been found to be an attractive technique for routing in Mobile Ad Hoc Networks (MANETs). However, existing swarm-intelligence-based routing protocols find an optimal path by considering only one or two route selection metrics, without considering the correlations among such parameters, which makes them unsuitable on their own for routing real-time applications. Fuzzy logic can combine multiple route selection parameters containing uncertain or imprecise information, but it does not naturally provide multipath routing for load balancing. The objective of this paper is to design a routing algorithm using fuzzy logic and ant colony optimization that can solve some of the routing problems in mobile ad hoc networks, such as optimizing node energy consumption to increase network lifetime, reducing the link failure rate to increase packet delivery reliability, and providing load balancing to optimize the available bandwidth. In the proposed algorithm, path information is gathered by the ants and given to a fuzzy inference system. Based on the available path information and the parameters required for quality of service (QoS), the fuzzy cost of each path is calculated, and the optimal paths are selected. The NS-2.35 simulator is used for simulation, and the results are compared and evaluated against the newest QoS-based algorithms for MANETs according to the packet delivery ratio, end-to-end delay, and routing overhead ratio criteria. The simulation results show a significant improvement in the performance of these networks in terms of decreasing end-to-end delay and routing overhead ratio and increasing packet delivery ratio.
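The fuzzy costing step can be pictured as mapping each ant-reported path's residual energy, delay, and hop count through simple membership functions into a single cost and then ranking the paths by that cost; the membership shapes, weights, and example paths below are illustrative assumptions, not the paper's rule base.

```python
# Sketch of the fuzzy path-cost idea: each path reported by the ant agents carries
# residual energy, delay, and hop-count values; simple ramp membership functions turn
# these into a single fuzzy cost, and the lowest-cost paths are preferred.
def high(value, lo, hi):
    """Degree to which value is 'high' on [lo, hi] (clamped ramp membership)."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def fuzzy_cost(energy, delay_ms, hops):
    # costly paths have LOW residual energy, HIGH delay, and MANY hops
    return (0.4 * (1 - high(energy, 0, 100))
            + 0.4 * high(delay_ms, 0, 200)
            + 0.2 * high(hops, 0, 10))

# hypothetical candidate paths: (residual energy %, delay in ms, hop count)
paths = {"A-B-D": (80, 40, 2), "A-C-E-D": (35, 25, 3), "A-F-D": (90, 150, 2)}
for name, p in sorted(paths.items(), key=lambda kv: fuzzy_cost(*kv[1])):
    print(name, round(fuzzy_cost(*p), 3))   # best (lowest-cost) path printed first
```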

Keywords: mobile ad hoc networks, routing, quality of service, ant colony, fuzzy logic

Procedia PDF Downloads 48
3523 A New Learning Automata-Based Algorithm to the Priority-Based Target Coverage Problem in Directional Sensor Networks

Authors: Shaharuddin Salleh, Sara Marouf, Hosein Mohammadi

Abstract:

Directional sensor networks (DSNs) have recently attracted a great deal of attention due to their extensive applications in a wide range of situations. One of the most important problems associated with DSNs is covering a set of targets in a given area and, at the same time, maximizing the network lifetime. This is due to limitation in sensing angle and battery power of the directional sensors. This problem gets more complicated by the possibility that targets may have different coverage requirements. In the present study, this problem is referred to as priority-based target coverage (PTC). As sensors are often densely deployed, organizing the sensors into several cover sets and then activating these cover sets successively is a promising solution to this problem. In this paper, we propose a learning automata-based algorithm to organize the directional sensors into several cover sets in such a way that each cover set could satisfy coverage requirements of all the targets. Several experiments are conducted to evaluate the performance of the proposed algorithm. The results demonstrated that the algorithms were able to contribute to solving the problem.

Keywords: directional sensor networks, target coverage problem, cover set formation, learning automata

Procedia PDF Downloads 398
3522 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model

Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson

Abstract:

The subfield of poverty and welfare estimation that applies machine learning tools and methods on satellite imagery is a nascent but rapidly growing one. This is in part driven by the sustainable development goal, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor are their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream applications as humans generally are apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a DL model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6m per pixel at zoom level 18, while that of the machine learning model was sourced from the comparatively lower resolution Sentinel-2 10m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower when compared to those attained by the machine learning model – 0.69-0.79. This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data while the human readers estimated welfare levels from the higher 0.6m spatial resolution data from which key markers of poverty and slums – roofing and road quality – are discernible. It is important to note, however, that the human readers did not receive any training before ratings, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall relating to limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship – eXplainable Artificial Intelligence through a collaborative rather than a comparative framework.

Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania

Procedia PDF Downloads 86
3521 Cooperative Sensing for Wireless Sensor Networks

Authors: Julien Romieux, Fabio Verdicchio

Abstract:

Wireless Sensor Networks (WSNs), which sense environmental data with battery-powered nodes, require multi-hop communication. This power-demanding task adds an extra workload that is unfairly distributed across the network. As a result, nodes run out of battery at different times: this requires an impractical individual node maintenance scheme. Therefore we investigate a new Cooperative Sensing approach that extends the WSN operational life and allows a more practical network maintenance scheme (where all nodes deplete their batteries almost at the same time). We propose a novel cooperative algorithm that derives a piecewise representation of the sensed signal while controlling approximation accuracy. Simulations show that our algorithm increases WSN operational life and spreads communication workload evenly. Results convey a counterintuitive conclusion: distributing workload fairly amongst nodes may not decrease the network power consumption and yet extend the WSN operational life. This is achieved as our cooperative approach decreases the workload of the most burdened cluster in the network.
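One way to picture such a piecewise representation is a node that keeps extending a constant segment until the approximation error would exceed a tolerance, and only then transmits the segment; the sketch below shows this with toy temperature data and an arbitrary error bound, and it is not the authors' algorithm.

```python
# Sketch of the signal-representation idea: a node keeps a piecewise-constant model of
# its readings and transmits a new segment only when the approximation error would
# exceed a tolerance, trading accuracy against radio usage. Threshold and data are toy values.
import numpy as np

def piecewise_segments(signal, tol=0.5):
    segments, start = [], 0
    for end in range(1, len(signal) + 1):
        window = signal[start:end]
        if np.max(np.abs(window - window.mean())) > tol:                      # accuracy bound hit
            segments.append((start, end - 1, signal[start:end - 1].mean()))   # close the segment
            start = end - 1
    segments.append((start, len(signal), signal[start:].mean()))
    return segments   # list of (start, stop, value) with stop exclusive

rng = np.random.default_rng(3)
temps = np.concatenate([20 + rng.normal(0, 0.1, 50), 24 + rng.normal(0, 0.1, 50)])
segs = piecewise_segments(temps)
print(f"{len(segs)} segments transmitted instead of {len(temps)} raw samples")
```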

Keywords: cooperative signal processing, signal representation and approximation, power management, wireless sensor networks

Procedia PDF Downloads 378
3520 Neural Network Approach to Classifying Truck Traffic

Authors: Ren Moses

Abstract:

The process of classifying vehicles on a highway is hereby viewed as a pattern recognition problem in which connectionist techniques such as artificial neural networks (ANN) can be used to assign vehicles to their correct classes and hence to establish optimum axle spacing thresholds. In the United States, vehicles are typically classified into 13 classes using a methodology commonly referred to as “Scheme F”. In this research, the ANN model was developed, trained, and applied to field data of vehicles. The data comprised of three vehicular features—axle spacing, number of axles per vehicle, and overall vehicle weight. The ANN reduced the classification error rate from 9.5 percent to 6.2 percent when compared to an existing classification algorithm that is not ANN-based and which uses two vehicular features for classification, that is, axle spacing and number of axles. The inclusion of overall vehicle weight as a third classification variable further reduced the error rate from 6.2 percent to only 3.0 percent. The promising results from the neural networks were used to set up new thresholds that reduce classification error rate.
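The classifier itself can be sketched as a small feed-forward network mapping the three vehicular features to one of the 13 Scheme F classes; the synthetic records below are placeholders for the field data and the reported error rates.

```python
# Sketch of the classifier described above: a small ANN maps (axle spacing, axle count,
# gross weight) to one of the 13 FHWA Scheme F classes. Records are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 500
axle_spacing = rng.uniform(1.5, 12.0, n)     # average axle spacing (m)
axle_count = rng.integers(2, 7, n)           # axles per vehicle
gross_weight = rng.uniform(1.0, 40.0, n)     # overall vehicle weight (tonnes)
X = np.column_stack([axle_spacing, axle_count, gross_weight])
y = rng.integers(1, 14, n)                   # toy Scheme F class labels 1..13

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0).fit(X, y)
print("classification error rate: %.1f%%" % (100 * (1 - clf.score(X, y))))
```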

Keywords: artificial neural networks, vehicle classification, traffic flow, traffic analysis, highway operations

Procedia PDF Downloads 298
3519 Identification of Coauthors in Scientific Database

Authors: Thiago M. R Dias, Gray F. Moita

Abstract:

The analysis of scientific collaboration networks has contributed significantly to improving our understanding of how collaboration between researchers occurs and of how the scientific production of researchers or research groups evolves. However, the identification of collaborations in large scientific databases is not a trivial task, given the high computational cost of the methods commonly used. This paper proposes a method for identifying collaborations in a large database of researcher curricula. The proposed method has a low computational cost and gives satisfactory results, proving to be an interesting alternative for the modeling and characterization of large scientific collaboration networks.

Keywords: extraction, data integration, information retrieval, scientific collaboration

Procedia PDF Downloads 380
3518 A Survey of Response Generation of Dialogue Systems

Authors: Yifan Fan, Xudong Luo, Pingping Lin

Abstract:

An essential task in the field of artificial intelligence is to allow computers to interact with people through natural language. Therefore, research on virtual assistants and dialogue systems has received widespread attention from industry and academia. Response generation plays a crucial role in dialogue systems, so to push forward research on this topic, this paper surveys various methods for response generation. We sort these methods into three categories. The first includes finite state machine methods, framework methods, and instance methods. The second contains full-text indexing methods, ontology methods, vast-knowledge-base methods, and some others. The third covers retrieval methods and generative methods. We also discuss some hybrid methods based on knowledge and deep learning. We compare their advantages and disadvantages and point out in which ways these studies can be improved further. Our discussion covers studies published in leading conferences such as IJCAI and AAAI in recent years.

Keywords: deep learning, generative, knowledge, response generation, retrieval

Procedia PDF Downloads 123
3517 Deep Well-Grounded Magnetite Anode Chains Retrieval and Installation for Raslanuf Complex Impressed Current Cathodic Protection System Rectification

Authors: Mohamed Ahmed Khalil

Abstract:

A number of deep well anode ground beds (GBs) had been retrieved due to non-operating anode chains. New identical magnetite anode chains (MACs) have been installed in the Raslanuf complex impressed current cathodic protection (ICCP) system, distributed across different plants (utility, ethylene, and polyethylene). All problems associated with the retrieval and installation of the MACs have been discussed, rectified, and presented. All severely corroded GB-associated wellhead casings were maintained and/or replaced by newly fabricated and modified ones. The main cause of the severe internal corrosion of the wellhead casings is discussed, and the remedial action taken to overcome future corrosion problems is presented. All GB-connected anode junction boxes (AJBs) and shunts were closely inspected and maintained, and the necessary replacements and/or modifications were carried out on the shunts. All damaged GB concrete foundations (CFs) were inspected and completely replaced. All GB-associated transformer-rectifier units (TRUs) were subjected to thorough inspection, and the necessary maintenance was performed on each individual TRU. After completion of all MAC and TRU maintenance activities, each cathodic protection station (CPS) was put back into operation; alternating current (AC), direct current (DC), voltage, and structure-to-soil potential (S/P) measurements were conducted and recorded, and all obtained test results are presented. The DC current outputs were adjusted, and the DC current output of each MAC was recorded at each GB AJB.

Keywords: magnetite anodes, deep well, ground beds, cathodic protection, transformer rectifier, impressed current, junction boxes

Procedia PDF Downloads 104