Search results for: graph convolutional networks (GCNs)
1955 COVID-19 Analysis with Deep Learning Model Using Chest X-Rays Images
Authors: Uma Maheshwari V., Rajanikanth Aluvalu, Kumar Gautam
Abstract:
COVID-19 is a highly contagious viral infection with major worldwide health implications, and the global economy suffers as a result. The spread of this pandemic disease can be slowed if positive patients are found early. Predicting COVID-19 is beneficial for identifying patients whose health problems put them at risk. Deep learning and machine learning algorithms for COVID prediction using X-rays have the potential to be extremely useful in addressing the scarcity of doctors and clinicians in remote places. In this paper, a convolutional neural network (CNN) with deep layers is presented for recognizing COVID-19 patients using real-world datasets. We gathered around 6000 X-ray scan images from various sources and split them into two categories: normal and COVID-impacted. Our model examines chest X-ray images to recognize such patients. Because X-rays are commonly available and affordable, our findings show that X-ray analysis is effective in COVID diagnosis. The predictions performed well, with an average accuracy of 99% on training images and 88% on X-ray test images. Keywords: deep CNN, COVID-19 analysis, feature extraction, feature map, accuracy
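A minimal sketch of a binary chest X-ray classifier of the kind described above, written with tf.keras; the layer sizes, 224x224 input resolution, and directory layout are assumptions for illustration, not the authors' exact architecture.

```python
# Hedged sketch: a small deep CNN for normal-vs-COVID chest X-ray classification.
# The architecture, 224x224 grayscale input, and "xrays/train" folder layout are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),   # normal vs. COVID-impacted
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Assumed directory layout: xrays/train/{normal,covid}/*.png
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "xrays/train", image_size=(224, 224), color_mode="grayscale",
        label_mode="binary", batch_size=32)
    model = build_model()
    model.fit(train_ds, epochs=10)
```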
Procedia PDF Downloads 79
1954 Signal Strength Based Multipath Routing for Mobile Ad Hoc Networks
Authors: Chothmal
Abstract:
In this paper, we present a route discovery process which uses the signal strength on a link as a parameter for its inclusion in the discovered route. The proposed signal-to-interference and noise ratio (SINR) based multipath reactive routing protocol is named the SINR-MP protocol. The SINR-MP routing protocol has the following two features: a) it selects routes based on the SINR of the links during the route discovery process, and therefore selects routes with long lifetimes and low frame error rates for data transmission, and b) its route discovery process is multipath, discovering more than one SINR-based route between a given source-destination pair. The multiple routes selected by our SINR-MP protocol are node-disjoint in nature, which increases their robustness against link failures, as failure of one route will not affect the other. The secondary route is very useful when the primary route is broken, because it can be used without triggering a new route discovery process. Due to this, the network overhead caused by a route discovery process is avoided, which greatly improves network performance. The proposed SINR-MP routing protocol is implemented in the trial version of the Qualnet network simulator. Keywords: ad hoc networks, quality of service, video streaming, H.264/SVC, multiple routes, video traces
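A small sketch of the node-disjoint route-selection idea using networkx; the topology, SINR values, and the cost function (link cost = 1/SINR) are illustrative assumptions, not the SINR-MP protocol itself.

```python
# Hedged sketch: pick a primary route and a node-disjoint secondary route,
# preferring links with higher SINR (link cost = 1/SINR). Topology and SINR values are invented.
import networkx as nx

def sinr_disjoint_routes(links, src, dst):
    G = nx.Graph()
    for u, v, sinr in links:
        G.add_edge(u, v, cost=1.0 / sinr)          # high SINR -> low cost
    primary = nx.shortest_path(G, src, dst, weight="cost")
    H = G.copy()
    H.remove_nodes_from(primary[1:-1])              # enforce node-disjointness
    try:
        secondary = nx.shortest_path(H, src, dst, weight="cost")
    except nx.NetworkXNoPath:
        secondary = None
    return primary, secondary

links = [("S", "A", 18.0), ("A", "D", 15.0), ("S", "B", 9.0),
         ("B", "C", 12.0), ("C", "D", 10.0), ("A", "C", 6.0)]
print(sinr_disjoint_routes(links, "S", "D"))        # primary S-A-D, secondary S-B-C-D
```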
Procedia PDF Downloads 249
1953 A Framework for the Design of Green Giga Passive Optical Fiber Access Network in Kuwait
Authors: Ali A. Hammadi
Abstract:
In this work, a practical study on a commissioned Giga Passive Optical Network (GPON) fiber-to-the-home access network in Kuwait is presented. The work covers the framework of the conceptual design of the deployed Passive Optical Networks (PONs), the access network, optical fiber cable network distribution, technologies, and standards. The work also describes the methodologies applied by system engineers for the design of Optical Network Terminal (ONT) and Optical Line Terminal (OLT) transceivers with respect to distance, operating wavelengths, and splitting ratios. The results demonstrate and justify why the transmission distance of a PON link in Fiber to the Premises (FTTP) is limited to 20 km. Optical Time Domain Reflectometer (OTDR) tests have been carried out for this project to confirm compliance with International Telecommunication Union (ITU) specifications regarding the total length of the deployed optical cable, total loss in dB, and loss per km in dB/km with respect to the operating wavelengths. OTDR test results with traces for segments of the implemented fiber network are provided and discussed. Keywords: passive optical networks (PONs), fiber to the premises (FTTx), access network, OTDR
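A back-of-the-envelope link-budget sketch illustrating why FTTP reach is held to roughly 20 km; all numbers (Class B+ budget, attenuation per wavelength, splitter and connector losses) are typical assumed values, not figures from the deployed Kuwaiti network.

```python
# Hedged sketch: GPON reach estimate from an assumed optical power budget.
# All figures are typical textbook values (ITU-T G.984 Class B+), not project data.
POWER_BUDGET_DB = 28.0        # assumed OLT launch power minus ONT sensitivity (Class B+)
SPLITTER_LOSS_DB = 17.5       # assumed loss of a 1:32 splitter
CONNECTOR_SPLICE_DB = 1.5     # assumed total connector/splice losses
MARGIN_DB = 3.0               # assumed safety margin
ATTEN_DB_PER_KM = {1310: 0.35, 1490: 0.25, 1550: 0.20}   # assumed fibre attenuation

for wavelength, atten in ATTEN_DB_PER_KM.items():
    spare = POWER_BUDGET_DB - SPLITTER_LOSS_DB - CONNECTOR_SPLICE_DB - MARGIN_DB
    reach_km = spare / atten
    print(f"{wavelength} nm: {spare:.1f} dB left for fibre -> ~{reach_km:.1f} km reach")
```

With these assumed values the reach works out to roughly 17-24 km depending on wavelength, consistent with the 20 km limit quoted above.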
Procedia PDF Downloads 288
1952 Applying Neural Networks for Solving Record Linkage Problem via Fuzzy Description Logics
Authors: Mikheil Kalmakhelidze
Abstract:
The record linkage (RL) problem has become more and more important in recent years due to the growing interest in big data analysis. The problem can be formulated very simply: given two entries a and b of a database, decide whether they represent the same object or not. There are two classical ways of solving the RL problem, deterministic and probabilistic. Using a simple Bayes classifier produces useful results in many cases, but sometimes the results prove poor. In recent years, several successful approaches have been made towards solving specific RL problems with neural network algorithms, including the single-layer perceptron, the multilayer backpropagation network, etc. In our work, we model the RL problem for a specific dataset of student applications in fuzzy description logic (FDL), where the linkage of a specific pair (a,b) depends on the truth value of the corresponding formula A(a,b) in a canonical FDL model. As a main result, we build a neural network for deciding the truth value of FDL formulas in a canonical model and thus link the RL problem to machine learning. We apply the approach to a dataset with 10000 entries and compare it to classical RL solving approaches. The results prove more accurate than the standard probabilistic approach. Keywords: description logic, fuzzy logic, neural networks, record linkage
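A compact sketch of the machine-learning side of such an approach: a small neural network deciding whether a record pair refers to the same entity from pairwise similarity features. The feature names and toy data are assumptions; the paper's fuzzy-description-logic encoding is not reproduced here.

```python
# Hedged sketch: learn a match/non-match decision for record pairs from
# similarity features (e.g., name, birth-date, address similarity). Toy data only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Each row: [name_similarity, date_similarity, address_similarity] in [0, 1].
matches     = rng.uniform(0.7, 1.0, size=(200, 3))
non_matches = rng.uniform(0.0, 0.6, size=(200, 3))
X = np.vstack([matches, non_matches])
y = np.array([1] * 200 + [0] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("pair-match accuracy:", clf.score(X_te, y_te))
```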
Procedia PDF Downloads 272
1951 An Optimization Model for Maximum Clique Problem Based on Semidefinite Programming
Authors: Derkaoui Orkia, Lehireche Ahmed
Abstract:
The aim of this article is to explore the potential of a powerful optimization technique, namely semidefinite programming, for solving NP-hard problems. This approach provides tight relaxations of combinatorial and quadratic problems. In this work, we solve the maximum clique problem using this relaxation. The clique problem is the computational problem of finding cliques in a graph; it is widely acknowledged for its many applications to real-world problems. The numerical results show that it is possible to find a maximum clique in polynomial time using an algorithm based on semidefinite programming. We implement a primal-dual interior-point algorithm to solve this semidefinite program, and the semidefinite relaxation of the problem can be solved in polynomial time. Keywords: semidefinite programming, maximum clique problem, primal-dual interior point method, relaxation
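A short sketch of the general idea of bounding the clique number by semidefinite programming, here via the standard Lovász theta function of the complement graph solved with cvxpy; this is an illustrative relaxation, not the authors' exact SDP formulation or their interior-point solver.

```python
# Hedged sketch: upper-bound the maximum clique via an SDP relaxation
# (Lovász theta of the complement graph). Uses cvxpy's default SDP-capable solver.
import itertools
import cvxpy as cp

edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (2, 4)}   # toy graph, clique number 3
n = 5

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.trace(X) == 1]
for i, j in itertools.combinations(range(n), 2):
    if (i, j) not in edges and (j, i) not in edges:
        constraints.append(X[i, j] == 0)        # zero entries on non-edges of the graph

prob = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
prob.solve()
print("SDP upper bound on clique number:", round(prob.value, 3))   # ~3 for this toy graph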
Procedia PDF Downloads 222
1950 Post-Quantum Resistant Edge Authentication in Large Scale Industrial Internet of Things Environments Using Aggregated Local Knowledge and Consistent Triangulation
Authors: C. P. Autry, A. W. Roscoe, Mykhailo Magal
Abstract:
We discuss the theoretical model underlying 2BPA (two-band peer authentication), a practical alternative to conventional authentication of entities and data in IoT. In essence, this involves assembling a virtual map of authentication assets in the network, typically leading to many paths of confirmation between any pair of entities. This map is continuously updated, confirmed, and evaluated. The value of authentication along multiple disjoint paths becomes very clear, and we require analogues of triangulation to extend authentication along extended paths and deliver it along all possible paths. We discover that if an attacker wants to make an honest node falsely believe she has authenticated another, then the length of the authentication paths is of little importance. This is because optimal attack strategies correspond to minimal cuts in the authentication graph and do not contain multiple edges on the same path. The authentication provided by disjoint paths normally is additive (in entropy).Keywords: authentication, edge computing, industrial IoT, post-quantum resistance
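A tiny sketch of the graph-theoretic observation in the abstract: the number of node-disjoint confirmation paths between two entities equals the size of the minimal cut an attacker must compromise (Menger's theorem). The toy authentication graph below is invented.

```python
# Hedged sketch: disjoint authentication paths vs. the minimum node cut.
# By Menger's theorem the two quantities coincide; the toy graph is invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("A", "p1"), ("p1", "B"),
                  ("A", "p2"), ("p2", "B"),
                  ("A", "p3"), ("p3", "p4"), ("p4", "B")])

paths = list(nx.node_disjoint_paths(G, "A", "B"))
cut = nx.minimum_node_cut(G, "A", "B")
print("disjoint confirmation paths:", len(paths))    # 3
print("nodes an attacker must subvert:", cut)        # a cut of size 3
```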
Procedia PDF Downloads 197
1949 Artificial Neural Networks and Hidden Markov Model in Landslides Prediction
Authors: C. S. Subhashini, H. L. Premaratne
Abstract:
Landslides are the most recurrent and prominent disaster in Sri Lanka. Sri Lanka has been subjected to a number of extreme landslide disasters that resulted in a significant loss of life, material damage, and distress. It is necessary to explore a solution towards preparedness and mitigation to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of using Artificial Neural Networks and the Hidden Markov Model in landslide prediction and the possibility of applying this modern technology to predict landslides in a prominent geographical area in Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the influencing factors for landslides. A landslide database was created using existing topographic, soil, drainage, and land cover maps and historical data. The landslide-related factors, which include external factors (Rainfall and Number of Previous Occurrences) and internal factors (Soil Material, Geology, Land Use, Curvature, Soil Texture, Slope, Aspect, Soil Drainage, and Soil Effective Thickness), are extracted from the landslide database. These factors are used to recognize the possibility of landslide occurrence using an ANN and HMM. The models acquire the relationship between the landslide factors and the hazard index during the training session. With landslide-related factors as inputs, the models are trained to predict three classes, namely 'landslide occurs', 'landslide does not occur', and 'landslide likely to occur'. Once trained, the models are able to predict the most likely class for the prevailing data. Finally, the two models were compared with regard to prediction accuracy, false acceptance rates, and false rejection rates. This research indicates that the Artificial Neural Network could be used as a strong decision support system to predict landslides more efficiently and effectively than the Hidden Markov Model. Keywords: landslides, influencing factors, neural network model, hidden markov model
Procedia PDF Downloads 384
1948 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks
Authors: Andrew N. Saylor, James R. Peters
Abstract:
Scoliosis is a complex 3D deformity of the thoracic and lumbar spines, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed the best used ReLU neurons, had three hidden layers, and had 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects. Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging
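A sketch of the best-performing configuration reported above (three hidden layers of 100 ReLU units, mean squared error, SGD with learning rate 0.01, batch size 10, early stopping), written with current tf.keras; loading the SpineWeb images is assumed and replaced by placeholder arrays.

```python
# Sketch of the best-performing dense ANN reported above: 500x187 grayscale input,
# three hidden layers of 100 ReLU units, MSE loss, SGD (lr=0.01), batch size 10,
# early stopping. Data loading from SpineWeb is assumed and not shown.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers, callbacks

model = models.Sequential([
    layers.Input(shape=(500, 187)),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(100, activation="relu"),
    layers.Dense(100, activation="relu"),
    layers.Dense(1),                       # regressed Cobb angle in degrees
])
model.compile(optimizer=optimizers.SGD(learning_rate=0.01), loss="mse", metrics=["mae"])

# Placeholder arrays standing in for the 481 training / 128 test X-rays.
X_train, y_train = np.random.rand(481, 500, 187), np.random.rand(481) * 60
X_test, y_test = np.random.rand(128, 500, 187), np.random.rand(128) * 60

early = callbacks.EarlyStopping(patience=5, restore_best_weights=True)
model.fit(X_train, y_train, batch_size=10, epochs=100,
          validation_split=0.1, callbacks=[early], verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))   # [MSE, MAE]
```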
Procedia PDF Downloads 129
1947 Measuring Delay Using Software Defined Networks: Limitations, Challenges, and Suggestions for Openflow
Authors: Ahmed Alutaibi, Ganti Sudhakar
Abstract:
Providing better Quality of Service (QoS) to end users has been a challenging problem for researchers and service providers. Building applications on best-effort network protocols hindered the adoption of guaranteed service parameters and, ultimately, Quality of Service. The introduction of Software Defined Networking (SDN) opened the door to a paradigm shift towards more controlled, programmable, and configurable behavior. OpenFlow has been and still is the main implementation of the SDN vision. To facilitate better QoS for applications, the network must calculate and measure certain parameters. One of those parameters is the delay between the two ends of the connection. Using the power of SDN and the knowledge of application and network behavior, SDN networks can adjust to different conditions and specifications. In this paper, we use the capabilities of SDN to implement multiple algorithms that measure end-to-end delay, not only inside the SDN network. The results of applying the algorithms in an emulated environment show that we can obtain measurements close to the emulated delay. The results also show that, depending on the algorithm, the load on the network and controller can differ. In addition, the transport-layer handshake algorithm performs best among the tested algorithms. From the results and implementation, we identify the limitations of OpenFlow and develop suggestions to address them. Keywords: software defined networking, quality of service, delay measurement, openflow, mininet
Procedia PDF Downloads 165
1946 The 5G Communication Technology Radiation Impact on Human Health and Airports Safety
Authors: Ashraf Aly
Abstract:
The aim of this study is to examine the impact of 5G communication technology radiation on human health and airport safety. The term 5G refers to the fifth generation of wireless mobile technology. 5G wireless technology will increase the number of high-frequency-powered base stations and other devices, raise browsing and download speeds, improve network connectivity, and play a big part in improving the performance of integrated applications such as self-driving cars, medical devices, and robotics. 4G was the previous generation of mobile networking technology, and 5G is the new generation of wireless technology. 5G networks have more features than 4G networks, such as lower latency, higher capacity, and increased bandwidth. These improvements over 4G will have big impacts on how people live, do business, and work all over the world. However, neither 4G nor 5G has been fully tested for safety, and this wireless radiation may have harmful effects. This paper presents biological factors concerning the effects of 5G radiation on human health. 5G services use C-band radio frequencies; these frequencies are close to those used by radio altimeters, which are important equipment for airport and aircraft safety. The aviation industry, telecommunications companies, and their regulators have been discussing and weighing these interference concerns for years. Keywords: wireless communication, radiofrequency, electromagnetic field, environmental issues
Procedia PDF Downloads 65
1945 A Survey and Theory of the Effects of Various Hamlet Videos on Viewers’ Brains
Authors: Mark Pizzato
Abstract:
How do ideas, images, and emotions in stage-plays and videos affect us? Do they evoke a greater awareness (or cognitive reappraisal of emotions) through possible shifts between left-cortical, right-cortical, and subcortical networks? To address these questions, this presentation summarizes the research of various neuroscientists, especially Bernard Baars and others involved in Global Workspace Theory, Matthew Lieberman in social neuroscience, Iain McGilchrist on left and right cortical functions, and Jaak Panksepp on the subcortical circuits of primal emotions. Through such research, this presentation offers an ‘inner theatre’ model of the brain, regarding major hubs of neural networks and our animal ancestry. It also considers recent experiments, by Mario Beauregard, on the cognitive reappraisal of sad, erotic, and aversive film clips. Finally, it applies the inner-theatre model and related research to survey results of theatre students who read and then watched the ‘To be or not to be’ speech in 8 different video versions (from stage and screen productions) of William Shakespeare’s Hamlet. Findings show that students become aware of left-cortical, right-cortical, and subcortical brain functions—and shifts between them—through staging and movie-making choices in each of the different videos.Keywords: cognitive reappraisal, Hamlet, neuroscience, Shakespeare, theatre
Procedia PDF Downloads 315
1944 Keypoint Detection Method Based on Multi-Scale Feature Fusion of Attention Mechanism
Authors: Xiaoxiao Li, Shuangcheng Jia, Qian Li
Abstract:
Keypoint detection has always been a challenge in the field of image recognition. This paper proposes a novel keypoint detection method called Multi-Scale Feature Fusion Convolutional Network with Attention (MFFCNA). We verified that multi-scale features combined with the attention mechanism module have better feature expression capability. Feature fusion between different scales enriches the information that the network model can express and makes the network easier to converge. On our self-made street sign corner dataset, the MFFCNA model achieves an accuracy of 97.8% and a recall of 81%, which are 5 and 8 percentage points higher than the HRNet network, respectively. On the COCO dataset, the AP is 71.9% and the AR is 75.3%, which are 3 and 2 points higher than HRNet, respectively. Extensive experiments show that our method yields a remarkable improvement in keypoint recognition tasks, and its recognition performance is better than existing methods. Moreover, our method can be applied not only to keypoint detection but also to image classification and semantic segmentation with good generality. Keywords: keypoint detection, feature fusion, attention, semantic segmentation
Procedia PDF Downloads 119
1943 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem
Authors: Y. Wang
Abstract:
The traveling salesman problem (TSP) is an NP-hard problem in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those working on complete graphs. We present an improved iterative algorithm that computes sparse graphs for the TSP from frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C·Nmax·n²), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the scale of the TSP instance. The experimental results showed that the computed sparse graphs generally have fewer than 5n edges for most of these Euclidean instances. Moreover, the maximum and minimum vertex degrees in the sparse graphs do not differ much. Thus, the computation time of methods that solve the TSP on these sparse graphs will be greatly reduced. Keywords: frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem
Procedia PDF Downloads 233
1942 Deep Learning for SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR Systems are often considered to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. 
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method. Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network
Procedia PDF Downloads 67
1941 Scattering Operator and Spectral Clustering for Ultrasound Images: Application on Deep Venous Thrombi
Authors: Thibaud Berthomier, Ali Mansour, Luc Bressollette, Frédéric Le Roy, Dominique Mottier, Léo Fréchier, Barthélémy Hermenault
Abstract:
Deep Venous Thrombosis (DVT) occurs when a thrombus is formed within a deep vein (most often in the legs). This disease can be deadly if a part or the whole thrombus reaches the lung and causes a Pulmonary Embolism (PE). This disorder, often asymptomatic, has multifactorial causes: immobilization, surgery, pregnancy, age, cancers, and genetic variations. Our project aims to relate the thrombus epidemiology (origins, patient predispositions, PE) to its structure using ultrasound images. Ultrasonography and elastography were collected using Toshiba Aplio 500 at Brest Hospital. This manuscript compares two classification approaches: spectral clustering and scattering operator. The former is based on the graph and matrix theories while the latter cascades wavelet convolutions with nonlinear modulus and averaging operators.Keywords: deep venous thrombosis, ultrasonography, elastography, scattering operator, wavelet, spectral clustering
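A brief sketch of the spectral-clustering half of the comparison, using scikit-learn on toy feature vectors; the feature matrix below is a random stand-in for the scattering or texture descriptors extracted from the ultrasound and elastography images.

```python
# Hedged sketch: spectral clustering of ultrasound-texture feature vectors.
# The feature matrix is a random stand-in for scattering/texture descriptors.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
# Two synthetic "thrombus structure" groups in feature space (e.g., echogenicity,
# heterogeneity, elastography stiffness); real descriptors would replace this.
features = np.vstack([rng.normal(0.0, 0.5, size=(30, 4)),
                      rng.normal(3.0, 0.5, size=(30, 4))])

labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0).fit_predict(features)
print(labels)
```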
Procedia PDF Downloads 479
1940 Real Time Traffic Performance Study over MPLS VPNs with DiffServ
Authors: Naveed Ghani
Abstract:
With the arrival of higher-speed communication links and mature applications running over the internet, the requirement for reliable, efficient, and robust network designs is rising day by day. Multi-Protocol Label Switching (MPLS) Virtual Private Networks (VPNs) promise to provide optimal network services and are gaining popularity in industry day by day. Enterprise customers are moving to service providers that offer MPLS VPNs. The main reason for this shift is the capability of MPLS VPNs to provide built-in security features and any-to-any connectivity. MPLS VPNs improve network performance thanks to fast label switching compared to traditional IP forwarding, but traffic classification and policing are still required on a per-hop basis to enhance the performance of real-time traffic, which is delay sensitive (particularly voice and video). QoS (Quality of Service) is the most important factor for prioritizing enterprise networks' real-time traffic such as voice and video. This thesis is focused on the study of QoS parameters (e.g., delay, jitter, and MOS (Mean Opinion Score)) for real-time traffic over MPLS VPNs. The DiffServ (Differentiated Services) QoS model will be used over the MPLS VPN network to get end-to-end service quality. Keywords: network, MPLS, VPN, DiffServ, MPLS VPN, DiffServ QoS, QoS Model, GNS2
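To make the QoS metrics concrete, here is a hedged sketch that converts one-way delay and packet loss into an approximate MOS using the commonly cited simplified E-model (ITU-T G.107); the coefficients, including the loss-impairment factor, are assumed illustrative values, not measurements from the MPLS VPN testbed.

```python
# Hedged sketch: approximate MOS from one-way delay (ms) and packet loss (%),
# using the widely cited simplified E-model (ITU-T G.107). The coefficients,
# especially the loss impairment of 2.5 R-points per percent, are assumptions.
def mos_from_delay_loss(delay_ms, loss_pct):
    r = 93.2
    r -= 0.024 * delay_ms + 0.11 * (delay_ms - 177.3) * (delay_ms > 177.3)  # delay impairment
    r -= 2.5 * loss_pct                                                     # assumed loss impairment
    r = max(0.0, min(100.0, r))
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

for delay, loss in [(40, 0.0), (150, 0.5), (300, 1.0)]:
    print(f"delay={delay} ms, loss={loss}% -> MOS ~ {mos_from_delay_loss(delay, loss):.2f}")
```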
Procedia PDF Downloads 426
1939 Discerning Divergent Nodes in Social Networks
Authors: Mehran Asadi, Afrand Agah
Abstract:
In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were found in the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we discuss overfitting and describe different approaches for evaluation and performance comparison of different classification methods. In classification, the main objective is to categorize different items and assign them to different groups based on their properties and similarities. In data mining, recursive partitioning is utilized to probe the structure of a data set, which allows us to envision decision rules and apply them to classify data into several groups. Estimating densities is hard, especially in high dimensions with limited data. Of course, we do not know the densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see if any predictors are highly correlated with one another. By calculating the correlation coefficients for the predictor variables, we see that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods and utilized decision trees (with k-fold cross-validation to prune the tree). The method applied to this dataset is the decision tree, a non-parametric classification method which uses a set of rules to predict that each observation belongs to the most commonly occurring class label of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables. Keywords: online social networks, data mining, social cloud computing, interaction and collaboration
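A condensed sketch of the workflow described above (a decision tree whose pruning level is chosen by k-fold cross-validation), written with scikit-learn rather than the R tooling named in the abstract; the node-feature matrix and the "divergent" label rule are placeholders.

```python
# Hedged sketch: decision tree with cost-complexity pruning selected by k-fold CV.
# scikit-learn stands in for the R tools named in the abstract, and the node-feature
# matrix (e.g., degree, transitivity, density) is a random placeholder.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 5))                       # stand-in node features
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(500) > 0.8).astype(int)  # "divergent" flag

path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
best_alpha, best_score = 0.0, 0.0
for alpha in np.unique(path.ccp_alphas):
    scores = cross_val_score(DecisionTreeClassifier(ccp_alpha=alpha, random_state=0),
                             X, y, cv=5)
    if scores.mean() > best_score:
        best_alpha, best_score = alpha, scores.mean()
print(f"chosen ccp_alpha={best_alpha:.4f}, CV accuracy={best_score:.3f}")
```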
Procedia PDF Downloads 157
1938 A Recognition Method of Ancient Yi Script Based on Deep Learning
Authors: Shanxiong Chen, Xu Han, Xiaolong Wang, Hui Ma
Abstract:
The Yi are an ethnic group mainly living in mainland China, with their own spoken and written language systems developed over thousands of years. Ancient Yi is one of the six ancient languages in the world; it keeps a record of the history of the Yi people and offers documents valuable for research into human civilization. Recognition of the characters in ancient Yi helps to transform the documents into electronic form, making their storage and dissemination convenient. Due to historical and regional limitations, research on the recognition of ancient characters is still inadequate. Thus, deep learning technology was applied to the recognition of such characters. Five models were developed on the basis of a four-layer convolutional neural network (CNN). Alpha-Beta divergence was taken as a penalty term to re-encode the output neurons of the five models. Two fully connected layers performed the compression of the features. Finally, at the softmax layer, the orthographic features of ancient Yi characters were re-evaluated, their probability distributions were obtained, and the characters with the highest-probability features were recognized. Tests show that the method achieves higher precision than the traditional CNN model for handwriting recognition of ancient Yi. Keywords: recognition, CNN, Yi character, divergence
Procedia PDF Downloads 164
1937 Expanding Trading Strategies By Studying Sentiment Correlation With Data Mining Techniques
Authors: Ved Kulkarni, Karthik Kini
Abstract:
This experiment aims to understand how the media affects power markets in the mainland United States and to study the duration of the reaction time between news updates and actual price movements. We take into account electric utility companies trading on the NYSE and exclude companies that are more politically involved and move with higher sensitivity to politics. The scraper checks for any news related to keywords, which are predefined and stored for each specific company. Based on this, the classifier allocates the effect into five categories: positive, negative, highly optimistic, highly negative, or neutral. The effect on the respective price movement is studied to understand the response time. Based on the response time observed, neural networks would be trained to understand and react to changing market conditions, achieving the best strategy in every market. The stock trader would be day trading in the first phase and making option strategy predictions based on the Black-Scholes model. The expected result is an AI-based system that adjusts trading strategies within the market response time to each price movement. Keywords: data mining, language processing, artificial neural networks, sentiment analysis
Procedia PDF Downloads 17
1936 Deep Learning Based Polarimetric SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring . SAR Systems are often considered to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. 
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry
Procedia PDF Downloads 90
1935 Multi Tier Data Collection and Estimation, Utilizing Queue Model in Wireless Sensor Networks
Authors: Amirhossein Mohajerzadeh, Abolghasem Mohajerzadeh
Abstract:
In this paper, a target parameter is estimated with desirable precision in hierarchical wireless sensor networks (WSNs), while the proposed algorithm also tries to prolong network lifetime as much as possible using an efficient data collection algorithm. The target parameter distribution function is considered unknown. Sensor nodes sense the environment and send the data to the base station, called the fusion center (FC), using a hierarchical data collection algorithm. The FC reconstructs the underlying phenomenon based on the collected data. Considering the aggregation level x, the goal is to provide the essential infrastructure for finding the best value of the aggregation level in order to prolong network lifetime as much as possible while guaranteeing the desired accuracy (the required sample size depends entirely on the desired precision). First, the sample size calculation algorithm is discussed; second, the average queue length based on the M/M[x]/1/K queue model is determined and used for the energy consumption calculation. Nodes can decrease transmission cost by aggregating incoming data. Furthermore, the performance of the new algorithm is evaluated in terms of lifetime and estimation accuracy. Keywords: aggregation, estimation, queuing, wireless sensor network
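A short simulation sketch of the M/M[x]/1/K idea: Poisson packet arrivals, a single server that transmits up to x aggregated packets per exponential service, and a buffer of size K. The arrival and service rates and the aggregation level are assumed values for illustration, not figures from the paper.

```python
# Hedged sketch: event-driven simulation of an M/M[x]/1/K-style aggregation queue.
# Arrival rate, service rate, buffer size K, and aggregation level x are assumed values.
import random

def simulate(lam=5.0, mu=2.0, K=20, x=4, horizon=10_000.0, seed=1):
    random.seed(seed)
    t, queue, busy = 0.0, 0, False
    next_arrival = random.expovariate(lam)
    next_departure = float("inf")
    area, last_t, batches = 0.0, 0.0, 0          # time-weighted waiting-queue length
    while t < horizon:
        t = min(next_arrival, next_departure)
        area += queue * (t - last_t)
        last_t = t
        if t == next_arrival:
            if queue < K:
                queue += 1                       # drop the packet if the buffer is full
            next_arrival = t + random.expovariate(lam)
        else:
            busy = False                         # batch transmission finished
            next_departure = float("inf")
        if not busy and queue > 0:
            batch = min(x, queue)                # aggregate up to x packets per transmission
            queue -= batch
            busy = True
            batches += 1
            next_departure = t + random.expovariate(mu)
    return area / t, batches

avg_len, transmissions = simulate()
print(f"average queue length ~ {avg_len:.2f}, transmissions ~ {transmissions}")
```

Fewer transmissions for the same traffic (larger x) translate into lower radio energy per collected sample, which is the trade-off the paper optimizes against estimation accuracy.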
Procedia PDF Downloads 186
1934 Seismic Hazard Prediction Using Seismic Bumps: Artificial Neural Network Technique
Authors: Belkacem Selma, Boumediene Selma, Tourkia Guerzou, Abbes Labdelli
Abstract:
Natural disasters have occurred and will continue to cause human and material damage; therefore, truly "preventing" natural disasters will never be possible. However, their prediction is possible with the advancement of technology. Even if natural disasters are effectively inevitable, their consequences may be partly controlled. The rapid growth and progress of artificial intelligence (AI) has had a major impact on the prediction of natural disasters and on risk assessment, which are necessary for effective disaster reduction. Predicting earthquakes to prevent the loss of human lives and property damage is important, which is why it is crucial to develop techniques for predicting this natural disaster. The present study aims to analyze the ability of artificial neural networks (ANNs) to predict earthquakes that occur in a given area. The data used describe the problem of forecasting high-energy (higher than 10^4 J) seismic bumps in a coal mine, using two longwalls as an example. For this purpose, seismic bump data obtained from mines have been analyzed. The results obtained show that the ANN was able to predict earthquake parameters with high accuracy; the classification accuracy through neural networks is more than 94%, and the models developed are efficient and robust and depend only weakly on the initial database. Keywords: earthquake prediction, ANN, seismic bumps
Procedia PDF Downloads 127
1933 Urban Ethical Fashion Networks of Design, Production and Retail in Taiwan
Authors: WenYing Claire Shih, Konstantinos Agrafiotis
Abstract:
The circular economy has become one of the seven fundamental pillars of Taiwan’s economic development, as promulgated by the government. The circular economy model, with its fundamental premise of waste elimination, can transform the textile and clothing sectors from major polluting industries into a much cleaner alternative for a better quality of life for all citizens. In a related vein, the notion of the creative economy, and more specifically the fashion industry, can prompt similar results in terms of jobs and wealth creation. The combined forces of the circular and creative economies and their beneficial output have resulted in the configuration of ethical urban networks, which may potentially lead to sources of competitive advantage. All actors involved in the configuration of this urban ethical fashion network, from public authorities to private enterprise, can bring about positive changes in the urban setting. Preliminary results from action research show that this configuration is attainable in terms of circularity, by reducing fabric waste produced by local textile mills and through innovative methods of design, production, and retail around urban spaces, where the network has managed to generate a stream of jobs and financial revenues for all participants. The municipal authorities, as the facilitating platform, have been of paramount importance in this public-private partnership. In the exploratory pilot study of a network of circular fashion production and consumption, we observed a positive disposition among participants. As the network becomes fully functional by attracting more participant firms from the textile and clothing sectors, it can benefit Taiwan’s soft power in the region and simultaneously raise citizens’ awareness of circular methods of fashion production, consumption, and disposal, which can also lead to the betterment of urban lifestyles and may open export horizons for the firms. Keywords: the circular economy, the creative economy, ethical urban networks, action research
Procedia PDF Downloads 136
1932 Using Satellite Images Datasets for Road Intersection Detection in Route Planning
Authors: Fatma El-Zahraa El-Taher, Ayman Taha, Jane Courtney, Susan Mckeever
Abstract:
Understanding road networks plays an important role in navigation applications such as self-driving vehicles and route planning for individual journeys. Intersections of roads are essential components of road networks. Understanding the features of an intersection, from a simple T-junction to larger multi-road junctions, is critical to decisions such as crossing roads or selecting the safest routes. The identification and profiling of intersections from satellite images is a challenging task. While deep learning approaches offer the state of the art in image classification and detection, the availability of training datasets is a bottleneck in this approach. In this paper, a labelled satellite image dataset for the intersection recognition problem is presented. It consists of 14,692 satellite images of Washington DC, USA. To support other users of the dataset, an automated download and labelling script is provided for dataset replication. The challenges of construction and fine-grained feature labelling of a satellite image dataset are examined, including the issue of how to address features that are spread across multiple images. Finally, the accuracy of the detection of intersections in satellite images is evaluated. Keywords: satellite images, remote sensing images, data acquisition, autonomous vehicles
Procedia PDF Downloads 144
1931 The Potential Threat of Cyberterrorism to the National Security: Theoretical Framework
Authors: Abdulrahman S. Alqahtani
Abstract:
The revolution in computing and networks could revolutionise terrorism in the same way that it has brought about changes in other aspects of life. The modern technological era has confronted countries with a new set of security challenges. Many states and potential adversaries have the potential and capacity in cyberspace to carry out cyber-attacks in the future. Some of them are currently conducting surveillance, gathering and analysing technical information, and mapping the networks, nodes, and infrastructure of opponents, which may be exploited in future conflicts. This poster presents the results of a quantitative study (survey) to test the validity of the proposed theoretical framework for cyber terrorist threats. This theoretical framework will help provide an in-depth understanding of these new digital terrorist threats. It may also be a practical guide for managers and technicians in critical infrastructure to understand and assess the threats they face, and it might be the foundation for building a national strategy to counter cyberterrorism. In the beginning, basic information about the data is provided. To purify the data, reliability and exploratory factor analysis, as well as confirmatory factor analysis (CFA), were performed. Then, Structural Equation Modelling (SEM) was utilised to test the final model of the theory and to assess the overall goodness-of-fit between the proposed model and the collected data set. Keywords: cyberterrorism, critical infrastructure, national security, theoretical framework, terrorism
Procedia PDF Downloads 405
1930 Identifying Strategies for Improving Railway Services in Bangladesh
Authors: Armana Sabiha Huq, Tahmina Rahman Chowdhury
Abstract:
In this paper, based on a stated preference experiment, the service quality of Bangladesh Railway has been assessed, and particular importance has been given to investigating whether there exists a relationship between service quality and safety. For investigation purposes, environmental and organizational factors were assumed to determine the safety performance of the railway. Data collected from the survey have been analyzed by importance-performance analysis (IPA). In this paper, the well-known importance-performance analysis (IPA) has been modified by adopting importance weights determined through a structural equation modeling (SEM) approach and by plotting the gap between importance and performance on a visual graph. It has been found that there exists a relationship between safety and serviceability to some extent. Limited resources are an important factor constraining improvement of the safety and serviceability condition of the BD railway. Moreover, it is observed that only limited resources are available to monitor and improve the safety performance of the railway. Keywords: importance-performance analysis, GAP-IPA, SEM, serviceability, safety, factor analysis
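A minimal sketch of the modified importance-performance analysis described above: importance weights (here assumed to come from an SEM estimation) are compared against mean performance scores and each service attribute is assigned to an IPA quadrant. The attribute names and numbers are invented for illustration.

```python
# Hedged sketch: importance-performance analysis (IPA) with importance weights
# taken from an (assumed) SEM estimation. Attribute names and values are invented.
attributes = {
    # name: (SEM-derived importance weight, mean performance score on a 1-5 scale)
    "punctuality":         (0.82, 2.9),
    "on-board safety":     (0.74, 3.8),
    "ticketing service":   (0.41, 3.9),
    "station cleanliness": (0.38, 2.5),
}

imp_mean = sum(v[0] for v in attributes.values()) / len(attributes)
perf_mean = sum(v[1] for v in attributes.values()) / len(attributes)

for name, (imp, perf) in attributes.items():
    if imp >= imp_mean and perf < perf_mean:
        quadrant = "Concentrate here"
    elif imp >= imp_mean:
        quadrant = "Keep up the good work"
    elif perf < perf_mean:
        quadrant = "Low priority"
    else:
        quadrant = "Possible overkill"
    print(f"{name:20s} importance={imp:.2f} performance={perf:.2f} -> {quadrant}")
```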
Procedia PDF Downloads 140
1929 Neural Network Approaches for Sea Surface Height Predictability Using Sea Surface Temperature
Authors: Luther Ollier, Sylvie Thiria, Anastase Charantonis, Carlos E. Mejia, Michel Crépon
Abstract:
Sea Surface Height Anomaly (SLA) is a signature of the sub-mesoscale dynamics of the upper ocean. Sea Surface Temperature (SST) is driven by these dynamics and can be used to improve the spatial interpolation of SLA fields. In this study, we focused on the temporal evolution of SLA fields. We explored the capacity of deep learning (DL) methods to predict short-term SLA fields using SST fields. We used simulated daily SLA and SST data from the Mercator Global Analysis and Forecasting System, with a resolution of (1/12)° in the North Atlantic Ocean (26.5–44.42°N, -64.25 to -41.83°E), covering the period from 1993 to 2019. Using a slightly modified image-to-image convolutional DL architecture, we demonstrated that SST is a relevant variable for controlling the SLA prediction. With a learning process inspired by the teacher-forcing method, we managed to improve the SLA forecast at five days by using the SST fields as additional information. We obtained prediction errors of 12 cm (20 cm) for SLA evolution at scales smaller than mesoscale and at time scales of 5 days (20 days), respectively. Moreover, the information provided by the SST allows us to limit the SLA error to 16 cm at 20 days when learning the trajectory. Keywords: deep-learning, altimetry, sea surface temperature, forecast
Procedia PDF Downloads 90
1928 Multichannel Surface Electromyography Trajectories for Hand Movement Recognition Using Intrasubject and Intersubject Evaluations
Authors: Christina Adly, Meena Abdelmeseeh, Tamer Basha
Abstract:
This paper proposes a system for hand movement recognition using multichannel surface EMG (sEMG) signals obtained from 40 subjects performing 40 different exercises, which are available in the Ninapro (Non-Invasive Adaptive Prosthetics) database. First, we applied processing methods to the raw sEMG signals to convert them to their amplitudes. Second, we used deep learning methods to solve our problem by passing the preprocessed signals to fully connected neural networks (FCNNs) and recurrent neural networks (RNNs) with Long Short-Term Memory (LSTM). Using intrasubject evaluation, the accuracy of the FCNN is 72%, with a training time of around 76 minutes, while the RNN reaches 79.9% accuracy with a processing time of 8 minutes and 22 seconds. Third, we applied postprocessing methods to improve the accuracy, namely majority voting (MV) and the Movement Error Rate (MER). The accuracy after applying MV is 75% and 86% for the FCNN and RNN, respectively. The MER value has an inverse relationship with the prediction delay when varying the window length used for MV. A separate part of the study uses the RNN with intersubject evaluation. The experimental results showed that to obtain good testing accuracy with reasonable processing time, around 20 subjects should be used. Keywords: hand movement recognition, recurrent neural network, movement error rate, intrasubject evaluation, intersubject evaluation
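A compact sketch of the RNN branch described above: windowed multichannel sEMG amplitude sequences fed to an LSTM classifier in tf.keras. The window length, channel count, and layer sizes are assumptions, not the exact Ninapro configuration used by the authors.

```python
# Hedged sketch: LSTM classifier for windowed multichannel sEMG amplitudes.
# Window length (200 samples), channel count (12), and layer sizes are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_CLASSES, TIMESTEPS, CHANNELS = 40, 200, 12

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, CHANNELS)),
    layers.LSTM(64),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder arrays standing in for preprocessed Ninapro windows and movement labels.
X = np.random.rand(1000, TIMESTEPS, CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=1000)
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))
```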
Procedia PDF Downloads 142
1927 Analyzing Industry-University Collaboration Using Complex Networks and Game Theory
Authors: Elnaz Kanani-Kuchesfehani, Andrea Schiffauerova
Abstract:
Due to the novelty of the nanotechnology science, its highly knowledge intensive content, and its invaluable application in almost all technological fields, the close interaction between university and industry is essential. A possible gap between academic strengths to generate good nanotechnology ideas and industrial capacity to receive them can thus have far-reaching consequences. In order to be able to enhance the collaboration between the two parties, a better understanding of knowledge transfer within the university-industry relationship is needed. The objective of this research is to investigate the research collaboration between academia and industry in Canadian nanotechnology and to propose the best cooperative strategy to maximize the quality of the produced knowledge. First, a network of all Canadian academic and industrial nanotechnology inventors is constructed using the patent data from the USPTO (United States Patent and Trademark Office), and it is analyzed with social network analysis software. The actual level of university-industry collaboration in Canadian nanotechnology is determined and the significance of each group of actors in the network (academic vs. industrial inventors) is assessed. Second, a novel methodology is proposed, in which the network of nanotechnology inventors is assessed from a game theoretic perspective. It involves studying a cooperative game with n players each having at most n-1 decisions to choose from. The equilibrium leads to a strategy for all the players to choose their co-worker in the next period in order to maximize the correlated payoff of the game. The payoffs of the game represent the quality of the produced knowledge based on the citations of the patents. The best suggestion for the next collaborative relationship is provided for each actor from a game theoretic point of view in order to maximize the quality of the produced knowledge. One of the major contributions of this work is the novel approach which combines game theory and social network analysis for the case of large networks. This approach can serve as a powerful tool in the analysis of the strategic interactions of the network actors within the innovation systems and other large scale networks.Keywords: cooperative strategy, game theory, industry-university collaboration, knowledge production, social network analysis
Procedia PDF Downloads 258
1926 Analysis of Brain Signals Using Neural Networks Optimized by Co-Evolution Algorithms
Authors: Zahra Abdolkarimi, Naser Zourikalatehsamad
Abstract:
Until about 40 years ago, following the recognition of epilepsy, it was generally believed that epileptic attacks occurred randomly and suddenly. However, thanks to advances in mathematics and engineering, such attacks can now be predicted within a few minutes or hours. Accordingly, various algorithms for long-term prediction of the time and frequency of the first attack have been presented. In this paper, considering the nonlinear nature of brain signals and the dynamics of recorded brain signals, an ANFIS model is presented to predict the brain signals, since, according to the physiologic structure of the onset of attacks, more complex neural structures can better model the signal during attacks. The contribution of this work is the co-evolution algorithm for optimization of the ANFIS network parameters. Our objective is to predict brain signals using ANFIS, based on time series obtained from the brain signals of people suffering from epilepsy. Results reveal that, compared to other methods, this method is less sensitive to uncertainties such as the presence of noise and interruptions in the recorded brain signals, as well as more accurate. The long-term prediction capacity of the model illustrates the potential of implanted systems for warning, medication, and prevention. Keywords: co-evolution algorithms, brain signals, time series, neural networks, ANFIS model, physiologic structure, time prediction, epilepsy suffering, illustrates model
Procedia PDF Downloads 282