Search results for: artificial neuron network
4572 Hyperspectral Band Selection for Oil Spill Detection Using Deep Neural Network
Authors: Asmau Mukhtar Ahmed, Olga Duran
Abstract:
Hydrocarbon (HC) spills constitute a significant problem that causes great concern to the environment. With the latest technology (hyperspectral imaging) and state-of-the-art techniques (image processing tools), hydrocarbon spills can easily be detected at an early stage to mitigate the effects caused by such a menace. In this study, a controlled laboratory experiment was used, and clay soil was mixed and homogenized with different hydrocarbon types (diesel, bio-diesel, and petrol). The different mixtures were scanned with a HYSPEX hyperspectral camera under constant illumination to generate the hyperspectral datasets used for this experiment. So far, the Short Wave Infrared Region (SWIR) has been exploited in detecting HC spills with excellent accuracy. However, the Near-Infrared Region (NIR) is somewhat unexplored with regard to HC contamination and how it affects the spectrum of soils. In this study, a Deep Neural Network (DNN) was applied to the controlled datasets to detect and quantify the amount of HC spills in soils in the Near-Infrared Region. The initial results are extremely encouraging because they indicate that the DNN was able to identify features of HC in the Near-Infrared Region with a good level of accuracy.
Keywords: hydrocarbon, Deep Neural Network, short wave infrared region, near-infrared region, hyperspectral image
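As an illustration of the kind of model this abstract describes, the following is a minimal sketch (not the authors' network) of a small fully connected deep network in Keras mapping NIR reflectance bands to a contamination class; the band count, layer sizes, class labels, and placeholder data are assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assumed shapes: 288 NIR bands per pixel, 4 classes (clean, diesel, bio-diesel, petrol).
NUM_BANDS, NUM_CLASSES = 288, 4

model = keras.Sequential([
    keras.Input(shape=(NUM_BANDS,)),              # one spectrum per sample
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: (n_pixels, NUM_BANDS) reflectance values, y: integer contamination labels.
X = np.random.rand(1000, NUM_BANDS).astype("float32")    # placeholder data
y = np.random.randint(0, NUM_CLASSES, size=1000)
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.25, verbose=0)
```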
4571 Pilot-free Image Transmission System of Joint Source Channel Based on Multi-Level Semantic Information
Authors: Linyu Wang, Liguo Qiao, Jianhong Xiang, Hao Xu
Abstract:
In semantic communication, the existing joint source-channel coding (JSCC) wireless communication system without pilots has unstable transmission performance and cannot effectively capture the global information and location information of images. In this paper, a pilot-free image transmission system of joint source-channel coding based on multi-level semantic information (Multi-level JSCC) is proposed. The transmitter of the system is composed of two networks. The feature extraction network is used to extract the high-level semantic features of the image, compress the information transmitted by the image, and improve bandwidth utilization. The feature retention network is used to preserve low-level semantic features and image details to improve communication quality. The receiver is also composed of two networks. The received high-level semantic features are passed through a feature enhancement network and fused with the low-level semantic features in the same dimension, and then the image dimensions are restored through a feature recovery network, with the image location information used effectively for image reconstruction. This paper verifies that the proposed Multi-level JSCC algorithm can effectively transmit and recover image information in both the AWGN channel and the Rayleigh fading channel, and the peak signal-to-noise ratio (PSNR) is improved by 1–2 dB compared with other algorithms under the same simulation conditions.
Keywords: deep learning, JSCC, pilot-free picture transmission, multilevel semantic information, robustness
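For intuition only, here is a highly simplified two-branch joint source-channel autoencoder in Keras. It is not the Multi-level JSCC architecture itself; the layer shapes, the channel model (plain additive Gaussian noise), and the fusion strategy are assumptions made for illustration.

```python
from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(64, 64, 3))                       # input image

# Feature extraction branch: strided convolutions for compact high-level semantics.
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
high = layers.Conv2D(16, 3, padding="same")(x)             # high-level code

# Feature retention branch: shallow convolution to keep low-level detail.
low = layers.Conv2D(8, 3, strides=2, padding="same")(inp)

# Channel: additive white Gaussian noise applied to both transmitted codes.
high_rx = layers.GaussianNoise(0.1)(high)
low_rx = layers.GaussianNoise(0.1)(low)

# Receiver: enhance, bring both codes to a common size, fuse, and reconstruct.
low_up = layers.Conv2D(16, 3, padding="same", activation="relu")(low_rx)
high_up = layers.UpSampling2D(2)(high_rx)
fused = layers.Concatenate()([high_up, low_up])
y = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(fused)
out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(y)

model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")                # PSNR is derived from MSE
```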
4570 Computer-Aided Diagnosis of Polycystic Kidney Disease Using ANN
Authors: G. Anjan Babu, G. Sumana, M. Rajasekhar
Abstract:
Many inherited diseases and non-hereditary disorders are common in the development of renal cystic diseases. Polycystic kidney disease (PKD) is a disorder in which groups of cysts filled with a water-like fluid develop within the kidneys. PKD is responsible for 5-10% of end-stage renal failure treated by dialysis or transplantation. New experimental models and the application of molecular biology techniques have provided new insights into the pathogenesis of PKD. Researchers are showing keen interest in developing automated systems that apply computer-aided techniques to the diagnosis of diseases. In this paper, a multi-layered feed-forward neural network with one hidden layer is constructed, trained, and tested by applying the backpropagation learning rule for the diagnosis of PKD, based on physical symptoms and urinalysis test results collected from individual patients. The data collected from 50 patients are used to train and test the network. Among these samples, 75% of the data are used for training and the remaining 25% for testing. The trained network is then applied to new samples, and its output indicates whether the patient is normal or abnormal.
Keywords: dialysis, hereditary, transplantation, polycystic, pathogenesis
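A minimal sketch of the kind of single-hidden-layer feed-forward network described above, using scikit-learn's backpropagation-trained MLP with a 75/25 train/test split; the feature count and placeholder data are hypothetical, not the study's dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical feature matrix: physical symptoms and urinalysis results per patient.
X = np.random.rand(50, 8)                 # 50 patients, 8 numeric features
y = np.random.randint(0, 2, size=50)      # 0 = normal, 1 = abnormal (PKD)

# 75% of the samples for training, 25% for testing, as in the study.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# One hidden layer, trained by backpropagation (gradient descent on the log-loss).
clf = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                    solver="adam", max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```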
4569 Effects of LED Lighting on Visual Comfort with Respect to the Reading Task
Authors: Ayşe Nihan Avcı, İpek Memikoğlu
Abstract:
Lighting systems in interior architecture need to be designed according to the function of the space, the type of task within the space, and user comfort and needs. Desired and comfortable lighting levels increase task efficiency. When natural lighting is inadequate in a space, artificial lighting is additionally used to support the level of light. With technological developments, the characteristics of light are being researched comprehensively, and several business segments have focused on its qualitative and quantitative characteristics. These studies have increased awareness and usage of artificial lighting systems, and researchers have investigated the effects of lighting on the physical and psychological aspects of humans in various ways. The aim of this study is to research the effects of the illuminance levels of LED lighting on user visual comfort. Eighty participants from the Department of Interior Architecture of Çankaya University took part in three lighting scenarios of 200 lux, 500 lux, and 800 lux created with LED lighting. Each lighting scenario is evaluated according to six visual comfort criteria while a reading task is performed. The results of the study indicated that LED lighting with three different illuminance levels affects visual comfort in different ways. The results are limited to the participants and the questions used in this study.
Keywords: illuminance levels, LED lighting, reading task, visual comfort criteria
4568 Early Prediction of Disposable Addresses in Ethereum Blockchain
Authors: Ahmad Saleem
Abstract:
Ethereum is the second-largest cryptocurrency in the blockchain ecosystem. Along with standard transactions, it supports smart contracts and NFTs. Current research trends are focused on analyzing the overall structure of the network, its growth, and its behavior. Ethereum addresses are anonymous and can be created on the fly. The nature of the Ethereum network and its addresses makes it hard to predict their behavior. The activity period of an Ethereum address has not been analyzed much. Using machine learning, we can make early predictions about the disposability of an address. In this paper, we analyze the lifetime of addresses. We also identify and predict disposable addresses using machine learning models and compare the results.
Keywords: blockchain, Ethereum, cryptocurrency, prediction
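A toy sketch of the kind of supervised pipeline the abstract describes: hand-crafted per-address features (all hypothetical here, not the paper's feature set) fed to an off-the-shelf classifier that predicts whether an address will turn out to be disposable.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical early-lifetime features per address: transaction count in the first day,
# total ETH received, number of distinct counterparties, minutes from first to last tx.
X = np.random.rand(5000, 4)                # placeholder feature matrix
y = np.random.randint(0, 2, size=5000)     # 1 = disposable, 0 = long-lived

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```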
4567 Performance Analysis of Scalable Secure Multicasting in Social Networking
Authors: R. Venkatesan, A. Sabari
Abstract:
Developments in the social networking Internet scenario call for a scalable, authenticated, and secure group communication model such as multicasting. Multicasting is an inter-network service that offers efficient delivery of data from a source to multiple destinations. Even though multicast has been very successful at providing an efficient, best-effort data delivery service for huge groups, extending other features to multicast in a scalable way has proved to be a complex process. Separately, the requirement for secure electronic information has become gradually more apparent. Since multicast applications are deployed for mainstream purposes, the need to secure multicast communications will become significant.
Keywords: multicasting, scalability, security, social network
4566 Solving the Wireless Mesh Network Design Problem Using Genetic Algorithm and Simulated Annealing Optimization Methods
Authors: Moheb R. Girgis, Tarek M. Mahmoud, Bahgat A. Abdullatif, Ahmed M. Rabie
Abstract:
Mesh clients, mesh routers, and gateways are the components of a Wireless Mesh Network (WMN). In a WMN, gateways connect to the Internet using wireline links and supply Internet access services for users. Due to the limited wireless channel bit rate, multiple gateways are usually needed, which takes time and costs a lot of money to set up. WMN is a highly developed technology that offers end users wireless broadband access. It offers a high degree of flexibility compared to conventional networks; however, this attribute comes at the expense of a more complex construction. Therefore, the planning and optimization of WMNs is a challenge. In this paper, we address this challenge using a genetic algorithm and simulated annealing. The genetic algorithm and simulated annealing enable searching for a low-cost WMN configuration under constraints and determine the number of gateways used. Experimental results show that the genetic algorithm and simulated annealing perform well in minimizing WMN network costs while satisfying quality of service, and the proposed models significantly outperform the existing solutions.
Keywords: wireless mesh networks, genetic algorithms, simulated annealing, topology design
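The sketch below illustrates only the simulated-annealing half of such an approach, on a toy formulation: choose a subset of candidate router sites to act as gateways so that every mesh router lies within a coverage radius of some gateway, using as few gateways as possible. The cost function, neighbourhood move, and cooling schedule are assumptions for illustration, not the paper's formulation.

```python
import math
import random

random.seed(0)
routers = [(random.random(), random.random()) for _ in range(60)]   # mesh router sites
RADIUS = 0.3                                                        # coverage radius

def cost(gateway_mask):
    """Gateway count plus a heavy penalty for every uncovered router."""
    gws = [routers[i] for i, g in enumerate(gateway_mask) if g]
    uncovered = sum(
        1 for r in routers
        if not any(math.dist(r, g) <= RADIUS for g in gws)
    )
    return sum(gateway_mask) + 100 * uncovered

state = [random.random() < 0.2 for _ in range(len(routers))]         # initial solution
best, best_cost, temp = state[:], cost(state), 5.0
while temp > 0.01:
    neighbour = state[:]
    i = random.randrange(len(neighbour))
    neighbour[i] = not neighbour[i]                                  # flip one site
    delta = cost(neighbour) - cost(state)
    if delta < 0 or random.random() < math.exp(-delta / temp):       # Metropolis rule
        state = neighbour
        if cost(state) < best_cost:
            best, best_cost = state[:], cost(state)
    temp *= 0.995                                                    # geometric cooling
print("gateways used:", sum(best), "cost:", best_cost)
```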
4565 Development of Electroencephalograph Collection System in Language-Learning Self-Study System That Can Detect Learning State of the Learner
Authors: Katsuyuki Umezawa, Makoto Nakazawa, Manabu Kobayashi, Yutaka Ishii, Michiko Nakano, Shigeichi Hirasawa
Abstract:
This research aims to develop a self-study system equipped with an artificial teacher that gives advice to students by detecting the learners' states, and to evaluate language learning in a unified framework. 'Detecting the learners' means that the system understands the learners' learning conditions, such as each learner's degree of understanding, differences in each learner's thinking process, the degree of concentration or boredom in learning, and each learner's problem solving, which can be interpreted from learning behavior. In this paper, we propose a system that efficiently collects brain waves from learners, focusing only on brain waves among the available biological information for 'detecting the learners'. The conventional electroencephalograph (EEG) measurement method during learning using a simple EEG has the following disadvantages. (1) The start and end of EEG measurement must be done manually by the experiment participant or staff. (2) Even when the EEG signal is weak, it may not be noticed, and the data may not be obtained. (3) Since the acquired EEG data are stored on each PC, the time of data acquisition may differ between PCs. In this work, we developed a system that collects brain wave data on the server side, which overcomes the above disadvantages.
Keywords: artificial teacher, e-learning, self-study system, simple EEG
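A minimal sketch of the server-side collection idea: each learner's client streams raw EEG samples to a central HTTP endpoint, which timestamps and stores them so that start/stop handling and clock alignment no longer depend on each PC. The endpoint URL, payload format, and use of Flask/requests are assumptions for illustration, not the authors' implementation.

```python
# server.py: receive and timestamp simple-EEG samples centrally (Flask assumed).
import time
from flask import Flask, request, jsonify

app = Flask(__name__)
samples = []                                   # in practice: a database table

@app.route("/eeg", methods=["POST"])
def collect():
    payload = request.get_json()               # e.g. {"learner_id": ..., "values": [...]}
    payload["server_time"] = time.time()       # single server-side clock for all learners
    samples.append(payload)
    return jsonify(status="ok", stored=len(samples))

# Client side (hypothetical): push a chunk of EEG readings during the learning task.
# import requests
# requests.post("http://collector.example/eeg",
#               json={"learner_id": "s01", "values": [3.1, 2.8, 3.4]})
```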
4564 Crafting Robust Business Model Innovation Path with Generative Artificial Intelligence in Start-up SMEs
Authors: Ignitia Motjolopane
Abstract:
Small and medium enterprises (SMEs) play an important role in economies by contributing to economic growth and employment. In the fourth industrial revolution, the convergence of technologies and the changing nature of work created pressures on economies globally. Generative artificial intelligence (AI) may support SMEs in exploring, exploiting, and transforming business models to align with their growth aspirations. SMEs' growth aspirations fall into four categories: subsistence, income, growth, and speculative. Subsistence-oriented firms focus on meeting basic financial obligations and show less motivation for business model innovation. SMEs focused on income, growth, and speculation are more likely to pursue business model innovation to support growth strategies. SMEs' strategic goals link to distinct business model innovation paths depending on whether SMEs are starting a new business, pursuing growth, or seeking profitability. Integrating generative artificial intelligence in start-up SME business model innovation enhances value creation, user-oriented innovation, and SMEs' ability to adapt to dynamic changes in the business environment. The existing literature may lack comprehensive frameworks and guidelines for effectively integrating generative AI in start-ups' reiterative business model innovation paths. This paper examines the start-up business model innovation path with generative artificial intelligence. A theoretical approach is used to examine the reiterative business model innovation path of start-up-focused SMEs with generative AI, articulating how generative AI may be used to support SMEs in systematically and cyclically building the business model, covering most or all business model components, and in analysing and testing the business model's viability throughout the process. As such, the paper explores generative AI usage in market exploration. Market exploration poses unique challenges for start-ups compared to established companies due to a lack of extensive customer data, sales history, and market knowledge. Furthermore, the paper examines the use of generative AI in developing and testing viable value propositions and business models. In addition, the paper looks into identifying and selecting partners with generative AI support. Selecting the right partners is crucial for start-ups and may significantly impact success. The paper also examines generative AI usage in choosing the right information technology, the funding process, revenue model determination, and stress testing business models. Stress testing business models validates strong and weak points by applying scenarios and evaluating the robustness of individual business model components and the interrelations between components. Thus, stress testing the business model may address these uncertainties, as misalignment between an organisation and its environment has been recognised as the leading cause of company failure. Generative AI may be used to generate business model stress-testing scenarios. The paper is expected to make a theoretical and practical contribution to theory and approaches in crafting a robust business model innovation path with generative artificial intelligence in start-up SMEs.
Keywords: business models, innovation, generative AI, small medium enterprises
4563 Enhancing the Performance of Bug Reporting System by Handling Duplicate Reporting Reports: Artificial Intelligence Based Mantis
Authors: Afshan Saad, Muhammad Saad, Shah Muhammad Emaduddin
Abstract:
Bug reporting systems are among the most important tools guiding the different maintenance activities in software engineering. Duplicate bug reports, which describe the same bugs and issues in the bug reporting system repository, increase the processing time of the bug triager, who monitors all such activities, and of the software programmers who work and spend time on the reports assigned by the triager. These reports can reveal imperfections and degrade software quality. As the number of potential duplicate bug reports increases, the number of bug reports in the bug repository increases. Identifying duplicate bug reports helps decrease the development workload spent on fixing defects. However, it is difficult to manually identify all possible duplicates because of the huge number of already reported bugs. In this paper, an artificial intelligence based system using Mantis is proposed to automatically detect duplicate bug reports. When a new bug is submitted to the repository, the triager marks it with a tag, and the system investigates whether it is a duplicate of an existing bug report by matching. Reports with duplicate tags are eliminated from the repository, which will not only improve the performance of the system but can also save the cost and effort wasted on bug triage and on finding the duplicate bug.
Keywords: bug tracking, triager, tool, quality assurance
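One common way to implement the matching step sketched above (not necessarily the method used in the proposed Mantis extension) is TF-IDF text similarity between the new report and existing reports; a minimal sketch follows, with the reports and threshold invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

existing_reports = [
    "App crashes when saving a file with unicode name",
    "Login button unresponsive on settings page",
    "Crash on save when filename contains special characters",
]
new_report = "Application crashes while saving files with non-ascii filenames"

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(existing_reports + [new_report])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()   # new report vs. each existing

THRESHOLD = 0.3                     # assumed cut-off for tagging a duplicate
for report, score in zip(existing_reports, scores):
    if score >= THRESHOLD:
        print(f"possible duplicate ({score:.2f}): {report}")
```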
4562 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text
Authors: Duncan Wallace, M-Tahar Kechadi
Abstract:
In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data are well codified. However, much of the data present in Electronic Health Records (EHR) are unlikely to prove suitable for classic ML approaches. Furthermore, as such data are widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions that will cause them to repeatedly require medical attention. An OOHC acts as an ad-hoc delivery point for triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation are incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of deep learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. In this section, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved upon by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program in relation to lexemes within true positive and true negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
Keywords: artificial neural networks, data-mining, machine learning, medical informatics
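The following is a minimal sketch of one comparison the abstract describes: feeding the recurrent network's output probability, together with the normalized demographic features, into a secondary classifier. The recurrent model here is a small GRU over token sequences; all shapes, feature names, and placeholder data are assumed, and the authors' exact architectures are not reproduced.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.ensemble import RandomForestClassifier

VOCAB, MAXLEN = 5000, 200
texts = np.random.randint(1, VOCAB, size=(300, MAXLEN))   # tokenised clinical notes (placeholder)
demog = np.random.rand(300, 3)                             # assumed demographic features
y = np.random.randint(0, 2, size=300)                      # 1 = frequent attender

# Recurrent classifier over the free-text notes.
rnn = keras.Sequential([
    layers.Embedding(VOCAB, 64),
    layers.GRU(32),
    layers.Dense(1, activation="sigmoid"),
])
rnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
rnn.fit(texts, y, epochs=2, batch_size=32, verbose=0)

# Secondary classifier: RNN confidence combined with the structured features.
rnn_prob = rnn.predict(texts, verbose=0)                   # shape (300, 1)
combined = np.hstack([rnn_prob, demog])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(combined, y)
print("combined-model training accuracy:", clf.score(combined, y))
```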
4561 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance
Authors: Emad Alenany, M. Adel El-Baz
Abstract:
In this paper, the flow of different classes of patients into a hospital is modelled and analyzed using the queueing network analyzer (QNA) algorithm and discrete-event simulation. The input data for QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. The patient flows mostly match the real flows of a hospital in Egypt. Based on the analysis of the waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients with a higher performance target, to be served apart from the rest of the patients requiring a lower performance target, requires the same capacity while improving performance for the selected group with the higher target. Besides, it is shown that adopting the shortest processing time and shortest remaining processing time service policies, among the other tested policies, results in 11.47% and 13.75% reductions in average waiting time, respectively, relative to the first-come-first-served policy.
Keywords: queueing network, discrete-event simulation, health applications, SPT
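QNA-style analysis rests on two-moment approximations. As a minimal sketch (not the full QNA algorithm), the function below combines the exact M/M/c waiting time with the Allen-Cunneen correction that scales it by the arrival- and service-time variability parameters, which is the standard way such rate/variability inputs are used for a single station; the example numbers are invented.

```python
from math import factorial

def erlang_c(c, rho):
    """Probability that an arriving patient must wait (M/M/c)."""
    a = c * rho
    num = a**c / (factorial(c) * (1 - rho))
    den = sum(a**k / factorial(k) for k in range(c)) + num
    return num / den

def waiting_time(lam, mu, c, ca2=1.0, cs2=1.0):
    """Allen-Cunneen approximation of the mean queueing delay at a G/G/c station.
    lam: arrival rate, mu: service rate per server, c: number of servers,
    ca2/cs2: squared coefficients of variation of inter-arrival and service times."""
    rho = lam / (c * mu)
    assert rho < 1, "station is unstable"
    wq_mmc = erlang_c(c, rho) / (c * mu - lam)      # exact M/M/c delay
    return wq_mmc * (ca2 + cs2) / 2.0               # two-moment correction

# Example: 10 patients/hour, 2 clinicians each serving 6 patients/hour,
# moderately variable arrivals and services.
print(waiting_time(lam=10, mu=6, c=2, ca2=1.2, cs2=0.8), "hours")
```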
4560 Development of Energy Management System Based on Internet of Things Technique
Authors: Wen-Jye Shyr, Chia-Ming Lin, Hung-Yun Feng
Abstract:
The purpose of this study was to develop an energy management system for university campuses based on the Internet of Things (IoT) technique. The proposed IoT technique is based on WebAccess, is accessed via the Internet Explorer web browser, and applies the TCP/IP protocol. A case study of an IoT-based lighting energy usage management system is presented. The structure of the proposed IoT technique includes a perception layer, equipment layer, control layer, application layer, and network layer.
Keywords: energy management, IoT technique, sensor, WebAccess
4559 Using Two-Mode Network to Access the Connections of Film Festivals
Authors: Qiankun Zhong
Abstract:
In a global cultural context, film festival awards become authorities that define the aesthetic value of films. To study which genres and producing countries are valued by different film festivals and how those evaluations interact with each other, this research explored the interactions between film festivals through their selection of movies and the factors that lead film festivals to nominate the same movies. To do this, the author employed a two-mode network on the movies that won the highest awards at five international film festivals with the highest attendance in the past ten years (the Venice Film Festival, the Cannes Film Festival, the Toronto International Film Festival, Sundance Film Festival, and the Berlin International Film Festival) and the film festivals that nominated those movies. The title, genre, producing country, and language of 50 movies, and the range (regional, national, or international) and organizing country or area of 129 film festivals were collected. This created networks connected through nominating and awarding the same films. The author then assessed the density and centrality of these networks to answer the question: which film festivals tend to have more shared values with other festivals? Based on the eigenvector centrality of the two-mode network, Palm Springs, Robert Festival, Toronto, Chicago, and San Sebastian are the festivals that tend to nominate commonly appreciated movies. In contrast, Black Movie Film Festival has the unique value of generally not sharing nominations with other film festivals. A homophily test was applied to assess the clustering effects of films and film festivals. The result showed that movie genres (E-I index=0.55) and geographic location (E-I index=0.35) are possible indicators of film festival clustering. A blockmodel was also created to examine the structural roles of the film festivals and their meaning in a real-world context. By analyzing the same blocks with film festival attributes, it was identified that film festivals organized in the same area, with the same history, or with the same attitude towards independent films occupy the same structural roles in the network. Through the interpretation of the blocks, language was identified as an indicator that contributes to the role position of a film festival. Comparing the results of blockmodeling in different periods, it is seen that international film festivals contrast with the Hollywood industry's dominant values. The structural role dynamics provide evidence for a multi-value film festival network.
Keywords: film festivals, film studies, media industry studies, network analysis
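A minimal sketch of the two-mode (bipartite) construction and the eigenvector-centrality step using NetworkX; the festival and movie names below are placeholders, not the study's data, and the projection shown is only one common input for the density, homophily, and blockmodel steps.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Toy two-mode network: festivals connected to the movies they nominated.
edges = [
    ("Cannes", "Movie A"), ("Venice", "Movie A"), ("Berlin", "Movie A"),
    ("Toronto", "Movie B"), ("Sundance", "Movie B"), ("Cannes", "Movie B"),
    ("Berlin", "Movie C"), ("Venice", "Movie C"),
]
festivals = {f for f, _ in edges}
movies = {m for _, m in edges}

B = nx.Graph()
B.add_nodes_from(festivals, bipartite=0)
B.add_nodes_from(movies, bipartite=1)
B.add_edges_from(edges)

# Eigenvector centrality on the two-mode graph: which festivals sit on
# commonly nominated movies.
cent = nx.eigenvector_centrality(B, max_iter=1000)
print({f: round(cent[f], 3) for f in festivals})

# One-mode projection: festivals linked by the number of shared nominations.
proj = bipartite.weighted_projected_graph(B, festivals)
print(list(proj.edges(data=True)))
```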
4558 Thick Data Analytics for Learning Cataract Severity: A Triplet Loss Siamese Neural Network Model
Authors: Jinan Fiaidhi, Sabah Mohammed
Abstract:
Diagnosing cataract severity is an important factor in deciding whether to undertake surgery. It is usually conducted by an ophthalmologist or by taking a variety of fundus photographs that need to be examined by the ophthalmologist. This paper carries out an investigation using a Siamese neural net that can be trained with small anchor samples to score cataract severity. The model used in this paper is based on a triplet loss function that takes the ophthalmologist's best experience in rating positive and negative anchors according to a specific cataract scaling system. This approach, which takes the heuristics of the ophthalmologist, is generally called the thick data approach, a kind of machine learning approach that learns from a few shots. Clinical relevance: The lens of the eye is mostly made up of water and proteins. A cataract occurs when these proteins in the eye lens start to clump together and block light, causing impaired vision. This research aims at employing thick data machine learning techniques to rate the severity of the cataract using a Siamese neural network.
Keywords: thick data analytics, siamese neural network, triplet-loss model, few shot learning
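The core of such a model is the triplet loss itself. The sketch below (TensorFlow assumed; the margin, embedding network, and random placeholder images are illustrative, not the paper's) shows an embedding network applied to anchor/positive/negative images and the standard triplet loss that pulls same-severity pairs together and pushes different-severity pairs apart.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model, Input

def embedding_net(shape=(128, 128, 3)):
    """Small CNN that maps an eye image to a 64-d embedding (illustrative)."""
    inp = Input(shape=shape)
    x = layers.Conv2D(16, 3, activation="relu")(inp)
    x = layers.MaxPool2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(64)(x)
    return Model(inp, out)

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Standard triplet loss: d(a, p) must be smaller than d(a, n) by a margin."""
    d_pos = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    d_neg = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.reduce_mean(tf.maximum(d_pos - d_neg + margin, 0.0))

net = embedding_net()
a = tf.random.uniform((8, 128, 128, 3))   # anchors rated by the ophthalmologist
p = tf.random.uniform((8, 128, 128, 3))   # same-severity examples
n = tf.random.uniform((8, 128, 128, 3))   # different-severity examples
print(float(triplet_loss(net(a), net(p), net(n))))
```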
4557 Assessment of Water Quality Network in Karoon River by Dynamic Programming Approach (DPA)
Authors: M. Nasri Nasrabadi, A. A. Hassani
Abstract:
Karoon is one of the greatest and longest rivers of Iran. Because of its numerous industrial and agricultural centers and its use for drinking water, it has a strategic position in the west and southwest of Iran, and the optimal monitoring of its water quality is an essential and indispensable national issue. Due to financial constraints, water quality monitoring network design is an efficient way to manage water quality. The most crucial part is to find appropriate locations for monitoring stations. Considering the objectives of water usage, we evaluate the existing water quality sampling stations of this river. There are several methods for the assessment of existing monitoring stations, such as the Sanders method, multiple criteria decision making, and the dynamic programming approach (DPA); DPA was adopted in this study. The results showed that, based on the drinking water quality index, nine of the 20 existing monitoring stations should be retained on the river: Gorgor-Band-Ghir in zone A, Dez-Band-Ghir in zone B, Teir, Pole Panjom, and Zargan in zone C, and Darkhoein, Hafar, Chobade, and Sabonsazi in zone D. In addition, the stations on the Dez River have the best conditions.
Keywords: DPA, karoon river, network monitoring, water quality, sampling site
4556 Analysis of the IEEE 802.15.4 MAC Parameters to Achieve Lower Packet Loss Rates
Authors: Imen Bouazzi
Abstract:
The IEEE 802.15.4 standard utilizes the CSMA-CA mechanism to control nodes' access to the shared wireless communication medium. It is becoming the popular choice for various surveillance and control applications used in wireless sensor networks (WSN). The benefit of this standard is evaluated with regard to the packet loss probability, which depends on the configuration of the IEEE 802.15.4 MAC parameters and the traffic load. Our aim is to evaluate the effects of various configurable MAC parameters on the performance of beaconless IEEE 802.15.4 networks under different traffic loads; static values of the IEEE 802.15.4 MAC parameters (macMinBE, macMaxCSMABackoffs, and macMaxFrameRetries) will be evaluated. For the performance analysis, we use the ns-2 [2] network simulator.
Keywords: WSN, packet loss, CSMA/CA, IEEE-802.15.4
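To make the role of the studied parameters concrete, here is a toy Monte-Carlo sketch of the unslotted CSMA-CA back-off procedure, in which a frame is discarded after macMaxCSMABackoffs failed clear-channel assessments. The channel is modelled as busy with a fixed probability, which is a simplification of real traffic load and not the ns-2 model used in the paper.

```python
import random

def csma_ca_attempt(mac_min_be=3, mac_max_be=5, mac_max_csma_backoffs=4, p_busy=0.4):
    """Return True if channel access succeeds, False if the frame is discarded."""
    nb, be = 0, mac_min_be
    while True:
        _ = random.randint(0, 2**be - 1)       # random backoff delay (unit periods, not timed here)
        if random.random() > p_busy:            # clear channel assessment (CCA)
            return True
        nb += 1
        be = min(be + 1, mac_max_be)
        if nb > mac_max_csma_backoffs:
            return False                        # channel access failure

trials = 100_000
for nb_max in (2, 4, 6):
    losses = sum(not csma_ca_attempt(mac_max_csma_backoffs=nb_max) for _ in range(trials))
    print(f"macMaxCSMABackoffs={nb_max}: access-failure rate = {losses / trials:.4f}")
```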
4555 An Investigation of the Effects of Emotional Experience Induction on Mirror Neurons System Activity with Regard to Spectrum of Depressive Symptoms
Authors: Elyas Akbari, Jafar Hasani, Newsha Dehestani, Mohammad Khaleghi, Alireza Moradi
Abstract:
The aim of the present study was to assess the effect of emotional experience induction on mirror neuron system (MNS) activity with regard to the spectrum of depressive symptoms. For this purpose, in the first stage, 449 students of Kharazmi University of Tehran were selected randomly and completed the second version of the Beck Depression Inventory (BDI-II). Then, 36 students with standard Z-scores equal to or above +1.5 or equal to or below -1.5 were selected to construct two groups with high and low spectra of depressive symptoms. In the next stage, the baseline activity of the MNS (mu wave) was recorded with the electroencephalography (EEG) technique before presenting the positive and negative emotional video clips. The findings related to emotion induction (neutral, negative, and positive emotion) demonstrated that the activity of the recorded mirror neuron areas differed significantly between the depressive and non-depressive groups. These findings suggest that the processing of negative emotions in depressive individuals is probably due to the mirror neurons in the motor cortex matching the activity of cognitive regions with the person's schema. Considering the results of the present study, it could be said that the MNS provides a substrate where emotional disorders can be studied and evaluated.
Keywords: emotional experiences, mirror neurons, depressive symptoms, negative and positive emotion
4554 Urban Neighborhood Center Location Evaluating Method Based On UNA the GIS Spatial Analysis Tools: Kerman's Neighborhood in Tehran Case
Authors: Sepideh Jabbari Behnam, Shadabeh Gashtasbi Iraei, Elnaz Mohsenin, MohammadAli Aghajani
Abstract:
Urban neighborhoods, as important urban forming cells, play a key role in creating urban texture and integrated form. Nowadays, most neighborhood divisions are based on urban management systems, without considering social issues and the other aspects of urban life. This can cause problems such as providing inappropriate services for city dwellers and the loss of local identity. In this regard, for regenerating such neighborhoods, it is essential to locate neighborhood centers with appropriate access and services for all residents. The main objective of this article is to locate neighborhood centers in a way that simultaneously addresses most of the issues relating to physical features (such as the form of the access network and texture permeability) and other qualities such as land uses, densities, and social and economic features. This paper uses methods of spatial analysis to survey the spatial structure and space syntax of urban textures with an urban network analysis system. This is done with the GIS toolbox named UNA (Urban Network Analysis) and its five functions (Reach, Betweenness, Gravity, Closeness, and Straightness). These functions were written according to space syntax theory and offer the corresponding outputs. This paper tries to locate and evaluate the optimal location of neighborhood centers in order to create local centers. This is done by weighting each of these functions and taking spatial features into account.
Keywords: evaluate optimal location, local centers, location of neighborhood centers, spatial analysis, urban network
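The UNA measures are graph metrics computed over the street network. As a rough sketch (not the ArcGIS UNA toolbox itself), two of them, Reach within a network radius and Straightness as the ratio of straight-line to network distance, are computed below on a toy street graph with NetworkX; the graph, coordinates, and radius are placeholders.

```python
import math
import networkx as nx

# Toy street network: nodes are junctions with (x, y) coordinates,
# edge weights are street-segment lengths.
coords = {1: (0, 0), 2: (1, 0), 3: (1, 1), 4: (2, 1), 5: (0, 2)}
G = nx.Graph()
for n, xy in coords.items():
    G.add_node(n, pos=xy)
for u, v in [(1, 2), (2, 3), (3, 4), (1, 5), (3, 5)]:
    G.add_edge(u, v, length=math.dist(coords[u], coords[v]))

lengths = dict(nx.all_pairs_dijkstra_path_length(G, weight="length"))

def reach(node, radius):
    """Number of other nodes reachable within `radius` along the network."""
    return sum(1 for m, d in lengths[node].items() if m != node and d <= radius)

def straightness(node):
    """Average ratio of straight-line distance to network distance."""
    ratios = [math.dist(coords[node], coords[m]) / d
              for m, d in lengths[node].items() if m != node and d > 0]
    return sum(ratios) / len(ratios)

for n in G.nodes:
    print(n, "reach(1.5) =", reach(n, 1.5), " straightness =", round(straightness(n), 2))

# Betweenness and Closeness (two further UNA functions) come directly from NetworkX:
print(nx.betweenness_centrality(G, weight="length"))
```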
4553 Alternative Key Exchange Algorithm Based on Elliptic Curve Digital Signature Algorithm Certificate and Usage in Applications
Authors: A. Andreasyan, C. Connors
Abstract:
Elliptic Curve Digital Signature Algorithm-based X.509v3 certificates are becoming more popular due to their short public and private key sizes. Moreover, these certificates can be stored in Internet of Things (IoT) devices with limited resources, using less memory, and transmitted in network security protocols, such as Internet Key Exchange (IKE), Transport Layer Security (TLS), and Secure Shell (SSH), using less bandwidth. The proposed method gives another advantage, in that it increases the performance of the above-mentioned protocols in terms of key exchange by saving one scalar multiplication operation.
Keywords: cryptography, elliptic curve digital signature algorithm, key exchange, network security protocol
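For context, the snippet below shows the primitives involved, an ECDSA key pair of the kind carried in such certificates being used for signing and for an ECDH shared-secret derivation, with the Python cryptography package. It only illustrates the standard operations; the paper's proposed optimisation that saves a scalar multiplication is not reproduced here.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side holds an EC key pair (the kind carried in an ECDSA X.509v3 certificate).
alice_key = ec.generate_private_key(ec.SECP256R1())
bob_key = ec.generate_private_key(ec.SECP256R1())

# ECDSA: sign and verify with the same key pair.
signature = alice_key.sign(b"handshake transcript", ec.ECDSA(hashes.SHA256()))
alice_key.public_key().verify(signature, b"handshake transcript",
                              ec.ECDSA(hashes.SHA256()))

# ECDH: each side combines its private key with the peer's public key;
# the shared point is then run through a KDF to obtain session keys.
alice_secret = alice_key.exchange(ec.ECDH(), bob_key.public_key())
bob_secret = bob_key.exchange(ec.ECDH(), alice_key.public_key())
assert alice_secret == bob_secret
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"illustrative key derivation").derive(alice_secret)
print(len(session_key), "byte session key")
```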
4552 Intelligent Rheumatoid Arthritis Identification System Based Image Processing and Neural Classifier
Authors: Abdulkader Helwan
Abstract:
Rheumatoid arthritis is characterized as a chronic inflammatory disorder which affects the joints by damaging body tissues. Therefore, there is an urgent need for an effective intelligent identification system for knee rheumatoid arthritis, especially in its early stages. This paper develops a new intelligent system for the identification of rheumatoid arthritis of the knee utilizing image processing techniques and a neural classifier. The system involves two principal stages. The first is the image processing stage, in which the images are processed using techniques such as RGB-to-grayscale conversion, rescaling, median filtering, background extraction, image subtraction, segmentation using Canny edge detection, and feature extraction using pattern averaging. The extracted features are then used as inputs to the neural network, which classifies the knee X-ray images as normal or abnormal (arthritic) based on a backpropagation learning algorithm that involves training the network on 400 normal and abnormal X-ray knee images. The system was tested on 400 X-ray images, and the network showed good performance during that phase, resulting in a good identification rate of 97%.
Keywords: rheumatoid arthritis, intelligent identification, neural classifier, segmentation, backpropagation
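A rough OpenCV sketch of the preprocessing chain named above (grayscale conversion, rescaling, median filtering, edge detection, and pattern averaging into a feature vector). The file name, image size, and block size are placeholders, and the neural classifier itself is omitted.

```python
import cv2
import numpy as np

def extract_features(path, size=128, block=8):
    img = cv2.imread(path)                                # knee X-ray image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # RGB -> grayscale
    gray = cv2.resize(gray, (size, size))                 # rescaling
    smooth = cv2.medianBlur(gray, 5)                      # median filtering
    edges = cv2.Canny(smooth, 50, 150)                    # segmentation by Canny edges

    # Pattern averaging: mean intensity of each block -> compact feature vector.
    feats = [
        edges[r:r + block, c:c + block].mean() / 255.0
        for r in range(0, size, block)
        for c in range(0, size, block)
    ]
    return np.array(feats, dtype=np.float32)              # (size/block)^2 features

# features = extract_features("knee_xray_001.png")   # hypothetical file name
# The vector would then be fed to a backpropagation-trained classifier.
```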
4551 Satellite Imagery Classification Based on Deep Convolution Network
Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu
Abstract:
Satellite imagery classification is a challenging problem with many practical applications. In this paper, we designed a deep convolutional neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduced the inception module, which has multiple filters with different sizes at the same level, as the building block of our DCNN model. Second, we proposed a genetic algorithm based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results of the proposed hyper-parameter search method show that it guides the search towards better regions of the parameter space. Based on the found hyper-parameters, we built our DCNN models and evaluated their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.
Keywords: satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization
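As an illustration of the building block mentioned above, here is a minimal Keras sketch of an inception-style module with parallel filters of different sizes concatenated at the same level; the filter counts, input size, and number of classes are arbitrary, and the genetic hyper-parameter search is not shown.

```python
from tensorflow import keras
from tensorflow.keras import layers

def inception_module(x, f1=32, f3=48, f5=16, fp=16):
    """Parallel 1x1, 3x3, 5x5 convolutions and pooling, concatenated channel-wise."""
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(x)
    bp = layers.MaxPool2D(3, strides=1, padding="same")(x)
    bp = layers.Conv2D(fp, 1, padding="same", activation="relu")(bp)
    return layers.Concatenate()([b1, b3, b5, bp])

inp = keras.Input(shape=(128, 128, 3))           # satellite image patch (assumed size)
x = layers.Conv2D(32, 3, strides=2, activation="relu")(inp)
x = inception_module(x)
x = inception_module(x)
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(10, activation="softmax")(x)  # assumed number of land-cover classes
model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```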
4550 Green Innovation and Artificial Intelligence in Service
Authors: Fatemeh Khalili Varnamkhasti
Abstract:
Numerous nations have recognized the critical need to address environmental issues, such as air pollution, waste disposal, global warming, and natural resource depletion, through the application of green innovation. The rise of intelligent technologies has driven structural industrial changes that will help achieve carbon reduction. Artificial intelligence (AI) technology is an important part of digitalization, providing new technological tools and directions for the low-carbon development of enterprises. Accelerating the intelligent transformation of the manufacturing industry is an important strategic choice for realizing the green development transformation. The reason manufacturing intelligence can promote green innovation performance is that it is conducive to generating a "technology innovation effect" and a "cost reduction effect" that advance green technology innovation (GTI), effectively increasing desirable outputs and significantly reducing undesirable outputs. AI development will boost GTI only when the intensity of environmental regulation and the institutional environment is above a certain threshold value. However, AI development represented by industrial robot applications still has no obvious effect on GTI, even when R&D investment exceeds a certain threshold.
Keywords: greenhouse gas emissions, green infrastructure, artificial intelligence, environmental protection
4549 Quantum Decision Making with Small Sample for Network Monitoring and Control
Authors: Tatsuya Otoshi, Masayuki Murata
Abstract:
With the development and diversification of applications on the Internet, applications that require high responsiveness, such as video streaming, are becoming mainstream. Application responsiveness is not only a matter of communication delay but also a matter of the time required to grasp changes in network conditions. The tradeoff between accuracy and measurement time is a challenge in network control. People make countless decisions all the time, and our decisions seem to resolve tradeoffs between time and accuracy. When making decisions, people are known to make appropriate choices based on relatively small samples. Although there have been various studies on models of human decision-making, a model that integrates various cognitive biases, called "quantum decision-making", has recently attracted much attention. However, the modeling of small samples has not been examined much so far. In this paper, we extend the model of quantum decision-making to model decision-making with a small sample. In the proposed model, the state is updated by value-based probability amplitude amplification. By analytically obtaining a lower bound on the number of samples required for decision-making, we show that decision-making with a small number of samples is feasible.
Keywords: quantum decision making, small sample, MPEG-DASH, Grover's algorithm
4548 Emotion Classification Using Recurrent Neural Network and Scalable Pattern Mining
Authors: Jaishree Ranganathan, MuthuPriya Shanmugakani Velsamy, Shamika Kulkarni, Angelina Tzacheva
Abstract:
Emotions play an important role in everyday life. Analyzing these emotions or feelings from social media platforms like Twitter, Facebook, blogs, and forums, based on user comments and reviews, plays an important role in several areas, including brand monitoring, marketing strategies, reputation, and competitor analysis. The opinions or sentiments mined from such data help understand the current state of the user but do not directly provide intuitive insights into what actions should be taken to benefit the end user or business. The actionable pattern mining method provides suggestions or actionable recommendations on what changes or actions need to be taken in order to benefit the end user. In this paper, we propose automatic classification of emotions in Twitter data using a Recurrent Neural Network with Gated Recurrent Units. We achieve a training accuracy of 87.58% and a validation accuracy of 86.16%. We also extract action rules with respect to the user emotion that help to provide actionable suggestions.
Keywords: emotion mining, twitter, recurrent neural network, gated recurrent unit, actionable pattern mining
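A minimal sketch of a GRU-based tweet emotion classifier of the kind described above; the vocabulary size, sequence length, emotion classes, and placeholder data are assumptions, and the action-rule extraction step is not shown.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB, MAXLEN, EMOTIONS = 20000, 40, 6    # e.g. joy, sadness, anger, fear, love, surprise

model = keras.Sequential([
    layers.Embedding(VOCAB, 128),          # token ids -> dense vectors
    layers.GRU(64),                        # gated recurrent unit over the tweet
    layers.Dense(EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder tokenised tweets and emotion labels.
X = np.random.randint(1, VOCAB, size=(2000, MAXLEN))
y = np.random.randint(0, EMOTIONS, size=2000)
model.fit(X, y, validation_split=0.2, epochs=3, batch_size=64, verbose=0)
```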
4547 Integrated On-Board Diagnostic-II and Direct Controller Area Network Access for Vehicle Monitoring System
Authors: Kavian Khosravinia, Mohd Khair Hassan, Ribhan Zafira Abdul Rahman, Syed Abdul Rahman Al-Haddad
Abstract:
The CAN (controller area network) bus is a multi-master, message-broadcast system. The messages sent on the CAN are used to communicate state information, referred to as signals, between different ECUs, which provides data consistency in every node of the system. OBD-II dongles, based on a request-and-response method, are the widespread solution among researchers for extracting sensor data from cars. Unfortunately, most past research does not consider the resolution and quantity of the input data extracted through OBD-II technology. The maximum feasible scan rate is only 9 queries per second, which provides 8 data points per second when using the well-known ELM327 OBD-II dongle. This study aims to develop and design a programmable, latency-sensitive vehicle data acquisition system that improves the modularity and flexibility needed to extract exact, trustworthy, and fresh car sensor data at higher frequency rates. Furthermore, the researcher must break apart, thoroughly inspect, and observe the internal network of the vehicle, which may cause severe damage to the expensive ECUs of the vehicle due to intrinsic vulnerabilities of the CAN bus during initial research. The desired sensor data were collected from various vehicles utilizing a Raspberry Pi 3 as the computing and processing unit, using the OBD (request-response) and direct CAN methods at the same time. Two types of data were collected for this study. The first is CAN bus frame data, which comprises each line of hex data sent from an ECU; the second is OBD data, which represents the limited data requested from the ECU under standard conditions. The proposed system is a reconfigurable, human-readable, multi-task telematics device that can be fitted into any vehicle with minimum effort and minimum time lag in the data extraction process. A standard-operational-procedure experimental vehicle network test bench was developed and can be used for future vehicle network testing experiments.
Keywords: CAN bus, OBD-II, vehicle data acquisition, connected cars, telemetry, Raspberry Pi3
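A brief sketch of the direct-CAN half of such a logger, using the python-can package on a Raspberry Pi with a SocketCAN interface; the channel name, output file, and frame count are assumptions, and the OBD request-response path is not shown.

```python
import csv
import can   # python-can

# SocketCAN interface exposed by the Pi's CAN transceiver (interface name assumed).
bus = can.interface.Bus(channel="can0", bustype="socketcan")

with open("can_frames.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "arbitration_id", "dlc", "data_hex"])
    for _ in range(1000):                       # capture 1000 raw frames
        msg = bus.recv(timeout=1.0)             # blocks until a frame arrives
        if msg is None:
            continue
        writer.writerow([msg.timestamp, hex(msg.arbitration_id),
                         msg.dlc, msg.data.hex()])
```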
4546 Distributed Generation Connection to the Network: Obtaining Stability Using Transient Behavior
Authors: A. Hadadi, M. Abdollahi, A. Dustmohammadi
Abstract:
The growing use of DGs in distribution networks provides many advantages and also causes new problems, which should be anticipated and solved with appropriate solutions. One of the problems is transient voltage drop and short circuit in the electrical network in the presence of distributed generation, which can lead to instability. The appearance of a short circuit will cause loss of generator synchronism; however, if the generator is able to recover synchronism after the faulty element is removed, it will remain stable. In order to increase system reliability and generator lifetime, some strategies should be planned to apply even in situations where a fault prevents generators from separating. In this paper, a fault current limiter is installed to prevent DG separation from the grid when a fault occurs. Furthermore, an innovative objective function is applied to determine the optimal impedance of the fault current limiter in order to improve the transient stability of distributed generation. The fault current limiter can prevent the sudden acceleration of the generator rotor after fault occurrence and thereby improve network transient stability by reducing the current flow in a fast and effective manner. In fact, by applying the impedance created by the fault current limiter on the current injection path from the DG to the fault location when a short circuit happens, the critical fault clearing time improves remarkably. Therefore, the protective relay has more time to clear the fault and isolate the fault zone without any instability. Finally, different transient scenarios for the sustainable connection of small-scale synchronous generators to the distribution network are presented.
Keywords: critical clearing time, fault current limiter, synchronous generator, transient stability, transient states
4545 ArcGIS as a Tool for Infrastructure Documentation and Asset Management: Establishing a GIS for Computer Network Documentation
Authors: John Segars
Abstract:
Built out of a real-world need for better, more detailed asset and infrastructure documentation, this project lays out the case for using the database functionality of ArcGIS as a tool to track and maintain infrastructure location, status, maintenance, and serviceability. Workflows and processes that may be applied to an organization's infrastructure needs will be presented and detailed, allowing it to make use of the robust tools which surround the ArcGIS platform. The end result is a value-added information system framework with a geographic component, e.g., the spatial location of various IT assets: a detailed set of records which not only documents location but also captures the maintenance history for assets, along with photographs and documentation of these various assets as attachments to the numerous feature class items. In addition to the asset location and documentation benefits, staff will be able to log into the devices and pull SNMP (Simple Network Management Protocol) based query information from within the user interface. The entire collection of information may be displayed in ArcGIS, via a JavaScript-based web application, or via queries to the back-end database. The project is applicable to all organizations which maintain an IT infrastructure but specifically targets post-secondary educational institutions, where access to ESRI resources is generally already available in house.
Keywords: ESRI, GIS, infrastructure, network documentation, PostgreSQL
4544 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network
Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
We use a database that records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. While models learned from these data that can predict future traffic speed would be beneficial for applications such as car navigation systems, building predictive models for every link becomes a nontrivial job if the number of links in a given network is huge. An advantage of adopting the k-nearest neighbor (k-NN) method as the predictive model is that it does not require any explicit model building. Instead, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale behind this is that it suffices to look only at the recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, which are the current and past traffic speeds of the target link and of the neighboring links up- and downstream of it. The performances of these models are compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
Keywords: big data, k-NN, machine learning, traffic speed prediction
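A minimal sketch of the restricted-search idea: fit a k-NN regressor only on the most recent portion of the history rather than the full database, using current and lagged speeds of the target link and its neighbours as features. The feature layout, window size, and placeholder data are illustrative, not the study's setup.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n_records = 50_000                         # five-minute records in the database
X_all = rng.random((n_records, 9))         # current + past speeds of target link and
                                           # its up/down-stream neighbours (assumed layout)
y_all = rng.random(n_records)              # speed of the target link 15 minutes ahead

def knn_predict(query, window=None, k=10):
    """Search only the most recent `window` records (None = whole database)."""
    X = X_all if window is None else X_all[-window:]
    y = y_all if window is None else y_all[-window:]
    model = KNeighborsRegressor(n_neighbors=k).fit(X, y)
    return model.predict(query.reshape(1, -1))[0]

query = rng.random(9)
print("full search  :", knn_predict(query))
print("recent 5,000 :", knn_predict(query, window=5_000))   # faster search, similar accuracy
```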
4543 Microstructural Interactions of Ag and Sc Alloying Additions during Casting and Artificial Ageing to a T6 Temper in a A356 Aluminium Alloy
Authors: Dimitrios Bakavos, Dimitrios Tsivoulas, Chaowalit Limmaneevichitr
Abstract:
Aluminium cast alloys of the Al-Si system are widely used for shape castings. Their microstructures can be further improved, on one hand by alloying modification and on the other by optimised artificial ageing. In this project, four hypoeutectic Al alloys, the A356, A356+Ag, A356+Sc, and A356+Ag+Sc, have been studied. The interactions of Ag and Sc during solidification and artificial ageing at 170°C to a T6 temper have been investigated in detail. The evolution of the eutectic microstructure is studied by thermal analysis and interrupted solidification. The ageing kinetics of the alloys have been identified by hardness measurements. The precipitate phases, number density, and chemical composition have been analysed by means of transmission electron microscopy (TEM) and EDS analysis. Furthermore, the effect of solution heat treatment (SHT) on the Si eutectic particles of the four alloys has been investigated by means of optical microscopy and image analysis, and the UTS has been compared with the UTS of the alloys after casting. The results suggest that the Ag additions significantly enhance the ageing kinetics of the A356 alloy. The formation of β″ precipitates was kinetically accelerated, and increases of 8% and 5% in peak hardness have been observed compared to the base A356 and the A356-Sc alloy, respectively. The EDS analysis demonstrates that Ag is present in the β″ precipitate composition. After prolonged ageing for 100 hours at 170°C, the A356-Ag exhibits 17% higher hardness than the other three alloys. During solidification, Sc additions change the macroscopic eutectic growth mode to the propagation of a defined eutectic front from the mold walls, opposite to the heat flux direction. In contrast, Ag has no significant effect on the solidification mode, revealing a macroscopic eutectic growth similar to the base A356 alloy. However, the mechanical strength of the as-cast A356-Ag, A356-Sc, and A356+Ag+Sc alloys has increased by 5, 30, and 35 MPa, respectively. This outcome is attributed to the refinement of the eutectic Si that takes place, which is strong in the A356-Sc alloy and more profound when silver and scandium are combined. Moreover, after SHT, the Al alloy with the highest mechanical strength is the one with Ag additions, in contrast to the as-cast condition, where the Sc and Sc+Ag alloys were the strongest. The increase in strength is mainly attributed to the dissolution of grain boundary precipitates, the increase of the solute content in the matrix, and the spheroidisation and coarsening of the eutectic Si. Therefore, we could safely conclude that, for an A356 hypoeutectic alloy, additions of Ag exhibit a refining effect on the Si eutectic, which is improved when combined with Sc. In addition, Ag enhances the ageing kinetics, increases the hardness, and retains its strength during prolonged artificial ageing of an Al-7Si-0.3Mg hypoeutectic alloy. Finally, the addition of Sc is beneficial due to the refinement of the α-Al grain and the modification-refinement of the eutectic Si, increasing the strength of the as-cast product.
Keywords: ageing, casting, mechanical strength, precipitates