Search results for: distributed network

4083 A Pattern Recognition Neural Network Model for Detection and Classification of SQL Injection Attacks

Authors: Naghmeh Moradpoor Sheykhkanloo

Abstract:

Structured Query Language injection (SQLi) is a code injection technique in which malicious SQL statements are inserted into a given SQL database simply by using a web browser. Data loss, disclosure of confidential information, and unauthorised modification of data are among the severe damages that an SQLi attack can inflict on a database. SQLi has also been rated the number-one threat among the top ten web application risks published by the Open Web Application Security Project (OWASP), an open community dedicated to enabling organisations to develop, acquire, operate, and maintain applications that can be trusted. In this paper, we propose an effective pattern recognition neural network model for the detection and classification of SQLi attacks. The proposed model is built from three main elements: a Uniform Resource Locator (URL) generator that produces thousands of malicious and benign URLs; a URL classifier that 1) labels each generated URL as benign or malicious and 2) assigns the malicious URLs to different SQLi attack categories; and a neural network model that 1) detects whether a given URL is malicious or benign and 2) identifies the type of SQLi attack for each malicious URL. The model is first trained and then evaluated on thousands of benign and malicious URLs. Experimental results are presented to demonstrate the effectiveness of the proposed approach.
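
The detection stage described above amounts to training a classifier on features extracted from benign and malicious URLs. A minimal sketch of that general idea is given below, assuming scikit-learn and hypothetical example URLs; it is an illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's model): label URLs as benign or SQLi-like
# using character n-gram features and a small neural network.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical training data standing in for the generated URL corpus.
urls = [
    "http://shop.example/item?id=42",
    "http://shop.example/item?id=42' OR '1'='1",
    "http://shop.example/search?q=books",
    "http://shop.example/item?id=1; DROP TABLE users--",
]
labels = ["benign", "malicious", "benign", "malicious"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams capture SQL tokens
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(urls, labels)
print(model.predict(["http://shop.example/item?id=7 UNION SELECT password FROM users"]))
```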

Keywords: neural networks, pattern recognition, SQL injection attacks, SQL injection attack classification, SQL injection attack detection

Procedia PDF Downloads 454
4082 Elucidation of the Dynamics of Murine Double Minute 2 Sheds Light on Anti-Cancer Drug Development

Authors: Nigar Kantarci Carsibasi

Abstract:

Coarse-grained elastic network models, namely the Gaussian network model (GNM) and the anisotropic network model (ANM), are utilized to investigate the fluctuation dynamics of Murine Double Minute 2 (MDM2), the native inhibitor of p53. The conformational dynamics of MDM2 are elucidated in the unbound, p53-bound, and non-peptide small-molecule-inhibitor-bound forms. The aim is to gain insight into the alterations brought to the global dynamics of MDM2 by the native peptide inhibitor p53 and by two small molecule inhibitors (HDM201 and NVP-CGM097) that are in clinical-stage cancer studies. MDM2 undergoes significant conformational changes upon inhibitor binding, providing evidence of an induced-fit mechanism. The small molecule inhibitors examined in this work exhibit fluctuation dynamics and characteristic mode shapes similar to those of p53 when complexed with MDM2, which would shed light on the design of novel small molecule inhibitors for cancer therapy. The results show that residues Phe 19, Trp 23, and Leu 26 reside in the minima of the slowest modes of p53, pointing to the accepted three-finger binding model. Pro 27 displays the most significant hinge present in p53 and comes out as another functionally important residue. Three distinct regions are identified in MDM2 for which significant conformational changes are observed upon binding. Regions I (residues 50-77) and III (residues 90-105) correspond to the binding interface of MDM2, including α2, L2, and α4, which are stabilized during complex formation. Region II (residues 77-90) exhibits large-amplitude motion and is highly flexible both in the absence and in the presence of p53 or other inhibitors. MDM2 exhibits a scattered profile in the fastest modes of motion, while binding of p53 and the inhibitors puts restraints on the MDM2 domains, clearly distinguishing the kinetically hot regions. Mode shape analysis revealed that the α4 domain controls the size of the cleft, keeping it narrow in unbound MDM2 and open in the bound states for proper penetration and binding of p53 and the inhibitors, which again points to an induced-fit mechanism of p53 binding. p53 interacts with α2 and α4 in a synchronized manner. Collective modes are shifted upon inhibitor binding, i.e., the characteristic motion of the second mode in the MDM2-p53 complex is observed in the first mode of apo MDM2; however, apo and bound MDM2 exhibit similar features in the softest modes, pointing to pre-existing modes that facilitate ligand binding. Although much higher amplitude motions are attained in the presence of the non-peptide small molecule inhibitors than with p53, they demonstrate close similarity; hence, NVP-CGM097 and HDM201 succeed in mimicking the p53 behaviour well. Elucidating how drug candidates alter the global and conformational dynamics of MDM2 would shed light on the rational design of novel anticancer drugs.
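
The GNM analysis described above reduces to building a Kirchhoff (connectivity) matrix from Cα contacts and examining its slowest eigenmodes. A minimal sketch follows, using synthetic coordinates and a 7 Å cutoff as assumptions; it illustrates the method rather than reproducing the authors' computation on MDM2.

```python
# Minimal Gaussian Network Model sketch: slow modes from a Kirchhoff matrix.
import numpy as np

def gnm_modes(coords, cutoff=7.0):
    """coords: (N, 3) array of C-alpha positions; returns non-trivial eigenvalues/vectors."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(dist < cutoff).astype(float)            # -1 for contacting pairs
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))   # diagonal = contact count
    evals, evecs = np.linalg.eigh(kirchhoff)
    return evals[1:], evecs[:, 1:]                        # drop the trivial zero mode

# Synthetic stand-in for a C-alpha trace (real use would parse a PDB file).
rng = np.random.default_rng(0)
coords = np.cumsum(rng.normal(scale=1.5, size=(100, 3)), axis=0)

evals, evecs = gnm_modes(coords)
slow_weight = evecs[:, 0] ** 2 / evals[0]                 # per-residue weight of the slowest mode
print("residues at slow-mode minima (hinge candidates):", np.argsort(slow_weight)[:5])
```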

Keywords: cancer, drug design, elastic network model, MDM2

Procedia PDF Downloads 119
4081 Double Clustering as an Unsupervised Approach for Order Picking of Distributed Warehouses

Authors: Hsin-Yi Huang, Ming-Sheng Liu, Jiun-Yan Shiau

Abstract:

Planning the order-picking lists of warehouses is a significant challenge when the costs associated with logistics weigh heavily on operational performance. In the e-commerce era, this task is especially important because the demands on productive processes are high. Nowadays, many order planning techniques employ supervised machine learning algorithms. However, defining which features should be processed by such algorithms is not a simple task, and it is crucial to the proposed technique's success. Against this background, we consider whether unsupervised algorithms can enhance the planning of order-picking lists for distributed warehouses. A Zone2 picking approach, which is based on applying clustering algorithms twice, is developed. A simplified example is given to demonstrate the merit of our approach.
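
A minimal sketch of the "cluster twice" idea is given below, assuming scikit-learn and hypothetical order coordinates; the zone sizes and list sizes are illustrative choices, not the paper's settings.

```python
# Minimal double-clustering sketch: first group orders by warehouse zone, then
# split each zone into picking lists of nearby orders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical orders: each row is an order described by its item storage coordinates.
orders = rng.uniform(0, 100, size=(200, 2))

# First clustering: assign orders to warehouse zones.
zones = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(orders)

# Second clustering: within each zone, group orders into picking lists.
picking_lists = {}
for z in np.unique(zones):
    members = np.where(zones == z)[0]
    k = max(1, len(members) // 20)                      # roughly 20 orders per picking list
    sub = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(orders[members])
    for lst in np.unique(sub):
        picking_lists[(int(z), int(lst))] = members[sub == lst].tolist()

print(f"{len(picking_lists)} picking lists generated across {len(np.unique(zones))} zones")
```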

Keywords: order picking, warehouse, clustering, unsupervised learning

Procedia PDF Downloads 147
4080 K-12 Students’ Digital Life: Activities and Attitudes

Authors: Meital Amzalag, Sharon Hardof-Jaffe

Abstract:

In the last few decades, children and youth have been immersed in digital technologies, and recent studies have explored the implications of technology use for their leisure and learning activities. Educators face an essential need to utilize technologies and implement them in the curriculum; to do that, they need to understand how young people use digital technology. This study aims to explore K-12 students' digital lives from their point of view, to reveal their digital activities and the age and gender differences with respect to those activities, and to present the students' attitudes towards technologies in learning. The study approach is quantitative and includes 354 students aged 6-16 from three schools in Israel. The online questionnaire was based on self-reports and consists of four parts: digital activities: leisure time activities (such as social networks and gaming types), search activities (information types and platforms), and digital application use (e.g., calendar, notes); digital skills (requisite digital platform skills such as evaluation and creativity); social and emotional aspects of digital use (conducting digital activities alone or with friends, and feelings and emotions during digital use, such as happiness or bullying); and attitudes towards digital integration in learning. An academic ethics board approved the study. The main findings reveal the most popular K-12 digital activities: navigating social network sites, watching TV, playing mobile games, seeking information on the internet, and playing computer games. In addition, the findings reveal age differences in digital activities, such as significant differences in the use of social network sites. Moreover, the findings show gender differences: girls use social network sites more, while boys play more digital games, which are characterized by high complexity and challenge. Additionally, we found positive attitudes towards technology integration in school: students perceive technology as enhancing creativity, promoting active learning, encouraging self-learning, and helping students with learning difficulties. The presentation will provide an up-to-date, accurate picture of the use of various digital technologies by K-12 students and will discuss the learning potential of such use and how to implement digital technologies in the curriculum. Acknowledgments: This study is part of a broader study about K-12 digital life in Israel and is supported by Mofet, the Israel Institute for Teachers' Development.

Keywords: technology and learning, K-12, digital life, gender differences

Procedia PDF Downloads 118
4079 Preliminary Analysis of a Phylogeography Study of Dendropsophus minutus in the Guiana Shield

Authors: Mera-Martínez Daniela

Abstract:

Dendropsophus minutus is a species distributed across South America, including the slopes of the Andes, the Amazon basin, the forests of southeastern Brazil, and the Guianas, where tropical forests are characteristic. The relationships among the amphibians found in this region were assessed with molecular markers, with the objective of analysing whether geographic distance influences the structure of D. minutus populations in the Guiana Shield. We analysed 65 sequences from three localities of the Guiana Shield, for which haplotype networks, a Mantel test, and a phylogeny were produced to assess this influence. The results show a haplotypic difference between the Guyana locality and Suriname and French Guiana, but this difference is not correlated with geographic distance and may instead be influenced by the conditions of the sites.
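
The Mantel test mentioned above is a permutation correlation between a genetic distance matrix and a geographic distance matrix. A minimal sketch is shown below with randomly generated matrices standing in for the real sequence and locality data.

```python
# Minimal Mantel test sketch (illustration only; data are synthetic).
import numpy as np

def mantel(dist_a, dist_b, permutations=999, seed=0):
    rng = np.random.default_rng(seed)
    idx = np.triu_indices_from(dist_a, k=1)
    a, b = dist_a[idx], dist_b[idx]
    r_obs = np.corrcoef(a, b)[0, 1]
    count = 0
    n = dist_a.shape[0]
    for _ in range(permutations):
        p = rng.permutation(n)                       # permute rows and columns together
        r_perm = np.corrcoef(dist_a[np.ix_(p, p)][idx], b)[0, 1]
        count += r_perm >= r_obs
    return r_obs, (count + 1) / (permutations + 1)   # one-tailed permutation p-value

# Hypothetical pairwise geographic and genetic distances for 12 samples.
rng = np.random.default_rng(1)
pts = rng.uniform(size=(12, 2))
geo = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
gen = geo + rng.normal(scale=0.1, size=geo.shape)
gen = (gen + gen.T) / 2
np.fill_diagonal(gen, 0.0)

r, p = mantel(gen, geo)
print(f"Mantel r = {r:.3f}, p = {p:.4f}")
```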

Keywords: phylogeography, Dendropsophus, geographic distance, molecular markers

Procedia PDF Downloads 198
4078 Dynamic Fault Diagnosis for Semi-Batch Reactor Under Closed-Loop Control via Independent RBFNN

Authors: Abdelkarim M. Ertiame, D. W. Yu, D. L. Yu, J. B. Gomm

Abstract:

In this paper, a new robust fault detection and isolation (FDI) scheme is developed to monitor a multivariable nonlinear chemical process, the Chylla-Haase polymerization reactor, while it is under cascade PI control. The scheme employs a radial basis function neural network (RBFNN) in an independent mode to model the process dynamics and uses the weighted sum-squared prediction error as the residual. The recursive orthogonal least squares (ROLS) algorithm is employed to train the model, overcoming the training difficulty of the independent mode of the network. A second RBFNN is then used as a fault classifier to isolate faults from the different features contained in the residual vector. Several actuator and sensor faults are simulated in a nonlinear Simulink model of the reactor, and the scheme is used to detect and isolate the faults on-line. The simulation results illustrate the effectiveness and robustness of the proposed method, even when the process is subject to disturbances and uncertainties, including significant changes in the monomer feed rate, fouling factor, impurity factor, ambient temperature, and measurement noise.
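
The detection step described above boils down to predicting the process output with an RBF model, forming the squared prediction error as a residual, and flagging a fault when the residual exceeds a threshold. The sketch below illustrates that idea on a synthetic process; ordinary least squares stands in for the ROLS training algorithm, and all data and parameters are hypothetical.

```python
# Minimal residual-based fault detection sketch with an RBF model.
import numpy as np

def rbf_features(x, centers, width):
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(d / width) ** 2)

rng = np.random.default_rng(0)
# Hypothetical process: one output driven by two inputs, plus small noise.
u = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(u[:, 0]) + 0.5 * u[:, 1] ** 2 + 0.01 * rng.normal(size=500)

centers = u[rng.choice(500, 25, replace=False)]
phi = rbf_features(u, centers, width=0.5)
w, *_ = np.linalg.lstsq(phi, y, rcond=None)         # least-squares stand-in for ROLS

residual = (y - phi @ w) ** 2                       # squared prediction error
threshold = residual.mean() + 3 * residual.std()

# Simulated sensor fault: output biased by +0.5.
y_fault = np.sin(u[:, 0]) + 0.5 * u[:, 1] ** 2 + 0.5
r_fault = (y_fault - phi @ w) ** 2
print("fault flagged:", bool(np.mean(r_fault > threshold) > 0.5))
```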

Keywords: robust fault detection, cascade control, independent RBF model, RBF neural networks, Chylla-Haase reactor, FDI under closed-loop control

Procedia PDF Downloads 489
4077 Crop Classification Using Unmanned Aerial Vehicle Images

Authors: Iqra Yaseen

Abstract:

Image processing, one of the well-known areas of computer science and engineering and a core part of computer vision, has been essential to automation. In remote sensing, medical science, and many other fields, it has made it easier to uncover previously undiscovered facts. Grading of diverse items is now possible thanks to neural network algorithms, categorization, and digital image processing. Its use in the classification of agricultural products, particularly in the grading of seeds or grains and their cultivars, is widely recognized. A grading and sorting system enables the preservation of time, consistency, and uniformity. Global population growth has led to an increase in demand for food staples, biofuel, and other agricultural products; to meet this demand, the available resources must be used and managed more effectively. Image processing is growing rapidly in the field of agriculture, and many applications have been developed for crop identification and classification, land and disease detection, and the measurement of other crop parameters. Vegetation localization is the basis of these tasks, since it identifies the areas where crops are present. The productivity of the agriculture industry can be increased through image processing based on Unmanned Aerial Vehicle (UAV) and satellite photography. In this paper, we apply machine learning techniques, including Convolutional Neural Networks (CNN), deep learning, image processing, classification, and You Only Look Once (YOLO), to a UAV imaging dataset in order to divide the crops into distinct groups and choose the best approach.

Keywords: image processing, UAV, YOLO, CNN, deep learning, classification

Procedia PDF Downloads 97
4076 Effect of the Aluminium Concentration on the Laser Wavelength of Random Trimer Barrier AlxGa1-xAs Superlattices

Authors: Samir Bentata, Fatima Bendahma

Abstract:

We have numerically investigated the effect of aluminium concentration on the laser wavelength of random trimer barrier AlxGa1-xAs superlattices (RTBSL). Such systems consist of two different structures randomly distributed along the growth direction, with the additional constraint that barriers of one kind appear only in triplets. An explicit formula is given for evaluating the transmission coefficient of superlattices (SLs) with intentionally correlated disorder. The method is based on the Airy function formalism and the transfer-matrix technique. We discuss the impact of the aluminium concentration, together with the structure profile, on the laser wavelengths.
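
The paper's calculation uses the Airy-function formalism for biased AlxGa1-xAs structures; the sketch below is a flat-band, plane-wave simplification of the transfer-matrix idea only, with hypothetical barrier heights and widths, to show how a transmission coefficient is assembled layer by layer.

```python
# Simplified transfer-matrix sketch: transmission through piecewise-constant barriers.
import numpy as np

HBAR2_2M = 3.81  # hbar^2/(2*m0) in eV*Angstrom^2; effective-mass corrections omitted

def transmission(E, layers):
    """|t|^2 at energy E (eV) through layers given as (potential eV, width Angstrom),
    with V = 0 on both sides of the structure."""
    k0 = np.sqrt(complex(E) / HBAR2_2M)
    M = np.eye(2, dtype=complex)
    k_prev = k0
    for V, width in layers:
        k = np.sqrt(complex(E - V) / HBAR2_2M)                    # imaginary inside a barrier
        D = 0.5 * np.array([[1 + k_prev / k, 1 - k_prev / k],     # match psi and psi' at the interface
                            [1 - k_prev / k, 1 + k_prev / k]])
        P = np.array([[np.exp(1j * k * width), 0],                # propagate across the layer
                      [0, np.exp(-1j * k * width)]])
        M = P @ D @ M
        k_prev = k
    D_out = 0.5 * np.array([[1 + k_prev / k0, 1 - k_prev / k0],
                            [1 - k_prev / k0, 1 + k_prev / k0]])
    M = D_out @ M
    return abs(1 / M[1, 1]) ** 2                                  # det(M) = 1 for equal outer media

# Hypothetical trimer block: three 0.3 eV, 20 Angstrom barriers separated by 30 Angstrom wells.
layers = [(0.3, 20.0), (0.0, 30.0)] * 3
for E in (0.05, 0.10, 0.20):
    print(f"T({E:.2f} eV) = {transmission(E, layers):.3e}")
```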

Keywords: superlattices, correlated disorder, transmission coefficient, laser wavelength

Procedia PDF Downloads 324
4075 A Novel Approach to Design and Implement Context Aware Mobile Phone

Authors: G. S. Thyagaraju, U. P. Kulkarni

Abstract:

Context-aware computing refers to a general class of computing systems that can sense their physical environment and adapt their behaviour accordingly. Context-aware computing makes systems aware of situations of interest, enhances services to users, automates systems, and personalizes applications. Context-aware services have been introduced into mobile devices such as PDAs and mobile phones. In this paper, we present a novel approach used to realize a context-aware mobile phone (CAMP) that senses the user's situation automatically and provides the services that the context requires. The proposed system is developed using artificial intelligence techniques, namely a Bayesian network, fuzzy logic, and a rough-set-theory-based decision table: the Bayesian network classifies incoming calls (high priority, low priority, and unknown calls), fuzzy linguistic variables and membership degrees define the context situations, and decision-table-based rules provide the service recommendation. To exemplify and demonstrate the effectiveness of the proposed methods, the context-aware mobile phone is tested in a college campus scenario covering different locations, including the library, classroom, meeting room, administrative building, and college canteen.

Keywords: context aware mobile, fuzzy logic, decision table, Bayesian probability

Procedia PDF Downloads 358
4074 Bacteriological Characterization of Drinking Water Distribution Network Biofilms by Gene Sequencing Using Different Pipe Materials

Authors: M. Zafar, S. Rasheed, Imran Hashmi

Abstract:

Little attention has been paid to bacterial contamination in drinking water biofilms, which provide a potential source for bacteria to grow and multiply rapidly. To understand the microbial density in DWDs, a three-month study was carried out. The aim of this study was to examine biofilm on three different pipe materials: PVC, PPR, and GI. A set of these pipe materials was installed in the DWDs at nine different locations and assessed on a monthly basis. Drinking water quality was evaluated through different parameters and the characterization of biofilm. The parameters, measured according to standard methods, included temperature, pH, turbidity, TDS, electrical conductivity, BOD, COD, total phosphates, total nitrates, total organic carbon (TOC), free chlorine and total chlorine, coliforms, and spread plate counts (SPC). The predominant species were Bacillus thuringiensis, Pseudomonas fluorescens, Staphylococcus haemolyticus, and Bacillus safensis; a significant increase in bacterial population was observed in PVC pipes and the least in cement pipes. The quantity of bacteria in the DWDs depended directly on the biofilm bacteria, and its increase was correlated with the growth and detachment of bacteria from biofilms. Pipe material also affected the microbial community in the drinking water distribution network biofilm, while similarity in bacterial species was observed between systems owing to the same disinfectant dose, time period, and plumbing pipes.

Keywords: biofilm, DWDs, pipe material, bacterial population

Procedia PDF Downloads 339
4073 Prediction of Music Track Popularity: A Machine Learning Approach

Authors: Syed Atif Hassan, Luv Mehta, Syed Asif Hassan

Abstract:

Hit song science is a field of investigation wherein machine learning techniques are applied to music tracks in order to extract features from audio signals that can capture information explaining the popularity of the respective tracks. Record companies invest huge amounts of money into recruiting fresh talent and churning out new music each year, so gaining insight into the basis of why a song becomes popular would bring tremendous benefits to the music industry. This paper aims to extract basic musical and more advanced acoustic features from songs while also taking into account external factors that play a role in making a particular song popular. We use a dataset derived from popular Spotify playlists divided by genre. We use ten genres (blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, rock), chosen on the basis of clear to ambiguous delineation in their typical sound. We feed these features into three different classifiers, namely an SVM with an RBF kernel, a deep neural network, and a recurrent neural network, to build separate predictive models and choose the best-performing model at the end. Predicting song popularity is particularly important for the music industry, as it would allow record companies to produce better content for the masses, resulting in a more competitive market.
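
As an illustration of the first of the three classifiers mentioned above, the sketch below trains an SVM with an RBF kernel on synthetic audio-style features; the feature names, data, and popularity labels are hypothetical, and scikit-learn is assumed.

```python
# Minimal SVM-with-RBF-kernel sketch for "popular" vs "not popular" prediction.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Columns stand in for extracted features such as tempo, energy, danceability, loudness.
X = rng.normal(size=(500, 4))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # toy label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```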

Keywords: classifier, machine learning, music tracks, popularity, prediction

Procedia PDF Downloads 644
4072 Finding a Redefinition of the Relationship between Rural and Urban Knowledge

Authors: Bianca Maria Rulli, Lenny Valentino Schiaretti

Abstract:

The considerable recent urbanization has increasingly sharpened environmental and social problems all over the world. In recent years, many answers to the alarming trends in modern cities have emerged: a drastic reduction in the rate of growth is becoming essential for future generations, and small-scale economies are considered more adaptive and sustainable. According to the concept of degrowth, cities should consider moving beyond the centralization of urban living by redefining the relationship between rural and urban knowledge; growing food in cities fundamentally contributes to the increase of social and ecological resilience. Through an innovative approach, this research combines the benefits of urban agriculture (increase of biological diversity, shorter and thus more efficient supply chains, food security) and temporary land use. Together they stimulate collaborative practices to satisfy the changing needs of communities and stakeholders. The concept proposes a coherent strategy to create a sustainable development of urban spaces, introducing a productive green network to link specific areas in the city. By shifting the current relationship between architecture and landscape, the former process of ground consumption is deeply revised. Temporary modules can be used as concrete tools to create temporal areas of innovation, transforming vacant or marginal spaces into potential laboratories for the development of the city. The only permanent ground traces, such as foundations, are minimized in order to allow future land re-use. The aim is to describe a new mindset regarding the quality of space in the metropolis which allows, in a completely flexible way, bringing the green and urban farming back into the cities. The wide possibilities of the research are analyzed in two different case studies: the first is a regeneration/connection project designated for social housing, while the second concerns the use of temporary modules to answer the potential needs of social structures. The intention of the productive green network is to link the different vacant spaces to each other as well as to the entire urban fabric, which also generates a potential improvement in the current situation of underprivileged and disadvantaged persons.

Keywords: degrowth, green network, land use, temporary building, urban farming

Procedia PDF Downloads 492
4071 Air Quality Forecast Based on Principal Component Analysis-Genetic Algorithm and Back Propagation Model

Authors: Bin Mu, Site Li, Shijin Yuan

Abstract:

Against the backdrop of environmental deterioration, people are increasingly concerned about the quality of the environment, especially air quality. As a result, it is of great value to give accurate and timely forecasts of the AQI (air quality index). In order to simplify the influencing factors of air quality in a city and forecast the city's AQI for the next day, this study used MATLAB and constructed a mathematical PCA-GA-BP model to provide a solution. Specifically, the study first applied principal component analysis (PCA) to the factors influencing tomorrow's AQI, including today's weather, industrial waste gas, and IAQI data. Then, a back-propagation neural network (BP), optimized by a genetic algorithm (GA), was used to forecast tomorrow's AQI. To verify the validity and accuracy of the PCA-GA-BP model's forecasting capability, the study uses two statistical indices to evaluate the AQI forecast results: the normalized mean square error and the fractional bias. Eventually, this study reduces the mean square error by optimizing the individual gene structure in the genetic algorithm and adjusting the parameters of the back-propagation model. To conclude, the performance of the model in forecasting the AQI is comparatively convincing, and the model is expected to have a positive effect on AQI forecasting in the future.
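
The PCA plus back-propagation part of the pipeline can be sketched as follows; the genetic-algorithm optimization of the network weights is omitted here, scikit-learn is assumed, and the predictor data are synthetic stand-ins for the weather, waste-gas, and IAQI inputs.

```python
# Minimal PCA + back-propagation forecasting sketch (GA step omitted).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical predictors: today's weather, industrial waste gas, and IAQI readings.
X = rng.normal(size=(365, 12))
aqi_tomorrow = X[:, :3].sum(axis=1) * 10 + 80 + rng.normal(scale=5, size=365)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),                      # keep the leading principal components
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0),
)
model.fit(X[:300], aqi_tomorrow[:300])
pred = model.predict(X[300:])
nmse = np.mean((pred - aqi_tomorrow[300:]) ** 2) / np.var(aqi_tomorrow[300:])
print(f"normalized mean square error on held-out days: {nmse:.3f}")
```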

Keywords: AQI forecast, principal component analysis, genetic algorithm, back propagation neural network model

Procedia PDF Downloads 214
4070 Scope of Virtualization

Authors: Pavneet Kaur, Palak Sharma

Abstract:

Virtualization is a term that basically describes the creation of a virtual version of something, such as an operating system or a network. Virtualization is a technology that has been in use since the 1970s, but with new developments and inventions it is now useful in education, software development, and other fields. This paper gives a basic introduction to virtualization along with its various categories, and describes the use of virtualization in software engineering together with its benefits and shortcomings.

Keywords: virtualization, hardware, software, os

Procedia PDF Downloads 361
4069 Improvements in Double Q-Learning for Anomalous Radiation Source Searching

Authors: Bo-Bin Xiao, Chia-Yi Liu

Abstract:

In the task of searching for anomalous radiation sources, personnel holding radiation detectors may be exposed to unnecessary radiation risk, so automated search using machines becomes a required project. This research uses several deep reinforcement learning algorithms, namely double Q-learning, dueling networks, and NoisyNet, to search for radiation sources. The simulation environment is a 10×10 grid containing one shielding wall, and the AI model is developed by training for one million episodes. In each training episode, the radiation source position, the radiation source intensity, the agent position, the shielding wall position, and the shielding wall length are all set randomly. The three algorithms are applied to train the AI model in four environments, in which the shielding wall is a full-shielding wall, a lead wall, a concrete wall, or a lead or concrete wall appearing randomly. The twelve best-performing AI models are selected by observing the reward value during the training period and are evaluated by comparing them with a gradient search algorithm. The results show that the performance of the AI models, regardless of the algorithm, is far better than that of the gradient search algorithm. In addition, as the simulation environment becomes more complex, the AI model that combines double DQN with the dueling and NoisyNet algorithms performs better.
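
The core update rule behind the approach is double Q-learning, in which one value table (or network) selects the best next action while the other evaluates it. The sketch below shows the tabular form of that rule on a toy grid; the environment, reward, and hyperparameters are hypothetical, and the deep, dueling, and NoisyNet components of the paper are not reproduced.

```python
# Minimal tabular double Q-learning sketch on a toy grid search task.
import numpy as np

n_states, n_actions = 100, 4                 # 10x10 grid flattened; actions: up/down/left/right
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)
Q_a = np.zeros((n_states, n_actions))
Q_b = np.zeros((n_states, n_actions))

def step(state, action):
    """Toy environment: the 'source' sits in cell 99; every other move costs a little."""
    moves = [-10, 10, -1, 1]
    nxt = int(np.clip(state + moves[action], 0, n_states - 1))
    return nxt, (1.0 if nxt == 99 else -0.01), nxt == 99

for episode in range(300):
    s = 0
    for _ in range(1000):                    # cap episode length
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q_a[s] + Q_b[s]))
        s2, r, done = step(s, a)
        if rng.random() < 0.5:               # update one table using the other's value estimate
            best = int(np.argmax(Q_a[s2]))
            Q_a[s, a] += alpha * (r + gamma * (0.0 if done else Q_b[s2, best]) - Q_a[s, a])
        else:
            best = int(np.argmax(Q_b[s2]))
            Q_b[s, a] += alpha * (r + gamma * (0.0 if done else Q_a[s2, best]) - Q_b[s, a])
        s = s2
        if done:
            break

print("greedy value of the start state:", float((Q_a + Q_b)[0].max()) / 2)
```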

Keywords: double Q learning, dueling network, NoisyNet, source searching

Procedia PDF Downloads 100
4068 Seasonal Short-Term Effect of Air Pollution on Cardiovascular Mortality in Belgium

Authors: Natalia Bustos Sierra, Katrien Tersago

Abstract:

It is currently proven that both extremes of temperature are associated with increased mortality and that air pollution is associated with temperature. This relationship is complex, and in countries with important seasonal variations in weather, such as Belgium, some effects can appear non-significant when the analysis is done over the entire year. We therefore analyzed the effect of short-term outdoor air pollution exposure on cardiovascular (CV) mortality during the warmer and colder months separately. We used daily cardiovascular deaths from acute cardiovascular diagnoses according to the International Classification of Diseases, 10th Revision (ICD-10: I20-I24, I44-I49, I50, I60-I66) during the period 2008-2013. The environmental data were population-weighted concentrations of particulates with an aerodynamic diameter less than 10 µm (PM₁₀) and less than 2.5 µm (PM₂.₅) (daily average), nitrogen dioxide (NO₂) (daily maximum of the hourly average), and ozone (O₃) (daily maximum of the 8-hour running mean). A generalized linear model was applied, adjusting for the confounding effects of season, temperature, dew point temperature, the day of the week, public holidays, and the incidence of influenza-like illness (ILI) per 100,000 inhabitants. The relative risks (RR) were calculated for an increase of one interquartile range (IQR) of each air pollutant (μg/m³) and are presented for the four hottest months (June, July, August, September) and the four coldest months (November, December, January, February) in Belgium. We applied both individual lag models and unconstrained distributed lag models. The cumulative effect of a four-day exposure (day of exposure and three consecutive days) was calculated from the unconstrained distributed lag model. The IQRs for PM₁₀, PM₂.₅, NO₂, and O₃ were respectively 8.2, 6.9, 12.9, and 25.5 µg/m³ during warm months and 18.8, 17.6, 18.4, and 27.8 µg/m³ during cold months. The association with CV mortality was statistically significant for all four pollutants during warm months and only for NO₂ during cold months. During the warm months, the cumulative effect of an IQR increase of ozone for the age groups 25-64, 65-84, and 85+ was 1.066 (95% CI: 1.002-1.135), 1.041 (1.008-1.075), and 1.036 (1.013-1.058), respectively. The cumulative effect of an IQR increase of NO₂ for the age group 65-84 was 1.066 (1.020-1.114) during warm months and 1.096 (1.030-1.166) during cold months. The cumulative effect of an IQR increase of PM₁₀ during warm months reached 1.046 (1.011-1.082) and 1.038 (1.015-1.063) for the age groups 65-84 and 85+, respectively. Similar results were observed for PM₂.₅. The short-term effect of air pollution on cardiovascular mortality is greater during warm months, at lower pollutant concentrations, than during cold months. Spending more time outside during warm months increases population exposure to air pollution and can therefore be a confounding factor for this association; age can also affect the length of time spent outdoors and the type of physical activity exercised. This study supports the deleterious effect of air pollution on CV mortality, which varies according to season and age group in Belgium. Public health measures should therefore be adapted to seasonality.

Keywords: air pollution, cardiovascular, mortality, season

Procedia PDF Downloads 153
4067 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping

Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting

Abstract:

Since vision systems are in strong demand for autonomous applications in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in the vision system to recognize industrial objects, and the system is integrated with a 7A6 series manipulator for automatic object gripping tasks. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to extract the images for object recognition and coordinate derivation. The YOLOv2 scheme is adopted in a convolutional neural network (CNN) structure for object classification and centre point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. The specified object location and orientation information are then sent to the robotic controller. Finally, a six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and a 3D position error of less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments.
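
One step in the pipeline above is turning a detected bounding-box centre plus a depth reading into a 3D point for the robot controller. The sketch below illustrates that back-projection with a pinhole camera model; the intrinsic values, detection pixel, and hand-eye calibration are hypothetical (they are not the SR300's actual parameters).

```python
# Minimal pixel + depth -> robot-frame coordinate sketch (pinhole model).
import numpy as np

# Assumed camera intrinsics: focal lengths and principal point in pixels.
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0

def pixel_to_camera_xyz(u, v, depth_m):
    """Back-project pixel (u, v) with depth (metres) into camera coordinates."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical YOLO detection: bounding-box centre at pixel (400, 260), depth 0.52 m.
p_cam = pixel_to_camera_xyz(400, 260, 0.52)

# Assumed hand-eye calibration: rotation + translation from camera frame to robot base frame.
R = np.eye(3)
t = np.array([0.30, 0.00, 0.10])
p_robot = R @ p_cam + t
print("grasp target in robot base frame (m):", np.round(p_robot, 3))
```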

Keywords: deep learning, image processing, convolution neural network, YOLOv2, 7A6 series manipulator

Procedia PDF Downloads 232
4066 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks

Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone

Abstract:

Seizures are the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by using continuous electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an artificial neural network classifier, trained by applying the multilayer perceptron algorithm and by using a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
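
The sliding-window feature extraction feeding a neural network classifier can be sketched as below; the signal, window features, and labels are synthetic stand-ins (the Training Builder tool extracts far richer features), and scikit-learn is assumed.

```python
# Minimal sliding-window feature extraction + MLP classification sketch.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
fs, seconds = 256, 60
eeg = rng.normal(size=fs * seconds)
eeg[30 * fs:35 * fs] += 3 * np.sin(2 * np.pi * 3 * np.arange(5 * fs) / fs)  # toy "seizure" burst

def window_features(signal, win, step):
    feats, starts = [], list(range(0, len(signal) - win, step))
    for s in starts:
        w = signal[s:s + win]
        feats.append([w.mean(), w.std(),
                      np.abs(np.diff(w)).mean(),               # line-length proxy
                      np.abs(np.fft.rfft(w))[:10].sum()])      # low-frequency power
    return np.array(feats), np.array(starts)

X, starts = window_features(eeg, win=2 * fs, step=fs)
y = ((starts >= 30 * fs) & (starts < 35 * fs)).astype(int)     # windows inside the burst

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy on the toy signal:", clf.score(X, y))
```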

Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing

Procedia PDF Downloads 175
4065 Prediction of Road Accidents in Qatar by 2022

Authors: M. Abou-Amouna, A. Radwan, L. Al-kuwari, A. Hammuda, K. Al-Khalifa

Abstract:

There is growing concern over the increasing incidence of road accidents and the consequent loss of human life in Qatar. In light of the planned 2022 World Cup, Qatar should take into consideration the future deaths caused by road accidents, and past trends should be examined to give a reasonable picture of what may happen. Qatar's roads should be arranged and paved in a way that accommodates the high population expected at that time, since there will be a huge number of visitors from around the world. Qatar should also consider the road accident risks raised in that period and plan to maintain high-level safety strategies. Given the increase in the number of road accidents in Qatar from 1995 to 2012, the elements affecting and causing road accidents are analysed. This paper aims to identify and assess the factors that have a strong effect on causing road accidents in the State of Qatar and to predict the total number of road accidents in Qatar in 2022. Alternative methods are discussed, and the most applicable ones according to previous research are selected for further study. The methods that suit the existing case in Qatar are the multiple linear regression model (MLR) and the artificial neural network (ANN). These methods are analysed and their findings compared. Using MLR, the number of accidents in 2022 is predicted to reach 355,226, while using the ANN it is predicted to reach 216,264. We conclude that MLR gave better results than the ANN because the artificial neural network does not fit data with a large range of variation well.
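
The flavour of the regression-based prediction can be sketched as below. This is a single-predictor simplification of the multiple linear regression approach, and the yearly counts are made up for illustration; they are not the paper's 1995-2012 data.

```python
# Minimal linear-regression trend extrapolation sketch (synthetic counts).
import numpy as np

years = np.arange(1995, 2013)
accidents = 60_000 + 14_000 * (years - 1995) + np.random.default_rng(0).normal(0, 8_000, len(years))

# Fit accidents = b0 + b1 * year by ordinary least squares and extrapolate to 2022.
A = np.column_stack([np.ones_like(years), years])
b0, b1 = np.linalg.lstsq(A, accidents, rcond=None)[0]
print(f"predicted accidents in 2022: {b0 + b1 * 2022:,.0f}")
```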

Keywords: road safety, prediction, accident, model, Qatar

Procedia PDF Downloads 248
4064 Artificial Intelligence Approach to Water Treatment Processes: Case Study of Daspoort Treatment Plant, South Africa

Authors: Olumuyiwa Ojo, Masengo Ilunga

Abstract:

Artificial neural networks (ANNs) have broken the bounds of conventional programming, which is essentially a function of garbage in, garbage out, through their ability to mimic the human brain. Their ability to adopt, adapt, adjust, evaluate, learn, and recognize the relationships, behaviour, and patterns of the data sets administered to them is tailored after human reasoning and learning mechanisms. Thus, the study aimed at modelling the wastewater treatment process in order to accurately diagnose water control problems for effective treatment. A staged ANN model development and evaluation methodology was employed: the source data analysis stage involved a statistical analysis of the data used in modelling, and the model development stage involved developing candidate ANN architectures, which were then evaluated using a historical data set. The model was developed using historical data obtained from the Daspoort wastewater treatment plant, South Africa. The resulting design dimensions and model for the wastewater treatment plant provided good results. The parameters considered were temperature, pH value, colour, turbidity, amount of solids, and acidity; others were total hardness, Ca hardness, Mg hardness, and chloride. This enables the ANN to handle and represent more complex problems that conventional programming is incapable of performing.

Keywords: ANN, artificial neural network, wastewater treatment, model, development

Procedia PDF Downloads 141
4063 Secure Multiparty Computations for Privacy Preserving Classifiers

Authors: M. Sumana, K. S. Hareesha

Abstract:

Secure computations are essential when performing privacy-preserving data mining. Distributed privacy-preserving data mining involves two or more sites that cannot pool their data with a third party without violating laws protecting the individual. Hence, in order to model the private data without compromising privacy or losing information, secure multiparty computations are used. Secure computation of the product, mean, variance, dot product, and sigmoid function using the additive and multiplicative homomorphic properties is discussed. The computations are performed on vertically partitioned data, with a single site holding the class value.
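
To give a feel for this kind of protocol, the sketch below computes a secure sum with additive secret sharing over a prime field: no site learns any other site's private value, only the aggregate. This is a simplified stand-in for the homomorphic-encryption-based computations the abstract refers to, with hypothetical values and party count.

```python
# Minimal secure-sum sketch via additive secret sharing.
import secrets

PRIME = 2**61 - 1  # field modulus, large enough for the aggregated values

def make_shares(value, n_parties):
    """Split an integer into n additive shares that sum to value mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Each site holds one private value (e.g., a partial sum over its own records).
private_values = [123, 456, 789]
n = len(private_values)

# Every site splits its value and sends one share to each other site.
all_shares = [make_shares(v, n) for v in private_values]

# Each site adds the shares it received; publishing only these partial sums
# reveals nothing about any individual site's value.
partial_sums = [sum(all_shares[i][j] for i in range(n)) % PRIME for j in range(n)]
print("secure sum:", sum(partial_sums) % PRIME)   # equals 123 + 456 + 789
```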

Keywords: homomorphic property, secure product, secure mean and variance, secure dot product, vertically partitioned data

Procedia PDF Downloads 405
4062 Assessment of Taiwan Railway Occurrences Investigations Using Causal Factor Analysis System and Bayesian Network Modeling Method

Authors: Lee Yan Nian

Abstract:

A safety investigation differs from an administrative investigation in that the former is conducted by an independent agency, and its purpose is to prevent accidents in the future, not to apportion blame or determine liability. Before October 2018, Taiwan railway occurrences were investigated by the local supervisory authority. A characteristic of this kind of investigation is that enforcement actions, such as administrative penalties, are usually imposed on the persons or units involved in the occurrence. On October 21, 2018, a Taiwan Railway accident caused 18 fatalities and injured another 267, and it was quickly decided to establish an agency to independently investigate this catastrophic railway accident. The Taiwan Transportation Safety Board (TTSB) was then established on August 1, 2019 to take charge of investigating major aviation, marine, railway, and highway occurrences. The objective of this study is to assess the effectiveness of the safety investigations conducted by the TTSB. In this study, the major railway occurrence investigation reports published by the TTSB are used for modelling and analysis. According to the classification of railway occurrences investigated by the TTSB, accident types of Taiwan railway occurrences can be categorized into derailment, fire, signal passed at danger, and others. A Causal Factor Analysis System (CFAS) developed by the TTSB is used to identify the influencing causal factors and their causal relationships in the investigation reports. All terminologies used in the CFAS are equivalent to the Human Factors Analysis and Classification System (HFACS) terminologies, except for "technical events", which was added to classify causal factors resulting from mechanical failure. Accordingly, the Bayesian network structure of each occurrence category is established based on the causal factors identified in the CFAS. In the Bayesian networks, the prior probabilities of the identified causal factors are obtained from their frequencies in the investigation reports, and the conditional probability table of each parent node is determined from domain experts' experience and judgement. The resulting networks are quantitatively assessed under different scenarios to evaluate their forward prediction and backward diagnostic capabilities. Finally, the established Bayesian network for derailment is assessed using investigation reports of the same accident, which was investigated by the TTSB and by the local supervisory authority respectively. Based on the assessment results, the findings of the administrative investigation are more closely tied to errors of front-line personnel than to organizational factors, whereas a safety investigation can identify not only the unsafe acts of individuals but also in-depth causal factors related to organizational influences. The results show that the proposed methodology can identify differences between safety investigations and administrative investigations. Therefore, effective intervention strategies in the associated areas can be better addressed for safety improvement and future accident prevention through safety investigation.

Keywords: administrative investigation, bayesian network, causal factor analysis system, safety investigation

Procedia PDF Downloads 106
4061 A Complex Network Approach to Structural Inequality of Educational Deprivation

Authors: Harvey Sanchez-Restrepo, Jorge Louca

Abstract:

Equity and education are a major focus of government policies around the world due to their relevance for addressing the sustainable development goals launched by UNESCO. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador for 1,038,328 students, their families, teachers, and school directors throughout 2014-2018. Each participating student was assessed by a standardized computer-based test. Learning outcomes were calibrated through item response theory with a two-parameter logistic model to obtain raw scores that were re-scaled and synthesized into a learning index (LI). Our objective was to develop a network for modelling educational deprivation and to analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students' ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215) and that Afro-Ecuadorian, Montuvio, and Indigenous students exhibited the highest prevalence, with 0.312, 0.278, and 0.226, respectively. Regarding the socioeconomic status (SES) of students, the modularity classes show clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386, while the rate for decile ten (the richest) is 0.080, showing a strong negative relationship between learning and SES (R = -0.58, p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, groups that obtained the highest PageRank (0.426), pointing out that they suffer the highest educational deprivation due to discrimination, even when belonging to the richest decile. The model also identified the factors that explain deprivation through the highest PageRank and the greatest degree of connectivity for the first decile; they are: financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, access to books, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother's education. These results provide very accurate and clear knowledge about the variables affecting the poorest students and the inequalities produced, from which needs profiles can be defined, as well as actions on the factors that can be influenced. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievement are driven by students' context at the time of assigning resources.

Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics

Procedia PDF Downloads 112
4060 A Geographical Spatial Analysis on the Benefits of Using Wind Energy in Kuwait

Authors: Obaid AlOtaibi, Salman Hussain

Abstract:

Wind energy is associated with many geographical factors, including wind speed, climate change, surface topography, environmental impacts, and several economic factors, most notably the advancement of wind technology and energy prices. It is the fastest-growing and least economically expensive method for generating electricity, and wind energy generation is directly related to the characteristics of the spatial wind. Therefore, the feasibility study for a wind energy conversion system is based on the value of the energy obtained relative to the initial investment and the cost of operation and maintenance. In Kuwait, wind energy is an appropriate choice as a source of energy generation. It can be used for groundwater extraction in agricultural areas such as Al-Abdali in the north and Al-Wafra in the south, in fresh and brackish groundwater fields, or in remote and isolated locations such as border areas and projects away from the conventional electricity network, in order to take advantage of alternative energy, reduce pollutants, and reduce energy production costs. The study covers the State of Kuwait with the exception of the metropolitan area. Climatic data were obtained from the readings of eight distributed monitoring stations affiliated with the Kuwait Institute for Scientific Research (KISR) and were used to assess the daily, monthly, quarterly, and annual wind energy available for utilization. The researchers applied a suitability model in the ArcGIS program; this is a spatial analysis model that compares more than one location based on graded weights to choose the most suitable one. The study criteria are: average annual wind speed, land use, topography, distance from the main road networks, and urban areas. According to these criteria, four proposed locations for wind farm projects are selected based on the weights of the degree of suitability (excellent, good, average, and poor). The areas that represent the most suitable locations, with an excellent rank (4), cover 8% of Kuwait's area, distributed as follows: Al-Shqaya, Al-Dabdeba, and Al-Salmi (5.22%), Al-Abdali (1.22%), Umm al-Hayman (0.70%), and North Wafra and Al-Shaqeeq (0.86%). The study recommends that decision-makers consider proposed location No. 1 (Al-Shqaya, Al-Dabdaba, and Al-Salmi) as the most suitable location for the future development of wind farms in Kuwait, as this location is economically feasible.

Keywords: Kuwait, renewable energy, spatial analysis, wind energy

Procedia PDF Downloads 134
4059 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, the use of convolutional neural networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. In addition, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this presents a limitation for the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is regarded as an operator mapping an input from the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, where q represents the alphanumeric and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can then be applied by centering S and Y, subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these lines, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 91
4058 Reservoir Inflow Prediction for Pump Station Using Upstream Sewer Depth Data

Authors: Osung Im, Neha Yadav, Eui Hoon Lee, Joong Hoon Kim

Abstract:

The artificial neural network (ANN) approach is commonly used for forecasting in many fields. In water resources engineering, forecasts of water level or reservoir inflow are useful for various purposes. Owing to the advantages of ANNs, many papers have addressed inflow prediction in river networks; in this study, ANNs are used in urban sewer networks. The growth of severe rainstorms in Korea has increased flood damage considerably, and the precipitation distribution is becoming more erratic. Effective pump operation at pump stations is therefore an essential task for flood damage reduction in urban areas. If the real-time inflow to a pump station reservoir can be predicted, it is possible to operate the pumps effectively and reduce flood damage. This study used ANN models for pump station reservoir inflow prediction based on upstream sewer depth data. Rainfall events, sewer depth, and inflow into the Banpo pump station reservoir between 2013 and 2014 were considered. Feed-Forward Back Propagation (FFBP), Cascade-Forward Back Propagation (CFBP), Elman Back Propagation (EBP), and Nonlinear Autoregressive Exogenous (NARX) networks were used as the ANN models for prediction. A comparison of the results suggests that the ANN is a powerful tool for inflow prediction using sewer depth data.

Keywords: artificial neural network, forecasting, reservoir inflow, sewer depth

Procedia PDF Downloads 301
4057 Pathway to Sustainable Shipping: Electric Ships

Authors: Wei Wang, Yannick Liu, Lu Zhen, H. Wang

Abstract:

Maritime transport plays an important role in global economic development but inevitably faces increasing pressures from all sides, such as ship operating cost reduction and environmental protection. An ideal innovation to address these pressures is the electric ship, which is still at an early stage of development. Considering the special characteristics of electric ships, i.e., their limited travel range, the service network needs to be re-designed carefully to guarantee efficient operation. This research designs a cost-efficient and environmentally friendly service network for electric ships, covering the location of charging stations, the charging plan, route planning, ship scheduling, and ship deployment. The problem is formulated as a mixed-integer linear programming model with the objective of minimizing the total cost, comprising the charging cost, the construction cost of charging stations, and the fixed cost of ships. A case study using data from the shipping network along the Yangtze River is conducted to evaluate the performance of the model. Two operating scenarios are used: an electric ship scenario in which all transportation tasks are fulfilled by electric ships, and a conventional ship scenario in which all transportation tasks are fulfilled by fuel oil ships. The results reveal that the total cost of using electric ships is only 42.8% of that of using conventional ships. Using electric ships can reduce SOx by 80%, NOx by 93.47%, PM by 89.47%, and CO2 by 42.62%, but will take 2.78% more time to fulfill all the transportation tasks. Extensive sensitivity analyses are also conducted for key operating factors, including battery capacity, charging speed, volume capacity, and the service time limit of transportation tasks. The implications of the results are as follows: 1) it is necessary to equip ships with large-capacity batteries when the number of charging stations is low; 2) battery capacity influences the number of ships deployed on each route; 3) increasing battery capacity makes the electric ship more cost-effective; 4) charging speed does not affect the charging amount or the location of charging stations, but does influence the schedule of ships on each route; 5) there exists an optimal volume capacity at which all costs and the total delivery time are lowest; 6) the service time limit influences ship schedules and ship cost.
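
To convey the flavour of such a model, the sketch below formulates a drastically simplified charging-station location problem: choose stations along a single route so that no leg between consecutive stations exceeds the ship's range, at minimum construction cost. It assumes the PuLP library, and all distances and costs are hypothetical; the full model additionally covers charging plans, scheduling, and deployment.

```python
# Tiny charging-station location MILP sketch (simplified stand-in for the full model).
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

ports = ["P0", "P1", "P2", "P3", "P4", "P5"]
km_from_start = [0, 120, 210, 330, 420, 540]       # cumulative distance of each port
build_cost = [0, 5, 4, 6, 5, 0]                    # station construction cost at each port
ship_range_km = 250

prob = LpProblem("charging_station_location", LpMinimize)
x = {p: LpVariable(f"build_{p}", cat=LpBinary) for p in ports}
prob += lpSum(build_cost[i] * x[p] for i, p in enumerate(ports))

# The start and end ports are assumed to have charging facilities.
prob += x["P0"] == 1
prob += x["P5"] == 1

# Range constraint: any stretch longer than the range must contain a station strictly
# inside it, so the ship never sails further than its range without charging.
n = len(ports)
for i in range(n):
    for j in range(i + 1, n):
        if km_from_start[j] - km_from_start[i] > ship_range_km:
            prob += lpSum(x[ports[k]] for k in range(i + 1, j)) >= 1

prob.solve()
print("stations at:", [p for p in ports if value(x[p]) > 0.5], "| cost:", value(prob.objective))
```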

Keywords: cost reduction, electric ship, environmental protection, sustainable shipping

Procedia PDF Downloads 66
4056 An Electrocardiography Deep Learning Model to Detect Atrial Fibrillation on Clinical Application

Authors: Jui-Chien Hsieh

Abstract:

Background: 12-lead electrocardiography (ECG) is one of the tools frequently used in clinical practice to detect atrial fibrillation (AF), which may degenerate into life-threatening stroke. In this study, AF detection by the clinically used 12-lead ECG device had a positive predictive value (PPV) of only 0.73-0.77. Objective: There is great demand for a new algorithm to improve the precision of AF detection using 12-lead ECG. Building on progress in artificial intelligence (AI), we develop a deep ECG model that has the ability to recognize AF patterns and reduce false-positive errors. Methods: (1) 570 12-lead ECG reports whose computer interpretation by the ECG device was AF were collected as the training dataset. The ECG reports were interpreted by two senior cardiologists, who confirmed that the precision of AF detection by the ECG device is 0.73. (2) 88 12-lead ECG reports whose computer interpretation by the ECG device was AF were used as the test dataset. The cardiologists confirmed that 68 of the 88 reports were AF and the others were not AF; the precision of AF detection by the ECG device is about 0.77. (3) A parallel 4-layer one-dimensional convolutional neural network (CNN) was developed to identify AF based on limb-lead and chest-lead ECGs. Results: The results indicate that this model performs better on AF detection than the traditional computer interpretation of the ECG device on the 88 test samples, with 0.94 PPV, 0.98 sensitivity, and 0.80 specificity. Conclusions: Compared to the clinical ECG device, this AI ECG model raises the precision of AF detection from 0.77 to 0.94 and can have an impact on clinical applications.

Keywords: 12-lead ECG, atrial fibrillation, deep learning, convolutional neural network

Procedia PDF Downloads 109
4055 Internet of Assets: A Blockchain-Inspired Academic Program

Authors: Benjamin Arazi

Abstract:

Blockchain is the technology behind cryptocurrencies like Bitcoin. It revolutionizes the meaning of trust by offering total reliability without relying on any central entity that controls or supervises the system. The Wall Street Journal states: "Blockchain Marks the Next Step in the Internet's Evolution". Blockchain was listed as #1 in the LinkedIn Learning Blog's "most in-demand hard skills needed in 2020". As stated there: "Blockchain's novel way to store, validate, authorize, and move data across the internet has evolved to securely store and send any digital asset". GSMA, a leading organization of mobile communications operators, declared that "Blockchain has the potential to be for value what the Internet has been for information". Motivated by these seminal observations, this paper presents the foundations of a Blockchain-based "Internet of Assets" academic program that joins under one roof leading application areas characterized by the transfer of assets over communication lines. Two such areas, which are pillars of our economy, are Fintech (financial technology) and mobile communications services; the next application in line is healthcare. These challenges are met on the basis of the extensive professional literature available. Blockchain-based asset communication extends the principle of Bitcoin, starting with the basic question: if digital money that travels across the universe can 'prove its own validity', can this principle be applied to digital content? A groundbreaking positive answer here led to the concept of the "smart contract" and consequently to DLT (Distributed Ledger Technology), where the word 'distributed' refers to the absence of reliable central entities or trusted third parties. The terms Blockchain and DLT are frequently used interchangeably in various application areas. The World Bank Group compiled comprehensive reports analyzing the contribution of DLT/Blockchain to Fintech. The European Central Bank and the Bank of Japan are engaged in Project Stella, "Balancing confidentiality and auditability in a distributed ledger environment". 130 DLT/Blockchain-focused Fintech startups now operate in Switzerland. The impact of Blockchain on mobile communications services is treated in detail by leading organizations. The TM Forum is a global industry association in the telecom industry, with over 850 member companies, mainly mobile operators, that generate US$2 trillion in revenue and serve five billion customers across 180 countries. From their perspective: "Blockchain is considered one of the digital economy's most disruptive technologies". Samples of Blockchain contributions to Fintech (taken from a World Bank document): decentralization and disintermediation; greater transparency and easier auditability; automation and programmability; immutability and verifiability; gains in speed and efficiency; cost reductions; enhanced cyber security resilience. Samples of Blockchain contributions to the telco industry: establishing identity verification; a record of transactions for easy cost settlement; automatic triggering of roaming contracts, enabling near-instantaneous charging and a reduction in roaming fraud; decentralized roaming agreements; settling accounts for costs incurred in accordance with agreement tariffs. This clearly demonstrates an academic education structure in which the fundamental technologies are studied in classes together with these two application areas, while advanced courses treating specific implementations follow separately. All are under the roof of the "Internet of Assets".

Keywords: blockchain, education, financial technology, mobile telecommunications services

Procedia PDF Downloads 169
4054 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant

Authors: John K. Avor, Choong-Koo Chang

Abstract:

The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT), and vice versa, is performed through a fast bus transfer scheme. Fast bus transfer is a time-critical application in which the transfer process depends on various parameters; transfer schemes therefore apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical Class 1E electrical loads, so bus transfers must be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, the main problem is that there are instances where transfer schemes have been scrambled due to inaccurate interpretation of key parameters and have consequently failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, a combination of artificial neural networks and fuzzy systems (neuro-fuzzy) has not been extensively used. In this paper, we apply the neuro-fuzzy concept to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm to be selected based on the first cycle of voltage information. The performance of the sequential fast transfer and residual bus transfer schemes was evaluated through simulation and integration of the neuro-fuzzy system. The objective of adopting the neuro-fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of the artificial neural network, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined effect of artificial neural networks and fuzzy systems in accurately interpreting key bus transfer parameters, such as the magnitude of the residual voltage, the decay time, and the associated phase angle of the residual voltage, in order to determine the possibility of a high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The performance of the scheme is demonstrated on the APR1400 nuclear power plant auxiliary system.

Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability

Procedia PDF Downloads 160