Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2878

World Academy of Science, Engineering and Technology

[Computer and Information Engineering]

Online ISSN : 1307-6892

2878 3D Images Representation to Provide Information on the Type of Castella Beams Hole

Authors: Cut Maisyarah Karyati, Aries Muslim, Sulardi


Digital image processing techniques for extracting detailed information from an image are used in many fields, including civil engineering, where solid beam profiles have been used in buildings and bridges since the early development of beams. Along with this development, castellated beam profiles have become more diverse in shape, such as the hexagon, triangle, pentagon, circle, ellipse, and oval, and can be a practical solution for optimizing a construction because of their characteristics. The purpose of this research is to create a computer application that detects the edges of hole profiles in the various shapes of castellated beams. A digital image segmentation method is used to obtain grayscale images, which are represented in 2D and 3D formats. The application performs its intended function: providing information on the type of castellated beam hole.
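The paper does not include its implementation; as an illustration, the edge-detection step described above can be sketched as a luminance grayscale conversion followed by a Sobel gradient magnitude (pure-Python sketch; the threshold value is an assumption, not the authors' parameter):

```python
def to_grayscale(rgb):
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row] for row in rgb]

def sobel_edges(gray, threshold=100.0):
    # Sobel gradient magnitude; pixels above `threshold` are marked as edge pixels.
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (gray[y-1][x+1] + 2*gray[y][x+1] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y][x-1] - gray[y+1][x-1])
            gy = (gray[y+1][x-1] + 2*gray[y+1][x] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y-1][x] - gray[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# A 5x5 image with a bright square on a dark background: the boundary is detected.
img = [[(0, 0, 0)] * 5 for _ in range(5)]
for y in range(1, 4):
    for x in range(1, 4):
        img[y][x] = (255, 255, 255)
edges = sobel_edges(to_grayscale(img))
```

The edge map produced this way is what a hole-shape classifier would then analyze to distinguish hexagonal, circular, or elliptical openings.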

Keywords: digital image, image processing, edge detection, grayscale, castella beams

Procedia PDF Downloads 0
2877 Challenges in Teaching Code of Ethics and Professional Conduct

Authors: Rasika Dayarathna


Computing has reached every corner of our lives in many forms; the Internet, and particularly social media and artificial intelligence, are prominent among them. As a result, computing has changed our lives, and severe changes are expected in the coming years. It has introduced a new set of ethical challenges and amplified existing ones. Everyone involved, from conceptualizing and designing through implementing and deploying to using computing systems, has a duty to follow generally accepted practices in order to avoid or minimize harm and improve the quality of life. Since computing in these various forms has a significant impact on our lives, various codes of conduct and standards have been introduced. Among many, the ACM (Association for Computing Machinery) Code of Ethics and Professional Conduct is a leading one. It was drafted for everyone, including aspiring computing professionals. However, teaching a code of conduct to aspiring computing professionals is very challenging, since this universal code must be taught to young professionals in local settings where value mismatches and uneven exposure to information systems exist. This paper discusses the importance of teaching the code, how to overcome the challenges, and suggestions for improving the code to make it more appealing and easier to buy into. The improved approach is expected to contribute to improving the quality of life.

Keywords: code of conduct, professionalism, ethics, code of ethics, ethics education, moral development

Procedia PDF Downloads 9
2876 Twitter Sentiment Analysis during the Lockdown on New Zealand

Authors: Smah Almotiri


Sentiment analysis is one of the most common applications of natural language processing (NLP). The feeling expressed in text can be successfully mined for various events using sentiment analysis. Twitter is viewed as a reliable data source for sentiment analytics studies, since people used social media to receive and exchange different types of data on a broad scale during the COVID-19 epidemic. Processing such data may aid in making critical decisions on how to keep the situation under control. The aim of this research is to examine how sentiment differed in a single geographic region during the lockdown at two different times. 1,162 tweets related to the COVID-19 pandemic lockdown were analyzed using the keyword hashtags (lockdown, COVID-19): the first sample of tweets was from March 23, 2020, until April 23, 2020, and the second sample, for the following year, was from March 1, 2021, until April 4, 2021. Natural language processing, a form of artificial intelligence, was used to calculate the sentiment value of all the tweets with the AFINN lexicon sentiment analysis method. The findings revealed that the sentiment at both times during the region's lockdown was positive in the samples of this study, which are specific to the geographical area of New Zealand. This research suggests applying machine learning sentiment methods such as CrystalFeel and extending the sample size by using tweets gathered over a longer period of time.
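The AFINN method assigns each word an integer valence in [-5, 5] and sums the scores over a text. A minimal pure-Python sketch of that scoring step (the tiny lexicon below is an illustrative excerpt, not the full AFINN word list):

```python
import re

# Illustrative excerpt of an AFINN-style valence lexicon (word -> score in [-5, 5]).
AFINN_SAMPLE = {"good": 3, "great": 3, "safe": 1, "bad": -3, "fear": -2, "crisis": -3}

def sentiment_score(text, lexicon=AFINN_SAMPLE):
    # Tokenize to lowercase words and sum the valences of all lexicon hits.
    return sum(lexicon.get(w, 0) for w in re.findall(r"[a-z']+", text.lower()))

tweets = ["Lockdown keeps us safe, great decision",
          "Fear and crisis during lockdown"]
scores = [sentiment_score(t) for t in tweets]  # positive vs. negative overall tone
```

A corpus-level sentiment (as reported per lockdown period in the study) would then be the aggregate of these per-tweet scores.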

Keywords: sentiment analysis, Twitter analysis, lockdown, Covid-19, AFINN, NodeJS

Procedia PDF Downloads 1
2875 Automatic Detection of Suicidal Behaviors Using an RGB-D Camera: Azure Kinect

Authors: Maha Jazouli


Suicide is one of the leading causes of death in the prison environment, both in Canada and internationally. Rates of suicide attempts and self-harm have been rising in recent years, with hanging the most frequently used method. The objective of this article is to propose a method for automatically detecting suicidal behaviors in real time. We present a gesture recognition system that consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using machine learning algorithms (MLA). Our proposed system gives satisfactory results. This smart video surveillance system can assist staff responsible for the safety and health of inmates by alerting them when suicidal behavior is detected, which helps reduce mortality rates and save lives.

Keywords: suicide detection, Azure Kinect, RGB-D camera, SVM, machine learning, gesture recognition

Procedia PDF Downloads 7
2874 BigFeat: Scalable and Interpretable Automated Feature Engineering

Authors: Hassan Eldeeb, Shota Amashukeli, Radwa El Shawi


Automated feature engineering is a key value-adding step that automatically constructs informative features and reduces the manual labor of building well-performing machine learning pipelines. This paper presents a scalable and interpretable automated feature engineering framework, BigFeat, that optimizes the quality of input features to maximize predictive performance. BigFeat employs a dynamic feature generation and selection mechanism that keeps a small set of expressive features that improve prediction performance while balancing the exploitation of operations known to help against the exploration of untried ones. We compare the performance of BigFeat to the state-of-the-art feature engineering framework AutoFeat, experimentally evaluating the two approaches on 36 datasets by integrating both with two state-of-the-art automated machine learning (AutoML) frameworks. The results show that BigFeat statistically outperforms AutoFeat and significantly improves the F1-score of the AutoML frameworks by 4.89% on average for 19 datasets. Moreover, BigFeat's execution time is, on average, 5x faster than AutoFeat's, which confirms the scalability of BigFeat.
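The generate-and-select loop described above can be sketched as: apply candidate operations to base features, score each candidate, and keep only candidates that beat their parent feature. The correlation-based scorer and the two operations here are illustrative stand-ins for BigFeat's actual importance measure and operation set:

```python
import math

def importance(xs, ys):
    # Absolute Pearson correlation with the target as a cheap importance proxy.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return abs(cov / (vx * vy)) if vx and vy else 0.0

OPS = {"square": lambda v: [x * x for x in v],
       "abs": lambda v: [abs(x) for x in v]}

def generate_features(base, target, min_gain=0.05):
    # Keep a generated feature only if it beats its parent's importance by `min_gain`.
    kept = {}
    for name, values in base.items():
        parent = importance(values, target)
        for op_name, op in OPS.items():
            candidate = op(values)
            if importance(candidate, target) > parent + min_gain:
                kept[f"{op_name}({name})"] = candidate
    return kept

# The target depends on x^2, so the squared feature is generated and retained.
base = {"x": [-2.0, -1.0, 0.0, 1.0, 2.0]}
target = [4.0, 1.0, 0.0, 1.0, 4.0]
new_feats = generate_features(base, target)
```

Keeping only candidates that demonstrably improve on their parents is what keeps the resulting feature set small and interpretable.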

Keywords: feature engineering, automated machine learning, feature selection, explainable machine learning

Procedia PDF Downloads 6
2873 A Comparative Analysis of Classification Models with Wrapper-Based Feature Selection for Predicting Student Academic Performance

Authors: Abdullah Al Farwan, Ya Zhang


In today’s educational arena, it is critical to understand educational data and to be able to evaluate important aspects of it, particularly data on student achievement. Educational Data Mining (EDM) is a research area that focuses on uncovering patterns and information in data from educational institutions. Teachers who can predict their students' class performance can use this information to improve their teaching. EDM results have evolved into valuable knowledge that can serve a wide range of objectives; for example, they can inform a strategic plan for delivering high-quality education. Based on historical data, this paper recommends employing data mining techniques to forecast students' final grades. In this study, five data mining methods, Decision Tree, JRip, Naive Bayes, Multi-layer Perceptron, and Random Forest, with wrapper-based feature selection, were applied to two datasets relating to Portuguese language and mathematics classes. The results showed the effectiveness of data mining methodologies in predicting student academic success. The classification accuracy achieved with the selected algorithms lies in the range of roughly 70-94%. Among them, the lowest accuracy, about 70.45%, is achieved by the Multi-layer Perceptron algorithm, and the highest, about 94.10%, by the Random Forest algorithm. This work can assist educational administrators in identifying poorly performing students at an early stage and perhaps implementing motivational interventions to improve their academic success and prevent dropout.
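Wrapper-based feature selection, as used above, repeatedly retrains the classifier on candidate feature subsets and keeps the subset with the best score. A greedy forward-selection sketch in pure Python (the `toy_accuracy` scorer and the feature names are illustrative stand-ins for a real classifier's cross-validated accuracy on the student datasets):

```python
def forward_selection(features, evaluate):
    """Greedy wrapper: add the feature that most improves the score; stop when none helps."""
    selected, best_score = [], float("-inf")
    remaining = list(features)
    while remaining:
        score, best_f = max((evaluate(selected + [f]), f) for f in remaining)
        if score <= best_score:
            break  # no remaining feature improves the wrapper score
        selected.append(best_f)
        remaining.remove(best_f)
        best_score = score
    return selected, best_score

# Toy scorer: pretend 'absences' and 'past_grade' are informative, the rest are noise.
useful = {"absences": 0.10, "past_grade": 0.14}
def toy_accuracy(subset):
    return 0.70 + sum(useful.get(f, -0.01) for f in subset)

chosen, acc = forward_selection(["age", "absences", "past_grade", "address"], toy_accuracy)
```

The same loop works unchanged for any of the five classifiers mentioned: only the `evaluate` function changes.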

Keywords: classification algorithms, decision tree, feature selection, multi-layer perceptron, Naïve Bayes, random forest, students’ academic performance

Procedia PDF Downloads 6
2872 Data Augmentation for Automatic Graphical User Interface Generation Based on Generative Adversarial Network

Authors: Xulu Yao, Moi Hoon Yap, Yanlong Zhang


As a branch of artificial neural networks, deep learning is widely used in the field of image recognition, but a lack of data leads to imperfect model learning. By analyzing the data scale requirements of deep learning, with a view to application in GUI generation, it is found that collecting a GUI dataset is a time-consuming and labor-intensive project, which makes it difficult to meet the needs of current deep learning networks. To solve this problem, this paper proposes a semi-supervised deep learning model that relies on an original small-scale dataset to produce a large amount of reliable data. By combining a recurrent neural network with a generative adversarial network (GAN), the recurrent network learns the sequence relationships and characteristics of the data, enabling the GAN to generate plausible data and thereby expand the Rico dataset. With this network structure, the characteristics of the collected data can be analyzed well, and a large amount of reasonable data can be generated according to these characteristics. After data processing, a reliable dataset for model training can be formed, which alleviates the problem of dataset shortage in deep learning.

Keywords: GUI, deep learning, GAN, data augmentation

Procedia PDF Downloads 3
2871 Empirical Study of Partitions Similarity Measures

Authors: Abdelkrim Alfalah, Lahcen Ouarbya, John Howroyd


This paper investigates and compares the performance of four existing distances and similarity measures between partitions. The partition measures considered are the Rand Index (RI), Adjusted Rand Index (ARI), Variation of Information (VI), and Normalised Variation of Information (NVI). This work investigates the ability of these partition measures to capture three predefined intuitions: the variation within randomly generated partitions, the sensitivity to small perturbations, and independence from the dataset scale. It is shown that the Adjusted Rand Index performs well overall with regard to these three intuitions.
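The Rand Index counts the pairs of points on which two partitions agree (both grouped together, or both kept apart). A pure-Python sketch of RI; the other measures compared (ARI, VI, NVI) follow the same pairwise or information-theoretic pattern:

```python
from itertools import combinations

def rand_index(p1, p2):
    """p1, p2: cluster labels of the same points. RI = agreeing pairs / all pairs."""
    pairs = list(combinations(range(len(p1)), 2))
    agree = sum((p1[i] == p1[j]) == (p2[i] == p2[j]) for i, j in pairs)
    return agree / len(pairs)

# Identical partitions score 1.0 regardless of label names;
# moving a single point lowers the score.
same = rand_index([0, 0, 1, 1], [1, 1, 0, 0])        # -> 1.0
perturbed = rand_index([0, 0, 1, 1], [0, 0, 1, 0])   # -> 0.5
```

The sensitivity-to-perturbation intuition in the paper is visible directly here: one relabeled point changes half of the pairwise decisions in this small example.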

Keywords: clustering, comparing partitions, similarity measure, partition distance, partition metric, similarity between partitions, clustering comparison

Procedia PDF Downloads 6
2870 A Deep Learning Approach for Optimum Shape Design

Authors: Cahit Perkgöz


Artificial intelligence has brought new approaches to solving problems in almost every research field in recent years. One of these topics is shape design and optimization, which has possible applications in many fields, such as nanotechnology and electronics. A properly constructed cost function can eliminate the need for the labeled data usually required in deep learning and create desired shapes. In this work, the network parameters are optimized differentially, which differs from traditional approaches. The methods are tested on physics-related structures, and successful results are obtained. This work is supported by the Eskişehir Technical University scientific research project (Project No: 20ADP090).

Keywords: deep learning, shape design, optimization, artificial intelligence

Procedia PDF Downloads 4
2869 Frame Camera and Event Camera in Stereo Pair for High-Resolution Sensing

Authors: Khen Cohen, Daniel Yankelevich, David Mendlovic, Dan Raviv


We present a 3D stereo system for high-resolution sensing in both the spatial and temporal domains that combines a frame-based camera and an event-based camera. We establish a method to merge both devices into one unified system and introduce a calibration process, followed by a correspondence technique and an interpolation algorithm for 3D reconstruction. We further provide a quantitative analysis of our system in terms of depth resolution, along with additional parameter analysis. We show experimentally how our system performs temporal super-resolution effectively up to 1 ms and can detect fast-moving objects and human micro-movements that can be used for micro-expression analysis. We also demonstrate how our method can extract colored events from an event-based camera without any degradation in spatial resolution, compared to a color filter array.

Keywords: DVS-CIS stereo vision, micro-movements, temporal super-resolution, 3D reconstruction

Procedia PDF Downloads 5
2868 Iterative White Balance Adjustment Process in Production Line

Authors: Onur Onder, Celal Tanuca, Mahir Ozil, Halil Sen, Alkım Ozkan, Engin Ceylan, Ali Istek, Ozgur Saglam


White balance adjustment of LCD TVs is an important procedure with a direct influence on perceived quality. Existing methods adjust RGB gain and offset values at different white levels during production. This paper suggests an iterative method in which the gamma is pre-adjusted during the design stage, and only the 80% white level is adjusted during production by modifying RGB gain values alone (offset values are not modified). This method reduces the white balance adjustment time, contributing to the overall efficiency of production. Experiments show that the adjustment results are well within requirements.
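The iterative adjustment can be sketched as a feedback loop that rescales the per-channel gains until the measured 80%-white output matches the target. The multiplicative panel model and the proportional correction rule below are illustrative assumptions, not the authors' exact production procedure:

```python
def iterative_white_balance(panel_response, target=255.0, tol=0.5, max_iter=100):
    """panel_response: per-channel multiplicative deviation of the LCD panel.
    Returns RGB gains that bring the measured 80%-white output to `target`."""
    gains = {"R": 1.0, "G": 1.0, "B": 1.0}
    for _ in range(max_iter):
        # Simulated measurement of the white point with the current gains applied.
        measured = {c: gains[c] * panel_response[c] * target for c in gains}
        if all(abs(measured[c] - target) < tol for c in gains):
            break  # white point within tolerance: adjustment done
        for c in gains:
            gains[c] *= target / measured[c]  # proportional correction per channel
    return gains

# A panel whose red channel is 8% strong and blue 5% weak converges quickly.
gains = iterative_white_balance({"R": 1.08, "G": 1.00, "B": 0.95})
```

In a real production line, the `measured` step would come from a colorimeter reading of the panel rather than this closed-form model, and convergence would take a few iterations instead of one.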

Keywords: color temperature, LCD panel deviation, LCD TV manufacturing, white balance

Procedia PDF Downloads 0
2867 On Stability of Stochastic Differential Equations with Non Trivial Solutions

Authors: Fakhreddin Abedi, Wah June Leong


Exponential stability of stochastic differential equations with non-trivial solutions is established in terms of Lyapunov functions. The main result of this paper shows that, under certain hypotheses on the dynamics f(.) and g(.), practical exponential stability in probability in a small neighborhood of the origin is equivalent to the existence of an appropriate Lyapunov function. Indeed, we establish exponential stability of stochastic differential equations when almost all state trajectories are bounded and approach a sufficiently small neighborhood of the origin. We derive sufficient conditions for the exponential stability of stochastic differential equations and conclude with a numerical example illustrating our results.

Keywords: exponential stability in probability, stochastic differential equations, Lyapunov technique, Ito’s formula

Procedia PDF Downloads 5
2866 Predicting Stack Overflow Accepted Answers Using Features and Models with Varying Degrees of Complexity

Authors: Osayande Pascal Omondiagbe, Sherlock A. Licorish


Stack Overflow is a popular community question-and-answer portal used by practitioners to solve technology-related challenges during software development. Previous studies have shown that this forum is becoming a substitute for official programming language documentation. While tools have sought to aid developers by presenting interfaces for exploring Stack Overflow, developers often face challenges searching through many possible answers to their questions, which extends development time. To this end, researchers have provided ways of predicting acceptable Stack Overflow answers using various modeling techniques. However, less attention has been dedicated to examining the performance and quality of the typically used modeling methods, especially in relation to the complexity of the models and features. Such insights could be of practical significance to the many practitioners who use Stack Overflow. This study examines the performance and quality of various modeling methods used for predicting acceptable answers on Stack Overflow, drawn from 2014, 2015, and 2016. Our findings reveal significant differences in model performance and quality given the type of features and the complexity of the models used. Researchers examining classifier performance and quality and feature complexity may leverage these findings when selecting suitable techniques for developing prediction models.

Keywords: feature selection, modeling and prediction, neural network, random forest, stack overflow

Procedia PDF Downloads 11
2865 Plant Leaf Recognition Using Deep Learning

Authors: Aadhya Kaul, Gautam Manocha, Preeti Nagrath


Our environment comprises a wide variety of plants that are similar to each other, and this similarity sometimes makes the identification process tedious, increasing the workload of botanists all over the world. Botanists cannot be available at all times for such laborious plant identification; therefore, there is a need for a quick classification model. Along with identifying a plant, it is also necessary to classify it as healthy or not, since a good lifestyle requires good food, and this food comes from healthy plants. A large number of techniques have been applied to classify plants as healthy or diseased. This paper proposes one such method: anomaly detection using autoencoders on a collection of leaf images. In this method, an autoencoder model is built using Keras, the original leaf images are reconstructed, and a threshold loss is determined in order to classify the plant leaves as healthy or diseased. A dataset of plant leaves is used to judge the reconstruction performance of convolutional autoencoders, and the average accuracy obtained is 71.55%.
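Once the autoencoder is trained, classification reduces to comparing each leaf's reconstruction error against a threshold. A framework-free sketch of that decision step (the threshold rule, mean healthy-image error plus a standard-deviation margin, is a common convention and an assumption here, not necessarily the paper's exact rule):

```python
def mse(a, b):
    # Mean squared reconstruction error between original and reconstructed pixels.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def fit_threshold(healthy_errors, margin=2.0):
    # Threshold = mean healthy-image error + margin * standard deviation.
    n = len(healthy_errors)
    mean = sum(healthy_errors) / n
    std = (sum((e - mean) ** 2 for e in healthy_errors) / n) ** 0.5
    return mean + margin * std

def classify(error, threshold):
    return "diseased" if error > threshold else "healthy"

# Validation errors from healthy leaves set the threshold; a poorly reconstructed
# (anomalous) leaf exceeds it and is flagged as diseased.
threshold = fit_threshold([0.010, 0.012, 0.011, 0.009])
original = [0.1, 0.5, 0.9, 0.4]
bad_recon = [0.6, 0.1, 0.3, 0.9]
label = classify(mse(original, bad_recon), threshold)
```

The autoencoder only ever sees healthy leaves during training, which is why diseased leaves reconstruct poorly and stand out as anomalies.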

Keywords: convolutional autoencoder, anomaly detection, web application, Flask

Procedia PDF Downloads 12
2864 A Model Architecture Transformation with Approach by Modeling: From UML to Multidimensional Schemas of Data Warehouses

Authors: Ouzayr Rabhi, Ibtissam Arrassen


To provide a complete analysis of an organization and to help decision-making, leaders need relevant data; Data Warehouses (DW) are designed to meet such needs. However, designing a DW is not trivial, and there is no formal method to derive a multidimensional schema from heterogeneous databases. In this article, we present a Model-Driven approach to the design of data warehouses. We describe a multidimensional meta-model and specify a set of transformations starting from a Unified Modeling Language (UML) metamodel. In this approach, the UML metamodel and the multidimensional one are both considered platform-independent models (PIM). The first meta-model is mapped into the second through transformation rules carried out in the Query/View/Transformation (QVT) language. This proposal is validated by applying our approach to generating the multidimensional schema of a Balanced Scorecard (BSC) DW. We are particularly interested in the BSC perspectives, which are highly linked to the vision and strategies of an organization.

Keywords: data warehouse, meta-model, model-driven architecture, transformation, UML

Procedia PDF Downloads 3
2863 Condition Monitoring of Vehicle Suspension - A Machine Learning Proposal

Authors: Alexandra Baicoianu, Patric Stanoiu, Marian Velea, Calin Husar


Current trends involve machine learning techniques, artificial intelligence, and related methods in most Industry 4.0 research directions. In the automotive industry, among others, new machine learning methods and algorithms appear mainly in order to shorten the development time of new components and their validation. The aim of this paper is to generate an input data set, starting from a classic system existing in the Simcenter Amesim platform, use it as input in a machine learning analysis, and validate the newly proposed machine learning methodology. This approach analyzes a vehicle suspension model using an artificial neural network. Essential to this work are the data sets on which the neural network is trained, as these require an exceptional degree of accuracy and robustness for the result to be as close as possible to the mathematical calculations. The final aim is to help create a model enabling the prediction of the vehicle suspension's travel, speed, and acceleration.

Keywords: condition monitoring, car suspension, hyperparameter tuning, stochastic gradient descent, neural network builder, Simcenter Amesim

Procedia PDF Downloads 12
2862 Prioritization in Modern Portfolio Management - An Action Design Research Approach to Method Development for Scaled Agility

Authors: Jan-Philipp Schiele, Karsten Schlinkmeier


Allocation of scarce resources is a core process of traditional project portfolio management. However, with the popularity of agile methodology, established concepts and methods of portfolio management are reaching their limits and need to be adapted. Consequently, the question arises of how the process of resource allocation can be managed appropriately in scaled agile environments. The prevailing framework, SAFe, offers Weighted Shortest Job First (WSJF) as a prioritization technique, but established companies are still looking for methodical adaptations to apply WSJF for portfolio prioritization in a more goal-oriented way, aligned with their needs in practice. In this paper, the problem of prioritization in portfolios is conceptualized from the perspective of coordination and related mechanisms to support resource allocation. Further, an Action Design Research (ADR) project with case studies in a finance company is outlined to develop a practically applicable yet scientifically sound prioritization method based on coordination theory. The ADR project will be flanked by consortium research with various practitioners from the financial and insurance industries. Preliminary design requirements indicate that the use of a feedback loop leads to better coordination between team and executive levels in the prioritization process.
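WSJF, the SAFe prioritization technique discussed above, ranks jobs by cost of delay divided by job size. A minimal sketch with illustrative backlog items (the item names and relative estimates are hypothetical):

```python
def wsjf(user_business_value, time_criticality, risk_reduction, job_size):
    # SAFe: cost of delay = business value + time criticality + risk reduction /
    # opportunity enablement; WSJF = cost of delay / job size (duration proxy).
    return (user_business_value + time_criticality + risk_reduction) / job_size

backlog = {
    "feature_a": wsjf(8, 5, 3, 8),    # illustrative relative estimates
    "feature_b": wsjf(13, 8, 5, 3),   # (e.g. modified-Fibonacci scale)
    "feature_c": wsjf(3, 2, 1, 5),
}
# Highest WSJF first: the shortest, most delay-costly job wins.
priority_order = sorted(backlog, key=backlog.get, reverse=True)
```

The methodical adaptations the paper aims at would change how these component estimates are produced and coordinated across team and executive levels, not the division itself.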

Keywords: scaled agility, portfolio management, prioritization, business-IT alignment

Procedia PDF Downloads 15
2861 Multi-source Question Answering Framework Using Transformers for Attribute Extraction

Authors: Prashanth Pillai, Purnaprajna Mangsuli


Oil exploration and production companies invest considerable time and effort to extract essential well attributes (such as well status, surface and target coordinates, wellbore depths, and event timelines) from unstructured data sources like technical reports, which are often non-standardized, multimodal, and highly domain-specific by nature. It is also important to consider context when extracting attribute values from reports that contain information on multiple wells or wellbores. Moreover, semantically similar information may be depicted in different syntactic representations across multiple pages and document sources. We propose a hierarchical multi-source fact extraction workflow based on a deep learning framework to extract essential well attributes at scale. An information retrieval module based on the transformer architecture ranks relevant pages in a document source using page image embeddings and semantic text embeddings. A question answering framework utilizing the LayoutLM transformer then extracts attribute-value pairs, incorporating text semantics and layout information from the top relevant pages of a document. To better handle context in multi-well reports, we incorporate a dynamic query generation module to resolve ambiguities. The attribute information extracted from various pages and documents is standardized to a common representation using a parser module to facilitate comparison and aggregation. Finally, we use a probabilistic approach to fuse the information extracted from multiple sources into a coherent well record. The applicability and performance of the proposed approach were studied on several real-life well technical reports.

Keywords: natural language processing, deep learning, transformers, information retrieval

Procedia PDF Downloads 12
2860 Green Synthesis of Copper Oxide and Cobalt Oxide Nanoparticles Using Spinacia Oleracea Leaf Extract

Authors: Yameen Ahmed, Jamshid Hussain, Farman Ullah, Sohaib Asif


This investigation aims at the synthesis of copper oxide and cobalt oxide nanoparticles using Spinacia oleracea leaf extract. These nanoparticles have many properties and applications: they possess antimicrobial and catalytic properties and can be used in energy storage materials, gas sensors, etc. The Spinacia oleracea leaf extract acts as a reducing agent in the nanoparticle synthesis. The plant extract was first prepared and then treated with copper and cobalt salt solutions to obtain the precipitate. The salt solutions used for this purpose were copper sulfate pentahydrate (CuSO₄.5H₂O) and cobalt chloride hexahydrate (CoCl₂.6H₂O). UV-Vis, XRD, EDX, and SEM techniques were used to determine the optical, structural, and morphological properties of the copper oxide and cobalt oxide nanoparticles. The UV absorption peaks are at 326 nm for copper oxide and 506 nm for cobalt oxide nanoparticles.

Keywords: cobalt oxide, copper oxide, green synthesis, nanoparticles

Procedia PDF Downloads 3
2859 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market

Authors: Taylan Kabbani, Ekrem Duman


The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts to generate successful trades in financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) addresses these drawbacks of SL approaches by combining the price "prediction" step and the portfolio "allocation" step in one unified process to produce fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e., the agent environment, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment.
From the point of view of stock market forecasting and intelligent decision-making, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and shows its credibility and advantages for strategic decision-making.
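The agent-environment interaction described above can be sketched as a single environment step: the continuous action re-allocates positions, transaction costs are charged on the turnover, and the reward is the resulting change in portfolio value. The weights, cost rate, and reward definition below are illustrative assumptions, not the paper's exact environment:

```python
def step(weights, new_weights, returns, cost_rate=0.001):
    """One trading step: re-allocate, pay costs on turnover, realize asset returns.
    weights / new_weights: portfolio fractions per asset (summing to 1).
    returns: per-asset price returns over the step."""
    turnover = sum(abs(nw - w) for nw, w in zip(new_weights, weights))
    cost = cost_rate * turnover              # liquidity/transaction-cost constraint
    gross = sum(nw * r for nw, r in zip(new_weights, returns))
    reward = gross - cost                    # net portfolio return used as RL reward
    return new_weights, reward

# Shift half the portfolio into the better-performing asset.
_, reward = step([0.5, 0.5], [1.0, 0.0], [0.02, -0.01])
```

In the paper's full setting, the state fed to the TD3 policy would also carry the ten technical indicators and the news-sentiment features per stock; this sketch isolates only the transition and reward mechanics.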

Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent

Procedia PDF Downloads 14
2858 Private Inference for Credit Card Fraud Detection Using Deep Learning and Homomorphic Encryption

Authors: Bidisha Mandal, Sameer Ranjan, Prashant Chugh


Privacy preservation is one of the major concerns of today's world, as people are increasingly aware of the uses of their personal data. The transition to a digitalized world accelerated after the pandemic hit the globe in 2020. Everyone started to use online services, and consumers moved towards digital channels for their requirements. The business models of organizations and industries also changed gears to provide these services. This, in turn, increases the rate of misuse of personal data and fraud in different sectors. In this paper, we outline and propose a solution for detecting fraud in credit card transactions while safeguarding the privacy of the consumer. The solution employs one of the most promising privacy preservation techniques, homomorphic encryption, together with deep learning methods. The proposal detects fraud by running inference on encrypted consumer data in a pre-trained model, so that the customer remains assured of their privacy. The main aim of our proposal is thus to protect the privacy of the customer while customer data is shared to use the services of a deep learning model, and we achieve this goal with promising accuracy, which is satisfactory considering that non-linear operations have to be approximated to work with homomorphic encryption.
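Homomorphic encryption schemes evaluate only additions and multiplications, so the non-linear activations mentioned at the end of the abstract must be replaced by polynomial approximations. A plaintext sketch of that idea, using the common square activation and a low-degree sigmoid approximation (the coefficients are a standard Taylor-style choice and an assumption here, not the paper's exact approximation):

```python
def square_activation(x):
    # HE-friendly replacement for ReLU: a single multiplication.
    return x * x

def approx_sigmoid(x):
    # Degree-3 polynomial approximation of sigmoid near 0: 0.5 + x/4 - x^3/48.
    return 0.5 + x / 4 - x ** 3 / 48

def dense_layer(inputs, weights, bias, activation):
    # Dot product + bias: expressible entirely with HE additions/multiplications.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

# Plaintext stand-in for one encrypted inference step on transaction features.
score = dense_layer([0.2, -0.1], [0.5, 1.5], 0.05, approx_sigmoid)
```

Under an actual HE scheme, `inputs` would be ciphertexts and the same arithmetic would run on encrypted values; the approximation error of the polynomial is what makes the achieved accuracy "satisfactory" rather than identical to the plaintext model.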

Keywords: privacy preservation, homomorphic encryption (HE), deep learning (DL), private inference, credit card fraud detection

Procedia PDF Downloads 18
2857 Donoho-Stark’s and Hardy’s Uncertainty Principles for the Short-Time Quaternion Offset Linear Canonical Transform

Authors: Mohammad Younus Bhat


The quaternion offset linear canonical transform (QOLCT), which is a time-shifted and frequency-modulated version of the quaternion linear canonical transform (QLCT), provides a more general framework for most existing signal processing tools. For the generalized QOLCT, the classical Heisenberg and Lieb uncertainty principles have been studied recently. In this paper, we first define the short-time quaternion offset linear canonical transform (ST-QOLCT) and derive its relationship with the quaternion Fourier transform (QFT). The crux of the paper lies in the generalization of several well-known uncertainty principles to the ST-QOLCT, including Donoho-Stark's uncertainty principle, Hardy's uncertainty principle, Beurling's uncertainty principle, and the logarithmic uncertainty principle.

Keywords: quaternion Fourier transform, quaternion offset linear canonical transform, short-time quaternion offset linear canonical transform, uncertainty principle

Procedia PDF Downloads 14
2856 Application Layer Distributed Denial of Service Attack Detection Using Machine Learning

Authors: Songyuan Sui, Chen Zhu


Distributed denial of service (DDoS) attacks are regarded as one of the most serious network threats today. Attackers marshal a large portion of network traffic from multiple nodes to launch DDoS attacks. With the improvement of many defense methods for the network layer and the increasing popularity of application scenarios focusing on the application layer, defense against application-layer DDoS attacks is becoming more and more important. This paper presents a literature review of machine learning-based DDoS detection in three popular application scenarios. It illustrates that application-layer DDoS attack detection is important but somewhat overlooked. We also performed an experimental analysis of five machine learning models specifically for application-layer DDoS detection. The results indicate that application-layer servers can use typical machine learning models, at lower resource cost and with better performance, to detect application-layer DDoS attacks automatically.
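As a toy illustration of the kind of per-client features such models consume (not one of the models evaluated in the paper), the sketch below flags clients whose request rate deviates strongly from the mean, using a z-score threshold; the feature and threshold choices are assumptions:

```python
from statistics import mean, stdev

def flag_suspects(requests_per_client, z_threshold=3.0):
    """Flag clients whose request count is a statistical outlier.

    requests_per_client: dict mapping client id -> number of requests
    observed in a fixed time window.  Returns the set of client ids
    whose z-score exceeds the threshold -- a crude stand-in for a
    learned application-layer DDoS classifier.
    """
    counts = list(requests_per_client.values())
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return set()
    return {cid for cid, n in requests_per_client.items()
            if (n - mu) / sigma > z_threshold}

traffic = {f"host{i}": 20 + (i % 5) for i in range(50)}
traffic["attacker"] = 5000   # one client floods the server
print(flag_suspects(traffic))
```

A real detector would combine many such features (request rate, URI entropy, session behavior) and feed them to a trained classifier rather than a fixed threshold.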

Keywords: anomaly detection, application layer, distributed denial of service, machine learning

Procedia PDF Downloads 13
2855 Noise Detection Algorithm for Skin Disease Image Identification

Authors: Minakshi Mainaji Sonawane, Bharti W. Gawali, Sudhir Mendhekar, Ramesh R. Manza


People's lives and health are severely impacted by skin diseases. This study proposes an effective method for identifying different forms of skin disease. Image denoising is a technique for improving image quality after it has been degraded by noise. The proposed technique is based on the wavelet transform, which is well suited to analyzing an image because of its ability to split the image into sub-bands, which are used to estimate the noise ratio in the noisy image. According to the experimental results, the proposed method yields the best MSE, PSNR, and entropy values for the denoised images; by using different types of wavelet transform filters, the approach obtains values of 23.13, 20.08, and 50.7, respectively, for the image denoising process.
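The MSE and PSNR quality metrics reported above can be computed as in the following sketch (a generic formulation for 8-bit images, not the authors' code; the sample pixel values are illustrative):

```python
import math

def mse(original, denoised):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(original, denoised)) / len(original)

def psnr(original, denoised, max_val=255):
    """Peak signal-to-noise ratio in dB, with max_val = 255 for 8-bit images."""
    err = mse(original, denoised)
    if err == 0:
        return float("inf")   # identical images
    return 10 * math.log10(max_val ** 2 / err)

clean = [100, 120, 130, 140]
noisy = [102, 118, 133, 139]
print(f"MSE  = {mse(clean, noisy):.2f}")
print(f"PSNR = {psnr(clean, noisy):.2f} dB")
```

Lower MSE and higher PSNR indicate a denoised image closer to the original.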

Keywords: MSE, PSNR, entropy, Gaussian filter, DWT

Procedia PDF Downloads 31
2854 A Multidimensional Genetic Algorithm Applicable for Our VRP Variant Dealing with the Problems of Infrastructure Defaults SVRDP-CMTW: “Safety Vehicle Routing Diagnosis Problem with Control and Modified Time Windows”

Authors: Ben Mansour Mouin, Elloumi Abdelkarim


We discuss the problem of routing a fleet of different vehicles from a central depot to different types of infrastructure defaults with dynamic maintenance requests, modified time windows, and control of the maintained defaults. For this purpose, we propose a modified metaheuristic to solve our mathematical model. SVRDP-CMTW is a VRP variant that produces an optimal vehicle plan facilitating the maintenance of different types of infrastructure defaults. This task is monitored after maintenance, based on its priorities, the degree of danger associated with each default, and the neighborhood of the black spots. In this paper, we present a multidimensional genetic algorithm (MGA), detailing its characteristics, proposed mechanisms, and role in our work. The coding of this algorithm represents the parameters that characterize each infrastructure default, with the objective of minimizing a combination of cost, distance, and maintenance times while satisfying the priority levels of the most urgent defaults. The developed algorithm allows the dynamic integration of newly detected defaults at execution time; this result is displayed in our interactive system at routing time. The multidimensional genetic algorithm replaces N separate genetic algorithms for P different types of infrastructure-default problems: instead of one algorithm per problem, a single multidimensional algorithm solves all of them simultaneously.
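To make the encoding concrete, the sketch below shows a minimal permutation-coded genetic algorithm for a single depot-to-defaults route (a deliberately simplified illustration, not the MGA of the paper: one dimension, elitist selection, and swap mutation only; the distance matrix is invented):

```python
import random

def route_length(route, dist):
    """Total length of a depot -> stops -> depot tour (depot is node 0)."""
    tour = [0] + list(route) + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def genetic_route(dist, generations=200, pop_size=30, seed=0):
    rng = random.Random(seed)
    stops = list(range(1, len(dist)))
    # Each chromosome is a permutation of the stops to visit.
    pop = [rng.sample(stops, len(stops)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_length(r, dist))
        survivors = pop[: pop_size // 2]             # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            child = list(rng.choice(survivors))
            i, j = rng.sample(range(len(child)), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda r: route_length(r, dist))
    return best, route_length(best, dist)

# Symmetric distance matrix: node 0 is the depot, 1..4 are defaults.
D = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]
best_route, best_len = genetic_route(D)
print(best_route, best_len)
```

The MGA described above extends this idea by encoding all default types and priority terms in one multidimensional chromosome instead of running one such algorithm per problem type.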

Keywords: mathematical model, VRP, multidimensional genetic algorithm, metaheuristics

Procedia PDF Downloads 24
2853 A Multigranular Linguistic ARAS Model in Group Decision Making

Authors: Wiem Daoud Ben Amor, Luis Martínez López, Hela Moalla Frikha


Most of the multi-criteria group decision making (MCGDM) problems dealing with qualitative criteria require consideration of a large background of expert information. It is common that experts have different degrees of knowledge when giving their alternative assessments according to the criteria. So it seems logical that they use different evaluation scales to express their judgment, i.e., multigranular linguistic scales. In this context, we propose an extension of the classical additive ratio assessment (ARAS) method to the case of hierarchical linguistic terms for managing multigranular linguistic scales in uncertain contexts where uncertainty is modeled by means of linguistic information. The proposed approach is called the extended hierarchical linguistic ARAS method (ARAS-ELH). Within the ARAS-ELH approach, the decision maker (DM) can diagnose the results (the ranking of the alternatives) in a decomposed style, i.e., not only at one level of the hierarchy but also at the intermediate ones. The developed approach also allows a feedback transformation, i.e., the collective final results of all experts can be transformed at any level of the extended linguistic hierarchy that each expert previously used. Therefore, the ARAS-ELH technique makes it easier for decision-makers to understand the results. Finally, an MCGDM case study is given to illustrate the proposed approach.
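For readers unfamiliar with the underlying method, the classical crisp ARAS procedure that the paper extends can be sketched as follows (benefit criteria only; the numbers are illustrative, not from the case study):

```python
def aras_rank(matrix, weights):
    """Classical (crisp) ARAS ranking for benefit criteria.

    matrix: rows are alternatives, columns are criteria values.
    weights: criterion weights summing to 1.
    Returns the utility degree K_i of each alternative relative to the
    optimal (best-per-criterion) alternative; larger K_i ranks higher.
    """
    # Step 1: prepend the optimal alternative (column-wise maxima).
    optimal = [max(col) for col in zip(*matrix)]
    extended = [optimal] + [list(row) for row in matrix]
    # Step 2: normalize each column by its sum.
    col_sums = [sum(col) for col in zip(*extended)]
    norm = [[v / s for v, s in zip(row, col_sums)] for row in extended]
    # Step 3: weighted sums S_i, then utility degrees K_i = S_i / S_0.
    scores = [sum(w * v for w, v in zip(weights, row)) for row in norm]
    s0 = scores[0]
    return [s / s0 for s in scores[1:]]

K = aras_rank([[75, 0.6], [60, 0.9], [90, 0.5]], [0.5, 0.5])
print(K)
```

The ARAS-ELH extension replaces the crisp values with linguistic terms drawn from each expert's own granularity level and aggregates them through the extended linguistic hierarchy.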

Keywords: additive ratio assessment, extended hierarchical linguistic, multi-criteria group decision making problems, multi granular linguistic contexts

Procedia PDF Downloads 30
2852 Sexual Crime Prediction in an African Context

Authors: Olayemi Success Falope, Surendra Thakur


The significant rise of sexual crime around the world and the inability to control this violation have resulted in serious attacks, murders, and injuries to victims in many situations. Crime in general, of which sexual crime is a major component, has dragged down South Africa’s economy. The growing need to mitigate sexual crime across the country prompted this study. Data mining techniques were applied to a sexual crime dataset extracted from the South African crime statistics database on the Kaggle website to visualize sexual crime trends and build a model to predict future sexual crime occurrences, thereby helping the government and law enforcement agencies gain insight into the most common sexual crime hotspots across all nine provinces of South Africa. The model could enable law officials to take more timeous action to curb sexual crime in the country. This paper focuses on identifying the data analytics algorithms available for sexual crime prediction and determining the most suitable algorithm for the study. The linear regression and decision tree classifier algorithms were applied to the extracted dataset to predict the features responsible for sexual crime in South Africa; of these, linear regression was the more effective. The researcher found that a linear relationship exists between the dependent variable (sexual crime) and the independent variables (population and density). Accuracy, precision, recall, and F1 score were used to measure the performance of the decision tree algorithm, while linear regression was evaluated using the coefficient of determination (the R-squared score), a key output of regression analysis. The 91% accuracy achieved indicates how effectively the model will predict sexual crime occurrences.

Keywords: algorithms, data analytics, data mining, decision tree classifier, linear regression, sexual crime prediction, south africa

Procedia PDF Downloads 27
2851 Citizen Science Policy Process in Finland

Authors: Elena T. Svahn


Citizen science is an activity in which the general public interacts with scientists, co-producing new knowledge about our world in order to advance science and improve society and human well-being. In the best-case scenario, citizen science makes the impossible possible, for instance by allowing the collection of massive data sets that could not be collected by any other method. Citizen science also increases the general public’s trust in the scientific process, improves information literacy, and decreases the impact of fake news and disinformation. Taking an active role in the improvement of society and participating in the pertaining discourse empowers citizens and encourages them towards more active membership in society. Supranational organisations such as the EU, OECD, and UN, supported by international scientific literature, are calling for citizen science to be used as a method for tackling global wicked problems, paving the way towards the 17 SDGs. To that end, the Finnish Open Science coordination is outlining strategic principles, objectives, and action plans to ensure that support for citizen science is offered in organisations, in line with the Declaration for Open Science and Research. The policy for citizen science is drafted under the area of culture for open scholarship. The working group has been tasked with drafting the policy and conducting a survey to map the opinions and experiences of citizen scientists, researchers, research organisations, and funders on the topic of citizen science. The aim of this study is to evaluate the citizen science policy process in Finland through the policy cycle notion.

Keywords: citizen science, policy, policy process, policy cycle, finland

Procedia PDF Downloads 30
2850 An Observation Approach of Reading Order for Single Column and Two Column Layout Template

Authors: In-Tsang Lin, Chiching Wei


Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. Our survey of the literature finds that state-of-the-art algorithms fail to recover the correct reading order in portable document format (PDF) files with rich formatting and diverse layout arrangements. In recent years, most studies on reading-order analysis have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to detecting the reading-order relationships between logical components, such as cross-references. Over three years of development, the company Foxit has refined its layout recognition (LR) engine, which in revision 20601 aims for accurate reading order. The bounding box of each paragraph is obtained correctly by the Foxit LR engine, but the resulting reading order is not always correct for single-column and two-column layouts, owing to issues with tables, formulas, multiple small separated bounding boxes, and footers. We therefore developed an algorithm to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a creative observation method (here called the MESH method) is proposed, opening a new direction in reading-order research. Two important parameters are introduced: the number of bounding boxes to the right of the present bounding box (NRight) and the number of bounding boxes below the present bounding box (NUnder). The normalized x-value (x divided by the page width), the normalized y-value (y divided by the page height), and the x- and y-positions of each bounding box are also taken into consideration.
Initial experimental results for the single-column layout demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) using our proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 72%. For the two-column layout, preliminary results demonstrate a 44.44% absolute improvement in reading-order accuracy over 2 PDF files (18 pages in total) compared with the same baseline, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple-small-separated-bounding-box issue can be solved by the MESH method. Three issues remain unsolved: tables, formulas, and randomly placed multiple small separated bounding boxes. The detection of the table position and the recognition of the table structure are out of scope for this paper and require separate research. Future work will address how to detect the table position on the page and extract the content of the table.
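The two MESH parameters can be computed from paragraph bounding boxes as in the sketch below (a simplified reconstruction from the description above; the box representation and the strict "entirely to the right / entirely below" test are assumptions):

```python
def mesh_params(boxes):
    """For each box (x, y, w, h), with y growing downwards, count the
    boxes entirely to its right (NRight) and entirely below it (NUnder)."""
    params = []
    for i, (x, y, w, h) in enumerate(boxes):
        n_right = sum(1 for j, (x2, _, _, _) in enumerate(boxes)
                      if j != i and x2 >= x + w)
        n_under = sum(1 for j, (_, y2, _, _) in enumerate(boxes)
                      if j != i and y2 >= y + h)
        params.append((n_right, n_under))
    return params

# Two-column page: left-column boxes first, then right-column boxes.
page = [(0, 0, 40, 10),    # left column, top
        (0, 15, 40, 10),   # left column, bottom
        (50, 0, 40, 10),   # right column, top
        (50, 15, 40, 10)]  # right column, bottom
print(mesh_params(page))
```

In this example the left-column boxes have NRight = 2 and the last box in reading order has (0, 0), which is the kind of signal the MESH method exploits when ordering paragraphs.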

Keywords: document processing, reading order, observation method, layout recognition

Procedia PDF Downloads 18
2849 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms

Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov


The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on multi-CPU/multi-GPU clusters; for example, most attempts to implement the classical conjugate gradient method at best kept the solution time constant as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited to hybrid CPU/GPU computing since it requires only one synchronization point per iteration instead of two for the standard CG. Both the standard and pipelined CG methods need the vector entries generated by the current GPU and the other GPUs for matrix-vector products, so communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communications between parallel parts of the algorithms. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, asynchronous calculations and communications, and load balancing between the CPU and GPU makes solving large linear systems scalable. The algorithm is implemented with a combination of technologies: MPI, OpenMP, and CUDA. We show that an almost optimum speedup on 8 CPUs/2 GPUs may be reached (relative to a single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, compared to one GPU.
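For reference, the two global reductions that standard CG performs each iteration, the synchronization points that the pipelined variant merges into one, are visible in this plain-Python sketch for a small SPD system (a textbook formulation, not the paper's MPI/CUDA implementation; the matrix is illustrative):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Standard CG for a symmetric positive-definite system Ax = b.
    The two dot products marked below are the per-iteration global
    reductions (synchronization points) that pipelined CG restructures
    into a single one."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                        # residual r = b - A x, with x = 0
    p = list(r)
    rs_old = sum(ri * ri for ri in r)  # reduction 1: (r, r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))  # reduction 2: (p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)   # reduction 1 of the next iteration
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
print(conjugate_gradient(A, b))
```

On a distributed system each marked dot product is a global all-reduce; removing one of the two per-iteration reductions is precisely what makes the pipelined variant attractive on multi-GPU clusters.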

Keywords: conjugate gradient, GPU, parallel programming, pipelined algorithm

Procedia PDF Downloads 23