Search results for: adversarial attack
710 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these attacks aim to alter the results of a DNN by slightly modifying its inputs. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. We study two types of adversarial attacks: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate input images into different categories. An adversarial attack slightly alters the image to move it over the decision boundary, causing the DNN to misclassify the image. The FGSM attack computes the gradient of the loss with respect to the image and updates the image once, based on that gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack, called the targeted attack, which is designed to make the machine classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training: instead of training the neural network only on clean examples, we explicitly let it learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. If we utilize FGSM training as a defense method, the classification accuracy greatly improves, from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use the stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defenses are overall significantly more effective than FGSM methods.
Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
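As a rough illustration of the two attacks described in this abstract, the sketch below implements one-step FGSM and iterative PGD in PyTorch; the placeholder model, loss, epsilon/step values, and pixel range are assumptions rather than the paper's exact setup. Adversarial (FGSM or PGD) training then simply feeds batches perturbed this way back into training.

```python
# Hedged sketch of FGSM and PGD; epsilon, alpha, and steps are illustrative values.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.25):
    """One signed-gradient step that increases the classification loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def pgd_attack(model, images, labels, epsilon=0.3, alpha=0.01, steps=40):
    """Many small FGSM-style steps, each projected back into the epsilon-ball."""
    original = images.clone().detach()
    adv = original.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        loss.backward()
        with torch.no_grad():
            adv = adv + alpha * adv.grad.sign()
            adv = original + (adv - original).clamp(-epsilon, epsilon)  # projection
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()

# Placeholder classifier and data, only to show the call pattern.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x, y = torch.rand(16, 1, 28, 28), torch.randint(0, 10, (16,))
x_fgsm, x_pgd = fgsm_attack(model, x, y), pgd_attack(model, x, y)
```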
Procedia PDF Downloads 196
709 Non-Targeted Adversarial Image Classification Attack-Region Modification Methods
Authors: Bandar Alahmadi, Lethia Jackson
Abstract:
Machine learning models are used today in many real-life applications. The safety and security of such models are important so that their results remain as accurate as possible. One challenge of machine learning model security is the adversarial example attack. Adversarial examples are designed by the attacker to cause a machine learning model to misclassify its input. We propose a method to generate adversarial examples to attack image classifiers. We modify successfully classified images so that a classifier misclassifies them after the modification. In our method, we do not update the whole image; instead, we detect the important region, modify it, place it back in the original image, and then run it through a classifier. The algorithm modifies the detected region using two methods. First, it adds an abstract image matrix behind the detected region's matrix. Then, it performs a rotation attack, rotating the detected region around its axes and embedding the trace of the image in the image background. Finally, the attacked region is placed in its original position, from where it was removed, and a smoothing filter is applied to blend the background with the foreground. We test our method on a cascade classifier; the algorithm is efficient, and the classifier's confidence drops to almost zero. We also try it on a convolutional neural network (CNN) with higher settings, and the algorithm works successfully as well.
Keywords: adversarial examples, attack, computer vision, image processing
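A minimal sketch of the region-modification idea described above (rotate a detected region, paste it back, smooth the seam), using NumPy and SciPy; the region coordinates, rotation angle, and blur strength are assumptions for illustration, and the abstract-matrix step and the detector itself are not reproduced.

```python
import numpy as np
from scipy import ndimage

def rotate_region_attack(image, box, angle=25):
    """image: HxWx3 float array in [0, 1]; box: (y0, y1, x0, x1) detected region."""
    y0, y1, x0, x1 = box
    region = image[y0:y1, x0:x1].copy()
    # Rotation attack: rotate the detected region about its centre, keeping its shape.
    rotated = ndimage.rotate(region, angle, reshape=False, mode="reflect")
    attacked = image.copy()
    attacked[y0:y1, x0:x1] = rotated          # place the modified region back
    # Smoothing filter to blend the pasted foreground with the background.
    return ndimage.gaussian_filter(attacked, sigma=(1, 1, 0))

img = np.random.rand(64, 64, 3)               # placeholder for a classified image
adv = rotate_region_attack(img, (16, 48, 16, 48))
```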
Procedia PDF Downloads 340
708 A Deep Reinforcement Learning-Based Secure Framework against Adversarial Attacks in Power System
Authors: Arshia Aflaki, Hadis Karimipour, Anik Islam
Abstract:
Generative adversarial attacks (GAAs) threaten critical sectors, ranging from fingerprint recognition to industrial control systems. Existing deep learning (DL) algorithms are not robust enough against this kind of cyber-attack. As one of the most critical industries in the world, the power grid is no exception. In this study, a deep reinforcement learning (DRL)-based framework is proposed that assists the DL model in improving its robustness against generative adversarial attacks. Our method is evaluated on real-world smart grid stability data, an IIoT dataset, and improves the classification accuracy of a deep learning model from around 57 percent to 96 percent.
Keywords: generative adversarial attack, deep reinforcement learning, deep learning, IIoT, generative adversarial networks, power system
Procedia PDF Downloads 42
707 Literature Review: Adversarial Machine Learning Defense in Malware Detection
Authors: Leidy M. Aldana, Jorge E. Camargo
Abstract:
Adversarial machine learning has gained importance in recent years, as cybersecurity has too, especially with regard to malware, which has affected many entities and people in recent years. This paper presents a literature review of defense methods created to prevent adversarial machine learning attacks. It first gives an introduction to the context and a description of some terms; in the results section, some of the attacks are described, focusing on detecting adversarial examples before they reach the machine learning algorithm, and other existing categories of defense are presented. A five-step method is proposed in the method section in order to define the way the literature review was conducted; in addition, this paper summarizes the contributions in this research field over the last seven years to identify research directions in this area. Regarding the findings, the defense category with the fewest open challenges is the detection of adversarial examples, making it a viable research route along with the adaptive approach to attack and defense.
Keywords: malware, adversarial, machine learning, defense, attack
Procedia PDF Downloads 72
706 Non-Targeted Adversarial Object Detection Attack: Fast Gradient Sign Method
Authors: Bandar Alahmadi, Manohar Mareboyana, Lethia Jackson
Abstract:
Today, there are many applications that use computer vision models, such as face recognition, image classification, and object detection. The accuracy of these models is very important for the performance of these applications. One challenge facing computer vision models is the adversarial example attack. In computer vision, an adversarial example is an image that is intentionally designed to cause a machine learning model to misclassify it. One very well-known method used to attack convolutional neural networks (CNNs) is the fast gradient sign method (FGSM). The goal of this method is to find, using the gradient of the CNN's cost function, a perturbation that can fool the CNN. In this paper, we introduce a novel model that attacks a region-based convolutional neural network (R-CNN) using FGSM. We first extract the regions that are detected by the R-CNN and then resize these regions to the size of regular images. Then, we find the best perturbation of the regions that can fool the CNN using FGSM. Next, we add the resulting perturbation to the attacked region to get a new region image that looks similar to the original image to human eyes. Finally, we place the regions back in the original image and test the R-CNN with the attacked images. Our model is able to drop the accuracy of the R-CNN when tested on the Pascal VOC 2012 dataset.
Keywords: adversarial examples, attack, computer vision, image processing
Procedia PDF Downloads 193
705 Enhancement Method of Network Traffic Anomaly Detection Model Based on Adversarial Training With Category Tags
Authors: Zhang Shuqi, Liu Dan
Abstract:
To address the problems of intelligent network anomaly traffic detection models, such as low detection accuracy caused by the lack of training samples and poor performance on small-sample attack detection, a classification model enhancement method, F-ACGAN (Flow Auxiliary Classifier Generative Adversarial Network), which introduces a generative adversarial network and adversarial training, is proposed. Generating adversarial data with category labels can enhance the training effect and improve classification accuracy and model robustness. F-ACGAN consists of three steps: feature preprocessing, which includes data type conversion, dimensionality reduction, normalization, etc.; a generative adversarial network model with feature learning ability, whose sample generation is improved through adversarial iterations between the generator and the discriminator; and an adversarial disturbance factor along the gradient direction of the classification model, which is added to improve the diversity and antagonism of the generated data and to encourage the model to learn adversarial classification features. Experiments constructing a classification model on the UNSW-NB15 dataset show that, with the F-ACGAN enhancement of the base model, classification accuracy improves by 8.09% and the F1 score improves by 6.94%.
Keywords: data imbalance, GAN, ACGAN, anomaly detection, adversarial training, data augmentation
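The sketch below shows the core adversarial loop of a label-conditioned GAN for tabular flow features, in the spirit of the F-ACGAN design; the layer sizes, random stand-in data, and training schedule are illustrative assumptions, and the auxiliary classifier head and gradient-direction disturbance factor are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_FEATURES, N_CLASSES, NOISE_DIM = 20, 2, 16

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(NOISE_DIM + N_CLASSES, 64), nn.ReLU(),
                                 nn.Linear(64, N_FEATURES))

    def forward(self, z, labels):
        onehot = F.one_hot(labels, N_CLASSES).float()
        return self.net(torch.cat([z, onehot], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_FEATURES + N_CLASSES, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, x, labels):
        onehot = F.one_hot(labels, N_CLASSES).float()
        return self.net(torch.cat([x, onehot], dim=1))

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_x = torch.randn(128, N_FEATURES)         # stand-in for preprocessed flow features
real_y = torch.randint(0, N_CLASSES, (128,))  # stand-in for category tags

for step in range(100):
    # Discriminator: real flows vs. generated flows, both conditioned on the labels.
    fake_x = gen(torch.randn(128, NOISE_DIM), real_y).detach()
    d_loss = (bce(disc(real_x, real_y), torch.ones(128, 1)) +
              bce(disc(fake_x, real_y), torch.zeros(128, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator accept generated flows as real.
    g_loss = bce(disc(gen(torch.randn(128, NOISE_DIM), real_y), real_y), torch.ones(128, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```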
Procedia PDF Downloads 106
704 Black-Box-Base Generic Perturbation Generation Method under Salient Graphs
Authors: Dingyang Hu, Dan Liu
Abstract:
DNN (deep neural network) deep learning models are widely used in classification, prediction, and other task scenarios. To address the difficulty of generating generic adversarial perturbations for deep learning models under black-box conditions, a generic adversarial perturbation generation method based on a saliency map (CJsp) is proposed; it obtains salient image regions by measuring how strongly an image's input features influence the output results. The method can be understood as a saliency map attack algorithm that produces false classification results by reducing the weights of salient feature points. Experiments also demonstrate that this method achieves a high success rate for transfer attacks and works as a batch adversarial sample generation method.
Keywords: adversarial sample, gradient, probability, black box
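For illustration, the sketch below computes a plain white-box gradient saliency map, the kind of "influence of input features on the output" that saliency-guided attacks build on; the paper's method estimates saliency under black-box constraints, so the placeholder model, target class, and channel aggregation here are assumptions.

```python
import torch
import torch.nn as nn

def saliency_map(model, image, target_class):
    """Return |d score(target_class) / d pixel| for a single CxHxW image."""
    image = image.clone().detach().unsqueeze(0).requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    # Aggregate absolute gradients over colour channels: large values = salient pixels.
    return image.grad.abs().squeeze(0).max(dim=0).values

# Placeholder classifier and image; a saliency-map attack would suppress the
# pixels with the highest values to push the image toward a wrong class.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
heat = saliency_map(model, torch.rand(3, 32, 32), target_class=3)
print(heat.shape)
```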
Procedia PDF Downloads 105
703 Towards an Adversary-Aware ML-Based Detector of Spam on Twitter Hashtags
Authors: Niddal Imam, Vassilios G. Vassilakis
Abstract:
After analysing messages posted by health-related spam campaigns in Twitter Arabic hashtags, we found that these campaigns use unique hijacked accounts (we call them adversarial hijacked accounts) as adversarial examples to fool deployed ML-based spam detectors. Existing ML-based models build a behaviour profile for each user to detect hijacked accounts. This approach is not applicable to detecting spam in Twitter hashtags since it is computationally expensive. Hence, we propose an adversary-aware ML-based detector, which includes a newly designed feature (avg posts), to improve the detection of spam tweets posted by the adversarial hijacked accounts at the tweet level in trending hashtags. The proposed detector was designed considering three key points: robustness, adaptability, and interpretability. The new feature leverages the account’s temporal patterns (i.e., account age and number of posts). It is faster to compute than features discussed in the literature and improves the accuracy of detecting the identified hijacked accounts by 73%.
Keywords: Twitter spam detection, adversarial examples, evasion attack, adversarial concept drift, account hijacking, trending hashtag
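A toy sketch of the kind of temporal feature the abstract describes; the exact definition of "avg posts" is not given here, so the posting-rate formula below (total posts divided by account age) is an assumption, used only to illustrate why such a feature is cheap to compute.

```python
from datetime import datetime, timezone

def avg_posts_per_day(created_at, total_posts, now=None):
    """Average posting rate derived from account age and post count."""
    now = now or datetime.now(timezone.utc)
    account_age_days = max((now - created_at).days, 1)
    return total_posts / account_age_days

# A hijacked account that suddenly floods a trending hashtag tends to post far
# above the rate implied by its age and historical post count.
rate = avg_posts_per_day(datetime(2020, 1, 1, tzinfo=timezone.utc), total_posts=36500)
print(f"{rate:.1f} posts/day")
```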
Procedia PDF Downloads 80
702 Deep Reinforcement Learning and Generative Adversarial Networks Approach to Thwart Intrusions and Adversarial Attacks
Authors: Fabrice Setephin Atedjio, Jean-Pierre Lienou, Frederica F. Nelson, Sachin S. Shetty, Charles A. Kamhoua
Abstract:
Malicious users exploit vulnerabilities in computer systems, significantly disrupting their performance and revealing the inadequacies of existing protective solutions. Even machine learning-based approaches, designed to ensure reliability, can be compromised by adversarial attacks that undermine their robustness. This paper addresses two critical aspects of enhancing model reliability. First, we focus on improving model performance and robustness against adversarial threats; to achieve this, we propose a strategy that harnesses deep reinforcement learning. Second, we introduce an approach leveraging generative adversarial networks to counter adversarial attacks effectively. Our results demonstrate substantial improvements over previous works in the literature, with classifiers exhibiting enhanced accuracy in classification tasks, even in the presence of adversarial perturbations. These findings underscore the efficacy of the proposed model in mitigating intrusions and adversarial attacks within the machine-learning landscape.
Keywords: machine learning, reliability, adversarial attacks, deep-reinforcement learning, robustness
Procedia PDF Downloads 14
701 MULTI-FLGANs: Multi-Distributed Adversarial Networks for Non-Independent and Identically Distributed Distribution
Authors: Akash Amalan, Rui Wang, Yanqi Qiao, Emmanouil Panaousis, Kaitai Liang
Abstract:
Federated learning is an emerging concept in the domain of distributed machine learning. This concept has enabled generative adversarial networks (GANs) to benefit from rich distributed training data while preserving privacy. However, in a non-IID setting, current federated GAN architectures are unstable, struggle to learn distinct features, and are vulnerable to mode collapse. In this paper, we propose an architecture, MULTI-FLGAN, to solve the problems of low-quality images, mode collapse, and instability for non-IID datasets. Our results show that MULTI-FLGAN is, on average over 20 clients, four times as stable and performant (i.e., achieves a high inception score) compared to the baseline FLGAN.
Keywords: federated learning, generative adversarial network, inference attack, non-IID data distribution
Procedia PDF Downloads 161
700 A Grey-Box Text Attack Framework Using Explainable AI
Authors: Esther Chiramal, Kelvin Soh Boon Kai
Abstract:
Explainable AI is a strong strategy implemented to understand complex black-box model predictions in a human-interpretable language. It provides the evidence required to support the use of trustworthy and reliable AI systems. On the other hand, however, it also opens the door to locating possible vulnerabilities in an AI model. Traditional adversarial text attacks use word substitution, data augmentation techniques, and gradient-based attacks on powerful pre-trained Bidirectional Encoder Representations from Transformers (BERT) variants to generate adversarial sentences. These attacks are generally white-box in nature and not practical, as they can be easily detected by humans, e.g., changing the word from “Poor” to “Rich”. We propose a simple yet effective grey-box cum black-box approach that does not require knowledge of the model while using a set of surrogate Transformer/BERT models to perform the attack using explainable AI techniques. As Transformers are the current state-of-the-art models for almost all natural language processing (NLP) tasks, an attack generated on BERT1 is transferable to BERT2. This transferability is made possible by the attention mechanism in the Transformer, which allows the model to capture long-range dependencies in a sequence. Using the power of BERT generalisation via attention, we attempt to exploit how Transformers learn by attacking a few surrogate Transformer variants that are each based on a different architecture. We demonstrate that this approach is highly effective at generating semantically sound sentences by changing as little as one word, in a way that is not detectable by humans, while still fooling other BERT models.
Keywords: BERT, explainable AI, Grey-box text attack, transformer
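A small sketch of the surrogate-driven idea: rank words by how much a surrogate classifier's score drops when each word is removed, then perturb only the top-ranked word. The scoring function here is a toy stand-in, not the BERT surrogates or explainability tooling used in the paper.

```python
def word_importance(sentence, surrogate_score):
    """surrogate_score: callable mapping a sentence string to a class probability."""
    words = sentence.split()
    base = surrogate_score(sentence)
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        # Score drop when word i is removed: a proxy for its influence on the prediction.
        scores.append((base - surrogate_score(reduced), words[i], i))
    return sorted(scores, reverse=True)  # most influential word first

# Toy scoring function that keys on a single word, for demonstration only.
toy = lambda s: 0.9 if "poor" in s.lower() else 0.2
print(word_importance("The service was poor and slow", toy)[0])
```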
Procedia PDF Downloads 138
699 Resilient Machine Learning in the Nuclear Industry: Crack Detection as a Case Study
Authors: Anita Khadka, Gregory Epiphaniou, Carsten Maple
Abstract:
There is a dramatic surge in the adoption of machine learning (ML) techniques in many areas, including the nuclear industry (such as fault diagnosis and fuel management in nuclear power plants), autonomous systems (including self-driving vehicles), space systems (space debris recovery, for example), medical surgery, network intrusion detection, and malware detection, to name a few. With the application of learning methods in such diverse domains, artificial intelligence (AI) has become a part of everyday modern human life. To date, the predominant focus has been on developing underpinning ML algorithms that can improve accuracy, while factors such as the resiliency and robustness of algorithms have been largely overlooked. If an adversarial attack is able to compromise the learning method or data, the consequences can be fatal, especially but not exclusively in safety-critical applications. In this paper, we present an in-depth analysis of five adversarial attacks and three defence methods on a crack detection ML model. Our analysis shows that it can be dangerous to adopt machine learning techniques in security-critical areas such as the nuclear industry without rigorous testing, since they may be vulnerable to adversarial attacks. While common defence methods can effectively defend against different attacks, none of the three considered can provide protection against all five adversarial attacks analysed.
Keywords: adversarial machine learning, attacks, defences, nuclear industry, crack detection
Procedia PDF Downloads 159
698 Comprehensive Review of Adversarial Machine Learning in PDF Malware
Authors: Preston Nabors, Nasseh Tabrizi
Abstract:
Portable Document Format (PDF) files have gained significant popularity for sharing and distributing documents due to their universal compatibility. However, the widespread use of PDF files has made them attractive targets for cybercriminals, who exploit vulnerabilities to deliver malware and compromise the security of end-user systems. This paper reviews notable contributions in PDF malware detection, including static, dynamic, signature-based, and hybrid analysis. It presents a comprehensive examination of PDF malware detection techniques, focusing on the emerging threat of adversarial sampling and the need for robust defense mechanisms. The paper highlights the vulnerability of machine learning classifiers to evasion attacks. It explores adversarial sampling techniques used in PDF malware detection to produce mimicry and reverse mimicry evasion attacks, which aim to bypass detection systems. Directions for future research are identified, including accessible methods, applying adversarial sampling techniques to malicious payloads, evaluating other models, evaluating the importance of features to malware, implementing adversarial defense techniques, and conducting comprehensive examinations across various scenarios. By addressing these opportunities, researchers can enhance PDF malware detection and develop more resilient defense mechanisms against adversarial attacks.
Keywords: adversarial attacks, adversarial defense, adversarial machine learning, intrusion detection, PDF malware, malware detection, malware detection evasion
Procedia PDF Downloads 40
697 A Reasoning Method of Cyber-Attack Attribution Based on Threat Intelligence
Authors: Li Qiang, Yang Ze-Ming, Liu Bao-Xu, Jiang Zheng-Wei
Abstract:
With the increasing complexity of cyberspace security, cyber-attack attribution has become an important challenge for security protection systems. The difficulty of cyber-attack attribution lies in the problems of handling huge volumes of data and of missing key data. In view of this situation, this paper presents a reasoning method for cyber-attack attribution based on threat intelligence. The method utilizes the intrusion kill chain model and a Bayesian network to build the attack chain and evidence chain of a cyber-attack on a threat intelligence platform through data calculation, analysis, and reasoning. We then used a number of cyber-attack events that we have observed and analyzed to test the reasoning method and a demo system; the test results indicate that the reasoning method can provide meaningful help in cyber-attack attribution.
Keywords: reasoning, Bayesian networks, cyber-attack attribution, kill chain, threat intelligence
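A toy sketch of the Bayesian-reasoning step along an evidence chain: the belief that a given actor is behind the attack is updated as each piece of kill-chain evidence arrives. The prior and likelihood values below are invented for illustration; in the paper they would come from the threat intelligence platform.

```python
def bayes_update(prior, likelihood_if_A, likelihood_if_not_A):
    """Posterior P(actor A | evidence) from a prior and the two likelihoods."""
    evidence = likelihood_if_A * prior + likelihood_if_not_A * (1 - prior)
    return likelihood_if_A * prior / evidence

p = 0.30  # prior belief that actor A is responsible
for lik_A, lik_other in [(0.8, 0.3),   # delivery infrastructure matches A
                         (0.7, 0.2),   # exploit/tooling matches A
                         (0.9, 0.4)]:  # command-and-control pattern matches A
    p = bayes_update(p, lik_A, lik_other)
print(f"posterior after the evidence chain: {p:.2f}")
```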
Procedia PDF Downloads 452
696 Studying Relationship between Local Geometry of Decision Boundary with Network Complexity for Robustness Analysis with Adversarial Perturbations
Authors: Tushar K. Routh
Abstract:
If inputs are engineered in certain ways, they can degrade deep neural networks' (DNNs') performance by inducing misclassifications, a phenomenon well known as adversarial attacks, which calls networks' robustness into question. Recent studies have explored the relationship between the vulnerability of such networks and their complexity. In this paper, the distinctive influence of additional convolutional layers on the decision boundaries of several DNN architectures was investigated. Here, to engineer inputs from widely known image datasets like MNIST, Fashion-MNIST, and CIFAR-10, we applied the One Step Spectral Attack (OSSA) and Fast Gradient Method (FGM) techniques. The effect of adding layers on the robustness of the architectures was then analyzed. For the analysis, the separation width from linear class partitions and the local geometry (curvature) near the decision boundary were examined. The results reveal that model complexity plays a significant role in adjusting relative distances from margins, as well as the local features of decision boundaries, which in turn impact robustness.
Keywords: DNN robustness, decision boundary, local curvature, network complexity
Procedia PDF Downloads 76
695 Generative AI: A Comparison of Conditional Tabular Generative Adversarial Networks and Conditional Tabular Generative Adversarial Networks with Gaussian Copula in Generating Synthetic Data with Synthetic Data Vault
Authors: Lakshmi Prayaga, Chandra Prayaga, Aaron Wade, Gopi Shankar Mallu, Harsha Satya Pola
Abstract:
Synthetic data generated by generative adversarial networks and autoencoders is becoming more common as a way to combat the problem of insufficient data for research purposes. However, generating synthetic data is a tedious task requiring an extensive mathematical and programming background. Open-source platforms such as the Synthetic Data Vault (SDV) and Mostly AI offer platforms that are user-friendly and accessible to non-technical professionals for generating synthetic data to augment existing data for further analysis. The SDV also provides additions to the generic GAN, such as the Gaussian copula. We present the results from two synthetic data sets (CTGAN data and CTGAN with Gaussian copula) generated by the SDV and report the findings. The results indicate that the ROC and AUC curves for the data generated by adding the Gaussian copula layer are much higher than for the data generated by CTGAN alone.
Keywords: synthetic data generation, generative adversarial networks, conditional tabular GAN, Gaussian copula
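A hedged sketch of how the two synthesizers compared above can be driven from the SDV. The class and method names assume the SDV 1.x single-table API (with CopulaGANSynthesizer standing in for "CTGAN with Gaussian copula"), so they should be checked against the installed SDV version; the toy DataFrame and epoch count are placeholders.

```python
import numpy as np
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import CTGANSynthesizer, CopulaGANSynthesizer

rng = np.random.default_rng(0)
real = pd.DataFrame({
    "age": rng.integers(20, 70, size=1000),
    "income": rng.normal(50_000, 12_000, size=1000).round(0),
    "defaulted": rng.integers(0, 2, size=1000),
})

metadata = SingleTableMetadata()
metadata.detect_from_dataframe(data=real)

for name, cls in [("CTGAN", CTGANSynthesizer),
                  ("CTGAN + Gaussian copula", CopulaGANSynthesizer)]:
    synth = cls(metadata, epochs=5)        # few epochs, illustration only
    synth.fit(real)
    synthetic = synth.sample(num_rows=500)
    print(name, synthetic.shape)
# The synthetic tables would then feed a downstream classifier whose ROC/AUC is compared.
```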
Procedia PDF Downloads 84
694 Mathematical Based Forecasting of Heart Attack
Authors: Razieh Khalafi
Abstract:
Myocardial infarction (MI) or acute myocardial infarction (AMI), commonly known as a heart attack, occurs when blood flow to part of the heart stops, causing damage to the heart muscle. An ECG can often show evidence of a previous heart attack or one that is in progress. The patterns on the ECG may indicate which part of the heart has been damaged, as well as the extent of the damage. In chaos theory, the correlation dimension is a measure of the dimensionality of the space occupied by a set of random points, often referred to as a type of fractal dimension. In this research, by considering the ECG signal as a random walk, we work on forecasting an oncoming heart attack by analyzing the ECG signals using the correlation dimension. In order to test the model, a set of ECG signals from patients before and after a heart attack was used, and the strength of the model in forecasting the behavior of these signals was checked. The results show that this methodology can forecast the ECG, and accordingly a heart attack, with high accuracy.
Keywords: heart attack, ECG, random walk, correlation dimension, forecasting
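A compact sketch of a correlation-dimension estimate (in the Grassberger-Procaccia style) applied to a scalar series treated as a random walk; the embedding dimension, delay, radii, and the synthetic stand-in for an ECG trace are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def correlation_dimension(signal, emb_dim=4, delay=2, radii=None):
    # Time-delay embedding of the scalar series into emb_dim-dimensional points.
    n = len(signal) - (emb_dim - 1) * delay
    points = np.column_stack([signal[i * delay:i * delay + n] for i in range(emb_dim)])
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    dists = dists[np.triu_indices(n, k=1)]
    radii = radii if radii is not None else np.percentile(dists, [5, 10, 20, 40])
    # Correlation sum C(r): fraction of point pairs closer than r.
    C = np.array([(dists < r).mean() for r in radii])
    # The correlation dimension is the slope of log C(r) versus log r.
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

ecg_like = np.cumsum(np.random.randn(1000))  # random-walk stand-in for an ECG trace
print(correlation_dimension(ecg_like))
```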
Procedia PDF Downloads 543
693 A New Mathematical Method for Heart Attack Forecasting
Authors: Razi Khalafi
Abstract:
Myocardial infarction (MI) or acute myocardial infarction (AMI), commonly known as a heart attack, occurs when blood flow to part of the heart stops, causing damage to the heart muscle. An ECG can often show evidence of a previous heart attack or one that is in progress. The patterns on the ECG may indicate which part of the heart has been damaged, as well as the extent of the damage. In chaos theory, the correlation dimension is a measure of the dimensionality of the space occupied by a set of random points, often referred to as a type of fractal dimension. In this research, by considering the ECG signal as a random walk, we work on forecasting an oncoming heart attack by analysing the ECG signals using the correlation dimension. In order to test the model, a set of ECG signals from patients before and after a heart attack was used, and the strength of the model in forecasting the behaviour of these signals was checked. The results show that this methodology can forecast the ECG, and accordingly a heart attack, with high accuracy.
Keywords: heart attack, ECG, random walk, correlation dimension, forecasting
Procedia PDF Downloads 507
692 Intelligent System for Diagnosis Heart Attack Using Neural Network
Authors: Oluwaponmile David Alao
Abstract:
Misdiagnosis has been a major problem in the health sector. Heart attack is one of the diseases with a high level of misdiagnosis recorded on the part of physicians. In this paper, an intelligent system has been developed for the diagnosis of heart attack in the health sector. A heart attack dataset obtained from the UCI repository has been used. This dataset is made up of thirteen attributes that are vital in the diagnosis of heart disease. The system is developed on a multilayer perceptron trained with the backpropagation neural network and then simulated with a feedforward neural network; a recognition rate of 87% was obtained, which is a good result for the diagnosis of heart attack in the medical field.
Keywords: heart attack, artificial neural network, diagnosis, intelligent system
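A hedged scikit-learn sketch of the pipeline described above, a multilayer perceptron trained with backpropagation on thirteen attributes; the random arrays merely stand in for the UCI heart-disease records, and the layer sizes and train/test split are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(303, 13)              # 303 records x 13 attributes (placeholder)
y = np.random.randint(0, 2, size=303)    # 1 = heart attack, 0 = healthy (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print(f"recognition rate: {model.score(X_test, y_test):.2%}")
```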
Procedia PDF Downloads 656
691 Adversarial Disentanglement Using Latent Classifier for Pose-Independent Representation
Authors: Hamed Alqahtani, Manolya Kavakli-Thorne
Abstract:
The large pose discrepancy is one of the critical challenges in face recognition during video surveillance. Due to the entanglement of pose attributes with identity information, conventional approaches to pose-independent representation fall short of providing quality results when recognizing largely posed faces. In this paper, we propose a practical approach to disentangle the pose attribute from the identity information, followed by synthesis of a face using a classifier network in latent space. The proposed approach employs a modified generative adversarial network framework consisting of an encoder-decoder structure embedded with a classifier in manifold space for carrying out factorization of the latent encoding. It can be further generalized to other face and non-face attributes for real-life video frames containing faces with significant attribute variations. Experimental results and comparison with the state of the art in the field show that the learned representation of the proposed approach synthesizes more compelling perceptual images through a combination of adversarial and classification losses.
Keywords: disentanglement, face detection, generative adversarial networks, video surveillance
Procedia PDF Downloads 130
690 Reliable and Energy-Aware Data Forwarding under Sink-Hole Attack in Wireless Sensor Networks
Authors: Ebrahim Alrashed
Abstract:
Wireless sensor networks are vulnerable to attacks from adversaries attempting to disrupt their operations. Sink-hole attacks are a type of attack in which an adversary node drops data forwarded through it, hence affecting the reliability and accuracy of the network. Since sensor nodes have limited battery power, it is essential that any solution to the sink-hole attack problem be very energy-aware. In this paper, we present a reliable and energy-efficient scheme to forward data from source nodes to the base station while under sink-hole attack. The scheme also detects sink-hole attack nodes and avoids paths that include them.
Keywords: energy-aware routing, reliability, sink-hole attack, WSN
Procedia PDF Downloads 398
689 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation; we considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
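A minimal sketch of the detection principle the abstract relies on: an autoencoder trained on benign images reconstructs them with low MSE, while strongly perturbed inputs produce a high reconstruction error. The tiny architecture, threshold, and random data are illustrative; the paper's multi-modal spectral/RGB design is not reproduced.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, dim=3 * 32 * 32, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x)).view_as(x)

def reconstruction_error(model, x):
    """Per-sample mean squared reconstruction error."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).flatten(1).mean(dim=1)

ae = TinyAutoencoder()                     # would be trained on benign images only
benign = torch.rand(8, 3, 32, 32)
adversarial = (benign + 0.3 * torch.randn_like(benign)).clamp(0, 1)
threshold = 0.02                           # assumed; calibrated on held-out benign data in practice
errors = reconstruction_error(ae, adversarial)
print((errors > threshold).tolist())       # True = flagged as likely adversarial
```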
Procedia PDF Downloads 114
688 An Attack on the Lucas Based El-Gamal Cryptosystem in the Elliptic Curve Group Over Finite Field Using Greater Common Divisor
Authors: Lee Feng Koo, Tze Jin Wong, Pang Hung Yiu, Nik Mohd Asri Nik Long
Abstract:
The greatest common divisor (GCD) attack is an attack that relies on the polynomial structure of the cryptosystem. This attack requires two plaintexts that differ by a fixed number and are encrypted under the same modulus. This paper reports on the security response of the Lucas-based El-Gamal cryptosystem in the elliptic curve group over a finite field under the GCD attack. The Lucas-based El-Gamal cryptosystem in the elliptic curve group over a finite field was exposed mathematically to the GCD attack using the GCD and the Dickson polynomial. The result shows that a cryptanalyst is able to obtain the plaintext without decryption by using the GCD attack. Thus, the study concludes that it is highly perilous when two plaintexts differ only by a fixed number in the same elliptic curve group over a finite field.
Keywords: decryption, encryption, elliptic curve, greatest common divisor
Procedia PDF Downloads 256
687 Cross Site Scripting (XSS) Attack and Automatic Detection Technology Research
Authors: Tao Feng, Wei-Wei Zhang, Chang-Ming Ding
Abstract:
Cross-site scripting (XSS) is one of the most popular web attack methods at present, and also one of the riskiest web attacks. Because of the popularity of JavaScript, the scope of cross-site scripting attacks has also gradually expanded. However, web application developers tend to focus only on functional testing and lack awareness of XSS, which has left many online web projects with XSS vulnerabilities. In this paper, various techniques of the XSS attack are analyzed, and a method to detect them automatically is proposed. It is easy to check the results of vulnerability detection when running it as a plug-in.
Keywords: XSS, no target attack platform, automatic detection, XSS detection
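One toy angle on automatic detection is scanning inputs or responses for common XSS payload patterns, as below; the patterns are illustrative and far from exhaustive, and a plug-in like the one described in the paper would also need to confirm whether injected scripts actually execute.

```python
import re

XSS_PATTERNS = [
    re.compile(r"<\s*script\b", re.IGNORECASE),
    re.compile(r"\bon\w+\s*=", re.IGNORECASE),   # inline handlers: onerror=, onload=, ...
    re.compile(r"javascript\s*:", re.IGNORECASE),
]

def looks_like_xss(value):
    """Return True if the string matches any common XSS payload pattern."""
    return any(p.search(value) for p in XSS_PATTERNS)

print(looks_like_xss('<img src=x onerror="alert(1)">'))   # True
print(looks_like_xss("plain search text"))                 # False
```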
Procedia PDF Downloads 405
686 Cryptographic Attack on Lucas Based Cryptosystems Using Chinese Remainder Theorem
Authors: Tze Jin Wong, Lee Feng Koo, Pang Hung Yiu
Abstract:
Lenstra’s attack uses the Chinese remainder theorem as a tool and requires a faulty signature to be successful. This paper reports on the security responses of the fourth- and sixth-order Lucas-based (LUC4,6) cryptosystem under Lenstra’s attack as compared to two other Lucas-based cryptosystems, the LUC and LUC3 cryptosystems. All the Lucas-based cryptosystems were exposed mathematically to Lenstra’s attack using the Chinese remainder theorem and the Dickson polynomial. The results show that the possibility of a successful Lenstra’s attack is lower against the LUC4,6 cryptosystem than against the LUC3 and LUC cryptosystems. The current study concludes that the LUC4,6 cryptosystem is more secure than the LUC and LUC3 cryptosystems in withstanding Lenstra’s attack.
Keywords: Lucas sequence, Dickson polynomial, faulty signature, corresponding signature, congruence
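For intuition about why a faulty signature is the key ingredient, the sketch below reproduces the classic RSA-CRT setting of Lenstra's attack (not the Lucas-based systems studied in the paper): a single fault in one CRT half lets the attacker factor the modulus with a gcd. The toy 16-bit primes and message are for illustration only.

```python
from math import gcd

p, q, e = 32749, 65521, 65537              # toy primes and public exponent
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))          # private exponent
m = 123456789 % n                          # message (already hashed/encoded)

def crt_sign(m, fault_in_q=False):
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault_in_q:                          # a computation fault corrupts the mod-q half
        sq = (sq + 1) % q
    # Recombine the two halves with the Chinese remainder theorem (Garner-style).
    return (sq + q * ((sp - sq) * pow(q, -1, p) % p)) % n

s_faulty = crt_sign(m, fault_in_q=True)
# The faulty signature is still correct mod p but wrong mod q, so s^e - m is
# divisible by p only, and the gcd with n exposes the secret factor p.
recovered = gcd((pow(s_faulty, e, n) - m) % n, n)
print(recovered == p)
```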
Procedia PDF Downloads 166
685 A Game of Information in Defense/Attack Strategies: Case of Poisson Attacks
Authors: Asma Ben Yaghlane, Mohamed Naceur Azaiez
Abstract:
In this paper, we briefly introduce the concept of Poisson attacks in the case of defense/attack strategies where attacks are assumed to be continuous. We suggest a game model in which the attacker combines two criteria, a sufficient confidence level of a successful attack and a reasonably small size of the estimation error, in order to launch an attack. Here, the estimation error arises from assessing the system failure upon attack using aggregate data at the system level; the corresponding error is referred to as the aggregation error. On the other hand, the defender will attempt to deter the attack by making one or both criteria inapplicable. The defender will build his/her strategy by both strengthening the targeted system and increasing the size of the error. We formulate the defender's problem based on appropriate optimization models. The attacker opts for Bayesian updating in assessing the impact of the improvement made by the defender. The attacker then evaluates the feasibility of the attack before deciding whether or not to launch it. We provide illustrations to better explain the process.
Keywords: attacker, defender, game theory, information
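A toy sketch of the attacker's confidence criterion under a Poisson assumption: if attack attempts arrive at rate lam and each succeeds with an estimated probability p_hat, successful attempts form a thinned Poisson process, and the attacker checks whether the probability of at least one success over the horizon reaches the required confidence level. All numbers are illustrative assumptions, and the aggregation-error criterion is not modelled here.

```python
import math

def attack_confidence(lam, p_hat, horizon):
    """P(at least one successful attack) for a thinned Poisson process of rate lam * p_hat."""
    return 1.0 - math.exp(-lam * p_hat * horizon)

lam, p_hat, horizon, required = 2.0, 0.15, 5.0, 0.90
conf = attack_confidence(lam, p_hat, horizon)
print(f"P(at least one success) = {conf:.2f}; launch = {conf >= required}")
```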
Procedia PDF Downloads 469
684 11-Round Impossible Differential Attack on Midori64
Authors: Zhan Chen, Wenquan Bi
Abstract:
This paper focuses on examining the strength of Midori against the impossible differential attack. The Midori family of lightweight block ciphers, oriented toward energy efficiency, was proposed at ASIACRYPT 2015. Using a 6-round property, the authors implement an 11-round impossible differential attack on Midori64 by extending two rounds on the top and three rounds on the bottom. There is enough key space to consider pre-whitening keys in this attack. An impossible differential path that minimises the key bits involved is used to reduce the computational complexity. Several additional observations, such as the partial abort technique, are used to further reduce the data and time complexities. This attack has a data complexity of 2⁶⁹·² chosen plaintexts and requires 2¹⁴·⁵⁸ blocks of memory and 2⁹⁴·⁷ 11-round Midori64 encryptions.
Keywords: cryptanalysis, impossible differential, lightweight block cipher, Midori
Procedia PDF Downloads 277
683 External Sulphate Attack: Advanced Testing and Performance Specifications
Authors: G. Massaad, E. Roziere, A. Loukili, L. Izoret
Abstract:
Based on the monitoring of mass, hydrostatic weighing, and the amount of leached OH⁻, we deduced the nature of the leached and precipitated minerals, the amount of lost aggregates, and the evolution of porosity and cracking during the sulphate attack. Using this information, we are able to determine the volume/mass changes brought about by mineralogical variations and cracking of the cement matrix. We then defined a new performance indicator, the averaged density, capable of summarizing, over the course of the sulphate attack test, the physicochemical variations that occurred in the cementitious matrix.
Keywords: monitoring strategy, performance indicator, sulphate attack, mechanism of degradation
Procedia PDF Downloads 323
682 The Liberal Tension of the Adversarial Criminal Procedure
Authors: Benjamin Newman
Abstract:
The picture of an adverse contest between two parties has often been used as an archetypal description of the Anglo-American adversarial criminal trial. In actuality, however, guilty pleas and plea bargains have dominated the procedure for over half a century. Characterised by two adverse parties, the court adjudicative system in the Anglo-American world adheres to the adversarial procedure, and while further features have been attributed to it and the values embedded within the procedure vary, it is a system for which we have no adequate theory. Damaska argued that the adversarial, conflict-resolution mode of administering justice stems from a liberal laissez-faire concept of a value-neutral liberal state. Having said that, the court’s neutrality has additionally been rationalised in light of its liberal end as a safeguard from the state’s coercive force. Both conceptions of the court’s neutrality conflict in cases where the by-standing role disposes of its liberal duty of safeguarding the individual. This is noticeable in plea bargains, where the defendant has the liberty to plead guilty despite concerns over wrongful convictions and deprivation of liberty. It is an inner liberal tension within the notion of criminal adversarialism, between the laissez-faire mode, which grants autonomy to the parties, and the safeguarding liberal end of the trial. Langbein asserted that the adversarial system is a criminal procedure for which we have no adequate theory, and it is by reference to political and moral theories that this research aims to articulate a normative account. The paper reflects on the above liberal tension and, by reference to Duff’s ‘calling-to-account’ theory, argues that autonomy is of inherent value to the criminal process, being considered a constitutive element in the process of being called to account. While the aspiration is that the defendant’s guilty plea should be genuine, the guilty-plea decision must be voluntary if it is to be considered a performative act of accountability. Thus, by valuing procedural autonomy as a necessary element within the criminal adjudicative process, the procedure assimilates a liberal character whilst maintaining the liberal end of holding the defendant to account.
Keywords: liberal theory, adversarial criminal procedure, criminal law theory, liberal perfectionism, political liberalism
Procedia PDF Downloads 92
681 A Survey on Countermeasures of Cache-Timing Attack on AES Systems
Authors: Settana M. Abdulh, Naila A. Sadalla, Yaseen H. Taha, Howaida Elshoush
Abstract:
Side channel attacks are based on side channel information, which is information leaked from encryption systems. This includes timing information, power consumption, and electromagnetic or even acoustic leakage, which can be exploited by an attacker. Implementing side channel attacks is possible if and only if an attacker has access to the cryptosystem. In that case, the attacker can exploit bad implementations in software or hardware that are not controlled by the encryption implementer and thus represents a real threat to the security system. Several countermeasures have been proposed to eliminate side channel information vulnerabilities. The cache timing attack is a special type of side channel attack, in which timing information is collected and analyzed by an attacker to guess sensitive information such as the encryption key or plaintext. This paper reviews the techniques applied in this attack and surveys the countermeasures against it, evaluating the feasibility and usability of each. Based on this evaluation, we finally give several recommendations about using these countermeasures.
Keywords: AES algorithm, side channel attack, cache timing attack, cache timing countermeasure
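As a toy illustration of one countermeasure principle surveyed in this line of work, the lookup below touches every table entry and selects the wanted value with masks, so the memory-access pattern no longer depends on the secret index; a real AES implementation would apply this idea (or a bitsliced design) to its S-box accesses, and the table here is only a placeholder.

```python
SBOX = list(range(256))  # placeholder table standing in for the AES S-box

def leaky_lookup(secret_index):
    return SBOX[secret_index]            # access pattern depends on the secret index

def constant_time_lookup(secret_index):
    result = 0
    for i, value in enumerate(SBOX):     # always reads the whole table
        mask = -(i == secret_index)      # all-ones when i matches the secret, else zero
        result |= value & mask
    return result

assert constant_time_lookup(0x3A) == leaky_lookup(0x3A)
```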
Procedia PDF Downloads 300