World Academy of Science, Engineering and Technology
[Computer and Information Engineering]
Online ISSN: 1307-6892
2932 Cyber Security Enhancement via Software Defined Pseudo-Random Private IP Address Hopping
Authors: Andre Slonopas, Zona Kostic, Warren Thompson
Abstract:
Obfuscation is one of the most useful tools for preventing network compromise. Previous research focused on obfuscating the network communications between external-facing edge devices. This work proposes the use of two edge devices, one external facing and one internal facing, which communicate via private IPv4 addresses in a software-defined pseudo-random IP hopping scheme. This methodology does not require additional IP addresses or resources to implement. Statistical analyses demonstrate that the hopping surface must be at least 1e3 IP addresses in size, with a broad standard deviation, to minimize the possibility of coincidence between monitored and communication IPs. Breaking the hopping algorithm requires a collection of at least 1e6 samples, which for large hopping surfaces will take years to collect. The probability of dropped packets is controlled via memory buffers and the frequency of hops and can be reduced to levels acceptable for video streaming. This methodology provides an impenetrable layer of security ideal for information systems and supervisory control and data acquisition (SCADA) systems.
Keywords: moving target defense, cybersecurity, network security, hopping randomization, software defined network, network security theory
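The abstract does not disclose the hopping algorithm itself, so the following is only a minimal Python sketch of the general idea: two edge devices that share a secret can derive the same pseudo-random private address for each time slot and hop in lockstep. The pre-shared secret, the slot length, and the use of the 10.0.0.0/8 pool are illustrative assumptions, not details from the paper.

```python
import hashlib
import hmac
import time

# Assumed values for illustration; the paper does not publish its parameters.
SHARED_SECRET = b"pre-shared key distributed out of band"
SLOT_SECONDS = 5            # how often both edge devices hop
POOL_SIZE = 2 ** 24         # full 10.0.0.0/8 host space; paper requires >= 1e3 addresses

def hop_address(secret: bytes, slot: int, pool_size: int = POOL_SIZE) -> str:
    """Derive the private IPv4 address used during a given time slot."""
    digest = hmac.new(secret, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    index = int.from_bytes(digest[:4], "big") % pool_size
    return f"10.{(index >> 16) & 0xFF}.{(index >> 8) & 0xFF}.{index & 0xFF}"

# Both devices compute the same slot number from synchronized clocks,
# so they agree on the current and next communication addresses.
current_slot = int(time.time()) // SLOT_SECONDS
print(hop_address(SHARED_SECRET, current_slot))      # address in use now
print(hop_address(SHARED_SECRET, current_slot + 1))  # address after the next hop
```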
Procedia PDF Downloads 185
2931 Stock Movement Prediction Using Price Factor and Deep Learning
Abstract:
The development of machine learning methods and techniques has opened doors for investigation in many areas such as medicine, economics, and finance. One active research area involving machine learning is stock market prediction. This paper considers multiple techniques and methods for stock movement prediction using historical prices or price factors. It explores the effectiveness of several deep learning frameworks for forecasting stocks. Moreover, an architecture (TimeStock) is proposed which takes the representation of time into account in addition to the price information itself. Our model achieves promising results, indicating a potential approach to the stock movement prediction problem.
Keywords: classification, machine learning, time representation, stock prediction
Procedia PDF Downloads 147
2930 Predicting Student Performance Based on Coding Behavior in STEAMplug
Authors: Giovanni Gonzalez Araujo, Michael Kyrilov, Angelo Kyrilov
Abstract:
STEAMplug is an innovative web-based educational platform which makes teaching easier and learning more effective. It requires no setup, eliminating barriers to entry and allowing students to focus on their learning through real-world development environments. The student-centric tools enable easy collaboration between peers and teachers. Analyzing user interactions with the system enables us to predict student performance and identify at-risk students, allowing early instructor intervention.
Keywords: plagiarism detection, identifying at-risk students, education technology, e-learning system, collaborative development, learning and teaching with technology
Procedia PDF Downloads 151
2929 Monocular Visual Odometry for Three Different View Angles by Intel Realsense T265 with the Measurement of Remote
Authors: Heru Syah Putra, Aji Tri Pamungkas Nurcahyo, Chuang-Jan Chang
Abstract:
The MOIL-SDK method refers to the spatial angle that forms a view with a different perspective from the fisheye image. Visual odometry is a trusted application for extending projects by tracking with image sequences. We present a real-time, precise, and persistent approach that contributes to this work by taking datasets and generating ground truth as a reference for the estimates of each image. Keypoints are found with the FAST algorithm and evaluated during the tracking process with the 5-point algorithm with RANSAC, producing accurate estimates of the camera trajectory for rotational and translational movement on the X, Y, and Z axes.
Keywords: MOIL-SDK, intel realsense T265, Fisheye image, monocular visual odometry
Procedia PDF Downloads 134
2928 Image Instance Segmentation Using Modified Mask R-CNN
Authors: Avatharam Ganivada, Krishna Shah
Abstract:
The Mask R-CNN was recently introduced by the Facebook AI Research (FAIR) team and is mainly concerned with instance segmentation in images. The Mask R-CNN is based on ResNet and a feature pyramid network (FPN), where a single dropout method is employed. This paper provides a modified Mask R-CNN that adds multiple dropout methods to the Mask R-CNN. The proposed model also utilizes the concepts of ResNet and FPN to extract stage-wise network feature maps, wherein a top-down network path with lateral connections is used to obtain semantically strong features. The proposed model produces three outputs for each object in the image: class label, bounding box coordinates, and object mask. The performance of the proposed network is evaluated on the segmentation of every instance in images using the COCO and Cityscapes datasets. The proposed model achieves better performance than the state-of-the-art networks on these datasets.
Keywords: instance segmentation, object detection, convolutional neural networks, deep learning, computer vision
Procedia PDF Downloads 73
2927 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement
Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti
Abstract:
Data centers have been growing in size and demand continuously over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning. Data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted, with little thought, is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories. Then we establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing
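The abstract does not name a specific quantile-regression implementation, so the sketch below is only illustrative: scikit-learn's gradient boosting with a quantile loss is fitted to a synthetic CPU-utilisation series with simple lag features. Forecasting an upper quantile (here the 95th percentile) gives the provisioning headroom needed to honour the SLA without blanket over-provisioning.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical utilisation history (%); a real deployment would use monitored traces.
rng = np.random.default_rng(0)
usage = 40 + 10 * np.sin(np.arange(500) / 24) + rng.normal(0, 5, 500)
X = np.column_stack([usage[:-3], usage[1:-2], usage[2:-1]])  # three lag features
y = usage[3:]                                                # next-step usage

# Quantile regression: one model for the median, one for the 95th percentile.
q50 = GradientBoostingRegressor(loss="quantile", alpha=0.50).fit(X, y)
q95 = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

latest = X[-1:]
print("median forecast:", q50.predict(latest)[0])
print("95th-percentile forecast (provision to this):", q95.predict(latest)[0])
```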
Procedia PDF Downloads 108
2926 An Ensemble-based Method for Vehicle Color Recognition
Authors: Saeedeh Barzegar Khalilsaraei, Manoocheher Kelarestaghi, Farshad Eshghi
Abstract:
The vehicle color, as a prominent and stable feature, helps to identify a vehicle more accurately. As a result, vehicle color recognition is of great importance in intelligent transportation systems. Unlike conventional methods, which use only a single Convolutional Neural Network (CNN) for feature extraction or classification, in this paper four CNNs with different architectures, each performing well on different classes, are trained to extract various features from the input image. To take advantage of the distinct capability of each network, the multiple outputs are combined using a stacked generalization algorithm as an ensemble technique. Consequently, the final model performs better than each CNN individually in vehicle color identification. The evaluation results, in terms of overall average accuracy and accuracy variance, show the proposed method's outperformance of its state-of-the-art rivals.
Keywords: Vehicle Color Recognition, Ensemble Algorithm, Stack Generalization, Convolutional Neural Network
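As a rough illustration of the stacking step only, the sketch below assumes the four CNNs are already trained and simulates their per-image class probabilities; a logistic-regression meta-classifier is then fitted on the concatenated outputs. In practice the meta-learner should be trained on out-of-fold base-model predictions rather than on the same data used to fit the base models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_samples, n_classes, n_cnns = 600, 8, 4          # e.g. eight vehicle colours
y_true = rng.integers(0, n_classes, n_samples)

def fake_cnn_probs(accuracy: float) -> np.ndarray:
    """Simulate one trained CNN's softmax output with a given rough accuracy."""
    probs = rng.dirichlet(np.ones(n_classes), n_samples)
    hit = rng.random(n_samples) < accuracy
    probs[hit] = 0.05
    probs[hit, y_true[hit]] = 1.0
    return probs / probs.sum(axis=1, keepdims=True)

cnn_outputs = [fake_cnn_probs(a) for a in (0.80, 0.78, 0.75, 0.72)]

# Stacked generalization: concatenate the base-model probabilities into one
# feature vector per image and train a meta-classifier on top of them.
meta_X = np.hstack(cnn_outputs)                    # shape (600, 4 * 8)
meta_clf = LogisticRegression(max_iter=1000).fit(meta_X, y_true)
print("ensemble accuracy on the stacking data:", meta_clf.score(meta_X, y_true))
```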
Procedia PDF Downloads 85
2925 Anthropomorphic Interfaces For User Trust in a Highly Automated Driving
Authors: Clarisse Lawson-Guidigbe, Nicolas Louveton, Kahina Amokrane-Ferka, Jean-Marc Andre
Abstract:
Trust in automated driving systems is receiving growing attention in the research community. Anthropomorphism has been identified by past research as a trust-building factor. In this paper, we consider three anthropomorphic interfaces integrating three versions of a virtual assistant and attempt to measure the impact of each of these interfaces on trust in the automated driving system. An experiment following a between-subject design was conducted in a driving simulator (N = 36) to evaluate participants' performance and experience in two handover situations (a simple one and a critical one). Perception of anthropomorphism and trust was measured using scales, while participants' experience was captured during elicitation interviews. We found no significant difference between the three interfaces regarding the perception of anthropomorphism, trust levels, or experience. However, regarding participants' performance, we found a significant difference between the three interfaces in the simple handover situation but not in the critical one. Learnings from the anthropomorphism and trust measurement scales are discussed, and suggestions for further research are proposed.
Keywords: highly automated driving, trust, anthropomorphic design, mindful anthropomorphism, mindless anthropomorphism
Procedia PDF Downloads 147
2924 Multi-Spectral Deep Learning Models for Forest Fire Detection
Authors: Smitha Haridasan, Zelalem Demissie, Atri Dutta, Ajita Rattani
Abstract:
Aided by the wind, all it takes is one ember and a few minutes to create a wildfire. Wildfires are growing in frequency and size due to climate change, and wildfires and their consequences are one of the major environmental concerns. Every year, millions of hectares of forests are destroyed around the world, causing mass destruction and human casualties. Thus, early detection of wildfires becomes a critical component of mitigating this threat. Many computer vision-based techniques have been proposed for the early detection of forest fires using video surveillance. Several computer vision-based methods have been proposed to predict and detect forest fires in various spectrums, namely RGB, HSV, and YCbCr. The aim of this paper is to propose a multi-spectral deep learning model that combines information from different spectrums at intermediate layers for accurate fire detection. A heterogeneous dataset assembled from publicly available datasets is used for model training and evaluation in this study. The experimental results show that multi-spectral deep learning models can obtain an improvement of about 4.68% over those based on a single spectrum for fire detection.
Keywords: deep learning, forest fire detection, multi-spectral learning, natural hazard detection
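The paper's exact architecture is not given in the abstract; the PyTorch sketch below only shows one plausible shape for intermediate-layer fusion, with one small convolutional branch per colour space (RGB, HSV, YCbCr) whose feature maps are concatenated before a shared classification head. Branch depths, channel counts, and the two-class head are assumptions.

```python
import torch
import torch.nn as nn

class SpectrumBranch(nn.Module):
    """Small convolutional stem applied to one colour space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.net(x)

class MultiSpectralFireNet(nn.Module):
    """Fuses per-spectrum feature maps at an intermediate layer, then classifies."""
    def __init__(self):
        super().__init__()
        self.rgb, self.hsv, self.ycbcr = SpectrumBranch(), SpectrumBranch(), SpectrumBranch()
        self.head = nn.Sequential(
            nn.Conv2d(96, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2),                     # fire / no-fire
        )

    def forward(self, rgb, hsv, ycbcr):
        fused = torch.cat([self.rgb(rgb), self.hsv(hsv), self.ycbcr(ycbcr)], dim=1)
        return self.head(fused)

model = MultiSpectralFireNet()
dummy = torch.randn(4, 3, 64, 64)            # stand-in for colour-converted frames
print(model(dummy, dummy, dummy).shape)       # torch.Size([4, 2])
```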
Procedia PDF Downloads 241
2923 ACBM: Attention-Based CNN and Bi-LSTM Model for Continuous Identity Authentication
Authors: Rui Mao, Heming Ji, Xiaoyu Wang
Abstract:
Keystroke dynamics are widely used in identity recognition. They have the advantage that an individual's typing rhythm is difficult to imitate, and they support continuous authentication through the keyboard without extra devices. Existing keystroke dynamics authentication methods based on machine learning have a drawback in supporting relatively complex scenarios with massive data: there are drawbacks to both their feature extraction and their model optimization. To overcome these weaknesses, an authentication model of keystroke dynamics based on deep learning is proposed. The model uses feature vectors formed from keystroke content and keystroke timing. It ensures efficient continuous authentication by combining attention mechanisms with a CNN and a Bi-LSTM. The model has been tested on the Open Data Buffalo dataset, and the results show an FRR of 3.09%, an FAR of 3.03%, and an EER of 4.23%. This demonstrates that the model is efficient and accurate for continuous authentication.
Keywords: keystroke dynamics, identity authentication, deep learning, CNN, LSTM
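As a hedged sketch of the kind of architecture the abstract describes (not the authors' exact model), the PyTorch code below chains a 1-D convolution over keystroke feature sequences, a bidirectional LSTM, and an additive attention pooling layer before a per-user classifier. The feature dimension, layer sizes, sequence length, and number of users are assumptions.

```python
import torch
import torch.nn as nn

class ACBMSketch(nn.Module):
    """CNN + Bi-LSTM + attention pooling over keystroke feature sequences."""
    def __init__(self, feat_dim=4, conv_ch=32, hidden=64, n_users=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(feat_dim, conv_ch, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.bilstm = nn.LSTM(conv_ch, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, n_users)

    def forward(self, x):                                   # x: (batch, seq, feat_dim)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)    # (batch, seq, conv_ch)
        h, _ = self.bilstm(h)                               # (batch, seq, 2 * hidden)
        weights = torch.softmax(self.attn(h), dim=1)        # attention over time steps
        pooled = (weights * h).sum(dim=1)                   # weighted temporal pooling
        return self.classifier(pooled)

# Each timestep could carry hold-time, flight-time, and key-code derived features.
batch = torch.randn(8, 50, 4)
print(ACBMSketch()(batch).shape)                            # torch.Size([8, 10])
```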
Procedia PDF Downloads 155
2922 Improving Grade Control Turnaround Times with In-Pit Hyperspectral Assaying
Authors: Gary Pattemore, Michael Edgar, Andrew Job, Marina Auad, Kathryn Job
Abstract:
As critical commodities become more scarce, significant time and resources have been spent on better understanding complicated ore bodies and extracting their full potential. These challenging ore bodies present several pain points for geologists and engineers to overcome; poor handling of these issues flows downstream to the processing plant, affecting throughput rates and recovery. Many open cut mines utilise blast hole drilling to extract additional information to feed back into the modelling process. This method requires samples to be collected during or after blast hole drilling. Samples are then sent for assay, with turnaround times varying from 1 to 12 days. This method is time consuming, costly, requires human exposure on the bench, and collects elemental data only. To address this challenge, research has been undertaken to utilise hyperspectral imaging across a broad spectrum to scan samples and collars, or to take down-hole measurements, for mineralogy, moisture content, and grade abundances. Automation of this process using unmanned vehicles and on-board processing reduces human in-pit exposure to ensure ongoing safety. On-board processing allows data to be integrated into modelling workflows with immediacy. The preliminary results demonstrate numerous direct and indirect benefits from this new technology, including rapid and accurate estimates of grade, moisture content, and mineralogy. These benefits allow for faster geo-modelling updates, better informed mine scheduling, and improved downstream blending and processing practices. The paper presents recommendations for implementation of the technology in open cut mining environments.
Keywords: grade control, hyperspectral scanning, artificial intelligence, autonomous mining, machine learning
Procedia PDF Downloads 113
2921 Audio-Visual Recognition Based on Effective Model and Distillation
Authors: Heng Yang, Tao Luo, Yakun Zhang, Kai Wang, Wei Qin, Liang Xie, Ye Yan, Erwei Yin
Abstract:
In recent years, audio-visual recognition has shown great potential in strong-noise environments. Existing audio-visual recognition methods have explored ResNet and feature fusion. However, on the one hand, ResNet occupies a large amount of memory resources, restricting its application in engineering. On the other hand, feature merging also introduces interference in high-noise environments. To solve these problems, we propose an effective framework with bidirectional distillation. First, in consideration of its good feature-extraction performance, we chose the lightweight EfficientNet model as our spatial feature extractor. Second, self-distillation was applied to learn more information from the raw data. Finally, we propose a bidirectional distillation in decision-level fusion. Our experimental results are based on a multimodal dataset from 24 volunteers. The lipreading accuracy of our framework increased by 2.3% compared with existing systems, and our framework made progress in audio-visual fusion in high-noise environments compared with an audio-only recognition system.
Keywords: lipreading, audio-visual, Efficientnet, distillation
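The abstract does not spell out the distillation objective; a common formulation, shown below as an assumed sketch, is Hinton-style soft-label KL distillation applied in both directions between the audio and visual decision-level logits, on top of each branch's own cross-entropy loss. The class count, temperature, and detached-teacher treatment are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label KL distillation term; T is the softmax temperature."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

# Toy decision-level logits from the audio and visual branches for one batch.
audio_logits = torch.randn(16, 500, requires_grad=True)    # e.g. 500 word classes
visual_logits = torch.randn(16, 500, requires_grad=True)
labels = torch.randint(0, 500, (16,))

# Bidirectional: each branch learns from the other's softened predictions
# in addition to the ground-truth labels.
loss = (
    F.cross_entropy(audio_logits, labels)
    + F.cross_entropy(visual_logits, labels)
    + distillation_loss(audio_logits, visual_logits.detach())
    + distillation_loss(visual_logits, audio_logits.detach())
)
loss.backward()
print(float(loss))
```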
Procedia PDF Downloads 134
2920 Review on Rainfall Prediction Using Machine Learning Technique
Authors: Prachi Desai, Ankita Gandhi, Mitali Acharya
Abstract:
Rainfall forecasting is mainly used to predict rainfall in a specified area and to determine its future rainfall conditions. Rainfall is always a global issue, as it affects all major aspects of life. Agriculture, fisheries, forestry, the tourism industry, and other industries are widely affected by these conditions. Studies have projected insufficient availability of water resources and an increase in water demand in the near future. We already have a forecast system that uses a deep Convolutional Neural Network (CNN) to forecast monthly rainfall and climate changes, and we have compared the CNN against Artificial Neural Networks (ANN). Machine learning techniques used in rainfall prediction include the ARIMA model, ANN, LR, SVM, etc. The dataset on which we experiment was gathered online and covers the years 1901 to 2018. Test results have suggested more realistic improvements than conventional rainfall forecasts.
Keywords: ANN, CNN, supervised learning, machine learning, deep learning
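As a small illustration of one of the classical baselines named above, the sketch below fits an ARIMA model to a synthetic monthly rainfall series using statsmodels; the series, the model order, and the 12-month forecast horizon are placeholders rather than the review's data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly rainfall (mm); a real study would load the 1901-2018 records.
rng = np.random.default_rng(0)
months = np.arange(12 * 30)
rainfall = 80 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 10, months.size)

fit = ARIMA(rainfall, order=(2, 0, 1)).fit()    # classical ARIMA baseline
print(fit.forecast(steps=12))                   # forecast for the next twelve months
```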
Procedia PDF Downloads 202
2919 WebAppShield: An Approach Exploiting Machine Learning to Detect SQLi Attacks in an Application Layer in Run-time
Authors: Ahmed Abdulla Ashlam, Atta Badii, Frederic Stahl
Abstract:
In recent years, SQL injection attacks have been identified as being prevalent against web applications. They affect network security and user data, leading to a considerable loss of money and data every year. This paper presents the use of classification algorithms in machine learning, using a method that classifies login inputs as "SQLi" or "Non-SQLi", thus increasing the reliability and accuracy of results in terms of deciding whether an operation is an attack or a valid operation. A method of web-app auto-generated twin data structure replication for shielding against SQLi attacks (WebAppShield) has been developed; it verifies all users and prevents attackers (SQLi attacks) from entering or accessing the database, admitting only the inputs that the machine learning module predicts as "Non-SQLi". A special login form has been developed with a dedicated instance of data validation; this verification process secures the web application from its early stages. The system has been tested and validated, and up to 99% of SQLi attacks have been prevented.
Keywords: SQL injection, attacks, web application, accuracy, database
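The abstract does not say which classifier or features WebAppShield uses, so the following is only a minimal sketch of the general idea: TF-IDF character n-grams over login inputs feeding a logistic-regression classifier, trained here on a tiny hand-made sample rather than a real labelled corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; a real system needs a large labelled corpus.
inputs = [
    "alice", "bob_smith", "carol@example.com", "password123",
    "' OR '1'='1", "admin'--", "1; DROP TABLE users",
    "' UNION SELECT username, password FROM users--",
]
labels = ["Non-SQLi"] * 4 + ["SQLi"] * 4

# Character n-grams capture the punctuation patterns typical of injection payloads.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
clf.fit(inputs, labels)

for candidate in ["dave_jones", "' OR 1=1--"]:
    print(candidate, "->", clf.predict([candidate])[0])
```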
Procedia PDF Downloads 151
2918 Cyber-Softbook: A Platform for Collaborative Content Development and Delivery for Cybersecurity Education
Authors: Eniye Tebekaemi, Martin Zhao
Abstract:
The gap between the skill sets of newly minted college graduates and the skills required by cybersecurity employers is widening. Colleges are struggling to cope with the rapid pace of technology evolution using outdated tools and practices. Industries are getting frustrated by the need to retrain fresh college graduates in skills they should already have acquired. There is a dire need for academic institutions to develop new tools and systems to deliver cybersecurity education that meets the ever-evolving technology demands of the industry. The Cyber-Softbook project's goal is to bridge the gap between the tech industry and tech education by providing educators a framework to collaboratively design, manage, and deliver cybersecurity academic courses that meet the needs of the tech industry. The Cyber-Softbook framework, when developed, will provide a platform for academic institutions and tech industries to collaborate on tech education and for students to learn about cybersecurity with all the resources they need to understand concepts and gain valuable skills available on a single platform.
Keywords: cybersecurity, education, skills, labs, curriculum
Procedia PDF Downloads 92
2917 Modeling and Stability Analysis of Viral Propagation in Wireless Mesh Networking
Authors: Haowei Chen, Kaiqi Xiong
Abstract:
This paper aims to answer how malware will propagate in Wireless Mesh Networks (WMNs) and how the communication radius and the distribution density of nodes affect the spreading process. This analysis is essential for devising network-wide strategies to counter malware. We answer these questions by developing an improved dynamical system that models malware propagation in an area where nodes are uniformly distributed. The proposed model captures both the spatial and temporal dynamics of the malware spreading process. Equilibrium and stability are also discussed based on the threshold of the system. If the threshold is less than one, the infected nodes disappear; if the threshold is greater than one, the infected nodes asymptotically stabilize at the endemic equilibrium. Numerical simulations investigate the effects of communication radius and node distribution density in WMNs, allowing us to draw various insights that can be used to guide security defense.
Keywords: Bluetooth security, malware propagation, wireless mesh networks, stability analysis
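The authors' improved dynamical system is not reproduced in the abstract; the sketch below uses a standard SIR-style compartment model as a stand-in to show how the threshold behaviour (die-out below one, endemic persistence above one) can be simulated. The rates are illustrative constants rather than values derived from communication radius and node density.

```python
import numpy as np
from scipy.integrate import odeint

N = 1000                 # nodes uniformly distributed in the area
beta, gamma = 0.35, 0.1  # assumed infection and recovery (patching) rates
R0 = beta / gamma        # threshold: the infection dies out if R0 < 1

def dynamics(state, t):
    S, I, R = state
    new_infections = beta * S * I / N
    return [-new_infections, new_infections - gamma * I, gamma * I]

t = np.linspace(0, 200, 400)
S, I, R = odeint(dynamics, [N - 1, 1, 0], t).T
print(f"threshold R0 = {R0:.2f}, peak infected nodes = {I.max():.0f}")
```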
Procedia PDF Downloads 98
2916 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review
Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha
Abstract:
Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. Natural language processing techniques are now commonly applied to text classification and unbiased decision-making, yet proper classification of this textual information in a given context remains difficult. As a result, we conducted a systematic review of previous literature on sentiment classification and the AI-based techniques that have been used, in order to better understand how to design and develop a robust and more accurate sentiment classifier that can correctly classify social media text of a given context as hate speech or inverted compliments with a high level of accuracy. We evaluated over 250 articles from digital sources such as ScienceDirect, ACM, Google Scholar, and IEEE Xplore and whittled the number of studies down to 31. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources such as Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques such as CNN+LSTM, CNN+GRU, and CNN+BERT outperformed single deep learning techniques and machine learning techniques. The Python programming language outperformed Java for sentiment analyzer development due to its simplicity and AI-oriented library support. Based on the important findings of this study, we make recommendations for future research.
Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text
Procedia PDF Downloads 115
2915 Ontology for Cross-Site-Scripting (XSS) Attack in Cybersecurity
Authors: Jean Rosemond Dora, Karol Nemoga
Abstract:
In this work, we tackle a problem that frequently occurs in the cybersecurity field: the exploitation of websites by XSS attacks, which are nowadays considered complicated attacks. These types of attacks aim to execute malicious scripts in the client's web browser by including code in a legitimate web page. A serious issue arises when a website accepts user input. Attackers can exploit the web application (if vulnerable) and then steal sensitive data (session cookies, passwords, credit cards, etc.) from the server and/or from the client. However, the difficulty of exploitation varies from website to website. Our focus is on the usage of ontology in cybersecurity against XSS attacks, on the importance of the ontology, and on its core meaning for cybersecurity. We explain how a vulnerable website can be exploited and how different JavaScript payloads can be used to detect vulnerabilities. We also enumerate some tools for efficient analysis. We present detailed reasoning on what can be done to improve the security of a website in order to resist attacks, and we provide supporting examples. Then, we apply an ontology model against XSS attacks to strengthen the protection of a web application. However, we note that the existence of an ontology does not improve security by itself; it has to be properly used, and as many security layers as possible should be taken into account.
Keywords: cybersecurity, web application vulnerabilities, cyber threats, ontology model
Procedia PDF Downloads 172
2914 Analyse of User Interface Design in Mobile Teaching Apps
Authors: Asma Ashoul
Abstract:
Nowadays, smartphones play a major role in our lives, whether for communicating with family and friends or for learning different things. Using smartphones to learn and teach is now common in places like schools and colleges. Therefore, developing an app that teaches the Arabic language may help some groups in society learn a second language; for example, children under the age of five or older would learn quickly by using smartphones. The problem addressed concerns the Arabic language, which is at risk of falling out of use. The developer set out to build an app that would help the younger generation learn the Arabic language. Research on user interface design was completed to help the developer choose appropriate layouts and designs. Developing the artefact involved different stages: first, analyzing the requirements with the client for what needed to be developed; secondly, designing the user interface based on the literature review; thirdly, developing and testing the application once it was completed, bringing together all the tools that had been used; and lastly, evaluation and future recommendations, comprising an overall view of the application followed by the client's feedback. Requirements were gathered in client meetings centred on the interface design. The project followed an agile development methodology, which helped the developer finish the work on time.
Keywords: developer, application, interface design, layout, Agile, client
Procedia PDF Downloads 116
2913 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks
Authors: Van Trieu, Shouhuai Xu, Yusheng Feng
Abstract:
Tracking attack trajectories can be difficult when there is limited information about the nature of the attack. It is even more difficult when attack information is collected by Intrusion Detection Systems (IDSs), since current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events; they do not show how the events relate to each other or which event possibly causes another event to happen. Because of this, it is important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the most probable attack events that can cause other events to occur in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, and it would cost expert human analysts significant time, if it were possible at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends. In fact, more than 85% of causal pairs have an average time difference between the causal and effect events, in both computed and observed data, within 5 minutes. This result can be used as a preventive measure against future attacks. Although the forecast window may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks.
Keywords: causality, multilevel graph, cyber-attacks, prediction
Procedia PDF Downloads 156
2912 The Relevance of Smart Technologies in Learning
Authors: Rachael Olubukola Afolabi
Abstract:
Immersive technologies, known as X Reality or Cross Reality (XR), which include virtual reality, augmented reality, and mixed reality, have pervaded the education system at different levels, from elementary school to adult learning. Instructors, instructional designers, and learning experience specialists continue to find new ways to engage students in the learning process using technology. While the progression of web technologies has enhanced digital learning experiences, analytics on learning outcomes continue to be explored to determine the relevance of these technologies in learning. Digital learning has evolved from Web 1.0 (static) to 4.0 (dynamic and interactive), and this evolution of technologies has also advanced teaching methods and approaches. This paper explores how these technologies are being utilized in learning and the results that educators and learners have identified as effective learning opportunities and approaches.
Keywords: immersive technologies, virtual reality, augmented reality, technology in learning
Procedia PDF Downloads 145
2911 Parameter Selection for Computationally Efficient Use of the Bfvrns Fully Homomorphic Encryption Scheme
Authors: Cavidan Yakupoglu, Kurt Rohloff
Abstract:
In this study, we aim to provide a novel parameter selection model for the BFVrns scheme, which is one of the prominent FHE schemes. Parameter selection in lattice-based FHE schemes is a practical challenge for experts and non-experts alike. Towards a solution to this problem, we introduce a hybrid principles-based approach that combines theoretical and experimental analyses. To begin, we use regression analysis to examine the effect of the parameters on performance and security. The fact that the FHE parameters induce different behaviors in performance, security, and the Ciphertext Expansion Factor (CEF) makes the process of parameter selection more challenging. To address this issue, we use a multi-objective optimization algorithm to select the optimum parameter set for performance, CEF, and security at the same time. As a result of this optimization, we obtain an improved parameter set with better performance at a given security level, ensuring correctness and security against lattice attacks by providing at least 128-bit security. Our result enables, on average, an approximately 5x smaller CEF and mostly better performance in comparison to the parameter sets given in [1]. This approach can be considered a semi-automated parameter selection. These studies are conducted using the PALISADE homomorphic encryption library, which is a well-known HE library.
Keywords: lattice cryptography, fully homomorphic encryption, parameter selection, LWE, RLWE
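To make the multi-objective step concrete, the sketch below screens a handful of hypothetical BFVrns parameter candidates: it discards any set below 128-bit security and keeps the Pareto front over runtime and CEF. The candidate rows are synthetic placeholders, not measurements from the paper or from PALISADE.

```python
# (ring dimension, log2 q, runtime in ms, ciphertext expansion factor, security bits)
candidates = [
    (8192,  180, 12.0, 40.0, 192),
    (8192,  218, 14.5, 48.0, 128),
    (16384, 360, 31.0, 36.0, 150),
    (16384, 438, 38.0, 44.0, 128),
    (32768, 700, 85.0, 30.0, 135),
]

secure = [c for c in candidates if c[4] >= 128]            # hard security constraint

def dominated(a, b):
    """True if b is at least as good as a on runtime and CEF, and strictly better on one."""
    return b[2] <= a[2] and b[3] <= a[3] and (b[2] < a[2] or b[3] < a[3])

pareto = [a for a in secure if not any(dominated(a, b) for b in secure if b is not a)]
for n, logq, ms, cef, sec in pareto:
    print(f"n={n:5d}  log2(q)={logq:3d}  runtime={ms:5.1f} ms  CEF={cef:4.1f}  {sec}-bit")
```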
Procedia PDF Downloads 157
2910 A Named Data Networking Stack for Contiki-NG-OS
Authors: Sedat Bilgili, Alper K. Demir
Abstract:
The Internet has become dominant in our lives, with continuing growth in home, medical, health, smart city, and industrial automation applications. The Internet of Things (IoT) is an emerging technology that enables such applications. Moreover, Named Data Networking (NDN) is emerging as a Future Internet architecture that fits the communication needs of IoT networks. The aim of this study is to provide an NDN protocol stack implementation running on the Contiki operating system (OS), an OS developed for constrained IoT devices. In this study, an NDN protocol stack that works on top of the IEEE 802.15.4 link and physical layers has been developed and presented.
Keywords: internet of things (IoT), named-data, named data networking (NDN), operating system
Procedia PDF Downloads 171
2909 Cosmetic Recommendation Approach Using Machine Learning
Authors: Shakila N. Senarath, Dinesh Asanka, Janaka Wijayanayake
Abstract:
The demand for cosmetic products is rising to fulfill consumer needs of personal appearance and hygiene. A cosmetic product consists of various chemical ingredients, which may help keep the skin healthy or may cause damage. Not every chemical ingredient in a cosmetic product performs the same on every person. The most appropriate way to select a healthy cosmetic product is to identify the texture of the body first and then select the most suitable product with safe ingredients; the selection process for cosmetic products is therefore complicated. Consumer surveys have shown that, most of the time, consumers select cosmetic products in an improper way. This study proposes a content-based system that recommends cosmetic products based on human factors; skin type, gender, and price range are considered as the human factors. The proposed system is implemented using machine learning, with consumer skin type, gender, and price range taken as inputs. The skin type of the consumer is derived using the Baumann Skin Type Questionnaire, a value-based approach that includes a number of questions to map the user to one of the 16 skin types of the Baumann Skin Type Indicator (BSTI). Two datasets are collected for the research: a user dataset, collected through a questionnaire given to the public, and a cosmetic dataset. Product details are included in the cosmetic dataset, which covers 5 product categories (moisturizer, cleanser, sun protector, face mask, eye cream). TF-IDF (Term Frequency-Inverse Document Frequency) is applied to vectorize cosmetic ingredients in the generic cosmetic products dataset and in the user-preferred dataset. Using the TF-IDF vectors, each user-preferred products dataset and the generic cosmetic products dataset can be represented as sparse vectors. The similarity between each user-preferred product and each generic cosmetic product is calculated using the cosine similarity method, and a similarity matrix is used for the recommendation process: the higher the similarity, the better the match for the consumer. Sorting a user's column of the similarity matrix in descending order of similarity yields the recommended products in rank order. Since user information such as gender and purchasing price range has also been gathered, further optimization can be done by weighting those parameters once a set of recommended products for a user has been retrieved.
Keywords: content-based filtering, cosmetics, machine learning, recommendation system
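The core similarity step can be sketched in a few lines. The ingredient lists and product names below are invented placeholders, and scikit-learn's TfidfVectorizer and cosine_similarity stand in for whatever implementation the study actually used.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy ingredient lists; the study's cosmetic dataset spans five product categories.
products = {
    "Moisturizer A": "glycerin dimethicone hyaluronic acid ceramide",
    "Moisturizer B": "shea butter jojoba oil tocopherol glycerin",
    "Cleanser A":    "sodium laureth sulfate cocamidopropyl betaine glycerin",
    "Sunscreen A":   "zinc oxide titanium dioxide dimethicone",
}
# Ingredients of products the user already prefers (e.g. ones suited to their skin type).
user_preferred = ["glycerin hyaluronic acid dimethicone"]

vectorizer = TfidfVectorizer()
product_vecs = vectorizer.fit_transform(products.values())
user_vec = vectorizer.transform(user_preferred)

# Cosine similarity between the user profile and every product, sorted descending.
scores = cosine_similarity(user_vec, product_vecs).ravel()
for name, score in sorted(zip(products, scores), key=lambda p: p[1], reverse=True):
    print(f"{score:.2f}  {name}")
```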
Procedia PDF Downloads 134
2908 Identity-Based Encryption: A Comparison of Leading Classical and Post-Quantum Implementations in an Enterprise Setting
Authors: Emily Stamm, Neil Smyth, Elizabeth O'Sullivan
Abstract:
In Identity-Based Encryption (IBE), an identity, such as a username, email address, or domain name, acts as the public key. IBE consolidates the PKI by eliminating the repetitive process of requesting public keys for each message encryption. Two of the most popular schemes are Sakai-Kasahara (SAKKE), which is based on elliptic curve pairings, and the Ducas, Lyubashevsky, and Prest lattice scheme (DLP-Lattice), which is based on quantum-secure lattice cryptography. In order to embed the schemes in a standard enterprise setting, both schemes are implemented as shared system libraries and integrated into a REST service that functions at the enterprise level. The performance of both schemes as libraries and as services is compared, and the practicalities of implementation and application are discussed. Our performance results indicate that although SAKKE has the smaller key and ciphertext sizes, DLP-Lattice is significantly faster overall, and we recommend it for most enterprise use cases.
Keywords: identity-based encryption, post-quantum cryptography, lattice-based cryptography, IBE
Procedia PDF Downloads 136
2907 Bending the Consciousnesses: Uncovering Environmental Issues Through Circuit Bending
Authors: Enrico Dorigatti
Abstract:
The growing pile of hazardous e-waste produced especially by developed and wealthy countries gets relentlessly bigger, composed of EEDs (Electric and Electronic Devices) that are often thrown away although still well functioning, mainly due to (programmed) obsolescence. As a consequence, e-waste has taken, over the last years, the shape of a frightful, uncontrollable, and unstoppable phenomenon, mainly fuelled by market policies aiming to maximize sales, and thus profits, at any cost. Against it, governments and organizations have put some effort into developing ambitious frameworks and policies aiming to regulate, in some cases, the whole lifecycle of EEDs, from design to recycling. Incidentally, however, such regulations sometimes make the disposal of the devices economically unprofitable, which often translates into growing illegal e-waste trafficking, an activity usually undertaken by criminal organizations. It seems that nothing, at least in the near future, can stop the phenomenon of e-waste production and accumulation. But while, from a practical standpoint, a solution seems hard to find, much can be done regarding people's education, which translates into informing and promoting good practices such as reusing and repurposing. This research argues that circuit bending, an activity rooted in neo-materialist philosophy and post-digital aesthetics and based on repurposing EEDs into novel music instruments and sound generators, could have great potential in this respect. In particular, it asserts that circuit bending could expose ecological, environmental, and social criticalities related to the current market policies and economic model, not only thanks to its practical side (e.g., sourcing and repurposing devices) but also to the artistic one (e.g., employing bent instruments for ecologically-aware installations and performances). Currently, relevant literature and debate lack interest and information about the ecological aspects and implications of the practical and artistic sides of circuit bending. This research, therefore, although still at an early stage, aims to fill this gap by investigating, on the one side, the ecological potential of circuit bending and, on the other side, its capacity for sensitizing people, through artistic practice, about e-waste-related issues. The methodology will articulate in three main steps. Firstly, field research will be undertaken with the purpose of understanding where and how to source, in an ecological and sustainable way, (discarded) EEDs for circuit bending. Secondly, artistic installations and performances will be organized to sensitize the audience to environmental concerns through sound art and music derived from bent instruments. Data, such as audience feedback, will be collected at this stage. The last step will consist of realising workshops to spread an ecologically-aware circuit bending practice. Additionally, all the data and findings collected will be made available and disseminated as resources.
Keywords: circuit bending, ecology, sound art, sustainability
Procedia PDF Downloads 171
2906 Digital Platforms: Creating Value through Network Effects under Pandemic Conditions
Authors: S. Łęgowik-Świącik
Abstract:
This article is a contribution to research into the determinants of value creation via digital platforms under variable operating conditions. The dynamics of the market environment caused by the COVID-19 pandemic have made enterprises built on digital platforms financially successful. While many classic companies are struggling with the uncertainty of conducting business and with difficulties in the process of value creation, digital platforms create value by modifying the existing business model to meet the changing needs of customers. Therefore, the objective of this publication is to understand and explain the relationship between value creation and the conversion of the business model built on digital platforms under pandemic conditions. The considerations relating to the conceptual framework and the research objective allowed for adopting the hypothesis that the processes of value creation are evolving and that the measurement of these processes allows for protecting the value created and enables its growth in changing circumstances. Research methods such as critical literature analysis and case study were applied to accomplish the objective and verify the hypothesis. The empirical research was carried out based on data from enterprises listed on the Nasdaq Stock Exchange: Amazon, Alibaba, and Facebook. The research period was the years 2018-2021, and the surveyed enterprises were chosen through targeted selection. The problem discussed is important and current, since the lack of in-depth theoretical research has resulted in few attempts to identify the determinants of value creation via digital platforms. These arguments led to an attempt at theoretical analysis and empirical research to fill the perceived gap by deepening the understanding of the process of value creation through network effects via digital platforms under pandemic conditions.
Keywords: business model, digital platforms, enterprise management, pandemic conditions, value creation process
Procedia PDF Downloads 128
2905 A Comprehensive Survey and Improvement to Existing Privacy Preserving Data Mining Techniques
Authors: Tosin Ige
Abstract:
Ethics must be a condition of the world, like logic (Ludwig Wittgenstein, 1889-1951). As important as data mining is, it poses a significant threat to ethics, privacy, and legality, since data mining makes it difficult for an individual or consumer (in the case of a company) to control the accessibility and usage of their data. This research focuses on current issues and the latest research and development in privacy-preserving data mining methods as of 2022. It also discusses advances in those techniques while highlighting and providing a new technique as a solution to an existing privacy-preserving data mining technique. This paper also bridges the wide gap between data mining and the Web Application Programming Interface (web API), where research is urgently needed for an added layer of security in data mining, while at the same time introducing a seamless and more efficient way of data mining.
Keywords: data, privacy, data mining, association rule, privacy preserving, mining technique
Procedia PDF Downloads 173
2904 Cartography through Picasso’s Eyes
Authors: Desiree Di Marco
Abstract:
The aim of this work is to show, through the lens of art, first, what kind of reality was represented through fascist maps, and second, to study the impact of the fascist regime's cartography (FRC) on observers' eyes. In this study, it is assumed that the FRC's representation of reality was simplified, timeless, and even a-spatial because it underrated the concept of territoriality. Cubism and Picasso's paintings will be used as counter-examples to demystify fascist cartography's ideological assumptions. The difference between the gaze of an observer looking at the surface of a fascist map and the gaze of someone observing a Picasso painting is impressive. Because there is always something dark and hidden, behind and inside a map, the world of fascist maps was a world built from the observation of a "window" that distorted reality and trapped the eyes of observers. Moving across the map, they seem as if they were hypnotized. Cartohypnosis is the state in which the observer finds himself enslaved by the attractive force of the map, which uses a sort of "magic" geography, a geography that, by means of symbolic language, never has as its primary objective the attempt to show us reality in its complexity, but rather that of performing for its audience. Magical geography and hypnotic cartography blended together in fascism, creating an almost mystical, magical relationship that demystified reality in order to reduce the world to a conquerable space. This reduction offered the observer the possibility of conceiving new dimensions: of the limit, of the boundary, elements with which the subject felt fully involved and in which the aesthetic force of the images demonstrated all its strength. But in the early 20th century, the combination of art and cartography gave rise to new possibilities. Cubism, which more than all the other artistic currents showed us how much observing reality from a single point of view falls within dangerous logic, is an example. Cubism was an artistic movement that brought about a profound transformation of pictorial culture. It was not only a revolution of pictorial space; it was a revolution of our conception of pictorial space. Up until that time, men and women were more inclined to believe in the power of images and their representations. Cubist painters rebelled against this blindness by claiming that art must always offer an alternative. Indeed, the contribution of this work is precisely to show how art can provide alternatives to even the most horrible regimes and the most atrocious human misfortunes. It also enriches the field of cartography because it "reassures" it by showing how much cartography can gain when other disciplines come close to it. Only in this way can researchers increase the chances of a greater diffusion of cartography at the academic level.
Keywords: cartography, Picasso, fascism, culture
Procedia PDF Downloads 64
2903 A Proposal to Tackle Security Challenges of Distributed Systems in the Healthcare Sector
Authors: Ang Chia Hong, Julian Khoo Xubin, Burra Venkata Durga Kumar
Abstract:
Distributed systems offer many benefits to the healthcare industry. From big data analysis to business intelligence, the increased computational power and efficiency of distributed systems serve as an invaluable resource for the healthcare sector. However, as the usage of these distributed systems increases, many issues arise. The main focus of this paper is on security issues. Many security issues stem from distributed systems in the healthcare industry, particularly concerning information security. People's data is especially sensitive in the healthcare industry. If important information gets leaked (e.g., IC, credit card number, address, etc.), a person's identity, financial status, and safety might be compromised. This results in the responsible organization losing a lot of money compensating these people, and even more resources are expended trying to fix the fault. Therefore, a framework for a blockchain-based healthcare data management system is proposed. In this framework, a blockchain network is used to store the encryption key of the patient's data, while the actual data is encrypted and the resulting ciphertext is stored on a cloud storage platform. Furthermore, some issues have to be emphasized and tackled in future improvements, such as proposing a multi-user scheme, addressing authentication issues, or migrating the backend processes into the blockchain network. Due to the nature of blockchain technology, the data will be tamper-proof, and its read-only function can only be accessed by authorized users such as doctors and nurses. This guarantees the confidentiality and immutability of the patient's data.
Keywords: distributed, healthcare, efficiency, security, blockchain, confidentiality and immutability
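A minimal sketch of the proposed flow is shown below, assuming Python's cryptography package for symmetric encryption; the cloud store and blockchain ledger are stood in for by plain in-memory structures, and the per-user key wrapping and access control a real deployment would need are omitted.

```python
import hashlib
import json
from cryptography.fernet import Fernet

cloud_storage = {}        # stand-in for the cloud platform: record_id -> ciphertext
blockchain_ledger = []    # stand-in for the blockchain: append-only key-management entries

record = json.dumps({"patient": "P-001", "diagnosis": "hypertension"}).encode()

key = Fernet.generate_key()             # symmetric key for this record
ciphertext = Fernet(key).encrypt(record)

# Ciphertext goes to the cloud; the encryption key and a ciphertext digest
# (for tamper checking) are what the blockchain network manages.
record_id = hashlib.sha256(ciphertext).hexdigest()[:16]
cloud_storage[record_id] = ciphertext
blockchain_ledger.append({
    "record_id": record_id,
    "encryption_key": key.decode(),     # in practice, wrapped per authorised user
    "ciphertext_sha256": hashlib.sha256(ciphertext).hexdigest(),
})

# An authorised doctor or nurse later retrieves the key reference and decrypts.
entry = blockchain_ledger[-1]
restored = Fernet(entry["encryption_key"].encode()).decrypt(cloud_storage[entry["record_id"]])
print(json.loads(restored))
```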
Procedia PDF Downloads 184