Abstracts | Computer and Information Engineering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3598

World Academy of Science, Engineering and Technology

[Computer and Information Engineering]

Online ISSN : 1307-6892

3538 Reconfigurable Intelligent Surfaces (RIS)-Assisted Integrated LEO Satellite and UAV for Non-Terrestrial Networks Using a Deep Reinforcement Learning Approach

Authors: Tesfaw Belayneh Abebe

Abstract:

We investigate how to enhance throughput in a non-terrestrial network (NTN) that integrates low Earth orbit (LEO) satellites and unmanned aerial vehicles (UAVs) with the assistance of reconfigurable intelligent surfaces (RIS). We propose a method to jointly optimize the LEO satellite associations, the 3D trajectory of the UAV, and the phase shifts of the RIS to maximize communication throughput for RIS-assisted integrated LEO satellite and UAV-enabled wireless communications. The problem is challenging due to the time-varying position of the LEO satellite, the high mobility of UAVs, the enormous number of possible control actions, and the large number of RIS elements. Utilizing a multi-agent double deep Q-network (MADDQN), our approach dynamically adjusts LEO satellite association, UAV positioning, and RIS phase shifts. Simulation results demonstrate that our method significantly outperforms baseline strategies in maximizing throughput. Thanks to the integrated network and the RIS, the proposed scheme achieves up to 65.66x higher peak throughput and 25.09x higher worst-case throughput.
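
To illustrate the double-DQN update underlying MADDQN, here is a minimal single-agent sketch in PyTorch; all names, dimensions, and hyperparameters are assumptions for illustration, not the authors' implementation. In the multi-agent setting described above, each agent (satellite association, UAV positioning, RIS phase control) would maintain its own online/target network pair.

```python
# Hypothetical sketch of the double-DQN target computation at the core of MADDQN.
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, s):
        return self.net(s)

def double_dqn_target(online: QNet, target: QNet, r, s_next, gamma=0.99):
    # The online network selects the best next action; the target network
    # evaluates it, reducing the overestimation bias of vanilla DQN.
    with torch.no_grad():
        a_star = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, a_star).squeeze(1)
    return r + gamma * q_next
```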

Keywords: low Earth orbit (LEO) satellites, unmanned aerial vehicles (UAVs), non-terrestrial networks (NTN), reconfigurable intelligent surfaces (RIS), multi-agent double deep Q-network (MADDQN)

Procedia PDF Downloads 23
3537 Optimization and Automation of Functional Testing with White-Box Testing Method

Authors: Reyhaneh Soltanshah, Hamid R. Zarandi

Abstract:

Software testing is necessary for efficiency in industries related to computer systems, despite the time and money it consumes. In embedded system software testing, complete knowledge of the embedded system architecture is necessary to avoid significant costs and damages, and software tests increase the price of the final product. The aim of this article is therefore to provide a method that reduces the time and cost of tests based on program structure. First, a complete review of eleven white-box test methods was carried out, based on the 2015 and 2021 versions of ISO/IEC/IEEE 29119. The proposed algorithm is designed using these two versions of the standard, and white-box testing methods that are expensive or provide little coverage were removed. White-box test methods were applied to each function according to the 29119 standard, and the proposed algorithm was then implemented on the same functions. To speed up the implementation of the proposed method, the Unity framework was used with some changes; Unity is suitable for embedded software testing because it is open source and can implement white-box test methods. The test items obtained from the two approaches were evaluated using a mathematical ratio, which, across the evaluated software, reduced the test cost by between 50% and 80% while reaching the desired result with the minimum number of test items.
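
As a rough illustration of reducing test cost while preserving structural coverage, here is a hedged Python sketch; the greedy selection and the cost ratio are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch: greedily keep the fewest test cases that preserve
# branch coverage, then compute the cost-reduction ratio between suites.
def greedy_min_cover(tests: dict[str, set[str]]) -> list[str]:
    """tests maps a test-case id to the set of branches it covers."""
    required = set().union(*tests.values())
    chosen, covered = [], set()
    while covered != required:
        # Pick the test that adds the most uncovered branches.
        best = max(tests, key=lambda t: len(tests[t] - covered))
        chosen.append(best)
        covered |= tests.pop(best)
    return chosen

def cost_reduction(n_before: int, n_after: int) -> float:
    return 1.0 - n_after / n_before

suite = {"t1": {"b1", "b2"}, "t2": {"b2"}, "t3": {"b3"}, "t4": {"b1", "b3"}}
kept = greedy_min_cover(dict(suite))
print(kept, f"reduction = {cost_reduction(len(suite), len(kept)):.0%}")
```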

Keywords: embedded software, reduce costs, software testing, white-box testing

Procedia PDF Downloads 27
3536 Ensemble of Deep CNN Architectures for Classifying the Source and Quality of Teff Cereal

Authors: Belayneh Matebie, Michael Melese

Abstract:

The study addresses the challenges of classifying and ensuring the quality of Eragrostis tef (Teff), a small, round grain that is the smallest of the cereal grains. Traditional classification methods are challenging to apply because of the grain's small size and its similar appearance across production environments. To overcome this, the study employs a machine learning approach to develop a source and quality classification system for Teff cereal. Data were collected from various production areas in the Amhara region, covering two quality levels (high and low) across eight classes, for a total of 5,920 images (740 per class). Image enhancement techniques, including scaling, data augmentation, histogram equalization, and noise removal, were applied to preprocess the data. A convolutional neural network (CNN) was then used to extract relevant features and reduce dimensionality. The dataset was split into 80% for training and 20% for testing. Several classifiers, including FVGG16, FINCV3, QSCTC, EMQSCTC, SVM, and RF, were employed for classification, achieving accuracy rates ranging from 86.91% to 97.72%. An ensemble of FVGG16, FINCV3, and QSCTC using the max-voting approach outperforms the individual algorithms.
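
The max-voting combination rule can be sketched briefly with scikit-learn; the members below are classical classifiers standing in for the paper's FVGG16/FINCV3/QSCTC deep models, and the dataset is a public one used purely for illustration.

```python
# A minimal hard-voting (max-voting) ensemble sketch with scikit-learn.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[("svm", SVC()), ("rf", RandomForestClassifier()),
                ("dt", DecisionTreeClassifier())],
    voting="hard",  # each member casts one vote; the majority class wins
)
ensemble.fit(X_tr, y_tr)
print(f"ensemble accuracy: {ensemble.score(X_te, y_te):.4f}")
```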

Keywords: Teff, ensemble learning, max-voting, CNN, SVM, RF

Procedia PDF Downloads 28
3535 Arabic Light Word Analyser: Roles with Deep Learning Approach

Authors: Mohammed Abu Shquier

Abstract:

This paper introduces a word segmentation method using a novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also accounts for deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, and the language limitations and morphology of both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), all of which justify updating the system. Most Arabic word analysis systems are based on phonotactic morpho-syntactic analysis of a word using lexical rules, mainly used in MENA language technology tools, without taking into account contextual or semantic morphological implications; it is therefore necessary to have an automatic analysis tool that takes into account the word sense and not only the morpho-syntactic category. Moreover, such systems are also based on statistical/stochastic models. These stochastic models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. As an extension, we focus on language modeling using recurrent neural networks (RNN); given that morphological analysis coverage has been very low for Dialectal Arabic, it is important to investigate in depth how dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that dialectal variability can help improve analysis.
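
A minimal PyTorch sketch of the BiLSTM backbone of such a segmenter follows; it scores a binary boundary/no-boundary label per character. The CRF layer of BP-LSTM-CRF is omitted, and all hyperparameters here are assumptions for illustration.

```python
# Hedged sketch of a BiLSTM tagger for binary word segmentation
# (1 = segment boundary after this character, 0 = no boundary).
import torch
import torch.nn as nn

class SegTagger(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, 2)  # boundary / no-boundary scores

    def forward(self, char_ids):             # (batch, seq_len)
        h, _ = self.lstm(self.emb(char_ids))
        return self.out(h)                   # (batch, seq_len, 2)

model = SegTagger(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 12)))
loss = nn.CrossEntropyLoss()(logits.view(-1, 2), torch.randint(0, 2, (24,)))
loss.backward()
```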

Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN

Procedia PDF Downloads 23
3534 IoT-Based Smart Car Parking System Using Node-RED

Authors: Armel Asongu Nkembi, Ahmad Fawad

Abstract:

In this paper, we design a smart car parking system using the Node-RED interface, which enables the user to find the nearest parking area to their current location and reports the availability of parking slots in that parking area. The closest parking area is determined by sending an HTTP request to an API, and the shortest distance is computed mathematically from the coordinates retrieved. IR sensors signal the availability or occupancy of parking slots within each parking area. The aim is to reduce the time and effort needed to find empty parking slots and to avoid unnecessary travel through filled parking areas. This reduces fuel consumption, which in turn reduces carbon footprints in the atmosphere and, overall, makes the city much smarter.
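
The shortest-distance computation from retrieved coordinates is typically done with the haversine formula; a small Python sketch follows, with the coordinates and function name as illustrative assumptions.

```python
# Hedged sketch: great-circle distance between the user and candidate parking areas.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

user = (9.05785, 7.49508)                # assumed user location
parking_areas = {"A": (9.0600, 7.4900), "B": (9.0200, 7.5200)}
nearest = min(parking_areas, key=lambda k: haversine_km(*user, *parking_areas[k]))
print("nearest parking area:", nearest)
```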

Keywords: Node-RED, smart parking system, API, HTTP request, IR sensors, Internet of Things, smart city, parking lots

Procedia PDF Downloads 19
3533 Improving Fingerprinting-Based Localization (FPL) System Using Generative Artificial Intelligence (GAI)

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things (IoT) devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning, particularly indoors, in dense urban and suburban areas enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the many surrounding objects; (2) reflection within a building depends strongly on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals are not powerful enough to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous positioning, navigation, and timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alerts, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization, and we employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the site-survey workload required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc significantly improves positioning performance and reduces radio map construction costs compared to traditional methods.
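
The t-SNE feature-extraction step can be sketched briefly; the data shape and parameters below are assumptions, and the S-DCGAN radio-map stage is not reproduced.

```python
# Hedged sketch: projecting raw hybrid WLAN/LTE RSS fingerprints to a
# low-dimensional embedding with t-SNE, as a denoising feature extractor.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
fingerprints = rng.normal(size=(500, 60))   # 500 samples x 60 RSS features (assumed)

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(fingerprints)
print(embedding.shape)                      # (500, 2) low-dimensional features
```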

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 24
3532 Improving Machine Learning Translation of Hausa Using Named Entity Recognition

Authors: Aishatu Ibrahim Birma, Aminu Tukur, Abdulkarim Abbass Gora

Abstract:

Machine translation plays a vital role in the field of Natural Language Processing (NLP), breaking down language barriers and enabling communication across diverse communities. In the context of Hausa, a language widely spoken in West Africa, mainly in Nigeria, effective translation systems are essential for enabling seamless communication and promoting cultural exchange. However, due to the unique linguistic characteristics of Hausa, accurate translation remains a challenging task. This research proposes an approach to improving machine translation of Hausa by integrating Named Entity Recognition (NER) techniques. Named entities, such as person names, locations, organizations, and dates, are critical components of a language's structure and meaning. Incorporating NER into the translation process can enhance the quality and accuracy of translations by preserving the integrity of named entities, maintaining consistency in translating entities (e.g., proper names), and addressing cultural references specific to Hausa. The NER component will be incorporated into a Neural Machine Translation (NMT) pipeline for Hausa-to-English translation.
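
One common way to integrate NER with NMT is to mask entities with placeholders before translation and restore them afterwards; the sketch below illustrates that idea with a toy lexicon, which is an assumption, since a real system would use a trained Hausa NER model.

```python
# Illustrative sketch (not the authors' pipeline): entity masking around NMT.
import re

NER_LEXICON = {"Aminu": "PERSON", "Kano": "LOCATION"}  # assumed toy entities

def mask_entities(text: str):
    slots = {}
    for i, (name, label) in enumerate(NER_LEXICON.items()):
        placeholder = f"<{label}_{i}>"
        if re.search(rf"\b{name}\b", text):
            text = re.sub(rf"\b{name}\b", placeholder, text)
            slots[placeholder] = name
    return text, slots

def unmask_entities(text: str, slots: dict) -> str:
    for placeholder, name in slots.items():
        text = text.replace(placeholder, name)
    return text

masked, slots = mask_entities("Aminu ya tafi Kano.")
translated = masked.replace("ya tafi", "went to")   # stand-in for the NMT step
print(unmask_entities(translated, slots))           # -> "Aminu went to Kano."
```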

Keywords: machine translation, natural language processing (NLP), named entity recognition (NER), neural machine translation (NMT)

Procedia PDF Downloads 23
3531 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things (IoT) devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning, particularly indoors, in dense urban and suburban areas enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the many surrounding objects; (2) reflection within a building depends strongly on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals are not powerful enough to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous positioning, navigation, and timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization, and we employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the site-survey workload required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc significantly improves positioning performance and reduces radio map construction costs compared to traditional methods.

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 26
3530 User Intention Generation with Large Language Models Using Chain-of-Thought Prompting

Authors: Gangmin Li, Fan Yang

Abstract:

Personalized recommendation is crucial for any recommendation system, and one technique for personalization is to identify the user's intention. Traditional user intention identification uses the user's selections when facing multiple items. This modeling relies primarily on historical behaviour data, resulting in challenges such as cold start, unintended choices, and failure to capture intention when items are new. Motivated by recent advancements in Large Language Models (LLMs) like ChatGPT, we present an approach to user intention identification that embraces LLMs with Chain-of-Thought (CoT) prompting. We use the initial user profile as input to the LLM and design a collection of prompts to align the LLM's responses across various recommendation tasks, encompassing rating prediction, search and browse history, user clarification, etc. Our tests on real-world datasets demonstrate improvements in recommendation from explicit user intention identification and from merging that intention into a user model.
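
A hedged sketch of the kind of chain-of-thought prompt this approach describes follows; the template wording and profile fields are assumptions, not the authors' prompts.

```python
# Illustrative CoT prompt template for eliciting an explicit user intention.
COT_PROMPT = """You are a recommendation assistant.
User profile: {profile}
Recent searches: {searches}
Ratings given: {ratings}

Let's reason step by step:
1. Summarize what the user appears to be looking for.
2. Identify the underlying intention behind these behaviours.
3. State the intention in one sentence for use in a user model.
"""

prompt = COT_PROMPT.format(
    profile="age 30, enjoys sci-fi and hiking",
    searches="trail shoes, GPS watches",
    ratings="'Dune' 5/5, 'Wild' 4/5",
)
# `prompt` would be sent to an LLM via an API client; the final line of the
# response is then parsed as the explicit user intention.
print(prompt)
```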

Keywords: personalized recommendation, generative user modelling, user intention identification, large language models, chain-of-thought prompting

Procedia PDF Downloads 31
3529 Scaling Siamese Neural Network for Cross-Domain Few-Shot Learning in Medical Imaging

Authors: Jinan Fiaidhi, Sabah Mohammed

Abstract:

Cross-domain learning in the medical field is a research challenge because many conditions, as in oncology imaging, involve different imaging modalities; moreover, in most medical learning applications, the training sample size is relatively small. Although few-shot learning (FSL) with a Siamese neural network can be trained on a small sample with remarkable accuracy, it fails to be effective across multiple domains because its convolution weights are set for task-specific applications. In this paper, we address this problem by enabling FSL to shift across domains: we design a two-layer FSL network that learns individually from each domain and produces a shared feature map, with extra modulation applied at the second layer to recognize important targets from mixed domains. Our initial experiments on mixed medical datasets such as Medical-MNIST show promising results. We aim to continue this research with full-scale analytics to test our cross-domain FSL approach.
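
The Siamese matching mechanism at the base of this design can be sketched briefly in PyTorch; the two-layer cross-domain modulation described above is not reproduced, and the architecture and sizes are assumptions.

```python
# Minimal Siamese sketch: twin encoders with shared weights whose embedding
# distance drives few-shot matching (small distance suggests the same class).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 16, 32),
        )

    def forward(self, x):
        return self.conv(x)

enc = Encoder()                                   # the same encoder embeds both inputs
a, b = torch.randn(8, 1, 28, 28), torch.randn(8, 1, 28, 28)
dist = F.pairwise_distance(enc(a), enc(b))        # one distance per pair
print(dist.shape)                                 # torch.Size([8])
```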

Keywords: Siamese neural network, few-shot learning, meta-learning, metric-based learning, thick data transformation and analytics

Procedia PDF Downloads 38
3528 Empowering Transformers for Evidence-Based Medicine

Authors: Jinan Fiaidhi, Hashmath Shaik

Abstract:

Breaking the barrier to practicing evidence-based medicine relies on effective methods for rapidly identifying relevant evidence in the body of biomedical literature. An important challenge confronting medical practitioners is the long time needed to browse, filter, summarize, and compile information from different medical resources. Deep learning can help solve this through automatic question answering (Q&A) and transformers. However, Q&A and transformer technologies are not trained to answer clinical queries that can be used for evidence-based practice, nor can they respond to structured clinical questioning protocols like PICO (Patient/Problem, Intervention, Comparison, and Outcome). This article describes the use of deep learning techniques for Q&A, based on transformer models like BERT and GPT, to answer PICO clinical questions for evidence-based practice, with evidence extracted from sound medical research resources like PubMed. We report acceptable clinical answers that are supported by findings from PubMed. Our transformer methods reach acceptable state-of-the-art performance based on a two-stage bootstrapping process that first filters relevant articles and then identifies articles that support the requested outcome expressed by the PICO question. Moreover, we report experiments that augment our bootstrapping techniques with patched attention to the most important keywords in the clinical case and the PICO question. Our bootstrapping patched with attention shows the relevance of the collected evidence based on entropy metrics.
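
The two-stage bootstrapping idea can be illustrated with a deliberately simplified Python sketch; the keyword-matching logic and example texts are assumptions standing in for the transformer-based filtering described above.

```python
# Illustrative sketch: stage 1 filters abstracts relevant to the PICO elements,
# stage 2 keeps those that mention the requested outcome.
pico = {"patient": "type 2 diabetes", "intervention": "metformin",
        "comparison": "placebo", "outcome": "HbA1c reduction"}

abstracts = [
    "Metformin versus placebo in type 2 diabetes: HbA1c reduction at 24 weeks.",
    "Exercise programs for hypertension management in older adults.",
]

def stage1_relevant(text: str) -> bool:
    t = text.lower()
    return pico["patient"] in t and pico["intervention"] in t

def stage2_supports_outcome(text: str) -> bool:
    return pico["outcome"].lower() in text.lower()

evidence = [a for a in abstracts if stage1_relevant(a) and stage2_supports_outcome(a)]
print(evidence)  # only the first abstract survives both stages
```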

Keywords: automatic question answering, PICO questions, evidence-based medicine, generative models, LLM transformers

Procedia PDF Downloads 22
3527 Automatic Near-Infrared Image Colorization Using Synthetic Images

Authors: Yoganathan Karthik, Guhanathan Poravi

Abstract:

Colorizing near-infrared (NIR) images poses unique challenges due to the absence of color information and the nuances in light absorption. In this paper, we present an approach to NIR image colorization utilizing a synthetic dataset generated from visible light images. Our method addresses two major challenges encountered in NIR image colorization: accurately colorizing objects with color variations and avoiding over/under saturation in dimly lit scenes. To tackle these challenges, we propose a Generative Adversarial Network (GAN)-based framework that learns to map NIR images to their corresponding colorized versions. The synthetic dataset ensures diverse color representations, enabling the model to effectively handle objects with varying hues and shades. Furthermore, the GAN architecture facilitates the generation of realistic colorizations while preserving the integrity of dimly lit scenes, thus mitigating issues related to over/under saturation. Experimental results on benchmark NIR image datasets demonstrate the efficacy of our approach in producing high-quality colorizations with improved color accuracy and naturalness. Quantitative evaluations and comparative studies validate the superiority of our method over existing techniques, showcasing its robustness and generalization capability across diverse NIR image scenarios. Our research not only contributes to advancing NIR image colorization but also underscores the importance of synthetic datasets and GANs in addressing domain-specific challenges in image processing tasks. The proposed framework holds promise for various applications in remote sensing, medical imaging, and surveillance where accurate color representation of NIR imagery is crucial for analysis and interpretation.

Keywords: computer vision, near-infrared images, automatic image colorization, generative adversarial networks, synthetic data

Procedia PDF Downloads 23
3526 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU

Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais

Abstract:

Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), opacity and unfairness 'sins' must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification; discrimination against some groups can be exponentiated; and a hetero-personalized identity can be imposed on the individual(s) affected. Autonomous CWA also sometimes lacks transparency when black-box models are used. However, for this intended purpose, human analysts 'on-the-loop' might not be the best remedy consumers are looking for in credit. This study explores the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of EU Directive 2023/2225 of 18 October, which will take effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA's Art. 2. Consequently, engineering the law of consumers' CWA is imperative. The proposed MAS framework consists of several layers arranged in sequence: first, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose neural network model is trained using k-fold cross-validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises three-step agents; and, lastly, the Oversight Layer has a 'bottom stop' for analysts to intervene in a timely manner. From the analysis, a vital component of this software is the XAI layer: it acts as a transparent curtain covering the AI's decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion: it uses lawful alternative sources, such as share of wallet, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of machine-centered anthropocentrism to the table of EU policymaking. It acknowledges that, once put into service, credit analysts no longer exert full control over the data-driven entities programmers have given 'birth' to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently; the issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.
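
The role of the SHAP agent in the XAI layer can be sketched briefly with the `shap` library; the model, features, and toy data below are assumptions for illustration, not the framework's actual components.

```python
# Hedged sketch: attributing a credit decision to individual features with SHAP.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # e.g., income, debt ratio, age (assumed)
y = (X[:, 0] - X[:, 1] > 0).astype(int)    # toy creditworthiness label

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # per-feature contributions per applicant
print(np.asarray(shap_values).shape)
```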

Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking

Procedia PDF Downloads 21
3525 A Study of User Awareness and Attitudes Towards Civil-ID Authentication in Oman’s Electronic Services

Authors: Raya Al Khayari, Rasha Al Jassim, Muna Al Balushi, Fatma Al Moqbali, Said El Hajjar

Abstract:

This study utilizes linear regression analysis to investigate the correlation between user account passwords and the probability of civil ID exposure, offering statistical insights into civil ID security. It then employs multiple linear regression (MLR) analysis to further investigate the elements that influence consumers' views of civil ID security, with the aim of increasing awareness and improving preventive measures. The results obtained from the MLR analysis provide a thorough understanding that can guide targeted educational and awareness campaigns aimed at promoting improved security procedures. In summary, the study's results offer significant insights for improving existing security measures and developing more efficient tactics to reduce the risks related to civil ID security in Oman. By identifying key factors that impact consumers' perceptions, organizations can tailor their strategies to address vulnerabilities effectively, and the findings can inform policymakers on potential regulatory changes to enhance civil ID security in the country.
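
A minimal multiple linear regression sketch of the kind of analysis described follows; the predictors and synthetic data are entirely illustrative assumptions.

```python
# Hedged MLR sketch with scikit-learn on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Assumed predictors: password strength score, reuse count, awareness score.
X = rng.normal(size=(120, 3))
y = 0.5 * X[:, 0] - 0.8 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.1, 120)

mlr = LinearRegression().fit(X, y)
print("coefficients:", mlr.coef_.round(2), "R^2:", round(mlr.score(X, y), 3))
```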

Keywords: civil-id disclosure, awareness, linear regression, multiple regression

Procedia PDF Downloads 39
3524 The Effects of Prolonged Social Media Use on Student Health: A Focus on Computer Vision Syndrome, Hand Pain, Headaches, and Mental Status

Authors: Augustine Ndudi Egere, Shehu Adamu, Esther Ishaya Solomon

Abstract:

As internet accessibility and smartphone use continue to increase in Nigeria, Africa’s most populous country, social media platforms have become ubiquitous, leading students in the 18-25 age bracket to spend more time on social media. This research investigated the impact of prolonged social media use on students' physical health, with a specific focus on computer vision syndrome, hand pain, headaches, and mental status. The study adopted a mixed-methods approach, combining quantitative surveys that gathered statistical data on usage patterns and symptoms with qualitative interviews exploring the experiences and perceptions of medical practitioners concerning the cases under study within the geopolitical region. The results were analyzed using regression analysis. A significant correlation was observed between social media usage by students in the study's age bracket and computer vision syndrome, hand pain, headaches, and general mental status. The research concludes by providing valuable insights into potential interventions and strategies to mitigate the adverse effects of excessive social media use on student well-being, and it recommends, among other measures, that educational institutions, parents, and students collaborate to implement strategies promoting responsible and balanced use of social media.

Keywords: social media, student health, computer vision syndrome, hand pain, headaches, mental status

Procedia PDF Downloads 31
3523 The Synergistic Effects of Blockchain and AI on Enhancing Data Integrity and Decision-Making Accuracy in Smart Contracts

Authors: Sayor Ajfar Aaron, Sajjat Hossain Abir, Ashif Newaz, Mushfiqur Rahman

Abstract:

Investigating the convergence of blockchain technology and artificial intelligence, this paper examines their synergistic effects on data integrity and decision-making within smart contracts. By implementing AI-driven analytics on blockchain-based platforms, the research identifies improvements in automated contract enforcement and decision accuracy. The paper presents a framework that leverages AI to enhance transparency and trust while blockchain ensures immutable record-keeping, culminating in significantly optimized operational efficiencies in various industries.

Keywords: artificial intelligence, blockchain, data integrity, smart contracts

Procedia PDF Downloads 28
3522 Navigating Cyber Attacks with Quantum Computing: Leveraging Vulnerabilities and Forensics for Advanced Penetration Testing in Cybersecurity

Authors: Sayor Ajfar Aaron, Ashif Newaz, Sajjat Hossain Abir, Mushfiqur Rahman

Abstract:

This paper examines the transformative potential of quantum computing in the field of cybersecurity, with a focus on advanced penetration testing and forensics. It explores how quantum technologies can be leveraged to identify and exploit vulnerabilities more efficiently than traditional methods and how they can enhance the forensic analysis of cyber-attacks. Through theoretical analysis and practical simulations, this study highlights the enhanced capabilities of quantum algorithms in detecting and responding to sophisticated cyber threats, providing a pathway for developing more resilient cybersecurity infrastructures.

Keywords: cybersecurity, cyber forensics, penetration testing, quantum computing

Procedia PDF Downloads 33
3521 A Proposed Framework for Software Redocumentation Using Distributed Data Processing Techniques and Ontology

Authors: Laila Khaled Almawaldi, Hiew Khai Hang, Sugumaran A. l. Nallusamy

Abstract:

Legacy systems are crucial for organizations, but their intricacy and lack of documentation pose challenges for maintenance and enhancement. Redocumentation of legacy systems, the automatic or semi-automatic creation of documentation for software lacking sufficient records, is vital for enhancing system understandability, maintainability, and knowledge transfer. However, existing redocumentation methods need improvement in data processing performance and document generation efficiency, a need that stems from having to handle the extensive and complex code of legacy systems efficiently. This paper proposes a method for semi-automatic legacy system redocumentation using semantic parallel processing and ontology. Leveraging parallel processing and ontology addresses the current challenges by distributing the workload and creating documentation with logically interconnected data. The paper outlines the challenges in legacy system redocumentation and suggests a redocumentation method based on parallel processing and ontology for improved efficiency and effectiveness.

Keywords: legacy systems, redocumentation, big data analysis, parallel processing

Procedia PDF Downloads 27
3520 Developing an AI-Driven Application for Real-Time Emotion Recognition from Human Vocal Patterns

Authors: Sayor Ajfar Aaron, Mushfiqur Rahman, Sajjat Hossain Abir, Ashif Newaz

Abstract:

This study delves into the development of an artificial intelligence application designed for real-time emotion recognition from human vocal patterns. Utilizing advanced machine learning algorithms, including deep learning and neural networks, the paper highlights both the technical challenges and potential opportunities in accurately interpreting emotional cues from speech. Key findings demonstrate the critical role of diverse training datasets and the impact of ambient noise on recognition accuracy, offering insights into future directions for improving robustness and applicability in real-world scenarios.
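
Systems of this kind typically start from spectral features of the audio; a hedged sketch of an MFCC front end with librosa follows, where the file path and feature settings are assumptions, not the authors' setup.

```python
# Illustrative sketch: extracting MFCC features from speech before feeding
# them to a neural emotion classifier.
import librosa
import numpy as np

audio, sr = librosa.load("speech_sample.wav", sr=16000)   # hypothetical file
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)    # shape (13, n_frames)
features = np.mean(mfcc, axis=1)                          # one vector per clip
print(features.shape)                                     # (13,)
```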

Keywords: artificial intelligence, convolutional neural network, emotion recognition, vocal patterns

Procedia PDF Downloads 28
3519 Surface to the Deeper: A Universal Entity Alignment Approach Focusing on Surface Information

Authors: Zheng Baichuan, Li Shenghui, Li Bingqian, Zhang Ning, Chen Kai

Abstract:

Entity alignment (EA) plays a pivotal role in the integration of knowledge graphs, where structural differences often exist between the source and target graphs, such as the presence or absence of attribute information and the types of attributes involved (text, timestamps, images, etc.). However, most current research focuses on improving alignment accuracy, often with an increased reliance on specific structures, a dependency that inevitably diminishes practical value and causes difficulties when facing alignment tasks between knowledge graphs with varying structures. We therefore propose a universal knowledge graph alignment approach that utilizes only the common basic structures shared by knowledge graphs. Experiments demonstrate that our method achieves state-of-the-art performance in fair comparisons.

Keywords: knowledge graph, entity alignment, transformer, deep learning

Procedia PDF Downloads 27
3518 Reinforcement Learning for Self Driving Racing Car Games

Authors: Adam Beaunoyer, Cory Beaunoyer, Mohammed Elmorsy, Hanan Saleh

Abstract:

This research aims to create a reinforcement learning agent capable of racing in challenging simulated environments with a low collision count. We present a reinforcement learning agent that can navigate challenging tracks using both a Deep Q-Network (DQN) and a Soft Actor-Critic (SAC) method; a challenging track includes curves, jumps, and varying road widths throughout. The environment used in this research is built on open-source code from GitHub and is based on the 1995 racing game WipeOut. The proposed agent can navigate challenging tracks rapidly while maintaining low completion times and collision counts. The results show that the SAC model outperforms the DQN model by a large margin. We also propose an alternative multiple-car model that can navigate the track without colliding with other vehicles. The SAC model is the basis for the multiple-car model, which completes laps more quickly than the single-car model but has a higher collision rate with the track wall.

Keywords: reinforcement learning, soft actor-critic, deep q-network, self-driving cars, artificial intelligence, gaming

Procedia PDF Downloads 24
3517 Exploring Deep Neural Network Compression: An Overview

Authors: Ghorab Sara, Meziani Lila, Rubin Harvey Stuart

Abstract:

The rapid growth of deep learning has led to intricate and resource-intensive deep neural networks widely used in computer vision tasks. However, their complexity results in high computational demands and memory usage, hindering real-time application. To address this, research focuses on model compression techniques. The paper provides an overview of recent advancements in compressing neural networks and categorizes the various methods into four main approaches: network pruning, quantization, network decomposition, and knowledge distillation. This paper aims to provide a comprehensive outline of both the advantages and limitations of each method.
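
One of the surveyed approaches, pruning, can be illustrated with a short magnitude-based sketch; the function and threshold rule are a common textbook variant, assumed here for illustration (framework utilities such as torch.nn.utils.prune wrap the same idea).

```python
# Illustrative sketch: magnitude-based weight pruning, zeroing the smallest
# fraction of weights in a layer.
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a copy of `weight` with the smallest-|w| entries set to zero."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight.clone()
    threshold = weight.abs().flatten().kthvalue(k).values
    return torch.where(weight.abs() > threshold, weight, torch.zeros_like(weight))

w = torch.randn(4, 4)
w_pruned = magnitude_prune(w, sparsity=0.5)
print((w_pruned == 0).float().mean())   # about half the weights removed
```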

Keywords: model compression, deep neural network, pruning, knowledge distillation, quantization, low-rank decomposition

Procedia PDF Downloads 27
3516 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion

Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro

Abstract:

In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all of the above issues and helps organizations improve efficiency and deliver faster without needing to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while reducing repetitive work and manual effort. Scalable CI/CD for development can be implemented using cloud services such as ECS (Elastic Container Service), AWS Fargate, ECR (Elastic Container Registry, to store Docker images with all dependencies), serverless computing (serverless virtual machines), cloud logging (for monitoring errors and logs), security groups (for inside/outside access to the application), Docker containerization (Docker-based images and container techniques), Jenkins (a CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit); such a setup can efficiently handle the demands of diverse development environments and accommodate dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application from the relevant branches, testing it using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as the pipeline scales based on need. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, alleviating concerns about scalability, maintenance costs, and resource needs. Building scalable automation testing on cloud services (ECR, ECS Fargate, Docker, EFS, serverless computing) helps organizations run more than 500 test cases in parallel, aiding the detection of race conditions and performance issues and reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands and allowing teams to scale resources up or down as needed; it optimizes costs by paying only for the resources actually used and increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
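
The parallel-execution idea behind the scalable automation layer can be sketched in a few lines of Python; in the setup described above each worker would be a Fargate/Docker task rather than a local process, and the test names here are placeholders.

```python
# Illustrative sketch: fanning test cases out over a process pool.
from concurrent.futures import ProcessPoolExecutor

def run_test(test_id: str) -> tuple[str, bool]:
    # Stand-in for invoking one containerized test case.
    return test_id, True

if __name__ == "__main__":
    tests = [f"test_{i:03d}" for i in range(500)]
    with ProcessPoolExecutor(max_workers=16) as pool:
        results = list(pool.map(run_test, tests))
    failed = [t for t, ok in results if not ok]
    print(f"{len(results)} tests run, {len(failed)} failed")
```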

Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment

Procedia PDF Downloads 22
3515 Improving Security Features of Traditional Automated Teller Machine-Based Banking Services via a Fingerprint Biometrics Scheme

Authors: Anthony I. Otuonye, Juliet N. Odii, Perpetual N. Ibe

Abstract:

The obvious challenges faced by most commercial bank customers while using ATM (Automated Teller Machine) services across developing countries have triggered the need for an improved system with better security features. Current ATM systems are password-based, and research has proved these systems vulnerable to heinous attacks and manipulations. Our research found that the security of current ATM-assisted banking services in most developing countries is easily broken and maneuvered by fraudsters, mainly because it is quite difficult for these systems to distinguish an impostor with privileged access from the authentic bank account owner; PIN (Personal Identification Number) codes, for example, are easily guessed, to mention just a few of the obvious limitations of traditional ATM operations. In this research work, we developed a fingerprint biometric system with PIN code authentication that seeks to improve the security features of traditional ATM installations as well as other banking services. The aim is to ensure better security at all ATM installations and raise the confidence of bank customers, and it is hoped that our system will overcome most of the challenges of current password-based ATM operation if properly applied. The researchers used OOADM (Object-Oriented Analysis and Design Methodology), a software development methodology that assures proper system design using modern design diagrams. Implementation and coding were carried out using Visual Studio 2010 together with other software tools. Results show a working system that provides two levels of security at the client side, using a fingerprint biometric scheme combined with the existing 4-digit PIN code, to guarantee the confidence of bank customers across developing countries.

Keywords: fingerprint biometrics, banking operations, verification, ATMs, PIN code

Procedia PDF Downloads 24
3514 Machine Learning-Driven Prediction of Cardiovascular Diseases: A Supervised Approach

Authors: Thota Sai Prakash, B. Yaswanth, Jhade Bhuvaneswar, Marreddy Divakar Reddy, Shyam Ji Gupta

Abstract:

Across the globe there are many chronic diseases, and heart disease stands out as one of the most perilous. Sadly, many lives are lost to this condition even though early intervention could prevent such tragedies; however, identifying heart disease in its initial stages is not easy. To address this challenge, we propose an automated system aimed at predicting the presence of heart disease using advanced techniques, empowering individuals with the knowledge needed to take proactive measures against this potentially fatal illness. Our approach involves meticulous data preprocessing and the development of predictive models utilizing classification algorithms such as Support Vector Machines (SVM), Decision Tree, and Random Forest. We assess the efficiency of every model using metrics such as accuracy, ensuring that we select the most reliable option, and we conduct thorough data analysis to reveal the importance of the different attributes. Among the models considered, Random Forest emerges as the standout performer, with an accuracy rate of 96.04% in our study.
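
The model-comparison step can be sketched briefly with scikit-learn; the public dataset below stands in for the paper's heart-disease data, purely for illustration.

```python
# Hedged sketch: training and comparing SVM, Decision Tree, and Random Forest.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {"SVM": SVC(), "Decision Tree": DecisionTreeClassifier(random_state=42),
          "Random Forest": RandomForestClassifier(random_state=42)}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy = {acc:.4f}")
```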

Keywords: support vector machines, decision tree, random forest

Procedia PDF Downloads 22
3513 Credibility and Personal Social Media Use of Health Professionals: A Field Study

Authors: Abrar Al-Hasan

Abstract:

Objectives: There is ongoing discourse regarding the potential risks to health professionals' reputations and credibility arising from their personal social media use. However, the specific impacts on professional credibility and the health professional-client relationship remain largely unexplored. This study investigates the type and frequency of content posted by health professionals on their Instagram accounts and its influence on their credibility and the professional-client relationship. Methodology: In a controlled field study, participants reviewed randomly assigned mock Instagram profiles of health professionals. Mock profiles were constructed according to gender (female/male), social media usage (high/low), social media richness (high/low), with richness increasing from posts to stories to reels, and personal content (high/low). Participants then rated the profile owners' credibility on a visual analog scale. An analysis of variance compared these ratings, and mediation analyses assessed the influence of credibility ratings on participants' willingness to become clients of the mock health professional. Results: Results from 315 participants showed that profiles displaying high social media richness were perceived as more credible than those with lower richness, profiles with low social media usage were perceived as more credible than those with high usage, and profiles with high personal content were perceived as less credible than those with low personal content. Contributions: These findings provide initial evidence of the impact of health professionals' personal online disclosures on credibility and the health professional-client relationship. Understanding public perceptions of professionalism and credibility is essential for informing e-professionalism guidelines and promoting best practices in social media use among health professionals.

Keywords: credibility, consumer behavior, social media, media richness, healthcare professionals

Procedia PDF Downloads 25
3512 An Efficient Mitigation Plan to Encounter Various Vulnerabilities in Internet of Things Enterprises

Authors: Umesh Kumar Singh, Abhishek Raghuvanshi, Suyash Kumar Singh

Abstract:

As IoT networks gain popularity, they become more susceptible to security breaches, so it is crucial to analyze the IoT platform as a whole from the standpoint of core security concepts. The Internet of Things relies heavily on wireless networks, which are well known for being susceptible to a wide variety of attacks. This article analyzes many techniques that may be used to identify vulnerabilities in the software and hardware associated with the Internet of Things (IoT). In the current investigation, an experimental setup is built with server computers, client PCs, IoT development boards, sensors, and cloud subscriptions. Through network host scanning methods and vulnerability scanning tools, raw data relating to IoT-based applications and devices are collected. Shodan is used for scanning and for effective vulnerability discovery in IoT devices, as well as for penetration testing. The article presents an efficient mitigation plan for encountering vulnerabilities in the Internet of Things.
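
The Shodan-based discovery step can be sketched with the official `shodan` Python library; the API key and query below are placeholders, and error handling is minimal.

```python
# Hedged sketch: searching Shodan for potentially exposed IoT services.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"          # placeholder
api = shodan.Shodan(API_KEY)

try:
    # Example query for exposed MQTT brokers, a common IoT protocol.
    results = api.search("port:1883 mqtt")
    for match in results["matches"][:5]:
        print(match["ip_str"], match.get("org", "n/a"))
except shodan.APIError as e:
    print("Shodan error:", e)
```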

Keywords: internet of things, security, privacy, vulnerability identification, mitigation plan

Procedia PDF Downloads 23
3511 Adapted Intersection over Union: A Generalized Metric for Evaluating Unsupervised Classification Models

Authors: Prajwal Prakash Vasisht, Sharath Rajamurthy, Nishanth Dara

Abstract:

In a supervised machine learning approach, metrics such as precision, accuracy, and coverage can be calculated using ground truth labels to help in model tuning, evaluation, and selection. In an unsupervised setting, however, where the data have no ground truth, there are few interpretable metrics that can provide the same guidance. Our approach creates a framework that adapts the Intersection over Union metric, usually used to evaluate supervised learning models and referred to here as Adapted IoU, to the unsupervised domain by factoring in subject-matter expertise and intuition about the ideal output of the model. This metric essentially provides a scale on which to compare performance across numerous unsupervised models, or to tune hyper-parameters and compare different versions of the same model.
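
A minimal sketch of the adapted-IoU idea follows: each model cluster is compared against an expert-defined "ideal" set of items. The toy data and the aggregation choice (mean of best matches) are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch: IoU between model clusters and expert-defined ideal sets.
def iou(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def adapted_iou(clusters: list[set], expert_sets: list[set]) -> float:
    # For each expert set, take the best-matching cluster, then average.
    return sum(max(iou(e, c) for c in clusters) for e in expert_sets) / len(expert_sets)

clusters = [{1, 2, 3}, {4, 5}, {6}]
expert_sets = [{1, 2}, {4, 5, 6}]
print(round(adapted_iou(clusters, expert_sets), 3))  # 0.667
```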

Keywords: general metric, unsupervised learning, classification, intersection over union

Procedia PDF Downloads 28
3510 Detection of Cyberattacks on the Metaverse Based on First-Order Logic

Authors: Sulaiman Al Amro

Abstract:

There are currently considerable challenges concerning data security and privacy, particularly in relation to modern technologies. This includes the virtual world known as the Metaverse, a virtual space that integrates various technologies and is therefore susceptible to cyber threats such as malware, phishing, and identity theft. This has led recent studies to propose the development of Metaverse forensic frameworks and the integration of advanced technologies, including machine learning, for intrusion detection and security. In this context, the application of first-order logic offers a formal and systematic approach to defining the conditions of cyberattacks, thereby contributing to the development of effective detection mechanisms; formalizing the rules and patterns of cyber threats also has the potential to enhance the overall security posture of the Metaverse and, thus, the integrity and safety of this virtual environment. The current paper focuses on the primary actions employed by avatars in potential attacks, using Interval Temporal Logic (ITL) and behavior-based detection to detect an avatar's abnormal activities within the Metaverse. The proposed framework attained an accuracy of 92.307%, with the experimental results demonstrating the efficacy of ITL, including its superior performance in addressing the threats posed by avatars within the Metaverse domain.

Keywords: security, privacy, metaverse, cyberattacks, detection, first-order logic

Procedia PDF Downloads 26
3509 TimeTune: Personalized Study Plans Generation with Google Calendar Integration

Authors: Chevon Fernando, Banuka Athuraliya

Abstract:

The purpose of this research is to provide a solution to students' time management, which often becomes an issue because students must balance study with personal commitments. "TimeTune," an AI-based study planner that shapes study timeframes by combining modern machine learning algorithms with calendar applications, is presented as the solution. The research focuses on the development of LSTM models that connect to the Google Calendar API to generate learning paths fitted to an individual student's daily life and study history. A key finding of this research is the success in building an LSTM model to predict optimal study times, which, integrated with real-time Google Calendar data, generates personalized, customized timetables automatically. The methodology encompasses Agile development practices and Object-Oriented Analysis and Design (OOAD) principles, focusing on user-centric design and iterative development. By adopting this method, students can significantly reduce the tension associated with poor study habits and time management. In conclusion, "TimeTune" represents an advanced step in personalized education technology: its application of ML algorithms and calendar integration is innovative and is steadily changing how students plan their time. By helping students maintain a balanced academic and personal life, the application promises to reduce the stress of managing their studies.

Keywords: personalized learning, study planner, time management, calendar integration

Procedia PDF Downloads 27