World Academy of Science, Engineering and Technology
[Computer and Information Engineering]
Online ISSN : 1307-6892
4378 Text Mining Algorithm for Large-Scale Social Media Data
Authors: Alexander A. Kharlamov, Maria Pilgun
Abstract:
This paper presents the validation results of a text mining algorithm applied to urban system development in the transportation sector, leveraging user-generated content from social media platforms. The study employed sentiment analysis, aggression detection, semantic network and core formation, associative search, associative network development, and word association analysis. Data collection was conducted using the Brand Analytics social media and media monitoring system. Data analysis and interpretation utilized TextAnalyst 2.32, GPT-3.5, GPT-4, and GPT-4o, while Tableau was used for interactive visualization and analytics. Social tension levels were assessed through calculated indices of social stress and well-being. Based on the findings, recommendations were proposed to improve project effectiveness by integrating residents' perspectives.
Keywords: Social media, text mining, neural network technologies, large-scale data.
4377 Using Cooperation Without Communication in a Multi-Agent Unpredictable Dynamic Real-Time Environment
Authors: Abbas Khosravi
Abstract:
This paper discusses the use of cooperation without communication in a multi-agent, unpredictable, dynamic real-time environment. The architecture of the Persian Gulf agent consists of three layers: a fixed-rule layer, a low-level layer, and a high-level layer, allowing for cooperation without direct communication. A scenario is presented to each agent in the form of a file, specifying each player's role and actions in the game. The scenario compensates for the absence of communication, improving team performance. Cooperation without communication enhances reliability and coordination among agents, leading to better results in challenging situations.
Keywords: Multi-agent systems, cooperation without communication, RoboCup, software engineering.
4376 SynKit: An Event-Driven and Scalable Microservices-Based Kitting System
Authors: Bruno Nascimento, Cristina Wanzeller, Jorge Silva, João A. Dias, André Barbosa, José Ribeiro
Abstract:
The increasing complexity of logistics operations stems from evolving business needs, such as the shift from mass production to mass customisation, which demands greater efficiency and flexibility. In response, Industry 4.0 and 5.0 technologies provide improved solutions to enhance operational agility and better meet market demands. The management of kitting zones, combined with the use of Autonomous Mobile Robots, faces challenges related to coordination, resource optimisation, and rapid response to customer demand fluctuations. Additionally, implementing Lean Manufacturing practices in this context must be carefully orchestrated by intelligent systems and human operators to maximise efficiency without sacrificing the agility required in an advanced production environment. This paper proposes and implements a microservices-based architecture integrating principles from Industry 4.0 and 5.0 with Lean Manufacturing practices. The architecture enhances communication and coordination between autonomous vehicles and kitting management systems, allowing more efficient resource utilization and increased scalability. The proposed architecture focuses on the modularity and flexibility of operations, enabling seamless adaptation to changing demands and efficient real-time resource allocation. Adopting this approach is expected to significantly improve logistics operations' efficiency and scalability by reducing waste and optimising resource use while improving responsiveness to demand changes. The implementation of this architecture provides a robust foundation for the continuous evolution of kitting management and process optimisation, designed to adapt to dynamic environments marked by rapid shifts in production demands and real-time decision-making. It also ensures seamless integration with automated systems, aligning with Industry 4.0 and 5.0 needs while reinforcing Lean Manufacturing principles.
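The event-driven coordination between the kitting management system and the autonomous mobile robots described above can be sketched as a minimal in-process publish/subscribe bus. The topic names, payload fields, and wiring below are illustrative assumptions; a production system would use a message broker such as Kafka or RabbitMQ rather than this toy class:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal publish/subscribe broker standing in for a message backbone."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        """Register a handler to be called for every event on a topic."""
        self._handlers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        """Deliver an event to all handlers subscribed to its topic."""
        for handler in self._handlers[topic]:
            handler(event)

# Hypothetical wiring: the kitting service reacts to AMR arrival events.
bus = EventBus()
picked = []
bus.subscribe("amr.arrived", lambda e: picked.append(e["kit_id"]))
bus.publish("amr.arrived", {"kit_id": "KIT-42", "station": "A3"})
```

Because services only share topic names, a new consumer (e.g. a monitoring microservice) can subscribe without any change to the publisher, which is the modularity property the abstract emphasises.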
Keywords: Microservices, event-driven, kitting, lean manufacturing, industry 4.0, industry 5.0.
4375 Digital Games as a Means of Cultural Communication and Heritage Tourism: A Study on Black Myth - Wukong
Authors: Kung Wong Lau
Abstract:
On August 20, 2024, the global launch of the Wukong game generated significant enthusiasm within the gaming community. This game provides gamers with an immersive experience and some digital twins (the location) that effectively bridge cultural heritage and contemporary gaming, thereby facilitating heritage tourism to some extent. Travel websites highlight locations featured in the Wukong game, encouraging visitors to explore these sites. However, this area remains underexplored in cultural and communication studies, both locally and internationally. This pilot study aims to explore the potential of in-game cultural communication in Wukong for promoting Chinese culture and heritage tourism. An exploratory research methodology was employed, utilizing a focus group of non-Chinese active gamers on an online discussion platform. The findings suggest that the use of digital twins as a means to facilitate cultural communication and heritage tourism for non-Chinese gamers shows promise. While this pilot study cannot generalize its findings due to the limited number of participants, the insights gained could inform further discussions on the influential factors of cultural communication through gaming.
Keywords: Digital game, game culture, heritage tourism, cultural communication, non-Chinese gamers.
4374 Software User Experience Enhancement through User-Centred Design and Co-design Approach
Authors: Shan Wang, Fahad Alhathal, Hari Krishhnaa Subramanian
Abstract:
User-centred design skills play an important role in crafting a positive and intuitive user experience for software applications. Embracing a user-centric design approach involves understanding the needs, preferences, and behaviours of the end-users throughout the design process. This mindset not only enhances the usability of the software but also fosters a deeper connection between the digital product and its users. This paper reports on a 6-month knowledge exchange collaboration project between an academic institution and an industry partner in 2023 in the UK; it aims to improve the user experience of a digital platform utilized for a knowledge management tool, to understand users' preferences for features, identify sources of frustration, and pinpoint areas for enhancement. This research implemented user-centred design through co-design workshops, one of the most effective methods for testing user onboarding experiences, as they involve the active participation of users in the design process. More specifically, in January 2023, we organized eight co-design workshops with a diverse group of 11 individuals. Throughout these co-design workshops, we accumulated a total of 11 hours of qualitative data in both video and audio formats. Subsequently, we conducted an analysis of user journeys, distilling common issues and potential areas for improvement into three insights. This analysis was pivotal in guiding the knowledge management software in prioritizing feature enhancements and design improvements. Employing a user-centred design thinking process, we developed a series of graphic design solutions in collaboration with the software management tool company. These solutions were targeted at refining onboarding user experiences, workplace interfaces, and interactive design. Some of these design solutions were translated into tangible interfaces for the knowledge management tool.
By actively involving users in the design process and valuing their input, developers can create products that are not only functional but also resonate with the end-users, ultimately leading to greater success in the competitive software landscape. In conclusion, this paper not only contributes insights into designing onboarding user experiences for software within a co-design approach but also presents key theories on leveraging the user-centred design process in software design to enhance overall user experiences.
Keywords: User experience design, user-centred design, co-design approach, knowledge management tool.
4373 Survey on Fiber Optic Deployment for Telecommunications Operators in Ghana: Coverage Gap, Recommendations and Research Directions
Authors: Francis Padi, Solomon Nunoo, John Kojo Annan
Abstract:
This paper presents a comprehensive survey on the deployment of fiber optic networks for telecommunications operators in Ghana. It addresses the challenges encountered by operators using microwave transmission systems for backhauling traffic and emphasizes the advantages of deploying fiber optic networks. The study delves into the coverage gap, provides recommendations, and outlines research directions to enhance the telecommunications infrastructure in Ghana. Additionally, it evaluates next-generation optical access technologies and architectures tailored to operators' needs. The paper also investigates current technological solutions and regulatory, technical, and economic dimensions related to sharing mobile telecommunication networks in emerging countries. Overall, this paper offers valuable insights into fiber optic network deployment for telecommunications operators in Ghana and suggests strategies to meet the increasing demand for data and mobile applications.
Keywords: Fiber optic deployment, coverage gap, telecommunications operator, network expansion strategies, coverage challenges.
4372 Creation of a Realistic Railway Simulator Developed on a 3D Graphic Game Engine Using a Numerical Computing Programming Environment
Authors: Kshitij Ansingkar, Yohei Hoshino, Liangliang Yang
Abstract:
Advances in algorithms related to autonomous systems have made it possible to research improvements in the accuracy of estimating a train’s location. This has the capability of increasing the throughput of a railway network without the need to create additional infrastructure. To develop such a system, the railway industry requires data to test sensor fusion theories or implement simultaneous localization and mapping (SLAM) algorithms. Although such simulation data and ground-truth datasets are available for testing vehicle automation algorithms, regulations and economic considerations have left the railway industry with a dearth of comparable datasets. Thus, there is a need for the creation of a simulation environment that can generate realistic synthetic datasets. This paper proposes (1) to leverage the capabilities of open-source 3D graphic rendering software to create a visualization of the environment; (2) to utilize open-source 3D geospatial data for accurate visualization; and (3) to integrate the graphic rendering software with a programming language and numerical computing platform. To develop such an integrated platform, this paper utilizes the computing platform’s advanced sensor models, such as LiDAR, camera, IMU, or GPS, and merges them with the 3D rendering of the game engine to generate high-quality synthetic data. Further, these datasets can be used to train railway models and improve the accuracy of a train’s location-estimation algorithm.
Keywords: 3D game engine, 3D geospatial data, dataset generation, railway simulator, sensor fusion, SLAM, simultaneous localization and mapping.
4371 Hallucination Detection and Mitigation in Chatbot: A Multi-Agent Approach with Llama2
Authors: Md. Ashfaqur Rahman
Abstract:
Hallucination in Large Language Models (LLMs) poses a significant challenge in chatbot reliability, especially in critical domains such as healthcare, finance, and education. This paper presents a multi-agent approach to hallucination detection and mitigation using Llama2, integrating retrieval-based verification, fact-checking mechanisms, and response correction strategies. The proposed framework consists of specialized agents, including a Web Retrieval Agent that fetches factual information from external sources (e.g., Wikipedia, DuckDuckGo, and Google Serper), Fact-Checking Agents that evaluate response accuracy using semantic similarity scoring, a Correction Agent that refines outputs when hallucinations are detected, and a Monitoring Agent that logs hallucination scores and calculates truthfulness metrics. Experimental results demonstrate that incorporating retrieval-augmented generation (RAG) and multi-agent verification significantly reduces hallucination rates. The study highlights the effectiveness of using Llama2 alongside external knowledge sources and multi-agent collaboration to improve chatbot reliability and factual accuracy. Future research will explore reinforcement learning for dynamic agent optimization and enhancing real-time fact verification methods.
Keywords: Hallucination detection, Llama2, multi-agent systems, retrieval-augmented generation, fact-checking, chatbot reliability, truth scoring, large language models, response correction, semantic similarity.
4370 Redefining Health Information Systems with Machine Learning: Harnessing the Potential of AI-Powered Data Fusion Ecosystems
Authors: Shohoni Mahabub
Abstract:
Health Information Systems (HIS) are essential to contemporary healthcare; nonetheless, they frequently encounter challenges such as data fragmentation, inefficiencies, and an absence of real-time analytics. The advent of Machine Learning (ML) and Artificial Intelligence (AI) offers revolutionary potential to address these difficulties via AI-driven data fusion ecosystems. These ecosystems integrate many health data sources, including Electronic Health Records (EHRs), wearable devices, and genetic data, with sophisticated ML techniques such as Natural Language Processing (NLP) and predictive analytics to produce actionable insights. Through the integration of strong data intake layers, secure interoperability protocols, and privacy-preserving models, these ecosystems provide individualized treatment, early illness diagnosis, and enhanced operational efficiency. This paradigm change enhances clinical decision-making and rectifies systemic inefficiencies in healthcare delivery. Nonetheless, adoption presents challenges such as data privacy concerns, ethical considerations, and scalability constraints. The study examines options such as federated learning for safe, decentralized data sharing, explainable AI for transparency, and cloud-based infrastructure for scalability to address these issues. These ecosystems aim to address health equity disparities, particularly in resource-limited environments, and improve public health surveillance, notably in pandemic response initiatives. This article emphasizes the revolutionary potential of AI-driven data fusion ecosystems in redefining HIS by providing an implementation roadmap and showcasing successful deployment case studies. The suggested method promotes a cooperative initiative among legislators, healthcare professionals, and technologists to establish a cohesive, efficient, and patient-centric healthcare model.
Keywords: AI-powered healthcare systems, data fusion ecosystem, predictive analytics, digital health interoperability.
4369 Integrating Optuna and Synthetic Data Generation for Optimized Medical Transcript Classification Using BioBERT
Authors: Sachi Nandan Mohanty, Shreya Sinha, Sweeti Sah, Shweta Sharma
Abstract:
The advancement of natural language processing has strongly influenced the field of medical transcript classification, providing a robust framework for enhancing the accuracy of clinical data processing. It has enormous potential to transform healthcare and improve people's livelihoods. This research focuses on improving the accuracy of medical transcript categorization using Bidirectional Encoder Representations from Transformers (BERT) and its specialized variants, including BioBERT, ClinicalBERT, SciBERT, and BlueBERT. The experimental work employs Optuna, an optimization framework, for hyperparameter tuning to identify the most effective variant, concluding that BioBERT yields the best performance. Furthermore, various optimizers, including Adam, RMSprop, and Layerwise adaptive large batch optimization (LAMB), were evaluated alongside BERT's default AdamW optimizer. The findings show that the LAMB optimizer achieves a performance that is equally good as AdamW's. Synthetic data generation techniques from Gretel were utilized to augment the dataset, expanding the original dataset from 5,000 to 10,000 rows. Subsequent evaluations demonstrated that the model maintained its performance with synthetic data, with the LAMB optimizer showing marginally better results. The enhanced dataset and optimized model configurations improved classification accuracy, showcasing the efficacy of the BioBERT variant and the LAMB optimizer, with accuracies of up to 98.2% and 90.8% on the original and combined datasets, respectively.
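Optuna-style hyperparameter tuning boils down to repeatedly sampling a configuration, scoring it with an objective, and keeping the best trial. The sketch below reproduces that loop with a plain random search over a toy objective, using only the standard library; the search space and the objective are illustrative assumptions, not the paper's actual BioBERT training setup:

```python
import random

def objective(params: dict) -> float:
    """Toy stand-in for a validation-accuracy objective; in the paper this
    would be a full BioBERT fine-tuning run scored on held-out transcripts.
    Here it simply peaks near lr=3e-5 and batch_size=16 (arbitrary choices)."""
    return 1.0 - abs(params["lr"] - 3e-5) * 1e4 - abs(params["batch_size"] - 16) * 0.01

def random_search(n_trials: int = 50, seed: int = 0) -> dict:
    """Optuna-like trial loop: sample hyperparameters, evaluate, keep the best."""
    rng = random.Random(seed)
    best = {"score": float("-inf"), "params": None}
    for _ in range(n_trials):
        params = {
            "lr": rng.uniform(1e-5, 1e-4),          # learning-rate range (assumed)
            "batch_size": rng.choice([8, 16, 32]),  # candidate batch sizes (assumed)
        }
        score = objective(params)
        if score > best["score"]:
            best = {"score": score, "params": params}
    return best
```

Optuna replaces the uniform sampling with smarter strategies (e.g. tree-structured Parzen estimators) and adds pruning of unpromising trials, but the objective/trial structure is the same.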
Keywords: BioBERT, clinical data, healthcare AI, transformer models.
4368 Augmented Reality Applications for Active Learning in Geometry: Enhancing Mathematical Intelligence at Phra Dabos School
Authors: Nattamon Srithammee, Ratchanikorn Chonchaiya
Abstract:
This study explores the impact of Augmented Reality (AR) technology on mathematics education, focusing on Area and Volume concepts at Phra Dabos School in Thailand. We developed a mobile AR application to present these mathematical concepts innovatively. Using a mixed-methods approach, we assessed the knowledge of 79 students before and after using the application. The results showed a significant improvement in students' understanding of Area and Volume, with average test scores increasing from 3.70 to 9.04 (p < 0.001, Cohen's d = 2.05). Students also reported increased engagement and satisfaction. Our findings suggest that AR technology can be a valuable tool in mathematics education, particularly for enhancing the understanding of abstract concepts like Area and Volume. This study contributes to research on educational technology in STEM education and provides insights for educators and educational technology developers.
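The reported effect size can be reproduced with the standard Cohen's d formula using a pooled standard deviation over pre- and post-test scores. The sketch below uses small illustrative score lists, not the study's data:

```python
import math
import statistics

def cohens_d(pre: list, post: list) -> float:
    """Cohen's d: mean difference divided by the pooled standard deviation
    (equal-sized groups assumed, sample variance with n-1 denominator)."""
    pooled_sd = math.sqrt((statistics.variance(pre) + statistics.variance(post)) / 2)
    return (statistics.mean(post) - statistics.mean(pre)) / pooled_sd
```

By convention, d around 0.8 is already a "large" effect, so the study's d = 2.05 (means 3.70 to 9.04) indicates a very large pre/post gain.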
Keywords: Augmented reality, mathematics education, area and volume, educational technology, STEM education.
4367 Exploring Cybersecurity and Phishing Attacks within Healthcare Institutions in Saudi Arabia: A Narrative Review
Authors: Ebtesam Shadadi, Rasha Ibrahim, Essam Ghadafi
Abstract:
Phishing poses a significant threat as a cybercrime by tricking end users into revealing their confidential and sensitive information. Attackers often manipulate victims to achieve their malicious goals. The increasing prevalence of phishing has led to extensive research on this issue, including studies focusing on phishing attempts in healthcare institutions in the Kingdom of Saudi Arabia. This paper explores the importance of analyzing phishing attacks, specifically focusing on those targeting the healthcare industry. The study delves into the tactics, obstacles, and remedies associated with these attacks, all while considering the implications for Saudi Vision 2030.
Keywords: Phishing, cybersecurity, cyber threat, social engineering, Vision 2030.
4366 Automation of AAA Game Development Using AI
Authors: Paul Toprac, Branden Heng, Harsheni Siddharthan, Allison Tseng, Sarah Abraham, Etienne Vouga
Abstract:
The goal of this project was to evaluate and document the capabilities and limitations of AI tools for empowering small teams to create high budget, high profile (AAA) 3D games typically developed by large studios. Two teams of novice game developers attempted to create two different games using AI and Unreal Engine 5.3. First, the teams evaluated 60 AI art, design, sound, and programming tools by considering their capability, ease of use, cost, and license restrictions. Then, the teams used a shortlist of 12 AI tools for game development. During this process, the following tools were found to be the most productive: ChatGPT 4.0 for both game and narrative concepting and documentation; Dall-E 3 and OpenArt for concept art; Beatoven for music drafting; ChatGPT 4.0 and Github Copilot for generating simple code and to complement human-made tutorials as an additional learning resource. While current generative AI may appear impressive at first glance, the assets they produce fall short of AAA industry standards. Generative AI tools are helpful when brainstorming ideas such as concept art and basic storylines, but they still cannot replace human input or creativity at this time. Regarding programming, AI can only effectively generate simple code and act as an additional learning resource. Thus, generative AI tools are at best tools to enhance developer productivity rather than as a system to replace developers.
Keywords: AAA Games, artificial intelligence, automation tools, game development.
4365 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
This paper focuses on the application of real-time semantic segmentation technology in complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes a Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to more efficiently extract features within limited resources, thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed in this paper to fully leverage the advantages of different feature layers. A Hybrid Attention Mechanism is also presented, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks, showcasing its superiority. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance, but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios.
By incorporating the Guided Image Reconstruction Module, dual-branch structure, and Hybrid Attention Mechanism, this paper presents an approach to real-time semantic segmentation tasks, which is expected to further advance the development of this field.
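One plausible reading of the Guided Image Reconstruction Module's resampling step is strided sub-sampling, which splits a high-resolution image into a set of low-resolution images while preserving every pixel (akin to a space-to-depth transform). The sketch below is an assumption about the mechanism, shown on plain nested lists rather than tensors:

```python
def resample_to_lowres(img, stride: int = 2):
    """Split one H x W image into stride*stride images of size
    (H/stride) x (W/stride) via strided pixel sampling. No pixel is
    discarded: the original image is recoverable from the set."""
    return [
        [row[dx::stride] for row in img[dy::stride]]
        for dy in range(stride)
        for dx in range(stride)
    ]
```

Each low-resolution image can then be processed with a quarter of the per-image cost, which is the computational-complexity reduction the abstract describes.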
Keywords: Hybrid attention mechanism, image reconstruction, real-time, road condition recognition.
4364 Generative AI: A Comparison of CTGAN and CTGAN with Gaussian Copula in Generating Synthetic Data with Synthetic Data Vault
Authors: Lakshmi Prayaga, Chandra Prayaga, Aaron Wade, Gopi Shankar Mallu, Harsha Satya Pola
Abstract:
Synthetic data generated by Generative Adversarial Networks and Autoencoders are becoming more common to combat the problem of insufficient data for research purposes. However, generating synthetic data is a tedious task requiring extensive mathematical and programming background. Open-source platforms such as the Synthetic Data Vault (SDV) and MOSTLY AI offer user-friendly environments that allow non-technical professionals to generate synthetic data to augment existing data for further analysis. The SDV also provides additions to the generic Generative Adversarial Network (GAN), such as the Gaussian copula. We present the results from two synthetic data sets generated by the SDV, one using the Conditional Tabular Generative Adversarial Network (CTGAN) and one using CTGAN with a Gaussian copula, and report the findings. The results indicate that the Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) values for the data generated with the added Gaussian copula layer are much higher than those for the data generated by CTGAN alone.
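The AUC comparison underlying results like these can be computed directly from its pairwise definition: the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, with ties counted as one half. A minimal sketch:

```python
def roc_auc(scores, labels):
    """AUC via the pairwise definition: P(score_pos > score_neg),
    ties counted 0.5. Labels are 1 (positive) or 0 (negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Library implementations such as scikit-learn's `roc_auc_score` compute the same quantity more efficiently via sorting, but the pairwise form makes the metric's meaning explicit.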
Keywords: Synthetic data generation, Generative Adversarial Networks, GANs, Conditional Tabular GAN, CTGAN, Gaussian copula.
4363 Distinct Method to Measure the Quality of 2D Image Compression Techniques
Authors: Mohammed H. Rasheed, Hussein Nadhem Fadhel, Mohammed M. Siddeq
Abstract:
In this paper, we present tools for evaluating image quality that effectively align with human perception, emphasizing their usefulness in assessing the visual quality of images. These tools offer quantitative metrics to facilitate the comparison of various image compression algorithms. Specifically, we propose two metrics designed to measure the quality of decompressed images. These metrics utilize combined data (CD) derived from both the original and decompressed images to deliver accurate assessments. By comparing the results of our proposed metrics with widely used standards such as Peak Signal-to-Noise Ratio (PSNR) and Root Mean Square Error (RMSE), we demonstrate that our approach provides a closer match to human visual perception of image quality. This alignment underscores the practical application of the proposed metrics in scenarios requiring subjective evaluation accuracy.
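The two reference baselines, RMSE and PSNR, are simple to state explicitly. A minimal sketch over flattened pixel arrays, assuming an 8-bit peak value of 255:

```python
import math

def rmse(a, b):
    """Root Mean Square Error between two equal-length pixel arrays."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak: float = 255.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical images."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20.0 * math.log10(peak / e)
```

Both metrics treat every pixel error equally, which is precisely why they can diverge from human perception and why perceptually-weighted metrics such as the ones proposed here are of interest.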
Keywords: RMSE, Root Mean Square Error, PSNR, Peak Signal-to-Noise Ratio, image quality metrics, image compression.
4362 On the Resilience of Operational Technology Devices in Penetration Tests
Authors: Florian Kessels, Niklas Reitz, Marko Schuba
Abstract:
Operational technology (OT) controls physical processes in critical infrastructures and economically important industries. With the convergence of OT with classical information technology (IT), rising cybercrime worldwide and the increasingly difficult geopolitical situation, the risks of OT infrastructures being attacked are growing. Classical penetration testing, in which testers take on the role of an attacker, has so far found little acceptance in the OT sector - the risk that a penetration test could do more harm than good seems too great. This paper examines the resilience of various OT systems using typical penetration test tools. It is shown that such a test certainly involves risks, but is also feasible in OT if a cautious approach is taken. Therefore, OT penetration testing should be considered as a tool to improve the cyber security of critical infrastructures.
Keywords: Penetration testing, operational technology, industrial control systems, operational technology security.
4361 Redefining “Infrastructure as Code” Orchestration Using AI
Authors: Georges Bou Ghantous
Abstract:
This research delves into the transformative impact of Artificial Intelligence (AI) on Infrastructure as Code (IaC) practices, specifically focusing on the redefinition of infrastructure orchestration. By harnessing AI technologies such as machine learning algorithms and predictive analytics, organizations can achieve unprecedented levels of efficiency and optimization in managing their infrastructure resources. AI-driven IaC introduces proactive decision-making through predictive insights, enabling organizations to anticipate and address potential issues before they arise. Dynamic resource scaling, facilitated by AI, ensures that infrastructure resources can seamlessly adapt to fluctuating workloads and changing business requirements. Through case studies and best practices, this paper sheds light on the tangible benefits and challenges associated with AI-driven IaC transformation, providing valuable insights for organizations navigating the evolving landscape of digital infrastructure management.
Keywords: Artificial intelligence, AI, infrastructure as code, IaC, efficiency optimization, predictive insights, dynamic resource scaling, proactive decision-making.
4360 Reinforcement Learning for Self Driving Racing Car Games
Authors: Adam Beaunoyer, Cory Beaunoyer, Mohammed Elmorsy, Hanan Saleh
Abstract:
This research aims to create a reinforcement learning agent capable of racing in challenging simulated environments with a low collision count. We present a reinforcement learning agent that can navigate challenging tracks using both a Deep Q-Network (DQN) and a Soft Actor-Critic (SAC) method. A challenging track includes curves, jumps, and varying road widths throughout. Using open-source code on Github, the environment used in this research is based on the 1995 racing game WipeOut. The proposed reinforcement learning agent can navigate challenging tracks rapidly while maintaining low racing completion time and collision count. The results show that the SAC model outperforms the DQN model by a large margin. We also propose an alternative multiple-car model that can navigate the track without colliding with other vehicles on the track. The SAC model forms the basis of the multiple-car model, which completes laps more quickly than the single-car model but collides with the track wall more often.
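The value-update idea underlying the DQN agent can be shown in its tabular form, where the neural network is replaced by a lookup table. The state and action names below are illustrative assumptions, not the paper's state representation:

```python
from collections import defaultdict

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One temporal-difference update: move Q(s, a) toward the target
    r + gamma * max_a' Q(s', a'). DQN approximates the same target with
    a neural network instead of a table."""
    best_next = max(q[next_state].values()) if q[next_state] else 0.0
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Q-table mapping state -> {action: value}; hypothetical racing states.
q = defaultdict(lambda: defaultdict(float))
q_update(q, state="curve", action="brake", reward=1.0, next_state="straight")
```

SAC differs by learning a stochastic policy alongside soft (entropy-regularized) value functions, which is better suited to the continuous steering and throttle actions of a racing game and is consistent with it outperforming DQN here.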
Keywords: Reinforcement learning, soft actor-critic, Deep Q-Network, self-driving cars, artificial intelligence, gaming.
4359 “Bring Your Own Device” Security Model in a Financial Institution of South Africa
Authors: Michael Nthabiseng Moeti, Makhulu Relebogile Langa, Joey Jansen van Vuuren
Abstract:
This paper examines the utilization of personal electronic devices like laptops, tablets, and smartphones for professional duties within a financial organization. This phenomenon is known as bring your own device (BYOD). BYOD accords employees the freedom to use their personal devices to access corporate resources from anywhere in the world with Internet access. BYOD arrangements introduce significant security risks for both organizations and users. These setups change the threat landscape for enterprises and demand unique security strategies, as conventional tools tailored for safeguarding managed devices fall short in adequately protecting enterprise assets without active user cooperation. This paper applies protection motivation theory (PMT) to highlight behavioral risks from BYOD users that may impact the security of financial institutions. Thematic analysis was applied to gain a comprehensive understanding of how users perceive this phenomenon. These findings demonstrate that the existence of a security policy does not ensure that all employees will take measures to protect their personal devices. Active promotion of BYOD security policies is crucial for financial institution employees and management. This paper developed a BYOD security model that is useful for understanding compliant behaviors. Given that BYOD security is becoming a major concern across the financial sector, the paper recommends that future research expand the number of institutions from which data are collected.
Keywords: Bring your own device, information security, protection motivation theory, security risks, thematic analysis.
4358 Cybersecurity and Children: Ensuring Online Safety
Authors: H. Brodeur, B. Ferdousi
Abstract:
As children gain access to the Internet at a younger age, it is essential for those working with children to know the dangers of the Internet and how to protect children from them. This article explores the dangers the Internet poses to children, together with effective methods to combat child exploitation. It also calls on the institutions working with children, including schools, parents and guardians, and the government, to act to protect them. The paper analyses the current dangers children face online, examines practices previously implemented by various institutions, and suggests how these practices can be improved for the modern era.
Keywords: Cybercrime, cybersecurity, cyberbullying, cyberstalking, security awareness.
4357 Ensemble of Deep Convolutional Neural Network Architecture for Classifying the Source and Quality of Teff Cereal
Authors: Belayneh Matebie, Michael Melese
Abstract:
The study focuses on addressing the challenges in classifying and ensuring the quality of Eragrostis tef (Teff), the smallest cereal grain. Traditional classification methods are difficult to apply because of its small size and the similarity of its environmental characteristics. To overcome this, the current study employs a machine learning approach to develop a source and quality classification system for Teff cereal. Data are collected from various production areas in the Amhara regions, considering two grades of cereal (high and low quality) across eight classes. A total of 5,920 images are collected, with 740 images per class. Image enhancement techniques, including scaling, data augmentation, histogram equalization, and noise removal, are applied to preprocess the data. A Convolutional Neural Network (CNN) is then used to extract relevant features and reduce dimensionality. The dataset is split into 80% for training and 20% for testing. Different classifiers, including Fine-tuned Visual Geometry Group (FVGG16), Fine-tuned InceptionV3 (FINCV3), Quality and Source Classification of Teff Cereal (QSCTC), Ensemble Method for Quality and Source Classification of Teff Cereal (EMQSCTC), Support Vector Machine (SVM), and Random Forest (RF), are employed for classification, achieving accuracy rates ranging from 86.91% to 97.72%. The ensemble of FVGG16, FINCV3, and QSCTC using the Max-Voting approach outperforms the individual algorithms.
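The Max-Voting step the ensemble relies on can be sketched in a few lines: each base classifier casts one label per sample and the majority label wins. The model names in the comment match the abstract, but the class labels and predictions below are invented placeholders:

```python
# Hedged sketch of Max-Voting (hard voting): the majority label across base
# classifiers is taken as the ensemble's prediction for each sample.
from collections import Counter

def max_vote(predictions_per_model):
    """predictions_per_model: list of per-model label lists, one label per sample."""
    n_samples = len(predictions_per_model[0])
    voted = []
    for i in range(n_samples):
        votes = Counter(model[i] for model in predictions_per_model)
        voted.append(votes.most_common(1)[0][0])  # majority label for sample i
    return voted

# Three models (e.g. FVGG16, FINCV3, QSCTC) voting on two samples.
ensemble = max_vote([
    ["high_quality", "low_quality"],
    ["high_quality", "high_quality"],
    ["low_quality",  "low_quality"],
])
```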
Keywords: Teff, ensemble learning, Max-Voting, Convolutional Neural Network, Support Vector Machine, Random Forest.
4356 Existence of Rational Primitive Normal Pairs with Prescribed Norm and Trace
Authors: Soniya Takshak, R. K. Sharma
Abstract:
Let q be a prime power and n be a positive integer; Fq stands for the finite field of q elements, and Fqn denotes the extension of Fq of degree n. Also, F*q represents the multiplicative group of non-zero elements of Fq, and the generators of F*q are called primitive elements. A normal element of a finite field Fqn is an element α such that α together with all of its conjugates over Fq forms a basis for Fqn over Fq. Primitive normal elements have several applications in coding theory and cryptography, so establishing the existence of primitive normal elements under certain conditions is both theoretically important and a genuine problem. In this article, we provide a sufficient condition for the existence of a primitive normal element α in Fqn of prescribed primitive norm and non-zero trace over Fq such that f(α) is also primitive, where f(x) is a rational function of degree sum m over Fqn. In particular, for rational functions of degree sum 4 over Fqn, where Fq is the field of characteristic 11 and n is greater than or equal to 7, we demonstrate that there are only 3 exceptional pairs (q, n) for which such primitive normal elements may not exist. In general, we show that such elements always exist except for finitely many choices of (q, n). We use additive and multiplicative character sums as the main tools to arrive at our conclusion.
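The notion of a primitive element is concrete enough to check computationally in the simplest case. A minimal sketch for a prime field F_p only (the paper works in extensions Fqn, which would require polynomial arithmetic): g is primitive iff its multiplicative order is p − 1, equivalently g^((p−1)/r) ≠ 1 for every prime r dividing p − 1.

```python
# Hedged sketch: primitivity test over a prime field F_p (not the extension
# fields of the paper). An element g is primitive iff it generates F_p^*.

def prime_factors(n):
    """Set of prime divisors of n by trial division (fine for small n)."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def is_primitive(g, p):
    """g is primitive mod prime p iff g^((p-1)/r) != 1 for every prime r | p-1."""
    return all(pow(g, (p - 1) // r, p) != 1 for r in prime_factors(p - 1))
```

For example, 3 is primitive mod 7 (its powers cycle through all of F*7), while 2 is not, since 2^3 = 1 mod 7.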
Keywords: Finite field, primitive element, normal element, norm, trace, character.
4355 An Attentional Bi-Stream Sequence Learner for Credit Card Fraud Detection
Authors: Mohsen Hasirian, Amir Shahab Shahabi
Abstract:
Modern societies, marked by expansive Internet connectivity and the rise of e-commerce, are now integrated with digital platforms at an unprecedented level. The efficiency, speed, and accessibility of e-commerce have attracted a substantial consumer base. Against this backdrop, electronic banking has proliferated rapidly within the realm of online activities. However, this growth has inadvertently created an environment conducive to illicit activities, notably electronic payment fraud, posing a formidable challenge to electronic banking. Electronic fraud detection plays a pivotal role in upholding the integrity of electronic commerce and business transactions, particularly in the context of credit cards, which underscores the need for comprehensive research in this field. To this end, our study presents an Attentional Bi-Stream Sequence Learner (AttBiSeL) framework that leverages an attention mechanism and recurrent networks. By incorporating bidirectional recurrent layers, specifically bidirectional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers, the proposed model extracts past and future transaction sequences while accounting for the temporal flow of information in both directions. Moreover, the integration of an attention mechanism weights specific transactions to varying degrees in the output of the recurrent networks. The effectiveness of the proposed approach in automatic credit card fraud classification is evaluated on the European Cardholders' Fraud Dataset. Empirical results confirm that the hybrid architecture presented in this study yields higher accuracy than previous studies.
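The attention step described above reduces to softmax-weighted pooling of the recurrent hidden states, so that some transactions contribute more to the pooled representation than others. A minimal sketch with invented vectors and scores (the paper's scores would come from a learned layer):

```python
# Hedged sketch of attention pooling: softmax over per-timestep scores weights
# the hidden states of the recurrent streams. Vectors and scores are toy data.
import math

def attention_pool(hidden_states, scores):
    """hidden_states: list of equal-length vectors; scores: one scalar each."""
    exps = [math.exp(s - max(scores)) for s in scores]   # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]
    return weights, context

# Three transaction states; the second receives the highest attention score.
weights, context = attention_pool(
    [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
    [0.1, 2.0, 0.1],
)
```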
Keywords: Attention mechanism, credit card fraud, deep learning, recurrent neural network.
4354 API Security in Embedded and Open Finance
Authors: Andrew John Zeller, Artjoms Formulevics
Abstract:
Banking and financial services are rapidly transitioning from monolithic structures focused merely on their own financial offerings to integrated players in multiple customer journeys and supply chains. Banks themselves are refocusing on being liquidity providers and underwriters in these networks, while the general concept of "embeddedness" builds on readily available API (Application Programming Interface) architectures to flexibly deliver services to various requestors, e.g., online retailers that need finance and insurance products to better serve their customers. With this new flexibility come new requirements for enhanced cybersecurity: API structures are more decentralized and inherently prone to change. Unfortunately, this has not been comprehensively addressed in the literature. This paper tries to fill this gap by looking at security approaches and technologies relevant to the API architectures found in embedded finance. After presenting the research methodology applied and introducing the major bodies of knowledge involved, the paper discusses six dominant technology trends shaping high-level financial services architectures. Subsequently, embedded finance and the respective usage of API strategies are described. Building on this, security considerations for APIs in financial and insurance services are elaborated on before concluding with some ideas for possible further research.
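One widely used control for securing API calls between embedded-finance partners is request signing: client and server share a secret and the receiver recomputes an HMAC over the payload to detect tampering. This is a generic illustration, not a scheme prescribed by the paper; the key and payload are hypothetical:

```python
# Hedged sketch of HMAC-based API request signing using only the standard
# library. The shared secret and JSON payload below are invented examples.
import hashlib
import hmac

SECRET = b"demo-shared-secret"  # hypothetical; real keys come from a key store

def sign(payload: bytes) -> str:
    """Hex HMAC-SHA256 tag the client attaches to the request."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Server-side check; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(payload), signature)

tag = sign(b'{"amount": 100, "currency": "EUR"}')
ok = verify(b'{"amount": 100, "currency": "EUR"}', tag)
tampered = verify(b'{"amount": 999, "currency": "EUR"}', tag)
```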
Keywords: Embedded finance, embedded banking strategy, cybersecurity, API management, data security, IT management.
4353 Study of Deep Learning-Based Model for Recognizing Human Activities in IoT Applications
Authors: Tarunima Chatterjee, Pinaki Pratim Acharjya
Abstract:
The integration of advanced neural network-based human activity recognition (HAR) systems with Internet of Things (IoT) technology is progressing quickly. Such systems, which have important applications in fitness, healthcare, and smart home environments, use sensors and deep learning algorithms to detect and categorize human actions from sensor data accurately. This work presents an approach that combines multi-head CNNs with an attention mechanism, producing a detection rate of 95.4%. Traditional HAR systems are often imprecise and inefficient. The procedure comprises data collection, spectrogram image conversion, feature extraction, optimization, and classification. With its deep learning foundation, this HAR system has strong potential for real-time activity monitoring, especially in healthcare, where it may enhance safety and offer insight into user behaviour.
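The spectrogram-conversion step in the pipeline can be sketched as a short-time DFT: split the sensor signal into frames and take the magnitude spectrum of each frame. The frame size and the toy signal below are illustrative choices, not the paper's settings:

```python
# Hedged sketch of spectrogram computation: per-frame DFT magnitudes of a 1-D
# "sensor" signal. A real pipeline would add windowing, overlap, and an FFT.
import cmath

def dft_magnitudes(frame):
    """Magnitude of each DFT bin of one frame (naive O(n^2) DFT)."""
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(frame)))
            for k in range(n)]

def spectrogram(signal, frame_size):
    """Non-overlapping frames -> list of magnitude spectra (rows of the image)."""
    return [dft_magnitudes(signal[i:i + frame_size])
            for i in range(0, len(signal) - frame_size + 1, frame_size)]

# A constant signal: all energy lands in the DC bin of each frame.
spec = spectrogram([1.0] * 8, frame_size=4)
```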
Keywords: Deep learning, Human Activity Recognition, HAR, Internet of Things, IoT, Convolutional Neural Networks, CNNs, Long Short-Term Memory, LSTM, neural machine translation, NMT, Inertial Measurement Unit, IMU, Gated Recurrent Units, GRUs.
4352 Machine Learning Algorithms in Study of Student Performance Prediction in Virtual Learning Environment
Authors: Shilpa Patra, Pinaki Pratim Acharjya
Abstract:
One of the biggest challenges in education today is accurately forecasting student achievement. Identifying learners who require more support early on can have a substantial impact on their educational performance. The aim of this study is to develop a theoretical framework that forecasts the online learning outcomes of students in a virtual learning environment (VLE) using machine learning techniques, with the further goals of resolving the flaws in existing forecasting models and increasing accuracy.
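One of the techniques named in the study's keywords, k-nearest neighbours, can be sketched compactly: predict a student's outcome from the majority label of the k most similar students. The feature vectors and labels below are invented toy data, not the study's VLE dataset:

```python
# Hedged KNN sketch: majority vote among the k nearest training points.
# Toy features: (clicks per week, assignments submitted) -> pass/fail.
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); returns majority label of k nearest."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [((50, 5), "pass"), ((60, 6), "pass"), ((45, 4), "pass"),
         ((5, 0), "fail"), ((10, 1), "fail"), ((8, 0), "fail")]
prediction = knn_predict(train, (55, 5), k=3)
```

An engaged query student (55 clicks, 5 submissions) lands among the "pass" neighbours, so the vote predicts "pass".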
Keywords: Virtual Learning Environments, K-Nearest Neighbors, KNN, Random Forest, Extra Trees.
4351 High Resolution Image Generation Algorithm for Archaeology Drawings
Authors: X. Zeng, L. Cheng, Z. Li, X. Liu
Abstract:
To address the low accuracy and the susceptibility to cultural relic deterioration of current image generation algorithms when generating high-resolution archaeology drawings, this paper proposes an archaeology drawings generation algorithm based on a conditional generative adversarial network. An attention mechanism is added to the high-resolution image generation network used as the backbone, which enhances line feature extraction and improves the accuracy of line drawing generation. A dual-branch parallel architecture consisting of two backbone networks is implemented, where the semantic translation branch extracts semantic features from orthophotographs of cultural relics and the gradient screening branch extracts effective gradient features. Finally, a fusion fine-tuning module combines these two types of features to generate high-quality, high-resolution archaeology drawings. Experimental results on a self-constructed dataset of archaeology drawings of grotto temple statues show that the proposed algorithm outperforms current mainstream image generation algorithms in terms of pixel accuracy (PA), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR), and can be used to assist in producing archaeology drawings.
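The kind of gradient feature a gradient-screening branch starts from can be illustrated with a Sobel filter, a standard way to expose line structure via horizontal intensity gradients. This is a generic sketch, not the paper's screening method; the 5x5 "image" is a toy example:

```python
# Hedged sketch: horizontal Sobel response at one pixel, the classic gradient
# operator for detecting vertical line structure. Image values are invented.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def sobel_x_at(img, r, c):
    """Horizontal gradient at interior pixel (r, c) via 3x3 convolution."""
    return sum(SOBEL_X[i][j] * img[r - 1 + i][c - 1 + j]
               for i in range(3) for j in range(3))

# A vertical edge: dark left half (0), bright right half (9).
img = [[0, 0, 9, 9, 9] for _ in range(5)]
edge_response = sobel_x_at(img, 2, 2)  # centered on the edge -> large response
flat_response = sobel_x_at(img, 2, 3)  # inside the bright region -> zero
```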
Keywords: Archaeology drawings, digital heritage, image generation, deep learning.
4350 Artificial Intelligence in Management Simulators
Authors: Nuno Biga
Abstract:
Artificial Intelligence (AI) has the potential to transform management in a number of impactful ways. It allows machines to interpret information to find patterns in large volumes of data and learn from context analysis, optimize operations, make predictions sensitive to each specific situation and support data-based decision-making. The introduction of an “artificial brain” into the organization also allows it to learn from complex information and data provided by those who train it, namely its users. The “Mastering” Serious Game introduces the concept of a context-sensitive “Virtual Assistant” (VA), which provides users with useful suggestions for optimizing processes and creating value for stakeholders. The VA helps to identify in real time the bottleneck(s) in the operations system so that it is possible to act on them quickly, the resources that should be multi-skilled to make the system more efficient and in which specific processes it might be advantageous to partner with another team(s). The possible solutions are evaluated using the Key Performance Indicators (KPIs) considered in the Balanced Scorecard (BSC), allowing actions to be monitored to guide the (re)definition of future strategies. This paper is built on the BIGAMES© simulator and presents the conceptual AI model developed and demonstrated through a pilot project (BIG-AI). Each Virtual Assisted BIGAME is a management simulator developed by the author that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the actual outcome of different alternative management strategic actions. The pilot project developed incorporates results from 12 editions of the BIGAME A&E that took place between 2017 and 2022 at AESE Business School, based on the compilation of data that allows establishing causal relationships between decisions taken and results obtained. 
Systemic analysis and data interpretation are enhanced in Assisted-BIGAMES through a computer application that the players can use. The role of each team's VA is to guide the players to be more effective in their decision-making, providing recommendations based on AI methods. It is important to note that the VA's suggestions for action can be accepted or rejected by the coaches of each team, who must draw on their own experience and knowledge to support their decision-making. The "Serious Game Coordinator" is responsible for supporting the players, with whom he debates points of view that help make decision-making more robust. All inputs must be analyzed and evaluated by each team, which must add "Emotional Intelligence", an essential component missing from the machine learning process. The preliminary results obtained in "Mastering" show that the introduction of the VA allows for faster learning of the decision-making process.
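One simple way a VA could flag the bottleneck in real time, sketched under the standard operations-management assumption that the most heavily utilized process limits system throughput. The process names and rates below are invented, not taken from BIGAMES:

```python
# Hedged sketch of bottleneck identification: the process with the highest
# utilization (demand / capacity) constrains throughput. Toy data only.

def find_bottleneck(processes):
    """processes: dict name -> (demand per hour, capacity per hour)."""
    utilization = {name: demand / capacity
                   for name, (demand, capacity) in processes.items()}
    bottleneck = max(utilization, key=utilization.get)
    return bottleneck, utilization

bottleneck, util = find_bottleneck({
    "assembly":   (40, 50),   # 80% utilized
    "painting":   (40, 42),   # ~95% utilized -> limits throughput
    "inspection": (40, 80),   # 50% utilized
})
```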
Keywords: Artificial Intelligence, AI, Balanced Scorecard, Gamification, Key Performance Indicators, KPIs, Machine Learning, ML, Management Simulators, Serious Games, Virtual Assistant.
4349 Enhancing Email Security: A Multi-Layered Defense Strategy Approach and an AI-Powered Model for Identifying and Mitigating Phishing Attacks
Authors: Anastasios Papathanasiou, George Liontos, Athanasios Katsouras, Vasiliki Liagkou, Euripides Glavas
Abstract:
Email remains a crucial communication tool due to its efficiency, accessibility, and cost-effectiveness, enabling rapid information exchange across global networks. However, the global adoption of email has also made it a prime target for cyber threats, including phishing, malware, and Business Email Compromise (BEC) attacks, which exploit its integral role in personal and professional realms to perpetrate fraud and data breaches. To combat these threats, this research advocates a multi-layered defense strategy incorporating advanced technological tools such as anti-spam and anti-malware software, machine learning algorithms, and authentication protocols. Moreover, we developed an artificial intelligence model specifically designed to analyze email headers and assess their security status. This AI-driven model examines various components of email headers, such as "From" addresses, "Received" paths, and the integrity of SPF (Sender Policy Framework), DKIM (DomainKeys Identified Mail), and DMARC (Domain-based Message Authentication, Reporting and Conformance) records. Upon analysis, it generates comprehensive reports indicating whether an email is likely to be malicious or benign. This capability empowers users to identify potentially dangerous emails promptly, enhancing their ability to avoid phishing attacks, malware infections, and other cyber threats.
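The header checks described can be sketched with the standard library: parse a raw message and read the Authentication-Results header, where receiving servers record SPF/DKIM/DMARC verdicts. The sample message is fabricated, and real headers can carry several comment-laden Authentication-Results fields that need more careful parsing than this substring check:

```python
# Hedged sketch of email-header triage: parse a raw message and pull the
# SPF/DKIM/DMARC verdicts from Authentication-Results. Message is invented.
from email import message_from_string

RAW = """\
From: alice@example.com
Received: from mail.example.com (mail.example.com [203.0.113.5])
Authentication-Results: mx.example.net; spf=pass; dkim=pass; dmarc=fail
Subject: Quarterly invoice

Body text.
"""

msg = message_from_string(RAW)
auth = msg.get("Authentication-Results", "")
verdicts = {check: (f"{check}=pass" in auth)
            for check in ("spf", "dkim", "dmarc")}
suspicious = not all(verdicts.values())  # any failed check flags the message
```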
Keywords: Email security, artificial intelligence, header analysis, threat detection, phishing, Sender Policy Framework, DomainKeys Identified Mail, Domain-based Message Authentication, Reporting and Conformance.