World Academy of Science, Engineering and Technology
[Computer and Information Engineering]
Online ISSN: 1307-6892
3705 Analytical Study of CPU Scheduling Algorithms
Authors: Keshav Rathi, Aakriti Sharma, Vinayak R. Dinesh, Irfan Ramzan Parray
Abstract:
Scheduling is a basic operating system function, since practically all computer resources are scheduled before use. The CPU is one of the most important of these resources, and Central Processing Unit (CPU) scheduling is vital because it governs how the CPU transitions between processes. Because the processor is the most significant resource in a computer, scheduling it well allows the operating system to increase the computer's productivity. The objective of the operating system is to keep as many processes as possible ready to run at the same time in order to maximize CPU utilization. A highly efficient CPU scheduler therefore depends on high-quality scheduling algorithms that meet the scheduling objectives. In this paper, we review various fundamental CPU scheduling algorithms for a single CPU and show which algorithm is best suited to a particular situation.
Keywords: computer science, operating system, CPU scheduling, CPU algorithms
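To make the comparison concrete, the short Python sketch below contrasts two of the fundamental policies such reviews cover, First-Come, First-Served and Round Robin, by computing average waiting time for a small, hypothetical set of CPU bursts. It illustrates the general idea only and is not code from the paper.

```python
# Illustrative comparison of two classic single-CPU scheduling policies.
# Burst times and the quantum are hypothetical example values, not from the paper.

def fcfs_waiting_times(bursts):
    """First-Come, First-Served: each job waits for all jobs ahead of it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

def round_robin_waiting_times(bursts, quantum):
    """Round Robin: jobs take turns running for at most `quantum` time units."""
    remaining = list(bursts)
    waits = [0] * len(bursts)
    last_seen = [0] * len(bursts)   # time each job last left the CPU (or arrived)
    queue = list(range(len(bursts)))
    clock = 0
    while queue:
        i = queue.pop(0)
        waits[i] += clock - last_seen[i]
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        last_seen[i] = clock
        if remaining[i] > 0:
            queue.append(i)
    return waits

if __name__ == "__main__":
    bursts = [24, 3, 3]   # all jobs assumed to arrive at t=0
    for name, waits in [("FCFS", fcfs_waiting_times(bursts)),
                        ("RR q=4", round_robin_waiting_times(bursts, 4))]:
        print(f"{name}: waits={waits}, average={sum(waits) / len(waits):.2f}")
```

Running it shows the familiar trade-off: FCFS gives an average wait of 17 for this burst mix, while Round Robin with a quantum of 4 cuts it to about 5.67 at the cost of more context switches.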
Procedia PDF Downloads 3
3704 Brief Guide to Cloud-Based AI Prototyping: Key Insights from Selected Case Studies Using Google Cloud Platform
Authors: Kamellia Reshadi, Pranav Ragji, Theodoros Soldatos
Abstract:
Recent advancements in cloud computing and storage, along with rapid progress in artificial intelligence (AI), have transformed approaches to developing efficient, scalable applications. However, integrating AI with cloud computing poses challenges, as these fields are often disjointed, and many advancements remain difficult to access, obscured in complex documentation or scattered across research reports. For this reason, we share experiences from prototype projects combining these technologies. Specifically, we focus on Google Cloud Platform (GCP) functionalities and describe vision and speech activities applied to labeling, subtitling, and urban traffic flow tasks. We describe challenges, pricing, architecture, and other key features, considering the goal of real-time performance. We hope our demonstrations not only provide essential guidelines for using these functionalities but also encourage similar approaches.
Keywords: artificial intelligence, cloud computing, real-time applications, case studies, knowledge management, research and development, text labeling, video annotation, urban traffic analysis, public safety, prototyping, Google Cloud Platform
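As a flavor of the kind of vision functionality such case studies draw on, the sketch below labels a local image with the google-cloud-vision client library. The file name is a placeholder, the call pattern follows the library's standard quickstart as we understand it, and it is not the authors' own prototype code; configured GCP credentials are assumed.

```python
# Minimal image-labeling sketch with the google-cloud-vision client library.
# Assumes Google Cloud credentials are configured in the environment;
# "street_scene.jpg" is a hypothetical input file.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("street_scene.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the Vision API for label annotations and print label text with confidence.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```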
Procedia PDF Downloads 9
3703 Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering
Authors: R. Nandhini, Gaurab Mudbhari
Abstract:
Ranging from the field of health care to self-driving cars, machine learning and deep learning algorithms have revolutionized many fields through the proper utilization of images and visual-oriented data. Segmentation, regression, classification, clustering, and dimensionality reduction are some of the machine learning tasks that have helped machine learning and deep learning models become state-of-the-art for domains where images are the key datasets. Among these tasks, classification and clustering are essential but difficult because of the intricate and high-dimensional characteristics of image data. This study examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Because of the distinctive structural attributes present in images, conventional methods often fail to capture spatial patterns effectively, which has led to models that utilize more advanced architectures and attention mechanisms. In image classification, we investigate both CNNs and ViTs. CNNs, well known for their ability to detect spatial hierarchies, serve as one core model in our study. ViTs serve as the other core model, reflecting a modern classification approach based on a self-attention mechanism; this mechanism makes them more robust by allowing them to learn global dependencies in images without relying on convolutional layers. This paper evaluates the performance of these two architectures in terms of accuracy, precision, recall, and F1-score across different image datasets, analyzing their appropriateness for various categories of images. In the domain of clustering, we assess DEC, Variational Autoencoders (VAEs), and conventional clustering techniques such as k-means applied to embeddings derived from CNN models. DEC, a prominent model in the field of clustering, has gained the attention of many ML engineers because it combines feature learning and clustering into a single framework, with the main goal of improving clustering quality through better feature representation. VAEs, on the other hand, are well known for using latent embeddings to group similar images without requiring prior labels, by utilizing a probabilistic clustering approach.
Keywords: machine learning, deep learning, image classification, image clustering
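As one concrete instance of the clustering pipeline described above, CNN-derived embeddings fed to k-means, the sketch below uses a pretrained ResNet-18 from torchvision as a feature extractor and clusters the embeddings with scikit-learn. The image folder path and the number of clusters are assumptions for illustration, not details from the paper.

```python
# Cluster images by running k-means on embeddings from a pretrained CNN.
# "images/" (laid out in sub-folders as ImageFolder expects) and n_clusters=5
# are hypothetical choices for illustration.
import torch
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from sklearn.cluster import KMeans

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
dataset = ImageFolder("images/", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=False)

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classifier head, keep 512-d features
backbone.eval()

embeddings = []
with torch.no_grad():
    for batch, _ in loader:
        embeddings.append(backbone(batch))
embeddings = torch.cat(embeddings).numpy()

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(embeddings)
print(labels[:20])
```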
Procedia PDF Downloads 7
3702 Implementation of Digital Technologies in SMEs in Kazakhstan: A Pathway to Sustainable Development
Authors: Toibayeva Shara, Zainolda Fariza, Abylkhassenova Dina, Zholdybaev Baurzhan, Almassov Nurbek, Aldabergenov Ablay
Abstract:
The article explores the opportunities and challenges associated with the adoption of digital technologies and automation in small and medium-sized businesses (SMEs) in Kazakhstan to achieve the Sustainable Development Goals (SDGs). Key aspects such as improving production efficiency, reducing the carbon footprint, and using resources efficiently are discussed, as well as the challenges faced by companies, including limited access to finance and lack of knowledge about digital solutions. Based on an analysis of existing practices, recommendations are offered to improve digital infrastructure and create an enabling environment for SMEs to increase their competitiveness and adaptability in the face of global change. The introduction of innovative technologies is seen as an important step towards long-term sustainability and successful business development in Kazakhstan. The study was supported by grants from the Ministry of Science and Higher Education of the Republic of Kazakhstan (grant No. AP23488459) ‘Research and development of scientific and methodological foundations of an intelligent system of management of medium and small businesses in Kazakhstan’.
Keywords: small and medium-sized businesses, digitalization, automation, sustainable development, sustainable development goals, innovation, competitiveness
Procedia PDF Downloads 5
3701 Deepfake Detection for Compressed Media
Authors: Sushil Kumar Gupta, Atharva Joshi, Ayush Sonawale, Sachin Naik, Rajshree Khande
Abstract:
The use of artificially created videos and audio produced by deep learning is a major problem for the current media landscape, as it spreads misinformation and distrust. The objective of this work is therefore to build a reliable deepfake detection model using deep learning that helps detect forged videos accurately. In this work, CelebDF v1, one of the largest deepfake benchmark datasets in the literature, is adopted to train and test the proposed models. The data includes authentic and synthetic videos of high quality, therefore allowing an assessment of the model’s performance against realistic distortions.
Keywords: deepfake detection, CelebDF v1, convolutional neural network (CNN), Xception model, data augmentation, media manipulation
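Since the keywords point to an Xception-based CNN, the sketch below shows one common way such a detector is set up in Keras: an ImageNet-pretrained Xception backbone with a binary real/fake head trained on extracted video frames. The directory layout, image size, and training settings are illustrative assumptions, not the authors' configuration.

```python
# Binary real/fake frame classifier built on a pretrained Xception backbone.
# "frames/train" and "frames/val" are hypothetical folders of extracted video frames,
# with one sub-folder per class (real/, fake/).
import tensorflow as tf

IMG_SIZE = (299, 299)   # Xception's native input resolution

train_ds = tf.keras.utils.image_dataset_from_directory(
    "frames/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "frames/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

base = tf.keras.applications.Xception(include_top=False, weights="imagenet",
                                      input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False   # train only the new classification head first

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=3)
```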
Procedia PDF Downloads 7
3700 Plant Disease Detection Using Image Processing and Machine Learning
Authors: Sanskar, Abhinav Pal, Aryush Gupta, Sushil Kumar Mishra
Abstract:
One of the critical and tedious tasks in agricultural practice is the detection of diseases on vegetation. Agricultural production is very important to today's economy, yet plant diseases are common, so their early detection is important in agriculture. Automatic detection of such diseases at an early stage is useful because it reduces control efforts on large production farms. Using digital image processing and machine learning algorithms, this paper presents a method for plant disease detection. Detection of disease is performed on the different leaves of the plant. The proposed system for plant disease detection is simple and computationally efficient, requiring less time than learning-based approaches. The accuracy of detection for various plant and foliar diseases is calculated and presented in this paper.
Keywords: plant diseases, machine learning, image processing, deep learning
Procedia PDF Downloads 6
3699 A Contribution to Blockchain Privacy
Authors: Malika Yaici, Feriel Lalaoui, Lydia Belhoul
Abstract:
As a new distributed peer-to-peer (P2P) technology, blockchain has become a very broad field of research, addressing various challenges including privacy preservation, as is the case with all other technologies. In this work, a study of the existing solutions to the problems related to privacy in general, and in blockchains in particular, is performed. User anonymity and transaction confidentiality are the two main challenges for the protection of privacy in blockchains. Mixing mechanisms and cryptographic solutions respond to this problem but remain subject to attacks and suffer from shortcomings. Taking these imperfections and the synthesis of our study into account, we present a mixing model without trusted third parties, based on group signatures, which reinforces the anonymity of users and the confidentiality of transactions with minimal turnaround time and no mixing fees.
Keywords: anonymity, blockchain, mixing coins, privacy
Procedia PDF Downloads 6
3698 An Attentional Bi-Stream Sequence Learner (AttBiSeL) for Credit Card Fraud Detection
Authors: Amir Shahab Shahabi, Mohsen Hasirian
Abstract:
Modern societies, marked by expansive Internet connectivity and the rise of e-commerce, are now integrated with digital platforms at an unprecedented level. The efficiency, speed, and accessibility of e-commerce have garnered a substantial consumer base. Against this backdrop, electronic banking has undergone rapid proliferation within the realm of online activities. However, this growth has inadvertently given rise to an environment conducive to illicit activities, notably electronic payment fraud, posing a formidable challenge to the domain of electronic banking. A pivotal role in upholding the integrity of electronic commerce and business transactions is played by electronic fraud detection, particularly in the context of credit cards, which underscores the imperative of comprehensive research in this field. To this end, our study introduces an Attentional Bi-Stream Sequence Learner (AttBiSeL) framework that leverages attention mechanisms and recurrent networks. By incorporating bidirectional recurrent layers, specifically bidirectional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers, the proposed model adeptly extracts past and future transaction sequences while accounting for the temporal flow of information in both directions. Moreover, the integration of an attention mechanism accentuates specific transactions to varying degrees, as manifested in the output of the recurrent networks. The effectiveness of the proposed approach in automatic credit card fraud classification is evaluated on the European Cardholders' Fraud Dataset. Empirical results validate that the hybrid architectural paradigm presented in this study yields enhanced accuracy compared to previous studies.
Keywords: credit card fraud, deep learning, attention mechanism, recurrent neural networks
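To illustrate the bi-stream idea of bidirectional recurrent layers followed by an attention step, the Keras sketch below combines a Bi-LSTM stream and a Bi-GRU stream over a transaction sequence and weights their outputs with a simple learned attention layer. The layer sizes, sequence length, and feature count are illustrative assumptions rather than the architecture reported in the paper.

```python
# Sketch of a bi-stream recurrent fraud classifier with a simple attention step.
# SEQ_LEN and N_FEATURES are hypothetical; real values depend on the transaction data.
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, N_FEATURES = 30, 10

inputs = layers.Input(shape=(SEQ_LEN, N_FEATURES))

# Two bidirectional streams reading the transaction sequence in both directions.
lstm_stream = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inputs)
gru_stream = layers.Bidirectional(layers.GRU(64, return_sequences=True))(inputs)
merged = layers.Concatenate()([lstm_stream, gru_stream])

# Simple additive attention: score each time step, softmax over time, weighted sum.
scores = layers.Dense(1, activation="tanh")(merged)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([merged, weights])

outputs = layers.Dense(1, activation="sigmoid")(context)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.summary()
```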
Procedia PDF Downloads 12
3697 Deep Reinforcement Learning and Generative Adversarial Networks Approach to Thwart Intrusions and Adversarial Attacks
Authors: Fabrice Setephin Atedjio, Jean-Pierre Lienou, Frederica F. Nelson, Sachin S. Shetty
Abstract:
Malicious users exploit vulnerabilities in computer systems, significantly disrupting their performance and revealing the inadequacies of existing protective solutions. Even machine learning-based approaches, designed to ensure reliability, can be compromised by adversarial attacks that undermine their robustness. This paper addresses two critical aspects of enhancing model reliability. First, we focus on improving model performance and robustness against adversarial threats; to achieve this, we propose a strategy harnessing deep reinforcement learning. Second, we introduce an approach leveraging generative adversarial networks to counter adversarial attacks effectively. Our results demonstrate substantial improvements over previous works in the literature, with classifiers exhibiting enhanced accuracy in classification tasks, even in the presence of adversarial perturbations. These findings underscore the efficacy of the proposed model in mitigating intrusions and adversarial attacks within the machine learning landscape.
Keywords: machine learning, reliability, adversarial attacks, deep reinforcement learning, robustness
Procedia PDF Downloads 8
3696 Transformer-Driven Multi-Category Classification for an Automated Academic Strand Recommendation Framework
Authors: Ma Cecilia Siva
Abstract:
This study introduces a Bidirectional Encoder Representations from Transformers (BERT)-based machine learning model aimed at improving educational counseling by automating the process of recommending academic strands for students. The framework is designed to streamline and enhance the strand selection process by analyzing students' profiles and suggesting suitable academic paths based on their interests, strengths, and goals. Data was gathered from a sample of 200 grade 10 students, which included personal essays and survey responses relevant to strand alignment. After thorough preprocessing, the text data was tokenized, label-encoded, and input into a fine-tuned BERT model set up for multi-label classification. The model was optimized for balanced accuracy and computational efficiency, featuring a multi-category classification layer with sigmoid activation for independent strand predictions. Performance metrics showed an F1 score of 88%, indicating a well-balanced model with precision at 80% and recall at 100%, demonstrating its effectiveness in providing reliable recommendations while reducing irrelevant strand suggestions. To facilitate practical use, the final deployment phase created a recommendation framework that processes new student data through the trained model and generates personalized academic strand suggestions. This automated recommendation system presents a scalable solution for academic guidance, potentially enhancing student satisfaction and alignment with educational objectives. The study's findings indicate that expanding the data set, integrating additional features, and refining the model iteratively could improve the framework's accuracy and broaden its applicability in various educational contexts.
Keywords: tokenized, sigmoid activation, transformer, multi-category classification
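For readers who want to see what a sigmoid-activated multi-label BERT setup looks like in code, the sketch below configures such a classifier with the Hugging Face transformers library. The strand names, example text, and 0.5 threshold are illustrative assumptions, and the head is untrained here, so the scores are placeholders until fine-tuning; it is not the study's own model.

```python
# Multi-label BERT classifier with sigmoid outputs, one score per academic strand.
# The five strand names and the 0.5 threshold are hypothetical placeholders.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

STRANDS = ["STEM", "ABM", "HUMSS", "GAS", "TVL"]

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(STRANDS),
    problem_type="multi_label_classification",  # BCE-with-logits loss, sigmoid semantics
)

text = "I enjoy solving math problems and building small electronics projects."
batch = tokenizer(text, truncation=True, padding=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits          # untrained head: scores are illustrative only
probs = torch.sigmoid(logits).squeeze(0)

for strand, p in zip(STRANDS, probs.tolist()):
    print(f"{strand}: {'recommend' if p > 0.5 else 'skip'} ({p:.2f})")
```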
Procedia PDF Downloads 7
3695 Robust Recognition of Locomotion Patterns via Data-Driven Machine Learning in the Cloud Environment
Authors: Shinoy Vengaramkode Bhaskaran, Kaushik Sathupadi, Sandesh Achar
Abstract:
Human locomotion recognition is important in a variety of sectors, such as robotics, security, healthcare, fitness tracking and cloud computing. With the increasing pervasiveness of peripheral devices, particularly Inertial Measurement Unit (IMU) sensors, researchers have attempted to exploit these advancements in order to precisely and efficiently identify and categorize human activities. This research paper introduces a state-of-the-art methodology for the recognition of human locomotion patterns in a cloud environment. The methodology is based on a publicly available benchmark dataset. The investigation implements a denoising and windowing strategy to deal with the unprocessed data. Next, feature extraction is adopted to extract the main cues from the data, and the SelectKBest strategy is used to select the optimal features. Furthermore, state-of-the-art ML classifiers, including logistic regression, random forest, gradient boosting and SVM, are investigated to accomplish precise locomotion classification and to evaluate the performance of the system. Finally, a detailed comparative analysis of the results is presented to reveal the performance of the recognition models.
Keywords: artificial intelligence, cloud computing, IoT, human locomotion, gradient boosting, random forest, neural networks, body-worn sensors
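The sketch below mirrors the general recipe in the abstract, features selected with SelectKBest and fed to several scikit-learn classifiers, on synthetic stand-in data. The feature matrix, k value, and classifier settings are assumptions for illustration only, not the paper's configuration.

```python
# Feature selection with SelectKBest followed by a small bake-off of classifiers.
# X and y are synthetic stand-ins for windowed IMU features and activity labels.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=40, n_informative=12,
                           n_classes=4, random_state=0)

classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "svm": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    # Scale, keep the 15 most discriminative features, then classify.
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=15), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```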
Procedia PDF Downloads 9
3694 Impact of Information Technology Systems on the Recruitment Process in Morocco
Authors: Brahim Bellali, Fatima Bellali
Abstract:
The integration of information technology systems (ITS) into a company's human resources processes seems to be the appropriate solution to the problem of evolving and adapting its human resources management practices in order to be both more strategic and more efficient in terms of costs and service quality. In this context, the aim of this work is to study the impact of information technology systems (ITS) on the recruitment process. In this study, we targeted candidates who had been recruited using IT tools. The target population consists of 34 candidates based in Casablanca, Morocco. In order to collect the data, a questionnaire was drawn up. The survey is based on a data sheet and a questionnaire that is divided into several sections to make it more structured and comprehensible. The results show that the majority of respondents say that companies are making greater use of online CV libraries and social networks as digital solutions during the recruitment process. The results also show that 50% of candidates say that the use of digital tools by companies would not slow them down when applying for a job and that these IT tools improve manual recruitment processes, while 44.1% think that they facilitate recruitment without any human intervention. The majority of respondents (52.9%) think that social networks are the digital solutions most often used by recruiters in the sourcing phase. The constraints of digital recruitment encountered are the dehumanization of human resources (44.1%) and the limited interaction during remote interviews (44.1%), which leaves no room for informal exchanges. Digital recruitment can be a highly effective strategy for finding qualified candidates in a variety of fields. A few recommendations for optimizing the digital recruitment process are: (1) use online recruitment platforms such as LinkedIn, Twitter, and Facebook; (2) use applicant tracking systems (ATS); (3) develop a content marketing strategy.
Keywords: IT systems, recruitment, challenges, constraints
Procedia PDF Downloads 9
3693 Review of Speech Recognition Research on Low-Resource Languages
Authors: XuKe Cao
Abstract:
This paper reviews the current state of research on low-resource languages in the field of speech recognition, focusing on the challenges faced by low-resource language speech recognition, including the scarcity of data resources, the lack of linguistic resources, and the diversity of dialects and accents. The article reviews recent progress in low-resource language speech recognition, including techniques such as data augmentation, end-to-end models, transfer learning, and multi-task learning. Based on the challenges currently faced, the paper also provides an outlook on future research directions. Through these studies, it is expected that the performance of speech recognition for low-resource languages can be improved, promoting the widespread application and adoption of related technologies.
Keywords: low-resource languages, speech recognition, data augmentation techniques, NLP
Procedia PDF Downloads 10
3692 Employing Nudge as Artistic Strategy in Managing Lagos Waste Issues
Authors: Iranlade Festus Adeyem
Abstract:
This paper analyses the role played by the Nudge method as an artistic strategy in addressing the issues of Lagos waste management in Nigeria. As a Lagosian, the author drew on experiential knowledge of Lagos’ dirty environment caused by careless littering, especially in the Lagos Mainland community. Employing Nudge theory in creative waste recycling helps persuade Lagosians, through strategic sensitization, to carefully weigh their options rather than being compelled to act in a dictated direction. Empirical awareness of Lagos’ environment and creative, reflective experiences were useful in inspiring the identified communities to reuse, recycle and repurpose the waste they generate instead of dumping it indiscriminately. The repurposed waste used to ‘upcycle’ and ‘downcycle’ contemporary artworks was displayed to highlight single-use materials as improvised alternatives to conventional ones. The application of the Nudge concept therefore persuades Lagosians, Lagos artists and trainees to see waste as untapped material during the campaigns. Using the Nudge philosophy thus encourages Lagosians and creatives to use personal discretion in managing their generated waste, intervenes only minimally in Lagos’ waste objectives to help prevent the attendant health issues, and inspires improvisation with waste as an alternative to the scarce, imported and expensive art materials in Lagos City.
Keywords: improvisation, nudge, upcycle and downcycle, strategy
Procedia PDF Downloads 8
3691 Securing Online Voting With Blockchain and Smart Contracts
Authors: Anant Mehrotra, Krish Phagwani
Abstract:
Democratic voting is vital for any country, but current methods like ballot papers or EVMs have drawbacks, including transparency issues, low voter turnout, and security concerns. Blockchain technology offers a potential solution by providing a secure, decentralized, and transparent platform for e-voting. With features like immutability, security, and anonymity, blockchain combined with smart contracts can enhance trust and prevent vote tampering. This paper explores an Ethereum-based e-voting application using Solidity, showcasing a web app that prevents duplicate voting through a token-based system, while also discussing the advantages and limitations of blockchain in digital voting. Voting is a crucial component of democratic decision-making, yet current methods, like paper ballots, remain outdated and inefficient. This paper reviews blockchain-based voting systems, highlighting strategies and guidelines to create a comprehensive electronic voting system that leverages cryptographic techniques, such as zero-knowledge proofs, to enhance privacy. It addresses limitations of existing e-voting solutions, including cost, identity management, and scalability, and provides key insights for organizations looking to design their own blockchain-based voting systems.
Keywords: electronic voting, smart contracts, blockchain-based voting, security
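The web app described above prevents duplicate voting through a token-based system; the plain-Python sketch below illustrates that one idea in isolation, where each registered voter receives a single-use token that is consumed when a ballot is cast. It is a conceptual model only, not the Ethereum/Solidity contract presented in the paper.

```python
# Conceptual model of single-use voting tokens: one token per voter, spent on first use.
# In the paper this logic lives in an Ethereum smart contract; here it is plain Python.
import secrets

class TokenVoting:
    def __init__(self, candidates):
        self.tally = {c: 0 for c in candidates}
        self.unspent_tokens = set()

    def register_voter(self):
        """Issue a fresh single-use token to a newly registered voter."""
        token = secrets.token_hex(16)
        self.unspent_tokens.add(token)
        return token

    def cast_vote(self, token, candidate):
        """Accept a ballot only if the token exists and has not been spent."""
        if token not in self.unspent_tokens:
            raise ValueError("invalid or already-used token: duplicate vote rejected")
        self.unspent_tokens.remove(token)
        self.tally[candidate] += 1

election = TokenVoting(["Alice", "Bob"])
t = election.register_voter()
election.cast_vote(t, "Alice")        # first vote accepted
try:
    election.cast_vote(t, "Bob")      # second attempt with the same token fails
except ValueError as err:
    print(err)
print(election.tally)
```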
Procedia PDF Downloads 8
3690 Explaining the Acceptance and Adoption of Digital Technologies: Digital Government in Saudi Arabia
Authors: Mohammed Alhamed
Abstract:
This research examines the factors influencing the acceptance and adoption of digital technologies in Saudi Arabia’s government sector by focusing on government employees' attitudes toward digital transformation initiatives. As digital technologies increasingly integrate into public sectors worldwide, there is a need to enhance citizen empowerment and government-public interactions, as well as to understand their impact in unique socio-political contexts like Saudi Arabia. The study aims to explore user attitudes, identify the main challenges, and investigate factors that affect the intention to use digital applications in governmental settings. The study employs a mixed-methods approach by combining quantitative and qualitative data collection to provide a comprehensive view of digital government application adoption. Data was collected through two online surveys administered to 870 government employees and face-to-face semi-structured interviews with 24 participants. This dual approach allows for both statistical analysis and thematic exploration, which provides a deeper understanding of user behaviour, perceived benefits, challenges and attitudes toward these digital applications. Quantitative data were analyzed to identify significant variables influencing adoption, while qualitative responses were coded thematically to uncover recurring themes related to user trust, security, usability and socio-political influences. The results indicate that digital government applications are largely valued for their ability to increase efficiency and accessibility and streamline processes like online documentation and inter-departmental coordination. However, the study highlights that security, privacy, and confidentiality concerns constitute substantial barriers to adoption, with participants calling for stronger cybersecurity measures and data protection policies. Moreover, usability emerged as a key theme, with respondents emphasizing the importance of intuitive, user-friendly interfaces in encouraging adoption. Additionally, the study found that Saudi Arabia’s unique cultural and organizational dynamics impact acceptance levels, with factors like hierarchical structures and varying levels of digital literacy shaping user attitudes. A significant limitation of the study is its exclusive focus on government employees, which may limit the generalizability of the findings to other stakeholder groups, such as the general public. Despite this, the study offers valuable insights for policymakers and, in turn, suggests best practices and guidelines that could enhance the design and implementation of digital government projects. By addressing the identified barriers and leveraging the factors that drive adoption, the study underscores the potential for digital government initiatives to improve efficiency, transparency and responsiveness in Saudi Arabia's public sector. Furthermore, these findings may provide a roadmap for similar countries aiming to adopt digital government solutions within comparable socio-political and economic contexts.
Keywords: acceptance, adoption, digital technologies, digital government, Saudi Arabia
Procedia PDF Downloads 11
3689 Detecting Indigenous Languages: A System for Maya Text Profiling and Machine Learning Classification Techniques
Authors: Alejandro Molina-Villegas, Silvia Fernández-Sabido, Eduardo Mendoza-Vargas, Fátima Miranda-Pestaña
Abstract:
The automatic detection of indigenous languages in digital texts is essential to promote their inclusion in digital media. Underrepresented languages, such as Maya, are often excluded from language detection tools like Google’s language-detection library, LANGDETECT. This study addresses these limitations by developing a hybrid language detection solution that accurately distinguishes Maya (YUA) from Spanish (ES). Two strategies are employed: the first focuses on creating a profile for the Maya language within the LANGDETECT library, while the second involves training a Naive Bayes classification model with two categories, YUA and ES. The process includes comprehensive data preprocessing steps, such as cleaning, normalization, tokenization, and n-gram counting, applied to text samples collected from various sources, including articles from La Jornada Maya, a major newspaper in Mexico and the only media outlet that includes a Maya section. After the training phase, a portion of the data is used to create the YUA profile within LANGDETECT, which achieves an accuracy rate above 95% in identifying the Maya language during testing. Additionally, the Naive Bayes classifier, trained and tested on the same database, achieves an accuracy close to 98% in distinguishing between Maya and Spanish, with further validation through F1 score, recall, and logarithmic scoring, without signs of overfitting. This strategy, which combines the LANGDETECT profile with a Naive Bayes model, highlights an adaptable framework that can be extended to other underrepresented languages in future research. This fills a gap in Natural Language Processing and supports the preservation and revitalization of these languages.
Keywords: indigenous languages, language detection, Maya language, Naive Bayes classifier, natural language processing, low-resource languages
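The second strategy, a two-class Naive Bayes classifier over character n-grams, can be sketched with scikit-learn as below. The tiny inline sentences are placeholders standing in for the La Jornada Maya corpus, and the n-gram range is an assumption for illustration, not the study's setting.

```python
# Two-class (YUA vs ES) Naive Bayes classifier over character n-grams.
# The example sentences are placeholders for the real training corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "Le ba'ax ku yúuchul way Yucatáne'",       # Maya (YUA) placeholder
    "In k'aaba'e' Jorge",                      # Maya (YUA) placeholder
    "Hoy llueve mucho en la ciudad",           # Spanish (ES) placeholder
    "El periódico publica una sección maya",   # Spanish (ES) placeholder
]
labels = ["YUA", "YUA", "ES", "ES"]

model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3), lowercase=True),
    MultinomialNB(),
)
model.fit(texts, labels)

print(model.predict(["Bix a beel?", "Buenos días a todos"]))
```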
Procedia PDF Downloads 16
3688 Natural User Interface Adapter: Enabling Natural User Interface for Non-Natural User Interface Applications
Authors: Vijay Kumar Kolagani, Yingcai Xiao
Abstract:
Adoption of Natural User Interfaces (NUI) has been slow and limited. NUI devices like Microsoft’s Kinect and Ultraleap’s Leap Motion can only interact with a handful of applications that were specifically designed and implemented for them. A NUI device simply cannot be used to directly control the millions of applications that are not designed to take NUI input. The situation is similar to the adoption of color TV. In the early days of color TV, the broadcasting format was RGB, which was not viewable on black-and-white TVs. TV broadcasters were reluctant to produce color programs due to limited viewership, and TV viewers were reluctant to buy color TVs because there were limited programs to watch. Color TV’s breakthrough moment came after the adoption of the NTSC standard, which allowed color broadcasts to be compatible with the millions of existing black-and-white TVs. This research presents a framework for using NUI devices to control existing non-NUI applications without reprogramming them. The methodology is to create an adapter that converts input from NUI devices into input compatible with that generated by CLI (Command Line Input) and GUI (Graphical User Interface) devices. The CLI/GUI-compatible input is then sent to the active application through the operating system, just like any input from a CLI/GUI device, to control the non-NUI program the user is working with. A sample adapter has been created to convert input from Kinect into keyboard strokes, so the input from Kinect can be used to control any application that takes keyboard input, such as Microsoft’s PowerPoint. When users use the adapter to control their PowerPoint presentations, they free themselves from standing behind a computer to use its keyboard and can roam around in front of the audience, using hand gestures to control the PowerPoint. It is hoped that such adapters can accelerate the adoption of NUI devices.
Keywords: command line input, graphical user interface, human computer interaction, natural user interface, NUI adapter
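The adapter idea, translating recognized gestures into synthetic keystrokes that the operating system delivers to whatever application has focus, can be sketched as below. The gesture source is stubbed out, and the pynput library is an assumption standing in for whatever keystroke-injection mechanism the authors used with Kinect.

```python
# Sketch of a gesture-to-keystroke adapter: recognized gestures become synthetic
# key presses delivered to the focused application (e.g., PowerPoint slide control).
# recognize_gestures() is a stub; a real adapter would read events from a NUI device.
from pynput.keyboard import Controller, Key

keyboard = Controller()

GESTURE_TO_KEY = {
    "swipe_right": Key.right,   # next slide
    "swipe_left": Key.left,     # previous slide
    "raise_hand": Key.f5,       # start the slide show
}

def recognize_gestures():
    """Stand-in for a NUI gesture stream (Kinect, Leap Motion, ...)."""
    yield from ["raise_hand", "swipe_right", "swipe_right", "swipe_left"]

def forward(gesture):
    key = GESTURE_TO_KEY.get(gesture)
    if key is None:
        return
    keyboard.press(key)     # the OS routes the keystroke to the focused window
    keyboard.release(key)

for g in recognize_gestures():
    forward(g)
```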
Procedia PDF Downloads 14
3687 Parallel Coordinates on a Spiral Surface for Visualizing High-Dimensional Data
Authors: Chris Suma, Yingcai Xiao
Abstract:
This paper presents Parallel Coordinates on a Spiral Surface (PCoSS), a parallel coordinate based interactive visualization method for high-dimensional data, and a test implementation of the method. Plots generated by the test system are compared with those generated by XDAT, a software implementing traditional parallel coordinates. Traditional parallel coordinate plots can be cluttered when the number of data points is large or when the dimensionality of the data is high. PCoSS plots display multivariate data on a 3D spiral surface and allow users to see the whole picture of high-dimensional data with less cluttering. Taking advantage of the 3D display environment in PCoSS, users can further reduce cluttering by zooming into an axis of interest for a closer view or by moving vantage points and by reorienting the viewing angle to obtain a desired view of the plots.
Keywords: human computer interaction, parallel coordinates, spiral surface, visualization
Procedia PDF Downloads 11
3686 On the Resilience of Operational Technology Devices in Penetration Tests
Authors: Marko Schuba, Florian Kessels, Niklas Reitz
Abstract:
Operational technology (OT) controls physical processes in critical infrastructures and economically important industries. With the convergence of OT with classical information technology (IT), rising cybercrime worldwide and the increasingly difficult geopolitical situation, the risks of OT infrastructures being attacked are growing. Classical penetration testing, in which testers take on the role of an attacker, has so far found little acceptance in the OT sector - the risk that a penetration test could do more harm than good seems too great. This paper examines the resilience of various OT systems using typical penetration test tools. It is shown that such a test certainly involves risks, but is also feasible in OT if a cautious approach is taken. Therefore, OT penetration testing should be considered as a tool to improve the cyber security of critical infrastructures.
Keywords: penetration testing, OT, ICS, OT security
Procedia PDF Downloads 14
3685 Determining the Target Level of Knowledge of English as a Foreign Language in Higher Education
Authors: Zorana Z. Jurinjak, Nataša B. Lukić, Christos G. Alexopoulos
Abstract:
Although English as a foreign language has been a compulsory subject in almost all colleges and universities in Serbia for the last few decades, students entering the first year come with different levels of knowledge. This places an immense burden on teachers, who must decide not only which literature to use and how to conduct classes in heterogeneous groups, but also how to evaluate and assess progress. This paper aims to discuss the issue of determining the target level of knowledge of English as a foreign language in higher education in Serbia, given the great need to equalize these levels. The research was conducted at several colleges and universities where first-year students took a placement test, and we also carried out a review and comparison of the literature used in teaching English in those schools. We hope that this research will not only raise the awareness of those in charge of making curricula, but also that ways will be found to accommodate these differences in knowledge and establish clear assessment criteria.
Keywords: higher education, EFL, levels of knowledge, evaluation, assessment
Procedia PDF Downloads 11
3684 ALEF: An Enhanced Approach to Arabic-English Bilingual Translation
Authors: Abdul Muqsit Abbasi, Ibrahim Chhipa, Asad Anwer, Saad Farooq, Hassan Berry, Sonu Kumar, Sundar Ali, Muhammad Owais Mahmood, Areeb Ur Rehman, Bahram Baloch
Abstract:
Accurate translation between structurally diverse languages, such as Arabic and English, presents a critical challenge in natural language processing due to significant linguistic and cultural differences. This paper investigates the effectiveness of Facebook’s mBART model, fine-tuned specifically for sequence-to-sequence (seq2seq) translation tasks between Arabic and English, and enhanced through advanced refinement techniques. Our approach leverages the Alef Dataset, a meticulously curated parallel corpus spanning various domains to capture the linguistic richness, nuances, and contextual accuracy essential for high-quality translation. We further refine the model’s output using advanced language models such as GPT-3.5 and GPT-4, which improve fluency and coherence and correct grammatical errors in translated texts. The fine-tuned model demonstrates substantial improvements, achieving a BLEU score of 38.97, a METEOR score of 58.11, and a TER score of 56.33, surpassing widely used systems such as Google Translate. These results underscore the potential of mBART, combined with refinement strategies, to bridge the translation gap between Arabic and English, providing a reliable, context-aware machine translation solution that is robust across diverse linguistic contexts.
Keywords: natural language processing, machine translation, fine-tuning, Arabic-English translation, transformer models, seq2seq translation, translation evaluation metrics, cross-linguistic communication
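As a hedged sketch of the base setup before any fine-tuning or GPT-based refinement, the snippet below translates an Arabic sentence to English with the public multilingual mBART-50 checkpoint from Hugging Face and scores a toy hypothesis with sacreBLEU. The example sentence and reference are illustrative placeholders, and the ALEF fine-tuned weights are not used here, so this is not the authors' system.

```python
# Arabic-to-English translation with the public mBART-50 checkpoint, scored with sacreBLEU.
# The example sentence and reference are toy placeholders, not from the Alef Dataset.
import sacrebleu
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="ar_AR")
model = MBartForConditionalGeneration.from_pretrained(model_name)

arabic = "أحب قراءة الكتب في المساء."
batch = tokenizer(arabic, return_tensors="pt")
generated = model.generate(
    **batch, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"], max_length=60)
hypothesis = tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
print("MT output:", hypothesis)

# Corpus-level BLEU against a single toy reference translation.
reference = ["I love reading books in the evening."]
bleu = sacrebleu.corpus_bleu([hypothesis], [reference])
print(f"BLEU: {bleu.score:.2f}")
```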
Procedia PDF Downloads 7
3683 Evaluating Data Maturity in Riyadh's Nonprofit Sector: Insights Using the National Data Maturity Index (NDI)
Authors: Maryam Aloshan, Imam Mohammad Ibn Saud, Ahmad Khudair
Abstract:
This study assesses the data governance maturity of nonprofit organizations in Riyadh, Saudi Arabia, using the National Data Maturity Index (NDI) framework developed by the Saudi Data and Artificial Intelligence Authority (SDAIA). Employing a survey designed around the NDI model, data maturity levels were evaluated across 14 dimensions using a 5-point Likert scale. The results reveal a spectrum of maturity levels among the organizations surveyed: while some medium-sized associations reached the ‘Defined’ stage, others, including large associations, fell within the ‘Absence of Capabilities’ or ‘Building’ phases, with no organizations achieving the advanced ‘Established’ or ‘Pioneering’ levels. This variation suggests an emerging recognition of data governance but underscores the need for targeted interventions to bridge the maturity gap. The findings point to a significant opportunity to elevate data governance capabilities in Saudi nonprofits through customized capacity-building initiatives, including training, mentorship, and best practice sharing. This study contributes valuable insights into the digital transformation journey of the Saudi nonprofit sector, aligning with national goals for data-driven governance and organizational efficiency.
Keywords: nonprofit organizations, national data maturity index (NDI), Saudi Arabia, SDAIA, data governance, data maturity
Procedia PDF Downloads 14
3682 Hybrid CNN-SAR and Lee Filtering for Enhanced InSAR Phase Unwrapping and Coherence Optimization
Authors: Hadj Sahraoui Omar, Kebir Lahcen Wahib, Bennia Ahmed
Abstract:
Interferometric Synthetic Aperture Radar (InSAR) coherence is a crucial parameter for accurately monitoring ground deformation and environmental changes. However, coherence can be degraded by various factors such as temporal decorrelation, atmospheric disturbances, and geometric misalignments, limiting the reliability of InSAR measurements (Omar Hadj-Sahraoui et al., 2019). To address this challenge, we propose an innovative hybrid approach that combines artificial intelligence (AI) with advanced filtering techniques to optimize interferometric coherence in InSAR data. Specifically, we introduce a Convolutional Neural Network (CNN) integrated with the Lee filter to enhance the performance of radar interferometry. This hybrid method leverages the strength of CNNs to automatically identify and mitigate the primary sources of decorrelation, while the Lee filter effectively reduces speckle noise, improving the overall quality of interferograms. We develop a deep learning-based model trained on multi-temporal and multi-frequency SAR datasets, enabling it to predict coherence patterns and enhance low-coherence regions. This hybrid CNN-SAR with Lee filtering significantly reduces noise and phase unwrapping errors, leading to more precise deformation maps. Experimental results demonstrate that our approach improves coherence by up to 30% compared to traditional filtering techniques, making it a robust solution for challenging scenarios such as urban environments, vegetated areas, and rapidly changing landscapes. Our method has potential applications in geohazard monitoring, urban planning, and environmental studies, offering a new avenue for enhancing InSAR data reliability through AI-powered optimization combined with robust filtering techniques.
Keywords: CNN-SAR, Lee filter, hybrid optimization, coherence, InSAR phase unwrapping, speckle noise reduction
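Since the Lee filter is the classical speckle-reduction half of the hybrid approach, the sketch below gives a standard local-statistics implementation with NumPy and SciPy. The window size and the way the noise variance is estimated are common textbook choices, not parameters taken from the paper.

```python
# Classical Lee speckle filter: adaptively blend each pixel with its local mean,
# weighting by how much the local variance exceeds the estimated noise variance.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, window=7):
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=window)
    local_sq_mean = uniform_filter(img ** 2, size=window)
    local_var = local_sq_mean - local_mean ** 2

    noise_var = np.mean(local_var)                      # simple global noise estimate
    weight = local_var / (local_var + noise_var + 1e-12)
    return local_mean + weight * (img - local_mean)

# Toy demo on a synthetic amplitude image with multiplicative speckle.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.2, 1.0, 128), (128, 1))
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
filtered = lee_filter(speckled, window=7)
print("noisy std:", speckled.std().round(3), "-> filtered std:", filtered.std().round(3))
```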
Procedia PDF Downloads 7
3681 TerraEnhance: High-Resolution Digital Elevation Model Generation using GANs
Authors: Siddharth Sarma, Ayush Majumdar, Nidhi Sabu, Mufaddal Jiruwaala, Shilpa Paygude
Abstract:
Digital Elevation Models (DEMs) are digital representations of the Earth’s topography, which include information about the elevation, slope, aspect, and other terrain attributes. DEMs play a crucial role in various applications, including terrain analysis, urban planning, and environmental modeling. In this paper, TerraEnhance is proposed, a distinct approach for high-resolution DEM generation using Generative Adversarial Networks (GANs) combined with Real-ESRGANs. By learning from a dataset of low-resolution DEMs, the GANs are trained to upscale the data by 10 times, resulting in significantly enhanced DEMs with improved resolution and finer details. The integration of Real-ESRGANs further enhances visual quality, leading to more accurate representations of the terrain. A post-processing layer is introduced, employing high-pass filtering to refine the generated DEMs, preserving important details while reducing noise and artifacts. The results demonstrate that TerraEnhance outperforms existing methods, producing high-fidelity DEMs with intricate terrain features and exceptional accuracy. These advancements make TerraEnhance suitable for various applications, such as terrain analysis and precise environmental modeling.
Keywords: DEM, ESRGAN, image upscaling, super resolution, computer vision
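The post-processing idea described above, high-pass filtering the upscaled DEM to keep fine terrain detail while suppressing low-frequency artifacts, can be sketched as follows with SciPy. The Gaussian sigma and blending weight are illustrative assumptions rather than the values used in TerraEnhance.

```python
# High-pass refinement of an upscaled DEM: subtract a Gaussian-blurred (low-pass)
# copy to isolate fine terrain detail, then add a weighted amount of it back.
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_refine(dem, sigma=3.0, strength=0.5):
    low_pass = gaussian_filter(dem, sigma=sigma)
    high_pass = dem - low_pass          # ridges, drainage lines, other fine detail
    return dem + strength * high_pass

# Toy DEM: a smooth slope with a small ridge running across it.
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
dem = 500 + 200 * x + 15 * np.exp(-((y - 0.5) ** 2) / 0.001)
refined = highpass_refine(dem)
print("detail energy before/after:",
      np.abs(dem - gaussian_filter(dem, 3)).mean().round(3),
      np.abs(refined - gaussian_filter(refined, 3)).mean().round(3))
```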
Procedia PDF Downloads 7
3680 Crop Recommendation System Using Machine Learning
Authors: Prathik Ranka, Sridhar K, Vasanth Daniel, Mithun Shankar
Abstract:
With growing global food needs and climate uncertainties, informed crop choices are critical for increasing agricultural productivity. Here we propose a machine learning-based crop recommendation system to help farmers choose the most suitable crops for their geographical regions and soil properties. We deploy algorithms such as Decision Trees, Random Forests and Support Vector Machines on a broad dataset that consists of climatic factors, soil characteristics and historical crop yields to predict the best choice of crops. The approach first preprocesses the data, assessing it for missing values and then applying transformation and normalization, since such models cannot work reliably with missing or unscaled data. Model effectiveness is measured through performance metrics like accuracy, precision and recall. The resultant app provides a farmer-friendly dashboard through which farmers can enter their local conditions and receive individualized crop suggestions.
Keywords: crop recommendation, precision agriculture, crop, machine learning
Procedia PDF Downloads 14
3679 Deep Learning Approaches for Accurate Detection of Epileptic Seizures from Electroencephalogram Data
Authors: Ramzi Rihane, Yassine Benayed
Abstract:
Epilepsy is a chronic neurological disorder characterized by recurrent, unprovoked seizures resulting from abnormal electrical activity in the brain. Timely and accurate detection of these seizures is essential for improving patient care. In this study, we leverage the University of Bonn open-source EEG dataset and employ advanced deep-learning techniques to automate the detection of epileptic seizures. By extracting key features from both the time and frequency domains, as well as spectrogram features, we enhance the performance of various deep learning models. Our investigation includes architectures such as Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), 1D Convolutional Neural Networks (1D-CNN), and hybrid CNN-LSTM and CNN-BiLSTM models. The models achieved impressive accuracies: LSTM (98.52%), Bi-LSTM (98.61%), CNN-LSTM (98.91%), CNN-BiLSTM (98.83%), and CNN (98.73%). Additionally, we utilized a data augmentation technique called SMOTE, which yielded the following results: CNN (97.36%), LSTM (97.01%), Bi-LSTM (97.23%), CNN-LSTM (97.45%), and CNN-BiLSTM (97.34%). These findings demonstrate the effectiveness of deep learning in capturing complex patterns in EEG signals, providing a reliable and scalable solution for real-time seizure detection in clinical environments.
Keywords: electroencephalogram, epileptic seizure, deep learning, LSTM, CNN, Bi-LSTM, seizure detection
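To show how the class-balancing step and a recurrent classifier fit together, the sketch below applies SMOTE from imbalanced-learn to flattened EEG windows and trains a small Keras LSTM on the rebalanced data. The synthetic signals, window length, and layer sizes are placeholders, not the configuration reported in the study.

```python
# SMOTE oversampling of an imbalanced EEG dataset followed by a small LSTM classifier.
# The signals here are synthetic placeholders shaped like (windows, timesteps, channels).
import numpy as np
from imblearn.over_sampling import SMOTE
import tensorflow as tf

TIMESTEPS, CHANNELS = 178, 1
rng = np.random.default_rng(0)
X = rng.normal(size=(500, TIMESTEPS, CHANNELS)).astype("float32")
y = (rng.random(500) < 0.15).astype("int32")   # roughly 15% "seizure" windows

# SMOTE works on 2-D feature matrices, so flatten, resample, then reshape back.
X_flat = X.reshape(len(X), -1)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_flat, y)
X_res = X_res.reshape(-1, TIMESTEPS, CHANNELS)
print("class counts after SMOTE:", np.bincount(y_res))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, CHANNELS)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_res, y_res, epochs=2, batch_size=32, validation_split=0.2)
```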
Procedia PDF Downloads 12
3678 Optimizing Machine Learning Through Python Based Image Processing Techniques
Authors: Srinidhi. A, Naveed Ahmed, Twinkle Hareendran, Vriksha Prakash
Abstract:
This work reviews some of the advanced image processing techniques for deep learning applications. Object detection by template matching, image denoising, edge detection, and super-resolution modelling are but a few of these tasks. The paper examines them in detail, given that such tasks are crucial preprocessing steps that increase the quality and usability of image datasets in subsequent deep learning tasks. We review methods for the assessment of image quality, more specifically sharpness, which is crucial for ensuring robust model performance. Further, we discuss the development of deep learning models for facial emotion detection, age classification, and gender classification, showing how the preprocessing techniques are interrelated with model performance. Conclusions from this study pinpoint best practices in the preparation of image datasets, targeting the best trade-off between computational efficiency and retention of the image features critical for effective training of deep learning models.
Keywords: image processing, machine learning applications, template matching, emotion detection
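Two of the preprocessing steps named above, template matching and a sharpness check, are easy to make concrete with OpenCV, as in the sketch below. The file names and the sharpness threshold are hypothetical, and the variance-of-Laplacian score is one common sharpness proxy rather than the specific metric used in this work.

```python
# Template matching plus a variance-of-Laplacian sharpness check with OpenCV.
# "scene.png" and "template.png" are hypothetical inputs; 100.0 is an arbitrary threshold.
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Locate the template: the maximum of the normalized cross-correlation map.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print(f"best match at {max_loc} with score {max_val:.2f}")

# Sharpness proxy: variance of the Laplacian (low values suggest a blurry image).
sharpness = cv2.Laplacian(scene, cv2.CV_64F).var()
print(f"sharpness score: {sharpness:.1f} ->",
      "keep" if sharpness > 100.0 else "flag as blurry")
```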
Procedia PDF Downloads 13
3677 The Intersection of Artificial Intelligence and Mathematics
Authors: Mitat Uysal, Aynur Uysal
Abstract:
Artificial Intelligence (AI) is fundamentally driven by mathematics, with many of its core algorithms rooted in mathematical principles such as linear algebra, probability theory, calculus, and optimization techniques. This paper explores the deep connection between AI and mathematics, highlighting the role of mathematical concepts in key AI techniques like machine learning, neural networks, and optimization. To demonstrate this connection, a case study involving the implementation of a neural network using Python is presented. This practical example illustrates the essential role that mathematics plays in training a model and solving real-world problems.
Keywords: AI, mathematics, machine learning, optimization techniques, image processing
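In the spirit of the case study mentioned above, here is a minimal NumPy sketch of the mathematics the paper highlights: a two-layer network trained by gradient descent on the XOR problem, where the forward pass is linear algebra and the backward pass is the chain rule from calculus. It is an illustrative example under these assumptions, not the paper's own case study code.

```python
# A two-layer neural network on XOR, written directly from the underlying math:
# matrix products for the forward pass, the chain rule for the gradients.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(scale=1.0, size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(scale=1.0, size=(8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(10000):
    # Forward pass (linear algebra plus a nonlinearity).
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass (chain rule) for the squared-error loss.
    d_p = (p - y) * p * (1 - p)
    d_h = (d_p @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_p
    b2 -= lr * d_p.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(p, 3))   # predictions should approach [0, 1, 1, 0]
```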
Procedia PDF Downloads 14
3676 Digital Games as a Means of Cultural Communication and Heritage Tourism: A Study on Black Myth-Wukong
Authors: Kung Wong Lau
Abstract:
On August 20, 2024, the global launch of the Wukong game generated significant enthusiasm within the gaming community. The game provides gamers with an immersive experience and with digital twins of real locations that effectively bridge cultural heritage and contemporary gaming, thereby facilitating heritage tourism to some extent. Travel websites highlight locations featured in the Wukong game, encouraging visitors to explore these sites. However, this area remains underexplored in cultural and communication studies, both locally and internationally. This pilot study aims to explore the potential of in-game cultural communication in Wukong for promoting Chinese culture and heritage tourism. An exploratory research methodology was employed, utilizing a focus group of non-Chinese active gamers on an online discussion platform. The findings suggest that the use of digital twins as a means to facilitate cultural communication and heritage tourism for non-Chinese gamers shows promise. While this pilot study cannot generalize its findings due to the limited number of participants, the insights gained could inform further discussions on the influential factors of cultural communication through gaming.
Keywords: digital game, game culture, heritage tourism, cultural communication, non-Chinese gamers
Procedia PDF Downloads 18