World Academy of Science, Engineering and Technology
[Computer and Information Engineering]
Online ISSN: 1307-6892
3748 Transforming Enterprise Contract Management: AI-Driven Recommendations, Blockchain Integration, and Smart Contracting
Authors: Jeffery Dickerson, Thaija Dickerson
Abstract:
Enterprise contract management faces persistent challenges, including inefficiencies, limited adaptability, and fragmented compliance processes. This research proposes a unified framework leveraging artificial intelligence (AI), blockchain technology, and smart contracts to transform the contract lifecycle. AI-driven probabilistic models enable predictive insights, automated compliance checks, and negotiation optimization, while blockchain enhances security and transparency through cryptographic workflows and decentralized approvals. The framework bridges centralized architectures of Web 2.0 with Web 3.0 technologies, which leverage blockchain for decentralized, trustless operations, ensuring seamless transformation and operational continuity. Smart contracts automate routine processes, enabling dynamic, programmable agreements. Validation through simulations demonstrates significant improvements, including up to a 60% reduction in contract cycle times under simulated conditions and enhanced compliance rates. Designed for scalability and adaptability, the framework supports industries such as procurement, supply chain, and finance, where secure, efficient contract management is critical. By incorporating human validation loops and aligning with sustainability goals, this study offers a scalable, innovative approach to the digital transformation of contract management.
Keywords: contract lifecycle management, artificial intelligence, blockchain, smart contracts
Procedia PDF Downloads 2
3747 Architectures and Implementations of Data Spaces: A Comparative Study of Gaia-X and Eclipse Data Space Components Frameworks
Authors: Ryan Kelvin Ford
Abstract:
Sharing data in a secure, trusted, and standardized environment promises significant benefits for individuals and organizations. Technical trust and common standards allow each participant to share and access data securely within a data space, and sharing data in such an environment helps organizations acquire new business opportunities. Data sovereignty, interoperability, and trust are the key factors used to evaluate data spaces, and businesses and policymakers can promote a fair data economy by integrating data spaces into organizations. A collaborative environment is needed to facilitate data sharing among organizations, and several architectures have been implemented for this purpose, including Eclipse Data Space Components (EDC), the International Data Spaces Association (IDSA) reference architecture, Gaia-X, and Gaia-X Federation Services (GXFS). This study reviews and compares the last 15 years of work on the architectures and implementations of these data spaces, covering the EDC framework, data connectors, data space architecture, characteristics of data space connectors, federated data space initiatives, the design and construction of data spaces, and the European digital ecosystem envisioned by the Gaia-X vision and architecture strategy. Empirical research based on a structured review protocol was conducted, and the discussion elaborates on a systematic review of the impact of data space technology from various perspectives. The systematic review draws on multiple databases, including IEEE Xplore, Taylor & Francis, ScienceDirect, and Google Scholar, to identify publications on the impact of data spaces from January 2019 to December 2024. The search yielded 150 articles for comparative review, of which 20 were related to IDSA, Gaia-X, and EDC architecture and implementation.
Keywords: IDSA, Gaia-X, Gaia-X architecture, EDC, EDC architecture, GXFS architecture, data space connector
Procedia PDF Downloads 8
3746 Traffic Management Using Artificial Intelligence
Authors: Vamsi Krishna Movva
Abstract:
Artificial intelligence (AI) has revolutionized traffic management in modern cities by enhancing efficiency, safety, and sustainability. This study explores the transformative role of AI-driven systems, including adaptive traffic lights, real-time incident detection, and coordinated signals, in improving urban traffic flow. Additionally, AI-powered navigation systems utilizing real-time GPS and sensor data offer more efficient and safer travel options. This study employs a mixed-methods approach combining quantitative traffic data analysis and qualitative surveys from traffic management authorities. The study also delves into AI's application in law enforcement, monitoring traffic violations, detecting distracted driving, and reconstructing accidents to analyze causes and responsibilities. Furthermore, the research highlights the environmental and economic benefits of AI in traffic management, such as reduced emissions and energy savings, while addressing challenges like data privacy concerns and high implementation costs. Ultimately, this paper emphasizes AI's potential to shape sustainable traffic systems and promote efficient transportation networks.
Keywords: artificial intelligence, traffic management, urban congestion, traffic safety, real-time data
Procedia PDF Downloads 8
3745 Image Retrieval Using Discrete Cosine Transform of Diagonal Projections
Authors: Saleh Ali Alshehri, Omar Tarek Subaih, Mohammed Saad Alghamdi
Abstract:
With the vast visual content of social media and Internet applications, fast and simple image-retrieval systems are necessary. Content-based image-retrieval methods remain suitable even as AI methods become dominant. In this study, a simple and efficient method is presented. An image is binarized and then divided diagonally into two triangles. The projections along both sides of the diagonal are calculated. The discrete cosine transform is applied to these projections, and a few coefficients are retained. The Euclidean distance is then used to search for the image in a dataset of images. The method takes a fraction of a second to retrieve an image from a dataset of 1327 images.
Keywords: content-based image retrieval, diagonal projections, discrete cosine transform, Euclidean distance
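As a minimal illustration of the pipeline described in this abstract (binarize, split along the main diagonal, project each triangle, apply the DCT, and compare signatures by Euclidean distance), the following Python sketch shows one possible implementation. The binarization threshold, the projection directions, and the number of retained coefficients are illustrative assumptions, not the authors' exact choices.

```python
import numpy as np
from scipy.fft import dct

def diagonal_projection_signature(image, n_coeffs=16, threshold=128):
    """Binarize, split along the main diagonal, project each triangle, keep a few DCT coefficients."""
    img = np.asarray(image, dtype=float)
    binary = (img >= threshold).astype(float)
    n = min(binary.shape)
    binary = binary[:n, :n]                       # work on a square crop
    upper = np.triu(binary)                       # triangle above the main diagonal
    lower = np.tril(binary)                       # triangle below the main diagonal
    proj_upper = upper.sum(axis=0)                # column-wise projection of the upper triangle
    proj_lower = lower.sum(axis=1)                # row-wise projection of the lower triangle
    return np.concatenate([dct(proj_upper, norm="ortho")[:n_coeffs],
                           dct(proj_lower, norm="ortho")[:n_coeffs]])

def retrieve(query_sig, dataset_sigs):
    """Return dataset indices sorted by Euclidean distance to the query signature."""
    dists = np.linalg.norm(dataset_sigs - query_sig, axis=1)
    return np.argsort(dists)
```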
Procedia PDF Downloads 6
3744 A Review of Pothole Detection Using Different Technologies
Authors: Ashwini Jarali, Prajwal Lalpotu, Shreya Jadhav, Snehal Kavathekar, Sanskruti Lad
Abstract:
This paper reviews recent advancements in pothole detection technologies, comparing various methods, including deep learning models like YOLO (You Only Look Once) and SSD (Single Shot Detector) and UAV-based systems with multispectral imaging. YOLOv8 Nano emerges as a highly effective model, balancing speed and accuracy in real-time detection, while SSD demonstrates superior precision in certain scenarios. Additionally, UAVs enhance detection by providing early insights into asphalt damage. Image processing techniques and manually labeled datasets are also employed to improve model training and accuracy. The paper evaluates the strengths and limitations of these methods, examining factors like computational efficiency, environmental adaptability, and real-time application. It further explores future directions in this field, focusing on optimizing detection techniques and integrating advanced sensors to enhance road safety and maintenance.
Keywords: YOLO (You Only Look Once), pothole detection, YOLOv8, YOLOv5
Procedia PDF Downloads 8
3743 Simulation of Cybersecurity Attacks and Detection Using Machine Learning Techniques with Virtual Local Area Networks Integration
Authors: Sankenth Jalwad, Satyam, Suteerth Kalkeri, Vidula L. S., Geetha Dayalan
Abstract:
In today's cyber landscape, threats emerge every day and are far more advanced and dynamic than in the past. This project focuses on Virtual Local Area Networks (VLANs). VLANs provide compartmentalization of sensitive information and optimal traffic management, but they also introduce specific vulnerabilities; attackers target VLAN configurations to exploit security holes such as VLAN hopping. The aim is to address these security requirements by developing a machine learning-based IDS for the VLAN environment that identifies, in real time, the patterns and anomalies signifying possible attacks. Apart from the IDS, the project also covers the generation of VLAN-specific cyberattack datasets with the help of Wireshark, which are used to train the ML model.
Keywords: cybersecurity, machine learning, VLAN networks, DTP, STP
Procedia PDF Downloads 10
3742 Dots to Dialogue: Enhancing Accessibility through Braille Image-to-Speech Conversion
Authors: Shwetha B. S., Sirisha M., Vachana U., Aditya Kadlimatti, Manjushree N. S.
Abstract:
Braille script holds significant importance in bridging the communication gap for visually impaired individuals. However, the challenge of interpreting Braille for non-experts creates barriers in education and day-to-day interactions. This paper aims to develop a system that translates Braille text into multilingual speech using advanced Convolutional Neural Networks (CNNs) and Google Text-to-Speech (GTTS) technology. The proposed system employs image recognition techniques powered by CNNs to accurately identify and decode Braille characters from captured images. The deep learning model undergoes training on a diverse dataset of Braille symbols to ensure high accuracy and robustness. Among the models evaluated, AlexNet demonstrated the highest accuracy in decoding Braille characters. Once recognized, the decoded text is converted into speech in the user's preferred language using the GTTS API. This system possesses the ability to greatly improve inclusivity by enabling real-time Braille interpretation for visually impaired individuals, educators, and caregivers.
Keywords: convolutional neural networks, Braille image, image-to-speech, GTTS, AlexNet, VGG16, DenseNet121, ResNet50
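A minimal sketch of the recognition-to-speech flow described above, assuming a trained PyTorch classifier (an AlexNet with a replaced output head stands in for the authors' trained model) and the gTTS package for Google Text-to-Speech; the class list, weights path, and file names are hypothetical.

```python
import torch
from torchvision import models, transforms
from PIL import Image
from gtts import gTTS

# Hypothetical list of decoded characters, one per model output class.
CLASSES = [chr(c) for c in range(ord("a"), ord("z") + 1)]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def load_model(weights_path="braille_alexnet.pt"):
    model = models.alexnet()
    model.classifier[6] = torch.nn.Linear(4096, len(CLASSES))  # adapt the head to Braille classes
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def braille_image_to_speech(model, image_path, lang="en", out_path="speech.mp3"):
    """Decode one Braille cell image and synthesize the recognized character as speech."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        char = CLASSES[model(x).argmax(dim=1).item()]
    gTTS(text=char, lang=lang).save(out_path)  # multilingual output via the lang code
    return char
```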
Procedia PDF Downloads 4
3741 Automatic Detection of Diabetic Retinopathy
Authors: Zaoui Ismahene, Bahri Sidi Mohamed, Abbassa Nadira
Abstract:
Diabetic Retinopathy (DR) is a leading cause of vision impairment and blindness among individuals with diabetes. Early diagnosis is crucial for effective treatment, yet current diagnostic methods rely heavily on manual analysis of retinal images, which can be time-consuming and prone to subjectivity. This research proposes an automated system for the detection of DR using Jacobi wavelet-based feature extraction combined with Support Vector Machines (SVM) for classification. The integration of wavelet analysis with machine learning techniques aims to improve the accuracy, efficiency, and reliability of DR diagnosis. In this study, retinal images are preprocessed through normalization, resizing, and noise reduction to enhance the quality of the images. The Jacobi wavelet transform is then applied to extract both global and local features, effectively capturing subtle variations in retinal images that are indicative of DR. These extracted features are fed into an SVM classifier, known for its robustness in handling high-dimensional data and its ability to achieve high classification accuracy. The SVM classifier is optimized using parameter tuning to improve performance. The proposed methodology is evaluated using a comprehensive dataset of retinal images, encompassing a range of DR severity levels. The results show that the proposed system outperforms traditional wavelet-based methods, demonstrating significantly higher accuracy, sensitivity, and specificity in detecting DR. By leveraging the discriminative power of Jacobi wavelet features and the robustness of SVM, the system provides a promising solution for the automatic detection of DR, which could assist ophthalmologists in early diagnosis and intervention, ultimately improving patient outcomes. This research highlights the potential of combining wavelet-based image processing with machine learning for advancing automated medical diagnostics.
Keywords: diabetic retinopathy (DR), Jacobi wavelets, machine learning, feature extraction, classification
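The sketch below illustrates the general wavelet-features-plus-SVM pipeline the abstract describes. Since Jacobi wavelets are not available in standard libraries, a Daubechies wavelet from PyWavelets is used as a stand-in, and the sub-band statistics, grid-search ranges, and cross-validation setup are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
import pywt
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def wavelet_features(image, wavelet="db2", level=2):
    """Multi-level 2D wavelet decomposition; summary statistics of each sub-band serve as features."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet=wavelet, level=level)
    feats = []
    for band in [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]:
        feats.extend([band.mean(), band.std(), np.abs(band).sum()])
    return np.array(feats)

def train_svm(features, labels):
    """Grid search over SVM hyperparameters, mirroring the parameter-tuning step."""
    grid = GridSearchCV(SVC(kernel="rbf"),
                        {"C": [1, 10, 100], "gamma": ["scale", 0.01, 0.001]},
                        cv=5)
    grid.fit(features, labels)
    return grid.best_estimator_
```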
Procedia PDF Downloads 9
3740 Modern Approaches to Kidney Stone Detection Using Machine Learning
Authors: Jayashree Katti, Harsh Warkari, Prachi Yadav, Bhagyashri Chaudhari
Abstract:
Approximately ten percent of individuals globally suffer from kidney stones, which can cause major complications, including renal damage and blockage of the urinary tract. Traditional detection techniques depend on the manual evaluation of CT or X-ray images, which is laborious and prone to error. With the aim of enhancing kidney stone detection from medical imaging, this research explores various machine learning methods, including Convolutional Neural Networks (CNNs). By reviewing machine learning algorithms such as ensemble techniques, Decision Trees, Random Forests, and Support Vector Machines (SVM), this study shows that machine learning tends to improve accuracy and reduce kidney stone detection time. According to the results of earlier research, ensemble methods produced a classification accuracy of 97.95%, whereas the Decision Tree classifier obtained an F1 score of 85.3%. Advanced techniques utilizing transfer learning, such as AlexNet, achieved an accuracy rate of 96%.
Keywords: kidney stones, machine learning, medical imaging, CNN, transfer learning, decision tree, ensemble methods, random forest, SVM, AlexNet
Procedia PDF Downloads 8
3739 Evolving Application Design and Development Engineering: Bridging the Gap Between Software Architecture and User-Centric Solutions in Modern Digital Ecosystems
Authors: Ouaabbou Noureddine
Abstract:
Application design and development engineering represents a critical intersection between software engineering principles and user-centric design methodologies. This area meets the growing demand for professionals who can design, develop, and deploy sophisticated software applications while ensuring an optimal user experience. The curriculum encompasses basic programming concepts, database management, systems architecture, UI/UX design principles, and software testing methodologies. Students gain hands-on experience through projects that reflect real-world software development cycles, from initial needs analysis to final deployment. The program prepares graduates for roles in software development, application architecture, and technical project management in response to the growing complexity of modern software applications and the evolving needs of the digital economy.
Keywords: software engineering, application development, user experience, system architecture, software design methodologies
Procedia PDF Downloads 7
3738 Managing Networks and Systems in the Modern Security Landscape: An Integrated Approach to Infrastructure Resilience
Authors: Oussama Yadine, Abdellah Jamali
Abstract:
The rapid evolution of today's technology ecosystem, marked by the fusion of cloud computing, IoT, and distributed systems, has introduced complex security challenges in network and systems administration. Our research develops a framework that seamlessly merges contemporary systems administration with advanced security engineering methodologies, particularly focusing on DevSecOps implementation and zero-trust architectural principles. Comprehensive testing and analysis across diverse organizational environments reveal that this unified approach achieves remarkable results: a 47% decrease in security-related incidents while consistently maintaining 99.9% system uptime. The framework delivers actionable guidelines for deploying secure infrastructure architectures, automating compliance oversight, and implementing dynamic security protocols. This integration effectively eliminates the historical divide between systems administration and security engineering, fostering an environment where operational efficiency and security resilience coexist harmoniously.
Keywords: network security, systems administration, security engineering, infrastructure resilience
Procedia PDF Downloads 5
3737 Impact of Artificial Intelligence in Some Sectors: Opportunities and Ethical Considerations
Authors: Umar Mohammed Pakra, Hayatu Saidu Alhaji
Abstract:
This paper explores the role of artificial intelligence (AI) in various sectors, emphasizing its opportunities and ethical considerations. As AI technologies become increasingly integrated into daily life, understanding their implications is crucial for ensuring responsible use. The study analyzes literature on AI's impact on meaningful work, healthcare, and education, highlighting both the potential benefits, such as improved efficiency and personalized services, and the ethical challenges, including privacy concerns, bias in decision-making, and the risk of dehumanization in the workplace. Employing thematic analysis, the research identifies key themes that emerge from the literature, including the need for ethical frameworks, human-centric design, and proactive measures to address privacy and bias issues. The findings underscore the importance of balancing innovation with ethical considerations to foster a more equitable and sustainable future in an AI-driven world. Recommendations for organizations and policymakers are provided, advocating for transparency, interdisciplinary collaboration, and user-centered approaches to AI development. By addressing these challenges, stakeholders can harness the full potential of AI while safeguarding human values and promoting societal well-being.
Keywords: artificial intelligence, ethical considerations, meaningful work, privacy, human-centric design
Procedia PDF Downloads 7
3736 Real Estate Price Classification Using Machine Learning Techniques
Authors: Hadeel Sulaiman Alamri, Mohamed Maher Ben Ismail, Ouiem Bchir
Abstract:
The continued advances in Artificial Intelligence (AI) and Machine Learning (ML) have boosted the interest of tax authorities in developing smart solutions as efficient alternatives to their current fraud detection mechanisms. In particular, the real estate data collected by these administrations has driven efforts to develop advanced analytics models aimed at detecting fraudulent real estate transactions. Specifically, supervised and unsupervised machine learning techniques have been applied to the available large datasets to improve overall taxpayer compliance. This research introduces a machine learning approach intended to classify land and building prices in Saudi Arabia. It groups reported real estate transactions into homogeneous clusters based on relevant features and classifies land and building prices by Saudi city, neighborhood, and schema. The outcomes of the clustering task are then fed into a supervised machine learning process to categorize future real estate transactions into "Fair", "Under-valued", or "Over-valued" classes. The experimental findings indicate that associating clustering algorithms with a Random Forest (RF) model yields an accuracy of 99%.
Keywords: classification, clustering, machine learning, real estate price
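The following sketch shows one way the described two-stage pipeline could look: cluster reported transactions into homogeneous groups, derive "Fair", "Under-valued", and "Over-valued" labels relative to each cluster, and train a Random Forest on the result. The feature columns, number of clusters, and labeling thresholds are assumptions for illustration, not the study's actual configuration.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical feature columns for a reported transaction.
FEATURES = ["city_code", "neighborhood_code", "schema_code", "area_sqm", "price_per_sqm"]

def label_by_cluster(df, n_clusters=50):
    """Cluster transactions, then label each one relative to its cluster's median price."""
    df = df.copy()
    df["cluster"] = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(df[FEATURES])
    median = df.groupby("cluster")["price_per_sqm"].transform("median")
    ratio = df["price_per_sqm"] / median
    df["label"] = pd.cut(ratio, bins=[0, 0.8, 1.2, float("inf")],
                         labels=["Under-valued", "Fair", "Over-valued"])
    return df

def train_classifier(df):
    """Supervised stage: fit a Random Forest on the cluster-derived labels."""
    X_train, X_test, y_train, y_test = train_test_split(
        df[FEATURES], df["label"], test_size=0.2, random_state=0, stratify=df["label"])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    return clf
```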
Procedia PDF Downloads 10
3735 Insight Into Database Forensics
Authors: Enas K., Fatimah A., Abeer A., Ghadah A.
Abstract:
Database forensics is a specialized field of digital forensics that investigates and analyzes database systems to recover and evaluate data, particularly in cases of cyberattacks and data breaches. The increasing significance of securing data confidentiality, integrity, and availability has emphasized the need for robust forensic models to preserve data integrity and maintain the chain of evidence. Organizations rely on Database Forensic Investigation (DBFI) to protect critical data, maintain trust, and support legal actions in the event of breaches. To address the complexities of relational and non-relational databases, structured forensic frameworks and tools have been developed. These include the Three-Tier Database Forensic Model (TT-DF) for comprehensive investigations, blockchain-backed logging systems for enhanced evidence reliability, and the FORC tool for mobile SQLite database forensics. Such advancements facilitate data recovery, identify unauthorized access, and reconstruct events for legal proceedings. Practical demonstrations of these tools and frameworks further illustrate their real-world applicability, advancing the effectiveness of database forensics in mitigating modern cybersecurity threats.
Keywords: database forensics, cybersecurity, SQLite forensics, digital forensics
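As a small example of the kind of evidence-preserving inspection a database forensic investigation performs (not the FORC tool or the TT-DF model themselves), the hedged sketch below hashes an SQLite file for the chain of evidence and enumerates its tables in read-only mode; the database path is hypothetical.

```python
import hashlib
import sqlite3

def acquire_and_inspect(db_path):
    """Hash the database file (chain of evidence), then enumerate tables and row counts read-only."""
    with open(db_path, "rb") as f:
        evidence_hash = hashlib.sha256(f.read()).hexdigest()
    # Open read-only so the evidence file is never modified during inspection.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    summary = {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0] for t in tables}
    conn.close()
    return evidence_hash, summary
```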
Procedia PDF Downloads 8
3734 A Technical Overview of LLM-Powered Cover Letter Generation
Authors: Shivani Dinkar Patil, Shirlene Rose Bandela, Revati Vikas Bhavsar, Venkata Chaitanya K., Aryan Agrawal
Abstract:
This project outlines a significant challenge in the job application process: crafting a compelling and relevant cover letter. It highlights the limitations of existing AI-generated cover letter drafts, noting their generic nature and lack of personalization. This project aims at aiding candidates in securing their dream jobs by generating the best possible cover letter tailored to a specific job posting. This is achieved with minimal hassle, leveraging AI technologies to enhance personalization and context. The project distinguishes itself by focusing on the candidate's unique qualifications and experiences, ensuring the cover letter resonates with potential employers and stands out in a pool of applicants.
Keywords: large language models, NLP, software engineering, prompt engineering
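A minimal sketch of how such a tailored generation step might be prompted, using the OpenAI Python client; the model name, prompt wording, and temperature are illustrative assumptions rather than the project's actual implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_cover_letter(job_posting: str, candidate_profile: str,
                          model: str = "gpt-4o-mini") -> str:
    """Ask the model for a cover letter grounded in the posting and the candidate's experience."""
    system = ("You write concise, personalized cover letters. Use only facts from the "
              "candidate profile and tailor every paragraph to the job posting.")
    user = f"Job posting:\n{job_posting}\n\nCandidate profile:\n{candidate_profile}"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
        temperature=0.7,
    )
    return response.choices[0].message.content
```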
Procedia PDF Downloads 9
3733 The Effect of Artificial Intelligence on Digital Factory
Authors: Keroles Benyamen Shafik Benyamen
Abstract:
Factory planning has the task of designing products, plants, processes, organization, and areas, and of developing a factory. The requirements for factory planning and the construction of a factory have changed in recent years. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions on new areas, shorter life cycles of products and production technologies, and a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity) lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and has become an indispensable tool. Short-term rescheduling can no longer be handled by on-site inspections and manual measurements; the tight time schedules require up-to-date planning models. Because of the high adaptation rate of factories described above, a methodology for rescheduling factories on the basis of a current digital factory twin is conceived and designed for practical application in factory restructuring projects. The focus is on rebuild processes. The goal is to keep the planning basis (the digital factory model) for conversions within a factory up to date. This requires the application of a method that reduces the deficits of existing approaches. The aim is to show how a digital factory model can be kept current during ongoing factory operation. A method based on photogrammetry technology is presented, focusing on the development of a simple and cost-effective approach to track the many changes that occur in a factory building during operation. The method is preceded by a hardware and software comparison to identify the most economical and fastest variant.
Keywords: augmented reality, digital factory model, factory planning, photogrammetry, restructuring
Procedia PDF Downloads 9
3732 Applying AI and IoT to Enhance Eye Vision Assessment, Early Detection of Eye Diseases, and Personalised Vision Correction
Authors: Gasim Alandjani
Abstract:
This research paper investigates the use of artificial intelligence (AI) and the Internet of Things (IoT) to improve eye healthcare, concentrating on eye vision assessment, early detection of eye diseases, and personalised vision correction. The study offers a broad review of the literature and methodology and presents key findings and implications for advancing patient outcomes, improving access to care, optimising resource allocation, and directing future research and practice. The study concludes that the integration of AI and IoT advancements provides progressive answers to traditional hurdles in eye healthcare, enabling more precise, comprehensive, and individualised interventions for patients globally. It emphasizes the significance of sustained innovation and the application of AI- and IoT-driven methodologies to improve eye healthcare and vision for future generations.
Keywords: AI, IoT, eye vision assessment, computer engineering
Procedia PDF Downloads 9
3731 TikTok: AI-Driven Features and Participants' Reaction
Authors: Baylasan Al-Amoudi, Hala Abdulmajeed, Amjad Jilani
Abstract:
This project explores the role of artificial intelligence (AI) in enhancing user engagement on TikTok by examining the app's AI-driven features. Through a structured survey of four main questions and experimental analysis, we examined how TikTok's recommendation algorithm, search engine, and filter tools influence user interactions and satisfaction. A diverse cohort of 20 participants, including casual users and content creators, was involved to provide a broad perspective on user experiences. The examination highlights the recommendation algorithm's ability to deliver highly personalized content, creating a seamless and engaging experience. TikTok's search engine is shown to simplify content discovery by enabling users to find specific topics or trends related to their preferences. Meanwhile, the filter tools are found to encourage creativity, particularly for content creators, by offering versatile options to enhance video quality and visual appeal. By evaluating the unique roles of these AI features, the project underscores their significance in maintaining TikTok's appeal and driving consistent user engagement.
Keywords: TikTok, hashtags, filters, viral sounds, for you page
Procedia PDF Downloads 7
3730 User Experience and Impact of AI Features in AutoCAD
Authors: Sarah Alnafea, Basmah Alalsheikh, Hadab Alkathiri
Abstract:
For over 30 years, AutoCAD, a powerful CAD software package developed by Autodesk, has been essential for design in industries such as engineering, construction, and architecture. With the emergence of advanced technology, AutoCAD has undergone a revolutionary change through the integration of artificial intelligence capabilities that have enhanced users' productivity, efficiency, and design quality. This paper investigates the role of AI in AutoCAD, especially in intelligent automation, generative design, automated design ideas, natural language processing, and predictive analytics. A survey among users was also conducted to assess the adoption of and satisfaction with AI features and to identify areas for improvement. The competitive standing of AutoCAD is further cross-checked against other AI-enabled CAD software packages, including SolidWorks, Fusion 360, and Rhino. The paper gives an overview of the current impact of AI in AutoCAD, along with recommendations for the future direction of AI development to meet users' requirements.
Keywords: artificial intelligence, natural language processing, intelligent automation, generative design
Procedia PDF Downloads 15
3729 Increasing the Speed of the Apriori Algorithm by Dimension Reduction
Authors: A. Abyar, R. Khavarzadeh
Abstract:
Understanding the market and customer behavior is the most basic and important decision-making tool for industrial and service managers. In this regard, the Apriori algorithm, one of the well-known machine learning methods, is used to identify customer preferences. At the same time, the increasing diversity of goods and services and the speed at which customer behavior changes confront us with big data, and the large number of competitors creates an urgent need for continuous analysis of that data. However, the speed of the Apriori algorithm decreases as the data volume grows. In this paper, a big data PCA method is used to reduce the dimensionality of the data in order to speed up the Apriori algorithm. In the simulation section, the results are examined by generating data with different volumes and different degrees of diversity. The results show that the method increases the speed of the Apriori algorithm significantly.
Keywords: association rules, Apriori algorithm, big data, big data PCA, market basket analysis
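One possible reading of this approach, sketched below under stated assumptions: one-hot encode the transactions, use aggregate PCA loadings to keep only the most informative items, and then run Apriori (via mlxtend) on the reduced basket matrix. The component count, item budget, and support threshold are illustrative and may differ from the authors' method.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from mlxtend.frequent_patterns import apriori, association_rules

def reduce_then_mine(onehot: pd.DataFrame, n_components=10, n_items=50, min_support=0.02):
    """Keep the items with the largest PCA loadings, then mine rules on the reduced basket matrix."""
    pca = PCA(n_components=n_components).fit(onehot.values.astype(float))
    importance = np.abs(pca.components_).sum(axis=0)          # aggregate loading per item column
    keep = onehot.columns[np.argsort(importance)[::-1][:n_items]]
    reduced = onehot[keep].astype(bool)
    frequent = apriori(reduced, min_support=min_support, use_colnames=True)
    return association_rules(frequent, metric="confidence", min_threshold=0.5)
```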
Procedia PDF Downloads 10
3728 ChatGPT
Authors: Solaf Badahman, Wala Alasbahi, Wajan Bamehraz, Hiba Nawwab
Abstract:
This research delves into ChatGPT, OpenAI's leading conversational AI, exploring its journey from early language models to the cutting-edge GPT-4. A survey of 35 users highlights ChatGPT's strengths in creative writing, summarization, and user engagement while revealing areas for enhancement, particularly in technical tasks. Through scenario-based testing and direct feedback, this study uncovers ChatGPT's real-world impact, examining its accuracy, privacy, and versatility. Positioned in a competitive landscape, ChatGPT emerges as a powerful, evolving tool for education, creativity, and problem-solving. This research offers a concise snapshot of AI's growing role in shaping the future of human-AI interaction.
Keywords: AI, NLP, ChatGPT, conversational AI, human-AI interaction
Procedia PDF Downloads 12
3727 Analysis of Jenni: Essay Writing Artificial Intelligence
Authors: Joud Tayeb, Dunia Moussa, Rafal Al-Khawlani, Huda Elyas
Abstract:
This research delves into the intricate AI features of Jenni, an AI-powered chatbot designed to offer personalized and engaging conversations. We explore the fundamental technologies driving Jenni's capabilities, including natural language processing (NLP), machine learning, and deep learning. Through a meticulous analysis of these technologies, we aim to unravel how Jenni effectively processes and understands user queries, generates contextually relevant responses, and continuously learns from interactions. To gain deeper insights into user experiences and satisfaction, a comprehensive survey was conducted. Analysis of the collected data shows that users generally like Jenni AI and report that it has improved their essay-writing process, although certain aspects, such as accuracy, still need improvement.
Keywords: natural language processing, machine learning, deep learning, artificial intelligence, Jenni
Procedia PDF Downloads 14
3726 Integrating Optuna and Synthetic Data Generation for Optimized Medical Transcript Classification Using BioBERT
Authors: Sachi Nandan Mohanty, Shreya Sinha, Sweeti Sah, Shweta Sharma
Abstract:
The advancement of natural language processing has significantly influenced the field of medical transcript classification, providing a robust framework for enhancing the accuracy of clinical data processing. It has enormous potential to transform healthcare and improve people's livelihoods. This research focuses on improving the accuracy of medical transcript categorization using Bidirectional Encoder Representations from Transformers (BERT) and its specialized variants, including BioBERT, ClinicalBERT, SciBERT, and BlueBERT. The experimental work employs Optuna, an optimization framework, for hyperparameter tuning to identify the most effective variant, concluding that BioBERT yields the best performance. Furthermore, various optimizers, including Adam, RMSprop, and Layerwise adaptive large batch optimization (LAMB), were evaluated alongside BERT's default AdamW optimizer. The findings show that the LAMB optimizer achieves performance as good as AdamW's. Synthetic data generation techniques from Gretel were utilized to augment the dataset, expanding the original dataset from 5,000 to 10,000 rows. Subsequent evaluations demonstrated that the model maintained its performance with synthetic data, with the LAMB optimizer showing marginally better results. The enhanced dataset and optimized model configurations improved classification accuracy, showcasing the efficacy of the BioBERT variant and the LAMB optimizer. It resulted in an accuracy of up to 98.2% and 90.8% for the original and combined datasets.
Keywords: BioBERT, clinical data, healthcare AI, transformer models
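A hedged sketch of the hyperparameter search described above, combining Optuna with the Hugging Face Trainer and the public dmis-lab BioBERT checkpoint; the search space, the accuracy metric, the number of labels, and the dataset variables in the commented usage are illustrative assumptions.

```python
import optuna
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "dmis-lab/biobert-base-cased-v1.1"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def objective(trial, train_dataset, eval_dataset, num_labels):
    """One Optuna trial: sample hyperparameters, fine-tune BioBERT, return validation accuracy."""
    args = TrainingArguments(
        output_dir=f"trial_{trial.number}",
        learning_rate=trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        per_device_train_batch_size=trial.suggest_categorical("batch_size", [8, 16, 32]),
        num_train_epochs=trial.suggest_int("epochs", 2, 4),
        weight_decay=trial.suggest_float("weight_decay", 0.0, 0.1),
        report_to="none",
    )
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=num_labels)
    trainer = Trainer(
        model=model, args=args,
        train_dataset=train_dataset, eval_dataset=eval_dataset,
        compute_metrics=lambda p: {"accuracy": (p.predictions.argmax(-1) == p.label_ids).mean()},
    )
    trainer.train()
    return trainer.evaluate()["eval_accuracy"]

# Hypothetical usage with pre-tokenized datasets train_ds / val_ds:
# study = optuna.create_study(direction="maximize")
# study.optimize(lambda t: objective(t, train_ds, val_ds, num_labels=40), n_trials=20)
```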
Procedia PDF Downloads 9
3725 Grammarly: Great Writings Get Work Done Using AI
Authors: Neha Intikhab Khan, Alanoud AlBalwi, Farah Alqazlan, Tala Almadoudi
Abstract:
Background: Grammarly, a widely utilized writing assistant launched in 2009, leverages advanced artificial intelligence and natural language processing to enhance writing quality across various platforms. Methods: To collect data on user perceptions of Grammarly, a structured survey was designed and distributed via Google Forms. The survey included a series of quantitative and qualitative questions aimed at assessing various aspects of Grammarly's performance. The survey comprised multiple-choice questions, Likert scale items (ranging from "strongly disagree" to "strongly agree"), and open-ended questions to capture detailed user feedback. The target population included students, friends, and family members. The collected responses were analyzed using statistical methods to quantify user satisfaction. Participation in the survey was voluntary, and respondents were assured anonymity and confidentiality. Results: The survey of 28 respondents revealed a generally favorable perception of Grammarly's AI capabilities. A significant 39.3% strongly agreed that it effectively improves text tone, with an additional 46.4% agreeing, while 10.7% remained neutral. For clarity suggestions, 28.6% strongly agreed, and 57.1% agreed, totaling 85.7% recognition of its value. Regarding grammatical accuracy across various genres, 46.4% rated it a perfect score of 5, contributing to 78.5% who found it highly effective. Conclusion: The evolution of Grammarly from a basic grammar checker to a robust AI-driven application underscores its adaptability and commitment to helping users develop their writing skills.
Keywords: Grammarly, writing tool, user engagement, AI capabilities, effectiveness
Procedia PDF Downloads 9
3724 Boots Chatbot: AI Virtual Customer Assistance Service
Authors: Ruba Bajri, Danah Bukhari, Ruba Tuhaif
Abstract:
This report delves into chatbot applications and specifically the Boots chatbot, a tool that uses artificial intelligence to assist customers with inquiries about Boots products and services, resolve any issues they are facing, and help customers without the unnecessary wait time that comes with waiting for a customer representative. AI-powered chatbots are highly innovative and have impacted the Boots business very positively by satisfying customers' needs in real time. Through artificial intelligence, specifically advancements in natural language processing, chatbots are becoming more intuitive and better at understanding what the customer needs. The report shows the significance of the Boots chatbot and how it enhances customer service and support: it provides instant answers to questions, helps customers navigate services, and can even personalize recommendations based on allergies or past medical history, all in real time and available 24/7. This benefits everyone, as customers can have their common questions answered immediately while more complex issues are left to human agents. We also reviewed the results of a survey conducted to assess public opinion of the chatbot, which provided insight into customer satisfaction levels and identified areas for potential improvement. AI chatbots are changing customer service for the better and improving customer experiences by making them more effective and efficient than ever.
Keywords: Boots chatbot, natural language processing, artificial intelligence, AI chatbots
Procedia PDF Downloads 9
3723 An Operational Model for eMarketing Technology Deployment in Higher Education in the UK
Authors: Amitave Banik
Abstract:
The terms "eMarketing," "online marketing," and "Internet marketing" are frequently interchanged and can often be considered synonymous. eMarketing technologies, tactics, tools, and strategies can help UK universities achieve potential competitive benefits. In UK universities, however, the uptake of eMarketing has been relatively limited, and managing it has become increasingly challenging. Many UK universities are only at an early stage of developing their online marketing capabilities and have yet to identify their core digital marketing tools and techniques. This research investigates eMarketing adoption and deployment initiatives and provides insights into how to successfully develop and implement these initiatives in UK universities. Moreover, it puts forward a provisional conceptual framework for eMarketing strategy implementation that relates strategy objectives and operational requirements to technology utilization. The epistemological assumptions of the research relate to "how things really are" and "how things really work" in an assumed reality, while the methodological assumptions relate to the process of building the conceptual framework and assessing what it can reveal about the "real" world. Based on this concept, the framework recognizes the various eMarketing channels, techniques, and strategies that are used to reach the widest student base. A qualitative research method, based on narrative in-depth case studies, includes an empirical investigation at the University of Gloucestershire, the University of Wales Trinity Saint David, the University of Westminster, and London Metropolitan Business School. The selection of these case universities provides additional value because no previous study has been conducted at this level. Questionnaires and semi-structured interviews were conducted to gather data from academics and professional services staff at the selected universities, and narrative inquiry was employed to analyze the conversations and interviews. Framework analysis was used to identify common themes and to build an operational model from the original provisional conceptual framework. The proposed operational model will provide appropriate eMarketing strategies that create and sustain competitive business development (business expansion and market growth). In addition, it will address one or several customer segments and the university's network of partners in order to create, market, and build relationships that generate profitable and sustainable revenue streams. In this context, the operational model will serve as a roadmap of instructional-technological interactions, outlining essential components to guide eMarketing technology deployment in UK universities.
Keywords: eMarketing, digital technologies, marketing mix, eMarketing plan, strategies, tactics, conceptual framework, operational model, higher education organizations
Procedia PDF Downloads 8
3722 YOLO-Based Object Detection for the Automatic Classification of Intestinal Organoids
Authors: Luana Conte, Giorgio De Nunzio, Giuseppe Raso, Donato Cascio
Abstract:
The intestinal epithelium serves as a pivotal model for studying stem cell biology and diseases such as colorectal cancer. Intestinal epithelial organoids, which replicate many in vivo features of the intestinal epithelium, are increasingly used as research models. However, manual classification of organoids is labor-intensive and prone to subjectivity, limiting scalability. In this study, we developed an automated object-detection algorithm to classify intestinal organoids in transmitted-light microscopy images. Our approach utilizes the YOLOv10 medium model (YOLOv10m), a state-of-the-art object-detection algorithm, to predict and classify objects within labeled bounding boxes. The model was fine-tuned on a publicly available dataset containing 840 manually annotated images with 23,066 total annotations, averaging 28.2 annotations per image (median: 21; range: 1–137). It was trained to identify four categories: cysts, early organoids, late organoids, and spheroids, using a 90:10 train-validation split over 150 epochs. Model performance was assessed using mean average precision (mAP), precision, and recall metrics. The mAP, a standard metric ranging from 0 to 1 (with 1 indicating perfect agreement with manual labeling), was calculated at a 50% overlap threshold (mAP@0.5). Optimal performance was achieved at epoch 80, with an mAP of 0.85, precision of 0.78, and recall of 0.80 on the validation dataset. Class-specific mAP values were highest for cysts (0.87), followed by late organoids (0.83), early organoids (0.76), and spheroids (0.68). Additionally, the model demonstrated the ability to measure organoid sizes and classify them with accuracy comparable to expert scientists, while operating significantly faster. This automated pipeline represents a robust tool for large-scale, high-throughput analysis of intestinal organoids, paving the way for more efficient research in organoid biology and related fields.
Keywords: intestinal organoids, object detection, YOLOv10, transmitted-light microscopy
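A brief sketch of the training and evaluation setup described above, assuming the Ultralytics package (which distributes YOLOv10 checkpoints); the dataset YAML path, image size, and printed metrics are illustrative, with the 90:10 split and the four classes defined in the dataset file.

```python
from ultralytics import YOLO

# Load the pretrained YOLOv10 medium checkpoint as the starting point for fine-tuning.
model = YOLO("yolov10m.pt")

# Fine-tune on the annotated organoid images (train/val split and the four classes
# cyst, early organoid, late organoid, spheroid are declared in the dataset YAML).
model.train(data="organoids.yaml", epochs=150, imgsz=640)

# Evaluate on the validation split; map50 corresponds to mAP at a 50% IoU threshold.
metrics = model.val()
print("mAP@0.5:", metrics.box.map50)
print("precision:", metrics.box.mp, "recall:", metrics.box.mr)
```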
Procedia PDF Downloads 10
3721 Packet Analysis in Network Forensics: Insights, Tools, and Case Study
Authors: Dalal Nasser Fathi, Amal Saud Al-Mutairi, Mada Hamed Al-Towairqi, Enas Fawzi Khairallah
Abstract:
Network forensics is essential for investigating cyber incidents and detecting malicious activities by analyzing network traffic, with a focus on packet and protocol data. This process involves capturing, filtering, and examining network data to identify patterns and signs of attacks. Packet analysis, a core technique in this field, provides insights into the origins of data, the protocols used, and any suspicious payloads, which aids in detecting malicious activity. This paper explores network forensics, providing guidance for the analyst on what to look for and identifying attack sites guided by the seven layers of the OSI model. Additionally, it explains the most commonly used tools in network forensics and demonstrates a practical example using Wireshark.
Keywords: network forensics, packet analysis, Wireshark tools, forensic investigation, digital evidence
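The practical example in the paper uses Wireshark; as a scriptable counterpart, the hedged Scapy sketch below applies the same idea of protocol-level filtering to a capture file, counting protocols and flagging unusually talkative source addresses. The capture path and threshold are assumptions.

```python
from collections import Counter
from scapy.all import rdpcap, IP, TCP, UDP, DNS

def summarize_capture(pcap_path, talker_threshold=1000):
    """Count protocols and per-source packet volumes, flagging unusually talkative hosts."""
    packets = rdpcap(pcap_path)
    protocols, sources = Counter(), Counter()
    for pkt in packets:
        if IP in pkt:
            sources[pkt[IP].src] += 1
        if DNS in pkt:
            protocols["DNS"] += 1
        elif TCP in pkt:
            protocols["TCP"] += 1
        elif UDP in pkt:
            protocols["UDP"] += 1
    suspects = [src for src, count in sources.items() if count > talker_threshold]
    return protocols, sources.most_common(10), suspects
```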
Procedia PDF Downloads 14
3720 Artificial Intelligence Applications in Kahoot!
Authors: Jana, Walah, Salma, Dareen
Abstract:
This study looks at how the game-based learning platform Kahoot! has changed education, with a particular emphasis on how it incorporates artificial intelligence (AI). Kahoot! has evolved since its introduction in 2013, moving from human-authored questions to AI-driven features that improve the learning experience. The software effectively engages educators and students by delivering adaptive learning paths, regulating content, and offering individualized tests. This study also highlights the AI features of Kahoot! by contrasting it with comparable platforms like Quizizz, Socrative, Gimkit, and Nearpod. According to a review of user input, satisfaction with Kahoot!'s "PDF to Story" and "Story Text Enhancer" functions ranges from moderate to high, yet issues remain with accuracy consistency and usability. The results demonstrate how AI can improve the effectiveness, adaptability, and interactivity of learning while offering useful insights for educators and developers seeking to optimize educational tools.
Keywords: PDF to story feature, story text enhancer, AI-driven learning, interactive content creation
Procedia PDF Downloads 12
3719 Exploring the Impact of AI Tools in Microsoft PowerPoint
Authors: Budoor Bujeir, Noor Alaidaros, Sultana Alsolami
Abstract:
This study investigates how AI tools in Microsoft PowerPoint, such as Designer and Translation, might improve the process of creating presentations. Thanks to its sophisticated AI features, PowerPoint has become a powerful tool for effectively creating high-quality presentations. Designed to maximize user experience, key features include multilingual translation, real-time collaboration, and design ideas. A mixed-method approach was used, combining hands-on demos of particular AI technologies with a questionnaire given to both inexperienced and seasoned users. The survey examined how often individuals used these features, how helpful they thought they were, and how much time they could save. The results show that although tools like Designer are not widely used, they are recognized for improving aesthetics and saving time. The accuracy and usefulness of translation technologies in multilingual environments received high ratings, emphasizing how they promote inclusive communication. The importance of incorporating AI into productivity software is highlighted by this study, opening the door to more approachable, effective, and captivating presentation workflows.
Keywords: Microsoft PowerPoint, AI features, designer, translation, presentation tools, NLP
Procedia PDF Downloads 16