Abstracts | Computer and Systems Engineering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 433

World Academy of Science, Engineering and Technology

[Computer and Systems Engineering]

Online ISSN : 1307-6892

433 Evaluation of Hootsuite as a Social Media Management Tool

Authors: Zaina Alem, Abrar Sufyan, Dana Madani

Abstract:

This paper examines the features, utility, and user perceptions of Hootsuite, a leading social media management platform. Through a structured analysis and user survey, the study explores Hootsuite’s functionality, including post scheduling, analytics, and team collaboration. The methodology includes app walkthroughs and a survey of 47 respondents to evaluate ease of use and key features. Results show that 89.4% of users would recommend Hootsuite, and the majority find it user-friendly, especially valuing its monitoring and scheduling tools. The findings highlight Hootsuite’s advantages in supporting digital marketing strategies while identifying areas for potential improvement in user interface and clarity.

Keywords: Hootsuite, digital marketing, post scheduling, team collaboration

Procedia PDF Downloads 6
432 A Modern Method for Secure Online Voting System Using Blockchain and RFID Technology

Authors: Ali El Ksimi

Abstract:

In the modern digital landscape, the integrity and security of voting processes are paramount. Traditional voting methods have faced numerous challenges, including fraud, lack of transparency, and administrative inefficiencies. As these issues become increasingly critical, there is a growing need for advanced solutions that can enhance the security and reliability of elections. Blockchain technology, with its decentralized architecture, immutable nature, and advanced cryptographic techniques, offers a robust framework for transforming the voting process. By integrating Radio Frequency Identification (RFID) technology, voter authentication can be further streamlined, ensuring the authenticity of each vote cast. This article presents a decentralized IoT-based online voting system that utilizes blockchain, RFID, and cryptography to create a secure, transparent, and user-friendly voting experience. The proposed decentralized application (DApp) leverages Ethereum's blockchain and cryptographic protocols to manage the entire voting lifecycle, ensuring that each vote is recorded securely and transparently. By employing RFID tags for voter identification, this solution mitigates the risks associated with traditional identification methods while enhancing the accessibility of the voting process. We discuss the technical architecture, cryptographic mechanisms, scalability, and security advantages of this approach alongside its potential limitations, such as the dependence on RFID infrastructure, blockchain transaction costs, and possible latency in large-scale elections. Additionally, we explore the challenges in implementing the system across different jurisdictions and the regulatory hurdles that might arise with such decentralized solutions. Ultimately, this solution aims to redefine electoral processes, promoting trust and participation in democratic governance.
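
As a toy illustration of the tamper-evidence the paper derives from blockchain, the Python sketch below appends votes, referenced by hashed RFID tag IDs, to a hash-linked ledger, so any retroactive edit becomes detectable. This is a minimal sketch of the general idea only, not the authors' Ethereum DApp; all identifiers and values are hypothetical.

```python
import hashlib
import json
import time

# Minimal hash-chained vote ledger mimicking the append-only, tamper-evident
# property obtained from a blockchain. Illustrative only: the real system uses
# Ethereum smart contracts; tag IDs and candidate names here are hypothetical.

class VoteLedger:
    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64, "payload": "genesis"}]

    @staticmethod
    def _hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def cast_vote(self, rfid_tag_hash, candidate):
        # Store only a hash of the RFID tag so the ballot is not directly linkable.
        prev = self.chain[-1]
        self.chain.append({
            "index": prev["index"] + 1,
            "prev_hash": self._hash(prev),
            "payload": {"voter": rfid_tag_hash, "candidate": candidate, "ts": time.time()},
        })

    def verify(self):
        # Any retroactive edit breaks the hash links, so tampering is detectable.
        return all(self.chain[i]["prev_hash"] == self._hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = VoteLedger()
ledger.cast_vote(hashlib.sha256(b"RFID-TAG-001").hexdigest(), "candidate_a")
print(ledger.verify())  # True
```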

Keywords: blockchain, RFID (radio frequency identification), authentication, security, IoT (Internet of Things)

Procedia PDF Downloads 15
431 Protection of a Channel Transmission Security System in IP Networks

Authors: Khireddine Abdelkrim, Trabelsi Oussama

Abstract:

The opportunity to migrate from traditional telephony to IP telephony offers several advantages for businesses and allows them to benefit from new services such as videoconferencing and data transmission. Integrating these services into a single platform requires stronger security. In this work, we present the attacks that threaten VoIP and detail some of the methods they use. We conclude with a description of best practices for securing voice over IP communications. VoIP runs over the Internet, in particular the IP protocol, and therefore inherits its vulnerabilities. This article presents the implementation of a channel transmission security system over IP.
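
To make the channel-protection idea concrete, the sketch below applies authenticated encryption (AES-256-GCM, via the Python cryptography package) to voice payloads, in the spirit of SRTP. It is a minimal sketch under assumed simplifications, not the system implemented in the paper: key exchange and SIP/H.323 signaling security are taken to happen elsewhere, and RTP framing is reduced to a sequence number.

```python
from os import urandom
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Protect a voice channel payload with authenticated encryption (AES-256-GCM).
# Key distribution, RTP framing, and signaling security are out of scope here.

key = AESGCM.generate_key(bit_length=256)   # would come from a key exchange (e.g., DTLS-SRTP)
aesgcm = AESGCM(key)

def protect(seq_num: int, payload: bytes) -> bytes:
    nonce = urandom(12)
    header = seq_num.to_bytes(4, "big")          # authenticated but not encrypted
    return header + nonce + aesgcm.encrypt(nonce, payload, header)

def unprotect(packet: bytes) -> bytes:
    header, nonce, ct = packet[:4], packet[4:16], packet[16:]
    return aesgcm.decrypt(nonce, ct, header)     # raises if the packet was tampered with

pkt = protect(1, b"20ms of G.711 audio ...")
assert unprotect(pkt) == b"20ms of G.711 audio ..."
```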

Keywords: VoIP, vulnerability, protocol, H.323, SIP, IP network, mobile, channel

Procedia PDF Downloads 17
430 Measuring Cloud Computing Maturity in Public Sector: Key Dimensions and Factors

Authors: Nur Amie Ismail, Kamsuriah Ahmad, Fazlina Mohd Ali

Abstract:

The proliferation of cloud computing (CC) has catalyzed digital transformation within public sectors globally. Nonetheless, CC deployment is impeded by substantial obstacles, notably the absence of a methodical maturity evaluation paradigm and insufficient oversight of implementation efficacy. These deficiencies give rise to critical issues, including security vulnerabilities, vendor lock-in, and suboptimal cloud adoption processes. A formalized maturity assessment is indispensable for enabling organizations to evaluate their existing capabilities, pinpoint areas necessitating enhancement, and refine cloud adoption strategies. This research employs the Technology-Organisation-Environment (TOE) framework to investigate the pivotal dimensions influencing CC maturity in the public sector. A qualitative methodological approach, utilizing semi-structured interviews with six CC experts, was adopted. Thematic analysis, conducted with NVivo software, identified three primary dimensions (Technology, Organisation, and Environment) encompassing 38 salient factors. The findings underscore that security, executive leadership commitment, technological compatibility, performance monitoring, and stakeholder assurance are among the most influential determinants of maturity. Notwithstanding robust policy endorsement, public sector CC implementation continues to face key challenges, including security concerns, a shortage of internal expertise, and limited organizational acceptance. Prior investigations have shown that many public sector entities remain in the early stages of cloud adoption, failing to attain the requisite levels of maturity, and often operate with antiquated infrastructure that complicates the transition to cloud-based solutions. A primary contributor to this deficiency is the absence of a systematic framework or assessment model for quantifying CC maturity within governmental bodies; without a clear model, public sector organizations struggle to appraise the effectiveness of their technological utilization and to identify areas for refinement. Furthermore, the lack of performance and progress monitoring in CC represents a substantial challenge: many organizations lack an effective mechanism for measuring the performance of deployed cloud technologies, hindering their capacity to improve decision-making and refine adoption strategies.

Keywords: cloud computing, maturity, performance measurement, public sector

Procedia PDF Downloads 19
429 Hybrid Cloud Strategies for the Public Sector: Benefits, Challenges and Emerging Directions

Authors: Noorbaiti Mahusin, Hasimi Sallehudin, Nurhizam Safie Mohd Satar

Abstract:

This study examines the hybrid cloud strategy in the public sector, emphasizing the benefits, challenges, and current development directions. Through comprehensive analysis, we identified multiple benefits of hybrid cloud, including cost efficiency, scalability, increased security, and fostering innovation. However, technical complexity, security concerns, cost management, and skill gaps are also discussed. This article also explores the latest technologies in hybrid cloud, such as edge computing and AI, and their potential to revolutionize public sector operations. These findings underscore the need for infrastructure upgrades, human resource capability development, and continuous monitoring to fully leverage the benefits of hybrid cloud. This article provides practical guidance for the public sector in implementing an effective hybrid cloud strategy.

Keywords: cost efficiency, data security, hybrid cloud, latest technology, public sector

Procedia PDF Downloads 17
428 Bioinspired Gradient-Stiffness Fingertip for Enhanced Softness Discrimination in Vision-based Tactile Sensors

Authors: Lunwei Zhang, Zihao Wang, Tiemin Li, Yao Jiang

Abstract:

This paper theoretically reveals the limitations of existing vision-based tactile sensors using homogeneous elastomers in detecting the softness of homogeneous objects with low surface curvature. To address this issue, a bio-inspired gradient-stiffness elastomer design is proposed to enhance the sensitivity of vision-based tactile sensors (VBTS) in softness discrimination. The design features a three-layered structure characterized by a ‘stiff core and soft periphery,’ enabling the elastomer’s surface to undergo continuous and significant deformation upon contact with soft objects as force is applied, thereby providing sufficient information for perceiving the mechanical properties of the object. Two metrics, Contact Strain Rate (CSR) and Strain Rate Discriminability Index (SRDI), are proposed to quantitatively evaluate elastomer surface deformation post-contact and sensor sensitivity in distinguishing softness. Finite element simulations demonstrate that the gradient-stiffness elastomer significantly outperforms homogeneous elastomers, with CSR and SRDI improvements of 8.5 to 9.8 times. Experimental validation using Tac3D sensors further confirms the superior performance of the gradient-stiffness elastomer in distinguishing the softness of low-curvature homogeneous objects.

Keywords: softness discrimination, vision-based tactile sensors, bionic tactile sensor, gradient-stiffness elastomer

Procedia PDF Downloads 31
427 Enhancing Maintenance Efficiency Through Predictive Maintenance and Industry 4.0 Integration

Authors: Yuseni Ab Wahab, M. A. Burhanuddin

Abstract:

Predictive maintenance (PdM) is an innovative maintenance strategy that leverages data analytics and advanced technology to predict equipment failures before they occur. This paper explores how PdM works, highlighting the crucial role of sensors and tools that form the foundation of the strategy. It also examines the limitations of PdM and emphasizes the importance of reliability-centered maintenance (RCM). The benefits of PdM include reduced downtime, lower operational costs, and improved efficiency. However, the paper also identifies key challenges, including concerns about data quality, the need for initial investment, and the necessity of training skilled personnel. Looking ahead, the paper discusses how PdM will integrate with Industry 4.0, driving a manufacturing revolution through the adoption of smart factories and digital twin technology. The proposed study concludes that PdM will expand its impact into sectors such as healthcare, transportation, and energy, ensuring the reliability and performance of critical systems across these industries.

Keywords: predictive maintenance, industry 4.0, reliability-centered maintenance, optimization

Procedia PDF Downloads 19
426 Optimizing Security Operations using SIEM and SOAR: Unified Threat Intelligence Across Multi-Cloud and On-Premises Systems

Authors: Charan Shankar Kummarapurugu

Abstract:

This paper introduces a framework designed to integrate threat intelligence with Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) systems across multi-cloud, hybrid cloud, and on-premises environments. By merging threat intelligence feeds with SIEM and SOAR functionalities, the architecture aims to improve threat detection, streamline incident response, and enhance automation. Experimental findings show notable improvements in response times and threat visibility, providing an effective strategy for managing security threats across various infrastructure models.
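
To make the integration concrete, here is a minimal sketch reducing the detection-to-response path to one rule: events from several environments are matched against a threat-intelligence feed, and a hit triggers an automated playbook. Field names, the feed, and the playbook are hypothetical placeholders, not the paper's framework.

```python
# Toy detection-to-response path: match events from several environments
# against a threat-intelligence feed and invoke a SOAR-style playbook on a hit.

threat_feed = {"203.0.113.7", "198.51.100.23"}   # known-bad IPs from a TI feed

events = [
    {"env": "aws",     "src_ip": "203.0.113.7",   "action": "login_failed"},
    {"env": "on-prem", "src_ip": "192.0.2.10",    "action": "file_read"},
    {"env": "azure",   "src_ip": "198.51.100.23", "action": "port_scan"},
]

def playbook_block_ip(event):
    # A real SOAR playbook would call firewall / EDR APIs here.
    print(f"[SOAR] blocking {event['src_ip']} seen in {event['env']}")

for ev in events:                     # the SIEM correlation loop, reduced to one rule
    if ev["src_ip"] in threat_feed:
        playbook_block_ip(ev)
```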

Keywords: threat intelligence, SIEM, SOAR, multi-cloud, hybrid cloud, on-premises, security, incident response, automation

Procedia PDF Downloads 19
425 From Archisculpture to Generative Art

Authors: Katherine Lapierre

Abstract:

It is in relation with the notions of Art Brut that the research and creation projects focus on the notion of outsider architecture, a direct reference to the expression “Outsider Art”. This term was first used by Roger Cardinal to translate “Art Brut,” which was defined by Jean Dubuffet. Centered on the foundations of archisculpture, the research confronts the idea of integrating the arts with architecture, while going beyond the schema of a simple dialogue between the artwork and the architectural space. The project therefore proposes a material exploration of notions located at the frontier of the fields of art and architecture. The program also falls within the field of “architecture autre” (“other” architecture), coined by Reyner Banham along with the definition of New Brutalism, which corresponds to an architecture that excludes any historical or conventional cultural references: it abandons all concepts of composition, symmetry, order, module, and proportions. Also related to “Art Autre” (“Art of Another Kind”), which was defined by Michel Tapié, and to Art Brut, this architecture was to exclude any monumental symbolism and exceed “the norms of its expression with as much vehemence as Dubuffet’s paintings exceeded the standards of painting.” If the connections between Art Brut and architecture are sometimes tenuous, they revive theories supplanted by rationalism. Jean Dubuffet’s constructions, built on a monumental scale from his sculptures and drawings, are examples that recall certain notions of Art Brut applied to architecture and design. Examples include the Closerie Falbala in Périgny, Val-de-Marne, France, and the Tour aux figures in Issy-les-Moulineaux, France. Although they cannot be considered Art Brut productions per se, they provide strong glimpses of the potential of Art Brut notions for architecture. The project thus combines models based on scanning, digitization, and software calculation. The printed supports become integrated elements of the structural system, architectural elements that can adopt myriad shapes, before being reintegrated into the models. Through this process, it also becomes possible to imagine how the printed object would look at full scale without supports. The walls of the building result from the aligned supports from 3D printing and were progressively adjusted through a back-and-forth process between printing, scanning, and modeling. With emphasis on the digital representation of such architectures, the research reveals the manifold possibilities offered by exchange and dialogue between the disciplines of drawing, sculpture, design, and architecture. The projects presented in this paper will be shown as part of the Habitat outsider exhibition at the Montcalm gallery in Gatineau (Quebec) in the fall of 2025. Throughout this work, the links created between sculpture and architecture propose a critical and poetic look at the themes of creation and habitability. Through the hybridization of techniques and digital technologies in the creation of prototypes inspired by outsider architectures, new axes of research and new possibilities regularly emerge. The unexpected results clearly illustrate the richness and potential of these ventures, which constantly challenge the boundaries of traditional architecture and sculpture.

Keywords: architecture, Art Brut, generative art, archisculpture

Procedia PDF Downloads 20
424 TraceBFT: Enabling Backtracking-Based Pipelined Asynchronous BFT Consensus for Throughput Improvement

Authors: Chaofeng Zhuang, Junqing Gong, Zhili Chen, Haifeng Qian

Abstract:

Network attacks threaten the security and efficiency of synchronous and partially synchronous BFT protocols, driving the need for robust asynchronous solutions. HoneyBadgerBFT pioneered practical asynchronous BFT using reliable broadcast (RB) and asynchronous binary agreement (ABA). Dumbo reduced the number of ABA instances, and sDumbo lowered latency to six steps with leader election in its multi-valued Byzantine agreement (sMVBA). However, re-tosses in ABA, which occur when the coin value differs from the bit known to most nodes, and view re-executions due to leader crashes in sDumbo lead to additional communication costs. In addition, these traditional asynchronous BFT protocols process data serially, limiting system throughput. To address these issues, this paper proposes trackMVBA, an MVBA protocol based on a backtracking technique. The trackMVBA protocol transmits transactions and vectors via provable broadcast (PB), where vectors link multiple transactions/vectors. Nodes decide vectors by backtracking, packing linked transactions into blocks, and reaching consensus in only seven steps in the optimal case. To improve throughput, this paper uses trackMVBA to construct TraceBFT, a pipeline-based asynchronous BFT protocol that enables parallel view execution and avoids re-execution costs when the leader crashes. Analysis confirms TraceBFT’s security and liveness in asynchronous networks, and simulations show that TraceBFT enhances system throughput while keeping latency low.
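
The backtracking idea can be pictured with a toy sketch: once a vector is decided, a node follows the links it carries to earlier vectors and packs every referenced transaction into one block. The data layout below is our illustration only; trackMVBA's provable broadcast, proofs, and view logic are omitted.

```python
# Simplified backtracking step: starting from the decided vector, traverse the
# links to earlier vectors and collect every referenced transaction into a block.

vectors = {
    "v1": {"txs": ["tx1", "tx2"], "links": []},
    "v2": {"txs": ["tx3"],        "links": ["v1"]},
    "v3": {"txs": ["tx4", "tx5"], "links": ["v2"]},   # the decided vector
}

def backtrack(decided, vectors):
    block, seen, stack = [], set(), [decided]
    while stack:
        vid = stack.pop()
        if vid in seen:
            continue
        seen.add(vid)
        block.extend(vectors[vid]["txs"])
        stack.extend(vectors[vid]["links"])   # follow links to earlier vectors
    return block

print(backtrack("v3", vectors))   # ['tx4', 'tx5', 'tx3', 'tx1', 'tx2']
```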

Keywords: asynchronous protocol, backtrack, broadcast, Byzantine fault tolerant, pipeline

Procedia PDF Downloads 15
423 Exploring the Role of IPv6 in Enhancing IoT Communication and Green Network Optimization for Business Sustainability

Authors: Saqib Warsi

Abstract:

The Internet of Things (IoT) has become an essential component of modern communication networks, with IPv6 playing a pivotal role in addressing the challenges posed by the rapidly growing number of connected devices. IPv6 provides an expanded address space, offering a solution to the limitations of IPv4 while enhancing routing efficiency and security. This paper explores the impact of IPv6 in improving IoT communication, focusing on its operational benefits for businesses. Additionally, we examine the integration of green communication principles, which aim to reduce energy consumption and operational costs, thus promoting environmental sustainability and business efficiency. Through qualitative analysis and simulation-based modeling, this paper investigates the benefits of IPv6 in IoT environments and evaluates the role of green communication strategies in optimizing network performance. Traffic measurement tools and network performance simulators were employed to analyze the efficiency, sustainability, and scalability of IPv6 networks. By presenting a comprehensive framework for traffic analysis, modeling, and optimization, this research highlights the potential of combining IPv6 and green communication practices to drive business growth while promoting environmental sustainability. The findings provide valuable insights for businesses adopting more sustainable and efficient communication networks.
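
For a concrete sense of the expanded address space, the short sketch below uses Python's standard ipaddress module (with the reserved 2001:db8:: documentation prefix) to show that a single /64 IoT subnet already exceeds the entire IPv4 address space.

```python
import ipaddress
from itertools import islice

# A single standard /64 IPv6 subnet dwarfs the entire 32-bit IPv4 space.
site = ipaddress.IPv6Network("2001:db8::/64")             # documentation prefix
print(site.num_addresses)                                 # 18446744073709551616 == 2**64
print(ipaddress.IPv4Network("0.0.0.0/0").num_addresses)   # 4294967296 == 2**32

# Deriving per-device addresses for a fleet of IoT sensors:
for host in islice(site.hosts(), 3):
    print(host)   # 2001:db8::1, 2001:db8::2, 2001:db8::3
```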

Keywords: IPv6, Internet of Things (IoT), green communications, traffic measurement and modeling, network virtualization

Procedia PDF Downloads 30
422 Real-Time Finger Tracking: Evaluating YOLOv8 and MediaPipe for Enhanced HCI

Authors: Zahra Alipour, Amirreza Moheb Afzali

Abstract:

In the field of human-computer interaction (HCI), hand gestures play a crucial role in facilitating communication by expressing emotions and intentions. The precise tracking of the index finger and the estimation of joint positions are essential for developing effective gesture recognition systems. However, various challenges, such as anatomical variations, occlusions, and environmental influences, hinder optimal functionality. This study investigates the performance of the YOLOv8m model for hand detection using the EgoHands dataset, which comprises diverse hand gesture images captured in various environments. Over three training processes, the model demonstrated significant improvements in precision (from 88.8% to 96.1%) and recall (from 83.5% to 93.5%), achieving a mean average precision (mAP) of 97.3% at an IoU threshold of 0.7. We also compared YOLOv8m with MediaPipe and an integrated YOLOv8 + MediaPipe approach. The combined method outperformed the individual models, achieving an accuracy of 99% and a recall of 99%. These findings underscore the benefits of model integration in enhancing gesture recognition accuracy and localization for real-time applications. The results suggest promising avenues for future research in HCI, particularly in augmented reality and assistive technologies, where improved gesture recognition can significantly enhance user experience.
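
One way such an integrated pipeline can be wired together is sketched below: YOLOv8 proposes hand boxes and MediaPipe regresses finger landmarks inside each crop. The weights file and confidence threshold are hypothetical placeholders, not the authors' EgoHands-trained model.

```python
import cv2
import mediapipe as mp
from ultralytics import YOLO

# YOLOv8 proposes hand boxes; MediaPipe localizes finger joints inside each box.
# "hands_egohands.pt" stands in for weights fine-tuned on EgoHands (placeholder).

detector = YOLO("hands_egohands.pt")
hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

frame = cv2.imread("frame.jpg")
for box in detector(frame, conf=0.5)[0].boxes.xyxy:           # (x1, y1, x2, y2) per hand
    x1, y1, x2, y2 = map(int, box.tolist())
    crop = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2RGB)
    result = hands.process(crop)                              # landmark regression on the crop
    if result.multi_hand_landmarks:
        tip = result.multi_hand_landmarks[0].landmark[8]      # 8 = INDEX_FINGER_TIP
        print("index fingertip:", x1 + tip.x * (x2 - x1), y1 + tip.y * (y2 - y1))
```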

Keywords: YOLOv8, mediapipe, finger tracking, joint estimation, human-computer interaction (HCI)

Procedia PDF Downloads 33
421 Regression of Hand Kinematics from Surface Electromyography Data Using a Long Short-Term Memory-Transformer Model

Authors: Anita Sadat Sadati Rostami, Reza Almasi Ghaleh

Abstract:

Surface electromyography (sEMG) offers important insights into muscle activation and has applications in fields including rehabilitation and human-computer interaction. The purpose of this work is to predict the degree of activation of two joints in the index finger using an LSTM-Transformer architecture trained on sEMG data from the Ninapro DB8 dataset. We apply advanced preprocessing techniques, such as multi-band filtering and customizable rectification methods, to enhance the encoding of sEMG data into features that are beneficial for regression tasks. The processed data is converted into spike patterns and simulated using Leaky Integrate-and-Fire (LIF) neuron models, allowing for neuromorphic-inspired processing. Our findings demonstrate that adjusting filtering parameters and neuron dynamics and employing the LSTM-Transformer model improves joint angle prediction performance. This study contributes to the ongoing development of deep learning frameworks for sEMG analysis, which could lead to improvements in motor control systems.
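
A minimal PyTorch sketch of an LSTM-Transformer regressor of this kind appears below. The 16-channel input, layer sizes, and last-step readout are our assumptions for illustration; the paper's exact architecture and preprocessing (spike encoding, LIF simulation) are not reproduced.

```python
import torch
import torch.nn as nn

# Minimal LSTM-Transformer regressor; sizes and the 16-channel sEMG input are
# assumptions, not the paper's exact configuration.

class LSTMTransformer(nn.Module):
    def __init__(self, n_channels=16, d_model=128, n_joints=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, d_model, batch_first=True)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_joints)   # angles of the two index-finger joints

    def forward(self, x):                # x: (batch, time, channels) of filtered sEMG
        h, _ = self.lstm(x)              # local temporal encoding
        h = self.encoder(h)              # global context via self-attention
        return self.head(h[:, -1])       # regress joint angles from the last step

model = LSTMTransformer()
emg = torch.randn(8, 200, 16)            # 8 windows of 200 samples x 16 electrodes
print(model(emg).shape)                  # torch.Size([8, 2])
```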

Keywords: surface electromyography, LSTM-transformer, spiking neural networks, hand kinematics, leaky integrate-and-fire neuron, band-pass filtering, muscle activity decoding

Procedia PDF Downloads 39
420 Heuristic of Style Transfer for Real-Time Detection or Classification of Weather Conditions from Camera Images

Authors: Hamed Ouattara, Pierre Duthon, Frédéric Bernardin, Omar Ait Aider, Pascal Salmane

Abstract:

In this article, we present three neural network architectures for real-time classification of weather conditions (sunny, rainy, snowy, foggy) from images. Inspired by recent advances in style transfer, two of these architectures, Truncated ResNet50 and Truncated ResNet50 with Gram Matrix and Attention, surpass the state of the art and demonstrate remarkable generalization capability on several public databases, including Kaggle (2000 images), Kaggle (850 images), MWI (1996 images) [1], and Image2Weather [2]. Although developed for weather detection, these architectures are also suitable for other appearance-based classification tasks, such as animal species recognition, texture classification, disease detection in medical images, and industrial defect identification. We illustrate these applications in the section “Applications of Our Models to Other Tasks” with the “SIIM-ISIC Melanoma Classification Challenge 2020” [3].
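
The "Truncated ResNet50 with Gram Matrix" idea can be sketched as below: early ResNet50 stages serve as a texture encoder whose feature maps are summarized by a Gram matrix, the channel-correlation statistic from neural style transfer, before classification. The truncation depth and classifier head are our assumptions; the attention branch is omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

# Early ResNet50 stages as a texture encoder; feature maps summarized by a
# Gram matrix (channel correlations), as in neural style transfer.
# Truncation depth and the linear head are assumptions.

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
truncated = nn.Sequential(*list(backbone.children())[:6])   # conv1 ... layer2

def gram(features):                      # features: (batch, C, H, W)
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)   # (batch, C, C), normalized

x = torch.randn(4, 3, 224, 224)          # a batch of weather images
g = gram(truncated(x))                   # texture statistics, shape (4, 512, 512)
logits = nn.Linear(512 * 512, 4)(g.flatten(1))   # 4 classes: sunny/rainy/snowy/foggy
```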

Keywords: weather simulation, weather measurement, weather classification, weather detection, style transfer, Pix2Pix, CycleGAN, CUT, neural style transfer

Procedia PDF Downloads 45
419 CSoS-STRE: A Combat System-of-Systems Space-Time Resilience Enhancement Framework

Authors: Jiuyao Jiang, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge

Abstract:

Modern warfare has transitioned from the paradigm of isolated combat forces to system-to-system confrontations due to advancements in combat technologies and application concepts. A combat system-of-systems (CSoS) is a combat network composed of independently operating entities that interact with one another to provide overall operational capabilities. Enhancing the resilience of CSoS is garnering increasing attention due to its significant practical value in optimizing network architectures, improving network security and refining operational planning. Accordingly, a unified framework called CSoS space-time resilience enhancement (CSoS-STRE) has been proposed, which enhances the resilience of CSoS by incorporating spatial features. Firstly, a multilayer spatial combat network model has been constructed, which incorporates an information layer depicting the interrelations among combat entities based on the OODA loop, along with a spatial layer that considers the spatial characteristics of equipment entities, thereby accurately reflecting the actual combat process. Secondly, building upon the combat network model, a spatiotemporal resilience optimization model is proposed, which reformulates the resilience optimization problem as a classical linear optimization model with spatial features. Furthermore, the model is extended from scenarios without obstacles to those with obstacles, thereby further emphasizing the importance of spatial characteristics. Thirdly, a resilience-oriented recovery optimization method based on an improved non-dominated sorting genetic algorithm II (R-INSGA) is proposed to determine the optimal recovery sequence for the damaged entities. This method not only considers spatial features but also provides the optimal travel path for multiple recovery teams. Finally, the feasibility, effectiveness, and superiority of the CSoS-STRE are demonstrated through a case study. Simultaneously, under deliberate attack conditions based on degree centrality and maximum operational loop performance, the proposed CSoS-STRE method is compared with six baseline recovery strategies, which are based on performance, time, degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. The comparison demonstrates that CSoS-STRE achieves faster convergence and superior performance.

Keywords: space-time resilience enhancement, resilience optimization model, combat system-of-systems, recovery optimization method, no-obstacles and obstacles

Procedia PDF Downloads 63
418 A Comparative Analysis of a Custom Optimization Experiment with Confidence Intervals in AnyLogic and OptQuest

Authors: Felipe Haro, Soheila Antar

Abstract:

This paper introduces a custom optimization experiment developed in AnyLogic, based on genetic algorithms, designed to ensure reliable optimization results by incorporating Monte Carlo simulations and achieving a specified confidence level. To validate the custom experiment, we compared its performance with AnyLogic's built-in OptQuest optimization method across three distinct problems. Statistical analyses, including Welch's t-test, were conducted to assess the differences in performance. The results demonstrate that while the custom experiment shows advantages in certain scenarios, both methods perform comparably in others, confirming the custom approach as a reliable and effective tool for optimization under uncertainty.
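
The statistical comparison itself takes only a few lines; the sketch below runs Welch's t-test (unequal variances) on hypothetical objective values from replications of the two optimizers.

```python
import numpy as np
from scipy import stats

# Welch's t-test (unequal variances) on objective values from replications of
# the two optimizers. The sample values are made up for illustration.

custom_ga = np.array([101.2, 99.8, 100.5, 102.1, 100.9])   # custom experiment replications
optquest  = np.array([100.1, 100.4, 99.6, 100.8, 100.2])   # OptQuest replications

t, p = stats.ttest_ind(custom_ga, optquest, equal_var=False)  # equal_var=False -> Welch
print(f"t = {t:.3f}, p = {p:.3f}")   # p > 0.05 -> no significant performance difference
```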

Keywords: optimization, confidence intervals, Monte Carlo simulation, OptQuest, AnyLogic

Procedia PDF Downloads 40
417 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis

Authors: Maher Ali Rusho, Sudipta Halder

Abstract:

The purpose of the study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programs were written for the examined models of parallel computing systems. The result of the parallel sorting code is a sorted array of random numbers. When processing data in parallel, the time spent on processing and the first elements of the list of squared numbers are displayed. When processing requests asynchronously, completion messages are displayed for each task with a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as the construction of diagrams and comparison of these methods by characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results of the work performed showed a substantial improvement in response time, bandwidth, and resource efficiency in parallel computing systems. Scalability and load analysis assessments were conducted, demonstrating how the system responds to an increase in data volume or the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in the models, which improved the architecture and implementation of parallel computing systems. The obtained results emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.

Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming

Procedia PDF Downloads 39
416 A Particle Filter-Based Data Assimilation Method for Discrete Event Simulation

Authors: Zhi Zhu, Boquan Zhang, Tian Jing, Jingjing Li, Tao Wang

Abstract:

Data assimilation is a model and data hybrid-driven method that dynamically fuses new observation data with a numerical model to iteratively approach the real system state. It is widely used in state prediction and parameter inference of continuous systems. Because of the discrete event system’s non-linearity and non-Gaussianity, the traditional Kalman Filter, based on linear and Gaussian assumptions, cannot perform data assimilation for such systems, so the particle filter has gradually become a technical approach for discrete event simulation data assimilation. Hence, we propose a particle filter-based discrete event simulation data assimilation method and take an unmanned aerial vehicle (UAV) maintenance service system as a proof of concept to conduct simulation experiments. The experimental results showed that the filtered state data are closer to the real state of the system, which verifies the effectiveness of the proposed method. This research can provide a reference framework for the data assimilation process of other complex nonlinear systems, such as discrete-time and agent-based simulation.
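
The assimilation loop can be summarized in a generic bootstrap particle filter, sketched below with a random-walk stand-in for the simulation model and a Gaussian observation likelihood; the UAV maintenance model itself is not reproduced.

```python
import numpy as np

# Generic bootstrap particle filter: propagate particles through the model,
# weight them by the likelihood of each observation, and resample. The
# random-walk model and Gaussian likelihood are placeholders.

rng = np.random.default_rng(0)
n = 1000
particles = rng.normal(0.0, 1.0, n)          # initial belief over the system state
weights = np.full(n, 1.0 / n)

def assimilate(particles, weights, observation, obs_std=0.5):
    particles = particles + rng.normal(0.0, 0.1, n)               # model step (prediction)
    lik = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights = weights * lik
    weights /= weights.sum()                                      # normalize
    idx = rng.choice(n, size=n, p=weights)                        # multinomial resampling
    return particles[idx], np.full(n, 1.0 / n)

for z in [0.2, 0.4, 0.7, 1.1]:               # stream of observations
    particles, weights = assimilate(particles, weights, z)
    print("state estimate:", particles.mean())
```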

Keywords: discrete event simulation, data assimilation, particle filter, model and data-driven

Procedia PDF Downloads 40
415 Image Recognition and Anomaly Detection Powered by GANs: A Systematic Review

Authors: Agastya Pratap Singh

Abstract:

Generative Adversarial Networks (GANs) have emerged as powerful tools in the fields of image recognition and anomaly detection due to their ability to model complex data distributions and generate realistic images. This systematic review explores recent advancements and applications of GANs in both image recognition and anomaly detection tasks. We discuss various GAN architectures, such as DCGAN, CycleGAN, and StyleGAN, which have been tailored to improve accuracy, robustness, and efficiency in visual data analysis. In image recognition, GANs have been used to enhance data augmentation, improve classification models, and generate high-quality synthetic images. In anomaly detection, GANs have proven effective in identifying rare and subtle abnormalities across various domains, including medical imaging, cybersecurity, and industrial inspection. The review also highlights the challenges and limitations associated with GAN-based methods, such as instability during training and mode collapse, and suggests future research directions to overcome these issues. Through this review, we aim to provide researchers with a comprehensive understanding of the capabilities and potential of GANs in transforming image recognition and anomaly detection practices.
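
For readers new to the adversarial objective underlying all the reviewed architectures, the sketch below shows one minimal GAN training step in PyTorch; the tiny MLPs and data shapes are placeholders, not any reviewed model.

```python
import torch
import torch.nn as nn

# One minimal GAN training step. The tiny MLPs stand in for real generators
# and discriminators such as those in DCGAN or StyleGAN.

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 784)                       # stand-in for a real image batch
z = torch.randn(32, 64)

# Discriminator step: push real scores up, fake scores down.
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator into scoring fakes as real.
loss_g = bce(D(G(z)), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```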

Keywords: generative adversarial networks, image recognition, anomaly detection, DCGAN, CycleGAN, StyleGAN, data augmentation

Procedia PDF Downloads 42
414 Efficient Ground Targets Detection Using Compressive Sensing in Ground-Based Synthetic-Aperture Radar (SAR) Images

Authors: Gherbi Nabil

Abstract:

Detection of ground targets in SAR radar images is an important area of radar information processing. In the literature, various algorithms have been discussed in this context; however, most of them suffer from low robustness and accuracy. To this end, we discuss target detection in SAR images based on compressive sensing. Firstly, traditional SAR image target detection algorithms are discussed and their limitations are highlighted. Secondly, a compressive sensing method is proposed based on the sparsity of SAR images. Next, the detection problem is solved using a Multiple Measurement Vectors (MMV) configuration. Furthermore, a robust Alternating Direction Method of Multipliers (ADMM) is developed to solve the optimization problem. Finally, the detection results obtained using raw complex data are presented. Experimental results on real SAR images have verified the effectiveness of the proposed algorithm.
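
The flavor of the ADMM solver can be conveyed with a single-vector sketch of the l1-regularized recovery problem, a simplification of the MMV setting described above; sizes and penalty parameters below are illustrative.

```python
import numpy as np

# ADMM for the l1-regularized problem  min_x 0.5*||A x - b||^2 + lam*||x||_1,
# a single-vector analogue of the MMV detection step. Sizes are illustrative.

rng = np.random.default_rng(1)
m, n, k = 64, 256, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
b = A @ x_true

lam, rho = 0.01, 1.0
x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))     # factor once, reuse every iteration

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(200):
    rhs = A.T @ b + rho * (z - u)
    x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update (least squares)
    z = soft(x + u, lam / rho)                         # z-update (sparsity prox)
    u = u + x - z                                      # dual update

print("recovered support:", np.nonzero(np.abs(z) > 1e-3)[0])
print("true support:     ", np.nonzero(x_true)[0])
```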

Keywords: compressive sensing, raw complex data, synthetic aperture radar, ADMM

Procedia PDF Downloads 44
413 Scalable Performance Testing: Facilitating the Assessment of Application Performance Under Substantial Loads and Mitigating the Risk of System Failures

Authors: Solanki Ravirajsinh

Abstract:

In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools like JMeter, EC2 instances (virtual machines), CloudWatch logs (for monitoring errors and logs), EFS (file storage system), and security groups offers several key benefits for organizations. Creating a performance testing framework using this approach helps optimize resource utilization, enables effective benchmarking, increases reliability, and saves costs by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance. The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master. The master then consolidates these results into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle 5-10 million or more requests within 5 minutes, depending on the server capacity of the hosted website or application.

Keywords: identify crashes of application under heavy load, JMeter with cloud services, scalable performance testing, JMeter master and slave using cloud services

Procedia PDF Downloads 47
412 Unlocking E-commerce: Analyzing User Behavior and Segmenting Customers for Strategic Insights

Authors: Aditya Patil, Arun Patil, Vaishali Patil, Sudhir Chitnis, Anjum Patel

Abstract:

Rapid growth has given e-commerce platforms a wealth of customer behavior and spending data. To optimize their strategies, businesses must understand how customers use online shopping platforms and what influences their purchases. Our research focuses on e-commerce user behavior and purchasing trends, examining spending and user behavior through regression and clustering. Multilevel regression reveals user spending trends and lets us analyze how pricing, user demographics, and product categories affect customer purchase decisions, while clustering groups consumers by spending and shows that purchase habits vary across user groups. Our analysis illuminates the complex world of e-commerce consumer behavior and purchase trends; understanding this behavior helps create effective marketing strategies, and K-means clustering is well suited to this market. The study focuses on tailoring strategies to user groups and improving product and pricing effectiveness. K-means clusters revealed customer buying behaviors across categories: average spending is highest in Cluster 4 and lowest in Cluster 3, and clothing is less popular than gadgets and appliances around the holidays. Each cluster's spending distribution is examined using averaged variables. Our research enhances e-commerce analytics, and companies can use this data to improve customer service and decision-making.
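
The segmentation step can be sketched with scikit-learn's K-means on synthetic spending features; k=4 mirrors the four clusters discussed above, while the feature columns are our assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# K-means customer segmentation on synthetic spending features. The feature
# columns are assumptions; k=4 mirrors the four clusters discussed above.

rng = np.random.default_rng(42)
# columns: [avg_order_value, orders_per_month, electronics_share, clothing_share]
X = rng.normal(loc=[60, 3, 0.5, 0.3], scale=[25, 1.5, 0.2, 0.15], size=(500, 4))

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
for c in range(4):
    members = X[km.labels_ == c]
    print(f"cluster {c}: {len(members)} users, avg spend {members[:, 0].mean():.1f}")
```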

Keywords: e-commerce, regression, clustering, k-means

Procedia PDF Downloads 44
411 Dependence of the Structural, Electrical and Magnetic Properties of YBa2Cu3O7−δ Bulk Superconductor on the Sm Doping

Authors: Raheleh Hajilou

Abstract:

In this study, we report the synthesis and characterization of the YBa2Cu3O7-δ (YBCO) high-temperature superconductor prepared by the solid-state method and doped with Sm at different weight percentages (0, 0.01, 0.02, and 0.05 wt.%). X-ray diffraction (XRD) analysis confirms the formation of the orthorhombic superconducting phase in our samples. This is an important finding and indicates that the samples may exhibit superconducting properties under certain conditions. Our results unequivocally point to a different structural order or disorder in Sm/Y samples as compared to Sm-based samples. We suggest that different site preferences of oxygen vacancies, predominantly created in the CuO2 planes (CuO chains) of Y- and Sm-based samples, might be responsible for the observed difference in behavior. This contention is supported by a host of other considerations and experimental observations. The study investigated the effects of Sm doping in the YBCO system on various properties, including structure, critical temperature (Tc), scanning electron microscopy (SEM), the irreversibility line (IL), critical current density Jc, and flux-line pinning force. The sample with x = 0.05 appears to undergo an insulator transition, which suppresses its superconducting transition temperature (Tc). Additionally, magnetization was measured as a function of temperature (M-T), and magnetization loops (M-H) were recorded at constant temperatures of 10, 20, 30, 40, 50, and 60 K in fields up to 10 kG.

Keywords: high-Tc superconductors, scanning electron microscopy, X-ray scattering, irreversibility line

Procedia PDF Downloads 47
410 The Effect of Artificial Intelligence on Communication and Information Systems

Authors: Sameh Ibrahim Ghali Hanna

Abstract:

Information systems (IS) are crucial to the operation of private and public establishments in developing and developed countries. Developing countries are saddled with many project failures during the implementation of information systems. However, successful information systems are greatly needed in developing nations to enhance their economies. This paper is therefore important in view of the high failure rate of information systems in developing countries, which needs to be reduced to acceptable levels through the recommended interventions. The paper centers on a review of IS development in developing countries. It presents evidence of IS successes and failures in developing nations and posits a model to address the IS failures. The proposed model can then be used by developing countries to reduce their IS project implementation failure rate. A comparison is drawn between IS development in developing and developed countries. The paper provides valuable information to assist in reducing IS failure and in developing IS models and theories on IS development for developing countries.

Keywords: research information systems (RIS), research information, heterogeneous sources, data quality, data cleansing, science system, standardization, artificial intelligence (AI), enterprise information system (EIS), integration, developing countries, information systems, IS development, information systems failure, information systems success, information systems success model

Procedia PDF Downloads 43
409 Digital Media Market, Multimedia, and Computer Graphic Analysis Amidst Fluctuating Global and Local Scale Economy

Authors: Essang Anwana Onuntuei, Chinyere Blessing Azunwoke

Abstract:

The study centred on investigating the influence of multimedia systems and computer graphic design on global and local scale economies. Firstly, the study pinpointed the significant participants and the top five global digital media distributors in the digital media market. Then, the study investigated whether a tie or variance existed between digital media vendors and their market shares. Also, the paper probed whether the global and local desktop, mobile, and tablet markets differ while assessing the association between the top five digital media and global market shares. Finally, the study explored the extent of growth, economic gains, major setbacks, and opportunities within the industry amidst global and local scale economic flux. A multiple regression analysis method was employed to analyse the significant influence of the top five global digital media on the total market share, and the Analysis of Variance (ANOVA) was used to analyse the global digital media vendor market share data. The findings were intriguing and significant.

Keywords: computer graphics, digital media market, global market share, market size, media vendor, multimedia, social media, systems design

Procedia PDF Downloads 61
408 Impact Assessment of Information Communication, Network Providers, Teledensity, and Consumer Complaints on Gross Domestic Products

Authors: Essang Anwana Onuntuei, Chinyere Blessing Azunwoke

Abstract:

The study used secondary data from foreign and local organizations to explore major challenges and opportunities in information communication. The study aimed at exploring the tie between teledensity (network coverage area) and the number of network subscriptions, probing whether the degree of consumer complaints varies significantly among network providers, and assessing whether network subscriptions significantly influence the sector’s GDP contribution. Methods used for data analysis include the Pearson product-moment correlation, regression analysis, and the Analysis of Variance (ANOVA). At a 0.05 significance level (two-tailed), the findings established that about 85.6% of the variation in network subscriptions was explained by teledensity (network coverage area); the degree of consumer complaints varied significantly among network providers, as 80.158291 (F calculated) > 3.490295 (F critical) with a highly significant p-value of 0.000000 < 0.05; and finally, 65% of the nation’s GDP was explained by network subscriptions, showing a high association.
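
Both tests are straightforward to reproduce with SciPy; the numbers below are made up for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

# Pearson correlation between teledensity and subscriptions, and a one-way
# ANOVA of complaint counts across providers. All numbers are illustrative.

teledensity   = np.array([45.1, 52.3, 61.0, 68.4, 75.9, 83.2])       # % coverage per period
subscriptions = np.array([98.0, 112.5, 131.0, 148.2, 160.7, 178.9])  # millions

r, p = stats.pearsonr(teledensity, subscriptions)
print(f"r = {r:.3f}, r^2 = {r**2:.3f}, p = {p:.4f}")   # r^2 ~ share of variance explained

complaints_a = [120, 135, 128, 140]     # provider A, complaints per quarter
complaints_b = [210, 225, 198, 240]     # provider B
complaints_c = [95, 88, 102, 91]        # provider C
f, p = stats.f_oneway(complaints_a, complaints_b, complaints_c)
print(f"F = {f:.2f}, p = {p:.6f}")      # F > F_critical -> complaint levels differ
```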

Keywords: teledensity, subscription, network coverage, information communication, consumer

Procedia PDF Downloads 75
407 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop

Authors: A. Legrand, S. Bornhofen

Abstract:

Lenia is a system of cellular automata with continuous states, space and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL Interoperability. We demonstrate how CUDA as a low-level GPU programming paradigm allows optimizing performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids and over longer periods without annoying waiting times. Thereby, they enable the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when we include additional time-consuming algorithms such as computer vision or machine learning to evolve and optimize specific Lenia configurations. We developed a Lenia implementation for GPU using the C++ and CUDA programming languages, and CUDA/OpenGL Interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation compared to the existing ones in terms of speed, memory usage, configurability and scalability. In our comparison we focus on the most important Lenia implementations, selected for their prominence, accessibility and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate the ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic. 
The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the Alife community for further development.
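
For reference, a single Lenia update step fits in a few lines of NumPy, including the FFT-based convolution whose GPU speed-up is discussed above; the parameters follow commonly used example values and are illustrative, and the CUDA version accelerates essentially this computation.

```python
import numpy as np

# One Lenia update step with FFT-based toroidal convolution. Parameters
# (R, mu, sigma, dt) follow commonly used example values; illustrative only.

N, R, mu, sigma, dt = 256, 13, 0.15, 0.017, 0.1

# Ring-shaped kernel: a smooth bell over normalized radius, normalized to sum 1.
y, x = np.ogrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.sqrt(x * x + y * y) / R
mask = (r > 0) & (r < 1)
K = np.zeros_like(r)
K[mask] = np.exp(4.0 - 1.0 / (r[mask] * (1.0 - r[mask])))   # smooth ring kernel
K /= K.sum()
K_fft = np.fft.fft2(np.fft.ifftshift(K))                     # precompute once

def growth(u):
    return 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0

def step(A):
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * K_fft))        # convolution via FFT
    return np.clip(A + dt * growth(U), 0.0, 1.0)

A = np.random.default_rng(0).random((N, N))                  # random initial world
for _ in range(100):
    A = step(A)
```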

Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis

Procedia PDF Downloads 72
406 Drive Sharing with Multimodal Interaction: Enhancing Safety and Efficiency

Authors: Sagar Jitendra Mahendrakar

Abstract:

Exploratory testing is a dynamic and adaptable method of software quality assurance that is frequently praised for its ability to find hidden flaws and improve the overall quality of the product. In contrast to scripted testing methodologies that use preset test cases, exploratory testing allows testers to explore the software application dynamically, relying primarily on tester intuition, creativity, and adaptability. There are several tools and techniques that can aid testers in the exploratory testing process, which we will be discussing in this talk. Tests of this kind are able to find bugs that are harder to find during structured testing or that other testing methods may have overlooked. The purpose of this abstract is to examine the nature and importance of exploratory testing in modern software development methods. It explores the fundamental ideas of exploratory testing, highlighting the value of domain knowledge and tester experience in spotting possible problems that may escape the notice of traditional testing methodologies. Throughout the software development lifecycle, exploratory testing promotes quick feedback loops and continuous improvement by giving testers the ability to make decisions in real time based on their observations. This abstract also clarifies the unique features of exploratory testing, like its non-linearity and capacity to replicate user behavior in real-world settings. Testers can find intricate bugs, usability problems, and edge cases in software through impromptu exploration that might otherwise go undetected. Exploratory testing's flexible and iterative structure fits in well with agile and DevOps processes, allowing for a quicker time to market without sacrificing the quality of the final product.

Keywords: exploratory, testing, automation, quality

Procedia PDF Downloads 72
405 Threshold (K, P) Quantum Distillation

Authors: Shashank Gupta, Carlos Cid, William John Munro

Abstract:

Quantum distillation is the task of concentrating quantum correlations present in N imperfect copies into M perfect copies (M < N) using free operations involving all P parties sharing the quantum correlation. We present a threshold quantum distillation task where the same objective is achieved using a smaller number of parties (K < P). In particular, we give exact local filtering operations for the participating parties sharing a high-dimensional multipartite entangled state to distill the perfect quantum correlation. We then establish a connection between threshold quantum entanglement distillation and quantum steering distillation, and show that threshold distillation might work in scenarios where general distillation protocols such as DEJMPS do not.

Keywords: quantum networks, quantum distillation, quantum key distribution, entanglement distillation

Procedia PDF Downloads 66
404 Parallel Computing: Offloading Matrix Multiplication to GPU

Authors: Bharath R., Tharun Sai N., Bhuvan G.

Abstract:

This project focuses on developing a parallel computing method aimed at optimizing matrix multiplication through GPU acceleration. Addressing algorithmic challenges, GPU programming intricacies, and integration issues, the project aims to enhance efficiency and scalability. The methodology involves algorithm design, GPU programming, and optimization techniques. Future plans include advanced optimizations, extended functionality, and integration with high-level frameworks. User engagement is emphasized through user-friendly interfaces, open-source collaboration, and continuous refinement based on feedback. The project's impact extends to significantly improving matrix multiplication performance in scientific computing and machine learning applications.
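
As a sketch of the offloading approach, the example below uses Numba's CUDA JIT from Python, a stand-in for a raw CUDA C++ kernel: each GPU thread computes one output element. Shared-memory tiling, a typical next optimization, is omitted.

```python
import numpy as np
from numba import cuda

# Naive CUDA matrix-multiplication kernel via Numba's JIT: one thread per
# output element. Illustrative sketch, not the project's implementation.

@cuda.jit
def matmul_kernel(A, B, C):
    i, j = cuda.grid(2)                      # global thread coordinates
    if i < C.shape[0] and j < C.shape[1]:
        acc = 0.0
        for k in range(A.shape[1]):
            acc += A[i, k] * B[k, j]
        C[i, j] = acc

n = 1024
A = np.random.rand(n, n).astype(np.float32)
B = np.random.rand(n, n).astype(np.float32)
C = np.zeros((n, n), dtype=np.float32)

threads = (16, 16)                            # 256 threads per block
blocks = ((n + 15) // 16, (n + 15) // 16)
matmul_kernel[blocks, threads](A, B, C)       # Numba handles host<->device copies
print(np.allclose(C, A @ B, atol=1e-2))       # loose tolerance for float32 sums
```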

Keywords: matrix multiplication, parallel processing, CUDA, performance boost, neural networks

Procedia PDF Downloads 76