World Academy of Science, Engineering and Technology
Computer and Systems Engineering
Online ISSN : 1307-6892
361 Deployed Confidence: The Testing in Production
Authors: Shreya Asthana
Abstract:
Testers often learn that a feature they validated on staging works in production only after the release goes live. Sometimes something breaks in production, and testers find out only through a bug raised by an end user. Panic sets in when staging test results no longer reflect current production behavior, and testers begin to doubt their own skills once a user, rather than the test suite, reports the defect. Testers can deploy their confidence on release day by testing in production. Once testing in production is adopted, test results become more accurate because they run against real-time data, and execution is somewhat faster than on staging because bad data is eliminated. Feature flagging, canary releases, and data cleanup help achieve this testing technique. This paper explains the steps needed to test in production before making a feature live and to adapt an IT company's testing procedure so that testers can provide a bug-free experience to end users. The study is valuable because many people still believe testing belongs in staging, not production, and it is high time to move them from that old mindset into a new testing world. At the end of the day, all that matters is whether the features work in production.
Keywords: bug-free production, new testing mindset, testing strategy, testing approach
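A minimal sketch of the feature-flagging technique the abstract names: the new code path is gated behind a flag so allow-listed testers can exercise it in production while everyone else stays on the stable path. All names (flag, users, functions) are hypothetical illustrations, not from the paper.

```python
# Minimal feature-flag gate: the new code path runs in production only for
# allow-listed tester accounts until the flag is enabled for all users.
FLAGS = {
    "new_checkout_flow": {"enabled": False,
                          "allow_list": {"qa_tester_1", "qa_tester_2"}},
}

def is_enabled(flag_name: str, user_id: str) -> bool:
    flag = FLAGS.get(flag_name)
    if flag is None:
        return False
    return flag["enabled"] or user_id in flag["allow_list"]

def checkout(user_id: str) -> str:
    if is_enabled("new_checkout_flow", user_id):
        return "new checkout flow"   # tested in production by allow-listed testers
    return "stable checkout flow"    # everyone else stays on the proven path

print(checkout("qa_tester_1"))  # new checkout flow
print(checkout("customer_42"))  # stable checkout flow
```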
Procedia PDF Downloads 75
360 AutoML: Comprehensive Review and Application to Engineering Datasets
Authors: Parsa Mahdavi, M. Amin Hariri-Ardebili
Abstract:
The development of accurate machine learning and deep learning models traditionally demands hands-on expertise and a solid background to fine-tune hyperparameters. With the continuous expansion of datasets in various scientific and engineering domains, researchers increasingly turn to machine learning methods to unveil hidden insights that may elude classic regression techniques. This surge in adoption raises concerns about the adequacy of the resultant meta-models and, consequently, the interpretation of the findings. In response to these challenges, automated machine learning (AutoML) emerges as a promising solution, aiming to construct machine learning models with minimal intervention or guidance from human experts. AutoML encompasses crucial stages such as data preparation, feature engineering, hyperparameter optimization, and neural architecture search. This paper provides a comprehensive overview of the principles underpinning AutoML, surveying several widely used AutoML platforms. Additionally, the paper offers a glimpse into the application of AutoML on various engineering datasets. By comparing these results with those obtained through classical machine learning methods, the paper quantifies the uncertainties inherent in the application of a single ML model versus the holistic approach provided by AutoML. These examples showcase the efficacy of AutoML in extracting meaningful patterns and insights, emphasizing its potential to revolutionize the way we approach and analyze complex datasets.
Keywords: automated machine learning, uncertainty, engineering dataset, regression
Procedia PDF Downloads 60
359 AI-Based Information System for Hygiene and Safety Management of Shared Kitchens
Authors: Jongtae Rhee, Sangkwon Han, Seungbin Ji, Junhyeong Park, Byeonghun Kim, Taekyung Kim, Byeonghyeon Jeon, Jiwoo Yang
Abstract:
The shared kitchen is a concept that transfers the value of the sharing economy to the kitchen. It is a type of kitchen equipped with cooking facilities that allows multiple companies or chefs to share time and space and use it jointly. Shared kitchens provide economic benefits and convenience, such as reduced investment costs and rent, but they also increase safety management risks, such as cross-contamination of food ingredients. Therefore, in a shared kitchen where several entities jointly use the space and handle various types of food ingredients, it is critical to manage the following: the freshness of food ingredients, user hygiene and safety, and cross-contamination of cooking equipment and facilities. This study proposes a machine learning-based system for hygiene safety and cross-contamination management, which are highly difficult to manage. User clothing management and user access management, which are most relevant to the hygiene and safety of shared kitchens, are solved through machine learning-based methods, and cutting board usage management, which is most relevant to cross-contamination management, is implemented as an integrated safety management system based on artificial intelligence. First, to prevent cross-contamination of food ingredients, images collected through a real-time camera are used to determine whether the food ingredients match a given cutting board based on a real-time object detection model, YOLO v7. To manage the hygiene of user clothing, a camera-based facial recognition model recognizes the user, and a real-time object detection model determines whether a sanitary hat and mask are worn. In addition, to manage access for users qualified to enter the shared kitchen, a machine learning-based signature recognition module is utilized. By comparing the pairwise distance between the contract signature and the signature given at the time of entrance to the shared kitchen, access permission is determined through a pre-trained signature verification model. These machine learning-based safety management tasks are integrated into a single information system, and each result is managed in an integrated database. Through this, users are warned of safety dangers through a tablet PC installed in the shared kitchen, and managers can trace the causes of sanitary and safety accidents. System integration analysis shows that real-time safety management services can be provided continuously by artificial intelligence and that machine learning-based methods support the integrated safety management of shared kitchens with dynamic contracts among various users. By solving this problem, the feasibility and safety of the shared kitchen business can be secured.
Keywords: artificial intelligence, food safety, information system, safety management, shared kitchen
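The access-control decision described above compares a pairwise distance between embeddings of the contract signature and the entrance signature. A minimal sketch of that decision rule follows, with a mocked embedding function standing in for the pre-trained signature verification model (the threshold and all names are illustrative assumptions):

```python
import numpy as np

def embed_signature(image: np.ndarray) -> np.ndarray:
    # Stand-in for a pre-trained signature-verification network that maps a
    # signature image to a feature vector (the real model is not public).
    return image.reshape(-1).astype(float) / (np.linalg.norm(image) + 1e-9)

def access_granted(contract_sig: np.ndarray, entrance_sig: np.ndarray,
                   threshold: float = 0.5) -> bool:
    # Pairwise (Euclidean) distance between the two embeddings; a small
    # distance means the entrance signature matches the contract signature.
    d = np.linalg.norm(embed_signature(contract_sig) - embed_signature(entrance_sig))
    return d < threshold

rng = np.random.default_rng(0)
sig = rng.random((32, 32))
print(access_granted(sig, sig + 0.01 * rng.random((32, 32))))  # likely True
print(access_granted(sig, rng.random((32, 32))))               # likely False
```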
Procedia PDF Downloads 67
358 Scheduling in Cloud Networks Using Chakoos Algorithm
Authors: Masoumeh Ali Pouri, Hamid Haj Seyyed Javadi
Abstract:
Nowadays, cloud computing is one of the important issues in information technology. Since the scheduling of task graphs is an NP-hard problem, approaches based on non-deterministic methods such as evolutionary processing, most notably genetic and cuckoo algorithms, are effective. Therefore, an efficient algorithm is proposed for scheduling task graphs to obtain an appropriate schedule with minimum completion time. The new approach in this algorithm is based on shortening the length of the critical path and reducing communication cost. The results obtained from implementing the presented method show that this algorithm performs the same as other algorithms on graphs without communication costs, and performs faster and better than algorithms such as DSC and MCP on graphs involving communication costs.
Keywords: cloud computing, scheduling, task graphs, chakoos algorithm
Procedia PDF Downloads 63
357 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training
Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto
Abstract:
In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer and network technology, to civil engineering and construction sites is accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts is a type of work that has been carried out by skilled workers based on years of experience and is one of the tasks that is difficult for inexperienced workers to perform on site. Wave-dissipating blocks are structures designed to protect coasts and beaches from erosion by reducing the energy of ocean waves. They usually weigh more than 1 t and are installed while suspended from a crane, so training inexperienced workers on-site would be time-consuming and costly. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity, defined here as the ratio of the total volume of the wave-dissipating blocks inside the structure to the volume of the ideal final structure shape. Using this porosity evaluation, the simulator can determine how well the user has installed the blocks. A voxelization technique is used to calculate the porosity of the structure, simplifying the calculations, and other techniques, such as raycasting and box overlapping, are employed for accurate simulation. In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization, and the user-demonstrated installation will be compared with the appropriate installation solved by the algorithm.
Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks
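A sketch of the voxel-based porosity calculation under the definition stated above (ratio of block volume inside the target region to the ideal volume); the occupancy grids here are toy stand-ins for the simulator's voxelized geometry:

```python
import numpy as np

def porosity(block_voxels: np.ndarray, ideal_voxels: np.ndarray) -> float:
    """Ratio of block volume inside the ideal structure to the ideal volume.

    Both arrays are boolean 3-D occupancy grids of the same shape, produced
    by voxelizing the placed blocks and the ideal design shape.
    """
    filled = np.logical_and(block_voxels, ideal_voxels).sum()
    return filled / ideal_voxels.sum()

# Toy example: a 10x10x10 ideal region, about 60% randomly filled by blocks.
rng = np.random.default_rng(1)
ideal = np.ones((10, 10, 10), dtype=bool)
blocks = rng.random((10, 10, 10)) < 0.6
print(f"porosity score: {porosity(blocks, ideal):.2f}")
```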
Procedia PDF Downloads 102
356 Software Verification of Systematic Resampling for Optimization of Particle Filters
Authors: Osiris Terry, Kenneth Hopkinson, Laura Humphrey
Abstract:
Systematic resampling is the most widely used resampling method in particle filters. This paper seeks to further the understanding of systematic resampling by deriving a formula from the variables of the sampling equation and the particle weights, and verifying it via SPARK, a software verification language. The verified formula states that the minimum and maximum number of samples taken of a particle equal the floor and ceiling, respectively, of the particle's weight divided by the sampling interval. This allows for the creation of a randomness spectrum within which each resampling method falls. Methods on the lower end, e.g., systematic resampling, have less randomness and are thus quicker to reach an estimate. Although lower randomness reduces error by biasing selection toward larger weights, this bias creates vulnerabilities to noise in the environment, e.g., jamming. This is a first step toward characterizing each resampling method, allowing target-tracking engineers to pick the resampling method best suited to their environment instead of simply the most popular one.
Keywords: SPARK, software verification, resampling, systematic resampling, particle filter, tracking
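A standard implementation of systematic resampling that illustrates the verified property: with sampling interval 1/N, particle i is drawn either floor(w_i * N) or ceil(w_i * N) times. The weights below are an illustrative example.

```python
import numpy as np

def systematic_resample(weights: np.ndarray, u0: float) -> np.ndarray:
    """Return resampled particle indices using one uniform draw u0 in [0, 1)."""
    n = len(weights)
    # N evenly spaced sample positions, all offset by the single random draw.
    positions = (u0 + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    return np.searchsorted(cumulative, positions)

weights = np.array([0.05, 0.35, 0.10, 0.50])
idx = systematic_resample(weights, u0=0.3)
counts = np.bincount(idx, minlength=len(weights))
# Each count lies between floor(w*N) and ceil(w*N); e.g. w=0.35, N=4 -> 1 or 2.
print(counts, np.floor(weights * 4), np.ceil(weights * 4))
```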
Procedia PDF Downloads 83
355 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems
Authors: Georgi Y. Georgiev, Matthew Brouillet
Abstract:
This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory about the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP). This principle suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that system information will increase, as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants is observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point. Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between the internal entropy decrease rate and the external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors like changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system, providing a foundation for further in-depth exploration of the complex behaviors of these systems and contributing to the development of more efficient self-organizing systems across various scientific fields.
Keywords: complexity, self-organization, agent-based modeling, efficiency
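A sketch of how an internal-entropy measure of the kind described can be computed from the agents' spatial distribution, assuming the world is discretized into cells and Shannon entropy is taken over cell occupancy (the discretization and names are illustrative, not the paper's exact measure):

```python
import numpy as np

def spatial_entropy(ant_positions: np.ndarray, grid_size: int) -> float:
    """Shannon entropy of the ants' distribution over grid cells.

    A uniform spread gives maximal entropy; ants concentrated on a narrow
    path give low entropy, signalling self-organization.
    """
    cells, _, _ = np.histogram2d(ant_positions[:, 0], ant_positions[:, 1],
                                 bins=grid_size, range=[[0, 1], [0, 1]])
    p = cells.flatten() / cells.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
scattered = rng.random((500, 2))                       # disordered start
on_path = np.column_stack([rng.random(500),            # ordered: a narrow band
                           0.5 + 0.01 * rng.standard_normal(500)])
print(spatial_entropy(scattered, 20), ">", spatial_entropy(on_path, 20))
```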
Procedia PDF Downloads 67
354 Mastering Test Automation: Bridging Gaps for Seamless QA
Authors: Rohit Khankhoje
Abstract:
The rapid evolution of software development practices has given rise to an increasing demand for efficient and effective test automation. This paper delves into the crucial aspects of test automation, addressing the obstacles organizations face in achieving flawless quality assurance. It highlights the importance of bridging knowledge gaps within organizations, emphasizing the need for management to acquire a deeper comprehension of test automation scenarios, coverage, report trends, and the importance of communication. To tackle these challenges, the paper introduces innovative solutions, including an automation framework that integrates seamlessly with test cases and reporting tools such as TestRail and Jira. This integration facilitates the automatic recording of bugs in Jira, improving bug reporting and communication between manual QA and automation teams, while TestRail is updated with newly added automated test cases as soon as they become part of the automation suite. The paper demonstrates how this framework empowers management by providing clear insights into ongoing automation activities, bug origins, trend analysis, and test case specifics. "Mastering Test Automation" serves as a comprehensive guide for organizations aiming to enhance their quality assurance processes through effective test automation. It not only identifies the common pitfalls and challenges but also offers practical solutions to bridge the gaps, resulting in a more streamlined and efficient QA process.
Keywords: automation framework, API integration, test automation, test management tools
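A sketch of the automatic bug-recording step using Jira's public REST endpoint for issue creation (POST /rest/api/2/issue). The URL, project key, and credentials are placeholders, and the surrounding framework code is hypothetical, not the paper's implementation:

```python
import requests

def record_bug_in_jira(test_name: str, error_log: str) -> str:
    """Create a Jira bug for a failed automated test and return its key."""
    payload = {
        "fields": {
            "project": {"key": "QA"},                     # placeholder project
            "summary": f"Automation failure: {test_name}",
            "description": error_log,
            "issuetype": {"name": "Bug"},
        }
    }
    resp = requests.post(
        "https://your-company.atlassian.net/rest/api/2/issue",  # placeholder URL
        json=payload,
        auth=("bot@example.com", "api-token"),                  # placeholder auth
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["key"]
```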
Procedia PDF Downloads 72
353 Mapping Stress in Submerged Aquatic Vegetation Using Multispectral Imagery and Structure from Motion Photogrammetry
Authors: Amritha Nair, Fleur Visser, Ian Maddock, Jonas Schoelynck
Abstract:
Inland waters such as streams sustain a rich variety of species and are essentially hotspots for biodiversity. Submerged aquatic vegetation, also known as SAV, forms an important part of ecologically healthy river systems. Direct and indirect human influences, such as climate change, are putting stress on aquatic plant communities, ranging from the invasion of non-native species and grazing to changes in river flow conditions and temperature. There is a need to monitor SAV, because it is in a state of deterioration and its disappearance will greatly impact river ecosystems. Like terrestrial plants, SAV can show visible signs of stress. However, the techniques used to map terrestrial vegetation from its spectral reflectance are not easily transferable to a submerged environment. Optical remote sensing techniques are employed to detect stress from remotely sensed images through multispectral imagery and Structure from Motion photogrammetry. The effect of the overlying water column in the form of refraction, the attenuation of visible and near-infrared (NIR) bands in water, and constantly moving targets are key challenges that arise when remotely mapping SAV. This study looks into the possibility of mapping changes in the spectral signatures of SAV and their response to certain stresses.
Keywords: submerged aquatic vegetation, structure from motion, photogrammetry, multispectral, spectroscopy
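The vegetation indices commonly used in this kind of multispectral work have standard band-arithmetic definitions; a sketch computing NDVI, NDWI (McFeeters), and SAVI from band arrays, with the usual soil factor L = 0.5 (the toy arrays stand in for real reflectance rasters):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red + 1e-9)

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    return (green - nir) / (green + nir + 1e-9)

def savi(nir: np.ndarray, red: np.ndarray, L: float = 0.5) -> np.ndarray:
    return (nir - red) * (1 + L) / (nir + red + L + 1e-9)

rng = np.random.default_rng(0)
red, green, nir = (rng.random((4, 4)) for _ in range(3))  # toy reflectance bands
print(ndvi(nir, red).round(2))
```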
Procedia PDF Downloads 99
352 A Systematic Literature Review on Security and Privacy Design Patterns
Authors: Ebtehal Aljedaani, Maha Aljohani
Abstract:
Privacy and security patterns are both important for developing software that protects users' data and privacy. Privacy patterns are designed to address common privacy problems, such as unauthorized data collection and disclosure. Security patterns are designed to protect software from attack and ensure reliability and trustworthiness. Using privacy and security patterns, software engineers can implement security and privacy by design principles, which means that security and privacy are considered throughout the software development process. These patterns are available to translate "security & privacy-by-design" into practical advice for software engineering. Previous research on privacy and security patterns has typically focused on one category of patterns at a time. This paper aims to bridge this gap by merging the two categories and identifying their similarities and differences. To do this, the authors conducted a systematic literature review of 25 research papers on privacy and security patterns. The papers were analysed based on the category of the pattern, the classification of the pattern, and the security requirements that the pattern addresses. This paper presents the results of a comprehensive review of privacy and security design patterns. The review is intended to help future IT designers understand the relationship between the two types of patterns and how to use them to design secure and privacy-preserving software. The paper provides a clear classification of privacy and security design patterns, along with examples of each type. The authors found that there is only one widely accepted classification of privacy design patterns, while there are several competing classifications of security design patterns. Three types of security design patterns were found to be the most commonly used.
Keywords: design patterns, security, privacy, classification of patterns, security patterns, privacy patterns
Procedia PDF Downloads 131
351 Mammographic Multi-View Cancer Identification Using Siamese Neural Networks
Authors: Alisher Ibragimov, Sofya Senotrusova, Aleksandra Beliaeva, Egor Ushakov, Yuri Markin
Abstract:
Mammography plays a critical role in screening for breast cancer in women, and artificial intelligence has enabled the automatic detection of diseases in medical images. Many of the current techniques used for mammogram analysis focus on a single view (mediolateral or craniocaudal), while in clinical practice, radiologists consider multiple views of mammograms from both breasts to make a correct decision. Consequently, computer-aided diagnosis (CAD) systems could benefit from incorporating information gathered from multiple views. This study introduces a method based on a Siamese neural network (SNN) model that simultaneously analyzes mammographic images from three views: bilateral and ipsilateral. In this way, when a decision is made on a single image of one breast, attention is also paid to two other images: a view of the same breast in a different projection and an image of the other breast. Consequently, the algorithm closely mimics the radiologist's practice of paying attention to the entire examination of a patient rather than to a single image. Additionally, to the best of our knowledge, this research represents the first experiments conducted using the recently released Vietnamese dataset of digital mammography (VinDr-Mammo). On an independent test set of images from this dataset, the best model achieved an AUC of 0.87 per image. This suggests that the approach can provide a valuable automated second opinion in the interpretation of mammograms and breast cancer diagnosis, which in the future may help alleviate the burden on radiologists and serve as an additional layer of verification.
Keywords: breast cancer, computer-aided diagnosis, deep learning, multi-view mammogram, siamese neural network
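A minimal sketch of the tri-view idea: one weight-shared (Siamese) encoder is applied to the target view, the ipsilateral view, and the bilateral view, and the fused embeddings feed a single classifier. The layer sizes and head are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class TriViewSiamese(nn.Module):
    """Shared encoder over three mammogram views, fused for one prediction."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(          # weight-shared branch (Siamese)
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 3, 1)       # fuse target/ipsilateral/bilateral

    def forward(self, target, ipsilateral, bilateral):
        z = torch.cat([self.encoder(v) for v in (target, ipsilateral, bilateral)], dim=1)
        return torch.sigmoid(self.head(z))

model = TriViewSiamese()
views = [torch.randn(2, 1, 128, 128) for _ in range(3)]
print(model(*views).shape)  # torch.Size([2, 1])
```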
Procedia PDF Downloads 136
350 Variability Management of Contextual Feature Model in Multi-Software Product Line
Authors: Muhammad Fezan Afzal, Asad Abbas, Imran Khan, Salma Imtiaz
Abstract:
The Software Product Line (SPL) paradigm is used for the development of a family of software products that share common and variable features. A feature model is a domain artifact of SPL that consists of common and variable features with predefined relationships and constraints. Multiple SPLs may consist of a number of similar common and variable features, such as mobile phones and tablets. Reusing common and variable features across different SPL domains is a complex task due to the external relationships and constraints between features in the feature model. To increase the reusability of feature model resources from domain engineering, the commonality of features must be managed at the level of SPL application development. This research proposes an approach that combines multiple SPLs into a single domain and converts them into a common feature model. Extracting the common features from different feature models is more effective and reduces the cost and time to market of application development. For extracting features from multiple SPLs, the proposed framework consists of three steps: 1) find the variation points, 2) find the constraints, and 3) combine the feature models into a single feature model on the basis of the variation points and constraints. With this approach, the reuse of features across multiple feature models increases. The impact of this research is to reduce development cost and time to market and to increase the products of an SPL.
Keywords: software product line, feature model, variability management, multi-SPLs
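A sketch of the three-step combination under simplifying assumptions: each feature model is reduced to a feature set plus a set of constraint tuples (real feature models also carry relationship types such as mandatory/optional/alternative). All example data is invented for illustration:

```python
def combine_feature_models(models):
    """Combine multiple SPL feature models into a single feature model.

    Each model is a dict: {"features": set, "constraints": set of
    (feature_a, relation, feature_b) tuples}. Steps mirror the framework:
    1) find the variation points, 2) find the constraints, 3) combine.
    """
    all_features = set().union(*(m["features"] for m in models))
    common = set.intersection(*(m["features"] for m in models))
    variation_points = all_features - common                        # step 1
    constraints = set().union(*(m["constraints"] for m in models))  # step 2
    return {"common": common, "variation_points": variation_points,
            "constraints": constraints}                             # step 3

phone = {"features": {"call", "camera", "gps"},
         "constraints": {("gps", "requires", "battery_saver")}}
tab = {"features": {"call", "camera", "stylus"},
       "constraints": {("stylus", "excludes", "keypad")}}
print(combine_feature_models([phone, tab]))
```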
Procedia PDF Downloads 66
349 Agent-Based Modeling of IoT Applications by Using Software Product Line
Authors: Asad Abbas, Muhammad Fezan Afzal, Muhammad Latif Anjum, Muhammad Azmat
Abstract:
The Internet of Things (IoT) is used to link up real objects that interact over the internet. IoT applications allow equipment to be handled and operated in accordance with environmental needs, in domains such as transportation and healthcare. IoT devices are linked together via a number of agents that act as middlemen for communications. The operation of a heat sensor, for example, differs indoors and outdoors because agent applications work with environmental variables. In this article, we suggest using the Software Product Line (SPL) to model the features of IoT agents and applications on an XML basis. XML-based feature modeling can handle the contextual diversity within the same application domain and increases the reusability of features. For the purpose of managing contextual variability, we have embraced XML for modeling IoT applications, agents, and internet-connected devices.
Keywords: IoT agents, IoT applications, software product line, feature model, XML
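A sketch of what an XML-based feature model for an IoT agent might look like, built with the Python standard library. The element and attribute names (and the indoor/outdoor context attribute) are invented for illustration, not the paper's schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML feature model for a temperature-sensing IoT agent: the
# "context" attribute captures indoor/outdoor variability within one domain.
agent = ET.Element("agent", name="heat_sensor_agent")
for feature, context, mandatory in [
    ("read_temperature", "any", "true"),
    ("high_rate_sampling", "outdoor", "false"),
    ("low_power_mode", "indoor", "false"),
]:
    ET.SubElement(agent, "feature", name=feature,
                  context=context, mandatory=mandatory)

print(ET.tostring(agent, encoding="unicode"))
```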
Procedia PDF Downloads 93
348 Analysing Techniques for Fusing Multimodal Data in Predictive Scenarios Using Convolutional Neural Networks
Authors: Philipp Ruf, Massiwa Chabbi, Christoph Reich, Djaffar Ould-Abdeslam
Abstract:
In recent years, convolutional neural networks (CNNs) have demonstrated high performance in image analysis, but oftentimes only structured data is available for a specific problem. By interpreting structured data as images, CNNs can effectively learn and extract valuable insights from tabular data, leading to improved predictive accuracy and uncovering hidden patterns that may not be apparent in traditional structured data analysis. Applying a single neural network to analyze multimodal data, e.g., both structured and unstructured information, can yield significant advantages in terms of time complexity and energy efficiency. Converting structured data into images and merging them with existing visual material offers a promising solution for applying CNNs to multimodal datasets, as they often occur in a medical context. By employing suitable preprocessing techniques, structured data is transformed into image representations in which the respective features are expressed as different formations of colors and shapes. In an additional step, these representations are fused with existing images to incorporate both types of information. The resulting image is then analyzed using a CNN.
Keywords: CNN, image processing, tabular data, mixed dataset, data transformation, multimodal fusion
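A minimal sketch of the kind of preprocessing described: each table row is normalized and reshaped into a small grayscale image so a CNN can consume it alongside ordinary pictures. The simple row-to-grid layout here is the most basic possible choice, not the paper's specific transformation:

```python
import numpy as np

def row_to_image(row: np.ndarray, side: int = 8) -> np.ndarray:
    """Map one table row to a side x side grayscale image in [0, 1]."""
    padded = np.zeros(side * side)
    scaled = (row - row.min()) / (row.max() - row.min() + 1e-9)
    padded[: len(scaled)] = scaled          # remaining pixels stay zero
    return padded.reshape(side, side)

rng = np.random.default_rng(0)
table = rng.random((100, 20))                # 100 samples, 20 numeric features
images = np.stack([row_to_image(r) for r in table])
print(images.shape)                          # (100, 8, 8), ready for a CNN
```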
Procedia PDF Downloads 122
347 Streamlining .NET Data Access: Leveraging JSON for Data Operations in .NET
Authors: Tyler T. Procko, Steve Collins
Abstract:
New features in .NET (6 and above) permit streamlined access to information residing in JSON-capable relational databases, such as SQL Server (2016 and above). Traditional methods of data access involve comparatively unnecessary steps that compromise system performance. This work posits that the established ORM (Object-Relational Mapping) based methods of data access in applications and APIs result in common issues, e.g., object-relational impedance mismatch. Recent developments in C# and .NET Core, combined with a framework of modern SQL Server coding conventions, allow better technical solutions to the problem. As an amelioration, this work details the language features and coding conventions that enable this streamlined approach, resulting in an open-source .NET library implementation called Codeless Data Access (CODA). Canonical approaches rely on ad-hoc mapping code to perform type conversions between the client and the back-end database; with CODA, no mapping code is needed, as JSON is freely mapped to SQL and vice versa. CODA streamlines API data access by improving three aspects of immediate concern to web developers, database engineers, and cybersecurity professionals: simplicity, speed, and security. Simplicity is engendered by cutting out the "middleman" steps, effectively making API data access a whitebox, whereas traditional methods are blackbox. Speed is improved because fewer translational steps are taken, and security is improved as attack surfaces are minimized. An empirical evaluation of the speed of the CODA approach in comparison to ORM approaches is provided and demonstrates that the CODA approach is significantly faster. CODA presents substantial benefits for API developer workflows by simplifying data access, resulting in better speed and security and allowing developers to focus on productive development rather than being mired in data access code. Future considerations include a generalization of the CODA method and extension outside of the .NET ecosystem to other programming languages.
Keywords: API data access, database, JSON, .NET core, SQL server
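A sketch of the underlying idea expressed in Python terms (CODA itself is a .NET library): SQL Server 2016+ can emit JSON directly via the FOR JSON clause, so the client needs no object-mapping layer. The connection string, table, and driver are placeholders, and this assumes a reachable SQL Server instance with an ODBC driver installed:

```python
import json
import pyodbc  # assumes an ODBC driver for SQL Server is installed

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=Shop;Trusted_Connection=yes;"  # placeholder connection string
)
cursor = conn.cursor()
# SQL Server serializes the result set to JSON itself: no ORM mapping code.
cursor.execute("SELECT Id, Name, Price FROM dbo.Products FOR JSON AUTO;")
# FOR JSON may split long output across several rows, so join before parsing.
payload = "".join(row[0] for row in cursor.fetchall())
products = json.loads(payload)               # straight JSON -> Python objects
print(products[:2])
```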
Procedia PDF Downloads 65
346 Violence Detection and Tracking on Moving Surveillance Video Using Machine Learning Approach
Authors: Abe Degale D., Cheng Jian
Abstract:
Violent action recognition is crucial when creating automated video surveillance systems. In recent years, hand-crafted feature detectors have been the primary method for achieving violence detection, such as the recognition of fighting activity, and researchers have also looked into learning-based representational models. On benchmark datasets created especially for the detection of violent sequences in sports and movies, these methods produced good accuracy results. However, the Hockey dataset's videos with surveillance camera motion present challenges for these algorithms in learning discriminating features. Deep representation-based methods have shown success in image recognition and human activity detection. For the purpose of detecting violent images and identifying aggressive human behaviours, this research proposes a deep representation-based model using the transfer learning idea. The results show that the suggested approach outperforms state-of-the-art accuracy levels by learning the most discriminating features, attaining 99.34% and 99.98% accuracy on the Hockey and Movies datasets, respectively.
Keywords: violence detection, faster RCNN, transfer learning, surveillance video
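The keywords point to Faster R-CNN with transfer learning. A common recipe for that combination with torchvision (a sketch of the general technique, not the authors' exact model) is to start from COCO-pretrained weights and replace the box predictor for the target classes:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pre-trained on COCO and adapt it to a new task:
# background + "violent action" (the label layout is illustrative).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# Fine-tuning would now proceed on labelled surveillance frames; only the
# new head starts from scratch, while the backbone transfers learned features.
print(model.roi_heads.box_predictor)
```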
Procedia PDF Downloads 103
345 A Brave New World of Privacy: Empirical Insights into the Metaverse’s Personalization Dynamics
Authors: Cheng Xu
Abstract:
As the metaverse emerges as a dynamic virtual simulacrum of reality, its implications on user privacy have become a focal point of interest. While previous discussions have ventured into metaverse privacy dynamics, a glaring empirical gap persists, especially concerning the effects of personalization in the context of news recommendation services. This study stands at the forefront of addressing this void, meticulously examining how users' privacy concerns shift within the metaverse's personalization context. Through a pre-registered randomized controlled experiment, participants engaged in a personalization task across both the metaverse and traditional online platforms. Upon completion of this task, a comprehensive news recommendation service provider offers personalized news recommendations to the users. Our empirical findings reveal that the metaverse inherently amplifies privacy concerns compared to traditional settings. However, these concerns are notably mitigated when users have a say in shaping the algorithms that drive these recommendations. This pioneering research not only fills a significant knowledge gap but also offers crucial insights for metaverse developers and policymakers, emphasizing the nuanced role of user input in shaping algorithm-driven privacy perceptions.
Keywords: metaverse, privacy concerns, personalization, digital interaction, algorithmic recommendations
Procedia PDF Downloads 116
344 JavaScript Object Notation Data against eXtensible Markup Language Data in Software Applications: A Software Testing Approach
Authors: Theertha Chandroth
Abstract:
This paper presents a comparative study of how to check JSON (JavaScript Object Notation) data against XML (eXtensible Markup Language) data from a software testing point of view. JSON and XML are widely used data interchange formats, each with its unique syntax and structure. The objective is to explore various techniques and methodologies for validating, comparing, and integrating JSON data with XML data and vice versa. By understanding the process of checking JSON data against XML data, testers, developers, and data practitioners can ensure accurate data representation, seamless data interchange, and effective data validation.
Keywords: XML, JSON, data comparison, integration testing, Python, SQL
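One standard-library approach to the validation described: convert the XML document into the same nested structure the JSON parser produces, then compare. The conversion rules below are deliberately simple (leaf-text elements only, no attributes or repeated tags) and the payloads are invented examples:

```python
import json
import xml.etree.ElementTree as ET

def xml_to_dict(element: ET.Element):
    """Convert a simple XML tree (leaf text only) to a JSON-like dict."""
    if len(element) == 0:
        return element.text
    return {child.tag: xml_to_dict(child) for child in element}

json_doc = '{"user": {"name": "Ada", "role": "tester"}}'
xml_doc = "<root><user><name>Ada</name><role>tester</role></user></root>"

parsed_json = json.loads(json_doc)
parsed_xml = xml_to_dict(ET.fromstring(xml_doc))

assert parsed_json == parsed_xml, "JSON and XML payloads disagree"
print("payloads match")
```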
Procedia PDF Downloads 139
343 Detecting the Palaeochannels Based on Optical Data and High-Resolution Radar Data for Periyar River Basin
Authors: S. Jayalakshmi, Gayathri S., Subiksa V., Nithyasri P., Agasthiya
Abstract:
Paleochannels are the buried parts of an active river system that were separated from the active river channel by cutoff or abandonment during the river's dynamic evolution. Over time, they are filled by young unconsolidated or semi-consolidated sediments and are further shaped by geomorphological influences, lineament alterations, and other factors. The primary goal of this study is to identify the paleochannels in the Periyar river basin for the year 2023. Such channels have a high probability of hosting natural resources, including gold, platinum, tin, and uranium. Numerous techniques are used to map paleochannels. For the optical data, satellite images were collected from various sources, comprising multispectral imagery from which indices such as the Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), and Soil Adjusted Vegetation Index (SAVI), and thematic layers such as lithology, stream network, and lineament, were prepared. Weights were assigned to each layer based on its importance, and overlay analysis was performed, which showed paleochannel patterns in the northwest region of the study area. The results were cross-verified against results obtained from microwave data: a Synthetic Aperture Radar (SAR) image from Sentinel data was extracted from the European Space Agency (ESA) portal and pre-processed using SNAP 6.0. In addition, a polarimetric decomposition technique was incorporated to detect the paleochannels based on their scattering properties, and principal component analysis was performed for enhanced output imagery. The results obtained from the optical and microwave radar data were compared, and the locations of the paleochannels were detected. This yielded six paleochannels in the study area, three of which were validated against existing data published by the Department of Geology and Environmental Science, Kerala. The other three paleochannels were newly detected with the help of the SAR image.
Keywords: paleochannels, optical data, SAR image, SNAP
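A sketch of the weighted-overlay step described above: each thematic layer is scored on a common scale, multiplied by its assigned weight, and summed, and high-scoring cells are flagged as paleochannel candidates. The layers, weights, and threshold are illustrative, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (50, 50)
# Scored thematic layers on a common 0-1 scale (toy stand-ins for rasters).
layers = {"ndvi": rng.random(shape), "ndwi": rng.random(shape),
          "savi": rng.random(shape), "lineament": rng.random(shape)}
weights = {"ndvi": 0.2, "ndwi": 0.35, "savi": 0.15, "lineament": 0.3}  # sum to 1

suitability = sum(weights[k] * layers[k] for k in layers)
candidates = suitability > np.quantile(suitability, 0.95)  # top 5% of cells
print(f"{candidates.sum()} candidate paleochannel cells flagged")
```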
Procedia PDF Downloads 89
342 Determining the Most Efficient Test Available in Software Testing
Authors: Qasim Zafar, Matthew Anderson, Esteban Garcia, Steven Drager
Abstract:
Software failures can present an enormous detriment to people's lives and cost millions of dollars to repair when they are unexpectedly encountered in the wild. Although a significant portion of the software development lifecycle and its resources are dedicated to testing, software failures remain a relatively frequent occurrence. Nevertheless, the evaluation of testing effectiveness remains at the forefront of ensuring high-quality software, and software metrics play a critical role in providing valuable insights into quantifiable objectives to assess the level of assurance and confidence in the system. As the selection of appropriate metrics can be an arduous process, the goal of this paper is to shed light on the significance of software metrics by examining a range of testing techniques and metrics and identifying key areas for improvement. Through this investigation, readers will gain a deeper understanding of how metrics can help drive informed decision-making on delivering high-quality software and facilitate continuous improvement in testing practices.
Keywords: software testing, software metrics, testing effectiveness, black box testing, random testing, adaptive random testing, combinatorial testing, fuzz testing, equivalence partition, boundary value analysis, white box testing
Procedia PDF Downloads 88
341 Fuzzy Inference System for Risk Assessment Evaluation of Wheat Flour Product Manufacturing Systems
Authors: Yas Barzegaar, Atrin Barzegar
Abstract:
The aim of this research is to develop an intelligent system to analyze the risk level of a wheat flour product manufacturing system. The model consists of five Fuzzy Inference Systems arranged in two layers. The first layer consists of four Fuzzy Inference Systems with three criteria each; the outputs of the Physical, Chemical, Biological, and Environmental failure subsystems become the inputs of the final manufacturing-system FIS. The proposed model, based on Mamdani Fuzzy Inference Systems, gives a performance ranking of wheat flour product manufacturing systems. The first step is obtaining data to identify the failure modes from experts' opinions. The second step is the fuzzification process, which converts crisp inputs into fuzzy sets; then the IF-THEN fuzzy rules are applied through the inference engine; and in the final step, the defuzzification process converts the fuzzy output into real numbers.
Keywords: failure modes, fuzzy rules, fuzzy inference system, risk assessment
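The three steps named above (fuzzification, IF-THEN inference, defuzzification) in a self-contained, single-input Mamdani sketch with triangular membership functions and centroid defuzzification. The variable, membership parameters, and rules are illustrative assumptions, not the paper's actual FIS:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def risk_level(contamination: float) -> float:
    """Single-input Mamdani FIS: 0-10 contamination score -> 0-10 risk score."""
    # Step 1: fuzzification of the crisp input into "low" and "high" sets.
    mu_low = tri(contamination, -5, 0, 5)
    mu_high = tri(contamination, 5, 10, 15)
    y = np.linspace(0, 10, 101)
    # Step 2: Mamdani IF-THEN inference, clipping each consequent set.
    out_low = np.minimum(mu_low, tri(y, -5, 0, 5))     # IF low THEN risk low
    out_high = np.minimum(mu_high, tri(y, 5, 10, 15))  # IF high THEN risk high
    aggregated = np.maximum(out_low, out_high)
    # Step 3: centroid defuzzification back to a crisp risk value.
    return float((y * aggregated).sum() / (aggregated.sum() + 1e-9))

print(round(risk_level(7.5), 2))  # leans toward high risk
```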
Procedia PDF Downloads 101
340 Mobile Systems: History, Technology, and Future
Authors: Shivendra Pratap Singh, Rishabh Sharma
Abstract:
The widespread adoption of mobile technology in recent years has revolutionized the way we communicate and access information. The evolution of mobile systems has been rapid and impactful, shaping our lives and changing the way we live and work. However, despite its significant influence, the history and development of mobile technology are not well understood by the general public. This research paper aims to examine the history, technology, and future of mobile systems, exploring their evolution from early mobile phones to the latest smartphones and beyond. The study will analyze the technological advancements and innovations that have shaped the mobile industry, from the introduction of mobile internet and multimedia capabilities to the integration of artificial intelligence and 5G networks. Additionally, the paper will also address the challenges and opportunities facing the future of mobile technology, such as privacy concerns, battery life, and the increasing demand for high-speed internet. Finally, the paper will also provide insights into potential future developments and innovations in the mobile sector, such as foldable phones, wearable technology, and the Internet of Things (IoT). The purpose of this research paper is to provide a comprehensive overview of the history, technology, and future of mobile systems, shedding light on their impact on society and the challenges and opportunities that lie ahead.
Keywords: mobile technology, artificial intelligence, networking, IoT, technological advancements, smartphones
Procedia PDF Downloads 91
339 The Fusion of Blockchain and AI in Supply Chain Finance: Scalability in Distributed Systems
Authors: Wu You, Burra Venkata Durga Kumar
Abstract:
This study examines the promising potential of integrating Blockchain and Artificial Intelligence (AI) technologies to address scalability in distributed systems within the field of supply chain finance. The finance industry is continually confronted with scalability challenges in its distributed systems, particularly within the supply chain finance sector, impacting efficiency and security. Blockchain, with its inherent attributes of high scalability and a secure distributed ledger, coupled with AI's strengths in optimizing data processing and decision-making, holds the key to innovating the industry's approach to these issues. This study elucidates the synergistic interplay between Blockchain and AI, detailing how their fusion can drive a significant transformation in the supply chain finance sector's distributed systems. It offers specific use cases within this field to illustrate the practical implications and potential benefits of this technological convergence. The study also discusses future possibilities and current challenges in implementing this groundbreaking approach within the context of supply chain finance. It concludes that the intersection of Blockchain and AI could ignite a new epoch of enhanced efficiency, security, and transparency in the distributed systems of supply chain finance within the financial industry.
Keywords: blockchain, artificial intelligence (AI), scaled distributed systems, supply chain finance, efficiency and security
Procedia PDF Downloads 90
338 Towards Efficient Reasoning about Families of Class Diagrams Using Union Models
Authors: Tejush Badal, Sanaa Alwidian
Abstract:
Class diagrams are useful tools within the Unified Modelling Language (UML) to model and visualize the relationships between, and properties of, objects within a system. As a system evolves over time and space (e.g., across products), a series of models with several commonalities and variabilities creates what is known as a model family. When there are several versions of a model, examining each model individually becomes expensive in terms of computation resources. To avoid performing redundant operations, this paper proposes an approach for representing a family of class diagrams as a union model, a single generic model that captures the whole family. The paper aims to analyze and reason about a family of class diagrams using union models rather than analyzing each member model in the family individually. The union algorithm provides a holistic view of the model family that cannot otherwise be obtained from an individual analysis approach; this, in turn, speeds up analysis by examining the family together rather than one model at a time.
Keywords: analysis, class diagram, model family, unified modeling language, union model
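A sketch of the union idea on class diagrams reduced to plain data: every element of every family member appears once in the union, annotated with the versions it belongs to, so a single pass over the union can answer questions about all members at once. The element representation is illustrative, not the paper's formalism:

```python
def union_model(family: dict) -> dict:
    """Merge a family of class diagrams into one annotated union model.

    `family` maps a version name to a set of elements (e.g. classes or
    typed relationships); the union records which versions own each element.
    """
    union: dict = {}
    for version, elements in family.items():
        for element in elements:
            union.setdefault(element, set()).add(version)
    return union

family = {
    "v1": {("class", "Order"), ("class", "Customer"), ("assoc", "Order-Customer")},
    "v2": {("class", "Order"), ("class", "Customer"), ("class", "Invoice")},
}
u = union_model(family)
# One query over the union answers a question about every member at once:
print([e for e, versions in u.items() if versions == set(family)])  # common core
```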
Procedia PDF Downloads 73
337 Development of a Digital Healthcare Intervention to Reduce Digital and Healthcare Inequality in Rural Communities with a Focus on Hypertensive Management
Authors: Festus Adedoyin, Nana Mbeah Otoo, Sofia Meacham
Abstract:
Hypertension is one of the main health issues in Ghana, where prevalence is higher in rural than in urban areas. This is due to the challenges rural areas have in accessing technology and healthcare services for hypertension control. This study's goal is to create a digital healthcare solution to alleviate this inequality. Through an analysis of current technology and problems, using the ring onion methodology, the study determined the needs for the intervention and evaluated healthcare disparities. An online application with teleconsultation capabilities, reminder mechanisms, and clinical decision support is part of the suggested solution. In outlying areas, mobile clinics in containers with the required equipment will be established. Heuristic evaluation and think-aloud sessions were used to assess the prototype's usability and navigational problems. This study highlights the need to develop digital health interventions to help manage hypertension in rural locations and decrease healthcare disparities. It might also serve as a tool for future research to develop and improve digital healthcare solutions for rural areas in Ghana and worldwide.
Keywords: digital health, health inequalities, hypertension management, rural areas
Procedia PDF Downloads 108
336 Scalable UI Test Automation for Large-scale Web Applications
Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani
Abstract:
This research concerns optimizing UI test automation for large-scale web applications. The test target is the HHAexchange homecare management web application, which connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionality. This study focuses on automated user interface testing for this application. The quality assurance team must execute many manual user interface test cases during development to confirm the absence of regression bugs. The team automated 346 test cases, and the UI automation test execution time exceeded 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, so the quality assurance automation team modernized the test automation framework to optimize execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python; introducing scalability into a traditional test automation environment is inefficient when test code must be compiled, so a scripting language was adopted. The scalability mechanism is implemented mainly with AWS serverless technology, the Elastic Container Service. Scalability here means the ability to automatically provision the computers that run the automated tests and to increase or decrease their number, allowing test cases to run in parallel, which dramatically decreases test execution time. Introducing scalable test automation is also about more than reducing execution time: because test cases can execute at the same time, challenging bugs such as race conditions may be detected. If API and unit tests are implemented, test strategies can be adopted more efficiently alongside this scalability testing; however, in web applications, API and unit testing cannot as a practical matter cover 100% of functional testing, since they do not reach front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system and confirmed both the optimization of test case execution time and the detection of a challenging bug. The study first describes the detailed architecture of the scalable test automation environment and then reports the actual reduction in execution time and an example of challenging-issue detection.
Keywords: AWS, Elastic Container Service, scalability, serverless, UI automation test
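The heart of the speed-up is running independent Selenium sessions in parallel. A local sketch of the same idea using a thread pool follows; on AWS ECS each worker would instead be a container, and the URL, assertion, and test names here are placeholders rather than the paper's suite:

```python
from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver

def run_ui_test(test_name: str) -> str:
    """One isolated browser session per test case, so tests can run in parallel."""
    driver = webdriver.Chrome()                    # each worker gets its own browser
    try:
        driver.get("https://staging.example.com")  # placeholder URL
        assert "Example" in driver.title           # placeholder assertion
        return f"{test_name}: PASS"
    except AssertionError:
        return f"{test_name}: FAIL"
    finally:
        driver.quit()

test_cases = [f"test_case_{i}" for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:    # 4 parallel sessions
    for result in pool.map(run_ui_test, test_cases):
        print(result)
```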
Procedia PDF Downloads 105
335 The Future of Reduced Instruction Set Computing and Complex Instruction Set Computing and Suggestions for Reduced Instruction Set Computing-V Development
Authors: Can Xiao, Ouanhong Jiang
Abstract:
Based on the two instruction sets of complex instruction set computing (CISC) and reduced instruction set computing (RISC), processors developed in their respective “expertise” fields. This paper will summarize research on the differences in performance and energy efficiency between CISC and RISC and strive to eliminate the influence of peripheral configuration factors. We will discuss whether processor performance is centered around instruction sets or implementation. In addition, the rapidly developing RISC-V poses a challenge to existing models. We will analyze research results, analyze the impact of instruction sets themselves, and finally make suggestions for the development of RISC-V.
Keywords: ISA, RISC-V, ARM, X86, power, energy efficiency
Procedia PDF Downloads 88
334 Self-Attention Mechanism for Target Hiding Based on Satellite Images
Authors: Hao Yuan, Yongjian Shen, Xiangjun He, Yuheng Li, Zhouzhou Zhang, Pengyu Zhang, Minkang Cai
Abstract:
Remote sensing data can support decision-making in disaster assessment and disaster relief. Traditional methods of processing sensitive targets in remote sensing mapping are mainly based on manual retrieval and image editing tools, which are inefficient. Methods based on deep learning for sensitive target hiding are faster and more flexible but have disadvantages in training time and computational cost. This paper proposes a target hiding model, Self-Attention (SA) Deepfill, which uses self-attention modules to replace part of the gated convolution layers used in image inpainting. With this change, the computational load of the model becomes smaller and its performance improves. Free-form masks are also added to the model's training to enhance its generality. Experiments on an open remote sensing dataset demonstrate the efficiency of the method. Moreover, experimental comparison shows that the proposed method can train for longer without over-fitting. Finally, compared with existing methods, the proposed model has a lower computational cost and better performance.
Keywords: remote sensing mapping, image inpainting, self-attention mechanism, target hiding
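A standard self-attention block of the kind used to replace convolution layers inside inpainting networks (SAGAN-style; the channel reduction factor of 8 is a common convention). This is a generic sketch of the mechanism, not SA Deepfill's exact module:

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over the spatial positions of a feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)            # (b, hw, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection

x = torch.randn(1, 32, 16, 16)
print(SelfAttention2d(32)(x).shape)  # torch.Size([1, 32, 16, 16])
```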
Procedia PDF Downloads 134
333 Analyzing the Effectiveness of Different Testing Techniques in Ensuring Software Quality
Authors: R. M. P. C. Bandara, M. L. L. Weerasinghe, K. T. C. R. Kumari, A. G. D. R. Hansika, D. I. De Silva, D. M. T. H. Dias
Abstract:
Software testing is an essential process in software development that aims to identify defects and ensure that software functions as intended. Various testing techniques are employed to achieve this goal, but their effectiveness varies. This research paper analyzes the effectiveness of different testing techniques in ensuring software quality. The paper explores different techniques, including manual and automated testing, and evaluates their effectiveness in terms of identifying defects, reducing the number of defects in software, and ensuring that software meets its functional and non-functional requirements. The paper also investigates the impact of factors such as testing time, test coverage, and testing environment on the effectiveness of these techniques. This research aims to provide valuable insights into the effectiveness of different testing techniques, enabling software development teams to make informed decisions about the testing approach best suited to their needs. By improving testing techniques, the number of defects in software can be reduced, enhancing software quality and ultimately providing better software for users.
Keywords: software testing life cycle, software testing techniques, software testing strategies, effectiveness, software quality
Procedia PDF Downloads 83
332 Analyzing the Impact of Code Commenting on Software Quality
Authors: Thulya Premathilake, Tharushi Perera, Hansi Thathsarani, Tharushi Nethmini, Dilshan De Silva, Piyumika Samarasekara
Abstract:
One of the most efficient ways to assist developers in grasping source code is to make use of the comments found throughout the code. In fields such as software development, good-quality comments in the code are a fundamental requirement, especially when tackling software problems in programs that have already been built. It is essential that the intention of the source code is made crystal clear in the comments added to the code. This assists programmers in better comprehending the programs they are working on and enables them to complete software maintenance tasks in a more timely manner. Despite the fact that comments and documentation are meant to improve readability and maintainability, the vast majority of programmers place most of their focus on the actual code being written. This study provides a complete and comprehensive overview of the previous research conducted on the topic of code comments. It focuses on four main topics: automated comment production, comment consistency, comment classification, and comment quality rating. An analysis of the approaches used in these studies provides a more complete foundation for subsequent inquiries.
Keywords: code commenting, source code, software quality, quality assurance
Procedia PDF Downloads 83