World Academy of Science, Engineering and Technology
[Computer and Systems Engineering]
Online ISSN : 1307-6892
331 Assertion-Driven Test Repair Based on Priority Criteria
Authors: Ruilian Zhao, Shukai Zhang, Yan Wang, Weiwei Wang
Abstract:
Repairing broken test cases is an expensive and challenging task in evolving software systems. Although an automated repair technique with intent preservation has been proposed, it does not take into account the association between test repairs and assertions, leading to a large number of irrelevant candidates and decreasing the repair capability. This paper proposes an assertion-driven test repair approach. Furthermore, an intent-oriented priority criterion is introduced to guide repair candidate generation, making the repairs closer to the intent of the test. In more detail, repair targets are determined through post-dominance relations between assertions and the methods that directly cause compilation errors. Then, test repairs are generated from the target in a bottom-up way, guided by the intent-oriented priority criteria. Finally, the generated repair candidates are prioritized to match the original test intent. The approach is implemented and evaluated on a benchmark of 4 open-source programs and 91 broken test cases. The results show that the approach can fix 89% (81/91) of the broken test cases, which is more effective than the existing intent-preserved test repair approach, and that our intent-oriented priority criteria work well.
Keywords: test repair, test intent, software test, test case evolution
Procedia PDF Downloads 128
330 Wait-Optimized Scheduler Algorithm for Efficient Process Scheduling in Computer Systems
Authors: Md Habibur Rahman, Jaeho Kim
Abstract:
Efficient process scheduling is a crucial factor in ensuring optimal system performance and resource utilization in computer systems. While various algorithms have been proposed over the years, there are still limitations to their effectiveness. This paper introduces a new Wait-Optimized Scheduler (WOS) algorithm that aims to minimize process waiting time by dividing processes into two layers and considering both process time and waiting time. The WOS algorithm is non-preemptive and prioritizes processes with the shortest WOS value. In the first layer, each process runs for a predetermined duration, and any unfinished process is subsequently moved to the second layer, resulting in a decrease in response time. Whenever the first layer is free or the number of processes in the second layer is twice that of the first layer, the algorithm sorts all the processes in the second layer based on their remaining time minus waiting time and sends one process to the first layer to run. This ensures that all processes eventually run, optimizing waiting time. To evaluate the performance of the WOS algorithm, we conducted experiments comparing it with traditional scheduling algorithms such as First-Come-First-Serve (FCFS) and Shortest-Job-First (SJF). The results showed that the WOS algorithm outperformed the traditional algorithms in reducing the waiting time of processes, particularly in scenarios with a large number of short tasks with long wait times. Our study highlights the effectiveness of the WOS algorithm in improving process scheduling efficiency in computer systems. By reducing process waiting time, the WOS algorithm can improve system performance and resource utilization. The findings of this study provide valuable insights for researchers and practitioners in developing and implementing efficient process scheduling algorithms.
Keywords: process scheduling, wait-optimized scheduler, response time, non-preemptive, waiting time, traditional scheduling algorithms, first-come-first-serve, shortest-job-first, system performance, resource utilization
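The two-layer policy described in this abstract can be sketched in a few lines of Python. The quantum, the all-processes-arrive-at-zero assumption, and the simplified promotion rule (refill the first layer only when it empties) are illustrative choices, not details taken from the paper:

```python
from collections import deque

def wos_schedule(bursts, quantum=4):
    """Toy simulation of the Wait-Optimized Scheduler's two layers."""
    clock = 0
    layer1 = deque(bursts.items())            # (pid, remaining time)
    layer2 = []                               # unfinished processes
    ready_since = {pid: 0 for pid in bursts}  # when each process last became ready
    waiting = {pid: 0 for pid in bursts}

    while layer1 or layer2:
        if not layer1:
            # Promote the process with the smallest (remaining - waiting).
            layer2.sort(key=lambda p: p[1] - (clock - ready_since[p[0]]))
            layer1.append(layer2.pop(0))
        pid, remaining = layer1.popleft()
        waiting[pid] += clock - ready_since[pid]
        run = min(quantum, remaining)         # non-preemptive within the slice
        clock += run
        ready_since[pid] = clock
        if remaining > run:
            layer2.append((pid, remaining - run))
    return waiting

print(wos_schedule({"P1": 10, "P2": 3, "P3": 7}))  # total waiting time per process
```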
Procedia PDF Downloads 90
329 TransDrift: Modeling Word-Embedding Drift Using Transformer
Authors: Nishtha Madaan, Prateek Chaudhury, Nishant Kumar, Srikanta Bedathur
Abstract:
In modern NLP applications, word embeddings are a crucial backbone that can be readily shared across a number of tasks. However, as text distributions change and word semantics evolve over time, the downstream applications using the embeddings can suffer if the word representations do not conform to the data drift. Thus, maintaining word embeddings consistent with the underlying data distribution is a key problem. In this work, we tackle this problem and propose TransDrift, a transformer-based prediction model for word embeddings. Leveraging the flexibility of the transformer, our model accurately learns the dynamics of the embedding drift and predicts future embeddings. In experiments, we compare with existing methods and show that our model makes significantly more accurate predictions of the word embeddings than the baselines. Crucially, by applying the predicted embeddings as a backbone for downstream classification tasks, we show that our embeddings lead to superior performance compared to the previous methods.
Keywords: NLP applications, transformers, Word2vec, drift, word embeddings
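A minimal PyTorch sketch of the idea follows: a transformer encoder reads a word's embedding history and emits its next-step embedding. The dimensions, layer count, and MSE objective are assumptions made for illustration, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class DriftPredictor(nn.Module):
    """Given a word's embeddings from past time steps, predict the next one."""
    def __init__(self, dim=300, heads=6, layers=2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(dim, dim)

    def forward(self, past):               # past: (batch, T, dim)
        h = self.encoder(past)             # contextualize the temporal sequence
        return self.head(h[:, -1])         # predicted next-step embedding

model = DriftPredictor()
past = torch.randn(8, 4, 300)              # 8 words, 4 historical snapshots each
pred = model(past)                         # (8, 300) predicted future embeddings
loss = nn.functional.mse_loss(pred, torch.randn(8, 300))  # assumed training loss
```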
Procedia PDF Downloads 88
328 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
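The dimensionality-reduction half of the pipeline can be sketched directly from the description above: build an affinity kernel, normalize it into a diffusion operator, and embed the data with the leading non-trivial eigenvectors. The Gaussian kernel and its bandwidth are standard choices assumed here, not values from the article:

```python
import numpy as np

def diffusion_map(X, n_components=2, eps=1.0):
    """Minimal diffusion-map embedding (eps is an assumed kernel bandwidth)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-d2 / eps)                                 # affinity kernel
    P = K / K.sum(axis=1, keepdims=True)                  # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)                        # sort by eigenvalue
    idx = order[1:n_components + 1]                       # skip the trivial eigenvector
    return vecs[:, idx].real * vals[idx].real             # diffusion coordinates

X = np.random.rand(100, 10)      # 100 points in 10-D
Y = diffusion_map(X)             # 2-D diffusion coordinates
print(Y.shape)                   # (100, 2)
```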
Procedia PDF Downloads 105
327 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, which used natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite enabled prediction of specific vulnerabilities such as OS-Command Injection, Cryptographic, and Cross-Site Scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS-Command Injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
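As an illustration of the path-context idea, the sketch below extracts simplified leaf-to-root paths from a Python AST. The study itself targets Java and C++ codebases and uses full leaf-to-leaf paths as in Code2Vec, so this is a stand-in for the concept, not the paper's pipeline:

```python
import ast

def path_contexts(source):
    """Extract (identifier, path-to-root) pairs from a Python AST — a
    simplified stand-in for Code2Vec-style leaf-to-leaf path contexts."""
    contexts = []

    def walk(node, path):
        path = path + [type(node).__name__]
        if isinstance(node, ast.Name):            # treat identifiers as leaves
            contexts.append((node.id, "|".join(path)))
        for child in ast.iter_child_nodes(node):
            walk(child, path)

    walk(ast.parse(source), [])
    return contexts

for name, path in path_contexts("def f(x):\n    return x + 1"):
    print(name, "->", path)   # e.g. x -> Module|FunctionDef|Return|BinOp|Name
```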
Procedia PDF Downloads 105
326 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, their application to complex survival problems remains limited. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and survival function for recurrent events with competing risks. We introduce the component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF) and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)
Procedia PDF Downloads 88
325 Domain-Specific Languages Evaluation: A Literature Review and Experience Report
Authors: Sofia Meacham
Abstract:
In this paper, Domain-Specific Languages (DSL) evaluation is presented based on existing literature and years of experience developing DSLs for several domains. The domains we worked on ranged from AI, business applications, and finances/accounting to health. In general, DSLs have been utilised in many domains to provide tailored and efficient solutions to address specific problems. Although they are a reputable method among highly technical circles and have also been used by non-technical experts with success, to our knowledge there is no commonly accepted method for evaluating them. Some methods define criteria that are adaptations of general software engineering quality criteria. Other literature focuses on the usability aspect of DSL evaluation and applies methods such as Human-Computer Interaction (HCI) and goal modeling. All these approaches are either hard to introduce, such as goal modeling, or seem to ignore the domain-specific focus of DSLs. From our experience, DSLs have domain-specificity at their core, and consequently, the methods to evaluate them should also include domain-specific criteria at their core. Defining such criteria requires synergy between the domain experts and the DSL developers, in the same way that DSLs cannot be developed without domain experts' involvement. Methods from agile and other software engineering practices, such as co-creation workshops, should be further emphasised and explored to facilitate this direction. In conclusion, our latest experience and plans for DSL evaluation will be presented and opened for discussion.
Keywords: domain-specific languages, DSL evaluation, DSL usability, DSL quality metrics
Procedia PDF Downloads 102
324 Public Policy as a Component of Entrepreneurship Ecosystems: Challenges of Implementation
Authors: José Batista de Souza Neto
Abstract:
This research project has as its theme the implementation of public policies to support micro and small businesses (MSEs). The research problem defined was how public policies for access to markets that drive the entrepreneurial ecosystem of MSEs are implemented. The general objective of this research is to understand the process of implementing a public policy to support the entrepreneurial ecosystem of MSEs by the Support Service for Micro and Small Enterprises of the State of São Paulo (SEBRAESP). Public policies are constituent elements of entrepreneurship ecosystems that influence the creation and development of ventures arising from the action of the entrepreneur. At the end of the research, it is expected to achieve results for the following specific objectives: (a) understand how the entrepreneurial ecosystem of MSEs is constituted; (b) understand how market-access public policies for MSEs are designed and implemented; (c) understand SEBRAE's role in the entrepreneurship ecosystem; and (d) offer an action plan and monitor its execution up to March 2023. The field research will be conducted based on Action Research, with a qualitative and longitudinal approach to the data. Data collection will be based on narratives produced since 2019, when the decision was made to implement the Comércio Brasil program, a public policy focused on generating market access for 4,280 MSEs yearly. The narratives will be analyzed by the methods of document analysis and narrative analysis. It is expected that the research will consolidate the relevance of public policies for market access for MSEs and demonstrate SEBRAE's role as a protagonist in the implementation of these public policies in the entrepreneurship ecosystem. As Action Research is recognized as an intervention method, it is also expected that this research will corroborate its role in supporting management processes.
Keywords: entrepreneurship, entrepreneurship ecosystem, public policies, SEBRAE, action research
Procedia PDF Downloads 185
323 Mobile App Architecture in 2023: Build Your Own Mobile App
Authors: Mounir Filali
Abstract:
Companies use many innovative ways to reach their customers and stay ahead of the competition. Along with the growing demand for innovative business solutions comes a demand for new technology. The most noticeable area of demand for business innovation is the mobile application industry. Recently, companies have recognized the growing need to integrate proprietary mobile applications into their suite of services and have realized that developing mobile apps gives them a competitive edge. As a result, many have begun to rapidly develop mobile apps to stay ahead of the competition. Mobile application development helps companies meet the needs of their customers. Mobile apps also help businesses take advantage of every potential opportunity to generate leads that convert into sales. With the recent rise in demand for business-related mobile apps, there has been a similar rise in the range of mobile app solutions being offered. Today, companies can take the traditional route of a software development team to build their own mobile applications. However, there are also many platform-ready "low-code and no-code" mobile apps available to choose from. These mobile app development options give companies more streamlined business processes and help them be more responsive to their customers without having to be coding experts. Companies must have a basic understanding of mobile app architecture to attract and maintain the interest of mobile app users. Mobile application architecture refers to the building blocks, structural systems, and design elements that make up a mobile application, as well as the technologies, processes, and components used during application development. All elements of the mobile application architecture form the underlying foundation of an application, and developing a good mobile app architecture requires proper planning and strategic design. The technology framework or platform on the back end and the user-facing side of a mobile application are part of its mobile architecture. Software programmers loosely refer to this set of mobile architecture systems and processes as the "technology stack".
Keywords: mobile applications, development, architecture, technology
Procedia PDF Downloads 100
322 Challenges of Implementing Zero Trust Security Based on NIST SP 800-207
Authors: Mazhar Hamayun
Abstract:
Organizations need to take a holistic approach to their Zero Trust strategic and tactical security needs. This includes using a framework-agnostic model that will ensure all enterprise resources are accessed securely, regardless of their location. This can be achieved through the implementation of a security posture, monitoring the posture, and adjusting the posture through the Identify, Protect, Detect, Respond, and Recover functions. The target audience of this document includes those involved in the management and operational functions of risk, information security, and information technology. This audience consists of the chief information security officer, chief information officer, chief technology officer, and those leading digital transformation initiatives where Zero Trust methods can help protect an organization's data assets.
Keywords: ZTNA, zero trust architecture, microsegmentation, NIST SP 800-207
Procedia PDF Downloads 85
321 Hyperspectral Imagery for Tree Speciation and Carbon Mass Estimates
Authors: Jennifer Buz, Alvin Spivey
Abstract:
The most common greenhouse gas emitted through human activities, carbon dioxide (CO2), is naturally consumed by plants during photosynthesis. This process is actively being monetized by companies wishing to offset their carbon dioxide emissions. For example, companies are now able to purchase protections for vegetated land due to be clear-cut or purchase barren land for reforestation. Therefore, by actively preventing the destruction/decay of plant matter or by introducing more plant matter (reforestation), a company can theoretically offset some of its emissions. One of the biggest issues in the carbon credit market is validating and verifying carbon offsets. There is a need for a system that can accurately and frequently ensure that the areas sold for carbon credits have the vegetation mass (and therefore the carbon offset capability) they claim. Traditional techniques for measuring vegetation mass and determining health are costly and require many person-hours. Orbital Sidekick offers an alternative approach that accurately quantifies carbon mass and assesses vegetation health through satellite hyperspectral imagery, a technique that enables us to remotely identify material composition (including plant species) and condition (e.g., health and growth stage). How much carbon a plant is capable of storing is ultimately tied to many factors, including material density (primarily species-dependent), plant size, and health (trees that are actively decaying are not effectively storing carbon). All of these factors are capable of being observed through satellite hyperspectral imagery. This abstract focuses on speciation. To build a species classification model, we matched pixels in our remote sensing imagery to plants on the ground for which we know the species. To accomplish this, we collaborated with the researchers at the Teakettle Experimental Forest. Our remote sensing data come from our airborne "Kato" sensor, which flew over the study area and acquired hyperspectral imagery (400-2500 nm, 472 bands) at ~0.5 m/pixel resolution. Coverage of the entire Teakettle Experimental Forest required capturing dozens of individual hyperspectral images. In order to combine these images into a mosaic, we accounted for potential variations in atmospheric conditions throughout the data collection. To do this, we ran an open-source atmospheric correction routine called ISOFIT (Imaging Spectrometer Optimal FITting), which converted all of our remote sensing data from radiance to reflectance. A database of reflectance spectra for each of the tree species within the study area was acquired using the Teakettle stem map and the geo-referenced hyperspectral images. We found that a wide variety of machine learning classifiers were able to identify the species within our images with high (>95%) accuracy. For the most robust quantification of carbon mass and the best assessment of the health of a vegetated area, speciation is critical. Through the use of high-resolution hyperspectral data, ground-truth databases, and complex analytical techniques, we are able to determine the species present within a pixel to a high degree of accuracy. These species identifications will feed directly into our carbon mass model.
Keywords: hyperspectral, satellite, carbon, imagery, python, machine learning, speciation
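The speciation step reduces to supervised classification of per-pixel reflectance spectra. The sketch below uses synthetic 472-band spectra in place of the Teakettle ground-truth database; the classifier choice and noise level are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: one mean reflectance spectrum per species, plus noise.
rng = np.random.default_rng(1)
n_species, per_species, bands = 5, 200, 472
centers = rng.uniform(0.1, 0.6, size=(n_species, bands))
X = np.vstack([c + rng.normal(0, 0.02, (per_species, bands)) for c in centers])
y = np.repeat(np.arange(n_species), per_species)     # species label per pixel

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())       # near-perfect on separable spectra
```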
Procedia PDF Downloads 125
320 A Computational Framework for Decoding Hierarchical Interlocking Structures with SL Blocks
Authors: Yuxi Liu, Boris Belousov, Mehrzad Esmaeili Charkhab, Oliver Tessmann
Abstract:
This paper presents a computational solution for designing reconfigurable interlocking structures that are fully assembled with SL blocks. Formed from S-shaped and L-shaped tetracubes, the SL block is a specific type of interlocking puzzle piece. Analogous to molecular self-assembly, the aggregation of SL blocks builds a reversible, hierarchical, and discrete system in which a single module can be numerously replicated to compose semi-interlocking components that further align, wrap, and braid around each other to form complex high-order aggregations. These aggregations can be disassembled and reassembled, responding dynamically to design inputs and changes with a unique capacity for reconfiguration. To use these aggregations as architectural structures, we developed computational tools that automate the configuration of SL blocks based on architectural design objectives. There are three critical phases in our work. First, we revisit the hierarchy of the SL block system and devise a top-down design strategy. From this, we pose two key questions: 1) How to translate 3D polyominoes into SL block assemblies? 2) How to decompose desired voxelized shapes into a set of 3D polyominoes with interlocking joints? These two questions can be considered the Hamiltonian path problem and the 3D polyomino tiling problem, respectively. We then derive our solution to each of them based on two methods. The first method constructs an optimal closed path from an undirected graph built from the voxelized shape and translates the node sequence of the resulting path into the assembly sequence of SL blocks. The second approach describes the interlocking relationships of 3D polyominoes as a joint connection graph. Lastly, we formulate the desired shapes and leverage our methods to achieve their reconfiguration at different levels. We show that our computational strategy facilitates the efficient design of hierarchical interlocking structures with a self-replicating geometric module.
Keywords: computational design, SL-blocks, 3D polyomino puzzle, combinatorial problem
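The first subproblem, finding an assembly order, amounts to a Hamiltonian path search over the voxel-adjacency graph. A brute-force backtracking sketch, adequate only for small aggregations and not the paper's optimal closed-path construction, is shown below:

```python
def hamiltonian_path(adj):
    """Backtracking search for a path visiting every voxel exactly once."""
    n = len(adj)

    def extend(path, seen):
        if len(path) == n:
            return path
        for nxt in adj[path[-1]]:
            if nxt not in seen:
                result = extend(path + [nxt], seen | {nxt})
                if result:
                    return result
        return None

    for start in adj:
        result = extend([start], {start})
        if result:
            return result
    return None

# 2x2 voxel block: each voxel is adjacent to its grid neighbours.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(hamiltonian_path(adj))   # e.g. [0, 1, 3, 2] -> SL-block assembly order
```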
Procedia PDF Downloads 128
319 Ethical Concerns in the Internet of Things and Smart Devices: Case Studies and Analysis
Authors: Mitchell Browe, Oriehi Destiny Anyaiwe, Zahraddeen Gwarzo
Abstract:
The Internet of Things (IoT) is a major evolution of technology and of the internet, one which has the power to revolutionize the way people live. IoT has the power to change the way people interact with each other and with their homes; it can give people new ways to interact with and monitor their health; it can alter socioeconomic landscapes by providing new and efficient methods of resource management, saving time and money for both individuals and society as a whole; it even has the potential to save lives through autonomous vehicle technology and smart security measures. Unfortunately, nearly every revolution bears challenges which must be addressed to minimize the harm the new technology inflicts on its adopters. IoT represents an internet technology revolution that risks the privacy, safety, and security of its users should devices be developed, implemented, or utilized improperly. This article examines past and current examples of these ethical faults in an attempt to highlight the importance of consumer awareness of the potential dangers of these technologies in making informed purchasing and utilization decisions, as well as to reveal how the deficiencies and limitations of IoT devices should be better addressed by both companies and regulatory bodies. Aspects such as consumer trust, corporate transparency, and misuse of individual data are all factors in the implementation of proper ethical boundaries in the IoT.
Keywords: IoT, ethical concerns, privacy, safety, security, smart devices
Procedia PDF Downloads 84
318 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane
Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo
Abstract:
Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical production. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models inherently cannot obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction, as well as accuracy similar to the RF model, with R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining
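The preprocessing-plus-prediction pipeline can be sketched with scikit-learn as below; the synthetic features, targets, and DBSCAN parameters are stand-ins for the experimental DRM dataset:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins: X = operating conditions, y = one DRM output (e.g. H2/CO).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2 * X[:, 0] + rng.normal(scale=0.1, size=200)

labels = DBSCAN(eps=2.5, min_samples=5).fit_predict(X)
mask = labels != -1                      # DBSCAN marks outliers with label -1
X_clean, y_clean = X[mask], y[mask]      # the "error-free" dataset used for training

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_clean, y_clean)
print("R^2 on cleaned data:", rf.score(X_clean, y_clean))
```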
Procedia PDF Downloads 84
317 Recommender System Based on Mining Graph Databases for Data-Intensive Applications
Authors: Mostafa Gamal, Hoda K. Mohamed, Islam El-Maddah, Ali Hamdi
Abstract:
In recent years, many digital documents have been created on the web due to the rapid growth of "social applications" communities and "data-intensive applications". The evolution of online multimedia data poses new challenges in storing and querying large amounts of data for online recommender systems. Graph data models have been shown to be more efficient than relational data models for processing complex data. This paper explains the key differences between graph and relational databases, their strengths and weaknesses, and why using graph databases is the best technology for building a real-time recommendation system. The paper also discusses several similarity-metric algorithms that can be used to compute a similarity score for pairs of nodes based on their neighbourhoods or their properties. Finally, the paper discusses how NLP strategies provide the basis for improving the accuracy and coverage of real-time recommendations by extracting information from stored unstructured knowledge, which makes up the bulk of the world's data, to enrich the graph database. As the size and number of data items are increasing rapidly, the proposed system should meet current and future needs.
Keywords: graph databases, NLP, recommendation systems, similarity metrics
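A neighbourhood-based similarity score of the kind discussed can be sketched with NetworkX as below; in production, the equivalent query would run inside the graph database itself, and the toy user-item graph is an assumption for illustration:

```python
import networkx as nx

# Toy user-item graph; an edge means "user interacted with item".
G = nx.Graph()
G.add_edges_from([("alice", "item1"), ("alice", "item2"),
                  ("bob", "item2"), ("bob", "item3"),
                  ("carol", "item1"), ("carol", "item2"), ("carol", "item4")])

def jaccard(g, u, v):
    """Similarity of two nodes as the overlap of their neighbourhoods."""
    nu, nv = set(g[u]), set(g[v])
    return len(nu & nv) / len(nu | nv)

# Recommend to alice the items of her most similar user.
peers = ["bob", "carol"]
best = max(peers, key=lambda u: jaccard(G, "alice", u))
print(best, set(G[best]) - set(G["alice"]))   # carol {'item4'}
```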
Procedia PDF Downloads 103
316 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography
Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai
Abstract:
Prostate adenocarcinoma is the most common cancer in males, with osseous metastases as the most common site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including Computed Tomography (CT). Monitoring of the osseous disease burden, including follow-up of lesions and identification and characterization of new lesions, is a laborious task for radiologists. Deep learning algorithms are increasingly used to perform tasks such as identification and segmentation of osseous metastatic disease and provide accurate information regarding metastatic burden. Here, nnUNet was used to produce a model that can segment CT scan images of prostate adenocarcinoma vertebral bone metastatic lesions. nnUNet is an open-source Python package that adds optimizations to the deep learning-based UNet architecture but has not been extensively combined with transfer learning techniques due to the absence of a readily available functionality for this method. The IRB-approved study data set includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and Los Angeles County (LAC)/USC medical center. Manual segmentation of metastatic lesions was completed by an expert radiologist, Dr. Vinay Duddalwar (20+ years of experience in radiology and oncologic imaging), to serve as ground truth for the automated segmentation. Despite nnUNet's success on some medical segmentation tasks, it only produced an average Dice Similarity Coefficient (DSC) of 0.31 on the USC dataset. DSC results fell in a bimodal distribution, with most scores falling either over 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation techniques dropped the DSC to 0.15, and reducing the number of epochs reduced the DSC to below 0.1. Datasets have been identified for transfer learning, which involves balancing the size and similarity of the dataset. Identified datasets include the Pancreas data from the Medical Segmentation Decathlon, Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). The challenges of producing an accurate model from the USC dataset include the small dataset size (115 images), 2D data (as nnUNet generally performs better on 3D data), and the limited amount of public data capturing annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including incorporating generative adversarial networks and diffusion models, in order to augment the dataset. Performance with different libraries, including MONAI and custom architectures with PyTorch, will be compared. In the future, molecular correlations will be tracked against radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist evaluation. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into the workflow of radiologists, these findings will help improve the speed and accuracy of vertebral metastatic lesion detection.
Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics
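The evaluation metric quoted throughout, the Dice Similarity Coefficient, is 2|A∩B| / (|A|+|B|) over binary masks. A minimal NumPy version is sketched below, with toy masks standing in for a CT slice and its annotation:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice Similarity Coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Toy 2-D masks: ground-truth lesion vs. a shifted prediction.
truth = np.zeros((8, 8), dtype=int); truth[2:5, 2:5] = 1
pred  = np.zeros((8, 8), dtype=int); pred[3:6, 3:6] = 1
print(round(dice(pred, truth), 3))   # partial overlap -> 0.444
```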
Procedia PDF Downloads 95
315 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK
Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick
Abstract:
The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning model that can forecast COVID-19 cases within the UK. This study concentrates on statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new cases encountered on a daily basis, total deaths registered, and deaths per day due to Coronavirus was collected from the World Health Organisation (WHO). Data preprocessing was carried out to identify any missing values, outliers, or anomalies in the dataset. The data were split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression algorithms were chosen to study model performance in the prediction of new COVID-19 cases. Using evaluation metrics such as the r-squared value and mean squared error, the statistical performance of the models in predicting new COVID cases was evaluated. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the Random Forest algorithm performs more effectively and efficiently in predicting new COVID cases, which could help the health sector take relevant control measures against the spread of the virus.
Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest
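The modelling workflow reduces to a standard supervised-regression loop. A scikit-learn sketch on synthetic case counts follows (the study's WHO-sourced UK data is not reproduced here), with n_estimators=30 echoing the n=30 setting reported above:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
days = np.arange(400).reshape(-1, 1)                        # day index as the feature
cases = 1000 + 50 * days.ravel() + rng.normal(0, 500, 400)  # synthetic daily new cases

X_train, X_test, y_train, y_test = train_test_split(
    days, cases, test_size=0.2, random_state=0)             # the 8:2 split

model = RandomForestRegressor(n_estimators=30, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("R^2:", r2_score(y_test, pred))                       # evaluation metrics used above
print("MSE:", mean_squared_error(y_test, pred))
```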
Procedia PDF Downloads 120
314 A Semantic Registry to Support Brazilian Aeronautical Web Services Operations
Authors: Luís Antonio de Almeida Rodriguez, José Maria Parente de Oliveira, Ednelson Oliveira
Abstract:
In the last two decades, the world's aviation authorities have made several attempts to build consensus on a globally accepted approach to applying semantics to web services registry descriptions. This problem has left communities facing a bloated and disorganized infrastructure for describing aeronautical web services. It is usual for developers to implement ad-hoc connections among consumers and providers and to manually create non-standardized service compositions, which need a particular approach to compose and semantically discover a desired web service. Current practices are not precise and tend to focus on lightweight specifications of some parts of OWL-S, embedding them into syntactic descriptions (SOAP artifacts and the OWL language). It is necessary to be able to manage the use of both technologies. This paper presents an implementation of the OWL-S ontology that describes a Brazilian Aeronautical Web Service Registry, which is able to publish, advertise, and perform multi-criteria semantic discovery aligned with the ideas of the System Wide Information Management (SWIM) Program, and to invoke web services within the Air Traffic Management context. The proposal's best finding is a generic approach to describing semantic web services. The paper also presents a set of functional requirements that guided the ontology development and compares them against the results to validate the implementation of the OWL-S ontology.
Keywords: aeronautical web services, OWL-S, semantic web services discovery, ontologies
Procedia PDF Downloads 85
313 Use of Computer and Machine Learning in Facial Recognition
Authors: Neha Singh, Ananya Arora
Abstract:
Facial expression measurement plays a crucial role in the identification of emotion. Facial expression plays a key role in psychophysiology, the study of the neural bases of emotion, and the study of emotional disorders, to name a few areas. The Facial Action Coding System (FACS) has proven to be the most efficient and widely used of the various systems for describing facial expressions. Coders can manually code facial expressions with FACS and, by viewing video-recorded facial behaviour at a specified frame rate and in slow motion, can decompose them into action units (AUs). Action units are the smallest visually discriminable facial movements. FACS explicitly differentiates between facial actions and inferences about what the actions mean. Action units are the fundamental unit of FACS methodology. It is regarded as the standard measure for facial behaviour and finds application in various fields of study beyond emotion science. These include facial neuromuscular disorders, neuroscience, computer vision, computer graphics and animation, and face encoding for digital processing. This paper discusses the conceptual basis for FACS, a numerical listing of the discrete facial movements identified by the system, the system's psychometric evaluation, and the software's recommended training requirements.
Keywords: facial action, action units, coding, machine learning
Procedia PDF Downloads 105
312 A Thorough Analysis of the Literature on the Airport Service Quality and Patron Satisfaction
Authors: Mohammed Saad Alanazi
Abstract:
Traveler satisfaction with the services provided in airports is a sign of competitiveness and shapes the corporate image of the airport. This study conducted a systematic literature review of recent studies published after 2017 regarding the factors that positively influence travelers' satisfaction and encourage them to post positive reviews online. This study found considerable variation among the reviewed studies: they used different research methodologies and datasets and focused on different airports; nevertheless, they commonly categorized airport services into seven categories that deserve close attention because their quality was found to increase review rates and positivity. It was also found that studies targeting travelers' satisfaction and intention to revisit tended to use primary data sources (surveys), while studies concerned with the positivity or negativity of comments about airport services often used online reviews provided by travelers.
Keywords: business intelligence, airport service quality, passenger satisfaction, thorough analysis
Procedia PDF Downloads 78
311 Domain-Specific Deep Neural Network Model for Classification of Abnormalities on Chest Radiographs
Authors: Nkechinyere Joy Olawuyi, Babajide Samuel Afolabi, Bola Ibitoye
Abstract:
This study collected a preprocessed dataset of chest radiographs and formulated a deep neural network model for detecting abnormalities. It also evaluated the performance of the formulated model and implemented a prototype of it. This was with a view to developing a deep neural network model to automatically classify abnormalities in chest radiographs. In order to achieve the overall purpose of this research, a large set of chest x-ray images was sourced and collected from the CheXpert dataset, an online repository of annotated chest radiographs compiled by the Machine Learning Research Group, Stanford University. The chest radiographs were preprocessed into a format that can be fed into a deep neural network. The preprocessing techniques used were standardization and normalization. The classification problem was formulated as a multi-label binary classification model, which used a convolutional neural network architecture to decide whether an abnormality was present in the chest radiographs. The classification model was evaluated using specificity, sensitivity, and the Area Under the Curve (AUC) score as parameters. A prototype of the classification model was implemented using the Keras open-source deep learning framework in the Python programming language. The AUC-ROC curve of the model was able to classify Atelectasis, Support devices, Pleural effusion, Pneumonia, a normal CXR (no finding), Pneumothorax, and Consolidation. However, Lung opacity and Cardiomegaly had a probability of less than 0.5 and were thus classified as absent. Precision, recall, and F1 score values were 0.78; this implies that the numbers of false positives and false negatives are the same, revealing some measure of label imbalance in the dataset. The study concluded that the developed model is sufficient to classify the abnormalities present in chest radiographs as present or absent.
Keywords: transfer learning, convolutional neural network, radiograph, classification, multi-label
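Since the abstract names Keras and a multi-label binary formulation, the skeleton below shows that setup: one sigmoid output per abnormality with a per-label binary cross-entropy loss. The layer sizes and input shape are illustrative assumptions, not the study's exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_LABELS = 9          # e.g. atelectasis, pleural effusion, pneumonia, ...

model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),               # preprocessed grayscale CXR
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_LABELS, activation="sigmoid"),  # independent present/absent per label
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",            # per-label binary loss
              metrics=[tf.keras.metrics.AUC(multi_label=True)])
model.summary()
```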
Procedia PDF Downloads 126
310 Next-Viz: A Literature Review and Web-Based Visualization Tool Proposal
Authors: Railly Hugo, Igor Aguilar-Alonso
Abstract:
Software visualization is a powerful tool for understanding complex software systems. However, current visualization tools often lack features or are difficult to use, limiting their effectiveness. In this paper, we present next-viz, a proposed web-based visualization tool that addresses these challenges. We provide a literature review of existing software visualization techniques and tools and describe the architecture of next-viz in detail. Our proposed tool incorporates state-of-the-art visualization techniques and is designed to be user-friendly and intuitive. We believe next-viz has the potential to advance the field of software visualization significantly.
Keywords: software visualization, literature review, tool proposal, next-viz, web-based, architecture, visualization techniques, user-friendly, intuitive
Procedia PDF Downloads 81
309 Proposal for a Mobile Application with Augmented Reality to Improve School Interest
Authors: Mamani Acurio Alex, Aguilar Alonso Igor
Abstract:
The lack of interest and the lack of motivation are related, and the lack of both at school generates serious problems, such as school dropout and low levels of learning. Augmented reality has been very useful in different areas, and in this research, it is applied to the area of education. The information necessary for the correct development of this mobile application with augmented reality was gathered from six different research repositories. It was concluded that the application must be immersive, attractive, and fun for students, and the technologies necessary for its construction were defined.
Keywords: augmented reality, Vuforia, school interest, learning
Procedia PDF Downloads 86
308 Extended Boolean Petri Nets Generating N-Ary Trees
Authors: Riddhi Jangid, Gajendra Pratap Singh
Abstract:
Petri nets are a mathematical tool used for modeling in different areas of computer science, biological networks, chemical systems, and many other disciplines. A Petri net model of a given system is created by a graphical representation that describes the properties and behavior of the system. When studying the behavior of a system, 1-safe Petri nets are of particular practical interest. Boolean Petri nets are the class of 1-safe Petri nets that generate all binary n-vectors in their reachability analysis. We study the class by changing different parameters, such as the token counts in the places, and by examining how the structure of the tree changes in the reachability analysis. We discuss here an extended class of Boolean Petri nets that generates n-ary trees in its reachability-based analysis.
Keywords: marking vector, n-vector, petri nets, reachability
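The defining property, that every binary n-vector appears as a reachable marking, can be checked with a small breadth-first reachability routine under 1-safe semantics. The toy net below (one transition marking both places, one emptying each) reaches all four binary 2-vectors; it is an illustration of the property, not a net taken from the paper:

```python
def reachable_markings(pre, post, m0):
    """Reachability analysis under 1-safe semantics: fire every enabled
    transition whose firing keeps each place at 0 or 1 token."""
    places, transitions = len(m0), len(pre[0])
    seen, frontier = {tuple(m0)}, [tuple(m0)]
    while frontier:
        m = frontier.pop()
        for t in range(transitions):
            if all(m[p] >= pre[p][t] for p in range(places)):      # t enabled?
                nxt = tuple(m[p] - pre[p][t] + post[p][t] for p in range(places))
                if max(nxt) <= 1 and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

# Rows are places p0, p1; columns are transitions t0, t1, t2.
pre  = [[0, 1, 0],   # t1 consumes from p0
        [0, 0, 1]]   # t2 consumes from p1
post = [[1, 0, 0],   # t0 produces into p0 ...
        [1, 0, 0]]   # ... and into p1
print(sorted(reachable_markings(pre, post, [0, 0])))
# [(0, 0), (0, 1), (1, 0), (1, 1)] -> all binary 2-vectors, the Boolean property
```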
Procedia PDF Downloads 81
307 Patent on Brain: Brain Waves Stimulation
Authors: Jalil Qoulizadeh, Hasan Sadeghi
Abstract:
Brain waves are electrical wave patterns produced in the human brain. Knowing these waves and activating them can have a positive effect on brain function and ultimately help create an ideal life. The brain is able to produce waves from 0.1 Hz to above 65 Hz, and the Beta One device produces exactly this range, so the waves produced by the device match the waves produced by the brain. The function and method of this device are based on magnetic stimulation of the brain. The technology used in the design and production of this device strengthens and improves the frequencies of brain waves with a pre-defined algorithm according to the type of function requested, so that the person can perform better in life activities. To evaluate the effect of the field created by the device on neurons and their stimulation, electroencephalography was conducted before and after stimulation, and the two baselines were compared using quantitative electroencephalography (qEEG) with a paired t-test in 39 subjects. This confirms the significant effect of the field on the electrical activity recorded after 30 minutes of stimulation in all subjects. The Beta One device is able to induce the appropriate pattern for the expected functions gently and effectively, exactly in accordance with the harmony of brain waves, moving brain activity first to a normal state and then to an enhanced one. The result is inexpensive neuroscience equipment (compared to existing rTMS equipment) for magnetic brain stimulation in clinics, homes, factories and companies, and professional sports clubs.
Keywords: stimulation, brain, waves, betaOne
Procedia PDF Downloads 80
306 SEM Image Classification Using CNN Architectures
Authors: Güzin Tirkeş, Özge Tekin, Kerem Kurtuluş, Y. Yekta Yurtseven, Murat Baran
Abstract:
A scanning electron microscope (SEM) is a type of electron microscope used mainly in nanoscience and nanotechnology. Automatic image recognition and classification are among the general areas of application concerning SEM. In line with these usages, the present paper proposes a deep learning algorithm that classifies SEM images into nine categories by means of an online application to simplify the process. The NFFA-EUROPE 100% SEM data set, containing approximately 21,000 images, was used to train and test the algorithm at 80% and 20%, respectively. Validation was carried out using a separate data set obtained from the Middle East Technical University (METU) in Turkey. To increase the accuracy of the results, the Inception ResNet-V2 model was used with a fine-tuning approach. Using a confusion matrix, it was observed that the coated-surface category has a negative effect on the accuracy of the results, since it contains other categories in the data set, thereby confusing the model when detecting category-specific patterns. For this reason, the coated-surface category was removed from the training data set, raising accuracy to 96.5%.
Keywords: convolutional neural networks, deep learning, image classification, scanning electron microscope
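A fine-tuning setup of the kind described, an ImageNet-pretrained Inception-ResNet-V2 backbone with a fresh nine-way softmax head, can be sketched in Keras as below; the input size, head layers, and learning rates are assumptions, not the paper's reported configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

base = InceptionResNetV2(include_top=False, weights="imagenet",
                         input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                      # first stage: train the new head only

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(9, activation="softmax"),  # nine SEM categories
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Second stage (fine-tuning): unfreeze the backbone with a small learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
```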
Procedia PDF Downloads 124
305 3D Vision Transformer for Cervical Spine Fracture Detection and Classification
Authors: Obulesh Avuku, Satwik Sunnam, Sri Charan Mohan Janthuka, Keerthi Yalamaddi
Abstract:
In the United States alone, there are over 1.5 million spine fractures per year, resulting in about 17,730 spinal cord injuries. The cervical spine is where fractures most frequently occur. The prevalence of spinal fractures in the elderly has increased, and in this population, fractures may be harder to see on imaging because of coexisting degenerative disease and osteoporosis. Nowadays, computed tomography (CT) has almost completely replaced radiography (x-rays) for the imaging diagnosis of adult spine fractures. To prevent neurologic deterioration and paralysis following trauma, it is vital to detect any vertebral fractures as early as possible. Many approaches, mostly 2D models, have been proposed for the classification of the cervical spine. In this paper, we try to break those bounds by using vision transformers, a state-of-the-art model in image classification: we make the minimal changes possible to the ViT architecture to turn it into a 3D-enabled architecture, and we evaluate it using a weighted multi-label logarithmic loss. We took this problem statement from a previously held Kaggle competition, the RSNA 2022 Cervical Spine Fracture Detection challenge.
Keywords: cervical spine, spinal fractures, osteoporosis, computed tomography, 2d-models, ViT, multi-label logarithmic loss, Kaggle, public score, private score
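The "minimal change" that makes a ViT 3D-capable is typically the patch-embedding stage: tokenize the CT volume with a 3D convolution instead of a 2D one and leave the transformer encoder untouched. A PyTorch sketch under that assumption, with illustrative sizes, follows:

```python
import torch
import torch.nn as nn

class PatchEmbed3D(nn.Module):
    """Tokenize a CT volume into a ViT-style patch sequence via Conv3d."""
    def __init__(self, patch=16, in_ch=1, dim=768):
        super().__init__()
        self.proj = nn.Conv3d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                    # x: (B, 1, D, H, W) CT volume
        x = self.proj(x)                     # (B, dim, D/16, H/16, W/16)
        return x.flatten(2).transpose(1, 2)  # (B, num_patches, dim) token sequence

tokens = PatchEmbed3D()(torch.randn(2, 1, 64, 128, 128))
print(tokens.shape)                          # torch.Size([2, 256, 768])
# These tokens then feed an unchanged ViT encoder plus a classification head.
```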
Procedia PDF Downloads 114
304 Classification of Crisp Petri Nets
Authors: Riddhi Jangid, Gajendra Pratap Singh
Abstract:
Petri nets are a formalized modeling language, introduced some 50-60 years ago, that has been widely used for modeling discrete event dynamic systems and simulating their behavior. Reachability analysis of Petri nets gives many insights into a modeled system, which leads us to study the reachability technique and apply it to the state space of reachable markings. In this vein, Crisp Boolean Petri nets were defined as nets whose Boolean marking vectors are all distinct in the reachability analysis. We generalize the concept and define 'Crisp' Petri nets, which generate each marking vector exactly once in their reachability-based analysis, with markings that are not necessarily Boolean.
Keywords: marking vector, n-vector, Petri nets, reachability
Procedia PDF Downloads 81
303 An Empirical Study on Switching Activation Functions in Shallow and Deep Neural Networks
Authors: Apoorva Vinod, Archana Mathur, Snehanshu Saha
Abstract:
Though there exists a plethora of Activation Functions (AFs) used in single and multiple hidden layer Neural Networks (NN), their behavior has always raised curiosity, whether used in combination or singly. The popular AFs (Sigmoid, ReLU, and Tanh) have performed prominently well for shallow and deep architectures. Most of the time, AFs are used singly in multi-layered NNs, and, to the best of our knowledge, their performance has never been studied and analyzed deeply when used in combination. In this manuscript, we experiment with multi-layered NN architectures (both shallow and deep; a convolutional NN and VGG16) and investigate how well the network responds to using two different AFs alternately (Sigmoid-Tanh, Tanh-ReLU, ReLU-Sigmoid) versus the traditional single-AF combinations (Sigmoid-Sigmoid, Tanh-Tanh, ReLU-ReLU). Our results show that using two different AFs, the network achieves better accuracy, substantially lower loss, and faster convergence on 4 computer vision (CV) and 15 non-CV (NCV) datasets. When using different AFs, not only was the accuracy greater by 6-7%, but we also achieved convergence twice as fast. We present a case study investigating the probability of networks suffering from vanishing and exploding gradients when using two different AFs. Additionally, we show theoretically that a composition of two or more AFs satisfies the Universal Approximation Theorem (UAT).
Keywords: activation function, universal approximation function, neural networks, convergence
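The experimental manipulation, alternating two AFs across hidden layers versus repeating one, is easy to express. A PyTorch sketch with illustrative widths and depths follows:

```python
import torch
import torch.nn as nn

def mlp(acts, width=64, depth=4, in_dim=10, out_dim=2):
    """Build an MLP whose hidden layers cycle through the AF classes in `acts`."""
    layers, dim = [], in_dim
    for i in range(depth):
        layers += [nn.Linear(dim, width), acts[i % len(acts)]()]
        dim = width
    layers.append(nn.Linear(dim, out_dim))
    return nn.Sequential(*layers)

alternating = mlp([nn.Sigmoid, nn.Tanh])   # Sigmoid-Tanh-Sigmoid-Tanh hidden layers
baseline    = mlp([nn.Sigmoid])            # traditional Sigmoid-Sigmoid-... network
print(alternating(torch.randn(5, 10)).shape)   # torch.Size([5, 2])
```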
Procedia PDF Downloads 157
302 Review on Implementation of Artificial Intelligence and Machine Learning for Controlling Traffic and Avoiding Accidents
Authors: Neha Singh, Shristi Singh
Abstract:
Accidents involving motor vehicles are likely to cause serious injuries and fatalities, and road transport suffers from a host of other persistent issues, such as the regular loss of life and goods in accidents. To solve these issues, appropriate measures must be implemented, such as establishing an autonomous incident detection system that makes use of machine learning and artificial intelligence. This article examines the use of artificial intelligence and machine learning in autonomous incident detection systems in order to reduce traffic accidents. The paper explores the major issues, prospective solutions, and the use of artificial intelligence and machine learning in road transportation systems for minimising traffic accidents. Additional, fresh, and developing approaches that lessen the frequency of accidents in the transportation industry are discussed at length. The study is structured around the following subtopics: traffic management using machine learning and artificial intelligence, and an incident detector built with these two technologies. The internet of vehicles and vehicular ad hoc networks, the use of wireless communication technologies such as 5G networks, and the use of machine learning and artificial intelligence for the planning of road transportation systems are elaborated. In addition, safety is the primary concern of road transportation. Route optimization, cargo volume forecasting, predictive fleet maintenance, real-time vehicle tracking, and traffic management, according to the review's key conclusions, are essential for ensuring the safety of road transportation networks. The study also highlights research trends, unanswered problems, and key research conclusions, and discusses the difficulties in applying artificial intelligence to road transport systems. The work can serve as a resource for planning and managing road transportation systems.
Keywords: artificial intelligence, machine learning, incident detector, road transport systems, traffic management, automatic incident detection, deep learning
Procedia PDF Downloads 111