Publications | Computer and Systems Engineering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 478

World Academy of Science, Engineering and Technology

[Computer and Systems Engineering]

Online ISSN : 1307-6892

478 A Neutral Set Approach for Applying TOPSIS in Maintenance Strategy Selection

Authors: C. Ardil

Abstract:

This paper introduces the concept of neutral sets (NSs) and explores various operations on NSs, along with their associated properties. The foundation of the Neutral Set framework lies in ontological neutrality and the principles of logic, including the Law of Non-Contradiction. By encompassing components for possibility, indeterminacy, and necessity, the NS framework provides a flexible representation of truth, uncertainty, and necessity, accommodating diverse ontological perspectives without presupposing specific existential commitments. The inclusion of Possibility acknowledges the spectrum of potential states or propositions, promoting neutrality by accommodating various viewpoints. Indeterminacy reflects the inherent uncertainty in understanding reality, refraining from making definitive ontological commitments in uncertain situations. Necessity captures propositions that must hold true under all circumstances, aligning with the principle of logical consistency and implicitly supporting the Law of Non-Contradiction. Subsequently, a neutral set-TOPSIS approach is applied in the maintenance strategy selection problem, demonstrating the practical applicability of the NS framework. The paper further explores uncertainty relations and presents the fundamental preliminaries of NS theory, emphasizing its role in fostering ontological neutrality and logical coherence in reasoning.
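
To make the neutral set-TOPSIS step concrete, the following minimal sketch ranks three hypothetical maintenance strategies. The triples, criterion weights, and crisp score function are illustrative assumptions; the paper defines its own NS operators.

```python
import numpy as np

# Hypothetical (possibility, indeterminacy, necessity) ratings in [0, 1];
# rows = maintenance strategies, columns = criteria.
ratings = np.array([
    [(0.7, 0.2, 0.6), (0.5, 0.4, 0.3)],   # corrective maintenance
    [(0.8, 0.1, 0.7), (0.6, 0.3, 0.5)],   # preventive maintenance
    [(0.6, 0.3, 0.4), (0.9, 0.1, 0.8)],   # predictive maintenance
])
weights = np.array([0.6, 0.4])            # assumed criterion weights

def crisp_score(ns):
    # One plausible defuzzification: reward possibility and necessity,
    # penalize indeterminacy; maps into [0, 1]. The paper's own
    # operators may differ.
    p, i, n = ns
    return (p + n - i + 1.0) / 3.0

scores = np.apply_along_axis(crisp_score, 2, ratings)  # strategies x criteria
weighted = scores * weights
ideal, anti = weighted.max(axis=0), weighted.min(axis=0)
d_plus = np.linalg.norm(weighted - ideal, axis=1)      # distance to ideal
d_minus = np.linalg.norm(weighted - anti, axis=1)      # distance to anti-ideal
closeness = d_minus / (d_plus + d_minus)               # TOPSIS closeness
print("best strategy index:", int(np.argmax(closeness)))
```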

Keywords: Uncertainty sets, neutral sets, maintenance strategy selection, multiple criteria decision-making analysis, MCDM, uncertainty decision analysis, distance function, multiple attribute decision making, selection method, uncertainty, TOPSIS

PDF Downloads 10
477 A Proposal for a Secure and Interoperable Data Framework for Energy Digitalization

Authors: Hebberly Ahatlan

Abstract:

The process of digitizing energy systems involves transforming traditional energy infrastructure into interconnected, data-driven systems that enhance efficiency, sustainability, and responsiveness. As smart grids become increasingly integral to the efficient distribution and management of electricity from both fossil and renewable energy sources, the energy industry faces strategic challenges associated with digitalization and interoperability — particularly in the context of modern energy business models, such as virtual power plants (VPPs). The critical challenge in modern smart grids is to seamlessly integrate diverse technologies and systems, including virtualization, grid computing and service-oriented architecture (SOA), across the entire energy ecosystem. Achieving this requires addressing issues like semantic interoperability, Information Technology (IT) and Operational Technology (OT) convergence, and digital asset scalability, all while ensuring security and risk management. This paper proposes a four-layer digitalization framework to tackle these challenges, encompassing persistent data protection, trusted key management, secure messaging, and authentication of IoT resources. Data assets generated through this framework enable AI systems to derive insights for improving smart grid operations, security, and revenue generation. Furthermore, this paper also proposes a Trusted Energy Interoperability Alliance as a universal guiding standard in the development of this digitalization framework to support more dynamic and interoperable energy markets.

Keywords: Digitalization, IT/OT convergence, semantic interoperability, TEIA alliance, VPP.

PDF Downloads 13
476 Fighter Aircraft Selection Using Neutrosophic Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

Fuzzy sets and intuitionistic fuzzy sets address the imprecision and uncertainty inherent in complex decision problems. However, these theories are sometimes insufficient to model the indeterminate and inconsistent information encountered in real-life problems. To overcome this insufficiency, the neutrosophic set, which is useful in practical applications, is proposed; triangular and trapezoidal neutrosophic numbers are examined, and their definitions and applications are discussed. In this study, a decision making algorithm is developed using neutrosophic set operations, and an application to fighter aircraft selection is given as an example of a decision making problem. The estimation of the fighter aircraft selection with the neutrosophic multiple criteria decision analysis method is examined.
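
As a concrete illustration of a triangular neutrosophic number, the sketch below uses one score function that appears in the single-valued triangular neutrosophic number literature; the paper's exact ranking rule may differ.

```python
from dataclasses import dataclass

@dataclass
class TriangularNeutrosophicNumber:
    """(a1, a2, a3; T, I, F): triangular support with truth,
    indeterminacy and falsity degrees in [0, 1]."""
    a1: float; a2: float; a3: float
    T: float; I: float; F: float

    def score(self) -> float:
        # One score function used in the SVTNN literature; larger is
        # better. The paper's exact ranking rule may differ.
        return (self.a1 + 2 * self.a2 + self.a3) / 8 * (2 + self.T - self.I - self.F)

# Toy comparison of two criterion ratings for a candidate aircraft:
good = TriangularNeutrosophicNumber(0.6, 0.7, 0.8, T=0.9, I=0.1, F=0.1)
fair = TriangularNeutrosophicNumber(0.4, 0.5, 0.6, T=0.7, I=0.3, F=0.2)
print(good.score() > fair.score())   # True: 'good' ranks higher
```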

Keywords: neutrosophic set, multiple criteria decision making analysis, fighter aircraft selection, MCDMA, neutrosophic numbers

PDF Downloads 859
475 Assertion-Driven Test Repair Based on Priority Criteria

Authors: Ruilian Zhao, Shukai Zhang, Yan Wang, Weiwei Wang

Abstract:

Repairing broken test cases is an expensive and challenging task in evolving software systems. Although an automated, intent-preserving repair technique has been proposed, it does not take into account the association between test repairs and assertions, leading to a large number of irrelevant candidates and decreasing the repair capability. This paper proposes an assertion-driven test repair approach. Furthermore, an intent-oriented priority criterion is introduced to guide repair candidate generation, making the repairs closer to the intent of the test. In more detail, repair targets are determined through post-dominance relations between assertions and the methods that directly cause compilation errors. Then, test repairs are generated from the target in a bottom-up way, guided by the intent-oriented priority criteria. Finally, the generated repair candidates are prioritized to match the original test intent. The approach is implemented and evaluated on a benchmark of 4 open-source programs and 91 broken test cases. The results show that the approach can fix 89% (81/91) of the broken test cases, which is more effective than the existing intent-preserved test repair approach, and that our intent-oriented priority criteria work well.
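
A minimal sketch of what an intent-oriented priority might look like (a toy proxy only; the paper's criteria rest on post-dominance and richer program analysis): candidates that share more identifiers with the original assertion are tried first.

```python
import re

def identifiers(assertion: str) -> set:
    """Crude token extraction; a real tool would work on the AST."""
    return set(re.findall(r"[A-Za-z_]\w*", assertion))

def prioritize(candidates, original_assertion):
    # Rank candidate repairs by how many identifiers of the original
    # assertion they preserve (a stand-in for test intent).
    target = identifiers(original_assertion)
    return sorted(candidates,
                  key=lambda c: len(identifiers(c) & target),
                  reverse=True)

repairs = ["assertEquals(total, order.sum())",
           "assertTrue(order.isEmpty())"]
print(prioritize(repairs, "assertEquals(total, cart.sum())"))
# the candidate preserving 'total' and 'sum' ranks first
```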

Keywords: Test repair, test intent, software test, test case evolution.

PDF Downloads 52
474 Factors Affecting Test Automation Stability and Their Solutions

Authors: Nagmani Lnu

Abstract:

Test automation is a vital requirement for any organization seeking to release products faster to its customers. In most cases, an organization develops an automation approach but struggles to maintain it, resulting in a growing number of flaky tests that reduce return on investment and stakeholders' confidence. The challenges multiply when the automation targets User Interface (UI) behaviors. This paper describes the approaches taken to identify the root causes of automation instability in an extensive payments application and the best practices to address them using processes, tools, and technologies, resulting in a 75% reduction of effort.
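
One widely used stability fix of the kind the paper alludes to (illustrative only; the paper's specific toolchain is not shown here) is replacing fixed sleeps in UI tests with bounded polling waits:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns True or `timeout` elapses,
    instead of sleeping a fixed amount and hoping the UI is ready."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# usage (page.is_element_visible is a hypothetical UI-driver call):
# wait_until(lambda: page.is_element_visible("#pay-button"))
```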

Keywords: Automation stability, test stability, flaky test, test quality, test automation quality.

PDF Downloads 71
473 UPPAAL-Based Design and Analysis of Intelligent Parking System

Authors: Abobaker M. Q. Farhan, Olof M. A. Saif

Abstract:

The demand for parking spaces in urban areas, particularly in developing countries, has far outstripped supply; the lack of sufficient parking in crowded areas results in daily traffic congestion as drivers search for a spot. This not only affects the appearance of the city but also has indirect impacts on the economy, society, and environment. In response to these challenges, researchers from various countries have sought technical and intelligent solutions that mitigate the problem through the development of smart parking systems. This paper analyzes and designs three models of parking lots, with a focus on parking time and security. The study used computer software and the UPPAAL tool set to simulate the models and determine the best among them. The results and suggestions provided in the paper aim to reduce parking problems and improve the overall efficiency and safety of the parking process. The conclusion of the study highlights the importance of utilizing advanced technology to address the pressing issue of insufficient parking spaces in urban areas.

Keywords: Preliminaries, system requirements, timed automata, UPPAAL.

PDF Downloads 56
472 Advanced Convolutional Neural Network Paradigms: Comparison of VGG16 with ResNet50 in Crime Detection

Authors: Taiwo M. Akinmuyisitan, John Cosmas

Abstract:

This paper practically demonstrates the theories and concepts of advanced convolutional neural networks in the design and development of a scalable artificial intelligence model for the detection of criminal masterminds. The technique uses machine vision algorithms to compute the facial characteristics of suspects and classify faces as criminal or non-criminal. The paper then compares the error and accuracy results of two popular pre-trained convolutional networks, VGG16 and ResNet50. The results show that VGG16 is probably more efficient than ResNet50 for the dataset we used.
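
A minimal sketch of such a comparison, assuming 224x224 RGB inputs, a binary criminal/non-criminal head, and frozen ImageNet backbones (the authors' exact training setup is not shown here):

```python
import tensorflow as tf

def build(backbone_fn):
    # Same classification head on top of either pretrained backbone,
    # used as a frozen feature extractor.
    base = backbone_fn(weights="imagenet", include_top=False,
                       input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

vgg = build(tf.keras.applications.VGG16)
resnet = build(tf.keras.applications.ResNet50)
# vgg.fit(train_ds, validation_data=val_ds, epochs=5)  # likewise resnet,
# then compare the two validation accuracies, as the paper does.
```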

Keywords: Artificial intelligence, convolutional neural networks, ResNet50, VGG16.

PDF Downloads 106
471 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection

Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra, Abdus Sobur

Abstract:

In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of artificial intelligence (AI), specifically deep learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images, representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our approach presents a hybrid model, amalgamating the strengths of two renowned convolutional neural networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.
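
A sketch of the hybrid idea under assumed shapes: features from frozen VGG16 and ResNet50 backbones are concatenated, and class weights counter the imbalance among the nine conditions. Each backbone would normally get its own input preprocessing; the details here are illustrative.

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(224, 224, 3))
vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                  pooling="avg")
res = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                     pooling="avg")
vgg.trainable = res.trainable = False          # feature extraction only
feats = tf.keras.layers.Concatenate()([vgg(inp), res(inp)])
out = tf.keras.layers.Dense(9, activation="softmax")(feats)  # 9 conditions
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# class_weight = {0: 0.5, 1: 2.1, ...}   # e.g. inverse-frequency weights
# model.fit(train_ds, class_weight=class_weight, epochs=10)
```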

Keywords: Artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging.

PDF Downloads 587
470 A Semantic Registry to Support Brazilian Aeronautical Web Services Operations

Authors: Luís Antonio de Almeida Rodriguez, José Maria Parente de Oliveira, Ednelson Oliveira

Abstract:

In the last two decades, the world's aviation authorities have made several attempts to reach consensus on a global, accepted approach for applying semantics to web service registry descriptions. The lack of such a consensus has left communities facing a bloated and disorganized infrastructure for describing aeronautical web services. It is usual for developers to implement ad hoc connections between consumers and providers and to manually create non-standardized service compositions, which require a particular approach to compose and semantically discover a desired web service. Current practices are imprecise and tend to focus on lightweight specifications of some parts of OWL-S embedded into syntactic descriptions (SOAP artifacts and the OWL language). It is necessary to be able to manage the use of both technologies. This paper presents an implementation of the OWL-S ontology that describes a Brazilian Aeronautical Web Service Registry, which can publish and advertise web services, perform multi-criteria semantic discovery aligned with the ideas of the System Wide Information Management (SWIM) Program, and invoke web services within the Air Traffic Management context. The proposal's best finding is a generic approach to describing semantic web services. The paper also presents a set of functional requirements that guided the ontology development and compares them to the results to validate the implementation of the OWL-S ontology.

Keywords: Aeronautical Web Services, OWL-S, Semantic Web Services Discovery, Ontologies.

PDF Downloads 76
469 Minimizing Mutant Sets by Equivalence and Subsumption

Authors: Samia Alblwi, Amani Ayad

Abstract:

Mutation testing is the art of generating syntactic variations of a base program and checking whether a candidate test suite can identify all the mutants that are not semantically equivalent to the base; this technique can be used to assess the quality of a test suite. One of the main obstacles to the widespread use of mutation testing is cost, as even small programs (a few dozen lines of code) can give rise to a large number of mutants (up to hundreds); this has created an incentive to reduce the number of mutants while preserving their collective effectiveness. Two criteria have been used to reduce the size of mutant sets: equivalence, which partitions the set of mutants into equivalence classes modulo semantic equivalence and selects one representative per class; and subsumption, which defines a partial ordering among mutants that ranks them by effectiveness and selects the maximal elements in this ordering. In this paper, we analyze these two policies using analytical and empirical criteria.
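
A toy illustration of both reductions, using kill sets (the tests that detect each mutant) as a practical proxy for semantic equivalence and for the subsumption ordering:

```python
kill = {                       # hypothetical mutant -> tests that kill it
    "m1": {"t1", "t2"},
    "m2": {"t1", "t2"},        # same kill set as m1 -> same class
    "m3": {"t1"},              # killed only by t1: hardest to kill
    "m4": {"t1", "t2", "t3"},
}

# Equivalence reduction: one representative per identical kill set
# (equality of kill sets approximates semantic equivalence modulo the
# test pool).
classes = {}
for m, ks in kill.items():
    classes.setdefault(frozenset(ks), m)
reps = set(classes.values())                     # {'m1', 'm3', 'm4'}

def subsumes(a, b):
    # a subsumes b if every test killing a also kills b (a non-empty,
    # strictly harder-to-kill mutant).
    return bool(kill[a]) and kill[a] < kill[b]

# Subsumption reduction: keep only mutants no other mutant subsumes.
maximal = {m for m in reps
           if not any(subsumes(o, m) for o in reps if o != m)}
print(reps, maximal)   # m3 subsumes m1 and m4, so maximal == {'m3'}
```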

Keywords: Mutation testing, mutant sets, mutant equivalence, mutant subsumption, mutant set minimization.

PDF Downloads 99
468 Comparative Usability Study of the Websites of Top Universities in Three Continents: A Case Study of the University of Cape Town, Oxford University, and Harvard University

Authors: Stephen Akuma, Racheal Aluma, Abraham Undu

Abstract:

Academic websites play an important role in promoting education for all. They allow universities to provide users with digital academic services that save time and resources. A university website is not only a cost-effective and timely way to communicate with a variety of stakeholders, such as students, faculty, and visitors, but also a vehicle for the university to shape its image. The quality of a website is a major factor that universities consider in cyberspace, as potential students can more easily apply to universities whose websites provide useful and clear information. This has made website usability an important area for meeting the needs and expectations of users. In this paper, a comparative usability study of the University of Cape Town, Oxford University, and Harvard University academic websites (http://www.uct.ac.za/, https://www.ox.ac.uk/, and https://www.harvard.edu/) was carried out. The proactive user feedback technique was adopted for the comparative usability assessment, allowing the researchers to collect and log records from the participants in real time. The results show that the average dwell times on the websites of Harvard University, Oxford University, and the University of Cape Town for the three tasks were 51.58, 33.28, and 54.82 seconds, respectively. The System Usability Scale (SUS) scores for Harvard, Oxford, and the University of Cape Town were 49.81, 69.43, and 54.14, respectively. An analysis of variance on the dwell-time data shows a significant difference (p = .009) across the three websites. Our findings show that Oxford University has the most usable website of the three in terms of usability factors and related metrics. Practical implications are highlighted, and recommendations for improved website usability are suggested.
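
The reported one-way ANOVA can be reproduced on per-participant dwell times as follows; the numbers below are hypothetical stand-ins, not the study's raw data:

```python
from scipy import stats

# Hypothetical per-participant dwell times in seconds for one task.
harvard = [48.1, 55.3, 60.2, 44.9, 49.4]
oxford  = [30.5, 35.8, 31.0, 36.2, 32.9]
uct     = [52.7, 58.4, 49.9, 57.6, 55.5]

# One-way ANOVA across the three sites, as in the paper.
f_stat, p_value = stats.f_oneway(harvard, oxford, uct)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < .05 -> sites differ
```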

Keywords: Usability factors, user feedback, university websites, University of Cape Town, Harvard University, Oxford University.

PDF Downloads 97
467 SEM Image Classification Using CNN Architectures

Authors: G. Türkmen, Ö. Tekin, K. Kurtuluş, Y. Y. Yurtseven, M. Baran

Abstract:

A scanning electron microscope (SEM) is a type of electron microscope used mainly in nanoscience and nanotechnology. Automatic image recognition and classification are among its general areas of application. In line with these usages, the present paper proposes a deep learning algorithm that classifies SEM images into nine categories by means of an online application to simplify the process. The NFFA-EUROPE 100% SEM data set, containing approximately 21,000 images, was used to train and test the algorithm with an 80%/20% split. Validation was carried out using a separate data set obtained from the Middle East Technical University (METU) in Turkey. To increase accuracy, the Inception ResNet-V2 model was used with a fine-tuning approach. A confusion matrix showed that the coated-surface category has a negative effect on accuracy, since its images contain instances of other categories, confusing the model when detecting category-specific patterns. For this reason, the coated-surface category was removed from the training data set, raising accuracy to 96.5%.
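
The confusion-matrix diagnostic works as in the following sketch (labels and predictions are hypothetical): the class whose row carries the most off-diagonal mass is the one the model confuses most, which is how a problem category such as coated-surface shows up.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Stand-ins for the SEM test-set labels and model predictions.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 0, 1, 2, 1])   # class 2 often misread

cm = confusion_matrix(y_true, y_pred)
off_diag = cm.sum(axis=1) - np.diag(cm)       # per-class confusion count
print(cm)
print("most-confused class:", off_diag.argmax())  # -> 2
```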

Keywords: Convolutional Neural Networks, deep learning, image classification, scanning electron microscope.

PDF Downloads 121
466 An Empirical Study on Switching Activation Functions in Shallow and Deep Neural Networks

Authors: Apoorva Vinod, Archana Mathur, Snehanshu Saha

Abstract:

Though there exists a plethora of Activation Functions (AFs) used in single and multiple hidden layer Neural Networks (NNs), their behavior has always aroused curiosity, whether used in combination or singly. The popular AFs (Sigmoid, ReLU, and Tanh) have performed prominently well for shallow and deep architectures. Most of the time, AFs are used singly in multi-layered NNs and, to the best of our knowledge, their performance has never been studied and analyzed deeply when used in combination. In this manuscript, we experiment on multi-layered NN architectures (both shallow and deep: convolutional NNs and VGG16) and investigate how well the network responds to two different AFs (Sigmoid-Tanh, Tanh-ReLU, ReLU-Sigmoid) used alternately, against a traditional single-AF combination (Sigmoid-Sigmoid, Tanh-Tanh, ReLU-ReLU). Our results show that on using two different AFs, the network achieves better accuracy, substantially lower loss, and faster convergence on 4 computer vision (CV) and 15 non-CV (NCV) datasets. When using different AFs, not only was the accuracy greater by 6-7%, but convergence was also twice as fast. We present a case study investigating the probability of networks suffering vanishing and exploding gradients when using two different AFs. Additionally, we show theoretically that a composition of two or more AFs satisfies the Universal Approximation Theorem (UAT).
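
A minimal sketch of the alternating-AF construction on a small dense network (the paper also covers CNNs and VGG16; the width, depth, and input size here are arbitrary):

```python
import tensorflow as tf

def mlp(acts, in_dim=20, width=64, depth=4):
    # Hidden layers cycle through the given activations, so acts=["tanh"]
    # is the single-AF baseline and acts=["tanh", "relu"] alternates.
    model = tf.keras.Sequential(
        [tf.keras.Input(shape=(in_dim,))]
        + [tf.keras.layers.Dense(width, activation=acts[i % len(acts)])
           for i in range(depth)]
        + [tf.keras.layers.Dense(1, activation="sigmoid")])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

single = mlp(["tanh"])            # Tanh-Tanh-Tanh-Tanh baseline
switched = mlp(["tanh", "relu"])  # Tanh-ReLU-Tanh-ReLU variant
# train both on the same data and compare accuracy and convergence speed
```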

Keywords: Activation Function, Universal Approximation function, Neural Networks, convergence.

PDF Downloads 88
465 Improving Activity Recognition Classification of Repetitious Beginner Swimming Using a 2-Step Peak/Valley Segmentation Method with Smoothing and Resampling for Machine Learning

Authors: Larry Powell, Seth Polsley, Drew Casey, Tracy Hammond

Abstract:

Human activity recognition (HAR) systems have shown strong performance when recognizing repetitive activities like walking, running, and sleeping. Water-based activities are a relatively new area for activity recognition, and work there has largely focused on supporting the elite and competitive swimming population, which already has excellent coordination and proper form. Beginner swimmers are not perfect, and activity recognition needs to capture their individual motions in order to help them. Activity recognition algorithms are traditionally built around short segments of timed sensor data, but a time-window input can cause performance issues in the machine learning model: the window can be too small or too large, requiring careful tuning and precise data segmentation. In this work, we present a method that uses a time window as the initial segmentation and then separates the data based on changes in the sensor values. Our system uses a multi-phase segmentation method that pulls all peaks and valleys from each axis of an accelerometer placed on the swimmer's lower back. This yields high recognition performance using leave-one-subject-out validation in our study of 20 beginner swimmers, with the model optimized on our final dataset achieving an F-score of 0.95.
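
The peak/valley step can be sketched as follows, with assumed detection parameters and a synthetic signal standing in for the lower-back accelerometer axis:

```python
import numpy as np
from scipy.signal import find_peaks

# Mock accelerometer axis: a stroke-like oscillation plus sensor noise.
t = np.linspace(0, 4, 400)
z = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)

# Peaks of the signal, and valleys as peaks of the negated signal;
# distance/prominence thresholds are illustrative tuning choices.
peaks, _ = find_peaks(z, distance=20, prominence=0.5)
valleys, _ = find_peaks(-z, distance=20, prominence=0.5)

# Split the axis at every peak/valley boundary instead of using one
# fixed-size window; each segment then yields features (mean, range,
# duration, ...) for the classifier.
boundaries = np.sort(np.concatenate([peaks, valleys]))
segments = [z[a:b] for a, b in zip(boundaries[:-1], boundaries[1:])]
print(len(segments), "segments")
```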

Keywords: Time window, peak/valley segmentation, feature extraction, beginner swimming, activity recognition.

PDF Downloads 125
464 Improved Computational Efficiency of Machine Learning Algorithms Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK

Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick

Abstract:

The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the disease appeared in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to devise a predictive machine learning (ML) model to forecast COVID-19 cases within the UK. This study concentrates on statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID-19 cases registered, new daily cases, total deaths registered, and daily deaths due to the coronavirus was collected from the World Health Organization (WHO). Data preprocessing was carried out to identify any missing values, outliers, or anomalies in the dataset. The data were split in an 8:2 ratio for training and testing in order to forecast future new COVID-19 cases. Support Vector Machine (SVM), Random Forest (RF), and linear regression (LR) algorithms were chosen to study model performance in predicting new COVID-19 cases. The statistical performance of each model was evaluated with metrics such as the R-squared value and mean squared error. RF outperformed the other two ML algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n = 30. The mean squared error obtained for RF is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the RF algorithm performs more effectively and efficiently in predicting new COVID-19 cases, which could help the health sector take relevant control measures against the spread of the virus.
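
A sketch of the reported protocol with synthetic stand-ins for the WHO-derived features and targets (the real study uses the daily case and death series over the stated period):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((425, 4))                      # mock daily features
y = 3 * X[:, 0] + rng.normal(0, 0.1, 425)     # mock new-case target

# 8:2 split; shuffle=False keeps the time order of the series.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)

rf = RandomForestRegressor(n_estimators=30, random_state=0)  # n = 30
rf.fit(X_train, y_train)
pred = rf.predict(X_test)
print("R^2:", r2_score(y_test, pred),
      "MSE:", mean_squared_error(y_test, pred))
```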

Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest.

PDF Downloads 101
463 Development of an Ensemble Classification Model Based on Hybrid Filter-Wrapper Feature Selection for Email Phishing Detection

Authors: R. B. Ibrahim, M. S. Argungu, I. M. Mungadi

Abstract:

The Internet has become an indispensable part of human life since its inception, providing diverse opportunities that make life easier through the adoption of various channels, such as email, internet banking, and video conferencing. Email is one of the easiest means of communication, hugely accepted among individuals and organizations globally. Over the decades, however, the security integrity of this platform has been challenged by malicious activities such as phishing. Email phishing is designed to fool the recipient into handing over sensitive personal information such as passwords, credit card numbers, account credentials, and social security numbers. This activity has caused substantial financial damage to email users globally, resulting in bankruptcy, sudden deaths of victims, and other health-related problems. Although many methods have been proposed to detect email phishing, in this research the results of multiple machine-learning methods for predicting email phishing are compared, using hybrid filter-wrapper feature selection. It is worth noting that all three models performed substantially well, but one outperformed the others. The dataset used for these models was obtained from the Kaggle online data repository, and three classifiers (decision tree, Naïve Bayes, and logistic regression) were each ensembled using bagging. Results from the study show that the decision tree (CART) bagging ensemble recorded the highest accuracy of 98.13% using PEF (Phishing Essential Features). This result further demonstrates the dependability of the proposed model.
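
The three compared models can be sketched as follows, with synthetic data standing in for the Kaggle phishing dataset after filter-wrapper selection:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Stand-in data; the paper uses the Kaggle phishing features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Each base learner is ensembled with bagging, as in the study.
for name, base in [("CART", DecisionTreeClassifier()),
                   ("Naive Bayes", GaussianNB()),
                   ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    bagged = BaggingClassifier(base, n_estimators=50, random_state=0)
    acc = cross_val_score(bagged, X, y, cv=5).mean()
    print(f"{name} bagging accuracy: {acc:.3f}")
```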

Keywords: Ensemble, hybrid, filter-wrapper, phishing.

PDF Downloads 112
462 Intervention Targeting in Environmental Networks

Authors: Chukwudi Henry Dike

Abstract:

We explore targeted subsidy in a set-up in which manufacturing firms in a waste-spillover network make endogenous production decisions. Here, games of substitution on digraphs arise, where waste-producing firms internalise negative externalities in a quadratic fashion. We find neutrality in intervention policies that create or reduce spillover links. Most importantly, we observe a centrality distinction in asymmetric digraphs, so that the dependence and power of each firm play unique roles. We see that under targeted subsidy, the firm with greater centrality guarantees the optimal welfare improvement. This centrality, however, measures the weakness of each firm's Nash-based links to other neighbourhood firms, i.e., a lower negative externality.
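
The centrality logic can be illustrated with a textbook linear-quadratic game of strategic substitutes on a digraph; this payoff is a simplification for illustration, not necessarily the authors' exact model.

```python
import numpy as np

# Firm i's assumed payoff: a*x_i - x_i**2/2 - d * x_i * sum_j G[i,j]*x_j,
# so spillovers act as strategic substitutes. The first-order conditions
# give the interior Nash equilibrium (I + d*G) x = a*1, a
# Katz-Bonacich-style centrality vector.
G = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)   # asymmetric waste digraph
a, d = 1.0, 0.4

x = np.linalg.solve(np.eye(3) + d * G, a * np.ones(3))
print(x)   # [0.36, 0.6, 1.0]: the most exposed firm produces least,
           # so asymmetric positions give each firm a distinct role
```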

Keywords: Centrality, externality, key-player, Nash-Equilibrium.

PDF Downloads 146
461 Deep Reinforcement Learning for Optimal Decision-making in Supply Chains

Authors: Nitin Singh, Meng Ling, Talha Ahmed, Tianxia Zhao, Reinier van de Pol

Abstract:

We propose Reinforcement Learning (RL) as a viable alternative for optimizing supply chain management, particularly in scenarios with stochastic product demands. RL's adaptability to changing conditions and its demonstrated success in diverse fields of sequential decision-making make it a promising candidate for addressing supply chain problems. We investigate the impact of demand fluctuations in a multi-product supply chain system and develop RL agents with learned, generalizable policies. We provide experimentation details for training the RL agents and a statistical analysis of the results. We study the generalization ability of the RL agents under different demand-uncertainty scenarios and observe superior performance compared to agents trained on fixed demand curves. The proposed methodology has the potential to reduce cost and increase profit for companies dealing with frequent inventory movement between supply and demand nodes.
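
A minimal single-node inventory environment of the kind an RL agent would be trained on (assumed dynamics, far simpler than the paper's multi-product system):

```python
import numpy as np

class InventoryEnv:
    """State = on-hand stock, action = order quantity, demand is
    stochastic, reward = revenue minus holding and lost-sales costs."""

    def __init__(self, capacity=100, price=5.0, hold=0.1, penalty=2.0, seed=0):
        self.capacity, self.price = capacity, price
        self.hold, self.penalty = hold, penalty
        self.rng = np.random.default_rng(seed)
        self.stock = capacity // 2

    def step(self, order):
        self.stock = min(self.stock + order, self.capacity)
        demand = self.rng.poisson(20)                 # stochastic demand
        sold = min(self.stock, demand)
        self.stock -= sold
        reward = (self.price * sold
                  - self.hold * self.stock                   # holding cost
                  - self.penalty * max(demand - sold, 0))    # lost sales
        return self.stock, reward

env = InventoryEnv()
state, r = env.step(order=25)   # an RL agent (e.g., DQN/PPO) would learn
                                # an ordering policy robust to demand shifts
```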

Keywords: Inventory Management, Reinforcement Learning, Supply Chain Optimization, Uncertainty.

PDF Downloads 258
460 Overview of Development of a Digital Platform for Building Critical Infrastructure Protection Systems in Smart Industries

Authors: Bruno Vilić Belina, Ivan Župan

Abstract:

Smart industry concepts and digital transformation are very popular in many industries. Companies develop their own digital platforms, which play an important role in innovations and transactions. The main idea of smart industry digital platforms is central data collection, industrial data integration, and data usage for smart applications and services. This paper presents the development of a digital platform for building critical infrastructure protection systems in smart industries. It researches different service contracting modalities in Service Level Agreements (SLAs), Customer Relationship Management (CRM) relations, and trends and changes in business architectures (especially process business architecture) for the purpose of developing infrastructural production and distribution networks, information infrastructure meta-models, and the generic processes demanded of critical infrastructure owners by critical infrastructure law, while satisfying cybersecurity requirements and taking hybrid threats into account.

Keywords: Cybersecurity, critical infrastructure, smart industries, digital platform.

PDF Downloads 153
459 Offset Dependent Uniform Delay Mathematical Optimization Model for Signalized Traffic Network Using Differential Evolution Algorithm

Authors: Tahseen Al-Shaikhli, Halim Ceylan, Jonathan Weaver, Osman Nuri Çelik, Onur Gungor Sahin

Abstract:

An offset-dependent uniform-delay mathematical optimization model is derived as the main objective of this study and solved using a differential evolution algorithm. The further objectives are to control the coordination problem, which mainly depends on offset selection, and to estimate the uniform delay resulting from the offset chosen at each signalized intersection. Arrival and departure patterns are assumed to follow periodic sinusoidal functions. The cycle time is optimized at the entry links, and the optimized value is used in the non-entry links as a common cycle time. The offset optimization algorithm is used to calculate the uniform delay at each link. The results are illustrated using a case study and compared with the canonical uniform delay model derived by Webster and with the Highway Capacity Manual's model. The findings show that the derived model reduces the total uniform delay to almost half of that of the conventional models, the mathematical objective function is robust, and the algorithm converges quickly.
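
The optimization step can be sketched with SciPy's differential evolution and a toy delay surrogate; the paper derives an analytic uniform-delay expression, and the sinusoidal surrogate below only mimics its shape.

```python
import numpy as np
from scipy.optimize import differential_evolution

C = 90.0                                # common cycle time (s)
peaks = np.array([12.0, 35.0, 60.0])    # assumed arrival-peak times per link

def total_delay(offsets):
    # Toy surrogate: delay per link is smallest when its offset aligns
    # the green with the sinusoidal arrival peak.
    return np.sum(1.0 - np.cos(2 * np.pi * (offsets - peaks) / C))

res = differential_evolution(total_delay,
                             bounds=[(0, C)] * 3, seed=1, tol=1e-8)
print(np.round(res.x, 1), res.fun)   # offsets converge near the peaks
```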

Keywords: Area traffic control, differential evolution, offset variable, sinusoidal periodic function, traffic flow, uniform delay.

PDF Downloads 268
458 Organizational Data Security in Perspective of Ownership of Mobile Devices Used by Employees for Works

Authors: B. Ferdousi, J. Bari

Abstract:

With the advancement of mobile computing, employees increasingly do their job-related work using personally owned or organization-owned mobile devices. The Bring Your Own Device (BYOD) model allows employees to use their own mobile devices for job-related work, while the Corporate Owned, Personally Enabled (COPE) model allows both organizations and employees to install applications on organization-owned mobile devices used for job-related work. While there are many benefits to using mobile computing for job-related work, there are also serious concerns about various levels of threat to organizational data security. Consequently, it is crucial to know the level of threat to organizational data security in the BYOD and COPE models. It is also important to ensure that employees comply with the organizational data security policy. This paper discusses organizational data security issues from the perspective of the ownership of the mobile devices used by employees, especially in the BYOD and COPE models. It appears that while the BYOD model has many benefits, it carries relatively more data security risk than the COPE model. The findings also show that in both BYOD and COPE environments, a more practical approach to achieving secure mobile computing in an organizational setting is the development of comprehensive cybersecurity policies that balance employees' need for convenience with organizational data security. The study helps to gauge compliance and the risk of security breaches in the BYOD and COPE models.

Keywords: Data security, mobile computing, BYOD, COPE, cybersecurity policy, cybersecurity compliance.

PDF Downloads 267
457 Electroencephalography-Based Intention Recognition and Consensus Assessment during Emergency Response

Authors: Siyao Zhu, Yifang Xu

Abstract:

After natural and man-made disasters, robots can bypass danger, expedite search, and acquire unprecedented situational awareness for designing rescue plans. Brain-computer interfaces are a promising option for overcoming the limitations of tedious manual control and operation of robots in urgent search-and-rescue tasks. This study tests the feasibility of using electroencephalography (EEG) signals to decode human intentions and detect the level of consensus on robot-provided information. EEG signals were classified using machine-learning and deep-learning methods to discriminate search intentions and agreement perceptions. The results show average classification accuracies of 67% for intention recognition and 72% for consensus assessment, demonstrating the potential of incorporating recognizable users' bioelectrical responses into advanced robot-assisted systems for emergency response.

Keywords: Consensus assessment, electroencephalogram, EEG, emergency response, human-robot collaboration, intention recognition, search and rescue.

PDF Downloads 260
456 Designing a Model for Preparing Reports on the Automatic Earned Value Management Progress by the Integration of Primavera P6, SQL Database, and Power BI: A Case Study of a Six-Storey Concrete Building in Mashhad, Iran

Authors: Hamed Zolfaghari, Mojtaba Kord

Abstract:

Project planners and controllers frequently face the challenge of inadequate software for preparing automatic project progress reports based on actual project information updates. They usually build dashboards in Microsoft Excel, which are local and cannot be accessed online. Another shortcoming is that Microsoft Project does not store its data in a database, so data cannot automatically be imported from Microsoft Project into Microsoft Excel. This study proposes a model for preparing automatic online project progress reports based on actual project information updates by integrating Primavera P6, an SQL database, and Power BI (Business Intelligence) for a construction project. The designed model enables project planners and controllers to prepare project reports automatically and immediately after updating the project schedule with actual information. To develop the model, the data were entered into P6 and stored in the SQL database. The proposed model can prepare a wide range of reports, such as earned value management, Human Resource (HR), financial, physical, and risk reports, automatically in the Power BI application. Furthermore, the reports can be published and shared online.
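
The earned value management figures such a dashboard derives from the P6 data are standard; a worked example with assumed values:

```python
# Assumed project figures at the reporting date.
pv = 400_000   # planned value: budgeted cost of work scheduled
ev = 350_000   # earned value: budgeted cost of work performed
ac = 380_000   # actual cost incurred

spi = ev / pv            # schedule performance index = 0.875 (behind)
cpi = ev / ac            # cost performance index ~= 0.921 (over budget)
sv, cv = ev - pv, ev - ac  # schedule and cost variances
print(f"SPI={spi:.3f} CPI={cpi:.3f} SV={sv} CV={cv}")
```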

Keywords: Primavera P6, SQL, Power BI, Earned Value Management, Integration Management.

PDF Downloads 333
455 A Survey on Performance Tools for OpenMP

Authors: Mubarak S. Mohsen, Rosni Abdullah, Yong M. Teo

Abstract:

Advances in processor architecture, such as multicore designs, increase the complexity of parallel computer systems. With multi-core architectures, different parallel languages can be used to run parallel programs; one of these is OpenMP, which is embedded in C/C++ or FORTRAN. Because of this new architecture and its complexity, it is very important to evaluate the performance of OpenMP constructs, kernels, and application programs on multi-core systems. Performance analysis is the activity of collecting information about the execution characteristics of a program. Performance tools consist of at least three interfacing software layers: instrumentation, measurement, and analysis. The instrumentation layer defines the measured performance events. The measurement layer determines which performance events are actually captured and how they are measured by the tool. The analysis layer processes the performance data and summarizes it into a form that can be displayed by the tool. In this paper, a number of OpenMP performance tools are surveyed, explaining how each is used to collect, analyze, and display performance data.

Keywords: Parallel performance tools, OpenMP, multi-core.

PDF Downloads 1857
454 Unmanned Combat Aircraft Selection using Fuzzy Proximity Measure Method in Multiple Criteria Group Decision Making

Authors: C. Ardil

Abstract:

The decision to select an unmanned combat aircraft is complicated, since several options and conflicting criteria must be considered simultaneously. When making a multiple criteria decision, it is important to consider the selected evaluation criteria, including priceability, payloadability, stealthability, speedability, and survivability. The fundamental goal of the study is to select the best unmanned combat aircraft by taking these evaluation criteria into account. The optimal aircraft was chosen using the fuzzy proximity measure method, which enables decision-makers to express preferences as standard fuzzy set numbers during the multiple criteria decision-making process. To assess the applicability of the proposed approach, a numerical example is provided. Finally, by comparing the candidate unmanned combat aircraft, the proposed method produced a successful application, and the best aircraft was selected.

Keywords: standard fuzzy sets (SFS), unmanned combat aircraft selection, multiple criteria decision making (MCDM), multiple criteria group decision making (MCGDM), proximity measure method (PMM)

PDF Downloads 339
453 Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps

Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li

Abstract:

The widespread popularity of mobile devices and the development of artificial intelligence (AI) have led to the widespread adoption of deep learning (DL) in Android apps. Compared with traditional Android apps, deep learning based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods for detecting sensitive information leakage in Android apps (e.g., FlowDroid) cannot be used directly on DL-based apps because they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Additionally, we propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in Android static analysis. Using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities that DL models provide for such apps. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace outperformed FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting the security issues therein.

Keywords: Mobile computing, deep learning apps, sensitive information, static analysis.

PDF Downloads 490
452 Using Wiki for Enhancing the Knowledge Transfer to Newcomers: An Experience Report

Authors: H. O. Barbosa, A. C. R. da Silva, C. M. de Almeida, E. M. dos Santos, F. O. de Sousa, F. B. da S. Souza, F. B. da S. Souza, F. de O. Lima, L. H. Albuquerque, R. F. do Valle Cunha

Abstract:

Software development is an intrinsically human-based, knowledge-intensive activity. Due to globalization, software development has become a complex challenge, and we usually face barriers related to knowledge management, team building, and costly testing processes, especially in distributed settings. In this paper, we present the use of experimental studies to improve our knowledge management process using the Wiki system. According to the results, it was possible to identify the learning preferences of our software project leaders, organize and improve the learning experience of our Wiki, and facilitate collaboration by newcomers, who enriched the Wiki with new content.

Keywords: Mobile products, knowledge management process, Wiki system, Global Software Development.

PDF Downloads 565
451 Transfer Knowledge from Multiple Source Problems to a Target Problem in Genetic Algorithm

Authors: Tami Alghamdi, Terence Soule

Abstract:

To study how to transfer knowledge from multiple source problems to a target problem, we modeled the Transfer Learning (TL) process using Genetic Algorithms (GAs) as the model solver. TL is the process of transferring learned data from one problem to another, with the aim of helping Machine Learning (ML) algorithms find a solution. GAs give researchers access to the information we have about how the old problems were solved. In this paper, we have five different source problems and transfer their knowledge to the target problem. We studied different scenarios for the target problem. The results showed that combining knowledge from multiple source problems improves GA performance. Also, the process of combining knowledge from several problems promotes the diversity of the transferred population.
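
One simple way to realize the transfer (the paper's exact mechanism may differ): seed part of the GA population with elites saved from the source problems and fill the remainder randomly to preserve diversity.

```python
import random

def seed_population(source_elites, pop_size, genome_len):
    # Transferred individuals: best genomes saved from the source runs.
    population = [elite[:] for elite in source_elites]
    # Fill the rest with random genomes to keep the population diverse.
    while len(population) < pop_size:
        population.append([random.randint(0, 1) for _ in range(genome_len)])
    return population

elites = [[1, 0, 1, 1], [0, 1, 1, 0]]   # hypothetical source-problem elites
pop = seed_population(elites, pop_size=10, genome_len=4)
print(len(pop), pop[:3])
```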

Keywords: Transfer Learning, Multiple Sources, Knowledge Transfer, Domain Adaptation, Source, Target.

PDF Downloads 269
450 JaCoText: A Pretrained Model for Java Code-Text Generation

Authors: Jessica Lòpez Espejel, Mahaman Sanoussi Yahaya Alassan, Walid Dahhane, El Hassane Ettifouri

Abstract:

Pretrained transformer-based models have shown high performance in natural language generation tasks. However, a new wave of interest has surged: automatic programming language generation, i.e., translating natural language instructions into programming code. Although well-known pretrained language generation models have achieved good performance in learning programming languages, effort is still needed in automatic code generation. In this paper, we introduce JaCoText, a model based on Transformer neural networks that generates Java source code from natural language text. JaCoText leverages the advantages of both natural language and code generation models. More specifically, we study findings from the state of the art and use them to (1) initialize our model from powerful pretrained models, (2) explore additional pretraining on our Java dataset, (3) carry out experiments combining unimodal and bimodal data in training, and (4) scale the input and output lengths during fine-tuning. Experiments conducted on the CONCODE dataset show that JaCoText achieves new state-of-the-art results.
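
The training recipe resembles standard seq2seq fine-tuning; the sketch below uses a generic public checkpoint ("t5-small") as a stand-in, not the released JaCoText weights.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# One (natural language, Java) training pair; a real run iterates over
# a dataset such as CONCODE.
nl = "concatenate two strings and return the result"
java = "String concat(String a, String b) { return a + b; }"

batch = tok(nl, return_tensors="pt")
labels = tok(java, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss   # one fine-tuning step's loss
loss.backward()                             # then an optimizer step
```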

Keywords: Java code generation, Natural Language Processing, Sequence-to-sequence Models, Transformers Neural Networks.

PDF Downloads 729
449 Analyzing the Relationship between the Systems Decisions Process and Artificial Intelligence: A Machine Vision Case Study

Authors: Mitchell J. McHugh, John J. Case

Abstract:

Systems engineering is a holistic discipline that seeks to organize and optimize complex, interdisciplinary systems. With the growth of artificial intelligence, systems engineers must face the challenge of leveraging artificial intelligence systems to solve complex problems. This paper analyzes the integration of systems engineering and artificial intelligence and discusses how artificial intelligence systems embody the systems decision process (SDP). The SDP is a four-stage problem-solving framework that outlines how systems engineers can design and implement solutions using value-focused thinking. This paper argues that artificial intelligence models can replicate the SDP, thus validating its flexible, value-focused foundation. The authors demonstrate this by developing a machine vision mobile application that can classify weapons to augment the decision-making role of an Army subject matter expert. This practical application was an end-to-end design challenge that highlights how artificial intelligence systems embody systems engineering principles. The impact of this research demonstrates that the SDP is a dynamic tool that systems engineers should leverage when incorporating artificial intelligence within the systems that they develop.

Keywords: Computer vision, machine learning, mobile application, systems engineering, systems decision process.

PDF Downloads 1680