Search results for: generative adversarial networks
2063 Fault Diagnosis of Nonlinear Systems Using Dynamic Neural Networks
Authors: E. Sobhani-Tehrani, K. Khorasani, N. Meskin
Abstract:
This paper presents a novel integrated hybrid approach for fault diagnosis (FD) of nonlinear systems. Unlike most FD techniques, the proposed solution simultaneously accomplishes fault detection, isolation, and identification (FDII) within a unified diagnostic module. At the core of this solution is a bank of adaptive neural parameter estimators (NPEs) associated with a set of single-parameter fault models. The NPEs continuously estimate unknown fault parameters (FPs) that are indicators of faults in the system. Two NPE structures, series-parallel and parallel, are developed, each with its own set of desirable attributes. The parallel scheme is extremely robust to measurement noise and possesses a simpler, yet more solid, fault isolation logic. In contrast, the series-parallel scheme exhibits short FD delays and is robust to closed-loop system transients caused by changes in control commands. Finally, a fault-tolerant observer (FTO) is designed to extend the capability of the NPEs to systems with partial-state measurement.
Keywords: hybrid fault diagnosis, dynamic neural networks, nonlinear systems, fault tolerant observer
2062 Multichannel Scheme under Fairness Environment for Cognitive Radio Networks
Authors: Hans Marquez Ramos, Cesar Hernandez, Ingrid Páez
Abstract:
This paper develops a multiple-channel assignment model that exploits spectrum opportunities in cognitive radio networks in the most efficient way. The proposed scheme makes several assignments of available, frequency-adjacent channels, which together provide the wider bandwidth required, under a fairness constraint. The hybrid assignment model consists of two algorithms: one ranks and selects the available frequency channels, and the other enforces a fairness criterion so that spectrum opportunities are not restricted for the other secondary users who wish to transmit. Measurements were made of average bandwidth and average delay, as well as a fairness computation, for several channel assignments. The results were evaluated against experimental spectrum occupancy data captured in the GSM frequency band. The developed model shows evidence of improved use of spectrum opportunities and a wider average transmission bandwidth for each secondary user while maintaining fairness in channel assignment.
Keywords: bandwidth, fairness, multichannel, secondary users
2061 Wireless Transmission of Big Data Using Novel Secure Algorithm
Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha
Abstract:
This paper presents a novel algorithm for secure, reliable, and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay, and destination nodes. Big data must be transmitted from source to relay and from relay to destination, with security deployed at the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data transmission region, segmenting the selected region, determining the probability ratio for each node (capture node, non-capture node, and eavesdropper node) in every segment, and evaluating this probability using a binary-based evaluation. If the transmission is secure, the two-hop transmission of big data resumes; otherwise, the attackers are countered by the cooperative jamming scheme and the data is transmitted over the two hops.
Keywords: big data, two-hop transmission, physical layer wireless security, cooperative jamming, energy balance
2060 Effect of Monotonically Decreasing Parameters on Margin Softmax for Deep Face Recognition
Authors: Umair Rashid
Abstract:
Softmax loss is normally used as the supervision signal in face recognition (FR) systems, and it boosts the separability of features. In the last two years, a number of techniques have been proposed that reformulate the original softmax loss to enhance the discriminating power of Deep Convolutional Neural Networks (DCNNs) for FR systems. To learn angularly discriminative features, cosine-margin-based softmax has to be adjusted with a monotonically decreasing angular function, which is the main challenge for angular-based softmax. To address this issue, we propose a monotonically decreasing element for cosine-margin-based softmax, and we also discuss the effect of different monotonically decreasing parameters on angular margin softmax for FR systems. We train the model on the publicly available dataset CASIA-WebFace using our proposed monotonically decreasing parameters for the cosine function, and tests on YouTube Faces (YTF), Labeled Faces in the Wild (LFW), VGGFace1, and VGGFace2 attain state-of-the-art performance.
Keywords: deep convolutional neural networks, cosine margin face recognition, softmax loss, monotonically decreasing parameter
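A minimal sketch of the cosine-margin softmax family discussed above is given below. The margin function (a fixed subtractive margin m on the target-class cosine), the scale s, and all dimensions are illustrative assumptions, not the paper's actual monotonically decreasing formulation.

```python
import numpy as np

def margin_softmax_loss(features, weights, labels, s=30.0, m=0.35):
    """Illustrative cosine-margin softmax (hypothetical parameters).

    features: (N, d) embeddings, weights: (d, C) class weights,
    labels: (N,) integer class ids. The margin m is subtracted from the
    target-class cosine, a common monotonically decreasing modification
    of the plain softmax logit cos(theta)."""
    # L2-normalise embeddings and class weights so logits become cosines
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w                                    # (N, C) cosine similarities
    target = cos[np.arange(len(labels)), labels]   # cos(theta_y) per sample
    logits = s * cos
    logits[np.arange(len(labels)), labels] = s * (target - m)   # apply margin
    # Cross-entropy over the margin-adjusted, scaled logits
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()

# Toy usage with random data
rng = np.random.default_rng(0)
loss = margin_softmax_loss(rng.normal(size=(8, 16)), rng.normal(size=(16, 5)),
                           rng.integers(0, 5, size=8))
print(loss)
```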
2059 Review on Implementation of Artificial Intelligence and Machine Learning for Controlling Traffic and Avoiding Accidents
Authors: Neha Singh, Shristi Singh
Abstract:
Accidents involving motor vehicles are more likely to cause serious injuries and fatalities. They also bring a host of other persistent issues, such as the regular loss of life and goods. To address these issues, appropriate measures must be implemented, such as establishing an autonomous incident detection system that makes use of machine learning and artificial intelligence. With the aim of reducing traffic accidents, this article gives an overview of artificial intelligence and machine learning in autonomous incident detection systems. The paper explores the major issues, prospective solutions, and uses of artificial intelligence and machine learning in road transportation systems for minimising traffic accidents. Additional, recent, and emerging approaches that reduce the frequency of accidents in the transportation industry are also discussed at length. The study is structured around the following subtopics: traffic management using machine learning and artificial intelligence, and incident detection with these two technologies. The Internet of Vehicles and vehicular ad hoc networks, the use of wireless communication technologies such as 5G networks, and the use of machine learning and artificial intelligence for planning road transportation systems are elaborated. In addition, safety is the primary concern of road transportation. According to the review's key conclusions, route optimization, cargo volume forecasting, predictive fleet maintenance, real-time vehicle tracking, and traffic management are essential for ensuring the safety of road transportation networks. In addition to highlighting research trends, open problems, and key research conclusions, the study also discusses the difficulties in applying artificial intelligence to road transport systems. The work can serve as a resource for planning and managing road transportation systems.
Keywords: artificial intelligence, machine learning, incident detector, road transport systems, traffic management, automatic incident detection, deep learning
2058 Estimating Occupancy in Residential Context Using Bayesian Networks for Energy Management
Authors: Manar Amayri, Hussain Kazimi, Quoc-Dung Ngo, Stephane Ploix
Abstract:
A general approach is proposed to determine occupant behaviour (occupancy and activity) in residential buildings and to use these estimates for improved energy management. Occupant behaviour is modelled with a Bayesian network in an unsupervised manner. The algorithm makes use of domain knowledge gathered via questionnaires and of recorded sensor data for motion detection, power, and hot water consumption, as well as indoor CO₂ concentration. Two case studies are presented that show the real-world applicability of estimating occupant behaviour in this way. Furthermore, experiments integrating occupancy estimation and hot water production control show that energy efficiency can be increased by roughly 5% over known optimal control techniques and by more than 25% over rule-based control while maintaining the same occupant comfort standards. The efficiency gains are strongly correlated with occupant behaviour and with the accuracy of the occupancy estimates.
Keywords: energy, management, control, optimization, Bayesian methods, learning theory, sensor networks, knowledge modelling and knowledge based systems, artificial intelligence, buildings
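A minimal sketch of Bayesian-network occupancy inference from two sensors, assuming the pgmpy library. The network structure and all conditional probability values here are made up for illustration; in the work described above they would come from questionnaires and recorded sensor data.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical structure: occupancy influences what the motion and power
# sensors report. CPD numbers are illustrative assumptions only.
model = BayesianNetwork([("Occupancy", "Motion"), ("Occupancy", "Power")])
model.add_cpds(
    TabularCPD("Occupancy", 2, [[0.7], [0.3]]),                 # P(absent) = 0.7
    TabularCPD("Motion", 2, [[0.9, 0.2], [0.1, 0.8]],
               evidence=["Occupancy"], evidence_card=[2]),
    TabularCPD("Power", 2, [[0.8, 0.3], [0.2, 0.7]],
               evidence=["Occupancy"], evidence_card=[2]),
)
model.check_model()

# Posterior probability of occupancy given that both sensors fired
posterior = VariableElimination(model).query(["Occupancy"],
                                             evidence={"Motion": 1, "Power": 1})
print(posterior)
```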
2057 Enriched Education: The Classroom as a Learning Network through Video Game Narrative Development
Authors: Wayne DeFehr
Abstract:
This study is rooted in a pedagogical approach that emphasizes student engagement as fundamental to meaningful learning in the classroom. This approach creates a paradigmatic shift, from a teaching practice that reinforces the teacher’s central authority to a practice that disperses that authority among the students in the classroom through networks that they themselves develop. The methodology of this study about creating optimal conditions for learning in the classroom includes providing a conceptual framework within which the students work, as well as providing clearly stated expectations for work standards, content quality, group methodology, and learning outcomes. These learning conditions are nurtured in a variety of ways. First, nearly every class includes a lecture from the professor with key concepts that students need in order to complete their work successfully. Secondly, students build on this scholarly material by forming their own networks, where students face each other and engage with each other in order to collaborate their way to solving a particular problem relating to the course content. Thirdly, students are given short, medium, and long-term goals. Short term goals relate to the week’s topic and involve workshopping particular issues relating to that stage of the course. The medium-term goals involve students submitting term assignments that are evaluated according to a well-defined rubric. And finally, long-term goals are achieved by creating a capstone project, which is celebrated and shared with classmates and interested friends on the final day of the course. The essential conclusions of the study are drawn from courses that focus on video game narrative. Enthusiastic student engagement is created not only with the dynamic energy and expertise of the instructor, but also with the inter-dependence of the students on each other to build knowledge, acquire skills, and achieve successful results.Keywords: collaboration, education, learning networks, video games
2056 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge
Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi
Abstract:
Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and adequate intervention will most probably reduce future maintenance costs, minimize downtime, and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities at an early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructures. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial neural networks are trained, and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is still in operation despite experiencing obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts such as clustering analysis and statistical hypothesis testing. These make it possible to interpret the obtained prediction errors, draw conclusions about the state of the structure, and thus support decision making regarding its maintenance.
Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring
2055 Keynote Talk: The Role of Internet of Things in the Smart Cities Power System
Authors: Abdul-Rahman Al-Ali
Abstract:
As the number of mobile devices grows exponentially, it is estimated that about 50 billion devices will be connected to the Internet by the year 2020; by the end of this decade, an average of eight connected devices per person worldwide is expected. These 50 billion devices are not only mobile phones and data-browsing gadgets, but also machine-to-machine and man-to-machine devices. With such growing numbers of devices, the Internet of Things (IoT) concept has recently become one of the emerging technologies. Within smart grid technologies, smart home appliances, Intelligent Electronic Devices (IED), and Distributed Energy Resources (DER) are major IoT objects that can be addressed using IPv6. These objects are called the smart grid Internet of Things (SG-IoT). The SG-IoT generates big data that requires high-speed computing infrastructure, widespread computer networks, big data storage, software, and platform services. A utility company's control and data centers cannot handle such a large number of devices, high-speed processing, and massive data storage. Building a large data center infrastructure takes a long time; it also requires widespread communication networks and huge capital investment. Maintaining and upgrading control and data center infrastructure and communication networks, as well as updating and renewing software licenses, collectively requires additional cost. This can be overcome by utilizing emerging computing paradigms such as cloud computing, which can act as a smart grid enabler to replace legacy utility data centers. The talk will highlight the role of IoT, cloud computing services, and their development models within smart grid technologies.
Keywords: intelligent electronic devices (IED), distributed energy resources (DER), internet, smart home appliances
2054 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks
Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar
Abstract:
A DNA barcode is a short mitochondrial DNA fragment made up of nucleotides, each consisting of a phosphate group, a sugar, and a nucleic base (A, T, C, or G). DNA barcodes provide a good source of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes, a task that has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using Multiple Sequence Alignment, which is known to be NP-complete; to make this type of analysis feasible, heuristics such as progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. The proposed method avoids the complex problem of form and structure in different classes of organisms, and its classification performance on empirical data is compared with other methods. Our system consists of three phases. The first, called transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, Fourier transform, and power spectrum signal processing. The second, called approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of DNA barcodes, which is realized by applying a hierarchical classification algorithm.
Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)
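A minimal sketch of the transformation phase described above (EIIP codification followed by a Fourier power spectrum). The EIIP values used are the commonly cited ones and are treated here as assumed constants; the approximation (wavelet network) and hierarchical classification phases are omitted.

```python
import numpy as np

# Commonly cited EIIP (electron-ion interaction pseudopotential) values per base,
# treated as assumed constants for this sketch.
EIIP = {'A': 0.1260, 'C': 0.1340, 'G': 0.0806, 'T': 0.1335}

def power_spectrum(barcode: str) -> np.ndarray:
    """Map a DNA barcode string to a numerical signal and return its
    Fourier power spectrum, as in the 'transformation' phase above."""
    signal = np.array([EIIP[b] for b in barcode.upper() if b in EIIP])
    signal = signal - signal.mean()            # remove the DC component
    spectrum = np.fft.rfft(signal)
    return np.abs(spectrum) ** 2               # power spectrum features

features = power_spectrum("ACGTTGCAGTACGATCGATTACG")
print(features[:5])
```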
2053 Privacy Preservation Concerns and Information Disclosure on Social Networks: An Ongoing Research
Authors: Aria Teimourzadeh, Marc Favier, Samaneh Kakavand
Abstract:
The emergence of social networks has revolutionized the exchange of information. Every behavior on these platforms contributes to the generation of data, known as social network data, that is processed, stored, and published by the social network service providers. Hence, it is vital to investigate the role of these platforms in handling user data with respect to privacy measures, especially as a growing number of individuals and organizations engage with these virtual platforms without being aware that data related to their position, connections, and behavior is uncovered and used by third parties. Performing analytics on social network datasets may result in the disclosure of confidential information about the individuals or organizations that are members of these virtual environments. Analyzing separate datasets can reveal private information about relationships, interests, and more, especially when the datasets are analyzed jointly; intentional breaches of privacy are the result of such analysis. Addressing these privacy concerns requires an understanding of the nature of the data being accumulated and of the relevant data privacy regulations, as well as of the motivations for disclosing personal information on social network platforms. This paper highlights some significant points about how users' online information is shaped by social factors and about the extent to which users are concerned about the future use of their personal information by organizations. Firstly, this research presents a short literature review on the structure of a network and the concept of privacy in online social networks. Secondly, the factors of user behavior related to privacy protection and self-disclosure on these virtual communities are presented; in other words, we seek to demonstrate the impact of the identified variables on user information disclosure, which could be taken into account to explain the privacy preservation of individuals on social networking platforms. Thirdly, a few research directions are discussed to guide new researchers on this topic.
Keywords: information disclosure, privacy measures, privacy preservation, social network analysis, user experience
2052 Artificial Intelligence for Traffic Signal Control and Data Collection
Authors: Reggie Chandra
Abstract:
Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized, and the reason is insufficient resources to create and implement timing plans. In this work, we discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and to collect accurate traffic data 24/7/365 using a vehicle detection system. We discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and what the best workflow for this is. In addition, this paper showcases how Artificial Intelligence makes signal timing affordable. We introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans, and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain and consists of millions of densely connected processing nodes. It is a form of machine learning in which the neural net learns to recognize vehicles through training, which is called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but, in cases such as classifying objects into fine-grained categories, also outperform humans. Safety is of primary importance to traffic professionals, but they often lack the studies or data to support their decisions; currently, one-third of transportation agencies do not collect pedestrian and bike data. We discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in this research include a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and compared with the performance of commonly used methods that require collecting data by counting, evaluating, and adapting it, running it through well-established algorithms, and then deploying it to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal
2051 Normalized P-Laplacian: From Stochastic Game to Image Processing
Authors: Abderrahim Elmoataz
Abstract:
More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs, where each vertex represents measured data and each edge represents a relationship (connectivity, or a certain affinity or interaction) between two vertices. Processing and analyzing these types of data is a major challenge for both the image processing and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools which were initially developed on usual Euclidean spaces and proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years, there has been increasing interest in one of the major mathematical tools for signal and image analysis, namely Partial Differential Equations (PDEs) and variational methods on graphs. The normalized p-Laplacian operator has recently been introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest in this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator, and the traditional Laplacian operators, which have been extensively used to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of p-harmonious functions, introduced in earlier work as discrete approximations of both the infinity-Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems
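A minimal sketch of the p-harmonious averaging that underlies this class of operators: the value at a vertex is a convex combination of the min/max of its neighbours (tug-of-war part) and their mean (noise part). The parameter alpha below and its relation to p are assumptions about one common normalization, not the paper's exact definition.

```python
import numpy as np

def p_harmonious(adjacency, values, boundary_mask, alpha=0.5, iters=500):
    """Illustrative fixed-point iteration for a p-harmonious function on a graph.

    Update: u(x) <- alpha/2 * (max_neigh + min_neigh) + (1 - alpha) * mean_neigh.
    alpha = 1 gives an infinity-harmonic style update, alpha = 0 the usual
    graph-Laplacian (mean) update; the precise mapping from p to alpha depends
    on the chosen normalization and is omitted here.
    adjacency: list of neighbour-index lists; values: initial node values;
    boundary_mask: True where values are fixed (e.g., known pixels)."""
    u = np.array(values, dtype=float)
    for _ in range(iters):
        new_u = u.copy()
        for x, neigh in enumerate(adjacency):
            if boundary_mask[x] or not neigh:
                continue
            nv = u[neigh]
            new_u[x] = alpha / 2 * (nv.max() + nv.min()) + (1 - alpha) * nv.mean()
        u = new_u
    return u

# Tiny path graph with fixed end values (a 1-D "inpainting" toy example)
adj = [[1], [0, 2], [1, 3], [2]]
print(p_harmonious(adj, [0.0, 0.0, 0.0, 1.0], [True, False, False, True]))
```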
2050 Static Priority Approach to Under-Frequency Based Load Shedding Scheme in Islanded Industrial Networks: Using the Case Study of Fatima Fertilizer Company Ltd - FFL
Authors: S. H. Kazmi, T. Ahmed, K. Javed, A. Ghani
Abstract:
In this paper, a static scheme of under-frequency-based load shedding is considered for chemical and petrochemical industries with islanded distribution networks that rely heavily on the primary commodity, in order to ensure minimum production loss, plant downtime, or critical equipment shutdown. A simple methodology is proposed for in-house implementation of this scheme using under-frequency relays, and a step-by-step guide is provided, including techniques to calculate maximum percentage overloads, frequency decay rates, the time-based frequency response, and the frequency-based time response of the system. The case study of the FFL electrical system is utilized, presenting the actual system parameters and the employed load shedding settings following the same series of steps. The arbitrary settings are then verified for the worst overload condition (loss of a generation source in this case), and the comprehensive system response is investigated.
Keywords: islanding, under-frequency load shedding, frequency rate of change, static UFLS
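The frequency decay rate used when grading under-frequency relay stages is typically estimated from the swing equation. The sketch below uses that textbook relation with made-up numbers, not FFL's actual system parameters.

```python
def frequency_decay_rate(p_deficit_pu: float, inertia_h: float, f_nom: float = 50.0) -> float:
    """Initial frequency decay rate (Hz/s) from the swing equation:
    df/dt = -deltaP * f_nom / (2 * H), with deltaP the power deficit in per
    unit on the system base and H the equivalent inertia constant in seconds."""
    return -p_deficit_pu * f_nom / (2.0 * inertia_h)

# Hypothetical islanded plant: 20 % generation deficit, H = 4 s, 50 Hz system
rate = frequency_decay_rate(p_deficit_pu=0.20, inertia_h=4.0)
print(f"Initial decay rate: {rate:.3f} Hz/s")   # -1.250 Hz/s
```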
2049 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques
Authors: Stefan K. Behfar
Abstract:
The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. 
Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing
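A minimal sketch of the graph-based representation and probabilistic sampling steps described above, assuming the networkx library. The field names, uniform sampling rule, and toy data are illustrative stand-ins for blockchain records; the paper's actual representation and sampling scheme are more elaborate.

```python
import random
import networkx as nx

def build_transaction_graph(transactions):
    """Represent transactions as a directed multigraph: addresses are nodes,
    each transaction an edge carrying value and block number (a simplified
    stand-in for the paper's block/transaction graph)."""
    g = nx.MultiDiGraph()
    for tx in transactions:
        g.add_edge(tx["from"], tx["to"], value=tx["value"], block=tx["block"])
    return g

def sample_transactions(transactions, fraction=0.1, seed=42):
    """Uniform probabilistic sampling of a representative subset; illustrates
    the idea of analysing a subset rather than the full chain."""
    rng = random.Random(seed)
    k = max(1, int(len(transactions) * fraction))
    return rng.sample(transactions, k)

# Toy records standing in for Ethereum transactions
txs = [{"from": f"0x{i:02x}", "to": f"0x{(i * 7) % 16:02x}", "value": i, "block": i // 4}
       for i in range(64)]
g = build_transaction_graph(sample_transactions(txs))
print(g.number_of_nodes(), g.number_of_edges())
```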
2048 Green Wave Control Strategy for Optimal Energy Consumption by Model Predictive Control in Electric Vehicles
Authors: Furkan Ozkan, M. Selcuk Arslan, Hatice Mercan
Abstract:
Electric vehicles (EVs) are becoming increasingly popular as a sustainable alternative to traditional combustion-engine vehicles. However, to fully realize the potential of EVs in reducing environmental impact and energy consumption, efficient control strategies are essential. This study explores the application of green wave control using model predictive control (MPC) for electric vehicles, coupled with energy consumption modeling using neural networks. The use of MPC allows for real-time optimization of the vehicle's energy consumption while considering dynamic traffic conditions. By leveraging neural networks for energy consumption modeling, the EV's performance can be further enhanced through accurate predictions and adaptive control. The integration of these advanced control and modeling techniques aims to maximize energy efficiency and range while navigating urban traffic scenarios. The findings of this research offer valuable insights into the potential of green wave control for electric vehicles and demonstrate the significance of integrating MPC and neural network modeling for optimizing energy consumption. This work contributes to the advancement of sustainable transportation systems and the widespread adoption of electric vehicles. To evaluate the effectiveness of the green wave control strategy in real-world urban environments, extensive simulations were conducted using a high-fidelity vehicle model and realistic traffic scenarios. The results indicate that the integration of model predictive control and neural-network-based energy consumption modeling had a significant impact on the energy efficiency and range of electric vehicles. Through the use of MPC, the electric vehicle was able to adapt its speed and acceleration profile in real time to optimize energy consumption while maintaining travel time objectives. The neural-network-based energy consumption model provided accurate predictions, enabling the vehicle to anticipate and respond to variations in traffic flow, further enhancing energy efficiency and range. Furthermore, the study revealed that the green wave control strategy not only reduced energy consumption but also improved the overall driving experience by minimizing abrupt acceleration and deceleration, leading to a smoother and more comfortable ride for passengers. These results demonstrate the potential of green wave control to revolutionize urban transportation by enhancing the performance of electric vehicles and contributing to a more sustainable and efficient mobility ecosystem.
Keywords: electric vehicles, energy efficiency, green wave control, model predictive control, neural networks
2047 Finding the Optimal Meeting Point Based on Travel Plans in Road Networks
Authors: Mohammad H. Ahmadi, Vahid Haghighatdoost
Abstract:
Given a set of source locations for a group of friends, a trip plan for each group member expressed as a sequence of Categories of Interest (COIs) (e.g., restaurant), and a specific COI as a common destination where all group members will gather, the goal in Meeting Point Based on Trip Plans (MPTP) queries is to find a Point of Interest (POI) from the different COIs such that the aggregate travel distance for the group is minimized. In this work, we consider two cases for the aggregate function: Sum and Max. For solving this query, we propose an efficient pruning technique for shrinking the search space. Our approach consists of three steps. In the first step, it prunes the search space around the source locations. In the second step, it prunes the search space around the centroid of the source locations. Finally, we compute the intersection of all pruned areas as the final refined search space. We prove that POIs beyond the refined area cannot be part of the optimal answer set. The paper also covers an extensive performance study of the proposed technique.
Keywords: meeting point, trip plans, road networks, spatial databases
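A minimal brute-force baseline for the query defined above: score every candidate POI by the Sum or Max aggregate distance and take the minimum. The distance function here is Euclidean as a stand-in; on a road network it would be the shortest-path distance along each member's trip plan, which is exactly where the paper's pruning steps (omitted here) pay off.

```python
def best_meeting_poi(sources, candidate_pois, dist, aggregate="sum"):
    """Pick the candidate POI minimising the aggregate (sum or max) travel
    distance from all source locations; a brute-force stand-in for the
    pruned MPTP search."""
    agg = sum if aggregate == "sum" else max
    return min(candidate_pois, key=lambda poi: agg(dist(s, poi) for s in sources))

def euclid(a, b):
    # Illustrative distance; a road-network implementation would use
    # shortest-path distances instead.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

sources = [(0, 0), (4, 0), (2, 3)]
pois = [(1, 1), (2, 1), (3, 2)]
print(best_meeting_poi(sources, pois, euclid, aggregate="sum"))
print(best_meeting_poi(sources, pois, euclid, aggregate="max"))
```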
2046 Using the Weakest Precondition to Achieve Self-Stabilization in Critical Networks
Authors: Antonio Pizzarello, Oris Friesen
Abstract:
Networks, such as the electric power grid, must demonstrate exemplary performance and integrity. Integrity depends on the quality of both the system design model and the deployed software, and the integrity of the deployed software is key, for both the original version and the many versions produced throughout numerous maintenance activities. Current software engineering technology and practice do not produce adequate integrity. Distributed systems utilize networks in which each node is an independent computer system. The connections between them are realized via a network that is normally redundantly connected to guarantee the presence of a path between two nodes in case of failure of some branch. Furthermore, at each node there is software which may fail. Self-stabilizing protocols are usually present that recognize failures in the network and perform a repair action that brings the node back to a correct state. These protocols, first introduced by E. W. Dijkstra, are currently present in almost all Ethernets. Superstabilizing protocols, capable of reacting to a change in the network topology due to the removal or addition of a branch, are less common but are theoretically defined and available. This paper describes how to use the Software Integrity Assessment (SIA) methodology to analyze self-stabilizing software. SIA is based on the UNITY formalism for parallel and distributed programming, which allows the analysis of code to verify the progress property "p leads-to q", describing the progress of all computations starting in a state satisfying p to a state satisfying q via the execution of one or more system modules. As opposed to demonstrably inadequate test-and-evaluation methods, SIA allows the analysis and verification of any network self-stabilizing software, as well as any other software designed to recover from failure without external intervention by maintenance personnel. The model to be analyzed is obtained by automatic translation of the system code to a transition system that is based on the use of the weakest precondition.
Keywords: network, power grid, self-stabilization, software integrity assessment, UNITY, weakest precondition
2045 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
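As an illustration of the Random Forest modelling described above, the sketch below fits a regressor with scikit-learn and reads off feature importances. The synthetic dataset, feature meanings, and metric are hypothetical stand-ins, not the project's actual records or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for project records; feature semantics are assumptions
# (e.g., scope changes, material delivery delay, project size, crew size).
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = 0.3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.2, size=300)   # % cost overrun

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
print("Feature importances:", model.feature_importances_.round(3))   # key cost drivers
```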
2044 Deep-Learning to Generation of Weights for Image Captioning Using Part-of-Speech Approach
Authors: Tiago do Carmo Nogueira, Cássio Dener Noronha Vinhal, Gélson da Cruz Júnior, Matheus Rudolfo Diedrich Ullmann
Abstract:
Generating automatic image descriptions in natural language is a challenging task. Image captioning is the task of coherently describing an image by combining computer vision and natural language processing techniques. To accomplish this task, cutting-edge models use encoder-decoder structures: Convolutional Neural Networks (CNNs) are used to extract the characteristics of the images, and Recurrent Neural Networks (RNNs) generate the descriptive sentences for the images. However, cutting-edge approaches still suffer from generating incorrect captions and from accumulating errors in the decoders. To address this problem, we propose a model based on the encoder-decoder structure that introduces a module generating the weights according to the importance of each word in forming the sentence, using part-of-speech (PoS) information. The results demonstrate that our model surpasses state-of-the-art models.
Keywords: gated recurrent units, caption generation, convolutional neural network, part-of-speech
2043 Global Mittag-Leffler Stability of Fractional-Order Bidirectional Associative Memory Neural Network with Discrete and Distributed Transmission Delays
Authors: Swati Tyagi, Syed Abbas
Abstract:
Fractional-order Hopfield neural networks are generally used to model information processing among interacting neurons. To show the constancy of the processed information, it is required to analyze the stability of these systems. In this work, we establish Mittag-Leffler stability for Caputo fractional-order bidirectional associative memory (BAM) neural networks with various time delays. We derive sufficient conditions to ensure the existence and uniqueness of the equilibrium point by using topological degree theory. By applying the fractional Lyapunov method and Mittag-Leffler functions, we derive sufficient conditions for global Mittag-Leffler stability, which further implies the global asymptotic stability of the network equilibrium. Finally, we present two suitable examples to show the effectiveness of the obtained results.
Keywords: bidirectional associative memory neural network, existence and uniqueness, fractional-order, Lyapunov function, Mittag-Leffler stability
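For reference, the one-parameter Mittag-Leffler function and the usual form of a Mittag-Leffler stability bound are recalled below; the exact constants and the form of the bound used in the paper may differ.

```latex
% One-parameter Mittag-Leffler function
E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)}, \qquad \alpha > 0.

% Typical Mittag-Leffler stability estimate for a Caputo system of order \alpha:
\lVert x(t) \rVert \le \Bigl\{ m\bigl(x(t_{0})\bigr)\, E_{\alpha}\!\bigl(-\lambda (t - t_{0})^{\alpha}\bigr) \Bigr\}^{b},
\qquad \lambda > 0,\; b > 0,
```

where m(·) is nonnegative, locally Lipschitz, and vanishes only at the origin; Mittag-Leffler stability of the equilibrium in this sense implies its asymptotic stability.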
2042 Synchronization of Two Mobile Robots
Authors: R. M. López-Gutiérrez, J. A. Michel-Macarty, H. Cervantes-De Avila, J. I. Nieto-Hipólito, C. Cruz-Hernández, L. Cardoza-Avendaño, S. Cortiant-Velez
Abstract:
It is well known that mankind benefits from the application of robot control by virtual handlers in industrial environments. In recent years, great interest has emerged in the control of multiple robots in order to carry out collective tasks. One main trend is to copy the natural organization that some organisms exhibit, such as ants, bees, schools of fish, and bird migration. Such collaborative work produces better outcomes than those obtained through an isolated or individual effort. This topic attracts great interest because collaboration between several robots offers the potential capability of carrying out more complicated tasks with better efficiency, resiliency, and fault tolerance, in cases such as coordinated navigation towards a target, terrain exploration, and search-and-rescue operations. In this work, synchronization of multiple autonomous mobile robots is shown over a variety of coupling topologies: star, ring, chain, and global. In all cases, collective synchronous behavior is achieved in the complex networks formed by the mobile robots. The nodes of these networks are modelled as masses and simulated in Matlab.
Keywords: robots, synchronization, bidirectional, coordinate navigation
2041 Ultra Reliable Communication: Availability Analysis in 5G Cellular Networks
Authors: Yosra Benchaabene, Noureddine Boujnah, Faouzi Zarai
Abstract:
To meet the growing demand of users, the fifth generation (5G) will continue to provide services at higher data rates, with higher carrier frequencies and wider bandwidths. As part of the 5G communication paradigm, Ultra Reliable Communication (URC) is envisaged as an important technology pillar for providing anywhere and anytime services to end users, which is why it has become an active research topic. In this work, we analyze the availability of a service in the space domain. We characterize spatially available areas, consisting of all locations that meet a performance requirement with confidence, and we define cell availability, system availability, individual user availability, and user-oriented system availability. A Poisson point process (PPP) and Voronoi tessellation are adopted to model the spatial characteristics of cell deployment in heterogeneous networks. Numerical results are presented, highlighting the effect of different system parameters on the achievable link availability.
Keywords: URC, dependability and availability, space domain analysis, Poisson point process, Voronoi Tessellation
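A minimal simulation sketch of the spatial model described above: base stations and users are drawn from homogeneous Poisson point processes, each user attaches to its nearest base station (i.e., the Voronoi cell it falls in), and a crude availability proxy is computed. The intensities, coverage-radius criterion, and area are illustrative assumptions, not the paper's performance requirement.

```python
import numpy as np

def sample_ppp(intensity, width, height, rng):
    """Sample a homogeneous Poisson point process with the given intensity
    (points per unit area) on a width x height rectangle."""
    n = rng.poisson(intensity * width * height)
    return np.column_stack((rng.uniform(0, width, n), rng.uniform(0, height, n)))

def nearest_bs_distance(users, base_stations):
    """Distance from each user to its serving (nearest) base station,
    i.e., the generator of the Voronoi cell the user falls in."""
    d = np.linalg.norm(users[:, None, :] - base_stations[None, :, :], axis=2)
    return d.min(axis=1)

rng = np.random.default_rng(7)
bs = sample_ppp(intensity=2.0, width=5.0, height=5.0, rng=rng)    # base stations
ue = sample_ppp(intensity=50.0, width=5.0, height=5.0, rng=rng)   # users
coverage_radius = 0.5                                             # assumed criterion
print("Spatial availability:", (nearest_bs_distance(ue, bs) <= coverage_radius).mean())
```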
2040 An Attentional Bi-Stream Sequence Learner (AttBiSeL) for Credit Card Fraud Detection
Authors: Amir Shahab Shahabi, Mohsen Hasirian
Abstract:
Modern societies, marked by expansive Internet connectivity and the rise of e-commerce, are now integrated with digital platforms at an unprecedented level. The efficiency, speed, and accessibility of e-commerce have garnered a substantial consumer base. Against this backdrop, electronic banking has undergone rapid proliferation within the realm of online activities. However, this growth has inadvertently given rise to an environment conducive to illicit activities, notably electronic payment fraud, posing a formidable challenge to the domain of electronic banking. A pivotal role in upholding the integrity of electronic commerce and business transactions is played by electronic fraud detection, particularly in the context of credit cards which underscores the imperative of comprehensive research in this field. To this end, our study introduces an Attentional Bi-Stream Sequence Learner (AttBiSeL) framework that leverages attention mechanisms and recurrent networks. By incorporating bidirectional recurrent layers, specifically bidirectional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers, the proposed model adeptly extracts past and future transaction sequences while accounting for the temporal flow of information in both directions. Moreover, the integration of an attention mechanism accentuates specific transactions to varying degrees, as manifested in the output of the recurrent networks. The effectiveness of the proposed approach in automatic credit card fraud classification is evaluated on the European Cardholders' Fraud Dataset. Empirical results validate that the hybrid architectural paradigm presented in this study yields enhanced accuracy compared to previous studies.Keywords: credit card fraud, deep learning, attention mechanism, recurrent neural networks
2039 Harnessing Artificial Intelligence and Machine Learning for Advanced Fraud Detection and Prevention
Authors: Avinash Malladhi
Abstract:
Forensic accounting is a specialized field that involves the application of accounting principles, investigative skills, and legal knowledge to detect and prevent fraud. With the rise of big data and technological advancements, artificial intelligence (AI) and machine learning (ML) algorithms have emerged as powerful tools for forensic accountants to enhance their fraud detection capabilities. In this paper, we review and analyze various AI/ML algorithms that are commonly used in forensic accounting, including supervised and unsupervised learning, deep learning, natural language processing, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Support Vector Machines (SVMs), Decision Trees, and Random Forests. We discuss their underlying principles, strengths, and limitations and provide empirical evidence from existing research studies demonstrating their effectiveness in detecting financial fraud. We also highlight potential ethical considerations and challenges associated with using AI/ML in forensic accounting. Furthermore, we highlight the benefits of these technologies in improving fraud detection and prevention in forensic accounting.
Keywords: AI, machine learning, forensic accounting & fraud detection, anti money laundering, Benford's law, fraud triangle theory
2038 Monitoring Cellular Networks Performance Using Crowd Sourced IoT System: My Operator Coverage (MOC)
Authors: Bassem Boshra Thabet, Mohammed Ibrahim Elsabagh, Mohammad Adly Talaat
Abstract:
The number of cellular mobile phone users has increased enormously worldwide over the last two decades. Consequently, monitoring the performance of the Mobile Network Operators (MNOs) in terms of network coverage and broadband signal strength has become vital for both the MNOs and regulators. This monitoring helps telecommunications operators and regulators keep the market fair and most beneficial for users. However, the methodologies adopted to facilitate this continuous monitoring process are still problematic in terms of cost, effort, and reliability. This paper introduces the My Operator Coverage (MOC) system, which uses Internet of Things (IoT) concepts and tools to monitor MNO performance using a crowd-sourced, real-time methodology. MOC produces robust and reliable geographical maps of the user-perceived quality of MNO performance. MOC is also meant to provide telecommunications regulators with concrete, up-to-date information that allows for adequate mobile market management strategies as well as appropriate decision making.
Keywords: mobile performance monitoring, crowd-sourced applications, mobile broadband performance, cellular networks monitoring
2037 Deep Learning Based Unsupervised Sport Scene Recognition and Highlights Generation
Authors: Ksenia Meshkova
Abstract:
With the increasing amount of multimedia data, it is very important to automate and speed up the process of obtaining metadata. This process involves not just recognition of an object or its movement, but recognition of the entire scene, as opposed to separate frames, with timeline segmentation as a final result. Labeling datasets is time consuming, and attributing characteristics to particular scenes is clearly difficult due to their nature. In this article, we consider the application of autoencoders to unsupervised scene recognition and clustering based on interpretable features. Further, we focus on the particular types of autoencoders that are relevant to our study. We take a look at the specifics of deep learning related to information theory and rate-distortion theory and describe solutions that address the poor interpretability of deep learning in media content processing. In conclusion, we present the results of a custom framework, based on autoencoders, that is capable of scene recognition as studied above, with highlights generation resulting from this recognition. We do not describe the mathematical workings of neural networks in detail but clarify the necessary concepts and pay attention to important nuances.
Keywords: neural networks, computer vision, representation learning, autoencoders
2036 A Study on Vulnerability of Alahsa Governorate to Generate Urban Heat Islands
Authors: Ilham S. M. Elsayed
Abstract:
The purpose of this study is to investigate the status of Alahsa Governorate and its vulnerability to generating urban heat islands. Alahsa Governorate is a famous oasis in the Arabian Peninsula that includes several oil centers. An extensive literature review was conducted to collect previous data relating to the urban heat island of Alahsa Governorate. Data used for the purpose of this research were collected from the authorized bodies that control the weather station networks over Alahsa Governorate, Eastern Province, Saudi Arabia. Although the number of weather stations within the region is very limited, and analysis using GIS software and its techniques is therefore difficult and limited, the analyzed data confirm an increase in temperature of more than 2 °C from 2004 to 2014. Such an increase is considerable whenever human health and comfort are the concern, and an increase of this magnitude within one decade confirms the presence of urban heat islands. The study concludes that Alahsa Governorate is vulnerable to creating urban heat islands and that more attention should be paid to strategic planning of the governorate, which is developing at a high pace with considerable increasing levels of urbanization.
Keywords: Alahsa Governorate, population density, Urban Heat Island, weather station
2035 Disaster Management Using Wireless Sensor Networks
Authors: Akila Murali, Prithika Manivel
Abstract:
Disasters are defined as a serious disruption of the functioning of a community or a society, involving widespread human, material, economic, or environmental impacts. The number of people suffering food crises as a result of natural disasters has tripled in the last thirty years. The economic losses due to natural disasters have increased by a factor of eight over the past four decades, caused by the increased vulnerability of global society and by an increase in the number of weather-related disasters. Efficient disaster detection and alerting systems could reduce the loss of life and property. In the event of a disaster, another important issue is a good search-and-rescue system with high levels of precision, timeliness, and safety for both the victims and the rescuers. Wireless Sensor Network technology has the capability of quickly capturing, processing, and transmitting critical data in real time with high resolution. This paper studies the capacity of sensors and Wireless Sensor Networks to collect, collate, and analyze valuable and worthwhile data, in an ordered manner, to help with disaster management.
Keywords: alerting systems, disaster detection, Ad Hoc network, WSN technology
2034 Integrated Grey Rational Analysis-Standard Deviation Method for Handover in Heterogeneous Networks
Authors: Mohanad Alhabo, Naveed Nawaz, Mahmoud Al-Faris
Abstract:
The dense deployment of small cells is a promising solution to enhance the coverage and capacity of heterogeneous networks (HetNets). However, unplanned deployment can bring new challenges to the network, ranging from interference to unnecessary handovers and handover failures, causing a degradation in the quality of service (QoS) delivered to the end user. In this paper, we propose an integrated Grey Rational Analysis-Standard Deviation based handover method (GRA-SD) for HetNets. The proposed method integrates the Standard Deviation (SD) technique to acquire the weights of the handover metrics and the GRA method to select the best handover base station. The performance of the GRA-SD method is evaluated and compared with traditional Multiple Attribute Decision Making (MADM) methods, including the Simple Additive Weighting (SAW) and VIKOR methods. Results reveal that the proposed method outperforms the other methods in terms of minimizing the number of frequent unnecessary handovers and handover failures, in addition to improving energy efficiency.
Keywords: energy efficiency, handover, HetNets, MADM, small cells
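A minimal sketch of the two ingredients named above: standard-deviation weighting of the handover metrics followed by grey relational scoring of candidate cells. The metric set, normalisation choices, and the distinguishing coefficient rho are illustrative assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def gra_sd_rank(decision_matrix, benefit_mask, rho=0.5):
    """Score candidate base stations with GRA using SD-derived weights.

    decision_matrix: rows = candidate cells, columns = handover metrics;
    benefit_mask[j] is True if a larger value of metric j is better."""
    x = decision_matrix.astype(float)
    # Normalise each metric to [0, 1], flipping cost criteria so 1 is ideal
    mn, mx = x.min(axis=0), x.max(axis=0)
    norm = (x - mn) / np.where(mx - mn == 0, 1, mx - mn)
    norm = np.where(benefit_mask, norm, 1.0 - norm)
    # Standard-deviation weights over the metrics
    sd = norm.std(axis=0)
    w = sd / sd.sum()
    # Grey relational coefficients against the ideal alternative (all ones)
    delta = np.abs(1.0 - norm)
    xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return (xi * w).sum(axis=1)          # grey relational grade per candidate

# Three candidate cells; metrics: [RSRP (benefit), load (cost), delay (cost)]
m = np.array([[-80.0, 0.6, 12.0],
              [-75.0, 0.9, 15.0],
              [-90.0, 0.2,  8.0]])
grades = gra_sd_rank(m, benefit_mask=np.array([True, False, False]))
print("Best candidate:", int(np.argmax(grades)), grades.round(3))
```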