Search results for: multi-objective evolutionary algorithms "MOEA"
1690 Computational Neurosciences: An Inspiration from Biological Neurosciences
Authors: Harsh Sadawarti, Kamal Malik
Abstract:
Humans are unique and the most powerful creatures on this planet because of the high level of intelligence gifted to them by nature. Computational intelligence is highly influenced by natural intelligence, the neurosciences, and mathematics. To study computational intelligence in depth and to utilize it in real-life applications, it is important to understand how it mirrors the human brain. In this paper, three important parts of the human brain, the Frontal Lobe, Occipital Lobe, and Parietal Lobe, are compared with the ANN (Artificial Neural Network), CNN (Convolutional Neural Network), and RNN (Recurrent Neural Network), respectively. Intelligent computational systems are created by combining deductive reasoning, logical concepts, and high-level algorithms with the simulation and study of the human brain. The human brain is a combination of physiology, psychology, emotions, calculations, and many other parameters of utmost importance that determine overall intelligence. To create intelligent algorithms and smart machines and to simulate the human brain effectively, it is important to have an insight into the human brain and the basic concepts of the biological neurosciences.
Keywords: computational intelligence, neurosciences, convolutional neural network, recurrent neural network, artificial neural network, frontal lobe, occipital lobe, parietal lobe
Procedia PDF Downloads 108
1689 Electrocardiogram-Based Heartbeat Classification Using Convolutional Neural Networks
Authors: Jacqueline Rose T. Alipo-on, Francesca Isabelle F. Escobar, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar Al Dahoul
Abstract:
Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human error. With advances in computing, machine learning algorithms have been increasingly used to analyze ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is the synthetic MIT-BIH Arrhythmia dataset produced by generative adversarial networks (GANs). Deep learning models such as the ResNet-50 convolutional neural network (CNN), a 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform the other models in terms of recall and F1 score, with five-fold average scores of 98.88% and 98.87%, respectively. The 1-D CNN, on the other hand, had the highest average precision of 98.93%.
Keywords: heartbeat classification, convolutional neural network, electrocardiogram signals, generative adversarial networks, long short-term memory, ResNet-50
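As a rough illustration of the 1-D CNN family of models evaluated above, the sketch below defines a minimal five-class heartbeat classifier in PyTorch. The layer sizes, the 187-sample beat length (typical of MIT-BIH-derived beat datasets), and all hyperparameters are assumptions for illustration, not the architecture used in the paper.

```python
# Illustrative sketch only: a minimal 1-D CNN for five-class heartbeat
# classification. All sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class HeartbeatCNN(nn.Module):
    def __init__(self, n_classes: int = 5, beat_len: int = 187):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2),  # raw beat -> 32 channels
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (beat_len // 4), 128),  # two poolings shrink length 4x
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):                 # x: (batch, 1, beat_len)
        return self.classifier(self.features(x))

model = HeartbeatCNN()
dummy = torch.randn(8, 1, 187)            # a batch of 8 single-lead beats
logits = model(dummy)                     # (8, 5) class scores
print(logits.shape)
```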
Procedia PDF Downloads 127
1688 Comparative Study of IC and Perturb and Observe Method of MPPT Algorithm for Grid Connected PV Module
Authors: Arvind Kumar, Manoj Kumar, Dattatraya H. Nagaraj, Amanpreet Singh, Jayanthi Prattapati
Abstract:
The purpose of this paper is to study and compare two maximum power point tracking (MPPT) algorithms in a photovoltaic simulation system: the perturb and observe algorithm and the incremental conductance algorithm. MPPT plays an important role in photovoltaic systems because it maximizes the power output from a PV system for a given set of conditions, thereby maximizing array efficiency and minimizing overall system cost. Since the maximum power point (MPP) varies with irradiation and cell temperature, appropriate algorithms must be utilized to track the MPP and maintain the operation of the system at it. MATLAB/Simulink is used to establish a model of a photovoltaic system with an MPPT function, developed by combining the established models of a solar PV module and a DC-DC boost converter. The system is simulated under different climate conditions. Simulation results show that the photovoltaic simulation system can track the maximum power point accurately.
Keywords: incremental conductance algorithm, perturb and observe algorithm, photovoltaic system, simulation results
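To make the compared logic concrete, the following is a minimal sketch of the perturb and observe rule: perturb the converter duty cycle, observe the change in PV power, and keep the perturbation direction only if power increased. (Incremental conductance instead tests the MPP condition dI/dV = −I/V.) The sensor and actuator functions and the step size are hypothetical placeholders, not the authors' Simulink model.

```python
# Minimal sketch of the perturb and observe (P&O) rule, not the authors'
# Simulink model. read_v/read_i (PV voltage and current sensors) and
# set_duty (boost-converter duty-cycle actuator) are hypothetical
# placeholders; the step size is an assumption.
def perturb_and_observe(read_v, read_i, set_duty, duty=0.5,
                        step=0.005, iterations=1000):
    p_prev = read_v() * read_i()
    direction = +1                       # current perturbation direction
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        set_duty(duty)
        p = read_v() * read_i()
        if p < p_prev:                   # power fell: reverse next perturbation
            direction = -direction
        p_prev = p
    return duty
```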
Procedia PDF Downloads 554
1687 AI Software Algorithms for Drivers Monitoring within Vehicles Traffic - SiaMOTO
Authors: Ioan Corneliu Salisteanu, Valentin Dogaru Ulieru, Mihaita Nicolae Ardeleanu, Alin Pohoata, Bogdan Salisteanu, Stefan Broscareanu
Abstract:
Creating a personalized statistic for an individual within the population using IT systems, based on the searches and intercepted spheres of interest they manifest, is just one 'atom' of the artificial intelligence analysis network. Having the ability to generate statistics based on individual data intercepted from large demographic areas, however, leads to reasoning like that issued by a human mind with global strategic ambitions. The DiaMOTO device is a technical sensory system that allows the interception of car events caused by a driver, positioning them in time and space. The device's connection to the vehicle creates a source of data whose analysis can build psychological and behavioural profiles of the drivers involved. The SiaMOTO system collects data from many vehicles equipped with DiaMOTO, driven by many different drivers, each with a unique fingerprint in their approach to driving. In this paper, we explain the software infrastructure of the SiaMOTO system, a system designed to monitor and improve driver behaviour, as well as the criteria and algorithms underlying the intelligent analysis process.
Keywords: artificial intelligence, data processing, driver behaviour, driver monitoring, SiaMOTO
Procedia PDF Downloads 86
1686 An Effective and Efficient Web Platform for Monitoring, Control, and Management of Drones Supported by a Microservices Approach
Authors: Jorge R. Santos, Pedro Sebastiao
Abstract:
In recent years there has been great growth in the use of drones in several areas, such as security, agriculture, and research. Systems that allow the remote control of drones already exist; however, they are quite simple and directed at specific functionality. This paper proposes the development of a web platform built with Vue.js and Node.js to control, manage, and monitor drones in real time. Using a microservice architecture, the proposed project is able to integrate algorithms that allow the optimization of processes. Communication with remote devices is done via HTTP over 3G, 4G, and 5G networks, either in real time or by scheduling routes. This paper addresses the case of forest fires as one of the services that could be included in such a system. The results obtained in this project were successful: the communication between the web platform and the drones allowed their remote control and monitoring, and the incorporation of the fire detection algorithm into the platform made possible real-time analysis of the images captured by the drone without human intervention. The proposed system has proved to be an asset to the use of drones in fire detection. The architecture of the developed application allows other algorithms to be implemented, yielding a more complex application with clear room for expansion.
Keywords: drone control, microservices, node.js, unmanned aerial vehicles, vue.js
Procedia PDF Downloads 147
1685 Predictive Pathogen Biology: Genome-Based Prediction of Pathogenic Potential and Countermeasures Targets
Authors: Debjit Ray
Abstract:
Horizontal gene transfer (HGT) and recombination lead to the emergence of bacterial antibiotic resistance and pathogenic traits. HGT events can be identified by comparing a large number of fully sequenced genomes across a species or genus, which makes it possible to define the phylogenetic range of HGT and to find potential sources of new resistance genes. In-depth comparative phylogenomics can also identify subtle genome or plasmid structural changes or mutations associated with phenotypic changes. Comparative phylogenomics requires accurately sequenced, complete, and properly annotated genomes of the organism. Assembling closed genomes requires additional mate-pair reads or "long read" sequencing data to accompany short-read paired-end data. To bring down the cost and time required to produce assembled genomes and to annotate genome features that inform drug resistance and pathogenicity, we are analyzing the genome-assembly performance of data from the Illumina NextSeq, which has faster throughput than the Illumina HiSeq (~1-2 days versus ~1 week) and, compared to the Illumina MiSeq, shorter reads (150bp paired-end versus 300bp paired-end) but higher capacity (150-400M reads per run versus ~5-15M). Bioinformatics improvements are also needed to make rapid, routine production of complete genomes a reality. Modern assemblers such as SPAdes 3.6.0 running on a standard Linux blade are capable, in a few hours, of converting mixes of reads from different library preps into high-quality assemblies with only a few gaps. Remaining breaks in scaffolds, generally due to repeats (e.g., rRNA genes), are addressed by our gap-closure software, which avoids custom PCR or targeted sequencing. Our goal is to improve the understanding of the emergence of pathogenesis using sequencing, comparative genomics, and machine learning analysis of ~1000 pathogen genomes. Machine learning algorithms will be used to digest the diverse features (changes in virulence genes, recombination, horizontal gene transfer, patient diagnostics). Temporal data and evolutionary models can thus determine whether the origin of a particular isolate is likely to have been the environment (could it have evolved from previous isolates?). This can be useful for comparing differences in virulence along or across the tree. More intriguingly, it can test whether there is a direction to virulence strength. This would open new avenues in the prediction of uncharacterized clinical bugs and of multidrug resistance evolution and pathogen emergence.
Keywords: genomics, pathogens, genome assembly, superbugs
Procedia PDF Downloads 196
1684 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, no quantum circuit implementation of these algorithms has been created, to the best of our knowledge. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measures based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time allows the decision problem to be transformed into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, all created using only quantum Toffoli gates, including its special forms, the Feynman and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover's algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
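For a sense of scale, the sketch below computes the register size and Grover iteration count implied by the abstract's encoding: N log₂ K qubits, with roughly (π/4)·√(2ⁿ/M) oracle calls for M marked states. It is a back-of-the-envelope estimate, not code from the paper's Q# implementation, and M = 1 is an assumption.

```python
# Back-of-the-envelope resource estimate for Grover's search over the
# N*log2(K) edge-choice encoding described above. Not the paper's code.
import math

def grover_resources(n_nodes: int, k_degree: int, marked: int = 1):
    qubits = n_nodes * math.ceil(math.log2(k_degree))   # edge-choice register
    search_space = 2 ** qubits
    iterations = math.floor((math.pi / 4) * math.sqrt(search_space / marked))
    return qubits, iterations

for n, k in [(4, 2), (6, 4), (8, 4)]:
    q, it = grover_resources(n, k)
    print(f"N={n}, K={k}: {q} qubits, ~{it} Grover iterations")
```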
Procedia PDF Downloads 189
1683 Synchronous Reference Frame and Instantaneous P-Q Theory Based Control of Unified Power Quality Conditioner for Power Quality Improvement of Distribution System
Authors: Ambachew Simreteab Gebremedhn
Abstract:
Context: The paper explores the use of synchronous reference frame theory (SRFT) and instantaneous reactive power theory (IRPT) based control of the Unified Power Quality Conditioner (UPQC) for improving power quality in distribution systems. Research Aim: To investigate the performance of different control configurations of the UPQC using SRFT and IRPT for mitigating power quality issues in distribution systems. Methodology: The study compares three control techniques (SRFT-IRPT, SRFT-SRFT, IRPT-IRPT) implemented in the series and shunt active filters of the UPQC. Data is collected under various control algorithms to analyze UPQC performance. Findings: Results indicate the effectiveness of SRFT- and IRPT-based control techniques in addressing power quality problems such as voltage sags, swells, unbalance, and voltage and current harmonics in distribution systems. Theoretical Importance: The study provides insights into the application of SRFT and IRPT in improving power quality, specifically in mitigating unbalanced voltage sags, where conventional methods fall short. Data Collection: Data is collected under various control algorithms using simulation in MATLAB/Simulink, with real-time operation executed and experimental results obtained using RT-LAB. Analysis Procedures: Performance analysis of the UPQC under different control algorithms is conducted to evaluate the effectiveness of SRFT- and IRPT-based control techniques in mitigating power quality issues. Questions Addressed: How do SRFT- and IRPT-based control techniques compare in improving power quality in distribution systems? What is the impact of different control configurations on the performance of the UPQC? Conclusion: The study demonstrates the efficacy of SRFT- and IRPT-based control of the UPQC in mitigating power quality issues in distribution systems, highlighting its potential for enhancing voltage and current quality.
Keywords: power quality, UPQC, shunt active filter, series active filter, non-linear load, RT-LAB, MATLAB
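The core step shared by the SRFT-based controllers above is the transformation of three-phase quantities into the rotating d-q frame, where the fundamental component becomes a DC value that can be low-pass filtered to build compensation references. The sketch below shows this textbook Park transform, not the authors' controller; the amplitude-invariant 2/3 scaling is one assumed convention.

```python
# Textbook abc -> dq (Park) transform used by SRFT-style controllers.
# Illustration only; the 2/3 amplitude-invariant scaling is an assumed
# convention, and theta would come from a PLL in a real controller.
import numpy as np

def abc_to_dq(va, vb, vc, theta):
    d = (2 / 3) * (va * np.cos(theta)
                   + vb * np.cos(theta - 2 * np.pi / 3)
                   + vc * np.cos(theta + 2 * np.pi / 3))
    q = -(2 / 3) * (va * np.sin(theta)
                    + vb * np.sin(theta - 2 * np.pi / 3)
                    + vc * np.sin(theta + 2 * np.pi / 3))
    return d, q

# Balanced 50 Hz set: d settles to the amplitude, q to ~0.
t = np.linspace(0, 0.04, 400)
theta = 2 * np.pi * 50 * t
va = 230 * np.cos(theta)
vb = 230 * np.cos(theta - 2 * np.pi / 3)
vc = 230 * np.cos(theta + 2 * np.pi / 3)
d, q = abc_to_dq(va, vb, vc, theta)
print(round(d.mean(), 1), round(q.mean(), 6))   # ~230.0, ~0.0
```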
Procedia PDF Downloads 6
1682 The Trajectory of the Ball in Football Game
Authors: Mahdi Motahari, Mojtaba Farzaneh, Ebrahim Sepidbar
Abstract:
Tracking of moving and flying targets is one of the most important issues in image processing. Estimating the trajectory of a desired object on short-term and long-term scales is even more important than the tracking itself. In this paper, a new way of identifying and estimating the future trajectory of a moving ball on a long-term scale is presented, using the synthesis and interaction of several image processing algorithms: noise removal and image segmentation, a Kalman filter for estimating the trajectory of the ball in a football game on a short-term scale, and an intelligent adaptive neuro-fuzzy algorithm based on the time series of traversed distance. The proposed system attains more than 96% identification accuracy using the aforesaid methods and algorithms on a video database. Although the present method has high precision, it is time-consuming. By comparing this method with other methods, we establish its accuracy and efficiency.
Keywords: tracking, signal processing, moving and flying targets, artificial intelligence systems, trajectory estimation, Kalman filter
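As a hedged illustration of the short-term stage described above, the following sketch runs a constant-velocity Kalman filter over measured ball centroids. The state model, noise covariances, and measurements are assumptions for illustration, not the paper's tuning.

```python
# Constant-velocity Kalman filter over 2-D ball centroids. Model and
# noise levels are illustrative assumptions.
import numpy as np

dt = 1.0                                   # one frame
F = np.array([[1, 0, dt, 0],               # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # we only measure (x, y)
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                       # process noise (assumed)
R = 4.0 * np.eye(2)                        # measurement noise (assumed)

x = np.zeros(4)                            # initial state
P = 100.0 * np.eye(4)                      # initial uncertainty

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measured ball centroid z = (px, py)
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [(10, 5), (12, 7), (14, 9), (16, 11)]:
    x, P = kalman_step(x, P, np.array(z, dtype=float))
print("estimated position/velocity:", np.round(x, 2))
```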
Procedia PDF Downloads 4551681 Estimation of Transition and Emission Probabilities
Authors: Aakansha Gupta, Neha Vadnere, Tapasvi Soni, M. Anbarsi
Abstract:
Protein secondary structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine and biotechnology. Some aspects of protein function and genome analysis can be predicted by secondary structure prediction, which is used to help annotate sequences, classify proteins, identify domains, and recognize functional motifs. In this paper, we represent protein secondary structure as a mathematical model. To extract and predict the protein secondary structure from the primary structure, we require a set of parameters. Any constants appearing in the model are specified by these parameters, which also provide a mechanism for the efficient and accurate use of data. Many algorithms exist for estimating these model parameters, the most popular being the Expectation Maximization (EM) algorithm. These model parameters are estimated from protein datasets like RS126 using the Bayesian probabilistic method (the dataset being categorical). This work can then be extended to comparing the efficiency of the EM algorithm with that of other algorithms for estimating the model parameters, which will in turn lead to an efficient component for protein secondary structure prediction. Further, this paper provides scope to use these parameters for predicting the secondary structure of proteins using machine learning techniques like neural networks and fuzzy logic. The ultimate objective is to obtain greater accuracy than previously achieved.
Keywords: model parameters, expectation maximization algorithm, protein secondary structure prediction, bioinformatics
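For the simplest, fully observed case of the estimation problem posed above, transition and emission probabilities of a categorical model can be obtained by counting with Laplace smoothing, which is one Bayesian treatment of a categorical dataset. The sketch below shows this baseline; the EM algorithm discussed in the paper addresses the harder case where the states are hidden. The toy labels (H = helix, E = strand, C = coil) are illustrative assumptions.

```python
# Counting-based MLE of transition/emission probabilities with Laplace
# smoothing, for the fully observed case. Toy data; smoothing is applied
# only over observed categories for brevity.
from collections import defaultdict

def estimate_params(sequences, alpha=1.0):
    """sequences: list of (residues, states) pairs of equal-length strings."""
    trans = defaultdict(lambda: defaultdict(float))
    emit = defaultdict(lambda: defaultdict(float))
    for residues, states in sequences:
        for i, (r, s) in enumerate(zip(residues, states)):
            emit[s][r] += 1
            if i + 1 < len(states):
                trans[s][states[i + 1]] += 1

    def normalize(table):
        out = {}
        for s, counts in table.items():
            keys = sorted(counts)
            total = sum(counts.values()) + alpha * len(keys)
            out[s] = {k: (counts[k] + alpha) / total for k in keys}
        return out

    return normalize(trans), normalize(emit)

data = [("ACDEAC", "HHHECC"), ("CDEACD", "CEEHHH")]
transitions, emissions = estimate_params(data)
print(transitions["H"])   # smoothed P(next state | current state = H)
```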
Procedia PDF Downloads 479
1680 Telecontrolled Service Robots for Increasing the Quality of Life of Elderly and Disabled
Authors: Nayden Chivarov, Denis Chikurtev, Kaloyan Yovchev, Nedko Shivarov
Abstract:
This paper presents methods for improving the efficiency and precision of a service mobile robot used to increase the quality of life of elderly and disabled people. The key concept of the proposed Intelligent Service Mobile Robot is its easy adaptability to a wide range of elderly or disabled persons' needs, performing different tasks to support their care. We developed autonomous navigation and computer vision systems for the robot in order to recognize different objects and bring them to people. A web-based user interface was developed to provide easy access and tele-control of the robot from any device over the internet. In this study, algorithms for object recognition and localization are proposed to provide successful object recognition and accurate positioning. Different methods for sending movement commands to the mobile robot system are proposed and evaluated. From the experiments carried out, we can summarize that these systems and algorithms provide good control of the service mobile robot and make it more useful in helping elderly and disabled persons.
Keywords: service robot, mobile robot, autonomous navigation, computer vision, web user interface, ROS
Procedia PDF Downloads 338
1679 High-Accuracy Satellite Image Analysis and Rapid DSM Extraction for Urban Environment Evaluations (Tripoli-Libya)
Authors: Abdunaser Abduelmula, Maria Luisa M. Bastos, José A. Gonçalves
Abstract:
The modeling of the earth's surface and the evaluation of urban environments with 3D models is an important research topic. New stereo capabilities of high-resolution optical satellite images, such as the tri-stereo mode of Pleiades, combined with new image matching algorithms, are now available and can be applied in urban area analysis. In addition, photogrammetry software packages have gained new, more efficient matching algorithms, such as SGM, as well as improved filters to deal with shadow areas, and can achieve denser and more precise results. This paper describes a comparison between 3D data extracted from tri-stereo and dual-stereo satellite images, combined with pixel-based matching and the Wallis filter. The aim was to improve the accuracy of 3D models, especially in urban areas, in order to assess whether satellite images are appropriate for a rapid evaluation of urban environments. The results showed that the 3D models achieved from Pleiades tri-stereo outperformed, both in terms of accuracy and detail, the results obtained from a Geo-eye pair. The assessment was made against reference digital surface models derived from high-resolution aerial photography. This suggests that tri-stereo images can be successfully used for the proposed urban change analyses.
Keywords: 3D models, environment, matching, Pleiades
Procedia PDF Downloads 328
1678 Artificial Intelligence and Governance in Relevance to Satellites in Space
Authors: Anwesha Pathak
Abstract:
With the increasing number of satellites and space debris, space traffic management (STM) becomes crucial. AI can aid STM by predicting and preventing potential collisions, optimizing satellite trajectories, and managing orbital slots. Governance frameworks need to address the integration of AI algorithms in STM to ensure safe and sustainable satellite activities. AI and governance play significant roles in the context of satellite activities in space. Artificial intelligence (AI) technologies, such as machine learning and computer vision, can be utilized to process the vast amounts of data received from satellites. AI algorithms can analyze satellite imagery, detect patterns, and extract valuable information for applications like weather forecasting, urban planning, agriculture, disaster management, and environmental monitoring. AI can assist in automating and optimizing satellite operations. Autonomous decision-making systems can be developed using AI to handle routine tasks like orbit control, collision avoidance, and antenna pointing. These systems can improve efficiency, reduce human error, and enable real-time responsiveness in satellite operations. AI technologies can also be leveraged to enhance the security of satellite systems. AI algorithms can analyze satellite telemetry data to detect anomalies, identify potential cyber threats, and mitigate vulnerabilities. Governance frameworks should encompass regulations and standards for securing satellite systems against cyberattacks and ensuring data privacy. AI can optimize resource allocation and utilization in satellite constellations. By analyzing user demands, traffic patterns, and satellite performance data, AI algorithms can dynamically adjust the deployment and routing of satellites to maximize coverage and minimize latency. Governance frameworks need to address fair and efficient resource allocation among satellite operators to avoid monopolistic practices. Satellite activities involve multiple countries and organizations. Governance frameworks should encourage international cooperation, information sharing, and standardization to address common challenges, ensure interoperability, and prevent conflicts. AI can facilitate cross-border collaborations by providing data analytics and decision support tools for shared satellite missions and data sharing initiatives. AI and governance are thus critical aspects of satellite activities in space: they enable efficient and secure operations, ensure the responsible and ethical use of AI technologies, and promote international cooperation for the benefit of all stakeholders involved in the satellite industry.
Keywords: satellite, space debris, traffic, threats, cyber security
Procedia PDF Downloads 74
1677 A Comparative Approach for Modeling the Toxicity of Metal Mixtures in Two Ecologically Related Three-Spined (Gasterosteus aculeatus L.) And Nine-Spined (Pungitius pungitius L.) Sticklebacks
Authors: Tomas Makaras
Abstract:
Sticklebacks (Gasterosteiformes) are increasingly used in ecological and evolutionary research and have a well-established role as model species for biologists. However, ecotoxicology studies concerning behavioural effects in sticklebacks regarding stress responses, mainly induced by chemical mixtures, have hardly been addressed. Moreover, although many authors have emphasised the similarity between the three-spined and nine-spined stickleback in morphological, neuroanatomical, and behavioural adaptations to environmental changes, several comparative studies have revealed considerable differences between these species in their susceptibility and resistance to various stressors in laboratory experiments. The hypothesis of this study was that three-spined and nine-spined stickleback species would demonstrate apparent differences in response patterns and sensitivity to metal-based chemical stimuli. For this purpose, we investigated the swimming behaviour (including the mortality rate based on 96-h LC50 values) of the two ecologically similar species, the three-spined (Gasterosteus aculeatus) and nine-spined stickleback (Pungitius pungitius), under short-term (up to 24 h) metal mixture (MIX) exposure, and evaluated the relevance and efficacy of the behavioural responses of the test species in the early toxicity assessment of chemical mixtures. Fish were exposed to a mixture of six metals (Zn, Pb, Cd, Cu, Ni, and Cr), each singled out by the Water Framework Directive as a priority or relevant substance in surface water; the mixture was prepared according to the environmental quality standards (EQSs) of these metals set for inland waters in the European Union (Directive 2013/39/EU). Based on the acute toxicity results, G. aculeatus was found to be slightly (1.4-fold) more tolerant of the MIX impact than P. pungitius specimens. The behavioural analysis showed a main effect of the interaction between time, species, and treatment variables. Although both species exposed to MIX revealed a decreasing tendency in swimming activity, their responsiveness to MIX was somewhat different. Substantial changes in the activity of G. aculeatus were established after 3-h exposure to MIX solutions at concentrations 1.43-fold lower, and in the case of P. pungitius 1.96-fold higher, than the established 96-h LC50 values for each species. This study demonstrated species-specific differences in response sensitivity to metal-based water pollution, indicating behavioural insensitivity of P. pungitius compared to G. aculeatus. While many studies highlight the usefulness and suitability of nine-spined sticklebacks for evolutionary and ecological research, attested by their increasing popularity in these fields, great caution must be exercised when using them as model species in ecotoxicological research to probe metal contamination. Meanwhile, G. aculeatus proved to be a promising bioindicator species in the field of environmental ecotoxicology.
Keywords: acute toxicity, comparative behaviour, metal mixture, swimming activity
Procedia PDF Downloads 160
1676 On the Theory of Persecution
Authors: Aleksander V. Zakharov, Marat R. Bogdanov, Ramil F. Malikov, Irina N. Dumchikova
Abstract:
A classification of persecution movement laws is proposed. Modes of persecution were researched in a number of specific cases, and modes of movement control using GLONASS/GPS are discussed.
Keywords: UAV management, mathematical algorithms of targeting and persecution, GLONASS, GPS
Procedia PDF Downloads 342
1675 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping
Authors: Masato Saeki
Abstract:
Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique with many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of the granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment; therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. It has been shown experimentally in many papers that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. To improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the behaviors of the granular particles in each divided area of the damper container are the same, the contact force of the primary system with all particles can be taken to be equal to the product of the number of divided damper areas and the contact force of the primary system with the granular materials per divided area. This convenience makes it possible to considerably reduce the calculation time. The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and the particle material influence the damper performance.
Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level
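A minimal sketch of the acceleration idea, under stated assumptions, is given below: with a linear spring-dashpot contact law (the standard DEM element mentioned above), the wall force computed for one sub-region is multiplied by the number of identical sub-regions. Stiffness and damping values are illustrative, not calibrated.

```python
# Sketch of the equal-behavior shortcut: simulate one sub-region and
# scale its particle-wall force by the number of sub-regions. The
# spring-dashpot parameters are illustrative assumptions.
def contact_force(overlap, rel_vel, k=1.0e5, c=50.0):
    """Normal particle-wall force for one contact (spring + dashpot)."""
    if overlap <= 0.0:
        return 0.0                      # no contact, no force
    return k * overlap + c * rel_vel

def wall_force_accelerated(contacts_one_region, n_div):
    """Force on the cavity wall from one simulated sub-region, scaled up."""
    f_region = sum(contact_force(o, v) for o, v in contacts_one_region)
    return n_div * f_region             # the paper's equal-behavior shortcut

# Three particle-wall contacts in the simulated sub-region: (overlap, rel_vel)
contacts = [(1e-4, 0.02), (5e-5, -0.01), (0.0, 0.3)]
print(wall_force_accelerated(contacts, n_div=8))
```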
Procedia PDF Downloads 450
1674 Logical-Probabilistic Modeling of the Reliability of Complex Systems
Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia
Abstract:
The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, based on which a web application for structural analysis and reliability assessment of systems was created. The reliability assessment process includes the following stages, which are reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest ways of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of logical elements with probabilistic elements in the ODNF, obtaining a reliability estimation polynomial and quantifying reliability; 6) calculation of the weights of elements. Using the logical-probabilistic methods, models, and algorithms discussed in the paper, special software was created by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems and the research and design of systems with optimal structure are carried out.
Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability, weight of element
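A toy walk-through of stages 3-5 helps fix the idea. For a small system with shortest success paths {1,2} and {1,3}, the DNF y = x₁x₂ ∨ x₁x₃ orthogonalizes to the disjoint form y = x₁x₂ ∨ x₁x̄₂x₃, so the terms' probabilities simply add. The sketch below evaluates the resulting polynomial and cross-checks it by brute force; the element reliabilities are assumptions.

```python
# Stages 3-5 on a toy system: paths -> orthogonalized polynomial ->
# reliability value, cross-checked by enumeration. Reliabilities assumed.
from itertools import product

paths = [{1, 2}, {1, 3}]                 # shortest ways of successful functioning
p = {1: 0.9, 2: 0.8, 3: 0.7}             # element reliabilities (assumed)

# Orthogonalized polynomial, written out by hand for this small system:
R_poly = p[1] * p[2] + p[1] * (1 - p[2]) * p[3]

# Cross-check by brute-force enumeration of all 2^3 element states:
R_enum = 0.0
for states in product([0, 1], repeat=3):
    x = dict(zip([1, 2, 3], states))
    if any(all(x[e] for e in path) for path in paths):   # system works?
        prob = 1.0
        for e, s in x.items():
            prob *= p[e] if s else 1 - p[e]
        R_enum += prob

print(R_poly, round(R_enum, 10))         # both 0.846
```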
Procedia PDF Downloads 69
1673 An MrPPG Method for Face Anti-Spoofing
Authors: Lan Zhang, Cailing Zhang
Abstract:
In recent years, many face anti-spoofing algorithms have achieved high detection accuracy when detecting 2D presentation attacks or 3D mask attacks alone, but their detection performance is greatly reduced in multidimensional and cross-dataset tests. The rPPG (remote photoplethysmography) method used for face anti-spoofing relies on the unique vital signs of a real face to distinguish real faces from spoofing attacks, so it has strong stability compared with other methods; however, its detection rate for 2D attacks needs to be improved. Therefore, in this paper, we improve an rPPG method (MrPPG) for face anti-spoofing through color space fusion, using the correlation of pulse signals between real face regions and background regions, and introducing a recurrent neural network (LSTM) to improve accuracy in 2D face anti-spoofing. The MrPPG method also has high accuracy and good stability in face anti-spoofing on multidimensional and cross-data datasets. The improved method was validated on the Replay-Attack, CASIA-FASD, SiW, and HKBU_MARs_V2 datasets; the experimental results show that the performance and stability of the improved algorithm proposed in this paper are superior to many advanced algorithms.
Keywords: face anti-spoofing, face presentation attack detection, remote photoplethysmography, MrPPG
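As a hedged sketch of the signal-level idea behind rPPG-style liveness checks, the code below averages the green channel over a region per frame, band-passes to a plausible heart-rate band, and compares the face and background traces; a low correlation is taken to suggest a genuine pulse confined to the face. The ROIs, the 0.7-4 Hz band, and the score are illustrative stand-ins, not the MrPPG pipeline.

```python
# Illustrative rPPG trace extraction and a crude face/background
# correlation score. Random frames stand in for real video.
import numpy as np
from scipy.signal import butter, filtfilt

def rppg_trace(frames, roi, fps=30.0):
    """frames: (T, H, W, 3) uint8 video; roi: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    g = frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))   # mean green per frame
    g = g - g.mean()                                   # crude detrend
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps) # heart-rate band (assumed)
    return filtfilt(b, a, g)

def liveness_score(face_trace, bg_trace):
    """Low face/background correlation suggests a live face's own pulse."""
    r = np.corrcoef(face_trace, bg_trace)[0, 1]
    return 1.0 - abs(r)

video = np.random.randint(0, 256, size=(300, 64, 64, 3), dtype=np.uint8)
face = rppg_trace(video, (8, 56, 8, 56))
background = rppg_trace(video, (0, 8, 0, 8))
print(round(liveness_score(face, background), 3))
```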
Procedia PDF Downloads 177
1672 Evolutionary Advantages of Loneliness with an Agent-Based Model
Authors: David Gottlieb, Jason Yoder
Abstract:
The feeling of loneliness is not uncommon in modern society, and yet there is a fundamental lack of understanding of its origins and purpose in nature. One interpretation of loneliness is that it is a subjective experience that punishes a lack of social behavior, and thus its emergence in human evolution is seemingly tied to the survival of early human tribes. Still, a common counterintuitive response to loneliness is a state of hypervigilance resulting in social withdrawal, which may appear maladaptive in modern society. So far, no computational model of loneliness's effect during evolution exists; however, agent-based models (ABM) can be used to investigate social behavior, and applying evolution to agents' behaviors can demonstrate selective advantages for particular behaviors. We propose an ABM where each agent contains four social behaviors and one goal-seeking behavior, letting evolution select the best behavioral patterns for resource allocation. In our paper, we use an algorithm similar to the boid model to guide the behavior of agents, but expand the set of rules that govern their behavior. While we use cohesion, separation, and alignment for simple social movement, our expanded model adds goal-oriented behavior, inspired by particle swarm optimization, such that agents move relative to their personal best position. Since agents are given the ability to form connections by interacting with each other, our final behavior guides agent movement toward its social connections. Finally, we introduce a mechanism to represent a state of loneliness, which engages when an agent's perceived social involvement does not meet its expected social involvement. This enables us to investigate a minimal model of loneliness, and using evolution, we attempt to elucidate its value in human survival. Agents are placed in an environment in which they must acquire resources, as their fitness is based on the total resources collected. With these rules in place, we are able to run evolution under various conditions, including resource-rich environments and environments where disease is present. Our simulations indicate that there is strong selection pressure for social behavior under circumstances where there is a clear discrepancy between initial resource locations, and against social behavior when disease is present, mirroring hypervigilance. This not only provides an explanation for the emergence of loneliness but also reflects the diversity of responses to loneliness in the real world. In addition, there is evidence of a richness of social behavior when loneliness is present. By introducing just two resource locations, we observed a divergence in social motivation after agents became lonely, where one agent learned to move to the other, who was in a better resource position. The results and ongoing work from this project show that it is possible to glean insight into the evolutionary advantages of even simple mechanisms of loneliness. The model we developed has produced unexpected results and has led to more questions, such as the impact loneliness would have at a larger scale, or the effect of creating a set of rules governing interaction beyond adjacency.
Keywords: agent-based, behavior, evolution, loneliness, social
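A condensed sketch of one movement update combining the five behaviors described above (cohesion, separation, alignment, goal-seeking toward a personal best, and movement toward social connections gated by a loneliness trigger) is given below. The weights and the loneliness rule are illustrative assumptions, not the authors' evolved parameters.

```python
# One agent update mixing boid rules, a PSO-style personal-best pull,
# and a loneliness-gated pull toward social connections. Weights assumed.
import numpy as np

def step(agent, neighbors, connections, weights):
    w_coh, w_sep, w_ali, w_goal, w_soc = weights
    steer = np.zeros(2)
    if neighbors:
        pos = np.array([n["pos"] for n in neighbors])
        vel = np.array([n["vel"] for n in neighbors])
        steer += w_coh * (pos.mean(axis=0) - agent["pos"])        # cohesion
        steer += w_sep * (agent["pos"] - pos).sum(axis=0) * 0.1   # separation
        steer += w_ali * (vel.mean(axis=0) - agent["vel"])        # alignment
    steer += w_goal * (agent["best_pos"] - agent["pos"])          # goal-seeking
    # loneliness: perceived involvement below expectation -> seek connections
    lonely = len(connections) < agent["expected_social"]
    if lonely and connections:
        conn_pos = np.array([c["pos"] for c in connections])
        steer += w_soc * (conn_pos.mean(axis=0) - agent["pos"])
    agent["vel"] = 0.9 * agent["vel"] + 0.1 * steer
    agent["pos"] = agent["pos"] + agent["vel"]
    return agent

agent = {"pos": np.zeros(2), "vel": np.zeros(2),
         "best_pos": np.array([5.0, 5.0]), "expected_social": 2}
print(step(agent, [], [], (1.0, 1.0, 1.0, 0.5, 0.8))["pos"])
```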
Procedia PDF Downloads 94
1671 Quantum Cryptography: Classical Cryptography Algorithms' Vulnerability State as Quantum Computing Advances
Authors: Tydra Preyear, Victor Clincy
Abstract:
Quantum computing presents many computational advantages over classical computing methods due to its utilization of quantum mechanics. The capability of this computing infrastructure poses threats to standard cryptographic systems such as RSA and AES, which are designed for classical computing environments. This paper discusses the impact that quantum computing has on cryptography, focusing on the evolution from classical cryptographic concepts to quantum and post-quantum cryptographic concepts. Standard cryptography is essential for securing data through encryption and decryption methods, and these methods face vulnerabilities as quantum computing advances. To counter these vulnerabilities, quantum cryptography and post-quantum cryptography have been proposed. Quantum cryptography uses principles such as the uncertainty principle and photon polarization to provide secure data transmission. In addition, the concept of quantum key distribution is introduced to ensure more secure communication channels by distributing cryptographic keys. Post-quantum cryptography, in turn, strengthens cryptographic algorithms so that they remain secure against attacks by both classical and quantum computers. Throughout this exploration, the paper emphasizes the critical role of advancing cryptographic methods in keeping data integrity and privacy safe as quantum computing matures. Future research directions include the development of more effective cryptographic methods as technology advances.
Keywords: quantum computing, quantum cryptography, cryptography, data integrity and privacy
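To make the QKD idea concrete, here is a toy, classically simulated BB84 sketch: Alice encodes random bits in random polarization bases, Bob measures in random bases, and only positions where the bases match are kept as the sifted key. This is the textbook protocol for illustration; no eavesdropper or error-correction model is included.

```python
# Toy BB84 sifting, simulated classically. Illustration of the textbook
# protocol, not a secure implementation.
import secrets

def bb84_sifted_key(n_photons: int = 32):
    alice_bits = [secrets.randbelow(2) for _ in range(n_photons)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_photons)]  # 0 = +, 1 = x
    bob_bases = [secrets.randbelow(2) for _ in range(n_photons)]
    sifted = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:          # same basis: measurement is deterministic
            sifted.append(bit)
        # mismatched basis: result would be random, position is discarded
    return sifted

key = bb84_sifted_key()
print(len(key), "sifted bits:", key)    # on average half the photons survive
```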
Procedia PDF Downloads 20
1670 Identification of Biological Pathways Causative for Breast Cancer Using Unsupervised Machine Learning
Authors: Karthik Mittal
Abstract:
This study performs an unsupervised machine learning analysis to find clusters of related SNPs that highlight biological pathways important for the biological mechanisms of breast cancer. Studying genetic variations in isolation is illogical, because these genetic variations are known to modulate protein production and function, and the downstream effects of these modifications on biological outcomes are highly interconnected. After extracting the SNPs and their effects on different types of breast cancer using the MRBase library, two unsupervised machine learning clustering algorithms were implemented on the genetic variants: a k-means clustering algorithm and a hierarchical clustering algorithm; furthermore, principal component analysis was executed to visually represent the data. These algorithms specifically used each SNP's beta values for the three different types of breast cancer tested in this project (estrogen-receptor-positive breast cancer, estrogen-receptor-negative breast cancer, and breast cancer in general) to perform the clustering. Two significant genetic pathways validated the clustering produced by this project: the MAPK signaling pathway and the connection between the BRCA2 gene and the ESR1 gene. This study provides a first proof of concept showing the importance of unsupervised machine learning in interpreting GWAS summary statistics.
Keywords: breast cancer, computational biology, unsupervised machine learning, k-means, PCA
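A minimal sketch of the clustering setup described above: each SNP is a point in a three-dimensional space of beta values (ER-positive, ER-negative, overall breast cancer), clustered with k-means and projected with PCA for visualization. The random matrix and k = 3 are placeholders for the MRBase-derived data.

```python
# k-means over SNP beta values plus a PCA projection. Random data stands
# in for the MRBase-derived matrix; k = 3 is an assumption.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
betas = rng.normal(size=(200, 3))       # rows = SNPs, cols = beta per phenotype

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(betas)
labels = kmeans.labels_                 # cluster id per SNP

coords = PCA(n_components=2).fit_transform(betas)  # 2-D view for plotting
for c in range(3):
    print(f"cluster {c}: {np.sum(labels == c)} SNPs, "
          f"centroid {np.round(kmeans.cluster_centers_[c], 2)}")
```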
Procedia PDF Downloads 145
1669 Semiautomatic Calculation of Ejection Fraction Using Echocardiographic Image Processing
Authors: Diana Pombo, Maria Loaiza, Mauricio Quijano, Alberto Cadena, Juan Pablo Tello
Abstract:
In this paper, we present a semi-automatic tool for calculating the ejection fraction from an echocardiographic video signal drawn from a DICOM-format database of Clinica de la Costa, Barranquilla. Each of the steps and methods used to obtain the calculation is described: acquisition and formation of the test samples, processing, and finally the calculation of the parameters needed to obtain the ejection fraction. Two image segmentation methods were compared within a methodological framework that is similar only in the initial stages of processing (filtering and image enhancement) and differs at the end, where the segmentation algorithms are implemented (active contour and region growing). The results were compared with the measurements obtained by two medical specialists in cardiology, who calculated the ejection fraction of the study samples using the traditional method, which consists of drawing the region of interest directly on the computer of the echocardiography equipment and applying a simple equation to calculate the desired value. The results showed that if the quality of the video samples is good (i.e., the pre-processing yields evidence of an improvement in contrast), the values provided by the tool are substantially close to those reported by the physicians; the correlation between physicians also does not vary significantly.
Keywords: echocardiography, DICOM, processing, segmentation, EDV, ESV, ejection fraction
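The final calculation step reduces to EF = (EDV − ESV) / EDV × 100. The sketch below applies it, using the single-plane area-length formula V = 8A²/(3πL) as one common (assumed) convention for turning a segmented 2-D contour into a volume.

```python
# Ejection fraction from end-diastolic/end-systolic volumes. The
# area-length volume estimate is one assumed convention, not
# necessarily the one used by the paper's tool.
from math import pi

def volume_area_length(area_cm2: float, length_cm: float) -> float:
    """Single-plane area-length estimate: V = 8*A^2 / (3*pi*L)."""
    return 8.0 * area_cm2 ** 2 / (3.0 * pi * length_cm)

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    return (edv_ml - esv_ml) / edv_ml * 100.0

edv = volume_area_length(area_cm2=35.0, length_cm=8.0)   # end-diastole contour
esv = volume_area_length(area_cm2=20.0, length_cm=7.0)   # end-systole contour
print(f"EDV={edv:.1f} ml, ESV={esv:.1f} ml, EF={ejection_fraction(edv, esv):.1f}%")
```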
Procedia PDF Downloads 425
1668 Transparency of Algorithmic Decision-Making: Limits Posed by Intellectual Property Rights
Authors: Olga Kokoulina
Abstract:
Today, algorithms are assuming a leading role in various areas of decision-making. Prompted by a promise to provide increased economic efficiency and fuel solutions for pressing societal challenges, algorithmic decision-making is often celebrated as an impartial and constructive substitute for human adjudication. But in the face of this implied objectivity and efficiency, the application of algorithms is also marred by mounting concerns about embedded biases, discrimination, and exclusion. In Europe, vigorous debates on the risks and adverse implications of algorithmic decision-making largely revolve around the potential of data protection laws to tackle some of the related issues. For example, one of the often-cited venues for mitigating the impact of potentially unfair decision-making practices is the so-called 'right to explanation'. In essence, the overall right is derived from the provisions of the General Data Protection Regulation ('GDPR') ensuring the right of data subjects to access and mandating the obligation of data controllers to provide the relevant information about the existence of automated decision-making and meaningful information about the logic involved. Taking the corresponding rights and obligations in the context of the specific provision on automated decision-making in the GDPR, the debates mainly focus on the efficacy and the exact scope of the 'right to explanation'. In essence, the underlying logic of the argued remedy lies in a transparency imperative: allowing data subjects to acquire as much knowledge as possible about the decision-making process means empowering individuals to take control of their data and take action. In other words, forewarned is forearmed. The related discussions and debates are ongoing, comprehensive, and often heated. However, they are also frequently misguided and isolated: embracing data protection law as the ultimate and sole lens is often not sufficient. Mandating the disclosure of the technical specifications of employed algorithms in the name of transparency for, and empowerment of, data subjects potentially encroaches on the interests and rights of IPR holders, i.e., the business entities behind the algorithms. The study aims to push the boundaries of the transparency debate beyond the data protection regime. By systematically analysing legal requirements and current judicial practice, it assesses the limits posed on the transparency requirement and the right to access by intellectual property law, namely by copyright and trade secrets. It is asserted that trade secrets, in particular, present an often insurmountable obstacle to realising the potential of the transparency requirement. In reaching that conclusion, the study explores the limits of protection afforded by the European Trade Secrets Directive and contrasts them with the scope of the respective rights and obligations related to data access and portability enshrined in the GDPR. As shown, the far-reaching scope of protection under trade secrecy is evidenced both through the assessment of its subject matter and through the exceptions to such protection. As a way forward, the study scrutinises several possible legislative solutions, such as a flexible interpretation of the public interest exception in trade secrets law, as well as the introduction of a strict liability regime in cases of non-transparent decision-making.
Keywords: algorithms, public interest, trade secrets, transparency
Procedia PDF Downloads 124
1667 Hybrid Genetic Approach for Solving Economic Dispatch Problems with Valve-Point Effect
Authors: Mohamed I. Mahrous, Mohamed G. Ashmawy
Abstract:
A hybrid genetic algorithm (HGA) is proposed in this paper to determine the economic scheduling of electric power generation over a fixed time period under various system and operational constraints. The proposed technique can outperform conventional genetic algorithms (CGAs) in the sense that the HGA makes it possible both to improve the quality of the solution and to reduce the computing expense. By contrast, a carefully designed GA can only balance the exploration and the exploitation of the search effort, which means that an increase in the accuracy of a solution can only occur at the sacrifice of convergence speed, and vice versa; it is unlikely that both can be improved simultaneously. The proposed hybrid scheme is developed in such a way that a simple GA acts as a base-level search, making quick decisions to direct the search towards the optimal region, after which a local search method (the pattern search technique) is employed for fine-tuning. The aim of the strategy is to achieve the cost reduction within a reasonable computing time. The effectiveness of the proposed hybrid technique is verified on two real public electricity supply systems with 13 and 40 generator units, respectively. The simulation results obtained with the HGA for the two real systems are very encouraging with regard to the computational expense and the cost reduction of power generation.
Keywords: genetic algorithms, economic dispatch, pattern search
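Two ingredients of the approach can be sketched under stated assumptions: the usual valve-point fuel-cost model, F(P) = a + bP + cP² + |e·sin(f·(Pmin − P))|, whose rippled surface is what defeats gradient methods, and a one-variable pattern search of the kind used to fine-tune the GA's best schedule. The coefficients and the single-unit setting are illustrative, not the 13- or 40-unit test systems.

```python
# Valve-point fuel cost and a one-variable pattern-search polish.
# Coefficients are illustrative assumptions, not the paper's test data.
import math

def fuel_cost(p, a=100.0, b=8.0, c=0.02, e=150.0, f=0.08, p_min=10.0):
    return a + b * p + c * p * p + abs(e * math.sin(f * (p_min - p)))

def pattern_search(p0, p_min=10.0, p_max=125.0, step=8.0, tol=1e-4):
    p, best = p0, fuel_cost(p0)
    while step > tol:
        improved = False
        for cand in (p - step, p + step):        # poll both directions
            if p_min <= cand <= p_max and fuel_cost(cand) < best:
                p, best, improved = cand, fuel_cost(cand), True
                break
        if not improved:
            step *= 0.5                           # shrink the pattern
    return p, best

# Pretend the GA's best individual put this unit at 60 MW; polish it:
p_star, cost = pattern_search(60.0)
print(f"refined output {p_star:.2f} MW, cost {cost:.2f} $/h")
```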
Procedia PDF Downloads 443
1666 Research of Stalled Operational Modes of Axial-Flow Compressor for Diagnostics of Pre-Surge State
Authors: F. Mohammadsadeghi
Abstract:
Relevance of research: Axial compressors are used both in aircraft engines and in ground-based gas turbine engines. The compressor is considered one of the main gas turbine engine units, defining the absolute and relative performance of the engine in general. Failure of the compressor often leads to drastic consequences; therefore, safe (stable) operation must be maintained when using an axial compressor. Currently, we can observe a tendency towards increased unit power, productivity, circumferential velocity, and compression ratio of axial compressors in gas turbine engines of aircraft and ground-based application, whereas the metal consumption of their structures tends to fall. This increases dynamic loads as well as the danger of damage to highly loaded compressor or engine structural elements due to transient processes. In the operating practice of aeronautical engineering and of ground units with gas turbine drives, loss of operational stability of gas turbine engines is one of the relatively frequent causes of failure and can lead to emergency situations. Surge occurrence is considered an absolute loss of stability; it is one of the most dangerous and most frequently occurring types of instability. However detailed the research on this phenomenon has been, the development of measures for before-the-fact surge prevention remains relevant. This is why the research of transient processes in axial compressors is necessary in order to provide efficient, stable, and secure operation. The paper addresses the problem of improving the automatic control system by integrating anti-surge algorithms for the axial compressor of an aircraft gas turbine engine. The paper considers the dynamic exhaustion of the gas-dynamic stability of a compressor stage, the results of numerical simulation of the airflow over the airfoil at design and stall modes, and experimental research to form the criteria that identify the compressor state when detecting the pre-surge mode. The authors formulate basic ways of developing surge-prevention systems, i.e., algorithms that allow detecting surge origination and systems that implement the proposed algorithms.
Keywords: axial compressor, rotating stall, surge, unstable operation of gas turbine engine
Procedia PDF Downloads 408
1665 Deep Routing Strategy: Deep Learning based Intelligent Routing in Software Defined Internet of Things
Authors: Zabeehullah, Fahim Arif, Yawar Abbas
Abstract:
Software Defined Networking (SDN) is a next-generation networking model that simplifies traditional network complexities and improves the utilization of constrained resources. Currently, most SDN-based Internet of Things (IoT) environments use traditional network routing strategies that work on the basis of a maximum or minimum metric value. However, IoT network heterogeneity, dynamic traffic flow, and complexity demand intelligent and self-adaptive routing algorithms, because traditional routing algorithms lack self-adaptation, intelligence, and efficient utilization of resources. To some extent, SDN, due to its flexibility and centralized control, has managed IoT complexity and heterogeneity, but Software Defined IoT (SDIoT) still lacks intelligence. To address this challenge, we propose a model called Deep Routing Strategy (DRS), which uses a deep learning algorithm to perform routing in SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. Results demonstrate that the proposed model achieves high accuracy and a low packet loss rate during path selection, and outperforms the benchmark routing algorithm (OSPF). Moreover, the proposed model provides encouraging results under highly dynamic traffic flow.
Keywords: SDN, IoT, DL, ML, DRS
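As a hedged illustration of the kind of learned path selector the abstract implies, the sketch below scores each candidate path with a small neural network from assumed link features (delay, load, loss) and picks the best. It illustrates the idea only; it is not the DRS architecture or its training setup.

```python
# Tiny learned path scorer: one score per candidate path, controller
# installs the argmax. Features and architecture are assumptions.
import torch
import torch.nn as nn

class PathScorer(nn.Module):
    def __init__(self, n_features: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 16), nn.ReLU(),
            nn.Linear(16, 1),           # one score per candidate path
        )

    def forward(self, paths):           # paths: (n_candidates, n_features)
        return self.net(paths).squeeze(-1)

scorer = PathScorer()
# Three candidate paths, features = [delay_ms, load_fraction, loss_rate]
candidates = torch.tensor([[12.0, 0.30, 0.01],
                           [ 8.0, 0.75, 0.00],
                           [20.0, 0.10, 0.05]])
best = torch.argmax(scorer(candidates)).item()
print("install path", best)             # arbitrary until the net is trained
```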
Procedia PDF Downloads 110
1664 Approaching the Spatial Multi-Objective Land Use Planning Problems at Mountain Areas by a Hybrid Meta-Heuristic Optimization Technique
Authors: Konstantinos Tolidis
Abstract:
The mountains are amongst the most fragile environments in the world. The world's mountain areas cover 24% of the Earth's land surface and are home to 12% of the global population; a further 14% of the global population is estimated to live in their vicinity. As urbanization continues to increase around the world, the mountains are also key centers for recreation and tourism, and their attraction is often heightened by their remarkably high levels of biodiversity. Because the features of mountain areas vary spatially (degree of development, human geography, socio-economic reality, relations of dependency on and interaction with other areas and regions), spatial planning for these areas is a crucial process for preserving the natural, cultural, and human environment and one of the major processes of an integrated spatial policy. This research focuses on the spatial decision problem of land-use allocation optimization, an ordinary planning problem in mountain areas. Such decisions must be made not only on what to do and how much to do, but also on where to do it, which adds a whole extra class of decision variables to the problem once spatial optimization is considered. The utility of optimization as a normative tool for spatial problems is widely recognized. However, it is very difficult for planners to quantify the weights of the objectives, especially when these are related to mountain areas. Furthermore, land-use allocation optimization problems in mountain areas must be addressed by taking into account not only the general development objectives but also the spatial objectives (e.g., compactness, compatibility, and accessibility). Therefore, the main research objective was to approach the land-use allocation problem by utilizing a hybrid meta-heuristic optimization technique tailored to the spatial characteristics of mountain areas. The results indicate that the proposed methodological approach is very promising and useful both for generating land-use alternatives for further consideration in land-use allocation decision-making and for supporting spatial management plans in mountain areas.
Keywords: multiobjective land use allocation, mountain areas, spatial planning, spatial decision making, meta-heuristic methods
Procedia PDF Downloads 345
1663 Evaluation of Monoterpenes Induction in Ugni molinae Ecotypes Subjected to a Red Grape Caterpillar (Lepidoptera: Arctiidae) Herbivory
Authors: Manuel Chacon-Fuentes, Leonardo Bardehle, Marcelo Lizama, Claudio Reyes, Andres Quiroz
Abstract:
The insect-plant interaction is a complex process in which the plant is able to release chemical signals that modify the behavior of insects. Insect herbivory can trigger mechanisms that increase the production of secondary metabolites, allowing the plant to cope with herbivores. Monoterpenes are a kind of secondary metabolite involved in direct defense, acting as repellents of herbivores, or even in indirect defense, acting as attractants for insect predators. In addition, an increase in monoterpene concentration is an effect commonly associated with herbivory; hence, plants subjected to herbivory damage increase monoterpene production in comparison with undamaged plants. In this framework, co-evolutionary aspects play a fundamental role in the adaptation of herbivores to their host and in the counter-adaptive strategies of plants to avoid them. In this context, Ugni molinae 'murtilla' is a native shrub from Chile characterized by its antioxidant activity, mainly related to the phenolic compounds present in its fruits. The larval stage of the red grape caterpillar Chilesia rudis Butler (Lepidoptera: Arctiidae) has been reported as an important defoliator of U. molinae. This insect is native to Chile and has probably been involved in a co-evolutionary process with murtilla. Therefore, we hypothesized that herbivory by the red grape caterpillar increases the emission of monoterpenes in Ugni molinae. Ecotypes 19-1 and 22-1 of murtilla were established and maintained at 25 °C in the Laboratorio de Química Ecológica at Universidad de La Frontera. Red grape caterpillars of ~40 mm were collected from grasses near Temuco (Chile) and deprived of food for 24 h before the assays. Ten caterpillars were placed on the foliage of ecotypes 19-1 and 22-1 and allowed to feed for 48 h. After this time, the caterpillars were removed from the ecotypes and monoterpenes were collected. A glass chamber was used to enclose the ecotypes, and a Porapak-Q column was used to trap the monoterpenes. After 24 h of capture, the columns were desorbed with hexane. The samples were then injected into a gas chromatograph coupled to a mass spectrometer, and monoterpenes were determined according to the NIST library. All experiments were performed in triplicate. The results showed that α-pinene, β-phellandrene, limonene, and 1,8-cineole were the main monoterpenes released by the murtilla ecotypes. For ecotype 19-1, the abundance of α-pinene was significantly higher in plants subjected to herbivory (100%) than in control plants (54.58%); moreover, β-phellandrene and 1,8-cineole were observed only in control plants. For ecotype 22-1, there was no significant difference in monoterpene abundance. In conclusion, the results suggest a trade-off of β-phellandrene and 1,8-cineole in response to herbivory damage by the red grape caterpillar, generating an increase in α-pinene abundance.
Keywords: Chilesia rudis, gas chromatography, monoterpenes, Ugni molinae
Procedia PDF Downloads 149
1662 Centralizing the Teaching Process in Intelligent Tutoring System Architectures
Authors: Nikolaj Troels Graf Von Malotky, Robin Nicolay, Alke Martens
Abstract:
A plethora of architectures exists for ITSs (Intelligent Tutoring Systems). A thorough analysis and comparison of these architectures revealed that in most cases the architecture extensions have grown evolutionarily, reflecting the state-of-the-art trends of each decade. However, from the perspective of software engineering, the main aspect of an ITS has not yet been reflected in any of these architectures. From the perspective of cognitive research, the construction of the teaching process is what makes an ITS 'intelligent' with regard to the spectrum of interaction with the students. Thus, in our approach, we focus on a behavior-based architecture built around the main teaching processes. To create a new general architecture for ITSs, we have to define the prerequisites. This paper analyzes the current state of the existing architectures, derives rules for the behavior of ITSs, and presents a teaching process for ITSs to be used together with the architecture.
Keywords: intelligent tutoring, ITS, tutoring process, system architecture, interaction process
Procedia PDF Downloads 380
1661 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of the seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used to forecast ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground-motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground-motion prediction, such as artificial neural networks, random forests, and support vector machines. The results indicate that these algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with random forests in particular outperforming the other algorithms; however, the conventional method is the better tool when only limited data is available. Second, it is investigated how machine learning techniques can benefit the development of probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground-motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time than conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
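A hedged sketch of the first step: a random forest ground-motion model mapping (magnitude, distance, Vs30) to ln(PGA), trained here on synthetic data from an assumed attenuation-like relation rather than a real catalog.

```python
# Random forest as a data-driven ground-motion model. The synthetic
# functional form used to generate training data is an illustrative
# assumption, not a published model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 2000
mag = rng.uniform(4.0, 8.0, n)            # moment magnitude
dist = rng.uniform(1.0, 200.0, n)         # source-to-site distance, km
vs30 = rng.uniform(180.0, 760.0, n)       # site stiffness proxy, m/s

# Toy relation with magnitude scaling and distance attenuation:
ln_pga = (0.9 * mag - 1.4 * np.log(dist + 10.0)
          - 0.3 * np.log(vs30 / 400.0) + rng.normal(0.0, 0.5, n))

X = np.column_stack([mag, dist, vs30])
gmm = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ln_pga)

# Predict median ln(PGA) for an M6.5 event at 30 km on a stiff-soil site:
print(gmm.predict([[6.5, 30.0, 400.0]]))
```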
Procedia PDF Downloads 103