Search results for: disturbance tracking algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4610

710 Blue Economy and Marine Mining

Authors: Fani Sakellariadou

Abstract:

The Blue Economy includes all marine-based and marine-related activities. They correspond to established, emerging, and yet-to-emerge ocean-based industries. Seabed mining is an emerging marine-based activity; its operations depend particularly on cutting-edge science and technology. The 21st century will face a resource crisis as a consequence of the world's population growth and the rising standard of living. The natural capital stored in the global ocean is decisive for its capacity to provide a wide range of sustainable ecosystem services. Seabed mineral deposits have been identified as having a high potential for critical elements and base metals, which play a crucial role in the fast evolution of green technologies. The major categories of marine mineral deposits are deep-sea deposits, including cobalt-rich ferromanganese crusts, polymetallic nodules, phosphorites, and deep-sea muds, as well as shallow-water deposits, including marine placers. Seabed mining operations may take place within the continental shelf areas of nation-states. In international waters, the International Seabed Authority (ISA) has entered into 15-year contracts for deep-seabed exploration with 21 contractors, covering polymetallic nodules (18 contracts), polymetallic sulfides (7 contracts), and cobalt-rich ferromanganese crusts (5 contracts); some contractors hold more than one contract. Exploration areas are located in the Clarion-Clipperton Zone, the Indian Ocean, the Mid-Atlantic Ridge, the South Atlantic Ocean, and the Pacific Ocean. Potential environmental impacts of deep-sea mining include habitat alteration, sediment disturbance, plume discharge, release of toxic compounds, light and noise generation, and air emissions. These could cause burial and smothering of benthic species, health problems for marine species, biodiversity loss, reduced photosynthesis, behavioral change and masking of acoustic communication for mammals and fish, bioaccumulation of heavy metals up the food web, a decrease in dissolved oxygen content, and climate change. An important concern related to deep-sea mining is our knowledge gap regarding deep-sea bio-communities: the ecological consequences for the remote, unique, fragile, and little-understood deep-sea ecosystems and their inhabitants are still largely unknown. The blue economy conceptualizes oceans as developing spaces supplying socio-economic benefits for current and future generations while also protecting, supporting, and restoring biodiversity and ecological productivity. In that sense, holistic management should be applied, and marine mining impacts on ecosystem services should be assessed across the categories of provisioning, regulating, supporting, and cultural services. The variety in environmental parameters, the range in sea depth, the diversity in the characteristics of marine species, and the possible proximity to other existing maritime industries mean that marine mining impacts on the ability of ecosystems to support people and nature span a wide range. In conclusion, the use of the untapped potential of the global ocean demands a responsible and sustainable attitude. Moreover, there is a need to change our lifestyle and move beyond the philosophy of single-use. Living in a throw-away society based on a linear approach to resource consumption, humans are putting too much pressure on the natural environment. Applying modern, sustainable, and eco-friendly approaches according to the principles of the circular economy would achieve substantial savings in natural resources.
Acknowledgement: This work is part of the MAREE project, financially supported by the Division VI of IUPAC. This work has been partly supported by the University of Piraeus Research Center.

Keywords: blue economy, deep-sea mining, ecosystem services, environmental impacts

Procedia PDF Downloads 83
709 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution

Authors: Najrullah Khan, Athar Ali Khan

Abstract:

The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone generalized exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS codes have been provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data. Emphasis is placed on the modeling of data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete or 'censored' data encountered. Analytic approximation and simulation tools are covered here, but most of the emphasis is on Markov chain Monte Carlo (MCMC) methods, including the independence Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among the various optimization algorithms considered, the trust-region method is found to be the best. In this paper, the TPGE model is also used to analyze the lifetime data in the Bayesian paradigm. Results are evaluated on the above-mentioned real survival data set. The analytic approximation and simulation methods are implemented using several software packages. It is clear from our findings that simulation tools provide better results than those obtained by asymptotic approximation.
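
To make the simulation side concrete, the sketch below implements an independence Metropolis sampler for a toy right-censored exponential survival model in Python. It is an illustrative stand-in for the authors' R/JAGS code: the TPGE likelihood is replaced by a simple exponential one, and the gamma proposal and exp(1) prior are assumptions made for the example.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)

def log_post(rate, t, censored):
    # toy exponential survival model: log-density for observed events,
    # log-survival for right-censored times, plus an exp(1) log-prior
    if rate <= 0:
        return -np.inf
    loglik = np.sum(np.where(censored, -rate * t, np.log(rate) - rate * t))
    return loglik - rate

t = rng.exponential(2.0, size=50)          # synthetic survival times
censored = rng.random(50) < 0.2            # ~20% right-censored

a_q, scale_q = 2.0, 0.5                    # fixed gamma proposal q(.)
rate, lp = 1.0, log_post(1.0, t, censored)
draws = []
for _ in range(5000):
    prop = rng.gamma(a_q, scale_q)
    lp_prop = log_post(prop, t, censored)
    # Hastings ratio for a proposal independent of the current state
    log_alpha = (lp_prop - lp) + (gamma.logpdf(rate, a_q, scale=scale_q)
                                  - gamma.logpdf(prop, a_q, scale=scale_q))
    if np.log(rng.random()) < log_alpha:
        rate, lp = prop, lp_prop
    draws.append(rate)

print("posterior mean rate:", np.mean(draws[1000:]))
```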

Keywords: Bayesian Inference, JAGS, Laplace Approximation, LaplacesDemon, posterior, R Software, simulation

Procedia PDF Downloads 535
708 Numerical Investigation of Beam-Columns Subjected to Non-Proportional Loadings under Ambient Temperature Conditions

Authors: George Adomako Kumi

Abstract:

The response of structural members, when subjected to various forms of non-proportional loading, plays a major role in the overall stability and integrity of a structure. This research presents the outcome of a finite element investigation conducted with the finite element software ABAQUS to validate the experimental results on the elastic and inelastic behavior and strength of beam-columns subjected to axial loading, biaxial bending, and torsion under ambient temperature conditions. The ABAQUS finite element model accounts for material nonlinearity, geometric nonlinearity, large deformations, and, more specifically, the contact behavior between the beam-columns and the support surfaces. Comparisons of the three-dimensional model with the results of actual tests and with results from a solution algorithm developed using the finite difference method are established in order to verify the validity of the developed model. The results of this research will provide structural engineers with much-needed knowledge about the behavior of steel beam-columns and their response to various non-proportional loading conditions under ambient temperature conditions.

Keywords: beam-columns, axial loading, biaxial bending, torsion, ABAQUS, finite difference method

Procedia PDF Downloads 180
707 Numerical Analysis of a Pilot Solar Chimney Power Plant

Authors: Ehsan Gholamalizadeh, Jae Dong Chung

Abstract:

Solar chimney power plant is a feasible solar thermal system which produces electricity from the Sun. The objective of this study is to investigate buoyancy-driven flow and heat transfer through a built pilot solar chimney system called 'Kerman Project'. The system has a chimney with the height and diameter of 60 m and 3 m, respectively, and the average radius of its solar collector is about 20 m, and also its average collector height is about 2 m. A three-dimensional simulation was conducted to analyze the system, using computational fluid dynamics (CFD). In this model, radiative transfer equation was solved using the discrete ordinates (DO) radiation model taking into account a non-gray radiation behavior. In order to modelling solar irradiation from the sun’s rays, the solar ray tracing algorithm was coupled to the computation via a source term in the energy equation. The model was validated with comparing to the experimental data of the Manzanares prototype and also the performance of the built pilot system. Then, based on the numerical simulations, velocity and temperature distributions through the system, the temperature profile of the ground surface and the system performance were presented. The analysis accurately shows the flow and heat transfer characteristics through the pilot system and predicts its performance.
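
For orientation, the sketch below evaluates the standard first-order estimate of solar chimney output, P ≈ η_coll·η_turb·(gH/(c_p·T₀))·G·A_coll, with the Kerman pilot geometry quoted above. This is textbook solar chimney theory, not the paper's CFD model, and the efficiencies, irradiance, and ambient temperature are assumed values for illustration only.

```python
import math

g, cp = 9.81, 1005.0          # gravity (m/s^2), air heat capacity (J/kg.K)
H = 60.0                      # chimney height (m), from the abstract
r_coll = 20.0                 # average collector radius (m), from the abstract
A_coll = math.pi * r_coll**2  # collector area (m^2)

G = 800.0                     # assumed solar irradiance (W/m^2)
T0 = 293.0                    # assumed ambient temperature (K)
eta_coll, eta_turb = 0.5, 0.5 # assumed collector/turbine efficiencies

eta_chimney = g * H / (cp * T0)       # ideal chimney conversion efficiency
P = eta_coll * eta_turb * eta_chimney * G * A_coll
print(f"chimney efficiency ~{eta_chimney:.4%}, power ~{P:.0f} W")
```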

Keywords: buoyancy-driven flow, computational fluid dynamics, heat transfer, renewable energy, solar chimney power plant

Procedia PDF Downloads 262
706 Fault Detection and Isolation in Sensors and Actuators of Wind Turbines

Authors: Shahrokh Barati, Reza Ramezani

Abstract:

Due to countries' growing attention to renewable energy production, the demand for renewable energy has gone up; among renewable energy sources, wind energy has shown the fastest growth in recent years. In this regard, in order to increase the availability of wind turbines, the use of a Fault Detection and Isolation (FDI) system is necessary. Wind turbines are subject to various faults, such as sensor faults, actuator faults, network connection faults, mechanical faults, and faults in the generator subsystem. Although sensors and actuators account for a large number of wind turbine faults, they have been discussed less in the literature. Therefore, in this work, we focus our attention on designing a sensor and actuator fault detection and isolation algorithm and a fault-tolerant control system (FTCS) for wind turbines. The aim of this research is to propose a comprehensive fault detection and isolation system for the sensors and actuators of a wind turbine based on data-driven approaches. To achieve this goal, features of the measurable signals of a real wind turbine are extracted under all operating conditions. The next step is feature selection among the extracted features. Features that lead to maximum class separation are selected and fed to classifier networks implemented in parallel, and the results of the classifiers are fused together. In order to maximize the reliability of the fault decision, the property of fault repeatability is used.
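
A minimal sketch of the selection-plus-fusion stage is given below; the univariate F-score ranking, the particular classifiers, and the synthetic data are assumed details for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# stand-in for features extracted from turbine sensor/actuator signals
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=0)

clf = make_pipeline(
    SelectKBest(f_classif, k=8),  # keep features with best class separation
    VotingClassifier([("svm", SVC()),
                      ("knn", KNeighborsClassifier()),
                      ("tree", DecisionTreeClassifier(random_state=0))],
                     voting="hard"))  # majority-vote fusion of parallel classifiers

print("fault/no-fault accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```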

Keywords: FDI, wind turbines, sensors and actuators faults, renewable energy

Procedia PDF Downloads 400
705 Deep Reinforcement Learning Model for Autonomous Driving

Authors: Boumaraf Malak

Abstract:

The development of intelligent transportation systems (ITS) and artificial intelligence (AI) is spurring us to pave the way for the widespread adoption of autonomous vehicles (AVs). This opens up opportunities for smart roads, smart traffic safety, and mobility comfort. A highly intelligent decision-making system is essential for autonomous driving around dense, dynamic objects. It must be able to handle complex road geometry and topology, as well as complex multi-agent interactions, and closely follow higher-level commands such as routing information. Autonomous vehicles have become a very hot research topic in recent years due to their significant potential to reduce traffic accidents and personal injuries. New artificial intelligence-based technologies handle important functions in scene understanding, motion planning, decision making, vehicle control, social behavior, and communication for AVs. This paper focuses only on deep reinforcement learning-based methods; it does not include traditional (flat) planning techniques, which have been the subject of extensive research in the past. Reinforcement learning (RL) has become a powerful learning framework, now capable of learning complex policies in high-dimensional environments. The DRL algorithms used so far have found solutions to the four main problems of autonomous driving; in our paper, we highlight the challenges and point to possible future research directions.
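
To ground the discussion, the sketch below shows a generic deep Q-learning update in PyTorch, one of the two algorithm families named in the keywords. The observation size, network, and random replay batch are minimal assumed stand-ins, not any surveyed driving system.

```python
import torch
import torch.nn as nn

n_obs, n_actions = 16, 5  # assumed observation size / discrete maneuvers
q_net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

# one gradient step on a random stand-in replay mini-batch
s = torch.randn(32, n_obs); a = torch.randint(0, n_actions, (32,))
r = torch.randn(32); s2 = torch.randn(32, n_obs); done = torch.zeros(32)

q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a)
with torch.no_grad():
    # Bellman target: r + gamma * max_a' Q_target(s', a')
    target = r + gamma * (1 - done) * target_net(s2).max(dim=1).values
loss = nn.functional.mse_loss(q_sa, target)
opt.zero_grad(); loss.backward(); opt.step()
print("TD loss:", loss.item())
```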

Keywords: deep reinforcement learning, autonomous driving, deep deterministic policy gradient, deep Q-learning

Procedia PDF Downloads 85
704 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold, because unobservable traits, namely latent variables, which are indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach to deal with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is developed to perform statistical inference. An application of the proposed joint model to a study on the Alzheimer's Disease Neuroimaging Initiative is presented.

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 144
703 Teaching Tools for Web Processing Services

Authors: Rashid Javed, Hardy Lehmkuehler, Franz Josef-Behr

Abstract:

Web Processing Services (WPS) have attracted growing attention in geoinformation research. However, teaching about them is difficult because of the generally complex circumstances of their use, which limit the possibilities for hands-on exercises on Web Processing Services. To support understanding, a Training Tools Collection was set up at the University of Applied Sciences Stuttgart (HFT). It is limited to the scope of geostatistical interpolation of sample point data, where different algorithms can be used, such as IDW, Nearest Neighbor, etc. The Tools Collection aims to support understanding of the scope, definition, and deployment of Web Processing Services. For example, it is necessary to characterize the input of an interpolation by the data set, the parameters of the algorithm, and the interpolation results (here, a grid of interpolated values is assumed). This paper reports on first experiences using a pilot installation. This was intended to find suitable software interfaces for later full implementations and to draw conclusions on potential user interface characteristics. Experiences were gained with the Deegree software, one of several service suites. Being strictly programmed in Java, Deegree offers several OGC-compliant service implementations that also promise to be of benefit for the project. The mentioned parameters for a WPS were formalized following the paradigm that any meaningful component is defined in terms of suitable standards; e.g., the data output can be defined as a GML file. However, the choice of meaningful information pieces and user interactions is not free but partially determined by the selected WPS processing suite.
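
As a concrete reference for the interpolation being taught, here is a minimal inverse distance weighting (IDW) sketch in Python. It is a library-independent illustration of the algorithm, not code from the Deegree-based tools themselves, and the sample points are invented.

```python
import numpy as np

def idw_grid(xy, values, grid_x, grid_y, power=2.0, eps=1e-12):
    """Interpolate scattered (x, y, value) samples onto a regular grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    # distances from every grid node to every sample point
    d = np.hypot(gx[..., None] - xy[:, 0], gy[..., None] - xy[:, 1])
    w = 1.0 / (d + eps) ** power        # closer samples weigh more
    return (w * values).sum(-1) / w.sum(-1)

rng = np.random.default_rng(1)
samples = rng.uniform(0, 100, size=(25, 2))          # sample point coordinates
vals = np.sin(samples[:, 0] / 20) + samples[:, 1] / 50

grid = idw_grid(samples, vals, np.linspace(0, 100, 50), np.linspace(0, 100, 50))
print(grid.shape)  # (50, 50) grid of interpolated values, as in the abstract
```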

Keywords: deegree, interpolation, IDW, web processing service (WPS)

Procedia PDF Downloads 355
702 Multi-Objective Evolutionary Computation Based Feature Selection Applied to Behaviour Assessment of Children

Authors: F. Jiménez, R. Jódar, M. Martín, G. Sánchez, G. Sciavicco

Abstract:

Attribute or feature selection is one of the basic strategies to improve the performance of data classification tasks and, at the same time, to reduce the complexity of classifiers, and it is a particularly fundamental one when the number of attributes is relatively high. Its application to unsupervised classification is restricted to a limited number of experiments in the literature. Evolutionary computation has already proven itself to be a very effective choice to consistently reduce the number of attributes towards a better classification rate and a simpler semantic interpretation of the inferred classifiers. We present a feature selection wrapper model composed of a multi-objective evolutionary algorithm, the clustering method Expectation-Maximization (EM), and the classifier C4.5 for the unsupervised classification of data extracted from a psychological test named BASC-II (Behavior Assessment System for Children - II ed.), with two objectives: maximizing the likelihood of the clustering model and maximizing the accuracy of the obtained classifier. We present a methodology to integrate feature selection for unsupervised classification, model evaluation, decision making (to choose the most satisfactory model according to an a posteriori process in a multi-objective context), and testing. We compare the performance of the classifiers obtained by the multi-objective evolutionary algorithms ENORA and NSGA-II, and the best solution is then validated by the psychologists who collected the data.
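
The sketch below spells out the two wrapper objectives for one candidate feature mask, clustering log-likelihood and classifier accuracy, using a Gaussian mixture for EM and a decision tree as a C4.5 stand-in on synthetic data; these substitutions are assumptions, and the evolutionary loop that evolves the masks (ENORA/NSGA-II) is omitted.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, _ = make_blobs(n_samples=400, n_features=12, centers=3, random_state=0)

def objectives(mask):
    Xs = X[:, mask]                       # candidate feature subset
    gm = GaussianMixture(n_components=3, random_state=0).fit(Xs)
    labels = gm.predict(Xs)
    loglik = gm.score(Xs)                 # objective 1: clustering fit
    acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                          Xs, labels, cv=5).mean()  # objective 2: classifier accuracy
    return loglik, acc

mask = np.zeros(12, dtype=bool); mask[:4] = True   # one evolutionary individual
print(objectives(mask))  # the EA evolves such masks toward the Pareto front
```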

Keywords: evolutionary computation, feature selection, classification, clustering

Procedia PDF Downloads 370
701 Using Deep Learning Real-Time Object Detection Convolution Neural Networks for Fast Fruit Recognition in the Tree

Authors: K. Bresilla, L. Manfrini, B. Morandi, A. Boini, G. Perulli, L. C. Grappadelli

Abstract:

Image and video processing for fruit detection in the tree using hard-coded feature extraction algorithms has shown high accuracy in recent years. While accurate, these approaches, even with high-end hardware, are computationally intensive and too slow for real-time systems. This paper details the use of deep convolutional neural networks (CNNs), specifically the YOLO (You Only Look Once) algorithm with 24+2 convolution layers. Using deep learning techniques eliminated the need to hard-code specific features for particular fruit shapes, colors, and/or other attributes. The CNN was trained on more than 5000 images of apple and pear fruits on a 960-core GPU (graphics processing unit). The testing set showed an accuracy of 90%. After this, the trained model was transferred to an embedded device (Raspberry Pi 3) with a camera for greater portability. Based on the correlation between the number of visible or detected fruits in one frame and the real number of fruits on one tree, a model was created to accommodate this error rate. The processing and detection speed of the whole platform was higher than 40 frames per second. This speed is fast enough for any grasping/harvesting robotic arm or other real-time applications.
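
The correction model can be illustrated in a few lines; a linear fit is an assumed form for it, and the counts below are invented placeholders.

```python
import numpy as np

detected = np.array([34, 41, 28, 52, 47, 39, 61, 30])  # detector counts per frame
actual   = np.array([52, 63, 45, 80, 70, 60, 95, 48])  # hand-counted per tree

# least-squares fit: actual ~ a * detected + b, absorbing occluded fruit
a, b = np.polyfit(detected, actual, deg=1)
print(f"actual ≈ {a:.2f} * detected + {b:.1f}")
print("corrected estimate for 45 detected fruits:", round(a * 45 + b))
```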

Keywords: artificial intelligence, computer vision, deep learning, fruit recognition, harvesting robot, precision agriculture

Procedia PDF Downloads 420
700 The Effects of Adding Vibrotactile Feedback to Upper Limb Performance during Dual-Tasking and Response to Misleading Visual Feedback

Authors: Sigal Portnoy, Jason Friedman, Eitan Raveh

Abstract:

Introduction: Sensory substitution is possible due to the capacity of our brain to adapt to information transmitted by a synthetic receptor via an alternative sensory system. Practical sensory substitution systems are being developed in order to increase the functionality of individuals with sensory loss, e.g., amputees. For upper limb prosthesis users, the loss of tactile feedback compels them to allocate visual attention to their prosthesis. The effect of adding vibrotactile feedback (VTF) to the applied force has been studied; however, its effect on the allocation of visual attention during dual-tasking and on the response to misleading visual feedback has not been studied. We hypothesized that VTF would improve performance and reduce visual attention during dual-task assignments in healthy individuals using a robotic hand, and improve performance in a standardized functional test despite the presence of misleading visual feedback. Methods: For the dual-task paradigm, twenty healthy subjects were instructed to toggle two keyboard arrow keys with the left hand to keep a moving virtual car on a road shown on a screen. During the game, instructions for various activities, e.g., mix the sugar in the glass with a spoon, appeared on the screen. The subject performed these tasks with a robotic hand attached to the right hand. The robotic hand was controlled by the activity of the flexors and extensors of the right wrist, recorded using surface EMG electrodes. Pressure sensors were attached at the tips of the robotic hand and induced VTF via vibrotactile actuators attached to the right arm of the subject. An eye-tracking system tracked the visual attention of the subject during the trials. The trials were repeated twice, with and without the VTF. Additionally, the subjects performed the modified Box and Blocks test, hidden from eyesight, in a motion laboratory. A virtual presentation with misleading visual feedback was shown on a screen so that, twice during the trial, the virtual block fell while the physical block was still held by the subject. Results: This is an ongoing study, whose current results are detailed below. We are continuing these trials with transradial myoelectric prosthesis users. In the healthy group, the VTF did not reduce visual attention or improve performance during dual-tasking for transfer-to-target tasks, e.g., placing the eraser on the shelf. An improvement was observed for other tasks. For example, the average±standard deviation of time to complete the sugar-mixing task was 13.7±17.2 s and 19.3±9.1 s with and without the VTF, respectively. Also, the number of gaze shifts from the screen to the hand during this task was 15.5±23.7 and 20.0±11.6 with and without the VTF, respectively. The response of the subjects to the misleading visual feedback did not differ between the two conditions, i.e., with and without VTF. Conclusions: Our interim results suggest that the performance of certain activities of daily living may be improved by VTF. The substitution of visual sensory input by tactile feedback might require a long training period so that brain plasticity can occur and allow adaptation to the new condition.

Keywords: prosthetics, rehabilitation, sensory substitution, upper limb amputation

Procedia PDF Downloads 341
699 Segmentation of the Liver and Spleen From Abdominal CT Images Using Watershed Approach

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

The segmentation phase is an important step in the processing and interpretation of medical images. In this paper, we focus on the segmentation of the liver and spleen from abdominal computed tomography (CT) images. The importance of our study comes from the fact that segmenting regions of interest (ROIs) from CT images is usually a difficult task: the gray level of the ROI is similar to that of other organs, and the ROIs are connected to the ribs, heart, kidneys, etc. Our proposed method is based on anatomical information and on the mathematical morphology tools used in the image processing field. At first, we remove the surrounding connected organs and tissues by applying morphological filters. This first step makes the extraction of the regions of interest easier. The second step consists of improving the quality of the image gradient. In this step, we propose a method for improving the image gradient, reducing its deficiencies by applying spatial filters followed by morphological filters. Thereafter, we proceed to the segmentation of the liver and spleen. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with a manual segmentation performed by an expert. The experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented liver and spleen contours and the contours manually traced by radiological experts.
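
A condensed sketch of this pipeline is given below using scikit-image stand-ins for the authors' filters; the test image and the percentile-based marker thresholds are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from skimage import data, filters, morphology, segmentation

ct = data.camera().astype(float)  # stand-in for one abdominal CT slice

# 1) morphological filtering to suppress surrounding structures
smooth = morphology.closing(morphology.opening(ct, morphology.disk(3)),
                            morphology.disk(3))

# 2) improved gradient: spatial (Gaussian) filter, then Sobel gradient
grad = filters.sobel(filters.gaussian(smooth, sigma=2))

# 3) markers from intensity extremes, then marker-controlled watershed
markers = np.zeros_like(ct, dtype=int)
markers[smooth < np.percentile(smooth, 10)] = 1   # background seed
markers[smooth > np.percentile(smooth, 90)] = 2   # organ seed
labels = segmentation.watershed(grad, markers)
print(np.unique(labels))  # organ vs. background regions
```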

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 495
698 Using Machine Learning to Classify Human Fetal Health and Analyze Feature Importance

Authors: Yash Bingi, Yiqiao Yin

Abstract:

Reduction of child mortality is an ongoing struggle and a commonly used factor in determining progress in the medical field. The under-5 mortality count is around 5 million worldwide, with many of the deaths being preventable. In light of this issue, cardiotocograms (CTGs) have emerged as a leading tool to determine fetal health. By using ultrasound pulses and reading the responses, CTGs help healthcare professionals assess the overall health of the fetus and determine the risk of child mortality. However, interpreting the results of CTGs is time-consuming and inefficient, especially in underdeveloped areas where an expert obstetrician is hard to come by. Using a support vector machine (SVM) and oversampling, this paper proposes a model that classifies fetal health with an accuracy of 99.59%. To further explain the CTG measurements, an algorithm based on Randomized Input Sampling for Explanation (RISE) of black-box models was created, called Feature Alteration for explanation of Black Box Models (FAB), and its findings were compared to Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). This allows doctors and medical professionals to classify fetal health with high accuracy and determine which features were most influential in the process.
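
The classification stage can be sketched compactly. SMOTE is assumed here as the oversampler and the CTG features are simulated, so this only mirrors the shape of the pipeline, not the paper's exact setup or its reported accuracy.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# imbalanced stand-in for the 3-class CTG data (normal/suspect/pathological)
X, y = make_classification(n_samples=2000, n_features=21, n_informative=10,
                           n_classes=3, n_clusters_per_class=1,
                           weights=[0.78, 0.14, 0.08], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample minorities
clf = SVC().fit(X_bal, y_bal)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```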

Keywords: machine learning, fetal health, gradient boosting, support vector machine, Shapley values, local interpretable model agnostic explanations

Procedia PDF Downloads 144
697 Multimodal Content: Fostering Students’ Language and Communication Competences

Authors: Victoria L. Malakhova

Abstract:

The research is devoted to multimodal content and its effectiveness in developing students' linguistic and intercultural communicative competences as an indispensable constituent of their future professional activity. The description of multimodal content as both a linguistic and a didactic phenomenon makes the study relevant. The objective of the article is the analysis of creolized texts and the effect they have on fostering higher education students' skills and productivity. The main methods used are linguistic text analysis, qualitative and quantitative methods, deduction, and generalization. The author studies texts with full and partial creolization, their features, and their role in composing multimodal textual space. The main verbal and non-verbal markers and paralinguistic means that enhance the linguo-pragmatic potential of creolized texts are covered. To reveal the efficiency of applying multimodal content in English teaching, the author conducts an experiment among both undergraduate students and teachers. This allows specifying the main functions of creolized texts in the process of language learning, detecting ways of enhancing students' competences, and increasing their motivation. The described stages of using creolized texts can serve as an algorithm for working with multimodal content in teaching English as a foreign language. The findings contribute to improving the efficiency of the academic process.

Keywords: creolized text, English language learning, higher education, language and communication competences, multimodal content

Procedia PDF Downloads 112
696 Exploring Public Opinions Toward the Use of Generative Artificial Intelligence Chatbot in Higher Education: An Insight from Topic Modelling and Sentiment Analysis

Authors: Samer Muthana Sarsam, Abdul Samad Shibghatullah, Chit Su Mon, Abd Aziz Alias, Hosam Al-Samarraie

Abstract:

Generative Artificial Intelligence chatbots (GAI chatbots) have emerged as promising tools in various domains, including higher education. However, their specific role within the educational context and the level of legal support for their implementation remain unclear. Therefore, this study aims to investigate the role of Bard, a newly developed GAI chatbot, in higher education. To achieve this objective, English tweets were collected from Twitter's free streaming Application Programming Interface (API). The Latent Dirichlet Allocation (LDA) algorithm was applied to extract latent topics from the collected tweets. User sentiments, including disgust, surprise, sadness, anger, fear, joy, anticipation, and trust, as well as positive and negative sentiments, were extracted using the NRC Affect Intensity Lexicon and SentiStrength tools. This study explored the benefits, challenges, and future implications of integrating GAI chatbots in higher education. The findings shed light on the potential power of such tools, exemplified by Bard, in enhancing the learning process and providing support to students throughout their educational journey.
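
A minimal sketch of the topic-extraction step is shown below, using scikit-learn's LDA on three invented placeholder tweets; the study's actual implementation, corpus, and sentiment tools (NRC, SentiStrength) are not reproduced here.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = ["bard helped me outline my thesis chapter",
          "worried students will let chatbots write their exams",
          "our university now allows bard for coding assignments"]

counts = CountVectorizer(stop_words="english").fit(tweets)
X = counts.transform(tweets)                 # bag-of-words counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-4:][::-1]         # strongest words per latent topic
    print(f"topic {k}:", [terms[i] for i in top])
```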

Keywords: generative artificial intelligence chatbots, bard, higher education, topic modelling, sentiment analysis

Procedia PDF Downloads 83
695 Interval Bilevel Linear Fractional Programming

Authors: F. Hamidi, N. Amiri, H. Mishmast Nehi

Abstract:

The bilevel programming (BP) model has been presented for a decision-making process that consists of two decision makers in a hierarchical structure. In fact, BP is a model for a static two-person game (the leader player in the upper level and the follower player in the lower level) wherein each player tries to optimize his/her personal objective function under dependent constraints; this game is sequential and non-cooperative. The decision-making variables are divided between the two players, and one's choice affects the other's benefit and choices. In other words, BP consists of two nested optimization problems with two objective functions (upper and lower), where the constraint region of the upper-level problem is implicitly determined by the lower-level problem. In real cases, the coefficients of an optimization problem may not be precise, i.e., they may take interval values. In this paper, we develop an algorithm for solving interval bilevel linear fractional programming problems, that is to say, bilevel problems in which both objective functions are linear fractional, the coefficients are intervals, and the common constraint region is a polyhedron. From the original problem, the best and the worst bilevel linear fractional problems are derived, and then, using the extended Charnes-Cooper transformation, each fractional problem can be reduced to a linear problem. The best and worst optimal values of the leader's objective function can then be found by two algorithms.
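
The Charnes-Cooper step is worth seeing worked out. The sketch below applies the standard transformation to a single-level toy fractional program and solves the resulting LP with SciPy; the problem data are invented, and the bilevel structure is deliberately left out. With y = t·x and t = 1/(d·x + b), maximizing (c·x + a)/(d·x + b) over Ax ≤ h, x ≥ 0 becomes a linear program in (y, t).

```python
import numpy as np
from scipy.optimize import linprog

c, a = np.array([3.0, 1.0]), 0.0     # fractional objective numerator
d, b = np.array([1.0, 2.0]), 1.0     # fractional objective denominator
A, h = np.array([[1.0, 1.0]]), np.array([4.0])

# variables z = (y1, y2, t); maximize c.y + a*t  ->  minimize the negative
obj = np.concatenate([-c, [-a]])
A_ub = np.hstack([A, -h[:, None]])          # A y - h t <= 0
b_ub = np.zeros(len(h))
A_eq = np.concatenate([d, [b]])[None, :]    # d.y + b t = 1 (normalization)
b_eq = np.array([1.0])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3)
y, t = res.x[:2], res.x[2]
print("optimal x:", y / t, "optimal ratio:", -res.fun)  # x = (4, 0), ratio 2.4
```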

Keywords: best and worst optimal solutions, bilevel programming, fractional, interval coefficients

Procedia PDF Downloads 446
694 Deep Reinforcement Learning Approach for Optimal Control of Industrial Smart Grids

Authors: Niklas Panten, Eberhard Abele

Abstract:

This paper presents a novel approach for real-time and near-optimal control of industrial smart grids by deep reinforcement learning (DRL). To achieve highly energy-efficient factory systems, the energetic linkage of machines, technical building equipment, and the building itself is desirable. However, the increased complexity of the interacting sub-systems, multiple time-variant target values, and stochastic influences from the production environment, weather, and energy markets make it difficult to efficiently control energy production, storage, and consumption in hybrid industrial smart grids. The studied deep reinforcement learning approach makes it possible to explore the solution space for proper control policies that minimize a cost function. The deep neural network of the DRL agent is based on a multilayer perceptron (MLP), Long Short-Term Memory (LSTM), and convolutional layers. The agent is trained within multiple Modelica-based factory simulation environments by the Advantage Actor-Critic algorithm (A2C). The DRL controller is evaluated by means of the simulation and then compared to a conventional, rule-based approach. Finally, the results indicate that the DRL approach is able to improve control performance and significantly reduce the energy and operating costs of industrial smart grids.

Keywords: industrial smart grids, energy efficiency, deep reinforcement learning, optimal control

Procedia PDF Downloads 195
693 Intelligent Recognition of Diabetes Disease via FCM Based Attribute Weighting

Authors: Kemal Polat

Abstract:

In this paper, an attribute weighting method called fuzzy C-means clustering based attribute weighting (FCMAW) has been used for classification of a diabetes disease dataset. The aims of this study are to reduce the variance within the attributes of the diabetes dataset and to improve the classification accuracy of classifier algorithms by transforming non-linearly separable datasets into linearly separable ones. The Pima Indians Diabetes dataset has two classes, comprising normal subjects (500 instances) and diabetes subjects (268 instances). Fuzzy C-means clustering is an improved version of the K-means clustering method and is one of the most used clustering methods in data mining and machine learning applications. In this study, as the first stage, the fuzzy C-means clustering process was used to find the centers of the attributes in the Pima Indians diabetes dataset, and the dataset was then weighted according to the ratios of the attribute means to their centers. Secondly, after the weighting process, classifier algorithms including the support vector machine (SVM) and the k-NN (k-nearest neighbor) classifier were used to classify the weighted Pima Indians diabetes dataset. Experimental results show that the proposed attribute weighting method (FCMAW) obtains very promising results in the classification of the Pima Indians diabetes dataset.
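
One plausible reading of the weighting rule is sketched below with a tiny one-dimensional fuzzy C-means: a center is found per attribute, each attribute is scaled by the ratio of its mean to that center, and the weighted data then go to the SVM/k-NN stage. This is an assumed reconstruction for illustration, not the paper's exact FCMAW procedure, and the data are synthetic.

```python
import numpy as np

def fcm_centers(x, c=2, m=2.0, iters=100, seed=0):
    """1-D fuzzy C-means; returns the c cluster centers."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))          # fuzzy memberships
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)             # weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))                  # membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers

rng = np.random.default_rng(0)
X = rng.normal(5, 2, size=(768, 8))   # stand-in for the 768x8 Pima attributes

weights = np.array([X[:, j].mean() / fcm_centers(X[:, j]).mean()
                    for j in range(X.shape[1])])
X_weighted = X * weights              # fed to SVM / k-NN in the second stage
print(weights.round(3))
```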

Keywords: fuzzy C-means clustering, fuzzy C-means clustering based attribute weighting, Pima Indians diabetes, SVM

Procedia PDF Downloads 413
692 Model Updating Based on Modal Parameters Using Hybrid Pattern Search Technique

Authors: N. Guo, C. Xu, Z. C. Yang

Abstract:

In order to ensure the high reliability of an aircraft, accurate structural dynamics analysis has become an indispensable part of aircraft structural design. Therefore, a structural finite element model that can accurately calculate the structural dynamics and their transfer relations is a prerequisite for structural dynamic design. A dynamic finite element model updating method is presented to correct the uncertain parameters of the finite element model of a structure using measured modal parameters. The coordinate modal assurance criterion is used to evaluate the correlation level at each coordinate between the experimental and the analytical mode shapes. Then, the weighted summation of the natural frequency residual and the coordinate modal assurance criterion residual is used as the objective function. Moreover, the hybrid pattern search (HPS) optimization technique, which synthesizes the advantages of the pattern search (PS) optimization technique and the genetic algorithm (GA), is introduced to solve the dynamic FE model updating problem. A numerical simulation and a model updating experiment for the GARTEUR aircraft model are performed to validate the feasibility and effectiveness of the present dynamic model updating method. The updated results show that the proposed method can successfully modify the incorrect parameters with good robustness.
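
To make the objective concrete, here is a small Python sketch of the weighted residual the search minimizes. The weights, frequencies, and mode shapes are invented placeholders, and the formula uses the standard (global) modal assurance criterion as a stand-in for the paper's coordinate-MAC variant.

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal assurance criterion between two mode-shape vectors."""
    return (phi_a @ phi_e) ** 2 / ((phi_a @ phi_a) * (phi_e @ phi_e))

def objective(f_a, f_e, modes_a, modes_e, w_f=1.0, w_m=1.0):
    freq_res = np.sum(((f_a - f_e) / f_e) ** 2)       # frequency residual
    mac_res = np.sum([1.0 - mac(a, e) for a, e in zip(modes_a, modes_e)])
    return w_f * freq_res + w_m * mac_res             # minimized by HPS search

f_e = np.array([5.5, 16.1, 33.1])       # "measured" frequencies (Hz), invented
f_a = np.array([5.9, 15.2, 34.0])       # FE-model frequencies (Hz), invented
modes_e = [np.random.default_rng(i).normal(size=12) for i in range(3)]
modes_a = [m + 0.05 * np.random.default_rng(9).normal(size=12) for m in modes_e]
print("objective:", objective(f_a, f_e, modes_a, modes_e))
```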

Keywords: model updating, modal parameter, coordinate modal assurance criterion, hybrid genetic/pattern search

Procedia PDF Downloads 161
681 Neuron Efficiency in Fluid Dynamics and Prediction of Groundwater Reservoirs' Properties Using Pattern Recognition

Authors: J. K. Adedeji, S. T. Ijatuyi

Abstract:

A neural network with pattern recognition has been applied in this research to study fluid dynamics and predict the properties of groundwater reservoirs. Conventional manual geophysical survey methods have proven inadequate in basement environments; hence, intelligent computing such as neural network prediction becomes necessary. A non-linear neural network with an 8-bit XOR (exclusive OR) output configuration has been used in this research to predict the nature of groundwater reservoirs and the fluid dynamics of a typical basement crystalline rock. The control variables are the apparent resistivity of the weathered layer (p1), the fractured layer (p2), and the depth (h), while the dependent variable is the flow parameter (F=λ). The algorithm used to train the neural network is back-propagation, coded in C++ with 300 epoch runs. The neural network was able to map out the flow channels and detect how they behave to form viable storage within the strata. The neural network model showed that an important variable, gr (gravitational resistance), can be deduced from the elevation and the apparent resistivity pa. The model results from SPSS showed that the coefficients a, b, and c are statistically significant, with reduced standard error at 5%.
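
The training scheme is easy to sketch. The authors' implementation is in C++, so the Python toy below only illustrates back-propagation on the XOR mapping with an assumed 2-4-1 topology and learning rate; it is not their network.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)            # XOR truth table

W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for epoch in range(300):                              # 300 epochs, as stated
    h = sigmoid(X @ W1)                               # hidden layer
    out = sigmoid(h @ W2)                             # output layer
    # back-propagate the squared-error gradient
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(2).ravel())  # outputs move toward [0, 1, 1, 0] as training proceeds
```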

Keywords: gravitational resistance, neural network, non-linear, pattern recognition

Procedia PDF Downloads 212
690 Data Mining Approach: Classification Model Evaluation

Authors: Lubabatu Sada Sodangi

Abstract:

The rapid growth in the exchange and accessibility of information via the internet makes many organisations acquire data on their own operation. The aim of data mining is to analyse the different behaviours in a dataset using observation. However, the subset of the dataset being analysed may not display all the behaviours and relationships of the entire dataset and, therefore, may not represent other parts that exist in it. There is a range of techniques used in data mining to determine the hidden or unknown information in datasets. In this paper, the performance of two algorithms, Chi-Square Automatic Interaction Detection (CHAID) and the multilayer perceptron (MLP), is compared using the Adult dataset to find out the percentage of adults who earn >50K and those who earn <=50K per year. The two algorithms were studied and compared using the IBM SPSS Statistics software. The result for CHAID shows that the most important predictors are relationship and education. The algorithm shows that those who are married (husband) and hold a Bachelor's, Master's, Doctorate, or Prof-school qualification, with age between 41 and 57, earn >50K. The multilayer perceptron displays marital status and capital gain as the most important predictors of income. It also shows that individuals whose capital gain is less than 6,849 and who are single, separated, or widowed earn <=50K, whereas individuals whose capital gain is greater than 6,849, who work more than 35 hours per week, and who are older than 27 years earn >50K. By comparing the two algorithms, it is observed that both algorithms are reliable, but reliability is stronger in CHAID, which clearly shows that relationship and education contribute to the prediction, as displayed in the data visualisation.

Keywords: data mining, CHAID, multi-layer perceptron, SPSS, Adult dataset

Procedia PDF Downloads 378
689 Optimal Simultaneous Sizing and Siting of DGs and Smart Meters Considering Voltage Profile Improvement in Active Distribution Networks

Authors: T. Sattarpour, D. Nazarpour

Abstract:

This paper investigates the effect of the simultaneous placement of DGs and smart meters (SMs) on voltage profile improvement in active distribution networks (ADNs). Considerable attention has recently been paid to responsive loads in power system studies alongside distributed generations (DGs). The existence of responsive loads in active distribution networks would have an undeniable effect on the sizing and siting of DGs. For this reason, an optimal framework is proposed for the sizing and siting of DGs and SMs in ADNs. SMs are taken into consideration for the sake of successfully implementing demand response programs (DRPs), such as direct load control (DLC), with end-side consumers. Aiming at voltage profile improvement, the optimization procedure is solved by a genetic algorithm (GA) and tested on the IEEE 33-bus distribution test system. Different scenarios have been established, with variations in the number of DG units, individual or simultaneous placement of DGs and SMs, and an adaptive power factor (APF) mode for DGs to support reactive power. The obtained results confirm the significant effect of DRPs and the APF mode in determining the optimal size and site of DGs to be connected to the ADN, resulting in the improvement of the voltage profile as well.

Keywords: active distribution network (ADN), distributed generations (DGs), smart meters (SMs), demand response programs (DRPs), adaptive power factor (APF)

Procedia PDF Downloads 301
688 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction

Authors: Qais M. Yousef, Yasmeen A. Alshaer

Abstract:

Over the last few years, the amount of data available across the globe has increased rapidly. This came with the emergence of recent concepts such as big data and the Internet of Things, which have furnished a suitable solution for the availability of data all over the world. However, managing this massive amount of data remains a challenge due to the large variety of its types and its distribution. Therefore, locating a required file, particularly on the first attempt, has turned out to be no easy task, due to the large similarity of names among different files distributed on the web. Consequently, the accuracy and speed of search have been negatively affected. This work presents a method that uses electroencephalography (EEG) signals to locate files based on their contents. Building on the concept of natural mind-wave processing, this work analyzes the mind-wave signals of different people, extracting their most appropriate features using a multi-objective metaheuristic algorithm and then classifying them using an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts only. Implementing this approach and testing it on real people proved its ability to find the desired files accurately within a noticeably shorter time and to retrieve them as a first choice for the user.

Keywords: artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization

Procedia PDF Downloads 175
687 Classification of Potential Biomarkers in Breast Cancer Using Artificial Intelligence Algorithms and Anthropometric Datasets

Authors: Aref Aasi, Sahar Ebrahimi Bajgani, Erfan Aasi

Abstract:

Breast cancer (BC) continues to be the most frequent cancer in females and causes the highest number of cancer-related deaths in women worldwide. Inspired by recent advances in studying the relationship between different patient attributes and the disease, in this paper we investigate different classification methods for better diagnosis of BC in its early stages. In this regard, datasets from the University Hospital Centre of Coimbra were chosen, and different machine learning (ML)-based and neural network (NN) classifiers have been studied. For this purpose, we selected favorable features among the nine provided attributes of the clinical dataset by using a random forest algorithm. This dataset consists of both healthy controls and BC patients, and it was noted that glucose, BMI, resistin, and age have the most importance, in that order. Moreover, we analyzed these features with various ML-based classifier methods, including Decision Tree (DT), K-Nearest Neighbors (KNN), eXtreme Gradient Boosting (XGBoost), Logistic Regression (LR), Naive Bayes (NB), and Support Vector Machine (SVM), along with the NN-based Multi-Layer Perceptron (MLP) classifier. The results revealed that, among the different techniques, the SVM and MLP classifiers have the highest accuracy, at 96% and 92%, respectively. These results show that the adopted procedure can be used effectively for the classification of cancer cells, and they encourage further experimental investigation with more collected data for other types of cancer.
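
A compact sketch of the two-stage procedure is given below. The column names mirror the nine Coimbra attributes, but the data, labels, and model settings are synthetic stand-ins for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
cols = ["Age", "BMI", "Glucose", "Insulin", "HOMA",
        "Leptin", "Adiponectin", "Resistin", "MCP.1"]
X = pd.DataFrame(rng.normal(size=(116, 9)), columns=cols)  # synthetic stand-in
y = rng.integers(0, 2, size=116)                           # healthy vs. BC label

# stage 1: rank attributes with a random forest
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = sorted(zip(cols, rf.feature_importances_), key=lambda p: -p[1])
top4 = [name for name, _ in ranked[:4]]  # paper reports Glucose, BMI, Resistin, Age

# stage 2: classify on the selected features with an SVM
svm = make_pipeline(StandardScaler(), SVC())
print(top4, cross_val_score(svm, X[top4], y, cv=5).mean())
```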

Keywords: breast cancer, diagnosis, machine learning, biomarker classification, neural network

Procedia PDF Downloads 135
686 Proposed Framework based on Classification of Vertical Handover Decision Strategies in Heterogeneous Wireless Networks

Authors: Shidrokh Goudarzi, Wan Haslina Hassan

Abstract:

Heterogeneous wireless networks are converging towards an all-IP network as part of the so-called next-generation network. In this paradigm, different access technologies need to be interconnected; thus, vertical handovers, or vertical handoffs, are necessary for seamless mobility. In this paper, we conduct a review of existing vertical handover decision-making mechanisms that aim to provide ubiquitous connectivity to mobile users. To offer a systematic comparison, we categorize these vertical handover measurement and decision structures based on their respective methodology and parameters. Subsequently, we analyze several vertical handover approaches in the literature and compare them according to their advantages and weaknesses. The paper compares the algorithms based on their network selection methods, the complexity of the technologies used, and their efficiency, in order to introduce our vertical handover decision framework. We find that vertical handovers in heterogeneous wireless networks suffer from the lack of a standard, efficient method to satisfy both user and network quality-of-service requirements at different levels, including architecture, decision-making, and protocols. Also, the consolidation of the network terminal, cross-layer information, multi-packet casting, and an intelligent network selection algorithm appears to be an optimal solution for achieving the seamless service continuity needed to facilitate seamless connectivity.

Keywords: heterogeneous wireless networks, vertical handovers, vertical handover metric, decision-making algorithms

Procedia PDF Downloads 393
685 Machine Learning and Metaheuristic Algorithms in Short Femoral Stem Custom Design to Reduce Stress Shielding

Authors: Isabel Moscol, Carlos J. Díaz, Ciro Rodríguez

Abstract:

Hip replacement becomes necessary when a person suffers severe pain or considerable functional limitations, and the best option to enhance their quality of life is the replacement of the damaged joint. One of the main components in femoral prostheses is the stem, which distributes the loads from the joint to the proximal femur. To preserve more bone stock and avoid weakening the diaphysis, a short starting stem was selected, generated from the intramedullary morphology of the patient's femur. This ensures the implantability of the design and leads to a geometric delimitation for personalized optimization with machine learning (ML) and metaheuristic algorithms. The present study attempts to design a cementless short stem that brings the strain deviation before and after implantation close to zero, promoting fixation and durability. Regression models developed to estimate the percentage change of the maximum principal stresses were used as objective optimization functions by the metaheuristic algorithm. The latter evaluated different geometries of the short stem by modifying certain parameters in oblique sections from the osteotomy plane. The optimized geometry reached a global stress shielding (SS) of 18.37% with a determination factor (R²) of 0.667. The predicted results favour integrating implantability into short stem optimization to effectively reduce SS in the proximal femur.

Keywords: machine learning techniques, metaheuristic algorithms, short-stem design, stress shielding, hip replacement

Procedia PDF Downloads 195
684 Seismic Hazard Study and Strong Ground Motion in Southwest Alborz, Iran

Authors: Fereshteh Pourmohammad, Mehdi Zare

Abstract:

The city of Karaj, having a population of 2.2 million (est. 2022), is located to the southwest of the Alborz mountain belt in northern Iran. The region is known to be a highly active seismic zone. This study is focused on the geological and seismological analyses within a radius of 200 km from the center of Karaj. Five seismic zones and seven linear seismic sources were identified, and the maximum magnitude was calculated for the seismic zones. Since the seismicity catalog is incomplete, we used a parametric-historic algorithm, and the Kijko and Sellevoll (1992) method was applied to calculate the seismicity parameters; the return periods and the recurrence probabilities of earthquake magnitudes in each zone were obtained for the 475-year return period. According to the calculations, the highest and lowest earthquake magnitudes of 7.6 and 6.2 were obtained in Zones 1 and 4, respectively. This result is new and extremely important from the viewpoint of earthquake risk in a densely populated city. Using attenuation relationships, the maximum strong horizontal ground motion was calculated as 0.42 g for the 475-year return period and 0.70 g for the 2475-year return period, and the maximum strong vertical ground motion as 0.25 g for the 475-year return period and 0.44 g for the 2475-year return period. These acceleration levels are new and are about 25% higher than the values presented in the Iranian building code.
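
For readers unfamiliar with the 475- and 2475-year figures, the snippet below shows the standard Poisson-model arithmetic behind them (10% and 2% probabilities of exceedance in 50 years). This is textbook hazard math, not a computation from the paper.

```python
import math

def return_period(p_exceed, t_years=50):
    # Poisson assumption: P(at least one exceedance in t years) = 1 - exp(-t/T)
    return -t_years / math.log(1.0 - p_exceed)

print(round(return_period(0.10)))  # ~475 years
print(round(return_period(0.02)))  # ~2475 years
```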

Keywords: seismic zones, ground motion, return period, hazard analysis

Procedia PDF Downloads 97
683 On Exploring Search Heuristics for Improving the Efficiency in Web Information Extraction

Authors: Patricia Jiménez, Rafael Corchuelo

Abstract:

Nowadays, the World Wide Web is the most popular source of information, comprising billions of on-line documents. Web mining is used to crawl through these documents, collect the information of interest, and process it by applying data mining tools, so that the gathered information can be used in the best interest of a business, enabling companies to promote themselves. Unfortunately, it is not easy to extract the information a web site provides automatically when the site lacks an API that transforms the user-friendly data provided in web documents into a structured, machine-readable format. Rule-based information extractors are tools intended to extract the information of interest automatically and offer it in a structured format that mining tools can process. However, the performance of an information extractor strongly depends on the search heuristic employed, since bad choices regarding how to learn a rule may easily result in a loss of effectiveness and/or efficiency. Improving search heuristics with regard to efficiency is of the utmost importance in the field of web information extraction, since typical datasets are very large. In this paper, we employ an information extractor based on a classical top-down algorithm that uses the so-called Information Gain heuristic introduced by Quinlan and Cameron-Jones. Unfortunately, the Information Gain heuristic suffers from some well-known problems, so we analyse an intuitive alternative, Termini, that is clearly more efficient; we also analyse other proposals in the literature and conclude that none of them outperforms the previous alternative.
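
For reference, the sketch below computes the FOIL-style Information Gain of specializing one extraction rule into another, following the standard Quinlan/Cameron-Jones form the paper starts from; the positive/negative example counts are invented.

```python
import math

def info(p, n):
    """Bits needed to signal a positive example under a rule's coverage."""
    return -math.log2(p / (p + n))

def information_gain(p, n, p2, n2):
    # positives still covered by the specialization, times the
    # reduction in information cost per positive example
    return p2 * (info(p, n) - info(p2, n2))

# rule R covers 80 positives / 40 negatives; candidate R' covers 60 / 5
print(information_gain(80, 40, 60, 5))
```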

Keywords: information extraction, search heuristics, semi-structured documents, web mining

Procedia PDF Downloads 335
682 Optimal Hybrid Linear and Nonlinear Control for a Quadcopter Drone

Authors: Xinhuang Wu, Yousef Sardahi

Abstract:

A hybrid, optimal multi-loop control structure combining linear and nonlinear control algorithms is introduced in this paper to regulate the position of a quadcopter unmanned aerial vehicle (UAV) driven by four brushless DC motors. To this end, a nonlinear mathematical model of the UAV is derived and then linearized around one of its operating points. Using the nonlinear version of the model, a sliding mode controller is used to derive the control laws for the motor thrust forces required to drive the UAV to a certain position. The linear model is used to design two controllers, the XG-controller and the YG-controller, responsible for calculating the roll and pitch required to maneuver the vehicle to the desired X and Y position. Three attitude controllers are designed to calculate the desired angular rates of the rotors, assuming that the Euler angles are small. After that, a many-objective optimization problem involving 20 design parameters and ten objective functions is formulated and solved by HypE (hypervolume estimation algorithm), one of the widely used many-objective optimization approaches. Both stability and performance constraints are imposed on the optimization problem. The optimization results, in terms of Pareto sets and fronts, show that some of the design objectives are competing: when one objective goes down, the other goes up. Also, numerical simulations conducted on the nonlinear UAV model show that the proposed optimization method is quite effective.

Keywords: optimal control, many-objective optimization, sliding mode control, linear control, cascade controllers, UAV, drones

Procedia PDF Downloads 73
681 A Hybrid Traffic Model for Smoothing Traffic Near Merges

Authors: Shiri Elisheva Decktor, Sharon Hornstein

Abstract:

Highway merges and unmarked junctions are key components of any urban road network, which can act as bottlenecks and create traffic disruption. Inefficient highway merges may trigger traffic instabilities such as stop-and-go waves, pose safety risks, and lead to longer journey times. These phenomena occur spontaneously if the average vehicle density exceeds a certain critical value. This study focuses on modeling the traffic using a microscopic traffic flow model. A hybrid traffic model, which combines human-driven and controlled vehicles, is assumed. The controlled vehicles obey different driving policies when approaching the merge or in the vicinity of other vehicles. We developed a co-simulation model in SUMO (Simulation of Urban Mobility), in which the human-driven cars are modeled using the Intelligent Driver Model (IDM) and the controlled cars are modeled using a dedicated controller. The scenario chosen for this study is a closed track with one merge and one exit, which could later be implemented on a scaled infrastructure in our lab setup. This will enable us to benchmark the simulation results of this study against comparable results under similar conditions in the lab. The metrics chosen for comparing the performance of our algorithm on the overall traffic conditions include the average speed, the wait time near the merge, and the throughput after the merge, measured under different travel demand conditions (low, medium, and heavy traffic).
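
As background for the human-driver side, the snippet below evaluates the standard IDM acceleration law. The parameter values are common textbook defaults, not the calibration used in the study.

```python
import math

def idm_accel(v, v_lead, gap, v0=33.3, T=1.5, a=1.0, b=1.5, s0=2.0, delta=4):
    """Intelligent Driver Model acceleration (m/s^2)."""
    dv = v - v_lead                                   # closing speed
    # desired dynamic gap: jam distance + time headway + braking term
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

# ego at 25 m/s approaching a 20 m/s leader 30 m ahead near the merge
print(idm_accel(v=25.0, v_lead=20.0, gap=30.0))  # negative -> braking
```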

Keywords: highway merges, traffic modeling, SUMO, driving policy

Procedia PDF Downloads 106