Search results for: Dijkstra’s algorithm

1019 A Predictive MOC Solver for Water Hammer Waves Distribution in Network

Authors: A. Bayle, F. Plouraboué

Abstract:

Water Distribution Networks (WDN) still suffer from a lack of knowledge about the prediction of fast pressure transient events, although the latter may considerably impact their durability. Accidental or planned operating activities indeed give rise to complex pressure interactions and may drastically modify the local pressure, generating leaks and, in rare cases, pipe breaks. In this context, a numerical predictive analysis is conducted to prevent such events and optimize network management. A pair of home-made Python/FORTRAN 90 codes has been developed, using the Method of Characteristics (MOC) to solve the water-hammer equations. The solver is validated by direct comparison with theoretical and experimental measurements in simple configurations and afterward extended to network analysis. The algorithm's most costly steps are designed for parallel computation. A varied set of boundary conditions and energetic loss models is considered for the network simulations. The results are analyzed in both the time and frequency domains and provide crucial information on the pressure distribution behavior within the network.
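
To make the MOC step concrete, here is a minimal Python sketch of a single time step for the interior nodes of one pipe, based on the classical compatibility equations; the pipe parameters, array layout, and boundary handling are illustrative assumptions, not the authors' Python/FORTRAN 90 code.

```python
import numpy as np

# Hypothetical single-pipe parameters (illustrative only)
a, g, D, f = 1200.0, 9.81, 0.3, 0.02   # wave speed, gravity, diameter, friction factor
A = np.pi * D**2 / 4                   # cross-sectional area
dx = 10.0                              # spatial step; the time step is dt = dx / a
B = a / (g * A)                        # characteristic impedance
R = f * dx / (2 * g * D * A**2)        # friction resistance coefficient

def moc_step(H, Q):
    """One MOC time step for the interior nodes of a single pipe.

    H, Q: head and flow arrays at the current time level.
    Boundary nodes (reservoir, valve, junction) must be handled separately.
    """
    Hn, Qn = H.copy(), Q.copy()
    for i in range(1, len(H) - 1):
        # C+ characteristic from node i-1, C- characteristic from node i+1
        Cp = H[i - 1] + B * Q[i - 1] - R * Q[i - 1] * abs(Q[i - 1])
        Cm = H[i + 1] - B * Q[i + 1] + R * Q[i + 1] * abs(Q[i + 1])
        Hn[i] = 0.5 * (Cp + Cm)
        Qn[i] = (Cp - Cm) / (2 * B)
    return Hn, Qn
```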

Keywords: energetic losses models, method of characteristic, numerical predictive analysis, water distribution network, water hammer

Procedia PDF Downloads 230
1018 Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory

Authors: Samar M. Alqhtani, Suhuai Luo, Brian Regan

Abstract:

Data fusion technology can be the best way to extract useful information from multiple sources of data, and it has been widely applied in various applications. This paper presents a data fusion approach for multimedia data in event detection on Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. There are two types of data in the fusion. The first is features extracted from text using the bag-of-words method, weighted by term frequency-inverse document frequency (TF-IDF). The second is the visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied in order to fuse the information from these two sources. Our experiments have indicated that, compared to approaches using an individual data source, the proposed data fusion approach can increase the prediction accuracy for event detection. The experimental results showed that the proposed method achieved a high accuracy of 0.97, compared with 0.93 with texts only and 0.86 with images only.
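
For readers unfamiliar with the fusion step, the following is a minimal Python sketch of Dempster's rule of combination for two sources over a two-element frame of discernment; the mass values standing in for the text and image classifiers are made up for illustration and are not the paper's data.

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments with Dempster's rule."""
    combined, conflict = {}, 0.0
    for A, mA in m1.items():
        for B, mB in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mA * mB
            else:
                conflict += mA * mB          # mass assigned to contradictory hypotheses
    k = 1.0 - conflict                       # normalization factor
    return {s: v / k for s, v in combined.items()}

# Illustrative masses from a text-based and an image-based classifier
frame = frozenset({"event", "no_event"})
m_text  = {frozenset({"event"}): 0.7, frozenset({"no_event"}): 0.2, frame: 0.1}
m_image = {frozenset({"event"}): 0.6, frozenset({"no_event"}): 0.3, frame: 0.1}
print(dempster_combine(m_text, m_image))
```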

Keywords: data fusion, Dempster-Shafer theory, data mining, event detection

Procedia PDF Downloads 409
1017 A Local Invariant Generalized Hough Transform Method for Integrated Circuit Visual Positioning

Authors: Wei Feilong

Abstract:

In this study, a local invariant generalized Hough transform (LI-GHT) method is proposed for integrated circuit (IC) visual positioning. The original generalized Hough transform (GHT) is robust to external noise; however, it is not suitable for the visual positioning of IC chips because its four-dimensional (4D) parameter space leads to substantial storage requirements and high computational complexity. The proposed LI-GHT method can reduce the dimensionality of the parameter space to 2D thanks to the rotational invariance of local invariant geometric features, and it can estimate the position and rotation angle of IC chips accurately and in real time under noise and blur. The experimental results show that the proposed LI-GHT can estimate the position and rotation angle of IC chips with high accuracy and at high speed. The proposed LI-GHT algorithm was implemented in the IC visual positioning system of radio frequency identification (RFID) packaging equipment.

Keywords: Integrated Circuit Visual Positioning, Generalized Hough Transform, Local Invariant Generalized Hough Transform, IC packing equipment

Procedia PDF Downloads 263
1016 MPPT Control with (P&O) and (FLC) Algorithms of Solar Electric Generator

Authors: Dib Djalel, Mordjaoui Mourad

Abstract:

The current trend towards the exploitation of various renewable energy resources has become indispensable, so it is important to improve the efficiency and reliability of photovoltaic (PV) generator systems. Maximum Power Point Tracking (MPPT) plays an important role in photovoltaic power systems because it maximizes the power output from a PV system for a given set of conditions. This paper presents a new fuzzy logic control based MPPT algorithm for solar panels. The solar panel is modeled and analyzed in Matlab/Simulink. The solar panel can produce maximum power at a particular operating point called the Maximum Power Point (MPP). To produce maximum power and to get maximum efficiency, the entire photovoltaic panel must operate at this particular point. The maximum power point of a PV panel keeps changing with environmental conditions such as solar irradiance and cell temperature. Thus, to extract the maximum available power from a PV module, MPPT algorithms are implemented: Perturb and Observe (P&O) MPPT and fuzzy logic control (FLC) MPPT are developed and compared. Simulation results show the effectiveness of the fuzzy control technique in producing a more stable power.
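
As a point of reference for the comparison, a minimal Python sketch of one Perturb and Observe (P&O) iteration is given below; the perturbation step and the voltage-reference interface are illustrative assumptions, since the paper implements the controllers in Matlab/Simulink.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One P&O iteration: returns the new voltage reference for the converter.

    v, p           : present panel voltage and power measurements
    v_prev, p_prev : measurements from the previous iteration
    step           : perturbation applied to the voltage reference (illustrative)
    """
    dP, dV = p - p_prev, v - v_prev
    if dP == 0:
        return v                      # at (or oscillating around) the MPP
    if (dP > 0 and dV > 0) or (dP < 0 and dV < 0):
        return v + step               # power rose in this direction: keep going
    return v - step                   # power dropped: reverse the perturbation
```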

Keywords: MPPT, photovoltaic panel, fuzzy logic control, modeling, solar power

Procedia PDF Downloads 481
1015 Modeling and Dynamics Analysis for Intelligent Skid-Steering Vehicle Based on Trucksim-Simulink

Authors: Yansong Zhang, Xueyuan Li, Junjie Zhou, Xufeng Yin, Shihua Yuan, Shuxian Liu

Abstract:

Aiming at the verification of control algorithms for skid-steering vehicles, a vehicle simulation model of a 6×6 electric skid-steering unmanned vehicle was established based on Trucksim and Simulink. The original transmission and steering mechanisms of Trucksim are removed, and the electric skid-steering model and a closed-loop controller for the vehicle speed and yaw rate are built in Simulink. The simulation results are compared with those obtained from theoretical formulas. The results show that the tire mechanics and vehicle kinematics predicted by the Trucksim-Simulink simulation model are close to the theoretical results. Therefore, it can be used as an effective approach to study the dynamic performance and control algorithms of skid-steering vehicles. In this paper, a motion control method based on feedforward control is also designed. The simulation results show that the feedforward control strategy makes the vehicle follow the target yaw rate more quickly and accurately, which improves the vehicle's maneuverability.

Keywords: skid-steering, Trucksim-Simulink, feedforward control, dynamics

Procedia PDF Downloads 322
1014 Influence of the Paint Coating Thickness in Digital Image Correlation Experiments

Authors: Jesús A. Pérez, Sam Coppieters, Dimitri Debruyne

Abstract:

In the past decade, the use of digital image correlation (DIC) techniques has increased significantly in the area of experimental mechanics, especially for materials behavior characterization. This non-contact tool enables full-field displacement and strain measurements over a complete region of interest. The DIC algorithm requires a random contrast pattern on the surface of the specimen in order to perform properly. To create this pattern, the specimen is usually first coated with a white matt paint. Next, a black random speckle pattern is applied using any suitable method. If the applied paint coating is too thick, its top surface may not be able to exactly follow the deformation of the specimen, and consequently, the strain measurement might be underestimated. In the present article, the influence of the paint thickness on the strain underestimation is studied for different strain levels. The results are then compared to typical paint coating thicknesses applied by experienced DIC users. A slight strain underestimation was observed for paint coatings thicker than about 30 μm. This value, however, is uncommonly high compared to the coating thicknesses typically applied by DIC users.

Keywords: digital image correlation, paint coating thickness, strain

Procedia PDF Downloads 512
1013 Analysis of Non-Uniform Characteristics of Small Underwater Targets Based on Clustering

Authors: Tianyang Xu

Abstract:

Small underwater targets generally have a non-centrosymmetric geometry, and the acoustic scattering field of the target is spatially inhomogeneous under active sonar detection conditions. To address these problems, this paper takes the hemispherical-capped cylindrical shell as the research object, considers the angle continuity implied in the echo characteristics, and proposes a cluster-driven method for studying the non-uniform angular characteristics of the target echo. First, the target echo features are extracted and feature vectors are constructed. Secondly, the t-SNE algorithm is used to improve the internal connection of the feature vectors in a low-dimensional feature space and to construct a visual feature space. Finally, the implicit angular relationship between echo features is extracted under unsupervised conditions by cluster analysis. The reconstruction results of the local geometric structure of the target corresponding to different categories show that the method can effectively divide the angle intervals of the local structure of the target according to the natural acoustic scattering characteristics of the target.
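
To illustrate the embedding-plus-clustering pipeline, a minimal scikit-learn sketch is given below; the random feature matrix merely stands in for the echo feature vectors extracted from the measured scattering field, and the number of clusters is an arbitrary choice.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

# Placeholder echo feature vectors: one row per insonification angle
rng = np.random.default_rng(0)
features = rng.normal(size=(360, 32))

# Project the feature vectors into a 2-D visual feature space with t-SNE
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

# Unsupervised grouping of angles with similar echo structure
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embedding)
```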

Keywords: underwater target, non-uniform characteristics, cluster-driven method, acoustic scattering characteristics

Procedia PDF Downloads 127
1012 Median-Based Nonparametric Estimation of Returns in Mean-Downside Risk Portfolio Frontier

Authors: H. Ben Salah, A. Gannoun, C. de Peretti, A. Trabelsi

Abstract:

The Downside Risk (DSR) model for portfolio optimisation makes it possible to overcome the drawbacks of the classical mean-variance model concerning the asymmetry of returns and the risk perception of investors. The optimization in this model deals with a positive definite matrix that is endogenous with respect to the portfolio weights, which makes the problem far more difficult to handle. For this purpose, Athayde (2001) developed a new recursive minimization procedure that ensures convergence to the solution. However, when only a finite number of observations is available, the portfolio frontier is not very smooth. To overcome this, Athayde (2003) proposed a mean kernel estimation of the returns so as to create a smoother portfolio frontier. This technique provides an effect similar to having continuous observations. In this paper, taking advantage of the robustness of the median, we replace the mean estimator in Athayde's model by a nonparametric median estimator of the returns. Then, we give a new version of the former algorithm of Athayde (2001, 2003). We finally analyse the properties of this improved portfolio frontier and apply the new method to real examples.

Keywords: Downside Risk, Kernel Method, Median, Nonparametric Estimation, Semivariance

Procedia PDF Downloads 492
1011 Optimization of Reinforced Concrete Buildings According to the Algerian Seismic Code

Authors: Nesreddine Djafar Henni, Nassim Djedoui, Rachid Chebili

Abstract:

Recent decades have witnessed significant efforts to optimize different types of structures and components. The concept of cost optimization in reinforced concrete structures, which aims at minimizing financial resources while ensuring maximum building safety, involves multiple materials, and the objective function for their optimal design is derived from the construction cost of the steel and concrete that contribute significantly to the overall weight of reinforced concrete (RC) structures. To achieve this objective, this work has been devoted to optimizing the structural design of 3D RC frame buildings, integrating, for the first time, the Algerian regulations. Three different test examples were investigated to assess the efficiency of the approach in optimizing RC frame buildings. The hybrid GWOPSO algorithm is used, and 30,000 generations are run, with the building cost reduced at each iteration. The building cost accounts for both concrete and reinforcement bars. As a result, the cost of the reinforced concrete structure is reduced by 30% compared with the initial design, which means that the 3D cost-design optimization of the framed structure is successfully achieved.

Keywords: optimization, automation, API, Matlab, RC structures

Procedia PDF Downloads 47
1010 The Effectiveness of a Hybrid Diffie-Hellman-RSA-Advanced Encryption Standard Model

Authors: Abdellahi Cheikh

Abstract:

With the emergence of quantum computers with very powerful capabilities, the security of the exchange of shared keys between two interlocutors poses a big problem with regard to the rapid development of technologies such as computing power and computing speed. The Diffie-Hellman (DH) algorithm is therefore more vulnerable than ever: no mechanism guarantees the security of the key exchange, so if an intermediary manages to intercept it, the key is compromised. In this regard, several studies have been conducted to improve the security of key exchange between two interlocutors, which has led to interesting results. The modification made to our Diffie-Hellman-RSA-AES (DRA) model, which encrypts the information exchanged between two users using the three encryption algorithms DH, RSA, and AES, consists of using steganographic images to hide the contents of the p, g, and ClesAES values that are sent in an unencrypted state in the original DRA model to compute each user's public key. This work includes a comparative study between the DRA model and existing solutions, as well as the modification made to this model, with an emphasis on reliability in terms of security. The study presents a simulation to demonstrate the effectiveness of the modification made to the DRA model. The results show that our model has a security advantage over the existing solutions, which is why these changes were made to reinforce the security of the DRA model.
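
For context, a toy Python sketch of the plain Diffie-Hellman exchange that the DRA model builds on is shown below, with deliberately small textbook parameters; real deployments use large safe primes and authenticated channels, and the RSA/AES layers and the steganographic hiding of p, g, and ClesAES described above are not shown.

```python
import secrets

# Public parameters (textbook toy values; real systems use large safe primes)
p, g = 23, 5

# Each party picks a private exponent and publishes g^x mod p
a = secrets.randbelow(p - 2) + 1      # Alice's private key
b = secrets.randbelow(p - 2) + 1      # Bob's private key
A = pow(g, a, p)                      # Alice's public value (sent in the clear)
B = pow(g, b, p)                      # Bob's public value (sent in the clear)

# Both sides derive the same shared secret without ever transmitting it
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```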

Keywords: Diffie-Hellman, DRA, RSA, advanced encryption standard

Procedia PDF Downloads 93
1009 Parameter Tuning of Complex Systems Modeled in Agent Based Modeling and Simulation

Authors: Rabia Korkmaz Tan, Şebnem Bora

Abstract:

The major problem encountered when modeling complex systems with agent-based modeling and simulation techniques is the existence of large parameter spaces. A complex system model cannot be expected to reflect the whole of the real system, but by specifying the most appropriate parameters, the actual system can be represented by the model under certain conditions. A review of studies conducted in recent years shows that there are few studies on the parameter tuning problem in agent-based simulations, and these have focused on tuning the parameters of a single model. In this study, a parameter tuning approach is proposed that uses metaheuristic algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), and the Firefly Algorithm (FA). With this hybrid-structured approach, the parameter tuning problems of models from different fields were solved. The proposed approach was tested on two different models, and its performance on the different problems was compared. The simulations and results reveal that the proposed approach performs better than existing parameter tuning studies.

Keywords: parameter tuning, agent based modeling and simulation, metaheuristic algorithms, complex systems

Procedia PDF Downloads 224
1008 Application and Assessment of Artificial Neural Networks for Biodiesel Iodine Value Prediction

Authors: Raquel M. De sousa, Sofiane Labidi, Allan Kardec D. Barros, Alex O. Barradas Filho, Aldalea L. B. Marques

Abstract:

Several parameters are established in order to measure biodiesel quality. One of them is the iodine value, an important parameter that measures the total unsaturation within a mixture of fatty acids. Limiting unsaturated fatty acids is necessary, since heating a larger quantity of them results in either the formation of deposits inside the engine or damage to the lubricant. Since the determination of the iodine value by the official procedure tends to be very laborious, with high costs and toxic reagents, this study uses an artificial neural network (ANN) to predict the iodine value as an alternative to these problems. The network development methodology used 13 fatty acid esters as inputs, and backpropagation-type training algorithms were optimized in order to obtain an architecture for iodine value prediction. This study allowed us to demonstrate the neural networks' ability to learn the correlation between biodiesel quality properties, in this case the iodine value, and the molecular structures that make it up. The model developed in the study reached a correlation coefficient (R) of 0.99 for both network validation and network simulation, with the Levenberg-Marquardt algorithm.

Keywords: artificial neural networks, biodiesel, iodine value, prediction

Procedia PDF Downloads 605
1007 Cash Flow Optimization on Synthetic CDOs

Authors: Timothée Bligny, Clément Codron, Antoine Estruch, Nicolas Girodet, Clément Ginet

Abstract:

Collateralized Debt Obligations are not as widely used nowadays as they were before the 2007 subprime crisis. Nonetheless, there remains an enthralling challenge to optimize the cash flows associated with synthetic CDOs. A Gaussian-based model is used here, in which default correlation and unconditional probabilities of default are highlighted. Numerous simulations are then performed based on this model for different scenarios, in order to evaluate the associated cash flows given a specific number of defaults at different periods of time. Cash flows are not calculated on a single bought or sold tranche but rather on a combination of bought and sold tranches. Under some assumptions, the simplex algorithm gives a way to find the maximum cash flow according to the correlation of defaults and the maturities. The Gaussian model used is not realistic in crisis situations. Besides, the present system does not handle buying or selling a portion of a tranche, but only the whole tranche. However, the work provides the investor with relevant elements on what to buy and sell, and when.
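
To illustrate the kind of linear program the simplex step solves, here is a minimal Python sketch using scipy.optimize.linprog; the expected tranche cash flows, the budget constraint, and the weight bounds are hypothetical placeholders, not the paper's model outputs.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical expected net cash flows per unit notional for three tranches
expected_cf = np.array([0.012, 0.025, 0.041])

# linprog minimizes, so negate to maximize the total expected cash flow
c = -expected_cf

# Budget constraint: total notional allocated across tranches <= 1.0
A_ub = np.array([[1.0, 1.0, 1.0]])
b_ub = np.array([1.0])

# The paper trades whole tranches only; here each weight is simply bounded
bounds = [(0.0, 1.0)] * 3

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)   # optimal allocation and maximized expected cash flow
```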

Keywords: synthetic collateralized debt obligation (CDO), credit default swap (CDS), cash flow optimization, probability of default, default correlation, strategies, simulation, simplex

Procedia PDF Downloads 271
1006 An Efficient Encryption Scheme Using DWT and Arnold Transforms

Authors: Ali Abdrhman M. Ukasha

Abstract:

Data security is needed in data transmission, storage, and communication. The color image is decomposed into red, green, and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform. The Arnold transform is used to change the locations of the red channel pixels as an image scrambling process. All these channels are then encrypted separately using a key image that has the same size as the original and is generated using private keys and modulo operations. XOR and modulo operations are performed between the encrypted channel images in order to change the image pixel values. The extracted contours of the recovered color image can be obtained with an acceptable level of distortion using the Canny edge detector. Experiments have demonstrated that the proposed algorithm can fully encrypt a 2D color image and completely reconstruct it without any distortion. It is shown that the color image can be protected with a higher security level. The presented method is easy to implement in hardware and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
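
As an illustration of the scrambling step, a minimal Python sketch of the Arnold cat map applied to a square image channel is given below; the channel size and iteration count are arbitrary, and the wavelet compression, key-image encryption, and XOR stages are not shown.

```python
import numpy as np

def arnold_scramble(channel, iterations=1):
    """Apply the Arnold cat map to a square (N x N) image channel.

    Each pixel (x, y) is moved to ((x + y) mod N, (x + 2y) mod N).
    The map is periodic, so iterating it (or applying the inverse map)
    recovers the original channel exactly.
    """
    n = channel.shape[0]
    assert channel.shape[0] == channel.shape[1], "Arnold map needs a square image"
    out = channel
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

# Illustrative use on a random 64 x 64 "red channel"
red = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
red_scrambled = arnold_scramble(red, iterations=5)
```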

Keywords: color image, wavelet transform, edge detector, Arnold transform, lossy image encryption

Procedia PDF Downloads 482
1005 Progressive Multimedia Collection Structuring via Scene Linking

Authors: Aman Berhe, Camille Guinaudeau, Claude Barras

Abstract:

In order to facilitate information seeking in large collections of multimedia documents with long and progressive content (such as broadcast news or TV series), one can extract the semantic links that exist between semantically coherent parts of documents, i.e., scenes. The links can then create a coherent collection of scenes from which it is easier to perform content analysis, topic extraction, or information retrieval. In this paper, we focus on TV series structuring and propose two approaches for scene linking at different levels of granularity (episode and season): a fuzzy online clustering technique and a graph-based community detection algorithm. When evaluated on the first two seasons of the TV series Game of Thrones, the fuzzy online clustering approach performed better than graph-based community detection at the episode level, while graph-based approaches show better performance at the season level.

Keywords: multimedia collection structuring, progressive content, scene linking, fuzzy clustering, community detection

Procedia PDF Downloads 98
1004 A Neuron Model of Facial Recognition and Detection of an Authorized Entity Using Machine Learning System

Authors: J. K. Adedeji, M. O. Oyekanmi

Abstract:

This paper critically examines the use of machine learning procedures in curbing unauthorized access to valuable areas of an organization. The use of passwords, PIN codes, and user identification has in recent times been only partially successful in curbing identity-related crimes, hence the need for a system that incorporates biometric characteristics such as DNA and pattern recognition of variations in facial expressions. The facial model is built on the OpenCV library, which relies on certain physiological features; a Raspberry Pi 3 module is used to compile the OpenCV library, which extracts and stores the detected faces in the datasets directory through the camera. The model is trained with a 50-epoch run on the database, and recognition is performed by the Local Binary Pattern Histogram (LBPH) recognizer included in OpenCV. The neural network is trained with backpropagation, coded in Python, with 200 epoch runs, to identify specific resemblance in the exclusive OR (XOR) output neurons. The research confirmed that physiological parameters are more effective measures for curbing identity-related crimes.
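
A minimal Python sketch of training and querying OpenCV's LBPH recognizer is shown below; it assumes the opencv-contrib-python package, and the datasets directory layout, file names, and label mapping are illustrative rather than the authors' exact setup.

```python
import os
import cv2
import numpy as np

# The LBPH recognizer ships with the opencv-contrib-python package
recognizer = cv2.face.LBPHFaceRecognizer_create()

def load_dataset(root="datasets"):
    """Load grayscale face crops from datasets/<label_id>/*.png (illustrative layout)."""
    faces, labels = [], []
    for label in os.listdir(root):
        for name in os.listdir(os.path.join(root, label)):
            img = cv2.imread(os.path.join(root, label, name), cv2.IMREAD_GRAYSCALE)
            if img is not None:
                faces.append(img)
                labels.append(int(label))
    return faces, np.array(labels, dtype=np.int32)

faces, labels = load_dataset()
recognizer.train(faces, labels)

# Predict the identity of a new face crop (lower confidence = closer match)
probe = cv2.imread("probe.png", cv2.IMREAD_GRAYSCALE)
label_id, confidence = recognizer.predict(probe)
print(label_id, confidence)
```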

Keywords: biometric characters, facial recognition, neural network, OpenCV

Procedia PDF Downloads 254
1003 Fuzzy-Machine Learning Models for the Prediction of Fire Outbreak: A Comparative Analysis

Authors: Uduak Umoh, Imo Eyoh, Emmauel Nyoho

Abstract:

This paper compares fuzzy-machine learning algorithms, namely Support Vector Machine (SVM) and K-Nearest Neighbor (KNN), for predicting cases of fire outbreak. The paper uses a fire outbreak dataset with three features (temperature, smoke, and flame). The data is pre-processed with an Interval Type-2 Fuzzy Logic (IT2FL) algorithm, Min-Max normalization, and Principal Component Analysis (PCA), which are used to predict feature labels, normalize the dataset, and select relevant features, respectively. The output of the pre-processing is a dataset with two principal components (PC1 and PC2). The pre-processed dataset is then used to train the aforementioned machine learning models. The K-fold (with K=10) cross-validation method is used to evaluate the performance of the models using the metrics ROC (receiver operating characteristic) curve, specificity, and sensitivity. The models are also tested with 20% of the dataset. The validation results show that KNN is the better model for fire outbreak detection, with an ROC value of 0.99878, followed by SVM with an ROC value of 0.99753.
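
To illustrate the evaluation protocol, here is a minimal scikit-learn sketch comparing KNN and SVM with 10-fold cross-validation and a 20% hold-out test; the randomly generated two-feature data merely stands in for the PC1/PC2 components of the fire outbreak dataset.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder data standing in for the two principal components and labels
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                    ("SVM", SVC(kernel="rbf"))]:
    auc = cross_val_score(model, X_train, y_train, cv=10, scoring="roc_auc").mean()
    acc = model.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name}: 10-fold ROC AUC = {auc:.4f}, hold-out accuracy = {acc:.4f}")
```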

Keywords: Machine Learning Algorithms, Interval Type-2 Fuzzy Logic, Fire Outbreak, Support Vector Machine, K-Nearest Neighbour, Principal Component Analysis

Procedia PDF Downloads 179
1002 Predictive Modelling Approach to Identify Spare Parts Inventory Obsolescence

Authors: Madhu Babu Cherukuri, Tamoghna Ghosh

Abstract:

Factory supply chain management spends billions of dollars every year to procure and manage equipment spare parts. Due to technology and process changes, some of these spares become obsolete or dead inventory. Factories hold dead inventory worth millions of dollars accumulating over time, due to the lack of a scientific methodology to identify it and send the inventory back to the suppliers on a timely basis. The standard approach followed across industries is: if a part is not used for a set pre-defined period of time, it is declared dead. This leads to the accumulation of dead parts over time, and these parts cannot be sold back to the suppliers because it is too late as per the contract agreement. Our main idea is that the time period for identifying a part as dead cannot be a fixed pre-defined duration across all parts. Rather, it should depend on various properties of the part, such as its historical consumption pattern, the type of part, how many machines it is used in, whether it is a preventive maintenance part, etc. We have designed a predictive algorithm which predicts part obsolescence well in advance with reasonable accuracy and which can help save millions.

Keywords: obsolete inventory, machine learning, big data, supply chain analytics, dead inventory

Procedia PDF Downloads 317
1001 Open Source, Open Hardware Ground Truth for Visual Odometry and Simultaneous Localization and Mapping Applications

Authors: Janusz Bedkowski, Grzegorz Kisala, Michal Wlasiuk, Piotr Pokorski

Abstract:

Ground-truth data is essential for the quantitative evaluation of VO (Visual Odometry) and SLAM (Simultaneous Localization and Mapping) using, e.g., ATE (Absolute Trajectory Error) and RPE (Relative Pose Error). Many open-access data sets provide raw and ground-truth data for benchmark purposes. The issue appears when one would like to validate Visual Odometry and/or SLAM approaches on data captured with the device the algorithm is targeted at, for example a mobile phone, and disseminate the data to other researchers. For this reason, we propose an open source, open hardware ground-truth system that provides an accurate and precise trajectory together with a 3D point cloud. It is based on a Livox Mid-360 LiDAR with a non-repetitive scanning pattern, an on-board Raspberry Pi 4B computer, a battery, and software for off-line calculations (camera-to-LiDAR calibration, LiDAR odometry, SLAM, georeferencing). We show how this system can be used for the evaluation of various state-of-the-art algorithms (Stella SLAM, ORB SLAM3, DSO) in typical indoor monocular VO/SLAM.
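
As an illustration of how such ground truth is used, a minimal Python sketch of the ATE (RMSE) computation after a Kabsch-style rigid alignment is given below; it assumes the estimated and ground-truth trajectories are already time-synchronized, which in practice requires interpolation of the poses.

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Absolute Trajectory Error (RMSE) after rigid alignment.

    estimated, ground_truth: (N, 3) arrays of time-synchronized positions.
    A Kabsch alignment removes the arbitrary rigid-body offset between the
    two reference frames before the per-pose errors are computed.
    """
    mu_e, mu_g = estimated.mean(axis=0), ground_truth.mean(axis=0)
    E, G = estimated - mu_e, ground_truth - mu_g
    U, _, Vt = np.linalg.svd(E.T @ G)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    R = Vt.T @ S @ U.T                                        # estimated -> ground-truth rotation
    aligned = (R @ E.T).T + mu_g
    errors = np.linalg.norm(aligned - ground_truth, axis=1)
    return np.sqrt(np.mean(errors ** 2))
```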

Keywords: SLAM, ground truth, navigation, LiDAR, visual odometry, mapping

Procedia PDF Downloads 66
1000 Implementation of Edge Detection Based on Autofluorescence Endoscopic Image of Field Programmable Gate Array

Authors: Hao Cheng, Zhiwu Wang, Guozheng Yan, Pingping Jiang, Shijia Qin, Shuai Kuang

Abstract:

Autofluorescence Imaging (AFI) is a technology developed in recent years for detecting early carcinogenesis of the gastrointestinal tract. Compared with traditional white light endoscopy (WLE), this technology greatly improves the detection accuracy of early carcinogenesis, because the colors of normal tissues differ from those of cancerous tissues; edge detection can therefore distinguish them in grayscale images. In this paper, the traditional Sobel edge detection method is optimized for the gastrointestinal environment by adding adaptive thresholding and morphological processing. All of the processing is implemented on our self-designed system based on the OV6930 image sensor and a Field Programmable Gate Array (FPGA). The system can capture the gastrointestinal image taken by the lens in real time and detect edges. The final experiments verified the feasibility of our system and the effectiveness and accuracy of the edge detection algorithm.
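
A minimal Python sketch of the software equivalent of this pipeline (Sobel gradients, an adaptive threshold, and morphological closing) is given below using scipy.ndimage; the mean-plus-k-standard-deviations threshold is a stand-in for the paper's adaptive scheme, which is implemented in FPGA logic rather than software.

```python
import numpy as np
from scipy import ndimage

def detect_edges(gray, closing_size=3, k=1.5):
    """Sobel gradient magnitude, adaptive global threshold, morphological closing.

    gray: 2-D float array in [0, 1]. The threshold is derived from the image
    statistics as a simple stand-in for the hardware adaptive scheme.
    """
    gx = ndimage.sobel(gray, axis=1)      # horizontal gradient
    gy = ndimage.sobel(gray, axis=0)      # vertical gradient
    magnitude = np.hypot(gx, gy)

    threshold = magnitude.mean() + k * magnitude.std()
    edges = magnitude > threshold

    # Morphological closing bridges small gaps in the detected contours
    structure = np.ones((closing_size, closing_size), dtype=bool)
    return ndimage.binary_closing(edges, structure=structure)
```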

Keywords: AFI, edge detection, adaptive threshold, morphological processing, OV6930, FPGA

Procedia PDF Downloads 228
999 Output-Feedback Control Design for a General Class of Systems Subject to Sampling and Uncertainties

Authors: Tomas Menard

Abstract:

The synthesis of output-feedback control laws has been investigated by many researchers since the last century. While many results exist for the case of Linear Time Invariant systems whose measurements are continuously available, nowadays control laws are usually implemented on micro-controllers, so the measurements are discrete-time by nature. This fact has to be taken into account explicitly in order to obtain a satisfactory behavior of the closed-loop system. One considers here a general class of systems corresponding to an observability normal form which is subject to uncertainties in the dynamics and to sampling of the output. Indeed, in practice, the modeling of the system is never perfect, which results in unknown uncertainties in the dynamics of the model. We propose here an output-feedback algorithm which is based on a linear state feedback and a continuous-discrete time observer. The main feature of the proposed control law is that only discrete-time measurements of the output are needed. Furthermore, it is formally proven that the state of the closed-loop system converges exponentially toward the origin despite the unknown uncertainties. Finally, the performances of this control scheme are illustrated with simulations.

Keywords: dynamical systems, output feedback control law, sampling, uncertain systems

Procedia PDF Downloads 283
998 A Comparison of Methods for Neural Network Aggregation

Authors: John Pomerat, Aviv Segev

Abstract:

Recently, deep learning has had many theoretical breakthroughs. For deep learning to be successful in industry, however, there need to be practical algorithms capable of handling the many real-world hiccups preventing the immediate application of a learning algorithm. Although AI promises to revolutionize the healthcare industry, getting access to patient data in order to train learning algorithms has not been easy. One proposed solution to this is data-sharing. In this paper, we propose an alternative protocol, based on multi-party computation, to train deep learning models while maintaining both the privacy and security of training data. We examine three methods of training neural networks in this way: transfer learning, average ensemble learning, and series network learning. We compare these methods to the equivalent model obtained through data-sharing across two different experiments. Additionally, we address the security concerns of this protocol. While the motivating example is healthcare, our findings regarding multi-party computation of neural network training are purely theoretical and have use-cases outside the domain of healthcare.
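
As an illustration of the simplest of the three methods, average ensemble learning, here is a minimal scikit-learn sketch in which two parties train separate networks on disjoint data and only their predicted probabilities are combined; the synthetic dataset and network sizes are placeholders, and the multi-party computation layer itself is not shown.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for data held by two parties that cannot be pooled directly
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
X_a, X_b, y_a, y_b = X_train[:700], X_train[700:], y_train[:700], y_train[700:]

# Each party trains its own network on its private split
model_a = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_a, y_a)
model_b = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1).fit(X_b, y_b)

# Average ensemble: only predicted probabilities are shared and averaged
avg_proba = (model_a.predict_proba(X_test) + model_b.predict_proba(X_test)) / 2
accuracy = (avg_proba.argmax(axis=1) == y_test).mean()
print(f"ensemble accuracy: {accuracy:.3f}")
```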

Keywords: neural network aggregation, multi-party computation, transfer learning, average ensemble learning

Procedia PDF Downloads 161
997 An Accurate Computation of 2D Zernike Moments via Fast Fourier Transform

Authors: Mohammed S. Al-Rawi, J. Bastos, J. Rodriguez

Abstract:

Object detection and object recognition are essential components of every computer vision system. Despite their high computational complexity and other problems related to numerical stability and accuracy, Zernike moments of 2D images (ZMs) have shown resilience when used in object recognition and have been used in various image analysis applications. In this work, we propose a novel method for computing ZMs via the Fast Fourier Transform (FFT). Notably, this is the first algorithm that can generate ZMs up to extremely high orders accurately, e.g., it can be used to generate ZMs for orders up to 1000 or even higher. Furthermore, the proposed method is simpler and faster than other methods due to the availability of FFT software and/or hardware. The accuracy and numerical stability of ZMs computed via FFT have been confirmed using the orthogonality property. We also introduce normalization of ZMs with the Neumann factor when the image is embedded in a larger grid, and color image reconstruction based on RGB normalization of the reconstructed images. Astonishingly, higher-order image reconstruction experiments show that the proposed methods are superior, both quantitatively and subjectively, compared to the q-recursive method.

Keywords: Chebyshev polynomial, Fourier transform, fast algorithms, image recognition, pseudo Zernike moments, Zernike moments

Procedia PDF Downloads 263
996 An Approach for Coagulant Dosage Optimization Using Soft Jar Test: A Case Study of Bangkhen Water Treatment Plant

Authors: Ninlawat Phuangchoke, Waraporn Viyanon, Setta Sasananan

Abstract:

The most important process in a water treatment plant is coagulation using alum and poly-aluminum chloride (PACL), whose usage amounts to a hundred thousand baht per day. Determining the dosages of alum and PACL is therefore the most important factor to prescribe so that water production remains economical and valuable. This research applies an artificial neural network (ANN) trained with the Levenberg-Marquardt algorithm to create a mathematical model (Soft Jar Test) for predicting the chemical doses used for coagulation, namely alum and PACL, whose input data consist of the turbidity, pH, alkalinity, conductivity, and oxygen consumption (OC) of the Bangkhen water treatment plant (BKWTP) of the Metropolitan Waterworks Authority. The data, collected from 1 January 2019 to 31 December 2019, cover the changing seasons of Thailand. The input data of the ANN are divided into three groups: training set, test set, and validation set. The best models achieve a coefficient of determination and mean absolute error of 0.73 and 3.18 for alum, and 0.59 and 3.21 for PACL, respectively.

Keywords: soft jar test, jar test, water treatment plant process, artificial neural network

Procedia PDF Downloads 163
995 Forecasting the Fluctuation of Currency Exchange Rate Using Random Forest

Authors: Lule Basha, Eralda Gjika

Abstract:

The exchange rate is one of the most important economic variables, especially for a small, open economy such as Albania. Its effect is noticeable in a country's competitiveness, trade and current account, inflation, wages, domestic economic activity, and bank stability. This study investigates the fluctuation of Albania's exchange rates using the monthly average Euro (EUR) to Albanian Lek (ALL) exchange rate over the time span from January 2008 to June 2021, together with the macroeconomic factors that have a significant effect on the exchange rate. Initially, a Random Forest regression model is constructed to understand the impact of economic variables on the behavior of the monthly average foreign currency exchange rate. Then, a 12-month forecast of the macroeconomic indicators is performed using time series models. The predicted values are fed into the random forest model in order to obtain the average monthly forecast of the Euro to Albanian Lek (ALL) exchange rate for the period July 2021 to June 2022.
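
A minimal scikit-learn sketch of the two-stage idea (fit a random forest on macro indicators, then feed it forecasted indicator values) is shown below; the column names and randomly generated values are placeholders, not the study's data, and the 12-month indicator forecasts would in practice come from the time series models mentioned above.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 162   # monthly observations, January 2008 to June 2021

# Placeholder frame: macro indicators and the EUR/ALL rate (assumed column names)
df = pd.DataFrame({
    "inflation":     rng.uniform(0.5, 4.0, n),
    "interest_rate": rng.uniform(0.5, 6.0, n),
    "trade_balance": rng.uniform(-300, 100, n),
    "eur_all":       rng.uniform(120, 140, n),
})
X, y = df.drop(columns="eur_all"), df["eur_all"]

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Forecasted indicators for the next 12 months (random stand-ins here)
future_X = pd.DataFrame({
    "inflation":     rng.uniform(0.5, 4.0, 12),
    "interest_rate": rng.uniform(0.5, 6.0, 12),
    "trade_balance": rng.uniform(-300, 100, 12),
})
forecast = model.predict(future_X)
```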

Keywords: exchange rate, random forest, time series, machine learning, prediction

Procedia PDF Downloads 100
994 An Efficient Stud Krill Herd Framework for Solving Non-Convex Economic Dispatch Problem

Authors: Bachir Bentouati, Lakhdar Chaib, Saliha Chettih, Gai-Ge Wang

Abstract:

The economic dispatch (ED) problem is a basic problem in power systems; its main goal is to find the most favorable generation dispatch for each unit, reduce the total power generation cost, and meet all system constraints. A recently developed heuristic algorithm called Stud Krill Herd (SKH) has been employed in this paper to treat non-convex ED problems. The KH algorithm has been modified using the stud selection and crossover (SSC) operator to enhance the solution quality and avoid local optima. SKH is demonstrated on two case-study systems, composed of 13-unit and 40-unit test systems, to verify its performance and applicability in solving ED problems. In these systems, SKH can successfully obtain the best fuel cost and distribute the load requirements among the online generators. The results showed that the use of the proposed SKH method can reduce the total generation cost and optimize the fulfillment of the load requirements.
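
For concreteness, here is a minimal Python sketch of the non-convex fuel cost with the valve-point effect that such a metaheuristic minimizes; the three-unit coefficients and the dispatch vector are hypothetical, and the demand-balance and generation-limit constraints handled by SKH are omitted.

```python
import numpy as np

def fuel_cost(P, a, b, c, e, f, Pmin):
    """Non-convex fuel cost of one unit including the valve-point effect:
    F(P) = a*P^2 + b*P + c + |e * sin(f * (Pmin - P))|."""
    return a * P**2 + b * P + c + np.abs(e * np.sin(f * (Pmin - P)))

def total_cost(P, coeffs):
    """Objective that a metaheuristic such as SKH minimizes for a dispatch P."""
    return sum(fuel_cost(P[i], *coeffs[i]) for i in range(len(P)))

# Hypothetical 3-unit system: (a, b, c, e, f, Pmin) per unit
coeffs = [(0.0016, 7.92, 561.0, 300.0, 0.0315, 100.0),
          (0.0048, 7.97, 78.0, 150.0, 0.063, 50.0),
          (0.0019, 7.85, 310.0, 200.0, 0.042, 80.0)]
dispatch = np.array([250.0, 120.0, 180.0])   # MW per unit; must also meet demand
print(total_cost(dispatch, coeffs))
```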

Keywords: stud krill herd, economic dispatch, crossover, stud selection, valve-point effect

Procedia PDF Downloads 197
993 Automated Heart Sound Classification from Unsegmented Phonocardiogram Signals Using Time Frequency Features

Authors: Nadia Masood Khan, Muhammad Salman Khan, Gul Muhammad Khan

Abstract:

Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart diseases, there is a need to develop computer-aided detection/diagnosis (CAD) systems to assist cardiologists in interpreting heart sounds and provide second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. Support vector machine (SVM), artificial neural network (ANN), and cartesian genetic programming evolved artificial neural network (CGPANN), without the application of any segmentation algorithm, have been explored in this study. The signals are first pre-processed to remove any unwanted frequencies. Both time- and frequency-domain features are then extracted for training the different models. The different algorithms are tested in multiple scenarios, and their strengths and weaknesses are discussed. Results indicate that SVM outperforms the rest with an accuracy of 73.64%.

Keywords: pattern recognition, machine learning, computer aided diagnosis, heart sound classification, feature extraction

Procedia PDF Downloads 260
992 Improvement of Cross Range Resolution in Through Wall Radar Imaging Using Bilateral Backprojection

Authors: Rashmi Yadawad, Disha Narayanan, Ravi Gautam

Abstract:

Through-wall radar imaging is gaining increasing importance nowadays in the field of defense, and one of the most important criteria that forms the basis for the obtained image quality is the cross-range resolution of the image. In this research paper, the bilateral back projection algorithm has been implemented for through-wall radar imaging, with the sole purpose of enhancing the resolution in the cross-range direction of the obtained back projection image. Synthetic data are generated for two targets placed at various locations in a room of dimensions 8 m by 6 m. Two algorithms, namely simple back projection and bilateral back projection, have been implemented, images are obtained, and the obtained images are compared. Numerical simulations have been coded in MATLAB, and experimental results of the two algorithms have been shown. Based on the comparison between the two images, it can be clearly seen that the ringing effect and chessboard effect are heavily reduced in the bilaterally back projected image, and hence promising results are obtained, giving a relatively sharper image with well-defined edges.

Keywords: through wall radar imaging, bilateral back projection, cross range resolution, synthetic data

Procedia PDF Downloads 344
991 An Adaptive Hybrid Surrogate-Assisted Particle Swarm Optimization Algorithm for Expensive Structural Optimization

Authors: Xiongxiong You, Zhanwen Niu

Abstract:

Choosing an appropriate surrogate model plays an important role in surrogate-assisted evolutionary algorithms (SAEAs), since there are many model types and different kernel functions for the surrogate model. In this paper, an adaptive method for selecting the most suitable surrogate model is proposed to solve different kinds of expensive optimization problems. Firstly, according to the prediction residual error sum of squares (PRESS) and different model selection strategies, excellent individual surrogate models are integrated into multiple ensemble models in each generation. Then, based on the minimum root mean square error (RMSE), the most suitable surrogate model is selected dynamically. Secondly, two methods with a dynamic number of models and selection strategies are designed, which are used to show the influence of the number of individual models and of the selection strategy. Finally, comparative studies are carried out on several commonly used benchmark problems, as well as on a rotor system optimization problem. The results demonstrate the accuracy and robustness of the proposed method.

Keywords: adaptive selection, expensive optimization, rotor system, surrogate-assisted evolutionary algorithms

Procedia PDF Downloads 137
990 Impact of Air Pressure and Outlet Temperature on Physicochemical and Functional Properties of Spray-dried Skim Milk Powder

Authors: Adeline Meriaux, Claire Gaiani, Jennifer Burgain, Frantz Fournier, Lionel Muniglia, Jérémy Petit

Abstract:

The spray-drying process is widely used for the production of dairy powders for the food and pharmaceutical industries. It involves the atomization of a liquid feed into fine droplets, which are subsequently dried through contact with a hot air flow. The resulting powders reduce transportation costs and increase shelf life, but they can also exhibit various interesting functionalities (flowability, solubility, protein modification, or acid gelation), depending on the operating conditions and milk composition. Indeed, particle porosity, surface composition, lactose crystallization, protein denaturation, protein association, or crust formation may change. The links between spray-drying conditions and the physicochemical and functional properties of the powders were investigated by a design of experiments methodology and analyzed by principal component analysis. Quadratic models were developed, and multicriteria optimization was carried out using a genetic algorithm. At the time of abstract submission, verification spray-drying trials are ongoing. To perform the experiments, milk from a dairy farm was collected, skimmed, frozen, and spray-dried at different air pressures (between 1 and 3 bars) and outlet temperatures (between 75 and 95 °C). Dry matter, mineral content, and protein content were determined by standard methods. Solubility index, absorption index, and hygroscopicity were determined by methods found in the literature. Particle size distributions were obtained by laser diffraction granulometry. The location of the powder color in the CIELAB color space and the water activity were characterized by a colorimeter and an aw-value meter, respectively. Flow properties were characterized with an FT4 powder rheometer; in particular, compressibility and shearing tests were performed. Air pressure and outlet temperature are key factors that directly impact the drying kinetics and powder characteristics during the spray-drying process. It was shown that the air pressure affects the particle size distribution by impacting the size of the droplets exiting the nozzle. Moreover, small particles lead to a more cohesive powder and a less saturated powder color. A higher outlet temperature results in particles with a lower moisture level, which are less sticky; this can explain the increase in spray-drying yield and the higher cohesiveness. It also leads to particles with low water activity because of the intense evaporation rate. However, it induces a high hygroscopicity; thus, the powders tend to get wet rapidly if they are not well stored. On the other hand, a high temperature provokes a decrease in native serum proteins, which is positively correlated with gelation properties (gel point and firmness). Partial denaturation of serum proteins can improve the functional properties of the powder. Controlling the air pressure and outlet temperature during the spray-drying process therefore significantly affects the physicochemical and functional properties of the powder. This study permitted a better understanding of the links between the physicochemical and functional properties of the powder and identified correlations with air pressure and outlet temperature. Mathematical models have therefore been developed, and the use of a genetic algorithm will allow the optimization of the powder functionalities.

Keywords: dairy powders, spray-drying, powders functionalities, design of experiment

Procedia PDF Downloads 90