Search results for: optimisation algorithms
1612 Remote Sensing Approach to Predict the Impacts of Land Use/Land Cover Change on Urban Thermal Comfort Using Machine Learning Algorithms
Authors: Ahmad E. Aldousaria, Abdulla Al Kafy
Abstract:
Urbanization is an incessant process that involves the transformation of land use/land cover (LULC), resulting in a reduction of cool land covers and thermal comfort zones (TCZs). This study explores the directional shrinkage of TCZs in Kuwait using Landsat satellite data from 1991–2021 to predict the future LULC and TCZ distribution for 2026 and 2031 using cellular automata (CA) and artificial neural network (ANN) algorithms. Analysis revealed a rapid urban expansion (40%) in the SE, NE, and NW directions and TCZ shrinkage in the N–NW and SW directions, with 25% of the area classified as very uncomfortable. The predicted results showed an urban area increase from 44% in 2021 to 47% and 52% in 2026 and 2031, respectively, where uncomfortable zones were found to be concentrated around urban areas and bare lands in the N–NE and N–NW directions. This study proposes an effective and sustainable framework to control TCZ shrinkage, including zero-soil policies, planned landscape design, manmade water bodies, and rooftop gardens. This study will help urban planners and policymakers to make Kuwait an eco-friendly, functional, and sustainable country.
Keywords: land cover change, thermal environment, green cover loss, machine learning, remote sensing
Procedia PDF Downloads 227

1611 The Asymmetric Proximal Support Vector Machine Based on Multitask Learning for Classification
Authors: Qing Wu, Fei-Yan Li, Heng-Chang Zhang
Abstract:
Multitask learning support vector machines (SVMs) have recently attracted increasing research attention. Given several related tasks, single-task learning methods train each task separately and ignore the inner cross-relationships among tasks. However, multitask learning can capture the correlation information among tasks and achieve better performance by training all tasks simultaneously. In addition, the asymmetric squared loss function can better improve the generalization ability of models on asymmetrically distributed data. In this paper, we first make two assumptions on the relatedness among tasks and propose two multitask learning proximal support vector machine algorithms, named MTL-a-PSVM and EMTL-a-PSVM, respectively. MTL-a-PSVM seeks a trade-off between the maximum expectile distance for each task model and the closeness of each task model to the general model. As an extension of MTL-a-PSVM, EMTL-a-PSVM can select appropriate kernel functions for shared information and private information. Besides, two corresponding special cases named MTL-PSVM and EMTL-PSVM are proposed by analyzing the asymmetric squared loss function, which can be easily implemented by solving linear systems. Experimental analysis of three classification datasets demonstrates the effectiveness and superiority of our proposed multitask learning algorithms.
Keywords: multitask learning, asymmetric squared loss, EMTL-a-PSVM, classification
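The asymmetric squared loss at the heart of these models weights positive and negative residuals differently, so the fitted function tracks an expectile rather than the mean. A minimal numpy sketch of that loss, with the expectile parameter name assumed for illustration:

```python
import numpy as np

def asymmetric_squared_loss(residuals, p=0.7):
    """Asymmetric (expectile) squared loss: positive residuals are
    weighted by p, negative residuals by (1 - p). p = 0.5 recovers
    the ordinary squared loss up to a constant factor."""
    r = np.asarray(residuals, dtype=float)
    weights = np.where(r >= 0, p, 1.0 - p)
    return np.sum(weights * r ** 2)

# The same magnitude of error is penalized differently by sign:
print(asymmetric_squared_loss([2.0], p=0.7))   # 0.7 * 4 = 2.8
print(asymmetric_squared_loss([-2.0], p=0.7))  # 0.3 * 4 = 1.2
```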
Procedia PDF Downloads 134

1610 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core
Authors: Yashas Bedre Raghavendra, Pim Vullers
Abstract:
This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging AMBA (Advanced Microcontroller Bus Architecture) protocols, APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus), to enable seamless integration with the main CPU (Central Processing Unit) and enhance the coprocessor's algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (Elliptic-Curve Cryptography), RSA (Rivest–Shamir–Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (Direct Memory Access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future. As the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions into the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations. By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and reduced cycles per instruction (CPI) compared to traditional instruction sets. The adoption of the RISC-V 128-bit architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.
Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction
Procedia PDF Downloads 70

1609 Resilient Machine Learning in the Nuclear Industry: Crack Detection as a Case Study
Authors: Anita Khadka, Gregory Epiphaniou, Carsten Maple
Abstract:
There is a dramatic surge in the adoption of machine learning (ML) techniques in many areas, including the nuclear industry (such as fault diagnosis and fuel management in nuclear power plants), autonomous systems (including self-driving vehicles), space systems (space debris recovery, for example), medical surgery, network intrusion detection, and malware detection, to name a few. With the application of learning methods in such diverse domains, artificial intelligence (AI) has become a part of everyday modern human life. To date, the predominant focus has been on developing underpinning ML algorithms that can improve accuracy, while factors such as the resiliency and robustness of algorithms have been largely overlooked. If an adversarial attack is able to compromise the learning method or data, the consequences can be fatal, especially but not exclusively in safety-critical applications. In this paper, we present an in-depth analysis of five adversarial attacks and three defence methods on a crack detection ML model. Our analysis shows that it can be dangerous to adopt machine learning techniques in security-critical areas such as the nuclear industry without rigorous testing, since they may be vulnerable to adversarial attacks. While common defence methods can effectively defend against different attacks, none of the three considered can provide protection against all five adversarial attacks analysed.
Keywords: adversarial machine learning, attacks, defences, nuclear industry, crack detection
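The abstract does not name its five attacks; the fast gradient sign method (FGSM) is one widely used evasion attack of this kind, sketched here against an arbitrary PyTorch image classifier (`model`, `images`, and `labels` are assumed placeholders, not the paper's setup):

```python
import torch

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Fast Gradient Sign Method: perturb each pixel by +/- epsilon in
    the direction that increases the classification loss, producing an
    adversarial batch that often flips the model's predictions."""
    images = images.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()   # keep valid pixel range
```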
Procedia PDF Downloads 158

1608 Quantitative Evaluation of Endogenous Reference Genes for ddPCR under Salt Stress Using a Moderate Halophile
Authors: Qinghua Xing, Noha M. Mesbah, Haisheng Wang, Jun Li, Baisuo Zhao
Abstract:
Droplet digital PCR (ddPCR) is being increasingly adopted for gene detection and quantification because of its higher sensitivity and specificity. According to previous observations and our lab data, it is essential to use endogenous reference genes (RGs) when investigating gene expression at the mRNA level under salt stress. This study aimed to select and validate suitable RGs for gene expression under salt stress using ddPCR. Six candidate RGs were selected based on the tandem mass tag (TMT)-labeled quantitative proteomics of Alkalicoccus halolimnae at four salinities. The expression stability of these candidate genes was evaluated using statistical algorithms (geNorm, NormFinder, BestKeeper and RefFinder). There was a small fluctuation in the cycle threshold (Ct) value and copy number of the pdp gene. Its expression stability ranked highest across all algorithms, and it was the most suitable RG for quantification of expression by both qPCR and ddPCR of A. halolimnae under salt stress. The single RG pdp and RG combinations were used to normalize the expression of ectA, ectB, ectC, and ectD under four salinities. The present study constitutes the first systematic analysis of endogenous RG selection for halophiles responding to salt stress. This work provides valuable theory and a practical reference for internal control identification in ddPCR-based stress response models.
Keywords: endogenous reference gene, salt stress, ddPCR, RT-qPCR, Alkalicoccus halolimnae
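Of the four stability algorithms named above, geNorm is the simplest to state: a gene's M value is the mean standard deviation of its log-ratios against every other candidate, and a lower M means more stable expression. A minimal sketch, assuming expression levels arranged as a samples-by-genes array:

```python
import numpy as np

def genorm_m_values(expr):
    """geNorm expression-stability measure M for each candidate
    reference gene. expr: (n_samples, n_genes) array of expression
    levels (e.g., ddPCR copy numbers). Lower M = more stable."""
    log_expr = np.log2(expr)
    n_genes = log_expr.shape[1]
    m_values = []
    for j in range(n_genes):
        # Pairwise variation V(j, k) = stdev across samples of the
        # log2 expression ratio of gene j to gene k.
        variations = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                      for k in range(n_genes) if k != j]
        m_values.append(np.mean(variations))
    return np.array(m_values)

# Example: 4 salinity conditions x 3 candidate genes of copy numbers
expr = np.array([[1200, 800, 4000],
                 [1150, 820, 2500],
                 [1250, 790, 6100],
                 [1180, 810, 1900]])
print(genorm_m_values(expr))  # the third, erratic gene scores worst
```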
Procedia PDF Downloads 104

1607 A Study of Using Multiple Subproblems in Dantzig-Wolfe Decomposition of Linear Programming
Authors: William Chung
Abstract:
This paper studies the use of multiple subproblems in Dantzig-Wolfe decomposition of linear programming (DW-LP). Traditionally, the decomposed LP consists of one LP master problem and one LP subproblem. The master problem and the subproblem are solved alternately by exchanging the dual prices of the master problem and the proposals of the subproblem until the LP is solved. It is well known that convergence is slow, with a long tail of near-optimal solutions (asymptotic convergence). Hence, the performance of DW-LP highly depends upon the number of decomposition steps. If the decomposition steps can be greatly reduced, the performance of DW-LP can be improved significantly. To reduce the number of decomposition steps, one of the methods is to increase the number of proposals from the subproblem to the master problem. To do so, we propose to add a quadratic approximation function to the LP subproblem in order to develop a set of approximate-LP subproblems (multiple subproblems). Consequently, in each decomposition step, multiple subproblems are solved to provide multiple proposals to the master problem. The number of decomposition steps can be reduced greatly. Note that each approximate-LP subproblem is a nonlinear program, and solving the LP subproblem must be faster than solving the nonlinear multiple subproblems. Hence, using multiple subproblems in DW-LP involves a tradeoff between the number of approximate-LP subproblems being formed and the decomposition steps. In this paper, we derive the corresponding algorithms and provide some simple computational results. Some properties of the resulting algorithms are also given.
Keywords: approximate subproblem, Dantzig-Wolfe decomposition, large-scale models, multiple subproblems
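For reference, the classical single-subproblem DW loop that the paper speeds up alternates a restricted master over known extreme points with a pricing subproblem, stopping when no proposal has negative reduced cost. A toy SciPy sketch of that baseline loop (the instance and tolerance are made up for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP in DW form: min c@x  s.t. coupling A@x <= b, and x in
# X = {x >= 0, D@x <= d} (the subproblem polytope).
c = np.array([-2.0, -3.0])                         # maximize 2*x1 + 3*x2
A = np.array([[1.0, 1.0]]); b = np.array([3.0])    # coupling row
D = np.eye(2); d = np.array([2.0, 2.0])            # subproblem region

points = [np.zeros(2)]                             # initial extreme point of X
for step in range(50):
    # Restricted master over convex combinations of known extreme points.
    costs = np.array([c @ x for x in points])
    A_ub = np.column_stack([A @ x for x in points])
    A_eq = np.ones((1, len(points)))               # convexity row
    master = linprog(costs, A_ub=A_ub, b_ub=b, A_eq=A_eq, b_eq=[1.0],
                     bounds=(0, None), method="highs")
    pi = master.ineqlin.marginals                  # duals of coupling rows
    mu = master.eqlin.marginals[0]                 # dual of convexity row
    # Pricing subproblem: most negative reduced cost over X.
    sub = linprog(c - pi @ A, A_ub=D, b_ub=d, bounds=(0, None),
                  method="highs")
    if sub.fun >= mu - 1e-9:                       # no improving proposal left
        break
    points.append(sub.x)                           # new proposal for the master

x_opt = sum(lam * x for lam, x in zip(master.x, points))
print(step + 1, "decomposition steps; x* =", x_opt)  # expect x* = [1, 2]
```

The paper's idea is to generate several proposals per step instead of the single `sub.x` above, shrinking the iteration count of exactly this loop.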
Procedia PDF Downloads 166

1606 Aggregation Scheduling Algorithms in Wireless Sensor Networks
Authors: Min Kyung An
Abstract:
In wireless sensor networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination, called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption, due to the nodes' limited energy; therefore, the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum latency schedule, that is, to compute a schedule with the minimum number of timeslots, such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For the problem, two interference models, the graph model and the more realistic physical interference model known as Signal-to-Interference-Noise-Ratio (SINR), have been adopted with different power models, uniform-power and non-uniform power (with or without power control), and different antenna models, the omni-directional and directional antenna models. In this survey article, as the problem has proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in various models on the basis of latency as the performance measure.
Keywords: data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional
Procedia PDF Downloads 229

1605 Evaluation of Beam Structure Using Non-Destructive Vibration-Based Damage Detection Method
Authors: Bashir Ahmad Aasim, Abdul Khaliq Karimi, Jun Tomiyama
Abstract:
Material aging is one of the vital issues among the civil, mechanical, and aerospace engineering societies. The maintenance and reliability of concrete, the most widely used material in the world, is the focal point in civil engineering societies. For a few decades, researchers have presented algorithms that can evaluate a structure globally rather than locally, without harming its serviceability or interfering with traffic. These algorithms underpin different methods for evaluating structures non-destructively. In this paper, a non-destructive vibration-based damage detection method is adopted to evaluate two concrete beams, one being in a healthy state while the second contains a crack on its bottom vicinity. The study discusses that damage in a structure affects the modal parameters (natural frequency, mode shape, and damping ratio), which are functions of its physical properties (mass, stiffness, and damping). The assessment is carried out to acquire the natural frequency of the sound beam. Next, the vibration response is recorded from the cracked beam. Eventually, both results are compared to determine the variation in the natural frequencies of the two beams. The study concludes that damage can be detected using the vibration characteristics of a structural member, given the decline observed in the natural frequency of the cracked beam.
Keywords: concrete beam, natural frequency, non-destructive testing, vibration characteristics
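The comparison at the core of the method reduces to estimating each beam's dominant natural frequency from a recorded vibration response, for which a spectral peak pick is the simplest approach. A sketch with synthetic decaying oscillations standing in for the measured responses (the 12 Hz and 10.8 Hz values are invented for illustration, not the paper's measurements):

```python
import numpy as np

def natural_frequency(signal, sampling_rate):
    """Estimate the dominant natural frequency (Hz) of a free-vibration
    record by locating the highest peak of its amplitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sampling_rate)
    return freqs[np.argmax(spectrum)]

# Synthetic check: decaying oscillations sampled at 1 kHz for 5 s
t = np.arange(0, 5, 0.001)
healthy = np.exp(-0.5 * t) * np.sin(2 * np.pi * 12.0 * t)
cracked = np.exp(-0.5 * t) * np.sin(2 * np.pi * 10.8 * t)  # stiffness loss
print(natural_frequency(healthy, 1000))  # ~12.0 Hz
print(natural_frequency(cracked, 1000))  # ~10.8 Hz, i.e., a measurable drop
```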
Procedia PDF Downloads 112

1604 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict the algae concentration of the ocean with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the sea in Korea. The method employed GOCI images, which record the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure) as the database to apply the optical characteristics of algae and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and then an artificial neural network (ANN) was used to estimate the concentration of algae from the extracted features. For training of the deep learning model, a backpropagation learning strategy was developed. The established methods were tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing and optical algorithms. The model performed better at estimating algae concentration than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
Keywords: deep learning, algae concentration, remote sensing, satellite
Procedia PDF Downloads 183

1603 High Resolution Image Generation Algorithm for Archaeology Drawings
Authors: Xiaolin Zeng, Lei Cheng, Zhirong Li, Xueping Liu
Abstract:
Aiming at the problems of low accuracy and susceptibility to cultural relic deterioration ("diseases") in the generation of high-resolution archaeology drawings by current image generation algorithms, an archaeology drawings generation algorithm based on a conditional generative adversarial network is proposed. An attention mechanism is added to the high-resolution image generation network serving as the backbone, which enhances the line feature extraction capability and improves the accuracy of line drawing generation. A dual-branch parallel architecture consisting of two backbone networks is implemented, where the semantic translation branch extracts semantic features from orthophotographs of cultural relics, and the gradient screening branch extracts effective gradient features. Finally, the fusion fine-tuning module combines these two types of features to achieve the generation of high-quality and high-resolution archaeology drawings. Experimental results on the self-constructed archaeology drawings dataset of grotto temple statues show that the proposed algorithm outperforms current mainstream image generation algorithms in terms of pixel accuracy (PA), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR) and can be used to assist in drawing archaeology drawings.
Keywords: archaeology drawings, digital heritage, image generation, deep learning
Procedia PDF Downloads 59

1602 Development of a Decision Model to Optimize Total Cost in Food Supply Chain
Authors: Henry Lau, Dilupa Nakandala, Li Zhao
Abstract:
All along the length of the supply chain, fresh food firms face the challenge of managing both product quality, due to the perishable nature of the products, and product cost. This paper develops a method to assist logistics managers upstream in the fresh food supply chain in making cost-optimized decisions regarding transportation, with the objective of minimizing the total cost while maintaining the quality of food products above acceptable levels. Considering the case of multiple fresh food products collected from multiple farms being transported to a warehouse or a retailer, this study develops a total cost model that includes various costs incurred during transportation. The practical application of the model is illustrated by using several computational intelligence approaches, including Genetic Algorithms (GA), Fuzzy Genetic Algorithms (FGA), as well as an improved Simulated Annealing (SA) procedure applied with a repair mechanism, for efficiency benchmarking. We demonstrate the practical viability of these approaches by using a simulation study based on pertinent data and evaluate the simulation outcomes. The application of the proposed total cost model was demonstrated using the three approaches of GA, FGA and SA with a repair mechanism. All three approaches are adoptable; however, based on the performance evaluation, it was evident that the FGA is more likely to produce better performance than the other two approaches of GA and SA. This study provides a pragmatic approach for supporting logistics and supply chain practitioners in the fresh food industry in making important decisions on the arrangements and procedures related to the transportation of multiple fresh food products to a warehouse from multiple farms in a cost-effective way without compromising product quality. This study extends the literature on cold supply chain management by investigating cost and quality optimization in a multi-product scenario from farms to a retailer and minimizing cost by managing the quality above expected quality levels at delivery. The scalability of the proposed generic function enables the application to alternative situations in practice, such as different storage environments and transportation conditions.
Keywords: cost optimization, food supply chain, fuzzy sets, genetic algorithms, product quality, transportation
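To make the GA variant concrete, the sketch below evolves farm-to-truck assignments against a toy cost matrix; the encoding, operators, and all figures are illustrative assumptions rather than the authors' exact formulation (which also handles quality constraints):

```python
import random

# Illustrative GA skeleton: assign farm-to-warehouse shipments to trucks
# so the total transport cost is minimized (toy cost model).
random.seed(0)
N_FARMS, N_TRUCKS = 8, 3
COST = [[random.uniform(10, 50) for _ in range(N_TRUCKS)] for _ in range(N_FARMS)]

def fitness(assignment):           # lower total cost = fitter
    return -sum(COST[farm][truck] for farm, truck in enumerate(assignment))

def crossover(a, b):               # one-point crossover of assignments
    cut = random.randrange(1, N_FARMS)
    return a[:cut] + b[cut:]

def mutate(chromosome, rate=0.1):  # random-reassignment mutation
    return [random.randrange(N_TRUCKS) if random.random() < rate else g
            for g in chromosome]

population = [[random.randrange(N_TRUCKS) for _ in range(N_FARMS)]
              for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]      # truncation selection
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(20)]
best = max(population, key=fitness)
print("best assignment:", best, "cost:", -fitness(best))
```

The FGA variant favoured by the study would replace the crisp fitness with a fuzzy-set evaluation; the SA variant replaces the population loop with a single-solution annealing schedule plus the repair step.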
Procedia PDF Downloads 223

1601 Simulation of Multistage Extraction Process of Co-Ni Separation Using Ionic Liquids
Authors: Hongyan Chen, Megan Jobson, Andrew J. Masters, Maria Gonzalez-Miquel, Simon Halstead, Mayri Diaz de Rienzo
Abstract:
Ionic liquids offer excellent advantages over conventional solvents for industrial extraction of metals from aqueous solutions, where such extraction processes bring opportunities for recovery, reuse, and recycling of valuable resources and more sustainable production pathways. Recent research on the use of ionic liquids for extraction confirms their high selectivity and low volatility, but there is relatively little focus on how their properties can be best exploited in practice. This work addresses gaps in research on process modelling and simulation, to support development, design, and optimisation of these processes, focusing on the separation of the highly similar transition metals cobalt and nickel. The study exploits published experimental results, as well as new experimental results, relating to the separation of Co and Ni using trihexyl(tetradecyl)phosphonium chloride. This extraction agent is attractive because it is cheaper, more stable and less toxic than fluorinated hydrophobic ionic liquids. This process modelling work concerns the selection and/or development of suitable models for the physical properties, the distribution coefficients, the mass transfer phenomena, the extractor unit, and the multi-stage extraction flowsheet. The distribution coefficient model for cobalt and HCl represents an anion exchange mechanism, supported by the literature and COSMO-RS calculations. Parameters of the distribution coefficient models are estimated by fitting the model to published experimental extraction equilibrium results. The mass transfer model applies Newman's hard sphere model. Diffusion coefficients in the aqueous phase are obtained from the literature, while diffusion coefficients in the ionic liquid phase are fitted to dynamic experimental results. The mass transfer area is calculated from the surface to mean diameter of liquid droplets of the dispersed phase, estimated from the Weber number inside the extractor. New experiments measure the interfacial tension between the aqueous and ionic phases. The empirical models for predicting the density and viscosity of solutions under different metal loadings are also fitted to new experimental data. The extractor is modelled as a continuous stirred tank reactor with mass transfer between the two phases and perfect phase separation of the outlet flows. A multistage separation flowsheet simulation is set up to replicate a published experiment and compare model predictions with the experimental results. This simulation model is implemented in gPROMS software for dynamic process simulation. The results of single-stage and multi-stage flowsheet simulations are shown to be in good agreement with the published experimental results. The estimated diffusion coefficient of cobalt in the ionic liquid phase is in reasonable agreement with published data for the diffusion coefficients of various metals in this ionic liquid. A sensitivity study with this simulation model demonstrates the usefulness of the models for process design. The simulation approach has potential to be extended to account for other metals, acids, and solvents for process development, design, and optimisation of extraction processes applying ionic liquids for metals separations, although a lack of experimental data is currently limiting the accuracy of models within the whole framework.
Future work will focus on process development more generally and on extractive separation of rare earths using ionic liquids.
Keywords: distribution coefficient, mass transfer, COSMO-RS, flowsheet simulation, phosphonium
Procedia PDF Downloads 190

1600 Fraud Detection in Credit Cards with Machine Learning
Authors: Anjali Chouksey, Riya Nimje, Jahanvi Saraf
Abstract:
Online transactions have increased dramatically in this new 'social-distancing' era. With online transactions, fraud in online payments has also increased significantly. Fraud is a significant problem in various industries like insurance and banking. These frauds include leaking sensitive information related to the credit card, which can be easily misused. With the government also pushing online transactions, e-commerce is booming. But due to increasing fraud in online payments, e-commerce industries are suffering a great loss of trust from their customers. These companies are finding credit card fraud to be a big problem. People have started using online payment options and thus are becoming easy targets of credit card fraud. In this research paper, we discuss machine learning algorithms. We have used a decision tree, XGBoost, k-nearest neighbour, logistic regression, random forest, and SVM on a dataset of transactions made online using credit cards. We test all these algorithms for detecting fraud cases using the confusion matrix and F1 score, and calculate the accuracy score for each model to identify which algorithm can be used in detecting frauds.
Keywords: machine learning, fraud detection, artificial intelligence, decision tree, k-nearest neighbour, random forest, XGBoost, logistic regression, support vector machine
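A compact version of that benchmarking loop with scikit-learn; a synthetic imbalanced dataset stands in for the credit card data, and XGBoost is omitted so the sketch needs only one library:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for a credit card dataset: highly imbalanced binary labels.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=42)
models = {
    "decision tree": DecisionTreeClassifier(random_state=42),
    "kNN": KNeighborsClassifier(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=42),
    "SVM": SVC(),
}
for name, model in models.items():
    preds = model.fit(X_train, y_train).predict(X_test)
    print(name, "F1:", round(f1_score(y_test, preds), 3),
          "accuracy:", round(accuracy_score(y_test, preds), 3))
    print(confusion_matrix(y_test, preds))
```

On data this imbalanced, the F1 score and confusion matrix matter far more than raw accuracy, which is why the paper reports all three.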
Procedia PDF Downloads 148

1599 To Ensure Maximum Voter Privacy in E-Voting Using Blockchain, Convolutional Neural Network, and Quantum Key Distribution
Authors: Bhaumik Tyagi, Mandeep Kaur, Kanika Singla
Abstract:
The advancement of blockchain has facilitated scholars to remodel e-voting systems for future generations. Server-side attacks like SQL injection and DoS attacks are the most common attacks nowadays, where malicious code is injected into the system through user input fields by illicit users, which leads to data leakage in the worst scenarios. Besides, there are also quantum attacks, which manipulate transactional data. In order to deal with all the above-mentioned attacks, this research integrates blockchain, a convolutional neural network (CNN), and quantum key distribution. The utilization of blockchain technology in e-voting applications is not a novel concept, but privacy and security issues remain in both public and private blockchains. To solve this, a hybrid blockchain is used in this research. This research proposes cryptographic signatures and blockchain algorithms to validate the origin and integrity of the votes. The convolutional neural network (CNN), a normalized version of the multilayer perceptron, is also applied in the system to analyze visual descriptions upon registration, to enhance the privacy of voters and the e-voting system. Quantum key distribution is implemented in order to secure the blockchain-based e-voting system from quantum attacks using quantum algorithms. An e-voting blockchain DApp is implemented, providing a proposed solution for voter privacy in e-voting using blockchain, CNN, and quantum key distribution.
Keywords: hybrid blockchain, secure e-voting system, convolutional neural networks, quantum key distribution, one-time pad
Procedia PDF Downloads 94

1598 Mathematical Modelling of a Low Tip Speed Ratio Wind Turbine for System Design Evaluation
Authors: Amir Jalalian-Khakshour, T. N. Croft
Abstract:
Vertical Axis Wind Turbine (VAWT) systems are becoming increasingly popular as they have a number of advantages over traditional wind turbines. The advantages are reliability and ease of transportation and manufacturing. These attributes could make these technologies useful in developing economies. The performance characteristics of a VAWT are different from those of a horizontal axis wind turbine, which can be attributed to the low tip speed ratio operation. To unlock the potential of these VAWT systems, the operational behaviour in a number of system topologies and environmental conditions needs to be understood. In this study, a non-linear dynamic simulation method was developed in Matlab and validated against in-field data of a large-scale, 8-meter rotor diameter prototype. This simulation method has been utilised to determine the performance characteristics of a number of control methods and system topologies. The motivation for this research was to develop a simulation method which accurately captures the operating behaviour and is computationally inexpensive. The model was used to evaluate the performance through parametric studies and optimisation techniques. The study gave useful insights into the applications and energy generation potential of this technology.
Keywords: power generation, renewable energy, rotordynamics, wind energy
Procedia PDF Downloads 304

1597 Classification of Land Cover Usage from Satellite Images Using Deep Learning Algorithms
Authors: Shaik Ayesha Fathima, Shaik Noor Jahan, Duvvada Rajeswara Rao
Abstract:
Earth's environment and its evolution can be seen through satellite images in near real-time. Through satellite imagery, remote sensing data provide crucial information that can be used for a variety of applications, including image fusion, change detection, land cover classification, agriculture, mining, disaster mitigation, and monitoring climate change. The objective of this project is to propose a method for classifying satellite images according to multiple predefined land cover classes. The proposed approach involves collecting data in image format. The data are then pre-processed using data pre-processing techniques, fed into the proposed algorithm, and the obtained result is analyzed. Some of the algorithms used in satellite imagery classification are U-Net, Random Forest, DeepLabv3, CNN, ANN, ResNet, etc. In this project, we use the DeepLabv3 (atrous convolution) algorithm for land cover classification. The dataset used is the DeepGlobe Land Cover Classification dataset. DeepLabv3 is a semantic segmentation system that uses atrous convolution to capture multi-scale context by adopting multiple atrous rates in cascade or in parallel to determine the scale of segments.
Keywords: area calculation, atrous convolution, DeepGlobe land cover classification, DeepLabv3, land cover classification, ResNet-50
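DeepLabv3 ships ready-made in torchvision, so the core of such a pipeline can be stated briefly; the 7-class head below matches the DeepGlobe label set, while the input tile and its preprocessing are placeholders:

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# DeepLabv3 with a ResNet-50 backbone, reconfigured for the 7 DeepGlobe
# land cover classes (urban, agriculture, rangeland, forest, water,
# barren, unknown). weights=None: train from scratch on the dataset.
model = deeplabv3_resnet50(weights=None, num_classes=7)
model.eval()

# One 3-channel 256x256 satellite tile (normalized beforehand in practice)
tile = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    logits = model(tile)["out"]           # shape: (1, 7, 256, 256)
class_map = logits.argmax(dim=1)          # per-pixel land cover class
print(class_map.shape)                    # torch.Size([1, 256, 256])
```

Per-class area (the "area calculation" keyword) then follows directly from counting pixels in `class_map` and multiplying by the ground resolution of a pixel.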
Procedia PDF Downloads 140

1596 On the End-of-Life Inventory Problem
Authors: Hans Frenk, Sonya Javadi, Semih Onur Sezer
Abstract:
We consider the so-called end-of-life inventory problem for the supplier of a product in its final phase of the service life cycle. This phase starts when the production of the items stops and continues until the warranty of the last sold item expires. At the beginning of this phase, the supplier places a final order for spare parts to serve customers coming with defective items. At any time during the final phase, the supplier may also decide to switch to an alternative and more cost-effective policy. This alternative policy may be in the form of replacing a defective item with a substitutable product or offering discounts/rebates on new generation products. In this setup, the objective is to find a final order quantity and a switching time which together minimize the total expected discounted cost. We study this problem under a general cost structure in a continuous-time framework where arrivals of defective items are given by a non-homogeneous Poisson process. We consider four formulations which differ in the nature of the switching time. These formulations are studied in detail, and properties of the objective function are derived in each case. Using these properties, we provide exact algorithms for efficient numerical implementations. Numerical examples are provided illustrating the application of these algorithms. In these examples, we also compare the costs associated with these different formulations.
Keywords: end-of-life inventory control, martingales, optimization, service parts
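While the paper derives exact algorithms, the objective itself can be approximated by simulation: thin a non-homogeneous Poisson process of defect arrivals and discount the resulting costs. A sketch under invented cost parameters and an assumed decaying intensity (none of these figures come from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

def demand_rate(t):
    """Assumed failure-arrival intensity: demand decays as the
    installed base leaves the field (illustrative only)."""
    return 40.0 * np.exp(-0.5 * t)

def simulate_cost(final_order, switch_time, horizon=10.0, unit_cost=20.0,
                  holding=2.0, substitute_cost=120.0, discount=0.1,
                  n_paths=4000):
    """Expected discounted cost of a final order plus a switch to an
    alternative policy at switch_time, via NHPP thinning."""
    lam_max = demand_rate(0.0)        # intensity is maximal at t = 0
    total = 0.0
    for _ in range(n_paths):
        stock, cost, t = final_order, final_order * unit_cost, 0.0
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > horizon:
                break
            if rng.random() > demand_rate(t) / lam_max:
                continue              # thinned-out candidate event
            disc = np.exp(-discount * t)
            if t >= switch_time or stock == 0:
                cost += substitute_cost * disc   # alternative policy
            else:
                stock -= 1                       # serve from final order
        cost += holding * final_order            # crude holding charge
        total += cost
    return total / n_paths

print(simulate_cost(final_order=60, switch_time=4.0))
```

A grid search over `final_order` and `switch_time` on this estimator mimics, roughly and slowly, what the paper's exact algorithms do analytically.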
Procedia PDF Downloads 335

1595 Symbolic Computation via Grobner Basis
Authors: Haohao Wang
Abstract:
The purpose of this paper is to find elimination ideals via Grobner bases. We first introduce the concept of Grobner bases and then provide computational algorithms with applications to curves and surfaces.
Keywords: curves, surfaces, Grobner basis, elimination
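The elimination theorem in action: with a lexicographic order that ranks the parameter highest, the basis elements free of that parameter generate the elimination ideal, which implicitizes a parametric curve. A short SymPy example for the cuspidal cubic:

```python
from sympy import groebner, symbols

# Implicitization by elimination: remove the parameter t from the
# parametric curve x = t**2, y = t**3.
t, x, y = symbols("t x y")
G = groebner([x - t**2, y - t**3], t, x, y, order="lex")

# With lex order t > x > y, the basis elements not involving t
# generate the elimination ideal; here we recover the implicit
# equation x**3 - y**2 = 0 of the cuspidal cubic.
elimination_ideal = [g for g in G.exprs if t not in g.free_symbols]
print(elimination_ideal)   # [x**3 - y**2]
```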
Procedia PDF Downloads 299

1594 Optimization of Hate Speech and Abusive Language Detection on Indonesian-Language Twitter Using Genetic Algorithms
Authors: Rikson Gultom
Abstract:
Hate speech and abusive language on social media are difficult to detect; usually they are detected only after becoming viral in cyberspace, which is, of course, too late for prevention. An early detection system with fairly good accuracy is needed so that it can reduce the conflicts in society caused by postings on social media that attack individuals, groups, and governments in Indonesia. The purpose of this study is to find the early detection model on the Twitter social media platform, using machine learning, that has the highest accuracy among the several machine learning methods studied. In this study, the support vector machine (SVM), Naïve Bayes (NB), and Random Forest Decision Tree (RFDT) methods were compared with the support vector machine with genetic algorithm (SVM-GA), Naïve Bayes with genetic algorithm (NB-GA), and Random Forest Decision Tree with genetic algorithm (RFDT-GA). The study produced a comparison table for the accuracy of the hate speech and abusive language detection models, presented it as a graph of the accuracy of the six algorithms developed on the Indonesian-language Twitter dataset, and concluded with the best model, that is, the one with the highest accuracy.
Keywords: abusive language, hate speech, machine learning, optimization, social media
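The GA-optimized variants amount to evolving each classifier's hyperparameters against cross-validated accuracy. A sketch of that idea for the SVM-GA pairing, using a public English corpus as a stand-in since the Indonesian Twitter dataset is not available here (all GA settings are illustrative):

```python
import random
from sklearn.datasets import fetch_20newsgroups   # stand-in text corpus
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

data = fetch_20newsgroups(subset="train",
                          categories=["talk.politics.misc",
                                      "rec.sport.hockey"])
X = TfidfVectorizer(max_features=2000).fit_transform(data.data)
y = data.target

random.seed(1)
def fitness(genes):               # genes = (log10 C, log10 gamma)
    clf = SVC(C=10 ** genes[0], gamma=10 ** genes[1])
    return cross_val_score(clf, X, y, cv=3).mean()

population = [(random.uniform(-2, 3), random.uniform(-4, 1))
              for _ in range(8)]
for generation in range(5):
    parents = sorted(population, key=fitness, reverse=True)[:4]
    children = [(random.choice(parents)[0] + random.gauss(0, 0.3),
                 random.choice(parents)[1] + random.gauss(0, 0.3))
                for _ in range(4)]                # Gaussian mutation
    population = parents + children
best = max(population, key=fitness)
print("best C=%.3f gamma=%.5f acc=%.3f"
      % (10 ** best[0], 10 ** best[1], fitness(best)))
```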
Procedia PDF Downloads 128

1593 Machine Learning Assisted Selective Emitter Design for Solar Thermophotovoltaic System
Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko
Abstract:
Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic
Procedia PDF Downloads 61

1592 Describing the Fine Electronic Structure and Predicting Properties of Materials with ATOMIC MATTERS Computation System
Authors: Rafal Michalski, Jakub Zygadlo
Abstract:
We present the concept, scientific methods, and algorithms of our computation system called ATOMIC MATTERS. This is the first presentation of the new computer package, which allows its user to describe physical properties of atomic localized electron systems subject to electromagnetic interactions. Our solution applies to situations where an unclosed electron 2p/3p/3d/4d/5d/4f/5f subshell interacts with an electrostatic potential of definable symmetry and an external magnetic field. Our methods are based on the Crystal Electric Field (CEF) approach, which takes into consideration the electrostatic ligand field as well as the magnetic Zeeman effect. The application allowed us to predict macroscopic properties of materials, such as magnetic, spectral, and calorimetric properties, as a result of the physical properties of their fine electronic structure. We emphasize the importance of the symmetry of the charge surroundings of the atom/ion, spin-orbit interactions (spin-orbit coupling), and the use of complex number matrices in the definition of the Hamiltonian. The calculation methods, algorithms, and convention recalculation tools collected in ATOMIC MATTERS were chosen to permit the prediction of magnetic and spectral properties of materials in isostructural series.
Keywords: atomic matters, crystal electric field (CEF), spin-orbit coupling, localized states, electron subshell, fine electronic structure
Procedia PDF Downloads 319

1591 Proposed Framework Based on Classification of Vertical Handover Decision Strategies in Heterogeneous Wireless Networks
Authors: Shidrokh Goudarzi, Wan Haslina Hassan
Abstract:
Heterogeneous wireless networks are converging towards an all-IP network as part of the so-called next-generation network. In this paradigm, different access technologies need to be interconnected; thus, vertical handovers or vertical handoffs are necessary for seamless mobility. In this paper, we conduct a review of existing vertical handover decision-making mechanisms that aim to provide ubiquitous connectivity to mobile users. To offer a systematic comparison, we categorize these vertical handover measurement and decision structures based on their respective methodology and parameters. Subsequently, we analyze several vertical handover approaches in the literature and compare them according to their advantages and weaknesses. The paper compares the algorithms based on the network selection methods, the complexity of the technologies used, and efficiency, in order to introduce our vertical handover decision framework. We find that vertical handovers in heterogeneous wireless networks suffer from the lack of a standard and efficient method to satisfy both user and network quality of service requirements at different levels, including the architectural, decision-making, and protocol levels. Also, the consolidation of the network terminal, cross-layer information, multi-packet casting, and an intelligent network selection algorithm appears to be an optimum solution for achieving seamless service continuity in order to facilitate seamless connectivity.
Keywords: heterogeneous wireless networks, vertical handovers, vertical handover metric, decision-making algorithms
Procedia PDF Downloads 393

1590 Optimal Hybrid Linear and Nonlinear Control for a Quadcopter Drone
Authors: Xinhuang Wu, Yousef Sardahi
Abstract:
A hybrid and optimal multi-loop control structure combining linear and nonlinear control algorithms is introduced in this paper to regulate the position of a quadcopter unmanned aerial vehicle (UAV) driven by four brushless DC motors. To this end, a nonlinear mathematical model of the UAV is derived and then linearized around one of its operating points. Using the nonlinear version of the model, a sliding mode control is used to derive the control laws of the motor thrust forces required to drive the UAV to a certain position. The linear model is used to design two controllers, the XG-controller and the YG-controller, responsible for calculating the roll and pitch required to maneuver the vehicle to the desired X and Y position. Three attitude controllers are designed to calculate the desired angular rates of the rotors, assuming that the Euler angles are minimal. After that, a many-objective optimization problem involving 20 design parameters and ten objective functions is formulated and solved by HypE (hypervolume estimation algorithm), one of the widely used many-objective optimization approaches. Both stability and performance constraints are imposed on the optimization problem. The optimization results, in terms of Pareto sets and fronts, are obtained and show that some of the design objectives are competing. That is, when one objective goes down, the other goes up. Also, numerical simulations conducted on the nonlinear UAV model show that the proposed optimization method is quite effective.
Keywords: optimal control, many-objective optimization, sliding mode control, linear control, cascade controllers, UAV, drones
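To illustrate the sliding mode piece of the cascade, the sketch below regulates a single loop, altitude, with an assumed sliding surface s = ė + λe and a tanh-smoothed switching term; the vehicle parameters and gains are invented, not the paper's optimized values:

```python
import numpy as np

# Minimal sliding mode altitude controller for a quadcopter-like plant.
m, g = 1.2, 9.81          # vehicle mass (kg), gravity (m/s^2), assumed
lam, eta = 4.0, 6.0       # sliding-surface slope, reaching gain, assumed
dt = 0.002                # integration step (s)

z, z_dot = 0.0, 0.0       # state: altitude and vertical speed
z_ref = 5.0               # desired altitude (m)
for k in range(5000):     # simulate 10 s
    e, e_dot = z - z_ref, z_dot
    s = e_dot + lam * e                       # sliding surface
    # Equivalent control + switching term (tanh smooths chattering)
    thrust = m * (g - lam * e_dot - eta * np.tanh(s / 0.1))
    z_ddot = thrust / m - g                   # vertical dynamics
    z_dot += z_ddot * dt
    z += z_dot * dt
print("altitude after 10 s: %.3f m" % z)      # converges near 5.0
```

Substituting the thrust law into the dynamics gives ṡ = -η·tanh(s/0.1), so the state is driven to the surface s = 0 and then slides along it with error dynamics ė = -λe, which is the defining property of this control family.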
Procedia PDF Downloads 73

1589 Printing Imperfections: Development of Buckling Patterns to Improve Strength of 3D Printed Steel Plated Elements
Authors: Ben Chater, Jingbang Pan, Mark Evernden, Jie Wang
Abstract:
Traditional structural steel manufacturing routes normally produce prismatic members with flat plate elements. In these members, plate instability in the lowest buckling mode often dominates failure. It is proposed in the current study to use the new technology of metal 3D printing to print steel-plated elements with predefined imperfection patterns that can lead to higher modes of failure with increased buckling resistance. To this end, a numerical modeling program is carried out to explore various combinations of predefined buckling waves with different amplitudes in stainless steel square hollow section stub columns. Their stiffness, strength, and material consumption are assessed against those of traditional structural steel members with the same nominal dimensions. It is found that, depending on the slenderness of the plate elements, it is possible for an 'imperfect' steel member to achieve up to a 30% increase in strength with just a 3% increase in material consumption. The obtained results shed some light on the significant potential of the new metal 3D printing technology in achieving unprecedented material efficiency and economical design in the future steel construction industry.
Keywords: 3D printing, additive manufacturing, buckling resistance, steel plate buckling, structural optimisation
Procedia PDF Downloads 144

1588 Linguistic Cyberbullying, a Legislative Approach
Authors: Simona Maria Ignat
Abstract:
Bullying online has been an increasingly studied topic during the last years. Different approaches, psychological, linguistic, or computational, have been applied. To our best knowledge, a definition and a set of characteristics of the phenomenon, agreed internationally as a common framework, are still lacking. Thus, the objectives of this paper are the identification of bullying utterances on Twitter and of the algorithms underlying them. This research paper is focused on the identification of words or groups of words, categorized as 'utterances', with bullying effect, from the Twitter platform, extracted on a set of legislative criteria. This set is the result of analysis, followed by synthesis, of law documents on (online) bullying from the United States of America, the European Union, and Ireland. The outcome is a linguistic corpus with approximately 10,000 entries. The methods applied to the first objective have been the following: discourse analysis was applied in the identification of keywords with bullying effect in texts from the Google search engine, Images link; transcription and anonymization were applied on texts grouped in CL1 (Corpus Linguistics 1); and the keyword search method and the legislative criteria were used for identifying bullying utterances from Twitter. The texts with at least 30 representations on Twitter have been grouped; they form the second corpus linguistics, Bullying Utterances from Twitter (CL2). The entries have been identified by using the legislative criteria on the BoW method principle. BoW is a method of extracting words or groups of words with the same meaning in any context. The methods applied for reaching the second objective are the conversion of parts of speech to alphabetical and numerical symbols and the writing of the bullying utterances as algorithms. The converted form of the parts of speech has been chosen on the criterion of relevance within the bullying message. The inductive reasoning approach has been applied in sampling and identifying the algorithms. The results are groups with interchangeable elements. The outcomes convey two aspects of bullying: the form and the content or meaning. The form conveys the intentional intimidation against somebody, expressed at the level of texts by grammatical and lexical marks. This outcome has applicability in forensic linguistics for establishing the intentionality of an action. Another outcome of form is a complex of graphemic variations essential in detecting harmful texts online. This research enriches the lexicon already known on the topic. The second aspect, the content, revealed topics like threat, harassment, assault, or suicide. They are subcategories of a broader harmful content which is a constant concern for task forces and legislators at national and international levels. These topic outcomes of the dataset are a valuable source of detection. The analysis of content revealed algorithms and lexicons which could be applied to other harmful contents. A third outcome of content concerns Stylistics, which is a rich source for discourse analysis of social media platforms. In conclusion, this corpus linguistics is structured on legislative criteria and could be used in various fields.
Keywords: corpus linguistics, cyberbullying, legislation, natural language processing, twitter
Procedia PDF Downloads 86

1587 A Predictive Model for Turbulence Evolution and Mixing Using Machine Learning
Authors: Yuhang Wang, Jorg Schluter, Sergiy Shelyag
Abstract:
The high cost associated with high-resolution computational fluid dynamics (CFD) is one of the main challenges that inhibit the design, development, and optimisation of new combustion systems adapted for renewable fuels. In this study, we propose a physics-guided CNN-based model to predict turbulence evolution and mixing without requiring a traditional CFD solver. The model architecture is built upon U-Net and the inception module, while a physics-guided loss function is designed by introducing two additional physical constraints to allow for the conservation of both mass and pressure over the entire predicted flow field. The model is then trained on the Large Eddy Simulation (LES) results of a natural turbulent mixing layer for two different Reynolds number cases (Re = 3000 and 30000). As a result, the model prediction shows excellent agreement with the corresponding CFD solutions in terms of both the spatial distributions and the temporal evolution of turbulent mixing. Such promising model prediction performance opens up the possibility of doing accurate high-resolution manifold-based combustion simulations at a low computational cost, accelerating the iterative design process of new combustion systems.
Keywords: computational fluid dynamics, turbulence, machine learning, combustion modelling
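The abstract does not spell out the two constraint terms, so the sketch below shows one plausible rendering in PyTorch: a data term plus soft penalties on the divergence of the predicted velocity field (mass) and on mean-pressure drift (pressure). The channel layout, grid spacing, and weights are all assumptions:

```python
import torch

def physics_guided_loss(pred, target, data_weight=1.0, mass_weight=0.1,
                        pressure_weight=0.1):
    """Data loss plus two soft physical constraints. Assumes pred and
    target hold channels (u, v, p) on a uniform unit-spaced grid,
    shape (batch, 3, H, W)."""
    data_loss = torch.mean((pred - target) ** 2)

    u, v, p = pred[:, 0], pred[:, 1], pred[:, 2]
    # Central-difference divergence du/dx + dv/dy on interior points;
    # an incompressible velocity field should make this vanish.
    du_dx = (u[:, 1:-1, 2:] - u[:, 1:-1, :-2]) / 2.0
    dv_dy = (v[:, 2:, 1:-1] - v[:, :-2, 1:-1]) / 2.0
    mass_loss = torch.mean((du_dx + dv_dy) ** 2)

    # Penalize drift of the mean pressure relative to the reference field.
    pressure_loss = torch.mean((p.mean(dim=(1, 2))
                                - target[:, 2].mean(dim=(1, 2))) ** 2)
    return (data_weight * data_loss + mass_weight * mass_loss
            + pressure_weight * pressure_loss)

loss = physics_guided_loss(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
print(loss)
```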
Procedia PDF Downloads 91

1586 Improvements in Double Q-Learning for Anomalous Radiation Source Searching
Authors: Bo-Bin Xiao, Chia-Yi Liu
Abstract:
In the task of searching for anomalous radiation sources, personnel holding radiation detectors may be exposed to unnecessary radiation risk, so automated search using machines becomes a required project. This research uses several sophisticated deep reinforcement learning algorithms, namely double Q-learning, dueling networks, and NoisyNet, to search for radiation sources. The simulation environment is a 10×10 grid with one shielding wall set in it, and the AI model is developed by training for 1 million episodes. In each training episode, the radiation source position, the radiation source intensity, the agent position, the shielding wall position, and the shielding wall length are all set randomly. The three algorithms are applied to train the AI model in four environments, where the training shielding wall is a full-shielding wall, a lead wall, a concrete wall, or a lead or concrete wall appearing randomly. The 12 best-performing AI models are selected by observing the reward value during the training period and are evaluated by comparison with a gradient search algorithm. The results show that the performance of the AI models, regardless of algorithm, is far better than that of the gradient search algorithm. In addition, as the simulation environment becomes more complex, the AI model that applies double DQN combined with the dueling and NoisyNet algorithms performs better.
Keywords: double Q learning, dueling network, NoisyNet, source searching
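The tabular form of double Q-learning is short enough to state in full and is the rule the paper's deep variant approximates with networks; the grid environment here is only a placeholder:

```python
import random
import numpy as np

# Tabular double Q-learning update rule. The deep variants in the
# paper replace these tables with neural networks; state/action sizes
# below are placeholders for a 10x10 grid with 4 moves.
N_STATES, N_ACTIONS = 100, 4
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q_a = np.zeros((N_STATES, N_ACTIONS))
Q_b = np.zeros((N_STATES, N_ACTIONS))

def double_q_update(s, a, r, s_next):
    """Randomly update one table, using it to pick the argmax action
    and the *other* table to evaluate it; this decoupling removes the
    overestimation bias of ordinary Q-learning."""
    if random.random() < 0.5:
        best = np.argmax(Q_a[s_next])
        Q_a[s, a] += alpha * (r + gamma * Q_b[s_next, best] - Q_a[s, a])
    else:
        best = np.argmax(Q_b[s_next])
        Q_b[s, a] += alpha * (r + gamma * Q_a[s_next, best] - Q_b[s, a])

def choose_action(s):
    """Epsilon-greedy over the sum of both tables."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(Q_a[s] + Q_b[s]))
```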
Procedia PDF Downloads 113

1585 Lockit: A Logic Locking Automation Software
Authors: Nemanja Kajtez, Yue Zhan, Basel Halak
Abstract:
The significant rise in the cost of manufacturing nanoscale integrated circuits (ICs) has led the majority of IC design companies to outsource the fabrication of their products to other companies, often located in different countries. This multinational nature of the hardware supply chain has led to a host of security threats, including IP piracy, IC overproduction, and Trojan insertion. To combat these, researchers have proposed logic locking techniques to protect the intellectual property of the design and increase the difficulty of malicious modification of its functionality. However, the adoption of logic locking approaches is rather slow due to the lack of integration with the IC production process and the limited efficacy of existing algorithms. This work automates the logic locking process by developing software in Python that performs the locking on a gate-level netlist and can be integrated with existing digital synthesis tools. Analysis of the latest logic locking algorithms has demonstrated that the SFLL-HD algorithm is one of the most secure and versatile in trading off levels of protection against different types of attacks and was thus selected for implementation. The presented tool can also be expanded to incorporate the latest locking mechanisms to keep up with the fast-paced development in this field. The paper also presents a case study to demonstrate the functionality of the tool and how it could be used to explore the design space and compare different locking solutions. The source code of this tool is available freely from (https://www.researchgate.net/publication/353195333_Source_Code_for_The_Lockit_Tool).
Keywords: design automation, hardware security, IP piracy, logic locking
Procedia PDF Downloads 183

1584 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizures are the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by continuous electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is done manually by epileptologists, and this process is usually very long and error-prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an artificial neural network classifier, trained by applying the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool is able to cover all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing
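The pipeline's shape (sliding window, per-window features, MLP classifier) can be miniaturized as below; the synthetic 3 Hz "seizure" signal and the four features are stand-ins for the paper's far richer feature extraction:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def window_features(signal, fs=256, win_s=2.0):
    """Slide a fixed window over one EEG channel and extract a few
    simple descriptors per window (illustrative feature set)."""
    size = int(fs * win_s)
    feats = []
    for start in range(0, len(signal) - size, size):
        w = signal[start:start + size]
        spectrum = np.abs(np.fft.rfft(w))
        feats.append([w.mean(), w.std(), np.ptp(w),
                      np.argmax(spectrum) * fs / size])  # dominant freq
    return np.array(feats)

# Synthetic stand-in: background noise vs. high-amplitude 3 Hz bursts
fs, t = 256, np.arange(0, 60, 1 / 256)
normal = np.random.randn(len(t))
seizure = 5 * np.sin(2 * np.pi * 3 * t) + np.random.randn(len(t))
X = np.vstack([window_features(normal), window_features(seizure)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print("test accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))
```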
Procedia PDF Downloads 188

1583 Algorithm for Quantification of Pulmonary Fibrosis in Chest X-Ray Exams
Authors: Marcela de Oliveira, Guilherme Giacomini, Allan Felipe Fattori Alves, Ana Luiza Menegatti Pavan, Maria Eugenia Dela Rosa, Fernando Antonio Bacchim Neto, Diana Rodrigues de Pina
Abstract:
It is estimated that one death every 10 seconds (about 2 million deaths each year) worldwide is attributed to tuberculosis (TB). Even after effective treatment, TB leaves sequelae such as pulmonary fibrosis, compromising the quality of life of patients. Evaluations of the aforementioned sequelae are usually performed subjectively by radiology specialists, and subjective evaluation is prone to inter- and intra-observer variation. The x-ray examination is the diagnostic imaging method most commonly performed in the monitoring of patients diagnosed with TB and the one of least cost to the institution. The application of computational algorithms is of utmost importance for a more objective quantification of pulmonary impairment in individuals with tuberculosis. The purpose of this research is the use of computer algorithms to quantify the pulmonary impairment pre- and post-treatment of patients with pulmonary TB. The x-ray images of 10 patients with a TB diagnosis confirmed by examination of sputum smears were studied. Initially, segmentation of the total lung area was performed (posteroanterior and lateral views), then the region compromised by the pulmonary sequelae was targeted. Through morphological operators and the application of a signal-to-noise tool, it was possible to determine the compromised lung volume. The largest difference found pre- and post-treatment was 85.85%, and the smallest was 54.08%.
Keywords: algorithm, radiology, tuberculosis, x-rays exam
Procedia PDF Downloads 419