Search results for: watershed algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3798

1668 Secret Security Smart Lock Using Artificial Intelligence Hybrid Algorithm

Authors: Vahid Bayrami Rad

Abstract:

Ever since humans developed a collective way of life and, later, urbanization, security has been considered one of the most important challenges of everyday life. To protect property, locks have always been a practical tool. With the advancement of technology, locks have evolved from mechanical to electric. One of the most widespread applications of artificial intelligence is in surveillance and security systems, and the technology behind smart anti-theft door handles is currently one of the most promising areas for its use. Artificial intelligence can learn, calculate, interpret and process by analyzing data with the help of algorithms and mathematical models, and thereby make smart decisions. In this work, an Arduino board is used to process the data.

Keywords: arduino board, artificial intelligence, image processing, solenoid lock

Procedia PDF Downloads 69
1667 Integrating Radar Sensors with an Autonomous Vehicle Simulator for an Enhanced Smart Parking Management System

Authors: Mohamed Gazzeh, Bradley Null, Fethi Tlili, Hichem Besbes

Abstract:

The burgeoning global ownership of personal vehicles has placed a significant strain on urban infrastructure, notably parking facilities, leading to traffic congestion and environmental concerns. Effective parking management systems (PMS) are indispensable for optimizing urban traffic flow and reducing emissions. The most commonly deployed systems nowadays rely on computer vision technology. This paper explores the integration of radar sensors and simulation in the context of smart parking management. We concentrate on radar sensors due to their versatility and utility in automotive applications, which extends to PMS. Additionally, radar sensors play a crucial role in driver assistance systems and autonomous vehicle development. However, the resource-intensive nature of radar data collection for algorithm development and testing necessitates innovative solutions. Simulation, particularly the monoDrive simulator, an internal development tool used by NI, the Test and Measurement division of Emerson, offers a practical means to overcome this challenge. The primary objectives of this study encompass simulating radar sensors to generate a substantial dataset for algorithm development, testing, and, critically, assessing the transferability of models between simulated and real radar data. We focus on occupancy detection in parking as a practical use case, categorizing each parking space as vacant or occupied. The simulation approach using monoDrive enables algorithm validation and reliability assessment for virtual radar sensors. We meticulously designed various parking scenarios, involving manual measurement of parking spot coordinates and orientations and the use of a TI AWR1843 radar. To create a diverse dataset, we generated 4950 scenarios, comprising a total of 455,400 parking spots. This extensive dataset encompasses radar configuration details, ground truth occupancy information, radar detections, and associated object attributes such as range, azimuth, elevation, radar cross-section, and velocity data. The paper also addresses the intricacies and challenges of real-world radar data collection, highlighting the advantages of simulation in producing radar data for parking lot applications. We developed classification models based on Support Vector Machines (SVM) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), exclusively trained and evaluated on simulated data. Subsequently, we applied these models to real-world data, comparing their performance against the monoDrive dataset. The study demonstrates the feasibility of transferring models from a simulated environment to real-world applications, achieving an impressive accuracy score of 92% using only one radar sensor. This finding underscores the potential of radar sensors and simulation in the development of smart parking management systems, offering significant benefits for improving urban mobility and reducing environmental impact. The integration of radar sensors and simulation represents a promising avenue for enhancing smart parking management systems, addressing the challenges posed by the exponential growth in personal vehicle ownership. This research contributes valuable insights into the practicality of using simulated radar data in real-world applications and underscores the role of radar technology in advancing urban sustainability.
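
As a rough illustration of the simulated-to-real workflow described above, the Python sketch below trains an SVM occupancy classifier on per-spot radar features and scores it on a held-out real-world set. The file names and feature columns (range, azimuth, elevation, RCS, velocity, detection count) are assumptions for demonstration, not the paper's actual dataset.

# Hedged sketch: SVM occupancy classifier trained only on simulated radar
# features and evaluated on real measurements. Files and columns are assumed.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

FEATURES = ["range", "azimuth", "elevation", "rcs", "velocity", "n_detections"]

sim = pd.read_csv("simulated_parking_spots.csv")   # hypothetical monoDrive export
real = pd.read_csv("real_parking_spots.csv")       # hypothetical field measurements

# Train exclusively on simulated data, as in the study.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(sim[FEATURES].values, sim["occupied"].values)

# Assess transferability by scoring the simulation-trained model on real data.
pred = model.predict(real[FEATURES].values)
print("real-world accuracy:", accuracy_score(real["occupied"].values, pred))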

Keywords: autonomous vehicle simulator, FMCW radar sensors, occupancy detection, smart parking management, transferability of models

Procedia PDF Downloads 81
1666 Vision-Based Hand Segmentation Techniques for Human-Computer Interaction

Authors: M. Jebali, M. Jemni

Abstract:

This work is part of a vision-based hand gesture recognition system for a natural human-computer interface. Hand tracking and segmentation are the primary steps of any hand gesture recognition system. The aim of this paper is to develop a robust and efficient hand segmentation algorithm that serves as input to a further system intended to bring HCI performance close to human-human interaction, by modeling an intelligent sign language recognition system based on prediction in the context of a dialogue between the system (avatar) and the interlocutor. For hand segmentation, an occlusion-handling approach is proposed that yields superior results for detecting the hand in an image.
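
The paper's own occlusion-handling method is not detailed in the abstract; purely as a generic baseline, the sketch below segments a hand candidate by skin-color thresholding in YCrCb space followed by morphological cleanup, a common first stage in such pipelines. The threshold values are illustrative assumptions.

# Generic hand segmentation baseline (not the paper's occlusion-handling method):
# skin-color thresholding in YCrCb plus morphological cleanup with OpenCV.
import cv2
import numpy as np

def segment_hand(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Illustrative skin-tone bounds in (Y, Cr, Cb); tune for lighting and skin tone.
    lower = np.array([0, 135, 85], dtype=np.uint8)
    upper = np.array([255, 180, 135], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    # Keep the largest connected component as the hand candidate.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask
    largest = max(contours, key=cv2.contourArea)
    clean = np.zeros_like(mask)
    cv2.drawContours(clean, [largest], -1, 255, thickness=cv2.FILLED)
    return clean

# Example usage: mask = segment_hand(cv2.imread("frame.png"))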

Keywords: HCI, sign language recognition, object tracking, hand segmentation

Procedia PDF Downloads 412
1665 Application of the Method of Symmetries to the Calculation and Design of a Circular Plate with Variable Thickness

Authors: Kirill Trapezon, Alexandr Trapezon

Abstract:

A problem is formulated for the natural oscillations of a circular plate of linearly variable thickness on the basis of the symmetry method. The equations for the natural frequencies and mode shapes of the plate are obtained, provided that it is rigidly fixed along the inner contour. The first three eigenfrequencies are calculated, and the eigenmodes of the oscillations of the acoustic element are constructed. An algorithm for applying the symmetry method and the factorization method to problems in the theory of oscillations of plates of variable thickness is shown. The effectiveness of the approach is demonstrated by comparing known results with those obtained in the article; the new results are shown to be more accurate and reliable.

Keywords: vibrations, plate, method of symmetries, differential equation, factorization, approximation

Procedia PDF Downloads 263
1664 Accelerating Side Channel Analysis with Distributed and Parallelized Processing

Authors: Kyunghee Oh, Dooho Choi

Abstract:

Although a cryptographic algorithm may have no theoretical weakness, Side Channel Analysis can recover secret data from the physical implementation of a cryptosystem. The analysis exploits extra information such as timing, power consumption, electromagnetic leakage or even sound to break the system. Differential Power Analysis is one of the most popular such analyses; it computes statistical correlations between hypothesized secret key values and measured power consumption. The computation usually involves huge amounts of data and takes a long time; for some devices with countermeasures it may take several weeks. We suggest and evaluate methods to shorten the time needed to analyze cryptosystems, based on distributed computing and parallelized processing.
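
To illustrate the parallelization idea (not the authors' implementation), the sketch below distributes the per-key-guess correlation computation of a simple power analysis across worker processes. The Hamming-weight leakage point is an assumed, simplified target chosen only for demonstration.

# Sketch: key-byte guesses are scored in parallel by correlating measured power
# traces with a hypothetical Hamming-weight leakage model (assumed leakage point).
import numpy as np
from multiprocessing import Pool

HW = np.array([bin(x).count("1") for x in range(256)])  # Hamming-weight table

def score_guess(args):
    guess, traces, plaintexts = args
    model = HW[np.bitwise_xor(plaintexts, guess)].astype(float)
    t = traces - traces.mean(axis=0)          # centre traces per sample index
    m = model - model.mean()
    corr = (t * m[:, None]).sum(axis=0) / (
        np.sqrt((t ** 2).sum(axis=0)) * np.sqrt((m ** 2).sum()) + 1e-12)
    return guess, np.max(np.abs(corr))        # strongest correlation over samples

def recover_key_byte(traces, plaintexts, workers=8):
    jobs = [(g, traces, plaintexts) for g in range(256)]
    with Pool(workers) as pool:
        results = pool.map(score_guess, jobs)
    return max(results, key=lambda r: r[1])[0]

# traces: (n_traces, n_samples) float array; plaintexts: (n_traces,) uint8 array.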

Keywords: DPA, distributed computing, parallelized processing, side channel analysis

Procedia PDF Downloads 428
1663 Forecasting the Volatility of Geophysical Time Series with Stochastic Volatility Models

Authors: Maria C. Mariani, Md Al Masum Bhuiyan, Osei K. Tweneboah, Hector G. Huizar

Abstract:

This work is devoted to modeling geophysical time series. A stochastic technique with time-varying parameters is used to forecast the volatility of data arising in geophysics. In this study, the volatility is modeled as a logarithmic first-order autoregressive process. We observe that including the log-volatility in the time-varying parameter estimation, which is carried out via maximum likelihood, significantly improves the forecasts. This allows us to conclude that the estimation algorithm for the corresponding one-step-ahead volatility forecast (with ±2 standard prediction errors) is very practical, since it possesses good convergence properties.
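
The volatility structure described above can be written as an AR(1) process for the log-volatility h_t, with observations y_t = exp(h_t/2)·eps_t. The simulation sketch below makes that structure concrete; the parameter values are assumptions for demonstration only.

# Illustrative simulation of a log-AR(1) stochastic volatility model.
import numpy as np

def simulate_sv(n, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    h[0] = mu
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
    y = np.exp(h / 2) * rng.standard_normal(n)   # observed geophysical-like series
    return y, h

y, h = simulate_sv(2000)
# A one-step-ahead volatility forecast with a +/- 2 standard-error band would be
# built from the filtered estimate of h_t obtained via maximum likelihood.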

Keywords: Augmented Dickey Fuller Test, geophysical time series, maximum likelihood estimation, stochastic volatility model

Procedia PDF Downloads 315
1662 Comparison Analysis of Multi-Channel Echo Cancellation Using Adaptive Filters

Authors: Sahar Mobeen, Anam Rafique, Irum Baig

Abstract:

Multichannel acoustic echo cancellation is a system identification application. In a real-time environment the signal changes very rapidly, which requires adaptive algorithms with a high convergence rate and good stability, such as Least Mean Square (LMS), Leaky Least Mean Square (LLMS), Normalized Least Mean Square (NLMS) and averaging-based (AFA) filters. LMS and NLMS are widely used adaptive algorithms due to their low computational complexity, while AFA is used for its high convergence rate. This research compares the cancellation of an acoustic echo (generated in a room) through LMS, LLMS, NLMS, AFA and a newly proposed Average Normalized Leaky Least Mean Square (ANLLMS) adaptive filter.
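
For reference, a minimal NLMS echo canceller (one of the baseline filters compared above) is sketched below; x is the far-end signal, d the microphone signal containing the echo, and the filter length and step size are illustrative choices.

# Minimal NLMS adaptive filter for single-channel echo cancellation.
import numpy as np

def nlms_echo_canceller(x, d, num_taps=256, mu=0.5, eps=1e-6):
    w = np.zeros(num_taps)           # adaptive weights (echo path estimate)
    e = np.zeros(len(d))             # error signal = echo-cancelled output
    for n in range(num_taps, len(d)):
        x_vec = x[n - num_taps:n][::-1]                   # recent far-end samples
        y = np.dot(w, x_vec)                              # estimated echo
        e[n] = d[n] - y                                   # residual after cancellation
        w += (mu / (eps + np.dot(x_vec, x_vec))) * e[n] * x_vec   # NLMS update
    return e, w

# Dropping the normalization term gives plain LMS; multiplying w by a leakage
# factor before each update gives the leaky variants discussed above.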

Keywords: LMS, LLMS, NLMS, AFA, ANLLMS

Procedia PDF Downloads 566
1661 Optimal Delivery of Two Similar Products to N Ordered Customers

Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis

Abstract:

The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering products located at a central depot to customers who are scattered in a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles start their routes from the depot and visit the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods that must be delivered. In the present work, we present a specific capacitated stochastic vehicle routing problem which has realistic applications in the distribution of materials to shops, healthcare facilities or military units. A vehicle starts its route from a depot loaded with items of two similar but not identical products. We call these products product 1 and product 2. The vehicle must deliver the products to N customers according to a predefined sequence. This means that customer 1 must be serviced first, then customer 2, then customer 3 and so on. The vehicle has a finite capacity and after servicing all customers it returns to the depot. It is assumed that each customer prefers either product 1 or product 2 with known probabilities. The actual preference of each customer becomes known when the vehicle visits the customer. It is also assumed that the quantity each customer demands is a random variable with known distribution. The actual demand is revealed upon the vehicle's arrival at the customer's site. The demand of each customer cannot exceed the vehicle capacity, and the vehicle is allowed during its route to return to the depot to restock with quantities of both products. The travel costs between consecutive customers and the travel costs between the customers and the depot are known. If there is a shortage of the desired product, it is permitted to deliver the other product at a reduced price. The objective is to find the optimal routing strategy, i.e. the routing strategy that minimizes the expected total cost among all possible strategies. It is possible to find the optimal routing strategy using a suitable stochastic dynamic programming algorithm. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure, i.e. it is characterized by critical numbers. This structural result enables us to construct an efficient special-purpose dynamic programming algorithm that operates only over those routing strategies having this structure. The findings of the present study lead us to the conclusion that the dynamic programming method may be a very useful tool for the solution of specific vehicle routing problems. A problem for future research could be the study of a similar stochastic vehicle routing problem in which the vehicle, instead of delivering products, collects them from the ordered customers.
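
A heavily simplified dynamic-programming sketch of this kind of recursion is given below: before each customer the vehicle either drives directly or detours via the depot to restock, the expectation is taken over the customer's preference and demand, and a shortfall of the preferred product is covered by the other product at a per-unit substitution penalty. All numbers, the restocking rule and the omission of the final return leg are illustrative assumptions; the paper's cost structure and threshold-type optimality proof are richer than this.

# Simplified stochastic DP for ordered delivery of two similar products.
from functools import lru_cache
import itertools

N = 5                      # customers, served in the fixed order 1..N
CAP = (4, 4)               # stock of product 1 / product 2 loaded at the depot
DIRECT = [2.0] * N         # travel cost to the next customer directly
VIA_DEPOT = [5.0] * N      # travel cost to the next customer via the depot
PREF1 = [0.6] * N          # probability a customer prefers product 1
DEMANDS = [(1, 0.5), (2, 0.3), (3, 0.2)]   # (quantity, probability)
SUBST_PENALTY = 1.5        # extra cost per unit served with the non-preferred product

def serve(stock, prefers_p1, qty):
    """Serve one customer; return (new stock, substitution cost)."""
    s1, s2 = stock
    want, other = (s1, s2) if prefers_p1 else (s2, s1)
    from_pref = min(want, qty)
    from_other = min(other, qty - from_pref)       # shortfall covered if possible
    want, other = want - from_pref, other - from_other
    new = (want, other) if prefers_p1 else (other, want)
    return new, SUBST_PENALTY * from_other

@lru_cache(maxsize=None)
def expected_cost(i, stock):
    if i == N:
        return 0.0                                 # (return leg to depot omitted)
    options = []
    for restock, travel in ((False, DIRECT[i]), (True, VIA_DEPOT[i])):
        s = CAP if restock else stock
        exp = travel
        for prefers_p1, (qty, p_q) in itertools.product((True, False), DEMANDS):
            p = (PREF1[i] if prefers_p1 else 1 - PREF1[i]) * p_q
            new_stock, penalty = serve(s, prefers_p1, qty)
            exp += p * (penalty + expected_cost(i + 1, new_stock))
        options.append(exp)
    return min(options)

print("expected optimal cost:", round(expected_cost(0, CAP), 3))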

Keywords: collection of similar products, dynamic programming, stochastic demands, stochastic preferences, vehicle routing problem

Procedia PDF Downloads 267
1660 On Musical Information Geometry with Applications to Sonified Image Analysis

Authors: Shannon Steinmetz, Ellen Gethner

Abstract:

In this paper, a theoretical foundation is developed for patterned segmentation of audio using the geometry of music and statistical manifolds. We demonstrate image content clustering using conic-space sonification. The algorithm takes a geodesic curve as a model estimator of the three-parameter Gamma distribution. The random variable is parameterized by musical centricity and centric velocity. The model parameters predict audio segmentation, in the form of duration and frame count, based on the likelihood of a musical geometry transition. We provide an example using a database of randomly selected images, resulting in statistically significant clusters of similar image content.

Keywords: sonification, musical information geometry, image, content extraction, automated quantification, audio segmentation, pattern recognition

Procedia PDF Downloads 238
1659 Parallel Computing: Offloading Matrix Multiplication to GPU

Authors: Bharath R., Tharun Sai N., Bhuvan G.

Abstract:

This project focuses on developing a parallel computing method aimed at optimizing matrix multiplication through GPU acceleration. Addressing algorithmic challenges, GPU programming intricacies, and integration issues, the project aims to enhance efficiency and scalability. The methodology involves algorithm design, GPU programming, and optimization techniques. Future plans include advanced optimizations, extended functionality, and integration with high-level frameworks. User engagement is emphasized through user-friendly interfaces, open-source collaboration, and continuous refinement based on feedback. The project's impact extends to significantly improving matrix multiplication performance in scientific computing and machine learning applications.
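
The project's own kernels are not shown in the abstract; as one possible illustration of offloading matrix multiplication to a CUDA GPU from Python, the sketch below uses Numba's CUDA support with a naive per-element kernel (a tiled shared-memory kernel would be the natural next optimization). Requires a CUDA-capable GPU.

# Naive GPU matrix multiplication via Numba CUDA (illustrative, not the project's code).
import numpy as np
from numba import cuda

@cuda.jit
def matmul_kernel(A, B, C):
    row, col = cuda.grid(2)                  # global thread coordinates
    if row < C.shape[0] and col < C.shape[1]:
        acc = 0.0
        for k in range(A.shape[1]):
            acc += A[row, k] * B[k, col]
        C[row, col] = acc

def gpu_matmul(A, B, threads=(16, 16)):
    d_A = cuda.to_device(A)
    d_B = cuda.to_device(B)
    d_C = cuda.device_array((A.shape[0], B.shape[1]), dtype=A.dtype)
    blocks = ((d_C.shape[0] + threads[0] - 1) // threads[0],
              (d_C.shape[1] + threads[1] - 1) // threads[1])
    matmul_kernel[blocks, threads](d_A, d_B, d_C)
    return d_C.copy_to_host()

# Example: gpu_matmul(np.random.rand(512, 512), np.random.rand(512, 512))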

Keywords: matrix multiplication, parallel processing, cuda, performance boost, neural networks

Procedia PDF Downloads 59
1658 Congestion Control in Mobile Network by Prioritizing Handoff Calls

Authors: O. A. Lawal, O. A. Ojesanmi

Abstract:

The demand for wireless cellular services continues to increase while radio resources remain limited. Network operators therefore have to continuously manage the scarce radio resources in order to maintain a good quality of service for mobile users. This paper proposes a way to handle congestion in a mobile network by prioritizing handoff calls using the guard channel allocation scheme. The algorithm uses a specific threshold value for the time of allocation of the channel. The scheme is simulated by generating data for different traffic loads in the network, as would occur in real life. The results are used to determine the handoff call dropping probability and the new call blocking probability as measures of network performance.
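
A toy simulation of the guard channel idea is sketched below: out of C channels, G are reserved for handoff calls, so a new call is blocked once occupancy reaches C - G while a handoff call is dropped only when all C channels are busy. Arrival rates, call durations and the threshold are illustrative assumptions, not values from the paper.

# Toy guard-channel simulation estimating blocking and dropping probabilities.
import random

def simulate(total_channels=30, guard=3, lam_new=8.0, lam_handoff=3.0,
             mean_hold=1.0, horizon=10_000, seed=1):
    random.seed(seed)
    t, releases = 0.0, []
    blocked = dropped = new_calls = handoffs = 0
    while t < horizon:
        t += random.expovariate(lam_new + lam_handoff)     # next arrival
        releases = [r for r in releases if r > t]           # finished calls free channels
        busy = len(releases)
        is_handoff = random.random() < lam_handoff / (lam_new + lam_handoff)
        if is_handoff:
            handoffs += 1
            if busy >= total_channels:
                dropped += 1                                 # all channels busy
                continue
        else:
            new_calls += 1
            if busy >= total_channels - guard:
                blocked += 1                                 # guard channels reserved
                continue
        releases.append(t + random.expovariate(1.0 / mean_hold))
    return blocked / new_calls, dropped / handoffs

p_block, p_drop = simulate()
print(f"new-call blocking: {p_block:.3f}, handoff dropping: {p_drop:.3f}")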

Keywords: call block, channel, handoff, mobile cellular network

Procedia PDF Downloads 394
1657 A Generative Pretrained Transformer-Based Question-Answer Chatbot and Phantom-Less Quantitative Computed Tomography Bone Mineral Density Measurement System for Osteoporosis

Authors: Mian Huang, Chi Ma, Junyu Lin, William Lu

Abstract:

Introduction: Bone health has attracted more attention recently, and an intelligent question and answer (QA) chatbot for osteoporosis is helpful for science popularization. With Generative Pretrained Transformer (GPT) technology developing rapidly, we build an osteoporosis corpus dataset and then fine-tune LLaMA, a well-known open-source GPT foundation large language model (LLM), on this self-constructed osteoporosis corpus. Evaluated by clinical orthopedic experts, our fine-tuned model outperforms vanilla LLaMA on the osteoporosis QA task in Chinese. Three-dimensional quantitative computed tomography (QCT) measured bone mineral density (BMD) has been considered more accurate than DXA for BMD measurement in recent years. We develop an automatic phantom-less QCT (PL-QCT) that is more efficient for BMD measurement since it needs no external phantom for calibration. Combined with the LLM on osteoporosis, our PL-QCT provides efficient and accurate BMD measurement for our chatbot users. Material and Methods: We build an osteoporosis corpus containing about 30,000 Chinese publications whose titles are related to osteoporosis. The whole process is automated, including crawling the literature in PDF format, localizing text/figure/table regions with a layout segmentation algorithm and recognizing text with an OCR algorithm. We train our model by continuous pre-training with Low-Rank Adaptation (LoRA, rank=10) to adapt the LLaMA-7B model to the osteoporosis domain; the basic principle is to mask the next word in the text and make the model predict that word. The loss function is defined as the cross-entropy between the predicted and ground-truth word. The experiment was run on a single NVIDIA A800 GPU for 15 days. Our automatic PL-QCT BMD measurement adopts an AI-assisted region-of-interest (ROI) generation algorithm for localizing a vertebrae-parallel cylinder in cancellous bone. Since no phantom is available for BMD calibration, we calculate the ROI BMD from the CT-BMD of the patient's own muscle and fat. Results & Discussion: Clinical orthopaedic experts were invited to design 5 osteoporosis questions in Chinese to evaluate the performance of vanilla LLaMA and our fine-tuned model. Our model outperforms LLaMA on over 80% of these questions, understanding 'Expert Consensus on Osteoporosis', 'QCT for osteoporosis diagnosis' and 'Effect of age on osteoporosis'. Detailed results are shown in the appendix. Future work may involve training a larger LLM on the whole of orthopaedics with more high-quality domain data, or a multi-modal GPT that combines and understands X-ray images and medical text for orthopaedic computer-aided diagnosis. However, the GPT model sometimes gives unexpected outputs, such as repetitive text or seemingly plausible but wrong answers (so-called 'hallucinations'). Even when GPT gives correct answers, they cannot be considered valid clinical diagnoses in place of those of clinical doctors. The PL-QCT BMD system provided by Bone's QCT (Bone's Technology (Shenzhen) Limited) achieves a mean absolute error (MAE) of 0.1448 mg/cm2 (spine) and 0.0002 mg/cm2 (hip) and linear correlation coefficients R2 = 0.9970 (spine) and R2 = 0.9991 (hip) (compared to QCT-Pro (Mindways)) on 155 patients in a three-center clinical trial in Guangzhou, China. Conclusion: This study builds a Chinese osteoporosis corpus and develops a fine-tuned and domain-adapted LLM as well as a PL-QCT BMD measurement system. Our fine-tuned GPT model shows better capability than the LLaMA model on most of the testing questions on osteoporosis. Combined with our PL-QCT BMD system, we look forward to providing science popularization and early screening for potential osteoporotic patients.
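
A rough sketch of the LoRA continuous pre-training setup (rank 10 on a LLaMA-7B base with a next-word cross-entropy objective) is shown below using the Hugging Face peft library. The model identifier, target modules and all hyperparameters other than rank=10 are placeholders, not values reported by the authors.

# Hedged sketch: attaching LoRA adapters to a LLaMA-7B base for continued pre-training.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "huggyllama/llama-7b"               # placeholder base checkpoint
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

config = LoraConfig(r=10, lora_alpha=32, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"],   # assumed adapter targets
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()

# One causal-LM step: the model shifts labels internally, so labels=input_ids
# yields the next-word cross-entropy loss used for continuous pre-training.
batch = tokenizer("Osteoporosis is a common metabolic bone disease.",
                  return_tensors="pt")        # the actual corpus is Chinese literature
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()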

Keywords: GPT, phantom-less QCT, large language model, osteoporosis

Procedia PDF Downloads 71
1656 Practical Problems as Tools for the Development of Secondary School Students’ Motivation to Learn Mathematics

Authors: M. Rodionov, Z. Dedovets

Abstract:

This article discusses the use of plausible reasoning in solving practical problems. Such reasoning is a major driver of motivation and of mathematical, scientific and educational research activity. A general algorithm for solving practical problems is presented, which includes analysis of the specific problem content in order to build, solve and interpret the underlying mathematical model. The authors explore the role of practical problems in stimulating students' interest, developing their world outlook and orienting them in the modern world at different stages of learning mathematics in secondary school. Particular attention is paid to the characteristics of such problems, which are systematized and presented in the conclusions.

Keywords: mathematics, motivation, secondary school, student, practical problem

Procedia PDF Downloads 299
1655 Secured Transmission and Reserving Space in Images Before Encryption to Embed Data

Authors: G. R. Navaneesh, E. Nagarajan, C. H. Rajam Raju

Abstract:

Nowadays, multimedia data are used to store secure information. All previous methods allocate space in an image for data embedding after encryption. In this paper, we propose a novel method that reserves space in the image, bounded by a surrounding boundary, before encryption with a traditional RDH algorithm, which makes it easy for the data hider to reversibly embed data in the encrypted image. The proposed method achieves real-time performance; that is, data extraction and image recovery are free of any error. A secure transmission process is also discussed, which improves efficiency by ten times compared to the other processes discussed.
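
To make the idea of embedding in reserved room concrete, the sketch below performs plain least-significant-bit substitution in a pre-reserved image region; it is an illustration of LSB embedding only, not the paper's full reversible data hiding and encryption pipeline, and the region coordinates are assumed.

# LSB embedding in a reserved region (illustrative, lossless round trip shown).
import numpy as np

def embed_bits(image, bits, region=(slice(0, 64), slice(0, 64))):
    """Write `bits` (0/1 array) into the LSBs of the reserved image region."""
    img = image.copy()
    flat = img[region].reshape(-1)
    if len(bits) > flat.size:
        raise ValueError("payload larger than reserved room")
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits   # clear LSB, set payload bit
    img[region] = flat.reshape(img[region].shape)
    return img

def extract_bits(image, n_bits, region=(slice(0, 64), slice(0, 64))):
    return image[region].reshape(-1)[:n_bits] & 1

# Round trip on random data:
cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
payload = np.random.randint(0, 2, 1000, dtype=np.uint8)
stego = embed_bits(cover, payload)
assert np.array_equal(extract_bits(stego, 1000), payload)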

Keywords: secure communication, reserving room before encryption, least significant bits, image encryption, reversible data hiding

Procedia PDF Downloads 412
1654 Improved Particle Swarm Optimization with Cellular Automata and Fuzzy Cellular Automata

Authors: Ramin Javadzadeh

Abstract:

Particle swarm optimization is a metaheuristic optimization method widely used for clustering and pattern recognition applications. For multimodal optimization problems, these algorithms are more efficient than genetic algorithms. A major drawback, however, is their slow convergence to the global optimum and their weak stability across different runs. In this paper, an improved particle swarm optimization is introduced for the first time to overcome these problems. Fuzzy cellular automata are used to improve the algorithm's efficiency. The credibility of the proposed approach is evaluated by simulations, and it is shown that the proposed approach achieves better results than standard particle swarm optimization algorithms.
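
For reference, a baseline global-best PSO loop is sketched below on a multimodal test function; the paper's fuzzy-cellular-automata neighbourhood update is not reproduced here, and all hyperparameters are illustrative.

# Baseline global-best particle swarm optimization (illustrative hyperparameters).
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)]                  # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

# Example on the multimodal Rastrigin function:
rastrigin = lambda z: 10 * len(z) + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
best_x, best_f = pso(rastrigin, dim=5)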

Keywords: cellular automata, cellular learning automata, local search, optimization, particle swarm optimization

Procedia PDF Downloads 607
1653 Detecting QoS Attacks Using a Machine Learning Algorithm

Authors: Christodoulou Christos, Politis Anastasios

Abstract:

A large majority of users favour wireless LAN connections because they are so simple to use. A wireless network can, however, be the target of numerous attacks. Class hijacking is a well-known attack that is fairly simple to execute and has significant repercussions for users. Statistical flow analysis based on machine learning (ML) techniques is a promising categorization methodology. In a given dataset, which in the context of this paper is a collection of components representing frames belonging to various flows, ML can offer a technique for identifying and characterizing structural patterns. Individual packets can be classified using these patterns, fraudulent conduct such as class hijacking can be identified, and the necessary action can be taken. In this study, we explore a way to use machine learning approaches to thwart this attack.

Keywords: wireless lan, quality of service, machine learning, class hijacking, EDCA remapping

Procedia PDF Downloads 61
1652 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour

Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale

Abstract:

Mumbai being traditionally the epicenter of India's trade and commerce, the existing major ports, such as Mumbai Port and Jawaharlal Nehru Port (JN) situated in the Thane estuary, are also developing their waterfront facilities. Various developments over the past decades in this region have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau is facing a shortage of water due to the advancing shoreline, while the jetty near Ulwe faces ship scheduling problems due to the shallower depths between JN Port and Ulwe Bunder. In order to solve these problems, it is essential to have information about tide levels over a long duration, normally obtained from field measurements. However, field measurement is a tedious and costly affair, so artificial intelligence was applied to predict water levels by training networks on tide data measured over one lunar tidal cycle. Two-layered feed-forward Artificial Neural Networks (ANN) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), were used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi for one lunar tidal cycle (2013) were used to train, validate and test the neural networks. These trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give a reasonably accurate estimation of the tide. Hence, the trained network is used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using a neural network trained with the measured tide data (2000) of Apollo and Pir-Pau. The analysis of the measured data and the study reveal that the measured tidal data at Pir-Pau, Vashi and Ulwe show a maximum amplification of the tide by about 10-20 cm with a phase lag of 10-20 minutes with reference to the tide at Apollo Bunder (Mumbai); that the LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer; and that the tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels for planning the pumping operation at Pir-Pau and improving the ship schedule at Ulwe.
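
A small self-contained sketch of the feed-forward ANN idea is given below on synthetic tide-like data: lagged reference-station levels plus harmonic time features are mapped to the tide at a second site. scikit-learn's quasi-Newton 'lbfgs' solver is used as a stand-in; the paper itself trains with Gradient Descent and Levenberg-Marquardt back-propagation, and the feature design here is an assumption.

# Two-layer feed-forward network for tide prediction on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
hours = np.arange(0, 24 * 29, 0.5)                       # roughly one lunar tidal cycle
reference = 2.0 * np.sin(2 * np.pi * hours / 12.42) + 0.1 * rng.standard_normal(hours.size)
target = 1.1 * np.roll(reference, 1) + 0.15              # amplified, phase-lagged second site

# Lagged reference levels plus M2-period harmonic features (assumed design).
X = np.column_stack([np.roll(reference, k) for k in (1, 2, 3)] +
                    [np.sin(2 * np.pi * hours / 12.42),
                     np.cos(2 * np.pi * hours / 12.42)])[3:]
y = target[3:]

net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0).fit(X[:-200], y[:-200])
print("R on held-out samples:", np.corrcoef(net.predict(X[-200:]), y[-200:])[0, 1])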

Keywords: artificial neural network, back-propagation, tide data, training algorithm

Procedia PDF Downloads 484
1651 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads

Authors: Raja Umer Sajjad, Chang Hee Lee

Abstract:

Stormwater runoff is the leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors; the success of a monitoring program mainly depends on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both the cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012-2014) from a mixed land use site located within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid and precipitation is usually well distributed through the year. The investigation of a large number of water quality parameters is time-consuming and resource intensive, so Principal Component Analysis (PCA) was applied in order to identify a suite of easy-to-measure parameters to act as surrogates. Means, standard deviations, coefficients of variation (CV) and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implication of sampling time for monitoring results, the number of samples required during a storm event and the impact of the seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus and heavy metals such as lead, chromium, and copper, whereas Chemical Oxygen Demand (COD) was identified as a surrogate for organic matter. The CVs of the monitored water quality parameters were found to be high (ranging from 3.8 to 15.5), suggesting that a grab sampling design for estimating mass emission rates in the study area can lead to errors due to the large variability. The TSS discharge load calculation error was only 2% for two different sample size approaches, i.e. 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of the storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
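
The PCA-based surrogate screening described above can be sketched as follows: parameters that load heavily on the same principal components as TSS or COD, and correlate strongly with them, are candidates for substitution by those easy-to-measure parameters. The column names and input file are assumptions, not the study's actual dataset (which was processed in SPSS).

# PCA loadings and correlations as a surrogate-parameter screen (illustrative).
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

df = pd.read_csv("stormwater_events.csv")        # hypothetical monitoring records
params = ["TSS", "COD", "turbidity", "TP", "Pb", "Cr", "Cu"]

Z = StandardScaler().fit_transform(df[params])
pca = PCA(n_components=2).fit(Z)

loadings = pd.DataFrame(pca.components_.T, index=params, columns=["PC1", "PC2"])
print(loadings.round(2))
print("explained variance:", pca.explained_variance_ratio_.round(2))
print(df[params].corr().loc["TSS"].round(2))     # correlation of TSS with the rest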

Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters

Procedia PDF Downloads 240
1650 Self-Tuning Robot Control Based on Subspace Identification

Authors: Mathias Marquardt, Peter Dünow, Sandra Baßler

Abstract:

The paper describes the use of subspace-based identification methods for auto-tuning of a state-space control system. The plant is an unstable but self-balancing transport robot. Because of the unstable character of the process, it has to be identified from closed-loop input-output data. Based on the identified model, a state-space controller combined with an observer is calculated. The subspace identification algorithm and the controller design procedure are combined into an auto-tuning method. The capability of the approach was verified in simulation experiments under different process conditions.

Keywords: auto tuning, balanced robot, closed loop identification, subspace identification

Procedia PDF Downloads 380
1649 Natural Convection of a Nanofluid in a Conical Container

Authors: Brahim Mahfoud, Ali Bendjaghlouli

Abstract:

Natural convection is simulated in a truncated cone filled with a nanofluid. The inclined and top walls are held at constant temperature, while the heat source is located on the bottom wall of the conical container, which is otherwise thermally insulated. A finite volume approach is used to solve the governing equations with the SIMPLE algorithm for different parameters such as the Rayleigh number, the inclination angle of the sloped walls of the enclosure and the heat source length. The results showed an enhancement of the cooling system when a nanofluid is used and the conduction regime is assisted. The inclination angle of the sloped sidewall and the heat source length affect the heat transfer rate and the maximum temperature.

Keywords: heat source, truncated cone, nanofluid, natural convection

Procedia PDF Downloads 368
1648 Application of the Discrete Rationalized Haar Transform to Distributed Parameter System

Authors: Joon-Hoon Park

Abstract:

In this paper, the rationalized Haar transform is applied to distributed parameter system identification and estimation. A distributed parameter system is a dynamical and mathematical model described by a partial differential equation, and system identification concerns the problem of determining mathematical models from observed data. The Haar function has some computational disadvantages because it contains irrational numbers; for this reason the rationalized Haar function, which contains only rational numbers, is used. The algorithm adopted in this paper is based on the transform and operational matrix of the rationalized Haar function. This approach provides more convenient and efficient computational results.

Keywords: distributed parameter system, rationalized Haar transform, operational matrix, system identification

Procedia PDF Downloads 509
1647 Strong Convergence of an Iterative Sequence in Real Banach Spaces with Kadec Klee Property

Authors: Umar Yusuf Batsari

Abstract:

Let E be a uniformly smooth and uniformly convex real Banach space and C be a nonempty, closed and convex subset of E. Let $V= \{S_i : C\to C, ~i=1, 2, \dots, N\}$ be a convex set of relatively nonexpansive mappings containing the identity. In this paper, an iterative sequence obtained from the CQ algorithm is shown to converge strongly to a point $\hat{x}$ which is a common fixed point of the relatively nonexpansive mappings in V and also solves a system of equilibrium problems in E. The result improves some existing results in the literature.

Keywords: relatively nonexpansive mappings, strong convergence, equilibrium problems, uniformly smooth space, uniformly convex space, convex set, kadec klee property

Procedia PDF Downloads 424
1646 Limit State of Heterogeneous Smart Structures under Unknown Cyclic Loading

Authors: M. Chen, S-Q. Zhang, X. Wang, D. Tate

Abstract:

This paper presents a numerical solution, namely limit and shakedown analysis, to predict the safety state of smart structures made of heterogeneous materials under unknown cyclic loadings, for instance, the flexure hinge in a micro-positioning stage driven by a piezoelectric actuator. By combining homogenization theory and the finite element method (FEM), the safety evaluation problem is converted into a large-scale nonlinear optimization program that determines an acceptable bounded loading as the design reference. Furthermore, a general numerical scheme integrating the FEM with an interior-point-algorithm based optimization tool is developed, which makes practical application possible.

Keywords: limit state, shakedown analysis, homogenization, heterogeneous structure

Procedia PDF Downloads 341
1645 Real-Time Implementation of Self-Tuning Fuzzy-PID Controller for First Order Plus Dead Time System Base on Microcontroller STM32

Authors: Maitree Thamma, Witchupong Wiboonjaroen, Thanat Suknuan, Karan Homchat

Abstract:

First order plus dead time (FOPDT) is a highly dynamic system; therefore, the controller must be intelligent. This paper presents the development and implementation of a self-tuning Fuzzy-PID controller for controlling an FOPDT system. A water level process was used to represent the FOPDT system, and the mathematical model of the system was approximated using the System Identification toolbox in Matlab. The control program and the Fuzzy-PID algorithm were implemented in Matlab/Simulink and run on a Microcontroller STM32.

Keywords: real-time control, self-tuning fuzzy-PID, FOPDT system, water level process

Procedia PDF Downloads 293
1644 GIS-Based Topographical Network for Minimum “Exertion” Routing

Authors: Katherine Carl Payne, Moshe Dror

Abstract:

The problem of minimum cost routing has been extensively explored in a variety of contexts. While routing applications based on least distance, time, and related attributes are prevalent, exertion-based routing has remained relatively unexplored. In particular, the network structures traditionally used to construct minimum cost paths are not suited to representing exertion or to finding paths of least exertion based on road gradient. In this paper, we introduce a topographical network, or 'topograph', that enables minimum cost routing based on the exertion metric on each arc of a given road network, as related to changes in road gradient. We describe an algorithm for topograph construction and present an implementation of the topograph on a road network of the state of California with ~22 million nodes.
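
To illustrate the idea of exertion-weighted routing in miniature, the sketch below assigns each arc a cost derived from its length and gradient and runs Dijkstra's algorithm over the resulting graph. The cost function is an illustrative proxy, not the paper's RPE-based metric or its topograph construction.

# Toy least-exertion routing with gradient-dependent arc weights and Dijkstra.
import heapq
from collections import defaultdict

def exertion(length_m, rise_m, uphill_factor=8.0):
    grade = rise_m / length_m
    return length_m * (1.0 + uphill_factor * max(grade, 0.0))   # climbing costs extra

def least_exertion_path(edges, source, target):
    graph = defaultdict(list)
    for u, v, length_m, rise_m in edges:
        graph[u].append((v, exertion(length_m, rise_m)))
        graph[v].append((u, exertion(length_m, -rise_m)))        # reverse direction
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# edges: (node_u, node_v, length in metres, elevation gain u->v in metres)
roads = [("A", "B", 1000, 50), ("A", "C", 1400, 0), ("C", "B", 700, 10)]
print(least_exertion_path(roads, "A", "B"))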

Keywords: topograph, RPE, routing, GIS

Procedia PDF Downloads 546
1643 Finding Viable Pollution Routes in an Urban Network under a Predefined Cost

Authors: Dimitra Alexiou, Stefanos Katsavounis, Ria Kalfakakou

Abstract:

In an urban area, transportation routes should be planned so as to minimize the pollution they provoke while taking the cost of such routes into account. In the sequel these routes are called pollution routes. The transportation network is expressed by a weighted graph G = (V, E, D, P), where every vertex represents a location to be served and E contains unordered pairs (edges) of elements of V that indicate a simple road. A distance/cost and a weight that depicts the air pollution provoked by a vehicle transition are assigned to each road; these are the items of the sets D and P, respectively. Furthermore, the investigated pollution routes must not exceed predefined values for the route cost and the route pollution level during the vehicle transition. In this paper we present an algorithm that generates such routes so that the decision maker can select the most appropriate one.

Keywords: bi-criteria, pollution, shortest paths, computation

Procedia PDF Downloads 374
1642 Particle Swarm Optimisation of a Terminal Synergetic Controller for a DC-DC Converter

Authors: H. Abderrezek, M. N. Harmas

Abstract:

DC-DC converters are widely used as reliable power sources for many industrial and military applications, computers and electronic devices. Several control methods have been developed for DC-DC converter control, mostly with asymptotic convergence. Synergetic control (SC) is a proven robust control approach and is used here in a so-called terminal scheme to achieve finite time convergence. Lyapunov synthesis is adopted to assure controlled system stability. Furthermore, a particle swarm optimization (PSO) algorithm based on an integral of time-weighted absolute error (ITAE) criterion is used to optimize the controller parameters. Simulation of terminal synergetic control of a DC-DC converter is carried out for different operating conditions and the results are compared to classic synergetic control performance, which demonstrates the effectiveness and feasibility of the proposed control method.

Keywords: DC-DC converter, PSO, finite time, terminal, synergetic control

Procedia PDF Downloads 503
1641 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data

Authors: S. Jurado, E. Pazmino

Abstract:

Determination of the medial axis of a porous media sample is a non-trivial problem of interest for several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. Subsequently, the algorithm identifies the layer of void voxels next to the solid boundaries. An iterative process then removes or 'burns' void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize the execution time and use of computer memory, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn layer data during the iterative process. The medial axis was determined by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and utilized to determine the pore-throat size distribution. Graphical user interface software was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software allows input of HRXMT data to calculate porosity, medial axis, and pore-throat size distribution and provides output in tabular and graphical formats. Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution and porosity determination of 100³, 320³ and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data for the academic community.
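
The layer-by-layer burn idea can be sketched as follows on a binarized voxel domain: each void voxel is labelled with the iteration at which it is removed, i.e. its distance in layers from the nearest solid boundary (or domain edge), and ridges of this burn-number field approximate the medial axis. This is a simplified illustration, not the released software; the test domain here is synthetic.

# Burn-number computation on a synthetic binarized 3D voxel domain.
import numpy as np
from scipy import ndimage

def burn_numbers(void):                      # void: boolean array, True = pore space
    burn = np.zeros(void.shape, dtype=np.int32)
    remaining = void.copy()
    layer = 0
    while remaining.any():
        layer += 1
        interior = ndimage.binary_erosion(remaining)   # peel one layer off the void
        shell = remaining & ~interior
        burn[shell] = layer                             # record the burn iteration
        remaining = interior
    return burn

rng = np.random.default_rng(0)
solid = ndimage.binary_dilation(rng.random((64, 64, 64)) > 0.995, iterations=3)
burn = burn_numbers(~solid)
porosity = (~solid).mean()
print("porosity:", round(porosity, 3), "max burn layer:", int(burn.max()))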

Keywords: medial axis, pore-throat distribution, porosity, porous media

Procedia PDF Downloads 116
1640 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network

Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan

Abstract:

Data aggregation is a helpful technique for reducing the data communication overhead in a wireless sensor network. One of the important tasks in data aggregation is the positioning of the aggregator points. A lot of work has been done on data aggregation, but efficient positioning of the aggregator points has not received much attention. In this paper, the authors focus on the positioning, or placement, of the aggregation points in a wireless sensor network and propose an algorithm to select the aggregator positions for a scenario where aggregator nodes are more powerful than sensor nodes.

Keywords: aggregation point, data communication, data aggregation, wireless sensor network

Procedia PDF Downloads 161
1639 3D Model Completion Based on Similarity Search with Slim-Tree

Authors: Alexis Aldo Mendoza Villarroel, Ademir Clemente Villena Zevallos, Cristian Jose Lopez Del Alamo

Abstract:

With the advancement of technology, it is now possible to scan entire objects and obtain their digital representation as point clouds or polygon meshes. However, some objects may be broken or have missing parts; thus, several methods focused on this problem have been proposed based on geometric deep learning, such as GCNN, ACNN, and PointNet, among others. In this article, an approach from a different paradigm is proposed: metric data structures are used to index global descriptors in the spectral domain, allowing a set of similar models to be retrieved in polynomial time; the Iterative Closest Point algorithm is then used to recover the parts of the incomplete model from the geometry and topology of the retrieved model with the smallest Hausdorff distance.

Keywords: 3D reconstruction method, point cloud completion, shape completion, similarity search

Procedia PDF Downloads 122