Search results for: minimum root mean square (RMS) error matching algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9330

7260 Fast Fourier Transform-Based Steganalysis of Covert Communications over Streaming Media

Authors: Jinghui Peng, Shanyu Tang, Jia Li

Abstract:

Steganalysis seeks to detect the presence of secret data embedded in cover objects, and there is a pressing demand to detect hidden messages in streaming media. This paper shows how a steganalysis algorithm based on the Fast Fourier Transform (FFT) can be used to detect the existence of secret data embedded in streaming media. The proposed algorithm uses machine parameter characteristics and a network sniffer to determine whether the Internet traffic contains streaming channels. The detected streaming data is then transformed from the time domain to the frequency domain through the FFT. The distributions of power spectra in the frequency domain between original VoIP streams and stego VoIP streams are compared in turn using a t-test, yielding a p-value of 7.5686E-176, well below the significance threshold. The results indicate that the proposed FFT-based steganalysis algorithm is effective in detecting secret data embedded in VoIP streaming media.
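
To make the spectral-comparison step concrete, the sketch below transforms two sets of synthetic frames to the frequency domain and compares their mean power spectra with a t-test. The frame size, the noise-like stand-in streams, and the unequal-variance t-test variant are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of the core spectral comparison: FFT two audio-like
# streams and t-test their power-spectrum distributions.
import numpy as np
from scipy import stats

def power_spectrum(frames):
    """Mean power spectrum over a set of equal-length frames."""
    spectra = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return spectra.mean(axis=0)

rng = np.random.default_rng(0)
cover = rng.normal(size=(200, 512))                        # stand-in cover VoIP frames
stego = cover + rng.normal(scale=0.05, size=cover.shape)   # stand-in stego frames

p_cover = power_spectrum(cover)
p_stego = power_spectrum(stego)

# Two-sample t-test between the two power-spectrum distributions;
# a p-value below the chosen threshold flags hidden data.
t_stat, p_value = stats.ttest_ind(p_cover, p_stego, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3e}")
```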

Keywords: steganalysis, security, Fast Fourier Transform, streaming media

Procedia PDF Downloads 147
7259 A Memetic Algorithm Approach to Clustering in Mobile Wireless Sensor Networks

Authors: Masood Ahmad, Ataul Aziz Ikram, Ishtiaq Wahid

Abstract:

A wireless sensor network (WSN) is an interconnection of mobile wireless nodes with limited energy and memory. These networks can be deployed for many critical applications like military operations, rescue management, fire detection and so on. In a flat routing structure, every node plays the equal roles of sensor and router. The topology may change very frequently due to the mobile nature of nodes in WSNs, and topology maintenance may produce many overhead messages. To avoid topology maintenance overhead messages, an optimized cluster-based mobile wireless sensor network using a memetic algorithm is proposed in this paper. The nodes in this network are first divided into clusters. The cluster leaders then transmit data to the base station. The network is validated through an extensive simulation study. The results show that the proposed technique has superior results compared to existing techniques.
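
As a rough illustration of the memetic approach (a genetic algorithm hybridized with local search), the sketch below selects cluster heads for randomly placed nodes, using total node-to-nearest-head distance as the fitness. The population size, rates, and greedy hill-climbing "meme" are illustrative choices, not the authors' exact design.

```python
# Memetic algorithm sketch for cluster-head selection on a 2D node field.
import numpy as np

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 100, size=(60, 2))   # assumed sensor node positions
K = 5                                       # number of cluster heads

def fitness(heads):
    """Total distance from each node to its nearest cluster head (lower = better)."""
    heads = list(heads)
    d = np.linalg.norm(nodes[:, None] - nodes[heads][None], axis=2)
    return d.min(axis=1).sum()

def local_search(heads):
    """Meme: hill-climb each head slot over a few random replacements."""
    heads = list(heads)
    for i in range(K):
        for cand in rng.choice(len(nodes), 5, replace=False):
            trial = heads.copy()
            trial[i] = cand
            if len(set(trial)) == K and fitness(trial) < fitness(heads):
                heads = trial
    return tuple(heads)

pop = [tuple(rng.choice(len(nodes), K, replace=False)) for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness)
    parents = pop[:10]                      # elitist selection
    children = []
    while len(children) < 10:
        a, b = rng.choice(10, 2, replace=False)
        mix = list(set(parents[a]) | set(parents[b]))   # crossover: pool of head IDs
        child = tuple(rng.choice(mix, K, replace=False)) if len(mix) >= K else parents[a]
        children.append(local_search(child))            # memetic refinement
    pop = parents + children
print("best total distance:", fitness(min(pop, key=fitness)))
```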

Keywords: WSN, routing, cluster based, meme, memetic algorithm

Procedia PDF Downloads 481
7258 A Retrievable Genetic Algorithm for Efficient Solving of Sudoku Puzzles

Authors: Seyed Mehran Kazemi, Bahare Fatemi

Abstract:

Sudoku is a logic-based combinatorial puzzle game which is popular among people of different ages. Due to this popularity, computer software is being developed to generate and solve Sudoku puzzles with different levels of difficulty. Several methods and algorithms have been proposed and used in different software packages to efficiently solve Sudoku puzzles. Various search methods, such as stochastic local search, have been applied to this problem. The Genetic Algorithm (GA) is one of the algorithms which has been applied to this problem in different forms and in several works in the literature. In these works, chromosomes with little or no information were considered, and the results obtained were not promising. In this paper, we propose a new way of applying a GA to this problem which uses more-informed chromosomes than other works in the literature. We optimize the parameters of our GA using puzzles with different levels of difficulty. Then we use the optimized values of the parameters to solve various puzzles and compare our results to another GA-based method for solving Sudoku puzzles.
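
One common way to build a more-informed chromosome is to encode each row as a permutation of the digits missing from the givens, so the row constraint holds by construction and fitness only counts column and box conflicts. The sketch below illustrates that encoding with a deliberately simple elitist GA; the empty stand-in puzzle, population size, and swap mutation are assumptions, not the authors' exact scheme.

```python
# GA sketch for Sudoku with row-permutation chromosomes.
import random

def fill_rows(puzzle):
    """Random chromosome: complete each row with its missing digits."""
    grid = [row[:] for row in puzzle]
    for row in grid:
        missing = [d for d in range(1, 10) if d not in row]
        random.shuffle(missing)
        for i in range(9):
            if row[i] == 0:
                row[i] = missing.pop()
    return grid

def conflicts(grid):
    """Duplicate digits in columns and 3x3 boxes (rows are valid by design)."""
    c = 0
    for j in range(9):
        c += 9 - len({grid[i][j] for i in range(9)})
    for bi in range(0, 9, 3):
        for bj in range(0, 9, 3):
            c += 9 - len({grid[bi + di][bj + dj] for di in range(3) for dj in range(3)})
    return c

puzzle = [[0] * 9 for _ in range(9)]          # stand-in: an empty puzzle, no givens
population = [fill_rows(puzzle) for _ in range(100)]
for gen in range(200):
    population.sort(key=conflicts)
    if conflicts(population[0]) == 0:
        break
    survivors = population[:50]
    children = []
    for parent in survivors:                  # elitism + in-row swap mutation,
        child = [row[:] for row in parent]    # which preserves the row constraint
        r = random.randrange(9)
        a, b = random.sample(range(9), 2)
        child[r][a], child[r][b] = child[r][b], child[r][a]
        children.append(child)
    population = survivors + children
print("best conflict count:", conflicts(population[0]))
```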

Keywords: genetic algorithm, optimization, solving Sudoku puzzles, stochastic local search

Procedia PDF Downloads 423
7257 Position and Speed Tracking of DC Motor Based on Experimental Analysis in LabVIEW

Authors: Muhammad Ilyas, Awais Khan, Syed Ali Raza Shah

Abstract:

DC motors are widely used in industry to provide mechanical power in the form of speed and torque. The position and speed control of DC motors is attracting the interest of the scientific community in robotics, especially for robotic arms and flexible-joint manipulators. The current research work is based on position control of DC motors using experimental investigations in LabVIEW. A linear control strategy is applied to track the position and speed of the DC motor, with comparative analysis in the LabVIEW platform and simulation analysis in MATLAB. The tracking error in the hardware setup based on LabVIEW programming is slightly greater than in the simulation analysis in MATLAB due to the inertial load of the motor during steady-state conditions. The controller output shows that the input voltage applied to the DC motor varies between 0-8 V to ensure minimal steady-state error while tracking the position and speed of the DC motor.
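
A minimal sketch of such a linear tracking loop is given below: a discrete PID controller drives a first-order DC motor model, with the command clamped to the 0-8 V range reported in the abstract. The gains, motor constants, and sampling step are illustrative assumptions.

```python
# Discrete PID position loop on a toy first-order DC motor model.
Kp, Ki, Kd = 12.0, 4.0, 0.3        # assumed PID gains
tau, K_m = 0.15, 2.0               # assumed motor time constant and gain
dt, T = 0.001, 2.0                 # sampling step (s) and run time (s)

setpoint = 1.0                     # desired shaft position (rad)
pos, vel = 0.0, 0.0
integral, prev_err = 0.0, 0.0

for _ in range(int(T / dt)):
    err = setpoint - pos
    integral += err * dt
    deriv = (err - prev_err) / dt
    v = Kp * err + Ki * integral + Kd * deriv
    v = min(max(v, 0.0), 8.0)      # actuator saturation: 0-8 V input voltage
    prev_err = err
    # First-order motor velocity dynamics: tau * dvel/dt = -vel + K_m * v
    vel += dt * (-vel + K_m * v) / tau
    pos += dt * vel

print(f"final position: {pos:.4f} rad, residual error: {setpoint - pos:.2e} rad")
```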

Keywords: DC motor, LabVIEW, proportional integral derivative control, position tracking, speed tracking

Procedia PDF Downloads 106
7256 Crack Detection and Measurement Using VLP-16 LiDAR and Intel Depth Camera D435 in Real-Time

Authors: Xinwen Zhu, Xingguang Li, Sun Yi

Abstract:

Cracking is one of the most common forms of damage in buildings, bridges, roads and so on, and may pose safety hazards. Cracks occur in structures of various materials. Traditional methods of manual detection and measurement, known to be subjective, time-consuming, and labor-intensive, are gradually unable to meet the needs of modern development. In addition, crack detection and measurement must be carried out safely, considering space limitations and hazards. Intelligent crack detection has therefore become a necessary area of research. In this paper, an efficient method for crack detection and quantification using a 3D sensor, a LiDAR, and a depth camera is proposed. This method works even in a dark environment, which is usual in real-world applications. The LiDAR rapidly spins to scan the surrounding environment, firing lasers thousands of times per second and providing a rich 3D point cloud in real time. The LiDAR provides quite accurate depth information: the distance of each point can be determined to within about ±3 cm, and top-range models have a measurement range of over 100 m. However, this error is still too large for some high-precision structures. To measure crack depth more accurately, a depth camera is needed, so the cracks are scanned by the depth camera at the same time. Finally, all data from the LiDAR and the depth camera are analyzed, and the size of the cracks can be quantified successfully. The comparison shows that the minimum and mean absolute percentage errors between measured and calculated width are about 2.22% and 6.27%, respectively. The experiments and results are presented in this paper.

Keywords: LiDAR, depth camera, real-time, detection and measurement

Procedia PDF Downloads 224
7255 Signal Processing Techniques for Adaptive Beamforming with Robustness

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

Adaptive beamforming using an antenna array of sensors is useful in the process of adaptively detecting and preserving the presence of the desired signal while suppressing the interference and the background noise. Conventional adaptive array beamforming requires prior information on either the impinging direction or the waveform of the desired signal to adapt the weights. The adaptive weights of an antenna array beamformer under a steered-beam constraint are calculated by minimizing the output power of the beamformer subject to the constraint that forces the beamformer to make a constant response in the steering direction. Hence, the performance of the beamformer is very sensitive to the accuracy of the steering operation. In the literature, it is well known that the performance of an adaptive beamformer deteriorates under any steering angle error encountered in many practical applications, e.g., wireless communication systems with massive antennas deployed at the base station and user equipment. Hence, developing effective signal processing techniques to deal with the problem of steering angle error in array beamforming systems has become an important research topic. In this paper, we present an effective signal processing technique for constructing an adaptive beamformer that is robust against steering angle error. The proposed array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. Based on the presumed steering vector and a preset angle range for steering mismatch tolerance, we first create a matrix related to the direction vectors of the signal sources. Two projection matrices are generated from this matrix. The projection matrix associated with the desired signal information and the received array data are utilized to iteratively estimate the actual direction vector of the desired signal. The estimated direction vector of the desired signal is then used to appropriately find the quiescent weight vector. The other projection matrix is set to be the signal blocking matrix required for performing adaptive beamforming. Accordingly, the proposed beamformer consists of adaptive quiescent weights and partially adaptive weights. Several computer simulation examples are provided for evaluating and comparing the proposed technique with existing robust techniques.
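
The sketch below gives a simplified numerical flavor of the projection idea for a uniform linear array: steering vectors are generated over a preset mismatch-tolerance range, a projection matrix onto their span corrects the presumed (erroneous) steering vector, and MVDR-style weights are then computed. The array geometry, rank-3 subspace, and single-shot (non-iterative) correction are illustrative stand-ins for the authors' iterative estimator.

```python
# Projection-based correction of a mismatched steering vector, then MVDR weights.
import numpy as np

M, d = 10, 0.5                       # sensors, spacing in wavelengths (assumed)
def steer(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

true_doa, presumed_doa = 12.0, 8.0   # 4 degrees of steering error
rng = np.random.default_rng(2)
snapshots = 500
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
x = np.outer(steer(true_doa), s) + 0.1 * (
    rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
R = x @ x.conj().T / snapshots       # sample covariance matrix

# Matrix of steering vectors over the mismatch-tolerance range [presumed +/- 5 deg]
A = np.stack([steer(t) for t in np.linspace(presumed_doa - 5, presumed_doa + 5, 11)],
             axis=1)
U, _, _ = np.linalg.svd(A, full_matrices=False)
P = U[:, :3] @ U[:, :3].conj().T     # projection onto a rank-3 subspace (assumption)

a_hat = P @ steer(presumed_doa)      # corrected steering-vector estimate
w = np.linalg.solve(R, a_hat)
w /= a_hat.conj() @ w                # MVDR normalization: unit response along a_hat

gain = lambda a: np.abs(w.conj() @ a) ** 2
print("gain toward true vs presumed DOA:", gain(steer(true_doa)), gain(steer(presumed_doa)))
```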

Keywords: adaptive beamforming, robustness, signal blocking, steering angle error

Procedia PDF Downloads 124
7254 Iris Recognition Based on the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biological feature of the human body, and it has become a very active topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, up to a certain extent, against the variations that may occur in the contrast or brightness of iris image samples; these variations mostly occur due to lighting differences and camera changes. At first, the iris region is located; after that, it is remapped to a rectangular area of size 360x60 pixels. Also, a new method is proposed for detecting eyelash and eyelid points; it depends on statistical analysis of the image to mark the eyelash and eyelid as noise points. In order to account for feature localization (variation), the rectangular iris image is partitioned into N overlapped sub-images (blocks); then, from each block, a set of different average directional gradient density values is calculated to be used as a texture feature vector. The applied gradient operators are taken along the horizontal, vertical and diagonal directions. The low-order norms of the gradient components were used to establish the feature vector. A Euclidean distance-based classifier was used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached 99.92%.
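
A minimal sketch of this feature-extraction and matching pipeline follows: a normalized 360x60 strip is partitioned into overlapping blocks, an L1-norm density of the horizontal, vertical, and diagonal first-order gradients is computed per block, and two feature vectors are compared with the Euclidean metric. The block width, step size, and random stand-in strips are assumptions.

```python
# Directional gradient-density features over overlapping blocks + Euclidean matcher.
import numpy as np

def gradient_feature_vector(strip, block_w=60, step=10):
    """strip: normalized iris image, 60 rows x 360 columns."""
    feats = []
    gx = np.diff(strip, axis=1)               # horizontal gradient
    gy = np.diff(strip, axis=0)               # vertical gradient
    gd = strip[1:, 1:] - strip[:-1, :-1]      # diagonal gradient
    for x0 in range(0, strip.shape[1] - block_w + 1, step):
        for g in (gx, gy, gd):
            blk = g[:, x0:x0 + block_w - 1]
            feats.append(np.abs(blk).mean())  # L1-norm density of the block
    return np.array(feats)

rng = np.random.default_rng(3)
template = rng.random((60, 360))              # stand-in for an enrolled iris strip
probe = template + 0.01 * rng.random((60, 360))

f1 = gradient_feature_vector(template)
f2 = gradient_feature_vector(probe)
distance = np.linalg.norm(f1 - f2)            # Euclidean matching metric
print(f"Euclidean distance to template: {distance:.4f}")
```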

Keywords: iris recognition, contrast stretching, gradient features, texture features, Euclidean metric

Procedia PDF Downloads 335
7253 Hybrid Adaptive Modeling to Enhance Robustness of Real-Time Optimization

Authors: Hussain Syed Asad, Richard Kwok Kit Yuen, Gongsheng Huang

Abstract:

Real-time optimization has been considered an effective approach for improving the energy-efficient operation of heating, ventilation, and air-conditioning (HVAC) systems. In model-based real-time optimization, model mismatches cannot be avoided. When model mismatches are significant, the performance of the real-time optimization will be impaired and hence the expected energy saving will be reduced. In this paper, the model mismatches of a chiller plant in real-time optimization are considered. In the real-time optimization of a chiller plant, a simplified semi-physical or grey-box model of the chiller is typically used, which should be identified using available operation data. To overcome the model mismatches associated with the chiller model, a hybrid Genetic Algorithms (HGAs) method is used for online real-time training of the chiller model. HGAs combine the Genetic Algorithms (GAs) method (for global search) and a traditional optimization method (faster and more efficient for local search) to avoid the conventional hit-and-trial process of GAs. The identification of model parameters is synthesized as an optimization problem, and the objective function is the least-square error between the output from the model and the actual output from the chiller plant. A case study is used to illustrate the implementation of the proposed method. It has been shown that the proposed approach is able to provide reliability in decision making, enhance the robustness of the real-time optimization strategy and improve energy performance.
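
The hybrid identification step can be sketched as follows: a small GA performs the coarse global search, and a gradient-based local optimizer then refines the best candidate, minimizing the least-square error between model output and (synthetic) plant data. The quadratic stand-in chiller model, GA settings, and BFGS refinement are illustrative assumptions.

```python
# Hybrid GA + local-search identification of model parameters by least squares.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
load = rng.uniform(0.3, 1.0, 200)                      # part-load ratio data (stand-in)
true_p = np.array([0.12, 0.45, 0.20])
power = true_p[0] + true_p[1] * load + true_p[2] * load**2 + 0.01 * rng.normal(size=200)

def sse(p):                                            # least-square error objective
    model = p[0] + p[1] * load + p[2] * load**2
    return np.sum((model - power) ** 2)

# --- GA stage: coarse global search over a parameter box ---
pop = rng.uniform(-1, 1, size=(40, 3))
for gen in range(50):
    pop = pop[np.argsort([sse(p) for p in pop])]
    parents = pop[:20]
    children = parents[rng.integers(0, 20, 20)] + 0.05 * rng.normal(size=(20, 3))
    pop = np.vstack([parents, children])               # elitism + Gaussian mutation

# --- Local stage: gradient-based refinement from the GA's best point ---
best = min(pop, key=sse)
result = minimize(sse, best, method="BFGS")
print("identified parameters:", np.round(result.x, 3))
```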

Keywords: energy performance, hybrid adaptive modeling, hybrid genetic algorithms, real-time optimization, heating, ventilation, and air-conditioning

Procedia PDF Downloads 417
7252 Control of Stability for PV and Battery Hybrid System in Partial Shading

Authors: Weiying Wang, Qi Li, Huiwen Deng, Weirong Chen

Abstract:

Abrupt light changes and uneven illumination prevent a PV system from maintaining constant output power, which affects the efficiency of the grid-connected inverter as well as the stability of the system. To solve this problem, this paper presents a strategy to control the stability of a photovoltaic power system under partial shading of the PV array, leading to constant power output and improving the system's capacity to resist disturbances. Firstly, a photovoltaic cell model considering partial shading is established, and the backtracking search algorithm is used as the maximum power point tracking algorithm under complex illumination. Then, an energy storage system based on a constant power control strategy is used to achieve constant power output. Finally, the effectiveness and correctness of the proposed control method are verified by joint simulation on the MATLAB/Simulink and RTLAB simulation platforms.

Keywords: backtracking search algorithm, constant power control, hybrid system, partial shading, stability

Procedia PDF Downloads 297
7251 A Parallel Approach for 3D-Variational Data Assimilation on GPUs in Ocean Circulation Models

Authors: Rossella Arcucci, Luisa D'Amore, Simone Celestino, Giuseppe Scotti, Giuliano Laccetti

Abstract:

This work is the first step in a rather wide research activity, in collaboration with the Euro Mediterranean Center for Climate Changes, aimed at introducing scalable approaches in Ocean Circulation Models. We discuss the design and implementation of a parallel algorithm for solving the Variational Data Assimilation (DA) problem on Graphics Processing Units (GPUs). The algorithm is based on the fully scalable 3DVar DA model, previously proposed by the authors, which uses a Domain Decomposition approach (we refer to this model as the DD-DA model). We proceed with an incremental porting process consisting of three distinct stages: requirements and source code analysis, incremental development of CUDA kernels, and testing and optimization. Experiments confirm the theoretical performance analysis based on the so-called scale-up factor, demonstrating that the DD-DA model can be suitably mapped on GPU architectures.

Keywords: data assimilation, GPU architectures, ocean models, parallel algorithm

Procedia PDF Downloads 412
7250 Comparison of Techniques for Detection and Diagnosis of the Air-Gap Eccentricity Fault in Induction Motors

Authors: Abrahão S. Fontes, Carlos A. V. Cardoso, Levi P. B. Oliveira

Abstract:

Induction motors are used worldwide in various industries. Several maintenance techniques are applied to increase the operating time and lifespan of these motors. Among these, predictive maintenance techniques such as Motor Current Signature Analysis (MCSA), Motor Square Current Signature Analysis (MSCSA), Park's Vector Approach (PVA) and Park's Vector Square Modulus (PVSM) are used to detect and diagnose faults in electric motors, characterized by patterns in the stator current frequency spectrum. In this article, these techniques are applied and compared on a real motor which has an air-gap eccentricity fault. A theoretical model of an induction motor without faults was used to assist the comparison between the stator current frequency spectrum patterns with and without faults. Metrics were proposed and applied to evaluate the fault detection sensitivity of each technique. The results presented here show that the above techniques are suitable for the air-gap eccentricity fault, and their comparison showed the suitability of each one.
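
As a flavor of the MCSA step, the sketch below synthesizes a stator current containing eccentricity sideband components at f_s ± f_r and reads their amplitudes off the frequency spectrum. The supply and rotor frequencies, sideband amplitudes, and windowing choice are assumed values.

```python
# MCSA sketch: locate eccentricity sidebands in a synthetic stator current spectrum.
import numpy as np

fs_line, f_rot = 60.0, 29.0        # supply and rotor frequencies (assumed, Hz)
Fs, T = 2000.0, 10.0               # sampling rate (Hz) and record length (s)
t = np.arange(0, T, 1 / Fs)

current = (np.sin(2 * np.pi * fs_line * t)
           + 0.02 * np.sin(2 * np.pi * (fs_line - f_rot) * t)   # lower sideband
           + 0.02 * np.sin(2 * np.pi * (fs_line + f_rot) * t))  # upper sideband

spectrum = np.abs(np.fft.rfft(current * np.hanning(len(t)))) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / Fs)

# Report the spectral amplitude at the expected eccentricity sidebands.
for f in (fs_line - f_rot, fs_line + f_rot):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:5.1f} Hz -> amplitude {spectrum[idx]:.5f}")
```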

Keywords: eccentricity in the air-gap, fault diagnosis, induction motors, predictive maintenance

Procedia PDF Downloads 350
7249 Permeability Prediction Based on Hydraulic Flow Unit Identification and Artificial Neural Networks

Authors: Emad A. Mohammed

Abstract:

The concept of hydraulic flow units (HFU) has been used for decades in the petroleum industry to improve the prediction of permeability. This concept is strongly related to the flow zone indicator (FZI), which is a function of the reservoir rock quality index (RQI). Both indices are based on the porosity and permeability of reservoir core samples. It is assumed that core samples with similar FZI values belong to the same HFU. Thus, after dividing the porosity-permeability data based on the HFU, transformations can be done in order to estimate the permeability from the porosity. The conventional practice is to use the power-law transformation with conventional HFU, where the percentage of error is considerably high. In this paper, a neural network technique is employed as a soft-computing transformation method to predict permeability instead of the power-law method, in order to avoid a high percentage of error. This technique is based on HFU identification, where the Amaefule et al. (1993) method is utilized. In this regard, the Kozeny-Carman (K-C) model and the modified K-C model by Hasan and Hossain (2011) are employed. A comparison is made between the two transformation techniques for the two porosity-permeability models. Results show that the modified K-C model helps in getting better results, with a lower percentage of error in predicting permeability. The results also show that the use of artificial intelligence techniques gives more accurate predictions than the power-law method. This study was conducted on a heterogeneous, complex carbonate reservoir in Oman. Data were collected from seven wells to obtain the permeability correlations for the whole field. The findings of this study will help in getting better estimations of the permeability of a complex reservoir.
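
For reference, the Amaefule et al. (1993) relations underlying the HFU workflow are RQI = 0.0314·sqrt(k/φ), φz = φ/(1 − φ), and FZI = RQI/φz (k in mD, φ as a fraction). The sketch below computes them for illustrative core data and bins samples into HFUs by similar FZI values; the core values and bin edges are stand-ins.

```python
# RQI / FZI computation and a simple HFU assignment for stand-in core data.
import numpy as np

phi = np.array([0.08, 0.12, 0.15, 0.20, 0.22])   # porosity (fraction), stand-in
k = np.array([0.5, 3.0, 12.0, 80.0, 150.0])      # permeability (mD), stand-in

rqi = 0.0314 * np.sqrt(k / phi)                  # reservoir quality index
phi_z = phi / (1.0 - phi)                        # normalized porosity index
fzi = rqi / phi_z                                # flow zone indicator

# Simple HFU assignment: bin samples by log10(FZI) (illustrative bin edges).
hfu = np.digitize(np.log10(fzi), bins=[-0.25, 0.25, 0.75])
for p, kk, f, h in zip(phi, k, fzi, hfu):
    print(f"phi={p:.2f}  k={kk:6.1f} mD  FZI={f:.3f}  HFU={h}")
```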

Keywords: permeability, hydraulic flow units, artificial intelligence, correlation

Procedia PDF Downloads 136
7248 Structural Analysis of a Composite Wind Turbine Blade

Authors: C. Amer, M. Sahin

Abstract:

The design of an optimised 5-meter-long horizontal-axis wind turbine rotor blade in accordance with the IEC 61400-2 standard is a research and development project intended to fulfil the requirement of high torque efficiency from wind production and to optimise the structural components to be as light and strong as possible. For this purpose, a research study is presented here focusing on the structural characteristics of a composite wind turbine blade via finite element modelling and analysis tools. In this work, first, the required data regarding the general geometrical parts are gathered. Then, the airfoil geometries are created at various sections along the span of the blade by using CATIA software to obtain the two surfaces, namely the suction and the pressure side of the blade, which enclose a hat-shaped fibre-reinforced plastic spar beam (the so-called chassis) that starts at 0.5 m from the root of the blade, extends up to 4 m, and is filled with a foam core. The root part connecting the blade to the main rotor differential metallic hub, having twelve hollow threaded studs, is then modelled. The materials are assigned as two different types of glass fabric, a polymeric foam core material, and a steel-balsa wood combination for the root connection parts. The glass fabrics are applied using hand wet lay-up lamination with epoxy resin: METYX L600E10C-0 with unidirectional continuous fibres, and METYX XL800E10F with a tri-axial architecture with fibres in the 0, +45, -45 degree orientations in a ratio of 2:1:1. Divinycell H45 is used as the polymeric foam. The finite element modelling of the blade is performed via MSC PATRAN software, with various meshes created on each structural part, considering shell elements for all surface geometries, and lumped masses are added to simulate extra adhesive locations. For the static analysis, the boundary conditions are assigned as fixed at the root through the aforementioned bolts, whereas for the dynamic analysis both fixed-free and free-free boundary conditions are applied. Taking mesh independency into account, MSC NASTRAN is used as the solver for both analyses. The static analysis evaluates the tip deflection of the blade under its own weight, and the dynamic analysis comprises normal-mode analysis performed in order to obtain the natural frequencies and corresponding mode shapes, focusing on the first five in-plane and out-of-plane bending modes and the torsional modes of the blade. The analysis results of this study are then used as a benchmark prior to modal testing, where the experiments on the produced wind turbine rotor blade confirmed the analytical calculations.

Keywords: dynamic analysis, fiber reinforced composites, horizontal axis wind turbine blade, hand-wet layup, modal testing

Procedia PDF Downloads 426
7247 The Effect of Accounting Conservatism on Cost of Capital: A Quantile Regression Approach for MENA Countries

Authors: Maha Zouaoui Khalifa, Hakim Ben Othman, Hussaney Khaled

Abstract:

Prior empirical studies have investigated the economic consequences of accounting conservatism by examining its impact on the cost of equity capital (COEC). However, findings are not conclusive. We assume that the inconsistent results on this association may be attributed to the regression models used in data analysis. To address this issue, we re-examine the effects of two dimensions of accounting conservatism, unconditional conservatism (U_CONS) and conditional conservatism (C_CONS), on the COEC for a sample of listed firms from Middle East and North Africa (MENA) countries, applying the quantile regression (QR) approach developed by Koenker and Bassett (1978). While the classical ordinary least squares (OLS) method is widely used in empirical accounting research, it may produce inefficient and biased estimates in the case of departures from normality or long-tailed error distributions. The QR method is more powerful than OLS in handling this kind of problem. It allows the coefficients on the independent variables to shift across the distribution of the dependent variable, whereas the OLS method only estimates the conditional mean effects on the response variable. We find, as predicted, that U_CONS has a significant positive effect on the COEC, whereas C_CONS has a negative impact. Findings also suggest that the effects of the two dimensions of accounting conservatism differ considerably across COEC quantiles. Comparing results from the QR method with those of OLS, this study throws more light on the association between accounting conservatism and the COEC.
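
The OLS-versus-QR contrast can be reproduced in a few lines: the sketch below fits an OLS regression and several quantile regressions of a synthetic cost-of-equity proxy on a conservatism measure with heteroskedastic, long-tailed errors, exactly the setting where the quantile slopes diverge. Variable names and the data-generating process are illustrative.

```python
# OLS vs. quantile regression on synthetic long-tailed, heteroskedastic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 500
cons = rng.normal(size=n)                       # stand-in conservatism measure
# Heteroskedastic, t-distributed errors: the case where OLS is inefficient.
coec = 0.08 + 0.02 * cons + rng.standard_t(3, size=n) * (0.01 + 0.01 * np.abs(cons))
df = pd.DataFrame({"coec": coec, "cons": cons})

ols_fit = smf.ols("coec ~ cons", df).fit()
print(f"OLS slope:  {ols_fit.params['cons']:.4f}")
for q in (0.1, 0.5, 0.9):
    qfit = smf.quantreg("coec ~ cons", df).fit(q=q)
    print(f"Q{q:.1f} slope: {qfit.params['cons']:.4f}")
```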

Keywords: unconditional conservatism, conditional conservatism, cost of equity capital, OLS, quantile regression, emerging markets, MENA countries

Procedia PDF Downloads 355
7246 Impact of Nano-Anatase TiO₂ on the Germination Indices and Seedling Growth of Some Plant Species

Authors: Rayhaneh Amooaghaie, Maryam Norouzi

Abstract:

In this study, the effects of nTiO₂ on the seed germination and growth of six plant species (wheat, soybean, tomato, canola, cucumber, and lettuce) were evaluated in petri dishes (direct exposure) and in soil in a greenhouse experiment (soil exposure). The data demonstrate that, under both culture conditions, low or mild concentrations of nTiO₂ either stimulated or had no effect on seed germination, root growth and vegetative biomass, while high concentrations had an inhibitory effect. However, the results showed that the impacts of nTiO₂ on plant growth in soil were only partially consistent with those observed in pure culture. Based on both sets of experiments, among the above six species, lettuce and canola were, respectively, the most susceptible and the most tolerant species to nTiO₂ toxicity. The results also revealed that the impacts of nTiO₂ on plant growth in soil were smaller than under petri dish exposure, probably due to dilution in soil and complexation/aggregation of nTiO₂, which would lead to lower exposure of the plants. High concentrations of nTiO₂ caused significant reductions in the fresh and dry weight of aerial parts and roots and in the chlorophyll and carotenoid contents of all species, which also coincided with further accumulation of malondialdehyde (MDA). These findings suggest that the decreased growth might be the result of nTiO₂-induced oxidative stress and disturbance of the photosynthesis systems.

Keywords: chlorophyll, lipid peroxidation, nano TiO₂, seed germination

Procedia PDF Downloads 165
7245 Methodology and Credibility of Unmanned Aerial Vehicle-Based Cadastral Mapping

Authors: Ajibola Isola, Shattri Mansor, Ojogbane Sani, Olugbemi Tope

Abstract:

The cadastral map is the rationale behind city management, planning and development. For years, cadastral maps have been produced by ground and photogrammetry platforms. Recent evolution in photogrammetry and remote sensing sensors has spurred the use of Unmanned Aerial Vehicle systems (UAVs) for cadastral mapping. Despite the time savings and multi-dimensional cost-effectiveness of the UAV platform, issues related to cadastral map accuracy are a hindrance to the wide applicability of UAV cadastral mapping. This study aims to present an approach for generating UAV cadastral maps and assessing their credibility. Different sets of Red, Green, and Blue (RGB) photos were obtained from a Tarot 680 hexacopter UAV platform flown over the Universiti Putra Malaysia campus sports complex at altitudes of 70 m, 100 m, and 250 m. Before flying the UAV, twenty-eight ground control points were evenly established in the study area with a real-time kinematic differential global positioning system. The second phase of the study utilizes an image-matching algorithm for photo alignment, wherein camera calibration parameters and ten of the established ground control points were used for estimating the inner, relative, and absolute orientations of the photos. The resulting orthoimages are exported to ArcGIS software for digitization. Visual, tabular, and graphical assessments of the resulting cadastral maps showed different levels of accuracy. The results of the study demonstrate a step-by-step approach for generating UAV cadastral maps and show that the cadastral map acquired at 70 m altitude produced better results.

Keywords: aerial mapping, orthomosaic, cadastral map, flying altitude, image processing

Procedia PDF Downloads 82
7244 Automatic Censoring in K-Distribution for Multiple Targets Situations

Authors: Naime Boudemagh, Zoheir Hammoudi

Abstract:

Parameter estimation for the K-distribution is an essential part of radar detection. In fact, the presence of interfering targets in reference cells causes a decrease in detection performance: in such situations, the estimates of the shape and scale parameters are far from the actual values. In order to avoid interfering targets, we propose an Automatic Censoring (AC) algorithm for radar interfering targets in K-distributed clutter. The censoring technique used in this work offers good discrimination between homogeneous and non-homogeneous environments. The homogeneous population is then used to estimate the unknown parameters by the classical Method of Moments (MOM). The AC algorithm does not need any prior information about the clutter parameters, nor does it require the number or positions of the interfering targets. The accuracy of the parameter estimates obtained by this algorithm is validated and compared to various actual values of the shape parameter using Monte Carlo simulations; the latter show that the probability of censoring in multiple-target situations is in good agreement.
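
A small sketch of the MOM stage is given below: for K-distributed intensity (gamma texture multiplied by exponential speckle), the moment ratio E[I²]/E[I]² = 2(1 + 1/ν) yields the shape estimate ν̂ = 2/(m₂/m₁² − 2). The censoring step, which would first remove the largest cells suspected of containing interfering targets, is omitted; the true parameters and sample size are stand-ins.

```python
# Method-of-moments shape estimation for K-distributed clutter intensity.
import numpy as np

rng = np.random.default_rng(6)
nu, mu, N = 1.5, 1.0, 100000          # true shape, mean intensity, sample count

texture = rng.gamma(shape=nu, scale=mu / nu, size=N)   # gamma texture, mean mu
speckle = rng.exponential(scale=1.0, size=N)           # unit-mean speckle
intensity = texture * speckle                          # K-distributed intensity

m1, m2 = intensity.mean(), (intensity**2).mean()
nu_hat = 2.0 / (m2 / m1**2 - 2.0)     # from E[I^2]/E[I]^2 = 2(1 + 1/nu)
print(f"true nu = {nu}, MOM estimate = {nu_hat:.3f}")
```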

Keywords: parameters estimation, method of moments, automatic censoring, K distribution

Procedia PDF Downloads 373
7243 Analysis of Radial Pulse Using Nadi-Parikshan Yantra

Authors: Ashok E. Kalange

Abstract:

Diagnosis according to Ayurveda is to find the root cause of a disease. Out of the eight different kinds of examination, Nadi-Pariksha (pulse examination) is important. Nadi-Pariksha is done at the root of the thumb by examining the radial artery using three fingers. Ancient Ayurveda identifies health status by observing the wrist pulses in terms of 'Vata', 'Pitta' and 'Kapha', collectively called the tridosha, as the basic elements of the human body and in their combinations. Diagnosis by traditional pulse analysis, Nadi-Pariksha, requires long experience in pulse examination and a high level of skill, and the interpretation tends to be subjective, depending on the expertise of the practitioner. The present work is part of efforts carried out to make Nadi-Parikshan objective. The Nadi Parikshan Yantra (three-point pulse examination system) developed in our laboratory uses three pressure sensors (one each for the Vata, Pitta and Kapha points on the radial artery). Radial pulse data were collected from a large number of subjects and analyzed on the basis of the relative amplitudes of the three point pulses, as well as in the frequency and time domains. The same subjects were examined by an Ayurvedic physician (Nadi Vaidya), and the dominant Dosha - Vata, Pitta or Kapha - was identified. The results are discussed in detail in the paper.

Keywords: Nadi Parikshan Yantra, Tridosha, Nadi Pariksha, human pulse data analysis

Procedia PDF Downloads 189
7242 RGB-D SLAM Algorithm Based on Pixel-Level Dense Depth Map

Authors: Hao Zhang, Hongyang Yu

Abstract:

Scale uncertainty is a well-known challenging problem in visual SLAM. Because an RGB-D sensor provides depth information, RGB-D SLAM mitigates this scale uncertainty problem. However, due to the limitations of the physical hardware, the depth map output by an RGB-D sensor usually contains large areas of missing depth values. This missing depth information affects the accuracy and robustness of RGB-D SLAM. In order to reduce these effects, this paper completes the missing areas of the depth map output by the RGB-D sensor and then fuses the completed dense depth map into ORB-SLAM2. By adding this process of obtaining pixel-level dense depth maps, a better RGB-D visual SLAM algorithm is finally obtained. In the process of obtaining dense depth maps, a deep learning model of indoor scenes is adopted. Experiments are conducted on public datasets and real-world indoor environments. Experimental results show that the proposed SLAM algorithm is more robust than ORB-SLAM2.

Keywords: RGB-D, SLAM, dense depth, depth map

Procedia PDF Downloads 140
7241 Allocating Channels and Estimating Flow in a Desert Flood-Prone Area: An Example from AlKharj City, Saudi Arabia

Authors: Farhan Aljuaidi

Abstract:

The rapid expansion of AlKharj city, Saudi Arabia, towards the outlet of Wadi AlAin is critical for planners and decision makers. Nowadays, two major projects, the Salman bin Abdulaziz University compound and a new industrial area, are being developed in this flood-prone area, where no channels are clearly identified. The main contribution of this study is to divert the flow away from these vital projects by reconstructing new channels. To do so, Lidar data were used to generate contour lines for the actual elevation of the highways and local roads. These data were analyzed and compared to the contour lines derived from 1:50,000 topographical maps. The magnitude of the expected flow was estimated using Snyder's model, based on the morphometric data acquired from a DEM of the catchment area. The results indicate that the maximum peak discharge reaches 2694.3 m³/s, the mean is 303.7 m³/s, and the minimum is 74.3 m³/s. The runoff was estimated at a maximum of 252.2 × 10⁶ m³, a mean of 41.5 × 10⁶ m³, and a minimum of 12.4 × 10⁶ m³.
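
Snyder's synthetic unit hydrograph relations can be written compactly in SI units: basin lag tₚ = C_t (L·L_c)^0.3 (hours, lengths in km) and unit-hydrograph peak discharge Qₚ = 2.78 C_p A / tₚ (m³/s, area in km²). The sketch below works through these relations; the morphometric inputs and coefficients are illustrative, not the Wadi AlAin values.

```python
# Worked example of Snyder's synthetic unit hydrograph relations (SI units).
L = 45.0             # main channel length (km), assumed
Lc = 20.0            # length to basin centroid (km), assumed
A = 1200.0           # catchment area (km^2), assumed
Ct, Cp = 1.2, 0.65   # regional Snyder coefficients, assumed

tp = Ct * (L * Lc) ** 0.3          # basin lag time (hours)
Qp = 2.78 * Cp * A / tp            # peak discharge (m^3/s)
print(f"lag time: {tp:.2f} h, peak discharge: {Qp:.1f} m^3/s")
```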

Keywords: Desert flood, Saudi Arabia, Snyder's Model, flow estimation

Procedia PDF Downloads 309
7240 Implementation of MPPT Algorithm for Grid Connected PV Module with IC and P&O Method

Authors: Arvind Kumar, Manoj Kumar, Dattatraya H. Nagaraj, Amanpreet Singh, Jayanthi Prattapati

Abstract:

In recent years, the use of renewable energy resources instead of pollutant fossil fuels and other forms has increased. Photovoltaic generation is becoming increasingly important as a renewable resource since, compared with other alternatives used in power applications, it incurs no fuel costs, produces no pollution or noise, and needs little maintenance. In this paper, the Perturb and Observe and Incremental Conductance methods are used to improve energy conversion efficiency under different environmental conditions. PI controllers are used to easily control the DC-link voltage and the active and reactive currents. The whole system is simulated under standard climatic conditions (1000 W/m², 25°C) in MATLAB, and the irradiance is varied from 1000 W/m² to 300 W/m². The use of PI controllers makes it easy to directly control the power of the grid-connected PV system. Finally, the validity of the system is verified through simulations in the MATLAB/Simulink environment.
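
The Perturb and Observe loop itself is only a few lines: perturb the operating voltage, observe the resulting power change, and keep the perturbation direction that increases power, as in the sketch below. The single-peak toy P-V curve and step size are simplifications, not a physical diode model.

```python
# Perturb and Observe (P&O) maximum power point tracking on a toy P-V curve.
def pv_power(v, irradiance=1000.0):
    """Toy PV P-V curve with a single maximum; not a physical diode model."""
    v_oc = 36.0 * (0.9 + 0.1 * irradiance / 1000.0)
    i = max(0.0, 8.0 * irradiance / 1000.0 * (1.0 - (v / v_oc) ** 7))
    return v * i

v, dv = 20.0, 0.5          # starting voltage and perturbation step (assumed)
p_prev = pv_power(v)
for step in range(200):
    v += dv                # perturb
    p = pv_power(v)        # observe
    if p < p_prev:         # power dropped: reverse the perturbation direction
        dv = -dv
    p_prev = p
print(f"P&O settled near V = {v:.2f} V, P = {p_prev:.1f} W")
```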

Keywords: incremental conductance algorithm, modeling of PV panel, perturb and observe algorithm, photovoltaic system and simulation results

Procedia PDF Downloads 509
7239 Loading Methodology for a Capacity Constrained Job-Shop

Authors: Viraj Tyagi, Ajai Jain, P. K. Jain, Aarushi Jain

Abstract:

This paper presents a genetic algorithm-based loading methodology for a capacity-constrained job shop, with consideration of alternative process plans for each part to be produced. A performance analysis of the proposed methodology is carried out for two case studies considering two different manufacturing scenarios. The results obtained indicate that the methodology is quite effective in improving shop load balance, and hence it can be included in the frameworks of the manufacturing planning systems of job-shop-oriented industries.

Keywords: manufacturing planning, loading, genetic algorithm, job shop

Procedia PDF Downloads 301
7238 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic to the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. The relevant length scale is taken to be half of the window size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data that was constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher and the slope and aspect much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 images in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
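
The additive trick at the heart of this approach can be sketched briefly: keep the regression sufficient statistics (n, Σx, Σy, Σz, Σx², Σxy, ...) per window, merge windows by adding those statistics, and at each scale solve the 3x3 normal equations of the plane fit z = a + bx + cy to obtain slope. The sketch below uses non-overlapping 2x2 merges on a synthetic DEM for brevity and omits the per-cell minimum-variance scale selection described in the abstract.

```python
# Additive aggregation of plane-fit sufficient statistics over a synthetic DEM.
import numpy as np

def block2x2(a):
    """Sum non-overlapping 2x2 blocks (the additive aggregation step)."""
    return a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]

n = 64
y, x = np.mgrid[0:n, 0:n].astype(float)
z = 0.05 * x + 0.02 * y + np.random.default_rng(7).normal(scale=0.1, size=(n, n))

# Initial 1x1 "windows": each statistic is just the per-cell value.
stats = {"n": np.ones_like(z), "x": x, "y": y, "z": z,
         "xx": x * x, "yy": y * y, "xy": x * y,
         "xz": x * z, "yz": y * z, "zz": z * z}

for level in range(1, 5):                      # window sizes 2, 4, 8, 16
    stats = {k: block2x2(v) for k, v in stats.items()}
    s = stats
    coeffs = np.empty(s["n"].shape + (3,))
    for i in range(s["n"].shape[0]):
        for j in range(s["n"].shape[1]):
            # Normal equations of z = a + b*x + c*y over this window.
            A = np.array([[s["n"][i, j], s["x"][i, j], s["y"][i, j]],
                          [s["x"][i, j], s["xx"][i, j], s["xy"][i, j]],
                          [s["y"][i, j], s["xy"][i, j], s["yy"][i, j]]])
            b = np.array([s["z"][i, j], s["xz"][i, j], s["yz"][i, j]])
            coeffs[i, j] = np.linalg.solve(A, b)   # [a, b, c]
    gradient = np.hypot(coeffs[..., 1], coeffs[..., 2])
    print(f"window {2**level:2d}x{2**level:2d}: mean slope magnitude {gradient.mean():.4f}")
```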

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 129
7237 Screening of Hypertension, Risks, Knowledge/Awareness in Second Cycle Schools in Ghana: A National Cross-Sectional Study Among Students Aged 12–22

Authors: Cecilia Amponsem-Boateng, Timothy Bonney Oppongx, Weidong Zhang, Jonathan Boakye Yiadom, Lianke Wang, Kwabena Acheampong, Godfrey Opolot

Abstract:

In Ghana, the management of hypertension in primary health care is a cost-effective way of addressing premature deaths from vascular disorders that include hypertension. There is little or no evidence of large-scale studies on the prevalence, risk, and knowledge/awareness of hypertension in students aged 12–22 years in Ghana. In a cross-sectional study, blood pressure, anthropometric indices, and knowledge/awareness assessment of students at second-cycle schools were recorded from 2018 to 2020 in three regions of Ghana. Multistage cluster sampling was used in selecting regions and the schools. Prevalence of prehypertension and hypertension was categorized by the Joint National Committee 7, where appropriate, chi-square, scatter plots, and correlations were used in showing associations. A total of 3165 students comprising 1776 (56.1%) females and 1389 (43.9%) males participated in this study within three regions of Ghana. The minimum age was 12 years and the maximum age was 22 years. The mean age was 17.21 with standard deviation (SD: 1.59) years. A 95% confidence interval was set for estimations and a P value < 0.05 was set as significant. The prevalence rate of overall hypertension was 19.91% and elevated (prehypertension) was 26.07%. Risk indicators such as weight, BMI, waist circumference, physical activity, and form of the diet were positively correlated with hypertension. Among Ghanaian students currently in second-cycle educational institutions, 19.91% were hypertensive and 26.07% were prehypertensive. This may indicate a probable high prevalence of hypertension in the future adult population if measures are not taken to curb the associated risks.

Keywords: hypertension, second-cycle schools, Ghana, youth

Procedia PDF Downloads 83
7236 Reducing the Urban Heat Island Effect by Urban Design Strategies: Case Study of Aksaray Square in Istanbul

Authors: Busra Ekinci

Abstract:

In recent years, the urban heat island has become one of the most important problems in urban areas, as a local-scale reflection of global warming. Many communities and governments are taking action to reduce heat island effects in the urban areas where half of the world's population lives today. At this point, urban design has turned out to be an important practice and research area for providing environmentally sensitive urban development. In this study, strategies for mitigating urban heat island effects through urban design are investigated in Aksaray Square and its surroundings in Istanbul. Aksaray is an important historical and commercial center of Istanbul, which has an increasing density due to being a node of urban transportation. Also, the Istanbul Metropolitan Municipality prepared an urban design project to respond to the needs of the growing population in the area for 2018. The purpose of the study is to emphasize the importance of urban design objectives and strategies developed to reduce heat island effects in urban areas. Accordingly, the urban heat island effect of the area was examined based on the albedo (reflectivity) parameter, which is the most effective parameter in the formation of the heat island effect in urban areas. Albedo values were calculated with the Albedo Viewer web application model developed by the Energy and Environmental Engineering Department of Kyushu University in Japan. The albedo parameter was examined for the present situation and for the planned situation with the urban design project. The results show that the current area has urban heat island potential. With the Aksaray Square project, the heat island effect in the area can be reduced, but not completely prevented. Therefore, urban design strategies were developed to reduce the island effect in addition to the urban design project of the area. This study proves that urban design objectives and strategies are quite effective in reducing the heat island effects that negatively affect the social environment and quality of life in urban areas.

Keywords: Albedo, urban design, urban heat island, sustainable design

Procedia PDF Downloads 580
7235 Parameter Estimation for the Mixture of Generalized Gamma Model

Authors: Wikanda Phaphan

Abstract:

The mixture generalized gamma distribution is a combination of two distributions: the generalized gamma distribution and the length-biased generalized gamma distribution. These two distributions were presented by Suksaengrakcharoen and Bodhisuwan in 2014. Their findings showed that the probability density function (pdf) is fairly complex, which causes problems in estimating the parameters: the estimators cannot be calculated in closed form, so numerical estimation must be used to find them. In this study, we present a new method of parameter estimation using the expectation-maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. Data were generated by the acceptance-rejection method and used for estimating α, β, λ and p, where λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. We use the Monte Carlo technique to assess the estimators' performance, with sample sizes of 10, 30, and 100 and the simulations repeated 20 times in each case. The effectiveness of the estimators was evaluated by considering the mean squared errors and the bias. The findings revealed that the EM algorithm gave estimates closest to the actual values, and that the maximum likelihood estimators via the conjugate gradient and quasi-Newton methods were less precise than those via the EM algorithm.
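
To illustrate the EM machinery on a tractable relative of this model, the sketch below fits a two-component mixture of a gamma density and its length-biased version (length-biasing a gamma(α, λ) density yields gamma(α+1, λ)): the E-step computes responsibilities, and the M-step updates p in closed form and (α, λ) numerically. This is a simplified stand-in for the generalized gamma mixture, with illustrative true parameters.

```python
# EM for a mixture of a gamma density and its length-biased counterpart.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(8)
true_alpha, true_lam, true_p = 2.0, 1.5, 0.4
n = 2000
comp = rng.random(n) < true_p
data = np.where(comp,
                rng.gamma(true_alpha, 1 / true_lam, n),        # plain gamma
                rng.gamma(true_alpha + 1, 1 / true_lam, n))    # length-biased gamma

alpha, lam, p = 1.0, 1.0, 0.5                       # initial guesses
for it in range(50):
    # E-step: posterior probability each point came from the plain gamma.
    f1 = stats.gamma.pdf(data, a=alpha, scale=1 / lam)
    f2 = stats.gamma.pdf(data, a=alpha + 1, scale=1 / lam)
    r = p * f1 / (p * f1 + (1 - p) * f2)
    # M-step: closed-form weight update, numerical (alpha, lambda) update.
    p = r.mean()
    def neg_q(theta):
        a, l = theta
        if a <= 0 or l <= 0:
            return np.inf
        return -np.sum(r * stats.gamma.logpdf(data, a=a, scale=1 / l)
                       + (1 - r) * stats.gamma.logpdf(data, a=a + 1, scale=1 / l))
    alpha, lam = optimize.minimize(neg_q, [alpha, lam], method="Nelder-Mead").x

print(f"estimates: alpha={alpha:.3f}, lambda={lam:.3f}, p={p:.3f}")
```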

Keywords: conjugate gradient method, quasi-Newton method, EM-algorithm, generalized gamma distribution, length biased generalized gamma distribution, maximum likelihood method

Procedia PDF Downloads 219
7234 Anomalies of Visual Perceptual Skills Amongst School Children in Foundation Phase in Olievenhoutbosch, Gauteng Province, South Africa

Authors: Maria Bonolo Mathevula

Abstract:

Background: Children are important members of communities, playing a major role in the future of any given country (Pera, Fails, Gelsomini, & Garzotto, 2018). Visual Perceptual Skills (VPSs) in children are an important health aspect of early childhood development through the Foundation Phase in school. Consequently, children should undergo visual screening before the commencement of schooling for early diagnosis of VPSs anomalies, because the primary role of VPSs is to equip children for academic performance in general. Aim: The aim of this study was to determine the anomalies of VPSs amongst school children in the Foundation Phase. The study's objectives were to determine the prevalence of VPSs anomalies amongst school children in the Foundation Phase; to determine the relationship between children's academic performance and VPSs anomalies; and to investigate the relationship between VPSs anomalies and refractive error. Methodology: This study used a mixed-method design that triangulated qualitative (interviews) and quantitative (questionnaire and clinical data) methods, and was therefore descriptive in nature. The study's target population was school children in the Foundation Phase, selected by purposive sampling; children formed part of this study provided their parents had given signed consent. Data were collected using standardized interviews, a questionnaire, a clinical data card, and the TVPS standard data card. Results: Although the study is still ongoing, the preliminary outcomes based on data collected from one of the Foundation Phase schools suggest the following: while VPSs anomalies are not prevalent, they have an indirect relationship with children's academic performance in the Foundation Phase; notably, VPSs anomalies and refractive error are directly related, since the majority of children with refractive error, specifically compound hyperopic astigmatism, failed most subtests of the TVPS standard tests. Conclusion: Based on the study's preliminary findings, it is clear that optometrists still have a lot to do as far as research on VPSs is concerned. Furthermore, the researcher recommends that optometrists, as primary healthcare professionals, should also conduct school-readiness pre-assessments on children before they commence their grades in the Foundation Phase.

Keywords: foundation phase, visual perceptual skills, school children, refractive error

Procedia PDF Downloads 102
7233 An Optimization Algorithm for Reducing the Liquid Oscillation in the Moving Containers

Authors: Reza Babajanivalashedi, Stefania Lo Feudo, Jean-Luc Dion

Abstract:

Liquid sloshing is a crucial problem for the dynamics of moving containers in the packaging industries. Sloshing issues have so far been modeled mainly within the framework of fluid dynamics or by using equivalent mechanical models with different kinds of movements and shapes of containers. Nevertheless, these approaches do not allow determining the shape of the free surface of the liquid in the case of irregularly shaped moving containers, so that experimental measurements may be required. If there is too much slosh in the moving tank, the liquid can splash out onto the packages, so the free-surface oscillation must be controlled or reduced to eliminate splashing. The purpose of this research is to propose an optimization algorithm for finding an optimum command law to reduce surface elevation. In the first step, the free surface of the liquid is simulated based on separation-of-variables and weak formulation models. Then, genetic and gradient algorithms are developed for finding the optimum command law. The optimum command law is compared with existing command laws, and the results show a significant difference in surface oscillation between the optimum and existing command laws. The algorithm is applicable to different varieties of bottles when a camera is used to detect the liquid elevation, and it can produce new command laws for different kinds of tanks to reduce surface oscillation and remove the splashing phenomenon.

Keywords: sloshing phenomenon, separation variables, weak formulation, optimization algorithm, command law

Procedia PDF Downloads 151
7232 Accuracy/Precision Evaluation of Excalibur I: A Neurosurgery-Specific Haptic Hand Controller

Authors: Hamidreza Hoshyarmanesh, Benjamin Durante, Alex Irwin, Sanju Lama, Kourosh Zareinia, Garnette R. Sutherland

Abstract:

This study reports on a proposed method to evaluate the accuracy and precision of Excalibur I, a neurosurgery-specific haptic hand controller designed and developed at Project neuroArm. Efficient and successful robot-assisted telesurgery is considerably contingent on how accurately and precisely a haptic hand controller (master/local robot) can convey the kinematic indices of motion, i.e., position and orientation, from the surgeon's upper limb to the slave/remote robot. A test rig is designed and manufactured according to standard ASTM F2554-10 to determine the accuracy and precision range of Excalibur I at four different locations within its workspace: the central workspace, extreme forward, far left and far right. The test rig is metrologically characterized by a coordinate measuring machine (accuracy and repeatability < ±5 µm). Only the serial linkage of the haptic device is examined, due to the use of the Structural Length Index (SLI). The results indicate that accuracy decreases when moving from the central area of the workspace towards its borders. In a comparative study, Excalibur I performs on par with the PHANToM Premium 3.0 and is more accurate and precise than the PHANToM Premium 1.5. The error in the Cartesian coordinate system shows a dominant component in one direction (δx, δy or δz) for movements on horizontal, vertical and inclined surfaces. The average error magnitude of three attempts is recorded, considering all three error components. This research is the first promising step towards quantifying the kinematic performance of Excalibur I.

Keywords: accuracy, advanced metrology, hand controller, precision, robot-assisted surgery, tele-operation, workspace

Procedia PDF Downloads 336
7231 Assimilating Multi-Mission Satellites Data into a Hydrological Model

Authors: Mehdi Khaki, Ehsan Forootan, Joseph Awange, Michael Kuhn

Abstract:

Terrestrial water storage, as a source of freshwater, plays an important role in human lives. Hydrological models offer important tools for simulating and predicting water storage at global and regional scales. However, their agreement with 'reality' is imperfect, mainly due to a high level of uncertainty in input data, limitations in accounting for all complex water cycle processes, uncertainties in (unknown) empirical model parameters, and the absence of high-resolution (both spatial and temporal) data. Data assimilation can mitigate this drawback by incorporating new sets of observations into models. In this effort, we use multi-mission satellite-derived remotely sensed observations to improve the performance of the World-Wide Water Resources Assessment (W3RA) hydrological model for estimating terrestrial water storage. For this purpose, we assimilate total water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) and surface soil moisture data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) into W3RA. This is done to (i) improve model estimates of groundwater and soil moisture storage, and (ii) assess the impacts of each satellite data set (from GRACE and AMSR-E) and of their combination on the final terrestrial water storage estimates. These data are assimilated into W3RA using the Ensemble Square-Root Filter (EnSRF) technique over the Mississippi Basin (the United States) and the Murray-Darling Basin (Australia) between 2002 and 2013. In order to evaluate the results, independent ground-based groundwater and soil moisture measurements within each basin are used.
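
For intuition, the sketch below implements a serial EnSRF update in the style of Whitaker and Hamill (2002): each scalar observation updates the ensemble mean with the Kalman gain K and the perturbations with a reduced gain αK, so no perturbed observations need to be sampled. The toy state size, observed components, and error variance are illustrative, not the W3RA/GRACE configuration.

```python
# Serial Ensemble Square-Root Filter (EnSRF) update on a toy state ensemble.
import numpy as np

rng = np.random.default_rng(9)
n_state, n_ens = 10, 30
ens = rng.normal(loc=1.0, scale=0.5, size=(n_state, n_ens))  # model ensemble

obs_idx = [2, 7]          # observed state components (e.g., TWS, soil moisture)
obs_val = [1.4, 0.8]      # observation values
obs_var = 0.04            # observation error variance

for j, yo in zip(obs_idx, obs_val):
    xm = ens.mean(axis=1, keepdims=True)
    Xp = ens - xm                                   # ensemble perturbations
    hx = ens[j]                                     # H is a selection operator here
    hxp = hx - hx.mean()
    phh = hxp @ hxp / (n_ens - 1)                   # H P H^T (scalar)
    pxh = Xp @ hxp / (n_ens - 1)                    # P H^T (vector)
    K = pxh / (phh + obs_var)                       # Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(obs_var / (phh + obs_var)))
    xm = xm + K[:, None] * (yo - hx.mean())         # mean update
    Xp = Xp - alpha * np.outer(K, hxp)              # square-root perturbation update
    ens = xm + Xp

print("analysis mean at observed components:", ens.mean(axis=1)[obs_idx])
```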

Keywords: data assimilation, GRACE, AMSR-E, hydrological model, EnSRF

Procedia PDF Downloads 289