Search results for: Gaussian pulses
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 469

169 Artificial Intelligence Methodology for Liquid Propellant Engine Design Optimization

Authors: Hassan Naseh, Javad Roozgard

Abstract:

This paper presents a methodology based on Artificial Intelligence (AI) applied to Liquid Propellant Engine (LPE) design optimization. The AI methodology uses an Adaptive Neuro-Fuzzy Inference System (ANFIS). In this methodology, the objective function to be maximized is engine performance (specific impulse). The independent design variables in the ANFIS model are combustion chamber pressure, combustion chamber temperature, and oxidizer-to-fuel ratio; the model output is specific impulse, which can be combined with other objective functions in LPE design optimization. To this end, the LPE parameters have been modeled in the ANFIS methodology by generating fuzzy inference system structures using grid partitioning, subtractive clustering, and Fuzzy C-Means (FCM) clustering, for both inference types (Mamdani and Sugeno) and various types of membership functions. The final comparison of optimization results shows the accuracy and processing run time of the Gaussian ANFIS methodology against all the other methods.
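
As a rough sketch of the Sugeno-type inference underlying such an ANFIS model, the Gaussian membership functions and the weighted-average rule aggregation can be written as follows; the two rules and their parameters are purely illustrative, not taken from the paper:

```python
import math

def gauss_mf(x, center, sigma):
    """Gaussian membership function, the premise part of each fuzzy rule."""
    return math.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def sugeno_output(x, rules):
    """First-order Sugeno inference: firing-strength-weighted average of
    linear rule consequents a*x + b. Each rule is (center, sigma, a, b)."""
    weights = [gauss_mf(x, c, s) for c, s, _, _ in rules]
    outputs = [a * x + b for _, _, a, b in rules]
    return sum(w * o for w, o in zip(weights, outputs)) / sum(weights)

# Two hypothetical rules over a single normalized design variable
rules = [(0.2, 0.1, 1.0, 0.0), (0.8, 0.1, -0.5, 1.0)]
result = sugeno_output(0.2, rules)
```

In a full ANFIS the centers, widths, and consequent coefficients are tuned from training data; grid partitioning, subtractive clustering, and FCM differ mainly in how the initial rule base is laid out.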

Keywords: ANFIS methodology, artificial intelligence, liquid propellant engine, optimization

Procedia PDF Downloads 544
168 Towards Automatic Calibration of In-Line Machine Processes

Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales

Abstract:

In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used ‘black-box’ linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a ‘white-box’ rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, goes over a traction reel and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) to calibrate the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, then two winding units wind an outer layer around the core, followed by a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) to calibrate the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel) and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die 2). Modeling the error behavior with explicative rules helps improve the overall process model.
Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that a high precision is obtained for the trained models and for the calibration process. The learning step is the slowest part of the process (max. 5 minutes for this data), but it can be done offline just once. The calibration step is much faster and, in under one minute, obtained a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. A fast processing time and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve the overall process understanding. This is relevant for the quick optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
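
The calibration loop described above, Gaussian sampling of candidate inputs followed by a closest-fit comparison against the target output, can be sketched as follows; the one-line model standing in for the trained process model is hypothetical:

```python
import random

def calibrate(model, target, means, stds, n_trials=2000, seed=0):
    """Sample candidate inputs from per-input Gaussians and keep the
    candidate whose model output lands closest to the target."""
    rng = random.Random(seed)
    best_x, best_err = None, float("inf")
    for _ in range(n_trials):
        x = [rng.gauss(m, s) for m, s in zip(means, stds)]
        err = abs(model(x) - target)
        if err < best_err:
            best_x, best_err = x, err
    return best_x, best_err

# Hypothetical stand-in for the trained tension model
model = lambda x: 2.0 * x[0] + 0.5 * x[1]
best_x, best_err = calibrate(model, target=3.0, means=[1.0, 1.0], stds=[0.5, 0.5])
```

The heuristics mentioned in the text would replace the blind sampling here, for example by re-centering the Gaussians on the current best candidate.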

Keywords: data model, machine learning, industrial winding, calibration

Procedia PDF Downloads 214
167 Performance Assessment of a Variable-Flux Permanent-Magnet Memory Motor

Authors: Michel Han, Christophe Besson, Alain Savary, Yvan Becher

Abstract:

The variable flux permanent magnet synchronous motor (VF-PMSM), also called a "memory motor", is a new generation of motor capable of modifying its magnetization state with short pulses of current during operation or at standstill. The impact of such operation is an expansion of the operating range in the torque-speed characteristic and an improvement in energy efficiency at high speed in comparison to conventional permanent magnet synchronous machines (PMSMs). This paper reviews the operating principle and the unique features of the proposed memory motor. The benefits of this concept are highlighted by comparing the performance of the rotor of the VF-PMSM to that of two PM rotors that are typically found in industry. The investigation emphasizes the properties of the variable magnetization and presents the comparison of the torque-speed characteristic with the capability of loss reduction in a VF-PMSM by means of experimental results, especially when tests are conducted under identical conditions for each rotor (same stator, same inverter and same experimental setup). The experimental results demonstrate that the VF-PMSM gives an additional degree of freedom to optimize the efficiency over a wide speed range. Thus, with a design that is easy to manufacture and the possibility of controlling the magnetization and demagnetization of the magnets during operation, the VF-PMSM can be interesting for various applications.

Keywords: efficiency, magnetization state, memory motors, performance, permanent-magnet, synchronous machine, variable-flux, variable magnetization, wide speed application

Procedia PDF Downloads 166
166 Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis

Authors: Amir Hajian, Sepehr Damavandinejadmonfared

Abstract:

In this paper, the issue of dimensionality reduction is investigated in finger vein recognition systems using kernel Principal Component Analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as there are several kernel functions which can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated: the dimension of the feature vector. This is important especially in real-world applications and usage of such algorithms, where a fixed dimension of the feature vector has to be set to reduce the dimension of the input and output data and extract the features from them. A classifier is then applied to classify the data and make the final decision. We analyze KPCA (with Polynomial, Gaussian, and Laplacian kernels) in detail and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.
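
The three kernels analyzed take the following standard forms; the parameter defaults here are illustrative, not the values tuned in the study:

```python
import math

def poly_kernel(x, y, degree=3, c=1.0):
    """Polynomial kernel: (x . y + c)^degree."""
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel on the squared Euclidean distance."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def laplacian_kernel(x, y, sigma=1.0):
    """Laplacian kernel on the L1 distance."""
    d1 = sum(abs(a - b) for a, b in zip(x, y))
    return math.exp(-d1 / sigma)

# KPCA builds the Gram matrix from one of these, centers it, and keeps the
# eigenvectors of the top d eigenvalues -- d is the feature-vector
# dimension whose optimal value the paper investigates.
X = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
K = [[gaussian_kernel(a, b) for b in X] for a in X]
```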

Keywords: biometrics, finger vein recognition, principal component analysis (PCA), kernel principal component analysis (KPCA)

Procedia PDF Downloads 341
165 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider

Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón

Abstract:

The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. It would be possible to achieve better measurements by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must also provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating conditions data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for taking data under the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a certain number of parameters, such as the nominal voltages for each photomultiplier tube (PMT), their threshold levels to accept or reject incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0, and they have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks, realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system integrated with the global control system of the ALICE experiment.

Keywords: AD0, ALICE, DCS, LHC

Procedia PDF Downloads 278
164 An Improved Tracking Approach Using Particle Filter and Background Subtraction

Authors: Amir Mukhtar, Dr. Likun Xia

Abstract:

An improved, robust and efficient visual target tracking algorithm using particle filtering is proposed. Particle filtering has proven very successful in estimating non-Gaussian and non-linear problems. In this paper, the particle filter is used with a color feature to estimate the target state over time. Color distributions are applied because this feature is scale and rotation invariant, robust to partial occlusion, and computationally efficient. The performance is made more robust by choosing the YIQ color scheme. Tracking is performed by comparing chrominance histograms of the target and candidate positions (particles), since color-based particle filter tracking often leads to inaccurate results when light intensity changes during a video stream. Furthermore, a background subtraction technique is used for size estimation of the target. The qualitative evaluation of the proposed algorithm is performed on several real-world videos. The experimental results demonstrate that the improved algorithm can track moving objects well under illumination changes, occlusion and moving backgrounds.
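
The core of such a tracker, comparing the chrominance histogram at each particle with the target histogram, can be sketched with the Bhattacharyya coefficient; the Gaussian weighting and its sigma below are illustrative choices:

```python
import math

def bhattacharyya(h1, h2):
    """Similarity of two normalized histograms (1.0 means identical)."""
    return sum(math.sqrt(a * b) for a, b in zip(h1, h2))

def reweight(particles, target_hist, hist_at, sigma=0.2):
    """Weight each particle by how well the histogram around its position
    (supplied by hist_at) matches the target's chrominance histogram."""
    weights = []
    for p in particles:
        d = math.sqrt(max(0.0, 1.0 - bhattacharyya(hist_at(p), target_hist)))
        weights.append(math.exp(-(d * d) / (2.0 * sigma ** 2)))
    total = sum(weights)
    return [w / total for w in weights]
```

After reweighting, a standard particle filter resamples particles in proportion to these weights before propagating them to the next frame.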

Keywords: tracking, particle filter, histogram, corner points, occlusion, illumination

Procedia PDF Downloads 357
163 An Approach for Evolving a Reliable Low Power Ultra Wide Band Transmitter with Capacitive Sensing

Authors: N. Revathy, C. Gomathi

Abstract:

This work proposes a tunable capacitor as a sensor that varies the control voltage of a voltage-controlled oscillator (VCO) in an ultra-wideband (UWB) transmitter, with a focus on power consumption. Capacitive sensing is chosen because it offers low temperature drift, high sensitivity and robustness. Previous works report resistive sensing in a VCO without addressing power consumption; this work targets the power consumption of capacitive sensing in a UWB transmitter. The UWB transmitter uses direct modulation of pulses. The VCO, the heart of the UWB transmitter's pulse generator, works on the principle of voltage-to-frequency conversion. The VCO has an odd number of inverter stages driven by the control voltage, which here comes from a variable capacitor; the number of buffer stages is reduced relative to previous work to maintain the oscillating frequency, and the VCO is also designed to consume low power. Attention then turns to the choice of variable capacitor. A compact model of the capacitor's transient characteristics is designed with a movable dielectric and multiple metal membranes, whereas previous transient models used one movable and one fixed membrane. This work aims at a membrane with a wide tuning range suitable for a UWB transmitter, since the capacitor must be tuned so that the transmitter satisfies FCC regulations.

Keywords: capacitive sensing, ultra wide band transmitter, voltage control oscillator, FCC regulation

Procedia PDF Downloads 375
162 General Purpose Graphic Processing Units Based Real Time Video Tracking System

Authors: Mallikarjuna Rao Gundavarapu, Ch. Mallikarjuna Rao, K. Anuradha Bai

Abstract:

Real-time video tracking is a challenging task for computing professionals. The performance of video tracking techniques is greatly affected by the background detection and elimination process. Local regions of the image frame contain vital information about background and foreground. However, pixel-level processing of local regions consumes a large amount of computational time and memory space in traditional approaches. In our approach, we have explored the concurrent computational ability of General Purpose Graphic Processing Units (GPGPU) to address this problem. The Gaussian Mixture Model (GMM) with adaptive weighted kernels is used for detecting the background. The weights of the kernels are influenced by local regions and are updated by inter-frame variations of these corresponding regions. The proposed system has been tested with GPU devices such as the GeForce GTX 280 and Quadro K2000. The results are encouraging, with a maximum speedup of 10X compared to the sequential approach.
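
The per-pixel update at the heart of GMM background modelling, which the GPGPU runs concurrently for every pixel, can be sketched with a single adaptive Gaussian per pixel (a deliberate simplification of the mixture and of the adaptive weighted kernels used in the paper):

```python
import math

def update_pixel(x, mu, var, alpha=0.05, k=2.5):
    """Classify a pixel against its background Gaussian (mu, var), then
    blend the new observation into the background statistics."""
    is_foreground = abs(x - mu) > k * math.sqrt(var)
    mu = (1.0 - alpha) * mu + alpha * x
    var = (1.0 - alpha) * var + alpha * (x - mu) ** 2
    return is_foreground, mu, var
```

On a GPU, one thread applies this update per pixel, which is exactly the pixel-level bottleneck that the sequential approach suffers from.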

Keywords: connected components, embrace threads, local weighted kernel, structuring elements

Procedia PDF Downloads 412
161 Feature Weighting Comparison Based on Clustering Centers in the Detection of Diabetic Retinopathy

Authors: Kemal Polat

Abstract:

In this paper, three feature weighting methods have been used to improve the classification performance of diabetic retinopathy (DR). To classify diabetic retinopathy, features extracted from the output of several retinal image processing algorithms, such as image-level, lesion-specific and anatomical components, have been used and fed into the classifier algorithms. The dataset used in this study has been taken from the University of California, Irvine (UCI) machine learning repository. Feature weighting methods, including fuzzy c-means clustering based feature weighting, subtractive clustering based feature weighting, and Gaussian mixture clustering based feature weighting, have been used and compared with each other in the classification of DR. After feature weighting, five different classifier algorithms comprising multi-layer perceptron (MLP), k-nearest neighbor (k-NN), decision tree, support vector machine (SVM), and Naïve Bayes have been used. The hybrid method based on the combination of subtractive clustering based feature weighting and a decision tree classifier obtained a classification accuracy of 100% in the screening of DR. These results demonstrate that the proposed hybrid scheme is very promising in medical data set classification.
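
One way such center-based weighting can work is sketched below; the exact formulas of the three methods compared in the paper differ, and this grand-mean-to-center-mean ratio is only an illustrative stand-in:

```python
def feature_weights(X, centers):
    """Weight each feature by the ratio of its grand mean to the mean of
    the cluster centers along that feature (illustrative rule)."""
    n_features = len(X[0])
    weights = []
    for j in range(n_features):
        grand_mean = sum(row[j] for row in X) / len(X)
        center_mean = sum(c[j] for c in centers) / len(centers)
        weights.append(grand_mean / center_mean if center_mean else 1.0)
    return weights

def apply_weights(X, weights):
    """Rescale every sample by the per-feature weights before classification."""
    return [[x * w for x, w in zip(row, weights)] for row in X]
```

The weighted data, rather than the raw features, is then passed to MLP, k-NN, decision tree, SVM or Naïve Bayes.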

Keywords: machine learning, data weighting, classification, data mining

Procedia PDF Downloads 305
160 Prediction of Fluid Properties of an Iranian Oil Field Using a Radial Basis Neural Network

Authors: Abdolreza Memari

Abstract:

In this article, a numerical method has been used to estimate the viscosity of crude oil in three states: saturated oil viscosity, viscosity above the bubble point, and viscosity below the saturation pressure. The crude oil viscosity is first estimated using the Khan model and the rolling ball method. Then, using these data, which include the operating conditions of the viscosity measurements, a radial basis neural network is trained on the estimated viscosities. This network is a kind of two-layer artificial neural network whose hidden-layer activation function is a Gaussian function, and standard learning algorithms are used to train it. After training the radial basis neural network, the results of the experimental method and of the artificial intelligence model are compared. With the trained network, we are able to estimate crude oil viscosity without the Khan model or the experimental conditions, under any other condition, with acceptable accuracy. The results show that the radial basis neural network has a high capability of estimating crude oil viscosity; saving time and cost is another advantage of this investigation.
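
The two-layer network described above, Gaussian hidden units and a linear output layer, can be sketched as follows; the 1-D toy target, the centers and sigma are illustrative, not the oil-field data:

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Hidden layer: one Gaussian activation per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf(X, y, centers, sigma):
    """Output layer: linear weights fitted by least squares."""
    H = rbf_design(X, centers, sigma)
    return np.linalg.lstsq(H, y, rcond=None)[0]

# Toy 1-D regression standing in for the viscosity correlation
X = np.linspace(0.0, 1.0, 40).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X).ravel()
centers = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
w = fit_rbf(X, y, centers, sigma=0.15)
pred = rbf_design(X, centers, 0.15) @ w
```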

Keywords: viscosity, Iranian crude oil, radial basis neural network, rolling ball method, Khan model

Procedia PDF Downloads 463
159 A Single Stage Rocket Using Solid Fuels in Conventional Propulsion Systems

Authors: John R. Evans, Sook-Ying Ho, Rey Chin

Abstract:

This paper describes research investigations oriented towards the starting and propelling of a solid fuel rocket engine which operates as a combined cycle propulsion system using three thrust pulses. The vehicle has been designed to minimise the cost of launching a small number of nano/cube satellites into low earth orbit (LEO). A technology described in this paper is a ground-based launch propulsion system which starts the rocket's vertical motion immediately, causing air flow to enter the ramjet's intake. Current technology predicts that ramjet operation can start at a high subsonic speed of 280 m/s using a liquid fuel ramjet (LFRJ). The combined cycle engine configuration is in many ways fundamentally different from the LFRJ. A much lower subsonic start speed is highly desirable, since using a mortar to reach the latter speed means a shorter launcher length can be utilized. This paper examines the means of using a combined cycle rocket engine in a single stage vehicle and presents performance calculations, including Computational Fluid Dynamics analysis of the air intake at suitable operational conditions and 3-DOF point mass trajectory analysis of the multi-pulse propulsion system (where pulse ignition time and thrust magnitude can be controlled).

Keywords: combined cycle propulsion system, low earth orbit launch vehicle, computational fluid dynamics analysis, 3-DOF trajectory analysis

Procedia PDF Downloads 163
158 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas

Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards

Abstract:

Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. The laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data are mainly ground points (that represent the bare earth) and non-ground points (that represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter utilizes a weight function to allocate weights for each point of the data. Furthermore, unlike most of the methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the generated DTMs from the filtered data are accurate (when compared against reference terrain data) and the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
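
The weight-function idea can be sketched as an iterative robust fit; here a low-order polynomial stands in for the smoothing spline, and the down-weighting rule for points above the fitted surface (likely canopy returns) is illustrative:

```python
import numpy as np

def ground_weights(x, z, n_iter=5, degree=3, k=1.0, low_weight=0.01):
    """Iteratively fit a smooth curve to the point heights and down-weight
    points well above it, so canopy returns lose influence and the fit
    settles onto the terrain."""
    w = np.ones_like(z)
    for _ in range(n_iter):
        coeffs = np.polyfit(x, z, degree, w=w)
        resid = z - np.polyval(coeffs, x)
        sigma = resid[w > low_weight].std() + 1e-9
        w = np.where(resid > k * sigma, low_weight, 1.0)
    return w, coeffs
```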

Keywords: airborne laser scanning, digital terrain models, filtering, forested areas

Procedia PDF Downloads 120
157 An Online Adaptive Thresholding Method to Classify Google Trends Data Anomalies for Investor Sentiment Analysis

Authors: Duygu Dere, Mert Ergeneci, Kaan Gokcesu

Abstract:

Google Trends data has gained increasing popularity in applications of behavioral finance, decision science and risk management. Because of Google's wide range of use, the Trends statistics provide significant information about investor sentiment and intention, which can be used as decisive factors in corporate and risk management. However, an anomaly, i.e. a significant increase or decrease in a certain query, cannot be detected by state-of-the-art computational methods due to the random baseline noise of the Trends data, which is modelled as additive white Gaussian noise (AWGN). Since the baseline noise power changes gradually over time, an adaptive thresholding method is required to track and learn the baseline noise for correct classification. To this end, we introduce an online method to classify meaningful deviations in Google Trends data. Through extensive experiments, we demonstrate that our method can successfully classify various anomalies across many different data sets.
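
A minimal online scheme in this spirit, with illustrative gains rather than the paper's method, tracks the baseline mean and noise power recursively and flags deviations beyond a multiple of the learned noise standard deviation:

```python
import math

def detect_anomalies(series, alpha=0.05, k=4.0, init_power=1e-2):
    """Exponential-moving-average tracking of the baseline and its noise
    power; a sample is an anomaly if it deviates by more than k std devs."""
    mean = series[0]
    power = init_power
    flags = []
    for x in series:
        dev = x - mean
        is_anomaly = abs(dev) > k * math.sqrt(power)
        flags.append(is_anomaly)
        if not is_anomaly:  # learn the baseline only from normal samples
            mean += alpha * dev
            power = (1.0 - alpha) * power + alpha * dev * dev
    return flags

# Small baseline noise, then one large spike
flags = detect_anomalies([0.01, -0.01] * 50 + [1.0])
```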

Keywords: adaptive data processing, behavioral finance, convex optimization, online learning, soft minimum thresholding

Procedia PDF Downloads 136
156 Shallow Water Lidar System in Measuring Erosion Rate of Coarse-Grained Materials

Authors: Ghada S. Ellithy, John W. Murphy, Maureen K. Corcoran

Abstract:

The erosion rate of soils during a levee or dam overtopping event is a major component in risk assessment evaluation of breach time and downstream consequences. The mechanism and evolution of a dam or levee breach caused by overtopping erosion is a complicated process and difficult to measure during overflow due to accessibility and quickly changing conditions. In this paper, the results of flume erosion tests are presented and discussed. The tests are conducted on a coarse-grained material with a median grain size D50 of 5 mm in a 1-m (3-ft) wide flume under varying flow rates. Each test is performed by compacting the soil mix to near its optimum moisture content and dry density, as determined from the standard Proctor test, in a box embedded in the flume floor. The box measures 0.45 m wide x 1.2 m long x 0.25 m deep. The material is tested several times at varying hydraulic loading to determine the erosion rate after equal time intervals. The water depth and velocity are measured at each hydraulic loading, and the acting bed shear is calculated. A shallow water lidar (SWL) system was utilized to record the progress of soil erodibility and water depth along the scanned profiles of the tested box. SWL is a non-contact system that transmits laser pulses from above the water and records the time delay between top and bottom reflections. Results from the SWL scans are compared with before-and-after manual measurements to determine the erosion rate of the soil mix and other erosion parameters.
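
For reference, the acting bed shear can be computed from the measured depth via the depth-slope product; whether the study uses this or a velocity-based formula is not stated, and the depth and slope values below are illustrative:

```python
def bed_shear(depth_m, slope, rho=1000.0, g=9.81):
    """Depth-slope product: tau = rho * g * h * S, in pascals."""
    return rho * g * depth_m * slope

tau = bed_shear(0.10, 0.02)  # 0.10 m of flow on a 2% slope
```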

Keywords: coarse-grained materials, erosion rate, LIDAR system, soil erosion

Procedia PDF Downloads 93
155 Non-Parametric Regression over Its Parametric Counterparts with Large Sample Size

Authors: Jude Opara, Esemokumo Perewarebo Akpos

Abstract:

This paper compares non-parametric linear regression with its parametric counterparts for a large sample size. A data set on anthropometric measurements of primary school pupils was taken for the analysis, using 50 randomly selected pupils. The data set was subjected to a normality test using the Anderson-Darling technique, and it was discovered that the residuals of the commonly used least squares regression method for fitting an equation to a set of (x, y) data points are not normally distributed (i.e. they do not follow a Gaussian distribution). The algorithms for nonparametric Theil's regression are stated in this paper, as well as its parametric OLS counterpart. The R programming environment was used for the analysis. The results showed that there exists a significant relationship between the response and the explanatory variable for both the parametric and non-parametric regressions. To compare the efficiency of the two methods, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) are used, and it is discovered that the nonparametric regression performs better than its parametric counterpart due to its lower AIC and BIC values. The study recommends that future researchers examine the presence of outliers in the data set, remove them if detected, and re-analyze to compare results.
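
Theil's estimator used in the comparison is simple to state: the slope is the median of all pairwise slopes, which is what makes it robust to the outliers the study recommends investigating. A direct sketch (for real work, `scipy.stats.theilslopes` implements the same idea):

```python
from itertools import combinations
from statistics import median

def theil_sen(x, y):
    """Theil's regression: median pairwise slope, with the intercept
    taken as the median of y - slope * x."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2) if x[i] != x[j]]
    slope = median(slopes)
    intercept = median([yi - slope * xi for xi, yi in zip(x, y)])
    return intercept, slope

# A single gross outlier barely moves the fit
a, b = theil_sen([0, 1, 2, 3, 4], [0, 1, 2, 3, 100])
```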

Keywords: Theil’s regression, Bayesian information criterion, Akaike information criterion, OLS

Procedia PDF Downloads 277
154 The Effect of Salinity on Symbiotic Nitrogen Fixation in Alfalfa and Faba Bean

Authors: Mouffok Ahlem, Belhamra Mohamed, Mouffok Sihem

Abstract:

The use of nitrogen fertilizers has the inevitable consequence of increasing the nitrate content of water, which may contribute to the production of nitrite and the formation of carcinogenic nitrosamines; the fight against eutrophication of aquatic environments also represents a cost. The agronomic, ecological and economic interest of legumes such as faba bean and alfalfa no longer needs to be demonstrated, especially in semi-arid and arid areas. Osmotic stress due to drought and/or salinity, along with nutritional deficiencies, are the major factors limiting symbiotic nitrogen fixation and the productivity of pulses. To study symbiotic nitrogen fixation in faba bean (Vicia faba L.) and alfalfa (Medicago sativa L.) in the region of Biskra, we used soil samples collected from 30 locations. This work has addressed several issues of ecological and agronomic interest. Evaluation of the symbiotic potential of soils in the region of Biskra by the trapping technique shows different levels of rhizobial microflora. The effectiveness of the rhizobial symbiosis in both legumes indicates that aerial dry biomass and the amount of nitrogen accumulated in the aerial part depend mainly on the rate of nodulation, regardless of the species and locality. The correlation between symbiotic nitrogen fixation and some physico-chemical properties of the soils shows that symbiotic nitrogen fixation in both legumes is strongly related to soil conditions. Salinity disrupts the physiological processes of growth and development, and more particularly the symbiotic fixation of atmospheric nitrogen. By contrast, phosphorus promotes rhizobial symbiosis.

Keywords: rhizobia, faba bean, alfalfa, salinity

Procedia PDF Downloads 430
153 Improvement of Bone Scintigraphy Image Using Image Texture Analysis

Authors: Yousif Mohamed Y. Abdallah, Eltayeb Wagallah

Abstract:

Image enhancement allows the observer to see details in images that may not be immediately observable in the original image. Image enhancement is the transformation or mapping of one image to another, and the enhancement of certain features in images can be accompanied by undesirable effects. To achieve maximum image quality after denoising, a new low-order, locally adaptive Gaussian scale mixture model and a median filter are presented, which handle the nonlinearities arising from scattering, together with a new nonlinear approach for contrast enhancement of bones in bone scan images using both gamma correction and negative transform methods. The usual assumption of gamma and Poisson statistics alone leads to overestimation of the noise variance in regions of low intensity and underestimation in regions of high intensity, and therefore to non-optimal results. The contrast enhancement results were obtained and evaluated using MATLAB on nuclear medicine images of the bones. The optimal number of bins, in particular the number of gray levels, is chosen automatically using entropy and the average distance between the histogram of the original gray-level distribution and the contrast enhancement function's curve.
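
The two point operations named above are standard intensity mappings; on an image normalized to [0, 1] they can be sketched as:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Power-law mapping s = r**gamma; gamma < 1 brightens dark regions."""
    return np.clip(img, 0.0, 1.0) ** gamma

def negative(img):
    """Negative transform s = 1 - r, which inverts intensities."""
    return 1.0 - np.clip(img, 0.0, 1.0)

img = np.array([0.0, 0.25, 1.0])
```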

Keywords: bone scan, nuclear medicine, Matlab, image processing technique

Procedia PDF Downloads 478
152 Coherent All-Fiber and Polarization Maintaining Source for CO2 Range-Resolved Differential Absorption Lidar

Authors: Erwan Negre, Ewan J. O'Connor, Juha Toivonen

Abstract:

The need for CO2 monitoring technologies grows together with worldwide concern over environmental challenges. To that purpose, we developed a compact coherent all-fiber range-resolved Differential Absorption Lidar (RR-DIAL). It is built around a tunable 2×1 fiber-optic switch, set to a frequency of 1 Hz, between two Distributed Feedback (DFB) lasers emitting in continuous-wave mode at 1571.41 nm (a CO2 absorption line) and 1571.25 nm (a CO2 absorption-free line), with a linewidth and tuning range of 1 MHz and 3 nm, respectively, over the operating wavelength. Three amplification stages through erbium and erbium-ytterbium doped fibers, coupled to a Radio Frequency (RF) driven Acousto-Optic Modulator (AOM), generate 100 ns pulses at a repetition rate from 10 to 30 kHz with a peak power up to 2.5 kW and a spatial resolution of 15 m, allowing fast and highly resolved CO2 profiles. The same afocal collection system is used for the output of the laser source and for the backscattered light, which is then directed to a circulator before being mixed with the local oscillator for heterodyne detection. Packaged in an easily transportable box which also includes a server and a Field Programmable Gate Array (FPGA) card for on-line data processing and storage, our setup allows effective and quick deployment for versatile in-situ analysis, whether vertical atmospheric monitoring, large field mapping or continuous oversight of sequestration sites. Setup operation and results from initial field measurements will be discussed.
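
The quoted 15 m spatial resolution follows directly from the 100 ns pulse length via the two-way travel time of the light:

```python
def range_resolution(pulse_width_s, c=3.0e8):
    """Lidar range resolution: Delta R = c * tau / 2."""
    return c * pulse_width_s / 2.0

dr = range_resolution(100e-9)  # 100 ns pulses -> 15 m range bins
```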

Keywords: CO2 profiles, coherent DIAL, in-situ atmospheric sensing, near infrared fiber source

Procedia PDF Downloads 108
151 Human Action Recognition Using Wavelets of Derived Beta Distributions

Authors: Neziha Jaouedi, Noureddine Boujnah, Mohamed Salim Bouhlel

Abstract:

In the framework of enhancing human-machine interaction systems, this paper focuses on human behavior analysis and action recognition. Human behavior is characterized by a duality of actions and reactions (movements, psychological changes, verbal and emotional expression). Much information is hidden behind gestures, sudden motions, point trajectories and speeds, and many research works have treated their recovery as an information retrieval issue. In our work, we focus on motion extraction, tracking and action recognition using wavelet network approaches. Our contribution uses background subtraction by a Gaussian Mixture Model (GMM) and models body movement through motion trajectories constructed with a Kalman filter. These models remove noise by extracting the main motion features and constitute a stable base for identifying the evolution of human activity. Each modality is used to recognize a human action using a wavelets-of-derived-beta-distributions approach. The proposed approach has been validated successfully on subsets of the KTH and UCF Sports databases.
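
A fixed-gain (alpha-beta) simplification of the Kalman trajectory model can be sketched per image coordinate; the gains and the constant-velocity measurement stream below are illustrative:

```python
def alpha_beta_step(x, v, z, dt=1.0, alpha=0.85, beta=0.1):
    """One predict/update cycle: predict position from velocity, then
    correct position and velocity with fixed gains on the residual.
    (A full Kalman filter derives these gains from covariances.)"""
    x_pred = x + v * dt
    resid = z - x_pred
    return x_pred + alpha * resid, v + (beta / dt) * resid

# Track a point moving at 2 px/frame
x, v = 0.0, 0.0
for t in range(1, 101):
    x, v = alpha_beta_step(x, v, z=2.0 * t)
```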

Keywords: feature extraction, human action classifier, wavelet neural network, beta wavelet

Procedia PDF Downloads 386
150 Active Contours for Image Segmentation Based on Complex Domain Approach

Authors: Sajid Hussain

Abstract:

A complex-domain approach for active-contour image segmentation has been designed, in which the contour deforms step by step to partition an image into expedient regions. A novel region-based trigonometric complex pressure force function is proposed, which propagates around the region of interest using image forces. The signed trigonometric force function controls the propagation of the active contour, which stops accurately on the exact edges of the object. The proposed model makes the level set function binary and uses a Gaussian smoothing kernel to regularize it and avoid the re-initialization procedure. The working principle of the proposed model is as follows: the real image data is transformed into complex data by adding iota (i) times the image data, and the average of iota (i) times the horizontal and vertical components of the image gradient is inserted into the model to capture the complex gradient of the image data. A simple finite-difference technique has been used to implement the proposed model, and its efficiency and robustness have been verified and compared with other state-of-the-art models.

Keywords: image segmentation, active contour, level set, Mumford and Shah model

Procedia PDF Downloads 74
149 Motion-Based Detection and Tracking of Multiple Pedestrians

Authors: A. Harras, A. Tsuji, K. Terada

Abstract:

Tracking of moving people has gained great importance due to rapid technological advancement in computer vision. The objective of this study is to design a motion-based method for detecting and tracking multiple pedestrians walking randomly in different directions. In our proposed method, a Gaussian mixture model (GMM) is used to determine moving persons in image sequences; it reacts to changes in the scene such as varying illumination and objects that frequently start and stop moving. Background noise is eliminated by applying morphological operations, and the motion of tracked people is determined with a Kalman filter, which predicts the tracked location in each frame and determines the likelihood of each detection. We evaluated the method on a benchmark data set recorded by a stationary side-wall camera. The scenes are taken on a street with up to eight people in front of the camera; the two scenes last 53 and 35 seconds, respectively. For pedestrians walking in close proximity, the proposed method achieved a detection ratio of 87% and a tracking ratio of 77%. When the pedestrians are farther apart, the detection ratio increases to 90% and the tracking ratio to 79%.
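The Kalman predict/update cycle used for tracking can be sketched for a single image coordinate with a constant-velocity state model; the noise levels `q` and `r` below are illustrative, not the paper's tuning.

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=1e-2, r=1.0):
    """Constant-velocity Kalman filter matrices for one image coordinate;
    the state is [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we only observe position
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    # predict the next state and its covariance
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the detected position z
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

In a multi-pedestrian setting, one such filter per axis per track supplies the predicted location, and the innovation covariance `S` gives the likelihood of each candidate detection.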

Keywords: automatic detection, tracking, pedestrians, counting

Procedia PDF Downloads 233
148 Diversity and Utilize of Ignored, Underutilized, and Uncommercialized Horticultural Species in Nepal

Authors: Prakriti Chand, Binayak Prasad Rajbhandari, Ram Prasad Mainali

Abstract:

Local indigenous communities in Lalitpur, Nepal, use Ignored, Underutilized and Uncommercialized Horticultural Species (IUUHS) for medicine, food, spices, pickles, and religious purposes. However, research on the usage, status, potential, and importance of these future sustainable crops is inadequately documented, and they have been overlooked in efforts toward a positive food-system transformation. The study aimed to assess the use and diversity of IUUHS, covering their current status, documentation, management, and future potential. A wide range of participatory tools, including a household survey (100 respondents), 8 focus group discussions, and 20 key informant interviews, was followed by individual assessment and participatory rural appraisal, supplemented by a literature review. This study recorded 95 IUUHS belonging to 43 families, of which 92 were angiosperms, 2 pteridophytes, and 1 gymnosperm. Twenty-seven species had multiple uses. The IUUHS observed during the study comprised 31 vegetables, 20 fruits, 14 wild species, 7 spices, 7 pulses, 7 pickle species, 7 medicinal species, and 2 religious species; vegetables and fruits were the most frequently observed categories. Eighty-nine species were observed to have medicinal value, and 86% of the women had taken over all the agricultural activities. 84% of respondents used these species during food-deficient periods. IUUHS have future potential as alternative foods to the major staple crops owing to their remarkable ability to adapt to marginal soils and thrive in harsh climatic conditions. Various constraints remain regarding their utilization and development, which require initiating promotion, utilization, management, and conservation of these species from the grassroots level.

Keywords: agrobiodiversity, Ignored and underutilized species, uncultivated horticultural species, diversity use

Procedia PDF Downloads 240
147 Software-Defined Architecture and Front-End Optimization for DO-178B Compliant Distance Measuring Equipment

Authors: Farzan Farhangian, Behnam Shakibafar, Bobda Cedric, Rene Jr. Landry

Abstract:

Among air navigation technologies, many are capable of increasing aviation sustainability as well as improving accuracy in Alternative Positioning, Navigation, and Timing (APNT), notably avionics Distance Measuring Equipment (DME), Very high-frequency Omni-directional Range (VOR), etc. Integrating these air navigation solutions could yield robust and efficient accuracy in air mobility, air traffic management, and autonomous operations. Designing a proper RF front-end, power amplifier, and software-defined transponder could pave the way to an optimized avionics navigation solution. In this article, the possibility of reaching an optimal front-end for use with a single low-cost Software-Defined Radio (SDR) is investigated in order to arrive at a software-defined DME architecture. Our software-defined approach uses firmware capabilities to design a real-time software architecture, compatible with a Multi Input Multi Output (MIMO) BladeRF, that estimates an accurate time delay between the transmission (Tx) and reception (Rx) channels using synchronous scheduled communication. We also designed a novel power amplifier for the DME transmission channel to meet the minimum transmission power requirement. This article further investigates designing proper pulse pairs based on the DO-178B avionics standard: various guidelines have been tested, and the possibility of passing the certification process for each standard term has been analyzed. Finally, the performance of the DME was tested in the laboratory using an IFR6000, which showed that the proposed architecture reaches an accuracy of less than 0.23 Nautical miles (Nmi) with 98% probability.
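The delay-to-distance conversion at the heart of a DME can be sketched as follows, assuming the standard 50 microsecond X-channel transponder reply delay (the paper's actual Tx/Rx delay estimation on the FPGA/SDR is of course more involved):

```python
C = 299_792_458.0          # speed of light, m/s
REPLY_DELAY_S = 50e-6      # nominal DME transponder reply delay (X channel)
M_PER_NMI = 1852.0

def dme_slant_range_nmi(round_trip_s):
    """Slant range (Nmi) from the measured interrogation-to-reply time.

    The transponder's fixed 50 us reply delay is subtracted before
    converting the one-way propagation time to distance."""
    one_way_s = (round_trip_s - REPLY_DELAY_S) / 2.0
    return one_way_s * C / M_PER_NMI
```

At this scale, the 0.23 Nmi accuracy quoted above corresponds to resolving the round-trip time to within a few microseconds, which is why precise Tx/Rx synchronization on the SDR matters.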

Keywords: avionics, DME, software defined radio, navigation

Procedia PDF Downloads 51
146 Cognitive Effects of Repetitive Transcranial Magnetic Stimulation in Patients with Parkinson's Disease

Authors: Ana Munguia, Gerardo Ortiz, Guadalupe Gonzalez, Fiacro Jimenez

Abstract:

Parkinson's disease (PD) is a neurodegenerative disorder that causes motor and cognitive symptoms. The first-choice treatment for these patients is pharmacological, but it generates several side effects. For this reason, new treatments such as Repetitive Transcranial Magnetic Stimulation (rTMS) were introduced to improve patients' quality of life. Several studies suggest significant changes in motor symptoms; however, there is great diversity in the number of pulses, amplitude, frequency, and stimulation targets, which results in inconsistent data. In addition, these studies do not analyze the neuropsychological effects of the treatment. The main purpose of this study is to evaluate the impact of rTMS on the cognitive performance of 6 patients with H&Y stages III and IV (45-65 years, 3 men and 3 women). Initial neuropsychological and neurological evaluations were performed. Patients were randomized into two groups: in the first phase, one group received rTMS over the supplementary motor area and the other over the dorsolateral prefrontal cortex contralateral to the most affected hemibody. In the second phase, each group received stimulation in the area that had not been stimulated previously. Reassessments were carried out at baseline and at the end of each phase, with a follow-up 6 months after the conclusion of the stimulation. These preliminary results show no statistically significant difference in the patients' neuropsychological test scores before and after rTMS, suggesting that cognitive performance is not adversely affected. There are even tendencies toward improved executive functioning after treatment, which, together with the motor improvement, showed positive effects on the patients' daily life activities.
In a later and more detailed analysis, the effects will be evaluated for each patient separately in relation to their functionality in daily life.

Keywords: Parkinson's disease, rTMS, cognitive, treatment

Procedia PDF Downloads 123
145 Efficiency Improvement for Conventional Rectangular Horn Antenna by Using EBG Technique

Authors: S. Kampeephat, P. Krachodnok, R. Wongsan

Abstract:

The conventional rectangular horn has long been used as a microwave antenna. Its gain can be increased by enlarging the horn so that it flares exponentially. This paper presents a study of shaped woodpile Electromagnetic Band Gap (EBG) structures to improve the gain of a conventional horn without enlarging its construction. A gain-enhancement synthesis method, in which the electromagnetic fields from the aperture of a horn antenna are transferred through a shaped woodpile EBG, is presented for a variety of shapes: planar, triangular, quadratic, circular, Gaussian, cosine, and squared-cosine structures. The proposed technique has the advantages of low profile, low fabrication cost, and light weight. Antenna characteristics such as the reflection coefficient (S11), radiation patterns, and gain are simulated using Computer Simulation Technology (CST) software. Based on the proposed concept, an antenna prototype was fabricated and measured; the S11 and radiation patterns obtained from the measurements show good impedance matching and a gain enhancement for the proposed antenna. The gain at the dominant frequency of 10 GHz, targeting X- and Ku-band radar applications, is 25.6 dB, about 8 dB higher than that of the basic rectangular horn antenna, achieved by adding only one appropriately shaped EBG structure.

Keywords: conventional rectangular horn antenna, electromagnetic band gap, gain enhancement, X- and Ku-band radar

Procedia PDF Downloads 248
144 Optical Breather in Phosphorene Monolayer

Authors: Guram Adamashvili

Abstract:

A surface plasmon polariton (SPP) is a surface optical wave that undergoes strong enhancement and spatial confinement of its amplitude near an interface of two-dimensional layered structures. Phosphorene (single-layer black phosphorus) and other two-dimensional anisotropic phosphorene-like materials are recognized as promising for potential future applications of surface plasmon polaritons. A theory of an optical breather of self-induced transparency (SIT) for a surface plasmon polariton propagating in monolayer or few-layer phosphorene is developed; a corresponding theory for an optical soliton of SIT was investigated earlier. Starting from the optical nonlinear wave equation for surface TM-modes interacting with a two-dimensional layer of atomic systems or semiconductor quantum dots and a phosphorene monolayer (or another two-dimensional anisotropic material), we obtain the evolution equations for the electric field of the breather. In this case, the evolution of these pulses is described by the damped Bloch-Maxwell equations, and breathers are found to occur for surface plasmon polariton fields. Explicit relations for the dependence of the breathers on the local media, the anisotropic phosphorene conductivity, the transition layer properties, and the transverse structure of the SPP are obtained and will be given. It is shown that the phosphorene conductivity reduces the amplitude of the surface SIT breather exponentially during propagation. The directions of propagation corresponding to maximum and minimum damping of the amplitude lie along the armchair and zigzag directions of the black phosphorus nano-film, respectively; the most rapid damping of the intensity occurs when the breather is polarized along the armchair direction.
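The anisotropic exponential damping described above can be illustrated with a toy model in which the damping rate interpolates between its armchair (maximum) and zigzag (minimum) values with a cosine-squared weighting; this interpolation and the symbol names are our illustrative assumption, not the paper's explicit relations.

```python
import math

def breather_amplitude(a0, z, alpha_armchair, alpha_zigzag, theta):
    """Illustrative amplitude of the SIT breather after propagating a
    distance z, where theta is the angle of the propagation direction
    measured from the armchair axis (theta = 0: maximum damping;
    theta = pi/2, zigzag: minimum damping)."""
    alpha = (alpha_armchair * math.cos(theta) ** 2
             + alpha_zigzag * math.sin(theta) ** 2)
    return a0 * math.exp(-alpha * z)  # exponential conductivity damping
```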

Keywords: breathers, nonlinear waves, solitons, surface plasmon polaritons

Procedia PDF Downloads 121
143 A New 3D Shape Descriptor Based on Multi-Resolution and Multi-Block CS-LBP

Authors: Nihad Karim Chowdhury, Mohammad Sanaullah Chowdhury, Muhammed Jamshed Alam Patwary, Rubel Biswas

Abstract:

In content-based 3D shape retrieval systems, achieving high search performance has become an important research problem. A challenging aspect of this problem is finding an effective shape descriptor that can adequately discriminate similar shapes. To address this, we propose a new shape descriptor for 3D shape models that combines multi-resolution analysis with a multi-block center-symmetric local binary pattern (CS-LBP) operator. Given an arbitrary 3D shape, we first apply pose normalization and generate a set of multi-view 2D rendered images. Second, we apply a Gaussian multi-resolution filter to generate several image levels from each 2D rendered image. Then, overlapped sub-images are computed for each level of the multi-resolution image. Our multi-block CS-LBP comes next: it allows the center to be a block of m-by-n rectangular pixels instead of a single pixel. This process is repeated for all the 2D rendered images, derived from both ‘depth-buffer’ and ‘silhouette’ rendering. Finally, we concatenate all the feature vectors into a one-dimensional histogram as our proposed 3D shape descriptor. Through several experiments, we demonstrate that the proposed descriptor outperforms previous methods on a benchmark dataset.
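The base CS-LBP operator can be sketched as follows: in a 3x3 neighborhood, the four center-symmetric pixel pairs are compared against a small threshold, giving a 4-bit code per pixel. (The paper's multi-block variant replaces single pixels with m-by-n blocks of pixels; this sketch shows only the single-pixel case, with an illustrative threshold value.)

```python
import numpy as np

def cs_lbp(img, threshold=0.01):
    """Center-symmetric LBP codes (0..15) for the interior pixels of a
    grayscale image; unlike classic LBP, the center pixel itself is not
    compared, only the four opposing neighbor pairs."""
    img = img.astype(float)
    interior = img[1:-1, 1:-1]  # used only for the output shape
    # the four center-symmetric neighbor pairs around each interior pixel
    pairs = [
        (img[:-2, :-2], img[2:, 2:]),     # NW vs SE
        (img[:-2, 1:-1], img[2:, 1:-1]),  # N  vs S
        (img[:-2, 2:], img[2:, :-2]),     # NE vs SW
        (img[1:-1, 2:], img[1:-1, :-2]),  # E  vs W
    ]
    code = np.zeros_like(interior, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > threshold).astype(np.uint8) << bit
    return code
```

The retrieval descriptor is then the concatenation of the histograms of these codes over all overlapped sub-images, resolution levels, and rendered views.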

Keywords: 3D shape retrieval, 3D shape descriptor, CS-LBP, overlapped sub-images

Procedia PDF Downloads 420
142 A Simple Adaptive Atomic Decomposition Voice Activity Detector Implemented by Matching Pursuit

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

A simple adaptive voice activity detector (VAD) is implemented using Gabor and gammatone atomic decompositions of speech for high-Gaussian-noise environments. Matching pursuit is used for the atomic decomposition and is shown to achieve optimal speech detection capability at high data compression rates for low signal-to-noise ratios. The most active dictionary elements found by matching pursuit are used for signal reconstruction, so the algorithm adapts to the individual speaker's dominant time-frequency characteristics. Speech has a high peak-to-average ratio, which enables matching pursuit's greedy heuristic of selecting the highest inner products to isolate high-energy speech components in high-noise environments. Gabor and gammatone atoms are both investigated, with identical logarithmically spaced center frequencies and similar bandwidths; the algorithm performs equally well for both, with no significant statistical differences. It achieves 70% accuracy at 0 dB SNR, 90% accuracy at 5 dB SNR, and 98% accuracy at 20 dB SNR, using 30 dB SNR as the reference for voice activity.
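The matching pursuit step can be sketched as follows, assuming a dictionary matrix with unit-norm atoms in its columns (in the paper these would be Gabor or gammatone atoms; here the dictionary is left generic):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching pursuit: at each step pick the atom with the
    largest |inner product| with the residual, subtract its projection,
    and repeat. Returns atom indices, coefficients, and the residual."""
    residual = signal.astype(float).copy()
    indices, coeffs = [], []
    for _ in range(n_atoms):
        inner = dictionary.T @ residual       # correlations with all atoms
        k = int(np.argmax(np.abs(inner)))     # greedy highest-inner-product
        indices.append(k)
        coeffs.append(inner[k])
        residual = residual - inner[k] * dictionary[:, k]
    return indices, np.array(coeffs), residual
```

For the VAD, the energy captured by the first few selected atoms relative to the residual serves as the speech-presence statistic, since high-energy speech components are picked up before the Gaussian noise floor.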

Keywords: atomic decomposition, gabor, gammatone, matching pursuit, voice activity detection

Procedia PDF Downloads 271
141 Preparation of Nano-Scaled LiNbO3 by the Polyol Method

Authors: Gabriella Dravecz, László Péter, Zsolt Kis

Abstract:

The growth of optical LiNbO3 single crystals and their physical and chemical properties are well known on the macroscopic scale. Nowadays, rare-earth-doped single crystals have become important for coherent quantum optical experiments: electromagnetically induced transparency, slowing of light pulses, and coherent quantum memory. The expanding range of applications increasingly requires the production of nano-scaled LiNbO3 particles. For example, rare-earth-doped nano-scaled particles of lithium niobate can act as single-photon sources, which could form the basis of a quantum-computer coding system completely inaccessible to eavesdroppers. The polyol method is a chemical synthesis in which oxide formation occurs instead of hydroxide formation because of the high temperature; moreover, the polyol medium limits the growth and agglomeration of the grains, producing particles with diameters of 30-200 nm. In this work, nano-scaled LiNbO3 was prepared by the polyol method. The starting materials (niobium oxalate and LiOH) were dissolved in H2O2, then suspended in ethylene glycol and heated to about the boiling point of the mixture with intensive stirring. After thermal equilibrium was reached, the mixture was kept at this temperature for 4 hours, and the suspension was cooled overnight. The mixture was then centrifuged and the particles were filtered. Dynamic Light Scattering (DLS) measurements found the particle size to be 80-100 nm, which was confirmed by Scanning Electron Microscope (SEM) investigations; SEM elemental analysis showed a large amount of Nb in the sample. The production of LiNbO3 nanoparticles by the polyol method was successful: agglomeration of the particles was avoided and a size of 80-100 nm was reached.

Keywords: lithium-niobate, nanoparticles, polyol, SEM

Procedia PDF Downloads 108
140 Forecasting of COVID-19 Cases, Hospitalization Admissions, and Death Cases Based on Wastewater Sars-COV-2 Surveillance Using Copula Time Series Model

Authors: Hueiwang Anna Jeng, Norou Diawara, Nancy Welch, Cynthia Jackson, Rekha Singh, Kyle Curtis, Raul Gonzalez, David Jurgens, Sasanka Adikari

Abstract:

Modeling effort is needed to predict COVID-19 trends for developing management strategies and adaptation measures. The objective of this study was to assess whether the SARS-CoV-2 viral load in wastewater could serve as a predictor for forecasting COVID-19 cases, hospitalization cases, and death cases using copula-based time series modeling. The SARS-CoV-2 RNA load in raw wastewater in Chesapeake, VA was measured using the RT-qPCR method. A Gaussian copula time series marginal regression model, incorporating an autoregressive moving average model and the copula function, served as the forecasting model. COVID-19 cases were correlated with the wastewater viral load, hospitalization cases, and death cases. The forecasted trend of COVID-19 cases closely paralleled that of the reported cases, with over 90% of the forecasted COVID-19 cases falling within the 99% confidence interval of the reported cases. Wastewater SARS-CoV-2 viral load could thus serve as a predictor for COVID-19 cases and hospitalization cases.
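The dependence step of a Gaussian copula model can be sketched as follows: each series (e.g. wastewater viral load and reported cases) is mapped to normal scores through its empirical CDF, and the Pearson correlation of the scores estimates the copula parameter. This is a minimal, nonparametric illustration only; the paper's full model also fits ARMA marginals and produces forecasts, and the function names here are illustrative.

```python
from statistics import NormalDist

def _normal_scores(values):
    # empirical-CDF (rank) transform followed by the inverse normal CDF
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = NormalDist()
    scores = [0.0] * n
    for rank, i in enumerate(order):
        scores[i] = nd.inv_cdf((rank + 0.5) / n)  # plotting-position CDF
    return scores

def _pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def gaussian_copula_correlation(x, y):
    """Dependence parameter of a bivariate Gaussian copula, estimated as
    the correlation of the normal scores of the two series."""
    return _pearson(_normal_scores(x), _normal_scores(y))
```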

Keywords: COVID-19, modeling, time series, copula function

Procedia PDF Downloads 43