Search results for: band-pass filtering
277 Countercyclical Capital Buffer in the Polish Banking System
Authors: Mateusz Mokrogulski, Piotr Śliwka
Abstract:
The aim of this paper is the identification of periods of excessive credit growth in the Polish banking sector in the years 2007-2014 using different methodologies. Due to the lack of precise guidance in CRD IV regarding methods of calculating the credit gap and related deviations from the long-term trend, a few filtering methods are applied, e.g. Hodrick-Prescott and Baxter-King. Solutions based on a switching model are also proposed. The next step is the computation of both the credit gap and the countercyclical capital buffer (CCB) rates on a quarterly basis. The calculations are carried out for the entire banking sector in Poland, as well as for its components (commercial and co-operative banks) and different types of loans. The calculations show clearly that the analysed period included times of excessive credit growth. However, the results differ for the above-mentioned sub-sectors. Of paramount importance here are mortgage loans, where the outcomes are distorted by high exchange rate fluctuations. Research on the CCB is now going to gain popularity, as the buffer will soon become one of the tools of macroprudential policy under CRD IV. Although the presented method is focused on the Polish banking sector, it can also be applied to other member states, especially the Central and Eastern European countries, which are usually characterized by smaller banking sectors compared to the EU-15.
Keywords: countercyclical capital buffer, CRD IV, filtering methods, mortgage loans
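As an illustration of the credit-gap step described above, the following is a minimal Python sketch (not the authors' code) that detrends a hypothetical quarterly credit-to-GDP series with the Hodrick-Prescott filter and maps the gap to a CCB rate via the Basel-style linear rule; the series, the smoothing parameter lambda = 400,000 and the 2-10 percentage-point thresholds are assumptions:

import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

def ccb_rate(gap, lower=2.0, upper=10.0, max_rate=2.5):
    # Basel-style linear mapping from credit gap (percentage points) to buffer rate (%)
    return float(np.clip((gap - lower) / (upper - lower), 0.0, 1.0) * max_rate)

# hypothetical quarterly credit-to-GDP ratio (%), 2007Q1 onwards
ratio = pd.Series(np.linspace(30, 55, 32) + np.random.normal(0, 1, 32),
                  index=pd.period_range("2007Q1", periods=32, freq="Q"))

# lambda value commonly used for quarterly credit-to-GDP data
cycle, trend = hpfilter(ratio, lamb=400_000)
gap = ratio - trend                      # credit-to-GDP gap in percentage points
rates = gap.apply(ccb_rate)              # quarterly CCB rate in percent
print(rates.tail())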
Procedia PDF Downloads 319
276 FPGA Implementation of a Marginalized Particle Filter for Delineation of P and T Waves of ECG Signal
Authors: Jugal Bhandari, K. Hari Priya
Abstract:
The ECG signal provides important clinical information which can be used to predict heart-related diseases. Accordingly, delineation of the ECG signal is an important task, and delineation of the P and T waves in particular is a complex one. This paper deals with the study of the ECG signal and its analysis by means of efficient filters designed in Verilog and the MATLAB tool. It includes generation and simulation of the ECG signal from real-time ECG data, and ECG signal filtering and processing through the analysis of different algorithms and techniques. In this paper, we design a basic particle filter which generates a dynamic model depending on the present and past input samples and then produces the desired output. Afterwards, the output is processed in MATLAB to obtain the actual shape and accurate values of the ranges of the P-wave and T-wave of the ECG signal. Questa Sim, a Mentor Graphics tool, is used for simulation and functional verification. The same design is verified again using Xilinx ISE, which is also used for synthesis, mapping and bit-file generation. A Xilinx FPGA board is used for implementation of the system, and the final FPGA results are verified with ChipScope Pro, where the output data can be observed.
Keywords: ECG, MATLAB, Bayesian filtering, particle filter, Verilog hardware description language
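The particle filter described above is implemented in Verilog; purely to illustrate the underlying Bayesian filtering idea, the following is a minimal bootstrap particle filter sketch in Python applied to a synthetic noisy signal. The random-walk state model, noise levels and test signal are assumptions, not the paper's design:

import numpy as np

def particle_filter(obs, n_particles=500, proc_std=0.05, meas_std=0.2):
    # Bootstrap particle filter: random-walk state model, Gaussian likelihood
    particles = np.random.normal(obs[0], meas_std, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in obs:
        particles += np.random.normal(0.0, proc_std, n_particles)     # propagate
        weights *= np.exp(-0.5 * ((z - particles) / meas_std) ** 2)   # weight by likelihood
        weights /= weights.sum()
        estimates.append(np.dot(weights, particles))                  # posterior mean
        idx = np.random.choice(n_particles, n_particles, p=weights)   # resample
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

t = np.linspace(0, 1, 400)
noisy = 0.1 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.2, t.size)
print(particle_filter(noisy)[:5])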
Procedia PDF Downloads 366
275 Imp_hist-Si: Improved Hybrid Image Segmentation Technique for Satellite Imagery to Decrease the Segmentation Error Rate
Authors: Neetu Manocha
Abstract:
Image segmentation is a technique in which a picture is partitioned into distinct regions whose pixels share similar features and belong to the same objects. Various segmentation strategies have recently been proposed by prominent researchers. However, after thorough research, the authors observed that, in general, the older methods do not decrease the segmentation error rate. The author then identified the HIST-SI technique to decrease the segmentation error rate; in this technique, cluster-based and threshold-based segmentation techniques are merged. To further improve the results of HIST-SI, the authors added filtering and linking steps, producing a technique named Imp_HIST-SI, to decrease the segmentation error rate. The goal of this research is to find a new technique that decreases segmentation error rates and produces much better results than the HIST-SI technique. For testing the proposed technique, a dataset from Bhuvan, a National Geoportal developed and hosted by ISRO (Indian Space Research Organisation), is used. Experiments are conducted using the Scikit-image and OpenCV tools of Python, and performance is evaluated and compared against various existing image segmentation techniques using several metrics, i.e., Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).
Keywords: satellite image, image segmentation, edge detection, error rate, MSE, PSNR, HIST-SI, linking, filtering, imp_HIST-SI
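A minimal sketch of the MSE and PSNR evaluation metrics mentioned above, assuming 8-bit grayscale images loaded with OpenCV (the file names are placeholders, not the Bhuvan data):

import cv2
import numpy as np

def mse(a, b):
    # mean squared error between two images of the same size
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

reference = cv2.imread("ground_truth_mask.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
segmented = cv2.imread("segmentation_result.png", cv2.IMREAD_GRAYSCALE)
print("MSE:", mse(reference, segmented), "PSNR:", psnr(reference, segmented), "dB")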
Procedia PDF Downloads 137
274 Analysis of Airborne Data Using Range Migration Algorithm for the Spotlight Mode of Synthetic Aperture Radar
Authors: Peter Joseph Basil Morris, Chhabi Nigam, S. Ramakrishnan, P. Radhakrishna
Abstract:
This paper presents the analysis of airborne Synthetic Aperture Radar (SAR) data using the Range Migration Algorithm (RMA) for the spotlight mode of operation. Unlike in the polar format algorithm (PFA), space-variant defocusing and geometric distortion effects are mitigated in RMA, since it does not assume that the illuminating wave-fronts are planar. This facilitates the use of RMA for imaging scenarios involving severe differential range curvatures, enabling the imaging of larger scenes at fine resolution and at shorter ranges with low center frequencies. The RMA for the spotlight mode of SAR is analyzed in this paper using the airborne data. Pre-processing operations, viz. range de-skew and motion compensation to a line, are performed on the raw data before it is fed to the RMA component. The various stages of the RMA, viz. 2D matched filtering, along-track Fourier transform and Stolt interpolation, are analyzed to find the performance limits and the dependence of the imaging geometry on the resolution of the final image. The ability of RMA to compensate for severe differential range curvatures in the two-dimensional spatial frequency domain is also illustrated in this paper.
Keywords: range migration algorithm, spotlight SAR, synthetic aperture radar, matched filtering, Stolt interpolation
Procedia PDF Downloads 241
273 Is the Okun's Law Valid in Tunisia?
Authors: El Andari Chifaa, Bouaziz Rached
Abstract:
The central focus of this paper is to check whether Okun's law is valid in Tunisia. For this purpose, we used quarterly time series data covering the period 1990Q1-2014Q1. Firstly, we applied the error correction model instead of the difference version of Okun's law: the Engle-Granger and Johansen tests are employed to find the long-run association between unemployment and production, and an error correction mechanism (ECM) is used for the short-run dynamics. Secondly, we used the gap version of Okun's law, where the estimation is done with three band-pass filters, which are mathematical tools used in macroeconomics and especially in business cycle theory. The findings of the study indicate that the inverse relationship between unemployment and output is verified in the short and long term, and that Okun's law holds for the Tunisian economy, but with an Okun's coefficient lower than required. Therefore, our empirical results have important implications for structural and cyclical policymakers in Tunisia seeking to promote economic growth in a context of lower unemployment growth.
Keywords: Okun's law, validity, unit root, cointegration, error correction model, bandpass filters
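As an illustration of the gap version estimated with band-pass filters, the following minimal Python sketch applies the Baxter-King and Christiano-Fitzgerald filters from statsmodels to hypothetical quarterly series and estimates an Okun coefficient by regressing the unemployment gap on the output gap; the data and filter settings are assumptions, not the paper's:

import numpy as np
import pandas as pd
from statsmodels.tsa.filters.bk_filter import bkfilter
from statsmodels.tsa.filters.cf_filter import cffilter

# hypothetical quarterly series, 1990Q1-2014Q1
idx = pd.period_range("1990Q1", "2014Q1", freq="Q")
log_gdp = pd.Series(np.linspace(10, 11, len(idx)) + 0.01 * np.random.randn(len(idx)), idx)
unemp = pd.Series(15 + np.random.randn(len(idx)), idx)

gdp_gap = bkfilter(log_gdp, low=6, high=32, K=12)   # Baxter-King band-pass (business-cycle band)
u_gap, _ = cffilter(unemp, low=6, high=32)          # Christiano-Fitzgerald band-pass
aligned = pd.concat([u_gap, gdp_gap], axis=1).dropna()

# Okun coefficient: slope of the unemployment gap on the output gap (expected negative in theory)
beta = np.polyfit(aligned.iloc[:, 1], aligned.iloc[:, 0], 1)[0]
print("Okun coefficient:", beta)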
Procedia PDF Downloads 315
272 Meteosat Second Generation Image Compression Based on the Radon Transform and Linear Predictive Coding: Comparison and Performance
Authors: Cherifi Mehdi, Lahdir Mourad, Ameur Soltane
Abstract:
Image compression is used to reduce the number of bits required to represent an image. The Meteosat Second Generation (MSG) satellite allows the acquisition of 12 image files every 15 minutes, which results in large database sizes. The transform selected for image compression should contribute to reducing the data representing the images. The Radon transform retrieves the Radon points that represent the sum of the pixels along a given angle for each direction. Linear predictive coding (LPC) with filtering provides good decorrelation of the Radon points, using a predictor constituted by the Symmetric Nearest Neighbour (SNN) filter coefficients, which results in losses during decompression. Finally, Run Length Coding (RLC) gives a high and fixed compression ratio regardless of the input image. In this paper, a novel image compression method based on the Radon transform and linear predictive coding (LPC) for MSG images is proposed. MSG image compression based on the Radon transform and LPC provides a good compromise between compression and quality of reconstruction. A comparison of our method with others, two based on the DCT and one on DWT bi-orthogonal filtering, is carried out to show the power of the Radon transform in its resistance to quantization noise and to evaluate the performance of our method. Evaluation criteria such as the PSNR and the compression ratio show the efficiency of our compression method.
Keywords: image compression, Radon transform, linear predictive coding (LPC), run length coding (RLC), Meteosat Second Generation (MSG)
Procedia PDF Downloads 418
271 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics
Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere
Abstract:
Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, thereby exhibit strong fluctuations at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique that is part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals and consists in decomposing a signal in an auto-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of band-pass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled "Empirical Wavelet Transform" (EWT), which consists in building a bank of filters from a segmentation of the original signal's Fourier spectrum. The method used is based on the idea utilized in the construction of both Littlewood-Paley and Meyer's wavelets. The heart of the method lies in the segmentation of the Fourier spectrum based on local maxima detection in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore makes it possible to overcome the mode-mixing problem. On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not allow the detected frequencies to be associated with a specific mode of variability, as the EMD technique does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on the coupling of the EMD and EWT techniques, which uses the spectral density content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained respectively by the EMD, EWT and EAWD techniques on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modelling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet
Procedia PDF Downloads 135
270 Methods for Restricting Unwanted Access on the Networks Using Firewall
Authors: Bhagwant Singh, Sikander Singh Cheema
Abstract:
This paper examines, in depth, firewall mechanisms routinely implemented for network security. A firewall cannot protect against all the hazards of unauthorized networks; consequently, many kinds of infrastructure are employed to establish a secure network. Firewall strategies have already been the subject of significant analysis. This study's primary purpose is to avoid unnecessary connections by combining the capability of the firewall with additional firewall mechanisms, which include packet filtering and NAT, VPNs, and backdoor solutions. Studies on firewall potential and combined approaches exist, but there are not many. The research team's goal is to build a safe network by integrating firewall strength and firewall methods. The study's findings indicate that the recommended concept can form a reliable network. This study examines the characteristics of network security and its primary dangers, synthesizes existing domestic and foreign firewall technologies, and discusses the theories, benefits, and disadvantages of different firewalls. Through synthesis and comparison of various techniques, as well as an in-depth examination of the primary factors that affect firewall effectiveness, this study investigated the current application of firewall technology in computer network security and then introduced a new technique named "tight coupling firewall." Eventually, the article discusses the current state of firewall technology as well as the direction in which it is developing.
Keywords: firewall strategies, firewall potential, packet filtering, NAT, VPN, proxy services, firewall techniques
Procedia PDF Downloads 97
269 A Recommender System for Dynamic Selection of Undergraduates' Elective Courses
Authors: Adewale O. Ogunde, Emmanuel O. Ajibade
Abstract:
The task of selecting a few elective courses from a variety of available elective courses has been a difficult one for many students over the years. In many higher institutions, guidance counselors or level advisers are usually employed to assist the students in making the right choice of courses. In reality, these counselors and advisers are often overloaded with too many students to attend to, and sometimes they do not have enough time for the students. In most cases, the academic strength of the student, based on past results, is not considered in the new choice of electives. Recommender systems implement advanced data analysis techniques to help users find items of interest by producing a predicted likeliness score or a list of top recommended items for a given active user. Therefore, in this work, a collaborative filtering-based recommender system that dynamically recommends elective courses to undergraduate students based on their past grades in related courses was developed. This approach employed the k-nearest neighbor algorithm to discover hidden relationships between the related courses passed by students in the past and the currently available elective courses. A real dataset of students' results was used to build and test the recommendation model. The developed system will not only improve the academic performance of students, but it will also help reduce the workload on the level advisers and school counselors.
Keywords: collaborative filtering, elective courses, k-nearest neighbor algorithm, recommender systems
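A minimal sketch of the k-nearest neighbor collaborative filtering idea described above, predicting a student's elective grade from the grades of the k most similar students; the grade matrix, the cosine similarity measure and k are illustrative assumptions, not the paper's dataset or settings:

import numpy as np

# rows = students, columns = related courses already taken (grades on a 0-5 scale)
grades = np.array([[4.5, 3.0, 5.0],
                   [4.0, 2.5, 4.5],
                   [2.0, 4.5, 2.5],
                   [4.7, 3.2, 4.8]])
elective_scores = np.array([4.6, 4.2, 2.4, np.nan])   # last student's elective grade is unknown

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

target = grades[-1]
sims = np.array([cosine(target, grades[i]) for i in range(len(grades) - 1)])
k = 2
top = np.argsort(sims)[-k:]                            # indices of the k most similar students
predicted = np.dot(sims[top], elective_scores[top]) / sims[top].sum()
print("Predicted elective grade:", round(predicted, 2))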
Procedia PDF Downloads 162
268 UWB Channel Estimation Using an Efficient Sub-Nyquist Sampling Scheme
Authors: Yaacoub Tina, Youssef Roua, Radoi Emanuel, Burel Gilles
Abstract:
Recently, low-complexity sub-Nyquist sampling schemes based on the Finite Rate of Innovation (FRI) theory have been introduced to sample parametric signals at minimum rates. The multichannel modulating waveforms (MCMW) scheme is one such efficient scheme, in which the received signal is mixed with an appropriate set of arbitrary waveforms, integrated, and sampled at rates far below the Nyquist rate. In this paper, the MCMW scheme is adapted to the special case of ultra-wideband (UWB) channel estimation, characterized by dense multipath. First, an appropriate structure, which accounts for the bandpass spectrum feature of UWB signals, is defined. Then, a novel approach to decrease the number of processing channels and reduce the complexity of this sampling scheme is presented. Finally, the proposed concepts are validated by simulation results, obtained with real filters, in the framework of a coherent Rake receiver.
Keywords: coherent Rake receiver, finite rate of innovation, sub-Nyquist sampling, ultra wideband
Procedia PDF Downloads 255
267 Filtering Intrusion Detection Alarms Using Ant Clustering Approach
Authors: Ghodhbani Salah, Jemili Farah
Abstract:
With the growth of cyber attacks, information safety has become an important issue all over the world. Many firms rely on security technologies such as intrusion detection systems (IDSs) to manage information technology security risks. IDSs are considered to be the last line of defense to secure a network and play a very important role in detecting a large number of attacks. However, the main problem with today's most popular commercial IDSs is that they generate a high volume of alerts and a huge number of false positives. This drawback has become the main motivation for many research papers in the IDS area. Hence, in this paper we present a data mining technique to assist network administrators in analyzing and reducing the false positive alarms produced by an IDS and in increasing detection accuracy. Our data mining technique is an unsupervised clustering method based on a hybrid ant algorithm. This algorithm discovers clusters of intruders' behavior without prior knowledge of the possible number of classes; the K-means algorithm is then applied to improve the convergence of the ant clustering. Experimental results on a real dataset show that our proposed approach is efficient, with a high detection rate and a low false alarm rate.
Keywords: intrusion detection system, alarm filtering, ANT class, ant clustering, intruders' behaviors, false alarms
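A minimal sketch of the K-means refinement step described above, applied to hypothetical alert feature vectors with scikit-learn; the initial centroids stand in for the output of the ant clustering stage and, like the flagging rule at the end, are assumptions rather than the paper's method:

import numpy as np
from sklearn.cluster import KMeans

# hypothetical alert features: [duration, bytes, failed_logins], already scaled to [0, 1]
alerts = np.random.rand(200, 3)

# centroids an ant-clustering pass might have produced (assumed seeds)
ant_seeds = np.array([[0.2, 0.2, 0.1],
                      [0.8, 0.7, 0.9]])

km = KMeans(n_clusters=len(ant_seeds), init=ant_seeds, n_init=1, random_state=0)
labels = km.fit_predict(alerts)           # refined cluster assignment for each alert

# alerts far from every centroid can be flagged for manual review as likely false positives
dist = km.transform(alerts).min(axis=1)
flagged = np.where(dist > np.percentile(dist, 90))[0]
print(len(flagged), "alerts flagged for review")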
Procedia PDF Downloads 402
266 A Combination of Anisotropic Diffusion and Sobel Operator to Enhance the Performance of the Morphological Component Analysis for Automatic Crack Detection
Authors: Ankur Dixit, Hiroaki Wagatsuma
Abstract:
Crack detection on concrete bridges is an important and constant task in civil engineering. Conventionally, humans inspect the bridge for cracks to maintain its quality and reliability, but this process is very long and costly. To overcome such limitations, we have used a drone with a digital camera, which took images of the bridge deck; these images are processed by morphological component analysis (MCA). The MCA technique is a very strong application of sparse coding, and it explores the possibility of separating image components. In this paper, MCA has been used to decompose the image into coarse and fine components with the effectiveness of two dictionaries, namely anisotropic diffusion and the wavelet transform. Anisotropic diffusion is an adaptive smoothing process that adjusts the diffusion coefficient using the gray level and gradient as features. The cracks in the image are enhanced by subtracting the diffused coarse image from the original image, and the results are treated by a Sobel edge detector and binary filtering to exhibit the cracks in a fine way. Our results demonstrate that the proposed MCA framework using anisotropic diffusion followed by the Sobel operator and binary filtering may contribute to the automation of crack detection even in severe open-field conditions such as bridge decks.
Keywords: anisotropic diffusion, coarse component, fine component, MCA, Sobel edge detector, wavelet transform
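A minimal sketch of the coarse/fine separation described above: a few Perona-Malik anisotropic diffusion iterations produce a coarse image, the difference from the original is passed through a Sobel edge detector, and a threshold yields a binary crack map. The iteration count, diffusion parameters, threshold and image path are assumptions, not the paper's settings:

import numpy as np
import cv2

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.15):
    # Perona-Malik diffusion: smooth flat regions while preserving strong gradients
    img = img.astype(np.float64)
    for _ in range(n_iter):
        n = np.roll(img, -1, 0) - img
        s = np.roll(img, 1, 0) - img
        e = np.roll(img, -1, 1) - img
        w = np.roll(img, 1, 1) - img
        img += gamma * sum(np.exp(-(d / kappa) ** 2) * d for d in (n, s, e, w))
    return img

gray = cv2.imread("bridge_deck.jpg", cv2.IMREAD_GRAYSCALE)        # placeholder path
coarse = anisotropic_diffusion(gray)                               # coarse component
fine = cv2.absdiff(gray.astype(np.float64), coarse)                # enhanced crack detail
gx = cv2.Sobel(fine, cv2.CV_64F, 1, 0)
gy = cv2.Sobel(fine, cv2.CV_64F, 0, 1)
crack_map = (np.sqrt(gx ** 2 + gy ** 2) > 20).astype(np.uint8) * 255   # binary filtering
cv2.imwrite("crack_map.png", crack_map)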
Procedia PDF Downloads 172
265 Improving Temporal Correlations in Empirical Orthogonal Function Expansions for Data Interpolating Empirical Orthogonal Function Algorithm
Authors: Ping Bo, Meng Yunshan
Abstract:
Satellite-derived sea surface temperature (SST) is a key parameter for many operational and scientific applications. However, a disadvantage of SST data is the high percentage of missing data, which is mainly caused by cloud coverage. The Data Interpolating Empirical Orthogonal Function (DINEOF) algorithm is an EOF-based technique for reconstructing missing data and has been widely used in the oceanographic field. The reconstruction of SST images within a long time series using DINEOF can cause large discontinuities, and one solution to this problem is to filter the temporal covariance matrix to reduce the spurious variability. Based on previous research, an algorithm is presented in this paper to improve the temporal correlations in the EOF expansion. As in previous research, a filter, such as a Laplacian filter, is applied to the temporal covariance matrix, but the presented algorithm also considers the temporal relationship between the two consecutive images used in the filter; for example, two images in the same season are more likely to be correlated than two images in different seasons, and hence the latter pair is weighted less in the filter. The presented approach is tested on the monthly nighttime 4-km Advanced Very High Resolution Radiometer (AVHRR) Pathfinder SST for the long-term period spanning from 1989 to 2006. The results obtained from the presented algorithm are compared to those from the original DINEOF algorithm without filtering and from the DINEOF algorithm with filtering but without taking the temporal relationship into account.
Keywords: data interpolating empirical orthogonal function, image reconstruction, sea surface temperature, temporal filter
Procedia PDF Downloads 323
264 Half Mode Substrate Integrated Wave Guide of Band Pass Filter Based to Defected Ground Structure Cells
Authors: Damou Mehdi, Nouri Keltoum, Feham Mohammed, Khazini Mohammed, Bouazza Tayb Habibi Chawki
Abstract:
The half-mode SIW (HMSIW) filter is analysed with two software packages: HFSS (High Frequency Structure Simulator) and CST (Computer Simulation Technology). The HMSIW filter has a very simple structure and a very compact size. The results simulated by CST are presented and compared with the results simulated by the high-frequency structure simulator, and good agreement between the CST and HFSS simulation results is observed. By cascading two of these filters according to the design requirements, an X-band bandpass filter is designed and simulated to achieve compact size, low insertion loss, good return loss, and second-harmonic suppression. As an example, we designed the proposed HMSIW filter at X band in HFSS. The filter has a pass-band from 7.3 GHz to 9.8 GHz, and its relative operating fractional bandwidth is 29.5%. One transmission zero is located at 14.4 GHz.
Keywords: substrate integrated waveguide, filter, HMSIW, defected ground structures (DGS), simulation, BPF
Procedia PDF Downloads 585
263 Towards the Enhancement of Thermoelectric Properties by Controlling the Thermoelectrical Nature of Grain Boundaries in Polycrystalline Materials
Authors: Angel Fabian Mijangos, Jaime Alvarez Quintana
Abstract:
Waste heat occurs in many areas of daily life because the world's energy consumption is inefficient. In general, generating 1 watt of power requires about 3 watts of energy input and involves dumping into the environment the equivalent of about 2 watts of power in the form of heat. Therefore, an attractive and sustainable solution to the energy problem would be the development of highly efficient thermoelectric devices which could help to recover this waste heat. This work presents the influence on the thermoelectric properties of metallic, semiconducting, and dielectric nanoparticles added into the grain boundaries of polycrystalline antimony (Sb) and bismuth (Bi) matrices in order to obtain p- and n-type thermoelectric materials, respectively, by hot-pressing methods. Results show that the thermoelectric properties are significantly affected by the electrical and thermal nature as well as the concentration of the nanoparticles. Nevertheless, by optimizing the amount of nanoparticles on the grain boundaries, an oscillatory behavior in ZT as a function of the concentration of the nanoscale constituents is observed. This effect is due to an energy filtering mechanism, which modulates the charge transport in the system and affects the thermoelectric properties. Accordingly, a maximum ZT can be accomplished through the addition of the appropriate amount of nanoparticles into the grain boundary region. In this case, up to three orders of improvement in ZT are reached in both systems compared with the reference sample of each one. This approach paves the way to the pursuit of high-performance thermoelectric materials in a simple way and opens a new route towards the enhancement of the thermoelectric figure of merit.
Keywords: energy filtering, grain boundaries, thermoelectric, nanostructured materials
Procedia PDF Downloads 253
262 A Tunable Long-Cavity Passive Mode-Locked Fiber Laser Based on Nonlinear Amplifier Loop Mirror
Authors: Pinghe Wang
Abstract:
In this paper, we demonstrate a tunable long-cavity passively mode-locked fiber laser. The mode locker is a nonlinear amplifying loop mirror (NALM). The cavity frequency of the laser is 465 kHz because 404 m of SMF is inserted in the cavity. A tunable bandpass filter with ~1 nm 3 dB bandwidth is inserted into the cavity to realize tunable mode locking. The passively mode-locked laser at a fixed wavelength is investigated in detail. The experimental results indicate that the laser operates in the dissipative soliton resonance (DSR) region. When the pump power is 400 mW, the laser generates rectangular pulses with 10.58 ns pulse duration and 70.28 nJ single-pulse energy, and it maintains a stable mode-locking state over the range from 1523.4 nm to 1575 nm. Over the whole tuning range, the SNR, the pulse duration, the output power and the single-pulse energy fluctuate slightly because the gain of the EDF changes with the wavelength.
Keywords: fiber laser, dissipative soliton resonance, mode locking, tunable
Procedia PDF Downloads 236
261 Pseudo Modal Operating Deflection Shape Based Estimation Technique of Mode Shape Using Time History Modal Assurance Criterion
Authors: Doyoung Kim, Hyo Seon Park
Abstract:
Studies of system identification (SI) based on structural health monitoring (SHM) have been actively conducted for structural safety. Recently, SI techniques have developed rapidly under the output-only SI paradigm for estimating modal parameters. Output-only SI methods such as Frequency Domain Decomposition (FDD) and Stochastic Subspace Identification (SSI) use algorithms based on orthogonal decomposition, such as singular value decomposition (SVD). However, the SVD leads to a high level of computational complexity when estimating modal parameters. This paper proposes a technique to estimate the mode shape at lower computational cost. The technique obtains pseudo modal operating deflection shapes (ODS) through a bandpass filter and suggests a time history modal assurance criterion (MAC). Finally, the mode shape can be estimated from the pseudo modal ODS and the time history MAC. Analytical simulations of vibration measurement were performed, and the resulting mode shapes and computation times of a representative SI method and the proposed method were compared.
Keywords: modal assurance criterion, mode shape, operating deflection shape, system identification
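The modal assurance criterion at the core of the proposed method compares two deflection vectors; below is a minimal sketch of the standard MAC computation between a band-pass-filtered pseudo modal ODS snapshot and a reference mode shape (the vectors are illustrative, not the paper's simulation data):

import numpy as np

def mac(phi_1, phi_2):
    # Modal Assurance Criterion between two mode-shape / ODS vectors
    num = np.abs(np.dot(phi_1.conj(), phi_2)) ** 2
    den = np.dot(phi_1.conj(), phi_1).real * np.dot(phi_2.conj(), phi_2).real
    return float(num / den)

pseudo_ods = np.array([0.31, 0.58, 0.81, 1.00, 0.79])   # band-pass-filtered snapshot (assumed)
reference = np.array([0.30, 0.59, 0.80, 1.00, 0.80])    # reference first-mode shape (assumed)
print("MAC:", round(mac(pseudo_ods, reference), 4))      # value close to 1 indicates the same mode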
Procedia PDF Downloads 407
260 Model-Based Fault Diagnosis in Carbon Fiber Reinforced Composites Using Particle Filtering
Abstract:
Carbon fiber reinforced composites (CFRP) used as aircraft structures are subject to lightning strikes, putting structural integrity at risk. Indirect damage may occur after a lightning strike, where the internal structure can be damaged by the excessive heat induced by the lightning current while the surface of the structure remains intact. Three damage modes may be observed after a lightning strike: fiber breakage, inter-ply delamination and intra-ply cracks. The assessment of internal damage states in composites is challenging due to the complicated microstructure, inherent uncertainties, and the existence of multiple damage modes. In this work, a model-based approach is adopted to diagnose faults in carbon composites after lightning strikes. A resistor network model is implemented to relate the overall electrical and thermal conduction behavior under a simulated lightning current waveform to the intrinsic temperature-dependent material properties, the microstructure, and the degradation of the materials. A fault detection and identification (FDI) module utilizes the physics-based model and a particle filtering algorithm to identify the damage mode as well as calculate the probability of structural failure. Extensive simulation results are provided to substantiate the proposed fault diagnosis methodology for both single-fault and multiple-fault cases. The approach is also demonstrated on transient resistance data collected from an IM7/epoxy laminate under a simulated lightning strike.
Keywords: carbon composite, fault detection, fault identification, particle filter
Procedia PDF Downloads 193
259 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering
Authors: Hamza Benzerrouk, Alexander Nebylov
Abstract:
In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of preventing GNSS outliers/outages. The tightly coupled GPS/INS navigation filter mixes the GNSS pseudo-range and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs of the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF or HCKF). As mentioned, the EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches called cubature and high-degree cubature Kalman filtering methods. On the basis of previous results on state estimation based on INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High-Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature rule based Kalman Filter (GCKF). High-degree cubature rules are the kernel of the new solution, giving more accurate estimation with less computational complexity compared with the Gauss-Hermite quadrature Kalman filter (GHQKF); the Gauss-Hermite Kalman Filter (GHKF) is not selected in this work because of its limited real-time applicability in high-dimensional state spaces. In an ultra-tightly or deeply coupled GNSS/INS system, a dynamics EKF is used with transition matrix factorization together with GNSS block processing, which is well described in the paper; the presented approach assumes that the intermediate frequency (IF) is available, using correlator samples at a rate of 500 Hz. GNSS (GPS+GLONASS) measurements are assumed to be available, and the modern SPKF and Cubature Kalman Filter (CKF) are compared with new versions of the CKF, called high-order CKFs, based on spherical-radial cubature rules developed here at the fifth order. The estimation accuracy of the high-degree CKF is expected to be comparable to that of the GHKF; the state estimation results are then observed and discussed for different initialization parameters. The results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach based on the high-degree cubature Kalman filter is applied.
Keywords: GNSS, INS, Kalman filtering, ultra-tight integration
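For reference, the third-degree spherical-radial cubature rule at the heart of the CKF propagates 2n equally weighted points; a minimal sketch of generating those points from a state mean and covariance is given below (the state values are illustrative and do not represent the paper's GNSS/INS model):

import numpy as np

def cubature_points(mean, cov):
    # Third-degree spherical-radial rule: 2n points at mean +/- sqrt(n) * S[:, i]
    n = mean.size
    S = np.linalg.cholesky(cov)
    offsets = np.sqrt(n) * np.hstack([S, -S])        # shape (n, 2n)
    return mean[:, None] + offsets, np.full(2 * n, 1.0 / (2 * n))

mean = np.array([0.0, 1.0, 0.5])                      # e.g., a small error-state vector (assumed)
cov = np.diag([0.04, 0.01, 0.09])
points, weights = cubature_points(mean, cov)
# time and measurement updates propagate each column through the nonlinear model
print(points.shape, weights.sum())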
Procedia PDF Downloads 279
258 A Sub-Conjunctiva Injection of Rosiglitazone for Anti-Fibrosis Treatment after Glaucoma Filtration Surgery
Authors: Yang Zhao, Feng Zhang, Xuanchu Duan
Abstract:
Trans-differentiation of human Tenon fibroblasts (HTFs) into myofibroblasts and fibrosis of the episcleral tissue are the most common reasons for the failure of glaucoma filtration surgery, with limited treatment options such as antimetabolites, which always have side effects such as leakage of the filtering bleb, infection, hypotony, and endophthalmitis. Rosiglitazone, a specific thiazolidinedione, is a synthetic high-affinity ligand for PPAR-γ which has been used in the treatment of type 2 diabetes and found to have pleiotropic functions against inflammatory response, cell proliferation and tissue fibrosis, and to benefit a variety of diseases in animal myocardium models, steatohepatitis models, etc. Here, in vitro, we cultured primary HTFs, stimulated them with TGF-β to induce a myofibrogenic phenotype, and then treated the cells with rosiglitazone to assess the fibrogenic response. In vivo, we used a rabbit glaucoma model to establish the formation of post-trabeculectomy scarring. We then administered a subconjunctival injection of rosiglitazone beside the filtering bleb; later, the protein, mRNA and immunofluorescence of fibrogenic markers were checked, and the condition of the filtering bleb was measured. In vitro, we found that rosiglitazone could suppress the proliferation and migration of fibroblasts through macroautophagy via the TGF-β/Smad signaling pathway. In vivo, on postoperative day 28, the rosiglitazone injection group had the significantly lowest mean number of fibroblasts as well as the least collagen content and connective tissue growth factor. Rosiglitazone effectively controlled human and rabbit fibroblasts in vivo and in vitro. Its subconjunctival application may represent an effective, new avenue for the prevention of scarring after glaucoma surgery.
Keywords: fibrosis, glaucoma, macroautophagy, rosiglitazone
Procedia PDF Downloads 270
257 Recommender Systems for Technology Enhanced Learning (TEL)
Authors: Hailah Alballaa, Azeddine Chikh
Abstract:
Several challenges impede the adoption of recommender systems for Technology Enhanced Learning (TEL): collecting and identifying possible datasets, selecting between different recommender approaches, and evaluating their performance. The aim of this paper is twofold. First, it introduces a survey of the most significant work in this area. Second, it identifies possible research directions.
Keywords: datasets, content-based filtering, recommender systems, TEL
Procedia PDF Downloads 243
256 Context-Aware Point-Of-Interests Recommender Systems Using Integrated Sentiment and Network Analysis
Authors: Ho Yeon Park, Kyoung-Jae Kim
Abstract:
Recently, users' interest in location-based social network services has been increasing with the advances of the social web and location-based technologies. It becomes easier to recommend preferred items if we can use the user's preference, context and social network information simultaneously. In this study, we propose context-aware point-of-interest (POI) recommender systems using location-based network analysis and sentiment analysis, which consider context, social network information and implicit user preference scores. We propose a context-aware POI recommendation system consisting of three sub-modules and an integrated recommendation system combining them. First, we develop a recommendation module based on network analysis. This module combines social network analysis and cluster-indexing collaborative filtering. Next, this study develops a recommendation module using social singular value decomposition (SVD) and implicit SVD. This module estimates preference scores based on the frequency of a user's POI visits by using social and implicit SVD, which can reflect implicit feedback in collaborative filtering, and uses these scores in the recommendation process. Third, this study proposes a recommendation module using opinion mining and sentiment analysis of data such as POI reviews extracted from location-based social networks. Finally, we develop an integration algorithm that combines the results of the three recommendation modules proposed in this research. Experimental results show the usefulness of the proposed model in terms of recommendation performance.
Keywords: sentiment analysis, network analysis, recommender systems, point-of-interests, business analytics
Procedia PDF Downloads 248
255 Analysis of Filtering in Stochastic Systems on Continuous-Time Memory Observations in the Presence of Anomalous Noises
Authors: S. Rozhkova, O. Rozhkova, A. Harlova, V. Lasukov
Abstract:
For an optimal mean-square unbiased filter, in the case of anomalous noises acting in the observation memory channel, we have proved the insensitivity of the filter to inaccurate knowledge of the anomalous noise intensity matrix and its equivalence to a truncated filter constructed only from the non-anomalous components of the observation vector.
Keywords: mathematical expectation, filtration, anomalous noise, memory
Procedia PDF Downloads 360
254 Performance Evaluation of GPS/INS Main Integration Approach
Authors: Othman Maklouf, Ahmed Adwaib
Abstract:
This paper introduces a comparative study of the main GPS/INS coupling schemes, including the loosely coupled and tightly coupled configurations, under several types of situations and operational conditions, in which the data fusion process is performed using Kalman filtering. It also covers the importance of sensor calibration as well as the alignment of the strapdown inertial navigation system. The limitations of inertial navigation systems are investigated.
Keywords: GPS, INS, Kalman filter, sensor calibration, navigation system
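A minimal sketch of the loosely coupled fusion step described above: a standard Kalman measurement update correcting INS-predicted position and velocity with a GPS fix. The state layout, matrices and noise levels are illustrative assumptions, not the paper's configuration:

import numpy as np

def kalman_update(x, P, z, H, R):
    # Standard Kalman measurement update used in loosely coupled GPS/INS fusion
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

x = np.array([100.0, 200.0, 1.0, 0.5])             # INS-predicted [pos_n, pos_e, vel_n, vel_e]
P = np.diag([25.0, 25.0, 1.0, 1.0])
z = np.array([102.5, 198.0, 0.9, 0.6])             # GPS position/velocity measurement
H = np.eye(4)                                      # GPS observes the full state in this sketch
R = np.diag([9.0, 9.0, 0.25, 0.25])
x, P = kalman_update(x, P, z, H, R)
print(x)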
Procedia PDF Downloads 588
253 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work is about developing a tool for the visualization and segmentation of electroencephalograph (EEG) signals based on frequency-domain features. Changes in the frequency-domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent changes in mental state using the different frequency-band powers, in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on a better presentation of the signal, which is why it could be a good tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet-transform-based de-noising method. Frequency-domain features are used for segmentation, considering the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation: one provides the time scale and the other assigns the segmentation rule. The segmented data are displayed second by second, successively, with different color codes, and the segment length can be selected according to the objective. The proposed algorithm has been tested on an EEG dataset obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of the desired length, representing the power-spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be extended to real-time visualization with the desired epoch length.
Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
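A minimal sketch of the basic filtering and band-power computation described above, using SciPy; the sampling rate, filter orders, 50 Hz notch frequency and the placeholder signal are assumptions, not the tool's actual settings:

import numpy as np
from scipy.signal import butter, iirnotch, filtfilt, welch

fs = 256                                            # assumed sampling rate (Hz)
eeg = np.random.randn(10 * fs)                      # placeholder single-channel EEG

b, a = butter(4, [0.1, 45], btype="bandpass", fs=fs)   # 0.1-45 Hz band-pass
eeg_f = filtfilt(b, a, eeg)
bn, an = iirnotch(50.0, 30.0, fs=fs)                # 50 Hz mains notch (assumed)
eeg_f = filtfilt(bn, an, eeg_f)

f, pxx = welch(eeg_f, fs=fs, nperseg=fs)            # power spectral density
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
df = f[1] - f[0]
powers = {name: float(pxx[(f >= lo) & (f < hi)].sum() * df)
          for name, (lo, hi) in bands.items()}
print(powers)   # band powers of the kind used to characterize each segment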
Procedia PDF Downloads 397
252 The Study of Heat and Mass Transfer for Ferrous Materials' Filtration Drying
Authors: Dmytro Symak
Abstract:
Drying is a complex technological, thermal and energy process. In many cases, drying is the most costly stage of production, and its energy cost can be over 50% of total costs. In Ukraine, over 85% of Portland cement is produced by the wet process, and energy accounts for almost 60% of the cost of the finished product. In wet cement production, energy consumption is over 5500 kJ/kg of clinker, while in the dry process it is only 3100 kJ/kg; that is, switching to dry Portland cement production would allow energy costs to be nearly halved. Therefore, studying the drying of raw materials in the manufacture of Portland cement is a very relevant task. Drying of fine ferrous materials (small pyrites, red mud, clay Kyoko) is recommended to be carried out by the filtration method, which is one of the most intensive. The essence of filtration drying is that the heat agent is filtered through a stationary layer of wet material located on a perforated partition, in the system "layer of dispersed material - perforated partition". For optimum drying, it is necessary to establish the dependence of the pressure loss in the layer of dispersed material, and of the heat and mass transfer coefficients, on the velocity of the filtering gas flow. In our research, the experimentally determined pressure loss in the layer of dispersed material was generalized in the form of dimensionless complexes, as were the heat exchange coefficients. We also determined the relation between the coefficients of mass and heat transfer. As a result of theoretical and experimental investigations, it was possible to develop a methodology for calculating the optimal parameters of the thermal agent and the main parameters of the filtration drying installation. A comparison of the operating expenses, calculated by known methods, for drying small pyrites in a rotating drum and by the filtration method shows savings of up to 618 kWh per 1,000 kg of dry material, and up to 700 kWh for the filtration drying of clay.
Keywords: drying, cement, heat and mass transfer, filtration method
Procedia PDF Downloads 260
251 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured GNSS-Denied Environments
Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis
Abstract:
In global navigation satellite system (GNSS)-denied settings such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where GNSS is unavailable and no other a priori information about the environment is known, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises by employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces can serve as an accurate aiding source, which directly combats the errors that arise due to gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding sensor method, initial expectations of the performance benefit via simulation, and a hardware implementation, thus verifying its veracity. The hardware implementation is performed on the Quanser Qbot 2™ mobile robot, with a Vector-Nav VN-200™ IMU and a Kinect™ camera from Microsoft.
Keywords: autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion
Procedia PDF Downloads 205
250 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering
Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause
Abstract:
In recent years, object detection has gained much attention and has become a very encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management and environmental services. Unfortunately, to the best of our knowledge, object detection under illumination with shadow consideration has not been well solved yet. Furthermore, this problem is also one of the major hurdles keeping object detection methods from practical applications. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination conditions. Since image capturing conditions vary, the algorithms need to consider a variety of possible environmental factors, as the colour information, lighting and shadows vary from image to image. Existing methods mostly fail to produce the appropriate result due to variations in colour information, lighting effects, threshold specifications, histogram dependencies and colour ranges. To overcome these limitations, we propose an object detection algorithm, with pre-processing methods, to reduce the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image dataset acquired under various environmental conditions for wood stack detection. Experiments show the promising results of the proposed approach in comparison with existing methods.
Keywords: image processing, illumination equalization, shadow filtering, object detection
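A minimal sketch of a pipeline of the kind described above: conversion to the YCrCb colour model, a data-driven (Otsu) threshold on a chrominance channel instead of fixed colour ranges, erosion/dilation, and contour extraction. The image path, kernel size, chosen channel and area threshold are assumptions, not the paper's parameters:

import cv2
import numpy as np

img = cv2.imread("wood_stack.jpg")                           # placeholder path
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
_, _, cb = cv2.split(ycrcb)                                   # work on the Cb chrominance channel

# data-driven threshold instead of fixed colour ranges
_, mask = cv2.threshold(cb, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(cv2.erode(mask, kernel), kernel)            # remove speckle, restore shape

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
objects = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
print(len(objects), "candidate object regions")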
Procedia PDF Downloads 214
249 Analysis and Identification of Different Factors Affecting Students’ Performance Using a Correlation-Based Network Approach
Authors: Jeff Chak-Fu Wong, Tony Chun Yin Yip
Abstract:
The transition from secondary school to university seems exciting for many first-year students but can be more challenging than expected. Enabling instructors to know students' learning habits and styles enhances their understanding of the students' learning backgrounds, allows teachers to provide better support for their students, and therefore has high potential to improve teaching quality and learning, especially in any mathematics-related courses. The aim of this research is to collect students' data using online surveys, to analyze students' factors using learning analytics and educational data mining, and to discover the characteristics of the students at risk of falling behind in their studies based on the students' previous academic backgrounds and the collected data. In this paper, we use correlation-based distance methods and mutual information for measuring relationships between student factors. We then develop a factor network using the minimum spanning tree method and consider further analysis of the topological properties of these networks using social network analysis tools. Under the framework of mutual information, two graph-based feature filtering methods, i.e., unsupervised and supervised infinite feature selection algorithms, are used to analyze the students' data, rank and select appropriate subsets of features, and yield effective results in identifying the factors affecting students at risk of failing. This discovered knowledge may help students as well as instructors enhance educational quality by finding possible under-performers at the beginning of the first semester and giving them more attention in order to support their learning process and improve their learning outcomes.
Keywords: students' academic performance, correlation-based distance method, social network analysis, feature selection, graph-based feature filtering method
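A minimal sketch of the correlation-based factor network described above: a correlation-distance matrix over hypothetical student factors is reduced to a minimum spanning tree with SciPy. The factor matrix is a placeholder, and the particular distance formula is a common choice assumed here, not necessarily the one used in the paper:

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# rows = students, columns = factors (e.g., study hours, prior grades, attendance)
X = np.random.rand(120, 6)

corr = np.corrcoef(X, rowvar=False)                 # factor-by-factor correlation matrix
dist = np.sqrt(2.0 * (1.0 - corr))                  # correlation-based distance
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist)                   # sparse matrix holding the tree edges
rows, cols = mst.nonzero()
for i, j in zip(rows, cols):
    print("factor", i, "-- factor", j, ": weight", round(float(mst[i, j]), 3))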
Procedia PDF Downloads 129
248 Chemical Life Cycle Alternative Assessment as a Green Chemical Substitution Framework: A Feasibility Study
Authors: Sami Ayad, Mengshan Lee
Abstract:
The Sustainable Development Goals (SDGs) were designed to be the best possible blueprint to achieve peace, prosperity, and, overall, a better and more sustainable future for the Earth and all its people, and such a blueprint is needed more than ever. The SDGs face many hurdles that may prevent them from becoming a reality; one such hurdle, arguably, is the chemical pollution and unintended chemical impacts generated through the production of the various goods and resources that we consume. Chemical Alternatives Assessment has proven to be a viable solution for chemical pollution management in terms of filtering out hazardous chemicals in favour of greener alternatives. However, current substitution practice lacks crucial quantitative datasets (exposures and life cycle impacts) to ensure that no unintended trade-offs occur in the substitution process. A Chemical Life Cycle Alternative Assessment (CLiCAA) framework is proposed as a reliable and replicable alternative to Life Cycle Based Alternative Assessment (LCAA), as it integrates chemical molecular structure analysis and the Chemical Life Cycle Collaborative (CLiCC) web-based tool to fill in the data gaps that the former frameworks suffer from. The CLiCAA framework consists of four filtering layers, the first two being mandatory and the final two being optional assessment and data extrapolation steps. Each layer includes relevant impact categories for each chemical, ranging from human to environmental impacts, that are assessed and aggregated into unique scores for overall comparable results, even with little to no data. A feasibility study demonstrates the efficiency and accuracy of CLiCAA while bridging both cancer potency and exposure limit data, with the hope of providing the necessary categorical impact information for every firm possible, especially those disadvantaged in terms of research and resource management.
Keywords: chemical alternative assessment, LCA, LCAA, CLiCC, CLiCAA, chemical substitution framework, cancer potency data, chemical molecular structure analysis
Procedia PDF Downloads 90