Search results for: Straightness Error
615 Optimization of the Characteristic Straight Line Method by a “Best Estimate” of Observed, Normal Orthometric Elevation Differences
Authors: Mahmoud M. S. Albattah
Abstract:
In this paper, to optimize the “Characteristic Straight Line Method”, which is used in soil displacement analysis, a “best estimate” of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept, and consequently the concept of “height”, has been discussed in detail. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed by using the “Characteristic Straight Line Method”, whose characteristic components have been defined and constructed from a “best estimate” of the topometric observations. In the measurement of elevation differences, we have used the most modern leveling equipment available. Observational procedures have also been designed to provide the most effective method of acquiring data. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of air strata; the rod temperature correction accounts for the variation in length of the leveling rod's Invar/LO-VAR® strip which results from temperature changes; the rod scale correction ensures a uniform scale which conforms to the international length standard; and the concept of height systems is introduced, in which the related notions (orthometric, dynamic and normal heights, the gravity correction, and the equipotential surface) have been investigated. The “Characteristic Straight Line Method” is slightly more convenient than the “Characteristic Circle Method”. It permits the evaluation of a displacement of very small magnitude, even when the displacement is an infinitesimal quantity. The inclination of the landslide is given by the inverse of the distance from reference point O to the “Characteristic Straight Line”; its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A “best estimate” of the topometric observations was used to measure the elevation of carefully selected points before and after the deformation. Gross errors have been eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test in an area where very interesting land surface deformation occurs are reported. Monitoring with different options and a qualitative comparison of results based on a sufficient number of check points are presented.
Keywords: Characteristic straight line method, dynamic height, landslides, orthometric height, systematic errors.
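For reference, a brief sketch of the textbook relations behind the height systems named above (standard geodesy definitions, not formulas quoted from the paper): with the geopotential number C_P as the potential difference between the geoid and point P,

\[ C_P = W_0 - W_P = \int_0^{P} g\,dn, \qquad H_P^{\mathrm{dyn}} = \frac{C_P}{\gamma_{45^\circ}}, \qquad H_P^{\mathrm{orth}} = \frac{C_P}{\bar{g}}, \qquad H_P^{N} = \frac{C_P}{\bar{\gamma}} \]

where \(\gamma_{45^\circ}\) is normal gravity at latitude 45°, \(\bar{g}\) is the mean actual gravity along the plumb line, and \(\bar{\gamma}\) is the mean normal gravity along the normal plumb line.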
614 Feature Subset Selection Approach Based on Maximizing Margin of Support Vector Classifier
Authors: Khin May Win, Nan Sai Moon Kham
Abstract:
Identification of cancer genes that might anticipate the clinical behaviors of different types of cancer is challenging due to the huge number of genes and the small number of patient samples. A new method is proposed based on supervised classification learning with support vector machines (SVMs). The solution introduces the Maximized Margin (MM) into the subset criterion, which allows the generalization error rate to approach its minimum. In the class prediction problem, gene selection is essential to improve the accuracy and to identify genes relevant to the cancer disease. The performance of the new method was evaluated in a real-world data experiment, and it gives better classification accuracy.
Keywords: Microarray data, feature selection, recursive feature elimination, support vector machines.
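As a rough, hypothetical illustration of the margin-based subset-selection idea summarized above (not the authors' implementation), the sketch below applies a linear SVM with recursive feature elimination to synthetic microarray-like data; all sizes, names and parameters are placeholders.

import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))        # 60 patient samples, 500 gene-expression values
y = rng.integers(0, 2, size=60)       # binary class labels (e.g. cancer subtype)

svm = SVC(kernel="linear", C=1.0)     # linear SVM: the margin depends on the weight vector w
selector = RFE(estimator=svm, n_features_to_select=30, step=0.1)
selector.fit(X, y)                    # iteratively drops genes with the smallest |w| components

genes = np.flatnonzero(selector.support_)                     # indices of the retained genes
accuracy = cross_val_score(svm, X[:, genes], y, cv=5).mean()  # selected subset scored by cross-validation
print(f"{len(genes)} genes kept, cross-validated accuracy {accuracy:.2f}")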
613 Fifth Order Variable Step Block Backward Differentiation Formulae for Solving Stiff ODEs
Authors: S.A.M. Yatim, Z.B. Ibrahim, K.I. Othman, F. Ismail
Abstract:
An implicit block method based on the backward differentiation formulae (BDF) for the solution of stiff initial value problems (IVPs) using a variable step size is derived. We construct a variable step size block method which stores all the coefficients of the method, with a simplified strategy for controlling the step size with the intention of optimizing the performance in terms of precision and computation time. The strategy keeps the step size constant, halves it, or increases it to 1.9 times the previous step size. The decision to change the step size is determined by the local truncation error (LTE). Numerical results are provided to support the enhancement achieved by the method.
Keywords: Backward differentiation formulae, block backward differentiation formulae, stiff ordinary differential equation, variable step size.
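A minimal sketch of the step-size policy described above (keep the step constant, halve it, or enlarge it by a factor of 1.9, depending on the local truncation error against a user tolerance); the LTE estimate, tolerance and safety factor are placeholders, and the actual block BDF corrector and its coefficients are not reproduced here.

def choose_step(h, lte, tol, safety=0.8):
    """Return (new_step, accepted) following the constant / halving / 1.9x strategy."""
    if lte > tol:             # step rejected: halve the step size and redo the step
        return 0.5 * h, False
    if lte < safety * tol:    # comfortably within tolerance: increase by a factor of 1.9
        return 1.9 * h, True
    return h, True            # otherwise keep the step size constant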
612 Stability Bound of Ruin Probability in a Reduced Two-Dimensional Risk Model
Authors: Zina Benouaret, Djamil Aissani
Abstract:
In this work, we introduce the qualitative and quantitative concept of the strong stability method in the risk process modeling two lines of business of the same insurance company, or an insurance and a re-insurance company that divide both claims and premiums between them in a certain proportion. The proposed approach is based on identifying the ruin probability associated with the considered model with the stationary distribution of a Markov random process called a reversed process. Our objective, after clarifying the conditions and the perturbation domain of the parameters, is to obtain a stability inequality for the ruin probability, which is applied to estimate the approximation error between a model with perturbed parameters and the considered model. In the stability bound obtained, all constants are written explicitly.
Keywords: Markov chain, risk models, ruin probabilities, strong stability analysis.
611 Low Complexity Regular LDPC Codes for Magnetic Storage Devices
Authors: Gabofetswe Malema, Michael Liebelt
Abstract:
LDPC codes could be used in magnetic storage devices because of their better decoding performance compared to other error correction codes. However, their hardware implementation results in large and complex decoders, which is one of the main obstacles to incorporating these decoders in magnetic storage devices. We construct small column-weight-two codes with high girth and rate from cage graphs. Though these codes have lower performance compared to higher column weight codes, they are easier to implement. The ease of implementation makes them more suitable for applications such as magnetic recording. Cages are the smallest known regular graphs of a given degree and girth, which give us the smallest known column-weight-two codes for a given size, girth and rate of the code.
Keywords: Structured LDPC codes, cage graphs.
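As an illustration of the column-weight-two construction mentioned above (an assumed sketch, not the authors' code), a parity-check matrix with column weight 2 can be read off as the vertex-edge incidence matrix of a graph; here the Petersen graph, the (3,5)-cage, stands in for the cage graphs used in the paper.

import numpy as np

# Petersen graph, the (3,5)-cage: 10 vertices, 15 edges, girth 5
edges = [(0,1),(1,2),(2,3),(3,4),(4,0),      # outer 5-cycle
         (5,7),(7,9),(9,6),(6,8),(8,5),      # inner pentagram
         (0,5),(1,6),(2,7),(3,8),(4,9)]      # spokes

H = np.zeros((10, len(edges)), dtype=int)    # one check node per vertex, one code bit per edge
for col, (u, v) in enumerate(edges):
    H[u, col] = 1                            # every edge touches exactly two vertices,
    H[v, col] = 1                            # so every column of H has weight 2

assert (H.sum(axis=0) == 2).all()            # column weight 2
print("row weights:", H.sum(axis=1))         # 3 for every check node (3-regular graph)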
610 Mathematical Modelling of Single Phase Unity Power Factor Boost Converter
Authors: Sanjay L. Kurkute, Pradeep M. Patil, Kakasaheb C. Mohite
Abstract:
An optimal control strategy based on a simple model of a single-phase unity power factor boost converter is presented, with an evaluation of the first-order differential equations. This paper presents an evaluation of a single-phase boost converter with power factor correction. A simple discrete model of the boost converter is formed and the optimal control is obtained; a digital PI controller is adopted to adjust the control error. Instantaneous current control is proposed for its good tracking performance in the dynamic response. The simulation and experimental results verified our design.
Keywords: Single phase, boost converter, power factor correction (PFC), pulse width modulation (PWM).
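A hedged sketch of the digital PI current regulator mentioned above: the boost inductor current is driven to track a rectified sinusoidal reference so that the input current stays in phase with the line voltage. The gains, sampling period and duty-cycle limits are illustrative, not the authors' values.

class DiscretePI:
    """Digital PI regulator for the instantaneous current loop of a PFC boost stage."""
    def __init__(self, kp, ki, ts, d_min=0.0, d_max=0.95):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.d_min, self.d_max = d_min, d_max
        self.integral = 0.0

    def update(self, i_ref, i_meas):
        error = i_ref - i_meas                         # control error adjusted by the PI action
        self.integral += self.ki * self.ts * error
        duty = self.kp * error + self.integral
        return min(max(duty, self.d_min), self.d_max)  # clamp the PWM duty cycle

ctrl = DiscretePI(kp=0.05, ki=40.0, ts=50e-6)          # example gains for a 20 kHz current loop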
609 Development of a Real Time Axial Force Measurement System and IoT-Based Monitoring for Smart Bearing
Authors: Hassam Ahmed, Yuanzhi Liu, Yassine Selami, Wei Tao, Hui Zhao
Abstract:
The purpose of this research is to develop a real time axial force measurement system for a smart bearing through the use of strain gauges, whereby the data acquisition is performed by an Arduino microcontroller because of its easy manipulation and low cost. The measured signal is acquired with a Wheatstone bridge and then discretized with an analog-to-digital converter (ADC). For bearing monitoring, a real time monitoring system based on the Internet of Things (IoT) and Bluetooth was developed. Experimental tests were performed on a bearing within a force range up to 600 kN. The experimental results show that there is a proportional linear relationship between the applied force and the output voltage, with an R squared of 0.9878 from the regression analysis.
Keywords: Bearing, force measurement, IoT, strain gauge.
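An illustrative post-processing step for the strain-gauge readings described above: converting ADC counts to bridge voltage and fitting the linear force-voltage relationship whose R squared the abstract reports. The ADC resolution, reference voltage and readings below are made up for the sketch.

import numpy as np

ADC_BITS, V_REF = 10, 5.0                      # 10-bit Arduino ADC, 5 V reference (assumed)
counts   = np.array([112, 218, 331, 442, 556, 668])   # raw ADC readings (placeholders)
force_kN = np.array([100, 200, 300, 400, 500, 600])   # applied axial force steps

volts = counts / (2**ADC_BITS - 1) * V_REF     # ADC counts -> amplified bridge output voltage
slope, offset = np.polyfit(force_kN, volts, deg=1)
pred = slope * force_kN + offset
r2 = 1 - np.sum((volts - pred)**2) / np.sum((volts - volts.mean())**2)
print(f"sensitivity {slope*1e3:.3f} mV/kN, R^2 = {r2:.4f}")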
608 Lifting Wavelet Transform and Singular Values Decomposition for Secure Image Watermarking
Authors: Siraa Ben Ftima, Mourad Talbi, Tahar Ezzedine
Abstract:
In this paper, we present a technique for secure watermarking of grayscale and color images. This technique consists of applying the Singular Value Decomposition (SVD) in the LWT (Lifting Wavelet Transform) domain in order to insert the watermark image (grayscale) into the host image (grayscale or color). It also uses a signature in the embedding and extraction steps. The technique is applied to a number of grayscale and color images. The performance of this technique is demonstrated by the PSNR (Peak Signal-to-Noise Ratio), the MSE (Mean Square Error) and the SSIM (structural similarity) computations.
Keywords: Color image, grayscale image, singular values decomposition, lifting wavelet transform, image watermarking, watermark, secure.
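A hedged sketch of the embedding step described above, using PyWavelets' dwt2 as a stand-in for an explicit lifting implementation and perturbing the singular values of the approximation band; the wavelet, embedding strength and image sizes are assumptions, and the signature mechanism of the paper is omitted.

import numpy as np
import pywt

rng = np.random.default_rng(0)
host = rng.uniform(0, 255, size=(256, 256))          # grayscale host image (placeholder)
wm   = rng.uniform(0, 255, size=(128, 128))          # grayscale watermark, sized to the LL band
alpha = 0.05                                         # embedding strength (assumed)

cA, (cH, cV, cD) = pywt.dwt2(host, "haar")           # one-level wavelet decomposition
U, S, Vt = np.linalg.svd(cA, full_matrices=False)    # SVD of the approximation (LL) band
Sw = np.linalg.svd(wm, compute_uv=False)             # singular values of the watermark
cA_marked = (U * (S + alpha * Sw)) @ Vt              # embed by perturbing the singular values
marked = pywt.idwt2((cA_marked, (cH, cV, cD)), "haar")

mse = np.mean((marked - host)**2)
psnr = 10 * np.log10(255.0**2 / mse)
print(f"PSNR of watermarked image: {psnr:.1f} dB")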
607 Morphological Analysis of English L1-Persian L2 Adult Learners’ Interlanguage: From the Perspective of SLA Variation
Authors: Maassoumeh Bemani Naeini
Abstract:
Studies on interlanguage have long been engaged in describing the phenomenon of variation in SLA. Pursuing the same goal and particularly addressing the role of linguistic features, this study describes the use of Persian morphology in the interlanguage of two adult English-speaking learners of Persian L2. Taking the general approach of a combination of contrastive analysis, error analysis and interlanguage analysis, this study focuses on the identification and prediction of possible instances of transfer from English L1 to Persian L2 across six elicitation tasks, aiming to investigate whether any contextual features may variably influence the learners’ order of morpheme accuracy in the areas of copula, possessives, articles, demonstratives, plural form, personal pronouns, and genitive case. The results describe the existence of task variation in the interlanguage system of Persian L2 learners.
Keywords: English L1, interlanguage analysis, Persian L2, SLA variation.
606 Predictability of the Two Commonly Used Models to Represent the Thin-Layer Re-wetting Characteristics of Barley
Authors: M. A. Basunia
Abstract:
Thirty-three re-wetting tests were conducted on barley at different combinations of temperature (5.7-46.3 °C) and relative humidity (48.2-88.6%). The two most commonly used thin-layer drying and re-wetting models, the Page and Diffusion models, were compared for their ability to fit the experimental re-wetting data, based on the standard error of estimate (SEE) between the measured and simulated moisture contents. The comparison shows that both the Page and Diffusion models fit the experimental re-wetting data of barley well. The average SEE values for the Page and Diffusion models were 0.176% d.b. and 0.199% d.b., respectively. The Page and Diffusion models were found to be the most suitable equations for describing the thin-layer re-wetting characteristics of barley over a typical five-day re-wetting period. These two models can be used for the simulation of deep-bed re-wetting of barley occurring during ventilated storage and deep-bed drying.
Keywords: Thin-layer, barley, re-wetting parameters, temperature, relative humidity.
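For illustration, a sketch of fitting the Page model MR = exp(-k t^n) to a re-wetting curve by non-linear least squares and computing the standard error of estimate; the data points below are synthetic placeholders, not the paper's barley measurements.

import numpy as np
from scipy.optimize import curve_fit

def page(t, k, n):
    return np.exp(-k * t**n)            # moisture ratio MR = (Me - M) / (Me - M0)

t_h = np.array([0.0, 6.0, 12.0, 24.0, 48.0, 72.0, 96.0, 120.0])   # elapsed time, hours
mr  = np.array([1.00, 0.82, 0.70, 0.55, 0.38, 0.29, 0.24, 0.21])  # synthetic moisture ratios

(k, n), _ = curve_fit(page, t_h, mr, p0=(0.05, 0.8), bounds=(0, [10.0, 5.0]))
see = np.sqrt(np.sum((mr - page(t_h, k, n))**2) / (len(mr) - 2))  # standard error of estimate
print(f"k = {k:.4f}, n = {n:.3f}, SEE = {see:.4f}")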
605 The Performance Improvement of the Target Position Determining System in Laser Tracking Based on 4Q Detector Using Neural Network
Authors: A. Salmanpour, Sh. Mohammad Nejad
Abstract:
One of the methods for detecting the target position error in laser tracking systems is the use of Four Quadrant (4Q) detectors. If the coordinates of the target center are obtained through the usual relations between the detector outputs, the results are nonlinear and depend on the shape, size and position of the target on the detector screen. In this paper we have designed a neural-network-based algorithm in which the coordinates of the target center in laser tracking systems are calculated from detector outputs obtained by visual modeling. With this method, the results, except for the part related to the intrinsic limitations of the detector, are linear and independent of the shape and size of the target.
Keywords: Four quadrant detector, laser tracking system, rangefinder, tracking sensor.
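For context, a sketch of the "usual relations" referred to above: the spot-centroid estimate from the four quadrant photocurrents, which is only approximately linear for small offsets (the nonlinearity the neural network is trained to remove). Quadrant labelling conventions vary; A, B, C, D are assumed counter-clockwise from the top-right quadrant.

def four_quadrant_centroid(a, b, c, d):
    """Normalized target-center estimate from the four quadrant signals."""
    total = a + b + c + d
    x = ((a + d) - (b + c)) / total     # right half minus left half
    y = ((a + b) - (c + d)) / total     # upper half minus lower half
    return x, y

print(four_quadrant_centroid(0.4, 0.3, 0.1, 0.2))   # spot displaced toward the upper right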
604 Application of BP Neural Network Model in Sports Aerobics Performance Evaluation
Authors: Shuhe Shao
Abstract:
This article provides a partial evaluation index system and its standards for sports aerobics, comprising the following 12 indexes: health vitality, coordination, flexibility, accuracy, pace, endurance, elasticity, self-confidence, form, control, uniformity and musicality. A three-layer BP artificial neural network model, consisting of an input layer, a hidden layer and an output layer, is established. The results show that the model can well reflect the non-linear relationship between the performance on the 12 indexes and the overall performance. The predicted value of each sample is very close to the true value, with a relative error fluctuating around 5%, and the network training is successful. This shows that, after effective training, the BP network has high prediction accuracy and good generalization capacity when applied to sports aerobics performance evaluation.
Keywords: BP neural network, sports aerobics, performance, evaluation.
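As a hedged stand-in for the three-layer BP network described above (not the author's trained model), the sketch below uses scikit-learn's MLPRegressor with one sigmoid hidden layer to map the 12 indexes to an overall score; the training data are random placeholders.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 12))              # 12 judged indexes per routine
y = X.mean(axis=1) + rng.normal(0, 0.3, size=200)   # placeholder "overall performance" score

net = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=1)
net.fit(X, y)                                       # three layers: input, hidden, output

rel_err = np.abs(net.predict(X) - y) / np.abs(y)
print(f"mean relative error {rel_err.mean():.1%}")  # the paper reports errors around 5%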
603 Application of a New Approach with Two Networks, Slow and Fast, on the Asynchronous Machine
Authors: Samia Salah, M’hamed Hadj Sadok, Abderrezak Guessoum
Abstract:
In this paper, we propose a new modular approach, called neuroglial, consisting of two neural networks, one slow and one fast, which emulates a recently discovered biological reality. The implementation is based on complex multi-time-scale systems; validation is performed on the model of the asynchronous machine. We applied the geometric approach based on the Gerschgorin circles for the decoupling of fast and slow variables, and the method of singular perturbations for the development of reduced models.
This new architecture allows for smaller networks with less complexity and better performance in terms of mean square error and convergence than the single-network model.
Keywords: Gerschgorin circles, neuroglial network, multi-time-scale systems, singular perturbation method.
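A minimal sketch of the Gerschgorin-circle test used for the slow/fast decoupling: every eigenvalue of the system matrix A lies in at least one disc centred at a diagonal entry with radius equal to the corresponding off-diagonal row sum, so well-separated discs suggest separable time scales. The matrix below is illustrative, not the asynchronous-machine model.

import numpy as np

def gerschgorin_discs(A):
    A = np.asarray(A, dtype=float)
    centers = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centers)   # off-diagonal row sums
    return list(zip(centers, radii))

A = np.array([[-200.0, 3.0, 1.0],      # fast state: disc far into the left half-plane
              [   2.0, -5.0, 0.5],     # slow states: discs clustered near the origin
              [   1.0,  0.4, -4.0]])
for c, r in gerschgorin_discs(A):
    print(f"disc centre {c:8.1f}, radius {r:4.1f}")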
602 A Fuzzy Linear Regression Model Based on Dissemblance Index
Authors: Shih-Pin Chen, Shih-Syuan You
Abstract:
Fuzzy regression models are useful for investigating the relationship between explanatory variables and responses in fuzzy environments. To overcome the deficiencies of previous models and increase the explanatory power of fuzzy data, the graded mean integration (GMI) representation is applied to determine representative crisp regression coefficients. A fuzzy regression model is then constructed based on the modified dissemblance index (MDI), which can precisely measure the actual total error. Based on the proposed MDI and a distance criterion, comparisons with previous studies on commonly used test examples show that the proposed fuzzy linear regression model has higher explanatory power and forecasting accuracy.
Keywords: Dissemblance index, fuzzy linear regression, graded mean integration, mathematical programming.
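For reference, the commonly cited graded mean integration representation of a triangular fuzzy number \(\tilde{A} = (a, b, c)\), assumed here to be the crisp representative the model builds on, is

\[ P(\tilde{A}) = \frac{a + 4b + c}{6}. \]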
601 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media
Authors: Swati Tomar, Sunil Kumar Gupta
Abstract:
Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates direct oxidation of ammoniacal nitrogen under anaerobic conditions with nitrite as the electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of an Anammox Hybrid Reactor (AHR), combining the dual advantages of suspended and attached growth media, for the biodegradation of ammoniacal nitrogen in wastewater. The experimental unit consisted of four AHRs of 5 L capacity each, inoculated with a mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH4-N and NO2-N in the ratio 1:1 at a hydraulic retention time (HRT) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/L. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25 to 3.0 d, with the nitrogen loading rate (NLR) increasing from 0.4 to 4.8 kg N/m3·d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. The filter media in the AHR contributed an additional 27.2% ammonium removal, in addition to a 72% reduction in the sludge washout rate. This may be attributed to the functional mechanism of the filter media, which acts as a mechanical sieve and reduces the sludge washout rate manyfold. This enhances the biomass retention capacity of the reactor by 25%, which is the key parameter for successful operation of high rate bioreactors. The effluent nitrate concentration, which is one of the bottlenecks of the anammox process, was also minimized significantly (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d-1. Model validation revealed that the Grau second-order model was more precise and predicted the effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict N2 gas production in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R2=0.986) and predicted N2 gas with the least error of precision (0.12±8.49%). An SEM study of the biomass indicated the presence of a heterogeneous population of cocci and rod-shaped bacteria with average diameters varying from 1.2-1.5 mm. Owing to the enhanced nitrogen removal efficiency coupled with the meagre production of effluent nitrate and its ability to retain high biomass, the AHR proved to be the most competitive reactor configuration for dealing with nitrogen-laden wastewater.
Keywords: Anammox, filter media, kinetics, nitrogen removal.
600 Target and Equalizer Design for Perpendicular Heat-Assisted Magnetic Recording
Authors: P. Tueku, P. Supnithi, R. Wongsathan
Abstract:
Heat-Assisted Magnetic Recording (HAMR) is one of the leading technologies identified to enable areal densities beyond 1 Tb/in2 in magnetic recording systems. Key challenges in HAMR design are the accuracy of positioning, the timing and power of the firing laser, the thermo-magnetic head, the head-disk interface and the cooling system. We study the effect of HAMR parameters on the transition center and transition width. The HAMR system is modeled using the Thermal Williams-Comstock (TWC) equation and the microtrack model. The target and equalizer are designed by the minimum mean square error (MMSE) criterion. The results show that the unit energy constraint outperforms the other constraints.
Keywords: Heat-Assisted Magnetic Recording, Thermal Williams-Comstock equation, Microtrack model, Equalizer.
599 Application of EEG Wavelet Power to Prediction of Antidepressant Treatment Response
Authors: Dorota Witkowska, Paweł Gosek, Lukasz Swiecicki, Wojciech Jernajczyk, Bruce J. West, Miroslaw Latka
Abstract:
In clinical practice, the selection of an antidepressant often degenerates into lengthy trial and error. In this work we employ the normalized wavelet power of alpha waves as a biomarker of antidepressant treatment response. This novel EEG metric takes into account both the non-stationarity and the intersubject variability of alpha waves. We recorded resting 19-channel EEG (eyes closed) in 22 inpatients suffering from unipolar (UD, n=10) or bipolar (BD, n=12) depression. The EEG measurement was done at the end of the short washout period which followed previously unsuccessful pharmacotherapy. The normalized alpha wavelet power of the 11 responders was markedly different from that of the 11 non-responders at several, mostly temporoparietal, sites. Using the prediction of treatment response based on the normalized alpha wavelet power, we achieved 81.8% sensitivity and 81.8% specificity for channel T4.
Keywords: Alpha waves, antidepressant, treatment outcome, wavelet.
598 The Interaction between Human and Environment on the Perspective of Environmental Ethics
Authors: Mella Ismelina Farma Rahayu
Abstract:
Environmental problems cannot be separated from unethical human perspectives on, and behaviors toward, the environment. There is a fundamental error in the philosophy underlying people’s perspective on humans, nature and their relationship with the environment, which in turn creates inappropriate behavior toward the environment. The aim of this study is to investigate and understand environmental ethics in the context of humans interacting with the environment, using the hermeneutic approach. The related theories and concepts collected from a literature review are used as data, which were analyzed by means of interpretation, critical evaluation, internal coherence, comparison, and heuristic techniques. As a result of this study, a picture emerges of the interaction between humans and the environment from the perspective of environmental ethics, as well as of the problems concerning the value of ecological justice in that interaction. We suggest that the interaction between humans and the environment needs to be based on environmental ethics, in a spirit of mutual respect between humans and the natural world.
Keywords: The environment, environmental ethics, the interaction, value.
597 Comparison of Imputation Techniques for Efficient Prediction of Software Fault Proneness in Classes
Authors: Geeta Sikka, Arvinder Kaur Takkar, Moin Uddin
Abstract:
Missing data is a persistent problem in almost all areas of empirical research. Missing data must be treated very carefully, as data play a fundamental role in every analysis, and improper treatment can distort the analysis or generate biased results. In this paper, we compare and contrast various imputation techniques on missing data sets and make an empirical evaluation of these methods so as to construct quality software models. Our empirical study is based on two public NASA data sets, KC4 and KC1. The actual data sets, of 125 and 2107 cases respectively and without any missing values, were considered. The data sets were used to create Missing at Random (MAR) data. Listwise Deletion (LD), Mean Substitution (MS), interpolation, regression with an error term, and Expectation-Maximization (EM) approaches were then used to compare the effects of the various techniques.
Keywords: Missing data, imputation, missing data techniques.
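For illustration, a toy comparison of two of the treatments named above, listwise deletion and mean substitution, on a small data frame with values removed at random; regression and EM imputation would slot into the same workflow. All names and proportions are placeholders, not the NASA KC1/KC4 setup.

import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(0)
full = pd.DataFrame(rng.normal(size=(125, 4)), columns=list("ABCD"))
mar = full.mask(rng.uniform(size=full.shape) < 0.15)      # ~15% of the values set to missing

listwise = mar.dropna()                                   # listwise deletion (LD)
mean_sub = pd.DataFrame(SimpleImputer(strategy="mean").fit_transform(mar),
                        columns=mar.columns)              # mean substitution (MS)

print(f"LD keeps {len(listwise)} of {len(full)} cases")
print("MS column-mean shift vs. complete data:",
      (mean_sub.mean() - full.mean()).round(3).to_dict())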
596 Arabic Character Recognition Using Regression Curves with the Expectation Maximization Algorithm
Authors: Abdullah A. AlShaher
Abstract:
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models utilize second-order polynomials to model the shapes within a training set. To estimate the regression models, we need to extract the required coefficients which describe the variations of a shape class; hence, a least-squares method is used to estimate these models. We then proceed by training the coefficients using the Expectation-Maximization algorithm. Recognition is carried out by finding the least landmark displacement error with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
Keywords: Shape recognition, Arabic handwritten characters, regression curves, expectation maximization algorithm.
595 Wavelet-Based ECG Signal Analysis and Classification
Authors: Madina Hamiane, May Hashim Ali
Abstract:
This paper presents the processing and analysis of ECG signals. The study is based on the wavelet transform and uses exclusively the MATLAB environment. The study includes removal of baseline wander and further de-noising through the wavelet transform; metrics such as the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR) and mean squared error (MSE) are used to assess the efficiency of the de-noising techniques. Feature extraction is subsequently performed, whereby signal features such as heart rate and rise and fall levels are extracted and the QRS complex is detected, which helps in classifying the ECG signal. Classification is the last step in the analysis of the ECG signals, and it is shown that they are successfully classified as normal rhythm or abnormal rhythm. The final results prove the adequacy of using the wavelet transform for the analysis of ECG signals.
Keywords: ECG Signal, QRS detection, thresholding, wavelet decomposition, feature extraction.
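A hedged sketch of the wavelet de-noising step: decompose the signal, soft-threshold the detail coefficients, reconstruct, and score the result with MSE and SNR. The surrogate signal, wavelet and threshold rule are assumptions for the sketch, not the paper's MATLAB choices.

import numpy as np
import pywt

fs = 360
t = np.arange(0, 4, 1 / fs)
clean = np.sin(2*np.pi*1.2*t) + 0.4*np.sin(2*np.pi*12*t)        # crude ECG-like surrogate
noisy = clean + 0.2 * np.random.default_rng(0).normal(size=t.size)

coeffs = pywt.wavedec(noisy, "db6", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745                  # noise estimate from the finest scale
thr = sigma * np.sqrt(2 * np.log(noisy.size))                   # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db6")[:noisy.size]

mse = np.mean((denoised - clean)**2)
snr = 10 * np.log10(np.sum(clean**2) / np.sum((denoised - clean)**2))
print(f"MSE = {mse:.4f}, SNR = {snr:.1f} dB")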
594 Neural Network Controller for Mobile Robot Motion Control
Authors: Jasmin Velagic, Nedim Osmic, Bakir Lacevic
Abstract:
In this paper a neural-network-based controller is designed for the motion control of a mobile robot. The paper treats the problems of trajectory following and posture stabilization of a mobile robot with nonholonomic constraints. For this purpose a recurrent neural network with one hidden layer is used. It learns the relationship between the linear velocities and the position errors of the mobile robot. This neural network is trained on-line using the backpropagation optimization algorithm with an adaptive learning rate. The optimization algorithm is performed at each sample time to compute the optimal control inputs. The performance of the proposed system is investigated using a kinematic model of the mobile robot.
Keywords: Mobile robot, kinematic model, neural network, motion control, adaptive learning rate.
593 Comanche – A Compiler-Driven I/O Management System
Authors: Wendy Zhang, Ernst L. Leiss, Huilin Ye
Abstract:
Most scientific programs have large input and output data sets that require out-of-core programming or the use of virtual memory management (VMM). Out-of-core programming is very error-prone and tedious and, as a result, is generally avoided. However, in many instances VMM is not an effective approach because it often results in a substantial performance reduction. In contrast, compiler-driven I/O management allows a program's data sets to be retrieved in parts, called blocks or tiles. Comanche (COmpiler MANaged caCHE) is a compiler combined with a user-level runtime system that can be used to replace standard VMM for out-of-core programs. We describe Comanche and demonstrate on a number of representative problems that it substantially outperforms VMM. Significantly, our system does not require any special services from the operating system and does not require modification of the operating system kernel.
Keywords: I/O management, out-of-core, compiler, tile mapping.
592 Motor Imagery Based Brain-Computer Interface for Cerebellar Impaired Patients
Authors: Young-Seok Choi
Abstract:
Cerebellar ataxia is a steadily progressive neurodegenerative disease associated with loss of motor control, leaving patients unable to walk, talk, or perform activities of daily living. Direct motor instruction in cerebellar ataxia patients has limited effectiveness, presumably because an inappropriate closed-loop cerebellar response to the inevitable observed error confounds motor learning mechanisms. Could the use of an EEG-based BCI provide advanced biofeedback to improve motor imagery and provide a “backdoor” to improving motor performance in ataxia patients? In order to determine the feasibility of using EEG-based BCI control in this population, we compare the ability to modulate mu-band power (8-12 Hz) during a cued motor imagery task in an ataxia patient and a healthy control.
Keywords: Cerebellar ataxia, electroencephalogram, brain-computer interface, motor imagery.
591 Numerical Studies of Galerkin-Type Time-Discretizations Applied to Transient Convection-Diffusion-Reaction Equations
Authors: Naveed Ahmed, Gunar Matthies
Abstract:
We deal with the numerical solution of time-dependent convection-diffusion-reaction equations. We combine the local projection stabilization method for the space discretization with two different time discretization schemes: the continuous Galerkin-Petrov (cGP) method and the discontinuous Galerkin (dG) method of polynomial degree k. We establish optimal error estimates and present numerical results which show that the cGP(k) and dG(k) methods are accurate of order k+1 in the whole time interval. Moreover, the cGP(k) method is superconvergent of order 2k, and the dG(k) method of order 2k+1, at the discrete time points. Furthermore, the dependence of the results on the choice of the stabilization parameter is discussed and compared.
Keywords: Convection-diffusion-reaction equations, stabilized finite elements, discontinuous Galerkin, continuous Galerkin-Petrov.
590 A Novel Iterative Approach for Phase Noise Cancellation in Multi-Carrier Code Division Multiple Access (MC-CDMA) Systems
Authors: Joumana Farah, François Marx, Clovis Francis
Abstract:
The aim of this paper is to highlight and alleviate the effect of phase noise, due to imperfect local oscillators, on the performance of a Multi-Carrier CDMA system. After the cancellation of the Common Phase Error (CPE), an iterative approach is introduced which iteratively estimates the Inter-Carrier Interference (ICI) components in the frequency domain and cancels their contribution in the time domain. Simulations are conducted in order to investigate the achievable performance for several parameters, such as the spreading factor, the modulation order, the phase noise power and the transmission Signal-to-Noise Ratio.
Keywords: Inter-carrier interference, Multi-Carrier Code Division Multiple Access, Orthogonal Frequency Division Multiplexing, phase noise.
589 Effect of Iterative Algorithm on the Performance of MC-CDMA System with Nonlinear Models of HPA
Authors: R. Blicha
Abstract:
A high Peak-to-Average Power Ratio (PAPR) of the transmitted signal is a serious problem in multicarrier (MC) systems, such as Orthogonal Frequency Division Multiplexing (OFDM) or Multi-Carrier Code Division Multiple Access (MC-CDMA) systems, due to the large number of subcarriers. This effect can be reduced with PAPR reduction techniques. The spreading sequences, in the presence of the Saleh and Rapp models of the high power amplifier (HPA), have a strong influence on the behavior of the system. In this paper we investigate the bit-error-rate (BER) performance of MC-CDMA systems. The simulations show that the MC-CDMA system with the iterative algorithm provides significantly better results than the MC-CDMA system without it. The results of our analyses are verified via simulation.
Keywords: MC-CDMA, Iterative algorithm, PAPR, BER, Saleh, Rapp, Spreading Sequences.
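For context, a small illustration of the PAPR the abstract is concerned with: the ratio of peak to average power of a multicarrier time-domain symbol. The subcarrier count and QPSK mapping are arbitrary choices for the sketch, not the simulated MC-CDMA configuration.

import numpy as np

rng = np.random.default_rng(0)
n_sc = 256                                                 # number of subcarriers (assumed)
symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=n_sc) / np.sqrt(2)   # unit-power QPSK
x = np.fft.ifft(symbols) * np.sqrt(n_sc)                   # time-domain multicarrier signal

papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
print(f"PAPR = {papr_db:.2f} dB")                          # large peaks stress the nonlinear HPA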
588 Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data
Authors: Salam Khalifa, Naveed Ahmed
Abstract:
We present a new method to reconstruct a temporally coherent 3D animation from single- or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data, in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames, independent of their resolution. Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to extract temporal coherence and faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.
Keywords: 3D video, 3D animation, RGB-D video, Temporally Coherent 3D Animation.
587 Hopfield Network as Associative Memory with Multiple Reference Points
Authors: Domingo López-Rodríguez, Enrique Mérida-Casermeiro, Juan M. Ortiz-de-Lazcano-Lobato
Abstract:
The Hopfield model of associative memory is studied in this work, in particular two main problems that it possesses: the appearance of spurious patterns in the learning phase, implying the well-known effect of storing the opposite pattern, and its reduced capacity, meaning that it is not possible to store a great number of patterns without increasing the error probability in the retrieval phase. In this paper, a method to avoid spurious patterns is presented and studied, and an explanation of the previously mentioned effect is given. Another technique to increase the capacity of a network is also proposed, based on the idea of using several reference points when storing patterns. It is studied in depth, and an explicit formula for the capacity of the network with this technique is provided.
Keywords: Associative memory, Hopfield network, network capacity, spurious patterns.
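For orientation, a minimal sketch of the classical Hopfield associative memory the paper starts from: Hebbian storage of bipolar patterns and sign-threshold recall. The multiple-reference-point extension described in the abstract is not reproduced here.

import numpy as np

def train(patterns):
    n = patterns.shape[1]
    W = patterns.T @ patterns / n          # Hebbian outer-product learning rule
    np.fill_diagonal(W, 0)                 # no self-connections
    return W

def recall(W, x, steps=20):
    x = x.copy()
    for _ in range(steps):                 # synchronous updates, for brevity
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(5, 100))    # 5 stored patterns, 100 neurons
W = train(patterns)
probe = patterns[0].copy()
probe[:10] *= -1                                 # corrupt 10 of the 100 bits
print("bit errors after recall:", int(np.sum(recall(W, probe) != patterns[0])))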
586 FAT Based Adaptive Impedance Control for Unknown Environment Position
Authors: N. Z. Azlan, H. Yamaura
Abstract:
This paper presents a Function Approximation Technique (FAT) based adaptive impedance control for a robotic finger. The force-based impedance control is developed so that the robotic finger tracks the desired force while following the reference position trajectory, under an unknown environment position and uncertainties in the finger parameters. The control strategy is divided into two phases, the free and contact phases. Force error feedback is utilized to update the uncertain environment position during the contact phase. Computer simulation results are presented to demonstrate the effectiveness of the proposed technique.
Keywords: Adaptive impedance control, force-based impedance control, force control, Function Approximation Technique (FAT), unknown environment position.