Search results for: average time to signal
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22218

22008 Stray Light Reduction Methodology by a Sinusoidal Light Modulation and Three-Parameter Sine Curve Fitting Algorithm for a Reflectance Spectrometer

Authors: Hung Chih Hsieh, Cheng Hao Chang, Yun Hsiang Chang, Yu Lin Chang

Abstract:

In the applications of a spectrometer, stray light that comes from the environment significantly affects the measurement results. Hence, environment and instrument quality control for stray light reduction is critical for spectral reflectance measurement. In this paper, a simple and practical method has been developed to correct a spectrometer's response for measurement errors arising from the environment's and instrument's stray light. A sinusoidally modulated light intensity signal was incident on a tested sample, and the reflected light was collected by the spectrometer. Because the incident light was modulated by a sinusoidal signal, the reflected light carried the same modulation frequency as the incident signal. Using the three-parameter sine curve fitting algorithm, we can extract the primary reflectance signal from the total measured signal, which contains both the primary reflectance signal and the stray light from the environment. The spectra extracted by the proposed method under extreme environmental stray light show 99.98% similarity to the spectra measured without the environment's stray light. This result shows that we can measure the reflectance spectra without being affected by the environment's stray light.
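
As an illustration of the extraction step described above, the sketch below fits a measured intensity trace to the three-parameter model y(t) = A·sin(ωt) + B·cos(ωt) + C by linear least squares at the known modulation frequency: the fitted amplitude carries the modulated reflectance component, while the constant term absorbs the un-modulated stray-light background. This is a minimal Python/NumPy sketch with hypothetical signal values, not the authors' implementation.

```python
import numpy as np

def three_parameter_sine_fit(t, y, f_mod):
    """Least-squares fit of y(t) = A*sin(w t) + B*cos(w t) + C at a known
    modulation frequency f_mod (Hz). Returns amplitude, phase and offset."""
    w = 2.0 * np.pi * f_mod
    # Design matrix for the three unknown parameters A, B, C.
    M = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    (A, B, C), *_ = np.linalg.lstsq(M, y, rcond=None)
    amplitude = np.hypot(A, B)          # modulated (reflectance) component
    phase = np.arctan2(B, A)
    return amplitude, phase, C          # C absorbs the un-modulated stray light

# Hypothetical example: a modulated reflectance buried in a constant stray-light offset
t = np.linspace(0.0, 1.0, 1000)
y = 0.8 * np.sin(2 * np.pi * 5.0 * t + 0.3) + 2.5 + 0.01 * np.random.randn(t.size)
amp, ph, offset = three_parameter_sine_fit(t, y, f_mod=5.0)
print(amp, offset)   # amp ≈ 0.8 (signal), offset ≈ 2.5 (stray light)
```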

Keywords: spectrometer, stray light, three-parameter sine curve fitting, spectra extraction

Procedia PDF Downloads 227
22007 Dose Saving and Image Quality Evaluation for Computed Tomography Head Scanning with Eye Protection

Authors: Yuan-Hao Lee, Chia-Wei Lee, Ming-Fang Lin, Tzu-Huei Wu, Chih-Hsiang Ko, Wing P. Chan

Abstract:

Computed tomography (CT) scanning of the head is a good method for investigating cranial lesions. However, radiation-induced oxidative stress can accumulate in the eyes and promote carcinogenesis and cataract formation. In this regard, we aimed to protect the eyes with barium sulfate shield(s) during CT scans and investigate the resultant image quality and radiation dose to the eye. Patients who underwent health examinations were selectively enrolled in this study in compliance with the protocol approved by the Ethics Committee of the Joint Institutional Review Board at Taipei Medical University. Participants' brains were scanned together with a water-based marker by a multislice CT scanner (SOMATOM Definition Flash) under a fixed tube current-time setting or automatic tube current modulation (TCM). The lens dose was measured by Gafchromic films, whose dose response curve was previously fitted using thermoluminescent dosimeters, with or without a barium sulfate or bismuth-antimony shield laid above. For the assessment of image quality, CT images at slice planes exhibiting the regions of interest on the zygomatic, orbital and nasal bones of the head phantom, as well as the water-based marker, were used for calculating the signal-to-noise and contrast-to-noise ratios. The application of barium sulfate and bismuth-antimony shields reduced the lens dose by 24% and 47% on average, respectively. Under topogram-based TCM, the dose-saving power of the bismuth-antimony shield was mitigated, whereas that of the barium sulfate shield was enhanced. On the other hand, the signal-to-noise and contrast-to-noise ratios of DSCT images were decreased separately by the barium sulfate and bismuth-antimony shields, resulting in an overall reduction of the CNR. In contrast, the integration of topogram-based TCM elevated the signal difference between the ROIs on the zygomatic bones and eyeballs while preferentially decreasing the signal-to-noise ratios upon the use of the barium sulfate shield. The results of this study indicate that the balance between eye exposure and image quality can be optimized by combining eye shields with topogram-based TCM on the multislice scanner. Eye shielding can change the photon attenuation characteristics of tissues that are close to the shield. The application of both shields for eye protection is hence not recommended when seeking intraorbital lesions.

Keywords: computed tomography, barium sulfate shield, dose saving, image quality

Procedia PDF Downloads 255
22006 Denoising of Motor Unit Action Potential Based on Tunable Band-Pass Filter

Authors: Khalida S. Rijab, Mohammed E. Safi, Ayad A. Ibrahim

Abstract:

When electrodes are mounted on the skin surface over a muscle, a signal is detected whenever the skeletal muscle contracts; this signal is known as the surface electromyographic (EMG) signal. This signal has a noise-like interference pattern resulting from the temporal and spatial summation of the action potentials (AP) of all active motor units (MU) near the detection electrodes. By appropriate processing (decomposition), the surface EMG signal may be used to give an estimate of the motor unit action potential. In this work, a denoising technique is applied to the MUAP signals extracted from the spatial filter (IB2). A set of signals from a non-invasive two-dimensional grid of 16 electrodes was recorded from subjects, muscles, and sexes of different types. These signals acquire noise during recording and detection. A digital fourth-order band-pass Butterworth filter is used for denoising; a suitable choice of cutoff frequencies is investigated with the aim of obtaining an appropriate pass band. Results show that an improvement of 1-3 dB in the signal-to-noise ratio (SNR) has been achieved, relative to the raw spatial filter output signals, for all cases under investigation. Furthermore, the research also included estimation and reconstruction of the mean shape of the MUAP.
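
A minimal sketch of the filtering stage, assuming SciPy is available; the sampling rate, the candidate cutoff pairs, and the placeholder trace below are assumptions for illustration, not the recorded 16-electrode data or the authors' tuning procedure.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_denoise(emg, fs, low_hz, high_hz, order=4):
    """Zero-phase fourth-order Butterworth band-pass filtering of a MUAP/EMG trace."""
    b, a = butter(order, [low_hz, high_hz], btype='bandpass', fs=fs)
    return filtfilt(b, a, emg)

# Hypothetical usage: sweep candidate pass bands for a recorded trace and keep
# the one giving the best SNR relative to the raw spatial-filter output.
fs = 2048.0                                      # assumed sampling rate (Hz)
emg = np.random.randn(int(2 * fs))               # placeholder for an IB2 output signal
candidates = [(10, 400), (20, 450), (30, 500)]   # assumed cutoff pairs (Hz)
filtered = {band: bandpass_denoise(emg, fs, *band) for band in candidates}
```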

Keywords: EMG, motor unit, digital filter, denoising

Procedia PDF Downloads 392
22005 A Comparative Analysis on QRS Peak Detection Using BIOPAC and MATLAB Software

Authors: Chandra Mukherjee

Abstract:

The present paper describes work done in the field of ECG signal analysis using the MATLAB 7.1 platform. An accurate and simple ECG feature extraction algorithm is presented, and the developed algorithm is validated using BIOPAC software. To detect the QRS peak, the ECG signal is processed through the following stages: first derivative, second derivative, and then squaring of the second derivative. The efficiency of the developed algorithm is tested on ECG samples from different databases and on real-time ECG signals acquired using the BIOPAC system. First, a lead-wise threshold value is specified; samples above this value are marked, and the points in the original signal where these marked samples undergo a change of slope are identified as R-peaks. The changes of slope on the left and right sides of the R-peak are identified as the Q and S peaks, respectively. The built-in detection algorithm of the BIOPAC software is then run on the same samples, and the two outputs are compared. ECG baseline modulation correction is done after detecting the characteristic points. The efficiency of the algorithm is assessed using validation parameters such as sensitivity and positive predictivity, and satisfactory values of these parameters were obtained.
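
The sketch below illustrates the described stages (first derivative, second derivative, squaring, lead-wise thresholding, and locating the slope reversal as the R-peak) on an ECG array; the threshold ratio and refractory window are hypothetical choices, and the Q and S peaks would then be taken at the slope changes to the left and right of each detected R-peak. It is an approximation of the described pipeline, not the validated algorithm.

```python
import numpy as np

def detect_r_peaks(ecg, fs, threshold_ratio=0.4):
    """Sketch of the described stages: derivatives, squaring, thresholding,
    and locating slope reversals in the original signal as R-peaks."""
    d1 = np.diff(ecg)                  # first derivative
    d2 = np.diff(d1)                   # second derivative
    feature = d2 ** 2                  # squaring emphasises the sharp QRS slopes
    threshold = threshold_ratio * feature.max()    # lead-wise threshold (assumed rule)
    marked = np.where(feature > threshold)[0] + 2  # map indices back to the original signal

    r_peaks = []
    refractory = int(0.2 * fs)         # skip marks within 200 ms of the last peak
    for idx in marked:
        if r_peaks and idx - r_peaks[-1] < refractory:
            continue
        lo, hi = max(idx - refractory // 2, 0), min(idx + refractory // 2, len(ecg))
        r_peaks.append(lo + int(np.argmax(ecg[lo:hi])))   # slope reversal = local maximum
    return sorted(set(r_peaks))

# Hypothetical usage: r_locations = detect_r_peaks(ecg_samples, fs=360)
```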

Keywords: first derivative, variable threshold, slope reversal, baseline modulation correction

Procedia PDF Downloads 400
22004 Cooperative Sensing for Wireless Sensor Networks

Authors: Julien Romieux, Fabio Verdicchio

Abstract:

Wireless Sensor Networks (WSNs), which sense environmental data with battery-powered nodes, require multi-hop communication. This power-demanding task adds an extra workload that is unfairly distributed across the network. As a result, nodes run out of battery at different times: this requires an impractical individual node maintenance scheme. Therefore, we investigate a new Cooperative Sensing approach that extends the WSN operational life and allows a more practical network maintenance scheme (where all nodes deplete their batteries almost at the same time). We propose a novel cooperative algorithm that derives a piecewise representation of the sensed signal while controlling approximation accuracy. Simulations show that our algorithm increases WSN operational life and spreads the communication workload evenly. The results convey a counterintuitive conclusion: distributing the workload fairly amongst nodes may not decrease the overall network power consumption and yet extends the WSN operational life. This is achieved as our cooperative approach decreases the workload of the most burdened cluster in the network.
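
A piecewise representation with controlled approximation accuracy could, for instance, be a greedy piecewise-linear segmentation like the sketch below (plain NumPy, hypothetical error bound); the cooperative, in-network distribution of this work across clusters is not reproduced here.

```python
import numpy as np

def piecewise_linear_segments(samples, max_error):
    """Greedy piecewise-linear representation of a sensed signal: extend the
    current segment while the linear fit stays within max_error, otherwise
    close it and start a new one. Only segment endpoints need to be stored/sent."""
    segments, start = [], 0
    n = len(samples)
    while start < n - 1:
        end = start + 1
        while end + 1 < n:
            x = np.arange(start, end + 2)
            slope, intercept = np.polyfit(x, samples[start:end + 2], 1)
            if np.max(np.abs(samples[start:end + 2] - (slope * x + intercept))) > max_error:
                break
            end += 1
        segments.append((start, end, samples[start], samples[end]))
        start = end
    return segments

# Hypothetical usage on a slowly varying, temperature-like signal
t = np.linspace(0, 10, 500)
signal = np.sin(t) + 0.02 * np.random.randn(t.size)
print(len(piecewise_linear_segments(signal, max_error=0.1)), "segments")
```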

Keywords: cooperative signal processing, signal representation and approximation, power management, wireless sensor networks

Procedia PDF Downloads 378
22003 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

In this paper, we mathematically present the basis of a new class of algorithms that treats a historical reason for continuous discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. Given a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are specified, firstly, we introduce the detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that minimizes, at the same time, all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output-signals Y(ω) of the matrix operator-filter bank. Further, feedback is introduced into the above approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the concept of category from the field of mathematics is applied to the above optimum signal approximation, and it is shown that the category-based approximation theory applies to the set-theoretic consideration of human recognition. Based on this discussion, it is shown naturally why the narrow perception that tends to create isolation shows an apparent advantage in the short term and, often, why such narrow thinking becomes intimate with discriminatory action in a human group. Throughout these considerations, it is presented that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization

Procedia PDF Downloads 126
22002 Identification of the Relationship Between Signals in Continuous Monitoring of Production Systems

Authors: Maciej Zaręba, Sławomir Lasota

Abstract:

Understanding the dependencies between the input signals that control a production system and the signals that capture its output is of great importance in intelligent systems. A method for identifying the relationship between signals in continuous monitoring of production systems is described in the paper. The method discovers the correlation between changes in the states derived from the input signals and the resulting changes in the states of the output signals of the production system. The method is able to handle system inertia, which determines the time shift of the relationship between the input and output.
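
One simple way to realize this idea, sketched below under assumed variable names, is to cross-correlate the changes of the input-state signal with the changes of the output-state signal and read the system inertia off the lag that maximizes the correlation; the paper's full state-derivation procedure is not reproduced.

```python
import numpy as np

def estimate_shift_and_correlation(input_states, output_states):
    """Estimate the time shift (inertia) that maximises the correlation between
    changes in the input-state signal and changes in the output-state signal."""
    din = np.diff(input_states.astype(float))
    dout = np.diff(output_states.astype(float))
    din = (din - din.mean()) / (din.std() + 1e-12)
    dout = (dout - dout.mean()) / (dout.std() + 1e-12)
    xcorr = np.correlate(dout, din, mode='full') / len(din)
    lags = np.arange(-len(din) + 1, len(din))
    best = np.argmax(xcorr)
    return lags[best], xcorr[best]      # positive lag: the output lags behind the input

# Hypothetical example: the output follows the input with a 7-sample delay
rng = np.random.default_rng(0)
u = rng.integers(0, 2, 300)             # binary control-signal states
y = np.roll(u, 7) + 0.1 * rng.standard_normal(300)
print(estimate_shift_and_correlation(u, y))   # expected lag ≈ 7
```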

Keywords: manufacturing operation management, signal relationship, continuous monitoring, production systems

Procedia PDF Downloads 82
21986 Coding and Decoding versus Space Diversity for Rayleigh Fading Radio Frequency Channels

Authors: Ahmed Mahmoud Ahmed Abouelmagd

Abstract:

Diversity is the usual remedy for transmitted signal level variations (fading phenomena) in radio frequency channels. Diversity techniques utilize two or more copies of a signal and combine those signals to combat fading. The basic concept of diversity is to transmit the signal via several independent diversity branches to obtain independent signal replicas in the time, frequency, space, and polarization diversity domains. Coding and decoding processes can be an alternative remedy for fading phenomena; they cannot increase the channel capacity, but they can improve the error performance. In this paper we propose the use of replication decoding with the BCH code class, and the Viterbi decoding algorithm with convolutional coding, as examples of coding and decoding processes. The results are compared to those obtained from two optimized selection space diversity techniques. The performance of the Rayleigh fading channel, as the model considered for radio frequency channels, is evaluated for each case. The evaluation results show that the coding and decoding approaches, especially the BCH coding approach with the replication decoding scheme, give better performance than the selection space diversity optimization approaches. Also, an approach combining the coding and decoding diversity as well as the space diversity is considered; the main disadvantage of this approach is its complexity, but it yields good performance results.
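
For the space-diversity baseline, a small Monte-Carlo sketch of BPSK over a Rayleigh fading channel with L-branch selection combining is given below (NumPy/SciPy); the BCH replication decoding and Viterbi decoding chains themselves are not reproduced, and the trial counts and SNR points are arbitrary.

```python
import numpy as np
from scipy.special import erfc

def bpsk_ber_rayleigh(snr_db, n_branches=1, n_trials=200000, rng=None):
    """Monte-Carlo BER of BPSK over Rayleigh fading with L-branch selection
    diversity (the branch with the largest instantaneous SNR is selected)."""
    rng = rng or np.random.default_rng(0)
    snr = 10.0 ** (snr_db / 10.0)
    # Independent Rayleigh branch gains; |h|^2 is exponentially distributed.
    h2 = rng.exponential(scale=1.0, size=(n_trials, n_branches))
    gamma = snr * h2.max(axis=1)              # selection combining
    # Conditional BPSK error probability Q(sqrt(2*gamma)) = 0.5*erfc(sqrt(gamma))
    return np.mean(0.5 * erfc(np.sqrt(gamma)))

for snr_db in (5, 10, 15):
    print(snr_db, bpsk_ber_rayleigh(snr_db, 1), bpsk_ber_rayleigh(snr_db, 2))
```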

Keywords: Rayleigh fading, diversity, BCH codes, replication decoding, convolutional coding, Viterbi decoding, space diversity

Procedia PDF Downloads 427
22000 Dysfunctional Behavior of External Auditors: The Collision of Time Budget and Time Deadline

Authors: Rabih Nehme, Abdullah Al Mutawa

Abstract:

The general goal behind this research is to gain a better understanding of the factors leading to dysfunctional behavior of auditors. Recent accounting scandals - Enron, Waste Management Inc., WorldCom, Xerox Corporation, etc. - provided ample proof of how the role of auditors has become the basis of controversial debates in many circles and instances in our modern time. The majority of lawsuits and accounting scandals seem to have a central topic in focus, namely the question 'Where were the auditors?' The survey we offer for research is made up of 34 questions designed to analyse the perception of auditors and the causes of dysfunctional behavior. The sample of this research comprises auditors positioned and employed at the Big Four audit firms in Kuwait. Dysfunctional behavior (DB) is measured against two proxies of dysfunctional behavior: premature sign-off and underreporting of chargeable time. DB is analysed against time budget pressure and time deadline pressure. The research results suggest that the general belief among auditors is that the profession of accountancy predetermines their tendency to commit certain patterns of dysfunctional behavior. Having conducted our investigation at the Big Four audit firms, we have come to the conclusion that there is a general difference in behavior patterns between perceptions of dysfunctional behavior and normal skeptical professional behavior.

Keywords: big four, dysfunctional behavior, time budget, time deadline

Procedia PDF Downloads 451
21999 Continuous-Time and Discrete-Time Singular Value Decomposition of an Impulse Response Function

Authors: Rogelio Luck, Yucheng Liu

Abstract:

This paper proposes the continuous-time singular value decomposition (SVD) for the impulse response function, a special kind of Green's function e⁻⁽ᵗ⁻ᵀ⁾, in order to find a set of singular functions and singular values so that the convolutions of such a function with the set of singular functions on a specified domain are the solutions to the inhomogeneous differential equations for those singular functions. A numerical example is presented to verify the proposed method. Besides the continuous-time SVD, a discrete-time SVD is also presented for the impulse response function, which is modeled using a Toeplitz matrix in the discrete system. The proposed method has broad applications in signal processing, dynamic system analysis, acoustic analysis, thermal analysis, as well as macroeconomic modeling.
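
A discrete-time version of the idea can be sketched by sampling the impulse response, arranging it as a lower-triangular Toeplitz convolution operator, and taking its SVD; the step size, length, and normalization below are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import toeplitz

# Discrete-time impulse response h[n] = e^{-n*dt} for n >= 0 (a sampled version of
# the causal Green's function e^{-(t-T)}), modelled as a Toeplitz operator.
dt, N = 0.05, 200
n = np.arange(N)
h = np.exp(-n * dt) * dt                 # include dt so y = H @ f approximates convolution

# Lower-triangular (causal) Toeplitz convolution matrix: y = H f
H = toeplitz(h, np.zeros(N))

# Discrete-time SVD: singular values and singular vectors of the impulse-response operator
U, s, Vt = np.linalg.svd(H)
print("largest singular values:", s[:5])

# The singular vectors diagonalise the convolution: H = U diag(s) V^T, so the
# response to the k-th right singular vector is s[k] * U[:, k].
k = 0
y = H @ Vt[k]
print(np.allclose(y, s[k] * U[:, k]))
```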

Keywords: singular value decomposition, impulse response function, Green's function, Toeplitz matrix, Hankel matrix

Procedia PDF Downloads 144
21998 Increased Reaction and Movement Times When Text Messaging during Simulated Driving

Authors: Adriana M. Duquette, Derek P. Bornath

Abstract:

Reaction Time (RT) and Movement Time (MT) are important components of everyday life that have an effect on the way in which we move about our environment. These measures become even more crucial when an event can be caused (or avoided) in a fraction of a second, such as the RT and MT required while driving. The purpose of this study was to develop a simpler method of testing RT and MT during simulated driving with or without text messaging, in a university-aged population (n = 170). In the control condition, a randomly-delayed red light stimulus flashed on a computer interface after the participant began pressing the ‘gas’ pedal on a foot switch mat. Simple RT was defined as the time between the presentation of the light stimulus and the initiation of lifting the foot from the switch mat ‘gas’ pedal; while MT was defined as the time after the initiation of lifting the foot, to the initiation of depressing the switch mat ‘brake’ pedal. In the texting condition, upon pressing the ‘gas’ pedal, a ‘text message’ appeared on the computer interface in a dialog box that the participant typed on their cell phone while waiting for the light stimulus to turn red. In both conditions, the sequence was repeated 10 times, and an average RT (seconds) and average MT (seconds) were recorded. Condition significantly (p < .001) impacted overall RTs, as the texting condition (0.47 s) took longer than the no-texting (control) condition (0.34 s). Longer MTs were also recorded during the texting condition (0.28 s) than in the control condition (0.23 s), p = .001. Overall increases in Response Time (RT + MT) of 189 ms during the texting condition would equate to an additional 4.2 meters (to react to the stimulus and begin braking) if the participant had been driving an automobile at 80 km per hour. In conclusion, increasing task complexity due to the dual-task demand of text messaging during simulated driving caused significant increases in RT (41%), MT (23%) and Response Time (34%), thus further strengthening the mounting evidence against text messaging while driving.
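
The reported distance follows directly from the response-time increase; a one-line check:

```python
speed_m_per_s = 80 * 1000 / 3600           # 80 km/h ≈ 22.2 m/s
extra_distance_m = speed_m_per_s * 0.189   # 189 ms additional response time
print(round(extra_distance_m, 1))          # ≈ 4.2 m travelled before braking begins
```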

Keywords: simulated driving, text messaging, reaction time, movement time

Procedia PDF Downloads 515
21997 Maximizing Bidirectional Green Waves for Major Road Axes

Authors: Christian Liebchen

Abstract:

Both from an environmental perspective and with respect to road traffic flow quality, planning so-called green waves along major road axes is a well-established target for traffic engineers. For one-way road axes (e.g. the Avenues in Manhattan), this is a trivial downstream task. For bidirectional arterials, the well-known necessary condition for establishing a green wave in both directions is that the driving times between two subsequent crossings must be an integer multiple of half of the cycle time of the signal programs at the nodes. In this paper, we propose an integer linear optimization model to establish fixed-time green waves in both directions that are as long and as wide as possible, even in the situation where the driving time condition is not fulfilled. In particular, we are considering an arterial along whose nodes separate left-turn signal groups are realized. In our computational results, we show that scheduling left-turn phases before or after the straight phases can reduce waiting times along the arterial. Moreover, we show that there is always a solution with green waves in both directions that are as long and as wide as possible, where absolute priority is put on just one direction. Compared to optimizing both directions together, establishing an ideal green wave into one direction can only provide suboptimal quality when considering prioritized parts of a green band (e.g., first few seconds).

Keywords: traffic light coordination, synchronization, phase sequencing, green waves, integer programming

Procedia PDF Downloads 102
21996 System Identification of Timber Masonry Walls Using Shaking Table Test

Authors: Timir Baran Roy, Luis Guerreiro, Ashutosh Bagchi

Abstract:

Dynamic studies are important for the design, repair, and rehabilitation of structures. They have played an important role in characterizing the behavior of structures such as bridges, dams, and high-rise buildings. There has been substantial development in this area over the last few decades, especially in the field of dynamic identification techniques for structural systems. Frequency Domain Decomposition (FDD) and Time Domain Decomposition are the most commonly used methods to identify modal parameters such as natural frequency, modal damping, and mode shape. The focus of the present research is to study the dynamic characteristics of typical timber masonry walls commonly used in Portugal. For that purpose, multi-storey structural prototypes of such walls have been tested on a seismic shake table at the National Laboratory for Civil Engineering, Portugal (LNEC). Signal processing has been performed on the output response, which was collected from the shaking table experiment on the prototype using accelerometers. In the present work, signal processing of the output response, based on the input response, has been done in two ways: FDD and Stochastic Subspace Identification (SSI). In order to estimate the values of the modal parameters, algorithms for FDD are formulated, and parametric functions for the SSI are computed. Finally, the estimated values from both methods are compared to assess the accuracy of the two techniques.
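
A minimal FDD sketch, assuming SciPy: build the cross-power spectral density matrix of the accelerometer channels and take an SVD at every frequency line; peaks of the first singular value indicate the natural frequencies, and the corresponding singular vectors approximate the mode shapes. The channel count, sampling rate, and segment length below are placeholders, and the SSI branch is not reproduced.

```python
import numpy as np
from scipy.signal import csd

def fdd_first_singular_value(acc, fs, nperseg=1024):
    """Frequency Domain Decomposition sketch: cross-power spectral density (CSD)
    matrix of all accelerometer channels, then an SVD at every frequency line.
    acc has shape (n_channels, n_samples)."""
    n_ch = acc.shape[0]
    freqs, G = None, None
    for i in range(n_ch):
        for j in range(n_ch):
            f, Pij = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
            if G is None:
                freqs, G = f, np.zeros((len(f), n_ch, n_ch), dtype=complex)
            G[:, i, j] = Pij
    s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(len(freqs))])
    return freqs, s1   # peaks of s1 mark natural frequencies; the matching
                       # first singular vectors approximate the mode shapes

# Hypothetical usage with three accelerometer channels sampled at 200 Hz
fs = 200.0
acc = np.random.randn(3, 20000)     # placeholder for shake-table output records
f, s1 = fdd_first_singular_value(acc, fs)
```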

Keywords: frequency domain decomposition (FDD), modal parameters, signal processing, stochastic subspace identification (SSI), time domain decomposition

Procedia PDF Downloads 254
21995 Performance Analysis of the First-Order Characteristics of a Polling System Based on the Parallel Limited (K=1) Service Mode

Authors: Liu Yi, Bao Liyong

Abstract:

Aiming at the low efficiency of pipelined scheduling in periodic limited-service polling, this paper proposes a system service resource scheduling strategy with parallel optimized limited-service polling control. The paper constructs the polling queueing system and its mathematical model; the first-order and second-order characteristic parameter equations are obtained by partial differentiation of the probability generating function of the system state variables, and complete analytical expressions for each system parameter are deduced after solving the equations jointly. The simulation results are consistent with the theoretically calculated values. The system performance analysis shows that the mean queue length and the average cycle time of the system are greatly improved, so the scheme can better meet the service demands of delay-sensitive data in dense data environments.

Keywords: polling, parallel scheduling, mean queue length, average cycle time

Procedia PDF Downloads 24
21994 Signal Processing Techniques for Adaptive Beamforming with Robustness

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

Adaptive beamforming using an antenna array of sensors is useful in the process of adaptively detecting and preserving the presence of the desired signal while suppressing the interference and the background noise. For conventional adaptive array beamforming, we require prior information on either the impinging direction or the waveform of the desired signal to adapt the weights. The adaptive weights of an antenna array beamformer under a steered-beam constraint are calculated by minimizing the output power of the beamformer subject to the constraint that forces the beamformer to make a constant response in the steering direction. Hence, the performance of the beamformer is very sensitive to the accuracy of the steering operation. In the literature, it is well known that the performance of an adaptive beamformer will be deteriorated by any steering angle error encountered in many practical applications, e.g., wireless communication systems with massive antennas deployed at the base station and user equipment. Hence, developing effective signal processing techniques to deal with the problem due to steering angle error in array beamforming systems has become an important research topic. In this paper, we present an effective signal processing technique for constructing an adaptive beamformer that is robust against steering angle error. The proposed array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. Based on the presumed steering vector and a preset angle range for steering mismatch tolerance, we first create a matrix related to the direction vectors of the signal sources. Two projection matrices are generated from this matrix. The projection matrix associated with the desired signal information and the received array data are utilized to iteratively estimate the actual direction vector of the desired signal. The estimated direction vector of the desired signal is then used to appropriately find the quiescent weight vector. The other projection matrix is set to be the signal blocking matrix required for performing adaptive beamforming. Accordingly, the proposed beamformer consists of adaptive quiescent weights and partially adaptive weights. Several computer simulation examples are provided for evaluating and comparing the proposed technique with the existing robust techniques.
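
For context, the sketch below computes conventional minimum-variance (steered-beam constrained) weights from a presumed steering vector, which is exactly the setting in which a steering angle error degrades performance; the iterative direction estimation and the two projection matrices of the proposed method are not reproduced, and the array geometry, mismatch, and loading factor are hypothetical.

```python
import numpy as np

def ula_steering_vector(n_elements, theta_deg, d_over_lambda=0.5):
    """Steering vector of a uniform linear array for a plane wave from theta."""
    n = np.arange(n_elements)
    phase = 2j * np.pi * d_over_lambda * n * np.sin(np.deg2rad(theta_deg))
    return np.exp(phase)

def mvdr_weights(snapshots, steering_vector, diag_load=1e-3):
    """Minimum-variance weights under the unit-gain constraint in the presumed
    steering direction: w = R^{-1} a / (a^H R^{-1} a)."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    R += diag_load * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
    Ri_a = np.linalg.solve(R, steering_vector)
    return Ri_a / (steering_vector.conj() @ Ri_a)

# Hypothetical usage: a 10-element array steered with a small pointing error
rng = np.random.default_rng(1)
a_true = ula_steering_vector(10, 20.0)
a_presumed = ula_steering_vector(10, 22.0)        # 2-degree steering error
X = np.outer(a_true, rng.standard_normal(500)) + 0.1 * (
    rng.standard_normal((10, 500)) + 1j * rng.standard_normal((10, 500)))
w = mvdr_weights(X, a_presumed)
print(abs(w.conj() @ a_true))    # gain toward the actual signal direction
```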

Keywords: adaptive beamforming, robustness, signal blocking, steering angle error

Procedia PDF Downloads 113
21993 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization

Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon

Abstract:

The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on the linear transformation of the training set of signal waveforms using the Principal Component Analysis (PCA) decomposition. Besides the advantage of including the additional information from training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayes theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built out of a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveform, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from the four voltage levels to the recovery of the signal waveform, the spatial resolution is improved to 0.94 cm. Moreover, the obtained result is only slightly worse than the one evaluated using the original raw signal; the spatial resolution calculated under these conditions is equal to 0.93 cm. This is very important information since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction of the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.
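
A minimal sketch of Tikhonov-regularized recovery with a PCA prior, assuming hypothetical training pulses, sample times, and a regularization constant; it mirrors the structure described above (PCA basis from training waveforms, explicit regularized least-squares solution) but is not the J-PET reconstruction code.

```python
import numpy as np

def tikhonov_recover(samples, sample_times, t_grid, train_waveforms,
                     n_components=8, lam=1e-2):
    """Recover a full waveform from a few (time, value) samples. The waveform is
    modelled as mean + Phi @ c, where Phi holds the leading PCA components of the
    training waveforms, and c solves min ||A c - y||^2 + lam ||c||^2 explicitly."""
    mean = train_waveforms.mean(axis=0)
    U, s, Vt = np.linalg.svd(train_waveforms - mean, full_matrices=False)
    Phi = Vt[:n_components].T                      # PCA basis (time x components)

    idx = np.searchsorted(t_grid, sample_times)    # rows of Phi seen by the sampler
    A = Phi[idx]
    y = samples - mean[idx]
    c = np.linalg.solve(A.T @ A + lam * np.eye(n_components), A.T @ y)
    return mean + Phi @ c

# Hypothetical usage: training pulses and one pulse sampled at eight time points
t_grid = np.linspace(0, 20, 200)                   # ns
train = np.array([np.exp(-(t_grid - 5 - d) ** 2 / 2.0) for d in np.linspace(-1, 1, 50)])
truth = np.exp(-(t_grid - 5.3) ** 2 / 2.0)
sample_times = np.array([3.0, 3.5, 4.0, 4.5, 6.0, 6.5, 7.0, 7.5])
recovered = tikhonov_recover(truth[np.searchsorted(t_grid, sample_times)],
                             sample_times, t_grid, train)
print(np.max(np.abs(recovered - truth)))           # residual, expected to be small
```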

Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization

Procedia PDF Downloads 434
21992 Inter-Annual Variations of Sea Surface Temperature in the Arabian Sea

Authors: K. S. Sreejith, C. Shaji

Abstract:

Though both the Arabian Sea and its counterpart, the Bay of Bengal, are forced primarily by the semi-annually reversing monsoons, the spatio-temporal variation of surface waters is much stronger in the Arabian Sea than in the Bay of Bengal. This study focuses on the inter-annual variability of Sea Surface Temperature (SST) in the Arabian Sea by analysing the ERSST dataset, which covers 152 years of SST (January 1854 to December 2002) based on the ICOADS in situ observations. To capture the dominant SST oscillations and to understand the inter-annual SST variations in various local regions of the Arabian Sea, wavelet analysis was performed on this long time-series SST dataset. This tool is advantageous over other signal analysis tools such as Fourier analysis because it unfolds a time series (signal) in both the frequency and time domains. This technique makes it easier to determine the dominant modes of variability and to explain how those modes vary in time. The analysis revealed that pentadal SST oscillations predominate in most of the analysed local regions of the Arabian Sea. From the time information of the wavelet analysis, it was interpreted that these cold and warm events of large amplitude occurred during the periods 1870-1890, 1890-1910, 1930-1950, 1980-1990 and 1990-2005. SST oscillations with peaks having a period of ~2-4 years were found to be significant in the central and eastern regions of the Arabian Sea. This indicates that the inter-annual SST variation in the Indian Ocean is affected by El Niño-Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) events.
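
A minimal wavelet-analysis sketch using the Morlet wavelet from PyWavelets (an assumed dependency) on a synthetic monthly anomaly series; a real analysis would feed in the area-averaged ERSST time series for the Arabian Sea sub-region of interest.

```python
import numpy as np
import pywt

# Hypothetical monthly SST anomaly series (deg C) containing a pentadal and an
# ENSO-band (~2-4 yr) oscillation; a placeholder for an ERSST regional average.
months = np.arange(152 * 12)
years = months / 12.0
sst_anom = (0.4 * np.sin(2 * np.pi * years / 5.0)
            + 0.3 * np.sin(2 * np.pi * years / 3.0)
            + 0.1 * np.random.randn(months.size))

# Continuous wavelet transform with the Morlet wavelet; scales chosen to cover
# periods from roughly 1 to 20 years for monthly data.
scales = np.arange(8, 200)
coef, freqs = pywt.cwt(sst_anom, scales, 'morl', sampling_period=1.0 / 12.0)
periods_years = 1.0 / freqs
power = np.abs(coef) ** 2

# Global wavelet spectrum: peaks near 5 yr and 2-4 yr identify the dominant modes.
global_spectrum = power.mean(axis=1)
print(periods_years[np.argmax(global_spectrum)])
```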

Keywords: Arabian Sea, ICOADS, inter-annual variation, pentadal oscillation, SST, wavelet analysis

Procedia PDF Downloads 268
21991 ARIMA-GARCH, a Statistical Model for Epileptic Seizure Prediction

Authors: Salman Mohamadi, Seyed Mohammad Ali Tayaranian Hosseini, Hamidreza Amindavar

Abstract:

In this paper, we provide a procedure to analyze and model the EEG (electroencephalogram) signal as a time series using ARIMA-GARCH to predict an epileptic attack. The heteroskedasticity of the EEG signal is examined through the ARCH or GARCH (autoregressive conditional heteroskedasticity, generalized autoregressive conditional heteroskedasticity) test. The best ARIMA-GARCH model in the AIC sense is utilized to measure the volatility of the EEG from epileptic canine subjects and to forecast the future values of the EEG. An ARIMA-only model can perform prediction, but an ARCH or GARCH model acting on the residuals of the ARIMA attains a considerably improved forecast horizon. First, we estimate the best ARIMA model; then different orders of ARCH and GARCH models are surveyed to determine the best heteroskedastic model of the residuals of that ARIMA. Using the simulated conditional variance of the selected ARCH or GARCH model, we suggest a procedure to predict oncoming seizures. The results indicate that GARCH modeling captures the dynamic changes of variance well before the onset of a seizure. It can be inferred that the prediction capability comes from the ability of the combined ARIMA-GARCH modeling to cover the heteroskedastic nature of EEG signal changes.
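
A minimal ARIMA-then-GARCH pipeline, assuming the statsmodels and arch packages are available; the ARIMA order, the GARCH(1,1) choice, and the placeholder EEG segment are illustrative, whereas in the study the orders are selected by minimising the AIC.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

def fit_arima_garch(eeg, arima_order=(4, 0, 1)):
    """Fit an ARIMA model to an EEG segment, then a GARCH(1,1) model to its
    residuals, and return the one-step-ahead conditional variance forecast."""
    arima_res = ARIMA(eeg, order=arima_order).fit()
    resid = arima_res.resid

    garch_res = arch_model(resid, vol='GARCH', p=1, q=1, mean='Zero').fit(disp='off')
    forecast = garch_res.forecast(horizon=1)
    return arima_res, garch_res, float(forecast.variance.values[-1, 0])

# Hypothetical usage: a rising conditional-variance forecast over successive
# windows could serve as the pre-seizure indicator described above.
eeg = np.random.randn(2000)            # placeholder for a canine EEG segment
_, _, next_var = fit_arima_garch(eeg)
print(next_var)
```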

Keywords: epileptic seizure prediction, ARIMA, ARCH and GARCH modeling, heteroskedasticity, EEG

Procedia PDF Downloads 396
21990 FRATSAN: A New Software for Fractal Analysis of Signals

Authors: Hamidreza Namazi

Abstract:

Fractal analysis assesses the fractal characteristics of data. It consists of several methods for assigning fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including natural geometric objects, sound, market fluctuations, heart rates, digital images, molecular motion, networks, etc. Fractal analysis is now widely used in all areas of science. An important limitation of fractal analysis is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; rather, other essential characteristics have to be considered. For this purpose, a Visual C++ based software package called FRATSAN (FRActal Time Series ANalyser) was developed which extracts information from signals through three measures. These measures are the fractal dimension, Jeffrey's measure and the Hurst exponent. After computing these measures, the software plots the graphs for each measure. Besides computing the three measures, the software can also classify whether or not the signal is fractal. In fact, the software uses a dynamic method of analysis for all the measures. A sliding window is selected with a length equal to 10% of the total number of data entries. This sliding window is moved one data entry at a time to obtain all the measures. This makes the computation very sensitive to slight changes in the data, thereby giving the user an acute analysis of the data. In order to test the performance of this software, a set of EEG signals was given as input and the results were computed and plotted. This software is useful not only for fundamental fractal analysis of signals but can also be used for other purposes. For instance, by analyzing the Hurst exponent plot of a given EEG signal in patients with epilepsy, the onset of seizure can be predicted by noticing sudden changes in the plot.
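
A rescaled-range (R/S) estimate of the Hurst exponent applied in a sliding window of 10% of the data, mirroring the dynamic analysis described above; the lag set and window floor are assumptions, and this is not the FRATSAN implementation.

```python
import numpy as np

def hurst_rs(x):
    """Rescaled-range (R/S) estimate of the Hurst exponent for one window."""
    x = np.asarray(x, dtype=float)
    lags = [n for n in (8, 16, 32, 64, 128) if n <= len(x) // 2]
    rs = []
    for n in lags:
        chunks = x[:len(x) // n * n].reshape(-1, n)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        R = dev.max(axis=1) - dev.min(axis=1)
        S = chunks.std(axis=1) + 1e-12
        rs.append(np.mean(R / S))
    slope, _ = np.polyfit(np.log(lags), np.log(rs), 1)
    return slope            # H ≈ 0.5 for an uncorrelated series, > 0.5 for persistent ones

def sliding_hurst(signal, window_fraction=0.1):
    """Hurst exponent in a sliding window of 10% of the data, moved one entry at a time."""
    w = max(int(len(signal) * window_fraction), 64)
    return np.array([hurst_rs(signal[i:i + w]) for i in range(len(signal) - w + 1)])

eeg = np.random.randn(3000)     # placeholder for an EEG time series
h = sliding_hurst(eeg)
print(h.mean())
```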

Keywords: EEG signals, fractal analysis, fractal dimension, hurst exponent, Jeffrey’s measure

Procedia PDF Downloads 453
21989 Human Gesture Recognition for Real-Time Control of Humanoid Robot

Authors: S. Aswath, Chinmaya Krishna Tilak, Amal Suresh, Ganesh Udupa

Abstract:

There are many technologies for controlling a humanoid robot. Among them, the use of electromyogram (EMG) electrodes has its own importance in setting up the control system. An EMG based control system helps to control robotic devices with more fidelity and precision. In this paper, the development of an electromyogram based interface for human gesture recognition for the control of a humanoid robot is presented. To recognize control signs in the gestures, a single-channel EMG sensor is positioned on the muscles of the human body. Instead of using a remote control unit, the humanoid robot is controlled by various gestures performed by the human. The EMG electrodes attached to the muscles generate an analog signal due to the nerve impulses generated in the moving muscles of the human being. The analog signals taken from the muscles are supplied to a differential muscle sensor that processes the given signal to generate a signal suitable for the microcontroller to gain control over a humanoid robot. The signal from the differential muscle sensor is converted to a digital form using the ADC of the microcontroller, which outputs its decision to the CM-530 humanoid robot controller through a Zigbee wireless interface. The output decision of the CM-530 processor is sent to a motor driver in order to control the servo motors in the required direction for human-like actions. This method of gaining control of a humanoid robot could be used for performing actions with more accuracy and ease. In addition, a study has been conducted to investigate the controllability and ease of use of the interface and the employed gestures.

Keywords: electromyogram, gesture, muscle sensor, humanoid robot, microcontroller, Zigbee

Procedia PDF Downloads 396
21988 A Comparative Analysis of Grade Weighted Average and Comprehensive Examination Result of Non Board Passers and Board Passers

Authors: Rob Gesley Capistrano, Jasper James Isaac, Rose Mae Moralda, Therese Anne Peleo, Danica Rillo, Maria Virginia Santillian

Abstract:

One of the valuable indicators of intelligence among individuals is academic background, specifically the Grade Weighted Average and the result of the Comprehensive Examination. The general objective of this study is to determine whether there is a significant difference between the Grade Weighted Average and the Comprehensive Examination Result of Psychometrician board passers and non-board passers. The respondents of this study were composed of board passers and non-board passers. The researchers used a purposive sampling technique. An independent-samples t-test was used to compare the Grade Weighted Average and Comprehensive Examination Result of board passers and non-board passers. The results showed no significant difference in the Grade Weighted Average between board passers and non-board passers, although the averages showed a minimal variation. The Comprehensive Examination Results of board passers and non-board passers, however, revealed a significant difference. The comprehensive examination tests the overall knowledge of an individual; those who are more proficient are likely to obtain higher scores. The result of the comprehensive examination had an impact on the passing performance in the board examination.
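
The comparison described reduces to two independent-samples t-tests; a short sketch with SciPy on hypothetical score arrays:

```python
import numpy as np
from scipy import stats

# Hypothetical data: GWA and comprehensive-exam scores of board passers vs non-passers
gwa_passers = np.array([1.8, 1.9, 2.0, 1.7, 1.85, 1.95])
gwa_nonpassers = np.array([1.9, 2.0, 2.1, 1.95, 2.05, 1.85])
exam_passers = np.array([82, 85, 88, 80, 86, 84])
exam_nonpassers = np.array([70, 74, 72, 68, 75, 71])

# Independent-samples t-tests (Welch's variant, not assuming equal variances)
t_gwa, p_gwa = stats.ttest_ind(gwa_passers, gwa_nonpassers, equal_var=False)
t_exam, p_exam = stats.ttest_ind(exam_passers, exam_nonpassers, equal_var=False)
print(f"GWA:  t = {t_gwa:.2f}, p = {p_gwa:.3f}")   # the study reports no significant GWA difference
print(f"Exam: t = {t_exam:.2f}, p = {p_exam:.3f}") # the study reports a significant exam difference
```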

Keywords: board passers, comprehensive examination result, grade weighted average, non board passers

Procedia PDF Downloads 168
21987 2D Point Clouds Features from Radar for Helicopter Classification

Authors: Danilo Habermann, Aleksander Medella, Carla Cremon, Yusef Caceres

Abstract:

This paper analyzes the ability of 2D point cloud features to classify different models of helicopters using radar. This method does not need to estimate the blade length, the number of blades of the helicopters, or the period of their micro-Doppler signatures. It is also not necessary to generate spectrograms (or any other image based on the time and frequency domains). This work transforms a radar return signal into a 2D point cloud and extracts features from it. Three classifiers are used to distinguish 9 different helicopter models in order to analyze the performance of the features used in this work. The high accuracy obtained with each of the classifiers demonstrates that 2D point cloud features are very useful for classifying helicopters from the radar signal.

Keywords: helicopter classification, point cloud features, radar, supervised classifiers

Procedia PDF Downloads 208
21986 Determining Coordinates of Ultra-Light Drones Based on the Time Difference of Arrival (TDOA) Method

Authors: Nguyen Huy Hoang, Do Thanh Quan, Tran Vu Kien

Abstract:

The use of active radar to measure the coordinates of ultra-light drones is frequently difficult due to long distances, extremely small radar cross-sections (RCS), and obstacles. Since ultra-light drones are usually controlled by radio frequency (RF) signals, the paper proposes a method to measure the coordinates of ultra-light drones in space based on the arrival times of the signal at the receiving antennas and the time difference of arrival (TDOA). The experimental results demonstrate that the proposed method is very promising and highly accurate.
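
A sketch of the TDOA position solve, assuming known receiver coordinates and noiseless time differences, using nonlinear least squares from SciPy; the geometry and initial guess are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8                                 # propagation speed of the RF signal (m/s)

def tdoa_locate(receivers, tdoas, x0):
    """Estimate emitter coordinates from time differences of arrival measured
    with respect to the first receiver: ||p - r_i|| - ||p - r_0|| = c * tdoa_i."""
    r0 = receivers[0]
    def residuals(p):
        d0 = np.linalg.norm(p - r0)
        return [np.linalg.norm(p - ri) - d0 - C * dt
                for ri, dt in zip(receivers[1:], tdoas)]
    return least_squares(residuals, x0).x

# Hypothetical geometry: four ground antennas and a drone at 'truth'
receivers = np.array([[0, 0, 0], [500, 0, 0], [0, 500, 0], [500, 500, 30]], float)
truth = np.array([220.0, 310.0, 120.0])
tdoas = [(np.linalg.norm(truth - r) - np.linalg.norm(truth - receivers[0])) / C
         for r in receivers[1:]]
print(tdoa_locate(receivers, tdoas, x0=np.array([250.0, 250.0, 50.0])))
```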

Keywords: ultra-light drone, TDOA, radar cross-section (RCS), RF

Procedia PDF Downloads 186
21985 Time-Series Load Data Analysis for User Power Profiling

Authors: Mahdi Daghmhehci Firoozjaei, Minchang Kim, Dima Alhadidi

Abstract:

In this paper, we present a power profiling model for smart grid consumers based on real-time load data acquired by smart meters. It profiles consumers' power consumption behaviour using the dynamic time warping (DTW) clustering algorithm. Due to the warping invariance of this algorithm, time-misaligned load data can be profiled and consumption features extracted. Two load types are defined, and the related load patterns are extracted for classifying consumption behaviour by DTW. The classification methodology is discussed in detail. To evaluate the performance of the method, we analyze time-series load data measured by a smart meter in a real case. The results verify the effectiveness of the proposed profiling method, with a 90.91% true positive rate for load type clustering in the best case.
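
A minimal DTW distance by dynamic programming, the dissimilarity on which the clustering of load curves is based; the synthetic daily profiles below are hypothetical, and the full DTW clustering and load-type classification pipeline is not reproduced.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two load curves. The warping makes
    the measure insensitive to shifts in time, so time-misaligned daily profiles
    with the same shape are still considered similar."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical daily profiles sampled every 30 minutes (48 points)
t = np.arange(48)
evening_peak = np.exp(-0.5 * ((t - 38) / 3.0) ** 2)
shifted_peak = np.exp(-0.5 * ((t - 35) / 3.0) ** 2)   # same behaviour, earlier in time
morning_peak = np.exp(-0.5 * ((t - 14) / 3.0) ** 2)
print(dtw_distance(evening_peak, shifted_peak))   # small: same load type
print(dtw_distance(evening_peak, morning_peak))   # large: different load type
```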

Keywords: power profiling, user privacy, dynamic time warping, smart grid

Procedia PDF Downloads 133
21984 The Effect of Cow Reproductive Traits on Lifetime Productivity and Longevity

Authors: Lāsma Cielava, Daina Jonkus, Līga Paura

Abstract:

The age at first calving (AFC) is one of the most important factors that have a significant impact on cow productivity in different lactations and over the whole life. A delayed AFC leads to reduced reproductive performance and is one of the main reasons for reduced longevity. Cows that calved in the period 2001-2007 and finished at least four lactations in this time were included in the database. Data were obtained from 68841 crossbred Holstein Black and White (HM), crossbred Latvian Brown (LB), and Latvian Brown genetic resources (LBGR) cows. Cows were distributed into four groups depending on the age at first calving. The longest lifespan was observed for LBGR cows, but they were also characterized by the lowest lifetime milk yield and milk yield per life day. HM breed cows had the shortest lifespan, but in a lifespan of 2862.2 days they produced on average 37916.4 kg of milk, corresponding to 13.2 kg of milk per life day. HM breed cows were also characterized by longer calving intervals (CI) in the first four lactations, while LBGR cows had the shortest CI in the study group. Age at first calving significantly affected the length of the CI in different lactations (p<0.05). HM cows that calved for the first time at >30 months of age had the longest CI of all study groups in the fourth lactation (421.4 days). The LBGR cows were characterized by the shortest CI, but there was a slight increase in the second and third lactations. Age at first calving had a significant impact on the cows' age at each calving. The analysis showed that cows with an age at first calving of <24 months (on average 580.5 days) were 2156.7 days (5.9 years) old at the time of the fifth calving, whereas cows with an age at first calving of >30 months (932.6 days) were 2560.9 days (7.3 years) old at the time of the fifth calving.

Keywords: age at first calving, calving interval, longevity, milk yield

Procedia PDF Downloads 202
21983 Category-Based Theory of the Optimum Signal Approximation Clarifying the Importance of Parallel Worlds in the Recognition of Humans and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

We mathematically present the basis of a new class of algorithms that treats a historical reason for continuous discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. Given a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are specified, firstly, we introduce the detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that minimizes, at the same time, all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output signals Y(ω) of the matrix operator-filter bank. Further, feedback is introduced into the above approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the concept of category from the field of mathematics is applied to the above optimum signal approximation, and it is shown that the category-based approximation theory applies to the set-theoretic consideration of the recognition of humans. Based on this discussion, it is shown naturally why the narrow perception that tends to create isolation shows an apparent advantage in the short term and, often, why such narrow thinking becomes intimate with discriminatory action in a human group. Throughout these considerations, it is presented that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, conditional optimization

Procedia PDF Downloads 142
21982 Electrocardiogram Signal Denoising Using a Hybrid Technique

Authors: R. Latif, W. Jenkal, A. Toumanari, A. Hatim

Abstract:

This paper presents an efficient method of electrocardiogram signal denoising based on a hybrid approach. Two techniques are brought together to create an efficient denoising process. The first is an Adaptive Dual Threshold Filter (ADTF) and the second is the Discrete Wavelet Transform (DWT). The presented approach is based on three denoising steps: the DWT decomposition, the ADTF step, and the highest-peaks correction step. This paper presents applications of the approach to electrocardiogram signals from the MIT-BIH database. The results of these applications are promising compared to other recently published techniques.
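
A sketch of the DWT decomposition/thresholding/reconstruction chain using PyWavelets (an assumed dependency); a simple universal soft threshold stands in for the paper's ADTF step, and the highest-peaks correction step is omitted.

```python
import numpy as np
import pywt

def dwt_denoise(ecg, wavelet='db4', level=4):
    """DWT decomposition, thresholding of the detail coefficients, and inverse
    DWT reconstruction. A universal soft threshold stands in for the ADTF step."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(ecg)))
    denoised_coeffs = [coeffs[0]] + [
        pywt.threshold(c, threshold, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(denoised_coeffs, wavelet)[:len(ecg)]

# Hypothetical usage on a noisy synthetic ECG-like signal
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.sin(2 * np.pi * 8 * t)
noisy = clean + 0.2 * np.random.randn(t.size)
print(np.std(noisy - clean), np.std(dwt_denoise(noisy) - clean))  # noise before vs after
```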

Keywords: hybrid technique, ADTF, DWT, thresholding, ECG signal

Procedia PDF Downloads 308
21981 Phase II Monitoring of First-Order Autocorrelated General Linear Profiles

Authors: Yihua Wang, Yunru Lai

Abstract:

Statistical process control has been successfully applied in a variety of industries. In some applications, the quality of a process or product is better characterized and summarized by a functional relationship between a response variable and one or more explanatory variables. A collection of this type of data is called a profile. Profile monitoring is used to understand and check the stability of this relationship or curve over time. The independence assumption for the error term is commonly used in existing profile monitoring studies. However, in many applications, the profile data show correlations over time. Therefore, in this study we focus on a general linear regression model with a first-order autocorrelation between profiles. We propose an exponentially weighted moving average charting scheme to monitor this type of profile. The simulation study shows that our proposed methods outperform the existing schemes based on the average run length criterion.
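
A minimal EWMA charting sketch for a sequence of standardized profile-deviation statistics; the smoothing constant, control-limit multiplier, and simulated shift are hypothetical design choices, not the paper's chart design.

```python
import numpy as np

def ewma_chart(stats, lam=0.2, L=3.0):
    """EWMA control chart for a sequence of (approximately standardized) profile
    monitoring statistics. Returns the EWMA values, time-varying control limits,
    and a flag per sample indicating an out-of-control signal."""
    z = np.zeros(len(stats))
    limits = np.zeros(len(stats))
    prev = 0.0                                   # in-control target of 0
    for t, e in enumerate(stats):
        prev = lam * e + (1.0 - lam) * prev
        z[t] = prev
        limits[t] = L * np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * (t + 1))))
    signals = np.abs(z) > limits
    return z, limits, signals

# Hypothetical usage: a shift in the profile intercept appears after sample 30
rng = np.random.default_rng(2)
stats = np.concatenate([rng.standard_normal(30), 1.5 + rng.standard_normal(20)])
z, limits, signals = ewma_chart(stats)
print(np.argmax(signals) if signals.any() else "no signal")
```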

Keywords: autocorrelation, EWMA control chart, general linear regression model, profile monitoring

Procedia PDF Downloads 445
21980 Time-Domain Analysis of Pulse Parameters Effects on Crosstalk in High-Speed Circuits

Authors: Loubna Tani, Nabih Elouzzani

Abstract:

Crosstalk among interconnects and printed-circuit board (PCB) traces is a major limiting factor of signal quality in high-speed digital and communication equipment, especially when fast data buses are involved. Such a bus is considered as a planar multiconductor transmission line. This paper demonstrates how the finite difference time domain (FDTD) method provides an exact solution of the transmission-line equations to analyze near-end and far-end crosstalk. In addition, this study makes it possible to analyze the effect of rise time on the near-end and far-end voltages of the victim conductor. The paper also discusses a statistical analysis based upon a set of several simulations. Such analysis leads to a better understanding of the phenomenon and yields useful information.

Keywords: multiconductor transmission line, crosstalk, finite difference time domain (FDTD), printed-circuit board (PCB), rise time, statistical analysis

Procedia PDF Downloads 422
21979 Efficient Filtering of Graph Based Data Using Graph Partitioning

Authors: Nileshkumar Vaishnav, Aditya Tatu

Abstract:

An algebraic framework for processing graph signals axiomatically designates the graph adjacency matrix as the shift operator. In this setup, we often encounter a problem wherein we know the filtered output and the filter coefficients and need to find the input graph signal. Solving this problem with the direct approach requires O(N³) operations, where N is the number of vertices in the graph. In this paper, we adapt the spectral graph partitioning method for partitioning graphs and use it to reduce the computational cost of the filtering problem. We use the example of denoising temperature data to illustrate the efficacy of the approach.
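
For reference, the direct inverse-filtering problem reads as follows: given the filter coefficients hₖ and the output y = (Σₖ hₖ Aᵏ) x, recover x by solving a dense N×N system, which is the O(N³) cost the partitioning approach aims to reduce. The graph, filter taps, and sizes in this sketch are hypothetical, and the spectral-partitioning acceleration itself is not reproduced.

```python
import numpy as np

def polynomial_graph_filter(A, h):
    """Graph filter H = h[0] I + h[1] A + h[2] A^2 + ... with the adjacency
    matrix A as the shift operator."""
    H = np.zeros_like(A, dtype=float)
    Ak = np.eye(A.shape[0])
    for coeff in h:
        H += coeff * Ak
        Ak = Ak @ A
    return H

# Hypothetical small graph: a ring of N vertices with a known 3-tap filter
N = 50
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
h = [1.0, 0.5, 0.25]

x_true = np.random.randn(N)                  # unknown input graph signal
H = polynomial_graph_filter(A, h)
y = H @ x_true                               # known filtered output

# Direct approach: O(N^3) solve of H x = y. Spectral partitioning of the graph
# aims to break this into smaller, nearly independent blocks.
x_recovered = np.linalg.solve(H, y)
print(np.allclose(x_recovered, x_true))
```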

Keywords: graph signal processing, graph partitioning, inverse filtering on graphs, algebraic signal processing

Procedia PDF Downloads 299