Search results for: approximate tandem repeats
Paper Count: 281

221 Wavelet-Based Classification of Myocardial Ischemia, Arrhythmia, Congestive Heart Failure and Sleep Apnea

Authors: Santanu Chattopadhyay, Gautam Sarkar, Arabinda Das

Abstract:

This paper presents wavelet-based classification of various heart diseases. Electrocardiogram signals of different heart patients have been studied, and their statistical nature has been compared with that of electrocardiograms of normal persons. Four heart diseases have been considered: Myocardial Ischemia (MI), Congestive Heart Failure (CHF), Arrhythmia and Sleep Apnea. The statistical nature of the electrocardiograms in each case is characterized by the kurtosis of two types of wavelet coefficients, approximate and detail, computed for decomposition levels one to nine. Based on significant differences, a few decomposition levels have been chosen and then used for classification.
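
A minimal sketch of the feature-extraction step described above, assuming Python with PyWavelets and SciPy; the wavelet family, the synthetic signal and the downstream classifier are placeholders, not the paper's exact choices:

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def wavelet_kurtosis_features(ecg, wavelet="db4", levels=9):
    """Kurtosis of the approximate and detail coefficients at decomposition levels 1..levels."""
    signal = np.asarray(ecg, dtype=float)
    features = {}
    for level in range(1, levels + 1):
        # wavedec returns [cA_level, cD_level, cD_level-1, ..., cD_1]
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        features[f"approx_L{level}"] = kurtosis(coeffs[0])
        features[f"detail_L{level}"] = kurtosis(coeffs[1])
    return features

# Toy usage with a synthetic signal standing in for an ECG record.
ecg = np.sin(np.linspace(0, 60 * np.pi, 4096)) + 0.1 * np.random.randn(4096)
features = wavelet_kurtosis_features(ecg)
print({k: round(v, 2) for k, v in features.items()})
```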

Keywords: Arrhythmia, congestive heart failure, discrete wavelet transform, electrocardiogram, myocardial ischemia, sleep apnea.

220 Adomian Decomposition Method Associated with Boole's Integration Rule for Goursat Problem

Authors: Mohd Agos Salim Nasir, Ros Fadilah Deraman, Siti Salmah Yasiran

Abstract:

The Goursat partial differential equation arises in linear and nonlinear partial differential equations with mixed derivatives. It is a second-order hyperbolic partial differential equation which occurs in various fields of study such as engineering, physics, and applied mathematics. Many approaches have been suggested to approximate the solution of the Goursat partial differential equation; however, the suggested methods have traditionally focused on numerical differentiation, including forward and central differences, in deriving the scheme. An innovation in deriving the Goursat scheme is to involve numerical integration techniques instead. In this paper we develop a new scheme to solve the Goursat partial differential equation based on the Adomian decomposition method (ADM) associated with Boole's integration rule to approximate the integration terms. The new scheme can easily be applied to many linear and nonlinear Goursat partial differential equations and is capable of reducing the size of the computational work. The accuracy of the results reveals the advantage of this new scheme over existing numerical methods.
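
The integration terms in such a scheme are approximated with Boole's rule; a minimal sketch of composite Boole's rule quadrature is given below (the coupling with the Adomian decomposition itself is not reproduced here):

```python
import numpy as np

def booles_rule(f, a, b, n=4):
    """Composite Boole's rule on [a, b]; n is the number of subintervals, a multiple of 4."""
    if n % 4 != 0:
        raise ValueError("n must be a multiple of 4")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    total = 0.0
    for i in range(0, n, 4):
        # Boole's rule over the panel [x_i, x_{i+4}]
        total += (2 * h / 45) * (7 * y[i] + 32 * y[i + 1] + 12 * y[i + 2]
                                 + 32 * y[i + 3] + 7 * y[i + 4])
    return total

# Quick check: the integral of sin over [0, pi] is exactly 2.
print(booles_rule(np.sin, 0.0, np.pi, n=8))
```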

Keywords: Goursat problem, partial differential equation, Adomian decomposition method, Boole's integration rule.

219 Usability and Affordances: Examinations of Object-Naming and Object-Task Performance in Haptic Interfaces

Authors: Mia Sorensen

Abstract:

The introduction of haptic elements into graphical user interfaces is becoming more widespread. Since haptics are being introduced rapidly into computational tools, investigating how these models affect human-computer interaction would help define how to integrate and model new modes of interaction. The interest of this paper is to discuss and investigate the issues surrounding haptic and graphic user interface (GUI) designs as separate systems, as well as to understand how they work in tandem. The development of these systems is explored from a psychological perspective, based on how usability is addressed through learning and affordances, as defined by J. J. Gibson. Haptic design can be a powerful tool, aiding intuitive learning. The problem discussed within the text is how haptic interfaces can be integrated within a GUI without a sense of frivolity. Juxtaposing haptics and graphic user interfaces raises issues of motivation: GUIs tend to follow a performatory process, while haptic interfaces use affordances to learn tool use. In a deeper view, it is noted that two modes of perception, foveal and ambient, dictate perception. These two modes were once thought to work in tandem; however, it has been discovered that they work independently of each other. The foveal mode interprets orientation in space, which provides for posture, locomotion, and motor skills with variations of the sensory information, and this instructs perceptions of object-task performance. It is contended here that object-task performance is a key element in the use of haptic interfaces because exploratory learning uses affordances in order to use an object, without mediating the experience cognitively. It is a direct experience that, through iteration, can lead to skill sets. It is also indicated that object-task performance will not work as efficiently without the use of exploratory or kinesthetic learning practices. Therefore, object-task performance is not explored as congruently in GUIs as it is practiced in haptic interfaces.

Keywords: Affordances, Graphic User Interface, Haptic Interfaces, Tool-Use, Object-Naming, Object-Task Performance.

218 Palmprint Recognition by Wavelet Transform with Competitive Index and PCA

Authors: Deepti Tamrakar, Pritee Khanna

Abstract:

This manuscript presents palmprint recognition by combining different texture extraction approaches with high accuracy. The Region of Interest (ROI) is decomposed into different frequency-time sub-bands by wavelet transform up to two levels, and only the approximation image at level two is selected, known as the Approximate Image ROI (AIROI). This AIROI carries information about the principal lines of the palm. The Competitive Index is used as the palmprint feature: six Gabor filters of different orientations are convolved with the palmprint image to extract orientation information, and a winner-take-all strategy selects the dominant orientation for each pixel, which is known as the Competitive Index. Further, PCA is applied to select highly uncorrelated Competitive Index features, to reduce the dimension of the feature vector, and to project the features onto the eigenspace. The similarity of two palmprints is measured by the Euclidean distance metric. The algorithm is tested on the Hong Kong PolyU palmprint database. AIROIs obtained with different wavelet filter families are also tested with the Competitive Index and PCA. The AIROI of the db7 wavelet filter achieves an Equal Error Rate (EER) of 0.0152% and a Genuine Acceptance Rate (GAR) of 99.67% on the Hong Kong PolyU palm database.
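
A minimal sketch of the Competitive Index idea, assuming a grayscale ROI held as a NumPy array; the Gabor kernel parameters and the winner-take-all rule (maximum response magnitude) are illustrative simplifications, not the exact settings used in the paper:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, sigma=4.0, wavelength=8.0, size=17):
    """Real part of a Gabor filter oriented at angle theta (radians), isotropic envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def competitive_index(roi, n_orientations=6):
    """Winner-take-all orientation index per pixel from the Gabor filter responses."""
    thetas = [k * np.pi / n_orientations for k in range(n_orientations)]
    responses = [convolve2d(roi, gabor_kernel(t), mode="same", boundary="symm")
                 for t in thetas]
    return np.argmax(np.abs(np.stack(responses)), axis=0)

# Toy usage on a synthetic image standing in for the AIROI.
roi = np.random.rand(64, 64)
index_map = competitive_index(roi)        # values in 0..5, one dominant orientation per pixel
feature_vector = index_map.flatten()      # fed to PCA in the pipeline described above
```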

Keywords: DWT, EER, Euclidean Distance, Gabor filter, PCA, ROI.

217 Tidal Data Analysis using ANN

Authors: Ritu Vijay, Rekha Govil

Abstract:

The design of a complete expansion that allows compact representation of certain relevant classes of signals is a central problem in signal processing applications. Achieving such a representation means knowing the signal features for the purposes of denoising, classification, interpolation and forecasting. Multilayer neural networks are a relatively new class of techniques that are mathematically proven to approximate any continuous function arbitrarily well. Radial Basis Function (RBF) networks, which make use of a Gaussian activation function, are also shown to be universal approximators. In this age of ever-increasing digitization in the storage, processing, analysis and communication of information, there are numerous applications where one needs to construct a continuously defined function or numerical algorithm to approximate, represent and reconstruct the given discrete data of a signal. Often one wishes to manipulate the data in a way that requires information not included explicitly in the data, which is done through interpolation and/or extrapolation. Tidal data are a perfect example of a time series, and many statistical techniques have been applied to tidal data analysis and representation; ANN is a recent addition to such techniques. In the present paper we describe the time series representation capabilities of a special type of ANN, the Radial Basis Function network, and present the results of tidal data representation using RBF. Tidal data analysis and representation is one of the important requirements in marine science for forecasting.
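
A minimal sketch of representing a tide-like series with a Gaussian RBF expansion, with the output weights fitted by linear least squares; the synthetic record, number of centres and basis widths are illustrative assumptions, not the tidal data or network configuration used in the paper:

```python
import numpy as np

def rbf_fit(t, y, n_centers=60, width=None):
    """Fit y(t) with Gaussian radial basis functions; output weights by linear least squares."""
    centers = np.linspace(t.min(), t.max(), n_centers)
    if width is None:
        width = (t.max() - t.min()) / n_centers
    phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width**2))
    weights, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return centers, width, weights

def rbf_predict(t, centers, width, weights):
    phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width**2))
    return phi @ weights

# Synthetic "tidal" record: two dominant constituents plus noise.
t = np.linspace(0, 10, 500)
y = 1.2 * np.sin(2 * np.pi * t / 1.03) + 0.4 * np.sin(2 * np.pi * t / 0.52)
y += 0.05 * np.random.randn(t.size)
centers, width, w = rbf_fit(t, y)
rmse = np.sqrt(np.mean((rbf_predict(t, centers, width, w) - y) ** 2))
print(f"in-sample RMSE: {rmse:.3f}")
```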

Keywords: ANN, RBF, Tidal Data.

216 A Proof for Bisection Width of Grids

Authors: Kemal Efe, Gui-Liang Feng

Abstract:

The optimal bisection width of the r-dimensional N × ··· × N grid is known to be N^(r-1) when N is even, but when N is odd, only approximate values are available. This paper shows that the exact bisection width of the grid is (N^r - 1)/(N - 1) when N is odd.
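
For intuition, the odd-N formula can be checked by brute force on a tiny case: for r = 2 and N = 3 it gives (3^2 - 1)/(3 - 1) = 4. A small exhaustive check (illustrative only, not the paper's proof, and feasible only for very small grids):

```python
import itertools

def grid_bisection_width(N, r=2):
    """Exhaustive minimum bisection of the r-dimensional N x ... x N grid (tiny cases only)."""
    nodes = list(itertools.product(range(N), repeat=r))
    edges = [(u, v) for u in nodes for v in nodes
             if u < v and sum(abs(a - b) for a, b in zip(u, v)) == 1]
    half = len(nodes) // 2
    best = len(edges)
    for part in itertools.combinations(nodes, half):
        part = set(part)
        cut = sum((u in part) != (v in part) for u, v in edges)
        best = min(best, cut)
    return best

N, r = 3, 2
print(grid_bisection_width(N, r), "vs formula", (N**r - 1) // (N - 1))   # 4 vs 4
```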

Keywords: Grids, Parallel Architectures, Graph Bisection, VLSI Layouts.

215 The Wavelet-Based DFT: A New Interpretation, Extensions and Applications

Authors: Abdulnasir Hossen, Ulrich Heute

Abstract:

In 1990 [1] the subband-DFT (SB-DFT) technique was proposed. This technique uses Hadamard filters in the decomposition step to split the input sequence into lowpass and highpass sequences. In the next step, either two DFTs are computed on both bands to obtain the full-band DFT, or one DFT on one of the two bands to obtain an approximate DFT; a combination network with correction factors is then applied after the DFTs. Another approach was proposed in 1997 [2], which uses a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of that algorithm, the input sequence is decomposed, in a similar manner to the SB-DFT, into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by implementing pruning at both the input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters; the only difference is a constant factor in the combination network. This result is important for completing the analysis of the W-DFT, since all the results concerning accuracy and approximation errors in the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case, and an application in image transformation is given using two different types of wavelet filters.
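
A minimal numerical illustration of the underlying idea: the full-band DFT recovered from the DFTs of the Haar approximation and detail subbands through a simple combination network. The generic textbook factors are used here, not necessarily the exact constants of [1] or [2]:

```python
import numpy as np

def wdft_fullband(x):
    """Full-band DFT of x computed from the DFTs of its Haar approximation/detail subbands."""
    x = np.asarray(x, dtype=complex)
    N = x.size                                   # N assumed even
    a = (x[0::2] + x[1::2]) / np.sqrt(2)         # Haar lowpass (approximation) sequence
    d = (x[0::2] - x[1::2]) / np.sqrt(2)         # Haar highpass (detail) sequence
    A = np.fft.fft(a)                            # two N/2-point DFTs
    D = np.fft.fft(d)
    k = np.arange(N)
    W = np.exp(-2j * np.pi * k / N)
    # Combination network: X[k] = ((1 + W^k) A[k mod N/2] + (1 - W^k) D[k mod N/2]) / sqrt(2)
    return ((1 + W) * A[k % (N // 2)] + (1 - W) * D[k % (N // 2)]) / np.sqrt(2)

x = np.random.randn(16)
print(np.allclose(wdft_fullband(x), np.fft.fft(x)))   # True
```

Computing only one of the two N/2-point DFTs (e.g. the approximation band, which usually carries most of the energy) yields the fast approximate DFT mentioned above.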

Keywords: Image Transform, Spectral Analysis, Sub-Band DFT, Wavelet DFT.

214 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed in the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in Gold Coast. For comparison, simulation outcomes for the same three catchments obtained with the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement, with the MIKE URBAN results lying within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in predictions can be obtained, whereas MIKE URBAN just provides a point estimate. Based on the results of the analysis, it appears that the developed ABC framework performs well for automatic calibration.
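
A minimal ABC rejection sketch for calibrating two parameters of a toy runoff model; the model, priors, distance and tolerance below are illustrative placeholders, not the time-area model or the MIKE URBAN set-up used in the paper (which was also implemented in R rather than Python):

```python
import numpy as np

rng = np.random.default_rng(0)

def runoff_model(rain, initial_loss, reduction_factor):
    """Toy runoff model: subtract an initial loss, then scale by a runoff coefficient."""
    effective = np.clip(rain - initial_loss, 0.0, None)
    return reduction_factor * effective

# Synthetic "observed" runoff generated from known parameters plus noise.
rain = rng.gamma(2.0, 2.0, size=200)
observed = runoff_model(rain, 1.0, 0.6) + 0.1 * rng.standard_normal(200)

def abc_rejection(n_draws=50_000, tolerance=0.5):
    accepted = []
    for _ in range(n_draws):
        il = rng.uniform(0.0, 3.0)          # prior on initial loss
        rf = rng.uniform(0.1, 1.0)          # prior on reduction factor
        simulated = runoff_model(rain, il, rf)
        distance = np.sqrt(np.mean((simulated - observed) ** 2))   # RMSE as the distance
        if distance < tolerance:            # accept draws whose simulation is close enough
            accepted.append((il, rf))
    return np.array(accepted)

posterior = abc_rejection()
print("accepted draws:", len(posterior))
print("posterior means (initial loss, reduction factor):", posterior.mean(axis=0))
print("95% credible intervals:", np.percentile(posterior, [2.5, 97.5], axis=0))
```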

Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.

213 Surrogate based Evolutionary Algorithm for Design Optimization

Authors: Maumita Bhattacharya

Abstract:

Optimization is often a critical issue for most system design problems. Evolutionary algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, finding optimal solutions to complex, high-dimensional, multimodal problems often requires highly computationally expensive function evaluations and hence is practically prohibitive. The Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model presented in our earlier work [14] reduced computation time by controlled use of meta-models to partially replace the actual function evaluation with approximate function evaluation. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model. Situations like model formation involving variable input dimensions and noisy data certainly cannot be covered by this assumption. In this paper we present an enhanced version of DAFHEA that incorporates a multiple-model based learning approach for the SVM approximator. DAFHEA-II (the enhanced version of the DAFHEA framework) also overcomes the high computational expense involved with the additional clustering requirements of the original DAFHEA framework. The proposed framework has been tested on several benchmark functions, and the empirical results illustrate the advantages of the proposed technique.

Keywords: Evolutionary algorithm, Fitness function, Optimization, Meta-model, Stochastic method.

212 Surface Flattening Assisted with 3D Mannequin Based On Minimum Energy

Authors: Shih-Wen Hsiao, Rong-Qi Chen, Chien-Yu Lin

Abstract:

The topic of surface flattening plays a vital role in the field of computer-aided design and manufacture. Surface flattening enables the production of 2D patterns, and it can be used in design and manufacturing for developing a 3D surface onto a 2D platform, especially in fashion design. This study describes surface flattening based on minimum-energy methods according to the properties of different fabrics. Firstly, through the geometric features of a 3D surface, the less transformed area can be flattened onto a 2D platform by geodesics. Then, the strain energy that has accumulated in the mesh can be stably released by an approximate implicit method and a revised error function. In some cases, cutting the mesh to further release the energy is a common way to fix the situation and enhance the accuracy of the surface flattening, and this makes the obtained 2D pattern naturally generate significant cracks. When this methodology is applied to a 3D mannequin constructed with feature lines, it enhances the level of computer-aided fashion design. Besides, when different fabrics are applied to fashion design, it is necessary to revise the shape of a 2D pattern according to the properties of the fabric. With this model, the outline of 2D patterns can be revised by distributing the strain energy, with different results according to different fabric properties. Finally, this research uses some common design cases to illustrate and verify the feasibility of the methodology.

Keywords: Surface flattening, Strain energy, Minimum energy, approximate implicit method, Fashion design.

211 An Overview of Some High Order and Multi-Level Finite Difference Schemes in Computational Aeroacoustics

Authors: Appanah Rao Appadu, Muhammad Zaid Dauhoo

Abstract:

In this paper, we have combined several spatial derivatives with the optimised time derivative proposed by Tam and Webb in order to approximate the linear advection equation, which is given by ∂u/∂t + ∂f/∂x = 0. The spatial derivatives are as follows: a standard 7-point 6th-order central difference scheme (ST7), a standard 9-point 8th-order central difference scheme (ST9), and optimised schemes designed by Tam and Webb, Lockard et al., Zingg et al., Zhuang and Chen, and Bogey and Bailly. Thus, these seven different spatial derivatives have been coupled with the optimised time derivative to obtain seven different finite-difference schemes to approximate the linear advection equation. We have analysed the variation of the modified wavenumber and the group velocity, both with respect to the exact wavenumber, for each spatial derivative. The problems considered are the 1-D propagation of a boxcar function, the propagation of an initial disturbance consisting of a sine and Gaussian function, and the propagation of a Gaussian profile. It is known that the choice of the cfl number affects the quality of the results in terms of dissipation and dispersion characteristics. Based on the numerical experiments solved and the numerical methods used to approximate the linear advection equation, it is observed in this work that the quality of the results depends on the choice of the cfl number, even for optimised numerical methods. The errors in the numerical results have been quantified into dispersion and dissipation using a technique devised by Takacs. Also, the quantity Exponential Error for Low Dispersion and Low Dissipation (eeldld) has been computed from the numerical results. Moreover, based on this work, it has been found that eeldld can be used as a measure of the total error; in particular, the total error is a minimum when eeldld is a minimum.
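
As an illustration of the modified-wavenumber analysis mentioned above, a short sketch for the standard 7-point 6th-order central difference (ST7); the optimised Tam-Webb time discretisation and the other optimised spatial schemes are not reproduced here:

```python
import numpy as np

# 7-point 6th-order central difference: f'(x_j) ~ (1/h) * sum_m a_m * (f_{j+m} - f_{j-m})
a = {1: 3 / 4, 2: -3 / 20, 3: 1 / 60}

def modified_wavenumber(kh):
    """Effective (modified) wavenumber k*h of the ST7 scheme for an exact wavenumber kh."""
    return 2 * sum(a[m] * np.sin(m * kh) for m in a)

for kh in np.linspace(0.1, np.pi, 7):
    print(f"kh = {kh:5.3f}   k*h = {modified_wavenumber(kh):5.3f}")
# Long waves (small kh) are resolved almost exactly; k*h departs from kh as kh approaches pi,
# which is the dispersive behaviour the optimised schemes above are designed to minimise.
```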

Keywords: Optimised time derivative, dissipation, dispersion, cfl number. Nomenclature: k: time step, h: spatial step, β: advection velocity, r: cfl/Courant number, r = βk/h, θ = wh: exact wavenumber, n: time level, RPE: relative phase error per unit time step, AFM: modulus of amplification factor.

210 An Approximate Lateral-Torsional Buckling Mode Function for Cantilever I-Beams

Authors: H. Ozbasaran

Abstract:

Lateral-torsional buckling is a global buckling mode which should be considered in the design of slender structural members under flexure about their strong axis. It is possible to compute the load which causes lateral-torsional buckling of a beam by finite element analysis; however, closed-form equations are needed in engineering practice for ease of calculation, and these can be obtained by using the energy method. In lateral-torsional buckling applications of the energy method, a proper function for the critical lateral-torsional buckling mode should be chosen, which can be thought of as the variation of the twisting angle along the buckled beam. The accuracy of the results depends on how close the chosen function is to the exact mode. Since the critical lateral-torsional buckling mode of cantilever I-beams varies with material properties, section properties and loading case, the hardest step is to determine a proper mode function in the application of the energy method. This paper presents an approximate function for the critical lateral-torsional buckling mode of doubly symmetric cantilever I-beams. Coefficient matrices are calculated for a concentrated load at the free end, a uniformly distributed load, and a constant moment along the beam. Critical lateral-torsional buckling modes obtained by the presented function and exact solutions are compared. It is found that the modes obtained by the presented function coincide with the differential equation solutions for the considered loading cases.

Keywords: Buckling mode, cantilever, lateral-torsional buckling, I-beam.

209 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)

Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton

Abstract:

Cold-start is a notoriously difficult problem which can occur in recommendation systems, and arises when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm – the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST) – is proposed, which is designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment-projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), which has slower growth of computational cost with data size but requires more data to obtain an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, it is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented, and it is systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.

Keywords: Cold-start, expectation propagation, multi-armed bandits, Thompson sampling, variational inference.

208 Molecular Analysis of Somaclonal Variation in Tissue Culture Derived Bananas Using MSAP and SSR Markers

Authors: Emma K. Sales, Nilda G. Butardo

Abstract:

The project was undertaken to determine the effects of modified tissue culture protocols, e.g. age of culture and hormone levels (2,4-D), in generating somaclonal variation. Moreover, the utility of molecular markers (SSR and MSAP) in sorting off-types/somaclones was investigated.

Results show that somaclonal variation is indeed due to prolonged subculture and high 2,4-D concentration. The resultant variation was observed to be due to a high level of methylation events, specifically cytosine methylation at either the internal or external cytosine, and was identified by methylation-sensitive amplification polymorphism (MSAP). Simple sequence repeats (SSR), on the other hand, were able to associate a marker with a trait of interest.

These results therefore show that molecular markers can be an important tool in sorting out variants/mutants at an early stage.

Keywords: Methylation, MSAP, somaclones, SSR, subculture, 2,4-D.

207 Implementation of Response Surface Methodology in a Small Brown Rice Peeling Machine: Part I

Authors: S. Bangphan, P. Bangphan, T. Boonkang

Abstract:

Response surface methodology (RSM) was employed to study the effects of two factors (rubber clearance and round per minute) in a brown rice peeling machine on the optimal BROKENS yield (19.02, average of three repeats). The optimized composition derived from the RSM regression was analyzed using regression analysis and analysis of variance (ANOVA). At a significance level of α = 0.05, the adjusted regression coefficient R²(adj) was 97.35% and the standard deviation was 1.09513. The independent variables are the initial rubber clearance and round per minute (RPM), and the investigated responses are the final rubber clearance and round per minute. The restriction of the optimization is as designated.

Keywords: Brown rice, Response surface methodology (RSM), Rubber clearance, Round per minute (RPM), Peeling machine.

206 Balancing Neural Trees to Improve Classification Performance

Authors: Asha Rani, Christian Micheloni, Gian Luca Foresti

Abstract:

In this paper, a neural tree (NT) classifier having a simple perceptron at each node is considered. A new concept for building a balanced tree is applied in the learning algorithm of the tree. At each node, if the perceptron classification is inaccurate and unbalanced, the perceptron is replaced by a new one; this separates the training set in such a way that almost equal numbers of patterns fall into each of the classes. Moreover, each perceptron is trained only for the classes which are present at the respective node, ignoring the other classes. Splitting nodes are employed in the neural tree architecture to divide the training set when the current perceptron node repeats the same classification as its parent node. A new error function based on the depth of the tree is introduced to reduce the computational time for training a perceptron. Experiments are performed to check the efficiency, and encouraging results are obtained in terms of accuracy and computational costs.

Keywords: Neural Tree, Pattern Classification, Perceptron, Splitting Nodes.

205 Video Classification by Partitioned Frequency Spectra of Repeating Movements

Authors: Kahraman Ayyildiz, Stefan Conrad

Abstract:

In this paper we present a system for classifying videos by their frequency spectra. Many videos contain activities with repeating movements; sports videos, home improvement videos, and videos showing mechanical motion are some example areas. Motion in these areas usually repeats with a certain main frequency and several side frequencies. Transforming repeating motion to the frequency domain via the FFT reveals these frequencies, and average amplitudes of frequency intervals can be seen as features of the cyclic motion. Hence determining these features can help to classify videos with repeating movements. In this paper we explain how to compute frequency spectra for video clips and how to use them for classification. Our approach utilizes a series of image moments as a function, which is then transformed into its frequency domain.
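
A minimal sketch of this pipeline on a synthetic clip: a first-order image-moment (centroid) series per frame, its FFT, and band-averaged amplitudes as the feature vector. The frame size, band count and toy clip are illustrative assumptions, not the setup used in the paper:

```python
import numpy as np

def centroid_series(frames):
    """First-order image moments (centroid x, y) for each frame of a grayscale clip."""
    ys, xs = np.mgrid[0:frames.shape[1], 0:frames.shape[2]]
    mass = frames.sum(axis=(1, 2))
    cx = (frames * xs).sum(axis=(1, 2)) / mass
    cy = (frames * ys).sum(axis=(1, 2)) / mass
    return cx, cy

def band_features(signal, n_bands=8):
    """Average FFT amplitude in equal-width frequency bands -- the clip's feature vector."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    return np.array([band.mean() for band in np.array_split(spectrum, n_bands)])

# Synthetic clip: a bright blob oscillating horizontally at ~2 Hz, 30 fps, 4 s.
fps, n_frames = 30, 120
frames = np.zeros((n_frames, 64, 64))
for t in range(n_frames):
    x = int(32 + 20 * np.sin(2 * np.pi * 2.0 * t / fps))
    frames[t, 28:36, x - 4:x + 4] = 1.0
cx, _ = centroid_series(frames)
print(band_features(cx))   # the energy concentrates in the band containing ~2 Hz
```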

Keywords: action recognition, frequency feature, motion recognition, repeating movement, video classification

204 On the Maximum Theorem: A Constructive Analysis

Authors: Yasuhito Tanaka

Abstract:

We examine the maximum theorem by Berge from the point of view of Bishop-style constructive mathematics. We will show an approximate version of the maximum theorem and the maximum theorem for functions with sequentially locally at most one maximum.

Keywords: Maximum theorem, Constructive mathematics, Sequentially locally at most one maximum.

203 Local Error Control in the RK5GL3 Method

Authors: J.S.C. Prentice

Abstract:

The RK5GL3 method is a numerical method for solving initial value problems in ordinary differential equations, and is based on a combination of a fifth-order Runge-Kutta method and 3-point Gauss-Legendre quadrature. In this paper we describe an effective local error control algorithm for RK5GL3, which uses local extrapolation with an eighth-order Runge-Kutta method in tandem with RK5GL3, and a Hermite interpolating polynomial for solution estimation at the Gauss-Legendre quadrature nodes.

Keywords: RK5GL3, RKrGLm, Runge-Kutta, Gauss-Legendre, Hermite interpolating polynomial, initial value problem, local error.

202 Moment Generating Functions of Observed Gaps between Hypopnea Using Saddlepoint Approximations

Authors: Nur Zakiah Mohd Saat, Abdul Aziz Jemain

Abstract:

The saddlepoint approximation is one of the tools used to obtain expressions for densities and distribution functions. We approximate the densities of the observed gaps between hypopnea events using the Huzurbazar saddlepoint approximation. We demonstrate the density of a maximum likelihood estimator in exponential families.
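
A minimal sketch of the generic saddlepoint density approximation, applied here to gaps modelled as a Gamma variable (a sum of exponential stages) purely for illustration; the gap model and parameters are assumptions, not those fitted in the paper:

```python
import numpy as np
from scipy.stats import gamma

def saddlepoint_density(x, K, Kpp, s_hat):
    """Saddlepoint density: f(x) ~ sqrt(1/(2*pi*K''(s))) * exp(K(s) - s*x), with K'(s) = x."""
    s = s_hat(x)
    return np.sqrt(1.0 / (2 * np.pi * Kpp(s))) * np.exp(K(s) - s * x)

# Gaps modelled as a sum of n exponential(rate lam) stages, i.e. Gamma(n, scale=1/lam).
lam, n = 2.0, 3
K = lambda s: -n * np.log(1 - s / lam)        # cumulant generating function, valid for s < lam
Kpp = lambda s: n / (lam - s) ** 2            # second derivative of K
s_hat = lambda x: lam - n / x                 # closed-form solution of K'(s) = n/(lam - s) = x

x = np.linspace(0.2, 5.0, 5)
approx = saddlepoint_density(x, K, Kpp, s_hat)
exact = gamma.pdf(x, a=n, scale=1 / lam)
print(np.round(approx / exact, 4))   # a nearly constant ratio close to 1 (the Stirling factor)
```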

Keywords: Exponential, maximum likelihood estimators, observed gap, Saddlepoint approximations.

201 SIMGraph: Simplifying Contig Graph to Improve de Novo Genome Assembly Using Next-generation Sequencing Data

Authors: Chien-Ju Li, Chun-Hui Yu, Chi-Chuan Hwang, Tsunglin Liu, Darby Tien-Hao Chang

Abstract:

De novo genome assembly is always fragmented. Assembly fragmentation is more serious with the popular next-generation sequencing (NGS) data because NGS sequences are shorter than traditional Sanger sequences. As the data throughput of NGS is high, fragmentation in assemblies is usually not the result of missing data. On the contrary, the assembled sequences, called contigs, are often connected to more than one other contig in a complicated manner, leading to the fragmentation. In such a network of connections between contigs, called a contig graph, false connections are inevitable because of repeats and sequencing/assembly errors. Simplifying a contig graph by removing false connections directly improves genome assembly. In this work, we have developed a tool, SIMGraph, to resolve ambiguous connections between contigs using NGS data. Applying SIMGraph to the assembly of a fungus and a fish genome, we resolved 27.6% and 60.3% of ambiguous contig connections, respectively. These results can reduce the experimental effort in resolving contig connections.

Keywords: Contig graph, NGS, de novo assembly, scaffold.

200 An Energy Efficient Digital Baseband for Batteryless Remote Control

Authors: Wei-Da Toh, Yuan Gao, Minkyu Je

Abstract:

In this paper, an energy-efficient digital baseband circuit for a piezoelectric (PE) harvester-powered batteryless remote control system is presented. A pulse-mode PE harvester, which provides energy for only a short duration, is adopted to replace the conventional chemical battery in the wireless remote controller. The transmitter digital baseband repeats the control command transmission once the digital circuit is initiated by the power-on-reset. A power-efficient data frame format is proposed to maximize the transmission repetition time. By using the proposed frame format and the receiver clock and data recovery method, the receiver baseband is able to decode the command even when the received data has 20% error. The proposed transmitter and receiver basebands are implemented on an FPGA, and simulation results are presented.

Keywords: Clock and Data Recovery (CDR), Correlator, Digital Baseband, Gold Code, Power-On-Reset.

199 Approximate Solutions to Large Stein Matrix Equations

Authors: Khalide Jbilou

Abstract:

In the present paper, we propose numerical methods for solving the Stein equation AXC - X - D = 0 where the matrix A is large and sparse. Such problems appear in discrete-time control problems, filtering and image restoration. We consider the case where the matrix D is of full rank and the case where D is factored as a product of two matrices. The proposed methods are Krylov subspace methods based on the block Arnoldi algorithm. We give theoretical results and we report some numerical experiments.
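
For a small dense instance, the Stein equation AXC - X - D = 0 can be solved and checked directly by Kronecker vectorization, using vec(AXC) = (Cᵀ ⊗ A) vec(X); a minimal sketch is given below. The Krylov subspace/block Arnoldi methods proposed in the paper target the large sparse case, which this toy check does not address:

```python
import numpy as np

def solve_stein_dense(A, C, D):
    """Solve A X C - X = D for small dense matrices via Kronecker vectorization."""
    n, p = A.shape[0], C.shape[0]
    # vec(A X C) = (C^T kron A) vec(X), with column-major (Fortran-order) vec stacking.
    M = np.kron(C.T, A) - np.eye(n * p)
    x = np.linalg.solve(M, D.flatten(order="F"))
    return x.reshape((n, p), order="F")

rng = np.random.default_rng(1)
n, p = 6, 4
A = rng.standard_normal((n, n)) / 10     # scaled so that the problem is well conditioned
C = rng.standard_normal((p, p)) / 10
D = rng.standard_normal((n, p))
X = solve_stein_dense(A, C, D)
print("residual norm:", np.linalg.norm(A @ X @ C - X - D))   # ~1e-15
```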

Keywords: IEEEtran, journal, LATEX, paper, template.

198 Wind Energy Resources Assessment and Micrositting on Different Areas of Libya: The Case Study in Darnah

Authors: F. Ahwide, Y. Bouker, K. Hatem

Abstract:

This paper presents long-term wind data analysis in terms of annual and diurnal variations at different areas of Libya. Wind speed and direction data, recorded every ten minutes over a period of at least two years, are used in the analysis. The 'WindPRO' software and an Excel workbook were used for the wind statistics and energy calculations. As for Darnah, the average speeds at 10 m, 20 m and 40 m are 6.57 m/s, 7.18 m/s and 8.09 m/s, respectively. The highest wind speeds are observed in the SSW sector, followed by the S, WNW and NW sectors, while the lowest wind speeds are observed between the N and E sectors. The most frequent wind directions are NW and NNW; hence, wind turbines can be installed against these directions. The most powerful sector is NW (31.3% of the total expected wind energy), followed by SSW (17.9%), NNW (11.5%) and WNW (8.2%).

In the Excel workbook, an estimation of the annual energy yield at the positions of the Derna, Al-Maqrun, Tarhuna and Al-Asaaba meteorological masts has been done, considering a generic 1.65 MW wind turbine (mtORRES, TWT 82-1.65MW) placed at the position of the meteorological mast. Three other turbines have also been tested, giving a reduction of 18% in the net AEP. At 80 m, the estimated energy yield for Derna, Al-Maqrun, Tarhuna and Al-Asaaba is 6.78 GWh or 3390 equivalent hours, 5.80 GWh or 2900 equivalent hours, 4.91 GWh or 2454 equivalent hours and 5.08 GWh or 2541 equivalent hours, respectively. These seem fair values in the context of a possible development of a wind energy project in these areas, considering a value of 2400 equivalent hours as an approximate limit for a wind farm to be economically profitable. Furthermore, an estimation of the annual energy yield at the positions of the Misalatha, Azizyah and Goterria meteorological masts has been done, considering a generic wind turbine of 2 MW. We found that, at 80 m, the estimated energy yield is 3.12 GWh or 1557 equivalent hours, 4.47 GWh or 2235 equivalent hours and 4.07 GWh or 2033 equivalent hours, respectively.

These seem very poor values in the context of a possible development of a wind energy project in these areas, considering a value of 2400 equivalent hours as an approximate limit for a wind farm to be economically profitable. In any case, more data and a detailed wind farm study would be necessary to draw conclusions.
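
For reference, the "equivalent hours" quoted above are simply the annual energy yield divided by the rated turbine power; a quick check against the 2 MW figures (the small differences come from rounding of the GWh values):

```python
# Equivalent (full-load) hours = annual energy yield / rated turbine power.
rated_mw = 2.0
for site, aep_gwh in [("Misalatha", 3.12), ("Azizyah", 4.47), ("Goterria", 4.07)]:
    hours = aep_gwh * 1000 / rated_mw          # GWh -> MWh, divided by MW
    print(f"{site}: {hours:.0f} equivalent hours")
# Prints 1560, 2235 and 2035 h, close to the 1557, 2235 and 2033 h quoted above.
```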

Keywords: Wind turbines, wind data, energy yield, micrositting.

197 A Comprehensive Analysis for Widespread use of Electric Vehicles

Authors: Yu Zhou, Zhaoyang Dong, Xiaomei Zhao

Abstract:

This paper mainly investigates the environmental and economic impacts of the worldwide use of electric vehicles. It can be concluded that governments have good reason to promote the use of electric vehicles. First, the global vehicle population is evaluated with the help of a grey forecasting model, and the amount of oil saved is estimated through approximate calculation. After that, based on game theory, the amount and types of electricity generation needed by electric vehicles are established. Finally, some conclusions on governments' attitudes are drawn.
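
A minimal sketch of the GM(1,1) grey forecasting model typically used for such population projections; the series below is an illustrative stand-in, not the paper's data:

```python
import numpy as np

def gm11_forecast(x0, steps=3):
    """GM(1,1) grey model: fit to the series x0 and forecast `steps` further values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                 # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey development/control coefficients
    k = np.arange(1, x0.size + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response of the whitened equation
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))  # inverse accumulation
    return x0_hat[x0.size - 1:]                        # the forecast part only

# Toy series standing in for an annual vehicle-population record (millions).
history = [680, 705, 731, 760, 792]
print(gm11_forecast(history, steps=3))
```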

Keywords: electric vehicles, grey prediction, game theory

196 The Homotopy Analysis Method for Solving Discontinued Problems Arising in Nanotechnology

Authors: Hassan Saberi-Nik, Mahin Golchaman

Abstract:

This paper applies the homotopy analysis method to a nonlinear differential-difference equation arising in nanotechnology. The continuum hypothesis is invalid on nanoscales, and a differential-difference model is considered as an alternative approach to describing discontinued problems. Comparison of the approximate solution with the exact one reveals that the method is very effective.

Keywords: Homotopy analysis method, differential-difference, nanotechnology.

195 A New Derivative-Free Quasi-Secant Algorithm For Solving Non-Linear Equations

Authors: F. Soleymani, M. Sharifi

Abstract:

Most nonlinear equation solvers either do not always converge or use derivatives of the function to approximate the root of such equations. Here, we give a derivative-free algorithm that guarantees convergence. The proposed two-step method, which is to some extent like the secant method, is accompanied by some numerical examples. The illustrative instances show that the rate of convergence of the proposed algorithm is higher than that of quadratically convergent iterative schemes.
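
For context, the classical secant iteration that the proposed two-step method resembles is x_{n+1} = x_n - f(x_n)(x_n - x_{n-1})/(f(x_n) - f(x_{n-1})); a minimal sketch is shown below (the paper's own convergence-guaranteed variant is not reproduced here):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Classical derivative-free secant iteration for f(x) = 0."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                      # avoid division by zero; classical secant may stall
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# Example: root of x^3 - 2x - 5 = 0 (near 2.0946).
print(secant(lambda x: x**3 - 2 * x - 5, 2.0, 3.0))
```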

Keywords: Non-linear equation, iterative methods, derivative-free, convergence.

194 Students’ Perception of Using Dental e-Models in an Inquiry-Based Curriculum

Authors: Yanqi Yang, Chongshan Liao, Cheuk Hin Ho, Susan Bridges

Abstract:

Aim: To investigate students’ perceptions of using e-models in an inquiry-based curriculum. Approach: 52 second-year dental students completed a pre- and post-test questionnaire relating to their perceptions of e-models and their use in inquiry-based learning. The pre-test occurred prior to any learning with e-models. The follow-up survey was conducted after one year's experience of using e-models. Results: There was no significant difference between the two sets of questionnaires regarding students’ perceptions of the usefulness of e-models and their willingness to use e-models in future inquiry-based learning. Most students preferred using both plaster models and e-models in tandem. Conclusion: Students did not change their attitude towards e-models and most of them agreed or were neutral that e-models are useful in inquiry-based learning. Whilst recognizing the utility of 3D models for learning, students' preference for combining these with solid models has implications for the development of haptic sensibility in an operative discipline.

Keywords: E-models, inquiry-based curriculum, education.

193 Determination of Penicillins Residues in Livestock and Marine Products by LC/MS/MS

Authors: Ji Young Song, Soo Jung Hu, Hyunjin Joo, Joung Boon Hwang, Mi Ok Kim, Shin Jung Kang, Dae Hyun Cho

Abstract:

A multi-residue analysis method for penicillins was developed and validated in bovine muscle, chicken, milk, and flatfish. Detection was based on liquid chromatography tandem mass spectrometry (LC/MS/MS). The developed method was validated for specificity, precision, recovery, and linearity. The analytes were extracted with 80% acetonitrile and cleaned up by a single reversed-phase solid-phase extraction step. Six penicillins presented recoveries higher than 76%, with the exception of amoxicillin (59.7%). Relative standard deviations (RSDs) were not more than 10%. LOQ values ranged from 0.1 to 4.5 µg/kg. The method was applied to 128 real samples. Benzylpenicillin was detected in 15 samples, cloxacillin in 7 samples, and oxacillin in 2 samples, but the detected levels were below the MRLs for penicillins in these samples.

Keywords: Penicillins, livestock product, Multi-residue analysis, LC/MS/MS

192 Comparing Interval Estimators for Reliability in a Dependent Set-up

Authors: Alessandro Barbiero

Abstract:

In this paper some procedures for building confidence intervals for the reliability in stress-strength models are discussed and empirically compared. The particular case of a bivariate normal setup is considered. The confidence intervals suggested are obtained employing approximations or asymptotic properties of maximum likelihood estimators. The coverage and the precision of these intervals are empirically checked through a simulation study. An application to real paired data is also provided.
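
In the bivariate normal set-up, the stress-strength reliability is R = P(X < Y) = Φ((μ_Y − μ_X) / sqrt(σ_X² + σ_Y² − 2ρσ_Xσ_Y)). A minimal Monte Carlo sketch of checking the empirical coverage of one interval type (a percentile bootstrap around the plug-in MLE); this is an illustration of the simulation idea, not the specific approximate or asymptotic intervals compared in the paper:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def reliability(mx, my, sx, sy, rho):
    """R = P(X < Y) for a bivariate normal pair (X, Y)."""
    return norm.cdf((my - mx) / np.sqrt(sx**2 + sy**2 - 2 * rho * sx * sy))

def plug_in(x, y):
    rho = np.corrcoef(x, y)[0, 1]
    return reliability(x.mean(), y.mean(), x.std(ddof=1), y.std(ddof=1), rho)

def bootstrap_ci(x, y, n_boot=400, level=0.95):
    """Percentile bootstrap interval, resampling pairs to preserve the dependence."""
    n = x.size
    boot = [plug_in(x[idx], y[idx]) for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.quantile(boot, [(1 - level) / 2, (1 + level) / 2])

# Empirical coverage check under known parameters.
mx, my, sx, sy, rho = 0.0, 0.5, 1.0, 1.2, 0.4
true_R = reliability(mx, my, sx, sy, rho)
cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
hits, n_rep = 0, 200
for _ in range(n_rep):
    data = rng.multivariate_normal([mx, my], cov, size=50)
    lo, hi = bootstrap_ci(data[:, 0], data[:, 1])
    hits += lo <= true_R <= hi
print(f"true R = {true_R:.3f}, empirical coverage = {hits / n_rep:.2f}")
```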

Keywords: Approximate estimators, asymptotic theory, confidence interval, Monte Carlo simulations, stress-strength, variance estimation.
