Search results for: error matrices
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2173

2143 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems

Authors: Riadh Zorgati, Thomas Triboulet

Abstract:

In quite diverse application areas such as astronomy, medical imaging, geophysics or nondestructive evaluation, many problems related to calibration, fitting or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data, insufficient data and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e., existence, uniqueness and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations: the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number acts as an amplifier of data uncertainties during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas such as numerical optimization, where interior-point algorithms for solving linear programs also face ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, positive semi-definite matrices and then generalized to arbitrary complex rectangular matrices. The main results obtained are as follows. 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required in iterative algorithms for solving a system of linear equations. This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be efficiently used in different solution schemes such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case corresponding to choosing the Euclidean norm in an asymmetrical structure. 4) In numerical experiments on some well-known pathological test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residual, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters of the solution of a linear system (such as its extreme values, mean and variance) prior to its resolution. Such an approach, if it proved efficient, would be a source of information on the solution of a system of linear equations.
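The row-action methods the abstract compares against can be illustrated concretely. Below is a minimal sketch of the classical Kaczmarz iteration for Ax = b; the example system and iteration count are chosen here for illustration and are not taken from the paper:

```python
# Cyclic Kaczmarz iteration: at each step, project the current iterate
# onto the hyperplane defined by one row of the system.

def kaczmarz(A, b, sweeps=200):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            norm2 = sum(r * r for r in row)
            scale = (bi - dot) / norm2
            x = [xi + scale * r for xi, r in zip(x, row)]
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
x = kaczmarz(A, b)   # converges toward the solution [0.8, 1.4]
```

For consistent systems the iteration converges to a solution; its sensitivity to conditioning is precisely what motivates the preconditioning studied in the paper.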

Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix

Procedia PDF Downloads 109
2142 Generalization of Tau Approximant and Error Estimate of Integral Form of Tau Methods for Some Class of Ordinary Differential Equations

Authors: A. I. Ma’ali, R. B. Adeniyi, A. Y. Badeggi, U. Mohammed

Abstract:

An error estimate for the integrated formulation of the Lanczos tau method for some classes of ordinary differential equations was reported. This paper is concerned with the generalization of tau approximants and their corresponding error estimates for some classes of ordinary differential equations (ODEs) characterized by m + s = 3 (i.e., m = 1, s = 2; m = 2, s = 1; and m = 3, s = 0), where m and s are the order of the differential equation and the number of overdetermination, respectively. The general results obtained were validated with some numerical examples.

Keywords: approximant, error estimate, tau method, overdetermination

Procedia PDF Downloads 574
2141 A Study on the Influence of Planet Pin Parallelism Error to Load Sharing Factor

Authors: Kyung Min Kang, Peng Mou, Dong Xiang, Yong Yang, Gang Shen

Abstract:

In this paper, planet pin parallelism error, which is one of the manufacturing errors of the planet carrier, is employed as the main variable influencing the planet load sharing factor. This error falls into two groups: (i) pin parallelism error with rotation about the axis perpendicular to the tangent of the base circle of the gear (x-axis rotation in this paper); (ii) pin parallelism error with rotation about the tangent axis of the base circle of the gear (y-axis rotation in this paper). For this study, the planetary gear system of a 1.5 MW wind turbine is used, and a purely torsional rigid body model of this planetary gear is built using SolidWorks and MSC.ADAMS. Based on the quantified parallelism error and the simulation model, a dynamics simulation of the planetary gear is carried out to obtain dynamic mesh load results for each type of error, and the load sharing factor is calculated from the mesh load results. A load sharing factor formula and suggestions for planetary reliability design are proposed from the conclusions of this study.

Keywords: planetary gears, planet load sharing, MSC.ADAMS, parallelism error

Procedia PDF Downloads 371
2140 Unequal Error Protection of VQ Image Transmission System

Authors: Khelifi Mustapha, A. Moulay lakhdar, I. Elawady

Abstract:

We study unequal error protection for VQ image transmission. We use Reed-Solomon (RS) codes for channel coding because they offer better channel error correction performance over a binary output channel. Such a channel (binary input and output) should be considered in the case of the application layer, because it includes all the features of the layers located below it, in which it is usually not feasible to make changes.
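The idea of unequal error protection can be sketched independently of Reed-Solomon codes. In the toy example below, a 3-fold repetition code (an illustrative stand-in for the RS codes used in the paper) protects only the bits marked as important, while the rest are sent uncoded:

```python
# Unequal error protection sketch: important bits get a stronger code.

def protect(bits, important):
    """Repeat important bits 3 times; leave the rest unprotected."""
    out = []
    for b, imp in zip(bits, important):
        out.extend([b] * (3 if imp else 1))
    return out

def recover(stream, important):
    """Majority-decode protected bits; copy unprotected ones."""
    out, i = [], 0
    for imp in important:
        if imp:
            chunk = stream[i:i + 3]
            out.append(1 if sum(chunk) >= 2 else 0)
            i += 3
        else:
            out.append(stream[i])
            i += 1
    return out

bits      = [1, 0, 1, 1]
important = [True, True, False, False]
coded = protect(bits, important)
coded[0] ^= 1                          # flip one bit in a protected block
assert recover(coded, important) == bits
```

A single error inside a protected block is corrected by majority vote; an error in an unprotected bit would not be, which is exactly the asymmetry unequal error protection trades on.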

Keywords: vector quantization, channel error correction, Reed-Solomon channel coding, application

Procedia PDF Downloads 332
2139 Ultrasonic Agglomeration of Protein Matrices and Its Effect on Thermophysical, Macro- and Microstructural Properties

Authors: Daniela Rivera-Tobar, Mario Perez-Won, Roberto Lemus-Mondaca, Gipsy Tabilo-Munizaga

Abstract:

Different dietary trends worldwide seek foods with anti-inflammatory properties, rich in antioxidants, proteins, and unsaturated fatty acids, that lead to better metabolic, intestinal, mental, and cardiac health. In this sense, food matrices with high protein content based on macro- and microalgae are an excellent alternative to meet the new needs of consumers. An emerging and environmentally friendly technology for producing protein matrices is ultrasonic agglomeration. It consists of the formation of permanent bonds between particles, improving the agglomeration of the matrix compared to conventionally (compression-) agglomerated products. Among the advantages of this process are the reduction of nutrient loss and the avoidance of binding agents. The objective of this research was to optimize the ultrasonic agglomeration process in matrices composed of spirulina (Arthrospira platensis) powder and cochayuyo (Durvillaea antarctica) flour. The response variable was Young's modulus, and the independent variables were the process conditions: ultrasonic amplitude (70, 80 and 90%), agglomeration time (20, 25 and 30 seconds) and number of cycles (3, 4 and 5). The process was evaluated using a central composite design and analyzed using response surface methodology. In addition, the effects of agglomeration on thermophysical and microstructural properties were evaluated. Fourier-transform infrared spectroscopy (FTIR) analysis showed that ultrasonic compression at 80 and 90% amplitude caused conformational changes; according to the microstructure images (SEM) and the differential scanning calorimetry (DSC) analysis, the best conditions were 90% amplitude for 25 or 30 seconds with 3 or 4 ultrasound cycles. In conclusion, the agglomerated matrices present good macro- and microstructural properties, which would allow the design of food systems with better nutritional and functional properties.

Keywords: ultrasonic agglomeration, physical properties of food, protein matrices, macro and microalgae

Procedia PDF Downloads 34
2138 A New Approach to Interval Matrices and Applications

Authors: Obaid Algahtani

Abstract:

An interval may be defined as a convex combination as follows: I = [a,b] = {x_α = (1-α)a + αb : α ∈ [0,1]}. Consequently, we may define interval operations by applying the scalar operation point-wise to the corresponding interval points: I ∙ J = {x_α ∙ y_α : α ∈ [0,1], x_α ∈ I, y_α ∈ J}, with the usual restriction 0 ∉ J if ∙ = ÷. These operations are associative: I + (J + K) = (I + J) + K and I*(J*K) = (I*J)*K. These two properties, which are missing in the usual interval operations, enable the extension of the usual linear system concepts to the interval setting in a seamless manner. The arithmetic introduced here avoids such vague terms as "interval extension", "inclusion function", or determinants, which we encounter in the engineering literature dealing with interval linear systems. On the other hand, these definitions were motivated by our attempt to arrive at a definition of interval random variables and to investigate the corresponding statistical properties. We feel that they are the natural ones for handling interval systems, and they enable the extension of many results from usual state space models to interval state space models. The interval state space model we consider here is of the form X_{t+1} = A X_t + W_t, Y_t = H X_t + V_t, t ≥ 0, where A ∈ IR^{k×k} and H ∈ IR^{p×k} are interval matrices and W_t ∈ IR^k, V_t ∈ IR^p are zero-mean Gaussian white-noise interval processes. This feeling is reassured by the numerical results we obtained in simulation examples.
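Under the α-pairing defined above, operating on points with the same α makes the endpoint representation (a, b) behave componentwise for + and *, which is one way to see the claimed associativity. A minimal sketch, assuming this reading of the definition:

```python
# Intervals as endpoint pairs under the convex-combination pairing:
# matching x_alpha with y_alpha makes + and * componentwise on (a, b).

def add(I, J):
    return (I[0] + J[0], I[1] + J[1])

def mul(I, J):
    return (I[0] * J[0], I[1] * J[1])

I, J, K = (1.0, 2.0), (3.0, 5.0), (2.0, 4.0)

# Associativity, which ordinary interval arithmetic lacks under this pairing:
assert add(I, add(J, K)) == add(add(I, J), K)
assert mul(I, mul(J, K)) == mul(mul(I, J), K)
```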

Keywords: interval analysis, interval matrices, state space model, Kalman Filter

Procedia PDF Downloads 396
2137 Mechanical Properties of Polyurethane Scaffolds Reinforced with Green Nanofibers for Applications in Soft Tissue Regeneration

Authors: Mustafa Abu Ghalia, Yaser Dahman

Abstract:

A new class of polyurethane (PU) reinforced with green bacterial cellulose nanofibers (BC) was prepared using a solvent casting method, with the goal of fabricating green nanocomposites. Four BC loadings (1, 2.5, 5, and 10 wt%) were incorporated into PU matrices via BC surface modification, with the BC subsequently grafted into the PU through a silane coupling agent to improve BC dispersion and its interfacial interaction. The tensile test results were evaluated according to the response surface method (RSM) to optimize the effects of the variable parameters, pore size, porosity, and BC content, on the mechanical properties. The compressive strength for PU-5 wt% BC was about 9.8 MPa, and decreased to 4.9 MPa when porosity was generated. The Nielsen model was applied to investigate the BC stress concentration in the PU matrices. Likewise, the Krenchel and Halpin-Tsai models were employed to evaluate the BC nanofiber reinforcement potential and BC orientation in the PU matrices. The analysis of variance (ANOVA) demonstrated that only the BC loading has a significant effect in increasing the tensile strength, Young's modulus, and flexural modulus of the PU-BC nanocomposites. The optimal values of the experimental variables were confirmed to be 5 wt% for BC, 230 for pore size, and 80% for porosity. Scanning electron microscopy (SEM) micrographs showed a uniform distribution of nanofibers in the PU matrices at a BC addition of 5 wt%. Hydrolytic degradation revealed that the weight loss in the BC-reinforced PU scaffold is higher than in unreinforced PU.

Keywords: polyurethane scaffold, mechanical properties, tissue engineering, polyurethane

Procedia PDF Downloads 175
2136 A Study on the Influence of Pin-Hole Position Error of Carrier on Mesh Load and Planet Load Sharing of Planetary Gear

Authors: Kyung Min Kang, Peng Mou, Dong Xiang, Gang Shen

Abstract:

For a planetary gear system, planet pin-hole position accuracy is one of the most influential factors in the efficiency and reliability of the system. This study considers planet pin-hole position error as the main input error and builds a multibody dynamic simulation model of the planetary gear, including the planet pin-hole position error, using MSC.ADAMS. From this model, the mesh load results between meshing gears are obtained for each pin-hole position error case, and based on these results, the planet load sharing factor, which reflects the equilibrium of mesh load sharing among all meshing gear pairs, is calculated. The analysis results indicate that pin-hole position error in the tangential direction has a profound influence on the mesh load and the load sharing factor between meshing gear pairs.

Keywords: planetary gear, load sharing factor, multibody dynamics, pin-hole position error

Procedia PDF Downloads 548
2135 An Efficient Algorithm of Time Step Control for Error Correction Method

Authors: Youngji Lee, Yonghyeon Jeon, Sunyoung Bu, Philsu Kim

Abstract:

The aim of this paper is to construct a time step control algorithm for the error correction method recently developed by one of the authors for solving stiff initial value problems. This is achieved with the generalized Chebyshev polynomial and the corresponding error correction method. The main idea of the proposed scheme is the reuse of duplicated node points in generalized Chebyshev polynomials of two different degrees, adding the necessary sample points instead of re-sampling all points. At each integration step, the proposed method comprises two equations, one for the solution and one for the error. The constructed algorithm controls both the error and the time step size simultaneously, and offers good computational cost compared to the original method. Two stiff problems are solved numerically to assess the effectiveness of the proposed scheme.
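The general idea of driving the step size from a local error estimate (not the authors' Chebyshev-based scheme, which the abstract does not detail) can be sketched with a simple step-doubling Euler integrator:

```python
# Step-doubling time step control: compare one full Euler step with two
# half steps; accept the step if the difference is below tolerance, and
# grow or shrink h accordingly.

def euler_step(f, t, y, h):
    return y + h * f(t, y)

def adaptive_integrate(f, t0, y0, t_end, tol=1e-6, h=0.1):
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        full = euler_step(f, t, y, h)
        half = euler_step(f, t + h / 2,
                          euler_step(f, t, y, h / 2), h / 2)
        err = abs(half - full)            # local error estimate
        if err <= tol:                    # accept the step
            t, y = t + h, half
        # adjust h, clamped to avoid overly aggressive changes
        h *= min(2.0, max(0.1, 0.9 * (tol / (err + 1e-30)) ** 0.5))
    return y

# dy/dt = -y, y(0) = 1  ->  y(1) = exp(-1) ~ 0.3679
y1 = adaptive_integrate(lambda t, y: -y, 0.0, 1.0, 1.0)
```

The paper's contribution is, in effect, obtaining the error estimate cheaply from the duplicated Chebyshev node points instead of recomputing the step, as step doubling does.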

Keywords: stiff initial value problem, error correction method, generalized Chebyshev polynomial, node points

Procedia PDF Downloads 536
2134 The Tracking and Hedging Performances of Gold ETF Relative to Some Other Instruments in the UK

Authors: Abimbola Adedeji, Ahmad Shauqi Zubir

Abstract:

This paper examines the profitability and risk of investing in gold exchange traded funds (ETFs) and gold mutual funds compared to gold prices. The main focus in determining whether there are similarities or differences between those financial products is the tracking error. The importance of understanding these similarities or differences derives from the fact that gold ETFs and gold mutual funds are used as substitutes by investors who are looking to profit from gold prices but are short of capital. Ten hypotheses were tested, using three types of tracking error. Tracking errors 1 and 3 give results that differentiate between types of ETFs and mutual funds, hence answering the hypotheses that were developed. However, tracking error 2 failed to shed light on the questions raised in this study: all of its results indicate only that the ups and downs of the financial instruments are statistically similar to the movement of physical gold prices.
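One common definition of tracking error, assumed here since the abstract does not spell out its three variants, is the standard deviation of the difference between the fund's returns and the benchmark's returns:

```python
# Tracking error as the stdev of return differences (illustrative data).
from statistics import stdev

def returns(prices):
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

def tracking_error(fund_prices, benchmark_prices):
    diffs = [f - b for f, b in zip(returns(fund_prices),
                                   returns(benchmark_prices))]
    return stdev(diffs)

gold = [100.0, 102.0, 101.0, 105.0]   # illustrative benchmark prices
etf  = [100.0, 101.8, 101.1, 104.8]   # illustrative fund prices
te = tracking_error(etf, gold)        # small value -> close tracking
```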

Keywords: gold etf, gold mutual funds, tracking error

Procedia PDF Downloads 394
2133 Charging-Vacuum Helium Mass Spectrometer Leak Detection Technology in the Application of Space Products Leak Testing and Error Control

Authors: Jijun Shi, Lichen Sun, Jianchao Zhao, Lizhi Sun, Enjun Liu, Chongwu Guo

Abstract:

Because of the consistency of pressure direction, more short cycle, and high sensitivity, Charging-Vacuum helium mass spectrometer leak testing technology is the most popular leak testing technology for the seal testing of the spacecraft parts, especially the small and medium size ones. Usually, auxiliary pump was used, and the minimum detectable leak rate could reach 5E-9Pa•m3/s, even better on certain occasions. Relative error is more important when evaluating the results. How to choose the reference leak, the background level of helium, and record formats would affect the leak rate tested. In the linearity range of leak testing system, it would reduce 10% relative error if the reference leak with larger leak rate was used, and the relative error would reduce obviously if the background of helium was low efficiently, the record format of decimal was used, and the more stable data were recorded.

Keywords: leak testing, spacecraft parts, relative error, error control

Procedia PDF Downloads 427
2132 An Accurate Method for Phylogeny Tree Reconstruction Based on a Modified Wild Dog Algorithm

Authors: Essam Al Daoud

Abstract:

This study solves a phylogeny problem by using modified wild dog pack optimization. The least squares error is considered as a cost function that needs to be minimized. Therefore, in each iteration, new distance matrices based on the constructed trees are calculated and used to select the alpha dog. To test the suggested algorithm, ten homologous genes are selected and collected from National Center for Biotechnology Information (NCBI) databanks (i.e., 16S, 18S, 28S, Cox 1, ITS1, ITS2, ETS, ATPB, Hsp90, and STN). The data are divided into three categories: 50 taxa, 100 taxa and 500 taxa. The empirical results show that the proposed algorithm is more reliable and accurate than other implemented methods.
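The least-squares cost minimized above can be sketched as follows, assuming (as is standard for least-squares phylogeny) that it compares observed pairwise distances with the path distances induced by a candidate tree; the matrices below are illustrative, not taken from the study:

```python
# Least-squares cost of a candidate tree: sum of squared differences
# between observed distances and the tree-induced path distances,
# over each unordered pair of taxa.

def ls_error(observed, tree_induced):
    n = len(observed)
    return sum((observed[i][j] - tree_induced[i][j]) ** 2
               for i in range(n) for j in range(i + 1, n))

D_obs  = [[0, 3, 5],
          [3, 0, 4],
          [5, 4, 0]]
D_tree = [[0, 3, 5],      # distances induced by a hypothetical tree
          [3, 0, 4.5],
          [5, 4.5, 0]]
err = ls_error(D_obs, D_tree)   # (4 - 4.5)^2 = 0.25
```

In each iteration of the optimization, this cost would be evaluated for the trees in the pack, and the lowest-cost tree selected as the alpha dog.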

Keywords: least square, neighbor joining, phylogenetic tree, wild dog pack

Procedia PDF Downloads 293
2131 Local Radial Basis Functions for Helmholtz Equation in Seismic Inversion

Authors: Hebert Montegranario, Mauricio Londoño

Abstract:

Solutions of the Helmholtz equation are essential in seismic imaging methods like full waveform inversion, which requires solving the wave equation many times. Traditional methods like the Finite Element Method (FEM) or Finite Differences (FD) produce sparse matrices but may suffer the so-called pollution effect in the numerical solution of the Helmholtz equation for large values of the wave number. On the other hand, global radial basis functions have better accuracy but produce full matrices that become unstable. In this research we combine the virtues of both approaches to find numerical solutions of the Helmholtz equation, by applying a meshless method that produces sparse matrices through local radial basis functions. We solve the equation with absorbing boundary conditions of the Clayton-Engquist and PML (Perfectly Matched Layer) kinds and compare with results in the standard literature, showing a promising performance in tackling both the pollution effect and matrix instability.

Keywords: Helmholtz equation, meshless methods, seismic imaging, wavefield inversion

Procedia PDF Downloads 516
2130 Robust ANOVA: An Illustrative Study in Horticultural Crop Research

Authors: Dinesh Inamadar, R. Venugopalan, K. Padmini

Abstract:

An attempt is made in the present communication to elucidate the efficacy of robust ANOVA methods for analyzing horticultural field experimental data in the presence of outliers. The results fortify the use of robust ANOVA methods, as there was a substantial reduction in the error mean square, and hence in the probability of committing a Type I error, compared to the regular approach.

Keywords: outliers, robust ANOVA, horticulture, Cook's distance, type I error

Procedia PDF Downloads 354
2129 A Survey of 2nd Year Students' Frequent Writing Error and the Effects of Participatory Error Correction Process

Authors: Chaiwat Tantarangsee

Abstract:

The purposes of this study are 1) to study the effects of a participatory error correction process and 2) to find out the students' satisfaction with such a process. This is a quasi-experimental study with a single group, in which data were collected 5 times, preceding and following 4 experimental rounds of the participatory error correction process, which included providing coded indirect corrective feedback on the students' texts together with error treatment activities. The sample comprised 28 second-year English major students of the Faculty of Humanities and Social Sciences, Suan Sunandha Rajabhat University. The tool for the experimental study was the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tools for data collection were 5 writing tests of short texts and a questionnaire. Based on formative evaluation of the students' writing ability prior to and after each of the 4 experiments, the findings disclose higher scores with a statistically significant difference at the 0.05 level. Moreover, the effect sizes for the means of the students' scores prior to and after the 4 experiments were d = 1.0046, 1.1374, 1.297, and 1.0065, respectively. It can be concluded that the participatory error correction process enables all of the students to learn equally well and improves their ability to write short texts. Finally, the students' overall satisfaction with the participatory error correction process is high (mean = 4.32, S.D. = 0.92).
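The reported d-values are effect sizes; one common way to compute such a value (an assumption here, since the abstract does not state its formula) is the mean gain divided by the pooled standard deviation of the pre- and post-test scores:

```python
# Cohen's d for a pre/post comparison (illustrative scores, not the
# study's data): mean difference over pooled standard deviation.
from statistics import mean, stdev

def cohens_d(pre, post):
    pooled = (((len(pre) - 1) * stdev(pre) ** 2 +
               (len(post) - 1) * stdev(post) ** 2) /
              (len(pre) + len(post) - 2)) ** 0.5
    return (mean(post) - mean(pre)) / pooled

pre  = [10, 12, 11, 13, 9]    # illustrative pre-test scores
post = [14, 15, 13, 16, 12]   # illustrative post-test scores
d = cohens_d(pre, post)       # d > 0.8 is conventionally a large effect
```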

Keywords: coded indirect corrective feedback, participatory error correction process, error treatment, humanities and social sciences

Procedia PDF Downloads 492
2128 Aliasing Free and Additive Error in Spectra for Alpha Stable Signals

Authors: R. Sabre

Abstract:

This work focuses on the symmetric alpha-stable process with continuous time, frequently used to model signals with indefinitely growing variance that are often observed with an unknown additive error. The objective of this paper is to estimate this error from discrete observations of the signal. To that end, we propose a method based on smoothing the observations via the Jackson polynomial kernel, taking into account the width of the interval where the spectral density is non-zero. This technique avoids the aliasing phenomenon encountered when the estimation is made from discrete observations of a continuous-time process. We have studied the convergence rate of the estimator and have shown that it improves when the spectral density is zero at the origin. Thus, we set up an estimator of the additive error that can be subtracted to approach the original signal without error.

Keywords: spectral density, stable processes, aliasing, non parametric

Procedia PDF Downloads 105
2127 A Novel Way to Create Qudit Quantum Error Correction Codes

Authors: Arun Moorthy

Abstract:

Quantum computing promises algorithmic speedups for a number of tasks; however, as in classical computing, effective error-correcting codes are needed. Current quantum computers require costly equipment to control each particle, so having fewer particles to control is ideal. Although traditional quantum computers are built using qubits (2-level systems), qudits (systems with more than 2 levels) are appealing since they can provide an equivalent computational space using fewer particles, meaning fewer particles need to be controlled. Qudit quantum error-correction codes are currently available for systems of various levels; however, these codes sometimes have overly specific constraints. When building a qudit system, it is important for researchers to have access to many codes that satisfy their requirements. This project addresses two methods to increase the number of quantum error-correcting codes available to researchers. The first method is generating new codes for a given set of parameters. The second is generating new error-correction codes by using existing codes as a starting point for codes at another level (e.g., a 5-level system code built on a 2-level system code). The project builds a website that researchers can use to generate new error-correction codes or codes based on existing ones.

Keywords: qudit, error correction, quantum, qubit

Procedia PDF Downloads 130
2126 Error Analysis of Students’ Freewriting: A Study of Adult English Learners’ Errors

Authors: Louella Nicole Gamao

Abstract:

Writing in English is a complex skill and process for foreign language learners, and the errors they commit are an inevitable part of their writing. This study aims to explore and analyze the freewriting of English-as-a-foreign-language (EFL) learners at a university in Taiwan by identifying the categories of errors that often appear in their freewriting and analyzing the learners' awareness of each error. Hopefully, this study will yield further information about students' errors in English writing and contribute to a better understanding of the benefits of the freewriting activity, which can serve as a powerful tool in English writing courses for EFL classes. The study adopted the framework of error analysis proposed by Dulay, Burt, and Krashen (1982), which consists of compiling data, identifying errors, classifying error types, calculating the frequency of each error, and interpreting errors. Survey questionnaires regarding students' awareness of errors were also analyzed and discussed. Using quantitative and qualitative approaches, this study provides a detailed description of the errors found in the students' freewriting output, explores the similarities and differences between the students' errors in academic writing and in freewriting, and, lastly, analyzes the students' perception of their errors.

Keywords: error, EFL, freewriting, taiwan, english

Procedia PDF Downloads 76
2125 Delaunay Triangulations Efficiency for Conduction-Convection Problems

Authors: Bashar Albaalbaki, Roger E. Khayat

Abstract:

This work is a comparative study of the effect of Delaunay triangulation algorithms on the discretization error for conduction-convection conservation problems. A structured triangulation and many unstructured Delaunay triangulations, using three popular algorithms for node placement, are compared. The numerical method employed is the vertex-centered finite volume method. It is found that when the computational domain can be meshed using a structured triangulation, the discretization error is lower for structured triangulations than for unstructured ones only at low Peclet numbers, i.e., when conduction is dominant. However, as the Peclet number is increased and convection becomes more significant, the unstructured triangulations reduce the discretization error. Also, no statistical correlation between triangulation angle extrema and the discretization error is found using 200 samples of randomly generated Delaunay and non-Delaunay triangulations. Thus, the angle extrema cannot be an indicator of the discretization error on their own and need to be combined with other triangulation quality measures, which is the subject of further studies.

Keywords: conduction-convection problems, Delaunay triangulation, discretization error, finite volume method

Procedia PDF Downloads 64
2124 Performance of Total Vector Error of an Estimated Phasor within Local Area Networks

Authors: Ahmed Abdolkhalig, Rastko Zivanovic

Abstract:

This paper evaluates the total vector error of an estimated phasor, as defined in the IEEE C37.118 standard, under different medium access methods in Local Area Networks (LANs). Three LAN models (CSMA/CD, CSMA/AMP, and switched Ethernet) are evaluated. The total vector error of the estimated phasor is evaluated for varying numbers of nodes under the standardized network bandwidth values defined in the IEC 61850-9-2 communication standard (0.1, 1, and 10 Gbps).
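The total vector error of IEEE C37.118 is the magnitude of the difference between the estimated and reference phasors, normalised by the reference magnitude. Treating phasors as complex numbers:

```python
# Total vector error (TVE): |estimated - reference| / |reference|.

def tve(estimated, reference):
    return abs(estimated - reference) / abs(reference)

ref = complex(100.0, 0.0)     # reference phasor (illustrative values)
est = complex(100.5, 0.5)     # estimate with small real/imag errors
e = tve(est, ref)             # about 0.707 % in this example
```

IEEE C37.118 specifies a 1% TVE limit for compliant phasor measurement units, which is the figure of merit the network-delay experiments above are evaluated against.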

Keywords: phasor, local area network, total vector error, IEEE C37.118, IEC 61850

Procedia PDF Downloads 279
2123 A Comparative Analysis of ARIMA and Threshold Autoregressive Models on Exchange Rate

Authors: Diteboho Xaba, Kolentino Mpeta, Tlotliso Qejoe

Abstract:

This paper assesses the in-sample forecasting of South African exchange rates, comparing a linear ARIMA model and a SETAR model. The study uses monthly adjusted data of South African exchange rates with 420 observations. The Akaike information criterion (AIC) and the Schwarz information criterion (SIC) are used for model selection. Mean absolute error (MAE), root mean squared error (RMSE) and mean absolute percentage error (MAPE) are the error metrics used to evaluate the forecasting capability of the models. The Diebold-Mariano (DM) test is employed to check forecast accuracy and to distinguish the forecasting performance of the two models (ARIMA and SETAR). The results indicate that both models perform well when modelling and forecasting the exchange rates, but SETAR seems to outperform ARIMA.
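The three error metrics used in the study have standard definitions, sketched below with illustrative data:

```python
# MAE, RMSE and MAPE in their standard forms.

def mae(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    return (sum((a - f) ** 2 for a, f in zip(actual, forecast))
            / len(actual)) ** 0.5

def mape(actual, forecast):
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actual, forecast)) / len(actual)

actual   = [10.0, 12.0, 11.0, 13.0]   # illustrative observed values
forecast = [10.5, 11.5, 11.5, 12.5]   # illustrative forecasts
e_mae  = mae(actual, forecast)        # 0.5
e_rmse = rmse(actual, forecast)       # 0.5
e_mape = mape(actual, forecast)       # about 4.39 (percent)
```

RMSE penalises large misses more heavily than MAE, and MAPE is scale-free, which is why studies like this one report all three.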

Keywords: ARIMA, error metrics, model selection, SETAR

Procedia PDF Downloads 217
2122 Radical Scavenging Activity of Protein Extracts from Pulse and Oleaginous Seeds

Authors: Silvia Gastaldello, Maria Grillo, Luca Tassoni, Claudio Maran, Stefano Balbo

Abstract:

Antioxidants are nowadays attractive not only for their countless benefits to human and animal health, but also for the prospect of their use as food preservatives instead of synthetic chemical molecules. In this study, the radical scavenging activity of six protein extracts from pulse and oleaginous seeds was evaluated. The selected matrices are Pisum sativum (yellow pea from two different origins), Carthamus tinctorius (safflower), Helianthus annuus (sunflower), Lupinus luteus cv. Mister (lupin) and Glycine max (soybean), since they are economically interesting for both human and animal nutrition. The seeds were ground and proteins were extracted from 20 mg of powder with a specific vegetal-extraction kit. Proteins were quantified using the Bradford protocol, and scavenging activity was measured using the DPPH assay, based on the decrease in absorbance of the DPPH radical (2,2-diphenyl-1-picrylhydrazyl) in the presence of antioxidant molecules. Different concentrations of the protein extract (1, 5, 10, 50, 100, 500 µg/ml) were mixed with DPPH solution (DPPH 0.004% in ethanol 70% v/v). Ascorbic acid was used as a scavenging activity standard reference at the same six concentrations as the protein extracts, while DPPH solution was used as the control. Samples and standard were prepared in triplicate and incubated for 30 minutes in the dark at room temperature, and the absorbance was read at 517 nm (ABS30). The average and standard deviation of the absorbance values were calculated for each concentration of samples and standard. Statistical analysis using Student's t-test and p-values was performed to assess the statistical significance of the difference in scavenging activity between the samples (or standard) and the control (ABSctrl). The percentage of antioxidant activity was calculated using the formula [(ABSctrl-ABS30)/ABSctrl]*100. The results demonstrate that all matrices showed antioxidant activity. Ascorbic acid, used as the standard, exhibits 96% scavenging activity at a concentration of 500 µg/ml. At the same conditions, sunflower, safflower and yellow pea revealed the highest antioxidant performance among the matrices analyzed, with activities of 74%, 68% and 70%, respectively (p < 0.005). Although lupin and soybean exhibit a lower antioxidant activity than the other matrices, they showed percentages of 46 and 36, respectively. All these data suggest the possibility of using undervalued edible matrices as an antioxidant source. However, further studies are necessary to investigate a possible synergic effect of several matrices as well as the impact of industrial processes for a large-scale approach.
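The scavenging-activity formula quoted above is straightforward to apply; the absorbance values below are illustrative, not the study's raw data:

```python
# Percentage antioxidant activity from the DPPH assay, per the formula
# given in the abstract: [(ABSctrl - ABS30) / ABSctrl] * 100.

def scavenging_activity(abs_ctrl, abs_30):
    return (abs_ctrl - abs_30) / abs_ctrl * 100

activity = scavenging_activity(0.90, 0.27)   # 70.0 % activity
```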

Keywords: antioxidants, DPPH assay, natural matrices, vegetal proteins

Procedia PDF Downloads 391
2121 GPU-Accelerated Triangle Mesh Simplification Using Parallel Vertex Removal

Authors: Thomas Odaker, Dieter Kranzlmueller, Jens Volkert

Abstract:

We present an approach to triangle mesh simplification designed to be executed on the GPU. We use a quadric error metric to calculate an error value for each vertex of the mesh and order all vertices based on this value. This step is followed by the parallel removal of a number of vertices with the lowest calculated error values. To allow for the parallel removal of multiple vertices we use a set of per-vertex boundaries that prevent mesh foldovers even when simplification operations are performed on neighbouring vertices. We execute multiple iterations of the calculation of the vertex errors, ordering of the error values and removal of vertices until either a desired number of vertices remains in the mesh or a minimum error value is reached. This parallel approach is used to speed up the simplification process while maintaining mesh topology and avoiding foldovers at every step of the simplification.
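As a rough serial illustration of the selection step described above (the names and the simple neighbor-blocking rule are ours, standing in for the per-vertex boundaries that prevent foldovers in the GPU version):

```python
def select_removals(errors, neighbors, fraction=0.1):
    """Toy serial sketch of one simplification pass: order vertices by
    quadric error value and greedily pick the lowest-error vertices for
    removal, skipping any vertex adjacent to one already selected so that
    no two neighbouring vertices are removed in the same pass."""
    order = sorted(range(len(errors)), key=lambda v: errors[v])
    budget = max(1, int(len(errors) * fraction))
    selected, blocked = [], set()
    for v in order:
        if len(selected) >= budget:
            break
        if v in blocked:
            continue
        selected.append(v)
        blocked.update(neighbors[v])  # lock the one-ring of the removed vertex
    return selected
```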

Keywords: computer graphics, half edge collapse, mesh simplification, precomputed simplification, topology preserving

Procedia PDF Downloads 334
2120 Markov-Chain-Based Optimal Filtering and Smoothing

Authors: Garry A. Einicke, Langford B. White

Abstract:

This paper describes an optimum filter and smoother for recovering a Markov process message from noisy measurements. The developments follow from an equivalence between a state space model and a hidden Markov chain. The ensuing filter and smoother employ transition probability matrices and approximate probability distribution vectors. The properties of the optimum solutions are retained, namely, the estimates are unbiased and minimize the variance of the output estimation error, provided that the assumed parameter set is correct. Methods for estimating unknown parameters from noisy measurements are discussed. Signal recovery examples are described in which performance benefits are demonstrated at an increased calculation cost.
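A minimal sketch of the filtering recursion implied by the hidden-Markov-chain equivalence, using generic transition and emission matrices (the notation is ours; the paper's optimum filter has more structure than this bare forward recursion):

```python
import numpy as np

def hmm_filter(pi0, A, B, observations):
    """Forward filter for a hidden Markov chain: propagate an approximate
    probability distribution vector through the transition matrix A, then
    condition on each noisy measurement via the emission matrix B.
    pi0: initial state distribution, A[i, j] = P(x_t = j | x_{t-1} = i),
    B[i, k] = P(y = k | x = i). Returns the filtered distributions."""
    alpha = np.asarray(pi0, dtype=float)
    history = []
    for y in observations:
        alpha = alpha @ A          # time update (Markov prediction)
        alpha = alpha * B[:, y]    # measurement update
        alpha /= alpha.sum()       # renormalize to a probability vector
        history.append(alpha.copy())
    return history
```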

Keywords: optimal filtering, smoothing, Markov chains

Procedia PDF Downloads 296
2119 Signal Processing Techniques for Adaptive Beamforming with Robustness

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

Adaptive beamforming using an antenna array of sensors is useful for adaptively detecting and preserving the presence of the desired signal while suppressing interference and background noise. Conventional adaptive array beamforming requires prior information about either the impinging direction or the waveform of the desired signal in order to adapt the weights. The adaptive weights of an antenna array beamformer under a steered-beam constraint are calculated by minimizing the output power of the beamformer subject to the constraint that forces the beamformer to produce a constant response in the steering direction. Hence, the performance of the beamformer is very sensitive to the accuracy of the steering operation. It is well known in the literature that the performance of an adaptive beamformer deteriorates under any steering angle error encountered in practical applications, e.g., wireless communication systems with massive antennas deployed at the base station and user equipment. Developing effective signal processing techniques to deal with steering angle error in array beamforming systems has therefore become an important research topic. In this paper, we present an effective signal processing technique for constructing an adaptive beamformer that is robust against steering angle error. The proposed array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. Based on the presumed steering vector and a preset angle range for steering mismatch tolerance, we first create a matrix related to the direction vectors of the signal sources. Two projection matrices are generated from this matrix. The projection matrix associated with the desired signal information, together with the received array data, is utilized to iteratively estimate the actual direction vector of the desired signal.
The estimated direction vector of the desired signal is then used to find an appropriate quiescent weight vector. The other projection matrix serves as the signal blocking matrix required for performing adaptive beamforming. Accordingly, the proposed beamformer consists of adaptive quiescent weights and partially adaptive weights. Several computer simulation examples are provided to evaluate the proposed technique and compare it with existing robust techniques.
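The projection matrices at the heart of the scheme can be illustrated generically. The sketch below builds an orthogonal projection from steering vectors of a uniform linear array; it shows only the standard textbook construction, not the authors' iterative direction estimator.

```python
import numpy as np

def steering_vector(theta, n_sensors, d=0.5):
    """Uniform linear array steering vector for angle theta (radians),
    with half-wavelength element spacing by default."""
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * d * k * np.sin(theta))

def projection_matrix(steering_vectors):
    """Orthogonal projection onto the subspace spanned by the given
    steering vectors (columns of S): P = S (S^H S)^{-1} S^H."""
    S = np.column_stack(steering_vectors)
    return S @ np.linalg.inv(S.conj().T @ S) @ S.conj().T
```

A projection built this way leaves any vector in the spanned subspace unchanged; its complement (I - P) can act as a signal blocking matrix.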

Keywords: adaptive beamforming, robustness, signal blocking, steering angle error

Procedia PDF Downloads 95
2118 Lowering Error Floors by Concatenation of Low-Density Parity-Check and Array Code

Authors: Cinna Soltanpur, Mohammad Ghamari, Behzad Momahed Heravi, Fatemeh Zare

Abstract:

Low-density parity-check (LDPC) codes have been shown to deliver capacity-approaching performance; however, problematic graphical structures (e.g. trapping sets) in the Tanner graph of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under the conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme to avoid the trapping sets and to lower the error floors of the LDPC code. The outer code in the proposed concatenation is the LDPC code, and the inner code is a high-rate array code. The approach applies an iterative hybrid process between BCJR decoding for the array code and the SPA for the LDPC code, together with bit-pinning and bit-flipping techniques. The (2640, 1320) Margulis code was used for the simulations, and it is shown that the proposed concatenation and decoding scheme can considerably improve the error floor performance with minimal rate loss.
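Of the side techniques mentioned, bit flipping is the easiest to sketch. The toy hard-decision decoder below is illustrative only, quite apart from the full BCJR/SPA concatenated scheme of the paper.

```python
import numpy as np

def bit_flip_decode(H, r, max_iters=20):
    """Toy hard-decision bit-flipping decoder. H is a binary parity-check
    matrix, r the received hard bits. Each iteration flips the bit that
    participates in the most unsatisfied parity checks."""
    r = np.array(r, dtype=int)
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r  # all parity checks satisfied
        counts = H.T @ syndrome   # unsatisfied-check count per bit
        r[np.argmax(counts)] ^= 1
    return r
```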

Keywords: concatenated coding, low–density parity–check codes, array code, error floors

Procedia PDF Downloads 329
2117 Polymer Matrices Based on Natural Compounds: Synthesis and Characterization

Authors: Sonia Kudlacik-Kramarczyk, Anna Drabczyk, Dagmara Malina, Bozena Tyliszczak, Agnieszka Sobczak-Kupiec

Abstract:

Introduction: In the preparation of polymer materials, compounds of natural origin are currently gaining more and more interest. This is particularly noticeable in the synthesis of materials considered for biomedical use, where the selected material has to meet many requirements: it should be non-toxic, biodegradable and biocompatible. Therefore, special attention is directed to substances such as polysaccharides, proteins, or the basic building blocks of proteins, i.e. amino acids such as cysteine or histidine. These compounds may be crosslinked with other reagents, which leads to the preparation of polymer matrices. On the other hand, the previously mentioned requirements may also be met by polymers obtained by biosynthesis, e.g. polyhydroxybutyrate. This polymer belongs to the group of aliphatic polyesters and is synthesized by microorganisms (selected strains of bacteria) under specific conditions. Matrices based on a given polymer can be modified with substances of various origins; such a modification may change their properties or provide the material with new features desirable for a specific application. The described materials are synthesized using UV radiation. Photopolymerization is fast, waste-free, and yields final products with favorable properties. Methodology: The polymer matrices were prepared by photopolymerization. The first step involved preparing solutions of the particular reagents and mixing them in the appropriate ratio. Next, the crosslinking agent and photoinitiator were added to the reaction mixture, and the whole was poured into a Petri dish and treated with UV radiation. After the synthesis, the polymer samples were dried at room temperature and subjected to numerous analyses aimed at determining their physicochemical properties.
Firstly, the sorption properties of the obtained polymer matrices were determined. Next, the mechanical properties, i.e. tensile strength, were characterized, and the ability of all prepared polymer matrices to deform under applied stress was checked. This property is important from the viewpoint of applications such as wound dressings: a dressing has to be elastic because, depending on the location of the wound and its mobility, it has to adhere properly to the wound. Furthermore, considering the use of these materials for biomedical purposes, it is essential to determine their behavior in environments simulating those occurring in the human body; therefore, incubation studies using selected liquids were also conducted. Conclusions: As a result of the photopolymerization process, polymer matrices based on natural compounds were prepared. These exhibited favorable mechanical properties and swelling ability. Moreover, biocompatibility with simulated body fluids was confirmed. It can therefore be concluded that the analyzed polymer matrices are interesting materials that may be considered for biomedical use and subjected to further, more advanced analyses using specific cell lines.

Keywords: photopolymerization, polymer matrices, simulated body fluids, swelling properties

Procedia PDF Downloads 99
2116 A Method for Improving the Embedded Runge Kutta Fehlberg 4(5)

Authors: Sunyoung Bu, Wonkyu Chung, Philsu Kim

Abstract:

In this paper, we introduce a method for improving the embedded Runge-Kutta-Fehlberg 4(5) method. At each integration step, the proposed method comprises two equations, one for the solution and one for the error. The solution and error are obtained by solving an initial value problem whose solution carries the information about the error at each integration step. The constructed algorithm controls both the error and the time step size simultaneously and performs well in terms of computational cost compared to the original method. To assess its effectiveness, the EULR problem is solved numerically.
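The idea of an embedded pair driving step-size control can be sketched with a much simpler Heun/Euler 2(1) pair (ours, for illustration) in place of the full Fehlberg 4(5) tableau:

```python
def adaptive_step(f, t, y, h, tol=1e-6):
    """One adaptive step with an embedded Heun/Euler 2(1) pair: the
    difference between the two solutions estimates the local error,
    which drives the step-size update via the standard safety factor."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_low = y + h * k1                  # 1st-order (Euler) solution
    y_high = y + h * (k1 + k2) / 2.0    # 2nd-order (Heun) solution
    err = abs(y_high - y_low)           # local error estimate
    # step-size update for an order-p/(p+1) pair with p = 1
    h_new = 0.9 * h * (tol / err) ** 0.5 if err > 0 else 2.0 * h
    accepted = err <= tol
    return (y_high if accepted else y), h_new, accepted
```

The same accept/shrink logic, with a fifth-root exponent and the Fehlberg coefficients, yields the classical RKF 4(5) controller.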

Keywords: embedded Runge-Kutta-Fehlberg method, initial value problem, EULR problem, integration step

Procedia PDF Downloads 420
2115 An Improved Model of Estimation Global Solar Irradiation from in situ Data: Case of Oran Algeria Region

Authors: Houcine Naim, Abdelatif Hassini, Noureddine Benabadji, Alex Van Den Bossche

Abstract:

In this paper, two models to estimate the overall monthly average daily radiation on a horizontal surface were applied to the site of Oran (35.38° N, 0.37° W). The first is a regression equation of the Angstrom type; the second, developed by the present authors with some suggested modifications, uses as input parameters astronomical parameters (latitude, longitude, and altitude) and meteorological parameters (relative humidity). The comparisons are made using the mean bias error (MBE), root mean square error (RMSE), mean percentage error (MPE), and mean absolute bias error (MABE). This comparison shows that the second model is closer to the experimental values than the Angstrom model.
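The four comparison statistics can be computed as follows, using their common textbook definitions (the authors' exact conventions, e.g. sign or normalization, may differ):

```python
import math

def error_metrics(measured, estimated):
    """MBE, RMSE, MPE and MABE between measured and estimated series,
    in their usual forms: differences are taken as estimated - measured."""
    n = len(measured)
    diffs = [e - m for m, e in zip(measured, estimated)]
    mbe = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mpe = sum(d / m * 100.0 for m, d in zip(measured, diffs)) / n
    mabe = sum(abs(d) for d in diffs) / n
    return {"MBE": mbe, "RMSE": rmse, "MPE": mpe, "MABE": mabe}
```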

Keywords: meteorology, global radiation, Angstrom model, Oran

Procedia PDF Downloads 204
2114 Cryotopic Macroporous Polymeric Matrices for Regenerative Medicine and Tissue Engineering Applications

Authors: Archana Sharma, Vijayashree Nayak, Ashok Kumar

Abstract:

Three-dimensional matrices were fabricated from blends of natural-natural polymers, such as carrageenan-gelatin, and synthetic-natural polymers, such as PEG-gelatin (PEG of two different molecular weights, 2,000 and 6,000), using two different crosslinkers, glutaraldehyde and EDC-NHS, by the cryogelation technique. Blends represent a feasible approach to designing 3-D scaffolds with controllable mechanical, physical and biochemical properties without compromising biocompatibility and biodegradability. These matrices possessed an interconnected porous structure, good mechanical strength, biodegradability, consistent swelling kinetics, the ability to withstand high temperature, and visco-elastic behavior. The hemocompatibility of the cryogel matrices was determined by coagulation assays and a hemolytic activity assay, which demonstrated that these cryogels have negligible effects on coagulation time and excellent blood compatibility. In vitro biocompatibility (cell-matrix interaction) studies showed good cell adhesion, proliferation, and secretion of ECM on the matrices. The matrices provide a microenvironment for the growth, proliferation, differentiation and ECM secretion of different cell types such as IMR-32, C2C12, Cos-7, rat bone marrow derived MSCs and human bone marrow MSCs. Hoechst 33342 and PI staining also confirmed that the cells were uniformly distributed, adhered and proliferated properly on the cryogel matrix. An ideal scaffold for tissue engineering applications should allow cells to adhere, proliferate and maintain their functionality. Neurotransmitter analysis indicated that IMR-32 cells adhered, proliferated and secreted neurotransmitters when they interacted with these matrices, demonstrating restoration of their functionality.
The cell-matrix interaction was also evaluated at the molecular level to check genotoxicity and the protein expression profile, which indicated that these cryogel matrices are non-genotoxic and maintain the biofunctionality of cells growing on them. When implanted subcutaneously in balb/c mice, all these cryogels showed no adverse systemic or local toxicity effects at the implantation site. No significant increase in inflammatory cell count was observed after scaffold implantation. These cryogels are supermacroporous, and this porous structure allows infiltration and proliferation of host cells, as shown by the presence of infiltrated cells integrated into the cryogel implants. Histological analysis confirmed that the implanted cryogels have no adverse effect, despite recognition by the host immune system, on the surrounding tissues at the site of implantation or on other vital host organs. The in vivo biocompatibility study, following the in vitro biocompatibility analysis, concluded that these synthesized cryogels act as important biological substitutes that are adaptable and appropriate for transplantation. Thus, these cryogels show potential for soft tissue engineering applications.

Keywords: cryogelation, hemocompatibility, in vitro biocompatibility, in vivo biocompatibility, soft tissue engineering applications

Procedia PDF Downloads 193