Search results for: Processing Parameter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2662

2362 Optimization of Loudspeaker Part Design Parameters by Air Viscosity Damping Effect

Authors: Yue Hu, Xilu Zhao, Takao Yamaguchi, Manabu Sasajima, Yoshio Koike, Akira Hara

Abstract:

This study optimized the design parameters of a cone loudspeaker as an example of a highly flexible product design. We developed an acoustic analysis software program that considers the impact of damping caused by air viscosity. In sound reproduction, it is difficult to optimize each parameter of the loudspeaker design. To overcome this practical limitation of the design problem, this study presents an acoustic analysis algorithm for optimizing the design parameters of the loudspeaker. The material properties of the cone paper and the loudspeaker edge were the design parameters, and the vibration displacement of the cone paper was the objective function. The results of the analysis showed that the optimized design agreed closely with the predicted values. These results suggest that although parameter design is difficult when it relies only on experience and intuition, it can be performed easily using the optimized design found with the acoustic analysis software.

Keywords: Air viscosity, design parameters, loudspeaker, optimization.

2361 Lagrange's Inversion Theorem and Infiltration

Authors: Pushpa N. Rathie, Prabhata K. Swamee, André L. B. Cavalcante, Luan Carlos de S. M. Ozelim

Abstract:

Implicit equations play a crucial role in engineering. Given this importance, several techniques have been applied to solve this particular class of equations. In practical applications, iterative procedures are generally used. On the other hand, with the improvement of computers, other numerical methods have been developed to provide a more straightforward methodology of solution. Exact analytical approaches seem to have been continuously neglected due to the difficulty inherent in their application; notwithstanding, they are indispensable for validating numerical routines. Lagrange's Inversion Theorem is a simple mathematical tool which has proved to be widely applicable to engineering problems. In short, it provides the solution to implicit equations by means of an infinite series. To show the validity of this method, the three-parameter infiltration equation is, for the first time, solved analytically and exactly. After manipulating these series, closed-form solutions are presented as H-functions.
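
For reference, a common statement of the theorem (a standard form, not necessarily the exact notation used by the authors): if y = f(x) with f(a) = b and f'(a) ≠ 0, the inverse can be expanded as the series

```latex
x = a + \sum_{n=1}^{\infty} \frac{(y-b)^n}{n!}
    \lim_{x \to a} \frac{\mathrm{d}^{n-1}}{\mathrm{d}x^{n-1}}
    \left[ \left( \frac{x-a}{f(x)-b} \right)^{\!n} \right]
```

which is the mechanism by which an implicit equation is turned into an explicit infinite series.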

Keywords: Green-Ampt Equation, Lagrange's Inversion Theorem, Talsma-Parlange Equation, Three-Parameter Infiltration Equation

2360 Improving Cleanability by Changing Fish Processing Equipment Design

Authors: Lars A. L. Giske, Ola J. Mork, Emil Bjoerlykhaug

Abstract:

The design of fish processing equipment greatly impacts how easy the cleaning process for the equipment is. This is a critical issue in fish processing, as cleaning of fish processing equipment is a task that is both costly and time consuming, in addition to being very important with regard to product quality. Even more, poorly cleaned equipment could in the worst case lead to a contaminated product from which consumers could get ill. This paper will elucidate how equipment design changes could improve the work for the cleaners and save money for the fish processing facilities by looking at a case of product design improvements. "Design for cleaning" is the new hype in the industry, and equipment in which ease of cleaning is prioritized gains a competitive advantage over equipment in which it has not been prioritized. Design for cleaning is an important research area for equipment manufacturers. SeaSide AS is continuously improving the design of its products in order to gain a competitive advantage. The focus in this paper will be conveyors for internal logistics, and a product called the "electro stunner" will be studied with regard to design for cleaning. Often together with SeaSide's customers, ideas for new products or product improvements are sketched out, 3D-modelled, discussed, revised, built and delivered. Feedback from the customers is taken into consideration, and the product design is revised once again. This loop was repeated multiple times and led to new product designs. The new designs sometimes also caused the manufacturing processes to change (as in going from bolted to welded connections). Customers report back that the concrete changes applied to products by SeaSide have resulted in overall more easily cleaned equipment. These changes include, but are not limited to: welded connections (as opposed to bolted connections), gaps between contact faces, opening up structures to allow cleaning "inside" equipment, and generally avoiding areas in which humidity and water may gather and build up. This is important, as there will always be bacteria in the water which will grow if the area never dries up. The work of creating more cleanable designs is still ongoing and will "never" be finished, as new designs and new equipment will have their own challenges.

Keywords: Cleaning, design, equipment, fish processing, innovation.

2359 Study on Crater Detection Using FLDA

Authors: Yoshiaki Takeda, Norifumi Aoyama, Takahiro Tanaami, Syouhei Honda, Kenta Tabata, Hiroyuki Kamata

Abstract:

In this paper, we validate crater detection in lunar surface images using Fisher Linear Discriminant Analysis (FLDA). This proposal assumes application to the SLIM (Smart Lander for Investigating Moon) project, which aims at pin-point landing on the lunar surface. The point where the lander should land is judged from the positional relations of the craters obtained via camera, so real-time image processing becomes an important element. Moreover, the SLIM project assumes a 400 kg-class lander; therefore, high-performance computers for image processing cannot be carried. We have been studying various crater detection methods such as Haar-like features, LBP, and PCA. Although we consider these methods appropriate for the project, their ability to identify unlearned images obtained in actual operation is insufficient. In this paper, we examine crater detection using FLDA and compare it with the conventional methods.
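
The authors' pipeline is not reproduced here; the sketch below only illustrates the FLDA step, assuming crater and non-crater image patches are flattened into feature vectors. The random data, patch size, and labels are hypothetical stand-ins for labelled lunar surface patches.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical training data: flattened 16x16 grayscale patches (rows),
# with labels 1 = crater, 0 = non-crater.
rng = np.random.default_rng(0)
X_train = rng.random((200, 16 * 16))
y_train = rng.integers(0, 2, size=200)

# Fisher Linear Discriminant Analysis: project patches onto the direction
# that best separates the two classes, then classify.
flda = LinearDiscriminantAnalysis()
flda.fit(X_train, y_train)

X_new = rng.random((5, 16 * 16))   # patches extracted from a new image
print(flda.predict(X_new))         # 1 where a crater is detected
```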

Keywords: Crater Detection, Fisher Linear Discriminant Analysis, Haar-Like Feature, Image Processing.

2358 Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of Complex Valued Generalized Mean Neuron Model

Authors: Anupama Pande, Ashok Kumar Thakur, Swapnoneel Roy

Abstract:

A complex-valued neural network is a neural network which has complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in signal processing. Among neural network architectures, the generalized mean neuron (GMN) model is often discussed and studied. The GMN includes a new aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper aims to present exhaustive results of using the generalized mean neuron model in a complex-valued neural network that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of the generalized mean neuron model in the complex plane for signal processing, compared with a real-valued neural network. We have studied and reported various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for error convergence of the generalized mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
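
As background, one standard way of writing the generalized mean aggregation (the exact weighting and complex-valued treatment used by the authors may differ) is

```latex
y = f\!\left( \left( \sum_{i=1}^{n} w_i \, x_i^{\,r} \right)^{1/r} \right)
```

where r is the order of the generalized mean; r = 1 recovers the ordinary weighted-sum neuron before the activation f.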

Keywords: Complex valued neural network, Generalized mean neuron model, Signal processing.

2357 Mechanism of Alcohol Related Disruption of the Error Monitoring and Processing System

Authors: M. O. Welcome, Y. E. Razvodovsky, E. V. Pereverzeva, V. A. Pereverzev

Abstract:

The error monitoring and processing system (EMPS) is located in the substantia nigra of the midbrain, the basal ganglia, and the cortex of the forebrain, and plays a leading role in error detection and correction. The main components of the EMPS are the dopaminergic system and the anterior cingulate cortex. Although recent studies show that alcohol disrupts the EMPS, the ways in which alcohol affects this system are poorly understood. Based on current literature data, we suggest a hypothesis of an alcohol-related, glucose-dependent system of error monitoring and processing, which holds that disruption of the EMPS is related to the competency of glucose homeostasis regulation, which in turn may determine the dopamine level as a major component of the EMPS. Alcohol may indirectly disrupt the EMPS by affecting the dopamine level through disorders of blood glucose homeostasis regulation.

Keywords: Alcohol related disruption, Error monitoring and processing system, Mechanism.

2356 Synthesis of a Control System of a Deterministic Chaotic Process in the Class of Two-Parameter Structurally Stable Mappings

Authors: M. Beisenbi, A. Sagymbay, S. Beisembina, A. Satpayeva

Abstract:

In this paper, the problem of unstable and deterministic chaotic processes in control systems is considered. The synthesis of a control system in the class of two-parameter structurally stable mappings is demonstrated. This is realized via the gradient-velocity method of Lyapunov vector functions. It is shown that the gradient-velocity method of Lyapunov vector functions allows the generation of an aperiodic, robustly stable system with the desired characteristics. A simple solution to the problem of synthesis of control systems for unstable and deterministic chaotic processes is obtained. Moreover, it is applicable to complex systems.

Keywords: Control system synthesis, deterministic chaotic processes, Lyapunov vector function, robust stability, structurally stable mappings.

2355 Levenberg-Marquardt Algorithm for Karachi Stock Exchange Share Rates Forecasting

Authors: Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani, C. Ardil

Abstract:

Financial forecasting is an example of a signal processing problem. A number of ways to train the network are available. We have used the Levenberg-Marquardt algorithm for error back-propagation and weight adjustment. Pre-processing of the data has reduced much of the large-scale variation to a smaller scale, reducing the variation of the training data.
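
For context, the standard Levenberg-Marquardt weight update used in neural network training (general form; the paper's exact notation may differ) is

```latex
\Delta \mathbf{w} = -\left( \mathbf{J}^{\top}\mathbf{J} + \mu \mathbf{I} \right)^{-1} \mathbf{J}^{\top} \mathbf{e}
```

where J is the Jacobian of the network errors e with respect to the weights and the damping parameter μ interpolates between gradient descent (large μ) and the Gauss-Newton step (small μ).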

Keywords: Gradient descent method, Jacobian matrix, Levenberg-Marquardt algorithm, quadratic error surfaces.

2354 Segmentation of Gray Scale Images of Dropwise Condensation on Textured Surfaces

Authors: Helene Martin, Solmaz Boroomandi Barati, Jean-Charles Pinoli, Stephane Valette, Yann Gavet

Abstract:

In the present work, we developed an image processing algorithm to measure water droplet characteristics during dropwise condensation on pillared surfaces. The main problem in this process is the similarity in shape and size between the water droplets and the pillars. The developed method divides droplets into four main groups based on their size and applies the corresponding algorithm to segment each group. These algorithms generate binary images of droplets based on both their geometrical and intensity properties. The information related to droplet evolution over time, including mean radius and number of drops per unit area, is then extracted from the binary images. The developed image processing algorithm is verified using manual detection and applied to two different sets of images corresponding to two kinds of pillared surfaces.
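
This is not the authors' four-group pipeline; it is a minimal sketch of the kind of threshold-plus-watershed segmentation the keywords point to, assuming droplets appear darker than the background. The file name and the marker threshold factor are hypothetical.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, measure, segmentation

img = io.imread("frame_0001.png", as_gray=True)        # hypothetical frame
binary = img < filters.threshold_otsu(img)             # droplets assumed darker

# Seed the watershed with the peaks of the distance transform, so touching
# droplets (or droplets touching pillars) are split into separate labels.
distance = ndi.distance_transform_edt(binary)
markers, _ = ndi.label(distance > 0.5 * distance.max())
labels = segmentation.watershed(-distance, markers, mask=binary)

# Per-droplet statistics: equivalent radius and drop count per image.
props = measure.regionprops(labels)
radii = [np.sqrt(p.area / np.pi) for p in props]
print(len(props), np.mean(radii))
```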

Keywords: Dropwise condensation, textured surface, image processing, watershed.

2353 Image Processing on Geosynthetic Reinforced Layers to Evaluate Shear Strength and Variations of the Strain Profiles

Authors: S. K. Khosrowshahi, E. Güler

Abstract:

This study investigates the reinforcement function of geosynthetics on the shear strength and strain profile of sand. By conducting a series of simple shear tests, the shearing behavior of the samples under static and cyclic loads was evaluated. Three different types of geosynthetics, including a geotextile and geonets, were used as the reinforcement materials. An image processing analysis based on the optical flow method was performed to measure the lateral displacements and estimate the shear strains. It is shown that besides improving the shear strength, the geosynthetic reinforcement leads to a remarkable reduction in the shear strains. The improved layer reduces the thickness of the soil layer required to resist shear stresses. Consequently, geosynthetic reinforcement can be considered a proper approach for sustainable design, especially in projects with a large amount of geotechnical work such as pavement subgrades, roadways, and railways.
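
As an illustration of the optical-flow step only: the sketch below obtains a dense displacement field from a pair of test images and derives a simple shear-strain estimate. The frame names and the strain definition (γ ≈ ∂u/∂y on the image grid) are assumptions for illustration, not the authors' exact procedure.

```python
import cv2
import numpy as np

prev = cv2.imread("before.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
curr = cv2.imread("after.png", cv2.IMREAD_GRAYSCALE)

# Dense Farneback optical flow; flow[..., 0] is the horizontal displacement u.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
u = flow[..., 0]

# Simple engineering shear strain estimate: vertical gradient of the lateral
# displacement, averaged over the sample region.
du_dy = np.gradient(u, axis=0)
print("mean shear strain estimate:", du_dy.mean())
```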

Keywords: Image processing, soil reinforcement, geosynthetics, simple shear test, shear strain profile.

2352 Confidence Intervals for the Coefficients of Variation with Bounded Parameters

Authors: Jeerapa Sappakitkamjorn, Sa-aat Niwitpong

Abstract:

In many practical applications in various areas, such as engineering, science and social science, it is known that there exist bounds on the values of unknown parameters. Examples include the values of some measurements for controlling machines in an industrial process, the weight or height of subjects, the blood pressure of patients, and the retirement ages of public servants. When interval estimation is considered in a situation where the parameter to be estimated is bounded, it has been argued that the classical Neyman procedure for setting confidence intervals is unsatisfactory. This is due to the fact that the information regarding the restriction is simply ignored. It is, therefore, of significant interest to construct confidence intervals for the parameters that incorporate the additional information that the parameter values are bounded, to enhance the accuracy of the interval estimation. In this paper, we therefore propose a new confidence interval for the coefficient of variation where the population mean and standard deviation are bounded. The proposed interval is evaluated in terms of coverage probability and expected length via Monte Carlo simulation.
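
The proposed interval itself is not reproduced here; the sketch below only shows how coverage probability and expected length are typically estimated by Monte Carlo simulation, using a simple normal-approximation interval for the coefficient of variation as a stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, trials = 10.0, 2.0, 30, 5_000
true_cv = sigma / mu

hits, lengths = 0, []
for _ in range(trials):
    x = rng.normal(mu, sigma, size=n)
    cv_hat = x.std(ddof=1) / x.mean()
    # Delta-method standard error of the sample CV (stand-in interval only,
    # not the bounded-parameter interval proposed in the paper).
    se = cv_hat * np.sqrt(0.5 / (n - 1) + cv_hat**2 / n)
    lo, hi = cv_hat - 1.96 * se, cv_hat + 1.96 * se
    hits += lo <= true_cv <= hi
    lengths.append(hi - lo)

print("coverage probability:", hits / trials)
print("expected length:", np.mean(lengths))
```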

Keywords: Bounded parameters, coefficient of variation, confidence interval, Monte Carlo simulation.

2351 Parallel Vector Processing Using Multi Level Orbital DATA

Authors: Nagi Mekhiel

Abstract:

Many applications use vector operations by applying a single instruction to multiple data elements that map to different locations in conventional memory. Transferring data from memory is limited by access latency and bandwidth, which affects the performance gain of vector processing. We present a memory system that makes all of its content available to processors in time so that processors need not access the memory: each location is forced to be available to all processors at a specific time. The data move in different orbits to become available to other processors in higher orbits at different times. We use this memory to apply parallel vector operations to data streams at the first orbit level. Data processed in the first level move to the upper orbit one data element at a time, allowing a processor in that orbit to apply another vector operation to deal with the serial code limitations inherent in all parallel applications and to interleave it with lower-level vector operations.

Keywords: Memory organization, parallel processors, serial code, vector processing.

2350 Nonconforming Control Charts for Zero-Inflated Poisson Distribution

Authors: N. Katemee, T. Mayureesawan

Abstract:

This paper develops a c-chart for a zero-inflated Poisson (ZIP) process approximated by a geometric distribution with parameter p. The estimated p that fits the ZIP distribution is used to calculate the mean, median, and variance of the geometric distribution, from which the c-chart is constructed by three different methods. For the cg-chart, the control limits are constructed from the mean and variance of the geometric distribution. For the cmg-chart, only the mean is used to construct the control limits. The cme-chart derives the control limits of the c-chart from the median and variance of the geometric distribution. The performance of the charts is assessed in terms of the average run length and the average coverage probability. We found that, for an in-control process, the cg-chart is superior for low levels of the mean at all levels of the proportion of zeros. For an out-of-control process, the cmg-chart and cme-chart are the best for mean = 2, 3 and 4 at all levels of the parameter.
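
A minimal sketch of 3-sigma control limits computed from the mean and variance of a geometric(p) approximation, in the spirit of the cg-chart described above; the parameterisation (number of failures before the first success) is an assumption, not necessarily the one used by the authors.

```python
import numpy as np

def geometric_cchart_limits(p):
    """3-sigma c-chart limits from a geometric(p) approximation,
    counting failures before the first success (assumed form)."""
    mean = (1 - p) / p            # mean of the geometric count
    var = (1 - p) / p**2          # variance of the geometric count
    ucl = mean + 3 * np.sqrt(var)
    lcl = max(mean - 3 * np.sqrt(var), 0.0)   # counts cannot be negative
    return lcl, mean, ucl

print(geometric_cchart_limits(0.3))
```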

Keywords: Average coverage probability, average run length, geometric distribution, zero-inflated Poisson distribution.

2349 Material Parameter Identification of Modified AbdelKarim-Ohno Model

Authors: M. Cermak, T. Karasek, J. Rojicek

Abstract:

A key prerequisite in the phenomenological modelling of cyclic plasticity is a good understanding of the stress-strain behaviour of the given material. There are many models describing the behaviour of materials using numerous parameters and constants. The combination of individual parameters in those material models largely determines whether observed and predicted results agree. Parameter identification techniques such as the random gradient method, genetic algorithms, and sensitivity analysis are used to identify parameters by means of numerical modelling and simulation. In this paper, a genetic algorithm and sensitivity analysis are used to study the effect of four parameters of the modified AbdelKarim-Ohno cyclic plasticity model. Results predicted by finite element (FE) simulation are compared with experimental data from a biaxial ratcheting test with a semi-elliptical loading path.
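
This is not the authors' genetic algorithm; it is a minimal sketch of the general identification loop, using SciPy's differential evolution as a stand-in evolutionary optimizer and a placeholder model in place of the FE simulation. The four parameter bounds, the model, and the data are all hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical "experimental" stress-strain data (strain, stress in MPa).
rng = np.random.default_rng(0)
strain_exp = np.linspace(0.0, 0.02, 50)
stress_exp = 200e3 * strain_exp / (1.0 + 40.0 * strain_exp) + 5.0 * rng.normal(size=50)

def model(params, strain):
    """Placeholder constitutive response standing in for the FE simulation
    of the modified AbdelKarim-Ohno model (not the real model)."""
    c1, c2, c3, c4 = params
    return c1 * strain / (1.0 + c2 * strain) + c3 * strain**2 + c4

def misfit(params):
    # Objective: squared difference between simulated and experimental curves.
    return np.sum((model(params, strain_exp) - stress_exp) ** 2)

bounds = [(1e4, 5e5), (1.0, 100.0), (-1e4, 1e4), (-10.0, 10.0)]   # hypothetical
result = differential_evolution(misfit, bounds, seed=1, maxiter=200)
print(result.x, result.fun)
```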

Keywords: Genetic algorithm, sensitivity analysis, inverse approach, finite element method, cyclic plasticity, ratcheting.

2348 Segmentation of Piecewise Polynomial Regression Model by Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

The piecewise polynomial regression model is a very flexible model for modeling data. When the piecewise polynomial regression model is fitted to data, its parameters are generally unknown. This paper studies the parameter estimation problem of the piecewise polynomial regression model. The method used to estimate the parameters of the piecewise polynomial regression model is a Bayesian one. Unfortunately, the Bayes estimator cannot be found analytically. A reversible jump MCMC algorithm is proposed to solve this problem. The reversible jump MCMC algorithm generates a Markov chain that converges to the posterior distribution of the piecewise polynomial regression model parameters. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise polynomial regression model.
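
For reference, the generic reversible jump acceptance probability for a move from state (k, θ_k) to (k', θ_{k'}) proposed with auxiliary variables u (Green's general form, not the paper's specific moves) is

```latex
\alpha = \min\left\{ 1,\;
  \frac{\pi(k',\theta_{k'} \mid y)\, j(k \mid k')\, g_{k' \to k}(u')}
       {\pi(k,\theta_{k} \mid y)\, j(k' \mid k)\, g_{k \to k'}(u)}
  \left| \frac{\partial (\theta_{k'}, u')}{\partial (\theta_{k}, u)} \right|
\right\}
```

where j(· | ·) is the probability of proposing the model jump and the last factor is the Jacobian of the deterministic dimension-matching transformation.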

Keywords: Piecewise, Bayesian, reversible jump MCMC, segmentation.

2347 Empirical Survey of the Solar System Based on the Fusion of GPS and Image Processing

Authors: S. Divya Gnanarathinam, S. Sundaramurthy

Abstract:

The tremendous increase in the world's population creates an immediate need for energy resources. People everywhere need sustainable, low-cost energy resources. Solar energy is appraised as one of the main energy resources in warm countries. Areas in the west of India, such as Rajasthan and Gujarat, are immensely rich in solar energy resources. This paper deals with the development of a dual-axis solar tracker using an Arduino board. Based on astronomical estimates of the sun's position from GPS and on sensor image processing outcomes, a methodology is proposed to locate the position of the sun so as to obtain the maximum solar energy. The solar tracking system then decides whether to use the image processing outcomes or the astronomical estimates to attain the maximum efficiency of the solar panel. Finally, the experimental values obtained from the solar tracker for both sunny and rainy days are tabulated.

Keywords: Dual axis solar tracker, Arduino board, LDR sensors, global positioning system.

2346 Automatic Segmentation of Retina Vessels by Using Zhang Method

Authors: Ehsan Saghapour, Somayeh Zandian

Abstract:

Image segmentation is an important step in image processing. Major developments in medical imaging allow physicians to use potent and non-invasive methods to evaluate structures and performance and to diagnose human diseases. In this study, an active contour was used to extract vessel networks from color retina images. Automatic analysis of retinal vessels facilitates calculation of the arterial index, which is required to diagnose certain retinopathies.

Keywords: Active contour, retinal vessel segmentation, image processing.

2345 High-Speed Pipeline Implementation of Radix-2 DIF Algorithm

Authors: Christos Meletis, Paul Bougas, George Economakos , Paraskevas Kalivas, Kiamal Pekmestzi

Abstract:

In this paper, we propose a new architecture for the implementation of the N-point Fast Fourier Transform (FFT), based on the radix-2 decimation-in-frequency algorithm. This architecture is based on a pipeline circuit that can process a stream of samples and produce two FFT output samples every clock cycle. Compared to existing implementations, the proposed architecture achieves double the processing speed with the same circuit complexity.
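
The hardware pipeline itself is not reproduced here; the sketch below is only a software rendering of the radix-2 decimation-in-frequency recursion that such an architecture implements.

```python
import numpy as np

def fft_dif(x):
    """Recursive radix-2 decimation-in-frequency FFT (length must be a power of 2)."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    half = n // 2
    a, b = x[:half], x[half:]
    twiddle = np.exp(-2j * np.pi * np.arange(half) / n)
    even = fft_dif(a + b)               # yields X[0], X[2], ...
    odd = fft_dif((a - b) * twiddle)    # yields X[1], X[3], ...
    out = np.empty(n, dtype=complex)
    out[0::2] = even
    out[1::2] = odd
    return out

# Sanity check against NumPy's FFT.
x = np.random.rand(8)
assert np.allclose(fft_dif(x), np.fft.fft(x))
```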

Keywords: Digital signal processing, systolic circuits, FFT algorithm.

2344 Non-Isothermal Kinetics of Crystallization and Phase Transformation of SiO2-Al2O3-P2O5-CaO-CaF Glass

Authors: Bogdan Il. Bogdanov, Plamen S. Pashev, Yancho H. Hristov, Dimitar P.Georgiev, Irena G. Markovska

Abstract:

The crystallization kinetics and phase transformation of SiO2·Al2O3·0.56P2O5·1.8CaO·0.56CaF2 glass have been investigated using differential thermal analysis (DTA), X-ray diffraction (XRD), and scanning electron microscopy (SEM). Glass samples were obtained by melting the glass mixture at 1450 °C for 120 min in platinum crucibles. The mixture was prepared from chemically pure reagents: SiO2, Al(OH)3, H3PO4, CaCO3 and CaF2. The non-isothermal kinetics of crystallization was studied by applying DTA measurements carried out at various heating rates. The activation energies of crystallization and viscous flow were measured as 348.4 kJ·mol⁻¹ and 479.7 kJ·mol⁻¹, respectively. The value of the Avrami parameter, n ≈ 3, corresponds to a three-dimensional crystal growth mechanism. The major crystalline phase determined by XRD analysis was fluorapatite (Ca5(PO4)3F), with fluor-margarite (CaAl2(Al2Si2O10)F2) and whitlockite (Ca9P6O24) as minor phases. The resulting glass-ceramic has a homogeneous microstructure, composed of prismatic crystals evenly distributed in the glass phase.
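
One common way to extract the activation energy from DTA peaks recorded at several heating rates is the Kissinger relation (whether this exact method was used here is not stated in the abstract):

```latex
\ln\!\left( \frac{\beta}{T_p^{2}} \right) = -\frac{E_a}{R\,T_p} + \text{const}
```

where β is the heating rate, T_p the DTA peak temperature, E_a the activation energy, and R the gas constant; E_a follows from the slope of ln(β/T_p²) plotted against 1/T_p.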

Keywords: Glass-ceramic, crystallization, non-isothermal kinetics, Avrami parameter.

2343 Modelling Conditional Volatility of Saving Rate by a Time-Varying Parameter Model

Authors: Katleho D. Makatjane, Kalebe M. Kalebe

Abstract:

The present paper uses time-varying parameters, based on the score function of a probability density at time t, to model the volatility of the saving rate. We used a scaled likelihood function to update the parameters of the model over time. Our results revealed a high degree of time variation, since the location parameter is greater than zero. Furthermore, we discovered a leptokurtic condition in the saving rate's distribution. The Kapetanios-Shin-Snell Nonlinear Augmented Dickey-Fuller (KSS-NADF) test showed that the saving rate has a nonlinear unit root; therefore, it can be modeled by a generalized autoregressive score (GAS) model. Additionally, the value at risk (VaR) and conditional tail expectation (CTE) indicate that 99% of the time people in Lesotho are saving more than they are spending. This puts the economy at high risk of not expanding. Therefore, the monetary policy committee (MPC) of Lesotho should revise its monetary policies to address this high-saving-rate risk.
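
For context, the standard GAS(1,1) updating recursion driven by the scaled score (general form; the paper's scaling choice may differ) is

```latex
f_{t+1} = \omega + A\, s_t + B\, f_t,
\qquad
s_t = S_t \,\nabla_t,
\qquad
\nabla_t = \frac{\partial \ln p(y_t \mid f_t)}{\partial f_t}
```

where f_t is the time-varying parameter, ∇_t the score of the conditional density, and S_t a scaling matrix.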

Keywords: Generalized autoregressive score, time-varying, saving rate, Lesotho.

2342 The Hyperbolic Smoothing Approach for Automatic Calibration of Rainfall-Runoff Models

Authors: Adilson Elias Xavier, Otto Corrêa Rotunno Filho, Paulo Canedo de Magalhães

Abstract:

This paper addresses the issue of automatic parameter estimation in conceptual rainfall-runoff (CRR) models. Due to threshold structures commonly occurring in CRR models, the associated mathematical optimization problems have the significant characteristic of being strongly non-differentiable. In order to face this enormous task, the resolution method proposed adopts a smoothing strategy using a special C∞ differentiable class function. The final estimation solution is obtained by solving a sequence of differentiable subproblems which gradually approach the original conceptual problem. The use of this technique, called Hyperbolic Smoothing Method (HSM), makes possible the application of the most powerful minimization algorithms, and also allows for the main difficulties presented by the original CRR problem to be overcome. A set of computational experiments is presented for the purpose of illustrating both the reliability and the efficiency of the proposed approach.
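
As an illustration of the smoothing idea, the hyperbolic smoothing function commonly used in this approach replaces the non-differentiable ramp max(y, 0) arising from threshold structures with the C∞ approximation

```latex
\phi(y, \tau) = \frac{y + \sqrt{y^{2} + \tau^{2}}}{2},
\qquad
\phi(y, \tau) \;\longrightarrow\; \max(y, 0) \ \text{as} \ \tau \to 0
```

so the smoothed subproblems are differentiable for any τ > 0 and approach the original conceptual problem as τ is driven to zero.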

Keywords: Rainfall-runoff models, optimization procedure, automatic parameter calibration, hyperbolic smoothing method.

2341 Osmotic Dehydration of Beetroot in Salt Solution: Optimization of Parameters through Statistical Experimental Design

Authors: P. Manivannan, M. Rajasimman

Abstract:

Response surface methodology was used for the quantitative investigation of water and solids transfer during osmotic dehydration of beetroot in an aqueous salt solution. The effects of temperature (25–45 °C), processing time (30–150 min), salt concentration (5–25%, w/w) and solution-to-sample ratio (5:1–25:1) on the osmotic dehydration of beetroot were estimated. Quadratic regression equations describing the effects of these factors on the water loss and solids gain were developed. It was found that the effects of temperature and salt concentration on water loss were more significant than the effects of processing time and solution-to-sample ratio. For solids gain, processing time and salt concentration were the most significant factors. The osmotic dehydration process was optimized for water loss, solute gain, and weight reduction. The optimum conditions were found to be: temperature 35 °C, processing time 90 min, salt concentration 14.31%, and solution-to-sample ratio 8.5:1. At these optimum values, water loss, solids gain and weight reduction were found to be 30.86, 9.43 and 21.43 g/100 g of initial sample, respectively.
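
For reference, the quadratic (second-order) response surface model typically fitted in such studies, with the four factors written generically as X1 to X4 (this is the generic RSM form, not the authors' fitted coefficients), is

```latex
Y = \beta_0
  + \sum_{i=1}^{4} \beta_i X_i
  + \sum_{i=1}^{4} \beta_{ii} X_i^{2}
  + \sum_{i<j} \beta_{ij} X_i X_j
  + \varepsilon
```

where Y is the response (water loss or solids gain) and the β terms are estimated by least squares from the designed experiments.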

Keywords: Optimization, osmotic dehydration, beetroot, salt solution, response surface methodology.

2340 A New Approach to Signal Processing for DC-Electromagnetic Flowmeters

Authors: Michael Schukat

Abstract:

Electromagnetic flowmeters with DC excitation are used for a wide range of fluid measurement tasks, but are rarely found in dosing applications with short measurement cycles due to the achievable accuracy. This paper will identify a number of factors that influence the accuracy of this sensor type when used for short-term measurements. Based on these results, a new signal-processing algorithm will be described that overcomes the identified problems to some extent. In principle, this new method allows higher accuracy for electromagnetic flowmeters with DC excitation than traditional methods.
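
This is not the author's algorithm; it is a minimal sketch of the kind of scalar Kalman filter the keywords suggest, estimating a slowly varying flow signal from noisy electrode readings. The signal shape and all noise parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
true_flow = 1.0 + 0.05 * np.sin(np.linspace(0, 4 * np.pi, 200))  # assumed signal
measurements = true_flow + rng.normal(0.0, 0.1, size=true_flow.size)

q, r = 1e-4, 0.1**2          # process and measurement noise variances (assumed)
x_est, p_est = 0.0, 1.0      # initial state estimate and its variance
estimates = []

for z in measurements:
    # Predict: random-walk model for the flow signal.
    p_pred = p_est + q
    # Update with the new measurement.
    k = p_pred / (p_pred + r)           # Kalman gain
    x_est = x_est + k * (z - x_est)
    p_est = (1.0 - k) * p_pred
    estimates.append(x_est)

print(estimates[-5:])
```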

Keywords: Electromagnetic flowmeter, Kalman filter, short measurement cycles, signal estimation.

2339 Sensitivity Parameter Analysis of Negative Moment Dynamic Load Allowance of Continuous T-Girder Bridge

Authors: Fan Yang, Ye-lu Wang, Yang Zhao

Abstract:

The dynamic load allowance, as an application result of vehicle-bridge coupled vibration theory, is an important parameter for bridge design and evaluation. Based on the coupled vehicle-bridge vibration theory, the current work establishes a full girder model for the dynamic load allowance, selects a planar five-degree-of-freedom three-axle vehicle model, solves the coupled vehicle-bridge dynamic response using the APDL language in the spatial finite element program ANSYS, and selects the section at pivot point 2 as the representative negative moment section. The effects of parameters such as travel speed, unevenness, vehicle frequency, span diameter, span number, and forced displacement of the support on the negative moment dynamic load allowance are analyzed through orthogonal tests, and the influence law of each parameter is summarized. It is found that the effects of vehicle frequency, unevenness, and speed on the negative moment dynamic load allowance are significant, with vehicle frequency having the greatest effect; the effects of span number and span diameter are relatively small; and the effect of forced displacement of the support is negligible.
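
For reference, a common definition of the dynamic load allowance relates the dynamic and static responses at a section (general definition; the code-specific form used in the paper may differ):

```latex
\mu = \frac{R_{\mathrm{dyn}} - R_{\mathrm{stat}}}{R_{\mathrm{stat}}},
\qquad
\text{dynamic amplification factor} = 1 + \mu
```

where R_dyn and R_stat are the dynamic and static values of the response (here, the negative moment at the selected section).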

Keywords: Continuous T-girder bridge, dynamic load allowance, sensitivity analysis, vehicle-bridge coupling.

2338 Mathematical Expression for Machining Performance

Authors: Md. Ashikur Rahman Khan, M. M. Rahman

Abstract:

In electrical discharge machining (EDM), a complete and clear theory has not yet been established. The theories developed so far (physical models) yield results far from reality due to the complexity of the physics. It is difficult to select proper parameter settings in order to achieve better EDM performance. However, modelling can solve this critical problem concerning the parameter settings. Therefore, the purpose of the present work is to develop mathematical models to predict the performance characteristics of EDM on Ti-5Al-2.5Sn titanium alloy. The response surface method (RSM) and an artificial neural network (ANN) are employed to develop the mathematical models. The developed models are verified through analysis of variance (ANOVA). The ANN models are trained, tested, and validated utilizing a set of data. It is found that the developed ANN and RSM models can predict EDM performance effectively. Thus, the models provide a precise tool that makes the EDM process more cost-effective and efficient.

Keywords: Analysis of variance, artificial neural network, material removal rate, modelling, response surface method, surface finish.

2337 High Performance in Parallel Data Integration: An Empirical Evaluation of the Ratio Between Processing Time and Number of Physical Nodes

Authors: Caspar von Seckendorff, Eldar Sultanow

Abstract:

Many studies have shown that parallelization decreases efficiency [1], [2]. There are many reasons for this decrease. This paper investigates those which appear in the context of parallel data integration. Integration processes generally cannot be allocated to packages of identical size (i.e., tasks of identical complexity). The reason for this is unknown, heterogeneous input data, which result in variable task lengths. Process delay is determined by the slowest processing node and has a detrimental effect on the total processing time. With a real-world example, this study will show that while process delay initially increases with the introduction of more nodes, it ultimately decreases again after a certain point. The example will make use of the cloud computing platform Hadoop and be run inside Amazon's EC2 compute cloud. A stochastic model will be set up which can explain this effect.
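
The authors' stochastic model is not reproduced here; the sketch below only illustrates why the slowest node sets the total runtime when heterogeneous tasks are split into packages. The lognormal task-length distribution and the task counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
total_tasks, trials = 10_000, 200

for nodes in (1, 2, 4, 8, 16, 32, 64):
    makespans = []
    for _ in range(trials):
        # Heterogeneous task lengths -> packages of unequal total work.
        tasks = rng.lognormal(mean=0.0, sigma=1.0, size=total_tasks)
        packages = np.array_split(tasks, nodes)
        # Total runtime is set by the busiest (slowest) node.
        makespans.append(max(p.sum() for p in packages))
    print(nodes, "nodes -> mean makespan:", round(float(np.mean(makespans)), 1))
```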

Keywords: Process delay, speedup, efficiency, parallel computing, data integration, E-Commerce, Amazon Elastic Compute Cloud (EC2), Hadoop, Nutch.

2336 High Level Synthesis of Digital Filters Based On Sub-Token Forwarding

Authors: Iyad F. Jafar, Sandra J. Alrawashdeh, Ban K. Alhamayel

Abstract:

High level synthesis (HLS) is a process which generates a register-transfer level design for digital systems from a behavioral description. There are many HLS algorithms and commercial tools. However, most of these algorithms consider a behavioral description for the system when a single token is presented to the system. This approach does not exploit extra hardware efficiently, especially in the design of digital filters where common operations may exist between successive tokens. In this paper, we modify the behavioral description to process multiple tokens in parallel. Unlike full parallel processing, however, this approach does not require full hardware replication; it exploits the presence of common operations between successive tokens. The performance of the proposed approach is better than that of sequential processing and approaches that of full parallel processing as the hardware resources are increased.

Keywords: Digital filters, High level synthesis, Sub-token forwarding

2335 An Efficient Implementation of High Speed Vedic Multiplier Using Compressors for Image Processing Applications

Authors: Shobha Sharma, Amita Dev, Akanksha Kant

Abstract:

Digital signal processors, image signal processors, and FIR filters have multipliers as an important part of their design. Based on Vedic mathematics, Vedic multipliers have turned out to be very fast multipliers. One of the image processing applications is edge detection. This research presents a small-area, high-speed 8-bit Vedic multiplier system comprising compressor-based adders. This results in faster edge detection. The architecture is tested on a Xilinx Virtex-4 FPGA board, and simulations were carried out using the Xilinx synthesis tool. Comparisons are made, and this system is found to be smaller in area and faster (lower propagation delay). This compressor-based Vedic multiplier is 1.1 times faster than a typical Vedic multiplier and 2 times faster than a 'simple' multiplier.
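
A minimal software sketch of the "vertically and crosswise" (Urdhva Tiryakbhyam) partial-product pattern that such multipliers implement in hardware; the compressor trees that reduce each column are not modelled, and the bit width is just a parameter here.

```python
def urdhva_multiply(a: int, b: int, n: int = 8) -> int:
    """Multiply two n-bit integers using the Urdhva Tiryakbhyam
    (vertically and crosswise) column pattern, LSB first."""
    a_bits = [(a >> i) & 1 for i in range(n)]
    b_bits = [(b >> i) & 1 for i in range(n)]

    # Column k collects the crosswise products a_i * b_j with i + j = k;
    # in hardware these columns are reduced by compressor adders.
    cols = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a_bits[i] * b_bits[j]

    # Ripple the carries through the columns to form the final product.
    result, carry = 0, 0
    for k, c in enumerate(cols):
        s = c + carry
        result |= (s & 1) << k
        carry = s >> 1
    return result | (carry << (2 * n - 1))

assert urdhva_multiply(173, 89) == 173 * 89
```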

Keywords: Detection of edges, Vedic multiplier, image processing, Urdhva Tiryakbhyam sutra.

2334 Using Textual Pre-Processing and Text Mining to Create Semantic Links

Authors: Ricardo Avila, Gabriel Lopes, Vania Vidal, Jose Macedo

Abstract:

This article offers an approach to the automatic discovery of semantic concepts and links in the domain of Oil Exploration and Production (E&P). Machine learning methods combined with textual pre-processing techniques were used to detect local patterns in texts and, thus, generate new concepts and new semantic links. Even using more specific vocabularies within the oil domain, our approach has achieved satisfactory results, suggesting that the proposal can be applied in other domains and languages, requiring only minor adjustments.

Keywords: Semantic links, data mining, linked data, SKOS.

2333 Trimmed Mean as an Adaptive Robust Estimator of a Location Parameter for Weibull Distribution

Authors: Carolina B. Baguio

Abstract:

One of the purposes of robust estimation is to reduce the influence of outliers in the data on the estimates. The outliers arise from gross errors or contamination from distributions with long tails. The trimmed mean is a robust estimator, meaning that it is not sensitive to violations of the distributional assumptions about the data. It is called an adaptive estimator when the trimming proportion is determined from the data rather than being fixed a priori. The main objective of this study is to determine the robustness properties of adaptive trimmed means in terms of efficiency, high breakdown point, and influence function. Specifically, it seeks the magnitude of the trimming proportion of the adaptive trimmed mean which will yield efficient and robust estimates of the parameter for data that follow a modified Weibull distribution with parameter λ = 1/2, where the trimming proportion is determined by a ratio of two trimmed means defined as the tail length. Secondly, the asymptotic properties of the tail length and the trimmed means are also investigated. Finally, a comparison is made of the efficiency of the adaptive trimmed means, in terms of the standard deviation, between data-determined trimming proportions and proportions fixed a priori. The asymptotic tail lengths, defined as the ratio of two trimmed means, and the asymptotic variances were computed using the derived formulas, while the values of the standard deviations of the derived tail lengths, for data of size 40 simulated from a Weibull distribution, were computed over 100 iterations using a computer program written in the Pascal language. The findings of the study revealed that the tail lengths of the Weibull distribution increase in magnitude as the trimming proportions increase; the tail length measure and the adaptive trimmed mean are asymptotically independent as the number of observations n approaches infinity; the tail length is asymptotically distributed as the ratio of two independent normal random variables; and the asymptotic variances decrease as the trimming proportions increase. The simulation study showed empirically that the standard error of the adaptive trimmed mean based on the tail-length ratio is smaller, over different values of the trimming proportion, than that of its counterpart with the trimming proportion fixed a priori.
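
A minimal sketch of the general idea of an adaptive trimmed mean, using SciPy's trim_mean. The tail-length statistic and the rule mapping it to a trimming proportion below are simplified placeholders for illustration, not the authors' formulas.

```python
import numpy as np
from scipy.stats import trim_mean

def adaptive_trimmed_mean(x):
    """Choose the trimming proportion from the data, then trim."""
    x = np.asarray(x, dtype=float)

    # Placeholder tail-length statistic: ratio of a lightly trimmed mean of
    # absolute deviations to a heavily trimmed one (longer tails -> larger ratio).
    dev = np.abs(x - np.median(x))
    tail_length = trim_mean(dev, 0.05) / trim_mean(dev, 0.25)

    # Placeholder rule: trim more as the tails get longer (capped at 25%).
    alpha = min(0.25, max(0.0, 0.1 * (tail_length - 1.0)))
    return trim_mean(x, alpha), alpha

rng = np.random.default_rng(2)
sample = rng.weibull(0.5, size=40)   # heavy-tailed Weibull(shape = 1/2) data
print(adaptive_trimmed_mean(sample))
```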

Keywords: Adaptive robust estimate, asymptotic efficiency, breakdown point, influence function, L-estimates, location parameter, tail length, Weibull distribution.
