Search results for: polynomial quintic spline
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 293

23 Efficient High Fidelity Signal Reconstruction Based on Level Crossing Sampling

Authors: Negar Riazifar, Nigel G. Stocks

Abstract:

This paper proposes strategies in level crossing (LC) sampling and reconstruction that provide high fidelity signal reconstruction for speech signals; these strategies circumvent the problem of exponentially increasing number of samples as the bit-depth is increased and hence are highly efficient. Specifically, the results indicate that the distribution of the intervals between samples is one of the key factors in the quality of signal reconstruction; including samples with short intervals does not improve the accuracy of the signal reconstruction, whilst samples with large intervals lead to numerical instability. The proposed sampling method, termed reduced conventional level crossing (RCLC) sampling, exploits redundancy between samples to improve the efficiency of the sampling without compromising performance. A reconstruction technique is also proposed that enhances the numerical stability through linear interpolation of samples separated by large intervals. Interpolation is demonstrated to improve the accuracy of the signal reconstruction in addition to the numerical stability. We further demonstrate that the RCLC and interpolation methods can give useful levels of signal recovery even if the average sampling rate is less than the Nyquist rate.
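
To make the level-crossing idea concrete, the following Python sketch (not the paper's RCLC algorithm; the test signal, quantizer levels and sampling rate are illustrative assumptions) records a sample whenever a signal crosses one of a fixed set of amplitude levels and then reconstructs the signal by linear interpolation of the non-uniform samples:

```python
import numpy as np

def level_crossing_sample(t, x, levels):
    """Record (time, value) pairs whenever x crosses one of the given levels."""
    samples_t, samples_x = [], []
    for i in range(1, len(x)):
        for L in levels:
            if (x[i-1] - L) * (x[i] - L) < 0:          # sign change -> level crossing
                # linear interpolation for the crossing instant
                tc = t[i-1] + (L - x[i-1]) * (t[i] - t[i-1]) / (x[i] - x[i-1])
                samples_t.append(tc)
                samples_x.append(L)
    order = np.argsort(samples_t)
    return np.array(samples_t)[order], np.array(samples_x)[order]

# toy speech-like signal (illustrative only)
fs = 8000
t = np.arange(0, 0.05, 1/fs)
x = 0.6*np.sin(2*np.pi*200*t) + 0.3*np.sin(2*np.pi*700*t)

levels = np.linspace(-1, 1, 2**4)          # levels of a 4-bit quantizer
ts, xs = level_crossing_sample(t, x, levels)

# simple reconstruction by linear interpolation of the non-uniform samples
x_rec = np.interp(t, ts, xs)
print("avg. sampling rate:", len(ts)/t[-1], "Hz, max error:", np.abs(x - x_rec).max())
```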

Keywords: Level crossing sampling, numerical stability, speech processing, trigonometric polynomial.

22 Fish Locomotion for Innovative Marine Propulsion Systems

Authors: Omar B. Yaakob, Yasser M. Ahmed, Ahmad F. Said

Abstract:

There is an essential need for obtaining a mathematical representation of fish body undulations, which can be used for designing and building new, innovative types of marine propulsion systems with less environmental impact. This research work presents a case study to derive a mathematical model for fish body movement. Observation and image-capture methods were used to obtain a mathematical representation of Clarias batrachus (catfish). An experiment was conducted using an aquarium with dimensions of 0.609 m x 0.304 m x 0.304 m, with a 0.5 m ruler attached at the base. A progressive scan monochrome camera was positioned 1.8 m above the base of the aquarium to record swimming sequences. Seven points were marked on the fish body with a white marker to track the fish movement and measure the amplitude of undulation. Images from video recordings (20 frames/s) were analyzed frame by frame in a local coordinate system with a time interval of 0.05 s. The amplitude of undulation was obtained from image analysis at each marked point on the fish body. A graph of undulation amplitude versus time was plotted and a mathematical fit was derived; the fitted function is a ninth-order polynomial.
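
A minimal sketch of the final fitting step, assuming hypothetical digitized amplitude data rather than the authors' measurements, is the ninth-order least-squares polynomial fit below:

```python
import numpy as np

# hypothetical digitized points: undulation amplitude (m) of one marked body point
# sampled every 0.05 s from the video frames (values are illustrative only)
t = np.arange(0, 1.0, 0.05)
amp = 0.02*np.sin(2*np.pi*1.5*t) + 0.003*np.random.default_rng(0).normal(size=t.size)

# ninth-order polynomial fit, as reported in the abstract
coeffs = np.polyfit(t, amp, deg=9)
amp_fit = np.polyval(coeffs, t)

rmse = np.sqrt(np.mean((amp - amp_fit)**2))
print("polynomial coefficients (highest order first):", coeffs)
print("RMSE of fit:", rmse)
```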

Keywords: Fish locomotion, body undulation, steady and unsteady swimming modes.

21 Nonlinear Model Predictive Control for Solid Oxide Fuel Cell System Based On Wiener Model

Authors: T. H. Lee, J. H. Park, S. M. Lee, S. C. Lee

Abstract:

In this paper, we consider a Wiener nonlinear model for a solid oxide fuel cell (SOFC). The Wiener model of the SOFC consists of a linear dynamic block followed by a static output nonlinearity; the linear part is approximated by a state-space model and the nonlinear part is identified in polynomial form. Controlling the SOFC system requires consideration of several aspects, such as operating conditions, constraints, and changes in load current; of these, the change in load current is the most significant for good performance of the SOFC system. In order to keep the stack terminal voltage constant under changing load current, a nonlinear model predictive control (MPC) scheme is proposed in this paper. After a primary control method is designed to keep the fuel utilization at a proper constant value, a nonlinear model predictive controller based on the Wiener model is developed to control the stack terminal voltage of the SOFC system. Simulation results verify the ability of the proposed Wiener model and MPC method to control the SOFC system.
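
The Wiener structure described above can be sketched as follows; the state-space matrices and the output polynomial are illustrative placeholders, not parameters identified from an actual SOFC:

```python
import numpy as np

def simulate_wiener(A, B, C, poly, u, x0=None):
    """Wiener model: linear state-space block followed by a static polynomial
    output nonlinearity y = poly(v), where v = C x is the linear-block output."""
    x = np.zeros(A.shape[0]) if x0 is None else x0
    y = np.empty(len(u))
    for k, uk in enumerate(u):
        v = C @ x                       # intermediate (unmeasured) linear output
        y[k] = np.polyval(poly, v)      # static nonlinearity
        x = A @ x + B * uk              # linear dynamics
    return y

# illustrative second-order linear block (not identified from real SOFC data)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
poly = [0.05, -0.4, 1.0]                # y = 0.05 v^2 - 0.4 v + 1.0

u = np.ones(100) * 0.5                  # step change in load current (arbitrary units)
y = simulate_wiener(A, B, C, poly, u)
print("steady-state stack voltage surrogate:", y[-1])
```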

Keywords: SOFC, model predictive control, Wiener model.

20 Pricing European Options under Jump Diffusion Models with Fast L-stable Padé Scheme

Authors: Salah Alrabeei, Mohammad Yousuf

Abstract:

The goal of option pricing theory is to help investors manage their money, enhance returns, and control their financial future by theoretically valuing their options. Modeling option prices with Black-Scholes models augmented with jumps accounts for market movement. However, such models can only be solved numerically, and not all numerical methods solve them efficiently because the payoffs are non-smooth and have discontinuous derivatives at the exercise price. In this paper, the exponential time differencing (ETD) method is applied to solve the partial integro-differential equations arising in pricing European options under Merton’s and Kou’s jump-diffusion models. The Fast Fourier Transform (FFT) algorithm is used for the matrix-vector multiplication, which reduces the complexity from O(M²) to O(M log M). A partial fraction form of the Padé schemes is used to avoid the cost of inverting matrix polynomials. These two tools yield efficient and accurate numerical solutions. We also construct a parallel, easy-to-implement version of the numerical scheme. Numerical experiments show that the scheme is fast and accurate.
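
The O(M log M) matrix-vector product can be illustrated for the special case of a circulant matrix, where the product reduces to a circular convolution computed with the FFT (a simplified sketch of the idea, not the paper's full discretization):

```python
import numpy as np

def circulant_matvec(c, v):
    """Multiply a circulant matrix (defined by its first column c) with v
    in O(M log M) using the FFT, instead of O(M^2) with a dense product."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)))

M = 8
c = np.random.default_rng(1).normal(size=M)   # first column of the circulant matrix
v = np.random.default_rng(2).normal(size=M)

# dense reference: build the full circulant matrix explicitly (columns are shifts of c)
C = np.array([np.roll(c, k) for k in range(M)]).T
assert np.allclose(C @ v, circulant_matvec(c, v))
print(circulant_matvec(c, v))
```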

Keywords: Integral differential equations, L-stable methods, pricing European options, Jump–diffusion model.

19 Study on Optimization Design of Pressure Hull for Underwater Vehicle

Authors: Qasim Idrees, Gao Liangtian, Liu Bo, Miao Yiran

Abstract:

In order to improve the efficiency and accuracy of pressure hull structure optimization for underwater vehicles, a design optimization method based on response surface methodology was studied. Five dimensions of the pressure hull were taken as design variables, and thin shell theory and the Chinese Classification Society (CCS) specification were applied in the preliminary design. To optimize the variables over the feasible region, several methods were studied and implemented: the optimal Latin hypercube design (Opt LHD) method to determine the test sample points in the feasible design space, parametric ABAQUS solutions of the response at each sample point, and a second-order polynomial response surface model of the structural limit load. With the ultimate load of the structure and the mass of the shell as objectives, a second-generation genetic algorithm was used to search the response surface, and the Pareto optimal solution set was obtained. The ultimate load of the final optimized design was 41.68% higher than that of the initial design, and the shell mass was reduced by about 27.26%. The parametric method ensures the accuracy of the test and improves the efficiency of the optimization.
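
A minimal sketch of the response-surface step, assuming synthetic sample points and limit-load values in place of the Opt LHD/ABAQUS data, fits a full second-order polynomial by least squares:

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Columns: 1, x_i, x_i^2, x_i*x_j  (full second-order response surface)."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i]**2 for i in range(k)]
    cols += [X[:, i]*X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

# hypothetical sample points (e.g. from an optimal Latin hypercube design) and
# limit-load responses (e.g. from parametric FE runs); values are illustrative
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 5))                  # 5 normalized design variables
y = 10 + X @ np.array([1.5, -0.8, 0.5, 0.3, -1.2]) + 0.6*X[:, 0]**2 + rng.normal(0, 0.05, 40)

A = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)          # least-squares coefficients
y_hat = A @ beta
print("R^2 =", 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2))
```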

Keywords: Parameterization, response surface, structure optimization, pressure hull.

18 High Order Accurate Runge Kutta Nodal Discontinuous Galerkin Method for Numerical Solution of Linear Convection Equation

Authors: Faheem Ahmed, Fareed Ahmed, Yongheng Guo, Yong Yang

Abstract:

This paper deals with a high-order accurate Runge-Kutta Discontinuous Galerkin (RKDG) method for the numerical solution of the wave equation, a simple case of a linear hyperbolic partial differential equation. A nodal DG method is used for the finite element space discretization in 'x' with discontinuous approximations. The method combines two key ideas drawn from the finite volume and finite element methods: the physics of wave propagation is accounted for by means of Riemann problems, and accuracy is obtained through high-order polynomial approximations within the elements. A high-order accurate Low Storage Explicit Runge-Kutta (LSERK) method is used for the temporal discretization in 't', which allows the method to be nonlinearly stable regardless of its accuracy. The resulting RKDG methods are stable and high-order accurate. The L1, L2 and L∞ error norm analysis shows that the scheme is highly accurate and effective. Hence, the method is well suited to achieving high-order accurate solutions for the scalar wave equation and other hyperbolic equations.
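
As a much-simplified stand-in for the nodal DG/LSERK scheme, the sketch below solves the linear advection equation with the exact (upwind) Riemann flux on a periodic grid and classical RK4 time stepping; it illustrates only the flux-plus-Runge-Kutta structure, not the high-order DG discretization itself:

```python
import numpy as np

def rhs(u, a, dx):
    """Periodic first-order finite-volume semi-discretization of u_t + a u_x = 0.
    The interface flux is the exact Riemann (upwind) flux for linear advection."""
    uL = np.roll(u, 1)                      # left neighbor of each cell
    f_left = np.where(a >= 0, a*uL, a*u)    # flux at the left face of each cell
    f_right = np.roll(f_left, -1)           # flux at the right face
    return -(f_right - f_left) / dx

# periodic domain, smooth initial condition
N, a, T = 200, 1.0, 1.0
x = np.linspace(0, 1, N, endpoint=False)
dx = 1.0 / N
u = np.sin(2*np.pi*x)
dt = 0.5 * dx / abs(a)                      # CFL-limited time step

t = 0.0
while t < T:
    h = min(dt, T - t)
    k1 = rhs(u, a, dx)                      # classical RK4 stages
    k2 = rhs(u + 0.5*h*k1, a, dx)
    k3 = rhs(u + 0.5*h*k2, a, dx)
    k4 = rhs(u + h*k3, a, dx)
    u = u + h*(k1 + 2*k2 + 2*k3 + k4)/6
    t += h

exact = np.sin(2*np.pi*(x - a*T))
print("L2 error:", np.sqrt(np.mean((u - exact)**2)))
```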

Keywords: Nodal Discontinuous Galerkin Method, RKDG, Scalar Wave Equation, LSERK

17 Laser Data Based Automatic Generation of Lane-Level Road Map for Intelligent Vehicles

Authors: Zehai Yu, Hui Zhu, Linglong Lin, Huawei Liang, Biao Yu, Weixin Huang

Abstract:

With the development of intelligent vehicle systems, a high-precision road map is increasingly needed in many aspects. The automatic lane lines extraction and modeling are the most essential steps for the generation of a precise lane-level road map. In this paper, an automatic lane-level road map generation system is proposed. To extract the road markings on the ground, the multi-region Otsu thresholding method is applied, which calculates the intensity value of laser data that maximizes the variance between background and road markings. The extracted road marking points are then projected to the raster image and clustered using a two-stage clustering algorithm. Lane lines are subsequently recognized from these clusters by the shape features of their minimum bounding rectangle. To ensure the storage efficiency of the map, the lane lines are approximated to cubic polynomial curves using a Bayesian estimation approach. The proposed lane-level road map generation system has been tested on urban and expressway conditions in Hefei, China. The experimental results on the datasets show that our method can achieve excellent extraction and clustering effect, and the fitted lines can reach a high position accuracy with an error of less than 10 cm.
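
A minimal sketch of the multi-region Otsu idea, using scikit-image's threshold_otsu on a synthetic intensity raster (the block layout and test image are assumptions, not the authors' implementation):

```python
import numpy as np
from skimage.filters import threshold_otsu

def multi_region_otsu(intensity_image, n_rows=4, n_cols=4):
    """Apply Otsu thresholding independently in each sub-region of the projected
    laser-intensity image, so local contrast between road surface and markings
    determines the local threshold."""
    h, w = intensity_image.shape
    mask = np.zeros_like(intensity_image, dtype=bool)
    for r in range(n_rows):
        for c in range(n_cols):
            rs, re = r*h//n_rows, (r+1)*h//n_rows
            cs, ce = c*w//n_cols, (c+1)*w//n_cols
            block = intensity_image[rs:re, cs:ce]
            if block.size == 0 or block.max() == block.min():
                continue                      # skip empty / flat regions
            t = threshold_otsu(block)
            mask[rs:re, cs:ce] = block > t
    return mask

# illustrative synthetic intensity raster (bright stripes mimic lane markings)
img = np.random.default_rng(0).uniform(0, 0.3, (200, 200))
img[:, 60:65] += 0.6
img[:, 140:145] += 0.6
marking_mask = multi_region_otsu(img)
print("marking pixels found:", int(marking_mask.sum()))
```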

Keywords: Curve fitting, lane-level road map, line recognition, multi-thresholding, two-stage clustering.

16 Sliding Mode Control of Pitch-Rate of an F-16 Aircraft

Authors: Ekprasit Promtun, Sridhar Seshagiri

Abstract:

This paper considers the control of the longitudinal flight dynamics of an F-16 aircraft. The primary design objective is model-following of the pitch rate q, which is the preferred system for aircraft approach and landing. Regulation of the aircraft velocity V (or the Mach-hold autopilot) is also considered, but as a secondary objective. The problem is challenging because the system is nonlinear and also non-affine in the input. A sliding mode controller is designed for the pitch rate that exploits the modal decomposition of the linearized dynamics into its short-period and phugoid approximations. The inherent robustness of the SMC design provides a convenient way to design controllers without gain scheduling, with a steady-state response comparable to that of a conventional polynomial-based gain-scheduled approach with integral control, but with improved transient performance. Integral action is introduced in the sliding mode design using the recently developed technique of "conditional integrators", and it is shown that robust regulation is achieved with asymptotically constant exogenous signals, without degrading the transient response. Through extensive simulation on the nonlinear multiple-input multiple-output (MIMO) longitudinal model of the F-16 aircraft, it is shown that the conditional integrator design outperforms the one based on conventional linear control, without requiring any scheduling.
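
The sketch below shows a generic sliding mode control law with boundary-layer saturation for a second-order error model; it is only an illustration of the SMC mechanism, not the paper's conditional-integrator design for the F-16 dynamics:

```python
import numpy as np

def smc_step(e, e_dot, lam=2.0, k=5.0, eps=0.05):
    """Generic sliding mode control law for a tracking error of relative degree 2:
    sliding surface s = e_dot + lam*e, control u = -k*sat(s/eps).
    The saturation (boundary layer) replaces sign(s) to avoid chattering."""
    s = e_dot + lam * e
    sat = np.clip(s / eps, -1.0, 1.0)
    return -k * sat

# simulate a double-integrator error model e_ddot = u + d (d = bounded disturbance)
dt, T = 1e-3, 5.0
e, e_dot = 1.0, 0.0
for i in range(int(T/dt)):
    d = 0.5 * np.sin(2*np.pi*0.5*i*dt)      # matched disturbance, |d| < k
    u = smc_step(e, e_dot)
    e_ddot = u + d
    e_dot += e_ddot * dt
    e += e_dot * dt
print("final tracking error:", e)
```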

Keywords: Sliding-mode Control, Integral Control, Model Following, F-16 Longitudinal Dynamics, Pitch-Rate Control.

15 Investigating Polynomial Interpolation Functions for Zooming Low Resolution Digital Medical Images

Authors: Maninder Pal

Abstract:

Medical digital images usually have low resolution because of the nature of their acquisition. This paper therefore focuses on zooming these images to obtain a better level of information for the purpose of medical diagnosis. To this end, a strategy for selecting pixels in the zooming operation is proposed. It is based on the principle of an analog clock and uses a combination of point and neighborhood image processing. In this approach, the hour hand of the clock covers the portion of the image to be processed; for alignment, the center of the clock is placed at the middle pixel of the selected portion of the image. The minute hand is longer and is used to gather information about pixels in the surrounding area, called the neighborhood pixel region. This information is used to zoom the selected portion of the image. The proposed algorithm is implemented and its performance evaluated on many medical images obtained from various sources such as X-ray, Computerized Tomography (CT) scan and Magnetic Resonance Imaging (MRI). For illustration and simplicity, the results obtained from a CT-scanned image of the head are presented. The performance of the algorithm is evaluated against various traditional algorithms in terms of peak signal-to-noise ratio (PSNR), maximum error, SSIM index, mutual information and processing time. The results show that the proposed algorithm performs better than the traditional algorithms.
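
For comparison purposes, a standard bilinear interpolation zoom (one of the traditional baselines, not the proposed clock-based method) can be written as follows; the test image is synthetic:

```python
import numpy as np

def zoom_bilinear(img, factor):
    """Zoom a 2-D grayscale image by 'factor' using bilinear interpolation:
    each output pixel is a weighted average of its four nearest input pixels."""
    h, w = img.shape
    H, W = int(h * factor), int(w * factor)
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# illustrative low-resolution "scan" (a gradient with a bright blob)
img = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img[28:36, 28:36] += 0.8
zoomed = zoom_bilinear(img, 4)
print(img.shape, "->", zoomed.shape)
```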

Keywords: Zooming, interpolation, medical images, resolution.

14 Research on the Optimization of the Facility Layout of Efficient Cafeterias for Troops

Authors: Qing Zhang, Jiachen Nie, Yujia Wen, Guanyuan Kou, Peng Yu, Kun Xia, Qin Yang, Li Ding

Abstract:

Background: The facility layout problem (FLP) is an NP-complete (non-deterministic polynomial) problem for which it is hard to obtain an exact optimal solution. FLP has been widely studied in various limited spaces and workflows; for example, troop cafeterias with many types of equipment suffer from chaotic processes during dining. Objective: This article aims to optimize the layout of a troops’ cafeteria and to improve the overall efficiency of the dining process. Methods: First, the original cafeteria layout design scheme was analyzed from an ergonomic perspective and two new design schemes were generated. Next, three facility layout models were designed, and simulation was applied to compare the total time and the density of troops between the schemes. Last, an experiment on the dining process with video observation and analysis verified the simulation results. Results: In simulation, the dining time under the second new layout was shortened by 2.25% and 1.89% (p<0.0001, p=0.0001) compared with the other two layouts, while troop-flow density and interference were both greatly reduced in the two new layouts. In the experiment, process completion time and the number of interferences were reduced as well, which verified the corresponding simulation results. Conclusion: The two new layout schemes were shown to be superior through a series of simulations and field experiments. In future research, similar approaches could be applied, taking layout-design algorithm calculations into consideration.

Keywords: Troops’ cafeteria, layout optimization, dining efficiency, AnyLogic simulation, field experiment

13 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model

Authors: Nicolae Bold, Daniel Nijloveanu

Abstract:

The cropping-system concept is a method used by farmers. It is an environmentally friendly method that protects natural resources (soil, water, air, nutrients) while increasing production, taking crop particularities into account. Combining this powerful method with the concepts of genetic algorithms makes it possible to generate sequences of crops that form a rotation. Algorithms of this type have proven efficient in solving optimization problems, and their polynomial complexity allows them to be applied to a wide variety of difficult problems. In our case, the optimization consists in finding the most profitable rotation of crops. One of the expected results is to optimize the usage of resources in order to minimize costs and maximize profit. To achieve these goals, a genetic algorithm was designed. This algorithm finds several optimized cropping-system possibilities that yield the highest profit and thus minimize costs. The algorithm uses genetic-based operators (mutation, crossover) and structures (genes, chromosomes): a cropping-system possibility is considered a chromosome, and a crop within the rotation is a gene within that chromosome. Results on the efficiency of the method are presented in a dedicated section. Implementing this method would benefit farmers by providing guidance and helping them use resources efficiently.
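
A minimal sketch of such a genetic algorithm is given below; the crop list, profit values and the consecutive-crop penalty are illustrative assumptions, not the authors' model:

```python
import random

CROPS = ["wheat", "maize", "soybean", "potato", "clover"]   # illustrative crop set
PROFIT = {"wheat": 300, "maize": 450, "soybean": 400, "potato": 600, "clover": 100}
ROTATION_LEN, POP_SIZE, GENERATIONS = 6, 40, 200

def fitness(chromosome):
    """Profit of a rotation, penalizing the same crop in consecutive years
    (a crude stand-in for agronomic rotation constraints)."""
    profit = sum(PROFIT[gene] for gene in chromosome)
    penalty = sum(200 for a, b in zip(chromosome, chromosome[1:]) if a == b)
    return profit - penalty

def crossover(p1, p2):
    cut = random.randint(1, ROTATION_LEN - 1)               # one-point crossover
    return p1[:cut] + p2[cut:]

def mutate(chromosome, rate=0.1):
    return [random.choice(CROPS) if random.random() < rate else g for g in chromosome]

random.seed(0)
population = [[random.choice(CROPS) for _ in range(ROTATION_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]                    # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best rotation:", best, "profit score:", fitness(best))
```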

Keywords: Genetic algorithm, chromosomes, genes, cropping, agriculture.

12 Statistical Screening of Medium Components on Ethanol Production from Cashew Apple Juice using Saccharomyces diasticus

Authors: Karuppaiya Maruthai, Viruthagiri Thangavelu, Manikandan Kanagasabai

Abstract:

In the present study, the effect of fifteen critical medium components on ethanol production from waste cashew apple juice (CAJ) using the yeast Saccharomyces diasticus was studied. A statistical response surface methodology (RSM) based Plackett-Burman Design (PBD) was used for the design of experiments; the design contains a total of 32 experimental trials. The effect of the medium components on ethanol was studied at two levels: a low concentration level (-) and a high concentration level (+). The dependent variables selected in this study were ethanol concentration (g/L) and cell mass concentration (g/L). Data obtained from RSM on ethanol production were subjected to analysis of variance (ANOVA). In general, the initial substrate concentration significantly influenced microbial growth and product formation. Of the medium components evaluated, CAJ concentration, yeast extract, (NH4)2SO4, and malt extract showed significant effects on ethanol fermentation. A second-order polynomial model was used to predict the experimental data, and the model fitted the data with a high correlation coefficient (R2 > 0.98). Maximum ethanol (15.3 g/L) and biomass (6.4 g/L) concentrations were obtained at the optimum medium composition and optimum conditions (temperature 30°C; initial pH 6.8) after 72 h of fermentation using S. diasticus.

Keywords: cashew apple juice, ethanol, fermentation, yeast, response surface methodology

11 Preparation of Corn Flour Based Extruded Product and Evaluate Its Physical Characteristics

Authors: C. S. Saini

Abstract:

A composite flour blend consisting of corn, pearl millet, black gram and wheat bran in the ratio 80:5:10:5 was used to prepare the extruded product, and the effect of processing on the physical properties of the extrudate was studied. The extrusion was carried out in the laboratory using a twin-screw extruder. The physical characteristics evaluated include lateral expansion, bulk density, water absorption index, water solubility index, rehydration ratio and moisture retention. A Central Composite Rotatable Design (CCRD) was used to set the levels of the processing variables, i.e., feed moisture content (%), screw speed (rpm), and barrel temperature (°C). The data obtained after extrusion were analyzed using response surface methodology, and a second-order polynomial model for the dependent variables was established to fit the experimental data. Numerical optimization resulted in a barrel temperature of 127°C, a screw speed of 246 rpm, and a feed moisture of 14.5% as the optimum variables to produce an acceptable extruded product. The responses predicted by the software for the optimum process condition were lateral expansion 126%, bulk density 0.28 g/cm3, water absorption index 4.10 g/g, water solubility index 39.90%, rehydration ratio 544% and moisture retention 11.90%, with 75% desirability.

Keywords: Black gram, corn flour, extrusion, physical characteristics.

10 Conventional and PSO Based Approaches for Model Reduction of SISO Discrete Systems

Authors: S. K. Tomar, R. Prasad, S. Panda, C. Ardil

Abstract:

Reduction of Single Input Single Output (SISO) discrete systems to lower order models using a conventional and an evolutionary technique is presented in this paper. The conventional technique combines the advantages of the Modified Cauer Form (MCF) and differentiation. In this method, the original discrete system is first converted into an equivalent continuous system by applying the bilinear transformation. The denominator of the equivalent continuous system and its reciprocal are differentiated successively, and the reduced denominator of the desired order is obtained by combining the differentiated polynomials. The numerator is obtained by matching the quotients of the MCF. The reduced continuous system is then converted back into a discrete system using the inverse bilinear transformation. In the evolutionary technique, Particle Swarm Optimization (PSO) is employed to reduce the higher order model. The PSO method is based on minimizing the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model for a unit step input. Both methods are illustrated through a numerical example.
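
A minimal sketch of the evolutionary part, assuming an illustrative continuous full-order model rather than the paper's discrete example, uses a simple PSO loop to minimize the ISE between the step responses of the full model and a second-order reduced model:

```python
import numpy as np
from scipy import signal

# full (higher-order) model -- illustrative, not taken from the paper
full = signal.TransferFunction([8, 6, 2], [1, 3, 4, 2.5, 0.8])
t = np.linspace(0, 20, 2000)
_, y_full = signal.step(full, T=t)

def ise(params):
    """Integral squared error between step responses of the full model and a
    second-order reduced model G_r(s) = (b1 s + b0)/(s^2 + a1 s + a0)."""
    b1, b0, a1, a0 = params
    if a1 <= 0 or a0 <= 0:                       # keep the reduced model stable
        return 1e6
    _, y_red = signal.step(signal.TransferFunction([b1, b0], [1, a1, a0]), T=t)
    return np.trapz((y_full - y_red)**2, t)

# minimal particle swarm optimization loop
rng = np.random.default_rng(0)
n_particles, n_iter, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
x = rng.uniform(0.1, 5.0, size=(n_particles, 4))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([ise(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)   # velocity update
    x = x + v
    f = np.array([ise(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("reduced model parameters [b1, b0, a1, a0]:", gbest, "ISE:", pbest_f.min())
```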

Keywords: Discrete System, Single Input Single Output (SISO), Bilinear Transformation, Reduced Order Model, Modified Cauer Form, Polynomial Differentiation, Particle Swarm Optimization, Integral Squared Error.

9 Simultaneous Saccharification and Fermentation (SSF) of Sugarcane Bagasse - Kinetics and Modeling

Authors: E. Sasikumar, T. Viruthagiri

Abstract:

Simultaneous Saccharification and Fermentation (SSF) of sugarcane bagasse by cellulase and Pachysolen tannophilus MTCC 1077 was investigated in the present study. Important process variables for ethanol production from pretreated bagasse were optimized using Response Surface Methodology (RSM) based on central composite design (CCD) experiments. A 2³ five-level CCD with central and axial points was used to develop a statistical model for the optimization of process variables such as incubation temperature (25–45 °C) X1, pH (5.0–7.0) X2 and fermentation time (24–120 h) X3. Data obtained from RSM on ethanol production were subjected to analysis of variance (ANOVA) and fitted with a second-order polynomial equation, and contour plots were used to study the interactions among the three relevant variables of the fermentation process. The fermentation experiments were carried out in an online-monitored modular fermenter of 2 L capacity. The maximum response for ethanol production was obtained at the optimum values of temperature (32°C), pH (5.6) and fermentation time (110 h). A maximum ethanol concentration of 3.36 g/L was obtained from 50 g/L pretreated sugarcane bagasse at the optimized process conditions in aerobic batch fermentation. Kinetic models, namely the Monod model, the modified logistic model, the modified logistic incorporated Luedeking-Piret model and the modified logistic incorporated modified Luedeking-Piret model, were evaluated and their constants estimated.
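
A minimal sketch of fitting one of the kinetic models, assuming hypothetical biomass data rather than the authors' measurements, estimates the parameters of a logistic growth model with nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, x0, xmax, mu):
    """Logistic growth model for biomass concentration x(t):
    x(t) = xmax / (1 + (xmax/x0 - 1) * exp(-mu t)), with x(0) = x0."""
    return xmax / (1 + (xmax/x0 - 1) * np.exp(-mu * t))

# hypothetical biomass data (g/L) over fermentation time (h); values illustrative
t = np.array([0, 12, 24, 36, 48, 60, 72, 96, 110])
x = np.array([0.2, 0.5, 1.1, 2.0, 3.1, 3.9, 4.4, 4.7, 4.8])

(x0, xmax, mu), _ = curve_fit(logistic, t, x, p0=[0.2, 5.0, 0.1])
print(f"x0 = {x0:.3f} g/L, xmax = {xmax:.3f} g/L, mu = {mu:.3f} 1/h")
```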

Keywords: Sugarcane bagasse, ethanol, optimization, Pachysolen tannophilus.

8 Simulating Dynamics of Thoracolumbar Spine Derived from Life MOD under Haptic Forces

Authors: K. T. Huynh, I. Gibson, W. F. Lu, B. N. Jagdish

Abstract:

In this paper, the construction of a detailed spine model is presented using the LifeMOD Biomechanics Modeler. The detailed spine model is obtained by refining spine segments in cervical, thoracic and lumbar regions into individual vertebra segments, using bushing elements representing the intervertebral discs, and building various ligamentous soft tissues between vertebrae. In the sagittal plane of the spine, constant force will be applied from the posterior to anterior during simulation to determine dynamic characteristics of the spine. The force magnitude is gradually increased in subsequent simulations. Based on these recorded dynamic properties, graphs of displacement-force relationships will be established in terms of polynomial functions by using the least-squares method and imported into a haptic integrated graphic environment. A thoracolumbar spine model with complex geometry of vertebrae, which is digitized from a resin spine prototype, will be utilized in this environment. By using the haptic technique, surgeons can touch as well as apply forces to the spine model through haptic devices to observe the locomotion of the spine which is computed from the displacement-force relationship graphs. This current study provides a preliminary picture of our ongoing work towards building and simulating bio-fidelity scoliotic spine models in a haptic integrated graphic environment whose dynamic properties are obtained from LifeMOD. These models can be helpful for surgeons to examine kinematic behaviors of scoliotic spines and to propose possible surgical plans before spine correction operations.

Keywords: Haptic interface, LifeMOD, spine modeling.

7 Response Surface Methodology Approach to Defining Ultrafiltration of Steepwater from Corn Starch Industry

Authors: Zita I. Šereš, Ljubica P. Dokić, Dragana M. Šoronja Simović, Cecilia Hodur, Zsuzsanna Laszlo, Ivana Nikolić, Nikola Maravić

Abstract:

In this work, the concentration of steepwater from the corn starch industry by ultrafiltration is monitored. The aim was to examine the conditions of steepwater ultrafiltration using a 2.5 nm membrane. The parameters varied during ultrafiltration were the transmembrane pressure and the flow rate, while the permeate flux and the dry matter content of the permeate and retentate were the dependent parameters monitored continuously during the process. The ultrafiltration experiments were conducted on steepwater samples obtained from the starch wet-milling plant "Jabuka", Pancevo. Ultrafiltration was carried out on a single-channel membrane of 250 mm length, 6.8 mm inner diameter and 10 mm outer diameter. The membrane is made of α-Al2O3 with a TiO2 layer and was obtained from GEA (Germany). The experiments were carried out at flow rates ranging from 100 to 200 L/h and transmembrane pressures of 1-3 bar. During the steepwater ultrafiltration experiments, the permeate flux, the dry matter content of the permeate and retentate, and the absorbance of the permeate and retentate were monitored. The experimental results showed that the maximum flux reaches about 40 L m-2 h-1. A second-degree polynomial model was fitted to the responses to evaluate and quantify the influence of the variables; the quadratic equation fits the experimental values well, with a coefficient of determination for flux of 0.96. The dry matter content of the retentate increased by about 6%, while that of the permeate was reduced by about 35-40%; during steepwater ultrafiltration, the permeate thus contains about 40% less dry matter than the feed.

Keywords: Ultrafiltration, steepwater, starch industry, ceramic membrane.

6 Parameter Optimization and Thermal Simulation in Laser Joining of Coach Peel Panels of Dissimilar Materials

Authors: Masoud Mohammadpour, Blair Carlson, Radovan Kovacevic

Abstract:

The quality of laser welded-brazed (LWB) joints is strongly dependent on the main process parameters; therefore, the effects of laser power (3.2–4 kW), welding speed (60–80 mm/s) and wire feed rate (70–90 mm/s) on mechanical strength and surface roughness were investigated in this study. A comprehensive optimization by means of response surface methodology (RSM) and a desirability function was used for multi-criteria optimization. The experiments were planned based on a Box-Behnken design, implementing linear and quadratic polynomial equations for predicting the desired output properties. Finally, validation experiments were conducted at an optimized process condition and exhibited good agreement between the predicted and experimental results. AlSi3Mn1 was selected as the filler material for joining aluminum alloy 6022 and hot-dip galvanized steel in a coach peel configuration. The high scanning speed could keep the intermetallic compound (IMC) layer as thin as 5 µm. Thermal simulations of the joining process were conducted by the Finite Element Method (FEM), and the results were validated against experimental data. The Fe/Al interfacial thermal history showed that the duration of the critical temperature range (700–900 °C) in this high-scanning-speed process was less than 1 s. This short interaction time leads to the formation of a reaction-controlled IMC layer instead of a diffusion-controlled one.
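
The desirability-based multi-response optimization can be sketched as follows; the response values, bounds and weights are illustrative assumptions, not the study's RSM predictions:

```python
import numpy as np

def desirability_larger_is_better(y, low, target, weight=1.0):
    """Derringer-Suich type desirability for a 'larger is better' response:
    0 below 'low', 1 above 'target', power ramp in between."""
    d = np.clip((y - low) / (target - low), 0.0, 1.0)
    return d ** weight

def desirability_smaller_is_better(y, target, high, weight=1.0):
    """Desirability for a 'smaller is better' response (e.g. surface roughness)."""
    d = np.clip((high - y) / (high - target), 0.0, 1.0)
    return d ** weight

# hypothetical predicted responses from RSM models at three candidate settings
strength = np.array([210.0, 245.0, 230.0])     # joint strength, larger is better
roughness = np.array([18.0, 25.0, 12.0])       # surface roughness, smaller is better

d1 = desirability_larger_is_better(strength, low=180.0, target=250.0)
d2 = desirability_smaller_is_better(roughness, target=10.0, high=30.0)
overall = np.sqrt(d1 * d2)                     # geometric mean of the desirabilities
print("overall desirability per candidate:", overall, "-> best:", overall.argmax())
```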

Keywords: Laser welding-brazing, finite element, response surface methodology, multi-response optimization, cross-beam laser.

5 Statistical Analysis and Optimization of a Process for CO2 Capture

Authors: Muftah H. El-Naas, Ameera F. Mohammad, Mabruk I. Suleiman, Mohamed Al Musharfy, Ali H. Al-Marzouqi

Abstract:

CO2 capture and storage technologies play a significant role in the control of climate change by reducing carbon dioxide emissions into the atmosphere. The present study evaluates and optimizes CO2 capture through a process in which carbon dioxide is passed into pH-adjusted high-salinity water and reacted with sodium chloride to form a precipitate of sodium bicarbonate. The process is based on a modified Solvay process with higher CO2 capture efficiency, higher sodium removal, and higher pH level, without the use of ammonia. The process was tested in a bubble column semi-batch reactor and was optimized using response surface methodology (RSM). CO2 capture efficiency and sodium removal were optimized in terms of the major operating parameters, using four variables and their levels in a Central Composite Design (CCD). The operating parameters were gas flow rate (0.5–1.5 L/min), reactor temperature (10–50 °C), buffer concentration (0.2–2.6%) and water salinity (25–197 g NaCl/L). The experimental data were fitted to a second-order polynomial using multiple regression and analyzed using analysis of variance (ANOVA). The optimum values of the selected variables were obtained using a response optimizer. The optimum conditions were tested experimentally using desalination reject brine with salinity ranging from 65,000 to 75,000 mg/L. The CO2 capture efficiency in 180 min was 99% and the maximum sodium removal was 35%. The experimental and predicted values were within the 95% confidence interval, which demonstrates that the developed model can successfully predict the capture efficiency and sodium removal using the modified Solvay method.

Keywords: Bubble column reactor, CO2 capture, Response Surface Methodology, water desalination.

4 Forecast of the Small Wind Turbines Sales with Replacement Purchases and with or without Account of Price Changes

Authors: V. Churkin, M. Lopatin

Abstract:

The purpose of the paper is to estimate the market potential of small wind turbines in the US and to forecast their sales. The forecasting method is based on the Bass model and the generalized Bass model of innovation diffusion under replacement purchases. An exponential distribution is used to model replacement purchases; its single parameter is determined by the average lifetime of small wind turbines. The model parameters are identified by nonlinear regression analysis on the basis of the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimated US average market potential of small wind turbines (for adoption purchases), without accounting for price changes, is 57,080 (confidence interval 49,294 to 64,866 at P = 0.95) for an average turbine lifetime of 15 years, and 62,402 (confidence interval 54,154 to 70,648 at P = 0.95) for an average lifetime of 20 years; the explained variance is 90.7% and 91.8%, respectively. The effect of wind turbine price changes on sales was estimated using the generalized Bass model, which required a price forecast; for this purpose, a polynomial regression function based on Berkeley Lab statistics was used. The estimated US average market potential (for adoption purchases) in that case is 42,542 (confidence interval 32,863 to 52,221 at P = 0.95) for an average lifetime of 15 years, and 47,426 (confidence interval 36,092 to 58,760 at P = 0.95) for an average lifetime of 20 years; the explained variance is 95.3% in both cases.
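
A minimal sketch of identifying the Bass model by nonlinear regression, using a hypothetical annual sales series in place of the AWEA statistics, is shown below:

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions under the Bass diffusion model with market potential m,
    innovation coefficient p and imitation coefficient q."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

def bass_annual_sales(t, m, p, q):
    """Annual (first-purchase) sales = yearly increment of cumulative adoptions."""
    return bass_cumulative(t, m, p, q) - bass_cumulative(t - 1, m, p, q)

# hypothetical annual sales series (units/year), standing in for the AWEA 2001-2012 data
years = np.arange(1, 13)                       # t = 1..12
sales = np.array([900, 1200, 1600, 2100, 2700, 3400, 4100, 4800, 5400, 5800, 6000, 6100])

(m, p, q), _ = curve_fit(bass_annual_sales, years, sales,
                         p0=[60000, 0.01, 0.3],
                         bounds=([1e3, 1e-4, 1e-3], [1e6, 1.0, 1.0]),
                         max_nfev=10000)
print(f"market potential m = {m:.0f}, p = {p:.4f}, q = {q:.4f}")
```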

Keywords: Bass model, generalized Bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States.

3 Twin-Screw Extruder and Effective Parameters on the HDPE Extrusion Process

Authors: S. A. Razavi Alavi, M. Torabi Angaji, Z. Gholami

Abstract:

In the polyethylene extrusion process, polymer material in powder or granule form undergoes compression, melting and conveying operations and is shaped into the extrudate through a die. Twin-screw extruders are widely used in industry because of their high capacity. The powder is mixed with chemical additives and melted by thermal and mechanical energy in three zones (feed, compression and metering zones), and the pressure generated by the screws and the gear pump converts it into the final product at the die plate. Twin-screw extruders with a short distance between the screws perform better than other types because of their high capacity and good handling of thermal and mechanical stresses. In this paper, the polyethylene extrusion process and various types of extruders are studied. Exact process control is necessary to produce high-quality products with safe operation and optimum energy consumption. The granule size depends on the granulator motor speed: at constant feed rate, the granule size decreases as the motor speed increases. The relationships between the HDPE feed flow rate x and the speeds of the main motor, gear pump and granulator motor were fitted as follows:

Main motor speed: yM = (-3.6076e-3)x^4 + (0.24597)x^3 + (-5.49003)x^2 + (64.22092)x + 61.66786 (1)

Gear pump speed: yG = (-2.4996e-3)x^4 + (0.18018)x^3 + (-4.22794)x^2 + (48.45536)x + 18.78880 (2)

Granulator motor speed, 10th-degree polynomial fit: y = a + bx + cx^2 + dx^3 + ... + kx^10 (3)
with a = 1.2751, b = 282.4655, c = -165.2098, d = 48.3106, e = -8.18715, f = 0.84997, g = -0.056094, h = 0.002358, i = -6.11816e-5, j = 8.919726e-7, k = -5.59050e-9.

Keywords: Extrusion, Extruder, Granule, HDPE, Polymer, Twin-Screw extruder.

2 Implementing an Intuitive Reasoner with a Large Weather Database

Authors: Yung-Chien Sun, O. Grant Clark

Abstract:

In this paper, the implementation of a rule-based intuitive reasoner is presented. The implementation has two parts: a rule induction module and the intuitive reasoner itself. A large weather database was acquired as the data source, and twelve weather variables from those data were chosen as the "target variables" whose values were predicted by the intuitive reasoner. A "complex" situation was simulated by making only subsets of the data available to the rule induction module. As a result, the rules induced were based on incomplete information with variable levels of certainty. The certainty level was modeled by a metric called "Strength of Belief", which was assigned to each rule or datum as ancillary information about the confidence in its accuracy. Two techniques were employed to induce rules from the data subsets: decision trees for the discrete target variables and multi-polynomial regression for the continuous ones. The intuitive reasoner was tested for its ability to use the induced rules to predict the classes of the discrete target variables and the values of the continuous target variables. The intuitive reasoner implemented two types of reasoning, fast and broad, where, by analogy to human thought, the former corresponds to fast decision making and the latter to deeper contemplation. For reference, a weather data analysis approach which had been applied to similar tasks was adopted to analyze the complete database and create predictive models for the same 12 target variables. The values predicted by the intuitive reasoner and the reference approach were compared with actual data. The intuitive reasoner reached near-100% accuracy for two continuous target variables. For the discrete target variables, the intuitive reasoner predicted at least 70% as accurately as the reference reasoner. Since the intuitive reasoner operated on rules derived from only about 10% of the total data, it demonstrates potential advantages in dealing with sparse data sets compared with conventional methods.

Keywords: Artificial intelligence, intuition, knowledge acquisition, limited certainty.

1 GridNtru: High Performance PKCS

Authors: Narasimham Challa, Jayaram Pradhan

Abstract:

Cryptographic algorithms play a crucial role in the information society by providing protection from unauthorized access to sensitive data. It is clear that information technology will become increasingly pervasive; hence we can expect the emergence of ubiquitous or pervasive computing and ambient intelligence. These new environments and applications will present new security challenges, and there is no doubt that cryptographic algorithms and protocols will form part of the solution. The efficiency of a public key cryptosystem is mainly measured in computational overhead, key size and bandwidth. In particular, the RSA algorithm is used in many applications for providing security. Although the security of RSA is beyond doubt, the evolution of computing power has caused a growth in the necessary key length. The fact that most smart card chips cannot process keys exceeding 1024 bits shows that there is a need for an alternative. NTRU is such an alternative: it is a collection of mathematical algorithms based on manipulating lists of very small integers and polynomials, which allows NTRU to achieve high speeds with minimal computing power. NTRU (Nth degree Truncated Polynomial Ring Unit) is the first secure public key cryptosystem not based on the factorization or discrete logarithm problems, meaning that an adversary with realistic computational resources and time should not be able to break the key. Multi-party communication and the requirement of optimal resource utilization create a demand for applications that need security enforcement techniques and can be enhanced with high-end computing. This has prompted us to develop high-performance NTRU schemes using approaches such as high-end computing hardware. Peer-to-peer (P2P) or enterprise grids are a proven approach for developing high-end computing systems; by utilizing them, the performance of NTRU can be improved through parallel execution. In this paper we propose and develop an application for NTRU using the enterprise grid middleware Alchemi. An analysis and comparison of its performance for various text files is presented.
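
The core arithmetic NTRU relies on, multiplication in the truncated polynomial ring Z_q[x]/(x^N - 1), can be sketched as follows (tiny illustrative parameters; real NTRU uses N in the hundreds):

```python
import numpy as np

def convolution_mod(a, b, q):
    """Multiply two polynomials in the truncated ring Z_q[x]/(x^N - 1):
    cyclic convolution of the coefficient lists, reduced mod q."""
    N = len(a)
    c = np.zeros(N, dtype=np.int64)
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]       # exponents wrap around since x^N = 1
    return c % q

# small illustrative parameters (not a secure NTRU parameter set)
N, q = 11, 32
rng = np.random.default_rng(0)
f = rng.integers(-1, 2, size=N)                 # ternary polynomial, coefficients in {-1, 0, 1}
g = rng.integers(-1, 2, size=N)

h = convolution_mod(f, g, q)
print("f * g mod (x^N - 1, q) =", h)
```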

Keywords: Alchemi, GridNtru, Ntru, PKCS.
