Search results for: Minimization of Moment Ratio Method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9714

9354 Reliable Capacitated Facility Location Problem Considering Maximal Covering

Authors: Mehdi Seifbarghy, Sajjad Jalali, Seyed Habib A. Rahmati

Abstract:

This paper provides a framework for simultaneously incorporating the reliability issue, as a sign of disruption in distribution systems, and partial covering theory, as a response to limitations in coverage radius and economical preferences, into the traditional literature on capacitated facility location problems. As a result, we develop a bi-objective model based on discrete scenarios for expected cost minimization and demand coverage maximization through a three-echelon supply chain network, by providing multiple capacity levels for the provider-side layers and imposing a gradual coverage function for the distribution centers (DCs). Additionally, besides aggregating the objectives to solve the model with the LINGO software, a branch of the LP-metric method called the Min-Max approach is proposed, and different aspects of the corresponding model are explored.
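
For readers unfamiliar with the Min-Max branch of the LP-metric family, a generic scalarization of a bi-objective problem (written here for a cost objective f1 to be minimized and a coverage objective f2 to be maximized; the symbols and normalization are illustrative, not taken from the paper) can be sketched as:

```latex
\min_{x \in X} \; \gamma
\quad \text{s.t.} \quad
\frac{f_1(x) - f_1^{*}}{f_1^{\max} - f_1^{*}} \le \gamma, \qquad
\frac{f_2^{*} - f_2(x)}{f_2^{*} - f_2^{\min}} \le \gamma,
```

where f_1^*, f_2^* are the individual optima and f_1^max, f_2^min the worst values used for normalization; the Min-Max approach minimizes the largest normalized deviation gamma from the ideal point.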

Keywords: Reliability Cost, Partial Covering, LP-Metric

9353 Lateral Torsional Buckling Resistance of Trapezoidally Corrugated Web Girders

Authors: Annamária Käferné Rácz, Bence Jáger, Balázs Kövesdi, László Dunai

Abstract:

Due to the numerous advantages of steel corrugated web girders, their field of application is growing for bridges as well as for buildings. The global stability resistance of such girders is significantly larger than that of conventional I-girders with flat webs, thus the amount of structural steel material can be significantly reduced. Design codes and specifications do not provide clear and complete rules or recommendations for the determination of the lateral torsional buckling (LTB) resistance of corrugated web girders. Therefore, the authors made a thorough investigation of the LTB resistance of corrugated web girders. Finite element (FE) simulations have been performed to develop new design formulas for the determination of the LTB resistance of trapezoidally corrugated web girders. The FE model is developed using geometrically and materially nonlinear analysis with equivalent geometric imperfections (GMNI analysis). The equivalent geometric imperfections cover the initial geometric imperfections and the residual stresses coming from rolling, welding and flame cutting. An imperfection sensitivity analysis was performed to determine the necessary magnitudes, considering only first-eigenmode shape imperfections. With the help of the validated FE model, an extended parametric study is carried out to investigate the LTB resistance for different trapezoidal corrugation profiles. First, the critical moment of a specific girder was calculated by the FE model. The critical moments from the FE calculations are compared to previously proposed analytical calculations. Then, nonlinear analysis was carried out to determine the ultimate resistance. Based on the numerical investigations, new proposals are developed for the determination of the LTB resistance of trapezoidally corrugated web girders through a modification factor on the design method used for conventional flat web girders.
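
As background for the modification-factor idea, the standard reduction-factor format used for flat-web girders is sketched below in generic form (the specific modification proposed in the paper is not reproduced here):

```latex
\bar{\lambda}_{LT} = \sqrt{\frac{M_{pl}}{M_{cr}}}, \qquad
M_{R} = \chi_{LT}\!\left(\bar{\lambda}_{LT}\right)\, M_{pl},
```

where M_cr is the elastic critical moment, M_pl the plastic moment resistance, and chi_LT <= 1 the LTB reduction factor; corrugated-web proposals of this kind adjust the flat-web format through an additional factor calibrated from FE results.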

Keywords: Critical moment, FE modeling, lateral torsional buckling, trapezoidally corrugated web girders.

9352 Parametric Analysis on Hydrogen Production using Mixtures of Pure Cellulosic and Calcium Oxide

Authors: N.A. Rashidi, S. Yusup, M.M. Ahmad

Abstract:

As fossil fuels keep depleting, intense research on developing hydrogen (H2) as an alternative fuel has been carried out to cater to our tremendous demand for fuel. The potential of H2 as the ultimate clean fuel contrasts with fossil fuels, which release significant amounts of carbon dioxide (CO2) into the surroundings and lead to global warming. Experimental work was carried out to study the production of H2 from palm kernel shell steam gasification under different variables such as heating rate, steam to biomass ratio and adsorbent to biomass ratio. A maximum H2 composition of 61% (volume basis) was obtained at a heating rate of 100 °C min-1, a steam/biomass ratio of 2:1, and an adsorbent/biomass ratio of 1:1. The commercial adsorbent was modified by utilizing an alcohol-water mixture. The characteristics of both adsorbents were investigated, and it is concluded that the flowability and floodability of the modified CaO are significantly improved.

Keywords: Biomass gasification, Calcium oxide, Carbon dioxide capture, Sorbent flowability

9351 Effect of Equivalence Ratio on Performance of Fluidized Bed Gasifier Run with Sized Biomass

Authors: J. P. Makwana, A. K. Joshi, Rajesh N. Patel, Darshil Patel

Abstract:

Recently, fluidized bed gasification has become an attractive technology for power generation due to its higher efficiency. The main objective pursued in this work is to investigate the producer gas production potential of sized biomass (sawdust and pigeon pea) by applying the air gasification technique. The size of the biomass selected for the study was in the range of 0.40-0.84 mm. An experimental study was conducted using a fluidized bed gasifier with a 210 mm diameter and a 1600 mm height. During the experiments, the fuel properties and the effects of operating parameters, such as gasification temperature (700 to 900 °C) and equivalence ratio (0.16 to 0.46), were studied. It was concluded that substantial amounts of producer gas (up to 1110 kcal/m3) could be produced from biomass such as sawdust and pigeon pea by applying this fluidization technique. For both samples, raising the temperature to 900 °C and an equivalence ratio of 0.4 favored further gasification reactions and resulted in producer gas with a calorific value of 1110 kcal/m3.
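
For clarity, the equivalence ratio used in air gasification is commonly defined as the ratio of the air actually supplied to the stoichiometric air required for complete combustion (a standard definition, not a formula quoted from the paper):

```latex
ER = \frac{\left(m_{air}/m_{fuel}\right)_{actual}}{\left(m_{air}/m_{fuel}\right)_{stoichiometric}},
```

so the reported range of 0.16 to 0.46 corresponds to supplying 16% to 46% of the air needed for complete combustion.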

Keywords: Sized biomass, fluidized bed gasifier, equivalence ratio, temperature profile, gas composition.

9350 Anticipation of Bending Reinforcement Based on Iranian Concrete Code Using Meta-Heuristic Tools

Authors: Seyed Sadegh Naseralavi, Najmeh Bemani

Abstract:

In this paper, the concrete design codes of the United States, New Zealand, Mexico, Italy, India, Canada, Hong Kong, Britain and the Eurocode are compared with the Iranian concrete design code. First, using the Adaptive Neuro Fuzzy Inference System (ANFIS), the codes having the highest correlation with the Iranian code (the ninth issue of the national regulation) are determined. Consequently, two prediction methods are used for comparing the codes: an Artificial Neural Network (ANN) and multi-variable regression. The results show that the ANN performs better. Prediction is done using only the tensile steel ratio, ignoring the compression steel ratio.
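
As an illustration of the ANN-based prediction step, a minimal sketch using a generic multilayer-perceptron regressor is given below; the feature list, file names and network size are hypothetical placeholders, not the authors' configuration.

```python
# Minimal sketch: predict the required tensile steel ratio from section/load
# features with a small neural network. Feature names and files are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X columns (hypothetical): section width b, effective depth d, concrete
# strength fc, steel yield strength fy, design bending moment Mu.
X_train = np.loadtxt("sections_train.csv", delimiter=",")  # placeholder file
y_train = np.loadtxt("rho_train.csv", delimiter=",")       # tensile steel ratio

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10, 10),
                                   max_iter=5000, random_state=0))
model.fit(X_train, y_train)

# Predict the tensile steel ratio for new sections (compression steel ignored,
# as in the paper).
X_new = np.loadtxt("sections_new.csv", delimiter=",")       # placeholder file
print(model.predict(X_new))
```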

Keywords: Concrete design code, anticipate method, artificial neural network, multi-variable regression, adaptive neuro fuzzy inference system.

9349 Flow around Two Cam Shaped Cylinders in Tandem Arrangement

Authors: Arash Mir Abdolah Lavasani, Hamidreza Bayat

Abstract:

In this paper, the flow around two cam shaped cylinders is studied numerically. The equivalent diameter of the cylinders is 27.6 mm. The center-to-center spacing of the two cam shaped cylinders is defined as the longitudinal pitch ratio, which is varied over a range starting from 2, while the Reynolds number is varied over a range starting from 50. The drag coefficient of both cylinders depends on the pitch ratio; however, the drag coefficient of the downstream cylinder is more dependent on the pitch ratio.

Keywords: Cam shaped, tandem cylinders, numerical, drag coefficient.

9348 Evaluating Portfolio Performance by Highlighting Network Property and the Sharpe Ratio in the Stock Market

Authors: Zahra Hatami, Hesham Ali, David Volkman

Abstract:

Selecting a portfolio for investing is a crucial decision for individuals and legal entities. In the last two decades, with economic globalization, a stream of financial innovations has rushed to the aid of financial institutions. The selection of stocks for a portfolio is always a challenging task for investors. This study aims to create a financial network to identify optimal portfolios using network centrality metrics. This research presents a community detection technique for superior stocks that can be described as an optimal stock portfolio to be used by investors. By using the advantages of a network and its properties in the extracted communities, a group of stocks was selected for each of various time periods. The performance of the optimal portfolios was compared to a well-known index, and their Sharpe ratios were calculated for each period to evaluate their profitability for decision-making. The analysis shows that the portfolios selected from stocks with low centrality measurements can outperform the market; however, they have a lower Sharpe ratio than stocks with high centrality scores. In other words, stocks with low centralities could outperform the S&P 500 yet have a lower Sharpe ratio than highly central stocks.
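
For reference, the Sharpe ratio used to compare the portfolios can be computed from a series of periodic returns as in the sketch below; the annualization factor and risk-free rate are illustrative assumptions, not values from the study.

```python
import numpy as np

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe ratio of a series of periodic portfolio returns.

    returns: 1-D array of periodic (e.g. daily) simple returns.
    risk_free_rate: periodic risk-free return over the same interval.
    """
    excess = np.asarray(returns, dtype=float) - risk_free_rate
    # Mean excess return divided by its standard deviation, annualized.
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Usage: compare the selected portfolio against the benchmark series,
# e.g. sharpe_ratio(portfolio_returns) vs. sharpe_ratio(sp500_returns).
```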

Keywords: Portfolio management performance, network analysis, centrality measurements, Sharpe ratio.

9347 Adaptive Gait Pattern Generation of Biped Robot based on Human's Gait Pattern Analysis

Authors: Seungsuk Ha, Youngjoon Han, Hernsoo Hahn

Abstract:

This paper proposes a method of adaptively generating a gait pattern for a biped robot. The gait synthesis is based on the analysis of human gait patterns. The proposed method can easily be applied to generate a natural and stable gait pattern for any biped robot. To analyze the human gait pattern, sequential images of human gait on the sagittal plane are acquired, from which the gait control values are extracted. The gait pattern of the biped robot on the sagittal plane is adaptively generated by a genetic algorithm using the human gait control values. However, gait trajectories of the biped robot on the sagittal plane are not enough to construct the complete gait pattern because the biped robot moves in three-dimensional space. Therefore, the gait pattern on the frontal plane, generated from the Zero Moment Point (ZMP), is added to the gait pattern acquired on the sagittal plane. Consequently, a natural and stable walking pattern for the biped robot is obtained.
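
As background, a commonly used simplified expression for the Zero Moment Point coordinate (neglecting the link inertia terms) is shown below; it is a textbook form, not necessarily the exact formulation used by the authors.

```latex
x_{ZMP} = \frac{\sum_i m_i\,(\ddot{z}_i + g)\,x_i \;-\; \sum_i m_i\,\ddot{x}_i\,z_i}
               {\sum_i m_i\,(\ddot{z}_i + g)},
```

where m_i, (x_i, z_i) and their accelerations refer to the robot's link masses and positions and g is the gravitational acceleration; keeping the ZMP inside the support polygon is what makes the generated frontal-plane pattern stable.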

Keywords: Biped robot, gait pattern, genetic algorithm.

9346 Test Data Compression Using a Hybrid of Bitmask Dictionary and 2n Pattern Runlength Coding Methods

Authors: C. Kalamani, K. Paramasivam

Abstract:

In VLSI, testing plays an important role. Major problems in testing are test data volume and test power. An important solution for reducing test data volume and test time is test data compression. The proposed technique combines the bitmask dictionary and 2^n pattern run-length coding methods and provides a substantial improvement in compression efficiency without introducing any additional decompression penalty. The method has been implemented using MATLAB and an HDL to reduce test data volume and memory requirements. It is applied to various benchmark test sets, and the results are compared with those of other existing methods. The proposed technique can achieve a compression ratio of up to 86%.
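
As a small illustration of how a compression ratio of the kind reported (up to 86%) is typically computed, the sketch below run-length encodes a test bit-stream and reports the percentage reduction; it is a generic example, not the authors' hybrid bitmask-dictionary/2^n-pattern coder.

```python
def run_length_encode(bits):
    """Encode a bit string as (bit, run_length) pairs."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

def compression_ratio(original_bits, compressed_bits):
    """Percentage reduction in test data volume."""
    return 100.0 * (original_bits - compressed_bits) / original_bits

test_vector = "0000000011111111110000001111"      # illustrative test cube
runs = run_length_encode(test_vector)
compressed = len(runs) * 5                        # assume 1 bit value + 4-bit run length
print(runs, compression_ratio(len(test_vector), compressed))
```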

Keywords: Bitmask dictionary, 2^n pattern run-length code, system-on-chip, SOC, test data compression.

9345 Numerical Simulations of Cross-Flow around Four Square Cylinders in an In-Line Rectangular Configuration

Authors: Shams Ul Islam, Chao Ying Zhou, Farooq Ahmad

Abstract:

A two-dimensional numerical simulation of cross-flow around four cylinders in an in-line rectangular configuration is studied by using the lattice Boltzmann method (LBM). Special attention is paid to the effect of the spacing between the cylinders. The Reynolds number (Re) is chosen to be Re = 100 and the spacing ratio L/D is set at 0.5, 1.5, 2.5, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0 and 10.0. Results show that, for four cylinders in an in-line rectangular configuration, the flow field exhibits four different features depending on the spacing (single square cylinder flow, stable shielding flow, wiggling shielding flow and vortex shedding flow). The effects of the spacing ratio on physical quantities such as the mean drag coefficient, the Strouhal number and the root-mean-square values of the drag and lift coefficients are also presented. There is more than one shedding frequency at small spacing ratios. The mean drag coefficients of the downstream cylinders are less than that of a single cylinder for all spacing ratios. The present results using the LBM are compared with existing experimental data and numerical studies. The comparison shows that the LBM can capture the characteristics of bluff body flow reasonably well and is a good tool for bluff body flow studies.

Keywords: Four square cylinders, Lattice Boltzmann method, rectangular configuration, spacing ratios, vortex shedding.

9344 An Implementation of Fuzzy Logic Technique for Prediction of the Power Transformer Faults

Authors: Omar M. Elmabrouk, Roaa Y. Taha, Najat M. Ebrahim, Sabbreen A. Mohammed

Abstract:

Power transformers are the most crucial parts of the electrical power system, including the distribution and transmission grid. They are maintained using a predictive or condition-based maintenance approach. The diagnosis of power transformer condition is performed based on Dissolved Gas Analysis (DGA). There are five main methods utilized for analyzing these gases: the International Electrotechnical Commission (IEC) gas ratio, Key Gas, Rogers gas ratio, Doernenburg, and Duval Triangle methods. Moreover, due to the importance of transformers, there is a need for an accurate technique to diagnose and hence predict the transformer condition. The main objective of such a technique is to avoid transformer faults and hence to maintain the electrical power system, distribution and transmission grid. In this paper, DGA was utilized based on data collected from the transformer records available at the General Electricity Company of Libya (GECOL), located in Benghazi, Libya. The Fuzzy Logic (FL) technique was implemented as a diagnostic approach based on the IEC gas ratio method. The FL technique gave better results and proved to be usable as an accurate prediction technique for power transformer faults. This technique should also be of interest to readers and researchers concerned with FL mathematics and power transformers.
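
For orientation, the three gas ratios on which IEC-ratio-style diagnosis (and hence the fuzzy inputs) is based can be computed from dissolved-gas concentrations as sketched below; the thresholds that map ratios to fault codes are omitted, and the concentration values shown are placeholders, not measured GECOL data.

```python
def iec_gas_ratios(ppm):
    """Three basic IEC gas ratios from dissolved-gas concentrations in ppm.

    ppm: dict with keys 'H2', 'CH4', 'C2H2', 'C2H4', 'C2H6'.
    These ratios (C2H2/C2H4, CH4/H2, C2H4/C2H6) are the usual inputs that a
    fuzzy inference system fuzzifies before applying the diagnostic rules.
    """
    r1 = ppm['C2H2'] / ppm['C2H4']
    r2 = ppm['CH4'] / ppm['H2']
    r3 = ppm['C2H4'] / ppm['C2H6']
    return r1, r2, r3

# Placeholder sample:
print(iec_gas_ratios({'H2': 100, 'CH4': 120, 'C2H2': 1, 'C2H4': 50, 'C2H6': 65}))
```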

Keywords: Fuzzy logic, dissolved gas-in-oil analysis, DGA, prediction, power transformer.

9343 Matrix Based Synthesis of EXOR dominated Combinational Logic for Low Power

Authors: Padmanabhan Balasubramanian, C. Hari Narayanan

Abstract:

This paper discusses a new, systematic approach to the synthesis of an NP-hard class of non-regenerative Boolean networks, described by FON[FOFF] = {mi}[{Mi}], where for every mj[Mj] ∈ {mi}[{Mi}] there exists another mk[Mk] ∈ {mi}[{Mi}] such that their Hamming distance HD(mj, mk) = HD(Mj, Mk) = O(n) (where 'n' represents the number of distinct primary inputs). The method automatically ensures exact minimization for certain important self-dual functions with 2^(n-1) points in their one-set. The elements meant for grouping are determined from a newly proposed weighted incidence matrix. Then the binary value corresponding to the candidate pair is correlated with the proposed binary value matrix to enable direct synthesis. We recommend algebraic factorization operations as a post-processing step to enable a reduction in literal count. The algorithm can be implemented in any high-level language and achieves the best cost optimization for the problem dealt with, irrespective of the number of inputs. For other cases, the method is iterated to subsequently reduce the problem to one of O(n-1), O(n-2), ..., and then solved. In addition, it leads to optimal results for problems exhibiting a higher degree of adjacency, with a different interpretation of the heuristic, and the results are comparable with other methods. In terms of literal cost, at the technology-independent stage, the circuits synthesized using our algorithm enabled net savings over AOI (AND-OR-Invert) logic, AND-EXOR logic (EXOR Sum-of-Products or ESOP forms) and AND-OR-EXOR logic by 45.57%, 41.78% and 41.78%, respectively, for the various problems. Circuit-level simulations were performed for a wide variety of case studies at 3.3 V and 2.5 V supply to validate the performance of the proposed method and the quality of the resulting synthesized circuits at two different voltage corners. Power estimation was carried out for a 0.35 micron TSMC CMOS process technology. In comparison with AOI logic, the proposed method enabled mean savings in power of 42.46%. With respect to AND-EXOR logic, the proposed method yielded power savings of 31.88%, while in comparison with AND-OR-EXOR level networks, average power savings of 33.23% were obtained.

Keywords: AOI logic, ESOP, AND-OR-EXOR, incidence matrix, Hamming distance.

9342 On 6-Figures in Finite Klingenberg Planes of Parameters (p^(2k-1), p)

Authors: Atilla Akpinar, Basri Celik, Suleyman Ciftci

Abstract:

In this paper, we deal with the finite projective Klingenberg plane M(A) coordinatized by the local ring A := Z_q + Z_q ε (where q is a prime power, ε ∉ Z_q and ε² = 0). We obtain some combinatorial results on 6-figures. For example, we show that there exist p - 1 classes of 6-figures in M(A).

Keywords: finite Klingenberg plane, 6-figure, ratio of 6-figure, cross-ratio.

9341 Six Sigma Solutions and its Benefit-Cost Ratio for Quality Improvement

Authors: S. Homrossukon, A. Anurathapunt

Abstract:

This is an applied research paper presenting the improvement of production quality using six sigma solutions and an analysis of the benefit-cost ratio. The case of interest is the production of concrete tiles. This production faced the problem of a high rate of nonconforming products due to inappropriate surface coating and had a low process capability based on the strength property of the tile. Surface coating and tile strength are the characteristics most critical to the quality of this product. The improvements followed the five stages of six sigma solutions. After the improvement, the production yield improved to the target of 80%, and the defective products from the coating process were remarkably reduced from 29.40% to 4.09%. The process capability based on strength quality increased from 0.87 to 1.08, as required by the customer. The improvement was able to save material losses of 3.24 million baht, or 0.11 million dollars. The benefits from the improvement were analyzed from (1) the reduction in the number of nonconforming tiles, using the factory price, for the surface coating improvement and (2) the materials saved from the increase in process capability. The benefit-cost ratio of the overall improvement was as high as 7.03. The investment was not yet profitable during the define, measure, analyze and early improve stages; after that, the ratio kept increasing. This is because there are no benefits in the define, measure and analyze stages of six sigma, since these three stages mainly determine the cause of the problem and its effects rather than improve the process. The benefit-cost ratio starts to appear in the improve stage and grows from then on. Within each stage, the individual benefit-cost ratio was much higher than the cumulative one, as costs had accumulated since the first stage of six sigma. Considering the benefit-cost ratio during the improvement project helps in making decisions for cost saving in similar activities during the improvement and in new projects. In conclusion, determining the behavior of the benefit-cost ratio throughout the six sigma implementation period provides useful data for managing quality improvement with optimal effectiveness. This is an additional outcome beyond the regular proceeding of six sigma.
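
The stage-wise and cumulative ratios discussed above follow directly from the usual definition of the benefit-cost ratio; written for the k-th six sigma stage,

```latex
BCR_k = \frac{B_k}{C_k}, \qquad
BCR_{cum,k} = \frac{\sum_{i=1}^{k} B_i}{\sum_{i=1}^{k} C_i},
```

which is why the individual ratio of a later stage exceeds the cumulative one: the cumulative denominator carries the costs accumulated since the define stage, while benefits only start appearing at the improve stage.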

Keywords: Six Sigma Solutions, Process Improvement, Quality Management, Benefit-Cost Ratio

9340 Relation between Properties of Internally Cured Concrete and Water Cement Ratio

Authors: T. Manzur, S. Iffat, M. A. Noor

Abstract:

In this paper, relationships between different properties of internally cured (IC) concrete and the water-cement ratio, obtained from a comprehensive experiment on IC using a local material (burnt clay chips, BC), are presented. In addition, saturated SAP was used as an IC material in some cases. The relationships have been developed through regression analysis, the focus of which is on developing a relationship between a dependent variable and an independent variable. Different percentage replacements of BC and different water-cement ratios were used. Compressive strength, modulus of elasticity, water permeability and chloride permeability were tested, and the variations of these parameters were analyzed with respect to the water-cement ratio.

Keywords: Compressive strength, concrete, curing, lightweight aggregate, superabsorbent polymer, internal curing.

9339 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks

Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev

Abstract:

One possible approach for maintaining the security of communication systems relies on Physical Layer Security mechanisms. However, in wireless time division duplex systems, where uplink and downlink channels are reciprocal, the channel estimate procedure is exposed to attacks known as pilot contamination, with the aim of having an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each of which belongs to a constellation, shifted from the original N-PSK symbols by certain degrees. In this paper, legitimate pilots’ offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attacks (PCA) detection probability and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack but the large number of possible combinations for the shifted constellations makes such a type of attack difficult to successfully mount. For this reason, the number of optimal shift value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from the signals in other cells should be also taken into account. Therefore, the inter-cell interference impact on the performance of the method is investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK decreases inversely to the signal-to-interference-plus-noise ratio.

Keywords: Channel estimation, inter-cell interference, pilot contamination attacks, wireless communications.

9338 Evolutionary Techniques for Model Order Reduction of Large Scale Linear Systems

Authors: S. Panda, J. S. Yadav, N. P. Patidar, C. Ardil

Abstract:

Recently, genetic algorithm (GA) and particle swarm optimization (PSO) techniques have attracted considerable attention among modern heuristic optimization techniques. The GA has been popular in academia and industry mainly because of its intuitiveness, ease of implementation, and ability to effectively solve highly nonlinear, mixed-integer optimization problems that are typical of complex engineering systems. The PSO technique is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations. In this paper, both PSO and GA optimization are employed for finding stable reduced order models of single-input single-output large-scale linear systems. Both techniques guarantee the stability of the reduced order model if the original high order model is stable. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model for a unit step input. Both methods are illustrated through a numerical example from the literature, and the results are compared with a recently published conventional model reduction technique.
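
For completeness, the Integral Squared Error minimized by the PSO-based reduction can be written as

```latex
ISE = \int_{0}^{\infty} \left[\, y(t) - y_r(t) \,\right]^2 \, dt,
```

where y(t) and y_r(t) are the unit step responses of the original high order model and the reduced order model, respectively.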

Keywords: Genetic Algorithm, Particle Swarm Optimization, Order Reduction, Stability, Transfer Function, Integral Squared Error.

9337 Improved Segmentation of Speckled Images Using an Arithmetic-to-Geometric Mean Ratio Kernel

Authors: J. Daba, J. Dubois

Abstract:

In this work, we improve a previously developed segmentation scheme aimed at extracting edge information from speckled images using a maximum likelihood edge detector. The scheme was based on finding a threshold for the probability density function of a new kernel, defined as the arithmetic mean-to-geometric mean ratio field over a circular neighborhood set, and, in a general context, is founded on a likelihood random field model (LRFM). The segmentation algorithm was applied to discriminated speckle areas obtained using simple elliptic discriminant functions based on measures of the signal-to-noise ratio with fractional order moments. A rigorous stochastic analysis was used to derive an exact expression for the cumulative distribution function of the random field. Based on this, an accurate probability of error was derived and the performance of the scheme was analysed. The improved segmentation scheme performed well for both simulated and real images and showed superior results to those previously obtained using the original LRFM scheme and standard edge detection methods. In particular, the false alarm probability was markedly lower than that of the original LRFM method, with oversegmentation artifacts virtually eliminated. The importance of this work lies in the development of a stochastic-based segmentation allowing an accurate quantification of the probability of false detection. Non-visual quantification and misclassification in medical ultrasound speckled images are relatively new and are of interest to clinicians.
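
A minimal sketch of the arithmetic-to-geometric mean ratio kernel is given below; for simplicity it uses a square moving window (SciPy's uniform filter) rather than the circular neighborhood set of the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def am_gm_ratio(image, size=7, eps=1e-12):
    """Arithmetic-to-geometric mean ratio field of a positive-valued image.

    The geometric mean over the window is computed as exp(mean(log(x))).
    The ratio is >= 1 everywhere and grows in heterogeneous (edge) regions,
    which is the quantity the likelihood-based detector thresholds.
    """
    img = np.asarray(image, dtype=float) + eps            # speckle intensities are positive
    am = uniform_filter(img, size=size)                   # arithmetic mean over the window
    gm = np.exp(uniform_filter(np.log(img), size=size))   # geometric mean over the window
    return am / gm
```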

Keywords: Discriminant function, false alarm, segmentation, signal-to-noise ratio, skewness, speckle.

9336 Interactive Compromise Approach with Particle Swarm Optimization for Environmental/Economic Power Dispatch

Authors: Ming-Tang Tsai, Chih-Wei Yen

Abstract:

In this paper, an Interactive Compromise Approach with Particle Swarm Optimization (ICA-PSO) is presented to solve the Economic Emission Dispatch (EED) problem. The cost function and the emission function are both modeled as nonsmooth functions. A bi-objective problem including the minimization of both cost and emission is formulated in this paper. ICA-PSO is proposed to solve the EED problem and find a better compromise solution. The solution methodology can offer a global or near-global solution for decision-making requirements. The effectiveness and efficiency of ICA-PSO are demonstrated on a sample test system. Test results show that the proposed method provides a practical and flexible framework for power dispatch.

Keywords: Interactive Compromise Approach, Emission Control, Economic Dispatch, Particle Swarm Optimization.

9335 Multiwavelet and Biological Signal Processing

Authors: Morteza Moazami-Goudarzi, Ali Taheri, Mohammad Pooyan, Reza Mahboobi

Abstract:

In this paper, we aim to find the optimum multiwavelet for the compression of electrocardiogram (ECG) signals and then select it for use with the SPIHT codec. At present, it is not well known which multiwavelet is the best choice for optimum compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. For assessing the ability of the different multiwavelets to compress ECG signals, in addition to factors known from the compression literature such as Compression Ratio (CR), Percent Root-mean-square Difference (PRD), Distortion (D) and Root Mean Square Error (RMSE), we also employed the Cross Correlation (CC) criterion, for studying the morphological relation between the reconstructed and the original ECG signals, and the Signal-to-reconstruction-Noise Ratio (SNR). The simulation results show that the Cardinal Balanced Multiwavelet (cardbal2), with the identity (Id) prefiltering method, is the most effective transformation. After finding the most efficient multiwavelet, we apply the SPIHT coding algorithm to the signal transformed by this multiwavelet.
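
The distortion measures listed above are standard in the ECG compression literature; a short sketch of how CR, PRD, RMSE, SNR and CC are computed from an original signal x and its reconstruction x_rec follows (generic definitions, not code from the paper).

```python
import numpy as np

def ecg_metrics(x, x_rec, original_bits, compressed_bits):
    """Common ECG compression quality measures."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    err = x - x_rec
    cr = original_bits / compressed_bits                    # compression ratio
    prd = 100.0 * np.sqrt(np.sum(err**2) / np.sum(x**2))    # percent root-mean-square difference
    rmse = np.sqrt(np.mean(err**2))                         # root mean square error
    snr = 10.0 * np.log10(np.sum(x**2) / np.sum(err**2))    # signal-to-reconstruction-noise ratio
    cc = np.corrcoef(x, x_rec)[0, 1]                        # cross correlation
    return cr, prd, rmse, snr, cc
```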

Keywords: ECG compression, Prefiltering, Cardinal Balanced Multiwavelet.

9334 Flow-Through Supercritical Installation for Producing Biodiesel Fuel

Authors: Y. A. Shapovalov, F. M. Gumerov, M. K. Nauryzbaev, S. V. Mazanov, R. A. Usmanov, A. V. Klinov, L. K. Safiullina, S. A. Soshin

Abstract:

A flow-through installation was created and manufactured for the transesterification of fatty acid triglycerides and the production of biodiesel fuel under supercritical fluid conditions. Transesterification of rapeseed oil with ethanol was carried out while varying two parameters, the temperature and the alcohol/oil ratio, at a constant pressure of 19 MPa. The kinetics of the yield of fatty acid ethyl esters (FAEE) was determined in the temperature range of 320-380 °C at alcohol/oil molar ratios of 6:1-20:1. The content of the formed FAEE was determined from the kinematic viscosity of the resulting biodiesel fuel using a correlation. The maximum FAEE yield (about 90%) was obtained within 30 min at an ethanol/oil molar ratio of 12:1 and a temperature of 380 °C. In studying the transesterification of triglycerides, a kinetic model of an isothermal flow reactor was used. The order of the reaction carried out in the flow reactor was determined. The first order of the reaction was confirmed by data on the conversion to FAEE during the reaction at different temperatures and molar ratios of the initial reagents (ethanol/oil). Using the Arrhenius equation, the values of the effective rate constants of the transesterification reaction were calculated at different reaction temperatures. In addition, based on the experimental data, the activation energy and the pre-exponential factor of the transesterification reaction were determined.
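
The kinetic treatment described above corresponds to the standard first-order isothermal flow-reactor relations, written here in generic form:

```latex
\ln\!\left(\frac{1}{1 - X}\right) = k\,\tau, \qquad
k = A \exp\!\left(-\frac{E_a}{R\,T}\right),
```

where X is the conversion to FAEE, τ the residence time in the reactor, k the effective rate constant, E_a the activation energy, A the pre-exponential factor and R the universal gas constant; plotting ln k against 1/T yields E_a and A, as done by the authors.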

Keywords: Biodiesel, fatty acid esters, supercritical fluid technology, transesterification.

9333 PAPR Reduction Method for OFDM Signal by Using Dummy Sub-carriers

Authors: Pisit Boonsrimuang, Arjin Numsomran, Tawil Paungma, Hideo Kobayashi

Abstract:

One of the disadvantages of using OFDM is the large peak-to-average power ratio (PAPR) of its time domain signal. A signal with a large PAPR causes fatal degradation of the bit error rate (BER) performance due to inter-modulation noise in a nonlinear channel. This paper proposes an improved DSI (Dummy Sequence Insertion) method, which can achieve better PAPR and BER performance. The feature of the proposed method is to optimize the phase of each dummy sub-carrier so as to reduce the PAPR by changing all predetermined phase coefficients in the time domain signal, which is calculated for the data sub-carriers and the dummy sub-carriers separately. To achieve better PAPR performance, this paper also proposes employing a time-frequency domain swapping algorithm for fine adjustment of the phase coefficients of the dummy sub-carriers, which requires less processing complexity and achieves better PAPR and BER performance than the conventional DSI method. This paper presents various computer simulation results to verify the effectiveness of the proposed method in comparison with conventional methods in a nonlinear channel.
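
For reference, the PAPR of a discrete-time OFDM symbol is the ratio of its peak instantaneous power to its average power; the sketch below evaluates it for a randomly modulated symbol (the QPSK mapping, 64 sub-carriers and 4x oversampling are illustrative choices, not the paper's simulation settings).

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """PAPR (in dB) of one OFDM symbol given its frequency-domain sub-carriers."""
    n = len(freq_symbols)
    # Zero-pad in the frequency domain to oversample the time-domain waveform.
    padded = np.concatenate([freq_symbols[:n // 2],
                             np.zeros((oversample - 1) * n, dtype=complex),
                             freq_symbols[n // 2:]])
    x = np.fft.ifft(padded)
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(0)
qpsk = (rng.integers(0, 2, 64) * 2 - 1 + 1j * (rng.integers(0, 2, 64) * 2 - 1)) / np.sqrt(2)
print(papr_db(qpsk))   # PAPR of a 64-sub-carrier QPSK OFDM symbol
```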

Keywords: OFDM, PAPR, dummy sub-carriers, non-linear channel

9332 2D Validation of a High-Order Adaptive Cartesian-Grid Finite-Volume Characteristic-Flux Model with Embedded Boundaries

Authors: C. Leroy, G. Oger, D. Le Touzé, B. Alessandrini

Abstract:

A finite volume method based on characteristic fluxes for compressible fluids is developed. An explicit cell-centered resolution is adopted, where second and third order accuracy is provided by using two different MUSCL schemes with Minmod, Sweby or Superbee limiters for the hyperbolic part. A few different time integrators are used and are described in this paper. The resolution is performed on a generic unstructured Cartesian grid, where solid boundaries are handled by a cut-cell method. Interfaces are explicitly advected in a non-diffusive way, ensuring local mass conservation. An improved cell cutting has been developed to handle boundaries of arbitrary geometrical complexity. Instead of using a polygon clipping algorithm, we use the voxel traversal algorithm coupled with a local flood-fill scanline to intersect 2D or 3D boundary surface meshes with the fixed Cartesian grid. The stability problem of small cells near the boundaries is solved using a fully conservative merging method. Inflow and outflow conditions are also implemented in the model. The solver is validated on 2D academic test cases, such as the flow past a cylinder. These test cases are performed both in the frame of the body and in a fixed frame where the body is moving across the mesh. Adaptive Cartesian grids are provided by Paramesh, without complex geometries for the moment.

Keywords: Finite volume method, cartesian grid, compressible solver, complex geometries, Paramesh.

9331 Hardware Centric Machine Vision for High Precision Center of Gravity Calculation

Authors: Xin Cheng, Benny Thörnberg, Abdul Waheed Malik, Najeem Lawal

Abstract:

We present a hardware-oriented method for real-time measurement of an object's position in video. The targeted application area is light spots used as references for robotic navigation. Different algorithms for dynamic thresholding are explored in combination with component labeling and Center Of Gravity (COG) computation for the highest possible precision versus Signal-to-Noise Ratio (SNR). The method was developed with a low hardware cost in focus, requiring only one convolution operation for the preprocessing of data.
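
A software reference for the processing chain (threshold, label connected components, compute each component's center of gravity) can be sketched as below; this is a plain NumPy/SciPy illustration of the algorithm, not the hardware implementation described in the paper.

```python
import numpy as np
from scipy import ndimage

def light_spot_centers(frame, threshold):
    """Centers of gravity of bright spots in a grayscale frame.

    Intensity-weighted COG gives sub-pixel precision, which is why COG is
    used for light-spot references in robotic navigation.
    """
    frame = np.asarray(frame, dtype=float)
    mask = frame > threshold                     # a dynamic-thresholding stage would adapt this value
    labels, num = ndimage.label(mask)            # connected-component labeling
    # Intensity-weighted centers of mass, one (row, col) pair per spot.
    return ndimage.center_of_mass(frame, labels, index=range(1, num + 1))
```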

Keywords: Dynamic thresholding, segmentation, position measurement, sub-pixel precision, center of gravity.

9330 Signal-to-Noise Ratio Improvement of EMCCD Cameras

Authors: Wen W. Zhang, Qian Chen, Bei B. Zhou, Wei J. He

Abstract:

Over the past years, the EMCCD has had a profound influence on photon-starved imaging applications, relying on its unique multiplication register based on the impact ionization effect in silicon. A high signal-to-noise ratio (SNR) means high image quality. Thus, SNR improvement is important for the EMCCD. This work analyzes the SNR performance of an EMCCD with the gain off and on. For each mode, simplified SNR models are established for different integration times. The SNR curves are divided by integration time into a readout noise (or CIC) region and a shot noise region. Theoretical SNR values comparing long frame integration and frame adding in each region are presented and discussed to determine which method is more effective. In order to further improve the SNR performance, pixel binning is introduced into the EMCCD. The results show that pixel binning does obviously improve the SNR performance, but at the expense of spatial resolution.
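
As background, a commonly used per-pixel SNR model for an EM-amplified sensor has the form sketched below; the simplified models derived in the paper are not reproduced here.

```latex
SNR \approx \frac{\eta\,N_{ph}}
{\sqrt{F^2\left(\eta\,N_{ph} + \eta\,N_{bg} + N_{dark} + N_{CIC}\right) + \left(\sigma_r / M\right)^2}},
```

where η is the quantum efficiency, N_ph and N_bg the signal and background photon numbers per pixel per integration, N_dark the dark charge, N_CIC the clock-induced charge, σ_r the readout noise and M the EM gain; F^2 ≈ 2 is the excess noise factor of the multiplication register with the gain on, while F = 1 and M = 1 with the gain off.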

Keywords: EMCCD, SNR improvement, pixel binning

9329 Studies on Various Parameters Involved in Conjugation of Starch with Lysine for Excellent Emulsification Properties Using Response Surface Methodology

Authors: Sourish Bhattacharya, Priyanka Singh

Abstract:

The process parameters starch-water ratio (A, % w/v), pH of the suspension (B), temperature (C, °C) and time (D, h) were optimized for the preparation of a starch-lysine conjugate, and their effect on the stability of emulsions was studied by calculating the emulsion stability index using response surface methodology. The optimized conditions are pH 9.0, a temperature of 60 °C, a reaction time of 6 h and a starch:water ratio of 1:2.5, at which the emulsion stability index was 0.72.

Keywords: Emulsion stability index, pH of suspension, Starch-water ratio, Temperature, Time.

9328 Investigation of the Effect of Number of Story on Different Structural Components of RC Building

Authors: Zasiah Tafheem, Mahadee Hasan Shourav, Zahidul Islam, Saima Islam Tumpa

Abstract:

The paper aims at investigating the effect of the number of stories on different structural components of a reinforced concrete building under gravity and lateral loading. For the study, three building models having the same building plan, with three, six and nine stories, are analyzed and designed using a software package. All the buildings are residential and are located in Dhaka city, Bangladesh. Lateral loads, including wind and earthquake loading, are applied to the buildings along both the longitudinal and transverse directions as per the Bangladesh National Building Code (BNBC, 2006). The equivalent static force method is followed for the applied seismic loading. The present study investigates and compares mainly the total steel requirement of different structural components of these buildings. It has been found that the total longitudinal steel requirement for beams at each floor is 48.57% for the three-storied building and 61.36% for the six-storied building, when the requirement of the nine-storied building is taken as 100%. For an exterior column, the steel ratio is 2.1%, 3.06% and 4.55% for the three-, six- and nine-storied buildings, respectively, for the first three floors. In addition, it has been noted that the total weight of longitudinal reinforcement of an interior column is 14.02% for the three-storied building and 43.12% for the six-storied building, when the total reinforcement of the nine-storied building is considered 100%, for the first three floors.

Keywords: Equivalent Static Force Method, longitudinal reinforcement, seismic loading, steel ratio.

9327 An Experimental Study on the Effect of Premixed and Equivalence Ratios on CO and HC Emissions of Dual Fuel HCCI Engine

Authors: M. Ghazikhani, M. R. Kalateh, Y. K. Toroghi, M. Dehnavi

Abstract:

In this study, the effects of the premixed and equivalence ratios on the CO and HC emissions of a dual fuel HCCI engine are investigated. Tests were conducted on a single-cylinder engine with a compression ratio of 17.5. Premixed gasoline is provided by a carburetor connected to the intake manifold and equipped with a screw to adjust the premixed air-fuel ratio, and diesel fuel is injected directly into the cylinder through an injector at a pressure of 250 bar. A heater placed at the inlet manifold is used to control the intake charge temperature. An optimal intake charge temperature results in better HCCI combustion due to the formation of a homogeneous mixture; therefore, all tests were carried out at the optimum intake temperature of 110-115 ºC. The timing of diesel fuel injection has a great effect on the stratification of the in-cylinder charge and plays an important role in HCCI combustion phasing. Experiments indicated 35° BTDC as the optimum injection timing. Varying the coolant temperature in a range of 40 to 70 ºC, better HCCI combustion was achieved at 50 ºC; therefore, the coolant temperature was maintained at 50 ºC during all tests. A simultaneous investigation of the parameters affecting HCCI combustion was conducted to determine the optimum parameters resulting in a fast transition to HCCI combustion. One of the advantages of the method studied here is the feasibility of an easy and fast conversion of a typical diesel engine to a dual fuel HCCI engine. Results show that increasing the premixed ratio, while keeping the EGR rate constant, increases unburned hydrocarbon (UHC) emissions due to quenching phenomena and the trapping of premixed fuel in crevices, but CO emission decreases due to the increase in CO to CO2 reactions.

Keywords: Dual fuel HCCI engine, premixed ratio, equivalence ratio, CO and UHC emissions.

9326 Analysis of Flow in Cylindrical Mixing Chamber

Authors: Václav Dvořák

Abstract:

The article deals with the numerical investigation of an axisymmetric subsonic air-to-air ejector. An analysis of the flow and mixing processes in the cylindrical mixing chamber is made. Several modes with different velocity and ejection ratios are presented. The mixing processes are described, and the differences between the flow in the initial region of mixing and in the main region of mixing are outlined. The lengths of both regions are evaluated. The transition point and the point where the mixing processes are finished are identified. It was found that the length of the initial region of mixing is strongly dependent on the velocity ratio, while the length of the main region of mixing depends on the velocity ratio only slightly.

Keywords: Air ejector, mixing chamber, CFD.

9325 Calculation Analysis of an Axial Compressor Supersonic Stage Impeller

Authors: Y. B. Galerkin, E. Y. Popova, K. V. Soldatova

Abstract:

There is an evident trend towards elevating the pressure ratio of a single stage of turbo compressors, axial compressors in particular. Whilst the opinion until recently was that a pressure ratio of 1.9 was a reasonable limit, information later appeared on successfully modeled and tested stages with pressure ratios up to 2.8. The authors reckon that the lack of information on high-pressure stages makes a study of the rational choice of design parameters relevant before solving high supersonic flow problems. A computer program of an engineering type was developed. Below, a sample of its application to the study of possible parameters of the impeller of a stage with a pressure ratio of 3.0 is presented. The influence of two main design parameters on the expected efficiency, peripheral blade speed and flow structure is demonstrated. The results led to the choice of a variant for further analysis and improvement by CFD methods.

Keywords: Supersonic stage, impeller, efficiency, flow rate coefficient, work coefficient, loss coefficient, oblique shock, direct shock.
