Search results for: standard inverse definite minimum time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8660


8450 Thermal and Starvation Effects on Lubricated Elliptical Contacts at High Rolling/Sliding Speeds

Authors: Vinod Kumar, Surjit Angra

Abstract:

The objective of this theoretical study is to develop simple design formulas for the prediction of minimum film thickness and maximum mean film temperature rise in lightly loaded, high-speed rolling/sliding lubricated elliptical contacts, incorporating the starvation effect. The reported numerical analysis focuses on thermoelastohydrodynamically lubricated rolling/sliding elliptical contacts, assuming Newtonian lubricant rheology, for a wide range of operating parameters: load characterized by Hertzian pressure (PH = 0.01 GPa to 0.10 GPa), rolling speed (>10 m/s), slip parameter (S up to 1.0), and ellipticity ratio (k = 1 to 5). Starvation is simulated by systematically reducing the inlet supply. The analysis reveals that load, rolling speed, and the level of starvation have a significant influence on the minimum film thickness, whereas the maximum mean film temperature rise is strongly influenced by slip in addition to load, rolling speed, and the level of starvation. In the presence of starvation, a reduction in minimum film thickness and an increase in maximum mean film temperature are observed. Based on the results of this study, empirical relations are developed for the prediction of the dimensionless minimum film thickness and the dimensionless maximum mean film temperature rise at the contacts in terms of the operating parameters.
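
For orientation, the sketch below evaluates the classical fully flooded, isothermal Hamrock-Dowson correlation for the dimensionless minimum film thickness of an elliptical contact. It is offered only as a reference point, not the starved/thermal relations developed in the paper, and the operating point in the example is hypothetical.

```python
import math

def hmin_fully_flooded(U, G, W, k):
    """Classical Hamrock-Dowson minimum film thickness (dimensionless) for a
    fully flooded, isothermal elliptical contact; used here only as a
    reference point, not the starved/thermal relations of the paper."""
    return 3.63 * U**0.68 * G**0.49 * W**(-0.073) * (1.0 - math.exp(-0.68 * k))

# Illustrative (hypothetical) dimensionless speed, material and load parameters
U, G, W, k = 1.0e-11, 5000.0, 1.0e-7, 3.0
print(f"H_min = {hmin_fully_flooded(U, G, W, k):.3e}")
```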

Keywords: Starvation, lubrication, elliptical contact, traction, minimum film thickness.

8449 DHT-LMS Algorithm for Sensorineural Loss Patients

Authors: Sunitha S. L., V. Udayashankara

Abstract:

Hearing impairment is the number one chronic disability, affecting many people in the world. Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for sensorineural loss patients. Several investigations on speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal-hearing subjects. This paper describes the Discrete Hartley Transform power-normalized Least Mean Square algorithm (DHT-LMS), which improves the SNR and reduces the convergence time of the Least Mean Square (LMS) algorithm for sensorineural loss patients. The DHT transforms n real numbers to n real numbers and has the convenient property of being its own inverse; it can be effectively used for noise cancellation with less convergence time. The simulation results show superior characteristics, improving the SNR by at least 9 dB for an input SNR of 0 dB and converging faster (eigenvalue ratio of 12) compared to the time-domain method and DFT-LMS.
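
As an illustration of the transform-domain idea, here is a minimal power-normalized LMS sketch operating on DHT coefficients, together with a check of the DHT self-inverse property mentioned above. The block length, step size and forgetting factor are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dht_matrix(N):
    """Discrete Hartley Transform matrix with kernel cas(2*pi*k*n/N) =
    cos + sin; the DHT is its own inverse up to a factor of N."""
    n = np.arange(N)
    arg = 2.0 * np.pi * np.outer(n, n) / N
    return np.cos(arg) + np.sin(arg)

def dht_lms(x, d, N=16, mu=0.05, beta=0.9, eps=1e-6):
    """Minimal transform-domain (DHT) power-normalized LMS sketch.
    x: noise reference input, d: noisy desired signal; returns the error
    signal e, which is the enhanced output in a noise-cancellation setup."""
    T = dht_matrix(N)
    w = np.zeros(N)          # adaptive weights in the transform domain
    p = np.full(N, eps)      # per-bin power estimates for normalization
    e = np.zeros(len(x))
    buf = np.zeros(N)
    for i in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[i]                          # most recent N samples
        u = T @ buf / np.sqrt(N)               # DHT of the regressor
        y = w @ u
        e[i] = d[i] - y
        p = beta * p + (1.0 - beta) * u * u    # running power per bin
        w += mu * e[i] * u / (p + eps)         # power-normalized update
    return e

# Self-inverse check: applying the DHT twice returns N * x
N = 8
x = np.random.randn(N)
T = dht_matrix(N)
assert np.allclose(T @ (T @ x) / N, x)
```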

Keywords: Hearing Impairment, DHT-LMS, Convergence rate, SNR improvement.

8448 Lane Detection Using Labeling Based RANSAC Algorithm

Authors: Yeongyu Choi, Ju H. Park, Ho-Youl Jung

Abstract:

In this paper, we propose a labeling-based RANSAC algorithm for lane detection. Advanced driver assistance systems (ADAS) have been widely researched to avoid unexpected accidents, and lane detection is a necessary component for lane-keeping assistance and lane departure prevention. The proposed vision-based lane detection method applies Canny edge detection, inverse perspective mapping (IPM), the K-means algorithm, mathematical morphology operations and 8-connected component labeling. Next, random samples are selected from each labeled region for RANSAC; this sampling method selects lane points with high probability. Finally, lane parameters of straight-line or curve equations are estimated. Through simulations on video recorded in daytime and nighttime, we show that the proposed method performs better than the existing RANSAC algorithm in various environments.
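
For reference, a minimal RANSAC line-fitting sketch of the kind applied to each labeled region is given below; the iteration count and inlier threshold are illustrative assumptions, and the labeling, IPM and K-means stages are not shown.

```python
import numpy as np

def ransac_line(points, n_iter=200, thresh=2.0, seed=None):
    """Minimal RANSAC line fit (a*x + b*y + c = 0 with a^2 + b^2 = 1).
    'points' is an (N, 2) array of candidate lane points, e.g. one labeled
    region after edge detection and IPM; thresholds are illustrative."""
    rng = np.random.default_rng(seed)
    best_inliers, best_line = None, None
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        norm = np.hypot(*d)
        if norm < 1e-9:
            continue
        a, b = d[1] / norm, -d[0] / norm           # unit normal of the line
        c = -(a * p1[0] + b * p1[1])
        dist = np.abs(points @ np.array([a, b]) + c)
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_line = inliers, (a, b, c)
    return best_line, best_inliers

# Synthetic lane points with added outliers
pts = np.column_stack([np.arange(100),
                       0.5 * np.arange(100) + 3 + np.random.randn(100)])
pts = np.vstack([pts, np.random.uniform(0, 100, (20, 2))])
line, inliers = ransac_line(pts, seed=0)
print("line (a, b, c):", line, " inliers:", inliers.sum())
```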

Keywords: Canny edge detection, k-means algorithm, RANSAC, inverse perspective mapping.

8447 Pavement Roughness Prediction Systems: A Bump Integrator Approach

Authors: Manish Pal, Rumi Sutradhar

Abstract:

Pavement surface unevenness plays a pivotal role in the roughness index of a road, which affects riding comfort. Riding comfort refers to the degree of protection offered to vehicle occupants from uneven elements in the road surface, so a lower roughness index value is preferable for a better riding quality. Roughness is generally defined as an expression of irregularities in the pavement surface and can be measured using different equipment such as MERLIN, the Bump Integrator and profilometers. Among these, the Bump Integrator is quite simple and less time-consuming for long road sections. A case study is conducted on low-volume roads in the West District of Tripura to determine the roughness index (RI) using the Bump Integrator at the standard speed of 32 km/h. However, it is difficult to maintain the requisite standard speed throughout a road section; the speed of the Bump Integrator (BI) has to be lowered or raised in some situations. It therefore becomes necessary to convert roughness index values measured at other speeds to the standard speed of 32 km/h. This paper presents such a roughness index conversion model. Using SPSS (Statistical Package for the Social Sciences) software, a generalized equation is derived relating the RI value at the standard speed of 32 km/h to RI values at other speeds.
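
The paper does not reproduce the SPSS model itself; the sketch below only shows what fitting such a speed-conversion equation by ordinary least squares looks like, using made-up (hypothetical) readings rather than the study's data.

```python
import numpy as np

# Hypothetical (BI speed, measured RI, RI at the standard 32 km/h) samples;
# the paper's actual SPSS model and coefficients are not reproduced here.
speed = np.array([20, 24, 28, 32, 36, 40], dtype=float)         # km/h
ri_v  = np.array([2450, 2300, 2180, 2100, 2040, 1990], float)   # mm/km at speed v
ri_32 = np.array([2110, 2095, 2105, 2100, 2098, 2102], float)   # mm/km at 32 km/h

# Fit RI_32 = b0 + b1*RI_v + b2*v by ordinary least squares, the general
# shape a speed-conversion equation of this kind typically takes.
X = np.column_stack([np.ones_like(speed), ri_v, speed])
coef, *_ = np.linalg.lstsq(X, ri_32, rcond=None)
b0, b1, b2 = coef
print(f"RI_32 ~= {b0:.2f} + {b1:.3f}*RI_v + {b2:.3f}*v")
```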

Keywords: Bump Integrator, Pavement Distresses, Roughness Index, SPSS.

8446 A Study on Removal Characteristics of (Mn2+) from Aqueous Solution by CNT

Authors: Nassereldeen A. Kabashi, Suleyman A. Muyibi, Mohammed E. Saeed, Farhana I. Yahya

Abstract:

It is important to remove manganese from water because of its effects on humans and the environment. Human activities are one of the biggest contributors to excessive manganese concentrations in the environment. The proposed method removes manganese from aqueous solution by adsorption on carbon nanotubes (CNT), studied over different parameters: CNT dosage, pH, agitation speed and contact time. The pH values are 6.0, 6.5, 7.0, 7.5 and 8.0; the CNT dosages are 5 mg, 6.25 mg, 7.5 mg, 8.75 mg and 10 mg; the contact times are 10 min, 32.5 min, 55 min, 87.5 min and 120 min; and the agitation speeds are 100 rpm, 150 rpm, 200 rpm, 250 rpm and 300 rpm. The parameter combinations for the experiments are based on an experimental design created with the Central Composite Design in Design Expert 6.0, with 4 parameters, 5 levels and 2 replications. Based on the results, the condition of pH 7.0, agitation speed of 300 rpm, CNT dosage of 7.5 mg and contact time of 55 minutes gives the highest removal, 75.5%. From the ANOVA analysis in Design Expert 6.0, the residual concentration is most strongly affected by pH and CNT dosage. The initial manganese concentration is 1.2 mg/L, while the lowest residual concentration achieved is 0.294 mg/L, which almost satisfies the DOE Malaysia Standard B requirement. Therefore, further experiments must be done to remove manganese from model water to the required standard (0.2 mg/L), with the initial concentration set to 0.294 mg/L.

Keywords: Adsorption, CNT, DOE, Manganese, Parameters.

8445 Real-time Performance Study of EPA Periodic Data Transmission

Authors: Liu Ning, Zhong Chongquan, Teng Hongfei

Abstract:

EPA (Ethernet for Plant Automation) resolves the non-deterministic behavior of standard Ethernet and accomplishes real-time communication by means of a micro-segment topology and a deterministic scheduling mechanism. This paper studies the real-time performance of EPA periodic data transmission from theoretical and experimental perspectives. By analyzing the information transmission characteristics and the EPA deterministic scheduling mechanism, five indicators that can be used to specify the real-time performance of EPA periodic data transmission are presented and investigated: delivery time, time synchronization accuracy, data-sending time offset accuracy, utilization percentage of the configured timeslice, and non-RTE bandwidth. On this basis, the test principles and test methods for these indicators are studied and some formulas for the real-time performance of an EPA system are derived. Furthermore, an experimental platform is developed to test the indicators of EPA periodic data transmission in a micro-segment. Based on the analysis and the experiment, methods to improve the real-time performance of EPA periodic data transmission are proposed, including optimizing the network structure, studying a self-adaptive adjustment method for the timeslice, and providing data-sending time offset accuracy for configuration.

Keywords: EPA system, Industrial Ethernet, Periodic data, Real-time performance

8444 Performance Analysis of Space-Time Trellis Coded OFDM System

Authors: Yi Hong, Zhao Yang Dong

Abstract:

This paper presents a performance analysis of space-time trellis codes in orthogonal frequency division multiplexing systems (STTC-OFDM) over quasi-static frequency-selective fading channels. In particular, the effect of channel delay distributions on the code performance is discussed. For an STTC-OFDM system over multiple-tap channels, two extreme conditions that produce the largest minimum determinant are highlighted. The analysis also proves that the corresponding coding gain increases with the maximum tap delay. The performance of STTC-OFDM under various channel conditions is evaluated by simulation, and the simulation results are shown to agree with the performance analysis.

Keywords: Space-time trellis code, OFDM, delay profile.

8443 A Reliable FPGA-based Real-time Optical-flow Estimation

Authors: M. M. Abutaleb, A. Hamdy, M. E. Abuelwafa, E. M. Saad

Abstract:

Optical flow has been a research topic of interest for many years. It has, until recently, been largely inapplicable to real-time applications due to its computationally expensive nature. This paper presents a new, reliable flow technique that is combined with a motion detection algorithm, operating on stationary-camera image streams, to allow flow-based analyses of moving entities, such as rigidity, in real time. Combining the optical flow analysis with the motion detection technique greatly reduces the expensive computation of flow vectors compared with standard approaches, making the method applicable to real-time implementation. This paper also describes the hardware implementation of the proposed pipelined system for estimating flow vectors from image sequences in real time. The design can process 768 x 576 images at a very high frame rate, reaching 156 fps, in a single low-cost FPGA chip, which is adequate for most real-time vision applications.

Keywords: Optical flow, motion detection, real-time systems, FPGA.

8442 Adaptive Bidirectional Flow for Image Interpolation and Enhancement

Authors: Shujun Fu, Qiuqi Ruan, Wenqia Wang

Abstract:

Image interpolation is a common problem in imaging applications. However, most existing interpolation algorithms suffer, to some extent, from the visual effects of blurred edges and jagged artifacts in the image. This paper presents an adaptive, feature-preserving bidirectional flow process, in which an inverse diffusion is performed to sharpen edges along the directions normal to the isophote lines (edges), while a normal diffusion is performed to remove artifacts ("jaggies") along the tangent directions. In order to preserve image features such as edges, corners and textures, the nonlinear diffusion coefficients are locally adjusted according to the directional derivatives of the image. Experimental results on synthetic images and natural images demonstrate that our interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional interpolations.
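
A minimal 1-D illustration of the sharpening (inverse-diffusion) side of such a bidirectional flow is the classical shock filter, sketched below; the paper's full 2-D scheme, with tangent-direction smoothing and derivative-adapted coefficients, is not reproduced.

```python
import numpy as np

def shock_filter_1d(u, n_iter=50, dt=0.1):
    """1-D shock-filter sketch of the edge-sharpening (inverse-diffusion)
    mechanism: u_t = -sign(u_xx) * |u_x|. Step count and step size are
    illustrative; periodic boundaries via np.roll keep the code short."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        ux = (np.roll(u, -1) - np.roll(u, 1)) / 2.0        # central difference
        uxx = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
        u -= dt * np.sign(uxx) * np.abs(ux)                # sharpen toward edges
    return u

blurred_edge = 1.0 / (1.0 + np.exp(-np.linspace(-4, 4, 64)))   # smooth step
print(np.round(shock_filter_1d(blurred_edge)[28:36], 2))       # steepened step
```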

Keywords: anisotropic diffusion, bidirectional flow, directional derivatives, edge enhancement, image interpolation, inverse flow, shock filter.

8441 Reduced Dynamic Time Warping for Handwriting Recognition Based on Multidimensional Time Series of a Novel Pen Device

Authors: Muzaffar Bashir, Jürgen Kempf

Abstract:

The purpose of this paper is to present a Dynamic Time Warping technique that significantly reduces the data processing time and memory size for multi-dimensional time series sampled by the biometric smart pen device BiSP. The acquisition device is a novel ballpoint pen equipped with a diversity of sensors for monitoring the kinematics and dynamics of handwriting movement. The DTW algorithm has been applied for time series analysis of five different sensor channels providing pressure, acceleration and tilt data of the pen generated during handwriting on a paper pad. However, standard DTW has processing time and memory space problems that limit its practical use for online handwriting recognition. To address this problem, DTW has been applied to the sum of the five sensor signals after an adequate down-sampling of the data. Preliminary results show that processing time and memory size can be significantly reduced without deterioration of performance in single-character and word recognition. Furthermore, excellent recognition accuracy was achieved, which is mainly due to the reduced dynamic time warping (RDTW) technique and the novel BiSP pen device.
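
For illustration, the sketch below pairs a textbook DTW distance with the reduction described above (summing the channels and down-sampling before alignment); the down-sampling factor and test data are hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def reduced_dtw(channels_a, channels_b, factor=4):
    """Sketch of the reduction idea from the abstract: sum the sensor
    channels into one series and down-sample before applying DTW.
    The factor and the plain summation are illustrative assumptions."""
    a = np.sum(channels_a, axis=0)[::factor]
    b = np.sum(channels_b, axis=0)[::factor]
    return dtw_distance(a, b)

# Two hypothetical 5-channel recordings of different lengths
rng = np.random.default_rng(0)
rec1, rec2 = rng.standard_normal((5, 400)), rng.standard_normal((5, 360))
print(reduced_dtw(rec1, rec2))
```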

Keywords: Biometric character recognition, biometric person authentication, biometric smart pen BiSP, dynamic time warping DTW, online-handwriting recognition, multidimensional time series.

8440 A Novel Multiplex Real-Time PCR Assay Using TaqMan MGB Probes for Rapid Detection of Trisomy 21

Authors: Mehrdad Hashemi, Mitra Behrooz Aghdam, Reza Mahdian, Ahmad Reza Kamyab

Abstract:

Cytogenetic analysis remains the gold-standard method for prenatal diagnosis of trisomy 21 (Down syndrome, DS). Nevertheless, conventional cytogenetic analysis needs live cultured cells and is too time-consuming for clinical application. In contrast, molecular methods such as FISH, QF-PCR, MLPA and quantitative real-time PCR are rapid assays with results available within 24 h. In the present study, we have successfully used a novel MGB TaqMan probe-based real-time PCR assay for rapid diagnosis of trisomy 21 status in Down syndrome samples, and we have compared the results of this molecular method with the corresponding results obtained by cytogenetic analysis. Blood samples obtained from DS patients (n=25) and normal controls (n=20) were tested by quantitative real-time PCR in parallel with standard G-banding analysis. Genomic DNA was extracted from peripheral blood lymphocytes. A high-precision TaqMan probe quantitative real-time PCR assay was developed to determine the gene dosage of DSCAM (target gene on 21q22.2) relative to PMP22 (reference gene on 17p11.2). The DSCAM/PMP22 ratio was calculated according to the formula ratio = 2^(-ΔΔCt). The quantitative real-time PCR was able to distinguish between trisomy 21 samples and normal controls, with gene ratios of 1.49±0.13 and 1.03±0.04 respectively (p value < 0.001). These results represent the presence of 3 copies of the target gene in DS samples vs. 2 copies in normal controls. The results of the quantitative real-time PCR were in complete agreement with the results of the cytogenetic analysis. This study confirms previous reports regarding the successful implementation of quantitative real-time PCR for detection of trisomy 21; however, the assay has been improved by using MGB probes and more accurate data analysis. This assay, in particular when performed in combination with another molecular assay such as QF-PCR or MLPA, can be used as a reliable technique for rapid prenatal diagnosis of trisomy 21.
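
The comparative-Ct calculation quoted above is simple enough to show directly; the sketch below applies ratio = 2^(-ΔΔCt) to made-up Ct values (not data from the study) to illustrate how three versus two copies of the target gene appear as ratios near 1.5 and 1.0.

```python
def gene_dosage_ratio(ct_target_sample, ct_ref_sample,
                      ct_target_calibrator, ct_ref_calibrator):
    """Relative gene dosage by the comparative Ct method, as in the abstract:
    ratio = 2**(-ddCt), with dCt = Ct(target) - Ct(reference) and
    ddCt = dCt(sample) - dCt(calibrator)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_cal = ct_target_calibrator - ct_ref_calibrator
    return 2.0 ** -(d_ct_sample - d_ct_cal)

# Illustrative Ct values (not from the study): a trisomic sample should give
# a DSCAM/PMP22 ratio near 1.5, a normal sample a ratio near 1.0.
print(gene_dosage_ratio(24.1, 25.0, 24.7, 25.0))   # ~1.5 (three copies)
print(gene_dosage_ratio(24.7, 25.0, 24.7, 25.0))   # 1.0 (two copies)
```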

Keywords: Trisomy 21, Real-time PCR, MGB-TaqMan Probes, Gene Dosage.

8439 A Dynamic Equation for Downscaling Surface Air Temperature

Authors: Ch. Surawut, D. Sukawat

Abstract:

In order to utilize results from global climate models, dynamical and statistical downscaling techniques have been developed. For dynamical downscaling, a limited-area numerical model is usually used, with an associated high computational cost. This research proposes a dynamic equation for specific space-time regional climate downscaling of output from the Educational Global Climate Model (EdGCM) for Southeast Asia. The equation is for surface air temperature and provides downscaled values at any specific location and time without running a regional climate model. In the proposed equation, surface air temperature is approximated from ground temperature, sensible heat flux and 2 m wind speed. Results from the application of the equation show that its errors are smaller than those of direct interpolation from EdGCM.
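
The baseline named in the keywords is inverse distance weighted interpolation; a minimal sketch of that baseline is given below with a hypothetical coarse-grid temperature field (the proposed dynamic equation itself is not reproduced).

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighted interpolation: the value at a query point is
    a weighted mean of known values with weights 1/d**power."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d < 1e-12):                 # query coincides with a grid point
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

# Hypothetical coarse-grid surface air temperatures (deg C) at four nodes
grid_xy = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
temps = np.array([27.2, 26.8, 28.1, 27.5])
print(idw_interpolate(grid_xy, temps, np.array([0.3, 0.4])))
```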

Keywords: Dynamic Equation, Downscaling, Inverse distance weight interpolation.

8438 Feature Preserving Image Interpolation and Enhancement Using Adaptive Bidirectional Flow

Authors: Shujun Fu, Qiuqi Ruan, Wenqia Wang

Abstract:

Image interpolation is a common problem in imaging applications. However, most existing interpolation algorithms suffer, to some extent, from the visual effects of blurred edges and jagged artifacts in the image. This paper presents an adaptive, feature-preserving bidirectional flow process, in which an inverse diffusion is performed to enhance edges along the directions normal to the isophote lines (edges), while a normal diffusion is performed to remove artifacts ("jaggies") along the tangent directions. In order to preserve image features such as edges, angles and textures, the nonlinear diffusion coefficients are locally adjusted according to the first- and second-order directional derivatives of the image. Experimental results on synthetic images and natural images demonstrate that our interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional interpolations.

Keywords: anisotropic diffusion, bidirectional flow, directional derivatives, edge enhancement, image interpolation, inverse flow, shock filter.

8437 Application of Data Mining Tools to Predicate Completion Time of a Project

Authors: Seyed Hossein Iranmanesh, Zahra Mokhtari

Abstract:

Estimating the time and cost of work completion in a project, and following them up during execution, contribute to the success or failure of a project and are very important for the project management team. Delivering on time and within the budgeted cost requires managing and controlling the project well. To deal with the complex task of controlling and modifying the baseline project schedule during execution, earned value management systems have been set up and are widely used to measure and communicate the real physical progress of a project; however, earned value management often fails to predict the total duration of the project. In this paper, data mining techniques are used to predict the total project duration in terms of the time estimate at completion, EAC(t). For this purpose, we used a project with 90 activities, updated day by day. Regular indexes from the literature and the Earned Duration Method were then used to calculate the time estimate at completion; these were set as input data for prediction, and the major parameters among them were identified using Clem software. Using data mining, the parameters affecting EAC(t) and the relationships between them can be extracted, which is very useful for managing a project with minimum delay risk. As we state, this could be a simple, safe and applicable method for predicting the completion time of a project during execution.
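
As background, one standard schedule-based index of the kind fed into such a model is EAC(t) = PD / SPI(t), with SPI(t) = ES / AT; the sketch below computes it on a hypothetical planned-value curve. The paper's Earned Duration Method indexes and its data-mining model are not reproduced here.

```python
import numpy as np

def earned_schedule(pv_curve, ev_now):
    """Earned schedule ES: the time at which the planned value equalled the
    value now earned, found by linear interpolation on the PV curve."""
    pv = np.asarray(pv_curve, dtype=float)
    t = np.arange(len(pv))
    return float(np.interp(ev_now, pv, t))

def eac_t(pv_curve, ev_now, actual_time, planned_duration):
    """A common schedule-based estimate at completion in time units:
    EAC(t) = PD / SPI(t) with SPI(t) = ES / AT. This is one generic index,
    not the specific indicators used in the paper."""
    es = earned_schedule(pv_curve, ev_now)
    spi_t = es / actual_time
    return planned_duration / spi_t

# Hypothetical cumulative planned value for a 9-day plan (days 0..9)
pv = [0, 5, 12, 20, 30, 42, 55, 70, 85, 100]
print(eac_t(pv, ev_now=30, actual_time=5, planned_duration=9))  # ~11.25 days
```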

Keywords: Data Mining Techniques, Earned Duration Method, Earned Value, Estimate At Completion.

8436 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction

Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal

Abstract:

The noise requirements for naval and research vessels have seen an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed section is interesting for analyzing hydrodynamic performance but can involve significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. In this paper, it is proposed to estimate the reflection coefficients using an inverse method and some reference transfer functions measured in the tunnel. This approach reduces the uncertainties of the propagation model used in the inverse problem. In order to reduce the boundary layer noise, a cleaning algorithm taking advantage of the low-rank and sparse structure of the cross-spectrum matrices of the acoustic and boundary layer noise is presented. This approach allows the acoustic signal to be recovered even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from beamforming and DAMAS algorithms.

Keywords: Acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation.

8435 Minimization Problems for Generalized Reflexive and Generalized Anti-Reflexive Matrices

Authors: Yongxin Yuan

Abstract:

Let R ∈ C^(m×m) and S ∈ C^(n×n) be nontrivial unitary involutions, i.e., R^H = R = R^(-1) ≠ ±I_m and S^H = S = S^(-1) ≠ ±I_n. A ∈ C^(m×n) is said to be a generalized reflexive (anti-reflexive) matrix if RAS = A (RAS = −A). Let ρ be the set of m × n generalized reflexive (anti-reflexive) matrices. Given X ∈ C^(n×p), Z ∈ C^(m×p), Y ∈ C^(m×q) and W ∈ C^(n×q), we characterize the matrices A in ρ that minimize ‖AX − Z‖² + ‖Y^H A − W^H‖², and, given an arbitrary Ã ∈ C^(m×n), we find the unique matrix among the minimizers of ‖AX − Z‖² + ‖Y^H A − W^H‖² in ρ that minimizes ‖A − Ã‖. We also obtain necessary and sufficient conditions for the existence of A ∈ ρ such that AX = Z, Y^H A = W^H, and characterize the set of all such matrices A if the conditions are satisfied. These results are applied to solve a class of left and right inverse eigenproblems for generalized reflexive (anti-reflexive) matrices.
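
As a small numerical illustration of the defining property (not of the paper's minimization results), the sketch below builds the generalized reflexive and anti-reflexive parts of an arbitrary matrix from a pair of involutions and verifies RAS = A and RAS = −A.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 6

# Nontrivial unitary involutions R, S (here real signature matrices, the
# simplest example satisfying R^H = R = R^(-1) != +/- I).
R = np.diag([1, 1, -1, -1]).astype(float)
S = np.diag([1, -1, 1, -1, 1, -1]).astype(float)

# Splitting an arbitrary A: A_ref = (A + R A S)/2 satisfies R A_ref S = A_ref
# because R and S are involutions; (A - R A S)/2 gives the anti-reflexive part.
A = rng.standard_normal((m, n))
A_ref = 0.5 * (A + R @ A @ S)
A_anti = 0.5 * (A - R @ A @ S)

assert np.allclose(R @ A_ref @ S, A_ref)      # generalized reflexive
assert np.allclose(R @ A_anti @ S, -A_anti)   # generalized anti-reflexive
assert np.allclose(A_ref + A_anti, A)         # the two parts recover A
```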

Keywords: approximation, generalized reflexive matrix, generalized anti-reflexive matrix, inverse eigenvalue problem.

8434 Automated Segmentation of ECG Signals using Piecewise Derivative Dynamic Time Warping

Authors: Ali Zifan, Mohammad Hassan Moradi, Sohrab Saberi, Farzad Towhidkhah

Abstract:

Electrocardiogram (ECG) segmentation is necessary to help reduce the time-consuming task of manually annotating ECGs. Several algorithms have been developed to segment the ECG automatically. We first review several such methods, and then present a new single-lead segmentation method based on Adaptive Piecewise Constant Approximation (APCA) and Piecewise Derivative Dynamic Time Warping (PDDTW). The results are tested on the QT database. We compared our results to Laguna's two-lead method. Our proposed approach has a comparable mean error, but yields a slightly higher standard deviation than Laguna's method.

Keywords: Adaptive Piecewise Constant Approximation, Dynamic programming, ECG segmentation, Piecewise Derivative Dynamic Time Warping.

8433 Interaction of Building Stones with Inorganic Water-Soluble Salts

Authors: Z. Pavlík, J. Žumár, M. Pavlíková, R. Černý

Abstract:

The interaction of inorganic water-soluble salts with building stones is studied in this paper. Two types of sandstone and one type of spongillite, as representatives of materials used in historical masonry, are subjected to experimental testing. Within the performed experiments, moisture and chloride concentration profiles are measured in order to obtain input data for computational inverse analysis. Using the inverse analysis, the moisture diffusivity and chloride diffusion coefficient of the investigated materials are assessed. Additionally, the effect of salt presence on water vapor storage is investigated using a dynamic vapor sorption device. The obtained data represent valuable information for the restoration of historical masonry and give evidence on the performance of the studied stones in contact with water-soluble salts.

Keywords: Moisture and chloride transport, sandstone, spongillite, moisture diffusivity, chloride diffusion coefficient.

8432 Segmenting Ultrasound B-Mode Images Using RiIG Distributions and Stochastic Optimization

Authors: N. Mpofu, M. Sears

Abstract:

In this paper, we propose a novel algorithm for delineating the endocardial wall from a human heart ultrasound scan. We assume that the gray levels in the ultrasound images are independent and identically distributed random variables with different Rician Inverse Gaussian (RiIG) distributions. Both synthetic and real clinical data will be used for testing the algorithm. Algorithm performance will be evaluated using, first, an expert radiologist's evaluation of a soft copy of an ultrasound scan during the scanning process and, second, the doctor's conclusion after going through a printed copy of the same scan. Successful implementation of this algorithm should make it possible to differentiate normal from abnormal soft tissue and help identify the disease, what stage it is in, and how best to treat the patient. We hope that an automated system using this algorithm will be useful in public hospitals, especially in Third World countries where problems such as a shortage of skilled radiologists and a shortage of ultrasound machines are common. These public hospitals are usually the first and last stop for most patients in these countries.

Keywords: Endocardial Wall, Rician Inverse Gaussian Distributions, Segmentation, Ultrasound Images.

8431 A Low-cost Reconfigurable Architecture for AES Algorithm

Authors: Yibo Fan, Takeshi Ikenaga, Yukiyasu Tsunoo, Satoshi Goto

Abstract:

This paper proposes a low-cost reconfigurable architecture for the AES algorithm. The proposed architecture separates SubBytes and MixColumns into two parallel data paths and supports different bit-width operations for these two data paths. As a result, different numbers of S-boxes can be supported, and the throughput and power consumption can be adjusted by changing the number of S-boxes running in the design. Using the TSMC 0.18 μm CMOS standard cell library, a very low-cost implementation of 7K gates is obtained at a frequency of 182 MHz. The maximum throughput is 360 Mbps when using 4 S-boxes simultaneously, and the minimum throughput is 114 Mbps when using only 1 S-box.

Keywords: AES, Reconfigurable architecture, low cost

8430 Automated ECG Segmentation Using Piecewise Derivative Dynamic Time Warping

Authors: Ali Zifan, Sohrab Saberi, Mohammad Hassan Moradi, Farzad Towhidkhah

Abstract:

Electrocardiogram (ECG) segmentation is necessary to help reduce the time-consuming task of manually annotating ECGs. Several algorithms have been developed to segment the ECG automatically. We first review several such methods, and then present a new single-lead segmentation method based on Adaptive Piecewise Constant Approximation (APCA) and Piecewise Derivative Dynamic Time Warping (PDDTW). The results are tested on the QT database. We compared our results to Laguna's two-lead method. Our proposed approach has a comparable mean error, but yields a slightly higher standard deviation than Laguna's method.

Keywords: Adaptive Piecewise Constant Approximation, Dynamic programming, ECG segmentation, Piecewise Derivative Dynamic Time Warping.

8429 Numerical Simulation of Minimum Distance Jet Impingement Heat Transfer

Authors: Aman Agarwal, Georg Klepp

Abstract:

Impinging jets are used in various industrial areas as a cooling and drying technique. The current research is concerned with means of improving the heat transfer for configurations with a minimum distance of the nozzle to the impingement surface. The impingement heat transfer is described using numerical methods over a wide range of parameters for an array of planar jets, including jet flow speed, nozzle width, nozzle distance, angle of the jet flow, and velocity and geometry of the impingement surface. Normal pressure and shear stress are computed as additional parameters. Using dimensionless characteristic numbers, the parameters and the results are correlated to obtain generalized equations. The results demonstrate the effect of the investigated parameters on the flow.

Keywords: Heat Transfer Coefficient, Minimum distance jet impingement, Numerical simulation, Dimensionless coefficients.

8428 Determining Full Stage Creep Properties from Miniature Specimen Creep Test

Authors: W. Sun, W. Wen, J. Lu, A. A. Becker

Abstract:

In this work, methods based on analytical solutions are presented for determining, from miniature specimen creep tests, creep properties that can be used to represent the full life until failure. Examples used to demonstrate the application of the methods include a miniature rectangular thin-beam specimen creep test under three-point bending and a miniature two-material tensile specimen creep test subjected to a steady load. Mathematical expressions for the deflection and creep strain rate of the two specimens are presented for the Kachanov-Rabotnov creep damage model. On this basis, an inverse procedure was developed which has potential applications for deriving the full-life creep damage constitutive properties from a very small volume of material, in particular for various microstructural constitutive regions, e.g. within heat-affected zones of power plant pipe weldments. Further work on validation and improvement of the method is addressed.
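
For context, the sketch below integrates one common uniaxial form of the Kachanov-Rabotnov creep-damage model with a simple explicit scheme; the constants and stress level are purely illustrative and are not the properties identified in the paper.

```python
import numpy as np

def kr_creep_curve(sigma, A, n, M, chi, phi, dt=1.0, omega_stop=0.5):
    """Forward-Euler integration of one common uniaxial form of the
    Kachanov-Rabotnov creep-damage model,
        d(eps)/dt   = A * (sigma / (1 - omega))**n
        d(omega)/dt = M * sigma**chi / (1 - omega)**phi,
    up to a damage level omega_stop. A fixed explicit step is only adequate
    away from the rupture singularity (omega -> 1); all constants here are
    hypothetical, not values identified from miniature tests."""
    eps, omega, t, hist = 0.0, 0.0, 0.0, []
    while omega < omega_stop:
        eps += dt * A * (sigma / (1.0 - omega)) ** n
        omega += dt * M * sigma ** chi / (1.0 - omega) ** phi
        t += dt
        hist.append((t, eps, omega))
    return np.array(hist)

curve = kr_creep_curve(sigma=50.0, A=2e-13, n=5.0, M=1e-12, chi=5.0, phi=5.0)
t, eps, omega = curve[-1]
print(f"t = {t:.0f} h, creep strain = {eps:.3f}, damage = {omega:.2f}")
```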

Keywords: Creep damage property, analytical solutions, inverse approach, miniature specimen test.

8427 Efficient Semi-Systolic Finite Field Multiplier Using Redundant Basis

Authors: Hyun-Ho Lee, Kee-Won Kim

Abstract:

Arithmetic operations over GF(2^m) are extensively used in error-correcting codes and public-key cryptography schemes. Finite field arithmetic includes addition, multiplication, division and inversion. Addition is very simple and can be implemented with an extremely simple circuit; the other operations are much more complex. Multiplication is the most important operation for cryptosystems such as the elliptic curve cryptosystem, since exponentiation, division and multiplicative inversion can be performed by computing multiplications iteratively. In this paper, we present a parallel computation algorithm that performs Montgomery multiplication over a finite field using a redundant basis. Based on the multiplication algorithm, we also present an efficient semi-systolic multiplier over the finite field. The multiplier has lower space and time complexity than related multipliers: compared to the corresponding existing structures, it saves at least 5% area, 50% time, and 53% area-time (AT) complexity. Accordingly, it is well suited for VLSI implementation and can easily be applied as a basic component for computing complex operations over finite fields, such as inversion and division.
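
For readers unfamiliar with the underlying arithmetic, the sketch below is a plain bit-level polynomial-basis multiplication in GF(2^m), checked against the well-known AES field example; it is only a software reference for the operation, not the paper's redundant-basis Montgomery or semi-systolic scheme.

```python
def gf2m_multiply(a, b, irreducible, m):
    """Bit-level polynomial-basis multiplication in GF(2^m): shift-and-add
    (XOR) with reduction modulo the irreducible polynomial. This is a plain
    software reference, not the redundant-basis Montgomery scheme."""
    result = 0
    for _ in range(m):
        if b & 1:
            result ^= a            # add (XOR) the current multiple of a
        b >>= 1
        a <<= 1
        if a & (1 << m):           # degree reached m: reduce
            a ^= irreducible
    return result

# GF(2^8) with the AES irreducible polynomial x^8 + x^4 + x^3 + x + 1
p = 0b1_0001_1011
print(hex(gf2m_multiply(0x57, 0x83, p, 8)))   # 0xc1, the standard AES example
```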

Keywords: Finite field, Montgomery multiplication, systolic array, cryptography.

8426 Parameters Optimization of the Laminated Composite Plate for Sound Transmission Problem

Authors: Yu T. Tsai, Jin H. Huang

Abstract:

In this paper, the sound Transmission Loss (TL) of a Laminated Composite Plate (LCP) with different material properties in each layer is investigated. A numerical method based on elastic plate theory is proposed to obtain the TL of the LCP. The transfer matrix approach is newly presented for computational efficiency in handling the dynamic stiffness matrices (D-matrices) of the numerous layers of the LCP. Besides the numerical simulations for calculating the TL of the LCP, a material-properties inverse method is presented for the design of a laminated composite plate analogous to a metallic plate with a specified TL. The results demonstrate that the proposed computational algorithm exhibits high efficiency, requiring only a small number of iterations to achieve the goal. This method can be effectively employed to design and develop tailor-made materials for various applications.

Keywords: Sound transmission loss, laminated composite plate, transfer matrix approach, inverse problem, elastic plate theory, material properties.

8425 A Frame Work for the Development of a Suitable Method to Find Shoot Length at Maturity of Mustard Plant Using Soft Computing Model

Authors: Satyendra Nath Mandal, J. Pal Choudhury, Dilip De, S. R. Bhadra Chaudhuri

Abstract:

The production of a plant can be measured in terms of seeds, and the generation of seeds plays a critical role in our social and daily life. Fruit production, which generates seeds, depends on various parameters of the plant, such as shoot length, leaf number, root length and root number. When the plant is growing, some leaves may be lost and new leaves may appear, so it is very difficult to use the number of leaves to calculate the growth of the plant. It is also cumbersome to measure the number of roots and the length of root growth continuously at several time instances after an initial period, because roots grow deeper and deeper underground over time. In contrast, the shoot length grows over time and can be measured at different time instances, so the growth of the plant can be measured using shoot length data recorded at different time instances after plantation. Environmental parameters such as temperature, rainfall, humidity and pollution also play a role in yield production, and soil, crop and distance management are taken care of to produce the maximum yield. Data on the growth of shoot length of some mustard plants at the initial stage (7, 14, 21 and 28 days after plantation) are available from a statistical survey by a group of scientists under the supervision of Prof. Dilip De. In this paper, the initial shoot length of Ken (one type of mustard plant) has been used as initial data. Statistical models, fuzzy logic methods and a neural network have been tested on this mustard plant, and, based on error analysis (calculation of the average error), the model with the minimum error has been selected and can be used for the assessment of shoot length at maturity. Finally, all these methods have been tested with other types of mustard plants, and the soft computing model with the minimum error across all types has been selected for calculating the predicted shoot length growth data. The shoot length at maturity of all types of mustard plants has then been calculated by applying the statistical method to the predicted shoot length data.

Keywords: Fuzzy time series, neural network, forecasting error, average error.

8424 Finding Pareto Optimal Front for the Multi-Mode Time, Cost Quality Trade-off in Project Scheduling

Authors: H. Iranmanesh, M. R. Skandari, M. Allahverdiloo

Abstract:

Project managers are ultimately responsible for the overall characteristics of a project, i.e. they should deliver the project on time with minimum cost and maximum quality. It is vital for any manager to decide on a trade-off between these conflicting objectives, and they will benefit from any scientific decision support tool. Our work tries to determine a set of optimal solutions (rather than a single optimal solution) from which the project manager can select the preferred choice for running the project. In this paper, the project scheduling problem notated as (1,T|cpm,disc,mu|curve:quality,time,cost) is studied. The problem is multi-objective, and the purpose is to find the Pareto optimal front of time, cost and quality of a project (curve:quality,time,cost) whose activities belong to a start-to-finish activity relationship network (cpm) and can be performed in different possible modes (mu) that are non-continuous, or discrete (disc), each mode having a different cost, time and quality. The project is constrained by a non-renewable resource, i.e. money (1,T). Because the problem is NP-hard, a meta-heuristic is developed to solve it, based on a version of the genetic algorithm specially adapted to multi-objective problems, namely FastPGA. A sample project with 30 activities is generated and then solved by the proposed method.
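
For illustration, the dominance filter that defines such a Pareto front is sketched below on a few hypothetical (time, cost, quality) schedules; the FastPGA evolutionary search itself is not shown.

```python
from typing import List, Tuple

def pareto_front(solutions: List[Tuple[float, float, float]]):
    """Extract the non-dominated (Pareto-optimal) schedules from a list of
    (time, cost, -quality) triples, all to be minimized (quality is negated
    so that 'smaller is better' holds for every objective). This is only the
    dominance filter; the evolutionary algorithm supplies the candidates."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

# Hypothetical candidate schedules: (duration in days, cost, -quality score)
candidates = [(30, 1000, -0.90), (28, 1200, -0.85), (30, 1100, -0.80),
              (35, 900, -0.95), (28, 1250, -0.85)]
print(pareto_front(candidates))
```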

Keywords: FastPGA, Multi-Execution Activity Mode, Pareto Optimality, Project Scheduling, Time-Cost-Quality Trade-Off.

8423 Evolutionary Program Based Approach for Manipulator Grasping Color Objects

Authors: Y. Harold Robinson, M. Rajaram, Honey Raju

Abstract:

Image segmentation and color identification are important processes used in various emerging fields such as intelligent robotics. A method is proposed for a manipulator to grasp a colored object and place it in the correct location. Existing methods such as PSO have problems such as slow convergence and convergence to a local minimum, leading to suboptimal performance. To improve the performance, we use the watershed algorithm for image segmentation and EPSO for color identification. The EPSO method is used to reduce the probability of becoming stuck in a local minimum and offers the particles a more powerful global exploration capability. EPSO can detect particles stuck in a local minimum and can also enhance the learning speed, as the particle movement is faster.

Keywords: Color information, EPSO, hue, saturation, value (HSV), image segmentation, particle swarm optimization (PSO), active contour, GMM.

8422 The Survey and the Comparison of Maximum Likelihood, Mahalanobis Distance and Minimum Distance Methods in Preparing Landuse Map in the Western Part of Isfahan Province

Authors: Ali Gholami, M. Esfadiari, M. H. Masihabadi

Abstract:

In this research, three methods (Maximum Likelihood, Mahalanobis Distance and Minimum Distance) were analyzed for the western part of Isfahan province, Iran. For this purpose, IRS satellite images were used, and the various land uses in the region, including rangelands, irrigated farming, dry farming, gardens and urban areas, were separated and identified. For each method, the error matrix and Kappa index were calculated, and accuracies of 53.13%, 56.64% and 48.44% were obtained, respectively. Considering the low accuracy of these methods in separating land uses, due to the spread of the land uses, visual interpretation is suggested for preparing the land use map of this region. The map prepared by visual interpretation has high accuracy if it is accompanied by field visits to the region.
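
The accuracy figures above come from an error (confusion) matrix; the sketch below shows how overall accuracy and the Kappa index are computed from such a matrix, using a hypothetical 3-class example rather than the study's data.

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and the Kappa index of agreement from an error
    (confusion) matrix of reference classes (rows) vs. classified classes
    (columns)."""
    c = np.asarray(confusion, dtype=float)
    total = c.sum()
    po = np.trace(c) / total                                  # observed agreement
    pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / total**2     # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa

# Hypothetical 3-class error matrix
cm = [[50, 10, 5],
      [8, 40, 12],
      [6, 9, 45]]
acc, kappa = overall_accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.2%}, kappa = {kappa:.3f}")
```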

Keywords: Aghche Region, land use map, Maximum Likelihood, Mahalanobis Distance, Minimum Distance.

8421 LAYMOD; A Layered and Modular Platform for CAx Collaboration Management and Supporting Product data Integration based on STEP Standard

Authors: Omid F. Valilai, Mahmoud Houshmand

Abstract:

Nowadays, companies strive to survive in a competitive global environment. To speed up product development and modifications, adopting a collaborative product development approach is suggested. However, despite the advantages of new IT improvements, many CAx systems still work separately and locally. Collaborative design and manufacture require a product information model that supports the related CAx product data models. Many solutions have been proposed to solve this problem, the most successful of which is adopting the STEP standard as a product data model to develop a collaborative CAx platform. However, several issues usually slow down the implementation of the STEP standard in collaborative data exchange, management and integration and should be considered: the evolution of the STEP Application Protocols (APs) over time, the huge number of STEP APs and CCs, the high costs of implementation, the costly process of converting older CAx software files to the STEP neutral file format, and the lack of STEP knowledge. In this paper, the requirements for a successful collaborative CAx system are discussed. The STEP standard's capability for product data integration and its shortcomings, as well as the dominant platforms for supporting CAx collaboration management and product data integration, are reviewed. Finally, a platform named LAYMOD is proposed to fulfil the requirements of a collaborative CAx environment and to integrate product data. It is a layered platform that enables global collaboration among different CAx software packages and developers. It also adopts the STEP modular architecture and XML data structures to enable collaboration between CAx software packages and to overcome the STEP standard's limitations. The architecture and procedures of the LAYMOD platform for managing collaboration and avoiding conflicts in product data integration are introduced.

Keywords: CAx, Collaboration management, STEP application modules, STEP standard, XML data structures
