Search results for: random dither quantization.


492 Reliability Modeling and Data Analysis of Vacuum Circuit Breaker Subject to Random Shocks

Authors: Rafik Medjoudj, Rabah Medjoudj, D. Aissani

Abstract:

Electrical substation components are often subject to degradation due to over-voltage or over-current caused by a short circuit or a lightning strike. Particular interest is given to the circuit breaker, regarding the importance of its function and the danger of its failure. This component degrades gradually with use, and it is also subject to a shock process resulting from the stress of isolating the fault when a short circuit occurs in the system. In this paper, based on the development of failure mechanisms, the wear-out of the circuit breaker contacts is modeled. The aim of this work is to evaluate the breaker's reliability and, consequently, its residual lifetime. The shock process is based on two random variables: the arrival of shocks and their magnitudes. The arrival of shocks was modeled using a homogeneous Poisson process (HPP). By simulation, the dates of short-circuit arrivals were generated together with their magnitudes. The same simulation principle is applied to the cumulative contact wear. The objective is to find the formulation of the wear function depending on the number of solicitations of the circuit breaker.
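
As a rough illustration of the simulation step described above, the sketch below draws short-circuit arrival dates from a homogeneous Poisson process, accumulates random shock magnitudes as contact wear, and estimates reliability as the fraction of runs in which the cumulative wear stays below a limit. The shock rate, mean magnitude, wear limit and horizon are assumed values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_reliability(rate_per_year=2.0, horizon_years=30.0,
                         mean_magnitude=0.05, wear_limit=1.0, n_runs=10_000):
    """Monte Carlo sketch of cumulative contact wear under a homogeneous
    Poisson shock process (all parameters are assumed, not from the paper)."""
    survived = 0
    for _ in range(n_runs):
        t, wear = 0.0, 0.0
        while True:
            # HPP: exponential inter-arrival times of short-circuit shocks
            t += rng.exponential(1.0 / rate_per_year)
            if t > horizon_years:
                survived += 1
                break
            # random shock magnitude -> increment of cumulative wear
            wear += rng.exponential(mean_magnitude)
            if wear >= wear_limit:
                break
    return survived / n_runs  # reliability estimate R(horizon)

print(f"Estimated reliability over 30 years: {simulate_reliability():.3f}")
```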

Keywords: reliability, short-circuit, models of shocks.

491 Towards Integrating Statistical Color Features for Human Skin Detection

Authors: Mohd Zamri Osman, Mohd Aizaini Maarof, Mohd Foad Rohani

Abstract:

Human skin detection is recognized as the primary step in many applications such as face detection, illicit image filtering, hand recognition and video surveillance. The performance of any skin detection application relies greatly on two components: feature extraction and the classification method. Skin color is the most vital information used for skin detection. However, a color feature alone sometimes cannot handle images whose background shares the same color distribution as skin: a pixel-based color feature does not eliminate skin-like colors, because the intensities of skin and skin-like colors fall under the same distribution. Hence, statistical color measures such as the mean and standard deviation are exploited as additional features to increase the reliability of the skin detector. In this paper, we study the effectiveness of statistical color features for human skin detection. Furthermore, the paper analyzes the integrated color and texture features using eight classifiers with three color spaces: RGB, YCbCr, and HSV. The experimental results show that integrating the statistical features with a Random Forest classifier achieves a significant performance, with an F1-score of 0.969.
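
A minimal sketch of this kind of pipeline is shown below: each pixel neighbourhood is summarised by its centre colour plus per-channel mean and standard deviation, and a Random Forest is trained on the resulting feature vectors. The patch size, the synthetic stand-in data and the colour model are assumptions made only to keep the example self-contained; they are not the paper's dataset or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def block_features(rgb_block):
    """Centre colour plus per-channel mean and standard deviation of a
    small neighbourhood (a 9-dimensional statistical colour feature)."""
    centre = rgb_block[rgb_block.shape[0] // 2, rgb_block.shape[1] // 2]
    flat = rgb_block.reshape(-1, 3)
    return np.concatenate([centre, flat.mean(axis=0), flat.std(axis=0)])

# Synthetic stand-in for labelled 5x5 patches: "skin" patches drawn around a
# warm colour, "non-skin" patches uniform random (illustrative only).
skin = rng.normal([200, 140, 120], 15, size=(500, 5, 5, 3))
other = rng.uniform(0, 255, size=(500, 5, 5, 3))
X = np.array([block_features(b) for b in np.concatenate([skin, other])])
y = np.array([1] * 500 + [0] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("F1 on synthetic data:", round(f1_score(y_te, clf.predict(X_te)), 3))
```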

Keywords: Color space, neural network, random forest, skin detection, statistical feature.

490 Using Analytical Hierarchy Process and TOPSIS Approaches in Designing a Finite Element Analysis Automation Program

Authors: Ming Wen, Nasim Nezamoddini

Abstract:

Sophisticated numerical simulations like finite element analysis (FEA) involve a complicated process, from model setup to post-processing tasks, that requires replication of time-consuming steps. Utilizing an FEA automation program simplifies the complexity of the involved steps while minimizing human errors in analysis setup, calculations, and results processing. One of the main challenges in designing FEA automation programs is to identify user requirements and link them to possible design alternatives. This paper presents a decision-making framework to design a Python-based FEA automation program for modal analysis, frequency response analysis, and random vibration fatigue (RVF) analysis procedures. The analytical hierarchy process (AHP) and the technique for order preference by similarity to ideal solution (TOPSIS) are applied to evaluate design alternatives considering the feedback received from experts and program users.
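
The sketch below shows a generic TOPSIS ranking step of the kind such a framework relies on: alternatives are scored on several criteria, the scores are normalised and weighted (the weights could come from AHP pairwise comparisons), and each alternative receives a closeness coefficient to the ideal solution. The alternatives, criteria and weights here are purely hypothetical.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    decision_matrix: alternatives x criteria scores
    weights: criteria weights (e.g. derived from AHP), summing to 1
    benefit: True for criteria to maximise, False to minimise."""
    m = np.asarray(decision_matrix, dtype=float)
    # vector-normalise each criterion, then apply the weights
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)  # closeness to the ideal solution

# Three hypothetical program design alternatives scored on three criteria
scores = [[7, 9, 3], [8, 7, 5], [9, 6, 8]]
weights = [0.5, 0.3, 0.2]       # e.g. taken from AHP pairwise comparisons
benefit = [True, True, False]   # last criterion: lower is better (e.g. effort)
print(topsis(scores, weights, benefit))
```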

Keywords: FEA, random vibration fatigue, process automation, AHP, TOPSIS, multiple-criteria decision-making, MCDM.

489 Measurement of the Bipolarization Events

Authors: Stefan V. Stefanescu

Abstract:

We intend to point out the differences between the classical Gini concentration coefficient and a proposed bipolarization index defined for an arbitrary random variable with finite support. Gini's index measures only the "poverty degree" of the individuals in a given population, taking into consideration their wages. The Gini coefficient is not very sensitive to significant income variations within the "rich people" class. In practice there are multiple interdependent relations between the pauperization and socio-economic polarization phenomena. A strong pauperization aspect inside the population often induces a polarization effect in the society, but the pauperization and polarization phenomena are not identical. For this reason it is not always adequate to use a Gini-type coefficient, based on the Lorenz order, to estimate the bipolarization level of the individuals in the studied population. The present paper emphasizes these ideas by considering two families of random variables with linear or triangular-type distributions. In addition, a continuous variation of the chosen distributions, depending on a "time" parameter, can simulate a real dynamical evolution of the population.
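
For reference, the snippet below computes the classical sample Gini coefficient that the paper contrasts with its bipolarization index, and evaluates it on a roughly uniform and a triangular income sample. The distribution parameters are arbitrary illustrations, not the families analysed in the paper.

```python
import numpy as np

def gini(incomes):
    """Sample Gini concentration coefficient (mean absolute difference
    between individuals, divided by twice the mean income)."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # equivalent to sum_i sum_j |x_i - x_j| / (2 * n^2 * mean)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

rng = np.random.default_rng(1)
uniform_like = rng.uniform(10, 20, 10_000)        # weak concentration
triangular = rng.triangular(10, 10, 40, 10_000)   # more mass near the low end
print(round(gini(uniform_like), 3), round(gini(triangular), 3))
```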

Keywords: Bipolarization phenomenon, Gini coefficient, income distribution, poverty measure.

488 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Our ability to solve complex engineering problems is directly related to the processing capacity of computers, which allow numerical algorithms to be run quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance, and statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skewed data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, nine statistical distributions (symmetric and skewed) have been considered to model a hypothetical slope stability problem. The data modeled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Based on this analysis, it is possible to explicitly derive the failure probability considering the friction angle as a random variable, and to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in light of risk management.
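
The snippet below illustrates the general idea with a Monte Carlo stand-in rather than the paper's analytical derivation: a candidate distribution is fitted to friction-angle data and the implied failure probability of a cohesionless infinite slope (FS = tan φ / tan β) is estimated. The synthetic data, the slope angle and the use of a skew-normal as the skewed alternative (in place of the Dagum family) are all assumptions of the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical friction-angle sample (degrees); real data would come from the
# triaxial and direct shear tests on the Brasilia soil described above.
phi_data = 28.0 + stats.skewnorm.rvs(5, scale=4.0, size=60, random_state=3)

# Compare the failure probability implied by a symmetric and a skewed fit,
# for a cohesionless infinite slope where FS = tan(phi) / tan(slope_angle).
slope_angle = np.radians(25.0)
candidates = {"normal": stats.norm, "skew-normal": stats.skewnorm}
for name, dist in candidates.items():
    params = dist.fit(phi_data)
    phi = np.radians(dist.rvs(*params, size=200_000, random_state=rng))
    fs = np.tan(phi) / np.tan(slope_angle)
    print(f"{name:12s} P(FS < 1) = {np.mean(fs < 1):.4f}")
```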

Keywords: Statistical slope stability analysis, Skew distributions, Probability of failure, Functions of random variables.

487 Quality Estimation of Video Transmitted over an Additive WGN Channel based on Digital Watermarking and Wavelet Transform

Authors: Mohamed S. El-Mahallawy, Attalah Hashad, Hazem Hassan Ali, Heba Sami Zaky

Abstract:

This paper presents an evaluation of a wavelet-based digital watermarking technique for estimating the quality of video sequences transmitted over an Additive White Gaussian Noise (AWGN) channel in terms of a classical objective metric, such as the Peak Signal-to-Noise Ratio (PSNR), without needing the original video. In this method, a watermark is embedded into the Discrete Wavelet Transform (DWT) domain of the original video frames using a quantization method. The degradation of the extracted watermark can be used to estimate the video quality in terms of PSNR with good accuracy. We calculated the PSNR of video frames contaminated with AWGN and compared the values with those estimated using the watermarking-DWT based approach. The calculated and estimated quality measures of the video frames are found to be highly correlated, suggesting that this method can provide a good quality measure for video frames transmitted over an AWGN channel without access to the original video.
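
A generic way to embed a watermark in the DWT domain "using a quantization method" is quantisation index modulation (QIM), sketched below with PyWavelets: each watermark bit forces one diagonal-detail coefficient to an even or odd multiple of a step size, and the bit error rate of the extracted watermark then reflects the channel degradation. This is an illustrative scheme with an assumed wavelet, subband and step size, not the authors' exact embedding rule.

```python
import numpy as np
import pywt

def embed_qim(frame, bits, step=16.0):
    """Embed watermark bits into the diagonal detail subband of a 1-level
    Haar DWT by quantisation index modulation: even multiples of `step`
    encode 0, odd multiples encode 1 (illustrative scheme only)."""
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), "haar")
    flat = cD.ravel()
    for i, b in enumerate(bits):
        q = np.round(flat[i] / step)
        if int(q) % 2 != b:
            q += 1
        flat[i] = q * step
    return pywt.idwt2((cA, (cH, cV, flat.reshape(cD.shape))), "haar")

def extract_qim(frame, n_bits, step=16.0):
    _, (_, _, cD) = pywt.dwt2(frame.astype(float), "haar")
    return [int(np.round(c / step)) % 2 for c in cD.ravel()[:n_bits]]

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (64, 64)).astype(float)
bits = rng.integers(0, 2, 32).tolist()
marked = embed_qim(frame, bits)
noisy = marked + rng.normal(0, 5, marked.shape)     # AWGN channel
ber = np.mean(np.array(extract_qim(noisy, 32)) != np.array(bits))
print("watermark bit error rate:", ber)             # grows with channel noise
```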

Keywords: AWGN, DWT, PSNR, Watermarking, Video Quality.

486 Performance Degradation for the GLR Test-Statistics for Spatial Signal Detection

Authors: Olesya Bolkhovskaya, Alexander Maltsev

Abstract:

Antenna arrays are widely used in modern radio systems, in sonar and in communications. Detection of a useful signal against a noise background is commonly based on the generalized likelihood ratio test (GLRT). There are many variants of this problem, depending on the available a priori information. In this work, in contrast to the majority of already solved problems, only the difference in the spatial properties of the signal and the noise is used for detection. We analyze the influence of the degree of non-coherence of the signal and of the noise inhomogeneity on the performance characteristics of different GLRT statistics. The signal and noise are described by means of spatial covariance matrices C for different amounts of known information. The partially coherent signal is simulated as a plane wave with a random angle of incidence relative to the array normal, and the background noise is simulated as a random process with a uniform distribution function in each element. The results of the investigation of the degradation of the performance characteristics for the different cases are presented in this work.

Keywords: GLRT, Neyman-Pearson criterion, test statistics, degradation, spatial processing, multielement antenna array.

485 Assessment of Carbon Dioxide Separation by Amine Solutions Using Electrolyte Non-Random Two-Liquid and Peng-Robinson Models: Carbon Dioxide Absorption Efficiency

Authors: Arash Esmaeili, Zhibang Liu, Yang Xiang, Jimmy Yun, Lei Shao

Abstract:

High-pressure carbon dioxide (CO2) absorption from a specific gas in a conventional column has been evaluated with the Aspen HYSYS simulator using a wide range of single absorbents and blended solutions to estimate the outlet CO2 concentration, the absorption efficiency and the CO2 loading, in order to choose the most appropriate solution for CO2 capture with respect to environmental concerns. The property package (Acid Gas-Chemical Solvent), which is compatible with all the solutions applied in this simulation study, estimates the properties based on an electrolyte non-random two-liquid (E-NRTL) model for electrolyte thermodynamics and the Peng-Robinson equation of state for the vapor and liquid hydrocarbon phases. Among all the investigated single amines and blended solutions, piperazine (PZ) and the mixture of piperazine and monoethanolamine (MEA) were found to be the most effective absorbents for CO2 absorption, with high reactivity under the simulated operational conditions.

Keywords: Absorption, amine solutions, Aspen HYSYS, carbon dioxide, simulation.

484 The Tag Authentication Scheme using Self-Shrinking Generator on RFID System

Authors: HangRok Lee, DoWon Hong

Abstract:

Since communications between tag and reader in an RFID system are carried over radio, anyone can access the tag and obtain its information. Moreover, a tag always replies with the same ID, so it is hard to distinguish between a real and a fake tag. Thus, there are many security problems in today's RFID systems. Firstly, an unauthorized reader can easily read the ID information of any tag. Secondly, an adversary can use collected tag ID information to cheat a legitimate reader, impersonating a legitimate tag. These security problems are typically solved by encrypting the messages transmitted between tag and reader and by authenticating the tag. In this paper, to solve these security problems in RFID systems, we propose a tag authentication scheme based on the self-shrinking generator (SSG). The SSG algorithm used in our scheme was proposed by W. Meier and O. Staffelbach at EUROCRYPT '94. It consists of only one LFSR and selection logic for generating a random stream, so it is well suited to hardware implementation on devices with extremely limited resources, and the output generated by the SSG at each step serves as a random stream, which allows us to design a lightweight authentication scheme that is secure against some network attacks. We therefore propose a novel tag authentication scheme which uses the SSG to encrypt the tag ID transmitted from tag to reader and thereby achieves tag authentication.
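
The self-shrinking generator itself is simple enough to sketch: the LFSR output stream is read in pairs (a, b), and b is emitted only when a = 1; the resulting keystream can then be XOR-ed with the tag ID. The LFSR length, tap positions and seed below are hypothetical, and the plain XOR encryption is only a stand-in for the scheme's actual protocol messages.

```python
def lfsr(state, taps):
    """Fibonacci LFSR: yield the last bit, then shift in the XOR of the
    tapped positions at the front."""
    while True:
        out = state[-1]
        fb = 0
        for t in taps:
            fb ^= state[t]
        state.insert(0, fb)
        state.pop()
        yield out

def self_shrinking(bits):
    """Self-shrinking generator (Meier & Staffelbach): read the LFSR output
    in pairs (a, b) and emit b only when a == 1."""
    while True:
        a, b = next(bits), next(bits)
        if a == 1:
            yield b

def keystream_xor(data: bytes, seed_bits, taps):
    """Encrypt/decrypt by XOR-ing the data with the SSG keystream."""
    ssg = self_shrinking(lfsr(list(seed_bits), taps))
    out = bytearray()
    for byte in data:
        k = 0
        for _ in range(8):
            k = (k << 1) | next(ssg)
        out.append(byte ^ k)
    return bytes(out)

# Hypothetical shared 16-bit seed; taps follow a common maximal-length setup.
seed = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
taps = [15, 14, 12, 3]
tag_id = b"TAG-0042"
cipher = keystream_xor(tag_id, seed, taps)
assert keystream_xor(cipher, seed, taps) == tag_id   # reader recovers the ID
print(cipher.hex())
```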

Keywords: RFID system, RFID security, self-shrinking generator, authentication, protocol.

483 A General Segmentation Scheme for Contouring Kidney Region in Ultrasound Kidney Images using Improved Higher Order Spline Interpolation

Authors: K. Bommanna Raja, M. Madheswaran, K. Thyagarajah

Abstract:

A higher-order spline interpolated contour, obtained by up-sampling homogeneously distributed coordinates, is developed in this paper for the segmentation of the kidney region in different classes of ultrasound kidney images. The performance of the proposed method is measured and compared with a modified snake model contour, a Markov random field contour and an expert-outlined contour. The method is validated against the expert-outlined contour using the maximum coordinate distance, the Hausdorff distance and the mean radial distance metrics. The results reveal that the proposed scheme provides an optimal contour that agrees well with the expert-outlined contour. Moreover, this technique helps to preserve the pixels of interest, which specifically define the functional characteristics of the kidney. This opens various possibilities for implementing a computer-aided diagnosis system exclusively for US kidney images.

Keywords: Ultrasound Kidney Image – Kidney Segmentation – Active Contour – Markov Random Field – Higher Order Spline Interpolation

482 Syntactic Recognition of Distorted Patterns

Authors: Marek Skomorowski

Abstract:

In syntactic pattern recognition a pattern can be represented by a graph. Given an unknown pattern represented by a graph g, the recognition problem is to determine whether the graph g belongs to a language L(G) generated by a graph grammar G. The so-called IE graphs have been defined in [1] for the description of patterns. The IE graphs are generated by the so-called ETPL(k) graph grammars defined in [1]. An efficient parsing algorithm for ETPL(k) graph grammars for the syntactic recognition of patterns represented by IE graphs has been presented in [1]. In practice, structural descriptions may contain pattern distortions, so that the assignment of a graph g, representing an unknown pattern, to a graph language L(G) generated by an ETPL(k) graph grammar G is rejected by ETPL(k)-type parsing. Therefore, there is a need for effective parsing algorithms for the recognition of distorted patterns. The purpose of this paper is to present a new approach to the syntactic recognition of distorted patterns. To take into account all variations of a distorted pattern under study, a probabilistic description of the pattern is needed. A random IE graph approach is proposed here for such a description ([2]).

Keywords: Syntactic pattern recognition, Distorted patterns, Random graphs, Graph grammars.

481 Reliability Analysis for Cyclic Fatigue Life Prediction in Railroad Bolt Hole

Authors: Hasan Keshavarzian, Tayebeh Nesari

Abstract:

The bolted rail joint is one of the most vulnerable areas in a railway track. A comprehensive approach was developed for studying the reliability of fatigue crack initiation in the railroad bolt hole under random axle loads and random material properties. The operating conditions were also considered as stochastic variables. In order to obtain a comprehensive probabilistic model of the fatigue crack initiation life in the railroad bolt hole, we used the finite element method (FEM), the response surface method (RSM), and reliability analysis. A combined energy-density-based and critical-plane-based fatigue concept is used for the fatigue crack prediction. The dynamic loads were calculated according to the axle load, speed, and track properties. The results show that the axle load is the most sensitive parameter, compared to Poisson's ratio, for the fatigue crack initiation life. Also, the reliability index decreases slowly due to the high-cycle fatigue regime in this area.

Keywords: Rail-wheel tribology, rolling contact mechanic, finite element modeling, reliability analysis.

480 Influence of the Paint Coating Thickness in Digital Image Correlation Experiments

Authors: Jesús A. Pérez, Sam Coppieters, Dimitri Debruyne

Abstract:

In the past decade, the use of digital image correlation (DIC) techniques has increased significantly in the area of experimental mechanics, especially for material behavior characterization. This non-contact tool enables full-field displacement and strain measurements over a complete region of interest. The DIC algorithm requires a random contrast pattern on the surface of the specimen in order to perform properly. To create this pattern, the specimen is usually first coated with a white matt paint. Next, a black random speckle pattern is applied using any suitable method. If the applied paint coating is too thick, its top surface may not be able to exactly follow the deformation of the specimen, and consequently, the strain measurement might be underestimated. In the present article, the influence of the paint thickness on this strain underestimation is studied for different strain levels. The results are then compared to typical paint coating thicknesses applied by experienced DIC users. A slight strain underestimation was observed for paint coatings thicker than about 30 μm. On the other hand, this threshold was found to be well above the coating thicknesses commonly applied by DIC users.

Keywords: Digital Image Correlation, paint coating thickness, strain.

479 A Study of Panel Logit Model and Adaptive Neuro-Fuzzy Inference System in the Prediction of Financial Distress Periods

Authors: E. Giovanis

Abstract:

The purpose of this paper is to present two different approaches to financial distress pre-warning models appropriate for risk supervisors, investors and policy makers. We examine a sample of the financial institutions and electronic companies of the Taiwan Stock Exchange (TSE) market from 2002 through 2008. We present a binary logistic regression with panel data analysis. With the pooled binary logistic regression we can build a model including more variables in the regression than with random effects, while the in-sample and out-of-sample forecasting performance is higher for the random effects estimation than for the pooled regression. On the other hand, we estimate an Adaptive Neuro-Fuzzy Inference System (ANFIS) with Gaussian and Generalized Bell (Gbell) membership functions, and we find that ANFIS significantly outperforms the logit regressions in both in-sample and out-of-sample periods, indicating that ANFIS is a more appropriate tool for financial risk managers and for economic policy makers in central banks and national statistical services.

Keywords: ANFIS, binary logistic regression, financial distress, panel data.

478 Cyber Security Enhancement via Software-Defined Pseudo-Random Private IP Address Hopping

Authors: Andre Slonopas, Warren Thompson, Zona Kostic

Abstract:

Obfuscation is one of the most useful tools to prevent network compromise. Previous research focused on the obfuscation of the network communications between external-facing edge devices. This work proposes the use of two edge devices, external and internal facing, which communicate via private IPv4 addresses in a software-defined pseudo-random IP hopping scheme. This methodology does not require additional IP addresses and/or resources to implement. Statistical analyses demonstrate that the hopping surface must be at least 1e3 IP addresses in size, with a broad standard deviation, to minimize the possibility of coincidence between monitored and communication IPs. Breaking the hopping algorithm requires a collection of at least 1e6 samples, which for large hopping surfaces would take years to collect. The probability of dropped packets is controlled via memory buffers and the frequency of hops, and can be reduced to levels acceptable for video streaming. This methodology provides an impenetrable layer of security ideal for information systems and supervisory control and data acquisition (SCADA) systems.
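
A minimal sketch of how the two edge devices could stay synchronised is shown below: both sides derive the current private address from a shared secret and the current time slot, so no address list ever needs to be exchanged. The secret, the subnet (a /22, roughly 1e3 addresses) and the hop period are illustrative assumptions, not parameters from the paper.

```python
import hashlib
import ipaddress
import time

SHARED_SECRET = b"example-preshared-key"          # hypothetical shared secret
HOP_SUBNET = ipaddress.ip_network("10.0.0.0/22")  # ~1e3 private IPv4 addresses
HOP_PERIOD = 2.0                                  # seconds per hop

def current_hop_address(secret=SHARED_SECRET, subnet=HOP_SUBNET,
                        period=HOP_PERIOD, now=None):
    """Both edge devices derive the same pseudo-random private address
    for the current time slot from the shared secret (sketch only)."""
    slot = int((time.time() if now is None else now) // period)
    digest = hashlib.sha256(secret + slot.to_bytes(8, "big")).digest()
    index = int.from_bytes(digest[:4], "big") % subnet.num_addresses
    return subnet[index]

# Two devices with synchronised clocks agree on the same address per slot
t = 1_700_000_000.0
assert current_hop_address(now=t) == current_hop_address(now=t)
print(current_hop_address(now=t), current_hop_address(now=t + HOP_PERIOD))
```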

Keywords: Moving Target Defense, cybersecurity, network security, hopping randomization, software defined network, network security theory.

477 Fast Search Method for Large Video Database Using Histogram Features and Temporal Division

Authors: Feifei Lee, Qiu Chen, Koji Kotani, Tadahiro Ohmi

Abstract:

In this paper, we propose an improved fast search algorithm that uses combined histogram features and a temporal division method to find short MPEG video clips in a large video database. Two types of histogram features are used to generate more robust features. The first one is based on the adjacent pixel intensity difference quantization (APIDQ) algorithm, which had previously been applied reliably to human face recognition; an APIDQ histogram is utilized as the feature vector of each frame image. The other is an ordinal feature, which is robust to color distortion. Combined with active search [4], a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on 6 hours of video by searching for 200 given MPEG video clips, each 30 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 120 ms with an Equal Error Rate (EER) of 1%, which is more accurate and robust than the conventional fast video search algorithm.
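
As a rough illustration of the first feature type, the snippet below builds an APIDQ-style frame descriptor: horizontal and vertical adjacent-pixel intensity differences are quantised into a small number of levels and histogrammed, and frames are compared by L1 distance. The published APIDQ operator differs in detail, so the binning used here is a simplifying assumption for illustration only.

```python
import numpy as np

def apidq_histogram(gray_frame, n_levels=8, max_diff=64):
    """Simplified adjacent-pixel intensity-difference quantisation feature:
    quantise horizontal and vertical differences into `n_levels` bins and
    return the normalised concatenated histogram."""
    f = gray_frame.astype(float)
    dx = f[:, 1:] - f[:, :-1]          # horizontal adjacent differences
    dy = f[1:, :] - f[:-1, :]          # vertical adjacent differences
    edges = np.linspace(-max_diff, max_diff, n_levels + 1)
    hx, _ = np.histogram(np.clip(dx, -max_diff, max_diff), bins=edges)
    hy, _ = np.histogram(np.clip(dy, -max_diff, max_diff), bins=edges)
    hist = np.concatenate([hx, hy]).astype(float)
    return hist / hist.sum()           # per-frame feature vector

def frame_distance(h1, h2):
    """L1 distance between the histogram features of two frames."""
    return np.abs(h1 - h2).sum()

rng = np.random.default_rng(0)
a, b = rng.integers(0, 256, (2, 120, 160))
print(frame_distance(apidq_histogram(a), apidq_histogram(b)))
```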

Keywords: Fast search, adjacent pixel intensity difference quantization (APIDQ), DC image, histogram feature.

476 On Solving Single-Period Inventory Model under Hybrid Uncertainty

Authors: Madhukar Nagare, Pankaj Dutta

Abstract:

The inventory decision environment for short life-cycle products is full of uncertainties arising from the randomness and fuzziness of input parameters such as customer demand, requiring modeling under hybrid uncertainty. Prior inventory models incorporating fuzzy demand have unfortunately ignored the stochastic variation of demand. This paper determines an unambiguous optimal order quantity from a set of n fuzzy observations in a newsvendor inventory setting in the presence of fuzzy random demand, capturing both the fuzzy perception and the randomness of customer demand. The emphasis of this paper is on providing a solution procedure that attains optimality in two steps when demand information is available in linguistic phrases, leading to fuzziness along with stochastic variation. The first step of the solution procedure identifies and prefers the best fuzzy opinion out of all expert opinions, and the second step determines the optimal order quantity from the selected event that maximizes profit. The model and solution procedure are illustrated with a numerical example.

Keywords: Fuzzy expected value, Fuzzy random demand, Hybrid uncertainty, Optimal order quantity, Single-period inventory

475 Genetic-Based Multi Resolution Noisy Color Image Segmentation

Authors: Raghad Jawad Ahmed

Abstract:

Segmentation of a color image composed of different kinds of regions can be a hard problem, in particular when exact texture fields must be computed and when the optimal number of segmentation areas has to be decided for an image containing similar and/or non-stationary texture fields. A novel neighborhood-based segmentation approach is proposed. A genetic algorithm is used in the proposed segment-pass optimization process; in this pass, an energy function defined on the basis of Markov random fields is minimized. We use an adaptive threshold estimation method for image thresholding in the wavelet domain based on generalized Gaussian distribution (GGD) modeling of the subband coefficients. This method, called NormalShrink, is computationally efficient and adaptive, because the parameters required for estimating the threshold depend on the subband data energy used in the pre-segmentation stage. A quadtree is employed to implement the multiresolution framework, which enables the use of different strategies at different resolution levels, so the computation can be accelerated. The experimental results obtained with the proposed segmentation approach are very encouraging.
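
The NormalShrink-style thresholding step can be sketched as below: the noise level is estimated from the finest diagonal subband (median absolute coefficient divided by 0.6745) and each detail subband is soft-thresholded with T = β·σ_noise²/σ_subband, β = sqrt(log(L_k/J)). The wavelet, the number of levels and the exact form of β follow common descriptions of NormalShrink and should be treated as assumptions rather than the paper's precise settings.

```python
import numpy as np
import pywt

def normal_shrink_denoise(image, wavelet="db4", levels=2):
    """Wavelet soft-thresholding with a NormalShrink-style subband threshold
    (a hedged sketch; parameters are assumed, not taken from the paper)."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    sigma_noise = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # finest diagonal
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:
            beta = np.sqrt(np.log(band.size / levels))
            t = beta * sigma_noise**2 / max(band.std(), 1e-12)
            shrunk.append(pywt.threshold(band, t, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)

rng = np.random.default_rng(0)
clean = np.kron(rng.integers(0, 2, (16, 16)) * 255.0, np.ones((8, 8)))
noisy = clean + rng.normal(0, 20, clean.shape)
den = normal_shrink_denoise(noisy)[:clean.shape[0], :clean.shape[1]]
print("noisy RMSE   :", round(float(np.sqrt(np.mean((noisy - clean) ** 2))), 1))
print("denoised RMSE:", round(float(np.sqrt(np.mean((den - clean) ** 2))), 1))
```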

Keywords: Color image segmentation, Genetic algorithm, Markov random field, Scale space filter.

474 Improved Feature Processing for Iris Biometric Authentication System

Authors: Somnath Dey, Debasis Samanta

Abstract:

Iris-based biometric authentication is gaining importance in recent times. Iris biometric processing, however, is a complex and computationally expensive process. In the overall processing of an iris-based biometric authentication system, feature processing is an important task: we extract iris features, which are ultimately used in matching. Since there is a large number of iris features, and computational time increases as the number of features increases, it is a challenge to develop an iris processing system with as few features as possible without compromising correctness. In this paper, we address this issue and present an approach to the feature extraction and feature matching process. We apply the Daubechies D4 wavelet with 4 levels to extract features from iris images. These features are encoded with 2 bits by quantizing them into 4 quantization levels. With our proposed approach it is possible to represent an iris template with only 304 bits, whereas existing approaches require as many as 1024 bits. In addition, we assign different weights to different iris regions when comparing two iris templates, which significantly increases the accuracy. Further, we match the iris templates based on a weighted similarity measure. Experimental results on several iris databases substantiate the efficacy of our approach.
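
A loose sketch of this pipeline is given below: detail coefficients of a Daubechies wavelet decomposition are quantised to 2 bits (4 levels) and two templates are compared with a region-weighted fraction of matching symbols. Which coefficients are retained to reach a 304-bit template, the quantiser design, the weighting scheme, and whether "Daubechies D4" refers to the 4-tap or the 8-tap filter are not specified here, so all of these are assumptions of the sketch.

```python
import numpy as np
import pywt

def iris_code(iris_strip, levels=4):
    """Quantise wavelet detail coefficients of a normalised iris strip to
    2 bits each (4 uniform quantile levels) -- illustrative only."""
    coeffs = pywt.wavedec2(iris_strip.astype(float), "db4", level=levels)
    feats = np.concatenate([band.ravel() for detail in coeffs[1:]
                            for band in detail])
    edges = np.quantile(feats, [0.25, 0.5, 0.75])
    return np.digitize(feats, edges).astype(np.uint8)        # symbols 0..3

def weighted_similarity(code_a, code_b, weights):
    """Weighted fraction of matching 2-bit symbols between two templates;
    `weights` emphasises iris regions assumed to be more reliable."""
    match = (code_a == code_b).astype(float)
    return float(np.sum(weights * match) / np.sum(weights))

rng = np.random.default_rng(0)
strip = rng.normal(size=(128, 512))                          # stand-in iris strip
code1 = iris_code(strip)
code2 = iris_code(strip + rng.normal(0, 0.05, strip.shape))  # same eye, new capture
w = np.linspace(1.0, 0.5, code1.size)                        # hypothetical region weights
print("similarity:", round(weighted_similarity(code1, code2, w), 3))
```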

Keywords: Iris recognition, biometric, feature processing, pattern recognition, pattern matching.

473 Statistical Distributions of the Lapped Transform Coefficients for Images

Authors: Vijay Kumar Nath, Deepika Hazarika, Anil Mahanta

Abstract:

Discrete Cosine Transform (DCT) based transform coding is very popular in image, video and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, the reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. The lapped transform was proposed as an alternative to the DCT with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing and the availability of fast implementation algorithms. However, no proper study has been reported in the literature regarding the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the χ² test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of a majority of the significant AC coefficients can be modeled by the generalized Gaussian distribution. Knowledge of the statistical distribution of transform coefficients greatly helps in the design of optimal quantizers that may lead to minimum distortion and hence achieve optimal coding efficiency.
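
The goodness-of-fit step can be reproduced along the lines below: candidate models (Gaussian, Laplacian, generalized Gaussian) are fitted to a sample of transform coefficients and the KS statistic is computed for each. The coefficients here are synthetic stand-ins drawn from SciPy's generalized normal distribution, since the actual LOT/LBT coefficient data are not available in this listing.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for significant AC coefficients of a block transform
coeffs = stats.gennorm.rvs(beta=0.8, scale=12.0, size=5000, random_state=rng)

candidates = {
    "Gaussian": stats.norm,
    "Laplacian": stats.laplace,
    "Generalized Gaussian": stats.gennorm,
}
for name, dist in candidates.items():
    params = dist.fit(coeffs)
    ks_stat, p_value = stats.kstest(coeffs, dist.cdf, args=params)
    print(f"{name:21s} KS statistic = {ks_stat:.4f}  p = {p_value:.3g}")
```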

Keywords: Lapped orthogonal transform, lapped biorthogonal transform, image compression, KS test.

472 Optimization Approach to Estimate Hammerstein–Wiener Nonlinear Blocks in Presence of Noise and Disturbance

Authors: Leili Esmaeilani, Jafar Ghaisari, Mohsen Ahmadian

Abstract:

The Hammerstein–Wiener model is a block-oriented model in which a linear dynamic system is surrounded by two static nonlinearities at its input and output, and it can be used to model various processes. This paper presents an optimization-based method for analysing the Hammerstein–Wiener system identification problem. The method relies on reformulating the identification problem, solving it as a constrained quadratic problem, and analysing its solutions. During the formulation of the problem, the effects on the resulting equations of adding noise to both the input and output signals of the nonlinear blocks and a disturbance to the linear block are discussed. Additionally, a possible parametric form of the matrix operations to reduce the equation size is presented. To analyse the possible solutions of the resulting system of equations, a method is presented that reduces the difference between the number of equations and the number of unknown variables by formulating and importing existing knowledge about the nonlinear functions. The obtained equations are applied to an example H–W system to validate the results and illustrate the proposed method.

Keywords: Identification, Hammerstein-Wiener, optimization, quantization.

471 Secure Block-Based Video Authentication with Localization and Self-Recovery

Authors: Ammar M. Hassan, Ayoub Al-Hamadi, Yassin M. Y. Hasan, Mohamed A. A. Wahab, Bernd Michaelis

Abstract:

Because of the great advances in multimedia technology, digital multimedia is vulnerable to malicious manipulations. In this paper, a public-key self-recovery block-based video authentication technique is proposed which can not only precisely localize alterations but also recover the missing data with high reliability. In the proposed block-based technique, multiple description coding (MDC) is used to generate two codes (two descriptions) for each block. Although one block code (one description) is enough to rebuild an altered block, the altered block is rebuilt with better quality from the two block descriptions, so using MDC increases the reliability of data recovery. A block signature is computed using a cryptographic hash function, and a doubly linked chain is utilized to embed the block signature copies and the block descriptions into the LSBs of distant blocks and the block itself. The doubly linked chain scheme gives the proposed technique the capability to thwart vector quantization attacks. In our proposed technique, anyone can check the authenticity of a given video using the public key. The experimental results show that the proposed technique is reliable for detecting, localizing and recovering alterations.

Keywords: Authentication, hash function, multiple description coding, public key encryption, watermarking.

470 A Review on Thermal Conductivity of Bio-Based Carbon Nanotubes

Authors: Gloria A. Adewumi, Andrew C. Eloka-Eboka, Freddie L. Inambao

Abstract:

Bio-based carbon nanotubes (CNTs) have received considerable research attention due to their comparative advantages of high stability, simplicity of use, low toxicity and overall environmental friendliness. Their high aspect ratio, high thermal conductivity and large specific surface area open new potential for improvement in heat transfer applications. Phonons have been identified as being responsible for thermal conduction in carbon nanotubes; therefore, understanding the mechanism of heat conduction in CNTs involves investigating the differences between the various phonon modes and identifying the kinds of phonon modes that play the dominant role. In this review, a number of studies are surveyed, and the role of the phonon relaxation rate, mainly controlled by boundary scattering and the three-phonon Umklapp scattering process, is investigated. Results show that the phonon modes are sensitive to a number of nanotube conditions such as diameter, length, temperature, defects and axial strain. At low temperatures (<100 K) the thermal conductivity increases with increasing temperature. A small nanotube size causes phonon quantization, which is evident in the thermal conductivity at low temperatures.

Keywords: Carbon nanotubes, phonons, thermal conductivity, umklapp process.

469 A Multigrid Approach for Three-Dimensional Inverse Heat Conduction Problems

Authors: Jianhua Zhou, Yuwen Zhang

Abstract:

A two-step multigrid approach is proposed to solve the inverse heat conduction problem in a 3-D object under laser irradiation. In the first step, the location of the laser center is estimated using a coarse and uniform grid system. In the second step, the front-surface temperature is recovered in good accuracy using a multiple grid system in which fine mesh is used at laser spot center to capture the drastic temperature rise in this region but coarse mesh is employed in the peripheral region to reduce the total number of sensors required. The effectiveness of the two-step approach and the multiple grid system are demonstrated by the illustrative inverse solutions. If the measurement data for the temperature and heat flux on the back surface do not contain random error, the proposed multigrid approach can yield more accurate inverse solutions. When the back-surface measurement data contain random noise, accurate inverse solutions cannot be obtained if both temperature and heat flux are measured on the back surface.

Keywords: Conduction, inverse problems, conjugate gradient method, laser.

468 Comprehensive Study on the Linear Hydrodynamic Analysis of a Truss Spar in Random Waves

Authors: Roozbeh Mansouri, Hassan Hadidi

Abstract:

Truss spars are used for oil exploitation in deep and ultra-deep water when crude oil storage is not needed. A linear hydrodynamic analysis of the truss spar under random sea wave loads is necessary to determine its behaviour. This understanding is important not only for the design of the mooring lines, but also for optimising the truss spar design itself. In this paper a linear hydrodynamic analysis of a truss spar is carried out in the frequency domain. The hydrodynamic forces are calculated using the modified Morison equation and diffraction theory. The added mass and drag coefficients of the truss section are computed by a transmission matrix, together with the normal acceleration and velocity components acting on each element, while those of the hull section are computed by strip theory. The stiffness of the truss spar can be separated into two components: hydrostatic stiffness and mooring line stiffness. The platform response amplitudes are then obtained by solving the equation of motion; this equation is non-linear due to the viscous damping term and is therefore linearised by an iteration method [1]. Finally, the RAOs and significant response amplitudes are computed and the results are compared with experimental data.
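
For reference, the modified Morison force on a slender member combines an inertia term and a quadratic drag term; the sketch below evaluates it for the horizontal kinematics of a single deep-water Airy wave component. The wave amplitude, period, member diameter and the Cd/Cm values are arbitrary illustrative numbers, not data from the analysed spar.

```python
import numpy as np

RHO = 1025.0  # sea water density, kg/m^3

def morison_force(u, du_dt, diameter, cd=1.0, cm=2.0, rho=RHO):
    """Morison force per unit length of a slender cylinder:
    inertia term rho * Cm * (pi D^2 / 4) * du/dt
    plus drag term 0.5 * rho * Cd * D * u * |u|."""
    area = np.pi * diameter**2 / 4.0
    return rho * cm * area * du_dt + 0.5 * rho * cd * diameter * u * np.abs(u)

def airy_kinematics(t, amplitude=2.0, period=10.0, z=-20.0):
    """Horizontal velocity/acceleration of a deep-water Airy wave at depth z."""
    omega = 2 * np.pi / period
    k = omega**2 / 9.81                   # deep-water dispersion approximation
    decay = np.exp(k * z)                 # exponential decay with depth
    u = amplitude * omega * decay * np.cos(omega * t)
    du = -amplitude * omega**2 * decay * np.sin(omega * t)
    return u, du

t = np.linspace(0, 20, 200)
u, du = airy_kinematics(t)
f = morison_force(u, du, diameter=1.2)    # N per metre of truss member
print("peak Morison force per unit length:", round(float(np.max(np.abs(f))), 1), "N/m")
```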

Keywords: Truss Spar, Hydrodynamic analysis, Wave spectrum, Frequency Domain

467 Performance Analysis of an Island Power System Including Wind Turbines Operating under Random Wind Speed

Authors: Meng-Jen Chen, Yu-Chi Wu, Guo-Tsai Liu, Sen-Feng Lin

Abstract:

With the continuous rise of the oil price, how to develop alternative energy sources has become a hot topic around the world. This study discusses the dynamic characteristics of an island power system operating under random wind speeds lower than the nominal wind speeds of its wind turbines. The system primarily consists of three diesel engine power generation systems, three constant-speed variable-pitch wind turbines, a small hydraulic induction generation system, and lumped static loads. Detailed models based on Matlab/Simulink were developed to capture the dynamic behavior of the system. The results suggest that this island power system can operate stably in this operational mode. This study can serve as an important reference for the planning, operation, and further expansion of island power systems.

Keywords: Diesel engine power generation system, constant-speed variable-pitch wind turbine, small hydraulic induction generation system, penetration, Matlab/Simulink, SimPowerSystems.

466 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic

Authors: Diogen Babuc

Abstract:

The motivation for this work is the question of how many devote themselves to discovering something in the world of science, where much is discerned and revealed, but at the same time much remains unknown. The building blocks of this algorithm are the ciphering and deciphering algorithms of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming the specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between the values a and b = a + 3. The obtained value is stored in a variable and remains constant during the run of the algorithm. According to the given key, the string is divided into several groups of substrings, each of length k characters. The next step involves encoding each substring from the list of existing substrings. Encoding is performed on the basis of the Caesar algorithm, i.e., shifting by k characters; however, this shift is incremented by 1 when moving to the next substring in the list, and when its value becomes greater than b + 1 it returns to its initial value. The algorithm is executed, following the same procedure, until the last substring in the list is traversed. Using this polyalphabetic method, the ciphering and deciphering of strings is achieved. The algorithm also works for a 100-character string. The x character is not used when the number of characters in a substring is incompatible with the expected length. The algorithm is simple to implement, but it is questionable whether it works better than the other methods in terms of execution time and storage space.
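
The description above is concrete enough to sketch directly, as below. The alphabet, the default value of a, and the treatment of the final short substring (padded here with 'x', following the abstract's mention of that character) are assumptions of this reading, not details confirmed by the paper.

```python
import random
import string

ALPHABET = string.ascii_lowercase

def latin_djokovic_encrypt(text, a=3, seed=None):
    """Sketch of the described scheme: pick k uniformly in [a, a+3], split the
    message into substrings of k characters (short tail padded with 'x'), and
    Caesar-shift each substring by a key that starts at k, grows by 1 per
    substring, and returns to k once it exceeds b + 1 (with b = a + 3)."""
    b = a + 3
    k = random.Random(seed).randint(a, b)        # key, constant during the run
    padded = text + "x" * (-len(text) % k)       # assumed padding rule
    blocks = [padded[i:i + k] for i in range(0, len(padded), k)]
    shift, out = k, []
    for block in blocks:
        out.append("".join(ALPHABET[(ALPHABET.index(c) + shift) % 26]
                           if c in ALPHABET else c for c in block))
        shift += 1
        if shift > b + 1:
            shift = k
    return k, "".join(out)

def latin_djokovic_decrypt(cipher, k, a=3):
    b = a + 3
    blocks = [cipher[i:i + k] for i in range(0, len(cipher), k)]
    shift, out = k, []
    for block in blocks:
        out.append("".join(ALPHABET[(ALPHABET.index(c) - shift) % 26]
                           if c in ALPHABET else c for c in block))
        shift += 1
        if shift > b + 1:
            shift = k
    return "".join(out)

key, ct = latin_djokovic_encrypt("polyalphabetic substitution", seed=7)
print(ct)
print(latin_djokovic_decrypt(ct, key))   # original text plus trailing 'x' padding
```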

Keywords: Ciphering and deciphering, Authentic Algorithm, Polyalphabetic Cipher, Random Key, methods comparison.

465 A New Image Psychovisual Coding Quality Measurement Based on Region of Interest

Authors: M. Nahid, A. Bajit, A. Tamtaoui, E. H. Bouyakhf

Abstract:

To model the human visual system (HVS) in the region of interest, we propose a new objective metric adapted to wavelet foveation-based image compression quality measurement, which exploits a foveation filter implemented in the DWT domain and based in particular on the point and region of fixation of the human eye. This model is then used to predict the visible differences between an original and a compressed image with respect to this fixation region, and yields an adapted, local error measure by removing all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images, all of which have the same local peak signal-to-noise ratio (PSNR) but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where the error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.

Keywords: Human visual system, image quality, image compression, foveation wavelet, region of interest ROI.

464 Random Subspace Neural Classifier for Meteor Recognition in the Night Sky

Authors: Carlos Vera, Tetyana Baydyk, Ernst Kussul, Graciela Velasco, Miguel Aparicio

Abstract:

This article describes the Random Subspace Neural Classifier (RSC) for the recognition of meteors in the night sky. We used images of meteors entering the atmosphere at night between 8:00 p.m. and 5:00 a.m. The objective of this project is to classify meteor and star images (with stars as the image background). The monitoring of the sky and the classification of meteors are intended for future applications by scientists. The image database was collected from different websites. We worked with RGB images with dimensions of 220x220 pixels stored in the bitmap (BMP) format. Subsequent window scanning and processing were carried out for each image. The scan window from which the features were extracted had a size of 20x20 pixels, with a scanning step of 10 pixels. Brightness, contrast and contour-orientation histograms were used as inputs to the RSC. The RSC worked with two classes: 1) with meteors and 2) without meteors. Different tests were carried out by varying the number of training cycles and the number of images for training and recognition, and the percentage error of the neural classifier was calculated. The results show a good RSC classifier response, with 89% correct recognition. The results of these experiments are presented and discussed.
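
The scanning and feature-extraction stage described above can be sketched as follows: a 20x20 window is moved over the frame in 10-pixel steps, and for each position a brightness histogram, a gradient-orientation histogram (standing in for the contour-orientation histogram) and a contrast value are collected. The bin counts and the synthetic test frame are assumptions, and the RSC classifier itself is not reproduced here.

```python
import numpy as np

def window_features(gray, win=20, step=10, n_bins=16):
    """Scan a grayscale frame with a win x win window moved by `step` pixels
    and build per-window brightness and orientation histograms plus contrast."""
    gy, gx = np.gradient(gray.astype(float))
    orientation = np.arctan2(gy, gx)                 # local gradient direction
    feats = []
    for r in range(0, gray.shape[0] - win + 1, step):
        for c in range(0, gray.shape[1] - win + 1, step):
            patch = gray[r:r + win, c:c + win].astype(float)
            b_hist, _ = np.histogram(patch, bins=n_bins, range=(0, 255))
            o_hist, _ = np.histogram(orientation[r:r + win, c:c + win],
                                     bins=n_bins, range=(-np.pi, np.pi))
            contrast = patch.max() - patch.min()
            feats.append(np.concatenate([b_hist, o_hist, [contrast]]))
    return np.array(feats, dtype=float)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (220, 220)).astype(np.uint8)   # star-field stand-in
X = window_features(frame)
print(X.shape)   # one feature vector per 20x20 window position
```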

Keywords: Contour orientation histogram, meteors, night sky, RSC neural classifier, stars.

463 Motion Detection Method for Clutter Rejection in the Bio-Radar Signal Processing

Authors: Carolina Gouveia, José Vieira, Pedro Pinho

Abstract:

Cardiopulmonary signal monitoring without contact electrodes or any type of in-body sensor has several applications, such as sleep monitoring and the continuous monitoring of vital signs in bedridden patients. Such a system also has applications in the vehicular environment, to monitor the driver in order to avoid accidents in case of cardiac failure. The bio-radar system proposed in this paper can measure vital signs accurately by using the Doppler effect, which relates the received signal properties to the change in distance between the radar antennas and the person's chest wall. Since the bio-radar aims to monitor subjects in real time and over long periods, it is impossible to guarantee the patient's immobilization, and hence their random motion will interfere with the acquired signals. In this paper, a mathematical model of the bio-radar is presented, as well as its simulation in MATLAB. The algorithm used for breathing rate extraction is explained, and a method for DC offset removal based on a motion detection system is proposed. Furthermore, experimental tests were conducted to show that the unavoidable random motion can be used to estimate the DC offsets accurately and thus remove them successfully.

Keywords: Bio-signals, DC Component, Doppler Effect, ellipse fitting, radar, SDR.
