Search results for: Block method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8398

6958 Multiscale Blind Image Restoration with a New Method

Authors: Alireza Mallahzadeh, Hamid Dehghani, Iman Elyasi

Abstract:

A new method, based on normal shrink and a modified version of the Katsaggelos and Lay algorithm, is proposed for multiscale blind image restoration. The method deals with both noise and blur in images. It is shown that normal shrink gives the highest S/N (signal-to-noise ratio) in the image denoising process. The multiscale blind image restoration is divided into two parts: the first part of this paper proposes normal shrink for image denoising, and the second part proposes a modified version of the Katsaggelos and Lay algorithm for blur estimation, with the two methods combined to achieve multiscale blind image restoration.

Keywords: Multiscale blind image restoration, image denoising, blur estimation.

6957 On Identity Disclosure Risk Measurement for Shared Microdata

Authors: M. N. Huda, S. Yamada, N. Sonehara

Abstract:

Probability-based identity disclosure risk measurement may give the same overall risk for different anonymization strategies applied to the same dataset. Some entities in the anonymized dataset may have higher identification risks than others. Individuals are more concerned about risks higher than the average and want to know whether they may be under such higher risk. A single notion of overall risk in the above measurement method does not indicate whether some of the involved entities have higher identity disclosure risk than others. In this paper, we introduce an identity disclosure risk measurement method that not only yields an overall risk but also indicates whether some of the members have higher risk than others. The proposed method quantifies the overall risk based on the individual risk values, the percentage of records whose risk is higher than the average, and how much larger those higher risk values are compared to the average. We analyze the disclosure risks for different disclosure control techniques applied to the original microdata and present the results.

Keywords: Anonymization, microdata, disclosure risk, privacy.
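As a rough illustration of how such a composite measure could be assembled, the following Python fragment combines the three ingredients mentioned in the abstract: the mean individual risk, the fraction of records above the mean, and how much larger the above-average risks are. The abstract does not give the exact formula, so the weighting below is a hypothetical sketch, not the authors' method.

```python
import numpy as np

def composite_disclosure_risk(individual_risks):
    """Hypothetical composite of the three ingredients named in the abstract:
    mean risk, share of records above the mean, and how much larger the
    above-average risks are. The authors' actual weighting is not reproduced."""
    r = np.asarray(individual_risks, dtype=float)
    mean_risk = r.mean()
    above = r[r > mean_risk]
    share_above = above.size / r.size                 # fraction of high-risk records
    excess_ratio = above.mean() / mean_risk if above.size else 1.0
    return mean_risk * (1.0 + share_above * (excess_ratio - 1.0))

# Example: per-record re-identification probabilities for an anonymized table
print(composite_disclosure_risk([0.01, 0.02, 0.02, 0.35, 0.50]))
```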

6956 Effects of Roughness Elements on Heat Transfer during Natural Convection

Authors: M. Yousaf, S. Usman

Abstract:

The present study investigated the effects of roughness elements on heat transfer during natural convection in a rectangular cavity using a numerical technique. Roughness elements were introduced on the bottom hot wall with a normalized amplitude (A*/H) of 0.1. Thermal and hydrodynamic behaviors were studied using a computational method based on the Lattice Boltzmann method (LBM). Numerical studies were performed for laminar flow in the range of Rayleigh number (Ra) from 10³ to 10⁶ for a rectangular cavity of aspect ratio (L/H) 2.0 with a fluid of Prandtl number (Pr) 1.0. The presence of the sinusoidal roughness elements caused a decrease in heat transfer of 7% to 17% compared to the smooth enclosure. The results are presented as mean Nusselt number (Nu), isotherms and streamlines.

Keywords: Natural convection, Rayleigh number, surface roughness, Nusselt number, Lattice Boltzmann Method.
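For reference, the dimensionless groups quoted above have their standard definitions (a reminder, not something specific to this paper; the cavity height H is assumed here as the characteristic length):

```latex
\[
\mathrm{Ra} = \frac{g\beta\,\Delta T\,H^{3}}{\nu\alpha},\qquad
\mathrm{Pr} = \frac{\nu}{\alpha},\qquad
\overline{\mathrm{Nu}} = \frac{\bar{h}\,H}{k}
\]
% g: gravitational acceleration, beta: thermal expansion coefficient,
% Delta T: hot-cold wall temperature difference, H: cavity height,
% nu: kinematic viscosity, alpha: thermal diffusivity,
% h-bar: mean convective coefficient, k: thermal conductivity.
```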

6955 Statistical Distributions of the Lapped Transform Coefficients for Images

Authors: Vijay Kumar Nath, Deepika Hazarika, Anil Mahanta

Abstract:

Discrete Cosine Transform (DCT) based transform coding is very popular in image, video and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, the reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. The lapped transform was proposed as an alternative to the DCT with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing and availability of fast implementation algorithms. However, no proper study has been reported in the literature regarding the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the χ² test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of a majority of the significant AC coefficients can be modeled by the generalized Gaussian distribution. Knowledge of the statistical distribution of transform coefficients greatly helps in the design of optimal quantizers that may lead to minimum distortion and hence optimal coding efficiency.

Keywords: Lapped orthogonal transform, Lapped biorthogonal transform, Image compression, KS test, χ² test.
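A minimal sketch of this kind of goodness-of-fit check, using SciPy's generalized Gaussian (`gennorm`) and the KS test; the array below is random stand-in data, not the paper's LOT/LBT coefficients:

```python
import numpy as np
from scipy import stats

# Stand-in for a subband of significant AC lapped-transform coefficients
coeffs = np.random.standard_t(df=5, size=5000)

# Fit a generalized Gaussian (SciPy's gennorm); beta is the shape parameter
beta, loc, scale = stats.gennorm.fit(coeffs)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution
ks_stat, p_value = stats.kstest(coeffs, 'gennorm', args=(beta, loc, scale))
print(f"shape={beta:.2f}, KS statistic={ks_stat:.4f}, p-value={p_value:.3f}")
```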

6954 A Novel Low Power, High Speed 14 Transistor CMOS Full Adder Cell with 50% Improvement in Threshold Loss Problem

Authors: T. Vigneswaran, B. Mukundhan, P. Subbarami Reddy

Abstract:

Full adders are important components in applications such as digital signal processor (DSP) architectures and microprocessors. In addition to its main task of adding two numbers, the full adder participates in many other useful operations such as subtraction, multiplication, division and address calculation. In most of these systems the adder lies in the critical path that determines the overall speed of the system, so enhancing the performance of the 1-bit full adder cell (the building block of the adder) is a significant goal. Demands for low power VLSI have been pushing the development of aggressive design methodologies to reduce power consumption drastically. To meet this growing demand, we propose a new low power adder cell that, at the cost of a higher MOS transistor count, reduces the serious threshold loss problem, considerably increases the speed and decreases the power consumption when compared to the static energy recovery full (SERF) adder. A new improved 14T CMOS 1-bit full adder cell is presented in this paper. Results show a 50% improvement in the threshold loss problem, a 45% improvement in speed and a considerable reduction in power consumption over the SERF adder and other types of adders with comparable performance.

Keywords: Arithmetic circuit, full adder, multiplier, low power, very Large-scale integration (VLSI).

6953 The Effect of Geometry Dimensions on the Earthquake Response of the Finite Element Method

Authors: Morteza Jiryaei Sharahi

Abstract:

In this paper, the effect of the width and height of the model on the earthquake response in the finite element method is discussed. For this purpose, an earth dam, as a soil structure under earthquake loading, has been considered. Various dam-foundation models are analyzed with Plaxis, a finite element package for solving geotechnical problems. The results indicate considerable differences in the seismic responses.

Keywords: Geometry dimensions, finite element, earthquake

6952 Copy-Move Image Forgery Detection in Virtual Electrostatic Field

Authors: Michael Zimba, Darlison Nyirenda

Abstract:

A novel copy-move image forgery (CMIF) detection method is proposed. The proposed method presents a new approach that relies on electrostatic field theory (EFT). Solely for the purpose of reducing the dimension of a suspicious image, the proposed algorithm first performs a discrete wavelet transform (DWT) of the suspicious image and extracts only the approximation subband. The extracted subband is then bijectively mapped onto a virtual electrostatic field where concepts of EFT are utilized to extract robust features. The extracted features are invariant to additive noise, JPEG compression, and affine transformation. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect duplicated regions. SATS is a better option than the common shift vector method because SATS is insensitive to affine transformation. Consequently, the proposed CMIF algorithm is not only fast but also more robust to attacks than existing related CMIF algorithms. The experimental results show high detection rates, as high as 100% in some cases.

Keywords: Affine transformation, Radix sort, SATS, Virtual electrostatic field.
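The dimensionality-reduction step described above (keeping only the DWT approximation subband) can be sketched with PyWavelets; the wavelet choice and the random stand-in image are assumptions for illustration, since the abstract does not specify them:

```python
import numpy as np
import pywt

# Stand-in grayscale image; in practice this would be the suspicious image
image = np.random.rand(256, 256)

# Single-level 2-D DWT; keep only the approximation (low-low) subband,
# which quarters the number of samples to be searched for duplicated regions
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
print(image.shape, '->', cA.shape)   # (256, 256) -> (128, 128)
```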

6951 Co-tier and Co-channel Interference Avoidance Algorithm for Femtocell Networks

Authors: S. Padmapriya, M. Tamilarasi

Abstract:

Femtocells are regarded as a milestone for next generation cellular networks. As femtocells are deployed in an unplanned manner, there is a chance of assigning the same resource to neighboring femtocells. This scenario may induce co-channel interference and may seriously affect the service quality of neighboring femtocells. In addition, the dominant transmit power of a femtocell will induce co-tier interference to neighboring femtocells. Thus, to jointly handle co-tier and co-channel interference, we propose an interference-free power and resource block allocation (IFPRBA) algorithm for closely located, closed access femtocells. Based on the neighboring list, inter-femto-base-station distance and uplink noise power, the IFPRBA algorithm assigns non-interfering power and resources to femtocells. The IFPRBA algorithm also guarantees quality of service to the femto-user based on knowledge of the resource requirement, connection type, and the tolerable delay budget. Simulation results show that the interference power experienced with the IFPRBA algorithm is below the tolerable interference power, and hence the overall service success ratio, PRB efficiency and network throughput are maximum when compared to the conventional resource allocation framework for femtocells (RAFF) algorithm.

Keywords: Co-channel interference, co-tier interference, femtocells, guaranteed QoS, power optimization, resource assignment.

6950 Rough Set Based Intelligent Welding Quality Classification

Authors: L. Tao, T. J. Sun, Z. H. Li

Abstract:

The knowledge base of welding defect recognition is essentially incomplete. This characteristic means that the recognition results do not fully reflect the actual situation, which in turn affects the classification of welding quality. This paper is concerned with a rough set based method to reduce this influence and improve the classification accuracy. First, a rough set model of welding quality intelligent classification is built, and both condition and decision attributes are specified. Then, groups of representative multiple compound defects are chosen from the defect library and classified correctly to form the decision table. Finally, the redundant information of the decision table is removed by attribute reduction and the optimal decision rules are obtained. By this method, we are able to reclassify the misclassified defects to the right quality level. Compared with ordinary methods, this method has higher accuracy and better robustness.

Keywords: intelligent decision, rough set, welding defects, welding quality level

6949 Oscillation Effect of the Multi-stage Learning for the Layered Neural Networks and Its Analysis

Authors: Isao Taguchi, Yasuo Sugai

Abstract:

This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output layer unit. Compared with more recent neural networks such as pulse neural networks and quantum neuro-computation, the multilayer network is widely used due to its simple structure. When learning objects are complicated, problems such as unsuccessful learning or a significant time required for learning remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics of an output layer unit oscillate during the learning process for complicated problems. The multi-stage learning method proposed by the authors for function approximation problems classifies the learning data in a phased manner, focusing on their learnability prior to learning in the multilayer neural network; this paper demonstrates the validity of the multi-stage learning method. Specifically, it is verified by computer experiments that both learning accuracy and learning time are improved when the BP method is used as the learning rule within the multi-stage learning method. During learning, oscillatory phenomena of the learning curve play an important role in learning performance. The authors also discuss the mechanisms by which these oscillatory phenomena occur during learning. Furthermore, by observing behaviors during learning, the authors discuss the reasons why the errors of some data remain large even after learning.

Keywords: data selection, function approximation problem, multi-stage learning, neural network, voluntary oscillation.
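A hypothetical sketch of phased data selection of the kind described above, using a scikit-learn multilayer perceptron trained with back-propagation. The selection criterion below (per-sample squared error) and the stage sizes are illustrative assumptions; the authors' actual criterion, based on the input characteristics of the output layer unit, is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy function-approximation data
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=400)

# Layered (multilayer perceptron) network; warm_start keeps weights between stages
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=300, warm_start=True,
                   random_state=0)
net.fit(X, y)                                        # initial pass to obtain errors

# Phased selection: start with the samples the current model fits most easily,
# then progressively add the harder (large-error) samples in later stages
n_stages = 3
for stage in range(1, n_stages + 1):
    errors = (net.predict(X) - y) ** 2               # per-sample squared error
    keep = np.argsort(errors)[: int(len(X) * stage / n_stages)]
    net.fit(X[keep], y[keep])
print("final MSE:", np.mean((net.predict(X) - y) ** 2).round(4))
```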

6948 Perturbation Based Search Method for Solving Unconstrained Binary Quadratic Programming Problem

Authors: Muthu Solayappan, Kien Ming Ng, Kim Leng Poh

Abstract:

This paper presents a perturbation based search method to solve the unconstrained binary quadratic programming problem. The proposed algorithm was tested on some of the standard test problems and the results are reported for 10 instances each of 50-, 100-, 250- and 500-variable problems. A comparison of the performance of the proposed algorithm with other heuristics and optimization software is made. Based on the results, it was found that the proposed algorithm is computationally inexpensive and the solutions obtained match the best known solutions for smaller sized problems. For larger instances, the algorithm is capable of finding a solution within 0.11% of the best known solution. Apart from being used as a stand-alone method, this algorithm could also be incorporated with other heuristics to find better solutions.

Keywords: unconstrained binary quadratic programming, perturbation, interior point methods
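A generic sketch of a perturbation-based search for the unconstrained binary quadratic program (maximize x^T Q x over x in {0,1}^n): greedy one-flip local search alternating with a random perturbation of a fraction of the bits. This illustrates the general idea only; it is not the authors' exact algorithm, and the parameters are arbitrary.

```python
import numpy as np

def perturbation_search(Q, max_iter=200, flip_frac=0.1, seed=0):
    """Greedy one-flip local search plus random bit-flip perturbation (sketch)."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    obj = lambda v: float(v @ Q @ v)
    x = rng.integers(0, 2, n)
    best_x, best_val = x.copy(), obj(x)
    for _ in range(max_iter):
        improved = True
        while improved:                              # one-flip local search
            improved = False
            cur = obj(x)
            for i in range(n):
                x[i] ^= 1
                new = obj(x)
                if new > cur:
                    cur, improved = new, True
                else:
                    x[i] ^= 1                        # undo a non-improving flip
        if cur > best_val:
            best_x, best_val = x.copy(), cur
        k = max(1, int(flip_frac * n))               # perturbation: flip random bits
        x[rng.choice(n, size=k, replace=False)] ^= 1
    return best_x, best_val

# Tiny random symmetric instance as a usage example
rng = np.random.default_rng(1)
Q = rng.integers(-10, 11, size=(30, 30))
Q = (Q + Q.T) // 2
x_star, value = perturbation_search(Q)
print(value)
```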

6947 Effect of Cowpea (Vigna sinensis L.) with Maize (Zea mays L.) Intercropping on Yield and Its Components

Authors: W. A. Hamd Alla, E. M. Shalaby, R. A. Dawood, A. A. Zohry

Abstract:

A field experiment was carried out at the Arab El-Awammer Research Station, Agricultural Research Center, Assiut Governorate, during the summer seasons of 2013 and 2014. The present study assessed the effect of cowpea with maize intercropping on yield and its components. The experiment comprised three treatments (sole cowpea, sole maize and cowpea-maize intercrop). The experimental design was a randomized complete block with four replications. Results indicated that maize plants intercropped with cowpea exhibited greater potential and gave higher values for most of the studied criteria, viz. plant height, number of ears/plant, number of rows/ear, number of grains/row, grain weight/ear, 100-grain weight, and straw and grain yields. Fresh and dry forage yields of cowpea were lower in intercropping with maize than in the sole crop. Furthermore, over the two seasons combined, the total Land Equivalent Ratio (LER) between cowpea and maize was 1.65. The Aggressivity (A) was 0.45 for maize and -0.45 for cowpea, showing that maize was the dominant crop and cowpea the dominated one. The Competitive Ratio (CR) indicated that maize was more competitive than cowpea (maize 1.75, cowpea 0.57). The Actual Yield Loss (AYL) was 0.05 for maize and -0.40 for cowpea. The Monetary Advantage Index (MAI) was 2360.80.

Keywords: Intercropping, cowpea, maize, land equivalent ratio (LER).
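For reference, the total land equivalent ratio quoted above follows the standard definition (a standard formula, not specific to this paper), where Y denotes yield and the subscripts distinguish intercropped from sole-crop stands:

```latex
\[
\mathrm{LER} \;=\;
\frac{Y_{\text{maize, intercrop}}}{Y_{\text{maize, sole}}}
\;+\;
\frac{Y_{\text{cowpea, intercrop}}}{Y_{\text{cowpea, sole}}}
\]
% LER > 1 (here 1.65) indicates a land-use advantage of intercropping
% over growing the two sole crops separately.
```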

6946 Semi-Automatic Artifact Rejection Procedure Based on Kurtosis, Renyi's Entropy and Independent Component Scalp Maps

Authors: Antonino Greco, Nadia Mammone, Francesco Carlo Morabito, Mario Versaci

Abstract:

Artifact rejection plays a key role in many signal processing applications. Artifacts are disturbances that can occur during signal acquisition and that can alter the analysis of the signals themselves. Our aim is to automatically remove the artifacts, in particular from electroencephalographic (EEG) recordings. A technique for automatic artifact rejection, based on Independent Component Analysis (ICA) for artifact extraction and on some higher order statistics such as kurtosis and Shannon's entropy, was proposed some years ago in the literature. In this paper we try to enhance this technique by proposing a new method based on Renyi's entropy. The performance of our method was tested and compared to the performance of the method in the literature, and the former proved to outperform the latter.

Keywords: Artifact, EEG, Renyi's entropy, kurtosis, independent component analysis.
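A minimal sketch of the marker computation described above (ICA decomposition, then kurtosis and Renyi's entropy per component). The synthetic data, the histogram-based entropy estimate and the Renyi order α = 2 are assumptions made for illustration, not the paper's settings.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def renyi_entropy(x, alpha=2.0, bins=64):
    """Histogram estimate of Renyi's entropy H_alpha = log(sum p^alpha)/(1-alpha)."""
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# Synthetic multichannel "EEG": mixed sources, a stand-in for real recordings
rng = np.random.default_rng(0)
S = np.c_[np.sin(np.linspace(0, 40, 2000)),
          rng.laplace(size=2000),
          rng.normal(size=2000)]
X = S @ rng.normal(size=(3, 3))                      # mixed observations

components = FastICA(n_components=3, random_state=0).fit_transform(X)
for k, c in enumerate(components.T):
    print(f"IC{k}: kurtosis={kurtosis(c):+.2f}, Renyi H2={renyi_entropy(c):.2f}")
```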

6945 A Numerical Algorithm for Positive Solutions of Concave and Convex Elliptic Equation on R2

Authors: Hailong Zhu, Zhaoxiang Li

Abstract:

In this paper we investigate numerically the positive solutions of the equation -Δu = λu^q + u^p with Dirichlet boundary condition in a bounded domain Ω, for λ > 0 and 0 < q < 1 < p < 2*. We compute and visualize the range of λ for which this problem admits a numerical solution.

Keywords: positive solutions, concave-convex, sub-super solution method, pseudo arclength method.

6944 Software Tools for System Identification and Control using Neural Networks in Process Engineering

Authors: J. Fernandez de Canete, S. Gonzalez-Perez, P. del Saz-Orozco

Abstract:

Neural networks offer an alternative approach to both the identification and control of nonlinear processes in process engineering. The lack of software tools for the design of controllers based on neural network models is particularly pronounced in this field. SIMULINK is a widely used graphical code development environment which allows system-level developers to perform rapid prototyping and testing. Such a graphics-based programming environment involves block-based code development and offers a more intuitive approach to modeling and control tasks in a great variety of engineering disciplines. In this paper a SIMULINK-based neural tool has been developed for the analysis and design of multivariable neural based control systems. This tool has been applied to the control of a high purity distillation column including nonlinear hydrodynamic effects. The proposed control scheme offers an optimal response to both the theoretical and practical challenges posed in the process control task, in particular when both the quality improvement of distillation products and the operation efficiency in economic terms are considered.

Keywords: Distillation, neural networks, software tools, identification, control.

6943 Analysis of Residual Strain and Stress Distributions in High Speed Milled Specimens using an Indentation Method

Authors: Felipe V. Díaz, Claudio A. Mammana, Armando P. M. Guidobono, Raúl E. Bolmaro

Abstract:

Through an analysis of the residual strain and stress distributions obtained at the surface of high speed milled specimens of AA 6082-T6 aluminium alloy, the performance of an improved indentation method is evaluated. This method integrates a special indentation device into a universal measuring machine. The device introduces elongated indents, which diminishes the absolute error of measurement. The present method offers the great advantage of avoiding both specific equipment and highly qualified personnel, and their inherent high costs. In this work, the cutting tool geometry and high speed parameters are selected to introduce reduced plastic damage. Through variation of the depth of cut, the stability of the shapes adopted by the residual strain and stress distributions is evaluated. The results show that the strain and stress distributions remain unchanged, compressive and small. Moreover, these distributions reveal a similar asymmetry when the gradients corresponding to conventional and climb cutting zones are compared.

Keywords: Residual strain, residual stress, high speed milling, indentation methods, aluminium alloys.

6942 A TFETI Domain Decomposition Solver for Von Mises Elastoplasticity Model with Combination of Linear Isotropic-Kinematic Hardening

Authors: Martin Cermak, Stanislav Sysala

Abstract:

In this paper we present an efficient parallel implementation of elastoplastic problems based on the TFETI (Total Finite Element Tearing and Interconnecting) domain decomposition method. This approach allows us to use a parallel solution, compute this nonlinear problem on supercomputers, decrease the solution time and handle problems with millions of DOFs. In our approach we consider an associated elastoplastic model with the von Mises plastic criterion and a combination of linear isotropic and kinematic hardening laws. This model is discretized by the implicit Euler method in time and by the finite element method in space. We consider a system of nonlinear equations with a strongly semismooth and strongly monotone operator. The semismooth Newton method is applied to solve this nonlinear system. The corresponding linearized problems arising in the Newton iterations are solved in parallel by the above mentioned TFETI method. The implementation is realized in our in-house MatSol package developed in MATLAB.

Keywords: Isotropic-kinematic hardening, TFETI, domain decomposition, parallel solution.

6941 The Solution of the Direct Problem of Electrical Prospecting with Direct Current under Conditions of Ground Surface Relief

Authors: Balgaisha Mukanova, Tolkyn Mirgalikyzy

Abstract:

The theory of interpretation of electromagnetic fields studied in electrical prospecting with direct current is mainly developed for the case of observation on a horizontal surface. In practice, however, we often have to work on difficult terrain. Interpreting the data without accounting for the influence of topography can produce non-existent anomalies on sections. This raises the problem of studying the impact of different shapes of ground surface relief on the results of electrical prospecting research. This research examines numerical solutions of the direct problem of electrical prospecting for two-dimensional and three-dimensional media, taking the terrain into account. The problem is solved using the method of integral equations. The density of secondary currents on the relief surface is obtained.

Keywords: Ground surface relief, method of integral equations, numerical method.

6940 The Use of Process-Oriented Methods of Calculation to Determine the Costs of Logistics Processes

Authors: Tomas Cechura, Michal Simon

Abstract:

The aim of this paper is to create a proposal for determining the costs of logistics processes by using process-oriented calculation methods. The traditional approach treats logistics costs as part of manufacturing overhead, which is usually calculated as a percentage surcharge. Therefore, in the traditional approach it is not obvious where and in which activities costs were incurred, so it is impossible to trace logistics costs to products. Our approach tries to fix, or at least improve, this issue. Another benefit of applying the process approach is the identification of logistics processes which are otherwise hidden in manufacturing overhead. The first part of this paper describes the development of process-oriented methods over time. The next part shows the possibility of applying the process-oriented method called Prozesskostenrechnung to logistics processes. The conclusion summarizes the advantages and disadvantages of using this method in logistics.

Keywords: Cost, logistics, calculation, process-oriented method.
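A toy numeric illustration of the cost-driver logic behind process-oriented costing such as Prozesskostenrechnung; the process names and figures are invented for illustration and are not taken from the paper:

```python
# Hypothetical logistics processes: (total process cost per year, cost driver quantity)
processes = {
    "goods receipt":      (40_000.0, 2_000),   # EUR, number of receipts
    "internal transport": (25_000.0, 5_000),   # EUR, number of transport orders
    "picking":            (60_000.0, 12_000),  # EUR, number of picks
}

# Cost driver rate = process cost / driver quantity; a product is then charged
# according to how many driver units it actually consumes, instead of a flat
# percentage overhead surcharge.
for name, (cost, qty) in processes.items():
    print(f"{name}: {cost / qty:.2f} EUR per driver unit")
```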

6939 A Stereo Vision System for Top View Book Scanners

Authors: Erik Lilienblum, Robert Niese, Bernd Michaelis

Abstract:

This paper proposes a novel stereo vision technique for top view book scanners which provides dense 3D point clouds of page surfaces. This is a precondition for dewarping bound volumes independently of 2D information on the page. Our method is based on algorithms which normally require the projection of pattern sequences with structured light. We use image sequences of the moving stripe lighting of the top view scanner instead of an additional light projection. Thus the stereo vision setup is simplified without losing measurement accuracy. Furthermore, we improve a surface model dewarping method by introducing a difference vector based on real measurements. Although our proposed method is hardly expensive in either calculation time or hardware requirements, we present good dewarping results even for difficult examples.

Keywords: stereo vision, 3d surface reconstruction, dewarping documents, book scanner

6938 Unequal Error Protection for Region of Interest with Embedded Zerotree Wavelet

Authors: T. Hirner, J. Polec

Abstract:

This paper describes a new method of unequal error protection (UEP) for a region of interest (ROI) with the embedded zerotree wavelet algorithm (EZW). The ROI technique is important in applications where different parts of the image have different importance. In ROI coding, a chosen ROI is encoded with higher quality than the background (BG). Unequal error protection of the image is provided by different coding techniques. In our proposed method, the image is divided into two parts (ROI, BG) that consist of more important bytes (MIB) and less important bytes (LIB). The experimental results verify the effectiveness of the design. The results of our method demonstrate the comparison of unequal error protection (UEP) of image transmission with a defined ROI and equal error protection (EEP) over multiple noisy channels.

Keywords: embedded zerotree wavelet (EZW), equal error protection (EEP), region of interest (ROI), RS code, unequal error protection (UEP)

6937 A New Method to Enhance Contrast of Electron Micrograph of Rat Tissues Sections

Authors: Lise P. Labéjof, Raiza S. P. Bizerra, Galileu B. Costa, Thaísa B. dos Santos

Abstract:

This report presents an alternative technique for applying contrast agent in vivo, i.e. before sampling. With this new method, electron micrographs of tissue sections have acceptable contrast compared to other methods and show no precipitation artifacts on the sections. Another advantage is that only a small amount of contrast agent is needed to obtain a good result, given that most contrast agents are expensive and extremely toxic.

Keywords: Image quality, Microscopy research, Staining technique, Ultrathin section.

6936 Usability in E-Commerce Websites: Results of Eye Tracking Evaluations

Authors: Beste Kaysı, Yasemin Topaloğlu

Abstract:

Usability is one of the most important quality attributes for web-based information systems, and for e-commerce applications in particular it becomes even more prominent. In this study, we aimed to explore the features that experienced users seek in e-commerce applications. We used the eye tracking method in the evaluations. Eye movement data were obtained from the eye tracking method and analyzed based on task completion time and number of fixations, as well as heat map and gaze plot measures. The results of the analysis show that participants' eye movements are too static in certain areas and their areas of interest are scattered across many different places. It has been determined that this causes users to fail to complete their transactions. Based on the findings, we outline issues that affect the usability of e-commerce websites and then propose solutions to address them. In this way, it is expected that e-commerce sites will be developed that make experienced users more satisfied.

Keywords: E-commerce websites, eye tracking method, usability, website evaluations.

6935 Detection of Clipped Fragments in Speech Signals

Authors: Sergei Aleinik, Yuri Matveev

Abstract:

In this paper a novel method for the detection of clipping in speech signals is described. It is shown that the new method has better performance than known clipping detection methods, is easy to implement, and is robust to changes in signal amplitude, size of data, etc. Statistical simulation results are presented.

Keywords: Clipping, clipped signal, speech signal processing.
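A minimal sketch of one simple way to flag clipped fragments (a plain amplitude-threshold run-length check on synthetic data); the paper's actual detector is more elaborate and is not reproduced here:

```python
import numpy as np

def clipped_fragments(signal, level=0.99, min_run=3):
    """Return (start, end) index pairs of runs where consecutive samples sit
    at or above `level` times the peak amplitude, a crude sign of clipping."""
    flat = np.abs(signal) >= level * np.max(np.abs(signal))
    runs, start = [], None
    for i, f in enumerate(flat):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_run:
                runs.append((start, i))
            start = None
    if start is not None and len(flat) - start >= min_run:
        runs.append((start, len(flat)))
    return runs

# Synthetic test: a sine wave hard-clipped at 70% of its amplitude
t = np.linspace(0, 1, 8000)
x = np.clip(np.sin(2 * np.pi * 200 * t), -0.7, 0.7)
print(len(clipped_fragments(x)), "clipped fragments found")
```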

6934 Study on the Effect of Sulphur, Glucose, Nitrogen and Plant Residues on the Immobilization of Sulphate-S in Soil

Authors: S. Shahsavani, A. Gholami

Abstract:

In order to evaluate the relationship between sulphur (S), glucose (G), nitrogen (N) and plant residues (wheat straw), sulphur immobilization and microbial transformation were monitored in five soil samples from 0-30 cm depth of farmers' fields in the Bastam area of Shahrood. Eleven treatments with different levels of sulphur, glucose, nitrogen and plant residues (wheat straw) were applied in a randomized block design with three replications and incubated over 20, 45 and 60 days. The immobilization of SO4²⁻-S, presented as a percentage of that added, was inversely related to its addition rate. Immobilization following additions of glucose and plant residues increased with the C-to-S ratio of the added amendments, irrespective of their origin (glucose or plant residues). In the presence of a C source (glucose or plant residues), N significantly increased the immobilization of SO4²⁻-S, whilst the effect of N was insignificant in the absence of a C amendment. In the first few days, the amounts of added SO4²⁻-S immobilized were linearly correlated with the amounts of added S recovered in the soil microbial biomass. With further incubation, the proportion of immobilized SO4²⁻-S remaining as biomass-S decreased; this decrease in biomass-S was thought to be due to the conversion of biomass-S into soil organic-S. Glucose addition increased the immobilization (microbial utilization and incorporation into the soil organic matter) of native soil SO4²⁻-S. However, N addition enhanced the mineralization of soil organic-S, increasing the concentration of SO4²⁻-S in the soil.

Keywords: Immobilization, microbial biomass, sulphur, nitrogen, glucose.

6933 A Hybrid Feature Selection by Resampling, Chi squared and Consistency Evaluation Techniques

Authors: Amir-Massoud Bidgoli, Mehdi Naseri Parsa

Abstract:

In this paper a combined feature selection method is proposed which takes advantage of sample domain filtering, resampling and feature subset evaluation methods to reduce the dimensions of huge datasets and select reliable features. This method utilizes both the feature space and the sample domain to improve the process of feature selection, and uses a combination of Chi-squared and Consistency attribute evaluation methods to seek reliable features. The method consists of two phases. The first phase filters and resamples the sample domain, and the second phase adopts a hybrid procedure to find the optimal feature space by applying Chi-squared and Consistency subset evaluation methods with genetic search. Experiments on datasets of various sizes from the UCI Repository of Machine Learning databases show that the performance of five classifiers (Naïve Bayes, Logistic, Multilayer Perceptron, Best First Decision Tree and JRIP) improves simultaneously and the classification error for these classifiers decreases considerably. The experiments also show that this method outperforms other feature selection methods.

Keywords: feature selection, resampling, reliable features, Consistency Subset Evaluation.
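A minimal sketch of the Chi-squared half of such a filter using scikit-learn; the dataset below and the choice of k are stand-ins, and the resampling and consistency/genetic-search stages of the paper are not reproduced here:

```python
from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

X, y = load_digits(return_X_y=True)          # non-negative features, as chi2 requires

# Keep the 20 features with the highest chi-squared score w.r.t. the class labels
X_sel = SelectKBest(chi2, k=20).fit_transform(X, y)

nb = MultinomialNB()
print("all features :", cross_val_score(nb, X, y, cv=5).mean().round(3))
print("chi2-selected:", cross_val_score(nb, X_sel, y, cv=5).mean().round(3))
```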

6932 Circuit Models for Conducted Susceptibility Analyses of Multiconductor Shielded Cables

Authors: Saih Mohamed, Rouijaa Hicham, Ghammaz Abdelilah

Abstract:

This paper presents circuit models to analyze the conducted susceptibility of multiconductor shielded cables in the frequency domain using Branin's method, which is referred to as the method of characteristics. These models, which can be used directly in the time and frequency domains, take into account the presence of both the transfer impedance and the transfer admittance. The conducted susceptibility is studied by using an injection current on the cable shield as the source. Two examples are studied: a coaxial shielded cable and shielded cables with two parallel wires (i.e., twinax cables). This shield has an asymmetry (one slot on the side). The results obtained with these models are in good agreement with those obtained by other methods.

Keywords: Circuit models, multiconductor shielded cables, Branin’s method, coaxial shielded cable, twinax cables.

6931 Fixture Layout Optimization Using Element Strain Energy and Genetic Algorithm

Authors: Zeshan Ahmad, Matteo Zoppi, Rezia Molfino

Abstract:

The stiffness of the workpiece is very important for reducing errors in the manufacturing process. High workpiece stiffness can be achieved by optimal positioning of fixture elements in the fixture. The minimization of the sum of the nodal deflections normal to the surface has been used as the objective function in previous research, while deflections in other directions have been neglected. The 3-2-1 fixturing principle is not valid for metal sheets due to their flexible nature. We propose a new fixture layout optimization method, N-3-2-1, for metal sheets that uses the strain energy of the finite elements. This method combines a genetic algorithm with finite element analysis. The objective in this method is to minimize the sum of the strain energies of all elements. By using the concept of element strain energy, the deformations in all directions are considered. Strain energy and stiffness are inversely proportional to each other, so the lower the strain energy, the higher the stiffness. Two different case studies are presented and solved for both objective functions: element strain energy and nodal deflection. The results are compared to verify the proposed method.

Keywords: Fixture layout, optimization, fixturing element, genetic algorithm.
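For reference, the element strain energy objective described above can be written in the usual finite element notation (standard definition; the constraints and genetic-algorithm encoding of the authors' formulation are not reproduced):

```latex
\[
U_e = \tfrac{1}{2}\,\mathbf{u}_e^{\mathsf T}\mathbf{k}_e\,\mathbf{u}_e,
\qquad
\min_{\text{fixture layout}} \; \sum_{e=1}^{N_e} U_e
\]
% u_e: element nodal displacement vector, k_e: element stiffness matrix,
% N_e: number of finite elements in the sheet-metal workpiece model.
% Minimizing total strain energy maximizes the effective stiffness of the sheet.
```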

6930 Effect of Concrete Nonlinear Parameters on the Seismic Response of Concrete Gravity Dams

Authors: Z. Heirany, M. Ghaemian

Abstract:

The behavior of dams under seismic loads has been studied by many researchers, most of whom proposed new numerical methods to investigate dam safety. In this paper, to study the effect of the nonlinear parameters of concrete in gravity dams, a two-dimensional approach was used including the finite element method, the staggered method and the smeared crack approach. The effective parameters in the models are physical properties of concrete such as the modulus of elasticity, tensile strength and specific fracture energy. Two different foundation models (massless and massed) were used in order to determine the seismic response of concrete gravity dams. Results show that when the nonlinear analysis includes the dam-foundation interaction, the foundation's mass, flexibility and radiation damping are important to the gravity dam's response.

Keywords: Numerical methods; concrete gravity dams; finite element method; boundary condition

6929 Identification of Nonlinear Systems Using Radial Basis Function Neural Network

Authors: C. Pislaru, A. Shebani

Abstract:

This paper uses the radial basis function neural network (RBFNN) for the system identification of nonlinear systems. Five nonlinear systems are used to examine the ability of the RBFNN in modeling nonlinear systems: a dual tank system, a single tank system, a DC motor system, and two academic models. The feedforward method is considered in this work for modelling the nonlinear dynamic models. The K-means clustering algorithm is used in this paper to select the centers of the radial basis function network, because it is reliable, offers fast convergence and can handle large data sets. The least mean square method is used to adjust the weights of the output layer, and the Euclidean distance method is used to measure the width of the Gaussian function.

Keywords: System identification, Nonlinear system, Neural networks, RBF neural network.
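A minimal sketch of an RBF network of the kind described above, with K-means centers and Euclidean-distance-based Gaussian widths. The width heuristic and the direct least-squares solve (used here instead of an iterative LMS update) are assumptions for illustration, and the toy data stand in for a real plant.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf(X, y, n_centers=20, seed=0):
    """RBF network sketch: K-means picks the centers, a common Euclidean-distance
    heuristic sets the Gaussian width, and the linear output weights are obtained
    by least squares (a stand-in for an iterative LMS rule)."""
    km = KMeans(n_clusters=n_centers, n_init=10, random_state=seed).fit(X)
    centers = km.cluster_centers_
    d_max = np.max(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
    sigma = d_max / np.sqrt(2 * n_centers)            # width heuristic (assumption)
    gauss = lambda Xq: np.exp(-np.linalg.norm(Xq[:, None] - centers[None, :],
                                              axis=-1) ** 2 / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(gauss(X), y, rcond=None)  # output-layer weights
    return lambda Xq: gauss(Xq) @ w

# Identify a simple static nonlinearity as a stand-in for a dynamic plant
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.tanh(X[:, 0]) + 0.05 * np.random.default_rng(0).normal(size=200)
model = fit_rbf(X, y)
print("max abs error:", np.abs(model(X) - y).max().round(3))
```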
