Search results for: inverse methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4193

3773 Image Dehazing Using Dark Channel Prior and Fast Guided Filter in Daubechies Lifting Wavelet Transform Domain

Authors: Harpreet Kaur, Sudipta Majumdar

Abstract:

In this paper, a method for image dehazing in the lifting wavelet transform domain is proposed. The lifting Daubechies (D4) wavelet is used to obtain the approximation image and the detail images. As the haze is contained in the low-frequency part, only the approximation image is used for further processing. This region is processed by a dehazing algorithm based on the dark channel prior (DCP). The dehazed approximation image is then recombined with the detail images using the inverse lifting wavelet transform. Implementing the lifting wavelet transform has the advantages of reduced auxiliary memory, fast implementation and simplicity. The proposed method also handles the near-white scene problem, the blue horizon issue and localized light sources in a way that enhances image quality and makes the algorithm robust. Simulation results show improvement in terms of visual quality and of parameters such as root mean square (RMS) contrast, structural similarity index (SSIM), entropy and execution time.
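
As a rough illustration of the dark channel prior step mentioned above, the following Python sketch computes a dark channel and a simple atmospheric-light estimate for a low-frequency (approximation) image. It assumes an HxWx3 float image in [0, 1], and the patch size and top fraction are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    """Dark channel prior: per-pixel minimum over the RGB channels,
    followed by a minimum filter over a local patch (patch_size is illustrative)."""
    min_over_channels = image.min(axis=2)
    return minimum_filter(min_over_channels, size=patch_size)

def estimate_atmospheric_light(image, dark, top_fraction=0.001):
    """Average the image pixels corresponding to the brightest pixels
    of the dark channel as a crude atmospheric-light estimate."""
    n = max(1, int(dark.size * top_fraction))
    idx = np.argsort(dark.ravel())[-n:]
    return image.reshape(-1, 3)[idx].mean(axis=0)
```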

Keywords: Dark channel prior, image dehazing, lifting wavelet transform.

3772 A Normalization-based Robust Image Watermarking Scheme Using SVD and DCT

Authors: Say Wei Foo, Qi Dong

Abstract:

Digital watermarking is one of the techniques for copyright protection. In this paper, a normalization-based robust image watermarking scheme which encompasses singular value decomposition (SVD) and discrete cosine transform (DCT) techniques is proposed. For the proposed scheme, the host image is first normalized to a standard form and divided into non-overlapping image blocks. SVD is applied to each block. By concatenating the first singular values (SV) of adjacent blocks of the normalized image, an SV block is obtained. DCT is then carried out on the SV blocks to produce SVD-DCT blocks. A watermark bit is embedded in the high-frequency band of an SVD-DCT block by imposing a particular relationship between two pseudo-randomly selected DCT coefficients. An adaptive frequency mask is used to adjust the local watermark embedding strength. Watermark extraction involves mainly the inverse process. The watermark extraction method is blind and efficient. Experimental results show that the quality degradation of the watermarked image caused by the embedded watermark is visually transparent. Results also show that the proposed scheme is robust against various image processing operations and geometric attacks.
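
A minimal sketch of the first two stages described above (block-wise SVD followed by a DCT of the singular-value array) is given below. The 8x8 block size, grayscale input and function names are assumptions for illustration; the embedding rule itself, the adaptive mask and the normalization step are not reproduced here.

```python
import numpy as np
from scipy.fft import dctn

def first_singular_values(image, block=8):
    """Split a normalized grayscale image into non-overlapping blocks and
    collect the largest singular value of each block into a 2-D SV array."""
    h, w = image.shape
    rows, cols = h // block, w // block
    sv = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            blk = image[i*block:(i+1)*block, j*block:(j+1)*block]
            sv[i, j] = np.linalg.svd(blk, compute_uv=False)[0]
    return sv

def sv_dct(sv_block):
    """DCT of the SV array; in such a scheme a watermark bit would then be
    embedded by enforcing a relation between two selected high-frequency
    coefficients."""
    return dctn(sv_block, norm='ortho')
```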

Keywords: Image watermarking, Image normalization, Singular value decomposition, Discrete cosine transform, Robustness.

3771 Computable Function Representations Using Effective Chebyshev Polynomial

Authors: Mohammed A. Abutheraa, David Lester

Abstract:

We show that Chebyshev Polynomials are a practical representation of computable functions on the computable reals. The paper presents error estimates for common operations and demonstrates that Chebyshev Polynomial methods would be more efficient than Taylor Series methods for evaluation of transcendental functions.
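
The following small sketch illustrates the general point of the abstract by comparing a degree-8 Chebyshev interpolant of exp on [-1, 1] with the degree-8 Taylor polynomial about 0, using NumPy's Chebyshev class. It is an illustration of the Chebyshev-versus-Taylor comparison only, not of the paper's computable-real framework or its error estimates.

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as C

# Degree-8 Chebyshev interpolant of exp on [-1, 1] versus the degree-8
# Taylor polynomial about 0, compared by maximum error on a dense grid.
x = np.linspace(-1.0, 1.0, 2001)
cheb = C.Chebyshev.interpolate(np.exp, deg=8, domain=[-1.0, 1.0])
taylor = sum(x**k / math.factorial(k) for k in range(9))

err_cheb = np.max(np.abs(cheb(x) - np.exp(x)))
err_taylor = np.max(np.abs(taylor - np.exp(x)))
print(f"max error  Chebyshev: {err_cheb:.2e}   Taylor: {err_taylor:.2e}")
```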

Keywords: Approximation Theory, Chebyshev Polynomial, Computable Functions, Computable Real Arithmetic, Integration, Numerical Analysis.

3770 Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory

Authors: Samar M. Alqhtani, Suhuai Luo, Brian Regan

Abstract:

Data fusion technology can be the best way to extract useful information from multiple sources of data, and it has been widely applied in various applications. This paper presents a data fusion approach on multimedia data for event detection in Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. There are two types of data in the fusion. The first is features extracted from text using the bag-of-words method, weighted by term frequency-inverse document frequency (TF-IDF). The second is visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied in order to fuse the information from these two sources. Our experiments indicate that, compared to approaches using an individual data source, the proposed data fusion approach increases the prediction accuracy for event detection. The proposed method achieved a high accuracy of 0.97, compared with 0.93 using text only and 0.86 using images only.
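
To make the fusion step concrete, the sketch below implements Dempster's rule of combination for two mass functions, here standing in for a text-based (TF-IDF) source and an image-based (SIFT) source. The mass values and class labels are hypothetical; this is the generic rule, not the authors' full pipeline.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same
    frame of discernment. Masses are dicts mapping frozensets of hypotheses
    to belief mass. Returns the normalized combined mass function."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical masses from a text classifier and an image classifier about
# whether a tweet reports an event.
m_text  = {frozenset({"event"}): 0.7, frozenset({"no_event"}): 0.2,
           frozenset({"event", "no_event"}): 0.1}
m_image = {frozenset({"event"}): 0.6, frozenset({"no_event"}): 0.3,
           frozenset({"event", "no_event"}): 0.1}
print(dempster_combine(m_text, m_image))
```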

Keywords: Data fusion, Dempster-Shafer theory, data mining, event detection.

3769 Adhesion Strength Evaluation Methods in Thermally Sprayed Coatings

Authors: M.Jalali Azizpour, H.Mohammadi majd, Milad Jalali, H.Fasihi

Abstract:

The techniques for estimating the adhesive and cohesive strength of high velocity oxy-fuel (HVOF) thermal spray coatings have been discussed and compared. The development trends and the latest investigations have been studied. We focus on the benefits and limitations of these methods for different processes and materials.

Keywords: Adhesion, Bonding strength, Cohesion, HVOF Thermal spray

3768 Codebook Generation for Vector Quantization on Orthogonal Polynomials based Transform Coding

Authors: R. Krishnamoorthi, N. Kannan

Abstract:

In this paper, a new algorithm for generating a codebook is proposed for vector quantization (VQ) in image coding. The significant features of the training image vectors are extracted by using the proposed Orthogonal Polynomials based transformation. We propose to generate the codebook by partitioning these feature vectors into a binary tree. Each feature vector at a non-terminal node of the binary tree is directed to one of the two descendants by comparing a single feature associated with that node to a threshold. The binary tree codebook is used for encoding and decoding the feature vectors. In the decoding process, the feature vectors are subjected to the inverse transformation with the help of the basis functions of the proposed Orthogonal Polynomials based transformation to recover the approximated input image training vectors. The results of the proposed coding are compared with VQ using the Discrete Cosine Transform (DCT) and the Pairwise Nearest Neighbor (PNN) algorithm. The new algorithm results in a considerable reduction in computation time and provides better reconstructed picture quality.
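
The sketch below shows the generic binary-tree codebook idea described above: each internal node compares one feature against a threshold, and each leaf stores a codeword. The splitting rule (highest-variance feature, median threshold) and the depth/size limits are assumptions for illustration; the paper's orthogonal-polynomial feature extraction is not reproduced.

```python
import numpy as np

class CodebookNode:
    """Binary tree codebook: internal nodes route a feature vector by
    comparing a single feature against a threshold; leaves store a codeword
    (here, the centroid of the training vectors that reached them)."""

    def __init__(self, vectors, depth=0, max_depth=6, min_size=8):
        self.left = self.right = self.codeword = None
        if depth >= max_depth or len(vectors) <= min_size:
            self.codeword = vectors.mean(axis=0)
            return
        # split on the highest-variance feature at its median value
        self.feature = int(np.argmax(vectors.var(axis=0)))
        self.threshold = float(np.median(vectors[:, self.feature]))
        mask = vectors[:, self.feature] <= self.threshold
        if mask.all() or not mask.any():        # degenerate split -> leaf
            self.codeword = vectors.mean(axis=0)
            return
        self.left = CodebookNode(vectors[mask], depth + 1, max_depth, min_size)
        self.right = CodebookNode(vectors[~mask], depth + 1, max_depth, min_size)

    def encode(self, v):
        """Follow threshold comparisons down to a leaf and return its codeword."""
        if self.codeword is not None:
            return self.codeword
        child = self.left if v[self.feature] <= self.threshold else self.right
        return child.encode(v)
```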

Keywords: Orthogonal Polynomials, Image Coding, Vector Quantization, TSVQ, Binary Tree Classifier

3767 Comparison of Experimental Relationships to Determine Flow Discharge in Meandering Compound Channels Using M5 Decision Tree Model

Authors: Mehdi Kheradmand, Mehdi Azhdary Moghaddam, Abdolreza Zahiri, Khalil Ghorbani

Abstract:

This research compares the results of the major methods of determining flow discharge using experimental relationships with the results of the M5 decision tree model for meandering compound sections in several laboratory channels. It was found that the M5 decision tree model achieved greater accuracy in terms of statistical parameters than the said methods, suggesting that the M5 decision tree model considerably improves the accuracy of the calculated flow discharge in meandering compound channels.

Keywords: Stage-discharge relationship, M5 decision tree model, compound section, meandering compound channel.

3766 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information

Authors: A. Preetha Priyadharshini, S. B. M. Priya

Abstract:

In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue under imperfect CSI is to keep the outage probability of each user's achievable rate below a given threshold. Such rate outage constraints present significant analytical challenges. Many probabilistic methods are used to solve the transmit optimization problem under imperfect CSI. Here, the decomposition-based large deviation inequality and the Bernstein-type inequality convex restriction methods are used to handle the optimization problem under imperfect CSI. These methods achieve improved output quality and lower complexity, and they provide a safe tractable approximation of the original rate outage constraints. Based on these implementations, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.

Keywords: Imperfect channel state information, outage probability, multiuser multi-input single-output.

3765 Calibration of Syringe Pumps Using Interferometry and Optical Methods

Authors: E. Batista, R. Mendes, A. Furtado, M. C. Ferreira, I. Godinho, J. A. Sousa, M. Alvares, R. Martins

Abstract:

Syringe pumps are commonly used for drug delivery in hospitals and clinical environments. These instruments are critical in neonatology and oncology, where any variation in the flow rate and drug dosing quantity can lead to severe incidents and even the death of the patient. Therefore, it is very important to determine the accuracy and precision of these devices using suitable calibration methods. The Volume Laboratory of the Portuguese Institute for Quality (LVC/IPQ) uses two different methods to calibrate syringe pumps from 16 nL/min up to 20 mL/min. The interferometric method uses an interferometer to monitor the distance travelled by the pusher block of the syringe pump in order to determine the flow rate. Knowing the internal diameter of the syringe with very high precision, the travelled distance, and the time needed for that travel, it is possible to calculate the flow rate of the fluid inside the syringe and its uncertainty. As an alternative to the gravimetric and interferometric methods, a methodology based on optical technology was also developed to measure flow rates; this method relies mainly on measuring the increase in volume of a drop over time. The objective of this work is to compare the results of the calibration of two syringe pumps using the different methodologies described above. The obtained results were consistent for the three methods used. The uncertainty values were very similar for all three methods, being higher for the optical drop method due to setup limitations.
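
As a worked illustration of the interferometric calculation described above (flow rate = syringe cross-sectional area times pusher travel distance divided by elapsed time), the snippet below uses hypothetical numbers; it does not include the uncertainty evaluation.

```python
import math

def flow_rate_uL_per_min(inner_diameter_mm, distance_mm, time_s):
    """Flow rate from the interferometric principle: travelled distance of
    the pusher times the syringe cross-sectional area, divided by elapsed
    time. Note that 1 mm^3 equals 1 microlitre."""
    area_mm2 = math.pi * (inner_diameter_mm / 2.0) ** 2
    volume_uL = area_mm2 * distance_mm          # mm^3 == uL
    return volume_uL * 60.0 / time_s

# Hypothetical example: 4.61 mm inner diameter, 0.5 mm of travel in 60 s
print(f"{flow_rate_uL_per_min(4.61, 0.5, 60.0):.3f} uL/min")
```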

Keywords: Calibration, interferometry, syringe pump, optical method, uncertainty.

3764 Generation of Sets of Synthetic Classifiers for the Evaluation of Abstract-Level Combination Methods

Authors: N. Greco, S. Impedovo, R. Modugno, G. Pirlo

Abstract:

This paper presents a new technique for generating sets of synthetic classifiers to evaluate abstract-level combination methods. The sets differ in terms of both the recognition rates of the individual classifiers and their degree of similarity. For this purpose, each abstract-level classifier is considered as a random variable producing one class label as the output for an input pattern. From the initial set of classifiers, new slightly different sets are generated by applying specific operators, which are defined for this purpose. Finally, the sets of synthetic classifiers have been used to estimate the performance of combination methods for abstract-level classifiers. The experimental results demonstrate the effectiveness of the proposed approach.

Keywords: Abstract-level Classifier, Dempster-Shafer Rule, Multi-expert Systems, Similarity Index, System Evaluation

3763 High-Efficiency Comparator for Low-Power Application

Authors: M. Yousefi, N. Nasirzadeh

Abstract:

In this paper, a dynamic comparator structure employing two power consumption reduction methods, with applications in low-power high-speed analog-to-digital converters, is presented. The proposed comparator has low consumption thanks to the power reduction methods and allows offset adjustment. The comparator consumes 14.3 μW at 100 MHz, which is equal to 11.8 fJ. The comparator has been designed and simulated in 180 nm CMOS, and its layout occupies 210 μm².

Keywords: Comparator, low power, efficiency.

3762 Vibration Induced Fatigue Assessment in Vehicle Development Process

Authors: Fatih Kagnici

Abstract:

Improvements in CAE methods play an important role in shortening vehicle product development time, as they allow the design to be validated and improved in terms of durability without producing hardware prototypes. In recent years, several different methods have been developed to investigate fatigue damage of the vehicle; the goal of these methods is to predict fatigue damage in a short time and at reduced cost. This study develops a new fatigue damage prediction method for the automotive sector using power spectral densities of accelerations. It also confirms that weak regions in the vehicle can be easily detected with the developed method, whose results were compared with a conventional method.

Keywords: Fatigue damage, Power spectrum density, Vibration induced fatigue, Vehicle development

3761 Instability of Ties in Compression

Authors: T. Cornelius

Abstract:

Masonry cavity walls are loaded by wind pressure and vertical load from upper floors. These loads result in bending moments and compression forces in the ties connecting the outer and the inner wall of a cavity wall. Large cavity walls are furthermore loaded by differential movements from the temperature gradient between the outer and the inner wall, which results in a critical increase of the bending moments in the ties. Since the ties are loaded by combined compression and moment forces, the load-bearing capacity is derived from instability equilibrium equations. Most of these are iterative, since exact instability solutions are complex to derive, not to mention the extra complexity introduced by dimensional instability from the temperature gradients. Using an inverse variable substitution and comparing an exact theory with an analytical instability solution, a method to design tie connectors in cavity walls was developed. The method takes into account constraint conditions limiting the free length of the wall tie, and the instability in the case of pure compression, which gives an optimal load-bearing capacity. The model is illustrated with examples from practice.

Keywords: Masonry, tie connectors, cavity wall, instability, differential movements, combined bending and compression.

3760 3D Face Recognition Using Modified PCA Methods

Authors: Omid Gervei, Ahmad Ayatollahi, Navid Gervei

Abstract:

In this paper, we present an approach for 3D face recognition based on extracting the principal components of range images using modified PCA methods, namely 2DPCA and bidirectional 2DPCA, also known as (2D)²PCA. A preprocessing stage was implemented to smooth the images using median and Gaussian filtering. In the normalization stage, we locate the nose tip to place it at the center of the images and then crop each image to a standard size of 100×100. In the face recognition stage, we extract the principal components of each image using both 2DPCA and (2D)²PCA. Finally, we use the Euclidean distance to measure the minimum distance between a given test image and the training images in the database, and we compare the results of the two methods. The best result achieved by experiments on a public face database shows a face recognition rate of 83.3 percent for a random facial expression.
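
A minimal sketch of the 2DPCA step mentioned above is given below, assuming `images` is a NumPy array of shape (n, h, w) of already normalized range images. The number of components is an illustrative parameter; the bidirectional (2D)²PCA variant and the matching stage are not reproduced.

```python
import numpy as np

def two_dpca(images, num_components=10):
    """2DPCA: build the image covariance matrix G = E[(A - mean)^T (A - mean)]
    directly from 2-D images (no vectorization), then keep the eigenvectors
    with the largest eigenvalues as the projection matrix X."""
    mean = np.mean(images, axis=0)
    G = np.zeros((images.shape[2], images.shape[2]))
    for A in images:
        D = A - mean
        G += D.T @ D
    G /= len(images)
    eigvals, eigvecs = np.linalg.eigh(G)          # ascending eigenvalues
    return eigvecs[:, ::-1][:, :num_components]   # top components

def project(image, X):
    """Feature matrix Y = A X, compared between images with a Euclidean
    distance for nearest-neighbour matching."""
    return image @ X
```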

Keywords: 3D face recognition, 2DPCA, (2D)²PCA, range image.

3759 The Coupling of Photocatalytic Oxidation Processes with Activated Carbon Technologies and the Comparison of the Treatment Methods for Organic Removal from Surface Water

Authors: N. Areerachakul

Abstract:

The surface water used in this study was collected from the lower part of the Chao Praya River at the Nonthaburi Bridge and used throughout the experiment. TOC (also known as DOC) in the range of 2.5 to 5.6 mg/L was investigated in this experiment. Using conventional treatment methods, TOC removal was 65% with FeCl3 and 78% with PAC (powdered activated carbon). The advanced oxidation process alone showed only 35% removal of TOC. Coupling advanced oxidation with a small amount of PAC (0.05 g/L) increased efficiency by up to 55%. Combining BAC with the advanced oxidation process and a small amount of PAC demonstrated the highest efficiency, up to 95% TOC removal, and lower sludge production compared with the other methods.

Keywords: Advanced oxidation process, TOC, PAC

3758 On Dialogue Systems Based on Deep Learning

Authors: Yifan Fan, Xudong Luo, Pingping Lin

Abstract:

Nowadays, dialogue systems are increasingly becoming the way for humans to access many computer systems, allowing humans to interact with computers in natural language. A dialogue system consists of three parts: understanding what humans say in natural language, managing the dialogue, and generating responses in natural language. In this paper, we survey deep learning based methods for dialogue management, response generation and dialogue evaluation. Specifically, these methods are based on neural networks, long short-term memory networks, deep reinforcement learning, pre-training and generative adversarial networks. We compare these methods and point out further research directions.

Keywords: Dialogue management, response generation, reinforcement learning, deep learning, evaluation.

3757 Locating Critical Failure Surface in Rock Slope Stability with Hybrid Model Based on Artificial Immune System and Cellular Learning Automata (CLA-AIS)

Authors: Ramin Javadzadeh, Emad Javadzadeh

Abstract:

Locating the critical slip surface with the minimum factor of safety for a rock slope is a difficult problem. In recent years, some modern global optimization methods have been developed and applied with success to various types of problems, but very few of them have been applied to rock mechanics problems. In this paper, the use of a hybrid model based on an artificial immune system and cellular learning automata is proposed. The results show that the algorithm is an effective and efficient optimization method with a high confidence rate.

Keywords: CLA-AIS, failure surface, optimization methods, rock slope.

3756 Blow up in Polynomial Differential Equations

Authors: Rudolf Csikja, Janos Toth

Abstract:

Methods to detect and localize time singularities of polynomial and quasi-polynomial ordinary differential equations are systematically presented and developed. They are applied to examples taken from different fields of application and compared to better known methods such as those based on the existence of linear first integrals or Lyapunov functions.

Keywords: blow up, finite escape time, polynomial ODE, singularity, Lotka–Volterra equation, Painleve analysis, Ψ-series, global existence

3755 Retrieval of Relevant Visual Data in Selected Machine Vision Tasks: Examples of Hardware-based and Software-based Solutions

Authors: Andrzej Śluzek

Abstract:

To illustrate the diversity of methods used to extract relevant visual data (where the concept of relevance can be defined differently for different applications), the paper discusses three groups of such methods. They have been selected from a range of alternatives to highlight how hardware and software tools can be used in a complementary way to achieve various functionalities for different specifications of "relevant data". First, the principles of gated imaging are presented (where relevance is determined by range). The second methodology is intended for intelligent intrusion detection, while the last one is used for content-based image matching and retrieval. All methods have been developed within projects supervised by the author.

Keywords: Relevant visual data, gated imaging, intrusion detection, image matching.

3754 Differentiation of Heart Rate Time Series from Electroencephalogram and Noise

Authors: V. I. Thajudin Ahamed, P. Dhanasekaran, Paul Joseph K.

Abstract:

Analysis of heart rate variability (HRV) has become a popular non-invasive tool for assessing the activity of the autonomic nervous system. Most of the methods were borrowed from techniques used for time series analysis; currently used methods are time domain, frequency domain, geometrical and fractal methods. A new technique, which searches for pattern repeatability in a time series, is proposed for quantifying heart rate (HR) time series. This set of indices, termed the pattern repeatability measure and the pattern repeatability ratio, is able to distinguish HR data clearly from noise and electroencephalogram (EEG) data. The results of the analysis using these measures give an insight into the fundamental difference between the composition of HR time series and that of EEG and noise.
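
The pattern repeatability indices proposed in this abstract are specific to the paper and are not reproduced here. As background for the idea of quantifying pattern repeatability in a time series, the sketch below implements sample entropy (listed in the keywords), a standard index of this kind; tolerance and embedding dimension are illustrative defaults.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) = -ln(A/B), where B counts template pairs
    of length m within tolerance r (Chebyshev distance) and A counts the same
    for length m+1; self-matches are excluded and r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n_templates = len(x) - m                  # same template count for m and m+1

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n_templates)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1    # exclude the self-match
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```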

Keywords: Approximate entropy, heart rate variability, noise, pattern repeatability, and sample entropy.

3753 The Role of Banks Funding and Promoting the Foreign Trade: Case of Turkey

Authors: Mikail Altan

Abstract:

International trust takes first place in the development of a country's foreign trade, and banks play an important role in ensuring that trust. Various payment methods developed in the banking system provide a fast and reliable way to execute and promote foreign trade by financing it. In this study, we investigate the influence of banks on foreign trade in Turkey. Twenty-six years of data covering the 1990-2015 period have been used. After correlation analysis, a simple regression model was established. Payment methods developed in the banking system make a positive contribution to Turkey's foreign trade volume. In addition, Turkey's exports were affected more positively by these payment methods than its imports.

Keywords: Banks, export, foreign trade, import.

3752 Efficient Semi-Systolic Finite Field Multiplier Using Redundant Basis

Authors: Hyun-Ho Lee, Kee-Won Kim

Abstract:

The arithmetic operations over GF(2^m) have been extensively used in error correcting codes and public-key cryptography schemes. Finite field arithmetic includes addition, multiplication, division and inversion operations. Addition is very simple and can be implemented with an extremely simple circuit; the other operations are much more complex. Multiplication is the most important operation for cryptosystems, such as the elliptic curve cryptosystem, since exponentiation, division, and the multiplicative inverse can be computed by performing multiplication iteratively. In this paper, we present a parallel computation algorithm that performs Montgomery multiplication over a finite field using a redundant basis. Based on the multiplication algorithm, we also present an efficient semi-systolic multiplier over the finite field. The multiplier has lower space and time complexity compared to related multipliers. As compared to the corresponding existing structures, the multiplier saves at least 5% area, 50% time, and 53% area-time (AT) complexity. Accordingly, it is well suited for VLSI implementation and can easily be applied as a basic component for computing complex operations over the finite field, such as inversion and division.
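
For readers unfamiliar with GF(2^m) multiplication, the sketch below is a plain software baseline: polynomial-basis shift-and-add (carry-less) multiplication with reduction modulo an irreducible polynomial, here assumed to be the NIST B-163 pentanomial. It is not the paper's semi-systolic Montgomery multiplier over a redundant basis; it only illustrates the underlying field operation.

```python
def gf2m_multiply(a, b, m=163, poly=(163, 7, 6, 3, 0)):
    """Polynomial-basis multiplication in GF(2^m): XOR-based shift-and-add
    followed by reduction modulo the field polynomial (default: the NIST
    B-163 pentanomial x^163 + x^7 + x^6 + x^3 + 1)."""
    modulus = 0
    for e in poly:
        modulus |= 1 << e

    # carry-less multiplication: XOR shifted copies of a for each set bit of b
    product = 0
    while b:
        if b & 1:
            product ^= a
        a <<= 1
        b >>= 1

    # reduce the (up to 2m-1 bit) product modulo the field polynomial
    for bit in range(product.bit_length() - 1, m - 1, -1):
        if product >> bit & 1:
            product ^= modulus << (bit - m)
    return product
```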

Keywords: Finite field, Montgomery multiplication, systolic array, cryptography.

3751 A Case Study on Appearance Based Feature Extraction Techniques and Their Susceptibility to Image Degradations for the Task of Face Recognition

Authors: Vitomir Struc, Nikola Pavesic

Abstract:

Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance based have emerged as the dominant solution to the face recognition problem. Many comparative studies concerned with the performance of appearance based methods have already been presented in the literature, not rarely with inconclusive and often contradictory results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance based methods: principal component analysis, linear discriminant analysis and independent component analysis, and compares them on an equal footing (i.e., with the same preprocessing procedure, with parameters optimized for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance based methods to various image degradations which can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.

Keywords: Biometrics, face recognition, appearance based methods, image degradations, the XM2VTS database.

3750 Fast Database Indexing for Large Protein Sequence Collections Using Parallel N-Gram Transformation Algorithm

Authors: Jehad A. H. Hammad, Nur'Aini binti Abdul Rashid

Abstract:

With the rapid development of the life sciences and the flood of genomic information, the need for faster and more scalable searching methods has become urgent. One of the approaches investigated is indexing. Indexing methods have been categorized into three categories: length-based index algorithms, transformation-based algorithms and mixed-technique-based algorithms. In this research, we focused on the transformation-based methods. We embedded the N-gram method into the transformation-based method to build an inverted index table. We then applied parallel methods to speed up the index building time and to reduce the overall retrieval time when querying the genomic database. Our experiments show that the use of the N-gram transformation algorithm is an economical solution; it saves both time and space. The results show that the size of the index is smaller than the size of the dataset when the size of the N-gram is 5 or 6. The parallel N-gram transformation algorithm's results indicate that the use of parallel programming with large datasets is promising and can be improved further.
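
A minimal (serial, in-memory) sketch of the n-gram inverted index idea is shown below, with a hypothetical toy protein dataset; the paper's parallel construction and transformation step are not reproduced.

```python
from collections import defaultdict

def build_ngram_index(sequences, n=5):
    """Inverted index mapping every length-n substring (n-gram) of each
    protein sequence to the set of sequence ids containing it."""
    index = defaultdict(set)
    for seq_id, seq in sequences.items():
        for i in range(len(seq) - n + 1):
            index[seq[i:i + n]].add(seq_id)
    return index

def query(index, fragment, n=5):
    """Candidate sequences sharing every n-gram of the query fragment."""
    grams = [fragment[i:i + n] for i in range(len(fragment) - n + 1)]
    if not grams:
        return set()
    hits = index.get(grams[0], set()).copy()
    for g in grams[1:]:
        hits &= index.get(g, set())
    return hits

# hypothetical toy data
db = {"P1": "MKTAYIAKQR", "P2": "MKTAYLAKQR"}
idx = build_ngram_index(db, n=5)
print(query(idx, "MKTAY", n=5))
```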

Keywords: Biological sequence, Database index, N-gram indexing, Parallel computing, Sequence retrieval.

3749 A Comparison of Adaline and MLP Neural Network based Predictors in SIR Estimation in Mobile DS/CDMA Systems

Authors: Nahid Ardalani, Ahmadreza Khoogar, H. Roohi

Abstract:

In this paper, we compare the response of linear and nonlinear neural network-based prediction schemes for predicting the received Signal-to-Interference power Ratio (SIR) in Direct Sequence Code Division Multiple Access (DS/CDMA) systems. The nonlinear predictor is a Multilayer Perceptron (MLP) and the linear predictor is an Adaptive Linear (Adaline) predictor. We address the problem of complexity by using the Minimum Mean Squared Error (MMSE) principle to select the optimal predictors. The optimized Adaline predictor is compared to the optimized MLP by employing noisy Rayleigh fading signals with a 1.8 GHz carrier frequency in an urban environment. The results show that the Adaline predictor estimates SIR with the same error as the MLP when the user velocity is 5 km/h or 60 km/h, but when the velocity increases to 120 km/h the mean squared error of the MLP is twice that of the Adaline predictor. This makes the Adaline predictor (with its lower complexity) more suitable than the MLP for closed-loop power control, where efficient and accurate identification of the time-varying inverse dynamics of the multipath fading channel is required.
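
As a rough illustration of the linear (Adaline) predictor, the sketch below trains a one-step-ahead predictor with the LMS rule on a hypothetical noisy signal. The filter order, step size and test signal are illustrative, not the paper's tuned setup or its MMSE-based selection.

```python
import numpy as np

def adaline_predict(signal, order=4, mu=0.01):
    """One-step-ahead Adaline predictor trained with the LMS rule:
    w <- w + mu * e * x, where e is the prediction error and x holds the
    last `order` samples. Returns the sequence of predictions."""
    w = np.zeros(order)
    predictions = np.zeros(len(signal))
    for k in range(order, len(signal)):
        x = signal[k - order:k][::-1]      # most recent sample first
        y = w @ x                          # linear (Adaline) output
        e = signal[k] - y                  # prediction error
        w += mu * e * x                    # LMS weight update
        predictions[k] = y
    return predictions

# hypothetical noisy slowly varying signal standing in for measured SIR
rng = np.random.default_rng(0)
s = np.sin(0.05 * np.arange(2000)) + 0.1 * rng.standard_normal(2000)
p = adaline_predict(s)
print("MSE:", np.mean((p[4:] - s[4:]) ** 2))
```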

Keywords: Power control, neural networks, DS/CDMA mobile communication systems.

3748 SMART: Solution Methods with Ants Running by Types

Authors: Nicolas Zufferey

Abstract:

Ant algorithms are well-known metaheuristics which have been widely used for two decades. In most of the literature, an ant is a constructive heuristic able to build a solution from scratch; however, other types of ant algorithms have recently emerged, so the discussion is not limited to the common framework of constructive ant algorithms. Generally, at each generation of an ant algorithm, each ant builds a solution step by step by adding an element to it. Each choice is based on the greedy force (also called the visibility, the short-term profit or the heuristic information) and the trail system (a central memory which collects historical information about the search process). Usually, all the ants of the population have the same characteristics and behaviors. In contrast, in this paper a new type of ant metaheuristic is proposed, namely SMART (Solution Methods with Ants Running by Types). It relies on the use of different populations of ants, where each population has its own personality.
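
To make the constructive choice rule concrete, the sketch below implements the standard probabilistic step combining greedy force (visibility) and trail (pheromone), with hypothetical exponents alpha and beta; it illustrates the common framework the abstract refers to, not the SMART populations themselves.

```python
import random

def choose_next(candidates, visibility, pheromone, alpha=1.0, beta=2.0):
    """Standard constructive ant step: pick the next element with probability
    proportional to pheromone^alpha * visibility^beta. `visibility` and
    `pheromone` map each candidate to a positive value."""
    weights = [(pheromone[c] ** alpha) * (visibility[c] ** beta)
               for c in candidates]
    total = sum(weights)
    r, acc = random.uniform(0.0, total), 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]    # numerical safety net
```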

Keywords: Optimization, Metaheuristics, Ant Algorithms, Evolutionary Procedures, Population-Based Methods.

3747 Standalone Docking Station with Combined Charging Methods for Agricultural Mobile Robots

Authors: Leonor Varandas, Pedro D. Gaspar, Martim L. Aguiar

Abstract:

One of the biggest concerns in the field of agriculture is the energy efficiency of the robots that will perform agricultural activities and their charging methods. In this paper, two different charging methods for agricultural standalone docking stations are presented that take into account various factors such as field size and its irregularities, the nature of the work the robot will perform, and deadlines that have to be respected, among others. Their features also depend on the orchard, the season, the battery type and its technical specifications, and cost. The first charging base method focuses on wireless charging, presenting more benefits for small fields. The second charging base method relies on battery replacement, being more suitable for large fields and thus avoiding a stop of the robot for recharging. Among the many methods to charge a battery, CC-CV was considered the most appropriate for both simplicity and effectiveness. The choice of the battery for agricultural purposes is of utmost importance. While the most commonly used battery is the Li-ion battery, this study also discusses the use of a new type of graphene-based battery with 45% more capacity than the Li-ion one. A Battery Management System (BMS) is applied for battery balancing. All these approaches combined proved to be a promising way to improve much technical agricultural work, not just planting and harvesting but also techniques to prevent harmful events such as pests and weeds, or even to reduce crop time and cost.

Keywords: Agricultural mobile robot, charging base methods, battery replacement method, wireless charging method.

3746 Oxidation of Amitriptyline by Bromamine-T in Acidic Buffer Medium: A Kinetic and Mechanistic Approach

Authors: Chandrashekar, R. T. Radhika, B. M. Venkatesha, S. Ananda, Shivalingegowda, T. S. Shashikumar, H. Ramachandra

Abstract:

The kinetics of the oxidation of amitriptyline (AT) by sodium N-bromotoluene sulphonamide (CH3C6H4SO2NBrNa) has been studied in an acidic buffer medium of pH 1.2 at 303 K. The oxidation reaction of AT was followed spectrophotometrically at the maximum wavelength, 410 nm. The reaction rate shows a first-order dependence on the concentration of AT and on the concentration of sodium N-bromotoluene sulphonamide. The reaction also shows an inverse fractional-order dependence at low and high concentrations of HCl. The dielectric constant of the solvent has a negative effect on the rate of reaction. The addition of halide ions and of the reduction product of BAT has no significant effect on the rate. The rate is unchanged by variation of the ionic strength (NaClO4) of the medium. Addition of the reaction mixture to aqueous acrylamide solution did not initiate polymerization, indicating the absence of free radical species. The stoichiometry of the reaction was found to be 1:1 and the oxidation product of AT was identified. Michaelis-Menten type kinetics has been proposed. CH3C6H4SO2NHBr has been assumed to be the reactive oxidizing species. Thermodynamic parameters were computed by studying the reaction at different temperatures. A mechanism consistent with the observed kinetics is presented.

Keywords: Amitriptyline, bromamine-T, kinetics, oxidation.

3745 A High Performance Technique in Harmonic Omitting Based on Predictive Current Control of a Shunt Active Power Filter

Authors: K. G. Firouzjah, A. Sheikholeslami

Abstract:

The proper operation of common active filters depends on the accuracy with which system distortion is identified. Also, using a suitable method for current injection and reactive power compensation leads to increased filter performance. For this reason, this paper presents a method based on predictive current control theory for shunt active filter applications. The harmonics of the load current are identified by applying the o-d-q reference frame to the load current and eliminating the DC part of the d-q components. The remaining components are then delivered to the predictive current controller as a three-phase reference current using the inverse Park transformation. The system is modeled in the discrete time domain. The proposed method has been tested using a MATLAB model for a nonlinear load (with Total Harmonic Distortion = 20%). The simulation results indicate that the proposed filter causes a sinusoidal current (THD = 0.15%) to flow through the source. In addition, the results show that the filter tracks the reference current accurately.
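
The sketch below shows the d-q (Park) transform of three-phase currents and its inverse, the generic operations referred to above: the DC part of the d-q components corresponds to the fundamental, and what remains after removing it is the harmonic content used to build the compensation reference. The amplitude-invariant convention and sign choices are assumptions; the predictive controller itself is not reproduced.

```python
import numpy as np

TWO_THIRDS_PI = 2.0 * np.pi / 3.0

def park(ia, ib, ic, theta):
    """Amplitude-invariant Park (d-q) transform of three-phase currents."""
    d = (2.0 / 3.0) * (ia * np.cos(theta)
                       + ib * np.cos(theta - TWO_THIRDS_PI)
                       + ic * np.cos(theta + TWO_THIRDS_PI))
    q = -(2.0 / 3.0) * (ia * np.sin(theta)
                        + ib * np.sin(theta - TWO_THIRDS_PI)
                        + ic * np.sin(theta + TWO_THIRDS_PI))
    return d, q

def inverse_park(d, q, theta):
    """Inverse Park transform back to three-phase reference currents."""
    ia = d * np.cos(theta) - q * np.sin(theta)
    ib = d * np.cos(theta - TWO_THIRDS_PI) - q * np.sin(theta - TWO_THIRDS_PI)
    ic = d * np.cos(theta + TWO_THIRDS_PI) - q * np.sin(theta + TWO_THIRDS_PI)
    return ia, ib, ic
```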

Keywords: Active filter, predictive current control, low pass filter, harmonic omitting, o–d–q reference frame.

3744 Estimating Word Translation Probabilities for Thai – English Machine Translation using EM Algorithm

Authors: Chutchada Nusai, Yoshimi Suzuki, Haruaki Yamazaki

Abstract:

Selecting, from a set of target language words, the translation that conveys the correct sense of the source word and produces more fluent target language output is one of the core problems in machine translation. In this paper, we compare three methods of estimating word translation probabilities for selecting the translation word in Thai-English machine translation: (1) a method based on the frequency of word translations, (2) a method based on the collocation of word translations, and (3) a method based on the Expectation Maximization (EM) algorithm. For evaluation, we used Thai-English parallel sentences generated by NECTEC. The method based on the EM algorithm outperforms the other methods and gives satisfying results.
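
As a compact illustration of EM-based estimation of word translation probabilities, the sketch below follows the standard IBM Model 1 formulation (the paper's exact variant may differ) on a hypothetical toy parallel corpus; the Thai-English NECTEC data are not reproduced here.

```python
from collections import defaultdict

def em_word_translation(corpus, iterations=10):
    """EM estimation of word translation probabilities t(e|f) in the style
    of IBM Model 1: the E-step collects expected alignment counts, the
    M-step renormalizes them per source word."""
    e_vocab = {e for (es, _) in corpus for e in es}
    t = defaultdict(lambda: 1.0 / len(e_vocab))      # uniform initialization

    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for es, fs in corpus:                        # E-step
            for e in es:
                z = sum(t[(e, f)] for f in fs)       # normalization term
                for f in fs:
                    c = t[(e, f)] / z
                    count[(e, f)] += c
                    total[f] += c
        for (e, f) in count:                         # M-step
            t[(e, f)] = count[(e, f)] / total[f]
    return t

# hypothetical toy parallel corpus of (target words, source words) pairs
corpus = [(["the", "house"], ["das", "haus"]),
          (["the", "book"], ["das", "buch"])]
t = em_word_translation(corpus)
print(round(t[("the", "das")], 3))
```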

Keywords: Machine translation, EM algorithm.
