Search results for: Backward MPSD iterative matrix
810 Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)
Authors: Mingren Shi, Michael Renton
Abstract:
There is a world-wide need for the development of sustainable management strategies to control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe and inexpensive way to weigh the merits of various management options. However, the usefulness of simulation models relies on the accurate estimation of important model parameters, such as mortality. Concentration and time of exposure are both important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes in R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, thus motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of our two-locus model, were also available. Hence it became possible to explicitly include mortalities of the different genotypes in the model. In this paper we describe how we used two generalized linear models (GLMs), the probit and logistic models, to fit the available experimental data sets. We used a direct algebraic approach, the generalized inverse matrix technique, rather than traditional maximum likelihood estimation, to estimate the model parameters. The results show that both the probit and logistic models fit the data sets well, but the former is much better in terms of smaller least-squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to that of maximum likelihood estimation, while being less time-consuming and less computationally demanding.
Keywords: mortality estimation, probit models, logistic model, generalized inverse matrix approach, pest control simulation
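A minimal sketch of the direct algebraic (generalized inverse) fit described above, assuming a probit link with log-concentration and log-time predictors; the data values and design matrix below are illustrative placeholders, not the paper's experimental data sets.

```python
# Hypothetical probit fit via the Moore-Penrose generalized inverse rather than MLE.
import numpy as np
from scipy.stats import norm

conc = np.array([0.1, 0.2, 0.5, 1.0, 2.0])       # phosphine concentration (illustrative)
time = np.array([48.0, 48.0, 72.0, 72.0, 96.0])  # exposure time in hours (illustrative)
p    = np.array([0.05, 0.20, 0.55, 0.80, 0.98])  # observed mortality proportions (illustrative)

X = np.column_stack([np.ones_like(conc), np.log(conc), np.log(time)])  # design matrix
y = norm.ppf(p)                        # probit link applied to the observed proportions
beta = np.linalg.pinv(X) @ y           # direct algebraic estimate of the parameters

p_fit = norm.cdf(X @ beta)             # back-transform to fitted mortalities
print(beta, np.sum((p_fit - p) ** 2))  # coefficients and least-squares error
```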
809 A Kernel Classifier using Linearised Bregman Iteration
Authors: K. A. D. N. K Wimalawarne
Abstract:
In this paper we introduce a novel kernel classifier based on an iterative shrinkage algorithm developed for compressive sensing. We have adopted Bregman iteration with soft and hard shrinkage functions and a generalized hinge loss for solving the l1-norm minimization problem for classification. Our experimental results on face recognition and digit classification, using SVM as the benchmark, have shown that our method has an error rate close to that of SVM but does not perform better than SVM. We have found that the soft shrinkage method gives more accuracy and, in some situations, more sparseness than the hard shrinkage method.
Keywords: Compressive sensing, Bregman iteration, generalised hinge loss, sparse, kernels, shrinkage functions.
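A minimal sketch of the two shrinkage operators compared above and of a linearized-Bregman-style iteration for the basic problem min ||x||_1 subject to Ax = b; the step size, threshold and data are illustrative assumptions, and the paper's kernelization and generalized hinge loss are not reproduced here.

```python
import numpy as np

def soft_shrink(v, mu):
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)   # shrink-and-kill

def hard_shrink(v, mu):
    return np.where(np.abs(v) > mu, v, 0.0)               # keep-or-kill

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = A @ np.where(rng.random(100) < 0.05, 1.0, 0.0)         # sparse ground truth

mu = 5.0                                  # illustrative shrinkage threshold
delta = 0.9 / np.linalg.norm(A, 2) ** 2   # conservative step size
v = np.zeros(100)
x = np.zeros(100)
for _ in range(1000):
    v += A.T @ (b - A @ x)                # accumulate the residual direction
    x = delta * soft_shrink(v, mu)        # shrinkage keeps the iterate sparse

print("nonzeros:", int(np.count_nonzero(x)),
      "residual:", round(float(np.linalg.norm(A @ x - b)), 3))
```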
808 Unique Positive Solution of Nonlinear Fractional Differential Equation Boundary Value Problem
Authors: Fengxia Zheng
Abstract:
By using two new fixed point theorems for mixed monotone operators, the positive solution of a nonlinear fractional differential equation boundary value problem is studied. Its existence and uniqueness are proved, and an iterative scheme is constructed to approximate it.
Keywords: Fractional differential equation, boundary value problem, positive solution, existence and uniqueness, fixed point theorem, mixed monotone operator.
807 Development and Characterization of a Polymer Composite Electrolyte to Be Used in Proton Exchange Membranes Fuel Cells
Authors: B. A. Berns, V. Romanovicz, M. M. de Camargo Forte, D. E. O. S. Carpenter
Abstract:
Proton Exchange Membranes (PEM) are widely studied because they operate at low temperatures and are suitable for mobile applications. However, there are some deficiencies in their operation, mainly in cells that use ethanol as a hydrogen source, that require attention. Therefore, this research aimed to develop Nafion® composite membranes, mixing the clay minerals kaolin and halloysite into the polymer matrix in order to improve ethanol retention and, at the same time, to keep the system's protonic conductivity. The modified Nafion/kaolin and Nafion/halloysite composite membranes were prepared in weight proportions of 0.5, 1.0 and 1.5. The membranes obtained were characterized as to their ethanol permeability, protonic conductivity and water absorption. The composite morphology and structure were characterized by SEM and EDX, and the thermal behavior was determined by TGA and DSC. The analysis of the results shows a reduction in ethanol permeability of 48% to 63%. However, the protonic conductivity results were lower than those of pure Nafion®. As to the thermal behavior, the Nafion® composite membranes were stable up to a temperature of 325 ºC.
Keywords: Polymer-matrix composites (PMCs), Thermal properties, Nanoclay, Differential scanning calorimetry.
806 Analytical Analysis of Image Representation by Their Discrete Wavelet Transform
Authors: R. M. Farouk
Abstract:
In this paper, we present an analytical analysis of the representation of images by the magnitudes of their discrete wavelet transform. Such a representation serves as a model for complex cells in the early stages of visual processing, and is of high technical usefulness for image understanding, because it makes the representation insensitive to small local shifts. We found that if the signals are band-limited and of zero mean, then reconstruction from the magnitudes is unique up to the sign for almost all signals. We also present an iterative reconstruction algorithm which yields very good reconstruction up to the sign, with minor numerical errors in the very low frequencies.
Keywords: Wavelets, image processing, signal processing, image reconstruction.
805 Micromechanics of Stress Transfer across the Interface Fiber-Matrix Bonding
Authors: Fatiha Teklal, Bachir Kacimi, Arezki Djebbar
Abstract:
The study and application of composite materials are a truly interdisciplinary endeavor that has been enriched by contributions from chemistry, physics, materials science, mechanics and manufacturing engineering. The understanding of the interface (or interphase) in composites is the central point of this interdisciplinary effort. From the early development of composite materials of various nature, the optimization of the interface has been of major importance. Even more important, the ideas linking the properties of composites to the interface structure are still emerging. In our study, we need a direct characterization of the interface; the micromechanical tests we are addressing seem to meet this objective, and we chose to use two complementary tests simultaneously: the microindentation test, which can be applied to real composites, and the drop test, preferred to the pull-out test because of the theoretical possibility of studying systems with high adhesion (which is a priori the case with our systems). These two tests are complementary because of the principle of the model specimen used for both: the first is based on compression indentation, while in the second, the drop test, the fiber is subjected to tensile stress. Comparing the results obtained by the two methods can therefore be rewarding.
Keywords: Interface, micromechanics, pull-out, composite, fiber, matrix.
804 Feeder Reconfiguration for Loss Reduction in Unbalanced Distribution System Using Genetic Algorithm
Authors: Ganesh Vulasala, Sivanagaraju Sirigiri, Ramana Thiruveedula
Abstract:
This paper presents an efficient approach to feeder reconfiguration for power loss reduction and voltage profile improvement in unbalanced radial distribution systems (URDS). In this paper, a Genetic Algorithm (GA) is used to obtain the reconfiguration of radial distribution systems that minimizes the losses. A forward and backward algorithm is used to calculate load flows in unbalanced distribution systems. By simulating survival of the fittest among the strings, the optimum string is searched for through randomized information exchange between strings, performed by crossover and mutation. Results have shown that the proposed algorithm has advantages over previous algorithms. The proposed method is effectively tested on 19-node and 25-node unbalanced radial distribution systems.
Keywords: Distribution system, load flows, reconfiguration, Genetic Algorithm.
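A minimal sketch of the GA machinery described above (roulette selection, single-point crossover, bit-flip mutation) over binary switch-status strings; the loss function below is a hypothetical placeholder, whereas the real algorithm evaluates each string with the load flow on the unbalanced feeder.

```python
import numpy as np
rng = np.random.default_rng(0)

N_SWITCHES, POP, GENS = 12, 30, 50

def losses(s):
    # placeholder objective standing in for the load-flow power loss
    return float(np.sum(s * np.arange(1, s.size + 1)))

def fitness(pop):
    return np.array([1.0 / (1.0 + losses(s)) for s in pop])

pop = rng.integers(0, 2, size=(POP, N_SWITCHES))
for _ in range(GENS):
    f = fitness(pop)
    parents = pop[rng.choice(POP, size=POP, p=f / f.sum())]      # roulette selection
    children = parents.copy()
    for i in range(0, POP, 2):                                   # single-point crossover
        c = rng.integers(1, N_SWITCHES)
        children[i, c:], children[i + 1, c:] = parents[i + 1, c:], parents[i, c:]
    flip = rng.random(children.shape) < 0.02                     # bit-flip mutation
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax(fitness(pop))]
print(best, losses(best))
```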
803 Adaptive Kernel Principal Analysis for Online Feature Extraction
Authors: Mingtao Ding, Zheng Tian, Haixia Xu
Abstract:
The batch nature of standard kernel principal component analysis (KPCA) methods limits them in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for online extraction of the kernel principal components (KPC). The contribution of this paper may be divided into two parts. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and indicates the KPC variation caused by the new data. The proposed method not only alleviates the sub-optimality of the KPCA method for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in terms of both computational speed and approximation accuracy.
Keywords: adaptive method, kernel principal component analysis, online extraction, recursive algorithm
802 Machine Learning Approach for Identifying Dementia from MRI Images
Authors: S. K. Aruna, S. Chitra
Abstract:
This research paper presents a framework for classifying Magnetic Resonance Imaging (MRI) images for dementia. Dementia, an age-related cognitive decline, is indicated by degeneration of cortical and sub-cortical structures. Characterizing morphological changes helps understand disease development and contributes to early prediction and prevention of the disease. Modelling that captures the brain's structural variability and that is valid for disease classification and interpretation is very challenging. Features are extracted using Gabor filters with 0, 30, 60 and 90 degree orientations and the Gray Level Co-occurrence Matrix (GLCM). It is proposed to normalize and fuse the features. Independent Component Analysis (ICA) selects the features, and a Support Vector Machine (SVM) classifier with different kernels is evaluated for its efficiency in classifying dementia. This study evaluates the presented framework using MRI images from the OASIS dataset for identifying dementia. Results showed that the proposed feature fusion classifier achieves higher classification accuracy.
Keywords: Magnetic resonance imaging, dementia, Gabor filter, gray level co-occurrence matrix, support vector machine.
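A minimal sketch of the normalize-fuse-ICA-SVM stage of the pipeline described above, assuming the Gabor and GLCM feature matrices have already been extracted from the MRI slices; random arrays and labels stand in for real OASIS data, and the component count and kernels are illustrative choices.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FastICA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
gabor_feats = rng.standard_normal((100, 32))   # placeholder Gabor features
glcm_feats  = rng.standard_normal((100, 16))   # placeholder GLCM features
labels      = rng.integers(0, 2, size=100)     # 0 = non-demented, 1 = demented (hypothetical)

fused = np.hstack([StandardScaler().fit_transform(gabor_feats),
                   StandardScaler().fit_transform(glcm_feats)])          # normalize + fuse
reduced = FastICA(n_components=10, random_state=0).fit_transform(fused)  # ICA feature selection

for kernel in ("linear", "rbf", "poly"):       # evaluate SVM with different kernels
    acc = cross_val_score(SVC(kernel=kernel), reduced, labels, cv=5).mean()
    print(kernel, round(acc, 3))
```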
801 Comparative Study on Recent Integer DCTs
Authors: Sakol Udomsiri, Masahiro Iwahashi
Abstract:
This paper presents a comparative study of recent integer DCTs and a new method to construct a low-sensitivity structure of the integer DCT for colored input signals. The method uses the sensitivity of the multiplier coefficients to finite word length as an indicator of how word-length truncation affects the quality of the output signal. The sensitivity is also theoretically evaluated as a function of the auto-correlation and covariance matrix of the input signal. The structure of the integer DCT algorithm is optimized by combining the less sensitive lifting structure types of IRT, and is evaluated through the sensitivity of the multiplier coefficients to finite word length, expressed as a function of the covariance matrix of the input signal. The effectiveness of the optimum combination of IRT in the integer DCT algorithm is confirmed by the quality improvement compared with the existing case. As a result, the optimum combination of IRT in each integer DCT algorithm evidently improves output signal quality while remaining compatible with the existing one.
Keywords: DCT, sensitivity, lossless, word length.
800 Promising Immobilization of Cadmium and Lead inside Ca-rich Glass-ceramics
Authors: A. Karnis, L. Gautron
Abstract:
Considering the toxicity of heavy metals and their accumulation in domestic wastes, the immobilization of lead and cadmium inside glass-ceramics is envisaged. We particularly focused this work on calcium-rich phases embedded in a glassy matrix. Glass-ceramics were synthesized from glasses doped with 12 wt% and 16 wt% of PbO or CdO. They were observed and analyzed by Electron MicroProbe Analysis (EMPA) and Analytical Scanning Electron Microscopy (ASEM). Structural characterization of the samples was performed by powder X-ray diffraction. Diopside crystals of CaMgSi2O6 composition are shown to incorporate significant amounts of cadmium (up to 9 wt% of CdO). Two new crystalline phases are observed with very high Cd or Pb contents: about 40 wt% CdO for the cadmium-rich phase and nearly 60 wt% PbO for the lead-rich phase. We present a complete chemical and structural characterization of these phases. They represent a promising route for the immobilization of toxic elements like Cd or Pb, since glass-ceramics are known to offer a "double barrier" protection (metal-rich crystals embedded in a glass matrix) against metal release into the environment.
Keywords: Cadmium, calcium-rich phases, diopside, domestic wastes, fly ashes, glass-ceramics, lead, Municipal Solid Waste Incineration.
799 Performance Comparison of Particle Swarm Optimization with Traditional Clustering Algorithms used in Self-Organizing Map
Authors: Anurag Sharma, Christian W. Omlin
Abstract:
The self-organizing map (SOM) is a well-known data reduction technique used in data mining. It can reveal structure in data sets through data visualization that is otherwise hard to detect from raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters of code vectors found by SOM, but they generally do not take into account the distribution of code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of an adaptive heuristic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOM. The application of our method to several standard data sets demonstrates its feasibility. The PSO algorithm utilizes the so-called U-matrix of the SOM to determine cluster boundaries; the results of this novel automatic method compare very favorably with boundary detection through traditional algorithms, namely k-means and a hierarchical approach, which are normally used to interpret the output of SOM.
Keywords: Cluster boundaries, clustering, code vectors, data mining, particle swarm optimization, self-organizing maps, U-matrix.
798 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Mixed Integration Method: Stability Aspects and Computational Efficiency
Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino
Abstract:
In order to reduce numerical computations in the nonlinear dynamic analysis of seismically base-isolated structures, a Mixed Explicit-Implicit time integration Method (MEIM) has been proposed. Adopting the explicit, conditionally stable central difference method to compute the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method to determine the superstructure linear response, the proposed MEIM, which is conditionally stable due to the use of the central difference method, makes it possible to avoid the iterative procedure generally required by conventional monolithic solution approaches within each time step of the analysis. The main aim of this paper is to investigate the stability and computational efficiency of the MEIM when employed to perform the nonlinear time history analysis of base-isolated structures with sliding bearings. Indeed, in this case, the critical time step could become smaller than the one used to define the earthquake excitation accurately, due to the very high initial stiffness values of such devices. The numerical results obtained from nonlinear dynamic analyses of a base-isolated structure with a friction pendulum bearing system, performed by using the proposed MEIM, are compared to those obtained adopting a conventional monolithic solution approach, i.e. the implicit unconditionally stable Newmark constant acceleration method employed in conjunction with the iterative pseudo-force procedure. According to the numerical results, in the presented numerical application the MEIM does not have stability problems, since the critical time step is larger than the ground acceleration time step despite the high initial stiffness of the friction pendulum bearings. In addition, compared to the conventional monolithic solution approach, the proposed algorithm preserves its computational efficiency even when it is adopted to perform the nonlinear dynamic analysis using a smaller time step.
Keywords: Base isolation, computational efficiency, mixed explicit-implicit method, partitioned solution approach, stability.
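As a side note on the two building blocks the MEIM combines, the sketch below applies each integrator to an undamped single-degree-of-freedom oscillator m*u'' + k*u = 0 with illustrative values: the explicit central difference method is stable only for dt < 2/omega, while the Newmark constant average acceleration method is unconditionally stable. This is not the paper's partitioned base-isolation model, only the two time-stepping schemes in isolation.

```python
import numpy as np

m, k, dt, steps = 1.0, 100.0, 0.01, 1000      # omega = 10 rad/s, dt < 2/omega = 0.2
u0, v0 = 1.0, 0.0

# --- explicit central difference ---
u_prev = u0 - dt * v0 + 0.5 * dt**2 * (-k / m * u0)   # standard starting procedure
u = u0
for _ in range(steps):
    u_next = 2 * u - u_prev - dt**2 * (k / m) * u
    u_prev, u = u, u_next
print("central difference:", round(u, 4))

# --- implicit Newmark, constant average acceleration (beta = 1/4, gamma = 1/2) ---
beta, gamma = 0.25, 0.5
u, v = u0, v0
a = -k / m * u0
k_eff = k + m / (beta * dt**2)                # effective stiffness
for _ in range(steps):
    p_eff = m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
    u_new = p_eff / k_eff
    a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v = v + dt * ((1 - gamma) * a + gamma * a_new)
    u, a = u_new, a_new
print("newmark:", round(u, 4))
```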
797 Improved IDR(s) Method for Gaining Very Accurate Solutions
Authors: Yusuke Onoue, Seiji Fujino, Norimasa Nakashima
Abstract:
The IDR(s) method based on an extended IDR theorem was proposed by Sonneveld and van Gijzen. The original IDR(s) method has excellent properties compared with conventional iterative methods in terms of efficiency and small memory requirements. The IDR(s) method, however, has the unexpected property that the relative residual 2-norm stagnates at a level of less than 10^-12. In this paper, an effective strategy for stagnation detection, stagnation avoidance using adaptive information on the parameter s, and improvement of the convergence rate of the IDR(s) method itself are proposed in order to gain high accuracy in the approximate solution of the IDR(s) method. Through numerical experiments, the effectiveness of the adaptively tuned IDR(s) method is verified and demonstrated.
Keywords: Krylov subspace methods, IDR(s), adaptive tuning, stagnation of relative residual.
796 Optimization of a Three-Term Backpropagation Algorithm Used for Neural Network Learning
Authors: Yahya H. Zweiri
Abstract:
The back-propagation algorithm calculates the weight changes of an artificial neural network, and a two-term algorithm with a dynamically optimal learning rate and a momentum factor is commonly used. Recently, the addition of an extra term, called a proportional factor (PF), to the two-term BP algorithm was proposed. The third term increases the speed of the BP algorithm. However, the PF term also reduces the convergence of the BP algorithm, and optimization approaches for evaluating the learning parameters are required to facilitate the application of the three-term BP algorithm. This paper considers the optimization of the new back-propagation algorithm by using derivative information. A family of approaches exploiting the derivatives with respect to the learning rate, momentum factor and proportional factor is presented. These autonomously compute the derivatives in the weight space, using information gathered from the forward and backward procedures. The three-term BP algorithm and the optimization approaches are evaluated using the benchmark XOR problem.
Keywords: Neural networks, backpropagation, optimization.
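A minimal sketch of the three-term update described above, trained on the XOR benchmark mentioned in the abstract: each weight change is dW = -lr * gradient + momentum * dW_prev + pf * error. The network size, the exact form of the PF term and all hyperparameters are illustrative assumptions, not the paper's optimized settings, and convergence for a given random initialisation is not guaranteed.

```python
import numpy as np
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Xb = np.hstack([X, np.ones((4, 1))])             # append bias input
y = np.array([[0], [1], [1], [0]], float)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

W1 = rng.standard_normal((3, 4)) * 0.5           # input (+bias) -> 4 hidden units
W2 = rng.standard_normal((5, 1)) * 0.5           # hidden (+bias) -> output
dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
lr, mom, pf = 0.5, 0.9, 0.005                    # learning rate, momentum, proportional factor

for _ in range(10000):
    h = sig(Xb @ W1)                             # forward pass
    hb = np.hstack([h, np.ones((4, 1))])
    out = sig(hb @ W2)
    err = out - y
    d_out = err * out * (1 - out)                # backward pass
    g2 = hb.T @ d_out
    g1 = Xb.T @ ((d_out @ W2[:4].T) * h * (1 - h))
    dW2 = -lr * g2 + mom * dW2 + pf * err.mean() # three-term updates
    dW1 = -lr * g1 + mom * dW1 + pf * err.mean()
    W1 += dW1
    W2 += dW2

print("MSE:", round(float(np.mean(err ** 2)), 4))
```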
795 The Effect of Choke on the Efficiency of Coaxial Antenna for Percutaneous Microwave Coagulation Therapy for Hepatic Tumor
Authors: Surita Maini
Abstract:
The many perceived advantages of microwave ablation have driven researchers to develop innovative antennas to effectively treat deep-seated, non-resectable hepatic tumors. In this paper, a coaxial antenna with a miniaturized sleeve choke is discussed for microwave interstitial ablation therapy, in order to reduce backward heating effects irrespective of the insertion depth into the tissue. A two-dimensional Finite Element Method (FEM) is used to simulate and evaluate the miniaturized sleeve choke antenna. This paper emphasizes the importance of factors that can affect simulation accuracy, which include mesh resolution, surface heating and reflection coefficient. The effectiveness of the quarter-wavelength choke is discussed by comparing it with an unchoked antenna of the same dimensions.
Keywords: Microwave ablation, tumor, Finite Element Method, coaxial slot antenna, coaxial dipole antenna.
794 Optimization of Energy Consumption in Sequential Distillation Column
Authors: M.E. Masoumi, S. Kadkhodaie
Abstract:
The distillation column is one of the most common operations in process industries, and at the same time the most expensive unit in terms of energy consumption. Many ideas have been presented in the related literature for optimizing energy consumption in distillation columns. This paper studies different heat integration methods in a distillation column sequence which separates benzene, toluene, xylene, and C9+. Three schemes of heat integration, the indirect sequence (IQ), the indirect sequence with forward energy integration (IQF), and the indirect sequence with backward energy integration (IQB), have been studied in this paper. Using the shortcut method, these heat integration schemes were simulated with Aspen HYSYS software and compared with each other with regard to economic considerations. The results show that the energy consumption is reduced by 33% in IQF and 28% in IQB in comparison with the IQ scheme. The economic results also show that the total annual cost is reduced by 12% in IQF and 8% in IQB relative to the IQ scheme. Therefore, the IQF scheme is more economical than the IQB and IQ schemes.
Keywords: Optimization, distillation column sequence, energy savings.
793 Distributed Load Flow Analysis using Graph Theory
Authors: D. P. Sharma, A. Chaturvedi, G. Purohit, R. Shivarudraswamy
Abstract:
In today's scenario, to meet the enhanced demand imposed by domestic, commercial and industrial consumers, the various operational and control activities of a Radial Distribution Network (RDN) require focused attention. Irrespective of the RDN research sub-domain, such as network reconfiguration, reactive power compensation or economic load scheduling, network performance parameters are usually estimated by an iterative process commonly known as the load (power) flow algorithm. In this paper, a simple mechanism is presented to implement the load flow analysis (LFA) algorithm. The reported algorithm utilizes graph theory principles and is tested on a 69-bus RDN.
Keywords: Radial distribution network, graph, load flow, array.
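A minimal sketch of the classic backward/forward sweep for radial feeders, expressed over the network graph via parent pointers; this is one common way such graph-based load-flow algorithms are implemented and may differ from the paper's exact formulation, and the tiny 4-bus feeder, impedances and loads are illustrative, not the 69-bus test system.

```python
import numpy as np

parent = {1: 0, 2: 1, 3: 1}                                   # bus 0 is the source (slack) bus
z = {1: 0.02 + 0.04j, 2: 0.03 + 0.05j, 3: 0.025 + 0.045j}     # branch impedances (pu, hypothetical)
s_load = {1: 0.5 + 0.2j, 2: 0.3 + 0.1j, 3: 0.4 + 0.15j}       # complex bus loads (pu, hypothetical)

v = {b: 1.0 + 0j for b in (0, 1, 2, 3)}                       # flat voltage start
for _ in range(20):                                           # sweep until (practically) converged
    i_branch = {b: np.conj(s_load[b] / v[b]) for b in s_load} # load currents
    for b in sorted(s_load, reverse=True):                    # backward sweep: accumulate currents
        p = parent[b]
        if p in i_branch:
            i_branch[p] += i_branch[b]
    for b in sorted(s_load):                                  # forward sweep: update voltages
        v[b] = v[parent[b]] - z[b] * i_branch[b]

print({b: round(float(abs(v[b])), 4) for b in v})             # bus voltage magnitudes
```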
792 Quick Sequential Search Algorithm Used to Decode High-Frequency Matrices
Authors: Mohammed M. Siddeq, Mohammed H. Rasheed, Omar M. Salih, Marcos A. Rodrigues
Abstract:
This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm. This algorithm is applied to high-frequency coefficients for compression/encoding. The algorithm starts by converting every three coefficients to a single value; this is accomplished using three different keys. The decoding/decompression uses a search method called the QSS (Quick Sequential Search) decoding algorithm, presented in this research, which is based on sequential search to recover the exact coefficients. In the next step, the decoded data are saved in an auxiliary array. The basic idea behind the auxiliary array is to save all possible decoded coefficients; this is because another algorithm, such as conventional sequential search, could retrieve encoded/compressed data independently from the proposed algorithm. The experimental results showed that our proposed decoding algorithm retrieves original data faster than conventional sequential search algorithms.
Keywords: Matrix Minimization Algorithm, Decoding Sequential Search Algorithm, image compression, Discrete Cosine Transform, Discrete Wavelet Transform.
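A minimal sketch of the encode/decode idea described above: three coefficients are folded into one value using three keys, and decoding searches the limited range of high-frequency values for the matching triple. The key values and coefficient range are illustrative assumptions chosen so that decoding is unambiguous; the paper's QSS additionally organizes candidates in an auxiliary array to speed up the scan.

```python
from itertools import product

KEYS = (1, 20, 400)                           # hypothetical key weights
VALUES = range(-5, 6)                         # limited range of high-frequency coefficients

def encode(triple):
    # fold three coefficients into a single value using the three keys
    return sum(k * c for k, c in zip(KEYS, triple))

def qss_decode(value):
    # sequential search over all candidate triples in the known value range
    for triple in product(VALUES, repeat=3):
        if encode(triple) == value:
            return triple
    return None

coeffs = (3, -1, 4)
print(qss_decode(encode(coeffs)))             # -> (3, -1, 4)
```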
791 Blind Channel Estimation for Frequency Hopping System Using Subspace Based Method
Authors: M. M. Qasaymeh, M. A. Khodeir
Abstract:
Subspace channel estimation methods have been studied widely, where the subspace of the covariance matrix is decomposed to separate the signal subspace from the noise subspace. The decomposition is normally done by using either the eigenvalue decomposition (EVD) or the singular value decomposition (SVD) of the auto-correlation matrix (ACM). However, the subspace decomposition process is computationally expensive. This paper considers the estimation of the multipath slow frequency hopping (FH) channel using a noise-subspace-based method. In particular, an efficient method is proposed to estimate the multipath time delays by applying the multiple signal classification (MUSIC) algorithm, based on the null space extracted by the rank-revealing LU (RRLU) factorization. As a result, precise information is provided by the RRLU about the numerical null space and the rank (important tools in linear algebra). The simulation results demonstrate the effectiveness of the proposed novel method by approximately halving the computational complexity compared with RRQR methods while keeping the same performance.
Keywords: Time Delay Estimation, RRLU, RRQR, MUSIC, LS-ESPRIT, Frequency Hopping.
790 A Sum Operator Method for Unique Positive Solution to a Class of Boundary Value Problem of Nonlinear Fractional Differential Equation
Authors: Fengxia Zheng, Chuanyun Gu
Abstract:
By using a fixed point theorem of a sum operator, the existence and uniqueness of a positive solution for a class of boundary value problems of nonlinear fractional differential equations are studied. An iterative scheme is constructed to approximate it. Finally, an example is given to illustrate the main result.
Keywords: Fractional differential equation, boundary value problem, positive solution, existence and uniqueness, fixed point theorem of a sum operator.
789 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution and a low sidelobe level to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shaped beam with more concentrated energy, and its resolution and sidelobe level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite snapshot data. When the number of snapshots is limited, the algorithm has an underestimation problem, and the resulting estimation error of the covariance matrix causes beam distortion, so that the output pattern cannot form a dot-shaped beam; it also suffers from main-lobe deviation and high sidelobe levels in the limited-snapshot case. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference-subspace and noise-subspace eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing the divergence of the small eigenvalues in the noise subspace and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shaped beam with limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and achieve better performance.
Keywords: Multi-carrier frequency diverse array, adaptive beamforming, correction index, limited snapshot, robust.
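A minimal sketch of the eigenvalue-correction step described above: the sample covariance from a few snapshots is eigendecomposed, the small noise-subspace eigenvalues are raised by an exponential correction, and the corrected matrix is used in the usual LCMV/MVDR weight formula. The array model, correction rule and index value are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
rng = np.random.default_rng(0)

N, snapshots = 8, 10                               # few snapshots: R is poorly estimated
steer = lambda th: np.exp(1j * np.pi * np.arange(N) * np.sin(th))
a_des = steer(np.deg2rad(10))                      # desired look direction

X = (np.outer(a_des, rng.standard_normal(snapshots))                          # desired signal
     + 0.5 * np.outer(steer(np.deg2rad(-40)), rng.standard_normal(snapshots)) # interferer
     + 0.01 * (rng.standard_normal((N, snapshots))
               + 1j * rng.standard_normal((N, snapshots))))                   # noise
R = X @ X.conj().T / snapshots                     # sample covariance

w_vals, V = np.linalg.eigh(R)
w_vals = np.maximum(w_vals, 0.0)                   # guard against tiny negative values
alpha = 0.5                                        # hypothetical correction index
floor = w_vals.max() * 1e-3
w_corr = np.where(w_vals < floor, floor * (w_vals / floor) ** alpha, w_vals)   # raise small eigenvalues
R_corr = (V * w_corr) @ V.conj().T                 # rebuild the corrected covariance

w = np.linalg.solve(R_corr, a_des)
w /= a_des.conj() @ w                              # LCMV/MVDR weights, unit gain at look direction
print(round(float(abs(w.conj() @ a_des)), 3))      # distortionless constraint preserved
```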
788 Detecting HCC Tumor in Three Phasic CT Liver Images with Optimization of Neural Network
Authors: Mahdieh Khalilinezhad, Silvana Dellepiane, Gianni Vernazza
Abstract:
The aim of this work is to build a model based on tissue characterization that is able to discriminate pathological and non-pathological regions in three-phasic CT images. With our research, and based on feature selection in the different phases, we are trying to design a neural network system with an optimal number of neurons in the hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each region of interest (ROI), six distinct sets of texture features are extracted: first-order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet features, for a total of 270 texture features. When analyzing more phases, we show that the injection of liquid causes changes in the highly relevant features in each region. Our results demonstrate that, for detecting HCC tumors, phase 3 is the best one for most of the features that we apply to the classification algorithm. The detection rate between the pathological and healthy classes, according to our method, relates to the first-order histogram parameters, with an accuracy of 85% in phase 1, 95% in phase 2, and 95% in phase 3.
Keywords: Feature selection, Multi-phasic liver images, Neural network, Texture analysis.
787 Parking Space Detection and Trajectory Tracking Control for Vehicle Auto-Parking
Authors: Shiuh-Jer Huang, Yu-Sheng Hsu
Abstract:
An on-board available-parking-space detection system, parking trajectory planning and a tracking control mechanism are the key components of a vehicle backward auto-parking system. Firstly, a pair of ultrasonic sensors is installed on each side of the vehicle body surface to detect the relative distance between the ego-car and surrounding obstacles. The dimensions of a found empty space can be calculated based on the vehicle speed and the time history of the ultrasonic sensor readings. This result can be used for constructing the 2D vehicle environment map and judging the available parking type. Finally, the auto-parking controller executes the on-line optimal parking trajectory planning based on this 2D environment map, and monitors the real-time vehicle parking trajectory tracking control. This low-cost auto-parking system was tested on a model car.
Keywords: Vehicle auto-parking, parking space detection, parking path tracking, intelligent fuzzy controller.
786 Code-Aided Turbo Channel Estimation for OFDM Systems with NB-LDPC Codes
Authors: Ł. Januszkiewicz, G. Bacci, H. Gierszal, M. Luise
Abstract:
In this paper, channel estimation techniques are considered as support methods for OFDM transmission systems based on Non-Binary LDPC (Low-Density Parity-Check) codes. Standard frequency-domain pilot-aided LS (Least Squares) and LMMSE (Linear Minimum Mean Square Error) estimators are investigated. Furthermore, an iterative algorithm is proposed as a solution exploiting the NB-LDPC channel decoder to improve the performance of the LMMSE estimator. Simulation results for signals transmitted through fading mobile channels are presented to compare the performance of the proposed channel estimators.
Keywords: LDPC codes, LMMSE, OFDM, turbo channel estimation.
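A minimal sketch of the two pilot-aided estimators mentioned above for a single OFDM symbol, assuming the channel correlation matrix R_hh is known (as in the standard LMMSE derivation); the pilot count, SNR and 4-tap channel are illustrative, and the iterative decoder-aided refinement is not reproduced.

```python
import numpy as np
rng = np.random.default_rng(0)

Np, snr_db, L = 16, 15, 4                       # pilot subcarriers, SNR in dB, channel taps
snr = 10 ** (snr_db / 10)

taps = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
h = np.fft.fft(taps, Np)                        # true channel at the pilot subcarriers

x_p = np.exp(1j * np.pi / 2 * rng.integers(0, 4, Np))               # unit-modulus QPSK pilots
noise = (rng.standard_normal(Np) + 1j * rng.standard_normal(Np)) * np.sqrt(1 / (2 * snr))
y_p = h * x_p + noise

h_ls = y_p / x_p                                # least-squares (LS) estimate

F = np.fft.fft(np.eye(Np))[:, :L]               # DFT columns spanning the L taps
R_hh = (F @ F.conj().T) / L                     # E[h h^H] for i.i.d. taps of power 1/L
h_lmmse = R_hh @ np.linalg.solve(R_hh + np.eye(Np) / snr, h_ls)     # LMMSE smoothing of the LS estimate

print("LS    MSE:", round(float(np.mean(abs(h_ls - h) ** 2)), 4))
print("LMMSE MSE:", round(float(np.mean(abs(h_lmmse - h) ** 2)), 4))
```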
785 Significance of Splitting Method in Non-linear Grid system for the Solution of Navier-Stokes Equation
Abstract:
A solution of the unsteady Navier-Stokes equations by the splitting method in a physical orthogonal algebraic curvilinear coordinate system, also termed a 'non-linear grid system', is presented. The linear terms in the Navier-Stokes equations are solved by the Crank-Nicolson method, while the non-linear term is solved by the second-order Adams-Bashforth method. This work is meant to bring together the advantage of the splitting method, as a pressure-velocity solver of higher efficiency, with the advantage of using a non-linear grid system, which produces more accurate results with a comparable number of grid points relative to a Cartesian grid. The validation of the splitting method as a solution of the Navier-Stokes equations in a non-linear grid system is done by comparison with the benchmark results for lid-driven cavity flow by Ghia and some case studies, including the backward-facing step flow problem.
Keywords: Navier-Stokes, 'Non-linear grid system', Splitting method.
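A minimal illustration of the time-splitting idea described above, applied to the 1D viscous Burgers equation u_t + u*u_x = nu*u_xx (a simple stand-in for the Navier-Stokes equations, on a uniform periodic grid rather than a non-linear curvilinear grid): the linear diffusion term is advanced with Crank-Nicolson and the non-linear advection term with second-order Adams-Bashforth. Grid size, viscosity and time step are illustrative.

```python
import numpy as np

nx, nu, dt, steps = 128, 0.05, 1e-3, 2000
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)                                     # initial condition

def d1(v):                                        # central first derivative, periodic
    return (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)

# Crank-Nicolson matrices for the diffusion operator (periodic, dense for brevity)
D2 = (np.roll(np.eye(nx), -1, 1) - 2 * np.eye(nx) + np.roll(np.eye(nx), 1, 1)) / dx**2
A = np.eye(nx) - 0.5 * dt * nu * D2
B = np.eye(nx) + 0.5 * dt * nu * D2

N_prev = u * d1(u)                                # first step is effectively forward Euler
for _ in range(steps):
    N_curr = u * d1(u)                            # non-linear advection term
    rhs = B @ u - dt * (1.5 * N_curr - 0.5 * N_prev)   # AB2 for advection, CN for diffusion
    u = np.linalg.solve(A, rhs)
    N_prev = N_curr

print(round(float(u.max()), 4))                   # decayed peak amplitude
```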
784 Contour Estimation in Synthetic and Real Weld Defect Images based on Maximum Likelihood
Authors: M. Tridi, N. Nacereddine, N. Oucief
Abstract:
This paper describes a novel method for the automatic estimation of the contours of weld defects in radiography images. Generally, contour detection is the first operation applied in a visual recognition system. Our approach can be described as a region-based maximum likelihood formulation of parametric deformable contours. This formulation provides robustness against poor image quality, and allows simultaneous estimation of the contour parameters together with other parameters of the model. Implementation is performed by a deterministic iterative algorithm with minimal user intervention. Results testify to the very good performance of the approach, especially on synthetic weld defect images.
Keywords: Contour, Gaussian, likelihood, Rayleigh.
783 A Model of Market Segmentation for the Customers of Mellat Bank in Iran
Authors: Nader Gharibnavaz, Hossein Yazdi
Abstract:
If an organization like Mellat Bank wants to identify its customer market completely in order to reach its specified goals, it can segment the market so as to offer the right product package to the right segment. Our objective is to offer a segmentation model for the Iranian banking market from Mellat Bank's point of view. The methodology of this project combines "segmentation on the basis of four part-quality variables" and "segmentation on the basis of differences in means". The required data are gathered from e-systems and the researcher's personal observation. Finally, the research proposes that the organization, as a first step, form a four-dimensional matrix with 756 segments using four variables named value-based, behavioral, activity style, and activity level; as a second step, calculate the mean profit for every cell of the matrix at two distinct work levels (α1: normal condition and α2: high-pressure condition) and compare the segments by checking two conditions, namely (1) the homogeneity of every segment with its sub-segments and (2) the heterogeneity with other segments, so that the necessary segmentation process can be carried out. The last recommendation (explained further through an operational example and a feedback algorithm) is to test and update the model, because of the dynamic environment, technology, and banking system.
Keywords: Market segmentation model, banking system, Mellat Bank.
782 Grid Computing for the Bi-CGSTAB Applied to the Solution of the Modified Helmholtz Equation
Authors: E. N. Mathioudakis, E. P. Papadopoulou
Abstract:
The problem addressed herein is the efficient management of the Grid/Cluster-intense computation involved when the preconditioned Bi-CGSTAB Krylov method is employed for the iterative solution of the large and sparse linear system arising from the discretization of the Modified Helmholtz-Dirichlet problem by the Hermite Collocation method. Taking advantage of the Collocation matrix's red-black ordered structure, we organize the whole computation efficiently and map it onto a pipeline architecture with master-slave communication. Implementation, through MPI programming tools, is realized on a SUN V240 cluster, interconnected through 100 Mbps and 1 Gbps Ethernet networks, and its performance is presented by means of the speedup measurements included.
Keywords: Collocation, Preconditioned Bi-CGSTAB, MPI, Grid and DSM Systems.
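A minimal serial sketch of the core linear algebra described above: a modified Helmholtz operator (here a standard 5-point finite-difference stand-in rather than the Hermite collocation matrix) solved with preconditioned Bi-CGSTAB from SciPy. The ILU preconditioner, grid size and shift are illustrative choices, and the MPI/pipeline parallelization is not reproduced.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, k2 = 50, 10.0                                  # interior grid points per side, Helmholtz shift
h = 1.0 / (n + 1)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kronsum(T, T) / h**2 + k2 * sp.identity(n * n)).tocsc()   # (k^2 - Laplacian) u = f

f = np.ones(n * n)                                # right-hand side
ilu = spla.spilu(A)                               # incomplete-LU preconditioner
M = spla.LinearOperator(A.shape, matvec=ilu.solve)
u, info = spla.bicgstab(A, f, M=M)
print(info, float(np.linalg.norm(A @ u - f)))     # info == 0 means convergence
```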
781 The Existence and Uniqueness of Positive Solution for Nonlinear Fractional Differential Equation Boundary Value Problem
Authors: Chuanyun Gu, Shouming Zhong
Abstract:
In this paper, the existence and uniqueness of positive solutions for a nonlinear fractional differential equation boundary value problem are studied by means of a fixed point theorem of a sum operator. Our results can not only guarantee the existence and uniqueness of a positive solution, but can also be applied to construct an iterative scheme for approximating it. Finally, an example is given to illustrate the main result.
Keywords: Fractional differential equation, Boundary value problem, Positive solution, Existence and uniqueness, Fixed point theorem of a sum operator