Search results for: symmetric arrow-head matrix
726 Physico-Mechanical Properties of Chemically Modified Sisal Fibre Reinforced Unsaturated Polyester Composites
Authors: A. A. Salisu, M. Y. Yakasai, K. M. Aujara
Abstract:
Sisal leaves were subjected to an enzymatic retting method to extract the sisal fibre. A portion of the fibre was pretreated with alkali (NaOH) and further treated with benzoyl chloride and silane treatment reagents. Both treated and untreated sisal fibres were used to fabricate composites by the hand lay-up technique with unsaturated polyester resin. Tensile, flexural, water absorption, density, thickness swelling and chemical resistance tests were conducted and evaluated on the composites. All measured parameters showed an increase for the treated fibre composites compared with the untreated ones. FT-IR spectra confirmed the inclusion of benzoyl and silane groups on the fibre surface. Scanning electron microscopy (SEM) showed differences in the morphology of the treated and untreated fibres. Chemical modification was found to improve adhesion of the fibre to the matrix, as well as the physico-mechanical properties of the composites.
Keywords: Chemical resistance, density test, sisal fibre, polymer matrix, thickness swelling.
725 On Symmetries and Exact Solutions of Einstein Vacuum Equations for Axially Symmetric Gravitational Fields
Authors: Nisha Goyal, R.K. Gupta
Abstract:
The Einstein vacuum equations, a system of nonlinear partial differential equations (PDEs), are derived from the Weyl metric by using the relation between the Einstein tensor and the metric tensor. The symmetries of the Einstein vacuum equations for static axisymmetric gravitational fields are obtained using the Lie classical method. We have examined the optimal system of vector fields, which is then used to reduce the nonlinear PDEs to nonlinear ordinary differential equations (ODEs). Some exact solutions of the Einstein vacuum equations in general relativity are also obtained.
Keywords: Gravitational fields, Lie classical method, exact solutions.
724 FPGA Implementation of the "PYRAMIDS" Block Cipher
Authors: A. AlKalbany, H. Al hassan, M. Saeb
Abstract:
The "PYRAMIDS" block cipher is a symmetric encryption algorithm with a block length of 64, 128 or 256 bits that accepts a variable key length of 128, 192 or 256 bits. The algorithm is an iterated cipher consisting of repeated applications of a simple round transformation, with different operations and a different sequence in each round. The algorithm was previously implemented in software in Cµ code. In this paper, a hardware implementation of the algorithm using Field Programmable Gate Arrays (FPGA) is presented. We discuss the algorithm, the implemented micro-architecture, and the simulation and implementation results. Moreover, we present a detailed comparison with other implemented standard algorithms. In addition, we include the floor plan as well as the circuit diagrams of the various micro-architecture modules.
Keywords: FPGA, VHDL, micro-architecture, encryption, cryptography, algorithm, data communication security.
723 Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)
Authors: Mingren Shi, Michael Renton
Abstract:
There is a world-wide need for the development of sustainable management strategies to control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe and inexpensive way to weigh the merits of various management options. However, the usefulness of simulation models relies on the accurate estimation of important model parameters, such as mortality. Concentration and time of exposure are both important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes of R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, thus motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of our two-locus model, were also available. Hence it became possible to include the mortalities of the different genotypes explicitly in the model. In this paper we describe how we used two generalized linear models (GLM), probit and logistic, to fit the available experimental data sets. We used a direct algebraic approach, the generalized inverse matrix technique, rather than traditional maximum likelihood estimation, to estimate the model parameters. The results show that both probit and logistic models fit the data sets well, but the former is better in terms of smaller least-squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to that of maximum likelihood estimation, but is less time consuming and less computationally demanding.
Keywords: mortality estimation, probit models, logistic model, generalized inverse matrix approach, pest control simulation
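As an illustration of the direct algebraic route described above, the following Python sketch fits a probit dose-mortality line by transforming observed kill proportions and solving the resulting linear system with a Moore-Penrose pseudoinverse instead of maximum likelihood. The dose levels, mortality proportions and single-predictor design are hypothetical placeholders, not the authors' data or exact model.

    import numpy as np
    from scipy.stats import norm

    # hypothetical dose-mortality data: log10(concentration x time) vs. proportion killed
    log_ct = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
    p_kill = np.array([0.08, 0.25, 0.55, 0.82, 0.97])

    # probit transform of the observed proportions (clipped to avoid infinities)
    y = norm.ppf(np.clip(p_kill, 1e-6, 1 - 1e-6))

    # design matrix [1, log CT]; solve with the generalized (Moore-Penrose) inverse
    X = np.column_stack([np.ones_like(log_ct), log_ct])
    beta = np.linalg.pinv(X) @ y               # intercept and slope
    pred = norm.cdf(X @ beta)                  # fitted mortality curve
    print(beta, np.sum((pred - p_kill) ** 2))  # least-squares error used to compare links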
722 Finite-Horizon Tracking Control for Repetitive Systems with Uncertain Initial Conditions
Authors: Sung Wook Yun, Yun Jong Choi, Kyong-min Lee, Poogyeon Park*
Abstract:
Repetitive systems are systems that repeatedly perform a simple task in a fixed pattern and are widespread in industrial fields. Hence, many researchers have been interested in such systems, especially in the field of iterative learning control (ILC). In this paper, we propose a finite-horizon tracking control scheme for linear time-varying repetitive systems with uncertain initial conditions. The scheme is derived both analytically and numerically for state-feedback systems, and only numerically for output-feedback systems. Then, it is extended to stable systems with input constraints. All numerical schemes are developed in the form of linear matrix inequalities (LMIs). A distinguishing feature of the proposed scheme over existing iterative learning control is that it guarantees the tracking performance exactly, even under uncertain initial conditions. The simulation results demonstrate the good performance of the proposed scheme.
Keywords: Finite time horizon, linear matrix inequality (LMI), repetitive system, uncertain initial condition.
721 Development and Characterization of a Polymer Composite Electrolyte to Be Used in Proton Exchange Membranes Fuel Cells
Authors: B. A. Berns, V. Romanovicz, M. M. de Camargo Forte, D. E. O. S. Carpenter
Abstract:
Proton Exchange Membranes (PEM) are widely studied because they operate at low temperatures and are suitable for mobile applications. However, their operation has some deficiencies, mainly when ethanol is used as the hydrogen source, that require attention. Therefore, this research aimed to develop Nafion® composite membranes by mixing the clay minerals kaolin and halloysite into the polymer matrix in order to improve ethanol retention and, at the same time, to keep the system's protonic conductivity. The modified Nafion/kaolin and Nafion/halloysite composite membranes were prepared in weight proportions of 0.5, 1.0 and 1.5. The membranes obtained were characterized with respect to ethanol permeability, protonic conductivity and water absorption. The composite morphology and structure were characterized by SEM and EDX, and the thermal behavior was determined by TGA and DSC. The analysis of the results shows an ethanol permeability reduction ranging from 48% to 63%. However, the protonic conductivity results are lower than those of pure Nafion®. As to the thermal behavior, the Nafion® composite membranes were stable up to a temperature of 325 ºC.
Keywords: Polymer-matrix composites (PMCs), Thermal properties, Nanoclay, Differential scanning calorimetry.
720 Micromechanics of Stress Transfer across the Interface Fiber-Matrix Bonding
Authors: Fatiha Teklal, Bachir Kacimi, Arezki Djebbar
Abstract:
The study and application of composite materials are a truly interdisciplinary endeavor that has been enriched by contributions from chemistry, physics, materials science, mechanics and manufacturing engineering. The understanding of the interface (or interphase) in composites is the central point of this interdisciplinary effort. From the early development of composite materials of various natures, the optimization of the interface has been of major importance. Even more importantly, the ideas linking the properties of composites to the interface structure are still emerging. In our study, we need a direct characterization of the interface; the micromechanical tests we are considering seem to meet this objective, and we chose to use two complementary tests simultaneously: the microindentation test, which can be applied to real composites, and the drop test, preferred to the pull-out test because of the theoretical possibility of studying systems with high adhesion (which is a priori the case with our systems). These two tests are complementary because of the principle of the model specimen used for both: the first is a "compression indentation" test, while in the second (the drop test) the fiber is subjected to tensile stress. Comparing the results obtained by the two methods can therefore be rewarding.
Keywords: Interface, micromechanics, pull-out, composite, fiber, matrix.
719 A Hamiltonian Decomposition of 5-star
Authors: Walter Hussak, Heiko Schröder
Abstract:
Star graphs are Cayley graphs of symmetric groups of permutations, with transpositions as the generating sets. A star graph is preferred to a hypercube as an interconnection network topology for its ability to connect a greater number of nodes with lower degree. However, an attractive property of the hypercube is that it has a Hamiltonian decomposition, i.e. its edges can be partitioned into disjoint Hamiltonian cycles, and therefore a simple routing can be found in the case of an edge failure. The existence of Hamiltonian cycles in Cayley graphs has been known for some time. So far, there are no published results on the much stronger condition of the existence of Hamiltonian decompositions. In this paper, we give a construction of a Hamiltonian decomposition of the star graph 5-star, which has degree 4, by defining an automorphism for 5-star and a Hamiltonian cycle which is edge-disjoint with its image under the automorphism.
Keywords: interconnection networks, paths and cycles, graphs and groups.
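For readers unfamiliar with the object, the sketch below constructs the 5-star Cayley graph in Python using only the standard library: the vertices are the 120 permutations of five symbols, and each vertex is adjacent to the four permutations obtained by transposing the first symbol with one of the others. This is only the graph construction, not the Hamiltonian decomposition given in the paper.

    from itertools import permutations

    def star_graph(n=5):
        # vertices: permutations of 1..n; edges: transpose position 0 with position i
        verts = list(permutations(range(1, n + 1)))
        adj = {v: [] for v in verts}
        for v in verts:
            for i in range(1, n):
                w = list(v)
                w[0], w[i] = w[i], w[0]
                adj[v].append(tuple(w))
        return adj

    g = star_graph(5)
    print(len(g), {len(nb) for nb in g.values()})   # 120 vertices, all of degree 4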
718 Adaptive Kernel Principal Analysis for Online Feature Extraction
Authors: Mingtao Ding, Zheng Tian, Haixia Xu
Abstract:
Their batch nature limits standard kernel principal component analysis (KPCA) methods in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for online extraction of the kernel principal components (KPC). The contribution of this paper may be divided into two parts. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and indicates the KPC variation caused by the new data. The proposed method not only alleviates the sub-optimality of the KPCA method for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in terms of both computational speed and approximation accuracy.
Keywords: adaptive method, kernel principal component analysis, online extraction, recursive algorithm
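For context, the batch KPCA computation that the recursive formulation is designed to avoid can be sketched as follows; this is the standard textbook procedure (RBF kernel width and the toy data are assumed), not the authors' recursive update.

    import numpy as np

    def batch_kpca(X, n_components, gamma=1.0):
        # Gaussian (RBF) kernel matrix
        sq = np.sum(X ** 2, axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
        # centre the kernel matrix in feature space
        n = K.shape[0]
        one = np.ones((n, n)) / n
        Kc = K - one @ K - K @ one + one @ K @ one
        # eigen-decomposition; the recursive method updates this step as new samples arrive
        vals, vecs = np.linalg.eigh(Kc)
        idx = np.argsort(vals)[::-1][:n_components]
        return vals[idx], vecs[:, idx]

    X = np.random.default_rng(0).standard_normal((50, 3))
    vals, alphas = batch_kpca(X, n_components=2)
    print(vals)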
717 A Symbol by Symbol Clustering Based Blind Equalizer
Authors: Kristina Georgoulakis
Abstract:
A new blind symbol-by-symbol equalizer is proposed. The operation of the proposed equalizer is based on the geometric properties of the two-dimensional data constellation. An unsupervised clustering technique is used to locate the clusters formed by the received data. The symmetry properties of the cluster labels are subsequently utilized in order to label the clusters. Following this step, the received data are compared to the clusters and decisions are made on a symbol-by-symbol basis, by assigning to each data point the label of the nearest cluster. The operation of the equalizer is investigated in both linear and nonlinear channels. The performance of the proposed equalizer is compared to that of a CMA-based blind equalizer.
Keywords: Blind equalization, channel equalization, cluster-based equalizers.
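A minimal sketch of the final decision stage: assuming the cluster centres have already been located by an unsupervised routine and labelled, each received sample is assigned the label of its nearest cluster. The toy constellation, channel offsets and labels below are hypothetical.

    import numpy as np

    def nearest_cluster_decisions(received, centres, labels):
        # received: complex samples viewed as 2-D points; centres: located cluster centres
        pts = np.column_stack([received.real, received.imag])
        ctr = np.column_stack([centres.real, centres.imag])
        d = np.linalg.norm(pts[:, None, :] - ctr[None, :, :], axis=2)
        return labels[np.argmin(d, axis=1)]      # symbol-by-symbol decision

    # hypothetical 4-level clusters as distorted by the channel
    centres = np.array([-1.5 - 0.2j, -0.5 + 0.1j, 0.5 - 0.1j, 1.5 + 0.2j])
    labels = np.array([0, 1, 2, 3])
    rx = np.array([0.48 - 0.05j, -1.4 - 0.3j, 1.6 + 0.1j])
    print(nearest_cluster_decisions(rx, centres, labels))   # -> [2 0 3]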
716 Robust Detection of R-Wave Using Wavelet Technique
Authors: Awadhesh Pachauri, Manabendra Bhuyan
Abstract:
The electrocardiogram (ECG) is considered to be the backbone of cardiology. The ECG is composed of P, QRS and T waves, and information related to cardiac diseases can be extracted from the intervals and amplitudes of these waves. The first step in extracting ECG features is the accurate detection of the R peaks in the QRS complex. We have developed a robust R-wave detector using wavelets. The wavelets used for detection are the Daubechies and Symmetric wavelets. The method does not require any preprocessing; it only needs the correct ECG recordings to perform the detection. The data have been collected from the MIT-BIH arrhythmia database and the signals from Lead II have been analyzed. MATLAB 7.0 has been used to develop the algorithm. The ECG signal under test is decomposed to the required level using the selected wavelet, and the detail coefficient d4 is selected based on energy, frequency and cross-correlation analysis of the decomposition structure of the ECG signal. The robustness of the method is apparent from the obtained results.
Keywords: ECG, P-QRS-T waves, wavelet transform, hard thresholding, R-wave detection.
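A rough outline of such a pipeline is sketched below with PyWavelets and SciPy; the 'db4' wavelet, decomposition level, peak threshold and the synthetic spike-train signal are illustrative assumptions, not the exact settings or data of the paper.

    import numpy as np
    import pywt
    from scipy.signal import find_peaks

    def detect_r_peaks(ecg, fs=360, wavelet="db4", level=4):
        # decompose and keep only the d4 detail band, zeroing the other coefficients
        coeffs = pywt.wavedec(ecg, wavelet, level=level)
        kept = [np.zeros_like(c) for c in coeffs]
        kept[1] = coeffs[1]                        # detail coefficients at level 4 (d4)
        d4_signal = pywt.waverec(kept, wavelet)[: len(ecg)]
        # peaks of the squared reconstruction, at least 0.25 s apart
        env = d4_signal ** 2
        peaks, _ = find_peaks(env, height=0.3 * env.max(), distance=int(0.25 * fs))
        return peaks

    fs = 360
    ecg = 0.05 * np.random.default_rng(0).standard_normal(5 * fs)
    ecg[::fs] += 1.0                               # crude 1 Hz spike train standing in for R peaks
    print(detect_r_peaks(ecg, fs))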
715 Machine Learning Approach for Identifying Dementia from MRI Images
Authors: S. K. Aruna, S. Chitra
Abstract:
This research paper presents a framework for classifying Magnetic Resonance Imaging (MRI) images for dementia. Dementia, an age-related cognitive decline, is indicated by degeneration of cortical and sub-cortical structures. Characterizing the morphological changes helps understand disease development and contributes to early prediction and prevention of the disease. Modelling that captures the brain's structural variability and remains valid for disease classification and interpretation is very challenging. Features are extracted using a Gabor filter with orientations of 0, 30, 60 and 90 degrees and the Gray Level Co-occurrence Matrix (GLCM). It is proposed to normalize and fuse the features. Independent Component Analysis (ICA) selects the features. A Support Vector Machine (SVM) classifier with different kernels is evaluated for its efficiency in classifying dementia. This study evaluates the presented framework using MRI images from the OASIS dataset for identifying dementia. Results showed that the proposed feature-fusion classifier achieves higher classification accuracy.
Keywords: Magnetic resonance imaging, dementia, Gabor filter, gray level co-occurrence matrix, support vector machine.
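A compact sketch of the GLCM-ICA-SVM portion of such a pipeline, assuming scikit-image and scikit-learn (the functions are named graycomatrix/graycoprops in recent scikit-image releases, greycomatrix/greycoprops in older ones); the random images, label vector, number of ICA components and kernel choice are placeholders, not the paper's settings.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.decomposition import FastICA
    from sklearn.svm import SVC

    def glcm_features(img_u8):
        # co-occurrence features at the four orientations used in the paper
        angles = np.deg2rad([0, 30, 60, 90])
        glcm = graycomatrix(img_u8, distances=[1], angles=angles,
                            levels=256, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    rng = np.random.default_rng(0)
    X_img = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]  # placeholder ROIs
    y = rng.integers(0, 2, 20)                                                   # placeholder labels

    features = np.vstack([glcm_features(img) for img in X_img])
    reduced = FastICA(n_components=8, random_state=0).fit_transform(features)
    clf = SVC(kernel="rbf").fit(reduced, y)
    print(clf.score(reduced, y))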
714 Comparative Study on Recent Integer DCTs
Authors: Sakol Udomsiri, Masahiro Iwahashi
Abstract:
This paper presents a comparative study of recent integer DCTs and a new method to construct a low-sensitivity structure of the integer DCT for colored input signals. The method uses the sensitivity of the multiplier coefficients to finite word length as an indicator of how word-length truncation affects the quality of the output signal. The sensitivity is also theoretically evaluated as a function of the auto-correlation and covariance matrix of the input signal. The structure of the integer DCT algorithm is optimized by combining lower-sensitivity lifting structure types of IRT, and is evaluated via the sensitivity of the multiplier coefficients to finite word length, expressed as a function of the covariance matrix of the input signal. The effectiveness of the optimum combination of IRT in the integer DCT algorithm is confirmed by the quality improvement over the existing case. As a result, the optimum combination of IRT in each integer DCT algorithm clearly improves the output signal quality while remaining compatible with the existing one.
Keywords: DCT, sensitivity, lossless, word length.
713 Promising Immobilization of Cadmium and Lead inside Ca-rich Glass-ceramics
Authors: A. Karnis, L. Gautron
Abstract:
Considering the toxicity of heavy metals and their accumulation in domestic wastes, the immobilization of lead and cadmium inside glass-ceramics is envisaged. In this work we particularly focused on calcium-rich phases embedded in a glassy matrix. Glass-ceramics were synthesized from glasses doped with 12 wt% and 16 wt% of PbO or CdO. They were observed and analyzed by Electron MicroProbe Analysis (EMPA) and Analytical Scanning Electron Microscopy (ASEM). Structural characterization of the samples was performed by powder X-ray diffraction. Diopside crystals of CaMgSi2O6 composition are shown to incorporate significant amounts of cadmium (up to 9 wt% of CdO). Two new crystalline phases are observed with very high Cd or Pb contents: about 40 wt% CdO for the cadmium-rich phase and nearly 60 wt% PbO for the lead-rich phase. We present a complete chemical and structural characterization of these phases. They represent a promising route for the immobilization of toxic elements like Cd or Pb, since glass-ceramics are known to offer a "double barrier" protection (metal-rich crystals embedded in a glass matrix) against metal release into the environment.
Keywords: Cadmium, calcium-rich phases, diopside, domestic wastes, fly ashes, glass-ceramics, lead, Municipal Solid Waste Incineration.
712 Performance Comparison of Particle Swarm Optimization with Traditional Clustering Algorithms used in Self-Organizing Map
Authors: Anurag Sharma, Christian W. Omlin
Abstract:
The self-organizing map (SOM) is a well-known data reduction technique used in data mining. It can reveal structure in data sets through data visualization that is otherwise hard to detect from the raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters of code vectors found by a SOM, but they generally do not take into account the distribution of the code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of an adaptive heuristic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from the SOM. The application of our method to several standard data sets demonstrates its feasibility. The PSO algorithm utilizes the so-called U-matrix of the SOM to determine cluster boundaries; the results of this novel automatic method compare very favorably to boundary detection through the traditional algorithms, namely k-means and a hierarchical approach, which are normally used to interpret the output of a SOM.
Keywords: cluster boundaries, clustering, code vectors, data mining, particle swarm optimization, self-organizing maps, U-matrix.
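For comparison, the traditional SOM-plus-k-means baseline mentioned above can be sketched as follows, assuming the third-party MiniSom package and scikit-learn; the grid size, iteration count and the Iris data are illustrative. The distance_map() call returns the U-matrix that the proposed PSO method operates on.

    import numpy as np
    from minisom import MiniSom                 # assumed third-party SOM implementation
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris

    data = load_iris().data
    data = (data - data.mean(axis=0)) / data.std(axis=0)

    som = MiniSom(8, 8, data.shape[1], sigma=1.5, learning_rate=0.5, random_seed=0)
    som.train_random(data, 2000)

    code_vectors = som.get_weights().reshape(-1, data.shape[1])   # 64 code vectors
    u_matrix = som.distance_map()                                 # U-matrix used by the PSO method

    # traditional baseline: k-means on the code vectors, then map each sample to its BMU's cluster
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(code_vectors)
    bmu_index = [np.ravel_multi_index(som.winner(x), (8, 8)) for x in data]
    sample_clusters = km.labels_[bmu_index]
    print(np.bincount(sample_clusters))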
711 Analysis of Distribution of Thrust, Torque and Efficiency of a Constant Chord, Constant Pitch C.R.P. Fan by H.E.S. Method
Authors: Morteza Abbaszadeh, Parvin Nikpoorparizi, Mina Shahrooz
Abstract:
For the first time since 1940 and the presentation of Theodorsen's theory, the distribution of thrust, torque and efficiency along the blade of a counter-rotating propeller axial fan was studied with a novel method in this research. A constant chord, constant pitch symmetric fan was investigated with the Reynolds stress turbulence method in this project, and the H.E.S. method was utilized to obtain distribution profiles from the C.F.D. test results. The C.F.D. test results were validated against estimates from Playlic's analytical method. The final results proved the ability of the H.E.S. method to obtain distribution profiles from C.F.D. test results and revealed interesting facts about the effects of solidity and the differences between the distributions in the front and rear sections.
Keywords: C.F.D. test, counter-rotating propeller, H.E.S. method, R.S.M. method.
710 Quick Sequential Search Algorithm Used to Decode High-Frequency Matrices
Authors: Mohammed M. Siddeq, Mohammed H. Rasheed, Omar M. Salih, Marcos A. Rodrigues
Abstract:
This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm. The algorithm is applied to high-frequency coefficients for compression/encoding. It starts by converting every three coefficients to a single value; this is accomplished using three different keys. The decoding/decompression uses a search method called the QSS (Quick Sequential Search) decoding algorithm, presented in this research and based on sequential search, to recover the exact coefficients. In the next step, the decoded data are saved in an auxiliary array. The basic idea behind the auxiliary array is to save all possible decoded coefficients; this is because another algorithm, such as a conventional sequential search, could retrieve the encoded/compressed data independently of the proposed algorithm. The experimental results showed that our proposed decoding algorithm retrieves the original data faster than conventional sequential search algorithms.
Keywords: Matrix Minimization Algorithm, Decoding Sequential Search Algorithm, image compression, Discrete Cosine Transform, Discrete Wavelet Transform.
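The encode/decode idea can be illustrated with a hedged sketch: three bounded coefficients are folded into one value using three keys, and a quick sequential search over the bounded range recovers the triple. The keys and the coefficient bound below are invented for illustration only; the actual Matrix Minimization keys are derived differently.

    from itertools import product

    # hypothetical integer keys, chosen so that every triple in the bounded
    # coefficient range maps to a distinct value
    K1, K2, K3 = 1, 11, 121
    LIMIT = 5                                     # assumed bound on high-frequency coefficients

    def encode(a, b, c):
        return K1 * a + K2 * b + K3 * c           # three coefficients -> one value

    def qss_decode(value):
        # quick sequential search over all candidate triples within the bound
        for a, b, c in product(range(-LIMIT, LIMIT + 1), repeat=3):
            if encode(a, b, c) == value:
                return (a, b, c)
        return None

    print(qss_decode(encode(3, -2, 1)))           # -> (3, -2, 1)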
709 Study Interaction between Tin Dioxide Nanowhiskers and Ethanol Molecules in Gas Phase: Monte Carlo (MC) and Langevin Dynamics (LD) Simulation
Authors: L. Mahdavian, M. Raouf
Abstract:
Three-dimensional nanostructured materials have attracted the attention of many researchers because of the possibility of applying them in near-future devices for sensing, catalysis and energy-related applications. Tin dioxide is the most used material for gas sensing because its three-dimensional nanostructures and properties are related to the large surface exposed to gas adsorption. We propose the use of branched SnO2 nanowhiskers in interaction with ethanol. All Sn atoms are symmetric. The total energy, potential energy and kinetic energy were calculated for the interaction between SnO2 and ethanol at different distances and temperatures. The calculations were carried out using Langevin dynamics and Monte Carlo simulation. The total energy increased with the addition of ethanol molecules and with temperature, so the interactions between them are endothermic.
Keywords: Tin dioxide, nanowhisker, ethanol, Langevin dynamics and Monte Carlo simulation.
708 Blind Channel Estimation for Frequency Hopping System Using Subspace Based Method
Authors: M. M. Qasaymeh, M. A. Khodeir
Abstract:
Subspace channel estimation methods have been studied widely, where the subspace of the covariance matrix is decomposed to separate the signal subspace from the noise subspace. The decomposition is normally done using either the eigenvalue decomposition (EVD) or the singular value decomposition (SVD) of the auto-correlation matrix (ACM). However, the subspace decomposition process is computationally expensive. This paper considers the estimation of the multipath slow frequency hopping (FH) channel using a noise-subspace-based method. In particular, an efficient method is proposed to estimate the multipath time delays by applying the multiple signal classification (MUSIC) algorithm based on the null space extracted by the rank-revealing LU (RRLU) factorization. The RRLU provides precise information about the numerical null space and the rank, which are important tools in linear algebra. The simulation results demonstrate the effectiveness of the proposed method, which approximately halves the computational complexity compared with RRQR-based methods while keeping the same performance.
Keywords: Time delay estimation, RRLU, RRQR, MUSIC, LS-ESPRIT, frequency hopping.
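To show how a noise subspace feeds the MUSIC delay estimator, the sketch below uses the standard eigenvalue decomposition of the sample auto-correlation matrix; the paper's contribution is to obtain the null space from an RRLU factorization instead. The frequency spacing, multipath delays and noise level are assumed values.

    import numpy as np
    from scipy.signal import find_peaks

    rng = np.random.default_rng(1)
    M, N = 16, 200                                    # hop frequencies (bins), snapshots
    freqs = np.arange(M) * 1.0e6                      # assumed 1 MHz frequency spacing
    true_delays = np.array([0.15e-6, 0.62e-6])        # two hypothetical multipath delays

    def steer(tau):
        return np.exp(-2j * np.pi * freqs * tau)      # delay signature across the hop frequencies

    A = np.column_stack([steer(t) for t in true_delays])
    S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
    X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

    R = X @ X.conj().T / N                            # sample auto-correlation matrix
    vals, vecs = np.linalg.eigh(R)                    # standard EVD; the paper uses RRLU instead
    En = vecs[:, :-2]                                 # noise subspace (all but the two largest eigenvalues)

    taus = np.linspace(0, 1e-6, 400)
    p = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(t)) ** 2 for t in taus])
    peaks, _ = find_peaks(p)
    print(np.sort(taus[peaks[np.argsort(p[peaks])[-2:]]]))   # estimates close to the true delays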
707 Dimension Free Rigid Point Set Registration in Linear Time
Authors: Jianqin Qu
Abstract:
This paper proposes a rigid point set matching algorithm in arbitrary dimensions based on the idea of symmetric covariant functions. A group of functions of the points in the set are formulated using rigid invariants. Each of these functions computes a pair of correspondences from the given point set. The computed correspondences are then used to recover the unknown rigid transform parameters. Each computed point can be geometrically interpreted as a weighted mean center of the point set. The algorithm is compact, fast, and dimension-free, without any optimization process. It either computes the desired transform for noiseless data in linear time, or fails quickly in exceptional cases. Experimental results for synthetic data and 2D/3D real data are provided, which demonstrate potential applications of the algorithm to a wide range of problems.
Keywords: Covariant point, point matching, dimension free, rigid registration.
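The paper's symmetric covariant functions are not reproduced here, but the final recovery step that any computed correspondences feed into can be sketched with the standard SVD-based (Kabsch/Procrustes) solve; the synthetic 3-D points and transform below are assumed for illustration.

    import numpy as np

    def rigid_fit(P, Q):
        # least-squares rotation R and translation t with Q ~ P @ R.T + t (Kabsch / Procrustes)
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        D = np.eye(P.shape[1])
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ D @ U.T
        return R, cq - R @ cp

    rng = np.random.default_rng(0)
    P = rng.standard_normal((50, 3))
    theta = 0.4
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
    R_est, t_est = rigid_fit(P, Q)
    print(np.allclose(R_est, R_true), t_est)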
706 Interference Reduction Technique in Multistage Multiuser Detector for DS-CDMA System
Authors: Lokesh Tharani, R.P.Yadav
Abstract:
This paper presents results on an interference reduction technique in a multistage multiuser detector for an asynchronous DS-CDMA system. To meet the real-time requirements of asynchronous multiuser detection, a bit-streaming, cascade architecture is used. Asynchronous multiuser detection involves block-based computations and matrix inversions. The paper covers iterative suboptimal schemes that have been studied to decrease the computational complexity, eliminate the need for matrix inversions, decrease the execution time, reduce the memory requirements, and use a joint estimation and detection process that gives better performance than independent parameter estimation. The stages of the iteration are cascaded and the bits are processed in a streaming fashion. The simulation has been carried out for an asynchronous DS-CDMA system by varying one parameter, the number of users. The simulation results show that the system gives the optimum bit error rate (BER) at the third stage for 15 users.
Keywords: Multi-user detection (MUD), multiple access interference (MAI), near-far effect, decision feedback detector, successive interference cancellation (SIC) detector, parallel interference cancellation (PIC) detector.
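The multistage parallel interference cancellation idea referred to above can be sketched for a toy synchronous DS-CDMA system (the asynchronous, streaming case treated in the paper is more involved); spreading gain, user count and noise level are assumed.

    import numpy as np

    rng = np.random.default_rng(0)
    K, N = 4, 16                                  # users, spreading gain (toy synchronous example)
    S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # signature sequences
    b = rng.choice([-1.0, 1.0], size=K)           # transmitted bits
    r = S @ b + 0.2 * rng.standard_normal(N)      # received chip vector

    y = S.T @ r                                   # matched-filter (conventional) outputs
    b_hat = np.sign(y)
    R = S.T @ S                                   # cross-correlation matrix of the signatures
    for stage in range(3):                        # multistage parallel interference cancellation
        mai = (R - np.diag(np.diag(R))) @ b_hat   # estimated multiple-access interference
        b_hat = np.sign(y - mai)
    print(b_hat == b)                             # ideally all True after the final stage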
705 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution and a low sidelobe level in order to form point-to-point interference in the concentrated area. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shaped beam with more concentrated energy, and its resolution and sidelobe-level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem, which leads to estimation errors in the covariance matrix that cause beam distortion, so that the output pattern cannot form a dot-shaped beam; it also suffers from main-lobe deviation and a high sidelobe level in the limited-snapshot case. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference-subspace and noise-subspace eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing the divergence of the small eigenvalues and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shaped beam with limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and achieve better performance.
Keywords: Multi-carrier frequency diverse array, adaptive beamforming, correction index, limited snapshot, robust.
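A hedged sketch of the eigenvalue-correction step on a conventional uniform linear array (the multi-carrier FDA steering model of the paper is not reproduced): the sample covariance is eigen-decomposed, the small noise-subspace eigenvalues are exponentially compressed with an assumed correction index, and distortionless weights are formed from the corrected matrix. The particular correction formula below is one plausible form, not necessarily the authors'.

    import numpy as np

    rng = np.random.default_rng(0)
    M, snapshots = 10, 20                             # array elements, deliberately few snapshots
    d = 0.5                                           # element spacing in wavelengths

    def steer(theta_deg):
        return np.exp(2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

    # desired signal at 0 deg, strong interferer at 40 deg, white noise (all amplitudes assumed)
    X = (np.outer(steer(0.0), rng.standard_normal(snapshots))
         + 5.0 * np.outer(steer(40.0), rng.standard_normal(snapshots))
         + 0.5 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))))

    R = X @ X.conj().T / snapshots                    # poorly estimated covariance (limited snapshots)
    vals, vecs = np.linalg.eigh(R)                    # ascending eigenvalues

    # exponential correction of the small (noise-subspace) eigenvalues; alpha is an assumed correction index
    alpha, n_src = 0.5, 2
    noise = vals[:-n_src]
    corrected = noise ** alpha
    corrected *= noise.mean() / corrected.mean()      # compress their spread, keep the average noise power
    vals_c = np.concatenate([corrected, vals[-n_src:]])
    R_c = (vecs * vals_c) @ vecs.conj().T

    w = np.linalg.solve(R_c, steer(0.0))
    w /= steer(0.0).conj() @ w                        # distortionless (LCMV/MVDR-type) constraint
    print(np.abs(w.conj() @ steer(40.0)))             # interferer response (should be strongly attenuated)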
704 Comparative Analysis of Vibration between Laminated Composite Plates with and without Holes under Compressive Loads
Authors: Bahi-Eddine Lahouel, Mohamed Guenfoud
Abstract:
In this study, a numerical vibration analysis was carried out on symmetric angle-ply laminated composite plates with and without a square hole when subjected to compressive loads. A buckling analysis was also performed to determine the buckling load of the laminated plates. For each fibre orientation, the compressive load was taken equal to 50% of the corresponding buckling load. In the analysis, the finite element method (FEM) was applied to perform parametric studies, and the effects of the degree of orthotropy and the stacking sequence on the fundamental frequencies and buckling loads are discussed. The results show that the presence of a constant compressive load tends to reduce the natural frequencies uniformly for materials with a low degree of orthotropy. However, this reduction becomes non-uniform for materials with a higher degree of orthotropy.
Keywords: Vibration, buckling, cutout, laminated composite, FEM.
703 Measurement of Steady Streaming from an Oscillating Bubble Using Particle Image Velocimetry
Authors: Yongseok Kwon, Woowon Jeong, Eunjin Cho, Sangkug Chung, Kyehan Rhee
Abstract:
Steady streaming flow fields induced by a 500 µm bubble oscillating at 12 kHz were measured using microscopic particle image velocimetry (PIV). The accuracy of velocity measurement with the micro-PIV system was checked by comparing the measured velocity fields with the theoretical velocity profiles of fully developed laminar flow. The steady streaming flow velocities were measured in the sagittal plane of a bubble attached to the wall. The measured velocity fields showed an upward jet flow with two symmetric counter-rotating vortices, and the maximum streaming velocity was about 12 mm/s, which is within the range of velocities measured by other researchers. The measured streamlines were compared with the analytical solution and also showed reasonable agreement.
Keywords: Oscillating bubble, particle image velocimetry, microstreaming.
702 Detecting HCC Tumor in Three Phasic CT Liver Images with Optimization of Neural Network
Authors: Mahdieh Khalilinezhad, Silvana Dellepiane, Gianni Vernazza
Abstract:
The aim of this work is to build a model based on tissue characterization that is able to discriminate pathological from non-pathological regions in three-phasic CT images. Based on feature selection in the different phases, we design a neural network system with an optimal number of neurons in the hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each region of interest (ROI), six distinct sets of texture features are extracted: first-order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet features, for a total of 270 texture features. When analyzing more phases, we show that the injection of the liquid causes changes in the most relevant features in each region. Our results demonstrate that for detecting HCC tumors, phase 3 is the best for most of the features that we feed to the classification algorithm. According to our method, the detection accuracy between the pathological and healthy classes using the first-order histogram parameters is 85% in phase 1, 95% in phase 2, and 95% in phase 3.
Keywords: Feature selection, Multi-phasic liver images, Neural network, Texture analysis.
701 A Model of Market Segmentation for the Customers of Mellat Bank in Iran
Authors: Nader Gharibnavaz, Hossein Yazdi
Abstract:
If an organization like Mellat Bank wants to identify its customer market completely in order to reach its specified goals, it can segment the market so as to offer the right product package to the right segment. Our objective is to offer a segmentation model for the Iranian banking market from Mellat Bank's point of view. The methodology of this project combines "segmentation on the basis of four part-quality variables" and "segmentation on the basis of differences in means". The required data are gathered from E-Systems and the researcher's personal observation. Finally, the research proposes that the organization first form a four-dimensional matrix with 756 segments using four variables named value-based, behavioral, activity style, and activity level, and then, at the second step, calculate the mean profit for every cell of the matrix at two distinct work levels (α1: normal condition and α2: high-pressure condition) and compare the segments by checking two conditions: 1) homogeneity of every segment with its sub-segments and 2) heterogeneity with other segments, so that the necessary segmentation process can be carried out. The last recommendation (explained further by an operational example and a feedback algorithm) is to test and update the model because of the dynamic environment, technology, and banking system.
Keywords: market segmentation model, banking system, Mellat bank.
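The matrix-and-means step can be sketched with pandas on hypothetical records; the level counts (7 x 6 x 6 x 3 = 756 cells), profit columns and data below are assumptions made purely so the 756-segment structure is concrete.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 5000
    # hypothetical customer records with the four segmentation variables and per-customer profit
    df = pd.DataFrame({
        "value_based":     rng.integers(1, 8, n),    # assumed level counts: 7 * 6 * 6 * 3 = 756 cells
        "behavioral":      rng.integers(1, 7, n),
        "activity_style":  rng.integers(1, 7, n),
        "activity_level":  rng.integers(1, 4, n),
        "profit_normal":   rng.normal(100, 30, n),   # profit under work level alpha1 (normal condition)
        "profit_pressure": rng.normal(80, 40, n),    # profit under work level alpha2 (high pressure)
    })
    cells = df.groupby(["value_based", "behavioral", "activity_style", "activity_level"])
    means = cells[["profit_normal", "profit_pressure"]].mean()   # mean profit per occupied segment cell
    print(len(means), means.head())                              # at most 756 occupied cells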
700 Securing Message in Wireless Sensor Network by using New Method of Code Conversions
Authors: Ahmed Chalak Shakir, GuXuemai, Jia Min
Abstract:
Recently, wireless sensor networks have attracted increasing interest. They are widely used in many commercial and military applications, and may be deployed in critical scenarios (e.g. when a malfunctioning network results in danger to human life or great financial loss). Such networks must be protected against intrusion by using secret keys to encrypt the messages exchanged between communicating nodes. Both symmetric and asymmetric methods have their own drawbacks for use in key management. Thus, we avoid the weaknesses of these two cryptosystems and make use of their advantages to establish a secure environment by developing a new encryption method based on the idea of code conversion. The code conversion equations are used as the key for designing the proposed system, based on the principles of logic gates. Using our security architecture, we show how to significantly reduce attacks on wireless sensor networks.
Keywords: logic gates, code conversions, Gray code, and clustering.
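One familiar code conversion of the kind referred to here is the binary/Gray-code pair, which reduces to XOR operations, i.e. simple logic gates; the sketch below shows both directions. It is an illustration of code conversion in general, not the specific equations used as keys in the paper.

    def binary_to_gray(n: int) -> int:
        return n ^ (n >> 1)                       # each Gray bit is the XOR of adjacent binary bits

    def gray_to_binary(g: int) -> int:
        n = 0
        while g:                                  # cumulative XOR of successive right shifts
            n ^= g
            g >>= 1
        return n

    for value in range(8):
        g = binary_to_gray(value)
        assert gray_to_binary(g) == value
        print(f"{value:03b} -> {g:03b}")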
699 Economic Dispatch Fuzzy Linear Regression and Optimization
Authors: A. K. Al-Othman
Abstract:
This study presents a new approach based on Tanaka's fuzzy linear regression (FLR) algorithm to solve the well-known power system economic load dispatch (ELD) problem. Tanaka's fuzzy linear regression formulation is employed to compute the optimal solution of the optimization problem after linearization. The unknowns are expressed as fuzzy numbers with a triangular membership function, with the middle and spread values reflected in the unknowns. The proposed fuzzy model is formulated as a linear optimization problem, where the objective is to minimize the sum of the spreads of the unknowns, subject to double inequality constraints. A linear programming technique is employed to obtain the middle and the symmetric spread for every unknown (power generation level). Simulation results of the proposed approach are compared with those reported in the literature.
Keywords: Economic dispatch, fuzzy linear regression (FLR), and optimization.
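Tanaka's formulation can be sketched as a linear program with SciPy: minimize the total spread subject to every observation lying inside the fuzzy band. The regression data below are synthetic and the h-level is an assumed parameter; the actual ELD application constrains generation levels rather than a toy line.

    import numpy as np
    from scipy.optimize import linprog

    def tanaka_flr(X, y, h=0.5):
        # X: (n, p) with a leading column of ones for the intercept; returns centres a and spreads c
        n, p = X.shape
        absX = np.abs(X)
        cost = np.concatenate([np.zeros(p), absX.sum(axis=0)])       # minimise the total spread
        A_ub = np.vstack([np.hstack([-X, -(1 - h) * absX]),           # upper envelope covers each y_i
                          np.hstack([ X, -(1 - h) * absX])])          # lower envelope stays below each y_i
        b_ub = np.concatenate([-y, y])
        bounds = [(None, None)] * p + [(0, None)] * p                  # spreads must be non-negative
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[:p], res.x[p:]

    # tiny made-up example: a fuzzy line through noisy data
    x = np.linspace(0, 10, 15)
    X = np.column_stack([np.ones_like(x), x])
    y = 2.0 + 1.5 * x + np.random.default_rng(0).normal(0, 0.5, x.size)
    centres, spreads = tanaka_flr(X, y)
    print(centres, spreads)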
698 Numerical Study of Heat Release of the Symmetrically Arranged Extruded-Type Heat Sinks
Authors: Man Young Kim, Gyo Woo Lee
Abstract:
In this numerical study, we present the design of a highly efficient extruded-type heat sink. Symmetrically arranged extruded-type heat sinks are used instead of a single extruded or swaged-type heat sink. In this parametric study, the maximum temperatures, the base temperatures between heaters, and the heat release rates were investigated with respect to the arrangement of heat sources, air flow rates, and amounts of heat input. Based on the results, we believe that using both sides of the heat sink releases heat much more effectively than using a single side. The results also indicate that a symmetric arrangement of the heat sources is recommended to achieve higher heat transfer from the heat sink.
Keywords: Heat Sink, Forced Convection, Heat Transfer, Performance Evaluation, Symmetrically Arranged.
697 Boundary-Element-Based Finite Element Methods for Helmholtz and Maxwell Equations on General Polyhedral Meshes
Authors: Dylan M. Copeland
Abstract:
We present new finite element methods for Helmholtz and Maxwell equations on general three-dimensional polyhedral meshes, based on domain decomposition with boundary elements on the surfaces of the polyhedral volume elements. The methods use the lowest-order polynomial spaces and produce sparse, symmetric linear systems despite the use of boundary elements. Moreover, piecewise constant coefficients are admissible. The resulting approximation on the element surfaces can be extended throughout the domain via representation formulas. Numerical experiments confirm that the convergence behavior on tetrahedral meshes is comparable to that of standard finite element methods, and equally good performance is attained on more general meshes.
Keywords: Boundary elements, finite elements, Helmholtz equation, Maxwell equations.