Search results for: Polynomial approximate inverse
240 SEM-EBSD Observation for Microtubes by Using Dieless Drawing Process
Authors: Takashi Sakai, Itaru Kumisawa
Abstract:
Because die drawing requires insertion of a die, a plug, or a mandrel, higher precision and efficiency are demanded of drawing equipment for tubes with smaller diameters. Manufacturing of such tubes is also accompanied by problems such as cracking and fracture. We specifically examine dieless drawing, which is less affected by these drawing-related difficulties. This deformation process is governed by a principle similar to the reduction in diameter that occurs when pulling a heated glass tube. We conducted dieless drawing of SUS304 stainless steel microtubes under various conditions with three factor parameters: heating temperature, area reduction, and drawing speed. We used SEM-EBSD to observe the effects of the processing conditions on microstructural elements. As a result of this study, the crystallographic orientation of the microtubes is clarified by SEM-EBSD analysis.
Keywords: Microtube, dieless drawing, IPF, inverse pole figure, GOS, grain orientation spread, crystallographic analysis.
239 Biometric Steganography Using Variable Length Embedding
Authors: Souvik Bhattacharyya, Indradip Banerjee, Anumoy Chakraborty, Gautam Sanyal
Abstract:
Recent growth in digital multimedia technologies has provided many facilities for information transmission, reproduction and manipulation. Therefore, information security is one of the foremost concerns in the present-day situation. Biometric information security is one such information security mechanism, and it has advantages as well as disadvantages. A biometric system is at risk from a range of attacks, which are intended to bypass the security system or to suspend its normal functioning, and various hazards have been identified while using biometric systems. Proper use of steganography greatly reduces the risks to biometric systems from hackers. Steganography is a popular information hiding technique whose goal is to hide information inside a cover medium, such as text, image, audio or video, so that the existence of the secret information cannot be detected. In this paper, a new security concept is established that makes the system more secure by combining steganography with biometric security. The biometric information is embedded into the skin tone portion of an image with the help of the proposed steganographic technique.
Keywords: Biometrics, Skin tone detection, Series, Polynomial, Cover Image, Stego Image.
238 Simulation of Heat Transfer in the Multi-Layer Door of the Furnace
Authors: U. Prasopchingchana
Abstract:
The temperature distribution and the heat transfer rates through a multi-layer furnace door were investigated. The inside of the door was in contact with hot air, and the other side was in contact with room air. Radiation heat transfer from the walls of the furnace to the door and from the door to the surroundings was included in the problem. This work is a two-dimensional steady-state problem. The Churchill and Chu correlation was used to find the local convection heat transfer coefficients at the surfaces of the furnace door. The thermophysical properties of air were treated as functions of temperature, and polynomial curve fitting of the fluid properties was carried out. The finite difference method was used to discretize the conduction heat transfer within the furnace door, and Gauss-Seidel iteration was employed to compute the temperature distribution in the door. The temperature distribution in the horizontal mid-plane of the furnace door in the two-dimensional problem agrees with the one-dimensional problem. The local convection heat transfer coefficients at the inside and outside surfaces of the furnace door are also presented.
Keywords: Conduction, heat transfer, multi-layer door, natural convection.
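To make the discretization and iteration steps above concrete, the following Python sketch solves a small two-dimensional steady-state conduction problem with Gauss-Seidel iteration. The grid size, boundary temperatures and tolerance are illustrative assumptions, not the furnace-door data from the paper.

import numpy as np

# Minimal sketch: 2D steady-state conduction (Laplace equation) on a small grid,
# solved by Gauss-Seidel sweeps. Boundary temperatures are assumed values.
nx, ny = 20, 20
T = np.zeros((ny, nx))
T[:, 0] = 800.0          # hot side (furnace air, assumed)
T[:, -1] = 25.0          # cold side (room air, assumed)

tol, max_sweeps = 1e-5, 10000
for sweep in range(max_sweeps):
    max_change = 0.0
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            new = 0.25 * (T[j, i - 1] + T[j, i + 1] + T[j - 1, i] + T[j + 1, i])
            max_change = max(max_change, abs(new - T[j, i]))
            T[j, i] = new
    if max_change < tol:
        break
print("converged after", sweep + 1, "sweeps")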
237 Quantum Computing: A New Era of Computing
Authors: Jyoti Chaturvedi Gursaran
Abstract:
Nature conducts its actions in a very private manner, and classical science has made great efforts to reveal them. But classical science can experiment only with things that can be observed directly; beyond its scope, quantum science works very well. It is based on postulates such as the qubit, superposition of two states, entanglement, measurement and evolution of states, which are briefly described in the present paper. One application of quantum computing, the implementation of a novel quantum evolutionary algorithm (QEA) to automate the timetabling problem of Dayalbagh Educational Institute (Deemed University), is also presented in this paper. Making a good timetable is a scheduling problem: it is an NP-hard, multi-constrained, complex combinatorial optimization problem whose solution cannot be obtained in polynomial time. The QEA uses genetic operators on the Q-bits as well as a quantum-gate updating operator, introduced as a variation operator, to converge toward better solutions.
Keywords: Quantum computing, qubit, superposition, entanglement, measurement of states, evolution of states, Scheduling problem, hard and soft constraints, evolutionary algorithm, quantum evolutionary algorithm.
236 Action Recognition in Video Sequences using a Mealy Machine
Authors: L. Rodriguez-Benitez, J. Moreno-Garcia, J.J. Castro-Schez, C. Solana, L. Jimenez
Abstract:
In this paper, the use of sequential machines for recognizing actions taken by the objects detected by a general tracking algorithm is proposed. The system can deal with the uncertainty inherent in medium-level vision data. For this purpose, fuzzification of the input data is performed. Besides, this transformation allows the data to be managed independently of the tracking application selected and enables adding characteristics of the analyzed scenario. The representation of actions by means of an automaton and the generation of the input symbols for the finite automaton, depending on the object and action compared, are described. The output of the comparison process between an object and an action is a numerical value that represents the membership of the object to the action; this value is computed according to how similar the object and the action are. The work concludes with the application of the proposed technique to identify the behavior of vehicles in road traffic scenes.
Keywords: Approximate reasoning, finite state machines, video analysis.
235 Application of Double Side Approach Method on Super Elliptical Winkler Plate
Authors: Hsiang-Wen Tang, Cheng-Ying Lo
Abstract:
In this study, the static behavior of a super elliptical Winkler plate is analyzed by applying the double side approach method. The lack of information about super elliptical Winkler plates is the motivation for this study, and the double side approach method is used to solve the problem because of its ability to treat problems with complex boundary shapes efficiently. The double side approach method has the advantages of high accuracy, an easy calculation procedure and a low calculation load. Most important of all, it can give the error bound of the approximate solution. The numerical results not only show that the double side approach method works well on this problem but also provide knowledge of the static behavior of super elliptical Winkler plates in practical use.
Keywords: Super elliptical Winkler Plate, double side approach method, error bound.
234 2-DOF Observer Based Controller for First Order with Dead Time Systems
Authors: Ashu Ahuja, Shiv Narayan, Jagdish Kumar
Abstract:
This paper realizes a 2-DOF controller structure for first-order-plus-dead-time systems. Co-prime factorization is used to design the observer-based controller K(s), representing one degree of freedom. The problem is based on the H∞ norm of the mixed sensitivity and aims to achieve stability, robustness and disturbance rejection. The other degree of freedom, the prefilter F(s), is then formulated as a fixed-structure polynomial controller to match the open-loop response of the reference model. This model-matching problem is solved by minimizing the integral square error between the reference model and the proposed model. The feedback controller and prefilter designs are posed as optimization problems and solved using Particle Swarm Optimization (PSO). To show the efficiency of the designed approach, a variety of processes are taken and compared for analysis.
Keywords: 2-DOF, integral square error, mixed sensitivity function, observer based controller, particle swarm optimization, prefilter.
233 Fast and Efficient Algorithms for Evaluating Uniform and Nonuniform Lagrange and Newton Curves
Authors: Taweechai Nuntawisuttiwong, Natasha Dejdumrong
Abstract:
Newton-Lagrange interpolations are widely used in numerical analysis. However, their construction requires quadratic computational time. In computer aided geometric design (CAGD), there are polynomial curves, namely Wang-Ball, DP and Dejdumrong curves, that have linear time complexity algorithms. Thus, the computational time for Newton-Lagrange interpolations can be reduced by applying the algorithms for Wang-Ball, DP and Dejdumrong curves. In order to use these algorithms, it is first necessary to convert Newton-Lagrange polynomials into Wang-Ball, DP or Dejdumrong polynomials. In this work, the algorithms for converting both uniform and non-uniform Newton-Lagrange polynomials into Wang-Ball, DP and Dejdumrong polynomials are investigated. Thus, the computational time for representing Newton-Lagrange polynomials can be reduced to linear complexity. In addition, other CAGD-based operations can then be applied to modify the Newton-Lagrange curves.
Keywords: Newton interpolation, Lagrange interpolation, linear complexity.
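For context, a minimal Newton divided-difference interpolation is sketched below in Python; it shows the classical quadratic-time construction and linear-time evaluation the abstract refers to, and is generic background rather than the conversion algorithm to Wang-Ball, DP or Dejdumrong form.

import numpy as np

def newton_coefficients(x, y):
    # Build the divided-difference table in place; O(n^2) construction cost.
    x = np.asarray(x, dtype=float)
    coef = np.array(y, dtype=float)
    for j in range(1, len(x)):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:-j])
    return coef

def newton_eval(coef, x_nodes, t):
    # Horner-like nested evaluation, O(n) per point.
    result = coef[-1]
    for c, xk in zip(coef[-2::-1], x_nodes[-2::-1]):
        result = result * (t - xk) + c
    return result

x_nodes = [0.0, 1.0, 2.0, 3.0]
y_vals = [1.0, 2.0, 5.0, 10.0]          # samples of 1 + t^2
coef = newton_coefficients(x_nodes, y_vals)
print(newton_eval(coef, x_nodes, 1.5))  # ~3.25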
232 Software Effort Estimation Using Soft Computing Techniques
Authors: Parvinder S. Sandhu, Porush Bassi, Amanpreet Singh Brar
Abstract:
Various models have been derived by studying large numbers of completed software projects from various organizations and applications to explore how project size maps into project effort. However, there is still a need to improve the prediction accuracy of these models. Since a neuro-fuzzy system can approximate non-linear functions with high precision, it is used here as a soft computing approach to generate a model by formulating the relationship from its training data. In this paper, the neuro-fuzzy technique is used for software effort estimation modeling on NASA software project data, and the performance of the developed models is compared with the Halstead, Walston-Felix, Bailey-Basili and Doty models mentioned in the literature.
Keywords: Effort Estimation, Neural-Fuzzy Model, Halstead Model, Walston-Felix Model, Bailey-Basili Model, Doty Model.
231 Systholic Boolean Orthonormalizer Network in Wavelet Domain for Microarray Denoising
Authors: Mario Mastriani
Abstract:
We describe a novel method for removing noise of unknown variance from microarrays in the wavelet domain. The method is based on the following procedure: 1) apply the Bidimensional Discrete Wavelet Transform (DWT-2D) to the noisy microarray; 2) apply scaling and rounding to the coefficients of the highest subbands (to obtain integer and positive coefficients); 3) apply bit-slicing to the new highest subbands (to obtain bit-planes); 4) apply the Systholic Boolean Orthonormalizer Network (SBON) to the input bit-plane set, obtaining two orthonormal output bit-plane sets (in a Boolean sense), and project one set onto the other by means of an AND operation; 5) re-assemble; 6) rescale; and finally 7) apply the inverse DWT-2D and reconstruct the microarray from the modified wavelet coefficients. Denoising results compare favorably with most methods currently in use.
Keywords: Bit-Plane, Boolean Orthonormalization Process, Denoising, Microarrays, Wavelets
230 Generalized Maximal Ratio Combining as a Supra-optimal Receiver Diversity Scheme
Authors: Jean-Pierre Dubois, Rania Minkara, Rafic Ayoubi
Abstract:
Maximal Ratio Combining (MRC) is considered the most complex combining technique, as it requires channel coefficient estimation. It results in the lowest bit error rate (BER) compared to all other combining techniques. However, the BER deteriorates as errors are introduced into the channel coefficient estimates. A novel combining technique, termed Generalized Maximal Ratio Combining (GMRC) with a polynomial kernel, yields a BER identical to MRC with perfect channel estimation and a lower BER in the presence of channel estimation errors. We show that GMRC outperforms the optimal MRC scheme in general and we hereinafter introduce it to the scientific community as a new "supra-optimal" algorithm. Since diversity combining is especially effective in small femto- and pico-cells, internet-associated wireless peripheral systems stand to benefit most from GMRC. As a result, many spinoff applications can be made to IP-based 4th generation networks.
Keywords: Bit error rate, femto-internet cells, generalized maximal ratio combining, signal-to-scattering noise ratio.
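For background on the combining step, the following Python sketch shows conventional MRC, weighting each branch by the conjugate of its channel estimate before summing. The flat-fading signal model and noise level are assumed for illustration; the polynomial-kernel GMRC itself is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
L = 4                      # number of diversity branches (assumed)
s = 1.0                    # transmitted BPSK symbol (+1)
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)   # Rayleigh channel
n = 0.1 * (rng.normal(size=L) + 1j * rng.normal(size=L))          # branch noise
r = h * s + n              # received signal on each branch

# Classical MRC: weight each branch by the conjugate channel estimate,
# then sum; the decision statistic is the real part.
y = np.sum(np.conj(h) * r)
decision = 1 if y.real >= 0 else -1
print(decision)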
229 Delivery of Positively Charged Proteins Using Hyaluronic Acid Microgels
Authors: Elaheh Jooybar, Mohammad J. Abdekhodaie, Marcel Karperien, Pieter J. Dijkstra
Abstract:
In this study, hyaluronic acid (HA) microgels were developed for the goal of protein delivery. First, a hyaluronic acid-tyramine conjugate (HA-TA) was synthesized with a degree of substitution of 13 TA moieties per 100 disaccharide units. Then, HA-TA microdroplets were produced using a water in oil emulsion method and crosslinked in the presence of horseradish peroxidase (HRP) and hydrogen peroxide (H2O2). Loading capacity and the release kinetics of lysozyme and BSA, as model proteins, were investigated. It was shown that lysozyme, a cationic protein, can be incorporated efficiently in the HA microgels, while the loading efficiency for BSA, as a negatively charged protein, is low. The release profile of lysozyme showed a sustained release over a period of one month. The results demonstrated that the HA-TA microgels are a good carrier for spatial delivery of cationic proteins for biomedical applications.
Keywords: Microgel, inverse emulsion, protein delivery, hyaluronic acid, crosslinking.
228 Non-Linear Vibration and Stability Analysis of an Axially Moving Beam with Rotating-Prismatic Joint
Authors: M. Najafi, F. Rahimi Dehgolan
Abstract:
In this paper, the dynamic model of a single-link flexible beam with a tip mass is derived using Hamilton's principle. The link undergoes rotational and translational motion, and it is assumed that the beam moves with a harmonically varying velocity about a constant mean velocity. Non-linearity is introduced by including the non-linear strain in the analysis. The dynamic model is obtained using the Euler-Bernoulli beam assumption and the modal expansion method. The effects of rotary inertia, axial force and the associated boundary conditions on the dynamic model are also analyzed. Since the complex boundary value problem cannot be solved analytically, the multiple scales method is utilized to obtain an approximate solution. Finally, the effects of several conditions on the behavior of the non-linear term, the effect of the mean velocity on the natural frequencies, and the system stability are discussed.
Keywords: Non-linear vibration, stability, axially moving beam, bifurcation, multiple scales method.
227 Mathematical Models for Overall Gas Transfer Coefficient Using Different Theories and Evaluating Their Measurement Accuracy
Authors: Shashank.B. Thakre, Lalit.B. Bhuyar, Samir.J. Deshmukh
Abstract:
Oxygen transfer, the process by which oxygen is transferred from the gaseous to the liquid phase, is a vital part of the wastewater treatment process. Because of the low solubility of oxygen and the consequent low rate of oxygen transfer, sufficient oxygen to meet the demand of aerobic waste does not enter through the normal air-water surface interface. Many theories have been proposed to explain the mechanism of gas transfer and the absorption of non-reacting gases in a liquid, of which the two-film theory is the most important. An existing mathematical model determines an approximate value of the overall gas transfer coefficient. The overall gas transfer coefficient obtained with the penetration theory is 1.13 times that obtained with the two-film theory; the difference is due to the different assumptions of the two theories. The paper aims at the development of a mathematical model that determines the value of the overall gas transfer coefficient with greater accuracy than the existing model.
Keywords: Theories, Dissolved oxygen, Mathematical model, Gas Transfer coefficient, Accuracy.
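The 1.13 factor quoted above follows from the standard forms of the two models; one common way to see it (using generic symbols, since the paper's notation is not reproduced here) is:

K_L(two-film) = D_L / \delta
K_L(penetration) = 2 \sqrt{ D_L / (\pi t_c) }

Identifying the film thickness with the diffusion penetration depth, \delta = \sqrt{D_L t_c}, gives

K_L(penetration) / K_L(two-film) = 2 / \sqrt{\pi} \approx 1.128 \approx 1.13

where D_L is the liquid-phase diffusivity, \delta the film thickness and t_c the surface contact time.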
226 Complexity Analysis of Some Known Graph Coloring Instances
Authors: Jeffrey L. Duffany
Abstract:
Graph coloring is an important problem in computer science, and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs where the optimal solution is already known. This investigation analyzes a set of 50 well-known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (Mycielski and Leighton graphs) that are theoretically designed to be difficult to solve. The size of the graphs ranged from a low of 11 variables to a high of 864 variables. The method used to solve the coloring problem was the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the Leighton and the queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size and number of block diagonal zeros are calculated for each graph. The results showed that the most difficult problems have low mobility (in the range of 0.2-0.5) and relatively little deviation from uniform color class size.
Keywords: graph coloring, complexity, algorithm.
225 Feature Selection for Web Page Classification Using Swarm Optimization
Authors: B. Leela Devi, A. Sankar
Abstract:
The web's increased popularity has brought with it a huge amount of information, due to which automated web page classification systems are essential to improve search engines' performance. Web pages have many features, such as HTML or XML tags, hyperlinks, URLs and text content, which can be considered during an automated classification process. It is known that web page classification is enhanced by hyperlinks, as they reflect web page linkages. The aim of this study is to reduce the number of features used, in order to improve the accuracy of web page classification. In this paper, a novel feature selection method using an improved Particle Swarm Optimization (PSO) based on principles of evolution is proposed. The extracted features were tested on the WebKB dataset using a parallel neural network to reduce the computational cost.
Keywords: Web page classification, WebKB Dataset, Term Frequency-Inverse Document Frequency (TF-IDF), Particle Swarm Optimization (PSO).
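Since the keywords mention TF-IDF, a minimal scikit-learn sketch of the feature-extraction stage is shown below; the toy documents stand in for WebKB pages, and the PSO-based selection of the resulting features is not reproduced.

from sklearn.feature_extraction.text import TfidfVectorizer

# Toy web-page texts standing in for WebKB documents (assumed examples).
docs = [
    "course syllabus lecture homework exam",
    "faculty research publications teaching",
    "student projects homework course notes",
]

# TF-IDF turns each page into a weighted term vector; a feature-selection
# stage (e.g. PSO) would then pick a subset of these columns.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)
print(X.shape)                              # (3 documents, vocabulary size)
print(vectorizer.get_feature_names_out()[:5])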
224 DHT-LMS Algorithm for Sensorineural Loss Patients
Authors: Sunitha S. L., V. Udayashankara
Abstract:
Hearing impairment is the number one chronic disability affecting many people in the world. Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for sensorineural loss patients. Several investigations on speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal-hearing subjects. This paper describes the Discrete Hartley Transform Power Normalized Least Mean Square algorithm (DHT-LMS) to improve the SNR and reduce the convergence time of the Least Mean Square (LMS) algorithm for sensorineural loss patients. The DHT transforms n real numbers to n real numbers, and has the convenient property of being its own inverse. It can be effectively used for noise cancellation with less convergence time. The simulation results show superior characteristics, improving the SNR by at least 9 dB for an input SNR of zero dB and giving a faster convergence rate (eigenvalue ratio 12) compared to the time-domain method and DFT-LMS.
Keywords: Hearing Impairment, DHT-LMS, Convergence rate, SNR improvement.
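For reference, the DHT mentioned above can be computed from an FFT as the real part minus the imaginary part, and applying it twice returns the original signal up to a factor of N (its self-inverse property). A small numpy check on a generic signal, not the hearing-aid data, is:

import numpy as np

def dht(x):
    # Discrete Hartley Transform via the FFT: H_k = Re(X_k) - Im(X_k),
    # i.e. sum_n x_n * cas(2*pi*k*n/N) with cas(t) = cos(t) + sin(t).
    X = np.fft.fft(x)
    return X.real - X.imag

x = np.random.default_rng(1).normal(size=16)
H = dht(x)
x_rec = dht(H) / len(x)          # applying the DHT twice and scaling recovers x
print(np.allclose(x, x_rec))     # True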
223 Digital Image Encryption Scheme using Chaotic Sequences with a Nonlinear Function
Abstract:
In this study, a system of encryption based on chaotic sequences is described. The system is used for encrypting digital image data for the purpose of secure image transmission. An image secure communication scheme based on Logistic map chaotic sequences with a nonlinear function is proposed in this paper. Encryption and decryption keys are obtained from a one-dimensional Logistic map that generates the secret key used as the input of the nonlinear function. The receiver can recover the information using the received signal and identical key sequences through the inverse system technique. The results of computer simulations indicate that the transmitted source image can be correctly and reliably recovered using the proposed scheme, even over a noisy channel. The performance of the system is discussed by evaluating the quality of the recovered image with and without channel noise.
Keywords: Digital image, Image encryption, Secure communication.
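A minimal sketch of the chaotic-sequence idea follows: a Logistic map keystream is quantized to bytes and combined with the pixel data by XOR. The map parameter, seed, and the use of XOR as the nonlinearity are illustrative assumptions, not the paper's specific nonlinear function.

import numpy as np

def logistic_keystream(x0, r, n):
    # Iterate the logistic map x <- r*x*(1-x) and quantize each state to a byte.
    x = x0
    ks = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 255) & 0xFF
    return ks

image = np.random.default_rng(2).integers(0, 256, size=(8, 8), dtype=np.uint8)
ks = logistic_keystream(x0=0.3141, r=3.99, n=image.size).reshape(image.shape)

cipher = image ^ ks            # encryption (XOR used as a simple nonlinearity)
recovered = cipher ^ ks        # decryption with the identical key sequence
print(np.array_equal(image, recovered))   # True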
222 Application of the Least Squares Method in the Adjustment of Chlorodifluoromethane (HCFC-142b) Regression Models
Authors: L. J. de Bessa Neto, V. S. Filho, J. V. Ferreira Nunes, G. C. Bergamo
Abstract:
There are many situations in which human activities have significant effects on the environment, and damage to the ozone layer is one of them. The objective of this work is to use the least squares method, considering linear, exponential, logarithmic, power and second-degree polynomial models, to analyze, through the coefficient of determination (R²), which model best fits the behavior of chlorodifluoromethane (HCFC-142b) concentrations, in parts per trillion, between 1992 and 2018, as well as to estimate future concentrations 5 and 10 periods ahead, i.e., the concentration of this pollutant in the years 2023 and 2028 for each of the fits. A total of 809 observations of the HCFC-142b concentration, from one of the monitoring stations for gases involved in the deterioration of the ozone layer, were selected for the period studied and, using these data, Excel was used to make the scatter plots for each of the fitted models. With the development of the present study, it was observed that the logarithmic fit was the model that best fit the data set, since, besides having a high R², its fitted curve was compatible with the natural trend of the phenomenon.
Keywords: Chlorodifluoromethane (HCFC-142b), ozone (O3), least squares method, regression models.
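A minimal Python sketch of the model-comparison step is given below, fitting a logarithmic and a second-degree polynomial model by least squares and comparing their R²; the synthetic series is a placeholder for the 809 HCFC-142b observations.

import numpy as np

rng = np.random.default_rng(3)
t = np.arange(1, 101, dtype=float)                        # time index (placeholder)
y = 5.0 + 3.0 * np.log(t) + rng.normal(0, 0.2, t.size)    # synthetic concentrations

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Logarithmic model y = a + b*ln(t), fitted by linear least squares.
b_log, a_log = np.polyfit(np.log(t), y, 1)
y_log = a_log + b_log * np.log(t)

# Second-degree polynomial model y = c0 + c1*t + c2*t^2.
c2, c1, c0 = np.polyfit(t, y, 2)
y_poly = c0 + c1 * t + c2 * t ** 2

print("R² logarithmic:", round(r_squared(y, y_log), 4))
print("R² polynomial :", round(r_squared(y, y_poly), 4))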
221 An Improved Ant Colony Algorithm for Genome Rearrangements
Authors: Essam Al Daoud
Abstract:
Genome rearrangement is an important area in computational biology and bioinformatics. The basic problem in genome rearrangements is to compute the edit distance, i.e., the minimum number of operations needed to transform one genome into another. Unfortunately, the unsigned genome rearrangement problem is NP-hard. In this study, an improved ant colony optimization algorithm for approximating the edit distance is proposed. The main idea is to convert the unsigned permutation to a signed permutation and evaluate the ants using Kaplan's algorithm. Two new operations are added to the standard ant colony algorithm: replacing the worst ants by re-sampling from a new probability distribution, and applying crossover operations to the best ants. The proposed algorithm is tested and compared with the improved breakpoint reversal sort algorithm using three datasets. The results indicate that the proposed algorithm achieves a better accuracy ratio than the previous methods.
Keywords: Ant colony algorithm, Edit distance, Genome breakpoint, Genome rearrangement, Reversal sort.
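As a small piece of background on the quantities involved, the following Python sketch counts the breakpoints of an unsigned permutation, the usual lower-bound ingredient for reversal distance; it is generic background, not the proposed ant colony algorithm.

def breakpoints(perm):
    # Frame the permutation with 0 and n+1, then count adjacent pairs that
    # are not consecutive integers in either direction.
    ext = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

print(breakpoints([3, 1, 2, 4]))   # 3 breakpoints: (0,3), (3,1) and (2,4)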
220 Accuracy of Displacement Estimation and Selection of Capacitors for a Four Degrees of Freedom Capacitive Force Sensor
Authors: Chisato Murakami, Makoto Takahashi
Abstract:
Force sensors are required for obtaining information on the magnitude and direction of forces on the skin surface. We have developed a four-degrees-of-freedom capacitive force sensor (approximately 20×20×5 mm3) that has a flexible structure and sixteen parallel-plate capacitors. An iterative algorithm was developed for estimating the four displacements from the sixteen capacitances, using a fourth-order polynomial approximation of the characteristics between capacitance and displacement. The estimates obtained from measured capacitances had large errors caused by deterioration of these characteristics. In this study, the effective capacitors carrying the major information were selected on the basis of the capacitance change range and the characteristic shape. The maximum errors at calibration and non-calibration points were 25% and 6.8%, respectively. Although the maximum error was larger than the desired value, the small averaged error indicated that only a few points had large errors; the error at the non-calibration points, on the other hand, was within the desired value.
Keywords: Force sensors, capacitive sensors, estimation, iterative algorithms.
219 Acoustic and Flow Field Analysis of a Perforated Muffler Design
Authors: Zeynep Parlar, Şengül Ari, Rıfat Yilmaz, Erdem Özdemir, Arda Kahraman
Abstract:
New regulations and standards for noise emission increasingly compel automotive firms to make improvements in reducing engine noise. Nowadays, perforated reactive mufflers, which have an effective damping capability, are specifically used for this purpose. New designs should be analyzed with respect to both acoustics and back pressure. In this study, a reactive perforated muffler is investigated numerically and experimentally. For the acoustic analysis, the transmission loss of the perforated muffler, which is independent of the sound source, was analyzed with COMSOL in the presence of cross flow. To validate the numerical results, the transmission loss was also measured experimentally. Back pressure was obtained from the flow field analysis and was also compared with experimental results. The numerical results have an approximate error of 20% compared to the experimental results.
Keywords: Back Pressure, Perforated Muffler, Transmission Loss.
218 Target Signal Detection Using MUSIC Spectrum in Noise Environment
Authors: Sangjun Park, Sangbae Jeong, Moonsung Han, Minsoo Hahn
Abstract:
In this paper, a target signal detection method using the multiple signal classification (MUSIC) algorithm is proposed. The MUSIC algorithm is a subspace-based direction of arrival (DOA) estimation method. The algorithm detects the DOAs of multiple sources using the inverse of the eigenvalue-weighted eigen spectra. To apply the algorithm to target signal detection for GSC-based beamforming, we utilize its spectral response for the target DOA in noisy conditions. For evaluation of the algorithm, the performance of the proposed target signal detection method is compared with that of the normalized cross-correlation (NCC), fixed beamforming, and power ratio methods. Experimental results show that the proposed algorithm significantly outperforms the conventional ones in terms of receiver operating characteristic (ROC) curves.
Keywords: Beamforming, direction of arrival, multiple signal classification, target signal detection.
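A compact numpy sketch of the MUSIC pseudospectrum for a uniform linear array is given below; the array geometry, source direction and SNR are illustrative assumptions, and the GSC-based detection step described in the abstract is not included.

import numpy as np

rng = np.random.default_rng(4)
M, N, d = 8, 200, 0.5          # sensors, snapshots, spacing in wavelengths (assumed)
theta_src = np.deg2rad(20.0)   # true source direction (assumed)

def steering(theta):
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

# Simulated snapshots: one narrowband source plus white noise.
s = rng.normal(size=N) + 1j * rng.normal(size=N)
X = np.outer(steering(theta_src), s) + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

R = X @ X.conj().T / N                       # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(R)         # eigenvalues in ascending order
En = eigvecs[:, :-1]                         # noise subspace (one source assumed)

angles = np.deg2rad(np.arange(-90, 91))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in angles])
print("estimated DOA:", np.rad2deg(angles[np.argmax(spectrum)]), "degrees")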
217 Hydrolysis of Hull-Less Pumpkin Oil Cake Protein Isolate by Pepsin
Authors: Ivan Živanović, Žužana Vaštag, Senka Popović, Ljiljana Popović, Draginja Peričin
Abstract:
The present work investigates the hydrolysis of hull-less pumpkin (Cucurbita Pepo L.) oil cake protein isolate (PuOC PI) by pepsin. To examine the effectiveness and suitability of pepsin towards PuOC PI, the kinetic parameters of pepsin on PuOC PI were determined, and the hydrolysis process was then studied using Response Surface Methodology (RSM). The hydrolysis was carried out at a temperature of 30°C and pH 3.00. Time and initial enzyme/substrate ratio (E/S), each at three levels, were selected as the independent parameters. The degree of hydrolysis, DH, was measured after 20, 30 and 40 minutes, at initial E/S of 0.7, 1 and 1.3 mA/mg proteins. Since the proposed second-order polynomial model showed a good fit with the experimental data (R² = 0.9822), the obtained mathematical model can be used for monitoring the hydrolysis of PuOC PI by pepsin under the studied experimental conditions, varying the time and initial E/S. To achieve the highest value of DH (39.13%), the obtained optimum conditions for time and initial E/S were 30 min and 1.024 mA/mg proteins.
Keywords: Enzymatic hydrolysis, Pepsin, Pumpkin (Cucurbita Pepo L.) oil cake protein isolate, Response surface methodology.
216 Coupling Compensation of 6-DOF Parallel Robot Based on Screw Theory
Authors: Ming Cong, Yinghua Wu, Dong Liu, Haiying Wen, Junfa Yu
Abstract:
In order to improve control performance and eliminate steady-state error, a coupling compensation for a 6-DOF parallel robot is presented. Taking the dynamic load Tank Simulator as the research object, this paper analyzes the coupling of the 6-DOF parallel robot, considering the degrees of freedom of the parallel manipulator. The coupling angle and coupling velocity are derived based on the inverse kinematics model. The study uses the mechanism-model combined method, which takes as its input a practical motion trajectory that accounts for the performance of the motion controller and motor. Experimental results show that the coupling compensation improves motion stability as well as accuracy. Besides, it decreases the dither amplitude of the dynamic load Tank Simulator.
Keywords: coupling compensation, screw theory, parallel robot, mechanism-model combined motion
215 A Comparison Study of a Symmetry Solution of Magneto-Elastico-Viscous Fluid along a Semi-Infinite Plate with Homotopy Perturbation Method and 4th Order Runge–Kutta Method
Authors: Mohamed M. Mousa, Aidarkhan Kaltayev
Abstract:
The equations governing the flow of an electrically conducting, incompressible viscous fluid over an infinite flat plate in the presence of a magnetic field are investigated using the homotopy perturbation method (HPM) with Padé approximants (PA) and 4th order Runge–Kutta method (4RKM). Approximate analytical and numerical solutions for the velocity field and heat transfer are obtained and compared with each other, showing excellent agreement. The effects of the magnetic parameter and Prandtl number on velocity field, shear stress, temperature and heat transfer are discussed as well.
Keywords: Electrically conducting elastico-viscous fluid, symmetry solution, Homotopy perturbation method, Padé approximation, 4th order Runge–Kutta, Maple
214 An Adaptive Mammographic Image Enhancement in Orthogonal Polynomials Domain
Authors: R. Krishnamoorthy, N. Amudhavalli, M.K. Sivakkolunthu
Abstract:
X-ray mammography is the most effective method for the early detection of breast diseases. However, typical diagnostic signs such as microcalcifications and masses are difficult to detect because mammograms are of low contrast and noisy. In this paper, a new algorithm for image denoising and enhancement based on the Orthogonal Polynomials Transformation (OPT) is proposed for radiologists to screen mammograms. In this method, a set of OPT edge coefficients is scaled to a new set by a scale factor called the OPT scale factor. The new set of coefficients is then inverse transformed, resulting in a contrast-improved image. Applications of the proposed method to mammograms with subtle lesions are shown. To validate the effectiveness of the proposed method, we compare the results to those obtained by the Histogram Equalization (HE) and Unsharp Masking (UM) methods. Our preliminary results strongly suggest that the proposed method offers considerably improved enhancement capability over the HE and UM methods.
Keywords: Mammograms, image enhancement, orthogonal polynomials, contrast improvement.
213 Despiking of Turbulent Flow Data in Gravel Bed Stream
Authors: Ratul Das
Abstract:
The present experimental study provides insights into the decontamination of instantaneous velocity fluctuations captured by an Acoustic Doppler Velocimeter (ADV) in gravel-bed streams, in order to ascertain near-bed turbulence at low Reynolds numbers. The interference between incident and reflected pulses produces spikes in the ADV data, especially in the near-bed flow zone, and therefore filtering the data is essential. Nortek's Vectrino four-receiver ADV probe was used to capture the instantaneous three-dimensional velocity fluctuations over a non-cohesive bed. A spike removal algorithm based on the acceleration threshold method was applied to examine the bed roughness and its influence on velocity fluctuations and velocity power spectra in the carrier fluid. The best combination of velocity threshold (VT) and acceleration threshold (AT) is proposed, for which the velocity power spectra of the despiked signals show a satisfactory fit with the Kolmogorov -5/3 scaling law in the inertial sub-range. Also, the velocity distributions below the roughness crest level fairly follow a third-degree polynomial series.
Keywords: Acoustic Doppler Velocimeter, gravel-bed, spike removal, Reynolds shear stress, near-bed turbulence, velocity power spectra.
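A much-simplified Python sketch of an acceleration-threshold despiking pass is given below: samples adjacent to velocity jumps whose implied acceleration exceeds a multiple of g are flagged and replaced by interpolation. The threshold multiple, sampling rate and replacement rule are assumptions, and the paper's combined VT/AT criterion is not reproduced.

import numpy as np

def despike_acceleration(u, fs, k=1.5, g=9.81):
    # Flag samples adjacent to a velocity jump whose implied acceleration
    # exceeds k*g, then replace them by linear interpolation from clean samples.
    dudt = np.diff(u) * fs
    exceed = np.abs(dudt) > k * g
    spikes = np.zeros(u.size, dtype=bool)
    spikes[:-1] |= exceed
    spikes[1:] |= exceed
    idx = np.arange(u.size)
    clean = u.copy()
    clean[spikes] = np.interp(idx[spikes], idx[~spikes], u[~spikes])
    return clean, spikes

fs = 100.0                                      # sampling frequency in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
u = 0.30 + 0.02 * np.sin(2 * np.pi * t)         # synthetic velocity record (m/s)
u[50] += 0.80                                   # injected spike
clean, spikes = despike_acceleration(u, fs)
print("samples flagged:", int(spikes.sum()))    # the spike and its two neighbours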
212 Performance Analysis of the Time-Based and Periodogram-Based Energy Detector for Spectrum Sensing
Authors: Sadaf Nawaz, Adnan Ahmed Khan, Asad Mahmood, Chaudhary Farrukh Javed
Abstract:
Classically, an energy detector is implemented in the time domain (TD). However, a frequency-domain (FD) based energy detector has demonstrated improved performance. This paper presents a comparison between the two approaches in order to analyze their pros and cons. A detailed performance analysis of the classical TD energy detector and the periodogram-based detector is performed. Exact and approximate mathematical expressions for the probability of false alarm (Pf) and the probability of detection (Pd) are derived for both approaches. The derived expressions naturally lead to analytical as well as intuitive reasoning for the improved Pf and Pd in different scenarios. Our analysis suggests that the improvement depends on the buffer sizes: Pf is improved in FD-based detectors, whereas Pd is enhanced in TD-based energy detectors. Finally, Monte Carlo simulation results confirm the analysis reached by the derived expressions.
Keywords: Cognitive radio, energy detector, periodogram, spectrum sensing.
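To make Pf concrete, a small Monte Carlo sketch of a time-domain energy detector under the noise-only hypothesis follows; the buffer size, noise variance and threshold are assumed values chosen to illustrate how Pf is estimated, not the paper's derived expressions.

import numpy as np

rng = np.random.default_rng(5)
N = 64                 # buffer size (number of samples per decision, assumed)
sigma2 = 1.0           # noise variance (assumed known)
trials = 100_000

# Time-domain energy detector: T = sum |x|^2 compared against a threshold.
noise = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
T = np.sum(noise ** 2, axis=1)

threshold = 1.3 * N * sigma2          # illustrative threshold, ~30% above the mean energy
Pf = np.mean(T > threshold)           # empirical probability of false alarm
print(f"estimated Pf = {Pf:.4f}")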
211 Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis
Authors: Amir Hajian, Sepehr Damavandinejadmonfared
Abstract:
In this paper, the issue of dimensionality reduction in finger vein recognition systems is investigated using kernel Principal Component Analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as there are several kernel functions that can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated. The dimension of the feature vector in PCA-based algorithms is important, especially when it comes to real-world applications and usage of such algorithms: a fixed dimension of the feature vector has to be set in order to reduce the dimension of the input and output data and to extract the features from them. A classifier is then applied to classify the data and make the final decision. We analyze KPCA (with Polynomial, Gaussian and Laplacian kernels) in detail in this paper and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.
Keywords: Biometrics, finger vein recognition, Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA).
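A minimal scikit-learn sketch of sweeping the feature-extraction dimension for polynomial and Gaussian (RBF) kernels is shown below; the random matrix stands in for finger-vein feature vectors, the dimensions tried are arbitrary, and the Laplacian kernel and the downstream classifier are omitted.

import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(6)
X = rng.normal(size=(120, 64))        # placeholder feature vectors (e.g. flattened vein images)

# Sweep the number of retained components for two kernels; in a full system
# each reduced representation would be passed to a classifier and the
# dimension giving the best recognition rate would be kept.
for kernel in ("poly", "rbf"):
    for n_components in (5, 10, 20, 40):
        kpca = KernelPCA(n_components=n_components, kernel=kernel)
        X_reduced = kpca.fit_transform(X)
        print(kernel, n_components, X_reduced.shape)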