Search results for: Near-field approximation
40 Towards Growing Self-Organizing Neural Networks with Fixed Dimensionality
Authors: Guojian Cheng, Tianshi Liu, Jiaxin Han, Zheng Wang
Abstract:
Competitive learning is an adaptive process in which the neurons in a neural network gradually become sensitive to different input pattern clusters. The basic idea behind Kohonen's Self-Organizing Feature Maps (SOFM) is competitive learning. SOFM can generate mappings from high-dimensional signal spaces to lower dimensional topological structures. The main features of such mappings are topology preservation, feature mapping and approximation of the probability distribution of the input patterns. To overcome some limitations of SOFM, e.g., a fixed number of neural units and a topology of fixed dimensionality, Growing Self-Organizing Neural Networks (GSONN) can be used. A GSONN can change its topological structure during learning: it grows by learning and shrinks by forgetting. To speed up training and convergence, a new variant of GSONN, twin growing cell structures (TGCS), is presented here. This paper first gives an introduction to competitive learning, SOFM and its variants. Then, we discuss some GSONN with fixed dimensionality, which include growing cell structures, its variants and the authors' model: TGCS. The paper ends with a comparison of test results and conclusions.
Keywords: Artificial neural networks, Competitive learning, Growing cell structures, Self-organizing feature maps.
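For readers unfamiliar with the competitive learning rule that SOFM and its growing variants build on, the following minimal sketch (Python with NumPy; the grid size, learning rate and decay schedules are illustrative choices, not the authors' settings) trains a small Kohonen map by repeatedly selecting a best matching unit and pulling its neighborhood toward the input.

```python
import numpy as np

def train_sofm(data, grid_shape=(8, 8), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small 2-D Kohonen SOFM with the classic competitive learning rule."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    n_units, dim = rows * cols, data.shape[1]
    weights = rng.random((n_units, dim))
    # (row, col) coordinate of every unit, used by the neighborhood function
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # competition: the best matching unit (BMU) is the unit closest to x
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        # cooperation: Gaussian neighborhood around the BMU, shrinking over time
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        h = np.exp(-dist2 / (2.0 * sigma ** 2))
        # adaptation: move weights toward the input, weighted by the neighborhood
        weights += lr * h[:, None] * (x - weights)
    return weights.reshape(rows, cols, dim)

if __name__ == "__main__":
    samples = np.random.default_rng(1).random((500, 3))   # toy 3-D inputs
    codebook = train_sofm(samples)
    print(codebook.shape)                                  # (8, 8, 3)
```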
39 Analysis of Combined Heat Transfer through the Core Materials of VIPs with Various Scattering Properties
Authors: Jaehyug Lee, Tae-Ho Song
Abstract:
A Vacuum Insulation Panel (VIP) can achieve very low thermal conductivity by evacuating its inner space. Heat transfer in the core materials of a highly-evacuated VIP occurs by conduction through the solid structure and radiation through the pores. The effect of various scattering modes on combined conduction-radiation in VIPs is investigated through numerical analysis. The discrete ordinates interpolation method (DOIM) incorporated with the commercial code FLUENT® is employed. It is found that backward scattering is more effective in reducing the total heat transfer, while isotropic scattering is almost identical to the pure absorbing/emitting case of the same optical thickness. For a purely scattering medium, the results agree well with the additive solution using the diffusion approximation, while a modified optical-thickness term is employed when backward scattering is considered. For other scattering phase functions, it is also confirmed that a backwardly scattering phase function gives a lower effective thermal conductivity. Thus, materials with backward scattering properties, together with radiation shields, are desirable to lower the thermal conductivity of VIPs.
Keywords: Combined conduction and radiation, discrete ordinates interpolation method, scattering phase function, vacuum insulation panel.
38 Pattern Matching Based on Regular Tree Grammars
Authors: Riad S. Jabri
Abstract:
Pattern matching based on regular tree grammars has been widely used in many areas of computer science. In this paper, we propose a pattern matcher within the framework of code generation, based on a generic and formalized approach. According to this approach, parsers for regular tree grammars are adapted to a general pattern matching solution, rather than adapting the pattern matching to their parsing behavior. Hence, we first formalize the construction of the pattern matches for input trees drawn from a regular tree grammar in the form of so-called match trees. Then, we adopt a recently developed generic parser and tightly couple its parsing behavior with this construction. In addition to its generality, the resulting pattern matcher is characterized by its soundness and efficient implementation. This is demonstrated by the proposed theory and by the derived algorithms for its implementation. A comparison with similar and well-known approaches, such as the ones based on tree automata and LR parsers, has shown that our pattern matcher can be applied to a broader class of grammars, and achieves a better approximation of pattern matches in one pass. Furthermore, its use as a machine code selector incurs a minimized overhead, due to the balanced distribution of the cost computations into static ones, during parser generation time, and dynamic ones, during parsing time.
Keywords: Bottom-up automata, Code selection, Pattern matching, Regular tree grammars, Match trees.
37 Exploration of the Communication Area of Infrared Short-Range Communication Systems for Intervehicle Communication
Authors: Wern-Yarng Shieh, Hsin-Chuan Chen, Ti-Ho Wang, Bo-Wei Chen
Abstract:
Infrared communication in the wavelength band 780-950 nm is very suitable for short-range point-to-point communications. It is a good choice for vehicle-to-vehicle communication in several intelligent-transportation-system (ITS) applications such as cooperative driving, collision warning, and pileup-crash prevention. In this paper, with the aid of a physical model established in our previous works, we explore the communication area of an infrared intervehicle communication system utilizing typical low-cost commercial light-emitting diodes (LEDs) as the emitter and planar p-i-n photodiodes as the receiver. The radiation pattern of the emitter fabricated from the aforementioned LEDs and the receiving pattern of the receiver are approximated by a linear combination of cosine-power (cos^n) functions. This approximation helps us analyze the system performance easily. Both multilane straight-road conditions and curved-road conditions with various radii of curvature are taken into account. The condition of a small car communicating with a big truck, i.e., a vertical mounting height difference between the emitter and the receiver, is also considered. Our results show that the performance of the system meets the requirements of the aforementioned ITS applications in terms of the communication area.
Keywords: Dedicated short-range communication (DSRC), infrared communication, intervehicle communication, intelligent transportation system (ITS).
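As an illustration of the cosine-power approximation mentioned in the abstract, the following sketch evaluates a relative received-power figure for a single emitter-receiver pair; the lobe exponents, distances and sensitivity threshold are hypothetical placeholders, not the calibrated values of the authors' physical model.

```python
import numpy as np

def relative_received_power(d, phi_e, phi_r, m=2.0, n=1.5):
    """Relative received optical power for an IR link whose emitter radiation
    pattern and receiver sensitivity are approximated by cos^m and cos^n lobes.
    d: emitter-receiver distance [m]; phi_e, phi_r: off-axis angles [rad]."""
    if abs(phi_e) >= np.pi / 2 or abs(phi_r) >= np.pi / 2:
        return 0.0                      # outside the half-space of the lobes
    return (np.cos(phi_e) ** m) * (np.cos(phi_r) ** n) / d ** 2

def in_communication_area(d, phi_e, phi_r, threshold=1e-4):
    """Crude coverage test: the link works if the relative power exceeds a
    sensitivity threshold (a placeholder number, not a measured value)."""
    return relative_received_power(d, phi_e, phi_r) >= threshold

print(relative_received_power(20.0, np.radians(10), np.radians(5)))
print(in_communication_area(60.0, np.radians(30), np.radians(20)))
```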
36 Three-Dimensional Simulation of Free Electron Laser with Prebunching and Efficiency Enhancement
Authors: M. Chitsazi, B. Maraghechi, M. H. Rouhani
Abstract:
Three-dimensional simulation of harmonic up-generation in a free electron laser amplifier operating simultaneously with a cold, relativistic electron beam is presented in the steady-state regime, where the slippage of the electromagnetic wave with respect to the electron beam is ignored. By using the slowly varying envelope approximation and applying the source-dependent expansion to the wave equations, the electromagnetic fields are represented in terms of Hermite-Gaussian modes, which are well suited for the planar wiggler configuration. The electron dynamics is described by the fully three-dimensional Lorentz force equation in the presence of the realistic planar magnetostatic wiggler and the electromagnetic fields. A set of coupled nonlinear first-order differential equations is derived and solved numerically. The fundamental and third harmonic radiation of the beam is considered. In addition to a uniform beam, a prebunched electron beam has also been studied. For this, the effect of a sinusoidal distribution of entry times for the electron beam on the evolution of the radiation is compared with that of a uniform distribution. It is shown that prebunching reduces the saturation length substantially. For efficiency enhancement, the wiggler amplitude is set to decrease linearly when the radiation of the third harmonic saturates. The optimum starting point and slope of the wiggler-amplitude taper are found by successive runs of the code.
Keywords: Free electron laser, Prebunching, Undulator, Wiggler.
35 Effects of Thermal Radiation and Magnetic Field on Unsteady Stretching Permeable Sheet in Presence of Free Stream Velocity
Authors: Phool Singh, Ashok Jangid, N. S. Tomer, Deepa Sinha
Abstract:
The aim of this paper is to investigate the two-dimensional unsteady flow of a viscous incompressible fluid about a stagnation point on a permeable stretching sheet in the presence of a time-dependent free stream velocity. The fluid is considered under the influence of a transverse magnetic field in the presence of radiation. The Rosseland approximation is used to model the radiative heat transfer. Using a time-dependent stream function, the partial differential equations corresponding to the momentum and energy equations are converted into non-linear ordinary differential equations. Numerical solutions of these equations are obtained by using the Runge-Kutta-Fehlberg method with the help of a Newton-Raphson shooting technique. In the present work, the effects of the unsteadiness parameter, magnetic field parameter, radiation parameter, stretching parameter and the Prandtl number on the flow and heat transfer characteristics have been discussed. The skin-friction coefficient and Nusselt number at the sheet are computed and discussed. The results reported in the paper are in good agreement with published work in the literature.
Keywords: Magneto hydrodynamics, stretching sheet, thermal radiation, unsteady flow.
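The numerical machinery used above (adaptive Runge-Kutta integration combined with a shooting iteration) can be illustrated on a standard nonlinear two-point boundary value problem, y'' = 1.5 y² with y(0) = 4 and y(1) = 1, whose exact solution y = 4/(1+x)² gives y'(0) = −8. The sketch below uses SciPy's RK45 integrator and a secant-type Newton iteration; it is a generic demonstration of the technique, not the paper's coupled momentum/energy system.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, y):
    # y[0] = y, y[1] = y'; the test BVP is y'' = 1.5 * y**2
    return [y[1], 1.5 * y[0] ** 2]

def shoot(slope):
    """Integrate with an adaptive Runge-Kutta scheme for a guessed y'(0) and
    return the error in the far boundary condition y(1) = 1."""
    sol = solve_ivp(rhs, (0.0, 1.0), [4.0, slope], method="RK45",
                    rtol=1e-9, atol=1e-10)
    return sol.y[0, -1] - 1.0

# secant-type Newton-Raphson iteration on the unknown initial slope
s0, s1 = -7.9, -8.1
for _ in range(25):
    e0, e1 = shoot(s0), shoot(s1)
    if e1 == e0:
        break
    s0, s1 = s1, s1 - e1 * (s1 - s0) / (e1 - e0)
    if abs(shoot(s1)) < 1e-7:
        break

print(f"converged initial slope y'(0) = {s1:.6f} (exact value -8)")
```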
34 Unsteady Natural Convection in a Square Cavity Partially Filled with Porous Media Using a Thermal Non-Equilibrium Model
Authors: Ammar Alsabery, Habibis Saleh, Norazam Arbin, Ishak Hashim
Abstract:
Unsteady natural convection and heat transfer in a square cavity partially filled with porous media using a thermal non-equilibrium model is studied in this paper. The left vertical wall is maintained at a constant hot temperature Th and the right vertical wall is maintained at a constant cold temperature Tc, while the horizontal walls are adiabatic. The governing equations are obtained by applying the Darcy model and the Boussinesq approximation. COMSOL's finite element method is used to solve the non-dimensional governing equations together with the specified boundary conditions. The governing parameters of this study are the Rayleigh number (Ra = 10^5 and Ra = 10^6), the Darcy number (Da = 10^−2 and Da = 10^−3), the modified thermal conductivity ratio (10^−1 ≤ γ ≤ 10^4), the inter-phase heat transfer coefficient (10^−1 ≤ H ≤ 10^3) and the dimensionless time (0.001 ≤ τ ≤ 0.2). The results are presented for values of the governing parameters in terms of streamlines in both the fluid and porous layers, isotherms of the fluid in the fluid/porous layer, isotherms of the solid in the porous layer, and the average Nusselt number.
Keywords: Unsteady natural convection, Thermal non-equilibrium model, Darcy model.
33 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history dynamic analysis of structures is considered an exact method, but it is computationally intensive. Filtering earthquake strong ground motions by applying a wavelet transform is an approach towards reducing the computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since an earthquake strong ground motion record is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out non-effective frequencies of the strong ground motion. The filtration process may be repeated several times, although each repetition introduces additional approximation error. In this paper, the earthquake strong ground motion has been filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analysis of sampled shear and moment frames is carried out. The error associated with each wavelet is computed by comparing the dynamic response of the sampled structures with the exact responses. Exact responses are computed by dynamic analysis of the structures using the non-filtered strong ground motion.
Keywords: Wavelet transform, computational error, computational duration, strong ground motion data.
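A one-pass discrete wavelet filtering of a ground motion record of the kind described above can be sketched as follows; the PyWavelets package and the db4 wavelet are assumptions chosen for illustration (the paper compares several wavelets), and the synthetic record merely stands in for the Northridge accelerogram.

```python
import numpy as np
import pywt  # PyWavelets; assumed available as the DWT implementation

def filter_ground_motion(accel, wavelet="db4", level=1):
    """One-pass discrete wavelet filtering of a strong ground motion record:
    the detail (high-frequency) coefficients are discarded and the record is
    reconstructed from the approximation coefficients only."""
    coeffs = pywt.wavedec(accel, wavelet, level=level)
    # keep the approximation band, zero every detail band
    filtered_coeffs = [coeffs[0]] + [np.zeros_like(d) for d in coeffs[1:]]
    filtered = pywt.waverec(filtered_coeffs, wavelet)
    return filtered[: len(accel)]           # trim a possible padding sample

if __name__ == "__main__":
    # synthetic stand-in for a recorded accelerogram (not the Northridge record)
    t = np.linspace(0, 20, 2000)
    record = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 18 * t)
    filtered = filter_ground_motion(record, wavelet="db4", level=2)
    err = np.linalg.norm(record - filtered) / np.linalg.norm(record)
    print(f"relative filtering error: {err:.3f}")
```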
32 Numerical Optimization within Vector of Parameters Estimation in Volatility Models
Authors: J. Arneric, A. Rozga
Abstract:
In this paper, the usefulness of the quasi-Newton iteration procedure for parameter estimation of the conditional variance equation within the BHHH algorithm is presented. Analytical maximization of the likelihood function using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm in comparison to other optimization algorithms is that it requires no third derivatives while ensuring convergence. To simplify the optimization procedure, the BHHH algorithm approximates the matrix of second derivatives using the information identity. However, parameter estimation in symmetric and asymmetric GARCH(1,1) models assuming a normal distribution of returns is not that simple, i.e., it is difficult to solve analytically. The maximum of the likelihood function can be found by an iterative procedure that continues until no further increase is achieved. Because the solutions of the numerical optimization are very sensitive to the initial values, starting parameters for the GARCH(1,1) model are defined. The number of iterations can be reduced by using starting values close to the global maximum. The optimization procedure is illustrated in the framework of modeling volatility on a daily basis for the most liquid stocks on the Croatian capital market: Podravka stocks (food industry), Petrokemija stocks (fertilizer industry) and Ericsson Nikola Tesla stocks (information and communications industry).
Keywords: Heteroscedasticity, Log-likelihood Maximization, Quasi-Newton iteration procedure, Volatility.
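A minimal sketch of the likelihood being maximized is given below: the Gaussian GARCH(1,1) log-likelihood is coded directly and maximized with SciPy's L-BFGS-B quasi-Newton routine as a readily available stand-in for BHHH (which instead approximates the Hessian by the outer product of the scores via the information identity). The simulated returns and starting values are illustrative, not the Croatian stock data.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1) conditional variance
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1} for zero-mean returns r."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return 1e10                      # crude penalty outside the stationary region
    h = np.empty_like(r)
    h[0] = np.var(r)                     # a common choice for the starting variance
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * h) + r ** 2 / h)

rng = np.random.default_rng(0)
# simulate a toy return series from a known GARCH(1,1) process
omega_true, alpha_true, beta_true = 0.05, 0.08, 0.88
r = np.empty(3000)
h = omega_true / (1 - alpha_true - beta_true)
for t in range(len(r)):
    r[t] = np.sqrt(h) * rng.standard_normal()
    h = omega_true + alpha_true * r[t] ** 2 + beta_true * h

start = np.array([0.1, 0.05, 0.80])      # starting values close to typical estimates
res = minimize(garch11_neg_loglik, start, args=(r,), method="L-BFGS-B",
               bounds=[(1e-6, None), (0.0, 1.0), (0.0, 1.0)])
print("omega, alpha, beta =", np.round(res.x, 3))
```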
31 Comparison of Finite Difference Schemes for Water Flow in Unsaturated Soils
Authors: H. Taheri Shahraiyni, B. Ataie Ashtiani
Abstract:
Flow movement in unsaturated soil can be expressed by a partial differential equation named the Richards equation. The objective of this study is to find an appropriate implicit numerical solution to the head-based Richards equation. Some well-known finite difference schemes (fully implicit, Crank-Nicolson and Runge-Kutta) have been utilized in this study. In addition, the effects of different approximations of the moisture capacity function, convergence criteria and time-stepping methods were evaluated. Two different infiltration problems were solved to investigate the performance of the different schemes. These problems involve vertical water flow in a wet and in a very dry soil. The numerical solutions of the two problems were compared using four evaluation criteria, and the results showed that the fully implicit scheme is better than the other schemes. In addition, using the standard chord-slope method for the approximation of the moisture capacity function, an automatic time-stepping method, and the difference between two successive iterations as the convergence criterion in the fully implicit scheme leads to better and more reliable results for the simulation of fluid movement in different unsaturated soils.
Keywords: Finite difference methods, Richards equation, fully implicit, Crank-Nicolson, Runge-Kutta.
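The chord-slope approximation of the moisture capacity function mentioned above can be written in a few lines; the van Genuchten retention parameters below are illustrative, not the soils used in the study.

```python
import numpy as np

# van Genuchten retention curve (parameters are illustrative only): theta(h)
# for pressure head h < 0.
theta_r, theta_s, alpha, n = 0.10, 0.40, 3.6, 1.56
m = 1.0 - 1.0 / n

def theta(h):
    psi = np.maximum(-h, 0.0)                    # suction head
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

def capacity_analytic(h):
    """Analytical moisture capacity C(h) = d(theta)/dh for h < 0."""
    psi = -h
    return ((theta_s - theta_r) * m * n * alpha * (alpha * psi) ** (n - 1)
            / (1.0 + (alpha * psi) ** n) ** (m + 1))

def capacity_chord_slope(h_new, h_old):
    """Standard chord-slope approximation evaluated between two iteration levels."""
    return (theta(h_new) - theta(h_old)) / (h_new - h_old)

h_old, h_new = -1.00, -0.95          # heads at two successive iterations [m]
print("chord slope :", capacity_chord_slope(h_new, h_old))
print("analytic    :", capacity_analytic(0.5 * (h_old + h_new)))
```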
30 Applying Element Free Galerkin Method on Beam and Plate
Authors: Mahdad M’hamed, Belaidi Idir
Abstract:
This paper develops a meshless approach, called the Element Free Galerkin (EFG) method, which is based on the weak form of the governing partial differential equations and employs Moving Least Squares (MLS) interpolation to construct the meshless shape functions. The variational weak form is used in the EFG, where the trial and test functions are approximated by the MLS approximation. Since the shape functions constructed by this discretization have the weight function property based on the randomly distributed points, the essential boundary conditions can be implemented easily. The local weak form of the governing partial differential equations is obtained by the weighted residual method within a simple local quadrature domain. A spline function with high continuity is used as the weight function. The presently developed EFG method is a truly meshless method, as it does not require a mesh, either for the construction of the shape functions or for the integration of the local weak form. Several numerical examples of two-dimensional static structural analysis are presented to illustrate the performance of the present EFG method. They show that the EFG method is highly efficient in implementation and highly accurate in computation. The present method is used to analyze the static deflection of beams and of a plate with a hole.
Keywords: Numerical computation, element-free Galerkin, moving least squares, meshless methods.
29 The Wavelet-Based DFT: A New Interpretation, Extensions and Applications
Authors: Abdulnasir Hossen, Ulrich Heute
Abstract:
In 1990 [1] the subband-DFT (SB-DFT) technique was proposed. This technique used Hadamard filters in the decomposition step to split the input sequence into low-pass and high-pass sequences. In the next step, either two DFTs are needed on both bands to compute the full-band DFT, or one DFT on one of the two bands to compute an approximate DFT. A combination network with correction factors is applied after the DFTs. Another approach was proposed in 1997 [2] for using a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of that algorithm, the input sequence is decomposed, in a similar manner to the SB-DFT, into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by implementing pruning at both input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters. The only difference is a constant factor in the combination network. This result is very important for completing the analysis of the W-DFT, since all the results concerning the accuracy and approximation errors in the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case. An application in image transformation is given using two different types of wavelet filters.
Keywords: Image Transform, Spectral Analysis, Sub-Band DFT, Wavelet DFT.
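The equivalence described above can be checked numerically: the sketch below computes the full-band DFT from the two Hadamard (sum/difference) half-band sequences plus the combination network with correction factors; the Haar-filtered W-DFT differs from this only by a constant scaling of each band.

```python
import numpy as np

def subband_dft(x):
    """Full-band DFT computed from the two half-band (Hadamard) sequences
    a[n] = x[2n] + x[2n+1] and b[n] = x[2n] - x[2n+1] plus a combination
    network; the Haar-filter version differs only by a 1/sqrt(2) scaling."""
    N = len(x)                                 # N must be even
    a = x[0::2] + x[1::2]                      # low-pass (sum) band
    b = x[0::2] - x[1::2]                      # high-pass (difference) band
    A, B = np.fft.fft(a), np.fft.fft(b)        # two N/2-point DFTs
    k = np.arange(N // 2)
    W = np.exp(-2j * np.pi * k / N)            # twiddle / correction factors
    X = np.empty(N, dtype=complex)
    X[: N // 2] = 0.5 * ((1 + W) * A + (1 - W) * B)
    X[N // 2 :] = 0.5 * ((1 - W) * A + (1 + W) * B)
    return X

x = np.random.default_rng(0).standard_normal(64)
print(np.allclose(subband_dft(x), np.fft.fft(x)))   # True
```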
28 The Didactic Transposition in Brazilian High School Physics Textbooks: A Comparative Study of Didactic Materials
Authors: Leandro Marcos Alves Vaz
Abstract:
In this article, we analyze the different approaches to the topic Magnetism of Matter in physics textbooks of Brazilian schools. For this, we compared the approach to the concepts of the magnetic characteristics of materials (diamagnetism, paramagnetism, ferromagnetism and antiferromagnetism) in different sources of information and at different levels of education, from Higher Education to High School. In this sense, we used as reference the theory of the Didactic Transposition of Yves Chevallard, a French educational theorist, who conceived in his theory three types of knowledge – Scholarly Knowledge, Knowledge to be taught and Taught Knowledge – related to teaching practice. As a research methodology, from the reading of the works used in teacher training and those destined to basic education students, we compared the treatment of a higher education physics book, a scientific article published in a Brazilian journal of the educational area, and four high school textbooks, in order to establish in which there is a greater or lesser degree of approximation with the knowledge produced by the scholars – scholarly knowledge – or even with the knowledge to be taught (that found in books intended for teaching). Thus, we evaluated the level of proximity of the subjects conveyed in high school and higher education, as well as the relevance that some textbook authors give to the theme.
Keywords: Magnetism of matter, teaching of physics, didactic transposition, Brazilian physics books.
27 Vortex-Shedding Suppression in Mixed Convective Flow past a Heated Square Cylinder
Abstract:
The present study investigates numerically the phenomenon of vortex-shedding and its suppression in two-dimensional mixed convective flow past a square cylinder under the joint influence of buoyancy and free-stream orientation with respect to gravity. The numerical experiments have been conducted at a fixed Reynolds number (Re) of 100 and Prandtl number (Pr) of 0.71, while the Richardson number (Ri) is varied from 0 to 1.6 and the free-stream orientation, α, is kept in the range 0° ≤ α ≤ 90°, with 0° corresponding to an upward flow and 90° representing a cross-flow scenario, respectively. The continuity, momentum and energy equations, subject to the Boussinesq approximation, are discretized using a finite difference method and are solved by a semi-explicit pressure correction scheme. The critical Richardson number leading to the suppression of the vortex-shedding (Ric) is estimated by using Stuart-Landau theory at various free-stream orientations, and the neutral curve is obtained in the Ri-α plane. The neutral curve exhibits an interesting non-monotonic behavior, with Ric first increasing with increasing values of α up to 45° and then decreasing till 70°. Beyond 70°, the neutral curve again exhibits a sharply increasing asymptotic trend, with Ric approaching very large values as α approaches 90°. The suppression of vortex shedding is not observed at α = 90° (cross-flow). In the unsteady flow regime, the Strouhal number (St) increases with the increase in Richardson number.
Keywords: Bluff body, buoyancy, free-stream orientation, vortex-shedding.
26 A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm
Authors: Javad Rahimipour Anaraki, Saeed Samet, Mahdi Eftekhari, Chang Wook Ahn
Abstract:
Feature selection and attribute reduction are crucial problems and widely used techniques in the fields of machine learning, data mining and pattern recognition for overcoming the well-known phenomenon of the Curse of Dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, thereby selecting the most informative features of a dataset. It consists of two components: 1) a measure for feature subset evaluation, and 2) a search strategy. For the evaluation measure, we have employed the fuzzy-rough dependency degree (FRDD) of the lower approximation-based fuzzy-rough feature selection (L-FRFS) due to its effectiveness in feature selection. As for the search strategy, a modified version of the binary shuffled frog leaping algorithm (B-SFLA) is proposed. The proposed feature selection method is obtained by hybridizing the B-SFLA with the FRDD. Nine classifiers have been employed to compare the proposed approach with several existing methods over twenty-two datasets, including nine high-dimensional and large ones, from the UCI repository. The experimental results demonstrate that the B-SFLA approach significantly outperforms other metaheuristic methods in terms of the number of selected features and the classification accuracy.
Keywords: Binary shuffled frog leaping algorithm, feature selection, fuzzy-rough set, minimal reduct.
25 Optimal Current Control of Externally Excited Synchronous Machines in Automotive Traction Drive Applications
Authors: Oliver Haala, Bernhard Wagner, Maximilian Hofmann, Martin Marz
Abstract:
The excellent suitability of the externally excited synchronous machine (EESM) for automotive traction drive applications is justified by its high efficiency over the whole operating range and the high availability of its materials. Usually, maximum efficiency is obtained by modelling each single loss and minimizing the sum of all losses. As a result, the quality of the optimization highly depends on the precision of the model. Moreover, it requires accurate knowledge of the saturation-dependent machine inductances. Therefore, the present contribution proposes a method to minimize the overall losses of a salient pole EESM and its inverter in steady-state operation based on measurement data only. Since this method does not require any manufacturer data, it is well suited for automated measurement data evaluation and inverter parametrization. The field oriented control (FOC) of an EESM provides three current components, i.e., three degrees of freedom (DOF). An analytic minimization of the copper losses in the stator and the rotor (assuming constant inductances) is performed and serves as a first approximation of how to choose the optimal current reference values. After a numerical offline minimization of the overall losses based on measurement data, the results are compared to a control strategy that satisfies cos(ϕ) = 1.
Keywords: Current control, efficiency, externally excited synchronous machine, optimization.
24 A Hybrid Neural Network and Traditional Approach for Forecasting Lumpy Demand
Authors: A. Nasiri Pour, B. Rostami Tabar, A. Rahimzadeh
Abstract:
Accurate demand forecasting is one of the key issues in the inventory management of spare parts. The problem of modeling future consumption becomes especially difficult for lumpy patterns, which are characterized by intervals in which there is no demand and periods with actual demand occurrences showing large variation in demand levels. Many forecasting methods may perform poorly when demand for an item is lumpy. In this study, based on the characteristics of lumpy demand patterns of spare parts, a hybrid forecasting approach has been developed which uses a multi-layered perceptron neural network and a traditional recursive method for forecasting future demands. In the described approach, the multi-layered perceptron is adapted to forecast occurrences of non-zero demands, and then a conventional recursive method is used to estimate the quantity of the non-zero demands. In order to evaluate the performance of the proposed approach, its forecasts were compared to those obtained by using the Syntetos & Boylan approximation, a recently employed multi-layered perceptron neural network, a generalized regression neural network and an Elman recurrent neural network in this area. The models were applied to forecast the future demand of spare parts of Arak Petrochemical Company in Iran, using 30 types of real data sets. The results indicate that the forecasts obtained by using our proposed model are superior to those obtained by using the other methods.
Keywords: Lumpy Demand, Neural Network, Forecasting, Hybrid Approach.
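For reference, the Syntetos & Boylan approximation used as one of the benchmarks is itself a small recursive method; a minimal sketch (with an illustrative smoothing constant and a toy demand history) is given below.

```python
def sba_forecast(demand, alpha=0.2):
    """Croston-type estimator with the Syntetos-Boylan approximation (SBA):
    the smoothed demand size z and inter-demand interval p are updated only
    when a non-zero demand occurs; the per-period forecast is
    (1 - alpha/2) * z / p."""
    z = p = None
    q = 1                                   # periods since the last non-zero demand
    for d in demand:
        if d > 0:
            if z is None:                   # initialise on the first demand occurrence
                z, p = float(d), float(q)
            else:
                z += alpha * (d - z)
                p += alpha * (q - p)
            q = 1
        else:
            q += 1
    if z is None:
        return 0.0
    return (1.0 - alpha / 2.0) * z / p

# lumpy spare-part series: long zero intervals, occasional large demands
history = [0, 0, 5, 0, 0, 0, 12, 0, 0, 3, 0, 0, 0, 0, 8, 0]
print(f"SBA per-period forecast: {sba_forecast(history):.3f}")
```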
23 Evaluation of Mixed-Mode Stress Intensity Factor by Digital Image Correlation and Intelligent Hybrid Method
Authors: K. Machida, H. Yamada
Abstract:
Displacement measurement was conducted on compact normal and shear specimens made of an acrylic homogeneous material subjected to mixed-mode loading by digital image correlation. The intelligent hybrid method proposed by Nishioka et al. was applied to the stress-strain analysis near the crack tip. The accuracy of the stress-intensity factor at the free surface was discussed from the viewpoint of both the experiment and 3-D finite element analysis. The surface images before and after deformation were taken by a CMOS camera, and we developed a system which enabled real-time stress analysis based on digital image correlation and inverse problem analysis. The greater portion of the processing time of this system was spent on displacement analysis; we therefore attempted to speed up this portion. In the case of a cracked body, it is also possible to evaluate fracture mechanics parameters such as the J integral, the strain energy release rate, and the mixed-mode stress-intensity factor. The 9-point elliptic paraboloid approximation could not analyze displacements of submicron order with high accuracy. The analysis accuracy of the displacement was improved considerably by introducing the Newton-Raphson method, which takes the deformation of a subset into consideration. The stress-intensity factor was evaluated with high accuracy, with an error of less than 1%.
Keywords: Digital image correlation, mixed mode, Newton-Raphson method, stress intensity factor.
22 Approximation of PE-MOCVD to ALD for TiN Concerning Resistivity and Chemical Composition
Authors: D. Geringswald, B. Hintze
Abstract:
The miniaturization of circuits is advancing. During chip manufacturing, structures are filled, for example, by metal organic chemical vapor deposition (MOCVD). Since this process reaches its limits in the case of very high aspect ratios, the use of alternatives such as atomic layer deposition (ALD) is possible, requiring the extension of existing coating systems. However, it is an unsolved question to what extent MOCVD can achieve results similar to an ALD process. In this context, this work addresses the characterization of a metal organic vapor deposition of titanium nitride. Based on the current state of the art, the film properties considered are coating thickness, sheet resistance, resistivity, stress and chemical composition. The setting parameters used are temperature, plasma gas ratio, plasma power, plasma treatment time, deposition time, deposition pressure, number of cycles and TDMAT flow. The derived process instructions for unstructured wafers and inside a structure with a high aspect ratio include lowering the process temperature and increasing the number of cycles, the deposition and plasma treatment times, as well as the plasma gas ratio of hydrogen to nitrogen (H2:N2). In contrast to the current process configuration, the deposited titanium nitride (TiN) layer is more uniform inside the entire test structure. Consequently, this paper provides approaches to employ MOCVD for structures with increasing aspect ratios.
Keywords: ALD, high aspect ratio, PE-MOCVD, TiN.
21 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils
Authors: Muqdad Al-Juboori, Bithin Datta
Abstract:
Hydraulic structures such as gravity dams are classified as essential structures and play a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) the building cost, 2) safety, and 3) accurate analysis of the seepage characteristics. Due to the complexity and non-linearity of the seepage process, many approximation theories have been developed; however, the application of these theories results in noticeable errors. The analytical solution, which involves a difficult conformal mapping procedure, can be applied only to simple and symmetrical problems. Therefore, the objectives of this paper are to: 1) develop a surrogate model, based on numerically simulated data obtained with the SEEPW software, to approximately simulate the seepage process related to a hydraulic structure, and 2) develop and solve a linked simulation-optimization model based on the developed surrogate model to describe the seepage occurring under a concrete gravity dam, in order to obtain an optimum and safe design at minimum cost. The results show that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.
Keywords: Artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis.
20 A Hybridization of Constructive Beam Search with Local Search for Far From Most Strings Problem
Authors: Sayyed R Mousavi
Abstract:
The Far From Most Strings Problem (FFMSP) is to obtain a string which is far from as many as possible of a given set of strings. All the input and output strings are of the same length, and two strings are said to be far if their Hamming distance is greater than or equal to a given positive integer. FFMSP belongs to the class of sequence consensus problems, which have applications in molecular biology. The problem is NP-hard; it does not admit a constant-ratio approximation either, unless P = NP. Therefore, in addition to exact and approximate algorithms, (meta)heuristic algorithms have been proposed for the problem in recent years. On the other hand, in recent years, hybrid algorithms have been proposed and successfully used for many hard problems in a variety of domains. In this paper, a new metaheuristic algorithm, called Constructive Beam and Local Search (CBLS), is investigated for the problem. It is a hybridization of constructive beam search and local search algorithms. More specifically, the proposed algorithm consists of two phases: the first phase obtains several candidate solutions via constructive beam search, and the second phase applies local search to the candidate solutions obtained in the first phase. The best solution found is returned as the final solution to the problem. The proposed algorithm is also similar to memetic algorithms in the sense that both use local search to further improve individual solutions. The CBLS algorithm is compared with the most recent published algorithm for the problem, GRASP, with significantly positive results; the improvement is by orders of magnitude in most cases.
Keywords: Bioinformatics, Far From Most Strings Problem, Hybrid metaheuristics, Matheuristics, Sequences consensus problems.
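A minimal sketch of the problem's objective and of a plain local-search phase is given below; the constructive beam search phase and the GRASP comparison are omitted, and the instance is a toy one.

```python
import random

def hamming(a, b):
    return sum(ch1 != ch2 for ch1, ch2 in zip(a, b))

def ffmsp_objective(candidate, strings, t):
    """Number of input strings whose Hamming distance to the candidate is >= t."""
    return sum(hamming(candidate, s) >= t for s in strings)

def local_search(strings, t, alphabet="ACGT", seed=0, max_passes=50):
    """First-improvement local search over single-position substitutions
    (only the second phase of the hybrid; the beam construction is omitted)."""
    rng = random.Random(seed)
    m = len(strings[0])
    current = [rng.choice(alphabet) for _ in range(m)]
    best = ffmsp_objective(current, strings, t)
    for _ in range(max_passes):
        improved = False
        for i in range(m):
            for c in alphabet:
                if c == current[i]:
                    continue
                old, current[i] = current[i], c
                val = ffmsp_objective(current, strings, t)
                if val > best:
                    best, improved = val, True
                else:
                    current[i] = old
        if not improved:
            break                      # local optimum reached
    return "".join(current), best

strings = ["ACGTACGT", "ACGTTCGT", "TTGTACGA", "ACCTACGT"]
solution, score = local_search(strings, t=5)
print(solution, score)
```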
19 Computing Entropy for Ortholog Detection
Authors: Hsing-Kuo Pao, John Case
Abstract:
Biological sequences from different species are called orthologs if they evolved from a sequence of a common ancestor species and they have the same biological function. Approximations of Kolmogorov complexity or entropy of biological sequences are already well known to be useful in extracting similarity information between such sequences – in the interest, for example, of ortholog detection. As is well known, the exact Kolmogorov complexity is not algorithmically computable. In practice one can approximate it by computable compression methods. However, such compression methods do not provide a good approximation to Kolmogorov complexity for short sequences. Herein is suggested a new approach to overcome the problem that compression approximations may not work well on short sequences. This approach is inspired by new, conditional computations of Kolmogorov entropy. A main contribution of the empirical work described shows the new set of entropy-based machine learning attributes provides good separation between positive (ortholog) and negative (non-ortholog) data – better than with good, previously known alternatives (which do not employ some means to handle short sequences well). Also empirically compared are the new entropy-based attribute set and a number of other, more standard similarity attribute sets commonly used in genomic analysis. The various similarity attributes are evaluated by cross validation, through boosted decision tree induction C5.0, and by Receiver Operating Characteristic (ROC) analysis. The results point to the conclusion: the new, entropy-based attribute set by itself is not the one giving the best prediction; however, it is the best attribute set for use in improving the other, standard attribute sets when conjoined with them.
Keywords: compression, decision tree, entropy, ortholog, ROC.
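As a concrete, if crude, example of approximating complexity by compression, the normalized compression distance below uses zlib; as the abstract notes, such plain compression approximations behave poorly on short sequences, which is exactly the limitation the conditional entropy attributes are designed to address. The sequences are synthetic.

```python
import zlib

def c(s: bytes) -> int:
    """Compressed length: a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

def ncd(x: str, y: str) -> float:
    """Normalized compression distance between two sequences."""
    cx, cy = c(x.encode()), c(y.encode())
    cxy = c((x + y).encode())
    return (cxy - min(cx, cy)) / max(cx, cy)

seq_a = "ATGGCGTACGTTAGCATTGACCGATCGATCGGATCCA" * 3
seq_b = "ATGGCGTACGTTAGCATTGACCGGTCGATCGGATCCA" * 3   # near-identical homolog
seq_c = "TTACCGGAAACTTGACCAGTTGGCCATAGCAATTTCC" * 3   # unrelated sequence
print(f"NCD(a, b) = {ncd(seq_a, seq_b):.3f}")
print(f"NCD(a, c) = {ncd(seq_a, seq_c):.3f}")
```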
18 Evaluation of Non-Staggered Body-Fitted Grid Based Solution Method in Application to Supercritical Fluid Flows
Authors: Suresh Sahu, Abhijeet M. Vaidya, Naresh K. Maheshwari
Abstract:
Efforts to understand the heat transfer behavior of supercritical water in supercritical water cooled reactors (SCWR) are ongoing worldwide to fulfill future energy demand. The higher thermal efficiency of these reactors compared to a conventional nuclear reactor is one of the driving forces attracting the attention of nuclear scientists. In this work, a solution procedure has been described for solving supercritical fluid flow problems in complex geometries. The solution procedure is based on a non-staggered grid. All governing equations are discretized by the finite volume method (FVM) in a curvilinear coordinate system. Convective terms are discretized by a first-order upwind scheme, and a central difference approximation has been used to discretize the diffusive parts. The k-ε turbulence model with the standard wall function has been employed. The SIMPLE solution procedure has been implemented for the curvilinear coordinate system. Based on this solution method, a 3-D Computational Fluid Dynamics (CFD) code has been developed. In order to demonstrate the capability of this CFD code for supercritical fluid flows, heat transfer to supercritical water in circular tubes has been considered as a test problem. Results obtained by the code have been compared with experimental results reported in the literature.
Keywords: Curvilinear coordinate, body-fitted mesh, momentum interpolation, non-staggered grid, supercritical fluids.
17 A Development of the Multiple Intelligences Measurement of Elementary Students
Authors: Chaiwat Waree
Abstract:
This research aims at the development of the Multiple Intelligences Measurement of Elementary Students. The structural accuracy test and norm establishment are based on the Multiple Intelligences Theory of Gardner. This theory consists of eight aspects, namely linguistics, logic and mathematics, visual-spatial relations, body and movement, music, human relations, self-realization/self-understanding and nature. The sample used in this research consists of elementary school students (aged between 5-11 years). The size of the sample group was determined by Yamane's table. The group has 2,504 students. Multistage sampling was used. Basic statistical analysis and construct validity testing were done using confirmatory factor analysis. The research can be summarized as follows: 1) The Multiple Intelligences Measurement, consisting of 120 items, is content-accurate. Internal consistency reliability according to the Kuder-Richardson method for the whole Multiple Intelligences Measurement equals .91. The difficulty of the measurement test is between .39-.83. Discrimination is between .21-.85. 2) The Multiple Intelligences Measurement has construct validity in a good range, that is, 8 components and all 120 test items have a statistical significance level at .01. The Chi-square value equals 4357.7; p = .00 with 244 degrees of freedom, and the Goodness of Fit Index equals 1.00. The Adjusted Goodness of Fit Index equals .92, the Comparative Fit Index (CFI) equals .68, the Root Mean Squared Residual (RMR) equals 0.064 and the Root Mean Square Error of Approximation equals 0.82. 3) The norms of the Multiple Intelligences Measurement are categorized into 3 levels. Those with high intelligence are those with percentiles above 78. Those with moderate/medium intelligence are those with percentiles between 24 and 77.9. Those with low intelligence are those with percentiles of 23.9 and below.
Keywords: Multiple Intelligences, Measurement, Elementary Students.
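The Kuder-Richardson internal-consistency statistic reported above (KR-20 for dichotomous items) can be computed as follows; the response matrix is a toy illustration, not the study's data.

```python
import numpy as np

def kuder_richardson_20(scores):
    """KR-20 internal-consistency reliability for dichotomous (0/1) item scores.
    scores: (n_examinees, n_items) array."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    p = scores.mean(axis=0)                     # proportion answering each item correctly
    q = 1.0 - p
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total test scores
    return (k / (k - 1.0)) * (1.0 - np.sum(p * q) / total_var)

# toy response matrix: 6 examinees x 5 dichotomous items (illustrative only)
responses = [
    [1, 1, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1],
    [1, 1, 1, 0, 0],
]
print(f"KR-20 = {kuder_richardson_20(responses):.3f}")
```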
16 Numerical Approach to a Mathematical Modeling of Bioconvection Due to Gyrotactic Micro-Organisms over a Nonlinear Inclined Stretching Sheet
Authors: Madhu Aneja, Sapna Sharma
Abstract:
The water-based bioconvection of a nanofluid containing motile gyrotactic micro-organisms over a nonlinear inclined stretching sheet has been investigated. The governing nonlinear boundary layer equations of the model are reduced to a system of ordinary differential equations via the Oberbeck-Boussinesq approximation and similarity transformations. Further, the modified set of equations with the associated boundary conditions is solved using the Finite Element Method. The impacts of various pertinent parameters on the velocity, temperature, nanoparticle concentration and motile micro-organism density profiles are obtained and analyzed in detail. The results show that with an increase in the angle of inclination δ, the velocity decreases while the temperature, nanoparticle concentration and density of motile micro-organisms increase. Additionally, the skin friction coefficient, Nusselt number, Sherwood number and density number are computed for various thermophysical parameters. It is noticed that increasing the Brownian motion and thermophoresis parameters leads to an increase in the temperature of the fluid, which results in a reduction of the Nusselt number. On the contrary, the Sherwood number rises with an increase in the Brownian motion and thermophoresis parameters. The findings have been validated by comparing the results of special cases with existing studies.
Keywords: Bioconvection, inclined stretching sheet, gyrotactic micro-organisms, Brownian motion, thermophoresis, finite element method.
15 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material
Authors: S. Boria
Abstract:
In recent years, the crashworthiness of an automotive body structure can be improved from the very beginning of the design stage thanks to the development of specific optimization tools. It is well known that finite element codes can help the designer to investigate the crashing performance of structures under dynamic impact. Therefore, by coupling a nonlinear mathematical programming procedure and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods which are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various types of meta-modeling techniques, the Kriging method seems to be excellent in accuracy, robustness and efficiency compared to the others when applied to crashworthiness optimization. Therefore, such a meta-model was used in this work in order to improve the structural optimization of a bumper for a racing car in composite material subjected to frontal impact. The specific energy absorption represents the objective function to maximize, and the geometrical parameters subject to some design constraints are the design variables. The LS-DYNA code was interfaced with the LS-OPT tool in order to find the optimized solution, through the use of a domain reduction strategy. With the use of the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
Keywords: Composite material, crashworthiness, finite element analysis, optimization.
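A minimal sketch of the Kriging (Gaussian process) meta-model idea is shown below using scikit-learn; the two design variables, the analytic stand-in for the FE-computed specific energy absorption, and the grid search are all illustrative simplifications of the LS-OPT workflow.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical DOE data: two geometric design variables (e.g. wall thickness and
# taper angle) and the specific energy absorption (SEA) that would come from FE
# crash runs. The analytic "sea" function below merely stands in for LS-DYNA.
rng = np.random.default_rng(0)

def sea(x):
    t, a = x[:, 0], x[:, 1]
    return 30 - (t - 2.2) ** 2 - 0.5 * (a - 12.0) ** 2

X_doe = np.column_stack([rng.uniform(1.0, 4.0, 25), rng.uniform(5.0, 20.0, 25)])
y_doe = sea(X_doe)

# Kriging meta-model (Gaussian process with an RBF correlation model)
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 5.0]),
                              alpha=1e-6, normalize_y=True)
gp.fit(X_doe, y_doe)

# crude grid search of the meta-model prediction over the design domain
tt, aa = np.meshgrid(np.linspace(1.0, 4.0, 60), np.linspace(5.0, 20.0, 60))
grid = np.column_stack([tt.ravel(), aa.ravel()])
pred = gp.predict(grid)
best = grid[np.argmax(pred)]
print("predicted optimum (thickness, angle):", np.round(best, 2))
```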
14 Adaptive Kalman Filter for Noise Estimation and Identification with Bayesian Approach
Authors: Farhad Asadi, S. Hossein Sadati
Abstract:
The Bayesian approach can be used for parameter identification and extraction in state-space models, and its ability to analyze sequences of data in dynamical systems has been demonstrated in various works in the literature. In this paper, an adaptive Kalman filter with a Bayesian approach for identification of the measurement noise variance is developed. Next, it is applied to the estimation of the dynamical state and the measurement data in a discrete linear dynamical system. At each time step, this algorithm estimates the measurement noise variance and the state of the system with a Kalman filter. Next, an approximation is designed at each step separately, and consequently the sufficient statistics of the state and the noise variances are computed with a fixed-point iteration of the adaptive Kalman filter. Different simulations are used to show the influence of the measurement noise variance on the algorithm. Firstly, the effect of the noise variance and its distribution on the detection and identification performance is simulated for a Kalman filter without the Bayesian formulation. Then, the simulation is applied to the adaptive Kalman filter with the ability to track the noise variance in the measurement data. In these simulations, the influence of the noise distribution of the measurement data at each step is estimated, and the true variance of the data obtained by the algorithm is compared in different scenarios. Afterwards, a typical nonlinear state-space model with induced measurement noise is simulated by this approach. Finally, the performance and the important limitations of this algorithm in these simulations are explained.
Keywords: Adaptive filtering, Bayesian approach, Kalman filtering, variance tracking.
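A simplified, innovation-based variant of measurement noise variance tracking (used here only as a stand-in for the paper's Bayesian fixed-point iteration) can be sketched for a scalar random-walk model as follows; all noise levels are illustrative.

```python
import numpy as np

def adaptive_kalman_1d(y, q=1e-3, r0=1.0, rho=0.02):
    """Scalar Kalman filter for a random-walk state that also tracks the unknown
    measurement noise variance R from the innovation sequence.
    q: process noise variance, r0: initial guess of R, rho: adaptation rate."""
    x, p, r = 0.0, 1.0, r0
    states, r_estimates = [], []
    for yk in y:
        # prediction (random-walk model: x_k = x_{k-1} + w, w ~ N(0, q))
        p = p + q
        # innovation; its expected square is p + R
        nu = yk - x
        # innovation-based update of the measurement noise variance estimate
        r = max(1e-8, (1 - rho) * r + rho * (nu ** 2 - p))
        # measurement update
        k = p / (p + r)
        x = x + k * nu
        p = (1 - k) * p
        states.append(x)
        r_estimates.append(r)
    return np.array(states), np.array(r_estimates)

rng = np.random.default_rng(1)
true_r = 0.5
truth = np.cumsum(rng.normal(0, np.sqrt(1e-3), 2000))     # slowly drifting state
measurements = truth + rng.normal(0, np.sqrt(true_r), 2000)
x_hat, r_hat = adaptive_kalman_1d(measurements)
print(f"final estimate of R: {r_hat[-1]:.3f} (true value {true_r})")
```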
13 Arriving at an Optimum Value of Tolerance Factor for Compressing Medical Images
Authors: Sumathi Poobal, G. Ravindran
Abstract:
Medical imaging takes advantage of digital technology in imaging and teleradiology. In teleradiology systems a large amount of data is acquired, stored and transmitted. A major technology that may help to solve the problems associated with the massive data storage and data transfer capacity is data compression and decompression. There are many methods of image compression available. They are classified as lossless and lossy compression methods. In a lossy compression method the decompressed image contains some distortion. Fractal image compression (FIC) is a lossy compression method. In fractal image compression an image is coded as a set of contractive transformations in a complete metric space. The set of contractive transformations is guaranteed to produce an approximation to the original image. In this paper FIC is achieved by PIFS using quadtree partitioning. PIFS is applied to different images such as ultrasound, CT scan, angiogram, X-ray and mammogram images. In each modality approximately twenty images are considered and the average values of the compression ratio and PSNR are obtained. In this method of fractal encoding, the tolerance factor Tmax is varied from 1 to 10, keeping the other standard parameters constant. For all modalities of images the compression ratio and Peak Signal to Noise Ratio (PSNR) are computed and studied. The quality of the decompressed image is assessed by the PSNR values. From the results it is observed that the compression ratio increases with the tolerance factor, and the mammogram has the highest compression ratio. The quality of the image is not degraded up to an optimum value of the tolerance factor, Tmax, equal to 8, because of the properties of fractal compression.
Keywords: Fractal image compression, IFS, PIFS, PSNR, Quadtree partitioning.
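The role of the tolerance factor in quadtree-based PIFS encoding can be sketched as follows: a range block is matched to a (decimated) domain block by a least-squares contrast/brightness fit, and the block is split whenever the residual exceeds Tmax. The blocks below are synthetic, and the full encoder (domain search, decimation, isometries) is omitted.

```python
import numpy as np

def fit_range_to_domain(domain_block, range_block):
    """Least-squares contrast s and brightness o mapping a domain block onto a
    range block, plus the resulting RMS error."""
    d = domain_block.astype(float).ravel()
    r = range_block.astype(float).ravel()
    var_d = d.var()
    s = 0.0 if var_d == 0 else np.cov(d, r, bias=True)[0, 1] / var_d
    o = r.mean() - s * d.mean()
    rms = np.sqrt(np.mean((s * d + o - r) ** 2))
    return s, o, rms

def needs_split(domain_block, range_block, t_max=8.0):
    """Quadtree decision used during PIFS encoding: if no affine (s, o) mapping
    matches the range block within the tolerance factor, the block is split."""
    _, _, rms = fit_range_to_domain(domain_block, range_block)
    return rms > t_max

rng = np.random.default_rng(0)
range_block = rng.integers(0, 256, (8, 8))
domain_block = 0.7 * range_block + 20 + rng.normal(0, 3, (8, 8))  # similar block
print(fit_range_to_domain(domain_block, range_block)[2])           # small RMS error
print(needs_split(domain_block, range_block, t_max=8.0))
```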
12 Automatic Removal of Ocular Artifacts using JADE Algorithm and Neural Network
Authors: V Krishnaveni, S Jayaraman, A Gunasekaran, K Ramadoss
Abstract:
The ElectroEncephaloGram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong ElectroOculoGram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making its visual or automated inspection difficult. The goal of ocular artifact removal is to remove ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential in obtaining the least dependent source components. In this paper, the independent components are obtained by using the JADE algorithm (the best separating algorithm) and are classified into either artifact components or neural components. A neural network is used for the classification of the obtained independent components. A neural network requires input features that accurately represent the true character of the input signals so that it can classify the signals based on those key characteristics that differentiate between various signals. In this work, Auto Regressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from the EEG data. First, a Polynomial Neural Network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm is used, and secondly, a feed-forward neural network classifier trained by a standard back-propagation algorithm is used. The results show that JADE-FNN performs better than JADE-PNN.
Keywords: Auto Regressive (AR) Coefficients, Feed Forward Neural Network (FNN), Joint Approximate Diagonalization of Eigen-matrices (JADE) Algorithm, Polynomial Neural Network (PNN).
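A sketch of how AR coefficients can be extracted as classifier features is given below, using the Yule-Walker equations; the model order and the toy AR(2) signal are illustrative, and the JADE separation and neural network stages are not shown.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_coefficients(signal, order=6):
    """AR model coefficients estimated from the Yule-Walker equations; such
    coefficients can serve as input features for a classifier."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    n = len(x)
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(order + 1)])
    # solve the symmetric Toeplitz system R a = r[1:order+1]
    return solve_toeplitz((r[:order], r[:order]), r[1 : order + 1])

rng = np.random.default_rng(0)
# toy AR(2) process standing in for an independent component of the EEG
x = np.zeros(4096)
for t in range(2, len(x)):
    x[t] = 1.3 * x[t - 1] - 0.4 * x[t - 2] + rng.standard_normal()
print(np.round(ar_coefficients(x, order=2), 3))   # close to [1.3, -0.4]
```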
11 Data Hiding in Images in Discrete Wavelet Domain Using PMM
Authors: Souvik Bhattacharyya, Gautam Sanyal
Abstract:
Over the last two decades, due to the hostile environment of the internet, concerns about the confidentiality of information have increased at a phenomenal rate. Therefore, to safeguard information from attacks, a number of data/information hiding methods have evolved, mostly in the spatial and transform domains. In spatial domain data hiding techniques, the information is embedded directly in the image plane itself. In transform domain data hiding techniques, the image is first changed from the spatial domain to some other domain and then the secret information is embedded, so that the secret information remains more secure from any attack. Information hiding algorithms in the time or spatial domain have high capacity but relatively lower robustness. In contrast, the algorithms in the transform domain, such as DCT and DWT, have certain robustness against some multimedia processing operations. In this work the authors propose a novel steganographic method for hiding information in the transform domain of a gray-scale image. The proposed approach works by converting the gray-level image into the transform domain using the discrete integer wavelet technique through the lifting scheme. This approach performs a 2-D lifting wavelet decomposition of the cover image with the lifted Haar wavelet and computes the approximation coefficient matrix CA and the detail coefficient matrices CH, CV, and CD. The next step is to apply the PMM technique to those coefficients to form the stego image. The aim of this paper is to propose a high-capacity image steganography technique that uses the pixel mapping method in the integer wavelet domain with acceptable levels of imperceptibility and distortion in the cover image and a high level of overall security. This solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
Keywords: Cover Image, Pixel Mapping Method (PMM), Stego Image, Integer Wavelet Transform.
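A one-level 2-D integer lifting Haar decomposition producing the CA, CH, CV and CD matrices can be sketched as follows; the sub-band naming convention and the 8x8 toy image are illustrative, and the PMM embedding step itself is not reproduced here.

```python
import numpy as np

def haar_lifting_1d(x):
    """One lifting step of the integer Haar (S) transform along the last axis."""
    even, odd = x[..., 0::2], x[..., 1::2]
    detail = odd - even                         # predict step
    approx = even + detail // 2                 # update step (integer average)
    return approx, detail

def haar_lifting_2d(img):
    """One-level 2-D integer lifting Haar decomposition of a gray-scale image,
    returning approximation (CA) and detail (CH, CV, CD) coefficient matrices."""
    img = np.asarray(img, dtype=np.int64)       # even-sized image assumed
    low, high = haar_lifting_1d(img)            # transform along rows
    ca, ch = haar_lifting_1d(low.T)             # transform the low band along columns
    cv, cd = haar_lifting_1d(high.T)            # transform the high band along columns
    return ca.T, ch.T, cv.T, cd.T

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (8, 8))            # stand-in for a cover image
CA, CH, CV, CD = haar_lifting_2d(cover)
print(CA.shape, CH.shape, CV.shape, CD.shape)   # four 4x4 coefficient blocks
# A PMM-style embedding would now map secret bits onto selected coefficients
# (e.g. in CH/CV/CD) before applying the inverse lifting steps.
```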