Search results for: density approximation.
923 Advanced Micromanufacturing for Ultra Precision Part by Soft Lithography and Nano Powder Injection Molding
Authors: Andy Tirta, Yus Prasetyo, Eung-Ryul Baek, Chul-Jin Choi, Hye-Moon Lee
Abstract:
Recently, advanced technologies that offer high precision, relatively easy and economical processing, and rapid production have been needed to meet the high demand for ultra-precision micro parts. In our research, micromanufacturing based on soft lithography and nanopowder injection molding was investigated. The silicone metal pattern, with its ultra-thick and high-aspect-ratio features, was successfully used to fabricate the polydimethylsiloxane (PDMS) micro mold. The process was followed by nanopowder injection molding (PIM) using a simple vacuum hot press. The 17-4PH nanopowder, with a diameter of 100 nm, was successfully injected and formed a green micro-bearing sample with a thickness, microchannel width and aspect ratio of 700 μm, 60 μm and 12, respectively. Sintering was carried out at 1200 °C for 2 hours with a heating rate of 0.83 °C/min. Since a low powder loading (45%) was applied to achieve green sample fabrication, ~15% shrinkage occurred at 86% relative density. Several improvements are still needed to produce a high-accuracy, fully dense sintered part.
Keywords: Micromanufacturing, Nano PIM, PDMS micro mould.
922 FEM Simulation of HE Blast-Fragmentation Warhead and the Calculation of Lethal Range
Authors: G. Tanapornraweekit, W. Kulsirikasem
Abstract:
This paper presents the simulation of a fragmentation warhead using a hydrocode, Autodyn. The goal of this research is to determine the lethal range of such a warhead. This study investigates the lethal range of warheads with and without steel balls as preformed fragments. The results from the FE simulation, i.e. initial velocities and ejected spray angles of fragments, are further processed using an analytical approach so as to determine the fragment hit density and probability of kill of a modelled warhead. Simulating a large number of preformed fragments inside a warhead requires expensive computational resources. Therefore, this study models the problem in an alternative way by considering a mass of preformed fragments equivalent to the mass of the warhead casing. This approach yields differences in fragment velocities from the analytical results of approximately 7% and 20% for one and two layers of preformed fragments, respectively. The lethal ranges of the simulated warheads are 42.6 m and 56.5 m for warheads with one and two layers of preformed fragments, respectively, compared to 13.85 m for a warhead without preformed fragments. These lethal ranges are based on the requirement of fragment hit density. The lethal ranges based on the probability of kill are 27.5 m, 61 m and 70 m for warheads with no preformed fragments, one layer and two layers of preformed fragments, respectively.
Keywords: Lethal Range, Natural Fragment, Preformed Fragment, Warhead.
921 A Markov Chain Approximation for ATS Modeling for the Variable Sampling Interval CCC Control Charts
Authors: Y. K. Chen, K. C. Chiou, C. Y. Chen
Abstract:
The cumulative conformance count (CCC) charts are widespread in process monitoring of high-yield manufacturing. Recently, it has been found that the use of a variable sampling interval (VSI) scheme can further enhance the efficiency of the standard CCC charts. The average time to signal (ATS) a shift in defect rate has become the traditional measure of efficiency of a chart with the VSI scheme. Determining the ATS is frequently a difficult and tedious task. A simple method based on a finite Markov chain approach for modeling the ATS is developed. In addition, numerical results are given.
Keywords: Cumulative conformance count, variable sampling interval, Markov chain, average time to signal, control chart.
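As an illustration of the finite Markov chain approach to the ATS, the following sketch computes the expected time to signal from a transient transition matrix and a vector of sampling intervals; the matrix Q, the intervals t and the starting distribution are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical illustration of the finite Markov chain approach to ATS:
# the chart states are discretized into transient states with transition
# matrix Q, each state i is visited for a sampling interval t[i], and
# signalling is the absorbing state.
def average_time_to_signal(Q, t, start):
    """ATS = s' (I - Q)^(-1) t, the expected total sampling time before absorption."""
    n = Q.shape[0]
    fundamental = np.linalg.inv(np.eye(n) - Q)   # expected number of visits to each state
    return start @ fundamental @ t

# Toy 3-state example (values are illustrative only, not from the paper).
Q = np.array([[0.80, 0.15, 0.03],
              [0.10, 0.70, 0.15],
              [0.05, 0.10, 0.60]])
t = np.array([1.0, 0.5, 0.1])        # long, medium, short sampling intervals
start = np.array([1.0, 0.0, 0.0])    # chart starts in the "safe" region
print(average_time_to_signal(Q, t, start))
```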
920 An Implicit Region-Based Deformable Model with Local Segmentation Applied to Weld Defects Extraction
Authors: Y. Boutiche, N. Ramou, M. Ben Gharsallah
Abstract:
This paper presents and discusses a model that allows local segmentation by using statistical information of a given image. It is based on the Chan-Vese model, curve evolution, partial differential equations and the binary level set method. The proposed model uses the piecewise constant approximation of the Chan-Vese model to compute the Signed Pressure Force (SPF) function, which attracts the curve to the true object boundaries. The implemented model is used to extract weld defects from weld radiographic images in order to calculate the perimeters and areas of those weld defects; encouraging results are obtained on synthetic and real radiographic images.
Keywords: Active contour, Chan-Vese Model, local segmentation, weld radiographic images.
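The sketch below illustrates one common way to build a Signed Pressure Force from the Chan-Vese piecewise-constant means; the exact normalisation and update rule in the paper may differ, so this is only an assumed minimal form.

```python
import numpy as np

def spf(image, mask):
    """Signed Pressure Force from the piecewise-constant (Chan-Vese) means.

    A minimal sketch: c1/c2 are the mean intensities inside/outside the current
    contour (binary mask), and the SPF is assumed to take the common form
    (I - (c1 + c2)/2) normalised by its maximum magnitude."""
    inside, outside = image[mask > 0], image[mask == 0]
    c1, c2 = inside.mean(), outside.mean()
    force = image - (c1 + c2) / 2.0
    return force / (np.abs(force).max() + 1e-12)

# Toy usage: at each iteration the binary level set would be updated where the
# SPF is positive/negative, driving the contour toward the object boundaries.
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0      # synthetic bright square
init = np.zeros_like(img); init[10:30, 10:30] = 1.0    # initial contour as a binary mask
print(spf(img, init).min(), spf(img, init).max())
```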
919 Effects of Four Dietary Oils on Cholesterol and Fatty Acid Composition of Egg Yolk in Layers
Authors: A. F. Agboola, B. R. O. Omidiwura, A. Oyeyemi, E. A. Iyayi, A. S. Adelani
Abstract:
Dietary cholesterol has elicited the most public interest as it relates to coronary heart disease. Thus, humans have been paying more attention to health, thereby reducing consumption of cholesterol-enriched foods. The egg is considered one of the major sources of human dietary cholesterol. However, an alternative way to reduce the potential cholesterolemic effect of eggs is to modify the fatty acid composition of the yolk. The effects of palm oil (PO), soybean oil (SO), sesame seed oil (SSO) and fish oil (FO) supplementation in the diets of layers on egg yolk fatty acids, cholesterol, egg production and egg quality parameters were evaluated in a 42-day feeding trial. One hundred and five Isa Brown laying hens of 34 weeks of age were randomly distributed into seven groups of five replicates and three birds per replicate in a completely randomized design. Seven corn-soybean basal diets (BD) were formulated: BD+no oil (T1), BD+1.5% PO (T2), BD+1.5% SO (T3), BD+1.5% SSO (T4), BD+1.5% FO (T5), BD+0.75% SO+0.75% FO (T6) and BD+0.75% SSO+0.75% FO (T7). Five eggs were randomly sampled at day 42 from each replicate to assay the cholesterol and fatty acid profile of the egg yolk and for egg quality assessment. Results showed no significant (P>0.05) differences in production performance, egg cholesterol and egg quality parameters except for yolk height, albumen height, yolk index, egg shape index, Haugh unit, and yolk colour. There were no significant (P>0.05) differences in total cholesterol, high density lipoprotein and low density lipoprotein levels of egg yolk across the treatments. However, diets had an effect (P<0.05) on the TAG (triacylglycerol) and VLDL (very low density lipoprotein) of the egg yolk. The highest TAG (603.78 mg/dl) and VLDL values (120.76 mg/dl) were recorded in eggs of hens on T4 (1.5% sesame seed oil) and were similar to those on T3 (1.5% soybean oil), T5 (1.5% fish oil) and T6 (0.75% soybean oil + 0.75% fish oil). Results also revealed significant (P<0.05) variation in the eggs' total polyunsaturated fatty acid (PUFA) content. In conclusion, it is suggested that dietary oils could be included in layers' diets to produce designer eggs low in cholesterol and high in PUFA, especially omega-3 fatty acids.
Keywords: Dietary oils, Egg cholesterol, Egg fatty acid profile, Egg quality parameters.
918 Production of Spherical Cementite within Bainitic Matrix Microstructures in High Carbon Powder Metallurgy Steels
Authors: O. Altuntaş, A. Güral
Abstract:
The hardness-microstructure relationships of spherical cementite in a bainitic matrix, obtained by different heat treatment cycles applied to a high carbon powder metallurgy (P/M) steel, were investigated. For this purpose, 1.5 wt.% natural graphite powder was admixed into atomized iron powder, and the mixed powders were compacted under 700 MPa at room temperature and then sintered at 1150 °C under a protective argon gas atmosphere. The densities of the green and sintered samples were measured via the Archimedes method. A density of 7.4 g/cm3 was obtained after sintering, corresponding to a relative density of 94%. The sintered specimens, having primary cementite plus lamellar pearlitic structures, were fully quenched from 950 °C and then over-tempered at 705 °C for 60 minutes to produce spherical, fine cementite particles in the ferritic matrix. Following this treatment, the samples were annealed at 735 °C for 3 minutes and austempered in a 300 °C salt bath for periods of 1 to 5 hours. As a result of this process, spherical cementite particles could be produced in the bainitic matrix. This microstructure was designed to improve the wear resistance and toughness of P/M steels. The microstructures were characterized and analyzed by SEM and by micro- and macro-hardness measurements.
Keywords: Powder metallurgy steel, heat treatment, bainite, spherical cementite.
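A small sketch of the Archimedes density evaluation mentioned in the abstract is given below; the weighings and the 7.86 g/cm3 theoretical density are assumed illustrative values, chosen only so that the result lands near the reported 7.4 g/cm3 and 94% relative density.

```python
# Archimedes method sketch: the bulk density follows from dry and submerged
# weighings, and the relative density compares it with an assumed pore-free
# (theoretical) density. All numbers here are illustrative, not measured data.
def archimedes_density(mass_dry_g, mass_submerged_g, water_density=0.9982):
    return mass_dry_g * water_density / (mass_dry_g - mass_submerged_g)

bulk = archimedes_density(mass_dry_g=10.00, mass_submerged_g=8.65)
relative = bulk / 7.86   # assumed theoretical density of the steel, g/cm3
print(f"bulk density ~ {bulk:.2f} g/cm3, relative density ~ {relative:.0%}")
```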
917 On Finite Wordlength Properties of Block-Floating-Point Arithmetic
Authors: Abhijit Mitra
Abstract:
A special case of floating point data representation is the block floating point format, where a block of operands is forced to share a joint exponent term. This paper deals with the finite wordlength properties of this data format. The theoretical errors associated with the error model for the block floating point quantization process are investigated with the help of error distribution functions. A fast and easy approximation formula for calculating the signal-to-noise ratio in quantization to block floating point format is derived. This representation is found to be a useful compromise between the fixed point and floating point formats due to its acceptable numerical error properties over a wide dynamic range.
Keywords: Block floating point, Roundoff error, Block exponent distribution function, Signal factor.
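The following sketch (not the paper's derivation) quantizes a block of samples to a shared-exponent, B-bit-mantissa format and measures the resulting signal-to-noise ratio, which is the quantity the paper's approximation formula targets.

```python
import numpy as np

# A minimal block-floating-point sketch: quantise a block of samples with a
# joint block exponent and a B-bit mantissa, then measure the SNR.
def bfp_quantize(block, mantissa_bits):
    exponent = np.ceil(np.log2(np.max(np.abs(block)) + 1e-300))  # joint block exponent
    scale = 2.0 ** exponent
    step = 2.0 ** -(mantissa_bits - 1)                           # mantissa quantisation step
    mantissas = np.round(block / scale / step) * step
    return mantissas * scale

rng = np.random.default_rng(0)
x = rng.normal(size=4096)
for B in (8, 12, 16):
    e = x - bfp_quantize(x, B)
    snr_db = 10 * np.log10(np.sum(x**2) / np.sum(e**2))
    print(f"{B}-bit mantissa: SNR ~ {snr_db:.1f} dB")
```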
916 Image Enhancement of Medical Images using Gabor Filter Bank on Hexagonal Sampled Grids
Authors: Veni S., K. A. Narayanankutty
Abstract:
For about two decades, scientists have been developing techniques for enhancing the quality of medical images using the Fourier transform, the discrete wavelet transform (DWT), PDE models, etc. In this work, Gabor wavelets on a hexagonally sampled grid of the image are proposed. This method has optimal approximation-theoretic performance, giving a good quality image. The computational cost is considerably lower than that of similar processing in the rectangular domain. As X-ray images contain light-scattered pixels, instead of a unique sigma, a parameter sigma of 0.5 to 3 is found to satisfy most of the image interpolation requirements in terms of a high Peak Signal-to-Noise Ratio (PSNR), a lower Mean Squared Error (MSE) and better image quality, by adopting a windowing technique.
Keywords: Hexagonal lattices, Gabor filter, Interpolation, image processing.
915 2D Rigid Registration of MR Scans using the 1D Binary Projections
Authors: Panos D. Kotsas
Abstract:
This paper presents the application of a signal-intensity-independent registration criterion for 2D rigid body registration of medical images using 1D binary projections. The criterion is defined as the weighted ratio of two projections. The ratio is computed on a pixel-per-pixel basis, and weighting is performed by setting the ratios between one and zero pixels to a standard high value. The mean squared value of the weighted ratio is computed over the union of the one-areas of the two projections and is minimized using Chebyshev polynomial approximation with n = 5 points. The sum of the x and y projections is used for translational adjustment and a 45° projection for rotational adjustment. Twenty T1-T2 registration experiments were performed and gave mean errors of 1.19° and 1.78 pixels. The method is suitable for contour/surface matching. Further research is necessary to determine the robustness of the method with regard to threshold, shape and missing data.
Keywords: Medical image, projections, registration, rigid.
914 Increasing Performance of Autopilot Guided Small Unmanned Helicopter
Authors: Tugrul Oktay, Mehmet Konar, Mustafa Soylak, Firat Sal, Murat Onay, Orhan Kizilkaya
Abstract:
In this paper, the autonomous performance of a small, in-house manufactured unmanned helicopter is increased. For this purpose, a small unmanned helicopter was manufactured at Erciyes University, Faculty of Aeronautics and Astronautics; it is called ZANKA-Heli-I. For performance maximization, autopilot parameters are determined by minimizing a cost function consisting of flight performance parameters such as settling time, rise time and overshoot during trajectory tracking. For this purpose, a stochastic optimization method named simultaneous perturbation stochastic approximation (SPSA) is applied. Using this approach, a considerable increase in autonomous performance (around 23%) is obtained.
Keywords: Small helicopters, hierarchical control, stochastic optimization, autonomous performance maximization, autopilots.
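The sketch below shows a basic simultaneous perturbation stochastic approximation (SPSA) loop of the kind named in the abstract; the gain sequences and the toy quadratic cost stand in for the actual autopilot parameters and flight-performance cost.

```python
import numpy as np

# A minimal SPSA sketch. The cost function, gains and parameter vector below
# are illustrative only; in the paper the parameters are autopilot gains and
# the cost combines settling time, rise time and overshoot.
def spsa(cost, theta0, a=0.1, c=0.1, alpha=0.602, gamma=0.101, iterations=200, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iterations + 1):
        ak, ck = a / k**alpha, c / k**gamma
        delta = rng.choice([-1.0, 1.0], size=theta.shape)        # simultaneous perturbation
        grad = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * grad
    return theta

# Toy usage: minimise a quadratic stand-in for the flight-performance cost.
print(spsa(lambda p: np.sum((p - np.array([1.0, -2.0, 0.5]))**2), np.zeros(3)))
```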
913 Generalization of Clustering Coefficient on Lattice Networks Applied to Criminal Networks
Authors: Christian H. Sanabria-Montaña, Rodrigo Huerta-Quintanilla
Abstract:
A lattice network is a special type of network in which all nodes have the same number of links and the boundary conditions are periodic. The most basic lattice network is the ring, a one-dimensional network with periodic boundary conditions. In contrast, the Cartesian product of d rings forms a d-dimensional lattice network. An analytical expression currently exists for the clustering coefficient in this type of network, but the theoretical value is valid only up to a certain connectivity value; in other words, the analytical expression is incomplete. Here we obtain analytically the clustering coefficient expression in d-dimensional lattice networks for any link density. Our analytical results show that, as the link density tends to 1, the clustering coefficient of a lattice network approaches that of a fully connected network. We developed a model in criminology to which the generalized clustering coefficient expression is applied. The model states that delinquents learn the know-how of the crime business by sharing knowledge, directly or indirectly, with their friends in the gang. This generalization sheds light on network properties, which is important for developing new models in different fields where the network structure plays an important role in the system dynamics, such as criminology, evolutionary game theory and econophysics, among others.
Keywords: Clustering coefficient, criminology, generalized, regular network d-dimensional.
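For reference, the sketch below numerically checks the classic one-dimensional ring-lattice clustering coefficient against its well-known closed form 3(k-2)/(4(k-1)); the paper's contribution generalizes this type of expression to d dimensions and to any link density.

```python
import networkx as nx

# Sketch: compare the numeric clustering coefficient of a ring lattice with
# the classic closed form C = 3(k-2) / (4(k-1)), valid for a 1D ring where
# each node links to its k nearest neighbours (k even, small relative to n).
n = 200
for k in (4, 6, 10, 20):
    ring = nx.watts_strogatz_graph(n, k, p=0.0)      # p = 0 keeps the pure ring lattice
    numeric = nx.average_clustering(ring)
    closed_form = 3 * (k - 2) / (4 * (k - 1))
    print(f"k={k}: numeric C = {numeric:.4f}, closed form = {closed_form:.4f}")
```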
912 Approximated Solutions of Two-Point Nonlinear Boundary Problem by a Combination of Taylor Series Expansion and Newton Raphson Method
Authors: Chinwendu B. Eleje, Udechukwu P. Egbuhuzor
Abstract:
One of the difficulties encountered by many researchers in solving nonlinear Boundary Value Problems (BVPs) is finding approximate solutions with minimal deviation from the exact solutions without much rigor and complication. In this paper, we propose an approach to solving a two-point BVP which combines the Taylor series expansion method and the Newton-Raphson method. Furthermore, the fourth and sixth order approximate solutions are obtained, and we compare their relative errors and rates of convergence to the exact solution. Finally, some numerical simulations are presented to show the behavior of the solution and its derivatives.
Keywords: Newton Raphson method, non-linear boundary value problem, Taylor series approximation, Michaelis-Menten equation.
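A generic sketch of coupling a series-type forward march with Newton-Raphson on the missing initial slope is shown below; the Michaelis-Menten-type right-hand side, boundary values and step scheme are illustrative assumptions, not the paper's fourth and sixth order expansions.

```python
import numpy as np

# Generic illustration for a two-point BVP y'' = f(x, y), y(0) = y0, y(1) = y1:
# guess the missing initial slope s, march forward with a truncated Taylor
# step, and correct s from the boundary residual with Newton-Raphson.
def march(s, y0=1.0, n=200, h=1.0 / 200):
    y, dy = y0, s
    for _ in range(n):
        d2y = y / (y + 0.5)               # assumed Michaelis-Menten-type right-hand side
        y += h * dy + 0.5 * h * h * d2y   # second-order Taylor step
        dy += h * d2y
    return y

def shoot(target=0.2, s=0.0, tol=1e-10):
    for _ in range(50):
        r = march(s) - target
        if abs(r) < tol:
            break
        dr = (march(s + 1e-6) - march(s)) / 1e-6   # numerical derivative for Newton-Raphson
        s -= r / dr
    return s

print("missing initial slope:", shoot())
```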
911 A New Reliability Allocation Method Based On Fuzzy Numbers
Authors: Peng Li, Chuanri Li, Tao Li
Abstract:
Reliability allocation is quite important during the early design and development stages of a system, in order to apportion its specified reliability goal to subsystems. This paper improves the fuzzy reliability allocation method and gives concrete processes for determining the factor and sub-factor sets, weight sets, judgment set, and multi-stage fuzzy evaluation. To determine the weights of the factor and sub-factor sets, modified trapezoidal numbers are proposed to reduce errors caused by subjective factors. To decrease the fuzziness in fuzzy division, an approximation method based on linear programming is employed. To compute the explicit values of fuzzy numbers, the centroid method of defuzzification is used. An example is provided to illustrate the application of the proposed reliability allocation method based on fuzzy arithmetic.
Keywords: Reliability allocation, fuzzy arithmetic, allocation weight.
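The sketch below shows centroid defuzzification of a trapezoidal fuzzy number, the final step mentioned in the abstract; the example trapezoid (0.2, 0.3, 0.4, 0.6) is a hypothetical allocation weight.

```python
import numpy as np

# Centroid defuzzification sketch for a trapezoidal fuzzy number (a, b, c, d):
# membership rises on [a, b], equals 1 on [b, c] and falls on [c, d]; the
# crisp value is the centroid of the membership function, evaluated on a grid.
def trapezoid_membership(x, a, b, c, d):
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def centroid(a, b, c, d, samples=10001):
    x = np.linspace(a, d, samples)
    mu = trapezoid_membership(x, a, b, c, d)
    return float(np.sum(x * mu) / np.sum(mu))

print(centroid(0.2, 0.3, 0.4, 0.6))   # crisp allocation weight from a fuzzy judgment
```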
910 A Comparative Study of High Order Rotated Group Iterative Schemes on Helmholtz Equation
Authors: Norhashidah Hj. Mohd Ali, Teng Wai Ping
Abstract:
In this paper, we present a high order group explicit method for solving the two-dimensional Helmholtz equation. The presented method is derived from a nine-point fourth order finite difference approximation formula obtained from a 45-degree rotation of the standard grid, which makes it possible to construct an iterative procedure with reduced complexity. The developed method is compared with existing group iterative schemes available in the literature in terms of computational time, iteration counts, and computational complexity. The comparative performance of the methods is discussed and reported.
Keywords: Explicit group method, finite difference, Helmholtz equation, rotated grid, standard grid.
909 District 10 in Tehran: Urban Transformation and the Survey Evidence of Loss in Place Attachment in High Rises
Authors: Roya Morad, W. Eirik Heintz
Abstract:
The identity of a neighborhood is inevitably shaped by the architecture and the people of that place. Conventionally, the streets within each neighborhood served as a semi-public/private extension of the private living spaces. The street as a design element formed a hybrid condition that was neither totally public nor private, and it encouraged social interactions. Thus, through creating a sense of community, one of the most basic human needs, belonging, was met. Like other major global cities, Tehran has undergone serious urbanization. Developing into a capital city of high rises has resulted in an increase in urban density. Although allocating more residential units to each neighborhood was a critical response to the population boom and the limited land area of the city, it also created a crisis in terms of social communication and place attachment. District 10 in Tehran is the neighborhood that has undergone the most urban transformation among the city's 22 districts and currently has the highest population density. This paper explores how the active streets in District 10 have changed into their current condition of high rises with a lack of meaningful social interaction among their inhabitants. A residential building can be thought of as a large group of people. One would think that as the number of people increases, the opportunities for social communication would increase as well. However, according to the survey, there is an inverse relationship between the two. As the number of people in a residential building increases, the quality of each acquaintance is reduced, and the depth of relationships between people tends to decrease. This comes from the anonymity of being part of a crowd and the lack of social spaces that characterizes most high-rise apartment buildings. Without a sense of community, attachment to a neighborhood is decreased. This paper further explores how the neighborhood participates in fulfilling one's need for social interaction and focuses on the qualitative aspects of alternative spaces that can redevelop the sense of place attachment within the community.
Keywords: High density, place attachment, social communication, street life, urban transformation.
908 Numerical Solution for Integro-Differential Equations by Using Quartic B-Spline Wavelet and Operational Matrices
Authors: Khosrow Maleknejad, Yaser Rostami
Abstract:
In this paper, semi-orthogonal B-spline scaling functions and wavelets and their dual functions are presented to approximate the solutions of integro-differential equations. The B-spline scaling functions and wavelets, their properties and the operational matrices of derivatives for these functions are presented to reduce the solution of integro-differential equations to the solution of algebraic equations. Here we compute B-spline scaling functions of degree 4 and their duals, and then show that by using them we obtain better approximation results for the solution of integro-differential equations in comparison with scaling functions of lower degree.
Keywords: Integro-differential equations, Quartic B-spline wavelet, Operational matrices.
907 Estimation of Forest Fire Emission in Thailand by Using Remote Sensing Information
Authors: A. Junpen, S. Garivait, S. Bonnet, A. Pongpullponsak
Abstract:
Forest fires in Thailand are an annual occurrence and a cause of air pollution. This study estimates the emissions from forest fires during 2005-2009 using the MODerate-resolution Imaging Spectroradiometer (MODIS) sensor aboard the Terra and Aqua satellites, experimental data, and statistical data. The forest fire emissions are estimated using the equation established by Seiler and Crutzen in 1982. The spatial and temporal variation of forest fire emissions is analyzed and displayed in the form of grid density maps. The satellite data analysis suggests that between 2005 and 2009, 86,877 fire hotspots occurred, with the significant majority (more than 80% of fire hotspots) in deciduous forest. The peak period of forest fires is January to May. The estimation of the emissions from forest fires during 2005 to 2009 indicated that the amounts of CO, CO2, CH4, and N2O were about 3,133,845 tons, 47,610.337 tons, 204,905 tons, and 6,027 tons, respectively, or about 6,171,264 tons of CO2eq. The fires also emitted 256,132 tons of PM10. The year 2007 was found to be the year when the emissions were largest. Annually, March is the period with the maximum amount of forest fire emissions. The areas with a high density of forest fire emissions were the forests situated in the northern, western, and upper northeastern parts of the country.
Keywords: Emissions, Forest fire, Remote sensing information.
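The following sketch applies the Seiler and Crutzen (1982) form of the emission estimate, burned area times fuel load times combustion completeness times emission factor; all input values are placeholders rather than the study's data.

```python
# Seiler and Crutzen (1982) style estimate:
# emitted mass = burned area x fuel load x combustion completeness x emission factor.
def fire_emission(burned_area_ha, fuel_load_t_per_ha, combustion_completeness,
                  emission_factor_g_per_kg):
    burned_biomass_t = burned_area_ha * fuel_load_t_per_ha * combustion_completeness
    return burned_biomass_t * 1000.0 * emission_factor_g_per_kg / 1e6   # tons of species

# Example: a deciduous-forest burn with assumed parameters and a CO emission factor.
print(fire_emission(burned_area_ha=5000, fuel_load_t_per_ha=8.0,
                    combustion_completeness=0.4, emission_factor_g_per_kg=100.0), "t CO")
```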
906 Stability Bound of Ruin Probability in a Reduced Two-Dimensional Risk Model
Authors: Zina Benouaret, Djamil Aissani
Abstract:
In this work, we introduce the qualitative and quantitative concepts of the strong stability method in a risk process modeling two lines of business of the same insurance company, or an insurance and a reinsurance company, that divide between them both claims and premiums in a certain proportion. The proposed approach is based on the identification of the ruin probability associated with the considered model with the stationary distribution of a Markov random process called the reversed process. Our objective, after clarifying the conditions and the perturbation domain of the parameters, is to obtain a stability inequality for the ruin probability, which is applied to estimate the approximation error between a model with perturbed parameters and the considered model. In the stability bound obtained, all constants are written explicitly.
Keywords: Markov chain, risk models, ruin probabilities, strong stability analysis.
905 Modified Functional Link Artificial Neural Network
Authors: Ashok Kumar Goel, Suresh Chandra Saxena, Surekha Bhanot
Abstract:
In this work, a Modified Functional Link Artificial Neural Network (M-FLANN) is proposed which is simpler than a Multilayer Perceptron (MLP) and improves upon the universal approximation capability of the Functional Link Artificial Neural Network (FLANN). The MLP and its variants, the Direct Linear Feedthrough Artificial Neural Network (DLFANN), FLANN and M-FLANN, have been implemented to model a simulated water bath system and a Continuously Stirred Tank Heater (CSTH). Their convergence speed and generalization ability have been compared. The networks have been tested for their interpolation and extrapolation capability using noise-free and noisy data. The results show that the M-FLANN, which is computationally cheap, performs better and has greater generalization ability than the other networks considered in this work.
Keywords: DLFANN, FLANN, M-FLANN, MLP.
904 An Effective Algorithm for Minimum Weighted Vertex Cover Problem
Authors: S. Balaji, V. Swaminathan, K. Kannan
Abstract:
The Minimum Weighted Vertex Cover (MWVC) problem is a classic NP-complete graph optimization problem. Given an undirected graph G = (V, E) and a weighting function defined on the vertex set, the minimum weighted vertex cover problem is to find a vertex set S ⊆ V whose total weight is minimum, subject to every edge of G having at least one end point in S. In this paper an effective algorithm, called the Support Ratio Algorithm (SRA), is designed to find the minimum weighted vertex cover of a graph. Computational experiments are designed and conducted to study the performance of the proposed algorithm. Extensive simulation results show that the SRA can yield better solutions than other existing algorithms found in the literature for solving the minimum vertex cover problem.
Keywords: Weighted vertex cover, vertex support, approximation algorithms, NP-complete problem.
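For context, the sketch below implements a simple weight-to-degree greedy heuristic for the weighted vertex cover problem; it is a generic baseline, not the paper's Support Ratio Algorithm.

```python
# Greedy baseline for minimum weighted vertex cover: repeatedly add the vertex
# with the smallest weight-to-degree ratio over the still-uncovered edges.
def greedy_weighted_vertex_cover(edges, weight):
    uncovered = set(map(frozenset, edges))
    cover = set()
    while uncovered:
        degree = {}
        for e in uncovered:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        best = min(degree, key=lambda v: weight[v] / degree[v])
        cover.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
weight = {1: 5.0, 2: 1.0, 3: 4.0, 4: 2.0}
print(greedy_weighted_vertex_cover(edges, weight))   # e.g. {2, 4}
```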
903 Using Fuzzy Controller in Induction Motor Speed Control with Constant Flux
Authors: Hassan Baghgar Bostan Abad, Ali Yazdian Varjani, Taheri Asghar
Abstract:
Variable speed drives continue to grow and diversify. Their expansion depends on progress in different areas of science such as power systems, microelectronics, and control methods. Artificial intelligence comprises hard computation and soft computation, and it has found wide application in nonlinear systems such as motor drives, because it offers human-like intelligence without human emotions such as anger. Artificial intelligence is used for various purposes such as approximation, control, and monitoring. Because artificial intelligence techniques can be used as a controller for any system without requiring a mathematical model of the system, they have been used in electrical drive control. In this manner, the efficiency and reliability of drives increase, while their volume, weight and cost decrease.
Keywords: Artificial intelligence, electrical motor, intelligent drive and control.
902 An EWMA p Chart Based On Improved Square Root Transformation
Authors: S. Sukparungsee
Abstract:
Generally, the traditional Shewhart p chart has been developed for charting binomial data. This chart was developed using the normal approximation, under the conditions of a low defect level and a small to moderate sample size. Real applications, however, often depart from these assumptions due to skewness in the exact distribution. In this paper, a modified Exponentially Weighted Moving Average (EWMA) control chart for detecting a change in binomial data is proposed by improving the square root transformation, namely the ISRT p EWMA control chart. The numerical results show that the ISRT p EWMA chart is superior to the ISRT p chart for small to moderate shifts, whereas the latter is better for large shifts.
Keywords: Number of defects, Exponentially Weighted Moving Average, Average Run Length, Square root transformations.
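A minimal EWMA chart over transformed sample proportions is sketched below; the plain square root transform is only a stand-in for the paper's improved square root transformation (ISRT), and the smoothing constant, limit width and simulated data are illustrative.

```python
import numpy as np

# EWMA chart sketch over square-root-transformed sample proportions.
def ewma_chart(counts, n, lam=0.2, L=3.0):
    y = np.sqrt(np.asarray(counts) / n)          # transformed statistics (stand-in for ISRT)
    mu, sigma = y.mean(), y.std(ddof=1)          # in practice taken from Phase I data
    z, signals = mu, []
    for i, yi in enumerate(y):
        z = lam * yi + (1 - lam) * z             # EWMA recursion
        width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (i + 1))))
        signals.append(abs(z - mu) > width)
    return signals

rng = np.random.default_rng(1)
counts = rng.binomial(n=500, p=0.02, size=30).tolist() + rng.binomial(n=500, p=0.05, size=10).tolist()
print(ewma_chart(counts, n=500))
```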
901 Effect of Inclusions on the Shape and Size of Crack Tip Plastic Zones by Element Free Galerkin Method
Authors: A. Jameel, G. A. Harmain, Y. Anand, J. H. Masoodi, F. A. Najar
Abstract:
The present study investigates the effect of inclusions on the shape and size of crack tip plastic zones in engineering materials subjected to static loads by employing the element free Galerkin method (EFGM). The modeling of the discontinuities produced by cracks and inclusions becomes independent of the grid chosen for analysis. The standard displacement approximation is modified by adding enrichment functions, which introduce the effects of the different discontinuities into the formulation. The level set method has been used to represent the different discontinuities present in the domain. The effect of inclusions on the extent of the crack tip plastic zones is investigated by solving several numerical problems with the EFGM.
Keywords: EFGM, stress intensity factors, crack tip plastic zones, inclusions.
900 A Special Algorithm to Approximate the Square Root of Positive Integer
Authors: Hsian Ming Goo
Abstract:
The paper concerns a special algorithm for approximating the square root of a given positive integer. It is built on the properties of positive integer solutions of Pell's equation, together with some elementary theorems on matrices, and it is then compared with the commonly used Newton's method; a practical numerical example and an error analysis are given. The algorithm has an unexpected special property: the number of significant figures of the approximation to the square root increases digit by digit. It is quite useful in some situations.
Keywords: Special approximate algorithm, square root, Pell’s equation, Newton’s method, error analysis.
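The sketch below illustrates the Pell-equation idea: powers of a fundamental solution of x^2 - N y^2 = 1 give rational approximations x_k/y_k to sqrt(N), compared here with Newton's method; N = 7 with fundamental solution (8, 3) is just an example, not the paper's worked case.

```python
from fractions import Fraction

# If (x1, y1) solves x^2 - N*y^2 = 1, then powers of (x1 + y1*sqrt(N)) give
# pairs (xk, yk) with xk/yk -> sqrt(N). Compared below with Newton's method.
def pell_approximations(N, x1, y1, steps):
    x, y = x1, y1
    for _ in range(steps):
        yield Fraction(x, y)
        x, y = x1 * x + N * y1 * y, x1 * y + y1 * x

def newton_sqrt(N, start, steps):
    r = Fraction(start)
    for _ in range(steps):
        yield r
        r = (r + N / r) / 2

ref = 7 ** 0.5
for p, q in zip(pell_approximations(7, 8, 3, 5), newton_sqrt(7, 3, 5)):
    print(float(p) - ref, float(q) - ref)   # errors of the two approximations
```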
899 Frequency-Energy Characteristics of Local Earthquakes using Discrete Wavelet Transform (DWT)
Authors: O. H. Colak, T. C. Destici, S. Ozen, H. Arman, O. Cerezci
Abstract:
The wavelet transform is one of the most important methods used in signal processing. In this study, we introduce the frequency-energy characteristics of local earthquakes using the discrete wavelet transform. The frequency-energy characteristic was analyzed depending on the difference between the P and S wave arrival times and the noise within the records. We found that local earthquakes have similar characteristics. If the frequency-energy characteristics can be determined accurately, this gives us a hint for calculating the P and S wave arrival times. It can be seen that the wavelet transform provides a successful approximation for this. In this study, approximately 100 earthquakes with 500 records were analyzed.
Keywords: Discrete Wavelet Transform, Frequency-Energy Characteristics, P and S waves arrival time.
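As a simple illustration of a frequency-energy characteristic, the sketch below decomposes a record with the discrete wavelet transform and reports the relative energy per level; the synthetic signal and the db4 wavelet are assumed choices, not the study's data or settings.

```python
import numpy as np
import pywt

# Frequency-energy profile sketch: decompose a record and report the relative
# energy carried by the approximation and each detail level.
def wavelet_energy_profile(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()      # [approximation, detail_level, ..., detail_1]

fs = 100.0
t = np.arange(0, 20, 1.0 / fs)
record = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 8.0 * t) + 0.1 * np.random.randn(t.size)
print(wavelet_energy_profile(record))
```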
898 High Accuracy Eigensolutions in Elasticity for Boundary Integral Equations by Nyström Method
Authors: Pan Cheng, Jin Huang, Guang Zeng
Abstract:
Elastic boundary eigensolution problems are converted into boundary integral equations by potential theory. The kernels of the boundary integral equations have both logarithmic and Hilbert singularities simultaneously. We present mechanical quadrature methods for solving the eigensolutions of the boundary integral equations by dealing with the two kinds of singularities at the same time. The methods possess high accuracy O(h^3) and low computational complexity. The convergence and stability are proved based on Anselone's collective compactness theory. Based on the asymptotic error expansion with odd powers, we can greatly improve the accuracy of the approximation, and also derive an a posteriori error estimate which can be used for constructing self-adaptive algorithms. The efficiency of the algorithms is illustrated by numerical examples.
Keywords: Boundary integral equation, extrapolation algorithm, a posteriori error estimate, elasticity.
897 A Pull-out Fiber/Matrix Interface Characterization of Vegetal Fibers Reinforced Thermoplastic Polymer Composites: The Influence of the Processing Temperature
Authors: Duy Cuong Nguyen, Ali Makke, Guillaume Montay
Abstract:
This work presents an improved single fiber pull-out test for fiber/matrix interface characterization. The test has been used to study the interfacial shear strength (IFSS) of hemp fiber reinforced polypropylene (PP). For this aim, the fiber diameter has been carefully measured using a tomography-inspired method. The fiber section contour can then be approximated by a circle or a polygon. The results show that the IFSS is overestimated if the circular approximation is used. The influence of the molding temperature on the IFSS has also been studied. We find that a molding temperature of 183 °C leads to better interfacial properties; above or below this temperature the interface strength is reduced.
Keywords: Interface, pull-out, processing, temperature, hemp, polypropylene, composite.
896 The Estimate Rate of Permanent Flow of a Liquid Simulating Blood by Doppler Effect
Authors: Malika D. Kedir-Talha, Mohammed Mehenni
Abstract:
To improve the characterization of blood flows, we propose a method which makes it possible to use the spectral analysis of Doppler signals. Our calculation involves a reasonable approximation; the error made on the estimated speed reflects the fact that the speed depends on the flow conditions as well as on measurement parameters such as the bore and the volume flow rate. The estimate of the Doppler signal frequency enables us to determine the maximum Doppler frequency Fd max as well as the maximum flow speed. The results show that the difference between the estimated frequencies (Fde) and the Doppler frequencies (Fd) is small; this variation tends to zero for large θ angles and is proportional to the diameter D. The description of the friction velocity and the friction coefficient justifies the error rate obtained.
Keywords: Doppler frequency, Doppler spectrum, estimated speed, permanent flow.
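The sketch below uses the standard Doppler relation fd = 2 f0 v cos(theta) / c to convert a maximum Doppler frequency into a maximum flow speed; the probe frequency, beam angle and sound speed are typical assumed values, not the paper's experimental parameters.

```python
import numpy as np

# Standard Doppler relation: fd = 2 * f0 * v * cos(theta) / c, inverted for v.
def flow_speed_from_doppler(fd_hz, f0_hz=4e6, theta_deg=60.0, c_m_s=1540.0):
    return fd_hz * c_m_s / (2.0 * f0_hz * np.cos(np.radians(theta_deg)))

print(flow_speed_from_doppler(1300.0), "m/s")   # maximum speed from an assumed Fd max
```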
895 Normalizing Logarithms of Realized Volatility in an ARFIMA Model
Authors: G. L. C. Yap
Abstract:
Modelling realized volatility with high-frequency returns is popular, as realized volatility is an unbiased and efficient estimator of return volatility. A computationally simple model fits the logarithms of the realized volatilities with a fractionally integrated long-memory Gaussian process. The Gaussianity assumption simplifies the parameter estimation using the Whittle approximation. Nonetheless, this assumption may not be met in finite samples, and there may be a need to normalize the financial series. Based on the empirical indices S&P 500 and DAX, this paper examines the performance of the linear volatility model pre-treated with normalization compared to its existing counterpart. The empirical results show that by including normalization as a pre-treatment procedure, the forecast performance outperforms that of the existing model in terms of statistical and economic evaluations.
Keywords: Long-memory, Gaussian process, Whittle estimator, normalization, volatility, value-at-risk.
894 Modeling and Simulation for 3D Eddy Current Testing in Conducting Materials
Authors: S. Bennoud, M. Zergoug
Abstract:
The numerical simulation of electromagnetic interactions is still a challenging problem, especially in problems that result in fully three dimensional mathematical models.
The goal of this work is to use mathematical modeling to characterize the reliability and capacity of the eddy current technique to detect and characterize defects embedded in aeronautical in-service parts.
The finite element method is used to describe the eddy current technique in a mathematical model by predicting the eddy current interaction with defects. However, this model is an approximation of the full Maxwell equations.
In this study, the analysis of the problem is based on a three dimensional finite element model that computes directly the electromagnetic field distortions due to defects.
Keywords: Eddy current, Finite element method, Non destructive testing, Numerical simulations.