Search results for: approximation algorithms
1107 Image Enhancement of Medical Images using Gabor Filter Bank on Hexagonal Sampled Grids
Authors: Veni S., K. A. Narayanankutty
Abstract:
For about two decades, scientists have been developing techniques for enhancing the quality of medical images using the Fourier transform, the discrete wavelet transform (DWT), PDE models, etc. In this work, a Gabor wavelet applied to images sampled on a hexagonal grid is proposed. The method offers near-optimal approximation-theoretic performance for good image quality, and its computational cost is considerably lower than that of comparable processing in the rectangular domain. Since X-ray images contain light-scattered pixels, a single value of sigma is not used; instead, sigma values from 0.5 to 3, combined with a windowing technique, are found to satisfy most image interpolation requirements in terms of high Peak Signal-to-Noise Ratio (PSNR), low Mean Squared Error (MSE) and better image quality.
Keywords: Hexagonal lattices, Gabor filter, Interpolation, image processing.
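As a rough illustration of the sigma sweep described in the abstract, the sketch below builds a real-valued Gabor kernel on an ordinary rectangular grid (not the hexagonal grid used in the paper), varies sigma over 0.5 to 3, and scores each filtered image by MSE and PSNR. The synthetic image, kernel size and normalisation are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(sigma, theta=0.0, wavelength=4.0, size=21):
    """Real part of a 2D Gabor kernel on an ordinary rectangular grid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * xr / wavelength)

def mse_psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    psnr = np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)
    return mse, psnr

# Synthetic stand-in for an X-ray image: a smooth reference plus noise.
rng = np.random.default_rng(0)
reference = np.clip(np.add.outer(np.hanning(128), np.hanning(128)) * 127.5, 0, 255)
noisy = np.clip(reference + rng.normal(0, 20, reference.shape), 0, 255)

# Sweep sigma over the 0.5-3 range mentioned in the abstract.
for sigma in np.arange(0.5, 3.01, 0.5):
    k = gabor_kernel(sigma)
    k /= np.abs(k).sum()                    # rough normalisation so output stays bounded
    filtered = convolve2d(noisy, k, mode="same", boundary="symm")
    mse, psnr = mse_psnr(reference, filtered)
    print(f"sigma={sigma:.1f}  MSE={mse:8.2f}  PSNR={psnr:6.2f} dB")
```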
1106 2D Rigid Registration of MR Scans using the 1d Binary Projections
Authors: Panos D. Kotsas
Abstract:
This paper presents the application of a signal-intensity-independent registration criterion for 2D rigid-body registration of medical images using 1D binary projections. The criterion is defined as the weighted ratio of two projections. The ratio is computed on a pixel-per-pixel basis, and weighting is performed by setting the ratios between one and zero pixels to a standard high value. The mean squared value of the weighted ratio is computed over the union of the one-areas of the two projections and is minimized using Chebyshev polynomial approximation with n = 5 points. The sum of the x and y projections is used for translational adjustment and a 45° projection for rotational adjustment. Twenty T1-T2 registration experiments were performed and gave mean errors of 1.19° and 1.78 pixels. The method is suitable for contour/surface matching. Further research is necessary to determine the robustness of the method with regard to threshold, shape and missing data.
Keywords: Medical image, projections, registration, rigid.
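The following is a minimal numpy sketch of the projection criterion as it reads in the abstract: x and y projections of the binarised image, ratios set to a standard high value wherever a one faces a zero, and the mean squared weighted ratio taken over the union of the one-areas. The HIGH value, the threshold and the exact cost form are assumptions; the Chebyshev minimisation and the 45° projection are not shown.

```python
import numpy as np

HIGH = 100.0   # illustrative "standard high value" for one/zero mismatches (assumed)

def binary_projection(image, axis, threshold=0.5):
    """Project a binarised image onto one axis (count of above-threshold pixels)."""
    return (image > threshold).astype(float).sum(axis=axis)

def weighted_ratio_cost(p_ref, p_mov):
    """Mean squared weighted ratio over the union of the non-zero regions."""
    union = (p_ref > 0) | (p_mov > 0)
    ratio = np.full(p_ref.shape, HIGH, dtype=float)
    both = (p_ref > 0) & (p_mov > 0)
    ratio[both] = p_ref[both] / p_mov[both]
    return np.mean(ratio[union] ** 2)

def translation_cost(img_ref, img_mov):
    """Translational criterion from the sum of x and y projections, as in the abstract;
    a 45-degree projection (not shown) would drive the rotational adjustment."""
    return (weighted_ratio_cost(binary_projection(img_ref, 0), binary_projection(img_mov, 0))
            + weighted_ratio_cost(binary_projection(img_ref, 1), binary_projection(img_mov, 1)))
```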
1105 Increasing Performance of Autopilot Guided Small Unmanned Helicopter
Authors: Tugrul Oktay, Mehmet Konar, Mustafa Soylak, Firat Sal, Murat Onay, Orhan Kizilkaya
Abstract:
In this paper, we attempt to increase the autonomous performance of a small unmanned helicopter manufactured in-house. For this purpose, a small unmanned helicopter, named ZANKA-Heli-I, was built at Erciyes University, Faculty of Aeronautics and Astronautics. For performance maximization, autopilot parameters are determined by minimizing a cost function consisting of flight performance measures such as settling time, rise time and overshoot during trajectory tracking. A stochastic optimization method, simultaneous perturbation stochastic approximation (SPSA), is used for this purpose. Using this approach, a considerable increase in autonomous performance (around 23%) is obtained.
Keywords: Small helicopters, hierarchical control, stochastic optimization, autonomous performance maximization, autopilots.
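For reference, a generic SPSA loop looks like the sketch below; the gain sequences and the toy quadratic cost are placeholders for the flight-performance cost (settling time, rise time, overshoot) used by the authors.

```python
import numpy as np

def spsa(cost, theta0, iters=200, a=0.2, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    """Simultaneous perturbation stochastic approximation (generic sketch)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k ** alpha                                  # step-size gain
        ck = c / k ** gamma                                  # perturbation gain
        delta = rng.choice([-1.0, 1.0], size=theta.shape)    # Bernoulli +-1 perturbation
        g_hat = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2.0 * ck * delta)
        theta -= ak * g_hat                                  # gradient-descent style update
    return theta

# Toy usage: the real cost would combine settling time, rise time and overshoot
# measured on the helicopter's trajectory-tracking response (placeholder here).
autopilot_gains = spsa(lambda p: np.sum((p - np.array([1.5, 0.3, 0.05])) ** 2),
                       theta0=[1.0, 1.0, 1.0])
print(autopilot_gains)
```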
1104 Investigation about Structural and Optical Properties of Bulk and Thin Film of 1H-CaAlSi by Density Functional Method
Authors: M. Babaeipour, M. Vejdanihemmat
Abstract:
The optical properties of bulk and thin-film 1H-CaAlSi were studied for the (1,0,0) and (0,0,1) directions. The calculations were carried out with the full-potential Density Functional Theory (DFT) method, using the GGA approximation for the exchange-correlation energy, as implemented in the WIEN2k package. The results show that the absorption edge is shifted back by 0.82 eV in the thin film relative to the bulk for both directions. The static values of the real part of the dielectric function and of the refractive index were obtained for the four cases. The reflectivity curves show a pronounced difference between the thin film and the bulk in the ultraviolet region.
Keywords: 1H-CaAlSi, absorption, bulk, optical, thin film.
1103 Approximated Solutions of Two-Point Nonlinear Boundary Problem by a Combination of Taylor Series Expansion and Newton Raphson Method
Authors: Chinwendu B. Eleje, Udechukwu P. Egbuhuzor
Abstract:
One of the difficulties encountered by many researchers in solving nonlinear Boundary Value Problems (BVPs) is finding approximate solutions with minimum deviation from the exact solutions, without excessive rigor and complication. In this paper, we propose an approach to solving a two-point BVP that combines the Taylor series expansion method and the Newton-Raphson method. Furthermore, the fourth- and sixth-order approximate solutions are obtained, and we compare their relative errors and rates of convergence to the exact solution. Finally, some numerical simulations are presented to show the behavior of the solution and its derivatives.
Keywords: Newton Raphson method, non-linear boundary value problem, Taylor series approximation, Michaelis-Menten equation.
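One plausible reading of "Taylor series expansion combined with Newton-Raphson" is a shooting scheme in which the ODE is advanced by truncated Taylor steps and Newton-Raphson adjusts the unknown initial slope until the far boundary condition is met. The sketch below follows that reading with an illustrative Michaelis-Menten-type right-hand side; it is not necessarily the authors' exact formulation, and all parameter values are assumed.

```python
import numpy as np

def michaelis_menten_rhs(x, y):
    """Illustrative Michaelis-Menten-type term y'' = a*y/(k + y) (parameters assumed)."""
    a, k = 1.0, 0.5
    return a * y / (k + y)

def taylor_shoot(slope, y0=1.0, x_end=1.0, n=200, rhs=michaelis_menten_rhs):
    """Advance y'' = rhs(x, y) from x = 0 with truncated Taylor steps; return y(x_end)."""
    h = x_end / n
    x, y, dy = 0.0, y0, slope
    for _ in range(n):
        d2y = rhs(x, y)
        y += h * dy + 0.5 * h * h * d2y      # second-order Taylor step for y
        dy += h * d2y                        # first-order Taylor step for y'
        x += h
    return y

def newton_raphson_shooting(target=0.5, s0=0.0, tol=1e-10, max_iter=50, eps=1e-6):
    """Newton-Raphson on the unknown initial slope so that y(x_end) hits the target."""
    s = s0
    for _ in range(max_iter):
        r = taylor_shoot(s) - target
        if abs(r) < tol:
            break
        drds = (taylor_shoot(s + eps) - taylor_shoot(s - eps)) / (2 * eps)  # numerical derivative
        s -= r / drds
    return s

print("initial slope satisfying the right boundary condition:", newton_raphson_shooting())
```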
1102 Integrating Computational Intelligence Techniques and Assessment Agents in E-Learning Environments
Authors: Konstantinos C. Giotopoulos, Christos E. Alexakos, Grigorios N. Beligiannis, Spiridon D. Likothanassis
Abstract:
In this contribution, an innovative platform is presented that integrates intelligent agents and evolutionary computation techniques into legacy e-learning environments. It introduces the design and development of a scalable and interoperable integration platform supporting: (i) various assessment agents for e-learning environments, (ii) a resource retrieval agent that provides additional information from Internet sources matching the needs and profile of the specific user, and (iii) a genetic algorithm designed to extract useful information (classifying rules) from the students' answering input data. The agents provide intelligent assessment services based on computational intelligence techniques such as Bayesian Networks and Genetic Algorithms. The idea of using a Genetic Algorithm (GA) for this difficult task stems from the fact that GAs have been widely used in applications involving classification of unknown data. The use of new and emerging technologies such as web services allows the provided services to be integrated into any web-based legacy e-learning environment.
Keywords: Bayesian Networks, Computational Intelligence techniques, E-learning legacy systems, Service Oriented Integration, Intelligent Agents, Genetic Algorithms.
1101 A New Reliability Allocation Method Based On Fuzzy Numbers
Authors: Peng Li, Chuanri Li, Tao Li
Abstract:
Reliability allocation is important during the early design and development stages of a system in order to apportion its specified reliability goal to subsystems. This paper improves the fuzzy reliability allocation method and gives a concrete procedure for determining the factor and sub-factor sets, the weight sets, the judgment set and the multi-stage fuzzy evaluation. To determine the weights of the factor and sub-factor sets, modified trapezoidal fuzzy numbers are proposed to reduce errors caused by subjective factors. To decrease the fuzziness in fuzzy division, an approximation method based on linear programming is employed. To compute the explicit values of the fuzzy numbers, the centroid method of defuzzification is used. An example is provided to illustrate the application of the proposed reliability allocation method based on fuzzy arithmetic.
Keywords: Reliability allocation, fuzzy arithmetic, allocation weight.
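A minimal sketch of the centroid defuzzification step mentioned in the abstract, applied to a trapezoidal fuzzy number (a, b, c, d); the example weight is purely illustrative.

```python
import numpy as np

def trapezoid_membership(x, a, b, c, d):
    """Membership function of a trapezoidal fuzzy number (assumes a < b <= c < d)."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def centroid_defuzzify(a, b, c, d, n=10001):
    """Centroid (centre-of-gravity) defuzzification by discrete integration on a uniform grid."""
    x = np.linspace(a, d, n)
    mu = trapezoid_membership(x, a, b, c, d)
    return np.sum(x * mu) / np.sum(mu)

# Example: an allocation weight expressed as the trapezoidal number (0.2, 0.3, 0.4, 0.5).
print(centroid_defuzzify(0.2, 0.3, 0.4, 0.5))   # -> 0.35 by symmetry
```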
1100 A Comparative Study of High Order Rotated Group Iterative Schemes on Helmholtz Equation
Authors: Norhashidah Hj. Mohd Ali, Teng Wai Ping
Abstract:
In this paper, we present a high-order group explicit method for solving the two-dimensional Helmholtz equation. The presented method is derived from a nine-point fourth-order finite difference approximation formula obtained from a 45-degree rotation of the standard grid, which makes it possible to construct an iterative procedure with reduced complexity. The developed method is compared with the existing group iterative schemes available in the literature in terms of computational time, iteration counts and computational complexity. The comparative performances of the methods are discussed and reported.
Keywords: Explicit group method, finite difference, Helmholtz equation, rotated grid, standard grid.
1099 Numerical Solution for Integro-Differential Equations by Using Quartic B-Spline Wavelet and Operational Matrices
Authors: Khosrow Maleknejad, Yaser Rostami
Abstract:
In this paper, semi-orthogonal B-spline scaling functions and wavelets, together with their dual functions, are presented to approximate the solutions of integro-differential equations. The B-spline scaling functions and wavelets, their properties, and the operational matrices of derivatives for these functions are presented in order to reduce the solution of integro-differential equations to the solution of algebraic equations. We compute B-spline scaling functions of degree 4 and their duals, and then show that using them gives better approximation results for the solution of integro-differential equations than lower-degree scaling functions.
Keywords: Integro-differential equations, Quartic B-spline wavelet, Operational matrices.
1098 Stability Bound of Ruin Probability in a Reduced Two-Dimensional Risk Model
Authors: Zina Benouaret, Djamil Aissani
Abstract:
In this work, we introduce the qualitative and quantitative concepts of the strong stability method for a risk process modeling two lines of business of the same insurance company, or an insurance company and a reinsurance company that divide both claims and premiums between them in a certain proportion. The proposed approach is based on identifying the ruin probability associated with the considered model with the stationary distribution of a Markov random process called a reversed process. Our objective, after clarifying the conditions and the perturbation domain of the parameters, is to obtain a stability inequality for the ruin probability, which is then applied to estimate the approximation error of a model with perturbed parameters relative to the considered model. In the stability bound obtained, all constants are written explicitly.
Keywords: Markov chain, risk models, ruin probabilities, strong stability analysis.
1097 Modified Functional Link Artificial Neural Network
Authors: Ashok Kumar Goel, Suresh Chandra Saxena, Surekha Bhanot
Abstract:
In this work, a Modified Functional Link Artificial Neural Network (M-FLANN) is proposed which is simpler than a Multilayer Perceptron (MLP) and improves upon the universal approximation capability of the Functional Link Artificial Neural Network (FLANN). The MLP and its variants, the Direct Linear Feedthrough Artificial Neural Network (DLFANN), FLANN and M-FLANN, have been implemented to model a simulated water bath system and a continuously stirred tank heater (CSTH), and their convergence speed and generalization ability have been compared. The networks have been tested for their interpolation and extrapolation capability using noise-free and noisy data. The results show that the M-FLANN, which is computationally cheap, performs better and has greater generalization ability than the other networks considered in this work.
Keywords: DLFANN, FLANN, M-FLANN, MLP.
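For context, a plain FLANN reduces to a trigonometric functional expansion of the input followed by a single linear layer trained with the LMS rule, as in the sketch below; the modification that defines M-FLANN is not reproduced here, and the toy data merely stand in for the water-bath response.

```python
import numpy as np

def functional_expansion(x):
    """Trigonometric functional expansion used in FLANN-type networks (one scalar input)."""
    return np.array([1.0, x, np.sin(np.pi * x), np.cos(np.pi * x),
                     np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)])

def train_flann(xs, ys, lr=0.05, epochs=200):
    """Single linear layer on the expanded features, trained by the LMS rule."""
    w = np.zeros(functional_expansion(0.0).shape)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = functional_expansion(x)
            w += lr * (y - w @ phi) * phi      # LMS weight update
    return w

# Toy static nonlinearity standing in for a water-bath response (illustrative only).
xs = np.linspace(0.0, 1.0, 50)
ys = 0.6 * xs + 0.3 * np.sin(2 * np.pi * xs)
w = train_flann(xs, ys)
print("max modelling error:",
      np.max(np.abs(ys - np.array([w @ functional_expansion(x) for x in xs]))))
```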
1096 Using Fuzzy Controller in Induction Motor Speed Control with Constant Flux
Authors: Hassan Baghgar Bostan Abad, Ali Yazdian Varjani, Taheri Asghar
Abstract:
Variable-speed drives are growing in number and variety. Their expansion depends on progress in different branches of science such as power systems, microelectronics and control methods. Artificial intelligence comprises both hard and soft computation and has found wide application in nonlinear systems such as motor drives, since it offers human-like intelligence without human drawbacks such as emotion. Artificial intelligence is used for various purposes such as approximation, control and monitoring. Because artificial intelligence techniques can act as a controller for any system without requiring a mathematical model of the system, they have been used in electrical drive control. In this way, the efficiency and reliability of drives increase while their volume, weight and cost decrease.
Keywords: Artificial intelligence, electrical motor, intelligent drive and control.
1095 An EWMA p Chart Based On Improved Square Root Transformation
Authors: S. Sukparungsee
Abstract:
The traditional Shewhart p chart was developed for charting binomial data, using the normal approximation under the conditions of a low defect level and a small to moderate sample size. Real applications, however, often depart from these assumptions because of skewness in the exact distribution. In this paper, a modified Exponentially Weighted Moving Average (EWMA) control chart for detecting a change in binomial data is constructed using an improved square root transformation, namely the ISRT p EWMA control chart. The numerical results show that the ISRT p EWMA chart is superior to the ISRT p chart for small to moderate shifts, whereas the latter is better for large shifts.
Keywords: Number of defects, Exponentially Weighted Moving Average, Average Run Length, Square root transformations.
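A minimal sketch of an EWMA chart on square-root-transformed sample proportions follows; the plain square root stands in for the paper's improved square root transformation (ISRT), whose exact form is not given here, and lambda, L and the data are illustrative.

```python
import numpy as np

def ewma_chart(p_hat, lam=0.2, L=3.0, mu0=None, sigma0=None):
    """EWMA statistics and time-varying control limits for transformed sample proportions.

    The plain square root is a stand-in for the paper's improved square root
    transformation; the in-control mean and standard deviation are estimated from
    the data when not supplied.
    """
    y = np.sqrt(np.asarray(p_hat, dtype=float))
    mu0 = y.mean() if mu0 is None else mu0
    sigma0 = y.std(ddof=1) if sigma0 is None else sigma0
    z = np.empty_like(y)
    z_prev = mu0
    for i, yi in enumerate(y):
        z_prev = lam * yi + (1.0 - lam) * z_prev    # EWMA recursion
        z[i] = z_prev
    i = np.arange(1, len(y) + 1)
    half_width = L * sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return z, mu0 + half_width, mu0 - half_width

# Example: proportions of defectives from samples of size 50 (synthetic data).
rng = np.random.default_rng(1)
p_hat = rng.binomial(50, 0.05, size=30) / 50.0
z, ucl, lcl = ewma_chart(p_hat)
print("out-of-control samples:", np.where((z > ucl) | (z < lcl))[0])
```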
1094 Effect of Inclusions on the Shape and Size of Crack Tip Plastic Zones by Element Free Galerkin Method
Authors: A. Jameel, G. A. Harmain, Y. Anand, J. H. Masoodi, F. A. Najar
Abstract:
The present study investigates the effect of inclusions on the shape and size of crack tip plastic zones in engineering materials subjected to static loads by employing the element free Galerkin method (EFGM). The modeling of the discontinuities produced by cracks and inclusions becomes independent of the grid chosen for analysis. The standard displacement approximation is modified by adding additional enrichment functions, which introduce the effects of different discontinuities into the formulation. The level set method has been used to represent different discontinuities present in the domain. The effect of inclusions on the extent of crack tip plastic zones is investigated by solving some numerical problems by the EFGM.
Keywords: EFGM, stress intensity factors, crack tip plastic zones, inclusions.
1093 Optimization of Three-dimensional Electrical Performance in a Solid Oxide Fuel Cell Stack by a Neural Network
Authors: Shih-Bin Wang, Ping Yuan, Syu-Fang Liu, Ming-Jun Kuo
Abstract:
Using an improved back-propagation neural network (BPNN), a model of the current densities of a 10-layer solid oxide fuel cell (SOFC) stack is established in this study. To build the learning data for the BPNN, a Taguchi orthogonal array is applied to arrange the operating conditions, with a total of 7 factors acting as the inputs of the BPNN, while the average current densities obtained by a numerical method act as the outputs. Compared with the direct solution, the learning errors for all learning data are smaller than 0.117%, and the prediction errors for 27 forecasting cases are less than 0.231%. The results show that the presented model provides a mathematical algorithm that predicts the performance of an SOFC stack immediately, in real time. The algorithm is then applied to optimize the average current density of an SOFC stack; the operating performance window is found to lie between 41137.11 and 53907.89. Furthermore, an inverse model of the operating parameters of an SOFC stack is developed using the improved BPNN, which is shown to effectively predict the operating parameters needed to achieve a desired performance output.
Keywords: SOFC stack, BPNN, inverse predicting model of operating parameters, optimization of the average current density.
1092 A Special Algorithm to Approximate the Square Root of Positive Integer
Authors: Hsian Ming Goo
Abstract:
This paper concerns a special algorithm for approximating the square root of a given positive integer. The algorithm is built on the properties of the positive integer solutions of Pell's equation, together with some elementary theorems on matrices, and is compared with the commonly used Newton's method; a practical numerical example and an error analysis are given. The algorithm has an unexpected property: the number of significant figures of the approximation to the square root increases one digit at a time. It is useful in some situations.
Keywords: Special approximate algorithm, square root, Pell’s equation, Newton’s method, error analysis.
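The Pell-equation idea can be sketched as follows: if (x1, y1) solves x² - D·y² = 1, composing solutions gives convergents x/y that approach √D, gaining accuracy at a steady rate, which can then be compared with Newton's iteration. The brute-force search for the fundamental solution below is for illustration only.

```python
from fractions import Fraction
from math import isqrt

def fundamental_pell(D):
    """Smallest (x, y) with x^2 - D*y^2 = 1, found by brute force over y (sketch only)."""
    y = 1
    while True:
        x2 = D * y * y + 1
        x = isqrt(x2)
        if x * x == x2:
            return x, y
        y += 1

def pell_sqrt(D, steps=5):
    """Successive Pell convergents x/y; each composition improves the approximation of sqrt(D)."""
    x1, y1 = fundamental_pell(D)
    x, y = x1, y1
    approximations = []
    for _ in range(steps):
        approximations.append(Fraction(x, y))
        x, y = x1 * x + D * y1 * y, x1 * y + y1 * x   # compose two solutions of Pell's equation
    return approximations

def newton_sqrt(D, steps=5, x0=1.0):
    """Newton's iteration x_{k+1} = (x_k + D/x_k)/2, for comparison."""
    x, out = x0, []
    for _ in range(steps):
        x = 0.5 * (x + D / x)
        out.append(x)
    return out

for a in pell_sqrt(13, 4):
    print(float(a), "error:", abs(float(a) - 13 ** 0.5))
print(newton_sqrt(13, 4))
```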
1091 Robust Face Recognition using AAM and Gabor Features
Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Seoungseon Jeon, Jaemin Kim, Seongwon Cho
Abstract:
In this paper, we propose a face recognition algorithm using AAM and Gabor features. Gabor feature vectors, which are known to be robust to small variations in shape, scale, rotation, distortion, illumination and pose, are popular feature vectors for many object detection and recognition algorithms. EBGM, which is prominent among face recognition algorithms employing Gabor feature vectors, requires localization of the facial feature points at which the Gabor feature vectors are extracted. However, the localization method employed in EBGM is based on Gabor jet similarity and is sensitive to initial values, and wrong localization of facial feature points degrades the face recognition rate. AAM is known to be successfully applied to the localization of facial feature points. In this paper, we devise a facial feature point localization method that first roughly estimates the facial feature points using AAM and then refines them with a Gabor jet similarity-based localization method initialized with the rough AAM estimates, and we propose a face recognition algorithm that uses this localization method together with Gabor feature vectors. Experiments show that such a cascaded localization method, based on both AAM and Gabor jet similarity, is more robust than localization based on Gabor jet similarity alone. It is also shown that the proposed face recognition algorithm performs better than a conventional face recognition algorithm such as EBGM, which uses Gabor jet similarity-based localization and Gabor feature vectors.
Keywords: Face Recognition, AAM, Gabor features, EBGM.
1090 Frequency-Energy Characteristics of Local Earthquakes using Discrete Wavelet Transform (DWT)
Authors: O. H. Colak, T. C. Destici, S. Ozen, H. Arman, O. Cerezci
Abstract:
The wavelet transform is one of the most important methods used in signal processing. In this study, we introduce the frequency-energy characteristics of local earthquakes using the discrete wavelet transform. The frequency-energy characteristic was analyzed as a function of the difference between the P and S wave arrival times and the noise within the records. We found that local earthquakes have similar characteristics. If the frequency-energy characteristics can be determined accurately, they provide a hint for calculating the P and S wave arrival times, and the wavelet transform provides a successful approximation for this. In this study, approximately 100 earthquakes with 500 records were analyzed.
Keywords: Discrete Wavelet Transform, Frequency-Energy Characteristics, P and S waves arrival time.
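A minimal sketch of a frequency-energy characteristic: the relative energy carried by each DWT level of a record, here computed with PyWavelets on a synthetic trace. The wavelet, level count and signal are illustrative and not the study's settings.

```python
import numpy as np
import pywt

def level_energies(signal, wavelet="db4", level=5):
    """Relative energy carried by each DWT level (approximation first, then details)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# Synthetic "record": low-frequency background with a burst standing in for an S-wave onset.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
trace = 0.1 * np.sin(2 * np.pi * 1.0 * t)
trace[3000:3200] += np.sin(2 * np.pi * 12.0 * t[3000:3200])
print(level_energies(trace))
```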
1089 A Pull-out Fiber/Matrix Interface Characterization of Vegetal Fibers Reinforced Thermoplastic Polymer Composites: The Influence of the Processing Temperature
Authors: Duy Cuong Nguyen, Ali Makke, Guillaume Montay
Abstract:
This work presents an improved single-fiber pull-out test for fiber/matrix interface characterization. The test has been used to study the interfacial shear strength (IFSS) of hemp fiber reinforced polypropylene (PP). For this purpose, the fiber diameter was carefully measured using a tomography-inspired method, and the fiber cross-section contour was then approximated by either a circle or a polygon. The results show that the IFSS is overestimated if the circular approximation is used. The influence of the molding temperature on the IFSS was also studied; a molding temperature of 183 °C leads to the best interfacial properties, while temperatures above or below this value reduce the interface strength.
Keywords: Interface, pull-out, processing, temperature, hemp, polypropylene, composite.
1088 Rank-Based Chain-Mode Ensemble for Binary Classification
Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu
Abstract:
In the field of machine learning, ensembles are commonly employed to improve performance over multiple base classifiers. However, true predictions are often canceled out by false ones during consensus, owing to a phenomenon called the "curse of correlation", that is, strong interference among the predictions produced by the base classifiers. In addition, existing practices are still unable to effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that both problems are caused by inherent deficiencies in the consensus approach. We therefore create an enhanced ensemble algorithm that adopts a designed rank-based chain-mode consensus to overcome the two problems. To evaluate the proposed ensemble algorithm, we employ the well-known benchmark data set NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to make comparisons between the proposed algorithm and 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus is shown to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score and area under the receiver operating characteristic curve.
Keywords: Consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble.
1087 The Estimate Rate of Permanent Flow of a Liquid Simulating Blood by Doppler Effect
Authors: Malika D. Kedir-Talha, Mohammed Mehenni
Abstract:
To improve the characterization of blood flows, we propose a method based on the spectral analysis of Doppler signals. Our calculation involves a reasonable approximation; the error made on the estimated speed reflects the fact that the speed depends on the flow conditions as well as on measurement parameters such as the bore and the volume flow rate. Estimating the Doppler signal frequency enables us to determine the maximum Doppler frequency Fdmax as well as the maximum flow speed. The results show that the difference between the estimated frequencies (Fde) and the Doppler frequencies (Fd) is small; this difference tends to zero for large angles θ and is proportional to the diameter D. The description of the friction velocity and the friction coefficient justifies the error rate obtained.
Keywords: Doppler frequency, Doppler spectrum, estimated speed, permanent flow.
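The underlying relation is the standard Doppler equation Fd = 2·f0·v·cos(θ)/c, inverted for the flow speed as below; the numerical values are illustrative and not taken from the paper.

```python
import numpy as np

def doppler_velocity(fd, f0, theta_deg, c=1540.0):
    """Flow speed from the Doppler shift: v = fd * c / (2 * f0 * cos(theta)).

    c = 1540 m/s is the sound speed commonly assumed for blood-mimicking fluid (assumption).
    """
    return fd * c / (2.0 * f0 * np.cos(np.radians(theta_deg)))

# Illustrative numbers only: 4 MHz probe, 1.3 kHz Doppler shift, 60 degree beam angle.
print(doppler_velocity(fd=1300.0, f0=4e6, theta_deg=60.0))   # ~0.50 m/s
```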
1086 Normalizing Logarithms of Realized Volatility in an ARFIMA Model
Authors: G. L. C. Yap
Abstract:
Modelling realized volatility with high-frequency returns is popular, as realized volatility is an unbiased and efficient estimator of return volatility. A computationally simple model fits the logarithms of the realized volatilities with a fractionally integrated long-memory Gaussian process. The Gaussianity assumption simplifies parameter estimation via the Whittle approximation. Nonetheless, this assumption may not be met in finite samples, and there may be a need to normalize the financial series. Based on the empirical indices S&P 500 and DAX, this paper examines the performance of the linear volatility model pre-treated with normalization compared to its existing counterpart. The empirical results show that including normalization as a pre-treatment procedure improves forecast performance over the existing model in terms of both statistical and economic evaluations.
Keywords: Long-memory, Gaussian process, Whittle estimator, normalization, volatility, value-at-risk.
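As one possible pre-treatment of the kind the abstract describes, the sketch below computes daily log realized volatility from intraday returns and applies a rank-based inverse-normal transform before any ARFIMA/Whittle fitting; the specific normalization used in the paper may differ, and the data are synthetic.

```python
import numpy as np
from scipy.stats import norm

def log_realized_volatility(intraday_returns):
    """log RV_t = log( sum of squared intraday returns within day t )."""
    return np.log(np.sum(np.asarray(intraday_returns) ** 2, axis=1))

def gaussianize(series):
    """Rank-based inverse-normal transform, one possible normalization step (assumption)."""
    ranks = np.argsort(np.argsort(series)) + 1.0
    return norm.ppf(ranks / (len(series) + 1.0))

# Synthetic example: 500 days x 78 five-minute returns with slowly varying volatility.
rng = np.random.default_rng(2)
daily_vol = np.exp(rng.normal(-4.5, 0.3, size=500))
r = rng.normal(0.0, daily_vol[:, None] / np.sqrt(78), size=(500, 78))
log_rv = log_realized_volatility(r)
z = gaussianize(log_rv)                    # series that would feed the ARFIMA/Whittle fit
print(z.mean(), z.std())
```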
1085 Investigation of VMAT Algorithms and Dosimetry
Authors: A. Taqaddas
Abstract:
Purpose: The planning and dosimetry of different VMAT algorithms (SmartArc, Ergo++, Autobeam) are compared with IMRT for head and neck cancer patients. Modelling was performed to rule out causes of discrepancies between planned and delivered dose. Methods: Five HNC patients previously treated with IMRT were re-planned with SmartArc (SA), Ergo++ and Autobeam. Plans were compared with each other and against IMRT and evaluated using DVHs for PTVs and OARs, delivery time, monitor units (MU) and dosimetric accuracy. Modelling of control point (CP) spacing, leaf-end separation and MLC/aperture shape was performed to rule out causes of discrepancies between planned and delivered doses. Additionally, estimated arc delivery times, overall plan generation times, and the effect of CP spacing and number of arcs on plan generation times were recorded. Results: Single-arc SmartArc plans (SA4d) were generally better than IMRT and double-arc plans (SA2Arcs) in terms of homogeneity and target coverage. Double-arc plans seemed to play a positive role in achieving an improved Conformity Index (CI) and better sparing of some Organs at Risk (OARs) compared to step-and-shoot IMRT (ss-IMRT) and SA4d. Overall, Ergo++ plans achieved the best CI for both PTVs. Dosimetric validation of all VMAT plans without modelling was found to be lower than for ss-IMRT. Total MUs required for delivery were on average 19%, 30%, 10.6% and 6.5% lower than ss-IMRT for the SA4d, SA2d (single arc with 2° gantry spacing), SA2Arcs and Autobeam plans, respectively. Autobeam was most efficient in terms of actual treatment delivery times, whereas Ergo++ plans took longest to deliver. Conclusion: Overall, SA single-arc plans on average achieved the best target coverage and homogeneity for both PTVs. SA2Arc plans showed improved CI and sparing of some OARs. Very good dosimetric results were achieved with modelling. Ergo++ plans achieved the best CI. Autobeam resulted in the fastest treatment delivery times.
Keywords: Dosimetry, Intensity Modulated Radiotherapy, Optimization Algorithms, Volumetric Modulated Arc Therapy.
1084 Modeling and Simulation for 3D Eddy Current Testing in Conducting Materials
Authors: S. Bennoud, M. Zergoug
Abstract:
The numerical simulation of electromagnetic interactions is still a challenging problem, especially in problems that result in fully three dimensional mathematical models.
The goal of this work is to use mathematical modeling to characterize the reliability and capability of the eddy current technique to detect and characterize defects embedded in in-service aeronautical parts.
The finite element method is used to describe the eddy current technique in a mathematical model by predicting the eddy current interaction with defects. This model is, however, an approximation of the full Maxwell equations.
In this study, the analysis of the problem is based on a three dimensional finite element model that computes directly the electromagnetic field distortions due to defects.
Keywords: Eddy current, Finite element method, Non destructive testing, Numerical simulations.
1083 An Amalgam Approach for DICOM Image Classification and Recognition
Authors: J. Umamaheswari, G. Radhamani
Abstract:
This paper describes the process of recognizing and classifying brain images as normal or abnormal based on PSO-SVM. Image classification is becoming increasingly important for the medical diagnosis process: classifying the abnormality of the patient plays a great role in allowing doctors to diagnose the patient according to the severity of the disease. For DICOM images, optimal recognition and early detection of diseases are particularly difficult. Our work focuses on the recognition and classification of DICOM images based on a collective approach of digital image processing. For optimal recognition and classification, Particle Swarm Optimization (PSO), a Genetic Algorithm (GA) and a Support Vector Machine (SVM) are used. The collective PSO-SVM approach gives high approximation capability and much faster convergence.
Keywords: Recognition, classification, Relaxed Median Filter, Adaptive thresholding, clustering and Neural Networks
1082 Analytical Solutions for Corotational Maxwell Model Fluid Arising in Wire Coating inside a Canonical Die
Authors: Muhammad Sohail Khan, Rehan Ali Shah
Abstract:
The present paper applies the recently introduced optimal homotopy perturbation method (OHPM) and optimal homotopy asymptotic method (OHAM) to obtain analytic approximations of the nonlinear equations modeling the flow of polymer in wire coating of a corotational Maxwell fluid. An expression for the velocity field is obtained in non-dimensional form. Comparison of the results obtained by the two methods at different values of the non-dimensional parameter l10 reveals that the OHPM is more effective and easier to use. The OHPM solution can be improved even while working in the same order of approximation, depending on the choice of the auxiliary functions.
Keywords: Wire coating die, Corotational Maxwell model, optimal homotopy asymptotic method, optimal homotopy perturbation method.
1081 Determination of Non Uniform Sinusoidal Microstrip Leaky-Wave Antenna Radiating Performances in Millimeter Band
Authors: Zahéra Mekkioui
Abstract:
Here we consider a non-uniform microstrip leaky-wave antenna implemented on a dielectric waveguide with a sinusoidal profile of periodic metallic grating. The non-uniform distribution of the attenuation constant α along the propagation axis optimizes the radiating characteristics and performances of such antennas. The method developed here is based on an integral method in which the admittance-operator formalism is combined with a BKW approximation. First, the effect of the modeling on the modal analysis of complex waves is studied in detail. Then, the BKW model is used for the dispersion analysis of the antenna of interest. Following antenna theory, continuity of the leaky-wave magnitude is enforced at the discontinuities of the non-uniform structure. To test the validity of our dispersion analysis, computed radiation patterns are presented and compared in the millimeter band.
Keywords: Antenna, leaky-wave, performances, sinusoidal.
1080 Evolutionary Approach for Automated Discovery of Censored Production Rules
Authors: Kamal K. Bharadwaj, Basheer M. Al-Maqaleh
Abstract:
In the recent past, there has been increasing interest in applying evolutionary methods to Knowledge Discovery in Databases (KDD), and a number of successful applications of Genetic Algorithms (GA) and Genetic Programming (GP) to KDD have been demonstrated. The most common representation of the discovered knowledge is the standard Production Rule (PR) of the form If P Then D. PRs, however, are unable to handle exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. Using a rule of this type, we are free to ignore the exception conditions when the resources needed to establish their presence are tight, or when no information is available as to whether they hold. Thus, the 'If P Then D' part of the CPR expresses the important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. This paper presents a classification algorithm based on an evolutionary approach that discovers comprehensible rules with exceptions in the form of CPRs. The proposed approach uses a flexible chromosome encoding, where each chromosome corresponds to a CPR. Appropriate genetic operators are suggested, and a fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed algorithm.
Keywords: Censored Production Rule, Data Mining, Machine Learning, Evolutionary Algorithms.
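A censored production rule can be evaluated as in the tiny sketch below, including the option of ignoring the censor when resources are tight; the rule and facts are toy examples, not drawn from the paper.

```python
def evaluate_cpr(premise, decision, censor, facts, check_censor=True):
    """Evaluate a censored production rule 'If P Then D Unless C' on a fact dictionary.

    If check_censor is False (resources tight / censor unknown), the censor is ignored
    and the rule behaves like the ordinary production rule 'If P Then D'.
    """
    if not premise(facts):
        return None                     # rule does not fire
    if check_censor and censor(facts):
        return ("not", decision)        # exception holds: polarity of D flips to ~D
    return decision

# Toy rule: If bird(x) Then flies(x) Unless penguin(x)
rule = dict(premise=lambda f: f.get("bird", False),
            decision="flies",
            censor=lambda f: f.get("penguin", False))

print(evaluate_cpr(**rule, facts={"bird": True}))                                        # 'flies'
print(evaluate_cpr(**rule, facts={"bird": True, "penguin": True}))                       # ('not', 'flies')
print(evaluate_cpr(**rule, facts={"bird": True, "penguin": True}, check_censor=False))   # 'flies'
```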
1079 Split-Pipe Design of Water Distribution Networks Using a Combination of Tabu Search and Genetic Algorithm
Authors: J. Tospornsampan, I. Kita, M. Ishii, Y. Kitamura
Abstract:
In this paper, a combination of two heuristic algorithms, a genetic algorithm and tabu search, is proposed. It has been developed to obtain the least-cost split-pipe design of looped water distribution networks, and has been applied to three well-known water distribution networks taken from the literature. Combining these two heuristic algorithms aims at enhancing their strengths and compensating for their weaknesses: tabu search is rather systematic and deterministic and uses adaptive memory in the search process, while the genetic algorithm is a probabilistic and stochastic optimization technique in which the solution space is explored by generating candidate solutions. Split-pipe design may not be realistic in practice, but for optimization purposes the optimal solutions are always achieved with a split-pipe design, and the solutions obtained in this study confirm that the least-cost solutions from the split-pipe design are always better than those from the single-pipe design. The results obtained from the combined approach show its ability and effectiveness in solving combinatorial optimization problems. The solutions are very satisfactory and of high quality; for two of the networks they are the lowest-cost solutions yet presented in the literature. The concept of the combined approach proposed in this study is expected to be useful in a variety of problems.
Keywords: GAs, Heuristics, Looped network, Least-cost design, Pipe network, Optimization, TS
1078 Analytical Model for Brine Discharges from a Sea Outfall with Multiport Diffusers
Authors: Anton Purnama
Abstract:
Multiport diffusers are effective engineering devices installed at modern marine outfalls for the steady discharge of effluent streams from coastal plants, such as municipal sewage treatment, thermal power generation and seawater desalination. A mathematical model based on a two-dimensional advection-diffusion equation over a flat seabed, incorporating the effect of a coastal tidal current, is developed to calculate the compounded concentration following discharges of desalination brine from a sea outfall with multiport diffusers. The analytical solutions are plotted to illustrate the merging of multiple brine plumes in shallow coastal waters, and a further approximation is made for the maximum shoreline concentration to formulate the dilution of a multiport diffuser discharge.
Keywords: Desalination brine discharge, mathematical model, multiport diffuser, two sea outfalls.
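For orientation, the textbook constant-depth, uniform-current far-field solution of the steady 2D advection-diffusion equation, superposed over the diffuser ports, is sketched below; this is a simplification of the paper's tidal model, and all numbers are illustrative.

```python
import numpy as np

def port_plume(x, y, q, U, K, H):
    """Far-field concentration from a single port: steady 2D advection-diffusion with a
    uniform current U along x, lateral diffusivity K and water depth H (textbook form,
    a simplification of the paper's tidal model)."""
    x = np.maximum(x, 1e-9)   # solution is defined downstream of the port
    return (q / H) / np.sqrt(4.0 * np.pi * K * U * x) * np.exp(-U * y**2 / (4.0 * K * x))

def diffuser_concentration(x, y, port_offsets, q_per_port, U, K, H):
    """Compounded concentration: superpose the plumes of all diffuser ports."""
    return sum(port_plume(x, y - y0, q_per_port, U, K, H) for y0 in port_offsets)

# Illustrative numbers only: 8 ports 10 m apart, each with a small excess-salinity flux.
y = np.linspace(-100.0, 200.0, 7)
print(diffuser_concentration(x=500.0, y=y, port_offsets=np.arange(8) * 10.0,
                             q_per_port=0.05, U=0.1, K=1.0, H=10.0))
```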