Search results for: Optimization techniques
2440 A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data
Authors: Mais Nijim, Rama Devi Chennuboyina, Waseem Al Aqqad
Abstract:
Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data acquired through satellites, radars, and sensors contain important geographical information that can be used for remote sensing applications such as region planning and disaster management. Spatial data classification and object recognition are important tasks for many applications, yet classifying and identifying objects manually from images is difficult. Object recognition is often treated as a classification problem that can be solved with machine-learning techniques. Among the many available machine-learning algorithms, the classification here is performed with supervised classifiers such as Support Vector Machines (SVM), since the area of interest is known. We propose a classification method that considers neighboring pixels in a region during feature extraction and evaluates classifications according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset was created for training and testing purposes; the attributes were generated from pixel intensity values and mean reflectance values. We demonstrate the benefits of applying knowledge discovery and data-mining techniques to image data for accurate information extraction and classification from high spatial resolution remote sensing imagery.
Keywords: Remote sensing, object recognition, classification, data mining, waterbody identification, feature extraction.
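As a rough illustration of the supervised classification step described above, the sketch below trains an RBF-kernel SVM with scikit-learn on a hypothetical feature table whose columns are a pixel's intensity and the mean reflectance of its neighborhood; the paper's actual dataset and attributes are not reproduced here.

```python
# Minimal sketch: SVM classification of pixel-neighborhood features (hypothetical data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's dataset: one row per pixel with
# [pixel intensity, mean reflectance of its neighborhood]; label 0 = land, 1 = water.
n = 500
intensity = rng.uniform(0, 255, n)
mean_reflectance = rng.uniform(0, 1, n)
X = np.column_stack([intensity, mean_reflectance])
y = (mean_reflectance + 0.001 * intensity < 0.6).astype(int)  # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM on standardized features, standing in for the supervised classifier.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```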
2439 Evaluating the Radiation Dose Involved in Interventional Radiology Procedures
Authors: Kholood Baron
Abstract:
Radiologic interventional studies use fluoroscopy imaging guidance to perform both diagnostic and therapeutic procedures. These can result in high radiation doses being delivered to patients and to the radiology team, due to the prolonged fluoroscopy time and the large number of images taken, even when dose-minimizing techniques and modern fluoroscopic tools are applied. Because these procedures are part of the everyday routine of interventional radiology doctors, assistant nurses, and radiographers, it is important to estimate the radiation exposure dose they receive in order to give objective advice and reduce the radiation dose to both the patient and the radiology team. The aim of this study was to determine the total radiation dose reaching the radiologist and the patient during an interventional procedure, and to determine the impact of certain parameters on the patient dose. The radiation dose was measured with thermoluminescent dosimeters (TLDs). Physicians, patients, nurses, and radiographers wore TLDs during 12 interventional radiology procedures performed in two hospitals, Mubarak and Chest Hospital. This study highlights the need for interventional radiologists to be mindful of the radiation doses received by both patients and medical staff during interventional radiology procedures. The findings emphasize the impact of factors such as fluoroscopy duration and the number of images taken on the patient dose. By raising awareness and providing insights into optimizing techniques and protective measures, this research contributes to the overall goal of reducing radiation doses and ensuring the safety of patients and medical staff.
Keywords: Dosimetry, radiation dose, interventional radiology procedures, patient radiation dose.
2438 A Machine Learning Based Framework for Education Levelling in Multicultural Countries: UAE as a Case Study
Authors: Shatha Ghareeb, Rawaa Al-Jumeily, Thar Baker
Abstract:
In Abu Dhabi, there are many different education curricula, and the private schools and quality assurance sector supervises many private schools serving many nationalities. Because these curricula exist to meet expats’ needs, they have different requirements for registration and success, different age groups for starting education, and each curriculum has a different number of years, assessment techniques, reassessment rules, and exam boards. Currently, students who transfer between curricula are not placed in the right year group, because each academic year has different start and end dates and the date-of-birth cut-off for each year group differs between curricula; as a result, students end up either younger or older than their year group, which creates gaps in their learning and performance. In addition, there is no way of storing student data throughout the academic journey so that schools can track the student learning process. In this paper, we propose to develop a computational framework applicable in multicultural countries such as the UAE, in which multiple education systems are implemented. The ultimate goal is to use cloud and fog computing technology integrated with Artificial Intelligence techniques of Machine Learning to aid in a smooth transition when assigning students to their year groups, and to provide leveling and differentiation information for students who relocate from one education curriculum to another, whilst also having the ability to store and access student data from anywhere throughout their academic journey.
Keywords: Admissions, algorithms, cloud computing, differentiation, fog computing, leveling, machine learning.
2437 Design of a Reduced Order Robust Convex Controller for Flight Control System
Authors: S. Swain, P. S. Khuntia
Abstract:
In this paper, an optimal convex controller is designed to control the angle of attack of a FOXTROT aircraft. The order of the system model is then reduced to a low-dimensional state space using the Balanced Truncation model reduction technique. Finally, the robust stability of the reduced model is tested graphically, using the Kharitonov rectangle and the Zero Exclusion Principle for a particular range of perturbation values, and verified theoretically using the frequency sweeping function.
Keywords: Convex Optimization, Kharitonov Stability Criterion, Model Reduction, Robust Stability.
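For reference, the model reduction step can be sketched as square-root balanced truncation. The snippet below implements it with SciPy on a small toy stable system; the FOXTROT aircraft model itself is not given in the abstract, so the matrices here are assumed placeholders.

```python
# Sketch of square-root balanced truncation for a toy stable LTI system.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Toy 4th-order stable system (stand-in for the aircraft model).
A = np.array([[-2.0, 1.0, 0.0, 0.0],
              [0.0, -3.0, 1.0, 0.0],
              [0.0, 0.0, -5.0, 1.0],
              [0.0, 0.0, 0.0, -10.0]])
B = np.array([[1.0], [0.5], [0.2], [0.1]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])

# Controllability and observability Gramians from the Lyapunov equations
#   A Wc + Wc A^T + B B^T = 0  and  A^T Wo + Wo A + C^T C = 0.
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root algorithm: Cholesky factors, then an SVD giving the Hankel singular values.
Lc = cholesky(Wc, lower=True)
Lo = cholesky(Wo, lower=True)
U, hsv, Vt = svd(Lo.T @ Lc)
print("Hankel singular values:", hsv)

r = 2  # retain the two dominant states
S = np.diag(hsv[:r] ** -0.5)
T = Lc @ Vt[:r].T @ S        # right projection
Ti = S @ U[:, :r].T @ Lo.T   # left projection (Ti @ T = I)
Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T
print("Reduced A:\n", Ar)
```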
2436 A Flute Tracking System for Monitoring the Wear of Cutting Tools in Milling Operations
Authors: Hatim Laalej, Salvador Sumohano-Verdeja, Thomas McLeay
Abstract:
Monitoring of tool wear in milling operations is essential for achieving the desired dimensional accuracy and surface finish of a machined workpiece. Although there are numerous statistical models and artificial intelligence techniques available for monitoring the wear of cutting tools, these techniques cannot pinpoint which cutting edge of the tool, or which insert in the case of indexable tooling, is worn or broken. Currently, the task of monitoring wear on the tool cutting edges is carried out by the operator through manual inspection, causing undesirable stoppages of machine tools and consequently costs from lost productivity. The present study is concerned with the development of a flute tracking system to segment signals related to each physical flute of a three-flute cutter used in an end milling operation. The purpose of the system is to monitor the cutting condition of each flute separately in order to determine its progressive wear rate and to predict imminent tool failure. The results of this study clearly show that signals associated with each flute can be effectively segmented using the proposed flute tracking system. Furthermore, the results illustrate that by segmenting the sensor signal by flute, it is possible to investigate the wear in each physical cutting edge of the cutting tool. These findings are significant in that they facilitate online condition monitoring of a cutting tool for each specific flute without the need for operators or engineers to perform manual inspections of the tool.
Keywords: Tool condition monitoring, tool wear prediction, milling operation, flute tracking.
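A minimal sketch of the flute-segmentation idea follows, assuming a sensor signal sampled at a known rate and a known spindle speed for a three-flute cutter; the study's actual sensors, signal processing, and synchronization method are not described in the abstract, so the numbers and the synthetic signal are illustrative.

```python
# Sketch: segment a milling sensor signal into per-flute portions for a 3-flute cutter.
import numpy as np

fs = 10_000          # sampling rate, Hz (assumed)
spindle_rpm = 3000   # spindle speed (assumed)
n_flutes = 3

samples_per_rev = int(round(fs * 60 / spindle_rpm))
samples_per_flute = samples_per_rev // n_flutes

# Synthetic stand-in for a measured cutting-force signal (tooth-passing tone + noise).
t = np.arange(fs)  # one second of data
signal = np.sin(2 * np.pi * n_flutes * spindle_rpm / 60 * t / fs) + 0.1 * np.random.randn(fs)

# Collect the samples belonging to each physical flute, revolution by revolution.
flute_segments = {k: [] for k in range(n_flutes)}
n_revs = len(signal) // samples_per_rev
for rev in range(n_revs):
    start = rev * samples_per_rev
    for k in range(n_flutes):
        seg = signal[start + k * samples_per_flute : start + (k + 1) * samples_per_flute]
        flute_segments[k].append(seg)

# Per-flute RMS tracked over revolutions: a simple wear-sensitive feature for each edge.
for k in range(n_flutes):
    rms = [np.sqrt(np.mean(s ** 2)) for s in flute_segments[k]]
    print(f"flute {k}: mean RMS over {n_revs} revolutions = {np.mean(rms):.3f}")
```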
2435 An Approach for Optimization of Functions and Reducing the Value of the Product by Using Virtual Models
Authors: A. Bocevska, G. Todorov, T. Neshkov
Abstract:
A newly developed approach for Functional Cost Analysis (FCA), based on virtual prototyping (VP) models in a CAD/CAE environment and applicable and necessary in developing new products, is presented. It is an instrument for improving the value of the product while maintaining costs, and/or for reducing the costs of the product without reducing its value. Five broad classes of VP methods are identified. Efficient use of prototypes in FCA is a vital activity that can make the difference between successful and unsuccessful entry of new products into the competitive world market. Successful realization of this approach is illustrated for a specific example using a press-joint power tool.
Keywords: CAD/CAE environment, Functional Cost Analysis (FCA), Virtual prototyping (VP) models.
2434 Efficient Microspore Isolation Methods for High Yield Embryoids and Regeneration in Rice (Oryza sativa L.)
Authors: S. M. Shahinul Islam, Israt Ara, Narendra Tuteja, Sreeramanan Subramaniam
Abstract:
Through anther and microspore culture methods, completely homozygous plants can be produced within a year, as compared to the long inbreeding method. Isolated microspore culture is one of the most important techniques for the rapid development of haploid plants. The efficiency of this method is influenced by several factors, such as culture conditions, growth regulators, plant media, pretreatments, the physical and growth conditions of the donor plants, and the pollen isolation procedure. The main purpose of this study was to improve the isolated microspore culture protocol in order to increase the efficiency of embryoid induction and regeneration and to reduce albinism. In this study, we tested three microspore isolation procedures (by glass rod, by homogenizer, and by blending) and determined their effect on gametic embryogenesis. Three types of media were used: washing, pre-culture, and induction. The induction medium was AMC (modified MS) supplemented with 2,4-D (2.5 mg/l), kinetin (0.5 mg/l), a higher amount of D-mannitol (90 g/l) instead of sucrose, and two amino acids (L-glutamine and L-serine). Of the three main microspore isolation procedures, homogenizer isolation (P4) showed the best performance for ELS induction (177%) and green plantlets (104%) compared with the other techniques. Albinism occurred in all cases, but microspore isolation from excised anthers by glass rod and by homogenizer produced fewer albino plants, which was another important finding of this study.
Keywords: Androgenesis, pretreatment, microspore culture, regeneration, albino plants, Oryza sativa.
2433 Topographical Image Transference Compatibility Generated Through Moiré Technique Applying Parametrical Softwares of Computer Assisted Design
Authors: M. V. G. Silva, J. Gazzola, I. M. Dal Fabbro, A. C. L. Lino
Abstract:
Computer-aided design relies on the support of parametric software in the design of machine components as well as of any other parts of interest. The complexity of the element under study sometimes poses certain difficulties for computer design, or may even generate mistakes in the final body conception. Reverse engineering techniques are based on the transformation of images of an already conceived body into a matrix of points which can be visualized by the design software. The literature exhibits several techniques for obtaining the dimensional fields of machine components, such as contact instruments (MMC), calipers, and optical methods such as laser scanners, holograms, and moiré methods. The objective of this research work was to analyze the moiré technique as an instrument of reverse engineering, applied to bodies of non-complex geometry, i.e., simple solid figures, creating matrices of points. These matrices were forwarded to the parametric software SolidWorks to generate the virtual object. The volume data obtained by mechanical means, i.e., by caliper, the volume obtained through the moiré method, and the volume generated by the SolidWorks software were compared and found to be in close agreement. This research work suggests the application of phase-shifting moiré methods as an instrument of reverse engineering, serving also to support the design of farm machinery elements.
Keywords: Reverse engineering, Moiré technique, three-dimensional image generation.
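As a pointer to the phase-shifting step mentioned above, the sketch below applies the standard four-step phase retrieval (wrapped phase from an arctangent, then unwrapping) to synthetic fringe images; the actual optical set-up, calibration, and export of the point matrix to SolidWorks are outside its scope, and the fringe parameters are assumed.

```python
# Sketch: 4-step phase-shifting fringe analysis (standard formulas, synthetic fringes).
import numpy as np

h, w = 128, 128
y, x = np.mgrid[0:h, 0:w]

# Synthetic object phase (a smooth bump) encoded on a linear fringe carrier.
true_phase = 3.0 * np.exp(-((x - w / 2) ** 2 + (y - h / 2) ** 2) / (2 * 25.0 ** 2))
carrier = 2 * np.pi * x / 16.0

def fringe(shift):
    # Ideal camera image of the deformed grating for a given phase shift.
    return 128 + 100 * np.cos(carrier + true_phase + shift)

I1, I2, I3, I4 = (fringe(k * np.pi / 2) for k in range(4))

# Classic 4-step formula gives the wrapped phase; unwrap row by row.
wrapped = np.arctan2(I4 - I2, I1 - I3)
unwrapped = np.unwrap(wrapped, axis=1)

# The carrier is known by construction here; subtracting it leaves the object phase,
# which would then be scaled to heights and exported as a matrix of points.
recovered = unwrapped - carrier
print("max phase error:", float(np.max(np.abs(recovered - true_phase))))
```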
2432 Singularity Loci of Actuation Schemes for 3RRR Planar Parallel Manipulator
Authors: S. Ramana Babu, V. Ramachandra Raju, K. Ramji
Abstract:
This paper presents the effect of actuation schemes on the performance of parallel manipulators, and how the singularity loci within the reachable workspace of the manipulator change with the choice of actuation scheme used to drive it. The performance of the eight possible actuation schemes of the 3RRR planar parallel manipulator is compared. The optimal design problem is formulated to find the manipulator geometry that maximizes the singularity-free conditioned workspace for all eight actuation cases; the optimization problem is solved using genetic algorithms.
Keywords: Actuation schemes, GCI, genetic algorithms.
2431 Detection of Action Potentials in the Presence of Noise Using Phase-Space Techniques
Authors: Christopher Paterson, Richard Curry, Alan Purvis, Simon Johnson
Abstract:
Emerging bio-engineering fields such as brain-computer interfaces, neuroprosthesis devices, and the modeling and simulation of neural networks have led to increased research activity in algorithms for the detection, isolation, and classification of action potentials (AP) from noisy data trains. Current techniques in the field of 'unsupervised no-prior-knowledge' biosignal processing include energy operators, wavelet detection, and adaptive thresholding. These tend to be biased towards larger AP waveforms; APs may be missed due to deviations in spike shape and frequency, and correlated noise spectra can cause false detections. Such algorithms also tend to suffer from large computational expense. A new signal detection technique based upon the ideas of phase-space diagrams and trajectories is proposed, in which a delayed copy of the AP is used to highlight discontinuities relative to the background noise. This idea has been used to create algorithms that are computationally inexpensive and address the above problems. Distinct APs have been picked out and manually classified from real physiological data recorded from a cockroach. To facilitate testing of the new technique, an Auto Regressive Moving Average (ARMA) noise model has been constructed based upon the background noise of the recordings. Together with the AP classifications, this model enables the generation of realistic neuronal data sets at arbitrary signal-to-noise ratios (SNR).
Keywords: Action potential detection, Low SNR, Phase-space diagrams/trajectories, Unsupervised/no-prior knowledge.
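Below is a minimal numpy sketch of the delayed-copy idea described above: the signal is embedded in a two-dimensional phase space (x(t), x(t−τ)) and samples whose trajectory radius leaves the background-noise cloud are flagged. The paper's actual detector, delay, and thresholds are not specified in the abstract, so the values here are illustrative.

```python
# Sketch: phase-space (delay-embedding) spike detection on a synthetic trace.
import numpy as np

rng = np.random.default_rng(1)
fs = 20_000
n = fs  # one second of data
x = 0.2 * rng.standard_normal(n)  # background noise

# Inject a few synthetic action potentials (biphasic pulses) at known times.
ap = np.concatenate([np.hanning(20), -0.6 * np.hanning(20)])
for t0 in (3000, 7500, 12000, 16500):
    x[t0:t0 + len(ap)] += 1.5 * ap

# Delay embedding: pair each sample with a delayed copy of itself.
tau = 5  # delay in samples (illustrative)
traj = np.column_stack([x[tau:], x[:-tau]])

# Radius of the trajectory from the origin; noise forms a compact cloud,
# while action-potential excursions leave it.
radius = np.linalg.norm(traj, axis=1)
scale = 5.0 * np.median(np.abs(radius - np.median(radius)))  # robust spread estimate
detections = np.where(radius > np.median(radius) + scale)[0] + tau

# Collapse consecutive detections into single events.
events = detections[np.diff(detections, prepend=-10) > 10]
print("detected event starts:", events)
```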
2430 Probability Density Estimation Using Advanced Support Vector Machines and the Expectation Maximization Algorithm
Authors: Refaat M Mohamed, Ayman El-Baz, Aly A. Farag
Abstract:
This paper presents a new approach for probability density function estimation using the Support Vector Machines (SVM) and Expectation Maximization (EM) algorithms. In the proposed approach, an advanced algorithm for SVM density estimation which incorporates Mean Field theory in the learning process is used. Instead of using ad hoc values for the parameters of the kernel function used by the SVM algorithm, the proposed approach uses the EM algorithm for an automatic optimization of the kernel. Experimental evaluation using a simulated data set shows encouraging results.
Keywords: Density Estimation, SVM, Learning Algorithms, Parameters Estimation.
2429 STRPRO Tool for Manipulation of Stratified Programs Based on SEPN
Authors: Chadlia Jerad, Amel Grissa-Touzi, Habib Ounelli
Abstract:
Negation is useful in the majority of real-world applications. However, its introduction leads to semantic and canonical problems. SEPN nets are a well-adapted extension of predicate nets for the definition and manipulation of stratified programs. This formalism is characterized by two main contributions. The first concerns the management of the whole class of stratified programs. The second contribution is related to the optimization of usual operations (maximal stratification, incremental updates, etc.). We propose, in this paper, useful algorithms for manipulating stratified programs using SEPN. These algorithms were implemented and validated with the STRPRO tool.
Keywords: stratified programs, update operations, SEPN formalism, algorithms, STRPRO.
2428 On a New Numerical Analysis for the Symmetric Shortest Queue Problem
Authors: Tayeb Lardjane, Rabah Messaci
Abstract:
We consider a network of two M/M/1 parallel queues sharing the same Poissonian arrival stream with rate λ. Upon arrival to the system, a customer heads to the shortest queue and stays until being served. If the two queues have the same length, an arriving customer chooses one of them with equal probability. Each service duration in the two queues is an exponential random variable with rate μ, and no jockeying is permitted between the two queues. A new numerical method, based on linear programming and convex optimization, is developed for the computation of the steady-state solution of the system.
Keywords: Steady state solution, matrix formulation, convex set, shortest queue, linear programming.
2427 Optimization of the Characteristic Straight Line Method by a “Best Estimate” of Observed, Normal Orthometric Elevation Differences
Authors: Mahmoud M. S. Albattah
Abstract:
In this paper, to optimize the “Characteristic Straight Line Method”, which is used in soil displacement analysis, a “best estimate” of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept, and consequently the concept of “height” itself, is discussed in detail. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed using the “Characteristic Straight Line Method”, whose characteristic components have been defined and constructed from a “best estimate” of the topometric observations. In the measurement of elevation differences, we used the most modern leveling equipment available. Observational procedures were also designed to provide the most effective method of acquiring data. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; the rod scale correction ensures a uniform scale which conforms to the international length standard; and the concept of height systems is introduced, in which all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) have been investigated. The “Characteristic Straight Line Method” is slightly more convenient than the “Characteristic Circle Method”: it permits the evaluation of a displacement of very small, even infinitesimal, magnitude. The inclination of the landslide is given by the inverse of the distance from the reference point O to the “Characteristic Straight Line”, and its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A “best estimate” of the topometric observations was used to measure the elevation of carefully selected points, before and after the deformation. Gross errors have been eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test using an area where very interesting land surface deformation occurs are reported. Monitoring with different options and a qualitative comparison of results based on a sufficient number of check points are presented.
Keywords: Characteristic straight line method, dynamic height, landslides, orthometric height, systematic errors.
2426 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique
Authors: Satyasen Panda, Urmila Bhanja
Abstract:
In this paper, we present and analyze three-dimensional (3-D) wavelength/time/space code matrices for optical code division multiple access (OCDMA) networks with a NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3D-OCDMA system is based on measurements of the signal-to-noise ratio (SNR), BER, and eye diagram for different numbers of simultaneous users. The analysis also considers various types of noise and multiple access interference (MAI) effects. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a larger number of simultaneous users over longer fiber distances with minimum BER compared to the OR and AND subtraction techniques. The received optical power was also measured at various levels of BER to analyze the effect of attenuation.
Keywords: Cross correlation, three-dimensional optical code division multiple access, spectral amplitude coding optical code division multiple access, multiple access interference, phase induced intensity noise, three-dimensional modified quadratic congruence/modified prime code.
2425 Cyber Fraud Schemes: Modus Operandi, Tools and Techniques, and the Role of European Legislation as a Defense Strategy
Authors: Papathanasiou Anastasios, Liontos George, Liagkou Vasiliki, Glavas Euripides
Abstract:
The purpose of this paper is to describe the growing problem of various cyber fraud schemes that exist on the internet and are currently among the most prevalent. The main focus of this paper is to provide a detailed description of the modus operandi, tools, and techniques utilized in four basic typologies of cyber frauds: Business Email Compromise (BEC) attacks, investment fraud, romance scams, and online sales fraud. The paper aims to shed light on the methods employed by cybercriminals in perpetrating these types of fraud, as well as the strategies they use to deceive and victimize individuals and businesses on the internet. Furthermore, this study outlines defense strategies intended to tackle the issue head-on, with a particular emphasis on the crucial role played by European legislation. European legislation has proactively adapted to the evolving landscape of cyber fraud, striving to enhance cybersecurity awareness, bolster user education, and implement advanced technical controls to mitigate associated risks. The paper evaluates the advantages and innovations brought about by the European legislation while also acknowledging potential flaws that cybercriminals might exploit. As a result, recommendations for refining the legislation are offered in this study in order to better address this pressing issue.
Keywords: Business email compromise, cybercrime, European legislation, investment fraud, Network and Information Security, online sales fraud, romance scams.
2424 Description of Kinetics of Propane Fragmentation with a Support of Ab Initio Simulation
Authors: Amer Al Mahmoud Alsheikh, Jan Žídek, František Krčma
Abstract:
Using ab initio theoretical calculations, we present an analysis of the fragmentation process. The analysis is performed in two steps. The first step is the calculation of fragmentation energies by ab initio methods. The second step is the application of these energies to a kinetic description of the process. The energies of the fragments are presented in this paper. The kinetics of the fragmentation process can be described by numerical models; the method for this kinetic analysis is described here. The result, the composition of the fragmentation products, will be calculated in future work and can then be compared to the fragment concentrations obtained from the mass spectrum.
Keywords: Ab initio, Density functional theory, Fragmentation energy, Geometry optimization.
2423 Participation in IAEA Proficiency Test to Analyse Cobalt, Strontium and Caesium in Seawater Using Direct Counting and Radiochemical Techniques
Authors: S. Visetpotjanakit, C. Khrautongkieo
Abstract:
Radiation monitoring in the environment and foodstuffs is one of the main responsibilities of the Office of Atoms for Peace (OAP) as the nuclear regulatory body of Thailand. The main goal of the OAP is to assure the safety of the Thai people and the environment from any radiological incident. Various radioanalytical methods have been developed to monitor radiation and radionuclides in environmental and foodstuff samples. To validate our analytical performance, several proficiency test exercises from the International Atomic Energy Agency (IAEA) have been performed. Here, the results of a proficiency test exercise referred to as the Proficiency Test for Tritium, Cobalt, Strontium and Caesium Isotopes in Seawater 2017 (IAEA-RML-2017-01) are presented. All radionuclides except ³H were analysed using various radioanalytical methods, i.e., direct gamma-ray counting for determining ⁶⁰Co, ¹³⁴Cs and ¹³⁷Cs, and developed radiochemical techniques for analysing ¹³⁴Cs and ¹³⁷Cs using an AMP pre-concentration technique and ⁹⁰Sr using a di-(2-ethylhexyl) phosphoric acid (HDEHP) liquid extraction technique. The analysis results were submitted to the IAEA. All results passed the IAEA criteria, i.e., accuracy, precision and trueness, and obtained 'Accepted' status. This confirms the quality of the data from the OAP environmental radiation laboratory for monitoring radiation in the environment.
Keywords: International atomic energy agency, proficiency test, radiation monitoring, seawater.
2422 Transferring Route Plan over Time
Authors: Barıs Kocer, Ahmet Arslan
Abstract:
The travelling salesman problem (TSP) is a combinatorial optimization problem, and its solution approaches have been applied to many real-world problems. Pure TSP assumes the cities to visit are fixed in time, and thus solutions are created to find the shortest path through these points. In practice, however, some of the points may be canceled over time. If the problem is not time-critical, it is not important to determine a new routing plan quickly; but if the points change rapidly and a new route plan must be decided in time, a different approach is needed. We developed a route plan transfer method based on transfer learning and achieved high performance compared with determining a new model from scratch after every change.
Keywords: Genetic algorithms, transfer learning, travelling salesman problem.
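The transfer idea, reusing the previously optimized tour instead of re-solving from scratch, can be sketched as follows: canceled cities are dropped from the old tour while its visiting order is kept, and the result is polished with a short 2-opt pass. This is a simplified, hypothetical illustration; the paper transfers knowledge inside a genetic algorithm, which is not reproduced here.

```python
# Sketch: transferring an existing TSP tour after some cities are canceled.
import itertools
import math
import random

random.seed(0)
cities = {i: (random.random(), random.random()) for i in range(30)}

def dist(a, b):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    return math.hypot(x1 - x2, y1 - y2)

def tour_length(tour):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def two_opt(tour):
    """Simple 2-opt local search: keep reversing segments while it helps."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            if j - i < 2:
                continue
            new = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(new) < tour_length(tour):
                tour, improved = new, True
    return tour

# A previously optimized tour (here produced by 2-opt from a trivial start).
old_tour = two_opt(list(cities))

# Some cities are canceled over time; transfer the old plan instead of restarting.
canceled = {3, 11, 19, 27}
transferred = [c for c in old_tour if c not in canceled]   # keep the old visiting order
new_tour = two_opt(transferred)                            # short polish
print("transferred tour length:", round(tour_length(new_tour), 3))
```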
2421 Stability Criteria for Uncertainty Markovian Jumping Parameters of BAM Neural Networks with Leakage and Discrete Delays
Authors: Qingqing Wang, Baocheng Chen, Shouming Zhong
Abstract:
In this paper, the problem of stability criteria for Markovian jumping BAM neural networks with leakage and discrete delays is investigated. Some new sufficient conditions are derived based on a novel Lyapunov-Krasovskii functional approach. These new criteria, based on the delay partitioning idea, are proved to be less conservative because the free-weighting matrices method and a convex optimization approach are employed. Finally, one numerical example is given to illustrate the usefulness and feasibility of the proposed main results.
Keywords: Stability, Markovian jumping neural networks, Time-varying delays, Linear matrix inequality.
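Criteria of this kind are checked numerically as linear matrix inequality (LMI) feasibility problems. The sketch below shows a much simpler instance of the same machinery, the Lyapunov LMI AᵀP + PA ≺ 0 with P ≻ 0 for a delay-free linear system solved with cvxpy; it is not the paper's delay-partitioned criterion for Markovian jumping BAM networks.

```python
# Sketch: checking stability of x' = A x via the Lyapunov LMI with cvxpy.
import numpy as np
import cvxpy as cp

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [
    P >> eps * np.eye(n),                    # P positive definite
    A.T @ P + P @ A << -eps * np.eye(n),     # Lyapunov inequality
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

if prob.status == cp.OPTIMAL:
    print("LMI feasible: the system is asymptotically stable. P =\n", P.value)
else:
    print("LMI infeasible for this A.")
```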
2420 An Accurate Method for Phylogeny Tree Reconstruction Based on a Modified Wild Dog Algorithm
Authors: Essam Al Daoud
Abstract:
This study solves a phylogeny problem by using modified wild dog pack optimization. The least squares error is considered as a cost function that needs to be minimized. Therefore, in each iteration, new distance matrices based on the constructed trees are calculated and used to select the alpha dog. To test the suggested algorithm, ten homologous genes were selected and collected from the National Center for Biotechnology Information (NCBI) databanks (i.e., 16S, 18S, 28S, Cox 1, ITS1, ITS2, ETS, ATPB, Hsp90, and STN). The data are divided into three categories: 50 taxa, 100 taxa and 500 taxa. The empirical results show that the proposed algorithm is more reliable and accurate than the other implemented methods.
Keywords: Least squares, neighbor joining, phylogenetic tree, wild dog pack.
2419 The Direct Updating of Damping and Gyroscopic Matrices using Incomplete Complex Test Data
Authors: Jiashang Jiang, Yongxin Yuan
Abstract:
In this paper, we develop an efficient numerical method for the finite-element model updating of damped gyroscopic systems based on incomplete complex measured modal data. It is assumed that the analytical mass and stiffness matrices are correct and only the damping and gyroscopic matrices need to be updated. By solving a constrained optimization problem, the optimal corrected symmetric damping matrix and skew-symmetric gyroscopic matrix that comply with the required eigenvalue equation are found in a weighted Frobenius norm sense.
Keywords: Model updating, damped gyroscopic system, partially prescribed spectral information.
2418 Optimization of a Triangular Fin with Variable Fin Base Thickness
Authors: Hyung Suk Kang
Abstract:
A triangular fin with variable fin base thickness is analyzed and optimized using a two-dimensional analytical method. The influence of fin base height and fin base thickness on the temperature in the fin is presented. For fixed fin volumes, the maximum heat loss and the corresponding optimum fin effectiveness, fin base height, and fin tip length are given as functions of the fin base thickness, convection characteristic number, and dimensionless fin volume. One of the results shows that the optimum heat loss increases, whereas the corresponding optimum fin effectiveness decreases, with increasing fin volume.
Keywords: A triangular fin, Convection characteristic number, Heat loss, Fin base thickness.
2417 Fast and Accurate Reservoir Modeling: Genetic Algorithm versus DIRECT Method
Authors: Mohsen Ebrahimi, Milad M. Rabieh
Abstract:
In this paper, two very different optimization algorithms, the Genetic and DIRECT algorithms, are used to history-match a bottomhole pressure response for a reservoir with wellbore storage and skin using the best possible analytical model. No initial guesses are available for the reservoir parameters. The results show that the matching process is much faster and more accurate with the DIRECT method than with the Genetic algorithm. It is furthermore concluded that the DIRECT algorithm does not need any initial guesses, whereas the Genetic algorithm needs to be tuned according to initial guesses.
Keywords: DIRECT algorithm, Genetic algorithm, Analytical modeling, History match.
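A hedged sketch of such a comparison is given below: SciPy's differential_evolution (an evolutionary method standing in for the paper's genetic algorithm) and scipy.optimize.direct are both used to history-match the parameters of a toy pressure-decline model against noisy synthetic data. The actual wellbore-storage-and-skin analytical model of the paper is not reproduced; the model function here is a hypothetical stand-in.

```python
# Sketch: evolutionary vs. DIRECT optimization on a toy history-matching problem.
# (scipy.optimize.direct requires SciPy >= 1.9.)
import numpy as np
from scipy.optimize import differential_evolution, direct

rng = np.random.default_rng(0)
t = np.linspace(0.1, 10.0, 80)

def model(params, t):
    """Toy bottomhole-pressure response (hypothetical stand-in for the analytical model)."""
    p0, a, b = params
    return p0 - a * np.log(t) - b / t

true_params = (3000.0, 40.0, 15.0)
observed = model(true_params, t) + rng.normal(0, 2.0, t.size)

def misfit(params):
    # Sum of squared residuals between the model and the "measured" pressures.
    return np.sum((model(params, t) - observed) ** 2)

bounds = [(1000.0, 5000.0), (1.0, 100.0), (1.0, 100.0)]

res_de = differential_evolution(misfit, bounds, seed=1)
res_dir = direct(misfit, bounds)

print("differential_evolution:", np.round(res_de.x, 2), "misfit", round(res_de.fun, 1))
print("DIRECT                :", np.round(res_dir.x, 2), "misfit", round(res_dir.fun, 1))
```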
2416 Surface Topography Assessment Techniques based on an In-process Monitoring Approach of Tool Wear and Cutting Force Signature
Authors: A. M. Alaskari, S. E. Oraby
Abstract:
The quality of a machined surface is becoming more and more important to justify the increasing demands for sophisticated component performance, longevity, and reliability. Usually, any machining operation leaves its own characteristic evidence on the machined surface in the form of finely spaced micro-irregularities (surface roughness) left by the associated indeterministic characteristics of the different elements of the system: tool, machine, workpart, and cutting parameters. However, one of the most influential sources affecting surface roughness in machining is the instantaneous state of the tool edge. The main objective of the current work is to relate the in-process immeasurable cutting edge deformation and surface roughness to more reliable, easy-to-measure force signals using robust, non-linear, time-dependent regression modeling techniques. Time-dependent modeling is beneficial when modern machining systems, such as adaptive control techniques, are considered, where the state of the machined surface and the health of the cutting edge are monitored, assessed, and controlled online using real-time information provided by the variability encountered in the measured force signals. Correlation between wear propagation and roughness variation is developed throughout the different edge lifetimes. The surface roughness is further evaluated in the light of the variation in both the static and the dynamic force signals. Consistent correlation is found between surface roughness variation and tool wear progress within its initial and constant regions. In the first few seconds of cutting, the expected and well-known trend of the effect of the cutting parameters is observed: surface roughness is positively influenced by the level of the feed rate and negatively by the cutting speed. As cutting continues, roughness is affected, to different extents, by the rather localized wear modes either on the tool nose or on its flank areas. Moreover, roughness varies as the wear attitude transfers from one mode to another and, in general, it is shown to improve as wear increases, but with possible corresponding workpart dimensional inaccuracy. The dynamic force signals are found reasonably sensitive to simulate either the progressive or the random modes of tool edge deformation. While the frictional force components, feeding and radial, are found informative regarding progressive wear modes, the vertical (power) component is found to be a more representative carrier of system instability resulting from the edge's random deformation.
Keywords: Dynamic force signals, surface roughness (finish), tool wear and deformation, tool wear modes (nose, flank).
2415 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases
Authors: Mohammad A. Bani-Khaled
Abstract:
In this work, we use the Discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes of this analysis will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams are widely used in modern engineering applications, both in large-scale structures (aeronautical structures) and in nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to numerically simulate the dynamics. When using numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of the coupling among the various fields of the motion, processing techniques are required. The time Proper Orthogonal Decomposition transform is a powerful tool for processing such databases. It will be used to study the coupled dynamics of basic thin-walled structures, which are ideal as a basis for a systematic study of coupled dynamics in structures of complex geometry.
Keywords: Coupled dynamics, geometric complexity, Proper Orthogonal Decomposition (POD), thin walled beams.
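The POD step itself reduces to a singular value decomposition of the snapshot matrix assembled from the simulation database. Below is a minimal sketch on synthetic snapshot data; the finite element data of the study are, of course, not available here and are replaced by two assumed spatial modes plus noise.

```python
# Sketch: Proper Orthogonal Decomposition of a snapshot matrix via SVD.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snap = 200, 60          # spatial degrees of freedom, time snapshots
x = np.linspace(0, 1, n_dof)
t = np.linspace(0, 2 * np.pi, n_snap)

# Synthetic "database": two coupled spatial modes plus small noise,
# standing in for displacement fields exported from FE simulations.
snapshots = (np.outer(np.sin(np.pi * x), np.cos(3 * t))
             + 0.3 * np.outer(np.sin(2 * np.pi * x), np.sin(5 * t))
             + 0.01 * rng.standard_normal((n_dof, n_snap)))

# Subtract the temporal mean and perform the thin SVD.
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

energy = s ** 2 / np.sum(s ** 2)
print("fraction of energy in first 3 POD modes:", np.round(energy[:3], 4))

# Spatial POD modes are the columns of U; temporal coefficients follow by projection.
modes = U[:, :2]
coeffs = modes.T @ (snapshots - mean_field)
print("mode matrix shape:", modes.shape, "coefficient matrix shape:", coeffs.shape)
```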
2414 Modeling and Simulation of Ship Structures Using Finite Element Method
Authors: Javid Iqbal, Zhu Shifan
Abstract:
Developments in the construction of unconventional ships and the use of lightweight materials have given a large impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents the modeling and analysis techniques for ship structures using the FE method for complex boundary conditions which are difficult to analyze with existing Ship Classification Society rules. During operation, all ships experience complex loading conditions. These loads are generally categorized into thermal, linear static, dynamic, and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in assessing the dynamic stability of the ship. The FE method has provided improved techniques for calculating the natural frequencies and mode shapes of a ship structure to avoid resonance, both globally and locally. Over the past few years, the ship industry has made considerable progress towards ideal designs by solving complex engineering problems using the data stored in FE models. This paper provides an overview of the ship modeling methodology for FE analysis and its general application. The historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
Keywords: Dynamic analysis, finite element methods, ship structure, vibration analysis.
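For the natural-frequency calculation mentioned above, an FE discretization leads to the generalized eigenvalue problem K φ = ω² M φ. The sketch below solves it with SciPy for a small toy stiffness/mass pair; a real ship FE model differs only in how K and M are assembled, which is not shown here.

```python
# Sketch: natural frequencies and mode shapes from K phi = omega^2 M phi.
import numpy as np
from scipy.linalg import eigh

# Toy 3-DOF spring-mass chain (stand-in for an assembled FE model).
k = 1.0e6  # spring stiffness, N/m (assumed)
m = 100.0  # lumped mass, kg (assumed)
K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]], dtype=float)
M = m * np.eye(3)

# Generalized symmetric eigenvalue problem; the eigenvalues are omega^2.
eigvals, eigvecs = eigh(K, M)
omega = np.sqrt(eigvals)            # rad/s
freqs_hz = omega / (2 * np.pi)

print("natural frequencies [Hz]:", np.round(freqs_hz, 2))
print("first mode shape:", np.round(eigvecs[:, 0], 3))
```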
2413 Auto Regressive Tree Modeling for Parametric Optimization in Fuzzy Logic Control System
Authors: Arshia Azam, J. Amarnath, Ch. D. V. Paradesi Rao
Abstract:
The advantage of solving complex nonlinear problems by utilizing fuzzy logic methodologies is that experience or expert knowledge, described as a fuzzy rule base, can be directly embedded into the system for dealing with the problems. The current limitations of appropriate and automated design of fuzzy controllers are the focus of this paper. The structure discovery and parameter adjustment of the branched T-S fuzzy model are addressed by a hybrid technique of type-constrained sparse tree algorithms. The simulation results for different system models are evaluated, and the identification error is observed to be minimal.
Keywords: Fuzzy logic, branch T-S fuzzy model, tree modeling, complex nonlinear system.
2412 An Improved Genetic Algorithm to Solve the Traveling Salesman Problem
Authors: Omar M. Sallabi, Younis El-Haddad
Abstract:
The Genetic Algorithm (GA) is one of the most important methods used to solve many combinatorial optimization problems. Therefore, many researchers have tried to improve the GA by using different methods and operations in order to find the optimal solution within a reasonable time. This paper proposes an improved GA (IGA), in which a new crossover operation, a population reformulation operation, a multi-mutation operation, a partial local optimal mutation operation, and a rearrangement operation are used to solve the Traveling Salesman Problem. The proposed IGA was then compared with three GAs that use different crossover operations and mutations. The results of this comparison show that the IGA can achieve better solutions in less time.
Keywords: AI, Genetic algorithms, TSP.
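For reference, a compact sketch of a plain genetic algorithm for the TSP is given below (elitist truncation selection, ordered crossover, swap mutation). It illustrates the kind of baseline the paper improves upon; the IGA's specific operators (population reformulation, multi-mutation, rearrangement) are not reproduced here.

```python
# Sketch: a plain genetic algorithm for the TSP (baseline, not the paper's IGA).
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(25)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def ordered_crossover(p1, p2):
    """OX: copy a slice from parent 1, fill the remaining cities in parent 2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [c for c in p2 if c not in child[a:b]]
    empty = [i for i in range(len(p1)) if child[i] is None]
    for i, c in zip(empty, rest):
        child[i] = c
    return child

def mutate(tour, rate=0.2):
    # Swap mutation: exchange two randomly chosen cities with probability `rate`.
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

pop = [random.sample(range(len(cities)), len(cities)) for _ in range(100)]
for _ in range(300):
    pop.sort(key=tour_length)
    elite = pop[:20]                            # keep the best 20 tours
    children = []
    while len(children) < 80:
        p1, p2 = random.sample(elite, 2)        # simple parent selection from the elite
        children.append(mutate(ordered_crossover(p1, p2)))
    pop = elite + children

print("best tour length:", round(tour_length(min(pop, key=tour_length)), 3))
```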
2411 Elaboration and Optimization of Pellets Used for Precise Glass Grinding
Authors: N. Belkhir, A. Chorfa, D. Bouzid
Abstract:
In this work, grinding or microcutting tools in the form of pellets were manufactured using bonded alumina abrasive grains. The bond used is a vitreous material containing quartz, feldspars, kaolinite, and a quantity of hematite. The pellets were used in the glass grinding process to replace the free-abrasive-grain lapping process. The elaborated pellets were studied to define their effectiveness in the grinding process and to optimize the influence of the pellet elaboration parameters. The obtained results show the existence of an optimal combination of the pellet elaboration parameters for each glass grinding phase (coarse to fine grinding). The final roughness (rms) reached by the elaborated pellets on a BK7 glass surface was about 0.392 μm.
Keywords: Abrasive grain, glass, grinding, pellet.