Search results for: Decision tree modeling
3601 Causal Modeling of the Glucose-Insulin System in Type-I Diabetic Patients
Authors: J. Fernandez, N. Aguilar, R. Fernandez de Canete, J. C. Ramos-Diaz
Abstract:
In this paper, a simulation model of the glucose-insulin system for a patient with Type 1 diabetes is developed using a causal modeling approach under system dynamics. The OpenModelica simulation environment has been employed to build the so-called causal model, while the glucose-insulin model parameters were adjusted to fit recorded mean data from a diabetic patient database. Model results have been obtained under different three-meal glucose intake and exogenous insulin administration patterns. This simulation model can be useful for evaluating glucose-insulin performance in several circumstances, including insulin infusion algorithms in open loop and decision support systems in closed loop.
Keywords: Causal modeling, diabetes, glucose-insulin system, OpenModelica software.
3600 Fuzzy Shortest Paths Approximation for Solving the Fuzzy Steiner Tree Problem in Graphs
Authors: Miloš Šeda
Abstract:
In this paper, we deal with the Steiner tree problem (STP) on a graph in which a fuzzy number, instead of a real number, is assigned to each edge. We propose a modification of the shortest paths approximation based on fuzzy shortest path (FSP) evaluations. Since a fuzzy min operation using the extension principle leads to nondominated solutions, we propose another approach to solving the FSP using Cheng's centroid point fuzzy ranking method.
Keywords: Steiner tree, single shortest path problem, fuzzy ranking, binary heap, priority queue.
3599 Using Spectral Vectors and M-Tree for Graph Clustering and Searching in Graph Databases of Protein Structures
Authors: Do Phuc, Nguyen Thi Kim Phung
Abstract:
In this paper, we represent protein structures as graphs, so that a protein structure database becomes a graph database. Each graph is represented by a spectral vector. We use the Jacobi rotation algorithm to calculate the eigenvalues of the normalized Laplacian of the graph's adjacency matrix. To measure the similarity between two graphs, we calculate the Euclidean distance between their spectral vectors. To cluster the graphs, we use an M-tree with the Euclidean distance on the spectral vectors. In addition, the M-tree can be used for graph searching in the graph database. Our proposed method was tested with a graph database of 100 graphs representing 100 protein structures downloaded from the Protein Data Bank (PDB), and we compare the result with the SCOP hierarchical structure.
Keywords: Eigenvalues, M-tree, graph database, protein structure, spectral graph theory.
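The core similarity computation described above is easy to reproduce. The following is a minimal Python sketch (not the authors' implementation): it builds the normalized Laplacian of an adjacency matrix, takes its eigenvalues as a fixed-length spectral vector, and compares two toy graphs with the Euclidean distance. The eigenvalues are computed with NumPy's solver rather than the Jacobi rotation algorithm used in the paper, and the vector length k is an illustrative choice.

```python
import numpy as np

def spectral_vector(adj, k=10):
    """Eigenvalues of the normalized Laplacian L = I - D^{-1/2} A D^{-1/2},
    truncated/padded to a fixed length k so graphs of different sizes
    can be compared with the Euclidean distance."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals = np.sort(np.linalg.eigvalsh(lap))  # Jacobi rotation in the paper; eigvalsh here
    vec = np.zeros(k)
    vec[:min(k, len(eigvals))] = eigvals[:k]
    return vec

# Toy example: distance between a path graph and a star graph on 4 nodes.
path = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
star = np.array([[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]])
d = np.linalg.norm(spectral_vector(path) - spectral_vector(star))
print(f"Euclidean distance between spectral vectors: {d:.3f}")
```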
3598 CART Method for Modeling the Output Power of Copper Bromide Laser
Authors: Iliycho P. Iliev, Desislava S. Voynikova, Snezhana G. Gocheva-Ilieva
Abstract:
This paper examines the available experimental data for a copper bromide vapor laser (CuBr laser) emitting at two wavelengths, 510.6 and 578.2 nm. Laser output power is estimated based on 10 independent input physical parameters. A classification and regression tree (CART) model is obtained which describes 97% of the data. The resulting binary CART tree specifies which input parameters considerably influence each of the classification groups. This allows for a technical assessment indicating which of these parameters are the most significant for the manufacture and operation of the type of laser under consideration. The predicted values of the laser output power are also obtained depending on the classification. This aids the design and development processes considerably.
Keywords: Classification and regression trees (CART), Copper Bromide laser (CuBr laser), laser generation, nonparametric statistical model.
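As a rough illustration of the modeling step, the sketch below fits a pruned binary regression tree to stand-in data with 10 input parameters and prints the resulting rules. The data, tree depth, and leaf size are assumptions for demonstration only; the CuBr-laser measurements and the exact CART settings from the paper are not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)

# Stand-in data: 10 input parameters per experiment and a measured output power.
X = rng.uniform(size=(200, 10))
y = 5.0 * X[:, 0] + 2.0 * X[:, 3] ** 2 + rng.normal(scale=0.1, size=200)

# A pruned binary regression tree in the spirit of CART; depth and leaf size
# are illustrative choices, not the values used by the authors.
cart = DecisionTreeRegressor(max_depth=4, min_samples_leaf=10).fit(X, y)
print(f"R^2 on training data: {cart.score(X, y):.2f}")
print(export_text(cart, feature_names=[f"p{i}" for i in range(10)]))
```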
3597 Dynamic-Stochastic Influence Diagrams: Integrating Time-Slices IDs and Discrete Event Systems Modeling
Authors: Xin Zhao, Yin-fan Zhu, Wei-ping Wang, Qun Li
Abstract:
Influence Diagrams (IDs) are a kind of probabilistic belief network for graphical modeling. The use of IDs can improve communication among field experts, modelers, and decision makers by presenting the issue frame under discussion from a high-level point of view. This paper enhances the Time-Sliced Influence Diagrams (TSIDs, also called Dynamic IDs) formalism from a Discrete Event Systems Modeling and Simulation (DES M&S) perspective, for Exploring Analysis (EA) modeling. The enhancements enable a modeler to specify the occurrence times of endogenous events dynamically, with stochastic sampling as the model runs, and to describe the inter-influences among them with variable nodes in dynamic situations that existing TSIDs fail to capture. The new class of model is named Dynamic-Stochastic Influence Diagrams (DSIDs). The paper includes a description of the modeling formalism and the hierarchical simulators implementing its simulation algorithm, and presents a case study to illustrate the enhancements.
Keywords: Time-sliced influence diagrams, discrete event systems, dynamic-stochastic influence diagrams, modeling formalism, simulation algorithm.
3596 Revised PLWAP Tree with Non-frequent Items for Mining Sequential Pattern
Authors: R. Vishnu Priya, A. Vadivel
Abstract:
Sequential pattern mining is a challenging task in the data mining area with wide applications. One of those applications is mining patterns from weblogs. Weblogs are now highly dynamic, and some of their entries may become obsolete over time. In addition, users may frequently change the threshold value during the data mining process until acquiring the required output or mining interesting rules. Some of the recently proposed algorithms for mining weblogs build the tree with two scans and consume considerable time and space. In this paper, we build a Revised PLWAP tree with Non-frequent Items (RePLNI-tree) with a single scan for all items. While mining sequential patterns, the links related to the non-frequent items are not considered. Hence, it is not required to delete or maintain node information while revising the tree for mining updated transactions. The algorithm supports both incremental and interactive mining, and it is not required to re-compute the patterns each time the weblog is updated or the minimum support is changed. The performance of the proposed tree is better even when the size of the incremental database is more than 50% of the existing one. For evaluation purposes, we have used a benchmark weblog dataset and found that the performance of the proposed tree is encouraging compared to some recently proposed approaches.
Keywords: Sequential pattern mining, weblog, frequent and non-frequent items, incremental and interactive mining.
3595 Freighter Aircraft Selection Using Entropic Programming for Multiple Criteria Decision Making Analysis
Authors: C. Ardil
Abstract:
This paper proposes entropic programming for the freighter aircraft selection problem using the multiple criteria decision analysis method. The study aims to propose a systematic and comprehensive framework focused on the perspective of freighter aircraft selection. In order to achieve this goal, an integrated entropic programming approach is proposed to evaluate and rank alternatives. The decision criteria and aircraft alternatives were identified from the research data analysis. The objective criteria weights were determined by the mean weight method and the standard deviation method. The proposed entropic programming model was applied to a practical decision problem for evaluating and selecting freighter aircraft. The proposed entropic programming technique gives robust, reliable, and efficient results in modeling decision making analysis problems. As a result of the entropic programming analysis, the Boeing B747-8F, a freighter aircraft alternative (a3), was chosen as the most suitable freighter aircraft candidate.
Keywords: Entropic programming, additive weighted model, multiple criteria decision making analysis, MCDMA, TOPSIS, aircraft selection, freighter aircraft, Boeing B747-8F, Boeing B777F, Airbus A350F.
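A generic numerical illustration of the weighting-and-aggregation step follows: objective weights are derived from the standard deviations of the normalized criteria columns and combined in an additive weighted score. The decision matrix, criterion types, and resulting ranking are purely hypothetical and do not reproduce the paper's data or its full entropic programming formulation.

```python
import numpy as np

# Hypothetical decision matrix: rows = aircraft alternatives (a1..a3),
# columns = benefit-type criteria. Values are illustrative only.
D = np.array([
    [0.70, 0.55, 0.80],   # a1
    [0.60, 0.75, 0.65],   # a2
    [0.85, 0.80, 0.90],   # a3
])

# Normalize each criterion column, then derive objective weights from the
# column standard deviations (larger spread -> more discriminating criterion).
N = D / D.max(axis=0)
sigma = N.std(axis=0)
w = sigma / sigma.sum()

# Additive weighted aggregation and ranking.
scores = N @ w
for i, s in sorted(enumerate(scores, start=1), key=lambda t: -t[1]):
    print(f"a{i}: {s:.3f}")
```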
3594 A Novel Methodology for Synthesis of Fault Trees from MATLAB-Simulink Model
Authors: F. Tajarrod, G. Latif-Shabgahi
Abstract:
Fault tree analysis is a well-known method for reliability and safety assessment of engineering systems. In the last three decades, a number of methods have been introduced in the literature for the automatic construction of fault trees. The main difference between these methods is the starting model from which the tree is constructed. This paper presents a new methodology for the construction of static and dynamic fault trees from a system Simulink model. The method is introduced and explained in detail, and its correctness and completeness are experimentally validated using an example taken from the literature. Advantages of the method are also discussed.
Keywords: Fault tree, Simulink, standby sparing and redundancy.
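For readers unfamiliar with how a synthesized static fault tree is evaluated, the sketch below computes the top-event probability of a small AND/OR tree under the usual independence assumption. The gate structure and basic-event probabilities are hypothetical; dynamic gates (e.g. standby sparing) need a different treatment and are not covered here.

```python
# Each gate is ("AND"|"OR", [children]); a leaf is a basic-event failure probability.
def top_event_probability(node):
    if isinstance(node, float):
        return node
    gate, children = node
    probs = [top_event_probability(c) for c in children]
    if gate == "AND":          # all inputs must fail (independent events assumed)
        p = 1.0
        for q in probs:
            p *= q
        return p
    if gate == "OR":           # any input failing causes the event
        p = 1.0
        for q in probs:
            p *= (1.0 - q)
        return 1.0 - p
    raise ValueError(f"unknown gate {gate}")

# Hypothetical tree: a redundant pair in parallel with a single component.
tree = ("OR", [("AND", [0.01, 0.02]), 0.001])
print(f"Top event probability: {top_event_probability(tree):.6f}")
```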
3593 Game-Tree Simplification by Pattern Matching and Its Acceleration Approach using an FPGA
Authors: Suguru Ochiai, Toru Yabuki, Yoshiki Yamaguchi, Yuetsu Kodama
Abstract:
In this paper, we propose a Connect6 solver which adopts a hybrid approach based on a tree-search algorithm and image processing techniques. The solver must deal with complicated computations and provide high performance in order to make real-time decisions. The proposed approach enables the solver to be implemented on a single Spartan-6 XC6SLX45 FPGA produced by Xilinx without using any external devices. The compact implementation is achieved through image processing techniques that optimize the tree-search algorithm of the Connect6 game. Tree search is widely used in computer games, and an optimal search yields the best move in every turn of a computer game. Thus, many tree-search algorithms, such as the Minimax algorithm, and artificial intelligence approaches have been proposed in this field. However, there is one fundamental problem in this area: the computation time increases rapidly with the growth of the game tree. In hardware, this means the larger the game tree is, the bigger the circuit becomes, because of its highly parallel computation characteristics. This paper therefore aims to reduce the size of the Connect6 game tree using image processing techniques and the position symmetry of the board. The proposed solver is composed of four computational modules: a two-dimensional checkmate strategy checker, a template matching module, a skilful-line predictor, and a next-move selector. These modules work well together in selecting next moves from candidates, and the total size of their circuits is small. The details of the hardware design for an FPGA implementation are described, and the performance of this design is also shown in this paper.
Keywords: Connect6, pattern matching, game-tree reduction, hardware direct computation.
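The position-symmetry idea used to shrink the game tree can be illustrated independently of the FPGA design. The sketch below (an assumption-level illustration, not the authors' hardware logic) maps a board to a canonical key over its eight square symmetries, so that mirrored or rotated positions are explored only once during tree search.

```python
import numpy as np

def canonical_key(board):
    """Smallest byte representation over the 8 symmetries of the square board
    (4 rotations x optional mirror). Symmetric positions share one key, so a
    game-tree search can store and evaluate each equivalence class only once."""
    variants = []
    b = np.asarray(board)
    for k in range(4):
        r = np.rot90(b, k)
        variants.append(r.tobytes())
        variants.append(np.fliplr(r).tobytes())
    return min(variants)

# Two mirrored Connect6-style positions (0 empty, 1 black, 2 white) map to one key.
a = np.zeros((19, 19), dtype=np.int8); a[9, 9] = 1; a[9, 10] = 2
b = np.fliplr(a)
print(canonical_key(a) == canonical_key(b))   # True
```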
3592 Pattern Matching Based on Regular Tree Grammars
Authors: Riad S. Jabri
Abstract:
Pattern matching based on regular tree grammars has been widely used in many areas of computer science. In this paper, we propose a pattern matcher within the framework of code generation, based on a generic and formalized approach. According to this approach, parsers for regular tree grammars are adapted to a general pattern matching solution, rather than adapting the pattern matching to their parsing behavior. Hence, we first formalize the construction of the pattern matches with respect to input trees drawn from a regular tree grammar, in the form of so-called match trees. Then, we adopt a recently developed generic parser and tightly couple its parsing behavior with this construction. In addition to its generality, the resulting pattern matcher is characterized by its soundness and efficient implementation. This is demonstrated by the proposed theory and by the derived algorithms for its implementation. A comparison with similar and well-known approaches, such as those based on tree automata and LR parsers, has shown that our pattern matcher can be applied to a broader class of grammars and achieves a better approximation of pattern matches in one pass. Furthermore, its use as a machine code selector incurs minimal overhead, due to the balanced distribution of the cost computations into static ones, during parser generation time, and dynamic ones, during parsing time.
Keywords: Bottom-up automata, Code selection, Pattern matching, Regular tree grammars, Match trees.
3591 Learning Monte Carlo Data for Circuit Path Length
Authors: Namal A. Senanayake, A. Beg, Withana C. Prasad
Abstract:
This paper analyzes the patterns of Monte Carlo data for a large number of variables and minterms, in order to characterize the circuit path length behavior. We propose models that are determined by a training process on shortest path lengths derived from a wide range of binary decision diagram (BDD) simulations. The model was created using a feed-forward neural network (NN) modeling methodology. Experimental results for ISCAS benchmark circuits show an RMS error of 0.102 for the shortest path length complexity estimation predicted by the NN model (NNM). Use of such a model can help reduce the time complexity of very large scale integration (VLSI) circuitry and related computer-aided design (CAD) tools that use BDDs.
Keywords: Monte Carlo data, binary decision diagrams, neural network modeling, shortest path length estimation.
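A minimal software analogue of the modeling flow is sketched below: a small feed-forward network is trained to regress a path-length-like target from (variable count, minterm count) pairs and its RMS error is reported. The synthetic data generator, network size, and training settings are illustrative assumptions; the actual Monte Carlo BDD data and NN architecture from the paper are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

# Stand-in for the Monte Carlo BDD data: inputs are (number of variables,
# number of minterms); the target mimics a shortest-path-length statistic.
n_vars = rng.integers(5, 60, size=1000)
n_minterms = rng.integers(10, 5000, size=1000)
X = np.column_stack([n_vars, n_minterms]).astype(float)
y = 0.8 * np.log2(n_minterms) + 0.05 * n_vars + rng.normal(scale=0.1, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
nn = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, nn.predict(X_te)) ** 0.5
print(f"RMS error on held-out samples: {rmse:.3f}")
```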
3590 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique
Authors: C. Manjula, Lilly Florence
Abstract:
Software technology is developing rapidly, which leads to the growth of various industries. Nowadays, software-based applications have been widely adopted for business purposes. For any software industry, developing reliable software is a challenging task because a faulty software module may be harmful to the growth of the industry and business. Hence, there is a need for techniques that can be used for early prediction of software defects. Due to the complexity of manual prediction, automated software defect prediction techniques have been introduced. These techniques learn patterns from previous software versions and find the defects in the current version. They have attracted researchers due to their significant impact on industrial growth by identifying bugs in software. Several studies have been carried out on this basis, but achieving desirable defect prediction performance is still a challenging task. To address this issue, we present a machine learning based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) is presented in which an improved fitness function is used for better optimization of features in the data sets. The selected features are then processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented in which results from the proposed GA-DT based hybrid approach are compared with those from the DT classification technique. The results show that the proposed hybrid approach achieves better classification accuracy.
Keywords: Decision tree, genetic algorithm, machine learning, software defect prediction.
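The GA-DT idea can be prototyped compactly with scikit-learn, as in the hedged sketch below: a small genetic algorithm evolves boolean feature masks whose fitness is the cross-validated accuracy of a decision tree, lightly penalized by subset size. The dataset, population size, mutation rate, and penalty term are illustrative stand-ins, not the paper's improved fitness function.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=0)

def fitness(mask):
    # Cross-validated accuracy of a decision tree on the selected feature subset,
    # lightly penalized by subset size (a simplified stand-in fitness function).
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(DecisionTreeClassifier(random_state=0), X[:, mask], y, cv=5).mean()
    return acc - 0.002 * mask.sum()

pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]            # selection: keep the best half
    children = []
    for _ in range(10):
        p1, p2 = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])              # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(X.shape[1]) < 0.05           # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
acc = cross_val_score(DecisionTreeClassifier(random_state=0), X[:, best], y, cv=5).mean()
print("Selected features:", np.flatnonzero(best))
print(f"CV accuracy with selected subset: {acc:.3f}")
```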
3589 Computing Entropy for Ortholog Detection
Authors: Hsing-Kuo Pao, John Case
Abstract:
Biological sequences from different species are called orthologs if they evolved from a sequence of a common ancestor species and they have the same biological function. Approximations of Kolmogorov complexity or entropy of biological sequences are already well known to be useful in extracting similarity information between such sequences - in the interest, for example, of ortholog detection. As is well known, the exact Kolmogorov complexity is not algorithmically computable. In practice one can approximate it by computable compression methods. However, such compression methods do not provide a good approximation to Kolmogorov complexity for short sequences. Herein is suggested a new approach to overcome the problem that compression approximations may not work well on short sequences. This approach is inspired by new, conditional computations of Kolmogorov entropy. A main contribution of the empirical work described shows the new set of entropy-based machine learning attributes provides good separation between positive (ortholog) and negative (non-ortholog) data - better than with good, previously known alternatives (which do not employ some means to handle short sequences well). Also empirically compared are the new entropy based attribute set and a number of other, more standard similarity attribute sets commonly used in genomic analysis. The various similarity attributes are evaluated by cross validation, through boosted decision tree induction C5.0, and by Receiver Operating Characteristic (ROC) analysis. The results point to the conclusion: the new, entropy based attribute set by itself is not the one giving the best prediction; however, it is the best attribute set for use in improving the other, standard attribute sets when conjoined with them.
Keywords: Compression, decision tree, entropy, ortholog, ROC.
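A common computable proxy for the Kolmogorov-based similarity mentioned above is the normalized compression distance (NCD); the sketch below implements it with zlib for toy sequences. Note that, exactly as the abstract points out, plain compression approximations degrade on short sequences, which is the limitation the paper's conditional entropy attributes are designed to address. The example sequences are invented.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable proxy for the (uncomputable)
    Kolmogorov-complexity based similarity. Smaller values mean more similar."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

seq_a = b"ATGGCGTACGTTAGCATGGCGTACGTTAGC"
seq_b = b"ATGGCGTACGATAGCATGGCGTACGATAGC"   # close variant of seq_a
seq_c = b"TTTTTCCCCGGGAAATTTTTCCCCGGGAAA"   # unrelated composition
print(f"NCD(a, b) = {ncd(seq_a, seq_b):.3f}")
print(f"NCD(a, c) = {ncd(seq_a, seq_c):.3f}")
```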
3588 Tree Sign Patterns of Small Order that Allow an Eventually Positive Matrix
Authors: Ber-Lin Yu, Jie Cui, Hong Cheng, Zhengfeng Yu
Abstract:
A sign pattern is a matrix whose entries belong to the set {+, −, 0}. An n-by-n sign pattern A is said to allow an eventually positive matrix if there exist a real matrix A with the same sign pattern as A and a positive integer k0 such that A^k > 0 for all k ≥ k0. It is well known that identifying and classifying the n-by-n sign patterns that allow an eventually positive matrix are posed as two open problems. In this article, the tree sign patterns of small order that allow an eventually positive matrix are classified completely.
Keywords: Eventually positive matrix, sign pattern, tree.
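Eventual positivity of a given realization can be checked numerically via the known characterization that a matrix is eventually positive exactly when it and its transpose have the strong Perron-Frobenius property. The sketch below applies this test to two small illustrative matrices; it verifies the property for a single matrix and does not classify sign patterns as the paper does.

```python
import numpy as np

def has_strong_pf_property(A, tol=1e-9):
    """Strong Perron-Frobenius property: a simple, real, positive dominant
    eigenvalue whose eigenvector can be scaled to be entrywise positive."""
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(np.abs(vals))
    lam = vals[i]
    if abs(lam.imag) > tol or lam.real <= 0:
        return False
    others = np.delete(np.abs(vals), i)
    if others.size and others.max() >= abs(lam) - tol:   # strict dominance in modulus
        return False
    v = np.real(vecs[:, i])
    v = v if v.sum() >= 0 else -v
    return bool(np.all(v > tol))

def is_eventually_positive(A):
    # Characterization: A^k > 0 for all large k iff both A and A^T have
    # the strong Perron-Frobenius property.
    return has_strong_pf_property(A) and has_strong_pf_property(A.T)

A = np.array([[1.0, 2.0], [3.0, 0.5]])    # entrywise positive, trivially qualifies
B = np.array([[1.0, 2.0], [3.0, -0.5]])   # has a negative entry yet is eventually positive
print(is_eventually_positive(A), is_eventually_positive(B))
```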
3587 Comparison of Phylogenetic Trees of Multiple Protein Sequence Alignment Methods
Authors: Khaddouja Boujenfa, Nadia Essoussi, Mohamed Limam
Abstract:
Multiple sequence alignment is a fundamental part of many bioinformatics applications such as phylogenetic analysis. Many alignment methods have been proposed. Each method gives a different result for the same data set and consequently generates a different phylogenetic tree; hence, the chosen alignment method affects the resulting tree. However, the literature lacks an evaluation of multiple alignment methods based on the comparison of their phylogenetic trees. This work evaluates the following eight aligners: ClustalX, T-Coffee, SAGA, MUSCLE, MAFFT, DIALIGN, ProbCons and Align-m, based on the phylogenetic trees (test trees) they produce on a given data set. The Neighbor-Joining method is used to estimate trees. Three criteria, namely the dNNI, the dRF and the Id_Tree, are established to test the ability of different alignment methods to produce test trees closer to the reference tree (true tree). Results show that the method which produces the most accurate alignment gives the test tree nearest to the reference tree. MUSCLE outperforms all aligners with respect to the three criteria and for all datasets, performing particularly well when sequence identities are within 10-20%. It is followed by T-Coffee at lower sequence identity (<10%), Align-m at 20-30% identity, and ClustalX and ProbCons at 30-50% identity. It is also noticed that when sequence identities are higher (>30%), the tree scores of all methods become similar.
Keywords: Multiple alignment methods, phylogenetic trees, Neighbor-Joining method, Robinson-Foulds distance.
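For readers who want to reproduce a tree-comparison criterion such as dRF, the sketch below computes a Robinson-Foulds-like score as the symmetric difference of clade sets for small rooted trees written as nested tuples. Real RF computations are usually done on unrooted splits with a phylogenetics library; the example trees and the rooted simplification are assumptions for illustration.

```python
def clades(tree):
    """Return the set of leaf-sets (clades) of a rooted tree given as nested
    tuples, e.g. (("A", "B"), ("C", ("D", "E")))."""
    out = set()
    def walk(node):
        if isinstance(node, str):
            return frozenset([node])
        leaves = frozenset().union(*(walk(child) for child in node))
        out.add(leaves)
        return leaves
    walk(tree)
    return out

def rf_like_distance(t1, t2):
    # Symmetric difference of the clade sets; for rooted binary trees this
    # mirrors the Robinson-Foulds count of conflicting splits.
    return len(clades(t1) ^ clades(t2))

true_tree = (("A", "B"), ("C", ("D", "E")))
test_tree = (("A", "C"), ("B", ("D", "E")))
print("RF-like distance:", rf_like_distance(true_tree, test_tree))
```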
3586 Binary Classification Tree with Tuned Observation-based Clustering
Authors: Maythapolnun Athimethphat, Boontarika Lerteerawong
Abstract:
There are several approaches for handling multiclass classification. Aside from one-against-one (OAO) and one-against-all (OAA), hierarchical classification techniques are also commonly used. A binary classification tree is a hierarchical classification structure that breaks down a k-class problem into binary sub-problems, each solved by a binary classifier. In each node, a set of classes is divided into two subsets. A good class partition should group similar classes together. Many algorithms measure similarity in terms of the distance between class centroids: classes are grouped together by a clustering algorithm when the distances between their centroids are small. In this paper, we present a binary classification tree with tuned observation-based clustering (BCT-TOB) that finds a class partition by performing clustering on observations instead of class centroids. A merging step is introduced to merge any insignificant class split. Experiments show that the performance of BCT-TOB is comparable to that of other algorithms.
Keywords: Multiclass classification, hierarchical classification, binary classification tree, clustering, observation-based clustering.
3585 Tree Based Decomposition of Sunspot Images
Authors: Hossein Mirzaee, Farhad Besharati
Abstract:
Solar sunspot rotation and latitudinal bands are studied using intelligent computation methods. A combination of an image fusion method and tree decomposition is used to obtain quantitative values for the latitudes of the trajectories on the Sun's surface around which sunspots rotate. Daily solar images taken with the SOlar and Heliospheric Observatory (SOHO) satellite are fused for each month separately. The fused image is then decomposed with the quad tree decomposition method in order to obtain precise information about the latitudes of the sunspot trajectories. Such analysis is useful for gathering information about the regions on the Sun's surface, and the coordinates in space, that are more exposed to solar geomagnetic storms and tremendous flares, where hot plasma gases permeate interplanetary space, and thus helps humans protect their technical systems. Here, sunspot images from September, November and October 2001 are used to study the magnetic behavior of the Sun.
Keywords: Quad tree decomposition, sunspot image.
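A bare-bones version of the quad tree decomposition step is sketched below: a square image is recursively split into quadrants until each block is homogeneous, and the homogeneous blocks are returned. The synthetic "solar disk" image, homogeneity threshold, and minimum block size are invented for illustration and do not correspond to the SOHO data processing in the paper.

```python
import numpy as np

def quadtree(img, y=0, x=0, size=None, thresh=10.0, min_size=8, leaves=None):
    """Recursively split a square image block into four quadrants until the
    intensity range within a block falls below `thresh` or the block reaches
    `min_size`; returns the list of homogeneous leaf blocks (y, x, size)."""
    if leaves is None:
        leaves = []
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.max() - block.min() <= thresh:
        leaves.append((y, x, size))
        return leaves
    h = size // 2
    for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
        quadtree(img, y + dy, x + dx, h, thresh, min_size, leaves)
    return leaves

# Synthetic 256x256 "solar disk" with a dark patch standing in for a sunspot.
yy, xx = np.mgrid[0:256, 0:256]
img = np.full((256, 256), 200.0)
img[(yy - 150) ** 2 + (xx - 100) ** 2 < 20 ** 2] = 40.0
blocks = quadtree(img)
print(f"{len(blocks)} homogeneous blocks; smallest size {min(b[2] for b in blocks)} px")
```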
3584 E-Learning Methodology Development using Modeling
Authors: Sarma Cakula, Maija Sedleniece
Abstract:
Simulation and modeling computer programs are concerned with the construction of models for analyzing different perspectives and possibilities in a changing environment. The paper presents the theoretical justification and evaluation of a qualitative e-learning development model in the perspective of advancing modern technologies. The principles of qualitative e-learning in higher education, the productivity of the study process using modern technologies, different kinds of methods, and the future perspectives of e-learning in formal education have been analyzed. A theoretically grounded and practically tested model for developing e-learning methods, using different technologies for different types of classroom, has been worked out; it can be used in a professor's decision-making process to choose the most effective e-learning methods.
Keywords: E-learning, modeling, e-learning methods development, personal knowledge management.
3583 A Hybrid Scheme for On-Line Diagnostic Decision Making Using Optimal Data Representation and Filtering Technique
Authors: Hyun-Woo Cho
Abstract:
Early diagnostic decision making in industrial processes is absolutely necessary to produce high quality final products. It helps to provide an early warning for a special event in a process, and its assignable cause can then be found. This work presents a hybrid diagnostic scheme for batch processes. Nonlinear representation of raw process data is combined with classification tree techniques. The nonlinear kernel-based dimension reduction is executed to obtain nonlinear classification decision boundaries for fault classes. In order to enhance diagnosis performance for batch processes, the data are filtered to get rid of irrelevant information in the process data. To compare the diagnosis performance of several representation, filtering, and future observation estimation methods, four diagnostic schemes are evaluated. In this work, the performance of the presented diagnosis schemes is demonstrated using batch process data.
Keywords: Diagnostics, batch process, nonlinear representation, data filtering, multivariate statistical approach.
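The representation-plus-classification idea can be prototyped as a short pipeline, sketched below with kernel PCA feeding a classification tree and scored by cross-validation. The simulated batch data, the RBF kernel, and the number of retained components are assumptions; the paper's filtering step and its specific kernel-based method are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Stand-in batch-process data: rows = batches, columns = unfolded process
# variables; the label marks a (simulated) fault class.
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# Nonlinear kernel-based dimension reduction followed by a classification tree;
# the RBF kernel and 5 components are illustrative settings, not the paper's.
clf = make_pipeline(StandardScaler(),
                    KernelPCA(n_components=5, kernel="rbf", gamma=0.05),
                    DecisionTreeClassifier(max_depth=4, random_state=0))
print(f"CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```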
3582 A Rapid and Cost-Effective Approach to Manufacturing Modeling Platform for Fused Deposition Modeling
Authors: Chil-Chyuan Kuo, Chen-Hsuan Tsai
Abstract:
This study presents a cost-effective approach for rapidly fabricating the modeling platforms used in a fused deposition modeling system. A small batch of about 20 modeling platforms can be produced economically through a silicone rubber mold using vacuum casting, without resorting to plastic injection molding. The air venting system is crucial for fabricating the modeling platform using vacuum casting. The fabricated modeling platforms can be used for building rapid prototyping models after sandblasting. This study offers industrial value because it is both time- and cost-effective.
Keywords: Vacuum casting, fused deposition modeling, modeling platform, sandblasting, surface roughness.
3581 Advanced Jet Trainer and Light Attack Aircraft Selection Using Composite Programming in Multiple Criteria Decision Making Analysis Method
Authors: C. Ardil
Abstract:
In this paper, composite programming is discussed for the aircraft evaluation and selection problem using the multiple criteria decision analysis method. The decision criteria and aircraft alternatives were identified from a literature review. The criteria importance weights were determined by the standard deviation method. The proposed model is applied to a practical decision problem for evaluating and selecting an advanced jet trainer and light attack aircraft. The proposed technique gives robust and efficient results in modeling multiple criteria decisions. As a result of the composite programming analysis, Hürjet, an advanced jet trainer and light attack aircraft alternative (a3), was chosen as the most suitable aircraft candidate.
Keywords: Composite programming, additive weighted model, multiplicative weighted model, multiple criteria decision making analysis, MCDMA, aircraft selection, advanced jet trainer and light attack aircraft, M-346, FA-50, Hürjet.
3580 Evaluation of Environmental, Technical, and Economic Indicators of a Fused Deposition Modeling Process
Authors: M. Yosofi, S. Ezeddini, A. Ollivier, V. Lavaste, C. Mayousse
Abstract:
Additive manufacturing processes have changed significantly across a wide range of industries, and their application has progressed from rapid prototyping to the production of end-use products. However, their environmental impact is still a rather open question. In order to support the growth of this technology in the industrial sector, environmental aspects should be considered, and predictive models may help monitor and reduce the environmental footprint of the processes. This work presents predictive models based on a previously developed methodology for environmental impact evaluation, combined with a technical and economic assessment. Here we apply the methodology to the Fused Deposition Modeling process. First, we present the predictive models for different types of machines. Then, we present a decision-making tool designed to identify the optimum manufacturing strategy with regard to technical, economic, and environmental criteria.
Keywords: Additive manufacturing, decision-making, environmental impact, predictive models.
3579 Image Processing Approach for Detection of Three-Dimensional Tree-Rings from X-Ray Computed Tomography
Authors: Jorge Martinez-Garcia, Ingrid Stelzner, Joerg Stelzner, Damian Gwerder, Philipp Schuetz
Abstract:
Tree-ring analysis is an important part of the quality assessment and dating of (archaeological) wood samples. It provides quantitative data about the whole anatomical ring structure, which can be used, for example, to measure the impact of a fluctuating environment on tree growth, to perform dendrochronological analysis of archaeological wooden artefacts, and to estimate the mechanical properties of the wood. Despite advances in computer vision and edge recognition algorithms, the detection and counting of annual rings are still limited to 2D datasets and in most cases are performed manually, which is a time-consuming, tedious task that depends strongly on the operator's experience. This work presents an image processing approach to detect the whole 3D tree-ring structure directly from X-ray computed tomography imaging data. The approach relies on a modified Canny edge detection algorithm, which captures fully connected tree-ring edges throughout the measured image stack and is validated on X-ray computed tomography data taken from six wood species.
Keywords: Ring recognition, edge detection, X-ray computed tomography, dendrochronology.
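A minimal 2D analogue of the edge-detection step is sketched below using scikit-image's standard Canny detector on a synthetic ring pattern, applied slice by slice to a toy stack. The ring image, the sigma value, and the slice-wise application are illustrative assumptions; the paper's modified Canny algorithm for fully connected 3D ring edges is not reproduced here.

```python
import numpy as np
from skimage import feature

# Synthetic cross-section with concentric "annual rings" (a stand-in for one
# slice of an X-ray CT stack).
yy, xx = np.mgrid[-128:128, -128:128]
radius = np.hypot(yy, xx)
slice_img = 0.5 + 0.5 * np.sin(radius / 6.0)          # ring-like intensity pattern

edges = feature.canny(slice_img, sigma=2.0)
print(f"edge pixels detected: {edges.sum()} of {edges.size}")

# For a CT volume, the same detector can be applied slice by slice:
volume = np.stack([slice_img] * 4)                    # toy 4-slice stack
edge_stack = np.stack([feature.canny(s, sigma=2.0) for s in volume])
print("edge volume shape:", edge_stack.shape)
```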
3578 Using Time-Series NDVI to Model Land Cover Change: A Case Study in the Berg River Catchment Area, Western Cape, South Africa
Authors: A. S. Adesuyi, Z. Munch
Abstract:
This study investigates the use of a time-series of MODIS NDVI data to identify agricultural land cover change on an annual time step (2007-2012) and to characterize the trend. Following an ISODATA classification of the MODIS imagery to selectively mask areas that are neither agricultural nor semi-natural, NDVI signatures were created to identify areas of cereals and vineyards with the aid of ancillary, pictometry and field sample data for 2010. The NDVI signature curve and training samples were used to create a decision tree model in WEKA 3.6.9 using the decision tree classifier (J48) algorithm; Model 1 includes the ISODATA classification and Model 2 does not. These two models were then used to classify all data for the study area for 2010, producing land cover maps with classification accuracies of 77% and 80% for Model 1 and Model 2, respectively. Model 2 was subsequently used to create land cover classification and change detection maps for all other years. Subtle changes and areas of consistency (unchanged) were observed in the agricultural classes and crop practices over the years, as predicted by the land cover classification. Forty-one percent of the catchment comprised cereals, with 35% possibly following a crop rotation system. Vineyards largely remained constant, with only one percent conversion to vineyard from other land cover classes.
Keywords: Change detection, land cover, NDVI, time-series.
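A compact analogue of the classification step is sketched below: synthetic annual NDVI curves for three hypothetical classes are fed to a decision tree classifier. J48 in WEKA implements C4.5, so scikit-learn's CART-style tree is only a stand-in; the seasonal profiles, class names, and accuracy are invented for illustration and are unrelated to the Berg River signatures.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic annual NDVI curves (23 composites per year); each hypothetical
# class gets a distinct seasonal profile.
t = np.linspace(0, 2 * np.pi, 23)
profiles = {
    "cereal":       0.30 + 0.40 * np.clip(np.sin(t), 0, None),
    "vineyard":     0.35 + 0.20 * np.clip(np.sin(t + 1.0), 0, None),
    "semi-natural": 0.45 + 0.05 * np.sin(t),
}
X, y = [], []
for label, base in profiles.items():
    for _ in range(200):
        X.append(base + rng.normal(scale=0.03, size=23))
        y.append(label)
X, y = np.array(X), np.array(y)

# CART-style decision tree as a stand-in for WEKA's J48 (C4.5) classifier.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {tree.score(X_te, y_te):.2f}")
```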
3577 Urban and Rural Children’s Knowledge on Biodiversity in Bizkaia: Tree Identification Skills and Animal and Plant Listing
Authors: Joserra Díez, Ainhoa Meñika, Iñaki Sanz-Azkue, Arritokieta Ortuzar
Abstract:
Biodiversity provides humans with a great range of ecosystemic services; it is therefore an indispensable resource and a legacy to coming generations. However, in the last decades, the increasing exploitation of the planet has caused a great loss of biodiversity, and familiarity with it has decreased remarkably, especially in urbanized areas, due to the decreasing attachment of humans to nature. Yet, the Primary Education curriculum emphasizes the identification of flora and fauna to guarantee children's knowledge of their surroundings, so that they care for the environment as well as for themselves. In order to produce effective didactic material that meets the needs of both teachers and pupils, it is fundamental to diagnose the current situation. In the present work, the knowledge on biodiversity of 3rd cycle Primary Education students in Biscay (n=98) and its relation to the size of the town/city of their school is discussed. Two tests were used for this purpose: one on tree identification, and another in which the students listed the species of trees and animals they knew. Results reveal that students' knowledge of tree identification is scarce regardless of the size of the city/town of their school. On the other hand, animal species are better known than tree species.
Keywords: Biodiversity, population, tree identification, animal identification.
3576 Modeling the Country Selection Decision in Retail Internationalization
Authors: A. Hortacsu, A. Tektas
Abstract:
This paper aims to develop a model that assists the international retailer in selecting the country that maximizes the degree of fit between the retailer's goals and the country characteristics in its initial internationalization move. A two-stage multi-criteria decision model is designed, integrating the Analytic Hierarchy Process (AHP) and Goal Programming. Ethical, cultural, geographic and economic proximity are identified as the relevant constructs of the internationalization decision. The constructs are further structured into sub-factors within the analytic hierarchy. The model helps the retailer to integrate, rank and weigh a number of hard and soft factors and to prioritize the countries accordingly. The model has been implemented for a Turkish luxury goods retailer that was planning to internationalize, and the actual entry of this retailer into the selected country supports the model. Implementation with a single retailer limits the generalizability of the results; however, the emphasis of the paper is on construct identification and model development. The paper enriches the existing literature by proposing a hybrid multi-objective decision model which introduces new soft dimensions, i.e. perceived distance, ethical proximity and humane orientation, to the decision process and facilitates effective decision making.
Keywords: Analytic hierarchy process, culture, ethics, goal programming, retail foreign market selection.
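The AHP stage of such a model reduces to extracting priority weights from a pairwise comparison matrix; a small numerical sketch follows. The comparison values for the four proximity constructs are hypothetical, and the goal programming stage that would combine these weights with country data is not shown.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four country-selection factors
# (ethical, cultural, geographic, economic proximity) on Saaty's 1-9 scale.
P = np.array([
    [1.0, 3.0, 5.0, 1.0],
    [1/3, 1.0, 3.0, 1/3],
    [1/5, 1/3, 1.0, 1/5],
    [1.0, 3.0, 5.0, 1.0],
])

# AHP weights = normalized principal eigenvector of the comparison matrix.
vals, vecs = np.linalg.eig(P)
i = np.argmax(vals.real)
w = np.abs(vecs[:, i].real)
w /= w.sum()

# Consistency index and ratio (random index RI = 0.90 for n = 4).
ci = (vals.real[i] - len(P)) / (len(P) - 1)
print("weights:", np.round(w, 3), " CR =", round(ci / 0.90, 3))
```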
3575 Pressure Losses on Realistic Geometry of Tracheobronchial Tree
Authors: Michaela Chovancova, Jakub Elcner
Abstract:
The real bronchial tree is a very complicated piping system, and the analysis of flow and pressure losses in this system is very difficult. Due to the complex geometry and the very small size of the lower generations, examination by CFD is possible only in the central part of the bronchial tree. To specify the pressure losses of the lower generations, it is necessary to provide a mathematical equation. Determining mathematical formulas for the calculation of pressure losses in the real lungs is a time-consuming and inefficient process due to its complexity and diversity. For these calculations it is necessary to slightly simplify the lung geometry (same cross-section over the length of an individual generation) or to use one of the idealized lung models (Horsfield, Weibel). The article compares the values of pressure losses obtained from a CFD simulation of air flow in the central part of the real bronchial tree with the values calculated in slightly simplified real lungs using a mathematical relationship derived from the Bernoulli and continuity equations. The aim of the article is to analyse the accuracy of the analytical method and the possibility of using it for the calculation of pressure losses in the lower generations, which are difficult to solve by numerical methods due to the small geometry.
Keywords: Pressure gradient, airways resistance, real geometry of bronchial tree, breathing.
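The analytical relationship itself is not reproduced in the abstract, but its generic form follows directly from combining the continuity equation with the extended Bernoulli equation over one airway segment; the hedged derivation below uses a generic loss term rather than the authors' specific expression.

```latex
% Continuity between parent (1) and daughter (2) cross-sections of one segment:
%   Q = A_1 v_1 = A_2 v_2 .
% Extended Bernoulli equation with a viscous/minor loss term \Delta p_L:
%   p_1 + \tfrac{1}{2}\rho v_1^2 = p_2 + \tfrac{1}{2}\rho v_2^2 + \Delta p_L ,
% which gives the pressure drop over the segment in terms of the flow rate Q:
\Delta p = p_1 - p_2
         = \frac{\rho}{2}\left(v_2^{2} - v_1^{2}\right) + \Delta p_L
         = \frac{\rho Q^{2}}{2}\left(\frac{1}{A_2^{2}} - \frac{1}{A_1^{2}}\right) + \Delta p_L .
% For fully developed laminar flow, \Delta p_L is often approximated by the
% Hagen-Poiseuille term 8 \mu L Q / (\pi R^{4}); that choice is an assumption here.
```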
3574 Remote-Sensing Sunspot Images to Obtain the Sunspot Roads
Authors: Hossein Mirzaee, Farhad Besharati
Abstract:
A combination of image fusion and the quad tree decomposition method is used for detecting the sunspot trajectories in each month and for computing the latitudes of these trajectories in each solar hemisphere. Daily solar images taken with the SOHO satellite are fused for each month, and the resulting fused image is decomposed with the quad tree decomposition method in order to classify the sunspot trajectories and then obtain precise information about their latitudes. From the fusion we also deduce some remarkable physical conclusions about the behavior of the Sun's magnetic fields. Using quad tree decomposition, we give information about the regions on the Sun's surface and the space angles from which tremendous flares and hot plasma gases permeate interplanetary space and attack satellites and human technical systems. Here, sunspot images from June, July and August 2001 are used, and a method is given to compute the latitude of sunspot trajectories in each month from the sunspot images.
Keywords: Quad tree decomposition, sunspot.
3573 Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications
Authors: Kanthida Kusonmano, Michael Netzer, Bernhard Pfeifer, Christian Baumgartner, Klaus R. Liedl, Armin Graber
Abstract:
The availability of high dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and to separate predefined classes, such as patients with a particular disease versus healthy controls. However, most of the existing research focuses on a specific dataset, and there is a lack of generic comparisons between classifiers, which might provide a guideline for biologists or bioinformaticians to select the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, on mock datasets. We mimic common biological scenarios by simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by the ability of SVM to minimize the probability of error. Moreover, the Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, strongly depend on the ratio of discriminators and perform better with a higher number of discriminators.
Keywords: Classification, high dimensional data, machine learning.
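The comparison protocol described above maps directly onto a few lines of scikit-learn, sketched below: cross-validated AUC is computed for the six classifiers on a mock high-dimensional dataset. The dataset generator, its proportion of informative features, and all model hyperparameters are illustrative assumptions, not the paper's simulation design.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Mock high-dimensional dataset: few real discriminators among many noise
# features, loosely imitating simulated biomarker scenarios.
X, y = make_classification(n_samples=200, n_features=500, n_informative=10,
                           n_redundant=0, random_state=0)

models = {
    "SVM": SVC(probability=True, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:20s} AUC = {auc:.3f}")
```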
3572 An Optimized Design of Non-uniform Filterbank
Authors: Ram Kumar Soni, Alok Jain, Rajiv Saxena
Abstract:
The tree-structured approach to non-uniform filterbanks (NUFB) is normally used for perfect reconstruction (PR). PR is not always feasible due to certain limitations, i.e., constraints in selecting design parameters and design complexity, and at times the output is severely affected by aliasing error if the necessary and sufficient conditions of PR are not satisfied perfectly. Therefore, researchers have shown broad interest in near perfect reconstruction (NPR). In this work, an optimized tree structure technique is used for the design of an NPR non-uniform filterbank. Window functions of the Blackman family are used to design the prototype FIR filter. A single-variable linear optimization is used to minimize the amplitude distortion. The main feature of the proposed design is its simplicity together with the linear phase property.
Keywords: Tree structure, NUFB, QMF, NPR.
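A hedged illustration of the prototype-filter step is given below: a linear-phase FIR lowpass filter is designed with a Blackman window via SciPy, its highpass mirror is formed by modulation, and the amplitude distortion of the two-band pair is reported. The filter length, cutoff, and distortion measure are illustrative choices, not the optimized single-variable design of the paper.

```python
import numpy as np
from scipy.signal import firwin, freqz

# Prototype linear-phase FIR lowpass filter for a two-channel (QMF-style)
# building block of a tree-structured filterbank, designed with a Blackman window.
numtaps, cutoff = 65, 0.5                 # cutoff at half Nyquist for the 2-band split
h0 = firwin(numtaps, cutoff, window="blackman")
h1 = h0 * (-1.0) ** np.arange(numtaps)    # highpass complement via modulation

# Amplitude distortion of the two-band pair: |H0(w)|^2 + |H1(w)|^2 should be
# close to 1 for NPR; the unoptimized window design leaves a residual
# distortion that an optimization step like the paper's would minimize.
w, H0 = freqz(h0, worN=1024)
_, H1 = freqz(h1, worN=1024)
T = np.abs(H0) ** 2 + np.abs(H1) ** 2
print(f"peak amplitude distortion: {np.max(np.abs(T - 1)):.4f}")
```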