Search results for: elemental graph data model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12927


12747 All-Pairs Shortest-Paths Problem for Unweighted Graphs in O(n^2 log n) Time

Authors: Udaya Kumar Reddy K. R, K. Viswanathan Iyer

Abstract:

Given a simple connected unweighted undirected graph G = (V(G), E(G)) with |V(G)| = n and |E(G)| = m, we present a new algorithm for the all-pairs shortest-path (APSP) problem. The running time of our algorithm is in O(n^2 log n). This bound is an improvement over the previous best known O(n^2.376) time bound of Raimund Seidel (1995) for general graphs. The algorithm presented does not rely on fast matrix multiplication. With slight modifications, our algorithm also computes the APSP problem for unweighted directed graphs in O(n^2 log n) time, improving on a previous best known O(n^2.575) time bound of Uri Zwick (2002).
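The abstract does not reproduce the algorithm itself; purely as a point of reference, the following Python sketch computes unweighted APSP by running one breadth-first search per vertex, the O(nm) baseline that results such as this aim to improve on. The graph and names are illustrative, not the authors' construction.

```python
from collections import deque

def apsp_bfs(adj):
    """Baseline all-pairs shortest paths for an unweighted graph.

    adj: dict mapping each vertex to an iterable of neighbours.
    Returns dist[u][v] = number of edges on a shortest u-v path
    (vertices unreachable from u are simply absent from dist[u]).
    """
    dist = {}
    for source in adj:
        d = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in d:          # first visit = shortest distance
                    d[v] = d[u] + 1
                    queue.append(v)
        dist[source] = d
    return dist

if __name__ == "__main__":
    # 4-cycle with a chord: 0-1-2-3-0 and 0-2
    graph = {0: [1, 3, 2], 1: [0, 2], 2: [1, 3, 0], 3: [2, 0]}
    print(apsp_bfs(graph)[1])   # {1: 0, 0: 1, 2: 1, 3: 2}
```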

Keywords: Distance in graphs, Dynamic programming, Graph algorithms, Shortest paths.

12746 Zero Inflated Strict Arcsine Regression Model

Authors: Y. N. Phang, E. F. Loh

Abstract:

The zero inflated strict arcsine model is a newly developed model which has been found appropriate for modeling overdispersed count data. In this study, we extend the zero inflated strict arcsine model to a zero inflated strict arcsine regression model by taking into consideration the extra variability caused by excess zeros and covariates in count data. The maximum likelihood estimation method is used to estimate the parameters of this zero inflated strict arcsine regression model.

Keywords: Overdispersed count data, maximum likelihood estimation, simulated annealing.

12745 Using Perspective Schemata to Model the ETL Process

Authors: Valeria M. Pequeno, Joao Carlos G. M. Pires

Abstract:

Data Warehouses (DWs) are repositories which contain the unified history of an enterprise for decision support. The data must be Extracted from information sources, Transformed and integrated to be Loaded (ETL) into the DW, using ETL tools. These tools focus on data movement, where the models are only used as a means to this aim. From a conceptual viewpoint, the authors want to innovate the ETL process in two ways: 1) to make compatibility between models explicit in a declarative fashion, using correspondence assertions, and 2) to identify the instances of different sources that represent the same real-world entity. This paper presents an overview of the proposed framework to model the ETL process, which is based on the use of a reference model and perspective schemata. This approach provides the designer with a better understanding of the semantics associated with the ETL process.

Keywords: conceptual data model, correspondence assertions, data warehouse, data integration, ETL process, object relational database.

12744 Metabolic Predictive Model for PMV Control Based on Deep Learning

Authors: Eunji Choi, Borang Park, Youngjae Choi, Jinwoo Moon

Abstract:

In this study, a predictive model for estimating the metabolism (MET) of the human body was developed for optimal control of the indoor thermal environment. Human body images for indoor activities and human body joint coordinate values were collected as data sets, which were used in the predictive model. A deep learning algorithm was used in an initial model, and its numbers of hidden layers and hidden neurons were optimized. Lastly, the prediction performance was analyzed after the model was trained on the collected data. In conclusion, the possibility of MET prediction was confirmed, and developing additional data and refining the predictive model were proposed as directions for future study.

Keywords: Deep learning, indoor quality, metabolism, predictive model.

12743 Optimal and Critical Path Analysis of State Transportation Network Using Neo4J

Authors: Pallavi Bhogaram, Xiaolong Wu, Min He, Onyedikachi Okenwa

Abstract:

A transportation network is a realization of a spatial network, describing a structure which permits either vehicular movement or the flow of some commodity. Examples include road networks, railways, air routes, pipelines, and many more. The transportation network plays a vital role in maintaining the vigor of a nation's economy. Hence, ensuring that the network stays resilient at all times, especially in the face of challenges such as heavy traffic loads and large-scale natural disasters, is of utmost importance. In this paper, we used Neo4j to develop the graph. Neo4j is a leading open-source, NoSQL, native graph database that implements an ACID-compliant transactional backend for applications. The Southern California network model was developed using Neo4j, and the most critical and optimal nodes and paths in the network were obtained using centrality algorithms. The edge betweenness centrality algorithm calculates the critical or optimal paths using Yen's k-shortest paths algorithm, and the node betweenness centrality algorithm calculates the amount of influence a node has over the network. The preliminary study results confirm that Neo4j can be a suitable tool to study the important nodes and the critical paths of a major congested metropolitan area.
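The Neo4j/Cypher workflow itself is not reproduced in the abstract; purely as a conceptual illustration of node and edge betweenness centrality and Yen's k-shortest paths, the sketch below uses NetworkX on a made-up toy network. The node names and edge weights are assumptions, not the Southern California model.

```python
import itertools
import networkx as nx

# Toy stand-in for a road network: nodes are interchanges, edges are road segments.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 4), ("A", "C", 2), ("B", "C", 1),
    ("B", "D", 5), ("C", "D", 8), ("D", "E", 3), ("B", "E", 10),
])

# Influence of each node over shortest paths in the network.
node_bc = nx.betweenness_centrality(G, weight="weight")
# Criticality of each road segment.
edge_bc = nx.edge_betweenness_centrality(G, weight="weight")

# Yen-style k-shortest (loopless) paths between two nodes, k = 3.
k_paths = list(itertools.islice(
    nx.shortest_simple_paths(G, "A", "E", weight="weight"), 3))

print(max(node_bc, key=node_bc.get))    # most critical node
print(max(edge_bc, key=edge_bc.get))    # most critical segment
print(k_paths)                          # candidate optimal/critical paths
```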

Keywords: Transportation network, critical path, connectivity reliability, network model, Neo4J application, optimal path, edge betweenness centrality index, node betweenness centrality index, Yen's k-shortest paths.

12742 Handling Complexity of a Complex System Design: Paradigm, Formalism and Transformations

Authors: Hycham Aboutaleb, Bruno Monsuez

Abstract:

Current system complexity has reached a degree that requires addressing conception and design issues while taking into account environmental, operational, social, legal and financial aspects. Therefore, one of the main challenges is the way complex systems are specified and designed. The exponentially growing effort, cost and time investment of complex systems in the modeling phase emphasizes the need for a paradigm, a framework and an environment to handle system model complexity. For that, it is necessary to understand the expectations of the human user of the model and their limits. This paper presents a generic framework for designing complex systems, highlights the requirements a system model needs to fulfill to meet human user expectations, and suggests a graph-based formalism for modeling complex systems. Finally, a set of transformations is defined to handle the model complexity.

Keywords: Higraph-based formalism, system engineering paradigm, modeling requirements, graph-based transformations.

12741 A Deterministic Polynomial-time Algorithm for the Clique Problem and the Equality of P and NP Complexity Classes

Authors: Zohreh O. Akbari

Abstract:

In this paper, a deterministic polynomial-time algorithm is presented for the Clique problem. The case is considered as the problem of omitting the minimum number of vertices from the input graph so that none of the zeroes of the graph's adjacency matrix (except the main diagonal entries) would remain in the adjacency matrix of the resulting subgraph. The existence of a deterministic polynomial-time algorithm for the Clique problem, as an NP-complete problem, would prove the equality of the P and NP complexity classes.

Keywords: Clique problem, Deterministic Polynomial-time Algorithm, Equality of P and NP Complexity Classes.

12740 A Formulation of the Latent Class Vector Model for Pairwise Data

Authors: Tomoya Okubo, Kuninori Nakamura, Shin-ichi Mayekawa

Abstract:

In this research, a latent class vector model for pairwise data is formulated. Compared to the basic vector model, this model yields consistent estimates of the parameters since the number of parameters to be estimated does not increase with the number of subjects. The result of the analysis reveals that the model was stable and could classify each subject into the latent classes representing the typical scales used by these subjects.

Keywords: finite mixture models, latent class analysis, Thurstone's paired comparison method, vector model

12739 Validation of Automation Systems using Temporal Logic Model Checking and Groebner Bases

Authors: Quoc-Nam Tran, Anjib Mulepati

Abstract:

Validation of an automation system is an important issue. The goal is to check whether the system under investigation, modeled by a Petri net, never enters undesired states. Usually, tools dedicated to Petri nets such as DESIGN/CPN are used for reachability analysis. The biggest problem with this approach is that it is impossible to generate the full occurrence graph of the system because it is too large. In this paper, we show how computational methods such as temporal logic model checking and Groebner bases can be used to verify the correctness of the design of an automation system. We report our experimental results with two automation systems: an Automated Guided Vehicle (AGV) system and a traffic light system. Validation of these two systems took from 10 to 30 seconds on a PC, depending on the optimization parameters.
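The Petri-net encoding used by the authors is not shown in the abstract; as a rough illustration of the Groebner-basis machinery alone, the SymPy sketch below tests whether a small polynomial "constraint" system is consistent: an unsatisfiable system reduces to the basis {1}. The polynomials are invented for illustration only.

```python
from sympy import groebner, symbols

x, y = symbols("x y")

# A hypothetical "state constraint" system: solvable iff the basis is not {1}.
reachable = groebner([x**2 + y**2 - 1, x - y], x, y, order="lex")
unreachable = groebner([x + y - 1, x + y - 2], x, y, order="lex")

print(reachable.exprs)     # nontrivial basis -> the constraints are consistent
print(unreachable.exprs)   # [1] -> no common solution, i.e. the state is not reachable
```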

Keywords: Computational Intelligence, Temporal Logic Reasoning, Model Checking, Groebner Bases.

12738 A Comparison of Image Data Representations for Local Stereo Matching

Authors: André Smith, Amr Abdel-Dayem

Abstract:

The stereo matching problem, while having been present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While, at its core, the cost is based on comparing the image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how well these data representations reduce the cost for the correct correspondence relative to other possible matches.
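As a minimal sketch of the kind of cost function discussed (not the paper's experimental setup), the following NumPy code computes a window-based sum-of-absolute-differences cost for a candidate disparity and applies it to the same synthetic scene in RGB and grayscale representations; all sizes and values are illustrative.

```python
import numpy as np

def sad_cost(left, right, row, col, disparity, half_win=2):
    """Sum-of-absolute-differences matching cost for one candidate disparity.

    left, right: H x W (grayscale) or H x W x 3 (colour) arrays.
    Compares a window around (row, col) in the left image with the window
    shifted left by `disparity` in the right image.
    """
    r0, r1 = row - half_win, row + half_win + 1
    c0, c1 = col - half_win, col + half_win + 1
    patch_l = left[r0:r1, c0:c1].astype(float)
    patch_r = right[r0:r1, c0 - disparity:c1 - disparity].astype(float)
    return float(np.abs(patch_l - patch_r).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    right_rgb = rng.random((60, 80, 3))
    left_rgb = np.roll(right_rgb, shift=5, axis=1)     # true disparity = 5
    left_gray = left_rgb.mean(axis=2)                  # one possible representation
    right_gray = right_rgb.mean(axis=2)

    costs_rgb = {d: sad_cost(left_rgb, right_rgb, 30, 40, d) for d in range(10)}
    costs_gray = {d: sad_cost(left_gray, right_gray, 30, 40, d) for d in range(10)}
    print(min(costs_rgb, key=costs_rgb.get), min(costs_gray, key=costs_gray.get))  # both 5
```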

Keywords: Colour data, local stereo matching, stereo correspondence, disparity map.

12737 Evaluating the Innovation Ability of Manufacturing Resources

Authors: M.F. Zaeh, G. Reinhart, U. Lindemann, F. Karl, W. Biedermann

Abstract:

Due to today's turbulent environment, manufacturing resources, particularly in assembly, must be reconfigured frequently. These reconfigurations are caused by various, partly cyclic, influencing factors. Hence, it is important to evaluate the innovation ability - the capability of resources to implement innovations quickly and efficiently without large expense - of manufacturing resources. For this purpose, a new methodology is presented in this article. Within the methodology, design structure matrices and graph theory are used. The results of the methodology include different indices to evaluate the innovation ability of the manufacturing resources. Due to the cyclicity of the influencing factors, the methodology can be used to synchronize the realization of adaptations.

Keywords: Changeability, Cycle Management, Design Structure Matrices, Graph Theory, Manufacturing Resource Planning, Production Management

12736 The Maximum Likelihood Method of Random Coefficient Dynamic Regression Model

Authors: Autcha Araveeporn

Abstract:

The Random Coefficient Dynamic Regression (RCDR) model is developed from the Random Coefficient Autoregressive (RCA) model and the Autoregressive (AR) model. The RCDR model is constructed by adding exogenous variables to the RCA model. In this paper, the concept of the Maximum Likelihood (ML) method is used to estimate the parameters of the RCDR(1,1) model. Simulation results are compared using the AIC and BIC criteria to assess the performance of the RCDR(1,1) model. The stationary and weakly stationary variables are estimated well when the exogenous variables are weakly stationary. However, the model selection indicated nonstationary variables when based on the stationary data of the exogenous variables.

Keywords: Autoregressive, Maximum Likelihood Method, Nonstationarity, Random Coefficient Dynamic Regression, Stationary.

12735 Fish Locomotion for Innovative Marine Propulsion Systems

Authors: Omar B. Yaakob, Yasser M. Ahmed, Ahmad F. Said

Abstract:

There is an essential need for obtaining a mathematical representation of fish body undulations, which can be used for designing and building new innovative types of marine propulsion systems with less environmental impact. This research work presents a case study to derive a mathematical model for fish body movement. Observation and image-capturing methods were used in this study in order to obtain a mathematical representation of the Clarias batrachus fish (catfish). An experiment was conducted using an aquarium with dimensions 0.609 m x 0.304 m x 0.304 m, and a 0.5 m ruler was attached at the base of the aquarium. A Progressive Scan Monochrome Camera was positioned 1.8 m above the base of the aquarium to record swimming sequences. Seven points were marked on the fish body using a white marker to indicate the fish movement and to measure the amplitude of undulation. Images from video recordings (20 frames/s) were analyzed frame by frame using a local coordinate system, with a time interval of 0.05 s. The amplitudes of undulation were obtained from image analysis for each point marked on the fish body. A graph of amplitude of undulation versus time was plotted to derive a mathematical fit; the resulting function is a ninth-order polynomial.
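A minimal sketch of the final fitting step, using synthetic amplitude samples at the stated 20 frames/s rate rather than the authors' digitised video data, is shown below with NumPy's ninth-order polynomial fit.

```python
import numpy as np

# Synthetic stand-in for the measured undulation amplitude of one body point,
# sampled at 20 frames/s (time step 0.05 s); the study's real data are not reproduced.
t = np.arange(0.0, 2.0, 0.05)
amplitude = 0.02 * np.sin(2 * np.pi * 1.5 * t) * (1 + 0.3 * t)   # metres (made up)

# Ninth-order polynomial fit, as described in the abstract.
coeffs = np.polyfit(t, amplitude, deg=9)
fitted = np.polyval(coeffs, t)

rmse = np.sqrt(np.mean((fitted - amplitude) ** 2))
print(coeffs)   # polynomial coefficients, highest order first
print(rmse)     # goodness of fit on the synthetic samples
```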

Keywords: Fish locomotion, body undulation, steady and unsteady swimming modes.

12734 Human Growth Curve Estimation through a Combination of Longitudinal and Cross-sectional Data

Authors: Sedigheh Mirzaei S., Debasis Sengupta

Abstract:

Parametric models have been quite popular for studying human growth, particularly in relation to biological parameters such as peak size velocity and age at peak size velocity. Longitudinal data are generally considered to be vital for fitting a parametric model to individual-specific data, and for studying the distribution of these biological parameters in a human population. However, cross-sectional data are easier to obtain than longitudinal data. In this paper, we present a method of combining longitudinal and cross-sectional data for the purpose of estimating the distribution of the biological parameters. We demonstrate, through simulations in the special case of the Preece-Baines model, how estimates based on longitudinal data can be improved upon by harnessing the information contained in cross-sectional data. We study the extent of improvement for different mixes of the two types of data, and finally illustrate the use of the method through data collected by the Indian Statistical Institute.
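For reference, the first Preece-Baines model is commonly written as h(t) = h1 - 2(h1 - hθ) / (exp(s0(t - θ)) + exp(s1(t - θ))). The sketch below merely evaluates this curve for illustrative parameter values; it does not reproduce the authors' estimation procedure or their longitudinal/cross-sectional combination.

```python
import numpy as np

def preece_baines_1(t, h1, h_theta, s0, s1, theta):
    """Preece-Baines model 1 growth curve (one common parameterisation).

    h1: adult size, h_theta: size at age theta, s0/s1: rate constants.
    """
    return h1 - 2.0 * (h1 - h_theta) / (np.exp(s0 * (t - theta)) + np.exp(s1 * (t - theta)))

if __name__ == "__main__":
    ages = np.linspace(2, 20, 500)
    # Illustrative parameter values only (roughly height-like numbers in cm).
    heights = preece_baines_1(ages, h1=176.0, h_theta=163.0, s0=0.11, s1=1.2, theta=14.6)
    velocity = np.gradient(heights, ages)          # crude numerical growth velocity
    print(heights[::50].round(1))                  # sampled curve values
    print(float(ages[velocity.argmax()]))          # approximate age at peak velocity
```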

Keywords: Preece-Baines growth model, MCMC method, Mixed effect model

12733 Likelihood Estimation for Stochastic Epidemics with Heterogeneous Mixing Populations

Authors: Yilun Shang

Abstract:

We consider a heterogeneously mixing SIR stochastic epidemic process in populations described by a general graph. Likelihood theory is developed to facilitate statistical inference for the parameters of the model under complete observation. We show that these estimators are asymptotically unbiased and Gaussian by using a martingale central limit theorem.
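The likelihood theory itself is not reproduced here; as a minimal sketch of the underlying process, the following code simulates a discrete-time SIR epidemic on a contact graph with NetworkX. The graph, beta and gamma are illustrative assumptions, not the paper's model specification.

```python
import random
import networkx as nx

def simulate_sir(G, beta=0.3, gamma=0.1, initial_infected=1, seed=42, max_steps=200):
    """Discrete-time SIR epidemic on a contact graph.

    Each step, every infectious node transmits along each edge to a susceptible
    neighbour with probability beta, and then recovers with probability gamma.
    Returns the final number of recovered nodes.
    """
    rng = random.Random(seed)
    status = {v: "S" for v in G}
    for v in rng.sample(list(G), initial_infected):
        status[v] = "I"
    for _ in range(max_steps):
        infectious = [v for v in G if status[v] == "I"]
        if not infectious:
            break
        for v in infectious:
            for u in G.neighbors(v):
                if status[u] == "S" and rng.random() < beta:
                    status[u] = "I"     # newly infected; transmits from the next step
            if rng.random() < gamma:
                status[v] = "R"
    return sum(1 for v in G if status[v] == "R")

if __name__ == "__main__":
    contact_graph = nx.erdos_renyi_graph(200, 0.03, seed=1)   # stand-in contact structure
    print(simulate_sir(contact_graph))
```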

Keywords: statistical inference, maximum likelihood, epidemic model, heterogeneous mixing.

12732 Design of an Ensemble Learning Behavior Anomaly Detection Framework

Authors: Abdoulaye Diop, Nahid Emad, Thierry Winter, Mohamed Hilia

Abstract:

Data asset protection is a crucial issue in the cybersecurity field. Companies use logical access control tools to vault their information assets and protect them against external threats, but they lack solutions to counter insider threats. Nowadays, insider threats are the most significant concern of security analysts. They are mainly individuals with legitimate access to companies' information systems, who use their rights with malicious intent. In several fields, behavior anomaly detection is the method used by cyber specialists to effectively counter the threats of malicious user activities. In this paper, we present a step toward the construction of a user and entity behavior analysis framework by proposing a behavior anomaly detection model. This model combines machine learning classification techniques and graph-based methods, relying on linear algebra and parallel computing techniques. We show the utility of an ensemble learning approach in this context. We present test results of some detection methods on a representative access control dataset. The use of some of the explored classifiers gives results of up to 99% accuracy.
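As a minimal sketch of the ensemble-learning idea (not the authors' framework, features, or access-control data), the following scikit-learn code trains a majority-vote ensemble of heterogeneous classifiers on a synthetic, imbalanced "normal vs. anomalous" dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic, imbalanced stand-in for "normal vs. anomalous" user behaviour records.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    stratify=y, random_state=0)

# Simple ensemble of heterogeneous classifiers combined by majority vote.
ensemble = VotingClassifier(estimators=[
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
], voting="hard")

ensemble.fit(X_train, y_train)
print(f"accuracy: {ensemble.score(X_test, y_test):.3f}")
```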

Keywords: Cybersecurity, data protection, access control, insider threat, user behavior analysis, ensemble learning, high performance computing.

12731 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation

Authors: Somayeh Komeylian

Abstract:

The direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, the antenna array output modeling involves numerous parameters, including noise samples, signal waveform, signal directions, signal number, and signal-to-noise ratio (SNR), and thereby the methods of DoA estimation rely heavily on the generalization characteristic for establishing a large number of training data sets. Hence, we have represented two different optimization models of DoA estimation: (1) the implementation of the decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) the optimization method of the deep neural network (DNN) radial basis function (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for three classes. However, the accuracy and robustness of DoA estimation are still highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, and thereby the method may fail to deliver high precision for DoA estimation. Therefore, this work makes a further contribution by developing the DNN-RBF model for DoA estimation to overcome the limitations of non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model have confirmed better DoA estimation performance compared with the LS-SVM algorithm. Consequently, we have evaluated the performance of the two aforementioned optimization methods for DoA estimation using the mean squared error (MSE).

Keywords: DoA estimation, adaptive antenna array, Deep Neural Network, LS-SVM optimization model, radial basis function, MSE.

12730 WiFi Data Offloading: Bundling Method in a Canvas Business Model

Authors: Majid Mokhtarnia, Alireza Amini

Abstract:

Mobile operators deal with increasing data traffic as a critical issue. As a result, a vital responsibility of the operators is to deal with such a trend in order to create added value. This paper addresses a bundling method in a Canvas business model within a WiFi Data Offloading (WDO) strategy, by which some elements of the model may be affected. In the proposed method, a number of data packages are offered to subscribers, some of which include a complimentary volume of WiFi-offloaded data free of charge. The paper at hand analyses this method from the viewpoints of attractiveness and profitability. The results demonstrate that the quality of implementation of the WDO strongly affects the final result and helps the decision maker to make the best decision.

Keywords: Bundling, canvas business model, telecommunication, WiFi Data Offloading.

12729 Unified Structured Process for Health Analytics

Authors: Supunmali Ahangama, Danny Chiang Choon Poo

Abstract:

Health analytics (HA) is used in healthcare systems for effective decision making, management and planning of healthcare and related activities. However, user resistance, the unique position of medical data content and structure (including heterogeneous and unstructured data), and impromptu HA projects have held up progress in HA applications. Notably, the accuracy of outcomes depends on the skills and domain knowledge of the data analyst working on the healthcare data. The success of HA depends on having a sound process model, effective project management and the availability of supporting tools. Thus, to overcome these challenges through an effective process model, we propose an HA process model with features from the rational unified process (RUP) model and agile methodology.

Keywords: Agile methodology, health analytics, unified process model, UML.

12728 An Alternative Proof for the NP-completeness of Top Right Access point-Minimum Length Corridor Problem

Authors: Priyadarsini P.L.K, Hemalatha T.

Abstract:

In the Top Right Access point Minimum Length Corridor (TRA-MLC) problem [1], a rectangular boundary partitioned into rectilinear polygons is given, and the problem is to find a corridor of least total length that includes the top right corner of the outer rectangular boundary. A corridor is a tree containing a set of line segments lying along the outer rectangular boundary and/or on the boundaries of the rectilinear polygons. The corridor must contain at least one point from the boundary of the outer rectangle and also of each rectilinear polygon. Gutierrez and Gonzalez [1] proved that the MLC problem, along with some of its restricted versions and variants, is NP-complete. In this paper, we give a shorter proof of the NP-completeness of TRA-MLC by finding the reduction in the following way.

Keywords: NP-complete, 2-connected planar graph, Grid embedding of a plane graph.

12727 Evaluation of Graph-based Analysis for Forest Fire Detections

Authors: Young Gi Byun, Yong Huh, Kiyun Yu, Yong Il Kim

Abstract:

Spatial outliers in remotely sensed imagery represent observed quantities showing unusual values compared to their neighboring pixel values. There have been various methods to detect spatial outliers based on spatial autocorrelation in statistics and data mining. These methods may be applied to detecting forest fire pixels in MODIS imagery from NASA's Aqua satellite. This is because forest fire detection can be regarded as finding spatial outliers using the spatial variation of brightness temperature. This point is what distinguishes our approach from traditional fire detection methods. In this paper, we propose a graph-based forest fire detection algorithm which is based on spatial outlier detection methods, and test the proposed algorithm to evaluate its applicability. For this, the ordinary scatter plot and Moran's scatter plot were used. In order to evaluate the proposed algorithm, the results were compared with the MODIS fire product provided by the NASA MODIS Science Team, which showed the potential of the proposed algorithm for detecting fire pixels.
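A minimal sketch of the Moran-scatter-plot idea behind such a detector, run on a synthetic brightness-temperature grid rather than MODIS data, is given below: each pixel's standardised value is compared with the average of its 4-connected neighbours, and a locally hot pixel stands out as a spatial outlier.

```python
import numpy as np

def moran_scatter(values):
    """Return (z, lag) for a 2-D grid: z is the standardised pixel value and
    lag is the mean standardised value of the 4-connected neighbours.
    Points far from the diagonal of the (z, lag) scatter are spatial outliers."""
    z = (values - values.mean()) / values.std()
    lag = np.zeros_like(z)
    rows, cols = z.shape
    for i in range(rows):
        for j in range(cols):
            nbrs = []
            if i > 0: nbrs.append(z[i - 1, j])
            if i < rows - 1: nbrs.append(z[i + 1, j])
            if j > 0: nbrs.append(z[i, j - 1])
            if j < cols - 1: nbrs.append(z[i, j + 1])
            lag[i, j] = np.mean(nbrs)
    return z, lag

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bt = rng.normal(300.0, 1.0, size=(20, 20))   # synthetic brightness temperatures (K)
    bt[10, 10] = 330.0                           # a single hot (fire-like) pixel
    z, lag = moran_scatter(bt)
    outlier_score = z - lag                      # large positive -> locally hot
    print(np.unravel_index(outlier_score.argmax(), outlier_score.shape))   # (10, 10)
```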

Keywords: Spatial Outlier Detection, MODIS, Forest Fire

12726 An efficient Activity Network Reduction Algorithm based on the Label Correcting Tracing Algorithm

Authors: Weng Ming Chu

Abstract:

When faced with stochastic networks with uncertain durations for their activities, securing the network completion time becomes problematic, not only because of the non-identical duration pdf of each node, but also because of the interdependence of network paths. As evidenced by Adlakha & Kulkarni [1], many methods and algorithms have been put forward in an attempt to resolve this issue, but most have encountered the same large-size network problem. Therefore, in this research, we focus on network reduction through a combined series/parallel mechanism. Our suggested algorithm, named the Activity Network Reduction Algorithm (ANRA), can efficiently transform a large-size network into a Series/Parallel Irreducible Network (SPIN). SPIN can enhance stochastic network analysis, as well as serve as the judgment of symmetry for graph theory.

Keywords: Series/Parallel network, Stochastic network, Network reduction, Interdictive Graph, Complexity Index.

12725 Public Transport Planning System by Dijkstra Algorithm: Case Study Bangkok Metropolitan Area

Authors: Pimploi Tirastittam, Phutthiwat Waiyawuththanapoom

Abstract:

Nowadays, the promotion of the public transportation system in the Bangkok Metropolitan Area has increased, for example through the "Free Bus for Thai Citizen" campaign and the prospect of several MRT routes, to increase convenience and comfort for Bangkok Metropolitan Area citizens. But citizens do not make full use of it, because they lack data and information, and also confidence in the public transportation system of Thailand, especially in the time and safety aspects. This research is the Public Transport Planning System by Dijkstra Algorithm: Case Study Bangkok Metropolitan Area, focusing on bus, BTS and MRT schedules/routes to give the most information to passengers. They can choose the way and the routes easily by using the Dijkstra STAR Algorithm of graph theory, which also shows the fare of the trip. This application was evaluated by 30 normal users to find the mean and standard deviation of the developed system. Results of the evaluation showed that the system is at a good level of satisfaction (4.20 and 0.40). From these results, we can conclude that the system can be used properly and effectively according to the objective.
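As a minimal sketch of the routing core described here (the stop names, travel times and fares of the real Bangkok network are not reproduced), the following Python code implements Dijkstra's algorithm with a binary heap over a toy transit graph.

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest travel time from source to target.

    graph: dict node -> list of (neighbour, travel_minutes) pairs.
    Returns (total_minutes, path), or (float('inf'), []) if unreachable.
    """
    queue = [(0, source, [source])]
    best = {}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in best and best[node] <= cost:
            continue
        best[node] = cost
        for nxt, weight in graph.get(node, []):
            heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []

if __name__ == "__main__":
    # Toy stand-in for a bus/BTS/MRT stop graph (names and times are made up).
    transit = {
        "Victory Monument": [("Siam", 8), ("Mo Chit", 12)],
        "Siam": [("Asok", 10), ("Silom", 9)],
        "Mo Chit": [("Asok", 15)],
        "Asok": [("Silom", 7)],
        "Silom": [],
    }
    print(dijkstra(transit, "Victory Monument", "Silom"))   # (17, via Siam)
```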

Keywords: Dijkstra Algorithm, Graph Theory, Shortest Route, Public Transport, Bangkok Metropolitan Area.

12724 Experimental Modal Analysis and Model Validation of Antenna Structures

Authors: B.R. Potgieter, G. Venter

Abstract:

Numerical design optimization is a powerful tool that can be used by engineers during any stage of the design process. There are many different applications for structural optimization. A specific application that will be discussed in the following paper is experimental data matching. Data obtained through tests on a physical structure will be matched with data from a numerical model of that same structure. The data of interest will be the dynamic characteristics of an antenna structure focusing on the mode shapes and modal frequencies. The structure used was a scaled and simplified model of the Karoo Array Telescope-7 (KAT-7) antenna structure. This kind of data matching is a complex and difficult task. This paper discusses how optimization can assist an engineer during the process of correlating a finite element model with vibration test data.

Keywords: Finite Element Model (FEM), Karoo Array Telescope (KAT-7), modal frequencies, mode shapes, optimization, shape optimization, size optimization, vibration tests

12723 A Hamiltonian Decomposition of 5-star

Authors: Walter Hussak, Heiko Schröder

Abstract:

Star graphs are Cayley graphs of symmetric groups of permutations, with transpositions as the generating sets. A star graph is a preferred interconnection network topology to a hypercube for its ability to connect a greater number of nodes with lower degree. However, an attractive property of the hypercube is that it has a Hamiltonian decomposition, i.e. its edges can be partitioned into disjoint Hamiltonian cycles, and therefore a simple routing can be found in the case of an edge failure. The existence of Hamiltonian cycles in Cayley graphs has been known for some time. So far, there are no published results on the much stronger condition of the existence of Hamiltonian decompositions. In this paper, we give a construction of a Hamiltonian decomposition of the star graph 5-star of degree 4, by defining an automorphism for 5-star and a Hamiltonian cycle which is edge-disjoint with its image under the automorphism.
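For context, the n-star graph can be built directly as the Cayley graph of S_n with the transpositions (1 i), i = 2..n, as generators; the sketch below constructs 5-star with NetworkX and confirms that it has 120 vertices and is 4-regular. The Hamiltonian decomposition itself is not reproduced here.

```python
from itertools import permutations
import networkx as nx

def star_graph(n):
    """Cayley graph of the symmetric group S_n with generators (1 i), i = 2..n.

    Vertices are permutations; two permutations are adjacent iff one is obtained
    from the other by swapping the first symbol with the symbol in position i."""
    G = nx.Graph()
    for p in permutations(range(1, n + 1)):
        for i in range(1, n):
            q = list(p)
            q[0], q[i] = q[i], q[0]
            G.add_edge(p, tuple(q))
    return G

if __name__ == "__main__":
    S5 = star_graph(5)
    degrees = {d for _, d in S5.degree()}
    print(S5.number_of_nodes(), S5.number_of_edges(), degrees)   # 120 240 {4}
```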

Keywords: interconnection networks, paths and cycles, graphs and groups.

12722 Waste to Biofuel by Torrefaction Technology

Authors: Jyh-Cherng Chen, Yu-Zen Lin, Wei-Zhi Chen

Abstract:

Torrefaction is one of the waste-to-energy (WTE) technologies being developed in Taiwan recently, which can effectively reduce the moisture and impurities and increase the energy density of biowaste. To understand the torrefaction characteristics of different biowastes and the influences of different torrefaction conditions, four typical biowastes were selected for the torrefaction experiments. The physical and chemical properties of the different biowastes prior to and after torrefaction were analyzed and compared. Experimental results show that the elemental carbon contents and caloric values of the four biowastes were significantly increased after torrefaction. The increase in combustible content and caloric value of bamboo was the greatest among the four biowastes. The caloric value of bamboo was increased from 1526 kcal/kg to 6104 kcal/kg after torrefaction at 300 °C for 1 hour; the caloric value of torrefied bamboo was almost four times the original. The increase of elemental carbon content in wood was the greatest (from 41.03% to 75.24%), and the next was bamboo (from 47.07% to 74.63%). The major parameters affecting the caloric value of torrefied biowaste followed the sequence of biowaste kind, torrefaction time, and torrefaction temperature. The optimal torrefaction conditions of the experiments were bamboo torrefied at 300 °C for 3 hours, with a corresponding caloric value of 5953 kcal/kg. This caloric value is similar to that of brown coal or bituminous coal.

Keywords: Torrefaction, waste to energy, calorie, biofuel.

12721 Parameter Determination of a Vehicle 5-DOF Model to Simulate Occupant Deceleration in a Frontal Crash

Authors: Javad Marzbanrad, Mostafa Pahlavani

Abstract:

This study investigated a vehicle Lumped Parameter Model (LPM) in a frontal crash. There are several ways of determining spring and damper characteristics, and this type of problem can be considered as system identification. This study uses a Genetic Algorithm (GA) procedure, an effective procedure for optimization issues, for minimizing the errors between target data (experimental data) and calculated results (obtained by analytical solution). In this study, the model was analyzed in 5-DOF and the results were compared with a 5-DOF serial model. Finally, the response of the model to external excitation is investigated.
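The abstract does not detail the GA setup; as a sketch of the same parameter-identification idea, the code below uses SciPy's differential evolution (an evolutionary optimizer standing in for the authors' GA) to recover the spring and damper constants of a toy 1-DOF model from synthetic "experimental" data. The model, parameter values and noise level are all assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def simulate(k, c, m=1000.0, v0=15.0, dt=1e-3, steps=300):
    """Displacement history of a 1-DOF mass-spring-damper hitting a barrier at speed v0."""
    x, v = 0.0, v0
    xs = np.empty(steps)
    for i in range(steps):
        a = -(k * x + c * v) / m       # spring-damper deceleration
        v += a * dt
        x += v * dt
        xs[i] = x
    return xs

# Synthetic "experimental" target generated with known parameters plus noise.
rng = np.random.default_rng(0)
true_k, true_c = 4.0e5, 8.0e3
target = simulate(true_k, true_c) + rng.normal(0, 1e-4, 300)

def error(params):
    k, c = params
    return float(np.sum((simulate(k, c) - target) ** 2))

result = differential_evolution(error, bounds=[(1e5, 1e6), (1e3, 5e4)], seed=1)
print(result.x)    # should land close to (4e5, 8e3)
```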

Keywords: Vehicle, Lumped-Parameter Model, Genetic Algorithm, Optimization

12720 Efficient and Effective Gabor Feature Representation for Face Detection

Authors: Yasuomi D. Sato, Yasutaka Kuriya

Abstract:

We here propose an improved version of elastic graph matching (EGM) as a face detector, called the multi-scale EGM (MS-EGM). In this improvement, a Gabor wavelet-based pyramid reduces the computational complexity of the feature representation often used in the conventional EGM, while preserving a critical amount of information about an image. The MS-EGM gives us higher detection performance than the Viola-Jones object detection algorithm with the AdaBoost Haar-like feature cascade. We also show rapid detection speeds of the MS-EGM, comparable to the Viola-Jones method. We find fruitful benefits in the MS-EGM in terms of topological feature representation for a face.
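As a minimal sketch of the Gabor feature-extraction step only (not the MS-EGM pipeline), the code below builds a small bank of real Gabor kernels from the standard formulation and filters a synthetic image patch; a pyramid would repeat this on successively downsampled images. All parameter values are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel (standard formulation); parameter values are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * x_t / lambd + psi)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((64, 64))                       # stand-in for a face image patch
    bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
    # One response map per orientation.
    responses = [convolve2d(image, k, mode="same") for k in bank]
    print([float(np.abs(r).mean()) for r in responses])
```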

Keywords: Face detection, Gabor wavelet based pyramid, elastic graph matching, topological preservation, redundancy of computational complexity.

12719 Use of Agricultural Waste for the Removal of Nickel Ions from Aqueous Solutions: Equilibrium and Kinetics Studies

Authors: Manjeet Bansal, Diwan Singh, V.K.Garg, Pawan Rose

Abstract:

The potential of economically cheaper cellulose-containing natural materials like rice husk was assessed for nickel adsorption from aqueous solutions. The effects of pH, contact time, sorbent dose, initial metal ion concentration and temperature on the uptake of nickel were studied in a batch process. The removal of nickel was dependent on the physico-chemical characteristics of the adsorbent, the adsorbate concentration and the other studied process parameters. The sorption data were correlated with the Langmuir, Freundlich and Dubinin-Radushkevich (D-R) adsorption models. It was found that the Freundlich and Langmuir isotherms fitted the data well. Maximum nickel removal was observed at pH 6.0. The efficiency of rice husk for nickel removal was 51.8% for dilute solutions at a 20 g L-1 adsorbent dose. FTIR, SEM and EDAX were recorded before and after adsorption to explore the number and position of the functional groups available for nickel binding on the studied adsorbent, and changes in surface morphology and elemental constitution of the adsorbent. A pseudo-second order model explains the nickel kinetics most effectively. Reusability of the adsorbent was examined by desorption, in which HCl eluted 78.93% of the nickel. The results revealed that nickel is considerably adsorbed on rice husk, and this could be an economic method for the removal of nickel from aqueous solutions.
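As a minimal sketch of the isotherm-fitting step, using synthetic equilibrium data rather than the paper's measurements, the code below fits the Langmuir and Freundlich models with SciPy's curve_fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce**(1/n)."""
    return KF * Ce ** (1.0 / n)

# Synthetic equilibrium data (Ce in mg/L, qe in mg/g); illustrative values only.
Ce = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
qe = langmuir(Ce, qmax=8.0, KL=0.05) + np.random.default_rng(1).normal(0, 0.05, Ce.size)

lang_params, _ = curve_fit(langmuir, Ce, qe, p0=[10.0, 0.1])
freu_params, _ = curve_fit(freundlich, Ce, qe, p0=[1.0, 2.0])
print("Langmuir  qmax, KL :", lang_params)
print("Freundlich KF, 1/n :", freu_params[0], 1.0 / freu_params[1])
```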

Keywords: Adsorption, nickel, SEM, EDAX.

12718 Evaluation of Fuel Properties of Six Tropical Hardwood Timber Species for Briquettes

Authors: S. J. Mitchual, K. Frimpong-Mensah, N. A. Darkwa

Abstract:

The fuel potential of six tropical hardwood species, namely Triplochiton scleroxylon, Ceiba pentandra, Aningeria robusta, Terminalia superba, Celtis mildbreadii and Piptadenia africana, was studied. Properties studied included species density, gross calorific value, volatile matter, ash content, organic carbon and elemental composition. Fuel properties were determined using standard laboratory methods. The results indicate that the gross calorific value (GCV) of the species ranged from 20.16 to 22.22 MJ/kg and varied only slightly between species. Additionally, the GCVs of these biomass materials were higher than those of other biomass materials like wheat straw, rice straw, maize straw and sugar cane. The ash and volatile matter contents varied from 0.6075 to 5.0407% and from 75.23% to 83.70%, respectively. The overall rating of the properties of the six biomass materials suggested that Piptadenia africana has the best fuel properties for use as briquettes and Aningeria robusta the worst. This study therefore suggests that a holistic assessment of a biomass material needs to be done before selecting it for fuel purposes.

Keywords: Ash content, Briquette, Calorific value, Elemental composition, Species, Volatile matter.
