Search results for: Vector Space Document Model
8374 An Improved Illumination Normalization based on Anisotropic Smoothing for Face Recognition
Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Seongwon Cho
Abstract:
Robust face recognition under varying illumination is very difficult and must be achieved for successful commercialization. In this paper, we propose an improved illumination normalization method for face recognition. Illumination normalization based on anisotropic smoothing is known to be effective among illumination normalization methods, but it deteriorates the intensity contrast of the original image and blurs edges. The proposed method improves the previous anisotropic smoothing-based normalization so that it increases intensity contrast and enhances edges while diminishing the effect of illumination variations. As a result of these improvements, face images preprocessed by the proposed illumination normalization method yield more distinctive Gabor feature vectors for face recognition. The effectiveness of the proposed method is verified through face recognition experiments based on Gabor feature vector similarity.
Keywords: Illumination normalization, face recognition, anisotropic smoothing, Gabor feature vector.
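As an illustration of the similarity step only, the following is a minimal sketch (not the authors' implementation) of comparing two grayscale face images by the cosine similarity of simple Gabor filter responses; the kernel parameters, number of orientations, and the random stand-in images are assumptions for the example.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5):
    """Real part of a Gabor kernel (parameters are illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_feature_vector(img, n_orientations=8):
    """Concatenate filter-response magnitudes over several orientations."""
    feats = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        resp = convolve2d(img, kern, mode="same", boundary="symm")
        feats.append(np.abs(resp).ravel())
    return np.concatenate(feats)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy usage with random "face" images standing in for normalized inputs.
rng = np.random.default_rng(0)
img1, img2 = rng.random((64, 64)), rng.random((64, 64))
print(cosine_similarity(gabor_feature_vector(img1), gabor_feature_vector(img2)))
```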
8373 Automatic Musical Genre Classification Using Divergence and Average Information Measures
Authors: Hassan Ezzaidi, Jean Rouat
Abstract:
Recently, much research has been conducted to retrieve pertinent parameters and adequate models for automatic music genre classification. In this paper, two measures based on information theory concepts are investigated for mapping the feature space to the decision space. A Gaussian Mixture Model (GMM) is used as a baseline and reference system. Various strategies are proposed for training and testing sessions with matched or mismatched conditions: long training and long testing, and long training and short testing. For all experiments, the file sections used for testing are never used during training. With matched conditions, all examined measures yield the best and similar scores (almost 100%). With mismatched conditions, the proposed measures yield better scores than the GMM baseline system, especially in the short testing case. It is also observed that the average discrimination information measure is most appropriate for music category classification, whereas the divergence measure is more suitable for music subcategory classification.
Keywords: Audio feature, information measures, music genre.
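A minimal sketch of the GMM baseline only (not the paper's information-theoretic measures): one mixture is fitted per genre and a test feature matrix is assigned to the genre with the highest average log-likelihood. The feature matrices, genres, and mixture sizes below are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Placeholder training features per genre: (n_frames, n_features) arrays.
train = {"rock": rng.normal(0, 1, (500, 13)),
         "jazz": rng.normal(2, 1, (500, 13))}

# Fit one GMM per genre as the baseline/reference system.
models = {g: GaussianMixture(n_components=8, covariance_type="diag",
                             random_state=0).fit(X)
          for g, X in train.items()}

def classify(frames):
    """Pick the genre whose GMM gives the highest mean log-likelihood."""
    return max(models, key=lambda g: models[g].score(frames))

test_frames = rng.normal(2, 1, (200, 13))   # resembles the "jazz" placeholder here
print(classify(test_frames))
```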
8372 Application of a Similarity Measure for Graphs to Web-based Document Structures
Authors: Matthias Dehmer, Frank Emmert Streib, Alexander Mehler, Jürgen Kilian, Max Mühlhauser
Abstract:
Due to the tremendous amount of information provided by the World Wide Web (WWW), developing methods for mining the structure of web-based documents is of considerable interest. In this paper we present a similarity measure for graphs representing web-based hypertext structures. Our similarity measure is mainly based on a novel representation of a graph as linear integer strings, whose components represent structural properties of the graph. The similarity of two graphs is then defined via the optimal alignment of the underlying property strings. In this paper we apply the well-known technique of sequence alignment to a novel and challenging problem: measuring the structural similarity of generalized trees. In other words, we first transform our graphs, considered as high-dimensional objects, into linear structures. Then we derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees. Hence, we transform a graph similarity problem into a string similarity problem in order to develop an efficient graph similarity measure. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs representing web-based document structures.
Keywords: Graph similarity, hierarchical and directed graphs, hypertext, generalized trees, web structure mining.
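To illustrate the general idea of turning a graph similarity problem into a string similarity problem, here is a minimal sketch that derives a simple property string (out-degrees in breadth-first order) and aligns two such strings with a basic Needleman-Wunsch scheme. The choice of property string and the scoring values are illustrative assumptions, not the paper's definitions.

```python
import numpy as np
from collections import deque

def property_string(adj, root=0):
    """Out-degree sequence in BFS order as a simple structural property string."""
    seen, order, queue = {root}, [], deque([root])
    while queue:
        v = queue.popleft()
        order.append(len(adj[v]))
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def alignment_score(a, b, gap=-1):
    """Global alignment; matching integers score +1, mismatches -1."""
    n, m = len(a), len(b)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * gap
    D[0, :] = np.arange(m + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 1 if a[i - 1] == b[j - 1] else -1
            D[i, j] = max(D[i - 1, j - 1] + match, D[i - 1, j] + gap, D[i, j - 1] + gap)
    return D[n, m]

g1 = {0: [1, 2], 1: [3], 2: [], 3: []}        # small rooted hypertext-like graph
g2 = {0: [1, 2, 3], 1: [], 2: [], 3: []}
print(alignment_score(property_string(g1), property_string(g2)))
```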
8371 The Relations between Spatial Structure and Land Price
Authors: Jung-Hun Cho, Tae-Heon Moon, Jin-Hak Lee
Abstract:
Land price contains the comprehensive characteristics of urban space, representing the social and economic features of the city. Accordingly, land price can be utilized as an indicator which can identify the changes of spatial structure and socioeconomic variations caused by urban development. This study attempted to explore the changes in land price caused by a new road construction. Methodologically, it adopted Space Syntax, which can interpret urban spatial structure comprehensively, to identify the relationship between the forms of road networks and land price. The result of the regression analysis showed that the ‘integration index’ of Space Syntax is statistically significant and has a strong correlation with land price: if the integration value is high, land price increases proportionally. Subsequently, using the regression equation, the study tried to predict the land price changes of each of the lots surrounding the newly opened roads. The research methods and study results have the advantage of predicting changes in land price in an easy way. In addition, they will help planners and project managers establish relevant policies and smooth urban regeneration projects by enhancing residents’ understanding, providing them with the likely results and advantages in land price before the execution of urban regeneration and development projects.
Keywords: Space syntax, urban regeneration, spatial structure, official land price.
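A minimal sketch of the kind of regression described above, fitting land price against a Space Syntax integration index and predicting prices for new integration values; the figures and units are made up for illustration.

```python
import numpy as np

# Hypothetical observations: Space Syntax integration index vs. official land price.
integration = np.array([0.8, 1.1, 1.4, 1.7, 2.0, 2.3])
land_price  = np.array([120, 150, 190, 230, 260, 310])   # e.g. 1000 KRW/m^2

# Ordinary least squares: price = a * integration + b.
a, b = np.polyfit(integration, land_price, deg=1)
print(f"price ~ {a:.1f} * integration + {b:.1f}")

# Predict the price for lots whose integration rises after a new road opens.
new_integration = np.array([1.5, 2.1])
print(np.round(a * new_integration + b, 1))
```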
8370 Comparative Spatial Analysis of a Re-arranged Hospital Building
Authors: Burak Köken, Hatice D. Arslan, Bilgehan Y. Çakmak
Abstract:
Analyzing the relation networks within hospital buildings, which have a complex structure and distinctive spatial relationships, is quite difficult. Hospital buildings, which require special spatial relationship solutions during design and self-innovation through developing technology, should survive and keep giving service even after disasters such as earthquakes. In this study, a hospital building whose load-bearing system was strengthened because of insufficient earthquake performance, and for which the construction of an additional building was required to meet the increasing need for space, was discussed, and a comparative spatial evaluation of the hospital building was made with regard to its status before and after the change. For this reason, the spatial organizations of the building before and after the change were analyzed by means of the Space Syntax method, and the effects of the change on space organization parameters were examined by applying an analytical procedure. Using Depthmap UCL software, Connectivity, Visual Mean Depth, Beta and Visual Integration analyses were conducted. Based on the data obtained after the analyses, it was seen that the relationships between spaces of the building increased after the change and the building has become more explicit and understandable for the occupants. Furthermore, it was determined from the findings that an increase in depth causes difficulty in perceiving the spaces, and that changes made with this problem in mind generally ease spatial use.
Keywords: Architecture, hospital building, space syntax, strengthening.
8369 Probability-Based Damage Detection of Structures Using Model Updating with Enhanced Ideal Gas Molecular Movement Algorithm
Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee
Abstract:
Model updating methods have received increasing attention in the damage detection of structures based on measured modal parameters. Therefore, a probability-based damage detection (PBDD) procedure based on a model updating procedure is presented in this paper, in which a one-stage model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses a finite element updating method with a Monte Carlo simulation that considers the uncertainty caused by measurement noise. Enhanced ideal gas molecular movement (EIGMM) is used as the main algorithm for model updating. Ideal gas molecular movement (IGMM) is a multi-agent algorithm based on the movement of ideal gas molecules, which disperse rapidly in different directions and cover all the available space owing to their high speed and to collisions between them and with the surrounding barriers. In the IGMM algorithm, to reach the optimal solutions, the initial population of gas molecules is randomly generated and the governing equations related to the velocity of gas molecules and the collisions between them are utilized. In this paper, an enhanced version of IGMM, which removes unchanged variables after a specified number of iterations, is developed. The proposed method is implemented on two numerical examples in the field of structural damage detection. The results show that the proposed method performs well and is competitive in the PBDD of structures.
Keywords: Enhanced ideal gas molecular movement, ideal gas molecular movement, model updating method, probability-based damage detection, uncertainty quantification.
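The following is a loose, minimal sketch of the enhancement described above (freezing variables that have stopped changing after a set number of iterations) wrapped around a generic population-based search. The IGMM velocity and collision rules are replaced here by simple random perturbations, and the objective is a stand-in, so this is not the authors' algorithm.

```python
import numpy as np

def sphere(x):                     # stand-in objective (e.g. a model-updating residual)
    return float(np.sum(x**2))

def eigmm_like(obj, dim=6, pop=20, iters=200, check_every=25, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))           # initial population of "molecules"
    active = np.ones(dim, dtype=bool)            # variables still being updated
    best = min(X, key=obj).copy()
    prev_best = best.copy()
    for t in range(1, iters + 1):
        step = rng.normal(0, 1, X.shape) * 0.5   # simplified "molecular" moves
        X[:, active] += step[:, active]
        for i in range(pop):                     # greedy acceptance of better molecules
            if obj(X[i]) < obj(best):
                best = X[i].copy()
        if t % check_every == 0:                 # enhancement: freeze stalled variables
            active &= np.abs(best - prev_best) > tol
            prev_best = best.copy()
            if not active.any():
                break
    return best, obj(best)

print(eigmm_like(sphere))
```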
8368 A Mapping Approach of Code Generation for Arinc653-Based Avionics Software
Authors: Lu Zou, Dianfu MA, Ying Wang, Xianqi Zhao
Abstract:
Avionics software architecture has transitioned from a federated architecture to integrated modular avionics (IMA). ARINC 653 (Avionics Application Standard Software Interface) is a software specification for space and time partitioning in safety-critical avionics real-time operating systems. Methods to transform abstract avionics application logic into an executable model have been proposed, but with little consideration of the code-generation input and output models specific to the ARINC 653 platform and of the inner-task synchronous dynamic interaction order. In this paper, we propose an AADL-based model-driven design methodology for automatically generating a Cµ executable model on the ARINC 653 platform from the ARINC 653 architecture, defined as AADL653, in order to facilitate the development of avionics software built on an ARINC 653 OS. The paper presents the mapping rules between AADL653 elements and the elements of the Cµ language, defines the code-generation rules, and designs an automatic Cµ code generator. We then use a case study to illustrate our approach. Finally, we discuss related work and future research directions.
Keywords: IMA, ARINC653, AADL653, code generation.
8367 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements
Authors: Henok Hailemariam, Frank Wuttke
Abstract:
Collapsible soils are weak soils that appear to be stable in their natural state, normally dry condition, but rapidly deform under saturation (wetting), thus generating large and unexpected settlements which often yield disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain accurate estimation of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large scale sub-surface soil exploration purposes, the spatial sub-surface soil dielectric data over wide areas and high depths of weak (collapsible) soil deposits can be obtained using non-destructive high frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small or large scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions and a good match between the model prediction and experimental results is obtained.
Keywords: Collapsible soil, relative subsidence, dielectric permittivity, moisture content.
8366 Trust and Reputation Mechanism with Path Optimization in Multipath Routing
Authors: Ramya Dorai, M. Rajaram
Abstract:
A Mobile Ad hoc Network (MANET) is a collection of mobile nodes that communicate with each other over wireless links without a pre-existing communication infrastructure. Routing is an important issue which impacts network performance. As MANETs lack central administration and prior organization, their security concerns differ from those of conventional networks, and wireless links make them susceptible to attacks. This study proposes a new trust mechanism to mitigate wormhole attacks in MANETs. Different optimization techniques find the available optimal path from source to destination. This study extends trust and reputation to an improved link-quality and channel-utilization based Ad hoc On-demand Multipath Distance Vector (AOMDV) protocol. Differential Evolution (DE) is used for optimization.
Keywords: Mobile Adhoc Network (MANET), Adhoc Ondemand Multi-Path Distance Vector (AOMDV), Trust and Reputation, Differential Evolution (DE), Link Quality, Channel Utilization.
8365 Content-Based Image Retrieval Using HSV Color Space Features
Authors: Hamed Qazanfari, Hamid Hassanpour, Kazem Qazanfari
Abstract:
In this paper, a method is provided for content-based image retrieval. A content-based image retrieval system searches an image database with a query image, based on its visual content, in order to retrieve similar images. In this paper, with the aim of simulating the sensitivity of the human visual system to image edges and color features, the concept of the color difference histogram (CDH) is used. The CDH encodes the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The final features describe the content of images most efficiently. The proposed method has been evaluated on three standard databases: Corel 5k, Corel 10k and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods.
Keywords: Content-based image retrieval, color difference histogram, efficient features selection, entropy, correlation.
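A minimal sketch of a simplified color-difference histogram in HSV: differences between neighboring pixels are accumulated into quantized hue/saturation/value bins. The binning, the difference metric, and the random test image are illustrative assumptions, not the exact CDH definition from the paper.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def simple_cdh(rgb_img, bins=(8, 4, 4)):
    """Accumulate HSV differences between horizontal and vertical neighbors
    into a quantized HSV histogram (a simplified color-difference histogram)."""
    hsv = rgb_to_hsv(rgb_img.astype(float) / 255.0)          # H, S, V in [0, 1]
    q = np.floor(hsv * (np.array(bins) - 1e-9)).astype(int)  # per-pixel bin index
    hist = np.zeros(bins)
    for axis in (0, 1):                                      # down and right neighbors
        d = np.linalg.norm(np.diff(hsv, axis=axis), axis=-1)
        idx = q[:-1, :] if axis == 0 else q[:, :-1]
        np.add.at(hist, (idx[..., 0], idx[..., 1], idx[..., 2]), d)
    return hist.ravel() / (hist.sum() + 1e-12)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3))
print(simple_cdh(img).shape)        # (8*4*4,) feature vector
```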
8364 A Family of Distributions on Learnable Problems without Uniform Convergence
Authors: César Garza
Abstract:
In supervised binary classification and regression problems, it is well-known that learnability is equivalent to uniform convergence of the hypothesis class, and if a problem is learnable, it is learnable by empirical risk minimization. For the general learning setting of unsupervised learning tasks, there are non-trivial learning problems where uniform convergence does not hold. We present here the task of learning centers of mass with an extra feature that “activates” some of the coordinates over the unit ball in a Hilbert space. We show that the learning problem is learnable under a stable RLM rule. We introduce a family of distributions over the domain space with some mild restrictions for which the sample complexity of uniform convergence for these problems must grow logarithmically with the dimension of the Hilbert space. If we take this dimension to infinity, we obtain a learnable problem for which the uniform convergence property fails for a vast family of distributions.
Keywords: Statistical learning theory, learnability, uniform convergence, stability, regularized loss minimization.
8363 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets Using an Open-Source Energy System Optimization Model
Authors: A. Balbo, G. Colucci, M. Nicoli, L. Savoldi
Abstract:
Hydrogen is expected to become an undisputed player in the ecological transition throughout the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called “hard-to-abate” sectors, including the industrial production of iron and steel, glass, refineries and heavy-duty transport. In this regard, Italy, within the framework of the decarbonization plans of the European Union, has been considering a wider use of hydrogen as an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options concerning the pathway to be followed in the development of the future Italian energy system in order to meet the decarbonization targets established by the Paris Agreement and the European Green Deal, and to provide a techno-economic analysis of the asset alternatives required in that perspective. To accomplish this objective, the energy system optimization model TEMOA-Italy is used, based on the open-source platform TEMOA and developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy includes two different scenarios to be compared with a business-as-usual one, which considers the application of current policies in a time horizon up to 2050. The studied scenarios are based on the up-to-date hydrogen-related targets and planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040 and 2050, without any other specific target. The second one, inspired by the national objectives on the development of the sector, promotes the deployment of the hydrogen value chain. These scenarios provide feedback about the applications hydrogen could have in the Italian energy system, including transport, industry and synfuels production. Furthermore, the decarbonization scenario in which hydrogen production is not imposed makes use of this energy vector as well, showing the necessity of its exploitation in order to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for the achievement of the Italian objectives is clarified, revealing possible improvements at various steps of the decarbonization pathway, for which Carbon Capture and Utilization technologies appear to be a fundamental element. In line with the European Commission open science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.
Keywords: Decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA.
8362 A Coherent Relationship between Economic Growth and Unemployment: An Empirical Evidence from Pakistan
Authors: T. Hussain, M. W. Siddiqi, A. Iqbal
Abstract:
The study aims to test the causal relationship between growth and unemployment, using time series data for Pakistan from 1972 to 2006. Growth is considered to be a pathway to decreasing the level of unemployment. Unemployment is a social and political issue; it is a phenomenon in which human resources are wasted, leading to a deceleration in growth. Johansen cointegration shows that there is a long-run relationship between growth and unemployment. For short-run dynamics and causality, the study utilizes a Vector Error Correction Model (VECM). The results of the VECM indicate that there is a short- and long-run causal relation between growth and unemployment, with capital, labor and human capital as explanatory variables.
Keywords: Economic Growth, Unemployment, Cointegration and Causality.
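A minimal sketch of this kind of analysis using statsmodels' Johansen cointegration test and VECM on synthetic stand-ins for the growth and unemployment series; the data, lag order and deterministic terms here are illustrative assumptions, not the study's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

# Synthetic stand-ins for the 1972-2006 growth and unemployment series.
rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(0, 1, 35))              # shared stochastic trend
data = pd.DataFrame({"growth": trend + rng.normal(0, 0.5, 35),
                     "unemployment": -0.8 * trend + rng.normal(0, 0.5, 35)})

# Johansen test for the number of cointegrating (long-run) relations.
johansen = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", np.round(johansen.lr1, 2))

# VECM for the short-run dynamics, assuming one cointegrating relation.
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(res.summary())
```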
8361 Improving Fake News Detection Using K-means and Support Vector Machine Approaches
Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy
Abstract:
Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, fake likes, views and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Most information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to obtain better detection of false information with less computation time and complexity, the dimensions need to be reduced. One of the best techniques for reducing data size is feature selection, whose aim is to choose a feature subset from the original set that improves classification performance. In this paper, a feature selection method is proposed with the integration of K-means clustering and Support Vector Machine (SVM) approaches, which works in four steps. First, the similarities between all features are calculated. Then, features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several specific benchmark datasets, and the outcome showed a better classification of false information for our work. The detection performance was improved in two aspects: on the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of dataset dimensions.
Keywords: Fake news detection, feature selection, support vector machine, K-means clustering, machine learning, social media.
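A minimal sketch of the four-step idea as described: compute feature similarities, cluster the features with K-means, keep one representative feature per cluster, then classify with an SVM. The data are synthetic, and the representative-selection rule (feature closest to its cluster centroid) is an assumption for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a high-dimensional fake-news feature matrix.
X, y = make_classification(n_samples=400, n_features=60, n_informative=12,
                           random_state=0)

# Steps 1-2: cluster the *features* (columns) by their similarity.
k = 15
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X.T)

# Step 3: keep, from each cluster, the feature closest to its centroid.
selected = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X.T[members] - km.cluster_centers_[c], axis=1)
    selected.append(members[np.argmin(dists)])

# Step 4: classify on the reduced feature subset with an SVM.
print("full set :", cross_val_score(SVC(), X, y, cv=5).mean().round(3))
print("reduced  :", cross_val_score(SVC(), X[:, selected], y, cv=5).mean().round(3))
```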
8360 Specific Emitter Identification Based on Refined Composite Multiscale Dispersion Entropy
Authors: Shaoying Guo, Yanyun Xu, Meng Zhang, Weiqing Huang
Abstract:
The wireless communication network is developing rapidly, so wireless security becomes more and more important. Specific emitter identification (SEI), a technique to identify unique transmitters, is a vital part of wireless communication security. In this paper, a SEI method based on multiscale dispersion entropy (MDE) and refined composite multiscale dispersion entropy (RCMDE) is proposed. The MDE and RCMDE algorithms are used to extract features for the identification of five wireless devices, and a cross-validation support vector machine (CV-SVM) is used as the classifier. The experimental results show that the total identification accuracy is 99.3%, even at a low signal-to-noise ratio (SNR) of 5 dB, which proves that MDE and RCMDE can describe the communication signal series well. In addition, compared with other methods, the proposed method is effective and provides better accuracy and stability for SEI.
Keywords: Cross-validation support vector machine, refined composite multiscale dispersion entropy, specific emitter identification, transient signal, wireless communication device.
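For orientation only, a minimal sketch of single-scale dispersion entropy (normal-CDF mapping to c classes, embedding of dimension m, Shannon entropy of the dispersion-pattern distribution). The multiscale and refined-composite extensions are left out, and the parameter values and test signals are illustrative assumptions.

```python
import numpy as np
from collections import Counter
from scipy.stats import norm

def dispersion_entropy(x, m=2, c=6, delay=1):
    """Single-scale dispersion entropy of a 1-D signal (simplified sketch)."""
    x = np.asarray(x, dtype=float)
    # Map samples to classes 1..c through the normal CDF of the signal.
    y = norm.cdf(x, loc=x.mean(), scale=x.std())
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    # Count dispersion patterns of length m.
    n = len(z) - (m - 1) * delay
    patterns = Counter(tuple(z[i:i + m * delay:delay]) for i in range(n))
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-(p * np.log(p)).sum() / np.log(c ** m))   # normalized entropy

rng = np.random.default_rng(0)
print(dispersion_entropy(rng.normal(size=2000)))                       # near 1 for noise
print(dispersion_entropy(np.sin(np.linspace(0, 40 * np.pi, 2000))))    # lower for a tone
```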
8359 Dynamic Fault Diagnosis for Semi-Batch Reactor under Closed-Loop Control via Independent Radial Basis Function Neural Network
Authors: Abdelkarim M. Ertiame, D. W. Yu, D. L. Yu, J. B. Gomm
Abstract:
In this paper, a robust fault detection and isolation (FDI) scheme is developed to monitor a multivariable nonlinear chemical process, the Chylla-Haase polymerization reactor, when it is under cascade PI control. The scheme employs a radial basis function neural network (RBFNN) in independent mode to model the process dynamics, and uses the weighted sum-squared prediction error as the residual. The Recursive Orthogonal Least Squares (ROLS) algorithm is employed to train the model, to overcome the training difficulty of the independent mode of the network. Then, another RBFNN is used as a fault classifier to isolate faults from the different features involved in the residual vector. Several actuator and sensor faults are simulated in a nonlinear simulation of the reactor in Simulink, and the scheme is used to detect and isolate the faults on-line. The simulation results show the effectiveness of the scheme even when the process is subjected to disturbances and uncertainties, including significant changes in the monomer feed rate, fouling factor, impurity factor, ambient temperature, and measurement noise. The simulation results are presented to illustrate the effectiveness and robustness of the proposed method.
Keywords: Robust fault detection, cascade control, independent RBF model, RBF neural networks, Chylla-Haase reactor, FDI under closed-loop control.
8358 Analytical and Statistical Study of the Parameters of Expansive Soil
Authors: A. Medjnoun, R. Bahar
Abstract:
The disorders caused by the shrink-swell phenomenon are prevalent in arid and semi-arid regions in the presence of swelling clay. This soil has the characteristic of changing state under the effect of water solicitation (wetting and drying). A set of geotechnical parameters is necessary for the characterization of this soil type, such as state parameters, physical and chemical parameters, and mechanical parameters. Some of these tests are very long and some are very expensive, hence the use of prediction methods. The complexity of this phenomenon and the difficulty of its characterization have prompted researchers to use several identification parameters in the prediction of swelling potential. This document is an analytical and statistical study of the geotechnical parameters affecting the swelling potential of clays. The work is performed on a database obtained from investigations of swelling Algerian soils. The obtained observations have helped us to understand the soil swelling structure and its behavior.
Keywords: Analysis, estimated model, parameter identification, Swelling of clay.
8357 Meta-requirements that Model Change
Authors: Gouri Prakash
Abstract:
One of the common problems encountered in software engineering is addressing and responding to the changing nature of requirements. While several approaches have been devised to address this issue, ranging from instilling resistance to changing requirements in order to mitigate the impact on project schedules, to developing an agile mindset towards requirements, the approach discussed in this paper is one of conceptualizing the delta in requirements and modeling it, in order to plan a response to it. To provide some context, change is first formally identified and categorized as either formal change or informal change. While agile methodology facilitates informal change, the approach discussed in this paper seeks to develop the idea of facilitating formal change. Collecting and documenting meta-requirements that represent the phenomenon of change would be a proactive measure towards building a realistic cognition of the requirements entity that can further be harnessed in the software engineering process.
Keywords: Change management, agile methodology, meta-requirements.
8356 Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis
Authors: Amir Hajian, Sepehr Damavandinejadmonfared
Abstract:
In this paper, the issue of dimensionality reduction in finger vein recognition systems is investigated using kernel Principal Component Analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as there are several kernel functions which can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated: the dimension of the feature vector, which is of importance especially in real-world applications of such algorithms. A fixed dimension of the feature vector has to be set to reduce the dimension of the input and output data and extract the features from them; then a classifier is applied to classify the data and make the final decision. We analyze KPCA (polynomial, Gaussian, and Laplacian kernels) in detail and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.
Keywords: Biometrics, finger vein recognition, Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA).
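A minimal sketch of sweeping the KPCA feature dimension and kernel and scoring each setting with a simple classifier; the dataset, kernel list and classifier below are illustrative stand-ins for the finger vein images and matcher used in the paper (the Laplacian kernel is omitted in this sketch).

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in image dataset (the paper uses finger vein images).
X, y = load_digits(return_X_y=True)

for kernel in ("poly", "rbf"):
    for dim in (10, 20, 40):
        model = make_pipeline(StandardScaler(),
                              KernelPCA(n_components=dim, kernel=kernel),
                              KNeighborsClassifier(n_neighbors=3))
        acc = cross_val_score(model, X, y, cv=3).mean()
        print(f"kernel={kernel:4s} dim={dim:3d} accuracy={acc:.3f}")
```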
8355 New Multipath Node-Disjoint Routing Based on AODV Protocol
Authors: V. Zangeneh, S. Mohammadi
Abstract:
Today, node-disjoint routing has become an essential technique for the communication of packets among various nodes in networks. Meanwhile, AODV (Ad hoc On-demand Distance Vector) creates a single-path route between a pair of source and destination nodes. Some research has been done so far to build multipath node-disjoint routing based on the AODV protocol; however, the overhead and end-to-end delay of these approaches are relatively high, and the details of their code are not available. This paper proposes a new approach to multipath node-disjoint routing based on the AODV protocol. The algorithm of the analytical model is then presented. The extensive results of this algorithm will be presented in the next paper.
Keywords: AODV; MANET; Multipath Routing; Node-disjoint; transmission delay.
8354 Distinction between Manifestations of Diabetic Retinopathy and Dust Artifacts Using Three-Dimensional HSV Color Space
Authors: Naoto Suzuki
Abstract:
Many ophthalmologists find it difficult to distinguish between small retinal hemorrhages and dust artifacts when using fundus photography for the diagnosis of diabetic retinopathy. Six patients with diabetic retinopathy underwent fundus photography, which revealed dust artifacts in the photographs of some patients. We constructed an experimental device similar to the optical system of a fundus camera and colored the fundi of artificial eyes with khaki, sunset, rose and sunflower colors. Using the experimental device, we photographed dust artifacts with each artificial eye. We used the Scilab 5.4.0 and SIVP 0.5.3 software to convert the red, green, and blue (RGB) color space to the hue, saturation, and value (HSV) color space. We calculated the differences between the areas of manifestations and peri-manifestations, and between the areas of dust artifacts and peri-artifacts, using average HSV values. The V values in HSV for the manifestations were as follows: hemorrhages, 0.06 ± 0.03; hard exudates, −0.12 ± 0.06; and photocoagulation marks, 0.07 ± 0.02. For dust artifacts, visualized in the human and artificial eyes, the V values were as follows: human eye, 0.19 ± 0.03; khaki, 0.41 ± 0.02; sunset, 0.43 ± 0.04; rose, 0.47 ± 0.11; and sunflower, 0.59 ± 0.07. For the human and artificial eyes, we calculated two sensitivity values of dust artifacts compared to manifestation areas. V values of the HSV color space enabled the differentiation of small hemorrhages, hard exudates, and photocoagulation marks from dust artifacts.
Keywords: Diabetic retinopathy, HSV color space, small hemorrhages, hard exudates, photocoagulation marks.
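To make the reported V-value comparisons concrete, a minimal sketch that converts an RGB fundus patch to HSV and subtracts the mean V of a surrounding ring from the mean V of a lesion region; the image, masks and colors here are synthetic placeholders, not the study's data or exact computation.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def v_difference(rgb_img, lesion_mask, surround_mask):
    """Mean V of the lesion minus mean V of its surrounding region."""
    v = rgb_to_hsv(rgb_img.astype(float) / 255.0)[..., 2]
    return float(v[lesion_mask].mean() - v[surround_mask].mean())

# Synthetic fundus-like patch: dark spot (hemorrhage-like) on a reddish background.
img = np.full((32, 32, 3), (180, 60, 40), dtype=np.uint8)
yy, xx = np.mgrid[:32, :32]
lesion = (yy - 16) ** 2 + (xx - 16) ** 2 < 5 ** 2
surround = ((yy - 16) ** 2 + (xx - 16) ** 2 < 9 ** 2) & ~lesion
img[lesion] = (90, 20, 15)

print(round(v_difference(img, lesion, surround), 3))   # negative: lesion is darker
```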
8353 Information Retrieval: Improving Question Answering Systems by Query Reformulation and Answer Validation
Authors: Mohammad Reza Kangavari, Samira Ghandchi, Manak Golpour
Abstract:
Question answering (QA) aims at retrieving precise information from a large collection of documents. Most question answering systems are composed of three main modules: question processing, document processing and answer processing. The question processing module plays an important role in QA systems by reformulating questions. Moreover, the answer processing module is an emerging topic in QA systems, where these systems are often required to rank and validate candidate answers. These techniques, aiming at finding short and precise answers, are often based on semantic relations and co-occurring keywords. This paper discusses a new model for question answering which improves the two main modules, question processing and answer processing, both of which affect the evaluation of the system's operation. Two important components form the basis of question processing. The first is question classification, which specifies the types of question and answer. The second is reformulation, which converts the user's question into a question understandable by the QA system in a specific domain. The objective of the answer validation task is thus to judge the correctness of an answer returned by a QA system, according to the text snippet given to support it. For validating answers we apply candidate answer filtering and candidate answer ranking, followed by a final validation stage based on user voting. The paper also describes the new architecture of the question and answer processing modules, with modeling, implementation and evaluation of the system. The system differs from most question answering systems in its answer validation model, which makes it more suitable for finding exact answers. Evaluation of the model on a total of 50 asked questions shows a 92% improvement in the decisions of the system.
Keywords: Answer processing, answer validation, classification, question answering, query reformulation.
8352 An Intelligent WSN-Based Parking Guidance System
Authors: Sheng-Shih Wang, Wei-Ting Wang
Abstract:
This paper designs an intelligent guidance system, based on wireless sensor networks, for efficient parking in parking lots. The proposed system consists of a parking space allocation subsystem, a parking space monitoring subsystem, a driving guidance subsystem, and a vehicle detection subsystem. In the system, we propose a novel and effective virtual coordinate system for the sensing and displaying devices to determine the proper vacant parking space and provide precise guidance to the driver. This study constructs a ZigBee-based wireless sensor network on the Arduino platform and implements a prototype of the proposed system using Arduino-based components. Experimental results confirm that the proposed prototype not only works well, but also provides drivers with correct parking information.
Keywords: Arduino, Parking guidance, Wireless sensor network, ZigBee.
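A minimal sketch of the allocation idea on a virtual coordinate grid: given occupancy reported by the sensor nodes, pick the vacant space closest (in grid distance) to the entrance. The grid layout, entrance position and Manhattan-distance metric are assumptions for illustration, not the paper's scheme.

```python
# Virtual coordinates (row, col) of parking spaces and their occupancy status,
# as might be reported by ZigBee sensor nodes; the layout is hypothetical.
occupancy = {
    (0, 0): True, (0, 1): True, (0, 2): False,
    (1, 0): False, (1, 1): True, (1, 2): True,
}
ENTRANCE = (0, 0)   # virtual coordinate of the lot entrance

def allocate_space(occupancy, entrance):
    """Return the vacant space with the smallest Manhattan distance to the entrance."""
    vacant = [pos for pos, taken in occupancy.items() if not taken]
    if not vacant:
        return None
    return min(vacant, key=lambda p: abs(p[0] - entrance[0]) + abs(p[1] - entrance[1]))

print(allocate_space(occupancy, ENTRANCE))   # -> (1, 0)
```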
8351 A Generalised Relational Data Model
Authors: Georgia Garani
Abstract:
A generalised relational data model is formalised for the representation of data with nested structure of arbitrary depth. A recursive algebra for the proposed model is presented, and all the operations are formally defined. The proposed model is proved to be a superset of the conventional relational model (CRM). The functionality and validity of the model are shown by a prototype implementation undertaken in the functional programming language Miranda.
Keywords: Nested relations, recursive algebra, recursive nested operations, relational data model.
8350 A Comparison of SVM-based Criteria in Evolutionary Method for Gene Selection and Classification of Microarray Data
Authors: Rameswar Debnath, Haruhisa Takahashi
Abstract:
An evolutionary method whose selection and recombination operations are based on generalization error-bounds of the support vector machine (SVM) can select a subset of potentially informative genes for an SVM classifier very efficiently [7]. In this paper, we use the derivative of the error-bound (first-order criterion) to select and recombine gene features in the evolutionary process, and compare the performance of the derivative of the error-bound with the error-bound itself (zero-order criterion). We also investigate several error-bounds and their derivatives to compare their performance and find the best criteria for gene selection and classification. We use 7 cancer-related human gene expression datasets to evaluate the performance of the zero-order and first-order criteria of error-bounds. Though both criteria follow the same strategy in theory, experimental results demonstrate the best criterion for microarray gene expression data.
Keywords: Support vector machine, generalization error-bound, feature selection, evolutionary algorithm, microarray data.
8349 Performance Comparison and Analysis of Table-Driven and On-Demand Routing Protocols for Mobile Ad-hoc Networks
Authors: Narendra Singh Yadav, R.P.Yadav
Abstract:
A mobile ad hoc network is a collection of mobile nodes communicating through wireless channels without any existing network infrastructure or centralized administration. Because of the limited transmission range of wireless network interfaces, multiple "hops" may be needed to exchange data across the network. In order to facilitate communication within the network, a routing protocol is used to discover routes between nodes. The primary goal of such an ad hoc network routing protocol is correct and efficient route establishment between a pair of nodes so that messages may be delivered in a timely manner. Route construction should be done with a minimum of overhead and bandwidth consumption. This paper examines two routing protocols for mobile ad hoc networks: the Destination-Sequenced Distance Vector (DSDV), a table-driven protocol, and Ad hoc On-Demand Distance Vector routing (AODV), an on-demand protocol. It evaluates both protocols based on packet delivery fraction, normalized routing load, average delay and throughput while varying the number of nodes, speed and pause time.
Keywords: AODV, DSDV, MANET, relative performance.
8348 Machine Learning Approach for Identifying Dementia from MRI Images
Authors: S. K. Aruna, S. Chitra
Abstract:
This research paper presents a framework for classifying Magnetic Resonance Imaging (MRI) images for dementia. Dementia, an age-related cognitive decline, is indicated by degeneration of cortical and sub-cortical structures. Characterizing these morphological changes helps in understanding disease development and contributes to early prediction and prevention of the disease. Modelling that captures the brain’s structural variability, and that is valid for disease classification and interpretation, is very challenging. Features are extracted using Gabor filters at 0, 30, 60 and 90 degree orientations and the Gray Level Co-occurrence Matrix (GLCM). It is proposed to normalize and fuse the features. Independent Component Analysis (ICA) selects the features, and a Support Vector Machine (SVM) classifier with different kernels is evaluated for its efficiency in classifying dementia. This study evaluates the presented framework using MRI images from the OASIS dataset for identifying dementia. Results show that the proposed feature fusion classifier achieves higher classification accuracy.
Keywords: Magnetic resonance imaging, dementia, Gabor filter, gray level co-occurrence matrix, support vector machine.
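A minimal sketch of the normalize, fuse, reduce, and classify chain using placeholder feature blocks in place of the Gabor and GLCM descriptors extracted from MRI images; FastICA is used here as a stand-in for the ICA-based selection step, and the labels and data are synthetic.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)                                   # 0 = control, 1 = dementia
gabor_feats = rng.normal(labels[:, None] * 0.8, 1.0, (n, 40))    # placeholder Gabor block
glcm_feats  = rng.normal(labels[:, None] * 0.5, 1.0, (n, 10))    # placeholder GLCM block

# Normalize and fuse the two feature blocks, then reduce with ICA and classify.
fused = np.hstack([StandardScaler().fit_transform(gabor_feats),
                   StandardScaler().fit_transform(glcm_feats)])

for kernel in ("linear", "rbf"):
    clf = make_pipeline(FastICA(n_components=15, random_state=0, max_iter=1000),
                        SVC(kernel=kernel))
    print(kernel, cross_val_score(clf, fused, labels, cv=5).mean().round(3))
```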
8347 Contactless and Multiple Space Debris Removal by Micro to Nano Satellites
Authors: Junichiro Kawaguchi
Abstract:
Space debris problems have emerged and now threaten the use of low Earth orbit owing to the large number of spacecraft there. Removal robots should be sophisticated enough to approach debris automatically, articulating their attitude and translational motion with respect to the debris. This paper presents the idea of using a torpedo-like, unsophisticated and disposable third body, in addition to the first body (the servicing robot) and the second body (the target debris). The third body is launched from the first body at a distance greater than the size of the second body. This paper presents the method and the system by which the third body is launched from the first body. The third body carries both a net and an inflatable or extendible drag deceleration device, and is built small and light. This method enables even micro to nano satellites to perform contactless and multiple debris removal in a single flight.
Keywords: Ballute, Debris Removal, Echo satellite, Gossamer, Gun-Net, Inflatable Space Structure, Small Satellite, Un-cooperated Target.
8346 Computational Assistance of the Research, Using Dynamic Vector Logistics of Processes for Critical Infrastructure Subjects Continuity
Authors: J. Urbánek Jiří, Krahulec Josef, Johanidesová Jitka, F. Urbánek Jiří
Abstract:
This paper deals with the use of the prevailing MS Office environment (SmartArt...) for mathematical models built with the DYVELOP (Dynamic Vector Logistics of Processes) method, which serves for the investigation and modelling of crisis situations within organizations of critical infrastructure. In the first part of the paper, the entities, operators and actors of the DYVELOP method are introduced. It uses just three operators of Boolean algebra and four types of entities: the Environments, the Process Systems, the Cases, and the Controlling. The Process Systems (PrS) have five “brothers”: Management PrS, Transformation PrS, Logistic PrS, Event PrS and Operation PrS. The Cases have three “sisters”: Process Cell Case, Use Case, and Activity Case. They all need special Ctrl actors for the controlling of their functions, except ENV, which can do without Ctrl. The model’s maps are named Blazons, and they are able to express mathematically and graphically the relationships among entities, actors and processes. In the second part of the paper, the rich blazons of the DYVELOP method are used for discovering and modelling cycling cases and their phases. The blazons need a live PowerPoint presentation for better comprehension of this paper’s mission. The crisis management of an energetic critical infrastructure organization is obliged to use these cycles for successful coping with crisis situations. Cycling these cases several times is a necessary condition for encompassing both emergency events and the mitigation of the organization’s damages. An uninterrupted and continuous cycling process brings fruitfulness to crisis management and is a good indicator and controlling actor of organizational continuity and its advanced possibilities for sustainable development. Reliable rules are derived for the safe and reliable continuity of an energetic critical infrastructure organization in a crisis situation.
Keywords: Blazons, computational assistance, DYVELOP method, critical infrastructure.
8345 A New Method for Estimating the Mass Recession Rate for Ablator Systems
Authors: Bianca A. Szasz, Keiichi Okuyama
Abstract:
As the human race continues to explore space by creating new space transportation means and sending them to other planets, enhancing the study of atmospheric reentry is crucial. In this context, an analysis of the mass recession rate of ablative materials for the thermal shields of reentry spacecraft is important to carry out. The paper describes a new estimation method for calculating the mass recession of an ablator system made of carbon fiber reinforced plastic materials. This method is based on the Arrhenius equation for low temperatures and, for high temperatures, on a theory applied to the recession phenomenon of carbon fiber reinforced plastic materials, a theory which takes into account the presence of the resin inside the materials. The space mission of the USERS spacecraft is considered as a case study.
Keywords: Ablator system, mass recession, spacecraft, atmospheric reentry.
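For the low-temperature branch only, a minimal sketch of an Arrhenius-type recession rate, mdot = A * exp(-Ea / (R * T)); the pre-exponential factor and activation energy below are hypothetical placeholders rather than CFRP material data, and the high-temperature resin-aware theory from the paper is not reproduced.

```python
import math

R = 8.314            # universal gas constant, J/(mol*K)

def arrhenius_recession_rate(T, A=1.0e3, Ea=1.5e5):
    """Arrhenius-type mass recession rate in kg/(m^2*s).

    A (pre-exponential factor) and Ea (activation energy, J/mol) are
    hypothetical values for illustration, not material data for CFRP.
    """
    return A * math.exp(-Ea / (R * T))

for T in (600.0, 900.0, 1200.0):   # surface temperatures in kelvin
    print(f"T = {T:6.0f} K  ->  mdot = {arrhenius_recession_rate(T):.3e}")
```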