Search results for: semantic representation.
566 New Multisensor Data Fusion Method Based on Probabilistic Grids Representation
Authors: Zhichao Zhao, Yi Liu, Shunping Xiao
Abstract:
A new data fusion method called the joint probability density matrix (JPDM) is proposed, which can associate and fuse measurements from spatially distributed heterogeneous sensors to identify the real target in a surveillance region. Using the probabilistic grids representation, we numerically combine the uncertainty regions of all the measurements in a general framework. The NP-hard multisensor data fusion problem is thereby converted to a peak-picking problem in the grid map. Unlike most existing data fusion methods, the JPDM method does not need association processing and does not lead to combinatorial explosion. Its convergence to the CRLB with a diminishing grid size has been proved. Simulation results are presented to illustrate the effectiveness of the proposed technique.
Keywords: Cramer-Rao lower bound (CRLB), data fusion, probabilistic grids, joint probability density matrix, localization, sensor network.
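For readers who want a concrete picture of the grid-based fusion idea, the sketch below builds a probabilistic grid over a surveillance region, multiplies per-sensor Gaussian likelihoods into a joint probability density matrix, and picks the peak. The grid size, the Gaussian measurement model and the numerical values are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of grid-based measurement fusion with peak picking.
# Assumes Gaussian position measurements; not the authors' JPDM implementation.
import numpy as np

# Discretize the surveillance region into a probabilistic grid.
xs = np.linspace(0.0, 100.0, 201)          # grid cell centres, x (m)
ys = np.linspace(0.0, 100.0, 201)          # grid cell centres, y (m)
X, Y = np.meshgrid(xs, ys, indexing="ij")

def measurement_likelihood(z, cov):
    """Gaussian likelihood of each grid cell given one sensor measurement z."""
    d = np.stack([X - z[0], Y - z[1]], axis=-1)           # offsets to cell centres
    inv = np.linalg.inv(cov)
    maha = np.einsum("...i,ij,...j->...", d, inv, d)      # squared Mahalanobis distance
    return np.exp(-0.5 * maha)                            # unnormalized likelihood

# Hypothetical measurements of the same target from three distributed sensors.
measurements = [
    (np.array([41.0, 58.5]), np.diag([9.0, 4.0])),
    (np.array([39.2, 60.3]), np.diag([4.0, 16.0])),
    (np.array([40.6, 59.1]), np.diag([6.0, 6.0])),
]

# Joint probability density matrix: element-wise product of the per-sensor grids.
jpdm = np.ones_like(X)
for z, cov in measurements:
    jpdm *= measurement_likelihood(z, cov)
jpdm /= jpdm.sum()                                        # normalize over the grid

# Peak picking replaces explicit measurement-to-measurement association.
i, j = np.unravel_index(np.argmax(jpdm), jpdm.shape)
print(f"Fused target estimate: x = {xs[i]:.1f} m, y = {ys[j]:.1f} m")
```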
565 Low Dimensional Representation of Dorsal Hand Vein Features Using Principle Component Analysis (PCA)
Authors: M.Heenaye-Mamode Khan, R.K. Subramanian, N. A. Mamode Khan
Abstract:
The quest to provide a more secure identification system has led to a rise in the development of biometric systems. The dorsal hand vein pattern is an emerging biometric which has attracted the attention of many researchers of late. Different approaches have been used to extract the vein pattern and match it. In this work, Principal Component Analysis (PCA), a method that has been successfully applied to human faces and hand geometry, is applied to the dorsal hand vein pattern. PCA is used to obtain eigenveins, which are a low-dimensional representation of the vein pattern features. Low-cost CCD cameras were used to obtain the vein images. The vein pattern was extracted by applying morphological operations, and noise reduction filters were applied to enhance the vein patterns. The system has been successfully tested on a database of 200 images using a threshold value of 0.9. The results obtained are encouraging.
Keywords: Biometric, Dorsal vein pattern, PCA.
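A minimal sketch of the eigenvein idea follows: vectorized vein images are reduced with PCA and matched by thresholding a normalized correlation score, echoing the 0.9 threshold mentioned above. Random arrays stand in for preprocessed vein images, and the number of retained components is an arbitrary choice, not the paper's setting.

```python
# Rough sketch of eigenvein extraction and threshold matching with PCA.
# Random arrays stand in for preprocessed (filtered, morphologically thinned) vein images.
import numpy as np

rng = np.random.default_rng(0)
n_images, h, w = 200, 64, 64
gallery = rng.random((n_images, h, w))            # placeholder vein images
data = gallery.reshape(n_images, -1).astype(float)

# PCA via SVD of the mean-centred data matrix: rows of Vt are the eigenveins.
mean = data.mean(axis=0)
U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
k = 20                                            # retained components (assumption)
eigenveins = Vt[:k]                               # low-dimensional basis

def project(img):
    return eigenveins @ (img.reshape(-1) - mean)

def match(probe, template, threshold=0.9):
    """Accept if the normalized correlation of the projections exceeds the threshold."""
    a, b = project(probe), project(template)
    score = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return score >= threshold, score

accepted, score = match(gallery[0], gallery[0])
print(accepted, round(score, 3))
```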
564 Automated Fact-Checking By Incorporating Contextual Knowledge and Multi-Faceted Search
Authors: Wenbo Wang, Yi-fang Brook Wu
Abstract:
The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study presents a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context using general-purpose, comprehensive and authoritative data; 2) developing a search function to automatically select relevant, new and credible references; 3) focusing on the parts of the representations of a claim and its reference that are most relevant to the fact-checking task. The experimental results demonstrate that: 1) augmenting the representations of claims and references through the use of a knowledge base, combined with the multi-head attention technique, contributes to improved fact-checking performance; 2) SAC with auto-selected references outperforms existing fact-checking approaches with manually selected references. Future directions of this study include I) exploring the knowledge graph in Wikidata to dynamically augment the representations of claims and references without introducing too much noise; II) exploring semantic relations in claims and references to further enhance fact-checking.
Keywords: Fact checking, claim verification, Deep Learning, Natural Language Processing.
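The multi-head attention step mentioned in the results can be illustrated with a small NumPy sketch in which claim tokens attend over reference tokens; the dimensions, random weights and token counts are placeholders, and this is not the SAC implementation.

```python
# Toy multi-head attention between a claim and a reference, in the spirit of the
# attention step described above; weights and dimensions are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(1)
d_model, n_heads = 64, 4
d_head = d_model // n_heads

claim = rng.standard_normal((7, d_model))       # 7 claim token embeddings
reference = rng.standard_normal((30, d_model))  # 30 reference token embeddings

Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(3))

def split_heads(x):
    # (tokens, d_model) -> (heads, tokens, d_head)
    return x.reshape(x.shape[0], n_heads, d_head).transpose(1, 0, 2)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

Q, K, V = split_heads(claim @ Wq), split_heads(reference @ Wk), split_heads(reference @ Wv)
scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)    # (heads, 7, 30)
attn = softmax(scores, axis=-1)
context = attn @ V                                      # reference content weighted per claim token
context = context.transpose(1, 0, 2).reshape(claim.shape[0], d_model)
print(context.shape)   # (7, 64): claim tokens enriched with attended reference information
```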
563 Computationally Efficient Adaptive Rate Sampling and Adaptive Resolution Analysis
Authors: Saeed Mian Qaisar, Laurent Fesquet, Marc Renaudin
Abstract:
Most real-life signals are time varying in nature. For proper characterization of such signals, a time-frequency representation is required. The STFT (short-time Fourier transform) is a classical tool used for this purpose. The limitation of the STFT is its fixed time-frequency resolution. Thus, an enhanced version of the STFT, based on level-crossing sampling, is devised. It can adapt the sampling frequency and the window function length by following the local variations of the input signal. Therefore, it provides an adaptive-resolution time-frequency representation of the input. The computational complexity of the proposed STFT is deduced and compared to that of the classical one. The results show a significant gain in computational efficiency and hence in processing power. The processing error of the proposed technique is also discussed.
Keywords: Level Crossing Sampling, Activity Selection, Adaptive Resolution Analysis, Computational Complexity
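The level-crossing idea behind the adaptive-rate sampling can be illustrated with a short sketch: a sample is recorded whenever the signal crosses one of a set of uniformly spaced amplitude levels, so the local sampling rate follows the signal's activity. The test signal, level spacing and printed rates below are illustrative only.

```python
# Simple level-crossing sampler: active signal segments are sampled densely,
# quiet segments sparsely. Parameters are illustrative, not the paper's settings.
import numpy as np

t = np.linspace(0.0, 1.0, 10000)                        # dense "analog" time base
x = np.where(t < 0.5, 0.1 * np.sin(2 * np.pi * 5 * t),  # slow, low-activity part
             np.sin(2 * np.pi * 60 * t))                # fast, high-activity part

levels = np.arange(-1.0, 1.0 + 1e-9, 0.125)             # quantization levels (q = 0.125)

def level_crossing_sample(t, x, levels):
    idx = [0]
    for i in range(1, len(x)):
        lo, hi = sorted((x[i - 1], x[i]))
        # record a sample if any level lies between two consecutive amplitudes
        if np.any((levels > lo) & (levels <= hi)):
            idx.append(i)
    return np.asarray(idx)

idx = level_crossing_sample(t, x, levels)
print(f"{len(idx)} samples instead of {len(x)} uniform ones")
print(f"mean rate, first half : {np.sum(t[idx] < 0.5) / 0.5:.0f} Hz")
print(f"mean rate, second half: {np.sum(t[idx] >= 0.5) / 0.5:.0f} Hz")
```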
562 Geometric Representation of Modified Forms of Seven Important Failure Criteria
Authors: Ranajay Bhowmick
Abstract:
Elastoplastic analysis of a structural system involves defining a failure/yield criterion, flow rules and hardening rules. The failure/yield criterion defines the limit beyond which the material flows plastically and hardens/softens or remains perfectly plastic before ultimate collapse. The failure/yield criterion is represented geometrically in three/two-dimensional Haigh-Westergaard stress space to facilitate a better understanding of the behavior of the material. In the present study, geometric representations in three- and two-dimensional stress space of a few important failure/yield criteria are presented. The criteria presented are the modified forms obtained from the conditional solutions of the equation of stress invariants. A comparison of the failure/yield surfaces is also presented to assess the effectiveness of each of them, and it has been found that for identical conditions Rankine’s criterion gives the largest values of limiting stresses.
Keywords: Deviatoric plane, failure criteria, geometric representation, hydrostatic axis, modified form.
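As a point of reference for the criteria compared above, the sketch below evaluates the classical Rankine (maximum principal stress) and von Mises criteria from the principal stresses and the deviatoric invariant J2 of an example stress tensor. These are the textbook forms, not the modified forms derived in the paper, and the stress values and yield strength are arbitrary.

```python
# Textbook check of two failure criteria from principal stresses and invariants;
# classical forms only, with arbitrary example numbers.
import numpy as np

sigma_y = 250.0                                     # uniaxial yield stress (MPa), assumed

stress = np.array([[120.0,  40.0,   0.0],           # Cauchy stress tensor (MPa)
                   [ 40.0,  80.0,  30.0],
                   [  0.0,  30.0, -60.0]])

principal = np.sort(np.linalg.eigvalsh(stress))[::-1]   # s1 >= s2 >= s3
s1, s2, s3 = principal

# Rankine (maximum principal stress) criterion.
rankine_fails = max(abs(s1), abs(s3)) >= sigma_y

# von Mises criterion from the second deviatoric invariant J2.
I1 = np.trace(stress)
dev = stress - I1 / 3.0 * np.eye(3)
J2 = 0.5 * np.tensordot(dev, dev)                   # 0.5 * s_ij * s_ij
von_mises = np.sqrt(3.0 * J2)
mises_fails = von_mises >= sigma_y

print(f"principal stresses : {np.round(principal, 1)} MPa")
print(f"Rankine failure    : {rankine_fails}")
print(f"von Mises stress   : {von_mises:.1f} MPa -> failure: {mises_fails}")
```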
561 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments
Authors: Tahani Aljohani, Jialin Yu, Alexandra. I. Cristea
Abstract:
The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive and often ignored by learners. Especially in the booming realm of Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners’ demographic characteristics by proposing an approach using linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on a FutureLearn MOOC platform. Additionally, we tackle the difficult problem of predicting the gender of learners based on their comments only, which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, considering sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against other bleeding-edge models that take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN) and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore using different word-level encoding functions. We have implemented these methods on our MOOC dataset, which proves the most performant compared with a public sentiment analysis dataset that is further used to cross-examine the models’ results.
Keywords: Deep learning, data mining, gender prediction, MOOCs.
560 A Domain Specific Modeling Language Semantic Model for Artefact Orientation
Authors: Bunakiye R. Japheth, Ogude U. Cyril
Abstract:
Since the process of transforming user requirements into modeling constructs is not very well supported by domain-specific frameworks, it became necessary to integrate domain requirements with specific architectures to achieve an integrated, customizable solution space via artifact orientation. Domain-specific modeling language specifications of model-driven engineering technologies focus on requirements within a particular domain, which can be tailored to aid the domain expert in expressing domain concepts effectively. Modeling processes through domain-specific language formalisms are highly volatile due to dependencies on domain concepts or used process models. A capable solution is given by artifact orientation, which stresses the results rather than a strict dependence on complicated platforms for model creation and development. Based on this premise, domain-specific methods for producing artifacts, without having to take into account the complexity and variability of platforms for model definitions, can be integrated to support customizable development. In this paper, we discuss methods for integrating these capabilities and necessities within a common structure and semantics that contribute a metamodel for artifact orientation, which leads to a reusable software layer with concrete syntax capable of determining design intents from the domain expert. The concepts forming the language formalism are established from models explained within the oil and gas pipeline industry.
Keywords: Control process, metrics of engineering, structured abstraction, semantic model.
559 A Robust Salient Region Extraction Based on Color and Texture Features
Authors: Mingxin Zhang, Zhaogan Lu, Junyi Shen
Abstract:
In current research reports, salient regions are usually defined as those regions that present the main meaningful or semantic content. However, there are no uniform saliency metrics that describe the saliency of implicit image regions. Most common metrics take as salient those regions which have many abrupt changes or some unpredictable characteristics, but such metrics fail to detect salient, useful regions with flat textures. In fact, according to human semantic perception, color and texture distinctions are the main characteristics that distinguish different regions. Thus, we present a novel saliency metric coupled with color and texture features, and its corresponding salient region extraction method. In order to evaluate the saliency values of the implicit regions in an image, three main colors and multi-resolution Gabor features are used for the color and texture features, respectively. For each region, its saliency value is the total sum of its Euclidean distances to the other regions in the color and texture spaces. A specially synthesized image and several practical images with main salient regions are used to evaluate the performance of the proposed saliency metric against several common metrics, i.e., scale saliency, wavelet transform modulus maxima point density, and importance index based metrics. Experimental results verified that the proposed saliency metric achieves more robust performance than those common saliency metrics.
Keywords: Salient regions, color and texture features, image segmentation, saliency metric.
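The distance-sum saliency measure described above can be sketched in a few lines: each region is represented by a concatenated color and texture feature vector, and its saliency is the sum of its Euclidean distances to all other regions. The random feature values below are placeholders for the three-main-color and multi-resolution Gabor descriptors used in the paper.

```python
# Sketch of the distance-sum saliency idea with placeholder region features.
import numpy as np

rng = np.random.default_rng(3)
n_regions = 6
color_feats = rng.random((n_regions, 9))      # e.g. 3 main colours x 3 channels (placeholder)
texture_feats = rng.random((n_regions, 16))   # e.g. multi-resolution Gabor energies (placeholder)

features = np.hstack([color_feats, texture_feats])

def saliency(features):
    diff = features[:, None, :] - features[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)      # pairwise Euclidean distances between regions
    return dist.sum(axis=1)                   # per-region total distance to all other regions

scores = saliency(features)
salient_region = int(np.argmax(scores))
print(np.round(scores, 2), "-> most salient region:", salient_region)
```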
558 Evolutionary Eigenspace Learning using CCIPCA and IPCA for Face Recognition
Authors: Ghazy M.R. Assassa, Mona F. M. Mursi, Hatim A. Aboalsamh
Abstract:
Traditional principal component analysis (PCA) techniques for face recognition are based on batch-mode training using a pre-available image set. Real-world applications require that the training set be dynamic and of an evolving nature, where, within the framework of continuous learning, new training images are continuously added to the original set; this triggers a costly continuous re-computation of the eigenspace representation via repeating an entire batch-based training that includes the old and new images. Incremental PCA methods allow adding new images and updating the PCA representation. In this paper, two incremental PCA approaches, CCIPCA and IPCA, are examined and compared. In addition, different learning and testing strategies are proposed and applied to the two algorithms. The results suggest that batch PCA is inferior to both incremental approaches, and that all CCIPCA variants are practically equivalent.
Keywords: Candid covariance-free incremental principal components analysis (CCIPCA), face recognition, incremental principal components analysis (IPCA).
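For illustration, the sketch below implements a CCIPCA-style update following the published candid covariance-free rule, streaming samples one at a time instead of re-running batch PCA when new images arrive; the amnesic parameter, burn-in and data are illustrative choices rather than the settings examined in the paper.

```python
# Minimal CCIPCA-style incremental eigenspace update (candid covariance-free rule).
# Written from the published update equations; data and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)

def ccipca(samples, n_components=5, amnesic=2.0):
    d = samples.shape[1]
    mean = np.zeros(d)
    V = np.zeros((n_components, d))          # unnormalized eigenvector estimates
    for n, x in enumerate(samples, start=1):
        mean += (x - mean) / n               # running mean of the image stream
        u = x - mean
        l = amnesic if n > 20 else 0.0       # enable the amnesic factor after a burn-in
        for i in range(min(n, n_components)):
            if n == i + 1:
                V[i] = u                     # initialize the component with the residual
                continue
            norm = np.linalg.norm(V[i]) + 1e-12
            V[i] = (n - 1 - l) / n * V[i] + (1 + l) / n * u * (u @ V[i]) / norm
            norm = np.linalg.norm(V[i]) + 1e-12
            u = u - (u @ V[i]) / norm * V[i] / norm   # deflate before the next component
    return V / np.linalg.norm(V, axis=1, keepdims=True)

# Stream "face images" one by one instead of re-running batch PCA on every arrival.
stream = rng.standard_normal((500, 32 * 32))
eigenfaces = ccipca(stream)
print(eigenfaces.shape)            # (5, 1024): incremental eigenspace estimate
```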
557 Precombining Adaptive LMMSE Detection for DS-CDMA Systems in Time Varying Channels: Non Blind and Blind Approaches
Authors: M. D. Kokate, T. R. Sontakke, P. W. Wani
Abstract:
This paper deals with an adaptive multiuser detector for direct-sequence code division multiple-access (DS-CDMA) systems. A modified receiver, the precombining LMMSE detector, is considered under a time-varying channel environment. Detector updating is performed with two criteria: mean square error (MSE) estimation and the MOE optimization technique. The adaptive implementation issues of these two schemes are quite different. The MSE criterion updates the filter weights by minimizing the error between the data vector and the adaptive vector. The MOE criterion, together with a canonical representation of the detector, results in a constrained optimization problem. Even though the canonical representation is very complicated under time-varying channels, it is analyzed under the assumption of an average power profile of the multipath replicas of the user of interest. The performance of both schemes is studied for practical SNR conditions. Results show that at low SNR the MSE precombining LMMSE is better than the blind precombining LMMSE, but at higher SNR the MOE scheme outperforms it.
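A simplified picture of the non-blind (MSE-criterion) branch is sketched below: a chip-rate linear detector for a synchronous two-user DS-CDMA toy channel is adapted with LMS against training bits. The time-invariant channel, spreading gain and step size are simplifying assumptions and do not reproduce the time-varying multipath setting studied in the paper.

```python
# Training-based MSE adaptation of a chip-rate linear detector for a synchronous
# two-user DS-CDMA toy channel; illustrative simplification of the MSE criterion.
import numpy as np

rng = np.random.default_rng(5)
N = 8                                          # spreading gain (chips per bit), assumed
codes = np.sign(rng.standard_normal((2, N)))   # random +/-1 spreading codes
amplitudes = np.array([1.0, 2.0])              # user 2 is the stronger interferer
noise_std = 0.3
n_bits, mu = 2000, 0.01                        # training length and LMS step size

bits = np.sign(rng.standard_normal((2, n_bits)))              # +/-1 data bits
received = (amplitudes[:, None] * bits).T @ codes             # (n_bits, N) received chip vectors
received += noise_std * rng.standard_normal(received.shape)

w = codes[0] / N                               # start from the matched filter of user 1
for r, d in zip(received, bits[0]):            # LMS update: w <- w + mu * e * r
    e = d - w @ r
    w += mu * e * r

decisions = np.sign(received @ w)
ber = np.mean(decisions != bits[0])
print(f"bit error rate after MSE adaptation: {ber:.4f}")
```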
556 Composite Kernels for Public Emotion Recognition from Twitter
Authors: Chien-Hung Chen, Yan-Chun Hsing, Yung-Chun Chang
Abstract:
The Internet has grown into a powerful medium for information dispersion and social interaction, leading to the rapid growth of social media, which allows users to easily post their emotions and perspectives on certain topics online. Our research aims at using natural language processing and text mining techniques to explore the public emotions expressed on Twitter by analyzing the sentiment behind tweets. In this paper, we propose a composite kernel method that integrates a tree kernel with the linear kernel to simultaneously exploit both the tree representation and the distributed emotion keyword representation, analyzing the syntactic and content information in tweets. The experimental results demonstrate that our method can effectively detect the public emotion of tweets while outperforming the other compared methods.
Keywords: Public emotion recognition, natural language processing, composite kernel, sentiment analysis, text mining.
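The composite-kernel idea can be sketched with scikit-learn by combining two base kernels with a mixing weight and passing the result to an SVM as a precomputed kernel. An RBF kernel over token counts stands in for the tree kernel here (a real parse-tree kernel would require a parser), and the features, labels and weight are synthetic placeholders rather than the paper's setup.

```python
# Sketch of a composite kernel: convex combination of two base kernels fed to an
# SVM with a precomputed Gram matrix. Data are synthetic placeholders.
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_train, n_test = 120, 30
token_counts = rng.poisson(1.0, size=(n_train + n_test, 300)).astype(float)  # "syntactic" view
emotion_feats = rng.random((n_train + n_test, 20))                           # keyword view
labels = rng.integers(0, 4, size=n_train + n_test)                           # 4 emotion classes

Xs_tr, Xs_te = token_counts[:n_train], token_counts[n_train:]
Xe_tr, Xe_te = emotion_feats[:n_train], emotion_feats[n_train:]

def composite(Ka, Kb, alpha=0.6):
    return alpha * Ka + (1.0 - alpha) * Kb      # a convex combination stays a valid kernel

K_train = composite(rbf_kernel(Xs_tr), linear_kernel(Xe_tr))
K_test = composite(rbf_kernel(Xs_te, Xs_tr), linear_kernel(Xe_te, Xe_tr))

clf = SVC(kernel="precomputed").fit(K_train, labels[:n_train])
pred = clf.predict(K_test)
print("accuracy on synthetic data:", np.mean(pred == labels[n_train:]))
```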
555 Discrimination of Seismic Signals Using Artificial Neural Networks
Authors: Mohammed Benbrahim, Adil Daoudi, Khalid Benjelloun, Aomar Ibenbrahim
Abstract:
The automatic discrimination of seismic signals is an important practical goal for earth-science observatories due to the large amount of information that they receive continuously. An essential discrimination task is to allocate the incoming signal to a group associated with the kind of physical phenomenon producing it. In this paper, two classes of seismic signals recorded routinely in the geophysical laboratory of the National Center for Scientific and Technical Research in Morocco are considered. They correspond to signals associated with local earthquakes and chemical explosions. The approach adopted for the development of an automatic discrimination system is a modular system composed of three blocks: 1) representation, 2) dimensionality reduction and 3) classification. The originality of our work lies in the use of a new wavelet called the "modified Mexican hat wavelet" in the representation stage. For the dimensionality reduction, we propose a new algorithm based on random projection and principal component analysis.
Keywords: Seismic signals, Wavelets, Dimensionality reduction, Artificial neural networks, Classification.
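The dimensionality-reduction stage described above, a random projection followed by PCA, can be sketched as follows; the feature dimensions, component counts and the small neural classifier are illustrative stand-ins, with random arrays in place of the wavelet representation.

```python
# Sketch of random projection + PCA dimensionality reduction before classification.
# Placeholder features and labels; dimensions are illustrative choices.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
n_signals, n_coeffs = 300, 4096                    # e.g. wavelet coefficients per record
X = rng.standard_normal((n_signals, n_coeffs))
y = rng.integers(0, 2, size=n_signals)             # 0 = earthquake, 1 = explosion (synthetic)

X_rp = GaussianRandomProjection(n_components=200, random_state=0).fit_transform(X)
X_pca = PCA(n_components=20, random_state=0).fit_transform(X_rp)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_pca, y)
print("training accuracy on synthetic features:", clf.score(X_pca, y))
```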
554 Digital Geomatics Trends for Production and Updating Topographic Map by Using Digital Generalization Procedures
Authors: O. Z. Jasim
Abstract:
An accurate digital map must satisfy two main user requirements: first, the map must be visually readable, and second, all the map elements must be well represented. These two requirements hold especially true for map generalization, which aims at simplifying the representation of cartographic data. Maps at different scales are very important for decision making, for example the master plan and all the infrastructure maps in civil engineering. The cartographer cannot simply project the data onto a piece of paper; its readability must also be ensured. The map layout of any geodatabase is very important, as this layout helps the user read, analyze or extract information from the map. There are many principles and guidelines for generalization that can be found in the cartographic literature. A manual reduction method for generalization depends on the experience of the map maker and therefore produces inconsistent results. Digital generalization, rooted in conventional cartography, has become an increasing concern in both the Geographic Information System (GIS) and mapping fields. This project is intended to review the state of the art of the new technology, help to understand the needs and plans for the implementation of a digital generalization capability, and increase knowledge of topographic map production.
Keywords: Cartography, digital generalization, mapping, GIS.
553 Sound Instance: Art, Perception and Composition through Soundscapes
Authors: Ricardo Mestre
Abstract:
The soundscape stands out as an agglomeration of the sounds available in the world, associated with different contexts and origins, and is a theme studied by various areas of knowledge seeking to assess its benefits and consequences, contributing to the welfare of society and other ecosystems. With the objective of a greater recognition of sound reality, through the selection and differentiation of sounds, soundscape studies focus on contributing to a better tuning of the world and to the balance and well-being of humanity. The sound environment, produced and created in various ways, can provide various sources of information, contributing to the orientation of the human being, alerting and manipulating him during his daily journey, like small notifications received on a cell phone or other device with these features. In this way, it becomes possible to give sound its due importance in relation to the processes of individual representation, in terms of social, professional and emotional life. Ensuring an individual representation means providing the human being with new tools for the long process of reflection by recognizing his environment, the sounds that represent him, and his perspective on his respective function in it. In order to provide more information about the importance of the sound environment inherent to the individual reality, we introduce the term sound instance to refer to the whole sound field existing in an individual's life, which is divided into four distinct subfields, all essential to the process of individual representation, called sound matrix, sound cycles, sound traces and sound interference. Alongside volunteers, we were able to create six representations of sound instances based on each individual's perception of his or her life, focusing on the present, past and future. With this investigation, it was possible to determine that the sound instance serves as a tool for self-recognition, considering the volunteers' statements about the experience, reflecting on the three timelines based on memories, thoughts and wishes.
Keywords: Sound instance, soundscape, sound art, self-recognition.
552 Transient Voltage Distribution on the Single Phase Transmission Line under Short Circuit Fault Effect
Authors: A. Kojah, A. Nacaroğlu
Abstract:
Single-phase transmission lines are used to transfer data or energy between two users. Transient conditions such as switching operations and short circuit faults cause fluctuations in the waveform to be transmitted. The spatial voltage distribution on the single-phase transmission line may change owing to the position and duration of the short circuit fault in the system. In this paper, the state space representation of the single-phase transmission line is given for a short circuit fault and for various types of termination. Since the transmission line is modeled in the time domain using distributed parametric elements, the mathematical representation of the event is given in state space (time domain) differential equation form. This also makes the problem easier to solve, given the time- and space-dependent characteristics of the voltage variations on the transmission line modeled with distributed parameters.
Keywords: Energy transmission, transient effects, transmission line, transient voltage, RLC short circuit, single phase.
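A minimal lumped-parameter sketch of a state-space line model is given below: the line is approximated by N cascaded sections with series R and L and shunt C, terminated in a resistive load and driven by a unit-step source. Element values, section count and time step are illustrative assumptions, not the paper's distributed-parameter model; a short-circuit fault at an intermediate node could be imitated by adding a large shunt conductance term to that node's voltage equation.

```python
# State-space sketch of a single-phase line as N lumped RLC sections with a
# resistive termination; all element values are illustrative assumptions.
import numpy as np

N = 20                       # number of lumped sections
R, L, C = 0.05, 1e-6, 1e-9   # per-section series resistance (ohm), inductance (H), shunt capacitance (F)
R_load = 50.0                # termination resistance (ohm)

# State vector x = [i_1..i_N, v_1..v_N]: section currents and node voltages.
A = np.zeros((2 * N, 2 * N))
B = np.zeros(2 * N)
for k in range(N):
    # L di_k/dt = v_{k-1} - v_k - R i_k   (v_0 is the source input)
    A[k, k] = -R / L
    A[k, N + k] = -1.0 / L
    if k > 0:
        A[k, N + k - 1] = 1.0 / L
    else:
        B[k] = 1.0 / L
    # C dv_k/dt = i_k - i_{k+1}           (the far end draws i = v_N / R_load)
    A[N + k, k] = 1.0 / C
    if k < N - 1:
        A[N + k, k + 1] = -1.0 / C
    else:
        A[N + k, N + k] = -1.0 / (R_load * C)

# Backward-Euler simulation of a unit-step source (unconditionally stable).
dt, steps = 1e-9, 50000
M = np.linalg.inv(np.eye(2 * N) - dt * A)      # implicit update matrix
x = np.zeros(2 * N)
for _ in range(steps):
    x = M @ (x + dt * B * 1.0)                 # source voltage u = 1 V

print(np.round(x[N:], 3))   # node voltages along the line at t = 50 us (approaching the DC profile)
```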
551 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment
Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee
Abstract:
Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These models include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited for sequence modeling, whilst CNNs are suited for the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs as stated above to present a unified model for the RTE task. Our model basically combines relation vectors computed from the phrasal representations of each sentence and the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations for each sentence, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short-Term Memory (Bi-LSTM) to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used in the same fashion as an attention mechanism over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
Keywords: Deep neural models, natural language inference, recognizing textual entailment, sentence-to-sentence relation.
550 Highlighting Document's Structure
Authors: Sylvie Ratté, Wilfried Njomgue, Pierre-André Ménard
Abstract:
In this paper, we present symbolic recognition models to extract knowledge characterized by document structures. Focussing on the extraction and the meticulous exploitation of the semantic structure of documents, we obtain a meaningful contextual tagging corresponding to different unit types (title, chapter, section, enumeration, etc.).
Keywords: Information retrieval, document structures, symbolic grammars.
549 Utilizing Ontologies Using Ontology Editor for Creating Initial Unified Modeling Language (UML) Object Model
Authors: Waralak Vongdoiwang Siricharoen
Abstract:
One problem in object-oriented software development is the difficulty of finding appropriate and suitable objects with which to start the system. In this work, ontologies play the role of supporting object discovery at the beginning of object-oriented software development. Many studies try to demonstrate that there is great potential between object models and ontologies. Constructing an ontology from an object model, called ontology engineering, can be done; on the other hand, this research aims to support the idea that building an object model from an ontology is also promising and practical. Ontology classes are available online in many specific areas and can be searched with semantic search engines. There are also many tools that help to do so; those used in this research are the Protégé ontology editor and Visual Paradigm, and putting them together gives a good outcome. This research shows how the approach works efficiently on a real case study using ontology classes in the travel/tourism domain. Classes, properties, and relationships from more than two ontologies need to be combined in order to generate the object model. This paper presents a simple methodology framework which explains the process of discovering objects. The results show that this framework has great value, while there is room for expansion. Reusing existing ontologies offers a much cheaper alternative than building new ones from scratch. More ontologies are becoming available on the web, and online ontology libraries for storing and indexing ontologies are increasing in number and demand. Semantic and ontology search engines have also started to appear, to facilitate the search and retrieval of online ontologies.
Keywords: Software Development, Ontology, Ontology Library, Artificial Intelligence, Protégé, Object Model.
548 A Proposal for a Secure and Interoperable Data Framework for Energy Digitalization
Authors: Hebberly Ahatlan
Abstract:
The process of digitizing energy systems involves transforming traditional energy infrastructure into interconnected, data-driven systems that enhance efficiency, sustainability, and responsiveness. As smart grids become increasingly integral to the efficient distribution and management of electricity from both fossil and renewable energy sources, the energy industry faces strategic challenges associated with digitalization and interoperability — particularly in the context of modern energy business models, such as virtual power plants (VPPs). The critical challenge in modern smart grids is to seamlessly integrate diverse technologies and systems, including virtualization, grid computing and service-oriented architecture (SOA), across the entire energy ecosystem. Achieving this requires addressing issues like semantic interoperability, Information Technology (IT) and Operational Technology (OT) convergence, and digital asset scalability, all while ensuring security and risk management. This paper proposes a four-layer digitalization framework to tackle these challenges, encompassing persistent data protection, trusted key management, secure messaging, and authentication of IoT resources. Data assets generated through this framework enable AI systems to derive insights for improving smart grid operations, security, and revenue generation. Furthermore, this paper also proposes a Trusted Energy Interoperability Alliance as a universal guiding standard in the development of this digitalization framework to support more dynamic and interoperable energy markets.
Keywords: Digitalization, IT/OT convergence, semantic interoperability, TEIA alliance, VPP.
547 A Study of Semantic Analysis of LED Illustrated Traffic Directional Arrow in Different Style
Authors: Chia-Chen Wu, Chih-Fu Wu, Pey-Weng Lien, Kai-Chieh Lin
Abstract:
In the past, the most widely adopted light source was the incandescent bulb, but with the appearance of LED light sources, traditional light sources have gradually been replaced by LEDs because of their numerous superior characteristics. However, many existing standards do not apply to LEDs, as the two light sources have different characteristics. This intensifies the significance of studies on LEDs. As a Kansei design study investigating the visual glare produced by traffic arrows implemented with LEDs, this study conducted a semantic analysis of the styles of traffic arrows used in domestic and international settings. The results can reduce drivers’ misrecognition, which leads to failure to arrive at the destination or to traffic accidents. This study started with a literature review and a survey of the status quo before conducting experiments that were divided into two parts. The first part involved a screening experiment of arrow samples, where cluster analysis was conducted to choose five representative samples of LED displays. The second part was a semantic experiment on the display of arrows using LEDs, incorporating the five representative samples and the ten selected adjectives. Analyzing the results with Quantification Theory Type I, it was found that among the compositional elements of the arrows, the fletching was the most significant factor influencing the adjectives. In contrast, a “no fletching” design was more abstract and vague; it lacked the ability to convey the intended message and might bear negative psychological connotations including “dangerous,” “forbidden,” and “unreliable.” The arrow design consisting of “> shaped fletching” was found to be more concrete and definite, showing positive connotations including “safe,” “cautious,” and “reliable.” When a stimulus was placed at a farther distance, the glare was significantly reduced and the visual evaluation scores were higher. Conversely, if the fletching and the shaft had a similar proportion, looking at the stimuli yielded higher evaluations at a closer distance. The above results can be applied to the design of traffic arrows so that they convey information clearly and rapidly. In addition, drivers’ safety can be enhanced by understanding the cause of glare and improving visual recognizability.
Keywords: LED, arrow, Kansei research, preferred imagery.
546 Journey on Image Clustering Based on Color Composition
Authors: Achmad Nizar Hidayanto, Elisabeth Martha Koeanan
Abstract:
Image clustering is a process of grouping images based on their similarity. Image clustering usually uses the color component, texture, edge, shape, or a mixture of two or more components. This research aims to explore image clustering using color composition. In order to complete this image clustering, three main components should be considered: the color space, the image representation (feature extraction), and the clustering method itself. We aim to explore which composition of these factors will produce the best clustering results by combining various techniques from the three components. The color spaces used are RGB, HSV, and L*a*b*. The image representations use the Histogram and the Gaussian Mixture Model (GMM), whereas the clustering methods use K-Means and the Agglomerative Hierarchical Clustering algorithm. The results of the experiment show that the GMM representation is better combined with the RGB and L*a*b* color spaces, whereas the Histogram is better combined with HSV. The experiments also show that K-Means is better than Agglomerative Hierarchical Clustering for image clustering.
Keywords: Image clustering, feature extraction, RGB, HSV, L*a*b*, Gaussian Mixture Model (GMM), histogram, Agglomerative Hierarchical Clustering (AHC), K-Means, Expectation-Maximization (EM).
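One pipeline combination from the study can be sketched as follows: images are converted to HSV, described by a normalized color histogram, and clustered with K-Means. The random images are placeholders; swapping in rgb2lab, a per-image GaussianMixture, or AgglomerativeClustering yields the other combinations discussed above.

```python
# Minimal HSV-histogram + K-Means clustering pipeline with placeholder images.
import numpy as np
from skimage.color import rgb2hsv
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
images = rng.random((40, 32, 32, 3))                 # placeholder RGB images in [0, 1]

def hsv_histogram(img, bins=8):
    hsv = rgb2hsv(img)
    hist, _ = np.histogramdd(hsv.reshape(-1, 3), bins=bins, range=[(0, 1)] * 3)
    hist = hist.ravel()
    return hist / hist.sum()                         # normalized colour-composition feature

features = np.array([hsv_histogram(im) for im in images])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(labels)
```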
545 LOWL: Logic and OWL, an Extension
Authors: M. Mohsenzadeh, F. Shams, M. Teshnehlab
Abstract:
Current research on the semantic web aims at making web pages meaningful for machines. In this effort, ontology plays a primary role. We believe that logic can help ontology languages (such as OWL) be more fluent and efficient. In this paper we combine logic with OWL to reduce some of the disadvantages of this language. We therefore extend OWL with logic and also show how logic can satisfy our future expectations of an ontology language.
Keywords: Logical Programming, OWL, Language Extension.
544 The Role of Planning and Memory in the Navigational Ability
Authors: Greeshma Sharma, Sushil Chandra, Vijander Singh, Alok Prakash Mittal
Abstract:
Navigational ability requires spatial representation, planning, and memory. It covers three interdependent domains, i.e., cognitive and perceptual factors, neural information processing, and variability in brain microstructure. Many attempts have been made to examine the role of spatial representation in navigational ability, and individual differences have been identified in the neural substrate. However, there is also a need to address the influence of planning and memory on navigational ability. The present study aims to evaluate the relations of the aforementioned factors to navigational ability. A total of 30 participants volunteered in a study of a virtual shopping complex and were subsequently classified into good and bad navigators based on their performance. The results showed that planning ability was the factor most correlated with navigational ability and also the factor discriminating between good and bad navigators. Correlations were also found between spatial memory recall and navigational ability. In addition, non-verbal episodic memory and spatial memory recall were found to be correlated with the learning variable. This study attempts to identify differences between people with more and less navigational ability on the basis of planning and memory.
Keywords: Memory, planning, navigational ability, virtual reality.
543 Designing Pictogram for Food Portion Size
Authors: Y.C. Liu, S.J. Lu, Y.C. Weng, H. Su
Abstract:
The objective of this paper is to investigate a new approach based on the idea of pictograms for food portion size. This approach adopts the model of the United States Pharmacopeia Drug Information (USP-DI). The representation of each food portion size is composed of three parts: the frame, the connotation of dietary portion sizes, and the layout. To investigate users' comprehension based on this approach, two experiments were conducted, involving 122 Taiwanese people, 60 male and 62 female, with ages between 16 and 64 (divided into age groups of 16-30, 31-45 and 46-64). In Experiment 1, the mean correct rate for the understanding of food items is 48.54% (S.D. = 95.08) and the mean response time 2.89 sec (S.D. = 2.14). The difference in correct rates between age groups is significant (P* = 0.00 < 0.05). In Experiment 2, the correct rate for selecting the right life-size measurement aid is 65.02% (S.D. = 21.31). The results showed the potential of the approach for certain food portion sizes. Issues raised for discussion include comprehension of numerous food varieties in an open environment, the choice between photographs and drawings, and the reasons for the different correct rates for the measurement aid. This research could also be of use to those interested in the systematic and pictorial representation of dietary portion size information.
Keywords: Comprehension, Food Portion Size, Model of Dietary Information, Pictogram Design, USP-DI.
542 Palmprint based Cancelable Biometric Authentication System
Authors: Ying-Han Pang, Andrew Teoh Beng Jin, David Ngo Chek Ling
Abstract:
The cancelable palmprint authentication system proposed in this paper is specifically designed to overcome the limitations of contemporary biometric authentication systems. In this proposed system, geometric and pseudo-Zernike moments are employed as feature extractors to transform the palmprint image into a lower-dimensional, compact feature representation. Before moment computation, the wavelet transform is adopted to decompose the palmprint image into lower-resolution and lower-dimensional frequency subbands. This drastically reduces the computational load of the moment calculation. The generated wavelet-moment-based feature representation is combined with a set of random data to generate a cancelable verification key. This private binary key can be canceled and replaced. Besides that, this key also possesses high data-capture offset tolerance, with highly correlated bit strings for the intra-class population. This property allows a clear separation of the genuine and imposter populations, as well as the achievement of zero Equal Error Rate, which is hardly attained in conventional biometric-based authentication systems.
Keywords: Cancelable biometric authenticator, Discrete-Hashing, Moments, Palmprint.
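The cancelable-key step can be sketched as a user-specific random projection followed by thresholding, in the spirit of Discrete-Hashing: re-issuing a new random seed cancels and replaces the old key. The feature vector below is a random stand-in for the wavelet-moment representation, and the bit length and seeds are arbitrary.

```python
# Sketch of a cancelable binary key: user-specific orthonormal random projection
# of a feature vector, thresholded into bits. Placeholder features and parameters.
import numpy as np

rng = np.random.default_rng(9)
features = rng.standard_normal(60)                 # stand-in wavelet-moment features

def cancelable_key(features, user_seed, n_bits=40):
    user_rng = np.random.default_rng(user_seed)    # token/seed stored with the user
    R = user_rng.standard_normal((len(features), n_bits))
    Q, _ = np.linalg.qr(R)                         # orthonormal random basis
    projection = features @ Q[:, :n_bits]
    return (projection > 0).astype(np.uint8)       # binary, replaceable verification key

key_a = cancelable_key(features, user_seed=1234)
key_b = cancelable_key(features + 0.05 * rng.standard_normal(60), user_seed=1234)
key_c = cancelable_key(features, user_seed=9999)   # re-issued (cancelled and replaced) key

print("intra-class Hamming distance:", int(np.sum(key_a != key_b)))
print("after re-issue              :", int(np.sum(key_a != key_c)))
```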
541 Automatic Change Detection for High-Resolution Satellite Images of Urban and Suburban Areas
Authors: Antigoni Panagiotopoulou, Lemonia Ragia
Abstract:
High-resolution satellite images can provide detailed information about change detection on the earth. In the present work, QuickBird images of 60 cm/pixel spatial resolution and WorldView images of 30 cm/pixel resolution are utilized to perform automatic change detection in urban and suburban areas of Crete, Greece. There is a relative time difference of 13 years between the satellite images. Multiindex scene representation is applied to the images to classify the scene into buildings, vegetation, water and ground. Then, automatic change detection is made possible by pixel-per-pixel comparison of the classified multi-temporal images. The vegetation index and the water index which have been developed in this study prove effective. Furthermore, the proposed change detection approach not only indicates whether changes have taken place but also provides specific information on the types of changes. Experimentation with other scenes in the future could help optimize the proposed spectral indices as well as the entire change detection methodology.
Keywords: Change detection, multiindex scene representation, spectral index, QuickBird, WorldView.
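A toy version of the pixel-per-pixel comparison is sketched below: each date is classified with simple spectral indices and the class maps are compared, which also reveals the type of change. The NDVI/NDWI-style band ratios, thresholds and random scenes are generic placeholders, not the specific indices developed in this study.

```python
# Toy pixel-per-pixel change detection from index-based classification maps.
# Generic band ratios and arbitrary thresholds on random placeholder scenes.
import numpy as np

rng = np.random.default_rng(10)
h, w = 100, 100

def fake_scene():
    # bands: blue, green, red, near-infrared, all in [0, 1]
    return rng.random((4, h, w))

def classify(scene):
    blue, green, red, nir = scene
    ndvi = (nir - red) / (nir + red + 1e-6)       # vegetation index (generic form)
    ndwi = (green - nir) / (green + nir + 1e-6)   # water index (generic form)
    labels = np.zeros((h, w), dtype=np.uint8)     # 0 = ground
    labels[ndvi > 0.2] = 1                        # 1 = vegetation
    labels[ndwi > 0.3] = 2                        # 2 = water
    labels[(blue > 0.6) & (ndvi < 0.0)] = 3       # 3 = buildings (crude brightness rule)
    return labels

before, after = classify(fake_scene()), classify(fake_scene())
changed = before != after
print(f"changed pixels: {changed.mean():.1%}")
# The (before, after) class pair at each changed pixel also tells *what* changed.
print("ground -> buildings:", int(np.sum((before == 0) & (after == 3))))
```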
540 Cooperative Sensing for Wireless Sensor Networks
Authors: Julien Romieux, Fabio Verdicchio
Abstract:
Wireless Sensor Networks (WSNs), which sense environmental data with battery-powered nodes, require multi-hop communication. This power-demanding task adds an extra workload that is unfairly distributed across the network. As a result, nodes run out of battery at different times: this requires an impractical individual node maintenance scheme. Therefore we investigate a new Cooperative Sensing approach that extends the WSN operational life and allows a more practical network maintenance scheme (where all nodes deplete their batteries almost at the same time). We propose a novel cooperative algorithm that derives a piecewise representation of the sensed signal while controlling approximation accuracy. Simulations show that our algorithm increases WSN operational life and spreads communication workload evenly. Results convey a counterintuitive conclusion: distributing workload fairly amongst nodes may not decrease the network power consumption and yet extend the WSN operational life. This is achieved as our cooperative approach decreases the workload of the most burdened cluster in the network.
Keywords: Cooperative signal processing, power management, signal representation, signal approximation, wireless sensor networks.
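The error-controlled piecewise representation can be illustrated with a short sketch: a new segment is emitted only when the running approximation error would exceed a tolerance, so fewer values need to be transmitted. The signal and tolerance are illustrative, and this is not the cooperative, cluster-level algorithm itself.

```python
# Error-bounded piecewise-constant approximation of a sensed signal: a node only
# transmits a new segment when the tolerance would be violated. Illustrative only.
import numpy as np

rng = np.random.default_rng(11)
t = np.arange(500)
signal = 20 + 5 * np.sin(2 * np.pi * t / 200) + 0.2 * rng.standard_normal(500)

def piecewise_constant(signal, tol=0.5):
    segments = []                       # (start index, value) pairs actually transmitted
    start, value = 0, signal[0]
    for i, x in enumerate(signal):
        if abs(x - value) > tol:        # approximation error bound would be violated
            segments.append((start, value))
            start, value = i, x
    segments.append((start, value))
    return segments

segments = piecewise_constant(signal)
ratio = len(segments) / len(signal)
print(f"{len(segments)} segments for {len(signal)} samples "
      f"({ratio:.1%} of the original transmissions)")
```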
539 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network
Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy
Abstract:
This paper aims to provide an interpretation of artificial neural networks (ANNs) and explore some of its implications. The interpretation views ANNs as a memory which encodes instances of experience. An experiment explores the behavior of encoding and retrieval of instances from memory. A localised-representation ANN is created that allows control over encoding and over retrieved memory sample size, and it is experimented with using the MNIST digits dataset. The relationship between input familiarity, conflict within retrieved samples, and error rates is described and demonstrated to be an effective driver for memory encoding. Results indicate that selective encoding and retrieval samples that allow detection of memory conflicts produce optimal performance, and that error rates are normally distributed with input familiarity and conflict. By using input familiarity and sample consistency to guide memory encoding, the number of encoding trials on the dataset was reduced to 18.33% of the training data while maintaining good recognition performance on the test data.
Keywords: Artificial Neural Networks, ANNs, representation, memory, conflict monitoring, confidence.
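The conflict-guided selective encoding idea can be illustrated with a small stand-alone sketch: a new example is stored only when its retrieved neighbours are unfamiliar (far away) or disagree about the class. Synthetic 2-D points replace MNIST here, and the familiarity threshold and neighbourhood size are arbitrary; this mirrors the idea rather than the paper's network.

```python
# Instance memory with selective, conflict-guided encoding on synthetic 2-D data.
import numpy as np

rng = np.random.default_rng(12)

class InstanceMemory:
    def __init__(self, k=5, familiarity=1.0):
        self.k, self.familiarity = k, familiarity
        self.X, self.y = [], []

    def retrieve(self, x):
        d = np.linalg.norm(np.asarray(self.X) - x, axis=1)
        idx = np.argsort(d)[: self.k]
        return d[idx], np.asarray(self.y)[idx]

    def maybe_encode(self, x, label):
        if len(self.X) < self.k:                 # bootstrap the memory
            self.X.append(x); self.y.append(label)
            return True
        dists, labels = self.retrieve(x)
        conflict = len(set(labels)) > 1 or labels[0] != label
        unfamiliar = dists.mean() > self.familiarity
        if conflict or unfamiliar:               # encode only informative instances
            self.X.append(x); self.y.append(label)
            return True
        return False

data = rng.standard_normal((2000, 2)) + rng.integers(0, 2, 2000)[:, None] * 4.0
labels = (data[:, 0] > 2.0).astype(int)

mem = InstanceMemory()
stored = sum(mem.maybe_encode(x, c) for x, c in zip(data, labels))
print(f"encoded {stored} of {len(data)} presentations ({stored / len(data):.1%})")
```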
538 Opponent Color and Curvelet Transform Based Image Retrieval System Using Genetic Algorithm
Authors: Yesubai Rubavathi Charles, Ravi Ramraj
Abstract:
In order to retrieve images efficiently from a large database, a unique method integrating color and texture features using genetic programming has been proposed. An opponent color histogram, which provides invariance to shadow, shade, and light intensity, is employed in the proposed framework for extracting color features. For texture feature extraction, the fast discrete curvelet transform, which captures more orientation information at different scales, is incorporated to represent curve-like edges. A recurring issue in image retrieval is reducing the semantic gap between the user’s preference and low-level features. To address this concern, a genetic algorithm combined with relevance feedback is embedded to reduce the semantic gap and retrieve images matching the user’s preference. Extensive and comparative experiments have been conducted to evaluate the proposed framework for content-based image retrieval on two databases, i.e., COIL-100 and Corel-1000. The experimental results clearly show that the proposed system surpasses other existing systems in terms of precision and recall. The proposed work achieves the highest performance, with an average precision of 88.2% on COIL-100 and 76.3% on Corel, and an average recall of 69.9% on COIL and 76.3% on Corel. Thus, the experimental results confirm that the proposed content-based image retrieval system architecture attains a better solution for image retrieval.
Keywords: Content based image retrieval, Curvelet transform, Genetic algorithm, Opponent color histogram, Relevance feedback.
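The opponent color representation can be sketched directly: RGB is transformed into the classical opponent axes O1 = (R - G)/sqrt(2) and O2 = (R + G - 2B)/sqrt(6) (with O3 carrying intensity), and a histogram over (O1, O2) largely discards light-intensity variation. The image data and bin count below are placeholders.

```python
# Sketch of an opponent-colour histogram feature; placeholder image and bins.
import numpy as np

rng = np.random.default_rng(13)
image = rng.random((64, 64, 3))                   # stand-in RGB image in [0, 1]

def opponent_histogram(img, bins=8):
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    o1 = (r - g) / np.sqrt(2.0)                   # red-green opponent channel
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)         # yellow-blue opponent channel
    sample = np.stack([o1.ravel(), o2.ravel()], axis=1)
    hist, _ = np.histogramdd(sample, bins=bins,
                             range=[(-1 / np.sqrt(2), 1 / np.sqrt(2)),
                                    (-2 / np.sqrt(6), 2 / np.sqrt(6))])
    hist = hist.ravel()
    return hist / hist.sum()

feature = opponent_histogram(image)
print(feature.shape, round(float(feature.sum()), 3))   # (64,) colour descriptor, sums to 1
```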
537 Social Media Idea Ontology: A Concept for Semantic Search of Product Ideas in Customer Knowledge through User-Centered Metrics and Natural Language Processing
Authors: Martin Häusl, Maximilian Auch, Johannes Forster, Peter Mandl, Alexander Schill
Abstract:
In order to survive on the market, companies must constantly develop improved and new products. These products are designed to serve the needs of their customers in the best possible way. The creation of new products is also called innovation and is primarily driven by a company’s internal research and development department. However, a new approach has been emerging for some years now, involving external knowledge in the innovation process. This approach is called open innovation and identifies customer knowledge as the most important source in the innovation process. This paper presents a concept for using social media posts as an external source to support the open innovation approach in its initial phase, the ideation phase. For this purpose, the social media posts are semantically structured with the help of an ontology, and the authors are evaluated using graph-theoretical metrics such as density. For the structuring and evaluation of relevant social media posts, we also use findings from Natural Language Processing, e.g., Named Entity Recognition, specific dictionaries, a Triple Tagger and a Part-of-Speech Tagger. The selection and evaluation of the tools used are discussed in this paper. Using our ontology and metrics to structure social media posts enables users to semantically search these posts for new product ideas and thus gain improved insight into external sources such as customer needs.
Keywords: Idea ontology, innovation management, open innovation, semantic search.