Search results for: information entropy.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1762


1042 SIFT Accordion: A Space-Time Descriptor Applied to Human Action Recognition

Authors: Olfa Ben Ahmed, Mahmoud Mejdoub, Chokri Ben Amar

Abstract:

Recognizing human action from videos is an active field of research in computer vision and pattern recognition. Human activity recognition has many potential applications such as video surveillance, human-machine interaction, sports video retrieval and robot navigation. Currently, local descriptors and bag-of-visual-words models achieve state-of-the-art performance for human action recognition. The main challenge in feature description is how to represent local motion information efficiently. Most previous works focus on extending 2D local descriptors into 3D ones to describe the local information around every interest point. In this paper, we propose a new spatio-temporal descriptor based on a space-time description of moving points. Our description is built on an Accordion representation of video, which is well suited to recognizing human action from 2D local descriptors without the need for 3D extensions. We use the bag-of-words approach to represent videos. We quantize a 2D local descriptor that captures both temporal and spatial features, achieving a good compromise between computational complexity and action recognition rate. We reach strong results on a publicly available action data set.

Keywords: Accordion, Bag of Features, Human action, Motion, Moving point, Space-Time Descriptor, SIFT, Video.
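
As a rough illustration of the bag-of-visual-words step described above, the sketch below builds a visual vocabulary by k-means clustering of local descriptors and turns each video into a word histogram. It is a minimal, generic pipeline on synthetic descriptor arrays, not the authors' Accordion representation; the vocabulary size and data are placeholder assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical pre-extracted local descriptors, one (n_i x 128) array per video.
rng = np.random.default_rng(0)
videos = [rng.normal(size=(rng.integers(50, 200), 128)) for _ in range(10)]

# 1) Build the visual vocabulary by clustering all descriptors.
vocab_size = 32
kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
kmeans.fit(np.vstack(videos))

# 2) Quantize each video's descriptors and represent it as a normalized histogram.
def bovw_histogram(descriptors):
    words = kmeans.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(vocab_size + 1))
    return hist / hist.sum()

features = np.array([bovw_histogram(d) for d in videos])
print(features.shape)  # (10, 32): one fixed-length vector per video, ready for a classifier
```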

1041 Mining Genes Relations in Microarray Data Combined with Ontology in Colon Cancer Automated Diagnosis System

Authors: A. Gruzdz, A. Ihnatowicz, J. Siddiqi, B. Akhgar

Abstract:

The MATCH project [1] entails the development of an automatic diagnosis system that aims to support the treatment of colon cancer by discovering mutations that occur in tumour suppressor genes (TSGs) and contribute to the development of cancerous tumours. The system is based on a) colon cancer clinical data and b) biological information derived by data mining techniques from genomic and proteomic sources. The core mining module will consist of popular, well-tested hybrid feature extraction methods and new combined algorithms designed especially for the project. Elements of rough sets, evolutionary computing, cluster analysis, self-organizing maps and association rules will be used to discover the relations between genes and their influence on tumours [2]-[11]. The methods used to process the data have to address its high complexity, potential inconsistency and the problem of missing values. They must integrate all the useful information necessary to answer the expert's question. For this purpose, the system has to learn from data, or allow a domain specialist to interactively specify, the part of the knowledge structure it needs to answer a given query. The program should also take into account the importance/rank of the particular parts of the data it analyses and adjust the algorithms used accordingly.

Keywords: Bioinformatics, gene expression, ontology, self-organizing maps.

1040 Emotion Classification for Students with Autism in Mathematics E-learning using Physiological and Facial Expression Measures

Authors: Hui-Chuan Chu, Min-Ju Liao, Wei-Kai Cheng, William Wei-Jen Tsai, Yuh-Min Chen

Abstract:

Avoiding learning failures caused by emotional problems in students with autism in mathematics e-learning environments has become an important topic in combining special education with information and communications technology. This study presents an adaptive emotional adjustment model in mathematics e-learning for students with autism, addressing the lack of emotional perception in mathematics e-learning systems. In addition, an emotion classification for students with autism was developed by inducing emotions in mathematical learning environments and recording the changes in the students' physiological signals and facial expressions. Using these methods, 58 emotional features were obtained. These features were then processed using one-way ANOVA and information gain (IG). After reducing the feature dimension, support vector machines (SVM), k-nearest neighbors (KNN), and classification and regression trees (CART) were used to classify four emotional categories: baseline, happy, angry, and anxious. After testing and comparison, without feature selection the SVM classification accuracy reaches 79.3%. After using IG to reduce the feature dimension to 28 features, SVM still achieves a classification accuracy of 78.2%. The results of this research could enhance the effectiveness of e-learning in special education.

Keywords: Emotion classification, Physiological and facial Expression measures, Students with autism, Mathematics e-learning.
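
The feature-reduction-then-classification step described above (information gain down to 28 features, then SVM) can be sketched with standard tools. The snippet below uses scikit-learn's mutual-information criterion as an information-gain stand-in and synthetic placeholder data, so it illustrates the pipeline only, not the paper's signals or results.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: 58 emotional features, 4 emotion classes (baseline/happy/angry/anxious).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 58))
y = rng.integers(0, 4, size=200)

# Information-gain-style selection (mutual information) down to 28 features, then SVM.
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=28),
    SVC(kernel="rbf", C=1.0),
)
scores = cross_val_score(clf, X, y, cv=5)
print("mean accuracy:", scores.mean())
```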

1039 Building a Personalized Multidimensional Intelligent Learning System

Authors: Lun-Ping Hung, Nan-Chen Hsieh, Chia-Ling Ho, Chien-Liang Chen

Abstract:

Currently, most distance learning courses can only deliver standard material to students. Students receive course content passively, which neglects the goal of education: "to suit the teaching to the ability of students". Providing appropriate course content according to students' ability is the main goal of this paper. Besides offering a series of conventional learning services, abundant available information, and instant message delivery, a complete online learning environment should be able to distinguish between students' abilities and provide learning courses that best suit them. However, if a distance learning site contains well-designed course content but fails to provide adaptive courses, students will gradually lose their interest and confidence in learning, resulting in ineffective or discontinued learning. In this paper, an intelligent tutoring system is proposed; it consists of several modules working cooperatively to build an adaptive learning environment for distance education. The operation of the system is based on the result of a Self-Organizing Map (SOM), which divides students into different groups according to their learning ability and learning interests and then provides them with suitable course content. Accordingly, the problems of information overload and internet traffic can be alleviated because the amount of traffic accessing the same content is reduced.

Keywords: Distance Learning, Intelligent Tutoring System (ITS), Self-Organizing Map (SOM).
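
A minimal Self-Organizing Map, the grouping mechanism the system relies on, can be sketched directly in NumPy. The learner profiles, grid size and learning schedule below are illustrative assumptions; a student's trained grid cell stands in for the group used to select course content.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Train a small Self-Organizing Map; returns a weight grid of shape (rows, cols, dim)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.normal(size=(rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)
        sigma = sigma0 * np.exp(-t / epochs)
        for x in rng.permutation(data):
            # Best-matching unit (closest node).
            d = np.linalg.norm(w - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Neighbourhood-weighted update pulls nearby nodes toward the sample.
            h = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)
    return w

def assign_group(w, x):
    d = np.linalg.norm(w - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# Hypothetical learner profiles: [ability score, interest score], both in [0, 1].
rng = np.random.default_rng(1)
students = rng.random((60, 2))
som = train_som(students)
print(assign_group(som, students[0]))  # grid cell = learner group used to pick course content
```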

1038 Speaker Identification using Neural Networks

Authors: R. V. Pawar, P. P. Kajave, S. N. Mali

Abstract:

The speech signal conveys information about the identity of the speaker. The area of speaker identification is concerned with extracting the identity of the person speaking an utterance. As speech interaction with computers becomes more pervasive in activities such as telephone use, financial transactions and information retrieval from speech databases, the utility of automatically identifying a speaker based solely on vocal characteristics grows. This paper focuses on text-dependent speaker identification, which deals with detecting a particular speaker from a known population. The system prompts the user to provide a speech utterance, identifies the user by comparing the codebook of the utterance with those stored in the database, and lists the most likely speakers who could have given that utterance. The speech signal is recorded for N speakers, and features are then extracted. Feature extraction is done by means of LPC coefficients, calculation of the AMDF, and the DFT. A neural network is trained by applying these features as input parameters. The features are stored in templates for further comparison. The features of the speaker to be identified are extracted and compared with the stored templates using the backpropagation algorithm. Here, the trained network corresponds to the output, and the input is the extracted features of the speaker to be identified. The network performs weight adjustment, and the best match is found to identify the speaker. The number of epochs required to reach the target determines the network performance.

Keywords: Average Mean Distance Function, Backpropagation, Linear Predictive Coding, Multilayered Perceptron.
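
A compact sketch of the feature-plus-network idea is given below: an AMDF feature vector is computed per utterance and fed to a multilayer perceptron trained by backpropagation (scikit-learn's MLPClassifier). The "utterances" are synthetic pitched signals rather than recordings, and the paper's LPC/DFT features and codebook comparison are omitted, so this is only an outline of the approach.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def amdf(frame, max_lag=64):
    """Average magnitude difference function over lags 1..max_lag."""
    return np.array([np.mean(np.abs(frame[k:] - frame[:-k])) for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)

def fake_utterance(pitch_hz, sr=8000, dur=0.05):
    """Placeholder 'utterance': a noisy sinusoid at a speaker-specific pitch."""
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * pitch_hz * t) + 0.1 * rng.normal(size=t.size)

speakers = {0: 110.0, 1: 180.0, 2: 240.0}   # hypothetical speaker id -> pitch
X, y = [], []
for sid, pitch in speakers.items():
    for _ in range(30):
        X.append(amdf(fake_utterance(pitch * rng.uniform(0.97, 1.03))))
        y.append(sid)

# Multilayer perceptron trained by backpropagation on the AMDF feature templates.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([amdf(fake_utterance(180.0))]))  # expected: speaker 1
```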

1037 An Evaluation of Land Use Control in Hokkaido, Japan

Authors: Kayoko Yamamoto

Abstract:

This study focuses on an evaluation of Hokkaido, the northernmost and largest prefecture by surface area in Japan, with particular attention to two points: the rivalry between different kinds of land use, such as urban land and agricultural and forestry land, in various cities and their surrounding areas, and the possibilities for forestry biomass in areas other than those mentioned above. It identifies which areas require examination of land use control and guidance by conducting land use analysis at the district level using GIS (Geographic Information Systems). The results of the analysis demonstrate that it is essential to divide the whole of Hokkaido into two areas, those within delineated city planning areas and those outside them, and to conduct an evaluation of land use control for each. In delineated urban areas, particularly urban areas, it is essential to re-examine land use from the point of view of compact cities or smart cities, along with conducting an evaluation of land use control that focuses on the rivalry between different kinds of land use such as urban land and agricultural and forestry land. In areas outside of delineated urban areas, it is desirable to aim to build a specific community recycling range based on forest biomass utilization by conducting an evaluation of land use control concerning the possibilities for forest biomass, focusing particularly on forests within and outside of city planning areas.

Keywords: Land Use Control, Urbanization, Forestry Biomass, Geographic Information Systems (GIS), Hokkaido

1036 Numerical Optimization within Vector of Parameters Estimation in Volatility Models

Authors: J. Arneric, A. Rozga

Abstract:

In this paper, the usefulness of the quasi-Newton iteration procedure for parameter estimation of the conditional variance equation within the BHHH algorithm is presented. An analytical solution to maximization of the likelihood function using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm in comparison to other optimization algorithms is that it requires no third derivatives and has assured convergence. To simplify the optimization procedure, the BHHH algorithm uses an approximation of the matrix of second derivatives according to the information identity. However, parameter estimation in symmetric and asymmetric GARCH(1,1) models assuming a normal distribution of returns is not that simple, i.e. it is difficult to solve analytically. The maximum of the likelihood function can be found by an iteration procedure that runs until no further increase can be found. Because the solutions of the numerical optimization are very sensitive to the initial values, starting parameters for the GARCH(1,1) model are defined. The number of iterations can be reduced by using starting values close to the global maximum. The optimization procedure is illustrated in the framework of modeling daily volatility of the most liquid stocks on the Croatian capital market: Podravka stocks (food industry), Petrokemija stocks (fertilizer industry) and Ericsson Nikola Tesla stocks (information and communications industry).

Keywords: Heteroscedasticity, Log-likelihood Maximization, Quasi-Newton iteration procedure, Volatility.
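
The kind of numerical likelihood maximization discussed above can be sketched as follows for a GARCH(1,1) model with Gaussian returns. The example uses SciPy's L-BFGS-B quasi-Newton routine on simulated returns rather than the BHHH update or the Croatian stock data, so it only illustrates the estimation idea and the role of starting values.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, returns):
    """Negative Gaussian log-likelihood of a GARCH(1,1) model."""
    omega, alpha, beta = params
    n = returns.size
    sigma2 = np.empty(n)
    sigma2[0] = returns.var()                        # starting variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + returns ** 2 / sigma2)

# Simulated daily returns standing in for stock data.
rng = np.random.default_rng(0)
true_omega, true_alpha, true_beta = 1e-5, 0.08, 0.9
n = 2000
r = np.empty(n)
s2 = 1e-4
for t in range(n):
    r[t] = rng.normal(scale=np.sqrt(s2))
    s2 = true_omega + true_alpha * r[t] ** 2 + true_beta * s2

# Quasi-Newton (L-BFGS-B) maximization of the likelihood; BHHH would instead
# approximate the Hessian from outer products of the score (information identity).
start = np.array([r.var() * 0.05, 0.05, 0.9])        # sensible starting values
res = minimize(garch11_neg_loglik, start, args=(r,), method="L-BFGS-B",
               bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)])
print("omega, alpha, beta =", res.x)
```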

1035 FSM-based Recognition of Dynamic Hand Gestures via Gesture Summarization Using Key Video Object Planes

Authors: M. K. Bhuyan

Abstract:

The use of the human hand as a natural interface for human-computer interaction (HCI) serves as the motivation for research in hand gesture recognition. Vision-based hand gesture recognition involves visual analysis of hand shape, position and/or movement. In this paper, we use the concept of object-based video abstraction for segmenting the frames into video object planes (VOPs), as used in MPEG-4, with each VOP corresponding to one semantically meaningful hand position. Next, the key VOPs are selected on the basis of the amount of change in hand shape: for a given key frame in the sequence, the next key frame is the one in which the hand changes its shape significantly. Thus, an entire video clip is transformed into a small number of representative frames that are sufficient to represent a gesture sequence. Subsequently, we model a particular gesture as a sequence of key frames, each bearing information about its duration. These constitute a finite state machine. For recognition, the states of the incoming gesture sequence are matched with the states of all the FSMs contained in the database of the gesture vocabulary. The core idea of our proposed representation is that redundant frames of the gesture video sequence bear only the temporal information of a gesture and hence are discarded for computational efficiency. Experimental results demonstrate the effectiveness of our proposed scheme for key frame extraction, subsequent gesture summarization and finally gesture recognition.

Keywords: Hand gesture, MPEG-4, Hausdorff distance, finite state machine.
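
The FSM matching step can be pictured with a small sketch: each gesture in the vocabulary is a sequence of key-frame states with duration bounds, and an incoming summarized gesture is accepted by the FSM whose states it traverses in order. The shape labels, durations and vocabulary below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class State:
    key_frame: str          # label of the hand shape in this key frame
    min_dur: int            # minimum number of frames the shape must persist
    max_dur: int            # maximum number of frames

def matches_fsm(states, observed):
    """Check whether an observed (shape, duration) sequence traverses the FSM's states in order."""
    if len(observed) != len(states):
        return False
    for st, (shape, dur) in zip(states, observed):
        if shape != st.key_frame or not (st.min_dur <= dur <= st.max_dur):
            return False
    return True

# Hypothetical gesture vocabulary: each gesture is an FSM (a sequence of key-frame states).
vocabulary = {
    "wave":  [State("open", 2, 10), State("tilt_left", 2, 8), State("tilt_right", 2, 8)],
    "point": [State("open", 2, 10), State("index_out", 3, 20)],
}

# Incoming summarized gesture: key VOP shape labels with their durations (in frames).
incoming = [("open", 4), ("index_out", 7)]
recognized = [name for name, fsm in vocabulary.items() if matches_fsm(fsm, incoming)]
print(recognized)  # ['point']
```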

1034 3D Dense Correspondence for 3D Dense Morphable Face Shape Model

Authors: Tae in Seol, Sun-Tae Chung, Seongwon Cho

Abstract:

A realistic 3D face model is desired in various applications such as face recognition, games, avatars and animations. Construction of a 3D face model consists of 1) building a face shape model and 2) rendering the face shape model. Thus, building a realistic 3D face shape model is an essential step toward a realistic 3D face model. Recently, the 3D morphable model has been successfully introduced to deal with the variety of human face shapes. The 3D dense correspondence problem must be resolved first in order to construct a realistic 3D dense morphable face shape model. Several approaches to the 3D dense correspondence problem in 3D face modeling have been proposed previously; among them, optical flow based algorithms and TPS (Thin Plate Spline) based algorithms are representative. Optical flow based algorithms require texture information of faces, which is sensitive to variation of illumination. In the TPS based algorithms proposed so far, the TPS process is performed on a 2D projection of the 3D face data in cylindrical coordinates, not directly on the 3D face data, and thus errors due to distortion of the data during the 2D TPS process may be inevitable. In this paper, we propose a new 3D dense correspondence algorithm for 3D dense morphable face shape modeling. The proposed algorithm does not need texture information and applies TPS directly to the 3D face data. Through construction procedures, it is observed that the proposed algorithm constructs a realistic 3D morphable face model reliably and quickly.

Keywords: 3D Dense Correspondence, 3D Morphable Face Shape Model, 3D Face Modeling.

1033 An Analysis of Economic Capital Allocation of Global Banks

Authors: Petr Teply, Ondrej Vejdovec

Abstract:

There are three main ways of categorizing capital in banking operations: accounting, regulatory and economic capital. However, the 2008-2009 global crisis has shown that none of these categories adequately reflects the real risks of bank operations, especially in light of the failures of Bear Stearns, Lehman Brothers and Northern Rock. This paper deals with the economic capital allocation of global banks. In theory, economic capital should reflect the real risks of a bank and should be publicly available. Yet, as discovered during the global financial crisis, even when economic capital information was publicly disclosed, the underlying assumptions rendered the information useless. Specifically, some global banks that reported relatively high levels of economic capital before the crisis went bankrupt or had to be bailed out by their government. Moreover, only 15 out of 50 global banks reported their economic capital during the 2007-2010 period. In this paper, we analyze the changes in reported bank economic capital disclosure during this period. We conclude that the relative shares of credit and business risks increased in 2010 compared to 2007, while both operational and market risks decreased their shares of the total economic capital of top-rated global banks. Generally speaking, higher levels of disclosure and transparency of bank operations are required to obtain more confidence from stakeholders. Moreover, additional risks such as liquidity risk should be included in these disclosures.

Keywords: global crisis, economic capital, risk management, risk allocation, bank

1032 Asynchronous Parallel Distributed Genetic Algorithm with Elite Migration

Authors: Kazunori Kojima, Masaaki Ishigame, Goutam Chakraborty, Hiroshi Hatsuo, Shozo Makino

Abstract:

In most popular implementations of parallel GAs, the whole population is divided into a set of subpopulations, each subpopulation executes a GA independently and some individuals are migrated at fixed intervals on a ring topology. In these studies, the migrations usually occur 'synchronously' among subpopulations. Therefore, CPUs are not used efficiently and the communication does not occur efficiently either. A few studies have tried asynchronous migration, but it is hard to implement and setting proper parameter values is difficult. The aim of our research is to develop a migration method which is easy to implement, whose parameter values are easy to set, and which reduces communication traffic. In this paper, we propose a traffic reduction method for the asynchronous parallel distributed GA based on migration of elites only. This is a server-client model. Every client executes a GA on a subpopulation and sends its elite information to the server. The server manages the elite information of each client, and migrations occur according to the evolution of the subpopulation in a client. This facilitates the reduction in communication traffic. To evaluate our proposed model, we apply it to many function optimization problems. We confirm that our proposed method performs as well as current methods, that communication traffic is lower, and that setting the parameters is much easier.

Keywords: Parallel Distributed Genetic Algorithm (PDGA), asynchronous PDGA, Server-Client configuration, Elite Migration.

1031 Processing the Medical Sensors Signals Using Fuzzy Inference System

Authors: S. Bouharati, I. Bouharati, C. Benzidane, F. Alleg, M. Belmahdi

Abstract:

Sensors possess several properties of physical measures. Whether they are devices that convert a sensed signal into an electrical signal, chemical sensors or biosensors, all these sensors can be considered an interface between the physical world and electrical equipment. The problem is the analysis of the multitude of saved settings as input variables; they do not all have the same level of influence on the outputs. The most sensitive parameters therefore need to be identified, those that can guide users in gathering information in the field and in the process of model calibration and sensitivity analysis of the effect of each change made. The mathematical models used for this processing become very complex. In this paper, a fuzzy rule-based system is proposed as a solution to this problem. The system collects the available signal information from the sensors. Moreover, the system allows the study of the influence of the various factors that take part in the decision system. Since its inception, fuzzy set theory has been regarded as a formalism suitable to deal with the imprecision intrinsic to many problems. At the same time, fuzzy sets allow the use of symbolic models. In this study, an example is applied to a variety of physiological parameters that define the state of human health, and the system is intended as an aid to medical diagnosis. The inputs are signals expressing cardiovascular system parameters, blood pressure and respiratory system parameters. Once built, the system is able to predict the state of the patient for any input values.

Keywords: Sensors, Sensitivity, fuzzy logic, analysis, physiological parameters, medical diagnosis.
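
A toy Mamdani-style fuzzy inference step, in the spirit of the rule-based system described above, is sketched below with triangular membership functions, min/max rule evaluation and centroid defuzzification. The input ranges, rules and risk output are illustrative assumptions only and are not medically meaningful.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def diagnose(systolic_bp, heart_rate):
    """Toy Mamdani inference: two inputs -> risk score in [0, 1] (illustrative rules only)."""
    # Fuzzify the inputs.
    bp_normal, bp_high = tri(systolic_bp, 90, 115, 140), tri(systolic_bp, 125, 170, 220)
    hr_normal, hr_high = tri(heart_rate, 50, 70, 95), tri(heart_rate, 85, 130, 180)
    # Rule base: min for AND, max to aggregate rules firing the same output.
    risk_low  = min(bp_normal, hr_normal)
    risk_high = max(min(bp_high, hr_high), bp_high, hr_high)
    # Defuzzify by centroid over the clipped output sets.
    x = np.linspace(0, 1, 201)
    out = np.maximum(np.minimum(tri(x, -0.5, 0.0, 0.5), risk_low),
                     np.minimum(tri(x, 0.5, 1.0, 1.5), risk_high))
    return float(np.sum(x * out) / (np.sum(out) + 1e-12))

print(diagnose(120, 72))   # low risk score
print(diagnose(185, 140))  # high risk score
```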

1030 Information and Communication Technologies in Collaboration Projects via the Internet

Authors: Murat Öztok, Nesrin Özdener

Abstract:

The aim of this study is to determine the basic information and communication technology (ICT) skills that may be needed by students in the 8th grade of primary education for their cooperative project work implemented via the Internet. Within the scope of the study, the curriculum used for the European Computer Driving License (ECDL) and the curriculum used in Turkey are also compared in terms of the ICT skills they aim to provide to students. The research population of the study, in which a pretest-posttest control group experimental model was used, consisted of 40 students from three different schools. In the first stage of the study, the skills that might be needed by students for their cooperative project work implemented via the Internet were determined through examination of completed Comenius, e-twinning and WorldLinks projects. In the second stage of the study, the curricula of the Turkish Ministry of National Education (MEB) and the ECDL were evaluated by seven different teachers in line with these skills. The ECDL and MEB curricula were also compared in terms of their capability to provide the skills needed to implement cooperative projects via the Internet. In line with the findings of the study, the skills that might be needed by students to implement cooperative projects via the Internet were outlined, and a significant difference was established in favor of the ECDL curriculum upon comparison of both curricula against this outline (U = 50.500; p < 0.05). The findings of the study also suggest that the students had considerable deficiencies in implementing cooperative projects via the Internet without the ICT infrastructure.

Keywords: Collaboration Projects, Comenius, Curriculum, ICT.

1029 Using Data Mining Techniques for Finding Cardiac Outlier Patients

Authors: Farhan Ismaeel Dakheel, Raoof Smko, K. Negrat, Abdelsalam Almarimi

Abstract:

In this paper we use data mining techniques to identify outlier patients who use large amounts of drugs over a long period of time. Any healthcare or health insurance system should deal with the quantities of drugs utilized by chronic disease patients. In the Kingdom of Bahrain, about 20% of the health budget is spent on medications. The managers of healthcare systems do not have enough information about how drugs are utilized by chronic disease patients, whether there is misuse, or whether there are outlier patients. In this work, which has been done in cooperation with the information department of the Bahrain Defence Force hospital, we select the data for cardiac patients in the period from 1/1/2008 to 31/12/2008 as the data for the model in this paper. We use three techniques for analysing drug utilization by cardiac patients. First we apply a clustering technique, followed by a measure of clustering validity, and finally we apply a decision tree as a classification algorithm. The clustering results divide the 1,603 patients, who received 15,806 prescriptions during this period, into three groups according to drug utilization, where 23 patients (2.59%) who received 1,316 prescriptions (8.32%) are classified as outliers. The classification algorithm shows that average drug utilization, the age and the gender of the patient can be considered the main predictive factors in the induced model.

Keywords: Data Mining, Clustering, Classification, Drug Utilization.
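
The three-step pipeline described above (clustering, validity check, decision-tree classification) can be sketched with scikit-learn. The patient records below are synthetic placeholders, and treating the smallest cluster as the outlier group is an assumption made for the illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder patient records: [age, gender (0/1), prescriptions per year, avg drug quantity].
rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(60, 10, 300), rng.integers(0, 2, 300),
                          rng.normal(10, 3, 300), rng.normal(30, 8, 300)])
heavy  = np.column_stack([rng.normal(65, 10, 20), rng.integers(0, 2, 20),
                          rng.normal(55, 10, 20), rng.normal(180, 30, 20)])
X = np.vstack([normal, heavy])

# 1) Cluster drug utilization and check cluster validity.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("silhouette:", silhouette_score(X, km.labels_))

# 2) Treat the smallest cluster as the outlier group, then induce a
#    decision tree that explains membership in it.
sizes = np.bincount(km.labels_)
outlier_cluster = np.argmin(sizes)
y = (km.labels_ == outlier_cluster).astype(int)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "gender", "rx_per_year", "avg_quantity"]))
```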

1028 A Game-Theoretic Approach to Hedonic Housing Prices

Authors: Cielito F. Habito, Michael O. Santos, Andres G. Victorio

Abstract:

A property's selling price is described as the result of sequential bargaining between a buyer and a seller in an environment of asymmetric information. Hedonic housing prices are estimated based upon 17,333 records of New Zealand residential properties sold during the years 2006 and 2007.

Keywords: Housing demand, hedonics and valuation, residential markets.
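
A minimal hedonic regression, the estimation idea behind the abstract, can be sketched as a log-price regression on property attributes, where each coefficient is read as the implicit price of that attribute. The attributes and synthetic data below are placeholders, not the New Zealand records.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic listings standing in for the residential sales records.
rng = np.random.default_rng(0)
n = 500
floor_area = rng.normal(150, 40, n)            # m^2
bedrooms   = rng.integers(1, 6, n)
dist_cbd   = rng.uniform(1, 30, n)             # km to city centre
log_price  = (11.5 + 0.004 * floor_area + 0.08 * bedrooms - 0.02 * dist_cbd
              + rng.normal(0, 0.1, n))

# Hedonic specification: log(price) regressed on the property's attributes,
# so each coefficient reads as an implicit (shadow) price of that attribute.
X = np.column_stack([floor_area, bedrooms, dist_cbd])
model = LinearRegression().fit(X, log_price)
print(dict(zip(["floor_area", "bedrooms", "dist_cbd"], model.coef_.round(4))))
```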

1027 Quality Evaluation of Compressed MRI Medical Images for Telemedicine Applications

Authors: Seddeq E. Ghrare, Salahaddin M. Shreef

Abstract:

Medical image modalities such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US) and X-ray are adopted to diagnose disease. These modalities provide flexible means of reviewing anatomical cross-sections and physiological state in different parts of the human body. Raw medical images have a huge file size and large storage requirements, so there should be a way to reduce the size of these image files to make them suitable for telemedicine applications. Image compression is thus a key factor in reducing the bit rate for transmission or storage while maintaining an acceptable reproduction quality, but it is natural to raise the question of how much an image can be compressed and still preserve sufficient information for a given clinical application. Many techniques for achieving data compression have been introduced. In this study, three different MRI modalities, Brain, Spine and Knee, have been compressed and reconstructed using the wavelet transform. Subjective and objective evaluation has been done to investigate the clinical information quality of the compressed images. For the objective evaluation, the results show that the PSNR, which indicates the quality of the reconstructed image, ranges from 21.95 dB to 30.80 dB, 27.25 dB to 35.75 dB, and 26.93 dB to 34.93 dB for Brain, Spine and Knee images respectively. For the subjective evaluation test, the results show that a compression ratio of 40:1 was acceptable for the brain image, whereas for the spine and knee images 50:1 was acceptable.

Keywords: Medical Image, Magnetic Resonance Imaging, Image Compression, Discrete Wavelet Transform, Telemedicine.
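
A small sketch of wavelet-based compression with PSNR evaluation is shown below using PyWavelets: decompose, keep only the largest coefficients, reconstruct, and measure PSNR. The wavelet choice, the kept-coefficient fraction and the random placeholder "slice" are assumptions; a real evaluation would load the Brain/Spine/Knee images.

```python
import numpy as np
import pywt

def compress(image, wavelet="db4", level=3, keep=0.05):
    """Wavelet-threshold compression: keep only the largest `keep` fraction of coefficients."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1 - keep)
    arr[np.abs(arr) < thresh] = 0.0
    return pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)

def psnr(original, reconstructed, peak=255.0):
    rec = reconstructed[:original.shape[0], :original.shape[1]]
    mse = np.mean((original.astype(float) - rec) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Placeholder image standing in for an MRI slice.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256)).astype(float)
rec = compress(img, keep=0.05)
print("PSNR (dB):", round(psnr(img, rec), 2))
```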

1026 A Novel Machining Signal Filtering Technique: Z-notch Filter

Authors: Nuawi M. Z., Lamin F., Ismail A. R., Abdullah S., Wahid Z.

Abstract:

A filter is used to remove undesirable frequency information from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove the noise nuisance from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system, the motor and the machine environment. By correlating the noise components with the measured machining signal, the components of interest in the measured machining signal, which are less affected by the noise, can be extracted. Thus, the filtered signal is more reliable to analyse in terms of noise content than the unfiltered signal. Significantly, the I-kaz method, which comprises a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and the unfiltered signal. A bigger scattering space and a higher value of Z∞ demonstrate that the signal is highly interrupted by noise. This method can be utilised as a proactive tool in evaluating the noise content in a signal. The evaluation of noise content, as well as its elimination, is very important, especially for machining operation fault diagnosis. The Z-notch filtering technique was reliable in extracting the noise component from the measured machining signal with high efficiency. Even though the measured signal was exposed to high noise disruption, the signal generated from the interaction between the cutting tool and the workpiece could still be acquired. Therefore, noise interference that could change the original signal features and consequently degrade the useful sensory information can be eliminated.

Keywords: Digital signal filtering, I-kaz method, Machining monitoring, Noise cancelling, Sound.
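
For intuition, the snippet below applies a standard IIR notch filter (SciPy) to suppress an identified narrow-band noise component in a synthetic machining-like signal. This is a generic notch filter used as a stand-in; it is not the authors' Z-notch technique or the I-kaz analysis.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

# Synthetic "machining signal": a useful component plus a narrow-band noise tone
# picked up from the machine itself (e.g. a motor or hydraulic line frequency).
fs = 5000.0                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
useful = np.sin(2 * np.pi * 120 * t)          # signal of interest
noise  = 0.8 * np.sin(2 * np.pi * 850 * t)    # identified noise component
x = useful + noise

# IIR notch centred on the identified noise frequency.
b, a = iirnotch(w0=850.0, Q=30.0, fs=fs)
x_filtered = filtfilt(b, a, x)

print("noise power before:", round(np.mean((x - useful) ** 2), 3))
print("noise power after: ", round(np.mean((x_filtered - useful) ** 2), 3))
```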

1025 Video Matting based on Background Estimation

Authors: J.-H. Moon, D.-O Kim, R.-H. Park

Abstract:

This paper presents a video matting method which extracts the foreground and alpha matte from a video sequence. The objective of video matting is to find the foreground and composite it with a background that is different from the one in the original image. By finding the motion vectors (MVs) using a sliced block matching algorithm (SBMA), we can extract moving regions from the video sequence under the assumption that the foreground is moving and the background is stationary. In practice, foreground areas are not moving through all frames in an image sequence, thus we accumulate moving regions through the image sequence. The boundaries of moving regions are found by a Canny edge detector and the foreground region is separated in each frame of the sequence. The remaining regions are defined as background regions. The backgrounds extracted in each frame are combined and reframed as an integrated single background. Based on the estimated background, we compute the frame difference (FD) of each frame. Regions with an FD larger than the threshold are defined as foreground regions, the boundaries of foreground regions are defined as unknown regions and the rest of the regions are defined as background. Segmentation information that classifies an image into foreground, background and unknown regions is called a trimap. The matting process can extract an alpha matte in the unknown region using pixel information in the foreground and background regions, and estimate the values of foreground and background pixels in unknown regions. The proposed video matting approach is adaptive and convenient for extracting a foreground automatically and compositing it with a background different from the original background.

Keywords: Background estimation, Object segmentation, Block matching algorithm, Video matting.
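
The background-estimation and trimap steps can be sketched in NumPy: the temporal median approximates the stationary background, frame differencing marks foreground, and a thin dilated band around it is labeled unknown. The toy sequence and thresholds are assumptions, and the MV-based moving-region extraction and the alpha-matte solver themselves are omitted.

```python
import numpy as np

def estimate_background(frames):
    """Per-pixel temporal median over the sequence approximates the stationary background."""
    return np.median(frames, axis=0)

def build_trimap(frame, background, fg_thresh=40, unknown_band=2):
    """Frame-difference trimap: 1 = foreground, 0 = background, 0.5 = unknown boundary band."""
    diff = np.abs(frame.astype(float) - background)
    fg = diff > fg_thresh
    # Mark a thin band around the foreground as 'unknown' via a simple dilation.
    pad = np.pad(fg, unknown_band)
    dil = np.zeros_like(fg)
    for dy in range(-unknown_band, unknown_band + 1):
        for dx in range(-unknown_band, unknown_band + 1):
            dil |= pad[unknown_band + dy: unknown_band + dy + fg.shape[0],
                       unknown_band + dx: unknown_band + dx + fg.shape[1]]
    trimap = np.zeros(fg.shape)
    trimap[dil] = 0.5
    trimap[fg] = 1.0
    return trimap

# Toy grayscale sequence: static background with a bright square moving across it.
rng = np.random.default_rng(0)
bg = rng.integers(0, 60, size=(64, 64)).astype(float)
frames = []
for i in range(10):
    f = bg.copy()
    f[20:30, 5 + 5 * i: 15 + 5 * i] = 200.0   # moving foreground patch
    frames.append(f)
frames = np.array(frames)

trimap = build_trimap(frames[4], estimate_background(frames))
print((trimap == 1).sum(), "foreground px,", (trimap == 0.5).sum(), "unknown px")
```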

1024 Understanding and Designing Situation-Aware Mobile and Ubiquitous Computing Systems

Authors: Kai Häussermann, Christoph Hubig, Paul Levi, Frank Leymann, Oliver Siemoneit, Matthias Wieland, Oliver Zweigle

Abstract:

Using spatial models as a shared common basis of information about the environment for different kinds of context-aware systems has been a heavily researched topic in recent years. That research has focused on how to create, update and merge spatial models so as to enable highly dynamic, consistent and coherent spatial models at large scale. In this paper, however, we concentrate on how context-aware applications could use this information to adapt their behavior according to the situation they are in. The main idea is to provide the spatial model infrastructure with a situation recognition component based on generic situation templates. A situation template is, as part of a much larger situation template library, an abstract, machine-readable description of a certain basic situation type which could be used by different applications to evaluate their situation. In this paper, different theoretical and practical issues (technical, ethical and philosophical) that are important for understanding and developing situation-dependent systems based on situation templates are discussed. A basic system design is presented which allows for reasoning with uncertain data using an improved version of a learning algorithm for the automatic adaptation of situation templates. Finally, to support the development of adaptive applications, we present a new situation-aware adaptation concept based on workflows.

Keywords: context-awareness, ethics, facilitation of system use through workflows, situation recognition and learning based on situation templates and situation ontologies, theory of situation-aware systems.

1023 Context Aware Lightweight Energy Efficient Framework

Authors: D. Sathan, A. Meetoo, R. K. Subramaniam

Abstract:

Context awareness is a capability whereby mobile computing devices can sense their physical environment and adapt their behavior accordingly. The term context-awareness, in ubiquitous computing, was introduced by Schilit in 1994 and has become one of the most exciting concepts in early 21st-century computing, fueled by recent developments in pervasive computing (i.e. mobile and ubiquitous computing). These include computing devices worn by users, embedded devices, smart appliances, sensors surrounding users and a variety of wireless networking technologies. Context-aware applications use context information to adapt interfaces, tailor the set of application-relevant data, increase the precision of information retrieval, discover services, make the user interaction implicit, or build smart environments. For example: A context aware mobile phone will know that the user is currently in a meeting room, and reject any unimportant calls. One of the major challenges in providing users with context-aware services lies in continuously monitoring their contexts based on numerous sensors connected to the context aware system through wireless communication. A number of context aware frameworks based on sensors have been proposed, but many of them have neglected the fact that monitoring with sensors imposes heavy workloads on ubiquitous devices with limited computing power and battery. In this paper, we present CALEEF, a lightweight and energy efficient context aware framework for resource limited ubiquitous devices.

Keywords: Context-Aware, Energy-Efficient, Lightweight, Ubiquitous Devices.

1022 Behavioral Analysis of Team Members in Virtual Organization based on Trust Dimension and Learning

Authors: Indiramma M., K. R. Anandakumar

Abstract:

Trust management and reputation models are becoming an integral part of Internet-based applications such as CSCW, e-commerce and grid computing. The trust dimension is also a significant social structure and key to social relations within a collaborative community. Collaborative decision making (CDM) is a difficult task in the context of a distributed environment (information across different geographical locations) where multidisciplinary decisions are involved, such as a Virtual Organization (VO). To aid team decision making in a VO, decision support system and social network analysis approaches are integrated. In such situations, social learning helps an organization in terms of relationships, team formation, partner selection, etc. In this paper we focus on trust learning. Trust learning is an important activity in terms of information exchange, negotiation, collaboration and trust assessment for cooperation among virtual team members. We propose a reinforcement learning approach which enhances the trust decision making capability of interacting agents during collaboration in problem solving activity. The trust computational model with learning that we present is adapted for selecting the best alternative for a new project in the organization. We verify our model in a multi-agent simulation where the agents in the community learn to identify trustworthy members and the inconsistent and conflicting behavior of agents.

Keywords: Collaborative Decision making, Trust, Multi Agent System (MAS), Bayesian Network, Reinforcement Learning.

1021 A Simplified and Effective Algorithm Used to Mine Similar Processes: An Illustrated Example

Authors: Min-Hsun Kuo, Yun-Shiow Chen

Abstract:

The running logs of a process hold valuable information about its executed activity behavior and generated activity logic structure. These informative logs can be extracted, analyzed and utilized to improve the efficiency of the process's execution and conduct. One of the techniques used to accomplish such process improvement is called process mining. Mining similar processes is one such improvement task in process mining. Rather than directly mining similar processes using a single comparison coefficient or a complicated fitness function, this paper presents a simplified heuristic process mining algorithm with two similarity comparisons that are able to relatively conform the activity logic sequences (traces) of the mined processes to those of a normalized (regularized) one. The relative process conformance is used to find which of the mined processes match the required activity sequences and relationships, and further to determine the necessary and sufficient applications of the mined processes to process improvements. One similarity is defined by the relationships in terms of the number of similar activity sequences existing in different processes; the other similarity expresses the degree of similar (identical) activity sequences among the conforming processes. Since these two similarities are with respect to certain typical behavior (activity sequences) occurring in an entire process, the common problems, such as the inappropriateness of an absolute comparison and the incapability of eliciting intrinsic information, which often appear in other process conformance techniques, can be solved by the relative process comparison presented in this paper. To demonstrate the potential of the proposed algorithm, a numerical example is illustrated.

Keywords: process mining, process similarity, artificial intelligence, process conformance.

1020 Highlighting Document's Structure

Authors: Sylvie Ratté, Wilfried Njomgue, Pierre-André Ménard

Abstract:

In this paper, we present symbolic recognition models to extract knowledge characterized by document structures. Focussing on the extraction and the meticulous exploitation of the semantic structure of documents, we obtain a meaningful contextual tagging corresponding to different unit types (title, chapter, section, enumeration, etc.).

Keywords: Information retrieval, document structures, symbolic grammars.

1019 Empirical Evidence on Equity Valuation of Thai Firms

Authors: Somchai Supattarakul, Anya Khanthavit

Abstract:

This study aims at providing empirical evidence on a comparison of two equity valuation models: (1) the dividend discount model (DDM) and (2) the residual income model (RIM), in estimating equity values of Thai firms during 1995-2004. Results suggest that DDM and RIM underestimate equity values of Thai firms and that RIM outperforms DDM in predicting cross-sectional stock prices. Results on regression of cross-sectional stock prices on the decomposed DDM and RIM equity values indicate that book value of equity provides the greatest incremental explanatory power, relative to other components in DDM and RIM terminal values, suggesting that book value distortions resulting from accounting procedures and choices are less severe than forecast and measurement errors in discount rates and growth rates. We also document that the incremental explanatory power of book value of equity during 1998-2004, representing the information environment under Thai Accounting Standards reformed after the 1997 economic crisis to conform to International Accounting Standards, is significantly greater than that during 1995-1996, representing the information environment under the pre-reformed Thai Accounting Standards. This implies that the book value distortions are less severe under the 1997 Reformed Thai Accounting Standards than the pre-reformed Thai Accounting Standards.

Keywords: Dividend Discount Model, Equity Valuation Model, Residual Income Model, Thai Stock Market
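
The two valuation models being compared can be written as short functions: DDM discounts forecast dividends plus a terminal value, while RIM starts from book value and discounts residual income under clean-surplus accounting. The forecast figures below are illustrative, not Thai-market data.

```python
def ddm_value(dividends, terminal_growth, r):
    """Dividend discount model: PV of forecast dividends plus a Gordon-growth terminal value."""
    pv = sum(d / (1 + r) ** t for t, d in enumerate(dividends, start=1))
    terminal = dividends[-1] * (1 + terminal_growth) / (r - terminal_growth)
    return pv + terminal / (1 + r) ** len(dividends)

def rim_value(book_value, earnings, dividends, terminal_growth, r):
    """Residual income model: current book value plus PV of forecast residual income."""
    value, bv = book_value, book_value
    for t, (e, d) in enumerate(zip(earnings, dividends), start=1):
        ri = e - r * bv                     # residual income = earnings minus capital charge
        value += ri / (1 + r) ** t
        bv = bv + e - d                     # clean-surplus book value update
    terminal_ri = ri * (1 + terminal_growth) / (r - terminal_growth)
    return value + terminal_ri / (1 + r) ** len(earnings)

# Illustrative 3-year forecasts: cost of equity 12%, long-run growth 3%.
print(round(ddm_value([2.0, 2.2, 2.4], 0.03, 0.12), 2))
print(round(rim_value(20.0, [3.0, 3.3, 3.6], [2.0, 2.2, 2.4], 0.03, 0.12), 2))
```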

1018 Evolutionary Approach for Automated Discovery of Censored Production Rules

Authors: Kamal K. Bharadwaj, Basheer M. Al-Maqaleh

Abstract:

In the recent past, there has been an increasing interest in applying evolutionary methods to Knowledge Discovery in Databases (KDD), and a number of successful applications of Genetic Algorithms (GA) and Genetic Programming (GP) to KDD have been demonstrated. The most predominant representation of the discovered knowledge is the standard Production Rule (PR) of the form If P Then D. PRs, however, are unable to handle exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form: If P Then D Unless C, where C (Censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type we are free to ignore the exception condition when the resources needed to establish its presence are tight or there is simply no information available as to whether it holds or not. Thus, the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch and changes the polarity of D to ~D. This paper presents a classification algorithm based on an evolutionary approach that discovers comprehensible rules with exceptions in the form of CPRs. The proposed approach has a flexible chromosome encoding, where each chromosome corresponds to a CPR. Appropriate genetic operators are suggested and a fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed algorithm.

Keywords: Censored Production Rule, Data Mining, Machine Learning, Evolutionary Algorithms.

1017 Qualification and Provisioning of xDSL Broadband Lines using a GIS Approach

Authors: Mavroidis Athanasios, Karamitsos Ioannis, Saletti Paola

Abstract:

This paper presents a Geographic Information System (GIS) approach to qualify and monitor broadband lines in an efficient way. The methodology used for interpolation is the Delaunay Triangular Irregular Network (TIN). The method is applied to a case study of an ISP in Greece monitoring 120,000 broadband lines.

Keywords: GIS loop qualification, GIS xDSL, LLU, TIN.
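
The TIN interpolation idea can be sketched with SciPy, whose LinearNDInterpolator triangulates the sample points (a Delaunay TIN) and interpolates linearly within each triangle. The coordinates and attainable-rate model below are hypothetical stand-ins for measured line data.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Hypothetical measured lines: (x, y) exchange-area coordinates and attainable rate (Mbps).
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(500, 2))
rate = 20.0 - 1.5 * np.hypot(pts[:, 0] - 5, pts[:, 1] - 5) + rng.normal(0, 0.5, 500)

# LinearNDInterpolator builds a Delaunay triangulation (TIN) of the points and
# interpolates linearly inside each triangle, so an unmeasured address can be pre-qualified.
tin = LinearNDInterpolator(pts, rate)
query = np.array([[5.0, 5.0], [1.0, 9.0]])
print(tin(query))   # estimated attainable rates at unmeasured locations
```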

1016 Interoperable CNC System for Turning Operations

Authors: Yusri Yusof, Stephen Newman, Aydin Nassehi, Keith Case

Abstract:

The changing economic climate has made global manufacturing a growing reality over the last decade, forcing companies from east and west and all over the world to collaborate beyond geographic boundaries in the design, manufacture and assembly of products. The ISO 10303 and ISO 14649 standards (STEP and STEP-NC) have been developed to introduce interoperability into manufacturing enterprises so as to meet the challenge of responding to production on demand. This paper describes and illustrates a STEP-compliant CAD/CAPP/CAM system for the manufacture of rotational parts on CNC turning centers. The information models to support the proposed system, together with the data models defined in the ISO 14649 standard used to create the NC programs, are also described. A structured view of a STEP-compliant CAD/CAPP/CAM system framework supporting the next generation of intelligent CNC controllers for turn/mill component manufacture is provided. Finally, a proposed computational environment for a STEP-NC compliant system for turning operations (SCSTO) is described. SCSTO is the experimental part of the research, supported by the specification of information models and constructed using a structured methodology and object-oriented methods. SCSTO was developed to generate a Part 21 file based on machining features to support the interactive generation of process plans utilizing feature extraction. A case study component has been developed to prove the concept of using the milling and turning parts of ISO 14649 to provide a turn-mill CAD/CAPP/CAM environment.

Keywords:

1015 Cascaded ANN for Evaluation of Frequency and Air-gap Voltage of Self-Excited Induction Generator

Authors: Raja Singh Khela, R. K. Bansal, K. S. Sandhu, A. K. Goel

Abstract:

A Self-Excited Induction Generator (SEIG) builds up voltage as it enters its magnetic saturation region. Due to the non-linear magnetic characteristics, performance analysis of the SEIG involves cumbersome mathematical computations. The dependence of the air-gap voltage on the saturated magnetizing reactance can only be established at rated frequency by conducting a laboratory test commonly known as the synchronous run test. However, there is no laboratory method to determine the saturated magnetizing reactance and air-gap voltage of a SEIG at varying speed, terminal capacitance and other loading conditions. For overall analysis of the SEIG, prior information on the magnetizing reactance, generated frequency and air-gap voltage is essentially required. Thus, analytical methods are the only alternative to determine these variables. The non-existence of a direct mathematical relationship between these variables for different terminal conditions has forced researchers to evolve new computational techniques. Artificial Neural Networks (ANNs) are very useful for the solution of such complex problems, as they do not require any a priori information about the system. In this paper, an attempt is made to use cascaded neural networks to first determine the generated frequency and magnetizing reactance under varying terminal conditions and then the air-gap voltage of the SEIG. The results obtained from the ANN model are used to evaluate the overall performance of the SEIG and are found to be in good agreement with experimental results. Hence, it is concluded that analysis of the SEIG can be carried out effectively using ANNs.

Keywords: Self-Excited Induction Generator, Artificial Neural Networks, Exciting Capacitance, Saturated magnetizing reactance.
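
The cascade described above can be sketched with two small neural networks: the first maps terminal conditions to generated frequency and saturated magnetizing reactance, and the second maps those (plus the terminal conditions) to the air-gap voltage. The synthetic target formulas below are placeholders for measured or simulated machine data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic operating points: inputs = [speed p.u., excitation capacitance p.u., load p.u.].
rng = np.random.default_rng(0)
X = rng.uniform([0.8, 0.8, 0.0], [1.2, 1.5, 1.0], size=(400, 3))

# Placeholder nonlinear targets standing in for machine data.
freq = 0.98 * X[:, 0] - 0.03 * X[:, 2]                   # generated frequency
xm   = 2.0 - 0.6 * X[:, 1] + 0.2 * X[:, 2]               # saturated magnetizing reactance
vg   = 1.1 * xm * freq / (1.0 + 0.3 * X[:, 2])           # air-gap voltage

# Stage 1: terminal conditions -> (frequency, magnetizing reactance).
stage1 = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
stage1.fit(X, np.column_stack([freq, xm]))

# Stage 2: stage-1 outputs (plus terminal conditions) -> air-gap voltage.
stage2 = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
stage2.fit(np.column_stack([X, stage1.predict(X)]), vg)

x_new = np.array([[1.0, 1.1, 0.5]])
f_xm = stage1.predict(x_new)
print("freq, Xm:", f_xm.round(3),
      "air-gap V:", stage2.predict(np.column_stack([x_new, f_xm])).round(3))
```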

1014 Agent-Based Simulation and Analysis of Network-Centric Air Defense Missile Systems

Authors: Su-Yan Tang, Wei Zhang, Shan Mei, Yi-Fan Zhu

Abstract:

Network-Centric Air Defense Missile Systems (NCADMS) represent an advanced development of air defense missile systems and have been regarded as one of the major research issues in the military domain at present. Due to the lack of knowledge and experience with NCADMS, modeling and simulation becomes an effective approach to performing operational analysis, compared with equation-based approaches. However, the complex dynamic interactions among entities and the flexible architectures of NCADMS put forward new requirements and challenges for the simulation framework and models. ABS (Agent-Based Simulation) explicitly addresses modeling the behaviors of heterogeneous individuals. Agents have the capability to sense and understand things, make decisions, and act on the environment. They can also cooperate with others dynamically to perform the tasks assigned to them. ABS proves to be an effective approach for exploring the new operational characteristics emerging in NCADMS. In this paper, based on an analysis of the network-centric architecture and new cooperative engagement strategies for NCADMS, an agent-based simulation framework was designed by expanding the simulation framework of the so-called System Effectiveness Analysis Simulation (SEAS). The simulation framework specifies the components, the relationships and interactions between them, and the structure and behavior rules of an agent in NCADMS. Based on scenario simulations, information and decision superiority and operational advantages in NCADMS were analyzed; meanwhile, some suggestions were provided for its future development.

Keywords: air defense missile systems, network-centric, agent-based simulation, simulation framework, information superiority, decision superiority, operational advantages

1013 An Evaluation of Carbon Dioxide Emissions Trading among Enterprises -The Tokyo Cap and Trade Program-

Authors: Hiroki Satou, Kayoko Yamamoto

Abstract:

This study proposes three evaluation methods for the Tokyo Cap and Trade Program when emissions trading is performed virtually among enterprises, focusing on carbon dioxide (CO2), which is the only emitted greenhouse gas that tends to increase. The first method clarifies the optimum reduction rate for the highest cost benefit, the second discusses emissions trading among enterprises through market trading, and the third verifies long-term emissions trading during the term of the plan (2010-2019), checking the validity of emissions trading partly using Geographic Information Systems (GIS). The findings of this study can be summarized in the following three points. 1. Since the total cost benefit is greatest at a 44% reduction rate, it is possible to set the rate higher than that of the Tokyo Cap and Trade Program to obtain a greater total cost benefit. 2. At a 44% reduction rate, among 320 enterprises, 8 purchasing enterprises and 245 selling enterprises gain profits from emissions trading, and 67 enterprises perform voluntary reduction without conducting emissions trading. Therefore, to further promote emissions trading, it is necessary to increase the sales volumes of emissions trading by increasing the number of purchasing enterprises in addition to selling enterprises. 3. Compared to short-term emissions trading, few enterprises benefit in each year through the long-term emissions trading of the Tokyo Cap and Trade Program; only 81 enterprises at most can gain profits from emissions trading in FY 2019. Therefore, by setting the reduction rate higher, it is necessary to increase the number of enterprises that participate in emissions trading and benefit from the restraint of CO2 emissions.

Keywords: Emissions Trading, Tokyo Cap and Trade Program, Carbon Dioxide (CO2), Global Warming, Geographic Information Systems (GIS)
