Search results for: Video indexing and retrieval
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 665


455 The Impact of Video Games in Children's Learning of Mathematics

Authors: Muhammad Ridhuan Tony Lim Abdullah, Zulqarnain Abu Bakar, Razol Mahari Ali, Ibrahima Faye, Hilmi Hasan

Abstract:

This paper describes a research project on Year 3 primary school students in Malaysia and their use of a computer-based video game to enhance the learning of multiplication facts (times tables) in the Mathematics subject. The study investigates whether video games actually contribute a positive effect to children's learning or otherwise. In conducting this study, the researchers assumed a neutral stand in the investigation, as an unbiased outcome would give a reliable indication of the impact of video games in education, contributing both to the literature on technology-based education and to the pedagogical aspects of formal education. A subject (Mathematics) with a specific topic area (multiplication facts) was chosen for the study. The study adopts a causal-comparative design to investigate the impact of including a computer-based video game designed to teach multiplication facts to primary-level students. The sample of 100 students was divided into two groups: A, a conventional group, and B, a conventional group aided by video games. The conventional group (A) was taught multiplication facts (times tables) and skills conventionally. The other group (B) underwent the same lessons but with a supplementary activity: a computer-based multiplication video game called Timez Attack. Marks accrued in the pre-test were compared to the post-test using comparisons of means, t-tests, and ANOVA tests to investigate the impact of computer games as an added learning activity. The findings revealed that video games as a supplementary activity to classroom learning bring a significant and positive effect on students' retention and mastery of multiplication tables compared to students who relied only upon formal classroom instruction.
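
As a rough illustration of the statistical comparison this abstract describes (comparison of means, t-tests, ANOVA), the snippet below runs the same tests on invented post-test marks; the group arrays are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical post-test marks for the two groups (not the study's data):
# A = conventional instruction only, B = conventional + video game activity.
group_a = np.array([62, 58, 71, 65, 60, 68, 55, 63, 70, 59], dtype=float)
group_b = np.array([74, 69, 80, 72, 77, 83, 70, 76, 81, 75], dtype=float)

# Compare group means.
print("mean A:", group_a.mean(), "mean B:", group_b.mean())

# Independent two-sample t-test on post-test scores.
t_stat, t_p = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {t_p:.4f}")

# One-way ANOVA gives an equivalent test for two groups (F = t^2).
f_stat, f_p = stats.f_oneway(group_a, group_b)
print(f"F = {f_stat:.3f}, p = {f_p:.4f}")
```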

Keywords: Technology for education, Gaming for education, Computer-based video games, Cognitive learning

PDF Downloads: 4201
454 Automatic Building an Extensive Arabic FA Terms Dictionary

Authors: El-Sayed Atlam, Masao Fuketa, Kazuhiro Morita, Jun-ichi Aoe

Abstract:

Field Association (FA) terms are a limited set of discriminating terms that provide the knowledge to identify document fields, which is effective in document classification, similar file retrieval, and passage retrieval. The problem lies in the lack of an effective method to automatically extract relevant Arabic FA terms to build a comprehensive dictionary. Moreover, all previous studies are based on FA terms in English and Japanese, and extending FA terms to another language such as Arabic would definitely strengthen further research. This paper presents a new method to extract Arabic FA terms from domain-specific corpora using part-of-speech (POS) pattern rules and corpora comparison. An experimental evaluation was carried out for 14 different fields using 251 MB of domain-specific corpora obtained from Arabic Wikipedia dumps and Alhyah news, selecting an average of 2,825 FA terms (single and compound) per field. From the experimental results, recall and precision are 84% and 79%, respectively. Therefore, the method selects a high number of relevant Arabic FA terms at high precision and recall.

Keywords: Arabic Field Association Terms, information extraction, document classification, information retrieval.

PDF Downloads: 1699
453 Path Planning of a Robot Manipulator using Retrieval RRT Strategy

Authors: K. Oh, J. P. Hwang, E. Kim, H. Lee

Abstract:

This paper presents an algorithm which extends the rapidly-exploring random tree (RRT) framework to deal with changes in the task environment. The algorithm, called the Retrieval RRT Strategy (RRS), combines a support vector machine (SVM) with RRT and plans the robot motion in the presence of changes in the surrounding environment. The algorithm consists of two levels. At the first level, the SVM is built and selects a proper path from the bank of RRTs for a given environment. At the second level, a real path is planned by the RRT planners for the given environment. The suggested method is applied to the control of KUKA™, a commercial 6 DOF robot manipulator, and its feasibility and efficiency are demonstrated via the co-simulation of MATLAB™ and RecurDyn™.
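
The two-level idea (an SVM that picks a stored RRT path for the current environment, which then seeds the actual planning) might be prototyped as in the sketch below, using scikit-learn; the environment descriptors, labels and path bank are invented placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical bank of pre-computed RRT paths (lists of joint-space waypoints).
path_bank = {
    0: np.array([[0.0, 0.0], [0.5, 0.2], [1.0, 0.4]]),
    1: np.array([[0.0, 0.0], [0.2, 0.6], [0.4, 1.2]]),
    2: np.array([[0.0, 0.0], [0.7, 0.7], [1.4, 1.4]]),
}

# Hypothetical training data: environment descriptors (e.g. obstacle positions)
# labelled with the index of the RRT path that worked in that environment.
env_features = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1],
                         [0.8, 0.2], [0.5, 0.5], [0.6, 0.4]])
best_path_idx = np.array([0, 0, 1, 1, 2, 2])

# Level 1: the SVM selects a candidate path from the bank for a new environment.
svm = SVC(kernel="rbf", gamma="scale").fit(env_features, best_path_idx)
new_env = np.array([[0.15, 0.85]])
selected = int(svm.predict(new_env)[0])

# Level 2: the selected path would seed/guide the RRT planner for refinement.
print("selected path index:", selected)
print("waypoints:\n", path_bank[selected])
```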

Keywords: Path planning, RRT, 6 DOF manipulator, SVM.

PDF Downloads: 2481
452 Factorial Design Analysis for Quality of Video on MANET

Authors: Hyoup-Sang Yoon

Abstract:

The quality of video transmitted over mobile ad hoc networks (MANETs) can be influenced by several factors, including the protocol layers and the parameter settings of each protocol. In this paper, we are concerned with understanding the functional relationship between these influential factors and objective video quality in MANETs. We illustrate how a systematic statistical design of experiments (DOE) strategy can be used to analyse MANET parameters and performance. Using a 2^k factorial design, we quantify the main and interaction effects of 7 factors on a response metric (the mean opinion score (MOS) calculated from PSNR with the Evalvid package), and we then develop a first-order linear regression model between the influential factors and the performance metric.
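
The kind of 2^k factorial analysis described here can be illustrated with a reduced two-factor example (the paper uses seven factors); the MOS responses below are invented, and the effects are read off a first-order regression with an interaction term.

```python
import itertools
import numpy as np

# Two MANET factors at two coded levels (-1/+1); the paper uses seven.
# Hypothetical MOS responses for each run of the full factorial design.
runs = list(itertools.product([-1, 1], repeat=2))      # (A, B) combinations
mos = np.array([3.1, 3.8, 2.9, 4.2])                   # invented responses

X = np.array([[1, a, b, a * b] for a, b in runs], dtype=float)  # intercept, A, B, AB
coef, *_ = np.linalg.lstsq(X, mos, rcond=None)          # first-order model with interaction

# With -1/+1 coding, each main/interaction effect is twice its regression coefficient.
print("intercept:", coef[0])
print("effect A :", 2 * coef[1])
print("effect B :", 2 * coef[2])
print("effect AB:", 2 * coef[3])
```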

Keywords: Evalvid, full factorial design, mobile ad hoc networks, ns-2.

PDF Downloads: 2060
451 Knowledge Representation and Retrieval in Design Project Memory

Authors: Smain M. Bekhti, Nada T. Matta

Abstract:

Knowledge sharing in general, and the contextual access to knowledge in particular, still represent a key challenge in the knowledge management framework. Researchers on the semantic web and on human-machine interfaces study techniques to enhance this access. For instance, in the semantic web, information retrieval is based on domain ontology. In human-machine interfaces, keeping track of the user's activity provides some elements of the context that can guide the access to information. We suggest an approach based on these two key guidelines, whilst avoiding some of their weaknesses. The approach permits a representation of both the context and the design rationale of a project for efficient access to knowledge. In fact, the method consists of an information retrieval environment that, on the one hand, can infer knowledge modeled as a semantic network and, on the other hand, is based on the context and the objectives of a specific activity (the design). The environment we defined can also be used to gather similar project elements in order to build classifications of tasks, problems, arguments, etc. produced in a company. These classifications can show the evolution of design strategies in the company.

Keywords: Project Memory, Knowledge re-use, Design rationale, Knowledge representation.

PDF Downloads: 1585
450 Study of Measures to Secure Video Phone Service Safety through a Preliminary Evaluation of the Information Security of the New IT Service

Authors: DongHoon Shin, Yunmook Nah, HoSeong Kim, Gang Shin Lee, Jae-Il Lee

Abstract:

The rapid advance of communication technology is evolving the network environment into the broadband convergence network. Likewise, the IT services operated in individual networks are also being quickly converged in the broadband convergence network environment. VoIP and IPTV are two examples of such new services. Efforts are being made to develop the video phone service, which is an advanced form of the voice-oriented VoIP service. However, the new IT services will be subject to stability and reliability vulnerabilities if the relevant security issues are not answered during the convergence of the existing IT services, currently operated in individual networks, into the wider broadband network environment. To resolve such problems, this paper attempts to analyze the possible threats and identify the necessary security measures before the deployment of the new IT services. Furthermore, it measures the quality of an example encryption algorithm application in order to identify an appropriate algorithm and to present security technology that will have no negative impact on the quality of the video phone service.

Keywords: BcN, Security Measures, Video Phone.

PDF Downloads: 1405
449 A Broadcasting Strategy for Interactive Video-on-Demand Services

Authors: Yu-Wei Chen, Li-Ren Han

Abstract:

In this paper, we employ a linear programming approach to propose a new interactive broadcast method. In our method, a film S is divided into n equal parts and broadcast via k channels. The user simultaneously downloads these segments from the k channels into the user's set-top box (STB) and plays them in order. Our method assumes that the initial p segments will not have fast-forwarding capabilities. Every time the user wants to initiate d-times fast-forwarding, according to our broadcasting strategy, the necessary segments are either already saved in the user's STB or downloaded just in time for playback. The proposed broadcasting strategy not only allows the user to pause and rewind, but also to fast-forward.

Keywords: Broadcasting, Near Video-on-Demand (VOD), Linear Programming, Video-Cassette-Recorder (VCR) Functions, Waiting Time.

PDF Downloads: 1706
448 FPGA Implementation of a Vision-Based Blind Spot Warning System

Authors: Yu Ren Lin, Yu Hong Li

Abstract:

Vision-based intelligent vehicle applications often require large amounts of memory to handle video streaming and image processing, which in turn increases the complexity of hardware and software. This paper presents an FPGA implementation of a vision-based blind spot warning system. Using video frames, the information of the blind spot area is reduced to one-dimensional information. Analysis of the estimated entropy of the image allows the detection of an object in time. The idea has been implemented on the XtremeDSP video starter kit. The blind spot warning system uses only 13% of the available logic resources and 95 kbits of block memory, and its frame rate is over 30 frames per second (fps).
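
The entropy-based detection idea (reducing the blind-spot region to one-dimensional information and watching its entropy) could be prototyped in software roughly as below; the synthetic frames and the threshold are illustrative only, whereas the paper's actual design targets an FPGA.

```python
import numpy as np

def region_entropy(gray_region, bins=256):
    """Collapse a grayscale blind-spot region to a 1-D intensity histogram
    and return its Shannon entropy in bits."""
    hist, _ = np.histogram(gray_region, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Synthetic example: an empty (flat) region vs. one containing an object.
empty = np.full((60, 80), 120, dtype=np.uint8)
with_object = empty.copy()
with_object[20:40, 30:60] = 30          # dark "vehicle" patch

baseline = region_entropy(empty)
current = region_entropy(with_object)
THRESHOLD = 0.5                          # illustrative entropy-change threshold
print("warning" if current - baseline > THRESHOLD else "clear")
```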

Keywords: blind-spot area, image, FPGA

PDF Downloads: 1796
447 Object Tracking System Using Camshift, Meanshift and Kalman Filter

Authors: Afef Salhi, Ameni Yengui Jammaoussi

Abstract:

This paper presents an implementation of an object tracking system for video sequences. Object tracking is an important task in many vision applications. The two main steps in video analysis are the detection of interesting moving objects and the tracking of such objects from frame to frame. In a similar vein, most tracking algorithms use pre-specified methods for preprocessing. In our work, we have implemented several object tracking algorithms (Meanshift, Camshift, Kalman filter) with different preprocessing methods. Then, we have evaluated the performance of these algorithms on different video sequences. The obtained results show good performance according to the degree of applicability and the evaluation criteria.
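
As one illustration of the building blocks compared in this paper, a constant-velocity Kalman filter for a 2-D object centre can be sketched as follows; the measurements are synthetic, and in a full pipeline they would come from a detector or from Meanshift/Camshift.

```python
import numpy as np

# Constant-velocity Kalman filter for a 2-D object centre: state = [x, y, vx, vy].
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # only position is measured
Q = np.eye(4) * 1e-2                                 # process noise
R = np.eye(2) * 1.0                                  # measurement noise

x = np.zeros(4)            # initial state
P = np.eye(4) * 10.0       # initial covariance

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured centre z = [mx, my]
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Synthetic noisy measurements of an object moving diagonally.
rng = np.random.default_rng(0)
for t in range(10):
    z = np.array([t * 2.0, t * 1.5]) + rng.normal(0, 0.5, 2)
    x, P = kalman_step(x, P, z)
print("estimated position:", x[:2], "estimated velocity:", x[2:])
```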

Keywords: Tracking, meanshift, camshift, Kalman filter, evaluation.

PDF Downloads: 8203
446 A Text Mining Technique Using Association Rules Extraction

Authors: Hany Mahgoub, Dietmar Rösner, Nabil Ismail, Fawzy Torkey

Abstract:

This paper describes a text mining technique for automatically extracting association rules from collections of textual documents. The technique, called Extracting Association Rules from Text (EART), depends on keyword features to discover association rules amongst the keywords labeling the documents. In this work, the EART system ignores the order in which the words occur, focusing instead on the words and their statistical distributions in documents. The main contributions of the technique are that it integrates XML technology with an information retrieval scheme (TF-IDF) for keyword/feature selection, which automatically selects the most discriminative keywords for use in association rule generation, and uses a data mining technique for association rule discovery. It consists of three phases: a text preprocessing phase (transformation, filtration, stemming and indexing of the documents), an association rule mining (ARM) phase (applying our designed algorithm for Generating Association Rules based on a Weighting scheme, GARW) and a visualization phase (visualization of results). Experiments were applied to web page news documents related to the outbreak of the bird flu disease. The extracted association rules contain important features and describe the informative news included in the document collection. The performance of the EART system was compared with that of a system using the Apriori algorithm in terms of execution time and the evaluation of the extracted association rules.
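
A condensed sketch of the pipeline described above, with TF-IDF keyword selection followed by support/confidence rule generation; the documents and thresholds are invented, and plain support/confidence stands in for the paper's GARW weighting scheme.

```python
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "bird flu outbreak reported in poultry farms",
    "poultry farms culled after bird flu cases",
    "vaccine research accelerates after flu outbreak",
]

# Phase 1: TF-IDF selects the most discriminative keywords per document.
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(docs)
terms = vec.get_feature_names_out()
top_keywords = [
    {terms[i] for i in row.toarray().ravel().argsort()[-4:]}
    for row in tfidf
]

# Phase 2: mine keyword association rules with simple support/confidence.
MIN_SUPPORT, MIN_CONF = 0.6, 0.8
n = len(top_keywords)
vocab = set().union(*top_keywords)
for a, b in combinations(sorted(vocab), 2):
    both = sum(1 for kw in top_keywords if a in kw and b in kw) / n
    only_a = sum(1 for kw in top_keywords if a in kw) / n
    if only_a and both >= MIN_SUPPORT and both / only_a >= MIN_CONF:
        print(f"{a} -> {b}  (support={both:.2f}, confidence={both/only_a:.2f})")
```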

Keywords: Text mining, data mining, association rule mining

PDF Downloads: 4389
445 Implementation of a Motion Detection System

Authors: Asif Ansari, T.C.Manjunath, C. Ardil

Abstract:

In today's competitive environment, security concerns have grown tremendously. In the modern world, possession is known to be nine-tenths of the law. Hence, it is imperative to be able to safeguard one's property from worldly harms such as theft, destruction of property, people with malicious intent, and so on. Due to the advent of technology in the modern world, the methodologies used by thieves and robbers for stealing have been improving exponentially. Therefore, it is necessary for surveillance techniques to also improve with the changing world. With the improvement in mass media and various forms of communication, it is now possible to monitor and control the environment to the advantage of the owners of the property. The latest technologies used in the fight against theft and destruction are video surveillance and monitoring. By using these technologies, it is possible to monitor and capture every inch and second of the area of interest. However, so far the technologies used are passive in nature, i.e., the monitoring systems only help in detecting the crime but do not actively participate in stopping or curbing the crime while it takes place. Therefore, we have developed a methodology to detect motion in a video stream environment, as an idea to ensure that the monitoring systems not only actively participate in stopping the crime, but do so while the crime is taking place. Hence, a system is used to detect any motion in a live streaming video; once motion has been detected in the live stream, the software will activate a warning system and capture the live streaming video.
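
A minimal sketch of motion detection on a live stream by frame differencing, using OpenCV; the capture source, threshold and warning action are placeholders rather than the system described above (which was developed in Matlab).

```python
import cv2

cap = cv2.VideoCapture(0)                # 0 = default camera; replace with a stream URL
ok, prev = cap.read()
if not ok:
    raise SystemExit("no video source")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

MOTION_PIXELS = 5000                     # illustrative sensitivity threshold

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)                  # pixel-wise frame difference
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > MOTION_PIXELS:
        print("motion detected - trigger warning / start recording")
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
```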

Keywords: Motion, Detection, System, Video, Crime, Matlab, Surveillance.

PDF Downloads: 4249
444 Developing an Online Library for Faster Retrieval of Mold Base and Standard Parts of Injection Molding

Authors: Alan C. Lin, Ricky N. Joevan

Abstract:

This paper focuses on developing a system to transfer mold base plates and standard parts faster during the injection mold design stage. The system not only provides a way to compare file versions, but also utilizes Siemens NX 10 to isolate the updated information into a single executable file (.dll), so that the file can be transferred without the need to transfer the whole file. In this way, the system helps the user download only the necessary mold base plates and standard parts, and only the updated portions of those parts.

Keywords: CAD, injection molding, mold base, data retrieval.

PDF Downloads: 1136
443 Subjective Assessment about Super Resolution Image Resolution

Authors: Seiichi Gohshi, Hiroyuki Sekiguchi, Yoshiyasu Shimizu, Takeshi Ikenaga

Abstract:

Super resolution (SR) technologies are now being applied to video to improve resolution. Some TV sets are now equipped with SR functions. However, it is not known whether super resolution image reconstruction (SRR) for TV really works or not. Super resolution with non-linear signal processing (SRNL) has recently been proposed. SRR and SRNL are the only methods for processing video signals in real time. The results from subjective assessments of SRR and SRNL are described in this paper. SRR video was produced in simulations with quarter-precision motion vectors and 100 iterations, which are ideal conditions for SRR. We found that the image quality of SRNL is better than that of SRR even though SRR was processed under ideal conditions.

Keywords: Super Resolution Image Reconstruction, Super Resolution with Non-Linear Signal Processing, Subjective Assessment, Image Quality

PDF Downloads: 1652
442 FSM-based Recognition of Dynamic Hand Gestures via Gesture Summarization Using Key Video Object Planes

Authors: M. K. Bhuyan

Abstract:

The use of the human hand as a natural interface for human-computer interaction (HCI) serves as the motivation for research in hand gesture recognition. Vision-based hand gesture recognition involves visual analysis of hand shape, position and/or movement. In this paper, we use the concept of object-based video abstraction for segmenting the frames into video object planes (VOPs), as used in MPEG-4, with each VOP corresponding to one semantically meaningful hand position. Next, the key VOPs are selected on the basis of the amount of change in hand shape: for a given key frame in the sequence, the next key frame is the one in which the hand changes its shape significantly. Thus, an entire video clip is transformed into a small number of representative frames that are sufficient to represent a gesture sequence. Subsequently, we model a particular gesture as a sequence of key frames, each bearing information about its duration. These constitute a finite state machine. For recognition, the states of the incoming gesture sequence are matched with the states of all the different FSMs contained in the database of the gesture vocabulary. The core idea of our proposed representation is that the redundant frames of the gesture video sequence bear only the temporal information of a gesture and hence are discarded for computational efficiency. Experimental results demonstrate the effectiveness of our proposed scheme for key frame extraction, subsequent gesture summarization and finally gesture recognition.
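
The key-VOP selection rule (keep the next frame once the hand shape has changed significantly) might look roughly like the sketch below on pre-extracted hand contours, using the Hausdorff distance mentioned in the keywords; the contours and the threshold are synthetic placeholders, not the paper's implementation.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def select_key_vops(contours, threshold):
    """Keep a frame as a key VOP when its hand contour differs enough
    from the last selected key VOP."""
    keys = [0]
    for i in range(1, len(contours)):
        if hausdorff(contours[keys[-1]], contours[i]) > threshold:
            keys.append(i)
    return keys

# Synthetic hand contours: mostly static, with a shape change at frame 3.
rng = np.random.default_rng(1)
base = rng.uniform(0, 100, size=(40, 2))
contours = [base + rng.normal(0, 0.5, base.shape) for _ in range(3)]
contours += [base * 1.6 + rng.normal(0, 0.5, base.shape) for _ in range(3)]

print("key VOP indices:", select_key_vops(contours, threshold=10.0))
```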

Keywords: Hand gesture, MPEG-4, Hausdorff distance, finite state machine.

PDF Downloads: 1993
441 A Comparative Analysis of Different Web Content Mining Tools

Authors: T. Suresh Kumar, M. Arthanari, N. Shanthi

Abstract:

Nowadays, the Web has become one of the most pervasive platforms for information exchange and retrieval. It makes it possible to collect the suitable and relevant information one requires from websites. Data mining is the process of extracting the data available on the internet. Web mining is one of the elements of data mining techniques, and it relates to various research communities such as information retrieval, database management systems and artificial intelligence. In this paper we discuss the concepts of Web mining. We focus mainly on one of the categories of Web mining, namely Web content mining and its various tasks. The mining tools are essential for scanning the many images, text and HTML documents, and the results are then used by the various search engines. We conclude by presenting a comparative table of these tools based on some pertinent criteria.

Keywords: Data Mining, Web Mining, Web Content Mining, Mining Tools, Information retrieval.

PDF Downloads: 3511
440 Automatic Motion Trajectory Analysis for Dual Human Interaction Using Video Sequences

Authors: Yuan-Hsiang Chang, Pin-Chi Lin, Li-Der Jeng

Abstract:

Advances in techniques of image and video processing have enabled the development of intelligent video surveillance systems. This study aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. Adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during the interaction, the Kalman filter was used to retain a complete trajectory for each human object. Finally, the motion trajectory analysis was developed to distinguish between interaction and non-interaction events based on derivatives of the trajectories related to the speed of the moving objects. Using a database of 60 video sequences, our system achieved classification accuracies of 80% for interaction events and 95% for non-interaction events, respectively. In summary, we have explored the idea of a system for the automatic classification of interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated into an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatching, fighting, etc.).
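
The trajectory-analysis step might be sketched as below: speeds are approximated from trajectory derivatives and an interaction is flagged when two tracked persons come close while slowing down; the trajectories and thresholds are synthetic placeholders, not the paper's classifier.

```python
import numpy as np

def classify_interaction(traj_a, traj_b, dist_thresh=30.0, speed_thresh=2.0):
    """Label an event as 'interaction' when the two persons come close while
    their speeds (derivatives of the trajectories) drop; thresholds are illustrative."""
    speed_a = np.linalg.norm(np.gradient(traj_a, axis=0), axis=1)
    speed_b = np.linalg.norm(np.gradient(traj_b, axis=0), axis=1)
    dist = np.linalg.norm(traj_a - traj_b, axis=1)
    close_and_slow = (dist < dist_thresh) & (speed_a < speed_thresh) & (speed_b < speed_thresh)
    return "interaction" if close_and_slow.any() else "non-interaction"

# Synthetic trajectories (pixel coordinates per frame): two people meet and pause.
t = np.arange(60)
a = np.stack([np.minimum(t * 4.0, 120.0), np.full(t.shape, 100.0)], axis=1)
b = np.stack([np.maximum(240.0 - t * 4.0, 130.0), np.full(t.shape, 100.0)], axis=1)
print(classify_interaction(a, b))
```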

Keywords: Motion detection, motion tracking, trajectory analysis, video surveillance.

PDF Downloads: 1683
439 Information Retrieval in the Semantic LIFE Personal Digital Memory Framework

Authors: Hanh Huu Hoang, Tho Manh Nguyen

Abstract:

Ever-increasing capacities of contemporary storage devices inspire the vision of accumulating (personal) information without the need to delete old data over a long time-span. Hence, the target of the SemanticLIFE project is to create a Personal Information Management system for a human lifetime of data. One of the most important characteristics of the system is its dedication to retrieving information in a very efficient way. By adopting user demands regarding the reduction of ambiguities, our approach aims at a user-oriented and yet powerful enough system with satisfactory query performance. We introduce the query system of SemanticLIFE, the Virtual Query System, which uses emerging Semantic Web technologies to fulfill users' requirements.

Keywords: Ontology-based Information Retrieval, Digital Memories, SemanticLIFE.

PDF Downloads: 1312
438 A Medical Images Based Retrieval System using Soft Computing Techniques

Authors: Pardeep Singh, Sanjay Sharma

Abstract:

Content-Based Image Retrieval (CBIR) has been one of the most active research areas in the field of computer vision over the last 10 years. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and with varying characteristics. Answers to many questions with respect to speed, semantic descriptors or objective image interpretations are still open. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. Several articles have proposed content-based access to medical images for supporting clinical decision making, which would ease the management of clinical data, and scenarios for the integration of content-based access methods into Picture Archiving and Communication Systems (PACS) have been created. This paper gives an overview of soft computing techniques. New research directions are being defined that can prove to be useful. Still, there are very few systems that seem to be used in clinical practice. It also needs to be stated that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment.

Keywords: CBIR, GA, Rough sets, CBMIR

PDF Downloads: 2575
437 Evaluation of Classifiers Based On I2C Distance for Action Recognition

Authors: Lei Zhang, Tao Wang, Xiantong Zhen

Abstract:

Naive Bayes Nearest Neighbor (NBNN) and its variants, i.e., local NBNN and the NBNN kernels, are local feature-based classifiers that have achieved impressive performance in image classification. By exploiting instance-to-class (I2C) distances (an instance meaning an image/video in image/video classification), they avoid the quantization errors of local image descriptors in the bag of words (BoW) model. However, the performance of NBNN, local NBNN and the NBNN kernels has not been validated on video analysis. In this paper, we introduce these three classifiers into human action recognition and conduct comprehensive experiments on the benchmark KTH and the realistic HMDB datasets. The results show that these I2C-based classifiers consistently outperform the SVM classifier with the BoW model.
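
A compact sketch of the NBNN decision rule on which these classifiers build: sum, over a query's local descriptors, the squared distance to the nearest descriptor of each class and pick the class with the smallest total I2C distance; the descriptors below are random placeholders rather than real video features.

```python
import numpy as np

def nbnn_classify(query_descs, class_descs):
    """Naive Bayes Nearest Neighbor: assign the class whose descriptor pool
    minimises the total instance-to-class (I2C) distance."""
    scores = {}
    for label, pool in class_descs.items():
        # squared Euclidean distance from every query descriptor to its
        # nearest neighbour in this class's pool, summed over descriptors
        d2 = ((query_descs[:, None, :] - pool[None, :, :]) ** 2).sum(-1)
        scores[label] = d2.min(axis=1).sum()
    return min(scores, key=scores.get), scores

# Hypothetical local descriptors per action class and for a query clip.
rng = np.random.default_rng(0)
class_descs = {
    "walking": rng.normal(0.0, 1.0, size=(200, 16)),
    "boxing":  rng.normal(2.0, 1.0, size=(200, 16)),
}
query = rng.normal(2.0, 1.0, size=(50, 16))   # drawn near the "boxing" pool
label, scores = nbnn_classify(query, class_descs)
print("predicted:", label)
```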

Keywords: Instance-to-class distance, NBNN, Local NBNN, NBNN kernel.

PDF Downloads: 1615
436 Utilizing Ontologies Using Ontology Editor for Creating Initial Unified Modeling Language (UML) Object Model

Authors: Waralak Vongdoiwang Siricharoen

Abstract:

One problem in object-oriented software development is the difficulty of finding the appropriate and suitable objects for starting the system. In this work, ontologies support object discovery at the start of object-oriented software development. Many studies have tried to demonstrate that there is great potential between object models and ontologies. Constructing an ontology from an object model, called ontology engineering, can be done; on the other hand, this research aims to support the idea that building an object model from an ontology is also promising and practical. Ontology classes are available online in many specific areas and can be searched by semantic search engines. There are also many supporting tools; the ones used in this research are the Protégé ontology editor and Visual Paradigm. Putting them together gives a great outcome. This research shows how the approach works efficiently with a real case study using ontology classes in the travel/tourism domain. It needs to combine classes, properties, and relationships from more than two ontologies in order to generate the object model. This paper presents a simple methodology framework which explains the process of discovering objects. The results show that this framework has great value while there is still room for expansion. Reusing existing ontologies offers a much cheaper alternative than building new ones from scratch. More ontologies are becoming available on the web, and online ontology libraries for storing and indexing ontologies are increasing in number and demand. Semantic and ontology search engines have also started to appear, to facilitate the search and retrieval of online ontologies.

Keywords: Software Developing, Ontology, Ontology Library, Artificial Intelligent, Protégé, Object Model.

PDF Downloads: 1841
435 Video Data Mining based on Information Fusion for Tamper Detection

Authors: Girija Chetty, Renuka Biswas

Abstract:

In this paper, we propose novel algorithmic models based on information fusion and feature transformation in a cross-modal subspace for different types of residue features extracted from several intra-frame and inter-frame pixel sub-blocks in video sequences, for detecting digital video tampering or forgery. An evaluation of the proposed residue features (the noise residue features and the quantization features), their transformation in the cross-modal subspace, and their multimodal fusion for an emulated copy-move tamper scenario shows a significant improvement in tamper detection accuracy as compared to single-mode features without transformation in the cross-modal subspace.

Keywords: image tamper detection, digital forensics, correlation features, image fusion

PDF Downloads: 1862
434 Design of a Computer Vision Based Exercise Video Game for Senior Citizens

Authors: June Tay, Ivy Chia

Abstract:

There are numerous changes, both mental and physical, taking place as people age. We need to understand the different aspects required for healthy living, including meeting nutritional needs, regular physical activity to maintain agility, sufficient rest and sleep for physical and mental well-being, social engagement to avoid the risk of social isolation and depression, and access to healthcare to detect and manage chronic conditions. Promoting physical activity for an ageing population is necessary, as many may have led sedentary lifestyles for some time. In our study, we evaluate the considerations when designing a computer vision video game for the elderly. We need to design low-impact activities, such as stretching and gentle movements, because some elderly individuals may have joint pains or mobility issues. The exercise game should consist of simple movements that are easy to follow and remember. It should be fun and enjoyable so that players are motivated to exercise. Social engagement can keep the elderly motivated and competitive, making them more willing to engage in game exercises. Elderly citizens can compare their game scores and try to improve them. We propose a computer vision-based video game for the elderly that captures and tracks the movement of the elderly user's hand pushing a ball on the screen into a circle. It can be easily set up using a PC laptop with a webcam. Our video game adhered to the design framework we employed, encompassing ease of use, a simple graphical interface, easy-to-play game exercises, and fun gameplay.

Keywords: Computer vision, video games, gerontology technology, caregiving.

PDF Downloads: 100
433 Feature Point Reduction for Video Stabilization

Authors: Theerawat Songyot, Tham Manjing, Bunyarit Uyyanonvara, Chanjira Sinthanayothin

Abstract:

Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and should be performed at a reasonable rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining the rest for future use so as to improve the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy for modeling. Corner detection is required only when the feature points are insufficiently accurate for future modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, thus significantly improving the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, which also reduces the computational cost. In addition, the feature points after reduction are sufficient for background object tracking, as demonstrated in the simple video stabilizer based on our proposed algorithm.
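
A sketch of the model-estimation and outlier-elimination loop described above: fit a simplified affine motion model by least squares, drop points whose residuals are unusually large, and repeat; the correspondences are synthetic, and the residual test below is a simplified stand-in for the studentized residuals used in the paper.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2-D affine motion model dst ~ [x, y, 1] @ P."""
    X = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)     # params has shape (3, 2)
    return params

def robust_affine(src, dst, z_thresh=2.5, max_iter=10):
    """Refit while removing points whose residuals are large relative to the
    residual spread (simplified stand-in for the studentized-residual test)."""
    keep = np.ones(len(src), dtype=bool)
    X_all = np.hstack([src, np.ones((len(src), 1))])
    for _ in range(max_iter):
        params = fit_affine(src[keep], dst[keep])
        resid = np.linalg.norm(dst - X_all @ params, axis=1)
        mu, sigma = resid[keep].mean(), resid[keep].std() + 1e-9
        new_keep = keep & ((resid - mu) / sigma < z_thresh)
        if new_keep.sum() == keep.sum():
            break
        keep = new_keep
    return params, keep

# Synthetic correspondences: a global shift (camera shake) plus a few points
# that belong to an independently moving foreground object (the outliers).
rng = np.random.default_rng(0)
src = rng.uniform(0, 640, size=(40, 2))
dst = src + np.array([3.0, -2.0]) + rng.normal(0, 0.2, src.shape)
dst[:3] += np.array([40.0, 25.0])                        # moving-object outliers
params, inliers = robust_affine(src, dst)
print("estimated translation:", np.round(params[2], 2))
print("inliers kept:", int(inliers.sum()), "of", len(src))
```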

Keywords: background object tracking, feature point reduction, low cost tracking, video stabilization.

PDF Downloads: 1726
432 New VLSI Architecture for Motion Estimation Algorithm

Authors: V. S. K. Reddy, S. Sengupta, Y. M. Latha

Abstract:

This paper presents an efficient VLSI architecture design to achieve real-time video processing using the Full-Search Block Matching (FSBM) algorithm. The design employs a parallel bank architecture with minimum latency, maximum throughput, and full hardware utilization. We use nine parallel processors in our architecture, each controlled by a state machine. The state machine control implementation makes the design very simple and cost effective. The design is implemented using VHDL, and the programming techniques we incorporated make the design completely programmable, in the sense that the search ranges and the block sizes can be varied to suit any given requirements. The design can operate at frequencies up to 36 MHz, and it can process QCIF and CIF video resolutions at 1.46 MHz and 5.86 MHz, respectively.
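
For reference, the Full-Search Block Matching computation that the VLSI architecture parallelises can be expressed in software as an exhaustive SAD search over a small window; the block size, search range and frames below are illustrative only.

```python
import numpy as np

def full_search_bm(ref, cur, block=8, search=4):
    """Exhaustive block matching: for each block of the current frame, find the
    displacement within +/-search pixels of the reference frame minimising the SAD."""
    h, w = cur.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block].astype(int)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    ref_blk = ref[y:y + block, x:x + block].astype(int)
                    sad = np.abs(cur_blk - ref_blk).sum()     # sum of absolute differences
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[by // block, bx // block] = best_mv
    return vectors

# Synthetic frames: the current frame is the reference shifted by (2, 3) pixels.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(full_search_bm(ref, cur, block=8, search=4)[1, 1])   # expected motion vector: [-2 -3]
```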

Keywords: Video Coding, Motion Estimation, Full-Search, Block-Matching, VLSI Architecture.

PDF Downloads: 1770
431 Application of a Novel Audio Compression Scheme in Automatic Music Recommendation, Digital Rights Management and Audio Fingerprinting

Authors: Anindya Roy, Goutam Saha

Abstract:

Rapid progress in audio compression technology has contributed to the explosive growth of music available in digital form today. In a reversal of ideas, this work makes use of a recently proposed efficient audio compression scheme to develop three important applications in the context of Music Information Retrieval (MIR) for the effective manipulation of large music databases, namely automatic music recommendation (AMR), digital rights management (DRM) and audio finger-printing for song identification. The performance of these three applications has been evaluated with respect to a database of songs collected from a diverse set of genres.

Keywords: Audio compression, Music Information Retrieval, Digital Rights Management, Audio Fingerprinting.

PDF Downloads: 1499
430 Soft Computing based Retrieval System for Medical Applications

Authors: Pardeep Singh, Sanjay Sharma

Abstract:

With increasing data in medical databases, medical data retrieval is growing in popularity. Some of this analysis includes inducing propositional rules from databases using soft computing techniques and then using these rules in an expert system. Diagnostic rules and information on features are extracted from clinical databases on diseases of congenital anomaly. This paper explains the latest soft computing techniques and some of the adaptive techniques that encompass an extensive group of methods that have been applied in the medical domain and that are used for the discovery of data dependencies, the importance of features, patterns in sample data, and feature space dimensionality reduction. These approaches pave the way for new and interesting avenues of research in medical imaging and represent an important challenge for researchers.

Keywords: CBIR, GA, Rough sets, CBMIR, SVM.

PDF Downloads: 1693
429 Video Classification by Partitioned Frequency Spectra of Repeating Movements

Authors: Kahraman Ayyildiz, Stefan Conrad

Abstract:

In this paper we present a system for classifying videos by their frequency spectra. Many videos contain activities with repeating movements; sports videos, home improvement videos, and videos showing mechanical motion are some example areas. The motion in these areas usually repeats with a certain main frequency and several side frequencies. Transforming the repeating motion to the frequency domain via the FFT reveals these frequencies. Average amplitudes of frequency intervals can be seen as features of cyclic motion. Hence, determining these features can help to classify videos with repeating movements. In this paper we explain how to compute frequency spectra for video clips and how to use them for classification. Our approach utilizes a series of image moments as a function, which is then transformed into its frequency domain.
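
A minimal sketch of the feature computation described: treat a per-frame image moment as a time series, transform it with the FFT and average the amplitudes over frequency intervals; the "video" here is a synthetic oscillating sequence, not real footage.

```python
import numpy as np

def frequency_features(moment_series, fps, n_bands=4):
    """FFT a per-frame image-moment series and return the mean amplitude
    in equal-width frequency bands (the classification features)."""
    series = np.asarray(moment_series) - np.mean(moment_series)   # remove DC component
    amp = np.abs(np.fft.rfft(series))
    freqs = np.fft.rfftfreq(len(series), d=1.0 / fps)
    edges = np.linspace(0, freqs[-1], n_bands + 1)
    return np.array([amp[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Synthetic "video": an image moment oscillating at 2 Hz, sampled at 25 fps for 8 s.
fps, seconds = 25, 8
t = np.arange(fps * seconds) / fps
moments = 100 + 10 * np.sin(2 * np.pi * 2.0 * t) + np.random.default_rng(0).normal(0, 1, t.size)
print(frequency_features(moments, fps))   # the band containing 2 Hz dominates
```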

Keywords: action recognition, frequency feature, motion recognition, repeating movement, video classification

PDF Downloads: 1841
428 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices

Authors: Pratik Dhabal Deo, Manoj P.

Abstract:

With the increasing number of social media users, the amount of video content available has also significantly increased. Currently, the number of smartphone users is at its peak, and many increasingly use their smartphones as their main photography and recording devices. There have been a lot of developments in the field of video quality assessment in past years, and more research on various other aspects of video and images is being done. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most used devices, even if they contain content taken in poor lighting conditions using lower-end devices. These devices face a lot of distortions due to various factors, since the spectrum of content recorded on these devices is huge. In this paper, we present an analysis of objective Video Quality Assessment (VQA) metrics on content taken only from the most used devices, and their performance on it, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken from the three most commonly used devices: an Android smartphone, an iOS smartphone and a Digital Single-Lens Reflex (DSLR) camera. To the videos taken on each of these devices, the six most common types of distortions that users face have been applied, in addition to the already existing H.264 compression, based on four reference videos. Each of these six applied distortions has three levels of degradation. The five most popular VQA metrics have been evaluated on this dataset, and the highest and lowest values of each of the metrics on the distortions have been recorded. It is found that blur is the artifact on which most of the metrics did not perform well. Thus, in order to understand the results better, the amount of blur in the dataset has been calculated, and an additional evaluation of the metrics was done using the High Efficiency Video Coding (HEVC) codec, which is the successor of H.264 compression, on the camera that proved to be the sharpest among the devices. The results have shown that as the resolution increases, the performance of the metrics tends to become more accurate, and the best performing metric among them is VQM, with very few inconsistencies and inaccurate results when the compression applied is H.264; but when the compression applied is HEVC, the Structural Similarity (SSIM) metric and Video Multimethod Assessment Fusion (VMAF) have performed significantly better.

Keywords: Distortion, metrics, recording, frame rate, video quality assessment.

PDF Downloads: 304
427 Exploring Performance-Based Music Attributes for Stylometric Analysis

Authors: Abdellghani Bellaachia, Edward Jimenez

Abstract:

Music Information Retrieval (MIR) and modern data mining techniques are applied to identify style markers in MIDI music for stylometric analysis and author attribution. Over 100 attributes are extracted from a library of 2830 songs and then mined using supervised learning data mining techniques. Two attributes are identified that provide high informational gain. These attributes are then used as style markers to predict authorship. Using these style markers, the authors are able to correctly distinguish songs written by the Beatles from those that were not, with a precision and accuracy of over 98 per cent. The identification of these style markers, as well as the architecture for this research, provides a foundation for future research in musical stylometry.

Keywords: Music Information Retrieval, Music Data Mining, Stylometry.

PDF Downloads: 1641
426 Non-Overlapping Hierarchical Index Structure for Similarity Search

Authors: Mounira Taileb, Sid Lamrous, Sami Touati

Abstract:

In order to accelerate similarity search in high-dimensional databases, we propose a new hierarchical indexing method. It is composed of offline and online phases, and our contribution concerns both. In the offline phase, after gathering the whole of the data into clusters and constructing a hierarchical index, the main originality of our contribution is a method to construct bounding forms for the clusters that avoids overlapping. For the online phase, our idea considerably improves the performance of similarity search; for this second phase we have also developed an adapted search algorithm. Our method, baptized NOHIS (Non-Overlapping Hierarchical Index Structure), uses Principal Direction Divisive Partitioning (PDDP) as the clustering algorithm. The principle of PDDP is to divide data recursively into two sub-clusters; the division is done using the hyperplane orthogonal to the principal direction derived from the covariance matrix and passing through the centroid of the cluster to divide. The data of each of the two sub-clusters obtained are enclosed by a minimum bounding rectangle (MBR). The two MBRs are oriented according to the principal direction; consequently, non-overlap between the two forms is assured. Experiments use databases containing image descriptors. Results show that the proposed method outperforms the sequential scan and the SR-tree in processing k-nearest neighbor queries.
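
One PDDP split, as described in this abstract, can be sketched as follows: divide the cluster with the hyperplane orthogonal to its principal direction and passing through the centroid, then bound each side with a rectangle oriented along that direction; the data below are random placeholders rather than real image descriptors.

```python
import numpy as np

def pddp_split(points):
    """Split a cluster with the hyperplane orthogonal to its principal direction
    and passing through its centroid (one step of PDDP)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # principal direction = leading right singular vector of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    side = centered @ direction >= 0
    return points[side], points[~side], direction

def directed_mbr(points, direction):
    """2-D bounding rectangle oriented along the principal (split) direction:
    extents along the direction and orthogonal to it."""
    ortho = np.array([-direction[1], direction[0]])
    u, v = points @ direction, points @ ortho
    return (u.min(), u.max()), (v.min(), v.max())

rng = np.random.default_rng(0)
data = np.vstack([rng.normal([0, 0], 1, (100, 2)), rng.normal([8, 3], 1, (100, 2))])
left, right, d = pddp_split(data)
print("sizes:", len(left), len(right))
print("left MBR :", directed_mbr(left, d))
print("right MBR:", directed_mbr(right, d))
```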

Keywords: K-nearest neighbour search, multi-dimensional indexing, multimedia databases, similarity search.

PDF Downloads: 1530