Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8

Video-Related Publications

8 Key Frames Extraction for Sign Language Video Analysis and Recognition

Authors: Jaroslav Polec, Petra Heribanová, Tomáš Hirner

Abstract:

In this paper we propose a method for finding the video frames that represent one sign in the finger alphabet. The method is based on determining the location of the hands, segmentation, and the use of standard video quality evaluation metrics. Metric calculation is performed only in regions of interest. A sliding mechanism for finding local extrema and an adaptive threshold based on local averaging are used for key frame selection. The success rate is evaluated by recall, precision, and the F1 measure, and the method's effectiveness is compared with the metrics applied to all frames. The proposed method is fast, effective, and relatively easy to realize: the input video is given simple preprocessing and then measured with tools designed for video quality assessment.
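
A minimal sketch of this selection step in Python, assuming grayscale frames as NumPy arrays, a known hand region of interest per frame, and plain MSE standing in for the paper's quality metrics (MSE, MSAD, SSIM, VQM); the function names and the window/threshold values are illustrative only.

import numpy as np

def roi_mse(frame_a, frame_b, roi):
    """Mean squared error between two frames, computed only inside the ROI (x, y, w, h)."""
    x, y, w, h = roi
    a = frame_a[y:y + h, x:x + w].astype(np.float64)
    b = frame_b[y:y + h, x:x + w].astype(np.float64)
    return float(np.mean((a - b) ** 2))

def select_key_frames(metric, window=5, ratio=0.5):
    """Pick indices where the inter-frame metric is a local minimum within a sliding
    window and lies below an adaptive threshold derived from local averaging;
    a small ROI difference held over the window suggests a steady sign."""
    half = window // 2
    key_frames = []
    for i in range(half, len(metric) - half):
        local = metric[i - half:i + half + 1]
        threshold = ratio * float(np.mean(local))  # adaptive threshold from local averaging
        if metric[i] == min(local) and metric[i] <= threshold:
            key_frames.append(i)
    return key_frames

# Usage (hypothetical): metric[i] = roi_mse(frames[i], frames[i + 1], hand_roi[i])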

Keywords: video, quality, sign language, MSE, metric, key frame, MSAD, SSIM, VQM, finger alphabet

7 Application of Tacit Knowledge from Professional Packaging Designer for Teaching Packaging Design

Authors: Somsri Binraman, Boonliang Kaewnapan, Krittika Tanprasert

Abstract:

In the package design industry, a great deal of tacit knowledge resides within each designer. The objectives are to capture this knowledge and compile it into a teaching resource, to create a video clip of the package design process, and to evaluate its quality and learning effectiveness. Interviews were used as the technique for capturing knowledge on brand design concept, differentiation, recognition, ranking of recognition factors, consumer surveys, marketing, research, graphic design, the effect of color, and law and regulation. A video clip about package design was created, consisting of both narration and footage of the actual process. The quality of the video in terms of media was rated as good, while the content was rated as excellent. The students' post-test scores were significantly greater than their pretest scores (p < 0.001).

Keywords: design, packaging, video, interview, tacit knowledge

6 SIFT Accordion: A Space-Time Descriptor Applied to Human Action Recognition

Authors: Olfa Ben Ahmed, Mahmoud Mejdoub, Chokri Ben Amar

Abstract:

Recognizing human actions from videos is an active field of research in computer vision and pattern recognition. Human activity recognition has many potential applications, such as video surveillance, human-machine interaction, sports video retrieval, and robot navigation. Currently, local descriptors and bag-of-visual-words models achieve state-of-the-art performance for human action recognition. The main challenge in feature description is how to represent local motion information efficiently. Most previous work focuses on extending 2D local descriptors into 3D ones to describe the local information around every interest point. In this paper, we propose a new spatio-temporal descriptor based on a space-time description of moving points. Our description relies on an Accordion representation of the video, which is well suited to recognizing human actions from 2D local descriptors without the need for 3D extensions. We use the bag-of-words approach to represent videos, quantizing 2D local descriptors that capture both temporal and spatial features with a good compromise between computational complexity and action recognition rate. We have obtained impressive results on a publicly available action data set.
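
The bag-of-visual-words step can be sketched as follows in Python with OpenCV and scikit-learn; this is not the paper's Accordion construction, only the standard vocabulary-and-histogram pipeline it builds on, with hypothetical function names.

import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_descriptors(frames):
    """Stack 2D SIFT descriptors extracted from a list of grayscale frames."""
    sift = cv2.SIFT_create()
    descs = [sift.detectAndCompute(f, None)[1] for f in frames]
    descs = [d for d in descs if d is not None]
    return np.vstack(descs) if descs else np.empty((0, 128), np.float32)

def build_codebook(training_descriptors, k=200):
    """Learn a visual vocabulary of k words from pooled training descriptors."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(training_descriptors)

def bow_histogram(video_descriptors, codebook):
    """Normalized visual-word histogram representing one video."""
    words = codebook.predict(video_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(np.float64)
    return hist / max(hist.sum(), 1.0)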

Keywords: video, motion, SIFT, human action, Accordion, bag of features, moving point, space-time descriptor

5 Intelligibility of Cued Speech in Video

Authors: J. Polec, P. Heribanová, S. Ondrušová, M. Hosťovecký

Abstract:

This paper discusses cued speech recognition in videoconferencing. Cued speech is a specific gesture-based language used for communication among deaf people. We define criteria for sentence intelligibility based on the answers of the test subjects (deaf people). In our tests we use 30 sample videos coded with the H.264 codec at various bit-rates and various cued speech speeds. Additionally, we define criteria for consonant sign recognizability in the single-handed finger alphabet (dactyl) by analogy with acoustics, using another 12 sample videos coded with the H.264 codec at various bit-rates in four different video formats. To interpret the results we apply the standard scale for subjective video quality evaluation and the percentage evaluation of intelligibility used in acoustics. From the results we derive minimum coded bit-rate recommendations for each spatial resolution.
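
The final step can be sketched in a few lines of Python: once an intelligibility threshold is fixed, the minimum bit-rate per spatial resolution is read off the subjective results. The data layout and the 90 % threshold below are illustrative assumptions, not values from the paper.

def minimum_bitrates(results, threshold=90.0):
    """results: iterable of (resolution, bitrate_kbps, intelligibility_percent) tuples.
    Returns the lowest bit-rate per resolution whose intelligibility meets the threshold."""
    best = {}
    for resolution, bitrate, intelligibility in results:
        if intelligibility >= threshold and (resolution not in best or bitrate < best[resolution]):
            best[resolution] = bitrate
    return best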

Keywords: video, cued speech, intelligibility, logatom

4 Video Quality Assessment Measure with a Neural Network

Authors: A. Tamtaoui, H. El Khattabi, D. Aboutajdine

Abstract:

In this paper, we present video quality measure estimation via a neural network. The network predicts the MOS (mean opinion score) from eight parameters extracted from the original and coded videos. The eight parameters used are: the average of the DFT differences, the standard deviation of the DFT differences, the average of the DCT differences, the standard deviation of the DCT differences, the variance of the color energy, the luminance Y, the chrominance U, and the chrominance V. We use the Euclidean distance to compare the calculated and estimated outputs.
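
A minimal sketch of such a predictor, assuming the eight per-video parameters are already extracted into a NumPy array; scikit-learn's MLP regressor (trained by back-propagation) stands in for the paper's network, and the hidden-layer size is an arbitrary choice.

import numpy as np
from sklearn.neural_network import MLPRegressor

def train_mos_predictor(features, mos):
    """features: (n_videos, 8) array of the eight extracted parameters;
    mos: (n_videos,) subjective mean opinion scores."""
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    model.fit(features, mos)
    return model

def euclidean_error(model, features, mos):
    """Euclidean distance between the predicted and measured MOS vectors."""
    return float(np.linalg.norm(model.predict(features) - mos))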

Keywords: video, DFT, DCT, subjective quality, MLP neural network, backpropagation

3 Design of FIR Filter for Water Level Detection

Authors: Sakol Udomsiri, Masahiro Iwahashi

Abstract:

This paper proposes a new design of a spatial FIR filter to automatically detect the water level from a video signal in various river surroundings. The approach applies "addition" of frames and a "horizontal" edge detector to distinguish the water region from the land region. The variance of each line of a filtered video frame is used as a feature value, and the water level is recognized as the boundary line between the land region and the water region. An edge detection filter essentially demarcates two distinctly different regions; however, conventional filters do not adapt automatically to detect the water level under the varying lighting conditions of river scenery. An optimized filter is therefore proposed so that the system becomes robust to changes in lighting conditions. The improved reliability of the proposed system with the optimized filter is confirmed by the accuracy of water level detection.
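
A minimal Python sketch of the pipeline under stated assumptions: the frames are grayscale NumPy arrays, a Sobel-style horizontal-edge kernel stands in for the paper's optimized FIR filter, and a simple change-point heuristic stands in for its boundary decision rule.

import numpy as np
from scipy import ndimage

def water_level_row(frames):
    """Return the row index taken as the land/water boundary."""
    # "Addition" of frames: averaging suppresses the moving water texture.
    mean_frame = np.mean(np.stack(frames).astype(np.float64), axis=0)
    # "Horizontal" edge detector (responds to horizontal structure).
    kernel = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=np.float64)
    edges = ndimage.convolve(mean_frame, kernel, mode="nearest")
    # Variance of each line of the filtered frame as the feature value.
    line_variance = edges.var(axis=1)
    # Boundary taken where the per-line feature changes most sharply.
    return int(np.argmax(np.abs(np.diff(line_variance))))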

Keywords: video, detection, filter, water level

2 Implementation of a Motion Detection System

Authors: C. Ardil, Asif Ansari, T. C. Manjunath

Abstract:

In today's competitive environment, security concerns have grown tremendously. Possession is said to be nine-tenths of the law, so it is imperative to be able to safeguard one's property from harms such as theft, destruction of property, and people with malicious intent. With the advent of modern technology, the methods used by thieves and robbers have also been improving rapidly, and surveillance techniques must therefore improve with the changing world. With improvements in mass media and various forms of communication, it is now possible to monitor and control the environment to the advantage of property owners. The latest technologies used against theft and destruction are video surveillance and monitoring, which make it possible to monitor and capture every inch and second of the area of interest. However, the technologies used so far are passive in nature: the monitoring systems only help in detecting a crime but do not actively participate in stopping or curbing it while it takes place. We have therefore developed a methodology to detect motion in a video stream, so that the monitoring system can respond while the crime is taking place. The system detects any motion in a live streaming video; once motion has been detected, the software activates a warning system and captures the live stream.
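
The frame-differencing idea can be sketched in a few lines; the original system was implemented in MATLAB, so the Python/OpenCV version below is only an illustration, and the thresholds are arbitrary.

import cv2

def monitor(source=0, pixel_threshold=25, motion_ratio=0.01):
    """Watch a live stream; warn and save the frame whenever enough pixels change."""
    capture = cv2.VideoCapture(source)
    ok, frame = capture.read()
    if not ok:
        raise RuntimeError("cannot read from video source")
    previous = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, previous)
        moving = cv2.threshold(diff, pixel_threshold, 255, cv2.THRESH_BINARY)[1]
        if moving.mean() / 255.0 > motion_ratio:            # enough pixels changed
            print(f"Motion detected in frame {index}")       # hook for the warning system
            cv2.imwrite(f"motion_{index:06d}.png", frame)    # capture the evidence
        previous = gray
        index += 1
    capture.release()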

Keywords: crime, system, video, surveillance, detection, MATLAB, motion

1 Content and Resources based Mobile and Wireless Video Transcoding

Authors: Ashraf M. A. Ahmad

Abstract:

Delivering streaming video over wireless networks is an important component of many interactive multimedia applications running on personal wireless handset devices. Such personal devices have to be inexpensive, compact, and lightweight, but wireless channels have a high bit error rate and limited bandwidth, and packet delay variation due to network congestion together with the high bit error rate greatly degrades the quality of video at the handheld device. Mobile access to multimedia content therefore requires video transcoding functionality at the edge of the mobile network for interworking with heterogeneous networks and services, and to guarantee the quality of service (QoS) delivered to the mobile user, a robust and efficient transcoding scheme should be deployed in the mobile multimedia transport network. This paper examines the challenges and limitations that video transcoding schemes face in mobile multimedia transport networks. Mobile and wireless video transcoding based on handheld resources, network conditions, and content is then proposed to provide high-QoS applications. Extensive experiments, designed to verify the robustness of the proposed approach, demonstrate excellent performance; results are provided for various video clips with different bit rates and frame rates.
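
Purely as an illustration of the kind of decision such a transcoder might make (the inputs, weighting, and candidate ladder below are assumptions, not the paper's scheme), a target can be picked from device resources, measured bandwidth, and a content-complexity hint.

def choose_transcode_target(device_max_height, bandwidth_kbps, content_complexity):
    """Pick a (height, bitrate_kbps, fps) target from a fixed candidate ladder.
    content_complexity in [0, 1]: high-motion or highly textured content gets more bit-rate headroom."""
    ladder = [(1080, 4000, 30), (720, 2500, 30), (480, 1200, 30),
              (360, 700, 25), (240, 400, 15)]
    # Keep headroom on the error-prone wireless link; complex content gets a larger share.
    budget = bandwidth_kbps * (0.6 + 0.2 * content_complexity)
    for height, bitrate, fps in ladder:
        if height <= device_max_height and bitrate <= budget:
            return height, bitrate, fps
    return ladder[-1]  # fall back to the lowest rung

# Example: choose_transcode_target(480, 2000, 0.5) -> (480, 1200, 30)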

Keywords: video, content, texture, object detection, temporal, transcoding
