Search results for: video processing
1871 Comparative Evaluation of Color-Based Video Signatures in the Presence of Various Distortion Types
Authors: Aritz Sánchez de la Fuente, Patrick Ndjiki-Nya, Karsten Sühring, Tobias Hinz, Karsten Müller, Thomas Wiegand
Abstract:
The robustness of color-based signatures in the presence of a selection of representative distortions is investigated. Considered are five signatures that have been developed and evaluated within a new modular framework. Two signatures presented in this work are directly derived from histograms gathered from video frames. The other three signatures are based on temporal information by computing difference histograms between adjacent frames. In order to obtain objective and reproducible results, the evaluations are conducted based on several randomly assembled test sets. These test sets are extracted from a video repository that contains a wide range of broadcast content including documentaries, sports, news, movies, etc. Overall, the experimental results show the adequacy of color-histogram-based signatures for video fingerprinting applications and indicate which type of signature should be preferred in the presence of certain distortions.
Keywords: color histograms, robust hashing, video retrieval, video signature
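To illustrate the general idea behind such signatures (not the authors' exact formulation), the sketch below computes a per-frame color histogram and the difference histogram between adjacent frames; OpenCV, the bin count and the L1 normalization are assumptions.

```python
# Illustrative sketch only: per-frame color-histogram signature and the
# difference histogram between adjacent frames (bin count is arbitrary).
import cv2
import numpy as np

def color_histogram(frame, bins=16):
    """L1-normalized joint histogram over the three color channels."""
    hist = cv2.calcHist([frame], [0, 1, 2], None, [bins, bins, bins],
                        [0, 256, 0, 256, 0, 256]).flatten()
    return hist / (hist.sum() + 1e-9)

def video_signatures(path, bins=16):
    cap = cv2.VideoCapture(path)
    spatial, temporal, prev = [], [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h = color_histogram(frame, bins)
        spatial.append(h)                      # histogram-based signature
        if prev is not None:
            temporal.append(np.abs(h - prev))  # temporal difference histogram
        prev = h
    cap.release()
    return np.array(spatial), np.array(temporal)
```

Matching two videos would then reduce to comparing such signature sequences, e.g. with an L1 distance.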
1870 Video Quality Assessment using Visual Attention Approach for Sign Language
Authors: Julia Kucerova, Jaroslav Polec, Darina Tarcsiova
Abstract:
Visual information is very important in human perception of the surrounding world, and video is one of the most common ways to capture it. Video has many benefits and can be used in various applications. For the most part, video is used for entertainment and relaxation; moreover, it can improve the quality of life of deaf people. Visual information is crucial for hearing-impaired people because it allows them to communicate personally using sign language, and some parts of the person being spoken to are more important than others (e.g. hands, face). The information about visually relevant parts of the image therefore allows us to design an objective metric for this specific case. In this paper, we present an example of an objective metric based on human visual attention and detection of the salient object in the observed scene.
Keywords: sign language, objective video quality, visual attention, saliency
1869 An Improved Fast Video Clip Search Algorithm for Copy Detection using Histogram-based Features
Authors: Feifei Lee, Qiu Chen, Koji Kotani, Tadahiro Ohmi
Abstract:
In this paper, we present an improved fast and robust search algorithm for copy detection of short MPEG video clips from a large video database, using histogram-based features. Two types of histogram features are used to generate more robust features. The first is based on the adjacent pixel intensity difference quantization (APIDQ) algorithm, which had previously been applied reliably to human face recognition; an APIDQ histogram is utilized as the feature vector of the frame image. The other is an ordinal histogram feature, which is robust to color distortion. Furthermore, by combining these with a temporal division method, the spatial and temporal features of the video sequence are integrated to realize fast and robust video search for copy detection. Experimental results show that the proposed algorithm can detect similar video clips more accurately and robustly than the conventional fast video search algorithm.
Keywords: Fast search, Copy detection, Adjacent pixel intensity difference quantization (APIDQ), DC image, Histogram feature.
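As a rough illustration of a difference-quantization histogram (the exact APIDQ definition in the paper may differ), the following sketch quantizes horizontal and vertical adjacent-pixel intensity differences and concatenates the two histograms into a frame feature; the quantization thresholds are assumptions.

```python
# Simplified adjacent-pixel intensity-difference histogram (illustrative only).
import numpy as np

def apid_histogram(gray, edges=(-32, -8, -2, 2, 8, 32)):
    """gray: 2-D uint8 array. Returns a normalized feature vector."""
    g = gray.astype(np.int16)
    dx = (g[:, 1:] - g[:, :-1]).ravel()          # horizontal differences
    dy = (g[1:, :] - g[:-1, :]).ravel()          # vertical differences
    nbins = len(edges) + 1
    hx = np.bincount(np.digitize(dx, edges), minlength=nbins)
    hy = np.bincount(np.digitize(dy, edges), minlength=nbins)
    h = np.concatenate([hx, hy]).astype(float)
    return h / (h.sum() + 1e-12)
```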
1868 Video-Based Face Recognition Based On State-Space Model
Authors: Cheng-Chieh Chiang, Yi-Chia Chan, Greg C. Lee
Abstract:
This paper proposes a video-based framework for face recognition to identify which faces appear in a video sequence. Our basic idea is like a tracking task: to track a selection of person candidates over time according to the observed visual features of face images in video frames. Hence, we employ the state-space model to formulate video-based face recognition by dividing the problem into two parts: the likelihood and the transition measures. The likelihood measure recognizes whose face is currently being observed in video frames, for which two-dimensional linear discriminant analysis is employed. The transition measure estimates the probability of changing from an incorrect recognition at the previous stage to the correct person at the current stage. Moreover, extra nodes associated with head nodes are incorporated into our proposed state-space model. Experimental results are also provided to demonstrate the robustness and efficiency of our proposed approach.
Keywords: 2DLDA, face recognition, state-space model, likelihood measure, transition measure.
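The likelihood/transition structure described above can be pictured as a simple forward recursion over person identities; the sketch below is a generic filter of this kind, with the 2DLDA likelihood left as an input rather than implemented.

```python
# Generic forward recursion over identities (the 2DLDA likelihood is assumed given).
import numpy as np

def forward_recognition(likelihoods, transition):
    """likelihoods: (T, K) per-frame likelihood of each of K candidate persons.
    transition:  (K, K) probability of switching identity between frames."""
    T, K = likelihoods.shape
    belief = np.full(K, 1.0 / K)                 # uniform prior over identities
    track = []
    for t in range(T):
        belief = likelihoods[t] * (transition.T @ belief)   # predict, then update
        belief /= belief.sum()
        track.append(int(np.argmax(belief)))     # most probable identity at frame t
    return track
```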
1867 Real-Time Digital Oscilloscope Implementation in 90nm CMOS Technology FPGA
Authors: Nasir Mehmood, Jens Ogniewski, Vinodh Ravinath
Abstract:
This paper describes the design of a real-time audio-range digital oscilloscope and its implementation on a 90nm CMOS FPGA platform. The design consists of sample-and-hold circuits, A/D conversion, audio and video processing, on-chip RAM, clock generation and control logic. The design of the internal blocks and modules in 90nm FPGA devices is elaborated, and the key features and their implementation algorithms are presented. Finally, the timing waveforms and simulation results are reported.
Keywords: CMOS, VLSI, Oscilloscope, Field Programmable Gate Array (FPGA), VHDL, Video Graphics Array (VGA)
1866 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV
Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran
Abstract:
In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic steps of the real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) establishing the interior camera orientation parameters; (3) determining the relative orientation parameters of each video frame with respect to the others; (4) finding the absolute orientation parameters using self-calibrating bundle adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a mosaic image of the test area, which is merged with a well-referenced existing digital map for the purpose of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to evaluate our method. Video and telemetry data were collected for about fifteen minutes and processed using the four-step ortho-rectification procedure. The results demonstrate that geometric measurements of the control field from ortho-images are more accurate than those from the original perspective images when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, compared with the 6 control points measured by a GPS receiver, is between 3 and 5 metres.
Keywords: Geo-referencing, ortho-rectification, video frame, self-calibration, UAV, target tracking.
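Step (1), the decompilation of the video stream into individual frames, can be sketched as follows; OpenCV and the sampling interval are assumptions, and the remaining orientation steps are outside the scope of this snippet.

```python
# Sketch of step (1): decompile the video stream into individual frames.
import cv2

def extract_frames(video_path, every_n=1):
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frames.append(frame)   # each frame is later ortho-rectified and mosaicked
        idx += 1
    cap.release()
    return frames
```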
1865 Content and Resources based Mobile and Wireless Video Transcoding
Authors: Ashraf M. A. Ahmad
Abstract:
Delivering streaming video over wireless links is an important component of many interactive multimedia applications running on personal wireless handset devices. Such personal devices have to be inexpensive, compact, and lightweight, while wireless channels have a high bit error rate and limited bandwidth. Delay variation of packets due to network congestion, together with the high bit error rate, greatly degrades the quality of video at the handheld device. Mobile access to multimedia content therefore requires video transcoding functionality at the edge of the mobile network for interworking with heterogeneous networks and services, and to guarantee the quality of service (QoS) delivered to the mobile user, a robust and efficient transcoding scheme should be deployed in the mobile multimedia transport network. Hence, this paper examines the challenges and limitations that video transcoding schemes face in such networks. A mobile and wireless video transcoding approach based on handheld resources, network conditions and content is then proposed to provide high-QoS applications. Exceptional performance is demonstrated in the experimental results. These experiments were designed to verify and prove the robustness of the proposed approach; extensive experiments have been conducted, and results for various video clips with different bit rates and frame rates are provided.
Keywords: Content, Object detection, Transcoding, Texture, Temporal, Video.
1864 Using PFA in Feature Analysis and Selection for H.264 Adaptation
Authors: Nora A. Naguib, Ahmed E. Hussein, Hesham A. Keshk, Mohamed I. El-Adawy
Abstract:
Classification of video sequences based on their contents is a vital process for adaptation techniques. It helps decide which adaptation technique best fits the resource reduction requested by the client. In this paper we used the principal feature analysis algorithm to select a reduced subset of video features. The main idea is to select only one feature from each class based on the similarities between the features within that class. Our results showed that using this feature reduction technique the source video features can be completely omitted from future classification of video sequences.
Keywords: Adaptation, feature selection, H.264, Principal Feature Analysis (PFA)
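A common formulation of principal feature analysis clusters the feature loading vectors in the principal subspace and keeps the feature closest to each cluster centre, which matches the idea of selecting one feature per class of similar features; the sketch below follows that formulation with scikit-learn, and the number of retained components is an assumption.

```python
# Sketch of principal feature analysis (PFA) for selecting a reduced feature subset.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def principal_feature_analysis(X, n_keep):
    """X: (samples, features). Returns the indices of the selected features."""
    pca = PCA(n_components=n_keep).fit(X)
    loadings = pca.components_.T                 # one loading vector per feature
    km = KMeans(n_clusters=n_keep, n_init=10).fit(loadings)
    selected = []
    for c in range(n_keep):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(loadings[members] - km.cluster_centers_[c], axis=1)
        selected.append(int(members[np.argmin(d)]))   # feature closest to the centroid
    return sorted(selected)
```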
1863 Video Quality Control Using a ROI and Two-Component Weighted Metrics
Authors: Petra Heribanová, Jaroslav Polec, Michal Martinovič
Abstract:
In this paper we propose a new content-weighted method for full-reference (FR) video quality control that uses a region of interest (ROI) and two-component weighted metrics for deaf people video communication. In our approach, an image is partitioned into the region of interest and a "dry-as-dust" background region; the region of interest is then partitioned into two parts, edges and background (smooth regions), whereas other methods (metrics) combine and weight three or more parts such as edges, edge errors, texture, smooth regions, blur, block distance, etc. The underlying idea is that different image regions in deaf people video communication have different perceptual significance with respect to quality; intensity edges in particular contain considerable image information and are perceptually significant.
Keywords: Video quality assessment, weighted MSE.
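A minimal sketch of a two-component weighted metric of this kind is given below: squared errors inside the ROI are split into an edge part and a smooth part and combined with different weights. The weights and the use of a Canny edge map are assumptions, not the authors' exact choices.

```python
# Sketch of a ROI-based, two-component weighted MSE (weights are illustrative).
import cv2
import numpy as np

def weighted_mse(ref, dist, roi_mask, w_edge=0.7, w_smooth=0.3):
    """ref, dist: uint8 grayscale frames; roi_mask: boolean region-of-interest mask."""
    edges = cv2.Canny(ref, 100, 200) > 0                 # edge component of the ROI
    err = (ref.astype(float) - dist.astype(float)) ** 2
    edge_err = err[roi_mask & edges]
    smooth_err = err[roi_mask & ~edges]
    mse_edge = edge_err.mean() if edge_err.size else 0.0
    mse_smooth = smooth_err.mean() if smooth_err.size else 0.0
    return w_edge * mse_edge + w_smooth * mse_smooth
```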
1862 Spatio-Temporal Video Slice Edges Analysis for Shot Transition Detection and Classification
Authors: Aissa Saoudi, Hassane Essafi
Abstract:
In this work we present a new approach for automatic shot transition detection. Our approach is based on the analysis of Spatio-Temporal Video Slice (STVS) edges extracted from videos. The proposed approach is capable of efficiently detecting both abrupt shot transitions (cuts) and gradual ones such as fade-in, fade-out and dissolve. Compared to other techniques, our method is distinguished by its high precision and speed. This performance is obtained by reducing the shot boundary detection problem to a simple 2D image partitioning problem.
Keywords: Boundary shot detection, Shot transition detection, Video analysis, Video indexing.
1861 Dynamic Visualization on Student's Performance, Retention and Transfer of Procedural Learning
Authors: Fauzy M. Wan, Reem S.A. Baragash
Abstract:
This study examined the effects of two dynamic visualizations on 60 Malaysian primary school students' performance (time on task), retention and transference. The independent variables in this study were the two dynamic visualizations: the video and the animated instructions. The dependent variables were the gain scores for performance, retention and transference. The results showed that the students in the animation group significantly outperformed the students in the video group in retention. There were no significant differences in gain scores for performance and transference between the animation and the video groups, although the scores were slightly higher in the animation group. The conclusion of this study is that animation visualization is superior to video for retention of a procedural task.
Keywords: Dynamic visualization, Procedural Task, Retention, Transference
1860 Smartphone Video Source Identification Based on Sensor Pattern Noise
Authors: Raquel Ramos López, Anissa El-Khattabi, Ana Lucila Sandoval Orozco, Luis Javier García Villalba
Abstract:
The increasing number of mobile devices with integrated cameras means that most digital video now comes from these devices. These digital videos can be made anytime, anywhere and for different purposes. They can also be shared on the Internet in a short period of time and may sometimes contain recordings of illegal acts. The need to reliably trace their origin becomes evident when these videos are used for forensic purposes. This work proposes an algorithm to identify the brand and model of the mobile device that generated a video. Its procedure is as follows: after obtaining the relevant video information, a classification algorithm based on sensor noise and the Wavelet Transform performs the identification. We also present experimental results that support the validity of the techniques used and show promising results.
Keywords: Digital video, forensics analysis, key frame, mobile device, PRNU, sensor noise, source identification.
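The core of sensor-pattern-noise identification can be sketched as follows: average the denoising residuals of key frames to estimate the sensor noise, then correlate against a known camera fingerprint. A Gaussian filter stands in here for the wavelet-based denoiser used in the paper, so this is an illustrative simplification.

```python
# Sketch of PRNU-style source identification (Gaussian denoiser as a stand-in).
import cv2
import numpy as np

def noise_residual(gray):
    denoised = cv2.GaussianBlur(gray, (3, 3), 0)
    return gray.astype(float) - denoised.astype(float)

def estimate_fingerprint(key_frames):
    """key_frames: list of equally sized grayscale key frames from one video."""
    return np.mean([noise_residual(f) for f in key_frames], axis=0)

def match_score(video_fp, camera_fp):
    """Normalized cross-correlation between the two noise patterns."""
    a = (video_fp - video_fp.mean()).ravel()
    b = (camera_fp - camera_fp.mean()).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```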
1859 Shot Transition Detection with Minimal Decoding of MPEG Video Streams
Authors: Mona A. Fouad, Fatma M. Bayoumi, Hoda M. Onsi, Mohamed G. Darwish
Abstract:
Digital libraries are becoming increasingly necessary to support users with powerful and easy-to-use tools for searching, browsing and retrieving media information. The starting point for these tasks is the segmentation of video content into shots. To segment MPEG video streams into shots, this study develops a fully automatic procedure that detects both abrupt and gradual transitions (dissolves and fade groups) with minimal decoding in real time. Each transition type is detected in two phases: analysis of macroblock types in B-frames, and on-demand analysis of intensity information. The experimental results show remarkable performance in detecting gradual transitions for some kinds of input data and comparable results for the rest of the examined video streams. Almost all abrupt transitions could be detected with very few false positive alarms.
Keywords: Adaptive threshold, abrupt transitions, gradual transitions, MPEG video streams.
1858 Video Coding Algorithm for Video Sequences with Abrupt Luminance Change
Authors: Sang Hyun Kim
Abstract:
In this paper, a fast motion compensation algorithm is proposed that improves coding efficiency for video sequences with brightness variations. We also propose a cross entropy measure between histograms of two frames to detect brightness variations. The framewise brightness variation parameters, a multiplier and an offset field for image intensity, are estimated and compensated. Simulation results show that the proposed method yields a higher peak signal to noise ratio (PSNR) compared with the conventional method, with a greatly reduced computational load, when the video scene contains illumination changes.
Keywords: Motion estimation, Fast motion compensation, Brightness variation compensation, Brightness change detection, Cross entropy.
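A cross-entropy measure between the intensity histograms of two frames can be sketched as below; the symmetric formulation and the decision threshold are assumptions rather than the paper's exact definition.

```python
# Sketch of brightness-change detection via histogram cross entropy.
import numpy as np

def intensity_histogram(gray, bins=256):
    h, _ = np.histogram(gray, bins=bins, range=(0, 256))
    return h / (h.sum() + 1e-12)

def cross_entropy(p, q, eps=1e-12):
    return float(-np.sum(p * np.log(q + eps)))

def brightness_change(frame_a, frame_b, threshold=6.0):
    p = intensity_histogram(frame_a)
    q = intensity_histogram(frame_b)
    score = cross_entropy(p, q) + cross_entropy(q, p)   # symmetric score
    return score > threshold, score                     # large score suggests a brightness variation
```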
1857 The Content Based Objective Metrics for Video Quality Evaluation
Authors: Michal Mardiak, Jaroslav Polec
Abstract:
In this paper we propose a comparison of four content-based objective metrics with the results of subjective tests on 80 video sequences. We also include two objective metrics, VQM and SSIM, in our comparison to serve as “reference” objective metrics, because their pros and cons have already been published. Each video sequence was preprocessed by the region recognition algorithm, and then the particular objective video quality metrics were calculated, i.e. mutual information, angular distance, moment of angle and normalized cross-correlation. The Pearson coefficient was calculated to express each metric's relationship to the accuracy of the model, and the Spearman rank order correlation coefficient to represent its relationship to monotonicity. The results show that the model with mutual information as the objective metric provides the best results and is suitable for evaluating the quality of video sequences.
Keywords: Objective quality metrics, mutual information, region recognition, content based metrics
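The evaluation procedure described above, correlating an objective metric against subjective scores, can be sketched as follows; SciPy is an assumed dependency and the sample values are purely illustrative.

```python
# Sketch: Pearson correlation for accuracy, Spearman rank correlation for monotonicity.
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate_metric(objective_scores, mos):
    """objective_scores, mos: 1-D arrays over the same set of video sequences."""
    pearson, _ = pearsonr(objective_scores, mos)    # relationship to model accuracy
    spearman, _ = spearmanr(objective_scores, mos)  # relationship to monotonicity
    return pearson, spearman

# Illustrative call with made-up numbers:
# evaluate_metric(np.array([0.91, 0.72, 0.55, 0.83]), np.array([4.5, 3.1, 2.2, 4.0]))
```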
1856 Stego Machine – Video Steganography using Modified LSB Algorithm
Authors: Mritha Ramalingam
Abstract:
Computer technology and the Internet have transformed data communication, and this has opened a whole new way of implementing steganography to ensure secure data transfer. Steganography is the fine art of hiding information: hiding a message in a carrier file enables the deniability of the existence of any message at all. This paper designs a stego machine, a steganographic application that hides text data in a computer video file and retrieves the hidden information. This is done by embedding the text file in the video file, using the Least Significant Bit (LSB) modification method, in such a way that the video does not lose its functionality. The method applies imperceptible modifications, and the proposed approach achieves high security through an eavesdropper's inability to detect the hidden information.
Keywords: Data hiding, LSB, Stego machine, Video Steganography
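A minimal sketch of plain LSB embedding in a single frame is shown below; it omits the modifications, encryption and frame selection a full stego machine would use, so treat it as the basic principle only.

```python
# Minimal LSB embed/extract for one uint8 frame (basic principle only).
import numpy as np

def embed_lsb(frame, message: bytes):
    """Hide `message` in the least significant bits of a uint8 frame."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = frame.ravel().copy()
    if bits.size > flat.size:
        raise ValueError("message too large for this frame")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite only the LSB
    return flat.reshape(frame.shape)

def extract_lsb(stego_frame, n_bytes):
    bits = stego_frame.ravel()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()
```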
1855 Multiplayer Game System for Therapeutic Exercise in Which Players with Different Athletic Abilities Can Participate on an Even Competitive Footing
Authors: Kazumoto Tanaka, Takayuki Fujino
Abstract:
Sports games conducted as a group are a form of therapeutic exercise for aged people with decreased strength and for people suffering from permanent damage caused by stroke and other conditions. However, it is difficult for patients with different athletic abilities to play a game on an equal footing. This study specifically examines a computer video game designed for therapeutic exercise, and a game system that provides support according to athletic ability, so that anyone playing the game can participate equally. The video game is a popular variant of balloon volleyball, in which players hit a balloon by hand before it falls to the floor. In this game system, each player plays the game watching a monitor on which the system displays tailor-made video-game images adjusted to the person’s athletic ability, providing players with player-adaptive assist support. We have developed a multiplayer game system with an image generation technique for the tailor-made video game and conducted tests to evaluate it.
Keywords: Therapeutic exercise, computer video game, disability-adaptive assist, tailor-made video-game image.
1854 Human Behavior Modeling in Video Surveillance of Conference Halls
Authors: Nour Charara, Hussein Charara, Omar Abou Khaled, Hani Abdallah, Elena Mugellini
Abstract:
In this paper, we present a human behavior modeling approach for video scenes, used to model normal behaviors in conference halls. We exploit the Probabilistic Latent Semantic Analysis (PLSA) technique, using the 'Bag-of-Terms' paradigm, as a tool for exploring video data and learning the model by grouping similar activities. Our term vocabulary consists of 3D spatio-temporal patch groups assigned by the direction of motion. Our video representation captures spatial information, object trajectories, and motion. The main advantage of this approach is that it can be adapted to detect abnormal behaviors in order to ensure and enhance human security.
Keywords: Activity modeling, clustering, PLSA, video representation.
1853 Effective Relay Communication for Scalable Video Transmission
Authors: Jung Ah Park, Zhijie Zhao, Doug Young Suh, Joern Ostermann
Abstract:
In this paper, we propose an effective relay communication scheme for layered video transmission as an alternative way to make the most of limited resources in a wireless communication network where loss often occurs. Relaying brings stable multimedia services to end clients, compared to multiple description coding (MDC). Also, retransmitting only parity data for one or more video layers, generated by a channel coder, from the relay device to the end client is paramount to robustness against loss. Using these methods in resource-constrained environments, such as real-time user created content (UCC) with layered video transmission, can provide high-quality services even in a poor communication environment; minimal services are also possible. The mathematical analysis shows that the proposed method reduces the GOP loss rate compared to MDC and a raptor code without relay: the GOP loss rate of the proposed method is about zero, while MDC and the raptor code without relay have GOP loss rates of 36% and 70%, respectively, at a 10% frame loss rate.
Keywords: Relay communication, Multiple Description Coding, Scalable Video Coding
1852 Enhancing the Performance of H.264/AVC in Adaptive Group of Pictures Mode Using Octagon and Square Search Pattern
Authors: S. Sowmyayani, P. Arockia Jansi Rani
Abstract:
This paper integrates the Octagon and Square Search pattern (OCTSS) motion estimation algorithm into the H.264/AVC (Advanced Video Coding) video codec in Adaptive Group of Pictures (AGOP) mode. The AGOP structure is computed based on scene changes in the video sequence, and the octagon and square search pattern block-based motion estimation method is implemented in the inter-prediction process of H.264/AVC. Together, these methods reduce bit rate and computational complexity while maintaining the quality of the video sequence. Experiments are conducted for different types of video sequences. The results show that the bit rate, computation time and PSNR gain achieved by the proposed method are better than those of the existing H.264/AVC with fixed GOP and AGOP. With a marginal gain in quality of 0.28dB and an average gain in bit rate of 132.87kbps, the proposed method reduces the average computation time by 27.31 minutes compared to the existing state-of-the-art H.264/AVC video codec.
Keywords: Block Distortion Measure, Block Matching Algorithms, H.264/AVC, Motion estimation, Search patterns, Shot cut detection.
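A simplified sketch of pattern-based block matching in the spirit of OCTSS is shown below: the search repeatedly tests an octagon-shaped set of offsets around the current best match and then refines with a small square pattern. The exact offsets and stopping rules of the paper's algorithm are not reproduced here.

```python
# Simplified octagon-then-square block-matching sketch (offsets are illustrative).
import numpy as np

OCTAGON = [(0, -4), (3, -3), (4, 0), (3, 3), (0, 4), (-3, 3), (-4, 0), (-3, -3)]
SQUARE = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def sad(ref, cur, y, x, dy, dx, B):
    """Sum of absolute differences for the block at (y, x) displaced by (dy, dx)."""
    yy, xx = y + dy, x + dx
    if yy < 0 or xx < 0 or yy + B > ref.shape[0] or xx + B > ref.shape[1]:
        return np.inf
    return float(np.abs(ref[yy:yy+B, xx:xx+B].astype(int)
                        - cur[y:y+B, x:x+B].astype(int)).sum())

def octss_motion(ref, cur, y, x, B=16):
    best, best_cost = (0, 0), sad(ref, cur, y, x, 0, 0, B)
    for pattern in (OCTAGON, SQUARE):          # coarse octagon, then square refinement
        improved = True
        while improved:
            improved = False
            for dy, dx in pattern:
                cand = (best[0] + dy, best[1] + dx)
                cost = sad(ref, cur, y, x, cand[0], cand[1], B)
                if cost < best_cost:
                    best, best_cost, improved = cand, cost, True
    return best   # estimated motion vector for the block at (y, x)
```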
1851 Analyzing Transformation of 1D-Functions for Frequency Domain based Video Classification
Authors: Kahraman Ayyildiz, Stefan Conrad
Abstract:
In this paper we present a frequency-domain classification method for video scenes. Videos from certain topical areas often contain activities with repeating movements; sports videos, home improvement videos, and videos showing mechanical motion are some examples. Assessing the main and side frequencies of each repeating movement reveals the motion type. We obtain the frequency domain representation by transforming spatio-temporal motion trajectories. We then explain how to compute frequency features for video clips and how to use them for classification. The focus of the experimental phase is on the transforms utilized in our system; by comparing various transforms, the experiments show the optimal transform for a motion-frequency-based approach.
Keywords: action recognition, frequency, transform, motion recognition, repeating movement, video classification
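Computing frequency features from a 1-D motion trajectory can be sketched as follows; the FFT is only one of the transforms the paper compares, and the choice of features (the strongest spectral peaks) is an assumption.

```python
# Sketch: main and side frequencies of a spatio-temporal motion trajectory.
import numpy as np

def frequency_features(trajectory, fps, n_peaks=3):
    """trajectory: 1-D array, e.g. the x-coordinate of a tracked point over time."""
    centered = trajectory - np.mean(trajectory)       # remove the DC component
    spectrum = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / fps)
    order = np.argsort(spectrum)[::-1][:n_peaks]      # dominant (main and side) frequencies
    return freqs[order], spectrum[order]
```

These peak frequencies and magnitudes can then serve as the feature vector for the classifier.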
1850 Video Quality Assessment Measure with a Neural Network
Authors: H. El Khattabi, A. Tamtaoui, D. Aboutajdine
Abstract:
In this paper, we present video quality measure estimation via a neural network, which predicts the MOS (mean opinion score) from eight parameters extracted from the original and coded videos. The eight parameters are: the average of DFT differences, the standard deviation of DFT differences, the average of DCT differences, the standard deviation of DCT differences, the variance of color energy, the luminance Y, the chrominance U and the chrominance V. We chose the Euclidean distance to compare the calculated and estimated outputs.
Keywords: video, neural network MLP, subjective quality, DCT, DFT, backpropagation
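A small MLP that maps the eight parameters to a MOS estimate might look like the sketch below; scikit-learn is used as a stand-in for the paper's own backpropagation network, and the hidden-layer size and training data are placeholders.

```python
# Sketch: MLP regression from the eight extracted parameters to MOS (placeholder data).
import numpy as np
from sklearn.neural_network import MLPRegressor

# Each row: [avg DFT diff, std DFT diff, avg DCT diff, std DCT diff,
#            color-energy variance, Y, U, V]  -- illustrative values only
rng = np.random.default_rng(0)
X_train = rng.random((100, 8))
mos_train = rng.uniform(1, 5, size=100)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, mos_train)
predicted_mos = model.predict(rng.random((5, 8)))
# The predictions can then be compared with measured MOS, e.g. via Euclidean distance.
```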
1849 Improving Packet Latency of Video Sensor Networks
Authors: Arijit Ghosh, Tony Givargis
Abstract:
Video sensor networks operate under stringent latency requirements: packets have a deadline within which they have to be delivered. Violation of the deadline causes a packet to be treated as lost, and the loss of packets ultimately affects the quality of the application. Network latency is typically a function of many interacting components. In this paper, we propose ways of reducing the forwarding latency of a packet at intermediate nodes. The forwarding latency is caused by a combination of processing delay and queueing delay; the former is incurred in determining the next hop in dynamic routing. We show that unless links fail in a very specific and unlikely pattern, the vast majority of these lookups are redundant. To counter this, we propose source routing as the routing strategy. However, source routing suffers from issues related to scalability and insensitivity to network dynamics. We propose solutions to counter these and show that source routing is a viable option in practically sized video networks. We also propose a fast and fair packet scheduling algorithm that reduces queueing delay at the nodes. We support our claims through extensive simulation on realistic topologies with practical traffic loads and failure patterns.
Keywords: Sensor networks, Packet latency, Network design, Network performance.
1848 ADABeV: Automatic Detection of Abnormal Behavior in Video-surveillance
Authors: Nour Charara, Iman Jarkass, Maria Sokhn, Elena Mugellini, Omar Abou Khaled
Abstract:
Intelligent Video-Surveillance (IVS) systems are becoming more and more popular in security applications. The analysis and recognition of abnormal behaviours in a video sequence has gradually drawn attention in the field of IVS, since it allows filtering out a large amount of useless information, which guarantees high efficiency in security protection and saves considerable human and material resources. We present in this paper ADABeV, an intelligent video-surveillance framework for event recognition in crowded scenes to detect abnormal human behaviour. This framework is intended to achieve real-time alarming, reducing the lag of traditional monitoring systems. The proposed architecture addresses four main challenges: behaviour understanding in crowded scenes, hard lighting conditions, multiple kinds of input sensors, and context-based adaptability to recognize the active context of the scene.
Keywords: Behavior recognition, Crowded scene, Data fusion, Pattern recognition, Video-surveillance
1847 People Counting in Transport Vehicles
Authors: Sebastien Harasse, Laurent Bonnaud, Michel Desvignes
Abstract:
Counting people from a video stream in a noisy environment is a challenging task. This project aims at developing a counting system for transport vehicles, integrated in a video surveillance product. This article presents a method for the detection and tracking of multiple faces in a video using a model of first- and second-order local moments. An iterative process is used to estimate the position and shape of multiple faces in images and to track them. The trajectories are then processed to count people entering and leaving the vehicle.
Keywords: face detection, tracking, counting, local statistics
1846 Viral Advertising: Popularity and Willingness to Share among the Czech Internet Population
Authors: Martin Klepek
Abstract:
This paper presents the results of primary quantitative research on viral advertising, with a focus on popularity and willingness to share viral video among the Czech Internet population. It starts with a brief theoretical discussion of viral advertising, which is used for comparison with the results. To collect data, an online questionnaire survey was given to 384 respondents. Statistics utilized in this research included frequency, percentage, correlation and Pearson's Chi-square test. Data were evaluated using SPSS software. The analysis disclosed a high popularity of viral advertising video among the Czech Internet population but implied a lower willingness to share it. A significant relationship between the likability of the viral video technique and the age of the viewer was found.
Keywords: Internet advertising, Internet population, promotion, marketing communication, viral advertising, viral video.
1845 Burstiness Reduction of a Doubly Stochastic AR-Modeled Uniform Activity VBR Video
Authors: J. P. Dubois
Abstract:
Stochastic modeling of network traffic is an area of significant research activity for current and future broadband communication networks. Multimedia traffic is statistically characterized by a bursty variable bit rate (VBR) profile. In this paper, we develop an improved model for uniform activity level video sources in ATM, using a doubly stochastic autoregressive model driven by an underlying spatial point process. We then examine a number of burstiness metrics such as the peak-to-average ratio (PAR), the temporal autocovariance function (ACF) and the traffic measurement histogram. We found that the first of these measures is most suitable for capturing the burstiness of single-scene video traffic. In the last phase of this work, we analyse statistical multiplexing of several constant-scene video sources. This proved, as expected, to be advantageous with respect to reducing the burstiness of the traffic, as long as the sources are statistically independent. We observed that the burstiness diminished rapidly, with the largest gain occurring when only around 5 sources are multiplexed. The novel model used in this paper for characterizing uniform activity video was thus found to be accurate.
Keywords: AR, ATM, burstiness, doubly stochastic, statistical multiplexing.
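The burstiness metrics mentioned above can be computed from a VBR bit-rate trace as in the sketch below; the trace itself would come from the doubly stochastic AR source model, which is not reproduced here.

```python
# Sketch: peak-to-average ratio, temporal autocovariance, and multiplexing of sources.
import numpy as np

def peak_to_average(rate):
    return float(np.max(rate) / np.mean(rate))

def autocovariance(rate, max_lag=50):
    x = rate - np.mean(rate)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

def multiplex(sources):
    """Aggregate several independent VBR sources; the PAR of the sum typically drops."""
    aggregate = np.sum(sources, axis=0)
    return aggregate, peak_to_average(aggregate)
```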
1844 Video-Based System for Support of Robot-Enhanced Gait Rehabilitation of Stroke Patients
Authors: Matjaž Divjak, Simon Zelič, Aleš Holobar
Abstract:
We present a dedicated video-based monitoring system for quantification of patient’s attention to visual feedback during robot assisted gait rehabilitation. Two different approaches for eye gaze and head pose tracking are tested and compared. Several metrics for assessment of patient’s attention are also presented. Experimental results with healthy volunteers demonstrate that unobtrusive video-based gaze tracking during the robot-assisted gait rehabilitation is possible and is sufficiently robust for quantification of patient’s attention and assessment of compliance with the rehabilitation therapy.
Keywords: Video-based attention monitoring, gaze estimation, stroke rehabilitation, user compliance.
1843 Dynamic Data Partition Algorithm for a Parallel H.264 Encoder
Authors: Juntae Kim, Jaeyoung Park, Kyoungkun Lee, Jong Tae Kim
Abstract:
The H.264/AVC standard is a highly efficient video codec providing high-quality video at low bit rates. Because it employs advanced techniques, its computational complexity has increased, and this complexity is the major problem in the implementation of a real-time encoder and decoder. Parallelism is one approach, and it can be implemented on a multi-core system. We analyze macroblock-level parallelism, which ensures the same bit rate with high concurrency of processors. In order to reduce the encoding time, a dynamic data partition based on macroblock regions is proposed. This data partition has advantages in load balancing and data communication overhead. Using the data partition, the encoder obtains more than a 3.59x speed-up on a four-processor system. This work can be applied to other multimedia processing applications.
Keywords: H.264/AVC, video coding, thread-level parallelism, OpenMP, multimedia
1842 Intelligibility of Cued Speech in Video
Authors: P. Heribanová, J. Polec, S. Ondrušová, M. Hosťovecký
Abstract:
This paper discusses cued speech recognition methods in videoconferencing. Cued speech is a specific gesture language that is used for communication between deaf people. We define the criteria for sentence intelligibility according to the answers of test subjects (deaf people). In our tests we use 30 sample videos coded with the H.264 codec at various bit rates and various speeds of cued speech. Additionally, we define criteria for consonant sign recognizability in the single-handed finger alphabet (dactyl), by analogy to acoustics. Here we use another 12 sample videos coded with the H.264 codec at various bit rates in four different video formats. To interpret the results we apply the standard scale for subjective video quality evaluation and a percentage-based evaluation of intelligibility, as in acoustics. From the results we construct minimum coded bit-rate recommendations for every spatial resolution.
Keywords: cued speech, intelligibility, logatom, video