Search results for: Stereoscopic video
Paper Count: 408

348 Challenges in Video Based Object Detection in Maritime Scenario Using Computer Vision

Authors: Dilip K. Prasad, C. Krishna Prasath, Deepu Rajan, Lily Rachmawati, Eshan Rajabally, Chai Quek

Abstract:

This paper discusses the technical challenges in maritime image processing and machine vision for video streams generated by cameras. Even well documented problems such as horizon detection and registration of frames in a video become difficult in maritime scenarios, and more advanced problems such as background subtraction and object detection in video streams are harder still. The dynamic nature of the background, the unavailability of static cues, the presence of small objects against distant backgrounds, and illumination effects all contribute to the challenges discussed here.

Keywords: Autonomous maritime vehicle, object detection, situation awareness, tracking.

347 Dynamic Visualization on Student's Performance, Retention and Transfer of Procedural Learning

Authors: Fauzy M. Wan, Reem S.A. Baragash

Abstract:

This study examined the effects of two dynamic visualizations on 60 Malaysian primary school students' performance (time on task), retention and transference. The independent variables were the two dynamic visualizations, the video and the animated instructions; the dependent variables were the gain scores of performance, retention and transference. The results showed that the students in the animation group significantly outperformed the students in the video group in retention. There were no significant differences in gain scores for performance and transference between the animation and the video groups, although the scores were slightly higher in the animation group. The study concludes that animated visualization is superior to video for retention of a procedural task.

Keywords: Dynamic visualization, Procedural Task, Retention, Transference

346 Smartphone Video Source Identification Based on Sensor Pattern Noise

Authors: Raquel Ramos López, Anissa El-Khattabi, Ana Lucila Sandoval Orozco, Luis Javier García Villalba

Abstract:

The increasing number of mobile devices with integrated cameras means that most digital video now comes from these devices. These digital videos can be made anytime, anywhere and for different purposes. They can also be shared on the Internet in a short period of time and may sometimes contain recordings of illegal acts. The need to reliably trace their origin becomes evident when these videos are used for forensic purposes. This work proposes an algorithm to identify the brand and model of the mobile device that generated a video. Its procedure is as follows: after obtaining the relevant video information, a classification algorithm based on sensor noise and the wavelet transform performs the identification. Experimental results support the validity of the techniques used and are promising.

Keywords: Digital video, forensics analysis, key frame, mobile device, PRNU, sensor noise, source identification.
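
A minimal sketch of the sensor-pattern-noise idea behind this kind of source identification, assuming the PyWavelets and NumPy packages; the wavelet choice, threshold rule and correlation test below are illustrative, not the paper's exact pipeline:

    import numpy as np
    import pywt

    def noise_residual(frame, wavelet="db4", level=2):
        # PRNU-style residual: the frame minus a wavelet-denoised version of itself.
        coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=level)
        denoised_coeffs = [coeffs[0]]
        for (cH, cV, cD) in coeffs[1:]:
            sigma = np.median(np.abs(cD)) / 0.6745            # robust noise estimate
            thr = sigma * np.sqrt(2.0 * np.log(cD.size))
            denoised_coeffs.append(tuple(pywt.threshold(c, thr, mode="soft")
                                         for c in (cH, cV, cD)))
        denoised = pywt.waverec2(denoised_coeffs, wavelet)
        denoised = denoised[:frame.shape[0], :frame.shape[1]]
        return frame.astype(float) - denoised

    def normalized_correlation(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # A device's reference pattern is the average residual over many of its frames;
    # a test video is attributed to the device whose reference correlates highest.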

345 Shot Transition Detection with Minimal Decoding of MPEG Video Streams

Authors: Mona A. Fouad, Fatma M. Bayoumi, Hoda M. Onsi, Mohamed G. Darwish

Abstract:

Digital libraries are becoming increasingly necessary to support users with powerful and easy-to-use tools for searching, browsing and retrieving media information. The starting point for these tasks is the segmentation of video content into shots. This study develops a fully automatic procedure that segments MPEG video streams into shots by detecting both abrupt and gradual transitions (dissolves and fade groups) with minimal decoding in real time. Each transition type is detected in two phases: analysis of macro-block types in B-frames, and on-demand analysis of intensity information. The experimental results show remarkable performance in detecting gradual transitions for some kinds of input data and comparable results for the rest of the examined video streams. Almost all abrupt transitions could be detected with very few false positive alarms.

Keywords: Adaptive threshold, abrupt transitions, gradual transitions, MPEG video streams.
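
The paper itself works on macro-block types in the compressed domain with minimal decoding; as a rough decoded-domain illustration of abrupt-cut detection with an adaptive threshold (assuming OpenCV and NumPy; the window size and multiplier are illustrative):

    import cv2
    import numpy as np

    def detect_abrupt_cuts(video_path, win=15, k=3.0):
        # Flag a cut when the histogram difference exceeds an adaptive threshold
        # (mean + k * std) computed over a sliding window of recent differences.
        cap = cv2.VideoCapture(video_path)
        prev_hist, diffs, cuts, idx = None, [], [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            hist = cv2.calcHist([gray], [0], None, [64], [0, 256]).flatten()
            hist = hist / (hist.sum() + 1e-12)
            if prev_hist is not None:
                d = float(np.abs(hist - prev_hist).sum())
                recent = diffs[-win:]
                if len(recent) >= 5 and d > np.mean(recent) + k * np.std(recent):
                    cuts.append(idx)
                diffs.append(d)
            prev_hist, idx = hist, idx + 1
        cap.release()
        return cuts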

344 Evaluation of Cognitive Benefits among Differently Abled Subjects with Video Game as Intervention

Authors: H. Nagendra, Vinod Kumar, S. Mukherjee

Abstract:

In this study, the potential benefits of playing an action video game among congenitally deaf and dumb subjects are reported in terms of EEG ratio indices. The frontal and occipital lobes are associated with the development of motor skills and cognition, and with visual information processing and color recognition. Sixteen hours of first-person shooter action video game play resulted in an increase of the ratios β/(α+θ) and β/θ in the frontal and occipital lobes. This can be attributed to the enhancement of certain aspects of cognition among deaf and dumb subjects.

Keywords: Cognitive enhancement, video games, EEG band powers, Deaf and Dumb subjects.
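
A small sketch of how such ratio indices could be computed from one EEG channel, assuming SciPy and NumPy and the usual theta/alpha/beta band limits (the study's exact bands and PSD settings are not given in the abstract):

    import numpy as np
    from scipy.signal import welch

    def eeg_ratio_indices(signal, fs=256.0):
        # Band powers from Welch's PSD, then the two ratios used in the abstract.
        f, pxx = welch(signal, fs=fs, nperseg=int(2 * fs))
        def band_power(lo, hi):
            mask = (f >= lo) & (f < hi)
            return np.trapz(pxx[mask], f[mask])
        theta = band_power(4.0, 8.0)
        alpha = band_power(8.0, 13.0)
        beta = band_power(13.0, 30.0)
        return {"beta/(alpha+theta)": beta / (alpha + theta),
                "beta/theta": beta / theta}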

343 A Moving Human-Object Detection for Video Access Monitoring

Authors: Won-Ho Kim, Nuwan Sanjeewa Rajasooriya

Abstract:

In this paper, a simple moving human detection method is proposed for video surveillance or access monitoring systems. The frame difference and a noise threshold are used for initial detection of a moving human-object, and a simple labeling method is applied for final human-object segmentation. Simulation results show that the algorithm detects moving human-objects quickly, achieving a correct detection rate of 95%. The results confirm that the proposed algorithm can be used in an intelligent video access monitoring system.

Keywords: Moving human-object detection, Video access monitoring, Image processing.
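
A minimal sketch of the frame-difference / noise-threshold / labeling pipeline described above, assuming OpenCV and NumPy; the threshold and minimum-area values are illustrative:

    import cv2
    import numpy as np

    def detect_moving_objects(prev_gray, gray, noise_thresh=25, min_area=500):
        # Frame difference -> noise threshold -> connected-component labeling.
        diff = cv2.absdiff(gray, prev_gray)
        _, mask = cv2.threshold(diff, noise_thresh, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        # Keep components large enough to be a person; return their bounding boxes.
        return [tuple(stats[i, :4]) for i in range(1, n)
                if stats[i, cv2.CC_STAT_AREA] >= min_area]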

342 High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform

Authors: Hanaa M. Abdelgawad, Mona Safar, Ayman M. Wahba

Abstract:

Real-time image and video processing is required in many computer vision applications, e.g., video surveillance, traffic management and medical imaging. These applications require high computational power, so the optimal solution is the collaboration of a CPU with hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Edge detection is one of the basic building blocks of video and image processing applications and a common block in the pre-processing phase of the image and video processing pipeline. Our approach offloads the Canny edge detection algorithm from the processing system (PS) to the programmable logic (PL), taking advantage of the High Level Synthesis (HLS) tool flow to accelerate the implementation on the Zynq platform. The resulting implementation achieves up to a 100x performance improvement through hardware acceleration; CPU utilization drops and the frame rate reaches 60 fps for a 1080p full-HD input video stream.

Keywords: High Level Synthesis, Canny edge detection, Hardware accelerators, and Computer Vision.

341 Video Coding Algorithm for Video Sequences with Abrupt Luminance Change

Authors: Sang Hyun Kim

Abstract:

In this paper, a fast motion compensation algorithm is proposed that improves coding efficiency for video sequences with brightness variations. We also propose a cross entropy measure between histograms of two frames to detect brightness variations. The framewise brightness variation parameters, a multiplier and an offset field for image intensity, are estimated and compensated. Simulation results show that the proposed method yields a higher peak signal to noise ratio (PSNR) compared with the conventional method, with a greatly reduced computational load, when the video scene contains illumination changes.

Keywords: Motion estimation, Fast motion compensation, Brightness variation compensation, Brightness change detection, Cross entropy.
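
A sketch of the two ingredients named in the abstract, a frame-wise multiplier/offset brightness model and a cross-entropy measure between frame histograms, assuming NumPy and 8-bit grayscale frames; the fitting and detection details here are illustrative:

    import numpy as np

    def brightness_parameters(prev_frame, cur_frame):
        # Least-squares fit of cur = a * prev + b (frame-wise multiplier and offset).
        x = prev_frame.astype(float).ravel()
        y = cur_frame.astype(float).ravel()
        a, b = np.polyfit(x, y, 1)
        return a, b

    def histogram_cross_entropy(prev_frame, cur_frame, bins=256):
        # Cross entropy between the intensity histograms of two frames; a large
        # value signals a brightness change (the decision threshold is illustrative).
        p, _ = np.histogram(prev_frame, bins=bins, range=(0, 256), density=True)
        q, _ = np.histogram(cur_frame, bins=bins, range=(0, 256), density=True)
        p, q = p + 1e-12, q + 1e-12
        return float(-np.sum(p * np.log(q)))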

340 The Content Based Objective Metrics for Video Quality Evaluation

Authors: Michal Mardiak, Jaroslav Polec

Abstract:

In this paper, we compare four content-based objective metrics with the results of subjective tests on 80 video sequences. We also include two objective metrics, VQM and SSIM, in our comparison to serve as “reference” metrics, because their pros and cons have already been published. Each video sequence was preprocessed by the region recognition algorithm, and then the individual objective video quality metrics were calculated, i.e., mutual information, angular distance, moment of angle and the normalized cross-correlation measure. The Pearson coefficient was calculated to express each metric's relationship to the accuracy of the model, and the Spearman rank-order correlation coefficient to represent its relationship to monotonicity. The results show that the model with mutual information as the objective metric provides the best results and is suitable for evaluating the quality of video sequences.

Keywords: Objective quality metrics, mutual information, region recognition, content based metrics
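
A brief sketch of the evaluation protocol (Pearson for accuracy, Spearman for monotonicity) and of a histogram-based mutual-information metric, assuming SciPy and NumPy; this is not the paper's exact region-based computation:

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    def metric_performance(objective_scores, mos):
        # Pearson gauges prediction accuracy, Spearman rank correlation monotonicity.
        return {"pearson": pearsonr(objective_scores, mos)[0],
                "spearman": spearmanr(objective_scores, mos)[0]}

    def mutual_information(ref_frame, dist_frame, bins=64):
        # Mutual information from the joint histogram of reference/distorted pixels.
        joint, _, _ = np.histogram2d(ref_frame.ravel(), dist_frame.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))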

339 Stego Machine – Video Steganography using Modified LSB Algorithm

Authors: Mritha Ramalingam

Abstract:

Computer technology and the Internet have transformed data communication. This has opened a whole new way of implementing steganography to ensure secure data transfer. Steganography is the fine art of hiding information: hiding the message in a carrier file enables denial of the existence of any message at all. This paper presents a stego machine, a steganographic application that hides text data in a computer video file and retrieves the hidden information. This is achieved by embedding a text file in a video file, using the Least Significant Bit (LSB) modification method, in such a way that the video does not lose its functionality. The modifications are imperceptible, and the method strives for high security by relying on an eavesdropper's inability to detect the hidden information.

Keywords: Data hiding, LSB, Stego machine, Video Steganography
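
A minimal single-frame sketch of LSB embedding and extraction, assuming NumPy and 8-bit frames; it ignores the capacity, synchronization and codec-survival issues a real stego machine has to handle:

    import numpy as np

    def embed_lsb(frame, message):
        # Hide the message bits in the least significant bit of each pixel value.
        bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
        flat = frame.reshape(-1).copy()
        if bits.size > flat.size:
            raise ValueError("message too long for this frame")
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return flat.reshape(frame.shape)

    def extract_lsb(stego_frame, n_bytes):
        bits = stego_frame.reshape(-1)[:n_bytes * 8] & 1
        return np.packbits(bits).tobytes()

    # Example: embed_lsb(frame, b"secret") and extract_lsb(stego, 6) round-trip
    # as long as the frames are stored losslessly.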

338 Multiplayer Game System for Therapeutic Exercise in Which Players with Different Athletic Abilities Can Participate on an Even Competitive Footing

Authors: Kazumoto Tanaka, Takayuki Fujino

Abstract:

Sports games conducted as a group are a form of therapeutic exercise for aged people with decreased strength and for people with permanent impairment from stroke and other conditions. However, it is difficult for patients with different athletic abilities to play a game on an equal footing. This study examines a computer video game designed for therapeutic exercise and a game system that provides support according to each player's athletic ability, so that anyone playing the game can participate equally. Specifically, the video game is a popular variant of balloon volleyball, in which players hit a balloon by hand before it falls to the floor. In this game system, each player watches a monitor on which the system displays tailor-made video-game images adjusted to that person's athletic ability, providing player-adaptive assist support. We have developed a multiplayer game system with an image generation technique for the tailor-made video game and conducted tests to evaluate it.

Keywords: Therapeutic exercise, computer video game, disability-adaptive assist, tailor-made video-game image.

337 Human Behavior Modeling in Video Surveillance of Conference Halls

Authors: Nour Charara, Hussein Charara, Omar Abou Khaled, Hani Abdallah, Elena Mugellini

Abstract:

In this paper, we present a human behavior modeling approach for video scenes. This approach is used to model normal behaviors in conference halls. We exploit Probabilistic Latent Semantic Analysis (PLSA), using the 'Bag-of-Terms' paradigm, as a tool for exploring video data and learning the model by grouping similar activities. Our term vocabulary consists of 3D spatio-temporal patch groups assigned by the direction of motion. Our video representation preserves spatial information, object trajectories, and motion. The main advantage of this approach is that it can be adapted to detect abnormal behaviors, ensuring and enhancing human security.

Keywords: Activity modeling, clustering, PLSA, video representation.
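
A rough sketch of grouping clips by activity-topic mixtures, using KL-divergence NMF from scikit-learn as a stand-in for PLSA (the two factorizations are closely related) and toy term counts in place of real spatio-temporal patch-group counts:

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    term_counts = rng.poisson(1.0, size=(40, 200)).astype(float)   # clips x terms (toy)

    model = NMF(n_components=5, beta_loss="kullback-leibler", solver="mu",
                init="nndsvda", max_iter=500)
    clip_topics = model.fit_transform(term_counts)    # per-clip activity-topic weights
    normal_profile = clip_topics.mean(axis=0)         # profile of "normal" behaviour
    # A new clip whose topic mixture deviates strongly from normal_profile
    # would be flagged as a candidate abnormal behaviour.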

336 Effective Relay Communication for Scalable Video Transmission

Authors: Jung Ah Park, Zhijie Zhao, Doug Young Suh, Joern Ostermann

Abstract:

In this paper, we propose effective relay communication for layered video transmission as a way of making the most of limited resources in a lossy wireless communication network. Relaying brings stable multimedia services to end clients, compared to multiple description coding (MDC). In addition, retransmitting only channel-coded parity data for one or more video layers from the relay device to the end client is key to robustness against loss. Using these methods in resource-constrained environments, such as real-time user-created content (UCC) with layered video transmission, can provide high-quality services even in a poor communication environment; minimal services also remain possible. Mathematical analysis shows that the proposed method reduces the GOP loss rate compared to MDC and to a raptor code without relay. The GOP loss rate is close to zero, whereas MDC and the raptor code without relay have GOP loss rates of 36% and 70%, respectively, at a 10% frame loss rate.

Keywords: Relay communication, Multiple Description Coding, Scalable Video Coding
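
A toy erasure model of why relayed parity retransmission lowers the GOP loss rate: a GOP is lost only when more frames are erased than the parity can repair (the GOP size, parity budget and independence assumption are illustrative, not the paper's analysis):

    from math import comb

    def gop_loss_probability(frame_loss_rate, gop_size=16, parity=0):
        # Probability that more than `parity` frames of the GOP are lost,
        # assuming independent frame losses.
        p = frame_loss_rate
        return sum(comb(gop_size, k) * p**k * (1 - p)**(gop_size - k)
                   for k in range(parity + 1, gop_size + 1))

    print(gop_loss_probability(0.10, parity=0))   # no relayed parity
    print(gop_loss_probability(0.10, parity=4))   # relay retransmits parity data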

335 Enhancing the Performance of H.264/AVC in Adaptive Group of Pictures Mode Using Octagon and Square Search Pattern

Authors: S. Sowmyayani, P. Arockia Jansi Rani

Abstract:

This paper integrates the Octagon and Square Search pattern (OCTSS) motion estimation algorithm into the H.264/AVC (Advanced Video Coding) codec in Adaptive Group of Pictures (AGOP) mode. The AGOP structure is computed based on scene changes in the video sequence. The octagon and square search pattern block-based motion estimation method is implemented in the inter-prediction process of H.264/AVC. Together, these methods reduce bit rate and computational complexity while maintaining the quality of the video sequence. Experiments are conducted for different types of video sequences. The results show that the bit rate, computation time and PSNR achieved by the proposed method are better than those of existing H.264/AVC with fixed GOP and with AGOP. With a marginal quality gain of 0.28 dB and an average bit-rate gain of 132.87 kbps, the proposed method reduces the average computation time by 27.31 minutes compared to the existing state-of-the-art H.264/AVC video codec.

Keywords: Block Distortion Measure, Block Matching Algorithms, H.264/AVC, Motion estimation, Search patterns, Shot cut detection.

334 Analyzing Transformation of 1D-Functions for Frequency Domain based Video Classification

Authors: Kahraman Ayyildiz, Stefan Conrad

Abstract:

In this paper, we present a frequency-domain-based classification method for video scenes. Videos from certain topical areas often contain activities with repeating movements; sports videos, home improvement videos, or videos showing mechanical motion are some example areas. Assessing the main and side frequencies of each repeating movement reveals the motion type. The frequency domain is obtained by transforming spatio-temporal motion trajectories. We then explain how to compute frequency features for video clips and how to use them for classification. The experimental phase focuses on the transforms used in our system; by comparing various transforms, the experiments identify the optimal transform for a motion-frequency-based approach.

Keywords: action recognition, frequency, transform, motion recognition, repeating movement, video classification
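
A small sketch of turning a 1D motion trajectory into frequency features with the FFT, assuming NumPy; the transform under comparison in the paper could be swapped in at the marked line:

    import numpy as np

    def motion_frequency_features(trajectory, fps, n_peaks=3):
        # trajectory: 1D function of time, e.g. the y-coordinate of a tracked point.
        x = np.asarray(trajectory, dtype=float)
        x = x - x.mean()
        spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))   # <- transform under test
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
        order = np.argsort(spectrum[1:])[::-1][:n_peaks] + 1     # skip the DC bin
        return freqs[order], spectrum[order]                     # main and side frequencies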

333 Video Quality Assessment Measure with a Neural Network

Authors: H. El Khattabi, A. Tamtaoui, D. Aboutajdine

Abstract:

In this paper, we present video quality estimation via a neural network. The network predicts the MOS (mean opinion score) from eight parameters extracted from the original and coded videos. The eight parameters used are: the average of DFT differences, the standard deviation of DFT differences, the average of DCT differences, the standard deviation of DCT differences, the variance of color energy, the luminance Y, the chrominance U and the chrominance V. We use the Euclidean distance to compare the calculated and estimated outputs.

Keywords: video, neural network MLP, subjective quality, DCT, DFT, Retropropagation
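
A compact sketch of the regression set-up described above, an MLP mapping the eight extracted parameters to MOS and evaluated by Euclidean distance, assuming scikit-learn and placeholder data (the real features and MOS values come from the paper's videos):

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Rows: videos. Columns: the eight parameters (DFT/DCT difference means and
    # standard deviations, color-energy variance, Y, U, V). Placeholder values only.
    X = np.random.rand(120, 8)
    mos = np.random.rand(120) * 4.0 + 1.0

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                       random_state=0))
    model.fit(X[:100], mos[:100])
    predicted = model.predict(X[100:])
    print(np.linalg.norm(predicted - mos[100:]))   # Euclidean distance, as in the abstract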

332 ADABeV: Automatic Detection of Abnormal Behavior in Video-surveillance

Authors: Nour Charara, Iman Jarkass, Maria Sokhn, Elena Mugellini, Omar Abou Khaled

Abstract:

Intelligent Video-Surveillance (IVS) systems are becoming increasingly popular in security applications. The analysis and recognition of abnormal behaviours in a video sequence has gradually drawn attention in the field of IVS, since it filters out a large amount of useless information, which improves the efficiency of security protection and saves considerable human and material resources. In this paper we present ADABeV, an intelligent video-surveillance framework for event recognition in crowded scenes that detects abnormal human behaviour. The framework is intended to achieve real-time alarming, reducing the lag of traditional monitoring systems. The proposed architecture addresses four main challenges: behaviour understanding in crowded scenes, hard lighting conditions, multiple kinds of input sensors, and context-based adaptability to recognize the active context of the scene.

Keywords: Behavior recognition, Crowded scene, Data fusion, Pattern recognition, Video-surveillance

331 People Counting in Transport Vehicles

Authors: Sebastien Harasse, Laurent Bonnaud, Michel Desvignes

Abstract:

Counting people from a video stream in a noisy environment is a challenging task. This project aims at developing a counting system for transport vehicles, integrated in a video surveillance product. This article presents a method for the detection and tracking of multiple faces in a video using a model of first- and second-order local moments. An iterative process estimates the position and shape of multiple faces in images and tracks them. The trajectories are then processed to count people entering and leaving the vehicle.

Keywords: face detection, tracking, counting, local statistics
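
A sketch of the final counting step only: once face trajectories have been estimated, people are counted when a trajectory crosses a door line (the line position and crossing rule are illustrative):

    def count_entries_and_exits(trajectories, door_y=240.0):
        # Each trajectory is a time-ordered list of (x, y) face positions.
        entering = leaving = 0
        for traj in trajectories:
            if len(traj) < 2:
                continue
            y_first, y_last = traj[0][1], traj[-1][1]
            if y_first < door_y <= y_last:
                entering += 1
            elif y_first >= door_y > y_last:
                leaving += 1
        return entering, leaving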

330 Viral Advertising: Popularity and Willingness to Share among the Czech Internet Population

Authors: Martin Klepek

Abstract:

This paper presents the results of primary quantitative research on viral advertising, with a focus on the popularity of viral video and the willingness to share it among the Czech Internet population. It starts with a brief theoretical discussion of viral advertising, against which the results are compared. To collect data, an online questionnaire survey was administered to 384 respondents. The statistics used included frequency, percentage, correlation and Pearson's chi-square test; data were evaluated using SPSS software. The analysis revealed high popularity of viral advertising videos among the Czech Internet population but a lower willingness to share them. A significant relationship was found between the likability of the viral video technique and the age of the viewer.

Keywords: Internet advertising, Internet population, promotion, marketing communication, viral advertising, viral video.
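
A short sketch of a Pearson chi-square test on a likability-by-age contingency table, assuming SciPy; the counts below are illustrative, not the survey's data:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: age groups; columns: liked the viral video / did not like it.
    table = np.array([[80, 20],
                      [60, 40],
                      [45, 55],
                      [30, 54]])
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")   # small p -> likability depends on age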

329 Burstiness Reduction of a Doubly Stochastic AR-Modeled Uniform Activity VBR Video

Authors: J. P. Dubois

Abstract:

Stochastic modeling of network traffic is an area of significant research activity for current and future broadband communication networks. Multimedia traffic is statistically characterized by a bursty variable bit rate (VBR) profile. In this paper, we develop an improved model for uniform activity level video sources in ATM using a doubly stochastic autoregressive model driven by an underlying spatial point process. We then examine a number of burstiness metrics such as the peak-to-average ratio (PAR), the temporal autocovariance function (ACF) and the histogram of traffic measurements. We find that the first of these, the PAR, is the most suitable measure for capturing the burstiness of single-scene video traffic. In the last phase of this work, we analyse statistical multiplexing of several constant-scene video sources. This proved, expectedly, to be advantageous with respect to reducing the burstiness of the traffic, as long as the sources are statistically independent. We observe that burstiness diminishes rapidly, with the largest gain occurring when only around five sources are multiplexed. The novel model used in this paper for characterizing uniform-activity video was thus found to be accurate.

Keywords: AR, ATM, burstiness, doubly stochastic, statistical multiplexing.
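
A minimal sketch of the two burstiness measures named above, the peak-to-average ratio and a normalized temporal autocovariance, assuming NumPy and a per-slot bit-count trace:

    import numpy as np

    def burstiness_metrics(bitrate_trace, max_lag=50):
        x = np.asarray(bitrate_trace, dtype=float)
        par = x.max() / x.mean()                           # peak-to-average ratio
        xc = x - x.mean()
        acf = [1.0] + [float(np.dot(xc[:-k], xc[k:]) / np.dot(xc, xc))
                       for k in range(1, max_lag + 1)]
        return par, np.array(acf)

    # Statistical multiplexing: the PAR of the sum of independent traces drops
    # quickly as sources are added, e.g. burstiness_metrics(traces.sum(axis=0))[0].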

328 Video-Based System for Support of Robot-Enhanced Gait Rehabilitation of Stroke Patients

Authors: Matjaž Divjak, Simon Zelič, Aleš Holobar

Abstract:

We present a dedicated video-based monitoring system for quantifying a patient's attention to visual feedback during robot-assisted gait rehabilitation. Two different approaches to eye gaze and head pose tracking are tested and compared, and several metrics for assessing the patient's attention are presented. Experimental results with healthy volunteers demonstrate that unobtrusive video-based gaze tracking during robot-assisted gait rehabilitation is possible and is sufficiently robust for quantifying the patient's attention and assessing compliance with the rehabilitation therapy.

Keywords: Video-based attention monitoring, gaze estimation, stroke rehabilitation, user compliance.

327 Intelligibility of Cued Speech in Video

Authors: P. Heribanová, J. Polec, S. Ondrušová, M. Hosťovecký

Abstract:

This paper discusses cued speech recognition in videoconferencing. Cued speech is a specific gesture language used for communication between deaf people. We define criteria for sentence intelligibility based on the answers of test subjects (deaf people). In our tests, we use 30 sample videos coded with the H.264 codec at various bit-rates and various speeds of cued speech. Additionally, we define criteria for consonant sign recognizability in the single-handed finger alphabet (dactyl), by analogy with acoustics. We use another 12 sample videos coded with the H.264 codec at various bit-rates in four different video formats. To interpret the results, we apply the standard scale for subjective video quality evaluation and a percentage evaluation of intelligibility, as in acoustics. From the results, we derive minimum coded bit-rate recommendations for each spatial resolution.

Keywords: cued speech, intelligibility, logatom, video

326 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study

Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin

Abstract:

Live video streaming is one of the most widely used services among end users, yet it is a major challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of live video streaming. For this purpose, several objective algorithms are available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms: as human perception is considered the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper, we conduct subjective evaluation tests on a set of video sequences containing the temporal impairment known as frame freezing. Frame freezing can arise from transmission errors as well as hardware errors and results in the loss of video frames on the receiving side of a transmission system. Our subjective tests cover videos containing a single freezing event as well as videos containing multiple freezing events. We record the subjective test results for all videos in order to compare the available No Reference (NR) objective algorithms. Finally, we report the performance of the no-reference algorithms used for objective evaluation and indicate which algorithm works best. The outcome of this study shows the importance of QoE and its relation to human perception; the subjective evaluation results can serve to validate objective algorithms.

Keywords: Objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA).
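
A no-reference sketch of frame-freezing detection: a run of (nearly) identical consecutive frames is reported as one freezing event (assuming NumPy; the difference threshold and minimum run length are illustrative):

    import numpy as np

    def detect_freezing_events(frames, eps=0.5, min_len=3):
        events, run_start = [], None
        for i in range(1, len(frames)):
            mad = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
            if mad < eps:                       # frame is (almost) identical to the last
                if run_start is None:
                    run_start = i - 1
            else:
                if run_start is not None and i - run_start >= min_len:
                    events.append((run_start, i - 1))
                run_start = None
        if run_start is not None and len(frames) - run_start >= min_len:
            events.append((run_start, len(frames) - 1))
        return events                           # list of (first_frozen, last_frozen)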

325 Video Matting based on Background Estimation

Authors: J.-H. Moon, D.-O Kim, R.-H. Park

Abstract:

This paper presents a video matting method, which extracts the foreground and alpha matte from a video sequence. The objective of video matting is to find the foreground and composite it with a background different from the one in the original image. By finding the motion vectors (MVs) using a sliced block matching algorithm (SBMA), we can extract moving regions from the video sequence under the assumption that the foreground is moving and the background is stationary. In practice, foreground areas do not move in every frame of an image sequence, thus we accumulate moving regions over the sequence. The boundaries of moving regions are found by the Canny edge detector, and the foreground region is separated in each frame of the sequence. The remaining regions are defined as background. The backgrounds extracted from each frame are combined into a single integrated background. Based on the estimated background, we compute the frame difference (FD) of each frame. Regions with an FD larger than the threshold are defined as foreground, the boundaries of foreground regions are defined as unknown regions, and the remaining regions are defined as background. Segmentation information that classifies an image into foreground, background, and unknown regions is called a trimap. The matting process extracts an alpha matte in the unknown region using pixel information from the foreground and background regions, and estimates the values of foreground and background pixels in unknown regions. The proposed video matting approach is adaptive and convenient for extracting a foreground automatically and compositing it with a background different from the original one.

Keywords: Background estimation, Object segmentation, Block matching algorithm, Video matting.
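
A simplified sketch of the trimap step described above, assuming NumPy: the frame difference against the estimated background is split into background, foreground and unknown labels (the paper derives the unknown band from foreground boundaries; the two-threshold rule here is a stand-in):

    import numpy as np

    def trimap_from_background(frame, background, t_low=10.0, t_high=40.0):
        fd = np.abs(frame.astype(float) - background.astype(float))
        if fd.ndim == 3:
            fd = fd.mean(axis=2)                           # collapse color channels
        trimap = np.full(fd.shape, 128, dtype=np.uint8)    # unknown
        trimap[fd < t_low] = 0                             # background
        trimap[fd > t_high] = 255                          # foreground
        return trimap                                      # input to the matting step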

324 Adaptive Multiple Transforms Hardware Architecture for Versatile Video Coding

Authors: T. Damak, S. Houidi, M. A. Ben Ayed, N. Masmoudi

Abstract:

The Versatile Video Coding standard (VVC) is currently under development by the Joint Video Exploration Team (JVET). An Adaptive Multiple Transforms (AMT) approach has been announced; it is based on different transform modules that provide efficient coding. However, the AMT solution raises several issues, especially regarding the complexity of the selected set of transforms, which can be important for future industrial adoption. This paper proposes an efficient hardware implementation of the most used transform in the AMT approach: the DCT-II. The developed circuit is adapted to different block sizes and reaches a minimum frequency of 192 MHz, allowing an optimized execution time.

Keywords: AMT, DCT II, hardware, transform, VVC.
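
For reference, the transform the hardware implements is the orthonormal DCT-II; a NumPy sketch that builds the basis matrix for any block size and applies the separable 2D transform (the actual circuit uses fixed-point arithmetic, which this sketch does not model):

    import numpy as np

    def dct2_matrix(n):
        # Orthonormal n-point DCT-II basis (n = 4, 8, 16, 32, ...).
        k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2.0 * n))
        m[0, :] /= np.sqrt(2.0)
        return m

    def dct2_2d(block):
        # Separable 2D DCT-II of a square block: rows then columns.
        m = dct2_matrix(block.shape[0])
        return m @ block @ m.T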

323 Impact of Fixation Time on Subjective Video Quality Metric: a New Proposal for Lossy Compression Impairment Assessment

Authors: M. G. Albanesi, R. Amadeo

Abstract:

In this paper, a new approach to quality assessment of lossy compressed digital video is proposed. The research is based on visual fixation data recorded by an eye tracker. The method involves both a new paradigm for subjective quality evaluation and a subsequent statistical analysis that matches the subjective scores provided by the observers to the data obtained from the eye-tracker experiments. The study brings improvements to the state of the art, as it solves some problems highlighted in the literature. The experiments show that data obtained from an eye tracker can be used to classify videos according to the level of impairment due to compression. The paper presents the methodology, the experimental results and their interpretation. The conclusions suggest that an eye tracker can be useful in quality assessment if the data are collected and analyzed properly.

Keywords: eye tracker, video compression, video quality assessment, visual attention

322 Evaluation of Droplet Sizes from Video Images for Metal Working Fluids

Authors: R. Hacıoğlu, A. Genç, B. Bakırcı

Abstract:

Metal working fluids were used in the preparation of oil-in-water emulsions. The sizes of the oil droplets were evaluated by analyzing video images taken during the zeta potential measurements. The evaluated size distributions were also verified by microscopic analysis. In addition, emulsion stability is discussed as a function of electrolyte concentration and pH. The results showed that the stability of the oil emulsions was strongly related to pH and the concentration of CaCl2; the same dependency was not observed for NaCl.

Keywords: Droplet size distribution, emulsion stability, o/w emulsions, video images.

321 Application of Tacit Knowledge from Professional Packaging Designer for Teaching Packaging Design

Authors: Somsri Binraman, Boonliang Kaewnapan, Krittika Tanprasert

Abstract:

In the package design industry, a great deal of tacit knowledge resides within each designer. The objectives are to capture this knowledge and compile it for use as a teaching resource, to create a video clip of the package design process, and to evaluate its quality and learning effectiveness. Interviews were used to capture knowledge on brand design concepts, differentiation, recognition, ranking of recognition factors, consumer surveys, marketing, research, graphic design, the effect of color, and law and regulation. A video clip about package design was created, consisting of both narration and footage of the actual process. The quality of the video was ranked as good in terms of media and excellent in terms of content. The students' post-test scores were significantly greater than their pretest scores (p>0.001).

Keywords: Tacit knowledge, interview, video, packaging, design.

320 Motion Estimator Architecture with Optimized Number of Processing Elements for High Efficiency Video Coding

Authors: Seongsoo Lee

Abstract:

Motion estimation accounts for the heaviest computation in HEVC (High Efficiency Video Coding). Many fast algorithms such as TZS (test zone search) have been proposed to reduce this computation. Still, the huge computational load of motion estimation is a critical issue in the implementation of an HEVC video codec. In this paper, a motion estimator architecture with an optimized number of PEs (processing elements) is presented, exploiting early termination. It also reduces hardware size by exploiting parallel processing. The presented architecture has 8 PEs and efficiently performs TZS with very high PE utilization.

Keywords: Motion estimation, test zone search, high efficiency video coding, processing element, optimization.
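
A simplified software sketch of the early-termination idea behind the optimized PE count: an expanding search that stops as soon as the best SAD falls below a threshold (assuming NumPy; the real TZS search pattern and threshold rule are more elaborate):

    import numpy as np

    def sad(a, b):
        return int(np.abs(a.astype(int) - b.astype(int)).sum())

    def search_with_early_termination(cur, ref, bx, by, bs=16, max_range=32, stop_sad=None):
        if stop_sad is None:
            stop_sad = bs * bs                       # illustrative threshold
        block = cur[by:by + bs, bx:bx + bs]
        best_mv = (0, 0)
        best_cost = sad(block, ref[by:by + bs, bx:bx + bs])
        step = 1
        while step <= max_range and best_cost > stop_sad:   # early termination
            for dy in (-step, 0, step):
                for dx in (-step, 0, step):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs:
                        cost = sad(block, ref[y:y + bs, x:x + bs])
                        if cost < best_cost:
                            best_cost, best_mv = cost, (dx, dy)
            step *= 2
        return best_mv, best_cost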

319 A Wavelet Based Object Watermarking System for Image and Video

Authors: Abdessamad Essaouabi, Ibnelhaj Elhassane

Abstract:

Efficient storage, transmission and use of video information are key requirements in many multimedia applications currently being addressed by MPEG-4. To fulfill these requirements, a new approach to representing video information, which relies on an object-based representation, has been adopted. Therefore, object-based watermarking schemes are needed for copyright protection. This paper proposes a novel blind object watermarking scheme for images and video using the in-place lifting shape-adaptive discrete wavelet transform (SA-DWT). To make the watermark robust and transparent, it is embedded in the average of wavelet blocks using a visual model based on the human visual system. The n least significant bits (LSBs) of the wavelet coefficients are adjusted in concert with the average. Simulation results show that the proposed watermarking scheme is perceptually invisible and robust against many attacks such as lossy image/video compression (e.g. JPEG, JPEG2000 and MPEG-4), scaling, adding noise, filtering, etc.

Keywords: Watermark, visual model, robustness, in-place lifting shape-adaptive discrete wavelet transform.
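
A heavily simplified sketch of embedding one watermark bit per wavelet block by nudging the block average (using PyWavelets; the paper's SA-DWT, visual model and LSB adjustment are replaced here by a plain DWT and a quantization step of size q):

    import numpy as np
    import pywt

    def embed_bit_in_block(block, bit, q=8.0):
        # Push the block mean onto an even (bit 0) or odd (bit 1) multiple of q.
        mean = block.mean()
        target = (2.0 * np.round(mean / (2.0 * q)) + bit) * q
        return block + (target - mean)

    def watermark_frame(gray, bits, wavelet="haar", block=8, q=8.0):
        cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), wavelet)
        idx = 0
        for r in range(0, cA.shape[0] - block + 1, block):
            for c in range(0, cA.shape[1] - block + 1, block):
                if idx == len(bits):
                    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)
                cA[r:r + block, c:c + block] = embed_bit_in_block(
                    cA[r:r + block, c:c + block], bits[idx], q)
                idx += 1
        return pywt.idwt2((cA, (cH, cV, cD)), wavelet)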
