Search results for: video quality assessment (VQA)
15204 Subjective Quality Assessment for Impaired Videos with Varying Spatial and Temporal Information
Authors: Muhammad Rehan Usman, Muhammad Arslan Usman, Soo Young Shin
Abstract:
The new era of digital communication has brought many challenges that network operators need to overcome. The high demand for mobile data rates requires improved networks, which is a challenge for operators in terms of maintaining the quality of experience (QoE) for their consumers. In live video transmission, there is a pressing need for live monitoring of the videos in order to maintain the quality of the network. For this purpose, objective algorithms are employed to monitor the quality of the videos that are transmitted over a network. In order to test these objective algorithms, subjective quality assessment of the streamed videos is required, as the human eye is the best source of perceptual assessment. In this paper, we conduct a subjective evaluation of videos with varying spatial and temporal impairments. These videos were impaired with frame freezing distortions so that the impact of frame freezing on the quality of experience could be studied. We present subjective Mean Opinion Scores (MOS) for these videos that can be used for fine-tuning objective algorithms for video quality assessment.
Keywords: frame freezing, mean opinion score, objective assessment, subjective evaluation
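Illustrative only: a minimal sketch (not taken from the paper) of how raw subjective ratings on a 1-5 scale might be aggregated into MOS values with 95% confidence intervals; the ratings array and function name are hypothetical.

```python
import numpy as np
from scipy import stats

def mos_with_ci(ratings, confidence=0.95):
    """Aggregate raw subjective ratings (e.g., a 1-5 ACR scale) into a
    Mean Opinion Score with a t-based confidence interval."""
    ratings = np.asarray(ratings, dtype=float)
    mos = ratings.mean()
    sem = stats.sem(ratings)                                   # standard error of the mean
    half_width = sem * stats.t.ppf((1 + confidence) / 2, len(ratings) - 1)
    return mos, (mos - half_width, mos + half_width)

# Hypothetical ratings from 20 subjects for one impaired video
ratings_video_1 = [4, 3, 4, 5, 3, 4, 4, 2, 3, 4, 4, 3, 5, 4, 3, 4, 4, 3, 4, 4]
mos, ci = mos_with_ci(ratings_video_1)
print(f"MOS = {mos:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```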
Procedia PDF Downloads 495
15203 Video Stabilization Using Feature Point Matching
Authors: Shamsundar Kulkarni
Abstract:
Video capturing by non-professionals often leads to unanticipated effects such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos. A stable output video is attained without the jitter caused by the shaking of a handheld camera during video recording. First, salient points from each frame of the input video are identified and processed, followed by optimization and stabilization of the video. Optimization includes measuring the quality of the video stabilization. This method has shown good results in terms of stabilization and removed distortion from output videos recorded in different circumstances.
Keywords: video stabilization, point feature matching, salient points, image quality measurement
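As an illustration of the general point-feature-matching idea, the sketch below follows a common OpenCV recipe (corner detection, Lucas-Kanade tracking, per-frame rigid-motion estimation); it is not the authors' implementation, and the input file name is a placeholder.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("shaky_input.mp4")          # placeholder file name
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
transforms = []                                    # per-frame dx, dy, dtheta

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Salient points in the previous frame
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=30)
    # Track them into the current frame
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_cur = pts_cur[status.flatten() == 1]
    # Rigid inter-frame motion (translation + rotation)
    m, _ = cv2.estimateAffinePartial2D(good_prev, good_cur)
    dx, dy, da = m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])
    transforms.append([dx, dy, da])
    prev_gray = gray

# Smoothing the cumulative trajectory (e.g., with a moving average) and warping
# each frame with the corrected transform would complete the stabilizer.
trajectory = np.cumsum(transforms, axis=0)
```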
Procedia PDF Downloads 314
15202 Evaluating the Performance of Existing Full-Reference Quality Metrics on High Dynamic Range (HDR) Video Content
Authors: Maryam Azimi, Amin Banitalebi-Dehkordi, Yuanyuan Dong, Mahsa T. Pourazad, Panos Nasiopoulos
Abstract:
While there exists a wide variety of Low Dynamic Range (LDR) quality metrics, only a limited number of metrics are designed specifically for High Dynamic Range (HDR) content. With the introduction of the HDR video compression standardization effort by international standardization bodies, the need for an efficient video quality metric for HDR applications has become more pronounced. The objective of this study is to compare the performance of existing full-reference LDR and HDR video quality metrics on HDR content and identify the most effective one for HDR applications. To this end, a new HDR video data set is created, which consists of representative indoor and outdoor video sequences with different brightness and motion levels, representing different types of distortions. The quality of each distorted video in this data set is evaluated both subjectively and objectively. The correlation between the subjective and objective results confirms that the VIF quality metric outperforms all other tested metrics in the presence of the tested types of distortions.
Keywords: HDR, dynamic range, LDR, subjective evaluation, video compression, HEVC, video quality metrics
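The metric-vs-subjective comparison described above typically reduces to computing linear and rank correlation between MOS and metric scores; the sketch below shows such a computation on hypothetical data, not the paper's actual results.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical subjective MOS and objective metric scores for the same set of
# distorted HDR sequences (one value per sequence).
mos = np.array([4.2, 3.8, 2.1, 1.5, 3.3, 4.6, 2.8, 3.9])
vif_scores = np.array([0.91, 0.84, 0.47, 0.32, 0.71, 0.95, 0.58, 0.86])

plcc, _ = pearsonr(vif_scores, mos)    # linear correlation (prediction accuracy)
srocc, _ = spearmanr(vif_scores, mos)  # rank correlation (prediction monotonicity)
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```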
Procedia PDF Downloads 529
15201 A Multi Sensor Monochrome Video Fusion Using Image Quality Assessment
Authors: M. Prema Kumar, P. Rajesh Kumar
Abstract:
The increasing interest in image fusion (combining images of two or more modalities, such as infrared and visible light radiation) has led to a need for accurate and reliable image assessment methods. This paper gives a novel approach for merging the information content from several videos taken of the same scene in order to build a combined video that contains the finest information coming from the different source videos. This process is known as video fusion, and it helps in providing an image of superior quality (the term quality connotes a measurement specific to the particular application) compared to the source images. In this technique, different sensors, whose redundant information can be reduced, are used with the various cameras needed for capturing the required images. In this paper, an image fusion technique based on multi-resolution singular value decomposition (MSVD) has been used. Image fusion by MSVD is very similar to fusion with wavelets; the idea behind MSVD is to replace the FIR filters in the wavelet transform with singular value decomposition (SVD). It is computationally very simple and is well suited for real-time applications such as remote sensing and astronomy.
Keywords: multi sensor image fusion, MSVD, image processing, monochrome video
Procedia PDF Downloads 573
15200 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study
Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin
Abstract:
Live video streaming is one of the most widely used services among end users, yet it is a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of the live video stream. For this purpose, there are several objective algorithms available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper, we have conducted subjective evaluation tests on a set of video sequences containing a temporal impairment known as frame freezing. Frame freezing is considered a transmission error as well as a hardware error, which can result in the loss of video frames on the receiving side of a transmission system. In our subjective tests, we have evaluated videos that contain a single freezing event as well as videos that contain multiple freezing events. We have recorded our subjective test results for all the videos in order to provide a comparison of the available No Reference (NR) objective algorithms. Finally, we show the performance of the no-reference algorithms used for the objective evaluation of the videos and suggest the algorithm that works best. The outcome of this study shows the importance of QoE and its effect on human perception. The results of the subjective evaluation can serve the purpose of validating objective algorithms.
Keywords: objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA)
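A no-reference freeze check can be approximated by flagging runs of near-identical consecutive frames; the sketch below is a simplified stand-in for the NR algorithms compared in the paper, with assumed threshold values and a placeholder file name.

```python
import cv2
import numpy as np

def detect_freezes(path, diff_threshold=0.5, min_run=3):
    """Return (start_frame, length) pairs for runs of near-identical frames.
    A freeze is declared when the mean absolute luma difference between
    consecutive frames stays below diff_threshold for at least min_run frames."""
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    freezes, run_start, run_len, idx = [], None, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mad = np.mean(np.abs(gray.astype(np.int16) - prev.astype(np.int16)))
        if mad < diff_threshold:
            run_start = idx if run_len == 0 else run_start
            run_len += 1
        else:
            if run_len >= min_run:
                freezes.append((run_start, run_len))
            run_len = 0
        prev = gray
    if run_len >= min_run:
        freezes.append((run_start, run_len))
    return freezes

print(detect_freezes("impaired_stream.mp4"))  # placeholder file name
```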
Procedia PDF Downloads 602
15199 Efficient Storage and Intelligent Retrieval of Multimedia Streams Using H.265
Authors: S. Sarumathi, C. Deepadharani, Garimella Archana, S. Dakshayani, D. Logeshwaran, D. Jayakumar, Vijayarangan Natarajan
Abstract:
The need of the hour for customers who use a dial-up or low-broadband connection for their internet services is to access HD video data. This can be achieved by developing a new video format using H.265, the latest video codec standard developed by the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG) in April 2013. This new standard for video compression has the potential to deliver higher performance than earlier standards such as H.264/AVC. In comparison with H.264, HEVC offers a clearer, higher quality image at half the original bitrate. At this lower bitrate, it is possible to transmit high definition videos using low bandwidth. It doubles the data compression ratio, supporting 8K Ultra HD and resolutions up to 8192×4320. In the proposed model, we design a new video format which supports the H.265 standard. The major areas of application in the coming future would lead to enhancements in the performance of digital television services such as Tata Sky and Sun Direct, Blu-ray Discs, mobile video, video conferencing, and internet and live video streaming.
Keywords: access HD video, H.265 video standard, high performance, high quality image, low bandwidth, new video format, video streaming applications
Procedia PDF Downloads 355
15198 H.263 Based Video Transceiver for Wireless Camera System
Authors: Won-Ho Kim
Abstract:
In this paper, a design of an H.263-based wireless video transceiver is presented for a wireless camera system. It uses a standard Wi-Fi transceiver, and the coverage area is up to 100 m. Furthermore, the standard H.263 video encoding technique is used for video compression, since the wireless video transmitter is unable to transmit high-capacity raw data in real time; the implemented system is capable of streaming at less than 1 Mbps using NTSC 720x480 video.
Keywords: wireless video transceiver, video surveillance camera, H.263 video encoding, digital signal processing
Procedia PDF Downloads 367
15197 Extraction of Text Subtitles in Multimedia Systems
Authors: Amarjit Singh
Abstract:
In this paper, a method for the extraction of text subtitles in large videos is proposed. Video data needs to be annotated for many multimedia applications. Text is incorporated in digital video for the purpose of providing useful information about that video, so the need arises to detect text present in video for understanding and video indexing. This is achieved in two steps: the first step is text localization and the second step is text verification. The method of text detection can be extended to text recognition, which finds applications in automatic video indexing, video annotation, and content-based video retrieval. The method has been tested on various types of videos.
Keywords: video, subtitles, extraction, annotation, frames
Procedia PDF Downloads 603
15196 Video Summarization: Techniques and Applications
Authors: Zaynab El Khattabi, Youness Tabii, Abdelhamid Benkaddour
Abstract:
Nowadays, the huge amount of multimedia repositories makes the browsing, retrieval and delivery of video contents very slow and even difficult tasks. Video summarization has been proposed to enable faster browsing of large video collections and more efficient content indexing and access. In this paper, we focus on approaches to video summarization. Video summaries can be generated in many different forms; however, the two fundamental ways to generate summaries are static and dynamic. We present different techniques for each mode from the literature and describe some features used for generating video summaries. We conclude with perspectives for further research.
Keywords: video summarization, static summarization, video skimming, semantic features
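For the static (key-frame) mode, a minimal sketch of one common heuristic, selecting a new key frame whenever the color histogram drifts past a threshold, is shown below; the threshold and file name are assumptions, and the techniques surveyed in the paper are generally more sophisticated.

```python
import cv2

def static_summary(path, threshold=0.4):
    """Pick a frame as a key frame whenever its HSV histogram differs from the
    last selected key frame by more than `threshold` (Bhattacharyya distance)."""
    cap = cv2.VideoCapture(path)
    keyframes, last_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if last_hist is None or cv2.compareHist(last_hist, hist,
                                                cv2.HISTCMP_BHATTACHARYYA) > threshold:
            keyframes.append(idx)   # frame index becomes part of the static summary
            last_hist = hist
        idx += 1
    return keyframes

print(static_summary("lecture.mp4"))  # placeholder file name
```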
Procedia PDF Downloads 404
15195 Video-Based System for Support of Robot-Enhanced Gait Rehabilitation of Stroke Patients
Authors: Matjaž Divjak, Simon Zelič, Aleš Holobar
Abstract:
We present a dedicated video-based monitoring system for quantification of patient’s attention to visual feedback during robot assisted gait rehabilitation. Two different approaches for eye gaze and head pose tracking are tested and compared. Several metrics for assessment of patient’s attention are also presented. Experimental results with healthy volunteers demonstrate that unobtrusive video-based gaze tracking during the robot-assisted gait rehabilitation is possible and is sufficiently robust for quantification of patient’s attention and assessment of compliance with the rehabilitation therapy.
Keywords: video-based attention monitoring, gaze estimation, stroke rehabilitation, user compliance
Procedia PDF Downloads 426
15194 Evaluation of Video Development about Exclusive Breastfeeding as a Nutrition Education Media for Posyandu Cadre
Authors: Ari Istiany, Guspri Devi Artanti, M. Si
Abstract:
Based on the results of Riskesdas, it is known that awareness about the importance of exclusive breastfeeding is still low, at only 15.3%. These conditions put infants at high risk for infectious diseases, such as diarrhea and acute respiratory infection. Therefore, the aim of this study was to evaluate the development of a video about exclusive breastfeeding as a nutrition education medium for posyandu cadres. This research used development methods for making the video about exclusive breastfeeding. The study was conducted in the urban area of Rawamangun, East Jakarta. Respondents of this study were 1 media expert from the Department of Educational Technology - UNJ, 2 subject matter experts from the Department of Home Economics - UNJ, and 20 posyandu cadres, who assessed the quality of the video. Aspects assessed include the legibility of text, image display quality, color composition, clarity of sound, music appropriateness, duration, suitability of the material, and language. Data were analyzed descriptively using frequency distribution tables, mean values, and standard deviations. The results showed that the average assessment scores according to the media expert, the subject matter experts, and the posyandu cadres were 3.43 ± 0.51 (good), 4.37 ± 0.52 (very good), and 3.6 ± 0.73 (good), respectively. The conclusion is that the exclusive breastfeeding video is feasible as a medium for nutrition education. Suggestions for improving the visual media are to add more illustrations, material about the correct way of breastfeeding, and pictures of healthy babies.
Keywords: exclusive breastfeeding, posyandu cadre, video, nutrition education
Procedia PDF Downloads 412
15193 Keyframe Extraction Using Face Quality Assessment and Convolution Neural Network
Authors: Rahma Abed, Sahbi Bahroun, Ezzeddine Zagrouba
Abstract:
Due to the huge amount of data in videos, extracting the relevant frames has become a necessity and an essential step prior to performing face recognition. In this context, we propose a method for extracting keyframes from videos based on face quality and deep learning for a face recognition task. This method has two steps. We start by generating face quality scores for each face image based on the use of three face feature extractors: Gabor, LBP, and HOG. The second step consists of training a Deep Convolutional Neural Network in a supervised manner in order to select the frames that have the best face quality. The obtained results show the effectiveness of the proposed method compared to state-of-the-art methods.
Keywords: keyframe extraction, face quality assessment, face in video recognition, convolution neural network
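A rough sketch of the first step, computing hand-crafted Gabor, LBP, and HOG descriptors for a face crop with scikit-image, is given below; the exact quality-scoring scheme and the CNN training stage are not reproduced, and all parameter values are assumptions.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from skimage.filters import gabor

def face_quality_features(face_u8):
    """Concatenate simple Gabor, LBP, and HOG descriptors for an 8-bit grayscale face crop."""
    face_f = face_u8.astype(float) / 255.0
    # Mean Gabor response energy at one assumed frequency
    real, imag = gabor(face_f, frequency=0.2)
    gabor_feat = np.array([np.mean(np.hypot(real, imag))])
    # Uniform LBP histogram (values 0..9 for P=8)
    lbp = local_binary_pattern(face_u8, P=8, R=1, method="uniform")
    lbp_feat, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # HOG descriptor
    hog_feat = hog(face_f, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    return np.concatenate([gabor_feat, lbp_feat, hog_feat])

# Hypothetical 64x64 face crop (random data stands in for a detected face)
face = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(face_quality_features(face).shape)
```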
Procedia PDF Downloads 234
15192 Anonymous Editing Prevention Technique Using Gradient Method for High-Quality Video
Authors: Jiwon Lee, Chanho Jung, Si-Hwan Jang, Kyung-Ill Kim, Sanghyun Joo, Wook-Ho Son
Abstract:
Since advances in digital imaging technologies have led to the development of high quality digital devices, there are a lot of illegal copies of copyrighted video content on the internet. Thus, we propose a high-quality (HQ) video watermarking scheme that can prevent these illegal copies from spreading. The proposed scheme applies spatial and temporal gradient methods to improve the fidelity and detection performance. Also, the scheme duplicates the watermark signal temporally to alleviate the signal reduction caused by geometric and signal-processing distortions. Experimental results show that the proposed scheme achieves better performance than previously proposed schemes and has high fidelity. The proposed scheme can be used in broadcast monitoring or traitor tracking applications, which need a fast detection process to prevent illegally recorded video content from spreading.
Keywords: editing prevention technique, gradient method, luminance change, video watermarking
Procedia PDF Downloads 457
15191 Symbol Synchronization and Resource Reuse Schemes for Layered Video Multicast Service in Long Term Evolution Networks
Authors: Chung-Nan Lee, Sheng-Wei Chu, You-Chiun Wang
Abstract:
LTE (Long Term Evolution) employs the eMBMS (evolved Multimedia Broadcast/Multicast Service) protocol to deliver video streams to a multicast group of users. However, it requires all multicast members to receive a video stream at the same transmission rate, which degrades the overall service quality when some users encounter bad channel conditions. To overcome this problem, this paper provides two efficient resource allocation schemes for such LTE networks. The symbol synchronization (S2) scheme assumes that the macro and pico eNodeBs use the same frequency channel to deliver the video stream to all users; it then adopts a multicast transmission index to guarantee fairness among users. On the other hand, the resource reuse (R2) scheme allows eNodeBs to transmit data on different frequency channels. Then, by introducing the concept of frequency reuse, it can further improve the overall service quality. Extensive simulation results show that the S2 and R2 schemes can improve fairness by around 50% and video quality by around 14%, respectively, as compared with the common maximum throughput method.
Keywords: LTE networks, multicast, resource allocation, layered video
Procedia PDF Downloads 390
15190 Factorial Design Analysis for Quality of Video on MANET
Authors: Hyoup-Sang Yoon
Abstract:
The quality of video transmitted over mobile ad hoc networks (MANETs) can be influenced by several factors, including the protocol layers and the parameter settings of each protocol. In this paper, we are concerned with understanding the functional relationship between these influential factors and objective video quality in MANETs. We illustrate how a systematic statistical design of experiments (DOE) strategy can be used to analyse MANET parameters and performance. Using a 2^k factorial design, we quantify the main and interactive effects of 7 factors on a response metric (i.e., the mean opinion score (MOS) calculated from PSNR with the Evalvid package). We then develop a first-order linear regression model between the influential factors and the performance metric.
Keywords: evalvid, full factorial design, mobile ad hoc networks, ns-2
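A minimal sketch of the factorial-design idea, shown here for a 2^3 fragment with coded -1/+1 levels and a first-order model with two-factor interactions, follows; the factor names and MOS responses are hypothetical, not the study's data.

```python
import itertools
import numpy as np

# A 2^3 fragment of the 2^k design: three coded factors at levels -1 / +1
# (e.g., packet size, routing protocol, node speed -- names are assumed).
runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

# Hypothetical MOS response for each of the 8 runs
mos = np.array([2.1, 2.4, 2.9, 3.3, 2.0, 2.6, 3.1, 3.8])

# First-order model with two-factor interactions:
# y = b0 + sum_i(b_i * x_i) + sum_{i<j}(b_ij * x_i * x_j)
interactions = np.column_stack([runs[:, i] * runs[:, j]
                                for i, j in itertools.combinations(range(3), 2)])
X = np.column_stack([np.ones(len(runs)), runs, interactions])
coeffs, *_ = np.linalg.lstsq(X, mos, rcond=None)
print("intercept, main effects, interaction effects:", np.round(coeffs, 3))
```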
Procedia PDF Downloads 415
15189 Performance of High Efficiency Video Codec over Wireless Channels
Authors: Mohd Ayyub Khan, Nadeem Akhtar
Abstract:
Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw videos possess very high bandwidth, which makes compression a must before transmission over wireless channels. The High Efficiency Video Codec (HEVC) (also called H.265) is the latest state-of-the-art video coding standard, developed by the joint effort of ITU-T and ISO/IEC teams. HEVC is targeted at high-resolution videos, such as 4K or 8K resolutions, that can fulfil the recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor, H.264/AVC, at the same quality level. The compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases. Thus, bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video using the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video decreases drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing the optimized code rate of the FEC such that the quality of the reconstructed video is maximized over wireless channels.
Keywords: AWGN, forward error correction, HEVC, video coding, QAM
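The transmission chain described (modulate the coded bitstream, pass it through AWGN, demodulate, and count errors) can be sketched as below for uncoded Gray-mapped 4-QAM with random bits; the actual study uses an HEVC bitstream, higher-order QAM, and FEC, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
num_bits = 100_000
bits = rng.integers(0, 2, num_bits)               # stand-in for a channel-coded bitstream

# Gray-mapped 4-QAM (QPSK): one bit on I, one bit on Q, unit average symbol energy
symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

snr_db = 6.0
noise_var = 10 ** (-snr_db / 10)                  # Es/N0 in linear scale, Es = 1
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(symbols.size)
                                  + 1j * rng.standard_normal(symbols.size))
received = symbols + noise

# Hard-decision demodulation and bit error rate
bits_hat = np.empty(num_bits, dtype=int)
bits_hat[0::2] = (received.real > 0).astype(int)
bits_hat[1::2] = (received.imag > 0).astype(int)
ber = np.mean(bits != bits_hat)
print(f"BER at {snr_db} dB Es/N0: {ber:.4f}")
```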
Procedia PDF Downloads 149
15188 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network
Authors: P. Karthick, K. Mahesh
Abstract:
Video has become an increasingly significant component of our everyday digital life. With the advance toward richer content and higher-resolution displays, its sheer volume poses serious obstacles to receiving, distributing, compressing, and displaying video content of high quality. In this paper, we propose a first attempt at an end-to-end deep video compression model that jointly optimizes all video compression components. The video compression method involves splitting the video into frames, comparing the images using convolutional neural networks (CNN) to remove duplicates, and repeating a single image instead of the duplicate images by recognizing and detecting minute changes using a generative adversarial network (GAN), recorded with long short-term memory (LSTM). Instead of the complete image, the small changes generated using the GAN are substituted, which helps in frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over the frame, clustered with K-means, and singular value decomposition (SVD) is applied to each and every frame in the video for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec and converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality demonstrate a significant resampling rate. On average, the result produced had approximately a 10% deviation in quality and more than 50% in size when compared with the original video.
Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system
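A minimal sketch of the per-channel SVD step, keeping only the top-k singular values of each R/G/B frame matrix, is shown below; the frame is random placeholder data and k is an arbitrary choice, not a value from the paper.

```python
import numpy as np

def truncated_svd_channel(channel, k=50):
    """Low-rank approximation of one color channel: keep the top-k singular values."""
    u, s, vt = np.linalg.svd(channel, full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

# Hypothetical 480x640 frame with three color channels [R, G, B]
frame = np.random.rand(480, 640, 3)
compressed = np.stack([truncated_svd_channel(frame[:, :, c]) for c in range(3)], axis=2)
print("relative reconstruction error:",
      np.linalg.norm(frame - compressed) / np.linalg.norm(frame))
```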
Procedia PDF Downloads 188
15187 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices
Authors: Pratik Dhabal Deo, Manoj P.
Abstract:
With the increasing number of social media users, the amount of video content available has also significantly increased. Currently, the number of smartphone users is at its peak, and many are increasingly using their smartphones as their main photography and recording devices. There have been many developments in the field of Video Quality Assessment (VQA), and metrics like VMAF, SSIM, etc. are said to be some of the best performing metrics, but the evaluation of these metrics is predominantly done on professionally produced video content using professional tools, lighting conditions, etc. No study has particularly pinpointed the performance of the metrics on content taken by users on very commonly available devices. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most used devices, even if they contain content taken in poor lighting conditions using lower-end devices. These devices face a lot of distortions due to various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective VQA metrics on content taken only from the most used devices and their performance on it, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken from the three most commonly used devices: an Android smartphone, an iOS smartphone, and a DSLR. On the videos taken on each of these devices, the six most common types of distortion that users face have been applied, in addition to the already existing H.264 compression, based on four reference videos. Each of these six distortions has three levels of degradation. A total of the five most popular VQA metrics have been evaluated on this dataset, and the highest and lowest values of each of the metrics on the distortions have been recorded. It is found that blur is the artifact on which most of the metrics did not perform well. Thus, in order to understand the results better, the amount of blur in the dataset has been calculated, and an additional evaluation of the metrics was done using the HEVC codec, the successor of H.264 compression, on the camera that proved to be the sharpest among the devices. The results show that as the resolution increases, the performance of the metrics tends to become more accurate; the best performing metric among them is VQM, with very few inconsistencies and inaccurate results when the compression applied is H.264, while SSIM and VMAF perform significantly better when the compression applied is HEVC.
Keywords: distortion, metrics, performance, resolution, video quality assessment
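Scoring a distorted frame against its reference with two of the full-reference metrics mentioned (PSNR and SSIM) can be sketched with scikit-image as below; the frames here are synthetic placeholders rather than the custom dataset described.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical reference frame and a noisy distorted version (values in [0, 1])
reference = np.random.rand(720, 1280)
distorted = np.clip(reference + 0.05 * np.random.randn(720, 1280), 0, 1)

psnr = peak_signal_noise_ratio(reference, distorted, data_range=1.0)
ssim = structural_similarity(reference, distorted, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```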
Procedia PDF Downloads 204
15186 Lecture Video Indexing and Retrieval Using Topic Keywords
Authors: B. J. Sandesh, Saurabha Jirgi, S. Vidya, Prakash Eljer, Gowri Srinivasa
Abstract:
In this paper, we propose a framework to help users search and retrieve the portions of a lecture video that are of interest to them. This is achieved by temporally segmenting and indexing the lecture video using topic keywords. We use the transcribed text from the video and documents relevant to the video topic extracted from the web for this purpose. The keywords for indexing are found by applying non-negative matrix factorization (NMF) topic modeling techniques to the web documents. Our proposed technique first creates indices on the transcribed documents using the topic keywords, and these are mapped to the video to find the start and end times of the portions of the video for a particular topic. This time information is stored in the index table along with the topic keyword, which is used to retrieve the specific portions of the video for the query provided by the users.
Keywords: video indexing and retrieval, lecture videos, content based video search, multimodal indexing
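A minimal sketch of the keyword-finding step, TF-IDF followed by NMF topic modeling with scikit-learn, is shown below; the document strings and parameter values are placeholders, not the paper's corpus or settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Placeholder web documents relevant to the lecture topic
docs = [
    "gradient descent minimizes the loss function of a neural network",
    "backpropagation computes gradients layer by layer",
    "convolutional networks apply learned filters to images",
    "pooling layers reduce the spatial resolution of feature maps",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)
nmf = NMF(n_components=2, random_state=0)
nmf.fit(X)

# Top keywords per topic become the indexing vocabulary
terms = tfidf.get_feature_names_out()
for t, topic in enumerate(nmf.components_):
    top_keywords = [terms[i] for i in topic.argsort()[::-1][:5]]
    print(f"topic {t}: {top_keywords}")
```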
Procedia PDF Downloads 251
15185 Distributed Processing for Content Based Lecture Video Retrieval on Hadoop Framework
Authors: U. S. N. Raju, Kothuri Sai Kiran, Meena G. Kamal, Vinay Nikhil Pabba, Suresh Kanaparthi
Abstract:
There is a huge amount of lecture video data available for public use, and many more lecture videos are being created and uploaded every day. Searching for videos on required topics in this huge database is a challenging task; therefore, an efficient method for video retrieval is needed. An approach for automated video indexing and video search in large lecture video archives is presented. As the amount of video lecture data is huge, it is very inefficient to do the processing in a centralized computation framework. Hence, the Hadoop framework for distributed computing on big video data is used. The first step in the process is automatic video segmentation and key-frame detection to offer a visual guideline for video content navigation. In the next step, we extract textual metadata by applying video Optical Character Recognition (OCR) technology to the key-frames. The OCR output and detected slide text line types are adopted for keyword extraction, by which both video- and segment-level keywords are extracted for content-based video browsing and search. The performance of the indexing process can be improved for a large database by using distributed computing on the Hadoop framework.
Keywords: video lectures, big video data, video retrieval, hadoop
Procedia PDF Downloads 537
15184 Structural Analysis on the Composition of Video Game Virtual Spaces
Authors: Qin Luofeng, Shen Siqi
Abstract:
In the 58 years since the first video game came into being, the video game industry has gone through an explosive evolution. Video games exert great influence on society and have become a reflection of public life to some extent. Video game virtual spaces are where activities take place, just like real spaces, and that is the reason why some architects pay attention to video games. However, compared to research on the appearance of games, we observe a lack of comprehensive theory on the construction of video game virtual spaces. The research method of this paper is to first collect literature and conduct theoretical research about the virtual space in video games, and then to draw analogies with opinions on spatial phenomena from the theory of literature and film. Finally, this paper proposes a three-layer framework for the construction of video game virtual spaces: "algorithmic space - narrative space - players space", which corresponds to the exterior, expressive, and affective parts of the game space. We also illustrate each sub-space with numerous instances of published video games, hoping this writing can promote the interactive development of video games and architecture.
Keywords: video game, virtual space, narrativity, social space, emotional connection
Procedia PDF Downloads 270
15183 Key Frame Based Video Summarization via Dependency Optimization
Authors: Janya Sainui
Abstract:
With the rapid growth of digital videos and data communications, video summarization, which provides a shorter version of the video for fast browsing and retrieval, is necessary. Key frame extraction is one of the mechanisms to generate a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most of the existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a method of video summarization which provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content as well as minimizing the redundancy among selected key frames. The proposed key frame extraction algorithm finds key frames by solving an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces a video summary with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches.
Keywords: video summarization, key frame extraction, dependency measure, quadratic mutual information
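The coverage-versus-redundancy objective can be illustrated with a simplified greedy selection; the sketch below substitutes a plain cosine-similarity dependency proxy for the paper's quadratic mutual information and uses random histogram features, so it only mirrors the shape of the optimization, not the actual method.

```python
import numpy as np

def greedy_keyframes(features, k=5, redundancy_weight=1.0):
    """Greedy key-frame selection over per-frame feature vectors.

    Coverage of a candidate = its mean similarity to all frames;
    redundancy = its max similarity to already selected key frames.
    Similarity here is a normalized dot product, a stand-in for the
    quadratic-mutual-information dependency measure used in the paper."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T                       # frame-to-frame similarity matrix
    coverage = sim.mean(axis=1)
    selected = []
    for _ in range(k):
        redundancy = (sim[:, selected].max(axis=1) if selected
                      else np.zeros(len(features)))
        score = coverage - redundancy_weight * redundancy
        score[selected] = -np.inf                 # do not reselect a key frame
        selected.append(int(np.argmax(score)))
    return sorted(selected)

# Hypothetical per-frame color histograms (200 frames x 64 bins)
hists = np.random.rand(200, 64)
print(greedy_keyframes(hists, k=5))
```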
Procedia PDF Downloads 267
15182 An Investigation of Surface Water Quality in an Industrial Area Using Integrated Approaches
Authors: Priti Saha, Biswajit Paul
Abstract:
Rapid urbanization and industrialization have increased the pollution load in surface water bodies. However, these water bodies are a major source of water for drinking, irrigation, industrial activities, and fishery. Therefore, water quality assessment is of paramount importance to evaluate their suitability for all these purposes. This study evaluates the surface water quality of an industrial city in eastern India by integrating interdisciplinary techniques. The multi-purpose Water Quality Index (WQI) assesses the suitability for drinking, irrigation, and fishery of forty-eight sampling locations, of which 8.33% have excellent water quality (WQI: 0-25) for fishery, and 10.42%, 20.83%, and 45.83% have good quality (WQI: 25-50), which represents suitability for drinking, irrigation, and fishery, respectively. The industrial water quality was assessed through the Ryznar Stability Index (RSI), which affirmed that only 6.25% of sampling locations have neither corrosive nor scale-forming properties (RSI: 6.2-6.8). Integration of these statistical analyses with a geographic information system (GIS) helps in spatial assessment. It identifies the regions where the water quality is suitable for use in drinking, irrigation, and fishery, as well as in industrial activities. This research demonstrates the effectiveness of statistical and GIS techniques for water quality assessment.
Keywords: surface water, water quality assessment, water quality index, spatial assessment
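As an illustration of the index idea only (not the study's actual formulation or data), a weighted-arithmetic WQI over a handful of assumed parameters, standards, and measurements might look like the sketch below.

```python
# Weighted-arithmetic WQI: WQI = sum(w_i * q_i) / sum(w_i), where the sub-index
# q_i = 100 * (measured_i / standard_i) and the weight w_i is taken inversely
# proportional to the permissible standard. All values below are illustrative.
standards = {"BOD": 5.0, "TDS": 500.0, "nitrate": 45.0, "fluoride": 1.5}  # assumed limits
measured  = {"BOD": 3.2, "TDS": 410.0, "nitrate": 12.0, "fluoride": 0.8}  # one sample site

weights = {p: 1.0 / s for p, s in standards.items()}
quality = {p: 100.0 * measured[p] / standards[p] for p in standards}
wqi = sum(weights[p] * quality[p] for p in standards) / sum(weights.values())
print(f"WQI = {wqi:.1f}")
```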
Procedia PDF Downloads 182
15181 The Developing of Teaching Materials Online for Students in Thailand
Authors: Pitimanus Bunlue
Abstract:
The objectives of this study were to identify the unique characteristics of the Salaya Old Market, Phutthamonthon, Nakhon Pathom, and to develop an effective video medium to promote homeland awareness among local people. The characteristic features of this community were collectively summarized based on historical data, community observation, and interviews with local people. The acquired data were used to develop a medium describing the prominent features of the community. The quality of the medium was later assessed by interviewing local people in the old market in terms of content accuracy, video and narration quality, and sense of homeland awareness after watching the video. The result is a 6-minute video containing historical data and outstanding features of this community. Based on the interviews, the content accuracy was good, the picture quality and the narration were very good, and most people developed a sense of homeland awareness after watching the video.
Keywords: audio-visual, creating homeland awareness, Phutthamonthon Nakhon Pathom, research and development
Procedia PDF Downloads 293
15180 Online Versus Face-To-Face – How Do Video Consultations Change The Doctor-Patient-Interaction
Authors: Markus Feufel, Friederike Kendel, Caren Hilger, Selamawit Woldai
Abstract:
Since the corona pandemic, the use of video consultation has increased remarkably. For vulnerable groups such as oncological patients, the advantages seem obvious. But how does video consultation potentially change the doctor-patient relationship compared to face-to-face consultation? Which barriers may hinder the effective use of this consultation format in practice? We present first results from a mixed-methods field study, funded by the Federal Ministry of Health, which will provide the basis for a hands-on guide for both physicians and patients on how to improve the quality of video consultations. We use a quasi-experimental design to analyze qualitative and quantitative differences between face-to-face and video consultations based on video recordings of N = 64 actual counseling sessions (n = 32 for each consultation format). Data will be recorded from n = 32 gynecological and n = 32 urological cancer patients at two clinics. After the consultation, all patients will be asked to fill out a questionnaire about their consultation experience. For the quantitative analyses, the counseling sessions will be systematically compared in terms of verbal and nonverbal communication patterns. Relative frequencies of eye contact and the information exchanged will be compared using χ2-tests. The validated questionnaire MAPPIN'Obsdyad will be used to assess the expression of shared decision-making parameters. In addition, semi-structured interviews will be conducted with n = 10 physicians and n = 10 patients experienced with video consultation, for which a qualitative content analysis will be conducted. We will elaborate the comprehensive methodological approach we used to compare video vs. face-to-face consultations and present first evidence on how video consultations change the doctor-patient interaction. We will also outline possible barriers to video consultations and best practices on how they may be overcome. Based on the results, we will present and discuss recommendations outlining best practices for how to prepare and conduct high-quality video consultations from the perspective of both physicians and patients.
Keywords: video consultation, patient-doctor-relationship, digital applications, technical barriers
Procedia PDF Downloads 141
15179 A New Categorization of Image Quality Metrics Based on a Model of Human Quality Perception
Authors: Maria Grazia Albanesi, Riccardo Amadeo
Abstract:
This study presents a new model of the human image quality assessment process. The aim is to highlight the foundations of the image quality metrics proposed in the literature by identifying the cognitive/physiological or mathematical principles of their development and their relation to the actual human quality assessment process. The model allows the creation of a novel categorization of objective and subjective image quality metrics. Our work includes an overview of the most used or most effective objective metrics in the literature, and, for each of them, we underline its main characteristics with reference to the rationale of the proposed model and categorization. From the results of this operation, we highlight a problem that affects all the presented metrics: the fact that many aspects of human biases are not taken into account at all. We then propose a possible methodology to address this issue.
Keywords: eye-tracking, image quality assessment metric, MOS, quality of user experience, visual perception
Procedia PDF Downloads 413
15178 Evolving Software Assessment and Certification Models Using Ant Colony Optimization Algorithm
Authors: Saad M. Darwish
Abstract:
Recently, software quality issues have come to be seen as an important subject, as we see enormous growth in the number of agencies involved in the software industry. However, these agencies cannot guarantee the quality of their products, thus leaving users in uncertainty. Software certification is the extension of quality, in the sense that quality needs to be measured prior to the certification granting process. This research participates in solving the problem of software assessment by proposing a model for the assessment and certification of software products that uses a fuzzy inference engine to integrate both process-driven and application-driven quality assurance strategies. The key idea of the model at hand is to improve the compactness and the interpretability of the model's fuzzy rules by employing an ant colony optimization algorithm (ACO), which tries to find a good rule description by means of compound rules initially expressed as traditional single rules. The model has been tested in a case study, and the results have demonstrated the feasibility and practicability of the model in a real environment.
Keywords: software quality, quality assurance, software certification model, software assessment
Procedia PDF Downloads 524
15177 Potential Usefulness of Video Lectures as a Tool to Improve Synchronous and Asynchronous the Online Education
Authors: Omer Shujat Bhatti, Afshan Huma
Abstract:
Online educational systems were considered a great opportunity for distance learning. In the recent days of the COVID-19 pandemic, they enabled the continuation of educational activities at all levels of education, from primary school to top-level universities. One of the key elements supporting online education is video lectures. The current research explored the usefulness of video lectures delivered to technical students at the master's level, with a focus on MSc Sustainable Environmental Design students, who have diverse backgrounds in the formal educational system. Hence, they were unable to cope right away with the online system and faced communication and understanding issues in the lecture sessions due to internet and allied connectivity issues. The researcher prepared video lectures for the respective subjects and provided them to the students using a YouTube channel and subject-based WhatsApp groups. Later, students were asked about the usefulness of the lectures for a better understanding of the subject and an overall enhanced learning experience. More than 80% of the students appreciated the effort and requested that it become part of the overall system. Data collection was done using an online questionnaire, about which the students had been briefed beforehand along with the purpose of the research. It was concluded that video lectures should be considered an integral part of the lecture sessions and must be provided prior to the lecture session, ensuring a better quality of delivery. It was also recommended that the existing system be upgraded to support the availability of these video lectures through the portal, and that teacher training be provided to help develop quality video content, ensuring that it is able to cover the content and courses taught.
Keywords: video lectures, online distance education, synchronous instruction, asynchronous communication
Procedia PDF Downloads 117
15176 Video Shot Detection and Key Frame Extraction Using Faber-Shauder DWT and SVD
Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi
Abstract:
Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by dominant blocks located on the contours and their nearby textures. When the video frames change noticeably, their dominant blocks change, and then we can extract a key frame. The dominant blocks of every frame are computed, and then feature vectors are extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate sliding-window ranks of those matrices. Finally, the computed ranks are traced, and we are then able to extract the key frames of the video. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
Keywords: FSDWT, key frame extraction, shot detection, singular value decomposition
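The sliding-window rank idea can be sketched as below: stack per-frame feature vectors, compute the numerical rank of each window via SVD, and flag jumps in the traced rank signal; generic random features stand in for the dominant-block descriptors, and the window size and tolerance are assumptions.

```python
import numpy as np

def sliding_window_ranks(features, window=10, tol=1e-3):
    """Numerical rank of each sliding window of per-frame feature vectors."""
    ranks = []
    for start in range(len(features) - window + 1):
        block = features[start:start + window]
        s = np.linalg.svd(block, compute_uv=False)      # singular values only
        ranks.append(int(np.sum(s > tol * s[0])))
    return np.array(ranks)

# Hypothetical per-frame feature vectors (300 frames x 128 dims); a jump in the
# traced rank signal would indicate a likely shot transition / key frame.
feats = np.random.rand(300, 128)
ranks = sliding_window_ranks(feats)
boundaries = np.where(np.abs(np.diff(ranks)) >= 2)[0]
print(boundaries[:10])
```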
Procedia PDF Downloads 399
15175 Translation Quality Assessment: Proposing a Linguistic-Based Model for Translation Criticism with Considering Ideology and Power Relations
Authors: Mehrnoosh Pirhayati
Abstract:
In this study, the researcher proposes a model of Translation Criticism (TC) regarding the phenomenon of Translation Quality Assessment (TQA). Changing the general view of re/writing as an illegal act, the researcher defined a scale for the act of translation and determined the red line separating translation from other products. This research attempts to show TC as a phenomenon related to TQA. This study shows that TQA, using the rules and factors of TC as depicted in both product-oriented analysis and process-oriented analysis, determines the orientation or the level of the quality of a translation. This study also shows that TC, from TQA's perspective, reveals the aim of the translation of the original text and the root of ideological manipulation and re/writing. On the other hand, this study stresses the existence of a direct relationship between the linguistic materials and semiotic codes of a text or book. This study can be fruitful for translators, scholars, translation critics, and translation quality assessors, and it is also applicable in the area of pedagogy.
Keywords: a model of translation criticism, a model of translation quality assessment, critical discourse analysis (CDA), re/writing, translation criticism (TC), translation quality assessment (TQA)
Procedia PDF Downloads 321