Search results for: screen-recorded videos
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 380

350 Virtual Reality and Avatars in Education

Authors: Michael Brazley

Abstract:

Virtual Reality (VR) and 3D videos are the most current generation of learning technology today. Virtual Reality and 3D videos are now being used in professional offices and schools for marketing and education. Technology in the field of design has progressed from two-dimensional drawings to 3D models, using computers and sophisticated software. Virtual Reality is being used as a collaborative means to allow designers and others to meet and communicate inside models or VR platforms using avatars. This research proposes to teach students from different backgrounds how to take a digital model into a 3D video, then into VR, and finally into VR with multiple avatars communicating with each other in real time. The next step would be to develop the model so that people from three or more different locations can meet as avatars in real time, in the same model, and talk to each other. This research is longitudinal, studying the use of 3D videos in graduate design and Virtual Reality in XR (Extended Reality) courses. The research methodology is a combination of quantitative and qualitative methods. The qualitative methods begin with the literature review and case studies. The quantitative methods come by way of students’ 3D videos, surveys, and Extended Reality (XR) course work. The end product is a VR platform with multiple avatars able to communicate in real time. This research is important because it will allow multiple users to remotely enter a model or VR platform from any location in the world and effectively communicate in real time. This research will lead to improved learning and training using Virtual Reality and avatars, and it is generalizable because most colleges and universities, and many citizens, own VR equipment and computer labs. This research did produce a VR platform with multiple avatars having the ability to move and speak to each other in real time. Major implications of the research include, but are not limited to, improved learning, teaching, communication, marketing, designing, and planning. Both hardware and software played a major role in project success.

Keywords: virtual reality, avatars, education, XR

Procedia PDF Downloads 98
349 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study

Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin

Abstract:

Live video streaming is one of the most widely used services among end users, yet it is a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of live video streaming. For this purpose, several objective algorithms are available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper, we have conducted subjective evaluation tests on a set of video sequences containing a temporal impairment known as frame freezing. Frame freezing can arise from transmission errors as well as hardware errors and results in the loss of video frames at the receiving side of a transmission system. In our subjective tests, we evaluated videos that contain a single freezing event as well as videos that contain multiple freezing events. We recorded our subjective test results for all the videos in order to give a comparison of the available No Reference (NR) objective algorithms. Finally, we show the performance of the no-reference algorithms used for objective evaluation of videos and suggest the algorithm that works best. The outcome of this study shows the importance of QoE and its effect on human perception. The results of the subjective evaluation can serve the purpose of validating objective algorithms.
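
As an illustration of the kind of temporal impairment studied here, the sketch below flags frame-freeze events with a simple no-reference check on consecutive-frame differences; the file name, threshold, and run length are assumptions, not parameters from the paper.

```python
# Minimal sketch (not the authors' algorithm): a no-reference frame-freeze
# detector that flags runs of near-identical consecutive frames.
import cv2
import numpy as np

def detect_freezes(path, diff_thresh=0.5, min_run=3):
    cap = cv2.VideoCapture(path)
    freezes, run, idx = [], 0, 0
    ok, prev = cap.read()
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        # mean absolute luma difference between consecutive frames
        d = np.mean(cv2.absdiff(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                                cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
        run = run + 1 if d < diff_thresh else 0
        if run == min_run:                      # freeze event starts here
            freezes.append(idx - min_run + 1)
        prev = frame
    cap.release()
    return freezes                              # frame indices where freezes begin

print(detect_freezes("stream.mp4"))             # "stream.mp4" is a placeholder path
```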

Keywords: objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA)

Procedia PDF Downloads 601
348 The Influence of Audio on Perceived Quality of Segmentation

Authors: Silvio Ricardo Rodrigues Sanches, Bianca Cogo Barbosa, Beatriz Regina Brum, Cléber Gimenez Corrêa

Abstract:

To evaluate the quality of a segmentation algorithm, the authors use subjective or objective metrics. Although subjective metrics are more accurate than objective ones, objective metrics do not require user feedback to test an algorithm. Objective metrics require subjective experiments only during their development. Subjective experiments typically show users videos (generated from frames with segmentation errors) that simulate the environment of an application domain. This user feedback is crucial information for metric definition. In the subjective experiments used to develop some state-of-the-art metrics for testing segmentation algorithms, the videos displayed during the experiments did not contain audio. Audio is an essential component in applications such as videoconferencing and augmented reality. If audio influences the user’s perception, using only videos without audio in subjective experiments can compromise the efficiency of an objective metric generated using data from these experiments. This work aims to identify whether audio influences the user’s perception of segmentation quality in background substitution applications with audio. The proposed approach used a subjective method based on formal video quality assessment methods. The results showed that audio influences the quality of segmentation perceived by a user.

Keywords: background substitution, influence of audio, segmentation evaluation, segmentation quality

Procedia PDF Downloads 116
347 Enhancing African Students’ Learning Experience by Creating Multilingual Resources at a South African University of Technology

Authors: Lisa Graham, Kathleen Grant

Abstract:

South Africa is a multicultural country with eleven official languages, yet most formal education at institutions of higher education in the country is in English. It is well known that many students, irrespective of their home language, struggle to grasp difficult scientific concepts, and the same is true for students enrolled in the Extended Curriculum Programme at the Cape Peninsula University of Technology (CPUT), studying biomedical sciences. Today there is a plethora of resources available online for students to research and better understand subject matter. For example, students often use YouTube videos to supplement the formal education provided in our course. Unfortunately, most of this material is presented in English. The rationale behind this project is that students are well documented to think and grasp concepts more easily in their home language, and the project addresses the fact that the lingua franca of instruction in the field of biomedical science is English. A project is planned to address the lack of available resources in most of the South African languages, in which students studying the Bachelor of Health Science in Medical Laboratory Science will collaborate with those studying Film and Video Technology to create educational videos explaining scientific concepts in their home languages. These videos will then be published on our own YouTube channel, thereby making them accessible to fellow students, future students, and anybody with an interest in the subject. Research will be conducted to determine the benefit of the project, as well as of the published videos, to the student community. It is anticipated that the students engaged in making the videos will gain further understanding of their course content, a broader appreciation of the discipline, an enhanced sense of civic responsibility, and greater respect for the different languages and cultures in our classes. Indeed, an increase in student engagement has been shown to play a central role in student success, and it is well noted that deeper learning and more innovative solutions take place in collaborative groups. We aim to make a meaningful contribution towards the production and repository of knowledge in multilingual teaching and learning for the benefit of the diverse student population and staff. This would strengthen language development, multilingualism, and multiculturalism at CPUT and empower and promote African languages as languages of science and education at CPUT, in other institutions of higher learning, and in South Africa as a whole.

Keywords: educational videos, multiculturalism, multilingualism, student engagement

Procedia PDF Downloads 155
346 Ontology-Navigated Tutoring System for Flipped-Mastery Model

Authors: Masao Okabe

Abstract:

Nowadays, in Japan, a wide variety of students enter university, and one of the main roles of introductory courses for freshmen is to make such students well prepared for subsequent intermediate courses. For that purpose, the flipped-mastery model is not enough, because the videos usually used in a flipped classroom are not adaptive and do not fit all freshmen, whose academic performance differs. This paper proposes an ontology-navigated tutoring system called EduGraph. Using EduGraph, students can prepare for and review a class in a more flexible and personalized way than with videos. By structuring learning materials through its ontology, EduGraph also helps students integrate what they learn as knowledge and makes learning materials sharable. EduGraph was used for an introductory course for freshmen. This application suggests that EduGraph is effective.

Keywords: adaptive e-learning, flipped classroom, mastery learning, ontology

Procedia PDF Downloads 280
345 Investigation of Perception of Humor in Older Adults

Authors: Ng Ziyi Zoe, Yow Wei Quin

Abstract:

Humor plays a pivotal role in our interaction with people. According to the age-related positivity effect, older adults (OA) demonstrate more positive emotions and are better able to modulate negative emotional states than younger adults (YA), suggesting an increase in humor appreciation with age. However, different types of humor might show different patterns of change in appreciation with age (e.g., incongruity-resolution humor, aggressive humor, self- vs. other-deprecating humor). Thus, we aim to explore age-related effects in the perception of different types of humor in a single study, including the impact of local slang on humor appreciation. Twenty OA aged 60 and above and 24 YA aged 13-20 watched four short videos (i.e., benign, violent, satire+local slang, and others-deprecating humor) and rated how funny the videos were (on a scale of 1 = not funny at all to 5 = very funny). Participants were also asked to rank the videos in order from most to least entertaining. A repeated-measures ANOVA found significant main effects of age, F(3,39) = 12.88, p < .001, where OA gave higher ratings than YA (M = 3.20 vs. 2.63), and humor type, F(3,123) = 19.66, p < .001. Post-hoc analyses revealed a significant linear contrast where benign and violent humor had the lowest ratings while others-deprecating humor had the highest ratings. No significant interaction effect was found. The distribution of ranking ratings also differed between OA and YA (e.g., OA preferred satire+local slang and others-deprecating humor, whereas YA overwhelmingly preferred others-deprecating humor). Overall, OA displayed a greater appreciation across various types of humor than YA. Humor perception will be discussed in the larger context of cognitive, societal, and cultural implications.

Keywords: humor, older adults, perception, age differences

Procedia PDF Downloads 174
344 Keyframe Extraction Using Face Quality Assessment and Convolution Neural Network

Authors: Rahma Abed, Sahbi Bahroun, Ezzeddine Zagrouba

Abstract:

Due to the huge amount of data in videos, extracting the relevant frames has become a necessity and an essential step prior to performing face recognition. In this context, we propose a method for extracting keyframes from videos based on face quality and deep learning for a face recognition task. This method has two steps. We start by generating face quality scores for each face image based on the use of three face feature extractors, namely Gabor, LBP, and HOG. The second step consists of training a deep convolutional neural network in a supervised manner in order to select the frames that have the best face quality. The obtained results show the effectiveness of the proposed method compared to state-of-the-art methods.
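
A minimal sketch of the first step only, assuming scikit-image feature extractors and a naive unweighted quality score over HOG, LBP, and Gabor responses; the CNN-based selection stage described above is omitted.

```python
# Hand-crafted face-quality proxy per frame; the weighting is an assumption.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from skimage.filters import gabor

def quality_score(gray_face):
    """Crude quality proxy for a 2-D grayscale face crop."""
    hog_energy = np.linalg.norm(hog(gray_face, pixels_per_cell=(16, 16)))
    lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
    counts = np.bincount(lbp.astype(int).ravel())
    p = counts / counts.sum()
    lbp_entropy = -np.sum(p * np.log2(p + 1e-12))
    gabor_real, _ = gabor(gray_face, frequency=0.2)
    gabor_energy = np.mean(np.abs(gabor_real))
    return hog_energy + lbp_entropy + gabor_energy   # naive unweighted sum

def top_keyframes(faces, k=5):
    scores = [quality_score(f) for f in faces]       # faces: list of 2-D arrays
    return np.argsort(scores)[::-1][:k]              # indices of best-quality frames
```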

Keywords: keyframe extraction, face quality assessment, face in video recognition, convolution neural network

Procedia PDF Downloads 232
343 Automatic Detection and Update of Region of Interest in Vehicular Traffic Surveillance Videos

Authors: Naydelis Brito Suárez, Deni Librado Torres Román, Fernando Hermosillo Reynoso

Abstract:

Automatic detection and generation of a dynamic ROI (Region of Interest) in vehicle traffic surveillance videos from a static camera in Intelligent Transportation Systems is challenging for computer vision-based systems. The dynamic ROI, being a changing ROI, should capture any moving object located outside of a static ROI. In this work, the video is represented by a tensor model composed of a background tensor and a foreground tensor, which contains all moving vehicles or objects. The values of each pixel over a time interval are represented by time series, and some pixel rows were selected. This paper proposes a pixel entropy-based algorithm for automatic detection and generation of a dynamic ROI in traffic videos, under the assumption of two types of theoretical pixel entropy behavior: (1) a pixel located on the road shows a high entropy value due to disturbances in this zone caused by vehicle traffic; (2) a pixel located outside the road shows a relatively low entropy value. To study the statistical behavior of the selected pixels, detecting the entropy changes and consequently the moving objects, Shannon, Tsallis, and approximate entropies were employed. Although Tsallis entropy achieved very good results in real time, approximate entropy showed slightly better results but at a greater computational cost.
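
A minimal sketch of the core idea using Shannon entropy only: each pixel's time series is binned into a histogram, and pixels whose entropy exceeds a threshold are marked as part of the dynamic ROI. The Tsallis and approximate entropy variants and the convex-hull post-processing are omitted, and the bin count and threshold are assumptions.

```python
import numpy as np

def pixel_entropy_roi(frames, bins=16, thresh=2.0):
    """frames: (T, H, W) grayscale stack -> boolean mask of 'busy' (road) pixels."""
    T, H, W = frames.shape
    series = frames.reshape(T, -1)                   # one time series per pixel
    entropy = np.empty(H * W)
    for i in range(H * W):
        hist, _ = np.histogram(series[:, i], bins=bins, range=(0, 255))
        p = hist[hist > 0] / T                       # empirical probabilities
        entropy[i] = -np.sum(p * np.log2(p))         # Shannon entropy of the pixel
    return entropy.reshape(H, W) > thresh            # True where traffic disturbs the pixel
```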

Keywords: convex hull, dynamic ROI detection, pixel entropy, time series, moving objects

Procedia PDF Downloads 74
342 Representation of the Iranian Community in the Videos of the Instagram Page of the World Health Organization Representative in Iran

Authors: Naeemeh Silvari

Abstract:

The spread of the coronavirus pandemic confronted many aspects of social life around the world with various challenges. In response, and in order to improve people's living conditions, the World Health Organization has tried to publish the necessary guidance for its audiences around the world through its media channels. Considering the importance of cultural differences in health communication and the distinct needs of people in different societies, some content was produced and published exclusively for particular audiences. This research studied, as a case study, six videos published on the official Instagram page of the World Health Organization representative in Iran. The published content has little semantic affinity with Iranian culture and attempts to show a uniform image of the Middle East dominated by the image of the culture of the developing Arab countries.

Keywords: corona, representation, semiotics, instagram, health communication

Procedia PDF Downloads 93
341 Real Time Video Based Smoke Detection Using Double Optical Flow Estimation

Authors: Anton Stadler, Thorsten Ike

Abstract:

In this paper, we present a video-based smoke detection algorithm based on TVL1 optical flow estimation. The main part of the algorithm is an accumulating system for motion angles and upward motion speed of the flow field. We optimized the use of TVL1 flow estimation for the detection of smoke with very low smoke density. To this end, we use adapted flow parameters and estimate the flow field on difference images. We show in theory and in evaluation that this improves the performance of smoke detection significantly. We evaluate the smoke algorithm using videos with different smoke densities and different backgrounds. We show that smoke detection is very reliable in varying scenarios. Furthermore, we verify that our algorithm is very robust towards disturbance videos with crowded scenes.
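
As a rough illustration of the upward-motion accumulation idea, the sketch below computes TV-L1 flow between consecutive difference images and measures the fraction of pixels moving predominantly upward; it assumes opencv-contrib-python is installed, and the thresholds are illustrative rather than the authors' tuned values.

```python
import cv2
import numpy as np

# TV-L1 dense optical flow (lives in the opencv-contrib "optflow" module)
tvl1 = cv2.optflow.createOptFlow_DualTVL1()

def upward_motion_score(prev_diff, curr_diff, speed_thresh=0.5):
    """prev_diff, curr_diff: 8-bit grayscale difference images of consecutive frame pairs.
    Returns the fraction of pixels whose flow is mostly upward and faster than a threshold."""
    flow = tvl1.calc(prev_diff, curr_diff, None)     # (H, W, 2): dx, dy per pixel
    dx, dy = flow[..., 0], flow[..., 1]
    upward = (-dy > speed_thresh) & (np.abs(dy) > np.abs(dx))  # image y grows downward
    return upward.mean()
```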

Keywords: low density, optical flow, upward smoke motion, video based smoke detection

Procedia PDF Downloads 354
340 Can the Intervention of SCAMPER Bring about Changes of Neural Activation While Taking Creativity Tasks?

Authors: Yu-Chu Yeh, WeiChin Hsu, Chih-Yen Chang

Abstract:

Substitution, combination, modification, putting to other uses, elimination, and rearrangement (SCAMPER) has been regarded as an effective technique that provides a structured way to help people produce creative ideas and solutions. Although some neuroscience studies regarding creativity training have been conducted, no study has focused on SCAMPER. This study therefore aimed at examining whether learning SCAMPER through video tutorials would result in alterations of neural activation. Thirty college students were randomly assigned to the experimental group or the control group. The experimental group was asked to watch SCAMPER videos, whereas the control group was asked to watch natural-scene videos, which were regarded as neutral stimulus materials. Each participant was scanned in a functional magnetic resonance imaging (fMRI) machine while undertaking a creativity test before and after watching the videos. Furthermore, a two-way ANOVA was used to analyze the interaction between groups (the experimental group; the control group) and tasks (C task; M task; X task). The results revealed that the left precuneus was significantly activated in the interaction of groups and tasks, as well as in the main effect of group. Furthermore, compared with the control group, the experimental group had greater activation in the default mode network (left precuneus and left inferior parietal cortex) and the motor network (left postcentral gyrus and left supplementary area). The findings suggest that SCAMPER training may facilitate creativity through stimulation of the default mode network and the motor network.

Keywords: creativity, default mode network, neural activation, SCAMPER

Procedia PDF Downloads 100
339 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices

Authors: Pratik Dhabal Deo, Manoj P.

Abstract:

With the increasing number of social media users, the amount of video content available has also significantly increased. Currently, the number of smartphone users is at its peak, and many increasingly use their smartphones as their main photography and recording devices. There have been many developments in the field of Video Quality Assessment (VQA), and metrics like VMAF and SSIM are said to be among the best-performing metrics, but the evaluation of these metrics is dominantly done on professionally shot video content using professional tools, lighting conditions, etc. No study has specifically pinpointed the performance of these metrics on content taken by users with the most commonly available devices. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most-used devices, even if they contain content taken in poor lighting conditions using lower-end devices. These devices face many distortions due to various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective VQA metrics on content taken only from the most-used devices and their performance on it, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken from the three most commonly used devices: an Android smartphone, an iOS smartphone, and a DSLR. To the videos taken on each of these devices, the six most common types of distortions that users face were applied, in addition to the already existing H.264 compression, based on four reference videos. Each of these six distortions has three levels of degradation. The five most popular VQA metrics were evaluated on this dataset, and the highest and lowest values of each metric on the distortions were recorded. We found that blur is the artifact on which most of the metrics did not perform well. Thus, to understand the results better, the amount of blur in the dataset was calculated, and an additional evaluation of the metrics was done using the HEVC codec, the successor of H.264 compression, on the camera that proved to be the sharpest among the devices. The results show that as the resolution increases, the performance of the metrics tends to become more accurate, and the best-performing metric among them is VQM, with very few inconsistencies and inaccurate results when the compression applied is H.264; when the compression applied is HEVC, SSIM and VMAF perform significantly better.
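
For reference, a minimal sketch of a frame-wise full-reference comparison between a reference video and a distorted version, using PSNR and SSIM from scikit-image (VMAF and VQM require external tools and are not shown); the file paths are placeholders.

```python
import cv2
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def framewise_fr_metrics(ref_path, dist_path):
    """Average PSNR/SSIM over aligned frames of a reference and a distorted video."""
    ref, dist = cv2.VideoCapture(ref_path), cv2.VideoCapture(dist_path)
    psnrs, ssims = [], []
    while True:
        ok1, r = ref.read()
        ok2, d = dist.read()
        if not (ok1 and ok2):
            break
        r = cv2.cvtColor(r, cv2.COLOR_BGR2GRAY)
        d = cv2.cvtColor(d, cv2.COLOR_BGR2GRAY)
        psnrs.append(peak_signal_noise_ratio(r, d, data_range=255))
        ssims.append(structural_similarity(r, d, data_range=255))
    ref.release(); dist.release()
    return float(np.mean(psnrs)), float(np.mean(ssims))

print(framewise_fr_metrics("reference.mp4", "distorted.mp4"))
```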

Keywords: distortion, metrics, performance, resolution, video quality assessment

Procedia PDF Downloads 203
338 The Effect of Reminiscence Therapy with Internet-Based Videos on Cognition and Apathy in Elderly with Mild Dementia

Authors: Ayse Inel Manav, Nuray Simsek

Abstract:

The number of people with dementia, and the problems experienced by these people, are increasing along with the aging world population. This study was carried out to assess the effects of reminiscence therapy using internet videos on the cognitive condition and apathy levels of elderly people who had mild dementia and lived in nursing homes. This randomized controlled experimental study was conducted between 25 May and 25 August 2016 in nursing home, elderly care, and rehabilitation centers in Adana and Seyhan, Turkey. A total of 32 individuals participated in this study, 16 in the experimental group and 16 in the control group. Data were collected using a personal information form developed on the basis of the published literature, the Standardized Mini Mental Test (SMMT), and the Apathy Rating Scale (ARS). The Clinical Research Ethics Committee's approval, written institutional permission, and the written consent of the participants were obtained before data collection. The individuals in the experimental group received reminiscence therapy using internet videos for 60 minutes one day a week for three months. During the same period, 25-30 minutes of unstructured interviews on subjects unrelated to reminiscence were carried out with individuals in the control group. The SMMT and ARS were administered before the applications in the experimental group and at the end of the third month. The collected data were analyzed using descriptive statistics (means, standard deviations, and frequencies) as well as Student's t-test, the Mann-Whitney U-test, and Wilcoxon's signed-ranks test. In this study, the total SMMT post-test scores of the experimental group were higher than those of the control group (p = 0.001; p < 0.01), and there was a significant difference between the experimental and control groups' total SMMT post-test scores (p = 0.001; p < 0.01). The experimental group's ARS total post-test scores were also higher than those of the control group (p = 0.001; p < 0.01). This study found that group reminiscence therapy using internet videos improved the cognitive functions and apathy levels of elderly individuals with mild dementia.

Keywords: apathy, cognitive testing, dementia, elderly, reminiscence therapy

Procedia PDF Downloads 196
337 Student-Created Videos to Foster Active Learning in Heat Transfer Course

Authors: W. Appamana, S. Jantasee, P. Siwarasak, T. Mueansichai, C. Kaewbuddee

Abstract:

Heat transfer is important in the chemical engineering field; we have to know how to predict rates of heat transfer in a variety of process situations. Therefore, learning heat transfer is one of the greatest challenges for undergraduate students in chemical engineering. To enhance student learning in the classroom, an active-learning method was implemented in a single class section, using video-based problems, student-created videos, think-pair-share, and the jigsaw technique. The results show that the active-learning method can discourage students from copying from the solutions manual and improved average examination scores by about 5% compared with students in the traditional section. Overall, this project represents an effective type of class that motivates student-centric learning while enhancing self-motivation, creative thinking, and critical analysis among students.

Keywords: active learning, student-created video, self-motivation, creative thinking

Procedia PDF Downloads 235
336 Smartphone Video Source Identification Based on Sensor Pattern Noise

Authors: Raquel Ramos López, Anissa El-Khattabi, Ana Lucila Sandoval Orozco, Luis Javier García Villalba

Abstract:

An increasing number of mobile devices with integrated cameras has meant that most digital video comes from these devices. These digital videos can be made anytime, anywhere, and for different purposes. They can also be shared on the Internet in a short period of time and may sometimes contain recordings of illegal acts. The need to reliably trace the origin becomes evident when these videos are used for forensic purposes. This work proposes an algorithm to identify the brand and model of the mobile device that generated a video. Its procedure is as follows: after obtaining the relevant video information, a classification algorithm based on sensor noise and the wavelet transform performs the aforementioned identification process. We also present experimental results that support the validity of the techniques used and show promising results.
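
A minimal sketch of the sensor-pattern-noise (PRNU) idea under simplifying assumptions: per-device fingerprints are built by averaging wavelet-denoising residuals of same-size key frames, and a query video is attributed to the device whose fingerprint correlates best. This illustrates the principle only and is not the paper's classifier.

```python
import numpy as np
from skimage.restoration import denoise_wavelet

def noise_residual(gray):
    """Residual = frame minus its wavelet-denoised version (retains sensor noise)."""
    gray = gray.astype(np.float64) / 255.0
    return gray - denoise_wavelet(gray, rescale_sigma=True)

def fingerprint(frames):
    # frames: list of 2-D grayscale arrays of identical resolution
    return np.mean([noise_residual(f) for f in frames], axis=0)

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(query_frames, reference_fingerprints):
    """reference_fingerprints: dict mapping device name -> fingerprint array."""
    q = fingerprint(query_frames)
    return max(reference_fingerprints, key=lambda name: ncc(q, reference_fingerprints[name]))
```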

Keywords: digital video, forensics analysis, key frame, mobile device, PRNU, sensor noise, source identification

Procedia PDF Downloads 428
335 Sentiment Analysis on the East Timor Accession Process to the ASEAN

Authors: Marcelino Caetano Noronha, Vosco Pereira, Jose Soares Pinto, Ferdinando Da C. Saores

Abstract:

One particularly popular social media platform is YouTube. It is a video-sharing platform where users can submit videos, and other users can like, dislike, or comment on them. In this study, we conduct a binary classification task on YouTube video comments and reviews from users regarding the accession process of Timor Leste to become the eleventh member of the Association of Southeast Asian Nations (ASEAN). We scrape the data directly from the public YouTube video and apply several pre-processing and weighting techniques. Before conducting the classification, we categorized the data into two classes, namely positive and negative. In the classification part, we apply the Support Vector Machine (SVM) algorithm. Compared with the Naïve Bayes algorithm, the experiment showed that SVM achieved 84.1% accuracy, 94.5% precision, and 73.8% recall.
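
A minimal sketch of the classification stage, assuming scikit-learn with TF-IDF weighting and a linear SVM; the comment scraping, language-specific pre-processing, and the Naïve Bayes baseline are omitted, and the example comments are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Toy labeled comments standing in for the scraped YouTube data
comments = ["Welcome Timor Leste to ASEAN!",
            "Proud moment for Southeast Asia",
            "This accession will never work",
            "Bad decision, the economy is not ready"]
labels = ["positive", "positive", "negative", "negative"]

# TF-IDF weighting followed by a linear SVM classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(comments, labels)
print(model.predict(["Great news for the region"]))
```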

Keywords: classification, YouTube, sentiment analysis, support vector machine

Procedia PDF Downloads 108
334 A Passive Digital Video Authentication Technique Using Wavelet Based Optical Flow Variation Thresholding

Authors: R. S. Remya, U. S. Sethulekshmi

Abstract:

Detecting the authenticity of a video is an important issue in digital forensics, as video is used as silent evidence in court, for example in child pornography and movie piracy cases, insurance claims, cases involving scientific fraud, and traffic monitoring. The biggest threat to video data is the availability of modern open video editing tools, which enable easy editing of videos without leaving any trace of tampering. In this paper, we propose an efficient passive method for detecting inter-frame video tampering, its type, and its location by estimating the optical flow of wavelet features of adjacent frames and thresholding the variation in the estimated feature. The performance of the algorithm is compared with z-score thresholding, and it achieved an efficiency above 95% on all the tested databases. The proposed method works well for videos with dynamic (forensics) as well as static (surveillance) backgrounds.
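
A rough sketch of the inter-frame idea: track a flow-strength signal over time and flag frames where its variation jumps past a threshold. Here Farneback flow on raw grayscale frames stands in for the authors' wavelet-feature optical flow, and the threshold rule is an assumption.

```python
import cv2
import numpy as np

def flow_strength_series(path):
    """Mean optical-flow magnitude between each pair of adjacent frames."""
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    series = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        series.append(np.mean(np.linalg.norm(flow, axis=2)))
        prev = gray
    cap.release()
    return np.array(series)

def suspicious_frames(series, k=4.0):
    """Flag frame indices where the flow-strength variation is an outlier."""
    variation = np.abs(np.diff(series))
    return np.where(variation > variation.mean() + k * variation.std())[0] + 1
```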

Keywords: discrete wavelet transform, optical flow, optical flow variation, video tampering

Procedia PDF Downloads 359
333 A Comparison of Proxemics and Postural Head Movements during Pop Music versus Matched Music Videos

Authors: Harry J. Witchel, James Ackah, Carlos P. Santos, Nachiappan Chockalingam, Carina E. I. Westling

Abstract:

Introduction: Proxemics is the study of how people perceive and use space. It is commonly proposed that when people like or engage with a person/object, they will move slightly closer to it, often quite subtly and subconsciously. Music videos are known to add entertainment value to a pop song. Our hypothesis was that adding an appropriately matched video to a pop song would lead to a net approach of the head toward the monitor screen compared to simply listening to an audio-only version of the song. Methods: We presented two musical stimuli in a counterbalanced order to 27 participants (ages 21.00 ± 2.89, 15 female) seated in front of a 47.5 x 27 cm monitor; all stimuli were based on music videos by the band OK Go: Here It Goes Again (HIGA, boredom ratings (0-100) = 15.00 ± 4.76, mean ± SEM, standard-error-of-the-mean) and Do What You Want (DWYW, boredom ratings = 23.93 ± 5.98), which did not differ in boredom elicited (P = 0.21, rank-sum test). Each participant experienced each song only once, and one song (counterbalanced) as audio-only versus the other song as a music video. Movement was measured by video tracking using Kinovea 0.8, based on recordings from a lateral aspect; before beginning, each participant had a reflective motion-tracking marker placed on the outer canthus of the left eye. Analysis of the Kinovea X-Y coordinate output in comma-separated-variables format was performed in Matlab, as were non-parametric statistical tests. Results: We found that the audio-only stimuli (combined for both HIGA and DWYW; mean ± SEM = 35.71 ± 5.36) were significantly more boring than the music video versions (19.46 ± 3.83, P = 0.0066, Wilcoxon signed-rank test (WSRT), Cohen's d = 0.658, N = 28). We also found that participants' heads moved around twice as much during the audio-only versions (speed = 0.590 ± 0.095 mm/sec) compared to the video versions (0.301 ± 0.063 mm/sec, P = 0.00077, WSRT). However, the participants' mean head-to-screen distances were not detectably smaller (i.e. head closer to the screen) during the music videos (74.4 ± 1.8 cm) compared to the audio-only stimuli (73.9 ± 1.8 cm, P = 0.37, WSRT). If anything, during the audio-only condition, they were slightly closer. Interestingly, the ranges of the head-to-screen distances were smaller during the music video (8.6 ± 1.4 cm) compared to the audio-only (12.9 ± 1.7 cm, P = 0.0057, WSRT), the standard deviations were also smaller (P = 0.0027, WSRT), and their heads were held 7 mm higher (video 116.1 ± 0.8 vs. audio-only 116.8 ± 0.8 cm above floor, P = 0.049, WSRT). Discussion: As predicted, sitting and listening to experimenter-selected pop music was more boring than when the music was accompanied by a matched, professionally-made video. However, we did not find that the proxemics of the situation led to approaching the screen. Instead, adding video led to efforts to control the head to a more central and upright viewing position and to suppress head fidgeting.

Keywords: boredom, engagement, music videos, posture, proxemics

Procedia PDF Downloads 167
332 Upgrading of Problem-Based Learning with Educational Multimedia to the Undergraduate Students

Authors: Sharifa Alduraibi, Abir El Sadik, Ahmed Elzainy, Alaa Alduraibi, Ahmed Alsolai

Abstract:

Introduction: Problem-based learning (PBL) is an active, student-centered educational modality driven by students' interest, which requires continuous motivation to improve engagement. The new era of professional information technology has facilitated the use of educational multimedia, such as videos, soundtracks, and photographs, to promote student learning. The aim of the present study was to introduce multimedia-enriched PBL scenarios for the first time in the College of Medicine, Qassim University, as an incentive for better student engagement. In addition, students' performance and satisfaction were evaluated. Methodology: Two multimedia-enhanced PBL scenarios were implemented with third-year students in the urinary system block. Radiological images (plain CT scans and X-rays of the abdomen and renal nuclear scans), correlated with the corresponding gross pathology photographs, were added to the scenarios. One week before the first sessions, pre-recorded orientation videos for PBL tutors were provided to clarify the multimedia incorporated in the scenarios. Two other traditional PBL scenarios, devoid of multimedia demonstrating the pathological and radiological findings, were designed. Results and Discussion: Comparison of the formative assessment results at the end of the two PBL modalities revealed a significant increase in students' engagement, critical thinking, and practical reasoning skills during the multimedia-enhanced sessions. A student perception survey showed great satisfaction with the new strategy. Conclusion: It could be concluded from the current work that multimedia creates a technology-based teaching strategy that inspires students toward self-directed thinking and promotes their overall achievement.

Keywords: multimedia, pathology and radiology images, problem-based learning, videos

Procedia PDF Downloads 157
331 Effectiveness of Video Interventions for Perpetrators of Domestic Violence

Authors: Zeynep Turhan

Abstract:

Digital tools can improve knowledge and awareness of strategies and skills for healthy and respectful intimate relationships. The website of the Healthy and Respectful Relationship Program was developed and includes five key videos about how to build healthy intimate relationships. This study examined perspectives on these informative videos, focusing on how individuals learn new information or challenge their preconceptions or attitudes regarding male privilege and women's oppression. Five individuals who had received no-contact orders and attended the group intervention formed the sample of this study. Observation notes were the primary data for examining how participants responded to the video tools. The data analysis method was interpretative phenomenological analysis. The results showed that many participants found the tools useful in learning about the types of violence and communication strategies. Nevertheless, obstacles to implementing some techniques were found in their relationships. These digital tools might enhance healthy and respectful relationships despite some limitations.

Keywords: healthy relationship, digital tools, intimate partner violence, perpetrators, video interventions

Procedia PDF Downloads 95
330 Learning from TikTok Food Pranks to Promote Food Saving Among Adolescents

Authors: Xuan (Iris) Li, Jenny Zhengye Hou, Greg Hearn

Abstract:

Food waste is a global issue, with an estimated 30% to 50% of food created never being consumed. It is therefore vital to reduce food waste and convert wasted food into recyclable outputs. TikTok provides a simple way of creating and duetting videos in just a few steps by using templates with the same sound/vision/caption effects to produce personalized content; this format, called a duet, makes TikTok a revealing place to study whether the platform encourages wasting food or saving it. The research focuses on examining food-related content on TikTok, with particular attention paid to two distinct themes, food waste pranks and food-saving practices, to understand the potential impacts of these themes on adolescents and their attitudes toward sustainable food consumption practices. Specifically, the analysis explores how TikTok content related to food waste and/or food saving may contribute to the normalization and promotion of either positive or negative food behaviours among young viewers. The research employed content analysis and semi-structured interviews to understand what factors contribute to the difference in popularity between food pranks and food-saving videos, and how insights from the former can be applied to the latter to increase their communication effectiveness. The first category of food content on TikTok under examination pertains to food waste, including videos featuring pranks and mukbang. These forms of content have the potential to normalize or even encourage food waste behaviours among adolescents, exacerbating the already significant food waste problem. The second category of TikTok food content under examination relates to food saving, for example, videos teaching viewers how to maximize the use of food to reduce waste. This type of content can potentially empower adolescents to act against food waste and foster positive and sustainable food practices in their communities. The initial findings of the study suggest that TikTok content related to pranks is more popular among viewers than content focused on teaching people how to save food. Additionally, these prank videos are gaining fans at a faster rate than content promoting more sustainable food practices. However, we argue there is great potential for social media platforms like TikTok to play an educative role in promoting positive behaviour change among young people by sharing engaging content suited to target audiences. This research serves as the first to investigate the potential utility of TikTok in food waste reduction and underscores the important role social media platforms can play in promoting sustainable food practices. The findings will help governments, organizations, and communities promote tailored and effective interventions to reduce food waste and help achieve the United Nations' sustainable development goal of halving food waste by 2030.

Keywords: food waste reduction, behaviour, social media, TikTok, adolescents

Procedia PDF Downloads 77
329 A Multi Sensor Monochrome Video Fusion Using Image Quality Assessment

Authors: M. Prema Kumar, P. Rajesh Kumar

Abstract:

The increasing interest in image fusion (combining images of two or more modalities, such as infrared and visible light radiation) has led to a need for accurate and reliable image assessment methods. This paper gives a novel approach for merging the information content from several videos taken of the same scene in order to build a combined video that contains the finest information coming from the different source videos. This process is known as video fusion, and it helps provide an image of superior quality (the term quality connotes a measurement tied to the particular application) compared to the source images. In this technique, different sensors are used for the various cameras that capture the required images, and their redundant information can be reduced. In this paper, an image fusion technique based on multi-resolution singular value decomposition (MSVD) has been used. Image fusion by MSVD is very similar to wavelet-based fusion; the idea behind MSVD is to replace the FIR filters in the wavelet transform with the singular value decomposition (SVD). It is computationally very simple and is well suited to real-time applications such as remote sensing and astronomy.
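
As a simplified illustration of SVD-driven fusion (not the full multi-resolution MSVD scheme), the sketch below fuses two co-registered grayscale frames block by block, keeping the block whose singular values carry more energy; the block size is an arbitrary assumption.

```python
import numpy as np

def fuse_frames(a, b, block=8):
    """a, b: co-registered, same-size grayscale frames (2-D float arrays) from two sensors."""
    fused = np.empty_like(a)
    H, W = a.shape
    for i in range(0, H, block):
        for j in range(0, W, block):
            pa = a[i:i + block, j:j + block]
            pb = b[i:i + block, j:j + block]
            # block "energy" measured as the sum of singular values
            ea = np.linalg.svd(pa, compute_uv=False).sum()
            eb = np.linalg.svd(pb, compute_uv=False).sum()
            fused[i:i + block, j:j + block] = pa if ea >= eb else pb
    return fused
```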

Keywords: multi sensor image fusion, MSVD, image processing, monochrome video

Procedia PDF Downloads 572
328 Automated Tracking and Statistics of Vehicles at the Signalized Intersection

Authors: Qiang Zhang, Xiaojian Hu

Abstract:

Intersections are places where vehicles and pedestrians must pass through, turn, and clear the area, and obtaining the motion data of vehicles near an intersection is of great significance for transportation research. Because there are usually many targets and frequent conflicts between them, it is difficult to obtain vehicle motion parameters from traffic videos of intersections. According to the characteristics of traffic videos, this paper applies video technology to realize automated tracking, counting, and trajectory extraction of vehicles, collecting traffic data from roadside surveillance cameras installed near intersections. Based on the video recognition method, the vehicles in each lane near the intersection are tracked, their trajectories are extracted, and they are counted under various degrees of occlusion and visibility. The performance is compared with currently recognized CPU-based algorithms for real-time tracking-by-detection. The speed of the presented system is higher than the others, and the system has better real-time performance. The accuracy of direction has reached about 94.99% on average, and the accuracy of classification and statistics has reached about 75.12% on average.
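
A minimal sketch of only the detection stage of such a tracking-by-detection pipeline, assuming OpenCV background subtraction to propose moving-vehicle boxes per frame; the tracker, lane assignment, and counting logic described above are omitted, and the area threshold is illustrative.

```python
import cv2
import numpy as np

def detect_moving_vehicles(path, min_area=800):
    """Per-frame bounding boxes of moving blobs from a static surveillance camera."""
    cap = cv2.VideoCapture(path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = np.ones((3, 3), np.uint8)
    per_frame_boxes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
        per_frame_boxes.append(boxes)            # (x, y, w, h) per detected vehicle candidate
    cap.release()
    return per_frame_boxes
```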

Keywords: tracking and statistics, vehicle, signalized intersection, motion parameter, trajectory

Procedia PDF Downloads 221
327 The Language of Fliptop among Filipino Youth: A Discourse Analysis

Authors: Bong Borero Lumabao

Abstract:

This qualitative research is a study of the lines of Fliptop talks performed by Fliptop rappers, employing Finnegan's (2008) discourse analysis. This paper aimed to analyze the phonological, morphological, and semantic features of Fliptop talk, to explore the structures in the lines of Fliptop among Filipino youth, and to uncover the various insights that can be gained from it. The corpora of the study included all 20 Fliptop videos downloaded from the YouTube channel of Fliptop. Results revealed that Fliptop contains phonological features such as assonance, consonance, deletion, lengthening, and rhyming. Morphological features include acronyms, affixation, blending, borrowing, code-mixing and switching, compounding, conversion or functional shifts, and dysphemism. The semantic analysis covered the lexical categories, meanings, and words used in the Fliptop talks. The structure of Fliptop revolves around personal attacks (physical attributes), attacks on the bars (rapping skills), extensions to family members and friends, antithesis, profane words, figurative language, sexual undertones, anime characters, homosexuality, and the involvement of famous celebrities.

Keywords: discourse analysis, fliptop talks, filipino youth, fliptop videos, Philippines

Procedia PDF Downloads 242
326 VideoAssist: A Labelling Assistant to Increase Efficiency in Annotating Video-Based Fire Dataset Using a Foundation Model

Authors: Keyur Joshi, Philip Dietrich, Tjark Windisch, Markus König

Abstract:

In the field of surveillance-based fire detection, the volume of incoming data is increasing rapidly. However, the labeling of a large industrial dataset is costly due to the high annotation costs associated with current state-of-the-art methods, which often require bounding boxes or segmentation masks for model training. This paper introduces VideoAssist, a video annotation solution that utilizes a video-based foundation model to annotate entire videos with minimal effort, requiring the labeling of bounding boxes for only a few keyframes. To the best of our knowledge, VideoAssist is the first method to significantly reduce the effort required for labeling fire detection videos. The approach offers bounding box and segmentation annotations for the video dataset with minimal manual effort. Results demonstrate that the performance of labels annotated by VideoAssist is comparable to those annotated by humans, indicating the potential applicability of this approach in fire detection scenarios.

Keywords: fire detection, label annotation, foundation models, object detection, segmentation

Procedia PDF Downloads 6
325 Individualized Emotion Recognition Through Dual-Representations and Group-Established Ground Truth

Authors: Valentina Zhang

Abstract:

While facial expression is a complex and individualized behavior, all facial emotion recognition (FER) systems known to us rely on a single facial representation and are trained on universal data. We conjecture that: (i) different facial representations can provide different, sometimes complementary views of emotions; (ii) when employed collectively in a discussion group setting, they enable more accurate emotion reading, which is highly desirable in autism care and other applications sensitive to errors. In this paper, we first study FER using pixel-based DL vs. semantics-based DL in the context of deepfake videos. Our experiment indicates that while the semantics-trained model performs better with articulated facial feature changes, the pixel-trained model outperforms it on subtle or rare facial expressions. Armed with these findings, we have constructed an adaptive FER system that learns from both types of models for dyadic or small interacting groups and further leverages the synthesized group emotions as the ground truth for individualized FER training. Using a collection of group conversation videos, we demonstrate that FER accuracy and personalization can benefit from such an approach.

Keywords: neurodivergence care, facial emotion recognition, deep learning, ground truth for supervised learning

Procedia PDF Downloads 147
324 Clustering Color Space, Time Interest Points for Moving Objects

Authors: Insaf Bellamine, Hamid Tairi

Abstract:

Detecting moving objects in sequences is an essential step in video analysis. This paper mainly contributes to Color Space-Time Interest Point (CSTIP) extraction and detection. We propose a new method for the detection of moving objects, composed of two main steps. First, we apply the CSTIP detection algorithm to both components of a color structure-texture image decomposition based on a partial differential equation (PDE): a color geometric structure component and a color texture component. A descriptor is associated with each of these points. In a second stage, we address the problem of grouping the CSTIP points into clusters. Experiments and comparison with other motion detection methods on challenging sequences show the performance of the proposed method and its utility for video analysis. Experimental results are obtained from very different types of videos, namely sport videos and animation movies.

Keywords: Color Space-Time Interest Points (CSTIP), Color Structure-Texture Image Decomposition, Motion Detection, clustering

Procedia PDF Downloads 378
323 Engagement Analysis Using DAiSEE Dataset

Authors: Naman Solanki, Souraj Mondal

Abstract:

With the world moving towards online communication, the volume of stored video has exploded in the past few years. Consequently, it has become crucial to analyse participants' engagement levels in online communication videos. Engagement prediction of people in videos can be useful in many domains, like education, client meetings, dating, etc. Video-level or frame-level prediction of engagement for a user involves the development of robust models that can capture facial micro-emotions efficiently. For the development of an engagement prediction model, it is necessary to have a widely accepted standard dataset for engagement analysis. DAiSEE is one of the datasets that consists of in-the-wild data and has gold-standard annotations for engagement prediction. Earlier research using the DAiSEE dataset involved training and testing standard models like CNN-based models, but the results were not satisfactory according to industry standards. In this paper, a multi-level classification approach is introduced to create a more robust model for engagement analysis using the DAiSEE dataset. This approach recorded testing accuracies of 0.638, 0.7728, 0.8195, and 0.866 for predicting boredom level, engagement level, confusion level, and frustration level, respectively.
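
One way to realize such a multi-level setup is a shared backbone with a separate classification head per affective state; the sketch below (PyTorch) is a hypothetical illustration, and the feature dimension, backbone, and number of levels are assumptions rather than the model evaluated in the paper.

```python
import torch
import torch.nn as nn

class MultiHeadEngagement(nn.Module):
    def __init__(self, feat_dim=512, levels=4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        # one classification head per affective state, each predicting a level 0..levels-1
        self.heads = nn.ModuleDict({
            state: nn.Linear(256, levels)
            for state in ("boredom", "engagement", "confusion", "frustration")
        })

    def forward(self, x):                      # x: (batch, feat_dim) clip features
        h = self.backbone(x)
        return {state: head(h) for state, head in self.heads.items()}

model = MultiHeadEngagement()
logits = model(torch.randn(8, 512))            # random features stand in for real clips
loss = sum(nn.functional.cross_entropy(logits[s], torch.randint(0, 4, (8,)))
           for s in logits)                    # one cross-entropy term per head
print(float(loss))
```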

Keywords: computer vision, engagement prediction, deep learning, multi-level classification

Procedia PDF Downloads 114
322 Human Action Retrieval System Using Features Weight Updating Based Relevance Feedback Approach

Authors: Munaf Rashid

Abstract:

For content-based human action retrieval systems, search accuracy is often inferior for two reasons: (1) global information pertaining to videos is totally ignored, and only low-level motion descriptors are considered significant features for matching the similarity between query and database videos; and (2) the semantic gap between the high-level user concept and low-level visual features. Hence, in this paper, we propose a method that addresses these two issues and, in doing so, contributes in two ways. Firstly, we introduce a method that uses both global and local information in one framework for the action retrieval task. Secondly, to minimize the semantic gap, the user's concept is involved by incorporating a feature-weight-updating (FWU) relevance feedback (RF) approach. We use statistical characteristics to dynamically update the weights of the feature descriptors so that after every RF iteration the feature space is modified accordingly. For testing and validation purposes, two human action recognition datasets have been utilized, namely Weizmann and UCF. Results show that even with a number of visual challenges the proposed approach performs well.
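
A minimal sketch of one feature-weight-updating iteration under common RF assumptions: dimensions that vary little across the videos the user marked relevant receive larger weights, and the database is re-ranked by a weighted distance. The exact update rule and descriptors used in the paper may differ.

```python
import numpy as np

def update_weights(relevant_feats, eps=1e-6):
    """relevant_feats: (n_relevant, n_dims) descriptors of user-approved videos."""
    sigma = relevant_feats.std(axis=0)
    w = 1.0 / (sigma + eps)          # low variance across relevant items -> important dimension
    return w / w.sum()

def weighted_rank(query, database, w):
    """Re-rank database descriptors by weighted Euclidean distance to the query."""
    d = np.sqrt((((database - query) ** 2) * w).sum(axis=1))
    return np.argsort(d)             # indices of database videos, best match first
```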

Keywords: relevance feedback (RF), action retrieval, semantic gap, feature descriptor, codebook

Procedia PDF Downloads 472
321 The Efficacy of Video Education to Improve Treatment or Illness-Related Knowledge in Patients with a Long-Term Physical Health Condition: A Systematic Review

Authors: Megan Glyde, Louise Dye, David Keane, Ed Sutherland

Abstract:

Background: Typically, patient education is provided either verbally, in the form of written material, or with a multimedia-based tool such as videos, CD-ROMs, DVDs, or the internet. Providing patients with effective educational tools can help to meet their information needs and subsequently empower these patients and allow them to participate in medical decision-making. Video education may have some distinct advantages compared to other modalities. For instance, whilst eHealth is emerging as a promising modality of patient education, an individual's ability to access, read, and navigate through websites or online modules varies dramatically in relation to health literacy levels. Literacy levels may also limit patients' ability to understand written education, whereas video education can be watched passively by patients and does not require high literacy skills. Other benefits of video education include that the same information is provided consistently to each patient, it can be a cost-effective method after the initial cost of producing the video, patients can choose to watch the videos by themselves or in the presence of others, and they can pause and re-watch videos to suit their needs. Health information videos are not only viewed by patients in formal educational sessions but are increasingly being viewed on websites such as YouTube. Whilst there is a lot of anecdotal and sometimes misleading information on YouTube, videos from government organisations and professional associations contain trustworthy and high-quality information and could enable YouTube to become a powerful information dissemination platform for patients and carers. This systematic review will examine the efficacy of video education to improve treatment- or illness-related knowledge in patients with various long-term conditions, in comparison to other modalities of education. Methods: Only studies which match the following criteria will be included: participants have a long-term physical health condition, video education aims to improve treatment- or illness-related knowledge and is tested in isolation, and the study is a randomised controlled trial. Knowledge will be the primary outcome measure, with modality preference, anxiety, and behaviour change as secondary measures. The searches have been conducted in the following databases: OVID Medline, OVID PsycInfo, OVID Embase, CENTRAL, and ProQuest, and hand searching for relevant published and unpublished studies has also been carried out. Screening and data extraction will be conducted independently by two researchers. Included studies will be assessed for their risk of bias in accordance with Cochrane guidelines, and heterogeneity will also be assessed before deciding whether a meta-analysis is appropriate. Results and Conclusions: Appropriate synthesis of the studies in relation to each outcome measure will be reported, along with the conclusions and implications.

Keywords: long-term condition, patient education, systematic review, video

Procedia PDF Downloads 113