Search results for: live video streaming system
19300 Lecture Video Indexing and Retrieval Using Topic Keywords
Authors: B. J. Sandesh, Saurabha Jirgi, S. Vidya, Prakash Eljer, Gowri Srinivasa
Abstract:
In this paper, we propose a framework to help users search and retrieve the portions of a lecture video that interest them. This is achieved by temporally segmenting and indexing the lecture video using topic keywords. For this purpose, we use text transcribed from the video and documents relevant to the video topic extracted from the web. The keywords for indexing are found by applying the non-negative matrix factorization (NMF) topic modeling technique to the web documents. Our proposed technique first creates indices on the transcribed documents using the topic keywords, and these are mapped to the video to find the start and end times of the portions of the video covering a particular topic. This time information is stored in the index table along with the topic keyword, which is used to retrieve the specific portions of the video for a user's query.
Keywords: video indexing and retrieval, lecture videos, content based video search, multimodal indexing
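Below is a minimal sketch of the keyword-extraction and indexing idea described in this abstract, assuming scikit-learn is available. The example documents, transcript segments, topic count, and the substring match between keywords and transcript text are illustrative assumptions, not details from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Toy web documents relevant to the lecture topic (placeholders).
web_docs = [
    "gradient descent minimizes a loss function by following its gradient",
    "convolutional neural networks apply filters to images for recognition",
    "backpropagation computes gradients of the loss with respect to weights",
]
# (start_sec, end_sec, transcribed text) for each transcript segment -- toy data.
transcript = [
    (0, 120, "today we introduce gradient descent and the loss function"),
    (120, 300, "next we look at convolutional filters applied to images"),
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(web_docs)
nmf = NMF(n_components=2, random_state=0).fit(tfidf)
terms = vectorizer.get_feature_names_out()

# Top keywords per NMF topic.
topic_keywords = [
    [terms[i] for i in topic.argsort()[-3:][::-1]] for topic in nmf.components_
]

# Index table: topic keyword -> list of (start, end) video intervals.
index = {}
for keywords in topic_keywords:
    for kw in keywords:
        for start, end, text in transcript:
            if kw in text:
                index.setdefault(kw, []).append((start, end))

print(index)
```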
Procedia PDF Downloads 251
19299 Flow Conservation Framework for Monitoring Software Defined Networks
Authors: Jesús Antonio Puente Fernández, Luis Javier Garcia Villalba
Abstract:
New trends in streaming video, such as series and films, place a high demand on network resources. This creates a serious problem for traditional IP networks because of the rigidity of their architecture. Software Defined Networking (SDN) is a new network architecture concept intended to be more flexible and to simplify network management compared with existing architectures. These aspects are possible thanks to the separation of the control plane (controller) from the data plane (switches). Taking advantage of this separation of control, it is easy to deploy a monitoring tool that is independent of device vendors, whereas existing tools depend on the installation of specialized and expensive hardware. In this paper, we propose a framework that optimizes traffic monitoring in SDN networks by decreasing the number of monitoring queries, improving network traffic and reducing overhead. The experiments performed (with and without the optimization) using video streaming delivery between two hosts demonstrate the feasibility of our monitoring proposal.
Keywords: optimization, monitoring, software defined networking, statistics, query
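The abstract does not specify how the number of monitoring queries is reduced, so the following is only a hedged toy sketch of one common idea: adapt the polling interval to traffic stability so the controller is queried less often when nothing changes. The statistics source is a stub, and the thresholds are invented.

```python
import random

def read_port_bytes():
    """Stub for a controller statistics query (e.g. an OpenFlow port-stats request);
    the real framework would ask the SDN controller for switch counters."""
    return random.randint(9_000, 11_000)

# Adaptive polling: widen the interval while traffic is stable, shrink it on change,
# so fewer monitoring queries reach the controller. Thresholds are illustrative.
interval, last = 1.0, read_port_bytes()
for tick in range(10):
    current = read_port_bytes()
    change = abs(current - last) / max(last, 1)
    if change < 0.05:              # stable traffic: query less often
        interval = min(interval * 2, 30.0)
    else:                          # traffic changed: query more often
        interval = max(interval / 2, 0.5)
    print(f"tick={tick} bytes={current} change={change:.2%} next poll in {interval:.1f}s")
    last = current
```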
Procedia PDF Downloads 333
19298 Distributed Processing for Content Based Lecture Video Retrieval on Hadoop Framework
Authors: U. S. N. Raju, Kothuri Sai Kiran, Meena G. Kamal, Vinay Nikhil Pabba, Suresh Kanaparthi
Abstract:
There is a huge amount of lecture video data available for public use, and many more lecture videos are created and uploaded every day. Searching for videos on required topics in this huge database is a challenging task; therefore, an efficient method for video retrieval is needed. An approach for automated video indexing and video search in large lecture video archives is presented. Because the amount of video lecture data is huge, it is very inefficient to do the processing in a centralized computation framework; hence, the Hadoop framework for distributed computing on big video data is used. The first step in the process is automatic video segmentation and key-frame detection, which offers a visual guideline for navigating the video content. In the next step, we extract textual metadata by applying video Optical Character Recognition (OCR) technology to the key-frames. The OCR output and the detected slide text line types are used for keyword extraction, by which both video- and segment-level keywords are extracted for content-based video browsing and search. The performance of the indexing process can be improved for a large database by using distributed computing on the Hadoop framework.
Keywords: video lectures, big video data, video retrieval, hadoop
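As a rough illustration of the distributed keyword-extraction step, the sketch below follows the Hadoop Streaming convention (a mapper and a reducer reading stdin and writing stdout). The input format of one OCR'd key-frame text per line, tagged with a video id, is an assumption; the paper's actual pipeline is not specified at this level of detail.

```python
# Hadoop Streaming-style word counting over OCR text, runnable standalone:
#   python job.py < ocr_lines.txt | sort | python job.py reduce
import sys

def mapper():
    """Emit (video_id:keyword, 1) pairs from lines of 'video_id<TAB>ocr_text'."""
    for line in sys.stdin:
        video_id, _, text = line.rstrip("\n").partition("\t")
        for word in text.lower().split():
            if len(word) > 3:                      # crude keyword filter
                print(f"{video_id}:{word}\t1")

def reducer():
    """Sum counts for each (video_id, keyword) key; input is sorted by key."""
    current, total = None, 0
    for line in sys.stdin:
        key, _, count = line.rstrip("\n").partition("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = key, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    reducer() if sys.argv[1:] == ["reduce"] else mapper()
```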
Procedia PDF Downloads 537
19297 A 5G Architecture Based to Dynamic Vehicular Clustering Enhancing VoD Services Over Vehicular Ad hoc Networks
Authors: Lamaa Sellami, Bechir Alaya
Abstract:
Nowadays, video-on-demand (VoD) applications are becoming one of the main trends driving vehicular network usage. In this paper, considering the unpredictable vehicle density, the unexpected acceleration or deceleration of the different cars in the vehicular traffic load, and the limited radio range of the employed communication scheme, we introduce the Dynamic Vehicular Clustering (DVC) algorithm as a new scheme for video streaming systems over VANET. The proposed algorithm takes advantage of the concept of small cells and the introduction of wireless backhauls, inspired by the features and performance of the Long Term Evolution (LTE)-Advanced network. The proposed clustering algorithm considers multiple characteristics, such as the vehicle's position and acceleration, to reduce latency and packet loss. Each cluster is therefore treated as a small cell containing vehicular nodes and an access point that is elected according to particular specifications.
Keywords: video-on-demand, vehicular ad-hoc network, mobility, vehicular traffic load, small cell, wireless backhaul, LTE-advanced, latency, packet loss
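A hedged toy sketch of the clustering idea follows: vehicles are grouped when they are within radio range of the cluster head candidate and drive at similar speeds, and the member closest to the cluster centroid is elected as the access point. The thresholds, the one-dimensional road model, and the election rule are illustrative assumptions, not the paper's DVC algorithm.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: int
    position: float      # metres along the road
    velocity: float      # m/s (acceleration could be added the same way)

RADIO_RANGE = 300.0      # assumed communication range in metres
MAX_DV = 5.0             # assumed max velocity difference within a cluster

def cluster(vehicles):
    """Greedy 1-D clustering: a vehicle joins the current cluster while it stays
    within radio range of the first member and drives at a similar speed."""
    clusters, current = [], []
    for v in sorted(vehicles, key=lambda v: v.position):
        if current and (v.position - current[0].position > RADIO_RANGE
                        or abs(v.velocity - current[0].velocity) > MAX_DV):
            clusters.append(current)
            current = []
        current.append(v)
    if current:
        clusters.append(current)
    return clusters

vehicles = [Vehicle(1, 0, 25), Vehicle(2, 120, 27), Vehicle(3, 600, 30), Vehicle(4, 650, 31)]
for c in cluster(vehicles):
    centroid = sum(m.position for m in c) / len(c)
    head = min(c, key=lambda v: abs(v.position - centroid))
    print("cluster", [v.vid for v in c], "elected access point:", head.vid)
```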
Procedia PDF Downloads 142
19296 Video Stabilization Using Feature Point Matching
Authors: Shamsundar Kulkarni
Abstract:
Video captured by non-professionals often suffers from unanticipated effects such as image distortion and image blurring, so many researchers study these drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos: a stable output video is obtained without the jitter caused by shaking a handheld camera during recording. First, salient points are identified in each frame of the input video and processed, followed by an optimization step that stabilizes the video; the optimization also accounts for the quality of the stabilization. The method has shown good results in terms of stabilization and removed distortion from output videos recorded under different conditions.
Keywords: video stabilization, point feature matching, salient points, image quality measurement
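A condensed sketch of feature-point-based stabilization is shown below, assuming OpenCV is available and a file named input.mp4 exists: salient points are tracked between consecutive frames, a rigid transform is estimated per frame, and the accumulated camera trajectory is smoothed; warping each frame by the difference between the smoothed and raw trajectories removes the jitter. The parameter values are illustrative.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
transforms = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Salient points in the previous frame, tracked into the current frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=30)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_prev, good_next = pts[status.flatten() == 1], nxt[status.flatten() == 1]
    # Per-frame rigid motion: translation (dx, dy) and rotation da.
    m, _ = cv2.estimateAffinePartial2D(good_prev, good_next)
    transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
    prev_gray = gray

# Smooth the cumulative camera trajectory with a moving average; warping each
# frame by (smoothed - raw) trajectory removes the jitter.
trajectory = np.cumsum(np.array(transforms), axis=0)
kernel = np.ones(15) / 15
smoothed = np.column_stack([np.convolve(trajectory[:, i], kernel, mode="same") for i in range(3)])
print("per-frame jitter correction (dx, dy, da):", (smoothed - trajectory)[:5])
```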
Procedia PDF Downloads 313
19295 Structural Analysis on the Composition of Video Game Virtual Spaces
Authors: Qin Luofeng, Shen Siqi
Abstract:
In the 58 years since the first video game came into being, the video game industry has gone through an explosive evolution. Video games exert great influence on society and have become, to some extent, a reflection of public life. Video game virtual spaces are where activities take place, much like real spaces, which is why some architects pay attention to video games. However, compared with research on the appearance of games, there is a lack of comprehensive theory on the construction of video game virtual spaces. The research method of this paper is first to collect literature and conduct theoretical research on virtual space in video games, and then to draw analogies with views on spatial phenomena from literary and film theory. Finally, this paper proposes a three-layer framework for the construction of video game virtual spaces: algorithmic space, narrative space, and player space, which correspond to the exterior, expressive, and affective parts of the game space. We also illustrate each sub-space with numerous instances of published video games, hoping this work can promote the interactive development of video games and architecture.
Keywords: video game, virtual space, narrativity, social space, emotional connection
Procedia PDF Downloads 270
19294 Key Frame Based Video Summarization via Dependency Optimization
Authors: Janya Sainui
Abstract:
With the rapid growth of digital video and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, is necessary. Key frame extraction is one mechanism for generating a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a video summarization method that provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content while minimizing the redundancy among the selected key frames. The proposed key frame extraction algorithm finds key frames by solving an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces a video summary with better coverage of the entire video content and less redundancy among key frames compared with state-of-the-art approaches.
Keywords: video summarization, key frame extraction, dependency measure, quadratic mutual information
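The paper's objective is based on quadratic mutual information; the sketch below deliberately swaps in a much simpler histogram-intersection dependency proxy and a greedy coverage-minus-redundancy selection, only to illustrate the optimization framing of key-frame selection. All data and thresholds are toy values.

```python
import numpy as np

def frame_features(frames, bins=16):
    """Normalized grayscale histograms as cheap frame descriptors (a stand-in for
    the paper's features and its quadratic mutual information measure)."""
    feats = []
    for f in frames:
        h = np.histogram(f, bins=bins, range=(0, 255))[0].astype(float)
        feats.append(h / h.sum())
    return np.array(feats)

def similarity(a, b):
    return float(np.minimum(a, b).sum())          # histogram intersection in [0, 1]

def select_key_frames(frames, k=3):
    feats = frame_features(frames)
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(frames)):
            if i in selected:
                continue
            coverage = np.mean([similarity(feats[i], f) for f in feats])
            redundancy = max((similarity(feats[i], feats[j]) for j in selected), default=0.0)
            score = coverage - redundancy          # maximize coverage, minimize redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return sorted(selected)

# Toy "video": two visually different scenes of ten 64x64 grayscale frames each.
rng = np.random.default_rng(0)
frames = np.concatenate([rng.integers(0, 100, (10, 64, 64)),
                         rng.integers(150, 256, (10, 64, 64))])
print("selected key frames:", select_key_frames(frames, k=3))
```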
Procedia PDF Downloads 267
19293 Design and Implementation of Bluetooth Controlled Autonomous Vehicle
Authors: Amanuel Berhanu Kesamo
Abstract:
This paper presents both the circuit simulation and the hardware implementation of a robot vehicle that can either be controlled manually via Bluetooth with video streaming or navigate autonomously to a target point while avoiding obstacles. In manual mode, the user controls the mobile robot using a C# Windows Forms application interfaced via Bluetooth. The camera mounted on the robot captures and sends real-time video to the user. In autonomous mode, the robot plans the shortest path to the target point while avoiding obstacles along the way. An ultrasonic sensor is used to sense obstacles in the environment, and an efficient path planning algorithm is implemented to navigate the robot along the optimal route.
Keywords: Arduino Uno, autonomous, Bluetooth module, path planning, remote controlled robot, ultrasonic sensor
Procedia PDF Downloads 143
19292 Potential Usefulness of Video Lectures as a Tool to Improve Synchronous and Asynchronous the Online Education
Authors: Omer Shujat Bhatti, Afshan Huma
Abstract:
Online educational systems have long been considered a great opportunity for distance learning. In the recent days of the COVID-19 pandemic, they enabled the continuation of educational activities at all levels of education, from primary school to top-level universities. One of the key elements supporting the online educational system is video lectures. The current research explored the usefulness of video lectures delivered to technical students at the master's level, focusing on MSc Sustainable Environmental Design students who have diverse backgrounds in the formal educational system. These students were unable to cope right away with the online system and faced communication and comprehension issues during lecture sessions due to internet connectivity problems. The researcher prepared video lectures for the respective subjects and provided them to the students via a YouTube channel and subject-based WhatsApp groups. Later, students were asked about the usefulness of the lectures for a better understanding of the subject and an overall enhanced learning experience. More than 80% of the students appreciated the effort and requested that it become part of the overall system. Data collection was done using an online questionnaire, whose purpose was explained to the students beforehand. It was concluded that video lectures should be considered an integral part of lecture sessions and must be provided prior to the lecture session, ensuring a better quality of delivery. It was also recommended that the existing system be upgraded to support the availability of these video lectures through the portal, and that teacher training be provided to help develop quality video content covering the content and courses taught.
Keywords: video lectures, online distance education, synchronous instruction, asynchronous communication
Procedia PDF Downloads 117
19291 Using Variation Theory in a Design-based Approach to Improve Learning Outcomes of Teachers Use of Video and Live Experiments in Swedish Upper Secondary School
Authors: Andreas Johansson
Abstract:
Conceptual understanding needs to be grounded in observation of physical phenomena, experiences or metaphors. Observation of physical phenomena using demonstration experiments has a long tradition within physics education, and students need to develop mental models to relate the observations to concepts from scientific theories. This study investigates how live and video experiments involving an acoustic trap to visualize particle-field interaction, field properties and particle properties can help develop students' mental models, and how the two formats can be used differently to realize their potential as teaching tools. Initially, they were treated as analogues and the lesson designs were kept identical. With a design-based approach, the experimental and video designs, as well as best practices for each teaching tool, were then developed in iterations. Variation theory was used as a theoretical framework to analyze the planned and realized patterns of variation and invariance, in order to explain learning outcomes as measured by a pre-post test consisting of conceptual multiple-choice questions inspired by the Force Concept Inventory and the Force and Motion Conceptual Evaluation. Interviews with students and teachers were used to inform the design of experiments and videos in each iteration. The lesson designs and the live and video experiments have been developed to help teachers improve student learning and make school physics more interesting, by involving experimental setups that are usually out of reach and by bridging the gap between what happens in classrooms and in science research. As students' conceptual knowledge grows, so does their interest in physics; the aim is to increase their chances of pursuing careers in science, technology, engineering or mathematics.
Keywords: acoustic trap, design-based research, experiments, variation theory
Procedia PDF Downloads 84
19290 Video-Based System for Support of Robot-Enhanced Gait Rehabilitation of Stroke Patients
Authors: Matjaž Divjak, Simon Zelič, Aleš Holobar
Abstract:
We present a dedicated video-based monitoring system for quantification of the patient's attention to visual feedback during robot-assisted gait rehabilitation. Two different approaches for eye gaze and head pose tracking are tested and compared. Several metrics for assessment of the patient's attention are also presented. Experimental results with healthy volunteers demonstrate that unobtrusive video-based gaze tracking during robot-assisted gait rehabilitation is possible and is sufficiently robust for quantification of the patient's attention and assessment of compliance with the rehabilitation therapy.
Keywords: video-based attention monitoring, gaze estimation, stroke rehabilitation, user compliance
Procedia PDF Downloads 426
19289 Free to Select vTuber Avatar eLearning Video for University Ray Tracing Course
Authors: Rex Hsieh, Kosei Yamamura, Satoshi Cho, Hisashi Sato
Abstract:
This project took place in the fall semester of 2019, from September 2019 to February 2020. It improves upon the design of a previous vTuber-based eLearning video system by correcting criticisms from students and enhancing the positive aspects of the previous system. The transformed audio, which had proven ineffective in previous experiments, was not used in this experiment. The result is a set of videos featuring three avatars covering different ray tracing subjects, released weekly. Students are free to pick which videos they want to watch and can re-watch any videos they want. The students' subjective impressions of each video are recorded and analysed to help further improve the system.
Keywords: vTuber, eLearning, Ray Tracing, Avatar
Procedia PDF Downloads 188
19288 Analysis of Q-Learning on Artificial Neural Networks for Robot Control Using Live Video Feed
Authors: Nihal Murali, Kunal Gupta, Surekha Bhanot
Abstract:
Training of artificial neural networks (ANNs) using reinforcement learning (RL) techniques is widely discussed in the robot learning literature. The high model complexity of ANNs, along with the model-free nature of RL algorithms, provides a desirable combination for many robotics applications. There is a huge need for algorithms that generalize using raw sensory inputs, such as vision, without any hand-engineered features or domain heuristics. In this paper, the standard control problem of a line-following robot is used as a test-bed, and an ANN controller for the robot is trained on images from a live video feed using Q-learning. A virtual agent was first trained in a simulation environment and then deployed onto the robot's hardware. The robot successfully learns to traverse a wide range of curves and displays excellent generalization ability. Qualitative analysis of the evolution of policies, performance and weights of the network provides insight into the nature and convergence of the learning algorithm.
Keywords: artificial neural networks, q-learning, reinforcement learning, robot learning
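As a hedged, simplified illustration of the learning loop, the sketch below runs tabular Q-learning on a toy line-following environment in which the observation is the discretized offset of the line from the image centre. The paper trains an ANN on raw camera images; the tabular agent and the simulated environment here are stand-ins.

```python
import numpy as np

# Toy stand-in environment: the "camera" observation is the line's horizontal
# offset from image centre, discretized into 7 bins; actions steer the robot.
N_STATES, ACTIONS = 7, (-1, 0, +1)           # steer left / straight / right
rng = np.random.default_rng(0)

def step(state, action):
    drift = rng.integers(-1, 2)              # random road curvature
    nxt = int(np.clip(state + action + drift, 0, N_STATES - 1))
    reward = -abs(nxt - N_STATES // 2)       # penalty for leaving the centre
    return nxt, reward

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1            # learning rate, discount, exploration
state = N_STATES // 2
for t in range(5000):
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[state].argmax())
    nxt, r = step(state, ACTIONS[a])
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])   # Q-learning update
    state = nxt

print("greedy steering per offset bin:", [ACTIONS[int(a)] for a in Q.argmax(axis=1)])
```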
Procedia PDF Downloads 373
19287 Resource Allocation Scheme For IEEE802.16 Networks
Authors: Elmabruk Laias
Abstract:
The IEEE 802.16 standard provides QoS (Quality of Service) for applications such as Voice over IP, video streaming and high-bandwidth file transfer. In an IEEE 802.16 broadband wireless access system, a WiMAX TDD frame contains one downlink subframe and one uplink subframe. The capacity allocated to each subframe is a system parameter that should be determined based on the expected traffic conditions, so a proper resource allocation scheme for packet transmission is imperative. In this paper, we present a new resource allocation scheme, called additional bandwidth yielding (ABY), to improve the transmission efficiency of an IEEE 802.16-based network. Our proposed scheme can be adopted alongside existing scheduling algorithms and the multi-priority scheme without any change. The experimental results show that with ABY, packet queuing delay can be significantly improved, especially for the service flows of higher-priority classes.
Keywords: IEEE 802.16, WiMAX, OFDMA, resource allocation, uplink-downlink mapping
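The abstract does not detail how ABY yields bandwidth, so the following is only a generic sketch of the underlying intuition: slots left unused by higher-priority service classes are handed down to lower-priority classes within one subframe. The class names follow the usual IEEE 802.16 scheduling classes, and all numbers are invented.

```python
# Slots requested per priority class (UGS highest ... BE lowest) -- toy numbers.
requests = {"UGS": 20, "rtPS": 30, "nrtPS": 25, "BE": 40}
subframe_slots = 80

def allocate(requests, capacity):
    """Strict-priority allocation; any slots a class leaves unused are yielded
    to the next lower-priority class (the intuition behind bandwidth yielding)."""
    allocation, remaining = {}, capacity
    for cls in ["UGS", "rtPS", "nrtPS", "BE"]:
        grant = min(requests.get(cls, 0), remaining)
        allocation[cls] = grant
        remaining -= grant
    return allocation, remaining

alloc, leftover = allocate(requests, subframe_slots)
print("granted slots:", alloc, "| unused slots:", leftover)
```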
Procedia PDF Downloads 476
19286 Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure using High-Viscous Liquids
Authors: Ayalew Yimam Ali
Abstract:
The Y-shaped microchannel system is used to mix different fluids of low or high viscosity, and the laminar flow obtained with high-viscosity water-glycerol fluids makes mixing at the entrance Y-junction region a challenging issue. Acoustic streaming (AS), a time-averaged, steady, second-order flow phenomenon that can produce rolling motion in the microchannel when a low-frequency acoustic transducer induces an acoustic wave in the flow field, is a promising strategy to enhance diffusive mass transfer and mixing performance in laminar flow. In this study, the 3D trapezoidal structure was manufactured with advanced CNC cutting tools to produce molds of the trapezoidal structure, with 3D sharp-edge tip angles of 30° and a 0.3 mm spine sharp-edge tip depth, from PMMA (polymethylmethacrylate) glass, and the microchannel was fabricated using PDMS (polydimethylsiloxane); the structure is grown longitudinally on the top surface of the Y-junction mixing region to visualize the 3D rolling steady acoustic streaming and to evaluate mixing performance with high-viscosity miscible fluids. The 3D acoustic streaming flow patterns and mixing enhancement were investigated using the micro-particle image velocimetry (μPIV) technique for different spine depth lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes. The velocity and vorticity fields show that a pair of 3D counter-rotating streaming vortices is created around the trapezoidal spine structure, with vorticity maps up to 8 times higher than in the case without acoustic streaming in the Y-junction with the high-viscosity water-glycerol mixture. The mixing experiments were performed using a fluorescent green dye solution with de-ionized water on one inlet side and de-ionized water-glycerol mixtures with different mass-weight percentage ratios on the other inlet side of the Y-channel, and performance was evaluated as the degree of mixing at different amplitudes, flow rates, frequencies, and spine sharp-tip edge angles using the grayscale pixel intensity in MATLAB. The degree of mixing (M) was found to improve significantly, from 67.42% without acoustic streaming to 96.8% with acoustic streaming, for a 0.0986 μl/min flow rate, 12 kHz frequency and 40 V oscillation amplitude at y = 2.26 mm. The results suggest that a new 3D steady streaming rolling motion at a high volume flow rate is created around the entrance junction mixing region, which promotes the mixing of two similar high-viscosity fluids inside the microchannel that cannot be mixed by laminar flow alone.
Keywords: nano fabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement
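The degree of mixing is evaluated from grayscale pixel intensity; a commonly used definition (assumed here, not necessarily the authors' exact formula) is M = 1 - sigma/sigma0, the normalized intensity standard deviation across a cross-section relative to the unmixed case, as sketched below with synthetic intensity profiles.

```python
import numpy as np

def degree_of_mixing(intensity, unmixed_intensity):
    """M = 1 - sigma / sigma_0: 0 for fully segregated, 1 for perfectly mixed.
    This is a widely used mixing index, assumed here as a stand-in for the
    authors' exact grayscale-based evaluation."""
    sigma = np.std(intensity / intensity.mean())
    sigma0 = np.std(unmixed_intensity / unmixed_intensity.mean())
    return 1.0 - sigma / sigma0

# Toy cross-sections of pixel intensities (dye occupies one half at the inlet).
unmixed = np.concatenate([np.full(50, 200.0), np.full(50, 20.0)])
partially_mixed = 110.0 + 30.0 * np.sin(np.linspace(0, 2 * np.pi, 100))
print(f"degree of mixing M = {degree_of_mixing(partially_mixed, unmixed):.2f}")
```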
Procedia PDF Downloads 34
19285 Video Shot Detection and Key Frame Extraction Using Faber-Shauder DWT and SVD
Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi
Abstract:
Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by the dominant blocks; these dominant blocks are located on the contours and their nearby textures. When the video frames change noticeably, their dominant blocks change, and a key frame can then be extracted. The dominant blocks of every frame are computed, and then feature vectors are extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate the ranks of sliding windows of those matrices. Finally, the computed ranks are traced, and from them the key frames of a video are extracted. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
Keywords: FSDWT, key frame extraction, shot detection, singular value decomposition
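A small sketch of the sliding-window SVD step follows: the effective rank of each window of frame feature vectors is computed from its singular values, and a jump in rank marks a candidate shot boundary. Random feature vectors stand in for the paper's FSDWT dominant-block features, and the window size and rank threshold are assumptions.

```python
import numpy as np

def window_ranks(feature_matrix, window=5, tol=0.05):
    """Effective rank of each sliding window of column feature vectors; a jump in
    rank suggests new visual content, i.e. a candidate shot boundary / key frame."""
    n_frames = feature_matrix.shape[1]
    ranks = []
    for start in range(n_frames - window + 1):
        block = feature_matrix[:, start:start + window]
        s = np.linalg.svd(block, compute_uv=False)
        ranks.append(int(np.sum(s > tol * s[0])))   # relative singular-value threshold
    return ranks

# Toy feature matrix: 32-dim descriptors for 12 frames, with a shot change at frame 6.
rng = np.random.default_rng(1)
shot_a = np.tile(rng.normal(size=(32, 1)), (1, 6)) + 0.01 * rng.normal(size=(32, 6))
shot_b = np.tile(rng.normal(size=(32, 1)), (1, 6)) + 0.01 * rng.normal(size=(32, 6))
features = np.hstack([shot_a, shot_b])
print("sliding-window ranks:", window_ranks(features))
```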
Procedia PDF Downloads 398
19284 Tackling the Digital Divide: Enhancing Video Consultation Access for Digital Illiterate Patients in the Hospital
Authors: Wieke Ellen Bouwes
Abstract:
This study aims to unravel which factors enhance the accessibility of video consultations (VCs) for patients with low digital literacy. Thirteen in-depth interviews were held with patients, hospital employees, eHealth experts, and digital support organizations. Patients with low digital literacy received in-home support during real-time video consultations and were observed during the set-up of these consultations. Key findings highlight the importance of patient acceptance, emphasizing the benefits of video consultations and avoiding standardized courses. The lack of a uniform video consultation system across healthcare providers poses a barrier. Healthcare practitioners' familiarity with support organizations, which help patients use digital tools, enhances accessibility. Moreover, considerations regarding the Dutch General Data Protection Regulation (GDPR) influence the support patients receive, and provider readiness to use video consultations influences patient access. Further, alignment between learning styles and support methods seems to determine patients' ability to learn how to use video consultations. Future research could delve into tailored learning styles and technological solutions for remote access to further explore the effectiveness of learning methods.
Keywords: video consultations, digital literacy skills, effectiveness of support, intra- and inter-organizational relationships, patient acceptance of video consultations
Procedia PDF Downloads 74
19283 Automatic Motion Trajectory Analysis for Dual Human Interaction Using Video Sequences
Authors: Yuan-Hsiang Chang, Pin-Chi Lin, Li-Der Jeng
Abstract:
Advances in image and video processing techniques have enabled the development of intelligent video surveillance systems. This study aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. Adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during interaction, a Kalman filter was used to retain a complete trajectory for each human object. Finally, motion trajectory analysis was developed to distinguish between interaction and non-interaction events based on derivatives of the trajectories related to the speed of the moving objects. Using a database of 60 video sequences, our system achieved a classification accuracy of 80% for interaction events and 95% for non-interaction events. In summary, we have explored the idea of a system for the automatic classification of interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated into an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatching, fighting, etc.).
Keywords: motion detection, motion tracking, trajectory analysis, video surveillance
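A hedged sketch of the occlusion-bridging step is given below: a constant-velocity Kalman filter keeps predicting the position when the detector returns no measurement, and the speed derived from the resulting trajectory feeds a simple threshold test. The filter parameters, the toy measurements, and the speed threshold are illustrative, not the paper's tuned values.

```python
import numpy as np

# Constant-velocity Kalman filter over (x, y): keeps predicting through occlusion.
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q, R = np.eye(4) * 0.01, np.eye(2) * 1.0

def track(measurements):
    x, P, out = np.zeros(4), np.eye(4) * 100.0, []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                      # predict
        if z is not None:                                   # update only when detected
            y = np.asarray(z, float) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x, P = x + K @ y, (np.eye(4) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

# None marks frames where the detector lost the person (occlusion during interaction).
meas = [(0, 0), (1, 0.1), (2, 0.2), None, None, (5, 0.5), (6, 0.4)]
traj = track(meas)
speeds = np.linalg.norm(np.diff(traj[:, :2], axis=0), axis=1) / dt
print("per-frame speeds along the retained trajectory:", np.round(speeds, 2))
print("slowdown below toy threshold (possible interaction):", bool((speeds < 0.5).any()))
```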
Procedia PDF Downloads 548
19282 Mixing Enhancement with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure Micromixer Using Different Mixing Fluids
Authors: Ayalew Yimam Ali
Abstract:
The T-shaped microchannel is used to mix miscible or immiscible fluids with different viscosities. However, mixing at the entrance of the T-junction microchannel can be difficult because of the micro-scale laminar flow obtained with two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. The newly developed 3D trapezoidal triangular structure spine used in this study was created using sophisticated CNC cutting tools to produce a microchannel mold with the spine along the T-junction longitudinal mixing region. The molds for the 3D trapezoidal structure, with 3D sharp-edge tip angles of 30° and a 0.3 mm trapezoidal triangular sharp-edge tip depth, were machined from PMMA (polymethylmethacrylate) glass with an advanced CNC machine, and the channel was manufactured in PDMS (polydimethylsiloxane), grown longitudinally on the top surface of the junction microchannel using soft-lithography nanofabrication strategies. Flow visualization of the 3D rolling steady acoustic streaming and the mixing enhancement with high-viscosity miscible fluids was carried out with micro-particle image velocimetry (μPIV) for different trapezoidal triangular structure longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes. The streaming velocity and vorticity fields show vorticity maps up to 16 times higher than in the absence of acoustic streaming, and mixing performance was evaluated at various amplitudes, flow rates, and frequencies using the grayscale pixel intensity in MATLAB. Mixing experiments were performed using a fluorescent green dye solution with de-ionized water on one inlet side of the channel and a de-ionized water-glycerol mixture on the other inlet side of the T-channel, and the degree of mixing was found to improve greatly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that mixing is enhanced by the formation of a new, three-dimensional, intense streaming rolling motion at a high volume flow rate around the entrance junction mixing zone with the two miscible high-viscosity fluids, whose transport is otherwise governed by laminar flow.
Keywords: micro fabrication, 3d acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement
Procedia PDF Downloads 22
19281 Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure using High-Viscous Liquids
Authors: Ayalew Yimam Ali
Abstract:
The Y-shaped microchannel is used to mix miscible or immiscible fluids with different viscosities. However, mixing at the entrance of the Y-junction microchannel can be difficult because of the micro-scale laminar flow obtained with two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. The 3D trapezoidal triangular structure spine developed in this study was created using sophisticated CNC cutting tools to produce a microchannel mold with the spine along the Y-junction longitudinal mixing region. The molds for the 3D trapezoidal structure, with 3D sharp-edge tip angles of 30° and a 0.3 mm trapezoidal triangular sharp-edge tip depth, were machined from PMMA (polymethylmethacrylate) glass with an advanced CNC machine, and the channel was manufactured in PDMS (polydimethylsiloxane), grown longitudinally on the top surface of the Y-junction microchannel using soft-lithography nanofabrication strategies. Flow visualization of the 3D rolling steady acoustic streaming and the mixing enhancement with high-viscosity miscible fluids was carried out with micro-particle image velocimetry (μPIV) for different trapezoidal triangular structure longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes. The streaming velocity and vorticity fields show vorticity maps up to 16 times higher than in the absence of acoustic streaming, and mixing performance was evaluated at various amplitudes, flow rates, and frequencies using the grayscale pixel intensity in MATLAB. Mixing experiments were performed using a fluorescent green dye solution with de-ionized water on one inlet side of the channel and a de-ionized water-glycerol mixture on the other inlet side of the Y-channel, and the degree of mixing was found to improve greatly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that mixing is enhanced by the formation of a new, three-dimensional, intense streaming rolling motion at a high volume flow rate around the entrance junction mixing zone with the two miscible high-viscosity fluids, whose transport is otherwise governed by laminar flow.
Keywords: micro fabrication, 3d acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement
Procedia PDF Downloads 23
19280 A Low-Cost Vision-Based Unmanned Aerial System for Extremely Low-Light GPS-Denied Navigation and Thermal Imaging
Authors: Chang Liu, John Nash, Stephen D. Prior
Abstract:
This paper presents the design and implementation details of a complete unmanned aerial system (UAS) based on commercial-off-the-shelf (COTS) components, focusing on safety, security, and search and rescue scenarios in GPS-denied environments. In particular, the aerial platform is capable of semi-autonomously navigating through extremely low-light, GPS-denied indoor environments based on onboard sensors only, including a downward-facing optical flow camera. In addition, a low-cost payload camera system is developed to stream both infrared video and visible-light video to a ground station in real time, for the purpose of detecting signs of life and hidden humans. The total cost of the complete system is estimated to be $1150, and its effectiveness has been tested and validated in practical scenarios.
Keywords: unmanned aerial system, commercial-off-the-shelf, extremely low-light, GPS-denied, optical flow, infrared video
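A minimal sketch of the optical-flow part follows, assuming OpenCV and a recording named flight.mp4: dense Farneback flow from the downward-facing camera is averaged per frame and integrated into a drift estimate. The real platform fuses such estimates with other onboard sensors and converts pixels to metres using altitude, which is omitted here.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("flight.mp4")          # placeholder recording from the nadir camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
position = np.zeros(2)                        # integrated displacement in pixels

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mean_flow = flow.reshape(-1, 2).mean(axis=0)   # average (dx, dy) over the image
    position -= mean_flow                          # camera moves opposite to scene flow
    prev_gray = gray

print("estimated drift since take-off (pixels):", position)
```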
Procedia PDF Downloads 328
19279 Multimodal Employee Attendance Management System
Authors: Khaled Mohammed
Abstract:
This paper presents novel face recognition and identification approaches for the real-time attendance management problem in large companies/factories and government institutions. The proposed system uses the Minimum Ratio (MR) approach for employee identification. Capturing the authentic facial variability from a sequence of video frames is considered for face recognition, and this makes the system robust against variability of facial features. Experimental results indicated an improvement in the performance of the proposed system compared with previous approaches at a rate between 2% and 5%. In addition, it halved the processing time compared with previous techniques such as the Extreme Learning Machine (ELM) and the Multi-Scale Structural Similarity index (MS-SSIM). Finally, it achieved an accuracy of 99%.
Keywords: attendance management system, face detection and recognition, live face recognition, minimum ratio
Procedia PDF Downloads 155
19278 Surveillance Video Summarization Based on Histogram Differencing and Sum Conditional Variance
Authors: Nada Jasim Habeeb, Rana Saad Mohammed, Muntaha Khudair Abbass
Abstract:
This paper presents a surveillance video summarization method aimed at more efficient and faster video summarization. The presented method improves on existing summarization techniques by relying on temporal differencing to extract the most important data from a large video stream. It uses histogram differencing and the sum of conditional variance, which are robust against illumination variations, in order to extract moving objects. The experimental results show that the presented method gives better output compared with temporal-differencing-based summarization techniques.
Keywords: temporal differencing, video summarization, histogram differencing, sum conditional variance
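A small sketch of the histogram-differencing step is shown below: a frame is added to the summary when its grayscale histogram differs strongly from the last kept frame. The synthetic frames, the L1 distance, and the threshold are assumptions made for illustration; the paper additionally uses the sum of conditional variance, which is not reproduced here.

```python
import numpy as np

def histogram(frame, bins=32):
    """Normalized grayscale histogram of one frame."""
    h = np.histogram(frame, bins=bins, range=(0, 255))[0].astype(float)
    return h / h.sum()

def summarize(frames, threshold=0.25):
    """Keep a frame when its histogram differs strongly from the last kept frame."""
    summary = [0]
    ref = histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        h = histogram(frame)
        if np.abs(h - ref).sum() / 2 > threshold:   # L1 histogram difference in [0, 1]
            summary.append(i)
            ref = h
    return summary

rng = np.random.default_rng(0)
static = rng.integers(80, 120, size=(10, 120, 160))     # quiet scene
event = rng.integers(150, 255, size=(5, 120, 160))      # bright moving object enters
frames = np.concatenate([static, event])
print("frames kept for the summary:", summarize(frames))
```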
Procedia PDF Downloads 349
19277 Subjective Quality Assessment for Impaired Videos with Varying Spatial and Temporal Information
Authors: Muhammad Rehan Usman, Muhammad Arslan Usman, Soo Young Shin
Abstract:
The new era of digital communication has brought up many challenges that network operators need to overcome. The high demand for mobile data rates requires improved networks, which is a challenge for operators in terms of maintaining the quality of experience (QoE) for their consumers. In live video transmission, there is a sheer need for live surveillance of the videos in order to maintain the quality of the network. For this purpose, objective algorithms are employed to monitor the quality of the videos transmitted over a network. In order to test these objective algorithms, subjective quality assessment of the streamed videos is required, as the human eye is the best source of perceptual assessment. In this paper, we have conducted subjective evaluation of videos with varying spatial and temporal impairments. These videos were impaired with frame-freezing distortions so that the impact of frame freezing on the quality of experience could be studied. We present subjective Mean Opinion Scores (MOS) for these videos that can be used for fine-tuning objective algorithms for video quality assessment.
Keywords: frame freezing, mean opinion score, objective assessment, subjective evaluation
Procedia PDF Downloads 495
19276 Intrusion Detection Based on Graph Oriented Big Data Analytics
Authors: Ahlem Abid, Farah Jemili
Abstract:
Intrusion detection has been the subject of numerous studies in industry and academia, but cyber security analysts always want greater precision and global threat analysis to secure their systems in cyberspace. To improve intrusion detection systems, visualising security events in the form of graphs and diagrams is important for improving the accuracy of alerts. In this paper, we propose an IDS approach based on cloud computing, big data techniques and a machine learning graph algorithm that can detect different attacks in real time and as early as possible. We use the MAWILab intrusion detection dataset and choose Microsoft Azure as a unified cloud environment onto which the dataset is loaded. We implement the K2 algorithm, a graph-based machine learning algorithm, to classify attacks. Our system showed good performance thanks to the graph-based machine learning algorithm and the Spark Structured Streaming engine.
Keywords: Apache Spark Streaming, Graph, Intrusion detection, k2 algorithm, Machine Learning, MAWILab, Microsoft Azure Cloud
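The sketch below only illustrates the Spark Structured Streaming ingestion side under stated assumptions: flow records arrive as CSV files with a hypothetical schema, and a trivial threshold rule stands in for the K2 graph classifier, which is not reproduced here.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Hypothetical flow-record schema; the real MAWILab field names are assumptions.
schema = StructType([
    StructField("src_ip", StringType()),
    StructField("dst_ip", StringType()),
    StructField("bytes", DoubleType()),
    StructField("label", StringType()),
])

spark = SparkSession.builder.appName("ids-stream-sketch").getOrCreate()

# Watch a directory for new CSV flow records (stand-in for the real ingest pipeline).
flows = spark.readStream.schema(schema).csv("/tmp/flows")

# Toy scoring rule in place of the K2 graph classifier: flag unusually large flows.
alerts = flows.withColumn("alert", F.col("bytes") > 1e6)

query = (alerts.writeStream
         .outputMode("append")
         .format("console")
         .start())
query.awaitTermination()
```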
Procedia PDF Downloads 149
19275 The Driving Force for Taiwan Social Innovation Business Model Transformation: A Case Study of Social Innovation Internet Celebrity Training Project
Authors: Shih-Jie Ma, Jui-Hsu Hsiao, Ming-Ying Hsieh, Shin-Yan Yang, Chun-Han Yeh, Kuo-Chun Su
Abstract:
In Taiwan, social enterprises and non-profit organizations (NPOs) are not familiar with innovative business models such as live streaming. In 2019, a brand new course called the Internet Celebrity Training Project was introduced to them by the Social Innovation Lab. The goal of this paper is to evaluate the effect of this project, to explore the role of new technology (internet live streaming) in business process management (BPM), and to analyze how live stream programs can assist social enterprises in creating new business models. Social innovation, which aims to solve social issues in innovative ways, is one of the most popular topics in the world. The Social Innovation Lab was established in 2017 by the Executive Yuan in Taiwan. Its vision is to exploit technology, innovation and experimental methods to solve social issues, and to maximize the benefits from government investment. The Social Innovation Lab aims to create a platform for both the supply and demand sides of social issues, to let social enterprises and start-ups communicate with each other, and to build an ecosystem in which stakeholders can make a social impact. The Social Innovation Lab keeps helping social enterprises and NPOs to gain better publicity and to enhance their competitiveness by facilitating digital transformation. In this project, the Social Innovation Lab exploited the influence of social media such as YouTube and Facebook, encouraging social enterprises and start-ups to adjust their business models by using social media live streaming, which becomes one of the tools to expand their market and diversify their sales channels. Internet live stream training courses were delivered in different regions of Taiwan in 2019, including Taitung, Taichung, Kaohsiung and Hualien. Through these courses, potential groups and enterprises were cultivated to become so-called internet celebrities. With their concern for social issues in mind, these internet celebrities know how to use social media to make a social impact in different fields, such as aboriginal people, food and agriculture, LOHAS (Lifestyles of Health and Sustainability), environmental protection and senior citizens. Participants in the live stream training courses in Taiwan were selected for in-depth interviews and questionnaire surveys. The results indicate that the digital transformation process of social enterprises and NPOs can succeed by implementing business process reengineering, a significant change made by social innovation internet celebrities. Therefore, this project can be the new driving force facilitating business model transformation in Taiwan.
Keywords: business process management, digital transformation, live stream, social innovation
Procedia PDF Downloads 147
19274 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network
Authors: P. Karthick, K. Mahesh
Abstract:
Video has become an increasingly significant component of our everyday digital communication. With the growth of richer content and higher resolutions, its sheer volume poses serious obstacles to receiving, distributing, compressing, and displaying high-quality video content. In this paper, we propose a first step towards a complete deep video compression model that jointly optimizes all video compression components. The video compression method involves splitting the video into frames, comparing the images using convolutional neural networks (CNN) to remove duplicates, and replacing duplicate images with a single image by recognizing and detecting minute changes using a generative adversarial network (GAN) and recording them with long short-term memory (LSTM). Instead of the complete image, the small changes generated using the GAN are substituted, which helps in frame-level compression. Pixel-wise comparison is performed over the frame using K-nearest neighbours (KNN), clustered with K-means, and singular value decomposition (SVD) is applied to every frame in the video for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec and converted back to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frame rates (FPS), and quality levels demonstrate a significant resampling rate. On average, the result had approximately a 10% deviation in quality and a more than 50% reduction in size compared with the original video.
Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system
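As a small illustration of the SVD step, the sketch below computes a truncated low-rank approximation of each colour channel of a frame and reports a rough storage ratio. The rank, the random test frame, and the storage accounting are illustrative assumptions; the rest of the pipeline (CNN, GAN, LSTM, KNN, K-means) is not reproduced.

```python
import numpy as np

def compress_frame(frame, k=20):
    """Low-rank approximation of each R, G, B channel via truncated SVD,
    keeping only the leading k latent factors per channel."""
    out = np.empty_like(frame, dtype=float)
    for c in range(3):
        U, s, Vt = np.linalg.svd(frame[:, :, c].astype(float), full_matrices=False)
        out[:, :, c] = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)   # toy 240x320 RGB frame
approx = compress_frame(frame, k=20)

# Values stored per channel with rank k: U (rows*k) + Vt (k*cols) + s (k).
stored = 20 * (240 + 320 + 1) * 3
print("compression ratio vs raw pixels:", round(240 * 320 * 3 / stored, 2))
```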
Procedia PDF Downloads 188
19273 Evaluating the Performance of Existing Full-Reference Quality Metrics on High Dynamic Range (HDR) Video Content
Authors: Maryam Azimi, Amin Banitalebi-Dehkordi, Yuanyuan Dong, Mahsa T. Pourazad, Panos Nasiopoulos
Abstract:
While there exists a wide variety of Low Dynamic Range (LDR) quality metrics, only a limited number of metrics are designed specifically for High Dynamic Range (HDR) content. With the introduction of an HDR video compression standardization effort by international standardization bodies, the need for an efficient video quality metric for HDR applications has become more pronounced. The objective of this study is to compare the performance of existing full-reference LDR and HDR video quality metrics on HDR content and to identify the most effective one for HDR applications. To this end, a new HDR video data set is created, consisting of representative indoor and outdoor video sequences with different brightness and motion levels and representative types of distortions. The quality of each distorted video in this data set is evaluated both subjectively and objectively. The correlation between the subjective and objective results confirms that the VIF quality metric outperforms all other tested metrics in the presence of the tested types of distortions.
Keywords: HDR, dynamic range, LDR, subjective evaluation, video compression, HEVC, video quality metrics
Procedia PDF Downloads 529
19272 Extending Image Captioning to Video Captioning Using Encoder-Decoder
Authors: Sikiru Ademola Adewale, Joe Thomas, Bolanle Hafiz Matti, Tosin Ige
Abstract:
This project demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames mapped to an output sequence of words forming a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions are shown to demonstrate model generality over the video temporal dimension. Predicted captions were shown to generalize over video action, even in instances where the video scene changed dramatically. Model architecture changes are discussed to improve sentence grammar and correctness.
Keywords: decoder, encoder, many-to-many mapping, video captioning, 2-gram BLEU
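A small example of the 2-gram BLEU evaluation mentioned above, using NLTK's sentence_bleu with equal weights on unigrams and bigrams; the reference and candidate captions are invented, and smoothing is applied because short sentences can otherwise yield zero scores.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One reference caption and one predicted caption, already tokenized (toy data).
reference = [["a", "man", "is", "playing", "a", "guitar"]]
candidate = ["a", "man", "plays", "the", "guitar"]

score = sentence_bleu(reference, candidate,
                      weights=(0.5, 0.5),                       # 2-gram BLEU
                      smoothing_function=SmoothingFunction().method1)
print(f"2-gram BLEU: {score:.3f}")
```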
Procedia PDF Downloads 109
19271 Night Patrolling Robot for Suspicious Activity Detection
Authors: Amruta Amune, Rohit Agrawal, Yashashree Shastri, Syeda Zarah Aiman, Rutuja Rathi, Vaishnav Suryawanshi, Sameer Sumbhe
Abstract:
Every human being needs a sense of security, and the requirement for security has risen in proportion to population growth. However, because of a scarcity of resources, effective protection is not always possible: appropriate security costs a lot of money, which not many can afford. The goal of the study was to find a solution to this issue by developing a system capable of providing strong protection at a very low cost when long-term benefits are taken into account. The objective was to design and develop a robot that can travel around and survey a region and inform the command center if anything unusual is found. The system is controlled manually from the server to learn the paths of its workplace. It is outfitted with a camera so that it can capture live video of an intruder and display it on the server.
Keywords: night patrolling, node MCU, server, security
Procedia PDF Downloads 159