Search results for: fixed live camera images
5087 Paddy/Rice Singulation for Determination of Husking Efficiency and Damage Using Machine Vision
Authors: M. Shaker, S. Minaei, M. H. Khoshtaghaza, A. Banakar, A. Jafari
Abstract:
In this study, a machine vision and singulation system was developed to separate paddy from rice and to determine paddy husking and rice breakage percentages. The machine vision system consists of three main components: an imaging chamber, a digital camera, and a computer equipped with image processing software. The singulation device consists of a kernel holding surface, a motor with a vacuum fan, and a dimmer. To separate paddy from rice in the image, it was necessary to set a threshold. Therefore, some images of paddy and rice were sampled, the RGB values of the images were extracted using MATLAB software, and the mean and standard deviation of the data were determined. An image processing algorithm was developed in MATLAB to determine paddy/rice separation and the rice breakage and paddy husking percentages using the blue-to-red ratio. Tests showed that a threshold of 0.75 is suitable for separating paddy from rice kernels. Evaluation of the image processing algorithm showed that its accuracies were 98.36% and 91.81% for the paddy husking and rice breakage percentages, respectively. Analysis also showed that a suction of 45 mmHg to 50 mmHg, yielding 81.3% separation efficiency, is appropriate for operation of the kernel singulation system.
Keywords: breakage, computer vision, husking, rice kernel
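As a rough illustration of the blue-to-red thresholding step, the Python sketch below classifies a kernel from its pixel values. The 0.75 threshold comes from the abstract, but the use of channel means, the pixel layout, and the direction of the comparison are assumptions made for the example (the paper itself worked in MATLAB).

```python
# Illustrative sketch (assumed details): classify a kernel as paddy vs. rice by
# the blue-to-red ratio of its pixels, using the 0.75 threshold from the abstract.
import numpy as np

def classify_kernel(rgb_pixels, threshold=0.75):
    """rgb_pixels: (N, 3) array of a kernel's RGB values (0-255)."""
    mean_rgb = rgb_pixels.reshape(-1, 3).mean(axis=0)
    blue_to_red = mean_rgb[2] / max(mean_rgb[0], 1e-6)  # avoid division by zero
    # Assumption for illustration: ratios below the threshold are treated as paddy.
    return "paddy" if blue_to_red < threshold else "rice"

# Example with synthetic pixel data
paddy_like = np.array([[180, 150, 90]] * 100)   # reddish-yellow husk
rice_like = np.array([[200, 200, 190]] * 100)   # whitish kernel
print(classify_kernel(paddy_like), classify_kernel(rice_like))
```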
Procedia PDF Downloads 381
5086 Digital Retinal Images: Background and Damaged Areas Segmentation
Authors: Eman A. Gani, Loay E. George, Faisel G. Mohammed, Kamal H. Sager
Abstract:
Digital retinal images are well suited to systems for automatic screening of diabetic retinopathy. Unfortunately, a significant percentage of these images are of poor quality, which hinders further analysis, due to many factors (such as patient movement, inadequate or non-uniform illumination, acquisition angle, and retinal pigmentation). Retinal images of poor quality need to be enhanced before features and abnormalities are extracted. Segmentation of the retinal image is essential for this purpose: it is employed to smooth and strengthen the image by separating the background and damaged areas from the overall image, resulting in retinal image enhancement and less processing time. In this paper, methods for segmenting colored retinal images are proposed to improve the quality of retinal image diagnosis. The methods generate two segmentation masks: a background segmentation mask for extracting the background area and a poor-quality mask for removing the noisy areas from the retinal image. The standard retinal image databases DIARETDB0, DIARETDB1, STARE, and DRIVE, together with some images obtained from ophthalmologists, have been used to validate the proposed segmentation technique. Experimental results indicate that the introduced methods are effective and can lead to high segmentation accuracy.
Keywords: retinal images, fundus images, diabetic retinopathy, background segmentation, damaged areas segmentation
Procedia PDF Downloads 403
5085 The Analogy of Visual Arts and Visual Literacy
Authors: Lindelwa Pepu
Abstract:
Visual Arts and Visual Literacy are defined with distinction from one another. Visual Arts is known for art forms such as drawing, painting, and photography, to name a few, while Visual Literacy is known for learning through images. The Visual Literacy phenomenon may be traced to the use of images, which was first established for creating memories and enjoyment. As time evolved, images became the central and essential means of making contact between people. Gradually, images became a means for interpreting and understanding words through visuals, that being Visual Arts. The purpose of this study is to present the analogy of the two terms, Visual Arts and Visual Literacy, which are defined and compared through early practicing visual artists as well as relevant researchers to reveal how they interrelate with one another. This is a qualitative study that uses an interpretive approach as it seeks to understand and explain the interest of the study. The results reveal correspondence in the analogy between the two terms through various writers of early and recent years. This study recommends the significance of the two terms and the role they play in relation to other fields of study.
Keywords: visual arts, visual literacy, pictures, images
Procedia PDF Downloads 166
5084 Modern Detection and Description Methods for Natural Plants Recognition
Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert
Abstract:
'Green planet' is one of the Earth's names: it is a terrestrial planet and the fifth largest planet of the solar system. Plants do not have a constant and steady distribution around the world, and even the variation of plant species is not the same within one specific region. The presence of plants is not limited to one field such as botany; they appear in other fields such as literature and mythology, and they hold useful and inestimable historical records. No one can imagine the world without oxygen, which is produced mostly by plants. Their influence becomes even more manifest since no other living species can exist on Earth without plants, which also form the basic food staples. Regulation of the water cycle and oxygen production are other roles of plants, and these roles affect the environment and climate. Plants are the main components of agricultural activities, from which many countries benefit; therefore, plants have an impact on the political and economic situations and futures of countries. Because of the importance of plants and their roles, the study of plants is essential in various fields, and consideration of their different applications leads to a focus on their details as well. Automatic recognition of plants is a novel field that contributes to other research and to future studies. Moreover, plants survive in different places and regions by means of adaptations, which are the special factors that help them in hard living conditions. Weather is one of the parameters that affect plant life and plants' existence in an area, and recognition of plants under different weather conditions is a new window of research in the field. Only natural images make it possible to consider weather conditions as new factors, so the resulting system will be generalized and useful. To obtain a general system, the distance from the camera to the plants is considered as another factor, as is the change of light intensity in the environment over the course of the day. Adding these factors makes inventing an accurate and robust system a substantial challenge, and the development of an efficient plant recognition system is essential and effective. One important component of a plant is the leaf, which can be used to implement automatic systems for plant recognition without any human interface or interaction. Given the nature of the images used, a characteristic investigation of the plants is carried out, and leaves are the first characteristics selected as trusted parts. Four different plant species are specified with the goal of classifying them with an accurate system. The current paper is devoted to the principal directions of the proposed methods and the implemented system, the image dataset, and the results. The procedure of the algorithm and the classification is explained in detail. The first steps, feature detection and description of visual information, are performed using the Scale-Invariant Feature Transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed. In addition to this comparison, the robustness and efficiency of the results under different conditions are investigated and explained.
Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT
Procedia PDF Downloads 276
5083 Examining Foreign Student Visual Perceptions of Online Marketing Tools at a Hungarian University
Authors: Anita Kéri
Abstract:
Higher education marketing has been a widely researched field in recent years. Due to the increasing competition among higher education institutions worldwide, it has become crucial to target foreign students with effective marketing tools. Online marketing tools have become central to attracting, retaining, and satisfying the needs of foreign students. Therefore, the aim of the current study is to reveal how the online marketing tools of a Hungarian university are perceived visually by its first-year foreign students, with special emphasis on the university webpage content. Eye-camera tracking and retrospective think-aloud interviews were used to measure visual perceptions. Results show that first-year students better remember online marketing content that contains familiar elements. Pictures of real-life students and their experiences attract students' attention more, and students also remember information on these webpage elements better than on designs with stock photos. This research is novel in the sense that it uses eye-camera tracking in the field of higher education marketing, thereby providing insight into how online higher education marketing is perceived by foreign students.
Keywords: higher education, marketing, eye-camera, visual perceptions
Procedia PDF Downloads 100
5082 Analysis of Chatterjea Type F-Contraction in F-Metric Space and Application
Authors: Awais Asif
Abstract:
This article investigates fixed point theorems for Chatterjea type F-contractions in the setting of F-metric spaces. We relax the conditions of the F-contraction and define a modified F-contraction for two mappings. The study provides fixed point results for both single-valued and multivalued mappings, and the results are further extended to common fixed point theorems for two mappings. Moreover, to discuss the applicability of our results, an application is provided, which shows the role of our results in finding solutions of functional equations in dynamic programming. Our results generalize and extend existing results in the literature.
Keywords: Chatterjea type F-contraction, F-Cauchy sequence, F-convergent, multivalued mappings
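For readers unfamiliar with the terminology, the classical Chatterjea contraction condition, of which the paper's F-contraction is a generalization to F-metric spaces, can be recalled as follows; this is textbook background only, not the authors' modified condition.

```latex
% A self-map T of a metric space (X, d) is a Chatterjea contraction if there
% exists k in [0, 1/2) such that, for all x, y in X,
\[
  d(Tx, Ty) \;\le\; k \,\bigl[\, d(x, Ty) + d(y, Tx) \,\bigr].
\]
```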
Procedia PDF Downloads 143
5081 Content Based Face Sketch Images Retrieval in WHT, DCT, and DWT Transform Domain
Authors: W. S. Besbas, M. A. Artemi, R. M. Salman
Abstract:
Content-based face sketch retrieval can be used to find images of criminals from their sketches for crime prevention. This paper investigates the problem of content-based image retrieval (CBIR) of face sketch images in the transform domain. Face sketch images that are similar to the query image are retrieved from the face sketch database. Features of the face sketch image are extracted in the spectral domain of selected transforms: the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT), and the Walsh-Hadamard Transform (WHT). For the performance analysis of the feature selection methods, three face image databases are used: the 'Sheffield face database', the 'Olivetti Research Laboratory (ORL) face database', and the 'Indian face database'. The city block distance measure is used to evaluate the performance of the retrieval process. The investigation concludes that the retrieval rate is database dependent but that, in general, the DCT performs best, while the WHT is best with respect to the speed of retrieving images.
Keywords: Content Based Image Retrieval (CBIR), face sketch image retrieval, features selection for CBIR, image retrieval in transform domain
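A minimal Python sketch of this kind of transform-domain retrieval is shown below, using 2-D DCT coefficients as features and the city block (L1) distance for ranking; the block size, the choice of low-frequency coefficients, and the random stand-in images are assumptions for illustration only.

```python
# Illustrative sketch (assumed details): DCT-domain features for sketch retrieval,
# ranked by city block (L1) distance, as described in the abstract.
import numpy as np
from scipy.fftpack import dct

def dct_features(gray_image, block=16):
    """Keep the top-left (block x block) 2-D DCT coefficients as a feature vector."""
    coeffs = dct(dct(gray_image.astype(float), axis=0, norm='ortho'),
                 axis=1, norm='ortho')
    return coeffs[:block, :block].ravel()

def retrieve(query_img, database_imgs):
    """Return database indices sorted by city block distance to the query."""
    q = dct_features(query_img)
    dists = [np.sum(np.abs(q - dct_features(img))) for img in database_imgs]
    return np.argsort(dists)

# Example with random stand-ins for face sketch images
rng = np.random.default_rng(0)
db = [rng.random((64, 64)) for _ in range(5)]
print(retrieve(db[2], db))  # index 2 should rank first
```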
Procedia PDF Downloads 493
5080 Reinforcement Learning for Classification of Low-Resolution Satellite Images
Authors: Khadija Bouzaachane, El Mahdi El Guarmah
Abstract:
The classification of low-resolution satellite images has been a worthwhile and fertile field that attracts plenty of researchers due to its importance in monitoring geographical areas. It can be used for several purposes, such as disaster management, military surveillance, and agricultural monitoring. The main objective of this work is to classify low-resolution satellite images efficiently and accurately by using novel deep learning and reinforcement learning techniques. The images include roads, residential areas, industrial areas, rivers, sea lakes, and vegetation. To achieve that goal, we carried out experiments on Sentinel-2 images, aiming for both high accuracy and efficient classification. Our proposed model achieved 91% accuracy on the testing dataset, along with good classification of land cover. Focusing on precision, we obtained 93% for river, 92% for residential, 97% for residential, 96% for forest, 87% for annual crop, 84% for herbaceous vegetation, 85% for pasture, 78% for highway, and 100% for sea lake.
Keywords: classification, deep learning, reinforcement learning, satellite imagery
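The abstract does not spell out the network architecture or the reinforcement-learning component, so the following Python (PyTorch) sketch only illustrates the supervised half of such a pipeline: a small CNN classifying low-resolution land-cover patches. The layer sizes, patch size, and class count are assumptions.

```python
# Minimal sketch (assumed architecture): a small CNN for classifying
# low-resolution Sentinel-2 patches into land-cover classes.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_classes=10, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = PatchCNN()
dummy_batch = torch.randn(4, 3, 64, 64)   # 4 RGB patches of 64x64 pixels
print(model(dummy_batch).shape)           # torch.Size([4, 10])
```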
Procedia PDF Downloads 213
5079 Urdu Text Extraction Method from Images
Authors: Samabia Tehsin, Sumaira Kausar
Abstract:
Due to the vast increase in multimedia data in recent years, efficient and robust retrieval techniques are needed to retrieve and index images and videos. Text embedded in images can serve as a strong retrieval cue for those images, which is why text extraction is an area of research receiving increasing attention. English text extraction has been the focus of many researchers, but very little work has been done on other languages such as Urdu. This paper focuses on Urdu text extraction from video frames. It presents a text detection feature set that is able to deal with most of the problems connected with the text extraction process. To test the validity of the method, it is evaluated on an Urdu news dataset, which gives promising results.
Keywords: caption text, content-based image retrieval, document analysis, text extraction
Procedia PDF Downloads 516
5078 Shark Detection and Classification with Deep Learning
Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti
Abstract:
Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution coming from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image-classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. All data-generation methods were processed without manual interaction. As media-based remote monitoring strives to dominate methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
Keywords: classification, data mining, Instagram, remote monitoring, sharks
Procedia PDF Downloads 121
5077 Temperature-Based Detection of Initial Yielding Point in Loading of Tensile Specimens Made of Structural Steel
Authors: Aqsa Jamil, Tamura Hiroshi, Katsuchi Hiroshi, Wang Jiaqi
Abstract:
The yield point represents the upper limit of the forces that can be applied to a specimen without causing any permanent deformation. After yielding, the behavior of the specimen changes suddenly, including the possibility of cracking or buckling, so the accumulation of damage or the type of fracture changes depending on this condition. As it is difficult to accurately detect the yield points at the several stress concentration points in structural steel specimens, an effort has been made in this research work to develop a convenient technique using thermography (temperature-based detection) during tensile tests for precise detection of yield point initiation. To verify the applicability of the thermography camera, tests were conducted under different loading conditions, measuring the deformation with various strain gauges and monitoring the surface temperature with the help of a thermography camera. The yield point of the specimens was estimated with the help of the temperature dip, which occurs due to the thermoelastic effect during plastic deformation. The scattering of the data was checked by performing a repeatability analysis. The effects of ambient temperature imperfections and of the light source were checked by carrying out the tests in the daytime as well as at midnight and by calculating the signal-to-noise ratio (SNR) of the noisy data from the infrared thermography camera; from this, it can be concluded that the camera is independent of the testing time and of the presence of a visible light source. Furthermore, a fully coupled thermal-stress analysis was performed using the Abaqus/Standard exact implementation technique to validate the temperature profiles obtained from the thermography camera and to check the feasibility of numerical simulation for predicting the results extracted with the help of the thermographic technique.
Keywords: signal to noise ratio, thermoelastic effect, thermography, yield point
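To make the SNR check concrete, here is a small Python sketch that computes the signal-to-noise ratio of a temperature trace and locates a temperature dip; the decibel formulation, the synthetic trace, and the use of the global minimum as the dip indicator are assumptions for illustration, not the authors' exact procedure.

```python
# Illustrative sketch (assumed formulation): SNR of a surface temperature trace,
# plus a simple detector for the dip caused by the thermoelastic effect.
import numpy as np

def snr_db(signal, noise):
    """SNR in decibels from a reference trace and its noise residual."""
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

def dip_index(temperature_trace):
    """Index of the minimum temperature, taken here as the yield-point indicator."""
    return int(np.argmin(temperature_trace))

t = np.linspace(0, 10, 500)
temperature = 25 - 0.3 * np.exp(-((t - 6) ** 2) / 0.1)   # synthetic dip near t = 6 s
noisy = temperature + np.random.normal(0, 0.02, t.size)
print(snr_db(temperature, noisy - temperature), t[dip_index(noisy)])
```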
Procedia PDF Downloads 107
5076 MB-SLAM: A SLAM Framework for Construction Monitoring
Authors: Mojtaba Noghabaei, Khashayar Asadi, Kevin Han
Abstract:
Simultaneous Localization and Mapping (SLAM) technology has recently attracted the attention of construction companies for real-time performance monitoring. To use SLAM effectively for construction performance monitoring, SLAM results should be registered to a Building Information Model (BIM). Registering SLAM and BIM can provide essential insights for construction managers to identify construction deficiencies in real time and ultimately reduce rework. Also, registering SLAM to BIM in real time can boost the accuracy of SLAM, since SLAM can then use features from both images and 3D models. However, registering SLAM with the BIM in real time is a challenge. In this study, a novel SLAM platform named Model-Based SLAM (MB-SLAM) is proposed, which not only provides automated registration of SLAM and BIM but also improves the localization accuracy of the SLAM system in real time. This framework improves the accuracy of SLAM by aligning perspective features such as depth, vanishing points, and vanishing lines from the BIM to the SLAM system. The framework extracts depth features from a monocular camera's image and improves the localization accuracy of the SLAM system through a real-time iterative process. Initially, SLAM is used to calculate a rough camera pose for each keyframe. In the next step, each keyframe of the SLAM video sequence is registered to the BIM in real time by aligning the keyframe's perspective with the equivalent BIM view. The alignment method is based on perspective detection, which estimates vanishing lines and points by detecting straight edges in the images. This process generates the associated BIM views from the keyframes' views. The calculated poses are later improved by a real-time gradient descent-based iteration method. Two case studies were presented to validate MB-SLAM. The validation process demonstrated promising results: it accurately registered SLAM to BIM and significantly improved the SLAM's localization accuracy. In addition, MB-SLAM achieved real-time performance in both indoor and outdoor environments. The proposed method can fully automate past studies and generate as-built models that are aligned with BIM. The main contribution of this study is a SLAM framework for both research and commercial usage, which aims to monitor construction progress and performance in a unified framework. Through this platform, users can improve the accuracy of the SLAM by providing a rough 3D model of the environment. MB-SLAM further boosts the application of SLAM to practical usage.
Keywords: perspective alignment, progress monitoring, SLAM, stereo matching
Procedia PDF Downloads 224
5075 Registration of Multi-Temporal Unmanned Aerial Vehicle Images for Facility Monitoring
Authors: Dongyeob Han, Jungwon Huh, Quang Huy Tran, Choonghyun Kang
Abstract:
Unmanned Aerial Vehicles (UAVs) have been used for surveillance, monitoring, inspection, and mapping. In this paper, we present a systematic approach for the automatic registration of UAV images for monitoring facilities such as buildings, greenhouses, and civil structures. A two-step process is applied: 1) an image matching technique based on SURF (Speeded-Up Robust Features) and RANSAC (Random Sample Consensus); 2) bundle adjustment of the multi-temporal images. Image matching to find corresponding points is one of the most important steps for the precise registration of multi-temporal images. We used the SURF algorithm to find matching points quickly and effectively. The RANSAC algorithm was used both in the process of finding matching points between images and in the bundle adjustment process. Experimental results from UAV images showed that our approach has good enough accuracy to be applied to change detection of facilities.
Keywords: building, image matching, temperature, unmanned aerial vehicle
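A compact Python/OpenCV sketch of the matching-plus-RANSAC step is shown below. Note that SURF is only available in the opencv-contrib build, so ORB is used here as a freely available stand-in detector; the parameter values and file names are assumptions.

```python
# Illustrative sketch (assumed parameters): feature matching followed by a
# RANSAC-estimated homography, mirroring the abstract's first step.
import cv2
import numpy as np

def match_images(img1, img2):
    detector = cv2.ORB_create(2000)           # stand-in for SURF (contrib-only)
    k1, d1 = detector.detectAndCompute(img1, None)
    k2, d2 = detector.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects outlier correspondences while estimating the transform
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, int(inlier_mask.sum())

# Usage (paths are placeholders):
# img_a = cv2.imread("uav_epoch1.jpg", cv2.IMREAD_GRAYSCALE)
# img_b = cv2.imread("uav_epoch2.jpg", cv2.IMREAD_GRAYSCALE)
# H, n_inliers = match_images(img_a, img_b)
```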
Procedia PDF Downloads 292
5074 Retrieval of Aerosol Optical Depth and Correlation Analysis of PM2.5 Based on GF-1 Wide Field of View Images
Authors: Bo Wang
Abstract:
This paper proposes a method that can estimate PM2.5 from images of the GF-1 satellite, known as WFOV (Wide Field of View) images. AOD (Aerosol Optical Depth) over land surfaces was retrieved for the Shanghai area based on the DDV (Dark Dense Vegetation) method. PM2.5 information, gathered hourly from ground monitoring stations, was fitted to the AOD using polynomials with different coefficients, and the correlation coefficient between them was then calculated. The results showed that GF-1 WFOV images can meet the requirements for retrieving AOD and that the correlation coefficient between the retrieved AOD and PM2.5 was high. If more detailed and comprehensive data are provided, the accuracy could be improved and the parameters made more precise in the future.
Keywords: remote sensing retrieval, PM2.5, GF-1, aerosol optical depth
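The fitting-and-correlation step can be illustrated with a few lines of Python; the AOD and PM2.5 values below are made-up placeholders, and the choice of polynomial orders is an assumption, since the abstract only says that different polynomial coefficients were tried.

```python
# Illustrative sketch (assumed data): fit PM2.5 against retrieved AOD with
# polynomials of different orders and report the correlation coefficient.
import numpy as np

aod = np.array([0.2, 0.35, 0.5, 0.65, 0.8, 0.95])        # retrieved AOD (example values)
pm25 = np.array([18.0, 30.0, 44.0, 60.0, 72.0, 88.0])    # station PM2.5, ug/m^3 (example)

for order in (1, 2, 3):
    coeffs = np.polyfit(aod, pm25, order)
    predicted = np.polyval(coeffs, aod)
    r = np.corrcoef(pm25, predicted)[0, 1]
    print(f"order {order}: r = {r:.3f}")
```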
Procedia PDF Downloads 244
5073 Model Reference Adaptive Control and LQR Control for Quadrotor with Parametric Uncertainties
Authors: Alia Abdul Ghaffar, Tom Richardson
Abstract:
A model reference adaptive controller and a fixed-gain LQR controller were implemented in the height controller of a quadrotor that has parametric uncertainties due to the act of picking up an object of unknown dimensions and mass. It is shown that adaptive control, unlike fixed-gain control, is capable of ensuring stable tracking performance under such conditions, although adaptive control suffers from several limitations. Combining both adaptive and fixed-gain control in the controller architecture results in enhanced tracking performance in the presence of parametric uncertainties.
Keywords: UAV, quadrotor, robotic arm augmentation, model reference adaptive control, LQR control
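As background for the fixed-gain half of this architecture, the following Python sketch computes an LQR gain for a simple double-integrator height model; the model, the weighting matrices Q and R, and the continuous-time formulation are illustrative assumptions, not the paper's actual quadrotor model.

```python
# Minimal sketch (assumed model): fixed-gain LQR for a double-integrator height model.
import numpy as np
from scipy.linalg import solve_continuous_are

# Height dynamics: x = [z, z_dot], u = net vertical acceleration command
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])     # state weights (assumed)
R = np.array([[0.1]])        # control weight (assumed)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P      # optimal state feedback u = -K x
print("LQR gain K:", K)
```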
Procedia PDF Downloads 472
5072 Automated Localization of Palpebral Conjunctiva and Hemoglobin Determination Using Smart Phone Camera
Authors: Faraz Tahir, M. Usman Akram, Albab Ahmad Khan, Mujahid Abbass, Ahmad Tariq, Nuzhat Qaiser
Abstract:
The objective of this study was to evaluate the degree of anemia from a picture of the palpebral conjunctiva taken with a smartphone camera. We first localize the region of interest in the image, then extract certain features from that region of interest, and train an SVM classifier on those features; as a result, our system classifies the image in real time according to hemoglobin level. The proposed system achieved an accuracy of 70%. The classifier was trained on a locally gathered dataset of 30 patients.
Keywords: anemia, palpebral conjunctiva, SVM, smartphone
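A minimal Python sketch of the classification stage is given below, training an SVM on simple color statistics of the localized conjunctiva region; the particular features, the RBF kernel, the binary labels, and the random stand-in images are all assumptions, since the abstract does not describe the feature set.

```python
# Illustrative sketch (assumed features and labels): SVM on conjunctiva color statistics.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def conjunctiva_features(roi_rgb):
    """Mean and standard deviation of each RGB channel in the region of interest."""
    pixels = roi_rgb.reshape(-1, 3).astype(float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

rng = np.random.default_rng(1)
rois = [rng.integers(0, 256, (40, 80, 3)) for _ in range(30)]   # stand-in ROIs
X = np.array([conjunctiva_features(r) for r in rois])
y = rng.integers(0, 2, 30)   # 0 = normal, 1 = anemic (placeholder labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:3]))
```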
Procedia PDF Downloads 505
5071 Research Approaches for Identifying Images of the Past in the Built Environment
Authors: Ahmad Al-Zoabi
Abstract:
The development of research approaches for identifying images of the past in the built environment is at an early stage, and a review of the current literature reveals a limited body of research in this area. This study seeks to make a contribution to filling this void. It investigates the theoretical and empirical studies that examine the built environment as a medium for communicating the past, in order to understand how images of the past are operationalized in these studies. Findings revealed that the image can be operationalized in several ways depending on the focus of the study. Three concerns were addressed in this study when defining the image of the past: (a) to investigate an 'everyday' popular image of the past; (b) to look at the building's image as an integrated part of a larger image of the city; and (c) to find patterns within residents' images of the past. This study concludes that future work is needed to address the effects of different scales (size and depth of history) of cities and of different cultural backgrounds on images of the past.
Keywords: architecture, built environment, image of the past, research approaches
Procedia PDF Downloads 315
5070 Acceleration and Deceleration Behavior in the Vicinity of a Speed Camera, and Speed Section Control
Authors: Jean Felix Tuyisingize
Abstract:
Speeding or inappropriate speed is a major problem worldwide, contributing to 10-15% of road crashes and 30% of fatal injury crashes. The consequences of speeding put the driver's life at risk, along with the lives of other road users such as motorists, cyclists, and pedestrians. To control vehicle speeds, governments and traffic authorities enforce speed regulations through speed cameras and speed section control, which monitor vehicle speeds and detect plate numbers to levy penalties. However, speed limit violations are prevalent, even on motorways with speed cameras. The problem with speed cameras is that they alter driver behavior, and their effect declines with increasing distance from the camera location. Drivers decelerate over a short distance before the camera and accelerate vigorously above the speed limit just after passing it. The sudden deceleration near cameras causes drivers to try to make up for lost time after passing them, and they do this by speeding up, resulting in a phenomenon known as the "kangaroo jump" or "V-profile" around camera/ASSC areas. This study investigated the impact of speed enforcement devices, specifically Average Speed Section Control (ASSC) and fixed cameras, on acceleration and deceleration events in their vicinity. The research applied advanced statistical and Geographic Information System (GIS) analysis to naturalistic driving data to uncover speeding patterns near the speed enforcement systems. The study revealed a notable concentration of events within a 600-meter radius of enforcement devices, suggesting their influence on driver behavior within a specific range. However, most of these events are of low severity, suggesting that drivers may not significantly alter their speed upon encountering these devices. This behavior could be attributed to several reasons, such as consistently maintaining safe speeds or using real-time in-vehicle intervention systems. The complexity of driver behavior is also highlighted, indicating the potential influence of factors like traffic density, road conditions, weather, time of day, and driver characteristics. Further, the study highlighted that high-severity events often occurred outside speed enforcement zones, particularly around intersections, indicating these as potential hotspots for drastic speed changes. These findings call for a broader perspective on traffic safety interventions beyond reliance on speed enforcement devices. However, the study acknowledges certain limitations, such as its reliance on a specific geographical focus, which may limit the broad applicability of the findings. Additionally, the severity of speed modification events was categorized into low, medium, and high, which could oversimplify the continuum of speed changes and potentially mask trends within each category. This research contributes valuable insights to the traffic safety and driver behavior literature, illuminating the complexity of driver behavior and the potential influence of factors beyond the presence of speed enforcement devices. Future research may employ finer categories of event severity and may also explore the role of in-vehicle technologies, driver characteristics, and a broader set of environmental variables in driving behavior and traffic safety.
Keywords: acceleration, deceleration, speeding, inappropriate speed, speed enforcement cameras
Procedia PDF Downloads 32
5069 Sniff-Camera for Imaging of Ethanol Vapor in Human Body Gases after Drinking
Authors: Toshiyuki Sato, Kenta Iitani, Koji Toma, Takahiro Arakawa, Kohji Mitsubayashi
Abstract:
A two-dimensional imaging system (Sniff-camera) for gaseous ethanol emissions from human palm skin was constructed and demonstrated. This imaging system measures gaseous ethanol concentrations as intensities of chemiluminescence (CL) produced by a luminol reaction induced by alcohol oxidase and a luminol-hydrogen peroxide system. The conversion of ethanol distributions and concentrations to two-dimensional CL was conducted on an enzyme-immobilized mesh substrate in a dark box containing a luminol solution. In order to visualize ethanol emissions from human palm skin, we developed a highly sensitive and selective imaging system for transpired gaseous ethanol at sub-ppm levels. High-sensitivity imaging allowed us to successfully visualize the emission dynamics of transdermal gaseous ethanol. The intensity of each pixel on the palm reflects the ethanol concentration distribution resulting from the metabolism of orally administered alcohol. This imaging system is significant and useful for the assessment of ethanol measurements on the palmar skin.
Keywords: sniff-camera, gas-imaging, ethanol vapor, human body gas
Procedia PDF Downloads 369
5068 A Process of Forming a Single Competitive Factor in the Digital Camera Industry
Authors: Kiyohiro Yamazaki
Abstract:
This paper considers the process by which a single competitive factor forms in the digital camera industry, from the viewpoint of the product platform. To make product development easier for companies and to increase product introduction ratios, development efforts concentrate on improving and strengthening certain product attributes, and in that process the product platform is formed continuously. It is pointed out that the formation of this product platform raises the product development efficiency of individual companies but, on the other hand, involves a trade-off by causing the unification of competitive factors across the whole industry. This research analyzes product specification data collected from the web pages of digital camera companies. Specifically, this research collected all product specification data released in Japan from 1995 to 2003 and analyzed the composition of image sensors and optical lenses; it identified product platforms shared by multiple products and discussed their application. As a result, this research found that product platformization arose in the development of standard products for the major market segments. Every major company has built product platforms of image sensors and optical lenses, and as a result, this research found that the competitive factors became unified across the entire industry through product platformization. In other words, this product platformization brought product development efficiency to individual firms; however, it also caused the industry's competitive factors to become unified.
Keywords: digital camera industry, product evolution trajectory, product platform, unification of competitive factors
Procedia PDF Downloads 158
5067 The Way Digitized Lectures and Film Presence Coaching Impact Academic Identity: An Expert Facilitated Participatory Action Research Case Study
Authors: Amanda Burrell, Tonia Gary, David Wright, Kumara Ward
Abstract:
This paper explores the concept of academic identity as it relates to the lecture, in particular the digitized lecture delivered to a camera in the absence of a student audience. Many academics have the performance aspect of the role thrust upon them with little or no training. For the purpose of this study, we look at the performance of the academic identity and examine tailored film presence coaching for its contributions toward academic identity, specifically in relation to feelings of self-confidence and the diminishment of discomfort or stage fright. The case is articulated through the lens of scholar-practitioners, using expert-facilitated participatory action research. It demonstrates that, in our sample of experienced academics, all reported some feelings of uncertainty about presenting lectures to camera prior to coaching. We share how power poses and reframing fear produced improvements in the ease and competency of all participants. We share exactly how this insight could be adapted for self-coaching by any academic called to present to a camera, and we consider the relationship between this and academic identity.
Keywords: academic identity, digitized lecture, embodied learning, performance coaching
Procedia PDF Downloads 337
5066 Common Fixed Point Results and Stability of a Modified Jungck Iterative Scheme
Authors: Hudson Akewe
Abstract:
In this study, we introduce a modified Jungck (dual Jungck) iterative scheme and use the scheme to approximate the unique common fixed point of a pair of generalized contractive-like operators in a Banach space. The iterative scheme is also shown to be stable with respect to the maps (S, T). An example is given to illustrate the convergence of the scheme. Our result generalizes and improves several results in the literature on a single map T.
Keywords: generalized contractive-like operators, modified Jungck iterative scheme, stability results, weakly compatible maps, unique common fixed point
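For orientation, the classical Jungck iteration for a pair of maps S, T with T(X) contained in S(X), and its Jungck-Mann variant, are recalled below as background; the authors' modified (dual Jungck) scheme builds on schemes of this type, and its exact form is not reproduced here.

```latex
% Classical Jungck iteration: given x_0, choose x_{n+1} so that
\[
  S x_{n+1} = T x_n, \qquad n = 0, 1, 2, \dots
\]
% Jungck-Mann iteration with parameters \alpha_n \in [0, 1]:
\[
  S x_{n+1} = (1 - \alpha_n)\, S x_n + \alpha_n\, T x_n .
\]
```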
Procedia PDF Downloads 348
5065 The Contemporary Visual Spectacle: Critical Visual Literacy
Authors: Lai-Fen Yang
Abstract:
In this increasingly visual world, how can we best decipher and understand the many ways that our everyday lives are organized around looking practices and the many images we encounter each day? Indeed, how we interact with and interpret visual images is a basic component of human life. Today, however, we are living in one of the most artificial, image-saturated visual cultures in human history, which makes understanding the complex construction and multiple social functions of visual imagery more important than ever before. This paper takes up themes regarding our experience of this visually pervasive mediated culture, here termed the visual spectacle.
Keywords: visual culture, contemporary, images, literacy
Procedia PDF Downloads 513
5064 A Monocular Measurement for 3D Objects Based on Distance Area Number and New Minimize Projection Error Optimization Algorithms
Authors: Feixiang Zhao, Shuangcheng Jia, Qian Li
Abstract:
High-precision measurement of a target's position and size is one of the hotspots in the field of vision inspection. This paper proposes a three-dimensional object positioning and measurement method using a monocular camera and GPS, namely Distance Area Number-New Minimize Projection Error (DAN-NMPE). Our algorithm contains two parts, DAN and NMPE; specifically, DAN is a picture-sequence algorithm and NMPE is a projection-error minimization algorithm, which greatly improves the measurement accuracy of the target's position and size. Comprehensive experiments validate the effectiveness of our proposed method on a self-made traffic sign dataset. The results show that, with a laser point cloud as the ground truth, the size and position errors of the traffic signs measured by this method are ±5% and 0.48 ± 0.3 m, respectively. In addition, we compared it with the current mainstream method, which uses a monocular camera to locate and measure traffic signs. DAN-NMPE attains significant improvements over existing state-of-the-art methods, improving the measurement accuracy of size and position by 50% and 15.8%, respectively.
Keywords: monocular camera, GPS, positioning, measurement
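The general idea of minimizing projection error can be illustrated with the short Python sketch below, which refines a 3-D point position by least-squares minimization of reprojection error over two known camera positions; the pinhole model, intrinsics, camera poses, and noise level are all assumptions and do not reproduce the paper's DAN-NMPE algorithm.

```python
# Illustrative sketch (assumed camera model): least-squares refinement of a
# 3-D point by minimizing its reprojection error in two views.
import numpy as np
from scipy.optimize import least_squares

K = np.array([[800.0, 0.0, 320.0],    # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
# Two camera centers along x (e.g., from GPS), identity rotation for simplicity
centers = [np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])]

def project(point_3d, center):
    p = K @ (point_3d - center)
    return p[:2] / p[2]

def residuals(point_3d, observations):
    return np.concatenate([project(point_3d, c) - obs
                           for c, obs in zip(centers, observations)])

true_point = np.array([1.0, 0.5, 10.0])
obs = [project(true_point, c) + np.random.normal(0, 0.3, 2) for c in centers]
est = least_squares(residuals, x0=np.array([0.0, 0.0, 5.0]), args=(obs,))
print(est.x)   # should be close to the true point
```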
Procedia PDF Downloads 144
5063 Deep Learning for SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR Systems are often considered to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. 
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network
Procedia PDF Downloads 67
5062 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV
Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran
Abstract:
Ortho-rectification is the process of geometrically correcting an aerial image so that its scale is uniform. The ortho-image formed by this process is corrected for lens distortion, topographic relief, and camera tilt. It can be used to measure true distances because it is an accurate representation of the Earth's surface. Ortho-rectification and geo-referencing are essential for pinpointing the exact location of targets in video imagery acquired by a UAV platform. This can only be achieved by comparing such video imagery with an existing digital map, and such a comparison is only possible when the image has been ortho-rectified in the same coordinate system as the existing map. The video image sequences from the UAV platform must be geo-registered; that is, each video frame must carry the necessary camera information before the ortho-rectification process is performed. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area, which can then be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for real-time ortho-rectification are: (1) decomposition of the video stream into individual frames; (2) finding the interior camera orientation parameters; (3) finding the relative exterior orientation parameters of the video frames with respect to each other; (4) finding the absolute exterior orientation parameters using a self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric map, which can be compared with a well-referenced existing digital map for the purposes of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to test our method. Fifteen minutes of video and telemetry data were collected using the UAV, and the collected data were processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from the ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with 6 control points measured by a GPS receiver, is between 3 and 5 meters.
Keywords: geo-referencing, ortho-rectification, video frame, self-calibration
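Step (1) of the procedure, splitting the video stream into individual frames, can be done with a few lines of Python/OpenCV as in the hedged sketch below; the file names, output pattern, and frame-skip interval are assumptions, not values from the paper.

```python
# Minimal sketch of step (1): decompose a UAV video stream into individual frames.
import cv2

def extract_frames(video_path, out_pattern="frame_{:05d}.png", every_n=10):
    cap = cv2.VideoCapture(video_path)
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:          # keep every n-th frame
            cv2.imwrite(out_pattern.format(saved), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Usage (placeholder file name): extract_frames("uav_survey.mp4")
```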
Procedia PDF Downloads 478
5061 Numerical Calculation of Heat Transfer in Water Heater
Authors: Michal Spilacek, Martin Lisy, Marek Balas, Zdenek Skala
Abstract:
This article attempts to determine the state of the flue gas entering the KWH heat exchanger from the combustion chamber in order to calculate the heat transfer ratio of the heat exchanger. A combination of measurement, calculation, and computer simulation was used to create a useful way of approximating the heat transfer rate. The measurements were taken by a number of sensors mounted on the experimental device and by a thermal imaging camera. The results of the numerical calculation are in good agreement with the real power output of the experimental device. The results show that the research is heading in a promising direction and can be used to propose changes to the construction of the heat exchanger, but it still needs refinement.
Keywords: heat exchanger, heat transfer rate, numerical calculation, thermal images
Procedia PDF Downloads 616
5060 Validation of the Recovery of House Dust Mites from Fabrics by Means of Vacuum Sampling
Authors: A. Aljohani, D. Burke, D. Clarke, M. Gormally, M. Byrne, G. Fleming
Abstract:
Introduction: House dust mites (HDMs) are a source of allergen particles embedded in textiles and furnishings. Vacuum sampling is commonly used to recover HDMs and determine their abundance, but the efficiency of this method is poorly standardized. Here, the efficiency of recovery of HDMs from home-associated textiles was evaluated using vacuum sampling protocols. Methods/Approach: Live mites (LMs) or dead mites (DMs) of the house dust mite Dermatophagoides pteronyssinus (FERA, UK) were separately seeded onto the surfaces of Smooth Cotton, Denim, and Fleece (25 mites per 10 x 10 cm square) and left for 10 minutes before vacuuming. Fabrics were vacuumed (SKC Flite 2 pump) at a flow rate of 14 L/min for 60, 90, or 120 seconds, and the number of mites retained by the filter unit (0.4 μm x 37 mm) was determined. Vacuuming was carried out in a linear direction (Protocol 1) or in a multidirectional pattern (Protocol 2). Additional fabrics with LMs were also frozen and then thawed, thereby euthanizing the live mites (now termed EMs). Results/Findings: While recovery was significantly greater (p=0.000) from fabrics seeded with DMs than with LMs (76% greater), irrespective of vacuuming protocol or fabric type, the efficiency of recovery of DMs (72%-76%) did not vary significantly between fabrics. For fabrics containing EMs, recovery was greatest for Smooth Cotton and Denim (65-73% recovered) and least for Fleece (15% recovered). There was no significant difference (p=0.99) in the recovery of mites across all three mite categories from Smooth Cotton and Denim, but significantly fewer (p=0.000) mites were recovered from Fleece. Scanning electron microscopy images of HDM-seeded fabrics showed that live mites burrowed deeply into the Fleece weave, which reduced their efficiency of recovery by vacuuming. Research Implications: The results presented here have implications for the recovery of HDMs by vacuuming and for the choice of fabric to ameliorate HDM-dust sensitization.
Keywords: allergy, asthma, dead, fabric, fleece, live mites, sampling
Procedia PDF Downloads 139
5059 Using Satellite Images Datasets for Road Intersection Detection in Route Planning
Authors: Fatma El-Zahraa El-Taher, Ayman Taha, Jane Courtney, Susan Mckeever
Abstract:
Understanding road networks plays an important role in navigation applications such as self-driving vehicles and route planning for individual journeys. Intersections of roads are essential components of road networks, and understanding the features of an intersection, from a simple T-junction to larger multi-road junctions, is critical to decisions such as crossing roads or selecting the safest routes. The identification and profiling of intersections from satellite images is a challenging task. While deep learning approaches offer the state of the art in image classification and detection, the availability of training datasets is a bottleneck for this approach. In this paper, a labelled satellite image dataset for the intersection recognition problem is presented. It consists of 14,692 satellite images of Washington DC, USA. To support other users of the dataset, an automated download and labelling script is provided for dataset replication. The challenges of constructing and fine-grained feature labelling of a satellite image dataset are examined, including the issue of how to address features that are spread across multiple images. Finally, the accuracy of the detection of intersections in satellite images is evaluated.
Keywords: satellite images, remote sensing images, data acquisition, autonomous vehicles
Procedia PDF Downloads 144
5058 Automated Feature Extraction and Object-Based Detection from High-Resolution Aerial Photos Based on Machine Learning and Artificial Intelligence
Authors: Mohammed Al Sulaimani, Hamad Al Manhi
Abstract:
With the development of remote sensing technology, the resolution of optical remote sensing images has greatly improved, and images have become widely available. Numerous detectors have been developed for detecting different types of objects. In the past few years, remote sensing has benefited greatly from deep learning, particularly deep convolutional neural networks (CNNs). Deep learning holds great promise for fulfilling the challenging needs of remote sensing and solving various problems within different fields and applications. Unmanned Aerial Systems have become a highly used and preferred means of acquiring aerial photos in most organizations, because their high resolution and accuracy make the identification and detection of very small features much easier than with satellite images. This has opened a new era of deep learning in different applications, not only in feature extraction and prediction but also in analysis. This work addresses the capacity of machine learning and deep learning to detect and extract oil leaks from onshore flowlines using high-resolution aerial photos acquired by a UAS fitted with an RGB sensor, to support early detection of these leaks and protect the company from the losses caused by leaks and, most importantly, from environmental damage. Two different approaches using different DL methods are demonstrated. The first approach focuses on detecting oil leaks from the raw (unprocessed) aerial photos using a deep learning model called the Single Shot Detector (SSD). The model draws bounding boxes around the leaks, and the results were extremely good. The second approach focuses on detecting oil leaks from ortho-mosaicked (georeferenced) images by developing three deep learning models (Mask R-CNN, U-Net, and a PSPNet classifier). Post-processing is then performed to combine the results of these three models to achieve a better detection result and improved accuracy. Although a relatively small amount of data is available for training purposes, the trained DL models have shown good results in extracting the extent of the oil leaks and obtaining excellent and accurate detection.
Keywords: GIS, remote sensing, oil leak detection, machine learning, aerial photos, unmanned aerial systems
Procedia PDF Downloads 33