Search results for: RGB-D images
1823 Images Selection and Best Descriptor Combination for Multi-Shot Person Re-Identification
Authors: Yousra Hadj Hassen, Walid Ayedi, Tarek Ouni, Mohamed Jallouli
Abstract:
To re-identify a person is to check whether he or she has already been seen over a camera network. Re-identifying people over large public camera networks has recently become a crucial task for ensuring public security, and the vision community has investigated this area deeply. Most existing research relies only on spatial appearance information from either one or multiple person images, although the real person re-id setting is a multi-shot scenario. Efficiently modeling a person's appearance and choosing the best samples remain challenging problems. In this work, an extensive comparison of state-of-the-art descriptors combined with the proposed frame selection method is presented. Specifically, we evaluate the sample selection approach using multiple proposed descriptors. We show the effectiveness and advantages of the proposed method through extensive comparisons with related state-of-the-art approaches on two standard datasets, PRID2011 and iLIDS-VID.
Keywords: camera network, descriptor, model, multi-shot, person re-identification, selection
Procedia PDF Downloads 278
1822 21st Century Teacher Image to Stakeholders of Teacher Education Institutions in the Philippines
Authors: Marilyn U. Balagtas, Maria Ruth M. Regalado, Carmelina E. Barrera, Ramer V. Oxiño, Rosarito T. Suatengco, Josephine E. Tondo
Abstract:
This study presents the perceptions held by students and teachers, from kindergarten to tertiary level, of the image of the 21st century teacher, to provide a basis for designing teacher development programs in Teacher Education Institutions (TEIs) in the Philippines. The highlights of the report are the personal, psychosocial, and professional images of the 21st century teacher in basic education and of teacher educators, based on a survey of 612 internal stakeholders of nine member institutions of the National Network of Normal Schools (3NS). Data were obtained through a validated researcher-made instrument which allowed the generation of both quantitative and qualitative descriptions of the teacher image. Through descriptive statistics, the common images of the teacher were drawn, then validated and enriched by the information drawn from the qualitative data. The study recommends a repertoire of teacher development programs to create a good image of the 21st century teacher for a better Philippines.
Keywords: teacher image, 21st century teacher, teacher education, development program
Procedia PDF Downloads 367
1821 The Employment of Unmanned Aircraft Systems for Identification and Classification of Helicopter Landing Zones and Airdrop Zones in Calamity Situations
Authors: Marielcio Lacerda, Angelo Paulino, Elcio Shiguemori, Alvaro Damiao, Lamartine Guimaraes, Camila Anjos
Abstract:
Accurate information about the terrain is extremely important in disaster management or conflict situations. This paper proposes the use of Unmanned Aircraft Systems (UAS) for the identification of Airdrop Zones (AZs) and Helicopter Landing Zones (HLZs). In this paper we consider AZs to be zones where troops or supplies are dropped by parachute, and HLZs to be areas where victims can be rescued. The use of digital image processing enables the automatic generation of an orthorectified mosaic and an accurate Digital Surface Model (DSM). This methodology allows obtaining the information fundamental to understanding the terrain post-disaster in a short amount of time and with good accuracy. For the identification and classification of AZs and HLZs, images from a DJI Phantom 4 drone were used. The images were obtained with the knowledge and authorization of the responsible sectors and were duly registered with the control agencies. The flight was performed on May 24, 2017, and approximately 1,300 images were obtained during approximately 1 hour of flight. Afterward, new attributes were generated by Feature Extraction (FE) from the original images. The use of multispectral images and complementary attributes generated independently from them increases the accuracy of classification. The attributes used in this work include the Declivity Map and Principal Component Analysis (PCA). Four distinct classes were considered for the classification: HLZ 1, small size (18 m x 18 m); HLZ 2, medium size (23 m x 23 m); HLZ 3, large size (28 m x 28 m); and AZ (100 m x 100 m). The Random Forest (RF) decision tree method was used in this work. RF is a classification method that uses a large collection of de-correlated decision trees, each trained on a different random set of samples. The classification result from each tree for each object is called a class vote, and the resulting classification is decided by a majority of class votes.
In this case, we used 200 trees for the execution of RF in the software WEKA 3.8. The classification result was visualized in QGIS Desktop 2.12.3. Through the methodology used, it was possible to classify in the study area: 6 areas as HLZ 1, 6 areas as HLZ 2, 4 areas as HLZ 3, and 2 areas as AZ. It should be noted that an area classified as AZ covers the requirements of the other classes and may also be used as a large (HLZ 3), medium (HLZ 2) or small (HLZ 1) helicopter landing zone. Likewise, an area classified as an HLZ for large rotary-wing aircraft (HLZ 3) covers the smaller-area classifications, and so on. It was concluded that images obtained through small UAVs are of great use in calamity situations, since they can provide data with high accuracy, at low cost and low risk, and with ease and agility in obtaining aerial photographs. This allows the generation, in a short time, of information about the features of the terrain to serve as an important decision-support tool.
Keywords: disaster management, unmanned aircraft systems, helicopter landing zones, airdrop zones, random forest
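The majority-vote rule described in the abstract can be sketched in a few lines; the class names follow the paper, but the vote counts below are invented purely for illustration:

```python
from collections import Counter

# Class labels from the paper: three landing-zone sizes and the airdrop zone.
CLASSES = ("HLZ1", "HLZ2", "HLZ3", "AZ")

def majority_vote(tree_votes):
    """Combine per-tree class votes into a final Random Forest decision.

    Each decision tree in the forest casts one 'class vote' per object;
    the object is assigned the class receiving the most votes.
    """
    counts = Counter(tree_votes)
    winner, _ = counts.most_common(1)[0]
    return winner

# Hypothetical votes from a 200-tree forest for one image object.
votes = ["HLZ2"] * 120 + ["HLZ1"] * 50 + ["AZ"] * 30
print(majority_vote(votes))
```

In WEKA's Random Forest implementation this voting happens internally; the sketch only makes the decision rule explicit.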
Procedia PDF Downloads 177
1820 Marker-Controlled Level-Set for Segmenting Breast Tumor from Thermal Images
Authors: Swathi Gopakumar, Sruthi Krishna, Shivasubramani Krishnamoorthy
Abstract:
Contactless, painless and radiation-free, thermal imaging is one of the preferred screening modalities for the detection of breast cancer. However, a poor signal-to-noise ratio and the inexorable need to preserve the edges separating cancer cells from normal cells make the segmentation process difficult and hence unsuitable for computer-aided diagnosis of breast cancer. This paper presents key findings from research conducted to appraise two promising techniques for the detection of breast cancer: (I) marker-controlled level-set segmentation of an anisotropic-diffusion-filtered preprocessed image versus (II) marker-controlled level-set segmentation of a Gaussian-filtered image. Gaussian filtering processes the image uniformly, whereas anisotropic filtering processes only specific areas of a thermographic image. The pre-processed (Gaussian-filtered and anisotropic-filtered) images of breast samples were then used for segmentation. Segmentation of the breast starts with an initial level-set function. In this study, a marker refers to the position in the image at which the initial level-set function is applied. The markers are generally placed on the left and right sides of the breast and may vary with breast size. The proposed method was applied to images from an online database with samples collected from women of varying breast characteristics. It was observed that the breast could be segmented from the background by adjusting the markers. The results showed that, as a pre-processing technique, anisotropic filtering with level-set segmentation preserved the edges more effectively than Gaussian filtering. The image segmented after anisotropic filtering was found to be more suitable for feature extraction, enabling automated computer-aided diagnosis of breast cancer.
Keywords: anisotropic diffusion, breast, Gaussian, level-set, thermograms
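The edge-preserving pre-filter contrasted with Gaussian smoothing in this abstract is classically the Perona-Malik scheme; a rough sketch follows, with parameter values that are illustrative and not those used in the study:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, lam=0.2):
    """Minimal Perona-Malik diffusion: smooths homogeneous regions while
    the conduction term g = exp(-(|grad I| / kappa)^2) suppresses diffusion
    across strong edges, preserving boundaries between regions."""
    img = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbors.
        dN = np.roll(img, -1, axis=0) - img
        dS = np.roll(img, 1, axis=0) - img
        dE = np.roll(img, -1, axis=1) - img
        dW = np.roll(img, 1, axis=1) - img
        # Edge-stopping conduction coefficients (near 0 at strong edges).
        cN = np.exp(-(dN / kappa) ** 2)
        cS = np.exp(-(dS / kappa) ** 2)
        cE = np.exp(-(dE / kappa) ** 2)
        cW = np.exp(-(dW / kappa) ** 2)
        img += lam * (cN * dN + cS * dS + cE * dE + cW * dW)
    return img
```

Unlike a Gaussian blur, which averages uniformly, the conduction coefficients shut off diffusion wherever the local gradient is large relative to kappa.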
Procedia PDF Downloads 380
1819 Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition
Authors: L. Hamsaveni, Navya Prakash, Suresha
Abstract:
Document Image Analysis recognizes text and graphics in documents acquired as images. In this paper, an approach without Optical Character Recognition (OCR) has been adopted for degraded document image analysis. The technique involves document imaging methods such as image fusing and Speeded Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images and obtain an original document with complete information. If the captured degraded document image is skewed, it has to be straightened (deskewed) before further processing. The YCbCr image-storage format is used as a tool in converting the grayscale image to the RGB image format. The presented algorithm is tested on various types of degraded documents, such as printed documents, handwritten documents, old script documents and handwritten image sketches in documents. The purpose of this research is to obtain an original document from a given set of degraded documents of the same source.
Keywords: grayscale image format, image fusing, RGB image format, SURF detection, YCbCr image format
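The YCbCr conversion mentioned above is conventionally the ITU-R BT.601 transform; a minimal sketch, assuming 8-bit full-range channels (the paper does not state which YCbCr variant it uses):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 RGB -> YCbCr conversion (8-bit, full-range offsets).

    Y carries the luminance (the grayscale content); Cb and Cr carry
    the chroma, offset to mid-scale (128).
    """
    m = np.array([[ 0.299000,  0.587000,  0.114000],   # Y
                  [-0.168736, -0.331264,  0.500000],   # Cb
                  [ 0.500000, -0.418688, -0.081312]])  # Cr
    ycbcr = rgb.astype(float) @ m.T
    ycbcr[..., 1:] += 128.0  # chroma channels are offset to mid-scale
    return ycbcr
```

The inverse transform (RGB from YCbCr) is the matrix inverse of `m` applied after removing the 128 offsets, which is what allows round-tripping between grayscale-dominant and full-color representations.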
Procedia PDF Downloads 377
1818 Temperature Contour Detection of Salt Ice Using Color Thermal Image Segmentation Method
Authors: Azam Fazelpour, Saeed Reza Dehghani, Vlastimil Masek, Yuri S. Muzychka
Abstract:
This study uses a novel image analysis based on thermal imaging to detect the temperature contours created on a salt ice surface during transient phenomena. Thermal cameras detect objects by using their emissivities and IR radiance. The ice surface temperature is not uniform during transient processes: the temperature starts to increase from the boundary of the ice towards its center. Thermal cameras are able to report temperature changes on the ice surface at every individual moment. Various contours, which show different temperature areas, appear in the picture of the ice surface captured by a thermal camera. Identifying the exact boundaries of these contours is valuable for facilitating ice surface temperature analysis. Image processing techniques are used to extract each contour area precisely. In this study, several pictures are recorded while the temperature increases throughout the ice surface, and some are selected for processing at a specific time interval. An image segmentation method is applied to the images to determine the contour areas. Color thermal images are used to exploit the main information. The red, green and blue elements of the color images are investigated to find the best contour boundaries. Image enhancement and noise removal algorithms are applied to the images to obtain high-contrast, clear images. A novel edge detection algorithm based on differences in pixel color is established to determine the contour boundaries. In this method, the edges of the contours are obtained according to the properties of the red, blue and green image elements. The color image elements are assessed according to the information they carry: informative elements proceed to processing, while uninformative elements are removed to reduce computation time. Neighboring pixels with close intensities are assigned to one contour, and differences in intensity determine the boundaries. The results are then verified by conducting experimental tests.
An experimental setup is built using ice samples and a thermal camera. To observe the ice contours created, the samples, which are initially at -20 °C, are brought into contact with a warmer surface, and pictures are captured for 20 seconds. The method is applied to five images captured at time intervals of 5 seconds. The study shows that the green image element carries no useful information; therefore, the boundary detection method is applied to the red and blue image elements. In this case study, the results indicate that the proposed algorithm finds the boundaries more effectively than other edge detection methods, such as Sobel and Canny. A comparison between the contour detection of this method and the temperature analysis, which states the real boundaries, shows good agreement. This color image edge detection method is applicable to other similar cases according to their image properties.
Keywords: color image processing, edge detection, ice contour boundary, salt ice, thermal image
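The neighbor-intensity rule described in this abstract (pixels with close intensities share a contour; larger differences mark a boundary) might be sketched as follows on a single color element, with the tolerance value purely illustrative:

```python
import numpy as np

def contour_edges(channel, tol=5.0):
    """Mark a pixel as a contour boundary when its intensity differs from
    a neighbor by more than `tol`; neighbors with close intensities are
    taken to belong to the same temperature contour."""
    channel = channel.astype(float)
    dy = np.abs(np.diff(channel, axis=0))  # vertical neighbor differences
    dx = np.abs(np.diff(channel, axis=1))  # horizontal neighbor differences
    edges = np.zeros(channel.shape, dtype=bool)
    edges[:-1, :] |= dy > tol
    edges[:, :-1] |= dx > tol
    return edges
```

Applied to the red and blue elements (the green element being uninformative here), the union of the per-element edge maps would give the contour boundaries.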
Procedia PDF Downloads 314
1817 Multiple Medical Landmark Detection on X-Ray Scan Using Reinforcement Learning
Authors: Vijaya Yuvaram Singh V M, Kameshwar Rao J V
Abstract:
A challenge in developing neural-network-based methods for medical imaging is the availability of data. Anatomical landmark detection in the medical domain is the process of finding points on the x-ray scan of a patient. Most of the time, this task is done manually by trained professionals, as it requires precision and domain knowledge. Traditionally, object-detection-based methods are used for landmark detection. Here, we utilize reinforcement learning and a query-based method to train a single agent capable of detecting multiple landmarks. A deep Q-network agent is trained to detect single and multiple landmarks on the hip and shoulder from x-ray scans of a patient. Training a single agent to find multiple landmarks makes the approach superior to having an individual agent per landmark. For the initial study, five images of different patients were used as the environment, and the agent's performance was tested on two unseen images.
Keywords: reinforcement learning, medical landmark detection, multi target detection, deep neural network
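The reinforcement-learning formulation can be illustrated with a toy tabular Q-learning agent searching a 1-D scan line for a landmark. The paper's actual agent is a deep Q-network operating on 2-D x-ray images, so everything below (states, actions, reward shaping, hyperparameters) is a deliberately simplified assumption:

```python
import random

def train_landmark_agent(n_pos=10, landmark=7, episodes=500, seed=0):
    """Toy stand-in for the paper's deep Q-network: an agent on a 1-D
    line of positions learns, via tabular Q-learning, to step toward a
    landmark. Reward is +1 for moving closer, -1 otherwise."""
    rng = random.Random(seed)
    actions = (-1, 1)  # step left or right
    q = {(s, a): 0.0 for s in range(n_pos) for a in actions}
    alpha, gamma, eps = 0.5, 0.9, 0.2
    for _ in range(episodes):
        s = rng.randrange(n_pos)
        for _ in range(50):
            if s == landmark:
                break  # landmark found: episode ends
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), n_pos - 1)
            r = 1.0 if abs(s2 - landmark) < abs(s - landmark) else -1.0
            best_next = 0.0 if s2 == landmark else max(q[(s2, x)] for x in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train_landmark_agent()
# Greedy action from a state left of the landmark.
print(max((-1, 1), key=lambda a: q[(2, a)]))
```

In the multi-landmark setting of the paper, the query-based method conditions the single agent on which landmark it is currently asked to find, rather than training one agent per landmark.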
Procedia PDF Downloads 142
1816 Multi-Scale Geographic Object-Based Image Analysis (GEOBIA) Approach to Segment Very High Resolution Images for the Extraction of New Degraded Zones: Application to the Region of Mécheria in the South-West of Algeria
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of the increasingly irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mécheria department has generated a new situation characterized by the reduction of vegetation cover, the decrease of land productivity, and sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of ancient dune cords, based on the numerical processing of PlanetScope PSB.SB sensor images from September 29, 2021. As a second step, we explore the use of a multi-scale geographic object-based image analysis (GEOBIA) approach to segment high-spatial-resolution images acquired over heterogeneous surfaces that vary according to human influence on the environment. We used the fractal net evolution approach (FNEA) algorithm to segment the images (Baatz & Schäpe, 2000). Multispectral data, a digital terrain model layer, ground truth data, a normalized difference vegetation index (NDVI) layer, and a first-order texture (entropy) layer were used to segment the multispectral images at three segmentation scales, with an emphasis on accurately delineating the boundaries and components of the sand accumulation areas (dunes, dune fields, nebkas, and barchans). It is important to note that each auxiliary data layer contributed to improving the segmentation at different scales. The silted areas were then classified using a nearest-neighbor approach over the Naâma area.
The classification of silted areas was successfully achieved over all study areas with an accuracy greater than 85%, although the results suggest that, overall, a higher degree of landscape heterogeneity may have a negative effect on segmentation and classification. Some areas suffered from the greatest over-segmentation and the lowest mapping accuracy (Kappa: 0.79), which was partially attributed to confounding a greater proportion of mixed siltation classes from both sandy areas and bare ground patches. This research has demonstrated a technique based on very high-resolution images for mapping sanded and degraded areas using GEOBIA, which can be applied to the study of other lands in the steppe areas of the northern countries of the African continent.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
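The NDVI layer used as an auxiliary input above is computed per pixel from the red and near-infrared bands; a minimal sketch:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red bands.

    Values near +1 indicate dense vegetation; values near 0 or below
    indicate bare soil, sand, or water. `eps` guards against division
    by zero on dark pixels.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)
```

In a GEOBIA workflow this raster would be stacked with the multispectral bands, the terrain model, and the entropy texture layer before FNEA segmentation.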
Procedia PDF Downloads 109
1815 Vision Aided INS for Soft Landing
Authors: R. Sri Karthi Krishna, A. Saravana Kumar, Kesava Brahmaji, V. S. Vinoj
Abstract:
The lunar surface may contain rough and non-uniform terrain with dips and peaks. Soft landing is a method of landing the lander on the lunar surface without any damage to the vehicle. This project focuses on finding a safe landing site for the vehicle by developing a method for determining the lateral velocity of the lunar lander. This is done by processing real-time images obtained by means of an on-board vision sensor. The hazard avoidance phase of the soft landing starts when the vehicle is about 200 m above the lunar surface. Here, the lander has a very low velocity of about 10 cm/s vertically and 5 m/s horizontally. On detection of a hazard, the lander is navigated by controlling the vertical and lateral velocity. In order to find an appropriate landing site and navigate accordingly, image processing is performed continuously. Images are taken until the landing site is determined and the lander safely lands on the lunar surface. By integrating this vision-based navigation with the INS, better accuracy for the soft landing of the lunar lander can be obtained.
Keywords: vision aided INS, image processing, lateral velocity estimation, materials engineering
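One common way to estimate lateral velocity from successive nadir images is FFT phase correlation: the correlation peak gives the pixel shift between frames, which a ground sampling distance and the frame interval convert to meters per second. The abstract does not specify its algorithm, so the sketch below is a generic illustration and the GSD and frame-interval parameters are assumptions:

```python
import numpy as np

def lateral_velocity(frame1, frame2, gsd_m, dt_s):
    """Estimate lateral speed (m/s) from two nadir camera frames.

    Phase correlation: the normalized cross-power spectrum's inverse
    FFT peaks at the translation between the frames. gsd_m is the
    ground sampling distance (m/pixel), dt_s the frame interval (s).
    """
    f1 = np.fft.fft2(frame1)
    f2 = np.fft.fft2(frame2)
    cross = f1 * np.conj(f2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Wrap shifts larger than half the frame size to negative values.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return np.hypot(*shifts) * gsd_m / dt_s
```

Fusing this vision-derived velocity with the INS prediction (e.g. in a Kalman filter) would bound the inertial drift during descent.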
Procedia PDF Downloads 466
1814 Assessing Prescribed Burn Severity in the Wetlands of the Paraná River, Argentina
Authors: Virginia Venturini, Elisabet Walker, Aylen Carrasco-Millan
Abstract:
Latin America stands at the forefront of climate change impacts, with forecasts projecting accelerated temperature and sea level rises compared to the global average. These changes are set to trigger a cascade of effects, including coastal retreat, intensified droughts in some nations, and heightened flood risks in others. In Argentina, wildfires have historically affected forests, but since 2004, wetland fires have emerged as a pressing concern. By 2021, a high-risk scenario had formed naturally in the wetlands of the Paraná River: very low water levels in the rivers and excessive standing dead plant material (fuel) triggered most of the fires recorded in this vast wetland region during 2020-2021. During 2008, fire events devastated nearly 15% of the Paraná Delta, and by late 2021 new fires had burned more than 300,000 ha of these same wetlands. Therefore, the goal of this work is to explore remote sensing tools for monitoring environmental conditions and the severity of prescribed burns in the Paraná River wetlands. Two prescribed burning experiments were carried out in the study area (31°40'05'' S, 60°34'40'' W) during September 2023. The first experiment was carried out on September 13th in a 0.5 ha plot whose dominant vegetation was Echinochloa sp. and Thalia, while the second was done on September 29th in a 0.7 ha plot next to the first burned parcel, where the dominant vegetation species were Echinochloa sp. and Solanum glaucophyllum. Field campaigns were conducted between September 8th and November 8th to assess the severity of the prescribed burns. Flight surveys were conducted using a DJI Inspire II drone equipped with a Sentera NDVI camera. Burn severity was then quantified by analyzing the images captured by the Sentera camera along with data from the Sentinel-2 satellite mission.
This involved subtracting the NDVI images obtained before and after the burn experiments. The results from both data sources demonstrate a highly heterogeneous impact of fire within the patch. Mean severity values for the first experiment were about 0.16 with the drone NDVI images and 0.18 with the Sentinel images; for the second experiment, mean values were approximately 0.17 with the drone and 0.16 with the Sentinel images. Thus, most of the pixels showed low fire severity, and only a few pixels presented moderate burn severity on the wildfire scale. The undisturbed plots maintained consistent mean NDVI values throughout the experiments. Moreover, the severity assessment of each experiment revealed that the vegetation was not completely dry, despite experiencing extreme drought conditions.
Keywords: prescribed-burn, severity, NDVI, wetlands
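The pre/post-fire NDVI differencing described in this abstract can be sketched as below; the severity-class thresholds are illustrative assumptions (loosely modeled on common burn-severity scales), not values from the study:

```python
def burn_severity(ndvi_pre, ndvi_post):
    """Burn severity as the drop in NDVI after the fire (dNDVI).

    Higher values indicate stronger vegetation loss within the pixel.
    """
    return ndvi_pre - ndvi_post

def severity_class(dndvi):
    """Map a dNDVI value to a coarse label. Thresholds are illustrative
    placeholders, not the scale used by the authors."""
    if dndvi < 0.10:
        return "unburned/very low"
    if dndvi < 0.27:
        return "low"
    if dndvi < 0.44:
        return "moderate"
    return "high"
```

Under these placeholder thresholds, the mean dNDVI values of about 0.16-0.18 reported above would fall in the "low" band, consistent with the authors' reading of the results.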
Procedia PDF Downloads 68
1813 Optimization of Solar Tracking Systems
Authors: A. Zaher, A. Traore, F. Thiéry, T. Talbert, B. Shaer
Abstract:
In this paper, an intelligent approach is proposed to optimize the orientation of continuous solar tracking systems on cloudy days. Under a clear sky, direct sunlight is more important than diffuse radiation, so the panel is always pointed towards the sun. Under an overcast sky, the solar beam is close to zero, and the panel is placed horizontally to receive the maximum diffuse radiation. Under partly covered conditions, the panel must be pointed towards the source that emits the maximum solar energy, which may be anywhere in the sky dome. The idea of our approach is therefore to analyze images captured by a ground-based sky camera system in order to detect the zone of the sky dome that constitutes the optimal source of energy under cloudy conditions. The proposed approach is implemented on an experimental setup developed at the PROMES-CNRS laboratory in Perpignan, France. Under overcast conditions, the results were very satisfactory, and the intelligent approach provided efficiency gains of up to 9% relative to conventional continuous sun tracking systems.
Keywords: clouds detection, fuzzy inference systems, images processing, sun trackers
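A simple way to locate the most energetic zone of the sky dome in a camera image is to take the centroid of the brightest pixels. This is only a sketch of the underlying idea, not the fuzzy-inference method the authors actually implemented, and the brightness fraction is an arbitrary choice:

```python
import numpy as np

def brightest_region_direction(sky_img, frac=0.95):
    """Return the (row, col) centroid of the brightest pixels in a
    sky-camera image, as a crude proxy for the sky-dome zone emitting
    the maximum energy. `frac` sets the brightness cutoff relative to
    the image maximum."""
    thresh = sky_img.max() * frac
    ys, xs = np.nonzero(sky_img >= thresh)
    return ys.mean(), xs.mean()
```

A real tracker would then map this image coordinate, through the camera's fisheye calibration, to an azimuth/elevation command for the panel.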
Procedia PDF Downloads 192
1812 RoboWeedSupport: Sub-Millimeter Weed Image Acquisition in Cereal Crops at Speeds of up to 50 km/h
Authors: Morten Stigaard Laursen, Rasmus Nyholm Jørgensen, Mads Dyrmann, Robert Poulsen
Abstract:
For the past three years, the Danish project RoboWeedSupport has sought to bridge the gap between the potential herbicide savings offered by a decision support system and the required weed inspections. In order to automate the weed inspections, it is desirable to generate a map of the weed species present within the field; to generate this map, images must be captured with samples covering the field. This paper investigates the economic cost of performing this data collection with a camera system mounted on an all-terrain vehicle (ATV) able to drive and collect data at up to 50 km/h while still maintaining an image quality sufficient for identifying newly emerged grass weeds. The economic estimates are based on approximately 100 hectares recorded at three different locations in Denmark. With an average image density of 99 images per hectare, the ATV had a capacity of 28 ha per hour, which is estimated to cost 6.6 EUR/ha. Alternatively, relying on a boom solution for an existing tractor, a cost of 2.4 EUR/ha was estimated to be obtainable under equal conditions.
Keywords: weed mapping, integrated weed management, weed recognition, image acquisition
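The figures quoted in this abstract can be checked with back-of-envelope arithmetic. The implied hourly operating cost is derived here from the paper's own numbers and is not stated in it:

```python
# Quantities quoted in the abstract.
capacity_ha_per_h = 28.0   # ATV coverage rate
cost_eur_per_ha = 6.6      # estimated acquisition cost
images_per_ha = 99         # average image density

# Implied hourly operating cost (derived, not stated in the paper).
hourly_cost = capacity_ha_per_h * cost_eur_per_ha   # ~184.8 EUR/h

# Implied image throughput the camera system must sustain.
images_per_hour = capacity_ha_per_h * images_per_ha  # 2772 images/h

print(round(hourly_cost, 1), int(images_per_hour))
```

At roughly 2,800 images per hour, the camera must capture and store a frame about every 1.3 seconds on average while moving at up to 50 km/h, which is why image quality at speed is the central constraint.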
Procedia PDF Downloads 233
1811 Increasing the Apparent Time Resolution of Tc-99m Diethylenetriamine Pentaacetic Acid Galactosyl Human Serum Albumin Dynamic SPECT by Use of a 180-Degree Interpolation Method
Authors: Yasuyuki Takahashi, Maya Yamashita, Kyoko Saito
Abstract:
In general, dynamic SPECT data acquisition needs a few minutes for one rotation, so the time-activity curve (TAC) derived from dynamic SPECT is relatively coarse. In order to effectively shorten the interval between data points, we adopted a 180-degree interpolation method, which is already used for the reconstruction of X-ray CT data. In this study, we applied this 180-degree interpolation method to SPECT and investigated its effectiveness. To briefly describe the method: the 180-degree data from the second half of one rotation are combined with the 180-degree data from the first half of the next rotation to generate a 360-degree data set appropriate for the time halfway between the first and second rotations. In both a phantom study and a patient study, the data points from the interpolated images fell in good agreement with the data points tracking the accumulation of 99mTc activity over time for the appropriate regions of interest. We conclude that data derived from interpolated images improve the apparent time resolution of dynamic SPECT.
Keywords: dynamic SPECT, time resolution, 180-degree interpolation method, 99mTc-GSA
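The half-rotation recombination described above can be sketched on arrays of projection data; the angular ordering convention below is an assumption:

```python
import numpy as np

def interleave_rotations(rot_a, rot_b):
    """180-degree interpolation for dynamic SPECT.

    The projections from the second half (180-360 deg) of rotation
    `rot_a` are combined with those from the first half (0-180 deg) of
    the next rotation `rot_b`, yielding an extra 360-degree data set
    centered halfway in time between the two rotations. Arrays are
    assumed to be ordered by projection angle along axis 0.
    """
    n = rot_a.shape[0]  # projections per full rotation
    half = n // 2
    return np.concatenate([rot_b[:half], rot_a[half:]])
```

Applying this between every consecutive pair of rotations doubles the number of reconstructable time points without changing the acquisition itself.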
Procedia PDF Downloads 493
1810 The Conflict between Empowerment and Exploitation: The Hypersexualization of Women in the Media
Authors: Seung Won Park
Abstract:
Pornographic images are becoming increasingly normalized as innovations in media technology arise, the porn industry grows explosively, and transnational capitalism spreads due to government deregulation and the privatization of media. As the media evolves, pornography has become more and more violent and non-consensual; this growth of 'raunch culture' reifies the traditional power balance between men and women in which men are dominant and women are submissive. This male domination objectifies and commodifies women, reducing them to merely sexual objects for the gratification of men. Women are exposed to pornographic images at younger and younger ages, providing unhealthy sexual role models and teaching them lessons about sexual behavior before the onset of puberty. The increasingly sexualized depiction of women positions them as appropriately desirable and available to men. As a result, women are not only viewed as sexual prey but also end up treating themselves primarily as sexual objects, basing their worth on their sexuality alone. Although many scholars are aware of and have written on the great lack of agency exercised by women in these representations, the general public tends to view some of these women as being empowered rather than exploited. Scholarly discourse is constrained by the popular misconception that the construction of women's sexuality in the media is controlled by women themselves.
Keywords: construction of gender, hypersexualization, media, objectification
Procedia PDF Downloads 296
1809 Control of Microbial Pollution Using Biodegradable Polymer
Authors: Mahmoud H. Abu Elella, Riham R. Mohamed, Magdy W. Sabaa
Abstract:
Introduction: Microbial pollution is a global problem threatening human health. It is caused by pathogenic microorganisms such as Escherichia coli (E. coli), Staphylococcus aureus (S. aureus) and other pathogenic strains. These have dangerous effects on human health, so great efforts have been exerted to produce new and effective antimicrobial agents. Nowadays, natural polysaccharides, such as chitosan and its derivatives, are used as antimicrobial agents. The aim of our work is to synthesize a biodegradable polymer such as N-quaternized chitosan (NQC), characterize it using different analysis techniques, such as Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM), and use it as an antibacterial agent against different pathogenic bacteria. Methods: NQC was synthesized using dimethyl sulphate. Results: The FTIR spectra exhibited the absorption peaks of NQC, the SEM images illustrated that the surface of NQC was smooth, and the antibacterial tests showed that NQC had a high antibacterial effect. Discussion: NQC was prepared, as proved by FTIR and SEM, and the antibacterial results showed that NQC is an effective antibacterial agent.
Keywords: antimicrobial agent, N-quaternized chitosan chloride, silver nanocomposites, sodium polyacrylate
Procedia PDF Downloads 288
1808 Unreality of Real: Debordean Reading of Gillian Flynn's Gone Girl
Authors: Sahand Hamed Moeel Ardebil, Zohreh Taebi Noghondari, Mahmood Reza Ghorban Sabbagh
Abstract:
Gillian Flynn's Gone Girl depicts a society in which, as a result of media dominance, reality is very precarious and difficult to grasp. In Gone Girl, reality and the image of reality represented on TV are challenging to differentiate. Along with reality, individuals' agency and independence before the media and capitalist rule are called into question in the novel. In order to expose the unstable nature of reality and an individual's complicated relationship with media, this study deploys the ideas of the Marxist media theorist Guy Debord (1931-1994). In his book The Society of the Spectacle (1967), Debord delineates a society in which images replace objective reality and people are incapable of making real changes. The results of the current study show that, despite their efforts, Nick and Amy, the two main characters of the novel, are no more than spectators with very little agency before the media. Moreover, following Debord's argument about the replacement of reality with images, everyone and every institution in Gone Girl projects an image that does not necessarily embody objective reality, a fact that makes it very hard to differentiate the real from the unreal.
Keywords: agency, Debord, Gone Girl, media studies, society of spectacle, reality
Procedia PDF Downloads 122
1807 A Multimodal Measurement Approach Using Narratives and Eye Tracking to Investigate Visual Behaviour in Perceiving Naturalistic and Urban Environments
Authors: Khizar Z. Choudhrya, Richard Coles, Salman Qureshi, Robert Ashford, Salim Khan, Rabia R. Mir
Abstract:
The majority of existing landscape research has been derived from heuristic evaluations, without empirical insight into real participants' visual responses. In this research, a modern multimodal measurement approach (using narratives and eye tracking) was applied to investigate visual behaviour in perceiving naturalistic and urban environments. This research is unique in exploring gaze behaviour on environmental images possessing different levels of saliency, since eye behaviour is predominantly attracted by salient locations. The methodology of this research on naturalistic and urban environments is drawn from approaches used in market research: borrowing methodologies that examine visual responses and qualities provided a critical and hitherto unexplored approach. The research was conducted using mixed quantitative and qualitative methods. On the whole, the results corroborated existing landscape research findings, but they also identified potential refinements. The research contributes both methodologically and empirically to human-environment interaction (HEI). The study focused on initial impressions of environmental images with the help of eye tracking and, taking into consideration the importance of the image, explored the factors that influence initial fixations in relation to expectations and preferences. A key finding is that each participant has his or her own unique navigation style when moving through the different elements of a landscape image; this individual navigation style is given the name 'visual signature'. The study adds the clarity needed to complete the picture and offers insight for future landscape researchers.
Keywords: human-environment interaction (HEI), multimodal measurement, narratives, eye tracking
Procedia PDF Downloads 339
1806 A User Interface for Easiest Way Image Encryption with Chaos
Authors: D. López-Mancilla, J. M. Roblero-Villa
Abstract:
Since 1990, research on chaotic dynamics has received considerable attention, particularly in light of potential applications of this phenomenon in secure communications. Data encryption using chaotic systems was reported in the 1990s as a new approach to signal encoding, differing from conventional methods that use numerical algorithms as the encryption key. Algorithms for image encryption have received much attention because of the need to secure image transmission in real time over the internet and wireless networks. Known algorithms for image encryption, such as the Data Encryption Standard (DES), suffer from low efficiency when the image is large. Chaos-based encryption offers a new and efficient way to achieve fast and highly secure image encryption. In this work, a user interface for image encryption and a simple way to encrypt images using chaos are presented. The main idea is to reshape an image into an n-dimensional vector and combine it with a vector extracted from a chaotic system, in such a way that the image vector is hidden within the chaotic vector. Once this is done, an array with the original dimensions of the image is formed to recover it. The security of the encrypted images is assessed by statistical analysis, and an optimization stage is used to improve the encryption security while allowing the image to be accurately recovered. The user interface uses the algorithms designed for image encryption, allowing the user to read an image from the hard drive or another external device and to encrypt it in one of three modes, each given by a different chaotic system the user can choose. Once the image is encrypted, the user can view the security analysis and save the result to the hard disk.
The main results of this study show that this simple encryption method, using the optimization stage, achieves a level of security competitive with the more complicated encryption methods used in other works. In addition, the user interface allows images encrypted with chaos to be transmitted through any public communication channel, including the internet.
Keywords: image encryption, chaos, secure communications, user interface
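The chaotic masking idea described in this abstract can be illustrated with a minimal sketch: flatten the image to a vector, generate a keystream from a chaotic map, and combine the two. This is not the authors' implementation; the logistic map, its parameters, and the byte quantization below are assumptions chosen purely for illustration.

```python
def logistic_keystream(x0, r, n):
    """Iterate the logistic map x -> r*x*(1-x) and quantize each state to a byte."""
    stream, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)
    return stream

def chaos_encrypt(pixels, x0=0.3571, r=3.99):
    """XOR a flattened image (list of 0-255 pixel values) with a chaotic keystream."""
    ks = logistic_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]

# Decryption is the same XOR with the same key (x0, r):
image = [12, 200, 47, 255, 0, 89]   # toy "image" reshaped to a vector
cipher = chaos_encrypt(image)
plain = chaos_encrypt(cipher)       # XOR twice with the same keystream restores the original
```

Because XOR is its own inverse, the same function with the same chaotic key recovers the image exactly, which mirrors the accurate-recovery property the abstract claims.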
Procedia PDF Downloads 489
1805 Advertising Campaigns for a Sustainable Future: The Fight against Plastic Pollution in the Ocean
Authors: Mokhlisur Rahman
Abstract:
The ocean hosts one of the most complex ecosystems on the planet, regulating the earth's climate and weather and providing a hospitable environment in which to live. It provides food through the many livelihoods that depend on it, transportation by accommodating the world's biggest carriers, recreation by offering its beauty in many moods, and a home to countless species. Seeking recreation, consumers choose to be close to the ocean, and their activities can threaten marine life and the environment. Much of the waste consumers discard into the ocean is plastic, which floats and breaks into thousands of micro pieces that are hard to see with the naked eye but easily eaten by sea species; this disrupts the natural feeding of living species and makes them sick. This information is not known by most consumers who occasionally visit the sea or seashore, nor is it widely discussed, which creates an information gap. However, advertising is a powerful tool to educate people about ocean pollution. This abstract analyzes three major ocean-saving advertising campaigns that use innovative and advanced technology to get maximum exposure. The study collects data from the selected campaigns' websites and retrieves all available content related to messages, videos, and images. First, the SeaLegacy campaign uses stunning images to create awareness, along with social media content, videos, and other educational content. It creates content and strategies to build an emotional connection among consumers that encourages them to take action, and the messages in the campaign empower consumers through powerful words.
Second, the Ocean Conservancy campaign uses social media marketing, events, and educational content to protect the ocean from various threats, including plastics, climate change, and overfishing. It uses powerful images and videos of marine life; its mission is to create evidence-based solutions toward a healthy ocean, and its messaging addresses local communities along with sea species. Third, The Ocean Cleanup is a campaign that applies innovative technologies to remove plastic waste from the ocean; it uses social media, digital, and email marketing to reach people and raise awareness, and images and videos to evoke an emotional response and prompt action. These three advertisements use realistic images, powerful words, and the presence of living species in their imagery, which are eye-catching and can build an emotional connection with consumers. Identifying the effectiveness of the messages these advertisements carry, and of their strategies, highlights the public's knowledge gap between real pollution and its consequences and makes the message more accessible to a wide audience. This study aims to provide insights into the effectiveness of ocean-saving advertising campaigns and their impact on public awareness of ocean conservation. The findings from this study will help shape future campaigns.
Keywords: advertising-campaign, content-creation, images, ocean-saving technology, videos
Procedia PDF Downloads 78
1804 Evaluation of Fusion Sonar and Stereo Camera System for 3D Reconstruction of Underwater Archaeological Object
Authors: Yadpiroon Onmek, Jean Triboulet, Sebastien Druon, Bruno Jouvencel
Abstract:
The objective of this paper is to develop 3D underwater reconstruction of archaeological objects, based on the fusion of a sonar system and a stereo camera system. The underwater images are obtained from a calibrated camera system. Multiple image pairs are input; we first address the image processing problem by applying well-known filters to improve the quality of the underwater images. The features of interest between image pairs are selected by well-known methods: a FAST detector with FLANN-based descriptor matching. Subsequently, the RANSAC method is applied to reject outlier points. The putative inliers are triangulated to produce local sparse point clouds in 3D space, using a pinhole camera model and Euclidean distance estimation. The SfM technique is used to build the global sparse point cloud. Finally, the ICP method is used to fuse the sonar information with the stereo model. The final 3D models are validated by comparing measurements against the real object.
Keywords: 3D reconstruction, archaeology, fusion, stereo system, sonar system, underwater
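As a toy illustration of the triangulation step under the pinhole camera model mentioned above, depth can be recovered from the disparity of a matched feature in a rectified stereo pair. The focal length, baseline, and pixel coordinates below are illustrative assumptions, not values from the paper:

```python
def triangulate_rectified(xl, xr, y, f, baseline, cx, cy):
    """Triangulate one matched feature from a rectified stereo pair.

    xl, xr: x-coordinates of the match in the left/right images (pixels)
    f: focal length in pixels; baseline: camera separation in metres
    cx, cy: principal point. Returns (X, Y, Z) in the left-camera frame.
    """
    d = xl - xr                     # disparity; larger disparity means a closer point
    Z = f * baseline / d            # pinhole depth from disparity
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return (X, Y, Z)

# A feature matched at x=420 px (left) and x=380 px (right), f=800 px, 0.12 m baseline:
X, Y, Z = triangulate_rectified(420, 380, 260, 800, 0.12, 320, 240)
```

Repeating this over all RANSAC inliers yields the local sparse point cloud the abstract describes; full SfM then registers many such clouds into a global model.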
Procedia PDF Downloads 299
1803 Computer-Aided Exudate Diagnosis for the Screening of Diabetic Retinopathy
Authors: Shu-Min Tsao, Chung-Ming Lo, Shao-Chun Chen
Abstract:
Many diabetes patients suffer from retinal complications of the disease; early detection and early treatment are therefore important. In clinical examinations, color fundus imaging is the most convenient and widely available method. The status of the retina can be assessed from the exudates that appear in the retinal image. However, routine screening for diabetic retinopathy from color fundus images imposes time-consuming work on physicians. This study thus proposed a computer-aided exudate diagnosis for the screening of diabetic retinopathy. After removing vessels and the optic disc from the retinal image, six quantitative features, including region number, region area, and gray-scale values, were extracted from the remaining regions for classification. All six features were evaluated to be statistically significant (p-value < 0.001). The accuracy of classifying retinal images into normal and diabetic retinopathy reached 82%. Based on this system, the clinical workload could be reduced, and the examination procedure made more efficient.
Keywords: computer-aided diagnosis, diabetic retinopathy, exudate, image processing
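Two of the quantitative features mentioned above, region number and region area, can be computed from a binary mask of candidate exudates with a simple connected-component pass. This is an illustrative sketch (4-connectivity and the toy mask are assumptions, not the authors' implementation):

```python
def region_features(mask):
    """Count connected regions (4-connectivity) in a binary mask and return
    (region_number, [region_areas]), two of the quantitative features above."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # Flood-fill one region, accumulating its pixel count.
                stack, area = [(i, j)], 0
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return len(areas), areas

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
n, areas = region_features(mask)    # three regions, of areas 3, 2, and 1
```

Per-region gray-scale statistics (the remaining features) would be accumulated in the same flood-fill loop from the original intensity image.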
Procedia PDF Downloads 269
1802 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data; Impact of Image format
Authors: Maryam Fallahpoor, Biswajeet Pradhan
Abstract:
Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretation and thereby reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out because it obviates the need for manual feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: Neuroimaging Informatics Technology Initiative (NIfTI) and Digital Imaging and Communications in Medicine (DICOM). Purpose: This study aims to employ DL to classify COVID-19-infected pulmonary patients and normal cases from 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512 but varied in slice number. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images were cropped and resampled to uniform dimensions of 128 × 128 × 60, with resolution standardized to 1 mm × 1 mm × 1 mm and image intensities confined to the range (−1000, 400) Hounsfield units (HU). For classification, positive pulmonary COVID-19 involvement was labeled 1, and normal images were labeled 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions.
The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to loss of information during conversion of the original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area Under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format
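The intensity preprocessing described in the Methods (clipping to the (−1000, 400) HU window, then normalization and zero-centering) can be sketched as follows. The HU window comes from the abstract, while the helper itself is an illustrative assumption, shown on a flat list rather than a full 3D volume:

```python
def preprocess_hu(voxels, hu_min=-1000, hu_max=400):
    """Clip CT intensities to the (-1000, 400) HU window, scale to [0, 1],
    then zero-center, i.e., the normalization steps described in the abstract."""
    clipped = [min(max(v, hu_min), hu_max) for v in voxels]
    scaled = [(v - hu_min) / (hu_max - hu_min) for v in clipped]
    mean = sum(scaled) / len(scaled)
    return [v - mean for v in scaled]

scan = [-2000, -1000, -300, 0, 400, 1500]   # toy voxel intensities in HU
out = preprocess_hu(scan)                   # clipped, scaled, and zero-centered
```

In practice the same windowing and zero-centering would be applied voxel-wise to each 128 × 128 × 60 volume before it is fed to the 3D CNNs.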
Procedia PDF Downloads 88
1801 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Authors: Donatella Giuliani
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. First, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique inspired by the flashing behaviour of fireflies; in this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. These means are then used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying the Bayes rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears solid and reliable even when applied to complex grayscale images. Validation was performed using several standard measures: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The achieved results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is that using the maxima of the responsibilities for pixel assignment yields a considerable reduction in computational cost.
Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation
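The Expectation-Maximization step at the heart of the method can be sketched for a one-dimensional two-component mixture, with given initial means standing in for the Firefly-derived ones. All data and starting values here are illustrative assumptions:

```python
import math

def em_gmm_1d(xs, means, iters=50):
    """Fit a two-component 1D Gaussian mixture by Expectation-Maximization.
    `means` plays the role of the Firefly-initialized cluster means."""
    k = len(means)
    weights = [1.0 / k] * k   # mixing coefficients = component priors
    varis = [1.0] * k
    for _ in range(iters):
        # E-step: responsibilities (posterior probability of each component per sample)
        resp = []
        for x in xs:
            p = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                 for w, m, v in zip(weights, means, varis)]
            s = sum(p)
            resp.append([pi / s for pi in p])
        # M-step: re-estimate weights, means, and variances from the responsibilities
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(xs)
            means[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            varis[j] = max(sum(r[j] * (x - means[j]) ** 2
                               for r, x in zip(resp, xs)) / nj, 1e-6)
    return weights, means, varis

# Two well-separated intensity clusters; each pixel is then assigned to the
# component with the maximum responsibility, as in the segmentation step above.
xs = [10, 11, 12, 9, 10, 60, 61, 59, 62, 60]
w, m, v = em_gmm_1d(xs, means=[20.0, 50.0])
```

With these starting means the fit converges to cluster means near 10.4 and 60.4, matching the intended intensity clusters.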
Procedia PDF Downloads 217
1800 High-Speed LIF-OH Imaging of H2-Air Turbulent Premixed Flames
Authors: Ahmed A. Al-Harbi
Abstract:
This paper presents a comparative study of the effects of repeated solid obstacles on the propagation of H2-air premixed flames. Pressure, flame-front speed, and the structure of the reaction zones are studied for hydrogen. Hydrogen fuel mixtures at two equivalence ratios, 0.7 and 0.8 (a range limited by the excessive overpressures), are examined for different configurations of three baffle plates and two obstacles of square cross-section with blockage ratios of either 0.24 or 0.5. The results show that the peak pressure and its rate of change can be increased by increasing the blockage ratio or by decreasing the space between successive baffles. As illustrated by the high-speed LIF-OH images, the degree of wrinkling and contortion in the flame front increases as the blockage increases. The images also show how the flame front relaminarises with increasing distance between obstacles, which accounts for the pressure decrease with increasing separation. It is also found that more than one obstacle is needed to achieve a turbulent flame structure with intense corrugations.
Keywords: premixed propagating flames, flame-obstacle interaction, turbulent premixed flames, overpressure, transient flames
Procedia PDF Downloads 377
1799 An Investigation of Customers’ Perception and Attitude towards Krung Thai Bank in Thailand
Authors: Phatthanan Chaiyabut
Abstract:
The purposes of this research were to identify customers' perception of Krung Thai Bank's image and to understand customer attitudes towards that image in Bangkok, Thailand. The research used a quantitative approach with a questionnaire as the data collection tool. A sample of 420 respondents was selected by simple random sampling. The findings revealed that most respondents received information and news concerning the bank primarily through television, and this channel significantly influenced customers and their decisions to use the bank's products and services. Regarding attitudes towards the overall image of the bank, the majority of respondents rated the bank's image as good. The three highest-rated image attributes were supporting the government's monetary policies, being renowned and stable, and contributing to economic reform and development, with mean scores of 4.01, 3.96, and 3.81 respectively. Attributes concerning business leadership in banking, marketing, and competition, namely offering prompt services and providing appropriate service times, were rated moderate at 3.36 and 3.30 respectively.
Keywords: attitude, image, Krung Thai Bank, perception
Procedia PDF Downloads 414
1798 Based on MR Spectroscopy, Metabolite Ratio Analysis of MRI Images for Metastatic Lesion
Authors: Hossain A, Hossain S.
Abstract:
Introduction: In a small cohort, we sought to assess the ability of magnetic resonance spectroscopy (MRS) to predict the presence of metastatic lesions. Method: Patients with neuroepithelial tumors were enrolled at Popular Diagnostic Centre Limited. 1H CSI MRS of the brain allows us to detect changes in the concentration of specific metabolites caused by metastatic lesions, among them N-acetyl-aspartate (NAA), creatine (Cr), and choline (Cho). For Cho, NAA, Cr, and Cr₂, the metabolite ratios were calculated by the division method. Results: The NAA values were 0.63 and 5.65 for tumor cells, 1.86 and 5.66 for normal cells 1, and 1.86 and 5.66 for normal cells 2; NAA values of 1.84 and 10.6 were also reported for normal cells 1, and 1.86 for normal cells 2. Cho levels were as low as 0.8 and 10.53 in the tumor cells, compared to 1.12 and 2.7 in normal cells 1 and 1.24 and 6.36 in normal cells 2. Cho/Cr₂ barely distinguished itself from the other ratios in terms of significance. For tumor cells, the ratios Cho/NAA, Cho/Cr₂, NAA/Cho, and NAA/Cr₂ were significant; for normal cells 1, the significant ratios were Cho/NAA, Cho/Cr, NAA/Cho, and NAA/Cr. Conclusion: The clinical outcome can be improved by using 1H-MRSI to guide the extent of resection for metastatic lesions. MRS is non-invasive, presents no procedural difficulties, and has been shown to predict the detection of metastatic lesions.
Keywords: metabolite ratio, MRI images, metastatic lesion, MR spectroscopy, N-acetyl-aspartate
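The division method for the metabolite ratios reported above amounts to simple element-wise division of metabolite concentrations; a minimal sketch (the numeric values are illustrative, not patient data from the study):

```python
def metabolite_ratios(cho, naa, cr, cr2):
    """Compute the metabolite ratios examined above by the division method."""
    return {
        "Cho/NAA": cho / naa,
        "Cho/Cr": cho / cr,
        "Cho/Cr2": cho / cr2,
        "NAA/Cho": naa / cho,
        "NAA/Cr": naa / cr,
        "NAA/Cr2": naa / cr2,
    }

# Illustrative (not patient) values: an elevated Cho/NAA ratio is the kind of
# marker examined above when screening for metastatic lesions.
r = metabolite_ratios(cho=2.0, naa=1.0, cr=1.5, cr2=1.2)
```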
Procedia PDF Downloads 96
1797 Filling the Gaps with Representation: Netflix’s Anne with an E as a Way to Reveal What the Text Hid
Authors: Arkadiusz Adam Gardaś
Abstract:
In his theory of gaps, Wolfgang Iser states that literary texts often lack direct messages. Instead of using straightforward descriptions, authors leave gaps or blanks, i.e., spaces within the text that come into existence only when readers fill them with their own understanding and experiences. This paper's aim is to present Iser's literary theory in an intersectional way by comparing it to the idea of intersemiotic translation. More precisely, the author uses the example of Netflix's adaptation of Lucy Maud Montgomery's Anne of Green Gables as a form of rendering a book into a film in such a way that certain textual gaps are filled with film images. Intersemiotic translation is a rendition in which signs of one kind of media are translated into the signs of another; film adaptations are the most common, but not the only, type. In this case, the role of the translator is taken by a screenwriter, whose role can reach beyond the direct meaning presented by the author and delve into the source material (here, a novel) more deeply. When this happens, a screenwriter is able to spot the gaps in the text and fill them with images that can later be presented to the viewers. Anne with an E, the Netflix adaptation of Montgomery's novel, is a highly meaningful example of such a rendition, because the 2017 series was broadcast more than a hundred years after the first edition of the novel was published. This means that what the author might not have been able to show in her text can now be presented more openly. The screenwriter used this opportunity to represent certain groups in the film, i.e., racial and sexual minorities, and women. Nonetheless, the series does not alter the novel; in fact, it adds to it by filling the blanks with more direct images.
In the paper, fragments of the first season of Anne with an E are analysed in comparison with their source, the novel by Montgomery. The main purpose is to show how intersemiotic translation, connected with Iser's literary theory, can enrich the understanding of works of art, culture, media, and literature.
Keywords: intersemiotic translation, film, literary gaps, representation
Procedia PDF Downloads 316
1796 A Pilot Study of Influences of Scan Speed on Image Quality for Digital Tomosynthesis
Authors: Li-Ting Huang, Yu-Hsiang Shen, Cing-Ciao Ke, Sheng-Pin Tseng, Fan-Pin Tseng, Yu-Ching Ni, Chia-Yu Lin
Abstract:
Chest radiography is the most common technique for the diagnosis and follow-up of pulmonary diseases. However, lesions superimposed on normal structures are difficult to detect in chest radiographs. Chest tomosynthesis is a relatively new technique that obtains 3D section images from a set of low-dose projections acquired over a limited angular range. It has some limitations, however: patients undergoing tomosynthesis have to hold their breath firmly for 10 seconds. A digital tomosynthesis system with an advanced reconstruction algorithm and a high-stability motion mechanism was developed by our research group, and the system is expected to perform a bidirectional chest scan within 10 seconds. The purpose of this study is to determine the influence of scan speed on the image quality of our digital tomosynthesis system. The major factors that lead to image blurring are the motion of the X-ray source and the motion of the patient. For the former, a chest phantom was imaged at three scan speeds (6 cm/s, 8 cm/s, and 15 cm/s) to understand the influence of scan speed on image quality. For the latter, a normal Sprague-Dawley (SD) rat was imaged both alive and after sacrifice to assess the impact of breathing motion on image quality. In both experiments, the profiles of the regions of interest (ROIs) and the contrast-to-noise ratios (CNRs) of the ROIs relative to normal tissue were examined in the reconstructed images to quantify image-quality degradation. The preliminary results show no obvious degradation of image quality with increasing scan speed, possibly due to the advanced hardware and software designs of the system.
This implies that the proposed system achieves a higher scan speed (15 cm/s) than the commercialized tomosynthesis system (12 cm/s), and a complete chest scan within 10 seconds is therefore expected.
Keywords: chest radiography, digital tomosynthesis, image quality, scan speed
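The CNR figure of merit used above is commonly computed from ROI statistics; a minimal sketch using one common definition (mean difference over background noise; the exact formula used by the authors and the toy pixel values are assumptions):

```python
import statistics

def cnr(roi, background):
    """Contrast-to-noise ratio: ROI/background mean difference over background noise."""
    contrast = statistics.mean(roi) - statistics.mean(background)
    noise = statistics.pstdev(background)
    return abs(contrast) / noise

lesion_roi = [120, 118, 122, 121, 119]   # toy pixel values inside the lesion ROI
normal_roi = [100, 102, 98, 101, 99]     # toy pixel values in the normal-tissue ROI
value = cnr(lesion_roi, normal_roi)      # higher CNR means the ROI stands out more
```

Comparing this value across scan speeds (6, 8, and 15 cm/s) is the kind of check the study uses to detect image-quality degradation.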
Procedia PDF Downloads 332
1795 On Dynamic Chaotic S-BOX Based Advanced Encryption Standard Algorithm for Image Encryption
Authors: Ajish Sreedharan
Abstract:
Security in the transmission and storage of digital images is important in today's image communications and confidential video conferencing. Due to the increasing use of images in industrial processes, it is essential to protect confidential image data from unauthorized access. The Advanced Encryption Standard (AES) is a well-known block cipher with several advantages for data encryption; however, it is not well suited to real-time applications. This paper presents modifications to the Advanced Encryption Standard that provide a high level of security and better image encryption. The modifications are made by adjusting the ShiftRow transformation and using a dynamic chaotic S-box. In standard AES, the SubBytes, ShiftRow, and MixColumns steps by themselves would provide no security because they do not use the key. In the dynamic chaotic S-box based AES, the SubBytes step does provide security because the S-box is constructed from the key. Experimental results verify that the proposed modification to the image cryptosystem is highly secure from the cryptographic viewpoint. The results also show that, compared with the original AES encryption algorithm, the modified algorithm gives better encryption results in terms of security against statistical attacks.
Keywords: advanced encryption standard (AES), dynamic chaotic S-box, image encryption, security analysis, ShiftRow transformation
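One plausible way to build a key-dependent chaotic S-box, sketching the idea described above rather than the authors' exact construction, is to iterate a chaotic map from a key-derived seed and take the rank order of the resulting states as a permutation of 0-255:

```python
def chaotic_sbox(key_seed, r=3.99, burn_in=100):
    """Build a bijective 256-entry S-box from a logistic map seeded by the key."""
    x = key_seed
    for _ in range(burn_in):            # discard transient iterations
        x = r * x * (1.0 - x)
    states = []
    for _ in range(256):
        x = r * x * (1.0 - x)
        states.append(x)
    # The permutation is the rank order of the chaotic states: the index of the
    # smallest state maps to 0, the next smallest to 1, and so on.
    order = sorted(range(256), key=lambda i: states[i])
    sbox = [0] * 256
    for rank, i in enumerate(order):
        sbox[i] = rank
    return sbox

sbox = chaotic_sbox(key_seed=0.6180339887)
inverse = [0] * 256
for i, s in enumerate(sbox):
    inverse[s] = i                      # the inverse S-box needed for decryption
```

Because the permutation is a bijection on 0-255 determined entirely by the key seed, a receiver holding the same key can rebuild both the S-box and its inverse, which is what makes the SubBytes step key-dependent in this scheme.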
Procedia PDF Downloads 437
1794 Enhancing Tower Crane Safety: A UAV-based Intelligent Inspection Approach
Authors: Xin Jiao, Xin Zhang, Jian Fan, Zhenwei Cai, Yiming Xu
Abstract:
Tower cranes play a crucial role in the construction industry, facilitating the vertical and horizontal movement of materials and aiding in building construction, especially for high-rise structures. However, tower crane accidents can have severe consequences, highlighting the importance of effective safety management and inspection. This paper presents an innovative approach to tower crane inspection utilizing Unmanned Aerial Vehicles (UAVs) and an Intelligent Inspection APP System. The system leverages UAVs equipped with high-definition cameras to conduct efficient and comprehensive inspections, reducing manual labor, inspection time, and risk. By integrating advanced technologies such as Real-Time Kinematic (RTK) positioning and digital image processing, the system enables precise route planning and the collection of safety-hazard images. A case study conducted on a construction site demonstrates the practicality and effectiveness of the proposed method, showcasing its potential to enhance tower crane safety. On-site testing of UAV intelligent inspections reveals key findings: tower crane hazards can be inspected efficiently within 30 minutes, with full-identification coverage rates of 76.3%, 64.8%, and 76.2% for major, significant, and general hazards respectively, and preliminary-identification coverage rates of 18.5%, 27.2%, and 19% respectively. Notably, UAVs effectively identify various tower crane hazards, except for those requiring auditory detection. The limitations of this study involve two main aspects. First, during the initial inspection, manual drone piloting is required to mark tower crane points; subsequent flights are automated and reuse the marked route. Second, images captured by the drone require manual identification and review, which can be time-consuming for equipment management personnel, particularly when dealing with a large volume of images.
Subsequent research will focus on AI training and recognition of safety-hazard images, as well as the automatic generation of inspection reports and corrective management based on the recognition results. Development in this area is in progress, and outcomes will be released in due course.
Keywords: tower crane, inspection, unmanned aerial vehicle (UAV), intelligent inspection app system, safety management
Procedia PDF Downloads 42