Search results for: papyri images
1627 From Dissection to Diagnosis: Integrating Radiology into Anatomy Labs for Medical Students
Authors: Julia Wimmers-Klick
Abstract:
At the University of British Columbia's Faculty of Medicine in Canada, anatomy has traditionally been taught through a combination of lectures and dissection labs in the first two years, with radiology taught separately through lectures and online modules. However, this separation may leave students underprepared for medical practice, as medical imaging is essential for diagnosing anatomical and pathological conditions. To address this, a pilot project was initiated to integrate radiological imaging into anatomy dissection labs from day one of medical school. The incorporated radiological images correlated with the current dissection areas. Additional stations, tailored to the specific content being covered, were added within the lab. These stations featured bones, quiz questions, and light-box exercises using radiographs, CT scans, and MRIs provided by the radiology department. The images used were free of pathologies. Examples of these will be presented in the poster. Feedback from short interviews with students and instructors has been positive, particularly among second-year students, who appreciated the integration compared to their first-year experience. This low-budget approach was easy to implement but faced challenges, as lab instructors were not radiologists and occasionally struggled to answer students' questions. Instructors expressed a desire for basic training or a refresher course in radiology image reading, particularly focused on identifying healthy landmarks. Overall, all participants agreed that integrating radiology with anatomy reinforces learning during dissection, enhancing students' understanding and preparation for clinical practice.
Keywords: quality improvement, radiology education, anatomy education, integration
Procedia PDF Downloads 15
1626 Mothering in Self-Defined Challenging Circumstances: A Photo-Elicitation Study of Motherhood and the Role of Social Media
Authors: Joanna Apps, Elena Markova
Abstract:
Concepts of the ideal mother and ideal mothering are disseminated through familial experiences, religious and cultural depictions of mothers, and the national media. In recent years, social media can be added to the channels through which mothers and motherhood are socially constructed. However, the gulf between these depictions of motherhood (or, in the case of social media, 'self-curations') and lived experience has never been wider, particularly for women in disadvantaged or difficult circumstances. We report on a study of four lone mothers who were living with one or more of the following: limiting long-term illness, a large family, temporary accommodation and a low income. The mothers were interviewed three times and invited to take a series of photos reflecting their lives between each of the interviews. These photographs were used to ground the interviews in lived experience and as stimuli to discuss how the images within them compared to the portrayals of mothers and motherhood that participants were exposed to on social media. The objectives of the study were to explore how mothers construct their identity in challenging and disadvantaged circumstances; to consider what their photographs of everyday life tell us about their experiences; and to understand the impact idealised images of motherhood have on real mothers in difficult circumstances. The results suggested that the mothers both strived to adhere to certain ideals of motherhood and acknowledged elements of these as partially or wholly impossible to achieve. The lack of depictions, in both national and social media, of motherhood that corresponded with their lived experience inhibited the mothers' use of social media. Other themes included: lack of control, frustration and strain; and parental pride, love, humour, resilience and hope.
Keywords: motherhood, social media, photography, poverty
Procedia PDF Downloads 160
1625 Digital Mapping of First-Order Drainages and Springs of the Guajiru River, Northeast of Brazil, Based on Satellite and Drone Images
Authors: Sebastião Milton Pinheiro da Silva, Michele Barbosa da Rocha, Ana Lúcia Fernandes Campos, Miquéias Rildo de Souza Silva
Abstract:
Water is an essential natural resource for life on Earth. Rivers, lakes, lagoons and dams are the main sources of water storage for human consumption. The costs of extracting and using these water sources are lower than those of exploiting groundwater in transition zones to semi-arid terrains. However, the volume of surface water has decreased over time, with the depletion of first-order drainages and the disappearance of springs, phenomena easily observed in the field. Climate change worsens water scarcity, compromising supply and water security for rural populations. To minimize the expected impacts, producing and storing water through watershed management planning requires detailed cartographic information on the relief and topography, as well as updated data on the stage and intensity of the catchment basin's environmental degradation problems. The available cartography of the Brazilian northeastern territory dates to the 1970s: printed topographic maps at a scale of 1:100,000, which do not meet the requirements of this project. Exceptionally, there are topographic maps at scales of 1:50,000 and 1:25,000 for some coastal regions in northeastern Brazil; still, due to scale limitations and outdatedness, they are of little utility for mapping low-order drainages and springs. Remote sensing data and geographic information systems can contribute to guiding the process of mapping and environmental recovery by integrating detailed relief and topographic data, besides social and other environmental information, in the Guajiru River Basin, located on the east coast of Rio Grande do Norte, in the Northeast region of Brazil. This study aimed to recognize and map the catchment basin, springs and low-order drainage features, along with estimating morphometric parameters. ALOS PALSAR and Copernicus DEM digital elevation models were evaluated and provided regional drainage features and the watershed limits, extracted with TerraView/TerraHidro 5.0 software.
CBERS 4A satellite images with 2 m spatial resolution, processed with the ESA SNAP Toolbox, allowed generating a land use/land cover map of the Guajiru River. A MAPIR Survey3 multispectral camera on board a DJI Phantom 4, a Mavic 2 Pro PPK drone and an X91 GNSS receiver, used to collect the precise positions of selected points, were employed for detailed mapping. Satellite images enabled a first, yet very current, regional-scale view of the watershed areas, while drone images were essential in mapping details of the catchment basins. The drone multispectral image mosaics, the digital elevation model, the contour lines and geomorphometric parameters were generated using OpenDroneMap/ODM and QGIS software. The drone images facilitated the location, understanding and mapping of watersheds, recharge areas and first-order ephemeral watercourses at an adequate scale and will be used in the following phases of the project: watershed management planning, recovery and environmental protection of the Guajiru River's springs. Environmental degradation is being analyzed from the perspective of the availability and quality of the surface water supply.
Keywords: imaging, relief, UAV, water
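Drainage extraction from a DEM, as done here with TerraView/TerraHidro, starts from per-cell flow directions. The following is a minimal, dependency-free sketch of the classic D8 rule (each interior cell drains to its steepest-descent neighbour) applied to a hypothetical tilted DEM; it is illustrative only and is not the TerraHidro implementation:

```python
import numpy as np

def d8_direction(dem):
    """D8 flow directions: each interior cell points to its steepest-descent
    neighbour, returned as a (dy, dx) offset; (0, 0) marks pits and borders.
    Diagonal drops are scaled by the longer diagonal distance."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    h, w = dem.shape
    flow = np.zeros((h, w, 2), dtype=int)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            drops = [(dem[i, j] - dem[i + dy, j + dx]) / np.hypot(dy, dx)
                     for dy, dx in offsets]
            k = int(np.argmax(drops))
            if drops[k] > 0:           # only assign a direction if downhill
                flow[i, j] = offsets[k]
    return flow

# Hypothetical DEM (metres): a plane tilted so water drains toward the bottom edge.
dem = np.array([[4.0] * 5, [3.0] * 5, [2.0] * 5, [1.0] * 5, [0.0] * 5])
flow = d8_direction(dem)
print(flow[2, 2])  # -> [1 0]  (straight downhill)
```

Accumulating these directions downstream then yields flow accumulation, from which first-order channels are thresholded.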
Procedia PDF Downloads 32
1624 Characterization of Optical Systems for Intraocular Projection
Authors: Charles Q. Yu, Victoria H. Fan, Ahmed F. Al-Qahtani, Ibraim Viera
Abstract:
Introduction: Over 12 million people are blind due to opacity of the cornea, the clear tissue forming the front of the eye. Current methods use plastic implants to produce a clear optical pathway into the eye but are limited by a high rate of complications. New implants utilizing completely inside-the-eye projection technology can overcome blindness due to scarring of the eye by producing images on the retina without the need for a clear optical pathway, and may be free of the complications of traditional treatments. However, the interior of the eye is a challenging location for the design of optical focusing systems that can produce a sufficiently high-quality image. No optical focusing systems have previously been characterized for this purpose. Methods: Three optical focusing systems for intraocular (inside the eye) projection were designed and modeled with ray-tracing software: a pinhole system, a planoconvex system, and an achromatic system. These were then constructed using off-the-shelf components and tested in the laboratory. Weight, size, magnification, depth of focus, image quality and brightness were characterized. Results: Image quality increased with complexity of system design, as did weight and size. A dual achromatic doublet optical system produced the highest image quality. The visual acuity equivalent achieved with this system was better than 20/200. Its weight was less than that of the natural human crystalline lens. Conclusions: We demonstrate for the first time that high-quality images can be produced by optical systems sufficiently small and light to be implanted within the eye.
Keywords: focusing, projection, blindness, cornea, achromatic, pinhole
Procedia PDF Downloads 132
1623 Building Information Modelling (BIM) and Unmanned Aerial Vehicles (UAV) Technologies in Road Construction Project Monitoring and Management: Case Study of a Project in Cyprus
Authors: Yiannis Vacanas, Kyriacos Themistocleous, Athos Agapiou, Diofantos Hadjimitsis
Abstract:
Building Information Modelling (BIM) technology is considered by construction professionals a very valuable process in modern design, procurement and project management. Construction professionals of all disciplines can use the single 3D model that BIM technology provides to design a project accurately and, furthermore, to monitor the progress of construction works effectively and efficiently. Unmanned Aerial Vehicle (UAV) technology, initially developed for military applications, is now readily accessible and has already been adopted by commercial industries, including the construction industry. UAV technology has mainly been used to collect images that allow visual monitoring of building and civil engineering project conditions in various circumstances. UAVs, nevertheless, have undergone significant advances in equipment capabilities and now have the capacity to acquire high-resolution imagery from many angles in a cost-effective manner; by using photogrammetry methods, one can determine characteristics such as distances, angles, areas, volumes and elevations within overlapping images. In order to examine the potential of combining BIM and UAV technologies in construction project management, this paper presents the results of a case study of a typical road construction project in which the two technologies were used together to achieve efficient and accurate as-built data collection of the works progress, with outcomes such as volumes and the production of sections and 3D models, information necessary for project progress monitoring and efficient project management.
Keywords: BIM, project management, project monitoring, UAV
Procedia PDF Downloads 303
1622 Sustainable Packaging and Consumer Behavior in a Customer Experience: A Neuromarketing Perspective
Authors: Francesco Pinci
Abstract:
This study focuses on sustainability and consumer behavior in relation to packaging aesthetics. It investigates the significance of product packaging as a potent marketing tool, with a specific emphasis on commercially available pasta as a case study. The research delves into the visual components of packaging, encompassing aspects such as color, shape, packaging material, and logo design. The findings hold particular relevance for food and beverage companies as they seek a comprehensive understanding of the factors influencing consumer purchasing decisions. Furthermore, the study places significant emphasis on the sustainability aspects of packaging, exploring how eco-friendly and environmentally conscious packaging choices can impact consumer preferences and behaviors. The insights generated from this research contribute to a more sustainable approach to packaging practices and inform marketers on the effective integration of sustainability principles in their branding strategies. Overall, this study provides valuable insights into the dynamic interplay between aesthetics, sustainability, and consumer behavior, offering practical implications for businesses seeking to align their packaging practices with sustainable and consumer-centric approaches. In this study, packaging designs and images were drawn from the website of Eataly US. Eataly is one of the leading distributors of authentic Italian pasta worldwide, and its website serves as a rich source of packaging visuals and product representations. By analyzing the packaging and images showcased on the Eataly website, the study gained valuable insights into consumer behavior and preferences regarding pasta packaging in the context of sustainability and aesthetics.
Keywords: consumer behaviour, sustainability, food marketing, neuromarketing
Procedia PDF Downloads 115
1621 Unsupervised Segmentation Technique for Acute Leukemia Cells Using Clustering Algorithms
Authors: N. H. Harun, A. S. Abdul Nasir, M. Y. Mashor, R. Hassan
Abstract:
Leukaemia is a blood cancer that contributes to an increasing mortality rate in Malaysia each year. There are two main categories of leukaemia: acute and chronic. The production and development of acute leukaemia cells occur rapidly and uncontrollably; therefore, if acute leukaemia cells could be identified quickly and effectively, proper treatment and medicine could be delivered. Due to the requirement for prompt and accurate diagnosis of leukaemia, the current study proposes unsupervised pixel segmentation based on clustering algorithms in order to obtain a fully segmented abnormal white blood cell (blast) in acute leukaemia images. To obtain the segmented blast, three clustering algorithms, namely k-means, fuzzy c-means and moving k-means, were applied to the saturation component image. Then, a median filter and a seeded region growing area extraction algorithm were applied to smooth the region of the segmented blast and to remove large unwanted regions from the image, respectively. Comparisons among the three clustering algorithms were made in order to measure the performance of each on segmenting the blast area. Based on the good sensitivity values obtained, the results indicate that the moving k-means clustering algorithm successfully produced the fully segmented blast region in acute leukaemia images, suggesting that the resultant images could be helpful to haematologists for further analysis of acute leukaemia.
Keywords: acute leukaemia images, clustering algorithms, image segmentation, moving k-means
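As a minimal illustration of the clustering step, the sketch below runs a plain k-means (one of the three algorithms compared in the abstract) on a synthetic one-dimensional "saturation" sample; the quantile-based initialisation and the toy data are our assumptions, not the study's setup:

```python
import numpy as np

def kmeans_1d(values, k, iters=50):
    """Plain k-means on a 1-D feature (here, saturation values).
    Quantile initialisation keeps this toy version deterministic."""
    values = np.asarray(values, dtype=float)
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # assign each value to its nearest centre, then recompute centres
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Hypothetical saturation values: dark background plus bright blast pixels.
sat = np.concatenate([np.full(90, 0.1), np.full(10, 0.9)])
labels, centers = kmeans_1d(sat, k=2)
blast_mask = labels == np.argmax(centers)   # brighter cluster = candidate blast
print(int(blast_mask.sum()))  # -> 10
```

In the study, this mask would then be cleaned up by the median filter and seeded region growing steps.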
Procedia PDF Downloads 292
1620 IoT-Based Early Identification of Guava (Psidium guajava) Leaves and Fruits Diseases
Authors: Daudi S. Simbeye, Mbazingwa E. Mkiramweni
Abstract:
Plant diseases have the potential to drastically diminish the quantity and quality of agricultural products. Guava (Psidium guajava), sometimes known as the apple of the tropics, is one of the most widely cultivated fruits in tropical regions. Monitoring plant health and diagnosing illnesses is essential for sustainable agriculture and requires the inspection of visually evident patterns on plant leaves and fruits. Due to minor variations in the symptoms of various guava illnesses, a professional opinion is required for disease diagnosis, and erroneous diagnoses may result in economic losses through improper pesticide application by farmers. This study proposes a method that uses artificial intelligence (AI) to detect and classify the most widespread guava diseases by comparing images of leaves and fruits to datasets. An ESP32-CAM is responsible for data collection, which includes images of guava leaves and fruits. These images serve as datasets that support the diagnosis of plant diseases through the leaves and fruits, which is vital for the development of an effective automated agricultural system. The system test yielded highly accurate identification results (99 percent accuracy in differentiating four guava fruit diseases (Canker, Mummification, Dot, and Rust) from healthy fruit). The proposed model has been interfaced with a mobile application so that smartphones can make a quick and reliable judgment, helping farmers instantly detect diseases and prevent future production losses by enabling them to take precautions beforehand.
Keywords: early identification, guava plants, fruit diseases, deep learning
Procedia PDF Downloads 79
1619 Transformation of Positron Emission Tomography Raw Data into Images for Classification Using Convolutional Neural Network
Authors: Paweł Konieczka, Lech Raczyński, Wojciech Wiślicki, Oleksandr Fedoruk, Konrad Klimaszewski, Przemysław Kopka, Wojciech Krzemień, Roman Shopa, Jakub Baran, Aurélien Coussat, Neha Chug, Catalina Curceanu, Eryk Czerwiński, Meysam Dadgar, Kamil Dulski, Aleksander Gajos, Beatrix C. Hiesmayr, Krzysztof Kacprzak, Łukasz Kapłon, Grzegorz Korcyl, Tomasz Kozik, Deepak Kumar, Szymon Niedźwiecki, Dominik Panek, Szymon Parzych, Elena Pérez Del Río, Sushil Sharma, Shivani Shivani, Magdalena Skurzok, Ewa Łucja Stępień, Faranak Tayefi, Paweł Moskal
Abstract:
This paper develops the transformation of non-image data into 2-dimensional matrices as a preparation stage for classification based on convolutional neural networks (CNNs). In positron emission tomography (PET) studies, a CNN may be applied directly to the reconstructed distribution of radioactive tracers injected into the patient's body, as a pattern recognition tool. Nonetheless, much PET data still exists in non-image format, and this fact raises the question of whether it can be used for training CNNs. The main focus of this paper is the problem of processing vectors with a small number of features in comparison to the number of pixels in the output images. The proposed methodology was applied to the classification of PET coincidence events.
Keywords: convolutional neural network, kernel principal component analysis, medical imaging, positron emission tomography
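One simple way to realise such a vector-to-matrix transformation, offered here only as an illustrative assumption (the paper's exact mapping may well differ), is to zero-pad the feature vector and reshape it into a square matrix that an image-oriented CNN can consume:

```python
import numpy as np

def vector_to_matrix(v, side=None):
    """Zero-pad a short feature vector and reshape it, row-major, into a
    square 2-D matrix. The padding/layout scheme is an illustrative choice."""
    v = np.asarray(v, dtype=float)
    if side is None:
        side = int(np.ceil(np.sqrt(v.size)))   # smallest square that fits
    out = np.zeros(side * side, dtype=float)
    out[: v.size] = v
    return out.reshape(side, side)

m = vector_to_matrix([1, 2, 3, 4, 5])  # 5 features -> 3x3 matrix
print(m.shape)  # -> (3, 3)
```

Stacking one such matrix per coincidence event yields an image-like tensor suitable for standard CNN input layers.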
Procedia PDF Downloads 146
1618 Reconstruction Spectral Reflectance Cube Based on Artificial Neural Network for Multispectral Imaging System
Authors: Iwan Cony Setiadi, Aulia M. T. Nasution
Abstract:
The multispectral imaging (MSI) technique has been used for skin analysis, especially for distant mapping of in-vivo skin chromophores by analyzing spectral data at each reflected image pixel. For ergonomic purposes, our multispectral imaging system is composed of two parts: a light source compartment based on LEDs with 11 different wavelengths, and a monochromatic 8-bit CCD camera with a C-mount objective lens. Software with a MATLAB GUI was also developed to control the system. Our system provides 11 monoband images and is coupled with software reconstructing hyperspectral cubes from these multispectral images. In this paper, we propose a new method to build a hyperspectral reflectance cube based on an artificial neural network algorithm. After preliminary corrections, a neural network is trained using the 32 natural color patches of the X-Rite ColorChecker Passport, whose reference spectra are acquired with a spectrophotometer. This neural network is then used to retrieve a megapixel multispectral cube between 380 and 880 nm with a 5 nm resolution from a low-spectral-resolution multispectral acquisition. As hyperspectral cubes contain a spectrum for each pixel, comparison should be made between the theoretical values from the spectrophotometer and the reconstructed spectrum. To evaluate the performance of the reconstruction, we used the Goodness of Fit Coefficient (GFC) and the Root Mean Squared Error (RMSE). To validate the reconstruction, a set of 8 colour patches reconstructed by our MSI system and recorded by the spectrophotometer were compared. The average GFC was 0.9990 (standard deviation = 0.0010) and the average RMSE was 0.2167 (standard deviation = 0.064).
Keywords: multispectral imaging, reflectance cube, spectral reconstruction, artificial neural network
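The two evaluation metrics are standard and easy to reproduce. Below is a sketch of GFC and RMSE on a toy Gaussian reflectance spectrum sampled from 380 to 880 nm at 5 nm, as in the abstract; the spectra themselves are synthetic stand-ins, not the study's measurements:

```python
import numpy as np

def gfc(measured, reconstructed):
    """Goodness of Fit Coefficient: normalized inner product of two spectra
    (1.0 means identical shape)."""
    m = np.asarray(measured, float)
    r = np.asarray(reconstructed, float)
    return abs(m @ r) / (np.linalg.norm(m) * np.linalg.norm(r))

def rmse(measured, reconstructed):
    """Root mean squared error between two spectra."""
    m = np.asarray(measured, float)
    r = np.asarray(reconstructed, float)
    return float(np.sqrt(np.mean((m - r) ** 2)))

wl = np.arange(380, 881, 5)                 # 380-880 nm at 5 nm resolution
true = np.exp(-((wl - 600) / 80.0) ** 2)    # toy reference reflectance
recon = true + 0.01                          # toy reconstruction, small offset
print(round(gfc(true, recon), 4), round(rmse(true, recon), 4))
```

Averaging these two values over all validation patches gives figures directly comparable to the 0.9990 / 0.2167 reported above.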
Procedia PDF Downloads 323
1617 Contrast-to-Noise Ratio Comparison of Different Calcification Types in Dual Energy Breast Imaging
Authors: Vaia N. Koukou, Niki D. Martini, George P. Fountos, Christos M. Michail, Athanasios Bakas, Ioannis S. Kandarakis, George C. Nikiforidis
Abstract:
Various substitute materials for calcifications are used in phantom measurements and simulation studies in mammography, including calcium carbonate, calcium oxalate, hydroxyapatite and aluminum. The aim of this study is to compare the contrast-to-noise ratio (CNR) values of different calcification types using the dual energy method. The constructed calcification phantom consisted of three calcification types, hydroxyapatite, calcite and calcium oxalate, each at thicknesses of 100, 200 and 300. The breast tissue equivalent materials were polyethylene and polymethyl methacrylate slabs, simulating adipose tissue and glandular tissue, respectively. The total thickness was 4.2 cm with 50% fixed glandularity. The low-energy (LE) and high-energy (HE) images were obtained from a tungsten anode using 40 kV filtered with 0.1 mm cadmium and 70 kV filtered with 1 mm copper, respectively. A high-resolution complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) X-ray detector was used. The total mean glandular dose (MGD) and entrance surface dose (ESD) from the LE and HE images were constrained to typical levels (MGD = 1.62 mGy and ESD = 1.92 mGy). On average, the CNR of hydroxyapatite calcifications was 1.4 times that of calcite calcifications and 2.5 times that of calcium oxalate calcifications. The higher CNR values of hydroxyapatite are attributed to its attenuation properties compared to the other calcification materials, leading to higher contrast in the dual energy image. This work was supported by Grant Ε.040 from the Research Committee of the University of Patras (Programme K. Karatheodori).
Keywords: calcification materials, CNR, dual energy, X-rays
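CNR admits several definitions; the sketch below uses one common form, |mean(signal) − mean(background)| / std(background), on synthetic regions of interest whose intensity values are invented for illustration (the study computes CNR on real dual energy images):

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio under one common definition:
    |mean(signal) - mean(background)| / std(background)."""
    s = np.asarray(roi_signal, float)
    b = np.asarray(roi_background, float)
    return float(abs(s.mean() - b.mean()) / b.std())

rng = np.random.default_rng(0)
# Hypothetical pixel values: a noisy background and two calcification ROIs,
# one attenuating more strongly (hydroxyapatite-like) than the other.
background = rng.normal(100.0, 5.0, size=10_000)
calc_strong = rng.normal(150.0, 5.0, size=100)
calc_weak = rng.normal(120.0, 5.0, size=100)
print(cnr(calc_strong, background) > cnr(calc_weak, background))  # -> True
```

The same ratio, applied to each material and thickness in the dual energy image, yields the 1.4x and 2.5x comparisons quoted above.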
Procedia PDF Downloads 357
1616 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis
Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya
Abstract:
In this study, our goal was to perform tumor staging and subtype determination automatically, using different texture analysis approaches, for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduce a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and in the automatic classification of tumor stage and subtype.
Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis
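As a toy illustration of the feature-extraction-plus-classifier pipeline, the sketch below computes first-order statistics (one of the four feature families used) and classifies with a minimal k-NN (k=3, as in the study); the "textures" here are synthetic noise samples, not PET data, and the feature set is deliberately reduced:

```python
import numpy as np

def fos_features(img):
    """First-order statistics (a small subset of the study's 51 features):
    mean, variance, skewness, kurtosis of the grey-level distribution."""
    x = np.asarray(img, float).ravel()
    mu, sd = x.mean(), x.std()
    return np.array([mu, sd ** 2,
                     ((x - mu) ** 3).mean() / sd ** 3,
                     ((x - mu) ** 4).mean() / sd ** 4])

def knn_predict(train_X, train_y, x, k=3):
    """Minimal k-NN with Euclidean distance and majority vote."""
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

rng = np.random.default_rng(1)
# Toy "tumour textures": class 0 is smooth (low variance), class 1 is noisy.
X = np.array([fos_features(rng.normal(0, s, 64)) for s in [1] * 10 + [5] * 10])
y = np.array([0] * 10 + [1] * 10)
query = fos_features(rng.normal(0, 5, 64))
print(knn_predict(X, y, query))  # -> 1
```

The real pipeline adds GLCM, GLRLM and Laws features, runs SFS to pick the best 5, and cross-validates both k-NN and SVM.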
Procedia PDF Downloads 327
1615 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes
Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono
Abstract:
Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart to diagnose disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that initialization of the shape model is not sufficiently close to the target, especially when dealing with the abnormal shapes found in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. Firstly, a robust and efficient detector based on Hough forests is proposed to localize cardiac feature points, and such points are used to predict the initial fitting of the LV shape model. Secondly, to achieve more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, mostly of abnormal shapes. The proposed method is compared to several combinations of ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to the existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops, thus speeding up the whole segmentation process. Therefore, the proposed method achieves more accurate and efficient segmentation results and is applicable to unusual heart shapes caused by cardiac disease, such as left atrial enlargement.
Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle
Procedia PDF Downloads 341
1614 Application of Medical Information System for Image-Based Second Opinion Consultations – Georgian Experience
Authors: Kldiashvili Ekaterina, Burduli Archil, Ghortlishvili Gocha
Abstract:
Introduction – A medical information system (MIS) is at the heart of information technology (IT) implementation policies in healthcare systems around the world, and different MIS architectures and application models have been developed. Despite obvious advantages and benefits, the application of MIS in everyday practice has been slow. Objective – Based on an analysis of the existing MIS models in Georgia, a multi-user web-based approach has been created. This presentation will cover the architecture of the system and its application for image-based second opinion consultations. Methods – The MIS was created with .Net technology and an SQL database architecture. It provides local (intranet) and remote (internet) access to the system and management of databases. The MIS is a fully operational system, successfully used for medical data registration and management as well as for the creation, editing and maintenance of electronic medical records (EMRs). Five hundred Georgian-language electronic medical records from the cervical screening activity, illustrated with images, were selected for second opinion consultations. Results – The primary goal of the MIS is patient management; however, the system can be successfully applied to image-based second opinion consultations. Discussion – The ideal of healthcare in the information age must be to create a situation where healthcare professionals spend more time creating knowledge from medical information and less time managing medical information. The application of easily available and adaptable technology, and improvement of infrastructure conditions, form the basis for eHealth applications. Conclusion – The MIS is a promising and relevant technology solution that can be used successfully and effectively for image-based second opinion consultations.
Keywords: digital images, medical information system, second opinion consultations, electronic medical record
Procedia PDF Downloads 450
1613 Environmental Monitoring by Using Unmanned Aerial Vehicle (UAV) Images and Spatial Data: A Case Study of Mineral Exploitation in Brazilian Federal District, Brazil
Authors: Maria De Albuquerque Bercot, Caio Gustavo Mesquita Angelo, Daniela Maria Moreira Siqueira, Augusto Assucena De Vasconcellos, Rodrigo Studart Correa
Abstract:
Mining is an important socioeconomic activity in Brazil, although it negatively impacts the environment. Mineral operations cause irreversible changes in topography, removal of vegetation and topsoil, habitat destruction, displacement of fauna, loss of biodiversity, soil erosion and siltation of watercourses, and have the potential to exacerbate climate change. Due to these impacts and its pollution potential, mining activity in Brazil is legally subject to environmental licensing. Unlicensed mining operations, or operations that do not abide by the terms of an obtained license, are treated as environmental crimes in the country. This work reports a case analyzed in the Forensic Institute of the Brazilian Federal District Civil Police. The case consisted of detecting illegal aspects of sand exploitation at a licensed mine in the Federal District, near the city of Brasília. The fieldwork covered an area of roughly 6 ha, which was surveyed with an unmanned aerial vehicle (UAV) (Phantom 3 Advanced). The overflight with the UAV took about 20 min, with a maximum flight height of 100 m. A total of 592 georeferenced UAV images were obtained and processed in photogrammetric software (Agisoft PhotoScan 1.1.4), which generated a mosaic of georeferenced images and a 3D model in less than six working hours. The 3D model was analyzed in forensic software for accurate modeling and volumetric analysis (Maptek I-Site Forensic 2.2). To ensure the 3D model was a true representation of the mine site, the coordinates of ten control points and reference measures were taken during fieldwork and compared to the respective spatial data in the model. Finally, these spatial data were used for measuring the mining area, excavation depth and volume of exploited sand. The results showed that the mine holder had not complied with some terms and conditions stated in the granted license, such as sand exploration beyond the authorized extension, depth and volume.
The ease, accuracy and speed of the procedures used in this case highlight UAV imagery and computational photogrammetry as efficient tools for outdoor forensic exams, especially on environmental issues.
Keywords: computational photogrammetry, environmental monitoring, mining, UAV
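The cut-volume computation at the core of such an exam can be sketched as a DEM difference: sum the per-cell elevation drop times the cell footprint area. This simplified stand-in (regular grid, hypothetical terrain) is ours, not the forensic software's algorithm:

```python
import numpy as np

def excavated_volume(dem_before, dem_after, cell_size):
    """Cut volume between two gridded DEMs: per-cell elevation drop
    (clipped so fill does not cancel cut) times the cell footprint area."""
    drop = np.clip(np.asarray(dem_before, float) - np.asarray(dem_after, float),
                   0.0, None)
    return float(drop.sum() * cell_size ** 2)

before = np.full((100, 100), 10.0)   # hypothetical flat terrain at 10 m
after = before.copy()
after[40:60, 40:60] = 7.0            # a 20x20-cell pit, 3 m deep
print(excavated_volume(before, after, cell_size=0.5))  # -> 300.0 (m^3)
```

Comparing this figure against the volume authorized in the license is exactly the kind of check the case required.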
Procedia PDF Downloads 319
1612 Topographic Coast Monitoring Using UAV Photogrammetry: A Case Study in Port of Veracruz Expansion Project
Authors: Francisco Liaño-Carrera, Jorge Enrique Baños-Illana, Arturo Gómez-Barrero, José Isaac Ramírez-Macías, Erik Omar Paredes-Juárez, David Salas-Monreal, Mayra Lorena Riveron-Enzastiga
Abstract:
Topographical changes in coastal areas are usually assessed with airborne LIDAR and conventional photogrammetry. In recent times, Unmanned Aerial Vehicles (UAVs) have been used in several photogrammetric applications, including coastline evolution; their use goes further, however, as the associated point clouds can be used to generate beach Digital Elevation Models (DEMs). We present a methodology for monitoring coastal topographic changes along a 50 km coastline in Veracruz, Mexico, using high-resolution images (less than 10 cm ground resolution) and dense point clouds captured with a UAV. This monitoring takes place in the context of the Port of Veracruz expansion project, whose construction began in 2015, and intends to characterize coastal evolution and to prevent and mitigate project impacts on coastal environments. The monitoring began with a historical coastline reconstruction from 1979 to 2015 using aerial photography and Landsat imagery. Some patterns could be defined: the northern part of the study area showed accretion, while the southern part showed erosion. Since the study area is located off the port of Veracruz, a Mexican city of touristic and economic importance where coastal development structures have been built continuously since 1979, the local beaches of the tourist area are constantly being refilled. Those areas were not described as accretion, since sand-filled trucks refill the beaches in front of the hotel area every month. The marinas and the commercial port of Veracruz, both the old port and the new expansion, were built in the eroding part of the area. Northward from the city of Veracruz the beaches were described as accretion areas, while southward from the city they were described as erosion areas. One of the problems is the expansion of new development in the southern area of the city, which uses the beach view as an incentive to buy beachfront houses.
We assessed coastal changes between seasons during 2016 using high-resolution images and point clouds, and preliminary results confirm that UAVs can be used in permanent coastal monitoring programs with excellent performance and detail.
Keywords: digital elevation model, high-resolution images, topographic coast monitoring, unmanned aerial vehicle
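The step from a UAV's dense point cloud to a beach DEM can be sketched as a simple mean-binning grid: each (x, y, z) point is assigned to a square cell and the cell elevation is the mean z of its points. This is an illustrative sketch only, not the authors' processing chain; the function name, cell size, and nodata handling are all assumptions.

```python
import numpy as np

def points_to_dem(points, cell_size, nodata=np.nan):
    """Grid an (N, 3) array of x, y, z points into a mean-elevation DEM."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    # Assign each point to a square cell of side `cell_size`.
    idx = np.floor((xy - origin) / cell_size).astype(int)
    ncols, nrows = idx.max(axis=0) + 1
    dem_sum = np.zeros((nrows, ncols))
    dem_cnt = np.zeros((nrows, ncols))
    for (i, j), z in zip(idx, points[:, 2]):
        dem_sum[j, i] += z
        dem_cnt[j, i] += 1
    # Cells with no points get the nodata value.
    return np.where(dem_cnt > 0, dem_sum / np.maximum(dem_cnt, 1), nodata)
```

Production workflows would typically interpolate empty cells and filter vegetation returns before gridding; this sketch shows only the binning itself.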
Procedia PDF Downloads 270
1611 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform
Authors: Omaima N. Ahmad AL-Allaf
Abstract:
Over communication networks, images can easily be copied and distributed illegally, so copyright protection for authors and owners is necessary. Digital watermarking techniques therefore play an important role as a valid solution to authorship problems. Digital image watermarking techniques hide watermarks in images to achieve copyright protection and prevent illegal copying; watermarks need to be robust to attacks while maintaining data quality. In this paper, we discuss two approaches to image watermarking: the first is based on Particle Swarm Optimization (PSO) and the second on a Genetic Algorithm (GA). The Discrete Wavelet Transform (DWT) is used with each approach separately in the embedding process to transform the cover image. Both PSO and GA use the correlation coefficient to detect the high-energy coefficients of the original image and then hide the watermark there. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. In the experiments, the PSO approach obtained better results, with a PSNR of 53 and an MSE of 0.0039, whereas the GA approach obtained a PSNR of 50.5 and an MSE of 0.0048 when using a population size of 100, 150 iterations, and 3×3 blocks. From these results, we note that a small block size can affect the quality of PSO/GA-based image watermarking because it increases the search area of the watermarked image. The best PSO results were obtained with a swarm size of 100.
Keywords: image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform
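The DWT-domain embedding that both approaches share can be sketched without the PSO/GA search itself: a one-level Haar DWT, quantization of a selected high-energy detail coefficient to carry a watermark bit, and extraction using the coefficient's position as a key (the role the PSO/GA-selected location plays). This is a minimal stand-in, not the paper's method; the quantization step size and function names are assumptions.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT (even-sized image) -> LL, LH, HL, HH subbands."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0  # row averages
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0  # row differences
    return ((lo[0::2] + lo[1::2]) / 2.0, (lo[0::2] - lo[1::2]) / 2.0,
            (hi[0::2] + hi[1::2]) / 2.0, (hi[0::2] - hi[1::2]) / 2.0)

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    h, w = LL.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2], lo[1::2] = LL + LH, LL - LH
    hi[0::2], hi[1::2] = HL + HH, HL - HH
    img = np.empty((2 * h, 2 * w))
    img[:, 0::2], img[:, 1::2] = lo + hi, lo - hi
    return img

def embed_bit(img, bit, delta=8.0):
    """Quantize the highest-energy HH coefficient to carry one watermark bit.
    Returns the watermarked image and the coefficient position (the key)."""
    LL, LH, HL, HH = haar_dwt2(img)
    r, c = np.unravel_index(np.abs(HH).argmax(), HH.shape)
    HH[r, c] = np.floor(HH[r, c] / delta) * delta + (0.75 if bit else 0.25) * delta
    return haar_idwt2(LL, LH, HL, HH), (r, c)

def extract_bit(img, key, delta=8.0):
    """Read the bit back from the keyed HH coefficient."""
    HH = haar_dwt2(img)[3]
    r, c = key
    return 1 if (HH[r, c] % delta) / delta > 0.5 else 0
```

The quantization offsets (0.25 vs. 0.75 of the step) leave a margin of delta/4 against noise, which is what gives the watermark some robustness to mild attacks.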
Procedia PDF Downloads 228
1610 Magnetic Resonance Imaging for Assessment of the Quadriceps Tendon Cross-Sectional Area as an Adjunctive Diagnostic Parameter in Patients with Patellofemoral Pain Syndrome
Authors: Jae Ni Jang, SoYoon Park, Sukhee Park, Yumin Song, Jae Won Kim, Keum Nae Kang, Young Uk Kim
Abstract:
Objectives: Patellofemoral pain syndrome (PFPS) is a common clinical condition characterized by anterior knee pain. Here, we investigated the quadriceps tendon cross-sectional area (QTCSA) as a novel predictor for the diagnosis of PFPS. By examining the association between the QTCSA and PFPS, we aimed to provide a more valuable diagnostic parameter and a less equivocal assessment of diagnostic potential by comparing the QTCSA with the quadriceps tendon thickness (QTT), a traditional measure of quadriceps tendon hypertrophy. Patients and Methods: This retrospective study included 30 patients with PFPS and 30 healthy participants who underwent knee magnetic resonance imaging. T1-weighted turbo spin echo transverse magnetic resonance images were obtained. The QTCSA was measured on the axial-angled phases of the images by drawing outlines, and the QTT was measured at the most hypertrophied portion of the quadriceps tendon. Results: The average QTT and QTCSA for patients with PFPS (6.33±0.80 mm and 155.77±36.60 mm², respectively) were significantly greater than those for healthy participants (5.77±0.36 mm and 111.90±24.10 mm², respectively; both P<0.001). We used a receiver operating characteristic curve to confirm the sensitivities and specificities of both the QTT and the QTCSA as predictors of PFPS. The optimal diagnostic cutoff value for the QTT was 5.98 mm, with a sensitivity of 66.7%, a specificity of 70.0%, and an area under the curve of 0.75 (0.62–0.88). The optimal diagnostic cutoff value for the QTCSA was 121.04 mm², with a sensitivity of 73.3%, a specificity of 70.0%, and an area under the curve of 0.83 (0.74–0.93). Conclusion: The QTCSA was a more reliable diagnostic indicator for PFPS than the QTT.
Keywords: patellofemoral pain syndrome, quadriceps muscle, hypertrophy, magnetic resonance imaging
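The "optimal diagnostic cutoff" reported above can be computed from patient and control measurements by scanning candidate thresholds; the abstract does not state the criterion used, so this sketch assumes the common choice of maximizing Youden's J (sensitivity + specificity − 1). Function and variable names are illustrative.

```python
import numpy as np

def best_cutoff(patients, controls):
    """Scan candidate thresholds; values >= cutoff are called positive.
    Returns (cutoff, sensitivity, specificity) maximizing Youden's J."""
    patients = np.asarray(patients, float)
    controls = np.asarray(controls, float)
    best, best_j = (None, 0.0, 0.0), -1.0
    for t in np.unique(np.concatenate([patients, controls])):
        sens = np.mean(patients >= t)   # true-positive rate at threshold t
        spec = np.mean(controls < t)    # true-negative rate at threshold t
        j = sens + spec - 1
        if j > best_j:
            best_j, best = j, (float(t), float(sens), float(spec))
    return best
```

With real QTCSA data the same scan would yield the 121.04 mm² cutoff the study reports if Youden's J was indeed the criterion.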
Procedia PDF Downloads 52
1609 A U-Net Based Architecture for Fast and Accurate Diagram Extraction
Authors: Revoti Prasad Bora, Saurabh Yadav, Nikita Katyal
Abstract:
In the context of educational data mining, extracting information from images containing both text and diagrams is a use case of high importance. Document analysis therefore requires extracting the diagrams from such images and processing the text and diagrams separately. To the authors' best knowledge, none of the many approaches for extracting tables, figures, etc., meets the need for real-time processing with high accuracy, as required in multiple applications. In the education domain, diagrams can have varied characteristics: line-based (i.e., geometric) diagrams, chemical bonds, mathematical formulas, etc. Two broad categories of approaches address similar problems: traditional computer vision approaches and deep learning approaches. Traditional computer vision approaches mainly leverage connected components and distance-transform-based processing and hence perform well only in very limited scenarios. Existing deep learning approaches leverage either YOLO or Faster R-CNN architectures, and both suffer from a performance-accuracy tradeoff. This paper proposes a U-Net based architecture that formulates diagram extraction as a segmentation problem. The proposed method provides similar accuracy with a much faster extraction time than the state-of-the-art approaches mentioned above. Further, the segmentation mask in this approach allows the extraction of diagrams of irregular shapes.
Keywords: computer vision, deep learning, educational data mining, Faster R-CNN, figure extraction, image segmentation, real-time document analysis, text extraction, U-Net, YOLO
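Once a segmentation network outputs a binary mask, the per-diagram extraction step reduces to finding connected foreground regions and their bounding boxes. A minimal sketch of that post-processing (not the paper's implementation; 4-connectivity and the function name are assumptions):

```python
from collections import deque

def diagram_boxes(mask):
    """Bounding boxes (r0, c0, r1, c1) of connected foreground regions
    in a binary segmentation mask, found by 4-connectivity BFS."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                q = deque([(r, c)])
                seen[r][c] = True
                r0 = r1 = r
                c0 = c1 = c
                while q:  # flood-fill one region, tracking its extent
                    y, x = q.popleft()
                    r0, r1 = min(r0, y), max(r1, y)
                    c0, c1 = min(c0, x), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((r0, c0, r1, c1))
    return boxes
```

Because the mask itself is kept, irregularly shaped diagrams can be cropped along the region pixels rather than the rectangular box, which is the advantage the abstract notes over box-only detectors.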
Procedia PDF Downloads 140
1608 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space
Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson
Abstract:
Glioblastoma multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid-attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds with vasogenic edema; however, there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly show tumor cells, they do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity of GBMs and the resulting MRI features complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), we explored a multi-compartmental MRI signal equation that takes into account tissue compartments and their associated volumes, with input coming from a mathematical model of glioma growth that incorporates edema formation. The plausibility of two possible extracellular space schemes was evaluated by varying the T2 of the edema compartment and calculating the resulting T2 in tumor and peripheral edema. In the mathematical model, gliomas comprised vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model to generate simulated T2/FLAIR MR images. The T2 values of the individual compartments in the signal equation were taken from the literature or estimated, and the T2 for edema specifically was varied over a wide range (200 ms – 9200 ms). T2 maps were calculated from the simulated images.
T2 values based on simulated images were evaluated for regions of interest (ROIs) in normal-appearing white matter, tumor, and peripheral edema, and compared to T2 values reported in the literature. The expanding scheme of the extracellular space had T2 values similar to the literature values, whereas the static scheme had much lower T2 values: no matter what T2 was assigned to edema, the intensities did not come close to literature values. Expanding the extracellular space is therefore necessary to achieve simulated edema intensities commensurate with acquired MRIs.
Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling
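The multi-compartmental signal equation described above can be written as a volume-weighted sum of mono-exponential decays, S(TE) = Σᵢ vᵢ·exp(−TE/T2ᵢ), from which an apparent single-compartment T2 can be fitted from two echo times. This is a generic formulation consistent with the abstract's description, not the authors' exact model; compartment values below are illustrative.

```python
import numpy as np

def compartment_signal(te, volumes, t2s):
    """Multi-compartment T2-weighted signal at echo time `te` (ms):
    S(TE) = sum_i v_i * exp(-TE / T2_i)."""
    volumes = np.asarray(volumes, float)
    t2s = np.asarray(t2s, float)
    return float(np.sum(volumes * np.exp(-te / t2s)))

def apparent_t2(te1, te2, volumes, t2s):
    """Apparent (single-compartment) T2 fitted from two echoes, assuming
    S ~ exp(-TE / T2_app) between te1 and te2."""
    s1 = compartment_signal(te1, volumes, t2s)
    s2 = compartment_signal(te2, volumes, t2s)
    return (te2 - te1) / np.log(s1 / s2)
```

For a single compartment the fit recovers that compartment's T2 exactly; with several compartments the apparent T2 shifts toward the long-T2 edema compartment as its volume fraction grows, which is the mechanism behind the hyperintensity comparison in the study.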
Procedia PDF Downloads 235
1607 Effect of Depth on Texture Features of Ultrasound Images
Authors: M. A. Alqahtani, D. P. Coleman, N. D. Pugh, L. D. M. Nokes
Abstract:
In diagnostic ultrasound, the echographic B-scan texture is an important area of investigation, since it can be analyzed to characterize the histological state of internal tissues. An important factor requiring consideration when evaluating ultrasonic tissue texture is depth. The attenuation of ultrasound with depth, the size of the region of interest, gain, and dynamic range are important variables to consider, as they can influence the analysis of texture features. These sources of variability have to be considered carefully when evaluating image texture, as different settings may influence the resulting image. The aim of this study is to investigate the effect of depth on texture features in vivo using a 3D ultrasound probe. The medial head of the gastrocnemius muscle of the left leg of 10 healthy subjects was scanned. Two regions, A and B, were defined at different depths within the gastrocnemius muscle boundary. Both ROIs measured 280×20 pixels, and the distance between regions A and B was kept constant at 5 mm. Texture parameters, including gray level, variance, skewness, kurtosis, co-occurrence matrix, run-length matrix, gradient, autoregressive (AR) model, and wavelet transform features, were extracted from the images. The paired t-test was used to test the depth effect for normally distributed data, and the Wilcoxon-Mann-Whitney test was used for non-normally distributed data. The gray level, variance, and run-length matrix were significantly lower as depth increased, while the other texture parameters showed similar values at both depths. No texture parameter showed a significant difference between depths A and B (p > 0.05) except for gray level, variance, and run-length matrix (p < 0.05), indicating that gray level, variance, and run-length matrix are depth dependent.
Keywords: ultrasound image, texture parameters, computational biology, biomedical engineering
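One of the texture families listed above, the gray-level co-occurrence matrix (GLCM), can be sketched directly: count how often gray-level pairs occur at a fixed pixel offset, normalize, and derive scalar features such as contrast and energy. This is a generic GLCM implementation for illustration, not the study's feature-extraction software.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset
    (dx, dy); `img` holds integer gray levels in [0, levels)."""
    img = np.asarray(img)
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Sum of p(i, j) * (i - j)^2: high for coarse, edge-rich texture."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

def energy(p):
    """Sum of squared matrix entries: high for uniform texture."""
    return float(np.sum(p ** 2))
```

In a depth study like this one, the same offsets would be applied to the shallow and deep ROIs and the resulting feature values compared statistically.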
Procedia PDF Downloads 298
1606 Modeling Breathable Particulate Matter Concentrations over Mexico City Retrieved from Landsat 8 Satellite Imagery
Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Magnolia G. Martinez-Rivera, Pablo de J. Angeles-Salto, Carlos Herrera-Ventosa
Abstract:
In order to diminish health risks, it is of major importance to monitor air quality; however, this process entails high costs in physical and human resources. In this context, this research was carried out with the main objective of developing a predictive model for concentrations of inhalable particles (PM10-2.5) using remote sensing. To develop the model, satellite images, mainly from Landsat 8, of Mexico City's Metropolitan Area were used. Using historical PM10 and PM2.5 measurements from RAMA (the Automatic Environmental Monitoring Network of Mexico City) and processing the available satellite images, a preliminary model was generated in which it was possible to identify critical opportunity areas that will allow the generation of a robust model. Through the preliminary model applied to the scenes of Mexico City, three areas of particular interest were identified due to their presumed high concentrations of PM: zones with high plant density, bodies of water, and bare soil without constructions or vegetation. Work continues on this line to improve the proposed preliminary model. In addition, a brief analysis was made of six models presented in articles developed in different parts of the world, in order to identify the optimal bands for generating a model suitable for Mexico City. It was found that infrared bands have helped modeling in other cities, but the effectiveness that these bands could provide for the geographic and climatic conditions of Mexico City is still being evaluated.
Keywords: air quality, modeling pollution, particulate matter, remote sensing
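Empirical PM-from-satellite models of the kind surveyed here are often ordinary least-squares regressions of ground-station concentrations on band reflectances at the station pixels. The abstract does not specify the model form, so the sketch below assumes a linear model with an intercept; all names are illustrative.

```python
import numpy as np

def fit_pm_model(bands, pm):
    """Fit PM = intercept + sum_k beta_k * band_k by ordinary least squares.
    bands: (n_samples, n_bands) reflectances at station pixels;
    pm: (n_samples,) measured concentrations."""
    X = np.column_stack([np.ones(len(pm)), bands])
    coef, *_ = np.linalg.lstsq(X, pm, rcond=None)
    return coef  # [intercept, beta_1, ..., beta_K]

def predict_pm(coef, bands):
    """Apply the fitted coefficients to new pixels."""
    X = np.column_stack([np.ones(len(bands)), bands])
    return X @ coef
```

Once fitted against RAMA measurements, `predict_pm` could be applied pixel-wise to a whole Landsat scene to map estimated concentrations; band selection (visible vs. infrared) is exactly the open question the abstract raises.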
Procedia PDF Downloads 156
1605 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning
Authors: Joseph George, Anne Kotteswara Roa
Abstract:
Skin disease is one of the most common kinds of health issues faced by people nowadays. Skin cancer (SC) is one among them, and its detection relies on skin biopsy outputs and the expertise of doctors, but this consumes time and can give inaccurate results. Skin cancer detection is a challenging task at the early stage, and the disease easily spreads to the whole body, leading to an increased mortality rate; yet skin cancer is curable when detected early. In order to classify skin cancer correctly and accurately, the critical tasks are skin cancer identification and classification, which are largely based on disease features such as shape, size, color, and symmetry. Since similar characteristics are present in many skin diseases, selecting important features from skin cancer dataset images is a challenging issue. Diagnostic accuracy can therefore be improved by an automated skin cancer detection and classification framework, which also addresses the scarcity of human experts. Recently, deep learning techniques such as convolutional neural networks (CNN), deep belief networks (DBN), artificial neural networks (ANN), recurrent neural networks (RNN), and long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measures are used to evaluate the effectiveness of SC identification using DL techniques. By using these DL techniques, classification accuracy increases while computational complexity and time consumption are mitigated.
Keywords: skin cancer, deep learning, performance measures, accuracy, datasets
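The evaluation metrics the survey compares (precision, recall/sensitivity, specificity, F-measure) all derive from the binary confusion matrix. A minimal reference implementation (labels and the "1 = malignant" convention are assumptions for illustration):

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall (sensitivity), specificity, and F1 for a binary
    classifier; label 1 is taken to mean malignant."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, specificity, f1
```

Reporting recall and specificity together matters clinically: a screening model with high recall but low specificity sends many benign lesions to biopsy, the very cost the survey's automated pipelines aim to reduce.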
Procedia PDF Downloads 132
1604 Fully Automated Methods for the Detection and Segmentation of Mitochondria in Microscopy Images
Authors: Blessing Ojeme, Frederick Quinn, Russell Karls, Shannon Quinn
Abstract:
The detection and segmentation of mitochondria from fluorescence microscopy are crucial for understanding the complex structure of the nervous system. However, the constant fission and fusion of mitochondria and image distortion in the background make detection and segmentation challenging. In the literature, a number of open-source software tools and artificial intelligence (AI) methods have been described for analyzing mitochondrial images, achieving remarkable classification and quantitation results. However, the combined medical and AI expertise required to utilize these tools poses a challenge to their full adoption and use in clinical settings. Motivated by the advantages of automated methods in terms of good performance, minimal detection time, ease of implementation, and cross-platform compatibility, this study proposes a fully automated framework for the detection and segmentation of mitochondria using both image shape information and descriptive statistics. Using the low-cost, open-source Python and OpenCV libraries, the algorithms are implemented in three stages: pre-processing, image binarization, and coarse-to-fine segmentation. The proposed model is validated using a mitochondrial fluorescence dataset. Ground-truth labels generated using Labkit were also used to evaluate the performance of our detection and segmentation model. The study produces good detection and segmentation results and reports the challenges encountered during the image analysis of mitochondrial morphology from the fluorescence mitochondrial dataset. A discussion of the methods and future perspectives of fully automated frameworks concludes the paper.
Keywords: 2D, binarization, CLAHE, detection, fluorescence microscopy, mitochondria, segmentation
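The binarization stage of such a pipeline is commonly a global Otsu threshold. The abstract does not say which binarization it uses, so the sketch below implements Otsu's method in plain numpy (rather than calling OpenCV) purely for illustration: it picks the threshold that maximizes the between-class variance of the intensity histogram.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's threshold for an 8-bit image; pixels >= the returned value
    are treated as foreground."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / w0       # background mean
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / w1  # foreground mean
        var = w0 * w1 * (mu0 - mu1) ** 2 / total ** 2    # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

In the full framework this step would sit between CLAHE pre-processing and the coarse-to-fine segmentation; with OpenCV available, `cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)` would play the same role.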
Procedia PDF Downloads 358
1603 Omni-Modeler: Dynamic Learning for Pedestrian Redetection
Authors: Michael Karnes, Alper Yilmaz
Abstract:
This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges for deep neural networks (DNNs) due to the variation in pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearance or changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. It adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available. Query images are identified through nearest-neighbor comparison to the learned object definitions. The study presented in this paper evaluates performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for across-camera pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images per individual.
Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition
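The "dynamic dictionary of concept definitions with nearest-neighbor query" idea can be sketched independently of the underlying feature extractor: each identity maps to a running mean of its embedding examples, identities can be added or removed as people enter and leave the scene, and a query embedding is matched to the closest definition. This is a structural sketch only, not the Omni-Modeler itself; the mean-embedding definition and Euclidean metric are assumptions.

```python
import numpy as np

class ConceptDictionary:
    """Dynamically updatable identity dictionary with nearest-neighbor query."""

    def __init__(self):
        self.sums = {}    # identity -> running sum of embeddings
        self.counts = {}  # identity -> number of examples seen

    def update(self, name, embedding):
        """Add one example embedding for an identity (few-shot update)."""
        emb = np.asarray(embedding, float)
        self.sums[name] = self.sums.get(name, 0) + emb
        self.counts[name] = self.counts.get(name, 0) + 1

    def remove(self, name):
        """Drop an identity when it leaves the scene."""
        self.sums.pop(name, None)
        self.counts.pop(name, None)

    def query(self, embedding):
        """Return the identity whose mean embedding is nearest (Euclidean)."""
        emb = np.asarray(embedding, float)
        return min(self.sums,
                   key=lambda n: np.linalg.norm(self.sums[n] / self.counts[n] - emb))
```

In a real system the embeddings would come from the pre-trained network's encoder; here they are plain vectors so the dictionary mechanics can be tested in isolation.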
Procedia PDF Downloads 78
1602 Lung HRCT Pattern Classification for Cystic Fibrosis Using a Convolutional Neural Network
Authors: Parisa Mansour
Abstract:
Cystic fibrosis (CF) is one of the most common autosomal recessive diseases among whites. It mostly affects the lungs, causing infections and inflammation that account for 90% of deaths in CF patients. Because of the high variability in clinical presentation and organ involvement, investigating treatment responses and evaluating lung changes over time is critical to preventing CF progression. High-resolution computed tomography (HRCT) greatly facilitates the assessment of lung disease progression in CF patients, and recently artificial intelligence has been used to analyze chest CT scans of CF patients. In this paper, we propose a convolutional neural network (CNN) approach to classify CF lung patterns in HRCT images. The proposed network consists of two convolutional layers with 3×3 kernels, each followed by max pooling, and then two dense layers with 1024 and 10 neurons, respectively. The softmax layer produces a predicted probability distribution over classes, with three outputs corresponding to the categories of normal (healthy), bronchitis, and inflammation. To train and evaluate the network, we constructed a patch-based dataset extracted from more than 1100 lung HRCT slices obtained from 45 CF patients. Comparative evaluation showed the effectiveness of the proposed CNN compared to its close peers: classification accuracy, average sensitivity, and specificity of 93.64%, 93.47%, and 96.61% were achieved, indicating the potential of CNNs in analyzing lung CF patterns and monitoring lung health. In addition, the visual features extracted by the proposed method can be useful for the automatic measurement and, ultimately, evaluation of the severity of CF patterns in lung HRCT images.
Keywords: HRCT, CF, cystic fibrosis, chest CT, artificial intelligence
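The final softmax layer the abstract describes maps the dense-layer outputs to a probability distribution over the three categories. A minimal, framework-free sketch of that output stage (the class ordering and the numerically stable max-subtraction are conventional assumptions, not details from the paper):

```python
import numpy as np

CLASSES = ("normal", "bronchitis", "inflammation")

def softmax(logits):
    """Map raw scores to a probability distribution over the classes."""
    z = np.asarray(logits, float)
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def predict_class(logits):
    """Return the category with the highest softmax probability."""
    return CLASSES[int(np.argmax(softmax(logits)))]
```

In the full network these logits would be the outputs of the second dense layer for one HRCT patch; the argmax over the softmax gives the patch's predicted pattern.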
Procedia PDF Downloads 67
1601 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline
Authors: Kenan Morani, Esra Kaya Ayana
Abstract:
This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with optional slice-removal techniques following the segmentation part. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which was then used to build the full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher Dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score than other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make the final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices, rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on this dataset.
Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation
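The slice-level-to-patient-level step and the macro F1 metric mentioned above can be sketched generically. The aggregation rule below (a patient is positive if enough slices exceed a probability threshold) and its parameters are illustrative assumptions; the paper does not specify its rule. Macro F1 is the unweighted mean of per-class F1 scores.

```python
import numpy as np

def patient_diagnosis(slice_probs, threshold=0.5, min_fraction=0.25):
    """Aggregate per-slice COVID probabilities to one patient-level decision:
    positive if at least `min_fraction` of slices exceed `threshold`.
    Both parameters are illustrative, not from the paper."""
    votes = np.asarray(slice_probs) > threshold
    return bool(votes.mean() >= min_fraction)

def macro_f1(y_true, y_pred, classes=(0, 1)):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(f1s)
```

Macro averaging is the natural choice here because COVID-positive and negative scans are typically imbalanced, and macro F1 weights both classes equally.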
Procedia PDF Downloads 134
1600 Aspects and Studies of Fractal Geometry in Automatic Breast Cancer Detection
Authors: Mrinal Kanti Bhowmik, Kakali Das Jr., Barin Kumar De, Debotosh Bhattacharjee
Abstract:
Breast cancer is the most common cancer and a leading cause of death for women in the 35 to 55 age group, and early detection can decrease its mortality rate. Mammography is considered the 'gold standard' for breast cancer detection and is presently the most popular modality for breast cancer screening and detection. The screening of digital mammograms, however, often leads to overdiagnosis and, consequently, to unnecessary traumatic and painful biopsies. For that reason, recent studies involving the use of thermal imaging as a screening technique have generated growing interest, especially in cases where mammography is limited, as in young patients with dense breast tissue. A tumor is a significant sign of breast cancer in both mammography and thermography. Tumors are complex in structure and exhibit statistical and textural features different from those of the background breast tissue. Fractal geometry is used to describe this type of complex structure in terms of its main characteristic, where traditional Euclidean geometry fails. Over the last few years, fractal geometry has been applied in many medical image (1D, 2D, or 3D) analysis applications, and it plays a significant role in breast cancer detection using digital mammogram images. Fractals are also used in thermography for early detection of masses using thermal texture. This paper presents an overview of recent aspects and initiatives of fractals in breast cancer detection in both mammography and thermography. The scope of fractal geometry in automatic breast cancer detection using digital mammogram and thermogram images is analyzed, which forms a foundation for further study of the application of fractal geometry in medical imaging to improve the efficiency of automatic detection.
Keywords: fractal, tumor, thermography, mammography
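The standard fractal feature for such analyses is the box-counting dimension: count the boxes that contain foreground at several scales and fit the slope of log(count) against log(1/box size). A generic sketch (the scale set and the binary-mask input are illustrative; papers in this area differ in how they binarize the tumor region first):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting (fractal) dimension of a binary mask."""
    mask = np.asarray(mask, bool)
    counts = []
    for s in sizes:
        h, w = mask.shape[0] // s, mask.shape[1] // s
        # Tile the mask into s x s boxes and count boxes with any foreground.
        trimmed = mask[:h * s, :w * s].reshape(h, s, w, s)
        counts.append(trimmed.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1 / np.asarray(sizes, float)),
                          np.log(counts), 1)
    return slope
```

A filled region yields a dimension near 2, while ragged, infiltrative tumor boundaries yield intermediate values; it is this difference from the smoother background tissue that makes the dimension a useful detection feature.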
Procedia PDF Downloads 389
1599 Evaluation of the Urban Landscape Structures and Dynamics of Hawassa City, Using Satellite Images and Spatial Metrics Approaches, Ethiopia
Authors: Berhanu Terfa, Nengcheng C.
Abstract:
The study deals with the analysis of urban expansion and land transformation of Hawassa City using remote sensing data and landscape metrics over the last three decades (1987–2017). Multi-temporal satellite images, viz. TM (1987), TM (1995), ETM+ (2005), and OLI (2017), were used to examine urban expansion, growth types, and spatial isolation within the urban landscape, in order to develop an understanding of the trends of built-up growth in Hawassa City, Ethiopia. Landscape metrics and built-up density were employed to analyze the pattern, process, and overall growth status. For analysis, the area under investigation was divided into concentric circles of 1 km incremental radius centered on the central pixel (the central business district). The results show that the built-up area increased by 541.32% between 1987 and 2017, and extension growth types were predominant (more than 67%). Major growth took place in a haphazard manner in the north-west direction, followed by the north, during 1987–1995, whereas built-up development was predominantly in the south and southwest directions during 1995–2017. The landscape metrics revealed that urban patch density, total edge, and edge density increased, while the mean nearest-neighbor distance decreased, showing a tendency toward sprawl.
Keywords: landscape metrics, spatial patterns, remote sensing, multi-temporal, urban sprawl
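One of the sprawl indicators used above, edge density, can be computed directly from a classified built-up raster: total length of edge between built and non-built cells divided by landscape area. A minimal sketch (the 30 m default cell size matches Landsat TM/ETM+/OLI pixels, but the function itself is illustrative, not the study's software):

```python
import numpy as np

def edge_density(built, cell_size=30.0):
    """Edge density (m per m^2) of a binary built-up raster: total
    built/non-built edge length divided by landscape area."""
    b = np.asarray(built, bool)
    # Interior edges: adjacent cell pairs with differing class.
    edges = np.sum(b[:, 1:] != b[:, :-1]) + np.sum(b[1:, :] != b[:-1, :])
    # Raster-border edges of built cells.
    edges += b[0, :].sum() + b[-1, :].sum() + b[:, 0].sum() + b[:, -1].sum()
    area = b.size * cell_size ** 2
    return edges * cell_size / area
```

Computed per year on the classified TM/ETM+/OLI maps, a rising edge density means increasingly fragmented, irregular built-up patches, which is the sprawl signal the study reports.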
Procedia PDF Downloads 286
1598 Enhanced Acquisition Time of a Quantum Holography Scheme within a Nonlinear Interferometer
Authors: Sergio Tovar-Pérez, Sebastian Töpfer, Markus Gräfe
Abstract:
This work proposes a technique that decreases the detection acquisition time of quantum holography schemes to one-third, opening up the possibility of imaging moving objects. Since its invention, quantum holography with undetected photons has gained interest in the scientific community, mainly due to its ability to tailor the detected wavelengths according to the needs of the implementation. While this wavelength flexibility grants the scheme a wide range of possible applications, an important matter had yet to be addressed: since the scheme uses digital phase-shifting techniques to retrieve the object information from the interference pattern, it must acquire a set of at least four images of the interference pattern with well-defined phase steps to recover the full object information. Hence, the imaging method requires long acquisition times to produce well-resolved images, and as a consequence, the measurement of moving objects remained out of reach. This work presents the use and implementation of a spatial light modulator together with a digital holographic technique called quasi-parallel phase shifting. This technique uses the spatial light modulator to build a structured phase image consisting of a chessboard pattern containing the different phase steps for digitally calculating the object information. Depending on the reduction in the number of needed frames, the acquisition time is reduced by a significant factor. This technique opens the door to implementing the scheme for moving objects; in particular, the application of the scheme to imaging live specimens comes one step closer.
Keywords: quasi-parallel phase shifting, quantum imaging, quantum holography, quantum metrology
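The underlying four-step phase-shifting retrieval that the quasi-parallel technique packs into one chessboard-patterned frame is standard: with interferograms Iₖ = A + B·cos(φ + δₖ) at phase steps δₖ = 0, π/2, π, 3π/2, the object phase is φ = atan2(I₃π/₂ − I_π/₂, I₀ − I_π). The sketch below shows the sequential version; the quasi-parallel variant reads the four steps from neighboring chessboard cells of a single frame instead.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Recover object phase from four interferograms taken at phase
    steps 0, pi/2, pi, 3*pi/2 of I_k = A + B*cos(phi + delta_k)."""
    return np.arctan2(np.asarray(i3, float) - np.asarray(i1, float),
                      np.asarray(i0, float) - np.asarray(i2, float))
```

Since I₃ − I₁ = 2B·sin(φ) and I₀ − I₂ = 2B·cos(φ), the arctangent cancels both the background A and the modulation B, which is why the method is insensitive to uneven illumination.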
Procedia PDF Downloads 114