Search results for: GF-2 images
591 Impact of Silicon Surface Modification on the Catalytic Performance Towards CO₂ Conversion of Cu₂S/Si-Based Photocathodes
Authors: Karima Benfadel, Lamia Talbi, Sabiha Anas Boussaa, Afaf Brik, Assia Boukezzata, Yahia Ouadah, Samira Kaci
Abstract:
In order to prevent global warming, which is mainly caused by the increase in carbon dioxide levels in the atmosphere, it is of interest to produce renewable energy in the form of chemical energy by converting carbon dioxide into alternative fuels and other energy-dense products. Photoelectrochemical reduction of carbon dioxide to value-added products and fuels is a promising and current method. The objective of our study is to develop Cu₂S-based photoelectrodes, in which Cu₂S is used as a CO₂ photoelectrocatalyst deposited on nanostructured silicon substrates. Cu₂S thin layers were deposited using the chemical bath deposition (CBD) technique. Silicon nanowires and nanopyramids were obtained by alkaline etching. SEM and UV-visible spectroscopy were used to analyse the morphology and optical characteristics. Using a potentiostat station, we characterized the photoelectrochemical properties. We performed cyclic voltammetry with and without CO₂ purging, as well as linear sweep voltammetry (LSV) in the dark and under white light irradiation. We also performed chronoamperometry to study the stability of our photocathodes. The quality of the nanowires and nanopyramids was visible in the SEM images, and after Cu₂S deposition, we could see how the deposit was distributed over the textured surfaces. The inclusion of the Cu₂S layer applied on textured substrates significantly reduces the reflectance (R%). The catalytic performance towards CO₂ conversion of the Cu₂S/Si-based photocathodes revealed that silicon surfaces textured with nanowires and pyramids exhibit better photoelectrochemical behavior than those without surface modification.
Keywords: CO₂ conversion, Cu₂S photocathode, nanostructured silicon, electrochemistry
Procedia PDF Downloads 78
590 Preparation of Electrospun PLA/ENR Fibers
Authors: Jaqueline G. L. Cosme, Paulo H. S. Picciani, Regina C. R. Nunes
Abstract:
Electrospinning is a technique for the fabrication of nanoscale fibers. The general electrospinning system consists of a syringe filled with polymer solution, a syringe pump, a high-voltage source, and a grounded counter electrode. During electrospinning, a volumetric flow is set by the syringe pump and an electric voltage is applied. This creates an electric potential between the needle and the counter electrode (collector plate), which results in the formation of a Taylor cone and a jet. The jet moves towards the lower potential, the counter electrode, where the solvent of the polymer solution evaporates and the polymer fiber is formed. On the way to the counter electrode, the fiber is accelerated by the electric field. The bending instabilities that occur produce helical looping movements of the jet, which result from the Coulomb repulsion of the surface charge. Through these bending instabilities the jet is stretched, so that the fiber diameter decreases. In this study, a thermoplastic/elastomeric binary blend of non-vulcanized epoxidized natural rubber (ENR) and poly(lactic acid) (PLA) was electrospun using polymer solutions consisting of varying proportions of PLA and ENR. Specifically, 15% (w/v) PLA/ENR solutions were prepared in chloroform at proportions of 5, 10, 25, and 50% (w/w). The morphological and thermal properties of the electrospun mats were investigated by scanning electron microscopy (SEM) and differential scanning calorimetry (DSC) analysis. The SEM images demonstrated the production of micrometer- and sub-micrometer-sized fibers with no bead formation. The blend miscibility was evaluated by thermal analysis, which showed that blending did not improve the thermal stability of the systems.
Keywords: epoxidized natural rubber, poly(lactic acid), electrospinning, chemistry
Procedia PDF Downloads 410
589 Development of a Sequential Multimodal Biometric System for Web-Based Physical Access Control into a Security Safe
Authors: Babatunde Olumide Olawale, Oyebode Olumide Oyediran
Abstract:
The security safe is a place or building where classified documents and precious items are kept. To prevent unauthorised persons from gaining access to such safes, many technologies have been used. However, frequent reports of unauthorised persons gaining access to security safes with the aim of removing documents and items from them indicate that there are still security gaps in the technologies currently used for access control. In this paper, we try to solve this problem by developing a multimodal biometric system for physical access control into a security safe using face and voice recognition. The safe is accessed through a combination of face and speech pattern recognition, in that sequential order. User authentication is achieved through the use of a camera/sensor unit and a microphone unit, both attached to the door of the safe. The user's face is captured by the camera/sensor, while the speech is captured by the microphone unit. The Scale Invariant Feature Transform (SIFT) algorithm was used to train images to form templates for the face recognition system, while the Mel-Frequency Cepstral Coefficients (MFCC) algorithm was used to train the speech recognition system to recognise authorised users' speech. Both algorithms were hosted on two separate web-based servers, and for automatic analysis, the developed system was simulated in a MATLAB environment. The results obtained show that the developed system was able to grant access to authorised users while denying access to unauthorised persons.
Keywords: access control, multimodal biometrics, pattern recognition, security safe
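As an illustration of the sequential face-then-voice decision flow described above, a minimal sketch is given below; it is not the authors' MATLAB implementation, and the template files, thresholds, and matching criteria are assumptions made for illustration only.

```python
# Minimal sketch of a sequential face-then-voice check (illustrative only).
# Assumes enrolled templates exist on disk; thresholds are arbitrary placeholders.
import cv2
import librosa
import numpy as np

def face_matches(probe_path, template_path, min_good_matches=30):
    """SIFT descriptor matching with Lowe's ratio test against the enrolled face template."""
    sift = cv2.SIFT_create()
    probe = cv2.imread(probe_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    _, d1 = sift.detectAndCompute(probe, None)
    _, d2 = sift.detectAndCompute(template, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    return len(good) >= min_good_matches

def voice_matches(probe_wav, template_wav, max_distance=80.0):
    """Compare mean MFCC vectors of the probe utterance and the enrolled utterance."""
    def mean_mfcc(path):
        y, sr = librosa.load(path, sr=16000)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    return np.linalg.norm(mean_mfcc(probe_wav) - mean_mfcc(template_wav)) < max_distance

def grant_access(face_img, voice_wav, face_template, voice_template):
    # Sequential order: the voice check runs only if the face check passes.
    return face_matches(face_img, face_template) and voice_matches(voice_wav, voice_template)
```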
Procedia PDF Downloads 335
588 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes
Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo
Abstract:
Doctors' notes reflect their impressions, attitudes, clinical sense, and opinions about patients' conditions and progress, as well as other information that is essential for doctors' daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community. Automatically extracting information from unstructured text data is known to be a difficult task, as opposed to dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors' decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes such as survival or mortality can be useful in informing the judgement of clinical sentiment in ICU clinical notes. This paper introduces two contributions: first, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of note not previously seen by ClinicalBERT or Bio_Discharge_Summary_BERT. The model, which is based on ClinicalBERT, achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, and precision 0.96). Second, we perform data augmentation using clinical contextual word embeddings based on a pre-trained clinical model to balance the samples in each class of the data (survived vs. deceased patients). Data augmentation improves the accuracy of prediction slightly (Acc 96.67%, AUC 0.98, and precision 0.92).
Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation
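A minimal sketch of fine-tuning a clinical transformer for binary note-level outcome/sentiment classification with the Hugging Face Trainer is shown below; the checkpoint name, example notes, and hyperparameters are assumptions rather than the exact COVID_ICU_BERT configuration.

```python
# Minimal fine-tuning sketch (assumed checkpoint and hyperparameters, not the paper's exact setup).
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

checkpoint = "emilyalsentzer/Bio_ClinicalBERT"  # assumed publicly available clinical BERT
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Hypothetical ICU notes labelled by outcome (1 = deceased, 0 = survived).
data = Dataset.from_dict({"text": ["pt stable, weaning O2 ...", "worsening ARDS ..."],
                          "label": [0, 1]})
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length",
                                     max_length=256), batched=True)

args = TrainingArguments(output_dir="covid_icu_bert", num_train_epochs=3,
                         per_device_train_batch_size=8, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=data).train()
```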
Procedia PDF Downloads 206
587 Application of Remote Sensing for Monitoring the Impact of Lapindo Mud Sedimentation for Mangrove Ecosystem, Case Study in Sidoarjo, East Java
Authors: Akbar Cahyadhi Pratama Putra, Tantri Utami Widhaningtyas, M. Randy Aswin
Abstract:
Indonesia, as an archipelagic nation, has a very long coastline with large potential marine resources, one of which is the mangrove ecosystem. The Lapindo mudflow disaster in Sidoarjo, East Java, required the mudflow to be discharged into the sea through the Brantas and Porong rivers. The mud material transported by the river flow is feared to be dangerous because it contains harmful substances such as heavy metals. This study aims to map the mangrove ecosystem in terms of its density, to assess how large the impact of the Lapindo mud disaster on the mangrove ecosystem is, and to accompany efforts to maintain the continuity of the mangrove ecosystem. Mapping of the coastal mangrove conditions of Sidoarjo was done using remote sensing products, namely Landsat 7 ETM+ images recorded in the dry months of 2002, 2006, 2009, and 2014. Mangrove density was detected using NDVI, which uses band 3 (the red channel) and band 4 (the near-infrared channel). Image processing to produce the NDVI was carried out using ENVI 5.1 software. The NDVI values used for the detection of mangrove density range from 0 to 1. The mangrove ecosystem experienced a significant year-on-year increase in both area and density. Mangrove growth is affected by the deposition of Lapindo mud material at the Porong and Brantas river estuaries, where the silt provides a suitable growing medium for the mangrove ecosystem, which is increasingly expanding. The increase in density is also supported by public awareness: to prevent the heavy metals in the material from spreading, mangrove planting has been carried out around the Lapindo mud-affected farms.
Keywords: archipelagic nation, mangrove, Lapindo mudflow disaster, NDVI
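The NDVI used above follows the standard (NIR − Red)/(NIR + Red) form for Landsat 7 ETM+ band 3 (red) and band 4 (NIR); a minimal sketch is shown below, with hypothetical file names (the study itself carried out this step in ENVI 5.1).

```python
# NDVI = (NIR - Red) / (NIR + Red); file paths are hypothetical.
import numpy as np
import rasterio

with rasterio.open("LE07_B3_red.tif") as red_src, rasterio.open("LE07_B4_nir.tif") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")

ndvi = np.where((nir + red) == 0, 0, (nir - red) / (nir + red))
mangrove_density = np.clip(ndvi, 0, 1)  # the study maps density on the 0-1 NDVI range
```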
Procedia PDF Downloads 438
586 Evaluating the Impact of Expansion on Urban Thermal Surroundings: A Case Study of Lahore Metropolitan City, Pakistan
Authors: Usman Ahmed Khan
Abstract:
Urbanization directly affects the existing infrastructure, landscape modification, environmental contamination, and traffic pollution, especially if there is a lack of urban planning. Recently, rapid urban sprawl has resulted in less developed green areas and has had devastating environmental consequences. This study aimed to examine past urban expansion rates and measure land surface temperature (LST) from satellite data. Land use land cover (LULC) maps for the years 1996, 2010, 2013, and 2017 were generated using Landsat satellite images. Four main classes, i.e., water, urban, bare land, and vegetation, were identified using unsupervised classification with the iterative self-organizing data analysis (ISODATA) technique. LST can be derived from satellite thermal data through several procedures: atmospheric and radiometric calibrations, surface emissivity corrections, and classification of spatial variability in land cover. Different methods and formulas were used in the algorithm that retrieves the land surface temperature to help us study the thermal environment of the ground surface. To verify the algorithm, the land surface temperature and the near-air temperature were compared. The results showed that, from 1996 to 2017, urban areas increased considerably, by about 48%. A few areas of the city also showed a reduction in LST from 1996 to 2017; these areas had just begun their transitional phase from rural to urban LULC. The mean temperature of the city increased on average by about 1 ºC each year in the month of October. Green and vegetated areas decreased in extent, while the number of pixels in the urban class increased.
Keywords: LST, LULC, ISODATA, urbanization
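A minimal sketch of the standard single-band LST retrieval chain (digital number → TOA radiance → brightness temperature → emissivity-corrected LST) is shown below; the calibration coefficients, emissivity, and wavelength are placeholders that would normally come from the Landsat scene metadata and the LULC-derived emissivity map, not values used in the study.

```python
import numpy as np

def lst_from_thermal(dn, ml, al, k1, k2, emissivity, wavelength_um=11.45):
    """Single-band LST retrieval; ml, al, k1, k2 come from the scene metadata (MTL file)."""
    radiance = ml * dn + al                          # DN -> TOA spectral radiance
    bt = k2 / np.log(k1 / radiance + 1.0)            # radiance -> at-sensor brightness temperature (K)
    rho = 1.438e-2                                   # h*c/k_B in m*K
    lam = wavelength_um * 1e-6
    lst = bt / (1.0 + (lam * bt / rho) * np.log(emissivity))
    return lst - 273.15                              # Kelvin -> Celsius

# Example with placeholder values (not actual scene coefficients):
dn = np.array([[140.0, 150.0], [160.0, 170.0]])
print(lst_from_thermal(dn, ml=0.055, al=1.18, k1=666.09, k2=1282.71, emissivity=0.96))
```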
Procedia PDF Downloads 100
585 Visual Intelligence: Perception, Image and Manipulation in Visual Communication
Authors: Poojitha Vemula
Abstract:
This study examines how image manipulation is used to communicate through an audience's perceptions and how visual intelligence is conceived. With the use of many software tools and high-end skills, designers have developed a third eye for combining two different visuals and creating the desired image using Photoshop and other software skills. The purpose of visual intelligence is to convey a message to the targeted audience. For instance, images of models are retouched on the skin to make them more convincing and to draw attention from the audience. There are many ways of manipulating an image, such as double exposure, retouching with photographic inks or paint, airbrushing, piecing photos together, or enhancing the brightness and contrast. To understand visual intelligence, a questionnaire survey as well as research was conducted on how image manipulation is used by both the audience and the designers. This depends on the message that needs to be conveyed by the brands. For instance, Fair & Lovely, a brightening cream for women, uses a lot of retouching and effects to show the dramatic change the cream has on dark or dusky faces. Thus the designer's role is to use their third eye to incorporate the message into visuals. The research and questionnaire survey draw conclusions about the perceptions and manipulations used in visual communication. All of this serves to make communication between the designer and the audience effortless, using the skills of the designer and the features provided by the software. The objective of visual intelligence is to convey the message of the brands that advertise their products or services by using visuals created through software. Conveying a message through visual intelligence requires the audience's perceptions and understanding of the visuals created by the artists or designers. Visual intelligence determines how we use our technical skills to retouch and manipulate an image for a better understanding, to convey the message to the targeted audience. This also bridges the communication between the brand and the audience.
Keywords: graphic design, visual communication, convey messages, photoshop, image manipulation
Procedia PDF Downloads 218
584 Crossing Multi-Source Climate Data to Estimate the Effects of Climate Change on Evapotranspiration Data: Application to the French Central Region
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
Climatic factors are the subject of considerable research, both methodologically and instrumentally. Under the effect of climate change, estimating climate parameters with precision remains one of the main objectives of the scientific community, with a view to assessing climate change and its repercussions on humans and the environment. However, many regions of the world suffer from a severe lack of reliable instruments that can make up for this deficit. Alternatively, the use of empirical methods becomes the only way to assess certain parameters that can act as climate indicators. Several scientific methods are used to evaluate evapotranspiration, either directly at climate stations or through empirical methods. All these methods take a point-based approach and in no case capture the spatial variation of this parameter. We therefore propose in this paper the use of three sources of information (the Meteo France network of weather stations, world databases, and MODIS satellite images) to evaluate spatial evapotranspiration (ETP) using the Turc method. This first step will reflect the degree of relevance of the indirect (satellite) methods and their generalization to sites without stations. Representing the spatial variation of this parameter in a geographical information system (GIS) accounts for the heterogeneity of its behaviour. This heterogeneity is due to the influence of site morphological factors and will make it possible to appreciate the role of certain topographic and hydrological parameters. A phase of predicting the medium- and long-term evolution of evapotranspiration under the effect of climate change, through the application of Intergovernmental Panel on Climate Change (IPCC) scenarios, gives a realistic overview of the contribution of aquatic systems at the scale of the region.
Keywords: climate change, ETP, MODIS, IPCC scenarios
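For reference, a commonly cited monthly form of the Turc formula is ETP = 0.40 · T/(T + 15) · (Rg + 50) in mm/month, with T the mean air temperature in °C, Rg the global radiation in cal/cm²/day, and a humidity correction applied when relative humidity falls below 50%; the sketch below encodes that form as an illustration only and is not the exact spatialized implementation used in the study.

```python
def turc_etp_monthly(t_mean_c, rg_cal_cm2_day, rh_percent=60.0, k=0.40):
    """Monthly Turc potential evapotranspiration (mm/month), as commonly cited.
    t_mean_c: mean monthly air temperature (degC); rg: global radiation (cal/cm2/day).
    k = 0.40 for 30/31-day months (0.37 is often used for February)."""
    if t_mean_c <= 0:
        return 0.0
    etp = k * (t_mean_c / (t_mean_c + 15.0)) * (rg_cal_cm2_day + 50.0)
    if rh_percent < 50.0:                      # aridity correction for dry air
        etp *= 1.0 + (50.0 - rh_percent) / 70.0
    return etp

# Example: 20 degC mean temperature, 450 cal/cm2/day global radiation, humid month
print(turc_etp_monthly(20.0, 450.0))  # ~114 mm/month under these assumptions
```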
Procedia PDF Downloads 100
583 On the Qarat Kibrit Salt Dome Faulting System South of Adam, Oman: In Search of Uranium Anomalies
Authors: Alaeddin Ebrahimi, Narasimman Sundararajan, Bernhard Pracejus
Abstract:
The development of salt domes, often rising from depths of some 10 km or more, causes intense faulting of the surrounding host rocks (salt tectonics). The fractured rocks then present ideal space for oil, which can migrate and become trapped. If such migrating hydrocarbons pass uranium-bearing rock units (e.g., shales), uranium is collected and enriched by organic carbon compounds. Brines from the salt body are also ideal carriers for oxidized uranium species and will further mobilize uranium when in contact with uranium-enriched oils. Uranium then has the potential to mineralize in the vicinity of the dome (blue halite is evidence of radiation having affected salt deposits elsewhere in the world). Based on this knowledge, the Qarat Kibrit salt dome was investigated with a well-established geophysical method, very low frequency electromagnetics (VLF-EM), along five traverses approximately 250 m in length (10 m station intervals) in order to identify subsurface fault systems. In-phase and quadrature components of the VLF-EM signal were recorded at two different transmitter frequencies (24.0 and 24.9 kHz). The images of the Fraser-filtered response of the in-phase component indicate a conductive zone (fault) in the southeast and southwest of the study area. The Karous-Hjelt current density pseudo-section delineates subsurface faults at depths between 10 and 40 m. The stacked profiles of the Fraser-filtered responses brought out two plausible trends/directions of faults. However, no evidence of uranium enrichment has been recorded in this area.
Keywords: salt dome, uranium, fault, in-phase component, quadrature component, Fraser filter, Karous-Hjelt current density
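The Fraser filtering mentioned above converts in-phase crossovers into peaks by differencing sums of adjacent station pairs; a minimal sketch of the four-point filter is given below, noting that the sign convention and normalization vary between authors, and that the Karous-Hjelt current-density filtering uses a separate set of coefficients not reproduced here.

```python
import numpy as np

def fraser_filter(in_phase):
    """Four-point Fraser filter, one common sign convention:
    F_i = (f[i] + f[i+1]) - (f[i+2] + f[i+3]),
    which turns a positive-to-negative crossover of the in-phase data into a positive peak
    plotted midway along the four-station window."""
    f = np.asarray(in_phase, dtype=float)
    return (f[:-3] + f[1:-2]) - (f[2:-1] + f[3:])

# Example: synthetic in-phase profile with a crossover near the centre of the traverse
profile = np.array([2.0, 1.5, 0.8, 0.1, -0.9, -1.6, -2.1])
print(fraser_filter(profile))  # the peak marks the conductor (fault) location
```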
Procedia PDF Downloads 240
582 Computer-Aided Diagnosis System Based on Multiple Quantitative Magnetic Resonance Imaging Features in the Classification of Brain Tumor
Authors: Chih Jou Hsiao, Chung Ming Lo, Li Chun Hsieh
Abstract:
Brain tumors do not have a high incidence rate, but their high mortality rate and poor prognosis still make them a major concern. On clinical examination, the grading of brain tumors depends on pathological features. However, histopathological analysis has some weak points that can cause misgrading; for example, interpretations can vary in the absence of a well-established definition. Furthermore, the heterogeneity of malignant tumors makes it challenging to extract meaningful tissue during surgical biopsy. With the development of magnetic resonance imaging (MRI), tumor grading can be accomplished by a noninvasive procedure. To further improve diagnostic accuracy, this study proposed a computer-aided diagnosis (CAD) system based on MRI features to provide suggestions for tumor grading. Gliomas are the most common type of malignant brain tumor (about 70%). This study collected 34 glioblastomas (GBMs) and 73 lower-grade gliomas (LGGs) from The Cancer Imaging Archive. After defining the regions of interest in the MRI images, multiple quantitative morphological features, such as region perimeter, region area, compactness, the mean and standard deviation of the normalized radial length, and moment features, were extracted from the tumors for classification. As a result, two of the five morphological features and three of the four image moment features achieved p values of <0.001, and the remaining moment feature had a p value of <0.05. The CAD system using the combination of all features achieved an accuracy of 83.18% in classifying the gliomas into LGGs and GBMs. The sensitivity was 70.59% and the specificity was 89.04%. The proposed system can serve as a second reader for radiologists in clinical examinations.
Keywords: brain tumor, computer-aided diagnosis, gliomas, magnetic resonance imaging
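A minimal sketch of how region-based features such as compactness and the normalized radial length (NRL) statistics can be computed from a binary tumor mask is shown below; the exact feature definitions used in the study may differ (compactness in particular has several definitions in the literature), and the mask here is synthetic.

```python
import numpy as np
from skimage import measure

def shape_features(mask):
    """mask: 2D boolean array with the segmented tumor region set to True."""
    props = measure.regionprops(mask.astype(int))[0]
    area, perimeter = props.area, props.perimeter
    compactness = (perimeter ** 2) / (4.0 * np.pi * area)   # one common definition

    # Normalized radial length: centroid-to-boundary distances scaled by their maximum
    contour = measure.find_contours(mask.astype(float), 0.5)[0]
    cy, cx = props.centroid
    radial = np.hypot(contour[:, 0] - cy, contour[:, 1] - cx)
    nrl = radial / radial.max()
    return {"area": area, "perimeter": perimeter, "compactness": compactness,
            "nrl_mean": nrl.mean(), "nrl_std": nrl.std()}

# Example with a synthetic elliptical "tumor" region
yy, xx = np.mgrid[:128, :128]
mask = ((yy - 64) / 30.0) ** 2 + ((xx - 64) / 20.0) ** 2 <= 1.0
print(shape_features(mask))
```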
Procedia PDF Downloads 260
581 Low-Cost Parking Lot Mapping and Localization for Home Zone Parking Pilot
Authors: Hongbo Zhang, Xinlu Tang, Jiangwei Li, Chi Yan
Abstract:
Home zone parking pilot (HPP) is a fast-growing segment of low-speed autonomous driving applications. It requires the car to automatically cruise around a parking lot and park itself within a range of up to 100 meters inside a recurrent home/office parking lot, which requires a precise parking lot mapping and localization solution. Although Lidar is ideal for SLAM, car OEMs favor a low-cost, fish-eye-camera-based visual SLAM approach. Recent approaches have employed segmentation models to extract semantic features and improve mapping accuracy, but these AI models are memory-unfriendly and computationally expensive, making deployment on embedded ADAS systems difficult. To address this issue, we propose a new method that utilizes object detection models to extract robust and accurate parking lot features. The proposed method can reduce computational costs while maintaining high accuracy. Once combined with the vehicle's wheel-pulse information, the system can construct maps and locate the vehicle in real time. This article will discuss in detail (1) the fish-eye-based Around View Monitoring (AVM) with transparent chassis images as the inputs, (2) an Object Detection (OD) based feature point extraction algorithm to generate the point cloud, (3) a low-computation parking lot mapping algorithm, and (4) the real-time localization algorithm. Finally, we demonstrate the experimental results with an embedded ADAS system installed on a real car in an underground parking lot.
Keywords: ADAS, home zone parking pilot, object detection, visual SLAM
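The wheel-pulse information mentioned above is typically converted into an incremental pose estimate with a simple dead-reckoning model before being fused with the visual features; a minimal sketch of such an update is given below, with pulses-per-revolution, wheel radius, and track width as assumed example values rather than the paper's calibration.

```python
import math

def odometry_update(x, y, heading, pulses_left, pulses_right,
                    pulses_per_rev=100, wheel_radius=0.30, track_width=1.6):
    """Differential dead-reckoning update from rear-wheel pulse counts (illustrative values)."""
    per_pulse = 2.0 * math.pi * wheel_radius / pulses_per_rev
    d_left, d_right = pulses_left * per_pulse, pulses_right * per_pulse
    d_center = 0.5 * (d_left + d_right)
    d_theta = (d_right - d_left) / track_width
    x += d_center * math.cos(heading + 0.5 * d_theta)
    y += d_center * math.sin(heading + 0.5 * d_theta)
    return x, y, heading + d_theta

# Example: accumulate the pose over a stream of (left, right) pulse counts per cycle
pose = (0.0, 0.0, 0.0)
for pulses in [(10, 10), (12, 8), (12, 8), (10, 10)]:
    pose = odometry_update(*pose, *pulses)
print(pose)
```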
Procedia PDF Downloads 67
580 Role of Chloride Ions on The Properties of Electrodeposited ZnO Nanostructures
Authors: L. Mentar, O. Baka, M. R. Khelladi, A. Azizi
Abstract:
Zinc oxide (ZnO), as a transparent semiconductor with a wide band gap of 3.4 eV and a large exciton binding energy of 60 meV at room temperature, is one of the most promising materials for a wide range of modern applications. With the development of film growth technologies and the intense recent interest in nanotechnology, several varieties of ZnO nanostructured materials have been synthesized almost exclusively by thermal evaporation methods, particularly chemical vapor deposition (CVD), which generally requires a high growth temperature above 550 °C. In contrast, wet chemistry techniques such as hydrothermal synthesis and electro-deposition are promising alternatives for synthesizing ZnO nanostructures, especially at significantly lower temperatures (below 200 °C). In this study, the electro-deposition method was used to produce zinc oxide (ZnO) nanostructures on fluorine-doped tin oxide (FTO)-coated conducting glass substrates from a chloride bath. We present the influence of the KCl concentration on the electro-deposition process and on the morphological, structural, and optical properties of the ZnO nanostructures. The electro-deposition potentials of ZnO were determined using cyclic voltammetry. From the Mott-Schottky measurements, the flat-band potential and the donor density of the ZnO nanostructures were determined. Field emission scanning electron microscopy (FESEM) images showed different sizes and morphologies of the nanostructures, which depend on the Cl⁻ concentration. Very neat hexagonal grains are observed for the nanostructures deposited at 0.1 M KCl. The X-ray diffraction (XRD) study confirms the wurtzite phase of the ZnO nanostructures, with a preferred orientation along the (002) plane normal to the substrate surface. UV-visible spectra showed a significant optical transmission (~80%), which decreased at low Cl⁻ concentrations. The energy band gap values have been estimated to be between 3.52 and 3.80 eV.
Keywords: Cl⁻, electro-deposition, FESEM, Mott-Schottky, XRD, ZnO
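For reference, the donor density follows from the slope of the Mott-Schottky plot (1/C² versus applied potential) as N_D = 2/(e·εr·ε₀·slope) for an n-type semiconductor with the capacitance expressed per unit area; the sketch below illustrates this, with the relative permittivity of ZnO taken as an assumed literature value and synthetic data standing in for the measurements.

```python
import numpy as np

E_CHARGE = 1.602e-19       # C
EPS_0 = 8.854e-12          # F/m
EPS_R_ZNO = 8.5            # assumed relative permittivity of ZnO (literature value)

def donor_density(potential_v, capacitance_f_per_m2, eps_r=EPS_R_ZNO):
    """Fit the linear region of 1/C^2 vs V and return N_D (cm^-3) and V_fb (V)."""
    inv_c2 = 1.0 / capacitance_f_per_m2 ** 2
    slope, intercept = np.polyfit(potential_v, inv_c2, 1)
    n_d = 2.0 / (E_CHARGE * eps_r * EPS_0 * slope)     # m^-3 (positive slope -> n-type)
    v_fb = -intercept / slope                           # neglecting the small kT/e term
    return n_d * 1e-6, v_fb

# Synthetic, roughly linear Mott-Schottky data (capacitance per unit area in F/m^2)
v = np.linspace(-0.2, 0.6, 9)
c = 1.0 / np.sqrt(1.6e5 * (v + 0.45) + 2e4)
print(donor_density(v, c))   # expect N_D on the order of 1e18 cm^-3 for these numbers
```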
Procedia PDF Downloads 289
579 Effect of Threshold Configuration on Accuracy in Upper Airway Analysis Using Cone Beam Computed Tomography
Authors: Saba Fahham, Supak Ngamsom, Suchaya Damrongsri
Abstract:
Objective: The objective is to determine the optimal threshold of the Romexis software for airway volume and minimum cross-sectional area (MCA) analysis, using ImageJ as the gold standard. Materials and Methods: A total of ten cone-beam computed tomography (CBCT) images were collected. The airway volume and MCA of each patient were analyzed using the automatic airway segmentation function in the CBCT DICOM viewer (Romexis). Airway volume and MCA measurements were conducted on each CBCT sagittal view with fifteen different threshold values in the Romexis software, ranging from 300 to 1000. Duplicate DICOM files, in axial view, were imported into ImageJ for concurrent airway volume and MCA analysis as the gold standard. The airway volume and MCA measured with Romexis and ImageJ were compared using a t-test with Bonferroni correction, and statistical significance was set at p < 0.003. Results: Concerning airway volume, thresholds of 600 to 850, as well as 1000, exhibited results that were not significantly different from those obtained with ImageJ. Regarding MCA, thresholds from 400 to 850 in the Romexis Viewer showed no significant difference from ImageJ. Notably, within the threshold range of 600 to 850, there were no statistically significant differences observed in either the airway volume or the MCA analyses in comparison to ImageJ. Conclusion: This study demonstrated that the use of Planmeca Romexis Viewer 6.4.3.3 within the threshold range of 600 to 850 yields airway volume and MCA measurements that exhibit no statistically significant difference from measurements obtained with ImageJ. This outcome holds implications for diagnosing upper airway obstructions and for post-orthodontic surgical monitoring.
Keywords: airway analysis, airway segmentation, cone beam computed tomography, threshold
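A minimal sketch of the statistical comparison described above (a paired t-test of each Romexis threshold against the ImageJ reference, with a Bonferroni-adjusted significance level of 0.05/15 ≈ 0.003) is shown below; the measurement values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical airway volumes (cm^3) for the ten patients measured in ImageJ
imagej_volume = np.array([21.3, 18.7, 25.1, 19.9, 23.4, 20.8, 22.5, 17.6, 24.2, 19.1])
romexis_by_threshold = {600: imagej_volume + np.random.normal(0, 0.4, 10),
                        300: imagej_volume - 3.0 + np.random.normal(0, 0.4, 10)}

alpha = 0.05 / 15          # Bonferroni correction for 15 threshold comparisons (~0.003)
for threshold, volumes in romexis_by_threshold.items():
    t_stat, p_value = stats.ttest_rel(volumes, imagej_volume)
    verdict = "differs" if p_value < alpha else "not significantly different"
    print(f"threshold {threshold}: p = {p_value:.4f} -> {verdict} from ImageJ")
```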
Procedia PDF Downloads 44
578 Band Characterization and Development of Hyperspectral Indices for Retrieving Chlorophyll Content
Authors: Ramandeep Kaur M. Malhi, Prashant K. Srivastava, G.Sandhya Kiran
Abstract:
Quantitative estimates of foliar biochemicals, namely chlorophyll content (CC), serve as key information for the assessment of plant productivity, stress, and the availability of nutrients. This also plays a critical role in predicting the dynamic response of any vegetation to changing climate conditions. The advent of hyperspectral data, with an enhanced number of available wavelengths, has increased the possibility of acquiring improved information on CC. Retrieval of CC is extensively carried out through well-known spectral indices derived from hyperspectral data. In the present study, an attempt is made to develop hyperspectral indices by identifying optimum bands for CC estimation in Butea monosperma (Lam.) Taub. growing in the forests of Shoolpaneshwar Wildlife Sanctuary, Narmada district, Gujarat State, India. 196 narrow bands of EO-1 Hyperion images were screened, and the optimum wavelengths in the blue, green, red, and near-infrared (NIR) regions were identified based on the coefficient of determination (R²) between band reflectance and laboratory-estimated CC. The identified optimum wavelengths were then employed to develop 12 hyperspectral indices. These spectral index values and CC values were then correlated to investigate the relation between laboratory-measured CC and the spectral indices. Band 15 in the blue range, Band 22 in the green range, Band 40 in the red region, and Band 79 in the NIR region were found to be the optimum bands for estimating CC. The optimum-band-based combinations proved to be the most effective indices for quantifying Butea CC, with NDVI and TVI identified as the best (R² > 0.7, p < 0.01). The study demonstrated the significance of band characterization in the development of the best hyperspectral indices for chlorophyll estimation, which can aid in monitoring the vitality of forests.
Keywords: band, characterization, chlorophyll, hyperspectral, indices
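A minimal sketch of the band-screening and index-building steps is shown below: each band's reflectance is correlated against laboratory CC, the band with the highest R² in each spectral region is retained, and narrow-band NDVI and TVI are formed from the selected red and NIR bands (TVI is taken here as sqrt(NDVI + 0.5), one common formulation); the reflectance matrix and band ranges are hypothetical.

```python
import numpy as np

def best_band(reflectance, cc, band_indices):
    """Return the band (within band_indices) whose reflectance correlates best with CC (max R^2)."""
    r2 = [np.corrcoef(reflectance[:, b], cc)[0, 1] ** 2 for b in band_indices]
    return band_indices[int(np.argmax(r2))]

# Hypothetical data: rows = leaf samples, columns = 196 Hyperion narrow bands
rng = np.random.default_rng(0)
reflectance = rng.random((40, 196))
cc = rng.random(40) * 60          # lab-measured chlorophyll content (e.g., ug/cm^2)

red_band = best_band(reflectance, cc, list(range(35, 45)))   # assumed red-region band range
nir_band = best_band(reflectance, cc, list(range(75, 85)))   # assumed NIR-region band range

red, nir = reflectance[:, red_band], reflectance[:, nir_band]
ndvi = (nir - red) / (nir + red)
tvi = np.sqrt(np.clip(ndvi + 0.5, 0, None))   # one common transformed vegetation index form
print(red_band, nir_band, np.corrcoef(ndvi, cc)[0, 1] ** 2)
```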
Procedia PDF Downloads 153
577 Exploring Selected Nigerian Fictional Work and Films as Sources of Peace Building and Conflict Resolution in the Natural Resource Extraction Regions of Nigeria: A Social Conflict Theoretical Perspective and Analysis
Authors: Joyce Onoromhenre Agofure
Abstract:
Research has shown how fictional works and films reflect the destruction of the environment due to the exploitation of oil, gas, gold, and forest products by multinational companies for profit, but it overlooks discussions on conflict resolution and peacebuilding. This paper, however, examines the manner in which art forms project peace and conflict resolution, thereby contributing to mediation and stability geared towards changing appalling situations in the resource extraction regions of Nigeria. The paper draws on selected Nigerian films, Blood and Oil (2019), directed by Curtis Graham, and Black November (2012), directed by Jeta Amata, and a novel, Death of Eternity (2007), by Adamu Kyuka Usman. The study seeks to show that the disruptions caused in the natural resource regions of Nigeria have not only had adverse effects on the social well-being of the people but also require resolution through peacebuilding. By adopting the theoretical insights of Social Conflict, this paper focuses on artistic processes that enhance peacebuilding and conflict resolution in non-violent ways by using scenes, visual effects, themes, and images that can educate by shaping opinions, influencing attitudes, and changing the ideas and behavioral patterns of individuals and communities. Put together, the research will open up critical perceptions brought about by the artists under study to shed light on the dire need to sustain peace and actively participate in conflict resolution in natural resource extraction spaces.
Keywords: natural resource, extraction, conflict resolution, peace building
Procedia PDF Downloads 80
576 Role of Imaging in Predicting the Receptor Positivity Status in Lung Adenocarcinoma: A Chapter in Radiogenomics
Authors: Sonal Sethi, Mukesh Yadav, Abhimanyu Gupta
Abstract:
The upcoming field of radiogenomics has the potential to upgrade the role of imaging in lung cancer management through noninvasive characterization of tumor histology and the genetic microenvironment. Receptor positivity, such as epidermal growth factor receptor (EGFR) and anaplastic lymphoma kinase (ALK) genotyping, is critical for treatment in lung adenocarcinoma. As conventional identification of receptor positivity is an invasive procedure, we analyzed the features on non-invasive computed tomography (CT) that predict receptor positivity in lung adenocarcinoma. Retrospectively, we performed a comprehensive study of 77 proven lung adenocarcinoma patients with CT images, EGFR and ALK receptor genotyping, and clinical information. In total, 22/77 patients were receptor-positive (15 had only an EGFR mutation, 6 had an ALK mutation, and 1 had both EGFR and ALK mutations). Various morphological characteristics and the metastatic distribution on CT were analyzed along with the clinical information. Univariate and multivariable logistic regression analyses were used. On multivariable logistic regression analysis, we found that spiculated margin, lymphangitic spread, air bronchogram, pleural effusion, and distant metastasis had significant predictive value for receptor mutation status. On univariate analysis, air bronchogram and pleural effusion had significant individual predictive value. Conclusions: Receptor-positive lung adenocarcinoma has characteristic imaging features compared with non-receptor-positive lung adenocarcinoma. Since CT is routinely used in lung cancer diagnosis, we can predict receptor positivity with a noninvasive technique and follow a more aggressive algorithm for the evaluation of distant metastases as well as for treatment.
Keywords: lung cancer, multidisciplinary cancer care, oncologic imaging, radiobiology
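A minimal sketch of the multivariable logistic regression step, using binary CT descriptors to predict receptor positivity, is shown below; the feature matrix and labels are hypothetical and merely mirror the variables named in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: spiculated margin, lymphangitic spread, air bronchogram, pleural effusion, distant metastasis
# (1 = present, 0 = absent); rows are hypothetical patients.
X = np.array([[1, 0, 1, 1, 0],
              [0, 0, 0, 0, 0],
              [1, 1, 1, 0, 1],
              [0, 1, 0, 1, 1],
              [0, 0, 1, 0, 0],
              [1, 0, 0, 1, 1]])
y = np.array([1, 0, 1, 1, 0, 1])   # 1 = EGFR/ALK mutation positive, 0 = negative (hypothetical)

model = LogisticRegression().fit(X, y)
print(dict(zip(["spiculation", "lymphangitic", "air_bronchogram", "effusion", "metastasis"],
               np.round(model.coef_[0], 2))))
print("predicted probability for a new case:", model.predict_proba([[1, 0, 1, 1, 0]])[0, 1])
```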
Procedia PDF Downloads 136
575 In Situ Volume Imaging of Cleared Mice Seminiferous Tubules Opens New Window to Study Spermatogenic Process in 3D
Authors: Lukas Ded
Abstract:
Studying tissue structure and histogenesis in the natural, 3D context is a challenging but highly beneficial process. In contrast to the classical approach of physical tissue sectioning and subsequent imaging, it enables the study of the relationships of individual cellular and histological structures in their native context. Recent developments in tissue clearing approaches and microscopic volume imaging/data processing enable the application of these methods also in the areas of developmental and reproductive biology. Here, using the CLARITY tissue clearing procedure and 3D confocal volume imaging, we optimized the protocol for clearing, staining, and imaging of mouse seminiferous tubules isolated from the testes without a cardiac perfusion procedure. Our approach enables high-magnification and fine-resolution axial imaging of the whole diameter of the seminiferous tubules, with potentially unlimited lateral imaging length. Hence, large continuous pieces of a seminiferous tubule can be scanned and digitally reconstructed for the study of single-tubule seminiferous stages using nuclear dyes. Furthermore, antibodies and various molecular dyes can be applied for the molecular labeling of individual cellular and subcellular structures, and the resulting 3D images can greatly increase our understanding of the spatiotemporal aspects of seminiferous tubule development and sperm ultrastructure formation. Finally, our newly developed algorithms for 3D data processing enable the massively parallel processing of large numbers of individual cell and tissue fluorescence signatures and the building of robust spermatogenic models under physiological and pathological conditions.
Keywords: CLARITY, spermatogenesis, testis, tissue clearing, volume imaging
Procedia PDF Downloads 136
574 Holographic Visualisation of 3D Point Clouds in Real-time Measurements: A Proof of Concept Study
Authors: Henrique Fernandes, Sofia Catalucci, Richard Leach, Kapil Sugand
Abstract:
Background: Holograms are 3D images formed by the interference of light beams from a laser or other coherent light source. Pepper's ghost is a form of hologram conceptualised in the 18th century. Holographic visualisation combined with metrology measurement techniques, by displaying measurements taken in real time in holographic form, can assist in research and education. New structural designs such as the Plexiglass Stand and the Hologram Box can optimise the holographic experience. Method: The equipment used included: (i) Zeiss's ATOS Core 300 optical coordinate measuring instrument, which scanned real-world objects; (ii) CloudCompare, open-source software used for point cloud processing; and (iii) the Hologram Box, designed and manufactured during this research to provide the blackout environment needed to display 3D point clouds of real-time measurements in holographic format, in addition to giving the holograms a degree of portability. The equipment was tailored to realise the goal of displaying measurements with an innovative technique and to improve on conventional methods. Three test scans were completed before the holographic conversion. Results: The outcome was a precise recreation of the original object in holographic form, presented with dense point clouds and surface density features in a colour map. Conclusion: This work establishes a way to visualise data in a point cloud system. To our knowledge, this is work that has never been attempted before. This achievement provides an advancement in holographic visualisation. The Hologram Box could be used as a feedback tool for measurement quality control and verification in future smart factories.
Keywords: holography, 3D scans, hologram box, metrology, point cloud
Procedia PDF Downloads 89
573 A Visual Analytics Tool for the Structural Health Monitoring of an Aircraft Panel
Authors: F. M. Pisano, M. Ciminello
Abstract:
Aerospace, mechanical, and civil engineering infrastructures can take advantage of damage detection and identification strategies in terms of maintenance cost reduction and operational life improvement, as well as for safety purposes. The challenge is to detect so-called "barely visible impact damage" (BVID), due to low/medium energy impacts, which can progressively compromise the structural integrity. The occurrence of any local change in material properties that can degrade the structural performance is to be monitored using so-called Structural Health Monitoring (SHM) systems, which are in charge of comparing the structural states before and after damage occurs. SHM seeks any "anomalous" response collected by means of sensor networks, which is then analyzed using appropriate algorithms. Independently of the specific analysis approach adopted for structural damage detection and localization, textual reports, tables, and graphs describing possible outlier coordinates and damage severity are usually provided as artifacts to be elaborated for extracting information about the current health condition of the structure under investigation. Visual Analytics can support the processing of monitored measurements, offering data navigation and exploration tools that leverage the native human capability of understanding images faster than text and tables. Herein, the enrichment of an SHM system through the integration of a Visual Analytics component is investigated. Analytical dashboards have been created by combining worksheets, so that a useful Visual Analytics tool is provided to structural analysts for exploring the structural health conditions examined by a Principal Component Analysis (PCA) based algorithm.
Keywords: interactive dashboards, optical fibers, structural health monitoring, visual analytics
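A minimal sketch of a PCA-based anomaly indicator of the kind such dashboards typically expose is shown below: the model is fitted on baseline (healthy-state) sensor readings, and new measurements are flagged when their reconstruction error exceeds an empirical threshold; the data and threshold rule are assumptions, not the authors' algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, size=(500, 12))        # healthy-state readings from 12 sensors
pca = PCA(n_components=3).fit(baseline)

def reconstruction_error(samples):
    restored = pca.inverse_transform(pca.transform(samples))
    return np.linalg.norm(samples - restored, axis=1)

threshold = np.percentile(reconstruction_error(baseline), 99)   # simple empirical threshold

new_scan = rng.normal(0.0, 1.0, size=(5, 12))
new_scan[2, 4] += 6.0                                   # simulate a local anomaly (impact damage)
flags = reconstruction_error(new_scan) > threshold
print(flags)          # the damaged measurement should be flagged as an outlier
```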
Procedia PDF Downloads 124
572 Optimized Weight Selection of Control Data Based on Quotient Space of Multi-Geometric Features
Authors: Bo Wang
Abstract:
The geometric processing of multi-source remote sensing data using control data of different scales and accuracies is an important research direction for multi-platform Earth observation systems. In the existing block bundle adjustment methods, the approach of using a single observation scale and precision for the controlling information in the adjustment system is unable to screen the control information and to assign reasonable and effective corresponding weights, which reduces the convergence and the reliability of the adjustment results. Referring to the relevant theory and technology of quotient space, several subjects are researched in this project. A multi-layer quotient space of multi-geometric features is constructed to describe and filter the control data. A normalized granularity merging mechanism for multi-layer control information is studied, and based on the normalized scale factor, a strategy to optimize the weight selection of control data that is less relevant to the adjustment system can be realized. At the same time, a geometric positioning experiment is conducted using multi-source remote sensing data, aerial images, and multiclass control data to verify the theoretical research results. This research is expected to break away from the convention of single-scale, single-accuracy control data in the adjustment process and to expand the theory and technology of photogrammetry. Thus, the problem of processing multi-source remote sensing data will be solved both theoretically and practically.
Keywords: multi-source image geometric process, high precision geometric positioning, quotient space of multi-geometric features, optimized weight selection
Procedia PDF Downloads 284
571 Condition Assessment of Reinforced Concrete Bridge Deck Using Ground Penetrating Radar
Authors: Azin Shakibabarough, Mojtaba Valinejadshoubi, Ashutosh Bagchi
Abstract:
Catastrophic bridge failures happen due to lack of inspection, deficient design, and extreme events like flooding or earthquakes. A Bridge Management System (BMS) is utilized to diminish such accidents through proper design and frequent inspection. Visual inspection cannot detect subsurface defects, so Non-Destructive Evaluation (NDE) techniques are used to remove these barriers as far as possible. Among all NDE techniques, Ground Penetrating Radar (GPR) has proven to be a highly effective device for detecting internal defects in a reinforced concrete bridge deck. GPR is used for detecting rebar location and rebar corrosion in the reinforced concrete deck. A GPR profile is composed of a series of hyperbolas, in which a sharp hyperbola denotes sound rebar, while a blurred hyperbola or signal attenuation indicates corroded rebar. Interpretation of GPR images is implemented by numerical analysis or visualization. Researchers recently found that interpretation through visualization is more precise than interpretation through numerical analysis, but visualization is a time-consuming and highly subjective process. Automating the interpretation of GPR images through visualization can solve these problems. After interpretation of all scans of a bridge, condition assessment is conducted based on the generated corrosion map. However, such a condition assessment is not objective and precise. Condition assessment based on structural integrity and strength parameters can make it more objective and precise. The main purpose of this study is to present an automated interpretation method for reinforced concrete bridge decks through a visualization technique. In the end, a combined analysis of the structural condition of a bridge is implemented.
Keywords: bridge condition assessment, ground penetrating radar, GPR, NDE techniques, visualization
Procedia PDF Downloads 148
570 The Effect of Nano-Silver Packaging on Quality Maintenance of Fresh Strawberry
Authors: Naser Valipour Motlagh, Majid Aliabadi, Elnaz Rahmani, Samira Ghorbanpour
Abstract:
Strawberry is one of the most favored fruits all around the world. But due to its vulnerability to microbial contamination and its short storage life, there are many problems in the industrial production and transportation of this fruit. Therefore, many approaches have been tried to increase the storage life of strawberries, especially through proper packaging. This paper works on efficient packaging as well. The primary material used is produced through simple mixing of low-density polyethylene (LDPE) and silver nanoparticles in different weight fractions of 0.5 and 1% in the presence of dicumyl peroxide as a cross-linking agent. The final packages were made in a twin-screw extruder. Then, their effect on the quality maintenance of strawberries was evaluated. The SEM images of the nano-silver packages show the distribution of silver nanoparticles in the packages. Total bacteria count, mold, yeast, and E. coli were measured for the microbial evaluation of all samples. Texture, color, appearance, odor, taste, and total acceptance of the various samples were evaluated by trained panelists based on the 9-point hedonic scale method. The results show a decrease in total bacteria count and mold in nano-silver packages compared to the samples packed in plain polyethylene packages for the same storage time. The optimum concentration of silver nanoparticles for the lowest bacteria count and mold is predicted to be around 0.5%, which also attained the highest acceptance from the panelists. Moreover, the organoleptic properties of the strawberries are preserved for a longer period in nano-silver packages. It can be concluded that using nano-silver particles in strawberry packages improves the storage life and quality maintenance of the fruit.
Keywords: antimicrobial properties, polyethylene, silver nanoparticles, strawberry
Procedia PDF Downloads 155
569 Preserving Urban Cultural Heritage with Deep Learning: Color Planning for Japanese Merchant Towns
Authors: Dongqi Li, Yunjia Huang, Tomo Inoue, Kohei Inoue
Abstract:
With urbanization, urban cultural heritage is facing the impact and destruction of modernization. Many historical areas are losing their historical information and regional cultural characteristics, so it is necessary to carry out systematic color planning for historical areas in conservation. As an early adopter of urban color planning, Japan has a systematic approach to it. Hence, this paper selects five merchant towns from the category of important traditional building preservation areas in Japan as the subject of this study to explore the color structure and emotion of this type of historic area. First, an image semantic segmentation method identifies the buildings, roads, and landscape environments, and their color data are extracted for color composition and emotion analysis to summarize their common features. Second, keyword extraction is applied to the collected Internet evaluations using natural language processing. The correlation analysis of the color structure and keywords provides a valuable reference for conservation decisions for these historic areas. This paper also combines the color structure and Internet evaluation results with generative adversarial networks to generate predicted images of color structure improvements and color improvement schemes. The methods and conclusions of this paper can provide new ideas for the digital management of environmental colors in historic districts and a valuable reference for the inheritance of local traditional culture.
Keywords: historic districts, color planning, semantic segmentation, natural language processing
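A minimal sketch of the color-structure extraction step is shown below: pixels belonging to one semantic class (e.g., buildings) are pooled and clustered with k-means to obtain a small palette of dominant colors and their proportions; the image, mask, and cluster count are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(image_rgb, class_mask, n_colors=5):
    """image_rgb: HxWx3 uint8 array; class_mask: HxW boolean array for one semantic class."""
    pixels = image_rgb[class_mask].reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    proportions = np.bincount(km.labels_, minlength=n_colors) / len(km.labels_)
    return km.cluster_centers_.astype(int), proportions

# Hypothetical facade image and "building" mask from the segmentation stage
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
building_mask = np.zeros((120, 160), dtype=bool)
building_mask[20:100, 30:130] = True
palette, shares = dominant_colors(image, building_mask)
print(palette, np.round(shares, 2))
```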
Procedia PDF Downloads 88
568 The Hydrotrope-Mediated, Low-Temperature, Aqueous Dissolution of Maize Starch
Authors: Jeroen Vinkx, Jan A. Delcour, Bart Goderis
Abstract:
Complete aqueous dissolution of starch is notoriously difficult. A high-temperature autoclaving process is necessary, followed by cooling the solution below its boiling point. The cooled solution is inherently unstable over time. Gelation and retrogradation processes, along with aggregation induced by undissolved starch remnants, result in starch precipitation. We recently observed the spontaneous gelatinization of native maize starch (MS) in aqueous sodium salicylate (NaSal) solutions at room temperature. A hydrotropic mode of solubilization is hypothesized. Differential scanning calorimetry (DSC) and polarized optical microscopy (POM) of starch dispersions in NaSal solution were used to demonstrate the room-temperature gelatinization of MS at different concentrations of MS and NaSal. The DSC gelatinization peak shifts to lower temperatures, and the gelatinization enthalpy decreases with increasing NaSal concentration. POM images confirm the same trend through the disappearance of the 'Maltese cross' interference pattern of the starch granules. The minimal NaSal concentration needed to induce complete room-temperature dissolution of MS was found to be around 15-20 wt%. The MS content of the dispersion has little influence on the amount of NaSal needed to dissolve it. The effect of the NaSal solution on the MS molecular weight was checked with HPSEC. It is speculated that, because of its amphiphilic character, NaSal enhances the solubility of MS in water by associating with the more hydrophobic MS moieties, much like urea, which has also been used to enhance starch dissolution in alkaline aqueous media. As such small molecules do not tend to form micelles in water, they are called hydrotropes rather than surfactants. A minimal hydrotrope concentration (MHC) is necessary for the hydrotropes to structure themselves in water, resulting in a higher solubility of MS. This is the case for the MS/NaSal/H₂O system. Further investigations into the putative hydrotropic dissolution mechanism are necessary.
Keywords: hydrotrope, dissolution, maize starch, sodium salicylate, gelatinization
Procedia PDF Downloads 188
567 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics
Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur
Abstract:
Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance, and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, thus negating potential AI benefits. A prime example is specialized industrial controllers that are operated by custom software, which complicates the process of connecting them to an Information Technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable, and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously obtain images of the controller HMIs. We propose image pre-processing to segment the HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data and test them for typical factory HMIs and realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.
Keywords: human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics
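A minimal sketch of the pre-processing and OCR step on an HMI data region is shown below, using OpenCV and the Tesseract engine via pytesseract; the synthetic region, scaling factor, and character whitelist are illustrative assumptions (a real deployment would crop the region from a camera frame identified during pre-processing).

```python
import cv2
import numpy as np
import pytesseract

# Synthetic stand-in for a cropped HMI streaming-data region
roi = np.full((80, 240, 3), 40, dtype=np.uint8)                 # dark HMI background
cv2.putText(roi, "1234.5", (10, 55), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255, 255, 255), 3)

gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
gray = cv2.resize(gray, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)   # upscale for OCR
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Restrict Tesseract to the characters expected in the data region (single text line)
config = "--psm 7 -c tessedit_char_whitelist=0123456789.-"
print("OCR reading:", pytesseract.image_to_string(binary, config=config).strip())
```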
Procedia PDF Downloads 109
566 Comprehensive Evaluation of Oral and Maxillofacial Radiology in "COVID-19"
Authors: Sahar Heidary, Ramin Ghasemi Shayan
Abstract:
The recent coronavirus disease 2019 (COVID-19) outbreak has brought considerable trials to the world health system, including the practice of dental and maxillofacial radiology (DMFR). DMFR will keep a vital role in healthcare throughout this disaster. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus causing the current coronavirus disease 2019 (COVID-19) pandemic, is not only extremely contagious but can also have serious consequences in susceptible persons, including dental patients and dental health care personnel (DHCPs). Responses to COVID-19 have been made available by the Centers for Disease Control and Prevention and the American Dental Association, but a more detailed response is necessary for the safe practice of oral and maxillofacial radiology. Our goal is to review the existing information on how the illness threatens patients and DHCPs and how to determine which patients are likely to be SARS-CoV-2 infected; to study how the use of personal protective equipment and infection control measures based on current best practices and knowledge can decrease the risk of virus spread in radiologic examinations; and to scrutinize how intraoral radiography, with its inherently greater risk of spreading the infection, might be replaced by extraoral radiographic techniques for certain diagnostic tasks. During the pandemic, teleradiology has been widely used for the diagnostic assessment of COVID-19 patients, for consultations with radiologists in emergency cases, and for managing the distance between radiology clinics. Dentists can receive the digital radiographic images of their emergency patients through online services, or by e-mail or messaging applications, and view them on their smartphones, laptops, or other electronic devices.
Keywords: radiology, dental, oral, COVID-19, infection
Procedia PDF Downloads 172
565 Towards Improved Public Information on Industrial Emissions in Italy: Concepts and Specific Issues Associated to the Italian Experience in IPPC Permit Licensing
Authors: C. Mazziotti Gomez de Teran, D. Fiore, B. Cola, A. Fardelli
Abstract:
The present paper summarizes the analysis of requests for consultation of the information and data on industrial emissions from large industrial installations made publicly available, under integrated pollution prevention and control (IPPC), on the web site of the Ministry of Environment, Land and Sea, the so-called "AIA Portal". However, since local Competent Authorities have also been organizing their own web sites on IPPC permit releasing procedures for public consultation purposes, a huge amount of information on national industrial plants is already available on the internet, although it is usually presented as textual documentation or images. Thus, it is not possible to access all the relevant information through interoperability systems, to retrieve relevant information for decision-making purposes, or to raise awareness on environmental issues. Moreover, since in Italy the number of institutional and private subjects involved in the management of public information on industrial emissions is substantial, access to the information is provided on internet web sites according to different criteria; thus, at present it is not structurally homogeneous and comparable. To overcome the mentioned difficulties, in the case of the Coordinating Committee for the implementation of the Agreement for the industrial area in Taranto and Statte, operating before the IPPC permit granting procedures of the relevant installation located in the area, a big effort was devoted to elaborating and validating the data and information on the characterization of soil, groundwater aquifer, and coastal sea at the disposal of different subjects, in order to derive a global perspective for decision-making purposes. Thus, the present paper also focuses on the main outcomes matured during that experience.
Keywords: public information, emissions into atmosphere, IPPC permits, territorial information systems
Procedia PDF Downloads 285
564 High Resolution Sandstone Connectivity Modelling: Implications for Outcrop Geological and Its Analog Studies
Authors: Numair Ahmed Siddiqui, Abdul Hadi bin Abd Rahman, Chow Weng Sum, Wan Ismail Wan Yousif, Asif Zameer, Joel Ben-Awal
Abstract:
Advances in data capture for outcrop studies have made possible the acquisition of high-resolution digital data, offering improved and economical reservoir modelling methods. Terrestrial laser scanning utilizing LiDAR (light detection and ranging) provides a new method to build outcrop-based reservoir models, which provide a crucial piece of information for understanding heterogeneities in sandstone facies through high-resolution images and data sets. This study presents the detailed application of an outcrop-based sandstone facies connectivity model by acquiring information gathered from traditional fieldwork and processing detailed digital point-cloud data from LiDAR to develop an intermediate, small-scale reservoir sandstone facies model of the Miocene Sandakan Formation, Sabah, East Malaysia. The software RiSCAN PRO (v1.8.0) was used for digital data collection and post-processing, with an accuracy of 0.01 m and a point acquisition rate of up to 10,000 points per second. We provide an accurate and descriptive workflow to triangulate point clouds of different sets of sandstone facies with well-marked top and bottom boundaries, in conjunction with field sedimentology. This provides a highly accurate qualitative sandstone facies connectivity model, which is a challenge to obtain from subsurface datasets (i.e., seismic and well data). Finally, by applying this workflow, we can build an outcrop-based static connectivity model, which can serve as an analogue for subsurface reservoir studies.
Keywords: LiDAR, outcrop, high resolution, sandstone facies, connectivity model
Procedia PDF Downloads 226
563 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record
Authors: Raghavi C. Janaswamy
Abstract:
In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of clear text and images, are stored or processed in a relational format in most systems. However, the intrinsic structure restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions have been predicted as a node classification task using graph-based open-source EHR data, the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged due to its close representation of real-world data and its volume. The graph model is built from the heterogeneous EHR data using Python modules, namely pyTigerGraph to get nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate the node embeddings and eventually performing the node classification using those embeddings. The model predicts patient conditions ranging from common to rare situations. The outcome is deemed to open up opportunities for data querying toward better predictions and accuracy.
Keywords: electronic health record, graph neural network, heterogeneous data, prediction
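A minimal sketch of the node-classification stage with PyTorch Geometric is shown below: a two-layer GCN is trained on a homogeneous graph whose node features could be the autoencoder embeddings described above; the tensors are random stand-ins for the Synthea-derived graph.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Stand-in graph: 200 patient/condition nodes, 64-dim embeddings, 5 condition classes
x = torch.randn(200, 64)
edge_index = torch.randint(0, 200, (2, 800))
y = torch.randint(0, 5, (200,))
train_mask = torch.rand(200) < 0.8
data = Data(x=x, edge_index=edge_index, y=y, train_mask=train_mask)

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, n_classes)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = GCN(64, 32, 5)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(50):
    optimizer.zero_grad()
    out = model(data)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
print("training loss:", float(loss))
```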
Procedia PDF Downloads 86
562 Producing of Amorphous-Nanocrystalline Composite Powders
Authors: K. Tomolya, D. Janovszky, A. Sycheva, M. Sveda, A. Roosz
Abstract:
CuZrAl amorphous alloys have attracted high interest due to their unique physical and mechanical properties, which can be enhanced by the addition of Ni and Ti. It is known that these properties can be enhanced by crystallization of the amorphous alloys, creating nanocrystallites in the matrix. The present work intends to produce nanosized crystalline particle reinforced amorphous matrix composite powders by crystallization of amorphous powders. As the first step, the amorphous powders were synthesized by ball-milling of crystalline powders. (Cu49Zr45Al6)80Ni10Ti10 and (Cu49Zr44Al7)80Ni10Ti10 (at%) alloys were ball-milled for 12 hours in order to reach a fully amorphous structure. The impact energy of the balls during milling causes the change of structure in the powders. Scanning electron microscopy (SEM) images show that the phases first mixed and then changed into a fully amorphous matrix. Furthermore, nanosized particles in the amorphous matrix were crystallized by heat treatment of the amorphous powders, which was confirmed by TEM measurements. It was of importance to define the temperature at which the amorphous phase starts to crystallize. Amorphous alloys have a special heating curve and characteristic temperatures, which can be measured by differential scanning calorimetry (DSC). A typical DSC curve of an amorphous alloy exhibits an endothermic event characteristic of the equilibrium glass transition (Tg) and a distinct undercooled liquid region, followed by one or two exothermic events corresponding to crystallization processes (Tp). After measuring the DSC traces of the amorphous powders, the annealing temperatures should be determined between Tx and Tp. In our experiments, several temperatures from the annealing temperature range were selected, and the dependence of the hardness on the fraction of crystallized nanoparticles was investigated.
Keywords: amorphous structure, composite, mechanical milling, powder, scanning electron microscopy (SEM), differential scanning calorimetry (DSC), transmission electron microscopy (TEM)
Procedia PDF Downloads 450