Search results for: 3D remote sensing images
2324 Lineament Analysis as a Method of Mineral Deposit Exploration
Authors: Dmitry Kukushkin
Abstract:
Lineaments form complex grids on Earth's surface. Currently, one particular object of study for many researchers is the analysis and geological interpretation of maps of lineament density in an attempt to locate various geological structures. But lineament grids are made up of global, regional and local components, and this superimposition of lineament grids of various scales renders this method less effective. Besides, erosion processes and the erosional resistance of rocks lying on the surface play a significant role in the formation of lineament grids. Therefore, a specific lineament density map is characterized by poor contrast (most anomalies do not exceed the average values by more than 30%) and an unstable relation with local geological structures. Our method allows us to confidently determine the location and boundaries of local geological structures that are likely to contain mineral deposits. Maps of the fields of lineament distortion (residual specific density) created by our method are characterized by high contrast, with anomalies exceeding the average by upwards of 200%, and by a stable correlation with local geological structures containing mineral deposits. Our method treats a lineament grid as a general lineament field: the surface manifestation of the stress and strain fields of the Earth associated with geological structures of global, regional and local scales. Each of these structures has its own field of brittle dislocations that appears on the surface as its lineament field. Our method allows singling out local components by suppressing the global and regional components of the general lineament field. The remaining local lineament field is an indicator of local geological structures. The following are some examples of the method's application: 1. Srednevilyuiskoye gas condensate field (Yakutia) - a direct proof of the effectiveness of the methodology; 2. Structure of Astronomy (Taimyr) - confirmed by seismic survey; 3. 
Active gold mine of Kadara (Chita Region) – confirmed by geochemistry; 4. Active gold mine of Davenda (Yakutia) – determined the boundaries of the granite massif that controls mineralization; 5. An object promising for hydrocarbon exploration in northern Algeria – correlated with the results of geological, geochemical and geophysical surveys. For both Kadara and Davenda, the method demonstrated that the intense anomalies of the local lineament fields are consistent with the geochemical anomalies and indicate the presence of gold at commercial levels. Our method of suppressing global and regional components results in isolating a local lineament field. In the early stages of geological exploration for oil and gas, this allows determining the boundaries of various geological structures with very high reliability. Therefore, our method allows optimizing the placement of seismic profiles and exploratory drilling equipment, which leads to a reduction in the costs of prospecting and exploration of deposits, as well as the acceleration of their commissioning. Keywords: lineaments, mineral exploration, oil and gas, remote sensing
Procedia PDF Downloads 304
2323 Non-Targeted Adversarial Object Detection Attack: Fast Gradient Sign Method
Authors: Bandar Alahmadi, Manohar Mareboyana, Lethia Jackson
Abstract:
Today, many applications use computer vision models, such as face recognition, image classification, and object detection. The accuracy of these models is very important for the performance of these applications. One challenge facing computer vision models is the adversarial example attack. In computer vision, an adversarial example is an image that is intentionally designed to cause a machine learning model to misclassify it. One well-known method used to attack Convolutional Neural Networks (CNNs) is the Fast Gradient Sign Method (FGSM). The goal of this method is to find a perturbation that can fool the CNN using the gradient of the CNN's cost function. In this paper, we introduce a novel model that attacks the Region-based Convolutional Neural Network (R-CNN) using FGSM. We first extract the regions detected by the R-CNN and resize these regions to the size of regular images. Then, we find the best perturbation of the regions that can fool the CNN using FGSM. Next, we add the resulting perturbation to the attacked region to get a new region image that looks similar to the original image to human eyes. Finally, we place the regions back into the original image and test the R-CNN with the attacked images. Our model could drop the accuracy of the R-CNN when tested on the Pascal VOC 2012 dataset. Keywords: adversarial examples, attack, computer vision, image processing
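For illustration, the FGSM step the abstract builds on can be sketched in a few lines. The snippet below applies an untargeted FGSM perturbation to a simple binary logistic classifier standing in for the CNN; the function name, the toy model, and the epsilon value are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Untargeted FGSM step for a binary logistic classifier (a stand-in
    for the CNN): ascend the cross-entropy loss in input space.

    For loss L = -[y log p + (1-y) log(1-p)] with p = sigmoid(w.x + b),
    the input gradient is dL/dx = (p - y) * w, so
    x_adv = x + eps * sign(dL/dx).
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # model's predicted probability
    grad_x = (p - y) * w                      # gradient of loss w.r.t. input
    return x + eps * np.sign(grad_x)
```

In the paper's pipeline, this step would be applied to each resized region crop before pasting the perturbed region back into the full image.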
Procedia PDF Downloads 193
2322 Difference Between Planning Target Volume (PTV) Based Slow-CT and Internal Target Volume (ITV) Based 4DCT Imaging Techniques in Stereotactic Body Radiotherapy for Lung Cancer: A Comparative Study
Authors: Madhumita Sahu, S. S. Tiwary
Abstract:
The radiotherapy of lung carcinoma has always been difficult and a matter of great concern. The significant intra-fractional movement caused by non-rhythmic respiratory motion poses a great challenge for the treatment of lung cancer using ionizing radiation. The present study compares the accuracy of target volume measurement using Slow-CT and 4DCT imaging in SBRT for lung tumors. The experimental samples were taken from patients with lung cancer who underwent SBRT. Slow-CT and 4DCT images were acquired under free breathing for each patient. PTVs were delineated on the Slow-CT images. Similarly, ITVs were delineated on each of the 4DCT volumes. Volumetric and statistical analyses were performed for each patient by measuring the corresponding PTV and ITV volumes. The study showed that (1) the maximum deviation observed between Slow-CT-based PTV and 4DCT-based ITV is 248.58 cc; (2) the minimum deviation is 5.22 cc; (3) the mean deviation is 63.21 cc. The present study concludes that the irradiated volume (ITV) with 4DCT is smaller than the PTV with Slow-CT. A better and more precise treatment could be delivered with 4DCT imaging by sparing 63.21 cc of mean body volume. Keywords: CT imaging, 4DCT imaging, lung cancer, statistical analysis
Procedia PDF Downloads 24
2321 Introduction of Dams Impacts on Downstream Wetlands: Case Study in Ahwar Delta in Yemen
Authors: Afrah Saad Mohsen Al-Mahfadi
Abstract:
The construction of dams can provide various ecosystem services, but it can also lead to ecological changes such as habitat loss and coastal degradation. Yemen faces multiple risks, including water crises and inadequate environmental policies, which are particularly detrimental to coastal zones like the Ahwar Delta in Abyan. This study aims to examine the impacts of dam construction on downstream wetlands and propose sustainable management approaches. Research Aim: The main objective of this study is to assess the different impacts of dam construction on downstream wetlands, specifically focusing on the Ahwar Delta in Yemen. Methodology: The study utilizes a literature review approach to gather relevant information on dam impacts and adaptation measures. Interviews with decision-making stakeholders and local community members are conducted to gain insights into the specific challenges faced in the Ahwar Delta. Additionally, remote sensing data (processed in ArcGIS) and precipitation data from 1981 to 2020 are analyzed to examine changes in hydrological dynamics. Questions Addressed: This study addresses the following questions: What are the impacts of dam construction on downstream wetlands in the Ahwar Delta? How can environmental management planning activities be implemented to minimize these impacts? Findings: The results indicate several future issues arising from dam construction in the coastal areas, including land loss due to rising sea levels and increased salinity in drinking water wells. Climate change has led to a decrease in rainfall rates, impacting vegetation and increasing sedimentation and erosion. Downstream areas with dams exhibit lower sediment levels and slower-flowing habitats compared to those without dams. Theoretical Importance: The findings of this study provide valuable insights into the ecological impacts of dam construction on downstream wetlands. 
Understanding these dynamics can inform decision-makers about the need for adaptation measures and their potential benefits in improving coastal biodiversity under dam impacts. Data Collection and Analysis Procedures: The study collects data through a literature review, interviews, and remote sensing technology. The literature review helps identify relevant studies on dam impacts and adaptation measures. Interviews with stakeholders and local community members provide firsthand information on the specific challenges faced in the Ahwar Delta. Remote sensing data (processed in ArcGIS) and precipitation data are analyzed to understand changes in hydrological dynamics over time. Conclusion: The study concludes that while the situation can worsen due to dam construction, practical adaptation measures can help mitigate the impacts. Recommendations include improving water management, developing integrated coastal zone planning, raising awareness among stakeholders, improving health and education, and implementing emergency projects to combat climate change. Keywords: dam impact, delta wetland, hydrology, Yemen
Procedia PDF Downloads 69
2320 Study of Natural Patterns on Digital Image Correlation Using Simulation Method
Authors: Gang Li, Ghulam Mubashar Hassan, Arcady Dyskin, Cara MacNish
Abstract:
Digital image correlation (DIC) is a contactless full-field displacement and strain reconstruction technique commonly used in the field of experimental mechanics. Compared with physical measuring devices, such as strain gauges, which only provide very restricted coverage and are expensive to deploy widely, the DIC technique provides results with full-field coverage and relatively high accuracy using an inexpensive and simple experimental setup. It is important to study the effect of natural patterns on the DIC technique because the preparation of artificial patterns is a time-consuming and laborious process. The objective of this research is to study the effect of using images with natural patterns on the performance of DIC. A systematic simulation method is used to build the simulated deformed images used in DIC. A parameter used in DIC (subset size) can affect the processing and accuracy of DIC and can even cause DIC to fail. Regarding the image parameters (correlation coefficient), higher similarity between two subsets can cause the DIC process to fail and makes the result less accurate. Pictures of good and bad quality for DIC methods are presented and, more importantly, this work provides a systematic way to evaluate the quality of pictures with natural patterns before the measurement devices are installed. Keywords: Digital Image Correlation (DIC), deformation simulation, natural pattern, subset size
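The subset matching at the heart of DIC scores a reference subset against candidate deformed subsets with a correlation coefficient. A minimal sketch of the zero-normalized cross-correlation (ZNCC) commonly used for this purpose is shown below; the function name and NumPy usage are illustrative assumptions, not the authors' code:

```python
import numpy as np

def zncc(f, g):
    """Zero-normalized cross-correlation between two equal-size subsets.

    Returns 1.0 for identical subsets (up to brightness/contrast changes)
    and values near 0 for unrelated patterns; subsets that are too similar
    to their neighbors make this score ambiguous, which is one failure
    mode discussed in the abstract.
    """
    f = f.astype(float) - f.mean()            # remove mean intensity
    g = g.astype(float) - g.mean()
    denom = np.sqrt((f ** 2).sum() * (g ** 2).sum())
    return float((f * g).sum() / denom)
```

In a full DIC pipeline, this score would be maximized over candidate displacements of each subset to reconstruct the displacement field.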
Procedia PDF Downloads 420
2319 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City
Authors: Berhanu Keno Terfa
Abstract:
Flash floods are among the most dangerous natural disasters that pose a significant threat to human existence. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries due to limited financial resources, inadequate drainage systems, substandard housing options, lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study has been undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques by considering various factors that contribute to flash flood resilience and developing effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, Curve Number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed using ArcGIS software platforms, and data from the Sentinel-2 satellite image (with a 10-meter resolution) were utilized for land-use/cover classification. Additionally, slope, elevation, and drainage density data were generated from the 12.5-meter resolution ALOS PALSAR DEM, while other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating the collected data through GIS and employing the analytic hierarchy process (AHP) technique, the study delineated flash flood hazard zones (FFHs) and generated a suitable land map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the central areas that have been previously developed. 
Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings of this study emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design. Keywords: remote sensing, flash flood hazards, Bishoftu, GIS
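The AHP weighting step used in the study derives criterion weights (for slope, drainage density, elevation, etc.) from a pairwise-comparison matrix. A hedged sketch of the standard eigenvector procedure is shown below; the matrix values, function name, and use of NumPy are illustrative assumptions, not the study's actual comparisons:

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from an AHP pairwise-comparison matrix.

    Weights are the normalized principal eigenvector; the consistency
    ratio (CR) uses Saaty's random index for the matrix size (CR < 0.1
    is the usual acceptance threshold).
    """
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                  # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                              # normalize weights to 1
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    ci = (vals[k].real - n) / (n - 1) if n > 2 else 0.0
    cr = ci / ri if ri else 0.0
    return w, cr
```

The weighted criteria would then be combined per pixel in the GIS overlay to score flash flood hazard.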
Procedia PDF Downloads 35
2318 Image Multi-Feature Analysis by Principal Component Analysis for Visual Surface Roughness Measurement
Authors: Wei Zhang, Yan He, Yan Wang, Yufeng Li, Chuanpeng Hao
Abstract:
Surface roughness is an important index for evaluating surface quality and needs to be accurately measured to ensure the performance of the workpiece. Roughness measurement based on machine vision involves various image features, some of which are redundant. These redundant features affect the accuracy and speed of the visual approach. Previous research used correlation analysis methods to select the appropriate features. However, such feature analysis treats features independently and cannot fully utilize the information in the data. Besides, blindly reducing features loses a lot of useful information, resulting in unreliable results. Therefore, the focus of this paper is on providing a redundant-feature removal approach for visual roughness measurement. In this paper, statistical methods and the gray-level co-occurrence matrix (GLCM) are employed to effectively extract the texture features of machined images. Then, principal component analysis (PCA) is used to fuse all extracted features into a new one, which reduces the feature dimension and maintains the integrity of the original information. Finally, the relationship between the new features and roughness is established by a support vector machine (SVM). The experimental results show that the approach can effectively solve the multi-feature information redundancy of machined surface images and provides a new idea for the visual evaluation of surface roughness. Keywords: feature analysis, machine vision, PCA, surface roughness, SVM
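The PCA fusion step can be sketched as follows: given a matrix of redundant texture features (one row per machined-surface image), center the data and project onto the leading principal components obtained from its SVD. This is a generic sketch, not the paper's implementation; the function name is an assumption:

```python
import numpy as np

def pca_fuse(X, k=1):
    """Fuse redundant features by projecting onto the top-k principal
    components.

    X: (n_samples, n_features) matrix of texture features.
    Returns the fused low-dimensional features, shape (n_samples, k).
    """
    Xc = X - X.mean(axis=0)                    # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # scores on top-k components
```

In the paper's pipeline, the fused features would then be fed to an SVM regressor to predict roughness.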
Procedia PDF Downloads 212
2317 Risk Assessment of Trace Element Pollution in Gymea Bay, NSW, Australia
Authors: Yasir M. Alyazichi, Brian G. Jones, Errol McLean, Hamd N. Altalyan, Ali K. M. Al-Nasrawi
Abstract:
The main purpose of this study is to assess the sediment quality and potential ecological risk in marine sediments in Gymea Bay, located in south Sydney, Australia. A total of 32 surface sediment samples were collected from the bay. Current track trajectories and velocities have also been measured in the bay. The resultant trace element concentrations were compared with the adverse biological effect values of the Effect Range Low (ERL) and Effect Range Median (ERM) classifications. The results indicate that the average values of chromium, arsenic, copper, zinc, and lead in surface sediments all reveal low pollution levels and are below the ERL and ERM values. The highest concentrations of trace elements were found close to discharge points and in the inner bay, and were linked with high percentages of clay minerals, pyrite and organic matter, which can play a significant role in trapping and accumulating these elements. The lowest concentrations of trace elements were found on the shoreline of the bay, which contains high percentages of sand fractions. It is postulated that the fine particles and trace elements are disturbed by currents and tides, then transported and deposited in deeper areas. The current track velocities recorded in Gymea Bay are capable of transporting fine particles and trace element pollution within the bay. As a result, hydrodynamic measurements were able to provide useful information and help explain the distribution of sedimentary particles and geochemical properties. This may lead to knowledge transfer to other bay systems, including those in remote areas. These activities can be conducted at a low cost and are therefore also transferrable to developing countries. 
The advent of portable instruments to measure trace elements in the field has also contributed to the development of these lower-cost and easily applied methodologies available for use in remote locations and low-cost economies. Keywords: current track velocities, Gymea Bay, surface sediments, trace elements
Procedia PDF Downloads 245
2316 Polymer-Layered Gold Nanoparticles: Preparation, Properties and Uses of a New Class of Materials
Authors: S. M. Chabane Sari, S. Zargou, A. R. Senoudi, F. Benmouna
Abstract:
Immobilization of nanoparticles (NPs) is the subject of numerous studies pertaining to the design of polymer nanocomposites, supported catalysts, bioactive colloidal crystals, inverse opals for novel optical materials, latex-templated hollow inorganic capsules, immunodiagnostic assays; “Pickering” emulsion polymerization for making latex particles and film-forming composites or Janus particles; chemo- and biosensors, tunable plasmonic nanostructures, hybrid porous monoliths for separation science and technology, biocidal polymer/metal nanoparticle composite coatings, and so on. Particularly in recent years, the literature has witnessed impressive progress in investigations of polymer coatings, grafts and particles as supports for anchoring nanoparticles. This is actually due to several factors: polymer chains are flexible and may contain a variety of functional groups that are able to efficiently immobilize nanoparticles and their precursors by dispersive or van der Waals, electrostatic, hydrogen or covalent bonds. We review methods to prepare polymer-immobilized nanoparticles through a plethora of strategies in view of developing systems for separation, sensing, extraction and catalysis. The emphasis is on methods that provide (i) polymer brushes and grafts; (ii) monoliths and porous polymer systems; (iii) natural polymers and (iv) conjugated polymers as platforms for anchoring nanoparticles. The latter range from soft biomacromolecular species (proteins, DNA) to metallic, C60, semiconductor and oxide nanoparticles; they can be attached through electrostatic interactions or covalent bonding. It is very clear that the physicochemical properties of polymers (e.g. sensing and separation) are enhanced by anchored nanoparticles, while polymers provide excellent platforms for dispersing nanoparticles for, e.g., high catalytic performance. 
We thus anticipate that the synergistic role of polymeric supports and anchored particles will increasingly be exploited in view of designing unique hybrid systems with unprecedented properties. Keywords: gold, layer, polymer, macromolecular
Procedia PDF Downloads 391
2315 Density Measurement of Underexpanded Jet Using Stripe Patterned Background Oriented Schlieren Method
Authors: Shinsuke Udagawa, Masato Yamagishi, Masanori Ota
Abstract:
The Schlieren method, which has conventionally been used to visualize high-speed flows, has disadvantages such as the complexity of the experimental setup and the inability to quantitatively analyze the amount of light refraction. The Background Oriented Schlieren (BOS) method proposed by Meier is one measurement method that solves the problems mentioned above. The BOS method exploits the refraction of light, as the Schlieren method does. The BOS method is characterized by using a digital camera to capture images of the background behind the observation area. The images are later analyzed by a computer to quantitatively detect the amount of shift of the background image. The experimental setup for BOS does not require the concave mirrors, pinholes, or color filters that are necessary in the conventional Schlieren method, thus simplifying the experimental setup. However, the BOS method introduces defocusing of the observed object, since the camera is focused on the background image rather than on the object. The defocus of the object grows as the distance between the background and the object increases; on the other hand, a larger distance yields higher sensitivity. Therefore, it is necessary to adjust the distance between the background and the object to suit the experiment, considering the trade-off between defocus and sensitivity. The purpose of this study is to experimentally clarify the effect of defocus on density field reconstruction. In this study, visualization experiments on an underexpanded jet have been performed using the BOS measurement system that we constructed, with a Ronchi ruling as the background. The reservoir pressure of the jet and the distance between the camera and the jet axis were fixed, and the distance between the background and the jet axis was varied as the parameter. 
The images were later analyzed on a personal computer to quantitatively detect the amount of shift of the background image by comparing the background pattern with the captured image of the underexpanded jet. The quantitatively measured shifts were reconstructed into a density field using the Abel transformation and the Gladstone-Dale equation. From the experimental results, it is found that the reconstructed density image becomes more blurred, and the noise decreases, as the distance between the background and the axis of the underexpanded jet increases. Consequently, it is clarified that the sensitivity constant should be greater than 20, and the circle of confusion diameter should be less than 2.7 mm, at least in this experimental setup. Keywords: BOS method, underexpanded jet, abel transformation, density field visualization
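The Abel inversion used in the reconstruction can be sketched with a simple onion-peeling scheme: the axisymmetric field is discretized into concentric shells, the line-of-sight projection becomes an upper-triangular linear system over the shells, and solving it recovers the radial field, which the Gladstone-Dale relation then converts to density. This is a generic sketch under the stated discretization assumptions, not the authors' code:

```python
import numpy as np

def abel_invert(projection, dr):
    """Onion-peeling inversion of an axisymmetric line-of-sight projection.

    projection[j] = path integral along the chord at lateral offset
    y_j = j*dr; returns the radial field f[i], assumed constant on each
    shell [i*dr, (i+1)*dr). For BOS, f would be the refractive-index
    disturbance, and density follows from Gladstone-Dale:
    rho = (n - 1) / K.
    """
    n = len(projection)
    r = np.arange(n + 1) * dr                 # shell boundary radii
    y = np.arange(n) * dr                     # chord offsets
    L = np.zeros((n, n))                      # chord lengths through shells
    for j in range(n):
        for i in range(j, n):
            L[j, i] = 2.0 * (np.sqrt(r[i + 1] ** 2 - y[j] ** 2)
                             - np.sqrt(max(r[i] ** 2 - y[j] ** 2, 0.0)))
    return np.linalg.solve(L, np.asarray(projection, dtype=float))
```

The matrix is upper triangular, so the system peels the field from the outermost shell inward, which is where the method gets its name.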
Procedia PDF Downloads 78
2314 Global-Scale Evaluation of Two Satellite-Based Passive Microwave Soil Moisture Data Sets (SMOS and AMSR-E) with Respect to Modelled Estimates
Authors: A. Alyaari, J. P. Wigneron, A. Ducharne, Y. Kerr, P. de Rosnay, R. de Jeu, A. Govind, A. Al Bitar, C. Albergel, J. Sabater, C. Moisy, P. Richaume, A. Mialon
Abstract:
Global Level-3 surface soil moisture (SSM) maps from the passive microwave Soil Moisture and Ocean Salinity (SMOS) satellite have been released. To further improve the Level-3 retrieval algorithm, an evaluation of the accuracy of the spatio-temporal variability of the SMOS Level 3 products (referred to here as SMOSL3) is necessary. In this study, a comparative analysis of SMOSL3 with an SSM product derived from the observations of the Advanced Microwave Scanning Radiometer (AMSR-E), computed by implementing the Land Parameter Retrieval Model (LPRM) algorithm and referred to here as AMSRM, is presented. The comparison of both products (SMOSL3 and AMSRM) was made against SSM products produced by a numerical weather prediction system (SM-DAS-2) at ECMWF (European Centre for Medium-Range Weather Forecasts) for the 03/2010-09/2011 period at global scale. The latter product was considered here a 'reference' product for the inter-comparison of the SMOSL3 and AMSRM products. Three statistical criteria were used for the evaluation: the correlation coefficient (R), the root-mean-squared difference (RMSD), and the bias. Global maps of these criteria were computed, taking into account vegetation information in terms of biome types and Leaf Area Index (LAI). We found that both the SMOSL3 and AMSRM products captured well the spatio-temporal variability of the SM-DAS-2 SSM products in most of the biomes. In general, the AMSRM products overestimated (i.e., wet bias) while the SMOSL3 products underestimated (i.e., dry bias) SSM in comparison to the SM-DAS-2 SSM products. In terms of correlation values, the SMOSL3 products were found to better capture the SSM temporal dynamics in highly vegetated biomes ('Tropical humid', 'Temperate humid', etc.) while the best results for AMSRM were obtained over arid and semi-arid biomes ('Desert temperate', 'Desert tropical', etc.). 
When removing the seasonal cycles in the SSM time variations to compute anomaly values, better correlations with the SM-DAS-2 SSM anomalies were obtained with SMOSL3 than with AMSRM in most of the biomes, with the exception of desert regions. Finally, we showed that the accuracy of the remotely sensed SSM products is strongly related to LAI. Both the SMOSL3 and AMSRM (slightly better) SSM products correlate well with the SM-DAS-2 products over regions with sparse vegetation for values of LAI < 1 (these regions represent almost 50% of the pixels considered in this global study). In regions where LAI > 1, SMOSL3 outperformed AMSRM with respect to SM-DAS-2: SMOSL3 had almost consistent performance up to LAI = 6, whereas AMSRM performance deteriorated rapidly with increasing values of LAI. Keywords: remote sensing, microwave, soil moisture, AMSR-E, SMOS
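The three evaluation criteria used in the inter-comparison (R, RMSD, and bias) can be sketched as a small helper; the function name and NumPy usage are assumptions for illustration:

```python
import numpy as np

def ssm_metrics(est, ref):
    """Evaluation criteria for a retrieved SSM series against a reference:
    Pearson correlation R, root-mean-squared difference, and bias
    (est - ref; positive = wet bias, negative = dry bias)."""
    est = np.asarray(est, dtype=float)
    ref = np.asarray(ref, dtype=float)
    r = np.corrcoef(est, ref)[0, 1]
    rmsd = np.sqrt(np.mean((est - ref) ** 2))
    bias = np.mean(est - ref)
    return r, rmsd, bias
```

For the anomaly analysis described above, the same function would be applied after subtracting each series' seasonal climatology.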
Procedia PDF Downloads 357
2313 Investigation of Contact Pressure Distribution at Expanded Polystyrene Geofoam Interfaces Using Tactile Sensors
Authors: Chen Liu, Dawit Negussey
Abstract:
EPS (expanded polystyrene) geofoam blocks, used as lightweight material in geotechnical applications, are made of pre-expanded resin beads that form fused cellular micro-structures. The strength and deformation properties of geofoam blocks are determined by unconfined compression of small test samples between rigid loading plates. Applied loads are presumed to be supported uniformly over the entire mating end areas. Predictions of field performance on the basis of such laboratory tests widely over-estimate actual post-construction settlements and exaggerate predictions of long-term creep deformations. This investigation examined the development of contact pressures at a large number of discrete points at low and large strain levels for different densities of geofoam. The development of pressure patterns for fine and coarse interface material textures, as well as for molding-skin and hot-wire-cut geofoam surfaces, was examined. The lab testing showed that I-Scan tactile sensors are useful for detailed observation of contact pressures at a large number of discrete points simultaneously. At a low strain level (1%), the lower-density EPS block presents lower variation in localized stress distribution compared to higher-density EPS. At a high strain level (10%), the dense geofoam reached the sensor cut-off limit. The imprint and pressure patterns for different interface textures can be distinguished with tactile sensing. The pressure sensing system can be used in many fields with real-time pressure detection. The research findings provide a better understanding of EPS geofoam behavior for the improvement of design methods and performance prediction of critical infrastructures, and are anticipated to guide future improvements in the design and rapid construction of critical transportation infrastructures with geofoam in geotechnical applications. Keywords: geofoam, pressure distribution, tactile pressure sensors, interface
Procedia PDF Downloads 173
2312 Study Variation of Blade Angle on the Performance of the Undershot Waterwheel on the Pico Scale
Authors: Warjito, Kevin Geraldo, Budiarso, Muhammad Mizan, Rafi Adhi Pranata, Farhan Rizqi Syahnakri
Abstract:
According to data from 2021, the number of households in Indonesia that have access to on-grid electricity is claimed to have reached 99.28%, which means that around 0.7% of Indonesia's population (1.95 million people) still has no proper access to electricity, and 38.1% of them are in remote areas in Nusa Tenggara Timur. Remote areas are classified as areas with a small population of 30 to 60 families, limited infrastructure, scarce access to electricity and clean water, a relatively weak economy, a lag in access to technological innovation, and a population earning a living mostly as farmers or fishermen. These people still need electricity but cannot afford the high cost of electricity from national on-grid sources. To overcome this, a hydroelectric power plant driven by a pico-hydro turbine with an undershot waterwheel is proposed as a suitable technology, because the design, materials, and installation of such a turbine are believed to be easier (i.e., operation and maintenance) and cheaper (i.e., investment and operating costs) than those of any other type. A comparative study of the blade angle of the undershot waterwheel will be discussed comprehensively. This study looks into the variation of curved blades on an undershot waterwheel that produces the maximum hydraulic efficiency. In this study, the blade angles were varied: 180°, 160°, and 140°. Two methods of analysis are used: analytical and numerical. The analytical method is based on calculations of the torque and rotational speed of the turbine, which are used to obtain the input and output power of the turbine. The numerical method uses the ANSYS application to simulate the flow during the collision with the designed turbine blades. 
It can be concluded, based on the analytical and numerical methods, that the best angle for the blade is 140°, with an efficiency of 43.52% for the analytical method and 37.15% for the numerical method. Keywords: pico hydro, undershot waterwheel, blade angle, computational fluid dynamics
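The analytical route described above (mechanical output from torque and rotational speed, divided by the hydraulic input power) can be sketched as follows. For an undershot wheel the input power is taken here as the kinetic power of the stream, 0.5·ρ·Q·v²; the function, variable names, and example values are assumptions for illustration, not the study's exact formulation:

```python
import math

def hydraulic_efficiency(torque_nm, rpm, rho, flow_m3s, velocity_ms):
    """Efficiency of a waterwheel: mechanical output T * omega over the
    kinetic input power 0.5 * rho * Q * v^2 of the oncoming stream."""
    omega = 2.0 * math.pi * rpm / 60.0                 # rad/s
    p_out = torque_nm * omega                          # shaft power, W
    p_in = 0.5 * rho * flow_m3s * velocity_ms ** 2     # stream kinetic power, W
    return p_out / p_in
```

With measured torque and rpm for each blade angle, this ratio is what the study compares across the 180°, 160°, and 140° designs.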
Procedia PDF Downloads 77
2311 Deep Learning-Based Liver 3D Slicer for Image-Guided Therapy: Segmentation and Needle Aspiration
Authors: Ahmedou Moulaye Idriss, Tfeil Yahya, Tamas Ungi, Gabor Fichtinger
Abstract:
Image-guided therapy (IGT) plays a crucial role in minimally invasive procedures for liver interventions. Accurate segmentation of the liver and precise needle placement are essential for successful interventions such as needle aspiration. In this study, we propose a deep learning-based liver 3D slicer designed to enhance segmentation accuracy and facilitate needle aspiration procedures. The developed 3D slicer leverages state-of-the-art convolutional neural networks (CNNs) for automatic liver segmentation in medical images. The CNN model is trained on a diverse dataset of liver images obtained from various imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI). The trained model demonstrates robust performance in accurately delineating liver boundaries, even in cases with anatomical variations and pathological conditions. Furthermore, the 3D slicer integrates advanced image registration techniques to ensure accurate alignment of preoperative images with real-time interventional imaging. This alignment enhances the precision of needle placement during aspiration procedures, minimizing the risk of complications and improving overall intervention outcomes. To validate the efficacy of the proposed deep learning-based 3D slicer, a comprehensive evaluation is conducted using a dataset of clinical cases. Quantitative metrics, including the Dice similarity coefficient and Hausdorff distance, are employed to assess the accuracy of liver segmentation. Additionally, the performance of the 3D slicer in guiding needle aspiration procedures is evaluated through simulated and clinical interventions. Preliminary results demonstrate the effectiveness of the developed 3D slicer in achieving accurate liver segmentation and guiding needle aspiration procedures with high precision. 
The integration of deep learning techniques into the IGT workflow shows great promise for enhancing the efficiency and safety of liver interventions, ultimately contributing to improved patient outcomes.
Keywords: deep learning, liver segmentation, 3D slicer, image guided therapy, needle aspiration
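The Dice similarity coefficient used above to score segmentations has a simple closed form, 2|A ∩ B| / (|A| + |B|), over binary masks. A minimal NumPy sketch (illustrative only, not the authors' implementation; the toy masks are invented):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# toy example: two overlapping 4x4 masks
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1  # 4 pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1  # 6 pixels, 4 shared
print(round(dice_coefficient(a, b), 3))  # 2*4/(4+6) = 0.8
```

The Hausdorff distance, the abstract's second metric, is complementary: Dice measures volume overlap while Hausdorff penalizes the worst boundary disagreement.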
Procedia PDF Downloads 48
2310 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA
Authors: Marek Dosbaba
Abstract:
Within the mining sector, SEM-based Automated Mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using a SEM, to an increased reliance on offline processing to analyze and report the data. In response to this trend, TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or segment, respectively. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger data storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with the support of remote control for the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models.
The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.
Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data
Procedia PDF Downloads 110
2309 Ionophore-Based Materials for Selective Optical Sensing of Iron(III)
Authors: Natalia Lukasik, Ewa Wagner-Wysiecka
Abstract:
Development of selective, fast-responsive, and economical sensors for the detection and determination of diverse ions is one of the most extensively studied areas due to its importance in the field of clinical, environmental and industrial analysis. Among chemical sensors, ionophore-based optical sensors have gained vast popularity; in these, the generated analytical signal is a consequence of the molecular recognition of an ion by the ionophore. The change of color occurring during host-guest interactions allows for quantitative analysis and for 'naked-eye' detection without the need for sophisticated equipment. An example of the application of such sensors is the colorimetric detection of iron(III) cations. Iron, as one of the most significant trace elements, plays roles in many biochemical processes. For these reasons, the development of reliable, fast, and selective methods of iron ion determination is in high demand. Taking all of the above into account, a chromogenic amide derivative of 3,4-dihydroxybenzoic acid was synthesized, and its ability to recognize iron(III) was tested. To the best of the authors' knowledge (according to Chemical Abstracts), the obtained ligand has not been described in the literature so far. The catechol moiety was introduced into the ligand structure in order to mimic the action of naturally occurring siderophores, which are iron(III)-selective receptors. The ligand-ion interactions were studied using spectroscopic methods: UV-Vis spectrophotometry and infrared spectroscopy. The spectrophotometric measurements revealed that the amide exhibits affinity to iron(III) in dimethyl sulfoxide and fully aqueous solution, which is manifested by a change of color from yellow to green. Incorporation of the tested amide into a polymeric matrix (cellulose triacetate) ensured effective recognition of iron(III) at pH 3 with a detection limit of 1.58×10⁻⁵ M.
For the obtained sensor material, parameters like linear response range, response time, selectivity, and possibility of regeneration were determined. In order to evaluate the effect of the size of the sensing material on iron(III) detection, nanospheres (in the form of a nanoemulsion) containing the tested amide were also prepared. According to DLS (dynamic light scattering) measurements, the size of the nanospheres is 308.02 ± 0.67 nm. Working parameters of the nanospheres were determined and compared with the cellulose triacetate-based material. Additionally, for fast qualitative experiments, test strips were prepared by adsorption of the amide solution on a glass microfiber material. The visual limit of detection of iron(III) at pH 3 by the test strips was estimated at the level of 10⁻⁴ M. In conclusion, the amide derived from 3,4-dihydroxybenzoic acid reported here proved to be an effective candidate for optical sensing of iron(III) in fully aqueous solutions. N. L. kindly acknowledges financial support from the National Science Centre Poland, grant no. 2017/01/X/ST4/01680. The authors thank Gdansk University of Technology for financial support under grant no. 032406.
Keywords: ion-selective optode, iron(III) recognition, nanospheres, optical sensor
Procedia PDF Downloads 154
2308 Non-intrusive Hand Control of Drone Using an Inexpensive and Streamlined Convolutional Neural Network Approach
Authors: Evan Lowhorn, Rocio Alba-Flores
Abstract:
The purpose of this work is to develop a method for classifying hand signals and using the output in a drone control algorithm. To achieve this, methods based on Convolutional Neural Networks (CNNs) were applied. CNNs are a subset of deep learning, which allows grid-like inputs to be processed and passed through a neural network to be trained for classification. This type of neural network allows for classification via imaging, which is less intrusive than previous methods using biosensors, such as EMG sensors. Classification CNNs operate purely on the pixel values in an image; therefore, they can be used without additional exteroceptive sensors. A development bench was constructed using a desktop computer connected to a high-definition webcam mounted on a scissor arm. This allowed the camera to be pointed downwards at the desk to provide a constant solid background for the dataset and a clear detection area for the user. A MATLAB script was created to automate dataset image capture at the development bench and save the images to the desktop. This allowed the user to create their own dataset of 12,000 images within three hours. These images were evenly distributed among seven classes. The defined classes include forward, backward, left, right, idle, and land; the drone's popular flip function was also included as an additional class. To simplify control, the corresponding hand signals chosen were the numerical hand signs for one through five for movements, a fist for land, and the universal “ok” sign for the flip command. Transfer learning with PyTorch (Python) was performed using a pre-trained 18-layer residual learning network (ResNet-18) to retrain the network for custom classification. An algorithm was created to interpret the classification and send encoded messages to a Ryze Tello drone over its 2.4 GHz Wi-Fi connection. The drone’s movements were performed in half-meter distance increments at a constant speed.
When combined with the drone control algorithm, the classification performed as desired with negligible latency when compared to the delay in the drone’s movement commands.Keywords: classification, computer vision, convolutional neural networks, drone control
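The classification-to-command step can be sketched with the text commands of the published Tello SDK (plain strings sent over UDP). The mapping below is a hypothetical reconstruction, not the authors' code; the 50 cm argument mirrors the half-meter increments described above:

```python
# Maps a predicted hand-sign class to a Tello SDK text command.
# Command strings follow the publicly documented Tello SDK; "idle" maps
# to None, meaning no command is sent and the drone simply hovers.

CLASS_TO_COMMAND = {
    "forward":  "forward 50",   # move forward 50 cm (half-meter increment)
    "backward": "back 50",
    "left":     "left 50",
    "right":    "right 50",
    "land":     "land",
    "flip":     "flip f",       # forward flip
    "idle":     None,
}

def command_for(label: str):
    """Return the Tello command string for a classifier label, or None to hover."""
    try:
        return CLASS_TO_COMMAND[label]
    except KeyError:
        raise ValueError(f"unknown class label: {label!r}")

print(command_for("forward"))  # forward 50
```

In a live loop, each non-None string would be sent as a UTF-8 UDP datagram to the drone's command port and the next frame classified only after the drone acknowledges.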
Procedia PDF Downloads 210
2307 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series
Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold
Abstract:
To address the global challenges of climate and environmental change, there is a need for quantifying and reducing uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (i.e., OzFlux, AmeriFlux, ChinaFlux, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys and remote sensing assessments, there are serious concerns regarding the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of a single algorithm and thereby reducing error and the uncertainties associated with the gap-filling process. In this study, data from five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNNs) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement was in the estimation of extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling.
The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. In addition, the performance difference between the ensemble model and its individual components was larger during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep) due to the higher amount of photosynthesis, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy and robustness of CO₂ flux gap-filling. Therefore, ensemble machine learning models can potentially improve data estimation and regression outcomes when there seems to be no more room for improvement with a single algorithm.
Keywords: carbon flux, eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network
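The two-layer stacking idea (first-layer estimators whose predictions become the input of a second-layer model) can be illustrated with a dependency-free sketch. Here polynomial fits of different degrees stand in for the five FFNNs and ordinary least squares stands in for XGB, on synthetic data; this is a structural analogy, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy signal standing in for a CO2-flux series with noise
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

# --- layer 1: diverse base models (five FFNNs in the paper; here,
#     polynomial fits of different degrees keep the sketch self-contained)
base_preds = np.column_stack([
    np.polyval(np.polyfit(x, y, deg), x) for deg in (1, 3, 5, 7, 9)
])

# --- layer 2: meta-model trained on the base predictions
#     (XGBoost in the paper; ordinary least squares here)
A = np.column_stack([base_preds, np.ones(x.size)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
ensemble = A @ coef

def rmse(pred, target):
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# in-sample, the stacked prediction cannot be worse than any base model,
# since each base model is one feasible set of meta-weights
print(rmse(ensemble, y) <= min(rmse(base_preds[:, i], y) for i in range(5)))  # True
```

On held-out data the guarantee disappears, which is why the paper evaluates the ensemble against each component separately per site and season.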
Procedia PDF Downloads 139
2306 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics
Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez
Abstract:
In recent years, Mexico’s population has seen a rise in negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, which have an important impact on a person’s physical, mental and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person’s emotional state, can be applied in an effort to decrease these negative effects. With the use of different non-invasive physiological sensors, such as EEG, luminosity and face recognition, we gather information on the subject’s current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained is statistically analyzed in order to filter only the specific groups of information that relate to the subject’s emotions and the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level and noise. Finally, a neural network based control algorithm is given the data obtained in order to feed back into the system and automate the modification of the environment variables and the audiovisual content shown, so that these changes can positively alter the subject’s emotional state. During the research, it was found that the light color was directly related to the type of impact generated by the audiovisual content on the subject’s emotional state. Red illumination increased the impact of violent images, and green illumination along with relaxing images decreased the subject’s levels of anxiety. Specific differences between men and women were found as to which type of images generated a greater impact in either gender.
The population sample was mainly constituted by college students whose data analysis showed a decreased sensibility to violence towards humans. Despite the early stage of the control algorithm, the results obtained from the population sample give us a better insight into the possibilities of emotional domotics and the applications that can be created towards the improvement of performance in people’s lives. The objective of this research is to create a positive impact with the application of technology to everyday activities; nonetheless, an ethical problem arises since this can also be applied to control a person’s emotions and shift their decision making.
Keywords: data analysis, emotional domotics, performance improvement, neural network
Procedia PDF Downloads 140
2305 The Reasons behind Individuals to Join Terrorist Organizations: Recruitment from Outside
Authors: Murat Sözen
Abstract:
Today terrorism is gaining momentum again. In parallel, it hurts more than before because its victims come not only from its own locations but also from remote places. Just as the victims come from outside, the militants likewise come from both the local area and abroad. What makes these individuals join terrorist organizations, and how these organizations recruit militants, remain unanswered questions. The purpose of this work is to identify the reasons for joining and the power of recruiting. In addition, the role of the most popular recruiting tool, social media, will be examined.
Keywords: recruitment, social media, militants
Procedia PDF Downloads 349
2304 Design and Developing the Infrared Sensor for Detection and Measuring Mass Flow Rate in Seed Drills
Authors: Bahram Besharti, Hossein Navid, Hadi Karimi, Hossein Behfar, Iraj Eskandari
Abstract:
Multiple sowing or miss-sowing by seed drills is a common problem on the farm. This problem causes overuse of seeds, wasted energy, rising crop treatment costs and reduced crop yield at harvest. To detect these faults and monitor the performance of seed drills during sowing, developing a seed sensor for detecting seed mass flow rate in the delivery tube is essential. In this research, an infrared seed sensor was developed to estimate seed mass flow rate in seed drills. The developed sensor comprises a pair of spaced-apart circuits, one acting as an IR transmitter and the other as an IR receiver. Optical coverage of the sensing section was obtained by setting IR LEDs and photodiodes directly on opposite sides. Passing seeds interrupted the radiation beams to the photodiodes, which caused the output voltages to change. The voltage differences of the sensing units were summed by a microcontroller and converted to an analog value by a DAC chip. The sensor was tested using a roller seed metering device with three types of seeds: chickpea, wheat, and alfalfa (representing large, medium and fine seed, respectively). The results revealed a good fit between the voltage received from the seed sensor and the mass flow of seeds in the delivery tube. A linear trend line was fitted to the data collected for the three seed types as a model of seed mass flow. A final mass flow model was developed for seeds of various sizes based on the voltages received from the seed sensor, thousand seed weight and the equivalent diameter of the seeds. The developed infrared seed sensor, besides monitoring the mass flow of seeds in field operations, can be used for assessing the performance of a mechanical planter seed metering unit in the laboratory and provides an easy calibration method for seed drills before planting in the field.
Keywords: seed flow, infrared, seed sensor, seed drills
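The linear trend line relating sensor voltage to seed mass flow can be fitted with ordinary least squares. The calibration numbers below are entirely hypothetical placeholders (the abstract does not publish its data); the sketch only shows the shape of the calibration step:

```python
import numpy as np

# hypothetical calibration data: summed photodiode voltage signal (V)
# versus measured seed mass flow (g/s) for one seed type
voltage = np.array([0.12, 0.25, 0.41, 0.55, 0.70, 0.86])
mass_flow = np.array([1.0, 2.1, 3.4, 4.6, 5.9, 7.2])

# fit the linear trend line described above: mass_flow ≈ a * voltage + b
a, b = np.polyfit(voltage, mass_flow, 1)

def estimate_mass_flow(v: float) -> float:
    """Estimate seed mass flow (g/s) from the sensor voltage signal."""
    return a * v + b

# a near-linear calibration gives a correlation close to 1
r = np.corrcoef(voltage, mass_flow)[0, 1]
print(r > 0.99)  # True
```

A per-seed-type fit like this would then be generalized, as the abstract describes, into one model parameterized by thousand seed weight and equivalent seed diameter.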
Procedia PDF Downloads 366
2303 Process of the Emergence and Evolution of Socio-Cultural Ideas about the "Asian States" In the Context of the Development of US Cinema in 1941-1945
Authors: Selifontova Darya Yurievna
Abstract:
The study of the process of the emergence and evolution of socio-cultural ideas about the "Asian states" in the context of the development of US cinema in 1941-1945 will both test a new approach to a classical subject and allow the methodological tools of history, political science, philology and sociology to be used for understanding modern military-political, historical, ideological and socio-cultural processes through a concrete example. This is especially important for understanding the process of constructing the image of the Japanese Empire in the USA. Assessments and images of China and Japan in World War II created in American cinema had an immediate impact on the media, public sentiment, and opinions. During the war, US cinema created new myths and actively exploited old ones, combining them with traditional Hollywood cliches; all this served as a basis for creating the on-screen images of China and the Japanese Empire, which were necessary for solving many foreign policy and domestic political tasks related to the construction of two completely different, yet at the same time similar, images of Asia (China and the Japanese Empire). In modern studies devoted to the history of wars, the study of the specifics of the information confrontation between the parties is in demand. A special role in this confrontation is played by propaganda through cinema, which uses images, historical symbols, and stable metaphors, the appeal to which can produce a certain public reaction. Soviet documentaries of the war years are proof of this. The relevance of the topic is due to the fact that cinema as a means of propaganda was very popular and in demand during the Second World War. This period saw the creation of real masterpieces of propaganda film; in the documentary cinema of 1941-1945, the traditions of depicting the Second World War were laid down.
The study of the peculiarities of the visualization and mythologization of the Second World War in Soviet cinema is an essential stage in studying the development of propaganda methods, since the methods and techniques of depicting the war that formed in 1941-1945 remain significant for the study of society today.
Keywords: Asian countries, politics, sociology, domestic politics, USA, cinema
Procedia PDF Downloads 127
2302 The Ugliness of Eating: Resistance to Depicting Consumption in Visual Arts
Authors: Constance Kirker
Abstract:
While there is general agreement that food itself can be beautiful, as thousands of still-life masterpieces over the years attest, depicting the act of eating, actually placing food in one’s mouth and chewing, is seemingly taboo. The environment created around consumption (dining rooms, linens, china, flowers) is consciously choreographed to provide a pleasing aesthetic experience. Yet artists, from Roman fresco painters to contemporary photographers, create images from feasts to solitary subjects that rarely show food or drink touching lips, chewing, or swallowing. In the countless paintings of the Last Supper, the food remains on the table. Rarely is Adam or Eve shown taking a bite of the apple, initiating Original Sin. In the few examples that do depict food in the mouth, such as Goya’s Saturn Devouring His Son or the ubiquitous photos of the “wedding smash” with brides and grooms pushing wedding cake into each other’s mouths, the images seem intended to be particularly ugly or humorous in a distasteful way. This paper will explore possible explanations: the rules of etiquette, some determined hundreds of years ago and still followed today, that cast eating as a metaphor for gluttony; the implicit sexuality of eating; the distortion of the face while eating; and the simple practical difficulty of an artist’s model maintaining a chewing position. If art is a reflection of society, what drives the universal impulse to hide this very human function?
Keywords: aesthetics, senses, taboo, consumption
Procedia PDF Downloads 73
2301 Unspoken Playground Rules Prompt Adolescents to Avoid Physical Activity: A Focus Group Study of Constructs in the Prototype Willingness Model
Authors: Catherine Wheatley, Emma L. Davies, Helen Dawes
Abstract:
The health benefits of exercise are widely recognised, but numerous interventions have failed to halt a sharp decline in physical activity during early adolescence. Many such projects are underpinned by the Theory of Planned Behaviour, yet this model of rational decision-making leaves variance in behaviour unexplained. This study investigated whether the Prototype Willingness Model, which proposes a second, reactive decision-making path to account for spontaneous responses to the social environment, has potential to improve understanding of adolescent exercise behaviour in school by exploring constructs in the model with young people. PE teachers in 4 Oxfordshire schools each nominated 6 pupils who were active in school, and 6 who were inactive, to participate in the study. Of these, 45 (22 male) aged 12-13 took part in 8 focus group discussions. These were transcribed and subjected to deductive thematic analysis to search for themes relating to the Prototype Willingness Model. Participants appeared to make rational decisions about commuting to school or attending sports clubs, but spontaneous choices to be inactive during both break and PE. These reactive decisions seemed influenced by a social context described as more ‘judgmental’ than primary school, characterised by anxiety about physical competence, negative peer evaluation and inactive playground norms. Participants described their images of typical active and inactive adolescents: active images included negative social characteristics such as ‘show-off’. There was little concern about the long-term risks of inactivity, although participants seemed to recognise that physical activity is healthy. The Prototype Willingness Model might more fully explain young adolescents’ physical activity in school than rational behavioural models, indicating potential for physical activity interventions that target social anxieties in response to the changing playground environment.
Images of active types could be more complex than earlier research has suggested, and their negative characteristics might influence willingness to be active.
Keywords: adolescence, physical activity, prototype willingness model, school
Procedia PDF Downloads 346
2300 Multimodal Analysis of News Magazines' Front-Page Portrayals of the US, Germany, China, and Russia
Authors: Alena Radina
Abstract:
On the global stage, national image is shaped by historical memory of wars and alliances, government ideology and particularly media stereotypes which represent countries in positive or negative ways. News magazine covers are a key site for national representation. The object of analysis in this paper is the portrayals of the US, Germany, China, and Russia in the front pages and cover stories of “Time”, “Der Spiegel”, “Beijing Review”, and “Expert”. Political comedy helps people learn about current affairs even if politics is not their area of interest, and thus satire indirectly sets the public agenda. Coupled with satirical messages, cover images and the linguistic messages embedded in the covers become persuasive visual and verbal factors, known to drive about 80% of magazine sales. Preliminary analysis identified satirical elements in magazine covers, which are known to influence and frame understandings and attract younger audiences. Multimodal and transnational comparative framing analyses lay the groundwork to investigate why journalists, editors and designers deploy certain frames rather than others. This research investigates to what degree frames used in covers correlate with frames within the cover stories and what these framings can tell us about media professionals’ representations of their own and other nations. The study sample includes 32 covers consisting of two covers representing each of the four chosen countries from the four magazines. The sampling framework considers two time periods to compare countries’ representation under two different presidents, and between men and women when present. The countries selected for analysis represent each category of the international news flows model: the core nations are the US and Germany; China is a semi-peripheral country; and Russia is peripheral.
Examining textual and visual design elements on the covers and images in the cover stories reveals not only what editors believe visually attracts the reader’s attention to the magazine but also how the magazines frame and construct national images and national leaders. The cover is the most powerful editorial and design page in a magazine because images incorporate less intrusive framing tools. Thus, covers require less cognitive effort from audiences, who may therefore be more likely to accept the visual frame without question. Analysis of design and linguistic elements in magazine covers helps to understand how media outlets shape their audience’s perceptions and how magazines frame global issues. While previous multimodal research on covers has focused mostly on lifestyle magazines or newspapers, this paper examines the power of current affairs magazines’ covers to shape audience perception of national image.
Keywords: framing analysis, magazine covers, multimodality, national image, satire
Procedia PDF Downloads 102
2299 Change of Taste Preference after Bariatric Surgery
Authors: Piotr Tylec, Julia Wierzbicka, Natalia Gajewska, Krzysztof Przeczek, Grzegorz Torbicz, Alicja Dudek, Magdalena Pisarska-Adamczyk, Mateusz Wierdak, Michal Pedziwiatr
Abstract:
Introduction: Many patients have described changes in taste perception after weight loss surgery. However, little data is available about short-term changes in taste after surgery. Aim: We aimed to evaluate short-term changes in taste preference after bariatric surgeries in comparison to colorectal surgeries. Material and Methods: Between April 2018 and April 2019, a total of 121 bariatric patients and 63 controls participated. Bariatric patients underwent laparoscopic sleeve gastrectomy or Roux-en-Y gastric by-pass. Controls underwent oncological colorectal surgeries. Patients who developed clinical complications requiring restriction of oral intake after surgery or who withdrew their consent were excluded from the study. In the end, 85 bariatric patients and 44 controls were included. In all of them, the 16-item ERAS protocol was applied. Using a 10-point Numeric Rating Scale (1-10), patients completed a questionnaire and rated their appetite and thirst (1 - no appetite/not thirsty, 10 - normal appetite/very thirsty), the taste of flavoured standardized liquids (1 - horrible, 10 - very tasty) and food images for the 6 groups of taste (sweet, umami, sour, spicy, bitter and salty) (1 - not appetizing, 10 - very appetizing) preoperatively and on the first postoperative day. Data were analysed with Statistica 13.0 PL. Results: The analysed group consisted of 129 patients (85 bariatric, 44 controls). Mean age and BMI were 44.91 years and 46.22 kg/m² in the research group and 62.09 years and 25.87 kg/m² in the control group, respectively. Our analysis revealed significant differences between the groups in changes of appetite (research: -4.55 ± 3.76 vs. control: -0.85 ± 4.37; p < 0.05), in ratings of bitter (research: 0.60 ± 2.98 vs. control: -0.88 ± 2.58; p < 0.05) and salty (research: 1.20 ± 3.50 vs. control: -0.52 ± 2.90; p < 0.05) flavoured liquids, and in ratings of sweet (research: 1.62 ± 3.31 vs. control: 0.01 ± 2.63; p < 0.05) and bitter (research: 1.21 ± 3.15 vs.
control: -0.09 ± 2.25; p < 0.05) food images. Ratings of the other images also changed, but the changes were not statistically significant in comparison to the control group. Conclusion: The study showed that bariatric surgery quickly decreases appetite and the desire to eat certain types of food, such as salty food. Moreover, the bitter taste was more desirable in the research group than in the control group. Nevertheless, the sweet taste was also rated more appetizing in the bariatric group than in the control group.
Keywords: bariatric surgery, general surgery, obesity, taste preference
Procedia PDF Downloads 135
2298 Classification of Multiple Cancer Types with Deep Convolutional Neural Network
Authors: Nan Deng, Zhenqiu Liu
Abstract:
Thousands of patients with metastatic tumors are diagnosed with cancers of unknown primary site each year. The inability to identify the primary cancer site may lead to inappropriate treatment and unexpected prognosis. Nowadays, a large amount of genomics and transcriptomics cancer data has been generated by next-generation sequencing (NGS) technologies, and The Cancer Genome Atlas (TCGA) database has accrued thousands of human cancer tumors and healthy controls, which provides an abundance of resources for differentiating cancer types. Meanwhile, deep convolutional neural networks (CNNs) have shown high accuracy in classification among a large number of image object categories. Here, we utilize 25 primary cancer tumor types and 3 normal tissues from TCGA, convert their RNA-Seq gene expression profiles to color images, and train, validate and test a CNN classifier directly on these images. The performance results show that our CNN classifier can achieve >80% test accuracy on most of the tumors and normal tissues. Since the gene expression pattern of distant metastases is similar to that of their primary tumors, the CNN classifier may provide a potential computational strategy for identifying the unknown primary origin of metastatic cancer in order to plan appropriate treatment for patients.
Keywords: bioinformatics, cancer, convolutional neural network, deep learning, gene expression pattern
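The distinctive preprocessing step here is turning a 1-D RNA-Seq profile into an image a stock CNN can consume. A minimal NumPy sketch of that step, with invented data; the abstract uses color images and does not specify the gene ordering or color encoding, so this grayscale version is illustrative only:

```python
import numpy as np

def expression_to_image(expr: np.ndarray, side: int = 64) -> np.ndarray:
    """Convert a 1-D gene-expression profile into a square 8-bit grayscale
    image, zero-padding so it can feed a standard image CNN.

    Illustrative only: the paper maps TCGA RNA-Seq profiles to color
    images; the exact gene ordering and color encoding are not given
    in the abstract.
    """
    n = side * side
    v = np.zeros(n, dtype=float)
    v[: min(expr.size, n)] = expr[:n]
    # log-transform and min-max scale to 0..255, as commonly done for RNA-Seq
    v = np.log1p(np.clip(v, 0, None))
    if v.max() > 0:
        v = v / v.max()
    return (v * 255).astype(np.uint8).reshape(side, side)

# fake expression counts standing in for one TCGA sample
profile = np.random.default_rng(1).gamma(2.0, 50.0, size=3000)
img = expression_to_image(profile)
print(img.shape, img.dtype)  # (64, 64) uint8
```

Once every sample is encoded this way, training the classifier reduces to an ordinary image-classification problem over the 28 tissue classes.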
Procedia PDF Downloads 299
2297 Automatic Differential Diagnosis of Melanocytic Skin Tumours Using Ultrasound and Spectrophotometric Data
Authors: Kristina Sakalauskiene, Renaldas Raisutis, Gintare Linkeviciute, Skaidra Valiukeviciene
Abstract:
Cutaneous melanoma is a melanocytic skin tumour which has a very poor prognosis, as it is highly resistant to treatment and tends to metastasize. The thickness of a melanoma is one of the most important biomarkers of disease stage, prognosis and surgery planning. In this study, we hypothesized that the automatic analysis of spectrophotometric images and high-frequency ultrasonic 2D data can improve the differential diagnosis of cutaneous melanoma and provide additional information about tumour penetration depth. This paper presents a novel complex automatic system for non-invasive melanocytic skin tumour differential diagnosis and penetration depth evaluation. The system is composed of region-of-interest segmentation in spectrophotometric images and high-frequency ultrasound data, quantitative parameter evaluation, informative feature extraction and classification with a linear regression classifier. The segmentation of the melanocytic skin tumour region in the ultrasound image is based on parametric integrated backscattering coefficient calculation. The segmentation of the optical image is based on Otsu thresholding. In total, 29 quantitative tissue characterization parameters were evaluated using ultrasound data (11 acoustical, 4 shape and 15 textural parameters), along with 55 quantitative features of dermatoscopic and spectrophotometric images (using total melanin, dermal melanin, blood and collagen SIAgraphs acquired with the spectrophotometric imaging device SIAscope). In total, 102 melanocytic skin lesions (including 43 cutaneous melanomas) were examined using the SIAscope and an ultrasound system with a 22 MHz center-frequency single-element transducer. The diagnosis and Breslow thickness (pT) of each tumour were evaluated during routine histological examination after excision and used as a reference.
The results of this study have shown that automatic analysis of spectrophotometric and high-frequency ultrasound data can improve the non-invasive classification accuracy of early-stage cutaneous melanoma and provide supplementary information about tumour penetration depth.
Keywords: cutaneous melanoma, differential diagnosis, high-frequency ultrasound, melanocytic skin tumours, spectrophotometric imaging
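The optical-image segmentation step of the system is based on Otsu thresholding, which picks the grey-level cut that maximizes between-class variance. A minimal NumPy sketch on synthetic bimodal data (an illustration only, not the authors' pipeline; the toy intensity distributions are assumptions):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()          # per-bin probability
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                            # background class weight
    w1 = 1.0 - w0                                # foreground class weight
    m0 = np.cumsum(p * centers)                  # cumulative mean (background side)
    mT = m0[-1]                                  # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mT * w0 - m0) ** 2 / (w0 * w1)
    var_between[~np.isfinite(var_between)] = 0.0
    return centers[np.argmax(var_between)]

# Toy bimodal "image": dark background pixels plus a brighter lesion region
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(40, 5, 5000), rng.normal(180, 10, 2000)])
t = otsu_threshold(img)
mask = img > t   # pixels classified as lesion
```

The threshold lands between the two intensity modes, so the bright "lesion" pixels are isolated without any hand-tuned cutoff.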
Procedia PDF Downloads 270
2296 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases
Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar
Abstract:
The farming community in India, as in other parts of the world, is one of the most stressed communities, for reasons such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue leading to farmer suicides. The lack of an integrated farm advisory system in India adds to the farmers' problems. Farmers need the right information during the early stages of a crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for detecting crop diseases from images taken by farmers with their smartphones. This research work leads to a smart assistant, built on analytics and big data, that could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, convolutional neural networks (CNNs) trained for ImageNet classification have been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers and dropout (to avoid overfitting). Models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to adapt weights learnt on the ImageNet dataset to crop diseases, which reduces the number of training epochs required. One-shot learning is used to learn from very few images, while data augmentation (rotation, zoom, shift and blurring) improves accuracy on images taken from farms. Models built using a combination of these techniques are more robust for real-world deployment. Our model is validated on the tomato crop. In India, tomato is affected by 10 different diseases. Our model achieves an accuracy of more than 95% in correctly classifying the diseases.
The main contribution of our research is a personal assistant that helps farmers manage plant diseases; although the model was validated on the tomato crop, it can easily be extended to other crops. Advances in computing technology and the availability of large datasets have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and high smartphone penetration, the feasibility of implementation is high, resulting in timely advice to farmers, increasing farmers' income and reducing input costs.
Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning
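The data augmentation step described in the abstract (rotation, zoom, shift, blurring) can be sketched with plain NumPy; the function name and the specific set of transforms below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def augment(image, rng):
    """Return simple augmented variants of a (H, W) image.

    A toy stand-in for the rotation/zoom/shift/blur augmentation used to
    enlarge a small training set of farm images."""
    variants = []
    for k in (1, 2, 3):                       # 90/180/270-degree rotations
        variants.append(np.rot90(image, k))
    for dy, dx in ((2, 0), (0, 2)):           # small wrap-around translations
        variants.append(np.roll(image, (dy, dx), axis=(0, 1)))
    variants.append(image + rng.normal(0, 0.01, image.shape))  # mild noise as a blur proxy
    return variants

rng = np.random.default_rng(1)
img = rng.random((32, 32))
batch = augment(img, rng)
# each source image yields 6 extra training samples
```

Because each labelled photo yields several plausible variants, the effective training-set size grows without collecting new field images, which is the point of augmentation when farm data is scarce.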
Procedia PDF Downloads 119
2295 Synthesis of Fluorescent PET-Type “Turn-Off” Triazolyl Coumarin Based Chemosensors for the Sensitive and Selective Sensing of Fe³⁺ Ions in Aqueous Solutions
Authors: Aidan Battison, Neliswa Mama
Abstract:
Environmental pollution by ionic species has been identified as one of the biggest challenges to the sustainable development of communities. The widespread use of organic and inorganic chemical products and the release of toxic chemical species from industrial waste have resulted in a need for advanced monitoring technologies for environmental protection, remediation and restoration. Disadvantages of conventional sensing methods include expensive instrumentation, tightly controlled experimental conditions, time-consuming procedures and sometimes complicated sample preparation. By contrast, the development of fluorescent chemosensors for biological and environmental detection of metal ions has attracted a great deal of attention due to their simplicity, high selectivity, eidetic recognition, rapid response and real-life monitoring. Coumarin derivatives S1 and S2 (Scheme 1), bearing 1,2,3-triazole moieties at position 3, have been designed and synthesized from azide and alkyne derivatives by CuAAC “click” reactions for the detection of metal ions. These compounds displayed a strong preference for Fe³⁺ ions, with complexation resulting in fluorescence quenching through photo-induced electron transfer (PET) following the “sphere of action” static quenching model. The tested metal ions included Cd²⁺, Pb²⁺, Ag⁺, Na⁺, Ca²⁺, Cr³⁺, Fe³⁺, Al³⁺, Ba²⁺, Cu²⁺, Co²⁺, Hg²⁺, Zn²⁺ and Ni²⁺. The detection limits of S1 and S2 were determined to be 4.1 and 5.1 µM, respectively. Compound S1 displayed the greatest selectivity towards Fe³⁺ in the presence of competing metal cations. S1 could also be used for the detection of Fe³⁺ in a CH₃CN/H₂O mixture. The binding stoichiometry between S1 and Fe³⁺ was determined using both Job's plot and Benesi-Hildebrand analysis; binding was shown to occur in a 1:1 ratio between the sensor and the metal cation. Reversibility studies between S1 and Fe³⁺ were conducted using EDTA.
The binding site of Fe³⁺ on S1 was determined using ¹³C NMR and molecular modelling studies. Complexation was suggested to occur between the lone pair of electrons of the coumarin carbonyl and the triazole carbon-carbon double bond.
Keywords: chemosensor, "click" chemistry, coumarin, fluorescence, static quenching, triazole
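The Benesi-Hildebrand analysis used above for the 1:1 stoichiometry rests on the linearization 1/(F₀ − F) = 1/(K(F₀ − F_min)[M]) + 1/(F₀ − F_min), so a plot against 1/[M] is a straight line and K = intercept/slope. A minimal NumPy sketch on synthetic quenching data (the concentrations, fluorescence values and association constant are invented for illustration, not the paper's measurements):

```python
import numpy as np

# Synthetic 1:1 quenching data (illustration only)
K_true, F0, Fmin = 5.0e4, 100.0, 20.0               # assumed K (M^-1) and intensities
conc = np.array([5, 10, 20, 40, 80, 160]) * 1e-6    # quencher concentration (M)
F = Fmin + (F0 - Fmin) / (1 + K_true * conc)        # fluorescence under a 1:1 model

# Benesi-Hildebrand: 1/(F0 - F) is linear in 1/[M]; K = intercept / slope
y = 1.0 / (F0 - F)
x = 1.0 / conc
slope, intercept = np.polyfit(x, y, 1)
K_fit = intercept / slope
```

A good linear fit in these coordinates supports the 1:1 sensor-to-cation ratio; systematic curvature would instead suggest higher-order binding.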
Procedia PDF Downloads 163