Search results for: leakage detection
678 Effects of Environmental Parameters on Salmonella Contaminated in Harvested Oysters (Crassostrea lugubris and Crassostrea belcheri)
Authors: Varangkana Thaotumpitak, Jarukorn Sripradite, Saharuetai Jeamsripong
Abstract:
Environmental contamination by wastewater discharges originating from anthropogenic activities leads to the accumulation of enteropathogenic bacteria in aquatic animals, especially oysters, and in shellfish harvesting areas. The consumption of raw or partially cooked oysters can therefore pose a risk of seafood-borne disease in humans. This study aimed to evaluate the relationship between the presence of Salmonella in oyster meat samples and environmental factors (ambient air temperature, relative humidity, gust wind speed, average wind speed, tidal condition, precipitation, and season) using principal component analysis (PCA). One hundred and forty-four oyster meat samples were collected from four oyster harvesting areas in Phang Nga province, Thailand, from March 2016 to February 2017. The prevalence of Salmonella in oyster meat ranged from 25.0% to 36.11% across sites. The PCA results showed that ambient air temperature, relative humidity, and precipitation were the main factors correlated with Salmonella detection in these oysters. A positive relationship was observed between Salmonella-positive oysters and relative humidity (PC1 = 0.413) and precipitation (PC1 = 0.607), while a negative association was found with ambient air temperature (PC1 = 0.338). These results suggest that lower temperature, higher precipitation, and higher relative humidity may promote Salmonella contamination of oyster meat. During such high-risk periods, harvesting of oysters should be prohibited to reduce pathogenic bacteria contamination and to minimize the hazard of salmonellosis in humans.
Keywords: oyster, Phang Nga Bay, principal component analysis, Salmonella
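The PCA step described above can be sketched briefly. The data below are randomly generated stand-ins for the environmental records, so the resulting loadings are illustrative only and will not match the PC1 values quoted in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical records of the three key variables; the real study
# used field measurements from Phang Nga province.
rng = np.random.default_rng(0)
n = 48
temp = rng.normal(28, 2, n)        # ambient air temperature, degrees C
humidity = rng.normal(75, 8, n)    # relative humidity, %
precip = rng.gamma(2.0, 50.0, n)   # precipitation, mm
X = np.column_stack([temp, humidity, precip])

# Standardize, then extract principal components; the entries of
# pca.components_[0] play the role of the PC1 loadings in the abstract.
pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
loadings = pca.components_          # shape (2, 3): rows = PCs, cols = variables
print("PC1 loadings (temp, humidity, precip):", loadings[0])
print("explained variance ratio:", pca.explained_variance_ratio_)
```

The sign of each PC1 loading indicates whether a variable moves with or against the component that co-varies with Salmonella detection.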
Procedia PDF Downloads 132
677 Synthesis of Human Factors Theories and Industry 4.0
Authors: Andrew Couch, Nicholas Loyd, Nathan Tenhundfeld
Abstract:
The rapid emergence of technology observably induces disruptive effects that carry implications for internal organizational dynamics as well as external market opportunities, strategic pressures, and threats. An examination of the historical tendencies of technology innovation shows that the body of managerial knowledge for addressing such disruption is underdeveloped. Fundamentally speaking, the impacts of innovation are unique and situationally oriented. Hence, the appropriate managerial response becomes a complex function that depends on the nature of the emerging technology, the posturing of internal organizational dynamics, the rate of technological growth, and much more. This research considers a particular case of mismanagement, the BP Texas City Refinery explosion of 2005, that carries notable discrepancies on the basis of human factors principles. Moreover, this research considers the modern technological climate (shaped by Industry 4.0 technologies) and seeks to arrive at an appropriate conceptual lens by which human factors principles and Industry 4.0 may be favorably integrated. In this manner, the careful examination of these phenomena helps to better support the sustainment of human factors principles despite the disruptive impacts that are imparted by technological innovation. In essence, human factors considerations are assessed through the application of principles that stem from usability engineering, the Swiss Cheese Model of accident causation, human-automation interaction, signal detection theory, alarm design, and other factors. Notably, this stream of research supports a broader framework in seeking to guide organizations amid the uncertainties of Industry 4.0 to capture higher levels of adoption, implementation, and transparency.
Keywords: Industry 4.0, human factors engineering, management, case study
Procedia PDF Downloads 68
676 Structural Damage Detection in a Steel Column-Beam Joint Using Piezoelectric Sensors
Authors: Carlos H. Cuadra, Nobuhiro Shimoi
Abstract:
The application of piezoelectric sensors to detect structural damage caused by seismic action on building structures is investigated. A plate-type piezoelectric sensor was developed and proposed for this task. A film-type piezoelectric sheet was attached to a steel plate and covered by a layer of glass. A special glue, a silicone that hardens under ultraviolet light, is used to fix the glass. The steel plate was then set up at a steel column-beam joint of a test specimen subjected to bending moment under monotonic and cyclic loading. The structural behavior of the test specimen during cyclic loading was verified using a finite element model, and good agreement was found between the two sets of load-displacement characteristics. The cross section of the steel elements (beam and column) is a 100 mm × 100 mm box section with a thickness of 6 mm. This steel section is specified by the Japanese Industrial Standards as carbon steel square tube for general structure (STKR400). The column and beam elements are joined perpendicularly using fillet welding, so the resulting test specimen has a T shape. When large deformation occurs, the glass plate of the sensor device cracks, and at that instant the piezoelectric material emits a voltage signal that indicates a certain level of deformation or damage. The applicability of this piezoelectric sensor to detect structural damage was verified; however, additional analysis and experimental tests are required to establish standard parameters of the sensor system.
Keywords: piezoelectric sensor, static cyclic test, steel structure, seismic damages
Procedia PDF Downloads 123
675 Role of Vision Centers in Eliminating Avoidable Blindness Caused Due to Uncorrected Refractive Error in Rural South India
Authors: Ranitha Guna Selvi D, Ramakrishnan R, Mohideen Abdul Kader
Abstract:
Purpose: To study the role of vision centers in managing preventable blindness through refractive error correction in rural South India. Methods: A retrospective analysis of patients attending 15 vision centers in rural South India from January 2021 to December 2021 was done. Medical records of 1,08,581 patients, both new and review (79,562 newly registered patients and 29,019 review patients from the 15 vision centers), were included for data analysis. All patients registered at the vision centers underwent a basic eye examination, including visual acuity, IOP measurement, slit-lamp examination, retinoscopy, and fundus examination. Results: A total of 1,08,581 patients were included in the study; 79,562 were newly registered patients at a vision center and 29,019 were review patients. Among them, 52,201 (48.1%) were male and 56,308 (51.9%) were female. The mean age of all examined patients was 41.03 ± 20.9 years (standard deviation), with a range of 1 to 113 years. Presenting mean visual acuity was 0.31 ± 0.5 in the right eye and 0.31 ± 0.4 in the left eye. Of the 1,08,581 patients, 22,770 had refractive error in the right eye and 22,721 had uncorrected refractive error in the left eye. A glass prescription was given to 17,178 (15.8%) patients, and 8,109 (7.5%) patients were referred to the base hospital for specialty clinic expert opinion or cataract surgery. Conclusion: A vision center utilizing teleconsultation for comprehensive eye screening is a very effective tool in reducing avoidable visual impairment caused by uncorrected refractive error. The vision center model is believed to be efficient as it facilitates early detection and management of uncorrected refractive errors.
Keywords: refractive error, uncorrected refractive error, vision center, vision technician, teleconsultation
Procedia PDF Downloads 142
674 Stereo Motion Tracking
Authors: Yudhajit Datta, Hamsi Iyer, Jonathan Bandi, Ankit Sethia
Abstract:
Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software that combines the two approaches to perform stereo motion tracking typically employs complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches: two-dimensional motion tracking using a Kalman filter, and depth detection of objects using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For stereo motion tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. At discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the perpendicular distance of the object from that plane as depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios, from high-security scenes such as bank vaults, prisons, or other detention facilities, to low-cost applications in supermarkets and car parking lots.
Keywords: kalman filter, stereo vision, motion tracking, matlab, object tracking, camera calibration, computer vision system toolbox
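The two ingredients named above, depth from a calibrated stereo pair and a Kalman filter for in-plane tracking, can be sketched briefly. This is a generic illustration (the study itself used MATLAB's Computer Vision System Toolbox); the focal length, baseline, and noise parameters below are made up.

```python
import numpy as np

# Depth from a calibrated stereo pair (pinhole model):
# Z = f * B / d, with focal length f (px), baseline B (m), disparity d (px).
def stereo_depth(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

# Minimal constant-velocity Kalman filter step for one in-plane coordinate.
def kalman_step(x, P, z, dt=1.0, q=1e-3, r=1.0):
    F = np.array([[1, dt], [0, 1]], float)      # state transition
    H = np.array([[1, 0]], float)               # we observe position only
    Q = q * np.eye(2); R = np.array([[r]])
    x, P = F @ x, F @ P @ F.T + Q               # predict
    y = z - H @ x                               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P   # update

z_m = stereo_depth(focal_px=800, baseline_m=0.12, disparity_px=16)
print("depth:", z_m, "m")   # 800 * 0.12 / 16 = 6.0 m
```

In the described system one such filter runs per tracked object for the in-plane motion, while the disparity-derived depth fills in the third coordinate.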
Procedia PDF Downloads 327
673 Doped and Co-doped ZnO Based Nanoparticles and their Photocatalytic and Gas Sensing Property
Authors: Neha Verma, Manik Rakhra
Abstract:
Statement of the Problem: Nowadays, a tremendous increase in population and advanced industrialization aggravate the problems of air and water pollution. Growing industries pose an environmental danger that is an alarming threat to the ecosystem. To safeguard the environment, detection of perilous gases and of colored wastewater discharges that cause eutrophication is required. Researchers around the globe are making their best efforts to save the environment, and for this remediation the advanced oxidation process is used in potential applications. ZnO is an important semiconductor photocatalyst with high photocatalytic and gas sensing activities. For efficient photocatalytic and gas sensing properties, it is necessary to prepare doped/co-doped ZnO compounds to decrease the electron-hole recombination rate. However, lanthanide-doped and co-doped metal oxides are seldom studied for photocatalytic and gas sensing applications. The purpose of this study is to describe the best photocatalyst for the photodegradation of dyes and for gas sensing. Methodology & Theoretical Orientation: An economical framework has to be used for the synthesis of ZnO; based on an in-depth literature survey, a simple combustion method is utilized for the gas sensing and photocatalytic studies. Findings: Rare-earth-doped and co-doped ZnO nanoparticles were the best photocatalysts for the photodegradation of organic dyes and for different gas sensing applications, obtained by varying factors such as pH, aging time, and the concentrations of doping and co-doping metals in ZnO. Complete degradation of the dye was observed within minutes, and the gas sensing nanodevice showed a better response and quicker recovery time for doped/co-doped ZnO.
Conclusion & Significance: To prevent air and water pollution, well-crystalline ZnO nanoparticles were synthesized by a rapid and economic method and used as a photocatalyst for the photodegradation of organic dyes and in gas sensing applications to detect the release of hazardous gases into the environment.
Keywords: ZnO, photocatalyst, photodegradation of dye, gas sensor
Procedia PDF Downloads 155
672 Signs, Signals and Syndromes: Algorithmic Surveillance and Global Health Security in the 21st Century
Authors: Stephen L. Roberts
Abstract:
This article offers a critical analysis of the rise of syndromic surveillance systems for the advanced detection of pandemic threats within contemporary global health security frameworks. The article traces the iterative evolution and ascendancy of three such novel syndromic surveillance systems for the strengthening of health security initiatives over the past two decades: 1) The Program for Monitoring Emerging Diseases (ProMED-mail); 2) The Global Public Health Intelligence Network (GPHIN); and 3) HealthMap. This article demonstrates how each newly introduced syndromic surveillance system has become increasingly oriented towards the integration of digital algorithms into core surveillance capacities to continually harness and forecast upon infinitely generating sets of digital, open-source data, potentially indicative of forthcoming pandemic threats. This article argues that the increased centrality of the algorithm within these next-generation syndromic surveillance systems produces a new and distinct form of infectious disease surveillance for the governing of emergent pathogenic contingencies. Conceptually, the article also shows how the rise of this algorithmic mode of infectious disease surveillance produces divergences in the governmental rationalities of global health security, leading to the rise of an algorithmic governmentality within contemporary contexts of Big Data and these surveillance systems. Empirically, this article demonstrates how this new form of algorithmic infectious disease surveillance has been rapidly integrated into diplomatic, legal, and political frameworks to strengthen the practice of global health security – producing subtle, yet distinct shifts in the outbreak notification and reporting transparency of states, increasingly scrutinized by the algorithmic gaze of syndromic surveillance.
Keywords: algorithms, global health, pandemic, surveillance
Procedia PDF Downloads 185
671 Supervisory Controller with Three-State Energy Saving Mode for Induction Motor in Fluid Transportation
Authors: O. S. Ebrahim, K. O. Shawky, M. O. S. Ebrahim, P. K. Jain
Abstract:
An induction motor (IM) driving a pump is the main consumer of electricity in a typical fluid transportation system (FTS). It has been shown that changing the connection of the stator windings from delta to star at no load can achieve noticeable active and reactive energy savings. This paper proposes a supervisory hysteresis liquid-level control with a three-state energy saving mode (ESM) for the IM in an FTS that includes a storage tank. The IM pump drive comprises a modified star/delta switch and a hydrodynamic coupler. The three-state ESM is defined alongside normal running and named by analogy with computer power-saving states as follows: a sleeping mode, in which the motor runs at no load with the delta stator connection; a hibernate mode, in which the motor runs at no load with the star connection; and motor shutdown as the third energy-saver mode. A logic flowchart is synthesized to select the motor state at no load for the best energy cost reduction, considering the motor thermal capacity used. An artificial neural network (ANN) state estimator, based on a recurrent architecture, is constructed and trained in order to provide fault-tolerant capability for the supervisory controller, and Wald's sequential test is used for sensor fault detection. Theoretical analysis, preliminary experimental testing, and computer simulations are performed to show the effectiveness of the proposed control in terms of reliability, power quality, and energy/coenergy cost reduction, with the suggestion of power factor correction.
Keywords: ANN, ESM, IM, star/delta switch, supervisory control, FTS, reliability, power quality
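The three-state selection logic can be illustrated with a small sketch. This is not the paper's logic flow-chart: the duration thresholds and the thermal-capacity cutoff below are invented purely for illustration, following the stated analogy with computer power-saving states.

```python
# Hedged sketch of a three-state energy-saving-mode selector for an idle
# induction-motor pump drive. Threshold values are illustrative assumptions.
SLEEP_MAX_S = 60        # short no-load gap: stay in delta (fast restart)
HIBERNATE_MAX_S = 600   # medium gap: reconnect the stator in star

def esm_state(expected_no_load_s, thermal_capacity_used=0.0):
    """Pick an energy-saver state given the forecast no-load duration."""
    if expected_no_load_s <= SLEEP_MAX_S:
        return "sleep (delta, no load)"
    if expected_no_load_s <= HIBERNATE_MAX_S and thermal_capacity_used < 0.8:
        return "hibernate (star, no load)"
    return "shutdown"

print(esm_state(30))     # sleep: restart is imminent
print(esm_state(300))    # hibernate: star connection cuts no-load losses
print(esm_state(3600))   # shutdown: long idle period
```

The hysteresis in the actual supervisory liquid-level controller would keep the drive from oscillating between these states near a threshold.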
Procedia PDF Downloads 194
670 Investigation of Detectability of Orbital Objects/Debris in Geostationary Earth Orbit by Microwave Kinetic Inductance Detectors
Authors: Saeed Vahedikamal, Ian Hepburn
Abstract:
Microwave Kinetic Inductance Detectors (MKIDs) are considered one of the most promising photon detectors of the future for many astronomical applications, such as exoplanet detection. The advantages of MKIDs stem from their single-photon sensitivity (ranging from UV to optical and near infrared), photon energy resolution, and high temporal capability (~microseconds). There has been substantial progress in the development of these detectors, and megapixel MKID arrays are now possible. The unique capability of recording an incident photon and its energy (or wavelength) while also registering its time of arrival to within a microsecond enables an array of MKIDs to produce a four-dimensional data block (x, y, z, t) comprising the x, y spatial coordinates, a per-pixel spectral axis z, and a per-pixel temporal axis t. This offers the possibility that the spectrum and brightness variation of any detected piece of space debris as a function of time might offer a unique identifier or fingerprint. Such a fingerprint signal from any object identified in multiple detections by different observers has the potential to determine the orbital features of the object and be used for tracking. Modelling performed so far shows that with a 20 cm telescope located at an astronomical observatory (e.g., La Palma, Canary Islands) we could detect sub-cm objects at GEO. By considering a Lambertian sphere with a 10% reflectivity (the albedo of the Moon), we anticipate the following for a GEO object: a 10 cm object imaged in a 1 second capture, a 1.2 cm object for a 70 second integration, or a 0.65 cm object for a 4 minute integration. We present details of our modelling and the potential instrument for a dedicated GEO surveillance system.
Keywords: space debris, orbital debris, detection system, observation, microwave kinetic inductance detectors, MKID
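The detectability figures above can be cross-checked with a back-of-envelope brightness model. The sketch below uses the textbook apparent-magnitude expression for a fully illuminated Lambertian sphere; it is our assumption, not the authors' model, and it ignores phase angle, integration time, and detector efficiency.

```python
import math

# Apparent magnitude of a fully lit Lambertian sphere seen at range d:
# it reflects a fraction (2/3) * albedo * (R/d)^2 of the solar flux
# toward the observer. Values below are standard textbook assumptions.
M_SUN = -26.74            # apparent magnitude of the Sun

def debris_magnitude(radius_m, range_m, albedo=0.1):
    frac = (2.0 / 3.0) * albedo * (radius_m / range_m) ** 2
    return M_SUN - 2.5 * math.log10(frac)

# 10 cm diameter object at GEO altitude (~35,786 km), lunar-like albedo:
print(round(debris_magnitude(0.05, 3.5786e7), 1))
```

The result lands near magnitude 20, consistent with small GEO debris being a faint but reachable target for a modest telescope with a photon-counting detector.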
Procedia PDF Downloads 98
669 Biophysical Modeling of Anisotropic Brain Tumor Growth
Authors: Mutaz Dwairy
Abstract:
Solid tumors have high interstitial fluid pressure (IFP), high mechanical stress, and low oxygen levels. Solid stresses may induce apoptosis, stimulate the invasiveness and metastasis of cancer cells, and lower their proliferation rate, while oxygen concentration may affect the response of cancer cells to treatment. Although tumors grow in a nonhomogeneous environment, many existing theoretical models assume homogeneous growth in tissue with uniform mechanical properties. The brain, for example, consists of three primary materials: white matter, gray matter, and cerebrospinal fluid (CSF); tissue inhomogeneity should therefore be considered in the analysis. This study established a physical model based on convection-diffusion equations and continuum mechanics principles. The model captures the geometrical inhomogeneity of the brain by including the three different matters in the analysis: white matter, gray matter, and CSF. It also considers fluid-solid interaction and explicitly describes the effects of mechanical factors, e.g., solid stresses and IFP, chemical factors, e.g., oxygen concentration, and biological factors, e.g., cancer cell concentration, on growing tumors. In this article, we applied the model to a brain tumor positioned within the white matter, accounting for brain inhomogeneity, to estimate the solid stresses, IFP, cancer cell concentration, oxygen concentration, and deformation of the tissues within the neoplasm and its surroundings. Tumor size was estimated at different time points. This model might be clinically valuable for cancer detection and treatment planning by measuring mechanical stresses, IFP, and oxygen levels in the tissue.
Keywords: biomechanical model, interstitial fluid pressure, solid stress, tumor microenvironment
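One building block of such convection-diffusion models can be sketched in isolation. The following is a minimal, hedged illustration, not the authors' model: steady-state oxygen diffusion with first-order consumption in a 1-D tissue strip, solved by finite differences. All parameter values are illustrative assumptions.

```python
import numpy as np

# Steady-state oxygen balance in a 1-D strip between two vessels:
#   D * d2C/dx2 - k*C = 0,  C(0) = C(L) = C0,
# with diffusivity D, consumption rate k, and boundary concentration C0.
D, k, C0, L, n = 2e-9, 2e-3, 1.0, 2e-3, 101   # m^2/s, 1/s, a.u., m, nodes
x = np.linspace(0, L, n)
h = x[1] - x[0]

# Assemble the finite-difference system A @ C = b.
A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0; b[0] = b[-1] = C0   # Dirichlet boundaries
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = D / h**2
    A[i, i] = -2 * D / h**2 - k
C = np.linalg.solve(A, b)
print("minimum O2 (strip center):", C[n // 2])
```

The hypoxic dip at the center mirrors, in one dimension, the low-oxygen tumor core that the full model resolves in inhomogeneous 3-D brain tissue.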
Procedia PDF Downloads 47
668 The Development of Liquid Chromatography Tandem Mass Spectrometry Method for Citrinin Determination in Dry-Fermented Meat Products
Authors: Ana Vulic, Tina Lesic, Nina Kudumija, Maja Kis, Manuela Zadravec, Nada Vahcic, Tomaz Polak, Jelka Pleadin
Abstract:
Mycotoxins are toxic secondary metabolites produced by numerous types of molds. They can contaminate both food and feed and therefore represent a serious public health concern. Production of dry-fermented meat products involves ripening, during which molds can overgrow the product surface, produce mycotoxins, and consequently contaminate the final product. Citrinin is a mycotoxin produced mainly by Penicillium citrinum. Data on citrinin occurrence in both food and feed are limited, so there is a need for research on its occurrence in these types of meat products. An LC-MS/MS method for citrinin determination was developed and validated. Sample preparation was performed using immunoaffinity columns, which resulted in clean sample extracts. Method validation included the determination of the limit of detection (LOD), the limit of quantification (LOQ), recovery, linearity, and matrix effect, in accordance with the latest validation guidance. The determined LOD and LOQ were 0.60 µg/kg and 1.98 µg/kg, respectively, showing good method sensitivity. The method was tested for linearity in the calibration range of 1 µg/L to 10 µg/L. The recovery was 100.9%, while the matrix effect was 0.7%. This method was employed in the analysis of 47 samples of dry-fermented sausages collected from local households. Citrinin was not detected in any of these samples, probably because of the short ripening period of the tested sausages, which takes at most three months. The developed method shall be used to test other types of traditional dry-cured products, such as prosciuttos, whose surface is usually more heavily overgrown by surface molds due to the longer ripening period.
Keywords: citrinin, dry-fermented meat products, LC-MS/MS, mycotoxins
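The LOD/LOQ values above come from method validation. A common calibration-based way to estimate such limits (LOD = 3.3σ/S, LOQ = 10σ/S, in the style of ICH guidance) can be sketched as follows; the calibration data below are made up to span the 1-10 µg/L range mentioned in the abstract, and the paper does not state that this exact estimator was used.

```python
import numpy as np

# Made-up calibration data: concentration (µg/L) vs. peak area.
conc = np.array([1, 2, 4, 6, 8, 10], float)
resp = np.array([101, 198, 405, 598, 804, 1000], float)

# Linear fit: the slope S is the sensitivity, sigma the residual scatter.
S, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (S * conc + intercept)
sigma = residuals.std(ddof=2)          # n-2 degrees of freedom for a line fit

lod = 3.3 * sigma / S                  # limit of detection
loq = 10.0 * sigma / S                 # limit of quantification
print(f"LOD = {lod:.2f} µg/L, LOQ = {loq:.2f} µg/L")
```

By construction LOQ/LOD is fixed at 10/3.3, so a clean calibration (small σ, steep S) drives both limits down together.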
Procedia PDF Downloads 122
667 Chemometric-Based Voltammetric Method for Analysis of Vitamins and Heavy Metals in Honey Samples
Authors: Marwa A. A. Ragab, Amira F. El-Yazbi, Amr El-Hawiet
Abstract:
The analysis of heavy metals in honey samples is crucial: when found in honey, they denote environmental pollution. Some of these heavy metals, such as lead, are considered toxic whether present at low or high concentrations. Others, for example copper and zinc, are considered safe, even vital, minerals at low concentrations, but toxic at high concentrations. Their voltammetric determination in honey represents a challenge due to the presence of other electro-active components, such as vitamins, whose peaks may overlap with those of the metals and hinder their accurate and precise determination. The simultaneous analysis of the vitamins nicotinic acid (B3) and riboflavin (B2) and the heavy metals lead, cadmium, and zinc in honey samples was addressed. The analysis was done in 0.1 M potassium chloride (KCl) using a hanging mercury drop electrode (HMDE), followed by chemometric manipulation of the voltammetric data using the derivative method. The derivative data were then convoluted using discrete Fourier functions. The proposed method allowed the simultaneous analysis of vitamins and metals despite their varied responses and sensitivities. Although their peaks overlapped, the proposed chemometric method allowed their accurate and precise analysis. After the chemometric treatment of the data, metals were successfully quantified at low levels in the presence of vitamins (1:2000). The heavy metal limit of detection (LOD) values after the chemometric treatment decreased by more than 60% compared with those obtained from the direct voltammetric method. The method's applicability was tested by analyzing the selected metals and vitamins in real honey samples of different botanical origins.
Keywords: chemometrics, overlapped voltammetric peaks, derivative and convoluted derivative methods, metals and vitamins
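The derivative step can be illustrated on synthetic data: two overlapped Gaussian "peaks" merge into a single maximum in the raw voltammogram, but the second derivative recovers one feature per analyte. The peak positions and widths below are invented, and the Fourier-convolution smoothing stage of the actual method is omitted for brevity.

```python
import numpy as np

# Two overlapped voltammetric peaks, modelled as Gaussians on a potential axis.
x = np.linspace(-0.25, 0.40, 1301)               # potential, V (illustrative)
peak = lambda c, s: np.exp(-(x - c) ** 2 / (2 * s ** 2))
signal = peak(0.00, 0.08) + peak(0.15, 0.08)     # separation < 2*sigma: fused

d2 = np.gradient(np.gradient(signal, x), x)      # second derivative

def count_extrema(y, kind):
    inner = y[1:-1]
    if kind == "max":
        return int(np.sum((inner > y[:-2]) & (inner > y[2:])))
    return int(np.sum((inner < y[:-2]) & (inner < y[2:])))

print("raw maxima:", count_extrema(signal, "max"))   # 1: peaks are fused
print("d2 minima:", count_extrema(d2, "min"))        # 2: one per analyte
```

Each second-derivative minimum sits near an analyte's peak potential, which is what lets the overlapped metal and vitamin responses be quantified separately.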
Procedia PDF Downloads 150
666 Structural Health Monitoring of the 9-Story Torre Central Building Using Recorded Data and Wave Method
Authors: Tzong-Ying Hao, Mohammad T. Rahmani
Abstract:
The Torre Central building is a 9-story shear wall structure located in Santiago, Chile, and has been instrumented since 2009. Events of different intensity (ambient vibrations, weak and strong earthquake motions) have been recorded, so the building can serve as a full-scale benchmark to evaluate the structural health monitoring method developed. The first part of this article presents an analysis of inter-story drifts and of changes in the first system frequencies (estimated from the relative displacement response of the 8th floor with respect to the basement in the recorded data) as baseline indicators of the occurrence of damage. During the 2010 Chile earthquake, the system frequencies were observed to decrease by approximately 24% in the EW and 27% in the NS motions; near the end of shaking, an increase of about 17% in the EW motion was detected. The structural health monitoring (SHM) method based on changes in wave traveling time (the wave method) within a layered shear beam model of the structure is presented in the second part of this article. If structural damage occurs, the velocity of waves propagating through the structure changes. The wave method measures the velocities of shear wave propagation from the impulse responses generated from data recorded at various locations inside the building. Our analysis and results show that the detected changes in wave velocities are consistent with the observed damage. On this basis, the wave method is shown to be suitable for actual implementation in structural health monitoring systems.
Keywords: Chile earthquake, damage detection, earthquake response, impulse response, layered shear beam, structural health monitoring, Torre Central building, wave method, wave travel time
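The core computation of the wave method, a travel time extracted from impulse responses at two levels, can be sketched with synthetic signals. The building height, wave velocity, and pulse shape below are illustrative assumptions, not Torre Central values.

```python
import numpy as np

# Estimate shear-wave travel time between two floors by cross-correlating
# their impulse responses; a drop in velocity would indicate damage.
fs = 200.0                         # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
height = 24.0                      # basement-to-roof distance, m (assumed)
v_true = 120.0                     # shear-wave velocity in the walls, m/s
delay = height / v_true            # 0.2 s travel time

pulse = lambda t0: np.exp(-((t - t0) / 0.02) ** 2)
basement = pulse(0.50)             # impulse response at the base
roof = pulse(0.50 + delay)         # same pulse arriving later at the roof

# Peak of the cross-correlation gives the lag of roof relative to basement.
lag = np.argmax(np.correlate(roof, basement, mode="full")) - (len(t) - 1)
tau = lag / fs
print("travel time:", tau, "s; velocity:", height / tau, "m/s")
```

Repeating this estimate before and after a strong-motion event, floor pair by floor pair, localizes the stories where wave velocity (and hence stiffness) has dropped.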
Procedia PDF Downloads 364
665 HIV Incidence among Men Who Have Sex with Men Measured by Pooling Polymerase Chain Reaction, and Its Comparison with HIV Incidence Estimated by BED-Capture Enzyme-Linked Immunosorbent Assay and Observed in a Prospective Cohort
Authors: Mei Han, Jinkou Zhao, Yuan Yao, Liangui Feng, Xianbin Ding, Guohui Wu, Chao Zhou, Lin Ouyang, Rongrong Lu, Bo Zhang
Abstract:
The aim was to compare the HIV incidence estimated using the BED capture enzyme-linked immunosorbent assay (BED-CEIA) and observed in a cohort against the HIV incidence among men who have sex with men (MSM) measured by pooling polymerase chain reaction (pooling-PCR). A total of 617 MSM subjects were included in a respondent-driven sampling survey in Chongqing in 2008. Among the 129 who tested HIV-antibody positive, 102 were defined as long-term infections and 27 were assessed for recent HIV infection (RHI) using BED-CEIA. The remaining 488 HIV-negative subjects were enrolled in the prospective cohort and followed up every 6 months to monitor HIV seroconversion; all 488 HIV-negative specimens were also assessed for acute HIV infection (AHI) using pooling-PCR. Of the 488 negative subjects in the open cohort, 214 (43.9%) were followed up for six months, yielding 107 person-years of observation, during which 14 subjects seroconverted. The observed HIV incidence was 12.5 per 100 person-years (95% CI = 9.1-15.7). Among the 488 HIV-negative specimens, 5 were identified with acute HIV infection using pooling-PCR, corresponding to an annual rate of 14.02% (95% CI = 1.73-26.30). The estimated HIV-1 incidence based on BED-CEIA was 12.02% (95% CI = 7.49-16.56). The HIV incidence estimated with the three different approaches differed among subgroups. In this highly HIV-prevalent MSM population, it cost US$ 1,724 to detect one AHI case, while detection of one RHI case with the BED assay cost only US$ 42. The three approaches generated comparable and high HIV incidences; pooling-PCR and the prospective cohort are closer to the true level of incidence, while BED-CEIA seems to be the most convenient and economical approach for evaluating an at-risk population's HIV incidence at the beginning of an HIV pandemic.
HIV-1 incidence was alarmingly high among the MSM population in Chongqing, particularly within the subgroup under 25 years of age and among migrants aged 25 to 34 years.
Keywords: BED-CEIA, HIV, incidence, pooled PCR, prospective cohort
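The cost contrast between the assays partly reflects two-stage (Dorfman) pooling arithmetic: a pool of n specimens is tested once, and only positive pools are retested individually. The sketch below is a generic illustration; the pool size, assumed prevalence, and any mapping to the US$ figures are not taken from the study.

```python
# Expected number of PCR reactions under two-stage (Dorfman) pooling.
def expected_tests(n_specimens, pool_size, prevalence):
    n_pools = n_specimens / pool_size
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    # One test per pool, plus individual retests for each positive pool.
    return n_pools * (1 + p_pool_positive * pool_size)

# 488 antibody-negative specimens, hypothetical 1% acute-infection rate:
pooled = expected_tests(488, pool_size=10, prevalence=0.01)
print(f"individual testing: 488 reactions; pooled: {pooled:.0f} expected")
```

At low prevalence the pooled strategy needs only a fraction of the reactions, which is why pooling-PCR is feasible for rare acute infections despite the high per-reaction cost.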
Procedia PDF Downloads 411
664 Roof and Road Network Detection through Object Oriented SVM Approach Using Low Density LiDAR and Optical Imagery in Misamis Oriental, Philippines
Authors: Jigg L. Pelayo, Ricardo G. Villar, Einstine M. Opiso
Abstract:
Advances in aerial laser scanning in the Philippines have opened up entire fields of research in remote sensing and machine vision that aspire to provide accurate, timely information for the government and the public. Rapid mapping of polygonal roads and roof boundaries is one such utilization, offering applications in disaster risk reduction, mitigation, and development. The study uses low-density LiDAR data and high-resolution aerial imagery in an object-oriented approach, grounding the data analysis in a machine learning algorithm to minimize the constraints of feature extraction. Since separating one class from another requires distinct regions of a multi-dimensional feature space, non-trivial computation for fitting distributions was implemented to formulate the learned ideal hyperplane. Customized hybrid features were generated and then used to improve the classifier results. Supplemental algorithms for filtering and reshaping object features were developed in the rule set to enhance the final product. Several advantages in terms of simplicity, applicability, and process transferability are noticeable in the methodology. The algorithm was tested at random locations across Misamis Oriental province in the Philippines, demonstrating robust performance with an overall accuracy greater than 89% and potential for semi-automation. The extracted results will become a vital input for decision makers, urban planners, and even the commercial sector in various assessment processes.
Keywords: feature extraction, machine learning, OBIA, remote sensing
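The object-based SVM step can be sketched with scikit-learn on invented object features. The real study used customized hybrid features derived from LiDAR and imagery, so the two features and class geometry here are purely illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each segmented object carries per-object features; here we invent two:
# normalized height above ground (from LiDAR) and mean image brightness.
rng = np.random.default_rng(1)
n = 200
roofs = np.column_stack([rng.normal(5.0, 1.5, n),   # elevated, bright
                         rng.normal(0.6, 0.1, n)])
roads = np.column_stack([rng.normal(0.1, 0.1, n),   # at ground, darker
                         rng.normal(0.3, 0.1, n)])
X = np.vstack([roofs, roads])
y = np.array([1] * n + [0] * n)                     # 1 = roof, 0 = road

# An RBF-kernel SVM learns the separating hyperplane in feature space.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("training accuracy:", clf.score(X, y))
print("tall object ->", "roof" if clf.predict([[6.0, 0.55]])[0] else "road")
```

In the full pipeline, these predictions would then pass through the rule-set filtering and reshaping steps before the final roof and road polygons are exported.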
Procedia PDF Downloads 362
663 Computer-Aided Detection of Liver and Spleen from CT Scans using Watershed Algorithm
Authors: Belgherbi Aicha, Bessaid Abdelhafid
Abstract:
In recent years, a great deal of research has been devoted to the development of semi-automatic and automatic techniques for the analysis of abdominal CT images. The first and fundamental step in all these studies is semi-automatic liver and spleen segmentation, which is still an open problem. In this paper, a semi-automatic liver and spleen segmentation method based on mathematical morphology and the watershed algorithm is proposed. Our algorithm proceeds in two parts. In the first, we determine the region of interest by applying morphological operators to extract the liver and spleen. The second step improves the quality of the image gradient: we propose a method that reduces the over-segmentation problem by applying spatial filters followed by morphological filters. Thereafter, we proceed to the segmentation of the liver and spleen. The aim of this work is to develop a semi-automatic liver and spleen segmentation method based on the watershed algorithm, to improve the accuracy and robustness of the segmentation, and to evaluate the new semi-automatic approach against manual liver segmentation. To validate the proposed segmentation technique, we tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented (liver and spleen) contours and the contours traced manually by radiological experts. Liver segmentation achieved a sensitivity and specificity of sensLiver = 96% and specifLiver = 99%, respectively.
Spleen segmentation achieves similarly promising results: sensSpleen = 95% and specifSpleen = 99%.
Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm
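The gradient-plus-watershed idea can be sketched on a synthetic image. This is not the authors' pipeline: the toy "CT slice", the marker positions, and the SciPy calls are all assumptions made for illustration:

```python
import numpy as np
from scipy import ndimage as ndi

# synthetic "CT slice": two bright organ-like blobs on a dark background
img = np.zeros((64, 64))
img[10:30, 10:30] = 1.0          # stand-in for one organ
img[40:60, 40:60] = 1.0          # stand-in for another
img = ndi.gaussian_filter(img, 2)

# morphological gradient (dilation minus erosion) highlights the borders
grad = ndi.grey_dilation(img, size=(3, 3)) - ndi.grey_erosion(img, size=(3, 3))

# one seed marker inside each region of interest plus one in the background
markers = np.zeros(img.shape, dtype=np.int16)
markers[20, 20] = 1
markers[50, 50] = 2
markers[0, 0] = 3

# flood the gradient image from the markers
labels = ndi.watershed_ift((grad * 255).astype(np.uint8), markers)
```

Filtering the gradient before flooding, as the paper proposes, is precisely what keeps a watershed like this from over-segmenting.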
Procedia PDF Downloads 325
662 Fatal Attractions: Exploiting Olfactory Communication between Invasive Predators for Conservation
Authors: Patrick M. Garvey, Roger P. Pech, Daniel M. Tompkins
Abstract:
Competition is a widespread interaction, and natural selection will encourage the development of mechanisms that recognise and respond to dominant competitors if this information reduces the risk of a confrontation. As olfaction is the primary sense for most mammals, our research tested whether olfactory 'eavesdropping' mediates alien species interactions and whether we could exploit our understanding of this behaviour to create 'super-lures'. We used a combination of pen and field experiments to evaluate the importance of this behaviour. In pen trials, stoats (Mustela erminea) were exposed to the body odour of three dominant predators (cat / ferret / African wild dog), and these scents were found to be attractive. A subsequent field trial tested whether the attraction displayed towards predator odour, particularly ferret (Mustela furo) pheromones, could be replicated with invasive predators in the wild. We found that ferret odour significantly improved detection and activity of stoats and hedgehogs (Erinaceus europaeus), while also improving detections of ship rats (Rattus rattus). Our current research aims to identify the key components of ferret odour, using chemical analysis and behavioural experiments, so that we can produce 'scent from a can'. A lure based on a competitor's odour would be beneficial in many circumstances, including: (i) where individuals display variability in attraction to food lures, (ii) where plentiful food resources are available, (iii) where new immigrants arrive into an area, and (iv) where long-life lures are required. Pest management can therefore benefit by exploiting behavioural responses to odours to achieve conservation goals.
Keywords: predator interactions, invasive species, eavesdropping, semiochemicals
Procedia PDF Downloads 412
661 Body Farming in India and Asia
Authors: Yogesh Kumar, Adarsh Kumar
Abstract:
A body farm is a research facility where research is done on forensic investigation and medico-legal disciplines such as forensic entomology, forensic pathology, forensic anthropology, forensic archaeology, and related areas of forensic veterinary science. All the research is done to collect data on the rate of decomposition (animal and human) and on forensically important insects, to assist in crime detection. The data collected are used by forensic pathologists, forensic experts, and other specialists in the investigation of crime cases and in further research. The research work covers the different conditions of a dead body, i.e. fresh, bloated, decayed, dry, and skeletonised, and data on local insects, which depend on the climatic conditions of the local areas of that country. Therefore, it is the need of the time to collect appropriate data under managed conditions, with a proper set-up, in every country. Hence, it is the duty of the scientific community of every country to establish or propose such facilities for justice and social management. Body farms are also used for the training of police, the military, investigative dogs, and other agencies. At present, only four countries, viz. the U.S., Australia, Canada, and the Netherlands, have body farms and related facilities organised in a systematic manner, and there is as yet no body farm in Asia. In India, we have been trying to establish a body farm in the A&N Islands, which are near Singapore, Malaysia, and some other Asian countries. In view of the above, it becomes imperative to discuss the matter with Asian countries so as to collect decomposition data in a proper manner by establishing body farms. We can also share data, knowledge, and expertise, collaborate with one another to make such facilities better, and maintain good scientific relations to promote science and explore new ways of investigation at the world level.
Keywords: body farm, rate of decomposition, forensically important flies, time since death
Procedia PDF Downloads 87
660 The Role of Artificial Intelligence in Criminal Procedure
Authors: Herke Csongor
Abstract:
Artificial intelligence (AI) has been used in the decision-making processes of the United States criminal justice system for decades. In the field of law, including criminal law, AI can provide serious assistance in decision-making in many places. The paper reviews four main areas where AI already plays a role in the criminal justice system and where it is expected to play an increasingly important one. The first area is predictive policing: a number of algorithms are used to prevent the commission of crimes by predicting potential crime locations or perpetrators. This may include so-called hot-spot analysis, crime linking, and predictive coding. The second area is Big Data analysis: huge data sets are already opaque to human inspection and therefore unprocessable by hand. Law is one of the largest producers of digital documents (because not only decisions but nowadays the entire case file is available digitally), and this volume can only be handled with the help of computer programs, on which the development of AI systems has an increasing impact. The third area is criminal statistical data analysis. Collecting statistical data using traditional methods required enormous human resources. AI is a huge step forward in that it can analyze the database itself: a collection according to any requested criterion can be produced in a few seconds, and the AI can flag any important connection it finds, whether from the point of view of crime prevention or crime detection. Finally, the use of AI in decision-making in both investigative and judicial settings is analyzed in detail.
While some are skeptical about the future role of AI in decision-making, many believe that the question is not whether AI will participate in decision-making, but only when and to what extent it will transform the current decision-making system.
Keywords: artificial intelligence, international criminal cooperation, planning and organizing of the investigation, risk assessment
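Hot-spot analysis of the kind mentioned above is often implemented as density-based clustering of incident coordinates. A minimal sketch with synthetic data — the coordinates, the eps and min_samples values, and the use of scikit-learn's DBSCAN are illustrative assumptions, not any deployed system:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
# synthetic incident coordinates: two dense clusters plus scattered noise
hotspot_a = rng.normal([0.0, 0.0], 0.05, (30, 2))
hotspot_b = rng.normal([1.0, 1.0], 0.05, (30, 2))
scattered = rng.uniform(-0.5, 1.5, (10, 2))
incidents = np.vstack([hotspot_a, hotspot_b, scattered])

# points with >= 5 neighbours within radius 0.15 form a hot spot;
# label -1 marks isolated incidents
labels = DBSCAN(eps=0.15, min_samples=5).fit_predict(incidents)
n_hotspots = len(set(labels) - {-1})
```

Real predictive-policing systems add temporal weighting and repeat-victimisation models on top of such spatial clustering.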
Procedia PDF Downloads 38
659 Oxygen-Tolerant H₂O₂ Reduction Catalysis by Iron Phosphate Coated Iron Oxides
Authors: Chia-Ting Chang, Chia-Yu Lin
Abstract:
We report on the decisive role of iron phosphate (FePO₄), formed in situ during electrochemical characterization, in the electrocatalytic activity of iron oxides towards H₂O₂ reduction, especially their oxygen tolerance. The iron oxides studied, namely nanorod arrays (NRs) of β-FeOOH, γ-Fe₂O₃, and α-Fe₂O₃ (α-Fe₂O₃NR), α-Fe₂O₃ nanosheets (α-Fe₂O₃NS), and α-Fe₂O₃ nanoparticles (α-Fe₂O₃NP), were synthesized using chemical bath deposition (CBD). The nanostructure was controlled simply by adjusting the composition of the precursor solution and the reaction duration of the CBD process, whereas the crystal phase was controlled by adjusting the annealing temperature. It was found that FePO₄ was deposited in situ onto the surface of the nanostructured α-Fe₂O₃ during electrochemical pretreatment in the phosphate electrolyte, and both FePO₄ and α-Fe₂O₃ were active in catalysing the electrochemical reduction of H₂O₂. In addition, the interaction/compatibility between the deposited FePO₄ and the iron oxides has a decisive effect on the overall electrocatalytic activity of the resultant electrodes; FePO₄ showed a synergetic effect on the overall electrocatalytic activity only for α-Fe₂O₃NR and α-Fe₂O₃NS. Both α-Fe₂O₃NR and α-Fe₂O₃NS showed two reduction peaks in phosphate electrolyte containing H₂O₂: one pH-dependent and related to the electrocatalytic properties of FePO₄, and the other pH-independent and related only to the intrinsic electrocatalytic properties of α-Fe₂O₃NR and α-Fe₂O₃NS. However, all the iron oxides showed only one pH-independent reduction peak in non-phosphate electrolyte containing H₂O₂. The synergistic catalysis exerted by FePO₄ with α-Fe₂O₃NR or α-Fe₂O₃NS provides an additional oxygen-insensitive active site for H₂O₂ reduction, which allows their application to the electrochemical detection of H₂O₂ without interference from the O₂ involved in oxidase-catalyzed chemical processes.
Keywords: H₂O₂ reduction, iron oxide, iron phosphate, O₂ tolerance
Procedia PDF Downloads 415
658 Symmetry Properties of Linear Algebraic Systems with Non-Canonical Scalar Multiplication
Authors: Krish Jhurani
Abstract:
The research paper presents an in-depth analysis of symmetry properties in linear algebraic systems under non-canonical scalar multiplication structures, specifically semirings and near-rings. The objective is to unveil the profound alterations that occur in traditional linear algebraic structures when conventional field multiplication is replaced with these non-canonical operations. In the methodology, we first establish the theoretical foundations of non-canonical scalar multiplication, followed by a meticulous investigation into the resulting symmetry properties, focusing on eigenvectors, eigenspaces, and invariant subspaces. The methodology involves a combination of rigorous mathematical proofs and derivations, supplemented by illustrative examples that exhibit the discovered symmetry properties in tangible mathematical scenarios. The core findings uncover unique symmetry attributes: linear algebraic systems with semiring scalar multiplication exhibit distinctive eigenvectors and eigenvalues, while systems operating under near-ring scalar multiplication disclose unique invariant subspaces. These discoveries drastically broaden the traditional landscape of symmetry properties in linear algebraic systems. The findings have potential practical implications across fields such as physics, coding theory, and cryptography: they could enhance error detection and correction codes, help devise more secure cryptographic algorithms, and even influence theoretical physics. This breadth of applicability accentuates the significance of the presented research.
The paper thus contributes to the mathematical community by bringing forth new perspectives on linear algebraic systems and their symmetry properties through the lens of non-canonical scalar multiplication, coupled with an exploration of practical applications.
Keywords: eigenspaces, eigenvectors, invariant subspaces, near-rings, non-canonical scalar multiplication, semirings, symmetry properties
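As a concrete example of replacing field multiplication with a semiring operation, here is matrix multiplication over the (min, +) "tropical" semiring, where addition becomes minimum and multiplication becomes ordinary addition. The shortest-path interpretation is a standard fact about this semiring, not a result of the paper:

```python
import numpy as np

def tropical_matmul(A, B):
    """Matrix product over the (min, +) semiring: the field operations
    (+, *) are replaced by (min, +); np.inf plays the role of the
    additive identity (the semiring 'zero')."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.full((n, m), np.inf)
    for i in range(n):
        for j in range(m):
            C[i, j] = np.min(A[i, :] + B[:, j])
    return C

# edge-weight matrix of a 3-node directed path 0 -> 1 -> 2
INF = np.inf
A = np.array([[0.0, 1.0, INF],
              [INF, 0.0, 1.0],
              [INF, INF, 0.0]])
C = tropical_matmul(A, A)   # shortest path lengths using at most 2 edges
```

Squaring the weight matrix under this product yields shortest two-edge path lengths, illustrating how familiar linear-algebra constructions acquire new meaning under non-canonical scalar operations.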
Procedia PDF Downloads 123
657 A Multi-Stage Learning Framework for Reliable and Cost-Effective Estimation of Vehicle Yaw Angle
Authors: Zhiyong Zheng, Xu Li, Liang Huang, Zhengliang Sun, Jianhua Xu
Abstract:
Yaw angle plays a significant role in many vehicle safety applications, such as collision avoidance and lane-keeping systems. Although the estimation of the yaw angle has been extensively studied in the existing literature, simultaneously achieving a reliable and cost-effective solution in complex urban environments remains the main challenge. This paper proposes a multi-stage learning framework to estimate the yaw angle with a monocular camera, which can address this challenge in a more reliable manner. In the first stage, an efficient road detection network is designed to extract the road region, providing a highly reliable reference for the estimation. In the second stage, a variational auto-encoder (VAE) is proposed to learn the distribution patterns of road regions; it is particularly suitable for modeling the changing patterns of the yaw angle under different driving maneuvers, and it inherently enhances the generalization ability. In the last stage, a gated recurrent unit (GRU) network is used to capture the temporal correlations of the learned patterns, which can further improve the estimation accuracy since changes in the deflection angle are relatively easy to recognize across consecutive frames. Afterward, the yaw angle can be obtained by combining the estimated deflection angle with the road direction stored in a roadway map. Through effective multi-stage learning, the proposed framework achieves high reliability while maintaining good accuracy. Road-test experiments with different driving maneuvers were performed in complex urban environments, and the results validate the effectiveness of the proposed framework.
Keywords: gated recurrent unit, multi-stage learning, reliable estimation, variational auto-encoder, yaw angle
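For readers unfamiliar with the GRU used in the last stage, a single update step can be sketched in plain NumPy using the generic textbook GRU equations. The dimensions and random weights below are arbitrary; the authors' actual network is not specified in the abstract:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, W, U, b):
    """One GRU update; W, U, b hold the parameters of the z, r, h gates."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])             # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])             # reset gate
    h_cand = np.tanh(W["h"] @ x + U["h"] @ (r * h) + b["h"])  # candidate state
    return (1.0 - z) * h + z * h_cand                          # blended state

rng = np.random.default_rng(0)
dim_x, dim_h = 3, 4   # assumed toy sizes
W = {k: rng.normal(0, 0.1, (dim_h, dim_x)) for k in "zrh"}
U = {k: rng.normal(0, 0.1, (dim_h, dim_h)) for k in "zrh"}
b = {k: np.zeros(dim_h) for k in "zrh"}

# feed a short sequence of per-frame feature vectors through the cell
h = np.zeros(dim_h)
for frame_feature in rng.normal(0, 1, (5, dim_x)):
    h = gru_step(frame_feature, h, W, U, b)
```

The gating is what lets the network accumulate evidence about the deflection angle across consecutive frames while forgetting stale observations.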
Procedia PDF Downloads 144
656 Development and Pre-clinical Evaluation of New ⁶⁴Cu-NOTA-Folate Conjugates for PET Imaging of Folate Receptor-Positive Tumors
Authors: Norah Al Hokbany, Ibrahim Al Jammaz, Basem Al Otaibi, Yousif Al Malki, Subhani M. Okarvi
Abstract:
Objective: The folate receptor is over-expressed in a wide variety of human tumors, and conjugates of folate have been shown to be selectively taken up by tumor cells via the folate receptor. Our aim was to develop new folate radiotracers with favorable biochemical properties for detecting folate receptor-positive cancers. Methods: We synthesized ⁶⁴Cu-NOTA- and ⁶⁴Cu-NOTAM-folate conjugates using a straightforward, simple one-step reaction. Radiochemical yields were greater than 95% (decay-corrected), with a total synthesis time of less than 20 min. Results: Radiochemical purities were always greater than 98% without high-performance liquid chromatography (HPLC) purification. These synthetic approaches hold considerable promise as a rapid and simple method for preparing ⁶⁴Cu-folate conjugates with high radiochemical yield in a short synthesis time. In vitro tests on the KB cell line showed that significant amounts of the radioconjugates were associated with the cell fractions. Bio-distribution studies in nude mice bearing human KB xenografts demonstrated significant tumor uptake and a favorable bio-distribution profile for the ⁶⁴Cu-NOTA- and ⁶⁴Cu-NOTAM-folate conjugates. Uptake in the tumors was blocked by injection of excess folic acid, suggesting a receptor-mediated process. Conclusion: These results demonstrate that the ⁶⁴Cu-NOTAM-folate conjugate may be useful as a molecular probe for the detection and staging of folate receptor-positive cancers, such as ovarian cancer and its metastases, as well as for monitoring tumor response to treatment.
Keywords: folate, receptor, tumor imaging, ⁶⁴Cu-NOTA-folate, PET
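The "decay-corrected" yield quoted above means the measured activity is corrected back to a reference time for the physical decay of ⁶⁴Cu. A minimal sketch of that standard calculation — the 12.7 h half-life of ⁶⁴Cu is a literature value, and the function and numbers are illustrative, not the authors' worksheet:

```python
import math

CU64_HALF_LIFE_H = 12.7  # physical half-life of Cu-64, in hours

def decay_correct(measured_activity, elapsed_hours, half_life=CU64_HALF_LIFE_H):
    """Correct a measured activity back to the reference (start) time
    by undoing exponential decay: A0 = A_measured * exp(lambda * t)."""
    lam = math.log(2) / half_life
    return measured_activity * math.exp(lam * elapsed_hours)
```

For example, an activity measured one half-life (12.7 h) after end of bombardment decay-corrects to exactly twice the measured value.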
Procedia PDF Downloads 108
655 Robust Method for Evaluation of Catchment Response to Rainfall Variations Using Vegetation Indices and Surface Temperature
Authors: Revalin Herdianto
Abstract:
Recent climate changes increase uncertainties in vegetation conditions, such as health and biomass, globally and locally. Detection is, however, difficult due to the spatial and temporal scale of vegetation coverage. Because vegetation responds uniquely to its environmental conditions, such as water availability, the interplay between vegetation dynamics and hydrologic conditions leaves a signature in their feedback relationship. Vegetation indices (VI) depict vegetation biomass and photosynthetic capacity, indicating vegetation dynamics as a response to variables including hydrologic conditions and microclimate factors such as rainfall characteristics and land surface temperature (LST). It is hypothesized that this signature may be depicted by VI in its relationship with the other variables. To study this signature, several catchments in Asia, Australia, and Indonesia were analysed to assess how hydrologic characteristics vary with vegetation type. The methods used in this study include geographic identification and pixel marking of the studied catchments, analysis of the VI and LST time series of the marked pixels, and smoothing with a Savitzky-Golay filter, which is effective for large areas and extensive data. Time series of VI, LST, and rainfall from satellites and ground stations, coupled with digital elevation models, were analysed and presented. This study found that the hydrologic response of vegetation to rainfall variations may be shown within one hydrologic year, in which a drought event can be detected a year later as suppressed growth. However, above-average annual rainfall does not promote above-average growth as shown by VI. The technique is found to be a robust and tractable approach for assessing catchment dynamics in changing climates.
Keywords: vegetation indices, land surface temperature, vegetation dynamics, catchment
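The Savitzky-Golay smoothing step can be sketched on a synthetic VI-like series. The window length, polynomial order, and toy data are assumptions, and SciPy's savgol_filter is one common implementation of the filter:

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0, 2 * np.pi, 100)
seasonal = 0.5 + 0.3 * np.sin(t)   # idealized one-year VI cycle (assumed)
noisy_vi = seasonal + np.random.default_rng(0).normal(0, 0.05, t.size)

# local cubic fits over an 11-sample window smooth the noise while
# preserving the shape and timing of the seasonal peak
smoothed = savgol_filter(noisy_vi, window_length=11, polyorder=3)
```

Peak preservation is why Savitzky-Golay is popular for VI time series: a plain moving average of the same width would flatten and shift the phenological peaks that carry the drought signature.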
Procedia PDF Downloads 287
654 Faster Pedestrian Recognition Using Deformable Part Models
Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia
Abstract:
Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model (DPM) algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and a twofold increase in speed. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best-performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, yielding a solution that is faster and still more precise than all publicly available DPM implementations.
Keywords: autonomous vehicles, deformable part model, DPM, pedestrian detection, real time
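The frequency-domain speed-up rests on the convolution theorem: pointwise multiplication of spectra equals convolution of the signals. A minimal 1-D sketch with NumPy FFTs — the real DPM code convolves 2-D HOG feature maps with part filters, so this toy only illustrates the principle:

```python
import numpy as np

def fft_convolve(signal, kernel):
    """Full linear convolution computed via zero-padded real FFTs.
    For long signals this costs O(n log n) versus O(n*m) directly."""
    n = len(signal) + len(kernel) - 1
    return np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

sig = np.array([1.0, 2.0, 3.0, 4.0])   # toy "feature row"
ker = np.array([0.25, 0.5, 0.25])      # toy "filter"
out = fft_convolve(sig, ker)
```

Because every part filter must be convolved with every pyramid level, amortising the FFT of the features across all filters is where most of the order-of-magnitude gain comes from.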
Procedia PDF Downloads 281
653 Use of RAPD and ISSR Markers in Detection of Genetic Variation among Colletotrichum falcatum Went Isolates from South Gujarat India
Authors: Prittesh Patel, Rushabh Shah, Krishnamurthy Ramar, Vakulbhushan Bhaskar
Abstract:
The present research work aims at detecting genetic differences in the genomes of sugarcane red rot isolates of Colletotrichum falcatum Went using random amplified polymorphic DNA (RAPD) and inter-simple sequence repeat (ISSR) molecular markers. Ten C. falcatum isolates obtained from the stalks of different red rot-infected sugarcane cultivars were used in the present study. The amplified bands obtained with 15 RAPD primers and 21 ISSR primers were scored across the lanes, and the data were analysed using NTSYSpc 2.2 software. The results showed 80.6% and 68.07% polymorphism in the RAPD and ISSR analyses, respectively. Based on the RAPD analysis, the ten genotypes were grouped into two major clusters at a cut-off value of 0.75. The geographically distant C. falcatum isolate cfGAN from south Gujarat showed a level of similarity with the Coimbatore isolate cf8436, presented on a separate clade of the bootstrapped dendrogram. The first and second clusters consisted of five and three isolates, respectively, indicating the close relationship among them. The 21 ISSR primers produced 119 distinct and scorable loci, of which 38 were monomorphic. The number of scorable loci per primer varied from 2 (ISSR822) to 8 (ISSR807, ISSR823, and ISSR15), with an average of 5.66 loci per primer. Primer ISSR835 amplified the highest number of bands (57), while only 16 bands were obtained with primer ISSR822. Four primers, namely ISSR830, ISSR845, ISSR4, and ISSR15, showed the highest percentage of polymorphism (100%). The results indicate that both marker systems, RAPD and ISSR, can individually be used effectively to determine the genetic relationships among C. falcatum accessions collected from different parts of south Gujarat.
Keywords: Colletotrichum falcatum, ISSR, RAPD, red rot
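The polymorphism percentages reported above follow from a simple count: a locus is polymorphic when its band is present in some isolates but absent in others. A sketch of the calculation, with a band matrix fabricated to reproduce the ISSR figure of 119 loci with 38 monomorphic:

```python
def percent_polymorphic(band_matrix):
    """band_matrix: one tuple of 0/1 band scores (one per isolate) per locus.
    A locus is polymorphic if the isolates disagree on band presence."""
    polymorphic = sum(1 for locus in band_matrix if len(set(locus)) > 1)
    return 100.0 * polymorphic / len(band_matrix)

# fabricated scores: 38 monomorphic loci + 81 polymorphic loci,
# matching the totals reported for the ISSR analysis
loci = [(1, 1, 1)] * 38 + [(1, 0, 1)] * 81
pct = percent_polymorphic(loci)
```

With these counts the function returns 81/119 × 100 ≈ 68.07%, the reported ISSR polymorphism.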
Procedia PDF Downloads 361
652 Impact of Mammographic Screening on Ethnic Inequalities in Breast Cancer Stage at Diagnosis and Survival in New Zealand
Authors: Sanjeewa Seneviratne, Ian Campbell, Nina Scott, Ross Lawrenson
Abstract:
Introduction: Indigenous Māori women experience a 60% higher breast cancer mortality rate than European women in New Zealand. We explored the impact of the difference in the rate of screen-detected breast cancer between Māori and European women on more advanced disease at diagnosis and lower survival in Māori women. Methods: All primary in-situ and invasive breast cancers diagnosed in screening-age women (as defined by the New Zealand National Breast Cancer Screening Programme) between 1999 and 2012 in the Waikato area were identified from the Waikato Breast Cancer Register and the national screening database. Associations between screen versus non-screen detection and cancer stage at diagnosis and survival were compared by ethnicity and socioeconomic deprivation. Results: Māori women had 50% higher odds of being diagnosed with more advanced-stage cancer than NZ European women, half of which was explained by the lower rate of screen-detected cancer in Māori women. Significantly lower breast cancer survival rates were observed for Māori compared with NZ European women, and for the most deprived compared with the most affluent socioeconomic groups, for symptomatically detected breast cancer. No significant survival differences by ethnicity or socioeconomic deprivation were observed for screen-detected breast cancer. Conclusions: The low rate of screen-detected breast cancer appears to be a major contributor to the more advanced stage at diagnosis and lower breast cancer survival of Māori compared with NZ European women. Increasing screening participation among Māori has the potential to substantially reduce the breast cancer mortality inequity between Māori and NZ European women.
Keywords: breast cancer, screening, ethnicity, inequity
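The "50% higher odds" result corresponds to an odds ratio of about 1.5, conventionally estimated from a 2×2 table of group by stage. A sketch with fabricated counts chosen only to give OR = 1.5 — these are not the study's data:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
       a = group-1 advanced-stage, b = group-1 early-stage,
       c = group-2 advanced-stage, d = group-2 early-stage."""
    return (a / b) / (c / d)

# hypothetical counts: 60/40 advanced vs early in group 1,
# 50/50 in group 2, giving odds of 1.5 vs 1.0
or_est = odds_ratio(60, 40, 50, 50)
```

In the study itself, such odds would be adjusted (e.g. for deprivation and mode of detection) in a regression model rather than read off a raw table.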
Procedia PDF Downloads 514
651 Numerical Study of Nonlinear Guided Waves in Composite Laminates with Delaminations
Authors: Reza Soleimanpour, Ching Tai Ng
Abstract:
Fibre composites are widely used in various structures due to their attractive properties, such as a higher stiffness-to-mass ratio and better corrosion resistance compared to metallic materials. However, one serious weakness of this composite material is delamination, a subsurface separation of laminae. Even a low level of this barely visible damage can cause a significant reduction in residual compressive strength. In the last decade, the application of guided waves for damage detection has been a topic of significant interest for many researchers. Among all guided wave techniques, nonlinear guided waves have shown outstanding sensitivity and capability for detecting different types of damage, e.g. cracks and delaminations. So far, most research on the applications of nonlinear guided waves has been dedicated to isotropic materials, such as aluminium and steel, while only a few works have addressed the nonlinear characteristics of guided waves in anisotropic materials. This study investigates the nonlinear interactions of the fundamental antisymmetric Lamb wave (A0) with delamination in composite laminates using three-dimensional (3D) explicit finite element (FE) simulations. The nonlinearity considered in this study arises from the interaction of the two interfaces of the sub-laminates at the delamination region, which generates contact acoustic nonlinearity (CAN). The aim of this research is to investigate the phenomenon of CAN in composite laminated beams through a series of numerical case studies, in which the interaction of the fundamental antisymmetric Lamb wave with delaminations of different sizes is studied in detail. The results show that the A0 Lamb wave interacts with the delaminations, generating CAN in the form of higher harmonics, which is a good indicator for determining the existence of delaminations in composite laminates.
Keywords: contact acoustic nonlinearity, delamination, fibre-reinforced composite beam, finite element, nonlinear guided waves
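The harmonic generation behind CAN can be mimicked numerically: an interface that transmits compression more readily than tension distorts a pure tone, and the distortion appears at integer multiples of the excitation frequency. A toy sketch — the bilinear transmission model and all signal parameters are assumptions, not the paper's FE model:

```python
import numpy as np

fs = 100_000                      # sample rate, Hz (assumed)
t = np.arange(1000) / fs          # 10 ms of signal
f0 = 5_000                        # excitation tone, Hz (assumed)
incident = np.sin(2 * np.pi * f0 * t)

# crude contact nonlinearity: the closed interface passes compression
# fully but transmits tension at reduced amplitude (bilinear stiffness)
transmitted = np.where(incident > 0, incident, 0.3 * incident)

spectrum = np.abs(np.fft.rfft(transmitted))
freqs = np.fft.rfftfreq(t.size, 1 / fs)   # 100 Hz per bin here
```

The asymmetric clipping puts energy at 2·f0 (and higher even harmonics), which is exactly the signature the study uses to flag a delamination.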
Procedia PDF Downloads 204
650 High Resolution Sandstone Connectivity Modelling: Implications for Outcrop Geological and Its Analog Studies
Authors: Numair Ahmed Siddiqui, Abdul Hadi bin Abd Rahman, Chow Weng Sum, Wan Ismail Wan Yousif, Asif Zameer, Joel Ben-Awal
Abstract:
Advances in data capture for outcrop studies have made possible the acquisition of high-resolution digital data, offering improved and economical reservoir modelling methods. Terrestrial laser scanning utilizing LiDAR (light detection and ranging) provides a new way to build outcrop-based reservoir models, supplying a crucial piece of information for understanding heterogeneities in sandstone facies through high-resolution images and data sets. This study presents the detailed application of an outcrop-based sandstone facies connectivity model, combining information gathered from traditional fieldwork with detailed digital point-cloud data from LiDAR to develop an intermediate, small-scale reservoir sandstone facies model of the Miocene Sandakan Formation, Sabah, East Malaysia. The software RiScan Pro (v1.8.0) was used for digital data collection and post-processing, with an accuracy of 0.01 m and a point acquisition rate of up to 10,000 points per second. We provide an accurate and descriptive workflow to triangulate point clouds of different sets of sandstone facies with well-marked top and bottom boundaries, in conjunction with field sedimentology. This yields a highly accurate qualitative sandstone facies connectivity model, which is a challenge to obtain from subsurface datasets (i.e., seismic and well data). Finally, by applying this workflow, we can build an outcrop-based static connectivity model that can serve as an analogue for subsurface reservoir studies.
Keywords: LiDAR, outcrop, high resolution, sandstone facies, connectivity model
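The triangulation step of such a workflow can be sketched with SciPy's Delaunay triangulation on a toy set of plan-view points. The real workflow triangulates dense RiScan Pro point clouds between mapped facies boundaries; the random points below are purely illustrative:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
points = rng.random((50, 2))   # (x, y) of hypothetical point-cloud returns

# build a triangular mesh connecting the scanned points; each row of
# tri.simplices holds the indices of one triangle's three vertices
tri = Delaunay(points)
n_triangles = len(tri.simplices)
```

Draping such a mesh between the marked top and bottom boundaries of each facies is what turns raw laser returns into the bounded surfaces of the connectivity model.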
Procedia PDF Downloads 227
649 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models
Authors: Ethan James
Abstract:
Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNNs) has recently gained high interest in ophthalmology for computer-aided imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases, including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model for analysing these retinal images that provides a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a neural network architecture, trained on a dataset of over 20,000 real-world OCT images, that utilizes residual neural networks with cyclic pooling. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, thereby facilitating earlier treatment, which results in improved post-treatment outcomes.
Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina
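The residual ("skip connection") idea mentioned above can be shown in a few lines of NumPy: the block learns a correction F(x) that is added to its input, so a block with zero weights simply passes the signal through. This is a generic illustration of residual networks, not the authors' architecture, and the cyclic pooling stage is omitted:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Identity-shortcut residual block: y = ReLU(x + w2 @ ReLU(w1 @ x)).
    The shortcut lets gradients and features bypass the learned branch."""
    return relu(x + w2 @ relu(w1 @ x))

features = np.array([0.5, 1.0, 2.0, 0.1])   # toy activations (assumed)
# with zero weights the residual branch vanishes and the input passes through
identity_out = residual_block(features, np.zeros((4, 4)), np.zeros((4, 4)))
```

This pass-through default is what lets very deep OCT classifiers train stably: each block only has to learn a small refinement of the representation it receives.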
Procedia PDF Downloads 181