Search results for: pixel visualization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 762

252 Deposition of Size Segregated Particulate Matter in Human Respiratory Tract and Their Health Effects in Glass City Residents

Authors: Kalpana Rajouriya, Ajay Taneja

Abstract:

Particulates are ubiquitous in the air environment and pose serious threats to human beings, such as lung cancer, COPD, and asthma. Particulates mainly arise from industrial effluent, vehicular emission, and other anthropogenic activities. In the glass industrial city of Firozabad, real-time monitoring of size-segregated Particulate Matter (PM) and black carbon was carried out with an Aerosol Black Carbon Detector (ABCD) and a GRIMM portable aerosol spectrometer at two different sites, one urban and one rural. The average mass concentrations of size-segregated PM during the study period (March and April 2022) were recorded as PM10 (223.73 µg/m³), PM5.0 (44.955 µg/m³), PM2.5 (59.275 µg/m³), PM1.0 (33.02 µg/m³), PM0.5 (2.05 µg/m³), and PM0.25 (2.99 µg/m³). The highest concentration of BC was found at the urban site due to emissions from diesel engines and wood burning, while NO2 was highest at the rural site. The average concentrations of PM10 and PM2.5 exceeded the NAAQS and WHO guidelines (by 6.08 and 2.73 times). Particulate matter deposition and health risk assessment were carried out with the MPPD and USEPA models to assess particulate matter toxicity in industrial residents. The health risk assessment results showed that children are most likely to be affected by exposure to PM10 and PM2.5 and may develop various non-carcinogenic and carcinogenic diseases. The deposition results inferred that the sensitive exposed population, especially 9-year-old children, have high PM deposition, as shown in the deposition visualization, and may be at risk of developing health-related problems from exposure to size-segregated PM. These results will be discussed during the presentation.
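
The USEPA-style non-carcinogenic screening mentioned above is commonly expressed as a hazard quotient. Below is a minimal Python sketch of that generic calculation; the exposure parameters and reference concentration are illustrative placeholders, not the values used by the authors.

```python
# Hypothetical illustration of a USEPA-style inhalation hazard quotient (HQ).
# Parameter values below are generic defaults for a child scenario, not the authors' inputs.

def exposure_concentration(c_ug_m3, et_h_per_day=24, ef_days_per_year=365,
                           ed_years=6, at_hours=6 * 365 * 24):
    """Time-weighted exposure concentration (ug/m3)."""
    return c_ug_m3 * et_h_per_day * ef_days_per_year * ed_years / at_hours

def hazard_quotient(ec_ug_m3, rfc_ug_m3):
    """HQ > 1 flags a potential non-carcinogenic risk."""
    return ec_ug_m3 / rfc_ug_m3

if __name__ == "__main__":
    ec = exposure_concentration(c_ug_m3=59.3)            # measured PM2.5 from the abstract
    print(round(hazard_quotient(ec, rfc_ug_m3=5.0), 2))  # RfC here is purely illustrative
```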

Keywords: particulate matter, black carbon, NO2, deposition of PM, health risk

Procedia PDF Downloads 44
251 Localization of Geospatial Events and Hoax Prediction in the UFO Database

Authors: Harish Krishnamurthy, Anna Lafontant, Ren Yi

Abstract:

Unidentified Flying Objects (UFOs) have been an interesting topic for most enthusiasts, and hence people all over the United States report such findings online at the National UFO Report Center (NUFORC). Some of these reports are hoaxes, and among those that seem legitimate, our task is not to establish that these events are indeed related to flying objects from aliens in outer space. Rather, we intend to identify whether a report was a hoax, as identified by the UFO database team with their existing curation criterion. However, the database provides a wealth of information that can be exploited for various analyses and insights, such as social reporting, identifying real-time spatial events, and much more. We perform analysis to localize these time-series geospatial events and correlate them with known real-time events. This paper does not confirm any legitimacy of alien activity but rather attempts to gather information from likely legitimate reports of UFOs by studying the online reports. These events occur in geospatial clusters and are also time-based. We look at cluster density and data visualization to search the space of various cluster realizations and decide on the most probable clusters that provide information about the proximity of such activity. A random forest classifier is also presented that is used to distinguish true events from hoax events, using the best features available, such as region, week, time period, and duration. Lastly, we show the performance of the scheme on various days and correlate it with real-time events, where one of the UFO reports strongly correlates with a missile test conducted in the United States.
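
As an illustration only (not the authors' pipeline), the two analysis steps described above, density-based localization of geospatial report clusters and a random-forest hoax classifier on simple features, could be sketched with scikit-learn as follows; the synthetic data frame and labels are placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
reports = pd.DataFrame({
    "lat": rng.uniform(25, 49, 500),
    "lon": rng.uniform(-124, -67, 500),
    "week": rng.integers(1, 53, 500),
    "hour": rng.integers(0, 24, 500),
    "duration_min": rng.exponential(10, 500),
    "is_hoax": rng.integers(0, 2, 500),      # placeholder labels
})

# Spatial clusters of sightings (eps in degrees here; a haversine metric would be better in practice).
reports["cluster"] = DBSCAN(eps=1.0, min_samples=5).fit_predict(reports[["lat", "lon"]])

X = reports[["cluster", "week", "hour", "duration_min"]]
y = reports["is_hoax"]
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```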

Keywords: time-series clustering, feature extraction, hoax prediction, geospatial events

Procedia PDF Downloads 348
250 Local Interpretable Model-agnostic Explanations (LIME) Approach to Email Spam Detection

Authors: Rohini Hariharan, Yazhini R., Blessy Maria Mathew

Abstract:

The task of detecting email spam is a very important one in the era of digital technology that needs effective ways of curbing unwanted messages. This paper presents an approach aimed at making email spam categorization algorithms transparent, reliable, and more trustworthy by incorporating Local Interpretable Model-agnostic Explanations (LIME). Our technique provides interpretable explanations for specific email classifications to help users understand the model's decision-making process. In this study, we developed a complete pipeline that incorporates LIME into the spam classification framework and allows the creation of simplified, interpretable models tailored to individual emails. LIME identifies influential terms, pointing out key elements that drive classification results, thus reducing the opacity inherent in conventional machine learning models. Additionally, we suggest a visualization scheme for displaying keywords that will improve users' understanding of categorization decisions. We test our method on a diverse email dataset and compare its performance with various baseline models, such as Gaussian Naive Bayes, Multinomial Naive Bayes, Bernoulli Naive Bayes, Support Vector Classifier, K-Nearest Neighbors, Decision Tree, and Logistic Regression. Our testing results show that our model surpasses all other models, achieving an accuracy of 96.59% and a precision of 99.12%.
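
A minimal sketch of how LIME can be attached to a text-classification pipeline of this kind; the toy messages, the TF-IDF/logistic-regression pipeline, and the class names are assumptions for illustration, not the authors' exact setup.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from lime.lime_text import LimeTextExplainer

texts = ["win a free prize now", "meeting moved to 3pm", "claim your reward", "see attached report"]
labels = [1, 0, 1, 0]                      # 1 = spam, 0 = ham (toy data)

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipe.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["ham", "spam"])
exp = explainer.explain_instance("free prize, claim now", pipe.predict_proba, num_features=5)
print(exp.as_list())                       # (term, weight) pairs driving the decision
```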

Keywords: text classification, LIME (local interpretable model-agnostic explanations), stemming, tokenization, logistic regression

Procedia PDF Downloads 21
249 Navigating Construction Project Outcomes: Synergy Through the Evolution of Digital Innovation and Strategic Management

Authors: Derrick Mirindi, Frederic Mirindi, Oluwakemi Oshineye

Abstract:

The ongoing high rate of construction project failures worldwide is often blamed on the difficulties of managing stakeholders. This highlights the crucial role of strategic management (SM) in achieving project success. This study investigates how integrating digital tools into the SM framework can effectively address stakeholder-related challenges. This work specifically focuses on the impact of evolving digital tools, such as Project Management Software (PMS) (e.g., Basecamp and Wrike), Building Information Modeling (BIM) (e.g., Tekla BIMsight and Autodesk Navisworks), Virtual and Augmented Reality (VR/AR) (e.g., Microsoft HoloLens), drones and remote monitoring, and social media and web-based platforms, on improving stakeholder engagement and project outcomes. Through existing literature and examples of failed projects, the study highlights how the evolution of digital tools serves as a facilitator within the strategic management process. These tools offer benefits such as real-time data access, enhanced visualization, and more efficient workflows to mitigate stakeholder challenges in construction projects. The findings indicate that integrating digital tools with SM principles effectively addresses stakeholder challenges, resulting in improved project outcomes and stakeholder satisfaction. The research advocates for a combined approach that embraces both strategic management and digital innovation to navigate the complex stakeholder landscape in construction projects.

Keywords: strategic management, digital tools, virtual and augmented reality, stakeholder management, building information modeling, project management software

Procedia PDF Downloads 32
248 Detection of Temporal Change of Fishery and Island Activities by DNB and SAR on the South China Sea

Authors: I. Asanuma, T. Yamaguchi, J. Park, K. J. Mackin

Abstract:

Fishery lights on the surface can be detected by the Day and Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP). The DNB covers the spectral range of 500 to 900 nm and offers high sensitivity. However, the DNB has difficulty distinguishing fishing lights from lunar light reflected by clouds, which affects observations for half of the month. Fishery lights and other surface lights are separated from lunar light reflected by clouds by a method using the DNB and the infrared band, where the detection limits are defined as a function of the brightness temperature, with a difference from the maximum temperature for each level of DNB radiance and with the contrast of DNB radiance against the background radiance. Fishing boats or structures on islands can be detected by the Synthetic Aperture Radar (SAR) on polar-orbiting satellites using the microwave signal reflected by surface targets. The SAR faces a tradeoff between spatial resolution and coverage when detecting small targets such as fishing boats. The distribution of fishing boats and island activities was detected by the ScanSAR narrow mode of Radarsat-2, which covers 300 km by 300 km with various combinations of polarizations. The fishing boats were detected as single pixels of highly scattering targets with the ScanSAR narrow mode, whose spatial resolution is 30 m. As the look-angle-dependent scattering signals exhibit significant differences, the standard deviations of the scattered signals for each look angle were taken into account as a threshold to separate the signals of fishing boats and structures on the islands from the background noise. It was difficult to validate the targets detected by the DNB against the SAR data because of the roughly six-hour time lag between the DNB observations at midnight and the SAR observations in the morning or evening. The temporal changes of island activities were detected as a change in the mean DNB intensity over a circular area for a certain scale of activities. The increase in mean DNB intensity corresponded to the beginning of dredging, and the change in intensity indicated the end of reclamation and the following construction of facilities.
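
A hedged sketch of the two thresholding ideas described above, DNB radiance contrast against the background and a per-look-angle SAR threshold based on the standard deviation of the backscatter; the constants are illustrative, not the calibrated detection limits of the study.

```python
import numpy as np

def dnb_candidates(dnb, background, k_contrast=3.0):
    """Flag pixels whose DNB radiance stands out against the local background radiance."""
    return dnb > background * k_contrast

def sar_candidates(sigma0, look_angle, n_std=4.0, n_bins=30):
    """Per-look-angle threshold: mean + n_std * std of the backscatter in each angle bin."""
    flags = np.zeros_like(sigma0, dtype=bool)
    bins = np.linspace(look_angle.min(), look_angle.max(), n_bins + 1)
    idx = np.digitize(look_angle, bins)
    for b in np.unique(idx):
        sel = idx == b
        threshold = sigma0[sel].mean() + n_std * sigma0[sel].std()
        flags[sel] = sigma0[sel] > threshold
    return flags
```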

Keywords: day night band, SAR, fishery, South China Sea

Procedia PDF Downloads 214
247 Immobilized Iron Oxide Nanoparticles for Stem Cell Reconstruction in Magnetic Particle Imaging

Authors: Kolja Them, Johannes Salamon, Harald Ittrich, Michael Kaul, Tobias Knopp

Abstract:

Superparamagnetic iron oxide nanoparticles (SPIONs) are nanoscale magnets which can be biologically functionalized for biomedical applications. Stem cell therapies to repair damaged tissue, magnetic fluid hyperthermia for cancer therapy, and targeted drug delivery based on SPIONs are prominent examples where the visualization of a preferably low-concentration SPION distribution is essential. In 2005, a new method for tomographic SPION imaging was introduced. The method, named magnetic particle imaging (MPI), takes advantage of the nanoparticles' magnetization change caused by an oscillating external magnetic field and allows direct imaging of the time-dependent nanoparticle distribution. The SPION magnetization can be changed by the electron spin dynamics as well as by a mechanical rotation of the nanoparticle. In this work, different calibration methods in MPI are investigated for image reconstruction of magnetically labeled stem cells. It is shown that a calibration using rotationally immobilized SPIONs provides a higher quality of stem cell images with fewer artifacts than a calibration using mobile SPIONs. The enhancement of the image quality and the reduction of artifacts enable the localization and identification of a smaller number of magnetically labeled stem cells. This is important for future medical applications where low concentrations of functionalized SPIONs interacting with biological matter have to be localized.

Keywords: biomedical imaging, iron oxide nanoparticles, magnetic particle imaging, stem cell imaging

Procedia PDF Downloads 442
246 Count of Trees in East Africa with Deep Learning

Authors: Nubwimana Rachel, Mugabowindekwe Maurice

Abstract:

Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques. Deep learning makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting in both forest and non-forest areas using satellite imagery. The objective is to identify the most effective model for automated tree counting. We used different deep learning models, such as YOLOv7, SSD, and U-Net, along with Generative Adversarial Networks to generate synthetic samples for training, and other augmentation techniques, including Random Resized Crop, AutoAugment, and Linear Contrast Enhancement. These models were trained and fine-tuned using satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, U-Net demonstrated the best performance with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the U-Net model. It represents a significant contribution to the field by offering an efficient and precise alternative to conventional tree-counting methods.
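
One way the segmentation output can be turned into a tree count (an assumed post-processing step, not necessarily the authors') is to label connected components in the binary mask, as in this short sketch.

```python
import numpy as np
from scipy import ndimage

def count_trees(mask, min_pixels=20):
    """Count connected blobs in a binary segmentation mask, ignoring tiny noise blobs."""
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    return int(np.sum(sizes >= min_pixels))

demo = np.zeros((100, 100), dtype=np.uint8)   # toy mask with two "trees"
demo[10:20, 10:20] = 1
demo[50:80, 50:70] = 1
print(count_trees(demo))                      # -> 2
```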

Keywords: remote sensing, deep learning, tree counting, image segmentation, object detection, visualization

Procedia PDF Downloads 30
245 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images

Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj

Abstract:

Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates with the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescent microscopy FISH images. In this work, the initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods, allowing subtraction of spurious signals and non-biological fluorescent substrata. The method is intended to be a robust and user-friendly approach that enables users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescent images for quantitative analysis of biofilm heterogeneity.
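
A minimal sketch of an automatic biofilm-region detector on a counterstain channel using scikit-image; the Gaussian sigma and minimum object size are illustrative parameters, not the calibration used in the study.

```python
from skimage import filters, morphology, measure

def detect_biofilm(counterstain, sigma=2, min_area=500):
    """Smooth the counterstain channel, apply a global Otsu threshold, and clean the mask."""
    smoothed = filters.gaussian(counterstain, sigma=sigma)
    mask = smoothed > filters.threshold_otsu(smoothed)
    mask = morphology.remove_small_objects(mask, min_size=min_area)
    mask = morphology.remove_small_holes(mask, area_threshold=min_area)
    # Return the mask plus a bounding box per detected biofilm region for later intensity extraction.
    return mask, [region.bbox for region in measure.regionprops(measure.label(mask))]
```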

Keywords: image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization

Procedia PDF Downloads 108
244 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications

Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso

Abstract:

The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO radar sensor, an optical camera, and a dedicated set of software algorithms encompassing interferometry, tomography, and photogrammetry. The MIMO radar sensor proposed in this work provides an extremely high sensitivity to displacements, making the system able to react to tiny deformations (down to tens of microns) on a time scale which spans from milliseconds to hours. The MIMO feature makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped on the azimuth-range directions with notable resolution in both dimensions and with an outstanding repetition rate. The back-scattered energy, which is distributed in 3D space, is projected onto a 2D plane, where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performance processing unit allows the observed scene to be sensed with remarkable refresh periods (down to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing for easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user of the system with high-frequency three-dimensional motion/vibration estimation of each point of the reconstructed image. At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.
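
The core repeat-pass interferometric step can be illustrated as follows: the phase change of a pixel between two acquisitions is converted into line-of-sight displacement. The wavelength value and sign convention below are assumptions for illustration, not the parameters of the described sensor.

```python
import numpy as np

def los_displacement(phase_t0, phase_t1, wavelength_m):
    """Displacement (m) along the line of sight; the sign convention depends on the sensor."""
    dphi = np.angle(np.exp(1j * (phase_t1 - phase_t0)))   # wrap phase difference to [-pi, pi]
    return wavelength_m * dphi / (4 * np.pi)

# Example: an assumed 1.8 cm wavelength radar and a 0.5 rad phase change -> ~0.7 mm displacement
print(los_displacement(0.0, 0.5, 0.018))
```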

Keywords: interferometry, MIMO RADAR, SAR, tomography

Procedia PDF Downloads 167
243 Evaluation of Machine Learning Algorithms and Ensemble Methods for Prediction of Students’ Graduation

Authors: Soha A. Bahanshal, Vaibhav Verdhan, Bayong Kim

Abstract:

Graduation rates at six-year colleges are becoming an increasingly essential indicator for incoming students and for university rankings. Predicting student graduation is extremely beneficial to schools and has huge potential for targeted intervention. It is important for educational institutions since it enables the development of strategic plans that will assist or improve students' performance in achieving their degrees on time (GOT). Machine learning techniques offer a first step and a helping hand in extracting useful information from these data and gaining insight into the prediction of students' progress and performance. Data analysis and visualization techniques are applied to understand and interpret the data. The data used for the analysis contain science majors who graduated within six years as of the academic year 2017-2018. This analysis can be used to predict the graduation of students in the next academic year. Different predictive models, such as logistic regression, decision trees, support vector machines, Random Forest, Naïve Bayes, and KNeighborsClassifier, are applied to predict whether a student will graduate. These classifiers were evaluated with 5-fold cross-validation, and their performance was compared based on accuracy. The results indicated that the ensemble classifier achieves the best accuracy, about 91.12%. This GOT prediction model would hopefully be useful to university administration and academics in developing measures for assisting and boosting students' academic performance and ensuring they graduate on time.
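
A hedged sketch of one way such an ensemble could be assembled and evaluated with 5-fold cross-validation in scikit-learn; the stand-in dataset and the choice of base estimators are assumptions, not the authors' exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=400, n_features=10, random_state=0)  # stand-in student data

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("nb", GaussianNB())],
    voting="soft",   # average predicted probabilities across base classifiers
)
print(cross_val_score(ensemble, X, y, cv=5, scoring="accuracy").mean())
```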

Keywords: prediction, decision trees, machine learning, support vector machine, ensemble model, student graduation, GOT graduate on time

Procedia PDF Downloads 51
242 Medical Imaging Fusion: A Teaching-Learning Simulation Environment

Authors: Cristina Maria Ribeiro Martins Pereira Caridade, Ana Rita Ferreira Morais

Abstract:

The use of computational tools has become essential in the context of interactive learning, especially in engineering education. In the medical industry, teaching medical image processing techniques is a crucial part of training biomedical engineers, as it has integrated applications with healthcare facilities and hospitals. The aim of this article is to present a teaching-learning simulation tool, developed in MATLAB using a graphical user interface, for medical image fusion that explores different image fusion methodologies and processes in combination with image pre-processing techniques. The application applies different algorithms and medical fusion techniques in real time, allowing the user to view original and fused images, compare processed and original images, adjust parameters, and save images. The proposed tool offers an innovative teaching and learning environment: a dynamic and motivating simulation for biomedical engineering students to acquire knowledge about medical image fusion techniques and the skills necessary for the training of biomedical engineers. In conclusion, the developed simulation tool provides real-time visualization of the original and fused images and the possibility to test, evaluate, and advance the student's knowledge about the fusion of medical images. It also facilitates the exploration of medical imaging applications, specifically image fusion, which is critical in the medical industry. Teachers and students can make adjustments and/or create new functions, making the simulation environment adaptable to new techniques and methodologies.

Keywords: image fusion, image processing, teaching-learning simulation tool, biomedical engineering education

Procedia PDF Downloads 90
241 A Decadal Flood Assessment Using Time-Series Satellite Data in Cambodia

Authors: Nguyen-Thanh Son

Abstract:

Floods are among the most frequent and costliest natural hazards. Flood disasters especially affect poor people in rural areas, who are heavily dependent on agriculture and have lower incomes. Cambodia is identified as one of the most climate-vulnerable countries in the world, ranked 13th out of the 181 countries most affected by the impacts of climate change. Flood monitoring is thus a strategic priority at national and regional levels because policymakers need reliable spatial and temporal information on flood-prone areas to form successful monitoring programs to reduce possible impacts on the country's economy and people's livelihoods. This study aims to develop methods for flood mapping and assessment from MODIS data in Cambodia. We processed the data for the period from 2000 to 2017, following three main steps: (1) data pre-processing to construct smooth time-series vegetation and water surface indices, (2) delineation of flood-prone areas, and (3) accuracy assessment. The results of flood mapping were verified with ground reference data, indicating an overall accuracy of 88.7% and a Kappa coefficient of 0.77. These results were reaffirmed by close agreement between the flood-mapped area and the ground reference data, with a coefficient of determination (R²) of 0.94. The seasonally flooded areas observed for 2010, 2015, and 2016 were remarkably smaller than in other years, mainly attributed to the El Niño weather phenomenon exacerbated by the impacts of climate change. Although several sources potentially lowered the mapping accuracy of flood-prone areas, including image cloud contamination, mixed-pixel issues, and the resolution mismatch between the mapping results and the ground reference data, our methods yielded satisfactory results for delineating the spatiotemporal evolution of floods. The results, in the form of quantitative information on spatiotemporal flood distributions, could be beneficial to policymakers in evaluating their management strategies for mitigating the negative effects of floods on agriculture and people's livelihoods in the country.
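
The accuracy-assessment step (overall accuracy and the Kappa coefficient from an error matrix) can be sketched as below; the confusion matrix shown is a toy example, not the study's validation data.

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix (rows: mapped, cols: reference)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
    return po, (po - pe) / (1 - pe)

# Toy 2x2 error matrix (flood / non-flood) for illustration only
print(overall_accuracy_and_kappa([[80, 10], [12, 98]]))    # -> (0.89, ~0.78)
```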

Keywords: MODIS, flood, mapping, Cambodia

Procedia PDF Downloads 103
240 Quantum Statistical Machine Learning and Quantum Time Series

Authors: Omar Alzeley, Sergey Utev

Abstract:

Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization, and this optimization is central to learning theory. Time series analysis is one approach to complex systems, in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool for detecting deterministic chaos, other approaches are emerging. The quantum probabilistic technique is used to motivate the construction of our QTS model, which resembles the quantum dynamic model that was applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyse the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo have been used to support our investigations. The proposed model has been examined using real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of the application of QTS in the field of quantum chaos was to find a model that explains chaotic behaviour. This model may reveal another insight into quantum chaos.

Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series

Procedia PDF Downloads 437
239 The Use of Complementary and Alternative Medicine for Pain Relief in the Elderly: An Investigational Analysis of Seniors Residing in an Independent/Assisted Seniors’ Living Facility

Authors: Carol Cameletti

Abstract:

The goal of this study was to perform a pilot survey to assess pain frequency and intensity in an elderly population and to assess treatment options for chronic pain that include complementary and alternative medicines (CAM). Ten participants were recruited from an independent and supportive living housing facility in Northern Ontario and asked to complete two questionnaires: 1) a self-assessment of pain, and 2) the use of CAM for pain. Results from our study show that 80% of the participants experienced pains other than regular everyday pains such as minor headaches, sprains, or toothaches. Participants stated that, on average, the highest level of pain they had experienced within the past 24 hours scored 6.5 (0 = no pain, 10 = worst pain imaginable), and that this pain moderately interfered with their daily activities. Unfortunately, participants stated that they were only able to attain minimal levels of pain relief using treatments or medications, leading some of them to seek alternative therapies or self-help practices. The most commonly used CAMs were vitamins/minerals, herbs and supplements, and self-help practices such as meditation, prayer, visualization, and relaxation techniques. Although some of the participants stated that they had received complementary treatments directly from their physician, four of the nine participants said that they had not disclosed CAM use to their physician, indicating a need to open the lines of communication between healthcare providers and patients with regard to CAM use. It is our hope that the data generated from this study will serve as the platform for a pain management clinic that is client-centered, consumer-driven, and truly integrative, tailored to meet the unique needs of older adults in Greater Sudbury, Ontario.

Keywords: alternative, complementary, elderly, medicine

Procedia PDF Downloads 156
238 Exploring Influence Range of Tainan City Using Electronic Toll Collection Big Data

Authors: Chen Chou, Feng-Tyan Lin

Abstract:

Big Data has attracted a lot of attention in many fields for analyzing research issues based on large amounts of data. Electronic Toll Collection (ETC) is one of the Intelligent Transportation System (ITS) applications in Taiwan, used to record the starting point, end point, distance, and travel time of vehicles on the national freeway. This study, taking advantage of ETC big data combined with urban planning theory, attempts to explore various phenomena of inter-city transportation activities. ETC data, as one of the government's open datasets, are numerous, complete, and quickly updated. One may recall that living areas have been delimited by location, population, area, and subjective consciousness. However, these factors cannot appropriately reflect people's movement paths in daily life. In this study, the concept of "Living Area" is replaced by "Influence Range" to show the dynamics and variation with time and with the purposes of activities. This study uses data mining with Python and Excel, and visualizes the number of trips with GIS, to explore the influence range of Tainan City and the purpose of trips, and to discuss how living areas are currently delimited. It establishes a dialogue between the concepts of "Central Place Theory" and "Living Area", presents a new point of view, and integrates the application of big data, urban planning, and transportation. The findings will be valuable for resource allocation and land apportionment in spatial planning.
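
A minimal sketch (with an assumed schema) of the kind of trip aggregation that precedes the GIS visualization: counting ETC trips between origin-destination pairs by hour of day with pandas.

```python
import pandas as pd

# Toy ETC records; the column names and gantry labels are illustrative assumptions.
trips = pd.DataFrame({
    "origin": ["Tainan", "Tainan", "Kaohsiung", "Chiayi"],
    "destination": ["Kaohsiung", "Chiayi", "Tainan", "Tainan"],
    "start_time": pd.to_datetime(["2016-05-01 07:10", "2016-05-01 08:30",
                                  "2016-05-01 18:05", "2016-05-01 09:40"]),
})

trips["hour"] = trips["start_time"].dt.hour
od_counts = trips.groupby(["origin", "destination", "hour"]).size().reset_index(name="trips")
print(od_counts)   # a table ready to be joined to a GIS layer for mapping the influence range
```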

Keywords: Big Data, ITS, influence range, living area, central place theory, visualization

Procedia PDF Downloads 246
237 The Superiority of 18F-Sodium Fluoride PET/CT for Detecting Bone Metastases in Comparison with Other Bone Diagnostic Imaging Modalities

Authors: Mojtaba Mirmontazemi, Habibollah Dadgar

Abstract:

Bone is the most common metastasis site in some advanced malignancies, such as prostate and breast cancer, and bone metastasis generally indicates a poorer prognosis in these patients. Different radiological and molecular imaging modalities are used for detecting bone lesions. Molecular imaging, including computed tomography, magnetic resonance imaging, planar bone scintigraphy, single-photon emission tomography, and positron emission tomography, as a noninvasive visualization of biological processes, has the potential for exact examination, characterization, risk stratification, and comprehension of human diseases. It can also directly visualize targets, clearly specify cellular pathways, and provide precision medicine for molecular targeted therapies. These advantages help implement personalized treatment for each patient. Currently, NaF PET/CT has largely replaced standard bone scintigraphy for the detection of bone metastases. On the one hand, 68Ga-PSMA PET/CT has gained high attention for accurate staging of primary prostate cancer and restaging after biochemical recurrence. On the other hand, FDG PET/CT is not commonly used for osseous metastases of prostate and breast cancer, and its usage is limited to staging patients with aggressive primary tumors or localizing the site of disease. In this article, we examine current studies of FDG, NaF, and PSMA PET/CT imaging regarding their diagnostic utility for bone metastases and for assessing response to treatment in patients with breast and prostate cancer.

Keywords: skeletal metastases, fluorodeoxyglucose, sodium fluoride, molecular imaging, precision medicine, prostate cancer (68Ga-PSMA-11)

Procedia PDF Downloads 87
236 Effectiveness and Efficiency of Unified Philippines Accident Reporting and Database System in Optimizing Road Crash Data Usage with Various Stakeholders

Authors: Farhad Arian Far, Anjanette Q. Eleazar, Francis Aldrine A. Uy, Mary Joyce Anne V. Uy

Abstract:

The Unified Philippine Accident Reporting and Database System (UPARDS) is a newly developed system by Dr. Francis Aldrine Uy of the Mapua Institute of Technology. Its main purpose is to provide an advanced road accident investigation tool and a record-keeping and analysis system for stakeholders such as the Philippine National Police (PNP), the Metro Manila Development Authority (MMDA), the Department of Public Works and Highways (DPWH), the Department of Health (DOH), and insurance companies. The system is composed of two components: a mobile application for road accident investigators that takes advantage of available technology to advance data gathering, and a web application that integrates all accident data for the use of all stakeholders. The researchers, with the cooperation of the PNP's Vehicle Traffic Investigation Sector of the City of Manila, conducted field testing of the application on fifteen (15) accident cases. Simultaneously, the researchers also distributed surveys to the PNP, Manila Doctors Hospital, and Charter Ping An Insurance Company to gather their insights regarding the web application. The survey was designed around an information systems theory called the Technology Acceptance Model. The results of the surveys revealed that the respondents were greatly satisfied with the visualization and functions of the applications, which proved to be effective and far more efficient in comparison with the conventional pen-and-paper method. In conclusion, the pilot study was able to address the need for improvement of the current system.

Keywords: accident, database, investigation, mobile application, pilot testing

Procedia PDF Downloads 411
235 3D Geomechanical Model the Best Solution of the 21st Century for Perforation's Problems

Authors: Luis Guiliana, Andrea Osorio

Abstract:

The lack of comprehension of reservoir geomechanics conditions may cause operational problems that cost the industry billions of dollars per year. Drilling operations at the Ceuta Field, Area 2 South, Maracaibo Lake, have been very expensive due to problems associated with drilling. The principal objective of this investigation is to develop a 3D geomechanical model of this area in order to optimize future drilling in the field. For this purpose, a 1D geomechanical model was built first, following the MEM (Mechanical Earth Model) workflow, which consists of the following steps: 1) data auditing, 2) analysis of drilling events and structural model, 3) mechanical stratigraphy, 4) overburden stress, 5) pore pressure, 6) rock mechanical properties, 7) horizontal stresses, 8) direction of the horizontal stresses, 9) wellbore stability. The 3D MEM was developed from the geostatistical model of the Eocene C-SUP VLG-3676 reservoir and the 1D MEM; with these data, the geomechanical grid was populated. The analysis of the results showed that the problems in the examined wells were mainly due to wellbore stability issues. It was determined that the stress regime changes as the stratigraphic column deepens: it is normal to strike-slip at the Middle Miocene and Lower Miocene, and strike-slip to reverse at the Eocene. Accordingly, at the level of the Eocene, the most advantageous direction to drill is parallel to the maximum horizontal stress (157º). The 3D MEM provides a three-dimensional visualization of the variations in rock mechanical properties, stresses, and operational windows (mud weight and pressures). This will facilitate the optimization of future drilling in the area, including zones without any geomechanical information.
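
As an illustration of step 4 of the MEM workflow, overburden (vertical) stress is commonly obtained by integrating a bulk-density log over depth; the density values below are assumed, not data from the Ceuta Field.

```python
import numpy as np

def overburden_stress_mpa(depth_m, density_kg_m3, g=9.81):
    """Cumulative vertical stress (MPa) from a bulk-density log sampled at depth_m (trapezoidal sum)."""
    segment = np.diff(depth_m) * 0.5 * (density_kg_m3[1:] + density_kg_m3[:-1]) * g
    return np.concatenate(([0.0], np.cumsum(segment))) / 1e6

depth = np.array([0, 500, 1000, 2000])        # m
rho = np.array([2000, 2200, 2350, 2500])      # kg/m3, illustrative log
print(overburden_stress_mpa(depth, rho))      # -> approx. [0, 10.3, 21.5, 45.3] MPa
```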

Keywords: geomechanics, MEM, drilling, stress

Procedia PDF Downloads 251
234 Revealing Single Crystal Quality by Insight Diffraction Imaging Technique

Authors: Thu Nhi Tran Caliste

Abstract:

X-ray Bragg diffraction imaging (“topography”) entered into practical use when Lang designed an “easy” technical setup to characterise the defects and distortions in the high-perfection crystals produced for the microelectronics industry. The use of this technique extended to all kinds of high-quality crystals and deposited layers, and a series of publications explained, starting from the dynamical theory of diffraction, the contrast of the images of the defects. A quantitative version of “monochromatic topography”, known as “Rocking Curve Imaging” (RCI), was implemented by using synchrotron light and taking advantage of the dramatic improvement of 2D detectors and computerised image processing. The raw data are constituted by a number (~300) of images recorded along the diffraction (“rocking”) curve. If the quality of the crystal is such that a one-to-one relation between a pixel of the detector and a voxel within the crystal can be established (this approximation is very well fulfilled if the local mosaic spread of the voxel is < 1 mradian), software we developed provides, from the rocking curve recorded on each pixel of the detector, not only the “voxel” integrated intensity (the only data provided by the previous techniques) but also its “mosaic spread” (FWHM) and peak position. We will show, based on many examples, that these new data, never recorded before, open the field to a highly enhanced characterization of crystals and deposited layers. These examples include the characterization of dislocations and twins occurring during silicon growth, various growth features in Al2O3, GaN and CdTe (where the diffraction displays the Borrmann anomalous absorption, which leads to a new type of image), and the characterisation of defects within deposited layers, or their effect on the substrate. We could also observe (due to the very high sensitivity of the setup installed on BM05, which allows revealing these faint effects) that, when dealing with very perfect crystals, the Kato interference fringes predicted by dynamical theory are also associated with very small modifications of the local FWHM and peak position (of the order of a µradian). This rather unexpected (at least for us) result appears to be in keeping with preliminary dynamical theory calculations.
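
A minimal sketch of the per-pixel reduction described above: each pixel's rocking curve (intensity versus rocking angle) is condensed into an integrated intensity, a width, and a peak position. The array shapes and the moment-based Gaussian-equivalent FWHM are assumptions for illustration, not the authors' software.

```python
import numpy as np

def rocking_curve_maps(stack, angles):
    """stack: (n_angles, ny, nx) intensities; angles: (n_angles,) rocking angles with uniform step."""
    step = angles[1] - angles[0]
    total = stack.sum(axis=0) + 1e-12                        # guard against empty pixels
    integrated = total * step                                # per-pixel integrated intensity
    peak_pos = angles[np.argmax(stack, axis=0)]              # angle of the curve maximum
    mean = np.tensordot(angles, stack, axes=(0, 0)) / total
    var = np.tensordot(angles**2, stack, axes=(0, 0)) / total - mean**2
    fwhm = 2.355 * np.sqrt(np.clip(var, 0.0, None))          # Gaussian-equivalent width ("mosaic spread")
    return integrated, fwhm, peak_pos
```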

Keywords: rocking curve imaging, X-ray diffraction, defect, distortion

Procedia PDF Downloads 103
233 Embedded Visual Perception for Autonomous Agricultural Machines Using Lightweight Convolutional Neural Networks

Authors: René A. Sørensen, Søren Skovsen, Peter Christiansen, Henrik Karstoft

Abstract:

Autonomous agricultural machines act in stochastic surroundings and, therefore, must be able to perceive the surroundings in real time. This perception can be achieved using image sensors combined with advanced machine learning, in particular deep learning. Deep convolutional neural networks excel in labeling and perceiving color images, and since the cost of high-quality RGB cameras is low, the hardware cost of good perception depends heavily on memory and computation power. This paper investigates the possibility of designing lightweight convolutional neural networks for semantic segmentation (pixel-wise classification) with reduced hardware requirements, to allow for embedded usage in autonomous agricultural machines. Using compression techniques, a lightweight convolutional neural network is designed to perform real-time semantic segmentation on an embedded platform. The network is trained on two large datasets, ImageNet and Pascal Context, to recognize up to 400 individual classes. The 400 classes are remapped into agricultural superclasses (e.g., human, animal, sky, road, field, shelterbelt and obstacle), and the ability to provide accurate real-time perception of agricultural surroundings is studied. The network is applied to the case of autonomous grass mowing using the NVIDIA Tegra X1 embedded platform. Feeding case-specific images to the network results in a fully segmented map of the superclasses in the image. As the network is still being designed and optimized, only a qualitative analysis of the method is complete at the abstract submission deadline. Following this deadline, the finalized design will be quantitatively evaluated on 20 annotated grass mowing images. Lightweight convolutional neural networks for semantic segmentation can be implemented on an embedded platform and show competitive performance with regard to accuracy and speed. It is feasible to provide cost-efficient perceptive capabilities related to semantic segmentation for autonomous agricultural machines.
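
The remapping of the 400 fine-grained classes into agricultural superclasses could look like the following sketch; the class-id assignments are assumed for illustration and are not the network's actual label set.

```python
import numpy as np

SUPERCLASS = {0: "sky", 1: "road", 2: "field", 3: "human", 4: "animal", 5: "obstacle"}
FINE_TO_SUPER = {11: 3, 12: 3, 25: 4, 96: 1, 124: 2, 398: 5}   # fine id -> super id (assumed mapping)

def remap(segmentation, n_fine=400, default=5):
    """Map a per-pixel fine-class map to superclass ids; unmapped ids fall back to 'obstacle'."""
    lut = np.full(n_fine, default, dtype=np.uint8)
    for fine, sup in FINE_TO_SUPER.items():
        lut[fine] = sup
    return lut[segmentation]

print(remap(np.array([[11, 96], [124, 7]])))   # -> [[3 1] [2 5]]
```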

Keywords: autonomous agricultural machines, deep learning, safety, visual perception

Procedia PDF Downloads 365
232 Positive Effect of Manipulated Virtual Kinematic Intervention in Individuals with Traumatic Stiff Shoulder: Pilot Study

Authors: Isabella Schwartz, Ori Safran, Naama Karniel, Michal Abel, Adina Berko, Martin Seyres, Tamir Tsoar, Sigal Portnoy

Abstract:

Virtual reality makes it possible to manipulate the patient's perception, thereby providing a motivational addition to real-time biofeedback exercises. We aimed to test the effect of a manipulated virtual kinematic intervention on measures of active and passive range of motion (ROM), pain, and disability level in individuals with traumatic stiff shoulder. In a double-blinded study, patients with stiff shoulder following a proximal humerus fracture and non-operative treatment were randomly divided into a non-manipulated feedback group (NM-group; N=6) and a manipulated feedback group (M-group; N=7). Shoulder ROM, pain, and Disabilities of the Arm, Shoulder and Hand (DASH) scores were tested at baseline and after the 6 sessions, during which the subjects performed shoulder flexion and abduction in front of a graphic visualization of the shoulder angle. The biofeedback provided to the NM-group was the actual shoulder angle, while the feedback provided to the M-group was manipulated so that 10° were constantly subtracted from the actual angle detected by the motion capture system. The M-group showed greater improvement in active flexion ROM, with a median and interquartile range of 197.1 (140.5-425.0) compared to 142.5 (139.1-151.3) for the NM-group (p=.046). The M-group also showed greater improvement in the DASH scores, with a median and interquartile range of 67.7 (52.8-86.2) compared to 89.7 (83.8-98.3) for the NM-group (p=.022). Manipulated intervention is beneficial in individuals with traumatic stiff shoulder and should be further tested in other populations with orthopedic injuries.

Keywords: virtual reality, biofeedback, shoulder pain, range of motion

Procedia PDF Downloads 98
231 Satellite Based Assessment of Urban Heat Island Effects on Major Cities of Pakistan

Authors: Saad Bin Ismail, Muhammad Ateeq Qureshi, Rao Muhammad Zahid Khalil

Abstract:

In the last few decades, urbanization worldwide has expanded manifold, which is reflected in the growth of urban infrastructure and transportation. The Urban Heat Island (UHI) can induce deterioration of the living environment, disabilities, and rises in energy usage. In this study, the presence of the Surface Urban Heat Island (SUHI) effect in major cities of Pakistan, including Islamabad, Rawalpindi, Lahore, Karachi, Quetta, and Peshawar, has been investigated. Landsat and SPOT satellite images were acquired for the assessment of urban sprawl, and the MODIS Land Surface Temperature product MOD11A2 was acquired between 1000-1200 hours (local time) for the assessment of the urban heat island. The urban sprawl results showed that between 2000 and 2016 the urban extent of Islamabad and Rawalpindi increased from 240 km² to 624 km² (approximately 24 km² per year), Lahore by 29 km² (approximately 1.6 km² per year), Karachi by 261 km² (approximately 16 km² per year), Peshawar by 63 km² (approximately 4 km² per year), and Quetta by 76 km² (approximately 5 km² per year). The average SUHI magnitude from 2000 to 2016 is observed at a scale of 0.63 °C for Islamabad and Rawalpindi, 1.25 °C for Lahore, 1.16 °C for Karachi, 0.89 °C for Quetta, and 1.08 °C for Peshawar. The pixel-based maximum SUHI intensity reaches up to about 11.40 °C for Islamabad and Rawalpindi, 15.66 °C for Lahore, 11.20 °C for Karachi, 14.61 °C for Quetta, and 15.22 °C for Peshawar, from a baseline of zero degrees Centigrade (°C). The overall trend of SUHI in planned cities (e.g., Islamabad) is not found to increase significantly. Spatial and temporal patterns of SUHI for the selected cities reveal heterogeneity and a unique pattern for each city. It is well recognized that SUHI intensity is modulated by land use/land cover patterns (due to their different surface properties and cooling rates), meteorological conditions, and anthropogenic activities. The study concluded that the selected cities (Islamabad, Rawalpindi, Lahore, Karachi, Quetta, and Peshawar) are examples where dense urban pockets are observed to be about 15 °C warmer than nearby rural areas.
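
A minimal sketch (standard practice, assumed rather than taken from the paper) of how the SUHI magnitude can be computed from MOD11A2 data: digital numbers are scaled by 0.02 to Kelvin, fill values are masked, and the urban and rural means are differenced.

```python
import numpy as np

def suhi_magnitude(lst_dn, urban_mask, rural_mask, fill_value=0):
    """SUHI (deg C) as the difference between mean urban and mean rural land surface temperature."""
    lst_k = np.where(lst_dn == fill_value, np.nan, lst_dn * 0.02)   # MOD11A2 scale factor to Kelvin
    lst_c = lst_k - 273.15
    return np.nanmean(lst_c[urban_mask]) - np.nanmean(lst_c[rural_mask])
```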

Keywords: urban heat island, surface urban heat island, urbanization, anthropogenic source

Procedia PDF Downloads 297
230 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE

Authors: Parimalah Velo, Ahmad Zakaria

Abstract:

Introduction: Monte Carlo simulations of preclinical imaging systems create opportunities for new research, ranging from hardware design to the discovery of new imaging applications. A simulation system that can accurately model an imaging modality provides a platform for imaging developments that might be inconvenient in physical experimental systems due to expense, unnecessary radiation exposure, and technological difficulties. The aim of the present study is to validate a Monte Carlo simulation of thyroid phantom imaging using Geant4-GATE for the Siemens e.cam single-head gamma camera. After validating the gamma camera simulation model by comparing physical characteristics such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging was carried out. Methods: A thyroid phantom was defined geometrically, comprising two lobes of 80 mm in diameter, one hot spot, and three cold spots, accurately resembling the actual dimensions of the thyroid phantom. A planar image of 500k counts with a 128x128 matrix size was acquired using the simulation model and in the actual experimental setup. Upon image acquisition, quantitative image analysis was performed by investigating the total number of counts in the image, the image contrast, the radioactivity distribution in the image, and the dimension of the hot spot. The algorithm for each quantification is described in detail. The difference between estimated and actual values for both the simulation and the experimental setup was analyzed for the radioactivity distribution and the dimension of the hot spot. Results: The results show that the difference between the contrast level of the simulated image and the experimental image is within 2%. The difference in the total count between the simulation and the actual study is 0.4%. The activity estimation results show that the relative difference between estimated and actual activity for the experimental and simulation studies is 4.62% and 3.03%, respectively. The deviation in the estimated diameter of the hot spot is similar for both the simulation and the experimental study, at 0.5 pixel. In conclusion, the comparisons show good agreement between the simulation and experimental data.

Keywords: gamma camera, Geant4 application of tomographic emission (GATE), Monte Carlo, thyroid imaging

Procedia PDF Downloads 250
229 Research Progress of the Relationship between Urban Rail Transit and Residents' Travel Behavior during 1999-2019: A Scientific Knowledge Mapping Based on Citespace and Vosviewer

Authors: Zheng Yi

Abstract:

Among the attempts made worldwide to foster urban and transport sustainability, transit-oriented development is certainly one of the most successful. Residents' travel behavior is a central concern in research on the impacts of transit-oriented development. This study takes 620 English-language journal papers from the Web of Science Core Collection as its objects and tries to map out the scientific knowledge of the field through co-citation analysis, co-word analysis, citation network analysis, and visualization techniques. The study teases out the research hotspots and the evolution of the relationship between urban rail transit and residents' travel behavior from 1999 to 2019. Based on the results of the time-zone view and burst-detection analyses, the paper discusses the trend of the next stage of international study. The results show that in the past 20 years, research has focused on these keywords: land use, behavior, model, built environment, impact, travel behavior, walking, physical activity, smart card, big data, simulation, perception. According to the different research contents, the key literature is further divided into these topics: attributes of the built environment, land use, transportation networks, and transportation policies. The results of this paper can help readers understand the related research and achievements systematically and provide a reference for identifying the main challenges that future research needs to address.

Keywords: urban rail transit, travel behavior, knowledge map, evolution of researches

Procedia PDF Downloads 94
228 The Weavability of Waste Plants and Their Application in Fashion and Textile Design

Authors: Jichi Wu

Abstract:

The dwindling of resources requires more sustainable design. New technology can bring new materials and processing techniques to the fashion industry and push it toward a more sustainable future. This paper therefore explores cutting-edge research on the life cycle of closed-loop products and aims to find innovative ways to recycle and upcycle. For this goal, the author investigated how low-utilization plants and leftover fiber could be turned into ecological textiles for fashion. By examining the physical and chemical properties (cellulose content and fiber form) of ecological textiles to explore their wearability, this paper analyzes the prospects of bio-fabrics (weavable plants) in body-oriented fashion design and their potential in sustainable fashion and textile design. By extracting cellulose from nine different types or sections of plants, the author intends to find an appropriate method (such as ion solution extraction) that increases the weavability of plants as much as possible, so that raw materials can be more effectively turned into fabrics. All first-hand experimental data were carefully collected and then analyzed under the guidance of related theories, and the results of the analysis were recorded in detail and presented in an understandable way. Various research methods were adopted throughout this project, including field trips and experiments to make comparisons and recycle materials. Cross-discipline cooperation was also conducted for related knowledge and theories. From this, experimental data will be collected, analyzed, and interpreted into descriptive and visualization results. Based on the above conclusions, it is possible to apply weavable plant fibres to develop new textiles and fashion.

Keywords: wearable bio-textile, sustainability, economy, ecology, technology, weavability, fashion design

Procedia PDF Downloads 116
227 Development of a Telemedical Network Supporting an Automated Flow Cytometric Analysis for the Clinical Follow-up of Leukaemia

Authors: Claude Takenga, Rolf-Dietrich Berndt, Erling Si, Markus Diem, Guohui Qiao, Melanie Gau, Michael Brandstoetter, Martin Kampel, Michael Dworzak

Abstract:

In patients with acute lymphoblastic leukaemia (ALL), treatment response is increasingly evaluated with minimal residual disease (MRD) analyses. Flow cytometry (FCM) is a fast and sensitive method to detect MRD. However, the interpretation of these multi-parametric data requires intensive operator training and experience. This paper presents pipeline software as a ready-to-use FCM-based MRD assessment tool for daily clinical practice for patients with ALL. The new tool increases accuracy in the assessment of FCM-MRD in samples which are difficult to analyse by conventional operator-based gating, since computer-aided analysis potentially has a superior resolution due to utilization of the whole multi-parametric FCM data space at once instead of step-wise, two-dimensional plot-based visualization. The system, developed as a telemedical network, reduces the workload and lab costs, as well as the staff time needed for training, continuous quality control, and operator-based data interpretation. It allows dissemination of automated FCM-MRD analysis to medical centres which have no established expertise, for the benefit of an even larger community of diseased children worldwide. We established a telemedical network system for the analysis, clinical follow-up, and treatment monitoring of leukaemia. The system is scalable and adapted to link several centres and laboratories worldwide.

Keywords: data security, flow cytometry, leukaemia, telematics platform, telemedicine

Procedia PDF Downloads 948
226 Flood Hazard Assessment and Land Cover Dynamics of the Orai Khola Watershed, Bardiya, Nepal

Authors: Loonibha Manandhar, Rajendra Bhandari, Kumud Raj Kafle

Abstract:

Nepal’s Terai region is a part of the Ganges river basin, which is one of the most disaster-prone areas of the world, with recurrent monsoon flooding causing millions in damage and the death and displacement of hundreds of people and households every year. The vulnerability of human settlements to natural disasters such as floods is increasing, and mapping changes in land use practices and hydro-geological parameters is essential in developing resilient communities and strong disaster management policies. The objective of this study was to develop a flood hazard zonation map of the Orai Khola watershed and map the decadal land use/land cover dynamics of the watershed. The watershed area was delineated using the SRTM DEM, and LANDSAT images were classified into five land use classes (forest, grassland, sediment and bare land, settlement area and cropland, and water body) using pixel-based semi-automated supervised maximum likelihood classification. Decadal changes in each class were then quantified using spatial modelling. Flood hazard mapping was performed by assigning weights to the factors slope, rainfall distribution, distance from the river, and land use/land cover on the basis of their estimated influence in causing flood hazard, and performing weighted overlay analysis to identify areas that are highly vulnerable. The forest and grassland coverage increased by 11.53 km² (3.8%) and 1.43 km² (0.47%) from 1996 to 2016. The sediment and bare land areas decreased by 12.45 km² (4.12%) from 1996 to 2016, whereas settlement and cropland areas showed a consistent increase of 14.22 km² (4.7%). Waterbody coverage also increased by 0.3 km² (0.09%) from 1996 to 2016. Of the total watershed area, 1.27% (3.65 km²) was categorized as a very low hazard zone, 20.94% (60.31 km²) as a low hazard zone, 37.59% (108.3 km²) as a moderate hazard zone, and 29.25% (84.27 km²) as a high hazard zone, while 31 villages comprising 10.95% (31.55 km²) were categorized as a very high hazard zone.
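
A hedged sketch of the weighted overlay step: each factor raster is first reclassified to a common hazard scale and then combined with weights; the 1-5 scale and the weight values below are illustrative, not those used in the study.

```python
import numpy as np

def weighted_overlay(layers, weights):
    """layers: dict name -> reclassified raster (1-5); weights: dict name -> weight, summing to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-6
    return sum(weights[name] * layers[name].astype(float) for name in layers)

shape = (4, 4)   # toy rasters for illustration
layers = {"slope": np.random.randint(1, 6, shape),
          "rainfall": np.random.randint(1, 6, shape),
          "river_distance": np.random.randint(1, 6, shape),
          "landcover": np.random.randint(1, 6, shape)}
weights = {"slope": 0.3, "rainfall": 0.25, "river_distance": 0.3, "landcover": 0.15}
hazard = weighted_overlay(layers, weights)   # higher score = higher flood hazard
print(hazard)
```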

Keywords: flood hazard, land use/land cover, Orai river, supervised maximum likelihood classification, weighted overlay analysis

Procedia PDF Downloads 320
225 Applied Spatial Mapping and Monitoring of Illegal Landfills for Deprived Urban Areas in Romania

Authors: Șercăianu Mihai, Aldea Mihaela, Iacoboaea Cristina, Luca Oana, Nenciu Ioana

Abstract:

The rise of unauthorized illegal waste dumps and their mitigation are a significant global issue within waste management ecosystems, impacting disadvantaged communities. Globally, including in Romania, many individuals live in houses without legal recognition, lacking ownership or construction permits, in areas known as "informal settlements." An increasing number of regions and cities in Romania are struggling to manage their illegal waste dumps, especially in the context of increasing poverty and the lack of regulation related to informal settlements. One such informal settlement is located at the end of Bistra Street in Câlnic, within the Reșița Municipality of Caras Severin County. The article presents a case study that focuses on employing remote sensing techniques and spatial data to monitor and map illegal waste practices, with subsequent integration into a geographic information system tailored for the Reșița community. In addition, the paper outlines the steps involved in devising strategies aimed at enhancing waste management practices in disadvantaged areas, aligning with the shift toward a circular economy. The results presented in the paper include a spatial mapping and visualization methodology, calibrated with in-situ data collection, applicable to identifying illegal landfills. The emergence and neutralization of illegal dumps pose a challenge in the field of waste management, and approaches that prove effective where conventional solutions have failed need to be replicated and adopted more widely.

Keywords: informal settlements, GIS, waste dumps, waste management, monitoring

Procedia PDF Downloads 33
224 Using Game Engines in Lightning Shielding: The Application of the Rolling Spheres Method on Virtual As-Built Power Substations

Authors: Yuri A. Gruber, Matheus Rosendo, Ulisses G. A. Casemiro, Klaus de Geus, Rafael T. Bee

Abstract:

Lightning strikes can cause severe negative impacts on the electrical sector, causing direct damage to equipment as well as shutdowns, especially when they occur in power substations. In order to mitigate this problem, meticulous planning of the power substation protection system is of vital importance. A critical part of this is the distribution of shielding wires through the substation, which creates an imaginary 3D protection mesh similar to a circus tarpaulin. Equipment enclosed in the volume defined by that 3D mesh is considered protected against lightning strikes. The use of traditional methods of longitudinal cutting analysis based on 2D CAD tools makes the process laborious, and the results obtained may not guarantee satisfactory protection of electrical equipment. This work describes the application of a game engine to the problem of lightning protection of power substations, providing visualization of the 3D protection mesh, the number of protected components, and highlighting of equipment that remains unprotected. In addition, aspects regarding the implementation and the advantages of approaching the problem using Unreal® Engine 4 are described. In order to validate the results, a comparison with traditional 2D methods is applied to the same case study to which the proposed technique has been applied. Finally, a comparative study involving different levels of protection using the technique developed in this work is presented, showing that modern game engines can be a powerful accessory for simulations in several areas of engineering.
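
A simplified two-dimensional sketch of the rolling-sphere check for a single shield wire is given below; a real substation study (and the 3D mesh described above) is considerably more involved, and the geometry here is an illustrative assumption.

```python
import math

def is_protected(point_x, point_y, wire_height, sphere_radius):
    """A rolling sphere of radius R rests on the ground and touches the wire at (0, h).
    A point between the wire and the sphere's ground contact is protected if the
    sphere surface cannot reach it (distance to the sphere centre >= R)."""
    h, r = wire_height, sphere_radius
    if h > r or point_y < 0:
        return False
    x_centre = math.sqrt(2 * r * h - h * h)        # horizontal offset of the sphere centre
    if not (0 <= point_x <= x_centre):
        return False
    return math.hypot(point_x - x_centre, point_y - r) >= r

print(is_protected(2.0, 5.0, wire_height=12.0, sphere_radius=30.0))   # -> True
```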

Keywords: game engine, rolling spheres method, substation protection, UE4, Unreal Engine 4

Procedia PDF Downloads 505
223 Research on Construction of Subject Knowledge Base Based on Literature Knowledge Extraction

Authors: Yumeng Ma, Fang Wang, Jinxia Huang

Abstract:

Researchers put forward higher requirements for the efficient acquisition and utilization of domain knowledge in the big data era. As literature is an effective way for researchers to quickly and accurately understand the research situation in their field, knowledge discovery based on literature has become a new research method. As a tool to organize and manage knowledge in a specific domain, a subject knowledge base can be used to mine and present the knowledge behind the literature to meet users' personalized needs. This study designs the construction route of a subject knowledge base for specific research problems, adopting an information extraction method based on knowledge engineering. Firstly, the subject knowledge model is built through the abstraction of the research elements. Then, under the guidance of the knowledge model, extraction rules for knowledge points are compiled to analyze, extract, and correlate entities, relations, and attributes in the literature. Finally, a database platform based on this structured knowledge is developed that can provide a variety of services such as knowledge retrieval, knowledge browsing, knowledge Q&A, and visual correlation. Taking construction practices in the field of activating blood circulation and removing stasis as an example, this study analyzes how to construct a subject knowledge base based on literature knowledge extraction. As the system functional test shows, this subject knowledge base can realize the expected service scenarios, such as quick knowledge queries, related discovery of knowledge and literature, and knowledge organization. By enabling the subject knowledge base to help researchers locate and acquire deep domain knowledge quickly and accurately, this study provides a transformation mode for knowledge resource construction and personalized precision knowledge services in the data-intensive research environment.

Keywords: knowledge model, literature knowledge extraction, precision knowledge services, subject knowledge base

Procedia PDF Downloads 135