Search results for: X-ray images
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2339


689 Synthesis and Characterization of an Aerogel Based on Graphene Oxide and Polyethylene Glycol

Authors: Javiera Poblete, Fernando Gajardo, Katherina Fernandez

Abstract:

Graphene and its derivatives, such as graphene oxide (GO), are emerging nanoscopic materials with interesting physical and chemical properties. From them, it is possible to develop three-dimensional macrostructures, such as aerogels, which are characterized by low density, high porosity, and large surface area, a promising structure for materials development. The use of GO as a precursor of these structures provides a wide variety of materials, which can be developed by functionalizing its oxygenated groups with specific compounds such as polyethylene glycol (PEG). The synthesis of GO-PEG aerogels through non-covalent interactions has not yet been widely reported, and it is of interest due to its feasible scale-up and economic viability. Thus, this work aims to develop non-covalently functionalized GO-PEG aerogels and characterize them physicochemically. To this end, GO was synthesized by the modified Hummers method and functionalized with PEG by polymer-assisted GO gelation (PEG acting as crosslinker). Gelation was obtained for GO solutions (10 mg/mL) with the incorporation of PEG in different proportions by weight. The hydrogel resulting from the reaction was subsequently lyophilized to obtain the respective aerogel. The material obtained was chemically characterized by Fourier transform infrared spectroscopy (FTIR), Raman spectroscopy, and X-ray diffraction (XRD), and its morphology by scanning electron microscopy (SEM) images, as well as water absorption tests. The results showed the formation of a non-covalent aerogel (FTIR), whose structure was highly porous (SEM), with water absorption values greater than 50% g/g. Thus, a synthesis methodology for GO-PEG aerogels was developed and validated.

Keywords: aerogel, graphene oxide, polyethylene glycol, synthesis

Procedia PDF Downloads 101
688 Characterization of Femur Development in Mice: A Computational Approach

Authors: Moncayo Donoso Miguelangel, Guevara Morales Johana, Kalenia Flores Kalenia, Barrera Avellaneda Luis Alejandro, Garzon Alvarado Diego Alexander

Abstract:

In mammals, long bones are formed by ossification of a cartilage mold during early embryonic development, forming structures called secondary ossification centers (SOCs), a primary ossification center (POC), and growth plates. This last structure is responsible for long bone growth. During femur growth, the morphology of the growth plate and the SOCs may vary across developmental stages. So far, there are no detailed morphological studies of the development process from embryonic to adult stages. In this work, we carried out a morphological characterization of femur development from the embryonic period to adulthood in mice. Embryos at 15, 17, and 19 days and mice at 1, 7, 14, 35, 46, and 52 days of age were used. Samples were analyzed by a computational approach, using 3D images obtained by micro-CT imaging. The results of this study showed that the femur, its growth plates, and its SOCs undergo morphological changes during different stages of development, including changes in shape, position, and thickness. These variations may be related to a response to mechanical loads imposed by the muscles developing around the femur, and to the high activity during early stages necessary to support the high growth rates of the first weeks of development. This study is important to improve our knowledge of the ossification patterns at every stage of bone development and to characterize the morphological changes of structures important in bone growth, such as SOCs and growth plates.

Keywords: development, femur, growth plate, mice

Procedia PDF Downloads 321
687 Enhanced Planar Pattern Tracking for an Outdoor Augmented Reality System

Authors: L. Yu, W. K. Li, S. K. Ong, A. Y. C. Nee

Abstract:

In this paper, a scalable augmented reality framework for handheld devices is presented. The framework is enabled by a server-client data communication structure, in which the search for tracking targets among a database of images is performed on the server side, while pixel-wise 3D tracking is performed on the client side, which, in this case, is a handheld mobile device. Image search on the server side adopts a residual-enhanced image descriptor representation that gives the framework its scalability. The tracking algorithm on the client side is based on a gravity-aligned feature descriptor, which takes advantage of a sensor-equipped mobile device, and an optimized intensity-based image alignment approach that ensures the accuracy of 3D tracking. Automatic content streaming is achieved by using a key-frame selection algorithm, client working-phase monitoring, and standardized rules for content communication between the server and client. A recognition accuracy test performed on a standard dataset shows that the method adopted in the presented framework outperforms the Bag-of-Words (BoW) method used in some previous systems. Experimental tests conducted on a set of video sequences indicated real-time performance of the tracking system, with a frame rate of 15-30 frames per second. The framework is shown to be functional in practical situations with a demonstration application on a campus walk-around.

Keywords: augmented reality framework, server-client model, vision-based tracking, image search

Procedia PDF Downloads 265
686 The Integration of Digital Humanities into the Sociology of Knowledge Approach to Discourse Analysis

Authors: Gertraud Koch, Teresa Stumpf, Alejandra Tijerina García

Abstract:

Discourse analysis research approaches belong to the central research strategies applied throughout the humanities; they focus on the countless forms and ways digital texts and images shape present-day notions of the world. Despite the constantly growing number of relevant digital, multimodal discourse resources, digital humanities (DH) methods are thus far not systematically developed and accessible for discourse analysis approaches. Specifically, the significance of multimodality and the modelling of meaning plurality are yet to be sufficiently addressed. To address this research gap, the D-WISE project aims to develop a prototypical working environment as digital support for the sociology of knowledge approach to discourse analysis, along with new IT-analysis approaches for the use of context-oriented embedding representations. The constant optimization of hermeneutical methodology in the use of (semi-)automated processes, and the corresponding epistemological reflection, play an essential role throughout our research endeavor. Among discourse analyses, the sociology of knowledge approach is characterised by reconstructive, accompanying research into the formation of knowledge systems in social negotiation processes. The approach analyses how dominant understandings of a phenomenon develop, i.e., the way they are expressed and consolidated by various actors in specific arenas of discourse until a specific understanding of the phenomenon and its socially accepted structure are established. This article presents insights and initial findings from D-WISE, a joint research project running since 2021 between the Institute of Anthropological Studies in Culture and History and the Language Technology Group of the Department of Informatics at the University of Hamburg. As an interdisciplinary team, we develop central innovations with regard to the availability of relevant DH applications by building up a uniform working environment, which supports the procedure of the sociology of knowledge approach to discourse analysis within open corpora and heterogeneous, multimodal data sources for researchers in the humanities. We are hereby expanding the existing range of DH methods by developing contextualized embeddings for improved modelling of the plurality of meaning and the integrated processing of multimodal data. The alignment of this methodological and technical innovation is based on the epistemological working methods of grounded theory as a hermeneutic methodology. In order to systematically relate, compare, and reflect on the approaches of structural-IT and hermeneutic-interpretative analysis, the discourse analysis is carried out both manually and digitally. Using the example of current discourses on digitization in the healthcare sector and the associated issues regarding data protection, we have manually built an initial data corpus, in which the relevant actors and discourse positions are analysed by conventional qualitative discourse analysis. At the same time, we are building an extensive digital corpus on the same topic based on the use and further development of entity-centered research tools such as topic crawlers and automated newsreaders. In addition to text material, this corpus consists of multimodal sources such as images, video sequences, and apps. In a blended reading process, the data material is filtered, annotated, and finally coded with the help of NLP tools such as dependency parsing, named entity recognition, co-reference resolution, entity linking, sentiment analysis, and other project-specific tools that are being adapted and developed. The coding process is carried out (semi-)automatically by programs that propose coding paradigms based on the calculated entities and their relationships. Simultaneously, these programs can be specifically trained by manual coding in a closed reading process and specified according to the content issues. Overall, this approach enables purely qualitative, fully automated, and semi-automated analyses to be compared and reflected upon.

Keywords: entanglement of structural IT and hermeneutic-interpretative analysis, multimodality, plurality of meaning, sociology of knowledge approach to discourse analysis

Procedia PDF Downloads 208
685 A Hybrid Normalized Gradient Correlation Based Thermal Image Registration for Morphoea

Authors: L. I. Izhar, T. Stathaki, K. Howell

Abstract:

The analysis and interpretation of thermograms have been increasingly employed in the diagnosis and monitoring of diseases, thanks to their non-invasive, non-harmful nature and low cost. In this paper, a novel system is proposed to improve the diagnosis and monitoring of the skin disorder morphoea by integrating the published lines of Blaschko. In the proposed system, image registration based on both global and local registration methods is found to be essential. For the global registration step, this paper presents a modified normalized gradient cross-correlation (NGC) method to reduce large geometrical differences between two multimodal images represented by smooth gray edge maps. This method is improved further by incorporating an iterative normalized cross-correlation coefficient (NCC) method. It is found that by replacing the final registration stage of the NGC method, where translational differences are solved in the spatial Fourier domain, with the NCC method performed in the spatial domain, the performance and robustness of the NGC method can be greatly improved. It is shown in this paper that the hybrid NGC method not only outperforms the phase correlation (PC) method but also reduces the misregistration due to translation suffered by the modified NGC method alone for thermograms with an ill-defined jawline. This also demonstrates that, by using the gradients of the gray edge maps and a hybrid technique, the performance of PC-based image registration can be greatly improved.
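The abstract does not reproduce the NCC formulation itself; as a minimal, hypothetical sketch of the spatial-domain NCC building block used in the hybrid method, the following searches integer translations between two images and scores each overlap with the normalized cross-correlation coefficient (image sizes, the search window, and the toy data are illustrative):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-size arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_translation(ref, mov, max_shift=5):
    """Exhaustively search integer shifts (dy, dx) such that `mov` best
    matches `ref` translated by (dy, dx), scored by NCC on the overlap."""
    best, best_shift = -np.inf, (0, 0)
    h, w = ref.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # overlapping windows of the two images for this shift
            r = ref[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            m = mov[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            score = ncc(r, m)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# toy example: a bright square shifted down 2 and right 3 pixels
ref = np.zeros((32, 32)); ref[10:16, 10:16] = 1.0
mov = np.zeros((32, 32)); mov[12:18, 13:19] = 1.0
print(best_translation(ref, mov))  # (2, 3)
```

A real pipeline in the spirit of the paper would operate on the smoothed gray edge maps described above and use this spatial-domain NCC search only to refine the translation left over from the NGC stage, rather than searching from scratch.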

Keywords: Blaschko’s lines, image registration, morphoea, thermal imaging

Procedia PDF Downloads 294
684 The Effect of Development of Two-Phase Flow Regimes on the Stability of Gas Lift Systems

Authors: Khalid M. O. Elmabrok, M. L. Burby, G. G. Nasr

Abstract:

Flow instability during gas lift operation is caused by three major phenomena: density wave oscillation, casing heading pressure, and flow perturbation within the two-phase flow region. This paper focuses on the causes and effects of flow instability during gas lift operation and suggests ways to control it in order to maximise productivity. A laboratory-scale two-phase flow system was designed and built to study the effects of flow perturbation. The apparatus comprises a 2 m long, 66 mm ID transparent PVC pipe with an air injection point situated 0.1 m above the base of the pipe, the point at which stabilised bubbles were clearly visible after injection. Air is injected into the water-filled transparent pipe at different flow rates and pressures. The behaviour of the different sizes of bubbles generated within the two-phase region was captured using a digital camera, and the images were analysed using an advanced image processing package. It was observed that the average maximum bubble size increased with the length of the vertical pipe column, from 29.72 to 47 mm. Increasing the air injection pressure from 0.5 to 3 bar increased bubble sizes from 29.72 mm to 44.17 mm, with sizes then decreasing as the pressure reached 4 bar. It was observed that at a higher bubble velocity of 6.7 m/s, larger-diameter bubbles coalesce and burst due to high agitation and collision with each other. This collapse of the bubbles causes a pressure drop and reverse flow within the two-phase region and is the main cause of the flow instability phenomenon.

Keywords: gas lift instability, bubbles forming, bubbles collapsing, image processing

Procedia PDF Downloads 403
683 Assessment of the Adoption and Distribution Pattern of Agroforestry in Faisalabad District Using GIS

Authors: Irfan Ahmad, Raza Ghafoor, Hammad Raza Ahmad, Muhammad Asif, Farrakh Nawaz, M. Tahir Siddiqui

Abstract:

Due to the exploding population of Pakistan, the pressure on natural forests is increasing to meet the demand for wood and wood-based products. Agroforestry is practiced throughout the world on a scientific basis, but unfortunately the farmers of Pakistan are reluctant to adopt it. The present study was designed to assess the adoption of agroforestry practices in Faisalabad with respect to the landholdings of farmers, and their future suitability, using a geographic information system (GIS). Faisalabad is the third largest city of the country and is famous for its textile industry. A comprehensive survey of target villages of the Lyallpur town of Faisalabad district was carried out. Out of a total of 65 villages, 40 were selected for the study. From each selected village, one farmer who was actively engaged in farming activities was selected. It was observed that medium-sized farmers holding 10-20 acres were more numerous than small and large farmers. The number of trees was highest on large farmlands, while the proportion of diseased trees was similar across all categories, with a maximum on small farmlands (24.1%). Regarding future prospects, 35% of farmers were interested in agroforestry practices, while 65% were not interested in the promotion of trees due to the non-availability of technical guidance and proper markets. Geographic images of the study site can further help researchers and policy makers in the promotion of agroforestry.

Keywords: agroforestry trends, adoption, Faisalabad, geographic information system (GIS)

Procedia PDF Downloads 483
682 Advancements in Dielectric Materials: A Comprehensive Study on Properties, Synthesis, and Applications

Authors: M. Mesrar, T. Lamcharfi, Nor-S. Echatoui, F. Abdi

Abstract:

The solid-state reaction method was used to synthesize lead-free ferroelectric systems, specifically (1-x-y)(Na₀.₅Bi₀.₅)TiO₃-xBaTiO₃-y(K₀.₅Bi₀.₅)TiO₃. To achieve a pure perovskite phase, the optimal calcination temperature was determined to be 1000°C for 4 hours. X-ray diffraction (XRD) analysis identified the presence of the morphotropic phase boundary (MPB) in the (1-x-y)NBT-xBT-yKBT ceramics for specific molar compositions, namely 0.95NBT-0.05BT, 0.84NBT-0.16KBT, and 0.79NBT-0.05BT-0.16KBT. To enhance densification, the sintering temperature was set at 1100°C for 4 hours. Scanning electron microscopy (SEM) images exhibited a homogeneous distribution and dense packing of the grains in the ceramics, indicating a uniform microstructure. These materials exhibited favorable characteristics, including high dielectric permittivity, low dielectric loss, and diffuse phase transition behavior. The ceramics of composition 0.79NBT-0.05BT-0.16KBT exhibited the highest piezoelectric constant (d33 = 148 pC/N) and electromechanical coupling factor (kp = 0.292) among all compositions studied. This enhancement in piezoelectric properties can be attributed to the presence of the MPB in the material. This study presents a comprehensive approach to improving the performance of lead-free ferroelectric systems of composition 0.79(Na₀.₅Bi₀.₅)TiO₃-0.05BaTiO₃-0.16(K₀.₅Bi₀.₅)TiO₃.

Keywords: solid-state method, (1-x-y)NBT-xBT-yKBT, morphotropic phase boundary, Raman spectroscopy, dielectric properties

Procedia PDF Downloads 36
681 Cost-Effective Intraoperative MRI for Cranial and Spinal Cases Using a Pre-Existing Three-Side-Open MRI Adjacent to the Operation Theater (Since 2005)

Authors: V. K. Tewari, M. Hussain, H. K. D. Gupta

Abstract:

Aims/Background: The existing intraoperative MRI (IMRI) installations of developed countries are too costly to be utilized in a developing country. We have used a pre-existing three-side-open 0.2-tesla MRI for IMRI in India, so that the goal of IMRI is attained with cost-effective, state-of-the-art surgeries. Material/Methods: We have operated on 36 cases using IMRI since 13 November 2005. The MRI table is used as the operating table: it can be taken to the P3 level, and whenever an MRI scan is required, the table can slide to the P1 level for intraoperative imaging. The oxygen/nitrous tubes were routed through a vent made in the wall of the MRI room. A small, handy Boyle's trolley with a small monitor was taken inside the MRI room, and anesthesia is given in the MRI room itself. The usual skin markings are made with the help of scout MRI fields, increasing precision. The craniotomy flap is raised, or the laminectomy performed, and the dura opened in the same fashion and with the same instruments as for a non-IMRI case. The corticectomy is then planned after a T1 contrast image, to localize and minimize the cortical resection. Staged, repeated moves between the P3 and P1 positions are planned so that the resection margin is optimized to around 0.5 mm for radiotherapy. Pre-closure hematomas and edema can be immediately differentiated and managed. Results: MRI images comparable to those of the highly expensive IMRI systems of the western world are achieved. Conclusion: A 0.2-tesla intraoperative MRI can easily be used for cranial and spinal operative work with high cost-effectiveness.

Keywords: intraoperative MRI, 0.2 tesla intraoperative MRI, cost effective intraoperative MRI, medical and health sciences

Procedia PDF Downloads 432
680 Classification of Hyperspectral Image Using Mathematical Morphological Operator-Based Distance Metric

Authors: Geetika Barman, B. S. Daya Sagar

Abstract:

In this article, we propose a pixel-wise classification of hyperspectral images using mathematical-morphology-based distance metrics called the “dilation distance” and the “erosion distance”. This method involves measuring the spatial distance between the spectral features of a hyperspectral image across the bands. The key concept of the proposed approach is that the “dilation distance” is the maximum distance a pixel can be moved without changing its classification, whereas the “erosion distance” is the maximum distance a pixel can be moved before changing its classification. The spectral signature of the hyperspectral image carries unique class information and shape for each class. This article demonstrates how easily the dilation and erosion distances measure spatial distance compared to other approaches. This property is used to calculate the spatial distance between hyperspectral image feature vectors across the bands. A dissimilarity matrix is then constructed using both measures extracted from the feature spaces. The measured distance metric is used to distinguish between the spectral features of the various classes and to discriminate each class precisely. This is illustrated using both toy data and real datasets. Furthermore, we investigated the role of flat vs. non-flat structuring elements in capturing the spatial features of each class in the hyperspectral image. For validation, we compared the proposed approach to other existing methods and demonstrated empirically that the morphological distance-metric classification provided competitive results and outperformed some of them.
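The abstract does not reproduce the exact metric definitions; one common morphological formalization of a directed dilation distance counts the number of unit dilations needed for one binary set to cover another. A minimal sketch under that assumption (the function name, the default 4-connected structuring element, and the toy data are illustrative):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilation_distance(a, b, structure=None, max_iter=100):
    """Number of unit dilations of binary set `a` needed until it
    covers binary set `b` (np.inf if coverage is never reached).
    With structure=None, scipy uses the 4-connected cross."""
    a = a.astype(bool)
    b = b.astype(bool)
    cur = a.copy()
    for n in range(max_iter + 1):
        if cur[b].all():          # every pixel of b is now covered
            return n
        cur = binary_dilation(cur, structure=structure)
    return np.inf

# toy sets: single pixels three steps apart (city-block distance)
a = np.zeros((7, 7), dtype=bool); a[3, 1] = True
b = np.zeros((7, 7), dtype=bool); b[3, 4] = True
print(dilation_distance(a, b))  # 3
```

In the paper's setting, distances of this kind computed between per-band feature sets would populate the dissimilarity matrix used for classification; swapping in a non-flat (grayscale) structuring element changes how spatial shape is weighted.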

Keywords: dilation distance, erosion distance, hyperspectral image classification, mathematical morphology

Procedia PDF Downloads 66
679 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform

Authors: S. Hutasavi, D. Chen

Abstract:

The built-up area is a significant proxy for measuring regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. Cloud-based geospatial analysis platforms such as Google Earth Engine (GEE) provide those countries with the accessibility and computational power to generate built-up data. Therefore, this study aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand, using daytime and nighttime satellite imagery on the GEE platform. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), the Built-up Index (BUI), and the Modified Built-up Index (MBUI), and applied to identify built-up areas in the EEC. The results show that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and a Kappa of 0.82. Moreover, the overall classification accuracy improved from 79% to 90%, and the error in the total built-up area decreased from 29% to 0.7%, after incorporating nighttime light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB). The results suggest that MBUI combined with nighttime light imagery is appropriate for built-up area extraction and can be utilized for further study of the socioeconomic impacts of regional development policy over the EEC region.
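The index formulas are not spelled out in the abstract; NDBI is conventionally defined as (SWIR − NIR) / (SWIR + NIR), which for Landsat 8 uses band 6 (SWIR1) and band 5 (NIR). A minimal sketch of this building block (the reflectance values are illustrative, and the BUI/MBUI variants are not shown):

```python
import numpy as np

def ndbi(swir, nir, eps=1e-9):
    """Normalized Difference Built-up Index: (SWIR - NIR) / (SWIR + NIR).
    Built-up surfaces typically reflect more in SWIR than NIR, so
    NDBI > 0 is a common (scene-dependent) built-up indicator."""
    swir = swir.astype(float)
    nir = nir.astype(float)
    return (swir - nir) / (swir + nir + eps)

# toy surface reflectances: a built-up pixel (SWIR > NIR)
# and a vegetated pixel (NIR >> SWIR)
swir = np.array([0.30, 0.10])
nir = np.array([0.20, 0.40])
print(ndbi(swir, nir))  # positive for built-up, negative for vegetation
```

In practice this would run per pixel over the whole Landsat 8 surface reflectance image, with a threshold (fixed or adaptive) turning the index map into a built-up mask.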

Keywords: built-up area extraction, google earth engine, adaptive thresholding method, rapid mapping

Procedia PDF Downloads 107
678 From Lack of Humanity to Self-Consciousness and Vision in Lord of the Flies and Blindness

Authors: Maryam Sadeghi

Abstract:

Civilization and industrialization are two important factors that make people believe they are free of savagery and brutality, but practical studies show something quite different. How groups of people behave when they are put in extreme situations is a very unpleasant truth about human beings in general. Both novels deal with the fragility of human society, whether the people playing a role are children or grown-ups who, by definition, should know better. Both novels have plots in which no one enforces rules and laws on the characters, so they begin to show their true nature. The present study investigates the process of a journey from a lack of humanity to a sort of self-consciousness, which happens at the end of both Blindness by Saramago and Lord of the Flies by Golding. In order to get the best result, the two novels have been studied closely, and many different articles and critical essays have been analyzed, which show that people drift into cruelty and savagery easily but can also drift out of it. Losing sight in Blindness, and being cut off from society on a deserted tropical island in Lord of the Flies, cause limitation, and limitation in any form makes people rebel. Although over the course of both novels every kind of savagery, brutality, filth, and social collapse is observable, and both writers believe that human beings have the potential to become animal-like, they both also want to show that the very nature of the human being is divine. The children's weeping at the end of Lord of the Flies and the doctor's remark at the end of Blindness, “I don’t think we did go blind, I think we are blind, blind but seeing, blind people who can see but do not see”, point exactly to the matter of insight at the end of the novels. The fact that divinity exists in the very nature of human beings is the indubitable claim that makes this research truly valuable.

Keywords: brutality, lack of humanity, savagery, Blindness

Procedia PDF Downloads 353
677 Automatic Reporting System for Transcriptome Indel Identification and Annotation Based on Snapshot of Next-Generation Sequencing Reads Alignment

Authors: Shuo Mu, Guangzhi Jiang, Jinsa Chen

Abstract:

The analysis of indels in RNA sequencing of clinical samples is easily affected by sequencing errors and software selection. In order to improve the efficiency and accuracy of analysis, we developed an automatic reporting system for indel recognition and annotation based on image snapshots of transcriptome read alignments. This system includes sequence local assembly and realignment, target-point snapshot, and image-based recognition processes. We integrated a high-confidence indel dataset from several known databases as a training set to improve the accuracy of image processing, and added a bioinformatics processing module to annotate and filter indel artifacts. The system then automatically generates a report, including data quality levels and image results. Sanger sequencing verification of the reference indel mutations of the cell line NA12878 showed that the process can achieve 83% sensitivity and 96% specificity. Analysis of the collected clinical samples showed that the interpretation accuracy of the process was equivalent to that of manual inspection, while the processing efficiency improved significantly. This work shows the feasibility of accurate indel analysis of clinical next-generation sequencing (NGS) transcriptomes. This result may be useful for future RNA studies of clinical samples with microsatellite instability in immunotherapy.
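The reported 83% sensitivity and 96% specificity follow the standard confusion-matrix definitions; a small sketch (the counts below are hypothetical, chosen only to reproduce those rates, and are not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical validation counts against Sanger-confirmed indel calls
sens, spec = sensitivity_specificity(tp=83, fn=17, tn=96, fp=4)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Here a "positive" is a candidate indel confirmed by Sanger sequencing, so sensitivity measures how many true indels the snapshot classifier recovers and specificity how well it rejects alignment artifacts.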

Keywords: automatic reporting, indel, next-generation sequencing, NGS, transcriptome

Procedia PDF Downloads 160
676 In-Situ Fabrication of ZnO PES Membranes for Treatment of Pharmaceuticals

Authors: Oranso T. Mahlangi, Bhekie B. Mamba

Abstract:

The occurrence of trace organic compounds (TOrCs) in water has raised health concerns for living organisms. The majority of TOrCs, including pharmaceuticals and volatile organic compounds, are poorly monitored, partly due to the high cost of analysis and less strict water quality guidelines in South Africa. Therefore, the removal of TOrCs is important to guarantee safe potable water. In this study, ZnO nanoparticles were fabricated in situ in polyethersulfone (PES) polymer solutions, followed by membrane synthesis using the phase inversion technique. Techniques such as FTIR, Raman, SEM, AFM, EDS, and contact angle measurements were used to characterize the membranes for several physicochemical properties. The membranes were then evaluated for their efficiency in treating pharmaceutical wastewater and their resistance to organic (sodium alginate) and protein (bovine serum albumin) fouling. EDS micrographs revealed a uniform distribution of ZnO nanoparticles within the polymer matrix, while SEM images showed uniform finger-like structures. The addition of ZnO increased membrane roughness as well as hydrophilicity (which in turn improved water fluxes). The membranes rejected monovalent and divalent salts poorly (< 10%), making them resistant to flux decline due to concentration polarization effects. However, the membranes effectively removed carbamazepine, caffeine, sulfamethoxazole, ibuprofen, and naproxen by over 50%. The ZnO PES membranes were resistant to organic and protein fouling compared to the neat membrane. ZnO PES ultrafiltration membranes may thus provide a solution for the reclamation of wastewater.

Keywords: trace organic compounds, pharmaceuticals, membrane fouling, wastewater reclamation

Procedia PDF Downloads 123
675 Future Projection of Glacial Lake Outburst Floods Hazard: A Hydrodynamic Study of the Highest Lake in the Dhauliganga Basin, Uttarakhand

Authors: Ashim Sattar, Ajanta Goswami, Anil V. Kulkarni

Abstract:

Glacial lake outburst floods (GLOFs) contribute significantly to mountain hazards in the Himalaya. Over the past decade, high-altitude lakes in the Himalaya have shown notable growth in size and number, the key reason being the rapid retreat of their glacier fronts. Hydrodynamic modeling of a GLOF using the shallow water equations (SWE) makes it possible to understand its impact in the downstream region. The present study incorporates remote-sensing-based ice thickness modeling to determine the future extent of the Dhauliganga Lake and to map the overdeepening around the highest lake in the Dhauliganga basin. The maximum future volume of the lake, calculated using area-volume scaling, is used to model a GLOF event. The GLOF hydrograph is routed along the channel using one-dimensional and two-dimensional models to understand the propagation of the flood wave until it reaches the first hydropower station, located 72 km downstream of the lake. The present extent of the lake, calculated using SENTINEL-2 images, is 0.13 km². The maximum future extent of the lake, mapped by investigating the glacier bed, has a calculated scaled volume of 3.48 × 10⁶ m³. GLOF modeling releasing the future volume of the lake resulted in a breach hydrograph with a peak flood of 4995 m³/s just downstream of the lake. Hydraulic routing
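The abstract does not state which area-volume relation was applied; a widely used empirical form is V = c·A^γ. The sketch below assumes the coefficients of one commonly cited published scaling (c = 0.104, γ = 1.42, with A in m² and V in m³), which may differ from the study's own fit:

```python
def lake_volume(area_m2, c=0.104, gamma=1.42):
    """Empirical glacial-lake area-volume scaling: V = c * A**gamma.
    The default coefficients are assumptions taken from a commonly
    used relation, not the study's fitted values."""
    return c * area_m2 ** gamma

# present lake extent from the abstract: 0.13 km^2 = 130,000 m^2
print(f"{lake_volume(130_000):.3g} m^3")  # on the order of 10^6 m^3
```

Applied to the mapped maximum future extent rather than the present one, the same relation yields the larger volume that drives the modeled breach hydrograph.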

Keywords: GLOF, glacial lake outburst floods, mountain hazard, Central Himalaya, future projection

Procedia PDF Downloads 143
674 Introduction of Digital Radiology to Improve the Timeliness in Availability of Radiological Diagnostic Images for Trauma Care

Authors: Anuruddha Jagoda, Samiddhi Samarakoon, Anil Jasinghe

Abstract:

In an emergency department, where every second counts in patient management, the timely availability of X-rays plays a vital role in the early diagnosis and management of patients. Trauma care centers rely heavily on timely radiologic imaging for patient care, and radiology plays a crucial role in emergency department (ED) operations. A research study was carried out to assess the timeliness of X-ray availability and the total turnaround time at the Accident Service of the National Hospital of Sri Lanka, the premier trauma center in the country. A digital radiology (DR) system was implemented as an intervention to improve the timeliness of X-ray availability, and a post-implementation assessment was carried out to evaluate its effectiveness. Reductions in all three waiting times, namely waiting for the initial examination by doctors, waiting until the X-ray is performed, and waiting for image availability, were observed after implementation of the intervention. The most significant improvement was seen in the waiting time for image availability, and this reduction indirectly reduced the waiting time for the initial examination by doctors and the waiting time until the X-ray is performed. The most significant reduction in time for image availability was observed when performing 4-5 X-rays with the DR system. The least improvement in timeliness was seen in patients categorized as critical.

Keywords: emergency department, digital radiology, timeliness, trauma care

Procedia PDF Downloads 245
673 Iterative Segmentation and Application of Hausdorff Dilation Distance in Defect Detection

Authors: S. Shankar Bharathi

Abstract:

Inspection of surface defects on metallic components has always been challenging due to their specular property. Defects such as scratches, rust, and pitting occur very commonly on metallic surfaces during the manufacturing process. These defects, if unchecked, can hamper the performance and reduce the lifetime of such components. Many conventional image processing algorithms for detecting surface defects involve segmentation techniques based on thresholding, edge detection, watershed segmentation, and textural segmentation, and later employ other suitable algorithms based on morphology, region growing, shape analysis, or neural networks for classification. In this paper, the work is focused only on detecting scratches. Global and other thresholding techniques were used to extract the defects, but they proved inaccurate in extracting the defects alone. This paper does not, however, focus on a comparison of different segmentation techniques; rather, it describes a novel approach to segmentation combined with the Hausdorff dilation distance. The proposed algorithm is based on the distribution of intensity levels, that is, whether a certain gray level is concentrated or evenly distributed, and works by extracting such concentrated pixels. Defective images showed a higher concentration of some gray levels, whereas in non-defective images the gray levels appeared evenly distributed rather than concentrated. This formed the basis for detecting defects in the proposed algorithm. The Hausdorff dilation distance, based on mathematical morphology, was used to strengthen the segmentation of the defects.
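The abstract does not give the exact formulation used; the following minimal sketch, assuming a 3×3 (chessboard-metric) structuring element, illustrates how a directed Hausdorff distance between two binary masks can be computed by repeated binary dilation:

```python
import numpy as np

def dilate(mask: np.ndarray) -> np.ndarray:
    """One binary dilation with a 3x3 square structuring element."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    out[1:, 1:] |= mask[:-1, :-1]
    out[1:, :-1] |= mask[:-1, 1:]
    out[:-1, 1:] |= mask[1:, :-1]
    out[:-1, :-1] |= mask[1:, 1:]
    return out

def hausdorff_dilation(a: np.ndarray, b: np.ndarray, max_iter: int = 100) -> int:
    """Directed Hausdorff dilation distance: the number of dilations of `b`
    (3x3 element, i.e. chessboard metric) needed until it covers `a`."""
    a = a.astype(bool)
    b = b.astype(bool)
    for n in range(max_iter + 1):
        if not (a & ~b).any():
            return n
        b = dilate(b)
    return max_iter
```

The symmetric Hausdorff distance is the maximum of the two directed distances; a defect mask far from the reference segmentation therefore yields a large value.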

Keywords: metallic surface, scratches, segmentation, Hausdorff dilation distance, machine vision

Procedia PDF Downloads 404
672 An Investigation of Surface Texturing by Ultrasonic Impingement of Micro-Particles

Authors: Nagalingam Arun Prasanth, Ahmed Syed Adnan, S. H. Yeo

Abstract:

Surface topography plays a significant role in the functional performance of engineered parts. It is important to have control of the surface geometry and an understanding of the surface details to get the desired performance. Hence, in the current research contribution, a non-contact micro-texturing technique has been explored and developed. The technique involves ultrasonic excitation of a tool as the prime source of surface texturing for aluminum alloy workpieces. The specimen surface is first polished and then immersed in a liquid bath containing a 10% weight concentration of Ti6Al4V grade 5 spherical powder. A submerged slurry jet is used to recirculate the spherical powder under the ultrasonic horn, which is excited at an ultrasonic frequency of 40 kHz and an amplitude of 70 µm. The distance between the horn and the workpiece surface was kept fixed at 200 µm using a precision control stage. Texturing effects were investigated for process times of 1, 3, and 5 s. Thereafter, the specimens were cleaned in an ultrasonic bath for 5 min to remove loose debris from the surface. The developed surfaces were characterized by optical microscopy and a contact surface profiler. The optical microscopic images show a texture of circular spots on the workpiece surface indented by the titanium spherical balls. Waviness patterns obtained from the contact surface profiler support the texturing effect produced by the proposed technique. Furthermore, water droplet tests were performed to show the efficacy of the proposed technique in developing hydrophilic surfaces and to quantify the texturing effect produced.

Keywords: surface texturing, surface modification, topography, ultrasonic

Procedia PDF Downloads 207
671 Challenges and Insights by Electrical Characterization of Large Area Graphene Layers

Authors: Marcus Klein, Martina GrießBach, Richard Kupke

Abstract:

The current advances in the research and manufacturing of large-area graphene layers are promising for the introduction of this exciting material into the display industry and other applications that benefit from its excellent electrical and optical characteristics. New production technologies in the fabrication of flexible displays, touch screens, and printed electronics apply graphene layers on non-metal substrates and bring new challenges to the required metrology. Traditional measurement concepts for layer thickness, sheet resistance, and layer uniformity are difficult to apply to graphene production processes and are often harmful to the product layer. New non-contact sensor concepts are required to adapt to these challenges and, looking ahead, to the inline production of large-area graphene. Dedicated non-contact measurement sensors are a pioneering method to address these issues in a large variety of applications, while significantly lowering the costs of development and process setup. Transferred and printed graphene layers can be characterized with high accuracy over a wide measurement range at very high resolution. Large-area graphene mappings are applied for process optimization and for efficient quality control of transfer, doping, annealing, and stacking processes. Examples of doped, defective, and excellent graphene are presented as quality images, and implications for manufacturers are explained.

Keywords: graphene, doping and defect testing, non-contact sheet resistance measurement, inline metrology

Procedia PDF Downloads 296
670 The Urban Expansion Characterization of the Bir El Djir Municipality using Remote Sensing and GIS

Authors: Fatima Achouri, Zakaria Smahi

Abstract:

Bir El Djir is an important coastal township in the Oran department, located in northwest Algeria about 450 km from Algiers. In this coastal area, urban sprawl is one of the main problems reducing the limited, highly fertile land, and remote sensing and GIS technologies have shown great capability for addressing many such earth-resources issues. The aim of this study is to produce land use and land cover maps of the studied area at different periods to monitor possible changes that may have occurred, particularly in the urban areas, and subsequently to predict likely changes. For this, two satellite images, from SPOT (1987) and Landsat (2014), were used to assess the changes of urban expansion and encroachment during this period with a photo-interpretation and GIS approach. The results revealed that the town of Bir El Djir showed its highest growth over the period 1987-2014, amounting to 521.1 hectares. These expansions largely concern the new real-estate constructions falling within the social and promotional housing programs launched by the government. Indeed, during the last census period (1998-2008), the population of this town almost doubled from 73,029 to 152,151 inhabitants, with an average annual growth of 5.2%. This significant population growth is causing an accelerated urban expansion of the periphery, leading to its conurbation with the town of Oran on the west side. The urban expansion is mostly characterized by new construction in the form of spontaneous or precarious peripheral housing, as well as unstructured slums settled especially in the southeastern part of town.
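As a sketch of the arithmetic behind such census figures, a compound annual growth rate can be computed directly from the two counts (the averaging convention behind the abstract's 5.2% figure is not stated, and a compound calculation gives a somewhat higher value):

```python
def annual_growth_rate(p0: float, p1: float, years: int) -> float:
    """Compound annual growth rate between two census counts."""
    return (p1 / p0) ** (1.0 / years) - 1.0

# Census figures for Bir El Djir quoted in the abstract (1998 -> 2008):
rate = annual_growth_rate(73_029, 152_151, 10)
```

For the quoted counts this compound formula gives roughly 7.6% per year, so the 5.2% average reported above presumably follows a different convention.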

Keywords: urban expansion, remote sensing, photo-interpretation, spatial dynamics

Procedia PDF Downloads 254
669 Enhancer: An Effective Transformer Architecture for Single Image Super Resolution

Authors: Pitigalage Chamath Chandira Peiris

Abstract:

A widely researched domain in the field of image processing in recent times has been single image super-resolution, which tries to restore a high-resolution image from a single low-resolution image. Many single image super-resolution efforts have been completed utilizing both traditional and deep learning methodologies. Deep learning-based super-resolution methods, in particular, have received significant interest. As of now, the most advanced image restoration approaches are based on convolutional neural networks; nevertheless, only a few efforts have used Transformers, which have demonstrated excellent performance on high-level vision tasks. The effectiveness of CNN-based algorithms in image super-resolution has been impressive; however, these methods cannot completely capture the non-local features of the data. Enhancer is a simple yet powerful Transformer-based approach for enhancing the resolution of images. In this study, a method for single image super-resolution was developed that utilizes an efficient and effective transformer design. The proposed architecture makes use of a locally enhanced window transformer block to alleviate the enormous computational load associated with non-overlapping window-based self-attention. Additionally, it incorporates depth-wise convolution in the feed-forward network to enhance its ability to capture local context. The study is assessed by comparing the results obtained on popular datasets to those obtained by other techniques in the domain.
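The abstract gives no implementation details; as a minimal sketch of the mechanism that non-overlapping window-based self-attention relies on, the feature map is partitioned into fixed windows, attention is computed within each window, and the windows are stitched back:

```python
import numpy as np

def window_partition(x: np.ndarray, w: int) -> np.ndarray:
    """Split an (H, W, C) feature map into non-overlapping (w*w, C) windows.

    Self-attention is then computed inside each window independently,
    reducing cost from O((H*W)^2) to O(H*W * w^2) per layer.
    """
    H, W, C = x.shape
    assert H % w == 0 and W % w == 0
    x = x.reshape(H // w, w, W // w, w, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, w * w, C)

def window_reverse(windows: np.ndarray, w: int, H: int, W: int) -> np.ndarray:
    """Inverse of window_partition: reassemble the (H, W, C) feature map."""
    C = windows.shape[-1]
    x = windows.reshape(H // w, W // w, w, w, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

x = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)
wins = window_partition(x, 4)   # four 4x4 windows of 3-channel features
```

The depth-wise convolution in the feed-forward path would then reintroduce local context across window borders; that part is architecture-specific and not sketched here.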

Keywords: single image super resolution, computer vision, vision transformers, image restoration

Procedia PDF Downloads 88
668 Neural Graph Matching for Modification Similarity Applied to Electronic Document Comparison

Authors: Po-Fang Hsu, Chiching Wei

Abstract:

In this paper, we present a novel neural graph matching approach applied to document comparison. Document comparison is a common task in the legal and financial industries. In some cases, the most important differences may be the addition or omission of words, sentences, clauses, or paragraphs. However, it is a challenging task when the editing process was not recorded or traced. Under many temporal uncertainties, we explore the potential of our approach to approximate an accurate comparison, determining which element blocks have an editing relation with others. First, we apply a document layout analysis that combines traditional and modern techniques to appropriately segment layouts into blocks of various types. Then we transform the issue into a problem of layout graph matching with textual awareness. Graph matching is a long-studied problem with a broad range of applications; however, unlike previous works focusing on visual images or structural layout, we also bring textual features into our model to adapt it to this domain. Specifically, based on the electronic document, we introduce an encoder to handle the visual presentation decoded from PDF. Additionally, because modifications can cause inconsistent document layout analysis between modified documents, and blocks can be merged and split, Sinkhorn divergence is adopted in our neural graph approach, which addresses both issues with many-to-many block matching. We demonstrate this on two categories of layouts, legal agreements and scientific articles, collected from our real-case datasets.
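The abstract names Sinkhorn divergence but not the authors' exact formulation; the minimal sketch below shows the underlying Sinkhorn normalization that turns a block-affinity matrix into a soft, many-to-many matching:

```python
import numpy as np

def sinkhorn(scores: np.ndarray, n_iter: int = 100, tau: float = 1.0) -> np.ndarray:
    """Normalize a block-affinity matrix into an (approximately)
    doubly-stochastic soft matching by alternating row/column
    normalization (Sinkhorn iterations).

    Soft assignments permit many-to-many matches, which is how merged
    or split layout blocks can still be related across document versions.
    """
    P = np.exp(scores / tau)          # positive kernel from raw affinities
    for _ in range(n_iter):
        P /= P.sum(axis=1, keepdims=True)   # normalize rows
        P /= P.sum(axis=0, keepdims=True)   # normalize columns
    return P

rng = np.random.default_rng(1)
P = sinkhorn(rng.random((4, 4)), n_iter=300)
```

A hard one-to-one assignment would instead use the Hungarian algorithm; the soft version is differentiable and so fits inside a neural matching model.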

Keywords: document comparison, graph matching, graph neural network, modification similarity, multi-modal

Procedia PDF Downloads 158
667 Text Localization in Fixed-Layout Documents Using Convolutional Networks in a Coarse-to-Fine Manner

Authors: Beier Zhu, Rui Zhang, Qi Song

Abstract:

Text contained within fixed-layout documents such as ID cards, invoices, cheques, and passports can be of great semantic value and so requires high localization accuracy. Recently, algorithms based on deep convolutional networks have achieved high performance on text detection tasks. However, for text localization in fixed-layout documents, such algorithms detect word bounding boxes individually, ignoring the layout information. This paper presents a novel architecture built on convolutional neural networks (CNNs). A global text localization network and a regional bounding-box regression network are introduced to tackle the problem in a coarse-to-fine manner. The text localization network simultaneously locates word bounding points, taking the layout information into account. The bounding-box regression network takes as input the features pooled from arbitrarily sized RoIs and refines the localizations. The two networks share their convolutional features and are trained jointly. ID cards, a typical type of fixed-layout document, are selected to evaluate the effectiveness of the proposed system. The networks are trained on data cropped from natural scene images and on synthetic data produced by a synthetic text generation engine. Experiments show that our approach locates word bounding boxes with high accuracy and achieves state-of-the-art performance.
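The abstract does not specify the regression parameterization; refinement networks of this kind conventionally regress R-CNN-style deltas between a coarse box and the ground truth, sketched here with hypothetical helper names:

```python
import numpy as np

def encode_deltas(anchor, target):
    """R-CNN-style regression targets for refining a coarse box.

    Boxes are (cx, cy, w, h); the network learns to predict these
    normalized offsets rather than raw coordinates.
    """
    ax, ay, aw, ah = anchor
    tx, ty, tw, th = target
    return np.array([(tx - ax) / aw, (ty - ay) / ah,
                     np.log(tw / aw), np.log(th / ah)])

def decode_deltas(anchor, deltas):
    """Apply predicted deltas to the coarse box to get the refined box."""
    ax, ay, aw, ah = anchor
    dx, dy, dw, dh = deltas
    return np.array([ax + dx * aw, ay + dy * ah,
                     aw * np.exp(dw), ah * np.exp(dh)])
```

Encoding and decoding are exact inverses, so a perfect regressor would recover the ground-truth box from any coarse proposal.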

Keywords: bounding box regression, convolutional networks, fixed-layout documents, text localization

Procedia PDF Downloads 172
666 Electrical Properties of Nanocomposite Fibres Based On Cellulose and Graphene Nanoplatelets Prepared Using Ionic Liquids

Authors: Shaya Mahmoudian, Mohammad Reza Sazegar, Nazanin Afshari

Abstract:

Graphene, a single layer of carbon atoms in a hexagonal lattice, has recently attracted great attention due to its unique mechanical, thermal, and electrical properties. The high aspect ratio and unique surface features of graphene have resulted in significant improvements in the properties of its nanocomposites. In this study, nanocomposite fibres made of cellulose and graphene nanoplatelets were wet-spun from solution using the ionic liquid 1-ethyl-3-methylimidazolium acetate (EMIMAc) as solvent. The effect of graphene loading on the thermal and electrical properties of the nanocomposite fibres was investigated. The nanocomposite fibres were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). XRD analysis revealed a cellulose II crystalline structure for the regenerated cellulose and the nanocomposite fibres. SEM images showed a homogeneous morphology and round cross-section for the nanocomposite fibres, along with good dispersion of the graphene nanoplatelets in the regenerated cellulose matrix. The incorporation of graphene into the cellulose matrix generated electrical conductivity: at 6 wt.% graphene, the electrical conductivity was 4.7 × 10⁻⁴ S/cm. The nanocomposite fibres also showed considerable improvements in thermal stability and char yield compared to pure regenerated cellulose fibres. This work provides a facile and environmentally friendly method of preparing nanocomposite fibres based on cellulose and graphene nanoplatelets that can find several applications in cellulose-based carbon fibres, conductive fibres, apparel, etc.

Keywords: nanocomposite, graphene nanoplatelets, regenerated cellulose, electrical properties

Procedia PDF Downloads 332
665 Laparoscopic Proximal Gastrectomy in Gastroesophageal Junction Tumours

Authors: Ihab Saad Ahmed

Abstract:

Background: For Siewert type I and II gastroesophageal junction (GEJ) tumors, laparoscopic proximal gastrectomy can be performed. It is associated with several perioperative benefits compared with open proximal gastrectomy, and the use of laparoscopic proximal gastrectomy (LPG) has become an increasingly popular approach for select tumors. Methods: We describe our technique for LPG, including the preoperative work-up, illustrated images of the main steps of the surgery, and our postoperative course. Results: Thirteen patients (nine male, four female) with type I or II GEJ adenocarcinoma underwent laparoscopic radical proximal gastrectomy with D2 lymphadenectomy. All of our patients received neoadjuvant chemotherapy. Eleven patients had an intrathoracic anastomosis through a mini-thoracotomy (two hand-sewn end-to-end anastomoses, the other nine end-to-side using a circular stapler), two of whom had a flap-and-wrap technique; two patients had thoracoscopic esophageal and mediastinal lymph node dissection with a cervical anastomosis. The mean blood loss was 80 ml, and no cases were converted to open surgery. The mean operative time was 250 minutes, and the average number of lymph nodes retrieved was 19-25. No severe complications such as leakage, stenosis, pancreatic fistula, or intra-abdominal abscess were reported. Only one patient presented with empyema 1.5 months after discharge, which was managed conservatively. Conclusion: For carefully selected patients, LPG for GEJ tumors of type I and II is a safe and reasonable alternative to the open technique, associated with similar oncologic outcomes and low morbidity. It showed less blood loss and fewer respiratory infections, with similar 1- and 3-year survival rates.

Keywords: LPG (laparoscopic proximal gastrectomy), GEJ (gastroesophageal junction tumour), D2 lymphadenectomy, neoadjuvant chemotherapy

Procedia PDF Downloads 101
664 Evaluation of Real-Time Background Subtraction Technique for Moving Object Detection Using Fast-Independent Component Analysis

Authors: Naoum Abderrahmane, Boumehed Meriem, Alshaqaqi Belal

Abstract:

Background subtraction is a widely used technique for detecting moving objects in video surveillance by extracting the foreground objects from a reference background image. There are many challenges in testing a good background subtraction algorithm, such as changes in illumination, dynamic backgrounds (e.g., swinging leaves, rain, snow), and changes in the background such as the moving and stopping of vehicles. In this paper, we propose an efficient and accurate background subtraction method for moving object detection in video surveillance. The main idea is to use a developed fast independent component analysis (fast-ICA) algorithm to separate the background, noise, and foreground masks from an image sequence in practical environments. The fast-ICA algorithm is adapted and adjusted with a matrix calculation and a search for an optimal non-quadratic function to make it faster and more robust. Moreover, in order to estimate the de-mixing matrix and the denoising de-mixing matrix parameters, we propose converting all images to the YCrCb color space, where the luma component Y (brightness of the color) gives suitable results. The proposed technique has been verified on the publicly available CDnet 2012 and CDnet 2014 datasets, and experimental results show that our algorithm can detect moving objects competently and accurately in challenging conditions compared to other methods in the literature, in terms of quantitative and qualitative evaluations, at a real-time frame rate.
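The authors' adapted fast-ICA and de-mixing estimation are not detailed in the abstract; the toy sketch below shows the generic FastICA principle they build on, separating two mixed 1-D signals (standing in for background-like and foreground-like components) with whitening plus one-unit deflation and a tanh nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy sources mixed by an unknown matrix (the "observed" data)
t = np.linspace(0, 8, 4000)
s1 = np.sin(2 * np.pi * t)                # smooth, background-like source
s2 = np.sign(np.sin(3 * np.pi * t + 1))   # abrupt, foreground-like source
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # mixing matrix
X = A @ S                                  # observed mixtures

# Center and whiten the observations
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# One-unit FastICA with deflation (tanh / logcosh nonlinearity)
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    for _ in range(200):
        g = np.tanh(Z.T @ w)
        w_new = (Z * g).mean(axis=1) - (1 - g ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # deflate against found rows
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1) < 1e-9
        w = w_new
        if converged:
            break
    W[i] = w

Y = W @ Z   # estimated independent components (up to sign/permutation)
```

In the video setting, each image of the sequence plays the role of one mixture and the recovered components separate background, noise, and foreground; the authors' speedups and YCrCb handling are beyond this sketch.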

Keywords: background subtraction, moving object detection, fast-ICA, de-mixing matrix

Procedia PDF Downloads 74
663 On the Development of Medical Additive Manufacturing in Egypt

Authors: Khalid Abdelghany

Abstract:

Additive manufacturing (AM) is a manufacturing technology used to fabricate products directly from CAD models in a very short time and with a minimum of operation steps. Jointly with advances in medical computer modeling, AM has proved to be a very efficient tool to help physicians, orthopedic surgeons, and dentists design and fabricate patient-tailored surgical guides, templates, and customized implants from the patient's CT/MRI images. AM together with computer-assisted design/computer-assisted manufacturing (CAD/CAM) technology has enabled medical practitioners to tailor physical models in a patient- and purpose-specific fashion and has helped in the design and manufacture of templates, appliances, and devices with a high degree of accuracy using biocompatible materials. In developing countries, there are technical and financial limitations to implementing such advanced tools as an essential part of medical applications. The CMRDI institute in Egypt has been working in the field of medical additive manufacturing since 2003 and has assisted in the recovery of hundreds of poor patients using these advanced tools. This paper focuses on the surgical and dental use of 3D printing technology in Egypt as a developing country. The presented case studies have been designed and processed using the software tools and additive manufacturing machines at CMRDI through cooperative engineering and medical work. Results showed that the implementation of additive manufacturing tools in developing countries is successful and can be economical compared to long treatment plans.

Keywords: additive manufacturing, dental and orthopaedic stents, patient-specific surgical tools, titanium implants

Procedia PDF Downloads 292
662 Diagnostic Efficacy and Usefulness of Digital Breast Tomosynthesis (DBT) in Evaluation of Breast Microcalcifications as a Pre-Procedural Study for Stereotactic Biopsy

Authors: Okhee Woo, Hye Seon Shin

Abstract:

Purpose: To investigate the diagnostic power of digital breast tomosynthesis (DBT) in the evaluation of breast microcalcifications, and its usefulness as a pre-procedural study for stereotactic biopsy, in comparison with full-field digital mammography (FFDM) and FFDM plus magnification images (FFDM+MAG). Methods and Materials: An IRB-approved retrospective observer performance study on DBT, FFDM, and FFDM+MAG was done. Image quality was rated on a 5-point scale for lesion clarity (1, very indistinct; 2, indistinct; 3, fair; 4, clear; 5, very clear) and compared by the Wilcoxon test. Diagnostic power was compared using diagnostic values and AUC with 95% confidence intervals. Additionally, the procedural reports of the biopsies were analysed for patient positioning and adequacy of instruments. Results: DBT showed higher lesion clarity (median 5, interquartile range 4-5) than FFDM (3, 2-4, p-value < 0.0001), and no statistically significant difference from FFDM+MAG (4, 4-5, p-value = 0.3345). The diagnostic sensitivity and specificity of DBT were 86.4% and 92.5%; of FFDM, 70.4% and 66.7%; of FFDM+MAG, 93.8% and 89.6%. The AUCs of DBT (0.88) and FFDM+MAG (0.89) were larger than that of FFDM (0.59, p-values < 0.0001), but there was no statistically significant difference between DBT and FFDM+MAG (p-value = 0.878). In two cases with DBT, a petite needle could be appropriately prepared; in three other cases without DBT, patient repositioning was needed. Conclusion: DBT showed better image quality and diagnostic values than FFDM, and values equivalent to FFDM+MAG, in the evaluation of breast microcalcifications. Evaluation with DBT as a pre-procedural study for breast stereotactic biopsy can lead to more accurate localization and successful biopsy, and can also waive the need for additional magnification images.
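The abstract reports rates rather than raw counts; the sketch below shows how such sensitivity and specificity figures arise from a 2×2 confusion table, using illustrative counts (not the study's data) chosen so that the quoted DBT rates are reproduced:

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Diagnostic sensitivity and specificity from a 2x2 confusion table.

    Sensitivity = TP / (TP + FN): fraction of diseased cases detected.
    Specificity = TN / (TN + FP): fraction of healthy cases excluded.
    """
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only: 70/81 positives detected, 62/67 negatives
# correctly excluded reproduce the 86.4% / 92.5% reported for DBT.
sensitivity, specificity = sens_spec(tp=70, fn=11, tn=62, fp=5)
```

The AUC comparison in the abstract additionally integrates these trade-offs across all operating thresholds rather than at a single one.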

Keywords: DBT, breast cancer, stereotactic biopsy, mammography

Procedia PDF Downloads 282
661 Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores

Authors: Ankit Sinha, Soham Banerjee, Pratik Chattopadhyay

Abstract:

Automated product recognition in retail stores is an important real-world application in the domain of Computer Vision and Pattern Recognition. In this paper, we consider the problem of automatically identifying the classes of the products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon the existing approaches in terms of effectiveness and memory requirement by developing a two-stage object detection and recognition pipeline comprising a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each model is fine-tuned using appropriate datasets for better prediction, and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. The encoder is trained using a triplet loss function following an online-hard-negative-mining strategy for improved prediction. The proposed models are lightweight and can be connected end-to-end during deployment to automatically identify each product placed in a rack image. Extensive experiments using the Grozi-32k and GP-180 datasets verify the effectiveness of the proposed model.
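The abstract names triplet loss with online hard negative mining but not the exact training code; the following minimal NumPy sketch of the common batch-hard variant illustrates the idea, where each anchor is paired with its farthest in-batch positive and closest in-batch negative:

```python
import numpy as np

def triplet_loss_hard(embeddings: np.ndarray, labels: np.ndarray,
                      margin: float = 0.2) -> float:
    """Batch-hard triplet loss with online hard mining.

    For each anchor, take the farthest positive and the closest negative
    in the batch, then penalize violations of the margin:
        max(0, d(a, hardest_pos) - d(a, hardest_neg) + margin)
    """
    # Pairwise Euclidean distances between all embeddings in the batch
    sq = (embeddings ** 2).sum(axis=1)
    d = np.sqrt(np.maximum(sq[:, None] + sq[None, :]
                           - 2 * embeddings @ embeddings.T, 0.0))
    same = labels[:, None] == labels[None, :]
    n = len(labels)
    losses = []
    for i in range(n):
        pos = d[i][same[i] & (np.arange(n) != i)]   # other same-class items
        neg = d[i][~same[i]]                        # different-class items
        if len(pos) and len(neg):
            losses.append(max(0.0, pos.max() - neg.min() + margin))
    return float(np.mean(losses))
```

When classes are already well separated in embedding space, the loss is zero; overlapping classes yield a positive loss that pulls positives together and pushes hard negatives apart.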

Keywords: retail stores, faster-RCNN, object localization, ResNet-18, triplet loss, data augmentation, product recognition

Procedia PDF Downloads 129
660 Evaluation of Residual Stresses in Human Face as a Function of Growth

Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan

Abstract:

Growth and remodeling of biological structures have gained much attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design and computer-assisted surgical interventions. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically growth is one of the main sources. Extracting body organs' shapes from medical imaging does not produce any information regarding the residual stresses existing in those organs. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses may cause erroneous results in numerical simulations, and accounting for residual stresses due to tissue growth can improve the accuracy of mechanical analysis results. This paper presents an original computational framework based on gradual growth to determine the residual stresses due to growth. To illustrate the method, we apply it to a finite element model of a healthy human face reconstructed from medical images. The distribution of residual stress in the facial tissues is computed; this stress can counteract the effect of gravity and maintain tissue firmness. Our assumption is that tissue wrinkles caused by aging could be a consequence of decreasing residual stress that no longer counteracts gravity. Taking these stresses into account therefore seems extremely important in maxillofacial surgery; it would indeed help surgeons to estimate tissue changes after surgery.

Keywords: finite element method, growth, residual stress, soft tissue

Procedia PDF Downloads 252