Search results for: Mueller images
1406 Synthesis, Characterization and Impedance Analysis of Polypyrrole/La0.7Ca0.3MnO3 Nanocomposites
Authors: M. G. Smitha, M. V. Murugendrappa
Abstract:
Perovskite manganite La0.7Ca0.3MnO3 was synthesized by the sol-gel method. Polymerization of pyrrole was carried out by the in-situ polymerization method. The polypyrrole (PPy)/La0.7Ca0.3MnO3 (LCM) composite was synthesized by the same in-situ polymerization method, polymerizing pyrrole (Py) in the presence of LCM with ammonium persulphate as the oxidizing agent. The PPy/LCM composites were synthesized with compositions of 10, 20, 30, 40, and 50 wt.% of LCM in Py. The surface morphologies of these composites were analyzed using a scanning electron microscope (SEM). The images show that the LCM particles are embedded in the PPy chains. The impedance of PPy/LCM was measured over the temperature range 30 to 180 °C using an impedance analyzer. The study shows that the impedance is frequency and temperature dependent and decreases with increasing frequency and temperature.
Keywords: polypyrrole, sol-gel, impedance, composites
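As a minimal illustration of the reported frequency dependence, the sketch below computes the impedance magnitude of a simple parallel resistor-capacitor equivalent circuit, a common first-order model for such composites; the component values are hypothetical assumptions, not measured parameters from the paper.

```python
import numpy as np

# Hypothetical parallel R-C equivalent circuit: Z = R / (1 + j*w*R*C).
# R and C are illustrative values only, not measured PPy/LCM parameters.
R = 1.0e5      # ohm, assumed bulk resistance
C = 1.0e-9     # farad, assumed bulk capacitance

freq = np.logspace(2, 6, 9)          # 100 Hz .. 1 MHz
omega = 2.0 * np.pi * freq
Z = R / (1.0 + 1j * omega * R * C)   # complex impedance of the parallel R-C

for f, z in zip(freq, Z):
    print(f"{f:10.0f} Hz  |Z| = {abs(z):10.1f} ohm")
# |Z| falls with frequency, mirroring the trend reported for the composites.
```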
Procedia PDF Downloads 373
1405 Visual Search Based Indoor Localization in Low Light via RGB-D Camera
Authors: Yali Zheng, Peipei Luo, Shinan Chen, Jiasheng Hao, Hong Cheng
Abstract:
Most traditional visual indoor navigation algorithms consider localization only in ordinary daytime conditions, whereas this paper focuses on indoor re-localization in low light. Because RGB images are degraded in low light, the less discriminative infrared and depth image pairs captured by an RGB-D camera are taken as the input, and the most similar candidates are retrieved as the output from a database built within the bag-of-words framework. Epipolar constraints can then be used to relocalize the query infrared and depth image sequence. We evaluate our method on two datasets captured with a Kinect v2. The results demonstrate promising re-localization performance for indoor navigation systems in low-light environments.
Keywords: indoor navigation, low light, RGB-D camera, vision based
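A minimal sketch of the bag-of-words retrieval step is given below, assuming visual-word histograms have already been computed for the database and the query; the tf-idf weighting and cosine similarity shown here are illustrative choices, not necessarily the exact scheme used by the authors.

```python
import numpy as np

def tfidf_retrieve(db_hists, query_hist, top_k=3):
    """Rank database images by cosine similarity of tf-idf weighted
    visual-word histograms (rows of db_hists are per-image histograms)."""
    db = np.asarray(db_hists, dtype=float)
    q = np.asarray(query_hist, dtype=float)
    n_images = db.shape[0]
    # Inverse document frequency per visual word.
    df = np.count_nonzero(db > 0, axis=0) + 1e-9
    idf = np.log(n_images / df)
    db_w = db * idf
    q_w = q * idf
    # Cosine similarity between the query and every database image.
    sims = db_w @ q_w / (np.linalg.norm(db_w, axis=1) * np.linalg.norm(q_w) + 1e-12)
    order = np.argsort(sims)[::-1][:top_k]
    return order, sims[order]

# Toy example with a 5-word vocabulary and 4 database images.
db = [[3, 0, 1, 0, 2], [0, 4, 0, 1, 0], [2, 1, 1, 0, 3], [0, 0, 5, 2, 0]]
idx, scores = tfidf_retrieve(db, [2, 0, 1, 0, 2])
print(idx, scores)   # most similar candidates first
```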
Procedia PDF Downloads 458
1404 System Detecting Border Gateway Protocol Anomalies Using Local and Remote Data
Authors: Alicja Starczewska, Aleksander Nawrat, Krzysztof Daniec, Jarosław Homa, Kacper Hołda
Abstract:
Border Gateway Protocol (BGP) is the main routing protocol that enables routing establishment between all autonomous systems, which are the basic administrative units of the internet. Due to the poor protection of BGP, it is important to use additional BGP security systems. Many solutions to this problem have been proposed over the years, but none of them has been implemented on a global scale. This article describes a system capable of building images of real-time BGP network topology in order to detect BGP anomalies. Our proposal performs a detailed analysis of BGP messages that arrive at local network cards, supplemented by information collected by remote collectors in different locations.
Keywords: BGP, BGP hijacking, cybersecurity, detection
Procedia PDF Downloads 74
1403 The Microstructural and Mechanical Characterization of Organo-Clay-Modified Bitumen, Calcareous Aggregate, and Organo-Clay Blends
Authors: A. Gürses, T. B. Barın, Ç. Doğar
Abstract:
Bitumen, a viscous organic mixture with various chemical compositions, has been widely used as the binder of aggregate in road pavement due to its good viscoelastic properties. Bitumen is a liquid at high temperature and becomes brittle at low temperatures, and this temperature sensitivity can cause rutting and cracking of the pavement and limit its application. Therefore, the properties of existing asphalt materials need to be enhanced. Pavement with polymer-modified bitumen exhibits greater resistance to rutting and thermal cracking, decreased fatigue damage, stripping, and temperature susceptibility; however, such binders are expensive and their applications have disadvantages. Bituminous mixtures are composed of very irregular aggregates bound together with hydrocarbon-based asphalt, with a low volume fraction of voids dispersed within the matrix. Montmorillonite (MMT) is a layered silicate with low cost and abundance, which consists of layers of tetrahedral silicate and octahedral hydroxide sheets. Recently, layered silicates have been widely used for the modification of polymers, as well as in many other fields. However, there are currently few studies on the preparation of MMT-modified asphalt. In this study, organo-clay-modified bitumen, and calcareous aggregate and organo-clay blends, were prepared by a hot blending method with OMMT, which was synthesized using a cationic surfactant (cetyltrimethylammonium bromide, CTAB) with a long hydrocarbon chain, and MMT. When the exchangeable cations in the interlayer region of pristine MMT were exchanged with the hydrocarbon-bearing surfactant ions, the MMT became organophilic and more compatible with bitumen. The effects of the super-hydrophobic OMMT on the microstructural and mechanical properties (Marshall stability and volumetric parameters) of the prepared blends were investigated. Stability and volumetric parameters of the prepared blends were measured using the Marshall test. In addition, in order to investigate the morphological and microstructural properties of the organo-clay-modified bitumen and the calcareous aggregate and organo-clay blends, their SEM and HRTEM images were taken. It was observed that the stability and volumetric parameters of the prepared mixtures improved significantly compared to conventional hot mixes and even the stone matrix mixture. A microstructural analysis based on SEM images indicates that the organo-clay platelets dispersed in the bitumen have a dominant role in increasing the effectiveness of bitumen-aggregate interactions.
Keywords: hot mix asphalt, stone matrix asphalt, organo-clay, Marshall test, calcareous aggregate, modified bitumen
Procedia PDF Downloads 236
1402 Preliminary Results on a Study of Antimicrobial Susceptibility Testing of Bacillus anthracis Strains Isolated during Anthrax Outbreaks in Italy from 2001 to 2017
Authors: Viviana Manzulli, Luigina Serrecchia, Adelia Donatiello, Valeria Rondinone, Sabine Zange, Alina Tscherne, Antonio Parisi, Antonio Fasanella
Abstract:
Anthrax is a zoonotic disease that affects a wide range of animal species (primarily ruminant herbivores) and can be transmitted to humans through consumption or handling of contaminated animal products. The etiological agent B. anthracis is able to survive in unfavorable environmental conditions by forming endospores which remain viable in the soil for many decades. Furthermore, B. anthracis is considered one of the agents most feared for potential misuse as a biological weapon, and the importance of the disease and its treatment in humans was underscored by the bioterrorism events in the United States in 2001. Due to the often fatal outcome of human cases, antimicrobial susceptibility testing plays an especially important role in the management of anthrax infections. In Italy, animal anthrax is endemic (predominantly found in the southern regions and on the islands) and is characterized by sporadic outbreaks occurring mainly during summer. Between 2012 and 2017, single human cases of cutaneous anthrax occurred. In this study, 90 diverse strains of B. anthracis, isolated in Italy from 2001 to 2017, were screened for their susceptibility to sixteen clinically relevant antimicrobial agents using the broth microdilution method. The B. anthracis strains selected for this study belong to the strain collection stored at the Anthrax Reference Institute of Italy, located inside the Istituto Zooprofilattico Sperimentale of Puglia and Basilicata. The strains were isolated at different time points and places from various matrices (human, animal and environmental). All strains are representative of over fifty distinct MLVA-31 genotypes. The following antibiotics were used for testing: gentamicin, ceftriaxone, streptomycin, penicillin G, clindamycin, chloramphenicol, vancomycin, linezolid, cefotaxime, tetracycline, erythromycin, rifampin, amoxicillin, ciprofloxacin, doxycycline and trimethoprim. A standard concentration of each antibiotic was prepared in a specific diluent and then twofold serially diluted. Each well therefore contained: a bacterial suspension of 1–5×10^4 CFU/mL in Mueller-Hinton Broth (MHB), the antibiotic to be tested at a known concentration, and resazurin, an indicator of cell growth. After incubation overnight at 37 °C, the wells were screened for color changes caused by the resazurin: a change from purple to pink/colorless indicated cell growth. The lowest concentration of antibiotic that prevented growth represented the minimal inhibitory concentration (MIC). This study suggests that B. anthracis remains susceptible in vitro to many antibiotics, in addition to doxycycline (MICs ≤ 0.03 µg/ml), ciprofloxacin (MICs ≤ 0.03 µg/ml) and penicillin G (MICs ≤ 0.06 µg/ml), which are recommended by the CDC for the treatment of human cases and for prophylactic use after exposure to the spores. In fact, the good activity of gentamicin (MICs ≤ 0.25 µg/ml), streptomycin (MICs ≤ 1 µg/ml), clindamycin (MICs ≤ 0.125 µg/ml), chloramphenicol (MICs ≤ 4 µg/ml), vancomycin (MICs ≤ 2 µg/ml), linezolid (MICs ≤ 2 µg/ml), tetracycline (MICs ≤ 0.125 µg/ml), erythromycin (MICs ≤ 0.25 µg/ml), rifampin (MICs ≤ 0.25 µg/ml), and amoxicillin (MICs ≤ 0.06 µg/ml) towards all tested B. anthracis strains demonstrates that they are appropriate alternative choices for prophylaxis and/or treatment.
All tested B. anthracis strains showed intermediate susceptibility to the cephalosporins (MICs ≥ 16 µg/ml) and resistance to trimethoprim (MICs ≥ 128 µg/ml).
Keywords: Bacillus anthracis, antibiotic susceptibility, treatment, minimum inhibitory concentration
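The sketch below illustrates how an MIC can be read from a twofold serial dilution once growth/no-growth has been scored per well (e.g., from the resazurin colour change); the concentrations and growth pattern shown are hypothetical, not data from the study.

```python
# Twofold serial dilution read-out: the MIC is the lowest antibiotic
# concentration whose well shows no growth. Values below are illustrative.
concentrations = [4.0, 2.0, 1.0, 0.5, 0.25, 0.125, 0.06, 0.03]  # ug/mL, high -> low
growth = [False, False, False, False, False, True, True, True]  # resazurin turned pink?

def read_mic(concentrations, growth):
    """Return the lowest concentration with no visible growth."""
    no_growth = [c for c, g in zip(concentrations, growth) if not g]
    return min(no_growth) if no_growth else None  # None = growth at all tested levels

print("MIC =", read_mic(concentrations, growth), "ug/mL")  # -> 0.25 ug/mL
```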
Procedia PDF Downloads 209
1401 Enhancement Effect of Superparamagnetic Iron Oxide Nanoparticle-Based MRI Contrast Agent at Different Concentrations and Magnetic Field Strengths
Authors: Bimali Sanjeevani Weerakoon, Toshiaki Osuga, Takehisa Konishi
Abstract:
Magnetic Resonance Imaging contrast agents (MRI-CM) are significant in clinical and biological imaging as they have the ability to alter the normal tissue contrast, thereby affecting the signal intensity to enhance the visibility and detectability of images. Superparamagnetic Iron Oxide (SPIO) nanoparticles, coated with dextran or carboxydextran, are currently available for clinical MR imaging of the liver. Most SPIO contrast agents are T2-shortening agents, and Resovist (ferucarbotran) is a clinically tested, organ-specific SPIO agent which has a low-molecular-weight carboxydextran coating. The enhancement effect of Resovist depends on its relaxivity, which in turn depends on factors such as magnetic field strength, concentration, nanoparticle properties, pH, and temperature. Therefore, this study was conducted to investigate the impact of field strength and different contrast concentrations on the enhancement effect of Resovist. The study explored, by mathematical simulation, the MRI signal intensity of Resovist in the physiological range of plasma for a T2-weighted spin-echo sequence at three magnetic field strengths: 0.47 T (r1=15, r2=101), 1.5 T (r1=7.4, r2=95), and 3 T (r1=3.3, r2=160), over a range of contrast concentrations. Relaxivities r1 and r2 (L mmol-1 s-1) were obtained from a previous study, and the selected concentrations were 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 2.0, and 3.0 mmol/L. T2-weighted images were simulated using a TR/TE of 2000 ms/100 ms. According to the reference literature, with increasing magnetic field strength the r1 relaxivity tends to decrease, while r2 did not show any systematic relationship with the selected field strengths. In parallel, the results of this study revealed that the signal intensity of Resovist tends to be higher at lower concentrations than at higher concentrations. The highest signal intensity was observed at the low field strength of 0.47 T. The maximum signal intensities for 0.47 T, 1.5 T and 3 T were found at concentrations of 0.05, 0.06 and 0.05 mmol/L, respectively. Furthermore, it was revealed that at concentrations higher than these, the signal intensity decreased exponentially. An inverse relationship can be found between field strength and T2 relaxation time: as the field strength increased, the T2 relaxation time decreased accordingly. However, the resulting T2 relaxation times were not significantly different between 0.47 T and 1.5 T in this study. Moreover, a linear correlation of the transverse relaxation rate (1/T2, s-1) with the concentration of Resovist can be observed. According to these results, it can be concluded that the concentration of SPIO nanoparticle contrast agents and the field strength of MRI are two important parameters which can affect the signal intensity of a T2-weighted SE sequence. Therefore, these two parameters should be considered prudently in MR imaging.
Keywords: concentration, Resovist, field strength, relaxivity, signal intensity
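The simulation described above can be sketched as follows, using the standard spin-echo signal model S ∝ (1 − exp(−TR·R1))·exp(−TE·R2) with relaxation rates R1 = 1/T1₀ + r1·C and R2 = 1/T2₀ + r2·C; the relaxivities and TR/TE are taken from the abstract, while the baseline tissue relaxation times T1₀ and T2₀ are assumed values, not figures from the study.

```python
import numpy as np

def spin_echo_signal(C, r1, r2, TR=2.0, TE=0.1, T1_0=1.0, T2_0=0.2):
    """T2-weighted spin-echo signal vs. contrast concentration C (mmol/L).
    Relaxivities r1, r2 in L mmol^-1 s^-1; TR, TE and baseline T1_0, T2_0 in s.
    Baseline relaxation times are assumed for illustration."""
    R1 = 1.0 / T1_0 + r1 * C      # longitudinal relaxation rate
    R2 = 1.0 / T2_0 + r2 * C      # transverse relaxation rate
    return (1.0 - np.exp(-TR * R1)) * np.exp(-TE * R2)

C = np.array([0.05, 0.1, 0.5, 1.0, 3.0])          # mmol/L
for label, (r1, r2) in {"0.47 T": (15, 101), "1.5 T": (7.4, 95), "3 T": (3.3, 160)}.items():
    print(label, np.round(spin_echo_signal(C, r1, r2), 4))
# The signal peaks at low concentration and decays roughly exponentially at higher C.
```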
Procedia PDF Downloads 351
1400 Covid-19 Lockdown Experience of Elderly Females as Reflected in Their Artwork
Authors: Liat Shamri-Zeevi, Neta Ram-Vlasov
Abstract:
Today the world as a whole is attempting to cope with COVID-19, which has affected all facets of personal and social life, from country-wide confinement to maintaining social distance and taking protective measures to maintain hygiene. One of the populations faced with the most severe restrictions is seniors. Various studies have shown that creativity plays a crucial role in dealing with crisis events. Painting - regardless of media - allows for emotional and cognitive processing of these situations and enables the expression of experiences in a tangible creative way that conveys and endows meaning to the artwork. The current study was conducted in Israel immediately after a 6-week lockdown. It was designed to specifically examine the impact of the COVID-19 pandemic on the quality of life of elderly women as reflected in their artworks. The sample was composed of 21 Israeli women aged 60-90, in good mental health (without diagnosed dementia or Alzheimer's), all of whom were Hebrew-speaking and retired with an extended family, who indicated that they painted and had engaged in artwork on an ongoing basis throughout the lockdown (from March 12 to May 30, 2020). The participants' artworks were collected, and a semi-structured in-depth interview was conducted that lasted one to two hours. The participants were asked about their feelings during the pandemic and the artworks they produced during this time, and completed a questionnaire on well-being and mental health. The initial analysis of the interviews and artworks revealed themes related to the specific role of each piece of artwork. The first theme included notions that the artwork was an activity and a framework for doing, which supported positive emotions and provided a sense of vitality during the closure. Most of the participants painted images of nature and growth, which were ascribed concrete and symbolic meaning. The second theme was that the artwork enabled the processing of difficult and/or conflicting emotions related to the situation, including anxiety about death and loneliness, which were symbolically expressed in the artworks, for example as images of the coronavirus and respiratory machines. The third theme suggested that the time and space prompted by the lockdown gave the participants time for a gathering together of the self and freed up time for creative activities. Many participants stated that they painted more, and more frequently, during the lockdown. At the conference, additional themes and findings will be presented.
Keywords: coronavirus, artwork, quality of life of elderly
Procedia PDF Downloads 141
1399 Anisotropic Approach for Discontinuity Preserving in Optical Flow Estimation
Authors: Pushpendra Kumar, Sanjeev Kumar, R. Balasubramanian
Abstract:
Estimation of optical flow from a sequence of images using variational methods is one of the most successful approaches. Discontinuity between different motions is one of the challenging problems in flow estimation. In this paper, we design a new anisotropic diffusion operator, which is able to provide smooth flow over a region and efficiently preserve discontinuities in the optical flow. This operator is designed on the basis of pixel intensity differences and an isotropic operator using an exponential function. The combination of these is used to control the propagation of flow. Experimental results on different datasets verify the robustness and accuracy of the algorithm and also validate the effect of the anisotropic operator on discontinuity preservation.
Keywords: optical flow, variational methods, computer vision, anisotropic operator
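A hedged sketch of an exponential, intensity-difference-driven diffusivity is shown below; it illustrates the general idea of down-weighting flow propagation across strong intensity edges, not the authors' exact operator, and the contrast parameter k is an assumption.

```python
import numpy as np

def diffusivity_weights(image, k=10.0):
    """Edge-stopping weights g = exp(-(dI/k)^2) for the four neighbours of
    each pixel; small weights across large intensity differences limit flow
    propagation over motion boundaries. k is an assumed contrast parameter."""
    I = image.astype(float)
    dN = np.abs(np.roll(I,  1, axis=0) - I)   # intensity difference to the north
    dS = np.abs(np.roll(I, -1, axis=0) - I)
    dE = np.abs(np.roll(I, -1, axis=1) - I)
    dW = np.abs(np.roll(I,  1, axis=1) - I)
    g = lambda d: np.exp(-(d / k) ** 2)
    return g(dN), g(dS), g(dE), g(dW)

# Toy image with a vertical edge: weights drop sharply across the edge.
img = np.zeros((4, 6)); img[:, 3:] = 100.0
gN, gS, gE, gW = diffusivity_weights(img)
print(np.round(gE, 3))
```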
Procedia PDF Downloads 872
1398 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data
Authors: Huinan Zhang, Wenjie Jiang
Abstract:
Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and research into, and utilization of, the inner thermal core structure characteristics of tropical cyclones still poses challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. First, the thermal core information in the pressure direction is comprehensively expressed through the maximal intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone; in the first stage, these images provide a coarse-grained wind speed estimate. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and fused with the coarse-grained features from the first stage to obtain the final two-stage network wind speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span 2012 to 2021 and are distributed in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021. Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclone levels into three major categories: pre-hurricane, minor hurricane, and major hurricane, with a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes using these data.
Keywords: artificial intelligence, deep learning, data mining, remote sensing
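A minimal sketch of the maximal intensity projection step is given below: a 3D brightness-temperature volume is collapsed along the pressure axis by taking the maximum, producing a coarse-grained thermal-core image. The array layout (pressure level x latitude x longitude) and the grid sizes are assumptions for illustration, not the paper's actual data dimensions.

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    """Collapse a 3D brightness-temperature volume along the pressure axis
    (axis 0 assumed: pressure level x lat x lon) by taking the maximum."""
    return np.max(volume, axis=axis)

# Hypothetical ATMS-derived retrieval: 20 pressure levels on a 64x64 grid.
rng = np.random.default_rng(0)
tb_volume = rng.normal(loc=260.0, scale=5.0, size=(20, 64, 64))  # Kelvin
mip_image = max_intensity_projection(tb_volume)
print(mip_image.shape)   # (64, 64) coarse-grained thermal-core image
```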
Procedia PDF Downloads 61
1397 Virtual Player for Learning by Observation to Assist Karate Training
Authors: Kazumoto Tanaka
Abstract:
It is well known that sport skill learning is facilitated by video observation of players' actions. The optimal viewpoint for observing an action depends on the sport scene. On the other hand, it is generally impossible to change the viewpoint during observation, because most videos are filmed from fixed points. This study tackles the problem, focusing on karate matches as a first step. The study developed a method for observing a karate player's actions from any point of view by using a 3D-CG model (i.e., a virtual player) obtained from video images, and verified the effectiveness of the method on karate matches.
Keywords: computer graphics, karate training, learning by observation, motion capture, virtual player
Procedia PDF Downloads 274
1396 Semigroups of Linear Transformations with Fixed Subspaces: Green's Relations and Ideals
Authors: Yanisa Chaiya, Jintana Sanwong
Abstract:
Let V be a vector space over a field and W a subspace of V. Let Fix(V,W) denote the set of all linear transformations on V that fix all elements of W. In this paper, we show that Fix(V,W) is a semigroup under the composition of maps and describe Green's relations on this semigroup in terms of images, kernels and the dimensions of subspaces of the quotient space V/W, where V/W = {v+W : v is an element of V} with v+W = {v+w : w is an element of W}. Let dim(U) denote the dimension of a vector space U and Vα = {vα : v is an element of V}, where vα is the image of v under a linear transformation α. For any cardinal number a, let a' = min{b : b > a}. We also show that the ideals of Fix(V,W) are precisely the sets Fix(r) = {α ∊ Fix(V,W) : dim(Vα/W) < r}, where 1 ≤ r ≤ a' and a = dim(V/W). Moreover, we prove that if V is a finite-dimensional vector space, then every ideal of Fix(V,W) is principal.
Keywords: Green's relations, ideals, linear transformation semigroups, principal ideals
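Restating the main characterisation from the abstract in display form (same notation as above):

```latex
\[
\operatorname{Fix}(r) \;=\; \{\, \alpha \in \operatorname{Fix}(V,W) : \dim(V\alpha/W) < r \,\},
\qquad 1 \le r \le a', \quad a = \dim(V/W),
\]
\[
\text{and the ideals of } \operatorname{Fix}(V,W) \text{ are precisely the sets } \operatorname{Fix}(r).
\]
```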
Procedia PDF Downloads 291
1395 Learning with Music: The Effects of Musical Tension on Long-Term Declarative Memory Formation
Authors: Nawras Kurzom, Avi Mendelsohn
Abstract:
The effects of background music on learning and memory are inconsistent, partly due to the intrinsic complexity and variety of music and partly due to individual differences in music perception and preference. A prominent musical feature that is known to elicit strong emotional responses is musical tension. Musical tension can be brought about by building anticipation of rhythm, harmony, melody, and dynamics. Delaying the resolution of dominant-to-tonic chord progressions, as well as using dissonant harmonics, can elicit feelings of tension, which can, in turn, affect memory formation for concomitant information. The aim of the presented studies was to explore how declarative memory formation is influenced by musical tension, brought about within continuous music as well as in the form of isolated chords with varying degrees of dissonance/consonance. The effects of musical tension on long-term memory for declarative information were studied in two ways: 1) by evoking tension within continuous music pieces by delaying the release of harmonic progressions from dominant to tonic chords, and 2) by using isolated single complex chords with various degrees of dissonance/roughness. Musical tension was validated through subjective reports of tension, as well as physiological measurements of skin conductance response (SCR) and pupil dilation responses to the chords. In addition, music information retrieval (MIR) was used to quantify musical properties associated with tension and its release. Each experiment included an encoding phase, wherein individuals studied stimuli (words or images) under different musical conditions. Memory for the studied stimuli was tested 24 hours later via recognition tasks. In three separate experiments, we found positive relationships between tension perception and the physiological measurements of SCR and pupil dilation. As for memory performance, we found that background music, in general, led to superior memory performance compared to silence. We detected a trade-off effect between tension perception and memory, such that individuals who perceived musical tension as such displayed reduced memory performance for images encoded during musical tension, whereas tense music benefited memory for those who were less sensitive to the perception of musical tension. Musical tension exerts complex interactions with perception, emotional responses, and cognitive performance in individuals with and without musical training. Delineating the conditions and mechanisms that underlie the interactions between musical tension and memory can benefit our understanding of musical perception at large and of the diverse effects that music has on the ongoing processing of declarative information.
Keywords: musical tension, declarative memory, learning and memory, musical perception
Procedia PDF Downloads 97
1394 Numerical Calculation of Heat Transfer in Water Heater
Authors: Michal Spilacek, Martin Lisy, Marek Balas, Zdenek Skala
Abstract:
This article determines the state of the flue gas entering the KWH heat exchanger from the combustion chamber in order to calculate the heat transfer rate of the heat exchanger. A combination of measurement, calculation, and computer simulation was used to create a useful way to approximate the heat transfer rate. The measurements were taken by a number of sensors mounted on the experimental device and by a thermal imaging camera. The results of the numerical calculation are in good correspondence with the real power output of the experimental device. The results show that the research is headed in a good direction and can be used to propose changes in the construction of the heat exchanger, but it still needs enhancements.
Keywords: heat exchanger, heat transfer rate, numerical calculation, thermal images
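As a hedged illustration of the kind of energy balance behind such a calculation, the sketch below estimates a heat transfer rate from flue-gas temperatures with an assumed mass flow and specific heat; all numerical values are placeholders, not data from the experimental device.

```python
# Simple flue-gas energy balance: Q = m_dot * cp * (T_in - T_out).
# All numerical values below are illustrative assumptions.
m_dot = 0.05        # kg/s, flue-gas mass flow rate (assumed)
cp = 1100.0         # J/(kg*K), mean specific heat of flue gas (assumed)
T_in = 650.0        # degC, flue gas entering the KWH heat exchanger (assumed)
T_out = 180.0       # degC, flue gas leaving the heat exchanger (assumed)

Q = m_dot * cp * (T_in - T_out)   # heat transferred to the water, W
print(f"Heat transfer rate: {Q / 1000.0:.1f} kW")   # ~= 25.9 kW
```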
Procedia PDF Downloads 613
1393 Automatic Segmentation of 3D Tomographic Images Contours at Radiotherapy Planning in Low Cost Solution
Authors: D. F. Carvalho, A. O. Uscamayta, J. C. Guerrero, H. F. Oliveira, P. M. Azevedo-Marques
Abstract:
The creation of vector contour slices (ROIs) on body silhouettes of oncologic patients is an important step during radiotherapy planning in clinics and hospitals to ensure the accuracy of oncologic treatment. The radiotherapy planning of patients is performed by complex software packages focused on analysis of tumor regions, protection of organs at risk (OARs), and calculation of radiation doses for anomalies (tumors). These packages are supplied by a few manufacturers and run on sophisticated workstations with vector processing, at a cost of approximately twenty thousand dollars. The Brazilian project SIPRAD (Radiotherapy Planning System) presents a proposal adapted to the reality of emerging countries, which generally do not have the monetary conditions to acquire radiotherapy planning workstations, resulting in waiting queues for the treatment of new patients. The SIPRAD project is composed of a set of integrated and interoperable software modules that are able to execute all stages of radiotherapy planning on simple personal computers (PCs) in place of the workstations. The goal of this work is to present an image processing technique, computationally feasible, that is able to perform automatic contour delineation of patient body silhouettes (SIPRAD-Body). The SIPRAD-Body technique is performed on grayscale tomography slices, extending their use with a greedy algorithm in three dimensions. SIPRAD-Body creates an irregular polyhedron with an adapted Canny edge algorithm, without the use of preprocessing filters such as contrast and brightness. In addition, comparing the SIPRAD-Body technique with existing solutions, a contour similarity of at least 78% is reached. Four criteria are used for this comparison: contour area, contour length, difference between the mass centers, and the Jaccard index. SIPRAD-Body was tested on a set of oncologic exams provided by the Clinical Hospital of the University of Sao Paulo (HCRP-USP). The exams came from patients with different ethnicities, ages, tumor severities, and body regions. Even for services that already have workstations, it is possible to have SIPRAD working alongside the PCs because of the interoperable communication between both systems through the DICOM protocol, which provides an increase in workflow. Therefore, the conclusion is that the SIPRAD-Body technique is feasible because of its degree of similarity, in both new radiotherapy planning services and existing services.
Keywords: radiotherapy, image processing, DICOM RT, Treatment Planning System (TPS)
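A hedged sketch of two of the four comparison criteria (the Jaccard index and the mass-centre difference) on binary contour masks is shown below; the implementation details are illustrative rather than those of SIPRAD-Body.

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Intersection over union of two binary masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def centroid_distance(mask_a, mask_b):
    """Euclidean distance between the mass centres of two binary masks."""
    ca = np.argwhere(mask_a).mean(axis=0)
    cb = np.argwhere(mask_b).mean(axis=0)
    return float(np.linalg.norm(ca - cb))

# Toy silhouettes: a square and the same square shifted by one pixel.
a = np.zeros((10, 10), int); a[2:8, 2:8] = 1
b = np.zeros((10, 10), int); b[3:9, 2:8] = 1
print(jaccard_index(a, b), centroid_distance(a, b))
```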
Procedia PDF Downloads 295
1392 The Artificial Intelligence Technologies Used in PhotoMath Application
Authors: Tala Toonsi, Marah Alagha, Lina Alnowaiser, Hala Rajab
Abstract:
This report is about the Photomath app, an AI application that uses image recognition technology, specifically optical character recognition (OCR) algorithms. The OCR algorithm translates images into a mathematical equation, and the app automatically provides a step-by-step solution. The application supports decimals, basic arithmetic, fractions, linear equations, and multiple functions such as logarithms. Testing was conducted to examine the usage of this app, and results were collected by surveying ten participants. The results were then analyzed. This paper seeks to answer the question: to what level are the artificial intelligence features accurate, and how fast is the processing in this app? It is hoped that this study will inform users about the efficiency of AI in Photomath.
Keywords: PhotoMath, image recognition, app, OCR, artificial intelligence, mathematical equations
Procedia PDF Downloads 170
1391 Imaging of Underground Targets with an Improved Back-Projection Algorithm
Authors: Alireza Akbari, Gelareh Babaee Khou
Abstract:
Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow subsurface small targets, such as landmines and unexploded ordnance, and also for imaging behind walls in security applications. For the monostatic arrangement, a single point target appears as a hyperbolic curve in the space-time GPR image because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of buried objects is essential in most GPR applications. Therefore, the hyperbolic curve behavior in the space-time GPR image is usually transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display information on the spatial location and the reflectivity of an underground object. Therefore, the main challenge of the GPR imaging technique is to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, was first used for image reconstruction. The standard BP algorithm is limited against strong noise and numerous artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP based on cross-correlation between the received signals is proposed for decreasing noise and suppressing artifacts. To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP scheme, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed that it offers superior artifact suppression and produces images of high quality and resolution. In order to quantitatively describe the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
Keywords: algorithm, back-projection, GPR, remote sensing
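A simplified delay-and-sum back-projection kernel for monostatic B-scan data is sketched below; the geometry, velocity, and sampling values are assumptions chosen to illustrate the focusing step, and the sketch does not include the improved cross-correlation weighting proposed in the paper.

```python
import numpy as np

def backproject(bscan, antenna_x, dt, v, grid_x, grid_z):
    """Delay-and-sum back-projection of a monostatic B-scan.
    bscan[i, j] : sample j of the A-scan recorded at antenna position antenna_x[i]
    dt          : time-sample interval (s); v: wave velocity in the medium (m/s)
    grid_x/z    : 1D image-pixel coordinates (m). Returns the focused image."""
    image = np.zeros((len(grid_z), len(grid_x)))
    n_samples = bscan.shape[1]
    for i, xa in enumerate(antenna_x):
        for ix, x in enumerate(grid_x):
            for iz, z in enumerate(grid_z):
                r = np.hypot(x - xa, z)           # one-way distance to the pixel
                j = int(round(2.0 * r / v / dt))  # two-way travel-time sample index
                if j < n_samples:
                    image[iz, ix] += bscan[i, j]  # sum contributions along the hyperbola
    return image

# Tiny synthetic example: 5 antenna positions, 200 time samples.
bscan = np.zeros((5, 200))
img = backproject(bscan, antenna_x=np.linspace(0, 0.4, 5), dt=1e-10,
                  v=1.0e8, grid_x=np.linspace(0, 0.4, 21), grid_z=np.linspace(0.05, 0.5, 21))
print(img.shape)
```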
Procedia PDF Downloads 450
1390 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography
Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai
Abstract:
Prostate adenocarcinoma is the most common cancer in males, with osseous metastases as the most common site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including Computed Tomography (CT). Monitoring of the osseous disease burden, including follow-up of lesions and identification and characterization of new lesions, is a laborious task for radiologists. Deep learning algorithms are increasingly used to perform tasks such as identification and segmentation of osseous metastatic disease and to provide accurate information regarding metastatic burden. Here, nnUNet was used to produce a model which can segment CT scan images of prostate adenocarcinoma vertebral bone metastatic lesions. nnUNet is an open-source Python package that adds optimizations to the deep learning-based UNet architecture but has not been extensively combined with transfer learning techniques due to the absence of readily available functionality for this method. The IRB-approved study data set includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and Los Angeles County (LAC)/USC medical center. Manual segmentation of metastatic lesions was completed by an expert radiologist, Dr. Vinay Duddalwar (20+ years in radiology and oncologic imaging), to serve as ground truth for the automated segmentation. Despite nnUNet's success on some medical segmentation tasks, it only produced an average Dice Similarity Coefficient (DSC) of 0.31 on the USC dataset. DSC results fell in a bimodal distribution, with most scores falling either over 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation techniques dropped the DSC to 0.15, and reducing the number of epochs reduced the DSC to below 0.1. Datasets have been identified for transfer learning, which involves balancing the size and similarity of the dataset. Identified datasets include the Pancreas data from the Medical Segmentation Decathlon, Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). Some of the challenges of producing an accurate model from the USC dataset include the small dataset size (115 images), 2D data (as nnUNet generally performs better on 3D data), and the limited amount of public data capturing annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including incorporating generative adversarial networks and diffusion models in order to augment the dataset. Performance with different libraries, including MONAI and custom architectures with PyTorch, will be compared. In the future, molecular correlations will be tracked together with radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist evaluation. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into the workflow of radiologists, these findings will help improve the speed and accuracy of vertebral metastatic lesion detection.
Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics
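For reference, the Dice Similarity Coefficient used to score the segmentations can be computed as in the brief sketch below (an illustrative implementation, not the nnUNet evaluation code).

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """DSC = 2|P intersect T| / (|P| + |T|) for binary masks pred and truth."""
    p = np.asarray(pred, bool)
    t = np.asarray(truth, bool)
    intersection = np.logical_and(p, t).sum()
    return (2.0 * intersection + eps) / (p.sum() + t.sum() + eps)

# Toy lesion masks: complete overlap gives 1.0, a shifted mask gives less.
truth = np.zeros((8, 8), int); truth[2:6, 2:6] = 1
pred  = np.zeros((8, 8), int); pred[3:7, 2:6] = 1
print(round(dice_coefficient(pred, truth), 3))   # ~0.75
```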
Procedia PDF Downloads 95
1389 The Impact of Coronal STIR Imaging in Routine Lumbar MRI: Uncovering Hidden Causes to Enhanced Diagnostic Yield of Back Pain and Sciatica
Authors: Maysoon Nasser Samhan, Somaya Alkiswani, Abdullah Alzibdeh
Abstract:
Background: Routine lumbar MRIs for back pain may yield normal results despite persistent symptoms, suggesting other causes of the pain that are not shown on the routine images. Research suggests including coronal STIR imaging to detect additional pathologies such as sacroiliitis. Objectives: This study aims to enhance diagnostic accuracy and aid in determining treatment for patients with persistent back pain who have a normal routine lumbar MRI (T1 and T2 images) by incorporating coronal STIR into the examination. Methods: This prospective study involved 274 patients, 115 males and 159 females, with an age range of 6–92 years, whose medical records and imaging data were reviewed following a lumbar spine MRI. The study included patients with back pain and sciatica as their primary complaints, all of whom underwent lumbar spine MRIs at our hospital to identify potential pathologies. Using a GE Signa HD 1.5T MRI system, each patient received a standard MRI protocol that included T1 and T2 sagittal and axial sequences, as well as a coronal STIR sequence. We collected relevant MRI findings, including abnormalities and structural variations, from the radiology reports. We classified these findings into tables and documented them as counts and percentages, using Fisher's exact test to assess differences between categorical variables. Statistical analysis was conducted using Prism GraphPad software version 10.1.2. The study adhered to ethical guidelines, institutional review board approvals, and patient confidentiality regulations. Results: Exclusion of the coronal STIR sequence led to 83 subjects (30.29%) being classified as within normal limits on MRI examination. Thirty-six patients without abnormalities on the T1 and T2 sequences showed abnormalities on the coronal STIR sequence, with 26 cases attributed to spinal pathologies and 10 to non-spinal pathologies. In addition, Fisher's exact test demonstrated a significant association between sacroiliitis diagnosis and abnormalities identified solely through the coronal STIR sequence (P < 0.0001). Conclusion: Implementing coronal STIR imaging as part of routine lumbar MRI protocols has the potential to improve patient care by facilitating a more comprehensive evaluation and management of persistent back pain.
Keywords: magnetic resonance imaging, lumbar MRI, radiology, neurology
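The association test reported above can be reproduced in outline with SciPy's Fisher's exact test on a 2x2 table; the counts below are hypothetical placeholders, since the full contingency table is not given in the abstract.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = sacroiliitis diagnosis (yes/no),
# columns = abnormality seen only on coronal STIR (yes/no).
# These counts are placeholders, not the study's actual data.
table = [[18, 4],
         [18, 234]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
```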
Procedia PDF Downloads 7
1388 Pose Normalization Network for Object Classification
Authors: Bingquan Shen
Abstract:
Convolutional Neural Networks (CNNs) have demonstrated their effectiveness in synthesizing 3D views of object instances at various viewpoints. Given the problem where one has limited viewpoints of a particular object for classification, we present a pose normalization architecture that transforms the object to viewpoints existing in the training dataset before classification, to yield better classification performance. We have demonstrated that this Pose Normalization Network (PNN) can capture the style of the target object and is able to re-render it to a desired viewpoint. Moreover, we have shown that the PNN improves the classification results for the 3D chairs dataset and the ShapeNet airplanes dataset when given only images at limited viewpoints, as compared to a CNN baseline.
Keywords: convolutional neural networks, object classification, pose normalization, viewpoint invariant
Procedia PDF Downloads 351
1387 The Significance of Picture Mining in Fashion and Design as a New Research Method
Authors: Katsue Edo, Yu Hiroi
Abstract:
Increasing attention has been paid to using pictures and photographs in social science research since the beginning of the 21st century. Meanwhile, we have been studying the usefulness of Picture Mining, one of the new approaches for such picture-based research. Picture Mining is an explorative research analysis method that extracts useful information from pictures, photographs, and static or moving images. It is often compared with the methods of text mining. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, identified as technological and social changes (Edo et al. 2013). Low-priced digital cameras and iPhones, high information transmission speeds, low costs of information transfer, and the high performance and resolution of mobile phone cameras have changed people's photographing behavior. Consequently, there is less resistance to taking and processing photographs for most people in developing countries. In these studies, this method of collecting data from respondents is often called 'participant-generated photography' or 'respondent-generated visual imagery', which focuses on the collection of data and its analysis (Pauwels 2011, Snyder 2012). However, there are few systematic and conceptual studies that support the significance of these methods. In recent years, we have worked to conceptualize these picture-based research methods and formalize theoretical findings (Edo et al. 2014). We have identified the most efficient fields of Picture Mining, inductively and through case studies, as the following areas: 1) research on consumer and customer lifestyles, 2) new product development, and 3) research in fashion and design. Though we have found that it should be useful in these fields, we must verify these assumptions. In this study, we focus on the field of fashion and design to determine whether Picture Mining methods are really reliable in this area. In order to do so, we conducted empirical research on respondents' attitudes and behavior concerning pictures and photographs. We compared picture-taking attitudes and behavior toward fashion with those toward meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Compared to meals and food, respondents do not often take pictures of fashion and upload them online, for example to Facebook and Instagram, because of the difficulty of taking them. We concluded that we should be more careful in analyzing pictures in the fashion area, for some kind of bias might still exist even though the environment for pictures has changed drastically in recent years.
Keywords: empirical research, fashion and design, Picture Mining, qualitative research
Procedia PDF Downloads 362
1386 Multimetallic and Multiferrocenyl Assemblies of Ferrocenyl-Based Dithiophosphonate and Their Electrochemical Properties
Authors: J. Tomilla Ajayi, Werner E. Van Zyl
Abstract:
This work presents an overview of the reaction of 2,4-diferrocenyl-1,3-dithiadiphosphetane-2,4-disulfide (ferrocenyl Lawesson's reagent) with water to produce the non-symmetric ferrocenyl dithiophosphonic acid in high yield. The acid was readily deprotonated by anhydrous ammonia to yield the corresponding ammonium salt NH4S2PFcOH. This was complexed to Ni(II) in 1:1 and 1:2 molar ratios. The reaction produced the same compound as different isomers (cis and trans), as well as a compound with multimetallic coordination. X-ray-quality crystals were obtained from THF/ether. The compounds were characterized by 1H and 31P NMR and FTIR. Bulk purity was confirmed by either ESI-MS or elemental analysis, and the XRD images were obtained using single-crystal X-ray crystallographic studies. The electrochemical investigation of the compounds was carried out using cyclic voltammetry.
Keywords: ferrocenyl, dithiophosphonate, isomer, coordination
Procedia PDF Downloads 247
1385 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm to 1700 nm were used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The first dataset poses a protein regression problem, while the second poses a variety classification problem. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested, including centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
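One of the chemometric preprocessing steps mentioned above, standard normal variate (SNV), amounts to centring and scaling each spectrum individually; a brief, illustrative sketch is given below (not the authors' pipeline code).

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre each spectrum (row) to zero mean and
    scale it to unit standard deviation, removing multiplicative scatter."""
    X = np.asarray(spectra, dtype=float)
    mean = X.mean(axis=1, keepdims=True)
    std = X.std(axis=1, keepdims=True)
    return (X - mean) / (std + 1e-12)

# Toy example: three 5-band "spectra" with different offsets and gains.
X = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
              [10.0, 20.0, 30.0, 40.0, 50.0],
              [2.0, 2.5, 3.0, 3.5, 4.0]])
print(np.round(snv(X), 3))   # rows become directly comparable
```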
Procedia PDF Downloads 98
1384 3 Dimensional (3D) Assessment of Hippocampus in Alzheimer's Disease
Authors: Mehmet Bulent Ozdemir, Sultan Çagirici, Sahika Pinar Akyer, Fikri Turk
Abstract:
Neuroanatomical appearance can be correlated with clinical or other characteristics of illness. With the introduction of diagnostic imaging machines producing 3D images of anatomic structures, calculating correlations between subjects and the patterns of these structures has become possible. The aim of this study is to examine the 3D structure of the hippocampus in cases with Alzheimer disease at different dementia severities. For this purpose, the MR scans of 62 female and 38 male patients (age range between 52 and 88) were imported to the computer. A 3D model of each right and left hippocampus was developed with a computer-aided programme, SurfDriver 3.5. Every reconstruction was performed by the same investigator. The appearance of the hippocampus varied from normal to abnormal. In conclusion, these results might improve the understanding of the correlation between morphological changes in the hippocampus and clinical staging in Alzheimer disease.
Keywords: Alzheimer disease, hippocampus, computer-assisted anatomy, 3D
Procedia PDF Downloads 477
1383 Multi-Focus Image Fusion Using SFM and Wavelet Packet
Authors: Somkait Udomhunsakul
Abstract:
In this paper, a multi-focus image fusion method using Spatial Frequency Measurements (SFM) and the wavelet packet transform is proposed. In the proposed fusion approach, the two source images are first transformed and decomposed into sixteen subbands using the wavelet packet transform. Next, each subband is partitioned into sub-blocks, and the clearer regions in each block are identified using the Spatial Frequency Measurement (SFM). Finally, the fused image is reconstructed by performing the inverse wavelet transform. The experimental results show that the proposed method outperforms traditional SFM-based methods in terms of objective and subjective assessments.
Keywords: multi-focus image fusion, wavelet packet, spatial frequency measurement
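The spatial frequency measure used as the focus criterion can be sketched as below, following the common definition SF = sqrt(RF^2 + CF^2) with row and column frequencies computed from neighbouring-pixel differences; this is a generic formulation, assumed to be close to the measure used in the paper.

```python
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2); higher values indicate a sharper (in-focus) block."""
    b = np.asarray(block, dtype=float)
    rf = np.sqrt(np.mean(np.diff(b, axis=1) ** 2))   # row frequency (horizontal differences)
    cf = np.sqrt(np.mean(np.diff(b, axis=0) ** 2))   # column frequency (vertical differences)
    return np.hypot(rf, cf)

# A sharp block scores higher than a uniform (defocused) one.
sharp = np.kron(np.eye(4), np.ones((2, 2))) * 255.0
blurred = np.full((8, 8), sharp.mean())
print(spatial_frequency(sharp), spatial_frequency(blurred))   # large value vs 0.0
```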
Procedia PDF Downloads 473
1382 Relation between Initial Stability of the Dental Implant and Bone-Implant Contact Level
Authors: Jui-Ting Hsu, Heng-Li Huang, Ming-Tzu Tsai, Kuo-Chih Su, Lih-Jyh Fuh
Abstract:
The objectives of this study were to measure the initial stability of the dental implant (ISQ and PTV) in artificial foam bone blocks with three different quality levels. In addition, the 3D bone-to-implant contact percentage (BIC%) was measured based on micro-computed tomography images. Furthermore, the relation between the initial stability of the dental implant (ISQ and PTV) and BIC% was calculated. The experimental results indicated that enhancing the material properties of the artificial foam bone increased the initial stability of the dental implant. The Pearson's correlation coefficients between the BIC% and the two approaches (ISQ and PTV) were 0.652 and 0.745, respectively.
Keywords: dental implant, implant stability quotient, peak insertion torque, bone-implant contact, micro-computed tomography
Procedia PDF Downloads 576
1381 Tuning the Surface Roughness of Patterned Nanocellulose Films: An Alternative to Plastic Based Substrates for Circuit Printing in High-Performance Electronics
Authors: Kunal Bhardwaj, Christine Browne
Abstract:
With the increase in global awareness of the environmental impacts of plastic-based products, there has been a massive drive to reduce our use of these products. The use of plastic-based substrates in electronic circuits has been a matter of concern recently. Plastics provide a very smooth and cheap surface for printing high-performance electronics due to their non-permeability to ink and easy mouldability. In this research, we explore the use of nanocellulose (NC) films in electronics, as they provide the advantage of being 100% recyclable and eco-friendly. The main hindrance to the mass adoption of NC film as a substitute for plastic is its higher surface roughness, which leads to ink penetration and dispersion in the channels on the film. This research was conducted to tune the RMS roughness of NC films to a range where they can replace plastics in electronics (310-470 nm). We studied the dependence of the surface roughness of the NC film on the following tunable aspects: 1) the composition by weight of the NC suspension that is sprayed on a silicon wafer, and 2) the width and the depth of the channels on the silicon wafer used as a base. Silicon wafers with channel depths ranging from 6 to 18 µm and channel widths ranging from 5 to 500 µm were used as bases. A spray-coating method was used for NC film production, and two suspensions, namely 1.5 wt% NC and a 50-50 NC-CNC (cellulose nanocrystal) mixture in distilled water, were sprayed through a Wagner sprayer system model 117 at an angle of 90 degrees. The silicon wafer was kept on a conveyor moving at a velocity of 1.3 ± 0.1 cm/s. Once the suspension was uniformly sprayed, the mould was left to dry in an oven at 50 °C overnight. Images of the films were taken with an optical profilometer, an Olympus OLS 5000. These images were converted into the '.lext' format and analyzed using Gwyddion, a data and image analysis software package. The lowest measured RMS roughness, 291 nm, was obtained with the 50-50 CNC-NC mixture sprayed on a silicon wafer with a channel width of 5 µm and a channel depth of 12 µm. Surface roughness values of 320 ± 17 nm were achieved at the lower (5 to 10 µm) channel widths. This research opened the possibility of using 100% recyclable NC films with an additive (50% CNC) in high-performance electronics. The possibility of using additives like carboxymethyl cellulose (CMC) is also being explored, based on the hypothesis that CMC would reduce friction amongst fibers, which in turn would lead to better conformations amongst the NC fibers. CMC addition could thus help tune the surface roughness of the NC film to an even greater extent in the future.
Keywords: nanocellulose films, electronic circuits, nanocrystals, surface roughness
Procedia PDF Downloads 122
1380 A Note on the Fractal Dimension of Mandelbrot Set and Julia Sets in Misiurewicz Points
Authors: O. Boussoufi, K. Lamrini Uahabi, M. Atounti
Abstract:
The main purpose of this paper is to calculate the fractal dimension of some Julia sets and the Mandelbrot set at Misiurewicz points. Using Matlab to generate the Julia set images that correspond to the Misiurewicz points, and using fractal analysis software, we were able to find different measures that characterize those fractals in terms of texture and other features. We focus on the fractal dimension and the error calculated by the software. When computing the regression equation, i.e., the log-log slope of the image, a box-counting method is applied to the entire image, and the chosen settings are available in the FracLac program. Finally, a comparison is made for each image corresponding to the area (boundary) where the Misiurewicz point is located.
Keywords: box counting, FracLac, fractal dimension, Julia sets, Mandelbrot set, Misiurewicz points
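A compact box-counting sketch for estimating the fractal dimension of a binary image is shown below; it mirrors the log-log slope idea used in FracLac but is an independent, simplified implementation with arbitrarily chosen box sizes.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal dimension as the slope of log(count) vs log(1/size),
    where count is the number of boxes of a given size containing set pixels."""
    img = np.asarray(binary_image, bool)
    counts = []
    for s in box_sizes:
        h = img.shape[0] // s * s
        w = img.shape[1] // s * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square should give a dimension close to 2.
square = np.zeros((128, 128), int); square[16:112, 16:112] = 1
print(round(box_counting_dimension(square), 2))
```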
Procedia PDF Downloads 214
1379 Automatic Segmentation of Lung Pleura Based On Curvature Analysis
Authors: Sasidhar B., Bhaskar Rao N., Ramesh Babu D. R., Ravi Shankar M.
Abstract:
Segmentation of the lung pleura is a preprocessing step in Computer-Aided Diagnosis (CAD) which helps in reducing false positives in the detection of lung cancer. Existing methods fail to extract lung regions when nodules lie at the pleura of the lungs. In this paper, a new method is proposed which segments lung regions with nodules at the pleura based on curvature analysis and morphological operators. The proposed algorithm is tested on a dataset of six patients consisting of 60 images from the Lung Image Database Consortium (LIDC), and the results are satisfactory, with a 98.3% average overlap measure (AΩ).
Keywords: curvature analysis, image segmentation, morphological operators, thresholding
Procedia PDF Downloads 594
1378 Bimodal Biometrics System Using Fusion of Iris and Fingerprint
Authors: Attallah Bilal, Hendel Fatiha
Abstract:
This paper proposes a bimodal biometric system for identity verification using iris and fingerprint, fused at the matching-score level using a weighted sum-of-scores technique. The features are extracted from the preprocessed iris and fingerprint images. These features of a query image are compared with those of a database image to obtain matching scores. The individual scores generated after matching are passed to the fusion module. This module consists of three major steps, i.e., normalization, generation of a similarity score, and fusion of the weighted scores. The final score is then used to declare the person genuine or an impostor. The system is tested on the CASIA database and gives an overall accuracy of 91.04%, with an FAR of 2.58% and an FRR of 8.34%.
Keywords: iris, fingerprint, sum rule, fusion
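The fusion module described above (normalization followed by a weighted sum of scores) can be sketched as follows; the weights, score ranges, and decision threshold are illustrative assumptions, not the values used in the CASIA experiments.

```python
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] given its observed range [lo, hi]."""
    return (score - lo) / (hi - lo)

def fuse_scores(iris_score, finger_score, w_iris=0.6, w_finger=0.4,
                iris_range=(0.0, 1.0), finger_range=(0.0, 500.0)):
    """Weighted-sum fusion of normalized iris and fingerprint matching scores.
    Weights and score ranges are assumed values for illustration."""
    s_iris = min_max_normalize(iris_score, *iris_range)
    s_finger = min_max_normalize(finger_score, *finger_range)
    return w_iris * s_iris + w_finger * s_finger

THRESHOLD = 0.5   # assumed decision threshold
fused = fuse_scores(iris_score=0.82, finger_score=310.0)
print(f"fused score = {fused:.3f} ->", "genuine" if fused >= THRESHOLD else "impostor")
```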
Procedia PDF Downloads 366
1377 A Dynamic Cardiac Single Photon Emission Computer Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve
Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick
Abstract:
Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique based on time-varying information about the radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. A standard rest and a standard stress radionuclide dose of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. The acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve (CFR) was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated against dynamic PET. Results: The range of territorial MBF in the LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF estimated with PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), while the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for angiographically abnormal than for normal territories (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, P < .001). Conclusions: The visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of coronary lesions. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, coronary flow reserve, ⁹⁹ᵐTc-tetrofosmin
Procedia PDF Downloads 149