Search results for: X-ray computed microtomography
837 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment
Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee
Abstract:
Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These approaches include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are well suited to sequence modeling, whilst CNNs are suited to the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine these strengths of RNNs and CNNs in a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short-Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism, over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation
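As a rough illustration of the architecture described above, the sketch below is a hypothetical PyTorch implementation: the dimensions, the element-wise product used for the relation vectors, and the dot-product form of the attention are all assumptions, since the abstract does not specify them.

```python
# Hypothetical sketch of the CNN + Bi-LSTM relation/attention sentence encoder (details assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationEncoder(nn.Module):
    def __init__(self, vocab=10000, emb=100, hid=100, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, hid, kernel_size=3, padding=1)        # phrase-level n-gram features
        self.lstm = nn.LSTM(hid, hid, bidirectional=True, batch_first=True)
        self.cls = nn.Linear(6 * hid, n_classes)

    def encode(self, tokens):
        phrases = F.relu(self.conv(self.embed(tokens).transpose(1, 2))).transpose(1, 2)
        states, _ = self.lstm(phrases)                                   # (B, T, 2*hid)
        return phrases, states

    def forward(self, premise, hypothesis):
        p_phr, p_st = self.encode(premise)
        h_phr, h_st = self.encode(hypothesis)
        r1 = p_phr.mean(1) * h_phr.mean(1)       # phrase-level relation vector (form assumed)
        r2 = p_st.mean(1) * h_st.mean(1)         # sentence-level relation vector (form assumed)
        rel = r1.repeat(1, 2) + r2               # combined relation vector, (B, 2*hid)

        def attend(states):                      # combined relation vector used like an attention query
            w = F.softmax(torch.bmm(states, rel.unsqueeze(2)).squeeze(2), dim=1)
            return torch.bmm(w.unsqueeze(1), states).squeeze(1)

        return self.cls(torch.cat([attend(p_st), attend(h_st), rel], dim=-1))

model = RelationEncoder()
logits = model(torch.randint(0, 10000, (4, 20)), torch.randint(0, 10000, (4, 18)))
print(logits.shape)   # torch.Size([4, 3]) -> entailment / contradiction / neutral
```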
Procedia PDF Downloads 348
836 Iterative Reconstruction Techniques as a Dose Reduction Tool in Pediatric Computed Tomography Imaging: A Phantom Study
Authors: Ajit Brindhaban
Abstract:
Background and Purpose: Computed Tomography (CT) scans have become the largest source of radiation in radiological imaging. The purpose of this study was to compare the quality of pediatric CT images reconstructed using Filtered Back Projection (FBP) with images reconstructed using different strengths of the Iterative Reconstruction (IR) technique, and to perform a feasibility study to assess the use of IR techniques as a dose reduction tool. Materials and Methods: An anthropomorphic phantom representing a 5-year-old child was scanned, in two stages, using a Siemens Somatom CT unit. In stage one, scans of the head, chest and abdomen were performed using standard protocols recommended by the scanner manufacturer. Images were reconstructed using FBP and 5 different strengths of IR. Contrast-to-Noise Ratios (CNR) were calculated from the average CT number and its standard deviation measured in regions of interest created in the lung, bone, and soft tissue regions of the phantom. A paired t-test and one-way ANOVA were used to compare the CNR of FBP images with that of IR images, at the p = 0.05 level. The lowest IR strength that produced the highest CNR was identified. In the second stage, scans of the head were performed with mA(s) values decreased in proportion to the increase in CNR relative to the standard FBP protocol. CNR values in this stage were compared using a paired t-test at the p = 0.05 level. Results: Images reconstructed using the IR technique had higher CNR values (p < 0.01) in all regions compared to the FBP images, at all strengths of IR. The CNR increased with increasing IR strength up to strength 3 in the head and chest images; increases beyond this strength were insignificant. In abdomen images, CNR continued to increase up to strength 5. The results also indicated that IR techniques improve CNR by up to a factor of 1.5. Based on the CNR values of IR images at strength 3 and the CNR values of FBP images, a reduction in mA(s) of about 20% was identified. The head images acquired at 20% reduced mA(s) and reconstructed using IR at strength 3 had similar CNR to FBP images at standard mA(s). In the head scans of the phantom used in this study, it was demonstrated that similar CNR can be achieved even when the mA(s) is reduced by about 20% if the IR technique with strength 3 is used for reconstruction. Conclusions: The IR technique produced better image quality at all strengths of IR in comparison to FBP. The IR technique can provide approximately 20% dose reduction in pediatric head CT while maintaining the same image quality as the FBP technique.
Keywords: filtered back projection, image quality, iterative reconstruction, pediatric computed tomography imaging
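A short numpy sketch of one common way to compute CNR from ROI statistics (absolute difference of mean CT numbers divided by the reference ROI's standard deviation); the abstract does not state the exact formula used, so this form is an assumption, and the HU samples are invented.

```python
# Illustrative CNR calculation from ROI statistics (exact formula assumed, not stated in the abstract).
import numpy as np

def cnr(tissue_roi, reference_roi):
    """Contrast-to-noise ratio: |mean difference| divided by reference-ROI noise."""
    tissue_roi = np.asarray(tissue_roi, dtype=float)
    reference_roi = np.asarray(reference_roi, dtype=float)
    return abs(tissue_roi.mean() - reference_roi.mean()) / reference_roi.std(ddof=1)

# Example with made-up HU samples for a bone ROI and a soft-tissue reference ROI.
rng = np.random.default_rng(0)
bone = rng.normal(300, 25, 500)        # hypothetical bone HU values
soft = rng.normal(40, 15, 500)         # hypothetical soft-tissue HU values
print(f"CNR = {cnr(bone, soft):.1f}")
```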
Procedia PDF Downloads 147
835 Defect Correlation of Computed Tomography and Serial Sectioning in Additively Manufactured Ti-6Al-4V
Authors: Bryce R. Jolley, Michael Uchic
Abstract:
This study presents initial results toward the correlative characterization of inherent defects in additively manufactured (AM) Ti-6Al-4V. X-ray computed tomography (CT) defect data are compared and correlated with microscopic photographs obtained via automated serial sectioning. The metal AM specimen was manufactured out of Ti-6Al-4V virgin powder to specified dimensions. A post-contour was applied during the fabrication process with a speed of 1050 mm/s, a power of 260 W, and a width of 140 µm. The specimen was stress-relief heat-treated at 16°F for 3 hours. Microfocus CT imaging of a predetermined region of the build was conducted with parameters optimized for Ti-6Al-4V additive manufacture. After CT imaging, a modified RoboMet.3D version 2 was employed for serial sectioning and optical microscopy characterization of the same predetermined region. Montage capture of sub-micron-resolution, bright-field reflection, 12-bit monochrome optical images was performed in an automated fashion. These optical images were post-processed to produce 2D and 3D data sets. This processing included thresholding and segmentation to improve the visualization of defect features. The defects observed from optical imaging were compared and correlated with the defects observed from CT imaging over the same predetermined region of the specimen. Quantitative results of area fraction and equivalent pore diameters obtained via each method are presented for this correlation. It is shown that microfocus CT imaging does not capture all inherent defects within this Ti-6Al-4V AM sample. Best practices for this correlative effort are also presented, as well as future research directions resulting from this study.
Keywords: additive manufacture, automated serial sectioning, computed tomography, nondestructive evaluation
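A small illustrative sketch (not the authors' pipeline) of how area fraction and equivalent pore diameter might be computed from a segmented binary section, assuming 2D equivalent-circle diameters and a known pixel size.

```python
# Hypothetical computation of porosity metrics from a thresholded serial-section image.
import numpy as np
from scipy import ndimage

def pore_metrics(binary_slice, pixel_size_um=1.0):
    """Return area fraction and equivalent (circle) diameters of pores in a 2D binary mask."""
    labels, n = ndimage.label(binary_slice)
    areas_px = ndimage.sum(binary_slice, labels, index=np.arange(1, n + 1))
    area_fraction = binary_slice.sum() / binary_slice.size
    eq_diam_um = np.sqrt(4.0 * areas_px * pixel_size_um**2 / np.pi)
    return area_fraction, eq_diam_um

# Example on a synthetic mask with two circular "pores".
yy, xx = np.mgrid[:200, :200]
mask = ((xx - 60)**2 + (yy - 60)**2 < 10**2) | ((xx - 140)**2 + (yy - 150)**2 < 6**2)
af, d = pore_metrics(mask, pixel_size_um=0.8)
print(f"area fraction = {af:.4f}, equivalent diameters (um) = {np.round(d, 1)}")
```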
Procedia PDF Downloads 140
834 The Weights of Distinguished sl2-Subalgebras in Dn
Authors: Yassir I. Dinar
Abstract:
We computed the weights of the adjoint action of distinguished sl2-triples in the Lie algebra of type Dn using mathematical induction.
Keywords: Lie algebra, root systems, representation theory, nilpotent orbits
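For readers unfamiliar with the setup, the defining relations of an sl2-triple (e, h, f) and the weight decomposition induced by the adjoint action of h are recalled below; this is standard background, not a result of the paper.

```latex
% Standard sl_2-triple relations:
[h, e] = 2e, \qquad [h, f] = -2f, \qquad [e, f] = h.
% The adjoint action of h is semisimple, so the ambient Lie algebra decomposes as
\mathfrak{g} = \bigoplus_{j \in \mathbb{Z}} \mathfrak{g}_j, \qquad
\mathfrak{g}_j = \{\, x \in \mathfrak{g} : [h, x] = j\,x \,\},
% and the integers j with \mathfrak{g}_j \neq 0 are the weights of the triple;
% for a distinguished (hence even) sl_2-triple, all weights are even integers.
```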
Procedia PDF Downloads 293
833 Dispersion Rate of Spilled Oil in Water Column under Non-Breaking Water Waves
Authors: Hanifeh Imanian, Morteza Kolahdoozan
Abstract:
The purpose of this study is to present a mathematical expression for calculating the dispersion rate of spilled oil in the water column under non-breaking waves. In this regard, a multiphase numerical model is applied in which the wave and oil phases are computed concurrently, and the accuracy of its hydraulic calculations has been proven. More than 200 scenarios of oil spilling in wavy waters were simulated using the multiphase numerical model, and the outcomes were collected in a database. The recorded results were investigated to identify the major parameters affecting vertical oil dispersion, and finally 6 parameters were identified as the main independent factors. Furthermore, statistical tests were conducted to identify any relationship between the dependent variable (dispersed oil mass in the water column) and the independent variables (water wave specifications comprising wave height, length and period, and spilled oil characteristics including density, viscosity and spilled oil mass). Finally, a mathematical-statistical relationship is proposed to predict the dispersed oil in marine waters. To verify the proposed relationship, a laboratory example available in the literature was selected; the oil mass rate penetrating the water body computed by the proposed regression showed good agreement with the experimental data. The validated mathematical-statistical expression is a useful tool for oil dispersion prediction in oil spill events in marine areas.
Keywords: dispersion, marine environment, mathematical-statistical relationship, oil spill
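An illustrative sketch of fitting such a multivariate relationship, assuming (for illustration only) a power-law form linearized by logarithms; the actual functional form, data and coefficients of the study are not given in the abstract.

```python
# Hypothetical fit of dispersed oil mass vs. wave and oil parameters (power-law form assumed).
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Synthetic stand-ins for the six independent variables named in the abstract.
H, L, T = rng.uniform(0.1, 2, n), rng.uniform(5, 80, n), rng.uniform(2, 12, n)          # wave height, length, period
rho, mu, m0 = rng.uniform(850, 980, n), rng.uniform(5, 500, n), rng.uniform(1, 50, n)   # oil density, viscosity, spilled mass
true = 0.02 * H**1.5 * T**-0.5 * m0**0.9 * mu**-0.2 * (L / 10)**0.1 * (rho / 900)**-1.0
m_disp = true * rng.lognormal(0, 0.1, n)                      # "observed" dispersed mass

X = np.column_stack([np.ones(n)] + [np.log(v) for v in (H, L, T, rho, mu, m0)])
coeffs, *_ = np.linalg.lstsq(X, np.log(m_disp), rcond=None)   # multiple linear regression in log space
print("fitted exponents (H, L, T, rho, mu, m0):", np.round(coeffs[1:], 2))
```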
Procedia PDF Downloads 232
832 Application of Artificial Neural Network for Prediction of Load-Haul-Dump Machine Performance Characteristics
Authors: J. Balaraju, M. Govinda Raj, C. S. N. Murthy
Abstract:
Every industry is constantly looking to enhance its day-to-day production and productivity. This is possible only by maintaining the personnel and machinery at an adequate level. Prediction of performance characteristics plays an important role in the performance evaluation of equipment. Analytical and statistical approaches take more time to solve complex problems such as performance estimation compared with software-based approaches. Keeping this in view, the present study deals with Artificial Neural Network (ANN) modelling of a Load-Haul-Dump (LHD) machine to predict performance characteristics such as reliability, availability and preventive maintenance (PM). A feed-forward back-propagation ANN was trained using the Levenberg-Marquardt (LM) algorithm. The performance characteristics were computed using Isograph Reliability Workbench 13.0 software. These computed values were used to validate the output responses predicted by the ANN models. Further, recommendations for improving equipment performance are given to the industry based on the analysis performed.
Keywords: load-haul-dump, LHD, artificial neural network, ANN, performance, reliability, availability, preventive maintenance
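A minimal scikit-learn sketch of a feed-forward network predicting such performance indices; the input features, synthetic data, and network size are placeholders, and Levenberg-Marquardt training is not available in scikit-learn, so its default optimizer is used here instead.

```python
# Illustrative feed-forward ANN for LHD performance prediction (placeholder data and features).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 300
X = rng.uniform([500, 50, 5], [5000, 400, 60], size=(n, 3))   # e.g. operating hours, breakdowns, PM hours (assumed)
y = np.column_stack([                                          # synthetic reliability, availability, PM index
    np.exp(-X[:, 1] / 400),
    X[:, 0] / (X[:, 0] + 10 * X[:, 1]),
    X[:, 2] / 60,
]) + rng.normal(0, 0.02, (n, 3))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
ann.fit(scaler.transform(X_tr), y_tr)
print("R^2 on held-out data:", round(ann.score(scaler.transform(X_te), y_te), 3))
```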
Procedia PDF Downloads 147
831 Estimation of Damping Force of Double Ended Shear Mode Magnetorheological Damper Using Computational Analysis
Authors: Gurubasavaraju T. M.
Abstract:
The magnetorheological (MR) damper can provide a variable damping force with respect to different input magnetic fields. The damping force can be estimated through computational analysis using finite element and computational fluid dynamics analyses. The double-ended damper operates without changing the total volume of fluid. In this paper, the damping force of a double-ended damper under different magnetic fields is computed. Initially, a magnetostatic analysis is carried out to evaluate the magnetic flux density across the fluid flow gap. The corresponding change in the rheology of the MR fluid is computed using an experimentally fitted polynomial equation for shear stress versus magnetic field. The obtained values are substituted into the Herschel-Bulkley model to express the non-Newtonian behavior of the MR fluid. Later, using computational fluid dynamics (CFD) analysis, the damping characteristics in terms of force versus velocity and force versus displacement for the respective magnetic fields are estimated. The purpose of the present approach is to characterize the preliminarily designed MR damper before fabrication.
Keywords: MR fluid, double ended MR damper, CFD, FEA
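A short sketch of the Herschel-Bulkley constitutive relation used to represent the field-dependent MR fluid, with a placeholder polynomial for yield stress versus magnetic flux density; the fitted coefficients, consistency index and flow index of the study are not given in the abstract, so the values below are illustrative only.

```python
# Herschel-Bulkley shear stress for an MR fluid; all coefficients are placeholders.
import numpy as np

def yield_stress(B, coeffs=(0.0, 30.0e3, -5.0e3)):
    """tau_y(B) in Pa as a polynomial in flux density B (T); coefficients are illustrative only."""
    return sum(c * B**i for i, c in enumerate(coeffs))

def herschel_bulkley_stress(shear_rate, B, K=1.0, n=0.8):
    """tau = tau_y(B) + K * gamma_dot**n  (consistency K and index n assumed)."""
    return yield_stress(B) + K * np.asarray(shear_rate, float)**n

gamma_dot = np.linspace(1, 5000, 5)
for B in (0.0, 0.2, 0.4):
    print(f"B = {B} T:", np.round(herschel_bulkley_stress(gamma_dot, B), 1), "Pa")
```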
Procedia PDF Downloads 179
830 Calculation of Organ Dose for Adult and Pediatric Patients Undergoing Computed Tomography Examinations: A Software Comparison
Authors: Aya Al Masri, Naima Oubenali, Safoin Aktaou, Thibault Julien, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: The increased number of performed Computed Tomography (CT) examinations raises public concern regarding the associated stochastic risk to patients. In its Publication 102, the International Commission on Radiological Protection (ICRP) emphasized the importance of managing patient dose, particularly from repeated or multiple examinations. We developed a Dose Archiving and Communication System that gives multiple dose indexes (organ dose, effective dose, and skin-dose mapping) for patients undergoing radiological imaging exams. The aim of this study is to compare the organ dose values given by our software for patients undergoing CT exams with those of another software package named VirtualDose. Materials and methods: Our software uses Monte Carlo simulations to calculate organ doses for patients undergoing computed tomography examinations. The general calculation principle consists of simulating: (1) the scanner machine with all its technical specifications and associated irradiation parameters (kVp, field collimation, mAs, pitch, etc.), and (2) detailed geometric and compositional information on dozens of well-identified organs of computational hybrid phantoms that contain the necessary anatomical data. The mass as well as the elemental composition of the tissues and organs that constitute our phantoms correspond to the recommendations of the international organizations (namely the ICRP and the ICRU). Their body dimensions correspond to reference data developed in the United States. The simulated data were verified by clinical measurements. To perform the comparison, 270 adult patients and 150 pediatric patients were used, whose data correspond to exams carried out in French hospital centers. The comparison dataset of adult patients includes adult males and females for three different scanner machines and three different acquisition protocols (Head, Chest, and Chest-Abdomen-Pelvis). The comparison sample of pediatric patients includes the exams of thirty patients for each of the following age groups: newborn, 1-2 years, 3-7 years, 8-12 years, and 13-16 years. The comparison for pediatric patients was performed on the 'Head' protocol. The percentage dose difference was calculated for organs receiving a significant dose according to the acquisition protocol (80% of the maximal dose). Results: Adult patients: for organs that are completely covered by the scan range, the maximum percentage dose difference between the two software packages is 27%. However, there are three organs situated at the edges of the scan range that show a slightly higher dose difference. Pediatric patients: the percentage dose difference between the two software packages does not exceed 30%. These dose differences may be due to the use of two different generations of hybrid phantoms by the two software packages. Conclusion: This study shows that our software provides reliable dosimetric information for patients undergoing computed tomography exams.
Keywords: adult and pediatric patients, computed tomography, organ dose calculation, software comparison
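A trivial sketch of the per-organ percentage-difference comparison described above; the organ names and dose values are invented for illustration and are not taken from the study.

```python
# Hypothetical per-organ dose comparison between two dose-calculation packages.
organ_dose_a = {"liver": 12.4, "stomach": 11.8, "kidneys": 10.9, "lungs": 2.1}   # mGy, software A (made up)
organ_dose_b = {"liver": 13.1, "stomach": 10.9, "kidneys": 12.2, "lungs": 2.6}   # mGy, software B (made up)

significant = 0.8 * max(organ_dose_a.values())   # organs receiving >= 80% of the maximal dose
for organ, da in organ_dose_a.items():
    if da >= significant:
        diff_pct = 100.0 * abs(da - organ_dose_b[organ]) / da
        print(f"{organ}: {diff_pct:.1f} % difference")
```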
Procedia PDF Downloads 161
829 Production of Amorphous Boron Powder via Chemical Vapor Deposition (CVD)
Authors: Meltem Bolluk, Ismail Duman
Abstract:
Boron exhibits a high melting temperature (2273 K to 2573 K), high hardness (Mohs: 9.5), low density (2.340 g/cm3), high chemical resistance, high strength, and semiconductivity (band gap: 1.6-2.1 eV). These superior properties enable its use in several high-tech areas, from electronics to the nuclear industry, and especially in high-temperature metallurgy. Amorphous boron and crystalline boron have different application areas. Amorphous boron powder (directly amorphous and/or α-rhombohedral) is preferred in rocket firing, airbag inflation and in the fabrication of superconducting MgB2 wires. The conventional ways to produce elemental boron with a purity of 85% to 95% are metallothermic reduction, fused salt electrolysis and mechanochemical synthesis, but the only way to produce high-purity boron powders is Chemical Vapour Deposition (hot-surface CVD). In this study, amorphous boron powders with a minimum purity of 99.9% were synthesized in quartz tubes from a BCl3-H2 gas mixture by CVD. Process conditions based on temperature and gas flow rate were determined. A thermodynamic interpretation of the BCl3-H2 system for different temperatures and molar ratios was performed using FactSage software. The powders were characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), stereo microscopy (SM), and helium gas pycnometry. The purities of the final products were determined by titration after lime fusion.
Keywords: amorphous boron, CVD, powder production, powder characterization
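For context, the overall hydrogen-reduction reaction generally written for boron deposition from a BCl3-H2 mixture is shown below; this is textbook stoichiometry, not a result quoted from the paper.

```latex
% Overall hydrogen reduction of boron trichloride in hot-surface CVD.
2\,\mathrm{BCl_3(g)} + 3\,\mathrm{H_2(g)} \;\longrightarrow\; 2\,\mathrm{B(s)} + 6\,\mathrm{HCl(g)}
```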
Procedia PDF Downloads 215
828 Reconsidering Taylor’s Law with Chaotic Population Dynamical Systems
Authors: Yuzuru Mitsui, Takashi Ikegami
Abstract:
The exponents of Taylor’s law in deterministic chaotic systems are computed, and their meanings are intensively discussed. Taylor’s law is the scaling relationship between the mean and variance (in both space and time) of population abundance, and this law is known to hold in a variety of ecological time series. The exponents found in the temporal Taylor’s law are different from those of the spatial Taylor’s law. The temporal Taylor’s law is calculated on time series from the same locations (or the same initial states) at different temporal phases, whereas for the spatial Taylor’s law the mean and variance are calculated at the same temporal phase sampled from different places. Most previous studies were done with stochastic models, but here we computed the temporal and spatial Taylor’s law in deterministic systems. The temporal Taylor’s law was evaluated using the same initial state, and the spatial Taylor’s law was evaluated using the ensemble average and variance. There were two main discoveries from this work. First, it is often stated that deterministic systems tend to have the value two for Taylor’s exponent; however, most of the exponents calculated here were not two. Second, we investigated the relationships between chaotic features measured by the Lyapunov exponent, the correlation dimension, and other indexes and Taylor’s exponents. No strong correlations were found; however, some relationship exists within the same model under different parameter values, and we discuss the meaning of these results at the end of the paper.
Keywords: chaos, density effect, population dynamics, Taylor’s law
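Taylor's law states that variance scales with the mean as V = a·M^b, so the exponent b is the slope of log V against log M. The sketch below estimates temporal and spatial exponents for an ensemble of logistic maps; the choice of map, the spread of parameters across "sites", and the sampling scheme are simplifications for illustration, not the models or procedure of the paper.

```python
# Illustrative temporal and spatial Taylor's-law exponents for an ensemble of chaotic logistic maps.
import numpy as np

def logistic_series(x0, r, n=2000, burn=500):
    x, out = x0, np.empty(n)
    for i in range(n + burn):
        x = r * x * (1 - x)
        if i >= burn:
            out[i - burn] = x
    return out

rng = np.random.default_rng(3)
rs = rng.uniform(3.7, 4.0, 200)          # assumed spread of map parameters across "sites"
x0s = rng.uniform(0.1, 0.9, 200)
ensemble = np.array([logistic_series(x0, r) for x0, r in zip(x0s, rs)])   # shape (sites, time)

def taylor_exponent(means, variances):
    slope, _ = np.polyfit(np.log(means), np.log(variances), 1)
    return slope

b_temporal = taylor_exponent(ensemble.mean(axis=1), ensemble.var(axis=1))  # per-site statistics over time
b_spatial = taylor_exponent(ensemble.mean(axis=0), ensemble.var(axis=0))   # per-time statistics over sites
print(f"temporal exponent ~ {b_temporal:.2f}, spatial exponent ~ {b_spatial:.2f}")
```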
Procedia PDF Downloads 173
827 Comparison of Feedforward Back Propagation and Self-Organizing Map for Prediction of Crop Water Stress Index of Rice
Authors: Aschalew Cherie Workneh, K. S. Hari Prasad, Chandra Shekhar Prasad Ojha
Abstract:
Due to the increase in water scarcity, the crop water stress index (CWSI) is receiving significant attention these days, especially in arid and semiarid regions, for quantifying water stress and effective irrigation scheduling. Nowadays, machine learning techniques such as neural networks are being widely used to determine the CWSI. In the present study, the performance of two artificial neural networks, namely Self-Organizing Maps (SOM) and Feed-Forward Back-Propagation Artificial Neural Networks (FF-BP-ANN), is compared in determining the CWSI of the rice crop. Irrigation field experiments with varying degrees of irrigation were conducted at the irrigation field laboratory of the Indian Institute of Technology, Roorkee, during the growing season of the rice crop. The CWSI of rice was computed empirically by measuring key meteorological variables (relative humidity, air temperature, wind speed, and canopy temperature) and crop parameters (crop height and root depth). The empirically computed CWSI was compared with the CWSI predicted by SOM and FF-BP-ANN. The upper and lower CWSI baselines were computed using multiple regression analysis. The regression analysis showed that the lower CWSI baseline for rice is a function of crop height (h), air vapor pressure deficit (AVPD), and wind speed (u), whereas the upper CWSI baseline is a function of crop height (h) and wind speed (u). The performance of SOM and FF-BP-ANN was compared by computing the Nash-Sutcliffe efficiency (NSE), index of agreement (d), root mean squared error (RMSE), and coefficient of correlation (R²). It is found that FF-BP-ANN performs better than SOM in predicting the CWSI of the rice crop.
Keywords: artificial neural networks, crop water stress index, canopy temperature, prediction capability
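A sketch of the standard empirical CWSI computation from the canopy-air temperature difference and the two baselines; the baseline coefficients below are placeholders, since the fitted regression coefficients are not reported in the abstract.

```python
# Empirical CWSI from canopy-air temperature difference and regression baselines (coefficients assumed).
def lower_baseline(h, avpd, u, a=(-2.0, 0.5, -0.4, 0.1)):
    """Non-water-stressed (lower) baseline for Tc - Ta; function of crop height, AVPD, wind speed."""
    return a[0] + a[1] * h + a[2] * avpd + a[3] * u

def upper_baseline(h, u, b=(4.0, 0.3, 0.2)):
    """Fully stressed (upper) baseline for Tc - Ta; function of crop height and wind speed."""
    return b[0] + b[1] * h + b[2] * u

def cwsi(tc, ta, h, avpd, u):
    dT = tc - ta
    lo, hi = lower_baseline(h, avpd, u), upper_baseline(h, u)
    return (dT - lo) / (hi - lo)

# Example: canopy 31 C, air 29 C, 0.6 m tall crop, 1.8 kPa vapour pressure deficit, 1.2 m/s wind.
print(round(cwsi(31.0, 29.0, 0.6, 1.8, 1.2), 2))
```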
Procedia PDF Downloads 116
826 O-(2-18F-Fluoroethyl)-L-Tyrosine Positron Emission Tomography/Computed Tomography in Patients with Suspicious Recurrent Low and High-Grade Glioma
Authors: Mahkameh Asadi, Habibollah Dadgar
Abstract:
The precise definition of the margin of high- and low-grade gliomas is crucial for choosing the best treatment approach after surgery and radio-chemotherapy. The aim of the current study was to assess O-(2-18F-fluoroethyl)-L-tyrosine (18F-FET) positron emission tomography (PET)/computed tomography (CT) in patients with low-grade (LGG) and high-grade glioma (HGG). We retrospectively analyzed the 18F-FET PET/CT of 10 patients (age: 33 ± 12 years) with suspected recurrent LGG or HGG. The final decision on recurrence was made by magnetic resonance imaging (MRI) and registered clinical data. While assessment of the response to radio-chemotherapy by MRI is often complex due to edema, necrosis, and inflammation, emerging amino acid PET leads to better interpretation and more specific differentiation of true tumor boundaries from equivocal lesions. Therefore, integrating amino acid PET in the management of glioma to complement MRI will significantly improve early therapy response assessment, treatment planning, and clinical trial design.
Keywords: positron emission tomography, amino acid positron emission tomography, magnetic resonance imaging, low and high grade glioma
Procedia PDF Downloads 172
825 Investigation of Ductile Failure Mechanisms in SA508 Grade 3 Steel via X-Ray Computed Tomography and Fractography Analysis
Authors: Suleyman Karabal, Timothy L. Burnett, Egemen Avcu, Andrew H. Sherry, Philip J. Withers
Abstract:
SA508 Grade 3 steel is widely used in the construction of nuclear pressure vessels, where its fracture toughness plays a critical role in ensuring operational safety and reliability. Understanding the ductile failure mechanisms in this steel grade is crucial for designing robust pressure vessels that can withstand severe nuclear environment conditions. In the present study, round bar specimens of SA508 Grade 3 steel with four distinct notch geometries were subjected to tensile loading while continuous 2D images were captured at 5-second intervals to monitor changes in their geometries and construct true stress-strain curves of the specimens. 3D reconstructions of high-resolution X-ray computed tomography (CT) images (spatial resolution of 0.82 μm) allowed for a comprehensive assessment of the influence of second-phase particles (i.e., manganese sulfide inclusions and cementite particles) on ductile failure initiation as a function of applied plastic strain. Additionally, based on the 2D and 3D images, plasticity modeling was performed, and the results were compared to the experimental data. A specific 'two-parameter criterion' was established and calibrated based on the correlation between stress triaxiality and equivalent plastic strain at failure initiation. The proposed criterion demonstrated substantial agreement with the experimental results, thus enhancing our knowledge of ductile fracture behavior in this steel grade. The implementation of X-ray CT and fractography analysis provided new insights into the diverse roles played by different populations of second-phase particles in fracture initiation under varying stress triaxiality conditions.
Keywords: ductile fracture, two-parameter criterion, x-ray computed tomography, stress triaxiality
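As an illustration of what a two-parameter failure locus in stress-triaxiality space can look like, the sketch below uses an exponential form commonly seen in ductile-failure modelling; the actual criterion and calibrated constants of the study are not given in the abstract, so both the functional form and the numbers are assumptions.

```python
# Hypothetical two-parameter ductile failure locus: failure strain vs. stress triaxiality.
import numpy as np

def triaxiality(sigma_hydrostatic, sigma_von_mises):
    """Stress triaxiality eta = sigma_m / sigma_eq."""
    return sigma_hydrostatic / sigma_von_mises

def failure_strain(eta, c1=1.6, c2=1.5):
    """Assumed exponential locus eps_f = c1 * exp(-c2 * eta); c1, c2 are illustrative constants."""
    return c1 * np.exp(-c2 * np.asarray(eta, float))

for eta in (0.33, 0.6, 1.0):   # roughly: smooth bar vs. increasingly sharp notches
    print(f"eta = {eta:.2f} -> predicted failure strain ~ {failure_strain(eta):.2f}")
```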
Procedia PDF Downloads 90
824 Multifunctional Bismuth-Based Nanoparticles as Theranostic Agent for Imaging and Radiation Therapy
Authors: Azimeh Rajaee, Lingyun Zhao, Shi Wang, Yaqiang Liu
Abstract:
In recent years, many studies have focused on bismuth-based nanoparticles as radiosensitizers and contrast agents in radiation therapy and imaging due to their high atomic number (Z = 83), high photoelectric absorption, low cost, and low toxicity. This study aims to introduce a new multifunctional bismuth-based nanoparticle as a theranostic agent for radiotherapy, computed tomography (CT) and magnetic resonance imaging (MRI). We synthesized bismuth ferrite (BFO, BiFeO3) nanoparticles by the sol-gel method, and the surface of the nanoparticles was modified with polyethylene glycol (PEG). After the biocompatibility of the nanoparticles was established, their ability as a contrast agent in CT and MRI was investigated. The relaxation rate (R2) in MRI and the Hounsfield unit (HU) value in CT imaging increased with nanoparticle concentration. Moreover, the effect of the nanoparticles on dose enhancement at low energy was investigated by clonogenic assay. According to the clonogenic assay, sensitizer enhancement ratios (SERs) of 1.35 and 1.76 were obtained for nanoparticle concentrations of 0.05 mg/ml and 0.1 mg/ml, respectively. In conclusion, our experimental results demonstrate that the multifunctional nanoparticles can be employed for multimodal imaging and therapy to enhance theranostic efficacy.
Keywords: molecular imaging, nanomedicine, radiotherapy, theranostics
Procedia PDF Downloads 314
823 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly
Authors: Alex Eldo Simon, Abhishek Yadav
Abstract:
This study aimed to evaluate the utility of postmortem computed tomography (PMCT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed PMCT data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The study involved measuring the cardiothoracic ratio (CTR) from coronal computed tomography (CT) images and determining the actual cardiac weight by weighing the heart during the autopsy. The inclusion criteria for the study were cases of sudden death suspected to be caused by cardiac pathology, while exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and receiver operating characteristic (ROC) curves were generated to evaluate the accuracy of using the CTR to detect an enlarged heart. The CTR is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter in relation to the maximum transverse diameter of the chest wall. The clinically used CTR criterion has been modified from 0.50 to 0.57 for use in postmortem settings, where abnormalities can be detected by comparing CTR values to this threshold. A CTR value of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR ranging from above 0.50 up to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was calculated as 0.52 ± 0.06. The weight of the heart was measured in all 54 cases, and the mean was calculated as 369.4 ± 99.9 grams. Out of the 54 cases evaluated, 12 were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. The sensitivity of the test was found to be 55.56% (95% CI: 26.66-81.12), the specificity was 84.44% (95% CI: 71.22-92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1-88.23). The limitation of the study was the low sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight in this study suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, it should be noted that the low sensitivity of the test (55.56%) may limit its diagnostic accuracy, and therefore further studies with larger sample sizes and more diverse populations are needed to validate these findings.
Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio
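A small sketch of the underlying 2x2 calculation; the cell counts below (TP = 5, FN = 4, FP = 7, TN = 38) are inferred from the reported sensitivity, specificity and accuracy rather than quoted directly from the paper.

```python
# Diagnostic-performance calculation for PMCT-defined hypertrophy vs. autopsy heart weight.
TP, FN = 5, 4     # autopsy hypertrophy (9 cases) split by PMCT result (inferred counts)
FP, TN = 7, 38    # autopsy non-hypertrophy (45 cases) split by PMCT result (inferred counts)

sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)
accuracy = (TP + TN) / (TP + TN + FP + FN)
print(f"sensitivity = {sensitivity:.2%}, specificity = {specificity:.2%}, accuracy = {accuracy:.2%}")
# -> sensitivity = 55.56%, specificity = 84.44%, accuracy = 79.63%, matching the reported values.
```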
Procedia PDF Downloads 80
822 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures
Authors: Adriano Z. Zambom, Preethi Ravikumar
Abstract:
One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effects of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators such as the Loess is compared to that of estimators that assume additivity in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with regard to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion (AIC) is proposed, which is computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure, and the selected variables are identified.
Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criterion
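A generic sketch of backward elimination driven by AIC, using ordinary least squares as a stand-in for the regression fit (the paper applies the idea to additive and nonparametric fits); the dataset and true model below are invented for illustration.

```python
# Illustrative backward elimination by AIC with an OLS stand-in for the regression fit.
import numpy as np

def aic_ols(X, y):
    """AIC of a least-squares fit with an intercept: n*log(RSS/n) + 2*(p+1)."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    return n * np.log(rss / n) + 2 * Xd.shape[1]

def backward_eliminate(X, y, names):
    selected = list(range(X.shape[1]))
    best = aic_ols(X[:, selected], y)
    improved = True
    while improved and len(selected) > 1:
        improved = False
        for j in list(selected):
            trial = [k for k in selected if k != j]
            a = aic_ols(X[:, trial], y)
            if a < best:                     # dropping j lowers AIC -> eliminate it
                best, selected, improved = a, trial, True
    return [names[k] for k in selected]

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))
y = 2 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 1, 300)   # only x0 and x2 matter
print(backward_eliminate(X, y, ["x0", "x1", "x2", "x3", "x4"]))
```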
Procedia PDF Downloads 263
821 Comparison Of Virtual Non-Contrast To True Non-Contrast Images Using Dual Layer Spectral Computed Tomography
Authors: O’Day Luke
Abstract:
Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using dual-layer spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging, were evaluated. For every patient in both the true and virtual non-contrast datasets, a region of interest (ROI) was defined in the aorta, liver, fluid (i.e., gallbladder, urinary bladder), kidney, muscle, fat and spongious bone, resulting in 693 ROIs. Differences in attenuation between VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phases was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and > 90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phase showed a good correlation in most tissue types. The aortic attenuation was, however, somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets continues to be a problem, as spectral CT algorithms are currently poor at differentiating bone from iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies. This study shows the utility of virtual non-contrast scans as an alternative to true non-contrast studies during multiphase CT, with potential for dose reduction without loss of diagnostic information.
Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison
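A toy sketch of the ROI-wise agreement analysis (mean TNC-VNC difference and the fraction of ROIs within 15 HU); the HU values below are invented, not taken from the study.

```python
# Hypothetical ROI-wise TNC vs. VNC attenuation comparison.
import numpy as np

tnc = np.array([38.0, 55.2, 8.1, 31.4, 52.0, -95.3, 41.7])   # TNC HU per ROI (made up)
vnc = np.array([42.5, 50.9, 10.0, 36.8, 47.1, -90.2, 49.9])  # VNC HU per ROI (made up)

diff = tnc - vnc
print(f"mean difference = {diff.mean():.1f} +/- {diff.std(ddof=1):.1f} HU")
print(f"fraction within 15 HU = {np.mean(np.abs(diff) < 15):.0%}")
```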
Procedia PDF Downloads 140
820 An Application of Hip Arthroscopy after Acute Injury - A Case Report
Authors: Le Nguyen Binh, Luong Xuan Binh, Le Van Tuan, Tran Binh Duong, Truong Nguyen Khanh Hung, Do Le Hoang Son, Pham Quang Vinh, Hoang Quoc Huy, Nguyen Bach, Nguyen Quoc Khanh Le, Jiunn Horng Kang
Abstract:
Introduction: Traumatic hip dislocation is an emergency in young adults which can cause avascular necrosis of the femoral head or osteoarthritis of the hip joint. These complications may be caused by loose bodies of bony or chondral fragments, which are difficult to detect on CT scan or MRI. In such cases, hip arthroscopy may be the method of choice for the diagnosis and treatment of loose bodies in the hip joint after traumatic dislocation. Methods: A case report is presented. A 55-year-old male patient underwent hip arthroscopy to retrieve a loose body in the right hip joint. Results: The patient's hip was reduced under anesthesia in the operating room. Post-reduction X-ray and CT scan showed that the right hip joint space was widened and a small fragment of the femoral head (< 5 mm) was locked inside the joint. Hip arthroscopy was performed to remove the fragment. Post-operation, the patient underwent rehabilitation. After 6 months, he could walk with full weight bearing; no further dislocation was noted, and the Harris score was 84 points. Conclusions: Although acute traumatic injury of the hip joint is usually treated with open surgery, these methods have many drawbacks, such as soft tissue destruction and blood loss. Despite its technical demands, hip arthroscopy is a less invasive and effective treatment. Therefore, it may be an alternative treatment for traumatic hip injury and may be applied more frequently in the near future.
Keywords: hip dislocation, hip arthroscopy, hip osteoarthritis, acute hip trauma
Procedia PDF Downloads 84
819 3D Vision Transformer for Cervical Spine Fracture Detection and Classification
Authors: Obulesh Avuku, Satwik Sunnam, Sri Charan Mohan Janthuka, Keerthi Yalamaddi
Abstract:
In the United States alone, there are over 1.5 million spine fractures per year, resulting in about 17,730 spinal cord injuries. The cervical spine is where fractures in the spine most frequently occur. The prevalence of spinal fractures in the elderly has increased, and in this population, fractures may be harder to see on imaging because of coexisting degenerative illness and osteoporosis. Nowadays, computed tomography (CT) has almost completely replaced radiography (X-rays) for the imaging diagnosis of adult spine fractures. To prevent neurologic degeneration and paralysis following trauma, it is vital to detect any vertebral fractures as early as possible. Many approaches, based on 2D models, have been proposed for the classification of the cervical spine. In this paper, we try to break these bounds and use Vision Transformers (ViT), a state-of-the-art model in image classification, making the minimal changes possible to the ViT architecture to obtain a 3D-enabled architecture; the model is evaluated using a weighted multi-label logarithmic loss. We have taken this problem statement from a previously held Kaggle competition, i.e., RSNA 2022 Cervical Spine Fracture Detection.
Keywords: cervical spine, spinal fractures, osteoporosis, computed tomography, 2d-models, ViT, multi-label logarithmic loss, Kaggle, public score, private score
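A sketch of a weighted multi-label logarithmic loss of the kind named above (per-label binary cross-entropy with label weights); the label layout and weights here are placeholders, not the competition's exact specification.

```python
# Illustrative weighted multi-label logarithmic (binary cross-entropy) loss.
import numpy as np

def weighted_multilabel_log_loss(y_true, y_prob, weights, eps=1e-15):
    """Mean over samples of the weighted average of per-label log losses."""
    y_prob = np.clip(y_prob, eps, 1 - eps)
    per_label = -(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
    return float(np.mean(per_label @ weights / weights.sum()))

# Example: 7 vertebra labels + 1 "patient overall" label, with the overall label weighted higher (assumed).
weights = np.array([1, 1, 1, 1, 1, 1, 1, 2], dtype=float)
y_true = np.array([[0, 0, 1, 0, 0, 0, 0, 1],
                   [0, 0, 0, 0, 0, 0, 0, 0]], dtype=float)
y_prob = np.array([[0.1, 0.2, 0.8, 0.1, 0.1, 0.1, 0.1, 0.7],
                   [0.05, 0.1, 0.1, 0.1, 0.05, 0.1, 0.1, 0.2]])
print(round(weighted_multilabel_log_loss(y_true, y_prob, weights), 4))
```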
Procedia PDF Downloads 113
818 High-Resolution Computed Tomography Imaging Features during Pandemic 'COVID-19'
Authors: Sahar Heidary, Ramin Ghasemi Shayan
Abstract:
With the emergence of novel coronavirus (2019-nCoV) pneumonia, chest high-resolution computed tomography (HRCT) has been one of the main investigative tools. To achieve timely and accurate diagnosis, defining the radiological features of the infection is of great value. The purpose of this study was to review the imaging manifestations of early-stage coronavirus disease 2019 (COVID-19) and to provide an imaging basis for the early detection of suspected cases and stratified intervention. The positive predictive value of HRCT was 85%, and sensitivity was 73% for all patients. Overall accuracy was 68%. There was no significant change in these values between symptomatic and asymptomatic individuals. These results were also independent of the interval between the scan and the onset of symptoms or exposure. Therefore, we suggest that HRCT is an excellent adjunct for the early identification of COVID-19 pneumonia in both symptomatic and asymptomatic individuals, in addition to its role as a prognostic indicator for COVID-19 pneumonia. Patients underwent non-contrast chest HRCT examinations, and images were reconstructed in a thin 1.25 mm lung window. Images were evaluated for the presence of lung lesions, and a CT severity score was assigned to each patient based on the number of lung lobes involved.
Keywords: COVID-19, radiology, respiratory diseases, HRCT
Procedia PDF Downloads 141
817 Merging and Comparing Ontologies Generically
Authors: Xiuzhan Guo, Arthur Berrill, Ajinkya Kulkarni, Kostya Belezko, Min Luo
Abstract:
Ontology operations, e.g., aligning and merging, have been studied and implemented extensively in different settings, such as categorical operations, relation algebras, and typed graph grammars, with different concerns. However, aligning and merging operations in these settings share some generic properties, e.g., idempotence, commutativity, associativity, and representativity, labeled (I), (C), (A), and (R), respectively, which are defined on an ontology merging system (D, ~, M), where D is a non-empty set of the ontologies concerned, ~ is a binary relation on D modeling ontology aligning, and M is a partial binary operation on D modeling ontology merging. Given an ontology repository, a finite set O ⊆ D, its merging closure Ô is the smallest set of ontologies that contains the repository and is closed with respect to merging. If (I), (C), (A), and (R) are satisfied, then both D and Ô are naturally partially ordered by merging, and Ô is finite and can be computed, compared, and sorted efficiently, including sorting, selecting, and querying specific elements, e.g., maximal ontologies and minimal ontologies. We also show that the ontology merging system given by ontology V-alignment pairs and pushouts satisfies the properties (I), (C), (A), and (R), so that the merging system is partially ordered and the merging closure of a given repository with respect to pushouts can be computed efficiently.
Keywords: ontology aligning, ontology merging, merging system, poset, merging closure, ontology V-alignment pair, ontology homomorphism, ontology V-alignment pair homomorphism, pushout
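A small sketch of computing a merging closure for a finite repository, modelling ontologies abstractly as frozen sets of axioms, alignment as non-empty overlap, and merging as union; these concrete choices are illustrative stand-ins for the abstract (D, ~, M) system, not the paper's categorical construction.

```python
# Toy merging closure: ontologies as axiom sets, ~ as overlap, M as union (illustrative model).
def aligned(o1, o2):
    return bool(o1 & o2)            # stand-in for the alignment relation ~

def merge(o1, o2):
    return frozenset(o1 | o2)       # stand-in for the partial merge operation M

def merging_closure(repository):
    closure = set(repository)
    changed = True
    while changed:                  # keep merging until no new ontology appears
        changed = False
        for a in list(closure):
            for b in list(closure):
                if aligned(a, b):
                    m = merge(a, b)
                    if m not in closure:
                        closure.add(m)
                        changed = True
    return closure

repo = [frozenset({"A", "B"}), frozenset({"B", "C"}), frozenset({"D"})]
closure = merging_closure(repo)
maximal = [o for o in closure if not any(o < p for p in closure)]   # maximal elements under inclusion
print(len(closure), sorted(sorted(o) for o in maximal))
```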
Procedia PDF Downloads 892
816 Covid-19, Diagnosis with Computed Tomography and Artificial Intelligence, in a Few Simple Words
Authors: Angelis P. Barlampas
Abstract:
Target: SARS-CoV-2 is still a threat. AI software could be useful for categorizing the disease into different severities and indicating the extent of the lesions. Materials and methods: AI is a revolutionary new technique that uses powerful computerized systems to do what a human being does more rapidly and more easily, as accurately and diagnostically safely as the original medical report and, in certain circumstances, even better, saving time and helping the health system to overcome problems such as work overload and human fatigue. Results: An effort is made to describe to the inexperienced reader (see figures), as simply as possible, how an artificial intelligence system diagnoses computed tomography pictures. First, the computerized machine learns the typical patterns of normal lung parenchyma by being fed with images of normal lung tissue. Having learned to recognize normal structures, it can then easily identify pathological ones, as their images do not fit the known normal patterns. It is much the same as when someone spends their free time solving the picture quizzes found in magazines.
Keywords: covid-19, artificial intelligence, automated imaging, CT, chest imaging
Procedia PDF Downloads 50
815 A Rare Case of Taenia solium Induced Ileo-Cecal Intussusception in an Adult
Authors: Naraporn Taemaitree, Pruet Areesawangvong, Satchachon Changthom, Tanin Titipungul
Abstract:
Adult intussusception, unlike childhood intussusception, is rare. Approximately 5-15% of cases are idiopathic, without a lead point lesion. Secondary intussusception is caused by pathological conditions such as inflammatory bowel disease, postoperative adhesions, Meckel's diverticulum, benign and malignant lesions, metastatic neoplasms, or even iatrogenically due to the presence of intestinal tubes or jejunostomy feeding tubes, or after gastric surgery. Diagnosis can be delayed because of its longstanding, intermittent, and non-specific symptoms. Computed tomography is the most sensitive diagnostic modality and can help distinguish between intussusceptions with and without a lead point and localize the lesion. This report presents the case of a 49-year-old man who presented with increasing abdominal pain over the preceding three days, loss of appetite, constipation, and frequent vomiting. Computed tomography revealed distal small bowel obstruction in the right lower quadrant with a thickened outer wall and an internal non-dilated small bowel loop. Emergency exploratory laparotomy was performed to clear the obstruction, which upon inspection was caused by extremely long Taenia solium parasites.
Keywords: intussusception, tapeworm, Taenia solium, abdominal pain
Procedia PDF Downloads 132
814 A Study of Stress and Coping Strategies of School Teachers
Authors: G.S. Patel
Abstract:
In this research paper, teachers' work-related mental stress and coping strategies are discussed. A stress measurement scale was developed for school teachers, following all the scientific steps of test construction. Different factors were considered in constructing the teachers' stress measurement scale, such as the teachers' workplace, residential area, family life, abilities and skills, economic factors and other factors. In this research tool, situational statements have been constructed, and teachers respond to each statement on a five-point rating scale according to what they experience in their daily life. Special features of the test, such as validity and reliability, were also established, and norms were computed for its interpretation. A sample of 320 school teachers of Gujarat state was selected by the cluster sampling technique. A t-test was computed for testing the null hypotheses. The main findings of the present study are that urban-area teachers experience more stressful situations than rural-area teachers, and that teachers who live in a joint family feel less stress than teachers who live in a nuclear family. This research work is useful for preparing a list of activities to reduce teachers' mental stress.
Keywords: stress measurement scale, level of stress, validity, reliability, norms
Procedia PDF Downloads 193
813 Feasibility of Voluntary Deep Inspiration Breath-Hold Radiotherapy Technique Implementation without Deep Inspiration Breath-Hold-Assisting Device
Authors: Auwal Abubakar, Shazril Imran Shaukat, Noor Khairiah A. Karim, Mohammed Zakir Kassim, Gokula Kumar Appalanaido, Hafiz Mohd Zin
Abstract:
Background: Voluntary deep inspiration breath-hold radiotherapy (vDIBH-RT) is an effective cardiac dose reduction technique during left breast radiotherapy. This study aimed to assess the accuracy of implementing the vDIBH technique among left breast cancer patients without the use of a special device such as a surface-guided imaging system. Methods: The vDIBH-RT technique was implemented among thirteen (13) left breast cancer patients at the Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia. Breath-hold monitoring was performed based on breath-hold skin marks and laser light congruence observed on zoomed CCTV images from the control console during each delivery. The initial setup was verified using cone beam computed tomography (CBCT) during breath-hold. Each field was delivered using multiple beam segments to allow a delivery time of 20 seconds, which can be tolerated by patients in breath-hold. The data were analysed using an in-house developed MATLAB algorithm. The PTV margin was computed based on van Herk's margin recipe. Results: The setup error analysed from CBCT shows that the population systematic error in the lateral (x), longitudinal (y), and vertical (z) axes was 2.28 mm, 3.35 mm, and 3.10 mm, respectively. Based on the CBCT image guidance, the planning target volume (PTV) margin that would be required for vDIBH-RT using the CCTV/laser monitoring technique is 7.77 mm, 10.85 mm, and 10.93 mm in the x, y, and z axes, respectively. Conclusion: It is feasible to safely implement vDIBH-RT among left breast cancer patients without special equipment. The breath-hold monitoring technique is cost-effective, radiation-free, easy to implement, and allows real-time breath-hold monitoring.
Keywords: vDIBH, cone beam computed tomography, radiotherapy, left breast cancer
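A short sketch of the van Herk margin recipe (margin = 2.5 Σ + 0.7 σ) applied to the systematic errors quoted above; the random-error components σ are not reported in the abstract, so the values below are placeholders chosen only to illustrate how margins of the quoted order arise.

```python
# Van Herk PTV margin: 2.5 * Sigma (systematic) + 0.7 * sigma (random), per axis.
def van_herk_margin(sigma_systematic, sigma_random):
    return 2.5 * sigma_systematic + 0.7 * sigma_random

systematic = {"x": 2.28, "y": 3.35, "z": 3.10}   # mm, from the abstract
random_err = {"x": 3.0, "y": 3.5, "z": 4.5}      # mm, placeholder values (not reported here)

for axis in "xyz":
    print(f"{axis}: margin ~ {van_herk_margin(systematic[axis], random_err[axis]):.2f} mm")
```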
Procedia PDF Downloads 55
812 Replacement of the Distorted Dentition of the Cone Beam Computed Tomography Scan Models for Orthognathic Surgery Planning
Authors: T. Almutairi, K. Naudi, N. Nairn, X. Ju, B. Eng, J. Whitters, A. Ayoub
Abstract:
Purpose: At present, Cone Beam Computed Tomography (CBCT) imaging does not record dental morphology accurately due to the scattering produced by metallic restorations and the reported magnification. The aim of this pilot study is the development and validation of a new method for the replacement of the distorted dentition of CBCT scans with the dental image captured by a digital intraoral scanner. Materials and Method: Six dried skulls with orthodontic brackets on the teeth were used in this study. Three intraoral markers made of dental stone were constructed and attached to the orthodontic brackets. The skulls were CBCT scanned, and the occlusal surface was captured using a TRIOS® 3D intraoral scanner. Marker-based and surface-based registrations were performed to fuse the digital intraoral scan (IOS) into the CBCT models. This produced a new composite digital model of the skull and dentition. The skulls were scanned again using the highly accurate commercial Faro® laser arm to produce the 'gold standard' model for the assessment of the accuracy of the developed method. The accuracy of the method was assessed by measuring the distance between the occlusal surfaces of the new composite model and the 'gold standard' 3D model of the skull and teeth. The procedure was repeated a week apart to measure the reproducibility of the method. Results: The results showed no statistically significant difference between the measurements on the first and second occasions. The absolute mean distance between the new composite model and the laser model ranged from 0.11 mm to 0.20 mm. Conclusion: The dentition of the CBCT scan can be accurately replaced with the dental image captured by the intraoral scanner to create a composite model. This method will improve the accuracy of orthognathic surgical prediction planning, with the final goal of fabricating a physical occlusal wafer to guide orthognathic surgery without the need for dental impressions.
Keywords: orthognathic surgery, superimposition, models, cone beam computed tomography
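A brief sketch of how a surface-to-surface distance check of this kind can be computed between two point clouds using nearest neighbours; the point sets here are synthetic, and this is not the registration software used in the study.

```python
# Illustrative mean absolute nearest-neighbour distance between two occlusal surface point clouds.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
reference = rng.uniform(0, 30, size=(5000, 3))                   # "gold standard" laser surface points (synthetic)
test = reference + rng.normal(0, 0.15, size=reference.shape)     # composite-model points with small deviations

distances, _ = cKDTree(reference).query(test)                    # closest reference point for each test point
print(f"mean absolute distance = {distances.mean():.3f} mm, "
      f"95th percentile = {np.percentile(distances, 95):.3f} mm")
```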
Procedia PDF Downloads 195
811 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history dynamic analysis of structures is considered an exact method, but it is computationally intensive. Filtering earthquake strong ground motions using the wavelet transform is an approach to reducing computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since earthquake strong ground motion is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out non-effective frequencies of the strong ground motion. The filtration process may be repeated several times, although each approximation introduces additional error. In this paper, the strong ground motion is filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analysis of sampled shear and moment frames is performed. The error associated with each wavelet is computed by comparing the dynamic responses of the sampled structures with the exact responses. The exact responses are computed by dynamic analysis of the structures using the non-filtered strong ground motion.
Keywords: wavelet transform, computational error, computational duration, strong ground motion data
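A minimal sketch of one level of discrete-wavelet filtering of an accelerogram with PyWavelets: decompose, discard the detail (high-frequency) coefficients, and reconstruct the approximation; the synthetic record and the choice of wavelet are illustrative, not the data or wavelets of the paper.

```python
# One-level DWT filtering of a synthetic accelerogram (PyWavelets).
import numpy as np
import pywt

dt = 0.02
t = np.arange(0, 20, dt)
accel = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 12 * t)   # synthetic ground motion

cA, cD = pywt.dwt(accel, "db4")                                    # approximation + detail coefficients
filtered = pywt.idwt(cA, np.zeros_like(cD), "db4")[: len(accel)]   # keep only the approximation

# In practice the approximation coefficients can be used directly with time step 2*dt,
# roughly halving the number of time steps in the subsequent time-history analysis.
print(len(accel), len(filtered), float(np.abs(accel - filtered).max()))
```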
Procedia PDF Downloads 376
810 Acoustic Analysis of Ball Bearings to Identify Localised Race Defect
Authors: M. Solairaju, Nithin J. Thomas, S. Ganesan
Abstract:
Almost every rotating machine element incorporates bearings within its structure. In particular, rolling element bearings such as cylindrical roller bearings and deep groove ball bearings are frequently used. Improper handling, excessive loading, and improper lubrication and sealing cause bearing damage. Hence, health monitoring of bearings is an important aspect for the prolonged life of machinery and automotives. This paper presents modeling and analysis of the acoustic response of a deep groove ball bearing with localized race defects. Most ball bearings, especially in machine tool spindles and high-speed applications, are preloaded along the axial direction, and the present study is carried out with axial preload. Based on the vibration response, the orbit motion of the inner race is studied, and it was found that the oscillation takes place predominantly in the axial direction. A simplified acoustic radiation pattern of the bearing vibration is computed using the dipole model, and the sound pressure levels for the defect-free and race-defect bearings are estimated. The acoustic response gives a better indication for identifying the defective bearing. The computed sound signal is visualized in a diagrammatic representation using the Symmetrised Dot Pattern (SDP). The SDP gives a better visual distinction between the defective and defect-free bearings.
Keywords: bearing, dipole, noise, sound
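A tiny sketch of the far-field directivity used in such simplified dipole acoustic models, where the radiated pressure amplitude varies as cos(theta)/r about the oscillation axis; the frequency, source strength and distances below are placeholders, and the prefactor is one common textbook form rather than the paper's formulation.

```python
# Far-field pressure directivity of an acoustic dipole (amplitude ~ cos(theta) / r), illustrative values.
import numpy as np

def dipole_pressure_amplitude(theta, r, dipole_strength=1e-3, freq=2000.0, c=343.0, rho=1.21):
    k = 2 * np.pi * freq / c
    return rho * c * k**2 * dipole_strength * np.abs(np.cos(theta)) / (4 * np.pi * r)

theta = np.deg2rad([0, 30, 60, 90])
p = dipole_pressure_amplitude(theta, r=1.0)
spl = 20 * np.log10(np.maximum(p, 1e-12) / 20e-6)     # sound pressure level re 20 uPa
print(np.round(spl, 1))
```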
Procedia PDF Downloads 293
809 Experiment on Artificial Recharge of Groundwater Implemented Project: Effect on the Infiltration Velocity by Vegetation Mulch
Authors: Cheh-Shyh Ting, Jiin-Liang Lin
Abstract:
This study was conducted at the Wanglung Farm in Pingtung County to test the influences on groundwater seepage at the implemented artificial groundwater recharge project. The study was divided into three phases. In the first phase, conducted under natural groundwater recharge with the local climate and growing conditions, the naturally occurring vegetation species were observed. The original plants were flooded, and after 60 days it was observed that of the original plants only Goosegrass (Eleusine indica) and Black heart (Polygonum lapathifolium Linn.) remained. Direct infiltration tests were carried out, and the effect of vegetation on the infiltration velocity of the recharge pool was calculated. The second phase was an indoor test. Bahia grass and wild amaranth were selected as the test vegetation. After growth, the distribution of the different grass roots was observed in order to compare the permeability coefficients calculated from the amount of infiltration and to explore the relationship between root density and groundwater recharge efficiency. The third phase was the root tomography analysis, a further observation of the development of plant roots using computed tomography technology. Computed tomography (CT) is a diagnostic imaging examination normally used in the medical field. In the first phase of the feasibility study, most non-aquatic plants wilted and died within seven days. After seven days, the remaining plants were used for the experimental infiltration analysis. Results showed that in the eight-hour infiltration test, Eleusine indica averaged 0.466 m/day and wild amaranth averaged 0.014 m/day. The second phase of the experiment was conducted a week after the plants had died and rotted, and the infiltration experiment was performed under these conditions. The results showed that at the end of the eight-hour infiltration test, Eleusine indica averaged 0.033 m/day and wild amaranth averaged 0.098 m/day. Non-aquatic plants died within two weeks, and their rotted remains clogged the pores of the bottom soil particles, obstructing infiltration in the recharge pool. Experimental results showed that after eight hours of the test, the average infiltration velocity for Eleusine indica was 0.0229 m/day and for wild amaranth 0.0117 m/day. Because the rotted roots of the plants blocked the soil pores in the recharge pool, the artificial infiltration pond was obstructed, with an immediate impact on recharge efficiency. In order to observe the development of plant roots, the third phase used computed tomography imaging. An iodine developer was injected into the Black heart plant, allowing cross-sectional CT images to be obtained and used to observe root development.
Keywords: artificial recharge of groundwater, computed tomography, infiltration velocity, vegetation root system
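A trivial sketch of how an infiltration velocity in m/day can be obtained from the drop in ponded water level over a timed test; the numbers below are invented and are not the authors' measurements.

```python
# Infiltration velocity from ponded water-level drop over a timed test (illustrative numbers).
def infiltration_velocity_m_per_day(level_drop_m, duration_hours):
    return level_drop_m / (duration_hours / 24.0)

# Example: a 155 mm drop over an 8-hour test.
v = infiltration_velocity_m_per_day(0.155, 8.0)
print(f"infiltration velocity ~ {v:.3f} m/day")
```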
Procedia PDF Downloads 308
808 Recognizing Customer Preferences Using Review Documents: A Hybrid Text and Data Mining Approach
Authors: Oshin Anand, Atanu Rakshit
Abstract:
The vast increase in e-commerce ventures makes this area a prominent research stream. Besides several quantified parameters, the textual content of reviews is a storehouse of information that can educate companies and help them earn profit. This study is an attempt in that direction. The article attempts to categorize data based on a computed metric that quantifies the influencing capacity of reviews, yielding two categories of highly and less influential reviews. Further, each of these documents is studied to derive several product feature categories. Each of these categories, along with the computed metric, is converted into linguistic identifiers and used in an association mining model. The article makes a novel attempt to combine feature extraction with the quantified metric to categorize review text and finally provide frequent patterns that depict customer preferences. Frequent mentions in highly influential reviews depict customer likes or preferred features in the product, whereas prominent patterns in low-influence reviews highlight what is not important to customers. This is achieved using a hybrid approach of text mining for feature and term extraction, sentiment analysis, a multi-criteria decision-making technique and an association mining model.
Keywords: association mining, customer preference, frequent pattern, online reviews, text mining
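A compact sketch of the frequent-pattern step: counting itemsets that co-occur across review "transactions" labelled with an influence level; the transactions, labels and support threshold are invented for illustration and are not the study's data.

```python
# Minimal frequent-itemset counting over review transactions (Apriori-style support counting).
from itertools import combinations
from collections import Counter

transactions = [                                   # each review -> influence label + mentioned features (made up)
    {"high_influence", "battery", "camera"},
    {"high_influence", "battery", "screen"},
    {"high_influence", "battery", "camera"},
    {"low_influence", "packaging"},
    {"low_influence", "packaging", "manual"},
]

min_support = 0.4
counts = Counter()
for t in transactions:
    for size in (1, 2, 3):
        counts.update(frozenset(c) for c in combinations(sorted(t), size))

frequent = {items: n / len(transactions) for items, n in counts.items() if n / len(transactions) >= min_support}
for itemset, support in sorted(frequent.items(), key=lambda kv: -kv[1]):
    print(sorted(itemset), round(support, 2))
```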
Procedia PDF Downloads 387