Search results for: slice bispectrogram
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 69

69 Slice Bispectrogram Analysis-Based Classification of Environmental Sounds Using Convolutional Neural Network

Authors: Katsumi Hirata

Abstract:

Certain systems can function well only if they recognize the sound environment as humans do. In this research, we focus on sound classification using a convolutional neural network and aim to develop a method that automatically classifies various environmental sounds. Although the neural network is a powerful technique, its performance depends on the type of input data. We therefore propose an approach based on the slice bispectrogram, a third-order spectrogram obtained by slicing the amplitude of the short-time bispectrum. This paper explains the slice bispectrogram and demonstrates the effectiveness of the derived method by evaluating experimental results on the ESC-50 sound dataset. The proposed scheme gives high accuracy and stability. Furthermore, a relationship between the accuracy and the non-Gaussianity of the sound signals was confirmed.
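
As an illustration of the underlying computation, the following is a minimal sketch of a diagonal-slice bispectrogram, B(f, f) = X(f)^2 * conj(X(2f)) evaluated per short-time frame; the frame length, hop size, window, and the choice of the diagonal slice are assumptions, since the abstract does not state the authors' exact parameters.

```python
# Hedged sketch: diagonal-slice bispectrogram of a 1-D signal.
import numpy as np

def slice_bispectrogram(x, frame_len=1024, hop=256):
    """Amplitude of the diagonal slice B(f, f) = X(f)^2 * conj(X(2f))
    of the short-time bispectrum, one column per frame."""
    window = np.hanning(frame_len)
    n_bins = frame_len // 4            # 2f must stay below Nyquist
    frames = []
    for start in range(0, len(x) - frame_len + 1, hop):
        seg = x[start:start + frame_len] * window
        X = np.fft.rfft(seg)
        f = np.arange(n_bins)
        frames.append(np.abs(X[f] ** 2 * np.conj(X[2 * f])))
    return np.array(frames).T          # shape: (freq bins, time frames)
```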

Keywords: environmental sound, bispectrum, spectrogram, slice bispectrogram, convolutional neural network

Procedia PDF Downloads 94
68 Heterogeneous Dimensional Super Resolution of 3D CT Scans Using Transformers

Authors: Helen Zhang

Abstract:

Accurate segmentation of the airways from CT scans is crucial for early diagnosis of lung cancer. However, existing airway segmentation algorithms often rely on thin-slice CT scans, which can be inconvenient and costly. This paper presents a set of machine learning-based 3D super-resolution algorithms along heterogeneous dimensions that improve the resolution of thick-slice CT scans, reducing the reliance on thin-slice scans. To evaluate the efficacy of the super-resolution algorithms, quantitative assessments using PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity index) were performed. The impact of super-resolution on airway segmentation accuracy was also studied. The proposed approach has the potential to make airway segmentation more accessible and affordable, thereby facilitating early diagnosis and treatment of lung cancer.
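
For reference, a minimal sketch of the two reported quality metrics computed with scikit-image; treating the thin-slice volume as ground truth and deriving the data range from it are assumptions.

```python
# Hedged sketch: PSNR and SSIM between a reference volume and a
# super-resolved volume, via scikit-image.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def assess_super_resolution(reference, upsampled):
    """reference: thin-slice volume; upsampled: super-resolved volume."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, upsampled, data_range=data_range)
    ssim = structural_similarity(reference, upsampled, data_range=data_range)
    return psnr, ssim
```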

Keywords: 3D super-resolution, airway segmentation, thin-slice CT scans, machine learning

Procedia PDF Downloads 79
67 Changes in Textural Properties of Zucchini Slices with Deep-Fat-Frying

Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner

Abstract:

Changes in the textural properties of zucchini slices under the effects of frying conditions were investigated. Frying time and temperature, together with slice thickness, were the process variables of interest. Slice thickness was studied at three levels (2, 3, and 4 mm). The frying process was performed at two temperatures (160 and 180 °C), each for six different frying times (1, 2, 3, 5, 8, and 10 min). Sunflower oil was used as the frying oil. Before frying, the zucchini slices were blanched in boiling water for 90 seconds to inactivate at least 80% of the plant's enzymes. After this thermal process, the slices were fried in an industrial fryer at the specified temperature-time pairs. Fried slices were subjected to textural profile analysis (TPA) to determine their textural properties; in this context, the hardness, elasticity, cohesion, chewiness, and firmness values of the slices were determined. Statistical analysis indicated significant variations in the studied textural properties with process conditions (p < 0.05). Hardness and firmness were also determined for fresh and thermally processed zucchini slices for comparison. Differences in the hardness and firmness of fresh, thermally processed, and fried slices were found to be significant (p < 0.05). This project (113R015) has been supported by TUBITAK.

Keywords: sunflower oil, hardness, firmness, slice thickness, frying temperature, frying time

Procedia PDF Downloads 414
66 MRI Quality Control Using Texture Analysis and Spatial Metrics

Authors: Kumar Kanudkuri, A. Sandhya

Abstract:

Typically, in an MRI clinical setting, several protocols are run, each indicated for a specific anatomy and disease condition. However, these protocols, or parameters within them, can change over time due to changes in the recommendations of physician groups, software updates, or the availability of new technologies. Most of the time, the changes are made by the MRI technologist to account for time, coverage, physiological, or Specific Absorption Rate (SAR) considerations. It is therefore important to give MRI technologists proper guidelines so that they do not change parameters in ways that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) to guarantee that the primary objectives of MRI are met. Visual evaluation of quality depends on the operator/reviewer and may vary between operators, as well as for the same operator at different times. Overcoming these constraints is essential for a more impartial evaluation of quality, which makes quantitative estimation of image quality (IQ) metrics very important for MRI quality control. To address this problem, we propose a robust, open-source, automated MRI quality control tool. The tool we designed and developed measures MRI IQ metrics such as Signal to Noise Ratio (SNR), Signal to Noise Ratio Uniformity (SNRU), Visual Information Fidelity (VIF), Feature Similarity (FSIM), Gray Level Co-occurrence Matrix (GLCM) texture features, slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution, and provides good accuracy assessment. A standardized quality report is generated that incorporates the metrics that impact diagnostic quality.
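
A minimal sketch of two of the listed metrics on a phantom slice, an ROI-based SNR and a GLCM contrast feature via scikit-image; the ROI placement and GLCM settings are assumptions, not the authors' choices.

```python
# Hedged sketch: ROI-based SNR and a GLCM texture feature.
from skimage.feature import graycomatrix, graycoprops

def roi_snr(image, signal_roi, noise_roi):
    """ROIs are (row_slice, col_slice) pairs inside/outside the phantom."""
    return image[signal_roi].mean() / image[noise_roi].std()

def glcm_contrast(image_uint8):
    glcm = graycomatrix(image_uint8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]
```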

Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy

Procedia PDF Downloads 132
65 Induced Chemistry for Dissociative Electron Attachment to Focused Electron Beam Induced Deposition Precursors Based on Ti, Si and Fe Metal Elements

Authors: Maria Pintea, Nigel Mason

Abstract:

Induced chemistry is one of the newest pathways in the nanotechnology field, with applications in focused electron beam induced processes for the deposition of nm-scale structures. Si(OPr)₄ and Ti(OEt)₄ are two precursors that have not been extensively researched, though they are highly sought after for semiconductor and medical applications; the two compounds make good candidates for FEBIP and are the subject of velocity slice map imaging analysis for deposition purposes, offering information on kinetic energies, fragmentation channels, and angular distributions. The velocity slice map imaging technique is a method used to characterize the molecular dynamics of the molecule and the fragmentation channels that result from induced chemistry. To support the gas-phase analysis, Meso-Bio-Nano simulations of irradiation dynamics are employed, with final results on Fe(CO)₅ deposited on various substrates. The software is capable of running large-scale simulations of complex biomolecular, nano-, and mesoscopic systems, with applications to thermo-mechanical DNA damage, complex materials, gases, nanoparticles for cancer research, and deposition applications in nanotechnology, using a large library of classical potentials, many-body force fields, and molecular force fields for classical molecular dynamics.

Keywords: focused electron beam induced deposition, FEBID, induced chemistry, molecular dynamics, velocity map slice imaging

Procedia PDF Downloads 77
64 Reconfigurable Efficient IIR Filter Design Using MAC Algorithm

Authors: Rajesh Mehra

Abstract:

In this paper, an IIR filter has been designed and simulated on an FPGA. The implementation is based on the MAC algorithm, which uses multiply-and-accumulate operations to realize the IIR filter. A parallel pipelined structure is used to implement the proposed IIR filter, taking optimal advantage of the look-up tables of the FPGA device. The designed filter has been synthesized on a DSP-slice-based FPGA, with the DSP slices performing the multiplier function of the MAC unit; the DSP slices are useful for enhancing speed performance. The developed IIR filter is designed and simulated with MATLAB, synthesized with the Xilinx Synthesis Tool (XST), and implemented on Virtex 5 and Spartan 3A DSP FPGA devices. The IIR filter implemented on the Virtex 5 FPGA can operate at an estimated frequency of 81.5 MHz, compared to 40.5 MHz on the Spartan 3A DSP FPGA. The Virtex 5 implementation also consumes fewer slices and slice flip-flops of the target FPGA than the Spartan 3A DSP implementation, providing a cost-effective solution for signal processing applications.
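
As a software reference for the arithmetic mapped onto the DSP slices, a minimal sketch of a direct-form I IIR filter written as explicit multiply-accumulate operations; coefficient normalization (a[0] = 1) is assumed.

```python
# Hedged sketch: direct-form I IIR filtering as MAC operations.
def iir_mac(x, b, a):
    """y[n] = sum(b[k]*x[n-k]) - sum(a[k]*y[n-k]), with a[0] assumed 1."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):          # feed-forward MACs
            if n - k >= 0:
                acc += bk * x[n - k]
        for k, ak in enumerate(a[1:], 1):   # feedback MACs
            if n - k >= 0:
                acc -= ak * y[n - k]
        y.append(acc)
    return y
```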

Keywords: butterworth, DSP, IIR, MAC, FPGA

Procedia PDF Downloads 326
63 Code Refactoring Using Slice-Based Cohesion Metrics and AOP

Authors: Jagannath Singh, Durga Prasad Mohapatra

Abstract:

Software refactoring is essential for maintaining software quality. It is usual practice to first design the software and then proceed to coding. But after coding is complete, if the requirements change slightly or the expected output is not achieved, we change the code. For each small code change, we cannot change the design, and over time these small changes cause the software design to decay. Software refactoring is used to restructure the code in order to improve the design and quality of the software. In this paper, we propose an approach for performing code refactoring. We use slice-based cohesion metrics to identify the target methods that require refactoring. After identifying the target methods, we use program slicing to divide each target method into two parts. Finally, we use the concepts of aspect-oriented programming to adjust the code structure so that the external behaviour of the original module does not change.
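
A minimal sketch of slice-based cohesion metrics in the spirit of Ott and Thuss (tightness, coverage, overlap), assuming the statement-level slices for each output variable have already been computed by a slicer; the exact metric set used by the authors is not specified in the abstract.

```python
# Hedged sketch: slice-based cohesion metrics from precomputed slices.
def cohesion_metrics(slices, module_len):
    """slices: list of sets of statement numbers, one per output variable."""
    if not slices:
        return 0.0, 0.0, 0.0
    common = set.intersection(*slices)               # statements in every slice
    tightness = len(common) / module_len
    coverage = sum(len(s) for s in slices) / (len(slices) * module_len)
    overlap = sum(len(common) / len(s) for s in slices) / len(slices)
    return tightness, coverage, overlap
```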

Keywords: software refactoring, program slicing, AOP, cohesion metrics, code restructure, AspectJ

Procedia PDF Downloads 474
62 FPGA Based IIR Filter Design Using MAC Algorithm

Authors: Rajesh Mehra, Bharti Thakur

Abstract:

In this paper, an IIR filter has been designed and simulated on an FPGA. The implementation is based on the MAC algorithm, which uses multiply-and-accumulate operations to realize the IIR filter. A parallel pipelined structure is used to implement the proposed IIR filter, taking optimal advantage of the look-up tables of the FPGA device. The designed filter has been synthesized on a DSP-slice-based FPGA, with the DSP slices performing the multiplier function of the MAC unit; the DSP slices are useful for enhancing speed performance. The developed IIR filter is designed and simulated with MATLAB, synthesized with the Xilinx Synthesis Tool (XST), and implemented on Virtex 5 and Spartan 3A DSP FPGA devices. The IIR filter implemented on the Virtex 5 FPGA can operate at an estimated frequency of 81.5 MHz, compared to 40.5 MHz on the Spartan 3A DSP FPGA. The Virtex 5 implementation also consumes fewer slices and slice flip-flops of the target FPGA than the Spartan 3A DSP implementation, providing a cost-effective solution for signal processing applications.

Keywords: Butterworth filter, DSP, IIR, MAC, FPGA

Procedia PDF Downloads 354
61 Drying Kinetics, Energy Requirement, Bioactive Composition, and Mathematical Modeling of Allium Cepa Slices

Authors: Felix U. Asoiro, Meshack I. Simeon, Chinenye E. Azuka, Harami Solomon, Chukwuemeka J. Ohagwu

Abstract:

The drying kinetics, specific energy consumption (SEC), effective moisture diffusivity (EMD), and flavonoid, phenolic, and vitamin C contents of onion slices dried under convective oven drying (COD) were compared with microwave drying (MD). Drying was performed with onion slice thicknesses of 2, 4, 6, and 8 mm; drying air temperatures of 60, 80, and 100 °C for COD; and a microwave power of 450 W for MD. A decrease in slice thickness and an increase in drying air temperature led to a drop in drying time. As thickness increased from 2 to 8 mm, EMD rose from 1.1 to 4.35 × 10⁻⁸ m² s⁻¹ at 60 °C, 1.1 to 5.6 × 10⁻⁸ m² s⁻¹ at 80 °C, and 1.25 to 6.12 × 10⁻⁸ m² s⁻¹ at 100 °C, with MD treatments yielding the highest mean value (6.65 × 10⁻⁸ m² s⁻¹) at 8 mm. The maximum SEC for onion slices in COD was 238.27 kWh/kg H₂O (2 mm thickness) and the minimum was 39.4 kWh/kg H₂O (8 mm thickness), whereas the maximum during MD was 25.33 kWh/kg H₂O (8 mm thickness) and the minimum 18.7 kWh/kg H₂O (2 mm thickness). MD treatment gave a significant (p < 0.05) increase in the flavonoid (39.42 – 64.4%), phenolic (38.0 – 46.84%), and vitamin C (3.7 – 4.23 mg 100 g⁻¹) contents, while COD treatment at 60 °C and 100 °C had positive effects only on vitamin C and phenolic contents, respectively. The Weibull model gave the overall best fit (highest R² = 0.999; lowest SSE = 0.0002, RMSE = 0.0123, and χ² = 0.0004) when drying 2 mm onion slices at 100 °C.
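
For illustration, a minimal sketch of fitting the Weibull thin-layer model MR(t) = exp(-(t/alpha)^beta) with SciPy; the moisture-ratio data below are placeholders, not the paper's measurements.

```python
# Hedged sketch: Weibull thin-layer drying model fit via curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def weibull_mr(t, alpha, beta):
    return np.exp(-(t / alpha) ** beta)

t = np.array([0, 10, 20, 40, 60, 90], dtype=float)    # minutes (placeholder)
mr = np.array([1.0, 0.62, 0.41, 0.18, 0.08, 0.03])    # moisture ratio

params, _ = curve_fit(weibull_mr, t, mr, p0=(20.0, 1.0))
residuals = mr - weibull_mr(t, *params)
r_squared = 1 - (residuals ** 2).sum() / ((mr - mr.mean()) ** 2).sum()
```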

Keywords: allium cepa, drying kinetics, specific energy consumption, flavonoid, vitamin C, microwave oven drying

Procedia PDF Downloads 101
60 Generating 3D Battery Cathode Microstructures Using Gaussian Mixture Models and Pix2Pix

Authors: Wesley Teskey, Vedran Glavas, Julian Wegener

Abstract:

Generating battery cathode microstructures is an important area of research, given the proliferation of automotive batteries. Currently, finite element analysis (FEA) is often used to simulate battery cathode microstructures before physical batteries are manufactured and tested to verify the simulation results. Unfortunately, a key drawback of FEA is that it is very slow in terms of computational runtime. Generative AI offers the key advantage of speed compared to FEA and is therefore capable of evaluating very large numbers of candidate microstructures; a subset of the promising microstructures can then be selected for further validation using FEA. Leveraging the speed advantage of AI allows for a better final microstructure selection, because high speed allows many more candidates to be evaluated. In the approach presented, 3D candidate cathode microstructures are generated using Gaussian Mixture Models (GMMs) and pix2pix. The approach first uses GMMs to generate a population of spheres (representing the "active material" of the cathode). Once spheres have been sampled from the GMM, they are placed within a microstructure. Subsequently, pix2pix sweeps iteratively over the 3D microstructure slice by slice and adds detail, determining which portions of the microstructure become electrolyte and which become binder. Each subsequent slice is evaluated using pix2pix, with the previously processed layers of the microstructure as inputs. Feeding pix2pix the previously processed layers ensures that candidate microstructures represent a realistic physical reality: the locations of electrolyte and binder in each layer must reasonably match those in previous layers to ensure geometric continuity. Using this approach, a 10x to 100x speed increase was achieved when generating candidate microstructures with AI compared to an FEA-only approach. A key metric for evaluating microstructures was the battery specific power that the microstructures would be able to produce. The best generative AI result was a 12% increase in specific power for a candidate microstructure compared to what an FEA-only approach produced; this increase was verified by FEA simulation.
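
A minimal sketch of the first stage, sampling sphere parameters from a fitted Gaussian mixture and stamping them into a voxel grid as active material; the (x, y, z, r) parameterization, component count, grid size, and training data are all assumptions about details the abstract does not give.

```python
# Hedged sketch: GMM-sampled spheres voxelized as "active material".
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
training = rng.uniform(0, 1, size=(500, 4))    # placeholder (x, y, z, r) data
gmm = GaussianMixture(n_components=5, random_state=0).fit(training)

samples, _ = gmm.sample(200)                   # candidate spheres
grid = np.zeros((64, 64, 64), dtype=np.uint8)
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64] / 64.0
for x, y, z, r in samples:
    r = abs(r) * 0.1                           # keep radii plausible
    mask = (xx - x) ** 2 + (yy - y) ** 2 + (zz - z) ** 2 <= r ** 2
    grid[mask] = 1                             # 1 = active material
```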

Keywords: finite element analysis, gaussian mixture models, generative design, Pix2Pix, structural design

Procedia PDF Downloads 75
59 Generalized Limit Equilibrium Solution for the Lateral Pile Capacity Problem

Authors: Tomer Gans-Or, Shmulik Pinkert

Abstract:

The determination of lateral pile capacity per unit length is a key aspect of geotechnical engineering. Traditional approaches for assessing pile lateral capacity in cohesive soils involve the application of upper-bound and lower-bound plasticity theorems. However, a comprehensive solution encompassing the entire spectrum of soil strength parameters, particularly in frictional soils with or without cohesion, is still lacking. This research introduces an innovative implementation of the slice-method limit equilibrium solution for lateral capacity assessment. For any given numerical discretization of the soil domain around the pile, the lateral capacity evaluation is based on the mobilized strength concept. The critical failure geometry is then found by a unique optimization procedure that includes both factor-of-safety minimization and geometrical optimization. The robustness of the suggested methodology lies in the solution being independent of any predefined failure-geometry assumptions. Validation of the solution is accomplished through a comparison with established plasticity solutions for cohesive soils. Furthermore, the study demonstrates the applicability of the limit equilibrium method to unresolved cases involving frictional and cohesive-frictional soils. Beyond providing capacity values, the method enables the use of the mobilized strength concept to generate safety-factor distributions for scenarios representing pre-failure states.

Keywords: lateral pile capacity, slice method, limit equilibrium, mobilized strength

Procedia PDF Downloads 21
58 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline

Authors: Kenan Morani, Esra Kaya Ayana

Abstract:

This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with optional slice-removal techniques following the segmentation part. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which is then used to build the full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, achieving a higher Dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating its efficiency in predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score than the alternatives, surpassing the baseline. The classification component of the pipeline uses a convolutional neural network (CNN) to make the final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices rigorously annotated for COVID-19 detection, was used for classification. The proposed pipeline outperformed many alternatives on this dataset.
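
A minimal sketch of the modification described, a UNet encoder block with batch normalization inserted after each convolution (PyTorch); the channel sizes and block layout are assumptions, not the authors' exact architecture.

```python
# Hedged sketch: UNet conv block with added batch normalization.
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),           # the addition over vanilla UNet
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```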

Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation

Procedia PDF Downloads 95
57 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI

Authors: Ananya Ananya, Karthik Rao

Abstract:

Accurate segmentation of knee cartilage in 3D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Hence, automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods for knee cartilage segmentation in 3D MRI based on 2D convolutional neural networks. The architectures are validated on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation of the whole 3D image. The proposed methods are modified versions of U-net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibial cartilage, femoral bone, and tibial bone, the cartilages being the primary components of interest. U-net consists of a contracting path and an expanding path, to capture context and localization respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of modified U-net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice Score Coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibial cartilage, with manual segmentation as the reference.
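
For reference, a minimal sketch of the per-label volumetric Dice Score Coefficient used in the evaluation.

```python
# Hedged sketch: volumetric Dice score for one label of a 3D segmentation.
import numpy as np

def dice(pred, truth, label):
    p = (pred == label)
    t = (truth == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
```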

Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net

Procedia PDF Downloads 227
56 Deciphering the Action of Neuraminidase in Glioblastoma Models

Authors: Nathalie Baeza-Kallee, Raphaël Bergès, Victoria Hein, Stéphanie Cabaret, Jeremy Garcia, Abigaëlle Gros, Emeline Tabouret, Aurélie Tchoghandjian, Carole Colin, Dominique Figarella-Branger

Abstract:

Glioblastoma (GBM) contains cancer stem cells that are resistant to treatment. GBM cancer stem cells express glycolipids recognized by the A2B5 antibody. A2B5, induced by the enzyme ST8 alpha-N-acetyl-neuraminide alpha-2,8-sialyltransferase 3 (ST8Sia3), plays a crucial role in the proliferation, migration, clonogenicity, and tumorigenesis of GBM cancer stem cells. Our aim was to characterize the effects of neuraminidase, which removes A2B5, in order to target GBM cancer stem cells. To this end, we set up a GBM organotypic slice model; quantified A2B5 expression by flow cytometry in U87-MG, U87-ST8Sia3, and GBM cancer stem cell lines, treated or not with neuraminidase; performed RNAseq and DNA methylation profiling; and analyzed ganglioside expression by liquid chromatography-mass spectrometry in these cell lines, treated or not with neuraminidase. The results demonstrated that neuraminidase decreased A2B5 expression, tumor size, and regrowth after surgical removal in the organotypic slice model, but did not induce a distinct transcriptomic or epigenetic signature in GBM cancer stem cell lines. RNAseq analysis revealed that OLIG2, CHI3L1, TIMP3, TNFAIP2, and TNFAIP6 transcripts were significantly overexpressed in U87-ST8Sia3 compared to U87-MG. RT-qPCR confirmed these results and demonstrated that neuraminidase decreased gene expression in GBM cancer stem cell lines. Moreover, neuraminidase drastically reduced ganglioside expression in GBM cancer stem cell lines. Neuraminidase, by its pleiotropic action, is an attractive local treatment against GBM.

Keywords: cancer stem cell, ganglioside, glioblastoma, targeted treatment

Procedia PDF Downloads 43
55 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India

Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar

Abstract:

The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are the highest among diagnostic radiology practices, it is of great significance to be aware of the patient's CT radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of volume CT dose index (CTDIvol) values, is displayed on scanner monitors at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate the CTDIvol values for a large number of patients during the most frequent CT examinations, to compare CT dose monitor values with measured ones, and to highlight the variation of CTDIvol values for the same CT examination at different centres and scanner models. The output CT dose index measurements were carried out on single- and multi-slice scanners for the available kV, 5 mm slice thickness, 100 mA, and FOV combinations used. 100 CT scanners were involved in this study. Data regarding 15,000 examinations of patients who underwent routine head, chest, and abdomen CT were collected using a questionnaire sent to a large number of hospitals; of these, 5,000 were head, 5,000 chest, and 5,000 abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergence between the measured and displayed CTDIvol values was 5.2, 8.4, and -5.7 for the selected head, chest, and abdomen protocols, respectively. This investigation revealed an observable change in CT practice, with a much wider range of studies currently being performed in South India, reflecting the improved capacity of CT scanners to scan longer lengths at finer resolutions, as permitted by helical and multislice technology. Some CT scanners used smaller slice thicknesses for routine CT procedures to achieve better resolution and image quality; this increases the patient radiation dose as well as the measured CTDIvol, so it is suggested that such CT scanners select appropriate slice thicknesses and scanning parameters in order to reduce the patient dose. If these routine scan parameters for head, chest, and abdomen procedures are optimized, the dose indices will be optimal and lead to lower CT doses. In the South Indian region, all the CT machines were routinely tested for QA once a year as per AERB requirements.
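
For reference, a minimal sketch of the standard relations behind the compared quantity: the weighted CTDI combines centre and periphery 100 mm pencil-chamber readings, and CTDIvol divides by pitch.

```python
# Hedged sketch: standard weighted and volume CT dose indices.
def ctdi_w(center_mGy, periphery_mGy):
    """center/periphery: 100 mm pencil-chamber readings in a CT phantom."""
    return center_mGy / 3.0 + 2.0 * periphery_mGy / 3.0

def ctdi_vol(center_mGy, periphery_mGy, pitch):
    return ctdi_w(center_mGy, periphery_mGy) / pitch
```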

Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose

Procedia PDF Downloads 224
54 Organ Dose Calculator for Fetus Undergoing Computed Tomography

Authors: Choonsik Lee, Les Folio

Abstract:

Pregnant patients may undergo CT in emergencies unrelated to pregnancy, and the potential risk to the developing fetus is of concern. It is critical to accurately estimate fetal organ doses in CT scans. We developed a fetal organ dose calculation tool using pregnancy-specific computational phantoms combined with Monte Carlo radiation transport techniques. We adopted a series of pregnancy computational phantoms developed at the University of Florida at gestational ages of 8, 10, 15, 20, 25, 30, 35, and 38 weeks (Maynard et al. 2011). More than 30 organs and tissues and 20 skeletal sites are defined in each fetus model. We calculated fetal organ doses normalized by CTDIvol to derive organ dose conversion coefficients (mGy/mGy) for the eight fetuses at consecutive slice locations ranging from the top to the bottom of the pregnancy phantoms with 1 cm slice thickness. Organ dose from helical scans was approximated by the summation of doses from the multiple axial slices included in the scan range of interest. We then compared the dose conversion coefficients for major fetal organs in abdominal-pelvis (AP) CT scans of the pregnancy phantoms with the uterine dose of a non-pregnant adult female computational phantom. A comprehensive library of organ conversion coefficients was established for the eight developing fetuses undergoing CT and implemented into an in-house graphical-user-interface-based computer program for convenient estimation of fetal organ doses from the CT technical parameters and the age of the fetus. We found that the esophagus received the lowest dose, whereas the kidneys received the greatest dose, in all fetuses in AP scans of the pregnancy phantoms. We also found that when the uterine dose of a non-pregnant adult female phantom is used as a surrogate for fetal organ doses, the root-mean-square error ranged from 0.08 mGy (8 weeks) to 0.38 mGy (38 weeks). The uterine dose was up to 1.7-fold greater than the esophagus dose of the 38-week fetus model. The calculation tool should be useful in cases requiring fetal organ dose in emergency CT scans, as well as for patient dose monitoring.
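
A minimal sketch of the helical-scan approximation the abstract describes, summing per-slice organ dose conversion coefficients over the scan range and scaling by CTDIvol; the coefficient-table layout is an assumption.

```python
# Hedged sketch: fetal organ dose from per-slice conversion coefficients.
def fetal_organ_dose(coeffs, start_cm, end_cm, ctdi_vol_mGy):
    """coeffs: dict mapping slice position (cm) -> organ dose conversion
    coefficient (mGy/mGy) for 1 cm axial slices."""
    in_range = [c for pos, c in coeffs.items() if start_cm <= pos < end_cm]
    return ctdi_vol_mGy * sum(in_range)
```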

Keywords: computed tomography, fetal dose, pregnant women, radiation dose

Procedia PDF Downloads 108
53 Towards Security in Virtualization of SDN

Authors: Wanqing You, Kai Qian, Xi He, Ying Qian

Abstract:

In this paper, the potential security issues brought about by the virtualization of Software Defined Networks (SDN) are analyzed. The virtualization of SDN is achieved by FlowVisor (FV). With FV, a physical network is divided into multiple isolated logical networks while the underlying resources are still shared by the different slices (isolated logical networks). However, along with the benefits brought by network virtualization, it also presents some security issues. By examining the security issues existing in an OpenFlow network that uses FlowVisor to slice it into multiple virtual networks, we hope to obtain significant results and stimulate further discussion on the security of SDN virtualization.

Keywords: SDN, network, virtualization, security

Procedia PDF Downloads 388
52 Calculating Ventricle’s Area Based on Clinical Dementia Rating Values on Coronal MRI Image

Authors: Retno Supriyanti, Ays Rahmadian Subhi, Yogi Ramadhani, Haris B. Widodo

Abstract:

Alzheimer's disease is one type of disease that may occur in the elderly worldwide. Its severity can be measured using a scale called the Clinical Dementia Rating (CDR), based on a doctor's diagnosis of the patient's condition. Currently, the diagnosis of Alzheimer's disease often uses an MRI machine to assess the condition of the parts of the brain called the hippocampus and the ventricles. MRI images are acquired in three orientations, namely coronal, sagittal, and axial. In this paper, we discuss the measurement of the area of the ventricle, especially in the coronal slice, based on the severity level referring to the CDR value. We use the active contour method to segment the ventricle region, so that the ventricle area can be calculated automatically. The results show that this method can be used for further development in the automatic diagnosis of Alzheimer's disease.
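
A minimal sketch of the active-contour step using scikit-image, with the ventricle area taken as the pixel count enclosed by the converged snake; the circular initialization and Gaussian smoothing are assumptions, not the authors' settings.

```python
# Hedged sketch: active-contour segmentation and area of the ventricle.
import numpy as np
from skimage.segmentation import active_contour
from skimage.filters import gaussian
from skimage.draw import polygon

def ventricle_area(slice_img, center, radius):
    theta = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([center[0] + radius * np.sin(theta),
                            center[1] + radius * np.cos(theta)])
    snake = active_contour(gaussian(slice_img, 3), init)
    rr, cc = polygon(snake[:, 0], snake[:, 1], slice_img.shape)
    return len(rr)                     # enclosed area in pixels
```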

Keywords: Alzheimer, CDR, coronal, ventricle, active contour

Procedia PDF Downloads 238
51 Use of Digital Forensics for Sex Determination by Nasal Index

Authors: Ashwini Kumar, Vinod Nayak, Shankar M. Bakkannavar

Abstract:

The identification of humans is important in forensic investigations, not only for the living but also for the dead, especially in cases of mass disasters. The procedure followed for the dead, known as post-mortem identification, is a challenging task for the forensic pathologist; however, it is mandatory in terms of the law and fulfills social norms. Many times, due to mutilation of body parts, the normal methods of identification using skeletal remains cannot be used in the identification process. In such cases, the intact components of the skeletal remains or bony parts play an important role in identification, and digital forensics can come to our rescue. The authors hereby present a study on the determination of sex based on the nasal index using (Big Bore 16 Slice) Multidetector Computed Tomography 2D scans. The results are represented as a poster.
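
For reference, a minimal sketch of the nasal index computation itself; population-specific sectioning points for assigning sex are not given in the abstract and are therefore omitted.

```python
# Hedged sketch: nasal index from MDCT measurements.
def nasal_index(nasal_width_mm, nasal_height_mm):
    """Nasal index = (maximum nasal width / nasal height) x 100."""
    return 100.0 * nasal_width_mm / nasal_height_mm
```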

Keywords: sex determination, multidetector computed tomography, nasal index, digital forensic

Procedia PDF Downloads 365
50 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography

Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld

Abstract:

With the increase of beam widths and the advent of multiple-slice and helical scanners, concerns related to the current dose measurement protocols and instrumentation in computed tomography (CT) have arisen. The current methodology of dose evaluation, based on measuring the integral of a single-slice dose profile using a 100 mm long cylindrical ionization chamber (Ca,100 and CPMMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach has been suggested: translating smaller detectors through the beam plane and assessing the accumulated dose through the integral of the dose profile, which can be done over any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions, known as the MOSkin, is developed by the Centre for Medical Radiation Physics at the University of Wollongong and measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed at the surface, or of internal point doses when placed within a phantom. Thus, the aim of this research was to characterize the response of the MOSkin dosimeter for X-ray CT beams and to evaluate its application to accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionizing Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence, and reproducibility of the dose response of the device for the standard radiation qualities RQT 8, RQT 9, and RQT 10. Finally, the MOSkin was used for the accumulated dose evaluation of scans using a Philips Brilliance 6 CT unit, with comparisons made against the CPMMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (diameter of 16 cm) and exposed in axial mode with a collimation of 9 mm, 250 mAs, and 120 kV. The results have shown that the MOSkin response was linear with dose in the CT range and reproducible (98.52%). The sensitivity for a single MOSkin in mV/cGy was as follows: 9.208, 7.691, and 6.723 for the RQT 8, RQT 9, and RQT 10 beam qualities, respectively. The energy dependence varied by up to a factor of ±1.19 among those energies, and the angular dependence was not greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the CPMMA,100 value were 3.97 and 3.79 cGy, respectively, which are statistically equivalent at the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate to provide accumulated dose assessments for CT procedures.
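
A minimal sketch of the alternative approach described, integrating a translated point-detector dose profile and normalizing by the nominal beam width in the style of a CTDI-type index; array contents are placeholders.

```python
# Hedged sketch: accumulated dose index from a translated-detector profile.
import numpy as np

def accumulated_dose_index(z_mm, profile_cGy, beam_width_mm):
    """Integral of the dose profile D(z) divided by the nominal beam
    width (nT): a CTDI-style accumulated dose for one rotation."""
    return np.trapz(profile_cGy, z_mm) / beam_width_mm
```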

Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry

Procedia PDF Downloads 279
49 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, ultimately, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly, or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated CIRS 062QA phantom and the QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software, which enables fast and simple evaluation of CT QA parameters using the phantom provided with the simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within ±5 HU of the value at commissioning; field uniformity within ±10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%; spatial and contrast resolution tests must comply with the tests obtained at commissioning, otherwise the machine requires service; the result of the image noise test must fall within 20% of the baseline value; slice thickness must meet manufacturer specifications; and patient table stability with longitudinal transfer of the loaded table must not exceed 2 mm of vertical deviation. Conclusion: The implemented QA tests gave an overall basic understanding of CT simulator functionality and its clinical effectiveness in radiation treatment planning. The legal requirement on the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented so as to improve the overall quality of the radiation treatment planning procedure, as the quality of the CT images used for radiation treatment planning influences the delineation of the tumor and the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to the patient.
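
A minimal sketch encoding the numeric tolerances listed above (CT number ±5 HU, uniformity ±10 HU, noise within 20% of baseline, table deviation not more than 2 mm); it is a bookkeeping aid, not the clinic's actual QC software.

```python
# Hedged sketch: pass/fail evaluation of the stated QC tolerances.
def ct_simulator_qc(ct_number, baseline_ct, roi_values,
                    noise, baseline_noise, table_dev_mm):
    checks = {
        "ct_number": abs(ct_number - baseline_ct) <= 5.0,            # HU
        "uniformity": max(abs(v - ct_number) for v in roi_values) <= 10.0,
        "noise": abs(noise - baseline_noise) <= 0.2 * baseline_noise,
        "table": table_dev_mm <= 2.0,                                 # mm
    }
    return all(checks.values()), checks
```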

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 493
48 Key Technologies and Evolution Strategies for Computing Force Bearer Network

Authors: Zhaojunfeng

Abstract:

Driven by the national policy of "East Data and Western Calculation", the computing-first network will attract a new wave of development. As the foundation of the development of the computing-first network, the computing force bearer network has become a key direction of technology research and development in the industry. This article analyzes typical computing force application scenarios and their bearing requirements and sorts out the SLA indicators of computing force applications. On this basis, it carries out research and discussion on the key technologies of the computing force bearer network in a slice packet network (SPN) and, finally, gives an evolution strategy for the SPN computing force bearer network to support the development of SPN computing force bearer network technology and network deployment.

Keywords: component-computing force bearing, bearing requirements of computing force application, dual-SLA indicators for computing force applications, SRv6, evolution strategies

Procedia PDF Downloads 99
47 Multivariate Analysis of Spectroscopic Data for Agriculture Applications

Authors: Asmaa M. Hussein, Amr Wassal, Ahmed Farouk Al-Sadek, A. F. Abd El-Rahman

Abstract:

In this study, a multivariate analysis of potato spectroscopic data is presented to detect the presence or absence of brown rot disease. Near-infrared (NIR) spectroscopy (1,350-2,500 nm) combined with multivariate analysis was used as a rapid, non-destructive technique for the detection of brown rot disease in potatoes. Spectral measurements were performed on 565 samples, chosen randomly at the infection site on the potato slice. In this study, 254 infected and 311 uninfected (brown-rot-free) samples were analyzed using different advanced statistical analysis techniques. The discrimination performance of different multivariate analysis techniques, including classification, pre-processing, and dimension reduction, was compared. Applying a random forest classifier with different pre-processing techniques to the raw spectra gave the best performance, with a total classification accuracy of 98.7% achieved in discriminating infected potatoes from controls.
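
A minimal sketch of the discrimination step, a random forest on spectra with cross-validation; the spectra below are random placeholders shaped like the study's 565 samples (311 uninfected, 254 infected), and the forest size is an assumption.

```python
# Hedged sketch: random-forest discrimination of NIR spectra.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(565, 230))            # placeholder NIR spectra
y = np.array([0] * 311 + [1] * 254)        # 0 = uninfected, 1 = infected

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```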

Keywords: Brown rot disease, NIR spectroscopy, potato, random forest

Procedia PDF Downloads 154
46 Worm Gearing Design Improvement by Considering Varying Mesh Stiffness

Authors: A. H. Elkholy, A. H. Falah

Abstract:

A new approach has been developed to estimate the load share and stress distribution of worm gear sets. The approach is based upon considering the instantaneous tooth meshing stiffness, where the worm gear drive is modelled as a series of spur gear slices, and each slice is analyzed separately using the well-established formulae of spur gears. By combining the results obtained for all slices, the loading and stressing of the entire involute worm gear set is obtained. The geometric modelling method presented allows tooth elastic deformation and tooth root stresses of worm gear drives under different load conditions to be investigated. On the basis of the method introduced in this study, the instantaneous meshing stiffness and load share were obtained. In comparison with existing methods, this approach has both good analysis accuracy and less computing time.
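
A minimal sketch of the load-sharing step implied by the slicing approach, distributing the transmitted load across slices in proportion to their instantaneous mesh stiffness; the stiffness values themselves would come from the spur-gear formulae.

```python
# Hedged sketch: stiffness-proportional load share across gear slices.
def load_share(total_load_N, slice_stiffnesses_N_per_m):
    """Load carried by each spur-gear slice, proportional to stiffness."""
    k_total = sum(slice_stiffnesses_N_per_m)
    return [total_load_N * k / k_total for k in slice_stiffnesses_N_per_m]
```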

Keywords: gear, load/stress distribution, worm, wheel, tooth stiffness, contact line

Procedia PDF Downloads 315
45 4D Monitoring of Subsurface Conditions in Concrete Infrastructure Prior to Failure Using Ground Penetrating Radar

Authors: Lee Tasker, Ali Karrech, Jeffrey Shragge, Matthew Josh

Abstract:

Monitoring for the deterioration of concrete infrastructure is an important assessment tool for an engineer, and difficulties can be experienced when monitoring for deterioration within a structure. If a failure crack, or fluid seepage through such a crack, is observed from the surface, the source location of the deterioration is often not known. Geophysical methods are used to assist engineers in assessing the subsurface conditions of materials. Techniques such as Ground Penetrating Radar (GPR) provide information on the location of buried infrastructure such as pipes and conduits, the positions of reinforcements within concrete blocks, and regions of voids/cavities behind tunnel lining. This experiment underlines the application of GPR as an infrastructure-monitoring tool to highlight and monitor regions of possible deterioration within a concrete test wall due to an increase in the generation of fractures, in particular during a period of applied load up to and including structural failure. A three-point load was applied to a concrete test wall of dimensions 1700 × 600 × 300 mm³ in increments of 10 kN, until the wall structurally failed at 107.6 kN. At each increment of applied load, the load was kept constant and the wall was scanned using GPR along profile lines across the wall surface. The measured radar amplitude responses of the GPR profiles at each applied load interval were reconstructed into depth-slice grids and presented at fixed depth-slice intervals. The corresponding depth-slices were subtracted between data sets to compare the radar amplitude responses and monitor for changes. At lower values of applied load (0-60 kN), few changes were observed in the differences of radar amplitude response between data sets. At higher values of applied load (100 kN), closer to structural failure, larger differences in radar amplitude response between data sets were highlighted in the GPR data: up to a 300% increase in radar amplitude response at some locations between the 0 kN and 100 kN radar datasets. Distinct regions were observed in the 100 kN difference dataset (i.e., 100 kN - 0 kN) close to the location of the final failure crack. The key regions observed were a conical feature located between approximately 3.0-12.0 cm depth from the surface and a vertical linear feature located approximately 12.1-21.0 cm depth from the surface. These key regions have been interpreted as locations exhibiting an increased change in pore space due to increased mechanical loading, an increase in the volume of micro-cracks, or the development of a larger macro-crack. The experiment showed that GPR is a useful geophysical monitoring tool to assist engineers in highlighting and monitoring regions of large change in radar amplitude response that may be associated with significant internal structural change (e.g., crack development). GPR is a non-destructive technique that is fast to deploy in a production setting. GPR can assist with reducing risk and costs in future infrastructure maintenance programs by highlighting and monitoring locations within the structure exhibiting large changes in radar amplitude over calendar time.
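
A minimal sketch of the depth-slice differencing used in the analysis, flagging cells whose amplitude grew past a chosen ratio (e.g. 3.0 for the 300% increases reported); co-registration of the slices between load steps is assumed.

```python
# Hedged sketch: differencing co-registered GPR amplitude depth-slices.
import numpy as np

def amplitude_change_mask(baseline_slice, loaded_slice, ratio=3.0):
    """True where the loaded-slice amplitude is >= `ratio` x the baseline."""
    base = np.abs(baseline_slice) + 1e-12    # avoid division by zero
    return np.abs(loaded_slice) / base >= ratio
```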

Keywords: 4D GPR, engineering geophysics, ground penetrating radar, infrastructure monitoring

Procedia PDF Downloads 142
44 The Consequences of Vibrations in Machining

Authors: Boughedaoui Rachid, Belaidi Idir, Ouali Mohamed

Abstract:

Forming by material removal remains an indispensable means of obtaining parts of different shapes. The objective of this work is to study the influence of the parameters of the vibratory regime of the piece-tool-machine (PTM) system on the surface finish in the machining of thin workpieces. As a first step, an analytical study of the essential 2D slice dynamic models is presented, from which the stability lobes are obtained. In a second step, the PTM system is characterized: it is instrumented with accelerometric sensors and also a laser vibrometer, so as to obtain information closer to the cutting area. Three-component dynamometers are used for the analysis of cutting forces. Surface states are measured, and the condition of the cutting edge is visualized with a binocular microscope coupled to a data acquisition system. This information makes it possible to quantify the influence of chatter on the dimensional quality of the parts. From the previously determined stability lobes, experimental validation allows the development of a method for detecting the chatter phenomenon, and an approach is proposed.
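
A minimal sketch of how stability lobes follow from a one-degree-of-freedom regenerative chatter model, a_lim = -1 / (2 * Kf * Re[G(jw)]) with spindle speeds set by the phase of the FRF; all modal and cutting parameters below are placeholders, not the instrumented PTM system's values.

```python
# Hedged sketch: stability lobes for a 1-DOF chatter model.
import numpy as np

k, zeta, fn = 2.0e7, 0.03, 600.0   # stiffness (N/m), damping ratio, nat. freq (Hz)
Kf, teeth = 6.0e8, 4               # cutting coefficient (N/m^2), cutter teeth
w = 2 * np.pi * np.linspace(620, 1800, 5000)   # candidate chatter freqs (rad/s)
r = w / (2 * np.pi * fn)
G = (1.0 / k) / (1 - r**2 + 2j * zeta * r)     # FRF of the flexible mode
m = G.real < 0                                  # chatter requires Re(G) < 0
a_lim = -1.0 / (2 * Kf * G.real[m])             # limiting depth of cut (m)
psi = np.arctan2(G.imag[m], G.real[m])          # FRF phase (third quadrant here)
eps = 3 * np.pi + 2 * psi                       # phase between successive waves
lobes = [60 * w[m] / (teeth * (eps + 2 * np.pi * n)) for n in range(4)]
# each (lobes[n], a_lim) pair traces one lobe in the spindle-speed vs
# depth-of-cut plane; the region below the lobe envelope is chatter-free
```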

Keywords: chatter, dynamic, milling, lobe stability

Procedia PDF Downloads 336
43 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format

Authors: Maryam Fallahpoor, Biswajeet Pradhan

Abstract:

Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretation and thereby reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out, as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: Neuroimaging Informatics Technology Initiative (NIfTI) and Digital Imaging and Communications in Medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was labeled 1, while normal images were labeled 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of the original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area Under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
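
A minimal sketch of the described pre-processing: resampling to 128 × 128 × 60, clipping to (−1000, 400) HU, then normalizing and zero-centering; the interpolation order and the exact normalization are assumptions.

```python
# Hedged sketch: CT volume pre-processing as described in the abstract.
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume_hu):
    """volume_hu: 3D array of Hounsfield units loaded from DICOM slices."""
    target = (128, 128, 60)
    factors = [t / s for t, s in zip(target, volume_hu.shape)]
    vol = zoom(volume_hu, factors, order=1)         # linear resampling
    vol = np.clip(vol, -1000, 400)                  # confine intensities (HU)
    return (vol - vol.mean()) / (vol.std() + 1e-8)  # normalize, zero-center
```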

Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format

Procedia PDF Downloads 42
42 Optimization of Multiplier Extraction Digital Filter on FPGA

Authors: Shiksha Jain, Ramesh Mishra

Abstract:

One of the most widely used complex signal processing operations is filtering. FIR digital filters are widely used in DSP to alter the spectrum according to given specifications. Power consumption and area complexity in a Finite Impulse Response (FIR) filter are mainly caused by the multipliers, so we present a multiplier-less technique, the Distributed Arithmetic (DA) technique. In this technique, precomputed values of the inner product are stored in a LUT and are then added and shifted, with the number of iterations equal to the precision of the input sample. However, the exponential growth of the LUT with the order of the FIR filter makes this basic structure prohibitive for many applications. A significant area and power reduction over the traditional DA structure is presented in this paper through slicing of the LUT to the desired length. An architecture of a 16-tap FIR filter is presented, with different lengths of LUT slices. The implementation of the FIR filter using the proposed method, synthesized with the Xilinx Synthesis Tool (XST) on a Virtex-4 FPGA, shows an increase in the maximum frequency, a decrease in resource usage (area savings with a larger number of slices), and a reduction in dynamic power.
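
A minimal sketch of bit-serial Distributed Arithmetic for one small tap group: the 2^N-entry LUT of coefficient partial sums is precomputed, then shift-accumulated once per input bit. Unsigned samples and a 4-tap group are assumptions; a 16-tap filter would slice the LUT across tap groups exactly to avoid a 2^16-entry table.

```python
# Hedged sketch: bit-serial Distributed Arithmetic inner product.
def da_fir(samples, coeffs, bits=8):
    """Inner product sum(c[i] * s[i]) via DA; samples are unsigned `bits`-bit."""
    n = len(coeffs)
    # LUT: for each address, the sum of coefficients whose select bit is 1
    lut = [sum(c for i, c in enumerate(coeffs) if (addr >> i) & 1)
           for addr in range(1 << n)]
    acc = 0
    for b in range(bits):                              # LSB-first bit planes
        addr = sum(((s >> b) & 1) << i for i, s in enumerate(samples))
        acc += lut[addr] << b                          # shift-accumulate
    return acc

# sanity check against a direct multiply-accumulate
assert da_fir([3, 5, 7, 9], [2, 4, 6, 8]) == 3*2 + 5*4 + 7*6 + 9*8
```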

Keywords: multiplier less technique, linear phase symmetric FIR filter, FPGA tool, look up table

Procedia PDF Downloads 361
41 Pool Fire Tests of Dual Purpose Casks for Spent Nuclear Fuel

Authors: K. S. Bang, S. H. Yu, J. C. Lee, K. S. Seo, S. H. Lee

Abstract:

Dual purpose casks are used for the storage and transport of spent nuclear fuel assemblies. They must therefore satisfy the requirements prescribed in the Korea NSSC Act 2013-27, the IAEA Safety Standards Series No. SSR-6, and US 10 CFR Part 71. These regulatory guidelines classify the dual purpose cask as a Type B package and state that a Type B package must be able to withstand a temperature of 800°C for a period of 30 min. Therefore, a fire test was conducted using a one-sixth slice of a real cask to estimate the thermal integrity of the dual purpose cask at a temperature of 800°C. The neutron shield reached a maximum temperature of 183°C, which indicates that the dual purpose cask was properly insulated from the heat of the flames. The temperature rise of the basket during the fire test was 29°C; therefore, the integrity of the spent nuclear fuel is estimated to be maintained. The temperature was lower when a cooling pin was installed; the neutron shielding was therefore adequately protected by the cooling pins. As a result, the thermal integrity of the dual purpose cask was maintained, and the cask is judged to be sufficiently safe at temperatures under 800°C.

Keywords: dual purpose cask, spent nuclear fuel, pool fire test, integrity

Procedia PDF Downloads 432
40 Design Improvement of Worm Gearing for Better Energy Utilization

Authors: Ahmed Elkholy

Abstract:

Most power transmission cases use gearing in general, and worm gearing in particular, for energy utilization. Therefore, designing gears for minimum weight and maximum power transmission is the main target of this study. In this regard, a new approach has been developed to estimate the load share and stress distribution of worm gear sets. The approach is based upon considering the instantaneous tooth meshing stiffness, where the worm gear drive is modelled as a series of spur gear slices, and each slice is analyzed separately using well-established criteria. By combining the results obtained for all slices, the loading and stressing of the entire worm gear set is determined. The geometric modelling method presented allows tooth elastic deformation and tooth root stresses of worm gear drives under different load conditions to be investigated. On the basis of the method introduced in this study, the instantaneous meshing stiffness and load share were obtained. In comparison with existing methods, this approach has both good analytical accuracy and less computing time.

Keywords: gear, load/stress distribution, worm, wheel, tooth stiffness, contact line

Procedia PDF Downloads 392