Search results for: lung computed tomography (CT) images
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3758

2948 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
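
A minimal sketch of the iterative select-then-label loop described above, assuming a hypothetical `novelty_fn` that scores how much unseen data an image contains (e.g., derived from the U-shaped network's per-image novelty map); the batch size, seeding strategy, and function names are illustrative, not taken from the paper.

```python
def select_batch(unlabeled_ids, score_fn, batch_size=20):
    """Rank unlabeled images by novelty and return the most novel ones for manual labeling."""
    ranked = sorted(unlabeled_ids, key=score_fn, reverse=True)
    return ranked[:batch_size]

def active_labeling_loop(all_ids, label_fn, train_fn, novelty_fn, rounds=5):
    """Alternate between algorithmic image selection and manual labeling."""
    labeled, unlabeled = [], list(all_ids)
    seed = unlabeled[:20]                                  # initial manually labeled batch
    labeled += [(i, label_fn(i)) for i in seed]
    unlabeled = [i for i in unlabeled if i not in seed]
    for _ in range(rounds):
        model = train_fn(labeled)                          # retrain the detector
        batch = select_batch(unlabeled, lambda i: novelty_fn(model, i))
        labeled += [(i, label_fn(i)) for i in batch]       # manual labeling step
        unlabeled = [i for i in unlabeled if i not in batch]
    return train_fn(labeled)
```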

Keywords: computer vision, deep learning, object detection, semiconductor

Procedia PDF Downloads 137
2947 Improvement of Cross Range Resolution in Through Wall Radar Imaging Using Bilateral Backprojection

Authors: Rashmi Yadawad, Disha Narayanan, Ravi Gautam

Abstract:

Through-wall radar imaging is gaining increasing importance nowadays in the field of defense, and one of the most important criteria determining the quality of the obtained image is its cross-range resolution. In this research paper, the bilateral back projection algorithm has been implemented for through-wall radar imaging, with the sole purpose of enhancing the resolution of the back projection image in the cross-range direction. Synthetic data are generated for two targets placed at various locations in a room of dimensions 8 m by 6 m. Two algorithms, namely simple back projection and bilateral back projection, have been implemented, and the resulting images are compared. Numerical simulations have been coded in MATLAB, and experimental results of the two algorithms are shown. The comparison of the two images clearly shows that the ringing and chessboard effects are heavily reduced in the bilaterally back projected image, giving a sharper image with relatively well-defined edges.

Keywords: through wall radar imaging, bilateral back projection, cross range resolution, synthetic data

Procedia PDF Downloads 348
2946 Error Analysis of Wavelet-Based Image Steganography Scheme

Authors: Geeta Kasana, Kulbir Singh, Satvinder Singh

Abstract:

In this paper, a steganographic scheme for digital images using the Integer Wavelet Transform (IWT) is proposed. The cover image is decomposed into wavelet subbands using the IWT. Each subband is divided into blocks of equal size, and secret data are embedded into the largest and smallest pixel values of each block of the subband. The visual quality of the stego images is acceptable, as the PSNR between the cover image and the stego image is above 40 dB, so imperceptibility is maintained. Experimental results show a better trade-off between capacity and visual quality compared to existing algorithms. The maximum possible error is analyzed for each of the wavelet subbands of an image.
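
As a reference for the 40 dB figure quoted above, a short sketch of how the PSNR between a cover and a stego image is typically computed, assuming 8-bit images; this is the standard definition, not code from the paper.

```python
import numpy as np

def psnr(cover: np.ndarray, stego: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A stego image whose pixels differ from the cover by at most one gray level
# yields a PSNR well above 40 dB, i.e., visually imperceptible embedding.
```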

Keywords: DWT, IWT, MSE, PSNR

Procedia PDF Downloads 504
2945 Comparative Evaluation of a Dynamic Navigation System Versus a Three-Dimensional Microscope in Retrieving Separated Endodontic Files: An in Vitro Study

Authors: Mohammed H. Karim, Bestoon M. Faraj

Abstract:

Introduction: Instrument separation is a common challenge in the endodontic field. Various techniques and technologies have been developed to improve the retrieval success rate. This study aimed to compare the effectiveness of a Dynamic Navigation System (DNS) and a three-dimensional microscope in retrieving broken rotary NiTi files when using trepan burs and the extractor system. Materials and Methods: Thirty maxillary first bicuspids with sixty separate roots were split into two comparable groups based on a comprehensive Cone-Beam Computed Tomography (CBCT) analysis of root length and curvature. After standardised access opening, glide paths, and patency attainment with K files (sizes 10 and 15), the teeth were arranged on 3D models (three per quadrant, six per model). Subsequently, controlled-memory heat-treated NiTi rotary files (#25/0.04) were notched 4 mm from the tips and fractured at the apical third of the roots. The C-FR1 Endo file removal system was employed under both forms of guidance to retrieve the fragments, and the success rate, canal aberration, treatment time, and volumetric changes were measured. The statistical analysis was performed using IBM SPSS software at a significance level of 0.05. Results: The microscope-guided group had a higher success rate than the DNS-guided group, but the difference was insignificant (p > 0.05). In addition, the microscope-guided drills resulted in a substantially lower proportion of canal aberration, required less time to retrieve the fragments, and caused a smaller change in root canal volume (p < 0.05). Conclusion: Although dynamically guided trephining with the extractor can retrieve separated instruments, it is inferior to three-dimensional microscope guidance regarding treatment time, procedural errors, and volume change.

Keywords: dynamic navigation system, separated instruments retrieval, trephine burs and extractor system, three-dimensional video microscope

Procedia PDF Downloads 98
2944 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators such as the loess is compared to that of estimators assuming additivity in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean squared error of the estimators with regard to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure, and the selected variables are identified.
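
A minimal sketch of a backward-elimination loop driven by AIC, here using ordinary least squares from statsmodels as a stand-in for the additive or nonparametric estimators compared in the paper; the stopping rule (drop a covariate only while it lowers the AIC) is the generic idea, not the paper's exact procedure.

```python
import numpy as np
import statsmodels.api as sm

def backward_eliminate(X: np.ndarray, y: np.ndarray, names):
    """Drop covariates one at a time while doing so lowers the AIC."""
    keep = list(range(X.shape[1]))
    best_aic = sm.OLS(y, sm.add_constant(X[:, keep])).fit().aic
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in list(keep):
            trial = [k for k in keep if k != j]
            aic = sm.OLS(y, sm.add_constant(X[:, trial])).fit().aic
            if aic < best_aic:            # removing covariate j improves the criterion
                best_aic, keep, improved = aic, trial, True
                break
    return [names[k] for k in keep], best_aic
```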

Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criteria

Procedia PDF Downloads 266
2943 Color Image Compression/Encryption/Contour Extraction using 3L-DWT and SSPCE Method

Authors: Ali A. Ukasha, Majdi F. Elbireki, Mohammad F. Abdullah

Abstract:

Data security is needed in data transmission, storage, and communication. This paper is divided into two parts. The work concerns color images, which are decomposed into red, green, and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform. The Arnold transform is used to change the locations of the red-channel pixels as an image scrambling process. All channels are then encrypted separately using a key image that has the same size as the original and is generated using private keys and modulo operations. X-OR and modulo operations are performed between the encrypted channel images in order to change the pixel values. The contours extracted from the recovered color images can be obtained with an acceptable level of distortion using the single step parallel contour extraction (SSPCE) method. Experiments have demonstrated that the proposed algorithm can fully encrypt 2D color images and completely reconstruct them without any distortion. It is also shown that the algorithm has very high security against attacks such as salt-and-pepper noise and JPEG compression, which proves that color images can be protected at a higher security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
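
A rough sketch of the Arnold (cat map) scrambling step applied to a square channel, as described above for the red channel; the number of iterations is an assumed parameter, and the compression, key-image encryption, and SSPCE steps are not reproduced here.

```python
import numpy as np

def arnold_scramble(channel: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Scramble an N x N image channel with the Arnold cat map:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    n = channel.shape[0]
    assert channel.shape[0] == channel.shape[1], "square channel required"
    out = channel.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```

Because the map is a bijection on the N x N grid, the original channel can be recovered by iterating the inverse map (or by continuing until the map's period is reached).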

Keywords: SSPCE method, image compression and salt and peppers attacks, bitplanes decomposition, Arnold transform, color image, wavelet transform, lossless image encryption

Procedia PDF Downloads 519
2942 Iris Cancer Detection System Using Image Processing and Neural Classifier

Authors: Abdulkader Helwan

Abstract:

Iris cancer, also called intraocular melanoma, is a cancer that starts in the iris, the colored part of the eye that surrounds the pupil. There is a need for an accurate and cost-effective iris cancer detection system, since the techniques currently available are still not efficient. The combination of image processing and artificial neural networks is highly effective for the diagnosis and detection of iris cancer. Image processing techniques improve the diagnosis of the cancer by enhancing the quality of the images so that physicians can diagnose properly, while neural networks help in deciding whether the eye is cancerous or not. This paper aims to develop an intelligent system that simulates human visual detection of intraocular melanoma (iris cancer). The suggested system combines both image processing techniques and neural networks. The images are first converted to grayscale, filtered, and then segmented using the Prewitt edge detection algorithm to detect the iris and sclera circles and the cancer. Principal component analysis is used to reduce the image size and to extract features. Those features are then used as inputs for a neural network capable of deciding whether the eye is cancerous or not, through the experience acquired over many training iterations on different normal and abnormal eye images during the training phase. Normal images are obtained from a public database available on the internet, “Mile Research”, while the abnormal ones are obtained from another database, “eyecancer”. The experimental results for the proposed system show a high accuracy of 100% for detecting cancer and making the right decision.
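
A small sketch of the feature-reduction and decision stages described above, using scikit-learn's PCA and a basic multilayer perceptron as stand-ins for the paper's neural classifier; the array shapes, component count, and network size are illustrative placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# X: one row of pixel/edge features per eye image, y: 1 = cancerous, 0 = normal
X = np.random.rand(60, 4096)          # placeholder feature matrix
y = np.random.randint(0, 2, size=60)  # placeholder labels

model = make_pipeline(
    PCA(n_components=20),                               # reduce feature dimensionality
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
)
model.fit(X, y)
print(model.predict(X[:5]))           # 1 = cancerous, 0 = normal
```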

Keywords: iris cancer, intraocular melanoma, cancerous, prewitt edge detection algorithm, sclera

Procedia PDF Downloads 504
2941 Adjustable Aperture with Liquid Crystal for Real-Time Range Sensor

Authors: Yumee Kim, Seung-Guk Hyeon, Kukjin Chun

Abstract:

An adjustable aperture using a liquid crystal is proposed for real-time range detection while obtaining images simultaneously. The adjustable aperture operates as two types of aperture stop, which create two different depth-of-field images; by analyzing these two images, the distance from the camera to the object can be extracted. Initially, the aperture stop has a large size at zero voltage. When the input voltage is applied, the aperture stop transfers to a smaller size through the orientational transition of the liquid crystal molecules in the device. The diameters of the two aperture stops are 1.94 mm and 1.06 mm. The proposed device has a low driving voltage of 7.0 V and a fast response time of 6.22 ms. A compact aperture of 6×6×1.1 mm³ is assembled in a conventional camera containing a 1/3” HD image sensor with a focal length of 3.3 mm, which can be used in autonomous driving. The measured range was up to 5 m. The adjustable aperture has high stability because it has no mechanically moving parts. This range sensor can be applied to various 3D depth-map applications such as Advanced Driving Assistance Systems (ADAS), drones, and manufacturing machines.

Keywords: adjustable aperture, dual aperture, liquid crystal, ranging and imaging, ADAS, range sensor

Procedia PDF Downloads 381
2940 Modern Detection and Description Methods for Natural Plants Recognition

Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert

Abstract:

'Green planet' is one of Earth's names; it is a terrestrial planet and the fifth largest planet of the solar system. Plants do not have a constant and steady distribution around the world, and even the variation of plant species is not the same within one specific region. The presence of plants is not limited to one field like botany; they appear in other fields such as literature and mythology, and they hold useful and inestimable historical records. No one can imagine the world without oxygen, which is produced mostly by plants. Their influence becomes even more manifest since no other living species can exist on Earth without plants, as they also form the basic food staples. Regulation of the water cycle and oxygen production are other roles of plants, and these roles affect the environment and climate. Plants are the main components of agricultural activities, from which many countries benefit; therefore, plants have an impact on the political and economic situation and future of countries. Due to the importance of plants and their roles, the study of plants is essential in various fields, and consideration of their different applications leads to a focus on their details as well. Automatic recognition of plants is a novel field that can contribute to other research and future studies. Moreover, plants survive in different places and regions by means of adaptations, which are their special mechanisms for coping with hard conditions. Weather is one of the parameters that affects plant life and presence in an area, and recognition of plants under different weather conditions is a new window of research in the field. Only natural images are usable when weather conditions are considered as new factors, so the result will be a generalized and useful system. In order to have a general system, the distance from the camera to the plants is considered as another factor, as is the change of light intensity in the environment during the day. Adding these factors leads to a huge challenge in building an accurate and robust system. Development of an efficient plant recognition system is therefore essential and effective. One important component of a plant is the leaf, which can be used to implement automatic plant recognition systems without any human interface or interaction. Due to the nature of the images used, a characteristic investigation of the plants is done, and the leaves are selected as the first, most reliable characteristics. Four different plant species are specified with the goal of classifying them with an accurate system. The current paper is devoted to the principal directions of the proposed methods and implemented system, the image dataset, and the results. The procedure of the algorithm and classification is explained in detail. The first steps, feature detection and description of visual information, are performed using the Scale-Invariant Feature Transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed, and in addition to this comparison, the robustness and efficiency of the results under different conditions are investigated and explained.
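
A brief sketch of the detector/descriptor combinations named above using OpenCV, assuming a build that includes SIFT (opencv-python ≥ 4.4 or opencv-contrib-python); the image path is a placeholder.

```python
import cv2

gray = cv2.imread("leaf.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image path

# Plain SIFT: detect keypoints and compute descriptors in one pass.
sift = cv2.SIFT_create()
kp_sift, des_sift = sift.detectAndCompute(gray, None)

# FAST-SIFT: FAST corners as keypoints, SIFT used only as the descriptor.
fast = cv2.FastFeatureDetector_create()
kp_fast = fast.detect(gray, None)
kp_fast, des_fast = sift.compute(gray, kp_fast)

# HARRIS-SIFT can be sketched the same way by converting Harris corner
# responses into cv2.KeyPoint objects before calling sift.compute().
print(len(kp_sift), len(kp_fast))
```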

Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT

Procedia PDF Downloads 278
2939 Moving Images and Re-Articulations of Self-Identity: Young People's Experiences of Viewing Representations of Disability in Films

Authors: Alison Wilde, Stephen Millett

Abstract:

The cultural value of disabled people had largely been overlooked within media and cultural analysis until the 1980s, when disabled people and disability studies highlighted the cultural misrecognition of disabled people and called for improved forms of cultural recognition and representation. Despite an increase in cultural analysis of representations of disabled people, much has been assumed about how images are read, and little work has been done on the value attributed to disabled people by media audiences or on the viewing interests and encounters of film audiences. In particular, there has been little work on film reception, or on the way that young people interpret images of disability. We set out to understand some of the ways that young people read disability imagery by showing small groups of young people different types of film featuring impairments, chosen from three different eras in film: Freaks, Rear Window (remake), and Finding Nemo. The discussions after these films allowed them to explore their own experiences of disability alongside the evolution of cultural representations; in so doing they discussed significant themes of cultural value and reflected on their own identities, e.g., in/dependency, autonomy, and competency, and the ways these intersected with self-identity and attitudes to disabled people.

Keywords: film, audience, identity, disability

Procedia PDF Downloads 420
2938 The Use of Remote Sensing in the Study of Vegetation in Jebel Boutaleb, Setif, Algeria

Authors: Khaled Missaoui, Amina Beldjazia, Rachid Gharzouli, Yamna Djellouli

Abstract:

Optical remote sensing makes use of visible, near-infrared, and short-wave infrared sensors to form images of the Earth's surface by detecting the solar radiation reflected from targets on the ground. Different materials reflect and absorb differently at different wavelengths; thus, targets can be differentiated by their spectral reflectance signatures in remotely sensed images. In this work, we are interested in studying the distribution of vegetation in the Boutaleb forest massif (northeast Algeria), which suffered very large fires between 1998 and 1999. We use remote sensing with Landsat images from two dates (1984 and 2000) to see the consequences of these fires. Vegetation has a unique spectral signature that enables it to be distinguished readily from other types of land cover in an optical/near-infrared image. The Normalized Difference Vegetation Index (NDVI) is calculated with ENVI 4.7 from Bands 3 and 4. The results showed a very important floristic diversity in this forest. The comparison of the NDVI from the two dates confirms that there is a decrease in the density of vegetation in this area due to repeated fires.
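
For reference, the NDVI computation mentioned above (Landsat TM red = Band 3, near infrared = Band 4) can be reproduced outside ENVI with a few lines of array arithmetic; the band arrays are assumed to be already read and co-registered.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), with division by zero guarded."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out  # values near +1 indicate dense, healthy vegetation

# Example of the change analysis described above (hypothetical arrays):
# decrease = ndvi(nir_1984, red_1984) - ndvi(nir_2000, red_2000)
```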

Keywords: remote sensing, boutaleb, diversity, forest

Procedia PDF Downloads 560
2937 Comparative Analysis of Edge Detection Techniques for Extracting Characters

Authors: Rana Gill, Chandandeep Kaur

Abstract:

Segmentation of images can be implemented using different fundamental algorithms such as edge detection (discontinuity-based segmentation), region growing (similarity-based segmentation), and iterative thresholding. A comprehensive literature review relevant to the study describes different techniques for vehicle number plate detection and edge detection techniques widely used on different types of images. This research work is based on edge detection techniques and on calculating thresholds on the basis of five edge operators. The five operators used are Prewitt, Roberts, Sobel, LoG, and Canny. Segmentation of characters present in different types of images, such as vehicle number plates, house name plates, and characters on different sign boards, is selected as a case study in this work. The proposed methodology has seven stages. The proposed system has been implemented using MATLAB R2010a. A comparison of all five operators has been done on the basis of their performance. From the results, it is found that the Canny operator produces the best results among the operators used, and the performance of the edge operators in decreasing order is: Canny > LoG > Sobel > Prewitt > Roberts.
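
A compact sketch of how the five operators listed above can be applied and compared with scikit-image and SciPy (LoG approximated by a Gaussian-Laplace filter); the Otsu thresholding and the sigma values are illustrative choices, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, feature

def edge_maps(gray: np.ndarray) -> dict:
    """Return binary edge maps from the five operators on a grayscale image."""
    gray = gray.astype(np.float64)
    responses = {
        "prewitt": filters.prewitt(gray),
        "roberts": filters.roberts(gray),
        "sobel": filters.sobel(gray),
        "log": np.abs(ndi.gaussian_laplace(gray, sigma=2.0)),
    }
    # Threshold the gradient-style responses (Otsu used as an illustrative rule).
    binary = {k: v > filters.threshold_otsu(v) for k, v in responses.items()}
    binary["canny"] = feature.canny(gray, sigma=2.0)  # already binary
    return binary
```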

Keywords: segmentation, edge detection, text, extracting characters

Procedia PDF Downloads 426
2936 Detecting HCC Tumor in Three Phasic CT Liver Images with Optimization of Neural Network

Authors: Mahdieh Khalilinezhad, Silvana Dellepiane, Gianni Vernazza

Abstract:

The aim of the present work is to build a model, based on tissue characterization, that is able to discriminate pathological and non-pathological regions in three-phasic CT images. Based on feature selection in the different phases, in this research we design a neural network system that has an optimal number of neurons in the hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each ROI, six distinct sets of texture features are extracted, namely first-order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet features, for a total of 270 texture features. We show that with the injection of the contrast liquid and the analysis of more phases, the most relevant features in each region change. Our results show that for detecting HCC tumors, phase 3 is the best one for most of the features that we feed to the classification algorithm. The detection accuracy between these two classes according to our method, using first-order histogram parameters, is 85% in phase 1, 95% in phase 2, and 95% in phase 3.
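
Since first-order histogram parameters turn out to be the most informative features here, a short sketch of how such parameters can be computed for an ROI; these are common examples of first-order statistics and do not reproduce the paper's exact feature set.

```python
import numpy as np
from scipy import stats

def first_order_features(roi: np.ndarray, levels: int = 256) -> dict:
    """Common first-order (histogram) texture features of a grayscale ROI."""
    values = roi.ravel().astype(np.float64)
    hist, _ = np.histogram(values, bins=levels, range=(0, levels))
    p = hist / hist.sum()                      # normalized gray-level histogram
    nonzero = p[p > 0]
    return {
        "mean": values.mean(),
        "variance": values.var(),
        "skewness": stats.skew(values),
        "kurtosis": stats.kurtosis(values),
        "energy": float(np.sum(p ** 2)),
        "entropy": float(-np.sum(nonzero * np.log2(nonzero))),
    }
```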

Keywords: multi-phasic liver images, texture analysis, neural network, hidden layer

Procedia PDF Downloads 262
2935 The Use of Ultrasound as a Safe and Cost-Efficient Technique to Assess Visceral Fat in Children with Obesity

Authors: Bassma A. Abdel Haleem, Ehab K. Emam, George E. Yacoub, Ashraf M. Salem

Abstract:

Background: Obesity is an increasingly common problem in childhood. Childhood obesity is considered the main risk factor for the development of metabolic syndrome (MetS) (type 2 diabetes, dyslipidemia, and hypertension). Recent studies estimate that 30-60% of children with obesity will develop MetS. Visceral fat thickness is a valuable predictor of the development of MetS. Computed tomography and dual-energy X-ray absorptiometry are the main techniques used to assess visceral fat; however, they carry the risk of radiation exposure and are expensive procedures. Consequently, they are seldom used in the assessment of visceral fat in children. Some studies have explored the potential of ultrasound as a substitute for assessing visceral fat in the elderly and found promising results. Given the vulnerability of children to radiation exposure, we sought to evaluate ultrasound as a safer and more cost-efficient alternative for measuring visceral fat in obese children. Additionally, we assessed the correlation between visceral fat and obesity indicators such as insulin resistance. Methods: A cross-sectional study was conducted on 46 children with obesity (aged 6–16 years). Their visceral fat was evaluated by ultrasound. Subcutaneous fat thickness (SFT), i.e., the measurement from the skin-fat interface to the linea alba, and visceral fat thickness (VFT), i.e., the thickness from the linea alba to the aorta, were measured and correlated with anthropometric measures, fasting lipid profile, the homeostatic model assessment for insulin resistance (HOMA-IR), and liver enzymes (ALT). Results: VFT assessed via ultrasound was found to correlate strongly with BMI and HOMA-IR, with an AUC for VFT as a predictor of insulin resistance of 0.858 and a cut-off point of >2.98. VFT also correlates positively with serum triglycerides and serum ALT, and negatively with HDL. Conclusions: Ultrasound, a safe and cost-efficient technique, could be a useful tool for measuring abdominal fat thickness in children with obesity. Ultrasound-measured VFT could be an appropriate prognostic factor for insulin resistance, hypertriglyceridemia, and elevated liver enzymes in obese children.
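
A small sketch of the ROC analysis reported above (AUC of ultrasound VFT as a predictor of insulin resistance and selection of a cut-off), using scikit-learn; the toy measurements and the Youden-index rule for picking the cut-off are illustrative assumptions, not the study's data or exact method.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# vft: ultrasound visceral fat thickness per child (cm)
# insulin_resistant: 1 if HOMA-IR exceeds the study threshold, else 0
vft = np.array([2.1, 3.4, 2.8, 4.0, 1.9, 3.1, 2.5, 3.8])   # placeholder values
insulin_resistant = np.array([0, 1, 0, 1, 0, 1, 0, 1])     # placeholder labels

auc = roc_auc_score(insulin_resistant, vft)
fpr, tpr, thresholds = roc_curve(insulin_resistant, vft)
cutoff = thresholds[np.argmax(tpr - fpr)]   # Youden index J = sensitivity + specificity - 1
print(f"AUC = {auc:.3f}, suggested VFT cut-off > {cutoff:.2f}")
```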

Keywords: metabolic syndrome, pediatric obesity, sonography, visceral fat

Procedia PDF Downloads 119
2934 Liver and Liver Lesion Segmentation From Abdominal CT Scans

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

The interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis applications. Segmentation of the liver and liver lesions is regarded as a major primary step in the computer-aided diagnosis of liver diseases, and precise liver segmentation in abdominal CT images is one of its most important steps. In this paper, a semi-automated method is presented for liver and liver lesion segmentation from medical image data using mathematical morphology. Our algorithm proceeds in two parts. In the first, we seek to determine the region of interest by applying morphological filters to extract the liver. The second part consists of detecting the liver lesions. For this task, we propose a new method developed for the semi-automatic segmentation of the liver and hepatic lesions, based on anatomical information and mathematical morphology tools from the image processing field. At first, we improve the quality of the original image and of the image gradient by applying a spatial filter followed by morphological filters. The second step consists of calculating the internal and external markers of the liver and hepatic lesions. Thereafter, we segment the liver and hepatic lesions with the watershed transform controlled by markers. The validation of the developed algorithm is done using several images. The obtained results show the good performance of our proposed algorithm.
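
A condensed sketch of the marker-controlled watershed step described above, using SciPy and scikit-image; the median smoothing and the way markers are derived from two intensity thresholds are simplified stand-ins for the paper's morphological marker construction.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_with_markers(ct_slice: np.ndarray, low: float, high: float) -> np.ndarray:
    """Marker-controlled watershed on a pre-filtered CT slice.
    `low`/`high` are intensity thresholds used to seed background/object markers."""
    smoothed = ndi.median_filter(ct_slice.astype(np.float64), size=3)
    gradient = sobel(smoothed)                 # flood the gradient image
    markers = np.zeros_like(smoothed, dtype=np.int32)
    markers[smoothed < low] = 1                # external (background) marker
    markers[smoothed > high] = 2               # internal (liver/lesion) marker
    labels = watershed(gradient, markers)      # watershed controlled by the markers
    return labels == 2                         # binary mask of the object
```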

Keywords: anisotropic diffusion filter, CT images, hepatic lesion segmentation, Liver segmentation, morphological filter, the watershed algorithm

Procedia PDF Downloads 451
2933 Diversity Indices as a Tool for Evaluating Quality of Water Ways

Authors: Khadra Ahmed, Khaled Kheireldin

Abstract:

In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of the signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless quantity of the phase congruency and the robustness of the CSLBP operator on flat images, as well as under blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The proposed descriptor is formed by extracting the phase congruency and the CSLBP values of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for the local regions of the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and the low-resolution DaimlerChrysler datasets to evaluate the detection performance of the pedestrian detection system based on the FST descriptor. A linear Support Vector Machine (SVM) is used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor has better detection performance than a set of state-of-the-art feature extraction methodologies.

Keywords: planktons, diversity indices, water quality index, water ways

Procedia PDF Downloads 519
2932 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT

Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar

Abstract:

The X-ray attenuation coefficient µ(E) of any substance, for energy E, is a sum of the contributions from Compton scattering, µ_Com(E), and the photoelectric effect, µ_Ph(E). In terms of the electron density ρ_e and the effective atomic number Z_eff, µ_Com(E) is proportional to ρ_e·f_KN(E), while µ_Ph(E) is proportional to ρ_e·Z_eff^x/E^y, with f_KN(E) being the Klein-Nishina formula and x and y the photoelectric exponents. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρ_e and Y = ρ_e·Z_eff^x from these two independent equations, as is attempted in DECT inversion. Since µ_Com(E) and µ_Ph(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <µ(V)> = <µ_w(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging process <…> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y, and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)>, <µ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100, 140 kVp, as used for cardiological investigations. The S(E,V) are generated using the Boone-Seibert source spectrum superposed on aluminium filters of different thickness l_Al, with 7 mm ≤ l_Al ≤ 12 mm, and D(E) is taken to be that of a typical Si[Li] solid-state detector and a GdOS scintillator detector. In the values of X and Y found using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose, and glucose. For low-Z_eff materials like propionic acid, Z_eff^x is overestimated by 20%, with X remaining within 1%. For high-Z_eff materials like KOH, Z_eff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference between the inversion coefficients for the two types of detectors is negligible: the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor when calculating the coefficients of inversion.
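
Once the inversion coefficients are known, the two-voltage inversion described above reduces to solving a 2×2 linear system. Below is a numerical sketch with hypothetical coefficient and attenuation values (in practice a1, b1, a2, b2 would be tabulated from S(E,V) and D(E), and the exponent x would be fitted); the numbers are chosen only so the output is water-like.

```python
import numpy as np

# <mu(V1)> = a1*X + b1*Y  and  <mu(V2)> = a2*X + b2*Y,
# where X = rho_e and Y = rho_e * Zeff**x (both relative to water).
a1, b1 = 0.176, 4.8e-5          # hypothetical coefficients for V1 = 100 kVp
a2, b2 = 0.174, 2.2e-5          # hypothetical coefficients for V2 = 140 kVp
mu_100, mu_140 = 0.195, 0.183   # hypothetical spectrum-averaged attenuations

A = np.array([[a1, b1], [a2, b2]])
mu = np.array([mu_100, mu_140])
X, Y = np.linalg.solve(A, mu)   # invert the 2x2 system for X and Y
x_exponent = 3.0                # assumed photoelectric exponent
Zeff = (Y / X) ** (1.0 / x_exponent)
print(f"rho_e ~ {X:.3f}, Zeff ~ {Zeff:.2f}")
```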

Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum

Procedia PDF Downloads 402
2931 The Importance of the Fluctuation in Blood Sugar and Blood Pressure of Insulin-Dependent Diabetic Patients with Chronic Kidney Disease

Authors: Hitoshi Minakuchi, Izumi Takei, Shu Wakino, Koichi Hayashi, Hiroshi Itoh

Abstract:

Objectives: Among type 2 diabetics, patients with CKD (chronic kidney disease) show insulin resistance, impaired gluconeogenesis in the kidney, and reduced degradation of insulin, and we observed different fluctuation patterns of blood sugar between CKD and non-CKD patients. On the other hand, a non-dipper type blood pressure change is a risk factor for organ damage and mortality. We performed a cross-sectional study to elucidate the characteristics of the fluctuation of blood glucose and blood pressure in insulin-treated diabetic patients with chronic kidney disease. Methods: From March 2011 to April 2013, at the Ichikawa General Hospital of Tokyo Dental College, we recruited 20 outpatients. All participants were insulin-treated type 2 diabetics with CKD. We collected serum and urine samples for several hormone measurements, performed CGMS (continuous glucose measurement system), ABPM (ambulatory blood pressure monitoring), brain computed tomography, carotid artery thickness, ankle-brachial index, PWV, and CVR-R measurements, and analyzed these data statistically. Results: Among the 20 participants, hypoglycemia, defined as a CGMS blood glucose below 70 mg/dl, was detected in 9 participants (45.0%). The participants with hypoglycemia showed lower eGFR (29.8±6.2 vs 41.3±8.5 ml/min, P<0.05), lower HbA1c (6.44±0.57% vs 7.53±0.49%), higher PWV (1858±97.3 vs 1665±109.2 cm/s), higher serum glucagon (194.2±34.8 vs 117.0±37.1 pg/ml), higher urinary free cortisol (53.8±12.8 vs 34.8±7.1 μg/day), and higher urinary metanephrine (0.162±0.031 vs 0.076±0.029 mg/day). A non-dipper type blood pressure change on ABPM was detected in 8 of the 9 participants with hypoglycemia (88.9%) and in 4 of the 11 participants without hypoglycemia (36.4%). Multiple logistic-regression analysis revealed that hypoglycemia is an independent factor for the non-dipper type blood pressure change. Conclusions: Among insulin-treated type 2 diabetic patients with CKD, hypoglycemic events were frequently detected and may be associated with organ derangement through the medium of non-dipper type blood pressure changes.

Keywords: chronic kidney disease, hypoglycemia, non-dipper type blood pressure change, diabetic patients

Procedia PDF Downloads 415
2930 Segmentation Using Multi-Thresholded Sobel Images: Application to the Separation of Stuck Pollen Grains

Authors: Endrick Barnacin, Jean-Luc Henry, Jimmy Nagau, Jack Molinie

Abstract:

Being able to identify biological particles such as spores, viruses, or pollens is important for health care professionals, as it allows for appropriate therapeutic management of patients. Optical microscopy is a technology widely used for the analysis of these types of microorganisms, because, compared to other types of microscopy, it is not expensive. The analysis of an optical microscope slide is a tedious and time-consuming task when done manually. However, using machine learning and computer vision, this process can be automated. The first step of an automated microscope slide image analysis process is segmentation. During this step, the biological particles are localized and extracted. Very often, the use of an automatic thresholding method is sufficient to locate and extract the particles. However, in some cases, the particles are not extracted individually because they are stuck to other biological elements. In this paper, we propose a stuck particles separation method based on the use of the Sobel operator and thresholding. We illustrate it by applying it to the separation of 813 images of adjacent pollen grains. The method correctly separated 95.4% of these images.

Keywords: image segmentation, stuck particles separation, Sobel operator, thresholding

Procedia PDF Downloads 131
2929 Study of a Few Additional Posterior Projection Data to 180° Acquisition for Myocardial SPECT

Authors: Yasuyuki Takahashi, Hirotaka Shimada, Takao Kanzaki

Abstract:

A dual-detector SPECT system is widely used for myocardial SPECT studies. With 180-degree (180°) acquisition, reconstructed images are distorted in the posterior wall of the myocardium due to the lack of sufficient posterior projection data. We hypothesized that the quality of myocardial SPECT images can be improved by adding the acquisition of only a few posterior projections to the ordinary 180° acquisition. The proposed acquisition method (180° plus acquisition method) uses the dual-detector SPECT system with a pair of detectors arranged perpendicularly at 90°. The sampling angle was 5°, and the acquisition range was 180°, from 45° right anterior oblique to 45° left posterior oblique. After the 180° acquisition, the detector moved to an additional acquisition position on the reverse side once for 2 projections, twice for 4 projections, or three times for 6 projections. Since these acquisition methods cannot be performed on the present system, the actual data acquisition was done over 360° with a sampling angle of 5°, and the projection data corresponding to the above acquisition positions were extracted for reconstruction. We performed phantom studies and a clinical study. SPECT images were compared by profile curve analysis and also quantitatively by contrast ratio. The distortion was improved by the 180° plus method, and profile curve analysis showed improvement in the cardiac cavity. Analysis of the contrast ratio revealed that the SPECT images of the phantoms and of the clinical study were improved over 180° acquisition by the present methods. The difference in contrast was not clearly recognized between 180° plus 2 projections, 180° plus 4 projections, and 180° plus 6 projections. The 180° plus 2 projections method may therefore be feasible for myocardial SPECT, because both image distortion and contrast were improved.

Keywords: 180° plus acquisition method, a few posterior projections, dual-detector SPECT system, myocardial SPECT

Procedia PDF Downloads 296
2928 An Efficient Clustering Technique for Copy-Paste Attack Detection

Authors: N. Chaitawittanun, M. Munlin

Abstract:

Due to the rapid advancement of powerful image processing software, digital images are easy for ordinary people to manipulate and modify. Many digital images are edited for a specific purpose and are difficult to distinguish from their original versions. We propose a clustering method to detect copy-move image forgery in JPEG, BMP, TIFF, and PNG images. The process starts with reducing the colors of the photos. Then, we use a clustering technique that groups the measured data by Hausdorff distance. The results show that the proposed method is capable of inspecting the image file and correctly identifying the forgery.
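
A simplified sketch of the block-based idea behind copy-move detection: overlapping blocks are extracted, sorted so that identical content becomes adjacent, and matching pairs far apart in the image are flagged. The paper's color-reduction and Hausdorff-distance clustering steps are not reproduced here; this is only the generic block-matching skeleton, with illustrative parameters.

```python
import numpy as np

def copy_move_candidates(gray: np.ndarray, block: int = 8, min_shift: int = 16):
    """Return pairs of block positions with identical content but distant locations."""
    h, w = gray.shape
    entries = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            patch = gray[y:y + block, x:x + block].ravel()
            entries.append((tuple(patch.tolist()), (y, x)))
    entries.sort(key=lambda e: e[0])          # identical blocks become neighbours
    pairs = []
    for (p1, pos1), (p2, pos2) in zip(entries, entries[1:]):
        if p1 == p2:
            dy, dx = pos1[0] - pos2[0], pos1[1] - pos2[1]
            if dy * dy + dx * dx >= min_shift * min_shift:
                pairs.append((pos1, pos2))    # same content, far apart: suspicious
    return pairs
```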

Keywords: image detection, forgery image, copy-paste, attack detection

Procedia PDF Downloads 338
2927 Digital Image Forensics: Discovering the History of Digital Images

Authors: Gurinder Singh, Kulbir Singh

Abstract:

Digital multimedia contents such as images, video, and audio can be tampered with easily due to the availability of powerful editing software. Multimedia forensics is devoted to analyzing these contents by using various digital forensic techniques in order to validate their authenticity. Digital image forensics is dedicated to investigating the reliability of digital images by analyzing the integrity of the data and by reconstructing the historical information of an image related to its acquisition phase. In this paper, a survey of forgery detection is carried out, considering the most recent and promising digital image forensic techniques.

Keywords: computer forensics, multimedia forensics, image ballistics, camera source identification, forgery detection

Procedia PDF Downloads 249
2926 Medical Image Compression by Region of Interest Based on DT-CWT Using Run-length Coding and Huffman Coding

Authors: Ali Seddiki, Mohamed Djebbouri, Driss Guerchi

Abstract:

Medical imaging produces pictures of the human body in digital form. Since these imaging techniques produce prohibitive amounts of data, compression is necessary for storage and communication purposes. In some areas of medicine, it may be sufficient to maintain high image quality only in the region of interest (ROI). This paper discusses a contribution to quality-preserving compression in the region of interest of scintigraphic images, based on the dual-tree complex wavelet transform (DT-CWT) using run-length coding (RLE) and Huffman coding (HC).
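
For illustration, a minimal run-length encoder/decoder of the kind applied to quantized coefficient streams before Huffman coding; the coefficient sequence below is a placeholder, and the DT-CWT and Huffman stages are not shown.

```python
def rle_encode(symbols):
    """Collapse consecutive repeats into (value, count) pairs."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [(v, c) for v, c in runs]

def rle_decode(runs):
    return [v for v, c in runs for _ in range(c)]

coeffs = [0, 0, 0, 0, 5, 5, 0, 0, 0, -3]     # placeholder quantized coefficients
encoded = rle_encode(coeffs)                 # [(0, 4), (5, 2), (0, 3), (-3, 1)]
assert rle_decode(encoded) == coeffs
```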

Keywords: DT-CWT, region of interest, run length coding, Scintigraphic images

Procedia PDF Downloads 282
2925 Dynamic Contrast-Enhanced Breast MRI Examinations: Clinical Use and Technical Challenges

Authors: Janet Wing-Chong Wai, Alex Chiu-Wing Lee, Hailey Hoi-Ching Tsang, Jeffrey Chiu, Kwok-Wing Tang

Abstract:

Background: Mammography has limited sensitivity and specificity even though it is the primary imaging technique for the detection of early breast cancer; ultrasound imaging and contrast-enhanced MRI are useful adjunct tools. The advantage of breast MRI is its high sensitivity for invasive breast cancer; therefore, indications for and use of breast magnetic resonance imaging have increased over the past decade. Objectives: 1. To demonstrate cases with different indications for breast MR imaging. 2. To review the common artifacts and pitfalls in breast MR imaging. Materials and Methods: This is a retrospective study including all patients who underwent a dynamic contrast-enhanced breast MRI examination in our centre from Jan 2011 to Dec 2017. The clinical data and radiological images were retrieved from the EPR (electronic patient record), RIS (Radiology Information System), and PACS (Picture Archiving and Communication System). Results and Discussion: Cases include (1) screening of the contralateral breast in a patient with a new breast malignancy; (2) breast augmentation with free injection of unknown foreign materials; (3) finding of axillary adenopathy with an unknown site of primary malignancy; (4) neo-adjuvant chemotherapy: imaging before, during, and after chemotherapy to evaluate treatment response and the extent of residual disease prior to operation. Relevant images will be included and illustrated in the presentation. As with other types of MR imaging, there are different artifacts and pitfalls that can potentially limit interpretation of the images. Because of the coils and software specific to breast MR imaging, there are some other technical considerations that are unique to MR imaging of the breast; case demonstration images will be available in the presentation. Conclusion: Breast MR imaging is a highly sensitive and reasonably specific method for the detection of breast cancer. Adherence to appropriate clinical indications and technical optimization is crucial for achieving satisfactory images for interpretation.

Keywords: MRI, breast, clinical, cancer

Procedia PDF Downloads 243
2924 Level Set Based Extraction and Update of Lake Contours Using Multi-Temporal Satellite Images

Authors: Yindi Zhao, Yun Zhang, Silu Xia, Lixin Wu

Abstract:

The contours and areas of water surfaces, especially lakes, often change due to natural disasters and construction activities. Extracting and updating water contours from satellite images using image processing algorithms is an effective approach; however, producing optimal water surface contours that are close to the true boundaries is still a challenging task. This paper compares the performance of three different level set models for extracting lake contours: the Chan-Vese (CV) model, the signed pressure force (SPF) model, and the region-scalable fitting (RSF) energy model. Experiments indicate that the RSF model, in which a region-scalable fitting energy functional is defined and incorporated into a variational level set formulation, is superior to CV and SPF, and that it can obtain desirable contour lines when there are “holes” in the water regions, such as islands in a lake. Therefore, the RSF model is applied to extracting lake contours from Landsat satellite images. Four temporal Landsat satellite images from the years 2000, 2005, 2010, and 2014 are used in our study. All of them were acquired in May, with the same path/row (121/036), covering Xuzhou City, Jiangsu Province, China. Firstly, the near-infrared (NIR) band is selected for water extraction. Image registration is conducted on the NIR bands of the different temporal images for information updating, and linear stretching is performed in order to distinguish water from other land cover types. Then, for the first temporal image, acquired in 2000, lake contours are extracted via the RSF model initialized with user-defined rectangles. Afterwards, using the lake contours extracted from the previous temporal image as the initial values, lake contours are updated for the current temporal image by means of the RSF model, and the changed and unchanged lakes are detected. The results show that great changes have taken place in two lakes, i.e., Dalong Lake and Panan Lake, and that RSF can effectively extract and update lake contours using multi-temporal satellite images.

Keywords: level set model, multi-temporal image, lake contour extraction, contour update

Procedia PDF Downloads 366
2923 Merging and Comparing Ontologies Generically

Authors: Xiuzhan Guo, Arthur Berrill, Ajinkya Kulkarni, Kostya Belezko, Min Luo

Abstract:

Ontology operations, e.g., aligning and merging, have been studied and implemented extensively in different settings, such as categorical operations, relation algebras, and typed graph grammars, with different concerns. However, aligning and merging operations across these settings share some generic properties, e.g., idempotence, commutativity, associativity, and representativity, labeled (I), (C), (A), and (R), respectively, which are defined on an ontology merging system (D, ~, M), where D is a non-empty set of the ontologies concerned, ~ is a binary relation on D modeling ontology aligning, and M is a partial binary operation on D modeling ontology merging. Given an ontology repository, a finite set O ⊆ D, its merging closure Ô is the smallest set of ontologies that contains the repository and is closed with respect to merging. If (I), (C), (A), and (R) are satisfied, then both D and Ô are naturally partially ordered by merging, and Ô is finite and can be computed, compared, and sorted efficiently, including sorting, selecting, and querying specific elements, e.g., maximal and minimal ontologies. We also show that the ontology merging system given by ontology V-alignment pairs and pushouts satisfies the properties (I), (C), (A), and (R), so that the merging system is partially ordered and the merging closure of a given repository with respect to pushouts can be computed efficiently.
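
A small sketch of computing the merging closure Ô of a finite repository by fixed-point iteration. Ontologies are modeled here simply as frozensets of terms, alignment ~ as non-empty overlap, and merging M as union; these toy stand-ins only illustrate the closure computation and are not the paper's V-alignment/pushout construction.

```python
from itertools import combinations

def aligned(a: frozenset, b: frozenset) -> bool:
    return bool(a & b)                  # toy alignment: the ontologies share terms

def merge(a: frozenset, b: frozenset) -> frozenset:
    return a | b                        # toy merge: union of terms

def merging_closure(repository):
    """Smallest superset of `repository` closed under merging of aligned pairs."""
    closure = set(repository)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closure), 2):
            if aligned(a, b):
                m = merge(a, b)
                if m not in closure:
                    closure.add(m)
                    changed = True
    return closure

repo = {frozenset({"cat", "animal"}), frozenset({"dog", "animal"}), frozenset({"car"})}
print(sorted(sorted(o) for o in merging_closure(repo)))
```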

Keywords: ontology aligning, ontology merging, merging system, poset, merging closure, ontology V-alignment pair, ontology homomorphism, ontology V-alignment pair homomorphism, pushout

Procedia PDF Downloads 893
2922 An Experimental Study on the Influence of Brain-Break in the Classroom on the Physical Health and Academic Performance of Fourth Grade Students

Authors: Qian Mao, Xiaozan Wang, Jiarong Zhong, Xiaolin Zou

Abstract:

Introduction: As a result of the decline in students' physical health and the increase in study pressure, students' academic performance has suffered. Objective: This study aims to verify whether a Brain-Break intervention in fourth-grade primary school classrooms can improve students' physical health and academic performance. Methods: Following the principle of no difference in pre-test data, students from two fourth-grade classes of Fuhai Road Primary School, Fushan district, Yantai city, Shandong province, were selected as experimental subjects, including 50 students in the experimental class (25 males and 25 females) and 50 students in the control class (24 males and 26 females). In the experiment, the students in the experimental class performed a 4-minute Brain-Break program designed by the researcher during the second class in the morning and in the afternoon, and the intervention lasted for 12 weeks. In addition, lung capacity, the 50-meter run, the sitting body forward bend, one-minute rope skipping, and one-minute sit-ups, as stipulated in the national standards for physical fitness of students (revised in 2014), were selected as the indicators of physical health. The scores of Chinese, Mathematics, and English in the unified academic test of the municipal education bureau were selected as the indicators of academic performance. The independent-samples t-test was used to compare the data of each index between the two classes, and the paired-samples t-test was used to compare the data of each index within each class. This paper presents only the results with significant differences. Results: In terms of physical health, lung capacity (P=0.002, T= -2.254), one-minute rope skipping (P=0.000, T=3.043), and one-minute sit-ups (P=0.045, T=6.153) were significantly different between the experimental class and the control class. In terms of academic performance, there is a significant difference between the Chinese performance of the experimental class and that of the control class (P=0.009, T=4.833). Conclusion: Adding a Brain-Break intervention in the classroom can effectively improve the cardiorespiratory endurance (lung capacity), coordination (rope skipping), and abdominal strength (sit-ups) of fourth-grade students, and at the same time it can effectively improve their Chinese performance. Therefore, it is suggested to promote micro-sports in primary school classrooms throughout the country to help students improve their physical health and academic performance.

Keywords: academic performance, brain break, fourth grade, physical health

Procedia PDF Downloads 101
2921 Humeral Head and Scapula Detection in Proton Density Weighted Magnetic Resonance Images Using YOLOv8

Authors: Aysun Sezer

Abstract:

Magnetic Resonance Imaging (MRI) is one of the advanced diagnostic tools for evaluating shoulder pathologies. Proton Density (PD)-weighted MRI sequences prove highly effective in detecting edema; however, they are deficient in the anatomical identification of bones due to a trauma-induced decrease in signal-to-noise ratio and blur in the traumatized cortices. Computer-based diagnostic systems require precise segmentation, identification, and localization of anatomical regions in medical imagery. Deep learning-based object detection algorithms exhibit remarkable proficiency in real-time object identification and localization. In this study, the YOLOv8 model was employed to detect the humeral head and scapular regions in 665 axial PD-weighted MR images. The YOLOv8 configuration achieved an overall success rate of 99.60% and 89.90% for detecting the humeral head and scapula, respectively, at an intersection over union (IoU) threshold of 0.5. Our findings indicate significant promise in employing YOLOv8-based detection for the humerus and scapula regions, particularly in the context of PD-weighted images affected by both noise and intensity inhomogeneity.
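
A brief sketch of running a trained YOLOv8 detector on an axial PD-weighted slice with the Ultralytics API; the weight file, image path, confidence threshold, and class index mapping below are placeholders standing in for the model trained in this study.

```python
from ultralytics import YOLO

# Placeholder paths: a custom-trained checkpoint and one exported MR slice.
model = YOLO("shoulder_yolov8.pt")
results = model("axial_pd_slice.png", conf=0.25)

names = {0: "humeral head", 1: "scapula"}      # assumed class mapping
for r in results:
    for box in r.boxes:
        cls = int(box.cls[0])
        score = float(box.conf[0])
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box in pixel coordinates
        print(f"{names.get(cls, cls)}: {score:.2f} at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")
```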

Keywords: YOLOv8, object detection, humerus, scapula, MRI

Procedia PDF Downloads 66
2920 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment

Authors: Neda Orak, Mostafa Zarei

Abstract:

Environmental assessment is an important part of environmental management, and various methods and techniques have been produced and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields such as geology, cartography, geography, agriculture, forestry, land use planning, and the environment. It can show cyclical changes of objects on the Earth's surface, and it can delimit earth phenomena on the basis of recorded changes and deviations in electromagnetic reflectance. This research assesses mangrove forests using RS techniques; its aim is a quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries, carried out with Landsat satellite images from 1975 to 2013 matched to ground control points. This part of the mangroves is the last distribution in the northern hemisphere, so the work can provide a good background for better management of this important ecosystem. Landsat has provided researchers with valuable images for detecting changes on the Earth. This research used the MSS, TM, ETM+, and OLI sensors from 1975, 1990, 2000, and 2003-2013. Changes were studied after essential corrections such as error fixing, band combination, and georeferencing to the 2012 image as the base image, using maximum likelihood classification and the IPVI index; the classification was supervised. A 2004 Google Earth image and ground points collected by GPS (2010-2012) were used to verify the changes obtained from the satellite images. The results showed that the mangrove area in Bidkhoon in 2012 was 1119072 m2 by GPS, 1231200 m2 by maximum likelihood supervised classification, and 1317600 m2 by IPVI. The Basatin areas are, respectively, 466644 m2, 88200 m2, and 63000 m2. The final results show that the forests have declined naturally; in Basatin the decline is due to human activities. The loss was offset by planting over many years, although the trend has been declining again in recent years. This shows that satellite images have a high ability to characterize environmental processes. The research also showed a high correlation between the images and indices such as IPVI and NDVI and the ground control points.
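
For reference, the IPVI used above is a simple rescaling of NDVI to the [0, 1] range and can be computed directly from the red and near-infrared bands; the arrays are assumed to be already read and co-registered.

```python
import numpy as np

def ipvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Infrared Percentage Vegetation Index: NIR / (NIR + Red) = (NDVI + 1) / 2."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir, denom, out=out, where=denom != 0)
    return out  # values close to 1 indicate dense vegetation
```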

Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park

Procedia PDF Downloads 294
2919 Oral Microbiota as a Novel Predictive Biomarker of Response To Immune Checkpoint Inhibitors in Advanced Non-small Cell Lung Cancer Patients

Authors: Francesco Pantano, Marta Fogolari, Michele Iuliani, Sonia Simonetti, Silvia Cavaliere, Marco Russano, Fabrizio Citarella, Bruno Vincenzi, Silvia Angeletti, Giuseppe Tonini

Abstract:

Background: Although immune checkpoint inhibitors (ICIs) have changed the treatment paradigm of non–small cell lung cancer (NSCLC), these drugs fail to elicit durable responses in the majority of NSCLC patients. The gut microbiota, which is able to regulate immune responsiveness, is emerging as a promising, modifiable target to improve ICI response rates. Since the oral microbiome has been demonstrated to be the primary source of the bacterial microbiota in the lungs, we investigated its composition as a potential predictive biomarker to identify and select patients who could benefit from immunotherapy. Methods: Thirty-five patients with stage IV squamous and non-squamous NSCLC eligible for anti-PD-1/PD-L1 monotherapy were enrolled. Saliva samples were collected from patients prior to the start of treatment, bacterial DNA was extracted using the QIAamp® DNA Microbiome Kit (QIAGEN), and the 16S rRNA gene was sequenced on a MiSeq sequencing instrument (Illumina). Results: NSCLC patients were dichotomized as “responders” (partial or complete response) and “non-responders” (progressive disease) after 12 weeks of treatment, based on RECIST criteria. A higher prevalence of the phylum Candidatus Saccharibacteria was found in the 10 responders compared to the non-responders (abundance 5% vs 1%, respectively; p-value = 1.46 x 10^-7; False Discovery Rate (FDR) = 1.02 x 10^-6). Moreover, a higher prevalence of the Saccharibacteria Genera Incertae Sedis genus (belonging to the Candidatus Saccharibacteria phylum) was observed in responders (p-value = 6.01 x 10^-7 and FDR = 2.46 x 10^-5). Finally, the patients who benefited from immunotherapy showed a significant abundance of the TM7 Phylum Sp Oral Clone FR058 strain, a member of the Saccharibacteria Genera Incertae Sedis genus (p-value = 6.13 x 10^-7 and FDR = 7.66 x 10^-5). Conclusions: These preliminary results show a significant association between the oral microbiota and ICI response in NSCLC patients. In particular, the higher prevalence of the Candidatus Saccharibacteria phylum and the TM7 Phylum Sp Oral Clone FR058 strain in responders suggests a potential immunomodulatory role. The study is still ongoing, and updated data will be presented at the congress.

Keywords: oral microbiota, immune checkpoint inhibitors, non-small cell lung cancer, predictive biomarker

Procedia PDF Downloads 99