Search results for: airway segmentation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 527

437 Segmentation of Liver Using Random Forest Classifier

Authors: Gajendra Kumar Mourya, Dinesh Bhatia, Akash Handique, Sunita Warjri, Syed Achaab Amir

Abstract:

Nowadays, medical imaging has become an integral part of modern healthcare. Abdominal CT images are an invaluable means for abdominal organ investigation and have been widely studied in recent years. Diagnosis of liver pathologies is one of the major areas of current interest in the field of medical image processing and is still an open problem. To study and diagnose the liver in depth, segmentation is performed to identify which part of the liver is most affected. Manual segmentation of the liver in CT images is time-consuming and suffers from inter- and intra-observer differences. However, automatic or semi-automatic computer-aided segmentation of the liver is a challenging task due to inter-patient variability in liver shape and size. In this paper, we present a technique for automatically segmenting the liver from CT images using a Random Forest classifier. Random forests, or random decision forests, are an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes of the individual trees. After comparing with various other techniques, it was found that the Random Forest classifier provides better segmentation results with respect to accuracy and speed. We validated our results using various techniques, and they show above 89% accuracy in all cases.
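
As an illustration of the core technique, per-voxel classification with a Random Forest can be sketched in a few lines of scikit-learn; the features, labels, and sizes below are synthetic placeholders, not the authors' actual CT feature set or training protocol:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic per-voxel features (stand-ins for intensity/texture descriptors)
# and liver / non-liver labels for a toy "CT volume".
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# An ensemble of decision trees votes per voxel; the mode of the votes
# becomes the predicted label, as described in the abstract.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
mask = clf.predict(X)            # per-voxel labels form the segmentation mask
accuracy = (mask == y).mean()
```

In a real pipeline, X would hold features extracted per CT voxel and prediction would run on unseen scans rather than the training data.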

Keywords: CT images, image validation, random forest, segmentation

Procedia PDF Downloads 313
436 Image Instance Segmentation Using Modified Mask R-CNN

Authors: Avatharam Ganivada, Krishna Shah

Abstract:

Mask R-CNN was recently introduced by the Facebook AI Research (FAIR) team and is mainly concerned with instance segmentation in images. The original Mask R-CNN is based on ResNet and a feature pyramid network (FPN), where a single dropout method is employed. This paper provides a modified Mask R-CNN by adding multiple dropout methods to the network. The proposed model also utilizes the concepts of ResNet and FPN to extract stage-wise network feature maps, wherein a top-down network path with lateral connections is used to obtain semantically strong features. The proposed model produces three outputs for each object in the image: a class label, bounding-box coordinates, and an object mask. The performance of the proposed network is evaluated on the segmentation of every instance in images from the COCO and Cityscapes datasets. The proposed model achieves better performance than the state-of-the-art networks on these datasets.
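
The abstract's key modification, applying dropout at multiple points rather than a single one, can be illustrated with a framework-free sketch of inverted dropout; the placement, rate, and toy feature maps are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def dropout(x, p, rng):
    """Inverted dropout: zero each activation with probability p and
    rescale survivors by 1/(1-p) to keep the expected value unchanged."""
    keep = rng.random(x.shape) >= p
    return x * keep / (1.0 - p)

# The modified Mask R-CNN inserts dropout at multiple sites of the network
# rather than one; here two sites are chained on toy feature maps.
rng = np.random.default_rng(1)
feat = np.ones((4, 8))
out = dropout(dropout(feat, 0.2, rng), 0.2, rng)
```

Surviving activations come out at 1 / (0.8 * 0.8) = 1.5625, so the expected activation stays at 1 regardless of how many dropout sites are chained.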

Keywords: instance segmentation, object detection, convolutional neural networks, deep learning, computer vision

Procedia PDF Downloads 73
435 Comparison of McGrath, Pentax, and Macintosh Laryngoscope in Normal and Cervical Immobilized Manikin by Novices

Authors: Jong Yeop Kim, In Kyong Yi, Hyun Jeong Kwak, Sook Young Lee, Sung Yong Park

Abstract:

Background: Several video laryngoscopes (VLs) have been used to facilitate tracheal intubation in normal and potentially difficult airways, especially by novice personnel. The aim of this study was to compare tracheal intubation performance, regarding time to intubation, glottic view, difficulty, and dental click, by novices using the McGrath VL, Pentax Airway Scope (AWS), and Macintosh laryngoscope in normal and cervical-immobilized manikin models. Methods: Thirty-five anesthesia nurses without previous intubation experience were recruited. The participants performed endotracheal intubation in a manikin model at two simulated neck positions (normal, and fixed neck via cervical immobilization), using three different devices (McGrath VL, Pentax AWS, and Macintosh direct laryngoscope) three times each. Performance parameters included intubation time, success rate of intubation, Cormack-Lehane laryngoscopic grade, dental click, and subjective difficulty score. Results: Intubation time and success rate at the first attempt were not significantly different between the three groups in the normal airway manikin. In the cervical-immobilized manikin, intubation time was shorter (p = 0.012) and the success rate at the first attempt was significantly higher (p < 0.001) with the McGrath VL and Pentax AWS than with the Macintosh laryngoscope. Both VLs showed lower difficulty scores (p < 0.001) and more Cormack-Lehane grade I views (p < 0.001). The incidence of dental clicks was higher with the McGrath VL than with the Macintosh laryngoscope in both the normal and cervical-immobilized airway (p = 0.005 and p < 0.001, respectively). Conclusion: The McGrath VL and Pentax AWS resulted in shorter intubation times and a higher first-attempt success rate than the Macintosh laryngoscope for novice intubators in a cervical-immobilized manikin model. However, the McGrath VL may carry a higher risk of dental injury than the Macintosh laryngoscope in this scenario.

Keywords: intubation, manikin, novice, videolaryngoscope

Procedia PDF Downloads 158
434 Nanoparticles Activated Inflammasome Lead to Airway Hyperresponsiveness and Inflammation in a Mouse Model of Asthma

Authors: Pureun-Haneul Lee, Byeong-Gon Kim, Sun-Hye Lee, An-Soo Jang

Abstract:

Background: Nanoparticles may pose adverse health effects due to particulate matter inhalation. Nanoparticle exposure induces cell and tissue damage, causing local and systemic inflammatory responses. The inflammasome is a major regulator of inflammation through its activation of pro-caspase-1, which cleaves pro-interleukin-1β (IL-1β) into its mature form and may signal acute and chronic immune responses to nanoparticles. Objective: The aim of the study was to identify whether nanoparticles exaggerate the inflammasome pathway, leading to airway inflammation and hyperresponsiveness in an allergic mouse model of asthma. Methods: Mice were treated with saline (sham), OVA-sensitized and challenged (OVA), or titanium dioxide nanoparticles. Lung interleukin-1 beta (IL-1β), interleukin-18 (IL-18), NACHT, LRR and PYD domains-containing protein 3 (NLRP3), and caspase-1 levels were assessed by Western blot. Caspase-1 was checked by immunohistochemical staining. Reactive oxygen species were measured via the markers 8-isoprostane and carbonyl by ELISA. Results: Airway inflammation and hyperresponsiveness increased in OVA-sensitized/challenged mice, and these responses were exaggerated by TiO2 nanoparticle exposure. TiO2 nanoparticle treatment increased IL-1β and IL-18 protein expression in OVA-sensitized/challenged mice. TiO2 nanoparticles augmented the expression of NLRP3 and caspase-1, leading to the formation of active caspase-1 in the lung. Lung caspase-1 expression was increased in OVA-sensitized/challenged mice, and these responses were exaggerated by TiO2 nanoparticle exposure. Reactive oxygen species were increased in OVA-sensitized/challenged mice and in OVA-sensitized/challenged plus TiO2-exposed mice. Conclusion: Our data demonstrate that the inflammasome pathway is activated in asthmatic lungs following nanoparticle exposure, suggesting that targeting the inflammasome may help control nanoparticle-induced airway inflammation and responsiveness.

Keywords: bronchial asthma, inflammation, inflammasome, nanoparticles

Procedia PDF Downloads 375
433 A Modified Shannon Entropy Measure for Improved Image Segmentation

Authors: Mohammad A. U. Khan, Omar A. Kittaneh, M. Akbar, Tariq M. Khan, Husam A. Bayoud

Abstract:

The Shannon entropy measure has been widely used for measuring uncertainty. In practical settings, however, a histogram is used to estimate the underlying distribution, and the histogram depends on the number of bins used. In this paper, a modification is proposed that makes the histogram-based Shannon entropy consistent. To demonstrate its benefits, two applications are picked from medical image processing. Simulations are carried out to show the superiority of the modified measure for the image segmentation problem. The improvement may be attributed to its robustness to uneven backgrounds in images.
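
The bin dependence that motivates the modification is easy to reproduce: the same samples yield different histogram-based entropy estimates as the bin count changes. This is a generic illustration of the problem, not the authors' corrected measure:

```python
import numpy as np

def shannon_entropy(samples, bins):
    """Histogram-based Shannon entropy estimate, in bits."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# The same data, binned two ways: the estimate is not consistent in the
# bin count, which is the inconsistency the paper sets out to fix.
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
h_coarse = shannon_entropy(x, bins=8)
h_fine = shannon_entropy(x, bins=256)
```

The finer histogram always reports a larger entropy, so any threshold tuned on one binning breaks under another.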

Keywords: Shannon entropy, medical image processing, image segmentation, modification

Procedia PDF Downloads 497
432 Automated Digital Mammogram Segmentation Using Dispersed Region Growing and Pectoral Muscle Sliding Window Algorithm

Authors: Ayush Shrivastava, Arpit Chaudhary, Devang Kulshreshtha, Vibhav Prakash Singh, Rajeev Srivastava

Abstract:

Early diagnosis of breast cancer can improve the survival rate by detecting cancer at an early stage. Breast region segmentation is an essential step in the analysis of digital mammograms. Accurate image segmentation leads to better detection of cancer; it aims at separating the Region of Interest (ROI) from the rest of the image. The procedure begins with the removal of labels, annotations, and tags from the mammographic image using a morphological opening method. The Pectoral Muscle Sliding Window Algorithm (PMSWA) is used to remove the pectoral muscle from mammograms, which is necessary because the intensity values of the pectoral muscle are similar to those of the ROI, making it difficult to separate. After removing the pectoral muscle, the Dispersed Region Growing Algorithm (DRGA) is used for segmentation of the mammogram; it disperses seeds in different regions instead of a single bright region. To demonstrate the validity of our segmentation method, 322 mammographic images from the Mammographic Image Analysis Society (MIAS) database are used. The dataset contains the medio-lateral oblique (MLO) view of the mammograms. Experimental results on the MIAS dataset show the effectiveness of the proposed method.
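
For context, classical single-seed region growing, which DRGA generalizes by dispersing seeds across multiple regions, can be sketched as follows; the tolerance and toy image are assumptions for illustration:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, accepting 4-connected neighbours whose
    intensity is within `tol` of the seed; returns a boolean mask."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = int(img[seed])
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(int(img[nr, nc]) - ref) <= tol:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# A bright 3x3 square stands in for the ROI of a mammogram.
img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 200
mask = region_grow(img, seed=(2, 2), tol=10)
```

DRGA's contribution is running such growth from seeds dispersed over the image rather than from one bright region, so dimmer tissue is not missed.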

Keywords: CAD, dispersed region growing algorithm (DRGA), image segmentation, mammography, pectoral muscle sliding window algorithm (PMSWA)

Procedia PDF Downloads 312
431 Change Detection Method Based on Scale-Invariant Feature Transformation Keypoints and Segmentation for Synthetic Aperture Radar Image

Authors: Lan Du, Yan Wang, Hui Dai

Abstract:

Synthetic aperture radar (SAR) image change detection has recently become a challenging problem owing to the existence of speckle noise. In this paper, an unsupervised, distribution-free change detection method for SAR images based on scale-invariant feature transform (SIFT) keypoints and segmentation is proposed. First, noise-robust SIFT keypoints, which reveal the blob-like structures in an image, are extracted from the log-ratio image to reduce the detection range. Then, unlike traditional change detection, which obtains the change-detection map directly from the difference image, segmentation is performed around the extracted keypoints in the two original multitemporal SAR images to obtain accurate changed regions. Finally, the change-detection map is generated by comparing the two segmentations. Experimental results on a real SAR image dataset demonstrate the effectiveness of the proposed method.
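
The log-ratio operator mentioned in the first step is standard in SAR change detection because speckle is multiplicative; a minimal sketch of just that step (SIFT extraction and segmentation omitted):

```python
import numpy as np

def log_ratio(img1, img2, eps=1e-6):
    """Log-ratio difference image: speckle in SAR is multiplicative,
    so the (log of the) ratio is compared instead of the difference."""
    return np.abs(np.log((img1 + eps) / (img2 + eps)))

before = np.full((4, 4), 100.0)
after = before.copy()
after[1:3, 1:3] = 400.0          # simulated changed region
lr = log_ratio(before, after)
```

Unchanged pixels map to values near zero while the changed block stands out at log(4), which is what makes blob detectors such as SIFT effective on the log-ratio image.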

Keywords: change detection, Synthetic Aperture Radar (SAR), Scale-Invariant Feature Transformation (SIFT), segmentation

Procedia PDF Downloads 386
430 Anatomically-Based Oropharyngeal Rehabilitation for the Patients with Obstructive Sleep Apnea Using a Multilevel Approach

Authors: Hsin-Yu Lin, Ching-Hsia Hung

Abstract:

Obstructive sleep apnea (OSA) is characterized by complete or partial obstruction of the upper airway during sleep. The vulnerable sites of upper airway collapse are consequences of sleep state-dependent reductions in tone in specific pharyngeal dilators. Clinical examinations reveal multilevel collapse of the upper airway among patients with OSA. Therefore, an anatomically based oropharyngeal rehabilitation should comprise a multilevel approach, including the retropalatal, retroglossal, hypopharyngeal, temporomandibular, and facial levels, all of which involve different muscle groups and contribute to multifunctional interaction and coordination, such as swallowing, breathing, and phonation. The purpose of the study was to examine the effects of this rehabilitation program with a multilevel approach. In this study, fifteen subjects with newly diagnosed moderate or severe OSA (Apnea-Hypopnea Index ≥ 15) were randomized into an intervention group and a control group. The intervention group (N=8) underwent a 12-week hospital-based rehabilitation program, while the control group (N=7) was kept on the waiting list. The 12-week intervention comprised an anatomically based multilevel approach. The primary outcome was polysomnography (PSG) data, and the secondary outcome was oropharyngeal and respiratory muscle function. In the intervention group, the Apnea-Hypopnea Index improved significantly (46.96±19.45 versus 32.78±10.78 events/h, p=0.017), whereas it worsened in the control group (35.77±17.49 versus 42.96±17.32 events/h, p=0.043). While the control group showed no change, the intervention group demonstrated significant improvement in other PSG outcomes, including the arousal index (46.04±18.9 versus 32.98±8.35/h, p=0.035) and mean SpO2 (92.88±2.1 versus 94.13±1.46%, p=0.039). Moreover, the intervention group demonstrated significant improvement in oropharyngeal and respiratory muscle function compared to the control group. This anatomically based oropharyngeal rehabilitation with a multilevel approach may thus serve as a non-invasive therapy for patients with OSA.

Keywords: obstructive sleep apnea, upper airway, oropharyngeal rehabilitation, multilevel approach

Procedia PDF Downloads 102
429 Automatic Segmentation of Lung Pleura Based On Curvature Analysis

Authors: Sasidhar B., Bhaskar Rao N., Ramesh Babu D. R., Ravi Shankar M.

Abstract:

Segmentation of the lung pleura is a preprocessing step in Computer-Aided Diagnosis (CAD) that helps reduce false positives in the detection of lung cancer. Existing methods fail to extract lung regions when nodules lie at the pleura of the lungs. In this paper, a new method is proposed that segments lung regions with nodules at the pleura based on curvature analysis and morphological operators. The proposed algorithm is tested on a dataset of six patients, consisting of 60 images from the Lung Image Database Consortium (LIDC), and the results are found to be satisfactory with a 98.3% average overlap measure (AΩ).

Keywords: curvature analysis, image segmentation, morphological operators, thresholding

Procedia PDF Downloads 596
428 Proof of Concept of Video Laryngoscopy Intubation: Potential Utility in the Pre-Hospital Environment by Emergency Medical Technicians

Authors: A. Al Hajeri, M. E. Minton, B. Haskins, F. H. Cummins

Abstract:

Pre-hospital endotracheal intubation is fraught with difficulties; one solution offered has been video laryngoscopy (VL), which permits better visualization of the glottis than the standard method of direct laryngoscopy (DL). This method has resulted in a higher first-attempt success rate and fewer failed intubations. However, VL has mainly been evaluated by experienced providers (experienced anesthetists), and as such the utility of this device for those who infrequently intubate has not been thoroughly assessed. We sought to evaluate this equipment to determine whether, in the hands of novice providers, it could prove an effective airway management adjunct. DL and two VL methods (C-Mac with a distal screen, and C-Mac with an attached screen) were evaluated through simulated practice on a Laerdal airway management trainer manikin. Twenty Emergency Medical Technicians (basic) were recruited as novice practitioners. This group was used to eliminate bias, as these clinicians had no pre-hospital experience of intubation (although they did have basic airway skills). The following areas were assessed: time taken to intubate, number of attempts required to successfully intubate, and ease of use of the equipment. VL (attached screen) on average took novice clinicians longer to achieve successful intubation, had a lower success rate, and received a higher difficulty rating than DL. However, VL (with distal screen) and DL were comparable in intubation time, success rate, gastric inflation rate, and user-rated difficulty. This study suggests that routine use of VL by inexperienced clinicians would offer no added benefit over DL. Further studies are required to determine whether Emergency Medical Technicians (Paramedics) would benefit from this airway adjunct, and to ascertain whether, after initial mastery of VL (with a distal screen), lower intubation times and difficulty ratings may be achievable.

Keywords: direct laryngoscopy, endotracheal intubation, pre-hospital, video laryngoscopy

Procedia PDF Downloads 410
427 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI

Authors: Ananya Ananya, Karthik Rao

Abstract:

Accurate segmentation of knee cartilage in 3D magnetic resonance (MR) images for quantitative volume assessment is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Hence, automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods for knee cartilage segmentation in 3D MRI based on 2D convolutional neural networks. The architectures are validated on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation for the whole 3D image. The proposed methods are modified versions of U-Net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibial cartilage, femoral bone, and tibial bone, the cartilages being the primary components of interest. U-Net consists of a contracting path and an expanding path, to capture context and localization respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of the modified U-Net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice Similarity Coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibial cartilage, with manual segmentation as the reference.
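
The claim about dilated convolutions can be checked with a short calculation: stacking 3x3 convolutions with doubling dilation rates grows the receptive field exponentially while the parameter count grows only linearly. This is generic arithmetic, not the authors' exact architecture:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of dilated convolutions: each layer
    adds (kernel_size - 1) * dilation to the field of view."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Five 3x3 layers: doubling the dilation rate widens the receptive field
# to 63 pixels, versus 11 for the same five undilated layers, at the same
# parameter count.
rf_dilated = receptive_field(3, [1, 2, 4, 8, 16])
rf_plain = receptive_field(3, [1, 1, 1, 1, 1])
```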

Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net

Procedia PDF Downloads 261
426 Grain Boundary Detection Based on Superpixel Merges

Authors: Gaokai Liu

Abstract:

The distribution of material grain sizes reflects strength, fracture, corrosion, and other properties, and the grain size can be acquired via the grain boundary. In recent years, automatic grain boundary detection has been widely required in place of complex experimental operations. In this paper, an effective solution is applied to acquire the grain boundaries of material images. First, an initial superpixel segmentation result is obtained via a superpixel approach. Then, a region merging method is employed to merge adjacent regions based on certain similarity criteria. The experimental results show that the merging strategy improves the superpixel segmentation result on material datasets.
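
A toy version of similarity-driven region merging, with superpixels reduced to labelled blocks and mean intensity as the similarity criterion (both simplifying assumptions), might look like:

```python
import numpy as np

def merge_regions(labels, img, thresh):
    """Single-pass greedy merge of 4-adjacent labelled regions whose mean
    intensities differ by less than `thresh` (a toy similarity criterion)."""
    means = {l: float(img[labels == l].mean()) for l in np.unique(labels)}
    out = labels.copy()
    h, w = labels.shape
    for r in range(h):
        for c in range(w):
            for nr, nc in ((r + 1, c), (r, c + 1)):
                if nr < h and nc < w and out[r, c] != out[nr, nc]:
                    a, b = out[r, c], out[nr, nc]
                    if abs(means[a] - means[b]) < thresh:
                        out[out == b] = a              # absorb b into a
                        means[a] = float(img[out == a].mean())
    return out

# Two similar "superpixels" (labels 0 and 1) and one dissimilar one (2):
# 0 and 1 merge, the grain boundary against 2 survives.
img = np.array([[10, 11, 90], [10, 12, 91]], dtype=float)
labels = np.array([[0, 1, 2], [0, 1, 2]])
merged = merge_regions(labels, img, thresh=5)
```

Real implementations typically use a region adjacency graph and richer criteria (texture, edge strength) instead of this single pass over pixels.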

Keywords: grain boundary detection, image segmentation, material images, region merging

Procedia PDF Downloads 170
425 Combining an Optimized Closed Principal Curve-Based Method and Evolutionary Neural Network for Ultrasound Prostate Segmentation

Authors: Tao Peng, Jing Zhao, Yanqing Xu, Jing Cai

Abstract:

Due to missing or ambiguous boundaries between the prostate and neighboring structures, the presence of shadow artifacts, and the large variability in prostate shapes, ultrasound prostate segmentation is challenging. To handle these issues, this paper develops a hybrid method for ultrasound prostate segmentation by combining an optimized closed principal curve-based method and an evolutionary neural network: the former can fit curves with great curvature and generate a contour composed of line segments connected by sorted vertices, and the latter is used to express an appropriate map function (represented by the parameters of the evolutionary neural network) for generating a smooth prostate contour that matches the ground truth contour. Both qualitative and quantitative experimental results showed that our proposed method achieves accurate and robust performance.

Keywords: ultrasound prostate segmentation, optimized closed polygonal segment method, evolutionary neural network, smooth mathematical model, principal curve

Procedia PDF Downloads 202
424 New Segmentation of Piecewise Moving-Average Model by Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

This paper addresses the problem of signal segmentation within a Bayesian framework using a reversible jump MCMC algorithm. The signal is modelled by a piecewise constant Moving-Average (MA) model, where the number of segments, the positions of the change-points, and the order and coefficients of the MA model for each segment are unknown. The reversible jump MCMC algorithm is then used to generate samples distributed according to the joint posterior distribution of the unknown parameters. These samples allow the calculation of some interesting features of the posterior distribution. The performance of the methodology is illustrated via several simulation results.

Keywords: piecewise, moving-average model, reversible jump MCMC, signal segmentation

Procedia PDF Downloads 227
423 Bridge Members Segmentation Algorithm of Terrestrial Laser Scanner Point Clouds Using Fuzzy Clustering Method

Authors: Donghwan Lee, Gichun Cha, Jooyoung Park, Junkyeong Kim, Seunghee Park

Abstract:

3D shape models of existing structures are required for many purposes, such as safety and operation management. Traditional 3D modeling methods are based on manual or semi-automatic reconstruction from close-range images, which is expensive and time-consuming. The Terrestrial Laser Scanner (TLS) is a common survey technique for quickly and accurately measuring a 3D shape model, and is used at construction sites and in cultural heritage management. However, there are many limits to processing a TLS point cloud, because the raw point cloud is a massive volume of data, so the capability of carrying out useful analyses on the unstructured 3D points is also limited. Thus, segmentation becomes an essential step whenever grouping of points with common attributes is required. In this paper, a member segmentation algorithm is presented to separate a raw point cloud that includes only 3D coordinates. The paper presents a clustering approach based on a fuzzy method for this objective: the Fuzzy C-Means (FCM) algorithm is reviewed and used in combination with a similarity-driven cluster merging method. It is applied to a point cloud acquired with a Leica ScanStation C10/C5 at a test bed: a bridge connecting the 1st and 2nd engineering buildings at Sungkyunkwan University in Korea, about 32 m long and 2 m wide, used by pedestrians between the two buildings. The 3D point cloud of the test bed was constructed from a TLS measurement, and this data was divided by the segmentation algorithm into individual members. Experimental analyses of the results from the proposed unsupervised segmentation process are shown to be promising. Because of the point cloud segmentation, the configuration of each member can subsequently be managed.
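
A plain Fuzzy C-Means implementation, the starting point the paper combines with similarity-driven cluster merging, can be sketched as follows; the toy 3D points stand in for a TLS cloud:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    """Plain FCM: alternate center and membership updates; returns
    (cluster centers, membership matrix U of shape (n_points, c))."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))      # random soft memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * np.sum(d ** -p, axis=1, keepdims=True))
    return centers, U

# Two well-separated toy clusters standing in for two bridge members.
X = np.vstack([np.zeros((20, 3)), np.full((20, 3), 10.0)])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)
```

Unlike hard k-means, each point keeps a graded membership in every cluster, which is what the similarity-driven merging step then exploits.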

Keywords: fuzzy c-means (FCM), point cloud, segmentation, terrestrial laser scanner (TLS)

Procedia PDF Downloads 234
422 On Musical Information Geometry with Applications to Sonified Image Analysis

Authors: Shannon Steinmetz, Ellen Gethner

Abstract:

In this paper, a theoretical foundation is developed for patterned segmentation of audio using the geometry of music and statistical manifolds. We demonstrate image content clustering using conic space sonification. The algorithm takes a geodesic curve as a model estimator of the three-parameter Gamma distribution. The random variable is parameterized by musical centricity and centric velocity. The model parameters predict audio segmentation, in the form of duration and frame count, based on the likelihood of a musical geometry transition. We provide an example using a database of randomly selected images, resulting in statistically significant clusters of similar image content.

Keywords: sonification, musical information geometry, image, content extraction, automated quantification, audio segmentation, pattern recognition

Procedia PDF Downloads 237
421 Pentax Airway Scope Video Laryngoscope for Orotracheal Intubation in Children: A Randomized Controlled Trial

Authors: In Kyong Yi, Yun Jeong Chae, Jihoon Hwang, Sook-Young Lee, Jong-Yeop Kim

Abstract:

Background: The Pentax Airway Scope (AWS) is a recently developed video laryngoscope for use in both normal and difficult airways, providing a good laryngeal view. The purpose of this randomized noninferiority study was to evaluate the efficacy of the Pentax AWS regarding intubation time, laryngeal view, and ease of intubation in pediatric patients with normal airways, compared to the Macintosh laryngoscope. Method: A total of 136 pediatric patients aged 1 to 10 with American Society of Anesthesiologists physical status I or II, undergoing general anesthesia and requiring orotracheal intubation, were randomly allocated into two groups: Macintosh laryngoscope (n=68) and Pentax AWS (n=68). Anesthesia was induced with propofol, rocuronium, and sevoflurane. The primary outcome was intubation time. Cormack-Lehane laryngeal view grade, application of optimal external laryngeal manipulation (OELM), intubation difficulty scale (IDS), intubation failure rate, and adverse events were also measured. Result: No significant difference was observed between the two groups regarding intubation time (Macintosh: 23 [22-26] s vs. Pentax: 23.5 [22-27.75] s, p=0.713). As for the laryngeal view grade, the Pentax group showed fewer cases of grade 2a or higher compared to the Macintosh group (grade 1/2a/2b/3: 52.9%/41.2%/4.4%/1.5% vs. 98.5%/1.5%/0%/0%, p<0.001). No optimal external laryngeal manipulation was required in the Pentax group (38.2% vs. 0%, p<0.001). The intubation difficulty scale yielded lower values for the Pentax group (0 [0-2] vs. 0 [0-0.55], p=0.001). The failure rate was not different between the two groups (1.5% vs. 4.4%, p=0.619). Regarding adverse events, a slightly higher incidence of bleeding (1.5% vs. 5.9%, p=0.172) and of teeth injury (0% vs. 5.9%, p=0.042) occurred in the Pentax group. Conclusion: The Pentax AWS provided a better laryngeal view with similar intubation time and success rate compared with the Macintosh laryngoscope in children with normal airways. However, the risk of teeth injury might be increased and warrants special attention.

Keywords: Pentax-AWS, pediatric, video laryngoscope, intubation

Procedia PDF Downloads 202
420 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite for lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined based on patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are based on the new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the min-cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
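
The patch-based boundary term can be illustrated in isolation; the Gaussian form and parameters below are common choices, not necessarily the authors' exact definition, and graph construction plus max-flow are omitted:

```python
import numpy as np

def patch_weight(img, p, q, radius=1, sigma=10.0):
    """Boundary weight from patch (rather than single-pixel) similarity:
    exp(-||patch_p - patch_q||^2 / (2 * sigma^2))."""
    def patch(center):
        r, c = center
        return img[r - radius:r + radius + 1,
                   c - radius:c + radius + 1].astype(float)
    diff = patch(p) - patch(q)
    return float(np.exp(-np.sum(diff ** 2) / (2.0 * sigma ** 2)))

# A vertical step edge: neighbouring pixels inside the dark region get a
# strong link, while a pair straddling the edge gets a near-zero one.
img = np.zeros((6, 6), dtype=np.uint8)
img[:, 3:] = 100
w_same = patch_weight(img, (2, 1), (3, 1))
w_edge = patch_weight(img, (2, 2), (2, 3))
```

Because whole patches are compared, a single noisy pixel perturbs the weight far less than it would under the standard pixel-difference term, which is the robustness the abstract claims.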

Keywords: graph cuts, lung CT scan, lung parenchyma segmentation, patch-based similarity metric

Procedia PDF Downloads 169
419 3D Liver Segmentation from CT Images Using a Level Set Method Based on a Shape and Intensity Distribution Prior

Authors: Nuseiba M. Altarawneh, Suhuai Luo, Brian Regan, Guijin Tang

Abstract:

Liver segmentation from medical images poses more challenges than analogous segmentations of other organs. This contribution introduces a method for segmenting the liver from a series of computed tomography images. Overall, we present a novel method for segmenting the liver by coupling density matching with shape priors. Density matching signifies a tracking method that operates by maximizing the Bhattacharyya similarity measure between the photometric distribution of an estimated image region and a model photometric distribution. Density matching controls the direction of the evolution process and slows down the evolving contour in regions with weak edges. The shape prior improves the robustness of density matching and discourages the evolving contour from exceeding the liver's boundaries in regions with weak edges. The model is implemented using a modified distance regularized level set (DRLS) model. The experimental results show that the method achieves satisfactory results. Comparison with the original DRLS model makes it evident that the proposed model is more effective in addressing the over-segmentation problem. Finally, we gauge the performance of our model against metrics comprising accuracy, sensitivity, and specificity.
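
The Bhattacharyya coefficient that drives the density matching is straightforward to compute from two intensity histograms; a generic sketch, with the level-set coupling omitted:

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Bhattacharyya coefficient of two discrete distributions:
    sum_i sqrt(p_i * q_i), equal to 1 for identical distributions."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return float(np.sum(np.sqrt(p * q)))

# Toy intensity histograms: tracked region vs. the model distribution.
model = [1, 4, 10, 4, 1]
same_shape = [2, 8, 20, 8, 2]     # identical after normalization
different = [10, 4, 1, 0, 0]
bc_match = bhattacharyya_coefficient(model, same_shape)
bc_mismatch = bhattacharyya_coefficient(model, different)
```

Maximizing this coefficient pulls the contour toward regions whose histogram matches the liver model, which is the tracking behaviour the abstract describes.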

Keywords: Bhattacharyya distance, distance regularized level set (DRLS) model, liver segmentation, level set method

Procedia PDF Downloads 313
418 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images

Authors: Xiang Shijie, Zhou Dong, Tian Dan

Abstract:

This paper focuses on the application of real-time semantic segmentation technology to complex road condition recognition, aiming to address the critical issue of improving segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources, thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks. Experimental results show that the method not only significantly improves segmentation accuracy while maintaining real-time performance, but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios.
By incorporating the Guided Image Reconstruction Module, dual-branch structure, and Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation tasks, which is expected to further advance the development of this field.
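The resampling idea behind the Guided Image Reconstruction Module can be illustrated with a strided space-to-depth split, which turns one high-resolution image into several low-resolution views without discarding any pixels. This is only a sketch under assumptions: the paper does not publish its exact resampling scheme, and the function name and stride are illustrative.

```python
import numpy as np

def resample_to_lowres_set(image: np.ndarray, stride: int = 2) -> list:
    """Split a high-resolution image into stride*stride low-resolution
    sub-images by strided sampling (a space-to-depth rearrangement).
    Every pixel of the input ends up in exactly one sub-image."""
    h, w = image.shape[:2]
    # Crop so the dimensions divide evenly by the stride.
    image = image[: h - h % stride, : w - w % stride]
    return [image[i::stride, j::stride]
            for i in range(stride) for j in range(stride)]

# A 4x4 "image" becomes four 2x2 sub-images.
img = np.arange(16).reshape(4, 4)
subs = resample_to_lowres_set(img, stride=2)
```

Each sub-image has a quarter of the resolution, so per-branch feature extraction is cheaper, while the union of sub-images still carries the full image content.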

Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition

Procedia PDF Downloads 24
417 Reduction of Speckle Noise in Echocardiographic Images: A Survey

Authors: Fathi Kallel, Saida Khachira, Mohamed Ben Slima, Ahmed Ben Hamida

Abstract:

Speckle noise is a principal characteristic of cardiac ultrasound images; it corresponds to the grainy appearance that degrades image quality. Because of this noise, ultrasound images are difficult to exploit automatically in clinical practice, so this type of image requires pre-processing: a filtering step is needed to suppress the speckle noise and improve the quality of the images before they are segmented to extract the structures of interest. In this paper, we present the importance of the pre-processing step for segmentation, applied to cardiac ultrasound images. In a first step, a comparative study of speckle filtering methods is presented; we then use a segmentation algorithm to locate and extract the cardiac structures.
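As a minimal illustration of the kind of adaptive speckle filter typically compared in such surveys, a basic Lee filter can be sketched in a few lines. The window size and assumed noise variance below are illustrative parameters, not values from the paper.

```python
import numpy as np

def lee_filter(img: np.ndarray, win: int = 3, noise_var: float = 0.01) -> np.ndarray:
    """Minimal Lee speckle filter: each output pixel blends the local
    mean with the original value, weighted by how much the local
    variance exceeds the assumed speckle noise variance."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            mean, var = patch.mean(), patch.var()
            k = var / (var + noise_var)      # ~0 smooths, ~1 keeps edges
            out[i, j] = mean + k * (img[i, j] - mean)
    return out

noisy = np.random.default_rng(0).normal(0.5, 0.1, (32, 32))
smoothed = lee_filter(noisy)
```

In homogeneous regions the local variance is close to the noise variance, so the filter averages heavily; near edges the variance is large and the original value is preserved, which is why Lee-type filters are popular for ultrasound pre-processing before segmentation.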

Keywords: medical image processing, ultrasound images, speckle noise, image enhancement, speckle filtering, segmentation, snakes

Procedia PDF Downloads 530
416 Visualization Tool for EEG Signal Segmentation

Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh

Abstract:

This work describes a tool for visualization and segmentation of electroencephalograph (EEG) signals based on frequency-domain features. Changes in the frequency-domain characteristics are correlated with changes in the mental state of the subject under study, and the proposed algorithm represents these changes through the powers of the different frequency bands in a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses on a better presentation of the signal, making it a useful visualization tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet-transform-based de-noising method. Frequency-domain features are used for segmentation, exploiting the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are used for segmentation: one provides the time scale and the other applies the segmentation rule. The segmented data are displayed second by second with different color codes, and the segment length can be selected according to the objective. The algorithm has been tested on an EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal as segmented epochs of the desired length, representing the power-spectrum variation in the data. Because the algorithm processes data points frame by frame with respect to the sampling frequency, it can be extended to real-time visualization with the desired epoch length.
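The core of such a segmentation rule, tagging each sliding window by the frequency band carrying the most power, can be sketched as follows. The band edges are the classical EEG bands; the window length, sampling rate, and pure-tone test signal are illustrative assumptions, not details from the paper.

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(window: np.ndarray, fs: float) -> dict:
    """Power in each classical EEG band from the window's periodogram."""
    freqs = np.fft.rfftfreq(len(window), 1 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

def segment_by_dominant_band(signal: np.ndarray, fs: float, win_sec: float = 1.0):
    """Slide a window over the signal and tag each segment with the band
    carrying the most power (the color code of the visualization)."""
    step = int(fs * win_sec)
    return [max(band_powers(signal[s:s + step], fs).items(),
                key=lambda kv: kv[1])[0]
            for s in range(0, len(signal) - step + 1, step)]

fs = 128
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t)          # a pure 10 Hz (alpha) rhythm
labels = segment_by_dominant_band(sig, fs)
```

Each label corresponds to one epoch of the chosen length, which is exactly the per-second color-coded display the tool produces.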

Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation

Procedia PDF Downloads 397
415 Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features

Authors: Kyi Pyar Zaw, Zin Mar Kyu

Abstract:

Character recognition is the process of converting a text image into an editable and searchable text file. Feature extraction is the heart of any character recognition system, and the recognition rate can be low or high depending on the extracted features. In this paper, 25 features per character are used for recognition. Character recognition basically comprises three steps: character segmentation, feature extraction, and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method for character segmentation. In the feature extraction step, features are extracted in two ways. First, 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. Second, the input character is divided into 16 blocks; for each block, the 8 feature values obtained by the same eight-direction chain code frequency method are summed into a single feature, yielding 16 block features. A number-of-holes feature is additionally used to cluster similar characters. With these features, almost all common Myanmar characters in various font sizes can be recognized. All 25 features are used in both the training and the testing phases. In the classification step, characters are classified by matching all features of the input character against the trained character features.
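The whole-character part of the feature set, the 8-bin frequency histogram of Freeman chain codes along the character boundary, can be sketched like this. The boundary is assumed to be given as an ordered point list; the square example is illustrative.

```python
import numpy as np

# Freeman chain code: direction index for each (dy, dx) neighbor offset.
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code_frequency(boundary: list) -> np.ndarray:
    """8-bin normalized frequency histogram of Freeman chain codes along
    an ordered boundary -- the 8 whole-character features of the method."""
    hist = np.zeros(8)
    for (y0, x0), (y1, x1) in zip(boundary, boundary[1:]):
        hist[DIRS[(y1 - y0, x1 - x0)]] += 1
    total = hist.sum()
    return hist / total if total else hist

# Boundary of a small square traced clockwise and closed.
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
feats = chain_code_frequency(square)
```

The 16 block features described above would come from running the same extraction on each of the 16 sub-blocks and summing each block's 8 bins into one value.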

Keywords: chain code frequency, character recognition, feature extraction, features matching, segmentation

Procedia PDF Downloads 320
414 Automatic Moment-Based Texture Segmentation

Authors: Tudor Barbu

Abstract:

An automatic moment-based texture segmentation approach is proposed in this paper. First, we review related work in this computer vision domain. Our texture feature extraction, the first part of the texture recognition process, produces a set of moment-based feature vectors: for each image pixel, a texture feature vector is computed as a sequence of area moments. Second, an automatic pixel classification approach is proposed. The feature vectors are clustered using an unsupervised classification algorithm, with the optimal number of clusters determined by a measure based on validation indexes. The desired texture regions of the image are then easily determined from the resulting pixel classes.
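A per-pixel moment feature vector of the kind described can be sketched by computing normalized central moments over a neighborhood patch. The moment orders and normalization below are a common convention, assumed here for illustration; the paper's exact moment sequence is not specified.

```python
import numpy as np

def moment_features(patch: np.ndarray, max_order: int = 2) -> np.ndarray:
    """Feature vector of scale-normalized central area moments mu_pq of
    an image patch, for all orders 1 <= p + q <= max_order."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    m00 = patch.sum() or 1.0
    yc, xc = (y * patch).sum() / m00, (x * patch).sum() / m00
    feats = []
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            if p + q == 0:
                continue
            mu = ((y - yc) ** p * (x - xc) ** q * patch).sum()
            feats.append(mu / m00 ** (1 + (p + q) / 2))  # scale-normalized
    return np.array(feats)

# On a uniform patch the first-order central moments vanish.
patch = np.ones((5, 5))
v = moment_features(patch)
```

Computing this vector on a window centered at every pixel yields the per-pixel feature set that is subsequently clustered.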

Keywords: image segmentation, moment-based, texture analysis, automatic classification, validation indexes

Procedia PDF Downloads 416
413 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based on Local Color Histograms

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

The color histogram is considered the oldest method used by CBIR systems for indexing images. Global histograms, however, do not capture spatial information, which is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Local histograms employ weak segmentation, while other methods such as CCV (Color Coherence Vector) are based on strong segmentation. Indexing based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and computing the histogram of each block. Measuring the dissimilarity between two images then reduces to computing the distances between the N local histograms of the two images, giving N*N values; generally, the lowest value is used to rank the images, meaning that the lowest value designates which sub-region is used to index the images of the queried collection. In this paper, we examine the local histogram indexing method and compare its results against those given by the global histogram. We also address another noteworthy issue that arises when relying on local histograms: which of the N*N values to trust when comparing images, in other words, on which of the sub-regions to base the image index. The results achieved here suggest that relying on local histograms, which imposes the extra overhead of a segmentation preprocessing step on the system, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than simply taking the local histogram with the lowest distance to the query histograms.
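The local-histogram scheme and the N*N distance matrix it induces can be sketched as follows. For simplicity the sketch uses non-overlapping grayscale blocks and Euclidean distance; the block count and bin count are illustrative assumptions.

```python
import numpy as np

def local_histograms(img: np.ndarray, n: int = 2, bins: int = 8) -> np.ndarray:
    """Split a grayscale image into an n x n grid of blocks and return
    one normalized intensity histogram per block (weak segmentation)."""
    h, w = img.shape
    hists = []
    for i in range(n):
        for j in range(n):
            block = img[i * h // n:(i + 1) * h // n,
                        j * w // n:(j + 1) * w // n]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            hists.append(hist / hist.sum())
    return np.array(hists)

def min_block_distance(hists_a: np.ndarray, hists_b: np.ndarray) -> float:
    """All N*N Euclidean distances between block histograms of two
    images; images are ranked by the lowest of these values."""
    d = np.linalg.norm(hists_a[:, None, :] - hists_b[None, :, :], axis=2)
    return float(d.min())

rng = np.random.default_rng(1)
a = rng.integers(0, 256, (64, 64))
b = rng.integers(0, 256, (64, 64))
score = min_block_distance(local_histograms(a), local_histograms(b))
```

The open question the paper raises is precisely which entry of the N*N matrix `d` to trust; taking the minimum, as above, is only the conventional choice.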

Keywords: CBIR, color global histogram, color local histogram, weak segmentation, Euclidean distance

Procedia PDF Downloads 359
412 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform

Authors: David Jurado, Carlos Ávila

Abstract:

Detection of early signs of breast cancer development is crucial to diagnosing the disease quickly and to defining an adequate treatment that increases the patient's survival probability. Computer-Aided Detection systems (CADs), combined with modern techniques such as machine learning (ML) and neural networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing false positive and false negative rates and becoming important tools in the diagnostic evaluations performed by specialized radiologists. However, ML and NN-based algorithms rely on datasets that can introduce problems into the segmentation task. In the present work, an automatic segmentation and detection algorithm is described. The algorithm uses image processing techniques together with the Hough transform to automatically identify microcalcifications, which are highly correlated with breast cancer development in its early stages. Automatic segmentation of high-contrast objects is performed using edge extraction and the circle Hough transform, providing the geometric features needed to design an automatic mask that extracts statistical features of the regions of interest. The results of this study demonstrate the potential of this tool for further diagnosis and classification of mammographic images, owing to its low sensitivity to noisy images and low-contrast mammograms.
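The circle Hough transform at the heart of the detection step can be sketched for a fixed radius: every edge pixel votes for all candidate centers lying one radius away, and the accumulator peak marks the most likely circular object. The radius, grid size, and synthetic edge map below are illustrative, not values from the paper.

```python
import numpy as np

def hough_circle(edges: np.ndarray, radius: int) -> tuple:
    """Minimal fixed-radius circle Hough transform: each edge pixel
    votes for the centers at distance `radius`; the accumulator peak is
    the most likely circle center (e.g., a microcalcification)."""
    h, w = edges.shape
    acc = np.zeros((h, w))
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # accumulate repeated votes
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic edge map: a circle of radius 10 centered at (32, 32).
edges = np.zeros((64, 64))
angles = np.linspace(0, 2 * np.pi, 200)
edges[np.round(32 + 10 * np.sin(angles)).astype(int),
      np.round(32 + 10 * np.cos(angles)).astype(int)] = 1

center = hough_circle(edges, radius=10)
```

In a full detector one would sweep over a small range of radii and threshold the accumulator, then build the mask around each detected center.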

Keywords: breast cancer, segmentation, X-ray imaging, Hough transform, image analysis

Procedia PDF Downloads 83
411 Calcium Release-Activated Calcium Channels as a Target in Treatment of Allergic Asthma

Authors: Martina Šutovská, Marta Jošková, Ivana Kazimierová, Lenka Pappová, Maroš Adamkov, Soňa Fraňová

Abstract:

Bronchial asthma is characterized by increased bronchoconstrictor responses to provoking agonists, airway inflammation, and remodeling. All these processes involve Ca2+ influx through Ca2+-release-activated Ca2+ channels (CRAC), which are widely expressed in immune cells, respiratory epithelium, and airway smooth muscle (ASM) cells. Our previous study pointed to the possible therapeutic potency of CRAC blockers in an experimental guinea pig asthma model. The present work analyzed the complex anti-asthmatic effect of a long-term administered CRAC blocker, including its impact on allergic inflammation, airway hyperreactivity, remodeling, and mucociliary clearance. Ovalbumin-induced allergic airway inflammation according to Franova et al. was followed by 14 days of administration of the CRAC blocker 3-fluoropyridine-4-carboxylic acid (FPCA) at a dose of 1.5 mg/kg bw. For comparison, salbutamol, budesonide, and saline were applied to control groups. The anti-inflammatory effect of FPCA was estimated from serum and bronchoalveolar lavage fluid (BALF) changes in IL-4, IL-5, IL-13, and TNF-α analyzed by Bio-Plex® assay, as well as by immunohistochemical staining assessing tryptase and c-Fos positivity in pulmonary samples. Airway hyperreactivity was evaluated in vivo by the method of Pennock et al. and in vitro by the organ tissue bath method. Immunohistochemical changes in the ASM actin and collagen III layers, together with mucin secretion, evaluated the anti-remodeling effect of FPCA, and the measurement of ciliary beat frequency (CBF) in vitro using LabVIEW™ software determined the impact on mucociliary clearance. Long-term administration of FPCA to sensitized animals resulted in: (i) a significant decrease in cytokine levels and in tryptase and c-Fos positivity, similar to the effect of budesonide; (ii) a meaningful decrease in basal and bronchoconstrictor-induced airway hyperreactivity in vivo and in vitro, comparable to salbutamol; (iii) significant inhibition of airway remodeling parameters; and (iv) insignificant changes in CBF. These findings confirm the complex anti-asthmatic effect of a CRAC channel blocker and identify these structures as a rational target in the treatment of allergic bronchial asthma.

Keywords: allergic asthma, CRAC channels, cytokines, respiratory epithelium

Procedia PDF Downloads 521
410 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection

Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad

Abstract:

The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for distributing electricity efficiently and for maintaining the grid's safety and quality. However, the increasing integration of intermittent renewable energy sources brings a growing level of uncertainty, which calls for a faster, more responsive approach. A potential solution is electrical segmentation: creating coherence zones in which electrical disturbances mainly remain within the zone. With coherent electrical zones, it becomes possible to focus on a single sub-zone, reducing the range of possibilities and helping to manage uncertainty; this allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various tasks, such as electrical control, minimizing electrical losses, and ensuring voltage stability. Since the electrical grid can be modeled as a graph whose vertices represent electrical buses and whose edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution stems from constant changes in electricity generation and consumption, reflected both in variations of the graph structure and in line flow changes. One approach to creating a resilient segmentation is to design zones that are robust under various circumstances. This problem can be represented by a multiplex graph in which each layer corresponds to a specific situation that may arise on the grid; resilient segmentation is then achieved by performing community detection on this multiplex graph. The multiplex graph is composed of multiple layers that all share the same set of vertices. Our proposal is a model that uses a unified representation to compute a flattening of all layers. This unified situation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation, and the experiments show in which contexts robust electrical segmentation is beneficial.
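The flattening-and-penalizing idea can be sketched on adjacency matrices: average edge presence across layers, drop edges that do not persist in enough layers, and take the connected components of what remains as robust zones. The penalty threshold, the toy 4-bus grid, and the use of plain connected components (rather than the paper's community detection model) are all illustrative assumptions.

```python
import numpy as np

def flatten_layers(layers: list) -> np.ndarray:
    """Flatten a multiplex graph (one adjacency matrix per operating
    situation, same vertex set) into a single weighted adjacency by
    averaging edge presence across layers."""
    return np.mean(np.array(layers, dtype=float), axis=0)

def robust_zones(layers: list, penalty: float) -> list:
    """Drop edges persisting in fewer than `penalty` of the layers,
    then return the connected components of what remains -- a sketch of
    penalizing the flattened graph into K robust components."""
    adj = flatten_layers(layers) >= penalty
    n = adj.shape[0]
    zone = [-1] * n
    k = 0
    for start in range(n):
        if zone[start] != -1:
            continue
        stack = [start]
        while stack:                      # depth-first flood fill
            v = stack.pop()
            if zone[v] != -1:
                continue
            zone[v] = k
            stack.extend(np.nonzero(adj[v])[0])
        k += 1
    return zone

# Two layers over 4 buses: edges (0,1) and (2,3) persist in both layers,
# edge (1,2) appears only in the first.
l1 = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])
l2 = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,1,0]])
zones = robust_zones([l1, l2], penalty=1.0)
```

With a full threshold, only edges present in every situation survive, so the unstable line (1,2) is cut and the two robust zones {0,1} and {2,3} emerge.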

Keywords: community detection, electrical segmentation, multiplex graph, power grid

Procedia PDF Downloads 79
409 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline

Authors: Kenan Morani, Esra Kaya Ayana

Abstract:

This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification stage, with optional slice-removal techniques following the segmentation stage. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which is then used to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, achieving a higher Dice score on a publicly available dataset. At the slice level, the proposed pipeline demonstrated high validation accuracy, indicating its efficiency at predicting 2D slices; at the patient level, the full approach exhibited higher validation accuracy and macro F1 score than the alternatives, surpassing the baseline. The classification component of the pipeline uses a convolutional neural network (CNN) to make the final diagnosis decision. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices rigorously annotated for COVID-19 detection, was used for classification, and the proposed pipeline outperformed many alternatives on this dataset.
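The batch normalization operation inserted into the UNet can be sketched in isolation: normalize activations to zero mean and unit variance, then apply a learnable scale and shift. This is only the textbook operation, shown here for a single-channel feature batch; the paper's exact placement and parameters are not reproduced.

```python
import numpy as np

def batch_norm(x: np.ndarray, gamma: float = 1.0, beta: float = 0.0,
               eps: float = 1e-5) -> np.ndarray:
    """Batch normalization over a single-channel (batch, height, width)
    feature map, as inserted after a convolution: normalize to zero
    mean / unit variance, then scale by gamma and shift by beta."""
    mean = x.mean()
    var = x.var()
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

feats = np.random.default_rng(0).normal(3.0, 2.0, (8, 16, 16))
normed = batch_norm(feats)
```

Stabilizing the activation statistics in this way is the standard reason batch normalization speeds up training and can improve localization in segmentation networks.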

Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation

Procedia PDF Downloads 131
408 Data-Driven Market Segmentation in Hospitality Using Unsupervised Machine Learning

Authors: Rik van Leeuwen, Ger Koole

Abstract:

Within hospitality, marketing departments use segmentation to create tailored strategies that ensure personalized marketing. This study provides a data-driven approach, segmenting guest profiles via hierarchical clustering based on an extensive set of features. The industry requires understandable outcomes that give marketing departments the adaptability to make data-driven decisions and ultimately drive profit. A marketing department specified a business question that guides the unsupervised machine learning algorithm. Because the features of guests change over time, guests may transition from one segment to another. The purpose of the study is to lay out the steps in the process from raw data to actionable insights, serving as a guideline for how hospitality companies can adopt an algorithmic approach.
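The hierarchical clustering step can be sketched as plain bottom-up agglomeration: start with each guest profile as its own cluster and repeatedly merge the closest pair. The single-linkage criterion, toy guest features, and target cluster count below are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def agglomerative(points: np.ndarray, n_clusters: int) -> list:
    """Bottom-up hierarchical clustering with single linkage: start
    with every guest profile as its own cluster and repeatedly merge
    the closest pair until n_clusters remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)    # merge the closest pair
    return clusters

# Toy guest features: two obvious groups of profiles.
guests = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])
segments = agglomerative(guests, n_clusters=2)
```

Because the full merge tree is available at every level, a marketing department can cut it at whatever number of segments is interpretable, which is the understandability argument made above.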

Keywords: hierarchical cluster analysis, hospitality, market segmentation

Procedia PDF Downloads 108