Search results for: online lexical segmentation

3121 Abdominal Organ Segmentation in CT Images Based On Watershed Transform and Mosaic Image

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

Accurate liver, spleen, and kidney segmentation in abdominal CT images is one of the most important steps in computer-aided diagnosis of abdominal organ pathology. In this paper, we propose a new semi-automatic algorithm for extracting the liver, spleen, and kidney regions in abdominal CT images. Our proposed method is based on hierarchical segmentation and the watershed algorithm. In our approach, a powerful technique has been designed to suppress over-segmentation, based on the mosaic image and on the computation of the watershed transform. The algorithm proceeds in two stages. In the first, we seek to improve the quality of the gradient-mosaic image. In this step, we propose a method for improving the gradient-mosaic image by applying an anisotropic diffusion filter followed by morphological filters. Thereafter, we proceed to the hierarchical segmentation of the liver, spleen, and kidneys. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work.
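
A minimal sketch of the gradient-then-watershed idea outlined above, written in Python with scikit-image (the synthetic image, filter choices, and marker rule are illustrative assumptions, not the authors' implementation):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import rank
from skimage.morphology import closing, disk
from skimage.segmentation import watershed

# Synthetic "CT slice": two bright blobs on a dark background stand in for organs.
image = np.zeros((128, 128), dtype=np.uint8)
image[20:60, 20:70] = 180
image[70:110, 60:110] = 120
image = ndi.gaussian_filter(image, sigma=2)

# Smooth, then compute a morphological gradient; closing the gradient suppresses
# small minima that would otherwise cause over-segmentation.
denoised = rank.median(image, disk(3))        # stand-in for anisotropic diffusion
gradient = rank.gradient(denoised, disk(2))   # morphological gradient image
gradient = closing(gradient, disk(2))         # morphological filtering of the gradient

# Markers from low-gradient (homogeneous) regions, then the watershed transform.
markers, _ = ndi.label(gradient < 10)
labels = watershed(gradient, markers)
print("number of regions:", labels.max())
```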

Keywords: anisotropic diffusion filter, CT images, morphological filter, mosaic image, multi-abdominal organ segmentation, watershed algorithm

Procedia PDF Downloads 464
3120 Image Segmentation Techniques: Review

Authors: Lindani Mbatha, Suvendi Rimer, Mpho Gololo

Abstract:

Image segmentation is the process of dividing an image into several sections, such as an object's foreground and the background. It is a critical technique in both image processing and computer vision. Most image segmentation algorithms have been developed for gray-scale images, and comparatively little research has addressed color images. Most image segmentation algorithms or techniques vary with the input data and the application, and few of them are suitable for noisy environments. Much of the existing work uses the Markov Random Field (MRF), which is computationally demanding but is said to be robust to noise. In recent years, image segmentation has been applied to problems such as simplifying image processing, interpreting image content, and facilitating image analysis. This article reviews and summarizes some of the image segmentation techniques and algorithms that have been developed in past years. The techniques include convolutional neural networks (CNNs), edge-based techniques, region growing, clustering, and thresholding, among others. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed. The article also addresses applications and potential future developments in image segmentation. This review concludes that no single technique is perfectly suitable for segmenting all types of images, but that hybrid techniques yield more accurate and efficient results.
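
As a concrete illustration of two of the families reviewed above, the sketch below applies Otsu thresholding and k-means clustering to the same grayscale image (the sample image and parameter choices are assumptions for illustration):

```python
import numpy as np
from skimage import data
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans

image = data.camera()                                 # built-in grayscale sample image

# Thresholding-based segmentation: one global cut on intensity.
otsu_mask = image > threshold_otsu(image)

# Clustering-based segmentation: k-means on pixel intensities.
pixels = image.reshape(-1, 1).astype(np.float32)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels)
kmeans_segments = labels.reshape(image.shape)

print("foreground fraction (Otsu):", otsu_mask.mean())
print("cluster sizes (k-means):", np.bincount(labels))
```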

Keywords: clustering-based, convolution-network, edge-based, region-growing

Procedia PDF Downloads 55
3119 Morphology Operation and Discrete Wavelet Transform for Blood Vessels Segmentation in Retina Fundus

Authors: Rita Magdalena, N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Sofia Saidah, Bima Sakti

Abstract:

Segmentation of blood vessels in the retinal fundus is important in biomedical science for diagnosing ailments related to the eye, and it can simplify the work of medical experts in assessing the state of retinal fundus images. In this study, we designed software in MATLAB that segments the retinal blood vessels in retinal fundus images. There are two main steps in the segmentation process. The first step is image preprocessing, which aims to improve image quality so that it can be optimally segmented. The second step is image segmentation, which extracts the retinal blood vessels from the eye fundus image. The image segmentation methods analyzed in this study are Morphology Operation, Discrete Wavelet Transform, and a combination of both. The dataset used in this project consists of 40 retinal images and 40 corresponding manually segmented images. After several testing scenarios, the average accuracy for the Morphology Operation method is 88.46%, while for the Discrete Wavelet Transform it is 89.28%. By combining the two methods, the average accuracy increased to 89.53%. The result of this study is an image processing system that can segment the blood vessels in the retinal fundus with high accuracy and low computation time.
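
A rough Python analogue of the two enhancement routes described above (the study itself was implemented in MATLAB; the synthetic fundus image, structuring-element sizes, and thresholds are assumptions for illustration):

```python
import numpy as np
import pywt
from skimage.draw import line
from skimage.filters import threshold_otsu
from skimage.morphology import black_tophat, disk

# Synthetic "green-channel fundus": bright background with thin, dark vessel-like lines.
green = np.full((128, 128), 0.8)
for y0, x0, y1, x1 in [(10, 5, 120, 60), (20, 100, 110, 20), (64, 0, 64, 127)]:
    rr, cc = line(y0, x0, y1, x1)
    green[rr, cc] = 0.3

# Morphology route: black top-hat highlights thin dark vessels on a brighter background.
vessels_morph = black_tophat(green, disk(4))
mask_morph = vessels_morph > threshold_otsu(vessels_morph)

# Wavelet route: reconstruct from detail sub-bands only; thin structures dominate them.
coeffs = pywt.wavedec2(green, 'db2', level=2)
coeffs_detail = [np.zeros_like(coeffs[0])] + list(coeffs[1:])
detail = np.abs(pywt.waverec2(coeffs_detail, 'db2'))[:128, :128]
mask_dwt = detail > detail.mean() + 2 * detail.std()

# Simple combination of the two routes (the study reports its best accuracy this way).
combined = mask_morph | mask_dwt
print("vessel pixel fraction:", round(float(combined.mean()), 3))
```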

Keywords: discrete wavelet transform, fundus retina, morphology operation, segmentation, vessel

Procedia PDF Downloads 168
3118 Effect of the Keyword Strategy on Lexical Semantic Acquisition: Recognition, Retention and Comprehension in an English as a Second Language Context

Authors: Fatima Muhammad Shitu

Abstract:

This study investigates the effect of the keyword strategy on lexico-semantic acquisition, recognition, retention, and comprehension in an ESL context. The aim of the study is to determine whether the keyword strategy can be used to enhance acquisition. As quasi-experimental research, the objectives of the study include determining the extent to which the scores obtained by the subjects, who were trained in the use of the keyword strategy for acquisition, differ between the pre-tests and the post-tests, and examining the relationship between the scores obtained at these test levels. The sample for the study consists of 300 undergraduate ESL students at the Federal College of Education, Kano. The seventy-five lexical items for acquisition belong to the lexical field category known as register and comprise the Medical, Agriculture, and Photography registers (MAP), divided evenly into twenty-five (25) lexical items per lexical field. Testing was used to collect the data, while descriptive and inferential statistics were employed for data analysis. Two kinds of tests were administered at each test level: the WARRT (Word Acquisition, Recognition, and Retention Test) and the CCPT (Cloze Comprehension Passage Test). The results of the study revealed significant differences between the scores obtained at the pre-tests and the post-tests, and no correlations between the scores obtained. This implies that the keyword strategy effectively enhanced the acquisition of the lexical items studied.

Keywords: keyword, lexical, semantics, strategy

Procedia PDF Downloads 284
3117 Visualization Tool for EEG Signal Segmentation

Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh

Abstract:

This work describes a tool for visualization and segmentation of electroencephalograph (EEG) signals based on frequency-domain features. Changes in frequency-domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm represents changes in mental state using the powers of different frequency bands, in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on a better presentation of the signal, which makes it a useful visualization tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet-transform-based de-noising method. Frequency-domain features are used for segmentation, since the spectral power of different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation; one provides the time scale and the other applies the segmentation rule. The segmented data are displayed second by second with different color codes, and the segment length can be selected according to the objective. The proposed algorithm has been tested on an EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of desired length representing the variation of the power spectrum in the data. The algorithm is designed to take data points with respect to the sampling frequency for each time frame, so it can be extended to real-time visualization with the desired epoch length.
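
A small sketch of the filtering and band-power steps outlined above (the synthetic signal, filter orders, notch frequency, and window length are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, welch

fs = 256                                                  # sampling frequency in Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # synthetic channel

# Basic filtering: 0.1-45 Hz band-pass (as in the abstract) plus a power-line notch.
b, a = butter(4, [0.1, 45], btype='bandpass', fs=fs)
filtered = filtfilt(b, a, eeg)
bn, an = iirnotch(50, Q=30, fs=fs)
filtered = filtfilt(bn, an, filtered)

# Band power per sliding window; each window becomes one color-coded "segment".
bands = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30)}
win = 2 * fs                                              # 2-second windows
for start in range(0, filtered.size - win + 1, win):
    f, pxx = welch(filtered[start:start + win], fs=fs, nperseg=win)
    powers = {name: pxx[(f >= lo) & (f < hi)].sum() for name, (lo, hi) in bands.items()}
    print(f"t = {start / fs:4.1f}s  dominant band: {max(powers, key=powers.get)}")
```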

Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation

Procedia PDF Downloads 364
3116 Computer-Aided Detection of Liver and Spleen from CT Scans Using Watershed Algorithm

Authors: Belgherbi Aicha, Bessaid Abdelhafid

Abstract:

In recent years, a great deal of research has been devoted to the development of semi-automatic and automatic techniques for the analysis of abdominal CT images. The first and fundamental step in all these studies is semi-automatic liver and spleen segmentation, which is still an open problem. In this paper, a semi-automatic liver and spleen segmentation method based on mathematical morphology and the watershed algorithm is proposed. Our algorithm proceeds in two stages. In the first, we determine the region of interest by applying morphological operations to extract the liver and spleen. The second step consists of improving the quality of the image gradient. In this step, we propose a method for improving the image gradient to reduce the over-segmentation problem by applying spatial filters followed by morphological filters. Thereafter, we proceed to the segmentation of the liver and spleen. The aim of this work is to develop a semi-automatic method for liver and spleen segmentation based on the watershed algorithm, to improve the accuracy and robustness of the segmentation, and to compare the new semi-automatic approach with manual liver segmentation. To validate the proposed segmentation technique, we have tested it on several images, evaluating our approach by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented (liver and spleen) contours and the contours traced manually by radiological experts. Liver segmentation achieved a sensitivity of 96% and a specificity of 99%; spleen segmentation achieved similarly promising results, with a sensitivity of 95% and a specificity of 99%.
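
The sensitivity and specificity figures quoted above can be computed from a segmented mask and an expert mask as sketched below (a generic illustration, not the authors' evaluation code):

```python
import numpy as np

def sensitivity_specificity(pred_mask, gt_mask):
    """Pixel-wise sensitivity (true positive rate) and specificity (true negative rate)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: a predicted organ mask slightly offset from the expert mask.
gt = np.zeros((100, 100), dtype=bool); gt[30:70, 30:70] = True
pred = np.zeros((100, 100), dtype=bool); pred[32:72, 30:70] = True
sens, spec = sensitivity_specificity(pred, gt)
print(f"sensitivity = {sens:.2%}, specificity = {spec:.2%}")
```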

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 285
3115 The Phonology and Phonetics of Second Language Intonation in the Case of “Downstep”

Authors: Tayebeh Norouzi

Abstract:

This study aims to investigate the acquisition process of intonation. It examines the intonation structure of Tokyo Japanese and its realization by Iranian learners of Japanese. Seven Iranian learners of Japanese, differing in fluency, and two Japanese speakers participated in the experiment. Two sentences were used to test the phonological and phonetic characteristics of lexical pitch-accent as well as the intonation patterns produced by the speakers. Both sentences consisted of similar words with the same number of syllables and lexical pitch-accents but different syntactic structure. Speakers were asked to read each sentence three times at normal speed, and the data were analyzed by Praat. The results show that lexical pitch-accent, Accentual Phrase (AP) and AP boundary tone realization vary depending on sentence type. For sentences of type XdeYwo, the lexical pitch-accent is realized properly. However, there is a rise in AP boundary tone regardless of speakers’ level of fluency. In contrast, in sentences of type XnoYwo, the lexical pitch-accent and AP boundary tone vary depending on the speakers’ fluency level. Advanced speakers are better at grouping words into phrases and produce more native-like intonation patterns, though they are not able to realize downstep properly. The non-native speakers tried to realize proper intonation patterns by making changes in lexical accent and boundary tone.

Keywords: intonation, Iranian learners, Japanese prosody, lexical accent, second language acquisition

Procedia PDF Downloads 127
3114 A Neural Approach for Color-Textured Image Segmentation

Authors: Khalid Salhi, El Miloud Jaara, Mohammed Talibi Alaoui

Abstract:

In this paper, we present a neural approach for unsupervised segmentation of natural color-texture images based on both Kohonen maps and mathematical morphology. The approach combines the texture and color information of the image: fractal features based on the fractal dimension are selected to represent texture, and color features are represented in the RGB color space. These features are then used to train a Kohonen network, which approximates the underlying probability density function; the resulting map is segmented by the morphological watershed transformation. The performance of our color-texture segmentation approach is compared first to color-only and texture-only methods, and then to the k-means method.
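
A compact sketch of the feature-extraction and Kohonen-map stages described above, using the third-party MiniSom library; a simple local-variance measure stands in for the paper's fractal-dimension features, and the final watershed on the map is omitted (all assumptions for illustration):

```python
import numpy as np
from scipy.ndimage import generic_filter
from skimage import data
from minisom import MiniSom          # third-party SOM library (pip install minisom)

image = data.astronaut()[::4, ::4].astype(float) / 255.0     # small RGB sample image

# Per-pixel features: R, G, B plus a crude local-variance texture measure
# (the paper uses fractal-dimension features instead).
texture = generic_filter(image.mean(axis=2), np.var, size=5)
features = np.dstack([image, texture[..., None]]).reshape(-1, 4)

# Train a small Kohonen map on the feature vectors; its nodes approximate the
# feature density, and each pixel is assigned to its best-matching node.
som = MiniSom(8, 8, 4, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(features, 5000)
winners = np.array([som.winner(v) for v in features])
labels = winners[:, 0] * 8 + winners[:, 1]
node_map = labels.reshape(image.shape[:2])   # the paper then watersheds the map itself
print("distinct map nodes used:", len(np.unique(labels)))
```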

Keywords: segmentation, color-texture, neural networks, fractal, watershed

Procedia PDF Downloads 312
3113 'Caucasian Mountaineer / Scottish Highlander': Correlation between Semantics and Culture

Authors: Natalia M. Nepomniashchikh

Abstract:

The research focuses on the Russian and English linguoculturemes Caucasian mountaineer and Scottish Highlander, which are examined through comparative-contrastive analysis. To this end, the dictionary definitions of the concepts under consideration were analyzed, which made it possible to build the lexical-semantic fields of both lexical items in Russian and English. This stage of the research prepared the ground for constructing the linguistic-cultural fields. To build these fields, literary passages containing the concepts under consideration and items directly related to them were taken from the works of M. Yu. Lermontov about the Caucasus mountains and the mountaineers living there, and from the works of W. Scott devoted to the Scottish Highlands and their inhabitants. All collected data were systematized in schemes and tables reflecting the differences and the areas of overlap.

Keywords: lexemes, lexical items, lexical-semantic field, linguistic-cultural field, linguoculturemes

Procedia PDF Downloads 207
3112 Defect Detection for Nanofibrous Images with Deep Learning-Based Approaches

Authors: Gaokai Liu

Abstract:

Automatic defect detection for nanomaterial images is widely required in industrial scenarios, and deep learning approaches are considered the most effective solutions for the great majority of image-based tasks. In this paper, an edge guidance network for defect segmentation is proposed. First, an encoder path with multiple convolution and downsampling operations is used to acquire shared features. Then two decoder paths, both connected to the last convolution layer of the encoder, are supervised by the edge and segmentation labels, respectively, to guide the whole training process. Meanwhile, the edge and encoder outputs from the same stage are concatenated with the corresponding part of the segmentation decoder to further refine the segmentation result. Finally, the effectiveness of the proposed method is verified through experiments on open nanofibrous datasets.
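
A minimal PyTorch sketch of the shared-encoder, two-decoder idea described above (layer sizes, names, and losses are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeGuidedSegNet(nn.Module):
    """Shared encoder with two decoder heads: one supervised by edge labels, one by
    segmentation labels; the edge prediction is concatenated into the segmentation
    head to refine its output."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.edge_head = nn.Conv2d(32, 1, 3, padding=1)
        self.seg_head = nn.Conv2d(32 + 1, 1, 3, padding=1)   # +1: concatenated edge map

    def forward(self, x):
        feat = F.interpolate(self.encoder(x), scale_factor=2, mode='bilinear',
                             align_corners=False)
        edge = self.edge_head(feat)
        seg = self.seg_head(torch.cat([feat, edge], dim=1))
        return edge, seg

model = EdgeGuidedSegNet()
x = torch.randn(2, 1, 64, 64)                     # dummy grayscale nanofibrous patches
edge_logits, seg_logits = model(x)

# Each head receives its own supervision (dummy targets here), as in the abstract.
edge_target = torch.randint(0, 2, edge_logits.shape).float()
seg_target = torch.randint(0, 2, seg_logits.shape).float()
loss = (F.binary_cross_entropy_with_logits(edge_logits, edge_target)
        + F.binary_cross_entropy_with_logits(seg_logits, seg_target))
print(edge_logits.shape, seg_logits.shape, float(loss))
```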

Keywords: deep learning, defect detection, image segmentation, nanomaterials

Procedia PDF Downloads 115
3111 Design of a Graphical User Interface for Data Preprocessing and Image Segmentation Process in 2D MRI Images

Authors: Enver Kucukkulahli, Pakize Erdogmus, Kemal Polat

Abstract:

2D image segmentation is a significant process for finding regions of interest in medical images such as MRI, PET, and CT. In this study, we have focused on 2D MRI images and have designed a GUI (graphical user interface) written in MATLAB for 2D MRI images. The program has two interfaces: data pre-processing and image clustering or segmentation. The data pre-processing section offers a median filter, average filter, unsharp mask filter, Wiener filter, and a custom filter (a filter designed by the user in MATLAB). For image clustering, there are seven different segmentation algorithms for 2D MR images: PSO (particle swarm optimization), GA (genetic algorithm), Lloyd's algorithm, k-means, the combination of Lloyd's algorithm and k-means, mean shift clustering, and BBO (Biogeography Based Optimization). To find a suitable cluster number for 2D MRI, we designed a histogram-based cluster estimation method and then supplied the estimated numbers to the segmentation algorithms to cluster an image automatically. We have also selected the best hybrid method for each 2D MR image using this GUI software.
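
A small Python sketch of histogram-based cluster-count estimation followed by k-means, in the spirit of the GUI described above (the original is written in MATLAB; the peak-counting heuristic and synthetic image are assumptions):

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

# Synthetic "MR slice" with three intensity populations plus noise.
rng = np.random.default_rng(0)
image = np.concatenate([rng.normal(m, 8, 4000) for m in (40, 120, 200)]).reshape(100, 120)
image = median_filter(image, size=3)                  # pre-processing step, as in the GUI

# Estimate the cluster count from the smoothed gray-level histogram:
# each sufficiently prominent histogram peak is taken as one cluster.
hist, _ = np.histogram(image, bins=64)
smooth = np.convolve(hist, np.ones(5) / 5, mode='same')
peaks, _ = find_peaks(smooth, prominence=smooth.max() * 0.1)
k = max(len(peaks), 2)
print("estimated number of clusters:", k)

# Cluster the pixel intensities with the estimated k.
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(image.reshape(-1, 1))
segmented = labels.reshape(image.shape)
```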

Keywords: image segmentation, clustering, GUI, 2D MRI

Procedia PDF Downloads 348
3110 Extracting Actions with Improved Part of Speech Tagging for Social Networking Texts

Authors: Yassine Jamoussi, Ameni Youssfi, Henda Ben Ghezala

Abstract:

With the growing interest in social networking, the interaction of social actors has evolved into a source of knowledge on which context-aware reasoning can be performed. Information extraction from social networks, especially Twitter and Facebook, is one of the problems in this area. To extract information from social networking text, we need several lexical features and large-scale word clustering. We attempt to extend an existing tokenizer and to develop our own tagger in order to handle the non-standard word forms currently found on Facebook and Twitter. Our goal in this work is to benefit from the lexical features developed for Twitter and online conversational text in previous works, and to develop an extraction model for constructing a large knowledge base of actions.
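
A toy sketch of tagging normalized social-media text and pulling out verb "actions", using off-the-shelf NLTK components as stand-ins for the extended tokenizer and tagger described above (the normalization rules are illustrative assumptions):

```python
# Requires the NLTK 'punkt' and 'averaged_perceptron_tagger' resources (nltk.download).
import re
from nltk import pos_tag, word_tokenize

def normalize_social_text(text):
    """Tiny normalization pass for social-media spellings before tagging
    (an illustrative stand-in for the extended tokenizer described above)."""
    text = re.sub(r'(.)\1{2,}', r'\1\1', text)        # "soooo" -> "soo"
    text = re.sub(r'https?://\S+', 'URL', text)       # collapse links
    return text

post = "I'm soooo happy, just booked my trip!! check https://example.com"
tags = pos_tag(word_tokenize(normalize_social_text(post)))

# Verbs are candidate "actions" to extract from the post.
actions = [word for word, tag in tags if tag.startswith('VB')]
print(tags)
print("candidate actions:", actions)
```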

Keywords: social networking, information extraction, part-of-speech tagging, natural language processing

Procedia PDF Downloads 275
3109 Meta Mask Correction for Nuclei Segmentation in Histopathological Image

Authors: Jiangbo Shi, Zeyu Gao, Chen Li

Abstract:

Nuclei segmentation is a fundamental task in digital pathology analysis and can be automated by deep-learning-based methods. However, developing such an automated method requires a large amount of data with precisely annotated masks, which is hard to obtain. Training with weakly labeled data is a popular solution for reducing the annotation workload. In this paper, we propose a novel meta-learning-based nuclei segmentation method that follows the label correction paradigm to leverage data with noisy masks. Specifically, we design a fully convolutional meta-model that corrects noisy masks by using a small amount of clean meta-data. The corrected masks are then used to supervise the training of the segmentation model. Meanwhile, a bi-level optimization method is adopted to alternately update the parameters of the main segmentation model and the meta-model. Extensive experimental results on two nuclei segmentation datasets show that our method achieves state-of-the-art results; in some noise scenarios, it even exceeds the performance of training on supervised data.

Keywords: deep learning, histopathological image, meta-learning, nuclei segmentation, weak annotations

Procedia PDF Downloads 114
3108 Segmentation of the Liver and Spleen From Abdominal CT Images Using Watershed Approach

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

Segmentation is an important step in the processing and interpretation of medical images. In this paper, we focus on the segmentation of the liver and spleen from abdominal computed tomography (CT) images. The importance of our study comes from the fact that segmenting regions of interest (ROIs) in CT images is usually a difficult task: their gray levels are similar to those of neighboring organs, and the ROIs are connected to the ribs, heart, kidneys, etc. Our proposed method is based on anatomical information and on mathematical morphology tools used in the image processing field. First, we remove the surrounding and connected organs and tissues by applying morphological filters; this step makes the extraction of the regions of interest easier. The second step consists of improving the quality of the image gradient. In this step, we propose a method for improving the image gradient to reduce over-segmentation by applying spatial filters followed by morphological filters. Thereafter, we proceed to the segmentation of the liver and spleen. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert, and the experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented (liver and spleen) contours and the contours traced manually by radiological experts.

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 463
3107 Contrastive Learning for Unsupervised Object Segmentation in Sequential Images

Authors: Tian Zhang

Abstract:

Unsupervised object segmentation aims at segmenting objects in sequential images and obtaining the mask of each object without any manual intervention. It remains a challenging task due to the lack of prior knowledge about the objects. Previous methods often require manually specifying the action of each object, which is often difficult to obtain. In contrast, this paper does not require action information and automatically learns the actions and relations among objects from the structured environment. To obtain the object segmentation of sequential images, the relationships between objects and images are extracted to infer the actions and interactions of objects based on the multi-head attention mechanism. Three types of object relationships are proposed for the segmentation task: the relationship between objects in the same frame, the relationship between objects in two frames, and the relationship between objects and historical information. Based on these relationships, the proposed model (1) is effective in multiple-object segmentation tasks, (2) requires only images as input, and (3) produces better segmentation results as more relationships are considered. Experimental results on multiple datasets show that the method achieves state-of-the-art performance. Quantitative and qualitative analyses of the results are conducted, and the proposed method can easily be extended to other similar applications.

Keywords: unsupervised object segmentation, attention mechanism, contrastive learning, structured environment

Procedia PDF Downloads 84
3106 The Influence of Noise on Aerial Image Semantic Segmentation

Authors: Pengchao Wei, Xiangzhong Fang

Abstract:

Noise is ubiquitous in this world, and denoising is an essential technology, especially in image semantic segmentation, where noise is generally categorized into two main types: feature noise and label noise. The main focus of this paper is on modeling label noise and investigating the behavior of different types of label noise on image semantic segmentation tasks, using K-Nearest-Neighbor and Convolutional Neural Network classifiers. Performance with and without label noise is evaluated and illustrated in this paper. In addition, the influence of feature noise on the image semantic segmentation task is investigated, and a feature noise reduction method is applied to mitigate its influence on the learning procedure.

Keywords: convolutional neural network, denoising, feature noise, image semantic segmentation, k-nearest-neighbor, label noise

Procedia PDF Downloads 187
3105 Marker-Controlled Level-Set for Segmenting Breast Tumor from Thermal Images

Authors: Swathi Gopakumar, Sruthi Krishna, Shivasubramani Krishnamoorthy

Abstract:

Contactless, painless, and radiation-free thermal imaging is one of the preferred screening modalities for the detection of breast cancer. However, the poor signal-to-noise ratio and the need to preserve the edges that separate cancerous from normal cells make the segmentation process difficult and hence unsuitable for computer-aided diagnosis of breast cancer. This paper presents key findings from research appraising two promising techniques for the detection of breast cancer: (I) marker-controlled level-set segmentation of an anisotropic-diffusion-filtered image versus (II) marker-controlled level-set segmentation of a Gaussian-filtered image. Gaussian filtering processes the image uniformly, whereas anisotropic filtering acts only on specific areas of a thermographic image. The pre-processed (Gaussian-filtered and anisotropic-filtered) images of breast samples were then segmented. Segmentation of the breast starts with an initial level-set function; in this study, a marker refers to the position in the image at which the initial level-set function is applied. The markers are generally placed on the left and right sides of the breast and may vary with breast size. The proposed method was applied to images from an online database with samples collected from women of varying breast characteristics. It was observed that the breast could be segmented from the background by adjusting the markers. The results showed that, as a pre-processing technique, anisotropic filtering with level-set segmentation preserved the edges more effectively than Gaussian filtering, and the image segmented after anisotropic filtering was found to be more suitable for feature extraction, enabling automated computer-aided diagnosis of breast cancer.
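
A brief sketch of the filter-then-level-set comparison described above, using scikit-image's morphological geodesic active contour as the level-set step; total-variation denoising stands in for anisotropic diffusion, and the sample image and marker placement are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import data
from skimage.restoration import denoise_tv_chambolle
from skimage.segmentation import (disk_level_set, inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

image = data.camera()[::2, ::2].astype(float) / 255.0     # stand-in for a thermogram

# Two pre-processing routes: uniform Gaussian smoothing versus an edge-preserving
# filter (total-variation denoising stands in for anisotropic diffusion here).
gaussian_img = gaussian_filter(image, sigma=2)
edge_preserving_img = denoise_tv_chambolle(image, weight=0.1)

# "Marker": an initial circular level set placed roughly over the region of interest.
init = disk_level_set(image.shape, center=(128, 128), radius=80)

for name, img in [("gaussian", gaussian_img), ("edge-preserving", edge_preserving_img)]:
    gimage = inverse_gaussian_gradient(img)               # edge-stopping speed image
    seg = morphological_geodesic_active_contour(gimage, 100, init_level_set=init,
                                                smoothing=1, balloon=-1)
    print(name, "segmented pixels:", int(seg.sum()))
```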

Keywords: anisotropic diffusion, breast, Gaussian, level-set, thermograms

Procedia PDF Downloads 353
3104 Maximum Entropy Based Image Segmentation of Human Skin Lesion

Authors: Sheema Shuja Khattak, Gule Saman, Imran Khan, Abdus Salam

Abstract:

Image segmentation plays an important role in medical imaging applications; therefore, accurate methods are needed for the successful segmentation of medical images for the diagnosis and detection of various diseases. In this paper, we use maximum entropy to achieve image segmentation, with the maximum entropy calculated using the Shannon, Renyi, and Tsallis entropies. The novelty of this work lies in the detection of skin lesions caused by the bite of the sand fly, which causes the disease known as cutaneous leishmaniasis.
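
A compact sketch of maximum-entropy (Shannon) thresholding in the spirit of the method above; switching to Renyi or Tsallis entropy only changes the entropy formula (the sample image and implementation details are assumptions):

```python
import numpy as np
from skimage import data

def max_entropy_threshold(image, bins=256):
    """Kapur-style threshold: choose t maximizing the sum of the Shannon entropies
    of the foreground and background gray-level distributions."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

image = data.camera()                    # stand-in for a skin-lesion photograph
t = max_entropy_threshold(image)
mask = image > t
print("maximum-entropy threshold:", t, " foreground fraction:", round(mask.mean(), 3))
```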

Keywords: Shannon, maximum entropy, Renyi, Tsallis entropy

Procedia PDF Downloads 428
3103 Multi-Atlas Segmentation Based on Dynamic Energy Model: Application to Brain MR Images

Authors: Jie Huo, Jonathan Wu

Abstract:

Segmentation of anatomical structures in medical images is essential for scientific inquiry into the complex relationships between biological structure and clinical diagnosis, treatment, and assessment. As a method that incorporates prior knowledge and the anatomical similarity between a target image and atlases, multi-atlas segmentation has been successfully applied to a variety of medical images, including brain, cardiac, and abdominal images. The basic idea of multi-atlas segmentation is to transfer the labels in the atlases to the coordinates of the target image by matching each target patch to atlas patches in its neighborhood. However, this technique is limited by the pairwise registration between the target image and the atlases. In this paper, a novel multi-atlas segmentation approach is proposed by introducing a dynamic energy model. First, the target is mapped to each atlas image by minimizing the dynamic energy function; then the segmentation of the target image is generated by weighted fusion based on the energy. The method is tested on the MICCAI 2012 Multi-Atlas Labeling Challenge dataset, which includes 20 target images and 15 atlas images. The paper also analyzes the influence of different parameters of the dynamic energy model on segmentation accuracy and measures the Dice coefficient obtained with different feature terms in the energy model. The highest mean Dice coefficient obtained with the proposed method is 0.861, which is competitive with recently published methods.
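
The Dice coefficient and energy-weighted label fusion mentioned above can be sketched as follows (generic formulas; the softmax-of-negative-energy weighting is an illustrative assumption, not the paper's dynamic energy model):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def weighted_label_fusion(atlas_labels, energies):
    """Fuse candidate binary label maps: lower energy -> larger weight (softmax of
    the negative energies), then a per-voxel weighted majority vote."""
    weights = np.exp(-np.asarray(energies, dtype=float))
    weights /= weights.sum()
    stacked = np.stack([w * lab for w, lab in zip(weights, atlas_labels)])
    return (stacked.sum(axis=0) > 0.5).astype(np.uint8)

# Toy example: three "registered atlas" label maps with different energies.
rng = np.random.default_rng(0)
truth = np.zeros((32, 32), np.uint8); truth[8:24, 8:24] = 1
atlases = [np.clip(truth + (rng.random(truth.shape) < 0.05), 0, 1) for _ in range(3)]
fused = weighted_label_fusion(atlases, energies=[1.0, 2.0, 0.5])
print("Dice vs. ground truth:", round(dice_coefficient(fused, truth), 3))
```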

Keywords: brain MRI segmentation, dynamic energy model, multi-atlas segmentation, energy minimization

Procedia PDF Downloads 308
3102 An Analysis of Language Borrowing among Algerian University Students Using Online Facebook Conversations

Authors: Messaouda Annab

Abstract:

The rapid development of technology has created an important context in which different languages and structures are used in the same conversations. This paper investigates the practice of language borrowing on a social media platform, namely Facebook, among Algerian Vernacular Arabic (AVA) speaking students. In other words, the study explores how Algerian students incorporate lexical borrowing from English into their online conversations. It examines the relationships between language, culture, and identity in a multilingual group. The main objective is to determine the cultural and linguistic functions that borrowing fulfills in social media and to explain the possible factors underlying English borrowing. The study uses an online research method based on ten Facebook conversations in the form of private messages, collected from Algerian Bachelor's and Master's students recruited from the English department at the University of Oum El-Bouaghi. The analysis of the data revealed that the social media platform provided users with opportunities to shift from one language to another, a practice observed in the students' online conversations. English borrowing was the most prominent language practice alongside Arabic, the mother tongue of the chosen sample. The analysis assumed that participants are skilled in more than one language.

Keywords: borrowing, language performance, linguistic background, social media

Procedia PDF Downloads 128
3101 The Influence of Audio on Perceived Quality of Segmentation

Authors: Silvio Ricardo Rodrigues Sanches, Bianca Cogo Barbosa, Beatriz Regina Brum, Cléber Gimenez Corrêa

Abstract:

To evaluate the quality of a segmentation algorithm, researchers use subjective or objective metrics. Although subjective metrics are more accurate than objective ones, objective metrics do not require user feedback to test an algorithm; subjective experiments are required only during metric development. Subjective experiments typically show users videos (generated from frames with segmentation errors) that simulate the environment of an application domain, and this user feedback is crucial information for metric definition. In the subjective experiments used to develop some state-of-the-art metrics for testing segmentation algorithms, the videos displayed during the experiments did not contain audio. Audio, however, is an essential component of applications such as videoconferencing and augmented reality. If audio influences the user's perception, using only videos without audio in subjective experiments can compromise the efficiency of an objective metric generated from the data of these experiments. This work aims to identify whether audio influences the user's perception of segmentation quality in background substitution applications with audio. The proposed approach used a subjective method based on formal video quality assessment methods. The results showed that audio influences the quality of segmentation perceived by users.

Keywords: background substitution, influence of audio, segmentation evaluation, segmentation quality

Procedia PDF Downloads 91
3100 Lexical Classification of Compounds in Berom: A Semantic Description of N-V Nominal Compounds

Authors: Pam Bitrus Marcus

Abstract:

Compounds in Berom, a Niger-Congo language spoken in parts of central Nigeria, have been understudied, and the semantics of N-V nominal compounds have not been sufficiently delineated. This study describes the lexical classification of compounds in Berom and, specifically, examines the semantics of nominal compounds with N-V constituents. The study relied on a data set of 200 compounds drawn from Bere Naha (a newsletter publication in Berom). Contrary to the view that nominalization defines the lexical class of compounds in languages, the study revealed that verbal and adjectival classes of compounds are also attested in Berom, and that N-V nominal compounds have an agentive or locative interpretation determined not solely by the meaning of the compound's constituents but also by the context of use.

Keywords: Berom, Berom compounds, nominal compound, N-V compounds

Procedia PDF Downloads 40
3099 Computer-Aided Detection of Simultaneous Abdominal Organ CT Images by Iterative Watershed Transform

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

Interpretation of medical images benefits from anatomical and physiological priors that optimize computer-aided diagnosis applications. Segmentation of the liver, spleen, and kidneys is regarded as a major primary step in the computer-aided diagnosis of abdominal organ diseases. In this paper, a semi-automated method for abdominal organ segmentation using mathematical morphology is presented. Our proposed method is based on hierarchical segmentation and the watershed algorithm. In our approach, a powerful technique has been designed to suppress over-segmentation, based on the mosaic image and on the computation of the watershed transform. Our algorithm proceeds in two stages. In the first, we seek to improve the quality of the gradient-mosaic image. In this step, we propose a method for improving the gradient-mosaic image by applying an anisotropic diffusion filter followed by morphological filters. Thereafter, we proceed to the hierarchical segmentation of the liver, spleen, and kidneys. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work.

Keywords: anisotropic diffusion filter, CT images, morphological filter, mosaic image, simultaneous organ segmentation, the watershed algorithm

Procedia PDF Downloads 413
3098 Endocardial Ultrasound Segmentation Using the Level Set Method

Authors: Daoudi Abdelaziz, Mahmoudi Saïd, Chikh Mohamed Amine

Abstract:

This paper presents a fully automatic method for segmenting the left ventricle at end-systole (ES) and end-diastole (ED) in ultrasound images by means of an implicit deformable model (level set) based on the geodesic active contour model. A pre-processing Gaussian smoothing stage, which is essential for a good segmentation, is applied to the image. Before the segmentation phase, we automatically locate the area of the left ventricle using a detection approach based on the Hough transform; the result is then used to automate the initialization of the level set model. This initial curve (zero level set) deforms to find the endocardial border in the image. Quantitative evaluation was performed on a data set of 15 subjects, with a comparison to ground truth (manual segmentation).

Keywords: level set method, Hough transform, Gaussian smoothing, left ventricle, ultrasound images

Procedia PDF Downloads 436
3097 Heterogeneous Dimensional Super Resolution of 3D CT Scans Using Transformers

Authors: Helen Zhang

Abstract:

Accurate segmentation of the airways from CT scans is crucial for early diagnosis of lung cancer. However, existing airway segmentation algorithms often rely on thin-slice CT scans, which can be inconvenient and costly. This paper presents a set of machine-learning-based 3D super-resolution algorithms along heterogeneous dimensions that improve the resolution of thicker CT scans and thereby reduce the reliance on thin-slice scans. To evaluate the efficacy of the super-resolution algorithms, quantitative assessments using PSNR (Peak Signal to Noise Ratio) and SSIM (Structural SIMilarity index) were performed. The impact of super-resolution on airway segmentation accuracy is also studied. The proposed approach has the potential to make airway segmentation more accessible and affordable, thereby facilitating early diagnosis and treatment of lung cancer.
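
The PSNR and SSIM assessments mentioned above can be computed with scikit-image as sketched below (the synthetic volumes and the naive interpolation baseline are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import zoom
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Synthetic "thin-slice" volume and a degraded version (thick slices interpolated back),
# standing in for the ground truth and the output of a super-resolution model.
rng = np.random.default_rng(0)
thin = rng.random((64, 128, 128)).astype(np.float32)
thick = thin[::4]                                    # simulate 4x thicker slices
restored = zoom(thick, (4, 1, 1), order=1)           # naive upsampling along the slice axis

psnr = peak_signal_noise_ratio(thin, restored, data_range=1.0)
ssim = structural_similarity(thin, restored, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```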

Keywords: 3D super-resolution, airway segmentation, thin-slice CT scans, machine learning

Procedia PDF Downloads 80
3096 An Overview of Posterior Fossa Associated Pathologies and Segmentation

Authors: Samuel J. Ahmad, Michael Zhu, Andrew J. Kobets

Abstract:

Segmentation tools continue to advance, evolving from manual methods to automated contouring technologies that utilize convolutional neural networks. These techniques have been used to evaluate ventricular and hemorrhagic volumes in the past, but they may be applied in novel ways to assess posterior fossa-associated pathologies such as Chiari malformations. Herein, we summarize the literature pertaining to segmentation in the context of this and other posterior fossa-based diseases, such as trigeminal neuralgia, hemifacial spasm, and posterior fossa syndrome. A literature search for volumetric analysis of the posterior fossa identified 27 papers in which semi-automated segmentation, automated segmentation, manual segmentation, linear measurement-based formulas, and the Cavalieri estimator were utilized. These studies produced superior data compared with older methods that used formulas for rough volumetric estimation. The most commonly used technique was semi-automated segmentation (12 studies), followed by manual segmentation (7 studies), automated segmentation (4 studies), and the Cavalieri estimator (3 studies), a point-counting method that uses a grid of points to estimate the volume of a region. The least commonly utilized technique was linear measurement-based formulas (1 study). Semi-automated segmentation produced accurate, reproducible results. However, it is apparent that no single semi-automated software package, open source or otherwise, has been widely applied to the posterior fossa. Fully automated segmentation using open source software such as FSL and FreeSurfer produced highly accurate posterior fossa segmentations. Various forms of segmentation have been used to assess posterior fossa pathologies, and each has its advantages and disadvantages. According to our results, semi-automated segmentation is the predominant method, but atlas-based automated segmentation is an extremely promising method that produces accurate results. Future evolution of segmentation technologies will undoubtedly yield superior results that may be applied to posterior fossa-related pathologies, and medical professionals will save time and effort analyzing large data sets thanks to these advances.
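
A small sketch of the Cavalieri point-counting estimator mentioned above (the grid spacing, the one-pixel-per-millimetre assumption, and the toy masks are illustrative):

```python
import numpy as np

def cavalieri_volume(slice_masks, slice_thickness_mm, grid_spacing_mm):
    """Cavalieri point-counting estimate: overlay a regular grid of points on each
    slice, count the points falling inside the structure, and multiply by the area
    represented by one point and by the slice thickness (assumes 1 pixel = 1 mm)."""
    point_area = grid_spacing_mm ** 2
    total_points = sum(int(mask[::grid_spacing_mm, ::grid_spacing_mm].sum())
                       for mask in slice_masks)
    return total_points * point_area * slice_thickness_mm

# Toy stack of 10 slices, each containing a 40 mm x 40 mm square region.
mask = np.zeros((100, 100), dtype=bool)
mask[30:70, 30:70] = True
volume = cavalieri_volume([mask] * 10, slice_thickness_mm=5, grid_spacing_mm=4)
print("estimated volume:", volume, "mm^3  (true volume:", 40 * 40 * 5 * 10, "mm^3)")
```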

Keywords: Chiari, posterior fossa, segmentation, volumetric

Procedia PDF Downloads 68
3095 Level Set and Morphological Operation Techniques in Application of Dental Image Segmentation

Authors: Abdolvahab Ehsani Rad, Mohd Shafry Mohd Rahim, Alireza Norouzi

Abstract:

Medical image analysis is one of the major applications of computer image processing. Several processes are involved in analyzing medical images, of which segmentation is one of the most challenging and important steps. In this paper, a segmentation method is proposed for dental radiograph images. A thresholding method is applied to simplify the images, and a morphological opening of the binary image is performed to eliminate unnecessary regions. Furthermore, horizontal and vertical integral projection techniques are used to extract each individual tooth from the radiograph images. Segmentation is then performed by applying the level set method to each extracted image. Experimental results with 90% accuracy demonstrate that the proposed method achieves high accuracy and promising results.
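
A short sketch of the vertical integral projection step described above, used here to locate gaps between neighboring teeth (the toy radiograph and the gap heuristic are illustrative assumptions):

```python
import numpy as np
from skimage.filters import threshold_otsu

def tooth_gap_positions(radiograph, min_run=3):
    """Vertical integral projection of a binarized radiograph; sustained valleys in
    the projection mark gaps between neighboring teeth."""
    binary = radiograph > threshold_otsu(radiograph)
    projection = binary.sum(axis=0)                    # one value per image column
    is_gap = projection < 0.2 * projection.max()       # low-mass columns = gaps
    cuts, run = [], 0
    for x, gap in enumerate(is_gap):                   # collapse gap runs to one cut each
        run = run + 1 if gap else 0
        if run == min_run:
            cuts.append(x)
    return cuts

# Toy "radiograph": three bright tooth-like blocks separated by dark gaps.
img = np.zeros((80, 120))
for x0 in (5, 45, 85):
    img[20:70, x0:x0 + 30] = 1.0
print("cut positions between teeth:", tooth_gap_positions(img))
```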

Keywords: integral projection, level set method, morphological operation, segmentation

Procedia PDF Downloads 284
3094 A Stylistic Analysis of the Short Story ‘The Escape’ by Qaisra Shahraz

Authors: Huma Javed

Abstract:

Stylistics is a broad field concerned with both literature and linguistics, which adds to its significance. This research analyzes Qaisra Shahraz's short story ‘The Escape’ from a stylistic point of view, focusing on three aspects of the story: the grammatical category, the lexical category, and figures of speech. The research design for this article is both exploratory and descriptive. The analysis of the data shows that the writer has used more nouns than other lexical items in the story, which suggests that the story has a descriptive rather than a narrative style.

Keywords: The Escape, stylistics, grammatical category, lexical category, figure of speech

Procedia PDF Downloads 191
3093 Atwood's Canadianisms and Neologisms: A Cognitive Approach to Literature

Authors: Eleonora Sasso

Abstract:

This paper takes as its starting point the notions of cognitive linguistics and lexical blending, and uses both theoretical concepts to advance a new reading of Margaret Atwood’s latest writings, one which sees them as paramount literary examples of norm and usage in bilingual Canadian lexicography. Atwood’s prose is imbued with Canadianisms and neologisms, lexical blends of zoomorphic forms, a kind of meeting point between two conceptual structures which follow the principles of lexical economy and asyntactic relation. Atwood’s neologisms also attest to the undeniable impact on language exerted by Canada’s aboriginal peoples. This paper aims to track these references and, with the aid of the Eskimo-English dictionary, to examine the linguistic issues they raise – attitudes to contamination and hybridisation, questions of lexical blending in literary examples, and so on. Atwood’s fiction, whose cognitive linguistic strategy employs ‘the virtues of scissors and matches’, always strives to achieve isomorphism between word form and concept.

Keywords: Atwood, Canadianisms, cognitive science, Eskimo/English dictionary

Procedia PDF Downloads 235
3092 Lexical Based Method for Opinion Detection on Tripadvisor Collection

Authors: Faiza Belbachir, Thibault Schienhinski

Abstract:

The massive development of online social networks allows users to post and share their opinions on various topics. Given this huge volume of opinions, it is interesting to extract and interpret this information for different domains, e.g., product and service benchmarking, politics, and recommendation systems. This is why opinion detection is one of the most important research tasks. It consists of differentiating between opinion data and factual data, and its difficulty lies in determining an approach that returns opinionated documents. Generally, two approaches are used for opinion detection: lexicon-based approaches and machine-learning-based approaches. In lexicon-based approaches, a dictionary of sentiment words is used in which words are associated with weights, and the opinion score of a document is derived from the occurrences of words from this dictionary. In machine learning approaches, a classifier is usually trained on a set of annotated documents containing sentiment, using features such as word n-grams, part-of-speech tags, and logical forms. The majority of these works rely on the document text to determine the opinion score but do not take into account whether these texts are really trustworthy. It is therefore interesting to exploit other information to improve opinion detection. In our work, we develop a new way of computing the opinion score by introducing the notion of a trust score. We determine opinionated documents, but also whether these opinions are really trustworthy with respect to the topics. To do so, we use the SentiWordNet lexicon to calculate opinion and trust scores, computing different user features (number of comments, number of useful comments, average useful reviews). We then combine the opinion score and the trust score to obtain a final score. We applied our method to detect trusted opinions in the TripAdvisor collection, and our experimental results show that combining the opinion score with the trust score improves opinion detection.
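
A minimal sketch of a SentiWordNet-based opinion score combined with a trust weight, in the spirit of the approach above (the trust formula, the feature weights, and the combination rule are illustrative assumptions, not the authors' definitions):

```python
# Requires the NLTK 'punkt', 'wordnet', and 'sentiwordnet' resources (nltk.download).
from nltk import word_tokenize
from nltk.corpus import sentiwordnet as swn

def opinion_score(text):
    """Average subjectivity (pos + neg score) of each word's first SentiWordNet sense."""
    scores = []
    for token in word_tokenize(text.lower()):
        senses = list(swn.senti_synsets(token))
        if senses:
            scores.append(senses[0].pos_score() + senses[0].neg_score())
    return sum(scores) / len(scores) if scores else 0.0

def trust_score(n_comments, n_useful, avg_useful_rating):
    """Toy trust weight built from user features (not the paper's definition)."""
    return min(1.0, 0.4 * min(n_comments / 50, 1.0)
                    + 0.4 * (n_useful / max(n_comments, 1))
                    + 0.2 * avg_useful_rating)

review = "The hotel was wonderful and the staff were friendly"
final = opinion_score(review) * trust_score(n_comments=30, n_useful=12, avg_useful_rating=0.8)
print("opinion score:", round(opinion_score(review), 3), " final trusted score:", round(final, 3))
```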

Keywords: Tripadvisor, opinion detection, SentiWordNet, trust score

Procedia PDF Downloads 165