Search results for: word segmentation
1101 Grain Boundary Detection Based on Superpixel Merges
Authors: Gaokai Liu
Abstract:
The distribution of material grain sizes reflects strength, fracture, corrosion, and other properties, and the grain size can be acquired via the grain boundary. In recent years, automatic grain boundary detection has been widely required as a replacement for complex experimental operations. In this paper, an effective solution is applied to acquire the grain boundaries of material images. First, an initial superpixel segmentation result is obtained via a superpixel approach. Then, a region merging method is employed to merge adjacent regions based on certain similarity criteria. The experimental results show that the merging strategy improves the superpixel segmentation result on material datasets.
Keywords: grain boundary detection, image segmentation, material images, region merging
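The abstract does not name the superpixel method or the similarity criteria, so the sketch below illustrates the general two-stage idea with assumed choices: SLIC superpixels, a mean-color region adjacency graph, and a fixed merge threshold, using scikit-image.

```python
# A minimal sketch of superpixel segmentation followed by region merging.
# Assumes scikit-image >= 0.20 (older releases expose skimage.future.graph);
# the file name and all parameter values are placeholders.
from skimage import io, segmentation, graph

img = io.imread("micrograph.png")                      # hypothetical material image
labels = segmentation.slic(img, n_segments=400,        # initial superpixels
                           compactness=10, start_label=1)
rag = graph.rag_mean_color(img, labels)                # adjacency graph over regions
merged = graph.cut_threshold(labels, rag, thresh=25)   # merge similar neighbours
overlay = segmentation.mark_boundaries(img, merged)    # grain boundaries overlaid
```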
Procedia PDF Downloads 169
1100 Transcription Skills and Written Composition in Chinese
Authors: Pui-sze Yeung, Connie Suk-han Ho, David Wai-ock Chan, Kevin Kien-hoa Chung
Abstract:
Background: Recent findings have shown that transcription skills play a unique and significant role in Chinese word reading, spelling (i.e., word dictation), and written composition development. The interrelationships among the component skills of transcription, word reading, word spelling, and written composition in Chinese have rarely been examined in the literature. Is the contribution of the component skills of transcription to Chinese written composition mediated by word-level skills (i.e., word reading and spelling)? Methods: The participants in the study were 249 Chinese children in Grade 1, Grade 3, and Grade 5 in Hong Kong. They were administered measures of general reasoning ability, orthographic knowledge, stroke sequence knowledge, word spelling, handwriting fluency, word reading, and Chinese narrative writing. Orthographic knowledge: Orthographic knowledge was assessed by a task modeled after the lexical decision subtest of the Hong Kong Test of Specific Learning Difficulties in Reading and Writing (HKT-SpLD). Stroke sequence knowledge: The participants’ performance in producing legitimate stroke sequences was measured by a stroke sequence knowledge task. Handwriting fluency: Handwriting fluency was assessed by a task modeled after the Chinese Handwriting Speed Test. Word spelling: The stimuli of the word spelling task consisted of fourteen two-character Chinese words. Word reading: The stimuli of the word reading task consisted of 120 two-character Chinese words. Written composition: A narrative writing task was used to assess the participants’ text writing skills. Results: Analysis of covariance results showed significant between-grade differences in the performance of word reading, word spelling, handwriting fluency, and written composition. Preliminary hierarchical multiple regression results showed that orthographic knowledge, word spelling, and handwriting fluency were unique predictors of Chinese written composition even after controlling for age, IQ, and word reading. The interaction effects between grade and each of these three skills (orthographic knowledge, word spelling, and handwriting fluency) were not significant. Path analysis results showed that orthographic knowledge contributed to written composition both directly and indirectly through word spelling, while handwriting fluency contributed to written composition directly and indirectly through both word reading and spelling. Stroke sequence knowledge contributed to written composition only indirectly, through word spelling. Conclusions: The preliminary hierarchical regression results were consistent with previous findings about the significant role of transcription skills in the development of Chinese word reading, spelling, and written composition. The fact that orthographic knowledge contributed both directly and indirectly to written composition through word reading and spelling may reflect the impact of the script-sound-meaning convergence of Chinese characters on the composing process. The significant contribution of word spelling and handwriting fluency to Chinese written composition across elementary grades highlights the difficulty of attaining automaticity of transcription skills in Chinese, which limits the working memory resources available for other composing processes.
Keywords: orthographic knowledge, transcription skills, word reading, writing
Procedia PDF Downloads 424
1099 The Greek Root Word ‘Kos’ and the Trade of Ancient Greek with Tamil Nadu, India
Authors: D. Pugazhendhi
Abstract:
The ancient Greeks were forerunners in many fields compared with other societies, and they were well connected through trade routes with all the countries that were well developed at that time. In this connection, the trading of goods between ancient Greece and Tamil Nadu, presently in India, played an important role even though the two are geographically far apart. In that way, the words and the goods related to kos and kare were exchanged between these two societies. It is therefore necessary to compare the phonology and the morphological occurrences of these words, which are found in both the ancient Greek and Tamil literatures of the contemporary period. The results show that there were many words derived from the root kos, with the basic meaning of ‘arrange’, in the ancient Greek language, but this is not the case for the word kare. In ancient Tamil literature, the word ‘kos’ does not have any root and occurred only rarely, while the opposite holds for the word ‘kare’. One of the meanings of the words derived from the root ‘kos’ in ancient Greek literature is related to costly ornaments, and this meaning seems to have a close resemblance to the usage of the word ‘kos’ in ancient Tamil literature. Also, the meaning of the word ‘kare’ in ancient Tamil literature is related to spices, whereas in ancient Greek literature its meaning is related to the cooking of meat using spices. Hence, the similarity seen in the meanings of the words ‘kos’ and ‘kare’ in both languages provides a lead for further study. More than that, the ancient literary resources available in both languages attest to the export and import of gold and spices between the ancient Greek land and the Tamil land.
Keywords: arrange, kare, Kos, ornament, Tamil
Procedia PDF Downloads 150
1098 Combining an Optimized Closed Principal Curve-Based Method and Evolutionary Neural Network for Ultrasound Prostate Segmentation
Authors: Tao Peng, Jing Zhao, Yanqing Xu, Jing Cai
Abstract:
Due to missing or ambiguous boundaries between the prostate and neighboring structures, the presence of shadow artifacts, and the large variability in prostate shapes, ultrasound prostate segmentation is challenging. To handle these issues, this paper develops a hybrid method for ultrasound prostate segmentation by combining an optimized closed principal curve-based method with an evolutionary neural network; the former can fit curves with high curvature and generate a contour composed of line segments connected by sorted vertices, and the latter is used to express an appropriate map function (represented by the parameters of the evolutionary neural network) for generating a smooth prostate contour that matches the ground truth contour. Both qualitative and quantitative experimental results showed that our proposed method achieves accurate and robust performance.
Keywords: ultrasound prostate segmentation, optimized closed polygonal segment method, evolutionary neural network, smooth mathematical model, principal curve
Procedia PDF Downloads 200
1097 A Comparative Study on the Positive and Negative Electronic Word-of-Mouth on the SERVQUAL Scale: Taking a Certain Armed Forces General Hospital in Taiwan as an Example
Authors: Po-Chun Lee, Li-Lin Liang, Ching-Yuan Huang
Abstract:
Purpose: Research on electronic word-of-mouth (eWOM) and online reviews has been widely used in service industry management research in recent years. The SERVQUAL scale is the most commonly used method to measure service quality. The purpose of this research is therefore to combine eWOM and online reviews with the SERVQUAL scale in a comparative study of the positive and negative electronic word-of-mouth reviews of an armed forces general hospital in Taiwan. Data sources: This research obtained online word-of-mouth review data on Google Maps for a military hospital in Taiwan over the past ten years through Internet data mining. Research methods: This study uses semantic content analysis to classify word-of-mouth reviews according to the revised PZB SERVQUAL scale and then carries out statistical analysis. Results of data synthesis: The results of this study disclosed that the negative reviews of this military hospital in Taiwan have been increasing year by year, and under the COVID-19 epidemic, positive word-of-mouth has shown a downward trend. Among the five SERVQUAL determinants of PZB, positive word-of-mouth reviews performed best on “Assurance,” with a positive review rate of 58.89%, followed by “Responsiveness” at 43.33%. In negative word-of-mouth reviews, “Assurance” performed worst, accounting for 70.99% of negative reviews, followed by “Responsiveness” at 29.01%. Conclusions: The important conclusions of this study are that the total number of electronic word-of-mouth reviews of the military hospital has shown positive growth in recent years, while positive word-of-mouth has declined since the COVID-19 epidemic and negative word-of-mouth has grown substantially. Across both positive and negative comments, what patients care most about is “Assurance” regarding the professional attitude and skills of the medical staff, which most urgently needs to be strengthened. In addition, good “Reliability” helps build positive word-of-mouth, whereas poor “Responsiveness” can easily lead to the spread of negative word-of-mouth. This study suggests that the hospital should focus its service quality management and audits on these dimensions.
Keywords: quality of medical service, electronic word-of-mouth, armed forces general hospital
Procedia PDF Downloads 177
1096 New Segmentation of Piecewise Moving-Average Model by Using Reversible Jump MCMC Algorithm
Authors: Suparman
Abstract:
This paper addresses the problem of signal segmentation within a Bayesian framework by using the reversible jump MCMC algorithm. The signal is modelled by a piecewise constant Moving-Average (MA) model where the number of segments, the positions of the change-points, and the order and coefficients of the MA model for each segment are unknown. The reversible jump MCMC algorithm is then used to generate samples distributed according to the joint posterior distribution of the unknown parameters. These samples allow the calculation of some interesting features of the posterior distribution. The performance of the methodology is illustrated via several simulation results.
Keywords: piecewise, moving-average model, reversible jump MCMC, signal segmentation
Procedia PDF Downloads 227
1095 The Storm in Us All: An Etymological Study of Tempest
Authors: David N. Prihoda
Abstract:
This paper charts the history of the English word tempest from its origins in Proto-Indo-European to its modern usage as a term for storms, both literal and metaphorical. It does so by considering the word’s morphology, semiotics, and phonetics, and it references numerous language studies and dictionaries to chronicle the word’s many steps along that path, from demarcation of measurement to assessment of time, all the way to an observation about the weather or the human psyche. The conclusive findings show that tempest has undergone numerous changes throughout its history, and these changes interestingly parallel its connotations as a symbol of both chaotic weather and the chaos of the human spirit.
Keywords: Tempest, etymology, language origins, English
Procedia PDF Downloads 114
1094 Bridge Members Segmentation Algorithm of Terrestrial Laser Scanner Point Clouds Using Fuzzy Clustering Method
Authors: Donghwan Lee, Gichun Cha, Jooyoung Park, Junkyeong Kim, Seunghee Park
Abstract:
3D shape models of existing structures are required for many purposes, such as safety and operations management. Traditional 3D modeling methods are based on manual or semi-automatic reconstruction from close-range images, which entails great expense and is time-consuming. The Terrestrial Laser Scanner (TLS) is a common survey technique for measuring a 3D shape model quickly and accurately, and it is used on construction sites and in cultural heritage management. However, there are many limits to processing a TLS point cloud, because the raw point cloud is a massive volume of data, so the capability of carrying out useful analyses on the unstructured 3D points is also limited. Thus, segmentation becomes an essential step whenever grouping points with common attributes is required. In this paper, a member segmentation algorithm is presented to separate a raw point cloud that includes only 3D coordinates. The paper takes a clustering approach based on a fuzzy method: the Fuzzy C-Means (FCM) algorithm is reviewed and used in combination with a similarity-driven cluster merging method. It is applied to a point cloud acquired with a Leica Scan Station C10/C5 at the test bed, a bridge connecting the 1st and 2nd engineering buildings at Sungkyunkwan University in Korea. The bridge is about 32 m long and 2 m wide and is used by pedestrians between the two buildings. The 3D point cloud of the test bed was constructed from a TLS measurement, and the data were divided by the segmentation algorithm into the individual members. Experimental analyses of the results from the proposed unsupervised segmentation process are shown to be promising, and the output can be further processed to manage the configuration of each member.
Keywords: fuzzy c-means (FCM), point cloud, segmentation, terrestrial laser scanner (TLS)
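As a concrete illustration of the clustering step, the sketch below runs a plain-NumPy Fuzzy C-Means on raw xyz coordinates; the file name, cluster count, and fuzzifier are assumptions, and the paper's similarity-driven cluster merging step is not reproduced here.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain-NumPy FCM; X is (n_points, n_dims), e.g. TLS xyz coordinates."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                 # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))         # standard FCM membership update
        U_new /= U_new.sum(axis=0)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

points = np.loadtxt("bridge_scan.xyz")                 # hypothetical raw TLS point cloud
centers, U = fuzzy_c_means(points, c=5)                # five candidate bridge members
labels = U.argmax(axis=0)                              # hard label per point
```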
Procedia PDF Downloads 234
1093 On Musical Information Geometry with Applications to Sonified Image Analysis
Authors: Shannon Steinmetz, Ellen Gethner
Abstract:
In this paper, a theoretical foundation is developed for patterned segmentation of audio using the geometry of music and statistical manifolds, and we demonstrate image content clustering using conic space sonification. The algorithm takes a geodesic curve as a model estimator of the three-parameter Gamma distribution, with the random variable parameterized by musical centricity and centric velocity. The model parameters predict audio segmentation, in the form of duration and frame count, based on the likelihood of a musical geometry transition. We provide an example using a database of randomly selected images, resulting in statistically significant clusters of similar image content.
Keywords: sonification, musical information geometry, image, content extraction, automated quantification, audio segmentation, pattern recognition
Procedia PDF Downloads 237
1092 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target
Authors: Vishal Raj, Noorhan Abbas
Abstract:
Ambiguity in NLP (Natural Language Processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings, resulting in various kinds of ambiguity, such as lexical, syntactic, semantic, anaphoric, and referential ambiguities. This study focuses mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used lemma and Part of Speech (POS) tokens of words instead: the lemma adds generality, and the POS adds properties of the word to the token. We have designed a novel method to create an affinity matrix that captures the affinity between any pair of lemma_POS tokens (a token where the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms to predict the sense of a target word using the affinity/similarity values are devised. Each contextual token contributes some value to each sense of the target word, and whichever sense receives the highest value becomes the sense of the target word. Contextual tokens thus play a key role in creating the sense clusters and predicting the sense of the target word; hence, the model is named the Contextual SenSe Model (CSM). CSM exhibits noteworthy simplicity and clarity of explanation in contrast to contemporary deep learning models characterized by intricacy, time-intensive processing, and challenging explanation. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that despite the simplicity of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
Keywords: word sense disambiguation (WSD), contextual sense model (CSM), most frequent sense (MFS), part of speech (POS), natural language processing (NLP), OOV (out of vocabulary), lemma_POS (a token where lemma and POS of word are joined by underscore), information retrieval (IR), machine translation (MT)
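The paper does not spell out the affinity computation itself, so the sketch below shows one simple reading of an affinity matrix over lemma_POS tokens, scoring symmetric co-occurrence within a sentence; the toy sentences and the counting scheme are assumptions, not the authors' definition.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical training data: sentences as lists of lemma_POS tokens.
sentences = [["bank_NOUN", "deposit_NOUN", "money_NOUN"],
             ["bank_NOUN", "river_NOUN", "flow_VERB"]]

affinity = defaultdict(float)
for sent in sentences:
    for a, b in combinations(set(sent), 2):            # each co-occurring pair once
        affinity[frozenset((a, b))] += 1.0             # symmetric affinity count

def aff(a, b):
    """Affinity between two lemma_POS tokens (0.0 if never seen together)."""
    return affinity.get(frozenset((a, b)), 0.0)

print(aff("bank_NOUN", "river_NOUN"))                  # 1.0
```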
Procedia PDF Downloads 107
1091 Computable Difference Matrix for Synonyms in the Holy Quran
Authors: Mohamed Ali Al Shaari, Khalid M. El Fitori
Abstract:
In the field of Quran studies known as Ghareeb al-Quran (the study of the meanings of strange words and structures in the Holy Quran), it is difficult to distinguish some pragmatic meanings from conceptual meanings. One who wants to study this subject may need to look for a common usage between any two or more words in order to understand the general meaning, and sometimes may need to look for the differences that distinguish them, even if they are synonyms (word sisters). Some distinguished scholars of Arabic linguistics believe that there are no synonymous words; they believe in varieties of meaning and multi-context usage. Based on this viewpoint, our method was designed to look for the synonyms of a word and then for the differences that distinguish the word from its synonyms. There are many available books that use such a method, e.g., synonym books, dictionaries, glossaries, and some books on the interpretation of the strange vocabulary of the Holy Quran, but it is difficult to look up words in these written works. For that reason, we propose a logical entity, which we call the Differences Matrix (DM). The DM groups the synonymous words in order to extract the relations between them and to determine the general meaning, which defines the skeleton of all the word's synonyms; this meaning is expressed by one word among its sisters. In the Differences Matrix, we used the sisters (words) as titles for the rows and columns, and in the obtained cells we tried to define the row title (word) by using the column title (its sister), so that the relations between sisters appear. The expected result is well-defined groups of sisters for each word. We represented the obtained results formally and used the defined groups as a base for building the ontology of the Holy Quran's synonyms.
Keywords: Quran, synonyms, differences matrix, ontology
Procedia PDF Downloads 419
1090 Semantics of the Word “Nas” in Verse 24 of Surah Al-Baqarah Based on Izutsu’s Semantic Field Theory
Authors: Seyedeh Khadijeh Mirbazel, Masoumeh Arjmandi
Abstract:
Semantics is a linguistic approach and a scientific stream, and like all scientific streams, it is dynamic. The study of meaning is carried out within the broad semantic collections of words that form a discourse; in other words, meaning is not something that can be found in a single word. Rather, the formation of meaning is a process that takes place in the discourse as a whole. One of the contemporary semantic theories is Izutsu's semantic field theory, according to which the discovery of meaning depends on the function of words and takes place within the context of language. The purpose of this research is to identify the meaning of the word "Nas" in the discourse of verse 24 of Surah Al-Baqarah, which introduces "Nas" as the firewood of hell but which translators have rendered as "people". The present research investigated the semantic structure of the word "Nas" using the aforementioned theory through a descriptive-analytical method. By matching the semantic fields of the Quranic word "Nas", this research came to the conclusion that "Nas" refers to those persons who have forgotten God and His covenant in believing in His Oneness. For this reason, God called them "Nas (the forgetful)", the imperfect participle of the noun /næsiwoɔn/ in the single trinity of the Arabic language, which means "to forget". Therefore, the intended meaning of "Nas" in the verses containing the word is not equivalent to "people", which is a general noun.
Keywords: Nas, people, semantics, semantic field theory
Procedia PDF Downloads 189
1089 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation
Authors: Aicha Majda, Abdelhamid El Hassani
Abstract:
Lung CT image segmentation is a prerequisite for lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of the two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric: the boundary penalty term in the graph cut algorithm is defined based on a patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are based on this new term, the graph is created using these weights between its nodes, and the segmentation is completed with the minimum cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
Keywords: graph cuts, lung CT scan, lung parenchyma segmentation, patch-based similarity metric
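A minimal sketch of the core idea, replacing single-pixel boundary weights with patch dissimilarity before running min-cut/max-flow; it assumes the PyMaxflow library, uses a random array as a stand-in for a CT slice, and substitutes a naive intensity threshold for the paper's region term, so all parameter values are illustrative only.

```python
import numpy as np
import maxflow                                         # PyMaxflow, assumed installed

rng = np.random.default_rng(0)
img = rng.random((64, 64)).astype(np.float32)          # stand-in for a lung CT slice
r, sigma = 2, 0.2                                      # patch radius and scale (assumed)
pad = np.pad(img, r, mode="reflect")

def patch(y, x):                                       # (2r+1)x(2r+1) patch at (y, x)
    return pad[y:y + 2 * r + 1, x:x + 2 * r + 1]

g = maxflow.Graph[float]()
nodes = g.add_grid_nodes(img.shape)
for y in range(img.shape[0]):                          # patch-based boundary term
    for x in range(img.shape[1]):
        for dy, dx in ((0, 1), (1, 0)):                # right and down neighbours
            yy, xx = y + dy, x + dx
            if yy < img.shape[0] and xx < img.shape[1]:
                d2 = float(np.sum((patch(y, x) - patch(yy, xx)) ** 2))
                w = float(np.exp(-d2 / (2 * sigma ** 2)))
                g.add_edge(nodes[y, x], nodes[yy, xx], w, w)
fg = (img < 0.5).astype(float)                         # naive stand-in region term
g.add_grid_tedges(nodes, 10 * fg, 10 * (1 - fg))
g.maxflow()                                            # minimum cut / max flow
mask = g.get_grid_segments(nodes)                      # boolean segmentation mask
```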
Procedia PDF Downloads 169
1088 3D Liver Segmentation from CT Images Using a Level Set Method Based on a Shape and Intensity Distribution Prior
Authors: Nuseiba M. Altarawneh, Suhuai Luo, Brian Regan, Guijin Tang
Abstract:
Liver segmentation from medical images poses more challenges than analogous segmentations of other organs. This contribution introduces a method for segmenting the liver from a series of computed tomography images by coupling density matching with shape priors. Density matching signifies a tracking method that operates by maximizing the Bhattacharyya similarity measure between the photometric distribution of an estimated image region and a model photometric distribution; it controls the direction of the evolution process and slows down the evolving contour in regions with weak edges. The shape prior improves the robustness of density matching and discourages the evolving contour from exceeding the liver's boundaries in regions with weak boundaries. The model is implemented using a modified distance regularized level set (DRLS) model. The experimental results show that the method achieves satisfactory results, and a comparison with the original DRLS model makes it evident that the proposed model is more effective in addressing the over-segmentation problem. Finally, we gauge the performance of our model against metrics comprising accuracy, sensitivity, and specificity.
Keywords: Bhattacharyya distance, distance regularized level set (DRLS) model, liver segmentation, level set method
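The Bhattacharyya measure that drives the density matching is easy to state concretely; the sketch below computes it between two intensity histograms, with synthetic intensities standing in for a real region and model (the bin count is an assumption).

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two discrete photometric distributions;
    1.0 means the distributions are identical."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

rng = np.random.default_rng(0)
region = rng.normal(100, 15, 5000)                     # intensities inside the contour
model = rng.normal(105, 15, 5000)                      # model liver intensities
bins = np.linspace(0, 255, 65)                         # 64 bins (assumed)
hp, _ = np.histogram(region, bins=bins)
hq, _ = np.histogram(model, bins=bins)
print(bhattacharyya(hp.astype(float), hq.astype(float)))
```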
Procedia PDF Downloads 313
1087 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
This paper focuses on the application of real-time semantic segmentation technology to complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources and thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed to fully leverage the advantages of different feature layers, and a novel Hybrid Attention Mechanism is introduced that can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks. Experimental results show that the method not only significantly improves segmentation accuracy while maintaining real-time performance but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios. By incorporating the Guided Image Reconstruction Module, the dual-branch structure, and the Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation that is expected to further advance the field.
Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition
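The paper does not specify how the high-resolution image is resampled into a set of low-resolution images; a common way to do this losslessly is a space-to-depth (pixel-unshuffle) rearrangement, sketched below as one possible reading of that step rather than the authors' actual module.

```python
import numpy as np

def space_to_depth(img, s=2):
    """Rearrange one high-resolution image into s*s low-resolution images
    without discarding pixels (a pixel-unshuffle)."""
    h, w, c = img.shape
    x = img[:h - h % s, :w - w % s]                    # crop to a multiple of s
    x = x.reshape(h // s, s, w // s, s, c)
    return x.transpose(1, 3, 0, 2, 4).reshape(s * s, h // s, w // s, c)

frame = np.zeros((512, 1024, 3), dtype=np.float32)     # stand-in road scene
lowres_set = space_to_depth(frame, s=2)
print(lowres_set.shape)                                # (4, 256, 512, 3)
```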
Procedia PDF Downloads 23
1086 Reduction of Speckle Noise in Echocardiographic Images: A Survey
Authors: Fathi Kallel, Saida Khachira, Mohamed Ben Slima, Ahmed Ben Hamida
Abstract:
Speckle noise is a main characteristic of cardiac ultrasound images; it corresponds to a grainy appearance that degrades image quality. For this reason, ultrasound images are difficult to use automatically in clinical practice, and treatments are required for this type of image. A filtering procedure is therefore necessary to eliminate the speckle noise and to improve the quality of the ultrasound images, which are then segmented to extract the necessary structures. In this paper, we present the importance of this pre-treatment step for segmentation, applied to cardiac ultrasound images. In a first step, a comparative study of speckle filtering methods is presented, and we then use a segmentation algorithm to locate and extract the cardiac structures.
Keywords: medical image processing, ultrasound images, speckle noise, image enhancement, speckle filtering, segmentation, snakes
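Among the classic filters such a comparative study typically includes is the Lee filter, an adaptive local-statistics despeckler; the sketch below is a minimal NumPy/SciPy version with a gamma-distributed array standing in for an echocardiographic image, and the window size and noise estimate are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7):
    """Classic Lee filter: blend the local mean and the raw pixel according to
    how much local variance exceeds the estimated noise variance."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)         # local variance
    noise_var = var.mean()                             # simple global noise estimate
    gain = var / (var + noise_var + 1e-12)
    return mean + gain * (img - mean)

echo = np.random.default_rng(0).gamma(2.0, 0.5, (128, 128))  # speckle-like test image
filtered = lee_filter(echo)
```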
Procedia PDF Downloads 530
1085 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work concerns developing a tool for the visualization and segmentation of electroencephalograph (EEG) signals based on frequency-domain features. Changes in the frequency-domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent changes in mental state using the different frequency band powers in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on better presentation of the signal, which makes it a good visualization tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet-transform-based de-noising method. Frequency-domain features are used for segmentation, considering the fact that the spectral power of different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation: one provides the time scale, and the other assigns the segmentation rule. The segmented data are displayed second by second, successively, with different color codes, and the segment length can be selected as needed for the objective. The proposed algorithm has been tested on an EEG dataset obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of desired length representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be improved for real-time visualization with the desired epoch length.
Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
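A minimal sketch of the frequency-band segmentation idea: band powers are computed over a sliding one-second window, and each epoch is labelled (color-coded) by its dominant band. The sampling rate, band edges, and synthetic signal are assumptions, and the tool's filtering, PCA, and wavelet de-noising stages are omitted.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                               # assumed sampling rate (Hz)
eeg = np.random.default_rng(0).standard_normal(60 * fs)  # stand-in 1-channel EEG
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(segment):
    """Integrated power spectral density per frequency band."""
    f, pxx = welch(segment, fs=fs, nperseg=fs)
    return {name: np.trapz(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
            for name, (lo, hi) in bands.items()}

epoch = fs                                             # one-second epochs
labels = []
for start in range(0, len(eeg) - epoch + 1, epoch):    # sliding segmentation window
    p = band_powers(eeg[start:start + epoch])
    labels.append(max(p, key=p.get))                   # dominant band = color code
```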
Procedia PDF Downloads 397
1084 An Investigation of the Effects of Word Length on Amblyopic Eye Movement during Reading
Authors: Yahya Maeni
Abstract:
It is well established that amblyopic patients have reduced reading performance and oculomotor deficits. Word length has a significant impact on reading performance and eye movement behaviour during reading. As there have been no previous attempts to assess whether amblyopic eyes are affected by word length while reading, this study aims to assess the effect of word length on amblyopic eye movement behaviour during reading, including fixation duration, number of fixations, and gaze duration. 21 adults with amblyopia and 21 age-matched controls participated in the study (mean age ± SD: 23.80 ± 4.66 for amblyopes and 24.20 ± 3.58 for controls). Eye movements were recorded during binocular reading using an EyeLink 1000. The study used a 2 (amblyopia vs. control) x 2 (word length: 4 letters vs. 8 letters) design. Compared to controls, the amblyopic participants showed significantly longer fixation durations, higher numbers of fixations, and longer gaze durations for short words, with far greater differences for long words. It can be concluded that eye movement in amblyopia during reading may be affected by the length of words within a text, and this could be a possible explanation for reduced reading performance among amblyopes. Understanding the effect of word length in amblyopia will shed light on reading deficits in amblyopia and help determine the reading needs of amblyopes in educational and clinical settings.
Keywords: amblyopia, eye movement, reading, fixation
Procedia PDF Downloads 150
1083 Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features
Authors: Kyi Pyar Zaw, Zin Mar Kyu
Abstract:
Character recognition is the process of converting a text image file into an editable and searchable text file. Feature extraction is the heart of any character recognition system; the recognition rate may be low or high depending on the extracted features. In the proposed paper, 25 features per character are used. Basically, there are three steps in character recognition: character segmentation, feature extraction, and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method for character segmentation. In the feature extraction step, features are extracted in two ways. The first is that 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. The second is that the input character is divided into 16 blocks; for each block, although 8 feature values are obtained through the eight-direction chain code frequency extraction method, we define the sum of these 8 values as one feature for that block, so 16 features are extracted from the 16 blocks. We use the number-of-holes feature to cluster similar characters. We can recognize almost all common Myanmar characters at various font sizes by using these features. All 25 features are used in both the training part and the testing part. In the classification step, characters are classified by matching all the features of the input character with the already-trained features of the characters.
Keywords: chain code frequency, character recognition, feature extraction, features matching, segmentation
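The first feature set is concrete enough to sketch: given a character's boundary pixels in traversal order, the eight-direction (Freeman) chain code is computed and reduced to a normalized 8-bin direction histogram. The toy boundary below is an assumption, standing in for a segmented character.

```python
import numpy as np

# Freeman 8-direction codes for steps between consecutive boundary pixels.
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code_frequency(boundary):
    """8-bin normalized direction histogram of a closed boundary
    (boundary: list of (row, col) pixels in traversal order)."""
    hist = np.zeros(8)
    for (r1, c1), (r2, c2) in zip(boundary, boundary[1:] + boundary[:1]):
        step = (int(np.sign(r2 - r1)), int(np.sign(c2 - c1)))
        if step in DIRS:
            hist[DIRS[step]] += 1
    return hist / max(hist.sum(), 1)

square = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
print(chain_code_frequency(square))                    # peaks at codes 0, 2, 4, 6
```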
Procedia PDF Downloads 320
1082 Automatic Moment-Based Texture Segmentation
Authors: Tudor Barbu
Abstract:
An automatic moment-based texture segmentation approach is proposed in this paper. First, we describe the related work in this computer vision domain. Our texture feature extraction, the first part of the texture recognition process, produces a set of moment-based feature vectors: for each image pixel, a texture feature vector is computed as a sequence of area moments. Second, an automatic pixel classification approach is proposed. The feature vectors are clustered using an unsupervised classification algorithm, the optimal number of clusters being determined using a measure based on validation indexes. From the resulting pixel classes, one easily determines the desired texture regions of the image.
Keywords: image segmentation, moment-based, texture analysis, automatic classification, validation indexes
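A minimal sketch of the pipeline: a moment-based feature vector is computed per pixel, and the vectors are clustered. Windowed raw intensity moments stand in for the paper's exact area-moment sequence, and a fixed cluster count replaces the validation-index selection, so both are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

img = np.random.default_rng(0).random((128, 128))      # stand-in textured image
w = 9                                                  # moment window size (assumed)

# Per-pixel feature vector of windowed intensity moments of orders 1..4.
feats = np.stack([uniform_filter(img ** k, w) for k in (1, 2, 3, 4)], axis=-1)
feats = feats.reshape(-1, 4)

# Unsupervised pixel classification; the paper picks the cluster count
# automatically with validation indexes, fixed here for brevity.
labels = KMeans(n_clusters=3, n_init=10).fit_predict(feats).reshape(img.shape)
```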
Procedia PDF Downloads 416
1081 BiLex-Kids: A Bilingual Word Database for Children 5-13 Years Old
Authors: Aris R. Terzopoulos, Georgia Z. Niolaki, Lynne G. Duncan, Mark A. J. Wilson, Antonios Kyparissiadis, Jackie Masterson
Abstract:
As word databases for bilingual children are not available, researchers, educators, and textbook writers must rely on monolingual databases. The aim of this study is thus to develop a bilingual word database, BiLex-kids, an online open-access developmental word database for 5-13 year old bilingual children who learn Greek as a second language and have English as their dominant one. BiLex-kids is compiled from 120 Greek textbooks used in Greek-English bilingual education in the UK, USA, and Australia, and provides word translations in the two languages, pronunciations in Greek, and psycholinguistic variables (e.g., Zipf, frequency per million, dispersion, contextual diversity, neighbourhood size). After clearing the textbooks of non-relevant items (e.g., punctuation), algorithms were applied to extract the psycholinguistic indices for all words. As well as one total lexicon, the database produces values for all ages (one lexicon for each age) and for three age bands (one lexicon per age band: 5-8, 9-11, and 12-13 years). BiLex-kids provides researchers with accurate figures for a wide range of psycholinguistic variables, making it a useful and reliable research tool for selecting stimuli to examine lexical processing among bilingual children. In addition, it offers children the opportunity to study word spelling, learn translations, and listen to pronunciations in their second language. It further benefits educators in selecting age-appropriate words for teaching reading and spelling, while special educational needs teachers will have a resource to control the content of word lists when designing interventions for bilinguals with literacy difficulties.
Keywords: bilingual children, psycholinguistics, vocabulary development, word databases
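Of the listed psycholinguistic variables, the Zipf value has a simple closed form, shown below on hypothetical Greek counts (the scale of van Heuven et al., 2014, ignoring the smoothing choices a production database would make).

```python
import math

corpus_tokens = 1_200_000                              # hypothetical corpus size
counts = {"και": 5400, "σπίτι": 310, "θάλασσα": 95}    # hypothetical word counts

def zipf(count, corpus_size):
    """Zipf value: log10 of the word's frequency per billion words."""
    return math.log10((count / corpus_size) * 1e9)

for word, n in counts.items():
    print(word, round(zipf(n, corpus_tokens), 2))
```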
Procedia PDF Downloads 311
1080 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based on Local Color Histograms
Authors: Mawloud Mosbah, Bachir Boucheham
Abstract:
The color histogram is considered the oldest method used by CBIR systems for indexing images. However, global histograms do not include spatial information, which is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods, such as the CCV (Color Coherence Vector), are based on strong segmentation. Indexing based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. The dissimilarity between two images consequently reduces to computing the distances between the N local histograms of the two images, resulting in N*N values; generally, the lowest value is taken into account to rank images, meaning that the lowest value designates which sub-region is used to index the images of the collection being queried. In this paper, we examine the local histogram indexing method in the hope of comparing its results against those given by the global histogram. We also address another noteworthy issue when relying on local histograms, namely which value, among the N*N values, to trust when comparing images; in other words, on which sub-region, among the N*N candidates, to base the indexing of images. Based on the results achieved here, it seems that relying on local histograms, which imposes extra overhead on the system by involving another preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram on which to rely when encoding the image, rather than relying on the local histogram having the lowest distance to the query histograms.
Keywords: CBIR, color global histogram, color local histogram, weak segmentation, Euclidean distance
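A minimal sketch of the local-histogram indexing step described above: each image is split into overlapping blocks, a normalized histogram is computed per block, and all block-to-block Euclidean distances between two images are formed so that the lowest value can rank them. The block count, overlap, and bin count are assumptions.

```python
import numpy as np

def local_histograms(img, n=4, bins=8):
    """Normalized histograms of overlapping blocks (50% overlap) of a grayscale image."""
    h, w = img.shape
    bh, bw = h // n, w // n
    hists = []
    for y in range(0, h - bh + 1, bh // 2):
        for x in range(0, w - bw + 1, bw // 2):
            block = img[y:y + bh, x:x + bw]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            hists.append(hist / hist.sum())
    return np.array(hists)

a = np.random.default_rng(0).integers(0, 256, (128, 128))   # stand-in image
b = np.random.default_rng(1).integers(0, 256, (128, 128))   # stand-in query
ha, hb = local_histograms(a), local_histograms(b)
d = np.linalg.norm(ha[:, None, :] - hb[None, :, :], axis=2)  # all N*N distances
print(d.min())                                               # lowest value ranks images
```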
Procedia PDF Downloads 359
1079 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform
Authors: David Jurado, Carlos Ávila
Abstract:
Detection of early signs of breast cancer development is crucial for quickly diagnosing the disease and defining adequate treatment to increase the patient's survival probability. Computer-Aided Detection systems (CADs), along with modern data techniques such as machine learning (ML) and neural networks (NNs), have shown an overall improvement in digital mammography cancer diagnosis, reducing false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML- and NN-based algorithms rely on datasets that might introduce issues in segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications, which are highly correlated with breast cancer development in the early stages. Along with image processing, automatic segmentation of high-contrast objects is done using edge extraction and the circle Hough transform. This provides the geometrical features needed for an automatic mask design that extracts statistical features of the regions of interest. The results shown in this study prove the potential of this tool for further diagnostics and classification of mammographic images due to its low sensitivity to noisy images and low-contrast mammographies.
Keywords: breast cancer, segmentation, X-ray imaging, hough transform, image analysis
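The circle Hough transform step maps directly onto OpenCV; the sketch below detects small circular high-contrast objects and rasterizes them into a region-of-interest mask. The file name and every parameter value (radius range, accumulator thresholds) are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
img = cv2.medianBlur(img, 5)                             # suppress noise first

# Circle Hough transform; param1 is the internal Canny edge threshold,
# and the radius range targets small microcalcification-like objects.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=8,
                           param1=150, param2=18, minRadius=1, maxRadius=10)
mask = np.zeros_like(img)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(mask, (x, y), r, 255, -1)             # filled ROI per detection
```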
Procedia PDF Downloads 83
1078 Intensifier as Changed from the Impolite Word in Thai
Authors: Methawee Yuttapongtada
Abstract:
An intensifier is a linguistic term and device that is generally found in different languages in order to enhance and add quantity, quality, or emotion to the words of a language. In fact, each language in the world has both similar and dissimilar intensifying devices. More specifically, a wide variety of intensifying devices is used in the Thai language, and one of those is the use of impolite words, or words that are used to mean something negative, as intensifiers. The data collection in this study was done in the spoken language style by collecting intensifiers regarded as impolite words, because these words, as employed in other contexts, are held to be rude words, swear words, or words with negative meanings. Then, a backward study of the past was done in order to consider the historical change. Explanations of the original meanings and the contexts of word use from the past to the present were given by using both textual documents and dictionaries available in different periods. It was found that, regarding the semantic and pragmatic aspects, subjectification is also a significant motivation that changed impolite words into intensifiers; this explains the pathway of the semantic change of these very words. Moreover, it was found that the tendency to use impolite words, or words that used to mean something negative, as intensifiers is increasing. This phenomenon is commonly found in many languages in the world, and the results of this research may support the belief that human language in the world is universal and reflect that humans fundamentally think in the same way.
Keywords: impolite word, intensifier, Thai, semantic change
Procedia PDF Downloads 181
1077 Robust Electrical Segmentation for Zone Coherency Delimitation Base on Multiplex Graph Community Detection
Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad
Abstract:
The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution involves electrical segmentation, which means creating coherence zones where electrical disturbances mainly remain within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on a sub-zone, reducing the range of possibilities and aiding in managing uncertainty; it allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical loss, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design zones that are robust under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid; resilient segmentation can then be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that utilizes a unified representation to compute a flattening of all the layers. This unified situation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation. We saw through the experiments when robust electrical segmentation has a benefit and in which contexts.
Keywords: community detection, electrical segmentation, multiplex graph, power grid
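A toy sketch of the flatten-then-cluster idea: several layers over the same buses are flattened by summing edge weights, and a community detection algorithm proposes coherent zones. It assumes a recent networkx, uses random layers in place of real grid situations, and greedy modularity stands in for the paper's penalized K-connected-component approach.

```python
import networkx as nx
import numpy as np

# Hypothetical multiplex layers: the same buses, edge weights differing per situation.
buses = range(6)
layers = []
for seed in (0, 1, 2):
    g = nx.Graph()
    g.add_nodes_from(buses)
    rng = np.random.default_rng(seed)
    for u in buses:
        for v in buses:
            if u < v and rng.random() < 0.5:
                g.add_edge(u, v, weight=float(rng.random()))
    layers.append(g)

# Flatten the multiplex graph: sum edge weights across all layers.
flat = nx.Graph()
flat.add_nodes_from(buses)
for g in layers:
    for u, v, d in g.edges(data=True):
        w = flat[u][v]["weight"] + d["weight"] if flat.has_edge(u, v) else d["weight"]
        flat.add_edge(u, v, weight=w)

zones = nx.community.greedy_modularity_communities(flat, weight="weight")
print([sorted(z) for z in zones])                      # candidate coherent zones
```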
Procedia PDF Downloads 79
1076 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline
Authors: Kenan Morani, Esra Kaya Ayana
Abstract:
This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with the addition of optional slice-removal techniques following the segmentation part. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which is then utilized to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher Dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score compared to other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make the final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices, rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on the dataset.
Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation
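The described modification (adding batch normalization to the UNet's convolution blocks) has a standard form; the sketch below shows one such block in PyTorch, with channel sizes and input shape chosen only for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

class DoubleConvBN(nn.Module):
    """UNet encoder/decoder block with batch normalization after each
    convolution, in the spirit of the modification described above."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

x = torch.randn(1, 1, 256, 256)                        # one grayscale CT slice
print(DoubleConvBN(1, 64)(x).shape)                    # torch.Size([1, 64, 256, 256])
```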
Procedia PDF Downloads 131
1075 Brand Extension and Customer WOM: Evidence from the Sports Industry
Authors: Jim Shih-Chiao Chin, Yu Ting Yeh, Shui Lien Chen, Yi-Fen Tsai
Abstract:
This study takes the Adidas Company as its object and explores how brand awareness directly or indirectly affects brand affect and word-of-mouth. First, it explores the effect of brand awareness on category fit and image fit and examines the influence of category fit and image fit on extension attitude. The study then investigates the effect of extension attitude on brand affect and word-of-mouth. The relationships of brand awareness to brand affect and word-of-mouth are also explored. The study participants are people who have purchased Adidas extension products. A total of 700 valid questionnaires were collected, and the statistical software AMOS 20.0 was used to examine the research hypotheses using structural equation modeling (SEM). Finally, theoretical implications and research directions are provided for future studies.
Keywords: brand extension, brand awareness, product category fit, brand image fit, brand affect, word-of-mouth (WOM)
Procedia PDF Downloads 332
1074 Genomic Sequence Representation Learning: An Analysis of K-Mer Vector Embedding Dimensionality
Authors: James Jr. Mashiyane, Risuna Nkolele, Stephanie J. Müller, Gciniwe S. Dlamini, Rebone L. Meraba, Darlington S. Mapiye
Abstract:
When performing language tasks in natural language processing (NLP), the dimensionality of word embeddings is chosen either ad hoc or by optimizing the Pairwise Inner Product (PIP) loss. The PIP loss is a metric that measures the dissimilarity between word embeddings, and it is obtained through matrix perturbation theory by utilizing the unitary invariance of word embeddings. In genomics, especially in genome sequence processing, there is no notion of a "word" as in natural language; rather, there are sequence substrings of length k called k-mers. K-mer sizes matter, and they vary depending on the goal of the task at hand. The dimensionality of word embeddings in NLP has been studied using matrix perturbation theory and the PIP loss. In this paper, the sufficiency and reliability of applying word-embedding algorithms to various genomic sequence datasets are investigated to understand the relationship between the k-mer size and the embedding dimension. This is accomplished by studying the scaling capability of three embedding algorithms, namely Latent Semantic Analysis (LSA), Word2Vec, and Global Vectors (GloVe), with respect to the k-mer size. Utilizing the PIP loss as a metric to train embeddings on different datasets, we also show that Word2Vec outperforms LSA and GloVe in accurately computing embeddings as both the k-mer size and the vocabulary increase. Finally, the shortcomings of natural language processing embedding algorithms in performing genomic tasks are discussed.
Keywords: word embeddings, k-mer embedding, dimensionality reduction
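The PIP loss itself is compact: for an embedding matrix E whose rows are k-mer vectors, PIP(E) = E Eᵀ, and the loss between two embeddings is the Frobenius distance between their PIP matrices. The sketch below compares column truncations of a random "oracle" embedding against the full one; in the actual theory the truncation is applied to the spectrum of a signal matrix, so this is a simplified illustration.

```python
import numpy as np

def pip_loss(E1, E2):
    """Frobenius distance between pairwise-inner-product matrices of two
    embedding matrices (rows are k-mer vectors)."""
    return float(np.linalg.norm(E1 @ E1.T - E2 @ E2.T, ord="fro"))

rng = np.random.default_rng(0)
E_full = rng.standard_normal((1000, 300))              # toy "oracle" k-mer embedding
for d in (50, 100, 200, 300):                          # candidate dimensionalities
    print(d, round(pip_loss(E_full[:, :d], E_full), 2))
```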
Procedia PDF Downloads 137
1073 Data-Driven Market Segmentation in Hospitality Using Unsupervised Machine Learning
Authors: Rik van Leeuwen, Ger Koole
Abstract:
Within hospitality, marketing departments use segmentation to create tailored strategies and ensure personalized marketing. This study provides a data-driven approach, segmenting guest profiles via hierarchical clustering based on an extensive set of features. The industry requires understandable outcomes that contribute to adaptability, enabling marketing departments to make data-driven decisions and ultimately drive profit. A marketing department specified a business question that guides the unsupervised machine learning algorithm. Features of guests change over time; therefore, there is a probability that guests transition from one segment to another. The purpose of the study is to provide the steps in the process from raw data to actionable insights, which serve as a guideline for how hospitality companies can adopt an algorithmic approach.
Keywords: hierarchical cluster analysis, hospitality, market segmentation
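A minimal sketch of the clustering step under assumed features: guest profiles are clustered hierarchically (Ward linkage), the tree is cut into a fixed number of segments, and each segment is profiled by its feature means for the marketing department. The feature set, segment count, and random data are all assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# Hypothetical guest features: nights stayed, lead time, spend, channel score.
guests = rng.random((500, 4))

Z = linkage(guests, method="ward")                     # hierarchical clustering
segments = fcluster(Z, t=5, criterion="maxclust")      # cut the tree into 5 segments
for s in np.unique(segments):                          # profile each segment
    print(s, guests[segments == s].mean(axis=0).round(2))
```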
Procedia PDF Downloads 108
1072 Market Segmentation and Conjoint Analysis for Apple Family Design
Authors: Abbas Al-Refaie, Nour Bata
Abstract:
A distributor of Apple products experiences numerous difficulties in developing marketing strategies for new and existing mobile product entries that maximize customer satisfaction and the firm's profitability. This research therefore integrates market segmentation in platform-based product family design with conjoint analysis to identify iSystem combinations that increase customer satisfaction and business profits. First, the enhanced market segmentation grid is created. Then, the estimated demand model is formulated. Finally, the profit models are constructed and used to determine the ideal product family design that maximizes profit. Conjoint analysis is used to explore customer preferences and their satisfaction levels. A total of 200 surveys about customer preferences were collected. Then, simulation is used to determine the importance values for each attribute. Finally, sensitivity analysis is conducted to determine the product family design that maximizes both objectives. In conclusion, the results of this research shall provide great support to Apple distributors in determining the best marketing strategies to enhance their market share.
Keywords: market segmentation, conjoint analysis, market strategies, optimization
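As an illustration of how conjoint analysis yields attribute importances, the sketch below estimates part-worths by regressing ratings on dummy-coded attribute levels and derives relative importances from the part-worth ranges. The attributes, levels, profiles, and ratings are entirely hypothetical and are not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical conjoint design: each profile is (storage, screen, color) level indices.
levels = {"storage": ["64GB", "256GB"], "screen": ["small", "large"],
          "color": ["black", "white"]}
profiles = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]])
ratings = np.array([4.1, 5.6, 6.8, 7.9])               # mean respondent ratings

model = LinearRegression().fit(profiles, ratings)      # dummy coding, 2 levels each
for (attr, lv), w in zip(levels.items(), model.coef_):
    print(f"{attr}: part-worth of {lv[1]} vs {lv[0]} = {w:.2f}")

ranges = np.abs(model.coef_)                           # part-worth range per attribute
print("relative importance:", dict(zip(levels, np.round(ranges / ranges.sum(), 2))))
```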
Procedia PDF Downloads 371