Search results for: image dataset
2953 Hate Speech Detection in Tunisian Dialect
Authors: Helmi Baazaoui, Mounir Zrigui
Abstract:
This study addresses the challenge of hate speech detection in Tunisian Arabic text, a critical issue for online safety and moderation. Leveraging the strengths of the AraBERT model, we fine-tuned and evaluated its performance against the Bi-LSTM model across four distinct datasets: T-HSAB, TNHS, TUNIZI-Dataset, and a newly compiled dataset with diverse labels such as Offensive Language, Racism, and Religious Intolerance. Our experimental results demonstrate that AraBERT significantly outperforms Bi-LSTM in terms of Recall, Precision, F1-Score, and Accuracy across all datasets. The findings underline the robustness of AraBERT in capturing the nuanced features of Tunisian Arabic and its superior capability in classification tasks. This research not only advances the technology for hate speech detection but also provides practical implications for social media moderation and policy-making in Tunisia. Future work will focus on expanding the datasets and exploring more sophisticated architectures to further enhance detection accuracy, thus promoting safer online interactions.
Keywords: hate speech detection, Tunisian Arabic, AraBERT, Bi-LSTM, Gemini annotation tool, social media moderation
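As a rough illustration of the fine-tuning step described above, the sketch below uses Hugging Face Transformers; the AraBERT checkpoint name, label count, toy data, and hyperparameters are assumptions for illustration, not the authors' exact setup.

```python
# Hedged sketch of fine-tuning AraBERT for hate-speech classification.
# Checkpoint, labels, and hyperparameters are illustrative assumptions.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

checkpoint = "aubmindlab/bert-base-arabertv2"  # assumed AraBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

# Hypothetical toy rows standing in for T-HSAB / TNHS / TUNIZI examples.
data = Dataset.from_dict({"text": ["tweet one", "tweet two"], "label": [0, 1]})
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     padding="max_length", max_length=128),
                batched=True)

args = TrainingArguments(output_dir="arabert-hate", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=data).train()
```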
Procedia PDF Downloads 20
2952 Classification of Digital Chest Radiographs Using Image Processing Techniques to Aid in Diagnosis of Pulmonary Tuberculosis
Authors: A. J. S. P. Nileema, S. Kulatunga , S. H. Palihawadana
Abstract:
A computer aided detection (CAD) system was developed for the diagnosis of pulmonary tuberculosis from digital chest X-rays using MATLAB image processing techniques and a statistical approach. The study comprised 200 digital chest radiographs collected from the National Hospital for Respiratory Diseases - Welisara, Sri Lanka. Pre-processing was done to remove identification details. Lung fields were segmented and then divided into four quadrants (right upper, left upper, right lower, and left lower) using image processing techniques in MATLAB. Contrast, correlation, homogeneity, energy, entropy, and maximum probability texture features were extracted using the gray level co-occurrence matrix method. Descriptive statistics and normal distribution analysis were performed using SPSS. Based on the radiologists' interpretation, chest radiographs were classified manually into PTB-positive (PTBP) and PTB-negative (PTBN) classes. Features with a standard normal distribution were analyzed using an independent sample T-test for PTBP and PTBN chest radiographs. Among the six features tested, the contrast, correlation, energy, entropy, and maximum probability features showed a statistically significant difference between the two classes at the 95% confidence level and therefore could be used in the classification of chest radiographs for PTB diagnosis. With the resulting value ranges of the five normally distributed texture features, a classification algorithm was then defined to recognize and classify the quadrant images: if the texture feature values of the quadrant image being tested fall within the defined region, it is identified as a PTBP (abnormal) quadrant and labeled 'Abnormal' in red with its border highlighted in red; if the values fall outside the defined range, it is identified as PTBN (normal) and labeled 'Normal' in blue, with no changes to the image outline. The developed classification algorithm showed a high sensitivity of 92% and a modest specificity of 70%, which makes it an efficient CAD system.
Keywords: chest radiographs, computer aided detection, image processing, pulmonary tuberculosis
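The GLCM texture features named above can be computed as in the following sketch with scikit-image; the distance and angle settings are illustrative assumptions, and the quadrant images are presumed to be 8-bit.

```python
# Minimal sketch of the six GLCM texture features from the abstract.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def quadrant_features(quadrant_u8):
    """Compute GLCM texture features for one lung-field quadrant (uint8 image)."""
    glcm = graycomatrix(quadrant_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    feats = {p: graycoprops(glcm, p)[0, 0]
             for p in ("contrast", "correlation", "homogeneity", "energy")}
    p = glcm[:, :, 0, 0]                       # normalized co-occurrence matrix
    feats["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    feats["max_probability"] = p.max()
    return feats
```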
Procedia PDF Downloads 129
2951 Evaluating Generative Neural Attention Weights-Based Chatbot on Customer Support Twitter Dataset
Authors: Sinarwati Mohamad Suhaili, Naomie Salim, Mohamad Nazim Jambli
Abstract:
Sequence-to-sequence (seq2seq) models augmented with attention mechanisms are playing an increasingly important role in automated customer service. These models, which are able to recognize complex relationships between input and output sequences, are crucial for optimizing chatbot responses. Central to these mechanisms are neural attention weights that determine the focus of the model during sequence generation. Despite their widespread use, there remains a gap in the comparative analysis of different attention weighting functions within seq2seq models, particularly in the domain of chatbots using the Customer Support Twitter (CST) dataset. This study addresses this gap by evaluating four distinct attention-scoring functions (dot, multiplicative/general, additive, and an extended multiplicative function with a tanh activation parameter) in neural generative seq2seq models. Utilizing the CST dataset, these models were trained and evaluated over 10 epochs with the AdamW optimizer. Evaluation criteria included validation loss and BLEU scores implemented under both greedy and beam search strategies with a beam size of k=3. Results indicate that the model with the tanh-augmented multiplicative function significantly outperforms its counterparts, achieving the lowest validation loss (1.136484) and the highest BLEU scores (0.438926 under greedy search, 0.443000 under beam search, k=3). These results emphasize the crucial influence of selecting an appropriate attention-scoring function in improving the performance of seq2seq models for chatbots. Particularly, the model that integrates tanh activation proves to be a promising approach to improve the quality of chatbots in the customer support context.
Keywords: attention weight, chatbot, encoder-decoder, neural generative attention, score function, sequence-to-sequence
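A minimal sketch of the four attention-scoring functions being compared follows, written in PyTorch; the tensor shapes and parameterisation are assumptions, since the paper's exact formulation is not reproduced here.

```python
# Hedged sketch of dot, general, additive, and tanh-augmented general scores.
import torch

d = 256                                        # hidden size (assumed)
W = torch.nn.Linear(d, d, bias=False)          # general / multiplicative weight
Wa = torch.nn.Linear(2 * d, d)                 # additive projection
va = torch.nn.Linear(d, 1, bias=False)

def dot_score(h_dec, h_enc):                   # h_dec: (B, d), h_enc: (B, T, d)
    return torch.bmm(h_enc, h_dec.unsqueeze(2)).squeeze(2)          # (B, T)

def general_score(h_dec, h_enc):
    return torch.bmm(h_enc, W(h_dec).unsqueeze(2)).squeeze(2)

def additive_score(h_dec, h_enc):
    h = h_dec.unsqueeze(1).expand_as(h_enc)
    return va(torch.tanh(Wa(torch.cat([h, h_enc], dim=-1)))).squeeze(-1)

def general_tanh_score(h_dec, h_enc):          # assumed form of the tanh variant
    return torch.bmm(h_enc, torch.tanh(W(h_dec)).unsqueeze(2)).squeeze(2)

# Attention weights are the softmax of any score over the encoder time axis.
weights = torch.softmax(dot_score(torch.randn(4, d), torch.randn(4, 10, d)), dim=-1)
```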
Procedia PDF Downloads 79
2950 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely used. It helps with two main tasks: displaying results by coloring items according to the item class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and the answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. This algorithm is non-parametric: the transformation from a high- to a low-dimensional space is described but not learned, and two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at exactly the same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, using the newly obtained embedding as the new support. The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing one to observe the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
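The authors' two-cost optimisation is not shown here; as a related illustration of reusing a support embedding, the openTSNE library can fit an embedding on one subset and then place later batches consistently with it.

```python
# Illustration of embedding reuse with openTSNE (not the authors' code):
# fit a support embedding on a subset, then map new points coherently.
import numpy as np
from openTSNE import TSNE

rng = np.random.default_rng(0)
X_support = rng.normal(size=(2000, 50))   # subset used to build the support
X_new = rng.normal(size=(500, 50))        # later batch to embed coherently

support = TSNE(n_components=2, random_state=0).fit(X_support)
coherent = support.transform(X_new)       # new points land near matching clusters
```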
Procedia PDF Downloads 145
2949 Toward Subtle Change Detection and Quantification in Magnetic Resonance Neuroimaging
Authors: Mohammad Esmaeilpour
Abstract:
One of the important open problems in the field of medical image processing is the detection and quantification of small changes. In this poster, we investigate how algebraic decomposition techniques can be used for semi-automatically detecting and quantifying subtle changes in Magnetic Resonance (MR) neuroimaging volumes. We mostly focus on the low-rank values of the matrices obtained from decomposing MR image pairs acquired over a period of time. In addition, a skilled neuroradiologist helps the algorithm to distinguish between noise and small changes.
Keywords: magnetic resonance neuroimaging, subtle change detection and quantification, algebraic decomposition, basis functions
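As a generic illustration of the low-rank idea (not the authors' exact pipeline), a truncated SVD can serve as the low-rank baseline of each registered slice, with the residual difference flagging candidate subtle changes; the rank and threshold below are assumptions.

```python
# Sketch: low-rank baseline via truncated SVD; residuals highlight small change.
import numpy as np

def low_rank(A, r):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

slice_t0 = np.random.rand(128, 128)            # stand-ins for registered MR slices
slice_t1 = slice_t0.copy()
slice_t1[60:64, 60:64] += 0.3                  # a synthetic subtle change

residual = (slice_t1 - low_rank(slice_t1, r=10)) - (slice_t0 - low_rank(slice_t0, r=10))
candidates = np.argwhere(np.abs(residual) > 0.15)  # flagged for radiologist review
```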
Procedia PDF Downloads 479
2948 Scar Removal Strategy for Fingerprint Using Diffusion
Authors: Mohammad A. U. Khan, Tariq M. Khan, Yinan Kong
Abstract:
Fingerprint image enhancement is one of the most important steps in an automatic fingerprint identification system (AFIS), and it directly affects the overall efficiency of the AFIS. Conventional fingerprint enhancement methods such as Gabor and anisotropic filters do fill the gaps in ridge lines, but they fail to tackle scar lines. To deal with this problem, we propose a method for enhancing ridges and valleys with scars so that true minutiae points can be extracted accurately. Our results have shown improved performance in terms of enhancement.
Keywords: fingerprint image enhancement, removing noise, coherence, enhanced diffusion
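The authors' scar-aware diffusion scheme is not given here; as a generic illustration of diffusion-based enhancement, a classic Perona-Malik anisotropic diffusion sketch follows, with assumed parameter values.

```python
# Generic Perona-Malik anisotropic diffusion (illustrative, not scar-aware).
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, gamma=0.2):
    """kappa assumes an 8-bit intensity scale; gamma <= 0.25 for stability."""
    u = img.astype(float)
    c = lambda d: np.exp(-(d / kappa) ** 2)   # conduction: smooth along ridges
    for _ in range(n_iter):
        # four-neighbour finite differences
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + gamma * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u
```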
Procedia PDF Downloads 518
2947 Small Text Extraction from Documents and Chart Images
Authors: Rominkumar Busa, Shahira K. C., Lijiya A.
Abstract:
Text recognition is an important area in computer vision which deals with detecting and recognising text in an image. Optical Character Recognition (OCR) is a saturated area these days, with very good text recognition accuracy. However, when the same OCR methods are applied to text with small font sizes, such as the text data of chart images, the recognition rate is less than 30%. This work aims to extract small text in images using a deep learning model, a CRNN with CTC loss. The text recognition accuracy is found to improve by applying super-resolution image enhancement prior to the CRNN model. We also observe that the text recognition rate increases by a further 18% by applying the proposed method, which involves super resolution and character segmentation followed by a CRNN with CTC loss. The efficiency of the proposed method suggests that further pre-processing of chart image text and other small text images will improve the accuracy further, thereby helping text extraction from chart images.
Keywords: small text extraction, OCR, scene text recognition, CRNN
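A minimal sketch of the CTC training objective used with a CRNN follows; the CRNN backbone is omitted, and the alphabet size, sequence lengths, and tensor shapes are assumptions.

```python
# Hedged sketch of the CTC loss step; a real CRNN would produce log_probs.
import torch

T, B, C = 32, 4, 37          # time steps, batch, classes (36 chars + CTC blank)
log_probs = torch.randn(T, B, C, requires_grad=True).log_softmax(2)  # stand-in
targets = torch.randint(1, C, (B, 10), dtype=torch.long)   # label indices, no blank
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), 10, dtype=torch.long)

ctc = torch.nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # in real training, gradients flow back into the CRNN
```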
Procedia PDF Downloads 129
2946 Automatic Identification and Classification of Contaminated Biodegradable Plastics using Machine Learning Algorithms and Hyperspectral Imaging Technology
Authors: Nutcha Taneepanichskul, Helen C. Hailes, Mark Miodownik
Abstract:
Plastic waste has emerged as a critical global environmental challenge, primarily driven by the prevalent use of conventional plastics derived from petrochemical refining and manufacturing processes in modern packaging. While these plastics serve vital functions, their persistence in the environment post-disposal poses significant threats to ecosystems. Addressing this issue necessitates new approaches, one of which involves the development of biodegradable plastics designed to degrade under controlled conditions, such as industrial composting facilities. It is imperative to note that compostable plastics are engineered for degradation within specific environments and are not suited for uncontrolled settings, including natural landscapes and aquatic ecosystems. The full benefits of compostable packaging are realized when it is subjected to industrial composting, preventing environmental contamination and waste stream pollution. Therefore, effective sorting technologies are essential to enhance composting rates for these materials and diminish the risk of contaminating recycling streams. In this study, we leverage hyperspectral imaging technology (HSI) coupled with advanced machine learning algorithms to accurately identify various types of plastics, encompassing conventional variants like Polyethylene terephthalate (PET), Polypropylene (PP), Low density polyethylene (LDPE), and High density polyethylene (HDPE), and biodegradable alternatives such as Polybutylene adipate terephthalate (PBAT), Polylactic acid (PLA), and Polyhydroxyalkanoates (PHA). The dataset is partitioned into three subsets: a training dataset comprising uncontaminated conventional and biodegradable plastics, a validation dataset encompassing contaminated plastics of both types, and a testing dataset featuring real-world packaging items in both pristine and contaminated states. Five distinct machine learning algorithms, namely Partial Least Squares Discriminant Analysis (PLS-DA), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Logistic Regression, and Decision Tree, were developed and evaluated for their classification performance. Remarkably, the Logistic Regression and CNN models exhibited the most promising outcomes, achieving a perfect accuracy rate of 100% on the training and validation datasets. Notably, the testing dataset yielded an accuracy exceeding 80%. The successful implementation of this sorting technology within recycling and composting facilities holds the potential to significantly elevate recycling and composting rates. As a result, the envisioned circular economy for plastics can be established, thereby offering a viable solution to mitigate plastic pollution.
Keywords: biodegradable plastics, sorting technology, hyperspectral imaging technology, machine learning algorithms
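As an illustrative sketch of one of the five compared classifiers, logistic regression can be applied to per-pixel spectra as below; the band count, sample counts, and data are placeholders.

```python
# Minimal sketch: classify per-pixel hyperspectral signatures by polymer type.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

classes = ["PET", "PP", "LDPE", "HDPE", "PBAT", "PLA", "PHA"]
X_train = np.random.rand(700, 224)           # 224 assumed spectral bands
y_train = np.repeat(np.arange(len(classes)), 100)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
pred = classes[int(clf.predict(np.random.rand(1, 224))[0])]
```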
Procedia PDF Downloads 84
2945 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos
Authors: Nassima Noufail, Sara Bouhali
Abstract:
In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the k-means algorithm; our goal is to find groups based on similarity in the video. Applying k-means clustering to all the frames is time-consuming; therefore, we started by identifying transition frames, where the scene in the video changes significantly, and then applied k-means clustering to these transition frames. We used two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames: the Gaussian filter blurs the image and omits the higher frequencies, and the Laplacian of Gaussian detects regions of rapid intensity change. We then used this vector of filter responses as input to our k-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map in which similar pixels are grouped. We then computed a cluster score indicating how near clusters are to each other and plotted a signal representing frame number vs. clustering score. Our hypothesis was that the evolution of the signal would not change if semantically related events were happening in the scene. We marked the breakpoints at which the root mean square level of the signal changes significantly; each breakpoint indicates the beginning of a new video segment. In the second part, for each segment from part one, we randomly selected a 16-frame clip and extracted spatiotemporal features for every 16 frames using the pre-trained convolutional 3D network C3D. The final C3D output is a 512-dimensional feature vector; hence we used principal component analysis (PCA) for dimensionality reduction. The final part is the classification. The C3D feature vectors are used as input to train a multi-class linear support vector machine (SVM), and we used this multi-class classifier to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and achieved an accuracy that outperforms the state of the art by 1.2%.
Keywords: video segmentation, action detection, classification, k-means, C3D
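The first stage can be sketched as follows: Gaussian and Laplacian-of-Gaussian responses form a per-pixel feature vector that is fed to k-means; the sigma value and number of clusters are assumptions.

```python
# Sketch of filter-response clustering on one transition frame.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

frame = np.random.rand(120, 160)                       # stand-in transition frame
features = np.stack([gaussian_filter(frame, sigma=2),  # low frequencies
                     gaussian_laplace(frame, sigma=2)  # rapid intensity changes
                     ], axis=-1).reshape(-1, 2)

labels = KMeans(n_clusters=5, n_init=10).fit_predict(features)
visual_map = labels.reshape(frame.shape)               # similar pixels grouped
```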
Procedia PDF Downloads 82
2944 Graph Based Traffic Analysis and Delay Prediction Using a Custom Built Dataset
Authors: Gabriele Borg, Alexei Debono, Charlie Abela
Abstract:
There is a constant rise in the availability of high volumes of data gathered from multiple sources, resulting in an abundance of unprocessed information that can be used to monitor patterns and trends in user behaviour. Similarly, year after year, Malta is experiencing ongoing population growth and an increase in mobility demand. This research takes advantage of data which is continuously being sourced and converts it into useful information related to the traffic problem on the Maltese roads. The scope of this paper is to provide a methodology to create a custom dataset (MalTra - Malta Traffic) compiled from multiple participants at various locations across the island, to identify the most common routes taken and expose the main areas of activity. Such use of big data underpins what are referred to as ITSs (Intelligent Transportation Systems), and it has been concluded that there is significant potential in utilising such sources of data on a nationwide scale. Furthermore, a series of graph neural network traffic prediction experiments is conducted to compare MalTra to large-scale traffic datasets.
Keywords: graph neural networks, traffic management, big data, mobile data patterns
Procedia PDF Downloads 137
2943 Agile Software Effort Estimation Using Regression Techniques
Authors: Mikiyas Adugna
Abstract:
Effort estimation is among the activities carried out in software development processes, and an accurate estimation model leads to project success. Agile effort estimation is a complex task because of the dynamic nature of software development, and researchers are still conducting studies on it to enhance prediction accuracy. For these reasons, we investigated and propose a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has four major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed on the dataset, setting the training set to 80% and the testing set to 20%. Following the train-test split, the two regression algorithms (Elastic Net and LASSO) are trained in two phases. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, the grid search technique (the grid is used to tune and select optimum parameters) and 5-fold cross-validation are used to get the final trained model. Finally, the final trained model is evaluated using the testing set. The experimental work is applied to an agile story-point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared methods. Of the two proposed algorithms, LASSO regression achieved the better predictive performance, acquiring PRED(8%) and PRED(25%) results of 100.0, an MMRE of 0.0491, an MMER of 0.0551, an MdMRE of 0.0593, an MdMER of 0.063, and an MSE of 0.0007. The results imply that the trained LASSO regression model is the most acceptable and achieves higher estimation performance than that reported in the literature.
Keywords: agile software development, effort estimation, elastic net regression, LASSO
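The second training phase can be sketched as below with scikit-learn; the alpha grid and the toy data are assumptions.

```python
# Sketch of normalization, 80/20 split, and grid-searched LASSO with 5-fold CV.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import MinMaxScaler

X = np.random.rand(200, 6)          # stand-in story-point features
y = np.random.rand(200) * 20        # stand-in effort values

X = MinMaxScaler().fit_transform(X)                     # normalize entire dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

grid = GridSearchCV(Lasso(max_iter=10000),
                    param_grid={"alpha": [0.001, 0.01, 0.1, 1.0]}, cv=5)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))        # evaluate on held-out set
```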
Procedia PDF Downloads 75
2942 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification
Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine
Abstract:
Agriculture is essential to the continuous existence of human life, as humans depend directly on it for the production of food. The exponential rise in population calls for a rapid increase in food production, with the application of technology to reduce laborious work and maximize production. Technology can aid and improve agriculture in several ways, from pre-planning to post-harvest, by the use of computer vision and image processing: to determine soil nutrient composition; to apply farm input resources such as fertilizers, herbicides, and water in the right amount, at the right time, and in the right place; to detect weeds; and to provide early detection of pests and diseases. This is precision agriculture, which is thought to be the solution required to achieve these goals. There has been significant improvement in the areas of image processing and data processing, the latter having been a major challenge. A database of images is collected through remote sensing and analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of images from vegetation need to be extracted, classified, segmented, and finally fed into the model. Different techniques have been applied to these processes, from neural networks, support vector machines, and fuzzy logic approaches to, most recently, the deep learning approach of convolutional neural networks for image classification, which has generated excellent results. A deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. The experimental results of the developed model yielded an average accuracy of 99.58%.
Keywords: convolution, feature extraction, image analysis, validation, precision agriculture
Procedia PDF Downloads 319
2941 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low energy consumption systems have become more and more important. One of the key technologies to realize low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several features, such as a high temporal resolution, which can achieve 1 Mframe/s, and a high dynamic range (120 dB). However, the point that can contribute most to low energy consumption is its sparsity; to be more specific, this sensor only captures the pixels that have an intensity change. In other words, there is no signal in areas without any intensity change. That is to say, this sensor is more energy efficient than conventional sensors such as RGB cameras because we can remove redundant data. On the other side of the advantages, the data are difficult to handle because the data format is completely different from an RGB image; for example, the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp; it does not include intensity such as RGB values. Therefore, as we cannot use existing algorithms straightforwardly, we have to design a new processing algorithm to cope with DVS data. In order to solve the difficulties caused by the data format differences, most prior art makes frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition purposes. However, even though we can feed the data, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of RGB pixel values, it is apparent that polarity information is not rich enough. Considering this context, we propose to use the timestamp information as the data representation fed to deep learning. Concretely, we first make frame data divided by a certain time period, then assign an intensity value according to the timestamp in each frame; for example, a high value is given to a recent signal. We expected that this data representation could capture the features especially of moving objects, because the timestamp represents the movement direction and speed. Using this proposed method, we made our own dataset with a DVS fixed on a parked car to develop an application for a surveillance system that can detect persons around the car. We think the DVS is one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a non-dynamic situation. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which makes frames in the same way as ours and feeds polarity information to a CNN. Then, we measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
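A minimal sketch of the proposed timestamp representation follows; the event tuple layout and window handling are assumptions.

```python
# Sketch of the timestamp representation: within each frame window, pixel
# intensity encodes signal recency (recent events near 1, older near 0).
import numpy as np

def events_to_timestamp_frame(events, t0, t1, h, w):
    """events: iterable of (x, y, polarity, timestamp); returns one float frame."""
    frame = np.zeros((h, w), dtype=np.float32)
    for x, y, _pol, ts in events:
        if t0 <= ts < t1:
            frame[int(y), int(x)] = (ts - t0) / (t1 - t0)  # recency as intensity
    return frame
```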
Procedia PDF Downloads 103
2940 3D Microscopy, Image Processing, and Analysis of Lymphangiogenesis in Biological Models
Authors: Thomas Louis, Irina Primac, Florent Morfoisse, Tania Durre, Silvia Blacher, Agnes Noel
Abstract:
In vitro and in vivo lymphangiogenesis assays are essential for the identification of potential lymphangiogenic agents and the screening of pharmacological inhibitors. In the present study, we analyse three biological models: in vitro lymphatic endothelial cell spheroids, an in vivo ear sponge assay, and in vivo lymph node colonisation by tumour cells. These assays provide suitable 3D models to test pro- and anti-lymphangiogenic factors or drugs. 3D images were acquired by confocal laser scanning and light sheet fluorescence microscopy. Virtual scan microscopy followed by 3D reconstruction with image alignment methods was also used to obtain 3D images of whole large sponge and ganglion samples. 3D reconstruction, image segmentation, skeletonisation, and other image processing algorithms are described. Fixed and time-lapse imaging techniques are used to analyse the behaviour of lymphatic endothelial cell spheroids. The study of cell spatial distribution in spheroid models enables the detection of interactions between cells and the identification of invasion hierarchy and guidance patterns. Global measurements such as the volume, length, and density of lymphatic vessels are measured in both in vivo models. Branching density and tortuosity evaluation are also proposed to determine structure complexity. Those properties, combined with vessel spatial distribution, are evaluated in order to determine the extent of lymphangiogenesis. Lymphatic endothelial cell invasion and lymphangiogenesis were evaluated under various experimental conditions. The comparison of these conditions enables the identification of lymphangiogenic agents and a better understanding of their roles in the lymphangiogenesis process. The proposed methodology is validated by its application to the three presented models.
Keywords: 3D image segmentation, 3D image skeletonisation, cell invasion, confocal microscopy, ear sponges, light sheet microscopy, lymph nodes, lymphangiogenesis, spheroids
Procedia PDF Downloads 382
2939 Optimizing Super Resolution Generative Adversarial Networks for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation and Weight Pruning
Authors: Hussain Sajid, Jung-Hun Shin, Kum-Won Cho
Abstract:
Image super-resolution is one of the most common computer vision problems, with many important applications. Generative adversarial networks (GANs) have promoted remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory requirements of GAN-based SR models (mainly the generators) lead to performance degradation and increased energy consumption, making them difficult to implement on resource-constrained devices. To relieve this problem, in this paper we introduce an optimized and highly efficient architecture for the SR-GAN (generator) model, utilizing the model compression techniques of knowledge distillation and pruning, which work together to reduce the storage requirement of the model and also increase its performance. Our method begins by distilling the knowledge from a large pre-trained model to a lightweight model using different loss functions. Then, iterative weight pruning is applied to the distilled model to remove less significant weights based on their magnitude, resulting in a sparser network. Knowledge distillation reduces the model size by 40%; pruning then reduces it further by 18%. To accelerate the learning process, we employ the Horovod framework for distributed training on a cluster of 2 nodes, each with 8 GPUs, resulting in improved training performance and faster convergence. Experimental results on various benchmarks demonstrate that the proposed compressed model significantly outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and image quality for x4 super-resolution tasks.
Keywords: single-image super-resolution, generative adversarial networks, knowledge distillation, pruning
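The two compression steps can be sketched as follows in PyTorch; the loss blend, pruning ratio, and model objects are placeholders rather than the paper's exact configuration.

```python
# Hedged sketch: distillation loss between teacher/student SR outputs, then
# iterative L1 magnitude pruning with torch.nn.utils.prune.
import torch
import torch.nn.utils.prune as prune

def distill_step(student, teacher, lr_img, hr_img, alpha=0.5):
    with torch.no_grad():
        t_out = teacher(lr_img)
    s_out = student(lr_img)
    # blend supervision from the ground truth and from the teacher
    return alpha * torch.nn.functional.l1_loss(s_out, hr_img) \
         + (1 - alpha) * torch.nn.functional.l1_loss(s_out, t_out)

def prune_generator(model, amount=0.18):
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")   # make the sparsity permanent
    return model
```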
Procedia PDF Downloads 103
2938 A Study of Common Carotid Artery Behavior from B-Mode Ultrasound Image for Different Gender and BMI Categories
Authors: Nabilah Ibrahim, Khaliza Musa
Abstract:
An increase in intima-media thickness (IMT), which involves changes in the diameter of the carotid artery, is one of the early symptoms of atherosclerotic lesions. Manual measurement of the arterial diameter is time-consuming and lacks reproducibility. Thus, this study reports an automatic approach for finding arterial diameter behavior for different gender and body mass index (BMI) categories, focusing on a tracked region. The BMI categories are underweight, normal, and overweight. Canny edge detection is applied to the B-mode image to extract the carotid wall boundary. The results show a significant difference of 2.5% in arterial diameter between the male and female groups. In addition, across the BMI categories, arterial diameter decreases in proportion to BMI.
Keywords: B-mode ultrasound image, carotid artery diameter, Canny edge detection, body mass index
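The diameter measurement step can be sketched as below with OpenCV; the Canny thresholds, smoothing, and per-column diameter rule are illustrative assumptions, and the input file name is hypothetical.

```python
# Sketch: Canny edges on a B-mode frame, then a per-column diameter estimate
# from the first and last edge rows (assumed to be the two wall boundaries).
import cv2
import numpy as np

bmode = cv2.imread("carotid_bmode.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
edges = cv2.Canny(cv2.GaussianBlur(bmode, (5, 5), 0), 50, 150)

diameters = []
for col in range(edges.shape[1]):
    rows = np.flatnonzero(edges[:, col])
    if rows.size >= 2:
        diameters.append(rows[-1] - rows[0])   # wall-to-wall pixel distance
mean_diameter_px = float(np.mean(diameters))
```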
Procedia PDF Downloads 448
2937 Normalized Compression Distance Based Scene Alteration Analysis of a Video
Authors: Lakshay Kharbanda, Aabhas Chauhan
Abstract:
In this paper, an application of Normalized Compression Distance (NCD) to detect notable scene alterations occurring in videos is presented. Several research groups have been developing methods to perform image classification using NCD, a computable approximation to the Normalized Information Distance (NID), by studying the degree of similarity between images. The timeframes where significant aberrations between the frames of a video have occurred are identified by obtaining a threshold NCD value, using two compressors, LZMA and BZIP2, and defining scene alterations using pixel difference percentage metrics.
Keywords: image compression, Kolmogorov complexity, normalized compression distance, root mean square error
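NCD itself is directly computable with the two compressors named above; the frame files in this sketch are hypothetical.

```python
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with C = compressed size.
import bz2
import lzma

def ncd(x: bytes, y: bytes, compress=lzma.compress) -> float:
    cx, cy, cxy = len(compress(x)), len(compress(y)), len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

frame_a = open("frame1.raw", "rb").read()   # hypothetical raw video frames
frame_b = open("frame2.raw", "rb").read()
print(ncd(frame_a, frame_b, lzma.compress), ncd(frame_a, frame_b, bz2.compress))
```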
Procedia PDF Downloads 344
2936 Investigating the Effectiveness of Multilingual NLP Models for Sentiment Analysis
Authors: Othmane Touri, Sanaa El Filali, El Habib Benlahmar
Abstract:
Natural Language Processing (NLP) has gained significant attention lately. It has proved its ability to analyze and extract insights from unstructured text data in various languages. One of the most popular NLP applications is sentiment analysis, which aims to identify the sentiment expressed in a piece of text, such as positive, negative, or neutral, in multiple languages. While there are several multilingual NLP models available for sentiment analysis, there is a need to investigate their effectiveness in different contexts and applications. In this study, we investigate the effectiveness of different multilingual NLP models for sentiment analysis on a dataset of online product reviews in multiple languages. The performance of several NLP models, including Google Cloud Natural Language API, Microsoft Azure Cognitive Services, Amazon Comprehend, Stanford CoreNLP, spaCy, and Hugging Face Transformers, is compared. The models are evaluated on several metrics, including accuracy, precision, recall, and F1 score, and their performance is compared across different categories of product reviews. To run the study, the dataset was preprocessed by cleaning and tokenizing the text data in multiple languages. Each model was then trained and tested using a cross-validation approach, randomly dividing the dataset into training and testing sets and repeating the process multiple times. A grid search approach was applied to optimize the hyperparameters of each model and select the best-performing model for each category of product reviews and language. The findings of this study provide insights into the effectiveness of different multilingual NLP models for multilingual sentiment analysis and their suitability for different languages and applications. The strengths and limitations of each model were identified, and recommendations for selecting the most performant model based on the specific requirements of a project were provided. This study contributes to the advancement of research methods in multilingual NLP and provides a practical guide for researchers and practitioners in the field.
Keywords: NLP, multilingual, sentiment analysis, texts
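As an illustration of one of the compared toolkits (Hugging Face Transformers), a multilingual sentiment pipeline can be run as below; the checkpoint is a commonly used community model assumed for illustration, not necessarily the one used in the study.

```python
# Multilingual sentiment scoring with a (assumed) community checkpoint.
from transformers import pipeline

clf = pipeline("sentiment-analysis",
               model="nlptown/bert-base-multilingual-uncased-sentiment")
reviews = ["Ce produit est excellent !",                       # French
           "El envío llegó tarde y dañado.",                   # Spanish
           "Das Preis-Leistungs-Verhältnis ist in Ordnung."]   # German
for review, result in zip(reviews, clf(reviews)):
    print(result["label"], round(result["score"], 3), "-", review)
```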
Procedia PDF Downloads 111
2935 Enhanced Extra Trees Classifier for Epileptic Seizure Prediction
Authors: Maurice Ntahobari, Levin Kuhlmann, Mario Boley, Zhinoos Razavi Hesabi
Abstract:
For machine learning based epileptic seizure prediction, it is important for the model to be implemented in small implantable or wearable devices that can be used to monitor epilepsy patients; however, current state-of-the-art methods are complex and computationally intensive. We use Shapley Additive Explanation (SHAP) to find relevant intracranial electroencephalogram (iEEG) features and improve the computational efficiency of a state-of-the-art seizure prediction method based on the extra trees classifier while maintaining prediction performance. Results for a small contest dataset and a much larger dataset with continuous recordings of up to 3 years per patient from 15 patients yield better than chance prediction performance (p < 0.004). Moreover, while the performance of the SHAP-based model is comparable to that of the benchmark, the overall training and prediction time of the model has been reduced by a factor of 1.83. It can also be noted that the feature called zero crossing value is the best EEG feature for seizure prediction. These results suggest state-of-the-art seizure prediction performance can be achieved using efficient methods based on optimal feature selection.
Keywords: machine learning, seizure prediction, extra tree classifier, SHAP, epilepsy
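A sketch of the SHAP-based feature ranking follows; the data, feature count, and the classic list-style shap_values API are assumptions (the SHAP return type differs across library versions).

```python
# Hedged sketch: rank iEEG features by mean |SHAP| on an extra trees classifier.
import numpy as np
import shap
from sklearn.ensemble import ExtraTreesClassifier

X = np.random.rand(500, 8)          # stand-in iEEG feature matrix
y = np.random.randint(0, 2, 500)    # stand-in preictal/interictal labels

model = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X, y)
sv = shap.TreeExplainer(model).shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv   # classic list API; version-dependent
importance = np.abs(sv).mean(axis=0)         # mean |SHAP| per feature
top_features = np.argsort(importance)[::-1]  # keep only top features downstream
```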
Procedia PDF Downloads 117
2934 Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering
Authors: R. Nandhini, Gaurab Mudbhari
Abstract:
Ranging from the field of health care to self-driving cars, machine learning and deep learning algorithms have revolutionized their fields through the proper utilization of images and visual-oriented data. Segmentation, regression, classification, clustering, dimensionality reduction, etc., are some of the machine learning tasks that have helped machine learning and deep learning models become state-of-the-art wherever images are key datasets. Among these tasks, classification and clustering are essential but difficult because of the intricate and high-dimensional characteristics of image data. This study examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Due to the distinctive structural attributes present in images, conventional methods often fail to effectively capture spatial patterns, motivating models that utilize more advanced architectures and attention mechanisms. In image classification, we investigate both CNNs and ViTs. The CNN, well known for its ability to detect spatial hierarchies, serves as one core model in our study. The ViT serves as the other core model, reflecting a modern classification method whose self-attention mechanism makes it more robust by allowing it to learn global dependencies in images without relying on convolutional layers. This paper evaluates the performance of these two architectures based on accuracy, precision, recall, and F1-score across different image datasets, analyzing their appropriateness for various categories of images. In the domain of clustering, we assess DEC, Variational Autoencoders (VAEs), and conventional clustering techniques like k-means applied to embeddings derived from CNN models. DEC, a prominent model in the field of clustering, has gained the attention of many ML engineers because of its ability to combine feature learning and clustering into a single framework; its main goal is to improve clustering quality through better feature representation. VAEs, on the other hand, are well known for using latent embeddings to group similar images without requiring prior labels, utilizing a probabilistic clustering method.
Keywords: machine learning, deep learning, image classification, image clustering
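The k-means-on-CNN-embeddings setup mentioned above can be sketched as follows; the backbone choice (ResNet-18) and the number of clusters are assumptions.

```python
# Sketch: k-means applied to embeddings from a pre-trained CNN backbone.
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # drop the classifier head
backbone.eval()

images = torch.randn(64, 3, 224, 224)    # stand-in preprocessed image batch
with torch.no_grad():
    embeddings = backbone(images).numpy()    # (64, 512) feature vectors

cluster_ids = KMeans(n_clusters=10, n_init=10).fit_predict(embeddings)
```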
Procedia PDF Downloads 20
2933 Short Text Classification for Saudi Tweets
Authors: Asma A. Alsufyani, Maram A. Alharthi, Maha J. Althobaiti, Manal S. Alharthi, Huda Rizq
Abstract:
Twitter is one of the most popular microblogging sites that allows users to publish short text messages called 'tweets'. Increasing the number of accounts to follow (followings) increases the number of tweets that will be displayed from different topics in an unclassified manner in the timeline of the user. Therefore, it can be a vital solution for many Twitter users to have their tweets in a timeline classified into general categories to save the user's time and to provide easy and quick access to tweets based on topics. In this paper, we developed a classifier for timeline tweets trained on a dataset consisting of 3600 tweets in total, which were collected from Saudi Twitter and annotated manually. We experimented with the well-known Bag-of-Words approach to text classification, and we used support vector machines (SVM) in the training process. The trained classifier performed well on a test dataset, with an average F1-measure equal to 92.3%. The classifier has been integrated into an application, which practically proved the classifier's ability to classify timeline tweets of the user.
Keywords: corpus creation, feature extraction, machine learning, short text classification, social media, support vector machine, Twitter
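A minimal sketch of the described Bag-of-Words plus linear SVM pipeline follows; the toy tweets and category labels are placeholders.

```python
# Bag-of-Words features fed to a linear SVM, as described in the abstract.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = ["المباراة كانت رائعة الليلة", "انخفاض أسعار الأسهم اليوم"]  # toy examples
labels = ["sports", "economy"]

clf = make_pipeline(CountVectorizer(), LinearSVC())
clf.fit(tweets, labels)
print(clf.predict(["نتيجة المباراة"]))   # classify an unseen tweet
```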
Procedia PDF Downloads 159
2932 'Low Electronic Noise' Detector Technology in Computed Tomography
Authors: A. Ikhlef
Abstract:
Image noise in computed tomography is mainly caused by statistical noise, system noise, and reconstruction algorithm filters. In the last few years, low-dose x-ray imaging has become more and more desired and is seen as a technically differentiating feature among CT manufacturers. In order to achieve this goal, several technologies and techniques are being investigated, including both hardware (integrated electronics and photon counting) and software (artificial intelligence and machine learning) based solutions. From a hardware point of view, electronic noise could indeed be a potential driver for low and ultra-low dose imaging. We demonstrated that the reduction or elimination of this term could lead to a reduction of dose without affecting image quality. Also, in this study, we will show that we can achieve this goal using conventional electronics (a low-cost and affordable technology), designed carefully and optimized for maximum detective quantum efficiency. We conducted the tests using large imaging objects such as 30 cm water and 43 cm polyethylene phantoms. We compared the image quality with conventional imaging protocols with radiation as low as 10 mAs (<< 1 mGy). Clinical validation of these results has been performed as well.
Keywords: computed tomography, electronic noise, scintillation detector, x-ray detector
Procedia PDF Downloads 129
2931 3D Remote Sensing Images Parallax Refining Based On HTML5
Authors: Qian Pei, Hengjian Tong, Weitao Chen, Hai Wang, Yanrong Feng
Abstract:
Horizontal parallax is the foundation of stereoscopic viewing. However, the human eye will feel uncomfortable and diplopia will occur if the horizontal parallax is larger than the eye separation. Therefore, we need to do parallax refining before conducting stereoscopic observation. Although some scholars have been devoted to online remote sensing image refining, the main work of image refining is completed on the server side, and there will be a significant delay when multiple users access the server at the same time. The emergence of HTML5 technology in recent years makes it possible to develop rich browser web applications. The authors complete the image parallax refining on the browser side based on HTML5, while the server side only needs to transfer image data and a parallax file to the browser side according to the browser's request. In this way, we can greatly reduce the server CPU load, allow a large number of users to access the server in parallel, and respond to user requests quickly.
Keywords: 3D remote sensing images, parallax, online refining, rich browser web application, HTML5
Procedia PDF Downloads 465
2930 Velocity Distribution in Open Channels with Sand: An Experimental Study
Authors: E. Keramaris
Abstract:
In this study, laboratory experiments on open channel flows over a sand bed were conducted. A porous bed (sand bed) with a porosity of ε=0.70 and a porous thickness of s΄=3 cm was tested. Vertical distributions of velocity were evaluated using two-dimensional (2D) Particle Image Velocimetry (PIV). Velocity profiles were measured above the impermeable bed and above the sand bed for the same set of total water heights (h = 6, 8, 10 and 12 cm) and for the same slope S = 1.5. Measurements of mean velocity indicate the effects of the bed material used (sand bed) on the flow characteristics (velocity distribution and Reynolds number) in comparison with those above the impermeable bed.
Keywords: particle image velocimetry, sand bed, velocity distribution, Reynolds number
Procedia PDF Downloads 377
2929 Investigation of Martensitic Transformation Zone at the Crack Tip of NiTi under Mode-I Loading Using Microscopic Image Correlation
Authors: Nima Shafaghi, Gunay Anlaş, C. Can Aydiner
Abstract:
A realistic understanding of the martensitic phase transition under complex stress states is key for accurately describing the mechanical behavior of shape memory alloys (SMAs). Particularly regarding the sharply changing stress fields at the tip of a crack, the size, nature, and shape of transformed zones are of great interest. There is significant variation among analytical models in their predictions of the size and shape of the transformation zone. As the fully transformed region remains inside a very small boundary at the tip of the crack, experimental validation requires microscopic resolution. Here, the crack tip vicinity of a NiTi compact tension specimen has been monitored in situ with microscopic image correlation at 20x magnification. With nominal 15 micrometer grains and an optical resolution of 0.2 micrometers per pixel, the strains at the crack tip are mapped with intra-grain detail. The transformation regions are then deduced using an equivalent strain formulation.
Keywords: digital image correlation, fracture, martensitic phase transition, mode I, NiTi, transformation zone
Procedia PDF Downloads 355
2928 Noninvasive Evaluation of Acupuncture by Measuring Facial Temperature through Thermal Image
Authors: An Guo, Hieyong Jeong, Tianyi Wang, Na Li, Yuko Ohno
Abstract:
Acupuncture, a form of sensory stimulation, has been used to treat various disorders for thousands of years. However, previous studies have not addressed approaches for noninvasive measurement to evaluate the therapeutic effect of acupuncture. The purpose of this study is to propose a noninvasive method to evaluate acupuncture by measuring facial temperature through thermal images. Three human subjects were recruited in this study. Each subject received acupuncture therapy for 30 minutes. Acupuncture needles (Ø0.16 x 30 mm) were inserted into the Baihui point (DU20), Neiguan points (PC6), and Taichong points (LR3); acupuncture needles (Ø0.18 x 39 mm) were inserted into the Tanzhong point (RN17), Zusanli points (ST36), and Yinlingquan points (SP9). Facial temperature was recorded by an infrared thermometer. The acupuncture therapeutic effect was compared pre- and post-acupuncture. Experimental results demonstrated that facial temperature changed according to the acupuncture therapeutic effect. It was concluded that the proposed method shows high potential for evaluating acupuncture by noninvasive measurement of facial temperature.
Keywords: acupuncture, facial temperature, noninvasive evaluation, thermal image
Procedia PDF Downloads 190
2927 A Comparison between Different Segmentation Techniques Used in Medical Imaging
Authors: Ibtihal D. Mustafa, Mawia A. Hassan
Abstract:
Tumor segmentation from MRI images is an important task for medical imaging experts. It is particularly challenging because of the highly varied appearance of tumor tissue among different patients. MRI is an advanced medical imaging modality because it gives richer information about human soft tissue. There are different segmentation techniques to detect brain tumors in MRI. In this paper, different segmentation methods are used to segment brain tumors, and the results are compared using correlation and the structural similarity index (SSIM) to analyse and identify the best technique that could be applied to MRI images.
Keywords: MRI, segmentation, correlation, structural similarity
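The two comparison metrics can be computed as below with scikit-image and NumPy; the masks are synthetic placeholders.

```python
# SSIM and Pearson correlation between a segmentation and a reference mask.
import numpy as np
from skimage.metrics import structural_similarity as ssim

reference = np.zeros((128, 128)); reference[40:80, 40:80] = 1.0
candidate = np.zeros((128, 128)); candidate[42:82, 41:79] = 1.0

score = ssim(reference, candidate, data_range=1.0)
corr = np.corrcoef(reference.ravel(), candidate.ravel())[0, 1]
print(f"SSIM={score:.3f}, correlation={corr:.3f}")
```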
Procedia PDF Downloads 412
2926 Meta-Learning for Hierarchical Classification and Applications in Bioinformatics
Authors: Fabio Fabris, Alex A. Freitas
Abstract:
Hierarchical classification is a special type of classification task where the class labels are organised into a hierarchy, with more generic class labels being ancestors of more specific ones. Meta-learning for classification-algorithm recommendation consists of recommending to the user a classification algorithm, from a pool of candidate algorithms, for a dataset, based on the past performance of the candidate algorithms on other datasets. Meta-learning is normally used in conventional, non-hierarchical classification. By contrast, this paper proposes a meta-learning approach for the more challenging task of hierarchical classification and evaluates it on a large number of bioinformatics datasets. Hierarchical classification is especially relevant for bioinformatics problems, as protein and gene functions tend to be organised into a hierarchy of class labels. This work proposes a meta-learning approach for recommending the best hierarchical classification algorithm for a hierarchical classification dataset. This work's contributions are: 1) proposing an algorithm for splitting hierarchical datasets into new datasets to increase the number of meta-instances, 2) proposing meta-features for hierarchical classification, and 3) interpreting decision-tree meta-models for hierarchical classification algorithm recommendation.
Keywords: algorithm recommendation, meta-learning, bioinformatics, hierarchical classification
Procedia PDF Downloads 319
2925 Registration of Multi-Temporal Unmanned Aerial Vehicle Images for Facility Monitoring
Authors: Dongyeob Han, Jungwon Huh, Quang Huy Tran, Choonghyun Kang
Abstract:
Unmanned Aerial Vehicles (UAVs) have been used for surveillance, monitoring, inspection, and mapping. In this paper, we present a systematic approach for the automatic registration of UAV images for monitoring facilities such as buildings, greenhouses, and civil structures. A two-step process is applied: 1) an image matching technique based on SURF (Speeded Up Robust Features) and RANSAC (Random Sample Consensus); 2) bundle adjustment of the multi-temporal images. Image matching to find corresponding points is one of the most important steps for the precise registration of multi-temporal images. We used the SURF algorithm to find matching points quickly and effectively. The RANSAC algorithm was used both in the process of finding matching points between images and in the bundle adjustment process. Experimental results from UAV images showed that our approach has good enough accuracy to be applied to facility change detection.
Keywords: building, image matching, temperature, unmanned aerial vehicle
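The matching-plus-RANSAC step can be sketched as below. SURF sits in the patent-encumbered opencv-contrib builds, so ORB is substituted here as a freely available detector; the overall logic is the same, and the image files are hypothetical.

```python
# Feature matching between two UAV epochs, with RANSAC rejecting outliers
# while estimating the inter-image transform.
import cv2
import numpy as np

img1 = cv2.imread("epoch1.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical UAV frames
img2 = cv2.imread("epoch2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```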
Procedia PDF Downloads 294
2924 Applying Unmanned Aerial Vehicle on Agricultural Damage: A Case Study of the Meteorological Disaster on Taiwan Paddy Rice
Authors: Chiling Chen, Chiaoying Chou, Siyang Wu
Abstract:
Taiwan is located in the western Pacific Ocean, at the intersection of continental and marine climates. Typhoons frequently strike Taiwan and bring meteorological disasters, i.e., heavy flooding, landslides, loss of life and property, etc. Global climate change brings more extreme meteorological disasters. Therefore, developing techniques to improve disaster prevention and mitigation is needed, and improving rescue processes and rehabilitation is important as well. In this study, UAVs (Unmanned Aerial Vehicles) are applied to take instant images for improving disaster investigation and rescue processes. Paddy rice fields in central Taiwan are the study area; they were struck by heavy rain during the monsoon season in June 2016. UAV images provide high ground resolution (3.5 cm) with 3D point clouds to develop image discrimination techniques and a digital surface model (DSM) for rice lodging. Firstly, supervised image classification with the Maximum Likelihood Method (MLD) is used to delineate the area of rice lodging. Secondly, 3D point clouds generated by Pix4D Mapper are used to develop the DSM for classifying the lodging levels of the paddy rice. As the results show, the discrimination accuracy of rice lodging is 85% by supervised image classification, and the classification accuracy of lodging level is 87% by DSM. Therefore, UAVs not only provide instant images of agricultural damage after a meteorological disaster, but the image discrimination of rice lodging also reaches acceptable accuracy (>85%). In the future, UAV and image discrimination technologies will be applied to different crop fields, and the results of image discrimination will be overlaid with the administrative boundaries of paddy rice to establish a GIS-based assist system for agricultural damage discrimination. Therefore, the time and labor needed for damage detection and monitoring would be greatly reduced.
Keywords: monsoon, supervised classification, Pix4D, 3D point clouds, discriminate accuracy
Procedia PDF Downloads 303