Search results for: image dataset
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3758

3368 Feature Based Unsupervised Intrusion Detection

Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein

Abstract:

The goal of a network-based intrusion detection system is to classify network traffic activities into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS) using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps improve the efficiency, performance, and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we used the new NSL-KDD dataset, a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) was implemented. The Weka framework, a Java-based open-source collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and that it requires less learning time than using the full feature set of the dataset with the same algorithm.
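
A minimal sketch of the pipeline this abstract describes, under the assumption that scikit-learn is used in place of Weka: information gain (approximated here by mutual information) ranks the features, the top-ranked features feed a two-cluster K-means, and the clusters are mapped to Normal/Attack. The file name, column names, and number of retained features are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("nsl_kdd.csv")                      # hypothetical CSV export of NSL-KDD
X = pd.get_dummies(df.drop(columns=["label"]))       # one-hot encode categorical columns
y = (df["label"] != "normal").astype(int)            # 0 = normal, 1 = attack

# Information gain (mutual information) ranks features; keep the most informative ones.
ig = mutual_info_classif(X, y, random_state=0)
top = X.columns[np.argsort(ig)[::-1][:15]]
X_red = MinMaxScaler().fit_transform(X[top])

# 60/40 split as in the paper, then unsupervised two-cluster K-means on the training part.
X_tr, X_te, y_tr, y_te = train_test_split(X_red, y, train_size=0.6, random_state=0)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_tr)

# Map each cluster to the majority training label, then score the held-out set.
mapping = {c: int(round(y_tr[km.labels_ == c].mean())) for c in (0, 1)}
pred = np.array([mapping[c] for c in km.predict(X_te)])
print("accuracy:", (pred == y_te.to_numpy()).mean())
```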

Keywords: information gain (IG), intrusion detection system (IDS), k-means clustering, Weka

Procedia PDF Downloads 298
3367 The "Street Less Traveled": Body Image and Its Relationship with Eating Attitudes, Influence of Media and Self-Esteem among College Students

Authors: Aditya Soni, Nimesh Parikh, R. A. Thakrar

Abstract:

Background: A cross-sectional study was conducted to focus on body image satisfaction, a hitherto under-investigated area in our setting. This study additionally examined its relationship with body mass index, the influence of media, and self-esteem. Our second objective was to assess whether there was any relationship between body image dissatisfaction and gender. Methods: A cross-sectional study using body image satisfaction described in words was undertaken, which also explored its relationship with body mass index (BMI), the influence of media, self-esteem, and other selected covariables such as socio-demographic details, overall satisfaction in life and particularly in academic/professional life, and current health status, using a 5-item Likert scale. Convenience sampling was used to select 303 participants of both genders aged 17 to 32. Results: Body image satisfaction had a significant relationship with body mass index (P<0.001), eating attitude (P<0.001), influence of media (P<0.001), and self-esteem (P<0.001). Students with low weight had a significantly higher prevalence of body image satisfaction, while overweight students had a significantly higher prevalence of dissatisfaction (P<0.001). Females showed more concern about body image than males. Conclusions: Overall, this study reveals that eating attitude, the influence of the media, and self-esteem are significantly related to body image. On an encouraging note, this level of awareness should be maintained for the overall mental health and sound development of individuals. Proactive preventive measures could be initiated in institutions on personality development and acceptance of self and individual differences, while maintaining optimal weight and an active lifestyle.

Keywords: body image, body mass index, media, self-esteem

Procedia PDF Downloads 576
3366 Detecting and Disabling Digital Cameras Using D3CIP Algorithm Based on Image Processing

Authors: S. Vignesh, K. S. Rangasamy

Abstract:

The paper deals with a device capable of detecting and disabling digital cameras. The system locates the camera and then neutralizes it. Every digital camera has an image sensor known as a CCD, which is retro-reflective and sends light back directly to its original source at the same angle. The device shines infrared LED light, which is invisible to the human eye, at a distance of about 20 feet. It then collects video of these reflections with a camcorder. The video of the reflections is transferred to a computer connected to the device, where it is sent through image processing algorithms that pick out the infrared light bouncing back. Once the camera is detected, the device projects an invisible infrared laser into the camera's lens, thereby overexposing the photo and rendering it useless. Low levels of infrared laser neutralize digital cameras but pose neither a health danger to humans nor physical damage to cameras. We also discuss a simplified design of the above device that can be used in theatres to prevent piracy. The domains covered here are optics and image processing.
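
As an illustration of the reflection pick-out step only, the following hedged OpenCV sketch thresholds each camcorder frame for very bright retro-reflective spots. It is a generic bright-spot detector, not the D3CIP algorithm itself, and the video file name and threshold values are assumptions.

```python
import cv2

cap = cv2.VideoCapture("ir_reflections.mp4")      # hypothetical camcorder recording
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Keep only very bright pixels, which is where the retro-reflected IR appears.
    _, mask = cv2.threshold(blur, 230, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 4:                 # ignore single-pixel noise
            x, y, w, h = cv2.boundingRect(c)
            print("candidate CCD reflection at", (x + w // 2, y + h // 2))
cap.release()
```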

Keywords: CCD, optics, image processing, D3CIP

Procedia PDF Downloads 359
3365 Real-Time Automated Detection of Violent Content in Animated Cartoons Using YOLOv9

Authors: Omaima Jbara, Mohame Amine Omrani, Mounir Zrigui

Abstract:

The detection of violent content in animated cartoons is an essential step toward safeguarding young audiences and promoting responsible media consumption. This study introduces an automated approach to identifying violent scenes in cartoons using advanced object detection models. A custom dataset comprising 1,200 frames was curated from various animated sources, focusing on four key classes: Explosion, Blood, Fight, and Gunshot. Data augmentation techniques, including rotation, scaling, and color adjustments, expanded the dataset to 2,000 frames, enhancing diversity and model generalization. YOLO versions 8, 9, and 10 were trained and evaluated on this dataset. Among these, YOLOv9 achieved the highest performance with a mean Average Precision (mAP) of 94%, demonstrating superior accuracy and robustness. These findings highlight YOLOv9's potential as a reliable tool for detecting violent content in animated media, contributing to the development of effective content moderation systems.
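
A hedged sketch of how such a YOLOv9 model might be trained and evaluated with the Ultralytics package (which, to our knowledge, ships YOLOv9 checkpoints); the checkpoint name, dataset YAML, and training settings are assumptions, not the authors' configuration.

```python
from ultralytics import YOLO

model = YOLO("yolov9c.pt")                  # pretrained YOLOv9 checkpoint (assumed name)
model.train(
    data="cartoon_violence.yaml",           # hypothetical YAML: train/val paths + 4 classes
    epochs=100,
    imgsz=640,
    batch=16,
)
metrics = model.val()                       # reports mAP over Explosion, Blood, Fight, Gunshot
print("mAP50-95:", metrics.box.map)
```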

Keywords: cartoon violence detection, YOLO model, computer vision, real-time content analysis

Procedia PDF Downloads 11
3364 Urdu Text Extraction Method from Images

Authors: Samabia Tehsin, Sumaira Kausar

Abstract:

Due to the vast increase in multimedia data in recent years, efficient and robust retrieval techniques are needed to retrieve and index images and videos. Text embedded in images can serve as a strong retrieval tool for images, which is why text extraction is an area of research receiving increasing attention. English text extraction has been the focus of many researchers, but much less work has been done on other languages such as Urdu. This paper focuses on Urdu text extraction from video frames. It presents a text detection feature set that is able to deal with most of the problems associated with the text extraction process. To test the validity of the method, it is evaluated on an Urdu news dataset, giving promising results.

Keywords: caption text, content-based image retrieval, document analysis, text extraction

Procedia PDF Downloads 519
3363 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using a Convolutional Neural Network (CNN) for classification, a set of hyperparameters is available for configuration. This study aims to evaluate the impact of a range of parameters in a CNN architecture, i.e., AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on the classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. Thus, a set of experiments was conducted to specify the effectiveness of the selected parameters using two implementation approaches, pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of the convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image sizes under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of the convolutional filters and the image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, the amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments and showed efficiency in both training and testing. The results show that increasing the number of epochs leads to a higher accuracy rate, as expected; however, the convergence state is highly dataset-dependent. For the batch size evaluation, a larger batch size slightly decreases the classification accuracy compared to a small batch size. For example, selecting a batch size of 32 on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. On the other extreme, increasing the batch size to 200 decreases the accuracy rate to 86.5% at the 11th epoch, and to 63% when using one epoch only. On the other hand, the choice of kernel size is loosely related to the dataset; from a practical point of view, a filter size of 20 produces 70.4286%. The final experiment, on image size, shows that accuracy improves with input size, but the performance gain comes at a considerable computational cost. These conclusions open opportunities toward better classification performance in various applications such as planetary remote sensing.
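
A hedged PyTorch sketch of this kind of hyperparameter sweep (batch size and kernel size at a fixed input size), using a small two-layer CNN rather than AlexNet and a generic ImageFolder layout; the dataset path, architecture, and learning rate are illustrative assumptions, not the DIGITS setup used in the study.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def make_net(kernel, img_size, n_classes):
    # Two conv/pool stages; padding keeps spatial size for odd kernels.
    feat = nn.Sequential(
        nn.Conv2d(3, 32, kernel, padding=kernel // 2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel, padding=kernel // 2), nn.ReLU(), nn.MaxPool2d(2),
    )
    flat = 64 * (img_size // 4) ** 2
    return nn.Sequential(feat, nn.Flatten(), nn.Linear(flat, n_classes))

def run(img_size, batch_size, kernel, epochs=11):
    tfm = transforms.Compose([transforms.Resize((img_size, img_size)), transforms.ToTensor()])
    train = datasets.ImageFolder("UCMerced/train", tfm)   # hypothetical folder layout
    loader = DataLoader(train, batch_size=batch_size, shuffle=True)
    net = make_net(kernel, img_size, len(train.classes))
    opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(net(x), y).backward()
            opt.step()
    return net

for batch in (32, 64, 128, 200):
    for kernel in (3, 5, 7):
        net = run(img_size=128, batch_size=batch, kernel=kernel)
        print(f"trained: batch={batch}, kernel={kernel}")  # evaluate on a held-out split here
```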

Keywords: CNNs, hyperparameters, remote sensing, land cover, land use

Procedia PDF Downloads 172
3362 Identification of How Pre-Service Physics Teachers Understand Image Formations through Virtual Objects in the Field of Geometric Optics and Development of a New Material to Exploit Virtual Objects

Authors: Ersin Bozkurt

Abstract:

The aim of the study is to develop materials for understanding image formation by virtual objects in geometric optics. The images in physics course books are formed using real objects, which results in mistakes about the features of images due to over-generalization and leads to conceptual misunderstandings in learning. This study intended to identify pre-service physics teachers' misunderstandings arising from such false generalizations. Focus group interviews were used as a qualitative method. The findings of the study show that students hold several misconceptions, such as "the image in a plane mirror is always virtual", whereas a real image can in fact be formed in a plane mirror. To explain image formation by a virtual object in a more understandable way, the designs of an overhead projector and an episcope are illustrated. The illustrations are original, and several computer simulations will be suggested.

Keywords: computer simulations, geometric optics, physics education, students' misconceptions in physics

Procedia PDF Downloads 408
3361 Automated Ultrasound Carotid Artery Image Segmentation Using Curvelet Threshold Decomposition

Authors: Latha Subbiah, Dhanalakshmi Samiappan

Abstract:

In this paper, we propose denoising Common Carotid Artery (CCA) B-mode ultrasound images by a decomposition approach to curvelet thresholding, followed by automatic segmentation of the intima-media thickness and adventitia boundary. Through decomposition, the local geometry of the image and its gradient directions are well preserved. The components are combined into a single vector-valued function, which removes noise patches, and a double threshold is applied to inherently remove speckle noise in the image. The denoised image is segmented by active contours without specifying seed points; combined with level set theory, they provide sub-regions with continuous boundaries. The deformable contours match the shapes and motion of objects in the images: a curve or surface under constraints is evolved from the image so that it is pulled toward the necessary image features, and region-based and boundary-based information are integrated to obtain the contour. The method handles the multiplicative speckle noise in both objective and subjective quality measurements and thus leads to better-segmented results. The proposed denoising method gives better performance metrics compared with other state-of-the-art denoising algorithms.

Keywords: curvelet, decomposition, levelset, ultrasound

Procedia PDF Downloads 344
3360 Vector Quantization Based on Vector Difference Scheme for Image Enhancement

Authors: Biji Jacob

Abstract:

The vector quantization algorithm uses minimum-distance calculation for codebook generation, a time-consuming calculation performed on each pixel value that leads to high computational complexity. The codebook is updated by comparing the distance of each vector to its centroid vector and measuring their closeness. In this paper, vector quantization is modified based on a vector difference algorithm for image enhancement purposes. In the proposed scheme, the vector differences between vectors are considered as the new-generation vectors, or new codebook vectors. The codebook is updated by comparing each newly generated vector against a threshold, keeping those with minimum error relative to the parent vector; the minimum error decides the fitness of each newly generated vector. Thus the codebook is generated in an adaptive manner, and the fitness value is used to suppress the degraded portion of the image, thereby enhancing the image through the adaptive searching capability of vector quantization via the vector difference algorithm. Experimental results show that the vector difference scheme efficiently modifies the vector quantization algorithm for enhancing images, with peak signal-to-noise ratio (PSNR), mean square error (MSE), and Euclidean distance (E_dist) as the performance parameters.
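
For orientation, the following hedged sketch shows a conventional block-based vector quantization baseline (K-means codebook) together with the MSE and PSNR metrics cited above; it does not implement the authors' vector-difference variant, and the test image and codebook size are assumptions.

```python
import numpy as np
from skimage import data, util
from sklearn.cluster import KMeans

img = util.img_as_float(data.camera())            # sample grayscale image in [0, 1]
b = 4                                             # block (vector) size: 4x4 = 16-dim vectors
h, w = (s - s % b for s in img.shape)
blocks = img[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2).reshape(-1, b * b)

# Codebook of 64 vectors via K-means; each block is replaced by its nearest codeword.
codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(blocks)
recon_blocks = codebook.cluster_centers_[codebook.labels_]
recon = recon_blocks.reshape(h // b, w // b, b, b).swapaxes(1, 2).reshape(h, w)

mse = np.mean((img[:h, :w] - recon) ** 2)
psnr = 10 * np.log10(1.0 / mse)                   # peak value is 1.0 for [0, 1] images
print(f"MSE={mse:.5f}  PSNR={psnr:.2f} dB")
```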

Keywords: codebook, image enhancement, vector difference, vector quantization

Procedia PDF Downloads 269
3359 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique

Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki

Abstract:

Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From these data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which liver accumulation dominates (0.5-2.5 minute SPECT image minus 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5-10 minute SPECT image minus liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, higher apparent accumulation in the myocardium due to overlap with the liver made the region undiagnosable. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
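
A minimal numerical sketch of the time-subtraction arithmetic described above, assuming the ten one-minute reconstructions are already available as an array; the file name and frame indices are illustrative assumptions.

```python
import numpy as np

# dyn[t] = reconstructed SPECT volume for minute t (10 one-minute frames), assumed given.
dyn = np.load("dynamic_spect.npy")                # hypothetical array, shape (10, z, y, x)

early = dyn[0:3].sum(axis=0)                      # ~0.5-2.5 min: liver-dominated uptake
late = dyn[5:10].sum(axis=0)                      # 5-10 min: liver + myocardium

# Liver-only estimate = early minus late (clipped), then subtract it from the late image.
liver_only = np.clip(early - late, 0, None)
myocardium = np.clip(late - liver_only, 0, None)
np.save("myocardium_time_subtracted.npy", myocardium)
```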

Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector

Procedia PDF Downloads 337
3358 Binarized-Weight Bilateral Filter for Low Computational Cost Image Smoothing

Authors: Yu Zhang, Kohei Inoue, Kiichi Urahama

Abstract:

We propose a simplified bilateral filter with binarized coefficients to accelerate it. Its computational cost is further decreased by sampling pixels. This low-computational-cost filter is useful for smoothing or denoising images on mobile devices with limited computational power.
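
A minimal sketch of one way such a filter could look: the range weight is binarized to 0/1 by a threshold, the spatial weight reduces to a box window, and the neighbourhood is subsampled; the threshold and stride are assumptions rather than the authors' parameters.

```python
import numpy as np

def binarized_bilateral(img, radius=4, range_thresh=0.1, stride=2):
    """img: 2-D float image; weights are 0/1 and the window is subsampled.
    Choose radius divisible by stride so the centre pixel is always included."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    acc = np.zeros_like(img)
    cnt = np.zeros_like(img)
    # Visit every `stride`-th offset inside the (2r+1)x(2r+1) window (pixel sampling).
    for dy in range(-radius, radius + 1, stride):
        for dx in range(-radius, radius + 1, stride):
            shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            # Binarized range weight: 1 where intensities are close, 0 otherwise.
            wgt = (np.abs(shifted - img) <= range_thresh).astype(img.dtype)
            acc += wgt * shifted
            cnt += wgt
    return acc / np.maximum(cnt, 1)

# Usage on a float image in [0, 1]:
# smooth = binarized_bilateral(noisy.astype(np.float32))
```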

Keywords: bilateral filter, binarized-weight bilateral filter, image smoothing, image denoising, pixel sampling

Procedia PDF Downloads 473
3357 Review of the Software Used for 3D Volumetric Reconstruction of the Liver

Authors: P. Strakos, M. Jaros, T. Karasek, T. Kozubek, P. Vavra, T. Jonszta

Abstract:

In medical imaging, segmentation of different areas of the human body, such as bones, organs, and tissues, is an important issue. Image segmentation isolates the object of interest for further processing, which can lead, for example, to 3D model reconstruction of whole organs. The difficulty of this procedure varies from trivial for bones to quite difficult for organs like the liver. The liver is considered one of the most difficult human organs to segment, mainly because of its complexity, shape variability, and the proximity of other organs and tissues. Due to these facts, substantial user effort usually has to be applied to obtain satisfactory segmentation results, and the process deteriorates from automatic or semi-automatic to a fairly manual one. In this paper, an overview of selected available software applications that can handle semi-automatic image segmentation with subsequent 3D volume reconstruction of the human liver is presented. The applications are evaluated based on the segmentation results of several consecutive DICOM images covering the abdominal area of the human body.

Keywords: image segmentation, semi-automatic, software, 3D volumetric reconstruction

Procedia PDF Downloads 294
3356 Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video

Authors: Nidhal K. Azawi, John M. Gauch

Abstract:

Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and deep neural networks to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92% and 98%. We also show how the removal of noninformative images, together with image alignment, can aid in the creation of image panoramas and other visualizations of colonoscopy images.

Keywords: colonoscopy classification, feature extraction, image alignment, machine learning

Procedia PDF Downloads 254
3355 The Research of Culture Heritage Tourism Loyalty in Taiwan

Authors: Chih-Wen Wu

Abstract:

This study examines the antecedents of heritage tourism loyalty and its relation to destination image, consumer travel experience, and destination satisfaction in the tourism context. In this respect, a number of important questions concerning how destination image, consumer travel experience, and destination satisfaction impact destination loyalty are raised. This study attempts to identify three key antecedents of loyalty in the heritage context. The author empirically tests the predicted relationships using personal interview data from 475 foreign tourists. The conceptual model investigates the relevant relationships among the constructs using confirmatory factor analysis (CFA) and the structural equation modeling (SEM) approach. Findings from the research sample support the argument that destination image, consumer travel experience, and destination satisfaction are the key determinants of destination loyalty, and that destination image and consumer travel experience influence destination satisfaction. The author also discusses theoretical and managerial implications of the research findings for marketing heritage globally.

Keywords: heritage, destination loyalty, destination image, consumer travel experience, destination satisfaction, tourism

Procedia PDF Downloads 446
3354 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators

Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros

Abstract:

Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data-quality awareness devotes almost as much time to data quality processes, while a data project without such awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because expectations differ according to the purpose of each data project. This is especially true for big data projects, which may involve many datasets and stakeholders and therefore take a long time to discuss and define quality expectations and measurements. Therefore, this study aimed at developing meaningful indicators that describe the overall data quality of each dataset for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we define standard data quality expectations. Second, we find indicators that can directly measure the data within datasets. Third, the indicators are aggregated into dimensions using factor analysis. Next, the indicators and dimensions are weighted by the effort required for the data preparation process and by usability. Finally, the dimensions are aggregated into the composite indicator. The results of these analyses showed that: (1) the developed indicators and measurements comprise ten useful indicators; (2) based on statistical characteristics, the ten indicators can be reduced to four dimensions; and (3) the developed composite indicator, the SDQI, describes the overall quality of each dataset and separates datasets into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall, meaningful description of data quality within datasets. It can be used to assess all data in a data project, for effort estimation, and for prioritization. The SDQI also works well with agile methods, for example by using it for assessment in the first sprint; after passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
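
A hedged sketch of the aggregation pipeline described above: indicator scores are reduced to dimensions with factor analysis, weighted, combined into a composite index, and binned into three levels; the indicator file, weights, and cut-off values are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import MinMaxScaler

# Hypothetical table: one row per dataset, one column per quality indicator (10 columns).
scores = pd.read_csv("indicator_scores.csv", index_col="dataset")
X = MinMaxScaler().fit_transform(scores)              # put all indicators on [0, 1]

fa = FactorAnalysis(n_components=4, random_state=0)   # 10 indicators -> 4 dimensions
dims = fa.fit_transform(X)

weights = np.array([0.4, 0.3, 0.2, 0.1])              # assumed effort/usability weights (sum to 1)
sdqi = MinMaxScaler().fit_transform(dims) @ weights   # composite index per dataset, in [0, 1]

levels = pd.cut(sdqi, bins=[0, 0.33, 0.66, 1.0],
                labels=["Poor", "Acceptable", "Good"], include_lowest=True)
print(pd.DataFrame({"SDQI": sdqi, "level": levels}, index=scores.index))
```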

Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis

Procedia PDF Downloads 143
3353 Hounsfield-Based Automatic Evaluation of Volumetric Breast Density on Radiotherapy CT-Scans

Authors: E. M. D. Akuoko, Eliana Vasquez Osorio, Marcel Van Herk, Marianne Aznar

Abstract:

Radiotherapy is an integral part of treatment for many patients with breast cancer. However, side effects can occur, e.g., fibrosis or erythema. If patients at higher risk of radiation-induced side effects could be identified before treatment, they could be given more individual information about the risks and benefits of radiotherapy. We hypothesize that breast density is correlated with the risk of side effects and present a novel method for its automatic evaluation based on radiotherapy planning CT scans. Methods: 799 supine CT scans of breast radiotherapy patients were available from the REQUITE dataset. The methodology was first established in a subset of 114 patients (cohort 1) before being applied to the whole dataset (cohort 2). All patients were scanned in the supine position with arms up, and the treated (ipsilateral) breast was identified. Manual expert contours were available for both the ipsilateral and contralateral breast in 96 patients of cohort 1. Breast tissue was segmented using atlas-based automatic contouring software, ADMIRE® v3.4 (Elekta AB, Sweden). Once validated, the automatic segmentation method was applied to cohort 2. Breast density was then investigated by thresholding voxels within the contours, using the Otsu threshold and pixel intensity ranges based on Hounsfield units (-200 to -100 for fatty tissue, and -99 to +100 for fibro-glandular tissue). Volumetric breast density (VBD) was defined as the volume of fibro-glandular tissue / (volume of fibro-glandular tissue + volume of fatty tissue). A sensitivity analysis was performed to verify whether the calculated VBD was affected by the choice of breast contour. In addition, we investigated the correlation between VBD and patient age and breast size, and VBD values were compared between ipsilateral and contralateral breast contours. Results: Estimated VBD values were 0.40 (range 0.17-0.91) in cohort 1, and 0.43 (0.096-0.99) in cohort 2. We observed ipsilateral breasts to be denser than contralateral breasts. Breast density was negatively associated with breast volume (Spearman: R=-0.5, p-value < 2.2e-16) and age (Spearman: R=-0.24, p-value = 4.6e-10). Conclusion: VBD estimates could be obtained automatically on a large CT dataset. Patients' age or breast volume may not be the only variables that explain breast density. Future work will focus on assessing the usefulness of VBD as a predictive variable for radiation-induced side effects.
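
The VBD definition above can be expressed directly as a few lines of array arithmetic; in this hedged sketch the CT volume and breast mask are assumed to be loaded already (e.g., from DICOM), and the synthetic example data are only for illustration.

```python
import numpy as np

def volumetric_breast_density(ct_hu, breast_mask):
    """ct_hu: 3-D array of Hounsfield units; breast_mask: boolean array of the same shape."""
    vox = ct_hu[breast_mask]
    fat = np.count_nonzero((vox >= -200) & (vox <= -100))              # fatty tissue range
    fibroglandular = np.count_nonzero((vox >= -99) & (vox <= 100))     # fibro-glandular range
    return fibroglandular / (fibroglandular + fat)

# Example with synthetic data standing in for a planning CT and its breast contour:
ct = np.random.randint(-300, 150, size=(60, 256, 256))
mask = np.zeros_like(ct, dtype=bool)
mask[:, 60:180, 40:200] = True
print("VBD =", round(volumetric_breast_density(ct, mask), 3))
```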

Keywords: breast cancer, automatic image segmentation, radiotherapy, big data, breast density, medical imaging

Procedia PDF Downloads 136
3352 Development of Intelligent Construction Management System Using Web-Camera Image and 3D Object Image

Authors: Hyeon-Seung Kim, Bit-Na Cho, Tae-Woon Jeong, Soo-Young Yoon, Leen-Seok Kang

Abstract:

Recently, construction projects have become larger in size and more complicated in site work. Web cameras are used to manage the construction sites of such large projects; they can be used to monitor the construction schedule by comparing actual work images against the planned work schedule. In particular, because 4D CAD systems, in which the construction appearance is continually simulated as 3D CAD objects according to the work schedule, are widely applied to construction projects, a comparison system between the real image of the actual work appearance from the web camera and the simulated image of the planned work appearance from the 3D CAD object can serve as an intelligent construction schedule management system (ICON). Activities delayed relative to the planned schedule can be visualized in red in the ICON as virtual reality objects. This study developed the ICON, and it was verified in a real bridge construction project in Korea. To verify the developed system, a web camera was installed and operated in a case project for a month. Because the angle and zoom of the web camera can be operated over the Internet, a project manager can easily monitor the site and take corrective action.

Keywords: 4D CAD, web-camera, ICON (intelligent construction schedule management system), 3D object image

Procedia PDF Downloads 509
3351 Autism Disease Detection Using Transfer Learning Techniques: Performance Comparison between Central Processing Unit vs. Graphics Processing Unit Functions for Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

Neural network approaches are machine learning methods used in many domains, such as healthcare and cyber security, and neural networks are mostly known for dealing with image datasets. While training on images, several fundamental mathematical operations are carried out in the neural network, including algebraic functions such as derivatives, convolutions, and matrix inversion and transposition. Such operations require higher processing power than is typically needed for ordinary computer usage. A Central Processing Unit (CPU) is not well suited to datasets with large images, as it is built for serial processing, whereas a Graphics Processing Unit (GPU) has parallel processing capabilities and therefore offers higher speed. This paper uses advanced neural network techniques such as VGG16, ResNet50, DenseNet, InceptionV3, Xception, MobileNet, XGBOOST-VGG16, and our proposed models to compare CPU and GPU resources. A system for classifying autism from face images of autistic and non-autistic children was used to compare performance during testing. We used evaluation metrics such as accuracy, F1 score, precision, recall, and execution time. It was observed that the GPU ran faster than the CPU in all tests performed. Moreover, the performance of the neural network models in terms of accuracy increased on the GPU compared to the CPU.
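
A hedged sketch of the CPU-versus-GPU comparison: the same transfer-learning model (a VGG16 backbone with a two-class head, standing in for the autistic/non-autistic classifier) is trained for a few steps on each device and wall-clock time is recorded; the batch size, step count, and synthetic inputs are assumptions.

```python
import time
import torch
import torch.nn as nn
from torchvision import models

def time_training(device, steps=20, batch=16):
    model = models.vgg16(weights="IMAGENET1K_V1")          # pretrained VGG16 backbone
    model.classifier[6] = nn.Linear(4096, 2)               # 2 classes: autistic / non-autistic
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(batch, 3, 224, 224, device=device)     # stand-in for face images
    y = torch.randint(0, 2, (batch,), device=device)
    start = time.time()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    if device.type == "cuda":
        torch.cuda.synchronize()                            # wait for GPU work before timing
    return time.time() - start

print("CPU :", time_training(torch.device("cpu")), "s")
if torch.cuda.is_available():
    print("GPU :", time_training(torch.device("cuda")), "s")
```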

Keywords: autism disease, neural network, CPU, GPU, transfer learning

Procedia PDF Downloads 122
3350 Virtual 3D Environments for Image-Based Navigation Algorithms

Authors: V. B. Bastos, M. P. Lima, P. R. G. Kurka

Abstract:

This paper addresses the creation of virtual 3D environments for the study and development of mobile robot image-based navigation algorithms and techniques, which need to operate robustly and efficiently. These algorithms can be tested physically, by conducting experiments on a prototype, or by numerical simulations. Current simulation platforms for robotic applications do not have flexible and up-to-date models for image rendering and are unable to reproduce complex light effects and materials. Thus, it is necessary to create a test platform that integrates sophisticated simulation of real environments for navigation with data and image processing. This work proposes the development of a high-level platform for building 3D model environments and testing image-based navigation algorithms for mobile robots. Texture and lighting effects were applied in order to accurately represent the generation of rendered images with respect to the real-world version. The application integrates image processing scripts, trajectory control, dynamic modeling, and simulation techniques for physics representation and picture rendering with the open-source 3D creation suite Blender.

Keywords: simulation, visual navigation, mobile robot, data visualization

Procedia PDF Downloads 257
3349 Image Recognition and Anomaly Detection Powered by GANs: A Systematic Review

Authors: Agastya Pratap Singh

Abstract:

Generative Adversarial Networks (GANs) have emerged as powerful tools in the fields of image recognition and anomaly detection due to their ability to model complex data distributions and generate realistic images. This systematic review explores recent advancements and applications of GANs in both image recognition and anomaly detection tasks. We discuss various GAN architectures, such as DCGAN, CycleGAN, and StyleGAN, which have been tailored to improve accuracy, robustness, and efficiency in visual data analysis. In image recognition, GANs have been used to enhance data augmentation, improve classification models, and generate high-quality synthetic images. In anomaly detection, GANs have proven effective in identifying rare and subtle abnormalities across various domains, including medical imaging, cybersecurity, and industrial inspection. The review also highlights the challenges and limitations associated with GAN-based methods, such as instability during training and mode collapse, and suggests future research directions to overcome these issues. Through this review, we aim to provide researchers with a comprehensive understanding of the capabilities and potential of GANs in transforming image recognition and anomaly detection practices.

Keywords: generative adversarial networks, image recognition, anomaly detection, DCGAN, CycleGAN, StyleGAN, data augmentation

Procedia PDF Downloads 27
3348 Review of Ultrasound Image Processing Techniques for Speckle Noise Reduction

Authors: Kwazikwenkosi Sikhakhane, Suvendi Rimer, Mpho Gololo, Khmaies Oahada, Adnan Abu-Mahfouz

Abstract:

Medical ultrasound imaging is a crucial diagnostic technique due to its affordability and non-invasiveness compared to other imaging methods. However, the presence of speckle noise, which is a form of multiplicative noise, poses a significant obstacle to obtaining clear and accurate images in ultrasound imaging. Speckle noise reduces image quality by decreasing contrast, resolution, and signal-to-noise ratio (SNR). This makes it difficult for medical professionals to interpret ultrasound images accurately. To address this issue, various techniques have been developed to reduce speckle noise in ultrasound images, which improves image quality. This paper aims to review some of these techniques, highlighting the advantages and disadvantages of each algorithm and identifying the scenarios in which they work most effectively.
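
As a concrete example of one classical despeckling approach that such reviews typically cover, the following hedged sketch implements a Lee-style adaptive filter, which blends each pixel with its local mean according to local variance; the window size and the global noise-variance estimate are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=5):
    """img: 2-D float ultrasound image; win: odd window size."""
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img ** 2, win)
    var = sq_mean - mean ** 2
    noise_var = np.mean(var)                 # crude global estimate of speckle variance
    gain = var / (var + noise_var + 1e-12)   # ~0 in flat regions, ~1 near edges
    return mean + gain * (img - mean)

# Usage on a float ultrasound image in [0, 1]:
# despeckled = lee_filter(ultrasound.astype(np.float32))
```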

Keywords: image processing, noise, speckle, ultrasound

Procedia PDF Downloads 114
3347 Enhancing Cultural Heritage Data Retrieval by Mapping COURAGE to CIDOC Conceptual Reference Model

Authors: Ghazal Faraj, Andras Micsik

Abstract:

The CIDOC Conceptual Reference Model (CRM) is an extensible ontology that provides integrated access to heterogeneous digital datasets. The CIDOC-CRM offers a "semantic glue" intended to promote accessibility to several diverse and dispersed sources of cultural heritage data. This is achieved by providing a formal structure for the implicit and explicit concepts and their relationships in the cultural heritage field. The COURAGE ("Cultural Opposition – Understanding the CultuRal HeritAGE of Dissent in the Former Socialist Countries") project aimed to explore methods of socialist-era cultural resistance during 1950-1990 and was planned to serve as a basis for further narratives and digital humanities (DH) research. The project highlights the diversity of alternative cultural scenes that flourished in Eastern Europe before 1989. Moreover, the COURAGE dataset is an online RDF-based registry that consists of historical people, organizations, collections, and featured items. To increase the links between different datasets and retrieve more relevant data from various data silos, a shared federated ontology for reconciled data is needed. As a first step towards these goals, a full understanding of the CIDOC CRM ontology (the target ontology), as well as the COURAGE dataset, was required. Subsequently, the queries toward the ontology were determined, and a table of equivalent properties from COURAGE and CIDOC CRM was created. Structural diagrams that clarify the mapping process and the construction of queries are in progress for mapping person, organization, and collection entities to the ontology. By mapping the COURAGE dataset to the CIDOC-CRM ontology, the dataset will have a common ontological foundation with several other datasets. Therefore, the expected results are: 1) retrieving more detailed data about existing entities, 2) retrieving data about new entities, 3) aligning the COURAGE dataset to a standard vocabulary, and 4) running distributed SPARQL queries over several CIDOC-CRM datasets and testing the potential of distributed query answering using SPARQL. The next step is to map CIDOC-CRM to other upper-level ontologies or large datasets (e.g., DBpedia, Wikidata) and address similar questions on a wide variety of knowledge bases.

Keywords: CIDOC CRM, cultural heritage data, COURAGE dataset, ontology alignment

Procedia PDF Downloads 149
3346 Large-Capacity Image Information Reduction Based on Single-Cue Saliency Map for Retinal Prosthesis System

Authors: Yili Chen, Xiaokun Liang, Zhicheng Zhang, Yaoqin Xie

Abstract:

In an effort to restore visual perception in retinal diseases, an electronic retinal prosthesis with thousands of electrodes has been developed. The image processing strategy of a retinal prosthesis system converts the original images from the camera into a stimulus pattern that can be interpreted by the brain. In practice, the original images have a much higher resolution (256x256) than the stimulus pattern (such as 25x25), which poses a technical image processing challenge of large-capacity image information reduction. In this paper, we focus on developing an efficient stimulus pattern extraction algorithm that uses a single-cue saliency map to extract salient objects in the image with an optimal trimming threshold. Experimental results showed that the proposed stimulus pattern extraction algorithm performs quite well for different scenes in terms of the stimulus pattern. In the algorithm performance experiment, our proposed SCSPE algorithm achieved almost five times the score of Boyle's algorithm. Through the experiments, we suggest that when there are salient objects in the scene (such as when the blind user meets or talks with people), the trimming threshold should be set around 0.4max; in other situations, the trimming threshold can be set between 0.2max and 0.4max to give a satisfactory stimulus pattern.
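
A hedged sketch of the extraction idea: a single-cue saliency map (here OpenCV's spectral-residual saliency, available in the contrib build, as a stand-in for the authors' cue) is trimmed at a fraction of its maximum and downsampled to the electrode-array resolution. The 0.4max value and the 25x25 target follow the abstract; the input image and the rest are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("scene.jpg")                                  # hypothetical 256x256 camera frame
sal_alg = cv2.saliency.StaticSaliencySpectralResidual_create() # requires opencv-contrib-python
ok, sal = sal_alg.computeSaliency(img)                         # float saliency map in [0, 1]

trim = 0.4 * sal.max()                                         # trimming threshold (0.2-0.4 of max)
salient = np.where(sal >= trim, sal, 0.0)

# Downsample the trimmed map to the 25x25 electrode grid and binarize it.
stimulus = cv2.resize(salient, (25, 25), interpolation=cv2.INTER_AREA)
pattern = (stimulus > 0).astype(np.uint8)
```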

Keywords: retinal prosthesis, image processing, region of interest, saliency map, trimming threshold selection

Procedia PDF Downloads 249
3345 Facial Emotion Recognition Using Deep Learning

Authors: Ashutosh Mishra, Nikhil Goyal

Abstract:

A 3D facial emotion recognition model based on deep learning is proposed in this paper. Two convolution layers and a pooling layer are employed in the deep learning architecture; pooling is applied after the convolution stages. The probabilities for the various classes of human faces are calculated using the sigmoid activation function. To verify the efficiency of the deep learning-based system, a set of face images from the Kaggle dataset is used to assess the accuracy of the model. The model's accuracy is about 65 percent, which is lower than that of other facial expression recognition techniques, despite the significant gains in representation precision afforded by the nonlinearity of deep image representations.
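
A hedged Keras sketch of the small architecture described (two convolution layers, one pooling layer, and a sigmoid output); the input size, number of emotion classes, and loss choice are assumptions, and loading the Kaggle data is omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

n_classes = 7                                  # assumed number of facial-emotion classes
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),           # typical grayscale face-crop size (assumed)
    layers.Conv2D(32, 3, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(n_classes, activation="sigmoid"),   # sigmoid per class, as in the abstract
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```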

Keywords: facial recognition, computational intelligence, convolutional neural network, depth map

Procedia PDF Downloads 233
3344 The Impact of Upward Social Media Comparisons on Body Image and the Role of Physical Appearance Perfectionism and Cognitive Coping

Authors: Lauren Currell, Gemma Hurst

Abstract:

Introduction: The present study experimentally investigated the impact of attractive Instagram images on females' body image. It also examined whether physical appearance perfectionism and cognitive coping predicted body image following upward comparisons to idealised bodies on Instagram. Methods: One hundred and fifty-eight females (mean age 24.35 years) were randomly assigned to an experimental condition (where they compared their bodies to those of Instagram models) or a control condition (where they critiqued landscape paintings). All participants completed measures of physical appearance perfectionism and cognitive coping, and pre- and post-measures of body image. Results: Comparing one's body to idealised bodies on Instagram resulted in increased appearance and weight dissatisfaction and decreased confidence, compared to the control condition. Physical appearance perfectionism and cognitive coping both predicted body image outcomes in the experimental condition. Discussion: Clinical implications, such as the prevention and treatment of body dissatisfaction, are discussed. Strengths and limitations of the current study are also noted, and suggestions for future research are provided.

Keywords: perfectionism, cognitive coping, body image, social media

Procedia PDF Downloads 100
3343 Melanoma and Non-Melanoma, Skin Lesion Classification, Using a Deep Learning Model

Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul

Abstract:

Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer as the most common types of cancer in Caucasians. The alarming increase in skin cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy of detection and decrease possible human errors. Several studies have shown that the diagnostic performance of computer algorithms can outperform dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using International Skin Imaging Collaboration (ISIC) image samples. The dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show improvement in performance, with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as support vector machine (SVM), residual network (ResNet50), EfficientNetB0, EfficientNetB4, and VGG16.
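
A hedged sketch of a simple soft-voting ensemble over dermoscopy classifiers, averaging per-model probabilities before taking the benign/malignant decision; the member models and file names are assumptions, not the authors' exact ensemble.

```python
import numpy as np
import tensorflow as tf

def ensemble_predict(members, images):
    """images: float array (N, H, W, 3); members: list of compiled Keras classifiers."""
    probs = [m.predict(images, verbose=0) for m in members]   # each: (N, 2) softmax outputs
    avg = np.mean(probs, axis=0)                              # soft-voting: average probabilities
    return avg.argmax(axis=1)                                 # 0 = benign, 1 = malignant

# Example with hypothetical pretrained members:
# members = [tf.keras.models.load_model(p) for p in ("vgg16.h5", "effnetb4.h5")]
# labels = ensemble_predict(members, test_images)
```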

Keywords: deep learning, VGG16, EfficientNet, CNN, ensemble, dermoscopic images, melanoma

Procedia PDF Downloads 85
3342 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping

Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting

Abstract:

Since vision system applications in industrial environments for autonomous purposes are intensely required, image recognition techniques have become an important research topic. Here, a deep learning algorithm is employed in an image system to recognize industrial objects and is integrated with a 7A6 Series manipulator for an automatic object gripping task. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to extract the image for object recognition and coordinate derivation. The YOLOv2 scheme is adopted in a convolutional neural network (CNN) structure for object classification and center point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. The specified object location and orientation information are then sent to the robotic controller. Finally, a six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and a 3D position error of less than 0.4 mm. It is useful for future intelligent robotic applications in Industry 4.0 environments.

Keywords: deep learning, image processing, convolution neural network, YOLOv2, 7A6 series manipulator

Procedia PDF Downloads 252
3341 A Calibration Method for Temperature Distribution Measurement of Thermochromic Liquid Crystal Based on Mathematical Morphology of Hue Image

Authors: Risti Suryantari, Flaviana

Abstract:

The aim of this research is to design a calibration method for thermochromic liquid crystal (TLC) temperature distribution measurement based on the mathematical morphology of hue images. A glass of water is placed on the surface of a TLC R25C5W sample at a certain temperature, and a scanner is used for image acquisition. The true-color RGB images are converted to HSV (hue, saturation, value), and only the hue channel is retained, discarding saturation and value. The hue images are then processed based on mathematical morphology using MATLAB R2013a to obtain better images. There are differences in the final processed images at each temperature variation, both in visual observation and in the statistical values: the maximum and mean values increase with rising temperature. This could serve as a parameter to identify the temperature of human body surfaces such as the hand or the foot.
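
A hedged Python equivalent of the described pipeline (the study itself used MATLAB): the scanned RGB image is converted to HSV, only the hue channel is kept, morphological opening and closing clean it, and simple statistics are read out; the file name and structuring-element size are assumptions.

```python
from skimage import io, color, morphology

rgb = io.imread("tlc_scan.png")[:, :, :3]          # hypothetical scanner image of the TLC sheet
hue = color.rgb2hsv(rgb)[:, :, 0]                  # keep hue, discard saturation and value

# Grayscale opening then closing with a small disk removes speckle and fills gaps.
selem = morphology.disk(3)
cleaned = morphology.closing(morphology.opening(hue, selem), selem)

print("max hue :", cleaned.max())                  # statistics used to track temperature
print("mean hue:", cleaned.mean())
```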

Keywords: thermochromic liquid crystal, TLC, mathematical morphology, hue image

Procedia PDF Downloads 479
3340 Evaluation of Condyle Alterations after Orthognathic Surgery with a Digital Image Processing Technique

Authors: Livia Eisler, Cristiane C. B. Alves, Cristina L. F. Ortolani, Kurt Faltin Jr.

Abstract:

Purpose: This paper proposes a technically simple diagnostic method for orthodontists and maxillofacial surgeons in order to evaluate discrete bone alterations. The methodology consists of a protocol to optimize the diagnosis and minimize the possibility of orthodontic and ortho-surgical retreatment. Materials and Methods: A protocol of image processing and analysis, through the ImageJ software and its plugins, was applied to 20 pairs of lateral cephalometric images obtained from cone beam computed tomographies, before and 1 year after orthognathic surgery. The optical density of the images was analyzed in the condylar region to determine possible bone alteration after surgical correction. Results: Image density was altered in all image pairs, especially regarding the condyle contours. According to the measurements, the condyle showed a gender-related density reduction (p=0.05), and the alterations of the condylar contours were registered in mm. Conclusion: A simple, viable, and cost-effective technique can be applied to achieve a more detailed image-based diagnosis that does not depend on the human eye and therefore offers more reliable, quantitative results.

Keywords: bone resorption, computer-assisted image processing, orthodontics, orthognathic surgery

Procedia PDF Downloads 163
3339 A Deep Learning Based Approach for Dynamically Selecting Pre-processing Technique for Images

Authors: Revoti Prasad Bora, Nikita Katyal, Saurabh Yadav

Abstract:

Pre-processing plays an important role in various image processing applications. Most of the time, due to the similar nature of images, a particular pre-processing step or set of steps is sufficient to produce the desired results. However, in the education domain there is a wide variety of images, such as images with line-based diagrams, chemical formulas, and mathematical equations, so a single pre-processing step or set of steps may not yield good results. Therefore, a deep learning-based approach for dynamically selecting a relevant pre-processing technique for each image is proposed. The proposed method works as a classifier that detects hidden patterns in the images and predicts the relevant pre-processing technique needed for each image. This approach was evaluated on an image similarity matching problem, but it can be adapted to other use cases as well. Experimental results showed significant improvement in average similarity ranking with the proposed method as opposed to static pre-processing techniques.

Keywords: deep-learning, classification, pre-processing, computer vision, image processing, educational data mining

Procedia PDF Downloads 167