Search results for: hyperspectral image classification using tree search algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9295

9025 Improvements in Double Q-Learning for Anomalous Radiation Source Searching

Authors: Bo-Bin Xiao, Chia-Yi Liu

Abstract:

In the task of searching for anomalous radiation sources, personnel holding radiation detectors may be exposed to unnecessary radiation risk, so automated search by machines becomes a necessary undertaking. This research uses several sophisticated deep reinforcement learning algorithms, namely double Q-learning, dueling networks, and NoisyNet, to search for radiation sources. The simulation environment, a 10×10 grid containing one shielding wall, supports the development of the AI model over 1 million training episodes. In each training episode, the radiation source position, radiation source intensity, agent position, shielding wall position, and shielding wall length are all set randomly. The three algorithms are applied to train AI models in four environments in which the shielding wall is a full-shielding wall, a lead wall, a concrete wall, or a lead or concrete wall appearing at random. The 12 best-performing AI models are selected by observing the reward value during training and are evaluated against a gradient search algorithm. The results show that the performance of the AI models, regardless of algorithm, is far better than that of the gradient search algorithm. In addition, as the simulation environment becomes more complex, the AI model combining Double DQN with the dueling and NoisyNet algorithms performs better.
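
A minimal sketch, assuming a flattened 10×10 state index and four movement actions, of the tabular double Q-learning update underlying the deep variants above (the dueling and NoisyNet versions replace these tables with network heads); the hyperparameters are illustrative, not the authors' settings:

```python
import numpy as np

n_states, n_actions = 10 * 10, 4   # flattened 10x10 grid, 4 moves (assumed)
Q1 = np.zeros((n_states, n_actions))
Q2 = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def choose_action(s, rng):
    """Epsilon-greedy over the sum of both tables."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q1[s] + Q2[s]))

def double_q_update(s, a, r, s_next, rng):
    """Core double Q-learning step: one table selects the greedy action,
    the other evaluates it, which removes the maximization bias."""
    if rng.random() < 0.5:
        a_star = int(np.argmax(Q1[s_next]))
        Q1[s, a] += alpha * (r + gamma * Q2[s_next, a_star] - Q1[s, a])
    else:
        a_star = int(np.argmax(Q2[s_next]))
        Q2[s, a] += alpha * (r + gamma * Q1[s_next, a_star] - Q2[s, a])
```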

Keywords: double Q learning, dueling network, NoisyNet, source searching

Procedia PDF Downloads 82
9024 Parallel 2-Opt Local Search on GPU

Authors: Wen-Bao Qiao, Jean-Charles Créput

Abstract:

To accelerate the solution of large-scale traveling salesman problems (TSP), a parallel 2-opt local search algorithm with a simple GPU-based implementation is presented and tested in this paper. The parallel scheme rests on a data-decomposition technique that dynamically assigns K processors along the integral tour so that the 2-opt optimization of K edges proceeds simultaneously on independent sub-tours, where K can be user-defined or expressed as a function of the input size N. We implement this algorithm with a doubly linked list on the GPU; the implementation requires only O(N) memory. We compare this parallel 2-opt local optimization against a sequential exhaustive 2-opt search along the integral tour on TSPLIB instances with more than 10,000 cities.
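
For reference, a sketch of the sequential exhaustive 2-opt sweep that the GPU scheme decomposes (each of the K processors runs this kind of pass on its own sub-tour); the array-based tour below is a simplification of the paper's doubly linked list:

```python
import numpy as np

def tour_length(tour, dist):
    """Total cycle length; `tour` is a list of city indices, `dist` a matrix."""
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_pass(tour, dist):
    """One exhaustive 2-opt sweep: reverse segment (i+1..j) whenever removing
    edges (a,b) and (c,d) in favor of (a,c) and (b,d) shortens the tour."""
    n = len(tour)
    improved = False
    for i in range(n - 1):
        for j in range(i + 2, n - (i == 0)):   # skip adjacent-edge pairs
            a, b = tour[i], tour[i + 1]
            c, d = tour[j], tour[(j + 1) % n]
            delta = dist[a, c] + dist[b, d] - dist[a, b] - dist[c, d]
            if delta < -1e-12:
                tour[i + 1 : j + 1] = tour[i + 1 : j + 1][::-1]
                improved = True
    return improved
```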

Keywords: parallel 2-opt, double links, large scale TSP, GPU

Procedia PDF Downloads 596
9023 Improved Image Retrieval for Efficient Localization in Urban Areas Using Location Uncertainty Data

Authors: Mahdi Salarian, Xi Xu, Rashid Ansari

Abstract:

Accurate localization of mobile devices based on camera-acquired visual media usually requires a search over a very large GPS-referenced image database. This paper proposes an efficient method for limiting the search space of the image retrieval engine by extracting and leveraging additional media information about the Estimated Positional Error (EPE), addressing complexity and accuracy issues in the search and, in particular, compensating for GPS inaccuracy in dense urban areas. The improved performance is achieved with up to a hundred-fold reduction of the search area used by available reference methods while providing improved accuracy. To test our procedure, we created a database of Google Street View (GSV) images of downtown Chicago; other available databases are unsuitable for our approach because they lack EPE for the query images. We tested the procedure on more than 200 query images with EPE, acquired mostly in the densest areas of Chicago with different phones and under different conditions such as low illumination and from under rail tracks. The effectiveness of our approach and the effect of the size and sector angle of the search area are discussed, and experimental results demonstrate how the proposed method improves performance simply by utilizing data that is already available on mobile systems such as smartphones.
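
A minimal sketch of the EPE-gated candidate filtering idea, assuming the database is a list of dicts with 'lat'/'lon' geotags and an inflation margin on the uncertainty radius; the paper's additional sector-angle restriction around the device heading is omitted here:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def candidates_within_epe(query_fix, epe_m, database, margin=1.5):
    """Keep only reference images whose geotag falls inside the positional
    uncertainty circle (EPE, inflated by a safety margin) around the GPS fix."""
    lat_q, lon_q = query_fix
    radius = margin * epe_m
    return [img for img in database
            if haversine_m(lat_q, lon_q, img["lat"], img["lon"]) <= radius]
```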

Keywords: localization, retrieval, GPS uncertainty, bag of words

Procedia PDF Downloads 261
9022 Cuckoo Search Optimization for Black Scholes Option Pricing

Authors: Manas Shah

Abstract:

The Black-Scholes option pricing model is one of the most important concepts in the modern world of computational finance. However, its practical use can be challenging because one of its input parameters must be estimated: the implied volatility of the underlying security. The more precisely this value is estimated, the more accurate the corresponding theoretical option prices will be. Here, we present a novel model based on Cuckoo Search Optimization (CS) that finds more precise estimates of implied volatility than Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA).
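
A compact sketch of the idea under stated assumptions: a closed-form Black-Scholes call price and a cuckoo-search loop over sigma minimizing the squared pricing error. The heavy-tailed Cauchy step stands in for the Lévy flight, and the bounds and hyperparameters are illustrative, not the paper's:

```python
import math
import numpy as np

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol_cuckoo(price, S, K, T, r, n_nests=15, iters=200, pa=0.25, seed=0):
    """Cuckoo search over sigma in (0, 2]: heavy-tailed flights around the
    best nest plus random abandonment of the worst fraction pa of nests."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(0.01, 2.0, n_nests)
    err = lambda s: (bs_call(S, K, T, r, s) - price) ** 2
    fit = np.array([err(s) for s in nests])
    for _ in range(iters):
        best = nests[np.argmin(fit)]
        step = 0.1 * rng.standard_cauchy(n_nests)   # Levy-style heavy tail
        trial = np.clip(best + step, 0.01, 2.0)
        tfit = np.array([err(s) for s in trial])
        improve = tfit < fit
        nests[improve], fit[improve] = trial[improve], tfit[improve]
        n_bad = int(pa * n_nests)                   # abandon the worst nests
        if n_bad:
            worst = np.argsort(fit)[-n_bad:]
            nests[worst] = rng.uniform(0.01, 2.0, n_bad)
            fit[worst] = [err(s) for s in nests[worst]]
    return nests[np.argmin(fit)]
```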

Keywords: black scholes model, cuckoo search optimization, particle swarm optimization, genetic algorithm

Procedia PDF Downloads 430
9021 Crow Search Algorithm-Based Task Offloading Strategies for Fog Computing Architectures

Authors: Aniket Ganvir, Ritarani Sahu, Suchismita Chinara

Abstract:

The rapid digitization of various aspects of life is creating smart IoT ecosystems in which interconnected devices generate significant amounts of valuable data. However, IoT devices face constraints such as limited computational resources and bandwidth. Cloud computing offers ample resources for offloading tasks but introduces latency, which is problematic for time-sensitive applications. Fog computing (FC) addresses this latency concern by bringing computation and storage closer to the network edge, minimizing the distance data must travel and enhancing efficiency. Offloading tasks to fog nodes or the cloud can conserve energy and extend IoT device lifespans. The offloading process is intricate, with tasks categorized as full or partial, and its optimization is an NP-hard problem. Traditional greedy search methods struggle to address the complexity of task offloading efficiently. To overcome this, the efficient crow search algorithm (ECSA) is proposed as a meta-heuristic optimization algorithm. ECSA aims to optimize computation offloading effectively, providing solutions to this challenging problem.
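
A sketch of the canonical crow search skeleton that an ECSA-style method builds on; the cost function encoding latency and energy of an offloading decision is the paper's contribution and is left abstract here, and the continuous [0, 1] encoding of per-task offloading fractions is an assumption:

```python
import numpy as np

def crow_search(cost, dim, n_crows=20, iters=300, fl=2.0, ap=0.1, seed=0):
    """Canonical crow search: each crow tails a random crow toward that
    crow's memorized best position; with awareness probability ap the
    tailed crow 'notices' and the follower moves randomly instead."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, (n_crows, dim))   # candidate offloading vectors
    mem = pos.copy()                               # each crow's best-known position
    mem_fit = np.array([cost(p) for p in mem])
    for _ in range(iters):
        for i in range(n_crows):
            j = int(rng.integers(n_crows))         # crow i follows crow j
            if rng.random() >= ap:
                new = pos[i] + fl * rng.random() * (mem[j] - pos[i])
            else:
                new = rng.uniform(0.0, 1.0, dim)
            new = np.clip(new, 0.0, 1.0)
            f = cost(new)
            pos[i] = new
            if f < mem_fit[i]:
                mem[i], mem_fit[i] = new, f
    return mem[np.argmin(mem_fit)], mem_fit.min()
```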

Keywords: IoT, fog computing, task offloading, efficient crow search algorithm

Procedia PDF Downloads 17
9020 A Nonlocal Means Algorithm for Poisson Denoising Based on Information Geometry

Authors: Dongxu Chen, Yipeng Li

Abstract:

This paper presents an information-geometric Nonlocal Means (NLM) algorithm for Poisson denoising. NLM estimates a noise-free pixel as a weighted average of image pixels, where each pixel is weighted according to the similarity between image patches, traditionally measured in Euclidean space. In this work, every pixel is modeled as a Poisson distribution whose rate is locally estimated by Maximum Likelihood (ML), and the set of all such distributions forms a statistical manifold. NLM denoising is then conducted on this statistical manifold, where the Fisher information metric yields geodesic distances between distributions that serve as the similarity between patches. The approach is demonstrated to be competitive with related state-of-the-art methods.
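
On the Poisson manifold the Fisher metric admits a closed-form geodesic distance, d(λ1, λ2) = 2·|√λ1 − √λ2|, which makes the patch similarity cheap to compute. A minimal sketch under assumed patch and search-window sizes and smoothing parameter h (the per-pixel ML rate estimate is just the observed count):

```python
import numpy as np

def fisher_rao_poisson(l1, l2):
    """Geodesic distance on the Poisson manifold (Fisher metric 1/lambda)."""
    return 2.0 * np.abs(np.sqrt(l1) - np.sqrt(l2))

def nlm_poisson(img, patch=3, search=7, h=2.0):
    """NLM where patch similarity is the summed squared Fisher-Rao distance
    between per-pixel Poisson rates instead of the Euclidean distance."""
    pad = search // 2 + patch // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    pr = patch // 2
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            cy, cx = y + pad, x + pad
            ref = padded[cy - pr: cy + pr + 1, cx - pr: cx + pr + 1]
            wsum, acc = 0.0, 0.0
            for dy in range(-(search // 2), search // 2 + 1):
                for dx in range(-(search // 2), search // 2 + 1):
                    cand = padded[cy + dy - pr: cy + dy + pr + 1,
                                  cx + dx - pr: cx + dx + pr + 1]
                    d2 = np.sum(fisher_rao_poisson(ref, cand) ** 2)
                    w = np.exp(-d2 / (h * h))
                    wsum += w
                    acc += w * padded[cy + dy, cx + dx]
            out[y, x] = acc / wsum
    return out
```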

Keywords: image denoising, Poisson noise, information geometry, nonlocal-means

Procedia PDF Downloads 263
9019 Comparison of Various Classification Techniques Using WEKA for Colon Cancer Detection

Authors: Beema Akbar, Varun P. Gopi, V. Suresh Babu

Abstract:

Colon cancer causes the deaths of about half a million people every year. The common method of detection is histopathological tissue analysis, which imposes fatigue and a heavy workload on the pathologist. A novel method is proposed that combines structural and statistical pattern recognition for the detection of colon cancer. This paper presents a comparison among different classifiers, such as the Multilayer Perceptron (MLP), Sequential Minimal Optimization (SMO), Bayesian Logistic Regression (BLR), and K-star, using classification accuracy and error rate based on the percentage-split method. The results show that the best algorithm in WEKA is the MLP classifier, with an accuracy of 83.333% and a kappa statistic of 0.625. The MLP classifier, having the lower error rate, is preferred for its more powerful classification capability.
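
A sketch of the percentage-split evaluation protocol, using scikit-learn as a stand-in for WEKA; the split fraction and the MLP settings are assumptions:

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

def percentage_split_eval(X, y, test_frac=1/3, seed=0):
    """Percentage-split evaluation as in WEKA: hold out a fixed fraction,
    train on the rest, report accuracy and Cohen's kappa."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_frac, random_state=seed, stratify=y)
    clf = MLPClassifier(max_iter=1000, random_state=seed).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    return accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred)
```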

Keywords: colon cancer, histopathological image, structural and statistical pattern recognition, multilayer perceptron

Procedia PDF Downloads 550
9018 Water Detection in Aerial Images Using Fuzzy Sets

Authors: Caio Marcelo Nunes, Anderson da Silva Soares, Gustavo Teodoro Laureano, Clarimar Jose Coelho

Abstract:

This paper presents a methodology for pixel recognition in aerial images using the fuzzy c-means algorithm. This algorithm is an alternative for recognizing areas while accounting for uncertainties and inaccuracies. Traditional clustering techniques used in the recognition of multispectral images of the earth's surface find well-defined borders that can be easily discretized. In the real world, however, there are many areas with uncertainties and inaccuracies that can be mapped by clustering algorithms based on fuzzy sets. The methodology presented in this work is applied to multispectral images obtained from the Landsat-5/TM satellite. The pixels are grouped using the c-means algorithm, after which a classification process identifies the surface types according to the patterns obtained from the spectral response of the image surface. The classes considered are exposed soil, moist soil, vegetation, turbid water, and clean water. The results show that fuzzy clustering identifies the real type of the earth's surface.
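
A minimal fuzzy c-means sketch; the fuzzifier m, iteration count, and tolerance are conventional defaults, not the paper's settings. X holds one row per pixel with one column per spectral band:

```python
import numpy as np

def fuzzy_c_means(X, c=5, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard FCM: alternate membership and centroid updates until the
    membership matrix U stops changing."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        U_new = d ** (-2 / (m - 1))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return U, centers
```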

Keywords: aerial images, fuzzy clustering, image processing, pattern recognition

Procedia PDF Downloads 444
9017 Determination of Water Pollution and Water Quality with Decision Trees

Authors: Çiğdem Bakır, Mecit Yüzkat

Abstract:

With the increasing emphasis on water quality worldwide, the search for new and intelligent monitoring systems, and the market for them, has expanded. The current method is a laboratory process in which samples are taken from bodies of water and tests are carried out in laboratories; this is time-consuming, wasteful of manpower, and uneconomical. To solve this problem, we used machine learning methods to detect water pollution in our study. We created decision trees with the Orange3 software and tried to determine all the factors that cause water pollution. An automatic prediction model based on water quality was developed with machine learning methods, taking many model inputs such as water temperature, pH, transparency, conductivity, dissolved oxygen, and ammonia nitrogen. The proposed approach consists of three stages: preprocessing of the data, feature detection, and classification. We measured the success of our study with different accuracy metrics and present the results comparatively. With the decision tree we achieved approximately 98% success.
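
A sketch of the classification stage with scikit-learn standing in for Orange3; the feature list follows the abstract, while the depth limit and split ratio are assumptions:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

FEATURES = ["temperature", "pH", "transparency", "conductivity",
            "dissolved_oxygen", "ammonia_nitrogen"]  # inputs named in the abstract

def train_quality_tree(X, y, max_depth=6, seed=0):
    """Fit a depth-limited decision tree on water-quality measurements and
    report held-out accuracy; X columns must follow the FEATURES order."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=seed, stratify=y)
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=seed)
    tree.fit(X_tr, y_tr)
    return tree, accuracy_score(y_te, tree.predict(X_te))
```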

Keywords: decision tree, water quality, water pollution, machine learning

Procedia PDF Downloads 63
9016 Emotional Analysis for Text Search Queries on Internet

Authors: Gemma García López

Abstract:

The goal of this study is to analyze whether search queries carried out in search engines such as Google can offer emotional information about the user who performs them. Knowing the emotional state of an Internet user can be key to achieving maximum personalization of content and to detecting worrying behaviors. To this end, two studies were carried out using tools with advanced natural language processing techniques. The first study determines whether a query can be classified as positive, negative, or neutral, while the second extracts emotional content from words and applies the categorical and dimensional models for the representation of emotions. In addition, we use search queries in Spanish and English to establish similarities and differences between the two languages. The results revealed that text search queries performed by users on the Internet can be classified emotionally. This allows us to better understand the emotional state of the user at the time of the search, which could involve adapting the technology and personalizing the responses to different emotional states.

Keywords: emotion classification, text search queries, emotional analysis, sentiment analysis in text, natural language processing

Procedia PDF Downloads 118
9015 Improving Load Frequency Control of a Multi-Area Power System by Considering Uncertainty, Using an Optimized Type 2 Fuzzy PID Controller with the Harmony Search Algorithm

Authors: Mehrdad Mahmudizad, Roya Ahmadi Ahangar

Abstract:

This paper presents a method for designing type 2 fuzzy PID controllers to solve the Load Frequency Control (LFC) problem. The Harmony Search (HS) algorithm is used to tune the scaling factors and the uncertainty of the membership functions of Interval Type 2 Fuzzy Proportional Integral Derivative (IT2FPID) controllers in order to reduce the frequency deviation resulting from load oscillations. The simulation results show that the performance of the proposed IT2FPID LFC, in terms of error, settling time, and robustness against different load oscillations, is better than that of PID and Type 1 Fuzzy Proportional Integral Derivative (T1FPID) controllers.
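
A sketch of the basic harmony search loop used for this kind of tuning. Here `simulate_lfc_cost` is a hypothetical routine that would run the LFC simulation and return an error criterion; the HS parameters and gain bounds are illustrative:

```python
import numpy as np

def harmony_search(cost, bounds, hms=20, iters=500, hmcr=0.9, par=0.3, bw=0.05, seed=0):
    """Basic harmony search: improvise a new vector by drawing each variable
    from memory (prob. hmcr), optionally pitch-adjusting it (prob. par), or
    sampling it afresh; replace the worst memory entry if the new one is better."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    mem = rng.uniform(lo, hi, (hms, len(bounds)))
    fit = np.array([cost(h) for h in mem])
    for _ in range(iters):
        new = np.empty(len(bounds))
        for d in range(len(bounds)):
            if rng.random() < hmcr:
                new[d] = mem[rng.integers(hms), d]
                if rng.random() < par:
                    new[d] += bw * (hi[d] - lo[d]) * rng.uniform(-1, 1)
            else:
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        f = cost(new)
        worst = int(np.argmax(fit))
        if f < fit[worst]:
            mem[worst], fit[worst] = new, f
    return mem[np.argmin(fit)]

# e.g. gains = harmony_search(lambda k: simulate_lfc_cost(*k),
#                             bounds=[(0, 10), (0, 10), (0, 10)])  # Kp, Ki, Kd
```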

Keywords: load frequency control, fuzzy PID controller, type 2 fuzzy system, harmony search algorithm

Procedia PDF Downloads 241
9014 A Comparison of South East Asian Face Emotion Classification Based on Optimized Ellipse Data Using Clustering Technique

Authors: M. Karthigayan, M. Rizon, Sazali Yaacob, R. Nagarajan, M. Muthukumaran, Thinaharan Ramachandran, Sargunam Thirugnanam

Abstract:

In this paper, sets of irregular and regular ellipse-fitting equations, optimized by a genetic algorithm (GA), are applied to lip and eye features to classify human emotions. Two South East Asian (SEA) faces are considered in this work for the emotion classification. Six emotions and one neutral state are considered as the outputs, and each subject shows unique lip and eye characteristics for the various emotions. The GA is adopted to optimize the irregular-ellipse characteristics of the lip and eye features in each emotion; that is, the top portion of the lip contour is part of one ellipse and the bottom portion part of a different ellipse. Two ellipse-based fitness equations are proposed for the lip configuration, and the relevant parameters that define the emotions are listed. The GA method achieves reasonably successful classification of emotion, but in some cases the optimized data values of one emotion overlap with the ranges of other emotions. To overcome this overlap and at the same time improve the classification, a fuzzy clustering method (FCM) has been implemented. The GA-FCM approach offers reasonably good classification within the cluster ranges; this was demonstrated on the two SEA subjects and improved the classification rate.

Keywords: ellipse fitness function, genetic algorithm, emotion recognition, fuzzy clustering

Procedia PDF Downloads 526
9013 A Hybrid Pareto-Based Swarm Optimization Algorithm for the Multi-Objective Flexible Job Shop Scheduling Problems

Authors: Aydin Teymourifar, Gurkan Ozturk

Abstract:

In this paper, a new hybrid particle swarm optimization algorithm is proposed for the multi-objective flexible job shop scheduling problem, an important and hard combinatorial problem. The Pareto approach is used to handle the multiple objectives, and several new local search heuristics based on the critical-block concept are integrated into the algorithm to enhance its performance. The algorithm is compared with recently published multi-objective algorithms on benchmarks selected from the literature, using several metrics to quantify the performance of the achieved solutions; the algorithms are also compared under the weighted-summation-of-objectives approach. The proposed algorithm finds Pareto solutions more efficiently than the compared algorithms in less computational time.
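
For concreteness, a minimal sketch of the non-dominated filtering such a Pareto-based algorithm applies to its archive; the pairing of minimized objectives (e.g., makespan and total workload) is an assumption:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the non-dominated schedules; `solutions` is a list of
    (objective_tuple, schedule) pairs, e.g. ((makespan, workload), sched)."""
    return [(obj, sched) for obj, sched in solutions
            if not any(dominates(other, obj) for other, _ in solutions)]
```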

Keywords: swarm-based optimization, local search, Pareto optimality, flexible job shop scheduling, multi-objective optimization

Procedia PDF Downloads 348
9012 Novel Algorithm for Restoration of Retina Images

Authors: P. Subbuthai, S. Muruganand

Abstract:

Diabetic retinopathy is a complicated disease caused by changes in the blood vessels of the retina. Retina images captured with a fundus camera sometimes suffer from poor contrast and noise, and because of this noise the detection of blood vessels in the retina is very complicated, so preprocessing is needed. In this paper, a novel algorithm is implemented to remove noisy pixels in the retina image. The proposed algorithm, an Extended Median Filter, is applied to the green channel of the retina image because the vessels in the green channel are brighter than the background. The proposed filter is compared with the existing standard median filter using performance metrics such as PSNR, MSE, and RMSE. Experimental results show that the proposed Extended Median Filter gives better results than the standard median filter in terms of noise suppression and detail preservation.
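
A baseline sketch of the comparison pipeline: a standard median filter on the green channel (the authors' extended variant is their contribution and is not reproduced here) plus the PSNR metric used in the evaluation:

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_green_channel(rgb, size=3):
    """Median-filter the green channel of a fundus image (the channel the
    paper works on); `size` is an assumed kernel width."""
    green = rgb[:, :, 1].astype(float)
    return median_filter(green, size=size)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```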

Keywords: fundus retina image, diabetic retinopathy, median filter, microaneurysms, exudates

Procedia PDF Downloads 316
9011 Information Management Approach in the Prediction of Acute Appendicitis

Authors: Ahmad Shahin, Walid Moudani, Ali Bekraki

Abstract:

This research presents a predictive data mining model for accurate diagnosis of acute appendicitis, with the aim of maximizing health service quality, minimizing morbidity/mortality, and reducing cost. Acute appendicitis is a very common condition that requires timely, accurate diagnosis followed by surgical intervention. Although its treatment is simple and straightforward, its diagnosis is still difficult because no single sign, symptom, laboratory test, or imaging examination accurately confirms acute appendicitis in all cases, which contributes to increased morbidity and negative appendectomy. In this study, the authors propose to generate an accurate model for predicting acute appendicitis based, first, on a segmentation technique combined with the ABC algorithm to segment the patients; second, on fuzzy logic to process the massive volume of heterogeneous and noisy data (age, sex, fever, white blood cell count, neutrophilia, CRP, urine, ultrasound, CT, appendectomy, etc.) in order to express knowledge and analyze the relationships among the data in a comprehensive manner; and third, on a dynamic programming technique to reduce the number of data attributes. The proposed model is evaluated against a set of benchmark techniques and on benchmark classification problems for osteoporosis, diabetes, and heart disease obtained from the UCI repository and other data sources.

Keywords: healthcare management, acute appendicitis, data mining, classification, decision tree

Procedia PDF Downloads 325
9010 Routing Medical Images with Tabu Search and Simulated Annealing: A Study on Quality of Service

Authors: Mejía M. Paula, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

In telemedicine, the image repository service is important for increasing the accuracy of diagnostic support for medical personnel. This study compares two routing algorithms with respect to quality of service (QoS) in order to analyze their performance when uploading and/or downloading medical images, focusing on how Tabu Search compares with other heuristic and metaheuristic algorithms that improve QoS in telemedicine services in Colombia. Tabu Search and Simulated Annealing are chosen for their wide applicability to this type of problem, and QoS is measured with the following metrics: delay, throughput, jitter, and latency. Routing tests were carried out on ten 40 MB images in Digital Imaging and Communications in Medicine (DICOM) format. Each test ran for ten minutes under different traffic conditions, for a total of 25 tests, from a server at Universidad Militar Nueva Granada (UMNG) in Bogotá, Colombia, to a remote user at Universidad de Santiago de Chile (USACH) in Chile. The results show that Tabu Search delivers better QoS than Simulated Annealing, optimizing the routing of medical images, a basic requirement for offering diagnostic image services in telemedicine.

Keywords: medical image, QoS, simulated annealing, Tabu search, telemedicine

Procedia PDF Downloads 193
9019 Improving Diver Tracking and Classification in Sonar Images Using a Robust Diver Wake Detection Algorithm

Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy

Abstract:

Harbor protection systems are very important, and the need for automatic protection systems has increased in recent years. Diver-detection active sonar is of great significance here: it is used to detect underwater threats such as divers and autonomous underwater vehicles. To detect such threats automatically, the sonar image is processed by algorithms that detect, track, and classify underwater objects. In this work, a diver tracking and classification algorithm is improved by proposing a robust wake detection method. To detect objects, the sonar image is normalized and then segmented with a fixed threshold. Next, the centroids of the segments are found and clustered based on a distance metric, and a linear Kalman filter is applied to track the objects. To reduce the effect of noise and the creation of false tracks, the Kalman tracker is fine-tuned; the tuning is based on our active sonar's specifications. After the tracks are initialized and updated, they pass through a filtering stage that eliminates noisy and unstable tracks, as well as objects whose speed falls outside the diver speed range, such as buoys and fast boats. The resulting tracks are then subjected to a classification stage to decide whether the tracked object is an open-circuit or a closed-circuit diver. At this stage, a small area around the object is extracted, a novel wake detection method is applied, and morphological features of the object together with its wake are extracted. We used a support vector machine to find the best classifier. The training and test sonar images were collected by ARMELSAN Defense Technologies Company using the portable diver detection sonar ARAS-2023. After applying the algorithm to the test sonar data, we obtain fine, stable tracks of the divers, and the total diver-type classification accuracy achieved is 97%.
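
A minimal sketch of the linear constant-velocity Kalman tracker and the speed gate described above; the time step, noise levels, and diver speed limit are placeholders for the sonar-specific tuning the authors perform:

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=0.01, r=1.0):
    """Constant-velocity Kalman matrices for 2-D sonar tracking;
    state = [x, y, vx, vy], measurement = segment centroid [x, y]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)
    Q = q * np.eye(4)          # process noise (tuned to the sonar specs)
    R = r * np.eye(2)          # measurement noise
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle for a centroid measurement z."""
    x = F @ x                  # predict
    P = F @ P @ F.T + Q
    y = z - H @ x              # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y              # update
    P = (np.eye(4) - K @ H) @ P
    return x, P

def plausible_diver(x, v_max=1.5):
    """Speed gate used to drop buoys/fast boats (v_max in m/s is assumed)."""
    return np.hypot(x[2], x[3]) <= v_max
```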

Keywords: harbor protection, diver detection, active sonar, wake detection, diver classification

Procedia PDF Downloads 211
9008 Comparison of Parallel CUDA and OpenMP Implementations of Memetic Algorithms for Solving Optimization Problems

Authors: Jason Digalakis, John Cotronis

Abstract:

Memetic algorithms (MAs) are useful for solving optimization problems, but searching the space of a high-dimensional optimization problem is quite difficult, and exploiting all the cores of a system is a further challenge. In this study, a sequential implementation of the memetic algorithm is converted into a concurrent version executed on the cores of both the CPU and the GPU; the OpenMP and CUDA libraries are applied to the parallel algorithm for concurrent execution on the CPU and GPU, respectively. The aim of this study is to compare the CPU and GPU implementations of the memetic algorithm. For this purpose, fourteen benchmark functions are selected as test problems. The obtained results indicate that our approach achieves speedups of up to five thousand times over a single CPU thread while maintaining reasonable result quality. This clearly shows that GPUs have the potential to accelerate MAs and allow them to solve much more complex tasks.
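
A sequential sketch of the memetic template being parallelized: evolutionary variation plus a short local refinement of each offspring (the "meme"). Population size, step sizes, and the quadratic test function in the usage note are illustrative:

```python
import numpy as np

def memetic_minimize(f, dim, pop=40, gens=200, sigma=0.3, ls_steps=10, seed=0):
    """Skeleton memetic algorithm: tournament selection, blend crossover,
    Gaussian mutation, then greedy local search on the offspring."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-5, 5, (pop, dim))
    fit = np.array([f(x) for x in P])
    for _ in range(gens):
        i, j = rng.integers(pop, size=2)          # tournament pick of parent 1
        p1 = P[i] if fit[i] < fit[j] else P[j]
        i, j = rng.integers(pop, size=2)          # tournament pick of parent 2
        p2 = P[i] if fit[i] < fit[j] else P[j]
        child = 0.5 * (p1 + p2) + sigma * rng.standard_normal(dim)
        # local refinement; each offspring's refinement is independent,
        # which is what the OpenMP/CUDA versions run concurrently
        fc = f(child)
        for _ in range(ls_steps):
            trial = child + 0.1 * sigma * rng.standard_normal(dim)
            ft = f(trial)
            if ft < fc:
                child, fc = trial, ft
        worst = int(np.argmax(fit))
        if fc < fit[worst]:
            P[worst], fit[worst] = child, fc
    return P[np.argmin(fit)], fit.min()

# e.g. best, val = memetic_minimize(lambda x: np.sum(x**2), dim=30)
```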

Keywords: memetic algorithm, CUDA, GPU-based memetic algorithm, open multi processing, multimodal functions, unimodal functions, non-linear optimization problems

Procedia PDF Downloads 58
9007 Data-Centric Anomaly Detection with Diffusion Models

Authors: Sheldon Liu, Gordon Wang, Lei Liu, Xuefeng Liu

Abstract:

Anomaly detection, also referred to as one-class classification, plays a crucial role in identifying product images that deviate from the expected distribution. This study introduces Data-centric Anomaly Detection with Diffusion Models (DCADDM), presenting a systematic strategy for data collection and further diversifying the data with image generation via diffusion models. The algorithm addresses data collection challenges in real-world scenarios and points toward data augmentation through the integration of generative AI capabilities. The paper explores the generation of normal images using diffusion models. The experiments demonstrate that, with 30% of the original normal-image dataset, modeling in an unsupervised setting with state-of-the-art approaches can achieve equivalent performance. With the addition of images generated via diffusion models (equivalent to 10% of the original dataset size), the proposed algorithm achieves better or equivalent anomaly localization performance.

Keywords: diffusion models, anomaly detection, data-centric, generative AI

Procedia PDF Downloads 47
9006 Applications of Hyperspectral Remote Sensing: A Commercial Perspective

Authors: Tuba Zahra, Aakash Parekh

Abstract:

Hyperspectral remote sensing refers to imaging objects or materials in narrow, conspicuous spectral bands. Hyperspectral images (HSI) enable the extraction of spectral signatures for the objects or materials observed; these images contain information about the reflectance of each pixel across the electromagnetic spectrum. The technique acquires data simultaneously in hundreds of spectral bands with narrow bandwidths and can provide detailed, contiguous spectral curves that traditional multispectral sensors cannot offer. This contiguous, narrow bandwidth facilitates the detailed surveying of Earth's surface features, which would otherwise not be possible with the relatively coarse bandwidths of other imaging sensors, and provides significantly higher spectral and spatial resolution. Several use cases represent the commercial applications of hyperspectral remote sensing, each illustrating one of the ways hyperspectral satellite imagery can support operational efficiency in its vertical; some use cases are specific to the VNIR bands, while others are specific to the SWIR bands. This paper discusses the commercially viable use cases that are significant for HSI application areas such as agriculture, mining, oil and gas, defense, environment, and climate. In theory there are any number of use cases for each application area, but an attempt has been made to shortlist them by economic feasibility and commercial viability, and to present a review of the literature from this perspective. Specific use cases in agriculture include crop species (sub-variety) detection, soil health mapping, pre-symptomatic crop disease detection, invasive species detection, crop condition optimization, yield estimation, and supply chain monitoring at scale. Similarly, each industry vertical has specific commercially viable use cases that are discussed in detail in the paper.

Keywords: agriculture, mining, oil and gas, defense, environment and climate, hyperspectral, VNIR, SWIR

Procedia PDF Downloads 47
9005 A Summary-Based Text Classification Model for Graph Attention Networks

Authors: Shuo Liu

Abstract:

In Chinese text classification tasks, redundant words and phrases can interfere with the extraction and analysis of text information, reducing the accuracy of the classification model. To reduce irrelevant elements, extract and utilize text content more efficiently, and improve the accuracy of text classification models, this paper first summarizes each text in the corpus using the TextRank algorithm, uses the words in the summary as nodes to construct a text graph, and then applies a graph attention network (GAT) to complete the classification task. In tests on a Chinese dataset collected from the web, classification accuracy improved over the direct method of generating graph structures from the full text.
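
A minimal sketch of the TextRank step that produces the summary nodes, using networkx PageRank over a word co-occurrence graph; the window size, and the use of jieba for Chinese segmentation in the usage note, are assumptions:

```python
import networkx as nx

def textrank_keywords(tokens, window=4, top_k=10):
    """TextRank over a word co-occurrence graph: words become nodes, words
    co-occurring within a sliding window share an edge, and PageRank scores
    rank the nodes kept for the downstream classification graph."""
    g = nx.Graph()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1 : i + window]:
            if w != v:
                g.add_edge(w, v)
    scores = nx.pagerank(g)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# tokens would come from a Chinese segmenter such as jieba (assumed):
# keywords = textrank_keywords(list(jieba.cut(document)))
```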

Keywords: Chinese natural language processing, text classification, abstract extraction, graph attention network

Procedia PDF Downloads 68
9004 Image Enhancement Algorithm of Photoacoustic Tomography Using Active Contour Filtering

Authors: Prasannakumar Palaniappan, Dong Ho Shin, Chul Gyu Song

Abstract:

The photoacoustic images are obtained from a custom-built linear-array photoacoustic tomography system, with biological specimens imitated by phantom tests in order to retrieve fully functional photoacoustic images. The acquired image undergoes active region-based contour filtering to remove noise and accurately segment the object area for further processing, with the universal back-projection method used as the image reconstruction algorithm. The active contour filtering is analyzed by evaluating the signal-to-noise ratio and comparing it with other filtering methods.

Keywords: contour filtering, linear array, photoacoustic tomography, universal back projection

Procedia PDF Downloads 378
9003 Improved Color-Based K-Means Algorithm for Clustering of Satellite Images

Authors: Sangeeta Yadav, Mantosh Biswas

Abstract:

In this paper, we propose an improved color-based K-means algorithm for clustering satellite (SAR) images. Our method comprises two stages. The first is an interactive selection process in which users input the number of colors (ncolor) and the number of clusters and are then prompted to select points in each color cluster. In the second stage, these points are given as input to the K-means clustering algorithm, which clusters the image based on color and minimum squared Euclidean distance. The proposed method reduces the mixed-pixel problem to a great extent.
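
A sketch of the second stage under the stated interactive seeding, assuming the user-selected points arrive as initial RGB cluster centers:

```python
import numpy as np

def kmeans_seeded(pixels, seeds, iters=50):
    """K-means on pixel colors with user-selected seed points as the initial
    centers (the paper's interactive step); pixels is an (n, 3) color array."""
    centers = np.asarray(seeds, float)
    for _ in range(iters):
        # assign each pixel to the nearest center (squared Euclidean distance)
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new = np.array([pixels[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```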

Keywords: cluster, ncolor method, K-means method, interactive selection process

Procedia PDF Downloads 263
9002 Forecasting Unusual Infections of Patients Using Infrequent Weighted Item Sets

Authors: Seema Vaidya

Abstract:

Mining association rules is a key issue in data mining. However, the standard models ignore differences among transactions, and weighted association rule mining does not work on databases with only binary attributes. This paper adopts the frequent pattern tree (FP-tree) structure, an extended prefix-tree structure for storing compressed, crucial information about frequent patterns, and uses an efficient FP-tree-based mining method, the FP-growth algorithm, for mining the complete set of patterns by pattern fragment growth. The paper addresses the problem of mining rare and weighted item sets, i.e., the infrequent weighted item set (IWI) mining problem. Two novel quality measures are proposed for this problem, and algorithms that perform IWI and minimal IWI mining are presented. Moreover, the infrequent item sets are used in a decision-based framework. The general problem of inducing reliable diagnostic rules is difficult because, in principle, no induction technique can by itself guarantee the correctness of the induced hypotheses; accordingly, this framework predicts disorders from rare signs. An evaluation study demonstrates that the proposed algorithm is effective and scalable for mining both long and short diagnostic rules, and improves the results of forecasting rare diseases of patients.

Keywords: association rule, data mining, IWI mining, infrequent item set, frequent pattern growth

Procedia PDF Downloads 380
9001 Hyperspectral Imaging and Nonlinear Fukunaga-Koontz Transform Based Food Inspection

Authors: Hamidullah Binol, Abdullah Bal

Abstract:

Nowadays, food safety is a great public concern; therefore, robust and effective techniques are required for assessing the safety of goods. Hyperspectral Imaging (HSI) is an attractive tool for researchers inspecting food quality and safety, with applications such as meat quality assessment, automated poultry carcass inspection, quality evaluation of fish, bruise detection in apples, quality analysis and grading of citrus fruits, bruise detection in strawberries, visualization of the sugar distribution of melons, measuring the ripeness of tomatoes, defect detection in pickling cucumbers, and classification of wheat kernels. HSI can concurrently collect large amounts of spatial and spectral data on the objects being observed, yielding exceptional detection capability that cannot be achieved with either imaging or spectroscopy alone. This paper presents a nonlinear technique based on the kernel Fukunaga-Koontz transform (KFKT) for detecting the fat content of ground meat using HSI. The KFKT, the nonlinear version of the FKT, is one of the most effective techniques for problems of a two-pattern nature; the conventional FKT has been improved with kernel machines to increase its nonlinear discrimination ability and capture higher-order statistics of the data. The approach proposed in this paper segments the fat content of ground meat by regarding fat as the target class to be separated from the remaining classes (treated as clutter). We applied the KFKT to visible and near-infrared (VNIR) hyperspectral images of ground meat to determine the fat percentage, and the experimental studies indicate that the proposed technique produces high detection performance for the fat ratio in ground meat.
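
A sketch of the linear FKT that the kernel version generalizes (the kernelization itself, which replaces these covariance matrices with Gram matrices, is not reproduced here). Whitening the summed class covariances makes the transformed target and clutter covariances share eigenvectors with eigenvalues e and 1 − e, so the dominant target directions are automatically the weakest clutter directions, which is what makes the transform suited to two-pattern problems:

```python
import numpy as np

def fkt_operator(X_target, X_clutter):
    """Linear Fukunaga-Koontz transform: whiten C1 + C2, then eigendecompose
    the whitened target covariance. Rows of X_* are samples (spectra)."""
    C1 = np.cov(X_target, rowvar=False)
    C2 = np.cov(X_clutter, rowvar=False)
    vals, vecs = np.linalg.eigh(C1 + C2)
    P = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T  # whitening
    e, V = np.linalg.eigh(P @ C1 @ P.T)
    return P.T @ V, e   # FKT axes (features = X @ axes) and target eigenvalues
```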

Keywords: food (ground meat) inspection, Fukunaga-Koontz transform, hyperspectral imaging, kernel methods

Procedia PDF Downloads 404
9000 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform

Authors: David Jurado, Carlos Ávila

Abstract:

Detection of early signs of breast cancer development is crucial for quickly diagnosing the disease and defining an adequate treatment to increase the patient's survival probability. Computer-Aided Detection systems (CADs), along with modern techniques such as Machine Learning (ML) and Neural Networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML- and NN-based algorithms rely on datasets that might bring issues to the segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described that uses image processing techniques along with the Hough transform to automatically identify microcalcifications, which are highly correlated with breast cancer development in its early stages. Automatic segmentation of high-contrast objects is done using edge extraction and the circle Hough transform, which provides the geometrical features needed to design a mask automatically and extract statistical features from the regions of interest. The results of this study prove the potential of this tool for further diagnostics and classification of mammographic images, owing to its low sensitivity to noisy images and low-contrast mammograms.
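
A compact sketch of the edge-extraction-plus-circle-Hough step using OpenCV; the radius range and the Canny/accumulator thresholds below are illustrative assumptions, not the paper's values:

```python
import cv2
import numpy as np

def detect_microcalcifications(gray):
    """Circle Hough transform on a smoothed 8-bit grayscale mammogram;
    returns (x, y, radius) rows for small circular candidates."""
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                               param1=120,   # Canny high threshold (assumed)
                               param2=12,    # accumulator threshold (assumed)
                               minRadius=1, maxRadius=8)
    return np.empty((0, 3)) if circles is None else circles[0]
```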

Keywords: breast cancer, segmentation, X-ray imaging, Hough transform, image analysis

Procedia PDF Downloads 48
8999 Immature Palm Tree Detection Using Morphological Filter for Palm Counting with High Resolution Satellite Image

Authors: Nur Nadhirah Rusyda Rosnan, Nursuhaili Najwa Masrol, Nurul Fatiha MD Nor, Mohammad Zafrullah Mohammad Salim, Sim Choon Cheak

Abstract:

Accurate inventories of planted oil palm areas are crucial for plantation management, as they impact the overall economy and oil production. One technological advancement in the oil palm industry is semi-automated palm counting, which is replacing conventional manual counting via digitized aerial imagery. Most semi-automated palm counting methods developed so far have been limited to mature palms, whose ideal canopy size is well represented in satellite imagery; immature palms were often left out because their canopies are barely visible in satellite images. In this paper, an approach using a morphological filter and high-resolution satellite imagery is proposed to detect immature palm trees, making it possible to count them. The method begins by applying an erosion filter with an appropriate window size of 3 m to the high-resolution satellite image. The eroded image is further segmented using watershed segmentation to delineate immature palm tree regions. Local minimum detection is then used, on the hypothesis that immature oil palm trees sit at local minima of the grayscale image within an oil palm field. The detection points generated from the local minima are displaced to the centers of the immature oil palm regions and thinned so that a single detection point represents each tree. The performance of the proposed method was evaluated on three subsets with slopes ranging from 0 to 20° and different planting designs, i.e., straight and terrace. The proposed method achieved more than 90% accuracy when compared with the ground truth, with an overall F-measure score of up to 0.91.
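
A minimal sketch of the erosion and local-minimum steps with SciPy; the watershed delineation and the displacement of points to region centers are simplified into per-region centroids, and the pixel window standing in for the 3 m window is an assumption:

```python
import numpy as np
from scipy import ndimage

def immature_palm_points(gray, window_px=3):
    """Erosion followed by local-minimum detection: immature crowns are
    assumed to sit at local gray-level minima of the eroded image."""
    eroded = ndimage.grey_erosion(gray, size=(window_px, window_px))
    # a pixel is a local minimum if it equals the minimum of its neighborhood
    local_min = eroded == ndimage.minimum_filter(eroded, size=window_px)
    # thin clusters of minima to one point per connected region (one per tree)
    labels, n = ndimage.label(local_min)
    centers = ndimage.center_of_mass(local_min, labels, range(1, n + 1))
    return np.array(centers)   # (row, col) detection points
```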

Keywords: immature palm count, oil palm, precision agriculture, remote sensing

Procedia PDF Downloads 41
8998 Optimal Voltage and Frequency Control of a Microgrid Using the Harmony Search Algorithm

Authors: Hossein Abbasi

Abstract:

Stability is an important topic in planning and managing energy in microgrids, just as in conventional power systems, and voltage and frequency stability is among the most important issues recently studied in microgrids. The objectives of this paper are the modelling and design of the components and of optimal controllers for the voltage and frequency control of an AC/DC hybrid microgrid under different disturbances. Since PI controllers have the advantages of a simple structure and easy implementation, they are designed and modeled in this paper, and the harmony search (HS) algorithm is used to optimize the controllers' parameters. According to the achieved results, the PI controllers perform well in the voltage and frequency control of the microgrid.

Keywords: frequency control, HS algorithm, microgrid, PI controller, voltage control

Procedia PDF Downloads 365
8997 Algorithms for Fast Computation of Pan Matrix Profiles of Time Series Under Unnormalized Euclidean Distances

Authors: Jing Zhang, Daniel Nikovski

Abstract:

We propose an approximation algorithm called LINKUMP to compute the Pan Matrix Profile (PMP) under the unnormalized l∞ distance (useful for value-based similarity search) using a double-ended queue and linear interpolation. The algorithm has time/space complexity comparable to the state-of-the-art algorithm for typical PMP computation under the normalized l₂ distance (useful for shape-based similarity search). We validate its efficiency and effectiveness through extensive numerical experiments and a real-world anomaly detection application.
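
For a single subsequence length m, a naive quadratic sketch of the unnormalized l∞ (Chebyshev) matrix profile that LINKUMP approximates efficiently; the PMP stacks such profiles over a range of m, and the half-window exclusion zone here is a common convention, not necessarily the paper's:

```python
import numpy as np

def matrix_profile_linf(ts, m):
    """For every length-m subsequence, the l-infinity distance to its
    nearest non-overlapping neighbor; useful for discord/anomaly discovery."""
    n = len(ts) - m + 1
    subs = np.lib.stride_tricks.sliding_window_view(np.asarray(ts, float), m)
    profile = np.full(n, np.inf)
    for i in range(n):
        d = np.max(np.abs(subs - subs[i]), axis=1)
        lo, hi = max(0, i - m // 2), min(n, i + m // 2 + 1)
        d[lo:hi] = np.inf                      # exclusion zone (trivial matches)
        profile[i] = d.min()
    return profile
```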

Keywords: pan matrix profile, unnormalized euclidean distance, double-ended queue, discord discovery, anomaly detection

Procedia PDF Downloads 218
8996 GPU Accelerated Fractal Image Compression for Medical Imaging in Parallel Computing Platform

Authors: Md. Enamul Haque, Abdullah Al Kaisan, Mahmudur R. Saniat, Aminur Rahman

Abstract:

In this paper, we have implemented both sequential and parallel versions of fractal image compression algorithms using the CUDA (Compute Unified Device Architecture) programming model, parallelizing the program on the Graphics Processing Unit for medical images, as these are highly self-similar. There are several improvements in the implementation of the algorithm as well. Fractal image compression is based on the self-similarity of an image, meaning that the image has similar content across the majority of its regions. We take this opportunity to implement the compression algorithm and monitor its behavior in both parallel and sequential implementations. Fractal compression offers a high compression rate and a resolution-independent scheme. It consists of two stages, encoding and decoding: encoding is computationally very expensive, whereas decoding is much cheaper. Applying fractal compression to medical images would allow much higher compression ratios, and fractal magnification, an inseparable feature of fractal compression, would be very useful for presenting the reconstructed image in a highly readable form. However, like all irreversible methods, fractal compression entails information loss, which is especially troublesome in medical imaging, and the very time-consuming encoding process, which can last several hours, is another bothersome drawback.
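
A toy sketch of the expensive encoding stage: each range block is matched against a downsampled domain pool with a least-squares contrast/brightness fit. Rotations/flips of the domain blocks and the GPU parallelization over range blocks are omitted, and the block sizes are illustrative:

```python
import numpy as np

def encode_fractal(img, rsize=8):
    """Brute-force fractal encoder: map each range block to the best domain
    block under an affine transform s*D + o. This (range, domain) search is
    exactly the hot loop a GPU implementation parallelizes."""
    dsize = 2 * rsize
    H, W = img.shape
    domains = []                       # 2x-downsampled domain blocks
    for y in range(0, H - dsize + 1, dsize):
        for x in range(0, W - dsize + 1, dsize):
            d = img[y:y + dsize, x:x + dsize].astype(float)
            domains.append(((y, x), d.reshape(rsize, 2, rsize, 2).mean(axis=(1, 3))))
    code = []
    for y in range(0, H - rsize + 1, rsize):
        for x in range(0, W - rsize + 1, rsize):
            r = img[y:y + rsize, x:x + rsize].astype(float)
            best = None
            for (dy, dx), d in domains:
                dc, rc = d - d.mean(), r - r.mean()
                s = (dc * rc).sum() / ((dc * dc).sum() + 1e-12)  # LS contrast
                s = float(np.clip(s, -1.0, 1.0))                 # keep contractive
                o = r.mean() - s * d.mean()                      # brightness offset
                err = ((s * d + o - r) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, dy, dx, s, o)
            code.append(((y, x),) + best[1:])
    return code                        # [(range_pos, dom_y, dom_x, s, o), ...]
```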

Keywords: accelerated GPU, CUDA, parallel computing, fractal image compression

Procedia PDF Downloads 304