Search results for: hyperspectral image classification using tree search algorithm
9146 Applying Unmanned Aerial Vehicle on Agricultural Damage: A Case Study of the Meteorological Disaster on Taiwan Paddy Rice
Authors: Chiling Chen, Chiaoying Chou, Siyang Wu
Abstract:
Taiwan is located in the western Pacific Ocean, at the intersection of continental and marine climates. Typhoons frequently strike Taiwan and bring meteorological disasters such as heavy flooding, landslides, and loss of life and property. Global climate change brings more extreme meteorological disasters, so developing techniques to improve disaster prevention and mitigation is needed, and improving rescue processes and rehabilitation is important as well. In this study, UAVs (Unmanned Aerial Vehicles) are applied to take instant images that improve disaster investigation and rescue processes. Paddy rice fields in central Taiwan, which were struck by heavy rain during the monsoon season in June 2016, are the study area. The UAV images provide high ground resolution (3.5 cm) together with 3D point clouds for developing image discrimination techniques and a digital surface model (DSM) of rice lodging. First, supervised image classification with the Maximum Likelihood Method (MLD) is used to delineate the area of rice lodging. Second, 3D point clouds generated by Pix4D Mapper are used to build the DSM for classifying the lodging levels of the paddy rice. As a result, the discrimination accuracy of rice lodging is 85% by supervised image classification, and the classification accuracy of lodging level is 87% by DSM. Therefore, UAVs not only provide instant images of agricultural damage after a meteorological disaster, but the image discrimination of rice lodging also reaches acceptable accuracy (>85%). In the future, UAV and image discrimination technologies will be applied to different crop fields, and the discrimination results will be overlaid with the administrative boundaries of paddy rice to establish a GIS-based assistance system for agricultural damage assessment. The time and labor spent on damage detection and monitoring would thereby be greatly reduced.
Keywords: monsoon, supervised classification, Pix4D, 3D point clouds, discrimination accuracy
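The abstract names the Maximum Likelihood Method but gives no implementation details; below is a minimal, hypothetical sketch of per-pixel Gaussian maximum likelihood classification, with class names, band layout, and training-sample structure all assumed for illustration:

```python
import numpy as np

def train_ml_classifier(samples):
    """samples: dict class -> (n_pixels, n_bands) array of training pixels.
    Fits one Gaussian (mean, covariance) per class."""
    stats = {}
    for cls, X in samples.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        stats[cls] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return stats

def classify(pixels, stats):
    """pixels: (n, n_bands); assigns each pixel to the max log-likelihood class."""
    classes = list(stats)
    scores = np.empty((len(pixels), len(classes)))
    for j, cls in enumerate(classes):
        mu, icov, logdet = stats[cls]
        d = pixels - mu
        # -0.5 * (log|Sigma| + Mahalanobis distance squared), up to a constant
        scores[:, j] = -0.5 * (logdet + np.einsum('ij,jk,ik->i', d, icov, d))
    return np.array(classes)[scores.argmax(axis=1)]
```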
Procedia PDF Downloads 301
9145 New Approaches for the Handwritten Digit Image Features Extraction for Recognition
Authors: U. Ravi Babu, Mohd Mastan
Abstract:
The present paper proposes a novel approach to a handwritten digit recognition system. It extracts digit image features based on a distance measure and derives an algorithm to classify the digit images. The distance measure is computed on the thinned image; thinning is one of the preprocessing techniques in image processing. The paper mainly concentrates on extracting features from the digit image for effective recognition of the numeral. To assess the effectiveness of the proposed method, it was tested on the MNIST, CENPARMI, and CEDAR databases, as well as on newly collected data. The proposed method was applied to more than one lakh (100,000) digit images and obtained good comparative recognition results, achieving a recognition rate of about 97.32%.
Keywords: handwritten digit recognition, distance measure, MNIST database, image features
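The abstract does not specify the distance measure or the thinning algorithm; the sketch below only illustrates one plausible pipeline under those assumptions, skeletonizing the digit and histogramming skeleton-pixel distances from the centroid as a compact feature vector (skimage's skeletonize stands in for the paper's thinning step):

```python
import numpy as np
from skimage.morphology import skeletonize

def distance_features(binary_digit, n_bins=16):
    # Thin the digit to a one-pixel-wide skeleton, then histogram the
    # distances of skeleton pixels from the skeleton centroid.
    skel = skeletonize(binary_digit > 0)
    ys, xs = np.nonzero(skel)
    cy, cx = ys.mean(), xs.mean()
    d = np.hypot(ys - cy, xs - cx)
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, d.max() + 1e-9))
    return hist / hist.sum()   # normalized, translation-invariant features
```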
Procedia PDF Downloads 462
9144 Color Image Compression/Encryption/Contour Extraction using 3L-DWT and SSPCE Method
Authors: Ali A. Ukasha, Majdi F. Elbireki, Mohammad F. Abdullah
Abstract:
Data security is needed in data transmission, storage, and communication. This paper is divided into two parts. The work deals with color images, which are decomposed into red, green, and blue channels. The blue and green channels are compressed using a three-level discrete wavelet transform. The Arnold transform is used to change the locations of the red channel's pixels as an image scrambling step. All of these channels are then encrypted separately using a key image that has the same size as the original and is generated using private keys and modulo operations. XOR and modulo operations are performed between the encrypted channel images in order to change the image pixel values. Contours can be extracted from the recovered color images with an accepted level of distortion using the single-step parallel contour extraction (SSPCE) method. Experiments have demonstrated that the proposed algorithm can fully encrypt 2D color images and completely reconstruct them without any distortion. They have also shown that the algorithm offers extremely strong security against attacks such as salt-and-pepper noise and JPEG compression, proving that color images can be protected at a higher security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
Keywords: SSPCE method, image compression and salt-and-pepper attacks, bitplane decomposition, Arnold transform, color image, wavelet transform, lossless image encryption
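The paper's Arnold-transform parameters are not given; below is a minimal sketch of the classical Arnold cat map used to scramble a square channel (the iteration count is an assumption; repeated application eventually returns the original image, which is what makes the scrambling reversible):

```python
import numpy as np

def arnold_scramble(channel, iterations=1):
    # Classical Arnold cat map: (x, y) -> (x + y mod N, x + 2y mod N).
    # The map is a bijection on an N x N grid, so no pixels are lost.
    n = channel.shape[0]
    assert channel.shape[0] == channel.shape[1], "Arnold map needs a square image"
    out = channel
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        nx, ny = (x + y) % n, (x + 2 * y) % n
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]
        out = scrambled
    return out
```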
Procedia PDF Downloads 520
9143 Unconventional Dating of Old Peepal Tree of Chandigarh (India) Using Optically Stimulated Luminescence
Authors: Rita Rani, Ramesh Kumar
Abstract:
The aim of the current study is to date a grand old Peepal tree that is still alive. The tree is situated in Kalibard village, Sector 9, Chandigarh (India). Due to its huge structure, it has been granted the status of 'Heritage tree.' Optically stimulated luminescence of the sediments beneath the roots is used to determine the age of the tree. Optical dating is preferred over conventional dating methods due to its higher precision. The methodology includes OSL of quartz grains using the SAR (single-aliquot regenerative-dose) protocol for accumulated dose measurement. The age determined for a living tree using sedimentary quartz is in close agreement with the approximate age provided by the relevant agency. This is the first attempt at using optically stimulated luminescence for age determination of living trees in this region. The study concludes that luminescence dating of living trees is a nondestructive and more precise method.
Keywords: luminescence, dose rate, optical dating, sediments
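The keyword 'dose rate' points to the standard optical-dating age relation, which the abstract does not write out: the burial age of the sediment is the SAR-measured equivalent (accumulated) dose divided by the environmental dose rate.

```latex
\text{Age (ka)} \;=\; \frac{D_e\ \text{(Gy)}}{\dot{D}\ \text{(Gy ka}^{-1})}
```

Here \(D_e\) is the equivalent dose recorded by the quartz grains since their last exposure to light, and \(\dot{D}\) is the dose rate delivered by environmental radioactivity and cosmic rays.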
Procedia PDF Downloads 176
9142 Fuzzy Approach for Fault Tree Analysis of Water Tube Boiler
Authors: Syed Ahzam Tariq, Atharva Modi
Abstract:
This paper presents a probabilistic analysis of the safety of water tube boilers using fault tree analysis (FTA). A fault tree has been constructed by considering all possible areas where a malfunction could lead to a boiler accident. Boiler accidents are relatively rare, causing a scarcity of data. The fuzzy approach is therefore employed to perform a quantitative analysis, wherein theories of fuzzy logic are used in conjunction with expert elicitation to calculate failure probabilities. The Fuzzy Fault Tree Analysis (FFTA) provides a scientific and contingent method to forecast and prevent accidents.
Keywords: fault tree analysis, water tube boiler, fuzzy probability score, failure probability
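The abstract does not detail the fuzzy arithmetic; a minimal sketch of how expert-elicited triangular fuzzy failure probabilities could propagate through AND/OR gates of a fault tree (event names and numbers are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Triangular fuzzy failure probabilities (low, mode, high) from expert elicitation.
pump_failure   = np.array([0.010, 0.02, 0.04])
valve_blockage = np.array([0.005, 0.01, 0.03])

def fuzzy_and(*events):
    # AND gate: product of independent fuzzy probabilities (interval arithmetic).
    out = np.ones(3)
    for e in events:
        out = out * e
    return out

def fuzzy_or(*events):
    # OR gate: 1 - product of complements, applied componentwise.
    out = np.ones(3)
    for e in events:
        out = out * (1.0 - e)
    return 1.0 - out

top_event = fuzzy_or(pump_failure, valve_blockage)
print("Top-event fuzzy probability (l, m, u):", top_event)
```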
Procedia PDF Downloads 128
9141 Classification Rule Discovery by Using Parallel Ant Colony Optimization
Authors: Waseem Shahzad, Ayesha Tahir Khan, Hamid Hussain Awan
Abstract:
The Ant-Miner algorithm, which belongs to the family of ACO algorithms, is used to extract knowledge from data in the form of rules. A variant of the Ant-Miner algorithm named cAnt-MinerPB is used to generate a list of rules with the Pittsburgh approach in order to maintain the interaction among the generated rules. In this paper, we propose a parallel cAnt-MinerPB in which the ant colony optimization algorithm runs in parallel. In this technique, a data set is divided vertically (i.e., by attributes) into different subsets. These subsets are created based on the correlation among attributes, measured using Mutual Information (MI). Rules are generated in a parallel manner and then merged to form a final list of rules. The results show that the proposed technique achieves higher accuracy than the original cAnt-MinerPB and also reduces execution time.
Keywords: ant colony optimization, parallel Ant-MinerPB, vertical partitioning, classification rule discovery
Procedia PDF Downloads 296
9140 Brainbow Image Segmentation Using Bayesian Sequential Partitioning
Authors: Yayun Hsu, Henry Horng-Shing Lu
Abstract:
This paper proposes a data-driven, biology-inspired neural segmentation method for 3D Drosophila Brainbow images. We use the Bayesian Sequential Partitioning algorithm for probabilistic modeling, which can be used to detect somas and to eliminate cross-talk effects. This work attempts to develop an automatic methodology for neuron image segmentation, which still lacks a complete solution due to the complexity of the images. The proposed method does not need any predetermined, risk-prone thresholds, since biological information is inherently included in the image processing procedure. Therefore, it is less sensitive to variations in neuron morphology; meanwhile, its flexibility is beneficial for tracing the intertwining structure of neurons.
Keywords: Brainbow, 3D imaging, image segmentation, neuron morphology, biological data mining, non-parametric learning
Procedia PDF Downloads 487
9139 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification
Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran
Abstract:
The brain is an important organ in our body, since it is responsible for the majority of actions such as vision and memory. However, different diseases such as Alzheimer's disease and tumors can affect the brain and lead to a partial or full disorder. Regular diagnosis is necessary as a preventive measure and can help doctors detect a possible problem early and apply the appropriate treatment, especially in the case of brain tumors. Different imaging modalities have been proposed for brain tumor diagnosis. The most powerful and most widely used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate a possible tumor in the brain and prescribe the appropriate treatment. Diverse image processing methods have also been proposed to help doctors identify and analyze the tumor. In fact, a large number of Computer Aided Diagnosis (CAD) tools built on image processing algorithms have been proposed and are used by doctors as a second opinion to analyze and identify brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification, and feature extraction. Our proposed CAD includes three main parts. First, we load the brain MRI. Second, a robust technique for brain tumor extraction is proposed, based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). DWT is characterized by its multiresolution analytic property, which is why it was applied to MRI images at different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback, since it requires huge storage and is computationally expensive. To decrease the dimensions of the feature vector and the computing time, the PCA technique is applied. In the last stage, according to the extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm. A CAD tool for brain tumor detection and classification, including all the above-mentioned stages, is designed and developed using the MATLAB GUIDE user interface.
Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM
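The paper implements its pipeline in MATLAB; the following is a minimal Python sketch of the same DWT, then PCA, then SVM chain, where the wavelet family, decomposition level, retained-variance fraction, and kernel are all assumptions:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dwt_features(image, wavelet="haar", level=3):
    # Multilevel 2D DWT; keep the coarsest approximation subband as features.
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    return coeffs[0].ravel()

def build_classifier(mri_slices, labels):
    # mri_slices: list of 2D arrays; labels: 0 = benign, 1 = malignant (assumed).
    X = np.stack([dwt_features(im) for im in mri_slices])
    # PCA keeps the components explaining 95% of variance, shrinking the
    # high-dimensional wavelet features before the SVM.
    clf = make_pipeline(StandardScaler(), PCA(n_components=0.95), SVC(kernel="rbf"))
    return clf.fit(X, labels)
```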
Procedia PDF Downloads 251
9138 Comparative Analysis of Dissimilarity Detection between Binary Images Based on Equivalency and Non-Equivalency of Image Inversion
Authors: Adnan A. Y. Mustafa
Abstract:
Image matching is a fundamental problem that arises frequently in many aspects of robot and computer vision. It can become a time-consuming process when matching images to a database consisting of hundreds of images, especially if the images are big. One approach to reducing the time complexity of the matching process is to reduce the search space in a pre-matching stage, by simply removing dissimilar images quickly. The Probabilistic Matching Model for Binary Images (PMMBI) showed that dissimilarity detection between binary images can be accomplished quickly by random pixel mapping and is size invariant. The model is based on the gamma binary similarity distance that recognizes an image and its inverse as containing the same scene and hence considers them to be the same image. However, in many applications, an image and its inverse are not treated as being the same but rather dissimilar. In this paper, we present a comparative analysis of dissimilarity detection between PMMBI based on the gamma binary similarity distance and a modified PMMBI model based on a similarity distance that does distinguish between an image and its inverse as being dissimilar.
Keywords: binary image, dissimilarity detection, probabilistic matching model for binary images, image mapping
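PMMBI's exact gamma distance is not reproduced in the abstract; the sketch below only illustrates the two behaviors being compared, using a simple random-sample mismatch estimate on equal-size binary images. The inversion-invariant variant folds the distance at 0.5, so an image and its inverse coincide; the modified variant keeps the raw distance, making inverses maximally dissimilar.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_mismatch(a, b, n_samples=500):
    # Estimate the pixel mismatch fraction from a random sample of sites
    # instead of a full comparison (equal-size images assumed for brevity).
    idx = rng.integers(0, a.size, n_samples)
    return float(np.mean(a.ravel()[idx] != b.ravel()[idx]))

def distance(a, b, inversion_invariant=True):
    d = sampled_mismatch(a, b)
    # Folding at 0.5 treats an image and its inverse as the same scene.
    return min(d, 1.0 - d) if inversion_invariant else d

img = rng.integers(0, 2, (64, 64))
print(distance(img, 1 - img))                             # ~0.0: "same image"
print(distance(img, 1 - img, inversion_invariant=False))  # ~1.0: "dissimilar"
```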
Procedia PDF Downloads 155
9137 Memetic Algorithm for Solving the One-To-One Shortest Path Problem
Authors: Omar Dib, Alexandre Caminada, Marie-Ange Manier
Abstract:
The purpose of this study is to introduce a novel approach to solve the one-to-one shortest path problem. A directed connected graph is assumed, in which all edge weights are positive. Our method is based on a memetic algorithm in which we combine a genetic algorithm (GA) and a variable neighborhood search method (VNS). We compare our approximate method with two exact algorithms, Dijkstra's algorithm and Integer Programming (IP). Experiments were carried out using randomly generated, complete, and real graph instances. In most case studies, the numerical results show that our method outperforms the exact methods in running time while staying within a 5% average gap of optimality. On average, our algorithm is 20 times faster than Dijkstra's algorithm and more than 1,000 times faster than IP. The details of the experimental results are also discussed and presented in the paper.
Keywords: shortest path problem, Dijkstra's algorithm, integer programming, memetic algorithm
Procedia PDF Downloads 467
9136 A Conglomerate of Multiple Optical Character Recognition Table Detection and Extraction
Authors: Smita Pallavi, Raj Ratn Pranesh, Sumit Kumar
Abstract:
Representing information as tables is a compact and concise method that eases searching, indexing, and storage requirements. Extracting and cloning tables from parsable documents is easier and widely used; however, industry still faces challenges in detecting and extracting tables from OCR (Optical Character Recognition) documents or images. This paper proposes an algorithm that detects and extracts multiple tables from an OCR document. The algorithm uses a combination of image processing techniques, text recognition, and procedural coding to identify distinct tables in the same image and map the text to the corresponding cell in a dataframe, which can be stored as comma-separated values, a database, Excel, or multiple other usable formats.
Keywords: table extraction, optical character recognition, image processing, text extraction, morphological transformation
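The abstract lists morphological transformation among its techniques without giving parameters; a minimal sketch of one common way to localize table grids in a page image, where the kernel lengths, threshold settings, and area cutoff are assumptions:

```python
import cv2
import numpy as np

def find_table_regions(gray_page, min_area=5000):
    # gray_page: 8-bit grayscale page image (uint8 numpy array).
    # Binarize, then extract long horizontal and vertical strokes with
    # directional morphological kernels; their union outlines table grids.
    binary = cv2.adaptiveThreshold(~gray_page, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 15, -2)
    h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
    v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 40))
    horizontal = cv2.morphologyEx(binary, cv2.MORPH_OPEN, h_kernel)
    vertical = cv2.morphologyEx(binary, cv2.MORPH_OPEN, v_kernel)
    grid = cv2.add(horizontal, vertical)
    contours, _ = cv2.findContours(grid, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # One bounding box (x, y, w, h) per detected table region.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area]
```

The recognized text would then be assigned to cells by intersecting each word's OCR bounding box with the cell grid inside every returned region.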
Procedia PDF Downloads 145
9135 Sparse Coding Based Classification of Electrocardiography Signals Using Data-Driven Complete Dictionary Learning
Authors: Fuad Noman, Sh-Hussain Salleh, Chee-Ming Ting, Hadri Hussain, Syed Rasul
Abstract:
In this paper, a data-driven dictionary approach is proposed for the automatic detection and classification of cardiovascular abnormalities. The electrocardiography (ECG) signal is represented by trained complete dictionaries that contain prototypes, or atoms, to avoid the limitations of pre-defined dictionaries. The data-driven trained dictionaries simply take the ECG signal as input, rather than extracting features, to study the set of parameters that yield the most descriptive dictionary. The approach inherently learns the complicated morphological changes in the ECG waveform, which is then used to improve the classification. The classification performance was evaluated on ECG data under two different preprocessing environments. In the first category, the QT database is baseline-drift corrected and a notch filter removes the 60 Hz power-line noise. In the second category, the data are further filtered using a fast moving-average smoother. The experimental results on the QT database confirm that our proposed algorithm achieves a classification accuracy of 92%.
Keywords: electrocardiogram, dictionary learning, sparse coding, classification
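The abstract does not specify the learning or coding algorithm; a minimal sketch of one plausible realization, learning a per-class dictionary from raw heartbeat segments and classifying by smallest sparse-reconstruction error (the atom count, OMP coding, and sparsity level are assumptions):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def train_dictionaries(beats_by_class, n_atoms=64):
    # beats_by_class: dict label -> (n_beats, beat_len) array of raw segments.
    dicts = {}
    for label, X in beats_by_class.items():
        model = DictionaryLearning(n_components=n_atoms,
                                   transform_algorithm="omp")
        model.fit(X)
        dicts[label] = model.components_        # (n_atoms, beat_len)
    return dicts

def classify(beat, dicts, n_nonzero=8):
    # Sparse-code the beat against each class dictionary; pick the class
    # whose atoms reconstruct it with the smallest residual.
    errors = {}
    for label, D in dicts.items():
        code = sparse_encode(beat[None, :], D, algorithm="omp",
                             n_nonzero_coefs=n_nonzero)
        errors[label] = np.linalg.norm(beat - code @ D)
    return min(errors, key=errors.get)
```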
Procedia PDF Downloads 386
9134 The Employment of Unmanned Aircraft Systems for Identification and Classification of Helicopter Landing Zones and Airdrop Zones in Calamity Situations
Authors: Marielcio Lacerda, Angelo Paulino, Elcio Shiguemori, Alvaro Damiao, Lamartine Guimaraes, Camila Anjos
Abstract:
Accurate information about the terrain is extremely important in disaster management activities or conflict. This paper proposes the use of Unmanned Aircraft Systems (UAS) for the identification of Airdrop Zones (AZs) and Helicopter Landing Zones (HLZs). In this paper, we consider AZs to be the zones where troops or supplies are dropped by parachute, and HLZs the areas where victims can be rescued. The use of digital image processing enables the automatic generation of an orthorectified mosaic and an actual Digital Surface Model (DSM). This methodology provides the fundamental information for understanding the terrain after a disaster in a short amount of time and with good accuracy. For the identification and classification of AZs and HLZs, images from a DJI Phantom 4 drone were used. The images were obtained with the knowledge and authorization of the responsible sectors and were duly registered with the control agencies. The flight was performed on May 24, 2017, and approximately 1,300 images were obtained during approximately 1 hour of flight. Afterward, new attributes were generated by Feature Extraction (FE) from the original images. The use of multispectral images and complementary attributes generated independently from them increases the accuracy of classification. The attributes used in this work include the Declivity Map and Principal Component Analysis (PCA). For the classification, four distinct classes were considered: HLZ 1 – small size (18 m x 18 m); HLZ 2 – medium size (23 m x 23 m); HLZ 3 – large size (28 m x 28 m); AZ (100 m x 100 m). The decision-tree method Random Forest (RF) was used in this work. RF is a classification method that uses a large collection of de-correlated decision trees, each trained on a different random set of samples. The result of classification from each tree for each object is called a class vote, and the final classification is decided by a majority of class votes. In this case, we used 200 trees for the execution of RF in the software WEKA 3.8. The classification result was visualized in QGIS Desktop 2.12.3. Through this methodology, it was possible to classify, in the study area, 6 areas as HLZ 1, 6 areas as HLZ 2, 4 areas as HLZ 3, and 2 areas as AZ. It should be noted that an area classified as AZ covers the classifications of the other classes and may be used as an AZ or as an HLZ for large (HLZ 3), medium (HLZ 2), or small helicopters (HLZ 1). Likewise, an area classified as an HLZ for large rotary-wing aircraft (HLZ 3) covers the smaller area classifications, and so on. It was concluded that images obtained through small UAVs are of great use in calamity situations, since they can provide data with high accuracy, at low cost and low risk, and with ease and agility in obtaining aerial photographs. This allows the generation, in a short time, of information about the features of the terrain that serves as an important decision-support tool.
Keywords: disaster management, unmanned aircraft systems, helicopter landing zones, airdrop zones, random forest
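The classification itself ran in WEKA 3.8; the sketch below reproduces the majority-vote setup (200 trees, four classes) in Python on random stand-in data, since the paper's per-segment attribute values are not published in the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Stand-in attributes per terrain segment, e.g. mean declivity and PCA bands.
X = rng.normal(size=(120, 4))
y = rng.choice(["HLZ1", "HLZ2", "HLZ3", "AZ"], size=120)

# 200 de-correlated trees; each prediction is the majority of 200 class votes.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))
```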
Procedia PDF Downloads 177
9133 Comparing the Apparent Error Rate of Gender Specifying from Human Skeletal Remains by Using Classification and Cluster Methods
Authors: Jularat Chumnaul
Abstract:
In forensic science, corpses from various homicides differ; some are complete and some incomplete, depending on the cause of death or the form of homicide. For example, some corpses are cut into pieces, some are camouflaged by being dumped into a river, some are buried, some are burned to destroy the evidence, and so on. If a corpse is incomplete, personal identification becomes difficult because some tissues and bones are destroyed. To determine the gender of a corpse from skeletal remains, the most precise method is DNA identification. However, this method is costly and time-consuming, so other identification techniques are used instead. The first widely used technique is to consider the features of the bones. In general, evidence from the corpse, such as pieces of bone, especially the skull and pelvis, can be used to identify gender. To use this technique, forensic scientists need observation skills in order to classify the differences between male and female bones. Although this technique is uncomplicated, saves time and cost, and allows forensic scientists to determine gender fairly accurately (apparently with an accuracy rate of 90% or more), its crucial disadvantage is that only some positions of the skeleton can be used to specify gender, such as the supraorbital ridge, nuchal crest, temporal lobe, mandible, and chin. Therefore, the skeletal remains used have to be complete. The other technique widely used for gender determination in forensic science and archaeology is skeletal measurement. The advantage of this method is that it can be applied at several positions on one piece of bone, and it can be used even if the bones are not complete. In this study, classification and cluster analyses are applied to this technique, including the Kth Nearest Neighbor classification, Classification Tree, Ward Linkage clustering, K-means clustering, and Two-Step clustering. The data contain 507 individuals with 9 skeletal (diameter) measurements, and the performance of the five methods is investigated by considering the apparent error rate (APER). The results of this study indicate that the Two-Step Cluster and Kth Nearest Neighbor methods seem suitable for determining gender from human skeletal remains, since they yield small apparent error rates of 0.20% and 4.14%, respectively. On the other hand, the Classification Tree, Ward Linkage Cluster, and K-means Cluster methods are not appropriate, since they yield large apparent error rates of 10.65%, 10.65%, and 16.37%, respectively. However, there are other ways to evaluate classification performance, such as estimating the error rate with the holdout procedure or using misclassification costs, and different methods may lead to different conclusions.
Keywords: skeletal measurements, classification, cluster, apparent error rate
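The apparent error rate is simply the misclassification fraction obtained by applying a fitted classifier back to its own training data; a small sketch, with toy random data standing in for the 507 x 9 measurement matrix and KNN as one of the compared methods:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(507, 9))                  # 9 diameter measurements
y = rng.choice(["male", "female"], size=507)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
# APER: fraction of the *training* samples the fitted model misclassifies.
aper = np.mean(knn.predict(X) != y)
print(f"apparent error rate: {aper:.2%}")
```

Because the same data are used for fitting and evaluation, the APER is optimistically biased, which is exactly the caveat the abstract raises about the holdout procedure.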
Procedia PDF Downloads 252
9132 Discretization of Cuckoo Optimization Algorithm for Solving Quadratic Assignment Problems
Authors: Elham Kazemi
Abstract:
The Quadratic Assignment Problem (QAP) is a combinatorial optimization problem, studied in many companies, that concerns allocating facilities to locations. Of particular importance in this process are the costs of the allocation, and the aim of the problem is to minimize this group of costs. Since QAPs are NP-hard, large instances cannot be solved by exact methods in reasonable time. The Cuckoo Optimization Algorithm is a meta-heuristic method with a strong capability to find globally optimal points. It was originally designed to search a continuous space. The Quadratic Assignment Problem, however, lives in a discrete space; thus, the standard arithmetic operators of the Cuckoo Optimization Algorithm need to be redefined on the discrete space before the algorithm can be applied to the discrete search space. This paper presents a way of discretizing the Cuckoo Optimization Algorithm for solving the quadratic assignment problem.
Keywords: Quadratic Assignment Problem (QAP), Discrete Cuckoo Optimization Algorithm (DCOA), meta-heuristic algorithms, optimization algorithms
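The paper's discrete operators are not reproduced in the abstract; the sketch below shows the QAP objective any discretized COA must minimize, together with one simple discrete move (a facility swap) of the kind such a discretization could use, on random placeholder data:

```python
import numpy as np

def qap_cost(perm, flow, dist):
    # Cost of assigning facility i to location perm[i]:
    # sum over i, j of flow[i, j] * dist[perm[i], perm[j]].
    return np.sum(flow * dist[np.ix_(perm, perm)])

def swap_neighbor(perm, rng):
    # A basic discrete move: exchange the locations of two facilities.
    p = perm.copy()
    i, j = rng.choice(len(p), size=2, replace=False)
    p[i], p[j] = p[j], p[i]
    return p

rng = np.random.default_rng(0)
n = 6
flow = rng.integers(0, 10, (n, n))
dist = rng.integers(0, 10, (n, n))
perm = rng.permutation(n)
print(qap_cost(perm, flow, dist), qap_cost(swap_neighbor(perm, rng), flow, dist))
```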
Procedia PDF Downloads 517
9131 Text Based Shuffling Algorithm on Graphics Processing Unit for Digital Watermarking
Authors: Zayar Phyo, Ei Chaw Htoon
Abstract:
In a New-LSB-based steganography method, the Fisher-Yates algorithm is used to permute an existing array randomly. However, that algorithm becomes slower and runs into memory overflow problems while processing images of large dimensions. The Text-Based Shuffling algorithm was therefore designed to select only the necessary pixels, as hiding characters, at specific positions of an image according to the length of the input text. In this paper, an enhanced text-based shuffling algorithm powered by the GPU is presented to achieve still better performance. The proposed algorithm employs the OpenCL Aparapi framework along with an XORShift kernel serving as the Pseudo-Random Number Generator (PRNG). The PRNG is applied to produce random numbers inside the OpenCL kernel. Experiments with the proposed algorithm show that, on the GPU, it achieves faster processing speed and better efficiency without being disrupted by unnecessary operating-system tasks.
Keywords: LSB-based steganography, Fisher-Yates algorithm, text-based shuffling algorithm, OpenCL, XORShift kernel
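For reference, a plain-Python sketch of the two primitives the abstract names: a 32-bit XORShift generator (Marsaglia's shift constants 13/17/5) driving a Fisher-Yates shuffle. The paper runs the equivalent inside OpenCL kernels via Aparapi; this CPU version only shows the logic.

```python
def xorshift32(state):
    # One step of Marsaglia's 32-bit XORShift PRNG.
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

def fisher_yates(items, seed=2463534242):
    # Shuffle by swapping each position with a random earlier position,
    # drawing the indices from the XORShift stream.
    state = seed
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        state = xorshift32(state)
        j = state % (i + 1)
        a[i], a[j] = a[j], a[i]
    return a

print(fisher_yates(range(10)))
```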
Procedia PDF Downloads 152
9130 Count of Trees in East Africa with Deep Learning
Authors: Nubwimana Rachel, Mugabowindekwe Maurice
Abstract:
Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques. Deep learning makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting in both forest and non-forest areas using satellite imagery. The objective is to identify the most effective model for automated tree counting. We used different deep learning models, such as YOLOv7, SSD, and UNET, along with Generative Adversarial Networks to generate synthetic samples for training, and other augmentation techniques including Random Resized Crop, AutoAugment, and Linear Contrast Enhancement. These models were trained and fine-tuned using satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, UNET demonstrated the best performance, with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the UNET model. It represents a significant contribution to the field by offering an efficient and precise alternative to conventional tree-counting methods.
Keywords: remote sensing, deep learning, tree counting, image segmentation, object detection, visualization
Procedia PDF Downloads 77
9129 Using Single Decision Tree to Assess the Impact of Cutting Conditions on Vibration
Authors: S. Ghorbani, N. I. Polushin
Abstract:
Vibration during the machining process is crucial, since it affects the cutting tool, the machine, and the workpiece, leading to tool wear, tool breakage, and unacceptable surface roughness. This paper applies a nonparametric statistical method, the single decision tree (SDT), to identify factors affecting vibration in the machining process. Workpiece material (AISI 1045 steel, AA2024 aluminum alloy, A48-class30 gray cast iron), cutting tool (conventional, cutting tool with holes in the toolholder, cutting tool filled with epoxy-granite), tool overhang (41-65 mm), spindle speed (630-1000 rpm), feed rate (0.05-0.075 mm/rev), and depth of cut (0.05-0.15 mm) were used as input variables, while vibration was the output parameter. It is concluded that workpiece material is the most important parameter for natural frequency, followed by cutting tool and overhang.
Keywords: cutting condition, vibration, natural frequency, decision tree, CART algorithm
Procedia PDF Downloads 337
9128 Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification
Authors: Krishna Mohan Bathula, Fatou Bintou Loucoubar, FNU Kaleemunnisa, Christelle Scharff, Mark Anthony De Castro
Abstract:
Voice recognition algorithms such as automatic speech recognition and text-to-speech systems for African languages can play an important role in bridging the digital divide of Artificial Intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a deep learning model that classifies user responses as inputs for an interactive voice response system. A dataset of audio recordings of the Wolof words for 'yes' and 'no' was collected. A two-stage data augmentation approach is adopted to enlarge the dataset to the size required by the deep neural network. Data preprocessing and feature engineering with Mel-Frequency Cepstral Coefficients are implemented. Convolutional Neural Networks (CNNs) have proven to be very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. To perform voice response classification, the recordings are transformed into sound-frequency feature spectra, and an image classification methodology is then applied using a deep CNN model. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many applications associated with both web and mobile platforms.
Keywords: automatic speech recognition, interactive voice response, voice response recognition, Wolof word classification
Procedia PDF Downloads 118
9127 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images
Authors: Amit Kumar Happy
Abstract:
This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visual image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, image fusion algorithms based upon a multi-scale transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion approaches, we observed several challenges with popular image fusion methods: although the high computational cost and complex processing steps of image fusion algorithms provide accurate fused results, they also make the algorithms hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capacity. The methods presented in this paper therefore aim to offer good results with minimal time complexity.
Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform
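The paper's MATLAB implementation uses a region-based selection rule with consistency verification; the Python sketch below substitutes the simpler per-coefficient max-absolute rule only to show the overall MST decompose-fuse-reconstruct structure (the wavelet family and level count are assumptions):

```python
import numpy as np
import pywt

def fuse_mst(visible, thermal, wavelet="db2", level=3):
    # Decompose both same-size grayscale images, keep the larger-magnitude
    # detail coefficient at each position, average the coarse approximations,
    # then reconstruct the fused image.
    cv = pywt.wavedec2(visible.astype(float), wavelet, level=level)
    ct = pywt.wavedec2(thermal.astype(float), wavelet, level=level)
    fused = [(cv[0] + ct[0]) / 2.0]
    for (hv, vv, dv), (ht, vt, dt) in zip(cv[1:], ct[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in ((hv, ht), (vv, vt), (dv, dt))))
    return pywt.waverec2(fused, wavelet)
```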
Procedia PDF Downloads 115
9126 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images
Authors: Eiman Kattan, Hong Wei
Abstract:
In using a Convolutional Neural Network (CNN) for classification, a set of hyperparameters is available for configuration purposes. This study aims to evaluate the impact of a range of parameters in a CNN architecture, namely AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. A set of experiments was conducted to gauge the effectiveness of the selected parameters using two implementation approaches, named pretrained and fine-tuned. We first explore the number of epochs under several selected batch-size values (32, 64, 128, and 200). The impact of the kernel size of the convolutional filters (1, 3, 5, 7, 10, 15, 20, 25, and 30) was evaluated against the image sizes under testing (64, 96, 128, 180, and 224), which gave us insight into the relationship between the size of the convolutional filters and the image size. To generalize the validation, four remote sensing datasets, AID, RSD, UCMerced, and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, the amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments, showing efficiency in both training and testing. The results show that increasing the number of epochs leads to a higher accuracy rate, as expected; however, the convergence state is highly related to the dataset. For the batch-size evaluation, a larger batch size slightly decreases the classification accuracy compared to a small batch size. For example, selecting 32 as the batch size on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, increasing the batch size to 200 gives an accuracy rate of 86.5% at the 11th epoch, and 63% when using one epoch only. On the other hand, the choice of kernel size is only loosely related to the dataset; from a practical point of view, a filter size of 20 produces 70.4286%. The last experiment, on image size, shows that the accuracy improvement depends on image size, although the performance gain was noticed to be computationally expensive. These conclusions open opportunities toward better classification performance in various applications, such as planetary remote sensing.
Keywords: CNNs, hyperparameters, remote sensing, land cover, land use
Procedia PDF Downloads 170
9125 Sensor Network Routing Optimization by Simulating Eurygaster Life in Wheat Farms
Authors: Fariborz Ahmadi, Hamid Salehi, Khosrow Karimi
Abstract:
A sensor network is a set of sensor nodes that cooperate to perform predefined tasks. An important problem in such networks is power consumption. In this paper, an algorithm based on the life of the eurygaster in wheat farms is introduced to minimize the power consumed by the nodes of these networks. In this method, the search space of the problem is divided into several partitions, and each partition is investigated separately. The evaluation results show that our approach is more efficient in comparison with other evolutionary algorithms such as the genetic algorithm.
Keywords: evolutionary computation, genetic algorithm, particle swarm optimization, sensor network optimization
Procedia PDF Downloads 431
9124 Neural Network Based Decision Trees Using Machine Learning for Alzheimer's Diagnosis
Authors: P. S. Jagadeesh Kumar, Tracy Lin Huan, S. Meenakshi Sundaram
Abstract:
Alzheimer's disease is a prevalent ailment for which no effective therapy has yet been accredited. With a probable explosion in the number of patients in the upcoming years, there is consequently an enormous deal of interest in early detection of the disorder, which would conceivably lead to enhanced healing outcomes. Complex changes in the brain are an observable sign of the disease, alongside unique genetic markers. Machine learning, alongside deep learning and decision trees, reinforces the ability to absorb characteristics from multi-dimensional data and thus simplifies automatic classification of Alzheimer's disease. Extensive testing was planned and realized to train the prospect of Alzheimer's disease classification built on machine learning advances. It was observed that decision trees trained with a deep neural network produced excellent results, parallel to related pattern classification.
Keywords: Alzheimer's diagnosis, decision trees, deep neural network, machine learning, pattern classification
Procedia PDF Downloads 298
9123 Design and Implementation of Image Super-Resolution for Myocardial Image
Authors: M. V. Chidananda Murthy, M. Z. Kurian, H. S. Guruprasad
Abstract:
Super-resolution is the technique of intelligently upscaling images while avoiding artifacts or blurring, and it deals with the recovery of a high-resolution image from one or more low-resolution images. Single-image super-resolution is the process of obtaining a high-resolution image from a set of low-resolution observations by signal processing. While super-resolution has been demonstrated to improve image quality in scaled-down images in the image domain, its effect on Fourier-based techniques remains unknown. Super-resolution substantially improved the spatial resolution of the patient LGE images by sharpening the edges of the heart and the scar. This paper aims at investigating the effects of single-image super-resolution on Fourier-based and image-based methods of scale-up. In this paper, a training phase first relates the low-resolution and high-resolution images to obtain a dictionary. In the test phase, a patch is first generated, along with the difference between the high-resolution image and an image interpolated from the low-resolution image. Next, a simulation of the image is obtained by applying a convolution method to the dictionary image and the patch extracted from the image. Finally, the super-resolution image is obtained by combining the fused image with the difference between the high-resolution and interpolated images. Super-resolution reduces image errors and improves image quality.
Keywords: image dictionary creation, image super-resolution, LGE images, patch extraction
Procedia PDF Downloads 375
9122 Blind Watermarking Using Discrete Wavelet Transform Algorithm with Patchwork
Authors: Toni Maristela C. Estabillo, Michaela V. Matienzo, Mikaela L. Sabangan, Rosette M. Tienzo, Justine L. Bahinting
Abstract:
This study is about blind watermarking of images with different categories and properties using two algorithms, namely the Discrete Wavelet Transform and the Patchwork algorithm. A program was created to perform watermark embedding, extraction, and evaluation. The evaluation is based on three watermarking criteria: image quality degradation, perceptual transparency, and security. Image quality is measured by comparing the original properties with those of the processed image. Perceptual transparency is measured by visual inspection in a survey. Security is measured by implementing geometrical and non-geometrical attacks through pass-or-fail testing. The values used to measure these criteria are mostly based on the Mean Squared Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). The results are based on statistical methods used to collect and interpret data, such as averaging, the z-test, and the survey. The study concluded that the combined DWT and Patchwork algorithms were less efficient and less capable of watermarking than the DWT algorithm alone.
Keywords: blind watermarking, discrete wavelet transform algorithm, patchwork algorithm, digital watermark
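Since the evaluation rests on MSE and PSNR, here is a minimal sketch of the two metrics as usually defined for 8-bit images (the 255 peak value is an assumption about the image bit depth):

```python
import numpy as np

def mse(original, processed):
    # Mean squared pixel difference between cover and watermarked image.
    return np.mean((original.astype(float) - processed.astype(float)) ** 2)

def psnr(original, processed, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB; higher means less visible degradation.
    e = mse(original, processed)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```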
Procedia PDF Downloads 269
9121 Computer-Aided Detection of Simultaneous Abdominal Organ CT Images by Iterative Watershed Transform
Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid
Abstract:
Interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis applications. Segmentation of the liver, spleen, and kidneys is regarded as a major primary step in the computer-aided diagnosis of abdominal organ diseases. In this paper, a semi-automated method for abdominal organ segmentation in medical image data using mathematical morphology is presented. Our proposed method is based on hierarchical segmentation and the watershed algorithm. In our approach, a powerful technique has been designed to suppress over-segmentation, based on the mosaic image and on the computation of the watershed transform. Our algorithm proceeds in two parts. In the first, we seek to improve the quality of the gradient-mosaic image by applying an anisotropic diffusion filter followed by morphological filters. Thereafter, we proceed to the hierarchical segmentation of the liver, spleen, and kidneys. To validate the proposed segmentation technique, we have tested it on several images, evaluating our approach by comparing our results with a manual segmentation performed by an expert. The experimental results are described in the last part of this work.
Keywords: anisotropic diffusion filter, CT images, morphological filter, mosaic image, simultaneous organ segmentation, watershed algorithm
Procedia PDF Downloads 441
9120 Optimizing the Probabilistic Neural Network Training Algorithm for Multi-Class Identification
Authors: Abdelhadi Lotfi, Abdelkader Benyettou
Abstract:
In this work, a training algorithm for probabilistic neural networks (PNN) is presented. The algorithm addresses one of the major drawbacks of the PNN, which is the size of its hidden layer. By using a cross-validation training algorithm, the number of hidden neurons is shrunk to a smaller set consisting of the most representative samples of the training set. This is done without affecting the overall architecture of the network. The performance of the network is compared against that of the standard PNN on different databases from the UCI repository. Results show an important gain in network size and performance.
Keywords: classification, probabilistic neural networks, network optimization, pattern recognition
Procedia PDF Downloads 265
9119 Implementation of an Image Processing System Using Artificial Intelligence for the Diagnosis of Malaria Disease
Authors: Mohammed Bnebaghdad, Feriel Betouche, Malika Semmani
Abstract:
Image processing has become more sophisticated over time due to technological advances, especially artificial intelligence (AI) technology. Currently, AI image processing is used in many areas, including surveillance, industry, science, and medicine. AI in medical image processing can help doctors diagnose diseases faster, with minimal mistakes, and with less effort. Among these diseases is malaria, which remains a major public health challenge in many parts of the world. It affects millions of people every year, particularly in tropical and subtropical regions. Early detection of malaria is essential to prevent serious complications and reduce the burden of the disease. In this paper, we propose and implement a scheme based on AI image processing to enhance malaria diagnosis through automated analysis of blood smear images. The scheme is based on the convolutional neural network (CNN) method. We have developed a model that classifies infected and uninfected single red cells using images available on Kaggle, as well as real blood smear images obtained from the Central Laboratory of Medical Biology EHS Laadi Flici (formerly El Kettar) in Algeria. The real images were segmented into individual cells using the watershed algorithm in order to match the images from the Kaggle dataset. The model was trained and tested, achieving an accuracy of 99% on the test set and 97% on new real images. This validates that the model performs well with new real images, although with slightly lower accuracy. Additionally, the model has been embedded in a Raspberry Pi 4, and a graphical user interface (GUI) was developed to visualize the malaria diagnostic results and facilitate user interaction.
Keywords: medical image processing, malaria parasite, classification, CNN, artificial intelligence
Procedia PDF Downloads 22
9118 Travel Planning in Public Transport Networks Applying the Algorithm A* for Metropolitan District of Quito
Authors: M. Fernanda Salgado, Alfonso Tierra, Wilbert Aguilar
Abstract:
The present project applies the informed search algorithm A-star (A*) to solve traveler problems on urban public transportation routes. The digitization of the information made it possible to identify 26% of the total routes registered within the Metropolitan District of Quito. To validate this information, field data were collected on travel times; the difference with respect to the times estimated by the program was no greater than 2:20 minutes. We validated the A* algorithm against Dijkstra's algorithm by comparing node vectors based on the public transport stops; the validation was established through a Student's t-test. We then verified that the times estimated by the program using the A* algorithm are similar to those registered in the field. Furthermore, we reviewed the performance of both algorithms by generating iterations in each. Finally, with these iterations, a hypothesis test was carried out again with a Student's t-test, concluding that the baseline Dijkstra algorithm generates more iterations than the A* algorithm.
Keywords: algorithm A*, graph, mobility, public transport, travel planning, routes
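The abstract does not state the heuristic used; a minimal sketch of A* over a stop graph, where `heuristic` must be an admissible lower bound on the remaining travel time. With a zero heuristic the search reduces to Dijkstra's algorithm, the paper's comparison baseline; the toy stop network is purely illustrative.

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """graph: node -> [(neighbor, travel_time)].
    heuristic(n): admissible lower bound on remaining time from n to goal."""
    open_heap = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        for nbr, w in graph.get(node, []):
            ng = g + w
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(open_heap,
                               (ng + heuristic(nbr), ng, nbr, path + [nbr]))
    return None

# Toy stop network (travel times in minutes); heuristic 0 makes A* = Dijkstra.
stops = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)],
         "C": [("B", 1), ("D", 8)], "D": []}
print(a_star(stops, "A", "D", lambda n: 0))   # -> (8.0, ['A', 'C', 'B', 'D'])
```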
Procedia PDF Downloads 241
9117 Image Retrieval Based on Multi-Feature Fusion for Heterogeneous Image Databases
Authors: N. W. U. D. Chathurani, Shlomo Geva, Vinod Chandran, Proboda Rajapaksha
Abstract:
Selecting an appropriate image representation is the most important factor in implementing an effective Content-Based Image Retrieval (CBIR) system. This paper presents a multi-feature fusion approach for efficient CBIR based on the distance distribution of features and relative feature weights at the time of query processing. It is a simple yet effective approach that is free from the effects of the features' dimensions, ranges, internal feature normalization, and the distance measure. The approach can easily be adopted with any feature combination to improve retrieval quality. The proposed approach is empirically evaluated using two benchmark datasets for image classification (a subset of the Corel dataset, and Oliva and Torralba) and compared with existing approaches. Its performance is confirmed by a significant improvement over the independently evaluated baselines of previously proposed feature fusion approaches.
Keywords: feature fusion, image retrieval, membership function, normalization
Procedia PDF Downloads 346