Search results for: clustering images
2297 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction
Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun
Abstract:
The quality of being usable has become a basic requirement of a product from the consumer's perspective, and a product that fails this requirement ends up not being used. Identifying usability issues by analyzing quantitative and qualitative data collected from usability testing and evaluation activities aids the product design process, yet the lack of studies on analysis methodologies for qualitative text data in the usability field limits the potential of these data for more useful applications. At the same time, the rapid development of data analysis fields such as natural language processing, which enables computers to understand human language, and machine learning, which provides predictive models and clustering tools, makes the analysis of qualitative text data feasible. Therefore, this research aims to study the applicability of text-processing algorithms to the analysis of qualitative text data collected from usability activities. The research utilized datasets collected from an LG neckband headset usability experiment, consisting of headset survey text data, subject data, and product physical data. The analysis procedure, integrated with the text-processing algorithm, includes training the comments onto a vector space, labeling them with the subject and product physical feature data, and clustering, with the labels used to validate the resulting comment-vector clusters. The result shows 'volume and music control button' as the usability feature that matches best with the clusters of comment vectors: the centroid comments of one cluster emphasized button positions, while the centroid comments of the other cluster emphasized button interface issues. When the volume and music control buttons are designed separately, participants experienced less confusion, and the comments mentioned only the buttons' positions. When the volume and music control buttons are designed as a single button, participants experienced interface issues regarding the buttons, such as the operating methods of functions and confusion between the function buttons. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text-processing algorithms in analyzing qualitative text data from usability testing and evaluation. Keywords: usability, qualitative data, text-processing algorithm, natural language processing
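As an illustration of the kind of pipeline this abstract describes (comments mapped to a vector space, then clustered, with centroid comments inspected), the sketch below uses TF-IDF vectors and k-means as stand-ins; the paper does not name its embedding or clustering algorithm, and the example comments are invented.

```python
# Illustrative sketch only: TF-IDF and k-means stand in for the paper's
# unspecified embedding and clustering methods; the comments are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

comments = [
    "volume button is hard to find while wearing the headset",
    "music control and volume share one button and it is confusing",
    "the button position near the neck is uncomfortable",
]

# 1. Train comments onto a vector space
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)

# 2. Cluster the comment vectors
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# 3. Inspect the comment closest to each centroid, as the paper does
closest, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, X)
for cluster_id, idx in enumerate(closest):
    print(f"cluster {cluster_id} centroid comment: {comments[idx]}")
```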
Procedia PDF Downloads 285
2296 Identification of Watershed Landscape Character Types in Middle Yangtze River within Wuhan Metropolitan Area
Authors: Huijie Wang, Bin Zhang
Abstract:
In China, the middle reaches of the Yangtze River are well-developed, boasting a wealth of different types of watershed landscape. In this regard, landscape character assessment (LCA) can serve as a basis for protection, management and planning of trans-regional watershed landscape types. For this study, we chose the middle reaches of the Yangtze River in Wuhan metropolitan area as our study site, wherein the water system consists of rich variety in landscape types. We analyzed trans-regional data to cluster and identify types of landscape characteristics at two levels. 55 basins were analyzed as variables with topography, land cover and river system features in order to identify the watershed landscape character types. For watershed landscape, drainage density and degree of curvature were specified as special variables to directly reflect the regional differences of river system features. Then, we used the principal component analysis (PCA) method and hierarchical clustering algorithm based on the geographic information system (GIS) and statistical products and services solution (SPSS) to obtain results for clusters of watershed landscape which were divided into 8 characteristic groups. These groups highlighted watershed landscape characteristics of different river systems as well as key landscape characteristics that can serve as a basis for targeted protection of watershed landscape characteristics, thus helping to rationally develop multi-value landscape resources and promote coordinated development of trans-regions.Keywords: GIS, hierarchical clustering, landscape character, landscape typology, principal component analysis, watershed
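A minimal sketch of the described workflow (standardized basin variables, PCA, then hierarchical clustering into 8 groups) is given below; the variable names and file layout are assumptions for illustration, not the study's actual data.

```python
# Minimal sketch of the PCA + hierarchical clustering workflow described above.
# The CSV file and variable names are assumptions for illustration.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

basins = pd.read_csv("watershed_variables.csv")       # hypothetical file: one row per basin
features = basins[["relief", "slope", "forest_pct", "cropland_pct",
                   "drainage_density", "degree_of_curvature"]]   # assumed variable names

scaled = StandardScaler().fit_transform(features)

# Reduce the correlated landscape variables to principal components
pca = PCA(n_components=0.9, svd_solver="full")        # keep 90% of the variance
components = pca.fit_transform(scaled)

# Hierarchical (agglomerative) clustering into 8 character groups
clusters = AgglomerativeClustering(n_clusters=8, linkage="ward").fit_predict(components)
basins["landscape_group"] = clusters
print(basins.groupby("landscape_group").size())
```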
Procedia PDF Downloads 228
2295 Spatial Pattern and Predictors of Malaria in Ethiopia: Application of Auto Logistics Spatial Regression
Authors: Melkamu A. Zeru, Yamral M. Warkaw, Aweke A. Mitku, Muluwerk Ayele
Abstract:
Introduction: Malaria is a severe health threat in the world, mainly in Africa. It is a major cause of health problems, and the risk of morbidity and mortality associated with malaria cases is characterized by spatial variation across the country. This study aimed to investigate the spatial patterns and predictors of malaria distribution in Ethiopia. Methods: A weighted sample of 15,239 individuals with rapid diagnostic tests was obtained from the Central Statistical Agency and the Ethiopia Malaria Indicator Survey of 2015. Global Moran's I and Moran scatter plots were used to determine the distribution of malaria cases, whereas the local Moran's I statistic was used to identify exposed areas. In data manipulation, machine learning was used for variable reduction, and the statistical software R, Stata, and Python were used for data management and analysis. The auto logistics spatial binary regression model was used to investigate the predictors of malaria. Results: The final auto logistics regression model reported that male clients had a positive significant effect on malaria cases as compared to female clients [AOR=2.401, 95% CI: (2.125 - 2.713)]. The distribution of malaria across the regions was different. The highest incidence of malaria was found in Gambela [AOR=52.55, 95% CI: (40.54 - 68.12)], followed by Beneshangul [AOR=34.95, 95% CI: (27.159 - 44.963)]. Similarly, individuals in Amhara [AOR=0.243, 95% CI: (0.195 - 0.303)], Oromiya [AOR=0.197, 95% CI: (0.158 - 0.244)], Dire Dawa [AOR=0.064, 95% CI: (0.049 - 0.082)], Addis Ababa [AOR=0.057, 95% CI: (0.044 - 0.075)], Somali [AOR=0.077, 95% CI: (0.059 - 0.097)], SNNPR [AOR=0.329, 95% CI: (0.261 - 0.413)] and Harari [AOR=0.256, 95% CI: (0.201 - 0.325)] had a lower incidence of malaria as compared with Tigray. Furthermore, for a one-meter increase in altitude, the odds of a positive rapid diagnostic test (RDT) decrease by 1.6% [AOR = 0.984, 95% CI: (0.984 - 0.984)]. The use of a shared toilet facility was found as a protective factor for malaria in Ethiopia [AOR=1.671, 95% CI: (1.504 - 1.854)]. The spatial autocorrelation variable changes the constant from AOR = 0.471 for logistic regression to AOR = 0.164 for auto logistics regression. Conclusions: This study found that the incidence of malaria in Ethiopia had a spatial pattern associated with socio-economic, demographic, and geographic risk factors. Spatial clustering of malaria cases occurred in all regions, and the risk of clustering was different across the regions. The risk of malaria was found to be higher for those who live in houses with soil floors as compared to those who live in houses with cement or ceramic floors. Similarly, households with thatched, metal and thin, and other roof types have a higher risk of malaria than houses with ceramic tile roofs. Moreover, using a protected anti-mosquito net reduced the risk of malaria incidence. Keywords: malaria, Ethiopia, auto logistics, spatial model, spatial clustering
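The auto-logistic idea can be sketched, under simplifying assumptions, as an ordinary logistic regression augmented with a spatial autocovariate built from neighbouring clusters' outcomes; the weight matrix, covariates, and data below are placeholders, not the authors' specification.

```python
# Rough sketch of the auto-logistic idea: a logistic regression augmented with a
# spatial autocovariate (weighted average of neighbouring RDT outcomes).
# Weights, covariates, and data are simplified placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
coords = rng.uniform(0, 10, size=(n, 2))      # survey cluster locations (hypothetical)
X = rng.normal(size=(n, 3))                   # covariates, e.g. sex, altitude, region dummies
y = rng.binomial(1, 0.2, size=n)              # RDT result placeholder

# Row-standardised inverse-distance weights between survey clusters
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
W = 1.0 / (d + np.eye(n))                     # avoid division by zero on the diagonal
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)

autocov = W @ y                               # spatial autocovariate: neighbours' malaria status
X_auto = np.column_stack([X, autocov])

model = LogisticRegression(max_iter=1000).fit(X_auto, y)
odds_ratios = np.exp(model.coef_).ravel()     # coefficients interpreted as adjusted odds ratios
print(dict(zip(["x1", "x2", "x3", "spatial_autocov"], odds_ratios.round(3))))
```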
Procedia PDF Downloads 34
2294 Fruit Identification System in Sweet Orange Citrus (L.) Osbeck Using Thermal Imaging and Fuzzy
Authors: Ingrid Argote, John Archila, Marcelo Becker
Abstract:
In agriculture, intelligent systems applications have generated great advances in automating some of the processes in the production chain. In order to improve the efficiency of those systems, a vision system is proposed to estimate the number of fruits on sweet orange trees. This work presents a system proposal using the capture of thermal images and fuzzy logic. A bibliographical review has been done to analyze the state of the art of the different systems used in fruit recognition, and also the different applications of thermography in agricultural systems. The algorithm developed for this project uses fuzziness metrics for contrast improvement and segmentation of the image, and the Hough transform for the counting algorithm. In order to validate the proposed algorithm, a bank of images of sweet orange Citrus (L.) Osbeck acquired at the Maringá Farm was created. The tests with the algorithm indicated that the variation in temperature between the tree branches and the fruit is not very high, which makes image segmentation based on this difference harder and increases the number of false positives in the fruit counting algorithm. Recognition of isolated fruits with the proposed algorithm presented an overall accuracy of 90.5%, while for grouped fruits the accuracy was 81.3%. The experiments show the need for more suitable hardware to achieve better recognition of small temperature changes in the image. Keywords: Agricultural systems, Citrus, Fuzzy logic, Thermal images.
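A rough sketch of the counting stage (contrast enhancement followed by a circular Hough transform) is shown below; OpenCV's CLAHE stands in for the fuzziness-based contrast step, so this is a simplification of the authors' method.

```python
# Sketch of the counting stage only: contrast enhancement then a circular
# Hough transform. CLAHE replaces the fuzziness-parameter enhancement here.
import cv2

thermal = cv2.imread("orange_tree_thermal.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Contrast improvement (stand-in for the fuzziness-based enhancement)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(thermal)

# Light smoothing before circle detection
blurred = cv2.medianBlur(enhanced, 5)

# Circular Hough transform for fruit counting (parameters are illustrative)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                           param1=80, param2=30, minRadius=8, maxRadius=40)

count = 0 if circles is None else circles.shape[1]
print(f"estimated number of fruits: {count}")
```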
Procedia PDF Downloads 229
2293 Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks
Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Daniyal Haider, Xiaodong Yang
Abstract:
Chest X-rays are instrumental in the detection and monitoring of a wide array of diseases, including viral infections such as COVID-19, tuberculosis, pneumonia, lung cancer, and various cardiac and pulmonary conditions. To enhance the accuracy of diagnosis, artificial intelligence (AI) algorithms, particularly deep learning models like Convolutional Neural Networks (CNNs), are employed. However, these deep learning models demand a substantial and varied dataset to attain optimal precision. Generative Adversarial Networks (GANs) can be employed to create new data, thereby supplementing the existing dataset and enhancing the accuracy of deep learning models. Nevertheless, GANs have their limitations, such as issues related to stability, convergence, and the ability to distinguish between authentic and fabricated data. In order to overcome these challenges and advance the detection and classification of CXR normal and abnormal images, this study introduces a distinctive technique known as DCWGAN (Diverse Conditional Wasserstein GAN) for generating synthetic chest X-ray (CXR) images. The study evaluates the effectiveness of this Idiosyncratic DCWGAN technique using the ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset. Specifically, the ResNet50 model utilizing DCWGAN synthetic images achieved impressive performance metrics with an accuracy of 0.961, precision of 0.955, recall of 0.970, and F1-Measure of 0.963. These results indicate the promising potential for the early detection of diseases in CXR images using this Inimitable approach.Keywords: CNN, classification, deep learning, GAN, Resnet50
Procedia PDF Downloads 88
2292 Defect Detection for Nanofibrous Images with Deep Learning-Based Approaches
Authors: Gaokai Liu
Abstract:
Automatic defect detection for nanomaterial images is widely required in industrial scenarios. Deep learning approaches are considered as the most effective solutions for the great majority of image-based tasks. In this paper, an edge guidance network for defect segmentation is proposed. First, the encoder path with multiple convolution and downsampling operations is applied to the acquisition of shared features. Then two decoder paths both are connected to the last convolution layer of the encoder and supervised by the edge and segmentation labels, respectively, to guide the whole training process. Meanwhile, the edge and encoder outputs from the same stage are concatenated to the segmentation corresponding part to further tune the segmentation result. Finally, the effectiveness of the proposed method is verified via the experiments on open nanofibrous datasets.Keywords: deep learning, defect detection, image segmentation, nanomaterials
Procedia PDF Downloads 149
2291 Shark Detection and Classification with Deep Learning
Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti
Abstract:
Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution coming from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application spark pulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches of sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a result of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector’s accuracy as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. All data-generation methods were processed without manual interaction. As media-based remote monitoring strives to dominate methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page. Keywords: classification, data mining, Instagram, remote monitoring, sharks
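The classification stage (a CNN fine-tuned on shark images via transfer learning) can be sketched as below; the dataset path, backbone choice, and hyperparameters are placeholders rather than the authors' pipeline.

```python
# Minimal transfer-learning sketch in the spirit of the classification stage.
# Dataset folder, backbone, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("shark_images/train", transform=transform)  # hypothetical folder
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a ResNet50 backbone and replace the classification head
# (weights=None here; pretrained weights would normally be loaded, torchvision >= 0.13 API)
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # a single pass is shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```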
Procedia PDF Downloads 121
2290 Comparative Study of Tensile Properties of Cast and Hot Forged Alumina Nanoparticle Reinforced Composites
Authors: S. Ghanaraja, Subrata Ray, S. K. Nath
Abstract:
Particle reinforced Metal Matrix Composite (MMC) succeeds in synergizing the metallic matrix with ceramic particle reinforcements to result in improved strength, particularly at elevated temperatures, but adversely it affects the ductility of the matrix because of agglomeration and porosity. The present study investigates the outcome of tensile properties in a cast and hot forged composite reinforced simultaneously with coarse and fine particles. Nano-sized alumina particles have been generated by milling mixture of aluminum and manganese dioxide powders. Milled particles after drying are added to molten metal and the resulting slurry is cast. The microstructure of the composites shows good distribution of both the size categories of particles without significant clustering. The presence of nanoparticles along with coarser particles in a composite improves both strength and ductility considerably. Delay in debonding of coarser particles to higher stress is due to reduced mismatch in extension caused by increased strain hardening in presence of the nanoparticles. However, higher addition of powder mix beyond a limit results in deterioration of mechanical properties, possibly due to clustering of nanoparticles. The porosity in cast composite generally increases with the increasing addition of powder mix as observed during process and on forging it has got reduced. The base alloy and nanocomposites show improvement in flow stress which could be attributed to lowering of porosity and grain refinement as a consequence of forging.Keywords: aluminium, alumina, nano-particle reinforced composites, porosity
Procedia PDF Downloads 249
2289 A Combination of Anisotropic Diffusion and Sobel Operator to Enhance the Performance of the Morphological Component Analysis for Automatic Crack Detection
Authors: Ankur Dixit, Hiroaki Wagatsuma
Abstract:
Crack detection on concrete bridges is an important and constant task in civil engineering. Conventionally, humans inspect bridges for cracks to maintain their quality and reliability, but this process is very long and costly. To overcome such limitations, we used a drone with a digital camera to take images of the bridge deck, and these images are processed by morphological component analysis (MCA). The MCA technique is a strong application of sparse coding that explores the possibility of separating image components. In this paper, MCA has been used to decompose the image into coarse and fine components using two dictionaries, namely anisotropic diffusion and the wavelet transform. Anisotropic diffusion is an adaptive smoothing process that adjusts the diffusion coefficient using the gray level and gradient as features. The cracks in the image are enhanced by subtracting the diffused coarse image from the original image, and the results are treated with a Sobel edge detector and binary filtering to exhibit the cracks clearly. Our results demonstrated that the proposed MCA framework using anisotropic diffusion followed by the Sobel operator and binary filtering may contribute to the automation of crack detection even under severe open-field conditions such as bridge decks. Keywords: anisotropic diffusion, coarse component, fine component, MCA, Sobel edge detector and wavelet transform
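A sketch of the described chain, Perona-Malik style anisotropic diffusion, subtraction of the diffused coarse image from the original, then Sobel edge detection and binary thresholding, is given below; parameter values are illustrative assumptions.

```python
# Sketch: anisotropic diffusion -> subtract coarse image -> Sobel -> binary filter.
# Parameter values are illustrative assumptions, not the authors' settings.
import cv2
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.15):
    """Simple Perona-Malik diffusion on a float image."""
    u = img.astype(np.float32)
    for _ in range(n_iter):
        # finite-difference gradients in the four directions
        north = np.roll(u, -1, axis=0) - u
        south = np.roll(u, 1, axis=0) - u
        east = np.roll(u, -1, axis=1) - u
        west = np.roll(u, 1, axis=1) - u
        # conduction coefficients fall off with gradient magnitude
        c = lambda g: np.exp(-(g / kappa) ** 2)
        u = u + gamma * (c(north) * north + c(south) * south +
                         c(east) * east + c(west) * west)
    return u

deck = cv2.imread("bridge_deck.png", cv2.IMREAD_GRAYSCALE)   # hypothetical drone image
coarse = anisotropic_diffusion(deck)

# Fine component: cracks stand out after removing the smooth background
fine = cv2.absdiff(deck.astype(np.float32), coarse)

# Sobel edge detection followed by binary filtering
sx = cv2.Sobel(fine, cv2.CV_32F, 1, 0, ksize=3)
sy = cv2.Sobel(fine, cv2.CV_32F, 0, 1, ksize=3)
edges = cv2.magnitude(sx, sy)
_, cracks = cv2.threshold(cv2.convertScaleAbs(edges), 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("crack_mask.png", cracks)
```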
Procedia PDF Downloads 173
2288 Principle Component Analysis on Colon Cancer Detection
Authors: N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Rita Magdalena, R. D. Atmaja, Sofia Saidah, Ocky Tiaramukti
Abstract:
Colon cancer, or colorectal cancer, is a type of cancer that attacks the last part of the human digestive system. Lymphoma and carcinoma are types of cancer that attack the human colon. Colon cancer causes about half a million deaths every year. In Indonesia, colon cancer is the third most common cancer in women and the second in men. Unhealthy lifestyles, such as minimal fiber consumption, rare exercise, and lack of awareness of early detection, are factors behind the high number of colon cancer cases. The aim of this project is to produce a system that can detect and classify images into the colon cancer types lymphoma and carcinoma, or normal. The designed system used 198 colon tissue pathology images, consisting of 66 images of lymphoma, 66 images of carcinoma, and 66 of normal/healthy colon condition. This system classifies colon cancer starting from image preprocessing, feature extraction using Principal Component Analysis (PCA), and classification using the K-Nearest Neighbor (K-NN) method. The preprocessing stages are resizing, converting the RGB image to grayscale, edge detection, and finally histogram equalization. Tests are done by trying several K-NN input parameter settings. The result of this project is an image processing system that can detect and classify the type of colon cancer with high accuracy and low computation time. Keywords: carcinoma, colorectal cancer, k-nearest neighbor, lymphoma, principle component analysis
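A compact sketch of this pipeline (grayscale preprocessing with histogram equalization, PCA feature extraction, K-NN classification) might look as follows; random arrays stand in for the 198 pathology images, and the parameter values are assumed for illustration.

```python
# Compact sketch: grayscale preprocessing, PCA features, K-NN classification.
# Random arrays stand in for the 198 pathology images (66 per class).
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def preprocess(img, size=(128, 128)):
    gray = cv2.cvtColor(cv2.resize(img, size), cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray).ravel()          # histogram equalization, flattened

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(198, 150, 150, 3), dtype=np.uint8)  # placeholder images
y = np.repeat([0, 1, 2], 66)                       # 0 = normal, 1 = lymphoma, 2 = carcinoma

X = np.array([preprocess(img) for img in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

pca = PCA(n_components=50).fit(X_tr)               # feature extraction
knn = KNeighborsClassifier(n_neighbors=5).fit(pca.transform(X_tr), y_tr)
print("accuracy:", knn.score(pca.transform(X_te), y_te))
```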
Procedia PDF Downloads 205
2287 Neuron Imaging in Lateral Geniculate Nucleus
Authors: Sandy Bao, Yankang Bao
Abstract:
The understanding of information that is being processed in the brain, especially in the lateral geniculate nucleus (LGN), has been proven challenging for modern neuroscience and for researchers with a focus on how neurons process signals and images. In this paper, we are proposing a method to image process different colors within different layers of LGN, that is, green information in layers 4 & 6 and red & blue in layers 3 & 5 based on the surface dimension of layers. We take into consideration the images in LGN and visual cortex, and that the edge detected information from the visual cortex needs to be considered in order to return back to the layers of LGN, along with the image in LGN to form the new image, which will provide an improved image that is clearer, sharper, and making it easier to identify objects in the image. Matrix Laboratory (MATLAB) simulation is performed, and results show that the clarity of the output image has significant improvement.Keywords: lateral geniculate nucleus, matrix laboratory, neuroscience, visual cortex
Procedia PDF Downloads 279
2286 Contribution of Remote Sensing and GIS to the Study of the Impact of the Salinity of Sebkhas on the Quality of Groundwater: Case of Sebkhet Halk El Menjel (Sousse)
Authors: Gannouni Sonia, Hammami Asma, Saidi Salwa, Rebai Noamen
Abstract:
Water resources in Tunisia have experienced quantitative and qualitative degradation, especially in wetlands and sebkhas. The objective of this work is to study the spatio-temporal evolution of salinity over 29 years (from 1987 to 2016). A study of the connection between surface water and groundwater is necessary to know the degree of influence of the sebkha brines on the water table. The evolution of surface salinity is determined by remote sensing based on Landsat TM and OLI/TIRS satellite images from the years 1987, 2007, 2010, and 2016. The processing of these images allowed us to determine the NDVI (Normalized Difference Vegetation Index), the salinity index, and the surface temperature around the sebkha. In addition, through a geographic information system (GIS), we could establish a map of the distribution of salinity in the subsurface of the water table of Chott Mariem and Hergla/Sidi Bou Ali/Kondar. The results of the image processing and the calculation of the indices and surface temperature show an increase in salinity downstream of the sebkha and the development of vegetation cover upstream and in the western part of the sebkha. This richness may be due both to contamination by seawater infiltration from the barrier beach of Hergla and to the passage of groundwater to the sebkha. Keywords: spatio-temporal monitoring, salinity, satellite images, NDVI, sebkha
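The index computations mentioned above can be sketched for a Landsat 8 OLI scene as below; the band file names and the particular salinity-index formula are assumptions, since several salinity indices exist in the literature.

```python
# Sketch of the NDVI and salinity-index computations from Landsat OLI bands.
# File names and the chosen salinity-index form are assumptions.
import numpy as np
import rasterio

def read_band(path):
    with rasterio.open(path) as src:
        return src.read(1).astype(np.float32)

red = read_band("LC08_B4.tif")      # OLI band 4 (red); file names are hypothetical
nir = read_band("LC08_B5.tif")      # OLI band 5 (near infrared)
green = read_band("LC08_B3.tif")    # OLI band 3 (green)

eps = 1e-6
ndvi = (nir - red) / (nir + red + eps)      # vegetation cover around the sebkha
salinity_index = np.sqrt(green * red)       # one common form of soil salinity index

print("mean NDVI:", float(np.nanmean(ndvi)))
print("mean salinity index:", float(np.nanmean(salinity_index)))
```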
Procedia PDF Downloads 133
2285 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering
Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi
Abstract:
In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA. Accordingly, identifying the tissue origin of these DNA fragments from the plasma can result in more accurate and faster disease diagnosis and precise treatment protocols. Open chromatin regions are important epigenetic features of DNA that reflect cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. There have been several studies in the area of cancer liquid biopsy that integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to a huge amount of cost and time. To overcome these limitations, the idea of predicting OCRs from WGS is of particular importance. In this regard, we proposed a computational approach to target the prediction of open chromatin regions as an important epigenetic feature from cell-free DNA whole genome sequence data. To fulfill this objective, local sequencing depth is fed to our proposed algorithm, and the prediction of the most probable open chromatin regions from whole genome sequencing data can be carried out. Our method integrates a signal processing method with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering. To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples from the ATAC-DB database. The percentage of overlap between predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. As is evident, OCRs are mostly located at the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and the human gene TSS regions obtained from refTSS, and it showed proper accordance of around 52.04% and ~78% with all genes and the housekeeping genes, respectively. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to the existence of several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, which led to a tool named OCRDetector with some restrictions, like the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and the consideration of multiple features. However, we implemented a graph signal clustering based on a single depth feature in an unsupervised learning manner that resulted in faster performance and decent accuracy.
Overall, we tried to investigate the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate genetic and epigenetic aspects of a single whole genome sequencing data for efficient liquid biopsy-related analysis.Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering
Procedia PDF Downloads 150
2284 SEMCPRA-Sar-Esembled Model for Climate Prediction in Remote Area
Authors: Kamalpreet Kaur, Renu Dhir
Abstract:
Climate prediction is an essential component of climate research, which helps evaluate possible effects on economies, communities, and ecosystems. Climate prediction involves short-term weather prediction, seasonal prediction, and long-term climate change prediction. Climate prediction can use information gathered from satellites, ground-based stations, and ocean buoys, among other sources. In this paper, four architectures, ResNet50, VGG19, Inception-v3, and Xception, have been combined using an ensemble approach for overall performance and robustness. An ensemble of different models makes a prediction, and the majority vote determines the final prediction. The individual architectures ResNet50, VGG19, Inception-v3, and Xception efficiently classify the RSI-CB256 dataset, which contains satellite images, into cloudy and non-cloudy classes. The generated ensembled S-E model (Sar-ensembled model) provides an accuracy of 99.25%. Keywords: climate, satellite images, prediction, classification
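The majority-vote step of the ensemble can be sketched as follows; the per-model prediction arrays are placeholders standing in for the outputs of ResNet50, VGG19, Inception-v3, and Xception.

```python
# Minimal majority-vote sketch for combining per-model class predictions
# (cloudy = 1, non-cloudy = 0). The per-model arrays are placeholders.
import numpy as np

def majority_vote(predictions):
    """predictions: array of shape (n_models, n_images) with integer class labels."""
    votes = np.asarray(predictions)
    return np.array([np.bincount(votes[:, i]).argmax() for i in range(votes.shape[1])])

# e.g. class labels from ResNet50, VGG19, Inception-v3 and Xception on five images
resnet50_pred  = [1, 0, 1, 1, 0]
vgg19_pred     = [1, 0, 0, 1, 0]
inception_pred = [1, 1, 1, 1, 0]
xception_pred  = [0, 0, 1, 1, 1]

final = majority_vote([resnet50_pred, vgg19_pred, inception_pred, xception_pred])
print(final)        # ensemble decision per image: [1 0 1 1 0]
```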
Procedia PDF Downloads 74
2283 Radiologic Assessment of Orbital Dimensions Among Omani Subjects: Computed Tomography Imaging-Based Study
Authors: Marwa Al-Subhi, Eiman Al-Ajmi, Mallak Al-Maamari, Humood Al-Dhuhli, Srinivasa Rao
Abstract:
The orbit and its contents are affected by various pathologies and craniofacial anomalies. Sound knowledge of the normal orbital dimensions is clinically essential for successful surgical outcomes and also in the field of forensic anthropology. Racial, ethnic, and regional variations in the orbital dimensions have been reported. This study sought to determine the orbital dimensions of Omani subjects who had been referred for computed tomography (CT) images at a tertiary care hospital. A total of 273 patients’ CT images were evaluated retrospectively by using an electronic medical records database. The orbital dimensions were recorded using both axial and sagittal planes of CT images. The mean orbital index (OI) was found to be 83.25±4.83 and the prevalent orbital type was categorized as mesoseme. The mean orbital index was 83.34±5.05 and 83.16±4.57 in males and females, respectively, with their difference being statistically not significant (p=0.76). A statistically significant association was observed between the right and left orbits with regard to horizontal distance (p<0.05) and vertical distance (p<0.01) of orbit and OI (p<0.05). No significant difference between the OI and age groups was observed in both males and females. The mean interorbital distance and interzygomatic distance were found to be 19.45±1.52 mm and 95.59±4.08 mm, respectively. Both of these parameters were significantly higher in males (p<0.05). Results of the present study provide reference values of orbital dimensions in Omani subjects. The prevalent orbital type of Omani subjects is mesoseme, which is a hallmark of the white race.Keywords: orbit, orbital index, mesoseme, ethnicity, variation
Procedia PDF Downloads 150
2282 Hybrid EMPCA-Scott Approach for Estimating Probability Distributions of Mutual Information
Authors: Thuvanan Borvornvitchotikarn, Werasak Kurutach
Abstract:
Mutual information (MI) is widely used in medical image registration. In the analysis of different medical images, it is difficult to choose an optimal number of bins for calculating the probability distributions in MI. Therefore, this paper presents a new adaptive bin number selection approach named the hybrid EMPCA-Scott approach. This work combines expectation maximization principal component analysis (EMPCA) and the modified Scott's rule. The proposed approach solves the binning problem arising from the various intensity values in medical images. Experimental results of this work show lower registration errors compared to other adaptive binning approaches. Keywords: mutual information, EMPCA, Scott, probability distributions
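A sketch of how a Scott's-rule bin count can feed a histogram-based mutual information estimate is given below; the EMPCA stage of the paper is omitted, and the images are synthetic stand-ins.

```python
# Sketch: choose a bin count by Scott's rule, then estimate MI from the joint
# histogram of two images. The EMPCA stage of the paper is not included.
import numpy as np

def scott_bins(values):
    """Number of bins from Scott's rule: bin width h = 3.49 * sigma * n^(-1/3)."""
    values = np.asarray(values, dtype=np.float64).ravel()
    h = 3.49 * values.std() * len(values) ** (-1.0 / 3.0)
    return max(1, int(np.ceil((values.max() - values.min()) / h)))

def mutual_information(img_a, img_b, bins):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
fixed = rng.integers(0, 256, size=(128, 128))                       # stand-ins for two images
moving = np.clip(fixed + rng.normal(0, 10, fixed.shape), 0, 255)

bins = scott_bins(fixed)
print("bins:", bins, " MI:", mutual_information(fixed, moving, bins))
```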
Procedia PDF Downloads 249
2281 Localization of Mobile Robots with Omnidirectional Cameras
Authors: Tatsuya Kato, Masanobu Nagata, Hidetoshi Nakashima, Kazunori Matsuo
Abstract:
Localization of mobile robots is an important task in developing autonomous mobile robots. This paper proposes a method to estimate the position of a mobile robot using an omnidirectional camera mounted on the robot. Landmarks serving as reference points are set up on the field where the robot works. The omnidirectional camera, which can capture 360 [deg] surround images, takes photographs of these landmarks. The positions of the robot are estimated from the directions of these landmarks, which are extracted from the images by image processing. This method can obtain the robot positions without accumulating position errors. The accuracy of the robot positions estimated by the proposed method is evaluated through experiments. The results show that it can obtain the positions with small standard deviations. Therefore, the method has the potential for more accurate localization through tuning of appropriate offset parameters. Keywords: mobile robots, localization, omnidirectional camera, estimating positions
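The underlying geometry can be sketched as a least-squares intersection of bearing lines, assuming known landmark positions and a known robot heading; the landmark coordinates and bearings below are invented for illustration.

```python
# Sketch of the geometry: the robot position is the least-squares intersection of
# the bearing lines to known landmarks (robot heading assumed known).
import numpy as np

landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])   # known reference points [m]
bearings = np.radians([225.0, 315.0, 135.0])                 # measured directions to landmarks (hypothetical)

# Each bearing theta_i to landmark (xi, yi) gives the line:
#   sin(theta_i) * x - cos(theta_i) * y = sin(theta_i) * xi - cos(theta_i) * yi
A = np.column_stack([np.sin(bearings), -np.cos(bearings)])
b = np.sin(bearings) * landmarks[:, 0] - np.cos(bearings) * landmarks[:, 1]

position, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated robot position:", position)   # approximately [2. 2.] for this setup
```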
Procedia PDF Downloads 442
2280 Predicting the Human Impact of Natural Onset Disasters Using Pattern Recognition Techniques and Rule Based Clustering
Authors: Sara Hasani
Abstract:
This research focuses on natural sudden onset disasters characterised as ‘occurring with little or no warning and often cause excessive injuries far surpassing the national response capacities’. Based on the panel analysis of the historic record of 4,252 natural onset disasters between 1980 to 2015, a predictive method was developed to predict the human impact of the disaster (fatality, injured, homeless) with less than 3% of errors. The geographical dispersion of the disasters includes every country where the data were available and cross-examined from various humanitarian sources. The records were then filtered into 4252 records of the disasters where the five predictive variables (disaster type, HDI, DRI, population, and population density) were clearly stated. The procedure was designed based on a combination of pattern recognition techniques and rule-based clustering for prediction and discrimination analysis to validate the results further. The result indicates that there is a relationship between the disaster human impact and the five socio-economic characteristics of the affected country mentioned above. As a result, a framework was put forward, which could predict the disaster’s human impact based on their severity rank in the early hours of disaster strike. The predictions in this model were outlined in two worst and best-case scenarios, which respectively inform the lower range and higher range of the prediction. A necessity to develop the predictive framework can be highlighted by noticing that despite the existing research in literature, a framework for predicting the human impact and estimating the needs at the time of the disaster is yet to be developed. This can further be used to allocate the resources at the response phase of the disaster where the data is scarce.Keywords: disaster management, natural disaster, pattern recognition, prediction
Procedia PDF Downloads 153
2279 Enhancement of Underwater Haze Image with Edge Reveal Using Pixel Normalization
Authors: M. Dhana Lakshmi, S. Sakthivel Murugan
Abstract:
As light passes from source to observer in the water medium, it is scattered by the suspended particulate matter. This scattering effect will plague the captured images with non-uniform illumination, blurring details, halo artefacts, weak edges, etc. To overcome this, pixel normalization with an Amended Unsharp Mask (AUM) filter is proposed to enhance the degraded image. To validate the robustness of the proposed technique irrespective of atmospheric light, the considered datasets are collected on dual locations. For those images, the maxima and minima pixel intensity value is computed and normalized; then the AUM filter is applied to strengthen the blurred edges. Finally, the enhanced image is obtained with good illumination and contrast. Thus, the proposed technique removes the effect of scattering called de-hazing and restores the perceptual information with enhanced edge detail. Both qualitative and quantitative analyses are done on considering the standard non-reference metric called underwater image sharpness measure (UISM), and underwater image quality measure (UIQM) is used to measure color, sharpness, and contrast for both of the location images. It is observed that the proposed technique has shown overwhelming performance compared to other deep-based enhancement networks and traditional techniques in an adaptive manner.Keywords: underwater drone imagery, pixel normalization, thresholding, masking, unsharp mask filter
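A sketch of the enhancement chain (per-channel min-max pixel normalization followed by an unsharp-mask edge boost) is shown below; a standard Gaussian unsharp mask approximates the paper's AUM filter, whose exact form is not reproduced here.

```python
# Sketch: min-max pixel normalization, then an unsharp-mask style edge boost.
# A plain Gaussian unsharp mask stands in for the paper's AUM filter.
import cv2
import numpy as np

hazy = cv2.imread("underwater_frame.png")                     # hypothetical capture

# Pixel normalization: stretch each channel between its minimum and maximum
hazy = hazy.astype(np.float32)
mins = hazy.reshape(-1, 3).min(axis=0)
maxs = hazy.reshape(-1, 3).max(axis=0)
normalized = (hazy - mins) / (maxs - mins + 1e-6)

# Unsharp mask: original + amount * (original - blurred) strengthens weak edges
blurred = cv2.GaussianBlur(normalized, (0, 0), sigmaX=3)
amount = 1.5
enhanced = cv2.addWeighted(normalized, 1 + amount, blurred, -amount, 0)

cv2.imwrite("enhanced.png", np.clip(enhanced * 255, 0, 255).astype(np.uint8))
```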
Procedia PDF Downloads 194
2278 Classification of Regional Innovation Types and Region-Based Innovation Policies
Authors: Seongho Han, Dongkwan Kim
Abstract:
The focus of regional innovation policies is shifting from a central government to local governments. The central government demands that regions enforce autonomous and responsible regional innovation policies and that regional governments seek for innovation policies fit for regional characteristics. However, the central government and local governments have not arrived yet at a conclusion on what innovation policies are appropriate for regional circumstances. In particular, even if each local government is trying to find regional innovation strategies that are based on the needs of a region, its innovation strategies turn out to be similar with those of other regions. This leads to a consequence that is inefficient not only at a national level, but also at a regional level. Existing researches on regional innovation types point out that there are remarkable differences in the types or characteristics of innovation among the regions of a nation. In addition they imply that there would be no expected innovation output in cases in which policies are enforced with ignoring such differences. This means that it is undesirable to enforce regional innovation policies under a single standard. This research, given this problem, aims to find out the characteristics and differences in innovation types among the regions in Korea and suggests appropriate policy implications by classifying such characteristics and differences. This research, given these objectives, classified regions in consideration of the various indicators that comprise the innovation suggested by existing related researches and illustrated policies based on such characteristics and differences. This research used recent data, mainly from 2012, and as a methodology, clustering analysis based on multiple factor analysis was applied. Supplementary researches on dynamically analyzing stability in regional innovation types, establishing systematic indicators based on the regional innovation theory, and developing additional indicators are necessary in the future.Keywords: regional innovation policy, regional innovation type, region-based innovation, multiple factor analysis, clustering analysis
Procedia PDF Downloads 475
2277 The Detection of Implanted Radioactive Seeds on Ultrasound Images Using Convolution Neural Networks
Authors: Edward Holupka, John Rossman, Tye Morancy, Joseph Aronovitz, Irving Kaplan
Abstract:
A common modality for the treatment of early stage prostate cancer is the implantation of radioactive seeds directly into the prostate. The radioactive seeds are positioned inside the prostate to achieve optimal radiation dose coverage to the prostate. These radioactive seeds are positioned inside the prostate using transrectal ultrasound imaging. Once all of the planned seeds have been implanted, two-dimensional transaxial transrectal ultrasound images separated by 2 mm are obtained throughout the prostate, beginning at the base of the prostate up to and including the apex. A common deep neural network, called DetectNet, was trained to automatically determine the position of the implanted radioactive seeds within the prostate under ultrasound imaging. The network was trained using 950 training ultrasound images and 90 validation ultrasound images. The commonly used metrics for successful training were used to evaluate the efficacy and accuracy of the trained deep neural network and resulted in a loss_bbox (train) = 0.00, loss_coverage (train) = 1.89e-8, loss_bbox (validation) = 11.84, loss_coverage (validation) = 9.70, mAP (validation) = 66.87%, precision (validation) = 81.07%, and a recall (validation) = 82.29%, where train refers to the training image set and validation refers to the validation image set. On the hardware platform used, the training expended 12.8 seconds per epoch. The network was trained for over 10,000 epochs. In addition, the seed locations as determined by the deep neural network were compared to the seed locations as determined by commercial software based on a CT acquired one to three months after the implant. The deep learning approach was within 2.29 mm of the seed locations determined by the commercial software. The deep learning approach to the determination of radioactive seed locations is robust, accurate, and fast, and well within spatial agreement with the gold standard of CT-determined seed coordinates. Keywords: prostate, deep neural network, seed implant, ultrasound
Procedia PDF Downloads 198
2276 Deployment of Matrix Transpose in Digital Image Encryption
Authors: Okike Benjamin, Garba E J. D.
Abstract:
Encryption is used to conceal information from prying eyes. Presently, information and data encryption are common due to the volume of data and information in transit across the globe on a daily basis. Image encryption is yet to receive the attention from researchers that it deserves; in other words, video and multimedia documents are exposed to unauthorized accessors. The authors propose image encryption using matrix transpose. An algorithm that allows image encryption is developed. In this proposed image encryption technique, the image to be encrypted is split into parts based on the image size. Each part is encrypted separately using matrix transpose. The actual encryption is on the picture elements (pixels) that make up the image. After encrypting each part of the image, the positions of the encrypted parts are swapped before transmission of the image can take place. Swapping the positions of the parts is carried out to make the encrypted image more robust against decryption by any cryptanalyst. Keywords: image encryption, matrices, pixel, matrix transpose
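A toy illustration of the described scheme (split the image into parts, transpose each part, then swap part positions) is given below; the block size and swap order are arbitrary choices for the sketch, not the authors' parameters.

```python
# Toy illustration: split the image into square blocks, transpose each block,
# then swap block positions. Block size and swap order are arbitrary choices.
import numpy as np

def encrypt(img, block=64):
    h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
    img = img[:h, :w].copy()
    blocks = [img[r:r + block, c:c + block].T            # matrix transpose of each part
              for r in range(0, h, block)
              for c in range(0, w, block)]
    blocks = blocks[::-1]                                 # swap the positions of the parts
    out = np.zeros_like(img)
    i = 0
    for r in range(0, h, block):
        for c in range(0, w, block):
            out[r:r + block, c:c + block] = blocks[i]
            i += 1
    return out

def decrypt(img, block=64):
    return encrypt(img, block)                            # this toy scheme is its own inverse

gray = np.arange(256 * 256, dtype=np.uint8).reshape(256, 256)  # stand-in grayscale image
assert np.array_equal(decrypt(encrypt(gray)), gray)
```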
Procedia PDF Downloads 421
2275 Deep Feature Augmentation with Generative Adversarial Networks for Class Imbalance Learning in Medical Images
Authors: Rongbo Shen, Jianhua Yao, Kezhou Yan, Kuan Tian, Cheng Jiang, Ke Zhou
Abstract:
This study proposes a generative adversarial networks (GAN) framework to perform synthetic sampling in feature space, i.e., feature augmentation, to address the class imbalance problem in medical image analysis. A feature extraction network is first trained to convert images into feature space. Then the GAN framework incorporates adversarial learning to train a feature generator for the minority class through playing a minimax game with a discriminator. The feature generator then generates features for minority class from arbitrary latent distributions to balance the data between the majority class and the minority class. Additionally, a data cleaning technique, i.e., Tomek link, is employed to clean up undesirable conflicting features introduced from the feature augmentation and thus establish well-defined class clusters for the training. The experiment section evaluates the proposed method on two medical image analysis tasks, i.e., mass classification on mammogram and cancer metastasis classification on histopathological images. Experimental results suggest that the proposed method obtains superior or comparable performance over the state-of-the-art counterparts. Compared to all counterparts, our proposed method improves more than 1.5 percentage of accuracy.Keywords: class imbalance, synthetic sampling, feature augmentation, generative adversarial networks, data cleaning
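The Tomek-link cleaning step can be sketched as below; Gaussian noise stands in for the GAN-generated minority features, so this shows only the cleaning of the augmented feature space, not the adversarial training.

```python
# Sketch of the data-cleaning step: Tomek links are removed after synthetic
# minority features are added. Noise stands in for GAN-generated features.
import numpy as np
from imblearn.under_sampling import TomekLinks

rng = np.random.default_rng(0)
majority = rng.normal(0.0, 1.0, size=(500, 128))          # extracted features, majority class
minority = rng.normal(1.5, 1.0, size=(60, 128))           # minority class features

# Stand-in for GAN-generated minority features (would come from the trained generator)
synthetic = minority[rng.integers(0, len(minority), 440)] + rng.normal(0, 0.1, (440, 128))

X = np.vstack([majority, minority, synthetic])
y = np.array([0] * len(majority) + [1] * (len(minority) + len(synthetic)))

X_clean, y_clean = TomekLinks().fit_resample(X, y)        # drop conflicting boundary samples
print("before:", np.bincount(y), "after:", np.bincount(y_clean))
```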
Procedia PDF Downloads 127
2274 Sign Language Recognition of Static Gestures Using Kinect™ and Convolutional Neural Networks
Authors: Rohit Semwal, Shivam Arora, Saurav, Sangita Roy
Abstract:
This work proposes a supervised framework with deep convolutional neural networks (CNNs) for vision-based sign language recognition of static gestures. Our approach addresses the acquisition and segmentation of correct inputs for the CNN-based classifier. Microsoft Kinect™ sensor, despite complex environmental conditions, can track hands efficiently. Skin Colour based segmentation is applied on cropped images of hands in different poses, used to depict different sign language gestures. The segmented hand images are used as an input for our classifier. The CNN classifier proposed in the paper is able to classify the input images with a high degree of accuracy. The system was trained and tested on 39 static sign language gestures, including 26 letters of the alphabet and 13 commonly used words. This paper includes a problem definition for building the proposed system, which acts as a sign language translator between deaf/mute and the rest of the society. It is then followed by a focus on reviewing existing knowledge in the area and work done by other researchers. It also describes the working principles behind different components of CNNs in brief. The architecture and system design specifications of the proposed system are discussed in the subsequent sections of the paper to give the reader a clear picture of the system in terms of the capability required. The design then gives the top-level details of how the proposed system meets the requirements.Keywords: sign language, CNN, HCI, segmentation
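The skin-colour segmentation step can be sketched as below; the HSV thresholds are rough, commonly used values rather than the authors' calibration, and the input crop is assumed to come from Kinect hand tracking.

```python
# Sketch of the segmentation step: HSV skin-colour mask applied to a cropped hand
# image before classification. Thresholds are rough, commonly used values.
import cv2
import numpy as np

hand = cv2.imread("hand_crop.png")                          # hypothetical Kinect-tracked crop

hsv = cv2.cvtColor(hand, cv2.COLOR_BGR2HSV)
lower = np.array([0, 40, 60], dtype=np.uint8)
upper = np.array([25, 255, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)

# Clean the mask and keep only the skin region as classifier input
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
segmented = cv2.bitwise_and(hand, hand, mask=mask)
classifier_input = cv2.resize(segmented, (64, 64))          # fixed-size input for the CNN
cv2.imwrite("segmented_hand.png", classifier_input)
```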
Procedia PDF Downloads 157
2273 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images
Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod
Abstract:
The major factors in radiotherapy for head and neck (HN) cancers include the patient's anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparison of measured transit EPID images to predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to some critical organ changes, as the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN IMRT (Head and Neck Intensity-Modulated Radiation Therapy) using a novel comparison method: organ-of-interest gamma analysis. This method is more sensitive to specific organ changes. Five randomly selected replanned HN IMRT patients, whose replans were caused by tumour shrinkage and patient weight loss that critically affected parotid size, were selected and their transit dosimetry evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotids, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replanned CT was quantified using gamma analysis with 3%, 3 mm criteria. Moreover, the gamma pass-rate is calculated only within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit of the original CT and the replan CT were 42.8% (±17.2%) and 54.7% (±21.5%), respectively. The gamma pass-rates for the other projected organs were greater than 80%. Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and the rationale for replanning given by the radiation oncologists. This showed that using only registration of 3D-CBCT to the original CT does not provide the dosimetric impact of anatomical changes. Using transit EPID images with organ-of-interest gamma analysis can provide additional information for treatment plan suitability assessment. Keywords: re-plan, anatomical change, transit electronic portal imaging device, EPID, head and neck
Procedia PDF Downloads 216
2272 Scintigraphic Image Coding of Region of Interest Based on SPIHT Algorithm Using Global Thresholding and Huffman Coding
Authors: A. Seddiki, M. Djebbouri, D. Guerchi
Abstract:
Medical imaging produces human body pictures in digital form. Since these imaging techniques produce prohibitive amounts of data, compression is necessary for storage and communication purposes. Many current compression schemes provide a very high compression rate but with considerable loss of quality. On the other hand, in some areas in medicine, it may be sufficient to maintain high image quality only in region of interest (ROI). This paper discusses a contribution to the lossless compression in the region of interest of Scintigraphic images based on SPIHT algorithm and global transform thresholding using Huffman coding.Keywords: global thresholding transform, huffman coding, region of interest, SPIHT coding, scintigraphic images
Procedia PDF Downloads 367
2271 Melanoma and Non-Melanoma, Skin Lesion Classification, Using a Deep Learning Model
Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul
Abstract:
Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer as the most common type of cancer in Caucasians. The alarming increase in Skin Cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine Learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy rate of detection and decrease possible human errors. Several studies have shown the diagnostic performance of computer algorithms outperformed dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using the International Skin Imaging Collaboration (ISIC) image samples. The dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show improvement in performance with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as support vector machine (SVM), Residual network (ResNet50), EfficientNetB0, EfficientNetB4, and VGG16.Keywords: deep learning - VGG16 - efficientNet - CNN – ensemble – dermoscopic images - melanoma
Procedia PDF Downloads 81
2270 New Approaches for the Handwritten Digit Image Features Extraction for Recognition
Authors: U. Ravi Babu, Mohd Mastan
Abstract:
The present paper proposes a novel approach for a handwritten digit recognition system. The paper extracts digit image features based on a distance measure and derives an algorithm to classify the digit images. The distance measure is performed on the thinned image, thinning being one of the preprocessing techniques in image processing. The paper mainly concentrates on the extraction of features from the digit image for effective recognition of the numeral. To assess the effectiveness of the proposed method, it was tested on the MNIST database, CENPARMI, CEDAR, and newly collected data. The proposed method was applied to more than one lakh digit images and achieved good comparative recognition results. A recognition rate of about 97.32% was achieved. Keywords: handwritten digit recognition, distance measure, MNIST database, image features
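One plausible reading of the distance-measure features (distances from the image centre to the thinned skeleton, summarized per angular sector) is sketched below with a k-NN classifier; the paper does not specify its exact distance measure, and the digit data here are random placeholders.

```python
# Sketch of one possible distance-based feature on a thinned digit, paired with
# k-NN. The exact distance measure of the paper is not specified; data are placeholders.
import numpy as np
from skimage.morphology import skeletonize
from sklearn.neighbors import KNeighborsClassifier

def distance_features(binary_digit, n_bins=16):
    """Mean centre-to-skeleton distance in each angular sector of the thinned digit."""
    skeleton = skeletonize(binary_digit > 0)
    ys, xs = np.nonzero(skeleton)
    cy, cx = (binary_digit.shape[0] - 1) / 2.0, (binary_digit.shape[1] - 1) / 2.0
    dists = np.hypot(ys - cy, xs - cx)
    angles = np.arctan2(ys - cy, xs - cx)
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    feats = np.zeros(n_bins)
    for i in range(n_bins):
        in_bin = (angles >= edges[i]) & (angles < edges[i + 1])
        if in_bin.any():
            feats[i] = dists[in_bin].mean()
    return feats

# Placeholder data; in practice the 28x28 binary digits come from MNIST/CENPARMI/CEDAR.
rng = np.random.default_rng(0)
digits = (rng.random((100, 28, 28)) > 0.7).astype(np.uint8)
labels = rng.integers(0, 10, 100)

X = np.array([distance_features(d) for d in digits])
knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print("training accuracy (placeholder data):", knn.score(X, labels))
```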
Procedia PDF Downloads 461
2269 Improving Temporal Correlations in Empirical Orthogonal Function Expansions for Data Interpolating Empirical Orthogonal Function Algorithm
Authors: Ping Bo, Meng Yunshan
Abstract:
Satellite-derived sea surface temperature (SST) is a key parameter for many operational and scientific applications. However, the disadvantage of SST data is a high percentage of missing data which is mainly caused by cloud coverage. Data Interpolating Empirical Orthogonal Function (DINEOF) algorithm is an EOF-based technique for reconstructing the missing data and has been widely used in oceanographic field. The reconstruction of SST images within a long time series using DINEOF can cause large discontinuities and one solution for this problem is to filter the temporal covariance matrix to reduce the spurious variability. Based on the previous researches, an algorithm is presented in this paper to improve the temporal correlations in EOF expansion. Similar with the previous researches, a filter, such as Laplacian filter, is implemented on the temporal covariance matrix, but the temporal relationship between two consecutive images which is used in the filter is considered in the presented algorithm, for example, two images in the same season are more likely correlated than those in the different seasons, hence the latter one is less weighted in the filter. The presented approach is tested for the monthly nighttime 4-km Advanced Very High Resolution Radiometer (AVHRR) Pathfinder SST for the long-term period spanning from 1989 to 2006. The results obtained from the presented algorithm are compared to those from the original DINEOF algorithm without filtering and from the DINEOF algorithm with filtering but without taking temporal relationship into account.Keywords: data interpolating empirical orthogonal function, image reconstruction, sea surface temperature, temporal filter
Procedia PDF Downloads 324
2268 TACTICAL: Ram Image Retrieval in Linux Using Protected Mode Architecture’s Paging Technique
Authors: Sedat Aktas, Egemen Ulusoy, Remzi Yildirim
Abstract:
This article explains how to get a ram image from a computer with a Linux operating system and what steps should be followed while getting it. What we mean by taking a ram image is the process of dumping the physical memory instantly and writing it to a file. This process can be likened to taking a picture of everything in the computer’s memory at that moment. This process is very important for tools that analyze ram images. Volatility can be given as an example because before these tools can analyze ram, images must be taken. These tools are used extensively in the forensic world. Forensic, on the other hand, is a set of processes for digitally examining the information on any computer or server on behalf of official authorities. In this article, the protected mode architecture in the Linux operating system is examined, and the way to save the image sample of the kernel driver and system memory to disk is followed. Tables and access methods to be used in the operating system are examined based on the basic architecture of the operating system, and the most appropriate methods and application methods are transferred to the article. Since there is no article directly related to this study on Linux in the literature, it is aimed to contribute to the literature with this study on obtaining ram images. LIME can be mentioned as a similar tool, but there is no explanation about the memory dumping method of this tool. Considering the frequency of use of these tools, the contribution of the study in the field of forensic medicine has been the main motivation of the study due to the intense studies on ram image in the field of forensics.Keywords: linux, paging, addressing, ram-image, memory dumping, kernel modules, forensic
Procedia PDF Downloads 117