Search results for: BCR-ABL fusion transcript types

5971 Frequency of BCR-ABL Fusion Transcript Types with Chronic Myeloid Leukemia by Multiplex Polymerase Chain Reaction in Srinagarind Hospital, Khon Kaen Thailand

Authors: Kanokon Chaicom, Chitima Sirijerachai, Kanchana Chansung, Pinsuda Klangsang, Boonpeng Palaeng, Prajuab Chaimanee, Pimjai Ananta

Abstract:

Chronic myeloid leukemia (CML) is characterized by the consistent involvement of the Philadelphia chromosome (Ph), which is derived from a reciprocal translocation between chromosomes 9 and 22. The main product of the t(9;22)(q34;q11) translocation, the BCR-ABL fusion gene, is found in the leukemic clone of at least 95% of CML patients. There are two major forms of the BCR-ABL fusion gene, both involving ABL exon 2 but including different exons of the BCR gene. The b2a2 (e13a2) and b3a2 (e14a2) transcripts code for a p210 protein. Another fusion gene leads to the expression of an e1a2 transcript, which codes for a p190 protein. Other less common fusion genes are b3a3 and b2a3, which code for a p203 protein, and the e19a2 (c3a2) transcript, which codes for a p230 protein. The frequency of these transcript types varies among populations. In this study, we aimed to report the frequency of BCR-ABL fusion transcript types in CML detected by multiplex PCR (polymerase chain reaction) in Srinagarind Hospital, Khon Kaen, Thailand. Multiplex PCR for BCR-ABL was performed on 58 patients to detect the different types of BCR-ABL transcripts of the t(9;22). All patients examined were positive for some type of BCR-ABL rearrangement. The majority of the patients (93.10%) expressed one of the p210 BCR-ABL transcripts; the b3a2 and b2a2 transcripts were detected in 53.45% and 39.65%, respectively. The e1a2 transcript was expressed in 3.75% of patients. Co-expression of p210/p230 was detected in 3.45%. Co-expression of p210/p190 was not detected. Multiplex PCR is useful, time-saving, and reliable for the detection of BCR-ABL transcript types. The frequency of each rearrangement in CML varies among populations.

Keywords: chronic myeloid leukemia, BCR-ABL fusion transcript types, multiplex PCR, frequency of BCR-ABL fusion

Procedia PDF Downloads 244
5970 Clinical Relevance of TMPRSS2-ERG Fusion Marker for Prostate Cancer

Authors: Shalu Jain, Anju Bansal, Anup Kumar, Sunita Saxena

Abstract:

Objectives: The novel TMPRSS2:ERG gene fusion is a common somatic event in prostate cancer that in some studies is linked with a more aggressive disease phenotype. This study therefore aims to determine whether clinical variables are associated with the presence of the TMPRSS2:ERG fusion gene transcript in Indian patients with prostate cancer. Methods: We evaluated the association of clinical variables with the presence or absence of the TMPRSS2:ERG gene fusion in prostate cancer and BPH patients. Patients referred for prostate biopsy because of an abnormal DRE and/or elevated serum PSA were enrolled in this prospective clinical study. TMPRSS2:ERG mRNA copies were quantified in prostate biopsy samples (N=42) using a TaqMan real-time PCR assay. The T2:ERG assay detects the gene fusion mRNA isoform joining TMPRSS2 exon 1 to ERG exon 4. Results: Histopathology confirmed 25 cases as prostate adenocarcinoma (PCa) and 17 patients as benign prostatic hyperplasia (BPH). Of the 25 PCa cases, 16 (64%) were T2:ERG fusion positive. All 17 BPH controls were fusion negative. The T2:ERG fusion transcript was exclusively specific for prostate cancer, as no case of BPH was found to carry the T2:ERG fusion, giving 100% specificity. The positive predictive value of the fusion marker for prostate cancer is thus 100%, and the negative predictive value is 65.3%. The T2:ERG fusion marker is significantly associated with clinical variables such as the number of positive cores in the prostate biopsy, Gleason score, serum PSA, perineural invasion, perivascular invasion, and periprostatic fat involvement. Conclusions: Prostate cancer is a heterogeneous disease that may be defined by molecular subtypes such as the TMPRSS2:ERG fusion. In the present prospective study, the T2:ERG quantitative assay demonstrated high specificity for predicting biopsy outcome; sensitivity was similar to the prevalence of T2:ERG gene fusions in prostate tumors. These data suggest that further improvement in diagnostic accuracy could be achieved using a nomogram that combines T2:ERG with other markers and risk factors for prostate cancer.
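
As a consistency check on the figures reported above, the counts given in the abstract (25 PCa cases of which 16 were fusion positive, and 17 BPH cases, all fusion negative) reproduce the stated specificity, PPV, and NPV from the standard definitions; the ~65.4% value rounds to the reported 65.3%.

Sensitivity = TP / (TP + FN) = 16 / (16 + 9) = 64%
Specificity = TN / (TN + FP) = 17 / (17 + 0) = 100%
PPV = TP / (TP + FP) = 16 / (16 + 0) = 100%
NPV = TN / (TN + FN) = 17 / (17 + 9) ≈ 65.4%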

Keywords: prostate cancer, genetic rearrangement, TMPRSS2:ERG fusion, clinical variables

Procedia PDF Downloads 444
5969 Variations in the Angulation of the First Sacral Spinous Process Angle Associated with Sacrocaudal Fusion in Greyhounds

Authors: Sa'ad M. Ismail, Hung-Hsun Yen, Christina M. Murray, Helen M. S. Davies

Abstract:

In the dog, the median sacral crest is formed by the fusion of the sacral spinous processes. In greyhounds with standard sacrums this crest comprises three fused spinous processes, while in greyhounds with sacrocaudal fusion it comprises four. In the present study, variations in the angulation of the first sacral spinous process associated with different types of sacrocaudal fusion in the greyhound were investigated. Sacrums were collected from 207 greyhounds: 102 standard sacrums (type A, unfused) and 105 with different types of sacrocaudal fusion (types B, C, and D). Sacrums were cleaned by boiling, dried, placed on their ventral surface on a flat surface, and photographed from the left side using a digital camera at a fixed distance. The first sacral spinous process angle (1st SPA) was defined as the angle formed between the cranial border of the cranial ridge of the first sacral spinous process and the line extending across the most dorsal surface points of the spinous processes of S1, S2, and S3. Image-Pro Express Version 5.0 imaging software was used to draw and measure the angles. Two photographs were taken of each sacrum, and two repeat measurements were taken of each angle. The mean value of the 1st SPA in greyhounds with sacrocaudal fusion (98.99°, SD ± 11, n = 105) was smaller than that in greyhounds with standard sacrums (99.77°, SD ± 9.18, n = 102), but the difference was not significant at P < 0.05. Among greyhounds with different types of sacrocaudal fusion, the mean values of the 1st SPA were as follows: type B, 97.73°, SD ± 10.94, n = 39; type C, 101.42°, SD ± 10.51, n = 52; and type D, 94.22°, SD ± 11.30, n = 12. These angles were significantly different from each other across the fusion types (P < 0.05). Comparing the mean value of the 1st SPA in standard sacrums (type A) with that for each type of fusion separately showed that the only significantly different angulation (P < 0.05) was between standard sacrums and sacrums with sacrocaudal fusion type D (only body fusion between S1 and Ca1). Different types of sacrocaudal fusion were therefore associated with variations in the angle of the first sacral spinous process. These variations may affect the alignment and biomechanics of the sacral area and the pattern of movement and/or the force transmitted by both hind limbs to the cranial parts of the body, and may alter the loading of other parts of the body. We concluded that any variation in the anatomical features of the sacrum might change the function of the sacrum or surrounding anatomical structures during movement.

Keywords: angulation of first sacral spinous process, biomechanics, greyhound, locomotion, sacrocaudal fusion

Procedia PDF Downloads 313
5968 Variations in the 7th Lumbar (L7) Vertebra Length Associated with Sacrocaudal Fusion in Greyhounds

Authors: Sa`ad M. Ismail, Hung-Hsun Yen, Christina M. Murray, Helen M. S. Davies

Abstract:

The lumbosacral junction, where the 7th lumbar vertebra (L7) articulates with the sacrum, is a clinically important area in the dog. L7 is normally shorter than the other lumbar vertebrae, and it has been reported that variations in L7 length may be associated with other abnormal anatomical findings, including reduction or absence of part of the median sacral crest. In this study, 53 greyhound cadavers were placed in right lateral recumbency, and two lateral radiographs were taken of the lumbosacral region of each greyhound. The lengths of the 6th lumbar vertebra (L6) and L7 were measured using radiographic measurement software and defined as the mean of three lines (dorsal, middle, and ventral) drawn from the caudal to the cranial edge of the L6 and L7 vertebrae between specific landmarks. Sacrocaudal fusion was found in 41.5% of the greyhounds. The mean values of the length of L6, the length of L7, and the L6/L7 length ratio in greyhounds with sacrocaudal fusion were all greater than those in greyhounds with standard sacrums (three sacral vertebrae). There was a significant difference (P < 0.05) in the mean length of L7 between greyhounds without sacrocaudal fusion (mean = 29.64, SD ± 2.07) and those with sacrocaudal fusion (mean = 30.86, SD ± 1.80), but there was no significant difference in the mean length of L6. Among the different types of sacrocaudal fusion, the longest L7 was found in greyhounds with sacrum type D, an intermediate length in those with sacrum type B, and the shortest in those with sacrum type C; the mean values of the L6/L7 ratio were 1.11 (SD ± 0.043), 1.15 (SD ± 0.025), and 1.15 (SD ± 0.011) for types B, C, and D, respectively. No significant differences in the mean lengths of L6 or L7 were found among the different types of sacrocaudal fusion. The occurrence of sacrocaudal fusion might affect directly connected anatomical structures such as L7. The variation in L7 length between greyhounds with and without sacrocaudal fusion may reflect the possible sequence of the fusion process, and variations in L7 length in greyhounds may thus be associated with the occurrence of sacrocaudal fusion. The variation in vertebral length may affect the alignment and biomechanical properties of the sacrum and may alter its loading. We concluded that any variation in the anatomical features of the sacrum might change the function of the sacrum or the surrounding anatomical structures.

Keywords: biomechanics, Greyhound, sacrocaudal fusion, locomotion, 6th Lumbar (L6) Vertebra, 7th Lumbar (L7) Vertebra, ratio of the L6/L7 length

Procedia PDF Downloads 373
5967 TARF: Web Toolkit for Annotating RNA-Related Genomic Features

Authors: Jialin Ma, Jia Meng

Abstract:

Genomic features, i.e., genome-based coordinates, are commonly used to represent biological features such as genes, RNA transcripts, and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches to these tasks involve manipulating a gene database, converting from genome-based to transcript-based coordinates, and using visualization methods capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed TARF, a dedicated web toolkit for annotating RNA-related genomic features. TARF is intended to provide a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded features in BED format and either specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool fulfills three main functions. First, it annotates the features with gene and RNA transcript components. For every feature provided by the user, overlaps with RNA transcript components are identified, and the information is combined into one table available for copying and download; summary statistics on ambiguous assignments are also provided. Second, the tool provides a convenient visualization of the features at the single gene/transcript level. For a selected gene, the tool shows the features together with the gene model in a genome-based view, and also maps the features to transcript-based coordinates to show their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated using the Guitar R/Bioconductor package. The distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with three different types of genomic features, related to the chromatin mark H3K4me3, RNA N6-methyladenosine (m6A), and RNA 5-methylcytosine (m5C), obtained from ChIP-Seq, MeRIP-Seq, and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A, and m5C are enriched near transcription start sites, stop codons, and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
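
To illustrate the core annotation step described above, the following minimal Python sketch assigns genomic features to transcript components by simple interval overlap. It is not the TARF implementation; the component intervals and feature positions are made-up examples, and the real tool operates on BED/GTF input and full transcript models.

# Minimal sketch of assigning genomic features to transcript components by interval overlap.
# Hypothetical coordinates; the real TARF tool works on BED/GTF input and spliced transcripts.

# Transcript components given as genome-based (start, end) intervals of one transcript.
components = {
    "5'UTR": (1000, 1199),
    "CDS": (1200, 1799),
    "3'UTR": (1800, 2100),
}

# Features (e.g., RNA modification sites) given as single genomic positions.
features = [1050, 1250, 1790, 1950, 2500]

def annotate(position, components):
    """Return the transcript component overlapping the position, or 'intergenic/other'."""
    for name, (start, end) in components.items():
        if start <= position <= end:
            return name
    return "intergenic/other"

for pos in features:
    print(pos, annotate(pos, components))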

Keywords: RNA-related genomic features, annotation, visualization, web server

Procedia PDF Downloads 209
5966 Changes in the Median Sacral Crest Associated with Sacrocaudal Fusion in the Greyhound

Authors: S. M. Ismail, H-H Yen, C. M. Murray, H. M. S. Davies

Abstract:

A recent study reported a 33% incidence of complete sacrocaudal fusion in greyhounds, compared to a 3% incidence in other dogs. In the dog, the median sacral crest is formed by the fusion of the sacral spinous processes. Separation of the 1st spinous process from the median crest of the sacrum in the dog has been reported as a diagnostic indicator of type one lumbosacral transitional vertebra (LTV). LTV is a congenital spinal anomaly that includes either sacralization of the caudal lumbar part or lumbarization of the most cranial sacral segment of the spine. In this study, the absence or reduction of fusion (presence of separation) between the 1st and 2nd spinous processes of the median sacral crest was identified in association with sacrocaudal fusion in the greyhound, without any feature of LTV. To provide quantitative data on the absence or reduction of fusion in the median sacral crest between the 1st and 2nd sacral spinous processes in association with sacrocaudal fusion, 204 dog sacrums free of any pathological changes (192 greyhounds, 9 beagles, and 3 Labradors) were grouped based on the occurrence and type of fusion and on the presence, absence, or reduction of the median sacral crest between the 1st and 2nd sacral spinous processes. Sacrums were described and classified as follows: F, complete fusion (crest present); N, absence of fusion; and R, short crest (fusion reduced but not absent). Of the 204 sacrums, 57% were standard (3 vertebrae) and 43% were fused (4 vertebrae). The type of sacrum had a significant (p < .05) association with the absence and reduction of fusion between the 1st and 2nd sacral spinous processes of the median sacral crest. In the 108 greyhounds with standard sacrums (3 vertebrae), the percentages of F, N, and R were 45%, 23%, and 23%, respectively, while in the 84 fused (4 vertebrae) sacrums the percentages of F, N, and R were 3%, 87%, and 10%, respectively; these percentages were significantly different between standard and fused sacrums (p < .05). This indicates that absence of spinous process fusion in the median sacral crest was found in a large percentage of the greyhounds in this study and was particularly prevalent in those with sacrocaudal fusion; therefore, in this breed at least, absence of sacral spinous process fusion may be unlikely to be associated with LTV.

Keywords: greyhound, median sacral crest, sacrocaudal fusion, sacral spinous process

Procedia PDF Downloads 446
5965 Breast Cancer Prediction Using Score-Level Fusion of Machine Learning and Deep Learning Models

Authors: Sam Khozama, Ali M. Mayya

Abstract:

Breast cancer is one of the most common cancer types in women. Early prediction of breast cancer helps physicians detect cancer in its early stages. Large cancer datasets need very powerful tools for analysis and prediction. Machine learning and deep learning are two of the most efficient tools for predicting cancer based on textual data. In this study, we developed a fusion model combining machine learning and deep learning models. Long Short-Term Memory (LSTM) and ensemble learning with hyperparameter optimization are used, and their outputs are combined by score-level fusion to obtain the final prediction. Experiments are performed on the Breast Cancer Surveillance Consortium (BCSC) dataset after balancing and grouping the class categories. Five different training scenarios are used, and the tests show that the designed fusion model improved performance by 3.3% compared to the individual models.
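
A minimal sketch of the score-level fusion idea, i.e., a weighted combination of the probability scores of two independently trained models, is shown below. The models, fusion weight, and synthetic data are illustrative stand-ins, not the authors' LSTM/ensemble pipeline.

# Sketch of score-level fusion: weighted combination of two models' class-probability scores.
# Stand-in models and synthetic data; the paper fuses an LSTM with an optimized ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model_a = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # stand-in for the deep model
model_b = GradientBoostingClassifier().fit(X_tr, y_tr)        # stand-in for the tuned ensemble

w = 0.5  # fusion weight (hypothetical; could be tuned on a validation split)
scores = w * model_a.predict_proba(X_te)[:, 1] + (1 - w) * model_b.predict_proba(X_te)[:, 1]
y_pred = (scores >= 0.5).astype(int)

print("fused accuracy:", accuracy_score(y_te, y_pred))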

Keywords: machine learning, deep learning, cancer prediction, breast cancer, LSTM, fusion

Procedia PDF Downloads 164
5964 Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory

Authors: Samar M. Alqhtani, Suhuai Luo, Brian Regan

Abstract:

Data fusion technology can be an effective way to extract useful information from multiple sources of data, and it has been widely applied in various applications. This paper presents a multimedia data fusion approach for event detection in Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect events. Two types of data enter the fusion. The first is features extracted from text using the bag-of-words method, calculated with term frequency-inverse document frequency (TF-IDF) weighting. The second is visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied to fuse the information from these two sources. Our experiments indicate that, compared with approaches using an individual data source, the proposed data fusion approach increases prediction accuracy for event detection. The experimental results showed that the proposed method achieved a high accuracy of 0.97, compared with 0.93 using text only and 0.86 using images only.
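
As an illustration of the combination step, the following is a minimal sketch of Dempster's rule for two mass functions over the frame {event, no event}. The mass values are hypothetical; in the paper they are derived from the text (TF-IDF) and image (SIFT) pipelines.

# Sketch of Dempster's rule of combination for two sources over the frame {E, notE}.
# Hypothetical masses; in the paper they come from the text and image classifiers.
from itertools import product

FRAME = frozenset({"E", "notE"})

def combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

m_text = {frozenset({"E"}): 0.6, frozenset({"notE"}): 0.1, FRAME: 0.3}
m_image = {frozenset({"E"}): 0.5, frozenset({"notE"}): 0.2, FRAME: 0.3}

print(combine(m_text, m_image))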

Keywords: data fusion, Dempster-Shafer theory, data mining, event detection

Procedia PDF Downloads 411
5963 The Optimization of Decision Rules in Multimodal Decision-Level Fusion Scheme

Authors: Andrey V. Timofeev, Dmitry V. Egorov

Abstract:

This paper introduces an original method for parametric optimization of the structure of a multimodal decision-level fusion scheme, which combines the partial classification results obtained from an assembly of mono-modal classifiers. As a result, a multimodal fusion classifier with the minimum total error rate is obtained.

Keywords: classification accuracy, fusion solution, total error rate, multimodal fusion classifier

Procedia PDF Downloads 467
5962 Age Determination from Epiphyseal Union of Bones at Shoulder Joint in Girls of Central India

Authors: B. Tirpude, V. Surwade, P. Murkey, P. Wankhade, S. Meena

Abstract:

There are no statistical data establishing the variation in epiphyseal fusion in girls of the central India population. This significant oversight can lead to the exclusion of persons of interest in a forensic investigation. Epiphyseal fusion of the proximal end of the humerus in eighty females was analyzed radiologically to assess the range of variation of epiphyseal fusion at each age. In the study, the X-ray films of the subjects were divided into three groups on the basis of the degree of fusion: firstly, those showing no epiphyseal fusion (N); secondly, those showing partial union (PC); and thirdly, those showing complete fusion (C). The observations made were compared with previous studies.

Keywords: epiphyseal union, shoulder joint, proximal end of humerus

Procedia PDF Downloads 498
5961 Performance of Hybrid Image Fusion: Implementation of Dual-Tree Complex Wavelet Transform Technique

Authors: Manoj Gupta, Nirmendra Singh Bhadauria

Abstract:

Most applications in image processing require high spatial and high spectral resolution in a single image; for example, satellite imaging systems, traffic monitoring systems, and long-range sensor fusion systems all use image processing. However, most of the available equipment is not capable of providing this type of data. The sensor in a surveillance system can only cover the view of a small area for a particular focus, yet the demanding applications of such systems require a view with high coverage of the field. Image fusion provides the possibility of combining different sources of information. In this paper, we decompose the images using the DT-CWT, fuse them using the average and hybrid (maxima and average) pixel-level techniques, and then compare the quality of the resulting images using PSNR.
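
The pixel-level fusion rules and the PSNR comparison can be sketched as follows. This simplified example works directly on synthetic pixel arrays and omits the DT-CWT decomposition stage described in the paper.

# Sketch of average and hybrid (maxima + average) pixel-level fusion with a PSNR check.
# Synthetic test images; the paper applies these rules after a DT-CWT decomposition.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.uniform(0, 255, (64, 64))
img_a = np.clip(reference + rng.normal(0, 10, reference.shape), 0, 255)
img_b = np.clip(reference + rng.normal(0, 10, reference.shape), 0, 255)

fused_avg = (img_a + img_b) / 2.0             # average fusion rule
fused_max = np.maximum(img_a, img_b)          # maxima selection rule
fused_hybrid = (fused_avg + fused_max) / 2.0  # hybrid of maxima and average

def psnr(ref, img, peak=255.0):
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

for name, img in [("average", fused_avg), ("hybrid", fused_hybrid)]:
    print(name, round(psnr(reference, img), 2), "dB")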

Keywords: image fusion, DWT, DT-CWT, PSNR, average image fusion, hybrid image fusion

Procedia PDF Downloads 606
5960 Implementation of Sensor Fusion Structure of 9-Axis Sensors on the Multipoint Control Unit

Authors: Jun Gil Ahn, Jong Tae Kim

Abstract:

In this paper, we study the sensor fusion structure on the multipoint control unit (MCU). Sensor fusion using a Kalman filter for 9-axis sensors is considered. The 9-axis inertial sensor is the combination of a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer. We implement the sensor fusion structure among the sensor hubs in the MCU and measure the execution time, power consumption, and total energy. Experiments with real data from a 9-axis sensor at 20 MHz show that the average power consumptions are 44 mW and 48 mW on Cortex-M0 and Cortex-M3 MCUs, respectively, and the execution times are 613.03 µs and 305.6 µs, respectively.
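
A minimal single-axis sketch of the Kalman-filter fusion idea, fusing a gyro-propagated angle with an accelerometer-derived angle, is given below. The noise parameters and sample data are hypothetical, and the actual implementation in the paper runs on the MCU sensor hubs rather than in Python.

# Single-axis Kalman filter sketch: predict with the gyro rate, correct with the accelerometer angle.
# Hypothetical noise parameters and synthetic measurements; illustrative only.
import numpy as np

dt = 0.01            # 100 Hz sample period
Q, R = 1e-4, 1e-2    # assumed process and measurement noise variances
angle, P = 0.0, 1.0  # state estimate and its variance

rng = np.random.default_rng(1)
true_angle = 0.0
for _ in range(100):
    true_rate = 0.5                                # deg/s, constant test motion
    true_angle += true_rate * dt
    gyro = true_rate + rng.normal(0, 0.05)         # noisy gyro rate
    accel_angle = true_angle + rng.normal(0, 0.1)  # noisy accelerometer angle

    # Predict step
    angle += gyro * dt
    P += Q
    # Update step
    K = P / (P + R)
    angle += K * (accel_angle - angle)
    P *= (1 - K)

print("estimated:", round(angle, 3), "true:", round(true_angle, 3))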

Keywords: 9-axis sensor, Kalman filter, MCU, sensor fusion

Procedia PDF Downloads 504
5959 Efficient Feature Fusion for Noise Iris in Unconstrained Environment

Authors: Yao-Hong Tsai

Abstract:

This paper presents an efficient fusion algorithm for iris images to generate stable features for recognition in unconstrained environments. Recently, iris recognition systems have focused on real scenarios in daily life without the subject's cooperation. Under large variations in the environment, the objective of this paper is to combine information from multiple images of the same iris. The result of image fusion is a new image that is more stable for further iris recognition than each original noisy iris image. A wavelet-based approach for multi-resolution image fusion is applied in the fusion process. Detection of the iris image is based on the AdaBoost algorithm, and a local binary pattern (LBP) histogram is then applied for texture classification with a weighting scheme. Experiments showed that the features generated by the proposed fusion algorithm can improve the performance of verification systems based on iris recognition.
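
The texture-description step can be illustrated with a basic 3x3 local binary pattern computation, sketched below on a synthetic patch. The full system additionally uses AdaBoost-based detection, wavelet fusion, and a weighting scheme that are not shown here.

# Sketch of a basic 3x3 LBP code and histogram on a synthetic grayscale patch.
import numpy as np

def lbp_image(img):
    """Return LBP codes for interior pixels using the 8 neighbours in a fixed order."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            center = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= center:
                    code |= 1 << bit
            codes[i - 1, j - 1] = code
    return codes

patch = np.random.default_rng(2).integers(0, 256, (16, 16))
hist, _ = np.histogram(lbp_image(patch), bins=256, range=(0, 256))
print(hist[:10])  # first few bins of the LBP histogram used as a texture feature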

Keywords: image fusion, iris recognition, local binary pattern, wavelet

Procedia PDF Downloads 367
5958 Sampling Two-Channel Nonseparable Wavelets and Its Applications in Multispectral Image Fusion

Authors: Bin Liu, Weijie Liu, Bin Sun, Yihui Luo

Abstract:

To address the lower spatial resolution and block artifacts that fusion methods based on the separable wavelet transform produce in the resulting fused image, a new sampling mode based on multi-resolution analysis of the two-channel nonseparable wavelet transform, whose dilation matrix is [1,1;1,-1], is presented, and a multispectral image fusion method based on this sampling mode is proposed. Filter banks related to this kind of wavelet are constructed, and multiresolution decomposition of the intensity of the multispectral (MS) image and of the panchromatic image is performed in the sampled mode using the constructed filter banks. The low- and high-frequency coefficients are fused by different fusion rules. The experimental results show that this method gives a good visual effect. Its fusion performance has been noted to outperform the IHS fusion method, as well as the fusion methods based on DWT, IHS-DWT, IHS-Contourlet transform, and IHS-Curvelet transform, in preserving both spectral quality and high spatial resolution information. Furthermore, compared with the fusion method based on the nonsubsampled two-channel nonseparable wavelet, the proposed method has been observed to give higher spatial resolution and good global spectral information.

Keywords: image fusion, two-channel sampled nonseparable wavelets, multispectral image, panchromatic image

Procedia PDF Downloads 440
5957 An Evolutionary Perspective on the Role of Extrinsic Noise in Filtering Transcript Variability in Small RNA Regulation in Bacteria

Authors: Rinat Arbel-Goren, Joel Stavans

Abstract:

Cell-to-cell variations in transcript or protein abundance, called noise, may give rise to phenotypic variability between isogenic cells, enhancing the probability of survival under stress conditions. These variations may be introduced by post-transcriptional regulatory processes such as the stoichiometric degradation of target transcripts by non-coding small RNAs in bacteria. As a model system, we study the iron homeostasis network in Escherichia coli, in which the small RNA RyhB regulates the expression of various targets. Using fluorescence reporter genes to detect protein levels and single-molecule fluorescence in situ hybridization to monitor transcript levels in individual cells allows us to compare noise at both the transcript and protein levels. The experimental results and computer simulations show that extrinsic noise buffers, through a feed-forward loop configuration, the increase in variability introduced at the transcript level by iron deprivation, illuminating the important role that extrinsic noise plays during stress. Surprisingly, extrinsic noise also decouples the fluctuations of two different targets, in spite of RyhB being a common upstream factor degrading both. Thus, phenotypic variability increases under stress conditions through the decoupling of target fluctuations in the same cell rather than through an increase in the noise of each. We also present preliminary results on the adaptation of cells to prolonged iron deprivation in order to shed light on the evolutionary role of post-transcriptional downregulation by small RNAs.

Keywords: cell-to-cell variability, Escherichia coli, noise, single-molecule fluorescence in situ hybridization (smFISH), transcript

Procedia PDF Downloads 164
5956 Multi-Channel Information Fusion in C-OTDR Monitoring Systems: Various Approaches to Classify of Targeted Events

Authors: Andrey V. Timofeev

Abstract:

The paper presents new results concerning the selection of an optimal information fusion formula for ensembles of C-OTDR channels. The goal of information fusion is to create an integral classifier designed for effective classification of seismoacoustic target events. The LPBoost (LP-β and LP-B variants), Multiple Kernel Learning, and Weighting Inversely as Lipschitz Constants (WILC) approaches were compared. WILC is a brand new approach to the optimal fusion of ensembles of Lipschitz classifiers. Results of practical usage are presented.

Keywords: Lipschitz Classifier, classifiers ensembles, LPBoost, C-OTDR systems

Procedia PDF Downloads 461
5955 Machine Learning Classification of Fused Sentinel-1 and Sentinel-2 Image Data Towards Mapping Fruit Plantations in Highly Heterogenous Landscapes

Authors: Yingisani Chabalala, Elhadi Adam, Khalid Adem Ali

Abstract:

Mapping smallholder fruit plantations using optical data is challenging due to morphological landscape heterogeneity and crop types with overlapping spectral signatures. Furthermore, cloud cover limits the use of optical sensing, especially in subtropical climates where it is persistent. This research assessed the effectiveness of Sentinel-1 (S1) and Sentinel-2 (S2) data for mapping fruit trees and co-existing land-use types using support vector machine (SVM) and random forest (RF) classifiers independently. These classifiers were also applied to the fused data from the two sensors. Feature ranks were extracted using the RF mean decrease accuracy (MDA) and forward variable selection (FVS) to identify optimal spectral windows for classifying fruit trees. Based on RF MDA and FVS, the SVM classifier applied to the fused satellite data gave relatively high classification accuracy, with overall accuracy (OA) = 91.6% and kappa coefficient = 0.91. Application of SVM to S1, S2, the S2 selected variables, and the S1-S2 fusion independently produced OA = 27.64%, kappa coefficient = 0.13; OA = 87%, kappa coefficient = 86.89%; OA = 69.33%, kappa coefficient = 69%; and OA = 87.01%, kappa coefficient = 87%, respectively. The results also indicated that the optimal spectral bands for fruit tree mapping are green (B3) and SWIR_2 (B10) for S2, whereas for S1 it is the vertical-horizontal (VH) polarization band. Including the textural metrics from the VV channel improved the discrimination of crops and co-existing land-use cover types. The fusion approach proved robust and well suited for accurate smallholder fruit plantation mapping.
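
The classification workflow, stacking S1 and S2 features, ranking them with RF importance, and classifying with an SVM, can be sketched as follows. The synthetic feature matrix stands in for the actual Sentinel band and texture stacks, and the RF importance ranking is a simplified stand-in for the MDA/FVS selection used in the study.

# Sketch: fuse S1/S2 feature stacks, rank bands with RF importance, classify with an SVM.
# Synthetic data in place of the real Sentinel-1/2 band and texture features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 4 "S1" features and 10 correlated "S2" features per sample.
X_s1, y = make_classification(n_samples=600, n_features=4, n_informative=3,
                              n_redundant=0, n_classes=3, random_state=0)
X_s2 = X_s1 @ rng.normal(size=(4, 10)) + rng.normal(scale=0.5, size=(600, 10))
X_fused = np.hstack([X_s1, X_s2])            # simple layer stacking as the fusion step

X_tr, X_te, y_tr, y_te = train_test_split(X_fused, y, test_size=0.3, random_state=0)

# Rank features with RF importance (stand-in for the MDA ranking in the paper).
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:6]

# Classify the top-ranked fused features with an SVM and report OA and kappa.
svm = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr[:, top], y_tr)
y_pred = svm.predict(X_te[:, top])
print("OA:", accuracy_score(y_te, y_pred), "kappa:", cohen_kappa_score(y_te, y_pred))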

Keywords: smallholder agriculture, fruit trees, data fusion, precision agriculture

Procedia PDF Downloads 56
5954 Multimodal Data Fusion Techniques in Audiovisual Speech Recognition

Authors: Hadeer M. Sayed, Hesham E. El Deeb, Shereen A. Taie

Abstract:

In the big data era, we face a diversity of datasets from different sources in different domains that describe a single life event. These datasets consist of multiple modalities, each of which has a different representation, distribution, scale, and density. Multimodal fusion is the concept of integrating information from multiple modalities into a joint representation with the goal of predicting an outcome through a classification or regression task. In this paper, multimodal fusion techniques are classified into two main classes: model-agnostic techniques and model-based approaches. The paper provides a comprehensive study of recent research in each class and outlines the benefits and limitations of each. Furthermore, the audiovisual speech recognition task is presented as a case study of multimodal data fusion approaches, and the open issues arising from the limitations of current studies are discussed. This paper can be considered a useful guide for researchers interested in the field of multimodal data fusion, and audiovisual speech recognition in particular.

Keywords: multimodal data, data fusion, audio-visual speech recognition, neural networks

Procedia PDF Downloads 114
5953 Image Fusion Based Eye Tumor Detection

Authors: Ahmed Ashit

Abstract:

Image fusion is a significant and efficient image processing method used for detecting different types of tumors. It has been used as an effective combination technique for obtaining high-quality images that combine the anatomy and physiology of an organ, and it is a key element of large biomedical machines for diagnosing cancer, such as PET-CT scanners. This work aims to develop an image analysis system for the detection of eye tumors. Different image processing methods are used to extract the tumor and then mark it on the original image. The images are first smoothed using median filtering. The background of the image is subtracted and then added back to the original, resulting in a brighter area of interest, i.e., the tumor area. The images are adjusted to increase the intensity of their pixels, which leads to clearer and brighter images. Once the images are enhanced, the edges are detected using Canny operators, resulting in a segmented image comprising only the pupil and the tumor for abnormal images, and the pupil only for normal images that have no tumor. The normal and abnormal images were collected from two sources: "Miles Research" and "Eye Cancer". The computerized experimental results show that the developed image fusion based eye tumor detection system is capable of detecting the eye tumor and segmenting it so that it can be superimposed on the original image.
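
The processing pipeline described above (median filtering, background subtraction and re-addition, intensity adjustment, Canny edge detection) can be sketched with OpenCV as follows. The file name and all parameter values are placeholders, not values from the paper.

# Sketch of the described pipeline: median filter -> background subtraction -> brighten -> Canny.
# "eye.png" and every parameter value here are placeholders chosen for illustration.
import cv2

img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)

smoothed = cv2.medianBlur(img, 5)                         # remove speckle noise
background = cv2.GaussianBlur(smoothed, (51, 51), 0)      # coarse background estimate
foreground = cv2.subtract(smoothed, background)           # background subtraction
enhanced = cv2.add(foreground, img)                       # add back to brighten the area of interest

adjusted = cv2.convertScaleAbs(enhanced, alpha=1.3, beta=10)  # increase pixel intensity
edges = cv2.Canny(adjusted, 50, 150)                          # segment pupil/tumor boundaries

cv2.imwrite("eye_edges.png", edges)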

Keywords: image fusion, eye tumor, canny operators, superimposed

Procedia PDF Downloads 365
5952 Integrating Time-Series and High-Spatial Remote Sensing Data Based on Multilevel Decision Fusion

Authors: Xudong Guan, Ainong Li, Gaohuan Liu, Chong Huang, Wei Zhao

Abstract:

Due to the low spatial resolution of MODIS data, the accuracy of extracting small patches in landscapes with a high degree of fragmentation is greatly limited. To this end, this study combines Landsat data, with its higher spatial resolution, and MODIS data, with its higher temporal resolution, through decision-level fusion. Considering the importance of the land heterogeneity factor in the fusion process, it is incorporated as a weighting factor used to linearly weight the Landsat classification result and the MODIS classification result. Three levels were used to complete the data fusion process: the pixel level of the MODIS data, the pixel level of the Landsat data, and an object level that connects these two levels. The multilevel decision fusion scheme was tested at two sites in the lower Mekong basin. A comparison test showed that the classification accuracy was improved over the single-data-source classification results in terms of overall accuracy. The method was also compared with the two-level combination results and with a weighted-sum decision-rule-based approach. The decision fusion scheme is extensible to other multi-resolution data decision fusion applications.
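
A simplified sketch of the decision-level weighting step, linearly combining per-class scores from the two classifications with a heterogeneity-based weight, follows. The probability maps and the weight map are synthetic illustrations rather than real MODIS/Landsat outputs.

# Sketch of decision-level fusion: per-pixel linear weighting of two class-probability maps.
# Synthetic probability maps and heterogeneity weights; illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n_classes, h, w = 4, 50, 50

def random_probs(shape):
    p = rng.random(shape)
    return p / p.sum(axis=0, keepdims=True)

p_landsat = random_probs((n_classes, h, w))   # high spatial resolution source
p_modis = random_probs((n_classes, h, w))     # high temporal resolution source

# Heterogeneity factor in [0, 1]: heterogeneous pixels lean on Landsat, homogeneous on MODIS.
heterogeneity = rng.random((h, w))
fused = heterogeneity * p_landsat + (1 - heterogeneity) * p_modis

classified = fused.argmax(axis=0)             # final class label per pixel
print(classified.shape, np.bincount(classified.ravel()))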

Keywords: image classification, decision fusion, multi-temporal, remote sensing

Procedia PDF Downloads 125
5951 Medical Imaging Fusion: A Teaching-Learning Simulation Environment

Authors: Cristina Maria Ribeiro Martins Pereira Caridade, Ana Rita Ferreira Morais

Abstract:

The use of computational tools has become essential in the context of interactive learning, especially in engineering education. In the medical industry, teaching medical image processing techniques is a crucial part of training biomedical engineers, as it has integrated applications in healthcare facilities and hospitals. The aim of this article is to present a teaching-learning simulation tool, developed in MATLAB with a graphical user interface, for medical image fusion that explores different image fusion methodologies and processes in combination with image pre-processing techniques. The application uses different algorithms and medical fusion techniques in real time, allowing users to view original and fused images, compare processed and original images, adjust parameters, and save images. The proposed tool offers an innovative teaching and learning environment: a dynamic and motivating simulation through which biomedical engineering students can acquire knowledge of medical image fusion techniques and the skills necessary for the training of biomedical engineers. In conclusion, the developed simulation tool provides real-time visualization of the original and fused images and the possibility to test, evaluate, and advance the students' knowledge of medical image fusion. It also facilitates the exploration of medical imaging applications, specifically image fusion, which is critical in the medical industry. Teachers and students can make adjustments and/or create new functions, making the simulation environment adaptable to new techniques and methodologies.

Keywords: image fusion, image processing, teaching-learning simulation tool, biomedical engineering education

Procedia PDF Downloads 132
5950 A Multi Sensor Monochrome Video Fusion Using Image Quality Assessment

Authors: M. Prema Kumar, P. Rajesh Kumar

Abstract:

The increasing interest in image fusion (combining images of two or more modalities, such as infrared and visible light radiation) has led to a need for accurate and reliable image assessment methods. This paper gives a novel approach to merging the information content from several videos taken of the same scene in order to build a combined video that contains the finest information coming from the different source videos. This process, known as video fusion, helps provide an image of superior quality to the source images (the term quality connotes a measurement specific to the particular application). In this technique, different sensors, whose redundant information can be reduced, are used for the various cameras required to capture the images. In this paper, an image fusion technique based on multi-resolution singular value decomposition (MSVD) is used. Image fusion by MSVD is very similar to wavelet-based fusion: the idea behind MSVD is to replace the FIR filters in the wavelet transform with the singular value decomposition (SVD). It is computationally very simple and is well suited to real-time applications such as remote sensing and astronomy.

Keywords: multi sensor image fusion, MSVD, image processing, monochrome video

Procedia PDF Downloads 573
5949 Multi-Focus Image Fusion Using SFM and Wavelet Packet

Authors: Somkait Udomhunsakul

Abstract:

In this paper, a multi-focus image fusion method using Spatial Frequency Measurements (SFM) and the wavelet packet transform is proposed. In the proposed fusion approach, the two source images are first transformed and decomposed into sixteen subbands using the wavelet packet transform. Next, each subband is partitioned into sub-blocks, and the clearer regions in each block are identified using the Spatial Frequency Measurement (SFM). Finally, the fused image is reconstructed by performing the inverse wavelet packet transform. From the experimental results, it was found that the proposed method outperformed the traditional SFM-based methods in terms of objective and subjective assessments.
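
The spatial frequency measurement used for block selection can be sketched as below. The selection here operates on synthetic blocks and omits the wavelet packet decomposition and inverse transform stages.

# Sketch of the Spatial Frequency Measurement (SFM) and the block-wise selection rule.
# Synthetic blocks; in the paper SFM is applied to wavelet packet subband sub-blocks.
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2) from row-wise and column-wise first differences."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

rng = np.random.default_rng(4)
sharp = rng.normal(0, 30, (8, 8))      # high local variation -> "in focus"
blurred = rng.normal(0, 5, (8, 8))     # low local variation -> "out of focus"

# Selection rule: keep the block with the larger spatial frequency.
chosen = sharp if spatial_frequency(sharp) > spatial_frequency(blurred) else blurred
print("SF sharp:", round(spatial_frequency(sharp), 2),
      "SF blurred:", round(spatial_frequency(blurred), 2))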

Keywords: multi-focus image fusion, wavelet packet, spatial frequency measurement

Procedia PDF Downloads 474
5948 Bimodal Biometrics System Using Fusion of Iris and Fingerprint

Authors: Attallah Bilal, Hendel Fatiha

Abstract:

This paper proposes a bimodal biometric system for identity verification using iris and fingerprint, at the matching score level, with a weighted sum of scores technique. The features are extracted from the pre-processed images of the iris and fingerprint. These features of a query image are compared with those of a database image to obtain matching scores. The individual scores generated after matching are passed to the fusion module. This module consists of three major steps: normalization, generation of similarity scores, and fusion of the weighted scores. The final score is then used to declare the person genuine or an impostor. The system was tested on the CASIA database and gives an overall accuracy of 91.04%, with a FAR of 2.58% and an FRR of 8.34%.
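
The fusion module's three steps (normalization, similarity score generation, and weighted-sum fusion with a decision threshold) can be sketched as follows. The raw scores, weights, and threshold are hypothetical values chosen for illustration.

# Sketch of matching-score-level fusion: normalize, weight, sum, threshold.
# Hypothetical scores, weights, and threshold; not the paper's tuned values.
import numpy as np

iris_scores = np.array([0.82, 0.40, 0.91, 0.35])         # raw iris matching scores
finger_scores = np.array([120.0, 300.0, 80.0, 310.0])    # raw fingerprint distances

def minmax(x):
    return (x - x.min()) / (x.max() - x.min())

iris_n = minmax(iris_scores)                 # higher = better match
finger_n = 1.0 - minmax(finger_scores)       # distances inverted into similarities

w_iris, w_finger, threshold = 0.6, 0.4, 0.5
fused = w_iris * iris_n + w_finger * finger_n
decision = np.where(fused >= threshold, "genuine", "impostor")
print(list(zip(np.round(fused, 2), decision)))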

Keywords: iris, fingerprint, sum rule, fusion

Procedia PDF Downloads 370
5947 Quantum Magnetic Effects of P-B Fusion in Plasma Focus Devices

Authors: M. Habibi

Abstract:

The feasibility of proton-boron fusion in plasmoids caused by magnetohydrodynamic instabilities in plasma focus devices is studied analytically. In plasmoids, the fusion power for 76 keV < Ti < 1500 keV exceeds the bremsstrahlung loss (W/Pb = 5.39). In this situation, the gain factor and the ratio of Ti to Te for a typical 150 kJ plasma focus device will be 7.8 and 4.8, respectively. When the ion viscous heating effect is also considered, W/Pb and Ti/Te will be 2.7 and 6, respectively. A strong magnetic field reduces the ion-electron collision rate due to the quantization of electron orbits. While there is approximately no change in the electron-ion collision rate, the effect of the quantum magnetic field makes the ions much hotter than the electrons, which enhances the ratio of fusion power to bremsstrahlung loss. Therefore, self-sustained p-11B fusion reactions would be possible, and it could be said that a p-11B fuelled plasma focus device is a clean and efficient source of energy.

Keywords: plasmoids, p11B fuel, ion viscous heating, quantum magnetic field, plasma focus device

Procedia PDF Downloads 465
5946 Comparative Analysis of Hybrid Dynamic Stabilization and Fusion for Degenerative Disease of the Lumbosacral Spine: Finite Element Analysis

Authors: Mohamed Bendoukha, Mustapha Mosbah

Abstract:

Radiographic findings suggest that asymptomatic adjacent segment disease (ASD) is common after lumbar fusion, although this does not correlate with functional outcomes, while the compensatory increase in motion and stresses at the level adjacent to a fusion is well known to be associated with ASD. Newly developed hybrid stabilization constructs are designed to substitute for, mostly, the superior level of the fusion, in an attempt to reduce the number of fused levels and the likelihood of the degeneration process at the adjacent levels during fusion with pedicle screws. Nevertheless, their biomechanical efficiency still remains unknown, and complications associated with failure of the constructs, such as screw loosening and toggling, should be elucidated. In the current study, a finite element (FE) analysis was performed using a validated L2/S1 model subjected to a moment of 7.5 Nm and a follower load of 400 N to assess the biomechanical behavior of hybrid constructs based on dynamic topping-off and semi-rigid fusion. The residual range of motion (ROM), the stress distribution at the fused and adjacent levels, and the stress distribution at the disc and the cage-endplate interface with respect to changes in bone quality were investigated. The hybrid instrumentation was associated with a reduction in compressive stresses in the adjacent-level disc compared to the fusion construct and showed a substantially high axial force in the implant, while the fusion instrumentation increased the motion in both flexion and extension.

Keywords: intervertebral disc, lumbar spine, degenerative nuclesion, L4-L5, range of motion, finite element model, hyperelasticity

Procedia PDF Downloads 185
5945 Adaptive Dehazing Using Fusion Strategy

Authors: M. Ramesh Kanthan, S. Naga Nandini Sujatha

Abstract:

The goal of haze removal algorithms is to enhance and recover details of the scene from a foggy image. For enhancement, the proposed method focuses on two main components: (i) image enhancement based on adaptive contrast histogram equalization, and (ii) an image edge-strengthening gradient model. In many circumstances, accurate haze removal algorithms are needed. The de-fog feature works through a complex algorithm which first determines the fog density of the scene and then analyses the obscured image before applying contrast and sharpness adjustments to the video in real time. The fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. The output haze-free image is then reconstructed using the fusion methodology. In order to increase accuracy, an interpolation method is used in the output reconstruction. Promising retrieval performance is achieved, especially in particular examples.
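
A minimal sketch of the per-pixel, weight-map-driven fusion step follows. The two derived inputs and their weight maps are synthetic stand-ins for the contrast-enhanced and edge-strengthened images described above.

# Sketch of per-pixel fusion with normalized weight maps (multiple derived inputs, single output).
# Synthetic inputs and weights standing in for the enhanced and edge-strengthened images.
import numpy as np

rng = np.random.default_rng(5)
hazy = rng.uniform(0, 1, (64, 64))

input_contrast = np.clip(hazy * 1.4 - 0.1, 0, 1)                      # stand-in: contrast-enhanced image
input_edges = np.clip(hazy + rng.normal(0, 0.05, hazy.shape), 0, 1)   # stand-in: edge-strengthened image

# Per-pixel weight maps (e.g., local contrast or saliency); here simply derived from the inputs.
w1 = np.abs(input_contrast - input_contrast.mean()) + 1e-6
w2 = np.abs(input_edges - input_edges.mean()) + 1e-6

dehazed = (w1 * input_contrast + w2 * input_edges) / (w1 + w2)
print(dehazed.min(), dehazed.max())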

Keywords: single image, fusion, dehazing, multi-scale fusion, per-pixel, weight map

Procedia PDF Downloads 465
5944 Dual Biometrics Fusion Based Recognition System

Authors: Prakash, Vikash Kumar, Vinay Bansal, L. N. Das

Abstract:

Dual biometrics is a subpart of multimodal biometrics, which refers to the use of a variety of modalities, rather than just one, to identify and authenticate persons. By mixing several modalities we limit the risk of mistakes, and hackers have only a tiny possibility of collecting the information they need. Our goal is to collect the precise characteristics of the iris and palmprint, produce a fusion of both methodologies, and ensure that authentication is only successful when the biometrics match a particular user. After combining the different modalities, we created an effective strategy with a mean DI and EER of 2.41 and 5.21, respectively, and a biometric system based on it is proposed.

Keywords: multimodal, fusion, palmprint, Iris, EER, DI

Procedia PDF Downloads 147
5943 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kumar Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visual image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, image fusion algorithms based on the multi-scale transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion approaches, we observed several challenges with the popular image fusion methods: although the high computational cost and complex processing steps of such algorithms provide accurate fused results, they also make them hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capability. The methods presented in this paper therefore aim to offer good results with minimum time complexity.

Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform

Procedia PDF Downloads 115
5942 Theoretical Investigation of Proton-Bore Fusion in Hot Spots

Authors: Morteza Habibi

Abstract:

As an alternative to D-T fuel, one can consider advanced fuels like D-3He and p-11B, which have potential advantages concerning availability and/or environmental impact. Hot spots are micron-sized, magnetically self-contained sources observed in pinched plasma devices. In hot spots, the fusion power for 120 keV < Ti < 800 keV and 32 keV < Te < 129 keV exceeds the bremsstrahlung loss, and the ratio of fusion power to bremsstrahlung loss reaches 1.9. In this case, the gain factor for a typical 150 kJ pulsed generator as a hot spot source will be 7.8, which is considerable for a commercial pinched plasma device.

Keywords: P-B fuel, hot spot, bremsstrahlung loss, ion temperature

Procedia PDF Downloads 528