Search results for: convolution coding
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 672

642 Whole Exome Sequencing Data Analysis of Rare Diseases: Non-Coding Variants and Copy Number Variations

Authors: S. Fahiminiya, J. Nadaf, F. Rauch, L. Jerome-Majewska, J. Majewski

Abstract:

Background: Sequencing of the protein coding regions of the human genome (whole exome sequencing; WES) has demonstrated great success in the identification of causal mutations for several rare genetic disorders in humans. Generally, most WES studies have focused on rare variants in coding exons and splice sites, where missense substitutions lead to alteration of the protein product. Although focusing on this category of variants has revealed the mystery behind many inherited genetic diseases in recent years, a subset of cases has remained inconclusive. Here, we present the results of our WES studies where analyzing only rare variants in coding regions was not conclusive, but further investigation revealed the involvement of non-coding variants and copy number variations (CNVs) in the etiology of the diseases. Methods: Whole exome sequencing was performed using our standard protocols at Genome Quebec Innovation Center, Montreal, Canada. All bioinformatics analyses were done using an in-house WES pipeline. Results: To date, we have successfully identified several disease-causing mutations within gene coding regions (e.g., SCARF2: Van den Ende-Gupta syndrome and SNAP29: 22q11.2 deletion syndrome) by using WES. In addition, we showed that variants in non-coding regions and CNVs also have important value and should not be ignored and/or filtered out during the bioinformatics analysis of WES data. For instance, in patients with osteogenesis imperfecta type V and in patients with glucocorticoid deficiency, we identified variants in the 5'UTR, resulting in the production of longer or truncated, non-functional proteins. Furthermore, CNVs were identified as the main cause of disease in patients with metaphyseal dysplasia with maxillary hypoplasia and brachydactyly and in patients with osteogenesis imperfecta type VII. Conclusions: Our study highlights the importance of considering non-coding variants and CNVs during the interpretation of WES data, as they can be the only cause of the disease under investigation.

Keywords: whole exome sequencing data, non-coding variants, copy number variations, rare diseases

Procedia PDF Downloads 388
641 Performance Improvement of Cooperative Scheme in Wireless OFDM Systems

Authors: Ki-Ro Kim, Seung-Jun Yu, Hyoung-Kyu Song

Abstract:

Recently, wireless communication systems have been required to offer high quality and provide high bit rate data services. Researchers have studied various multiple antenna schemes to meet this demand. In practical applications, it is difficult to deploy multiple antennas because of limited device size and cost. Cooperative diversity techniques have been proposed to overcome these limitations, and cooperative communications have been widely investigated to improve the performance of wireless communication. Among diversity schemes, the space-time block code has been widely studied for cooperative communication systems. In this paper, we propose a new cooperative scheme using pre-coding and a space-time block code. The proposed cooperative scheme provides better error performance than a conventional cooperative scheme based on space-time block coding alone.
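
As background for the space-time block code building block, the following is a minimal NumPy sketch of Alamouti encoding and combining over a flat-fading channel. It illustrates the generic STBC mechanism, not the authors' pre-coded cooperative scheme; the symbols, channel gains, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK symbols sent over two time slots (Alamouti G2 code)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)

# Flat-fading gains of the two transmit branches (e.g., source and relay)
h1, h2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)

n = 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y1 = h1 * s1 + h2 * s2 + n[0]                     # slot 1 sends ( s1,  s2)
y2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n[1]  # slot 2 sends (-s2*, s1*)

g = abs(h1) ** 2 + abs(h2) ** 2                   # combined channel gain
s1_hat = (np.conj(h1) * y1 + h2 * np.conj(y2)) / g
s2_hat = (np.conj(h2) * y1 - h1 * np.conj(y2)) / g
print(np.round(s1_hat, 3), np.round(s2_hat, 3))   # close to s1, s2
```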

Keywords: cooperative communication, space-time block coding, pre-coding

Procedia PDF Downloads 329
640 Fast Prediction Unit Partition Decision and Accelerating the Algorithm Using CUDA for Intra and Inter Prediction of HEVC

Authors: Qiang Zhang, Chun Yuan

Abstract:

Since the PU (Prediction Unit) decision process is the most time-consuming part of the emerging HEVC (High Efficiency Video Coding) standard in intra and inter frame coding, this paper proposes a fast PU decision algorithm and speeds it up using CUDA (Compute Unified Device Architecture). In intra frame coding, the fast PU decision algorithm uses texture features to skip intra-frame prediction or to terminate the intra-frame prediction for smaller PU sizes. In inter frame coding of HEVC, the fast PU decision algorithm makes use of the similarity between a CU's two Nx2N-size PUs' motion vectors and the hierarchical structure of the CU (Coding Unit) partition to skip some PU partition modes, so as to reduce the number of motion estimation passes. The CUDA-accelerated algorithm builds on the fast PU decision algorithm and uses the GPU so that the motion search and the gradient computation can be performed in parallel. The proposed algorithm achieves up to 57% time saving compared to HM 10.0 with little rate-distortion loss (0.043 dB drop and 1.82% bitrate increase on average).
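
As a toy illustration of a texture-based early decision of this kind, consider the sketch below; the variance feature and the var_threshold parameter are assumptions for illustration, not the paper's actual features or thresholds.

```python
import numpy as np

def skip_smaller_pu(block: np.ndarray, var_threshold: float = 25.0) -> bool:
    """Toy texture test: if a CU block is smooth (low variance), skip
    evaluating smaller PU partitions and keep the large 2Nx2N PU.
    `var_threshold` is an illustrative parameter, not from the paper."""
    return float(np.var(block)) < var_threshold

cu = np.full((32, 32), 128.0)      # perfectly smooth 32x32 block
print(skip_smaller_pu(cu))         # True -> terminate early, keep 2Nx2N
```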

Keywords: HEVC, PU decision, inter prediction, intra prediction, CUDA, parallel

Procedia PDF Downloads 370
639 Classification of Land Cover Usage from Satellite Images Using Deep Learning Algorithms

Authors: Shaik Ayesha Fathima, Shaik Noor Jahan, Duvvada Rajeswara Rao

Abstract:

Earth's environment and its evolution can be seen through satellite images in near real-time. Through satellite imagery, remote sensing data provide crucial information that can be used for a variety of applications, including image fusion, change detection, land cover classification, agriculture, mining, disaster mitigation, and monitoring climate change. The objective of this project is to propose a method for classifying satellite images according to multiple predefined land cover classes. The proposed approach involves collecting data in image format, pre-processing it, feeding the processed data into the proposed algorithm, and analyzing the obtained result. Some of the algorithms used in satellite imagery classification are U-Net, Random Forest, DeepLabv3, CNN, ANN, ResNet, etc. In this project, we use the DeepLabv3 (atrous convolution) algorithm for land cover classification. The dataset used is the DeepGlobe land cover classification dataset. DeepLabv3 is a semantic segmentation system that uses atrous convolution to capture multi-scale context by adopting multiple atrous rates in cascade or in parallel to determine the scale of segments.
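
To make the atrous idea concrete, here is a minimal NumPy sketch of a 2-D atrous (dilated) convolution: a rate-r kernel samples the input with gaps of r-1 pixels, enlarging the receptive field without adding weights. The kernel and rates are illustrative; this is not DeepLabv3 itself.

```python
import numpy as np

def atrous_conv2d(image, kernel, rate=2):
    """'Valid' 2-D atrous convolution: the kernel taps are spaced
    `rate` pixels apart, so the receptive field grows with no new weights."""
    kh, kw = kernel.shape
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1   # effective extent
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + eh:rate, j:j + ew:rate]  # sample with holes
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.random.rand(16, 16)
k = np.ones((3, 3)) / 9.0
print(atrous_conv2d(img, k, rate=1).shape)  # (14, 14): ordinary 3x3 conv
print(atrous_conv2d(img, k, rate=2).shape)  # (12, 12): same 9 weights, 5x5 field
```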

Keywords: area calculation, atrous convolution, DeepGlobe land cover classification, DeepLabv3, land cover classification, ResNet-50

Procedia PDF Downloads 114
638 Using Deep Learning Real-Time Object Detection Convolution Neural Networks for Fast Fruit Recognition in the Tree

Authors: K. Bresilla, L. Manfrini, B. Morandi, A. Boini, G. Perulli, L. C. Grappadelli

Abstract:

Image/video processing for fruit in the tree using hard-coded feature extraction algorithms has shown high accuracy in recent years. While accurate, these approaches are computationally intensive even with high-end hardware and too slow for real-time systems. This paper details the use of deep convolution neural networks (CNNs), specifically an algorithm (YOLO - You Only Look Once) with 24+2 convolution layers. Using deep-learning techniques eliminated the need to hand-code specific features for particular fruit shapes, colors, and/or other attributes. This CNN was trained on more than 5000 images of apple and pear fruits on a 960-core GPU (Graphical Processing Unit). The test set showed an accuracy of 90%. After this, the trained model was transferred to an embedded device (Raspberry Pi gen. 3) with a camera for more portability. Based on the correlation between the number of fruits visible or detected in one frame and the real number of fruits on one tree, a model was created to accommodate this error rate. The processing and detection speed of the whole platform was higher than 40 frames per second, which is fast enough for any grasping/harvesting robotic arm or other real-time applications.
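
The correlation-based count correction can be illustrated with a simple linear fit; the detection and ground-truth counts below are invented for illustration, since the abstract does not give the actual model.

```python
import numpy as np

# Hypothetical data: fruits detected in one frame vs. counted on the tree
detected = np.array([42, 55, 60, 71, 80])
actual   = np.array([58, 76, 85, 98, 110])

slope, intercept = np.polyfit(detected, actual, 1)   # least-squares line
estimate = slope * 65 + intercept                    # correct a new detection
print(f"estimated fruits on tree: {estimate:.0f}")
```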

Keywords: artificial intelligence, computer vision, deep learning, fruit recognition, harvesting robot, precision agriculture

Procedia PDF Downloads 386
637 Unsupervised Image Generation Based on Sloan Digital Sky Survey with Deep Convolutional Generative Neural Networks

Authors: Guanghua Zhang, Fubao Wang, Weijun Duan

Abstract:

The convolution neural network (CNN) has attracted more and more attention in recent years, especially in the fields of computer vision and image classification. However, unsupervised learning with CNNs has received less attention than supervised learning. In this work, we use a powerful new tool, deep convolutional generative adversarial networks (DCGANs), to generate images from the Sloan Digital Sky Survey. Trained on various star and galaxy images, both the generator and the discriminator prove suitable for unsupervised learning. In this paper, we also conducted several experiments to choose the best values for the hyper-parameters, which helps to stabilize the training process and promises a good quality of the output.

Keywords: convolution neural network, discriminator, generator, unsupervised learning

Procedia PDF Downloads 236
636 Estimating Cyclone Intensity Using INSAT-3D IR Images Based on Convolution Neural Network Model

Authors: Divvela Vishnu Sai Kumar, Deepak Arora, Sheenu Rizvi

Abstract:

Forecasting a cyclone through satellite images consists of estimating the intensity of the cyclone and predicting it before the cyclone arrives. This research work can help people take safety measures before a cyclone comes. Predicting the intensity of a cyclone is very important to save lives and minimize the damage caused by cyclones, which are among the costliest natural disasters and cause extensive damage globally. The authors propose five different CNN (Convolutional Neural Network) models that estimate the intensity of cyclones from INSAT-3D IR images. Among the many techniques used to estimate intensity, the best model proposed by the authors estimates it with a root mean squared error (RMSE) of 10.02 kts.
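
For reference, the reported figure is a root mean squared error; a short NumPy sketch of the metric (the intensity arrays are invented, not the paper's data):

```python
import numpy as np

true_kts = np.array([35.0, 50.0, 65.0, 90.0])   # illustrative intensities
pred_kts = np.array([30.0, 58.0, 60.0, 101.0])

rmse = np.sqrt(np.mean((pred_kts - true_kts) ** 2))
print(f"RMSE = {rmse:.2f} kts")   # the paper reports 10.02 kts for its best CNN
```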

Keywords: estimating cyclone intensity, deep learning, convolution neural network, prediction models

Procedia PDF Downloads 83
635 Defect Detection for Nanofibrous Images with Deep Learning-Based Approaches

Authors: Gaokai Liu

Abstract:

Automatic defect detection for nanomaterial images is widely required in industrial scenarios. Deep learning approaches are considered the most effective solutions for the great majority of image-based tasks. In this paper, an edge guidance network for defect segmentation is proposed. First, the encoder path with multiple convolution and downsampling operations is applied to acquire shared features. Then two decoder paths, both connected to the last convolution layer of the encoder and supervised by the edge and segmentation labels, respectively, guide the whole training process. Meanwhile, the edge and encoder outputs from the same stage are concatenated with the corresponding segmentation features to further tune the segmentation result. Finally, the effectiveness of the proposed method is verified via experiments on open nanofibrous datasets.
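
A minimal PyTorch sketch of the described topology follows: a shared encoder feeding an edge decoder and a segmentation decoder, with the edge output concatenated into the segmentation head. Layer widths and depths here are assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

class EdgeGuidanceNet(nn.Module):
    """Sketch: shared encoder, edge decoder + segmentation decoder,
    with the edge map concatenated into the segmentation branch."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # conv + downsample
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.edge_dec = nn.Sequential(           # supervised by edge labels
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
        self.seg_dec = nn.Sequential(            # supervised by mask labels
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(16 + 1, 1, 1)  # fuse edge map into seg path

    def forward(self, x):
        feat = self.encoder(x)
        edge = self.edge_dec(feat)
        seg = self.seg_head(torch.cat([self.seg_dec(feat), edge], dim=1))
        return edge, seg

edge, seg = EdgeGuidanceNet()(torch.randn(1, 1, 64, 64))
print(edge.shape, seg.shape)   # both (1, 1, 64, 64)
```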

Keywords: deep learning, defect detection, image segmentation, nanomaterials

Procedia PDF Downloads 115
634 Cognitive STAP for Airborne Radar Based on Slow-Time Coding

Authors: Fanqiang Kong, Jindong Zhang, Daiyin Zhu

Abstract:

Space-time adaptive processing (STAP) techniques have been motivated as a key enabling technology for advanced airborne radar applications. In this paper, the notion of cognitive radar is extended to the STAP technique, and cognitive STAP is discussed. The principle for improving the signal-to-clutter-plus-noise ratio (SCNR) based on slow-time coding is given, and the corresponding optimization algorithm based on cyclic and power-like algorithms is presented. Numerical examples show the effectiveness of the proposed method.
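
The "power-like" ingredient can be illustrated with plain power iteration on a Hermitian objective matrix; the matrix M below is a random stand-in, not the paper's actual STAP objective built from clutter covariance and steering vectors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in Hermitian objective matrix (the real one would come from the
# clutter covariance and steering vectors; here it is random for illustration).
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
M = A @ A.conj().T                      # Hermitian positive semi-definite

s = rng.standard_normal(8) + 1j * rng.standard_normal(8)
for _ in range(100):                    # power-like iteration
    s = M @ s
    s = s / np.linalg.norm(s)           # renormalize each step

rayleigh = np.real(s.conj() @ M @ s)    # maximized quadratic form s^H M s
print(rayleigh, np.linalg.eigvalsh(M).max())  # should be very close
```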

Keywords: space-time adaptive processing (STAP), airborne radar, signal-to-clutter ratio, slow-time coding

Procedia PDF Downloads 243
633 Keyframe Extraction Using Face Quality Assessment and Convolution Neural Network

Authors: Rahma Abed, Sahbi Bahroun, Ezzeddine Zagrouba

Abstract:

Due to the huge amount of data in videos, extracting the relevant frames has become a necessity and an essential step prior to performing face recognition. In this context, we propose a method for extracting keyframes from videos based on face quality and deep learning for a face recognition task. This method has two steps. We start by generating face quality scores for each face image based on three face feature extractors: Gabor, LBP, and HOG. The second step consists of training a deep convolutional neural network in a supervised manner in order to select the frames that have the best face quality. The obtained results show the effectiveness of the proposed method compared to state-of-the-art methods.
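
All three descriptors from the first step are available in scikit-image; a sketch on a stand-in face crop (how the paper combines them into a single quality score is not shown here):

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from skimage.filters import gabor

face = np.random.rand(64, 64)                       # stand-in face crop

hog_vec = hog(face, orientations=9,
              pixels_per_cell=(8, 8), cells_per_block=(2, 2))
lbp_map = local_binary_pattern(face, P=8, R=1, method='uniform')
gabor_real, _ = gabor(face, frequency=0.6)          # real part of response

print(hog_vec.shape, lbp_map.shape, gabor_real.shape)
```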

Keywords: keyframe extraction, face quality assessment, face in video recognition, convolution neural network

Procedia PDF Downloads 193
632 Meteosat Second Generation Image Compression Based on the Radon Transform and Linear Predictive Coding: Comparison and Performance

Authors: Cherifi Mehdi, Lahdir Mourad, Ameur Soltane

Abstract:

Image compression is used to reduce the number of bits required to represent an image. The Meteosat Second Generation (MSG) satellite allows the acquisition of 12 image files every 15 minutes, which results in large database sizes. The transform selected for image compression should contribute to reducing the data representing the images. The Radon transform retrieves the Radon points that represent the sum of the pixels along a given angle for each direction. Linear predictive coding (LPC) with filtering provides a good decorrelation of the Radon points using a predictor constituted by the Symmetric Nearest Neighbor (SNN) filter coefficients, which introduces losses during decompression. Finally, run length coding (RLC) gives a high and fixed compression ratio regardless of the input image. In this paper, a novel image compression method based on the Radon transform and linear predictive coding (LPC) for MSG images is proposed. MSG image compression based on the Radon transform and LPC provides a good compromise between compression and quality of reconstruction. A comparison of our method with three others, two based on the DCT and one on bi-orthogonal DWT filtering, is carried out to show the robustness of the Radon transform against quantization noise and to evaluate the performance of our method. Evaluation criteria such as PSNR and the compression ratio show the efficiency of our compression method.
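
The final RLC stage is simple to illustrate in isolation; a minimal run-length encoder/decoder pair in Python, independent of the Radon and LPC stages:

```python
def rle_encode(seq):
    """Run-length coding: collapse runs into (value, count) pairs."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def rle_decode(pairs):
    return [v for v, n in pairs for _ in range(n)]

residuals = [0, 0, 0, 0, 5, 0, 0, -3, -3, 0, 0, 0]   # e.g. LPC residuals
code = rle_encode(residuals)
print(code)                       # [[0, 4], [5, 1], [0, 2], [-3, 2], [0, 3]]
assert rle_decode(code) == residuals
```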

Keywords: image compression, radon transform, linear predictive coding (LPC), run length coding (RLC), meteosat second generation (MSG)

Procedia PDF Downloads 388
631 A Qualitative Study to Analyze Clinical Coders’ Decision Making Process of Adverse Drug Event Admissions

Authors: Nisa Mohan

Abstract:

Clinical coding is a feasible method for estimating the national prevalence of adverse drug event (ADE) admissions. However, under-coding of ADE admissions is a limitation of this method. Whilst under-coding will impact the accurate estimation of the actual burden of ADEs, the feasibility of coded data for estimating adverse drug event admissions goes much further compared to other methods. Therefore, it is necessary to know the reasons for the under-coding in order to improve the clinical coding of ADE admissions. The ability to identify the reasons for the under-coding of ADE admissions rests on understanding the decision-making process of coding ADE admissions. Hence, the current study aimed to explore the decision-making process of clinical coders when coding cases of ADE admissions. Clinical coders at different levels of the coding job, such as trainee, intermediate, and advanced level coders, were purposively selected for the interviews. Thirteen clinical coders were recruited from two Auckland region District Health Board hospitals for the interview study. Semi-structured, one-on-one, face-to-face interviews using open-ended questions were conducted with the selected clinical coders. Interviews were about 20 to 30 minutes long and were audio-recorded with the approval of the participants. The interview data were analysed using a general inductive approach. The interviews with the clinical coders revealed that the coders have targets to meet, and they sometimes hesitate to adhere to the coding standards. Coders deviate from the standard coding processes to make a decision. Coders avoid contacting the doctors to clarify small doubts, such as ADEs and the names of medications, because of the delay in getting a reply. They prefer to do some research themselves or get help from their seniors and colleagues when making a decision, because this avoids a long wait for a reply from the doctors. Coders think of an ADE as a small thing. Lack of time for searching for information to confirm an ADE admission and inadequate communication with clinicians, along with coders' belief that an ADE is a small thing, may contribute to the under-coding of ADE admissions. These findings suggest that further work is needed on interventions to improve the clinical coding of ADE admissions. Providing education to coders about the importance of ADEs, educating clinicians about the importance of clear and confirmed medical record entries, availing pharmacists' services to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary about external causes of diseases may be useful for improving the clinical coding of ADE admissions. The findings of the research will help policymakers make informed decisions about the improvements. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and short-cuts of clinical coders. This country-specific research conducted in New Zealand may also benefit other countries by providing insight into the clinical coding of ADE admissions and will offer guidance about where to focus changes and improvement initiatives.

Keywords: adverse drug events, clinical coders, decision making, hospital admissions

Procedia PDF Downloads 91
630 Survivable IP over WDM Network Design Based on 1 ⊕ 1 Network Coding

Authors: Nihed Bahria El Asghar, Imen Jouili, Mounir Frikha

Abstract:

The inter-datacenter transport network is very demanding in terms of bandwidth and delay. The data transferred over such a network also has strict QoS requirements, mostly because a huge volume of data should be transported transparently with regard to the application user. To avoid data transfer failure, a backup path should be reserved, and no re-routing delay should be observed. A dedicated 1+1 protection is, however, not applicable in the inter-datacenter transport network because of the huge spare capacity it requires. In this context, we propose a survivable virtual network with minimal backup based on network coding (1 ⊕ 1) and solve it using a modified Dijkstra-based heuristic.
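
The 1 ⊕ 1 idea can be sketched at the byte level: two working flows share one protection flow carrying their XOR, so either single-path failure is recoverable without the full duplication of 1+1 protection. Payload contents below are illustrative.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

flow_a = b"datacenter-A payload"
flow_b = b"datacenter-B payload"

protection = xor_bytes(flow_a, flow_b)   # single shared backup flow (a XOR b)

# If the path carrying flow_a fails, the receiver rebuilds it:
recovered_a = xor_bytes(protection, flow_b)
assert recovered_a == flow_a
```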

Keywords: network coding, dedicated protection, spare capacity, inter-datacenter transport network

Procedia PDF Downloads 419
629 Human Posture Estimation Based on Multiple Viewpoints

Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo

Abstract:

This study aimed to address the problem of improving the confidence of key points by fusing multi-view information, thereby estimating human posture more accurately. We first obtained multi-view image information and then used the MvP algorithm to fuse it into a set of high-confidence human key points. We used these as the input to a Spatio-Temporal Graph Convolutional Network (ST-GCN), a deep learning model for processing spatio-temporal data that can effectively capture spatio-temporal relationships in video sequences. By using the MvP algorithm to fuse multi-view information and inputting the result into the spatio-temporal graph convolution model, this study provides an effective method to improve the accuracy of human posture estimation and provides strong support for further research and application in related fields.

Keywords: multi-view, pose estimation, ST-GCN, joint fusion

Procedia PDF Downloads 36
628 Performance Analysis and Comparison of Various 1-D and 2-D Prime Codes for OCDMA Systems

Authors: Gurjit Kaur, Shashank Johri, Arpit Mehrotra

Abstract:

In this paper, we have analyzed and compared the performance of various coding schemes. The basic 1D prime sequence codes are unique in only one dimension, i.e., time slots, whereas 2D coding techniques are distinguished not only by their time slots but also by their wavelengths. In this research, we have evaluated and compared the performance of 1D and 2D coding techniques constructed using the prime sequence coding pattern for an OCDMA system on a single platform. Results show that the 1D Extended Prime Code (EPC) can support more active users compared to other codes, but at the expense of a larger code length, which further increases the complexity of the code. The Modified Prime Code (MPC) supports fewer active users at λc=2 but has a shorter code length compared to the 1D prime code. The analysis shows that 2D prime codes support fewer active users than 1D codes, but they have a large code family and are the most secure codes compared to the others. The performance of all these codes is analyzed on the basis of the number of active users supported at a bit error rate (BER) of 10⁻⁹.
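
For reference, the basic 1-D prime sequence code construction can be generated in a few lines; the prime p = 5 is illustrative.

```python
import numpy as np

def prime_code(p: int) -> np.ndarray:
    """Basic 1-D prime sequence codes over GF(p): p codewords of
    length p**2 and weight p. Codeword i places one pulse per block,
    at chip position (i * j) mod p inside block j."""
    codes = np.zeros((p, p * p), dtype=int)
    for i in range(p):
        for j in range(p):
            codes[i, j * p + (i * j) % p] = 1
    return codes

C = prime_code(5)
print(C.shape)                       # (5, 25): 5 users, code length 25
# In-phase cross-correlation between distinct codewords stays at 1 here
print(max(int(C[a] @ C[b]) for a in range(5) for b in range(5) if a != b))
```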

Keywords: CDMA, OCDMA, BER, OOC, PC, EPC, MPC, 2-D PC/PC, λc, λa

Procedia PDF Downloads 479
627 Secure Network Coding-Based Named Data Network Mutual Anonymity Transfer Protocol

Authors: Tao Feng, Fei Xing, Ye Lu, Jun Li Fang

Abstract:

NDN is a kind of future Internet architecture. Because the NDN design introduces four privacy challenges, many research institutions have begun to study the privacy issues of named data networking (NDN). In this paper, we investigate privacy protection in view of the major NDN privacy issues and then put forward a more effective anonymous transfer policy for NDN. Firstly, based on mutual anonymity communication for MP2P networks, we propose an NDN mutual anonymity protocol. Secondly, we add an interest packet authentication mechanism to the protocol and encrypt the coding coefficients; the security of the protocol is improved in this way. Finally, we prove the security and anonymity of the proposed anonymous transfer protocol.

Keywords: NDN, mutual anonymity, anonymous routing, network coding, authentication mechanism

Procedia PDF Downloads 417
626 Edge Detection Using Multi-Agent System: Evaluation on Synthetic and Medical MR Images

Authors: A. Nachour, L. Ouzizi, Y. Aoura

Abstract:

Recent developments in multi-agent systems have brought a new research field to image processing. Several algorithms are used simultaneously and improved in different applications while new methods are investigated. This paper presents a new automatic method for edge detection using several agents and many different actions. The proposed multi-agent system is based on parallel agents that locally perceive their environment, that is to say, pixels and additional environmental information. This environment is built using Vector Field Convolution, which attracts free agents to the edges. Problems of partial or hidden edges and edge linking are solved through cooperation between agents. The presented method was implemented and evaluated using several examples of different synthetic and medical images. The obtained experimental results confirm the efficiency and accuracy of the detected edges.

Keywords: edge detection, medical MR images, multi-agent systems, vector field convolution

Procedia PDF Downloads 360
625 ICanny: CNN Modulation Recognition Algorithm

Authors: Jingpeng Gao, Xinrui Mao, Zhibin Deng

Abstract:

Aiming at the low recognition rate for composite signal modulation at low signal-to-noise ratio (SNR), this paper proposes a modulation recognition algorithm based on ICanny-CNN. Firstly, the radar signal is transformed into a time-frequency image by the Choi-Williams Distribution (CWD). Secondly, we propose an image processing algorithm using the guided filter and a threshold selection method, combined with hole filling and a mask operation. Finally, a shallow convolutional neural network (CNN) is combined with the ideas of depth-wise convolution (Dw Conv) and point-wise convolution (Pw Conv). The proposed CNN is designed to complete image classification and realize modulation recognition of radar signals. The simulation results show that the proposed algorithm can reach 90.83% accuracy at 0 dB and 71.52% at -8 dB. Therefore, the proposed algorithm has good classification and anti-noise performance in radar signal modulation recognition and other fields.
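
The Dw Conv and Pw Conv pair can be written directly in PyTorch; a sketch with illustrative channel counts, including the weight-count comparison against a standard convolution:

```python
import torch
import torch.nn as nn

c_in, c_out = 32, 64

# Depth-wise: one 3x3 filter per input channel (groups = c_in)
dw = nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in)
# Point-wise: 1x1 convolution mixes channels back to the desired width
pw = nn.Conv2d(c_in, c_out, kernel_size=1)

x = torch.randn(1, c_in, 64, 64)
y = pw(dw(x))
print(y.shape)                                    # (1, 64, 64, 64)

standard = c_in * c_out * 3 * 3                   # 18432 weights
separable = c_in * 3 * 3 + c_in * c_out           # 2336 weights
print(standard / separable)                       # roughly 7.9x fewer weights
```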

Keywords: modulation recognition, image processing, composite signal, improved Canny algorithm

Procedia PDF Downloads 162
624 Convolution Neural Network Based on Hypnogram of Sleep Stages to Predict Dosages and Types of Hypnotic Drugs for Insomnia

Authors: Chi Wu, Dean Wu, Wen-Te Liu, Cheng-Yu Tsai, Shin-Mei Hsu, Yin-Tzu Lin, Ru-Yin Yang

Abstract:

Background: Previous studies have compared the benefits and risks of receiving insomnia medication. However, the relationship between the hypnotic drugs used and the enhancement of sleep quality remains unclear. Objective: The aim of this study is to establish a prediction model for the hypnotic drug dosage used by insomnia subjects and to associate the relationship between sleep stage ratio changes and drug types. Methodologies: According to the American Academy of Sleep Medicine (AASM) guideline, sleep stages were classified and transformed into hypnograms via polysomnography (PSG) in a hospital in New Taipei City (Taiwan). Subjects diagnosed with insomnia who had not received hypnotic drug treatment were set as the comparison group. Conversely, the hypnotic drug dosage within the past three months was obtained from the clinical registration for each subject. Furthermore, the collected subjects were divided into two groups for training and testing. After training a convolutional neural network (CNN) to predict the types and dosages of hypnotics taken, the test group was used to evaluate the accuracy of classification. Results: We recruited 76 subjects in this study, who had undergone PSG so that hypnograms could be derived from their sleep stages. On the test group, the accuracy for dosage obtained from the confusion matrix of the CNN is 81.94%, and the accuracy for the type of hypnotic drug used is 74.22%. Moreover, subjects with a high ratio of wake stage were correctly classified as requiring medical treatment. Conclusion: A CNN with hypnograms could potentially be used for adjusting the dosage of hypnotic drugs and for pre-screening the types of hypnotic drugs subjects take.

Keywords: convolution neuron network, hypnotic drugs, insomnia, polysomnography

Procedia PDF Downloads 158
623 A Study on Using Network Coding for Packet Transmissions in Wireless Sensor Networks

Authors: Rei-Heng Cheng, Wen-Pinn Fang

Abstract:

A wireless sensor network (WSN) is composed of a large number of sensors and one or a few base stations, where the sensors are responsible for detecting specific event information, which is sent back to the base station(s). However, saving energy to extend the network lifetime is a problem that cannot be ignored in wireless sensor networks. Since the sensor network is used to monitor a region or specific events, how the information can be reliably sent back to the base station is surely important. The network coding technique is often used to enhance the reliability of network transmission. When a node needs to send out M data packets, it encodes these data with redundant data and sends out M + R packets in total. If the receiver can get any M packets out of these M + R packets, it can decode and recover the original M data packets. Transmitting redundant packets will certainly result in excess energy consumption. This paper explores the relationship between the quality of wireless transmission and the number of redundant packets. Hopefully, each sensor can overhear nearby transmissions, learn the wireless transmission quality around it, and dynamically determine the number of redundant packets used in network coding.
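
A minimal sketch of the M + R idea using network coding over GF(2): here the parity rows and the loss pattern are fixed so the example decodes deterministically, whereas in general any M received packets suffice only when their coefficient rows are linearly independent (e.g., with random linear coding, with high probability).

```python
import numpy as np

M, R = 4, 2                                      # data packets + redundancy
rng = np.random.default_rng(7)
data = rng.integers(0, 256, size=(M, 8), dtype=np.uint8)   # four 8-byte packets

# Encoding matrix over GF(2): systematic rows plus two XOR parity rows
G = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 1, 1],
              [1, 1, 1, 0]], dtype=np.uint8)

coded = np.zeros((M + R, 8), dtype=np.uint8)
for i in range(M + R):                           # coded[i] = XOR of selected data
    for j in range(M):
        if G[i, j]:
            coded[i] ^= data[j]

got = [0, 2, 4, 5]                               # packets 1 and 3 were lost
A, y = G[got].copy(), coded[got].copy()
for col in range(M):                             # Gauss-Jordan over GF(2)
    piv = next(r for r in range(col, M) if A[r, col])
    A[[col, piv]], y[[col, piv]] = A[[piv, col]], y[[piv, col]]
    for r in range(M):
        if r != col and A[r, col]:
            A[r] ^= A[col]
            y[r] ^= y[col]

assert np.array_equal(y, data)                   # all M originals recovered
```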

Keywords: energy consumption, network coding, transmission reliability, wireless sensor networks

Procedia PDF Downloads 362
622 Reliability of Clinical Coding in Accurately Estimating the Actual Prevalence of Adverse Drug Event Admissions

Authors: Nisa Mohan

Abstract:

Adverse drug event (ADE) related hospital admissions are common among older people. The first step in prevention is accurately estimating the prevalence of ADE admissions. Clinical coding is an efficient method to estimate the prevalence of ADE admissions. The objective of the study is to estimate the rate of under-coding of ADE admissions in older people in New Zealand and to explore how clinical coders decide whether or not to code an admission as an ADE. There has not been any research in New Zealand exploring these areas. This study was done using a mixed-methods approach. Two common and serious ADEs in older people, namely bleeding and hypoglycemia, were selected for the study. In study 1, eight hundred medical records of people aged 65 years and above who were admitted to hospital due to bleeding and hypoglycemia during the years 2015-2016 were selected for a quantitative retrospective medical records review. This selection was made to estimate the proportion of ADE-related bleeding and hypoglycemia admissions that are not coded as ADEs. These files were reviewed and recorded as to whether the admission was caused by an ADE. The hospital discharge data were reviewed to check whether all the ADE admissions identified in the records review were coded as ADEs, and the proportion of under-coding of ADE admissions was estimated. In study 2, thirteen clinical coders were selected for qualitative semi-structured interviews using a general inductive approach. Participants were selected purposively based on their experience in clinical coding. Interview questions were designed to investigate the reasons for the under-coding of ADE admissions. The records review study showed that 35% (CI 28%-44%) of the ADE-related bleeding admissions and 22% of the ADE-related hypoglycemia admissions were not coded as ADEs. Although the quality of clinical coding is high across New Zealand, a substantial proportion of ADE admissions were under-coded. This shows that clinical coding might underestimate the actual prevalence of ADE-related hospital admissions in New Zealand. The interviews with the clinical coders added that lack of time for searching for information to confirm an ADE admission and inadequate communication with clinicians, along with coders' belief that an ADE is a small thing, might be the potential reasons for the under-coding of ADE admissions. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and short-cuts of clinical coders. These results highlight that further work is needed on interventions to improve the clinical coding of ADE admissions, such as providing education to coders about the importance of ADEs, education to clinicians about the importance of clear and confirmed medical record entries, availing pharmacists' services to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary about external causes of diseases.

Keywords: adverse drug events, bleeding, clinical coders, clinical coding, hypoglycemia

Procedia PDF Downloads 106
621 Analysis of Cooperative Hybrid ARQ with Adaptive Modulation and Coding on a Correlated Fading Channel Environment

Authors: Ibrahim Ozkan

Abstract:

In this study, a cross-layer design that combines adaptive modulation and coding (AMC) and hybrid automatic repeat request (HARQ) techniques for a cooperative wireless network is investigated analytically. Previous analyses of such systems in the literature are confined to the case where the fading channel is independent at each retransmission, which can be unrealistic unless the channel varies very fast. On the other hand, temporal channel correlation can have a significant impact on the performance of HARQ systems. In this study, utilizing a Markov channel model that accounts for the temporal correlation, the performance of non-cooperative and cooperative networks is investigated in terms of packet loss rate and throughput metrics for the Chase combining HARQ strategy.
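
The temporal correlation enters through a two-state Markov (Gilbert-Elliott style) channel; a simulation sketch with illustrative transition and per-state loss probabilities:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-state Markov channel: state 0 = good, state 1 = bad
p_g2b, p_b2g = 0.05, 0.30        # illustrative transition probabilities
loss = {0: 0.01, 1: 0.50}        # per-state packet loss probability

state, losses, N = 0, 0, 100_000
for _ in range(N):
    u = rng.random()             # draw for the state transition
    state = (1 if u < p_g2b else 0) if state == 0 else (0 if u < p_b2g else 1)
    losses += rng.random() < loss[state]

# Stationary bad-state probability: p_g2b / (p_g2b + p_b2g) = 1/7
print(losses / N)                # close to (6/7)*0.01 + (1/7)*0.50 = 0.080
```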

Keywords: cooperative network, adaptive modulation and coding, hybrid ARQ, correlated fading

Procedia PDF Downloads 110
620 Motion Estimator Architecture with Optimized Number of Processing Elements for High Efficiency Video Coding

Authors: Seongsoo Lee

Abstract:

Motion estimation occupies the heaviest computation in HEVC (high efficiency video coding). Many fast algorithms, such as TZS (test zone search), have been proposed to reduce the computation. Still, the huge computation of motion estimation is a critical issue in the implementation of an HEVC video codec. In this paper, a motion estimator architecture with an optimized number of PEs (processing elements) is presented by exploiting early termination. It also reduces hardware size by exploiting parallel processing. The presented motion estimator architecture has 8 PEs, and it can efficiently perform TZS with very high utilization of the PEs.

Keywords: motion estimation, test zone search, high efficiency video coding, processing element, optimization

Procedia PDF Downloads 333
619 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally needs to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where different sizes of convolution kernels are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. We are thus allowed to use a large number of random filters at the cost of one scalar unknown per filter. The computational cost of the back-propagation procedure does not increase with larger filters, even though additional computation is required for the convolutions in the feed-forward procedure. The use of random kernels of varying sizes makes it possible to effectively analyze image features at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which a quantitative comparison is performed between the well-known CNN architectures and our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential in a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
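
A minimal PyTorch sketch of the core idea: frozen random filters whose only trainable parameter is one scalar weight per filter, initialized from the filters' standard deviation. Sizes, counts, and the initialization are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomKernelConv(nn.Module):
    """Fixed random filters of a given size; only one scalar per filter
    (initialized to the filter bank's standard deviation) is learned."""
    def __init__(self, c_in, n_filters, kernel_size):
        super().__init__()
        w = torch.randn(n_filters, c_in, kernel_size, kernel_size)
        self.register_buffer("weight", w)          # frozen, not a Parameter
        self.scales = nn.Parameter(torch.full((n_filters,), w.std().item()))
        self.pad = kernel_size // 2

    def forward(self, x):
        y = F.conv2d(x, self.weight, padding=self.pad)
        return y * self.scales.view(1, -1, 1, 1)    # one unknown per filter

# Mixed scales: a 3x3 and a 7x7 random bank side by side, concatenated
b3, b7 = RandomKernelConv(3, 16, 3), RandomKernelConv(3, 16, 7)
x = torch.randn(1, 3, 32, 32)
out = torch.cat([b3(x), b7(x)], dim=1)
print(out.shape, sum(p.numel() for p in b3.parameters()))  # (1,32,32,32), 16
```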

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 259
618 Gene Prediction in DNA Sequences Using an Ensemble Algorithm Based on Goertzel Algorithm and Anti-Notch Filter

Authors: Hamidreza Saberkari, Mousa Shamsi, Hossein Ahmadi, Saeed Vaali, MohammadHossein Sedaaghi

Abstract:

In recent years, using signal processing tools for accurate identification of protein coding regions has become a challenge in bioinformatics. Most genomic signal processing methods are based on the period-3 characteristic of the nucleotides in DNA strands, and consequently, spectral analysis is applied to the numerical sequences of DNA to find the location of periodic components. In this paper, a novel ensemble algorithm for gene selection in DNA sequences is presented, based on the combination of the Goertzel algorithm and an anti-notch filter (ANF). The proposed algorithm has many advantages compared to other conventional methods. Firstly, it identifies protein coding regions more accurately, owing to the Goertzel algorithm, which is tuned to the desired frequency. Secondly, a faster detection time is achieved. The proposed algorithm is applied to several genes, including genes available in the BG570 and HMR195 databases, and the results are compared to other methods using nucleotide-level evaluation criteria. Implementation results show the excellent performance of the proposed algorithm in identifying protein coding regions, specifically in the identification of small-scale gene areas.
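
The Goertzel recursion tuned to the period-3 frequency 2π/3 (DFT bin k = N/3) takes only a few lines; the numeric mapping and window length below are illustrative.

```python
import math

def goertzel_power(x, k, N):
    """Power of the length-N DFT bin k of sequence x via the Goertzel recursion."""
    coeff = 2.0 * math.cos(2.0 * math.pi * k / N)
    s_prev = s_prev2 = 0.0
    for sample in x[:N]:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# Period-3 detection: tune the filter to bin k = N/3 (frequency 2*pi/3)
N = 351
periodic = [1.0 if i % 3 == 0 else 0.0 for i in range(N)]   # strong period-3
aperiodic = [((i * 7919) % 13) / 13.0 for i in range(N)]    # no period-3 energy
print(goertzel_power(periodic, N // 3, N))    # large peak, (N/3)**2 here
print(goertzel_power(aperiodic, N // 3, N))   # comparatively tiny
```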

Keywords: protein coding regions, period-3, anti-notch filter, Goertzel algorithm

Procedia PDF Downloads 364
617 A Convolutional Recurrent Network Using Residual LSTM to Process the Downsampling Output for Monaural Speech Enhancement

Authors: Shibo Wei, Ting Jiang

Abstract:

Convolutional-recurrent neural networks (CRN) have achieved much success recently in the speech enhancement field. The common processing method is to use convolution layers to compress the feature space through multiple downsampling steps and then model the compressed features with an LSTM layer. Finally, the enhanced speech is obtained by deconvolution operations that integrate the global information of the speech sequence. However, the feature space compression process may cause a loss of information, so we propose to model the downsampling result of each step with a residual LSTM layer, then join it with the output of the deconvolution layer and feed them to the next deconvolution layer; in this way, we aim to integrate the global information of the speech sequence better. The experimental results show that the network model we introduce (RES-CRN) can achieve better performance than LSTM without residual connections and than simply stacking LSTM layers in the original CRN, in terms of scale-invariant signal-to-noise ratio (SI-SNR), speech quality (PESQ), and intelligibility (STOI).

Keywords: convolutional-recurrent neural networks, speech enhancement, residual LSTM, SI-SNR

Procedia PDF Downloads 169
616 Multimodal Convolutional Neural Network for Musical Instrument Recognition

Authors: Yagya Raj Pandeya, Joonwhoan Lee

Abstract:

The dynamic behavior of music and video makes it difficult for a computer system to evaluate musical instrument playing in a video. Any television or film video clip with music information is a rich source for analyzing musical instruments using modern machine learning technologies. In this research, we integrate the audio and video information sources using convolutional neural networks (CNNs) and pass the learned features through a recurrent neural network (RNN) to preserve the dynamic behaviors of audio and video. We use different pre-trained CNNs for music and video feature extraction and then fine-tune each model. The music network uses a 2D convolutional network, and the video network uses 3D convolution (C3D). Finally, we concatenate the music and video features while preserving the time-varying features. A long short-term memory (LSTM) network is used for long-term dynamic feature characterization, followed by late fusion with a generalized mean. The proposed network achieves better performance in recognizing musical instruments using the audio-video multimodal neural network.

Keywords: multimodal, 3D convolution, music-video feature extraction, generalized mean

Procedia PDF Downloads 186
615 An Improvement of ComiR Algorithm for MicroRNA Target Prediction by Exploiting Coding Region Sequences of mRNAs

Authors: Giorgio Bertolazzi, Panayiotis Benos, Michele Tumminello, Claudia Coronnello

Abstract:

MicroRNAs are small non-coding RNAs that post-transcriptionally regulate the expression levels of messenger RNAs. MicroRNA regulation activity depends on the recognition of binding sites located on mRNA molecules. ComiR (Combinatorial miRNA targeting) is a user-friendly web tool designed to predict the targets of a set of microRNAs, starting from their expression profile. ComiR incorporates miRNA expression in a thermodynamic binding model, and it associates each gene with the probability of being a target of a set of miRNAs. The ComiR algorithms were trained with information about binding sites in the 3'UTR region, using a reliable dataset containing the targets of endogenously expressed microRNAs in D. melanogaster S2 cells. This dataset was obtained by comparing the results from two different experimental approaches, i.e., inhibition and immunoprecipitation of the AGO1 protein; this protein is a component of the microRNA-induced silencing complex. In this work, we tested whether including coding region binding sites in the ComiR algorithm improves the performance of the tool in predicting microRNA targets. We focused the analysis on the D. melanogaster species and updated the ComiR underlying database with the currently available releases of mRNA and microRNA sequences. As a result, we find that the ComiR algorithm trained with information related to the coding regions is more efficient in predicting microRNA targets than the algorithm trained with 3'UTR information. On the other hand, we show that 3'UTR-based predictions can be seen as complementary to the coding-region-based predictions, which suggests that both predictions, from the 3'UTR and coding regions, should be considered in a comprehensive analysis. Furthermore, we observed that the lists of targets obtained by analyzing data from only one experimental approach, that is, inhibition or immunoprecipitation of AGO1, are not reliable enough to test the performance of our microRNA target prediction algorithm. Further analysis will be conducted to investigate the effectiveness of the tool with data from other species, provided that validated datasets, as obtained from the comparison of RISC protein inhibition and immunoprecipitation experiments, become available for the same samples. Finally, we propose to upgrade the existing ComiR web tool by including the coding-region-based trained model, available together with the 3'UTR-based one.

Keywords: AGO1, coding region, Drosophila melanogaster, microRNA target prediction

Procedia PDF Downloads 412
614 Lean Comic GAN (LC-GAN): A Light-Weight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed to Capture Two Dimensional Animated Filtered Still Shots Using Mobile Phone Camera and Edge Devices

Authors: Kaustav Mukherjee

Abstract:

In this paper, we propose a neural style transfer solution in which we have created a lightweight separable-convolution-kernel-based GAN architecture (SC-GAN), which will be very useful for designing filters for mobile phone cameras and also for edge devices, converting any image to a 2D animated comic style like that of HEMAN, SUPERMAN, or JUNGLE-BOOK. This will help 2D animation artists by relieving them of the need to create new characters from images of real-life people through endless hours of manual labour drawing each and every pose of a cartoon. It can even be used to create scenes from real-life images. This will greatly reduce the turnaround time needed to make 2D animated movies and decrease cost in terms of manpower and time. In addition, being extremely lightweight, it can be used as a camera filter capable of taking comic-style shots using a mobile phone camera or edge-device cameras like the Raspberry Pi 4, NVIDIA Jetson Nano, etc. Existing methods like CartoonGAN, with a model size close to 170 MB, are too heavyweight for mobile phones and edge devices because of their scarce resources. Compared to the current state of the art, our proposed method has a total model size of 31 MB, which clearly makes it ideal and ultra-efficient for designing camera filters on low-resource devices like mobile phones, tablets, and edge devices running an OS or RTOS. Owing to the use of high-resolution input and a bigger convolution kernel size, it produces richer-resolution comic-style pictures with 6 times fewer parameters, trained for just 25 extra epochs on a dataset of fewer than 1000 images, which breaks the myth that all GANs need a mammoth amount of data. Our network reduces the density of the GAN architecture by using depthwise separable convolution, which performs the convolution operation on each of the RGB channels separately; we then use a point-wise convolution to bring the network back to the required channel number using a 1-by-1 kernel. This reduces the number of parameters substantially and makes the network extremely lightweight and suitable for mobile phones and edge devices. The architecture presented in this paper makes use of parameterised batch normalization (Goodfellow et al., Deep Learning, "Optimization for Training Deep Models", page 320), which lets the network exploit the advantages of batch norm for easier training while maintaining non-linear feature capture through the learnable parameters.
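
The parameter saving behind the depthwise separable design can be checked with back-of-the-envelope arithmetic; the layer shape below is illustrative, not LC-GAN's actual configuration.

```python
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out            # standard convolution weights

def separable_params(k, c_in, c_out):
    depthwise = k * k * c_in               # one kxk filter per input channel
    pointwise = c_in * c_out               # 1x1 channel-mixing convolution
    return depthwise + pointwise

k, c_in, c_out = 7, 64, 128                # illustrative "bigger kernel" layer
std, sep = conv_params(k, c_in, c_out), separable_params(k, c_in, c_out)
print(std, sep, round(std / sep, 1))       # 401408 11328 35.4
```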

Keywords: comic stylisation from camera image using GAN, creating 2D animated movie style custom stickers from images, depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices, GAN architecture for 2D animated cartoonizing neural style, neural style transfer for edge, model distillation, perceptual loss

Procedia PDF Downloads 97
613 New Efficient Method for Coding Color Images

Authors: Walaa M. Abd-Elhafiez, Wajeb Gharibi

Abstract:

In this paper, a novel color image compression technique for efficient storage and delivery of data is proposed. The proposed compression technique starts with an RGB to YCbCr color transformation process. Secondly, the Canny edge detection method is used to classify the blocks into edge and non-edge blocks. Each color component (Y, Cb, and Cr) is compressed by a discrete cosine transform (DCT) process, then quantized and coded step by step using adaptive arithmetic coding. Our technique is concerned with the compression ratio, bits per pixel, and peak signal-to-noise ratio, and produces better results than JPEG and more recently published schemes (such as CBDCT-CABS and MHC). The provided experimental results illustrate that the proposed technique is efficient and feasible in terms of compression ratio, bits per pixel, and peak signal-to-noise ratio.
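
The DCT-quantize core of such a pipeline can be sketched on a single 8x8 block with SciPy; the uniform quantizer step is an illustrative stand-in, and the edge classification and adaptive arithmetic coding stages are omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(2)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # one 8x8 Y block

q_step = 24.0                                  # illustrative uniform quantizer
coeffs = dctn(block, norm='ortho')             # 2-D DCT of the block
quantized = np.round(coeffs / q_step)          # many high frequencies -> 0
recon = idctn(quantized * q_step, norm='ortho')

mse = np.mean((block - recon) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(int(np.count_nonzero(quantized)), f"PSNR = {psnr:.1f} dB")
```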

Keywords: image compression, color image, q-coder, quantization, edge-detection

Procedia PDF Downloads 306