Search results for: network coding signature
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5481

5421 A Quinary Coding and Matrix Structure Based Channel Hopping Algorithm for Blind Rendezvous in Cognitive Radio Networks

Authors: Qinglin Liu, Zhiyong Lin, Zongheng Wei, Jianfeng Wen, Congming Yi, Hai Liu

Abstract:

The multi-channel blind rendezvous problem in distributed cognitive radio networks (DCRNs) refers to how users in the network can hop to the same channel at the same time slot without any prior knowledge (i.e., each user is unaware of other users' information). The channel hopping (CH) technique is a typical solution to this blind rendezvous problem. In this paper, we propose a quinary coding and matrix structure-based CH algorithm called QCMS-CH. The QCMS-CH algorithm can guarantee the rendezvous of users using only one cognitive radio under asynchronous clocks (i.e., arbitrary time drift between the users), heterogeneous channels (i.e., the available channel sets of users are distinct), and symmetric roles (i.e., all users play the same role). The QCMS-CH algorithm first represents a randomly selected channel (denoted by R) as a fixed-length quaternary number. It then encodes the quaternary number into a quinary bootstrapping sequence, prefixed with "R00", according to a carefully designed quaternary-quinary coding table. Finally, it builds a CH matrix column by column according to the bootstrapping sequence and six different types of elaborately generated subsequences. A user accesses the CH matrix row by row and performs its channel hopping accordingly to attempt rendezvous with other users. We prove the correctness of QCMS-CH and derive an upper bound on its Maximum Time-to-Rendezvous (MTTR). Simulation results show that the QCMS-CH algorithm outperforms the state-of-the-art in terms of the MTTR and the Expected Time-to-Rendezvous (ETTR).
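
As a rough illustration of the bootstrapping step, the sketch below (Python) represents a channel index as a fixed-length quaternary number and maps it to a quinary sequence. The abstract does not give the actual quaternary-quinary coding table or the exact form of the "R00" prefix, so the table and prefix handling here are hypothetical stand-ins.

```python
def to_quaternary(channel, length):
    """Represent a channel index as a fixed-length list of base-4 digits."""
    digits = []
    for _ in range(length):
        digits.append(channel % 4)
        channel //= 4
    return digits[::-1]  # most significant digit first

# Hypothetical quaternary-to-quinary mapping (a stand-in for the paper's
# carefully designed coding table, which is not given in the abstract).
CODING_TABLE = {0: 1, 1: 2, 2: 3, 3: 4}

def bootstrap_sequence(channel, length=4):
    """Encode channel R as a quinary sequence prefixed with 'R00'."""
    prefix = [channel % 5, 0, 0]  # placeholder reading of the "R00" prefix
    return prefix + [CODING_TABLE[d] for d in to_quaternary(channel, length)]

print(bootstrap_sequence(11))  # channel R = 11 -> [1, 0, 0, 1, 1, 3, 4]
```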

Keywords: channel hopping, blind rendezvous, cognitive radio networks, quaternary-quinary coding

Procedia PDF Downloads 93
5420 Whole Exome Sequencing Data Analysis of Rare Diseases: Non-Coding Variants and Copy Number Variations

Authors: S. Fahiminiya, J. Nadaf, F. Rauch, L. Jerome-Majewska, J. Majewski

Abstract:

Background: Sequencing of the protein-coding regions of the human genome (whole exome sequencing; WES) has demonstrated great success in the identification of causal mutations for several rare genetic disorders in humans. Generally, most WES studies have focused on rare variants in coding exons and splicing sites, where missense substitutions lead to alteration of the protein product. Although focusing on this category of variants has revealed the mystery behind many inherited genetic diseases in recent years, a subset of them has remained inconclusive. Here, we present the results of our WES studies where analyzing only rare variants in coding regions was not conclusive, but further investigation revealed the involvement of non-coding variants and copy number variations (CNV) in the etiology of the diseases. Methods: Whole exome sequencing was performed using our standard protocols at Genome Quebec Innovation Center, Montreal, Canada. All bioinformatics analyses were done using an in-house WES pipeline. Results: To date, we have successfully identified several disease-causing mutations within gene coding regions (e.g., SCARF2: Van den Ende-Gupta syndrome and SNAP29: 22q11.2 deletion syndrome) by using WES. In addition, we showed that variants in non-coding regions and CNVs also have important value and should not be ignored and/or filtered out during bioinformatics analysis of WES data. For instance, in patients with osteogenesis imperfecta type V and in patients with glucocorticoid deficiency, we identified variants in the 5'UTR, resulting in the production of longer or truncated non-functional proteins. Furthermore, CNVs were identified as the main cause of the diseases in patients with metaphyseal dysplasia with maxillary hypoplasia and brachydactyly and in patients with osteogenesis imperfecta type VII. Conclusions: Our study highlights the importance of considering non-coding variants and CNVs during interpretation of WES data, as they can be the only cause of the disease under investigation.

Keywords: whole exome sequencing data, non-coding variants, copy number variations, rare diseases

Procedia PDF Downloads 419
5419 Performance Improvement of Cooperative Scheme in Wireless OFDM Systems

Authors: Ki-Ro Kim, Seung-Jun Yu, Hyoung-Kyu Song

Abstract:

Recently, wireless communication systems have been required to offer high quality and provide high bit rate data services. Researchers have studied various multiple antenna schemes to meet this demand. In practical applications, it is difficult to deploy multiple antennas because of limited size and cost. Cooperative diversity techniques have been proposed to overcome these limitations, and cooperative communications have been widely investigated to improve the performance of wireless communication. Among diversity schemes, the space-time block code has been widely studied for cooperative communication systems. In this paper, we propose a new cooperative scheme using pre-coding and space-time block coding. The proposed cooperative scheme provides better error performance than a conventional cooperative scheme based on space-time block coding.

Keywords: cooperative communication, space-time block coding, pre-coding

Procedia PDF Downloads 360
5418 Fast Prediction Unit Partition Decision and Accelerating the Algorithm Using CUDA for Intra and Inter Prediction of HEVC

Authors: Qiang Zhang, Chun Yuan

Abstract:

Since the PU (Prediction Unit) decision process is the most time-consuming part of the emerging HEVC (High Efficiency Video Coding) standard in intra- and inter-frame coding, this paper proposes a fast PU decision algorithm and speeds it up using CUDA (Compute Unified Device Architecture). In intra-frame coding, the fast PU decision algorithm uses texture features to skip intra-frame prediction or to terminate intra-frame prediction early for smaller PU sizes. In inter-frame coding of HEVC, the fast PU decision algorithm makes use of the similarity between the motion vectors of a CU's two Nx2N PUs and of the hierarchical structure of the CU (Coding Unit) partition to skip some PU partition modes, thereby reducing the number of motion estimations. The CUDA-accelerated algorithm builds on the fast PU decision algorithm and uses the GPU so that the motion search and the gradient computation can be performed in parallel. The proposed algorithm achieves up to 57% time saving compared to HM 10.0 with little rate-distortion loss (0.043 dB drop and 1.82% bitrate increase on average).
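
As a toy illustration of the texture-based early termination idea in the intra path, the sketch below skips the search over smaller PU sizes when a block is smooth. The actual texture features, thresholds, and CUDA kernels are not described in the abstract; the variance feature and cutoff here are hypothetical.

```python
import numpy as np

SMOOTH_THRESHOLD = 25.0  # hypothetical texture cutoff, not the paper's value

def intra_pu_sizes(block: np.ndarray) -> list:
    """Decide which intra PU sizes to evaluate for a square luma block.

    A low-variance (smooth) block is predicted well at the largest PU size,
    so the search over smaller PU partitions is terminated early."""
    if np.var(block.astype(np.float64)) < SMOOTH_THRESHOLD:
        return [block.shape[0]]        # keep only the full-size PU
    sizes, n = [], block.shape[0]
    while n >= 4:                      # HEVC intra PUs go down to 4x4
        sizes.append(n)
        n //= 2
    return sizes

flat = np.full((32, 32), 120) + np.random.randint(0, 2, (32, 32))
print(intra_pu_sizes(flat))  # -> [32]: smaller PU sizes are skipped
```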

Keywords: HEVC, PU decision, inter prediction, intra prediction, CUDA, parallel

Procedia PDF Downloads 399
5417 Peak Frequencies in the Collective Membrane Potential of a Hindmarsh-Rose Small-World Neural Network

Authors: Sun Zhe, Ruggero Micheletto

Abstract:

As discussed extensively in many studies, noise in neural networks has an important role in the functioning and time evolution of the system. The mechanism by which noise induces stochastic resonance, enhancing and influencing certain operations, has not been clarified, nor has the mechanism of information storage and coding. In the present research we study the role of noise, focusing especially on the frequency peaks in a three-variable Hindmarsh-Rose small-world network. We investigated the response of the network to external noise. We demonstrate that a variation of the signal-to-noise ratio of about 10 dB induces an increase in the membrane potential signal of about 15%, averaged over the whole network. We also considered the integral of the whole membrane potential as a paradigm of internal noise, the noise generated by the brain network itself. We showed that this internal noise is attenuated with the size of the network or with the number of random connections. By means of Fourier analysis we found that it has distinct frequency peaks; moreover, we showed that increasing the size of the network by introducing more neurons reduced the maximum frequencies generated by the network, whereas increasing the number of random connections (determined by the small-world probability p) led to a trend toward higher frequencies. This study may give clues on how networks utilize noise to alter the collective behaviour of the system in their operations.
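
The Fourier step described here amounts to locating the dominant peak in the spectrum of the network-averaged membrane potential. A minimal sketch, with a synthetic signal standing in for the simulated Hindmarsh-Rose output:

```python
import numpy as np

def peak_frequency(mean_potential: np.ndarray, dt: float) -> float:
    """Return the dominant frequency (Hz) of the network-averaged
    membrane potential via a discrete Fourier transform."""
    signal = mean_potential - mean_potential.mean()  # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=dt)
    return freqs[np.argmax(spectrum)]

# Toy stand-in for the simulated collective potential: 7 Hz rhythm plus noise.
dt = 1e-3
t = np.arange(0, 10, dt)
v_mean = np.sin(2 * np.pi * 7 * t) + 0.3 * np.random.randn(t.size)
print(peak_frequency(v_mean, dt))  # ~7.0
```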

Keywords: neural networks, stochastic processes, small-world networks, discrete Fourier analysis

Procedia PDF Downloads 292
5416 Keyword Network Analysis on the Research Trends of Life-Long Education for People with Disabilities in Korea

Authors: Jakyoung Kim, Sungwook Jang

Abstract:

The purpose of this study is to examine the research trends of life-long education for people with disabilities using a keyword network analysis. For this purpose, 151 papers were selected from 594 papers retrieved using keywords such as 'people with disabilities' and 'life-long education' in the Korean Education and Research Information Service. The keyword network was constructed by extracting and coding the keywords used in the titles of the selected papers. The frequency of the extracted keywords and their degree and betweenness centrality were analyzed within the keyword network. The results of the keyword network analysis are as follows. First, the main keywords that appeared frequently in the study of life-long education for people with disabilities were 'people with disabilities', 'life-long education', 'developmental disabilities', 'current situations', and 'development'. The research trends of life-long education for people with disabilities are focused on the current status of life-long education and on program development. Second, the keyword network analysis and visualization showed that the keywords with a high frequency of occurrence also generally have high degree centrality and betweenness centrality. In terms of the keyword network diagram, it was confirmed that research trends of life-long education for people with disabilities are centered on six prominent keywords. Based on these results, it was discussed that life-long education for people with disabilities in the future needs to expand its subjects and supporting areas, and that research needs to be further expanded into more detailed and specific areas.
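
For readers unfamiliar with the measures used, the sketch below computes degree and betweenness centrality on a toy keyword co-occurrence graph with the NetworkX library; the edges are illustrative and not the study's actual 151-paper network.

```python
import networkx as nx

# Toy co-occurrence edges between title keywords (illustrative only).
edges = [
    ("life-long education", "people with disabilities"),
    ("life-long education", "developmental disabilities"),
    ("life-long education", "current situations"),
    ("people with disabilities", "development"),
    ("people with disabilities", "current situations"),
]

G = nx.Graph(edges)
degree = nx.degree_centrality(G)            # share of nodes a keyword links to
betweenness = nx.betweenness_centrality(G)  # how often it bridges other keywords

for kw in G.nodes:
    print(f"{kw}: degree={degree[kw]:.2f}, betweenness={betweenness[kw]:.2f}")
```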

Keywords: life-long education, people with disabilities, research trends, keyword network analysis

Procedia PDF Downloads 338
5415 Combination of Intrusion Detection Systems and Honeypots

Authors: Majed Sanan, Mohammad Rammal, Wassim Rammal

Abstract:

Today, security is a major concern. Intrusion detection and prevention systems and honeypots can be used to mitigate attacks. Researchers have proposed many IDSs (intrusion detection systems) over time; some combine the features of two or more IDSs and are called hybrid intrusion detection systems. Most researchers combine the features of the signature-based and anomaly-based detection methodologies. For a signature-based IDS, if an attacker attacks slowly and in an organized way, the attack may go undetected, as signatures include factors based on the duration of events and the attacker's actions do not match them. Sometimes there is no updated signature for an unknown attack, or an attacker strikes while the database is being updated. Thus, signature-based IDSs fail to detect unknown attacks. Anomaly-based IDSs suffer from many false-positive readings. So there is a need to hybridize these IDSs so that they can overcome each other's shortcomings. In this paper we propose a new approach to intrusion detection that is more efficient than a traditional IDS. It combines the positive features of two different detection methodologies: honeypot technology and anomaly-based detection. We designed the architecture for the IDS in a packet tracer and then implemented it in real time. In our experiments, we first ran the honeypot and the anomaly-based IDS individually and then together, recording data throughout. Each technology has shortcomings on its own, but the resulting hybrid intrusion detection system (HIDS) is capable of overcoming those shortcomings with much enhanced performance, and it detects intrusions better than the existing IDSs.

Keywords: security, intrusion detection, intrusion prevention, honeypot, anomaly-based detection, signature-based detection, cloud computing, kfsensor

Procedia PDF Downloads 383
5414 Cognitive STAP for Airborne Radar Based on Slow-Time Coding

Authors: Fanqiang Kong, Jindong Zhang, Daiyin Zhu

Abstract:

Space-time adaptive processing (STAP) techniques have been promoted as a key enabling technology for advanced airborne radar applications. In this paper, the notion of cognitive radar is extended to the STAP technique, and cognitive STAP is discussed. The principle for improving the signal-to-clutter-plus-noise ratio (SCNR) based on slow-time coding is given, and the corresponding optimization algorithm based on cyclic and power-like algorithms is presented. Numerical examples show the effectiveness of the proposed method.

Keywords: space-time adaptive processing (STAP), airborne radar, signal-to-clutter ratio, slow-time coding

Procedia PDF Downloads 273
5413 Meteosat Second Generation Image Compression Based on the Radon Transform and Linear Predictive Coding: Comparison and Performance

Authors: Cherifi Mehdi, Lahdir Mourad, Ameur Soltane

Abstract:

Image compression is used to reduce the number of bits required to represent an image. The Meteosat Second Generation (MSG) satellite allows the acquisition of 12 image files every 15 minutes, which results in large database sizes. The transform selected for image compression should contribute to reducing the data representing the images. The Radon transform retrieves the Radon points, which represent the sums of the pixels along a given angle for each direction. Linear predictive coding (LPC) with filtering provides a good decorrelation of the Radon points using a predictor constituted by the symmetric nearest neighbor (SNN) filter coefficients, which results in losses during decompression. Finally, run length coding (RLC) gives a high and fixed compression ratio regardless of the input image. In this paper, a novel image compression method based on the Radon transform and linear predictive coding (LPC) for MSG images is proposed. MSG image compression based on the Radon transform and LPC provides a good compromise between compression and quality of reconstruction. A comparison of our method with three others, two based on the DCT and one on DWT bi-orthogonal filtering, is carried out to show the power of the Radon transform in its robustness against quantization noise and to evaluate the performance of our method. Evaluation criteria such as the PSNR and the compression ratio show the efficiency of our compression method.
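
A minimal sketch of the pipeline's first stages, assuming scikit-image for the Radon transform. A first-order difference stands in for the paper's SNN-filter-based predictor, and a simple run-length coder follows:

```python
import numpy as np
from skimage.transform import radon

def compress_sketch(image: np.ndarray):
    """Radon projections -> linear prediction -> run-length coding."""
    theta = np.arange(0.0, 180.0)                   # one projection per degree
    sinogram = radon(image, theta=theta, circle=False)
    points = np.rint(sinogram.ravel()).astype(int)  # the Radon points
    # First-order difference as a simple stand-in for the SNN-based predictor.
    residual = np.diff(points, prepend=points[0])
    # Run-length code the residual stream.
    runs, count = [], 1
    for prev, cur in zip(residual[:-1], residual[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((int(prev), count))
            count = 1
    runs.append((int(residual[-1]), count))
    return runs

image = np.zeros((64, 64))
image[24:40, 24:40] = 255.0  # toy stand-in for an MSG channel image
print(len(compress_sketch(image)), "runs")
```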

Keywords: image compression, radon transform, linear predictive coding (LPC), run length coding (RLC), meteosat second generation (MSG)

Procedia PDF Downloads 422
5412 A Qualitative Study to Analyze Clinical Coders’ Decision Making Process of Adverse Drug Event Admissions

Authors: Nisa Mohan

Abstract:

Clinical coding is a feasible method for estimating the national prevalence of adverse drug event (ADE) admissions. However, under-coding of ADE admissions is a limitation of this method. Whilst under-coding impacts the accurate estimation of the actual burden of ADEs, coded data remain far more feasible for estimating ADE admissions than the other methods. Therefore, it is necessary to know the reasons for the under-coding in order to improve the clinical coding of ADE admissions. The ability to identify the reasons for the under-coding of ADE admissions rests on understanding the decision-making process of coding ADE admissions. Hence, the current study aimed to explore the decision-making process of clinical coders when coding cases of ADE admissions. Clinical coders at different levels of the coding job, such as trainee, intermediate, and advanced level coders, were purposively selected for the interviews. Thirteen clinical coders were recruited from two Auckland region District Health Board hospitals for the interview study. Semi-structured, one-on-one, face-to-face interviews using open-ended questions were conducted with the selected clinical coders. Interviews were about 20 to 30 minutes long and were audio-recorded with the approval of the participants. The interview data were analysed using a general inductive approach. The interviews with the clinical coders revealed that the coders have targets to meet and sometimes hesitate to adhere to the coding standards. Coders deviate from the standard coding processes to make a decision. They avoid contacting the doctors to clarify small doubts, such as ADEs and the names of medications, because of the delay in getting a reply; they prefer to do some research themselves or seek help from their seniors and colleagues so that they can avoid a long wait for a reply from the doctors. Coders think of an ADE as a small thing. Lack of time for searching for information to confirm an ADE admission and inadequate communication with clinicians, along with coders' belief that an ADE is a small thing, may contribute to the under-coding of ADE admissions. These findings suggest that further work is needed on interventions to improve the clinical coding of ADE admissions. Providing education to coders about the importance of ADEs, educating clinicians about the importance of clear and confirmed medical record entries, availing pharmacists' services to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary about external causes of diseases may be useful for improving the clinical coding of ADE admissions. The findings of the research will help policymakers to make informed decisions about the improvements. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and short-cuts of the clinical coders. This country-specific research conducted in New Zealand may also benefit other countries by providing insight into the clinical coding of ADE admissions and will offer guidance about where to focus changes and improvement initiatives.

Keywords: adverse drug events, clinical coders, decision making, hospital admissions

Procedia PDF Downloads 120
5411 Detailed Observations on Numerically Invariant Signatures

Authors: Reza Aghayan

Abstract:

Numerically invariant signatures were introduced as a new paradigm of the invariant recognition for visual objects modulo a certain group of transformations. This paper shows that the current formulation suffers from noise and indeterminacy in the resulting joint group-signatures and applies the n-difference technique and the m-mean signature method to minimize their effects. In our experimental results of applying the proposed numerical scheme to generate joint group-invariant signatures, the sensitivity of some parameters such as regularity and mesh resolution used in the algorithm will also be examined. Finally, several interesting observations are made.

Keywords: Euclidean and affine geometry, differential invariant G-signature curves, numerically invariant joint G-signatures, object recognition, noise, indeterminacy

Procedia PDF Downloads 400
5410 Using Open Source Data and GIS Techniques to Overcome Data Deficiency and Accuracy Issues in the Construction and Validation of Transportation Network: Case of Kinshasa City

Authors: Christian Kapuku, Seung-Young Kho

Abstract:

An accurate representation of the transportation system serving the region is one of the important aspects of transportation modeling. Such representation often requires developing an abstract model of the system elements, which in turn requires a significant amount of data, surveys, and time. However, in some cases, such as in developing countries, data deficiencies and time and budget constraints do not always allow such an accurate representation, leaving room for assumptions that may negatively affect the quality of the analysis. With the emergence of open source Internet data, especially in mapping technologies, as well as advances in Geographic Information Systems, opportunities to tackle these issues have arisen. Therefore, the objective of this paper is to demonstrate such an application through a practical case: the development of the transportation network for the city of Kinshasa. GIS geo-referencing was used to construct the digitized map of Transportation Analysis Zones using available scanned images. Centroids were then dynamically placed at the center of activities using an activity density map. Next, the road network with its characteristics was built using OpenStreetMap data and other official road inventory data by intersecting their layers and cleaning up unnecessary links such as residential streets. The accuracy of the final network was then checked by comparing it with satellite images from Google and Bing. For validation, the final network was exported into Emme3 to check for potential network coding issues. Results show a high accuracy between the built network and the satellite images, which can mostly be attributed to the use of open source data.
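
A sketch of how the OpenStreetMap step might look with the OSMnx package (an assumption; the authors describe a GIS layer-intersection workflow rather than a specific library). Residential links are dropped as in the paper's cleanup step:

```python
import osmnx as ox

# Build the drivable street network for Kinshasa from OpenStreetMap.
G = ox.graph_from_place("Kinshasa, Democratic Republic of the Congo",
                        network_type="drive")

# Clean up unnecessary links such as residential streets.
to_drop = [
    (u, v, k)
    for u, v, k, data in G.edges(keys=True, data=True)
    if "residential" in str(data.get("highway"))
]
G.remove_edges_from(to_drop)
print(f"{len(G.nodes)} nodes, {len(G.edges)} links after cleaning")
```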

Keywords: geographic information system (GIS), network construction, transportation database, open source data

Procedia PDF Downloads 168
5409 Design of an Improved Distributed Framework for Intrusion Detection System Based on Artificial Immune System and Neural Network

Authors: Yulin Rao, Zhixuan Li, Burra Venkata Durga Kumar

Abstract:

Intrusion detection refers to monitoring the actions of internal and external intruders on the system and detecting the behaviours that violate security policies in real-time. In intrusion detection, there has been much discussion about the application of neural network technology and artificial immune system (AIS). However, many solutions use static methods (signature-based and stateful protocol analysis) or centralized intrusion detection systems (CIDS), which are unsuitable for real-time intrusion detection systems that need to process large amounts of data and detect unknown intrusions. This article proposes a framework for a distributed intrusion detection system (DIDS) with multi-agents based on the concept of AIS and neural network technology to detect anomalies and intrusions. In this framework, multiple agents are assigned to each host and work together, improving the system's detection efficiency and robustness. The trainer agent in the central server of the framework uses the artificial neural network (ANN) rather than the negative selection algorithm of AIS to generate mature detectors. Mature detectors can distinguish between self-files and non-self-files after learning. Our analyzer agents use genetic algorithms to generate memory cell detectors. This kind of detector will effectively reduce false positive and false negative errors and act quickly on known intrusions.

Keywords: artificial immune system, distributed artificial intelligence, multi-agent, intrusion detection system, neural network

Procedia PDF Downloads 109
5408 Network Security Attacks and Defences

Authors: Ranbir Singh, Deepinder Kaur

Abstract:

Network security is an important aspect in every field, such as government offices, educational institutes, and business organizations. Network security consists of the policies adopted to prevent and monitor unauthorized access, misuse, modification, or denial of a computer network. Network security is a very complicated subject and is dealt with only by well-trained and experienced people. However, as more and more people become wired, an increasing number of people need to understand the basics of security in a networked world. The history of network security includes an introduction to TCP/IP and internetworking. Network security starts with authentication, commonly with a username and a password. In this paper, we study various types of attacks on network security and how to handle or prevent them.

Keywords: network security, attacks, denial, authenticating

Procedia PDF Downloads 405
5407 Performance Analysis and Comparison of Various 1-D and 2-D Prime Codes for OCDMA Systems

Authors: Gurjit Kaur, Shashank Johri, Arpit Mehrotra

Abstract:

In this paper we analyze and compare the performance of various coding schemes. The basic 1D prime sequence codes are unique in only one dimension, i.e., time slots, whereas 2D coding techniques are unique not only in their time slots but also in their wavelengths. In this research we have evaluated and compared the performance of 1D and 2D coding techniques constructed using the prime sequence coding pattern for OCDMA systems on a single platform. Results show that the 1D Extended Prime Code (EPC) can support a larger number of active users compared to other codes, but at the expense of a larger code length, which further increases the complexity of the code. The Modified Prime Code (MPC) supports a smaller number of active users at λc=2 but has a shorter code length compared to the 1D prime code. The analysis shows that 2D prime codes support fewer active users than 1D codes but have a large code family and are the most secure codes compared to the others. The performance of all these codes is analyzed on the basis of the number of active users supported at a bit error rate (BER) of 10⁻⁹.
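
For reference, the basic 1D prime sequence construction that the compared codes build on: for a prime p, codeword i places one pulse per group of p time slots, at position (i·j mod p) in group j, giving p codewords of length p² and weight p. A short sketch:

```python
def prime_code(p: int) -> list:
    """Construct the basic 1D prime sequence codes over GF(p).

    Each of the p codewords has length p*p and weight p: codeword i places
    a single pulse in time slot (i*j mod p) of each of the p groups j."""
    codes = []
    for i in range(p):
        word = [0] * (p * p)
        for j in range(p):
            word[j * p + (i * j) % p] = 1
        codes.append(word)
    return codes

for word in prime_code(5):
    print("".join(map(str, word)))  # 5 codewords, length 25, weight 5
```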

Keywords: CDMA, OCDMA, BER, OOC, PC, EPC, MPC, 2-D PC/PC, λc, λa

Procedia PDF Downloads 510
5406 An Earth Mover’s Distance Algorithm Based DDoS Detection Mechanism in SDN

Authors: Yang Zhou, Kangfeng Zheng, Wei Ni, Ren Ping Liu

Abstract:

Software-defined networking (SDN) provides a solution for a scalable network framework with decoupled control and data planes. However, this architecture also invites a particular distributed denial-of-service (DDoS) attack that can affect or even overwhelm the SDN network. The DDoS attack detection problem has to date been mostly researched as an entropy comparison problem. However, this formulation does not fully utilize SDN, and the results are not accurate. In this paper, we propose a DDoS attack detection method that interprets DDoS detection as a signature matching problem and is formulated as an Earth Mover's Distance (EMD) model. Considering feasibility and accuracy, we further propose to define the cost function of the EMD to be a generalized Kullback-Leibler divergence. Simulation results show that our proposed method can detect DDoS attacks by comparing EMD values with the ones computed in the case without attacks. Moreover, our method can significantly increase the true positive rate of detection.
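
A toy sketch of the detection rule: compare the current traffic feature distribution against a no-attack baseline and raise an alarm when the distance exceeds a threshold. SciPy's one-dimensional Wasserstein distance is used here as a stand-in for the paper's EMD with a generalized Kullback-Leibler cost; the features and threshold are hypothetical.

```python
import numpy as np
from scipy.stats import wasserstein_distance

THRESHOLD = 0.15  # hypothetical; tuned from attack-free traffic in practice

def ddos_alarm(baseline_counts, current_counts) -> bool:
    """Flag an attack when the flow-count distribution drifts too far."""
    base = np.asarray(baseline_counts, dtype=float)
    cur = np.asarray(current_counts, dtype=float)
    slots = np.arange(base.size)  # e.g. destination-address buckets
    emd = wasserstein_distance(slots, slots,
                               base / base.sum(), cur / cur.sum())
    return emd > THRESHOLD

normal = [52, 48, 50, 51, 49]      # roughly uniform flows
attack = [210, 12, 11, 9, 8]       # traffic concentrating on one victim
print(ddos_alarm(normal, normal))  # False
print(ddos_alarm(normal, attack))  # True
```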

Keywords: DDoS detection, EMD, relative entropy, SDN

Procedia PDF Downloads 340
5405 Reliability of Clinical Coding in Accurately Estimating the Actual Prevalence of Adverse Drug Event Admissions

Authors: Nisa Mohan

Abstract:

Adverse drug event (ADE) related hospital admissions are common among older people. The first step in prevention is accurately estimating the prevalence of ADE admissions. Clinical coding is an efficient method to estimate the prevalence of ADE admissions. The objectives of the study are to estimate the rate of under-coding of ADE admissions in older people in New Zealand and to explore how clinical coders decide whether or not to code an admission as an ADE. There has not been any research in New Zealand to explore these areas. This study was done using a mixed-methods approach. Two common and serious ADEs in older people, namely bleeding and hypoglycemia, were selected for the study. In study 1, eight hundred medical records of people aged 65 years and above who were admitted to hospital due to bleeding or hypoglycemia during the years 2015-2016 were selected for a quantitative retrospective medical records review. This selection was made to estimate the proportion of ADE-related bleeding and hypoglycemia admissions that are not coded as ADEs. These files were reviewed and recorded as to whether the admission was caused by an ADE. The hospital discharge data were reviewed to check whether all the ADE admissions identified in the records review were coded as ADEs, and the proportion of under-coding of ADE admissions was estimated. In study 2, thirteen clinical coders were selected for qualitative semi-structured interviews using a general inductive approach. Participants were selected purposively based on their experience in clinical coding. Interview questions were designed to investigate the reasons for the under-coding of ADE admissions. The records review study showed that 35% (CI 28%-44%) of the ADE-related bleeding admissions and 22% of the ADE-related hypoglycemia admissions were not coded as ADEs. Although the quality of clinical coding is high across New Zealand, a substantial proportion of ADE admissions were under-coded. This shows that clinical coding might underestimate the actual prevalence of ADE-related hospital admissions in New Zealand. The interviews with the clinical coders added that a lack of time for searching for information to confirm an ADE admission and inadequate communication with clinicians, along with coders' belief that an ADE is a small thing, might be the potential reasons for the under-coding of ADE admissions. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and short-cuts of the clinical coders. These results highlight that further work is needed on interventions to improve the clinical coding of ADE admissions, such as providing education to coders about the importance of ADEs, educating clinicians about the importance of clear and confirmed medical record entries, availing pharmacists' services to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary about external causes of diseases.

Keywords: adverse drug events, bleeding, clinical coders, clinical coding, hypoglycemia

Procedia PDF Downloads 131
5404 The Family Resemblance in the Handwriting of Painters: Jacek and Rafał Malczewski’s Case

Authors: Olivia Rybak-Karkosz

Abstract:

This paper aims to present the results of scientific research on family resemblance in the handwriting of painters. This problem is known in handwriting analysis, but it has never been a research subject in the scope of painters' signatures on works of art. For this research, the author chose Jacek and Rafał Malczewski (father and son), as many of their paintings are in museums and most of them are signed. The aim was to create a catalogue of traits shared by the handwriting of both artists. Such data could be helpful for an expert's opinion in the decision-making process to establish whether a signature is authentic and, if so, whether it was made by the artist whose signature is being analysed rather than by the other family member. There are known examples of relatives of artists who signed their works, many of whom were artists themselves. For instance, Andrzej Wróblewski's mother, Krystyna, was a printmaker; to save his legacy, she signed many of her son's works after his death using his name. The research methodology consisted of compiling representative samples of the signatures of both artists, collected in selected Polish museums. A catalogue of traits was then created using a forensic handwriting graphic-comparative method (graphic method). The paper concludes that such a catalogue could be one of the elements of research in an expert's analysis of the authenticity of signatures on paintings.

Keywords: artist’s signatures, authenticity of an artwork, forensic handwriting analysis, graphic-comparative method

Procedia PDF Downloads 84
5403 Using Signature Assignments and Rubrics in Assessing Institutional Learning Outcomes and Student Learning

Authors: Leigh Ann Wilson, Melanie Borrego

Abstract:

The purpose of institutional learning outcomes (ILOs) is to assess what students across the university know and what they do not. The issue is gathering this information in a systematic and usable way. This presentation will explain how one institution has engineered this process for both student success and maximum faculty input into curriculum and course design. At Brandman University, there are three levels of learning outcomes: course, program, and institutional. Institutional learning outcomes (ILOs) are mapped to specific courses. Faculty course developers write the signature assignments (SAs) in alignment with the institutional learning outcomes for each course. These SAs use a specific rubric that is applied consistently by every section and every instructor. Each year, the 12-member General Education Team (GET), as a part of its work, conducts the calibration and assessment of the university-wide SAs and the related rubrics for one or two of the five ILOs. GET members, who are senior faculty and administrators representing each of the university's schools, lead the calibration meetings. Specifically, calibration is a process designed to ensure the accuracy and reliability of evaluating signature assignments by working with peer faculty to interpret rubrics and compare scoring. These calibration meetings include the full-time and adjunct faculty members who teach the course, to ensure consensus on the application of the rubric. Each calibration session is chaired by a GET representative as well as the course custodian/contact where the ILO signature assignment resides. The overall calibration process GET follows includes multiple steps, such as contacting and inviting relevant faculty members to participate; organizing and hosting calibration sessions; and reviewing and discussing at least 10 samples of student work from class sections during the previous academic year for each applicable signature assignment. The commitment for calibration teams consists of attending two virtual meetings lasting up to three hours each. The first meeting focuses on interpreting the rubric, and the second meeting involves comparing scores for sample work and sharing feedback about the rubric and assignment. Participants are expected to follow all directions provided, participate actively, and respond to scheduling requests and other emails within 72 hours. The virtual meetings are recorded for future institutional use. Adjunct faculty are paid a small stipend after participating in both calibration meetings. Full-time faculty can use this work on their annual faculty report for "internal service" credit.

Keywords: assessment, assurance of learning, course design, institutional learning outcomes, rubrics, signature assignments

Procedia PDF Downloads 280
5402 TRNG Based Key Generation for Certificateless Signcryption

Authors: S. Balaji, R. Sujatha, M. Ramakrishnan

Abstract:

Signcryption is a cryptographic primitive that fulfills both the functions of digital signature and public key encryption simultaneously, at a lower cost than the traditional signature-then-encryption approach. In this paper, we propose a novel mouse-movement-based key generation technique to generate secret keys that are secure against both outsider and insider attacks. The tag key encapsulation mechanism (Tag-KEM) process is implemented using a true random number generator (TRNG). This TRNG-based key is used for data encryption in the data encapsulation mechanism (DEM). We compare the statistical reports of the proposed system with previous methods that implement the Tag-KEM based on a pseudo-random number generator.
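
A minimal sketch of the mouse-movement idea: raw (x, y, timestamp) samples are hashed into key material. This is an assumption-level illustration only; a real TRNG also needs entropy estimation and health tests, which are omitted here.

```python
import hashlib
import struct

def key_from_mouse(samples) -> bytes:
    """Derive 256-bit key material by hashing (x, y, timestamp) samples.

    Entropy estimation and conditioning, which a real TRNG requires,
    are omitted from this sketch."""
    h = hashlib.sha256()
    for x, y, t in samples:
        h.update(struct.pack("<iid", x, y, t))
    return h.digest()

samples = [(102, 340, 0.001), (105, 338, 0.013), (111, 331, 0.027)]
print(key_from_mouse(samples).hex())  # hypothetical Tag-KEM secret key
```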

Keywords: pseudo random number generator, signcryption, true random number generator, node deployment

Procedia PDF Downloads 343
5401 Enhancing the Network Security with Gray Code

Authors: Thomas Adi Purnomo Sidhi

Abstract:

Nowadays, the network is an essential need in almost every part of human daily activities. People can now seamlessly connect to others through the Internet. With advanced technology, our personal data can now be accessed more easily. One of the many components we are concerned with in delivering the best network is security. This paper proposes a method that provides more options for security. This research aims to improve network security by focusing on the physical layer, which is the first layer of the OSI model. The layer consists of the basic networking hardware transmission technologies of a network. Using an observation method, the research produces a schematic design for enhancing network security through a gray code converter.
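
The gray code converter at the heart of the proposal is the classic reflected binary code; a short sketch of both directions:

```python
def binary_to_gray(n: int) -> int:
    """Gray code of n: consecutive values differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the conversion by folding the higher bits back down."""
    n = g
    g >>= 1
    while g:
        n ^= g
        g >>= 1
    return n

for i in range(8):  # round-trip check over 3-bit values
    g = binary_to_gray(i)
    assert gray_to_binary(g) == i
    print(f"{i:03b} -> {g:03b}")
```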

Keywords: network, network security, gray code, physical layer

Procedia PDF Downloads 506
5400 Motion Estimator Architecture with Optimized Number of Processing Elements for High Efficiency Video Coding

Authors: Seongsoo Lee

Abstract:

Motion estimation occupies the heaviest computation in HEVC (high efficiency video coding). Many fast algorithms, such as TZS (test zone search), have been proposed to reduce the computation. Still, the huge computation of motion estimation is a critical issue in the implementation of an HEVC video codec. In this paper, a motion estimator architecture with an optimized number of PEs (processing elements) is presented that exploits early termination. It also reduces hardware size by exploiting parallel processing. The presented motion estimator architecture has 8 PEs and can efficiently perform TZS with very high utilization of the PEs.

Keywords: motion estimation, test zone search, high efficiency video coding, processing element, optimization

Procedia PDF Downloads 365
5399 Network Functions Virtualization-Based Virtual Routing Function Deployment under Network Delay Constraints

Authors: Kenichiro Hida, Shin-Ichi Kuribayashi

Abstract:

An NFV-based network implements a variety of network functions in software on general-purpose servers, which allows the network operator to select any capabilities and locations of network functions without physical constraints. In this paper, we evaluate the influence of the maximum tolerable network delay on the virtual routing function deployment guidelines which the authors proposed previously. Our evaluation results reveal the following: (1) the more severe the maximum tolerable network delay condition becomes, the more the number of areas where the route selection function is installed increases and the higher the total network cost becomes; (2) the higher the routing function cost relative to the circuit bandwidth cost, the larger the increase ratio of the total network cost under the maximum tolerable network delay condition.

Keywords: NFV (Network Functions Virtualization), resource allocation, virtual routing function, minimum total network cost

Procedia PDF Downloads 247
5398 Trabecular Texture Analysis Using Fractal Metrics for Bone Fragility Assessment

Authors: Khaled Harrar, Rachid Jennane

Abstract:

The purpose of this study is the discrimination of 28 postmenopausal women with osteoporotic femoral fractures from an age-matched control group of 28 women using texture analysis based on fractals. Two pre-processing approaches are applied to the radiographic images; these techniques are compared to motivate the choice of the pre-processing method. Furthermore, the values of the fractal dimension are compared to those of the fractal signature in terms of the classification of the two populations. In a second analysis, the BMD measure at the proximal femur was compared to the fractal analysis; the latter, a non-invasive technique, allowed better discrimination. The results confirm that fractal analysis of texture on calcaneus radiographs is able to discriminate osteoporotic patients with femoral fracture from controls. This discrimination was efficient compared to that obtained by BMD alone, and it was also present when comparing subgroups with overlapping values of BMD.
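
The fractal dimension estimate can be illustrated with a basic box-counting sketch on a binarized texture patch. The paper's fractal signature tracks how such an estimate varies with scale; the implementation below is a generic stand-in, not the authors' exact estimator.

```python
import numpy as np

def box_counting_dimension(binary_texture: np.ndarray) -> float:
    """Estimate the fractal dimension of a square binarized texture patch
    by counting occupied boxes at dyadic scales and fitting
    log(count) against log(1/box_size)."""
    n = binary_texture.shape[0]
    sizes = [2 ** k for k in range(1, int(np.log2(n)))]
    counts = []
    for s in sizes:
        trimmed = binary_texture[: n - n % s, : n - n % s]
        boxes = trimmed.reshape(n // s, s, -1, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Toy patch standing in for a thresholded calcaneus radiograph region.
rng = np.random.default_rng(0)
patch = rng.random((128, 128)) > 0.5
print(box_counting_dimension(patch))  # dense random noise -> close to 2.0
```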

Keywords: osteoporosis, fractal dimension, fractal signature, bone mineral density

Procedia PDF Downloads 425
5397 Latest Generation Conducted Electrical Weapon Dart Design: Signature Marking and Removal for the Emergency Medicine Professional

Authors: J. D. Ho, D. M. Dawes, B. Driver

Abstract:

Introduction: TASER Conducted Electrical Weapons (CEWs) are the dominant CEWs in use and have served in modern police and military operations since the late 1990s as a form of non-lethal weaponry. The third generation of CEWs has recently been introduced and is known as the TASER 7. This new CEW will replace current CEW technology and has a new dart design that is important for emergency medical professionals to be familiar with, because it requires a different method of removal and will leave a different marking pattern in human tissue than they may have previously encountered. Features of this new dart design include: higher velocity impact, larger impact surface area, break-away dart body segment, dual back-barb retention, and a newly designed removal process. As the TASER 7 begins to be deployed by police and military personnel, these new features make it imperative that emergency medical professionals become familiar with the signature markings that this new dart design will make on human tissue and with how to remove the darts. Methods: Multiple observational studies using high-speed photography were used to record the impact patterns of the new dart design on fresh tissue and the newly recommended dart removal process. Both animal and human subjects were used to test this dart design prior to production release. Results: Data presented will include a dart design overview, flight pattern accuracy, impact analysis, and a dart removal example. Tissue photographs will be presented to demonstrate examples of the signature TASER 7 dart markings that emergency medical professionals can expect to see. Conclusion: This work will provide the reader with an understanding of this newest generation of CEW dart design, its key features, its signature marking pattern, and a recommendation of how to remove it from human tissue.

Keywords: TASER 7, conducted electrical weapon, dart mark, dart removal

Procedia PDF Downloads 155
5396 Gene Prediction in DNA Sequences Using an Ensemble Algorithm Based on Goertzel Algorithm and Anti-Notch Filter

Authors: Hamidreza Saberkari, Mousa Shamsi, Hossein Ahmadi, Saeed Vaali, MohammadHossein Sedaaghi

Abstract:

In recent years, using signal processing tools for the accurate identification of protein coding regions has become a challenge in bioinformatics. Most genomic signal processing methods are based on the period-3 characteristic of the nucleotides in DNA strands; consequently, spectral analysis is applied to the numerical sequences of DNA to find the locations of periodic components. In this paper, a novel ensemble algorithm for gene selection in DNA sequences is presented, based on the combination of the Goertzel algorithm and an anti-notch filter (ANF). The proposed algorithm has many advantages compared to other conventional methods. Firstly, it identifies the protein coding regions more accurately because the Goertzel algorithm is tuned to the desired frequency. Secondly, a faster detection time is achieved. The proposed algorithm is applied to several genes, including genes available in the BG570 and HMR195 databases, and the results are compared to other methods based on nucleotide-level evaluation criteria. Implementation results show the excellent performance of the proposed algorithm in identifying protein coding regions, specifically in the identification of small-scale gene areas.
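
The core of the approach is evaluating the spectral energy of a DNA indicator sequence at the period-3 frequency. A minimal sketch of the Goertzel part (the anti-notch filter branch and the ensemble combination are omitted):

```python
import math

def goertzel_power(x, k_freq):
    """Spectral power of sequence x at normalized frequency k_freq,
    computed with the Goertzel recurrence (tuned to one frequency only)."""
    coeff = 2.0 * math.cos(2.0 * math.pi * k_freq)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def period3_score(dna, base="G"):
    """Period-3 energy of the binary indicator sequence of one nucleotide;
    sliding this score along a genome highlights likely coding regions."""
    indicator = [1.0 if b == base else 0.0 for b in dna.upper()]
    return goertzel_power(indicator, 1.0 / 3.0)

print(period3_score("ATG" * 40))       # strong period-3 pattern -> high score
print(period3_score("ATGCGTAC" * 15))  # weaker period-3 content -> low score
```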

Keywords: protein coding regions, period-3, anti-notch filter, Goertzel algorithm

Procedia PDF Downloads 388
5395 Coronin 1C and miR-128A as Potential Diagnostic Biomarkers for Glioblastoma Multiforme

Authors: Denis Mustafov, Emmanouil Karteris, Maria Braoudaki

Abstract:

Glioblastoma multiforme (GBM) is a heterogeneous primary brain tumour that kills most affected patients. To the authors' best knowledge, despite all research efforts there is no early diagnostic biomarker for GBM. MicroRNAs (miRNAs) are short non-coding RNA molecules which are deregulated in many cancers. The aim of this research was to determine miRNAs with a diagnostic impact and to potentially identify promising therapeutic targets for glioblastoma multiforme. In silico analysis was performed to identify deregulated miRNAs with diagnostic relevance for glioblastoma. The expression profiles of the chosen miRNAs were then validated in vitro in the human glioblastoma cell lines A172 and U-87MG. Briefly, RNA extraction was carried out using the Trizol method, whilst miRNA extraction was performed using the mirVana miRNA isolation kit. Quantitative real-time polymerase chain reaction was performed to verify their expression. The presence of five target proteins within the A172 cell line was evaluated by Western blotting. The expression of the CORO1C protein within 32 GBM cases was examined via immunohistochemistry. The miRNAs identified in silico included miR-21-5p, miR-34a, and miR-128a. These miRNAs were shown to target deregulated GBM genes, such as CDK6, E2F3, BMI1, JAG1, and CORO1C. miR-34a and miR-128a showed low expression profiles in comparison to the control miR-RNU-44 in both GBM cell lines, suggesting tumour suppressor properties. In contrast, miR-21-5p demonstrated greater expression, indicating that it could potentially function as an oncomiR. Western blotting revealed expression of all five proteins within the A172 cell line. In silico analysis also suggested that CORO1C is a target of miR-128a and miR-34a. Immunohistochemistry demonstrated that 75% of the GBM cases showed moderate to high expression of the CORO1C protein. A greater understanding of the deregulated expression of miR-128a and the upregulation of CORO1C in GBM could potentially lead to the identification of a promising diagnostic biomarker signature for glioblastomas.

Keywords: non-coding RNAs, gene expression, brain tumours, immunohistochemistry

Procedia PDF Downloads 91
5394 An Improvement of ComiR Algorithm for MicroRNA Target Prediction by Exploiting Coding Region Sequences of mRNAs

Authors: Giorgio Bertolazzi, Panayiotis Benos, Michele Tumminello, Claudia Coronnello

Abstract:

MicroRNAs are small non-coding RNAs that post-transcriptionally regulate the expression levels of messenger RNAs. MicroRNA regulation activity depends on the recognition of binding sites located on mRNA molecules. ComiR (Combinatorial miRNA targeting) is a user-friendly web tool developed to predict the targets of a set of microRNAs, starting from their expression profile. ComiR incorporates miRNA expression in a thermodynamic binding model, and it associates each gene with the probability of being a target of a set of miRNAs. The ComiR algorithms were trained with information regarding binding sites in the 3'UTR region, using a reliable dataset containing the targets of endogenously expressed microRNAs in D. melanogaster S2 cells. This dataset was obtained by comparing the results from two different experimental approaches, i.e., inhibition and immunoprecipitation of the AGO1 protein; this protein is a component of the microRNA-induced silencing complex. In this work, we tested whether including coding region binding sites in the ComiR algorithm improves the performance of the tool in predicting microRNA targets. We focused the analysis on the D. melanogaster species and updated the ComiR underlying database with the currently available releases of mRNA and microRNA sequences. As a result, we find that the ComiR algorithm trained with the information related to the coding regions is more efficient in predicting the microRNA targets than the algorithm trained with 3'UTR information. On the other hand, we show that 3'UTR-based predictions can be seen as complementary to the coding-region-based predictions, which suggests that both predictions, from the 3'UTR and from coding regions, should be considered in a comprehensive analysis. Furthermore, we observed that the lists of targets obtained by analyzing data from only one experimental approach, that is, inhibition or immunoprecipitation of AGO1, are not reliable enough to test the performance of our microRNA target prediction algorithm. Further analysis will be conducted to investigate the effectiveness of the tool with data from other species, provided that validated datasets, as obtained from the comparison of RISC protein inhibition and immunoprecipitation experiments, become available for the same samples. Finally, we propose to upgrade the existing ComiR web tool by including the coding-region-based trained model, available together with the 3'UTR-based one.

Keywords: AGO1, coding region, Drosophila melanogaster, microRNA target prediction

Procedia PDF Downloads 452
5393 New Efficient Method for Coding Color Images

Authors: Walaa M. Abd-Elhafiez, Wajeb Gharibi

Abstract:

In this paper, a novel color image compression technique for efficient storage and delivery of data is proposed. The proposed compression technique starts with an RGB to YCbCr color transformation. Secondly, the Canny edge detection method is used to classify the blocks into edge and non-edge blocks. Each color component (Y, Cb, and Cr) is then compressed step by step by a discrete cosine transform (DCT), quantization, and adaptive arithmetic coding. Our technique is evaluated on compression ratio, bits per pixel, and peak signal-to-noise ratio, and it produces better results than JPEG and more recently published schemes (such as CBDCT-CABS and MHC). The provided experimental results illustrate that the proposed technique is efficient and feasible in terms of compression ratio, bits per pixel, and peak signal-to-noise ratio.
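
A sketch of the first compression stages under stated assumptions: the BT.601 RGB-to-YCbCr matrix and a uniform quantization step stand in for the paper's edge-adaptive quantization, and the Canny classification and arithmetic coder are omitted.

```python
import numpy as np
from scipy.fft import dctn

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """ITU-R BT.601 RGB -> YCbCr transform (inputs in 0..255)."""
    m = np.array([[ 0.299,   0.587,   0.114],
                  [-0.1687, -0.3313,  0.5],
                  [ 0.5,    -0.4187, -0.0813]])
    ycc = rgb.astype(np.float64) @ m.T
    ycc[..., 1:] += 128.0  # center the chroma channels
    return ycc

def dct_quantize(channel: np.ndarray, q: float = 16.0) -> np.ndarray:
    """8x8 block DCT followed by uniform quantization with step q."""
    h, w = channel.shape
    out = np.empty((h, w))
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = channel[i:i + 8, j:j + 8] - 128.0  # level shift
            out[i:i + 8, j:j + 8] = np.round(dctn(block, norm="ortho") / q)
    return out

rgb = np.random.randint(0, 256, size=(64, 64, 3))
y = rgb_to_ycbcr(rgb)[..., 0]  # luma channel
coeffs = dct_quantize(y)       # sparse quantized DCT coefficients
print(np.count_nonzero(coeffs), "non-zero coefficients of", coeffs.size)
```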

Keywords: image compression, color image, q-coder, quantization, edge-detection

Procedia PDF Downloads 330
5392 Study on Energy Performance Comparison of Information Centric Network Based on Difference of Network Architecture

Authors: Takumi Shindo, Koji Okamura

Abstract:

The first generation of the wide area network was the circuit-centric network, where how the optimal circuit could be assigned was the most important issue in obtaining the best performance. This architecture succeeded for line-based telephone systems. The second generation was the host-centric network, and the Internet based on this architecture has succeeded worldwide, becoming a new social infrastructure. Currently, a network architecture based on the information itself is being pursued. This future network is called the information-centric network (ICN). ICN has been researched by many projects, and different architectures for the implementation of ICN have been proposed. The goal of this study is to compare the performance of those ICN architectures. In this paper, the authors propose a general ICN model which can represent two typical ICN architectures and compare their communication performance using request routing. Finally, simulation results are shown. We also assume that this network architecture should be adapted to energy on-demand routing.

Keywords: ICN, information centric network, CCN, energy

Procedia PDF Downloads 337