Search results for: unknown
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 856

826 Microbial Dark Matter Analysis Using 16S rRNA Gene Metagenomics Sequences

Authors: Hana Barak, Alex Sivan, Ariel Kushmaro

Abstract:

Microorganisms are the most diverse and abundant life forms on Earth and account for a large portion of the Earth's biomass and biodiversity. To date, though, our knowledge regarding microbial life is lacking, as it is based mainly on information from cultivated organisms. Indeed, microbiologists have borrowed from astrophysics and termed the 'uncultured microbial majority' 'microbial dark matter'. The realization of how diverse and unexplored microorganisms are actually stems from recent advances in molecular biology, and in particular from novel methods for sequencing microbial small subunit ribosomal RNA genes directly from environmental samples, termed next-generation sequencing (NGS). This has led us to use NGS, which generates several gigabases of sequencing data in a single experimental run, to identify and classify environmental samples of microorganisms. In metagenomic sequencing analysis (both 16S and shotgun), sequences are compared to reference databases that contain only a small part of the existing microorganisms, and therefore their taxonomy assignment may reveal groups of unknown microorganisms or origins. These unknowns, or the 'microbial sequence dark matter', are usually ignored in spite of their great importance. The goal of this work was to develop an improved bioinformatics method that enables more complete analyses of the microbial communities in numerous environments. Therefore, NGS was used to identify previously unknown microorganisms from three different environments (industrial wastewater, Negev Desert rocks, and water wells in the Arava Valley). 16S rRNA gene metagenome analysis of the microorganisms from these three environments produced approximately 4 million reads for 75 samples. Between 0.1% and 12% of the sequences in each sample were tagged as 'Unassigned'. Employing a relatively simple methodology for resequencing the original gDNA samples with specific primers, through Sanger or Illumina MiSeq sequencing, this study demonstrates that the mysterious 'Unassigned' group apparently contains sequences of candidate phyla. These unknown sequences can be located on a phylogenetic tree and thus provide a better understanding of the 'sequence dark matter' and its role in the research of microbial communities and diversity. Studying this 'dark matter' will extend the existing databases and could reveal the hidden potential of the 'microbial dark matter'.

Keywords: bacteria, bioinformatics, dark matter, Next Generation Sequencing, unknown

Procedia PDF Downloads 217
825 Effective Removal of Tetrodotoxin with Fiber Mat Containing Activated Charcoal

Authors: Min Sik Kim, Hwa Sung Shin

Abstract:

From 2013, small eel farms located in the Han River Estuary, South Korea, have suffered damage from unexplained mass die-offs. Amid the debate over whether the cause could be environmental change or wastewater, large numbers of an unknown nemertean were discovered during that period. Some nemerteans are known to release neurotoxic substances. In this study, we isolated intestinal bacteria using selective media and conducted 16S rDNA microbial identification by gene alignment. As a result, we found a type of bacteria producing tetrodotoxin (TTX), a sodium-channel blocker that induces the organism's death. TTX production by the bacteria was confirmed by ELISA and liquid chromatography coupled with mass spectrometry. Additionally, activated charcoal, which has the ability to adsorb small molecules such as toxins, was applied to a fibrous mesh to prevent its ingestion by aquatic organisms and to increase the applicable area. The viability of zebrafish in water containing TTX together with the charcoal fiber mat was not decreased, suggesting the mat could be used to solve the die-off problem in fish farms.

Keywords: nemertean, TTX, fiber mat, activated charcoal, zebrafish

Procedia PDF Downloads 179
824 Topological Sensitivity Analysis for Reconstruction of the Inverse Source Problem from Boundary Measurement

Authors: Maatoug Hassine, Mourad Hrizi

Abstract:

In this paper, we consider a geometric inverse source problem for the heat equation with Dirichlet and Neumann boundary data. We reconstruct the exact form of the unknown source term from additional boundary conditions. Our motivation is to detect the location, the size, and the shape of the source support. We present a one-shot algorithm based on the Kohn-Vogelius formulation and the topological gradient method. The geometric inverse source problem is formulated as a topology optimization problem. A topological sensitivity analysis is derived with respect to the source function. Then, we present a non-iterative numerical method for the geometric reconstruction of the source term with unknown support using a level curve of the topological gradient. Finally, we give several examples to show the viability of our presented method.

Keywords: geometric inverse source problem, heat equation, topological optimization, topological sensitivity, Kohn-Vogelius formulation

Procedia PDF Downloads 270
823 Elvis Improved Method for Solving Simultaneous Equations in Two Variables with Some Applications

Authors: Elvis Adam Alhassan, Kaiyu Tian, Akos Konadu, Ernest Zamanah, Michael Jackson Adjabui, Ibrahim Justice Musah, Esther Agyeiwaa Owusu, Emmanuel K. A. Agyeman

Abstract:

In this paper, we show how to solve simultaneous equations using the Elvis improved method. The method proceeds as follows: make one variable the subject of the first equation; make the same variable the subject of the second equation; equate the two results and simplify to obtain the value of the other variable; then substitute the value found into either of the equations from the first two steps and simplify for the remaining unknown variable. The difference between the Elvis improved method and the substitution method is that with the Elvis improved method the same variable is made the subject in both equations and the two resulting equations are equated, unlike the substitution method, where one variable is made the subject of only one equation and substituted into the other equation. After describing the Elvis improved method, findings from 100 secondary students and the views of 5 secondary tutors are presented to demonstrate the effectiveness of the method. The study's purpose is illustrated through hypothetical examples, such as the sketch below.
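
As a concrete illustration of the steps above, the following is a minimal sketch in Python using SymPy; the library choice and the example system are ours, not the paper's:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical example system:
#   2x + 3y = 12
#   x  -  y = 1
eq1 = sp.Eq(2*x + 3*y, 12)
eq2 = sp.Eq(x - y, 1)

# Steps 1-2: make the same variable (y) the subject of both equations.
y_from_eq1 = sp.solve(eq1, y)[0]   # y = (12 - 2x) / 3
y_from_eq2 = sp.solve(eq2, y)[0]   # y = x - 1

# Step 3: equate the two expressions and solve for the other variable.
x_value = sp.solve(sp.Eq(y_from_eq1, y_from_eq2), x)[0]

# Step 4: substitute the value found back to recover the remaining unknown.
y_value = y_from_eq1.subs(x, x_value)

print(x_value, y_value)  # 3, 2
```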

Keywords: simultaneous equations, substitution method, elimination method, graphical method, Elvis improved method

Procedia PDF Downloads 87
822 Stability and Performance Improvement of a Two-Degree-of-Freedom Robot under Interaction Using the Impedance Control

Authors: Seyed Reza Mirdehghan, Mohammad Reza Haeri Yazdi

Abstract:

In this paper, the stability and the performance of a two-degree-of-freedom robot interacting with an unknown environment are investigated. Both the time the robot takes to return to its initial position after an interaction and its initial resistance against the impact must be reduced; thus, the torque applied to the motor will be reduced. Impedance control is an appropriate method for robot control under these conditions. The stability of the robot at the moment of interaction was recast as a robust stability problem. The dynamics of the unknown environment were modeled as a weight function, and the stability of the robot under interaction with the environment was investigated using robust control concepts. To improve the performance of the system, a force controller was designed that reduces the normalized impedance after interaction. The resistance of the robot was treated as a normalized cost function, and its value was 0.593. The results showed a reduction in the resistance of the robot against impact and a reduction of the convergence time to less than one second.

Keywords: impedance control, control system, robots, interaction

Procedia PDF Downloads 394
821 Specified Human Motion Recognition and Unknown Hand-Held Object Tracking

Authors: Jinsiang Shaw, Pik-Hoe Chen

Abstract:

This paper aims to integrate human recognition, motion recognition, and object tracking technologies without requiring a pre-trained database model for motion recognition or for the unknown object itself. Furthermore, it can simultaneously track multiple users and multiple objects. Unlike other existing human motion recognition methods, our approach employs a rule-based condition method to determine whether a user's hand is approaching or departing from an object. It uses a background subtraction method to separate the human and the object from the background, and employs behavior features to effectively interpret human object-grabbing actions. With an object's histogram characteristics, we are able to isolate and track it using back projection. Hence, a moving object's trajectory can be recorded and the object itself can be located. This particular technique can be used in a camera surveillance system in a shopping area to perform real-time intelligent surveillance, thus preventing theft. Experimental results verify the validity of the developed surveillance algorithm, with an accuracy of 83% for shoplifting detection.
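
The background-subtraction plus histogram back-projection tracking steps described above can be sketched with OpenCV. This is a minimal illustration under assumed inputs (a hypothetical video file and a manually chosen initial object region), not the authors' implementation:

```python
import cv2

cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input video
ok, frame = cap.read()

# Assumed: an initial bounding box around the hand-held object.
x, y, w, h = 300, 200, 80, 80
roi = frame[y:y+h, x:x+w]

# Build the object's hue histogram (its "histogram characteristics").
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Background subtractor to separate human/object from the background.
bg_sub = cv2.createBackgroundSubtractorMOG2()

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg_sub.apply(frame)                 # foreground isolation
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    back_proj = cv2.bitwise_and(back_proj, back_proj, mask=fg_mask)
    # Track the object; the window centroid traces its trajectory.
    ret, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    print("object window:", track_window)
```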

Keywords: automatic tracking, back projection, motion recognition, shoplifting

Procedia PDF Downloads 299
820 Investigation of Dynamic Heat Transfer in Masonry Walls

Authors: Joelle Al Fakhoury, Emilio Sassine, Yassine Cherif, Joseph Dgheim, Emmanuel Antczak

Abstract:

Hollow block masonry is the most widely used building technology in the Lebanese context. These blocks are manufactured in an artisanal way and have unknown thermal properties; their overall thermo-physical performance is thus unknown and has been poorly investigated scientifically, in both single-wall and double-wall configurations. In this work, experimental measurements and numerical simulations are performed for a better understanding of heat transfer in masonry walls. This study was realized using an experimental setup consisting of a masonry hollow block wall (0.1 m x 1 m x 1 m) and two heat boxes, each covering one side of the wall. The first is a reference box held at a constant interior temperature, and the other is a control box with an adjustable interior temperature. First, the numerical model is validated against the experimental setup; then 3D numerical analyses are carried out to investigate the effect of the air gap, the mortar joints, and the plastering on the thermal performance of masonry walls, for a better understanding of the heat transfer process and the recommendation of suitable thermal improvements.

Keywords: masonry wall, hollow blocks, heat transfer, wall instrumentation, thermal improvement

Procedia PDF Downloads 196
819 Clustering Performance Analysis Using New Correlation-Based Cluster Validity Indices

Authors: Nathakhun Wiroonsri

Abstract:

There are various cluster validity measures used for evaluating clustering results. One of the main objectives of using these measures is to seek the optimal unknown number of clusters. Some measures work well for clusters with different densities, sizes and shapes. Yet, one of the weaknesses that those validity measures share is that they sometimes provide only one clear optimal number of clusters. That number is actually unknown and there might be more than one potential sub-optimal option that a user may wish to choose based on different applications. We develop two new cluster validity indices based on a correlation between an actual distance between a pair of data points and a centroid distance of clusters that the two points are located in. Our proposed indices constantly yield several peaks at different numbers of clusters which overcome the weakness previously stated. Furthermore, the introduced correlation can also be used for evaluating the quality of a selected clustering result. Several experiments in different scenarios, including the well-known iris data set and a real-world marketing application, have been conducted to compare the proposed validity indices with several well-known ones.
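
The core quantity, a correlation between the actual distance of a pair of data points and the distance between the centroids of the clusters the two points belong to, can be sketched as follows. This is a simplified reading of the proposed indices with standard tools and the iris data mentioned above, not the exact definition from the paper:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris().data

def centroid_distance_correlation(X, labels, centers):
    """Correlation between actual pairwise point distances and the
    distances between the centroids of the clusters each pair is in."""
    n = len(X)
    actual, centroidal = [], []
    for i in range(n):
        for j in range(i + 1, n):
            actual.append(np.linalg.norm(X[i] - X[j]))
            centroidal.append(np.linalg.norm(centers[labels[i]] - centers[labels[j]]))
    return pearsonr(actual, centroidal)[0]

# Evaluate over a range of cluster counts; several peaks across k values
# suggest multiple candidate (sub-)optimal numbers of clusters.
for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    r = centroid_distance_correlation(X, km.labels_, km.cluster_centers_)
    print(k, round(r, 3))
```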

Keywords: clustering algorithm, cluster validity measure, correlation, data partitions, iris data set, marketing, pattern recognition

Procedia PDF Downloads 81
818 To Know the Way to the Unknown: A Semi-Experimental Study on the Implication of Skills and Knowledge for Creative Processes in Higher Education

Authors: Mikkel Snorre Wilms Boysen

Abstract:

From a theoretical perspective, expertise is generally considered a precondition for creativity. The assumption is that an individual needs to master the common and accepted rules and techniques within a certain knowledge-domain in order to create something new and valuable. However, real life cases, and a limited amount of empirical studies, demonstrate that this assumption may be overly simple. In this article, this question is explored through a number of semi-experimental case studies conducted within the fields of music, technology, and youth culture. The studies indicate that, in various ways, expertise plays an important part in creative processes. However, the case studies also indicate that expertise sometimes leads to an entrenched perspective, in the sense that knowledge and experience may work as a path into the well-known rather than into the unknown. In this article, these issues are explored with reference to different theoretical approaches to creativity and learning, including actor-network theory, the theory of blind variation and selective retention, and Csikszentmihalyi’s system model. Finally, some educational aspects and implications of this are discussed.

Keywords: creativity, expertise, education, technology

Procedia PDF Downloads 294
817 The Impact of the Lexical Quality Hypothesis and the Self-Teaching Hypothesis on Reading Ability

Authors: Anastasios Ntousas

Abstract:

The purpose of the following paper is to analyze the relationship between the lexical quality hypothesis and the self-teaching hypothesis and their impact on reading ability. The following questions emerged: is there a correlation between the effective reading experience that the lexical quality hypothesis proposes and the self-teaching hypothesis; would the ability to read by analogy facilitate and create stable, synchronized representations across the four word features; and would morphological knowledge of words be a possible extension of the self-teaching hypothesis? The lexical quality hypothesis proposes that words comprise four representational attributes: phonology, orthography, morpho-syntax, and meaning. These four word representations work together to make word reading an effective task. A possible lack of knowledge in one of the representations might disrupt reading comprehension. The degree to which the four word features are connected distinguishes high from low lexical quality word representations. When the four representational attributes connect together effectively, readers have a high lexical quality of words; however, when the attributes have hardly any strong connection with each other, readers have a low lexical quality of words. Furthermore, the self-teaching hypothesis proposes that phonological recoding enables printed word learning. Phonological knowledge and reading experience facilitate the acquisition and consolidation of word-specific orthographies. Reading experience is related to strong reading comprehension: the more contact readers have with texts, the better readers they become. Therefore, their phonological knowledge, as the self-teaching hypothesis suggests, might have a facilitative impact on the consolidation of the orthographic, morpho-syntactic, and meaning representations of unknown words. The phonology of known words might effectively activate the rest of the representational features of words. Readers use their existing phonological knowledge of similarly spelt words to pronounce unknown words; a possible transfer of this ability to read by analogy appears with readers' morphological knowledge. Morphemes might facilitate readers' ability to pronounce and spell new unknown words to which they do not have lexical access. Readers will encounter unknown words with similar phonemes and morphemes but with different meanings. Knowledge of phonology and morphology might support and increase reading comprehension. The study involved a careful selection and discussion of theoretical material and a comparison of the two existing theories. Evidence shows that morphological knowledge improves reading ability and comprehension, so morphological knowledge might be a possible extension of the self-teaching hypothesis; the fundamental skill of reading by analogy can be applied to the consolidation of word-specific orthographies via readers' morphological knowledge; and there is a positive correlation between effective reading experience and the self-teaching hypothesis.

Keywords: morphology, orthography, reading ability, reading comprehension

Procedia PDF Downloads 93
816 Output-Feedback Control Design for a General Class of Systems Subject to Sampling and Uncertainties

Authors: Tomas Menard

Abstract:

The synthesis of output-feedback control laws has been investigated by many researchers since the last century. While many results exist for the case of linear time-invariant systems whose measurements are continuously available, nowadays control laws are usually implemented on microcontrollers, so the measurements are discrete-time by nature. This fact has to be taken into account explicitly in order to obtain satisfactory behavior of the closed-loop system. We consider here a general class of systems corresponding to an observability normal form which is subject to uncertainties in the dynamics and to sampling of the output. Indeed, in practice, the modeling of the system is never perfect; this results in unknown uncertainties in the dynamics of the model. We propose an output-feedback algorithm based on a linear state feedback and a continuous-discrete time observer. The main feature of the proposed control law is that only discrete-time measurements of the output are needed. Furthermore, it is formally proven that the state of the closed-loop system converges exponentially toward the origin despite the unknown uncertainties. Finally, the performance of this control scheme is illustrated with simulations.
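
As a rough illustration of the continuous-discrete observer idea, here is a simulation sketch under simplifying assumptions: a double integrator in observability normal form, hypothetical observer and feedback gains, and a simple held-measurement correction. It is not the paper's algorithm:

```python
import numpy as np

# Double integrator in observability normal form: x1' = x2, x2' = u.
dt, Ts = 0.001, 0.05          # integration step and output sampling period
L1, L2 = 4.0, 4.0             # assumed observer gains
K1, K2 = -2.0, -3.0           # assumed state-feedback gains

x = np.array([1.0, 0.0])      # true plant state
xhat = np.array([0.0, 0.0])   # observer state
t, next_sample, y_held = 0.0, 0.0, 0.0

for step in range(20000):
    u = K1 * xhat[0] + K2 * xhat[1]            # linear state feedback
    # Discrete-time measurement: the output is sampled only every Ts.
    if t >= next_sample:
        y_held = x[0]                          # latest sampled output
        next_sample += Ts
    # Continuous-discrete observer: predict continuously, correct with
    # the most recent sampled measurement.
    e = y_held - xhat[0]
    xhat = xhat + dt * np.array([xhat[1] + L1 * e, u + L2 * e])
    x = x + dt * np.array([x[1], u])           # plant integration
    t += dt

print("final estimation error:", x - xhat)
```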

Keywords: dynamical systems, output feedback control law, sampling, uncertain systems

Procedia PDF Downloads 252
815 Logistic Model Tree and Expectation-Maximization for Pollen Recognition and Grouping

Authors: Endrick Barnacin, Jean-Luc Henry, Jack Molinié, Jimmy Nagau, Hélène Delatte, Gérard Lebreton

Abstract:

Palynology is a field of interest for many disciplines. It has multiple applications such as chronological dating, climatology, allergy treatment, and even honey characterization. Unfortunately, the analysis of a pollen slide is a complicated and time-consuming task that requires the intervention of experts in the field, who are becoming increasingly rare due to economic and social conditions. The automation of this task is therefore a necessity. Pollen slide analysis is mainly a visual process, as it is carried out with the naked eye. That is why a primary route to automating palynology is digital image processing, which has the lowest cost and relatively good accuracy in pollen retrieval. In this work, we propose a system combining recognition and grouping of pollen. It uses a Logistic Model Tree to classify pollen already known to the proposed system while detecting any unknown species; the unknown pollen species are then divided using a cluster-based approach. Good success rates for the recognition of known species were achieved, and automated clustering appears to be a promising approach.
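
The recognize-then-group pipeline can be sketched as below. Since scikit-learn has no Logistic Model Tree, a decision tree stands in for it, and the expectation-maximization step is represented by a Gaussian mixture; the feature vectors (e.g., local binary patterns, per the keywords) are assumed to have been extracted already and are synthetic here:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier   # stand-in for the LMT
from sklearn.mixture import GaussianMixture       # EM-based grouping

def recognize_and_group(X_train, y_train, X_new, reject_threshold=0.7):
    clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    proba = clf.predict_proba(X_new)
    confident = proba.max(axis=1) >= reject_threshold
    known_labels = clf.predict(X_new[confident])   # recognized known species
    unknown = X_new[~confident]                    # likely unknown species
    groups = None
    if len(unknown) > 1:
        gm = GaussianMixture(n_components=min(3, len(unknown)),
                             random_state=0).fit(unknown)
        groups = gm.predict(unknown)               # cluster the unknowns
    return known_labels, groups

# Toy demo with synthetic feature vectors for three known species.
X_train = np.random.rand(60, 8)
y_train = np.repeat([0, 1, 2], 20)
X_new = np.random.rand(10, 8)
known, groups = recognize_and_group(X_train, y_train, X_new)
print(known, groups)
```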

Keywords: pollen recognition, logistic model tree, expectation-maximization, local binary pattern

Procedia PDF Downloads 153
814 Determining the Width and Depths of Cut in Milling on the Basis of a Multi-Dexel Model

Authors: Jens Friedrich, Matthias A. Gebele, Armin Lechler, Alexander Verl

Abstract:

Chatter vibrations and process instabilities are the most important factors limiting the productivity of the milling process. Chatter can lead to damage to the tool, the part, or the machine tool. Therefore, the estimation and prediction of process stability are very important. The process stability depends on the spindle speed, the depth of cut, and the width of cut. In milling, the process conditions are defined in the NC program. While the spindle speed is directly coded in the NC program, the depth and width of cut are unknown. This paper presents a new simulation-based approach for the prediction of the depth and width of cut of a milling process. The prediction is based on a material removal simulation with an analytically represented tool shape and a multi-dexel approach for the workpiece. The new calculation method allows the direct estimation of the depth and width of cut, which are the parameters influencing process stability, instead of the removed volume, as existing approaches do. This knowledge can be used to predict the stability of new, unknown parts. Moreover, with an additional vibration sensor, the stability lobe diagram of a milling process can be estimated and improved based on the estimated depth and width of cut.
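
A single-direction dexel model reduces to a height field from which the engaged depth and width of cut can be read off directly after each tool move. The following is a toy sketch of that idea (a simplification of the multi-dexel workpiece representation; grid size, tool, and dimensions are invented):

```python
import numpy as np

# Minimal single-direction dexel model: a height field storing the
# remaining stock height at each (x, y) cell.
res = 0.1                                    # dexel spacing in mm
stock = np.full((400, 400), 20.0)            # 40 mm x 40 mm block, 20 mm high

def mill_step(stock, cx, cy, radius, z_tool):
    """Sweep a flat-end tool at (cx, cy) down to z_tool; return the
    engaged depth and width of cut for this step."""
    xs = (np.arange(stock.shape[0]) + 0.5) * res
    ys = (np.arange(stock.shape[1]) + 0.5) * res
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    footprint = (X - cx) ** 2 + (Y - cy) ** 2 <= radius ** 2
    engaged = footprint & (stock > z_tool)    # dexels the tool actually cuts
    depth = float((stock[engaged] - z_tool).max()) if engaged.any() else 0.0
    width = engaged.any(axis=0).sum() * res   # engaged extent across the feed
    stock[engaged] = z_tool                   # material removal
    return depth, width

d, w = mill_step(stock, cx=20.0, cy=5.0, radius=5.0, z_tool=18.0)
print("depth of cut:", d, "mm, width of cut:", w, "mm")
```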

Keywords: dexel, process stability, material removal, milling

Procedia PDF Downloads 495
813 Smoking and Alcohol Consumption Predicts Multiple Head and Neck Cancers

Authors: Kim Kennedy, Daren Gibson, Stephanie Flukes, Chandra Diwakarla, Lisa Spalding, Leanne Pilkington, Andrew Redfern

Abstract:

Introduction: It is well known that patients with head and neck cancer (HNC) are at increased risk of subsequent head and neck cancers due to various aetiologies. Aim: We sought to determine the factors contributing to an increased risk of subsequent HNC primaries, and also to evaluate whether Aboriginal patients are at increased risk. Methods: We performed a retrospective cohort analysis of 320 HNC patients from a single centre in Western Australia, identifying 80 Aboriginal patients and 240 non-Aboriginal patients matched on a 1:3 ratio by site, histology, rurality, and age. We collected patient data including smoking and alcohol consumption, tumour and treatment data, and data on subsequent HNC primaries. Results: A subsequent HNC primary was seen in 37 patients (11.6%) overall. There was no significant difference in the rate of second primary HNCs between Aboriginal patients (12.5%) and non-Aboriginal patients (11.2%) (p=0.408). Subsequent HNCs were strongly associated with smoking and alcohol consumption, however, with 95% of patients with a second primary being ever-smokers, and 54% of patients with a second primary having a history of excessive alcohol consumption. In the 37 patients with multiple HNC primaries, there were a total of 57 HNCs: 29 patients had two primaries, six patients had three, one patient had four, and one had six. 54 of the 57 cancers were in ever-smokers (94.7%). There were only two multiple HNC primaries in never-smoker, non-drinker patients, and these cases were of unknown aetiology, with HPV/p16 status unknown in both. In the whole study population, there were 32 HPV-positive HNCs and 67 p16-positive HNCs, with only two second HNCs in p16-positive cases, giving a rate of 3% in the p16-positive population, which is much lower than the rate of second primaries seen in the overall population (11.6%); the rate was highest in the p16-negative population (15.7%). This suggests that p16 positivity is not a strong risk factor for subsequent primaries, and in fact p16 negativity appeared to be associated with increased risk; however, these data are limited by the large number of patients without documented p16 status (45.3% overall; 12% of oropharyngeal and 59.6% of oral cavity primaries had unknown p16 status). Summary: Subsequent HNC primaries were strongly associated with smoking and alcohol excess. Second and later HNC primaries did not appear to occur at increased rates in Aboriginal patients compared with non-Aboriginal patients, and p16 positivity did not predict increased risk; however, p16 negativity was associated with an increased risk of subsequent HNCs.

Keywords: head and neck cancer, multiple primaries, aboriginal, p16 status, smoking, alcohol

Procedia PDF Downloads 44
812 Absolute Lymphocyte Count as Predictor of Pneumocystis Pneumonia in Patients With Unknown HIV Status at a Private Tertiary Hospital

Authors: Marja A. Bernardo, Coreena A. Bueser, Cybele Lara R. Abad, Raul V. Destura

Abstract:

Pneumocystis jirovecii pneumonia (PCP) is the most common opportunistic infection among people with HIV. Early consideration of PCP should be given even in patients whose HIV status is unknown, as a delay in treatment may be fatal. The use of the absolute lymphocyte count (ALC) has been suggested as an alternative predictor of PCP, especially in resource-limited settings where PCR testing is costly or delayed. Objective: To determine whether the absolute lymphocyte count (ALC) can be used as a screening tool to predict Pneumocystis pneumonia in patients with unknown HIV status admitted to a private tertiary hospital. Methods: A retrospective cross-sectional study was conducted at a private tertiary medical center. Inpatient medical records of patients aged 18 years old and above from January 2012 to May 2014, in whom a clinical diagnosis of Pneumocystis jirovecii pneumonia was made, were reviewed for inclusion. Demographic data, clinical features, hospital course, PCP PCR, and HIV results were recorded. Independent t-tests and chi-square analysis were used to determine any statistical difference between PCP-positive and PCP-negative groups. The Mann-Whitney U-test was used for comparison of hospital stay. Results: There were no statistically significant differences in baseline characteristics between PCP-positive and PCP-negative groups. While both the percent lymphocyte count (0.14 ± 0.13 vs 0.21 ± 0.16) and the ALC (1160 ± 528.67 vs 1493.70 ± 988.61) were lower for the PCP-positive group, only the percent lymphocyte count reached a statistically significant difference (p = 0.042, vs p = 0.067 for ALC). Conclusion: A quick determination of the ALC may be useful as an additional parameter to help screen for and diagnose Pneumocystis pneumonia. In our study, the ALC of patients with PCP appears to be lower than in patients without PCP. A low ALC (e.g., below 1200) may help with the decision regarding empiric treatment. However, it should be used in conjunction with the patient's clinical presentation, as well as other diagnostic tests. Larger, prospective studies incorporating the ALC with other clinical predictors are necessary to optimally predict those who would benefit from empiric or expedited management for potential PCP.

Keywords: Pneumocystis carinii pneumonia, absolute lymphocyte count, infection, PCP

Procedia PDF Downloads 317
811 Identification Strategies for Unknown Victims from Mass Disasters and Unknown Perpetrators from Violent Crime or Terrorist Attacks

Authors: Michael Josef Schwerer

Abstract:

Background: The identification of unknown victims from mass disasters, violent crimes, or terrorist attacks is frequently facilitated through information from missing persons lists, portrait photos, old or recent pictures showing unique characteristics of a person such as scars or tattoos, or simply reference samples from blood relatives for DNA analysis. In contrast, the identification, or at least the characterization, of an unknown perpetrator of criminal or terrorist actions remains challenging, particularly in the absence of material or data for comparison, such as fingerprints previously stored in criminal records. In scenarios that result in high levels of destruction of the perpetrator's corpse, for instance blast or fire events, the chance of a positive identification using standard techniques is further impaired. Objectives: This study shows the forensic genetic procedures in the Legal Medicine Service of the German Air Force for the identification of unknown individuals, including cases in which reference samples are not available. Scenarios requiring such efforts predominantly involve aircraft crash investigations, which are routinely carried out by the German Air Force Centre of Aerospace Medicine as one of the institution's essential missions. Further, casework by military police or military intelligence is supported on the basis of administrative cooperation. In the talk, data from study projects as well as examples from real casework will be presented and discussed with the audience. Methods: Forensic genetic identification in our laboratories involves the analysis of short tandem repeats and single nucleotide polymorphisms in nuclear DNA, along with mitochondrial DNA haplotyping. Extended DNA analysis involves phenotypic markers for skin, hair, and eye color, together with the investigation of a person's biogeographic ancestry. Assessment of the biological age of an individual employs CpG-island methylation analysis using bisulfite-converted DNA. Forensic investigative genealogy allows the detection of an unknown person's blood relatives in reference databases. Technically, end-point PCR, real-time PCR, capillary electrophoresis, pyrosequencing, and next-generation sequencing using flow-cell-based and chip-based systems are used. Results and Discussion: Optimization of DNA extraction from various sources, including difficult matrices like formalin-fixed, paraffin-embedded tissues and degraded specimens from decomposed bodies or from decedents exposed to blast or fire events, provides the foundation for successful PCR amplification and subsequent genetic profiling. For cases with extremely low yields of extracted DNA, whole-genome preamplification protocols are successfully used, particularly for genetic phenotyping. Improved primer design for CpG-methylation analysis, together with validated sampling strategies for the analyzed substrates from, e.g., lymphocyte-rich organs, allows successful biological age estimation even in bodies with highly degraded tissue material. Conclusions: Successful identification of unknown individuals, or at least their phenotypic characterization using pigmentation markers together with age-informative methylation profiles, possibly supplemented by family tree searches employing forensic investigative genealogy, can be provided in specialized laboratories. However, standard laboratory procedures must be adapted to work with difficult and highly degraded sample materials.

Keywords: identification, forensic genetics, phenotypic markers, CPG methylation, biological age estimation, forensic investigative genealogy

Procedia PDF Downloads 16
810 De-Novo Structural Elucidation from Mass/NMR Spectra

Authors: Ismael Zamora, Elisabeth Ortega, Tatiana Radchenko, Guillem Plasencia

Abstract:

Structure elucidation of unknown substances based on mass spectrometry (MS) data is an unresolved problem that affects many different fields of application. A recent overview of software available for structure elucidation of small molecules has shown the demand for an efficient computational tool able to perform structure elucidation of unknown small molecules and peptides. We developed an algorithm for de-novo fragment analysis based on MS data that proposes a set of scored and ranked structures compatible with the MS and MS/MS spectra. Several different algorithms were developed, depending on the initial set of fragments and the structure-building processes; in all cases, several scores for the final molecule ranking were computed. They were validated with small and medium-sized databases (DB) using the eleven test-set compounds. Similar results were obtained from any of the databases that contained the fragments of the expected compound. In summary, we presented an algorithm for de-novo fragment analysis based on mass spectrometry data only, which proposes a set of scored and ranked structures; it was validated on different types of databases and showed good results as a proof of concept. Moreover, the structures proposed from mass spectrometry were submitted to NMR spectra prediction in order to elucidate which of the proposed structures was compatible with the NMR spectra collected.

Keywords: De Novo, structure elucidation, mass spectrometry, NMR

Procedia PDF Downloads 257
809 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

In biomedical research and randomized clinical trials, the outcomes of most common interest are time-to-event, so-called survival, data. The importance of robust models in this context is to compare the effects of randomly controlled experimental groups in a way that carries a sense of causality. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association of response and predictors. Hence, a causal-effect-based semiparametric transformation model was proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for the estimation of causal effects in modeling left-truncated and right-censored survival data. Despite its wide application and popularity in estimating unknown parameters, the maximum likelihood estimation technique is quite complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. Thus, to ease this complexity, we proposed modified estimating equations. After outlining the estimation procedures, the consistency and asymptotic properties of the estimators were derived, and the finite-sample performance of the proposed model was illustrated via simulation studies and the Stanford heart transplant data as a real example. To sum up the study, the bias from covariates was adjusted by estimating the density function of the truncation variable, which was also incorporated into the model as a covariate in order to relax the independence assumption between failure time and truncation time. Moreover, the expectation-maximization (EM) algorithm was described for the iterative estimation of the unknown parameters and the unspecified transformation function. In addition, the causal effect was derived as the ratio of the cumulative hazard functions of the active and passive experiments after adjusting for the bias introduced into the model by the truncation variable.
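
For reference, the class of semiparametric transformation models discussed above is commonly written as follows. This is a standard formulation from the survival literature, with notation assumed rather than taken from the paper:

```latex
% Semiparametric transformation model for the failure time T,
% with possibly time-varying covariates Z(t):
%   H is an unspecified, monotone increasing transformation function,
%   beta the unknown regression parameter, and epsilon an error term
%   with a known distribution (extreme value gives proportional hazards;
%   standard logistic gives proportional odds).
\[
  H(T) = -\beta^{\top} Z(t) + \varepsilon
\]
% The causal effect is then summarized as the ratio of cumulative
% hazards between the active (A) and passive (P) experimental arms:
\[
  \theta(t) = \frac{\Lambda_{A}(t)}{\Lambda_{P}(t)}
\]
```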

Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 95
808 USBware: A Trusted and Multidisciplinary Framework for Enhanced Detection of USB-Based Attacks

Authors: Nir Nissim, Ran Yahalom, Tomer Lancewiki, Yuval Elovici, Boaz Lerner

Abstract:

Background: Attackers increasingly take advantage of innocent users who tend to use USB devices casually, assuming these devices are benign when in fact they may carry embedded malicious behavior or hidden malware. USB devices have many properties and capabilities that have become the subject of malicious operations. Many of the recent attacks targeting individuals, and especially organizations, utilize popular and widely used USB devices, such as mice, keyboards, flash drives, printers, and smartphones. However, current detection tools, techniques, and solutions generally fail to detect both the known and unknown attacks launched via USB devices. Significance: We propose USBWARE, a project that focuses on the vulnerabilities of USB devices and centers on the development of a comprehensive detection framework that relies upon a crucial attack repository. USBWARE will allow researchers and companies to better understand the vulnerabilities and attacks associated with USB devices, as well as providing a comprehensive platform for developing detection solutions. Methodology: The framework of USBWARE is aimed at accurate detection of both known and unknown USB-based attacks by a process that efficiently enhances the framework's detection capabilities over time. The framework will integrate two main security approaches in order to enhance the detection of USB-based attacks associated with a variety of USB devices. The first approach is aimed at the detection of known attacks and their variants, whereas the second approach focuses on the detection of unknown attacks. USBWARE will consist of six independent but complementary detection modules, each detecting attacks based on a different approach or discipline. These modules include novel ideas and algorithms inspired by or already developed within our team's domains of expertise, including cyber security, electrical and signal processing, machine learning, and computational biology. The establishment and maintenance of USBWARE's dynamic and up-to-date attack repository will strengthen the capabilities of the USBWARE detection framework. The attack repository's infrastructure will enable researchers to record, document, create, and simulate existing and new USB-based attacks. This data will be used to maintain the detection framework's updatability by incorporating knowledge regarding new attacks. Based on our experience in the cyber security domain, we aim to design the USBWARE framework so that it has several characteristics that are crucial for this type of cyber-security detection solution. Specifically, the USBWARE framework should be novel, multidisciplinary, trusted, lightweight, extendable, modular, updatable, and adaptable. Major Findings: Based on our initial survey, we have already found more than 23 types of USB-based attacks, divided into six major categories. Our preliminary evaluation and proofs of concept showed that our detection modules can be used for efficient detection of several basic known USB attacks. Further research, development, and enhancement are required so that USBWARE will be capable of covering all of the major known USB attacks and of detecting unknown attacks. Conclusion: USBWARE is a crucial detection framework that must be further enhanced and developed.

Keywords: USB, device, cyber security, attack, detection

Procedia PDF Downloads 357
807 Detection of Important Biological Elements in Drug-Drug Interaction Occurrence

Authors: Reza Ferdousi, Reza Safdari, Yadollah Omidi

Abstract:

Drug-drug interactions (DDIs) are a main cause of adverse drug reactions, and the functional and molecular complexity of drug behavior in the human body makes them hard to prevent and treat. With the aid of new technologies derived from mathematical and computational science, the DDI problem can be addressed with minimal cost and effort. Market basket analysis is known as a powerful method to identify the co-occurrence of items and to discover patterns and frequencies of elements. In this research, we used market basket analysis to identify important bio-elements in DDI occurrence. For this, we collected all known DDIs from DrugBank. The obtained data were analyzed by the market basket analysis method. We investigated all drug-enzyme, drug-carrier, drug-transporter, and drug-target associations. To determine the importance of the extracted bio-elements, the extracted rules were evaluated in terms of confidence and support. Market basket analysis of the over 45,000 known DDIs revealed more than 300 important rules that can be used to identify DDIs; the CYP450 family was the most frequent shared bio-element. We applied the extracted rules to over 2,000,000 unknown drug pairs, leading to the discovery of more than 200,000 potential DDIs. Analysis of the underlying reason behind the DDI phenomenon can help to predict and prevent DDI occurrence. Ranking the extracted rules by their strength can be a supportive tool to predict the outcome of an unknown DDI.
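
A minimal sketch of the market-basket step, using the mlxtend library with a toy, made-up set of bio-element "transactions" (the DrugBank extraction and the actual element sets are not reproduced here):

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical transactions: each interacting drug pair is a "basket"
# of the bio-elements (enzymes, carriers, transporters, targets)
# involved in the interaction.
transactions = [
    ["CYP3A4", "P-gp"],
    ["CYP3A4", "CYP2D6"],
    ["CYP3A4", "P-gp", "OATP1B1"],
    ["CYP2C9", "CYP3A4"],
    ["CYP2D6", "P-gp"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                      columns=te.columns_)

# Frequent bio-element sets, then rules scored by support and confidence.
itemsets = apriori(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```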

Keywords: drug-drug interaction, market basket analysis, rule discovery, important bio-elements

Procedia PDF Downloads 285
806 Role of Physiotherapist: How Their Job and Working Area Could Be Known

Authors: Juan Pablo Hervas-Perez, Jesus Guodemar-Perez, Montserrat Ruiz-Lopez, Elena Sonsoles Rodriguez-Lopez, Noemi Mayoral-Gonzalo, Eduardo Cimadevilla Fernandez-Pola, Mario Caballero-Corella

Abstract:

Physiotherapy is a healthcare discipline that covers many fields of action within the recovery and prevention of health. Some are well known, but others, such as working with newborns and premature children, are not. The physiotherapist's functions are well defined, but the impression among the population is that there are other professionals who can also perform them, and a large part of these functions remains unknown. Objective: To evaluate the sample's level of knowledge about the role of the physiotherapist in general, and more specifically in neonatal intensive care units (NICU), and to estimate their level of familiarity with development centered care (DCC). Method: A descriptive, transversal, observational, and prospective study conducted on a sample of 125 participants. Results: Of the sample studied, 87.2% had already had contact with physiotherapy. 80.9% believed that the physiotherapist's intervention was decisive for the cure, and 84.0% would recommend physiotherapy treatment to others. Of the total surveyed, 98.0% felt that the physiotherapist is the one who should run physiotherapeutic treatments, although 71.0% of votes also attributed this role to other professions. The best-known field of work is rehabilitation (94.0%); neonatology is in 4th place (66.0% of votes). Conclusions: Many areas of physiotherapy work are unknown to a large part of the population, including health workers themselves. Less than half of the sample is familiar with DCC, and only 58% of the interviewed physiotherapists know of it.

Keywords: functions of physiotherapist, neonatal intensive care, physiotherapy, prematurity

Procedia PDF Downloads 297
805 Classification of Multiple Cancer Types with Deep Convolutional Neural Network

Authors: Nan Deng, Zhenqiu Liu

Abstract:

Thousands of patients with metastatic tumors are diagnosed with cancers of unknown primary site each year. The inability to identify the primary cancer site may lead to inappropriate treatment and unexpected prognosis. Nowadays, a large amount of genomic and transcriptomic cancer data has been generated by next-generation sequencing (NGS) technologies, and The Cancer Genome Atlas (TCGA) database has accrued thousands of human cancer tumors and healthy controls, which provides an abundance of resources to differentiate cancer types. Meanwhile, deep convolutional neural networks (CNNs) have shown high accuracy in classification among large numbers of image object categories. Here, we utilize 25 cancer primary tumor types and 3 normal tissue types from TCGA, convert their RNA-Seq gene expression profiles to color images, and train, validate, and test a CNN classifier directly on these images. The performance results show that our CNN classifier can achieve >80% test accuracy on most of the tumors and normal tissues. Since the gene expression pattern of distant metastases is similar to that of their primary tumors, the CNN classifier may provide a potential computational strategy for identifying the unknown primary origin of metastatic cancer in order to plan appropriate treatment for patients.
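
The following is a minimal PyTorch sketch of the image-based classification idea. The 28 classes (25 tumor types plus 3 normal tissues) come from the abstract; the image size, architecture, and the padding of the expression vector into a 3-channel image are assumptions for illustration:

```python
import torch
import torch.nn as nn

N_CLASSES = 28  # 25 primary tumor types + 3 normal tissue types

class ExpressionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, N_CLASSES)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Assumed preprocessing: an expression vector padded/trimmed to
# 64 * 64 * 3 = 12288 values, scaled to [0, 1], and reshaped into a
# 3-channel 64x64 "color image".
expr = torch.rand(8, 12288)            # hypothetical expression batch
images = expr.view(8, 3, 64, 64)

model = ExpressionCNN()
logits = model(images)                 # (8, 28) class scores
print(logits.argmax(dim=1))
```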

Keywords: bioinformatics, cancer, convolutional neural network, deep leaning, gene expression pattern

Procedia PDF Downloads 266
804 Performance Comparison of Wideband Covariance Matrix Sparse Representation (W-CMSR) with Other Wideband DOA Estimation Methods

Authors: Sandeep Santosh, O. P. Sahu

Abstract:

In this paper, a performance comparison of the wideband covariance matrix sparse representation (W-CMSR) method with other existing wideband direction of arrival (DOA) estimation methods is made. W-CMSR relies less on a priori information about the number of incident signals than ordinary subspace-based methods. Consider the perturbation-free covariance matrix of the wideband array output. The diagonal covariance elements are contaminated by unknown noise variance. The covariance matrix of the array output is conjugate symmetric, i.e., its upper-right triangular elements can be represented by the lower-left triangular ones. As the main diagonal elements are contaminated by unknown noise variance, they are skipped, and the lower-left triangular elements are aligned column by column to obtain a measurement vector. Simulation results for W-CMSR are compared with those of other wideband DOA estimation methods, namely the coherent signal subspace method (CSSM), Capon, l1-SVD, and JLZA-DOA. W-CMSR separates two signals very clearly, whereas CSSM, Capon, l1-SVD, and JLZA-DOA fail to separate the two signals clearly, and a number of pseudo peaks exist in the spectrum of l1-SVD.
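
The measurement-vector construction described above (skip the noise-contaminated diagonal, stack the lower-left triangular covariance elements column by column) can be sketched as follows; the toy matrix is ours:

```python
import numpy as np

def cmsr_measurement_vector(R):
    """Stack the strictly lower-triangular elements of a conjugate
    symmetric covariance matrix R column by column, skipping the
    diagonal entries contaminated by the unknown noise variance."""
    m = R.shape[0]
    cols = [R[j + 1:, j] for j in range(m - 1)]  # below-diagonal, per column
    return np.concatenate(cols)

# Toy example with a random conjugate symmetric (Hermitian) matrix.
A = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
R = A @ A.conj().T
y = cmsr_measurement_vector(R)
print(y.shape)  # (6,) = 3 + 2 + 1 elements for a 4x4 matrix
```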

Keywords: W-CMSR, wideband direction of arrival (DOA), covariance matrix, electrical and computer engineering

Procedia PDF Downloads 437
803 Cooperation of Unmanned Vehicles for Accomplishing Missions

Authors: Ahmet Ozcan, Onder Alparslan, Anil Sezgin, Omer Cetin

Abstract:

The use of unmanned systems for different purposes has become very popular over the past decade, and expectations of these systems have shown an incredible increase in parallel. However, meeting the demands of a task is often not possible with a single unmanned vehicle, so it is necessary to use multiple autonomous vehicles with different abilities together in coordination. Using the same type of vehicle together as a swarm helps, in particular, to satisfy the time constraints of missions effectively; in other words, it allows the workload to be shared by a number of homogeneous platforms. Besides, there are many kinds of problems that require the different capabilities of heterogeneous platforms to be used together cooperatively to achieve successful results, and in this case cooperative working brings additional problems beyond those of homogeneous clusters. In the scenario presented as an example problem, an autonomous ground vehicle, which lacks position information, is expected to perform point-to-point navigation without losing its way in a previously unknown labyrinth. Furthermore, the ground vehicle is equipped with very limited sensors, such as ultrasonic sensors that can detect obstacles. It is very hard for the ground vehicle to plan or complete the mission by itself without losing its way in the unknown labyrinth. Thus, in order to assist the ground vehicle, an autonomous aerial drone is also used to solve the problem cooperatively. The autonomous drone also has limited sensors, such as a downward-looking camera and an IMU, and it likewise cannot compute its global position. In this context, the aim is to solve the problem effectively without additional support or input from outside, just benefiting from the capabilities of the two autonomous vehicles. To manage point-to-point navigation in a previously unknown labyrinth, the platforms have to work together in a coordinated manner. In this paper, the cooperative work of heterogeneous unmanned systems is handled in an applied sample scenario, and we describe how an autonomous ground vehicle and an autonomous flying platform can work together in harmony to take advantage of platform-specific capabilities. The difficulties of using multiple heterogeneous autonomous platforms in a mission are put forward, and successful solutions are defined and implemented for problems such as spatially distributed task planning, simultaneous coordinated motion, effective communication, and sensor fusion.

Keywords: unmanned systems, heterogeneous autonomous vehicles, coordination, task planning

Procedia PDF Downloads 102
802 A Gradient Orientation Based Efficient Linear Interpolation Method

Authors: S. Khan, A. Khan, Abdul R. Soomrani, Raja F. Zafar, A. Waqas, G. Akbar

Abstract:

This paper proposes a low-complexity image interpolation method. Image interpolation is used to convert a low-resolution video/image to a high-resolution video/image. The objective of a good interpolation method is to upscale an image in such a way that it provides good edge preservation at very low complexity, so that real-time processing of video frames is possible. However, low-complexity methods tend to provide real-time interpolation at the cost of blurring, jagged edges, and other artifacts due to errors in slope calculation. Non-linear methods, on the other hand, provide better edge preservation, but at the cost of high complexity, and hence they can be considered very far from real-time interpolation. The proposed method is a linear method that uses gradient orientation for slope calculation, unlike conventional linear methods that use the contrast of nearby pixels. Prewitt edge detection is applied to separate uniform regions from edges. Simple line averaging is applied to unknown pixels in uniform regions, whereas unknown edge pixels are interpolated after calculating slopes from the gradient orientations of neighboring known edge pixels. As a post-processing step, a bilateral filter is applied to the interpolated edge regions in order to enhance the interpolated edges.
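
The overall flow (Prewitt edge map to separate uniform regions from edges, line averaging for uniform pixels, orientation-aware handling for edge pixels, bilateral post-filtering) might look like the sketch below. The slope-from-gradient-orientation step is reduced here to directional averaging along the dominant edge direction, which is our simplification, not the paper's exact rule:

```python
import cv2
import numpy as np

def upscale_2x(img):
    img = img.astype(np.float32)
    h, w = img.shape
    up = np.zeros((2 * h, 2 * w), np.float32)
    up[::2, ::2] = img                          # known low-res pixels

    # Prewitt gradients: edge/uniform separation plus orientation.
    kx = np.array([[-1, 0, 1]] * 3, np.float32)
    gx = cv2.filter2D(img, -1, kx)
    gy = cv2.filter2D(img, -1, kx.T)
    mag = np.hypot(gx, gy)
    edge = mag > 0.1 * mag.max()

    for y in range(h - 1):
        for x in range(w - 1):
            if not edge[y, x]:
                # Uniform region: simple line averaging.
                up[2*y, 2*x+1] = (img[y, x] + img[y, x+1]) / 2
                up[2*y+1, 2*x] = (img[y, x] + img[y+1, x]) / 2
                up[2*y+1, 2*x+1] = img[y:y+2, x:x+2].mean()
            else:
                # Edge pixel: average along the edge, i.e. perpendicular
                # to the gradient orientation (assumed rule).
                if abs(gx[y, x]) > abs(gy[y, x]):   # near-vertical edge
                    up[2*y+1, 2*x] = (img[y, x] + img[y+1, x]) / 2
                    up[2*y, 2*x+1] = up[2*y+1, 2*x+1] = up[2*y+1, 2*x]
                else:                               # near-horizontal edge
                    up[2*y, 2*x+1] = (img[y, x] + img[y, x+1]) / 2
                    up[2*y+1, 2*x] = up[2*y+1, 2*x+1] = up[2*y, 2*x+1]

    # Post-processing: bilateral filter to clean up interpolated edges.
    return cv2.bilateralFilter(up, d=5, sigmaColor=25, sigmaSpace=5)

result = upscale_2x(np.random.randint(0, 255, (64, 64)).astype(np.uint8))
```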

Keywords: edge detection, gradient orientation, image upscaling, linear interpolation, slope tracing

Procedia PDF Downloads 231
801 Easymodel: Web-based Bioinformatics Software for Protein Modeling Based on Modeller

Authors: Alireza Dantism

Abstract:

Presently, describing the function of a protein sequence is one of the most common problems in biology. Usually, this problem can be approached by studying the three-dimensional structure of the protein. In the absence of an experimental protein structure, comparative modeling often provides a useful three-dimensional model of the protein, based on at least one known protein structure. Comparative modeling predicts the three-dimensional structure of a given protein sequence (the target) mainly based on its alignment with one or more proteins of known structure (the templates). Comparative modeling consists of five main steps: 1. finding similarity between the target sequence and at least one known template structure; 2. alignment of the target sequence and template(s); 3. building a model based on the alignment with the selected template(s); 4. prediction of model errors; 5. optimization of the built model. There are many computer programs and web servers that automate the comparative modeling process. One of their most important advantages is that they make comparative modeling available to both experts and non-experts, who can easily do their own modeling without the need for programming knowledge; some experts, however, prefer to use programming knowledge and do their modeling manually, because this lets them maximize the accuracy of the modeling. In this study, a web-based tool has been designed to predict the tertiary structure of proteins using the PHP and Python programming languages. This tool is called EasyModel. According to the user's inputs, EasyModel can receive the unknown sequence of interest (which we know as the target), a protein structure file with some percentage of similarity to the target (the template), and so on; it then predicts the tertiary structure of the unknown sequence and presents the results in the form of graphs and constructed protein files.
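
Under the hood, a MODELLER-based comparative modeling run typically reduces to a few lines of its Python API, sketched below with placeholder file and identifier names (the alignment file, template code, and target name are hypothetical, and EasyModel's actual wrapper code is not shown in the abstract):

```python
from modeller import environ
from modeller.automodel import automodel

env = environ()
env.io.atom_files_directory = ['.']            # where template PDB files live

# Assumed inputs: an alignment file 'target-template.ali' aligning the
# target sequence 'target_seq' against the known structure '1abcA'.
mdl = automodel(env,
                alnfile='target-template.ali',
                knowns='1abcA',                # template(s) of known structure
                sequence='target_seq')         # the unknown target sequence

mdl.starting_model = 1
mdl.ending_model = 5                           # build five candidate models
mdl.make()                                     # build, assess, and refine
```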

Keywords: structural bioinformatics, protein tertiary structure prediction, modeling, comparative modeling, modeller

Procedia PDF Downloads 59
800 Unknown Groundwater Pollution Source Characterization in Contaminated Mine Sites Using Optimal Monitoring Network Design

Authors: H. K. Esfahani, B. Datta

Abstract:

Groundwater is one of the most important natural resources in many parts of the world; however, it is widely polluted due to human activities. Currently, effective and reliable groundwater management and remediation strategies are obtained using characterization of groundwater pollution sources, where the measured data at monitoring locations are utilized to estimate the unknown pollutant source location and magnitude. However, accurately identifying the characteristics of contaminant sources is a challenging task due to uncertainties in predicting the source flux injection, the hydro-geological and geo-chemical parameters, and the concentration field measurements. Reactive transport of chemical species in contaminated groundwater systems, especially with multiple species, is a complex and highly non-linear geochemical process. Although sufficient concentration measurement data are essential to accurately identify source characteristics, available data are often sparse and limited in quantity. Therefore, this inverse problem-solving approach for characterizing unknown groundwater pollution sources is often considered ill-posed, complex, and non-unique. Different methods have been utilized to identify pollution sources; however, the linked simulation-optimization approach is one effective method for obtaining acceptable results under uncertainties in complex real-life scenarios. With this approach, the numerical flow and contaminant transport simulation models are externally linked to an optimization algorithm, with the objective of minimizing the difference between the measured concentrations and the estimated pollutant concentrations at the observation locations. Concentration measurement data are very important for accurately estimating pollution source properties; therefore, optimal design of the monitoring network is essential to gather adequate measured data at desired times and locations. Due to budget and physical restrictions, an efficient and effective approach for groundwater pollutant source characterization is to design an optimal monitoring network, especially when only inadequate and arbitrary concentration measurement data are initially available. In this approach, preliminary concentration observation data are utilized for preliminary identification of the source location, magnitude, and duration of source activity, and these results are utilized for monitoring network design. Further, feedback information from the monitoring network is used as input for sequential monitoring network design, to improve the identification of unknown source characteristics. To design an effective monitoring network of observation wells, optimization and interpolation techniques are used. A simulation model should be utilized to accurately describe the aquifer properties in terms of hydro-geochemical parameters and boundary conditions. However, the simulation of the transport processes becomes complex when the pollutants are chemically reactive. Three-dimensional transient flow and the reactive contaminant transport process are considered. The proposed methodology uses HYDROGEOCHEM 5.0 (HGCH) as the simulation model for flow and transport processes with multiple chemically reactive species. Adaptive Simulated Annealing (ASA) is used as the optimization algorithm in the linked simulation-optimization methodology to identify the unknown source characteristics. Therefore, the aim of the present study is to develop a methodology to optimally design an effective monitoring network for pollution source characterization with reactive species in polluted aquifers. The performance of the developed methodology will be evaluated for an illustrative polluted aquifer site, for example, an abandoned mine site in Queensland, Australia.
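
A minimal sketch of the linked simulation-optimization loop follows, with a mock forward model standing in for HYDROGEOCHEM and SciPy's dual annealing standing in for ASA; both substitutions, and all numbers, are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import dual_annealing

obs_locations = np.array([[100.0, 50.0], [220.0, 80.0], [150.0, 160.0]])
obs_conc = np.array([3.2, 1.1, 0.4])          # hypothetical measured mg/L

def forward_model(source_xy, flux, locations):
    """Mock transport simulator: steady-state concentration decaying
    with distance from the source (stand-in for HYDROGEOCHEM runs)."""
    d = np.linalg.norm(locations - source_xy, axis=1)
    return flux / (1.0 + 0.05 * d) ** 2

def objective(params):
    """Misfit between measured and simulated concentrations."""
    source_xy, flux = params[:2], params[2]
    sim = forward_model(source_xy, flux, obs_locations)
    return np.sum((sim - obs_conc) ** 2)

# Unknowns: source x, source y, and source flux magnitude.
bounds = [(0.0, 300.0), (0.0, 200.0), (0.1, 50.0)]
result = dual_annealing(objective, bounds, seed=1)
print("estimated source location and flux:", result.x)
```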

Keywords: monitoring network design, source characterization, chemical reactive transport process, contaminated mine site

Procedia PDF Downloads 209
799 Identification, Isolation and Characterization of Unknown Degradation Products of Cefprozil Monohydrate by HPTLC

Authors: Vandana T. Gawande, Kailash G. Bothara, Chandani O. Satija

Abstract:

The present research work aimed to determine the stability of cefprozil monohydrate (CEFZ) under the stress degradation conditions recommended by International Conference on Harmonization (ICH) guideline Q1A(R2). Forced degradation studies were carried out under hydrolytic, oxidative, photolytic, and thermal stress conditions. The drug was found to be susceptible to degradation under all stress conditions. Separation was carried out using a high-performance thin-layer chromatography (HPTLC) system. Aluminum plates pre-coated with silica gel 60F254 were used as the stationary phase. The mobile phase consisted of ethyl acetate: acetone: methanol: water: glacial acetic acid (7.5:2.5:2.5:1.5:0.5 v/v). Densitometric analysis was carried out at 280 nm. The system was found to give a compact spot for cefprozil monohydrate (Rf 0.45). The linear regression analysis data showed a good linear relationship in the concentration range of 200-5,000 ng/band for cefprozil monohydrate. Percent recovery for the drug was found to be in the range of 98.78-101.24. The method was found to be reproducible, with a percent relative standard deviation (%RSD) for intra- and inter-day precision of < 1.5% over the said concentration range. The method was validated for precision, accuracy, specificity, and robustness, and has been successfully applied to the analysis of the drug in tablet dosage form. Three unknown degradation products formed under the various stress conditions were isolated by preparative HPTLC and characterized by mass spectrometric studies.

Keywords: cefprozil monohydrate, degradation products, HPTLC, stress study, stability indicating method

Procedia PDF Downloads 265
798 Analysis of the Scattered Fields by Dielectric Sphere Inside Different Dielectric Mediums: The Case of the Source and Observation Point Is Reciprocal

Authors: Emine Avşar Aydin, Nezahat Günenç Tuncel, A. Hamit Serbest

Abstract:

The electromagnetic scattering from a canonical structure is an important issue in electromagnetic theory. In this study, the electromagnetic scattering from a dielectric sphere with oblique incidence is investigated. The incident field is considered to be an H-polarized plane wave. The scattered and transmitted field expressions are written with unknown coefficients, and the unknown coefficients are obtained by using the exact boundary conditions. Then, the sphere is considered to have a frequency-dependent dielectric permittivity, with the frequency dependence described by the Cole-Cole model. The far scattered field expressions are found for different incidence angles in the 1-8 GHz frequency range. The observation point is at an angular distance of pi from the incident wave; while the incident wave arrives at a certain angle, the observation point sweeps from 0 to 360 degrees. Accordingly, the scattered field amplitude is maximum at the location of the incident wave and minimum directly across from it. The scattered fields are also plotted versus frequency to show the frequency dependence explicitly. Graphics are shown for some incidence angles and compared with Harrington's solution. Thus, the results are obtained faster and more reliably with reciprocal rotation. It is expected that when another sphere with different properties is present inside the outer sphere, the presence and location of that sphere will be detected faster. In addition, this study may lead to biomedical applications in the future.
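
The Cole-Cole permittivity model referred to above has the standard form (notation assumed, not taken from the paper):

```latex
% Cole-Cole model of frequency-dependent complex permittivity:
%   eps_inf: high-frequency permittivity, eps_s: static permittivity,
%   tau: relaxation time, alpha in [0, 1): distribution parameter
%   (alpha = 0 recovers the Debye model).
\[
  \varepsilon(\omega) = \varepsilon_{\infty}
    + \frac{\varepsilon_{s} - \varepsilon_{\infty}}
           {1 + (j\omega\tau)^{\,1-\alpha}}
\]
```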

Keywords: scattering, dielectric sphere, oblique incidence, reciprocal rotation

Procedia PDF Downloads 264
797 Prediction of Pounding between Two SDOF Systems by Using Link Element Based On Mathematic Relations and Suggestion of New Equation for Impact Damping Ratio

Authors: Seyed M. Khatami, H. Naderpour, R. Vahdani, R. C. Barros

Abstract:

Many previous studies have been carried out to calculate the impact force and the dissipated energy between two neighboring buildings that collide with each other during seismic excitation. Numerical studies are an important part of impact research, and several researchers have tried to simulate the impact using different formulas. Estimation of the impact force and the dissipated energy depends significantly on a number of impact parameters: the masses of the bodies, the stiffness of the spring, the coefficient of restitution, the damping ratio of the dashpot, and the impact velocity are the known and unknown parameters used to simulate the impact and measure the energy dissipated during collision. Collision is usually represented by a force-displacement hysteresis curve, where the enclosed area of the hysteresis loop gives the energy dissipated during impact. In this paper, the effect of using different types of impact models on the calculated impact force is investigated. To increase the accuracy of the impact model and to optimize the results of the simulations, a new damping equation is proposed and validated to obtain the best estimates of impact force and dissipated energy, demonstrating the accuracy of the suggested equation of motion in comparison with other formulas. This relation is called "n-m". Based on the mathematical relation, an initial value is selected for the mentioned coefficients and the kinetic energy loss is calculated. After each simulation, the kinetic energy loss and the energy dissipation are compared with each other; if they are equal, the selected parameters are correct, and if not, the parameter constants are modified and a new analysis is performed. Finally, two unknown parameters are suggested to estimate the impact force and calculate the dissipated energy.
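
The calibration loop described above (simulate, compare the dissipated energy against the theoretical kinetic energy loss, adjust, repeat) can be sketched with a simple Kelvin-Voigt impact element. The masses, stiffness, and restitution value below are hypothetical, and the paper's own "n-m" damping relation is not reproduced here:

```python
import numpy as np

m1, m2 = 1.0e5, 1.5e5        # colliding masses (kg), hypothetical
k = 2.5e8                    # impact spring stiffness (N/m), hypothetical
e = 0.65                     # coefficient of restitution, hypothetical
v_rel = 0.5                  # relative impact velocity (m/s)

m_eff = m1 * m2 / (m1 + m2)
target_loss = 0.5 * m_eff * v_rel**2 * (1 - e**2)   # theoretical KE loss

def simulate_dissipation(c, dt=1e-6):
    """Integrate one contact phase of F = k*d + c*d_dot and return the
    energy dissipated by the dashpot during the collision."""
    d, d_dot, energy = 0.0, v_rel, 0.0
    while d >= 0.0:                      # contact lasts while penetration > 0
        F = k * d + c * d_dot
        d_ddot = -F / m_eff
        energy += c * d_dot**2 * dt      # dashpot dissipation increment
        d_dot += d_ddot * dt
        d += d_dot * dt
    return energy

# Iterative calibration: adjust the damping constant until the dissipated
# energy matches the kinetic energy loss implied by the restitution value.
c = 1.0e5
for _ in range(60):
    diss = simulate_dissipation(c)
    if abs(diss - target_loss) / target_loss < 1e-3:
        break
    c *= target_loss / diss              # proportional correction

print("calibrated damping coefficient:", c)
```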

Keywords: impact force, dissipated energy, kinetic energy loss, damping relation

Procedia PDF Downloads 523