Search results for: SIFT
10 Aerial Survey and 3D Scanning Technology Applied to the Survey of Cultural Heritage of Su-Paiwan, an Aboriginal Settlement, Taiwan
Authors: April Hueimin Lu, Liangj-Ju Yao, Jun-Tin Lin, Susan Siru Liu
Abstract:
This paper discusses the application of aerial survey technology and 3D laser scanning technology in the surveying and mapping of the settlements and slate houses of the old Taiwanese aborigines. The relics of the old Taiwanese aborigines, with thousands of years of history, are widely distributed in the deep mountains of Taiwan, covering a vast area with inconvenient transportation. When constructing the basic data of cultural assets, it is necessary to apply new technology to carry out efficient and accurate settlement mapping. In this paper, taking the old Paiwan settlement as an example, an aerial survey of the settlement of about 5 hectares and a 3D laser scan of a slate house were carried out. The resulting orthophoto image was used as an important basis for drawing the settlement map. The 3D landscape data of topography and buildings derived from the aerial survey are important for subsequent preservation planning, while the 3D scan of the building provides a more detailed record of architectural forms and materials. The 3D settlement data from the aerial survey can be further applied to 3D virtual models and animations of the settlement for virtual presentation. The information from the 3D scanning of the slate house can also be used for digital archives and data queries through network resources. The results of this study show that, in large-scale settlement surveys, aerial survey technology is suited to constructing the topography of settlements with buildings and the spatial information of the landscape, while 3D scanning is suited to small-scale recording of individual buildings.
This application of 3D technology greatly increases the efficiency and accuracy of survey and mapping work on aboriginal settlements and is of great help for further preservation planning and the rejuvenation of aboriginal cultural heritage.
Keywords: aerial survey, 3D scanning, aboriginal settlement, settlement architecture cluster, ecological landscape area, old Paiwan settlements, slate house, photogrammetry, SfM, MVS, point cloud, SIFT, DSM, 3D model
Procedia PDF Downloads 165
9 Urinary Volatile Organic Compound Testing in Fast-Track Patients with Suspected Colorectal Cancer
Authors: Godwin Dennison, C. E. Boulind, O. Gould, B. de Lacy Costello, J. Allison, P. White, P. Ewings, A. Wicaksono, N. J. Curtis, A. Pullyblank, D. Jayne, J. A. Covington, N. Ratcliffe, N. K. Francis
Abstract:
Background: Colorectal symptoms are common but only infrequently represent serious pathology, including colorectal cancer (CRC). A large number of invasive tests are presently performed for reassurance. We investigated the feasibility of urinary volatile organic compound (VOC) testing as a potential triage tool in patients fast-tracked for assessment for possible CRC. Methods: A prospective, multi-centre, observational feasibility study was performed across three sites. Patients referred on NHS fast-track pathways for potential CRC provided a urine sample which underwent Gas Chromatography Mass Spectrometry (GC-MS), Field Asymmetric Ion Mobility Spectrometry (FAIMS), and Selected Ion Flow Tube Mass Spectrometry (SIFT-MS) analysis. Patients underwent colonoscopy and/or CT colonography and were grouped as either CRC, adenomatous polyp(s), or controls to explore the diagnostic accuracy of VOC output data supported by an artificial neural network (ANN) model. Results: 558 patients participated, with CRC diagnosed in 23 (4.1%). 59% of colonoscopies and 86% of CT colonographies showed no abnormalities. Urinary VOC testing was feasible, acceptable to patients, and applicable within the clinical fast-track pathway. GC-MS showed the highest clinical utility for CRC and polyp detection vs. controls (sensitivity=0.878, specificity=0.882, AUROC=0.884). Conclusion: Urinary VOC testing and analysis are feasible within NHS fast-track CRC pathways. Clinically meaningful differences between patients with cancer, polyps, or no pathology were identified, suggesting that VOC analysis may have future utility as a triage tool. Acknowledgment: Funded by an NIHR Research for Patient Benefit grant (ref: PB-PG-0416-20022).
Keywords: colorectal cancer, volatile organic compound, gas chromatography mass spectrometry, field asymmetric ion mobility spectrometry, selected ion flow tube mass spectrometry
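The diagnostic-accuracy metrics reported above (sensitivity, specificity, AUROC) can be computed from a classifier's outputs as in this sketch; the labels and scores below are invented for illustration, not study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Invented labels (1 = CRC/polyp, 0 = control) and model scores.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_score = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1])
y_pred = (y_score >= 0.5).astype(int)  # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
auroc = roc_auc_score(y_true, y_score)  # threshold-independent
print(sensitivity, specificity, auroc)
```

Note that sensitivity and specificity depend on the chosen threshold, while AUROC summarizes performance over all thresholds.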
Procedia PDF Downloads 89
8 Identifying Necessary Words for Understanding Academic Articles in English as a Second or a Foreign Language
Authors: Stephen Wagman
Abstract:
This paper identifies three common structures in English sentences that are important for understanding academic texts, regardless of the characteristics or background of the readers or whether they are reading English as a second or a foreign language. Adapting a model from the Humanities, the explication of texts used in literary studies, the paper analyses sample sentences to reveal structures that enable the reader not only to decide which words are necessary for understanding the main ideas but to make the decision without knowing the meaning of the words. By their very syntax, noun structures point to the key word for understanding them. As a rule, the key noun is followed by easily identifiable prepositions, relative pronouns, or verbs and preceded by single adjectives. With few exceptions, the modifiers are unnecessary for understanding the idea of the sentence. In addition, sentences are often structured by lists in which the items frequently consist of parallel groups of words. The principle of a list is that all the items are similar in meaning, and it is not necessary to understand all of the items to understand the point of the list. This principle is especially important when the items are long or there is more than one list in the same sentence. The similarity in meaning of these items enables readers to reduce sentences that are hard to grasp to an understandable core without excessive use of a dictionary. Finally, the idea of subordination and the identification of the subordinate parts of sentences through connecting words make it possible for readers to focus on main ideas without having to sift through the less important and more numerous secondary structures. Sometimes a main idea requires a subordinate one to complete its meaning, but usually subordinate ideas are unnecessary for understanding the main point of the sentence and its part in the development of the argument from sentence to sentence.
Moreover, the connecting words themselves indicate the functions of the subordinate structures, which most frequently show similarity and difference or reasons and results. Recognition of all of these structures can enable students not only to read more efficiently but also to focus their attention on the development of the argument; and this, rather than a multitude of unknown vocabulary items, the repetition in lists, or the subordination in sentences, is the one necessary element for comprehension of academic articles.
Keywords: development of the argument, lists, noun structures, subordination
Procedia PDF Downloads 246
7 Role of Estrogen Receptor-alpha in Mammary Carcinoma by Single Nucleotide Polymorphisms and Molecular Docking: An In-silico Analysis
Authors: Asif Bilal, Fouzia Tanvir, Sibtain Ahmad
Abstract:
Estrogen receptor alpha, also known as estrogen receptor 1 (ESR1), is strongly implicated in the risk of mammary carcinoma. The objectives of this study were to identify non-synonymous SNPs (nsSNPs) of the estrogen receptor and their association with breast cancer, and to identify the chemotherapeutic responses of phytochemicals against it through an in-silico study design. For this purpose, different online tools were used: SIFT, PolyPhen, PolyPhen-2, fuNTRp, and SNAP2 to identify pathogenic SNPs; SNP&GO, PhD-SNP, PredictSNP, MAPP, SNAP, Meta-SNP, and PANTHER to find disease-associated SNPs; and MuPro, I-Mutant, and ConSurf to check protein stability. Post-translational modifications (PTMs) were detected by MusiteDeep, protein secondary structure by SOPMA, protein-protein interaction by STRING, and molecular docking by PyRx. Seven SNPs with rsIDs rs760766066, rs779180038, rs956399300, rs773683317, rs397509428, rs755020320, and rs1131692059, corresponding to the mutations I229T, R243C, Y246H, P336R, Q375H, R394S, and R394H, respectively, were found to be completely deleterious. Among the PTMs, glycosylation was found 96 times, ubiquitination 30 times, and acetylation once; no hydroxylation or phosphorylation was found. The predicted secondary structure consisted of alpha helix (Hh) 28%, extended strand (Ee) 21%, beta turn (Tt) 7.89%, and random coil (Cc) 44.11%. Protein-protein interaction analysis revealed strong interactions with myeloperoxidase, xanthine dehydrogenase, carboxylesterase 1, glutathione S-transferase Mu 1, and the estrogen receptors. For molecular docking, Asiaticoside, Ilekudinuside, Robustoflavone, Irinotecan, Withanolides, and 9-amin0-5 were used as ligands extracted from phytochemicals and docked with the protein. Strong binding interactions (from -8.6 to -9.7) of these phytochemical ligands were found at wild-type ESR1 and two mutants (I229T and R394S).
It is concluded that these SNPs found in ESR1 are involved in breast cancer and that the given phytochemicals may be highly useful against breast cancer as chemotherapeutic agents. Further in vitro and in vivo analyses should be performed to confirm these interactions.
Keywords: breast cancer, ESR1, phytochemicals, molecular docking
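PyRx reports AutoDock-Vina-style binding energies, where more negative values indicate stronger predicted binding, so comparing ligands across receptor variants reduces to sorting the score table. The pairings and values below are invented placeholders echoing the reported range, not the study's actual output.

```python
# Illustrative docking scores (binding energies; more negative = stronger
# predicted binding). Invented values, not the study's actual PyRx output.
scores = {
    ("Robustoflavone", "ESR1_wild"):  -9.7,
    ("Irinotecan",     "ESR1_R394S"): -9.4,
    ("Asiaticoside",   "ESR1_wild"):  -9.1,
    ("Withanolides",   "ESR1_I229T"): -8.6,
}

# Rank ligand-receptor pairs from strongest to weakest predicted binding.
ranked = sorted(scores.items(), key=lambda kv: kv[1])
for (ligand, receptor), energy in ranked:
    print(f"{ligand:15s} {receptor:12s} {energy:5.1f}")
```

Sorting ascending puts the most negative (strongest) complex first, which is how a hit list for follow-up in vitro work would typically be ordered.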
Procedia PDF Downloads 68
6 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on encoding schemes (e.g. Fisher Vector, Vector of Locally Aggregated Descriptors) with low-level image features (e.g. SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, scenes contain scattered objects of different sizes, categories, layouts, numbers, and so on. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit object-centric and scene-centric information, two CNNs, trained separately on the ImageNet and Places datasets, are used as pre-trained models to extract deep convolutional features at multiple scales. This produces dense local activations. By analyzing the performance of the CNNs at multiple scales, it is found that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable, since each object in a scene appears at its own specific scale. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences. Hence, scale-wise normalization followed by average pooling balances the influence of each scale, since a different number of features is extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which suggests that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
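The aggregation step described above (per-scale Fisher vectors, scale-wise normalization, then average pooling) can be sketched in a few lines of NumPy and scikit-learn. This is a minimal illustration using random stand-ins for the dense CNN activations and only the first-order Fisher terms; the paper's full representation also accumulates second-order statistics.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(feats, gmm):
    """First-order Fisher vector (mean deviations only), normalized."""
    q = gmm.predict_proba(feats)                      # (N, K) posteriors
    diff = feats[:, None, :] - gmm.means_[None]       # (N, K, D)
    fv = (q[..., None] * diff / np.sqrt(gmm.covariances_)[None]).sum(0)
    fv /= feats.shape[0]
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalization
    return fv.ravel() / (np.linalg.norm(fv) + 1e-12)  # per-scale L2 norm

rng = np.random.default_rng(0)
# Stand-ins for dense CNN activations extracted at two scales.
scale_feats = [rng.normal(size=(200, 8)), rng.normal(size=(300, 8))]
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(np.vstack(scale_feats))

# Per-scale FVs are L2-normalized first, so average pooling balances the
# influence of each scale despite the differing feature counts.
image_rep = np.mean([fisher_vector(f, gmm) for f in scale_feats], axis=0)
```

The resulting `image_rep` would then be the input to a linear SVM, as in the paper's third step.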
Procedia PDF Downloads 331
5 In Silico Analysis of Deleterious nsSNPs (Missense) of Dihydrolipoamide Branched-Chain Transacylase E2 Gene Associated with Maple Syrup Urine Disease Type II
Authors: Zainab S. Ahmed, Mohammed S. Ali, Nadia A. Elshiekh, Sami Adam Ibrahim, Ghada M. El-Tayeb, Ahmed H. Elsadig, Rihab A. Omer, Sofia B. Mohamed
Abstract:
Maple syrup urine disease (MSUD) is an autosomal recessive disease that causes a deficiency in the enzyme branched-chain alpha-keto acid (BCKA) dehydrogenase. The development of the disease has been associated with SNPs in the DBT gene. Despite that, the computational analysis of SNPs in coding and noncoding regions and their functional impact at the protein level remains unexplored. Hence, in this study, we carried out a comprehensive in silico analysis of missense SNPs predicted to have a harmful influence on DBT structure and function. Eight different in silico prediction algorithms (SIFT, PROVEAN, MutPred, SNP&GO, PhD-SNP, PANTHER, I-Mutant 2.0, and MUpro) were used for screening nsSNPs in DBT. Additionally, to understand the effect of mutations on the strength of the interactions that hold the protein together, the ELASPIC server was used. Finally, the 3D structure of DBT was modelled using the Mutation3D and Chimera servers. Our results showed that a total of 15 nsSNPs confirmed by at least four tools (R301C, R376H, W84R, S268F, W84C, F276C, H452R, R178H, I355T, V191G, M444T, T174A, I200T, R113H, and R178C) were found damaging and can lead to a shift in DBT structure. Moreover, we found 7 nsSNPs located on the 2-oxoacid_dh catalytic domain, 5 nsSNPs on the E_3 binding domain, and 3 nsSNPs on the biotin domain, so these nsSNPs may alter the putative structure of DBT's domains. Furthermore, we detected that all these nsSNPs lie on core residues of the protein and have the ability to change its stability. Additionally, we found that W84R, S268F, and M444T are highly significant and affect leucine, isoleucine, and valine, which may reduce or disrupt the function of the E2 subunit of the BCKD complex, which the DBT gene encodes. In conclusion, based on our extensive in-silico analysis, we report 15 nsSNPs with a possible association with protein deterioration and disease-causing abilities.
These candidate SNPs can aid future studies on Maple Syrup Urine Disease type II at the genetic level.
Keywords: DBT gene, ELASPIC, in silico analysis, UCSF Chimera
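The "confirmed by four tools" criterion above amounts to a consensus filter over per-tool predictions, which can be sketched as follows; the variant names and calls below are hypothetical stand-ins, not the study's actual tool outputs.

```python
# Labels that the various predictors use for a harmful call.
DELETERIOUS = {"damaging", "deleterious", "disease"}

# Hypothetical per-tool calls for a few DBT variants (illustrative only).
predictions = {
    "R301C": ["damaging", "deleterious", "disease", "damaging"],
    "W84R":  ["damaging", "deleterious", "disease", "damaging"],
    "T174A": ["tolerated", "deleterious", "disease", "damaging"],
    "A99V":  ["tolerated", "neutral", "neutral", "damaging"],
}

def consensus_damaging(preds, min_tools=4):
    """Keep variants called deleterious by at least `min_tools` predictors."""
    return sorted(v for v, calls in preds.items()
                  if sum(c in DELETERIOUS for c in calls) >= min_tools)

print(consensus_damaging(predictions))
```

Requiring agreement across independent predictors is a common way to reduce the false-positive rate of any single in silico tool.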
Procedia PDF Downloads 201
4 Postmortem Genetic Testing to Sudden and Unexpected Deaths Using the Next Generation Sequencing
Authors: Eriko Ochiai, Fumiko Satoh, Keiko Miyashita, Yu Kakimoto, Motoki Osawa
Abstract:
Sudden and unexpected deaths from unknown causes occur in infants and youths. Recently, molecular links between a portion of these deaths and several genetic diseases have been examined postmortem. For instance, hereditary long QT syndrome and Brugada syndrome are occasionally fatal through critical ventricular tachyarrhythmia. Because there are a large number of target genes responsible for such diseases, conventional analysis using the Sanger method has been laborious. In this report, we attempted to analyze sudden deaths comprehensively using the next generation sequencing (NGS) technique. Multiplex PCR on each subject's DNA was performed using the Ion AmpliSeq Library Kit 2.0 and the Ion AmpliSeq Inherited Disease Panel (Life Technologies). After the library was constructed by emulsion PCR, the amplicons were sequenced for 500 flows on the Ion Personal Genome Machine System (Life Technologies) according to the manufacturer's instructions. SNPs and indels were called from the sequence reads mapped to the hg19 reference sequence. This project has been approved by the ethical committee of Tokai University School of Medicine. As a representative case, molecular analysis of a 40-year-old male who had received a diagnosis of Brugada syndrome demonstrated a total of 584 SNPs or indels. Non-synonymous and frameshift nucleotide substitutions were selected in the coding regions of the heart-disease-related genes ANK2, AKAP9, CACNA1C, DSC2, KCNQ1, MYLK, SCN1B, and STARD3. In particular, the c.629T>C transition in exon 3 of the SCN1B gene, resulting in a Leu210-to-Pro (L210P) substitution, was predicted to be "damaging" by the SIFT program. Because this mutation has not been reported, it remains unclear whether the substitution is pathogenic. Sudden death in which the cause cannot be determined constitutes one of the most important unsolved subjects in forensic pathology. The Ion AmpliSeq Inherited Disease Panel can amplify the exons of 328 genes at one time.
We recognize the difficulty of selecting the true causal variant from a number of candidates, but postmortem genetic testing using NGS analysis deserves consideration as a diagnostic tool. We are now extending this analysis to suspected SIDS subjects and young sudden death victims.
Keywords: postmortem genetic testing, sudden death, SIDS, next generation sequencing
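The variant triage described above (selecting non-synonymous and frameshift substitutions falling in heart-disease-related genes from the called SNPs and indels) can be sketched as a simple filter. The variant records below are invented for illustration; the gene panel is the list named in the abstract.

```python
# Panel of heart-disease-related genes named in the abstract.
HEART_PANEL = {"ANK2", "AKAP9", "CACNA1C", "DSC2", "KCNQ1",
               "MYLK", "SCN1B", "STARD3"}
# Variant effect classes worth following up.
KEEP = {"non-synonymous", "frameshift"}

# Invented variant calls, as might come from an annotated NGS run.
variants = [
    {"gene": "SCN1B", "effect": "non-synonymous", "hgvs": "c.629T>C"},
    {"gene": "SCN1B", "effect": "synonymous",     "hgvs": "c.33C>T"},
    {"gene": "TTN",   "effect": "non-synonymous", "hgvs": "c.100A>G"},
    {"gene": "DSC2",  "effect": "frameshift",     "hgvs": "c.631delG"},
]

# Keep only potentially pathogenic changes inside the panel.
candidates = [v for v in variants
              if v["gene"] in HEART_PANEL and v["effect"] in KEEP]
for v in candidates:
    print(v["gene"], v["hgvs"])
```

In a real pipeline the surviving candidates would then be scored by predictors such as SIFT, as done for the L210P substitution above.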
Procedia PDF Downloads 358
3 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables: historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting that used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words frequently mistranscribed were recorded and replaced throughout all other documents. Then, to remove banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker) and all paragraphs with fewer than 300 characters were removed. Second, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview. Every proper noun was put into a data structure corresponding to its respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours' worth of data to 20. The data was further processed through light manual observation: any summaries that fit the criteria of the proposed deliverable were selected, along with their locations within the document. This narrowed the data down to about 5 hours' worth of processing.
The qualitative researchers were then able to find 8 more connections in addition to the previous 4, exceeding the minimum quota of 3 to satisfy the grant. Major findings of the study and subsequent curation of this methodology raised a conceptual point crucial to working with qualitative data of this magnitude. In the use of artificial intelligence there is a general trade-off in a model between breadth of knowledge and specificity. If the model is too general, the user risks leaving out important data; if the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered beyond grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Second, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding
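Two of the early pipeline stages above (splitting transcripts into speaker turns and dropping those under 300 characters, then collecting proper nouns) can be sketched as below. The transcript text is invented, and the capitalized-word heuristic is a simple stand-in for the study's actual keyword extractor.

```python
import re

# Invented transcript: one short banter turn and one substantive turn.
transcript = (
    "ALICE: Well, you know, yeah.\n"
    "BOB: The project began at Bell Labs, where the team adapted an "
    "oscilloscope display into what became an interactive drawing tool. "
    + "It ran on early minicomputers and the code later moved to Unix. " * 3
)

# Split on change of speaker (one turn per line here) and drop short turns.
paragraphs = [p.strip() for p in transcript.split("\n")]
kept = [p for p in paragraphs if len(p) >= 300]

def proper_nouns(text):
    """Naive heuristic: capitalized words that do not start a sentence."""
    nouns = set()
    for sentence in re.split(r"[.!?]\s+", text):
        for word in sentence.split()[1:]:
            word = word.strip(",.;:!?")
            if re.fullmatch(r"[A-Z][a-z]+", word):
                nouns.add(word)
    return sorted(nouns)

print(len(kept), proper_nouns(kept[0]) if kept else [])
```

In the study, the proper nouns gathered per interview were then used to decide which paragraphs to pass to the B.A.R.T. summarizer; the heuristic here also picks up some false positives (e.g. mid-sentence "The"), which is why a real keyword extractor was used.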
Procedia PDF Downloads 28
2 Development of a Miniature Laboratory Lactic Goat Cheese Model to Study the Expression of Spoilage by Pseudomonas Spp. In Cheeses
Authors: Abirami Baleswaran, Christel Couderc, Loubnah Belahcen, Jean Dayde, Hélène Tormo, Gwénaëlle Jard
Abstract:
Cheeses are often reported to be spoiled by Pseudomonas spp., responsible for defects in appearance, texture, taste, and smell, leading to their withdrawal from sale and even their destruction. Despite preventive actions, problems linked to Pseudomonas spp. are difficult to control because of the lack of knowledge and control of these contaminants during cheese manufacturing. Lactic goat cheese producers are not spared by this problem and are looking for solutions to decrease the number of spoiled cheeses. To explore different hypotheses, experiments are needed. However, cheese-making experiments at the pilot scale are expensive and time-consuming. Thus, there is a real need to develop a miniature cheese model system under controlled conditions. In a previous study, several miniature cheese models corresponding to different types of commercial cheeses were developed for different purposes. The models were, for example, used to study the influence on cheese of milk, starter cultures, pathogen-inhibiting additives, enzymatic reactions, microflora, and the freezing process. Nevertheless, no miniature model has been described for lactic goat cheese. The aim of this work was to develop a miniature cheese model system under controlled laboratory conditions that resembles commercial lactic goat cheese, in order to study Pseudomonas spp. spoilage during the manufacturing and ripening process. First, a protocol for the preparation of miniature cheeses (3.5 times smaller than a commercial one) was designed based on the cheese factory manufacturing process. The process was adapted from the "Rocamadour" technology and involves maturation of pasteurized milk, coagulation, removal of whey by centrifugation, moulding, and ripening in a small-scale cellar. Microbiological (total bacterial count, yeasts, moulds) and physicochemical (pH, salt-in-moisture, moisture in fat-free substance) analyses were performed at four key stages of the process (before salting, after salting, first day of ripening, and end of ripening).
Factory and miniature cheese volatilomes were also obtained by full-scan SIFT-MS analysis of the cheeses. Then, Pseudomonas spp. strains isolated from contaminated cheeses were selected based on their origin, their ability to produce pigments, and their enzymatic activities (proteolytic, lecithinase, and lipolytic). Factory and miniature curds were inoculated by spotting the selected strains on the cheese surface. The expression of cheese spoilage was evaluated by counting the level of Pseudomonas spp. during ripening and by visual observation, including under a UV lamp. The physicochemical and microbiological compositions of the miniature cheeses made it possible to confirm that the miniature process resembles the factory process. As expected, differences in volatilomes were observed, probably because the miniature cheeses are made using pasteurized milk to better control the microbiological conditions, and also because the small format of the cheese probably induces differences during ripening, even though the humidity and temperature in the cellar were quite similar. The spoilage expression of Pseudomonas spp. was observed in both miniature and factory cheeses. This confirms that the proposed model is suitable for the preparation of miniature cheese specimens for the study of Pseudomonas spp. spoilage in lactic cheeses. This kind of model could be deployed for other applications and other types of cheese.
Keywords: cheese, miniature, model, Pseudomonas spp., spoilage
Procedia PDF Downloads 133
1 The Integration of Apps for Communicative Competence in English Teaching
Authors: L. J. de Jager
Abstract:
In the South African English school curriculum, one of the aims is to achieve communicative competence, the knowledge of using language competently and appropriately in a speech community. Communicatively competent speakers should not only produce grammatically correct sentences but also produce contextually appropriate sentences for various purposes and in different situations. As most speakers of English are non-native speakers, achieving communicative competence remains a complex challenge. Moreover, the changing needs of society necessitate not merely language proficiency, but also technological proficiency. One of the burning issues in the South African educational landscape is the replacement of the standardised literacy model by the pedagogy of multiliteracies, which incorporates, by default, the exploration of technological text forms that are part of learners' everyday lives. It foresees learners as decoders, encoders, and manufacturers of their own futures by exploiting technological possibilities to constantly create and recreate meaning. As such, 21st-century learners will feel comfortable working with multimodal texts that are intrinsically part of their lives and, by doing so, become authors of their own learning experiences, while teachers may become agents supporting learners to discover their capacity to acquire new digital skills for the century of multiliteracies. The aim is transformed practice, where learners use their skills, ideas, and knowledge in new contexts. This paper reports on a research project on the integration of technology for language learning, based on the technological pedagogical content knowledge framework, conceptually founded in the theory of multiliteracies, which aims to achieve communicative competence. The qualitative study uses the community of inquiry framework to answer the research question: How does the integration of technology transform language teaching of preservice teachers?
Pre-service teachers in the Postgraduate Certificate in Education programme with English as methodology were purposively selected to source and evaluate apps for teaching and learning English. The participants collaborated online in a dedicated Blackboard module, using discussion threads to sift through applicable apps and develop interactive lessons using them. The selected apps were entered onto a predesigned Qualtrics form. Data from the online discussions, focus group interviews, and reflective journals were thematically and inductively analysed to determine the participants' perceptions and experiences when integrating technology in lesson design, and the extent to which communicative competence was achieved when using these apps. Findings indicate transformed practice among participants and research team members alike, with better-than-average technology acceptance and integration. Participants found value in online collaboration to develop and improve their own teaching practice by directly experiencing the benefits of integrating e-learning into the teaching of languages. It could not, however, be clearly determined whether communicative competence was improved. The findings of the project may inform future e-learning activities, thus supporting student learning and development in follow-up cycles of the project.
Keywords: apps, communicative competence, English teaching, technology integration, technological pedagogical content knowledge
Procedia PDF Downloads 163