Search results for: multilocus sequence typing
154 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data
Authors: S. Jurado, E. Pazmino
Abstract:
Determination of the medial axis of a porous media sample is a non-trivial problem of interest for several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available to researchers are limited and often restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. Subsequently, the algorithm identifies the layer of void voxels next to the solid boundaries. An iterative process removes or 'burns' void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize execution time and computer memory use, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn layer data during the iterative process. The medial axis was determined by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and used to determine the pore-throat size distribution. Graphical user interface software was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software accepts HRXMT data as input to calculate porosity, medial axis, and pore-throat size distribution and provides output in tabular and graphical formats. Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution and porosity determination of 100³, 320³ and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data in the academic community.
Keywords: medial axis, pore-throat distribution, porosity, porous media
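A minimal sketch of the layer-by-layer burn pass described above, assuming the domain is a boolean voxel array (True for void, False for solid); the use of scipy's binary erosion and the function names are illustrative, not taken from the authors' software.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def burn_numbers(void):
    """Assign each void voxel the iteration at which it is 'burnt'.

    void : 3D boolean array, True for void voxels, False for solid.
    Returns an int array: 0 for solid, k >= 1 for the burn layer.
    """
    burn = np.zeros(void.shape, dtype=np.int32)
    remaining = void.copy()
    layer = 0
    while remaining.any():
        layer += 1
        # Voxels that survive erosion are interior; the peeled shell
        # (remaining minus eroded) is the current burn layer.
        eroded = binary_erosion(remaining)
        shell = remaining & ~eroded
        burn[shell] = layer
        remaining = eroded
    return burn

def porosity(void):
    # Porosity is simply the void fraction of the binarized domain.
    return void.mean()
```

Voxels whose burn number is a local maximum, i.e., where two advancing burn fronts collide, are the natural candidates for the medial axis.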
153 The Roman Fora in North Africa: Towards a Supportive Protocol to the Decision for the Morphological Restitution
Authors: Dhouha Laribi Galalou, Najla Allani Bouhoula, Atef Hammouda
Abstract:
This research delves into the fundamental question of the morphological restitution of built archaeology, in order to place it in its paradigmatic context and to seek answers to it. Indeed, understanding the object of study, analyzing it, and solving the morphological problem posed are manageable only by means of a thoughtful strategy that draws on well-defined epistemological scaffolding. In this vein, the crisis of natural reasoning in archaeology has generated multiple changes in the field, ranging from the use of new tools to the integration of archaeological information systems whose use involves the interplay of several disciplines. The built archaeological subject is also an architectural and morphological object: a set of articulated elementary data whose understanding can be approached from a logicist point of view. Morphological restitution is no exception to the rule, and the interchange between the different disciplines uses the capacity of each to frame reflection on the incomplete elements of a given architecture, or on its different phases and multiple states of existence. The logicist sequence is furnished by the set of scattered or destroyed elements found, but also by what can be called a rule base, which contains the set of rules for the architectural construction of the object. The knowledge base built from the archaeological literature also provides a reference that enters into the search for forms and articulations. The choice of the Roman Forum in North Africa is justified by the great urban and architectural significance of this entity. Research on the forum involves a fairly large knowledge base and also provides the researcher with material to study, from a morphological and architectural point of view, from the scale of the city down to the architectural detail. The experimentation with the knowledge deduced at the paradigmatic level, as well as the deduction of an analysis model, is then carried out within a well-defined context that frames the experimentation, from the elaboration of the morphological information container attached to the rule base and the knowledge base. The use of logicist analysis and artificial intelligence has allowed us first to question the aspects already known, in order to measure the credibility of our system, which remains above all a decision support tool for the morphological restitution of Roman fora in North Africa. This paper presents a first experimentation of the model elaborated during this research, a model framed by a paradigmatic discussion that attempts to position the research in relation to the existing paradigmatic and experimental knowledge on the issue.
Keywords: classical reasoning, logicist reasoning, archaeology, architecture, Roman forum, morphology, calculation
152 Systematic Identification of Noncoding Cancer Driver Somatic Mutations
Authors: Zohar Manber, Ran Elkon
Abstract:
Accumulation of somatic mutations (SMs) in the genome is a major driving force of cancer development. Most SMs in the tumor's genome are functionally neutral; however, some damage critical processes and provide the tumor with a selective growth advantage (termed cancer driver mutations). Current research on the functional significance of SMs is mainly focused on finding alterations in protein-coding sequences. However, the exome comprises only 3% of the human genome, and thus, SMs in the noncoding genome significantly outnumber those that map to protein-coding regions. Although our understanding of noncoding driver SMs is very rudimentary, it is likely that disruption of regulatory elements in the genome is an important, yet largely underexplored, mechanism by which somatic mutations contribute to cancer development. The expression of most human genes is controlled by multiple enhancers, and therefore, it is conceivable that regulatory SMs are distributed across different enhancers of the same target gene. Yet, to date, most statistical searches for regulatory SMs have considered each regulatory element individually, which may reduce statistical power. The first challenge in considering the cumulative activity of all the enhancers of a gene as a single unit is to map enhancers to their target promoters. Such mapping defines for each gene its set of regulating enhancers (termed the "set of regulatory elements" (SRE)). Considering the multiple enhancers of each gene as one unit holds great promise for enhancing the identification of driver regulatory SMs. However, the success of this approach depends greatly on the availability of comprehensive and accurate enhancer-promoter (E-P) maps. To date, the discovery of driver regulatory SMs has been hindered by insufficient sample sizes and by statistical analyses that often considered each regulatory element separately. In this study, we analyzed more than 2,500 whole-genome sequencing (WGS) samples provided by The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) in order to identify such driver regulatory SMs. Our analyses took into account the combinatorial aspect of gene regulation by considering all the enhancers that control the same target gene as one unit, based on E-P maps from three genomics resources. The identification of candidate driver noncoding SMs is based on their recurrence. We searched for SREs of genes that are "hotspots" for SMs (that is, they accumulate SMs at a significantly elevated rate). To test the statistical significance of the recurrence of SMs within a gene's SRE, we used both global and local background mutation rates. Using this approach, we detected, in seven different cancer types, numerous "hotspots" for SMs. To support the functional significance of these recurrent noncoding SMs, we further examined their association with the expression level of their target gene (using gene expression data provided by the ICGC and TCGA for samples that were also analyzed by WGS).
Keywords: cancer genomics, enhancers, noncoding genome, regulatory elements
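The abstract does not give the exact statistics; a minimal sketch of the recurrence idea, under the simplifying assumption of a binomial background model with a single per-bp rate (the study uses both global and local background mutation rates):

```python
from scipy.stats import binomtest

def sre_hotspot_pvalue(n_mutations, sre_length_bp, n_samples, bg_rate_per_bp):
    """P-value that a gene's SRE accumulates somatic mutations above
    background, modeling the pooled-enhancer mutation count as
    Binomial(trials = bp x samples, p = background rate per bp)."""
    trials = sre_length_bp * n_samples
    return binomtest(n_mutations, trials, bg_rate_per_bp,
                     alternative='greater').pvalue

# Illustrative numbers: 18 SMs across 12 kb of pooled enhancers in
# 2,500 genomes, against a background of 2e-7 mutations per bp;
# the expectation is 6, so 18 is a clear excess.
p = sre_hotspot_pvalue(18, 12_000, 2_500, 2e-7)
```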
151 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model
Authors: Yew Mun Yip, Dawei Zhang
Abstract:
Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, the machinery malfunctions and leads to misfolding diseases. Although in vitro experiments can establish that mutations of the amino acid sequence lead to incorrectly folded protein structures, these experiments cannot decipher the folding process itself. Therefore, molecular dynamics (MD) simulations are employed to simulate the folding process, so that an improved understanding of folding will enable us to contemplate better treatments for misfolding diseases. MD simulations make use of force fields to simulate the folding of peptides. Secondary structures are formed via hydrogen bonds between the backbone atoms (C, O, N, H). It is important that the hydrogen bond energy computed during the MD simulation be accurate in order to direct the folding process toward the native structure. Since the atoms involved in a hydrogen bond possess very dissimilar electronegativities, the more electronegative atom attracts greater electron density from the less electronegative atom towards itself. This is known as the polarization effect. Since the polarization effect changes the electron density of the two atoms in close proximity, the atomic charges of the two atoms should also vary with the strength of the polarization effect. However, the fixed atomic charge scheme in force fields does not account for this effect. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulation by updating the atomic charges of the backbone hydrogen-bond atoms according to equations, derived from quantum-mechanical calculations, that relate the amount of charge transferred to an atom to the length of the hydrogen bond. Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the simulated peptide at every time-step, yet maintains the dynamic update of atomic charges, thereby reducing the computational cost and time while still accounting for the polarization effect dynamically. The PSBC model is applied to two different β-peptides: the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose folded structure has been studied in vitro by NMR, and the trpzip peptides, double-stranded β-sheets where a correlation is found between the type of amino acids that constitute the β-turn and the β-propensity.
Keywords: hydrogen bond, polarization effect, protein folding, PSBC
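The charge-transfer equations themselves are fitted to quantum-mechanical calculations and are not given in the abstract; the sketch below only illustrates where such an update sits in an MD step, with an assumed (placeholder) linear law:

```python
def update_backbone_charges(q_h, q_o, r_ho, a=0.1, r0=2.0):
    """Placeholder PSBC-style update for one backbone H-bond.

    q_h, q_o : current charges of the amide H and carbonyl O
    r_ho     : H...O distance in Angstrom at this time-step
    dq = a * (r0 - r_ho) is an assumed linear charge-transfer law,
    clipped at zero for long bonds; the real PSBC equations are
    derived from QM calculations, as described above.
    """
    dq = max(0.0, a * (r0 - r_ho))   # shorter bond, more transfer
    return q_h + dq, q_o - dq        # charge is conserved
```

At each time-step the force field would call this for every backbone hydrogen bond before evaluating electrostatics, which is what keeps the cost far below per-step QM.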
150 Physicians' Knowledge and Perception of Gene Profiling in Malaysia: A Pilot Study
Authors: Farahnaz Amini, Woo Yun Kin, Lazwani Kolandaiveloo
Abstract:
The availability of different genetic tests after completion of the Human Genome Project increases physicians' responsibility to keep themselves updated on the potential implementation of these tests in their daily practice. However, owing to a number of barriers, many physicians are either unaware of these tests or unwilling to offer or refer their patients for genetic testing. This study used an anonymous, cross-sectional, mail-based survey to develop primary data on Malaysian physicians' level of knowledge and perception of gene profiling. The questionnaire had 29 questions. Total scores on selected questions were used to assess the level of knowledge; the highest possible score was 11. Descriptive statistics, one-way ANOVA and chi-squared tests were used for statistical analysis. Sixty-three completed questionnaires were returned, by 27 general practitioners (GPs) and 36 medical specialists. Respondents' ages ranged from 24 to 55 years (mean 30.2 ± 6.4). About 40% of the participants rated themselves as having a poor level of knowledge of genetics in general, whilst 60% believed that they had a fair level of knowledge. However, almost half (46%) of the respondents felt that they were not knowledgeable about available genetic tests. A majority (94%) of the respondents were not aware of any lab or company offering gene profiling services in Malaysia. Only 4% of participants were aware of the use of gene profiling for determining the dosage of some drugs. Respondents perceived greater utility of gene profiling for breast cancer (38%) than for familial colorectal cancer (3%). Knowledge scores ranged from 2 to 8 (mean 4.38 ± 1.67). No significant difference was observed between the knowledge scores of GPs and specialists (4.19 and 4.58, respectively). There was no significant association between any demographic factor and level of knowledge, although those who graduated between 2001 and 2005 had a higher level of knowledge. Overall, 83% of participants showed a relatively high level of perception of the value of gene profiling for detecting a patient's risk of disease. However, low perception was observed for the statements on using gene profiling in the general population in order to alter lifestyle (25%) and on having the full sequence of a patient's genome for the purpose of determining the patient's best match for treatment (18%). The lack of clinical guidelines, limited provider knowledge and awareness, lack of time and resources to educate patients, lack of evidence-based clinical information and the cost of tests were the barriers to ordering gene profiling most often mentioned by physicians. In conclusion, the Malaysian physicians who participated in this study had a mediocre level of knowledge and awareness of gene profiling. Low exposure to genetic questions and problems might be a key predictor of the lack of awareness and knowledge of available genetic tests. Educational and training workshops might be useful in helping Malaysian physicians incorporate gene profiling into practice for eligible patients.
Keywords: gene profiling, knowledge, Malaysia, physician
149 Data Refinement Enhances the Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of the short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment at different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in the mean square error over the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance scores in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than one-step prediction for the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
Keywords: data refinement, machine learning, mutual information, short-term latency prediction
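A sketch of findings (1)-(3) in code, with synthetic stand-in features (the real inputs are lagged median latencies, per-segment accumulations, fluxes, entrance/exit rates and the total accumulation); the shapes and hyperparameters are illustrative, not the authors' choices:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 24))            # column 0: latency 15 min ago
y = 3.0 * X[:, 0] + rng.normal(size=5000)  # toy median latency target

model = xgb.XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X[:4000], y[:4000])

# Baseline from the abstract: the median latency 15 minutes ago.
# Any model worth keeping should beat this in mean square error.
baseline_mse = np.mean((y[4000:] - X[4000:, 0]) ** 2)
model_mse = np.mean((y[4000:] - model.predict(X[4000:])) ** 2)

# Feature importances rank the inputs; per the abstract, past
# latencies dominate while entrance/exit rates contribute little.
ranking = np.argsort(model.feature_importances_)[::-1]
```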
148 Time of Death Determination in Medicolegal Death Investigations
Authors: Michelle Rippy
Abstract:
Medicolegal death investigation has historically received little research attention or advancement, as all of its subjects are deceased. Public health threats, drug epidemics and contagious diseases are typically recognized in decedents first, and thorough, accurate death investigations can assist epidemiological research and prevention programs. One vital component of medicolegal death investigation is determining the decedent's time of death. An accurate time of death can assist in corroborating alibis, determining the sequence of death in multiple-casualty circumstances and providing vital facts in civil situations. Popular television portrays an unrealistic forensic ability to provide the exact time of death to the minute for someone found deceased with no witnesses present. In reality, the time of death of an unattended decedent can generally only be narrowed to a 4-6 hour window. In the mid- to late-20th century, liver temperature measurement was an invasive procedure used by death investigators to determine the decedent's core temperature. The core temperature was entered into an equation to determine an approximate time of death. Owing to many inconsistencies with the placement of the thermometer and other variables, the accuracy of liver temperatures was dispelled and this once commonplace procedure lost scientific support. Currently, medicolegal death investigators rely on three major post-mortem changes at a death scene. Many factors enter the subjective determination of the time of death, including the cooling of the decedent, stiffness of the muscles, internal settling of blood, clothing, ambient temperature, disease and recent exercise. Current research is utilizing non-invasive, hospital-grade tympanic thermometers to measure the temperature in each of the decedent's ears. This tool can be used at the scene and, in conjunction with scene indicators, may provide a more accurate time of death. The research is significant and can bring accuracy to a historically imprecise determination, considerably improving criminal and civil death investigations. The goal of the research is to put time-of-death determination for unwitnessed deaths on a scientific basis, rather than the art it currently is. The research is in progress, with expected completion in December 2018. There are currently 15 completed case studies with vital information including the ambient temperature, the decedent's height/weight/sex/age, layers of clothing, the position in which the body was found, whether medical intervention occurred and whether the death was witnessed. These data will be analyzed with the multiple variables studied and will be available for presentation in January 2019.
Keywords: algor mortis, forensic pathology, investigations, medicolegal, time of death, tympanic
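The abstract does not state the investigators' equation. For orientation only, a classic cooling rule of thumb (the Glaister equation) assumes roughly 1.5 °F of cooling per hour from a normal body temperature; it is not the tympanic method of this study:

```python
def glaister_hours_since_death(body_temp_f, normal_temp_f=98.4):
    """Rough estimate of the post-mortem interval in hours from one
    body-temperature reading, assuming ~1.5 F of cooling per hour;
    the variables listed above (clothing, ambient temperature,
    disease, recent exercise) all modify the true cooling rate.
    """
    return (normal_temp_f - body_temp_f) / 1.5

# Example: a reading of 92.4 F suggests roughly 4 hours.
hours = glaister_hours_since_death(92.4)
```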
147 Developing Communicative Skills in Foreign Languages by Video Tasks
Authors: Ekaterina G. Lipatova
Abstract:
The developing potential of a video task in teaching foreign languages involves opportunities to improve four aspects of the speech production process: listening, reading, speaking and writing. A video presents a sequence of actions, realized in logically connected pictures, together with a verbalized speech flow, which simplifies and stimulates the process of perception. In this connection, students' listening skills are developed effectively, as are intellectual operations such as synthesizing, analyzing and generalizing information. In terms of teaching capacity, a video task, in our opinion, is more stimulating than a traditional listening task, since it involves the student in the plot of the communicative situation and its emotional background, and potentially makes them react to the gist in cognitive and communicative ways. To be an effective teaching method, the video task should be structured according to the psycholinguistic characteristics of the speech production process; in other words, it should include three phases: before watching, while watching and after watching. The system of tasks for each phase might involve responses to the video content in the form of gap-filling tasks, multiple choice and true-or-false tasks (reading skills), and exercises on expressing an opinion or fulfilling a project (writing and speaking skills). In the before-watching phase, we invite the students to adjust their perception mechanism to the topic and problem of the chosen video with such tasks as 'What do you know about such a problem?', 'Is it new for you?', 'Have you ever faced the situation of…?'. Then we proceed with the lexical and grammatical analysis of the language units that form the body of the speech sample, to ease perception and develop the students' lexicon. The goal of the while-watching phase is to build the students' awareness of the problem presented in the video and to challenge their inner attitude towards what they have seen, by having them identify mistakes in statements about the video content or make a summary justifying their understanding. Finally, we move on to developing their speech skills within the communicative situation they have observed and learnt, by stimulating them to search for similar ideas in their own backgrounds and present them orally or in written form, or to express their own opinion on the problem. It should be highlighted that a video task should contain a current, relevant and interesting event related to the future profession of the student, since this helps to activate students' cognitive, emotional, verbal and ethical capacities. Also, logically structured video tasks are easily integrated into e-learning systems and give students the opportunity to work with the foreign language on their own.
Keywords: communicative situation, perception mechanism, speech production process, speech skills
146 Medicinal Plants: An Antiviral Depository with Complex Mode of Action
Authors: Daniel Todorov, Anton Hinkov, Petya Angelova, Kalina Shishkova, Venelin Tsvetkov, Stoyan Shishkov
Abstract:
Human herpes viruses (HHV) are ubiquitous pathogens with a pandemic spread across the globe. HHV type 1 is the main causative agent of cold sores and fever blisters around the mouth and on the face, whereas HHV type 2 is generally responsible for genital herpes outbreaks. Treatment of both viruses is more or less successful with antivirals from the nucleoside analogue group, but their wide application increasingly leads to the emergence of resistant mutants. In the past, medicinal plants have been used to treat a number of infectious and non-infectious diseases. Their diversity and their ability to produce a vast variety of secondary metabolites according to the characteristics of the environment give them the potential to help us in our fight against viral infections. Their variable chemical characteristics and complex composition are an advantage in the treatment of herpes, since they significantly complicate the emergence of resistant mutants. The screening process is difficult due to the lack of standardization, which is why it is especially important to follow the mechanism of antiviral action of the plants. On the one hand, interactions between a plant's compounds may be expected, resulting in enhanced antiviral effects; on the other, the most appropriate environmental conditions can be chosen to maximize the amount of active secondary metabolites. During our study, we followed the activity of various plant extracts on the viral replication cycle as well as their effect on the extracellular virion. We obtained our results following the logical sequence of the experimental settings: determining the cytotoxicity of the extracts, evaluating the overall effect on viral replication and on the extracellular virion, and, where positive results were observed, studying the effect on viral adsorption and penetration and the effect on replication according to the time of addition. Our results indicate that some of the extracts from Lamium album have several targets. The first stages of the viral life cycle are most affected: several of our active antiviral agents have shown an effect on the extracellular virion and on the adsorption and penetration processes. Our research over the last decade has identified several antiviral plants, some of which are from the Lamiaceae family. The rich set of active ingredients of the plants in this family makes them a good source of antiviral preparations.
Keywords: human herpes virus, antiviral activity, Lamium album, Nepeta nuda
145 A Novel Chicken W Chromosome Specific Tandem Repeat
Authors: Alsu F. Saifitdinova, Alexey S. Komissarov, Svetlana A. Galkina, Elena I. Koshel, Maria M. Kulak, Stephen J. O'Brien, Elena R. Gaginskaya
Abstract:
The mystery of sex determination is one of the most ancient and is still not fully solved. In many species, sex determination is genetic and often accompanied by the presence of dimorphic sex chromosomes in the karyotype. Genomic sequencing has provided information about the gene content of sex chromosomes, revealing their origin from ordinary autosomes and allowing their evolutionary history to be traced. The female-specific W chromosome in birds, like the mammalian male-specific Y chromosome, is characterized by degeneration of gene content and accumulation of repetitive DNA. Tandem repeats complicate the analysis of genomic data: despite the best efforts, the chicken W chromosome assembly includes only 1.2 Mb of an expected 55 Mb. Supplementing the information on sex chromosome composition not only helps to complete genome assemblies but also moves us toward understanding the evolution of sex-determination systems. A whole-genome survey was applied to the Gallus_gallus WASHUC 2.60 assembly to search for repeats in the assembled genome, and high-copy-number repeats were searched for and assembled from unassembled reads of the SRR867748 short-read dataset. For cytogenetic analysis, conventional fluorescence in situ hybridization was used for previously cloned W-specific satellites, and a specifically designed, directly labeled synthetic oligonucleotide DNA probe was used for the bioinformatically identified repetitive sequence. Hybridization was performed on mitotic chicken chromosomes and on manually isolated giant meiotic lampbrush chromosomes from growing oocytes. A novel chicken W-specific satellite, (GGAAA)n, which does not co-localize with any previously described class of W-specific repeats, was identified and mapped with high resolution. On autosomes, units of this repeat were found as part of the upstream regions of gonad-specific protein-coding sequences. These findings may contribute to the understanding of the role of tandem repeats in the regulation of sex-specific differentiation in birds and in sex chromosome evolution. This work was supported by postdoctoral fellowships from St. Petersburg State University (#1.50.1623.2013 and #1.50.1043.2014), the grant for Leading Scientific Schools (#3553.2014.4) and a grant from the Russian Foundation for Basic Research (#15-04-05684). The equipment and software of the Research Resource Center "Chromas" and the Theodosius Dobzhansky Center for Genome Bioinformatics of Saint Petersburg State University were used.
Keywords: birds, lampbrush chromosomes, sex chromosomes, tandem repeats
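A toy stand-in for the repeat survey described above, scanning a sequence for tandem arrays of the (GGAAA) unit; the actual study assembled high-copy repeats from short reads, which this sketch does not attempt:

```python
import re

def count_tandem_repeat(seq, unit="GGAAA", min_copies=3):
    """Return (start, copies) for each run of at least min_copies
    consecutive repeat units in a DNA string."""
    pattern = re.compile(r'(?:%s){%d,}' % (unit, min_copies))
    return [(m.start(), len(m.group()) // len(unit))
            for m in pattern.finditer(seq.upper())]

# Example on a made-up read: one array of four units at position 2.
hits = count_tandem_repeat("TTGGAAAGGAAAGGAAAGGAAATT")  # [(2, 4)]
```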
144 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to obtain a reliable solution that can be implemented for ground movement operations. Ground movement of aircraft at an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm was developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time needed for pushing back an aircraft and turning its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the optimized take-off sequence has to be respected. QPPTW searches for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Finding a way to direct the expansion can assist the search and achieve better performance. We have therefore further modified the QPPTW algorithm to use a heuristic to guide the search. The new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time needed to reach the target, so that delays caused by other aircraft can be part of the optimization. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, routing a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application the algorithm needs to be very fast, and this speed increase will allow us to add features and complexity, enabling further integration with other airport processes and leading to more optimized and environmentally friendly airports.
Keywords: A-star search, airport operations, ground movement optimization, routing and scheduling
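A minimal sketch of the heuristic idea: A* over a taxiway graph where the queue priority adds an estimated remaining time to the time accrued so far. The time-window reservations that make QPPTW conflict-free for multiple aircraft are omitted for brevity:

```python
import heapq

def a_star_time(graph, start, goal, est_time_to_goal):
    """Quickest path by A*, with time in place of distance.

    graph: dict node -> list of (neighbour, travel_time)
    est_time_to_goal: admissible estimate of remaining time to the
    goal, e.g. remaining distance divided by maximum taxi speed.
    Returns (total_time, path) or None if the goal is unreachable.
    """
    frontier = [(est_time_to_goal(start), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, t, node, path = heapq.heappop(frontier)
        if node == goal:
            return t, path
        for nxt, dt in graph.get(node, []):
            nt = t + dt
            if nt < best.get(nxt, float('inf')):
                best[nxt] = nt
                heapq.heappush(frontier, (nt + est_time_to_goal(nxt),
                                          nt, nxt, path + [nxt]))
    return None
```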
143 Evolutionary Analysis of Influenza A(H1N1)pdm09 in the Post-Pandemic Period in Pakistan
Authors: Nazish Badar
Abstract:
In early 2009, the pandemic influenza A (H1N1) virus emerged globally. Since then, it has continued to circulate, causing considerable morbidity and mortality. The purpose of this study was to evaluate the evolutionary changes in influenza A(H1N1)pdm09 viruses from 2009 to 2015 and their relevance to the current vaccine viruses. Methods: Respiratory specimens were collected from patients with influenza-like illness and severe acute respiratory illness, and samples were processed according to the CDC protocol. Sequencing and phylogenetic analysis of the haemagglutinin (HA) and neuraminidase (NA) genes were carried out on representative Pakistani isolates. Results: Between January 2009 and February 2016, 1,870 of 14,086 samples (13.2%) were positive for influenza A. During the pandemic period (2009-10), influenza A(H1N1)pdm09 was the dominant strain, with 366 (45%) of total influenza positives. In the post-pandemic period (2011-2016), a total of 1,066 (59.6%) cases were positive for influenza A(H1N1)pdm09, with co-circulation of different influenza A subtypes. Overall, the Pakistani A(H1N1)pdm09 viruses grouped into two genetic clades: viruses belonged only to clade 7 during the pandemic period, whereas they belonged to clade 7 (2011) and clade 6B (2015) during the post-pandemic years. Amino acid analysis of the HA gene revealed mutations at positions S220T, I338V and P100S, specifically associated with outbreaks, in all the analyzed strains. Sequence analyses of post-pandemic A(H1N1)pdm09 viruses showed additional substitutions at antigenic sites, S179N, K180Q (Sa), D185N, D239G (Ca) and S202A (Sb), and at receptor binding sites, A13T and S200P, when compared with the pandemic period. Substitutions at genetic markers A273T (69%), S200P/T (15%) and D239G (7.6%), associated with severity, and E391K (69%), associated with virulence, were identified in viruses isolated during 2015. Analysis of the NA gene revealed the outbreak markers V106I (23%) among pandemic and N248D (100%) among post-pandemic Pakistani viruses. Additional N-glycosylation sites, HA S179N (23%) and NA I23T (7.6%), with N44S (77%) in place of N386K (77%), were found only in post-pandemic viruses. All isolates showed histidine (H) at position 275 in NA, indicating sensitivity to neuraminidase inhibitors. Conclusion: This study shows that the influenza A(H1N1)pdm09 viruses from Pakistan clustered into two genetic clades, with co-circulation of some variants. Certain key substitutions at the receptor binding site and a few changes indicative of virulence were also detected in post-pandemic strains. It is therefore imperative to continue monitoring these viruses for early identification of potential variants of high virulence and the emergence of drug-resistant variants.
Keywords: influenza A(H1N1)pdm09, evolutionary analysis, post-pandemic period, Pakistan
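A small helper of the kind used to tabulate such substitutions, assuming pre-aligned amino acid sequences of equal length (the example coordinates are toy values, not real HA numbering):

```python
def call_substitutions(ref_aa, query_aa):
    """List substitutions between two aligned amino acid sequences
    in the usual RefPosQuery notation (e.g. S220T), 1-based."""
    return [f"{r}{i}{q}"
            for i, (r, q) in enumerate(zip(ref_aa, query_aa), start=1)
            if r != q and '-' not in (r, q)]

subs = call_substitutions("MKAILS", "MKTILS")   # -> ['A3T']
```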
142 Characterization of Double Shockley Stacking Fault in 4H-SiC Epilayer
Authors: Zhe Li, Tao Ju, Liguo Zhang, Zehong Zhang, Baoshun Zhang
Abstract:
In-grown stacking faults (IGSFs) in 4H-SiC epilayers can cause increased leakage current and reduce the blocking voltage of 4H-SiC power devices. The double Shockley stacking fault (2SSF) is a common type of IGSF with double slips on the basal planes. In this study, a 2SSF in a 4H-SiC epilayer grown by chemical vapor deposition (CVD) is characterized. The nucleation site of the 2SSF is discussed, and a model for 2SSF nucleation is proposed. Homo-epitaxial 4H-SiC is grown on a commercial 4-degree off-cut substrate by a home-built hot-wall CVD. Defect-selective etching (DSE) is conducted with melted KOH at 500 degrees Celsius for 1-2 min. Room-temperature cathodoluminescence (CL) is conducted at a 20 kV acceleration voltage. Low-temperature photoluminescence (LTPL) is conducted at 3.6 K with the 325 nm He-Cd laser line. In the CL image, a triangular area with bright contrast is observed. Two partial dislocations (PDs) with a 20-degree angle between them show linear dark contrast at the edges of the IGSF. CL and LTPL spectra are recorded to verify the IGSF's type. The CL spectrum shows maximum photoemission at 2.431 eV and negligible bandgap emission. In the LTPL spectrum, four phonon replicas are found at 2.468 eV, 2.438 eV, 2.420 eV and 2.410 eV, and the Egx is estimated to be 2.512 eV. A shoulder red-shifted from the main peak in CL, and a slight protrusion at the same wavelength in LTPL, are identified as the so-called Egx lines. Based on the CL and LTPL results, the IGSF is identified as a 2SSF. Back etching by neutral loop discharge and DSE are conducted to track the origin of the 2SSF, and the nucleation site is found to be a threading screw dislocation (TSD) in this sample. A nucleation mechanism model is proposed for the formation of the 2SSF. Steps introduced on the surface by the off-cut and by the TSD are both suggested to be two C-Si bilayers in height; the intersections of these two types of steps lie along the [11-20] direction from the TSD, with a four-bilayer step at each intersection. The nucleation of the 2SSF during growth is proposed as follows. First, at one intersection the upper two bilayers of the four-bilayer step grow down and block the lower two, generating an IGSF. Second, the step-flow grows over the IGSF successively and forms an AC/ABCABC/BA/BC stacking sequence; a 2SSF is thus formed and extends by step-flow growth. In conclusion, a triangular IGSF is characterized by the CL approach. Based on the CL and LTPL spectra, the Egx is estimated to be 2.512 eV and the IGSF is identified as a 2SSF. By back etching, the 2SSF nucleation site is found to be a TSD, and a model for 2SSF nucleation from an intersection of off-cut- and TSD-introduced steps is proposed.
Keywords: cathodoluminescence, defect-selective etching, double Shockley stacking fault, low-temperature photoluminescence, nucleation model, silicon carbide
141 Coastal Resources Spatial Planning and Potential Oil Risk Analysis: Case Study of Misratah's Coastal Resources, Libya
Authors: Abduladim Maitieg, Kevin Lynch, Mark Johnson
Abstract:
The goal of the Libyan Environmental General Authority (EGA) and the National Oil Corporation (Department of Health, Safety & Environment) during the last 5 years has been to adopt a common approach to coastal and marine spatial planning. Protection and planning of the coastal zone is significant for Libya, owing to the length of its coast, the high rate of oil export, and the potential negative impacts of spills on coastal and marine habitats. Coastal resource scenarios constitute an important tool for exploring the long-term and short-term consequences of oil spill impacts and the available response options, providing an integrated perspective on mitigation. To investigate this, the paper reviews the Misratah coastal parameters, presenting the physical and human controls and attributes of coastal habitats as a first step toward understanding how they may be damaged by an oil spill. The paper also investigates coastal resources, providing a better understanding of the resources and the factors that affect the integrity of the ecosystem. The study therefore describes the potential spatial distribution of oil spill risk and the value of coastal resources, and creates spatial maps of coastal resources and their vulnerability to oil spills along the coast. It proposes an analysis of the condition of coastal resources at a local level in the Misratah region of the Mediterranean Sea, considering the implementation of coastal and marine spatial planning over time as an indication of the will to manage urban development. The analysis of oil spill contamination and its impact on coastal resources depends on (1) the oil spill sequence, (2) the oil spill location, and (3) oil spill movement near the coastal area. The resulting maps show the natural resources, socio-economic activity and environmental resources along the coast, and oil spill locations. Moreover, the study provides a significant geodatabase, which is required for coastal sensitivity index mapping and coastal management studies. The outcome of the study provides the information necessary to set an Environmental Sensitivity Index (ESI) for the Misratah shoreline, which can be used for the management of coastal resources and for setting boundaries for each coastal sensitivity sector, as well as helping planners measure the impact of oil spills on coastal resources. Geographic Information System (GIS) tools were used to store and illustrate the spatial convergence of existing socio-economic activities, such as fishing, tourism and the salt industry, and ecosystem components, such as sea turtle nesting areas, sabkha habitats and migratory bird feeding sites. These geodatabases help planners investigate the vulnerability of coastal resources to an oil spill.
Keywords: coastal and marine spatial planning, GIS mapping, human uses, ecosystem components, Misratah coast, Libya, oil spill
140 Preliminary Analysis on the Distribution of Elements in Cannabis
Authors: E. Zafeiraki, P. Nisianakis, K. Machera
Abstract:
The cannabis plant contains 113 cannabinoids and is commonly known for its psychoactive substance, tetrahydrocannabinol, and as a source of narcotic substances. In recent years, cannabis cultivation has increased owing to its wide use for medical and industrial purposes, as well as in para-pharmaceuticals, cosmetics and food commodities. Depending on the final product, different parts of the plant are utilized, with the leaves and buds (seeds) being the most frequently used. Cannabis can accumulate various contaminants, including heavy metals, both from the soil and from the water in which the plant grows. More specifically, metals may occur naturally in soil and water, or they can enter the environment through fertilizers, pesticides and fungicides that are commonly applied to crops. The high probability of metal accumulation in cannabis, combined with the latter's growing use, raises concerns about potential health effects in humans and consequently leads to the need for safety measures for cannabis products, such as guidelines regulating contaminants, including metals, and especially those of high toxicity. Acknowledging the above, the aim of the current study was first to investigate metal contamination in cannabis samples collected from Greece, and secondly to examine potential differences in metal accumulation among the different parts of the plant. To the best of our knowledge, this is the first study presenting information on elements in cannabis cultivated in Greece, and on the distribution pattern of the former within the plant body. To this end, the leaves and the seeds of all the samples were separated, dried and then digested with nitric acid (HNO₃) and hydrochloric acid (HCl). For the analysis of these samples, an inductively coupled plasma mass spectrometry (ICP-MS) method was developed, able to quantify 28 elements. Internal standards were added at a constant rate and concentration to all calibration standards and unknown samples, while two certified reference materials were analyzed in every batch to ensure the accuracy of the measurements. The repeatability of the method and the background contamination were controlled by the analysis of quality control (QC) standards and blank samples in every sequence, respectively. According to the results, essential elements, such as Ca, Zn and Mg, were detected at high levels. On the contrary, the concentrations of the highly toxic metals As (average 0.10 ppm), Pb (average 0.36 ppm), Cd (average 0.04 ppm) and Hg (average 0.012 ppm) were very low in all the samples, indicating that no harmful effects on human health would be expected from the analyzed samples. Moreover, the pattern of metal contamination is very similar in all the analyzed samples, which could be attributed to the common origin of the analyzed cannabis, i.e., the common soil composition, use of fertilizers, pesticides, etc. Finally, as far as the distribution pattern between the different parts of the plant is concerned, the leaves presented higher concentrations than the seeds for all the elements examined.
Keywords: cannabis, heavy metals, ICP-MS, leaves and seeds, elements
139 Nuclear Near Misses and Their Learning for Healthcare
Authors: Nick Woodier, Iain Moppett
Abstract:
Background: It is estimated that one in ten patients admitted to hospital will suffer an adverse event in their care. While the majority of these events result in low harm, patients are being significantly harmed by the processes meant to help them. Healthcare therefore seeks to improve patient safety by taking learning from other industries that are perceived to be more mature in their management of safety events. Of particular interest to healthcare are 'near misses', events that almost happened but for an intervention. Healthcare has no guidance on how best to manage and learn from near misses to reduce the chances of harm to patients. The authors, as part of a larger study of near-miss management in healthcare, sought to learn from the UK nuclear sector in order to develop principles for how healthcare can identify, report and learn from near misses to improve patient safety. The nuclear sector was chosen as an exemplar due to its status as an ultra-safe industry. Methods: A Grounded Theory (GT) methodology, augmented by a scoping review, was used. Data collection included interviews, scenario discussion, field notes and the literature. The review protocol is accessible online. The GT aimed to develop theories about how the nuclear sector manages near misses, with a focus on defining them and clarifying how best to support reporting and analysis to extract learning. Near misses related to radiation release or exposure were the focus. Results: Eight nuclear interviews contributed to the GT, across nuclear power, decommissioning, weapons and propulsion. The scoping review identified 83 articles across a range of safety-critical industries, with only six focused on the nuclear sector. The GT identified that the nuclear sector has a particular focus on precursors and low-level events, with regulation supporting their management. Exploration of definitions led to the recognition of the importance of several interventions in a sequence of events, interventions that do not rely solely on humans, since humans cannot be assumed to be robust barriers. Regarding reporting and analysis, no consistent methods were identified, but for learning, operating experience learning groups were identified as an exemplar. The safety culture across the nuclear sector, however, was heard to vary, which undermined the reporting of near misses and other safety events. Some parts of the industry described their focus on near misses as new, and noted that despite existing risks, progress to mitigate hazards is slow. Conclusions: Healthcare often sees 'nuclear', as well as other ultra-safe industries such as 'aviation', as homogeneous. However, the findings here suggest significant differences in safety culture and maturity across the various parts of the nuclear sector. Healthcare can take learning from some aspects of near-miss management in the nuclear sector, such as how near misses are defined and how learning is shared through operating experience networks. However, healthcare also needs to recognise that variability exists across industries and that, comparably, it may be more mature in some areas of safety.
Keywords: culture, definitions, near miss, nuclear safety, patient safety
138 A Stochastic Vehicle Routing Problem with Ordered Customers and Collection of Two Similar Products
Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis
Abstract:
The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering or collecting products to or from customers who are scattered in a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles start their routes from a depot and visit the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods that are delivered or collected. In the present work, we study a specific capacitated stochastic vehicle routing problem which has many realistic applications. We develop and analyze a mathematical model for a vehicle routing problem in which a vehicle starts its route from a depot and visits N customers according to a particular sequence in order to collect from them two similar but not identical products, which we call product 1 and product 2. Each customer possesses items of either product 1 or product 2 with known probabilities, and the number of items each customer possesses is a discrete random variable with known distribution. The actual quantity and the actual type of product that each customer possesses are revealed only when the vehicle arrives at the customer's site. The vehicle has two compartments, compartment 1 and compartment 2, where compartment 1 is suitable for loading product 1 and compartment 2 is suitable for loading product 2. However, it is permitted to load items of product 1 into compartment 2 and items of product 2 into compartment 1; these actions incur costs due to extra labor. The vehicle is allowed during its route to return to the depot to unload items of both products. The travel costs between consecutive customers and between the customers and the depot are known. The objective is to find the optimal routing strategy, i.e. the routing strategy that minimizes the total expected cost among all possible strategies for servicing all customers. A suitable dynamic programming algorithm can be developed for the determination of the optimal routing strategy, and it can be proved that the optimal routing strategy has a specific threshold-type structure: for each customer, the optimal actions are characterized by some critical integers. This structural result enables us to design a special-purpose dynamic programming algorithm that operates only over strategies having this structural property. Extensive numerical results provide strong evidence that the special-purpose dynamic programming algorithm is considerably more efficient than the initial dynamic programming algorithm. Furthermore, if we consider the same problem without the assumption that the customers are ordered, numerical experiments indicate that the optimal routing strategy can be computed if N is smaller than or equal to eight.
Keywords: dynamic programming, similar products, stochastic demands, stochastic preferences, vehicle routing problem
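A deliberately simplified sketch of the dynamic program's shape, not the authors' algorithm: the capacities, the extra-labor cost and all distributions are assumed inputs, overflow beyond both compartments is ignored, and node 0 is the depot with customers visited in the fixed order 1..N:

```python
from functools import lru_cache

CAP = 5        # capacity of each compartment (assumed)
EXTRA = 1.0    # cost per item loaded into the "wrong" compartment

def solve(N, travel, p1, qdist):
    """travel[a][b]: travel cost; p1[i]: P(customer i holds product 1);
    qdist[i]: dict {quantity: probability} for customer i."""

    @lru_cache(maxsize=None)
    def V(i, f1, f2):
        # Expected cost-to-go just before traveling to customer i,
        # with f1, f2 free units in compartments 1 and 2.
        if i > N:
            return travel[N][0]                  # return to the depot
        direct = travel[i - 1][i] + E_serve(i, f1, f2)
        via_depot = (travel[i - 1][0] + travel[0][i]
                     + E_serve(i, CAP, CAP))     # unload first
        return min(direct, via_depot)            # threshold decision

    def E_serve(i, f1, f2):
        # Expectation over the type and quantity revealed on arrival.
        total = 0.0
        for prod, p in ((1, p1[i]), (2, 1.0 - p1[i])):
            for q, pq in qdist[i].items():
                own, other = (f1, f2) if prod == 1 else (f2, f1)
                in_own = min(q, own)
                in_other = min(q - in_own, other)    # extra labor
                own, other = own - in_own, other - in_other
                nf1, nf2 = (own, other) if prod == 1 else (other, own)
                total += p * pq * (EXTRA * in_other + V(i + 1, nf1, nf2))
        return total

    return V(1, CAP, CAP)
```

The threshold structure proved in the paper means the direct-versus-depot choice flips at critical integer loads, so the special-purpose algorithm can restrict its search to strategies of that form.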
137 Learning to Translate by Learning to Communicate to an Entailment Classifier
Authors: Szymon Rutkowski, Tomasz Korbak
Abstract:
We present a reinforcement-learning-based method of training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, it lacks psychological plausibility as a learning procedure: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language's 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and the premise, which are then passed to the classifier. The translator is rewarded for the classifier's performance in determining entailment between the sentences translated into the classifier's language. The translator's performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, we introduce a number of improvements. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality. It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferrable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) for optimizing the translator's policy and found that our attempts yield promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation.
Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning
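A sketch of one policy-gradient (REINFORCE) update for the translator, under assumed interfaces: translator.sample(src) returning (token ids, summed log-probability) and a frozen classifier(premise_ids, hypothesis_ids) returning entailment logits; neither is a real library API:

```python
import torch

def reinforce_step(translator, classifier, premise, hypothesis, label,
                   optimizer):
    """One update: reward 1 if the frozen entailment classifier labels
    the translated pair correctly, 0 otherwise."""
    p_ids, p_logp = translator.sample(premise)      # assumed interface
    h_ids, h_logp = translator.sample(hypothesis)
    with torch.no_grad():
        logits = classifier(p_ids, h_ids)
        reward = (logits.argmax(-1) == label).float()
    # REINFORCE: raise the log-probability of rewarded translations.
    loss = -(reward * (p_logp + h_logp)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

An actor-critic variant would subtract a learned baseline from the reward to reduce the variance of this estimator.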
136 Evaluation of Iron Application Method to Remediate Coastal Marine Sediment
Authors: Ahmad Seiar Yasser
Abstract:
Sediment is an important habitat for organisms and acts as a storehouse for nutrients in aquatic ecosystems. Hydrogen sulfide, which is highly toxic and fatal to benthic organisms, is produced by microorganisms in the water column and sediments. However, iron has the capacity to regulate sulfide formation by poising the redox sequence and forming insoluble iron sulfide and pyrite compounds. Therefore, we conducted two experiments aimed at evaluating the remediation efficiency of iron application to organically enriched sediments. Experiments were carried out in the laboratory using intact sediment cores taken from Mikawa Bay, Japan, every month from June to September 2017 and in October 2018. In Experiment 1, after the cores were collected, iron powder or iron hydroxide was applied to the surface sediment at 5 g/m² or 5.6 g/m², respectively. In Experiment 2, we experimentally investigated the removal of hydrogen sulfide using steelmaking slag of two grain sizes (2 mm or less, and 2 to 5 mm). Both experiments were conducted in the laboratory with the same boundary conditions. The overlying water was replaced with deoxygenated filtered seawater, and the cores were sealed with a top cap to maintain anoxic conditions, with a stirrer circulating the overlying water gently. The incubations comprised three treatments, including the control; each treatment was replicated and conducted at the same temperature as the in-situ conditions. Water samples were collected at appropriate time intervals to measure the dissolved sulfide concentrations in the overlying water by the methylene blue method. Sediment quality was also analyzed after the completion of the experiment. After the 21-day incubation, the experiments using iron powder and ferric hydroxide revealed that application of these iron-containing materials significantly reduced the sulfide release flux from the sediment into the overlying water. The average dissolved sulfide concentration in the overlying water of the treatment groups decreased significantly (p = .0001), while no significant change was observed in the control group after the 21-day incubation. Therefore, the application of iron to the sediment is a promising method to remediate contaminated sediments in a eutrophic water body, although ferric hydroxide has the better hydrogen sulfide removal effect. The experiments using steelmaking slag also clarified that capping with slag of either grain size is an effective technique for the remediation of organically enriched bottom sediments containing hydrogen sulfide, because it induces a chemical reaction between Fe and the sulfides in the sediments that would not occur naturally, with slag of 2 mm or less having the better hydrogen sulfide removal effect. For economic reasons, the application of steelmaking slag to the sediment is a promising method to remediate contaminated sediments in a eutrophic water body.
Keywords: sediment, H2S, iron, iron hydroxide
Procedia PDF Downloads 163135 Impact of Boundary Conditions on the Behavior of Thin-Walled Laminated Column with L-Profile under Uniform Shortening
Authors: Jaroslaw Gawryluk, Andrzej Teter
Abstract:
Simply supported angle columns subjected to uniform shortening are tested. The experimental studies are conducted on a testing machine, additionally using the Aramis system and an acoustic emission system. The laminate samples are subjected to axial uniform shortening and loaded from zero up to the maximal force destroying the L-shaped column, which allows the post-buckling behavior of the column to be observed until its collapse. Laboratory tests are performed at a constant cross-bar velocity of 1 mm/min. In order to eliminate stress concentrations between sample and support, flexible pads are used. The analyzed samples are made of carbon-epoxy laminate by the autoclave method. The configuration of the laminate layers is [60,0₂,-60₂,60₃,-60₂,0₃,-60₂,0,60₂]T, where direction 0 is along the length of the profile. The material parameters of the laminate are: Young's modulus along the fiber direction, 170 GPa; Young's modulus transverse to the fiber direction, 7.6 GPa; in-plane shear modulus, 3.52 GPa; in-plane Poisson's ratio, 0.36. The dimensions of all columns are: length 300 mm, thickness 0.81 mm, flange width 40 mm. Next, two numerical models of the column, with and without flexible pads, are developed using the finite element method in Abaqus software. The L-profile laminate column is modeled with S8R shell elements, and the layup-ply technique is used to define the sequence of the laminate layers. The grips are modeled with R3D4 discrete rigid elements, while the flexible pad consists of C3D20R solid elements. In order to estimate the moment of first laminate-layer damage, the following initiation criteria were applied: the maximum stress criterion and the Tsai-Hill, Tsai-Wu, Azzi-Tsai-Hill, and Hashin criteria. The best agreement with the experimental results was observed for the Hashin criterion. It was found that the use of the pad in the numerical model significantly influences the damage mechanism. The model without pads was considerably stiffer, as evidenced by a greater bifurcation load and damage initiation load under all analyzed criteria, lower shortening, and smaller deflection at the column mid-length than in the model with flexible pads. Acknowledgment: The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19). Keywords: angle column, compression, experiment, FEM
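For orientation, the plane-stress Hashin initiation criteria that agreed best with the experiment can be sketched as below. The strength values are placeholders (the abstract reports only stiffness data), and the transverse shear strength is approximated by the in-plane value, so treat the numbers as assumptions:

```python
# Plane-stress Hashin damage-initiation indices (failure onset when index >= 1).
# Strength values are placeholder assumptions; the abstract gives only stiffness.
from dataclasses import dataclass

@dataclass
class PlyStrength:
    XT: float = 2000.0  # fibre tension [MPa] (assumed)
    XC: float = 1200.0  # fibre compression [MPa] (assumed)
    YT: float = 50.0    # matrix tension [MPa] (assumed)
    YC: float = 200.0   # matrix compression [MPa] (assumed)
    S12: float = 80.0   # in-plane shear [MPa] (assumed)

def hashin_indices(s11, s22, t12, st: PlyStrength):
    """Return (fibre, matrix) Hashin indices for an in-plane ply stress state."""
    if s11 >= 0:
        fibre = (s11 / st.XT) ** 2 + (t12 / st.S12) ** 2   # fibre tension
    else:
        fibre = (s11 / st.XC) ** 2                         # fibre compression
    if s22 >= 0:
        matrix = (s22 / st.YT) ** 2 + (t12 / st.S12) ** 2  # matrix tension
    else:
        # Matrix compression; S12 used in place of transverse shear ST (assumption).
        matrix = ((s22 / (2 * st.S12)) ** 2
                  + ((st.YC / (2 * st.S12)) ** 2 - 1) * s22 / st.YC
                  + (t12 / st.S12) ** 2)
    return fibre, matrix

print(hashin_indices(1500.0, -30.0, 40.0, PlyStrength()))
```

In the FE model, these indices would be evaluated ply-by-ply from the S8R shell stresses to locate the damage initiation load.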
Procedia PDF Downloads 206134 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Authors: T. Thein, S. Kalyar Myo
Abstract:
Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise and cross-talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is finding a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, and camera settings, and due to the inherently low luminance and chrominance contrast between the lip and non-lip regions. Several researchers have been developing methods to overcome these problems; one of them is lip reading. Moreover, it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding underlying speech by processing the movement of the lips. A lip reading system is therefore one of the supportive technologies for hearing-impaired or elderly people, and it is an active research area; the need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching method system for hearing-impaired persons in Myanmar, teaching how to pronounce words precisely by identifying the features of lip movement. The proposed research develops a lip reading system for Myanmar consonants: one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah)) and two-syllable consonants (က (Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe)၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi)). The proposed system has three subsystems: a lip localization system, which localizes the lips in the digital input; a feature extraction system, which extracts features of lip movement suitable for visual speech recognition; and a classification system. The Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with the Active Contour Model (ACM) will be used for lip movement feature extraction, and a Support Vector Machine (SVM) classifier is used for finding the class parameters and class number in the training and testing sets. Experiments will then be carried out on the recognition accuracy of Myanmar consonants using only the visual information of lip movements. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons as a language learning application; it can also be useful for normal-hearing persons in noisy environments or in conditions where they need to find out what was said by other people without hearing their voice. Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two-Dimensional Discrete Cosine Transform (2D-DCT)
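A minimal sketch of the 2D-DCT-plus-SVM stage of such a pipeline is shown below. It assumes the lip region has already been localized (e.g., by the Active Contour Model) and cropped to a fixed-size grayscale patch; the patch size, coefficient count, and toy data are illustrative assumptions:

```python
# Sketch of 2D-DCT feature extraction followed by SVM classification.
# Assumes the lip ROI is already localized and cropped; sizes are illustrative.
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVC

def dct_features(lip_patch: np.ndarray, k: int = 8) -> np.ndarray:
    """2D DCT of a grayscale lip patch; keep the k x k low-frequency block."""
    coeffs = dctn(lip_patch.astype(float), norm="ortho")
    return coeffs[:k, :k].ravel()

# Toy data standing in for lip-image frames of two consonant classes.
rng = np.random.default_rng(0)
X = np.stack([dct_features(rng.random((64, 64))) for _ in range(40)])
y = np.repeat([0, 1], 20)  # e.g. two consonant labels

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```

In the proposed system, LDA would additionally be applied between the DCT step and the SVM to project the coefficients onto class-discriminative directions; that step is omitted here for brevity.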
Procedia PDF Downloads 286133 Morpho-Agronomic Response to Water Stress of Some Nigerian Bambara Groundnut (Vigna Subterranea (L.) Verdc.) Germplasm and Genetic Diversity Studies of Some Selected Accessions Using Ssr Markers
Authors: Abejide Dorcas Ropo, Falusi Olamide Ahmed, Daudu Oladipupo Abdulazeez Yusuf, Salihu Bolaji Zuluquri Neen, Muhammad Liman Muhammad, Gado Aishatu Adamu
Abstract:
Water stress is a major factor limiting the productivity of crops in the world today. This study evaluated the morpho-agronomic response of twenty-four (24) Nigerian Bambara groundnut landraces to water stress and the genetic diversity of some selected accessions using SSR markers. The study was carried out in the Botanical Garden of the Department of Plant Biology, Federal University of Technology, Minna, Niger State, Nigeria, in a randomized complete block design with three replicates. Molecular analysis using SSR primers was carried out at the Centre for Bioscience, International Institute of Tropical Agriculture (IITA), Ibadan, Nigeria, in order to characterize ten selected accessions comprising the seven most drought-tolerant and the three most susceptible accessions identified in the morpho-agronomic studies. The results revealed that water stress decreased morpho-agronomic traits such as plant height, leaf area, number of leaves per plant, and seed yield. A total of 22 alleles were detected by the SSR markers used, with a mean of 4 alleles per marker: markers MBamCO33, Primer 65, and G358B2-D15 each detected 4 alleles, while Primer 3FR and Primer 4FR detected 5 alleles each. The study revealed significantly high polymorphism at 10 loci. The mean polymorphic information content was 0.6997, indicating the usefulness of the primers in identifying genetic similarities and differences among the Bambara groundnut genotypes. The SSR analysis revealed a pattern comparable to the drought tolerance of the genotypes. The Unweighted Pair Group Method with Arithmetic Mean (UPGMA) dendrogram showed that, at a genetic distance of 0.1, the accessions were grouped into three clusters according to their level of drought tolerance: the two most drought-tolerant accessions were grouped together, and the 5th and 6th most drought-tolerant accessions were also grouped together. This suggests that the genotypes grouped together may be genetically close, may possess similar genes, or may have a common origin. The degree of genetic variation obtained could be useful in Bambara groundnut breeding for drought tolerance. The identified drought-tolerant Bambara groundnut landraces are important genetic resources for drought stress tolerance breeding programmes, and the genotypes are also useful for germplasm conservation, with global implications. Keywords: bambara groundnut, genetic diversity, germplasm, SSR markers, water stress
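The polymorphic information content reported above (mean 0.6997) is conventionally computed with the Botstein et al. (1980) formula; a minimal sketch follows, with made-up allele frequencies rather than data from the study:

```python
# Polymorphic information content (PIC) for one SSR locus, after
# Botstein et al. (1980): PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2.
# The allele frequencies below are invented for illustration.
from itertools import combinations

def pic(freqs):
    assert abs(sum(freqs) - 1.0) < 1e-9, "frequencies must sum to 1"
    homo = sum(p * p for p in freqs)
    double_het = sum(2 * (pi * pj) ** 2 for pi, pj in combinations(freqs, 2))
    return 1.0 - homo - double_het

# A 4-allele locus, such as those detected by MBamCO33 (frequencies assumed):
print(round(pic([0.4, 0.3, 0.2, 0.1]), 4))  # -> 0.6454
```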
Procedia PDF Downloads 20132 Redesigning Clinical and Nursing Informatics Capstones
Authors: Sue S. Feldman
Abstract:
As clinical and nursing informatics mature, an area that has received a lot of attention is the value of capstone projects. Capstones are meant to address authentic and complex domain-specific problems. While capstone projects have not always been essential in graduate clinical and nursing informatics education, employers want to see evidence of a prospective employee's knowledge and skills as an indication of employability. Capstones can be organized in many ways: as a single course over a single semester or multiple courses over multiple semesters; as a targeted demonstration of skills or a synthesis of prior knowledge and skills; mentored by a single person or by various people; submitted as an assignment or presented in front of a panel. Because of the potential for capstones to enhance the educational experience, and because they serve as a mechanism for applying knowledge and demonstrating skills, a rigorous capstone can accelerate a graduate's potential in the workforce. In 2016, the capstone at the University of Alabama at Birmingham (UAB) began to feel the external forces of a maturing clinical and nursing informatics discipline. While the program had had a capstone course for many years, it lacked the depth of knowledge and demonstration of skills being asked for by those hiring in a maturing informatics field. Since the program is online, capstones were always completed in the online environment; while this modality did not change, other aspects of instruction did. Before 2016, the instruction modality was self-guided: students checked in with a single instructor, and that instructor monitored progress across all capstones toward a PowerPoint and written-paper deliverable. At the time, enrollment was low, and the maturing field had not yet pushed hard enough. By 2017, doubling enrollment and the increased demand for a more rigorously trained workforce led to restructuring the capstone so that graduates would acquire and retain the skills learned in the capstone process. There were three major changes: the capstone was broken up into a three-course sequence (meaning it lasted about 10 months instead of 14 weeks), the deliverables were divided into many chunks, and each faculty member advised a cadre of about five students through the capstone process. The literature suggests that chunking, i.e., breaking complex projects (the capstone in one summer) into smaller, more manageable pieces (chunks of the capstone across three semesters), can increase and sustain learning while allowing for increased rigor. With this change, the teaching responsibility was shared across the faculty, with each semester's course taught by a different faculty member. This facilitated delving much deeper into instruction and produced a significantly more rigorous final deliverable. Having students advised across the faculty not only shared the load but also shared the success of the students. Furthermore, it meant that students could be placed with an academic advisor with expertise in their capstone area, further increasing the rigor of the entire capstone process and project and increasing student knowledge and skills. Keywords: capstones, clinical informatics, health informatics, informatics
Procedia PDF Downloads 133131 New Bio-Strategies for Ochratoxin a Detoxification Using Lactic Acid Bacteria
Authors: José Maria, Vânia Laranjo, Luís Abrunhosa, António Inês
Abstract:
The occurrence of mycotoxigenic moulds such as Aspergillus, Penicillium and Fusarium in food and feed has an important impact on public health through the appearance of acute and chronic mycotoxicoses in humans and animals, which is more severe in developing countries due to lack of food security, poverty and malnutrition. This mould contamination also constitutes a major economic problem due to the loss of crop production. A great variety of filamentous fungi are able to produce highly toxic secondary metabolites known as mycotoxins. Most mycotoxins are carcinogenic, mutagenic, neurotoxic and immunosuppressive, ochratoxin A (OTA) being one of the most important. OTA is toxic to animals and humans, mainly due to its nephrotoxic properties. Several approaches have been developed for the decontamination of mycotoxins in foods, such as prevention of contamination, biodegradation of mycotoxin-containing food and feed with microorganisms or enzymes, and inhibition or adsorption of the mycotoxin content of consumed food in the digestive tract. A group of Gram-positive bacteria named lactic acid bacteria (LAB) are able to release molecules that can influence mould growth, improving the shelf life of many fermented products and reducing health risks due to exposure to mycotoxins, and some LAB are capable of mycotoxin detoxification. Recently, our group was the first to describe the ability of LAB strains to biodegrade OTA, more specifically Pediococcus parvulus strains isolated from Douro wines. The pathway of this biodegradation had been identified previously in other microorganisms: OTA can be degraded through hydrolysis of the amide bond that links the L-β-phenylalanine molecule to ochratoxin alpha (OTα), a non-toxic compound. It is known that peptidases from different origins can mediate this hydrolysis reaction, such as carboxypeptidase A, an enzyme from the bovine pancreas, a commercial lipase, and several commercial proteases. We therefore wanted a better understanding of this OTA degradation process when LAB are involved, and to identify which molecules are present in the process. To achieve our aim we used several bioinformatics tools (BLAST, CLUSTALX2, CLC Sequence Viewer 7, FinchTV), designed specific primers, and performed gene-specific PCR. The template DNA came from the LAB strains of our previous work and from other LAB strains isolated from elderberry fruit, silage, milk and sausages. Through the bioinformatics tools it was possible to identify several proteins belonging to the carboxypeptidase family that participate in the process of OTA degradation, such as serine-type D-Ala-D-Ala carboxypeptidase and membrane carboxypeptidase. In conclusion, this work identified carboxypeptidase proteins as among the molecules present in the OTA degradation process when LAB are involved. Keywords: carboxypeptidase, lactic acid bacteria, mycotoxins, ochratoxin A
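The gene-specific screening step can be illustrated with a toy in-silico check for primer binding sites on both strands of a target sequence. The primer and target below are invented for illustration and are not the carboxypeptidase primers used in the study:

```python
# Toy screen for exact primer binding sites on both strands of a target.
# The primer and target sequences are invented, not the study's actual primers.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def binding_sites(target: str, primer: str):
    """Yield (strand, position) for every exact primer match."""
    for strand, seq in (("+", target), ("-", revcomp(target))):
        pos = seq.find(primer)
        while pos != -1:
            yield strand, pos
            pos = seq.find(primer, pos + 1)

target = "ATGGCTTGACGGATCCTTGAAGCCATCGTCAAGCGGATCCGTCA"
primer = "GGATCC"
print(list(binding_sites(target, primer)))
```

A real workflow would additionally tolerate mismatches and check primer pairs for product size, which is where tools like BLAST come in.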
Procedia PDF Downloads 462130 Developing Confidence of Visual Literacy through Using MIRO during Online Learning
Authors: Rachel S. E. Lim, Winnie L. C. Tan
Abstract:
Visual literacy is about making meaning through the interaction of images, words, and sounds. Graphic communication students typically develop visual literacy through the critique and production of studio-based projects for their portfolios. However, the abrupt switch to online learning during the COVID-19 pandemic has made it necessary to consider new strategies of visualization and planning to scaffold teaching and learning. This study therefore investigated how MIRO, a cloud-based visual collaboration platform, could be used to develop the visual literacy confidence of 30 Diploma in Graphic Communication students attending a graphic design course at a Singapore arts institution. Due to COVID-19, the course was taught fully online throughout a 16-week semester. Guided by Kolb's Experiential Learning Cycle, the two lecturers developed students' engagement with visual literacy concepts through different activities that facilitated concrete experience, reflective observation, abstract conceptualization, and active experimentation. Throughout the semester, students created, collaborated, and centralized communication in MIRO using its infinite canvas, smart frameworks, a robust set of widgets (sticky notes, freeform pen, shapes, arrows, smart drawing, emoticons, etc.), and platform capabilities that enable asynchronous and synchronous feedback and interaction. Students then drew upon these multimodal experiences to brainstorm, research, and develop their motion design project. A survey was used to examine students' perceptions of engagement (E), confidence (C), and learning strategies (LS). Using multiple regression, it was found that the use of MIRO helped students develop confidence (C) in visual literacy, which predicted the performance score (PS) measured on their application of visual literacy in the creation of their motion design project. While students' learning strategies (LS) with MIRO did not directly predict confidence (C) or performance score (PS), they fostered positive perceptions of engagement (E), which in turn predicted confidence (C). Content analysis of students' open-ended survey responses about their learning strategies (LS) showed that MIRO provides organization and structure in documenting learning progress, in tandem with establishing standards and expectations as a preparatory ground for generating feedback. With the clarity and sequence of these conditions in place, the prerequisites lead to the next level of personal action: self-reflection, self-directed learning, and time management. The results show that the affordances of MIRO can develop visual literacy and make up for the potential pitfalls of student isolation, communication, and engagement during online learning. How MIRO could be used by lecturers to orientate students for learning in visual literacy and studio-based projects is discussed for future development. Keywords: design education, graphic communication, online learning, visual literacy
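The chain of regressions implied by the analysis (E predicts C; C predicts PS) can be sketched as follows. The variable names, toy data, and effect sizes are invented for illustration, not the study's data:

```python
# Sketch of the mediation-style regression structure reported above:
# engagement (E) -> confidence (C) -> performance score (PS).
# The toy data and coefficients are invented, not the study's results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 30  # matches the cohort size in the study
E = rng.normal(4.0, 0.5, n)                 # perceived engagement with MIRO
C = 0.8 * E + rng.normal(0, 0.3, n)         # confidence, driven by E
PS = 10 * C + rng.normal(0, 2.0, n)         # performance score, driven by C
df = pd.DataFrame({"E": E, "C": C, "PS": PS})

print(smf.ols("C ~ E", df).fit().params)    # E -> C path
print(smf.ols("PS ~ C", df).fit().params)   # C -> PS path
```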
Procedia PDF Downloads 112129 Reproductive Behavior of the Red Sea Immigrant Lagocephalus sceleratus (Gmelin, 1789) from the Mediterranean Coast, Egypt
Authors: Mahmoud M. S. Farrag, Alaa A. K. Elhaweet, El-Sayed Kh. A. Akel, Mohsen A. Moustafa
Abstract:
The present work aimed to study the reproductive strategy of the common Lessepsian puffer fish Lagocephalus sceleratus (Gmelin, 1789) from Egyptian Mediterranean waters. It is a well-known migratory species that plays an important role in fisheries and in the ecology of the aquatic ecosystem. Specimens were collected monthly from the landing centers along the Egyptian Mediterranean coast during 2012. Seven maturity stages were recorded: (I) thread-like stage; (II) immature (virgin) stage; (III) maturing stage (developing virgin and recovering spent); (IV) nearly ripe stage; (V) fully ripe stage; (VI) spawning stage; (VII) spent stage. Males outnumbered females, representing 52.44% of the total fish, with a sex ratio of 1:0.91. The fish length corresponding to 50% maturity was 38.5 cm for males and 41 cm for females; the corresponding ages at first maturity are 2.14 and 2.27 years for males and females, respectively. The ova diameter ranged from 0.02 mm to 0.85 mm, with mature ova ranging from 0.16 mm to 0.85 mm and showing a progressive increase from April towards September; the mature and ripe eggs in the ovaries formed a single peak of ova diameters during the spawning period. The relationship between gutted weight and absolute fecundity indicated that fecundity increased as the fish grew in weight. The absolute fecundity ranged from 260,288 to 2,372,931 for fish weighing from 698 to 3285 g, with an average of 1,449,522 ± 720,975; the relative fecundity ranged from 373 to 722, with an average of 776 ± 231. The spawning season of L. sceleratus, investigated from the gonado-somatic index, the monthly distribution of maturity stages over the year, and the sequence of ova diameters in mature stages, is relatively prolonged, extending from April for both sexes and ending in August for males and September for females. The fish releases its ripe ova in one batch during the spawning season. Histologically, the ovarian cycle of L. sceleratus was classified into six stages and the testicular cycle into five stages. The histological characters of the gonads during the year of study confirmed the results from the distribution of maturity stages, the gonado-somatic index, and the ova diameters, indicating that this species has a prolonged spawning season from April to September. The species is considered a total (single-batch) spawner with group-synchronous development, as the gonad contained one to two developmental stages at the same time. Keywords: Lagocephalus sceleratus, reproductive biology, oogenesis, histology
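Two of the quantities used above can be written out for orientation: the gonado-somatic index and the logistic fit that yields the length at 50% maturity. The formulas below are the conventional ones, and the sample numbers are invented, not data from the study:

```python
# Conventional gonado-somatic index and logistic maturity-ogive fit.
# The formulas are standard; the sample numbers below are invented.
import numpy as np
from scipy.optimize import curve_fit

def gsi(gonad_weight_g: float, gutted_weight_g: float) -> float:
    """Gonado-somatic index: gonad weight as a percentage of gutted weight."""
    return 100.0 * gonad_weight_g / gutted_weight_g

def logistic(L, L50, r):
    """Proportion mature at length L; L50 is the length at 50% maturity."""
    return 1.0 / (1.0 + np.exp(-r * (L - L50)))

# Invented proportion-mature-at-length data for the fit:
lengths = np.array([30, 34, 38, 42, 46, 50], dtype=float)
p_mature = np.array([0.05, 0.2, 0.5, 0.8, 0.95, 1.0])
(L50, r), _ = curve_fit(logistic, lengths, p_mature, p0=[40.0, 0.5])
print(f"GSI example: {gsi(55.0, 1450.0):.2f}%   L50 estimate: {L50:.1f} cm")
```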
Procedia PDF Downloads 304128 Damage Tolerance of Composites Containing Hybrid, Carbon-Innegra, Fibre Reinforcements
Authors: Armin Solemanifar, Arthur Wilkinson, Kinjalkumar Patel
Abstract:
Carbon fibre (CF) - polymer laminate composites have very low densities (approximately 40% lower than aluminium), high strength, and high stiffness, but in terms of toughness they often require modification; for example, adding rubber or thermoplastic toughening agents is a common way of improving the interlaminar fracture toughness of initially brittle thermoset composite matrices. The main aim of this project was to toughen CF-epoxy laminate composites using hybrid CF fabrics incorporating Innegra™, a commercial highly oriented polypropylene (PP) fibre in which more than 90% of the crystal orientation is parallel to the fibre axis. In this study, the damage tolerance of hybrid carbon-Innegra (CI) composites was investigated. Laminate composites were produced by resin infusion using pure CF fabric, fabrics with different ratios of commingled CI, and two different types of pure Innegra fabric (Innegra 1 and Innegra 2). Dynamic mechanical thermal analysis (DMTA) was used to measure the glass transition temperature (Tg) of the composite matrix and values of flexural storage modulus versus temperature. Mechanical testing included drop-weight impact, compression after impact (CAI), and interlaminar (short-beam) shear strength (ILSS). Ultrasonic C-scan imaging was used to determine the impact damage area, and scanning electron microscopy (SEM) to observe the fracture mechanisms occurring during failure of the composites. For all composites, eight layers of fabric were used with a quasi-isotropic sequence of [-45°, 0°, +45°, 90°]s. DMTA showed the Tg of all composites to be approximately the same (123 ± 3°C) and the flexural storage modulus (before the onset of Tg) to be highest for the pure CF composite and lowest for the Innegra 1 and 2 composites. The short-beam shear strength of the commingled composites was higher than that of the other composites, while the Innegra 1 and 2 composites showed only inelastic deformation failure during the short-beam test. During impact, the Innegra 1 composite withstood up to 40 J without perforation, whereas for CF, perforation occurred at 10 J. The rate of reduction in compression strength with increasing impact energy was lowest for the Innegra 1 and 2 composites and highest for CF; on the other hand, the compressive strength of the CF composite was the highest of all the composites at all impact energy levels. The predominant failure modes observed in cross-sections of fractured Innegra composite specimens were fibre pull-out, micro-buckling, and fibre plastic deformation, while fibre breakage and matrix delamination were the major failure modes observed in the commingled composites due to the more brittle behaviour of CF. Thus, Innegra fibres toughened the CF composites, but only at the expense of reduced compressive strength. Keywords: hybrid composite, thermoplastic fibre, compression strength, damage tolerance
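The quasi-isotropic claim for the [-45°, 0°, +45°, 90°]s stacking sequence can be verified with classical lamination theory: the extensional stiffness (A) matrix should show A11 = A22 and no extension-shear coupling. The ply constants below are assumed illustrative values, not properties reported in the abstract:

```python
# Classical lamination theory check that [-45/0/+45/90]s is quasi-isotropic
# in extension. Ply constants are illustrative assumptions (MPa, mm).
import numpy as np

E1, E2, G12, nu12, t = 140e3, 9e3, 5e3, 0.3, 0.25
nu21 = nu12 * E2 / E1
den = 1 - nu12 * nu21
Q = np.array([[E1 / den, nu12 * E2 / den, 0],
              [nu12 * E2 / den, E2 / den, 0],
              [0, 0, G12]])            # reduced stiffness in ply axes

def qbar(theta_deg):
    """Reduced stiffness rotated to the laminate axes: T^-1 Q R T R^-1."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    T = np.array([[c * c, s * s, 2 * c * s],
                  [s * s, c * c, -2 * c * s],
                  [-c * s, c * s, c * c - s * s]])
    R = np.diag([1.0, 1.0, 2.0])       # engineering-strain conversion
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

layup = [-45, 0, 45, 90, 90, 45, 0, -45]   # symmetric 8-ply stack
A = sum(qbar(a) * t for a in layup)        # extensional stiffness matrix
print(np.round(A, 1))  # expect A11 == A22 and A16 == A26 == 0
```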
Procedia PDF Downloads 295127 Computational Homogenization of Thin Walled Structures: On the Influence of the Global vs Local Applied Plane Stress Condition
Authors: M. Beusink, E. W. C. Coenen
Abstract:
The increased application of novel structural materials, such as high-grade asphalt, concrete, and laminated composites, has sparked the need for a better understanding of the often complex, non-linear mechanical behavior of such materials. The effective macroscopic mechanical response is generally dependent on the applied load path and is also significantly influenced by the microstructure of the material, e.g., embedded fibers, voids, and/or grain morphology. At present, multiscale techniques are widely adopted to assess micro-macro interactions in a numerically efficient way. Computational homogenization techniques have been successfully applied over a wide range of engineering cases, e.g., cases involving first-order and second-order continua, thin shells, and cohesive zone models. Most of these homogenization methods rely on Representative Volume Elements (RVEs), which model the relevant microstructural details in a confined volume. Through imposed kinematical constraints or boundary conditions, an RVE can be subjected to a microscopic load sequence. This provides the RVE's effective stress-strain response, which can serve as constitutive input for macroscale analyses. Simultaneously, such a study of an RVE gives insight into fine-scale phenomena such as microstructural damage and its evolution. It has been reported by several authors that the type of boundary conditions applied to the RVE affects the resulting homogenized stress-strain response; as a consequence, dedicated boundary conditions have been proposed to deal appropriately with this concern. For the specific case of a planar assumption for the analyzed structure, e.g., plane strain, axisymmetry, or plane stress, this assumption needs to be addressed consistently at all considered scales. Although a planar condition has been employed in many multiscale studies, its impact on the multiscale solution has not been explicitly investigated. This work therefore focuses on the influence of the planar assumption in multiscale modeling. In particular, the plane stress case is highlighted by proposing three different implementation strategies compatible with a first-order computational homogenization framework. The first method consists of applying classical plane stress theory at the microscale, whereas in the second method a generalized plane stress condition is assumed at the RVE level. In the third method, the plane stress condition is applied at the macroscale by requiring that the resulting macroscopic out-of-plane forces equal zero. These strategies are assessed through a numerical study of a thin-walled structure, and the resulting effective macroscale stress-strain responses are compared. It is shown that the length scale at which the planar condition is applied has a clear influence. Keywords: first-order computational homogenization, planar analysis, multiscale, microstructures
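For orientation, the classical plane stress theory invoked in the first strategy amounts to statically condensing the out-of-plane components out of the elasticity tensor. This is the standard textbook reduction, written here in Voigt notation, not a formula taken from the paper:

```latex
% Classical plane-stress reduction: enforce sigma_3 = 0 and condense the
% out-of-plane strain out of the 3D stiffness (Voigt notation).
\sigma_{3} = 0
\;\Rightarrow\;
\varepsilon_{3} = -\frac{C_{31}\,\varepsilon_{1} + C_{32}\,\varepsilon_{2}}{C_{33}},
\qquad
Q_{ij} = C_{ij} - \frac{C_{i3}\,C_{3j}}{C_{33}}, \quad i,j \in \{1,2,6\}.
```

Applying this condensation at the microscale (per material point of the RVE) is what distinguishes the first strategy from the second and third, where the zero-out-of-plane-force condition is instead enforced at the RVE or macroscale level.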
Procedia PDF Downloads 233126 Recognising and Managing Haematoma Following Thyroid Surgery: Simulation Teaching is Effective
Authors: Emily Moore, Dora Amos, Tracy Ellimah, Natasha Parrott
Abstract:
Postoperative haematoma is a well-recognised complication of thyroid surgery, with an incidence of 1-5%. Haematoma formation causes progressive airway obstruction, necessitating emergency bedside haematoma evacuation in up to a quarter of patients. ENT UK, BAETS, and DAS have developed consensus guidelines to improve perioperative care, recommending that all healthcare staff interacting with patients undergoing thyroid surgery be trained in managing post-thyroidectomy haematoma. The aim was to assess the effectiveness of a hybrid simulation model in improving clinicians' confidence in dealing with this surgical emergency. A hybrid simulation was designed, consisting of a standardised patient wearing a part-task trainer to mimic a post-thyroidectomy haematoma in a real patient. The part-task trainer was an adapted C-spine collar with layers of silicone representing the skin and strap muscles and thickened jelly representing the haematoma; both the skin and strap-muscle layers had to be opened in order to evacuate the haematoma. Boxes were placed in the appropriate postoperative areas (recovery and surgical wards), each containing a printed algorithm designed to assist in remembering the sequence of steps for haematoma evacuation using the 'SCOOP' method (Skin exposure, Cut sutures, Open skin, Open muscles, Pack wound), along with all the necessary equipment to open the front of the neck. Small-group teaching sessions were delivered by ENT and anaesthetic trainees to members of the multidisciplinary team normally involved in perioperative patient care, including ENT surgeons, anaesthetists, recovery nurses, HCAs, and ODPs. The DESATS acronym of signs and symptoms to recognise (Difficulty swallowing, EWS score, Swelling, Anxiety, Tachycardia, Stridor) was highlighted, and participants then took part in the hybrid simulation to practise the SCOOP method of haematoma evacuation. Participants were surveyed using a Likert scale to assess their level of confidence pre- and post-teaching session. Thirty clinicians took part. Confidence (agreed/strongly agreed) in recognition of post-thyroidectomy haematoma improved from 58.6% to 96.5%, and confidence in management improved from 27.5% to 89.7%. All participants successfully decompressed the haematoma, and all agreed/strongly agreed that the sessions were useful for their learning. Multidisciplinary team simulation teaching is effective at significantly improving confidence in both the recognition and the management of postoperative haematoma. Hybrid simulation sessions are useful and should be incorporated into training for clinicians. Keywords: thyroid surgery, haematoma, teaching, hybrid simulation
Procedia PDF Downloads 96125 High Impact Biostratigraphic Study
Abstract:
The re-calibration of the Campanian to Maastrichtian of some parts of the Anambra Basin was carried out using samples from two exploration wells, Amama-1 (219 m – 1829 m) and Bara-1 (317 m – 1594 m). Palynological and paleontological analyses were carried out on 100 ditch-cutting samples. The faunal and floral successions were of terrestrial and marine origin, as described and logged. The wells penetrated four stratigraphic units in the Anambra Basin (the Nkporo, Mamu, Ajali, and Nsukka formations) and yielded well-preserved foraminifera and palynomorphs. Amama-1 yielded 53 species of foraminifera and 69 species of palynomorphs in 12 genera; Bara-1 yielded 25 species of foraminifera and 101 species of palynomorphs. Amama-1 permitted the recognition of 21 genera, with 31 foraminiferal assemblage zones, 32 pollen and 37 spore assemblage zones, and dinoflagellate-cyst biozonation ranging from the late Campanian to the early Paleocene; Bara-1 yielded 60 pollen and 41 spore assemblage zones and 18 dinoflagellate cysts. The zones, in stratigraphically ascending order, are as follows. Foraminifera, Amama-1: Biozone A, Globotruncanella havanensis zone, late Campanian – Maastrichtian (695 – 1829 m); Biozone B, Morozovella velascoensis zone, early Paleocene (165 – 695 m). Bara-1: Biozone A, Globotruncanella havanensis zone, late Campanian (1512 m); Biozone B, Bolivina afra / B. explicata zone, Maastrichtian (634 – 1204 m); Biozone C, indeterminate (305 – 634 m). Palynology, Amama-1: A, Ctenolophonidites costatus zone, early Maastrichtian (1829 m); B, Retidiporites miniporatus zone, late Maastrichtian (1274 m); C, Constructipollenites ineffectus zone, early Paleocene (695 m). Bara-1: A, Droseridites senonicus zone, late Campanian (994 – 1600 m); B, Ctenolophonidites costatus zone, early Maastrichtian (713 – 994 m); C, Retidiporites miniporatus zone, late Maastrichtian (305 – 713 m). The paleo-environments of deposition were determined to range from non-marine to outer neritic. A detailed categorization of the palynomorphs into terrestrially derived and marine-derived forms, based on the distribution of three broad vegetation types (mangrove, freshwater swamp, and hinterland communities), was used to evaluate sea-level fluctuations with respect to the sediments deposited in the basin, linked to particular depositional system tracts. Amama-1 recorded four maximum flooding surfaces (MFS) at depths of 165 – 1829 m, dated between 61 Ma and 76 Ma, and three sequence boundaries (SB) at depths of 1048 m – 1533 m and 1581 m; Bara-1 recorded flooding surfaces between 634 m and 1387 m, dated between 69.5 Ma and 82 Ma, and four sequence boundaries at 552 m – 876 m, dated between 68 Ma and 77.5 Ma. The ecostratigraphic description is characterised by the prominent expansion of the hinterland component, consisting of the mangrove to lowland rainforest and Afromontane-savannah vegetation. Keywords: foraminifera, palynomorphs, Campanian, Maastrichtian, ecostratigraphy, Anambra
Procedia PDF Downloads 29