Search results for: chaotic sequence denoising
149 Time of Death Determination in Medicolegal Death Investigations
Authors: Michelle Rippy
Abstract:
Medicolegal death investigation has historically received little research attention or advancement, as all of its subjects are deceased. Public health threats, drug epidemics and contagious diseases are typically recognized in decedents first, and thorough, accurate death investigations can assist epidemiological research and prevention programs. One vital component of medicolegal death investigation is determining the decedent’s time of death. An accurate time of death can assist in corroborating alibis, determining the sequence of death in multiple-casualty circumstances and providing vital facts in civil situations. Popular television portrays an unrealistic forensic ability to provide the exact time of death, to the minute, for someone found deceased with no witnesses present. In reality, the time of death of an unattended decedent can generally only be narrowed to a 4-6 hour window. In the mid- to late-20th century, taking liver temperatures was an invasive procedure used by death investigators to determine the decedent’s core temperature. The core temperature was entered into an equation to estimate the time of death. Due to many inconsistencies with the placement of the thermometer and other variables, the accuracy of liver temperatures was called into question and this once commonplace procedure lost scientific support. Currently, medicolegal death investigators rely on three major after-death, or post-mortem, changes at a death scene. Many factors enter into the subjective determination of the time of death, including the cooling of the decedent, stiffness of the muscles, internal settling of blood, clothing, ambient temperature, disease and recent exercise. Current research is using non-invasive, hospital-grade tympanic thermometers to measure the temperature in each of the decedent’s ears. This tool can be used at the scene and, in conjunction with scene indicators, may provide a more accurate time of death.
The research is significant to investigations and can bring accuracy to a historically imprecise area, considerably improving criminal and civil death investigations. The goal of the research is to provide a scientific basis for unwitnessed death determinations, which currently remain more art than science. The research is in progress, with expected completion in December 2018. There are currently 15 completed case studies with vital information including the ambient temperature, the decedent’s height/weight/sex/age, layers of clothing, found position, whether medical intervention occurred and whether the death was witnessed. These data will be analyzed across the multiple variables studied and will be available for presentation in January 2019.
Keywords: algor mortis, forensic pathology, investigations, medicolegal, time of death, tympanic
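The historical core-temperature equation mentioned above can be illustrated with the Glaister rule of thumb; this is a sketch of that classical formula for context, not the tympanic method under study here:

```python
def glaister_hours_since_death(core_temp_f: float) -> float:
    """Glaister's rule of thumb: hours since death is roughly
    (98.4 - core temperature in Fahrenheit) / 1.5, assuming a normal
    ante-mortem temperature of 98.4 F and cooling of ~1.5 F per hour."""
    return (98.4 - core_temp_f) / 1.5

# A decedent found with a core temperature of 92.4 F:
print(round(glaister_hours_since_death(92.4), 2))  # -> 4.0
```

In practice the estimate must be adjusted for ambient temperature, clothing, body mass and the other scene indicators listed above, which is why the achievable window remains around 4-6 hours.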
Procedia PDF Downloads 118
148 Developing Communicative Skills in Foreign Languages by Video Tasks
Authors: Ekaterina G. Lipatova
Abstract:
The developing potential of video tasks in teaching foreign languages lies in the opportunity to improve four aspects of the speech production process: listening, reading, speaking and writing. A video presents a sequence of actions, realized in logically connected pictures and a verbalized speech flow, which simplifies and stimulates the process of perception. Students’ listening skills are thereby developed effectively, along with intellectual faculties such as synthesizing, analyzing and generalizing information. In terms of teaching capacity, a video task is, in our opinion, more stimulating than a traditional listening exercise, since it involves the student in the plot and emotional background of the communicative situation and potentially makes them react to the gist in cognitive and communicative ways. To be an effective teaching method, the video task should be structured according to the psycholinguistic characteristics of the speech production process; in other words, it should include three phases: before watching, while watching and after watching. The tasks provided for each phase might involve responding to the video content in the form of gap-filling, multiple-choice and true-or-false tasks (reading skills), or exercises in expressing an opinion and completing projects (writing and speaking skills). In the before-watching phase, we ask students to attune their perception to the topic and problem of the chosen video with questions such as “What do you know about this problem?”, “Is it new to you?” and “Have you ever faced a situation of…?”. Then we proceed with the lexical and grammatical analysis of the language units that form the body of the speech sample, to ease perception and develop the students’ lexicon.
The goal of the while-watching phase is to build the students’ awareness of the problem presented in the video and to challenge their inner attitude towards what they have seen, by identifying mistakes in statements about the video content or making a summary that justifies their understanding. Finally, we move on to developing their speech skills within the communicative situation they have observed and learnt, by stimulating them to search for similar ideas in their own backgrounds and present them orally or in written form, or to express their own opinion on the problem. It is important to highlight that a video task should contain a current, valid and interesting event related to the student’s future profession, since this helps to activate students’ cognitive, emotional, verbal and ethical capacities. Logically structured video tasks are also easily integrated into e-learning systems and give students the opportunity to work with the foreign language on their own.
Keywords: communicative situation, perception mechanism, speech production process, speech skills
Procedia PDF Downloads 245
147 Medicinal Plants: An Antiviral Depository with Complex Mode of Action
Authors: Daniel Todorov, Anton Hinkov, Petya Angelova, Kalina Shishkova, Venelin Tsvetkov, Stoyan Shishkov
Abstract:
Human herpes viruses (HHV) are ubiquitous pathogens with a pandemic spread across the globe. HHV type 1 is the main causative agent of cold sores and fever blisters around the mouth and on the face, whereas HHV type 2 is generally responsible for genital herpes outbreaks. Treatment of both viruses is more or less successful with antivirals from the nucleoside analogue group, but their wide application increasingly leads to the emergence of resistant mutants. Medicinal plants have long been used to treat a number of infectious and non-infectious diseases. Their diversity and their ability to produce a vast variety of secondary metabolites according to the characteristics of their environment give them the potential to help in the fight against viral infections. Their variable chemical characteristics and complex composition are an advantage in the treatment of herpes, since they significantly complicate the emergence of resistant mutants. The screening process is difficult due to the lack of standardization, which is why it is especially important to follow the mechanism of a plant’s antiviral action. Interactions among a plant’s compounds may result in enhanced antiviral effects, and the most appropriate environmental conditions can be chosen to maximize the amount of active secondary metabolites. In this study, we followed the activity of various plant extracts on the viral replication cycle as well as their effect on the extracellular virion. We obtained our results following the logical sequence of the experimental settings: determining the cytotoxicity of the extracts, then evaluating the overall effect on viral replication and on the extracellular virion.
We investigated the effect of the extracts on the individual stages of the viral replication cycle: viral adsorption, penetration, and replication depending on the time of addition. Our results indicate that some of the extracts from Lamium album have several targets, with the first stages of the viral life cycle most affected. Several of our active antiviral agents have shown an effect on the extracellular virion and on the adsorption and penetration processes. Our research over the last decade has identified several promising antiviral plants, some of which belong to the family Lamiaceae. The rich set of active ingredients of the plants in this family makes them a good source of antiviral preparations.
Keywords: human herpes virus, antiviral activity, Lamium album, Nepeta nuda
Procedia PDF Downloads 154
146 A Novel Chicken W Chromosome Specific Tandem Repeat
Authors: Alsu F. Saifitdinova, Alexey S. Komissarov, Svetlana A. Galkina, Elena I. Koshel, Maria M. Kulak, Stephen J. O'Brien, Elena R. Gaginskaya
Abstract:
The mystery of sex determination is one of the most ancient and still not fully solved. In many species, sex determination is genetic and often accompanied by the presence of dimorphic sex chromosomes in the karyotype. Genomic sequencing has provided information about the gene content of sex chromosomes, making it possible to reveal their origin from ordinary autosomes and to trace their evolutionary history. The female-specific W chromosome in birds, like the mammalian male-specific Y chromosome, is characterized by degeneration of gene content and accumulation of repetitive DNA. Tandem repeats complicate the analysis of genomic data: despite the best efforts, the chicken W chromosome assembly includes only 1.2 Mb of the expected 55 Mb. Supplementing the information on sex chromosome composition not only helps to complete genome assemblies but also moves us towards understanding the evolution of sex-determination systems. A whole-genome survey was applied to the assembly Gallus_gallus WASHUC 2.60 to search for repeats in the assembled genome, and a search and assembly of high-copy-number repeats was performed on unassembled reads of the SRR867748 short-read dataset. For cytogenetic analysis, conventional fluorescent in situ hybridization was used for previously cloned W-specific satellites, and a specifically designed, directly labeled synthetic oligonucleotide DNA probe was used for the bioinformatically identified repetitive sequence. Hybridization was performed on mitotic chicken chromosomes and on manually isolated giant meiotic lampbrush chromosomes from growing oocytes. A novel chicken W-specific satellite, (GGAAA)n, which does not co-localize with any previously described class of W-specific repeats, was identified and mapped at high resolution. Within autosomes, units of this repeat were found in the upstream regions of gonad-specific protein-coding sequences.
These findings may contribute to the understanding of the role of tandem repeats in the regulation of sex-specific differentiation in birds and in sex chromosome evolution. This work was supported by postdoctoral fellowships from St. Petersburg State University (#1.50.1623.2013 and #1.50.1043.2014), the grant for Leading Scientific Schools (#3553.2014.4) and a grant from the Russian Foundation for Basic Research (#15-04-05684). The equipment and software of the Research Resource Center “Chromas” and the Theodosius Dobzhansky Center for Genome Bioinformatics of Saint Petersburg State University were used.
Keywords: birds, lampbrush chromosomes, sex chromosomes, tandem repeats
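Locating a tandem-repeat array such as the (GGAAA)n satellite described above can be sketched as a simple scan over a sequence string; the helper below and the toy sequence are illustrative assumptions, not the survey pipeline actually used in the study:

```python
import re

def find_tandem_arrays(seq: str, unit: str = "GGAAA", min_copies: int = 3):
    """Return (start, copy_count) for each run of at least `min_copies`
    consecutive copies of the repeat unit in a DNA string."""
    pattern = re.compile(f"(?:{unit}){{{min_copies},}}")
    return [(m.start(), len(m.group()) // len(unit))
            for m in pattern.finditer(seq)]

# Toy sequence: a 6-copy array passes the threshold, a 2-copy run does not.
dna = "TTACG" + "GGAAA" * 6 + "CGTTA" + "GGAAA" * 2 + "AC"
print(find_tandem_arrays(dna))  # -> [(5, 6)]
```

Real repeat discovery on short reads uses specialized tools and k-mer statistics, but the output has the same shape: repeat unit, position, and copy number.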
Procedia PDF Downloads 389
145 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. Ground movement, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one such process that can be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm was developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time needed to push an aircraft back and start its engines; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the optimized take-off sequence of the aircraft has to be respected. QPPTW searches for the quickest path by expanding the search in all directions, similarly to Dijkstra’s algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach to guide the search. The new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) to assess how far away the target is. Considering the remaining time needed to reach the target is important so that delays caused by other aircraft can be part of the optimization. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft.
In this way the quickest path is found for each aircraft while taking into account the movements of previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast at routing departing aircraft, where pushback delays are significant. On average, A-star QPPTW routed a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, routing a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original. For real-time application the algorithm needs to be very fast, and this speed increase will allow us to add further features and complexity, allowing deeper integration with other airport processes and leading to more optimized and environmentally friendly airports.
Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
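The time-based A* idea described above can be sketched as follows. The graph, travel times and heuristic values are hypothetical, and the sketch deliberately omits the time windows and multi-aircraft conflict handling that the full A-star QPPTW implements:

```python
import heapq

def a_star_time(graph, start, goal, h):
    """A*-style quickest-path search where g is elapsed travel time and
    h(node) is an admissible estimate of the remaining time to the goal."""
    open_set = [(h(start), 0.0, start, [start])]
    settled = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        if settled.get(node, float("inf")) <= g:
            continue  # already expanded with a quicker arrival time
        settled[node] = g
        for nxt, dt in graph.get(node, []):
            heapq.heappush(open_set, (g + dt + h(nxt), g + dt, nxt, path + [nxt]))
    return None

# Toy taxiway graph; edge weights are travel times in seconds (hypothetical).
graph = {"gate": [("A", 30), ("B", 50)],
         "A": [("runway", 60)],
         "B": [("runway", 20)]}
# Remaining-time estimates; each is <= the true remaining time (admissible).
h = {"gate": 60, "A": 55, "B": 20, "runway": 0}.get
print(a_star_time(graph, "gate", "runway", h))  # -> (70.0, ['gate', 'B', 'runway'])
```

With h set to zero everywhere this degrades to Dijkstra-like uniform expansion; the time-based heuristic is what directs the search towards the target, which is the source of the speed-up reported above.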
Procedia PDF Downloads 231
144 Evolutionary Analysis of Influenza A (H1N1) Pdm 09 in Post Pandemic Period in Pakistan
Authors: Nazish Badar
Abstract:
In early 2009, the pandemic influenza A (H1N1) virus emerged globally. Since then, it has continued to circulate, causing considerable morbidity and mortality. The purpose of this study was to evaluate the evolutionary changes in influenza A (H1N1)pdm09 viruses from 2009 to 2015 and their relevance to the current vaccine viruses. Methods: Respiratory specimens were collected from patients with influenza-like illness and severe acute respiratory illness, and processed according to the CDC protocol. Sequencing and phylogenetic analysis of the haemagglutinin (HA) and neuraminidase (NA) genes was carried out on representative Pakistani isolates. Results: Between January 2009 and February 2016, 1,870 of 14,086 samples (13.2%) were positive for influenza A. During the pandemic period (2009-10), influenza A(H1N1)pdm09 was the dominant strain, accounting for 366 (45%) of total influenza positives. In the post-pandemic period (2011-2016), a total of 1,066 (59.6%) cases were positive for influenza A(H1N1)pdm09, with co-circulation of different influenza A subtypes. Overall, the Pakistani A(H1N1)pdm09 viruses grouped into two genetic clades: they belonged only to clade 7 during the pandemic period, whereas in the post-pandemic years they belonged to clade 7 (2011) and clade 6B (2015). Amino acid analysis of the HA gene revealed mutations at positions S220T, I338V and P100S, specifically associated with outbreaks, in all the analyzed strains. Sequence analysis of post-pandemic A(H1N1)pdm09 viruses showed additional substitutions at antigenic sites, S179N, K180Q (Sa), D185N, D239G (Ca) and S202A (Sb), and at receptor binding sites, A13T and S200P, when compared with the pandemic period. Substitutions at the genetic markers A273T (69%), S200P/T (15%) and D239G (7.6%), associated with severity, and E391K (69%), associated with virulence, were identified in viruses isolated during 2015.
Analysis of the NA gene revealed the outbreak markers V106I (23%) among pandemic and N248D (100%) among post-pandemic Pakistani viruses. Additional N-glycosylation sites, HA S179N (23%) and NA I23T (7.6%), and N44S (77%) in place of N386K (77%), were found only in post-pandemic viruses. All isolates showed histidine (H) at position 275 of NA, indicating sensitivity to neuraminidase inhibitors. Conclusion: This study shows that the influenza A(H1N1)pdm09 viruses from Pakistan clustered into two genetic clades, with co-circulation of some variants. Certain key substitutions in the receptor binding site and a few changes indicative of virulence were also detected in post-pandemic strains. It is therefore imperative to continue monitoring these viruses for early identification of potential variants of high virulence or the emergence of drug-resistant variants.
Keywords: Influenza A (H1N1) pdm09, evolutionary analysis, post pandemic period, Pakistan
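The substitution notation used throughout this abstract (e.g. S220T: serine to threonine at position 220) can be generated mechanically by comparing aligned sequences. This is an illustrative sketch with toy fragments, not the study's actual HA data:

```python
def list_substitutions(reference: str, query: str, offset: int = 1):
    """Report amino-acid substitutions between two aligned, gap-free
    sequences in the conventional RefPosQuery notation (e.g. 'S220T')."""
    return [f"{a}{i + offset}{b}"
            for i, (a, b) in enumerate(zip(reference, query)) if a != b]

# Toy aligned fragments (hypothetical, not real HA sequences).
ref = "MKAILSV"
qry = "MKTILSA"
print(list_substitutions(ref, qry))  # -> ['A3T', 'V7A']
```

The `offset` parameter lets the caller number positions according to a mature-protein or full-length coordinate system, which is how sites such as NA position 275 are conventionally reported.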
Procedia PDF Downloads 207
143 Characterization of Double Shockley Stacking Fault in 4H-SiC Epilayer
Authors: Zhe Li, Tao Ju, Liguo Zhang, Zehong Zhang, Baoshun Zhang
Abstract:
In-grown stacking faults (IGSFs) in 4H-SiC epilayers can increase leakage current and reduce the blocking voltage of 4H-SiC power devices. The double Shockley stacking fault (2SSF) is a common type of IGSF with double slips on the basal planes. In this study, a 2SSF in a 4H-SiC epilayer grown by chemical vapor deposition (CVD) is characterized. The nucleation site of the 2SSF is discussed, and a model for its nucleation is proposed. Homo-epitaxial 4H-SiC was grown on a commercial 4-degree off-cut substrate in a home-built hot-wall CVD reactor. Defect-selective etching (DSE) was conducted with molten KOH at 500 degrees Celsius for 1-2 min. Room-temperature cathodoluminescence (CL) was conducted at a 20 kV acceleration voltage. Low-temperature photoluminescence (LTPL) was conducted at 3.6 K with the 325 nm He-Cd laser line. In the CL image, a triangular area with bright contrast is observed. Two partial dislocations (PDs) with a 20-degree angle between them show linear dark contrast at the edges of the IGSF. CL and LTPL spectra were recorded to verify the IGSF’s type. The CL spectrum shows maximum photoemission at 2.431 eV and negligible bandgap emission. In the LTPL spectrum, four phonon replicas are found at 2.468 eV, 2.438 eV, 2.420 eV and 2.410 eV, and the Egx is estimated to be 2.512 eV. A shoulder red-shifted from the main peak in CL, and a slight protrusion at the same wavelength in LTPL, are identified as the so-called Egx lines. Based on the CL and LTPL results, the IGSF is identified as a 2SSF. Back etching by neutral loop discharge and DSE were conducted to track the origin of the 2SSF, and the nucleation site was found to be a threading screw dislocation (TSD) in this sample. A nucleation mechanism is proposed for the formation of the 2SSF: the steps introduced on the surface by the off-cut and by the TSD are both suggested to be two C-Si bilayers in height.
The intersections of these two types of steps lie along the [11-20] direction from the TSD, with a four-bilayer step at each intersection. The nucleation of the 2SSF during growth is proposed as follows. First, at one intersection the upper two bilayers of the four-bilayer step grow down and block the lower two, generating an IGSF. Second, the step-flow grows over the IGSF successively, forming an AC/ABCABC/BA/BC stacking sequence; a 2SSF is thus formed and extends by step-flow growth. In conclusion, a triangular IGSF was characterized by the CL approach. Based on the CL and LTPL spectra, the estimated Egx is 2.512 eV and the IGSF is identified as a 2SSF. By back etching, the 2SSF nucleation site was found to be a TSD, and a model for 2SSF nucleation at an intersection of off-cut- and TSD-introduced steps is proposed.
Keywords: cathodoluminescence, defect-selective etching, double Shockley stacking fault, low-temperature photoluminescence, nucleation model, silicon carbide
Procedia PDF Downloads 316
142 Coastal Resources Spatial Planning and Potential Oil Risk Analysis: Case Study of Misratah’s Coastal Resources, Libya
Authors: Abduladim Maitieg, Kevin Lynch, Mark Johnson
Abstract:
The goal of the Libyan Environmental General Authority (EGA) and the National Oil Corporation (Department of Health, Safety & Environment) during the last five years has been to adopt a common approach to coastal and marine spatial planning. Protection and planning of the coastal zone is significant for Libya due to the length of its coast, the high rate of oil export, and the potential negative impacts of spills on coastal and marine habitats. Coastal resource scenarios constitute an important tool for exploring the long-term and short-term consequences of oil spill impact and the available response options, providing an integrated perspective on mitigation. To investigate this, this paper reviews the Misratah coastal parameters, presenting the physical and human controls and attributes of coastal habitats as a first step in understanding how they may be damaged by an oil spill. The paper also investigates coastal resources, providing a better understanding of the resources and of the factors that affect the integrity of the ecosystem. The study describes the potential spatial distribution of oil spill risk and the value of coastal resources, and presents spatial maps of coastal resources and their vulnerability to oil spills along the coast. It proposes an analysis of coastal resource condition at a local level in the Misratah region of the Mediterranean Sea, considering the implementation of coastal and marine spatial planning over time as an indication of the will to manage urban development. Analysis of oil spill contamination and its impact on coastal resources depends on (1) the oil spill sequence, (2) the oil spill location, and (3) oil spill movement near the coastal area. The resulting maps show natural and socio-economic activity and environmental resources along the coast, together with oil spill locations. Moreover, the study provides a significant geodatabase, which is required for coastal sensitivity index mapping and coastal management studies.
The outcome of the study provides the information necessary to set an Environmental Sensitivity Index (ESI) for the Misratah shoreline, which can be used for managing coastal resources and setting boundaries for each coastal sensitivity sector, as well as to help planners measure the impact of oil spills on coastal resources. Geographic Information System (GIS) tools were used to store and illustrate the spatial convergence of existing socio-economic activities, such as fishing, tourism and the salt industry, and ecosystem components, such as sea turtle nesting areas, sabkha habitats and migratory bird feeding sites. These geodatabases help planners investigate the vulnerability of coastal resources to an oil spill.
Keywords: coastal and marine spatial planning, GIS mapping, human uses, ecosystem components, Misratah coast, Libya, oil spill
Procedia PDF Downloads 362
141 Preliminary Analysis on the Distribution of Elements in Cannabis
Authors: E. Zafeiraki, P. Nisianakis, K. Machera
Abstract:
The cannabis plant contains 113 cannabinoids and is commonly known for its psychoactive substance tetrahydrocannabinol and as a source of narcotic substances. In recent years cannabis cultivation has also increased due to its wide use for medical and industrial purposes as well as in para-pharmaceuticals, cosmetics and food commodities. Depending on the final product, different parts of the plant are utilized, with the leaves and seeds being the most frequently used. Cannabis can accumulate various contaminants, including heavy metals, from both the soil and the water in which the plant grows. More specifically, metals may occur naturally in soil and water, or they can enter the environment through the fertilizers, pesticides and fungicides that are commonly applied to crops. The high probability of metal accumulation in cannabis, combined with the latter’s growing use, raises concerns about potential health effects in humans and consequently leads to the need for safety measures for cannabis products, such as guidelines regulating contaminants, including metals, and especially those of high toxicity. Acknowledging the above, the aim of the current study was first to investigate metal contamination in cannabis samples collected from Greece, and secondly to examine potential differences in metal accumulation among the different parts of the plant. To the best of our knowledge, this is the first study presenting information on elements in cannabis cultivated in Greece and on their distribution pattern in the plant body. To this end, the leaves and the seeds of all the samples were initially separated and dried, and then digested with nitric acid (HNO₃) and hydrochloric acid (HCl). For the analysis of these samples, an inductively coupled plasma mass spectrometry (ICP-MS) method capable of quantifying 28 elements was developed.
Internal standards were added at a constant rate and concentration to all calibration standards and unknown samples, while two certified reference materials were analyzed in every batch to ensure the accuracy of the measurements. The repeatability of the method and the background contamination were controlled by analyzing quality control (QC) standards and blank samples in every sequence, respectively. According to the results, essential metals such as Ca, Zn and Mg were detected at high levels. On the contrary, the concentrations of highly toxic metals such as As (average: 0.10 ppm), Pb (average: 0.36 ppm), Cd (average: 0.04 ppm) and Hg (average: 0.012 ppm) were very low in all the samples, indicating that the analyzed samples pose no risk of harmful effects on human health. Moreover, the pattern of metal contamination was very similar in all the analyzed samples, which could be attributed to the common origin of the analyzed cannabis, i.e., the same soil composition, use of fertilizers, pesticides, etc. Finally, as far as the distribution pattern between the different parts of the plant is concerned, the leaves presented higher concentrations than the seeds for all metals examined.
Keywords: cannabis, heavy metals, ICP-MS, leaves and seeds, elements
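The leaves-versus-seeds comparison reported above amounts to comparing per-part mean concentrations element by element. A minimal sketch with invented numbers (not the study's data):

```python
# Hypothetical concentrations (ppm) for two replicate samples per plant part;
# these values only illustrate the comparison, they are not the study's data.
measurements = {
    "leaves": {"Pb": [0.40, 0.35], "Cd": [0.05, 0.04], "Zn": [55.0, 60.0]},
    "seeds":  {"Pb": [0.20, 0.18], "Cd": [0.02, 0.03], "Zn": [30.0, 28.0]},
}

def mean_by_part(data):
    """Collapse replicate measurements into a per-part, per-element mean."""
    return {part: {el: sum(vals) / len(vals) for el, vals in els.items()}
            for part, els in data.items()}

means = mean_by_part(measurements)
for element in ("Pb", "Cd", "Zn"):
    higher = "leaves" if means["leaves"][element] > means["seeds"][element] else "seeds"
    print(element, "higher in", higher)
```

With the hypothetical inputs above, every element comes out higher in the leaves, mirroring the distribution pattern the study reports.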
Procedia PDF Downloads 99
140 Nuclear Near Misses and Their Learning for Healthcare
Authors: Nick Woodier, Iain Moppett
Abstract:
Background: It is estimated that one in ten patients admitted to hospital will suffer an adverse event in their care. While the majority of these will result in low harm, patients are being significantly harmed by the processes meant to help them. Healthcare therefore seeks to improve patient safety by taking learning from other industries that are perceived to be more mature in their management of safety events. Of particular interest to healthcare are ‘near misses’: events that almost happened but for an intervention. Healthcare has no guidance as to how best to manage and learn from near misses to reduce the chances of harm to patients. The authors, as part of a larger study of near-miss management in healthcare, sought to learn from the UK nuclear sector to develop principles for how healthcare can identify, report, and learn from near misses to improve patient safety. The nuclear sector was chosen as an exemplar due to its status as an ultra-safe industry. Methods: A Grounded Theory (GT) methodology, augmented by a scoping review, was used. Data collection included interviews, scenario discussion, field notes, and the literature. The review protocol is accessible online. The GT aimed to develop theories about how the nuclear sector manages near misses, with a focus on defining them and clarifying how best to support reporting and analysis to extract learning. Near misses related to radiation release or exposure were the focus. Results: Eight nuclear interviews contributed to the GT, spanning nuclear power, decommissioning, weapons, and propulsion. The scoping review identified 83 articles across a range of safety-critical industries, with only six focused on nuclear. The GT identified that the nuclear sector places a particular focus on precursors and low-level events, with regulation supporting their management.
Exploration of definitions led to the recognition of the importance of several interventions in a sequence of events, interventions that must not rely solely on humans, as humans cannot be assumed to be robust barriers. Regarding reporting and analysis, no consistent methods were identified, but for learning, the role of operating experience learning groups was identified as an exemplar. The safety culture across the nuclear sector, however, was heard to vary, which undermined reporting of near misses and other safety events. Some parts of the industry described that their focus on near misses is new and that, despite existing risks, progress to mitigate hazards is slow. Conclusions: Healthcare often sees ‘nuclear’, as well as other ultra-safe industries such as ‘aviation’, as homogeneous. The findings here, however, suggest significant differences in safety culture and maturity across various parts of the nuclear sector. Healthcare can take learning from some aspects of near-miss management in the nuclear sector, such as how near misses are defined and how learning is shared through operating experience networks. However, healthcare also needs to recognise that variability exists across industries and that, comparably, it may be more mature in some areas of safety.
Keywords: culture, definitions, near miss, nuclear safety, patient safety
Procedia PDF Downloads 104
139 A Stochastic Vehicle Routing Problem with Ordered Customers and Collection of Two Similar Products
Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis
Abstract:
The vehicle routing problem (VRP) is a well-known problem in operations research and has been widely studied for the last fifty-five years. The context of the VRP is that of delivering or collecting products to or from customers who are scattered in a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles start their routes from a depot and visit the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods they deliver or collect. In the present work, we develop and analyze a mathematical model for a specific capacitated stochastic vehicle routing problem with many realistic applications, in which a vehicle starts its route from a depot and visits N customers according to a particular sequence in order to collect from them two similar but not identical products, which we call product 1 and product 2. Each customer possesses items of either product 1 or product 2 with known probabilities, and the number of items each customer possesses is a discrete random variable with known distribution. The actual quantity and the actual type of product that each customer possesses are revealed only when the vehicle arrives at the customer’s site. The vehicle has two compartments, compartment 1 and compartment 2, suitable for loading product 1 and product 2 respectively. It is, however, permitted to load items of product 1 into compartment 2 and items of product 2 into compartment 1; these actions incur costs due to extra labor. The vehicle is allowed during its route to return to the depot to unload the items of both products.
The travel costs between consecutive customers and the travel costs between the customers and the depot are known. The objective is to find the optimal routing strategy, i.e. the routing strategy that minimizes the total expected cost among all possible strategies for servicing all customers. It is possible to develop a suitable dynamic programming algorithm for the determination of the optimal routing strategy. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure. Specifically, it is shown that for each customer the optimal actions are characterized by some critical integers. This structural result enables us to design a special-purpose dynamic programming algorithm that operates only over the strategies having this structural property. Extensive numerical results provide strong evidence that the special-purpose dynamic programming algorithm is considerably more efficient than the initial dynamic programming algorithm. Furthermore, if we consider the same problem without the assumption that the customers are ordered, numerical experiments indicate that the optimal routing strategy can be computed if N is smaller than or equal to eight.
Keywords: dynamic programming, similar products, stochastic demands, stochastic preferences, vehicle routing problem
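As an illustration of the kind of dynamic program described above, the following is a minimal single-product, single-compartment sketch (a toy instance with illustrative costs and demands, not the paper's two-compartment model). The vehicle serves customers in a fixed sequence and, after each customer, either proceeds directly or unloads via the depot; the optimal action turns out to be threshold-type in the remaining free space.

```python
import functools

# Toy single-product, single-compartment instance (illustrative numbers, not
# the paper's two-compartment model). Customers are served in a fixed
# sequence; each demand (1 or 2 units, equally likely) is revealed on arrival.
N = 5           # number of customers
Q = 4           # vehicle capacity
c_direct = 1.0  # travel cost between consecutive customers
c_depot = 3.0   # travel cost of going via the depot to the next customer
demands = [(1, 0.5), (2, 0.5)]  # (demand, probability)

@functools.lru_cache(maxsize=None)
def V(j, q):
    """Minimal expected future cost after serving customer j with q free units."""
    if j == N:
        return 0.0  # route finished (final return leg omitted for brevity)
    # go directly to customer j+1, keeping free space q
    direct = c_direct + sum(p * serve(j + 1, q, d) for d, p in demands)
    # or unload at the depot first, restoring free space to Q
    via_depot = c_depot + sum(p * serve(j + 1, Q, d) for d, p in demands)
    return min(direct, via_depot)

def serve(j, q, d):
    """Serve demand d at customer j; if it does not fit, a depot trip is forced."""
    if d <= q:
        return V(j, q - d)
    return c_depot + V(j, Q - d)  # emergency unload at the depot, then serve

# Optimal action per state; the structure is threshold-type in the free space q.
policy = {}
for j in range(N):
    for q in range(Q + 1):
        direct = c_direct + sum(p * serve(j + 1, q, d) for d, p in demands)
        via_depot = c_depot + sum(p * serve(j + 1, Q, d) for d, p in demands)
        policy[(j, q)] = 'direct' if direct <= via_depot else 'depot'
```

In this toy instance the depot trip is chosen exactly when the free space falls below a critical value, mirroring the critical-integer structure proved in the paper.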
Procedia PDF Downloads 257
138 Learning to Translate by Learning to Communicate to an Entailment Classifier
Authors: Szymon Rutkowski, Tomasz Korbak
Abstract:
We present a reinforcement-learning-based method of training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, its learning procedure lacks psychological plausibility: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language’s 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and premise, which are then passed to the classifier. The translator is rewarded according to the classifier’s performance in determining entailment between the sentences translated into the classifier’s language. The translator’s performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, there are a number of improvements we introduce. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality. 
It has been shown that models trained on recognizing textual entailment produce high-quality general-purpose sentence embeddings transferrable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) to solve the problem of the translator’s policy optimization and found that our attempts yield some promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation.
Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning
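The reward-driven loop described above can be illustrated with a deliberately tiny REINFORCE sketch (all details are hypothetical, not the authors' model): a "translator" with a softmax policy over three candidate outputs is rewarded whenever a fixed "classifier" accepts its output, and the policy gradient concentrates probability mass on the rewarded output.

```python
import math
import random

random.seed(0)

# Toy sketch of the communication-game reward loop (hypothetical stand-ins,
# not the authors' model): only output 2 is accepted by the "classifier".
logits = [0.0, 0.0, 0.0]
lr = 0.5

def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [v / s for v in exps]

def classifier_reward(output):
    """Stand-in for the pretrained entailment classifier's judgement."""
    return 1.0 if output == 2 else 0.0

for step in range(500):
    probs = softmax(logits)
    output = random.choices(range(3), weights=probs)[0]
    r = classifier_reward(output)
    # REINFORCE update: d/d logit_k of log pi(output) = 1[k == output] - probs[k]
    for k in range(3):
        logits[k] += lr * r * ((1.0 if k == output else 0.0) - probs[k])

final_probs = softmax(logits)  # mass concentrates on the rewarded output
```

The actual systems replace the three-way choice with sequence generation and the 0/1 reward with the classifier's entailment accuracy, but the gradient estimator is the same.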
Procedia PDF Downloads 128
137 Evaluation of Iron Application Method to Remediate Coastal Marine Sediment
Authors: Ahmad Seiar Yasser
Abstract:
Sediment is an important habitat for organisms and acts as a storehouse for nutrients in aquatic ecosystems. Hydrogen sulfide is produced by microorganisms in the water column and sediments, and it is highly toxic and fatal to benthic organisms. However, iron has the capacity to regulate the formation of sulfide by poising the redox sequence and forming insoluble iron sulfide and pyrite compounds. Therefore, we conducted two experiments aimed at evaluating the remediation efficiency of iron application to organically enriched sediments. Experiments were carried out in the laboratory using intact sediment cores taken from Mikawa Bay, Japan, every month from June to September 2017 and in October 2018. In Experiment 1, after the cores were collected, iron powder or iron hydroxide was applied to the surface sediment at 5 g/m2 or 5.6 g/m2, respectively. In Experiment 2, we experimentally investigated the removal of hydrogen sulfide using steelmaking slag of two particle sizes (2 mm or less, and 2 to 5 mm). Both experiments were conducted in the laboratory under the same boundary conditions. The overlying water was replaced with deoxygenated filtered seawater, and the cores were sealed with a top cap to maintain anoxic conditions, with a stirrer circulating the overlying water gently. The incubation experiments comprised three treatments, including the control; each treatment was replicated and conducted at the same temperature as the in-situ conditions. Water samples were collected at appropriate time intervals to measure the dissolved sulfide concentrations in the overlying water by the methylene blue method. Sediment quality was also analyzed after the completion of the experiment. After the 21-day incubation, the results obtained using iron powder and ferric hydroxide revealed that application of these iron-containing materials significantly reduced the sulfide release flux from the sediment into the overlying water. 
The average dissolved sulfide concentration in the overlying water of the treatment groups decreased significantly (p = .0001), while no significant change was observed in the control group after the 21-day incubation. Therefore, the application of iron to the sediment is a promising method to remediate contaminated sediments in a eutrophic water body, with ferric hydroxide having the better hydrogen sulfide removal effect. The experiments using steelmaking slag also clarified that capping with slag (2 mm or less, or 2 to 5 mm) is an effective technique for the remediation of organically enriched bottom sediments containing hydrogen sulfide, because it induces a chemical reaction between Fe and sulfides in the sediment that would not occur naturally. The finer slag (2 mm or less) had the better hydrogen sulfide removal effect, and for economic reasons the application of steelmaking slag to the sediment is a promising method to remediate contaminated sediments in a eutrophic water body.
Keywords: sedimentary, H2S, iron, iron hydroxide
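For context, a sulfide release flux of the kind compared across treatments above can be estimated from overlying-water concentration data in a core incubation; a minimal sketch (all numbers are illustrative, not the study's measurements):

```python
def sulfide_release_flux(c_start_mg_l, c_end_mg_l, hours,
                         water_volume_l, sediment_area_m2):
    """Dissolved-sulfide release flux (mg m^-2 h^-1) from an incubated core:
    change in overlying-water sulfide mass divided by sediment area and time."""
    mass_change_mg = (c_end_mg_l - c_start_mg_l) * water_volume_l
    return mass_change_mg / (sediment_area_m2 * hours)

# Illustrative: 0 -> 2.4 mg/L over 24 h in 1 L of water above 0.01 m^2 of sediment
flux = sulfide_release_flux(0.0, 2.4, 24.0, 1.0, 0.01)
```

A reduced (or negative) flux in the iron-amended cores relative to the control is what the abstract reports as significant sulfide suppression.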
Procedia PDF Downloads 163
136 Impact of Boundary Conditions on the Behavior of Thin-Walled Laminated Column with L-Profile under Uniform Shortening
Authors: Jaroslaw Gawryluk, Andrzej Teter
Abstract:
Simply supported angle columns subjected to uniform shortening are tested. The experimental studies are conducted on a testing machine using the Aramis system and an acoustic emission system. The laminate samples are subjected to axial uniform shortening. The tested columns are loaded from zero up to the maximal load destroying the L-shaped column, which allows one to observe the column's post-buckling behavior until its collapse. Laboratory tests are performed at a constant cross-bar velocity of 1 mm/min. In order to eliminate stress concentrations between the sample and the support, flexible pads are used. The analyzed samples are made of carbon-epoxy laminate using the autoclave method. The configuration of the laminate layers is [60,0₂,-60₂,60₃,-60₂,0₃,-60₂,0,60₂]T, where direction 0 is along the length of the profile. The material parameters of the laminate are: Young's modulus along the fiber direction - 170 GPa, Young's modulus transverse to the fiber direction - 7.6 GPa, in-plane shear modulus - 3.52 GPa, in-plane Poisson's ratio - 0.36. The dimensions of all columns are: length - 300 mm, thickness - 0.81 mm, width of the flanges - 40 mm. Next, two numerical models of the column, with and without flexible pads, are developed using the finite element method in Abaqus software. The L-profile laminate column is modeled using S8R shell elements. The layup-ply technique is used to define the sequence of the laminate layers, while the grips are modeled with R3D4 discrete rigid elements. The flexible pad consists of C3D20R solid elements. In order to estimate the moment of first laminate layer damage, the following initiation criteria were applied: the maximum stress criterion and the Tsai-Hill, Tsai-Wu, Azzi-Tsai-Hill, and Hashin criteria. The best compliance of results was observed for the Hashin criterion. It was found that the use of the pad in the numerical model significantly influences the damage mechanism. 
The model without pads exhibited much higher stiffness, as evidenced by a greater bifurcation load and damage initiation load under all analyzed criteria, lower shortening, and smaller deflection at the column's center than the model with flexible pads. Acknowledgment: The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).
Keywords: angle column, compression, experiment, FEM
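Of the ply-failure initiation criteria listed above, the plane-stress Tsai-Hill criterion can be sketched in a few lines (strength values below are illustrative placeholders, not the tested laminate's properties); ply failure is predicted when the index reaches 1.

```python
def tsai_hill_index(s1, s2, t12, X, Y, S):
    """Plane-stress Tsai-Hill failure index for a unidirectional ply.
    s1, s2: normal stresses along/transverse to the fibres; t12: in-plane shear.
    X, Y, S: corresponding strengths. Failure is predicted when the index >= 1."""
    return (s1 / X) ** 2 - (s1 * s2) / X ** 2 + (s2 / Y) ** 2 + (t12 / S) ** 2

# Illustrative carbon-epoxy strengths (MPa) - placeholders only
X, Y, S = 1500.0, 40.0, 70.0
idx_safe = tsai_hill_index(300.0, 5.0, 10.0, X, Y, S)   # well below 1
idx_fail = tsai_hill_index(1500.0, 0.0, 0.0, X, Y, S)   # exactly at failure
```

The other criteria named in the abstract (maximum stress, Tsai-Wu, Azzi-Tsai-Hill, Hashin) differ in how the stress components are combined, but are evaluated ply by ply in the same manner.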
Procedia PDF Downloads 206
135 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Authors: T. Thein, S. Kalyar Myo
Abstract:
Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise and cross talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge with most automatic lip reading systems is to find a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings, and the inherently low luminance and chrominance contrast between the lip and non-lip regions. Several researchers have been developing methods to overcome these problems. Moreover, it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding underlying speech by processing the movement of the lips. Therefore, a lip reading system is one of the supportive technologies for hearing-impaired or elderly people, and it is an active research area. The need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching method system for hearing-impaired persons in Myanmar to learn how to pronounce words precisely by identifying the features of lip movement. The proposed research develops a lip reading system for Myanmar consonants: the one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah) ) and the two-syllable consonants ( က(Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe) ၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi) ). 
The proposed system has three subsystems: the first is the lip localization system, which localizes the lips in the digital input; the next is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition; and the final one is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with the Active Contour Model (ACM) will be used for lip movement feature extraction, and a Support Vector Machine (SVM) classifier will be used for classification on the training and testing sets. Then, experiments will be carried out on the recognition accuracy of Myanmar consonants using only visual information on lip movements. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons as a language-learning application. It can also be useful for normal-hearing persons in noisy environments or conditions where they need to find out what was said by other people without hearing their voices.
Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)
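The 2D-DCT feature extraction step mentioned above can be sketched in a few lines. This is a generic implementation built directly from the DCT-II definition (not the authors' code): the low-frequency corner of the transformed lip image is kept as a compact feature vector.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II basis matrix of size N x N."""
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C

def dct2_features(img, keep=4):
    """2D-DCT of a grayscale lip-region image; the keep x keep block of
    low-frequency coefficients is flattened into a compact feature vector."""
    C_rows = dct_matrix(img.shape[0])
    C_cols = dct_matrix(img.shape[1])
    F = C_rows @ img @ C_cols.T
    return F[:keep, :keep].ravel()
```

Such a feature vector would then be projected with LDA and fed to the SVM classifier; for a uniform image only the DC coefficient is non-zero, which is a convenient sanity check.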
Procedia PDF Downloads 286
134 Morpho-Agronomic Response to Water Stress of Some Nigerian Bambara Groundnut (Vigna Subterranea (L.) Verdc.) Germplasm and Genetic Diversity Studies of Some Selected Accessions Using Ssr Markers
Authors: Abejide Dorcas Ropo, Falusi Olamide Ahmed, Daudu Oladipupo Abdulazeez Yusuf, Salihu Bolaji Zuluquri Neen, Muhammad Liman Muhammad, Gado Aishatu Adamu
Abstract:
Water stress is a major factor limiting the productivity of crops in the world today. This study evaluated the morpho-agronomic response of twenty-four (24) Nigerian Bambara groundnut landraces to water stress and the genetic diversity of some selected accessions using SSR markers. The study was carried out in the Botanical Garden of the Department of Plant Biology, Federal University of Technology, Minna, Niger State, Nigeria, in a randomized complete block design with three replicates. Molecular analysis using SSR primers was carried out at the Centre for Bio-Science, International Institute of Tropical Agriculture (IITA), Ibadan, Nigeria, in order to characterize ten selected accessions comprising the seven most drought-tolerant and the three most susceptible accessions identified in the morpho-agronomic studies. Results revealed that water stress decreased morpho-agronomic traits such as plant height, leaf area, number of leaves per plant, and seed yield. A total of 22 alleles were detected by the SSR markers used, with a mean number of 4 alleles. The Simple Sequence Repeat (SSR) markers MBamCO33, Primer 65, and G358B2-D15 each detected 4 alleles, while Primers 3FR and 4FR detected 5 alleles each. The study revealed significantly high polymorphism in 10 loci. The mean polymorphic information content was 0.6997, implying the usefulness of the primers in identifying genetic similarities and differences among the Bambara groundnut genotypes. The SSR analysis revealed a comparable pattern between the genetic diversity and the drought tolerance of the genotypes. The Unweighted Pair Group Method with Arithmetic Mean (UPGMA) dendrogram showed that at a genetic distance of 0.1, the accessions were grouped into three groups according to their level of tolerance to drought. The two most drought-tolerant accessions were grouped together, and the 5th and 6th most drought-tolerant accessions were also grouped together. 
This suggests that the genotypes grouped together may be genetically close, may possess similar genes, or may have a common origin. The degree of genetic variation obtained could be useful in Bambara groundnut breeding for drought tolerance. The identified drought-tolerant Bambara groundnut landraces are important genetic resources for drought stress tolerance breeding programmes, and the genotypes are also useful for germplasm conservation, with global implications.
Keywords: bambara groundnut, genetic diversity, germplasm, SSR markers, water stress
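The polymorphic information content (PIC) reported above is conventionally computed per locus from allele frequencies with the formula of Botstein et al.; a minimal sketch (the frequencies below are illustrative, not the study's data):

```python
def pic(allele_freqs):
    """Polymorphic information content of one SSR locus (Botstein et al.):
    PIC = 1 - sum(p_i^2) - sum over pairs of 2 * p_i^2 * p_j^2."""
    s2 = sum(p ** 2 for p in allele_freqs)
    cross = sum(2.0 * allele_freqs[i] ** 2 * allele_freqs[j] ** 2
                for i in range(len(allele_freqs))
                for j in range(i + 1, len(allele_freqs)))
    return 1.0 - s2 - cross
```

As a point of reference, a locus with four equally frequent alleles gives `pic([0.25] * 4)` of about 0.703, in line with the reported mean PIC of 0.6997 for markers averaging four alleles; values above 0.5 are usually considered highly informative.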
Procedia PDF Downloads 20
133 Redesigning Clinical and Nursing Informatics Capstones
Authors: Sue S. Feldman
Abstract:
As clinical and nursing informatics mature, an area that has received a lot of attention is the value of capstone projects. Capstones are meant to address authentic and complex domain-specific problems. While capstone projects have not always been essential in graduate clinical and nursing informatics education, employers want to see evidence of a prospective employee's knowledge and skills as an indication of employability. Capstones can be organized in many ways: as a single course over a single semester or multiple courses over multiple semesters, as a targeted demonstration of skills or a synthesis of prior knowledge and skills, mentored by one person or by various people, submitted as an assignment or presented in front of a panel. Because of the potential for capstones to enhance the educational experience, and as a mechanism for the application of knowledge and demonstration of skills, a rigorous capstone can accelerate a graduate's potential in the workforce. In 2016, the capstone at the University of Alabama at Birmingham (UAB) could feel the external forces of a maturing clinical and nursing informatics discipline. While the program had had a capstone course for many years, it lacked the depth of knowledge and demonstration of skills sought by those hiring in a maturing informatics field. Since the program is online, all capstones have always been in the online environment. While this modality did not change, other contributors to the instruction modality did. Pre-2016, the instruction was self-guided: students checked in with a single instructor, who monitored progress across all capstones toward a PowerPoint and written-paper deliverable. At the time, enrollment was low, and the maturity had not yet pushed hard enough. 
By 2017, doubling enrollment and the increased demand for a more rigorously trained workforce led to restructuring the capstone so that graduates would gain and retain the skills learned in the capstone process. There were three major changes: the capstone was broken up into a 3-course sequence (meaning it lasted about 10 months instead of 14 weeks), deliverables were split into many chunks, and each faculty member advised a cadre of about 5 students through the capstone process. Literature suggests that chunking, i.e., breaking up complex projects (the capstone in one summer) into smaller, more manageable chunks (portions of the capstone across 3 semesters), can increase and sustain learning while allowing for increased rigor. By doing this, the teaching responsibility was shared across the faculty, with each semester's course taught by a different faculty member. This change facilitated delving much deeper into instruction and produced a significantly more rigorous final deliverable. Having students advised across the faculty seemed like the right thing to do: it not only shared the load but also shared the success of students. Furthermore, it meant that students could be placed with an academic advisor who had expertise in their capstone area, further increasing the rigor of the entire capstone process and project and increasing student knowledge and skills.
Keywords: capstones, clinical informatics, health informatics, informatics
Procedia PDF Downloads 133
132 New Bio-Strategies for Ochratoxin a Detoxification Using Lactic Acid Bacteria
Authors: José Maria, Vânia Laranjo, Luís Abrunhosa, António Inês
Abstract:
The occurrence of mycotoxigenic moulds such as Aspergillus, Penicillium and Fusarium in food and feed has an important impact on public health through the appearance of acute and chronic mycotoxicoses in humans and animals, which is more severe in developing countries due to lack of food security, poverty and malnutrition. This mould contamination also constitutes a major economic problem due to the loss of crop production. A great variety of filamentous fungi is able to produce highly toxic secondary metabolites known as mycotoxins. Most of the mycotoxins are carcinogenic, mutagenic, neurotoxic and immunosuppressive, with ochratoxin A (OTA) being one of the most important. OTA is toxic to animals and humans, mainly due to its nephrotoxic properties. Several approaches have been developed for the decontamination of mycotoxins in foods, such as prevention of contamination, biodegradation of mycotoxin-containing food and feed with microorganisms or enzymes, and inhibition of absorption of the mycotoxin content of consumed food in the digestive tract. A group of Gram-positive bacteria named lactic acid bacteria (LAB) is able to release molecules that can influence mould growth, improving the shelf life of many fermented products and reducing health risks due to exposure to mycotoxins. Some LAB are capable of mycotoxin detoxification. Recently, our group was the first to describe the ability of LAB strains to biodegrade OTA, more specifically Pediococcus parvulus strains isolated from Douro wines. The pathway of this biodegradation had been identified previously in other microorganisms: OTA can be degraded through the hydrolysis of the amide bond that links the L-β-phenylalanine molecule to ochratoxin alpha (OTα), a non-toxic compound. It is known that some peptidases from different origins can mediate this hydrolysis reaction, such as carboxypeptidase A (an enzyme from the bovine pancreas), a commercial lipase, and several commercial proteases. 
So, we wanted to gain a better understanding of this OTA degradation process when LAB are involved and to identify which molecules are present in this process. To achieve our aim we used several bioinformatics tools (BLAST, CLUSTALX2, CLC Sequence Viewer 7, Finch TV). We also designed specific primers and performed gene-specific PCR. The template DNA came from the LAB strain samples of our previous work, and from other LAB strains isolated from elderberry fruit, silage, milk and sausages. Through the employment of bioinformatics tools it was possible to identify several proteins belonging to the carboxypeptidase family that participate in the process of OTA degradation, such as serine-type D-Ala-D-Ala carboxypeptidase and membrane carboxypeptidase. In conclusion, this work has identified carboxypeptidase proteins as among the molecules present in the OTA degradation process when LAB are involved.
Keywords: carboxypeptidase, lactic acid bacteria, mycotoxins, ochratoxin A
Procedia PDF Downloads 462
131 Developing Confidence of Visual Literacy through Using MIRO during Online Learning
Authors: Rachel S. E. Lim, Winnie L. C. Tan
Abstract:
Visual literacy is about making meaning through the interaction of images, words, and sounds. Graphic communication students typically develop visual literacy through critique and the production of studio-based projects for their portfolios. However, the abrupt switch to online learning during the COVID-19 pandemic has made it necessary to consider new strategies of visualization and planning to scaffold teaching and learning. This study, therefore, investigated how MIRO, a cloud-based visual collaboration platform, could be used to develop the visual literacy confidence of 30 Diploma in Graphic Communication students attending a graphic design course at a Singapore arts institution. Due to COVID-19, the course was taught fully online throughout a 16-week semester. Guided by Kolb's Experiential Learning Cycle, the two lecturers developed students' engagement with visual literacy concepts through different activities that facilitated concrete experience, reflective observation, abstract conceptualization, and active experimentation. Throughout the semester, students created, collaborated, and centralized communication in MIRO with its infinite canvas, smart frameworks, robust set of widgets (i.e., sticky notes, freeform pen, shapes, arrows, smart drawing, emoticons, etc.), and powerful platform capabilities that enable asynchronous and synchronous feedback and interaction. Students then drew upon these multimodal experiences to brainstorm, research, and develop their motion design project. A survey was used to examine students' perceptions of engagement (E), confidence (C), and learning strategies (LS). Using multiple regression, it was found that the use of MIRO helped students develop confidence (C) with visual literacy, which predicted the performance score (PS) that was measured against their application of visual literacy in the creation of their motion design project. 
While students' learning strategies (LS) with MIRO did not directly predict confidence (C) or performance score (PS), they fostered positive perceptions of engagement (E), which in turn predicted confidence (C). Content analysis of students' open-ended survey responses about their learning strategies (LS) showed that MIRO provides organization and structure in documenting learning progress, in tandem with establishing standards and expectations as a preparatory ground for generating feedback. With the clarity and sequence of these conditions set in place, the prerequisites then lead to the next level of personal action: self-reflection, self-directed learning, and time management. The study results show that the affordances of MIRO can develop visual literacy and make up for the potential pitfalls of student isolation, communication, and engagement during online learning. How MIRO could be used by lecturers to orientate students for learning in visual literacy and studio-based projects in the future is discussed.
Keywords: design education, graphic communication, online learning, visual literacy
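The regression pattern described above (learning strategies predicting engagement, which in turn predicts confidence) is a simple mediation structure; a sketch on synthetic data with ordinary least squares (the variable names and coefficients are purely illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
LS = rng.normal(size=n)                        # learning strategies (synthetic)
E = 0.7 * LS + rng.normal(scale=0.3, size=n)   # engagement, driven by LS
C = 0.8 * E + rng.normal(scale=0.3, size=n)    # confidence, driven by E

def ols_slope(x, y):
    """Slope of y on x from an intercept-plus-slope least-squares fit."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

a = ols_slope(LS, E)   # LS -> E path
b = ols_slope(E, C)    # E -> C path
indirect = a * b       # LS -> C effect transmitted via engagement
```

A sizeable `indirect` product alongside a weak direct LS-to-C slope is the signature of mediation through engagement, consistent with the pattern the survey analysis reports.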
Procedia PDF Downloads 112
130 Reproductive Behavior of the Red Sea Immigrant Lagocephalus sceleratus (Gmelin, 1789) from the Mediterranean Coast, Egypt
Authors: Mahmoud M. S. Farrag, Alaa A. K. Elhaweet, El-Sayed Kh. A. Akel, Mohsen A. Moustafa
Abstract:
The present work aimed to study the reproductive strategy of the common Lessepsian puffer fish Lagocephalus sceleratus (Gmelin, 1789) from Egyptian Mediterranean waters. It is a well-known migratory species that plays an important role in the field of fisheries and the ecology of the aquatic ecosystem. Specimens were collected monthly from landing centers along the Egyptian Mediterranean coast during 2012. Seven maturity stages were recorded: (I) thread-like stage, (II) immature (virgin) stage, (III) maturing stage (developing virgin and recovering spent), (IV) nearly ripe stage, (V) fully ripe stage, (VI) spawning stage, and (VII) spent stage. Males outnumbered females, representing 52.44% of the total fish, with a sex ratio of 1:0.91. The fish length corresponding to 50% maturation was 38.5 cm for males and 41 cm for females; the corresponding ages at first maturity are 2.14 and 2.27 years for males and females, respectively. The ova diameter ranged from 0.02 mm to 0.85 mm, with mature ova ranging from 0.16 mm to 0.85 mm and showing a progressive increase from April towards September. A single peak of mature and ripe eggs in the ova-diameter distribution of the ovaries was observed during the spawning period. The relationship between gutted weight and absolute fecundity indicated that fecundity increased as the fish grew in weight. The absolute fecundity ranged from 260288 to 2372931 for fish weighing from 698 to 3285 g, with an average of 1449522±720975. The relative fecundity ranged from 373 to 722, with an average of 776±231. The spawning season of L. sceleratus was investigated from the data of the gonado-somatic index, the monthly distribution of maturity stages along the year, and the sequence of ova diameters for mature stages; it exhibited a relatively prolonged spawning season extending from April for both sexes and ending in August for males and in September for females. 
The fish releases its ripe ova in one batch during the spawning season. Histologically, the ovarian cycle of L. sceleratus was classified into six stages and the testicular cycle into five stages. The histological characters of the gonads of L. sceleratus during the year of study confirmed the previous results on the distribution of maturity stages, the gonado-somatic index, and the ova diameter, indicating that this fish species has a prolonged spawning season from April to September. This species is considered a total (single-batch) spawner with group-synchronous oocyte development, as the gonad contained one to two developmental stages at the same time.
Keywords: Lagocephalus sceleratus, reproductive biology, oogenesis, histology
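Two of the quantities used above to delimit maturity and the spawning season can be computed directly: the gonado-somatic index (GSI) and a logistic maturity ogive through the length at 50% maturity. A minimal sketch (the L50 values 38.5 cm and 41 cm are from the abstract; the slope is an illustrative placeholder, not a fitted value):

```python
import math

def gsi(gonad_weight_g, gutted_weight_g):
    """Gonado-somatic index (%): gonad mass relative to gutted body mass.
    Monthly peaks in GSI delimit the spawning season."""
    return 100.0 * gonad_weight_g / gutted_weight_g

def maturity_ogive(length_cm, l50_cm, slope):
    """Logistic fraction of mature fish at a given length; l50_cm is the
    length at which 50% of fish are mature."""
    return 1.0 / (1.0 + math.exp(-slope * (length_cm - l50_cm)))

# L50 from the abstract: 38.5 cm (males), 41 cm (females); slope 0.3 per cm
# is an illustrative placeholder.
male_mature_frac = maturity_ogive(38.5, 38.5, 0.3)  # 0.5 by construction
```

Fitting such an ogive to the observed proportion mature per length class is the standard way values like 38.5 cm and 41 cm are obtained.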
Procedia PDF Downloads 304
129 Damage Tolerance of Composites Containing Hybrid, Carbon-Innegra, Fibre Reinforcements
Authors: Armin Solemanifar, Arthur Wilkinson, Kinjalkumar Patel
Abstract:
Carbon fibre (CF) - polymer laminate composites have very low densities (approximately 40% lower than aluminium), high strength and high stiffness, but in terms of toughness properties they often require modification. For example, adding rubber or thermoplastic toughening agents is a common way of improving the interlaminar fracture toughness of initially brittle thermoset composite matrices. The main aim of this project was to toughen CF-epoxy resin laminate composites using hybrid CF fabrics incorporating Innegra™, a commercial highly-oriented polypropylene (PP) fibre in which more than 90% of the crystal orientation is parallel to the fibre axis. In this study, the damage tolerance of hybrid (carbon-Innegra, CI) composites was investigated. Laminate composites were produced by resin infusion using: pure CF fabric; fabrics with different ratios of commingled CI; and two different types of pure Innegra fabric (Innegra 1 and Innegra 2). Dynamic mechanical thermal analysis (DMTA) was used to measure the glass transition temperature (Tg) of the composite matrix and values of flexural storage modulus versus temperature. Mechanical testing included drop-weight impact, compression-after-impact (CAI), and interlaminar (short-beam) shear strength (ILSS). Ultrasonic C-scan imaging was used to determine the impact damage area, and scanning electron microscopy (SEM) to observe the fracture mechanisms that occur during failure of the composites. For all composites, 8 layers of fabric were used with a quasi-isotropic sequence of [-45°, 0°, +45°, 90°]s. DMTA showed the Tg of all composites to be approximately the same (123 ±3°C) and the flexural storage modulus (before the onset of Tg) to be highest for the pure CF composite, while the lowest values were for the Innegra 1 and 2 composites. 
The short-beam shear strength of the commingled composites was higher than that of the other composites, while for the Innegra 1 and 2 composites only inelastic deformation failure was observed during the short-beam test. During impact, the Innegra 1 composite withstood up to 40 J without any perforation, while for the CF composite perforation occurred at 10 J. The rate of reduction in compression strength with increasing impact energy was lowest for the Innegra 1 and 2 composites, while the CF composite showed the highest rate; on the other hand, the compressive strength of the CF composite was the highest of all the composites at all impact energy levels. The predominant failure modes for the Innegra composites observed in cross-sections of fractured specimens were fibre pull-out, micro-buckling, and fibre plastic deformation, while fibre breakage and matrix delamination were the major failures observed in the commingled composites due to the more brittle behaviour of CF. Thus, Innegra fibres toughened the CF composites, but only at the expense of reduced compressive strength.
Keywords: hybrid composite, thermoplastic fibre, compression strength, damage tolerance
Procedia PDF Downloads 295
128 Computational Homogenization of Thin Walled Structures: On the Influence of the Global vs Local Applied Plane Stress Condition
Authors: M. Beusink, E. W. C. Coenen
Abstract:
The increased application of novel structural materials, such as high grade asphalt, concrete and laminated composites, has sparked the need for a better understanding of the often complex, non-linear mechanical behavior of such materials. The effective macroscopic mechanical response is generally dependent on the applied load path. Moreover, it is also significantly influenced by the microstructure of the material, e.g. embedded fibers, voids and/or grain morphology. At present, multiscale techniques are widely adopted to assess micro-macro interactions in a numerically efficient way. Computational homogenization techniques have been successfully applied over a wide range of engineering cases, e.g. cases involving first order and second order continua, thin shells and cohesive zone models. Most of these homogenization methods rely on Representative Volume Elements (RVE), which model the relevant microstructural details in a confined volume. Imposed through kinematical constraints or boundary conditions, an RVE can be subjected to a microscopic load sequence. This provides the RVE's effective stress-strain response, which can serve as constitutive input for macroscale analyses. Simultaneously, such a study of an RVE gives insight into fine scale phenomena such as microstructural damage and its evolution. It has been reported by several authors that the type of boundary conditions applied to the RVE affects the resulting homogenized stress-strain response. As a consequence, dedicated boundary conditions have been proposed to appropriately deal with this concern. For the specific case of a planar assumption for the analyzed structure, e.g. plane strain, axisymmetric or plane stress, this assumption needs to be addressed consistently in all considered scales. Although in many multiscale studies a planar condition has been employed, the related impact on the multiscale solution has not been explicitly investigated.
This work therefore focuses on the influence of the planar assumption for multiscale modeling. In particular, the plane stress case is highlighted by proposing three different implementation strategies which are compatible with a first-order computational homogenization framework. The first method consists of applying classical plane stress theory at the microscale, whereas with the second method a generalized plane stress condition is assumed at the RVE level. For the third method, the plane stress condition is applied at the macroscale by requiring that the resulting macroscopic out-of-plane forces are equal to zero. These strategies are assessed through a numerical study of a thin walled structure, and the resulting effective macroscale stress-strain responses are compared. It is shown that there is a clear influence of the length scale at which the planar condition is applied.
Keywords: first-order computational homogenization, planar analysis, multiscale, microstructures
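The first strategy (classical plane stress applied at the microscale) rests on the standard static condensation of a 3D stiffness: the out-of-plane normal strain is eliminated by requiring the out-of-plane stress to vanish. A minimal sketch in Python/NumPy for an isotropic linear-elastic material; the material values are illustrative, not taken from the study:

```python
import numpy as np

# Isotropic 3D stiffness, normal components only, in Voigt order (11, 22, 33).
# E and nu are illustrative values.
E, nu = 70e9, 0.3
lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame's first parameter
mu = E / (2 * (1 + nu))                    # shear modulus
C = np.array([[lam + 2 * mu, lam,          lam],
              [lam,          lam + 2 * mu, lam],
              [lam,          lam,          lam + 2 * mu]])

# Plane-stress condensation: enforce sigma_33 = 0 by eliminating eps_33.
# Partition C into in-plane (indices 0, 1) and out-of-plane (index 2) blocks.
Cii, Cio = C[:2, :2], C[:2, 2:]
Coi, Coo = C[2:, :2], C[2:, 2:]
C_ps = Cii - Cio @ np.linalg.inv(Coo) @ Coi   # condensed 2x2 stiffness

# Check against the textbook plane-stress stiffness E/(1-nu^2)*[[1, nu], [nu, 1]].
C_ref = E / (1 - nu**2) * np.array([[1, nu], [nu, 1]])
print(np.allclose(C_ps, C_ref))  # True
```

The same condensation idea extends to the second and third strategies; the difference discussed in the abstract is the scale (RVE level vs macroscale) at which the zero out-of-plane stress resultant is imposed.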
127 Recognising and Managing Haematoma Following Thyroid Surgery: Simulation Teaching is Effective
Authors: Emily Moore, Dora Amos, Tracy Ellimah, Natasha Parrott
Abstract:
Postoperative haematoma is a well-recognised complication of thyroid surgery, with an incidence of 1-5%. Haematoma formation causes progressive airway obstruction, necessitating emergency bedside haematoma evacuation in up to a quarter of patients. ENT UK, BAETS and DAS have developed consensus guidelines to improve perioperative care, recommending that all healthcare staff interacting with patients undergoing thyroid surgery should be trained in managing post-thyroidectomy haematoma. The aim was to assess the effectiveness of a hybrid simulation model in improving clinicians' confidence in dealing with this surgical emergency. A hybrid simulation was designed, consisting of a standardised patient wearing a part-task trainer to mimic a post-thyroidectomy haematoma in a real patient. The part-task trainer was an adapted C-spine collar with layers of silicone representing the skin and strap muscles and thickened jelly representing the haematoma. Both the skin and strap muscle layers had to be opened in order to evacuate the haematoma. Boxes have been placed in the appropriate postoperative areas (recovery and surgical wards), containing a printed algorithm designed to assist in remembering the sequence of steps for haematoma evacuation using the 'SCOOP' method (skin exposure, cut sutures, open skin, open muscles, pack wound), along with all the necessary equipment to open the front of the neck. Small-group teaching sessions were delivered by ENT and anaesthetic trainees to members of the multidisciplinary team normally involved in perioperative patient care, including ENT surgeons, anaesthetists, recovery nurses, HCAs and ODPs. The DESATS acronym of signs and symptoms to recognise (difficulty swallowing, EWS score, swelling, anxiety, tachycardia, stridor) was highlighted. Participants then took part in the hybrid simulation in order to practise the 'SCOOP' method of haematoma evacuation.
Participants were surveyed using a Likert scale to assess their level of confidence pre- and post-teaching session. 30 clinicians took part. Confidence (agreed/strongly agreed) in the recognition of post-thyroidectomy haematoma improved from 58.6% to 96.5%. Confidence in management improved from 27.5% to 89.7%. All participants successfully decompressed the haematoma. All participants agreed/strongly agreed that the sessions were useful for their learning. Multidisciplinary team simulation teaching is effective at significantly improving confidence in both the recognition and management of postoperative haematoma. Hybrid simulation sessions are useful and should be incorporated into training for clinicians.
Keywords: thyroid surgery, haematoma, teaching, hybrid simulation
126 High Impact Biostratigraphic Study
Abstract:
The re-calibration of the Campanian to Maastrichtian of some parts of the Anambra Basin was carried out using samples from two exploration wells, Amama-1 (219 m–1829 m) and Bara-1 (317 m–1594 m). Palynological and paleontological analyses were carried out on 100 ditch-cutting samples. The faunal and floral successions were of terrestrial and marine origin, as described and logged. The wells penetrated four stratigraphic units in the Anambra Basin (the Nkporo, Mamu, Ajali and Nsukka) and yielded well-preserved foraminifera and palynomorphs. Amama-1 yielded 53 species of foraminifera and 69 species of palynomorphs, with 12 genera; Bara-1 yielded 25 species of foraminifera and 101 species of palynomorphs. Amama-1 permitted the recognition of 21 genera with 31 foraminiferal assemblage zones, 32 pollen and 37 spore assemblage zones, and dinoflagellate cyst biozonation, ranging from late Campanian to early Paleocene. Bara-1 yielded 60 pollen and 41 spore assemblage zones and 18 dinoflagellate cysts. The zones, in stratigraphically ascending order for the foraminifera and palynomorphs, are as follows. Foraminifera, Amama-1: Biozone A, Globotruncanella havanensis zone, Late Campanian–Maastrichtian (695–1829 m); Biozone B, Morozovella velascoensis zone, Early Paleocene (165–695 m). Bara-1: Biozone A, Globotruncanella havanensis zone, Late Campanian (1512 m); Biozone B, Bolivina afra, B. explicata zone, Maastrichtian (634–1204 m); Biozone C, indeterminate (305–634 m). Palynology, Amama-1: A, Ctenolophonidites costatus zone, Early Maastrichtian (1829 m); B, Retidiporites miniporatus zone, Late Maastrichtian (1274 m); C, Constructipollenites ineffectus zone, Early Paleocene (695 m). Bara-1: A, Droseridites senonicus zone, Late Campanian (994–1600 m); B, Ctenolophonidites costatus zone, Early Maastrichtian (713–994 m); C, Retidiporites miniporatus zone, Late Maastrichtian (305–713 m). The paleo-environment of deposition was determined to range from non-marine to outer neritic.
A detailed categorization of the palynomorphs into terrestrially derived and marine-derived palynomorphs, based on the distribution of three broad vegetation types (mangrove, freshwater swamp and hinterland communities), was used to evaluate sea-level fluctuations with respect to the sediments deposited in the basin and linked with particular depositional system tracts. Amama-1 recorded four maximum flooding surfaces (MFS) at depths 165–1829 m, dated between 61 Ma and 76 Ma, and three sequence boundaries (SB) at depths 1048–1533 m and 1581 m; Bara-1 recorded maximum flooding surfaces between 634 m and 1387 m, dated 69.5–82 Ma, and four sequence boundaries at 552–876 m, dated 68–77.5 Ma, respectively. The ecostratigraphic description is characterised by the prominent expansion of the hinterland component, consisting of the Mangrove to Lowland Rainforest and Afromontane–Savannah vegetation.
Keywords: foraminifera, palynomorphs, Campanian, Maastrichtian, ecostratigraphy, Anambra
125 Effect of Chemical Fertilizer on Plant Growth-Promoting Rhizobacteria in Wheat
Authors: Tessa E. Reid, Vanessa N. Kavamura, Maider Abadie, Adriana Torres-Ballesteros, Mark Pawlett, Ian M. Clark, Jim Harris, Tim Mauchline
Abstract:
The deleterious effect of chemical fertilizer on rhizobacterial diversity has been well documented using 16S rRNA gene amplicon sequencing and predictive metagenomics. Biofertilization is a cost-effective and sustainable alternative; improving biofertilization strategies depends on isolating beneficial soil microorganisms. Although culturing is widespread in biofertilization, it is unknown whether the composition of cultured isolates closely mirrors native beneficial rhizobacterial populations. This study aimed to determine the relative abundance of culturable plant growth-promoting rhizobacteria (PGPR) isolates within total soil DNA and how potential PGPR populations respond to chemical fertilization in a commercial wheat variety. It was hypothesized that PGPR would be reduced in fertilized relative to unfertilized wheat. Triticum aestivum cv. Cadenza seeds were sown in a nutrient-depleted agricultural soil in pots treated with and without nitrogen-phosphorus-potassium (NPK) fertilizer. Rhizosphere and rhizoplane samples were collected at flowering stage (10 weeks) and analyzed by culture-independent (amplicon sequence variant (ASV) analysis of total rhizobacterial DNA) and culture-dependent (isolation using growth media) techniques. Rhizosphere- and rhizoplane-derived microbiota culture collections were tested for plant growth-promoting traits using functional bioassays. In general, fertilizer addition decreased the proportion of nutrient-solubilizing bacteria (nitrate, phosphate, potassium, iron and zinc) isolated from rhizocompartments in wheat, whereas salt-tolerant bacteria were not affected. A PGPR database was created from isolate 16S rRNA gene sequences and searched against total soil DNA, revealing that 1.52% of total community ASVs were identified as culturable PGPR isolates.
Bioassays identified a higher proportion of PGPR in non-fertilized samples (rhizosphere (49%) and rhizoplane (91%)) compared to fertilized samples (rhizosphere (21%) and rhizoplane (19%)), constituting approximately 1.95% and 1.25% of non-fertilized and fertilized total community DNA, respectively. The analyses of 16S rRNA genes and deduced functional profiles provide an in-depth understanding of the responses of bacterial communities to fertilizer; this study suggests that rhizobacteria, which potentially benefit plants by mobilizing insoluble nutrients in soil, are reduced by chemical fertilizer addition. This knowledge will benefit the development of more targeted biofertilization strategies.
Keywords: bacteria, fertilizer, microbiome, rhizoplane, rhizosphere
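The database-search step described above reduces, at its simplest, to exact matching of isolate 16S sequences against the community ASV table and tallying their share of the community. A hypothetical sketch; the sequences, counts, and the `asv_counts`/`pgpr_db` names are invented for illustration and are not the study's data:

```python
# Hypothetical ASV table: sequence -> read count in total community DNA.
asv_counts = {
    "ACGTACGT": 500,   # matches a cultured PGPR isolate
    "ACGTACGA": 300,
    "TTGCACGT": 150,   # matches a cultured PGPR isolate
    "GGGCACGT": 50,
}

# Hypothetical 16S sequences from the culturable PGPR collection.
pgpr_db = {"ACGTACGT", "TTGCACGT", "AAAAAAAA"}

# ASVs identified as culturable PGPR by exact sequence match.
matched = {seq: n for seq, n in asv_counts.items() if seq in pgpr_db}

# Proportion of the community identified as culturable PGPR,
# by ASV richness and by read abundance.
richness_pct = 100 * len(matched) / len(asv_counts)
abundance_pct = 100 * sum(matched.values()) / sum(asv_counts.values())
print(richness_pct, abundance_pct)  # 50.0 65.0
```

In practice the comparison would use alignment or clustering at a chosen identity threshold rather than exact string equality, but the bookkeeping of the reported percentages follows this pattern.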
124 Neural Synchronization - The Brain’s Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain’s subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality’s algorithm, where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like ‘time is relative,’ but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles its measurements around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds, the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms).
This creates a data payload of synchronous motion that preserves the original sensory observation; essentially, a frozen moment in time (flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available, because other observation times are slower than thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What’s interesting is that time dilation is not the problem; it’s the solution. Einstein said there was no universal time.
Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
123 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As an effect, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J, which measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations.
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As an effect, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two-dimensional turbulence
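The AGM iteration described above can be illustrated on a simpler irreversible system: recovering the initial condition of a 1D periodic diffusion equation. Each explicit forward step applies the symmetric operator M = I + Δt·ν·L, so the gradient of J = ½‖u1 − v1‖² with respect to u0 is the residual transported by that same self-adjoint operator, and no unstable backward integration is ever performed. A toy sketch in Python/NumPy, not the authors' 2D turbulence code; all parameters are illustrative:

```python
import numpy as np

n, nsteps, dt, nu = 64, 50, 0.2, 1.0    # grid size, time steps, toy parameters

def laplacian(u):
    """Periodic 1D Laplacian (unit grid spacing)."""
    return np.roll(u, 1) - 2 * u + np.roll(u, -1)

def forward(u):
    """Integrate the diffusion equation forward in time; the explicit
    scheme is stable here since dt * nu <= 0.5 on this grid."""
    u = u.copy()
    for _ in range(nsteps):
        u = u + dt * nu * laplacian(u)
    return u

x = np.linspace(0, 2 * np.pi, n, endpoint=False)
true_u0 = np.sin(x) + 0.5 * np.sin(3 * x)   # "hidden" initial field
v1 = forward(true_u0)                       # target field at t = 1

# AGM loop: each forward step applies the symmetric operator
# M = I + dt*nu*L, so grad J w.r.t. u0 is M^nsteps (u1 - v1), i.e. the
# residual transported by the same (self-adjoint) forward operator.
u0 = np.zeros(n)                            # initial guess
for _ in range(200):
    residual = forward(u0) - v1
    u0 -= forward(residual)                 # gradient-descent update
print(0.5 * np.sum((forward(u0) - v1) ** 2))  # cost decays toward zero
```

The NSE adjoint is more involved (the adjoint equation is not the forward equation), but the structure of the loop is the same: forward solve, residual, adjoint transport, update of u0.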
122 Sociology Perspective on Emotional Maltreatment: Retrospective Case Study in a Japanese Elementary School
Authors: Nozomi Fujisaka
Abstract:
This sociological case study analyzes a sequence of student maltreatment in an elementary school in Japan, based on narratives from former students. Among various forms of student maltreatment, emotional maltreatment has received less attention. One reason for this is that emotional maltreatment is often considered part of education and is difficult to capture in surveys. To discuss the challenge of recognizing emotional maltreatment, it's necessary to consider the social background in which student maltreatment occurs. Therefore, from the perspective of the sociology of education, this study aims to clarify the process through which emotional maltreatment was embraced by students within a Japanese classroom. The focus of this study is a series of educational interactions by a homeroom teacher with 11- or 12-year-old students at a small public elementary school approximately 10 years ago. The research employs retrospective narrative data collected through interviews and autoethnography. The semi-structured interviews, lasting one to three hours each, were conducted with 11 young people who were enrolled in the same class as the researcher during their time in elementary school. Autoethnography, as a critical research method, contributes to existing theories and studies by providing a critical representation of the researcher's own experiences. Autoethnography enables researchers to collect detailed data that is often difficult to verbalize in interviews. These research methods are well-suited for this study, which aims to shift the focus from teachers' educational intentions to students' perspectives and gain a deeper understanding of student maltreatment. The research results imply a pattern of emotional maltreatment that is challenging to differentiate from education. In this study's case, the teacher displayed calm and kind behavior toward students after a threat and an explosion of anger. 
Former students frequently mentioned this behavior of the teacher and perceived emotional maltreatment as part of education. It was not uncommon for former students to offer positive evaluations of the teacher despite experiencing emotional distress. These findings are analyzed and discussed in conjunction with deschooling theory and the cycle of violence theory. Deschooling theory provides a sociological explanation for how emotional maltreatment can be overlooked in society. The cycle of violence theory, originally developed within the context of domestic violence, explains how violence between romantic partners can be tolerated due to prevailing social norms. Analyzing the case in association with these two theories highlights the characteristics of teachers' behaviors that rationalize maltreatment as education and hinder students from escaping emotional maltreatment. This study deepens our understanding of the causes of student maltreatment and provides a new perspective for future qualitative and quantitative research. Furthermore, since this research is based on the sociology of education, it has the potential to expand research in the fields of pedagogy and sociology, in addition to psychology and social welfare.
Keywords: emotional maltreatment, education, student maltreatment, Japan
121 Explaining Irregularity in Music by Entropy and Information Content
Authors: Lorena Mihelac, Janez Povh
Abstract:
In 2017, we conducted a research study using data consisting of 160 musical excerpts from different musical styles to analyze the impact of the entropy of the harmony on the acceptability of music. In measuring the entropy of harmony, we were interested in unigrams (individual chords in the harmonic progression) and bigrams (the connection of two adjacent chords). In this study, it was found that 53 of the 160 musical excerpts were evaluated by participants as very complex, although the entropy of the harmonic progression (unigrams and bigrams) was calculated as low. We explained this by particularities of the chord progression, which impact the listener's feeling of complexity and acceptability. We evaluated the same data twice more, with new participants in 2018 and with the same participants for the third time in 2019. These three evaluations have shown that the same 53 musical excerpts, found to be difficult and complex in the study conducted in 2017, again exhibit a high feeling of complexity. It was proposed that the content of these musical excerpts, defined as “irregular,” does not meet the listener's expectancy or the basic perceptual principles, creating a higher feeling of difficulty and complexity. As the “irregularities” in these 53 musical excerpts seem to be perceived by the participants without their being aware of it, affecting the pleasantness and the feeling of complexity, they have been defined as “subliminal irregularities” and the 53 musical excerpts as “irregular.” In our recent study (2019) of the same data (used in previous research works), we proposed a new measure of the complexity of harmony, “regularity,” based on the irregularities in the harmonic progression and other plausible particularities in the musical structure found in previous studies. In this study, we also proposed a list of 10 different particularities which we assumed to impact the participant’s perception of complexity in harmony.
These ten particularities have been tested in this paper by extending the analysis in our 53 irregular musical excerpts from harmony to melody. In examining the melody, we have used the computational model “Information Dynamics of Music” (IDyOM) and two information-theoretic measures: entropy, the uncertainty of the prediction before the next event is heard, and information content, the unexpectedness of an event in a sequence. In order to describe the features of melody in these musical examples, we have used four different viewpoints: pitch, interval, duration, and scale degree. The results have shown that the texture of the melody (e.g., multiple voices, homorhythmic structure) and the structure of the melody (e.g., large interval leaps, syncopated rhythm, implied harmony in compound melodies) in these musical excerpts impact the participant’s perception of complexity. High information content values were found in compound melodies, in which implied harmonies seem to have suggested additional harmonies, affecting the participant’s perception of the chord progression in harmony by creating a sense of an ambiguous musical structure.
Keywords: entropy and information content, harmony, subliminal (ir)regularity, IDyOM
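The two measures used here, entropy (uncertainty before the next event) and information content (unexpectedness of an event given its context), can be sketched on a toy chord sequence. This is a minimal unigram/bigram illustration, not the IDyOM model; the chord labels are invented:

```python
import math
from collections import Counter

chords = ["I", "IV", "V", "I", "IV", "V", "I", "vi", "IV", "V", "I"]

def entropy(counts):
    """Shannon entropy (bits) of an empirical count distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

unigrams = Counter(chords)                    # individual chords
bigrams = Counter(zip(chords, chords[1:]))    # adjacent chord pairs
prev_counts = Counter(chords[:-1])            # counts of each chord as context
print(entropy(unigrams), entropy(bigrams))    # unigram and bigram uncertainty

# Information content of each transition: -log2 p(next | previous),
# estimated from raw counts (no smoothing). Rare transitions such as
# I -> vi score high, i.e. they are the unexpected, "irregular" events.
for (prev, nxt), c in sorted(bigrams.items()):
    ic = -math.log2(c / prev_counts[prev])
    print(f"{prev:>2} -> {nxt:<2} IC = {ic:.2f} bits")
```

IDyOM itself combines multiple viewpoints (pitch, interval, duration, scale degree) with variable-order context models, but each reported value is still an entropy or a negative log-probability of this kind.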
120 Biocultural Biographies and Molecular Memories: A Study of Neuroepigenetics and How Trauma Gets under the Skull
Authors: Elsher Lawson-Boyd
Abstract:
In the wake of the Human Genome Project, the life sciences have undergone some fascinating changes. In particular, conventional beliefs relating to gene expression are being challenged by advances in the postgenomic sciences, especially by the field of epigenetics. Epigenetics is the modification of gene expression without changes in the DNA sequence. In other words, epigenetics dictates that gene expression, the process by which the instructions in DNA are converted into products like proteins, is not solely controlled by DNA itself. Unlike the gene-centric theories of heredity that characterized much of the 20th century (where genes were considered to have almost god-like power to create life), epigenetics insists on the role of environmental ‘signals’ or ‘exposures’ in gene expression, a point that radically deviates from gene-centric thinking. Science and Technology Studies (STS) scholars have shown that epigenetic research is having vast implications for the ways in which chronic, non-communicable diseases are conceptualized, treated, and governed. However, to the author’s knowledge, there have not yet been any in-depth sociological engagements with neuroepigenetics that examine how the field is affecting mental health and trauma discourse. In this paper, the author discusses preliminary findings from a doctoral ethnographic study on neuroepigenetics, trauma, and embodiment. Specifically, this study investigates the kinds of causal relations neuroepigenetic researchers are making between experiences of trauma and the development of mental illnesses like complex post-traumatic stress disorder (PTSD), both throughout a human’s lifetime and across generations. Using qualitative interviews and non-participant observation, the author focuses on two public-facing research centers based in Melbourne: the Florey Institute of Neuroscience and Mental Health (FNMH) and the Murdoch Children’s Research Institute (MCRI).
Preliminary findings indicate that a great deal of ambiguity characterizes this infant field, particularly when animal-model experiments are employed and the results are translated into human frameworks. Nevertheless, researchers at the FNMH and MCRI strongly suggest that adverse and traumatic life events have a significant effect on gene expression, especially when experienced during early development. Furthermore, they predict that neuroepigenetic research will have substantial implications for the ways in which mental illnesses like complex PTSD are diagnosed and treated. These preliminary findings shed light on why medical and health sociologists have good reason to be chiming in, engaging with, and de-black-boxing ideations emerging from the postgenomic sciences, as they may have significant effects for vulnerable populations not only in Australia but also in developing countries of the Global South.
Keywords: genetics, mental illness, neuroepigenetics, trauma