Search results for: Michael Page
227 An Experimental Investigation on Explosive Phase Change of Liquefied Propane During a BLEVE Event
Authors: Frederic Heymes, Michael Albrecht Birk, Roland Eyssette
Abstract:
Boiling Liquid Expanding Vapor Explosion (BLEVE) has been a well-known industrial accident for over six decades, and yet it is still poorly predicted and avoided. A BLEVE occurs when a vessel containing a pressure liquefied gas (PLG) is engulfed in a fire until the tank ruptures. At that moment, the pressure drops suddenly, leaving the liquid in a superheated state. The vapor expansion and the violent boiling of the liquid produce several shock waves. This work aimed at understanding the contribution of the vapor and liquid phases to the overpressure generated in the near field. An experimental campaign was undertaken at small scale to reproduce realistic BLEVE explosions. Key parameters were controlled in the experiments, such as failure pressure, fluid mass in the vessel, and weakened length of the vessel. Thirty-four propane BLEVEs were then performed to collect data on scenarios similar to common industrial cases. The aerial overpressure was recorded all around the vessel, together with the internal pressure change during the explosion and the ground loading under the vessel. Several high-speed cameras were used to observe the vessel explosion and the blast formation by shadowgraph. Results highlight how the pressure field is anisotropic around the cylindrical vessel and show a strong dependency between vapor content and the maximum overpressure of the lead shock. The time chronology of events reveals that the vapor phase is the main contributor to the aerial overpressure peak. A prediction model is built upon this assumption. Secondary flow patterns are observed after the lead shock. A theory on how the second shock observed in the experiments forms is proposed by analogy with numerical simulation. The phase change dynamics are also discussed using a window in the vessel. Ground loading measurements are finally presented and discussed to give insight into the order of magnitude of the force.
Keywords: phase change, superheated state, explosion, vapor expansion, blast, shock wave, pressure liquefied gas
Procedia PDF Downloads 80
226 Insecticidal Activity of Bacillus Thuringiensis Strain AH-2 Against Hemiptera Insect Pests: Aphis Gossypii, and Lepidoptera Insect Pests: Plutella Xylostella and Hyphantria Cunea
Authors: Ajuna B. Henry
Abstract:
In recent decades, climate change has increased the demand for biological pesticides; more Bt strains are being discovered worldwide, some containing novel insecticidal genes while others have been modified through molecular approaches for increased yield, toxicity, and a wider host range. In this study, B. thuringiensis strain AH-2 (Bt-2) was isolated from soil and tested for insecticidal activity against Aphis gossypii (Hemiptera: Aphididae) and the Lepidoptera insect pests fall webworm (Hyphantria cunea) and diamondback moth (Plutella xylostella). A commercial strain, B. thuringiensis subsp. kurstaki (Btk), and a chemical pesticide, imidacloprid (for Hemiptera) or chlorantraniliprole (for Lepidoptera), were used as positive controls, and the same media (without bacterial inoculum) as a negative control. For aphicidal activity, Bt-2 caused mortality rates of 70.2%, 78.1% or 88.4% in third instar nymphs of A. gossypii (3N) at 10%, 25% or 50% culture concentrations, respectively. Moreover, Bt-2 was effectively produced in cost-effective PB media supplemented with either glucose (PBG) or sucrose (PBS) and maintained high aphicidal efficacy, with 3N mortality rates of 85.9%, 82.9% or 82.2% in TSB, PBG or PBS media, respectively, at 50% culture concentration. Bt-2 also suppressed adult fecundity by 98.3%, compared to only 65.8% suppression by Btk at similar concentrations, but was slightly lower than the chemical treatment, which caused 100% suppression. Partial purification of the 60–80% (NH4)2SO4 fraction of Bt-2 aphicidal proteins on an anion exchange (DEAE-FF) column revealed a 105 kDa aphicidal protein with LC50 = 55.0 ng/µL. For the Lepidoptera pests, the chemical pesticide, Bt-2, and Btk cultures caused mortality of 86.7%, 60%, and 60% in 3rd instar larvae of P. xylostella, and 96.7%, 80.0%, and 93.3% in 6th instar larvae of H. cunea, after 72 h of exposure. When the entomopathogenic strains were cultured in cost-effective PBG or PBS, the insecticidal activity of all strains was not significantly different from that obtained with the commercial medium (TSB). Bt-2 caused mortality rates of 60.0%, 63.3%, and 50.0% against P. xylostella larvae and 76.7%, 83.3%, and 73.3% against H. cunea when grown in TSB, PBG, and PBS media, respectively. Bt-2 (grown in the cost-effective PBG medium) caused dose-dependent toxicity of 26.7%, 40.0%, and 63.3% against P. xylostella and 46.7%, 53.3%, and 76.7% against H. cunea at 10%, 25% and 50% culture concentrations, respectively. The partially purified Bt-2 insecticidal protein fractions F1, F2, F3, and F4 (extracted at different ratios of organic solvent) caused low toxicity (50.0%, 40.0%, 36.7%, and 30.0%) against P. xylostella and relatively high toxicity (56.7%, 76.7%, 66.7%, and 63.3%) against H. cunea at 100 µg/g of artificial diet. SDS-PAGE analysis revealed that a 128 kDa protein is associated with the toxicity of Bt-2. Our results demonstrate medium and strong larvicidal activity of Bt-2 against P. xylostella and H. cunea, respectively. Moreover, Bt-2 could potentially be produced using the cost-effective PBG medium, which makes it an effective alternative biocontrol strategy to reduce chemical pesticide application.
Keywords: biocontrol, insect pests, larvae/nymph mortality, cost-effective media, aphis gossypii, plutella xylostella, hyphantria cunea, bacillus thuringiensis
Procedia PDF Downloads 20
225 Everolimus Loaded Polyvinyl Alcohol Microspheres for Sustained Drug Delivery in the Treatment of Subependymal Giant Cell Astrocytoma
Authors: Lynn Louis, Bor Shin Chee, Marion McAfee, Michael Nugent
Abstract:
This article aims to develop a sustained release formulation of microspheres containing the mTOR inhibitor Everolimus (EVR) using polyvinyl alcohol (PVA) to enhance the bioavailability of the drug and to overcome the poor solubility characteristics of Everolimus. This paper builds on recent work in the manufacture of microspheres using the sessile droplet technique, in which the polymer-drug solution is frozen by suspending the droplets in pre-cooled ethanol vials immersed in liquid nitrogen. The spheres were subjected to six freezing cycles and three freezing cycles with thawing to obtain proper geometry, prevent aggregation, and achieve physical cross-linking. The prepared microspheres were characterised for surface morphology by SEM, where a 3-D porous structure was observed. The in vitro release studies showed a 62.17% release over 12.5 days, indicating a sustained release due to good encapsulation. This is considerably more than the 49.06% release achieved within 4 hours from the solvent cast Everolimus film, made in this work as a control with no freeze-thaw cycles performed. A prolonged release of Everolimus using a polymer-based drug delivery system is essential to reach optimal therapeutic concentrations in treating SEGA tumours without systemic exposure. These results suggest that the combination of PVA and Everolimus via a rheological synergism enhanced the bioavailability of the hydrophobic drug Everolimus. Physical-chemical characterisation using DSC and FTIR analysis showed compatibility of the drug with the polymer, and the stability of the drug was maintained owing to the high molecular weight of the PVA. The obtained results indicate that the developed PVA/EVR microsphere is highly suitable as a potential drug delivery system with improved bioavailability in treating subependymal giant cell astrocytoma (SEGA).
Keywords: drug delivery system, everolimus, freeze-thaw cycles, polyvinyl alcohol
Procedia PDF Downloads 131
224 Outcomes of the Gastrocnemius Flap Performed by Orthopaedic Surgeons in Salvage Revision Knee Arthroplasty: A Retrospective Study at a Tertiary Orthopaedic Centre
Authors: Amirul Adlan, Robert McCulloch, Scott Evans, Michael Parry, Jonathan Stevenson, Lee Jeys
Abstract:
Background and Objectives: The gastrocnemius myofascial flap is used to manage soft-tissue defects over the anterior aspect of the knee in the context of a patient presenting with a sinus and periprosthetic joint infection (PJI) or extensor mechanism failure. The aim of this study was twofold: firstly, to evaluate the outcomes of gastrocnemius flaps performed by appropriately trained orthopaedic surgeons in the context of PJI and, secondly, to evaluate the infection-free survival of this patient group. Methods: We retrospectively reviewed 30 patients who underwent gastrocnemius flap reconstruction during staged revision total knee arthroplasty for PJI. All flaps were performed by an orthopaedic surgeon with orthoplastics training. Patients had a mean age of 68.9 years (range 50–84) and were followed up for a mean of 50.4 months (range 2–128 months). A total of 29 patients (97%) were categorized as Musculoskeletal Infection Society (MSIS) local extremity grade 3 (greater than two compromising factors), and 52% of PJIs were polymicrobial. The primary outcome measure was flap failure, and the secondary outcome measure was recurrent infection. Results: Flap survival was 100%, with no failures or early returns to theatre for flap problems such as necrosis or haematoma. Overall infection-free survival during the study period was 48% (13 of 27 infected cases). Using limb salvage as the outcome, 77% (23 of 30 patients) retained the limb. Infection recurrence occurred in 48% (10 patients) of the type B3 cohort and 67% (4 patients) of the type C3 cohort (p = 0.65). Conclusion: The surgical technique for a gastrocnemius myofascial flap is reliable and reproducible when performed by appropriately trained orthopaedic surgeons, even in high-risk groups. However, the risks of recurrent infection and amputation remain high within our series due to poor host and extremity factors.
Keywords: gastrocnemius flap, limb salvage, revision arthroplasty, outcomes
Procedia PDF Downloads 111
223 An Observation Approach of Reading Order for Single Column and Two Column Layout Template
Authors: In-Tsang Lin, Chiching Wei
Abstract:
Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. A survey of the literature shows that state-of-the-art algorithms cannot reliably recover the correct reading order in portable document format (PDF) files with rich formats and diverse layout arrangements. In recent years, most studies on reading order analysis have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to the problem of detecting reading order relationships between logical components, such as cross-references. Over three years of development, the company Foxit has built a layout recognition (LR) engine, demonstrated in revision 20601, with the aim of improving the accuracy of the reading order. The bounding box of each paragraph can be obtained correctly by the Foxit LR engine, but the reading-order result is not always correct for single-column and two-column layout formats due to the table issue, the formula issue, and the issue of multiple small separated bounding boxes and footers. Thus, an algorithm was developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a creative observation method (here called the MESH method) is proposed to open a new direction in reading-order research. Two important parameters are introduced: one is the number of bounding boxes to the right of the present bounding box (NRight), and the other is the number of bounding boxes under the present bounding box (Nunder). The normalized x-value (x divided by the whole width), the normalized y-value (y divided by the whole height), and the x- and y-positions of each bounding box were also taken into consideration. Initial experimental results for the single-column layout format demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) using our proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 72%. For the two-column layout format, the preliminary results demonstrate a 44.44% absolute improvement in reading-order accuracy over 2 PDF files (18 pages in total) using our proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple small separated bounding box issue can be solved using the MESH method. However, three issues remain unsolved: the table issue, the formula issue, and randomly placed multiple small separated bounding boxes. The detection of the table position and the recognition of the table structure are out of the scope of this paper and require further research. Future work will address how to detect the table position in the page and how to extract the content of the table.
Keywords: document processing, reading order, observation method, layout recognition
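The abstract introduces the MESH parameters NRight and Nunder but does not give pseudocode, so the following Python sketch only illustrates how such parameters could be computed for a set of paragraph bounding boxes; the Box class, the overlap tests, and the final sort key are illustrative assumptions, not the authors' algorithm.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Box:
    x0: float  # left edge, in page units
    y0: float  # top edge
    x1: float  # right edge
    y1: float  # bottom edge

def mesh_parameters(boxes: List[Box], page_w: float, page_h: float):
    """For each paragraph bounding box, compute the two MESH parameters
    named in the abstract (NRight, Nunder) plus normalized coordinates."""
    feats = []
    for b in boxes:
        n_right = sum(
            1 for o in boxes
            if o is not b
            and o.x0 >= b.x1                        # starts to the right of b
            and not (o.y1 <= b.y0 or o.y0 >= b.y1)  # vertically overlaps b
        )
        n_under = sum(
            1 for o in boxes
            if o is not b
            and o.y0 >= b.y1                        # starts below b
            and not (o.x1 <= b.x0 or o.x0 >= b.x1)  # horizontally overlaps b
        )
        feats.append({
            "box": b,
            "n_right": n_right,
            "n_under": n_under,
            "x_norm": b.x0 / page_w,
            "y_norm": b.y0 / page_h,
        })
    return feats

def assumed_reading_order(boxes: List[Box], page_w: float, page_h: float) -> List[Box]:
    # Illustrative heuristic only: treat boxes with more neighbours to their
    # right as belonging to an earlier column, then read top-to-bottom.
    feats = mesh_parameters(boxes, page_w, page_h)
    feats.sort(key=lambda f: (-f["n_right"], f["y_norm"], f["x_norm"]))
    return [f["box"] for f in feats]
```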
Procedia PDF Downloads 181
222 Knowledge of Risk Factors and Health Implications of Fast Food Consumption among Undergraduates in a Nigerian Polytechnic
Authors: Adebusoye Michael, Anthony Gloria, Fasan Temitope, Jacob Anayo
Abstract:
Background: The culture of fast food consumption has gradually become a common lifestyle in Nigeria, especially among young people in urban areas, in spite of the associated adverse health consequences. The adolescent pattern of fast food consumption, and their perception of this practice as a risk factor for Non-Communicable Diseases (NCDs), have not been fully explored. This study was designed to assess the fast food consumption pattern and the perception of it as a risk factor for NCDs among undergraduates of Federal Polytechnic, Bauchi. Methodology: The study was descriptive and cross-sectional in design. One hundred and eighty-five students were recruited using a systematic random sampling method from the two halls of residence. A structured questionnaire was used to assess the consumption pattern of fast foods. Data collected from the questionnaires were analysed using the Statistical Package for the Social Sciences (SPSS) version 16. Simple descriptive statistics, such as frequency counts and percentages, were used to interpret the data. Results: The age range of respondents was 18-34 years; 58.4% were males, 93.5% were single, and 51.4% of their parents were employed. All respondents (100%) were aware of fast foods, and 75% agreed that its consumption has implications for NCDs. Fast food consumption included meat pie (4.9%), beef roll/sausage (2.7%), egg roll (13.5%), doughnut (16.2%), noodles (18%) and carbonated drinks (3.8%). 30.3% consumed fast food three times a week, and 71% attributed high consumption of fast food to workload. Conclusion: The study revealed that social pressure from peers, time constraints, class pressure and the school programme had a strong influence on the high percentage of higher institution students who consume fast foods. Nutrition education campaigns for campus food outlets or vendors, and behavioural change communication on healthy nutrition and lifestyles among young people, are therefore advocated.
Keywords: fast food consumption, Nigerian polytechnic, risk factors, undergraduate
Procedia PDF Downloads 471
221 Mindfulness in a Secular Age: Framing and Contextualising the Conversation in the Irish Context
Authors: Thomas P. Carroll
Abstract:
The phenomenon of mindfulness has become ever more popular in an increasingly pluralist Western society. Mindfulness practice has penetrated secular contexts that would otherwise be closed to religious influence, including state schools, hospitals, and commerce. The contemporary understanding of mindfulness has its origins in Buddhist meditation. However, since Jon Kabat-Zinn's pioneering work in Mindfulness-Based Interventions, the concept has developed and sometimes mutated into various forms of practice which are disembedded from their original spiritual philosophy. This project will explore the spiritual climate within which mindfulness is currently flourishing through dialogue with three interlocutors. The first interlocutor is the Canadian philosopher Charles Taylor, whose seminal work 'A Secular Age' outlines three distinct modes of secularity. Taylor examines how the conditions of belief have changed and how the self seeks meaning in an age where belief in the divine is no longer axiomatic. The next interlocutor is the Czech theologian and psychotherapist Tomáš Halík, who offers the unique perspective of a Catholic who belongs to a section of society outnumbered by secular counterparts, with a theological hermeneutic best described as 'Den Fremden verstehen' (understanding the stranger). Finally, the Irish theologian Michael Paul Gallagher offers a theological perspective on how the Christian faith can be brought into dialogue with Irish secular culture, as well as addressing the crisis of imagination and culture, rather than the crisis of faith, in Ireland. These interlocutors illustrate that there are sometimes striking differences in how to interpret the religious signs of the times. However, these approaches also reveal significant similarities in how they address and explore the meaning of religious belief and experience today. In this way, themes will emerge that help to frame the conversation about mindfulness in the West. These themes include the failure of the secularization thesis to come to pass, the growth of a diverse marketplace of religions and beliefs, and the growth of a demographic who identify as spiritual but not religious. Such research is paramount in enabling a richer dialogue between Christian faith and mindfulness in a fragmented, postmodern Western context.
Keywords: culture, mindfulness, secularism, spirituality
Procedia PDF Downloads 115
220 Characterising Rates of Renal Dysfunction and Sarcoidosis in Patients with Elevated Serum Angiotensin-Converting Enzyme
Authors: Fergal Fouhy, Alan O’Keeffe, Sean Costelloe, Michael Clarkson
Abstract:
Background: Sarcoidosis is a systemic, non-infectious disease of unknown aetiology, characterized by non-caseating granulomatous inflammation. The lung is most often affected (90%); however, the condition can affect all organs, including the kidneys. There is limited evidence describing the incidence and characteristics of renal involvement in sarcoidosis. Serum angiotensin-converting enzyme (ACE) is a recognised biomarker used in the diagnosis and monitoring of sarcoidosis. Methods: A single-centre, retrospective cohort study of patients presenting to Cork University Hospital (CUH) in 2015 with first-time elevations of serum ACE was performed. This included an initial database review of ACE and other biochemistry results, followed by a medical chart review to confirm the presence or absence of sarcoidosis and its management. Acute kidney injury (AKI) was staged using the AKIN criteria, and chronic kidney disease (CKD) was staged using the KDIGO criteria. Follow-up was assessed over five years, tracking serum creatinine, serum calcium, and estimated glomerular filtration rates (eGFR). Results: 119 patients were identified as having a first raised serum ACE in 2015. Seventy-nine male patients and forty female patients were identified. The mean age of patients identified was 47 years. 11% had CKD at baseline. 18% developed an AKI at least once within the next five years. A further 6% developed CKD during this time period. 13% developed hypercalcemia. The patients within the lowest quartile of serum ACE had an incidence of sarcoidosis of 5%. None of this group developed hypercalcemia, 23% developed AKI, and 7% developed CKD. Of the patients with a serum ACE in the highest quartile, almost all had documented diagnoses of sarcoidosis, with an incidence of 96%. 3% of this group developed hypercalcemia, 13% developed AKI, and 3% developed CKD. Conclusions: There was an unexpectedly high incidence of AKI in patients who had a raised serum ACE. Not all patients with a raised serum ACE had a confirmed diagnosis of sarcoidosis. There does not appear to be a relationship between increased serum ACE levels and increased incidence of hypercalcemia, AKI, and CKD. Ideally, all patients should have biopsy-proven sarcoidosis. This is an initial study that should be replicated with larger numbers and in multiple centres.
Keywords: sarcoidosis, acute kidney injury, chronic kidney disease, hypercalcemia
Procedia PDF Downloads 104
219 Bioengineering of a Plant System to Sustainably Remove Heavy Metals and to Harvest Rare Earth Elements (REEs) from Industrial Wastes
Authors: Edmaritz Hernandez-Pagan, Kanjana Laosuntisuk, Alex Harris, Allison Haynes, David Buitrago, Michael Kudenov, Colleen Doherty
Abstract:
Rare Earth Elements (REEs) are critical metals for modern electronics, green technologies, and defense systems. However, due to their dispersed nature in the Earth's crust, frequent co-occurrence with radioactive materials, and similar chemical properties, acquiring and purifying REEs is costly and environmentally damaging, restricting access to these metals. Plants could serve as resources for bioengineering REE mining systems. Although there is limited information on how REEs affect plants at a cellular and molecular level, plants with high REE tolerance and hyperaccumulation have been identified. This dissertation aims to develop a plant-based system for harvesting REEs from industrial waste material, with a focus on Acid Mine Drainage (AMD), a toxic coal mining product. The objectives are 1) to develop a non-destructive, in vivo method for REE detection in Phytolacca (REE hyperaccumulator) plants utilizing fluorescence spectroscopy, with a primary focus on dysprosium; 2) to characterize the uptake of REEs and heavy metals in Phytolacca americana and Phytolacca acinosa (REE hyperaccumulators) exposed to AMD, for potential implementation in the plant-based system; 3) to apply the REE detection method to identify REE-binding proteins and peptides for potential enhancement of uptake and selectivity for targeted REEs in the plants used in the plant-based system. The candidates are known REE-binding peptides or proteins, orthologs of known metal-binding proteins from REE hyperaccumulator plants, and novel proteins and peptides identified by comparative plant transcriptomics. Lanmodulin, a high-affinity REE-binding protein from methylotrophic bacteria, is used as a benchmark for the REE-protein binding fluorescence assays and is expressed in A. thaliana to test for changes in plant REE tolerance and uptake.
Keywords: phytomining, agromining, rare earth elements, pokeweed, phytolacca
Procedia PDF Downloads 18
218 Loss of Function of Only One of Two CPR5 Paralogs Causes Resistance Against Rice Yellow Mottle Virus
Authors: Yugander Arra, Florence Auguy, Melissa Stiebner, Sophie Chéron, Michael M. Wudick, Van Schepler-Luu, Sébastien Cunnac, Wolf B. Frommer, Laurence Albar
Abstract:
Rice yellow mottle virus (RYMV) causes one of the most important diseases affecting rice in Africa. The most promising strategy to reduce yield losses is the use of highly resistant varieties. The resistance gene RYMV2 is a homolog of the Arabidopsis constitutive expression of pathogenesis-related protein 5 (AtCPR5) nucleoporin gene. Resistance alleles originate from the African cultivated rice Oryza glaberrima, which is rarely cultivated, and are characterized by frameshifts or early stop codons, leading to a non-functional or truncated protein. Rice possesses two paralogs of CPR5, and the function of these genes is unclear. Here, we evaluated the role of the two rice candidate nucleoporin paralogs OsCPR5.1 (pathogenesis-related gene 5; RYMV2) and OsCPR5.2 by CRISPR/Cas9 genome editing. Despite striking sequence and structural similarity, only loss of function of OsCPR5.1 led to full resistance, while loss-of-function oscpr5.2 mutants remained susceptible. Short N-terminal deletions in OsCPR5.1 also did not lead to resistance. In contrast to Atcpr5 mutants, neither OsCPR5.1 nor OsCPR5.2 knockout mutants showed substantial growth defects. Taken together, the candidate nucleoporin OsCPR5.1, but not its close homolog OsCPR5.2, plays a specific role in the susceptibility to RYMV, possibly by impairing the import of viral RNA or protein into the nucleus. Whereas gene introgression from O. glaberrima to high-yielding O. sativa varieties is impaired by strong sterility barriers and the negative impact of linkage drag, genome editing of OsCPR5.1, while maintaining OsCPR5.2 activity, provides a promising strategy to generate O. sativa elite lines that are resistant to RYMV.
Keywords: CRISPR Cas9, genome editing, knock out mutant, recessive resistance, rice yellow mottle virus
Procedia PDF Downloads 120
217 MicroRNA in Bovine Corpus Luteum during Early Pregnancy
Authors: Rreze Gecaj, Corina Schanzenbach, Benedikt Kirchner, Michael Pfaffl, Bajram Berisha
Abstract:
The maintenance of the corpus luteum (CL) during early pregnancy in cattle is a critical and multifarious process. A luteotrophic mechanism originating from the embryo is widely accepted as the triggering signal for CL maintenance. In cattle, it is the interferon-tau (IFNT) secreted from the conceptus that prevents CL regression and ensures progesterone production for the establishment of pregnancy. In addition to endocrine and paracrine signals, microRNA (miRNA) can also support CL sustainability during early pregnancy. MiRNAs are small non-coding nucleic acids that regulate gene expression post-transcriptionally and have been shown to be involved in the modulation of CL function. However, the role of miRNAs in corpus luteum function during early pregnancy remains largely unexplored. This study aims at profiling the expression of miRNA in the CL during early pregnancy in cattle by comparing it with the CL from the late cycle and with the regressed CL. Corpora lutea were assigned to two groups during the cycle (group C13, late CL: days 13-18; group C18, regressed CL: day >18) and one group during early pregnancy (group P: months 1-2). The estrous cycle was determined by macroscopic examination, and the fetus was aged by crown-rump length measurement. A total of 9 corpora lutea from individual animals were included in the study, three corpora lutea for each group. The miRNA population was profiled using small RNA next-generation sequencing, and biologically significant miRNAs were evaluated for differential expression using the DESeq2 methodology. We show that 6 differentially expressed miRNAs (bta-mir-2890, -2332, -2441-3p, -148b, -1248 and -29c) are common to both comparisons, P vs C13 and P vs C18. For each stage individually, we identified unique miRNAs differentially expressed only in the given comparison: bta-miR-23a and -769 were the unique miRNAs differentially expressed in P vs C13, whereas forty-four unique miRNAs were identified as differentially expressed in P vs C18. These data confirm that miRNAs are highly abundant in luteal tissue during early pregnancy and potentially regulate CL maintenance at this stage of fetal development.
Keywords: bovine, corpus luteum, microRNA, pregnancy, RNA-Seq
Procedia PDF Downloads 260
216 Mitigating Self-Regulation Issues in the Online Instruction of Math
Authors: Robert Vanderburg, Michael Cowling, Nicholas Gibson
Abstract:
Mathematics is one of the core subjects taught in the Australian K-12 education system and is considered an important component for future studies in areas such as engineering and technology. In addition to this, Australia has been a world leader in distance education due to the vastness of its geographic landscape. Despite this, research is still needed on distance math instruction. Even though delivery of curriculum has given way to online studies, and there is a resultant push for computer-based (PC, tablet, smartphone) math instruction, much instruction still involves practice problems similar to those in the original curriculum packs, without the ability for students to self-regulate their learning using the full interactive capabilities of these devices. Given this need, this paper addresses issues students have during online instruction. This study involved 32 students struggling with mathematics who were enrolled in a math tutorial conducted in an online setting. The study used a case study design to understand some of the obstacles hindering the students' success. Data were collected by tracking students' practice and quizzes, tracking engagement with the site, recording one-on-one tutorials, and conducting interviews with the students. Results revealed that when students face cognitively straining tasks in an online instructional setting, the first thing to dissipate is their ability to self-regulate. The results also revealed that instructors could ameliorate the situation, and they provided useful data on strategies that could be used for designing future online tasks. Specifically, instructors could utilize cognitive dissonance strategies to reduce the cognitive drain of the tasks online. They could segment the instruction process to reduce the cognitive demands of the tasks and provide in-depth self-regulatory training, freeing mental capacity for the mathematics content. Finally, instructors could provide specific scheduling and assignment structure changes to reduce the amount of student-centered self-regulatory tasks in the class. These findings will be discussed in more detail and summarized in a framework that can be used for future work.
Keywords: digital education, distance education, mathematics education, self-regulation
Procedia PDF Downloads 136
215 The Effect of Elapsed Time on the Cardiac Troponin-T Degradation and Its Utility as a Time Since Death Marker in Cases of Death Due to Burn
Authors: Sachil Kumar, Anoop K.Verma, Uma Shankar Singh
Abstract:
Establishing the postmortem interval (PMI) in different causes of death is extremely important, since it greatly assists in forming an opinion on the exact cause of death following an incident. With reliable knowledge of the interval, an expert can state that the cause of death is not feigned; hence there is a great need to evaluate such deaths at the crime scene before performing an autopsy on the body. The approach described here is based on analyzing the degradation or proteolysis of a cardiac protein in cases of death due to burn as a marker of time since death. Cardiac tissue samples were collected from (n=6) medico-legal autopsies (Department of Forensic Medicine and Toxicology, King George's Medical University, Lucknow, India), after informed consent from the relatives, and postmortem degradation was studied by incubation of the cardiac tissue at room temperature (20±2 °C) for different time periods (~7.30, 18.20, 30.30, 41.20, 41.40, 54.30, 65.20, and 88.40 hours). The cases included were subjects of burn without any prior history of disease who died in the hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE) and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using Gel Doc. As time postmortem progresses, the intact cTnT band degrades to fragments that are easily detected by the monoclonal antibodies. A decreasing trend in the level of cTnT (% of intact) was found as the PM hours increased. A significant difference was observed between <15 h and other PM hours (p<0.01). A significant difference in cTnT level (% of intact) was also observed between 16-25 h and 56-65 h and >75 h (p<0.01). Western blot data clearly showed the intact protein at 42 kDa, three major fragments (28 kDa, 30 kDa, 10 kDa), three additional minor fragments (12 kDa, 14 kDa, and 15 kDa), and the formation of low molecular weight fragments. Overall, both PMI and the cardiac tissue of the burned corpse had a statistically significant effect, with the greatest amount of protein breakdown observed within the first 41.40 hours, after which the intact protein slowly disappears. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, then the percent intact cTnT shows a pseudo-first-order relationship when plotted against the time postmortem. A strong, significant positive correlation was found between cTnT and PM hours (r=0.87, p=0.0001). The regression analysis showed good explained variability (R²=0.768). The postmortem troponin-T fragmentation observed in this study reveals a sequential, time-dependent process with the potential for use as a predictor of PMI in cases of burning.
Keywords: burn, degradation, postmortem interval, troponin-T
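Since the abstract reports that percent intact cTnT follows a pseudo-first-order relationship with time postmortem, a minimal sketch of how such a model could be fitted and inverted to estimate PMI is given below. The numerical values are hypothetical placeholders, not the study's data, and the fitting choice (ordinary least squares on log-transformed values) is an assumption.

```python
import numpy as np

# Hypothetical example values (not the study's data): postmortem hours and
# the corresponding percent of intact cTnT measured from Western blot lanes.
pm_hours = np.array([7.3, 18.2, 30.3, 41.4, 54.3, 65.2, 88.4])
pct_intact = np.array([92.0, 80.0, 66.0, 55.0, 47.0, 40.0, 28.0])

# A pseudo-first-order model assumes pct(t) = pct(0) * exp(-k * t), i.e.
# ln(pct) is linear in t. Fit ln(pct) = a - k * t by ordinary least squares.
slope, intercept = np.polyfit(pm_hours, np.log(pct_intact), 1)
k = -slope  # degradation rate constant (per hour)

def estimate_pmi(percent_intact: float) -> float:
    """Invert the fitted model: estimate PMI (hours) from percent intact cTnT."""
    return (intercept - np.log(percent_intact)) / k

print(f"rate constant k = {k:.4f} per hour")
print(f"estimated PMI at 50% intact cTnT = {estimate_pmi(50.0):.1f} h")
```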
Procedia PDF Downloads 451
214 Internet Health: A Cross-Sectional Survey Exploring Identified Risks and Online Safety Measures in Parents and Children with Neurodevelopmental Disorders
Authors: Abdirahim Mohamed, Sarita Rana Chhetri, Michael Sleath, Nadia Saleem
Abstract:
Rationale: Internet usage has been very much integrated into our daily lives. Internet usage within the neurodevelopmental disorder population is also on the increase. Nevertheless, there is very little empirical research on how this population protects itself online, or on how their parents can keep them safe online. This topic was an ever-growing concern to the parents within our services and in many cases would add to parents' stress and mental health burden. This ignited an idea within our team to conduct research to explore the perceived online risks within this population and how they keep themselves safe. In conjunction, we also explored how parents and caregivers monitor and safeguard their young people from potential threats online. Our hypothesis was that the perceived risks would heavily outnumber the safeguarding measures implemented by this population. Method: Within the Coventry and Warwickshire NHS Partnership Trust Child and Adolescent Mental Health Service (CAMHS), we distributed qualitative questionnaires to all the clinical bases (N=80). Questions explored topics such as daily internet usage, safeguarding measures, and perceived threats. The researchers requested that all CAMHS clinicians identify participants. Participants in this study were accessing CAMHS for neurodevelopmental-specific interventions. Results: The data were analysed using both Excel and SPSS. Within SPSS, a MANOVA was conducted and found a significant difference between safeguarding measures and perceived online risks within responses (p ≤ 0.5). This supports our hypothesis that participants in this population are well versed in the safeguarding issues of the internet; however, they struggle to implement appropriate preventative measures. Data were also screened using Excel, which showed that all parents and carers stated they 'monitored their child's internet use'. Conclusion: The data suggest that parents/carers may require more specific intervention to equip them with preventative measures, due to the clear discrepancy between perceived risks and safeguarding measures. More research may also need to be conducted in this area to determine an appropriate methodology to explore this topic further.
Keywords: internet, health, how safe are we, internet health check
Procedia PDF Downloads 270
213 Using Serious Games to Integrate the Potential of Mass Customization into the Fuzzy Front-End of New Product Development
Authors: Michael N. O'Sullivan, Con Sheahan
Abstract:
Mass customization is the idea of offering custom products or services to satisfy the needs of each individual customer while maintaining the efficiency of mass production. Technologies like 3D printing and artificial intelligence have many start-ups hoping to capitalize on this dream of creating personalized products at an affordable price, and well-established companies scrambling to innovate and maintain their market share. However, the majority of them are failing as they struggle to understand one key question – where does customization make sense? Customization and personalization only make sense where the value of the perceived benefit outweighs the cost to implement it. In other words, will people pay for it? Looking at the Kano Model makes it clear that it depends on the product. In products where customization is an inherent need, like prosthetics, mass customization technologies can be highly beneficial. However, for products that already sell as a standard, like headphones, offering customization is likely only an added bonus, and so the product development team must figure out if the customers' perception of the added value of this feature will outweigh its premium price tag. This can be done through the use of a 'serious game', whereby potential customers are given a limited budget to collaboratively buy and bid on potential features of the product before it is developed. If the group choose to buy customization over other features, then the product development team should implement it into their design. If not, the team should prioritize the features on which the customers have spent their budget. The level of customization purchased can also be translated to an appropriate production method; for example, the most expensive type of customization would likely be free-form design and could be achieved through digital fabrication, while a lower level could be achieved through short batch production. Twenty-five teams of final year students from design, engineering, construction and technology tested this methodology when bringing a product from concept through to production specification, and found that it allowed them to confidently decide what level of customization, if any, would be worth offering for their product, and what would be the best method of producing it. They also found that the discussion and negotiations between players during the game led to invaluable insights, and they often decided to play a second game where they offered customers the option to buy the various customization ideas that had been discussed during the first game.
Keywords: Kano model, mass customization, new product development, serious game
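The abstract describes the budget-bidding mechanic but gives no implementation details, so the following Python sketch is only an illustrative tally of how feature spend might be aggregated and mapped to a production route; the feature names, bid amounts, and thresholds are hypothetical assumptions, not values from the study.

```python
from collections import defaultdict

# Hypothetical bids from one game session: (player, feature, amount spent).
bids = [
    ("team_a", "free-form customization", 40),
    ("team_a", "battery life", 30),
    ("team_b", "free-form customization", 25),
    ("team_b", "colour options", 20),
    ("team_c", "battery life", 50),
]

def rank_features(bids):
    """Aggregate spend per feature and rank features by total budget spent."""
    totals = defaultdict(int)
    for _, feature, amount in bids:
        totals[feature] += amount
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

def suggest_production(customization_spend, total_spend):
    """Map the share of budget spent on customization to an assumed
    production route (thresholds are illustrative, not from the paper)."""
    share = customization_spend / total_spend
    if share > 0.5:
        return "digital fabrication (free-form design)"
    if share > 0.2:
        return "short batch production"
    return "standard mass production"

ranking = rank_features(bids)
total = sum(amount for _, _, amount in bids)
customization = dict(ranking).get("free-form customization", 0)
print(ranking)
print(suggest_production(customization, total))
```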
Procedia PDF Downloads 136
212 Mathematical Competence as It Is Defined through Learners' Errors in Arithmetic and Algebra
Authors: Michael Lousis
Abstract:
Mathematical competence is the great aim of every mathematical teaching and learning endeavour. It can be defined as an idealised conceptualisation of the quality of cognition and the ability to implement in practice the mathematical subject matter included in the curriculum, and it is displayed only through the performance of doing mathematics. The present study gives a clear definition of mathematical competence in the domains of Arithmetic and Algebra that stems from the explanation of learners' errors in these domains. The learners whose errors are explained were Greek and English participants of a large, international, longitudinal, comparative research programme entitled the Kassel Project. The participants' errors emerged from their work on the mathematical questions and problems of the tests presented to them. The tests were constructed so that only the outcomes of the participants' work were captured, not the course of thinking that produced these outcomes. The intention was that the tests had to provide directly comparable results and simultaneously avoid any probable bias. Such bias could stem from involving so many markers from different countries and cultures, with so many different belief systems concerning the assessment of learners' course of thinking. In this way the validity of the research was protected. This fact required the implementation of specific research methods and theoretical perspectives in order for the participants' erroneous ways of thinking to be disclosed. These were Methodological Pragmatism, Symbolic Interactionism, Philosophy of Mind and the ideas of Computationalism, which were used for deciding and establishing the grounds of the adequacy and legitimacy of the kinds of knowledge obtained through the explanations given by the error analysis. The employment of this methodology and of these theoretical perspectives resulted in the definition of learners' mathematical competence, which is the thesis of the present study. Thus, learners' mathematical competence depends upon three key elements that should be developed in their minds: appropriate representations, appropriate meaning, and appropriately developed schemata. This definition then determined the development of appropriate teaching practices and interventions conducive to the achievement and, finally, the attainment of mathematical competence.
Keywords: representations, meaning, appropriate developed schemata, computationalism, error analysis, explanations for the probable causes of the errors, Kassel Project, mathematical competence
Procedia PDF Downloads 270
211 Four-Electron Auger Process for Hollow Ions
Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola
Abstract:
A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrödinger equation in imaginary time using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores on a massively parallel computer, which is essential due to the large memory requirements needed to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagation of the time-dependent close-coupled equations in real time using integration over bound and continuum single-particle states. These states are generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rates for each L excited state are found to be slightly above the single autoionization rate for the excited configuration obtained using configuration-average distorted-wave theory. As expected, we find the double and triple autoionization rates to be much smaller than the total autoionization rates. Future work can be extended to study electron-impact triple ionization of atoms or ions. The work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.
Keywords: hollow atoms, autoionization, auger rates, time-dependent close-coupling method
Procedia PDF Downloads 154
210 Improving the Weekend Handover in General Surgery: A Quality Improvement Project
Authors: Michael Ward, Eliana Kalakouti, Andrew Alabi
Abstract:
Aim: The handover process is recognized as a vulnerable step in the patient care pathway where errors are likely to occur. As such, it is a major preventable cause of patient harm due to the human factors of poor communication and systematic error. The aim of this study was to audit the general surgery department's weekend handover process against the recommended criteria for safe handover as set out by the Royal College of Surgeons (RCS). Method: A retrospective audit of the General Surgery department's Friday patient lists and patient medical notes used for weekend handover was conducted in a London-based District General Hospital (DGH). Medical notes were analyzed against the RCS's suggested criteria for handover. A standardized paper weekend handover proforma was then developed in accordance with the guidelines and circulated in the department. A post-intervention audit was then conducted using the same methods for cycle 1. For cycle 2, we introduced an electronic weekend handover tool along with Electronic Patient Records (EPR). After a one-month period, a second post-intervention audit was conducted. Results: Following cycle 1, the paper weekend handover proforma was only used in 23% of patient notes. However, when it was used, 100% of these had a plan for the weekend, diagnosis and location, but only 40% documented potential discharge status and 40% ceiling-of-care status. Qualitative feedback was that it was time-consuming to fill out. Better results were achieved following cycle 2, with 100% of patient notes having the electronic proforma. Results improved, with every patient having documented ceiling of care, discharge status and location. Only 55% of patients had a past surgical history; however, this was still an increase when compared to the paper proforma (45%). When comparing the electronic versus the paper proforma, there was an increase in documentation in every domain of the handover outlined by the RCS, with an average relative increase of 1.72 times (p<0.05). Qualitative feedback was that the autofill function made it easy to use and simple to view. Conclusion: These results demonstrate that the implementation of an electronic autofill handover proforma significantly improved handover compliance with RCS guidelines, thereby improving the transmission of information from weekday to weekend teams.
Keywords: surgery, handover, proforma, electronic handover, weekend, general surgery
Procedia PDF Downloads 159
209 Environmental Conditions Simulation Device for Evaluating Fungal Growth on Wooden Surfaces
Authors: Riccardo Cacciotti, Jiri Frankl, Benjamin Wolf, Michael Machacek
Abstract:
Moisture fluctuations govern the occurrence of fungi-related problems in buildings, which may pose significant health risks to users and even lead to structural failures. Several numerical engineering models attempt to capture the complexity of mold growth on building materials. From real-life observations, in cases with suppressed daily variations of boundary conditions, e.g. in crawlspaces, mold growth model predictions correspond well with the observed mold growth. On the other hand, in cases with substantial diurnal variations of boundary conditions, e.g. in the ventilated cavity of a cold flat roof, mold growth predicted by the models is significantly overestimated. This study, funded by the Grant Agency of the Czech Republic (GAČR 20-12941S), aims at gaining a better understanding of mold growth behavior on solid wood under varying boundary conditions. In particular, the experimental investigation focuses on the response of mold to changing conditions in the boundary layer and its influence on heat and moisture transfer across the surface. The main results include the design and construction, at the facilities of ITAM (Prague, Czech Republic), of an innovative device allowing for the simulation of changing environmental conditions in buildings. It consists of a square-section closed circuit with overall dimensions of roughly 200 × 180 cm and a cross section of roughly 30 × 30 cm. The circuit is thermally insulated and equipped with an electric fan to control airflow inside the tunnel and a heat and humidity exchange unit to control the internal RH and variations in temperature. Several measuring points, including an anemometer, a temperature and humidity sensor, and a load cell in the test section for recording mass changes, are provided to monitor the variations of these parameters during the experiments. The research is ongoing, and the final results of the experimental investigation are expected at the end of 2022.
Keywords: moisture, mold growth, testing, wood
Procedia PDF Downloads 133
208 Navigating Life Transitions for Young People with Vision Impairment: A Community-Based Participatory Research Approach to Accessibility and Diversity
Authors: Aikaterini Tavoulari, Michael Proulx, Karin Petrini
Abstract:
Objective: This study aims to explore the unique challenges faced by young individuals with vision impairment (VI) during key life transitions, utilizing a community-based participatory research (CBPR) approach to identify limitations and positive aspects of existing support systems, with a focus on accessibility and diversity. Design: The study employs a qualitative CBPR design, engaging young participants with VI through online and in-person working groups over six months, prioritizing their active involvement and diverse perspectives. Methods: Twenty-one young individuals with VI from across the UK, with different VI conditions, were recruited to participate in the study via a climbing and virtual reality event and stakeholder support. Data collection methods included open discussions, forum exchanges, and qualitative questionnaires. The data were analyzed with NVivo using inductive thematic analysis to identify key themes and patterns related to the challenges and experiences of life transitions for this diverse population. Results: The analysis revealed barriers to accessibility, such as assumptions about what a person with VI can do, inaccessible materials, noisy environments, and insufficient training with assistive technologies. Enablers included guidance from diverse professionals and peers, multisensory approaches (beyond tactile), and peer collaborations. This study underscores the need to develop accessible and tailored strategies together with these young people to address the specific needs of this diverse population during critical life transitions (e.g., to independent living, employment and higher education). Conclusion: Engaging and co-designing effective approaches and tools with young people with VI is key to tackling the specific accessibility barriers they encounter. These approaches should be targeted at different transitional periods of their life journey, promoting diversity and inclusion.
Keywords: vision impairment, life transitions, qualitative research, community-based participatory design, accessibility
Procedia PDF Downloads 51
207 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method
Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David
Abstract:
Deflection of the vertical is a quantity used in reducing geodetic measurements related to geoidal networks to the ellipsoidal plane, and it is essential in geoid modeling processes. Computing the deflection of the vertical components of a point in a given area is necessary for evaluating the standard errors along the north-south and east-west directions. Using a combined approach for the determination of the deflection of the vertical components provides improved results, but it is labor intensive without an appropriate method. The least squares method makes use of redundant observations in modeling a given set of problems that obey certain geometric conditions. This research work is aimed at computing the deflection of the vertical components for Owerri West local government area of Imo State using a geometric method as the field technique. In this method, a combination of Global Positioning System observations in static mode and precise leveling was utilized: the geodetic coordinates of points established within the study area were determined by GPS observation, and the orthometric heights were obtained through precise leveling. By least squares adjustment using a MATLAB program, the estimated deflection of the vertical components for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed vectors of the network were computed. The computed standard errors of the north-south and east-west components were 5.5911e-005 and 1.4965e-004 arc seconds, respectively. Therefore, including the derived deflection of the vertical components in the ellipsoidal model will yield higher observational accuracy, since an ellipsoidal model alone is not tenable, owing to its large observational error, for the determination of high-quality work. It is therefore important to include the determined deflection of the vertical components for Owerri West Local Government in Imo State, Nigeria.
Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height
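The abstract does not state the functional model used in the adjustment, so the Python sketch below assumes one common GPS/levelling formulation, in which the difference between ellipsoidal and orthometric height differences along a baseline is related to the deflection components ξ (north-south) and η (east-west). All observation values are hypothetical, and the model itself is an assumption for illustration only.

```python
import numpy as np

# Hypothetical observations (not the study's data). For each baseline i-j:
#   dN = (ellipsoidal height difference from GPS) minus (orthometric height
#        difference from precise levelling), in metres
#   az = azimuth of the baseline, in degrees
#   s  = baseline length, in metres
# Assumed model for short lines: dN_ij = -(xi*cos(az) + eta*sin(az)) * s_ij,
# with xi and eta (the N-S and E-W deflection components) in radians.
dN = np.array([-0.012, 0.004, -0.007, 0.009, -0.003])
az = np.radians([30.0, 110.0, 200.0, 285.0, 350.0])
s = np.array([1200.0, 950.0, 1500.0, 800.0, 1100.0])

# Design matrix of the observation equations  dN = A @ [xi, eta]
A = np.column_stack((-s * np.cos(az), -s * np.sin(az)))

# Least squares estimate and a-posteriori standard errors
x, _, _, _ = np.linalg.lstsq(A, dN, rcond=None)
v = A @ x - dN                              # residuals
sigma0_sq = (v @ v) / (len(dN) - 2)         # reference variance
cov = sigma0_sq * np.linalg.inv(A.T @ A)    # covariance of [xi, eta]
xi, eta = x
se_xi, se_eta = np.sqrt(np.diag(cov))

to_arcsec = 206264.806  # radians -> arc seconds
print(f"xi  (N-S) = {xi * to_arcsec:+.4f} arcsec +/- {se_xi * to_arcsec:.2e}")
print(f"eta (E-W) = {eta * to_arcsec:+.4f} arcsec +/- {se_eta * to_arcsec:.2e}")
```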
Procedia PDF Downloads 213
206 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows
Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican
Abstract:
This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory's efficiency in the processing, testing, and analysis of specimens. Often pathologists have difficulty in pinpointing and anticipating issues in the clinical workflow until tests are running late or in error. It can be difficult to pinpoint the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels, or to activate a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error. Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. 'Bots' would be used to control the flow of specimens through each step of the process. Like existing software agent technology, these bots would be configurable in order to simulate different situations which may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
Keywords: laboratory-process, optimization, pathology, computer simulation, workflow
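The authors plan a JavaScript single-page application; as a language-neutral sketch of the traffic-light rule described above (written here in Python for consistency with the other examples in this listing), the colour coding could look like the following. The load metric, stage names, and thresholds are assumptions, since the abstract specifies only the green/orange/red meaning.

```python
from dataclasses import dataclass

@dataclass
class StageStatus:
    name: str       # e.g. "ordered", "received", "testing", "validation"
    pending: int    # specimens currently waiting at this stage
    capacity: int   # assumed nominal throughput for the stage

def traffic_light(stage: StageStatus,
                  slow_ratio: float = 0.7,
                  critical_ratio: float = 1.0) -> str:
    """Return the traffic-light colour for a workflow stage.

    The abstract specifies green/orange/red for normal/slow/critical flow but
    not the underlying metric; the load ratio and thresholds here are
    illustrative assumptions only.
    """
    load = stage.pending / stage.capacity
    if load >= critical_ratio:
        return "red"      # critical flow: stage at or beyond capacity
    if load >= slow_ratio:
        return "orange"   # slow flow: stage approaching capacity
    return "green"        # normal flow

# Example: status snapshot for a validation stage
print(traffic_light(StageStatus(name="validation", pending=18, capacity=20)))
```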
Procedia PDF Downloads 286
205 Teaching Business Process Management using IBM's INNOV8 BPM Simulation Game
Authors: Hossam Ali-Hassan, Michael Bliemel
Abstract:
This poster reflects upon our experiences using INNOV8, IBM’s Business Process Management (BPM) simulation game, in online MBA and undergraduate MIS classes over a period of two years. The game is designed to give both business and information technology players a better understanding of how effective BPM impacts an entire business ecosystem. The game includes three different scenarios: Smarter Traffic, which is used to evaluate existing traffic patterns and re-route traffic based on incoming metrics; Smarter Customer Service, where players develop more efficient ways to respond to customers in a call centre environment; and Smarter Supply Chains, where players balance supply and demand and reduce environmental impact in a traditional supply chain model. We use the game as an experiential learning tool, where students act as managers making real-time changes to business processes to meet changing business demands and environments. The students learn how information technology (IT) and information systems (IS) can be used to intelligently solve different problems, and how computer simulations can be used to test different scenarios or models based on business decisions without having to actually make the potentially costly and/or disruptive changes to business processes. Moreover, when students play the three different scenarios, they quickly see how practical process improvements can help meet profitability, customer satisfaction and environmental goals while addressing real problems faced by municipalities and businesses today. After spending approximately two hours in the game, students reflect on their experience and apply several BPM principles presented in their textbook through a structured set of assignment questions. For each final scenario, students submit a screenshot of their solution followed by one paragraph explaining what criteria they were trying to optimize and why they picked their input variables. In this poster, we outline the course and module learning objectives to place the use of the game in context. We illustrate key features of the INNOV8 simulation game and describe how we used them to reinforce theoretical concepts. The poster also illustrates examples from the simulation, the assignment, and the learning outcomes.
Keywords: experiential learning, business process management, BPM, INNOV8, simulation, game
Procedia PDF Downloads 329
204 An Overview of Posterior Fossa Associated Pathologies and Segmentation
Authors: Samuel J. Ahmad, Michael Zhu, Andrew J. Kobets
Abstract:
Segmentation tools continue to advance, evolving from manual methods to automated contouring technologies utilizing convolutional neural networks. These techniques have been used to evaluate ventricular and hemorrhagic volumes in the past but may be applied in novel ways to assess posterior fossa-associated pathologies such as Chiari malformations. Herein, we summarize the literature pertaining to segmentation in the context of this and other posterior fossa-based diseases such as trigeminal neuralgia, hemifacial spasm, and posterior fossa syndrome. A literature search for volumetric analysis of the posterior fossa identified 27 papers in which semi-automated segmentation, automated segmentation, manual segmentation, linear measurement-based formulas, and the Cavalieri estimator were utilized. These studies produced superior data to older methods that relied on formulas for rough volumetric estimation. The most commonly used technique was semi-automated segmentation (12 studies), followed by manual segmentation (7 studies). Automated segmentation techniques (4 studies) and the Cavalieri estimator (3 studies), a point-counting method that uses a grid of points to estimate the volume of a region, were the next most commonly used; the least commonly utilized technique was linear measurement-based formulas (1 study). Semi-automated segmentation produced accurate, reproducible results. However, no single semi-automated software package, open source or otherwise, has been widely applied to the posterior fossa. Fully automated segmentation using open-source software such as FSL and FreeSurfer produced highly accurate posterior fossa segmentations. Various forms of segmentation have been used to assess posterior fossa pathologies, and each has its advantages and disadvantages. According to our results, semi-automated segmentation is the predominant method; however, atlas-based automated segmentation is an extremely promising method that produces accurate results. Future evolution of segmentation technologies will undoubtedly yield superior results, which may be applied to posterior fossa-related pathologies, and medical professionals will save time and effort analyzing large sets of data as a result of these advances.
Keywords: chiari, posterior fossa, segmentation, volumetric
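As an illustration of the point-counting approach mentioned above, the short Python sketch below computes a Cavalieri volume estimate from the number of grid points hitting the region of interest on a stack of evenly spaced slices. The slice spacing, grid size and point counts are invented for the example and are not data from the reviewed studies.

```python
# Minimal sketch of a Cavalieri volume estimate: the volume is
# approximated as slice spacing x area represented by each grid point x
# total number of points hitting the region across all slices.
# The point counts below are invented purely for illustration.

def cavalieri_volume(points_per_slice, slice_spacing_mm, area_per_point_mm2):
    """Estimate volume (mm^3) from point counts on evenly spaced slices."""
    total_points = sum(points_per_slice)
    return slice_spacing_mm * area_per_point_mm2 * total_points

if __name__ == "__main__":
    # e.g. 8 MRI slices, 3 mm apart, grid points spaced 2 mm x 2 mm
    hits = [12, 25, 31, 40, 38, 29, 18, 7]
    volume = cavalieri_volume(hits, slice_spacing_mm=3.0, area_per_point_mm2=4.0)
    print(f"Estimated posterior fossa volume: {volume:.0f} mm^3")
```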
Procedia PDF Downloads 107
203 Multiscale Process Modeling of Ceramic Matrix Composites
Authors: Marianna Maiaru, Gregory M. Odegard, Josh Kemppainen, Ivan Gallegos, Michael Olaya
Abstract:
Ceramic matrix composites (CMCs) are typically used in applications that require long-term mechanical integrity at elevated temperatures. CMCs are usually fabricated using a polymer precursor that is initially polymerized in situ with fiber reinforcement, followed by a series of pyrolysis cycles that transform the polymer matrix into a rigid glass or ceramic. The pyrolysis step typically generates volatile gases, which creates porosity within the polymer matrix phase of the composite. Subsequent cycles of monomer infusion, polymerization, and pyrolysis are often used to reduce the porosity and thus increase the durability of the composite. Because of the significant expense of such iterative processing cycles, new generations of CMCs with improved durability and manufacturability are difficult and expensive to develop using standard Edisonian (trial-and-error) approaches. The goal of this research is to develop a computational process-modeling-based approach that can be used to design the next generation of CMC materials with optimized material and processing parameters for maximum strength and efficient manufacturing. The process modeling incorporates computational modeling tools, including molecular dynamics (MD), to simulate the material at multiple length scales. Results from MD simulation are used to inform the continuum-level models, linking molecular-level characteristics (material structure, temperature) to bulk-level performance (strength, residual stresses). Processing parameters are optimized such that process-induced residual stresses are minimized and laminate strength is maximized. The multiscale process modeling method developed in this research can play a key role in the development of future CMCs for high-temperature and high-strength applications. By combining multiscale computational tools and process modeling, new manufacturing parameters can be established for optimal fabrication and performance of CMCs across a wide range of applications.
Keywords: digital engineering, finite elements, manufacturing, molecular dynamics
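The sketch below illustrates, in highly simplified form, the kind of hand-off the multiscale approach relies on: properties of the sort that MD simulations would supply (matrix modulus, coefficients of thermal expansion) feed a one-dimensional continuum estimate of the matrix residual stress after cool-down from the processing temperature. Both the closed-form expression and all inputs are illustrative assumptions, not the authors' process model.

```python
# Highly simplified sketch of an MD-to-continuum hand-off: MD-informed
# matrix properties feed a 1-D estimate of the thermal residual stress
# in the matrix after cool-down. All numbers and the expression itself
# are illustrative assumptions.

def matrix_residual_stress(E_matrix_GPa, cte_matrix, cte_fiber,
                           T_process_C, T_room_C):
    """Estimate matrix residual stress (MPa) from the CTE mismatch
    between matrix and fiber over the cool-down temperature range."""
    delta_T = T_process_C - T_room_C          # K
    delta_alpha = cte_matrix - cte_fiber      # 1/K
    return E_matrix_GPa * 1e3 * delta_alpha * delta_T  # GPa -> MPa

if __name__ == "__main__":
    # Hypothetical MD-informed inputs for a pyrolysed matrix and its fiber
    stress = matrix_residual_stress(E_matrix_GPa=80.0,
                                    cte_matrix=3.5e-6, cte_fiber=4.5e-6,
                                    T_process_C=1000.0, T_room_C=25.0)
    print(f"Estimated matrix residual stress: {stress:.1f} MPa")
```

In an actual process-modeling workflow, estimates of this kind would be computed inside a finite element model and swept over candidate processing parameters to find the combination that minimizes residual stress while maximizing laminate strength.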
Procedia PDF Downloads 99
202 Study of the Transport of Colloidal ²²⁶Ra in a Mining Context Using a Multi-Disciplinary Approach
Authors: Marine Reymond, Michael Descostes, Marie Muguet, Clemence Besancon, Martine Leermakers, Catherine Beaucaire, Sophie Billon, Patricia Patrier
Abstract:
²²⁶Ra is one of the radionuclides resulting from the disintegration of ²³⁸U. Owing to its half-life (1600 y) and its high specific activity (3.7 × 10¹⁰ Bq/g), ²²⁶Ra is found at the ultra-trace level in the natural environment (usually below 1 Bq/L, i.e. 10⁻¹³ mol/L). Because of its decay into ²²²Rn, a radioactive gas with a shorter half-life (3.8 days) that is difficult to control and dangerous to humans when inhaled, ²²⁶Ra is subject to dedicated monitoring in surface waters, especially in the context of uranium mining. In natural waters, radionuclides occur in dissolved, colloidal or particulate forms. Given the size of colloids, generally ranging between 1 nm and 1 µm, and their high specific surface areas, the colloidal fraction can be involved in the transport of trace elements, including radionuclides, in the environment. The colloidal fraction is not always easy to determine, and few existing studies focus on ²²⁶Ra. In the present study, a complete multidisciplinary approach is proposed to assess the colloidal transport of ²²⁶Ra. It includes water sampling by conventional filtration (0.2 µm) and by the innovative Diffusive Gradients in Thin Films (DGT) technique, which measures the dissolved fraction (<10 nm); the colloidal fraction can be estimated from the difference. Suspended matter in these waters was also sampled and characterized mineralogically by X-ray diffraction, infrared spectroscopy and scanning electron microscopy. All of these data, acquired at a rehabilitated former uranium mine, were used to build a geochemical model with the geochemical calculation code PhreeqC to describe, as accurately as possible, the colloidal transport of ²²⁶Ra. Colloidal transport of ²²⁶Ra was found, for some of the sampling points, to account for up to 95% of the total ²²⁶Ra measured in water. Mineralogical characterization and the associated geochemical modelling highlight the role of barite, a barium sulfate mineral well known to trap ²²⁶Ra in its structure. Barite was shown to be responsible for the colloidal ²²⁶Ra fraction despite the presence of kaolinite and ferrihydrite, which are also known to retain ²²⁶Ra by sorption.
Keywords: colloids, mining context, radium, transport
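The colloidal fraction estimate described above can be expressed very compactly: the 0.2 µm-filtered activity is treated as dissolved plus colloidal, the DGT-labile activity as truly dissolved, and the colloidal share is the difference. The Python sketch below shows this arithmetic with invented activities, not the study's measurements.

```python
# Minimal sketch of the colloidal-fraction estimate: activity passing a
# 0.2 um filter is taken as dissolved + colloidal, the DGT-labile
# activity (<10 nm) as truly dissolved, and the colloidal fraction is
# the difference. The activities below are invented for illustration.

def colloidal_fraction(activity_filtered_mBq_L, activity_dgt_mBq_L):
    """Return the colloidal share (%) of the 0.2 um-filtered 226Ra."""
    colloidal = activity_filtered_mBq_L - activity_dgt_mBq_L
    return 100.0 * colloidal / activity_filtered_mBq_L

if __name__ == "__main__":
    # Hypothetical sampling point downstream of a rehabilitated site
    print(f"Colloidal 226Ra: {colloidal_fraction(400.0, 20.0):.0f}%")  # -> 95%
```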
Procedia PDF Downloads 157
201 Effect of Scattered Vachellia tortilis (Umbrella Thorn) and Vachellia nilotica (Gum Arabic) Trees on Selected Physico-Chemical Properties of the Soil and Yield of Sorghum (Sorghum bicolor (L.) Moench) in Ethiopia
Authors: Sisay Negash, Zebene Asfaw, Kibreselassie Daniel, Michael Zech
Abstract:
A significant portion of the Ethiopian landscape features scattered trees that are deliberately managed in crop fields to enhance soil fertility and crop yield; the compatibility of crops with these trees varies depending on location, tree species, and annual crop type. This study aimed to examine the effects of scattered Vachellia tortilis and Vachellia nilotica trees on selected physico-chemical properties of the soil, as well as the yield and yield components of sorghum in Ethiopia. Vachellia tortilis and Vachellia nilotica were selected on the basis of their abundance and occurrence in managed crop fields. A randomized complete block design was used, with distance from the tree canopy (middle, edge, and outside) as the treatment, and five trees of each species served as replications. Sorghum was planted up to 15 meters from the tree trunk in the east, west, south, and north directions to assess growth and yield. Soil samples were collected for each tree species across the three distance factors, three soil depths (0–20 cm, 20–40 cm, and 40–60 cm), and five replications, totaling 45 samples per species, and were analyzed for physical and chemical properties. The results indicated that both V. tortilis and V. nilotica significantly affected soil physico-chemical properties and sorghum yield. Specifically, soil moisture content, EC, total nitrogen, organic carbon, available phosphorus and potassium, CEC, sorghum plant height, panicle length, biomass, and yield decreased with increasing distance from the canopy, whereas bulk density and pH increased. Under the canopy, sorghum yield increased by 66.4% and 53.5% for V. tortilis and V. nilotica, respectively, owing to higher soil moisture and nutrient availability. The study recommends promoting trees in crop fields, management options for new saplings, and further research on root decomposition and nutrient supply.
Keywords: canopy, crop yield, soil nutrient, soil organic matter, yield components
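The canopy-versus-open-field comparison reported above amounts to a relative yield increase. The short Python sketch below shows the calculation with hypothetical mean yields; the values are not the study's data.

```python
# Minimal sketch of the relative yield comparison: mean sorghum yield
# under the canopy versus outside the canopy, expressed as a percentage
# increase. The yields below are invented for illustration.

def percent_increase(under_canopy, outside_canopy):
    """Relative yield increase (%) under the canopy vs. outside."""
    return 100.0 * (under_canopy - outside_canopy) / outside_canopy

if __name__ == "__main__":
    # Hypothetical mean grain yields (t/ha) averaged over replications
    yields = {
        "V. tortilis": {"under": 2.33, "outside": 1.40},
        "V. nilotica": {"under": 2.15, "outside": 1.40},
    }
    for species, y in yields.items():
        print(f"{species}: +{percent_increase(y['under'], y['outside']):.1f}%")
```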
Procedia PDF Downloads 28
200 Integrated Approach to Attenuate Insulin Amyloidosis: Synergistic Effects of Peptide and Cysteine Protease Enzymes
Authors: Shilpa Mukundaraj, Nagaraju Shivaiah
Abstract:
Amyloidogenic conditions, driven by protein aggregation into insoluble fibrils, pose significant challenges in diabetes management, particularly through the amyloidogenic LVEALYL sequence in the insulin B-chain. This study explores a dual therapeutic strategy involving cysteine protease enzymes, such as papain and ficin, and inhibitory peptides to target insulin amyloidosis. Combining in silico, in vitro, and in vivo methodologies, the research aims to inhibit amyloid formation and degrade preformed fibrils. Inhibitory peptides were designed using structure-guided approaches in Rosetta to specifically target the LVEALYL sequence. Concurrently, cysteine protease enzymes, including papain and ficin, were evaluated for their fibril disassembly potential. In vitro experiments utilizing SDS-PAGE and spectroscopic techniques confirmed dose-dependent degradation of amyloid aggregates by these enzymes (50 to 300 µg in vitro and 60 mg/kg in vivo), with significant disaggregation observed at higher concentrations (20 mg). Peptide inhibitors effectively reduced fibril formation, as evidenced by reduced Thioflavin T fluorescence and circular dichroism spectroscopy. Complementary in silico analyses, including molecular docking and dynamic simulations, provided structural insights into enzyme binding interactions with amyloidogenic regions. Key residues involved in substrate recognition and cleavage were identified, with computational findings aligning strongly with experimental data. These insights confirmed the specificity of papain and ficin in targeting insulin fibrils. For translational potential, an in vivo rat model was developed, involving subcutaneous insulin amyloid injections to induce localized amyloid deposits. Over six days of enzyme treatment, a marked reduction in amyloid burden was observed in the histological findings, and a biochemical assay of superoxide dismutase provided insight into the oxidative damage caused by amyloid deposition. Furthermore, inflammatory markers (IL-6, TNF-α) were significantly attenuated in treated groups, emphasizing the dual role of the enzymes in amyloid clearance and inflammation modulation. This integrative study highlights the promise of cysteine protease enzymes and inhibitory peptides as complementary therapeutic strategies for managing insulin amyloidosis. By targeting both the formation and persistence of amyloid fibrils, this dual approach offers a novel and effective avenue for amyloidosis treatment.
Keywords: insulin amyloidosis, peptide inhibitors, cysteine protease enzymes, amyloid degradation
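The dose-dependent disaggregation reported above is typically quantified as the loss of Thioflavin T (ThT) fluorescence relative to an untreated fibril control. The Python sketch below shows that calculation with invented doses and fluorescence readings, not the study's measurements.

```python
# Minimal sketch of a dose-dependent disaggregation readout: percent
# loss of ThT fluorescence relative to an untreated fibril control.
# Doses and fluorescence values are invented for illustration.

def percent_disaggregation(tht_treated, tht_fibril_control, tht_blank=0.0):
    """Percent loss of ThT signal relative to untreated fibrils."""
    signal = tht_treated - tht_blank
    control = tht_fibril_control - tht_blank
    return 100.0 * (1.0 - signal / control)

if __name__ == "__main__":
    control_fluorescence = 12000.0          # untreated insulin fibrils (a.u.)
    doses_ug = [50, 100, 200, 300]          # hypothetical enzyme dose per reaction
    treated = [10100.0, 7800.0, 5200.0, 3100.0]
    for dose, fluo in zip(doses_ug, treated):
        print(f"{dose:>3} ug enzyme: "
              f"{percent_disaggregation(fluo, control_fluorescence):.0f}% disaggregation")
```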
Procedia PDF Downloads 6
199 The Effect of Acute Rejection and Delayed Graft Function on Renal Transplant Fibrosis in Live Donor Renal Transplantation
Authors: Wisam Ismail, Sarah Hosgood, Michael Nicholson
Abstract:
The research hypothesis is that early post-transplant allograft fibrosis will be linked to donor factors and that acute rejection and/or delayed graft function in the recipient will be independent risk factors for the development of fibrosis. The aim is to explore whether acute rejection and/or delayed graft function affect renal transplant fibrosis within the first year after live donor kidney transplantation performed between 1998 and 2009. Methods: The study was designed around five time points for renal transplant biopsies [0 (pre-transplant), 1 month, 3 months, 6 months and 12 months] in 300 live donor renal transplant patients over a 12-year period between March 1997 and August 2009. Paraffin-embedded slides were collected from Leicester General Hospital and Leicester Royal Infirmary and were routinely sectioned at a thickness of 4 micrometres for standardization. Conclusions: Fibrosis at 1 month after the transplant was significantly associated with baseline fibrosis (p<0.001) and hypertension (HTN) in the transplant recipient (p<0.001). Dialysis after the transplant showed a weak association with fibrosis at 1 month (p=0.07). The negative coefficient for HTN (-0.05) suggests a reduction in fibrosis in the absence of HTN. Fibrosis at 1 month was significantly associated with fibrosis at baseline (p=0.01, 95% CI 0.11 to 0.67). Fibrosis at 3, 6 or 12 months was not associated with fibrosis at baseline (p=0.70, 0.65 and 0.50, respectively). The amount of fibrosis at 1 month was significantly associated with graft survival (p=0.01, 95% CI 0.02 to 0.14). Rejection and severity of rejection were not associated with fibrosis at 1 month. The amount of fibrosis at 1 month remained significantly associated with graft survival (p=0.02) after adjusting for baseline fibrosis (p=0.01); both baseline fibrosis and graft survival were significant predictive factors. The amount of fibrosis at 1 month was not significantly associated with rejection (p=0.64) after adjusting for baseline fibrosis (p=0.01), nor with rejection severity (p=0.29) after adjusting for baseline fibrosis (p=0.04). Fibrosis at baseline and HTN in the recipient were found to be predictive factors of fibrosis at 1 month (p=0.02 and p<0.001, respectively). Age of the donor, their relation to the patient, pre-operative creatinine, artery, kidney weight and warm time were not significantly associated with fibrosis at 1 month. In this complex model, baseline fibrosis, HTN in the recipient and cold time were found to be predictive factors of fibrosis at 1 month (p=0.01, <0.001 and 0.03, respectively). The above analysis was repeated at 3, 6 and 12 months; no associations were detected between fibrosis and any of the explanatory variables, with the exception of donor age, which was found to be a predictive factor of fibrosis at 6 months.
Keywords: fibrosis, transplant, renal, rejection
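The "adjusted for baseline fibrosis" analyses described above can be illustrated as a regression of 1-month fibrosis on rejection with baseline fibrosis included as a covariate. The Python sketch below, using statsmodels and a few invented placeholder rows, shows the form of such a model; it is not the study's actual analysis or data.

```python
# Minimal sketch of an adjusted association: regress fibrosis at 1 month
# on acute rejection while adjusting for baseline fibrosis. The rows of
# data are invented placeholders, not the study's biopsy measurements.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "fibrosis_1m":       [5, 12, 8, 20, 6, 15, 10, 18, 7, 14],   # % cortical fibrosis
    "fibrosis_baseline": [2,  8, 5, 15, 3, 10,  6, 12, 4,  9],
    "rejection":         [0,  1, 0,  1, 0,  1,  0,  1, 0,  1],   # acute rejection (0/1)
})

model = smf.ols("fibrosis_1m ~ rejection + fibrosis_baseline", data=df).fit()
print(model.params)    # effect of rejection adjusted for baseline fibrosis
print(model.pvalues)
```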
Procedia PDF Downloads 231
198 Impact of pH Control on Peptide Profile and Antigenicity of Whey Hydrolysates
Authors: Natalia Caldeira De Carvalho, Tassia Batista Pessato, Luis Gustavo R. Fernandes, Ricardo L. Zollner, Flavia Maria Netto
Abstract:
Protein hydrolysates are ingredients of enteral diets and hypoallergenic formulas. Enzymatic hydrolysis is the most commonly used method for reducing the antigenicity of milk proteins. The antigenicity and physicochemical characteristics of protein hydrolysates depend on the reaction parameters, among which pH has been pointed out as being of major importance. Hydrolysis reactions at laboratory scale are commonly carried out under controlled pH (pH-stat); from the industrial point of view, however, controlling pH during the hydrolysis reaction may be infeasible. This study evaluated the impact of pH control on the physicochemical properties and antigenicity of whey protein hydrolysates produced with Alcalase. Whey protein isolate (WPI) solutions containing 3 and 7% protein (w/v) were hydrolyzed with Alcalase at 50 and 100 U g⁻¹ protein at 60 °C for 180 min. The reactions were carried out under controlled and uncontrolled pH conditions: hydrolyses performed under controlled pH (pH-stat) were initially adjusted to and maintained at pH 8.5, whereas hydrolyses carried out without pH control were only initially adjusted to pH 8.5. The degree of hydrolysis (DH) was determined by the OPA method, the peptide profile was evaluated by RP-HPLC, and the molecular mass distribution by SDS-PAGE/Tricine. The residual α-lactalbumin (α-La) and β-lactoglobulin (β-Lg) concentrations were determined using commercial ELISA kits. The specific IgE and IgG binding capacity of the hydrolysates was evaluated by ELISA, using polyclonal antibodies obtained by immunization of female BALB/c mice with α-La, β-Lg and BSA. In the hydrolysis under uncontrolled pH, the pH dropped from 8.5 to 7.0 during the first 15 min and remained constant thereafter. No significant difference was observed between the DH of the hydrolysates obtained under controlled and uncontrolled pH conditions. Although all hydrolysates showed a hydrophilic character and low molecular mass peptides, hydrolysates obtained with and without pH control exhibited different chromatographic profiles: hydrolysis under uncontrolled pH released predominantly peptides between 3.5 and 6.5 kDa, while hydrolysis under controlled pH released peptides smaller than 3.5 kDa. Hydrolysis with Alcalase under all conditions studied decreased the α-La and β-Lg concentrations detected by the commercial kits by 99.9%. In general, the β-Lg concentrations detected in the hydrolysates obtained under uncontrolled pH were significantly higher (p<0.05) than those detected in hydrolysates produced with pH control. The anti-α-La and anti-β-Lg IgE and IgG responses to all hydrolysates decreased significantly compared with WPI; levels of specific IgE and IgG to the hydrolysates were below 25 and 12 ng ml⁻¹, respectively. Despite the differences in peptide composition and in α-La and β-Lg concentrations, no significant difference was found between the IgE and IgG binding capacity of hydrolysates obtained with or without pH control. These results highlight the impact of pH on the characteristics of the hydrolysates and on their concentrations of antigenic protein. A divergence between antigen detection by commercial ELISA kits and the specific IgE and IgG binding response was found in this study, showing that lower protein detection does not imply lower protein antigenicity. Thus, commercial kits for allergen contamination analysis should be used with caution.
Keywords: allergy, enzymatic hydrolysis, milk protein, pH conditions, physicochemical characteristics
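The 99.9% reduction in ELISA-detected antigenic protein reported above is simply the residual concentration expressed relative to the intact WPI. The Python sketch below shows the calculation with hypothetical concentrations; the values are not the study's measurements.

```python
# Minimal sketch of the residual-antigen comparison: percent reduction in
# ELISA-detected beta-Lg relative to the intact WPI. Concentrations are
# invented for illustration.

def percent_reduction(conc_wpi, conc_hydrolysate):
    """Percent reduction of residual antigenic protein vs. intact WPI."""
    return 100.0 * (1.0 - conc_hydrolysate / conc_wpi)

if __name__ == "__main__":
    beta_lg_wpi = 450_000.0            # ng/ml beta-Lg detected in WPI (hypothetical)
    samples = {
        "controlled pH (pH-stat)": 310.0,   # ng/ml after hydrolysis (hypothetical)
        "uncontrolled pH":         520.0,
    }
    for condition, conc in samples.items():
        print(f"{condition}: {percent_reduction(beta_lg_wpi, conc):.2f}% reduction")
```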
Procedia PDF Downloads 303