Search results for: Marian Dorcas Quain David Appiah-Kubi
119 Trends in All-Cause Mortality and Inpatient and Outpatient Visits for Ambulatory Care Sensitive Conditions during the First Year of the COVID-19 Pandemic: A Population-Based Study
Authors: Tetyana Kendzerska, David T. Zhu, Michael Pugliese, Douglas Manuel, Mohsen Sadatsafavi, Marcus Povitz, Therese A. Stukel, Teresa To, Shawn D. Aaron, Sunita Mulpuru, Melanie Chin, Claire E. Kendall, Kednapa Thavorn, Rebecca Robillard, Andrea S. Gershon
Abstract:
The impact of the COVID-19 pandemic on the management of ambulatory care sensitive conditions (ACSCs) remains unknown. Our objective was to compare observed and expected (projected based on previous years) trends in all-cause mortality and healthcare use for ACSCs in the first year of the pandemic (March 2020 - March 2021). We conducted a population-based study of the general adult population (Ontario, Canada) using provincial health administrative data. Monthly all-cause mortality, and hospitalization, emergency department (ED), and outpatient visit rates (per 100,000 people at risk) for seven combined ACSCs (asthma, COPD, angina, congestive heart failure, hypertension, diabetes, and epilepsy) during the first year were compared with similar periods in previous years (2016-2019) by fitting monthly time series auto-regressive integrated moving-average (ARIMA) models. Compared to previous years, all-cause mortality rates increased at the beginning of the pandemic (observed rate in March-May 2020 of 79.98 vs. projected 71.24 [66.35-76.50]) and then returned to expected levels in June 2020, except among immigrants and people with mental health conditions, where they remained elevated. Hospitalization and ED visit rates for ACSCs remained lower than projected throughout the first year: observed hospitalization rate of 37.29 vs. projected 52.07 (47.84-56.68); observed ED visit rate of 92.55 vs. projected 134.72 (124.89-145.33). ACSC outpatient visit rates decreased initially (observed rate of 4,299.57 vs. projected 5,060.23 [4,712.64-5,433.46]) and then returned to expected levels in June 2020. Reductions in outpatient visits for ACSCs at the beginning of the pandemic, combined with reduced hospital admissions, may have been associated with temporarily increased mortality, disproportionately experienced by immigrants and those with mental health conditions. Funding: The Ottawa Hospital Academic Medical Organization.
Keywords: COVID-19, chronic disease, all-cause mortality, hospitalizations, emergency department visits, outpatient visits, modelling, population-based study, asthma, COPD, angina, heart failure, hypertension, diabetes, epilepsy
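As context for the modelling approach named above, here is a minimal sketch, not the authors' code, of fitting an ARIMA model to pre-pandemic monthly rates and comparing pandemic observations against the projection and its confidence band. The series, the ARIMA order, and the variable names are illustrative assumptions:

```python
# Sketch: project pre-pandemic monthly rates with ARIMA, then compare
# pandemic-period observations against the forecast confidence band.
# All values below are hypothetical placeholders, not study data.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Hypothetical monthly hospitalization rates per 100,000 (Jan 2016 - Feb 2020)
pre_pandemic = pd.Series(
    52 + 3 * np.sin(np.arange(50) * 2 * np.pi / 12) + rng.normal(0, 1.5, 50),
    index=pd.date_range("2016-01-01", periods=50, freq="MS"),
)

# Fit an ARIMA(1,0,1); the study's exact model orders are not reported here
model = ARIMA(pre_pandemic, order=(1, 0, 1)).fit()

# Project the first pandemic year and extract the 95% confidence interval
forecast = model.get_forecast(steps=12)
projected = forecast.predicted_mean
ci = forecast.conf_int(alpha=0.05)

# An observed rate below the lower CI bound indicates fewer events than expected
observed = 37.29  # example observed hospitalization rate from the abstract
print(projected.head(3))
print("below projection:", observed < ci.iloc[:, 0].mean())
```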
118 The Effects of Myelin Basic Protein Charge Isomers on the Methyl Cycle Metabolites in Glial Cells
Authors: Elene Zhuravliova, Tamar Barbakadze, Irina Kalandadze, Elnari Zaalishvili, Lali Shanshiashvili, David Mikeladze
Abstract:
Background: Multiple sclerosis (MS) is an inflammatory, neurodegenerative disease accompanied by demyelination and an autoimmune response to myelin proteins. Among the post-translational modifications that mediate the modulation of inflammatory pathways during MS, methylation is the main one. In the cell, methylation targets DNA as well as the amino acids lysine and arginine. Decreased trans-methylation has been found to be associated with neuroinflammatory diseases. Therefore, abnormal regulation of the methyl cycle could induce demyelination through action on the PAD (peptidyl-arginine-deiminase) gene promoter. PAD takes part in protein citrullination and targets myelin basic protein (MBP), which is affected during demyelination. To determine whether MBP charge isomers change the methyl cycle, we estimated the concentrations of methyl cycle metabolites in MBP-activated primary astrocytes and oligodendrocytes. For this purpose, the action of the citrullinated MBP-C8 and the most cationic MBP-C1 isomers on the primary cells was investigated. Methods: Primary oligodendrocyte and astrocyte cell cultures were prepared from the whole brains of 2-day-old Wistar rats. The methyl cycle metabolites, including homocysteine, S-adenosylmethionine (SAM), and S-adenosylhomocysteine (SAH), were estimated by HPLC analysis using fluorescence detection and prior derivatization. Results: We found that the action of MBP-C8 and MBP-C1 induces a decrease in the concentration of both methyl cycle metabolites, SAM and SAH, in astrocytes compared to control cells. As for oligodendrocytes, the concentration of SAM was increased by the addition of MBP-C1, while MBP-C8 had no significant effect. The concentration of SAH was increased compared to control cells by the action of both MBP-C1 and MBP-C8. A significant increase in homocysteine concentration was observed with the MBP-C8 isomer in both oligodendrocytes and astrocytes. Conclusion: These data suggest that MBP charge isomers change the concentration of methyl cycle metabolites. The citrullinated MBP-C8 isomer causes elevation of homocysteine in astrocytes and oligodendrocytes, which may underlie the decreased astrocyte proliferation and increased oligodendrocyte cell death that take place in neurodegenerative processes. Elevated homocysteine levels and the subsequent abnormal regulation of the methyl cycle in oligodendrocytes possibly change DNA methylation, which activates the PAD gene promoter and induces the synthesis of PAD; this in turn provokes citrullination, the process that accompanies demyelination. Acknowledgment: This research was supported by the SRNSF Georgia RF17_534 grant.
Keywords: myelin basic protein, astrocytes, methyl cycle metabolites, homocysteine, oligodendrocytes
117 Effect of 8-OH-DPAT on the Behavioral Indicators of Stress and on the Number of Astrocytes after Exposure to Chronic Stress
Authors: Ivette Gonzalez-Rivera, Diana B. Paz-Trejo, Oscar Galicia-Castillo, David N. Velazquez-Martinez, Hugo Sanchez-Castillo
Abstract:
Prolonged exposure to stress can cause disorders related to dysfunction in the prefrontal cortex, such as generalized anxiety and depression. These disorders involve alterations in neurotransmitter systems; the serotonergic system, a target of the drugs commonly used to treat these disorders, is one of them. Recent studies suggest that 5-HT1A receptors play a pivotal role in the regulation of the serotonergic system and in stress responses. Likewise, there is increasing evidence that astrocytes are involved in the pathophysiology of stress. The aim of this study was to examine the effects of 8-OH-DPAT, a selective agonist of 5-HT1A receptors, on the behavioral signs of anxiety and anhedonia as well as on the number of astrocytes in the medial prefrontal cortex (mPFC) after exposure to chronic stress. Fifty male Wistar rats (250-350 g) were housed in standard laboratory conditions and treated in accordance with the ethical standards for the use and care of laboratory animals. A protocol of chronic unpredictable stress was applied for 10 consecutive days, during which stressors such as movement restriction, water deprivation, and wet bedding, among others, were presented. Forty rats were subjected to the stress protocol and then divided into 4 groups of 10 rats each, which were administered 8-OH-DPAT (Tocris, USA) intraperitoneally, with saline as vehicle, at doses of 0.0, 0.3, 1.0, and 2.0 mg/kg, respectively. Another 10 rats were subjected to neither the stress protocol nor the drug. Subsequently, all rats were assessed in an open field test, a forced swimming test, a sucrose consumption test, and a zero maze test. At the end of this procedure, the animals were sacrificed, the brains were removed, and tissue from the mPFC (Bregma: 4.20, 3.70, 2.70, 2.20) was processed with immunofluorescence staining for astrocytes (anti-GFAP antibody, an astrocyte marker; Abcam). Statistically significant differences were found in the behavioral tests of all groups, showing that the stress group with saline administration had more indicators of anxiety and anhedonia than the control group and the groups given 8-OH-DPAT. A dose-dependent effect of 8-OH-DPAT was also found on the number of astrocytes in the mPFC. The results show that 8-OH-DPAT can modulate the effect of stress at both the behavioral and anatomical levels. They also indicate that 5-HT1A receptors and astrocytes play an important role in the stress response and may modulate the therapeutic effect of serotonergic drugs, so they should be explored as a fundamental part of the treatment of stress symptoms and of understanding the mechanisms of stress responses.
Keywords: anxiety, prefrontal cortex, serotonergic system, stress
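As an illustration of how dose groups like these are commonly compared, the sketch below runs a one-way ANOVA and a dose-response rank correlation. It is not the study's analysis; the scores, doses, and effect sizes are hypothetical:

```python
# Sketch: compare four dose groups on a behavioral measure, then probe
# a dose-dependent trend. All scores below are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical anxiety-index scores for saline and three 8-OH-DPAT doses (n=10 each)
saline = rng.normal(60, 8, 10)
dose_03 = rng.normal(52, 8, 10)
dose_10 = rng.normal(45, 8, 10)
dose_20 = rng.normal(40, 8, 10)

# One-way ANOVA across the four treated groups
f_stat, p_value = stats.f_oneway(saline, dose_03, dose_10, dose_20)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A dose-dependent effect can be probed with a Spearman correlation of dose vs. score
doses = np.repeat([0.0, 0.3, 1.0, 2.0], 10)
scores = np.concatenate([saline, dose_03, dose_10, dose_20])
rho, p = stats.spearmanr(doses, scores)
print(f"rho = {rho:.2f}, p = {p:.4f}")
```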
116 Through the Robot’s Eyes: A Comparison of Robot-Piloted, Virtual Reality, and Computer Based Exposure for Fear of Injections
Authors: Bonnie Clough, Tamara Ownsworth, Vladimir Estivill-Castro, Matt Stainer, Rene Hexel, Andrew Bulmer, Wendy Moyle, Allison Waters, David Neumann, Jayke Bennett
Abstract:
The success of global vaccination programs relies on the uptake of vaccines to achieve herd immunity. Yet, many individuals do not obtain vaccines or venipuncture procedures when needed. Whilst health education may be effective for individuals who are hesitant due to safety or efficacy concerns, for many others the primary concern relates to blood or injection fear or phobia (BII). BII is highly prevalent and associated with a range of negative health impacts, at both individual and population levels. Exposure therapy is an efficacious treatment for specific phobias, including BII, but has high patient dropout and low implementation by therapists. Whilst virtual reality approaches to exposure therapy may be more acceptable, they have similarly low rates of implementation by therapists and are often difficult to tailor to an individual client’s needs. It was proposed that a piloted robot may be able to adequately facilitate fear induction and be an acceptable approach to exposure therapy. The current study examined fear induction responses, acceptability, and feasibility of a piloted robot for BII exposure. A Nao humanoid robot was programmed to connect with a virtual reality head-mounted display, enabling live streaming and exploration of real environments from a distance. Thirty adult participants with BII fear were randomly assigned to robot-pilot or virtual reality exposure conditions in a laboratory-based fear exposure task. All participants also completed a computer-based two-dimensional exposure task, with the order of conditions counterbalanced across participants. Measures included fear (heart rate variability, galvanic skin response, stress indices, and subjective units of distress), engagement with the feared stimulus (eye gaze: time to first fixation and total number of fixations), acceptability, and perceived treatment credibility. Preliminary results indicate that fear responses can be adequately induced via a robot-piloted platform. Further results will be discussed, as will implications for the treatment of BII phobia and other fears. It is anticipated that piloted robots may provide a useful platform for facilitating exposure therapy, being more acceptable than in-vivo exposure and more flexible than virtual reality exposure.
Keywords: anxiety, digital mental health, exposure therapy, phobia, robot, virtual reality
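Heart rate variability, one of the fear measures listed above, is often summarized with the RMSSD index. A minimal sketch follows, assuming hypothetical RR-interval series; the study's actual HRV pipeline is not described in the abstract:

```python
# Sketch: RMSSD, a common time-domain heart rate variability index.
# The RR intervals are hypothetical placeholders, not study data.
import numpy as np

def rmssd(rr_intervals_ms: np.ndarray) -> float:
    """Root mean square of successive differences between RR intervals (ms)."""
    diffs = np.diff(rr_intervals_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Shorter, less variable RR intervals generally indicate higher arousal
baseline = np.array([850, 860, 845, 870, 855, 865, 850])
exposure = np.array([700, 705, 698, 702, 699, 703, 701])

print(f"baseline RMSSD: {rmssd(baseline):.1f} ms")  # higher variability at rest
print(f"exposure RMSSD: {rmssd(exposure):.1f} ms")  # reduced variability under fear
```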
115 Bis-Azlactone Based Biodegradable Poly(Ester Amide)s: Design, Synthesis and Study
Authors: Kobauri Sophio, Kantaria Tengiz, Tugushi David, Puiggali Jordi, Katsarava Ramaz
Abstract:
Biodegradable biomaterials (BB) are of high interest for numerous applications in modern medicine as resorbable surgical materials and drug delivery systems. These materials can be cleared from the body after fulfilling their function, which eliminates the need for surgical intervention to remove them. Among the most promising BB are amino acid-based biodegradable poly(ester amide)s (PEAs), which are composed of naturally occurring α-amino acids and non-toxic building blocks such as fatty diols and dicarboxylic acids. The key bis-nucleophilic monomers for synthesizing the PEAs are diamine-diesters: di-p-toluenesulfonic acid salts of bis-(α-amino acid)-alkylenediesters (TAADs), which form the PEAs after step-growth polymerization (polycondensation) with bis-electrophilic counterparts - activated diesters of dicarboxylic acids. The PEAs combine the advantages of their 'parent polymers' - polyesters (PEs) and polyamides (PAs): the ability to biodegrade (PEs), and a high affinity with tissues together with a wide range of desired mechanical properties (PAs). The scope of application of the PEAs can be substantially expanded by their functionalization, e.g., through the incorporation of hydrophobic fragments into the polymeric backbone. Hydrophobically modified PEAs can form non-covalent adducts with various compounds, which makes them attractive as drug carriers. For hydrophobic modification of the PEAs, we selected the so-called 'Azlactone Method', based on the application of p-phenylene-bis-oxazolinones (bis-azlactones, BALs) as active bis-electrophilic monomers in step-growth polymerization with TAADs. Interaction of BALs with TAADs resulted in PEAs with low molecular weights (Mw 2,800-19,600 Da) and poor material properties. High-molecular-weight PEAs (Mw up to 100,000) with desirable material properties were synthesized after replacing a part of the BALs with an activated diester - di-p-nitrophenyl sebacate - or a part of the TAAD with an alkylenediamine - 1,6-hexamethylenediamine. The new hydrophobically modified PEAs were characterized by FTIR, NMR, GPC, and DSC. It was shown that after hydrophobic modification the PEAs retain their biodegradability (an in vitro study catalyzed by α-chymotrypsin and lipase) and are of interest for constructing resorbable surgical and pharmaceutical devices, including drug-delivery containers such as microspheres. The new PEAs are insoluble in hydrophobic organic solvents such as chloroform or dichloromethane (they only swell), which allowed a new technology for fabricating microspheres to be elaborated.
Keywords: amino acids, biodegradable polymers, bis-azlactones, microspheres
114 Antimicrobial Activity of 2-Nitro-1-Propanol and Lauric Acid against Gram-Positive Bacteria
Authors: Robin Anderson, Elizabeth Latham, David Nisbet
Abstract:
Propagation and dissemination of antimicrobial-resistant and pathogenic microbes from spoiled silages and composts represent a serious public health threat to humans and animals. In the present study, the antimicrobial activity of the short chain nitro-compound 2-nitro-1-propanol (9 mM), as well as the medium chain fatty acid lauric acid and its glycerol monoester monolaurin (at 25 and 17 µmol/mL, respectively), was investigated against select pathogenic and multi-drug resistant Gram-positive bacteria common to spoiled silages and composts. In an initial study, we found that growth rates of a multi-resistant Enterococcus faecalis (expressing resistance against erythromycin, quinupristin/dalfopristin and tetracycline) and Staphylococcus aureus strain 12600 (expressing resistance against erythromycin, linezolid, penicillin, quinupristin/dalfopristin and vancomycin) were slowed by more than 78% (P < 0.05) by 2-nitro-1-propanol treatment during culture (n = 3/treatment) in anaerobically prepared ½ strength Brain Heart Infusion broth at 37°C when compared to untreated controls (0.332 ± 0.04 and 0.108 ± 0.03 h⁻¹, respectively). The growth rate of 2-nitro-1-propanol-treated Listeria monocytogenes was also decreased, by 96% (P < 0.05), when compared to untreated controls cultured similarly (0.171 ± 0.01 h⁻¹). Maximum optical densities measured at 600 nm were lower (P < 0.05) in 2-nitro-1-propanol-treated cultures (0.053 ± 0.01, 0.205 ± 0.02 and 0.041 ± 0.01) than in untreated controls (0.483 ± 0.02, 0.523 ± 0.01 and 0.427 ± 0.01) for E. faecalis, S. aureus and L. monocytogenes, respectively. When tested against mixed microbial populations during anaerobic 24 h incubation of spoiled silage, significant effects of treatment with 1 mg 2-nitro-1-propanol/g (approximately 9.5 µmol/g) or 5 mg lauric acid/g (approximately 25 µmol/g) on populations of wildtype Enterococcus and Listeria were not observed. Mixed populations treated with 5 mg monolaurin/g (approximately 17 µmol/g) had lower (P < 0.05) viable cell counts of wildtype enterococci than untreated controls after 6 h incubation (2.87 ± 1.03 versus 5.20 ± 0.25 log10 colony forming units/g, respectively), but otherwise significant effects of monolaurin were not observed. These results reveal differential susceptibility of multi-drug resistant enterococci and staphylococci as well as L. monocytogenes to the inhibitory activity of 2-nitro-1-propanol and of the medium chain fatty acid lauric acid and its glycerol monoester monolaurin. Ultimately, these results may lead to improved treatment technologies to preserve the microbiological safety of silages and composts.
Keywords: 2-nitro-1-propanol, lauric acid, monolaurin, gram positive bacteria
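The growth rates reported above (in h⁻¹) are typically derived by regressing ln(OD600) on time over the exponential phase. A hedged sketch with hypothetical optical-density readings follows:

```python
# Sketch: specific growth rate (h^-1) as the slope of ln(OD600) versus time.
# The OD readings below are hypothetical, not data from the study.
import numpy as np

def specific_growth_rate(times_h: np.ndarray, od600: np.ndarray) -> float:
    """Slope of ln(OD600) vs. time (h) gives the growth rate constant in h^-1."""
    slope, _ = np.polyfit(times_h, np.log(od600), 1)
    return float(slope)

times = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
control_od = np.array([0.05, 0.07, 0.10, 0.14, 0.19])      # untreated culture
treated_od = np.array([0.05, 0.053, 0.057, 0.061, 0.065])  # nitro-compound treated

mu_control = specific_growth_rate(times, control_od)
mu_treated = specific_growth_rate(times, treated_od)
print(f"control: {mu_control:.3f} h^-1, treated: {mu_treated:.3f} h^-1")
print(f"inhibition: {100 * (1 - mu_treated / mu_control):.0f}% slower")
```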
113 Using Repetition of Instructions in Course Design to Improve Instructor Efficiency and Increase Enrollment in a Large Online Course
Authors: David M. Gilstrap
Abstract:
Designing effective instructions is a critical dimension of effective teaching systems. Due to the lack of interpersonal contact, online courses present new challenges in this regard, especially with large class sizes. This presentation is a case study of how repetition of instructions within the course design was utilized to increase instructor efficiency in managing a rapid rise in enrollment. World of Turf is a two-credit, semester-long elective course for non-turfgrass majors at Michigan State University. It is taught entirely online and solely by the instructor, without any graduate teaching assistants. Discussion forums about subject matter are designated for each lecture, and those forums are moderated by a few undergraduate turfgrass majors. Instructions on the course structure, navigation, and grading are conveyed in the syllabus and the course-introduction lecture. Regardless, students email questions about such matters, and the number of emails increased as course enrollment grew steadily during the first three years of the course's existence, almost to the point that the course was becoming unmanageable. Many of these emails occurred because the instructor was failing to update and operate the course in a timely and proper fashion, being too busy answering emails. Some of the emails did help the instructor ferret out poorly composed instructions, which he corrected. Beginning in the summer semester of 2015, the instructor overhauled the course by segregating content into weekly modules. The philosophy envisioned and embraced was that there can never be too much repetition of instructions in an online course. Instructions were duplicated within each of these modules as well as in associated modules for the syllabus and schedules, getting started, frequently asked questions, practice tests, surveys, and exams. In addition, informational forums were created and set aside for questions about the course workings and each of the three exams, creating even more repetition. Within these informational forums, students typically answer each other's questions, which demonstrates to the students that the information is available in the course. When needed, the instructor interjects with correct answers or clarifies any misinformation that students might be putting forth. Increasing the repetition of instructions, together with strategic enhancements to the course design, has resulted in a dramatic decrease in the number of email replies required of the instructor. The resulting improvement in efficiency allowed the instructor to raise enrollment limits, effecting a ten-fold increase in enrollment over a five-year period, with 1,050 students registered during the most recent academic year, easily making it the largest online course at the university. The improvement in course-delivery efficiency also freed sufficient time for the instructor to develop and launch an additional online course, further enhancing his productivity and value in terms of the number of student-credit hours for which he is responsible.
Keywords: design, efficiency, instructions, online, repetition
112 The Association between Prior Antibiotic Use and Subsequent Risk of Infectious Disease: A Systematic Review
Authors: Umer Malik, David Armstrong, Mark Ashworth, Alex Dregan, Veline L'Esperance, Lucy McDonnell, Mariam Molokhia, Patrick White
Abstract:
Introduction: The microbiota lining epithelial surfaces is thought to play an important role in many human physiological functions, including defense against pathogens and modulation of the immune response. The microbiota is susceptible to disruption from external influences such as exposure to antibiotic medication. It is thought that antibiotic-induced disruption of the microbiota could predispose to pathogen overgrowth and invasion. We hypothesized that antibiotic use would be associated with an increased risk of future infections. We carried out a systematic review of evidence of associations between antibiotic use and subsequent risk of community-acquired infections. Methods: We conducted a review of the literature for observational studies assessing the association between antibiotic use and subsequent community-acquired infection. Eligible studies were published before April 29th, 2016. We searched MEDLINE, EMBASE, and Web of Science and screened titles and abstracts using a predefined search strategy. Infections caused by Clostridium difficile, drug-resistant organisms, and fungal organisms were excluded, as their association with prior antibiotic use has been examined in previous systematic reviews. Results: Eighteen out of 21,518 retrieved studies met the inclusion criteria. An association between past antibiotic exposure and subsequent increased risk of infection was reported in 16 studies, including one study on Campylobacter jejuni infection (Odds Ratio [OR] 3.3), two on typhoid fever (ORs 5.7 and 12.2), one on Staphylococcus aureus skin infection (OR 2.9), one on invasive pneumococcal disease (OR 1.57), one on recurrent furunculosis (OR 16.6), one on recurrent boils and abscesses (Risk Ratio 1.4), one on upper respiratory tract infection (OR 2.3) and urinary tract infection (OR 1.1), one on invasive Haemophilus influenzae type b (Hib) infection (OR 1.51), one on infectious mastitis (OR 5.38), one on meningitis (OR 2.04), and five on Salmonella enterica infection (ORs 1.4, 1.59, 1.9, 2.3 and 3.8). The effect size in three of the studies on Salmonella enterica infection was of marginal statistical significance. A further two studies on Salmonella infection did not demonstrate a statistically significant association between prior antibiotic exposure and subsequent infection. Conclusion: We have found an association between past antibiotic exposure and subsequent risk of a diverse range of infections in the community setting. Our findings provide evidence to support the hypothesis that prior antibiotic usage may predispose to future infection risk, possibly through antibiotic-induced alteration of the microbiota. The findings add further weight to calls to minimize inappropriate antibiotic prescriptions.
Keywords: antibiotic, infection, risk factor, side effect
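For readers unfamiliar with the effect measure tallied above, the sketch below computes an odds ratio and its 95% confidence interval from a 2x2 exposure-outcome table; the counts are hypothetical, not taken from any included study:

```python
# Sketch: odds ratio with a Wald 95% confidence interval from a 2x2 table.
# Counts are hypothetical placeholders, not data from the reviewed studies.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a/b: infected/not among antibiotic-exposed; c/d: infected/not among unexposed."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=40, b=160, c=25, d=275)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # a CI excluding 1 suggests association
```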
111 The Correspondence between Self-regulated Learning, Learning Efficiency and Frequency of ICT Use
Authors: Maria David, Tunde A. Tasko, Katalin Hejja-Nagy, Laszlo Dorner
Abstract:
The authors have been engaged in research on learning since 1998. Recently, the focus of our interest has been how the prevalent use of information and communication technology (ICT) influences students' learning abilities, skills of self-regulated learning, and learning efficiency. Nowadays, there are three dominant theories about the psychological effects of ICT use. According to social optimists, modern ICT devices have a positive effect on thinking. According to social pessimists, this effect is rather negative. And according to biological optimists, the change is obvious, but these changes can fit into mankind's evolved neurological system just as writing did long ago. The mentality of 'digital natives' differs from that of older people. They process information coming from the outside world in a different way, and different experiences result in different cerebral conformations. In this regard, researchers report both positive and negative effects of ICT use. According to several studies, it has a positive effect on cognitive skills, intelligence, school efficiency, development of self-regulated learning, and self-esteem regarding learning. It is also proven that computers improve skills of visual intelligence such as spatial orientation, iconic skills, and visual attention. Among the negative effects of frequent ICT use, researchers mention the decrease of critical thinking, as a permanent flow of information does not give scope for deeper cognitive processing. The aims of our present study were to uncover developmental characteristics of self-regulated learning in different age groups and to study correlations among learning efficiency, the level of self-regulated learning, and the frequency of computer use. Our subjects (N=1600) were primary and secondary school students and university students. We studied four age groups (ages 10, 14, 18, and 22), with 400 subjects in each. We used the following methods: the research team developed a questionnaire for measuring the level of self-regulated learning and a questionnaire for measuring ICT use, and we used documentary analysis to gain information about grade point average (GPA) and results of competence measures. Finally, we used computer tasks to measure cognitive abilities. The data are currently under analysis, but according to our preliminary results, frequent use of computers results in shorter response times in every age group. Our results show that an ordinary extent of ICT use tends to increase reading competence and had a positive effect on students' abilities, though it did not show a relationship with school marks (GPA). As time passes, GPA worsens as the learning material becomes more and more difficult. This phenomenon draws attention to the fact that students are unable to switch from guided to independent learning, so it is important to consciously develop skills of self-regulated learning.
Keywords: digital natives, ICT, learning efficiency, reading competence, self-regulated learning
110 Problems and Solutions in the Application of ICP-MS for Analysis of Trace Elements in Various Samples
Authors: Béla Kovács, Éva Bódi, Farzaneh Garousi, Szilvia Várallyay, Áron Soós, Xénia Vágó, Dávid Andrási
Abstract:
In agriculture, flame atomic absorption spectrometers (FAAS), graphite furnace atomic absorption spectrometers (GF-AAS), inductively coupled plasma optical emission spectrometers (ICP-OES), and inductively coupled plasma mass spectrometers (ICP-MS) are routinely applied for the analysis of elements in food, food raw materials, and environmental samples. An inductively coupled plasma mass spectrometer (ICP-MS) is capable of analysing 70-80 elements in multielemental mode from a sample volume of 1-5 cm³, and the detection limits of elements are in the µg/kg-ng/kg (ppb-ppt) concentration range. All these analytical instruments suffer from different physical and chemical interfering effects when analysing the above types of samples. The smaller the concentration of an analyte and the larger the concentration of the matrix, the larger the interfering effects. Nowadays it is increasingly important to analyse ever smaller concentrations of elements, and of the above instruments, the inductively coupled plasma mass spectrometer is generally capable of analysing the smallest concentrations. The applied ICP-MS instrument also has Collision Cell Technology (CCT). In CCT mode, certain elements have better (smaller) detection limits, by 1-3 orders of magnitude, compared to a normal ICP-MS analytical method. The CCT mode gives better detection limits mainly for the analysis of selenium, arsenic, germanium, vanadium, and chromium. To elaborate an analytical method for trace elements with an inductively coupled plasma mass spectrometer, the most important interfering effects (problems) were evaluated: 1) physical interferences; 2) spectral interferences (elemental and molecular isobaric); 3) the effect of easily ionisable elements; 4) memory interferences. When analysing food, food raw materials, and environmental samples, another (new) interfering effect emerged in ICP-MS, namely the effect of various matrices having different evaporation and nebulization efficiencies and different carbon contents. In our research work, the effects of different water-soluble compounds and of varying carbon content (as sample matrix) on the measured intensities of the applied elements were examined. In this way we could finally find 'opportunities' to decrease or eliminate the error in the analyses of the applied elements (Cr, Co, Ni, Cu, Zn, Ge, As, Se, Mo, Cd, Sn, Sb, Te, Hg, Pb, Bi). To analyse these elements in the above samples, the most appropriate inductively coupled plasma mass spectrometer is a quadrupole instrument applying a collision cell technique (CCT). The extent of the interfering effect of carbon content depends on the type of compound. The carbon content significantly affects the measured concentrations (intensities) of the above elements, which can be corrected using different internal standards.
Keywords: elements, environmental and food samples, ICP-MS, interference effects
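The internal-standard correction mentioned in the closing sentence can be illustrated with a short sketch: the analyte signal is rescaled by the measured recovery of a spiked internal standard. The element choice, intensities, and function names are illustrative assumptions, not the authors' procedure:

```python
# Sketch: internal-standard correction for matrix-induced signal suppression
# or enhancement in ICP-MS. Values are illustrative, not study measurements.
def correct_intensity(analyte_cps: float,
                      is_measured_cps: float,
                      is_expected_cps: float) -> float:
    """Rescale an analyte signal by the internal standard recovery factor."""
    recovery = is_measured_cps / is_expected_cps
    return analyte_cps / recovery

# A carbon-rich matrix suppressing a spiked internal standard by 20%
# implies the analyte signal should be scaled up by the same factor.
corrected = correct_intensity(analyte_cps=8_000,
                              is_measured_cps=40_000,
                              is_expected_cps=50_000)
print(f"corrected intensity: {corrected:.0f} cps")  # 8000 / 0.8 = 10000 cps
```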
109 Creating Standards to Define the Role of Employment Specialists: A Case Study
Authors: Joseph Ippolito, David Megenhardt
Abstract:
In the United States, displaced workers, the unemployed, and those seeking to build additional work skills are provided employment training and job placement services through a system of One-Stop Career Centers that are sponsored by the country's 593 local Workforce Boards. During the period 2010-2015, these centers served roughly 8 million individuals each year. The quality of services provided at these centers rests upon professional employment specialists who work closely with clients to identify their job interests, connect them to appropriate training opportunities, match them with needed supportive social services, and guide them to eventual employment. Despite the crucial role these Employment Specialists play, there are currently no broadly accepted standards that establish what these individuals are expected to do in the workplace, nor are there indicators to assess how well an individual performs these responsibilities. Education Development Center (EDC) and the United Labor Agency (ULA) have partnered to create a foundation upon which curriculum can be developed that addresses the skills, knowledge, and behaviors that Employment Specialists must master in order to serve their clients effectively. EDC is a non-profit education research and development organization that designs, implements, and evaluates programs to improve education, health, and economic opportunity worldwide. ULA is the social action arm of organized labor in Greater Cleveland, Ohio. ULA currently operates One-Stop Career Centers in Cleveland and in Pittsburgh, Pennsylvania. This case study outlines efforts taken to create standards that define the work of Employment Specialists and to establish indicators that can guide assessment of work performance. The methodology of the study engaged a panel of expert Employment Specialists in rigorous, structured dialogues that analyze and identify the characteristics that enable them to be effective in their jobs. It also drew upon and integrated reviews of the panel's work by more than 100 other Employment Specialists across the country. The results of this process are two documents that provide resources for developing training curriculum for future Employment Specialists, namely: an occupational profile of an Employment Specialist that offers a detailed articulation of the skills, knowledge, and behaviors that enable individuals to be successful at this job, and a collection of performance-based indicators, aligned to the profile, which illustrate what the work responsibilities of an Employment Specialist 'look like' at four levels of effectiveness ranging from novice to expert. The method of occupational analysis used by the study has application across a broad range of fields.
Keywords: assessment, employability, job standards, workforce development
108 Preoperative Anxiety Evaluation: Comparing the Visual Facial Anxiety Scale/Yumul Faces Anxiety Scale, Numerical Verbal Rating Scale, Categorization Scale, and the State-Trait Anxiety Inventory
Authors: Roya Yumul, Chse, Ofelia Loani Elvir Lazo, David Chernobylsky, Omar Durra
Abstract:
Background: Preoperative anxiety has been shown to be caused by the fear associated with surgical and anesthetic complications; however, the current gold standard for assessing patient anxiety, the STAI, is problematic to use in the preoperative setting given the duration and concentration required to complete the 40-item questionnaire. Our primary aim in the study is to investigate the correlation of the Visual Facial Anxiety Scale (VFAS) and the Numerical Verbal Rating Scale (NVRS) to the State-Trait Anxiety Inventory (STAI) to determine the optimal anxiety scale to use in the perioperative setting. Methods: A clinical study of patients undergoing various surgeries was conducted utilizing each of the preoperative anxiety scales. Inclusion criteria included patients undergoing elective surgeries, while exclusion criteria included patients with anesthesia contraindications, inability to comprehend instructions, impaired judgement, substance abuse history, and those pregnant or lactating. 293 patients were analyzed in terms of demographics, anxiety scale survey results, and anesthesia data, with Spearman coefficients, chi-squared analysis, and Fisher's exact test utilized for comparison analysis. Results: Statistical analysis showed that VFAS had a higher correlation to STAI than NVRS (rs=0.66, p<0.0001 vs. rs=0.64, p<0.0001). The combined VFAS-Categorization Scores showed the highest correlation with the gold standard (rs=0.72, p<0.0001). Subgroup analysis showed similar results. STAI evaluation time (247.7 ± 54.81 sec) far exceeds that of VFAS (7.29 ± 1.61 sec), NVRS (7.23 ± 1.60 sec), and the Categorization scale (7.29 ± 1.99 sec). Patients preferred VFAS (54.4%), Categorization (11.6%), and NVRS (8.8%). Anesthesiologists preferred VFAS (63.9%), NVRS (22.1%), and Categorization Scales (14.0%). Of note, the top five causes of preoperative anxiety were determined to be waiting (56.5%), pain (42.5%), family concerns (40.5%), no information about surgery (40.1%), and anesthesia (31.6%). Conclusions: The combined VFAS-Categorization Score (VCS) demonstrates the highest correlation to the gold standard, STAI. Both VFAS and the Categorization test also take significantly less time than STAI, which is critical in the preoperative setting. Among both patients and anesthesiologists, VFAS was the most preferred scale. This forms the basis of the Yumul FACES Anxiety Scale, designed for quick quantization and assessment in the preoperative setting while maintaining a high correlation to the gold standard. Additional studies using the formulated Yumul FACES Anxiety Scale are merited.
Keywords: numerical verbal anxiety scale, preoperative anxiety, state-trait anxiety inventory, visual facial anxiety scale
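The scale-to-STAI comparison described above rests on Spearman rank correlation. A minimal sketch follows, with hypothetical score vectors standing in for the study data:

```python
# Sketch: Spearman rank correlation between a brief scale and the STAI.
# Scores below are hypothetical placeholders, not patient data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
stai = rng.integers(20, 80, size=50)                 # hypothetical STAI totals
noise = rng.normal(0, 8, size=50)
vfas = np.clip(np.round((stai + noise) / 8), 0, 10)  # hypothetical 0-10 VFAS scores

rho, p_value = stats.spearmanr(vfas, stai)
print(f"rs = {rho:.2f}, p = {p_value:.2g}")  # scales are ranked by their rs against STAI
```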
107 Getting to Know the Enemy: Utilization of Phone Record Analysis Simulations to Uncover a Target’s Personal Life Attributes
Authors: David S. Byrne
Abstract:
The purpose of this paper is to understand how phone record analysis can enable identification of subjects in communication with the target of a terrorist plot. This study also sought to understand the advantages of implementing simulations to develop the skills of future intelligence analysts and so enhance national security. Through the examination of phone records, which in essence consist of the call traffic of incoming and outgoing numbers (not of listening to calls or reading the content of text messages), patterns can be uncovered that point toward members of a criminal group and planned activities. Through temporal and frequency analysis, conclusions were drawn to offer insights into the identity of participants and the potential scheme being undertaken. The challenge lies in the accurate identification of the users of the phones in contact with the target. Investigators often rely on proprietary databases and open sources to accomplish this task; however, it is difficult to ascertain the accuracy of the information found. Thus, this paper poses two research questions: first, how effective are freely available web sources of information at determining the actual identification of callers? Second, does the identity of the callers enable an understanding of the lifestyle and habits of the target? The methodology for this research consisted of the analysis of the call detail records of the author's personal phone activity spanning the period of a year, combined with the hypothetical premise that the owner of said phone was the leader of a terrorist cell. The goal was to reveal the identity of his accomplices and understand how his personal attributes can further paint a picture of the target's intentions. The results of the study were interesting: nearly 80% of the calls were identified, with over a 75% accuracy rating, via data mining of open sources. The suspected terrorist's inner circle was recognized, including relatives and potential collaborators, as well as financial institutions [money laundering], restaurants [meetings], a sporting goods store [purchase of supplies], and airlines and hotels [travel itinerary]. The outcome of this research showed the benefits of cellphone analysis without more intrusive and time-consuming methodologies, though such analysis may be instrumental for potential surveillance, interviews, and developing probable cause for wiretaps. Furthermore, this research highlights the importance of building the skills of future intelligence analysts through phone record analysis via simulations; hands-on learning in this case study emphasizes the development of the competencies necessary to improve investigations overall.
Keywords: hands-on learning, intelligence analysis, intelligence education, phone record analysis, simulations
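The temporal and frequency analysis described above can be sketched with a few pandas operations over a call detail record export. The file name and column names are assumptions for illustration, not the author's dataset:

```python
# Sketch: frequency and temporal analysis of call detail records (CDRs).
# Assumed columns: timestamp, number (counterpart), direction, duration_s.
import pandas as pd

cdr = pd.read_csv("call_detail_records.csv", parse_dates=["timestamp"])

# Frequency analysis: the most-contacted numbers point to the inner circle
top_contacts = cdr["number"].value_counts().head(10)

# Temporal analysis: hour-of-day and day-of-week patterns reveal habits
cdr["hour"] = cdr["timestamp"].dt.hour
cdr["weekday"] = cdr["timestamp"].dt.day_name()
hourly = cdr.groupby("hour").size()
by_contact_weekday = cdr.pivot_table(index="number", columns="weekday",
                                     values="duration_s", aggfunc="sum",
                                     fill_value=0)

print(top_contacts)
print(hourly.idxmax(), "is the busiest calling hour")
```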
106 Learning from Long COVID: How Healthcare Needs to Change for Contested Illnesses
Authors: David Tennison
Abstract:
In the wake of the Covid-19 pandemic, a new chronic illness emerged onto the global stage: Long Covid. Long Covid presents with several symptoms commonly seen in other poorly understood illnesses, such as fibromyalgia (FM) and myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS). However, while Long Covid has swiftly become a recognised illness, FM and ME/CFS are still seen as contested, which impacts patient care and healthcare experiences. This study aims to examine the differences between Long Covid and FM, and whether the Long Covid case can provide guidance for how to address the healthcare challenge of contested illnesses. To address this question, this study performed comprehensive research into the history of FM, our current biomedical understanding of it, and available healthcare interventions (within the context of the UK NHS). Analysis was undertaken of the stigma and stereotypes around FM, and a comparison was made between FM and the emerging Long Covid literature, along with the healthcare response to Long Covid. This study finds that healthcare for chronic contested illnesses in the UK is vastly insufficient, in terms of pharmaceutical and holistic interventions and the provision of secondary care options. Interestingly, for Long Covid, many of the treatment suggestions are pulled directly from those used for contested illnesses. The key difference is in terms of funding and momentum: Long Covid has generated exponentially more interest and research in a short time than there has been in the last few decades of contested illness research. This stands to help people with FM and ME/CFS; for example, research has recently been funded into 'brain fog', a previously elusive and misunderstood symptom. FM is culturally regarded as a 'women's disease', and FM stigma stems from notions of 'hysteria'. A key finding is that the idea of FM affecting women disproportionately is not reflected in modern population studies. Emerging data on Long Covid also suggest a slight leaning towards more female patients; however, it is less feminised, potentially due to its emerging in the global historical moment of the pandemic. Another key difference is that FM is rated as an extremely low-prestige illness by healthcare professionals, while it was in large part due to the advocacy of affected healthcare professionals that Long Covid was so quickly recognised by science and medicine. In conclusion, Long Covid (and the risk of future pandemics and post-viral illnesses) highlights a crucial need for implementing new, and reinforcing existing, care networks for chronic illnesses. The difference in how contested illnesses like FM and new ones like Long Covid are treated has a lot to do with the historical moment in which they emerge, but cultural stereotypes, from within and without medicine, need updating, particularly as they contribute to disease stigma that causes genuine harm to patients. Widespread understanding and acceptance of Long Covid could help fight contested illness stigma, and the attention, funding, and research directed at Long Covid may actually help raise the profile of contested illnesses and uncover answers about their symptomatology.
Keywords: long COVID, fibromyalgia, myalgic encephalomyelitis, chronic fatigue syndrome, NHS, healthcare, contested illnesses, chronic illnesses, COVID-19 pandemic
105 Drug Delivery Cationic Nano-Containers Based on Pseudo-Proteins
Authors: Sophio Kobauri, Temur Kantaria, Nina Kulikova, David Tugushi, Ramaz Katsarava
Abstract:
The elaboration of effective drug delivery vehicles remains topical, since targeted drug delivery is one of the most important challenges of modern nanomedicine. The last decade has witnessed enormous research focused on synthetic cationic polymers (CPs), particularly as non-viral gene delivery systems, due to their flexible properties, facile synthesis, robustness, non-oncogenicity, and proven gene delivery efficiency. However, toxicity is still an obstacle to their application in pharmacotherapy. To overcome this problem, the creation of new cationic compounds, including polymeric nano-sized particles (nano-containers, NCs) loaded with different pharmaceuticals and biologicals, is still relevant. We have found that amino acid-based biodegradable polymers called pseudo-proteins (PPs), which can be cleared from the body after fulfilling their function, are highly suitable for designing pharmaceutical NCs. Among the most promising are NCs made of biodegradable cationic PPs (CPPs). For preparing new cationic NCs (CNCs), we used CPPs composed of the positively charged amino acid L-arginine (R). The CNCs were fabricated by two approaches, using: (1) R-based homo-CPPs; (2) blends of R-based CPPs with regular (neutral) PPs. In the first approach, NCs were prepared from CPPs 8R3 (composed of R, sebacic acid, and 1,3-propanediol) and 8R6 (composed of R, sebacic acid, and 1,6-hexanediol). The NCs prepared from these CPPs were 72-101 nm in size with zeta potentials within +30 to +35 mV at a concentration of 6 mg/mL. In the second approach, CPP 8R6 was blended in the organic phase with the neutral PP 8L6 (composed of leucine, sebacic acid, and 1,6-hexanediol). The NCs prepared from the blends were 130-140 nm in size with zeta potentials within +20 to +28 mV, depending on the 8R6/8L6 ratio. Stability studies of the fabricated NCs showed no substantial change in particle size or distribution and no formation of large particles after three months of storage. An in vitro biocompatibility study of the obtained NCs with four different stable cell lines - A549 (human), U-937 (human), RAW264.7 (murine), and Hepa 1-6 (murine) - showed that both types of cationic NCs are biocompatible. The data obtained allow us to conclude that the new CNCs are promising for application as biodegradable drug delivery vehicles. This work was supported by the joint grant from the Science and Technology Center in Ukraine and the Shota Rustaveli National Science Foundation of Georgia #6298 'New biodegradable cationic polymers composed of arginine and spermine - versatile biomaterials for various biomedical applications'.
Keywords: biodegradable polymers, cationic pseudo-proteins, nano-containers, drug delivery vehicles
104 Insect Manure (Frass) as a Complementary Fertilizer to Enhance Soil Mineralization Function: Application to Cranberry and Field Crops
Authors: Joël Passicousset, David Gilbert, Chloé Chervier-Legourd, Emmanuel Caron-Garant, Didier Labarre
Abstract:
Living soil agriculture seeks to reconcile food production with improving soil health, soil biodiversity, and soil fertility, and more generally with attenuating the environmental drawbacks inherent in modern agriculture. Using appropriate organic materials as soil amendments has a role to play in increasing soil organic matter, improving soil fertility, sequestering carbon, and diminishing dependence on both mineral fertilizers and pesticides. Insect farming consists of producing insects that can be used as a protein-rich, insect-based food. Usually, detritivores are chosen so that they can be fed food wastes, which contributes to the circular economy while producing low-carbon food. This process also produces frass, made of insect feces, exuvial material, and non-digested fibrous material, which has valuable fertilizer and biostimulant properties. But frass used as a sole fertilizer on a crop may not be completely adequate for the plants' needs. This is why this project considers black soldier fly (BSF, one of the three main insect species grown commercially) frass as a complementary fertilizer, in both organic and conventional contexts. Three kinds of experiments were performed to understand the behaviour of fertilizer treatments based on frass incorporation. Lab-scale mineralization experiments suggest that BSF frass alone mineralizes more slowly than chicken manure (CM) alone, but at a ratio of 90% CM-10% BSF frass, the mineralization rate of the mixture is higher than that of either frass or CM individually. For example, in the 7 days following fertilization, with the same nitrogen amount introduced across treatments, around 80% of the nitrogen supplied through 90% CM-10% BSF frass fertilization is present in the soil in mineral forms, compared to roughly 60% for commercial CM fertilization and 45% for BSF frass. This suggests that BSF frass contains a more recalcitrant form of organic nitrogen than CM, but also that BSF frass has a highly active microbiota that can increase the CM mineralization rate. Consequently, when progressive mineralization is needed, pure BSF frass may be a suitable option from an agronomic standpoint, whereas for specific crops that require spikes of readily available nitrogen (like cranberry), the fast-release 90CM-10BSF frass biofertilizer is more appropriate. Field experiments on cranberry suggest that 90CM-10BSF frass is indeed a potent candidate for organic cranberry production: currently, organic growers rely solely on CM, whose mineralization kinetics are known to match plants' needs imperfectly, a major reason for the current yield gap between the conventional and organic cranberry sectors.
Keywords: soil mineralization, biofertilizer, BSF-frass, chicken manure, soil functions, nitrogen, soil microbiota
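The kinetics comparison above can be illustrated with a standard first-order nitrogen mineralization model, N_min(t) = N0(1 - e^(-kt)). In the sketch below, the rate constants are hypothetical values chosen so the day-7 fractions roughly echo the percentages quoted in the abstract; they are not fitted study parameters:

```python
# Sketch: first-order nitrogen mineralization, N_min(t) = N0 * (1 - exp(-k * t)).
# Rate constants are illustrative, not values fitted to the study's data.
import numpy as np

def mineralized_fraction(t_days: np.ndarray, k_per_day: float) -> np.ndarray:
    """Fraction of organic N released as mineral N after t days."""
    return 1.0 - np.exp(-k_per_day * t_days)

t = np.array([0, 1, 3, 7, 14])
for label, k in [("BSF frass alone", 0.08),
                 ("CM alone", 0.13),
                 ("90% CM + 10% frass", 0.23)]:
    frac = mineralized_fraction(t, k)
    print(f"{label}: {np.round(100 * frac).astype(int)} % mineralized at days {t}")
```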
103 Comparison of the Yumul Faces Anxiety Scale to the Categorization Scale, the Numerical Verbal Rating Scale, and the State-Trait Anxiety Inventory for Preoperative Anxiety Evaluation
Authors: Ofelia Loani Elvir Lazo, Roya Yumul, David Chernobylsky, Omar Durra
Abstract:
Background: It is crucial to detect a patient's existing anxiety in order to assist patients in the perioperative setting; such anxiety is caused by the fear associated with surgical and anesthetic complications. However, the current gold standard for assessing patient anxiety, the STAI, is problematic to use in the preoperative setting, given the duration and concentration required to complete the 40-item questionnaire. Our primary aim in the study is to investigate the correlation of the Yumul Visual Facial Anxiety Scale (VFAS) and the Numerical Verbal Rating Scale (NVRS) to the State-Trait Anxiety Inventory (STAI) to determine the optimal anxiety scale to use in the perioperative setting. Methods: A clinical study of patients undergoing various surgeries was conducted utilizing each of the preoperative anxiety scales. Inclusion criteria included patients undergoing elective surgeries, while exclusion criteria included patients with anesthesia contraindications, inability to comprehend instructions, impaired judgement, substance abuse history, and those pregnant or lactating. 293 patients were analyzed in terms of demographics, anxiety scale survey results, and anesthesia data, with Spearman coefficients, chi-squared analysis, and Fisher's exact test utilized for comparative analysis. Results: Statistical analysis showed that VFAS had a higher correlation to STAI than NVRS (rs=0.66, p<0.0001 vs. rs=0.64, p<0.0001). The combined VFAS-Categorization Scores showed the highest correlation with the gold standard (rs=0.72, p<0.0001). Subgroup analysis showed similar results. STAI evaluation time (247.7 ± 54.81 sec) far exceeds that of VFAS (7.29 ± 1.61 sec), NVRS (7.23 ± 1.60 sec), and the Categorization scale (7.29 ± 1.99 sec). Patients preferred VFAS (54.4%), Categorization (11.6%), and NVRS (8.8%). Anesthesiologists preferred VFAS (63.9%), NVRS (22.1%), and Categorization Scales (14.0%). Of note, the top five causes of preoperative anxiety were determined to be waiting (56.5%), pain (42.5%), family concerns (40.5%), no information about surgery (40.1%), and anesthesia (31.6%). Conclusions: Both VFAS and the Categorization test take significantly less time than STAI, which is critical in the preoperative setting. The combined VFAS-Categorization Score (VCS) demonstrates the highest correlation to the gold standard, STAI. Among both patients and anesthesiologists, VFAS was the most preferred scale. This forms the basis of the Yumul Faces Anxiety Scale, designed for quick quantization and assessment in the preoperative setting while maintaining a high correlation to the gold standard. Additional studies using the formulated Yumul Faces Anxiety Scale are merited.
Keywords: numerical verbal anxiety scale, preoperative anxiety, state-trait anxiety inventory, visual facial anxiety scale
102 Design of a Low-Cost, Portable, Sensor Device for Longitudinal, At-Home Analysis of Gait and Balance
Authors: Claudia Norambuena, Myissa Weiss, Maria Ruiz Maya, Matthew Straley, Elijah Hammond, Benjamin Chesebrough, David Grow
Abstract:
The purpose of this project is to develop a low-cost, portable sensor device that can be used at home for long-term analysis of gait and balance abnormalities. One area of particular concern involves the asymmetries in movement and balance that can accompany certain types of injuries and/or the devices used in the repair and rehabilitation process (e.g., splints and casts), which can often increase the chance of falls and additional injuries. This device has the capacity to monitor a patient during the rehabilitation process after injury or operation, increasing the patient's access to healthcare while decreasing the number of visits to the patient's clinician. The sensor device may thereby improve the quality of the patient's care, particularly in rural areas where access to the clinician could be limited, while simultaneously decreasing the overall cost associated with the patient's care. The device consists of nine interconnected accelerometer/gyroscope/compass chips (9-DOF IMU, Adafruit, New York, NY). The sensors attach to and are used to determine the orientation and acceleration of the patient's lower abdomen, C7 vertebra (lower neck), L1 vertebra (middle back), the anterior side of each thigh and tibia, and the dorsal side of each foot. In addition, pressure sensors are embedded in shoe inserts, with one sensor (ESS301, Tekscan, Boston, MA) beneath the heel and three sensors (Interlink 402, Interlink Electronics, Westlake Village, CA) beneath the metatarsal bones of each foot. These sensors measure the distribution of the weight applied to each foot as well as stride duration. A small microcontroller (Arduino Mega, Arduino, Ivrea, Italy) is used to collect data from these sensors in a CSV file. MATLAB is then used to analyze the data and output the hip, knee, ankle, and trunk angles projected on the sagittal plane. The open-source program Processing is then used to generate an animation of the patient's gait. The accuracy of the sensors was validated through comparison to goniometric measurements (±2° error). The sensor device was also shown to have sufficient sensitivity to observe various gait abnormalities. Several patients used the sensor device, and the data collected from each represented the patient's movements. Further, the sensors were found to be able to observe gait abnormalities caused by the addition of a small amount of weight (4.5-9.1 kg) to one side of the patient. The user-friendly interface and portability of the sensor device will help to construct a bridge between patients and their clinicians with fewer necessary inpatient visits.
Keywords: biomedical sensing, gait analysis, outpatient, rehabilitation
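One common way to obtain sagittal-plane segment angles from 9-DOF IMU data, of the kind the MATLAB step above performs, is a complementary filter that blends integrated gyroscope rate with accelerometer tilt. The sketch below is illustrative, not the authors' code; the sampling rate, filter constant, and sensor streams are assumptions:

```python
# Sketch: sagittal-plane segment tilt from IMU samples via a complementary
# filter; knee flexion is estimated as the difference of two segment angles.
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """samples: iterable of (accel_y, accel_z, gyro_x) in g, g, deg/s.
    Returns the estimated segment tilt (degrees) after the last sample."""
    angle = 0.0
    for ay, az, gx in samples:
        accel_angle = math.degrees(math.atan2(ay, az))     # tilt from gravity
        angle = alpha * (angle + gx * dt) + (1 - alpha) * accel_angle
    return angle

# Hypothetical thigh and tibia IMU streams (100 samples at 100 Hz)
thigh = [(0.26, 0.97, 1.2)] * 100  # accelerometer tilt near 15 degrees
tibia = [(0.71, 0.71, 0.8)] * 100  # accelerometer tilt near 45 degrees

knee_flexion = complementary_filter(tibia) - complementary_filter(thigh)
print(f"estimated knee flexion: {knee_flexion:.1f} degrees")
```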
101 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability problems. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involved four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a new type of continuous performance test, the Seek-X type, is introduced. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater reliability problems or human observation and is not reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
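The evaluation loop described above, a random forest scored with leave-one-out cross-validation, can be sketched in a few lines of scikit-learn. The feature matrix and labels below are random placeholders, not the study's data:

```python
# Sketch: random forest classifier scored with leave-one-out cross-validation.
# X and y are random placeholders for the study's 59 sessions x 9 features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(59, 9))     # 59 sessions x 9 features (eye gaze, EEG, pose, ...)
y = rng.integers(0, 2, size=59)  # 1 = engaged, 0 = disengaged (hypothetical labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())  # one held-out session per fold
print(f"LOOCV accuracy: {scores.mean():.3f}")

# Feature importances indicate which sensor modality drives the prediction,
# analogous to the finding that eye gaze was the most informative feature.
clf.fit(X, y)
print(np.argsort(clf.feature_importances_)[::-1])
```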
Procedia PDF Downloads 94
100 Neural Synchronization - The Brain’s Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain's subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality's algorithm. The subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like 'time is relative,' but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles its measurements around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain's physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain's linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state because predictable motion already exists in the shared framework. The brain's synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds, the eyes dump their sensory data into the thalamus. The thalamus is going to perform a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick. The thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation. Basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available because other observation times are slower than thalamic measurement time. Life in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What's interesting is that time dilation is not the problem; it's the solution. Einstein said there was no universal time.
Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
Procedia PDF Downloads 126
99 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology
Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal
Abstract:
Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring microbiologically safe water is supplied at the customer's tap. In order to simulate how chloramine behaves when it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package WaterGEMS. The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimizing these parameters to obtain the closest match with actual measured data in a real DWDS would result in cost reduction as well as reduced consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of water quality parameters (i.e., temperature, pH, and initial mono-chloramine concentration) to maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to conduct the optimization of the three independent water quality parameters. High and low levels of the water quality parameters were imposed as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature, and initial mono-chloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS. It was found that at a pH of 7.75, a temperature of 34.16 °C, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network would be minimized to 0.189, and the optimum conditions for averaged water supply occurred at a pH of 7.71, a temperature of 18.12 °C, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology to predict the mono-chloramine residual has great potential for water treatment plant operators in accurately estimating the mono-chloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology for other water samples.
Keywords: chloramine decay, modelling, response surface methodology, water quality parameters
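As a minimal sketch of the response surface idea described above (synthetic design points and RMSE values, not the Design Expert workflow), the following fits a full quadratic model of RMSE in pH, temperature, and initial mono-chloramine dose, then minimizes it within the experimental ranges so that no extrapolation occurs.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical design points (pH, temperature degC, initial dose mg/L)
# and the model RMSE observed at each point (all values invented).
X = np.array([[7.0, 15.0, 3.0], [8.0, 15.0, 3.0], [7.0, 35.0, 3.0],
              [8.0, 35.0, 3.0], [7.0, 15.0, 5.0], [8.0, 15.0, 5.0],
              [7.0, 35.0, 5.0], [8.0, 35.0, 5.0], [7.5, 25.0, 4.0],
              [7.0, 25.0, 4.0], [8.0, 25.0, 4.0], [7.5, 15.0, 4.0],
              [7.5, 35.0, 4.0], [7.5, 25.0, 3.0]])
rmse = np.array([0.41, 0.35, 0.30, 0.28, 0.38, 0.31,
                 0.27, 0.26, 0.20, 0.25, 0.24, 0.27,
                 0.22, 0.24])

def design(X):
    """Full quadratic design matrix: intercept, linear, interaction, square terms."""
    p, t, c = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones_like(p), p, t, c, p*t, p*c, t*c,
                            p**2, t**2, c**2])

beta, *_ = np.linalg.lstsq(design(X), rmse, rcond=None)  # fit the surface
surface = lambda x: (design(np.atleast_2d(x)) @ beta)[0]  # predicted RMSE

bounds = [(7.0, 8.0), (15.0, 35.0), (3.0, 5.0)]           # explicit constraints
res = minimize(surface, x0=[7.5, 25.0, 4.0], bounds=bounds)
print("Optimum (pH, T, dose):", res.x.round(2), "predicted RMSE:", round(res.fun, 3))
```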
Procedia PDF Downloads 224
98 Evolutionary Advantages of Loneliness with an Agent-Based Model
Authors: David Gottlieb, Jason Yoder
Abstract:
The feeling of loneliness is not uncommon in modern society, and yet, there is a fundamental lack of understanding of its origins and purpose in nature. One interpretation of loneliness is that it is a subjective experience that punishes a lack of social behavior, and thus its emergence in human evolution is seemingly tied to the survival of early human tribes. Still, a common counterintuitive response to loneliness is a state of hypervigilance, resulting in social withdrawal, which may appear maladaptive in modern society. So far, no computational model of the effect of loneliness during evolution exists; however, agent-based models (ABMs) can be used to investigate social behavior, and applying evolution to agents' behaviors can demonstrate selective advantages for particular behaviors. We propose an ABM in which each agent contains four social behaviors and one goal-seeking behavior, letting evolution select the best behavioral patterns for resource allocation. In our paper, we use an algorithm similar to the boid model to guide the behavior of agents, but expand the set of rules that govern their behavior. While we use cohesion, separation, and alignment for simple social movement, our expanded model adds goal-oriented behavior, inspired by particle swarm optimization, such that agents move relative to their personal best position. Since agents are given the ability to form connections by interacting with each other, our final behavior guides agent movement toward its social connections. Finally, we introduce a mechanism to represent a state of loneliness, which engages when an agent's perceived social involvement does not meet its expected social involvement (see the sketch after this abstract). This enables us to investigate a minimal model of loneliness, and using evolution, we attempt to elucidate its value in human survival. Agents are placed in an environment in which they must acquire resources, as their fitness is based on the total resources collected. With these rules in place, we are able to run evolution under various conditions, including resource-rich environments and the presence of disease. Our simulations indicate that there is strong selection pressure for social behavior under circumstances where there is a clear discrepancy between initial resource locations, and against social behavior when disease is present, mirroring hypervigilance. This not only provides an explanation for the emergence of loneliness but also reflects the diversity of responses to loneliness in the real world. In addition, there is evidence of a richness of social behavior when loneliness is present. By introducing just two resource locations, we observed a divergence in social motivation after agents became lonely, where one agent learned to move to the other, who was in a better resource position. The results and ongoing work from this project show that it is possible to glean insight into the evolutionary advantages of even simple mechanisms of loneliness. The model we developed has produced unexpected results and has led to more questions, such as the impact loneliness would have at a larger scale, or the effect of creating a set of rules governing interaction beyond adjacency.
Keywords: agent-based, behavior, evolution, loneliness, social
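A minimal sketch of the kind of agent update described above, with all weights and thresholds hypothetical: boid-style cohesion, a PSO-like pull toward the personal best position, and a loneliness flag that engages when perceived social involvement falls below an expected level. In the actual model the behavioral weights, including the response to loneliness, are themselves subject to evolution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
pos = rng.uniform(0, 100, (n, 2))       # agent positions
vel = np.zeros((n, 2))
best = pos + rng.normal(0, 5, (n, 2))   # PSO-style personal best positions
expected_social = 3                      # expected number of neighbours (assumed)

def step(pos, vel, best):
    new_pos = pos.copy()
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        neighbours = (d > 0) & (d < 10)
        # loneliness engages when perceived involvement < expected involvement
        lonely = neighbours.sum() < expected_social
        cohesion = (pos[neighbours].mean(axis=0) - pos[i]) if neighbours.any() else 0.0
        goal = best[i] - pos[i]          # pull toward personal best (PSO-like)
        gain = 2.0 if lonely else 1.0    # here lonely agents simply seek others harder
        vel[i] = 0.9 * vel[i] + gain * 0.05 * cohesion + 0.05 * goal
        new_pos[i] = pos[i] + vel[i]
    return new_pos, vel

for _ in range(100):
    pos, vel = step(pos, vel, best)
```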
Procedia PDF Downloads 96
97 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data
Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau
Abstract:
Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within the contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consisted of three Python scripts that could all be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We next compared our automated pipeline outputs with the outputs of manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion, allowing computer vision to better distinguish between cells and non-cells. Its results were also comparable to manually analyzed results, but with significantly reduced acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline's cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium imaging recordings from neuronal cell bodies in neuronal cell cultures. Our new goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
Keywords: calcium imaging, computer vision, neural activity, neural networks
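A minimal sketch of the contour-detection step described above (file name, threshold, and area cutoff are hypothetical): grayscale conversion, binary thresholding, OpenCV contour extraction, and per-contour area, centroid, and mean fluorescence.

```python
import cv2
import numpy as np

frame = cv2.imread("calcium_frame.png")              # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # grayscale conversion
_, binary = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)  # binarize
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

mask = np.zeros_like(gray)
for c in contours:
    area = cv2.contourArea(c)
    if area < 20:                                    # skip non-cell specks
        continue
    m = cv2.moments(c)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # contour centroid
    cv2.drawContours(mask, [c], -1, 255, -1)         # fill this contour
    mean_f = gray[mask > 0].mean()                   # mean fluorescence inside it
    print(f"cell at ({cx:.0f},{cy:.0f}) area={area:.0f} meanF={mean_f:.1f}")
    mask[:] = 0                                      # reset for next contour
```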
Procedia PDF Downloads 82
96 Violent, Psychological, Sexual and Abuse-Related Emergency Department Usage amongst Pediatric Victims of Physical Assault and Gun Violence: A Case-Control Study
Authors: Mary Elizabeth Bernardin, Margie Batek, Joseph Moen, David Schnadower
Abstract:
Background: Injuries due to interpersonal violence are a common reason for emergency department (ED) visits amongst the American pediatric population. Gun violence, in particular, is associated with high morbidity and mortality as well as financial costs. Patterns of pediatric ED usage may be an indicator of risk for future violence, but very little data on the topic exists. Objective: The aims of this study were to assess the frequencies of ED usage for previous interpersonal violence, mental/behavioral issues, sexual/reproductive issues, and concerns for abuse in youths presenting to EDs due to physical assault injuries (PAIs) compared to firearm injuries (FIs). Methods: In this retrospective case-control study, ED charts of children ages 8-19 years who presented with injuries due to interpersonal violent encounters from 2014-2017 were reviewed. Data were collected regarding all previous ED visits for injuries due to interpersonal violence (including physical assaults and firearm injuries), mental/behavioral health visits (including depression, suicidal ideation, suicide attempt, homicidal ideation, and violent behavior), sexual/reproductive health visits (including sexually transmitted infections and pregnancy-related issues), and concerns for abuse (including physical abuse or domestic violence, neglect, sexual abuse, sexual assault, and intimate partner violence). Logistic regression was used to identify predictors of gun violence based on previous ED visits amongst physical assault-injured versus firearm-injured youths. Results: A total of 407 patients presenting to the ED for an interpersonal violent encounter were analyzed, 251 (62%) of which were due to physical assault injuries (PAIs) and 156 (38%) due to firearm injuries (FIs). The majority of both PAI and FI patients had no previous history of ED visits for violence, mental/behavioral health, sexual/reproductive health, or concern for abuse (60.8% PAI, 76.3% FI). 19.2% of PAI and 13.5% of FI youths had previous ED visits for physical assault injuries (OR 0.68, P=0.24, 95% CI 0.36 to 1.29). 1.6% of PAI and 3.2% of FI youths had a history of ED visits for previous firearm injuries (OR 3.6, P=0.34, 95% CI 0.04 to 2.95). 10% of PAI and 3.8% of FI youths had previous ED visits for mental/behavioral health issues (OR 0.91, P=0.80, 95% CI 0.43 to 1.93). 10% of PAI and 2.6% of FI youths had previous ED visits due to concerns for abuse (OR 0.76, P=0.55, 95% CI 0.31 to 1.86). Conclusions: There are no statistically significant differences between physical assault-injured and firearm-injured youths in terms of ED usage for previous violent injuries, mental/behavioral health visits, sexual/reproductive health visits, or concerns for abuse. However, violently injured youths in this study have more than twice the number of previous ED visits for physical assaults and mental health issues than previous literature indicates. Data comparing ED usage of victims of interpersonal violence to nonviolent ED patients is needed, but this study supports the notion that EDs may be a useful place for identification of and enrollment in interventions for youths most at risk for future violence.
Keywords: child abuse, emergency department usage, pediatric gun violence, pediatric interpersonal violence, pediatric mental health, pediatric reproductive health
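As a simple illustration of the case-control comparison (synthetic counts, not the study data), the following computes the odds ratio and 95% CI for prior-ED-visit exposure from a 2x2 table; this is the univariable analogue of the logistic regression used in the study.

```python
import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

# Rows: firearm-injured (cases), physical-assault-injured (controls)
# Columns: prior ED visit, no prior ED visit (all counts hypothetical)
table = np.array([[21, 135],
                  [48, 203]])
t = Table2x2(table)
lo, hi = t.oddsratio_confint()
print(f"OR={t.oddsratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}, p={t.oddsratio_pvalue():.3f}")
```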
Procedia PDF Downloads 235
95 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students' Learning
Authors: Ioanna Taouki, Marie Lallier, David Soto
Abstract:
Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectories in early childhood, when children begin to receive formal education in reading. Here, we evaluate the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged between 6 and 7 (N=60), who performed three two-alternative-forced-choice tasks (two linguistic: lexical decision task, visual attention span task, and one non-linguistic: emotion recognition task) including trial-by-trial confidence judgements. Our study has three aims. First, we investigated how metacognitive ability (i.e., how confidence ratings track accuracy in the task) relates to performance in general standardized tasks related to students' reading and general cognitive abilities using Spearman's and Bayesian correlation analysis. Second, we assessed whether or not young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations in metacognitive measures across different task domains and evaluating cross-task covariance by applying a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability in this early stage is related to the longitudinal learning of children in a linguistic and a non-linguistic task. Notably, we did not observe any association between students' reading skills and metacognitive processing in this early stage of reading acquisition. Some evidence consistent with domain-general metacognition was found, with significant positive correlations between metacognitive efficiency between lexical and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance in the visual attention span and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities and further stress the importance of creating educational programs that foster students' metacognitive ability as a tool for long term learning. More research is crucial to understand whether these programs can enhance metacognitive ability as a transferable skill across distinct domains or whether unique domains should be targeted separately.
Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition
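As an illustration of how confidence ratings can be scored against accuracy (synthetic trials; this is not the hierarchical Bayesian estimator used in the study), the sketch below computes the type-2 ROC area, i.e., how well trial-by-trial confidence discriminates correct from incorrect responses.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
correct = rng.integers(0, 2, 200)        # 1 = correct response on each trial
# For a metacognitive child, confidence is noisily higher on correct trials
confidence = np.clip(0.5 + 0.2 * correct + rng.normal(0, 0.15, 200), 0, 1)

auroc2 = roc_auc_score(correct, confidence)
print(f"AUROC2 = {auroc2:.2f}  (0.5 = no metacognitive sensitivity)")
```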
Procedia PDF Downloads 150
94 Honneth, Feenberg, and the Redemption of Critical Theory of Technology
Authors: David Schafer
Abstract:
Critical Theory is in sore need of a workable account of technology. It had one in the writings of Herbert Marcuse, or so it seemed until Jürgen Habermas mounted a critique in 'Technology and Science as Ideology' (Habermas, 1970) that decisively put it away. Ever since, Marcuse's work has been regarded as outdated: a 'philosophy of consciousness' no longer seriously tenable. But with Marcuse's view has gone the important insight that technology is no norm-free system (as Habermas portrays it) but can be laden with social bias. Andrew Feenberg is among a few serious scholars who have perceived this problem in post-Habermasian critical theory and has sought to revive a basically Marcusean account of technology. On his view, while so-called 'technical elements' that physically make up technologies are neutral with regard to social interests, there is a sense in which we may speak of a normative grammar or 'technical code' built into technology that can be socially biased in favor of certain groups over others (Feenberg, 2002). According to Feenberg, those perspectives on technology are reified which consider technology only by their technical elements to the neglect of their technical codes. Nevertheless, Feenberg's account fails to explain what is normatively problematic with such reified views of technology. His plausible claim that they represent false perspectives on technology does not by itself explain how such views may be oppressive, even though Feenberg surely intends that stronger level of normative theorizing. Perceiving this deficit in his own account of reification, he tries to adopt Habermas's version of systems theory to ground his own critical theory of technology (Feenberg, 1999). But this is a curious move in light of Feenberg's own legitimate critiques of Habermas's portrayals of technology as reified or 'norm-free.' This paper argues that a better foundation may be found in Axel Honneth's recent text, Freedom's Right (Honneth, 2014). Though Honneth there says little explicitly about technology, he offers an implicit account of reification formulated in opposition to Habermas's systems-theoretic approach. On this 'normative functionalist' account of reification, social spheres are reified when participants prioritize individualist ideals of freedom (moral and legal freedom) to the neglect of an intersubjective form of freedom-through-recognition that Honneth calls 'social freedom.' Such misprioritization is ultimately problematic because it is unsustainable: individual freedom is philosophically and institutionally dependent upon social freedom. The main difficulty in adopting Honneth's social theory for the purposes of a theory of technology, however, is that the notion of social freedom is predicable only of social institutions, whereas it appears difficult to conceive of technology as an institution. Nevertheless, in light of Feenberg's work, the idea that technology includes within itself a normative grammar (technical code) takes on much plausibility. To the extent that this normative grammar may be understood by the category of social freedom, Honneth's dialectical account of the relationship between individual and social forms of freedom provides a more solid basis from which to ground the normative claims of Feenberg's sociological account of technology than Habermas's systems theory.
Keywords: Habermas, Honneth, technology, Feenberg
Procedia PDF Downloads 197
93 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, there are challenges with these imaging experiments which can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limitation on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image has a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality (clear lining and shape of the membrane, clearly showing the boundaries of each cell) proportionally improved nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need to use multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
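A minimal sketch (hypothetical array shapes and intensities) of the normalization step described above: rescale each image of a confocal z-stack so that its mean pixel intensity is 0.5 before it is fed to the label-prediction network.

```python
import numpy as np

def normalize_stack(stack):
    """stack: (n_slices, H, W) array of fluorescence intensities.
    Returns a copy where each slice has mean pixel intensity 0.5."""
    out = np.empty(stack.shape, dtype=float)
    for i, img in enumerate(stack):
        m = img.mean()
        out[i] = img * (0.5 / m) if m > 0 else img  # guard against empty slices
    return out

zstack = np.random.rand(20, 256, 256) * 1800.0      # hypothetical membrane z-stack
norm = normalize_stack(zstack)
print(norm.mean(axis=(1, 2))[:3])                   # each slice now has mean ~0.5
```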
Procedia PDF Downloads 205
92 The Confluence between Autism Spectrum Disorder and the Schizoid Personality
Authors: Murray David Schane
Abstract:
Through years of clinical encounters with patients with autism spectrum disorders and those with a schizoid personality, the many defining diagnostic features shared between these conditions have been explored, current neurobiological differences have been reviewed, and critical and distinct treatment strategies for each have been devised. The paper compares and contrasts the apparent similarities between autism spectrum disorders and the schizoid personality found in these DSM descriptive categories: restricted range of social-emotional reciprocity; poor non-verbal communicative behavior in social interactions; difficulty developing and maintaining relationships; detachment from social relationships; lack of the desire for or enjoyment of close relationships; and preference for solitary activities. In this paper, autism, fundamentally a communicative disorder, is revealed to present clinically as a pervasive aversive response to efforts to engage with or be engaged by others. Autists with the Asperger presentation typically have language but have difficulty understanding humor, irony, sarcasm, metaphoric speech, and even narratives about social relationships. They also tend to seek sameness, possibly to avoid problems of social interpretation. Repetitive behaviors engage many autists as a screen against ambient noise, social activity, and challenging interactions. Also in this paper, the schizoid personality is revealed as a pattern of social avoidance, self-sufficiency, and apparent indifference to others that serves as a complex psychological defense against a deep, long-abiding fear of appropriation and perverse manipulation. Neither genetic nor MRI studies have yet located the explanatory data that identifies the cause or the neurobiology of autism. Similarly, studies of the schizoid have yet to group that condition with those found in schizophrenia. Through presentations of clinical examples, the treatment of autists of the Asperger type is revealed to address the autist's extreme social aversion, which also precludes the experience of empathy. Autists will be revealed as forming social attachments but without the capacity to interact with mutual concern. Empathy will be shown to be teachable, and, as social avoidance relents, autists can learn to recognize and acknowledge the meaning and signs of empathic needs. Treatment of schizoids will be shown to revolve around joining empathically with the schizoid's apprehensions about interpersonal, interactive proximity. Models of both autism and schizoid personality traits have yet to be replicated in animals, thereby eliminating the role of translational research in providing the kind of clues to behavioral patterns that can be related to genetic, epigenetic, and neurobiological measures. But as these clinical examples will attest, treatment strategies have significant impact.
Keywords: autism spectrum, schizoid personality traits, neurobiological implications, critical diagnostic distinctions
Procedia PDF Downloads 114
91 Assessment of Surface Water Quality near Landfill Sites Using a Water Pollution Index
Authors: Alejandro Cittadino, David Allende
Abstract:
Landfilling of municipal solid waste is a common waste management practice in Argentina, as in many parts of the world. There is extensive scientific literature on the potential negative effects of landfill leachates on the environment, so it is necessary to be rigorous with control and monitoring systems. Due to the specific municipal solid waste composition in Argentina, local landfill leachates contain large amounts of organic matter (biodegradable, but also refractory to biodegradation), as well as ammonia-nitrogen, small traces of some heavy metals, and inorganic salts. In order to investigate the surface water quality in the Reconquista river adjacent to the Norte III landfill, water samples both upstream and downstream of the dumpsite are collected quarterly and analyzed for 43 parameters, including organic matter, heavy metals, and inorganic salts, as required by local standards. The objective of this study is to apply a water quality index that considers the leachate characteristics in order to determine the quality status of the watercourse as it passes the landfill. The water pollution index method has been widely used in water quality assessments, particularly of rivers, and it has played an increasingly important role in water resource management, since it provides a single number, simple enough for the public to understand, that states the overall water quality at a certain location and time. The chosen water quality index (ICA) is based on the values of six parameters: dissolved oxygen (in mg/l and percent saturation), temperature, biochemical oxygen demand (BOD5), ammonia-nitrogen, and chloride (Cl-) concentration. The ICA index was determined both upstream and downstream in the Reconquista river, on a rating scale between 0 (very poor water quality) and 10 (excellent water quality). The monitoring results indicated that the water quality was unaffected by possible leachate runoff, since the index scores upstream and downstream were ranked in the same category, although in general most of the samples were classified as having poor water quality according to the index's scale. The annual averaged ICA index scores (computed quarterly) were 4.9, 3.9, 4.4, and 5.0 upstream and 3.9, 5.0, 5.1, and 5.0 downstream during the study period between 2014 and 2017. Additionally, the water quality seemed to exhibit distinct seasonal variations, probably due to annual precipitation patterns in the study area. The ICA water quality index appears to be appropriate to evaluate landfill impacts, since it accounts mainly for organic pollution and inorganic salts, consistent with the absence of heavy metals in the local leachate composition; however, the inclusion of other parameters could be more decisive in discerning the stream reaches affected by landfill activities. Future work may consider adding other parameters to the index, such as total organic carbon (TOC) and total suspended solids (TSS), since they are present in the leachate at high concentrations.
Keywords: landfill, leachate, surface water, water quality index
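The abstract does not give the ICA formula, so the sub-index curves and bounds below are purely hypothetical; the sketch only illustrates the typical structure of such indices: rate each of the six parameters on a 0-10 scale, then combine them into a single score.

```python
def subindex(value, worst, best):
    """Linearly map a measurement onto a 0-10 quality sub-score (clamped)."""
    s = 10 * (value - worst) / (best - worst)
    return max(0.0, min(10.0, s))

# Hypothetical river sample: DO (mg/l), DO (% sat), temp (degC),
# BOD5 (mg/l), ammonia-N (mg/l), chloride (mg/l)
sample = dict(do_mgl=4.2, do_sat=55.0, temp=22.0, bod5=18.0, nh3n=6.5, cl=210.0)
scores = [
    subindex(sample["do_mgl"], worst=0, best=9),     # more oxygen is better
    subindex(sample["do_sat"], worst=0, best=100),
    subindex(sample["temp"], worst=35, best=15),     # cooler is better here
    subindex(sample["bod5"], worst=30, best=0),      # less organic load is better
    subindex(sample["nh3n"], worst=10, best=0),
    subindex(sample["cl"], worst=400, best=0),
]
ica = sum(scores) / len(scores)                      # 0 = very poor, 10 = excellent
print(f"ICA = {ica:.1f}")
```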
Procedia PDF Downloads 151
90 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches
Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys
Abstract:
Reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), providing interconnection between components, is key to reliability. During the last decades, PCB technologies have evolved to sustain and/or fulfill increased original equipment manufacturer (OEM) requirements and specifications: higher densities and better performance, faster time to market and longer lifetimes, newer materials and mixed buildups. From the very beginning of the PCB industry up to recently, qualification, experiments, and trial and error were the most popular methods to assess system (PCB) reliability. Nowadays, OEMs, PCB manufacturers, and scientists are working closely together to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to precisely characterize the base materials (laminates, electrolytic copper, …) in order to understand failure mechanisms and simulate PCB aging under environmental constraints by means of, for example, the finite element method. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated due to the thickness of the laminate (a few hundred microns). It has to be noted that knowledge of the out-of-plane properties is fundamental to investigating the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them. The methodology has been applied to one laminate used in hyperfrequency spatial applications in order to get its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through-hole in a double-sided PCB are performed. Results show the major influence of the out-of-plane properties, and of their temperature dependency, on the lifetime of a printed circuit board. Acknowledgements: The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01 and the support of CNES, Thales Alenia Space, and Cimulec are acknowledged.
Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites
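A minimal sketch of the inverse method described above, using a toy mixing model and hypothetical numbers in place of the full analytical-numerical homogenization: search for the resin modulus that makes the homogenized in-plane modulus match the measured value.

```python
from scipy.optimize import brentq

E_fibre = 72.0e9   # glass fibre modulus, Pa (typical handbook value)
v_fibre = 0.45     # fibre volume fraction (assumed)
E_meas = 24.0e9    # measured in-plane laminate modulus, Pa (hypothetical)

def homogenized_modulus(E_resin):
    # Toy model standing in for the real homogenization: average the
    # Voigt (parallel) and Reuss (series) bounds of the fibre/resin mix.
    voigt = v_fibre * E_fibre + (1 - v_fibre) * E_resin
    reuss = 1.0 / (v_fibre / E_fibre + (1 - v_fibre) / E_resin)
    return 0.5 * (voigt + reuss)

# Root-find the resin modulus that reproduces the measured laminate modulus
E_resin = brentq(lambda E: homogenized_modulus(E) - E_meas, 0.1e9, 20e9)
print(f"Estimated resin modulus: {E_resin / 1e9:.2f} GPa")
```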
Procedia PDF Downloads 204