Search results for: Laura Boschis
56 MiRNA Expression Profile is Different in Human Amniotic Mesenchymal Stem Cells Isolated from Obese Compared to Normal Weight Women
Authors: Carmela Nardelli, Laura Iaffaldano, Valentina Capobianco, Antonietta Tafuto, Maddalena Ferrigno, Angela Capone, Giuseppe Maria Maruotti, Maddalena Raia, Rosa Di Noto, Luigi Del Vecchio, Pasquale Martinelli, Lucio Pastore, Lucia Sacchetti
Abstract:
Maternal obesity and nutrient excess in utero increase the risk of future metabolic diseases in adult life. The mechanisms underlying this process are probably based on genetic and epigenetic alterations and changes in foetal nutrient supply. In mammals, the placenta is the main interface between foetus and mother; it regulates intrauterine development, modulates adaptive responses to suboptimal in utero conditions, and is also an important source of human amniotic mesenchymal stem cells (hA-MSCs). We previously highlighted a specific microRNA (miRNA) profile in amnion from obese (Ob) pregnant women; here we compared the miRNA expression profile of hA-MSCs isolated from Ob- and control (Co) women, aiming to detect any alterations in metabolic pathways that could predispose the newborn to the obese phenotype. Methods: We isolated, at delivery, hA-MSCs from the amnion of 16 Ob- and 7 Co-women with pre-pregnancy body mass index (mean/SEM) of 40.3/1.8 and 22.4/1.0 kg/m², respectively. hA-MSCs were phenotyped by flow cytometry. In total, 384 miRNAs were evaluated by the TaqMan Array Human MicroRNA Panel v 1.0 (Applied Biosystems). Using the TargetScan program, we selected the target genes of the miRNAs differentially expressed in Ob- vs Co-hA-MSCs; then, using the KEGG database, we selected the statistically significant biological pathways. Results: The immunophenotype characterization confirmed the mesenchymal origin of the isolated hA-MSCs. A large percentage of the tested miRNAs, about 61.4% (232/378), were expressed in hA-MSCs, whereas 38.6% (146/378) were not. Most of the expressed miRNAs (89.2%, 207/232) did not differ between Ob- and Co-hA-MSCs and were not further investigated. Conversely, 4.8% of miRNAs (11/232) were higher and 6.0% (14/232) were lower in Ob- vs Co-hA-MSCs. Interestingly, 7/232 miRNAs were obesity-specific, being expressed only in hA-MSCs isolated from obese women.
Bioinformatics showed that these miRNAs significantly regulated (P<0.001) genes belonging to several metabolic pathways, i.e. MAPK signalling, actin cytoskeleton, focal adhesion, axon guidance, insulin signalling, etc. Conclusions: Our preliminary data highlight an altered miRNA profile in Ob- vs Co-hA-MSCs and suggest that an epigenetic miRNA-based mechanism of gene regulation could affect pathways involved in placental growth and function, thereby potentially increasing the newborn’s risk of metabolic diseases in adult life. Keywords: hA-MSCs, obesity, miRNA, biosystem
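The grouping reported above (not expressed / unchanged / higher / lower / obesity-specific) can be sketched as a simple classification rule. All miRNA names and expression values below are invented for illustration; the study's actual detection calls came from the TaqMan array.

```python
# A minimal sketch of the miRNA classification logic described above:
# a miRNA detected only in obese-derived samples is "obesity-specific";
# one detected in both groups is compared between Ob and Co.
# All names and expression values are invented for illustration.

def classify_mirna(ob, co):
    """Classify one miRNA from its Ob and Co levels (None = not detected)."""
    if ob is None and co is None:
        return "not expressed"
    if co is None:
        return "obesity-specific"
    if ob is None:
        return "control-only"
    if ob > co:
        return "higher in Ob"
    if ob < co:
        return "lower in Ob"
    return "unchanged"

profiles = {                       # miRNA -> (Ob level, Co level)
    "miR-A": (5.2, 5.1),
    "miR-B": (8.0, 2.0),
    "miR-C": (None, None),
    "miR-D": (3.3, None),          # expressed only in obese-derived hA-MSCs
}
for name, (ob, co) in profiles.items():
    print(name, "->", classify_mirna(ob, co))
```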
Procedia PDF Downloads 527
55 Methods of Detoxification of Nuts With Aflatoxin B1 Contamination
Authors: Auteleyeva Laura, Maikanov Balgabai, Smagulova Ayana
Abstract:
In order to identify and select detoxification methods, a patent and literature search was conducted, as a result of which 68 patents for inventions were found: 14 from the near abroad (Russia) and, from farther abroad, China – 27, USA – 6, South Korea – 1, Germany – 2, Mexico – 4, Yugoslavia – 7, and one patent document each from Austria, Taiwan, Belarus, Denmark, Italy, Japan, and Canada. Aflatoxin B₁ in various nuts was determined by two methods: the enzyme immunoassay "RIDASCREEN® FAST Aflatoxin", with optical density measured on a RIDA®ABSORPTION 96 microplate spectrophotometer with RIDASOFT® Win.NET software (Germany), and high-performance liquid chromatography (HPLC, Waters Corporation, USA) according to GOST 30711-2001. For experimental contamination of nuts, the A. flavus KWIK-STIK strain was cultivated on Czapek medium (France), with subsequent infection of various nuts (peanuts, peanuts in shell, badam, walnuts with and without shells, pistachios). Based on our research, we selected two detoxification methods: method 1 – combined (5% citric acid solution + microwave at 640 W for 3 min + UV for 20 min) – and method 2 – a chemical method using leaves of the plants Artemisia terra-albae, Thymus vulgaris, and Callogonum affilium, collected in the Akmola region (Artemisia terra-albae, Thymus vulgaris) and Western Kazakhstan (Callogonum affilium). The first stage was the production of ethanol extracts of Artemisia terra-albae, Thymus vulgaris, and Callogonum affilium. To obtain them, 100 g of plant raw material was dissolved in 70% ethyl alcohol. Extraction was carried out for 2 hours at the boiling point of the solvent under a reflux condenser, using a "Sapphire" ultrasonic bath. The obtained extracts were evaporated on an IKA RV 10 rotary evaporator. In the second stage, the three extracts obtained were tested for antimicrobial and antifungal activity.
Extracts of Thymus vulgaris and Callogonum affilium showed high antimicrobial and antifungal activity. Artemisia terra-albae extract showed high antimicrobial activity and low antifungal activity. When testing method 1, the concentration of aflatoxin B1 in walnut samples of the first and third experimental groups decreased by 63 and 65%, respectively, but these values still exceeded the maximum permissible concentrations, while the nuts in the second and third experimental groups had a tart lemon flavor. When testing method 2, the concentration of aflatoxin B1 decreased by 91%, to a safe level (0.0038 mg/kg), in nuts of the 1st and 2nd experimental groups (Artemisia terra-albae, Thymus vulgaris), while in samples of the 2nd and 3rd experimental groups a decrease of aflatoxin B1 to a safe level was also observed. Keywords: nuts, aflatoxin B1, mycotoxins
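As a back-of-the-envelope check, the reported 91% reduction down to 0.0038 mg/kg implies the starting contamination level computed below. This is only arithmetic on the figures quoted in the abstract; the permissible limit applied in the study is not stated, so no limit comparison is attempted.

```python
# Arithmetic check of the reduction reported above: a 91% drop ending at
# 0.0038 mg/kg implies the initial contamination level shown below.

def percent_reduction(initial, final):
    return 100.0 * (initial - final) / initial

final = 0.0038                 # mg/kg, level reported after method 2
initial = final / (1 - 0.91)   # starting level implied by a 91% reduction
print(f"implied initial level: {initial:.4f} mg/kg")
print(f"reduction: {percent_reduction(initial, final):.0f}%")
```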
Procedia PDF Downloads 86
54 The Association of Southeast Asian Nations (ASEAN) and the Dynamics of Resistance to Sovereignty Violation: The Case of East Timor (1975-1999)
Authors: Laura Southgate
Abstract:
The Association of Southeast Asian Nations (ASEAN), as well as much of the scholarship on the organisation, celebrates its ability to uphold the principle of regional autonomy, understood as upholding the norm of non-intervention by external powers in regional affairs. Yet, in practice, this has been repeatedly violated. This dichotomy between rhetoric and practice suggests an interesting avenue for further study. The East Timor crisis (1975-1999) has been selected as a case-study to test the dynamics of ASEAN state resistance to sovereignty violation in two distinct timeframes: Indonesia’s initial invasion of the territory in 1975, and the ensuing humanitarian crisis in 1999 which resulted in a UN-mandated, Australian-led peacekeeping intervention force. These time-periods demonstrate variation on the dependent variable. It is necessary to observe covariation in order to derive observations in support of a causal theory. To establish covariation, my independent variable is therefore a continuous variable characterised by variation in convergence of interest. Change of this variable should change the value of the dependent variable, thus establishing causal direction. This paper investigates the history of ASEAN’s relationship to the norm of non-intervention. It offers an alternative understanding of ASEAN’s history, written in terms of the relationship between a key ASEAN state, which I call a ‘vanguard state’, and selected external powers. This paper will consider when ASEAN resistance to sovereignty violation has succeeded, and when it has failed. It will contend that variation in outcomes associated with vanguard state resistance to sovereignty violation can be best explained by levels of interest convergence between the ASEAN vanguard state and designated external actors. 
Evidence will be provided to support the hypothesis that in 1999, ASEAN’s failure to resist violations to the sovereignty of Indonesia was a consequence of low interest convergence between Indonesia and the external powers. Conversely, in 1975, ASEAN’s ability to resist violations to the sovereignty of Indonesia was a consequence of high interest convergence between Indonesia and the external powers. As the vanguard state, Indonesia was able to apply pressure on the ASEAN states and obtain unanimous support for Indonesia’s East Timor policy in 1975 and 1999. However, the key factor explaining the variance in outcomes in both time periods resides in the critical role played by external actors. This view represents a serious challenge to much of the existing scholarship that emphasises ASEAN’s ability to defend regional autonomy. As these cases attempt to show, ASEAN autonomy is much more contingent than portrayed in the existing literature. Keywords: ASEAN, East Timor, intervention, sovereignty
Procedia PDF Downloads 357
53 Electron Bernstein Wave Heating in the Toroidally Magnetized System
Authors: Johan Buermans, Kristel Crombé, Niek Desmet, Laura Dittrich, Andrei Goriaev, Yurii Kovtun, Daniel López-Rodriguez, Sören Möller, Per Petersson, Maja Verstraeten
Abstract:
The International Thermonuclear Experimental Reactor (ITER) will rely on three sources of external heating to produce and sustain a plasma: Neutral Beam Injection (NBI), Ion Cyclotron Resonance Heating (ICRH), and Electron Cyclotron Resonance Heating (ECRH). ECRH is a way to heat the electrons in a plasma by resonant absorption of electromagnetic waves; the energy of the electrons is transferred indirectly to the ions by collisions. The electron cyclotron heating system can be directed to deposit heat in particular regions of the plasma (https://www.iter.org/mach/Heating). ECRH at the fundamental resonance in X-mode is limited by a low cut-off density. Electromagnetic waves cannot propagate in the region between this cut-off and the Upper Hybrid Resonance (UHR) and cannot reach the Electron Cyclotron Resonance (ECR) position. Higher-harmonic heating is hence preferred in present-day heating scenarios to overcome this problem. Additional power deposition mechanisms can occur above this threshold to increase the plasma density. These include collisional losses in the evanescent region, resonant power coupling at the UHR, tunneling of the X-wave with resonant coupling at the ECR, and conversion to the Electron Bernstein Wave (EBW) with resonant coupling at the ECR. A more profound knowledge of these deposition mechanisms can help determine the optimal plasma production scenarios. Several ECRH experiments are performed on the TOroidally MAgnetized System (TOMAS) to identify the conditions for Electron Bernstein Wave (EBW) heating. Density and temperature profiles are measured with movable triple Langmuir probes in the horizontal and vertical directions. Measurements of the forward and reflected power allow evaluation of the coupling efficiency. Optical emission spectroscopy and camera images also contribute to plasma characterization.
The influence of the injected power, magnetic field, gas pressure, and wave polarization on the different deposition mechanisms is studied, and the contribution of the Electron Bernstein Wave is evaluated. The TOMATOR 1D hydrogen-helium plasma simulator numerically describes the evolution of currentless magnetized radio-frequency plasmas in a tokamak, based on Braginskii’s continuity and heat balance equations. This code was initially benchmarked with experimental data from TCV to determine the transport coefficients. The code is used to model the plasma parameters and the power deposition profiles, and the modelling is compared with the data from the experiments. Keywords: electron Bernstein wave, Langmuir probe, plasma characterization, TOMAS
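The cut-off/UHR picture above follows from a few textbook relations, sketched below. The 2.45 GHz heating frequency and the 87.5 mT field (chosen so that the fundamental ECR matches it) are illustrative assumptions, not values quoted in the abstract.

```python
import math

# Characteristic frequencies behind the cut-off / UHR discussion above:
#   f_ce = e*B / (2*pi*m_e)                 electron cyclotron frequency
#   f_pe = sqrt(n*e^2 / (eps0*m_e)) / 2*pi  electron plasma frequency
#   f_UH = sqrt(f_pe^2 + f_ce^2)            upper hybrid resonance
# Heating frequency and field strength are assumptions for illustration.
e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12

def f_ce(B):                      # Hz, for field B in tesla
    return e * B / (2 * math.pi * m_e)

def f_pe(n):                      # Hz, for density n in m^-3
    return math.sqrt(n * e**2 / (eps0 * m_e)) / (2 * math.pi)

def n_cutoff(f):                  # m^-3, density where f_pe equals f
    return eps0 * m_e * (2 * math.pi * f) ** 2 / e**2

B = 0.0875                        # T, puts the fundamental ECR near 2.45 GHz
f = 2.45e9                        # Hz, assumed heating frequency
n_co = n_cutoff(f)
f_uh = math.sqrt(f_pe(n_co / 2) ** 2 + f_ce(B) ** 2)  # UHR at half cut-off
print(f"f_ce     = {f_ce(B) / 1e9:.2f} GHz")
print(f"n_cutoff = {n_co:.2e} m^-3")
print(f"f_UH     = {f_uh / 1e9:.2f} GHz")
```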
Procedia PDF Downloads 95
52 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization
Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller
Abstract:
The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires a restriction of the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After calculating the operation of different energy systems in terms of the resulting final energy demands in simulation models in a first stage, the results serve as input for a second-stage MILP optimization, where the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures due to the efficiency of MILP solvers, but necessitates simplifying the building energy system operation.
Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach. Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization
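The prioritization problem described above can be illustrated with a toy budgeted-selection model: choose at most one modernization package per building so that emission savings are maximized within a yearly budget. A real pathway model would hand this to a MILP solver; exhaustive search keeps the sketch self-contained, and all costs and savings are invented for illustration.

```python
from itertools import product

# Toy version of the prioritization problem above: pick at most one
# modernization package per building per year, maximizing emission
# savings within a yearly budget. Exhaustive search stands in for a
# MILP solver; all numbers are invented for illustration.

measures = {                 # building -> [(name, cost in k-EUR, tCO2/yr saved)]
    "A": [("envelope", 80, 12), ("heat pump", 60, 18)],
    "B": [("envelope", 50, 6),  ("heat pump", 70, 20)],
    "C": [("heat pump", 40, 9)],
}
budget = 150                 # k-EUR available this year

buildings = list(measures)
options = [[None] + measures[b] for b in buildings]   # None = do nothing
best = (0, 0, {})            # (saving, cost, chosen measures)
for combo in product(*options):
    cost = sum(m[1] for m in combo if m)
    saving = sum(m[2] for m in combo if m)
    if cost <= budget and saving > best[0]:
        best = (saving, cost, {b: m[0] for b, m in zip(buildings, combo) if m})
print(f"best saving: {best[0]} tCO2/yr at {best[1]} k-EUR -> {best[2]}")
```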
Procedia PDF Downloads 32
51 SLAPP Suits: An Encroachment on Human Rights of a Global Proportion and What Can Be Done About It
Authors: Laura Lee Prather
Abstract:
A functioning democracy is defined by various characteristics, including freedom of speech, equality, human rights, the rule of law, and many more. Lawsuits brought to intimidate speakers, drain the resources of community members, and silence journalists and others who speak out in support of matters of public concern are an abuse of the legal system and an encroachment on human rights. The impact can have a broad chilling effect, deterring others from speaking out against abuse. This article aims to suggest ways to address this form of judicial harassment. In 1988, University of Denver professors George Pring and Penelope Canan coined the term “SLAPP” when they brought to light a troubling trend of people being sued for speaking out about matters of public concern. Their research demonstrated that thousands of people engaging in public debate and citizen involvement in government have been and will be the targets of multi-million-dollar lawsuits for the purpose of silencing them and dissuading others from speaking out in the future. SLAPP actions chill information and harm the public at large. Professors Pring and Canan catalogued a tsunami of SLAPP suits filed by public officials, real estate developers, and businessmen against environmentalists, consumers, women’s rights advocates, and more. SLAPPs are now seen in every region of the world as a means to intimidate people into silence and are viewed as a global affront to human rights. Anti-SLAPP laws are the antidote to SLAPP suits and, while commonplace in the United States, are only recently being considered in the EU and the UK.
This researcher studied more than thirty years of anti-SLAPP legislative policy in the U.S.; the call for evidence and the resultant EU Commission Anti-SLAPP Directive and Member State recommendations; and the call for evidence by the UK Ministry of Justice, the response, and the Model Anti-SLAPP Law presented to the UK Parliament; and also conducted dozens of interviews with NGOs throughout the EU, UK, and US to identify varying approaches to SLAPP lawsuits, public policy, and support for SLAPP victims. This paper identifies best practices taken from the US, EU, and UK that can be implemented globally to help combat SLAPPs by: (1) raising awareness about SLAPPs, how to identify them, and how to recognize habitual abusers of the court system; (2) engaging governments in the policy discussion on combatting SLAPPs and supporting SLAPP victims; (3) educating judges in recognizing SLAPPs and providing general training on encroachments on human rights; and (4) holding lawyers accountable for abusing the rule of law. Keywords: anti-SLAPP laws and policy, comparative media law and policy, EU Anti-SLAPP Directive and Member State recommendations, international human rights, freedom of expression
Procedia PDF Downloads 68
50 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River
Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang
Abstract:
The analysis and study of open-channel flow dynamics for river applications has been based on flow modelling using discrete numerical models built on hydrodynamic equations. The overall spatial characteristics of rivers, i.e. their length-to-depth-to-width ratios, generally allow one to disregard processes occurring in the vertical or transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and extrapolation of various scenarios. The Magdalena River in Colombia is a large river basin draining the country from south to north over 1,550 km, with an average slope of 0.0024 and an average width of 275 m. The river displays high water level fluctuation and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons. As the meander is evolving at a steady pace, repeated flooding has endangered a number of neighborhoods. This study was undertaken to correctly model the flow characteristics of the river in this region in order to evaluate various scenarios and provide decision makers with erosion control options and a forecasting tool. Two field campaigns were completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a RiverSurveyor ADCP. In addition, to characterize the erosion process occurring through the meander, extensive suspended-sediment and river-bed samples were retrieved, and soil borings were made along the banks. Based on a DEM from the digital ground mapping survey and the field data, a 2DH flow model was prepared using the Iber freeware, which is based on the finite volume method on an unstructured mesh. The calibration process was carried out by comparison against available historical data from a nearby hydrologic gauging station.
Although the model was able to effectively predict overall flow processes in the region, its spatial characteristics and the limitations of the hydrostatic pressure assumption did not allow for an accurate representation of erosion processes occurring over specific bank areas and dwellings. In particular, a significant helical flow has been observed through the meander. Furthermore, the rapidly changing channel cross section, a consequence of severe erosion, has hindered the model’s ability to provide decision makers with a valid, up-to-date planning tool. Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander
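For orientation, the kind of 1D estimate that underlies such a calibrated channel model can be sketched with Manning's equation. The width (275 m) and slope (0.0024) come from the abstract; the flow depth and roughness coefficient n are assumptions for illustration, not survey values.

```python
# Rough 1D estimate of the kind underlying the flow model above, using
# Manning's equation V = (1/n) * R^(2/3) * S^(1/2) for a wide
# rectangular channel. Depth and roughness n are assumed values.

def manning_discharge(width, depth, slope, n):
    area = width * depth                 # m^2, flow cross-section
    radius = area / (width + 2 * depth)  # m, hydraulic radius A / P
    velocity = radius ** (2 / 3) * slope ** 0.5 / n   # m/s
    return area * velocity               # m^3/s, Q = A * V

q = manning_discharge(width=275, depth=4.0, slope=0.0024, n=0.030)
print(f"estimated discharge: {q:.0f} m^3/s")
```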
Procedia PDF Downloads 318
49 Application of Acoustic Emissions Related to Drought Can Elicit Antioxidant Responses and Capsaicinoids Content in Chili Pepper Plants
Authors: Laura Helena Caicedo Lopez, Luis Miguel Contreras Medina, Ramon Gerardo Guevara Gonzales, Juan E. Andrade
Abstract:
In this study, we evaluated the effect of three different hydric stress conditions, low (LHS), medium (MHS), and high (HHS), on capsaicinoid content and enzyme regulation in C. annuum plants. Five main peaks were detected using a 2 Hz resolution laser vibrometer (Polytec-B&K). These peaks, or “characteristic frequencies”, were used as acoustic emission (AE) treatments, transforming these signals into audible sound with the frequency (Hz) content of each hydric stress. Capsaicinoids (CAPs) are the main secondary metabolites of chili pepper plants and are known to increase under hydric stress conditions or short drought periods. The AE treatments were applied at two plant stages. The first was the pre-anthesis stage, to evaluate the genes that encode enzymes responsible for diverse metabolic activities of C. annuum plants: the antioxidant responses peroxidase (POD) and superoxide dismutase (Mn-SOD); phenylalanine ammonia-lyase (PAL), involved in the biosynthesis of phenylpropanoid compounds; chalcone synthase (CHS), related to natural defense mechanisms; and the species-specific aquaporin (CAPIP-1), which regulates the flow of water into and out of cells. The second stage was at 40 days after flowering (DAF), to evaluate the biochemical effect of AEs related to hydric stress on capsaicinoid production. These two experiments were conducted to identify the molecular responses of C. annuum plants to AE and to determine whether AEs could elicit an increase in capsaicinoid content after one week of exposure to the AE treatments. The results show that all AE treatment signals (LHS, MHS, and HHS) were significantly different compared to the non-acoustic-emission control (NAE). The AEs also induced up-regulation of POD (~2.8, 2.9, and 3.6, respectively). The gene expression of the other antioxidant responses was treatment-dependent: the HHS induced overexpression of Mn-SOD (~0.23) and PAL (~0.33).
As well, the MHS only induced an up-regulation of the CHS gene (~0.63). On the other hand, the CAPIP-1 gene was down-regulated by all AE treatments, LHS, MHS, and HHS (~ -2.4, -0.43, and -6.4, respectively). Likewise, the down-regulation showed particularities depending on the treatment: LHS and MHS induced down-regulation of the SOD gene (~ -1.26 and -1.20, respectively) and of PAL (-4.36 and 2.05, respectively). Correspondingly, the LHS and HHS showed the same tendency in the CHS gene (~ -1.12 and -1.02, respectively). Regarding the elicitation effect of AE on capsaicinoid content, additional treatment controls were included: a white noise treatment (WN), to test the frequency selectivity of the signals, and a hydric-stressed group (HS), to compare the CAPs content. Our findings suggest that WN and NAE did not differ statistically. Conversely, HS and all AE treatments induced a significant increase of capsaicin (Cap) and dihydrocapsaicin (Dcap) after one week of treatment. Specifically, the HS plants showed an increase of 8.33 times compared to the NAE and WN treatments, and 1.4 times that of the MHS, which was the AE treatment with the largest induction of capsaicinoids among treatments (5.88) compared to the controls. Keywords: acoustic emission, capsaicinoids, elicitors, hydric stress, plant signaling
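The relative expression values quoted above are fold changes; one widely used way to obtain such numbers from qPCR data is the 2^-ΔΔCt method sketched here. The abstract does not state that this exact method was used, and the Ct values below are invented for illustration.

```python
# Hedged sketch of the 2^-ddCt relative-expression calculation, a common
# route to fold-change values like those quoted above. Ct values are
# invented; the study's actual quantification method is not stated.

def fold_change(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of a target gene (treated vs control),
    each normalized to a reference gene, via 2^-ddCt."""
    ddct = (ct_target_treat - ct_ref_treat) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-ddct)

def signed_regulation(fc):
    """Report fold change as positive (up) or negative (down)."""
    return fc if fc >= 1 else -1 / fc

fc = fold_change(ct_target_treat=22.0, ct_ref_treat=18.0,
                 ct_target_ctrl=24.0, ct_ref_ctrl=18.2)
print(f"fold change: {fc:.2f} ({signed_regulation(fc):+.2f})")
```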
Procedia PDF Downloads 169
48 Evaluating Energy Transition of a Complex of Buildings in a Historic Site of Rome toward Zero-Emissions for a Sustainable Future
Authors: Silvia Di Turi, Nicolandrea Calabrese, Francesca Caffari, Giulia Centi, Francesca Margiotta, Giovanni Murano, Laura Ronchetti, Paolo Signoretti, Lisa Volpe, Domenico Palladino
Abstract:
Recent European policies have set ambitious targets aimed at significantly reducing CO2 emissions by 2030, with a long-term vision of transforming existing buildings into Zero-Emission Buildings (ZEmB) by 2050. This vision represents a key point for the energy transition, as the building stock currently accounts for 36% of total energy consumption across Europe, mainly due to its poor energy performance. The challenge of Zero-Emission Buildings is particularly felt in Italy, where a significant number of buildings have historical significance or are situated within protected/constrained areas. Furthermore, an estimated 70% of the national building stock was built before 1976, indicating a widespread issue of poor energy performance. Addressing the energy inefficiency of these buildings is crucial to refining a comprehensive energy renovation approach aimed at facilitating their energy transition. In this framework, the current study focuses on analysing a challenging complex of buildings to be fully restored through significant energy renovation interventions. The goal is to recover these disused buildings, situated in a significant archaeological zone of Rome, contributing to the restoration and reintegration of this historically valuable site, while also offering insights useful for achieving zero-emission requirements for buildings within such contexts. In pursuit of these stringent zero-emission requirements, a comprehensive study was carried out to assess the complex of buildings, envisioning substantial renovation measures on the building envelope and plant systems and incorporating renewable energy solutions, always respecting and preserving the historic site. An energy audit of the complex of buildings was performed to define the actual energy consumption for each energy service, adopting hourly calculation methods.
Subsequently, significant energy renovation interventions on both the building envelope and the mechanical systems were examined, respecting the historical value and preservation of the site. These retrofit strategies were investigated with a threefold aim: 1) to recover the existing buildings while ensuring the energy efficiency of the whole complex; 2) to explore which solutions allow achieving and facilitating ZEmB status; 3) to balance the energy transition requirements with sustainability in order to preserve the historic value of the buildings and the site. This study has pointed out the potential and the technical challenges associated with implementing renovation solutions for such buildings, representing one of the first attempts towards realizing this ambitious target for this type of building. Keywords: energy conservation and transition, complex of buildings in historic site, zero-emission buildings, energy efficiency recovery
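The zero-emission balance implied by the ZEmB target can be sketched as yearly CO2 from each delivered energy carrier, offset by on-site renewable generation. All demands and emission factors below are assumptions for illustration; the study's own audit figures are not given in the abstract.

```python
# Minimal sketch of a yearly zero-emission balance check for a building
# complex: emissions from delivered energy minus the credit for on-site
# renewable generation. All numbers are invented for illustration.

emission_factors = {"electricity": 0.25, "gas": 0.20}   # tCO2 per MWh
delivered = {"electricity": 120, "gas": 0}              # MWh/yr after retrofit
pv_generation = 130                                     # MWh/yr of on-site PV

gross = sum(delivered[c] * emission_factors[c] for c in delivered)
offset = pv_generation * emission_factors["electricity"]
net = gross - offset
status = "ZEmB" if net <= 0 else "not ZEmB"
print(f"net emissions: {net:+.1f} tCO2/yr -> {status}")
```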
Procedia PDF Downloads 75
47 Applying Miniaturized Near-Infrared Technology for Commingled and Microplastic Waste Analysis
Authors: Monika Rani, Claudio Marchesi, Stefania Federici, Laura E. Depero
Abstract:
Degradation of the aquatic environment by plastic litter, especially microplastics (MPs), i.e., any water-insoluble solid plastic particle with its longest dimension between 1 µm and 1000 µm (= 1 mm), is an unfortunate indication of the advancement of the Anthropocene age on Earth. Microplastics formed by natural weathering processes are termed secondary microplastics, while those synthesized industrially are called primary microplastics. Their presence from the highest peaks to the deepest explored points of the oceans, and their resistance to biological and chemical decay, have adversely affected the environment, especially marine life. Even though the presence of MPs in the marine environment is well reported, a standardized and validated analytical protocol to sample, analyze, and quantify MPs is still under development and testing. Among characterization techniques, vibrational spectroscopic techniques are widely adopted in the field of polymers, and their ongoing miniaturization is on the way to revolutionizing the plastic recycling industry. In this scenario, the capability and feasibility of miniaturized near-infrared (MicroNIR) spectroscopy combined with chemometric tools were investigated for qualitative and quantitative analysis of urban plastic waste collected from a recycling plant and of microplastic mixtures fragmented in the lab. Based on the Resin Identification Code, 250 plastic samples were used for macroplastic analysis and to set up a library of polymers. Subsequently, MicroNIR spectra were analysed through the application of multivariate modelling. Principal Component Analysis (PCA) was used as an unsupervised tool to find trends within the data. After the exploratory PCA analysis, a supervised classification tool was applied in order to distinguish the different plastic classes, and a database containing the NIR spectra of the polymers was built.
For the microplastic analysis, the three most abundant polymers in plastic litter, PE, PP, and PS, were mechanically fragmented in the laboratory to micron size. Blends of these three microplastics were prepared following a designed ternary composition plot. After the exploratory PCA analysis, a quantitative Partial Least Squares Regression (PLSR) model allowed predicting the percentage of each microplastic in the mixtures. With a complete dataset of 63 compositions, the PLS model was calibrated with 42 data points and used to predict the composition of the 21 unknown mixtures of the test set. The advantage of the consolidated NIR chemometric approach lies in the quick evaluation of whether a sample is macro- or microplastic, contaminated or not, and coloured or not, with no sample pre-treatment. The technique can be used with larger sample volumes, allows on-site evaluation, and thus satisfies the need for a high-throughput strategy. Keywords: chemometrics, microNIR, microplastics, urban plastic waste
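The quantification idea behind the PLSR model can be illustrated with a simpler linear-mixing sketch: a mixture spectrum modelled as a weighted sum of the pure-polymer spectra, with the least-squares weights recovering the blend composition. The spectra below are invented, and the study used PLS regression on MicroNIR data rather than this direct unmixing.

```python
# Least-squares spectral unmixing as a stand-in for the PLSR model above.
# Pure-polymer "spectra" at 5 hypothetical wavelengths are invented.

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

pure = {  # absorbance of each polymer at 5 hypothetical wavelengths
    "PE": [0.9, 0.1, 0.3, 0.2, 0.5],
    "PP": [0.2, 0.8, 0.4, 0.1, 0.3],
    "PS": [0.1, 0.2, 0.9, 0.6, 0.1],
}
true_frac = {"PE": 0.5, "PP": 0.3, "PS": 0.2}
mixture = [sum(true_frac[p] * pure[p][k] for p in pure) for k in range(5)]

names = list(pure)
# Normal equations (S^T S) f = S^T y give the least-squares fractions
STS = [[sum(pure[a][k] * pure[b][k] for k in range(5)) for b in names]
       for a in names]
STy = [sum(pure[a][k] * mixture[k] for k in range(5)) for a in names]
fractions = solve3(STS, STy)
for name, frac in zip(names, fractions):
    print(f"{name}: {frac:.2f}")
```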
Procedia PDF Downloads 163
46 Cotton Fabrics Functionalized with Green and Commercial Ag Nanoparticles
Authors: Laura Gonzalez, Santiago Benavides, Martha Elena Londono, Ana Elisa Casas, Adriana Restrepo-Osorio
Abstract:
Cotton products are sensitive to microorganisms due to their ability to retain moisture, which can cause discoloration, loss of mechanical properties, or foul odor; this also represents a health risk to users. Research has therefore been carried out to impart antibacterial properties to textiles using different strategies, including the use of silver nanoparticles (AgNPs). Antibacterial behavior can be affected by laundering, which reduces its effectiveness. In addition, the environmental impact of synthetic antibacterial agents has motivated the search for new and more ecological ways to produce AgNPs. The aims of this work are to determine the antibacterial activity of cotton fabric functionalized with green (G) and commercial (C) AgNPs after twenty washing cycles, and to evaluate morphological and color changes. A plain-weave cotton fabric suitable for dyeing and two AgNP solutions were used: C, a commercial product, and G, produced by an ecological method. Both solutions, at a concentration of 0.5 mM, were impregnated onto the cotton fabric without stabilizer, at a liquor-to-fabric ratio of 1:20, under constant agitation for 30 min, and then dried at 70 °C for 10 min. The samples were then subjected to twenty washing cycles with phosphate-free detergent, simulated in an agitated flask at 150 rpm, then centrifuged and tumble-dried. Antibacterial activity against E. coli and S. aureus was determined with the Kirby-Bauer test; the results were recorded photographically, establishing the inhibition halo before and after the washing cycles, and the tests were conducted in triplicate. Scanning electron microscopy (SEM) was used to observe the morphologies of the cotton fabric and the treated samples. The color changes of the cotton fabrics in relation to the untreated samples were obtained by spectrophotometer analysis.
The images reveal the presence of an inhibition halo in the samples treated with both C and G AgNPs solutions, even after twenty washing cycles, indicating good antibacterial activity and washing durability, with a tendency towards better results against S. aureus. The presence of AgNPs on the surface of the cotton fibers, and the associated morphological changes, were observed by SEM before and after the washing cycles. The natural color of the cotton fiber was significantly altered by both antibacterial solutions: according to the colorimetric results, the samples treated with C tended towards yellowing, while the samples modified with G tended towards red-yellowing. Cotton fabrics treated with AgNPs C and G from 0.5 mM solutions exhibited excellent antimicrobial activity against E. coli and S. aureus with good laundering durability. The surface of the cotton fibers was modified by the deposited AgNPs C and G and their agglomerates. There are significant changes in the natural color of the cotton fabric due to the deposition of AgNPs C and G, which were maintained after the laundering process.
Keywords: antibacterial property, cotton fabric, fastness to wash, Kirby-Bauer test, silver nanoparticles
Procedia PDF Downloads 246
45 Effectiveness, Safety, and Tolerability Profile of Stribild® in HIV-1-infected Patients in the Clinical Setting
Authors: Heiko Jessen, Laura Tanus, Slobodan Ruzicic
Abstract:
Objectives: The efficacy of Stribild®, an integrase strand transfer inhibitor (INSTI)-based single-tablet regimen (STR), has been evaluated in randomized clinical trials, where it demonstrated durable, sustained suppression of HIV-1 RNA levels. However, differences in monitoring frequency, selection bias, and the profile of patients enrolled in the trials may all result in divergent efficacy of this regimen in routine clinical settings. The aim of this study was to assess the virologic outcomes, safety, and tolerability profile of Stribild® in a routine clinical setting. Methods: This was a retrospective monocentric analysis of HIV-1-infected patients who started with or were switched to Stribild®. Virological failure (VF) was defined as confirmed HIV-RNA > 50 copies/ml. The minimum time of follow-up was 24 weeks. The percentage of patients remaining free of therapeutic failure was estimated using the time-to-loss-of-virologic-response (TLOVR) algorithm, by intent-to-treat analysis. Results: We analyzed the data of 197 patients (56 ART-naïve and 141 treatment-experienced) who fulfilled the inclusion criteria. The majority (95.9%) of patients were male. The median time of HIV infection at baseline was 2 months in treatment-naïve and 70 months in treatment-experienced patients. The median time [IQR] under ART in treatment-experienced patients was 37 months. Among the treatment-experienced patients, 27.0% had already been treated with a regimen consisting of two NRTIs and one INSTI, and 18.4% of them had experienced a VF. The median time [IQR] of virological suppression prior to therapy with Stribild® in the treatment-experienced patients was 10 months [0-27]. At the end of follow-up (median 33 months), 87.3% (95% CI, 83.5-91.2) of treatment-naïve and 80.3% (95% CI, 75.8-84.8) of treatment-experienced patients remained free of therapeutic failure.
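The TLOVR algorithm counts a patient as a responder until a confirmed event occurs. A minimal sketch of the confirmed-rebound rule only, assuming two consecutive measurements above threshold define failure (the full TLOVR definition also handles discontinuation and missing data, which this toy omits):

```python
def tlovr_rebound_index(viral_loads, threshold=50):
    """Return the index of the first confirmed virologic rebound, defined
    here as two consecutive HIV-RNA values above `threshold` copies/ml;
    None means the patient remains a responder (simplified TLOVR logic)."""
    for i in range(len(viral_loads) - 1):
        if viral_loads[i] > threshold and viral_loads[i + 1] > threshold:
            return i
    return None

print(tlovr_rebound_index([20, 30, 60, 40, 80, 120]))  # transient blip at 60 not failure → 4
print(tlovr_rebound_index([20, 20, 40]))               # sustained suppression → None
```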
Considering only treatment-experienced patients with baseline VL < 50 copies/ml, 83.0% (95% CI, 78.5-87.5) remained free of therapeutic failure. A total of 17 patients stopped treatment with Stribild®: 5.4% (3/56) of the treatment-naïve and 9.9% (14/141) of the treatment-experienced patients. Stribild® therapy was discontinued because of VF in 2 (1.0%) patients, loss to follow-up in 4 (2.0%), and drug-drug interactions in 2 (1.0%). Adverse events were the reason for switching from Stribild® in 7 (3.6%) patients, and a further 2 (1.0%) patients decided personally to switch. The most frequently observed adverse events were gastrointestinal side effects (20.0%), headache (8%), rash events (7%), and dizziness (6%). In two patients, we observed the emergence of novel resistance mutations in the integrase gene: N155H evolved in one patient and resulted in VF; in another patient, S119R evolved either during or shortly after switching from therapy with Stribild®. In one further patient with VF, two novel mutations in the RT gene (V106I/M and M184V) were observed when compared to the historical genotypic test result, although it is not clear whether they evolved during or before the switch to Stribild®. Conclusions: The effectiveness of Stribild® for treatment-naïve patients was consistent with data obtained in clinical trials. The safety and tolerability profile, as well as the resistance development, confirmed the clinical efficacy of Stribild® in a daily practice setting.
Keywords: ART, HIV, integrase inhibitor, stribild
Procedia PDF Downloads 284
44 Enterprises and Social Impact: A Review of the Changing Landscape
Authors: Suzhou Wei, Isobel Cunningham, Laura Bradley McCauley
Abstract:
Social enterprises play a significant role in resolving social issues in the modern world. In contrast to traditional commercial businesses, their main goal is to address social concerns rather than primarily to maximize profits. This phenomenon in entrepreneurship is presenting new opportunities and different operating models, resulting in modified approaches to measuring success beyond traditional market share and margins. This paper explores social enterprises to clarify their roles and approaches in addressing grand challenges related to social issues. In doing so, it analyses the key differences between traditional businesses and social enterprises, such as their operating models and value propositions, to understand their contributions to society. The research presented in this paper responds to calls for research to better understand social enterprises and entrepreneurship, but also to explore the dynamics between profit-driven and socially oriented entities in delivering mutual benefits. Examining the features of commercial business, the paper suggests their primary focus is profit generation, economic growth, and innovation. Beyond the pursuit of profit, it highlights the critical role of innovation typical of successful businesses; this, in turn, promotes economic growth, creates job opportunities, and makes a major positive impact on people's lives. In contrast, the motivations upon which social enterprises are founded relate to a commitment to addressing social problems rather than maximizing profits. These entities combine entrepreneurial principles with a commitment to deliver social impact and address grand challenges, creating a distinctive category within the broader enterprise and entrepreneurship landscape. The motivations for establishing a social enterprise are diverse, encompassing personal fulfillment, a genuine desire to contribute to society, and a focus on achieving impactful accomplishments.
The paper also discusses the collaboration between commercial businesses and social enterprises, which is viewed as a strategic approach to addressing grand challenges more comprehensively and effectively. Finally, this paper highlights the evolving and diverse expectations placed on all businesses to actively contribute to society beyond profit-making. We conclude that there is an unrealized and underdeveloped potential for collaboration between commercial businesses and social enterprises to produce greater and long-lasting social impacts. Overall, the aim of this research is to encourage more investigation of the complex relationship between economic and social objectives and contributions through a better understanding of how and why businesses might address social issues. Ultimately, the paper positions itself as a tool for understanding the evolving landscape of business engagement with social issues and advocates for collaborative efforts to achieve sustainable and impactful outcomes.
Keywords: business, social enterprises, collaboration, social issues, motivations
Procedia PDF Downloads 50
43 Fiberoptic Intubation Skills Training Improves Emergency Medicine Resident Comfort Using Modality
Authors: Nicholus M. Warstadt, Andres D. Mallipudi, Oluwadamilola Idowu, Joshua Rodriguez, Madison M. Hunt, Soma Pathak, Laura P. Weber
Abstract:
Endotracheal intubation is a core procedure performed by emergency physicians. The procedure is high risk, and failure results in substantial morbidity and mortality. Fiberoptic intubation (FOI) is the standard of care in difficult airway protocols, yet no widespread practice exists for training emergency medicine (EM) residents in the technical acquisition of FOI skills. Simulation on mannequins is commonly utilized to teach advanced airway techniques. As part of a program to introduce FOI into our ED, residents received hands-on training in FOI as part of our weekly resident education conference. We hypothesized that prior to the hands-on training, residents had little experience with FOI and were uncomfortable using the fiberoptic modality. We further hypothesized that resident comfort with FOI would increase following the training. The educational intervention consisted of two hours of focused airway teaching and skills acquisition for PGY 1-4 residents. One hour was dedicated to four case-based learning stations focusing on standard, pediatric, facial trauma, and burn airways. Direct, video, and fiberoptic airway equipment was available to use at the residents’ discretion to intubate mannequins at each station. The second hour involved direct instructor supervision and immediate feedback during deliberate practice of FOI on a mannequin. Prior to the hands-on training, a pre-survey was sent via email to all EM residents at NYU Grossman School of Medicine. The pre-survey asked how many FOI residents had performed in the ED, OR, and on a mannequin. The pre-survey and a post-survey asked residents to rate their comfort with FOI on a 5-point Likert scale ("extremely uncomfortable", "somewhat uncomfortable", "neither comfortable nor uncomfortable", "somewhat comfortable", and "extremely comfortable"). The post-survey was administered on site immediately following the training.
A chi-square test of independence was calculated comparing self-reported resident comfort on the pre- and post-surveys (α = 0.05). Thirty-six of a total of 70 residents (51.4%) completed the pre-survey. Of the pre-survey respondents, 34 residents (94.4%) had performed 0, 1 resident (2.8%) had performed 1, and 1 resident (2.8%) had performed 2 FOI in the ED. Twenty-five residents (69.4%) had performed 0, 6 residents (16.7%) had performed 1, 2 residents (5.6%) had performed 2, 1 resident (2.8%) had performed 3, and 2 residents (5.6%) had performed 4 FOI in the OR. Seven residents (19.4%) had performed 0, and 16 residents (44.4%) had performed 5 or more FOI on a mannequin. Twenty-nine residents (41.4%) attended the hands-on training, and 27 of the 29 (93.1%) completed the post-survey. Self-reported resident comfort with FOI increased significantly from the pre-survey to the post-survey (p = 0.00034). Twenty-one of 27 residents (77.8%) reported being “somewhat comfortable” or “extremely comfortable” with FOI on the post-survey, compared to 9 of 35 residents (25.8%) on the pre-survey. We show that dedicated FOI training is associated with increased learner comfort with such techniques. Further directions include studying technical competency, skill retention, translation to direct patient care, and the optimal frequency and methodology of future FOI education.
Keywords: airway, emergency medicine, fiberoptic intubation, medical simulation, skill acquisition
Procedia PDF Downloads 180
42 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets using an OpenScience Energy System Optimization Model
Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi
Abstract:
Hydrogen is expected to become an undisputed player in the ecological transition over the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called “hard-to-abate” sectors, including the industrial production of iron and steel, glass, refineries, and heavy-duty transport. In this regard, Italy, within the framework of decarbonization plans for the whole European Union, has been considering a wider use of hydrogen as an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options for the development pathway of the future Italian energy system, in order to meet the decarbonization targets established by the Paris Agreement and the European Green Deal, and to provide a techno-economic analysis of the required asset alternatives. To accomplish this objective, the energy system optimization model TEMOA-Italy is used; it is based on the open-source platform TEMOA and was developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy includes two different scenarios to be compared with a business-as-usual one, which considers the application of current policies over a time horizon up to 2050. The studied scenarios are based on up-to-date hydrogen-related targets and the planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040, and 2050, without any other specific target. The second one (inspired by the national objectives on the development of the sector) promotes the deployment of the hydrogen value chain.
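At its core, a model like TEMOA chooses the least-cost technology mix that satisfies demand under constraints such as an emissions cap. A toy, purely illustrative stand-in (two technologies, one period, made-up numbers; the real model optimizes thousands of variables over multi-decade horizons):

```python
def least_cost_mix(demand, cost_h2, cost_ng, ef_ng, co2_cap, steps=1000):
    """Grid-search the cheapest split of `demand` between hydrogen (assumed
    zero direct CO2) and natural gas, subject to an emissions cap.
    Returns (total_cost, hydrogen_share) or None if infeasible."""
    best = None
    for i in range(steps + 1):
        h2 = demand * i / steps
        ng = demand - h2
        if ng * ef_ng > co2_cap:          # emissions constraint violated
            continue
        cost = h2 * cost_h2 + ng * cost_ng
        if best is None or cost < best[0]:
            best = (cost, h2 / demand)
    return best

# 100 energy units; hydrogen twice as costly as gas; cap admits only half the gas emissions
cost, h2_share = least_cost_mix(100, cost_h2=4.0, cost_ng=2.0, ef_ng=0.2, co2_cap=10.0)
print(round(h2_share, 2), round(cost))  # → 0.5 300
```

The cap binds, so the optimum uses exactly as much (cheaper) gas as the emissions constraint allows: the same mechanism by which a tightening CO2 target pulls hydrogen into the mix in the full model.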
These scenarios provide feedback about the applications hydrogen could have in the Italian energy system, including transport, industry, and synfuel production. Furthermore, even the decarbonization scenario in which hydrogen production is not imposed makes use of this energy vector, showing the necessity of its exploitation in order to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for the achievement of the Italian objectives is clarified, revealing possible improvements at various steps of the decarbonization pathway, for which Carbon Capture and Utilization technologies appear to be a fundamental element. In line with the European Commission open science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.
Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA
Procedia PDF Downloads 72
41 Training During Emergency Response to Build Resiliency in Water, Sanitation, and Hygiene
Authors: Lee Boudreau, Ash Kumar Khaitu, Laura A. S. MacDonald
Abstract:
In April 2015, a magnitude 7.8 earthquake struck Nepal, killing, injuring, and displacing thousands of people. The earthquake also damaged water and sanitation service networks, leading to a high risk of diarrheal disease and the associated negative health impacts. In response to the disaster, the Environment and Public Health Organization (ENPHO), a Kathmandu-based non-governmental organization, worked with the Centre for Affordable Water and Sanitation Technology (CAWST), a Canadian education, training and consulting organization, to develop two training programs to educate volunteers on water, sanitation, and hygiene (WASH) needs. The first training program was intended for acute response, with the second focusing on longer term recovery. A key focus was to equip the volunteers with the knowledge and skills to formulate useful WASH advice in the unanticipated circumstances they would encounter when working in affected areas. Within the first two weeks of the disaster, a two-day acute response training was developed, which focused on enabling volunteers to educate those affected by the disaster about local WASH issues, their link to health, and their increased importance immediately following emergency situations. Between March and October 2015, a total of 19 training events took place, with over 470 volunteers trained. The trained volunteers distributed hygiene kits and liquid chlorine for household water treatment. They also facilitated health messaging and WASH awareness activities in affected communities. A three-day recovery phase training was also developed and has been delivered to volunteers in Nepal since October 2015. This training focused on WASH issues during the recovery and reconstruction phases. The interventions and recommendations in the recovery phase training focus on long-term WASH solutions, and so form a link between emergency relief strategies and long-term development goals. 
ENPHO has trained 226 volunteers during the recovery phase, with training ongoing as of April 2016. In the aftermath of the earthquake, ENPHO found that its existing pool of volunteers was more than willing to help those in their communities who were most in need. By training these and new volunteers, ENPHO was able to reach many more communities in the immediate aftermath of the disaster; together they reached 11 of the 14 earthquake-affected districts. The development of the training materials by ENPHO and CAWST was a highly collaborative and iterative process, which enabled the materials to be produced within a short response time. By training volunteers on basic WASH topics during both the immediate response and the recovery phase, ENPHO and CAWST have been able to link immediate emergency relief to long-term developmental goals. While the recovery phase training continues in Nepal, CAWST is planning to decontextualize the training used in both phases so that it can be applied to other emergency situations in the future. The training materials will become part of the open content materials available on CAWST’s WASH Resources website.
Keywords: water and sanitation, emergency response, education and training, building resilience
Procedia PDF Downloads 305
40 Detection of Ice Formation Processes Using Multiple High Order Ultrasonic Guided Wave Modes
Authors: Regina Rekuviene, Vykintas Samaitis, Liudas Mažeika, Audrius Jankauskas, Virginija Jankauskaitė, Laura Gegeckienė, Abdolali Sadaghiani, Shaghayegh Saeidiharzand
Abstract:
Icing causes significant damage to aviation and renewable energy installations. Air-conditioning and refrigeration systems, wind turbine blades, and airplane and helicopter blades often suffer from icing, which causes severe energy losses and impairs aerodynamic performance. The icing process is a complex phenomenon with many different causes and types; icing mechanisms, distributions, and patterns remain active research topics. The adhesion strength between ice and surfaces differs across icing environments, which makes the task of anti-icing very challenging. Techniques for different icing environments must satisfy different demands and requirements (e.g., efficiency, light weight, low power consumption, low maintenance and manufacturing costs, reliable operation). Most methods are oriented toward a particular sector, and adapting them to, or even suggesting them for, other areas is problematic. These methods often use various technologies and have different specifications, sometimes with no clear indication of their efficiency. There are two major groups of anti-icing methods: passive and active. Active techniques have high efficiency but also quite high energy consumption, and they require intervention in the structure’s design; the vast majority also require specific knowledge and personnel skills. The main effect of passive methods (ice-phobic, superhydrophobic surfaces) is to delay ice formation and growth or to reduce the adhesion strength between the ice and the surface. These methods are time-consuming and depend on forecasting; they can be applied on small surfaces only for specific targets, and most are non-biodegradable (except for antifreeze proteins). There is, however, quite promising work on ultrasonic ice mitigation methods that employ ultrasonic guided waves (UGW).
These methods have the advantages of low energy consumption, low cost, light weight, and easy replacement and maintenance. However, fundamental knowledge of ultrasonic de-icing methodology is still limited. The objective of this work was to identify ice formation processes and their progress by employing the ultrasonic guided wave technique. Throughout this research, a universal set-up for acoustic measurement of ice formation under real conditions (temperature range from +24 °C to −23 °C) was developed. Ultrasonic measurements were performed using high-frequency 5 MHz transducers in a pitch-catch configuration. Wave modes suitable for the detection of ice formation on a copper surface were selected, and the interaction between the selected wave modes and the ice formation processes was investigated. It was found that the selected wave modes are sensitive to temperature changes, and it was demonstrated that the proposed ultrasonic technique can be successfully used for the detection of ice layer formation on a metal surface.
Keywords: ice formation processes, ultrasonic GW, detection of ice formation, ultrasonic testing
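A common way to turn pitch-catch guided-wave measurements into an ice indicator is to compare each received waveform against an ice-free baseline. A minimal sketch (the waveform samples and the alarm threshold are hypothetical; the study's actual signal processing is not detailed in the abstract):

```python
import math

def rms_deviation(baseline, signal):
    """RMS deviation of a received waveform from an ice-free baseline;
    growth of this value over time suggests ice forming on the surface."""
    return math.sqrt(sum((b - s) ** 2 for b, s in zip(baseline, signal)) / len(signal))

# Hypothetical digitized pitch-catch waveforms (arbitrary units)
baseline = [0.0, 0.8, 1.0, 0.3, -0.5, -0.9, -0.2, 0.1]
iced = [0.0, 0.5, 0.7, 0.1, -0.3, -0.6, -0.1, 0.1]  # attenuated/shifted by an ice layer

dev = rms_deviation(baseline, iced)
print(dev > 0.1)  # → True: deviation exceeds the (hypothetical) ice-alarm threshold
```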
Procedia PDF Downloads 63
39 Measuring Organizational Resiliency for Flood Response in Thailand
Authors: Sudha Arlikatti, Laura Siebeneck, Simon A. Andrew
Abstract:
The objective of this research is to measure organizational resiliency through four attributes, namely rapidity, redundancy, resourcefulness, and robustness, and to provide recommendations for resiliency building in flood risk communities. The research was conducted in Thailand following the severe floods of 2011 triggered by Tropical Storm Nock-ten. The floods lasted over eight months, starting in June 2011, affecting 65 of the country’s 76 provinces and over 12 million people. Funding from a US National Science Foundation grant was used to collect ephemeral data in rural (Ayutthaya), suburban (Pathum Thani), and urban (Bangkok) provinces of Thailand. Semi-structured face-to-face interviews were conducted in Thai with 44 contacts from public, private, and non-profit organizations, including universities, schools, automobile companies, vendors, tourist agencies, monks from temples, faith-based organizations, and government agencies. Multiple triangulation was used to analyze the data: selective themes identified from the qualitative data were validated against quantitative data and news media reports. This helped to obtain a more comprehensive view of how organizations in different geographic settings varied in their understanding of what enhanced or hindered their resilience, and consequently in their speed and capacity to respond. The findings suggest that the urban province of Bangkok scored highest in resourcefulness, rapidity of response, robustness, and ability to rebound. This is not surprising considering that it is the country’s capital and the seat of the government, economic, military, and tourism sectors. However, contrary to expectations, all 44 respondents noted that the rural province of Ayutthaya was the fastest of the three to recover.
Its organizations scored high on redundancy and rapidity of response due to the strength of social networks, a flood disaster subculture arising from annual flooding, and the help provided by monks and faith-based organizations. Organizations in the suburban community of Pathum Thani scored lowest on rapidity of response and resourcefulness due to limited and ambiguous warnings, a lack of prior flood experience, and perceptions that government flood protection works, such as sandbagging, favored the capital city of Bangkok over them. Such a micro-level, mixed-methods examination of organizational resilience in the rural, suburban, and urban areas of a country has merit in offering a nuanced understanding of the importance of disaster subcultures and religious norms for resilience. This can help refocus attention on the strengths of social networks and social capital for flood mitigation.
Keywords: disaster subculture, flood response, organizational resilience, Thailand floods, religious beliefs and response, social capital and disasters
Procedia PDF Downloads 159
38 Quantitative Analysis of Traffic Dynamics and Violation Patterns Triggered by Cruise Ship Tourism in Victoria, British Columbia
Authors: Muhammad Qasim, Laura Minet
Abstract:
Victoria, British Columbia, Canada, is a major cruise ship destination, attracting over 600,000 tourists annually. Residents of the James Bay neighborhood, home to the Ogden Point cruise terminal, have expressed concerns about the impacts of cruise ship activity on local traffic, air pollution, and safety compliance. This study evaluates the effects of cruise ship-induced traffic in James Bay, focusing on traffic flow intensification, density surges, changes in traffic mix, and speeding violations. To achieve these objectives, traffic data were collected in James Bay during two key periods: May, before the peak cruise season, and August, during full cruise operations. Three Miovision cameras captured the vehicular traffic mix at strategic entry points, while nine traffic counters monitored traffic distribution and speeding violations across the network. The traffic data indicated an average volume of 308 vehicles per hour during peak cruise times in May, compared to 116 vehicles per hour when no ships were in port. Preliminary analyses revealed a significant intensification of traffic flow during cruise ship "hoteling hours," with a volume increase of approximately 10% per cruise ship arrival. A notable 86% surge in taxi presence was observed on days with three cruise ships in port, indicating a substantial shift in traffic composition, particularly near the cruise terminal. The number of tourist buses escalated from zero in May to 32 in August, significantly altering traffic dynamics within the neighborhood. The period between 8 pm and 11 pm saw the largest increases in traffic volume, especially when three ships were docked. Higher vehicle volumes were associated with a rise in speed violations, although this pattern was inconsistent across areas: speeding violations were more frequent on roads with lower traffic density, while roads with higher density experienced fewer violations, likely because congestion reduces the opportunity to speed.
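The headline volume figures imply a large relative increase; using the hourly counts reported in the abstract (116 veh/h with no ship in port, 308 veh/h at peak):

```python
def pct_increase(before, after):
    """Percent increase from `before` to `after`."""
    return 100.0 * (after - before) / before

no_ship, peak = 116, 308  # vehicles per hour, from the reported counts
print(round(pct_increase(no_ship, peak), 1))  # → 165.5
```

That is, peak cruise-time traffic runs at roughly 2.7 times the no-ship baseline.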
PTV VISUM software was utilized for fuzzy distribution analysis and to visualize traffic distribution across the study area, including an assessment of the Level of Service on major roads before and during the cruise ship season. This analysis identified the areas most affected by cruise ship-induced traffic, providing a detailed understanding of the impact on specific parts of the transportation network. These findings underscore the significant influence of cruise ship activity on traffic dynamics in Victoria, particularly during peak periods when multiple ships are in port. The study highlights the need for targeted traffic management strategies to mitigate the adverse effects of increased traffic flow, changes in traffic mix, and speed violations, thereby enhancing road safety in the James Bay neighborhood. Further research will focus on detailed emissions estimation to fully understand the environmental impacts of cruise ship activity in Victoria.
Keywords: cruise ship tourism, air quality, traffic violations, transport dynamics, pollution
Procedia PDF Downloads 22
37 The Home as Memory Palace: Three Case Studies of Artistic Representations of the Relationship between Individual and Collective Memory and the Home
Authors: Laura M. F. Bertens
Abstract:
The houses we inhabit are important containers of memory. As homes, they take on meaning for those who live inside, and memories of family life become intimately tied up with rooms, windows, and gardens. Each new family creates a new layer of meaning, resulting in a palimpsest of family memory. These houses function quite literally as memory palaces, as a walk through a childhood home will show: each room conjures up images of past events. Over time, these personal memories become woven together with the cultural memory of countries and generations. The importance of the home is a central theme in art, and several contemporary artists have a special interest in the relationship between memory and the home. This paper analyses three case studies in order to gain a deeper understanding of the ways in which the home functions and feels like a memory palace, both on an individual and on a collective, cultural level. Close reading of the artworks is performed at the theoretical intersection of Art History and Cultural Memory Studies. The first case study concerns works from the exhibition Mnemosyne by the artist duo Anne and Patrick Poirier. These works combine interests in architecture, archaeology, and psychology. Models of cities and fantastical architectural designs resemble physical structures (such as the brain), architectural metaphors used in representing the concept of memory (such as the memory palace), and archaeological remains, essential to our shared cultural memories. Secondly, works by Do Ho Suh will help us understand the relationship between the home and memory on a far more personal level; outlines of rooms from his former homes, made of colourful, transparent fabric and combined into new structures, provide an insight into the way these spaces retain individual memories. The spaces have been emptied out, and only the husks remain. Although the remnants of walls, light switches, doors, electricity outlets, etc.
are standard, mass-produced elements found in many homes and devoid of inherent meaning, together they remind us of the emotional significance attached to the muscle memory of spaces we once inhabited. The third case study concerns an exhibition in a house put up for sale on the Dutch real estate website Funda. The house was built in 1933 by a Jewish family fleeing from Germany, and the father and son were later deported and killed. The artists Anne van As and CA Wertheim have used the history and memories of the house as a starting point for an exhibition called (T)huis, a combination of the Dutch words for home and house. This case study illustrates the way houses become containers of memories; each new family ‘resets’ the meaning of a house, but traces of earlier memories remain. The exhibition allows us to explore the transition of individual memories into shared cultural memory, in this case of WWII. Taken together, the analyses provide a deeper understanding of different facets of the relationship between the home and memory, both individual and collective, and the ways in which art can represent these.
Keywords: Anne and Patrick Poirier, cultural memory, Do Ho Suh, home, memory palace
Procedia PDF Downloads 158
36 Understanding the Lithiation/Delithiation Mechanism of Si₁₋ₓGeₓ Alloys
Authors: Laura C. Loaiza, Elodie Salager, Nicolas Louvain, Athmane Boulaoued, Antonella Iadecola, Patrik Johansson, Lorenzo Stievano, Vincent Seznec, Laure Monconduit
Abstract:
Lithium-ion batteries (LIBs) have an important place among energy storage devices due to their high capacity and good cyclability. However, advances in portable and transportation applications have pushed the research towards new horizons, and development is today hampered by, among other factors, the capacity of the electrodes employed. Silicon and germanium are among the candidate modern anode materials, as they undergo alloying reactions with lithium while delivering high capacities. It has been demonstrated that silicon in its highest lithiated state can deliver up to ten times the capacity of graphite (372 mAh/g): 4200 mAh/g for Li₂₂Si₅ and 3579 mAh/g for Li₁₅Si₄. Germanium, on the other hand, presents a capacity of 1384 mAh/g for Li₁₅Ge₄, together with better electronic conductivity and Li-ion diffusivity than Si. Nonetheless, the commercialization potential of Ge is limited by its cost. The synergetic effect of Si₁₋ₓGeₓ alloys has been proven: the capacity is increased compared to Ge-rich electrodes, and the capacity retention is improved compared to Si-rich electrodes, although the exact performance of this type of electrode depends on factors such as specific capacity, C-rates, and cost. There are several reports on various formulations of Si₁₋ₓGeₓ alloys with promising LIB anode performance, but most work has been performed on complex nanostructures whose synthesis implies high cost. In the present work, we studied the electrochemical mechanism of the Si₀.₅Ge₀.₅ alloy as a realistic micron-sized electrode formulation using carboxymethyl cellulose (CMC) as the binder. A combination of in situ and operando techniques was employed to investigate the structural evolution of Si₀.₅Ge₀.₅ during the lithiation and delithiation processes: powder X-ray diffraction (XRD), X-ray absorption spectroscopy (XAS), Raman spectroscopy, and ⁷Li solid-state nuclear magnetic resonance (NMR) spectroscopy.
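The theoretical capacities quoted here follow directly from Faraday's law: Q = xF/(3.6·M) mAh/g for a host of molar mass M alloying with x lithium atoms per host atom. A quick check against the abstract's figures (molar masses from standard atomic weights):

```python
F = 96485.0  # Faraday constant, C/mol

def theoretical_capacity(x_li, molar_mass):
    """Theoretical gravimetric capacity (mAh/g) of a host alloying with
    x_li lithium atoms per host atom; mass basis is the host material only."""
    return x_li * F / (3.6 * molar_mass)

# Li15Si4 and Li15Ge4 both correspond to x = 15/4 = 3.75 Li per host atom
print(round(theoretical_capacity(3.75, 28.085)))  # Si → 3579 mAh/g
print(round(theoretical_capacity(3.75, 72.63)))   # Ge → 1384 mAh/g
```

Both values reproduce the capacities cited for Li₁₅Si₄ and Li₁₅Ge₄.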
The results present a comprehensive view of the structural modifications induced by the lithiation/delithiation processes. Amorphization of Si₀.₅Ge₀.₅ was observed at the beginning of discharge. Further lithiation induces the formation of a-Liₓ(Si/Ge) intermediates and the crystallization of Li₁₅(Si₀.₅Ge₀.₅)₄ at the end of the discharge. At very low voltages, a reversible process of overlithiation and formation of Li₁₅₊δ(Si₀.₅Ge₀.₅)₄ was identified and related to a structural evolution of Li₁₅(Si₀.₅Ge₀.₅)₄. Upon charge, the c-Li₁₅(Si₀.₅Ge₀.₅)₄ was transformed into a-Liₓ(Si/Ge) intermediates; at the end of the process, an amorphous phase assigned to a-SiₓGeᵧ was recovered. It was thereby demonstrated that Si and Ge are collectively active throughout cycling: upon discharge with the formation of a ternary Li₁₅(Si₀.₅Ge₀.₅)₄ phase (with an overlithiation step), and upon charge with the rebuilding of the a-Si-Ge phase. This process is undoubtedly behind the enhanced performance of Si₀.₅Ge₀.₅ compared to a physical mixture of Si and Ge.
Keywords: lithium ion battery, silicon germanium anode, in situ characterization, X-ray diffraction
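As a side note for the reader, the theoretical capacities quoted in the abstract (372, 4200, 3579 and 1384 mAh/g) follow directly from Faraday's law: the gravimetric capacity per gram of host material is n·F/(3.6·M), with n lithium atoms stored per formula unit and M the molar mass of the host. A minimal sketch, using standard atomic weights:

```python
F = 96485.0  # Faraday constant, C/mol

def theoretical_capacity(n_li, host_molar_mass):
    """Gravimetric capacity in mAh per gram of host: n_li * F / (3.6 * M)."""
    return n_li * F / (3.6 * host_molar_mass)

M_SI, M_GE, M_C = 28.0855, 72.630, 12.011  # molar masses, g/mol

print(round(theoretical_capacity(22, 5 * M_SI)))  # Li22Si5 -> 4199
print(round(theoretical_capacity(15, 4 * M_SI)))  # Li15Si4 -> 3579
print(round(theoretical_capacity(15, 4 * M_GE)))  # Li15Ge4 -> 1384
print(round(theoretical_capacity(1, 6 * M_C)))    # graphite LiC6 -> 372
```

The computed values reproduce all four figures given in the abstract to within rounding.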
Procedia PDF Downloads 284
35 Experimental and Simulation Results for the Removal of H2S from Biogas by Means of Sodium Hydroxide in Structured Packed Columns
Authors: Hamadi Cherif, Christophe Coquelet, Paolo Stringari, Denis Clodic, Laura Pellegrini, Stefania Moioli, Stefano Langè
Abstract:
Biogas is a promising technology: it can be used as a vehicle fuel, for heat and electricity production, or injected into the national gas grid. It is storable, transportable, not intermittent, and substitutable for fossil fuels. This gas, produced from wastewater treatment by degradation of organic matter under anaerobic conditions, is mainly composed of methane and carbon dioxide. To be used as a renewable fuel, biogas, whose energy comes only from methane, must be purified of carbon dioxide and other impurities such as water vapor, siloxanes and hydrogen sulfide. Purification for this application particularly requires the removal of hydrogen sulfide, which negatively affects the operation and lifetime of equipment, especially pumps, heat exchangers and pipes, by causing corrosion. Several methods are available to eliminate hydrogen sulfide from biogas; herein, reactive absorption in a structured packed column by means of chemical absorption in aqueous sodium hydroxide solutions is considered. This study is based on simulations using Aspen Plus™ V8.0, and comparisons are made with data from an industrial pilot plant treating 85 Nm³/h of biogas containing about 30 ppm of hydrogen sulfide. The rate-based model approach has been used for the simulations in order to determine separation efficiencies under different operating conditions. To describe vapor-liquid equilibrium, a γ/ϕ approach has been adopted: the Electrolyte NRTL model represents non-idealities in the liquid phase, while the Redlich-Kwong equation of state is used for the vapor phase. In order to validate the thermodynamic model, Henry's law constants of each compound in water have been verified against experimental data. Default values available in Aspen Plus™ V8.0 for pure-component properties such as heat capacity, density, viscosity and surface tension have also been verified. 
The results obtained for physical and chemical properties are in good agreement with experimental data. The reactions involved in the process have been studied rigorously: equilibrium constants for the equilibrium reactions and the rate constant for the kinetically controlled reaction between carbon dioxide and the hydroxide ion have been checked. Simulations of the pilot-plant purification section show the influence of low temperatures, sodium hydroxide concentration and hydrodynamic parameters on the selective absorption of hydrogen sulfide. These results show an acceptable degree of accuracy when compared with the experimental data obtained from the pilot plant, and they also show the great efficiency of sodium hydroxide for the removal of hydrogen sulfide: the content of this compound in the gas leaving the column is under 1 ppm.
Keywords: biogas, hydrogen sulfide, reactive absorption, sodium hydroxide, structured packed column
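The Henry's-law validation step mentioned above can be illustrated with a back-of-the-envelope estimate. The sketch below is not the Aspen Plus model; it only shows the order of magnitude of physically dissolved H2S at the quoted 30 ppm inlet, under the assumption of a Henry's constant of roughly 0.1 mol/(L·atm) for H2S in water near 25 °C (a literature-level figure used here for illustration). The chemical reaction with NaOH drives absorption far beyond this physical limit, which is the point of the reactive process.

```python
# Hedged order-of-magnitude sketch; K_H and P_TOTAL are assumed values.
K_H = 0.1          # mol/(L·atm), H2S in water near 25 degC (assumption)
P_TOTAL = 1.0      # atm, assumed column operating pressure
y_h2s = 30e-6      # 30 ppm H2S in the biogas feed (from the abstract)

p_h2s = y_h2s * P_TOTAL      # partial pressure of H2S, atm
c_physical = K_H * p_h2s     # physically dissolved H2S, mol/L
print(f"{c_physical:.1e} mol/L")  # ~3.0e-06 mol/L without reaction
```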
Procedia PDF Downloads 353
34 Li2O Loss of Lithium Niobate Nanocrystals during High-Energy Ball-Milling
Authors: Laura Kocsor, Laszlo Peter, Laszlo Kovacs, Zsolt Kis
Abstract:
The aim of our research is to prepare rare-earth-doped lithium niobate (LiNbO3) nanocrystals having only a few dopant ions in the focal point of an exciting laser beam. These samples will be used to achieve individual addressing of the dopant ions by light beams in a confocal microscope setup. One method for the preparation of nanocrystalline materials is to reduce the particle size by mechanical grinding, and high-energy ball-milling has been used in several works to produce nano lithium niobate. It was previously reported that dry high-energy ball-milling of lithium niobate in a shaker mill results in partial reduction of the material, which leads to a balanced formation of bipolarons and polarons yielding a gray color, together with oxygen release and Li2O segregation on the open surfaces. In the present work, we focus on preparing LiNbO3 nanocrystals by high-energy ball-milling using a Fritsch Pulverisette 7 planetary mill. Every ball-milling process was carried out in a zirconia vial with zirconia balls of different sizes (from 3 mm down to 0.1 mm), wet grinding with water, with grinding times of less than an hour. By gradually decreasing the ball size to 0.1 mm, an average particle size of about 10 nm could be obtained, as determined by dynamic light scattering and verified by scanning electron microscopy. High-energy ball-milling resulted in sample darkening, evidenced by optical absorption spectroscopy measurements, indicating that the material underwent partial reduction. The unwanted lithium oxide loss decreases the Li/Nb ratio in the crystal, strongly influencing the spectroscopic properties of lithium niobate. Zirconia contamination was found in the ground samples, as proven by energy-dispersive X-ray spectroscopy measurements; however, it cannot be explained by the hardness properties of the materials involved in the ball-milling process. 
It can be understood by taking into account the presence of lithium hydroxide, formed from the segregated lithium oxide and water during the ball-milling process, through chemically induced abrasion. The quantity of the segregated Li2O was measured by coulometric titration. During the wet milling process in the planetary mill, the lithium oxide loss was found to increase linearly in the early phase of milling, after which the Li2O loss saturates. This change coincides with the disappearance of the relatively large particles until a relatively narrow size distribution is achieved, in accord with the dynamic light scattering measurements. With a 3 mm ball size and a 1100 rpm rotation rate, the mean particle size achieved is 100 nm, and the total Li2O loss is about 1.2 wt.% of the original LiNbO3. Further investigations have been carried out to minimize the Li2O segregation during the ball-milling process. Since the Li2O loss was observed to increase with the growing total surface of the particles, the influence of the ball-milling parameters on its quantity has also been studied.
Keywords: high-energy ball-milling, lithium niobate, mechanochemical reaction, nanocrystals
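The linear-then-saturating Li2O loss described above is the classic signature of a surface-limited process and can be captured, purely as an illustrative fit and not as the authors' model, by an exponential saturation law loss(t) = L_max·(1 − e^(−t/τ)). The 1.2 wt.% figure from the abstract is used as the assumed saturation level, and the time constant is a hypothetical placeholder:

```python
import math

def li2o_loss(t_min, loss_max=1.2, tau_min=15.0):
    """Illustrative saturation law for Li2O loss (wt.%) vs milling time.

    loss_max = 1.2 wt.% is taken from the abstract; tau_min is a
    hypothetical time constant chosen for illustration only.
    """
    return loss_max * (1.0 - math.exp(-t_min / tau_min))

# early phase is approximately linear with slope loss_max / tau
print(round(li2o_loss(1), 3))   # ~0.077 wt.% (near-linear regime)
print(round(li2o_loss(60), 3))  # ~1.178 wt.% (close to saturation)
```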
Procedia PDF Downloads 134
33 Rhizobium leguminosarum: Selecting Strain and Exploring Delivery Systems for White Clover
Authors: Laura Villamizar, David Wright, Claudia Baena, Marie Foxwell, Maureen O'Callaghan
Abstract:
Leguminous crops can be self-sufficient for their nitrogen requirements when their roots are nodulated with an effective Rhizobium strain; for this reason, seed or soil inoculation is practiced worldwide to ensure nodulation and nitrogen fixation in grain and forage legumes. The most widely used method of applying commercially available inoculants is via peat cultures coated onto seeds prior to sowing. In general, rhizobia survive well in peat, but some species die rapidly after inoculation onto seeds. The development of improved formulation methodology is essential to achieve extended persistence of rhizobia on seeds and improved efficacy. Formulations can be solid or liquid: the most popular solid formulations or delivery systems are wettable powders (WP), water-dispersible granules (WG) and granules (DG), while liquid formulations are generally suspension concentrates (SC) or emulsifiable concentrates (EC). In New Zealand, R. leguminosarum bv. trifolii strain TA1 has been used as a commercial inoculant for white clover over wide areas for many years. Seed inoculation is carried out by mixing the seeds with inoculated peat, some adherents and lime, but rhizobial populations on stored seeds decline over several weeks due to a number of factors, including desiccation and antibacterial compounds produced by the seeds. In order to develop a more stable and suitable delivery system to incorporate rhizobia into pastures, two strains of R. leguminosarum (TA1 and CC275e) and several formulations and processes were explored (peat granules, self-sticky peat for seed coating, emulsions, and a powder containing spray-dried microcapsules). Emulsions prepared with fresh broth of strain TA1 were very unstable under storage and after seed inoculation. Formulations in which inoculated peat was used as the active ingredient were significantly more stable than those prepared with fresh broth. 
The strain CC275e was more tolerant of the stress conditions generated during formulation and seed storage. Peat granules and peat-inoculated seeds using strain CC275e maintained an acceptable loading of 10⁸ CFU/g of granules or 10⁵ CFU/g of seeds, respectively, during six months of storage at room temperature. Strain CC275e inoculated on peat was also microencapsulated in a natural biopolymer by spray drying; after optimizing the operational conditions, microparticles containing 10⁷ CFU/g with a mean particle size between 10 and 30 micrometers were obtained. Survival of rhizobia during storage of the microcapsules is being assessed. The development of a stable product depends on selecting an active ingredient (microorganism) robust enough to tolerate the adverse conditions generated during formulation, storage and commercialization, and after its use in the field. However, the design and development of an adequate formulation, using compatible ingredients, together with optimization of the formulation process and selection of the appropriate delivery system, is possibly the best tool to overcome the poor survival of rhizobia and provide farmers with better-quality inoculants.
Keywords: formulation, Rhizobium leguminosarum, storage stability, white clover
Procedia PDF Downloads 149
32 Diagnosis, Treatment, and Prognosis in Cutaneous Anaplastic Lymphoma Kinase-Positive Anaplastic Large Cell Lymphoma: A Narrative Review Apropos of a Case
Authors: Laura Gleason, Sahithi Talasila, Lauren Banner, Ladan Afifi, Neda Nikbakht
Abstract:
Primary cutaneous anaplastic large cell lymphoma (pcALCL) accounts for 9% of all cutaneous T-cell lymphomas. pcALCL is classically characterized as a solitary papulonodule that often enlarges, ulcerates and can be locally destructive, but it exhibits an overall indolent course, with 5-year survival estimated at 90%. Distinguishing pcALCL from systemic ALCL (sALCL) is essential, as sALCL confers a poorer prognosis, with average 5-year survival of 40-50%. Although extremely rare, there have been several cases of ALK-positive ALCL diagnosed on skin biopsy without evidence of systemic involvement, which poses several challenges in the classification, prognostication, treatment and follow-up of these patients. Objectives: We present a case of cutaneous ALK-positive ALCL without evidence of systemic involvement, together with a narrative review of the literature, to further characterize ALK-positive ALCL limited to the skin as a distinct variant with a unique presentation, history and prognosis. A 30-year-old woman presented for evaluation of an erythematous-violaceous papule present on her right chest for two months. With the development of multifocal disease and persistent lymphadenopathy, a bone marrow biopsy and a lymph node excisional biopsy were performed to assess for systemic disease. Both biopsies were unrevealing. The patient was counseled on pursuing systemic therapy consisting of brentuximab, cyclophosphamide, doxorubicin and prednisone, given the concern for sALCL. Apropos of the patient, we searched the English literature for clinically evident cutaneous ALK-positive ALCL cases with and without systemic involvement. Risk factors, such as tumor location, number, size, ALK localization, ALK translocations and recurrence, were evaluated in cases of cutaneous ALK-positive ALCL. The majority of patients with cutaneous ALK-positive ALCL did not progress to systemic disease. 
The majority of adult cases that progressed to systemic disease had recurring skin lesions and cytoplasmic localization of ALK; ALK translocations did not influence disease progression. Mean time to disease progression was 16.7 months, and significant mortality (50%) was observed among the cases that progressed to systemic disease. Pediatric cases did not exhibit a trend similar to adult cases. In both the adult and pediatric cases, a subset of cutaneous-limited ALK-positive ALCL was treated with chemotherapy, and none of the cases treated with chemotherapy progressed to systemic disease. Apropos of an ALK-positive ALCL patient with clinically cutaneous-limited disease in the histologic presence of systemic markers, we discuss the literature data, highlighting the crucial issues involved in developing a clinical strategy to approach this rare subtype of ALCL. Physicians need to be aware of the overall spectrum of ALCL, including cutaneous-limited disease, systemic disease, disease with NPM-ALK translocation, disease with ALK and EMA positivity, and disease with skin recurrence.
Keywords: anaplastic large cell lymphoma, systemic, cutaneous, anaplastic lymphoma kinase, ALK, ALCL, sALCL, pcALCL, cALCL
Procedia PDF Downloads 82
31 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method
Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola
Abstract:
The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimates are used together with thermal models to predict battery temperature in operation and to adapt the design of the battery pack and cooling system to these thermal needs, guaranteeing safety and correct operation. In the present work, a comparison is presented between the use of a heat flux sensor (HFS) for indirect measurement of heat losses in a cell and the widely used, simplified version of Bernardi’s equation. First, a Li-ion cell is thermally characterized with an HFS to measure the thermal parameters used in a first-order lumped thermal model: the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static tests (no current flowing through the cell) and dynamic tests (current flowing through the cell) are conducted in which the HFS measures the heat exchanged between the cell and the ambient, so that the thermal capacity and resistance, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature and HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows comparison between the generated heat predicted by Bernardi’s equation and the HFS measurements. Data post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient rather than the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using both the HFS and Bernardi’s-equation total heat generation) and compared with experimental temperature data (measured with a T-type thermocouple). The work closes with a critical review of the results obtained and the possible reasons for mismatch. 
The results show that indirectly measuring the heat generation with the HFS gives a more precise estimation than Bernardi’s simplified equation. On the one hand, when using Bernardi’s simplified equation, the estimated heat generation differs from cell temperature measurements during charges at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation, and therefore on the open-circuit voltage calculation (as it is SoC-dependent). On the other hand, when indirectly measuring the heat generation with the HFS, the resulting error is at most 0.28 °C in the temperature prediction, in contrast with 1.38 °C for Bernardi’s simplified equation. This illustrates the limitations of Bernardi’s simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi’s equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi’s equation accounts for no losses after the charging or discharging current is cut; however, the HFS measurement shows that after cutting the current the cell continues generating heat for some time, increasing the error of Bernardi’s equation.
Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization
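For readers unfamiliar with the two models being compared, the sketch below pairs the simplified Bernardi equation in its irreversible-heat-only form, Q = I·(OCV − V), with a first-order lumped thermal model of the kind described in the abstract. The entropic term is neglected here, as in the simplified equation, and all parameter values are hypothetical placeholders rather than the cell characterized in the paper:

```python
def bernardi_heat(current_a, v_terminal, v_ocv):
    """Simplified Bernardi equation: irreversible heat only, in W.

    The entropic term I*T*dOCV/dT is neglected, which is one source of
    the mismatch discussed in the abstract.
    """
    return current_a * (v_ocv - v_terminal)

def simulate_temperature(q_gen_w, t_amb, c_th, r_th, dt, steps):
    """First-order lumped model: C_th * dT/dt = Q - (T - T_amb) / R_th."""
    t = t_amb  # start at ambient temperature
    for _ in range(steps):
        t += dt * (q_gen_w - (t - t_amb) / r_th) / c_th
    return t

# Hypothetical values: 10 A discharge with 0.1 V overpotential -> 1 W of heat
q = bernardi_heat(current_a=10.0, v_terminal=3.4, v_ocv=3.5)
# With C_th = 50 J/K and R_th = 5 K/W, steady state is T_amb + Q*R_th = 30 degC
print(round(simulate_temperature(q, t_amb=25.0, c_th=50.0, r_th=5.0,
                                 dt=1.0, steps=20000), 2))  # -> 30.0
```

The steady-state check (T_amb + Q·R_th) is a convenient sanity test when fitting the equivalent thermal resistance from static HFS measurements.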
Procedia PDF Downloads 388
30 A Practical Methodology for Evaluating Water, Sanitation and Hygiene Education and Training Programs
Authors: Brittany E. Coff, Tommy K. K. Ngai, Laura A. S. MacDonald
Abstract:
Many organizations in the Water, Sanitation and Hygiene (WASH) sector provide education and training in order to increase the effectiveness of their WASH interventions. A key challenge for these organizations is measuring how well their education and training activities contribute to WASH improvements. It is crucial for implementers to understand the returns on their education and training activities so that they can improve and make better progress toward the desired outcomes. The Centre for Affordable Water and Sanitation Technology (CAWST) has developed a methodology for evaluating education and training activities, so that organizations can understand the effectiveness of their WASH activities and improve accordingly; this paper presents CAWST’s development and piloting of that evaluation methodology. CAWST developed the methodology through a series of research partnerships, followed by staged field pilots in Nepal, Peru, Ethiopia and Haiti. During the research partnerships, CAWST collaborated with universities in the UK and Canada to review a range of available evaluation frameworks, investigate existing practices for evaluating education activities, and develop a draft methodology for evaluating education programs. The draft methodology was then piloted in three separate studies to evaluate the WASH education programs of CAWST and its partners. Each pilot study evaluated education programs in different locations, with different objectives, and at different times within the project cycles. The evaluations in Nepal and Peru were conducted in 2013 and investigated the outcomes and impacts of CAWST’s WASH education services in those countries over the previous 5-10 years. In 2014, the methodology was applied to complete a rigorous evaluation of a 3-day WASH Awareness training program in Ethiopia, one year after the training had occurred. 
In 2015, the methodology was applied in Haiti to complete a rapid assessment of a Community Health Promotion program, which informed the development of an improved training program. After each pilot evaluation, the methodology was reviewed and improved. A key concept within the methodology is that, in order for training activities to lead to improved WASH practices at the community level, it is not enough for participants to acquire new knowledge and skills; they must also apply the new skills and influence the behavior of others following the training. The steps of the methodology are: development of a Theory of Change for the education program, application of the Kirkpatrick model to develop indicators, development of data collection tools, data collection, data analysis and interpretation, and use of the findings for improvement. The methodology was applied in different ways in each pilot and was found to be practical to apply and adapt to the needs of each case. It was useful in gathering specific information on the outcomes of the education and training activities and in developing recommendations for program improvement. Based on the results of the pilot studies, CAWST is developing a set of support materials to enable other WASH implementers to apply the methodology. By using this methodology, more WASH organizations will be able to understand the outcomes and impacts of their training activities, leading to higher-quality education programs and improved WASH outcomes.
Keywords: education and training, capacity building, evaluation, water and sanitation
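To make the indicator-development step concrete, the fragment below sketches how the four Kirkpatrick levels could be mapped to example indicators for a WASH training. The indicator wordings are hypothetical illustrations for this sketch, not CAWST's actual instruments:

```python
# Hypothetical mapping of Kirkpatrick levels to WASH training indicators;
# the indicator texts are illustrative, not taken from CAWST's methodology.
kirkpatrick_indicators = {
    1: ("Reaction", "Participants rate the training as relevant to their work"),
    2: ("Learning", "Participants can list the key steps of household water treatment"),
    3: ("Behaviour", "Participants apply the new skills and train others after the course"),
    4: ("Results", "Improved WASH practices observed at the community level"),
}

for level, (name, indicator) in sorted(kirkpatrick_indicators.items()):
    print(f"Level {level} ({name}): {indicator}")
```

Levels 3 and 4 correspond to the methodology's key concept that acquiring knowledge is not enough: skills must be applied and passed on before community-level outcomes can be expected.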
Procedia PDF Downloads 309
29 Fake News Domination and Threats on Democratic Systems
Authors: Laura Irimies, Cosmin Irimies
Abstract:
The public space all over the world is currently confronted with an aggressive assault of fake news that has lately impacted public agenda setting, collective decisions and social attitudes. Top leaders routinely call out mainstream news as “fake news”, and public opinion grows more confused. “Fake news” is generally defined as false, often sensational, information disseminated under the guise of news reporting; it was declared word of the year 2017 by Collins Dictionary and has been one of the most debated socio-political topics of recent years. Websites which, deliberately or not, publish misleading information are often shared on social media, which essentially increases their reach and influence. According to international reports, exposure to fake news is an undeniable reality all over the world: exposure to completely invented information reaches 31 percent in the US, and it is even higher in Eastern European countries such as Hungary (42%) and Romania (38%) and in Mediterranean countries such as Greece (44%) or Turkey (49%), while it is lower in Northern and Western European countries: Germany (9%), Denmark (9%) and Holland (10%). While the study of fake news (its mechanisms and effects) is still in its infancy, it has become truly relevant, as the phenomenon seems to have a growing impact on democratic systems. Studies conducted by the European Commission show that 83% of respondents, out of a total of 26,576 interviewees, consider the existence of news that misrepresents reality a threat to democracy. Studies recently conducted at Arizona State University show that people with higher education can more easily spot fake headlines, but over 30 percent of them can still be trapped by fake information. 
To mention only one of the most recent situations in Romania, fake news issues and hidden-agenda suspicions related to the massive and extremely violent public demonstrations held on August 10th, 2018, with strong participation of the Romanian diaspora, were widely reflected in the international media and generated serious debates within the European Commission. Against this framework, the study raises four main research questions: 1. Is fake news a problem or just a natural consequence of mainstream media decline and the abundance of sources of information? 2. What are the implications for democracy? 3. Can fake news be controlled without restricting fundamental human rights? 4. How could the public be properly educated to detect fake news? The research uses mostly qualitative but also quantitative methods: content analysis of studies, websites and media content, official reports, and interviews. The study will demonstrate the real threat that fake news represents, as well as the need for proper media-literacy education, and it will draw basic guidelines for developing a new and essential skill, that of detecting fake news, in a society overwhelmed by sources of information that constantly roll out massive amounts of content, increasing the risk of misinformation and leading to inadequate public decisions that could affect democratic stability.
Keywords: agenda setting, democracy, fake news, journalism, media literacy
Procedia PDF Downloads 128
28 Structural and Functional Correlates of Reaction Time Variability in a Large Sample of Healthy Adolescents and Adolescents with ADHD Symptoms
Authors: Laura O’Halloran, Zhipeng Cao, Clare M. Kelly, Hugh Garavan, Robert Whelan
Abstract:
Reaction time (RT) variability on cognitive tasks provides an index of the efficiency of executive control processes (e.g., attention and inhibitory control) and is considered a hallmark of clinical disorders such as attention-deficit/hyperactivity disorder (ADHD). Increased RT variability is associated with structural and functional brain differences in children and adults with various clinical disorders, as well as with poorer task-performance accuracy. Furthermore, the strength of functional connectivity across various brain networks, such as the negative relationship between the task-negative default mode network and task-positive attentional networks, has been found to reflect differences in RT variability. Although RT variability may provide an index of attentional efficiency, as well as being a useful indicator of neurological impairment, the brain substrates associated with it remain relatively poorly defined, particularly in healthy samples. Method: First, we used the intra-individual coefficient of variation (ICV) of “Go” responses on the Stop Signal Task as an index of RT variability. We then examined the functional and structural neural correlates of ICV in a large sample of 14-year-old healthy adolescents (n=1719). Of these, a subset had elevated symptoms of ADHD (n=80) and was compared to a matched non-symptomatic control group (n=80). Brain activity during successful and unsuccessful inhibitions and gray matter volume were compared with the ICV. A mediation analysis was conducted to examine whether specific brain regions mediated the relationship between ADHD symptoms and ICV. Lastly, we looked at functional connectivity across various brain networks and quantified both positive and negative correlations during “Go” responses on the Stop Signal Task. 
Results: The brain data revealed that higher ICV was associated with increased structural and functional brain activation in the precentral gyrus, both in the whole sample and in adolescents with ADHD symptoms. Lower ICV was associated with lower activation in the anterior cingulate cortex (ACC) and medial frontal gyrus in the whole sample and in the control group. Furthermore, our results indicated that activation in the precentral gyrus (Brodmann Area 4) mediated the relationship between ADHD symptoms and behavioural ICV. Conclusion: This is the first study to investigate the functional and structural correlates of ICV collectively in a large adolescent sample. Our findings demonstrate a concurrent increase in brain structure and function within task-active prefrontal networks as a function of increased RT variability. Furthermore, structural and functional activation patterns in the ACC and medial frontal gyrus play a role in optimizing top-down control in order to maintain task performance. Our results also evidenced clear differences in brain morphometry between adolescents with symptoms of ADHD but without clinical diagnosis and typically developing controls. These findings shed light on the specific functional and structural brain regions implicated in ICV and yield insights into effective cognitive control in healthy individuals and in clinical groups.
Keywords: ADHD, fMRI, reaction-time variability, default mode, functional connectivity
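For reference, the ICV used as the dependent measure above is simply the standard deviation of an individual's "Go" reaction times divided by their mean. A minimal sketch (the RT values are hypothetical):

```python
import statistics

def icv(reaction_times_ms):
    """Intra-individual coefficient of variation: SD(RT) / mean(RT)."""
    return statistics.stdev(reaction_times_ms) / statistics.mean(reaction_times_ms)

# Hypothetical "Go"-trial reaction times (ms) for one participant
rts = [412, 388, 455, 501, 397, 430, 476, 389]
print(round(icv(rts), 3))  # higher values suggest less efficient executive control
```

Because ICV is normalized by the mean, it allows RT variability to be compared across individuals who differ in overall response speed.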
Procedia PDF Downloads 255
27 Modelling the Art Historical Canon: The Use of Dynamic Computer Models in Deconstructing the Canon
Authors: Laura M. F. Bertens
Abstract:
There is a long tradition of visually representing the art historical canon, in schematic overviews and diagrams. This is indicative of the desire for scientific, ‘objective’ knowledge of the kind (seemingly) produced in the natural sciences. These diagrams will, however, always retain an element of subjectivity and the modelling methods colour our perception of the represented information. In recent decades visualisations of art historical data, such as hand-drawn diagrams in textbooks, have been extended to include digital, computational tools. These tools significantly increase modelling strength and functionality. As such, they might be used to deconstruct and amend the very problem caused by traditional visualisations of the canon. In this paper, the use of digital tools for modelling the art historical canon is studied, in order to draw attention to the artificial nature of the static models that art historians are presented with in textbooks and lectures, as well as to explore the potential of digital, dynamic tools in creating new models. To study the way diagrams of the canon mediate the represented information, two modelling methods have been used on two case studies of existing diagrams. The tree diagram Stammbaum der neudeutschen Kunst (1823) by Ferdinand Olivier has been translated to a social network using the program Visone, and the famous flow chart Cubism and Abstract Art (1936) by Alfred Barr has been translated to an ontological model using Protégé Ontology Editor. The implications of the modelling decisions have been analysed in an art historical context. The aim of this project has been twofold. On the one hand the translation process makes explicit the design choices in the original diagrams, which reflect hidden assumptions about the Western canon. 
Ways of organizing data (for instance, ordering art according to artist) have come to feel natural and neutral, while implicit biases and the historically uneven distribution of power have resulted in the underrepresentation of groups of artists. Over the last decades, scholars from fields such as Feminist Studies, Postcolonial Studies and Gender Studies have considered this problem and tried to remedy it. The translation presented here adds to this deconstruction by defamiliarizing the traditional models and analysing the process of reconstructing new models, step by step, taking into account theoretical critiques of the canon, such as the feminist perspective discussed by Griselda Pollock, amongst others. On the other hand, the project has served as a pilot study for the use of digital modelling tools in creating dynamic visualisations of the canon for education and museum purposes. Dynamic computer models introduce functionalities that allow new ways of ordering and visualising the artworks in the canon. As such, they could form a powerful tool in the training of new art historians, introducing a broader and more diverse view of the traditional canon. Although modelling will always imply a simplification, and therefore a distortion, of reality, new modelling techniques can help us get a better sense of the limitations of earlier models and can provide new perspectives on already established knowledge.
Keywords: canon, ontological modelling, Protege Ontology Editor, social network modelling, Visone
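As an illustration of the first translation step (tree or flow diagram to network), the sketch below encodes a tiny fragment of influence relations of the kind found in Barr's chart as a directed graph in plain Python. The study itself used Visone; this fragment only shows the underlying data structure, and the edge list is a simplified illustration rather than a faithful transcription of the 1936 diagram:

```python
# Simplified illustration of an art-historical influence network; the actual
# study used Visone. Edges are a reduced fragment inspired by Barr's chart,
# not a faithful transcription of it.
influences = {
    "Cezanne": ["Cubism"],
    "Seurat": ["Cubism"],
    "Gauguin": ["Fauvism"],
    "Cubism": ["Abstract Art"],
    "Fauvism": ["Abstract Art"],
    "Abstract Art": [],
}

# In-degree: how many artists/movements feed into each node. In a network
# model, such measures make the diagram's implicit hierarchy explicit.
in_degree = {node: 0 for node in influences}
for targets in influences.values():
    for target in targets:
        in_degree[target] += 1

print(in_degree["Abstract Art"])  # 2: reached via both Cubism and Fauvism
```

Once the diagram is held as data rather than as a fixed image, it can be reordered, filtered or extended, which is precisely the dynamic functionality the paper argues for.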
Procedia PDF Downloads 126